Can Artificial Intelligence Chatbot Assistants Provide the Same Quality and Empathy as Human Doctors? A Study Examines
The use of artificial intelligence (AI) chatbot assistants to provide healthcare support has become increasingly popular in recent years. In an effort to reduce the workload of healthcare professionals, AI chatbot assistants are being used to provide high-quality and empathetic responses to patients’ healthcare messages. But can they really provide the same level of quality and empathy as human doctors?
A recent study published in JAMA Internal Medicine attempted to answer this question by comparing the responses of an AI chatbot assistant (ChatGPT) to those of physicians in response to questions asked by patients on a public social media platform. The study found that the AI chatbot responses were rated as significantly higher quality and significantly more empathetic than the physician responses.
The study used a database of questions from a public social media platform to randomly select 195 exchanges, each consisting of a unique patient question and a unique physician answer. Comparing the two sets of responses revealed that physician responses were, on average, significantly shorter than chatbot responses. About 94% of the selected exchanges comprised a single patient question with a single physician response; the remainder had two separate physician responses to a single patient question.
The evaluators (a team of licensed healthcare professionals) who analyzed the selected exchanges preferred the chatbot response over the physician response in 78% of the 585 evaluations. Overall, they rated chatbot responses as significantly higher quality than physician responses. Responses were scored on a five-point Likert scale: very poor, poor, acceptable, good, or very good. On average, chatbot responses were rated better than good, whereas physician responses were rated acceptable. Responses rated below acceptable quality were 10 times more prevalent among physicians, while responses rated good or very good were 3 times more prevalent for the chatbot.
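The headline figures above fit together arithmetically. A minimal sketch (not the study's actual analysis code) showing how the 195 exchanges relate to the 585 evaluations and the 78% preference rate; the three-evaluators-per-exchange figure is an inference from these numbers, not stated in this summary:

```python
# Reconstructing the study's headline counts from the figures quoted above.
n_exchanges = 195     # patient question / physician answer pairs
n_evaluations = 585   # total evaluations by the healthcare-professional team

# 585 evaluations over 195 exchanges implies each exchange was scored
# independently by three evaluators (an assumption consistent with the
# numbers reported here).
evaluators_per_exchange = n_evaluations // n_exchanges
print(evaluators_per_exchange)  # 3

# Evaluators preferred the chatbot response in 78% of evaluations.
chatbot_preferred = round(0.78 * n_evaluations)
print(chatbot_preferred)  # 456 of 585 evaluations
```

This makes explicit that the 78% preference figure is counted over individual evaluations, not over the 195 exchanges themselves.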
The evaluators also rated chatbot responses as significantly more empathetic than physician responses: physician responses were 41% less empathetic than chatbot responses. In addition, responses rated less than slightly empathetic were 5 times more prevalent among physicians, while responses rated empathetic or very empathetic were 9 times more prevalent for the chatbot.
Based on these findings, the authors suggest that AI chatbot assistants could be adopted in clinical settings for electronic messaging, provided that chatbot-generated messages are reviewed and edited by physicians to improve accuracy and catch false or fabricated information. High-quality, empathetic chatbot-generated responses could help answer patients' healthcare queries quickly, reducing unnecessary clinic visits and preserving resources for the patients who need them most. Such responses might also improve patient outcomes by increasing treatment adherence and compliance and reducing missed appointments.
Overall, the study showed that AI chatbot assistants can provide high-quality, empathetic responses to patients' healthcare messages, in this evaluation matching or exceeding the quality and empathy of the physician responses. However, further research is needed to evaluate how AI assistants might help clinicians respond to patient questions.