Is AI better than doctors at answering patients’ questions?


The study was conducted using 195 randomly selected medical questions posted to Reddit r/AskDocs, an online social media forum where users can post medical questions and verified health care professionals submit answers.

Is AI better than doctors at answering patients’ questions? | Image Credit: © Rokas - stock.adobe.com.


Although it’s only a few months old, OpenAI’s GPT-4 chatbot already hints at tantalizing possibilities for improving efficiency in health care delivery. Results of a recent study suggest one such possibility: using the technology to help respond to patients’ medical questions.

The study was conducted using 195 randomly selected medical questions posted to Reddit r/AskDocs, an online social media forum where users can post medical questions and verified health care professionals submit answers. The authors entered the questions into the GPT-4 chatbot, then had a group of health care professionals compare the answers the chatbot generated with those provided on the r/AskDocs forum.

Evaluators were asked to choose which response they thought was better based on two categories: “the quality of information provided” and “the empathy or bedside manner provided.” For the former they could choose from responses that included ‘very poor,’ ‘poor,’ ‘acceptable,’ ‘good,’ and ‘very good.’

Responses for the latter were ‘not empathetic,’ ‘slightly empathetic,’ ‘moderately empathetic,’ ‘empathetic,’ and ‘very empathetic.’ The researchers then ordered mean outcomes on a 1 to 5 scale and compared those of the chatbot to those of the physicians.
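As a rough illustration of the scoring scheme described above, converting evaluators’ Likert labels to a 1-to-5 scale and averaging them might look like the sketch below. The label-to-number mapping and the sample ratings are assumptions for illustration, not the study’s actual coding or data.

```python
# Illustrative sketch of the 1-5 Likert scoring described in the article.
# The mapping and the sample ratings are assumptions, not study data.

QUALITY_SCALE = {
    "very poor": 1, "poor": 2, "acceptable": 3, "good": 4, "very good": 5,
}
EMPATHY_SCALE = {
    "not empathetic": 1, "slightly empathetic": 2, "moderately empathetic": 3,
    "empathetic": 4, "very empathetic": 5,
}

def mean_score(labels, scale):
    """Map each evaluator label to its numeric value and average the results."""
    return sum(scale[label] for label in labels) / len(labels)

# Hypothetical evaluator ratings for one question's two responses.
chatbot_quality = ["good", "very good", "good"]
physician_quality = ["acceptable", "good", "poor"]

print(mean_score(chatbot_quality, QUALITY_SCALE))    # mean of 4, 5, 4
print(mean_score(physician_quality, QUALITY_SCALE))  # mean of 3, 4, 2
```

The same averaging, applied per question and per category across all evaluators, yields the kind of mean ratings the study reports.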

The results showed that evaluators preferred the chatbot over the physician responses in 78.6% of their overall evaluations. Broken down by category, the chatbot responses received an average rating of 4.13 for quality, between ‘good’ and ‘very good,’ compared with 3.26, or ‘acceptable,’ for physicians. For the empathy category, chatbot responses received an average rating of 3.65, or ‘empathetic,’ while those of physicians were rated 2.15, or ‘slightly empathetic.’ The proportion of chatbot responses rated ‘empathetic’ or ‘very empathetic’ was 45%, compared with just 4.6% for physicians.

The authors say the study’s outcome should serve as a catalyst for research into adapting AI for messaging purposes by, for example, using the technology to draft responses to patient questions that the physician or a staff member could then edit. This approach could produce time savings that clinical staff could use for more complex tasks.
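One way to picture the draft-then-edit workflow the authors propose is sketched below. The chatbot call is stubbed out, and every name and structure here is hypothetical, not something from the study or a real messaging system.

```python
# Hypothetical sketch of an AI draft-then-edit messaging workflow.
# draft_reply is a stand-in for a real chatbot call; nothing here is sent
# to patients without clinician review.

def draft_reply(question: str) -> str:
    """Stand-in for a chatbot call that drafts a patient-facing answer."""
    return f"Draft answer to: {question!r} (pending clinician review)"

def review_queue(questions):
    """Pair each incoming question with an AI draft for a clinician to edit."""
    return [
        {"question": q, "draft": draft_reply(q), "status": "needs review"}
        for q in questions
    ]

inbox = ["Is this rash urgent?", "Can I take ibuprofen with my medication?"]
for item in review_queue(inbox):
    # A physician or staff member would edit item["draft"] before sending.
    print(item["status"], "-", item["question"])
```

The key design point is that the AI output is queued as a draft with an explicit review status, matching the article’s suggestion that a physician or staff member edits each response before it reaches the patient.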

In addition, they say, AI messaging could have beneficial effects on the use of clinical resources. “If more patients’ questions are answered quickly, with empathy, and to a high standard, it might reduce unnecessary clinical visits, freeing up resources for those who need them.”

Reference:

Ayers JW, Poliak A, Dredze M, et al. Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Intern Med. Published online April 28, 2023. doi:10.1001/jamainternmed.2023.1838

This article was initially published by our sister publication, Medical Economics.
