ChatGPT Successfully Completes Radiology Board Examination

The latest iteration of the AI chatbot ChatGPT, powered by the GPT-4 model, correctly answered 81% of questions on a radiology board-style exam, up from 69% for its predecessor, GPT-3.5. Despite this improvement, the AI still struggles with some higher-order thinking questions and occasionally produces factually incorrect responses, limiting its reliability in medical contexts. Research from the Radiological Society of North America underscores both the potential of large language models and their inherent limitations.

Dr. Rajesh Bhayana, a lead author of the study, emphasized the growing utility of large language models in radiology and the impressive progress GPT-4 represents. While performance improved on higher-order thinking questions, accuracy on lower-order questions remained largely unchanged, and the model's remaining errors in advanced reasoning, often delivered with misleading confidence, raise concerns about its reliability in medical education.

Dr. Bhayana advises that ChatGPT's current role should be that of a supplementary tool, useful for generating ideas and summarizing data, rather than a sole source of medical information; all of its outputs must be verified for accuracy. The AI's persistent tendency to "hallucinate," producing confident but erroneous responses, remains the chief limit on its practical application.
