AI Chatbots Not Built for Diagnosis, Study Warns After Finding Critical Errors in Brain Scan Interpretation

New research demonstrates how general-use AI platforms make critical and potentially dangerous medical errors.
The researchers used this CT brain scan, showing an ischemic stroke on the left side, as the standardized test case for all five AI models. (Image: rawpixel-com/Freepik)
Author: MBT Desk

OLD WESTBURY, N.Y.—Artificial intelligence (AI) is rapidly transforming healthcare. AI systems can now detect diabetic eye disease from retinal photos and analyze CT images for signs of early-stage lung cancers and stroke.

Right now, at hospitals across the country and throughout the world, specialized algorithms are quietly assisting physicians, prioritizing urgent scans and flagging subtle irregularities that might otherwise go unnoticed. These specialized AI tools—often trained on millions of precisely categorized medical images—are increasingly integrated into real clinical practice.

At the same time, another form of AI has captured the public’s attention: large language models (LLMs). These widely accessible systems, such as ChatGPT and Claude, can analyze both text and images. In theory, these capabilities should make them well-suited for medical tasks, but are general-use AI platforms reliable when it comes to medical diagnosis?

A new study led by New York Institute of Technology College of Osteopathic Medicine (NYITCOM) Associate Professor Milan Toma, Ph.D., suggests otherwise. Published in the scholarly journal Algorithms [1], the study by Toma and his co-authors, who include NYITCOM Senior Development Security Operations Engineer Mihir Matalia and medical student Sungjoon Hong, tested the reliability of some of the world’s most advanced multimodal LLMs (GPT-5, Gemini 3 Pro, Llama 4 Maverick, Grok 4, and Claude Opus 4.5 Extended).

The researchers provided each AI model with the same CT brain scan showing clear intracranial pathology. They then asked the models to analyze the image like a radiologist, identifying the imaging technique used, the location of the pathology in the brain, the primary diagnosis, key imaging features, and potential alternative diagnoses. Overall, the findings revealed a 20 percent rate of fundamental diagnostic error across the AI models, along with concerning variability in interpretation and assessment.
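The paper's actual prompts and code are not reproduced in this article; the sketch below only illustrates the shape of such a standardized comparison. The model names, the query_model helper, and the prompt wording are illustrative assumptions, not the study's materials.

```python
# Minimal sketch of a standardized multi-model comparison.
# NOTE: query_model() is a hypothetical stand-in for each vendor's
# own API client; the prompt text is illustrative, not the study's
# published protocol.

STANDARD_PROMPT = (
    "You are acting as a radiologist. For the attached head CT image, "
    "report: (1) the imaging technique, (2) the anatomical location of "
    "any pathology, (3) the primary diagnosis, (4) key imaging features, "
    "and (5) alternative diagnoses to consider."
)

MODELS = ["model_a", "model_b", "model_c", "model_d", "model_e"]  # placeholders

def query_model(model_name: str, prompt: str, image_path: str | None = None) -> str:
    """Hypothetical wrapper around one vendor's multimodal chat API."""
    raise NotImplementedError("Each vendor's API client would be called here.")

def run_comparison(image_path: str) -> dict[str, str]:
    # Every model receives the identical image and identical prompt,
    # so any differences in output reflect the models, not the inputs.
    return {m: query_model(m, STANDARD_PROMPT, image_path) for m in MODELS}
```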

At first, the models produced promising results, with all five correctly identifying the image as a CT brain scan. Four models also detected the key finding: an ischemic stroke in the territory of the left middle cerebral artery. However, one model made a fundamental error, misclassifying the stroke as a hemorrhage on the opposite side of the brain. In a real clinical setting, this error could significantly harm a patient, as ischemic and hemorrhagic strokes require very different treatments.

Even among the four models that reached the correct diagnosis, the explanations differed greatly. Some offered conflicting interpretations of when the stroke first occurred; others disagreed on alternative diagnoses, on which additional brain regions were affected, and on the presence of calcification. The researchers then added a twist: they asked each AI model to grade the others’ diagnostic explanations. This cross-evaluation exposed additional inconsistencies, with some models grading far more harshly than others. One model even concluded that the findings showed chronic brain abnormalities rather than an acute stroke and, as a result, systematically penalized the other models’ responses.
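The study's grading rubric is likewise not published here; this sketch only shows the structure of such a cross-evaluation, reusing the hypothetical query_model helper from the earlier sketch. The grading prompt and the 0-10 scale are assumptions for illustration.

```python
def cross_evaluate(responses: dict[str, str]) -> dict[tuple[str, str], str]:
    """Have each model grade every other model's diagnostic explanation.

    Hypothetical illustration: the grading prompt and scoring scale are
    assumptions, not the study's published rubric.
    """
    grades: dict[tuple[str, str], str] = {}
    for grader in responses:
        for author, answer in responses.items():
            if grader == author:
                continue  # models do not grade their own answers
            grade_prompt = (
                "Grade the following radiology interpretation of a head CT "
                "on a 0-10 scale for diagnostic accuracy, and briefly "
                f"justify the score:\n\n{answer}"
            )
            grades[(grader, author)] = query_model(grader, grade_prompt)
    return grades
```

Because every model acts as both author and grader, a single systematically harsh (or mistaken) grader skews the whole score matrix, which is exactly the kind of inconsistency the study reports.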

In recent years, Toma has published more than 30 peer-reviewed studies on AI in medical diagnostics and healthcare, as well as two books on the topic.


“Our research highlights a critical distinction in the AI landscape. Most successful medical AI tools are task-specific algorithms, trained on large datasets of labeled medical images and validated for very specific diagnostic tasks,” says Toma. “However, large language models are not optimized for diagnostics—they are built for linguistics and conversation. Accordingly, they generate explanations that sound authoritative, even when their underlying interpretation is wrong or inconsistent.”

Toma and his co-authors conclude that the future of healthcare AI will likely combine specialized diagnostic systems with language models. However, while LLMs may be useful for clinical documentation, summarizing reports, or communicating with patients, oversight from a medical expert remains non-negotiable for all diagnostic interpretations.

Reference:

1) https://www.mdpi.com/1999-4893/19/3/170

(Newswise/HG)
