AI-Generated X-Rays Fool Radiologists and Chatbots, Study Reveals Critical Risks to Medical Imaging

New research warns of diagnostic errors, fraud, and cybersecurity threats as synthetic radiographs become nearly indistinguishable from real scans.
According to the authors, radiologists correctly identified fake images only 41% of the time when unaware that synthetic images were included. (Freepik)
Artificial intelligence (AI) has reached a concerning milestone in medical imaging: AI-generated X-rays can now convincingly mimic real radiographs, deceiving both trained radiologists and advanced AI systems, according to a peer-reviewed study published in Radiology.

The findings highlight emerging risks to diagnostic accuracy, healthcare security, and trust in digital medical records, as synthetic imaging becomes increasingly sophisticated and accessible.

Radiologists Struggle to Identify AI-Generated X-Rays

Published by the Radiological Society of North America in Radiology, the study evaluated 17 radiologists across 12 institutions in six countries, each asked to assess 264 X-ray images comprising both real and AI-generated scans.

According to the authors, radiologists correctly identified fake images only 41% of the time when unaware that synthetic images were included. When informed beforehand, their accuracy improved to 75%, but misclassification remained substantial.

The findings suggest that even experienced clinicians cannot reliably distinguish between authentic and AI-generated radiographs under typical conditions.

AI Systems Also Fail to Reliably Detect Synthetic Images

The challenge extends beyond human interpretation. As reported by Nature, multiple state-of-the-art multimodal AI systems, including models capable of analyzing both images and text, also struggled to differentiate real from synthetic X-rays.

Detection accuracy among AI systems ranged from 57% to 85%, indicating inconsistent performance even among advanced tools. Notably, some systems failed to reliably identify images generated using similar underlying technologies, underscoring limitations in current AI validation capabilities.

This dual vulnerability, affecting both clinicians and machines, raises concerns about the robustness of AI-assisted diagnostic workflows.

How the Study Generated Convincing Deepfake X-Rays

Researchers used advanced generative AI models, including diffusion-based techniques, to create highly realistic radiographs across multiple anatomical regions such as the chest, spine, and extremities.

These models were trained to replicate fine-grained anatomical structures, enabling the generation of images that closely resemble clinical scans used in routine practice. The dataset included both normal and pathological-appearing images, increasing the realism and diagnostic complexity of the evaluation.

Despite their realism, the authors noted subtle inconsistencies in synthetic images, including:

  • Overly smooth or uniform bone textures

  • Excessively symmetrical anatomical structures

  • Unnaturally straight spinal alignment

  • “Too perfect” fracture patterns

However, these artefacts were often too subtle for consistent detection in clinical settings.


Experience Does Not Significantly Improve Detection

Contrary to expectations, the study found that years of radiology experience did not significantly correlate with improved detection accuracy.

While some subspecialists, particularly in musculoskeletal imaging, performed slightly better, overall performance varied widely, suggesting that human expertise alone is insufficient to counter increasingly realistic AI-generated images.

Potential Misuse: Fraud, Misdiagnosis, and Cyberattacks

The authors warn that synthetic medical images introduce a range of high-stakes risks. These include:

  • Medical fraud, such as fabricated injuries for insurance or legal claims

  • Clinical misdiagnosis, if fake images are inserted into patient records

  • Cybersecurity breaches, where attackers manipulate hospital imaging systems

  • Erosion of trust in digital healthcare infrastructure

As highlighted in ScienceDaily and News-Medical, malicious actors could exploit these vulnerabilities to alter diagnostic evidence at scale, particularly in increasingly digitized and interconnected healthcare systems.

A Broader Threat: The Rise of Medical Deepfakes

Beyond individual cases, Nature emphasizes that AI-generated X-rays are part of a growing “medical deepfake” ecosystem.

Experts warn of future scenarios involving:

  • Dataset poisoning, where synthetic images contaminate AI training data

  • Automated large-scale attacks on hospital systems

  • Manipulation of AI-assisted diagnostic tools

These developments could compromise not only individual diagnoses but also the integrity of entire healthcare systems.

Safeguards Urgently Needed to Protect Medical Imaging

Researchers call for immediate implementation of safeguards to ensure image authenticity and system security. Recommended measures include:

  • Digital watermarking embedded in medical images

  • Cryptographic verification systems linked to imaging devices

  • AI-based deepfake detection tools

  • Enhanced training for clinicians to recognize synthetic artefacts
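The "cryptographic verification" recommendation above can be illustrated with a minimal sketch: an imaging device signs each scan's raw bytes with a secret key at acquisition time, and any later system recomputes the tag to detect tampering or substitution. The key, function names, and sample bytes below are illustrative assumptions for this article, not details from the study or any specific hospital system.

```python
import hashlib
import hmac

# Hypothetical setup: each scanner is provisioned with its own secret key.
SECRET_KEY = b"device-secret-key"  # assumption, not a real deployment value

def sign_image(image_bytes: bytes) -> str:
    """Return an HMAC-SHA256 tag binding the image bytes to the device key."""
    return hmac.new(SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time to detect tampering."""
    expected = sign_image(image_bytes)
    return hmac.compare_digest(expected, tag)

# Toy example: the raw file bytes of a scan (placeholder data).
original = b"\x89PNG...raw pixel data..."
tag = sign_image(original)

print(verify_image(original, tag))              # True: untouched image passes
print(verify_image(original + b"edit", tag))    # False: any modification fails
```

A scheme like this only attests that an image is unchanged since signing; it cannot, on its own, prove the image came from a real patient, which is why the researchers pair it with watermarking and detection tools.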

Without such protections, the reliability of medical imaging, long considered a cornerstone of modern diagnosis, may be increasingly at risk.

Implications for the Future of AI in Healthcare

The study underscores a critical paradox: the same AI technologies that enhance medical imaging and diagnostics can also undermine them.

As synthetic imaging capabilities continue to advance, experts caution that future deepfake CT scans and MRIs may pose even greater challenges.

Ensuring trust in medical data will require not only technological solutions but also systemic changes in how healthcare systems verify, store, and interpret diagnostic information.

Reference:

Tordjman, Mickael, et al. “AI-Generated X-Rays Can Fool Radiologists and AI Systems.” Radiology. Published 2025. https://pubs.rsna.org/doi/10.1148/radiol.252094

(Rh/ARC)

Medbound Times
www.medboundtimes.com