Artificial intelligence (AI) has reached a concerning milestone in medical imaging: AI-generated X-rays can now convincingly mimic real radiographs, deceiving both trained radiologists and advanced AI systems, according to a peer-reviewed study published in Radiology.
The findings highlight emerging risks to diagnostic accuracy, healthcare security, and trust in digital medical records, as synthetic imaging becomes increasingly sophisticated and accessible.
The study, published in Radiology, the journal of the Radiological Society of North America, evaluated 17 radiologists across 12 institutions in six countries, who were asked to assess 264 X-ray images comprising both real and AI-generated scans.
According to the authors, radiologists correctly identified fake images only 41% of the time when unaware that synthetic images were included. When informed beforehand, their accuracy improved to 75%, but misclassification remained substantial.
The findings suggest that even experienced clinicians cannot reliably distinguish between authentic and AI-generated radiographs under typical conditions.
The challenge extends beyond human interpretation. As reported by Nature, multiple state-of-the-art multimodal AI systems, including models capable of analyzing both images and text, also struggled to differentiate real from synthetic X-rays.
Detection accuracy among AI systems ranged from 57% to 85%, indicating inconsistent performance even among advanced tools. Notably, some systems failed to reliably identify images generated using similar underlying technologies, underscoring limitations in current AI validation capabilities.
This dual vulnerability affecting both clinicians and machines raises concerns about the robustness of AI-assisted diagnostic workflows.
Researchers used advanced generative AI models, including diffusion-based techniques, to create highly realistic radiographs across multiple anatomical regions such as the chest, spine, and extremities.
These models were trained to replicate fine-grained anatomical structures, enabling the generation of images that closely resemble clinical scans used in routine practice. The dataset included both normal and pathological-appearing images, increasing the realism and diagnostic complexity of the evaluation.
Despite their realism, the authors noted subtle inconsistencies in synthetic images, including:
Overly smooth or uniform bone textures
Excessively symmetrical anatomical structures
Unnaturally straight spinal alignment
“Too perfect” fracture patterns
However, these artefacts were often too subtle for consistent detection in clinical settings.
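As a purely illustrative sketch of how one such artefact might be quantified, a crude check for "overly smooth" textures could compare local intensity variance against that of genuine bone texture. The window size, threshold, and sample values below are assumptions for demonstration, not methods from the study:

```python
# Toy smoothness check: real bone texture typically shows more local
# intensity variation than an overly uniform synthetic rendering.
# Window size and threshold are illustrative assumptions only.

def local_variance(patch):
    """Variance of pixel intensities in a small patch."""
    n = len(patch)
    mean = sum(patch) / n
    return sum((p - mean) ** 2 for p in patch) / n

def looks_too_smooth(pixels, window=4, threshold=5.0):
    """Flag a 1-D intensity profile if every window has low variance."""
    windows = [pixels[i:i + window]
               for i in range(0, len(pixels) - window + 1, window)]
    return all(local_variance(w) < threshold for w in windows)

# A noisy (realistic) profile versus a flat (suspiciously smooth) one:
real_profile = [120, 128, 117, 131, 122, 126, 119, 133]
fake_profile = [125, 125, 126, 125, 125, 126, 125, 125]
```

In practice, as the study notes, such cues are far too subtle and inconsistent for simple rules like this to work reliably on clinical images.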
Contrary to expectations, the study found that years of radiology experience did not significantly correlate with improved detection accuracy.
While some subspecialists, particularly those in musculoskeletal imaging, performed slightly better, overall performance varied widely, suggesting that human expertise alone is insufficient to counter increasingly realistic AI-generated images.
The authors warn that synthetic medical images introduce a range of high-stakes risks. These include:
Medical fraud, such as fabricated injuries for insurance or legal claims
Clinical misdiagnosis, if fake images are inserted into patient records
Cybersecurity breaches, where attackers manipulate hospital imaging systems
Erosion of trust in digital healthcare infrastructure
As highlighted in ScienceDaily and News-Medical, malicious actors could exploit these vulnerabilities to alter diagnostic evidence at scale, particularly in increasingly digitized and interconnected healthcare systems.
Beyond individual cases, Nature emphasizes that AI-generated X-rays are part of a growing “medical deepfake” ecosystem.
Experts warn of future scenarios involving:
Dataset poisoning, where synthetic images contaminate AI training data
Automated large-scale attacks on hospital systems
Manipulation of AI-assisted diagnostic tools
These developments could compromise not only individual diagnoses but also the integrity of entire healthcare systems.
Researchers call for immediate implementation of safeguards to ensure image authenticity and system security. Recommended measures include:
Digital watermarking embedded in medical images
Cryptographic verification systems linked to imaging devices
AI-based deepfake detection tools
Enhanced training for clinicians to recognize synthetic artefacts
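To make the cryptographic-verification idea concrete, a minimal sketch might have the imaging device compute an authentication tag over each image's bytes, which the hospital viewer recomputes before trusting the file. The key handling and HMAC scheme shown here are illustrative assumptions, not a specific recommendation from the study:

```python
# Minimal sketch of cryptographic image verification: the acquisition
# device signs each image's raw bytes with a secret key; the receiving
# system recomputes the tag to detect replacement or tampering.
# The shared-key scheme and key value are illustrative assumptions.
import hmac
import hashlib

DEVICE_KEY = b"example-shared-secret"  # hypothetical per-device key

def sign_image(image_bytes: bytes, key: bytes = DEVICE_KEY) -> str:
    """Produce an authentication tag at acquisition time."""
    return hmac.new(key, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, tag: str,
                 key: bytes = DEVICE_KEY) -> bool:
    """Check that the image bytes still match the recorded tag."""
    return hmac.compare_digest(sign_image(image_bytes, key), tag)

original = b"\x89FAKE-PIXEL-DATA"   # stand-in for real image bytes
tag = sign_image(original)
```

A real deployment would use per-device keys in secure hardware and asymmetric signatures rather than a shared secret, but the principle is the same: any altered or substituted image fails verification.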
Without such protections, the reliability of medical imaging, long considered a cornerstone of modern diagnosis, may be increasingly at risk.
The study underscores a critical paradox: the same AI technologies that enhance medical imaging and diagnostics can also undermine them.
As synthetic imaging capabilities continue to advance, experts caution that future deepfake CT scans and MRIs may pose even greater challenges.
Ensuring trust in medical data will require not only technological solutions but also systemic changes in how healthcare systems verify, store, and interpret diagnostic information.
Reference:
Tordjman, Mickael, et al. “AI-Generated X-Rays Can Fool Radiologists and AI Systems.” Radiology. Published 2025. https://pubs.rsna.org/doi/10.1148/radiol.252094