Calls for Global Guidelines for Safer AI Use in Medicine

World-first review highlights the need for clear guidelines to safely trial AI in healthcare settings.
A world-first review by the University of Adelaide highlights the lack of clear guidelines for testing AI tools in health clinics through silent trials. (@dcstudio/Freepik)
Author: MBT Desk

A world-first review led by University of Adelaide researchers has found a lack of clear guidelines around the early testing of AI tools in health clinics, a process known as silent trials.

The global scoping review examined this early phase of testing and revealed wide variation in how the trials are conducted and in the measures used to assess the effectiveness of the tools.

“This lack of guidance around silent trials is concerning, as AI models can be unpredictable and difficult to use in real-world settings if they haven’t been tested thoroughly,” said Lana Tikhomirov, a PhD candidate at the University of Adelaide’s Australian Institute for Machine Learning.

“Some of the trials in our review focused on AI metrics that weren’t clinically useful, while others looked at the bare minimum with no details on how the model performed in a clinical setting.

“If these AI tools are rolled out without comprehensive testing and things go wrong, it could expose both patients and clinicians to harmful advice.”

Silent trials are when AI models are tested in the setting where they are intended to be used, but the results don’t influence patient care because they aren’t shared with the clinical team at the time of treatment.

Researchers from the University of Adelaide warn that without clear guidelines, silent trials may reduce the effectiveness of AI tools in real-world healthcare. (@dcstudio/Freepik)

Currently there are no formal guidelines on how to conduct these trials, which researchers say are critical to ensure an AI tool will be useful and beneficial in a local setting.

“Silent trials are a low-risk way to test technology without compromising patient outcomes,” said co-author Associate Professor Melissa McCradden, Deputy Director of the University of Adelaide’s Australian Institute for Machine Learning, AI Director at the Women’s and Children’s Health Network and Hospital Research Foundation Fellow in Paediatric AI Ethics.

“We know that many AI models fail when they’re introduced into real-world settings and an AI tool that works in one hospital may not work in another.

“Conducting comprehensive silent trials that adhere to a clear set of international guidelines is critical if we want to successfully take AI tools from bench to bedside.”

The scoping review has been published in Nature Health [1] and is part of a larger study looking at silent-phase evaluations for healthcare AI.

Reference:

[1] https://doi.org/10.1038/s44360-025-00048-z

(Newswise/HG)

Medbound Times
www.medboundtimes.com