AI Enters Operating Rooms, Raising Alarms Over Surgical Errors and Patient Safety

A Reuters investigation finds AI-assisted surgical tools linked to patient injuries, device malfunctions and growing regulatory gaps.
Image: A doctor working with an AI system. Federal safety databases have recorded at least 100 reports of malfunctions or adverse events involving the system. (Freepik)

According to a Reuters investigation published on February 9, 2026, artificial intelligence is rapidly entering operating rooms, but mounting reports of botched procedures and misidentified body parts are raising serious concerns about patient safety.

As hospitals adopt AI to guide surgeons, regulators and medical experts are questioning whether oversight has kept pace with innovation. The reports highlight injuries, lawsuits and regulatory gaps tied to AI systems that promise precision but may introduce new risks inside operating rooms.

AI-Guided Surgery Under Scrutiny

One of the most closely examined cases involves the TruDi Navigation System, a device used primarily in sinus surgeries. The system helps surgeons track instruments in real time using imaging data. In 2021, its manufacturer added machine-learning features designed to improve accuracy and efficiency.

Since then, federal safety databases have recorded at least 100 reports of malfunctions or adverse events involving the system through November 2025. According to regulators, at least 10 patients were injured during procedures in which the device was used.

Reported injuries included punctures at the base of the skull, leaks of cerebrospinal fluid, damage to major arteries and strokes. In several cases, surgeons allegedly relied on incorrect guidance provided by the system while operating in areas close to critical blood vessels and brain tissue.

Lawsuits Highlight Patient Harm

Among the most serious cases is that of Erin Ralph, who underwent sinus surgery in June 2022. During the procedure, her carotid artery was damaged, leading to a stroke that left her with lasting disabilities. Ralph later filed a lawsuit claiming the AI-assisted system misled her surgeon about the instrument’s position.

Other lawsuits describe similar allegations, with patients arguing that they were never warned about the risks associated with AI-enhanced guidance tools. Manufacturers involved in the cases have denied that the technology directly caused the injuries and maintain that their devices meet regulatory standards.

The company behind the TruDi system changed ownership in 2024, but the legal cases remain ongoing.

AI Errors Beyond the Operating Room

Concerns extend well beyond surgical navigation. Regulators say there are now more than 1,350 AI-enabled medical devices authorized for use, roughly double the number approved just a few years ago.

Some of these systems have already shown troubling flaws. Investigators found examples where AI software misidentified fetal body parts during prenatal ultrasounds and failed to correctly flag dangerous heart rhythm abnormalities in monitoring devices.

A separate academic review found that AI-enhanced devices were recalled at a higher rate than traditional medical equipment, with many recalls occurring within a year of approval.

A Routine Procedure, a Stroke, and Questions About AI Safety

In May 2023, surgeon Marc Dean was using the TruDi navigation system during a sinuplasty procedure when patient Donna Fernihough’s carotid artery ruptured, according to a lawsuit filed in a federal court in Fort Worth. The filing states that the injury caused heavy bleeding in the operating room and that Fernihough suffered a stroke later the same day.

Fernihough alleges that flaws in TruDi’s AI-assisted guidance contributed to the injury. Her lawsuit claims Acclarent knew the artificial intelligence integrated into the system could produce inconsistent and inaccurate results. Acclarent has denied the allegations, saying it only distributed TruDi and did not design or manufacture the device. Its parent company, Integra LifeSciences, told Reuters there is no evidence linking the AI technology to the reported injuries.

Image: Two doctors using an AI screen to analyze results. Some of these systems have already shown troubling flaws. (DC Studio/Freepik)

Key Points:

  • Reuters reports that AI-assisted surgery has been linked to serious operating room errors and patient injuries.

  • At least 10 patients were harmed in procedures involving an AI-enabled surgical navigation system.

  • More than 1,350 AI-powered medical devices are now approved for use across healthcare.

  • AI-enabled devices are being recalled more often than traditional medical equipment.

  • Regulators and clinicians say oversight has not kept pace with rapid AI adoption.

When Medical AI Gets Anatomy Wrong

In June 2025, an FDA report alleged that AI-powered prenatal ultrasound software misidentified fetal anatomy. The system, known as Sonio Detect, uses machine-learning technology to analyze fetal images, but the report stated that the algorithm incorrectly labeled structures and matched them to the wrong body parts. The filing did not report any patient harm.

Sonio Detect is owned by Samsung Medison, a subsidiary of Samsung Electronics. In response, Samsung Medison said the FDA report did not identify a safety issue and that regulators have not requested any corrective action related to the software.

Regulatory Pressure and Staffing Gaps

Former federal scientists say regulatory teams responsible for evaluating AI medical devices are under growing strain. Staffing levels dropped following government cost-cutting efforts, even as the volume and complexity of AI submissions surged.

Scientists also warn that many AI systems behave like “black boxes,” making it difficult for doctors and regulators to fully understand how conclusions are reached or why errors occur.

(Rh/ARC)


Medbound Times
www.medboundtimes.com