Can AI reduce malpractice risk for radiologists?

By Liz Carey, Feature Writer

If a radiologist identifies a critical finding, reports it clearly, and verbally communicates it to the clinician, will a medical malpractice claim against the radiologist make it to trial? 

The likelihood is low, according to Ben Strong, MD, chief medical officer at Virtual Radiologic (vRad). With 22 years at vRad behind him, Strong has in recent years been analyzing vRad's medical malpractice case history, as well as the company's internal data, to identify both the costliest U.S. states in which to practice and the costliest pathologies if missed upon radiologist review.

vRad identified what it calls "The Big Five" -- intracranial hemorrhage, pulmonary embolism, aortic dissection, spinal epidural abscess, and superior mesenteric artery occlusion -- and has developed an AI-supported program for quality assurance (QA).

"We didn't just look at that data, shrug and move on," Strong told AuntMinnie. Instead, the company has created a program of raising awareness and educating its radiologists about The Big Five critical pathologies.


"We use it on the back end in order to not prejudice or bias a radiologist with regard to a given finding," Strong explained. "There's no question that there's a lot of literature about AI out there that says there is an unconscious but measurable bias that radiologists suffer when they're presented with an AI output at the beginning of an interpretation. So we use it in a very different way, on the back end following the signing of any given report." 

The company also uses elements of large language models (LLMs) to support that back-end review and determine whether the radiologist's report includes a critical finding.

"We compare that to the AI output for that study," Strong said. "If there is any disparity between the two, we have a second radiologist serve as the arbiter and turn that case back around to the radiologist for a second look. We do ultimately always leave it up to the opinion of the original reading radiologist as to whether they will addend their report and alter their findings," he noted.

Strong's research has revealed that an average radiologist will face at least one medical malpractice claim every 100,000 studies -- roughly one every seven years.

"This is a commonplace occurrence that probably no practitioner in any specialty can completely avoid through the course of their career," he said.

New liability frontier

Legal and liability questions about the use of AI remain largely unanswered, even as AI gradually works its way into clinical radiology practice. And even though most radiology medical malpractice lawsuits don't proceed to trial, researchers are probing the minds of potential jurors in the AI era of medicine.

For example, a new study examines both nonexpert perceptions of human–AI collaboration and whether the way AI is integrated into radiologists' clinical workflow is a factor in malpractice outcomes.

The study suggests that radiologist–AI workflow may impact the perception of legal liability.

When mock jurors considered a hypothetical medical malpractice case where a patient suffered irreversible brain damage because a radiologist failed to detect a brain bleed from a CT scan, participants were significantly more likely to side against a radiologist who reviewed a scan only once (after AI flagged it) compared to a radiologist who reviewed the scan twice, according to Penn State/Brown University/Seton Hall research published March 10 in Nature Health.

Researchers tested single- and double-reading radiology–AI workflows.

In one scenario, the AI read the case first and flagged it as abnormal, then the radiologist reviewed the images once and concluded there was no evidence of bleeding in the brain. In the second scenario, the radiologist reviewed and interpreted the CT first, without AI input, and then reviewed it again after the AI system flagged the case as abnormal. The radiologist reached the same conclusion in both scenarios.

The study found that more of the 282 mock jurors (74.7% in the single-read scenario) penalized doctors who seemed to rubber-stamp an AI finding rather than exercise independent expertise. Mock jurors sided with the plaintiff almost as often in the double-read condition (52.9%) as when AI had not been used at all (56.3%), according to the findings.

"This suggests that the penalty for disagreeing with correct AI can be mitigated when images are interpreted twice," wrote corresponding author Michael Bernstein, MD, from the Brown Radiology, Psychology and Law Lab in Providence, RI, and colleagues from the Pennsylvania State University College of Medicine.

Workflow tweak

The study also demonstrates how AI in medical diagnostics could be quietly reshaping how juries assign blame when things go wrong.

"We found that mock jurors were more likely to believe that the radiologist met their duty of care when a false negative interpretation occurred after reading an image twice -- once without AI and then once with AI -- relative to only reading the image once with AI," Bernstein and colleagues noted.

While the study didn’t explore the underlying reasons behind the relationship between AI and perception of legal liability, the findings underscored that how people determine fault depends on context.

The research shows that the radiologist workflow can be modified to reduce legal risk to the radiologist. Double reading may increase interpretation time (relative to only one read) but, theoretically, it will also prevent anchoring bias and reduce automation bias, thereby presumably increasing diagnostic accuracy, Bernstein and colleagues explained.

If the research bears out, jurors may reward the deliberate process and professional judgment of the double read. On the other hand, the single-read condition may read as passive deference to the machine, according to the findings.

"AI invites challenging questions regarding medical malpractice among radiologists," Bernstein and colleagues noted in the Nature Health paper.

Could this study's finding accelerate standardization of double-read protocols in the U.S. in the era of AI-assisted medicine? Ultimately, Bernstein and colleagues' study reinforces that hospitals and radiology practices will need to think carefully about how they document and structure AI-assisted review.
