A deep-learning (DL) AI model developed using gadoxetic acid-enhanced MRI can effectively diagnose hepatocellular carcinoma, according to a study published May 30 in Radiology: Imaging Cancer.
Significantly, the model not only classifies lesions but also provides visual explanations for its decisions, noted first authors Mingkai Li, MD, of the Third Affiliated Hospital of Sun Yat-sen University in Guangzhou, and Zhi Zhang, PhD, of Zhaoqing University, in China.
“Radiologists assisted by the model, which included a post hoc Liver Imaging Reporting and Data System [LI-RADS] feature identification tool, had improved sensitivity,” the group wrote.
Primary liver cancer is the sixth most frequently diagnosed cancer and the third leading cause of cancer-related death worldwide. Gadoxetic acid-enhanced MRI can effectively detect small liver tumors, with LI-RADS aiding in definitive diagnoses without the need for biopsies, the authors explained.
Yet although the MRI technique is highly specific (94%), it has relatively low sensitivity (55%), so the researchers aimed to develop a DL-based AI tool that could improve diagnostic sensitivity.
To develop the model, the group used imaging data from 839 patients with 1,023 focal liver lesions (594 hepatocellular carcinomas and 429 nonhepatocellular carcinomas) from five independent hospitals in China. Input included precontrast T1-weighted, T2-weighted, arterial phase, portal venous phase, and hepatobiliary phase images with five manually labeled bounding boxes for each liver lesion.
A graphical abstract of the study. (RSNA)
The researchers first trained the model to distinguish hepatocellular carcinomas (HCCs) from non-HCC lesions and then added a feature classifier designed to identify specific LI-RADS features.
Next, the group evaluated the AI model’s performance on lesions of different sizes, various LI-RADS categories, and specific lesion types. They then assessed how the model improved the diagnostic performance of individual radiologists on an external dataset. During the AI-assisted readings, radiologists were provided with the bounding boxes overlaid on images, along with a diagnosis of HCC or non-HCC and an estimated probability.
According to the results, on a test set of 119 HCC images and 75 non-HCC images, the model accurately diagnosed HCC with an area under the receiver operating characteristic curve (AUC) of 0.97. In addition, compared with LI-RADS category 5 classifications (“definitive” HCC), the AI model showed higher sensitivity (91.6% vs. 74.8%) and similar specificity (90.7% vs. 96%).
Finally, two readers identified more LI-RADS major features and more accurately classified LI-RADS category 5 lesions when assisted versus unassisted by AI, with higher sensitivities (reader 1, 86% vs. 72%; p < 0.001; reader 2, 89% vs. 74%; p < 0.001) and unchanged specificities (93% and 95%, respectively; p > 0.99 for both).
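For readers less familiar with these metrics, sensitivity and specificity fall directly out of a binary confusion matrix. The sketch below uses hypothetical counts (not the study’s data) to show how the reported percentages are computed:

```python
# Sensitivity and specificity from a binary confusion matrix.
# The counts below are illustrative only, NOT taken from the study.

def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: share of actual HCC lesions flagged as HCC."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: share of non-HCC lesions correctly cleared."""
    return tn / (tn + fp)

# Hypothetical reader results on 100 HCC and 100 non-HCC lesions.
tp, fn = 86, 14  # HCC lesions correctly called HCC vs. missed
tn, fp = 93, 7   # non-HCC lesions correctly cleared vs. falsely flagged

print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.86
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.93
```

An AI-assisted reader who converts misses (fn) into correct calls (tp) raises sensitivity without touching the non-HCC column, which is how sensitivity can improve while specificity stays flat, as reported in the study.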
“Our results show that a DL-based model allows accurate diagnosis of HCC on gadoxetic acid-enhanced MRI scans. Moreover, readers showed improved sensitivity, without evidence of a difference in specificity, for HCC diagnosis when assisted by AI,” the group wrote.
Ultimately, the researchers noted, the results suggest that the AI-assisted strategy may facilitate prompt intervention for HCC.
In an accompanying editorial, Yashbir Singh, PhD, Gregory Gores, MD, and Bradley Erickson, MD, PhD, all of the Mayo Clinic in Rochester, MN, noted that the model’s performance “is impressive” and added that an important contribution of the work is its emphasis on explainability.
“The explainable AI approach demonstrated could serve as a template for developing interpretable models in other areas of diagnostic radiology. As regulatory bodies increasingly emphasize the importance of explainability in AI systems for health care, studies like this provide practical examples of how to achieve this goal,” they wrote.