A Comparative Analysis of XAI Techniques for Medical Imaging: Challenges and Opportunities
Barra P.; Staffa M.
2024-01-01
Abstract
The application of artificial intelligence (AI) in medical imaging has significantly improved diagnostic accuracy. However, the reliance on black-box models remains a barrier to widespread adoption in clinical settings. This article compares the main explainable AI (XAI) techniques, namely Grad-CAM, LIME, and SHAP, evaluating their effectiveness in interpreting deep learning models used in medical imaging. Two case studies are analyzed, comparing the three methods and highlighting their strengths and weaknesses. The results of this analysis show that Grad-CAM provides intuitive visualizations; LIME offers excellent flexibility in application; and SHAP delivers complete and accurate explanations, despite its high computational cost.
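To make the comparison concrete, the following minimal sketch (not from the paper) shows how the three techniques might be applied to the same PyTorch image classifier. The `resnet18` backbone, the random placeholder input, and the use of the third-party `grad-cam`, `lime`, and `shap` packages are illustrative assumptions, not the authors' setup.

```python
# Illustrative sketch: Grad-CAM, LIME, and SHAP on one classifier.
# Assumes `pip install grad-cam lime shap torch torchvision`.
import numpy as np
import torch
from torchvision.models import resnet18
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget
from lime import lime_image
import shap

model = resnet18(weights=None).eval()                     # stand-in for a diagnostic CNN
image = np.random.rand(224, 224, 3).astype(np.float32)    # stand-in for a medical scan
tensor = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0)
target_class = 0                                          # hypothetical class of interest

# Grad-CAM: class-discriminative heatmap from the last convolutional block.
cam = GradCAM(model=model, target_layers=[model.layer4[-1]])
heatmap = cam(input_tensor=tensor,
              targets=[ClassifierOutputTarget(target_class)])[0]

# LIME: perturbs superpixels and fits a local surrogate around the prediction.
def predict_fn(batch):
    # LIME passes a batch of HWC images; convert to NCHW for the model.
    x = torch.from_numpy(batch).permute(0, 3, 1, 2).float()
    with torch.no_grad():
        return torch.softmax(model(x), dim=1).numpy()

explanation = lime_image.LimeImageExplainer().explain_instance(
    image, predict_fn, top_labels=1, num_samples=1000)

# SHAP: expected-gradients attributions against a background distribution
# (an all-zeros baseline here, purely for illustration).
background = torch.zeros((8, 3, 224, 224))
shap_values = shap.GradientExplainer(model, background).shap_values(tensor)
```

The sketch also hints at the trade-offs summarized in the abstract: the Grad-CAM pass is a single forward/backward computation, LIME requires `num_samples` model evaluations per image, and SHAP's attribution over a background set is the most expensive of the three.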