Keywords: Convolutional Neural Networks; Explainable Artificial Intelligence; Grad-CAM; Integrated Gradients; Pneumothorax Diagnosis; Saliency Map

MeSH: Humans; Pneumothorax / diagnostic imaging; Artificial Intelligence; Deep Learning; Algorithms; Tomography, X-Ray Computed / methods; Medical Informatics / methods

Source: DOI: 10.1016/j.jbi.2024.104673

Abstract:
OBJECTIVE: Pneumothorax is an acute thoracic disease caused by abnormal air collection between the lungs and chest wall. Recently, artificial intelligence (AI), especially deep learning (DL), has been increasingly employed for automating the diagnostic process of pneumothorax. To address the opaqueness often associated with DL models, explainable artificial intelligence (XAI) methods have been introduced to outline regions related to pneumothorax. However, these explanations sometimes diverge from actual lesion areas, highlighting the need for further improvement.
METHODS: We propose a template-guided approach to incorporate the clinical knowledge of pneumothorax into model explanations generated by XAI methods, thereby enhancing the quality of the explanations. Utilizing one lesion delineation created by radiologists, our approach first generates a template that represents potential areas of pneumothorax occurrence. This template is then superimposed on model explanations to filter out extraneous explanations that fall outside the template's boundaries. To validate its efficacy, we carried out a comparative analysis of three XAI methods (Saliency Map, Grad-CAM, and Integrated Gradients) with and without our template guidance when explaining two DL models (VGG-19 and ResNet-50) on two real-world datasets (SIIM-ACR and ChestX-Det).
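The filtering step described above can be sketched as an element-wise mask applied to an attribution map. This is a minimal illustration, not the paper's implementation; the function name and the toy arrays are hypothetical.

```python
import numpy as np

def template_guided_explanation(saliency, template):
    """Zero out attribution values falling outside the template.

    saliency : 2-D array of attribution scores from an XAI method
               (e.g. Saliency Map, Grad-CAM, Integrated Gradients).
    template : 2-D binary mask (1 = plausible pneumothorax region),
               derived from radiologist lesion delineations.
    """
    mask = (np.asarray(template) > 0).astype(saliency.dtype)
    return saliency * mask  # keep only in-template attributions

# Toy example: a 4x4 saliency map with a spurious activation
# (top-right) that lies outside the template and is filtered out.
saliency = np.array([[0.9, 0.1, 0.0, 0.8],
                     [0.2, 0.7, 0.0, 0.0],
                     [0.0, 0.0, 0.3, 0.0],
                     [0.0, 0.0, 0.0, 0.6]])
template = np.array([[1, 1, 0, 0],
                     [1, 1, 0, 0],
                     [0, 0, 0, 0],
                     [0, 0, 0, 0]])
filtered = template_guided_explanation(saliency, template)
```

In this sketch the spurious score of 0.8 outside the template is suppressed, while the in-template scores are left unchanged.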
RESULTS: The proposed approach consistently improved baseline XAI methods across twelve benchmark scenarios built on three XAI methods, two DL models, and two datasets. The average incremental percentages, calculated as the performance improvement over baseline, were 97.8% in Intersection over Union (IoU) and 94.1% in Dice Similarity Coefficient (DSC) when comparing model explanations against ground-truth lesion areas. We further visualized baseline and template-guided model explanations on radiographs to showcase the performance of our approach.
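The two metrics reported above compare a binarized explanation against a ground-truth lesion mask. A minimal sketch follows; the threshold value and function name are illustrative assumptions, not details from the paper.

```python
import numpy as np

def iou_and_dice(explanation, ground_truth, threshold=0.5):
    """Compute IoU and Dice between a thresholded explanation map
    and a binary ground-truth lesion mask."""
    pred = np.asarray(explanation) >= threshold   # binarize the explanation
    gt = np.asarray(ground_truth) > 0
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = intersection / union if union else 0.0
    denom = pred.sum() + gt.sum()
    dice = 2 * intersection / denom if denom else 0.0
    return iou, dice

# Toy 2x2 example: one true-positive pixel, one false-positive pixel.
pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
iou, dice = iou_and_dice(pred, gt)
# intersection = 1, union = 2 -> IoU = 0.5; Dice = 2*1 / (2+1) = 2/3
```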
CONCLUSIONS: In the context of pneumothorax diagnoses, we proposed a template-guided approach for improving model explanations. Our approach not only aligns model explanations more closely with clinical insights but also exhibits extensibility to other thoracic diseases. We anticipate that our template guidance will forge a novel approach to elucidating AI models by integrating clinical domain expertise.