BACKGROUND: Chest X-ray (CXR) is one of the most commonly performed imaging tests worldwide. Due to its wide usage, there is a growing need for automated and generalizable methods to accurately diagnose these images. Traditional methods for chest X-ray analysis often struggle to generalize across diverse datasets due to variations in imaging protocols, patient demographics, and the presence of overlapping anatomical structures. Therefore, there is a significant demand for advanced diagnostic tools that can consistently identify abnormalities across different patient populations and imaging settings. We propose a method that provides a generalizable diagnosis of chest X-rays.
METHODS: Our method utilizes an attention-guided decomposer network (ADSC) to extract disease maps from chest X-ray images. The ADSC employs one encoder and multiple decoders, incorporating a novel self-consistency loss to ensure consistent functionality across its modules. The attention-guided encoder captures salient features of abnormalities, while three distinct decoders generate a normal synthesized image, a disease map, and a reconstructed input image, respectively. A discriminator differentiates between real and synthesized normal chest X-rays, enhancing the quality of the generated images. The disease map, along with the original chest X-ray image, is fed to a DenseNet-121 classifier modified for multi-class classification of the input X-ray.
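To make the decomposition idea concrete, the sketch below shows one plausible form of the self-consistency constraint among the three decoder outputs. The abstract does not specify the exact loss terms, so this is an illustrative assumption: the reconstruction should match the input, and the synthesized normal image combined additively with the disease map should agree with that reconstruction. The function names, the additive composition, and the weighting factor `lam` are all hypothetical.

```python
def mse(a, b):
    """Mean squared error between two equal-length sequences of pixels."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def self_consistency_loss(normal, disease_map, recon, x, lam=1.0):
    """Hypothetical self-consistency loss for a decomposer network.

    normal      -- synthesized normal image (decoder 1)
    disease_map -- extracted disease map (decoder 2)
    recon       -- reconstructed input image (decoder 3)
    x           -- original chest X-ray input
    """
    # Term 1: the reconstruction decoder should reproduce the input.
    l_recon = mse(recon, x)
    # Term 2 (assumed additive decomposition): normal + disease map
    # should be consistent with the reconstructed image.
    composed = [n + d for n, d in zip(normal, disease_map)]
    l_consist = mse(composed, recon)
    return l_recon + lam * l_consist
```

With a perfect decomposition (reconstruction equals the input and the normal image plus the disease map equals the reconstruction), the loss is zero; any disagreement between the decoders is penalized.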
RESULTS: Experimental results on multiple publicly available datasets demonstrate the effectiveness of our approach. For multi-class classification, we achieve up to a 3% improvement in AUROC score for certain abnormalities compared to existing methods. For binary classification (normal versus abnormal), our method surpasses existing approaches across various datasets. In terms of generalizability, we train our model on one dataset and test it on multiple datasets. The standard deviation of AUROC scores across the different test datasets is calculated to measure the variability of performance, and our model exhibits superior generalization across datasets from diverse sources.
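The generalizability measure described above is straightforward to reproduce: train on a single source dataset, collect per-dataset AUROC on each held-out test set, and report the standard deviation (lower means more consistent cross-dataset performance). The dataset names and AUROC values below are purely illustrative, not results from the paper.

```python
from statistics import mean, stdev

# Hypothetical AUROC scores of a model trained on one source dataset
# and evaluated on several external test datasets (illustrative values).
aurocs = {
    "external_test_1": 0.91,
    "external_test_2": 0.88,
    "external_test_3": 0.90,
}

scores = list(aurocs.values())
# Lower standard deviation indicates more stable generalization
# across datasets from diverse sources.
print(f"mean AUROC: {mean(scores):.3f}, std: {stdev(scores):.4f}")
```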
CONCLUSIONS: Our model shows promising results for the generalizable diagnosis of chest X-rays. The impact of the attention mechanism and the self-consistency loss in our method is evident from the results. In the future, we plan to incorporate Explainable AI techniques to provide explanations for model decisions. Additionally, we aim to design data augmentation techniques to mitigate the effect of class imbalance on our model.