Keywords: Computer-aided Diagnosis, Conventional Radiography, Convolutional Neural Network (CNN), Deep Learning Algorithms, Localization, Machine Learning Algorithms

Source: DOI: 10.1148/ryai.210299 (PDF via PubMed)

Abstract:
Purpose: To evaluate the ability of fine-grained annotations to overcome shortcut learning in deep learning (DL)-based diagnosis using chest radiographs.
Materials and Methods: Two DL models were developed using radiograph-level annotations (disease present: yes or no) and fine-grained lesion-level annotations (lesion bounding boxes), named CheXNet and CheXDet, respectively. A total of 34 501 chest radiographs obtained from January 2005 to September 2019 were retrospectively collected and annotated regarding cardiomegaly, pleural effusion, mass, nodule, pneumonia, pneumothorax, tuberculosis, fracture, and aortic calcification. The internal classification performance and lesion localization performance of the models were compared on a testing set (n = 2922); external classification performance was compared on the National Institutes of Health (NIH) Google (n = 4376) and PadChest (n = 24 536) datasets; and external lesion localization performance was compared on the NIH ChestX-ray14 dataset (n = 880). The models were also compared with radiologist performance on a subset of the internal testing set (n = 496). Performance was evaluated using receiver operating characteristic (ROC) curve analysis.
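The difference between the two supervision signals can be pictured as the following training targets. This is a minimal illustrative sketch only; the field names, identifiers, and bounding-box coordinate convention are assumptions, not the authors' actual data schema.

```python
# Minimal illustrative sketch (not the study's data format):
# contrast between radiograph-level and lesion-level training targets.

# Radiograph-level annotation (CheXNet-style): one binary label per finding.
image_level_annotation = {
    "image_id": "chest_0001",
    "labels": {"cardiomegaly": 1, "pleural_effusion": 0, "pneumothorax": 1},
}

# Lesion-level annotation (CheXDet-style): one bounding box per lesion.
# The [x_min, y_min, x_max, y_max] pixel convention is assumed for illustration.
lesion_level_annotation = {
    "image_id": "chest_0001",
    "boxes": [
        {"finding": "pneumothorax", "bbox": [612, 180, 940, 655]},
        {"finding": "cardiomegaly", "bbox": [310, 520, 880, 930]},
    ],
}
```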
Results: Given sufficient training data, both models performed similarly to radiologists. CheXDet achieved significant improvement for external classification, such as classifying fracture on NIH Google (CheXDet area under the ROC curve [AUC], 0.67; CheXNet AUC, 0.51; P < .001) and PadChest (CheXDet AUC, 0.78; CheXNet AUC, 0.55; P < .001). CheXDet achieved higher lesion detection performance than CheXNet for most abnormalities on all datasets, such as detecting pneumothorax on the internal testing set (CheXDet jackknife alternative free-response ROC [JAFROC] figure of merit [FOM], 0.87; CheXNet JAFROC FOM, 0.13; P < .001) and NIH ChestX-ray14 (CheXDet JAFROC FOM, 0.55; CheXNet JAFROC FOM, 0.04; P < .001).
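For readers unfamiliar with the classification metric, a per-finding AUC comparison of this kind can in principle be computed with standard tooling. The sketch below uses scikit-learn's roc_auc_score on synthetic labels and scores; the data and variable names are placeholders, and the paper's statistical significance testing is not reproduced here.

```python
# Illustrative sketch with synthetic data; not the study's actual evaluation code.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)            # ground-truth labels for one finding
scores_model_a = rng.random(500)                 # stand-in probabilities, model A
scores_model_b = np.clip(0.6 * y_true + 0.4 * rng.random(500), 0.0, 1.0)  # model B

print(f"model A AUC: {roc_auc_score(y_true, scores_model_a):.2f}")
print(f"model B AUC: {roc_auc_score(y_true, scores_model_b):.2f}")
```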
Conclusion: Fine-grained annotations overcame shortcut learning and enabled DL models to identify correct lesion patterns, improving the generalizability of the models. Keywords: Computer-aided Diagnosis, Conventional Radiography, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms, Localization. Supplemental material is available for this article. © RSNA, 2022.

