Keywords: Breast cancer; Classification; Feature fusion; Radiomics; Transfer learning

MeSH: Humans; Female; Radiomics; Retrospective Studies; Ultrasonography, Mammary; Machine Learning; Breast Neoplasms / diagnostic imaging

Source: DOI: 10.1016/j.medengphy.2024.104117

Abstract:
This study aims to establish an effective benign and malignant classification model for breast tumor ultrasound images by using conventional radiomics and transfer learning features. We collaborated with a local hospital and collected a base dataset (Dataset A) consisting of 1050 cases of single-lesion 2D ultrasound images from patients, with a total of 593 benign and 357 malignant tumor cases. The experimental approach comprises three main parts: conventional radiomics, transfer learning, and feature fusion. Furthermore, we assessed the model's generalizability by utilizing multicenter data obtained from Datasets B and C. The results from conventional radiomics indicated that the SVM classifier achieved the highest balanced accuracy of 0.791, while XGBoost obtained the highest AUC of 0.854. For transfer learning, we extracted deep features from ResNet50, Inception-v3, DenseNet121, MNASNet, and MobileNet. Among these models, MNASNet, with 640-dimensional deep features, yielded the optimal performance, with a balanced accuracy of 0.866, AUC of 0.937, sensitivity of 0.819, and specificity of 0.913. In the feature fusion phase, we trained SVM, ExtraTrees, XGBoost, and LightGBM with early fusion features and evaluated them with weighted voting. This approach achieved the highest balanced accuracy of 0.964 and AUC of 0.981. Combining conventional radiomics and transfer learning features demonstrated clear advantages over using individual features for breast tumor ultrasound image classification. This automated diagnostic model can ease patient burden and provide additional diagnostic support to radiologists. The performance of this model encourages future prospective research in this domain.