{Reference Type}: Journal Article {Title}: Deep learning modeling using mammography images for predicting estrogen receptor status in breast cancer. {Author}: Duan W;Wu Z;Zhu H;Zhu Z;Liu X;Shu Y;Zhu X;Wu J;Peng D; {Journal}: Am J Transl Res {Volume}: 16 {Issue}: 6 {Year}: 2024 {Factor}: 3.94 {DOI}: 10.62347/PUHR6185 {Abstract}: BACKGROUND: The estrogen receptor (ER) is a pivotal indicator for assessing endocrine therapy efficacy and breast cancer prognosis. Invasive biopsy is the conventional approach for evaluating ER expression, but because it samples only part of the lesion, it can misrepresent ER status in heterogeneous tumors. To address this issue, a deep learning model leveraging mammography images was developed in this study for accurate evaluation of ER status in patients with breast cancer.
OBJECTIVE: To predict ER status in breast cancer patients using a newly developed deep learning model based on mammography images.
METHODS: Preoperative mammography images, ER expression levels, and clinical data spanning October 2016 to October 2021 were retrospectively collected from 358 patients diagnosed with invasive ductal carcinoma and divided into a training dataset (n = 257) and a testing dataset (n = 101). A deep learning prediction model, referred to as the IP-SE-DResNet model, was then developed using two deep residual networks combined with the Squeeze-and-Excitation attention mechanism. The model was designed to predict ER status from mammography images in both the craniocaudal and mediolateral oblique views. Prediction accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) were used to assess model performance.
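The paper does not release code, but the METHODS description (two residual branches, one per mammographic view, fused with Squeeze-and-Excitation attention) maps onto a standard dual-branch design. The following is a minimal, illustrative PyTorch sketch of that idea; the class names (SEBlock, DualViewSEResNet), the ResNet-18 backbones, the fusion point, and the input sizes are assumptions for illustration, not the authors' actual IP-SE-DResNet implementation.

```python
# Hypothetical sketch of a dual-view SE-ResNet; NOT the authors' IP-SE-DResNet.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels using global context."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average pool
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # excitation: per-channel gates in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # rescale feature maps channel-wise

class DualViewSEResNet(nn.Module):
    """Two residual branches (CC and MLO views) fused for binary ER prediction."""
    def __init__(self):
        super().__init__()
        # One ResNet-18 backbone per view, classifier head removed (assumed depth).
        self.cc_branch = nn.Sequential(*list(resnet18(weights=None).children())[:-2])
        self.mlo_branch = nn.Sequential(*list(resnet18(weights=None).children())[:-2])
        self.se = SEBlock(channels=1024)  # SE attention over concatenated features
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(1024, 1),  # single logit: ER-positive vs ER-negative
        )

    def forward(self, cc_img, mlo_img):
        feats = torch.cat([self.cc_branch(cc_img), self.mlo_branch(mlo_img)], dim=1)
        return self.head(self.se(feats))

model = DualViewSEResNet()
# Grayscale mammograms would be replicated to 3 channels for the backbone.
cc = torch.randn(2, 3, 224, 224)   # craniocaudal-view batch
mlo = torch.randn(2, 3, 224, 224)  # mediolateral-oblique-view batch
print(model(cc, mlo).shape)        # torch.Size([2, 1])
```

Fusing the two views before the classifier head is one plausible way to realize the paper's reported gain from combining craniocaudal and mediolateral oblique images over either view alone.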
RESULTS: In the training dataset, the AUCs for the IP-SE-DResNet model were 0.849 (95% CI: 0.809-0.868) for the craniocaudal view, 0.858 (95% CI: 0.813-0.872) for the mediolateral oblique view, and 0.895 (95% CI: 0.866-0.913) for the combined images from both views. The corresponding AUCs in the testing dataset were 0.835 (95% CI: 0.790-0.887), 0.746 (95% CI: 0.793-0.889), and 0.886 (95% CI: 0.809-0.934), respectively. Across all performance measurements, the proposed IP-SE-DResNet model substantially outperformed a traditional radiomics model based on a naive Bayes classifier, whose AUCs on the combined craniocaudal and mediolateral oblique images reached only 0.614 (95% CI: 0.594-0.638) in the training dataset and 0.613 (95% CI: 0.587-0.654) in the testing dataset.
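The abstract does not state how the 95% confidence intervals on the AUCs were obtained; a percentile bootstrap is a common choice for this. The sketch below shows that generic procedure with scikit-learn; the function name auc_with_ci, the bootstrap settings, and the toy data are illustrative assumptions, not the paper's procedure.

```python
# Generic bootstrap 95% CI for AUC; an assumed method, not the paper's.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Point-estimate AUC plus a percentile-bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        if len(np.unique(y_true[idx])) < 2:              # AUC needs both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), (lo, hi)

# Toy usage with random labels and scores:
y = np.random.default_rng(1).integers(0, 2, 100)
s = np.random.default_rng(2).random(100)
auc, (lo, hi) = auc_with_ci(y, s)
print(f"AUC = {auc:.3f} (95% CI: {lo:.3f}-{hi:.3f})")
```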
CONCLUSIONS: The proposed IP-SE-DResNet model offers a powerful, non-invasive approach for predicting ER status in breast cancer patients and could improve the efficiency and diagnostic precision of radiologists.