automated breast ultrasound (ABUS)

  • Article type: Journal Article
    Objective. Deep learning algorithms have demonstrated impressive performance by leveraging large labeled data. However, acquiring pixel-level annotations for medical image analysis, especially in segmentation tasks, is both costly and time-consuming, posing challenges for supervised learning techniques. Existing semi-supervised methods tend to underutilize representations of unlabeled data and handle labeled and unlabeled data separately, neglecting their interdependencies. Approach. To address this issue, we introduce the Data-Augmented Attention-Decoupled Contrastive model (DADC). This model incorporates an attention decoupling module and utilizes contrastive learning to effectively distinguish foreground and background, significantly improving segmentation accuracy. Our approach integrates an augmentation technique that merges information from both labeled and unlabeled data, notably boosting network performance, especially in scenarios with limited labeled data. Main results. We conducted comprehensive experiments on the automated breast ultrasound (ABUS) dataset and the results demonstrate that DADC outperforms existing segmentation methods in terms of segmentation performance.
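    As a rough illustration of the foreground/background contrastive idea described above, the snippet below pools decoder features inside and outside a (pseudo-)mask, pushes the two pooled embeddings apart, and aligns foreground embeddings across the batch. This is a minimal sketch under my own assumptions (PyTorch, a single soft mask, cosine similarities), not the authors' DADC implementation.

```python
import torch
import torch.nn.functional as F

def fg_bg_contrastive_loss(feats, mask):
    """Illustrative foreground/background contrastive term.
    feats: (B, C, H, W) decoder features; mask: (B, 1, H, W) soft foreground mask."""
    mask = F.interpolate(mask, size=feats.shape[2:], mode="bilinear", align_corners=False)
    fg = (feats * mask).sum(dim=(2, 3)) / mask.sum(dim=(2, 3)).clamp(min=1e-6)          # (B, C)
    bg = (feats * (1 - mask)).sum(dim=(2, 3)) / (1 - mask).sum(dim=(2, 3)).clamp(min=1e-6)
    fg, bg = F.normalize(fg, dim=1), F.normalize(bg, dim=1)
    separate = (fg * bg).sum(dim=1).mean()            # fg vs. bg similarity: minimise
    b = feats.size(0)
    if b > 1:
        sim = fg @ fg.t()
        align = sim[~torch.eye(b, dtype=torch.bool, device=feats.device)].mean()
    else:
        align = torch.zeros((), device=feats.device)  # fg-fg alignment needs a batch
    return separate - align

loss = fg_bg_contrastive_loss(torch.randn(4, 32, 64, 64), torch.rand(4, 1, 128, 128))
```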

  • Article type: Journal Article
    Accurate segmentation of tumor regions in automated breast ultrasound (ABUS) images is of paramount importance in computer-aided diagnosis systems. However, the inherent diversity of tumors and the imaging interference pose great challenges to ABUS tumor segmentation. In this paper, we propose a global and local feature interaction model combined with graph fusion (GLGM) for 3D ABUS tumor segmentation. In GLGM, we construct a dual-branch encoder-decoder, where both local and global features can be extracted. Besides, a global and local feature fusion module is designed, which employs the deepest semantic interaction to facilitate information exchange between local and global features. Additionally, to improve the segmentation performance for small tumors, a graph convolution-based shallow feature fusion module is designed. It exploits the shallow feature to enhance the feature expression of small tumors in both local and global domains. The proposed method is evaluated on a private ABUS dataset and a public ABUS dataset. For the private ABUS dataset, small tumors (volume smaller than 1 cm³) account for over 50% of the entire dataset. Experimental results show that the proposed GLGM model outperforms several state-of-the-art segmentation models in 3D ABUS tumor segmentation, particularly in segmenting small tumors.
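    The abstract does not spell out the fusion modules internally; the sketch below shows one plausible way the deepest bottleneck semantics could gate a local and a global 3D branch before summation. The class name, equal channel counts and gating scheme are illustrative assumptions, not the paper's actual fusion design.

```python
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    """Hypothetical fusion block: the deepest bottleneck feature is pooled into
    channel-wise gates that weight the local- and global-branch features."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, local_feat, global_feat, deepest_feat):
        # All inputs are (B, C, D, H, W); deepest_feat may be spatially smaller.
        g_local, g_global = self.gate(deepest_feat).chunk(2, dim=1)   # (B, C, 1, 1, 1) each
        return self.proj(g_local * local_feat + g_global * global_feat)

fuse = GlobalLocalFusion(channels=64)
out = fuse(torch.randn(1, 64, 16, 32, 32),   # local branch
           torch.randn(1, 64, 16, 32, 32),   # global branch
           torch.randn(1, 64, 4, 8, 8))      # deepest bottleneck
```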

  • Article type: Systematic Review
    OBJECTIVE: To compare the diagnostic performance of automated breast ultrasound (ABUS) and contrast-enhanced ultrasound (CEUS) in breast cancer.
    METHODS: Published studies were collected by systematically searching the PubMed, Embase, Cochrane Library and Web of Science databases. Sensitivities, specificities, likelihood ratios and diagnostic odds ratios (DOR) were calculated. The summary receiver operating characteristic (SROC) curve was used to assess the threshold effect of ABUS and CEUS. Fagan's nomogram was drawn. Meta-regression and subgroup analyses were applied to search for sources of heterogeneity among the included studies.
    RESULTS: A total of 16 studies were included, comprising 4115 participants. The combined sensitivity of ABUS was 0.88 [95% CI (0.73-0.95)], specificity was 0.93 [95% CI (0.82-0.97)], area under the SROC curve (AUC) was 0.96 [95% CI (0.94-0.96)] and DOR was 89. The combined sensitivity of CEUS was 0.88 [95% CI (0.84-0.91)], specificity was 0.76 [95% CI (0.66-0.84)], AUC was 0.89 [95% CI (0.86-0.92)] and DOR was 24. The Deeks' funnel plot showed no evidence of publication bias. Prospective design, partial verification bias and blinding contributed to the heterogeneity in specificity, while no sources contributed to the heterogeneity in sensitivity. The post-test probability of ABUS in breast cancer was 75%, and the post-test probability of CEUS in breast cancer was 48%.
    CONCLUSIONS: Compared with CEUS, ABUS showed higher specificity and DOR for detecting breast cancer. ABUS is expected to further improve the accuracy of breast cancer diagnosis.
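    For reference, the snippet below shows how the likelihood ratios, diagnostic odds ratio and Fagan-style post-test probability follow from sensitivity and specificity. It is only a back-of-the-envelope check: the review's pooled values come from a bivariate meta-analysis, and the 20% pre-test probability used here is my assumption (it is not stated in the abstract), chosen because it approximately reproduces the reported 75% and 48% post-test probabilities.

```python
def diagnostic_summary(sens, spec, pretest_prob):
    """Standard relations between sensitivity/specificity, likelihood ratios,
    the diagnostic odds ratio (DOR) and the post-test probability after a
    positive result (the arithmetic behind a Fagan nomogram)."""
    lr_pos = sens / (1 - spec)                      # positive likelihood ratio
    lr_neg = (1 - sens) / spec                      # negative likelihood ratio
    dor = lr_pos / lr_neg                           # diagnostic odds ratio
    pre_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pre_odds * lr_pos
    return {"LR+": lr_pos, "LR-": lr_neg, "DOR": dor,
            "post_test_prob": post_odds / (1 + post_odds)}

# Assumed 20% pre-test probability (not given in the abstract):
print(diagnostic_summary(0.88, 0.93, 0.20))   # ABUS: post-test probability ~0.76
print(diagnostic_summary(0.88, 0.76, 0.20))   # CEUS: post-test probability ~0.48
```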

  • Article type: Journal Article
    OBJECTIVE: Preoperative evaluation of axillary lymph node (ALN) status is an essential part of deciding the appropriate treatment. According to ACOSOG Z0011 trials, the new goal of the ALN status evaluation is tumor burden (low burden, < 3 positive ALNs; high burden, ≥ 3 positive ALNs), instead of metastasis or non-metastasis. We aimed to develop a radiomics nomogram integrating clinicopathologic features, ABUS imaging features and radiomics features from ABUS for predicting ALN tumor burden in early breast cancer.
    METHODS: A total of 310 patients with breast cancer were enrolled. The radiomics score was generated from the ABUS images. Multivariate logistic regression analysis was used to develop the prediction model; we incorporated the radiomics score, ABUS imaging features and clinicopathologic features, and presented the model as a radiomics nomogram. In addition, we separately constructed an ABUS model to analyze the performance of ABUS imaging features in predicting ALN tumor burden. The performance of the models was assessed through discrimination, calibration curves, and decision curves.
    RESULTS: The radiomics score, which consisted of 13 selected features, showed moderate discriminative ability (AUC 0.794 and 0.789 in the training and test sets). The ABUS model, comprising diameter, hyperechoic halo, and retraction phenomenon, showed moderate predictive ability (AUC 0.772 and 0.736 in the training and test sets). The ABUS radiomics nomogram, integrating the radiomics score with retraction phenomenon and US-reported ALN status, showed accurate agreement between predicted ALN tumor burden and pathological verification (AUC 0.876 and 0.851 in the training and test sets). The decision curves showed that the ABUS radiomics nomogram was clinically useful and outperformed the US-reported ALN status assessed by experienced radiologists.
    CONCLUSIONS: The ABUS radiomics nomogram, with non-invasive, individualized and precise assessment, may assist clinicians to determine the optimal treatment strategy and avoid overtreatment.
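    A minimal sketch of the general radiomics-nomogram workflow described above, written with scikit-learn on random placeholder data. The feature count, L1 penalty and variable names are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_radiomics = rng.normal(size=(310, 120))      # 310 patients x 120 candidate features
retraction  = rng.integers(0, 2, size=310)     # retraction phenomenon (0/1)
us_aln      = rng.integers(0, 2, size=310)     # US-reported ALN status (0/1)
y           = rng.integers(0, 2, size=310)     # high (>=3) vs. low (<3) ALN burden

# Step 1: sparse (L1) logistic model reduces the features to a single radiomics score.
lasso_lr = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_radiomics, y)
rad_score = lasso_lr.decision_function(X_radiomics)

# Step 2: multivariable logistic model that a nomogram would visualise.
X_nomogram = np.column_stack([rad_score, retraction, us_aln])
nomogram = LogisticRegression().fit(X_nomogram, y)
print(nomogram.predict_proba(X_nomogram)[:5, 1])   # predicted probability of high burden
```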

  • Article type: Journal Article
    Breast cancer mortality can be significantly reduced by early detection of its symptoms. The 3-D Automated Breast Ultrasound (ABUS) has been widely used for breast screening due to its high sensitivity and reproducibility. The large number of ABUS slices, and high variation in size and shape of the masses, make the manual evaluation a challenging and time-consuming process. To assist the radiologists, we propose a convolutional BiLSTM network to classify the slices based on the presence of a mass. Because of its patch-based architecture, this model produces the approximate location of masses as a heat map. The prepared dataset consists of 60 volumes belonging to 43 patients. The precision, recall, accuracy, F1-score, and AUC of the proposed model for slice classification were 84%, 84%, 93%, 84%, and 97%, respectively. Based on the FROC analysis, the proposed detector obtained a sensitivity of 82% with two false positives per volume.
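    A rough PyTorch sketch of the patch-based convolutional BiLSTM idea: a small CNN encodes each patch of a slice, a bidirectional LSTM scans the patch sequence, and the per-patch scores double as a coarse heat map. Layer sizes, patch handling and the class name are my own simplifications, not the published architecture.

```python
import torch
import torch.nn as nn

class ConvBiLSTMSliceClassifier(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(                       # per-patch CNN encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.bilstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, patches):                             # patches: (B, N, 1, H, W)
        b, n = patches.shape[:2]
        f = self.encoder(patches.flatten(0, 1)).flatten(1)  # (B*N, 32)
        f, _ = self.bilstm(f.view(b, n, -1))                # (B, N, 2*hidden)
        heat = self.head(f).squeeze(-1)                     # per-patch scores -> heat map
        return heat, heat.max(dim=1).values                 # heat map, slice-level logit

model = ConvBiLSTMSliceClassifier()
heat, slice_logit = model(torch.randn(2, 64, 1, 32, 32))    # 64 patches of 32x32 per slice
```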

  • Article type: Journal Article
    OBJECTIVE: Automated breast ultrasound (ABUS) imaging technology has been widely used in clinical diagnosis. Accurate lesion segmentation in ABUS images is essential in computer-aided diagnosis (CAD) systems. Although deep learning-based approaches have been widely employed in medical image analysis, the large variety of lesions and the imaging interference make ABUS lesion segmentation challenging.
    METHODS: In this paper, we propose a novel deepest semantically guided multi-scale feature fusion network (DSGMFFN) for lesion segmentation in 2D ABUS slices. In order to cope with the large variety of lesions, a deepest semantically guided decoder (DSGNet) and a multi-scale feature fusion model (MFFM) are designed, where the deepest semantics is fully utilized to guide the decoding and feature fusion. That is, the deepest information is given the highest weight in the feature fusion process, and participates in every decoding stage. Aiming at the challenge of imaging interference, a novel mixed attention mechanism is developed, integrating spatial self-attention and channel self-attention to obtain the correlation among pixels and channels to highlight the lesion region.
    RESULTS: The proposed DSGMFFN is evaluated on 3742 slices of 170 ABUS volumes. The experimental result indicates that DSGMFFN achieves 84.54% and 73.24% in Dice similarity coefficient (DSC) and intersection over union (IoU), respectively.
    CONCLUSIONS: The proposed method shows better performance than the state-of-the-art methods in ABUS lesion segmentation. Incorrect segmentation caused by lesion variety and imaging interference in ABUS images can be alleviated.
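    A compact sketch of what a combined spatial and channel self-attention block can look like, written from the abstract's description; the exact DSGMFFN formulation is not given there, so the projections and scaling below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MixedAttention(nn.Module):
    """Spatial self-attention (pixel-to-pixel) plus channel self-attention
    (channel-to-channel), added back onto the input feature map."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, x):                                    # x: (B, C, H, W)
        b, c, h, w = x.shape
        q, k, v = (proj(x).flatten(2) for proj in (self.q, self.k, self.v))      # (B, C, H*W)
        attn_sp = torch.softmax(q.transpose(1, 2) @ k / c ** 0.5, dim=-1)        # (B, HW, HW)
        out_sp = (v @ attn_sp.transpose(1, 2)).view(b, c, h, w)
        attn_ch = torch.softmax(q @ k.transpose(1, 2) / (h * w) ** 0.5, dim=-1)  # (B, C, C)
        out_ch = (attn_ch @ v).view(b, c, h, w)
        return x + out_sp + out_ch

out = MixedAttention(32)(torch.randn(1, 32, 24, 24))         # note: O((H*W)^2) memory
```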

  • Article type: Journal Article
    The purpose of the present study was to evaluate the value of full-field digital mammography (FFDM) and automated breast ultrasound (ABUS) in the diagnosis of breast cancer compared to FFDM associated with digital breast tomosynthesis (DBT). Methods: This retrospective study included 50 female patients with a denser framework of connective tissue fibers, characteristic of young women who underwent FFDM, DBT, handheld ultrasound (HHUS), and ABUS between January 2017 and October 2018. The sensitivity (Se), specificity (Sp), positive predictive value (PPV), negative predictive value (NPV), and accuracy of FFDM+ABUS were 81.82% (95% CI [48.22-97.72]), 89.74% (95% CI [75.78-97.13]), 69.23% (95% CI [46.05-85.57]), 94.59% (95% CI [83.26-98.40]), and 88% (95% CI [75.69-95.47]), while for FFDM+DBT, the values were as follows: 91.67% (95% CI [61.52-99.79]), 71.79% (95% CI [55.13-85.00]), 50% (95% CI [37.08-62.92]), 96.55% (95% CI [80.93-99.46]), 76.47% (95% CI [62.51-87.21]). We found an almost perfect agreement between the two readers regarding FFDM associated with ABUS, and substantial agreement regarding FFDM+DBT, with a kappa coefficient of 0.896 and 0.8, respectively; p < 0.001. Conclusions: ABUS and DBT are suitable as additional diagnostic imaging techniques to FFDM in women at an intermediate risk of developing breast cancer through the presence of dense breast tissue. In this study, DBT reduced the number of false negative results, while the use of ABUS resulted in an increase in specificity.
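    For reference, the formulas behind the figures quoted above. The confusion-matrix counts in the example are inferred so that a 50-patient cohort reproduces the FFDM+ABUS percentages (an illustration, not counts reported by the study), and the kappa function is the standard two-rater formula for binary readings.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

def cohens_kappa(a, b):
    """Inter-reader agreement for two binary rating lists a and b (0/1)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n                              # observed agreement
    pe = (sum(a) / n) * (sum(b) / n) + (1 - sum(a) / n) * (1 - sum(b) / n)  # chance agreement
    return (po - pe) / (1 - pe)

# Inferred example: 11 cancers among 50 patients reproduces the FFDM+ABUS row above.
print(diagnostic_metrics(tp=9, fp=4, fn=2, tn=35))
```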

  • Article type: Journal Article
    OBJECTIVE: Breast cancer is the most common cancer and the leading cause of cancer-related deaths for women all over the world. Recently, automated breast ultrasound (ABUS) has become a new and promising screening modality for whole-breast examination. However, reviewing volumetric ABUS is time-consuming and lesions can be missed during the examination. Therefore, computer-aided cancer detection in ABUS volumes is highly desirable to help clinicians with breast cancer screening.
    METHODS: We develop a novel end-to-end 3D convolutional network for automated cancer detection in ABUS volumes, in order to accelerate reviewing while providing high detection sensitivity with low false positives (FPs). Specifically, an efficient 3D Inception U-Net-style architecture with a fusion deep supervision mechanism is proposed to attain decent detection performance. In addition, a novel asymmetric loss is designed to help the network balance false positive and false negative regions, thus improving detection sensitivity for small cancerous lesions.
    RESULTS: The efficacy of our network was extensively validated on a dataset including 196 patients with 661 cancer regions. Our network obtained a detection sensitivity of 95.1% with 3.0 FPs per ABUS volume. Furthermore, the average inference time of the network was 0.1 second per volume, which greatly shortens the conventional reviewing time.
    CONCLUSIONS: The proposed network provides an efficient and accurate cancer detection scheme using ABUS volumes, and may assist clinicians in more efficient breast cancer screening.
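    The abstract does not specify the asymmetric loss; the Tversky-style loss below is one common way of weighting false negatives more heavily than false positives and is given only as an illustrative stand-in (the alpha/beta values are arbitrary).

```python
import torch

def asymmetric_tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """pred: sigmoid probabilities, target: binary mask, both (B, 1, D, H, W).
    With beta > alpha, false negatives cost more than false positives, which
    favours sensitivity to small lesions."""
    dims = (1, 2, 3, 4)
    tp = (pred * target).sum(dims)
    fp = (pred * (1 - target)).sum(dims)
    fn = ((1 - pred) * target).sum(dims)
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1 - tversky).mean()

loss = asymmetric_tversky_loss(torch.rand(2, 1, 32, 64, 64),
                               torch.randint(0, 2, (2, 1, 32, 64, 64)).float())
```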

  • Article type: Journal Article
    Accurate breast mass segmentation of automated breast ultrasound (ABUS) is a great help to breast cancer diagnosis and treatment. However, the lack of clear boundary and significant variation in mass shapes make the automatic segmentation very challenging. In this paper, a novel automatic tumor segmentation method SC-FCN-BLSTM is proposed by incorporating bi-directional long short-term memory (BLSTM) and spatial-channel attention (SC-attention) module into fully convolutional network (FCN). In order to decrease performance degradation caused by ambiguous boundaries and varying tumor sizes, an SC-attention module is designed to integrate both finer-grained spatial information and rich semantic information. Since ABUS is three-dimensional data, utilizing inter-slice context can improve segmentation performance. A BLSTM module with SC-attention is constructed to model the correlation between slices, which employs inter-slice context to assist segmentation for false positive elimination. The proposed method is verified on our private ABUS dataset of 124 patients with 170 volumes, including 3636 2D labeled slices. The Dice similarity coefficient (DSC), Recall, Precision and Hausdorff distance (HD) of the proposed method are 0.8178, 0.8067, 0.8292 and 11.1367. Experimental results demonstrate that the proposed method offered improved segmentation results compared with existing deep learning-based methods.
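    A simplified sketch of the inter-slice context idea: per-pixel features from neighbouring 2D slices are treated as a sequence along the slice axis and smoothed by a bidirectional LSTM. The module below is my own illustration and omits the SC-attention part; it is not the published SC-FCN-BLSTM.

```python
import torch
import torch.nn as nn

class InterSliceBLSTM(nn.Module):
    def __init__(self, channels, hidden=64):
        super().__init__()
        self.blstm = nn.LSTM(channels, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, slice_feats):              # (B, S, C, H, W): S adjacent slices
        b, s, c, h, w = slice_feats.shape
        seq = slice_feats.permute(0, 3, 4, 1, 2).reshape(b * h * w, s, c)
        ctx, _ = self.blstm(seq)                 # (B*H*W, S, 2*hidden): context along slices
        logits = self.out(ctx).reshape(b, h, w, s).permute(0, 3, 1, 2)
        return logits                            # (B, S, H, W) per-pixel tumour logits

logits = InterSliceBLSTM(channels=32)(torch.randn(1, 5, 32, 16, 16))
```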

  • Article type: Journal Article
    OBJECTIVE: Conventional manual ultrasound scanning and human diagnostic approaches for the breast are considered operator-dependent, relatively slow and error-prone. In this study, we used an Automated Breast Ultrasound (ABUS) machine for the scanning, and deep convolutional neural network (CNN) technology, a kind of Deep Learning (DL) algorithm, for the detection and classification of breast nodules, aiming to achieve automatic and accurate diagnosis of breast nodules.
    METHODS: Two hundred and ninety-three lesions from 194 patients with definite pathological diagnosis results (117 benign and 176 malignant) were recruited as the case group. Another 70 patients without breast diseases were enrolled as the control group. All breast scans were carried out by an ABUS machine and then randomly divided into training, verification and test sets with a proportion of 7:1:2. In the training set, we constructed a detection model with a three-dimensional U-shaped convolutional neural network (3D U-Net) architecture to segment the nodules from the background breast images. Processes such as residual blocks, attention connections, and hard mining were used to optimize the model, while random cropping, flipping and rotation were used for data augmentation. In the test phase, the current model was compared with those in previously reported studies. In the verification set, the detection effectiveness of the detection model was evaluated. In the classification phase, multiple convolutional layers and fully-connected layers were applied to set up a classification model, aiming to identify whether the nodule was malignant.
    RESULTS: Our detection model yielded a sensitivity of 91% with 1.92 false positives per automatically scanned image. The classification model achieved a sensitivity of 87.0%, a specificity of 88.0% and an accuracy of 87.5%.
    CONCLUSIONS: Deep CNN combined with ABUS may be a promising tool for the easy detection and accurate diagnosis of breast nodules.
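    A generic numpy version of the random cropping, flipping and rotation augmentation mentioned above; the crop size and probabilities are placeholder choices, as the study's exact augmentation parameters are not given.

```python
import numpy as np

def augment_volume(vol, crop=(64, 64, 64), rng=None):
    """vol: 3D ndarray (D, H, W). Returns a randomly cropped, flipped and rotated copy."""
    if rng is None:
        rng = np.random.default_rng()
    cd, ch, cw = crop
    z, y, x = (rng.integers(0, dim - c + 1) for dim, c in zip(vol.shape, crop))
    patch = vol[z:z + cd, y:y + ch, x:x + cw]            # random crop
    for axis in range(3):                                # random flips along each axis
        if rng.random() < 0.5:
            patch = np.flip(patch, axis=axis)
    k = int(rng.integers(0, 4))                          # random 90-degree in-plane rotation
    return np.ascontiguousarray(np.rot90(patch, k=k, axes=(1, 2)))

aug = augment_volume(np.zeros((128, 128, 128), dtype=np.float32))
```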