Breast ultrasound

  • Article type: Journal Article
    BACKGROUND: Accurate classification of breast nodules into benign and malignant types is critical for the successful treatment of breast cancer. Traditional methods rely on subjective interpretation, which can lead to diagnostic errors. Artificial intelligence (AI)-based methods using quantitative morphological analysis of ultrasound images have been explored for the automated and reliable classification of breast cancer. This study aimed to investigate the effectiveness of AI-based approaches for improving diagnostic accuracy and patient outcomes.
    METHODS: In this study, a quantitative analysis approach was adopted, focusing on key features for evaluation: degree of boundary regularity, clarity of boundaries, echo intensity, and uniformity of echoes. The classification results were then assessed using five machine learning methods: logistic regression (LR), support vector machine (SVM), decision tree (DT), naive Bayes, and K-nearest neighbor (KNN). Based on these assessments, a multifeature combined prediction model was established.
    RESULTS: We evaluated the performance of our classification model by quantifying various features of the ultrasound images and using the area under the receiver operating characteristic (ROC) curve (AUC). The moment of inertia achieved an AUC of 0.793, while the variance and mean of breast nodule areas achieved AUCs of 0.725 and 0.772, respectively. Convexity and concavity achieved AUCs of 0.988 and 0.987, respectively. Additionally, a joint analysis of multiple features after normalization achieved a recall of 0.98, surpassing most medical evaluation indexes in current use. To ensure experimental rigor, we conducted cross-validation experiments, which yielded no significant differences among the classifiers under 5-, 8-, and 10-fold cross-validation (P>0.05).
    CONCLUSIONS: The quantitative analysis can accurately differentiate between benign and malignant breast nodules.
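    To make the evaluation pipeline concrete, here is a minimal sketch (not the authors' code) of per-feature AUC scoring and k-fold comparison of the five named classifier families with scikit-learn; the feature matrix is synthetic stand-in data.

```python
# Minimal sketch: per-feature AUC and k-fold comparison of the five
# classifiers named in the abstract. Features here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # placeholder morphological features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Single-feature AUC, as reported for convexity, concavity, etc.
for i in range(X.shape[1]):
    print(f"feature {i}: AUC = {roc_auc_score(y, X[:, i]):.3f}")

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(),
    "NB": GaussianNB(),
    "KNN": KNeighborsClassifier(),
}
for k in (5, 8, 10):                    # the fold counts used in the study
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=k, scoring="roc_auc")
        print(f"{k}-fold {name}: mean AUC = {scores.mean():.3f}")
```
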
  • Article type: Journal Article
    Automatic breast ultrasound image segmentation plays an important role in medical image processing. However, current methods for breast ultrasound segmentation suffer from high computational complexity and large model parameters, particularly when dealing with complex images. In this paper, we take the Unext network as a basis and utilize its encoder-decoder features. Taking inspiration from the mechanisms of cellular apoptosis and division, we design apoptosis and division algorithms to improve model performance. We propose a novel segmentation model that integrates the division and apoptosis algorithms and introduces spatial and channel convolution blocks into the model. Our proposed model not only improves the segmentation performance for breast ultrasound tumors but also reduces model parameters and computation time. The model was evaluated on a public breast ultrasound image dataset and our collected dataset. The experiments show that the SC-Unext model achieved a Dice score of 75.29% and accuracy of 97.09% on the BUSI dataset; on the collected dataset, it reached a Dice score of 90.62% and accuracy of 98.37%. Meanwhile, we compared the model's inference speed on CPUs to verify its efficiency in resource-constrained environments. The results indicated that the SC-Unext model achieved an inference speed of 92.72 ms per instance on devices equipped only with CPUs. The model's parameter count and computational cost are 1.46M and 2.13 GFlops, respectively, lower than those of other network models. Due to its lightweight nature, the model holds significant value for various practical applications in the medical field.
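    As a point of reference, the reported Dice score and per-instance CPU latency can be measured as in the following sketch; it assumes a generic PyTorch model and is not the SC-Unext implementation.

```python
# Minimal sketch: Dice score on binarized masks and average CPU latency
# per instance for any torch.nn.Module segmentation model.
import time
import torch

def dice_score(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice = 2|A∩B| / (|A|+|B|) on binarized masks."""
    pred = (pred > 0.5).float()
    target = (target > 0.5).float()
    inter = (pred * target).sum()
    return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))

@torch.no_grad()
def cpu_latency_ms(model: torch.nn.Module, shape=(1, 1, 256, 256), runs: int = 50) -> float:
    """Average forward-pass time per instance on CPU, in milliseconds."""
    model = model.eval().to("cpu")
    x = torch.randn(shape)
    model(x)                              # warm-up pass
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    return (time.perf_counter() - start) / runs * 1000.0
```
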
  • Article type: Journal Article
    Deep learning (DL) models in breast ultrasound (BUS) image analysis face challenges with data imbalance and limited atypical tumor samples. Generative adversarial networks (GANs) address these challenges by providing efficient data augmentation for small datasets. However, current GAN approaches fail to capture the structural features of BUS, so the generated images lack structural legitimacy and are unrealistic. Furthermore, generated images require manual annotation for different downstream tasks before they can be used. We therefore propose a two-stage GAN framework, 2s-BUSGAN, for generating annotated BUS images. It consists of a Mask Generation Stage (MGS) and an Image Generation Stage (IGS), generating benign and malignant BUS images with corresponding tumor contours. Moreover, we employ a feature-matching loss (FML) to enhance the quality of generated images and utilize a differential augmentation module (DAM) to improve GAN performance on small datasets. We conducted experiments on two datasets, BUSI and a collected dataset. The results indicate that the quality of generated images is improved compared with traditional GAN methods. Additionally, our generated images were evaluated by ultrasound experts, demonstrating that they could deceive doctors. A comparative evaluation showed that our method also outperforms traditional GAN methods when applied to training segmentation and classification models. Our method achieved classification accuracies of 69% and 85.7% on the two datasets, respectively, about 3% and 2% higher than those of the traditional augmentation model. The segmentation model trained on the 2s-BUSGAN-augmented datasets achieved Dice scores of 75% and 73% on the two datasets, respectively, higher than those of traditional augmentation methods. Our research tackles the challenges of imbalanced and limited BUS image data. Our 2s-BUSGAN augmentation method holds potential for enhancing deep learning model performance in the field.
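    A minimal PyTorch sketch of a feature-matching loss of the kind the abstract names (FML) is shown below; matching per-layer mean discriminator features with an L1 distance is an assumption, not necessarily 2s-BUSGAN's exact formulation.

```python
# Minimal sketch: feature-matching loss over intermediate discriminator
# feature maps. `disc_features` is an assumed callable returning a list of
# feature maps, one per selected discriminator layer.
import torch
import torch.nn.functional as F

def feature_matching_loss(disc_features, real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    """L1 distance between mean discriminator features of real and generated
    images, summed over the selected intermediate layers."""
    real_feats = disc_features(real)
    fake_feats = disc_features(fake)
    loss = real.new_zeros(())
    for fr, ff in zip(real_feats, fake_feats):
        # match per-layer feature statistics rather than raw pixels
        loss = loss + F.l1_loss(ff.mean(dim=0), fr.detach().mean(dim=0))
    return loss
```
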
  • Article type: Journal Article
    The quality of breast ultrasound images has a significant impact on the accuracy of disease diagnosis. Existing image quality assessment (IQA) methods usually use pixel-level feature statistics or end-to-end deep learning, focusing on global image quality while ignoring the quality of the lesion region. In clinical practice, however, doctors' evaluation of ultrasound image quality relies more on the local area of the lesion, which determines the diagnostic value of the image. In this study, a global-local integrated IQA framework for breast ultrasound images was proposed to learn doctors' clinical evaluation standards. A total of 1285 breast ultrasound images were collected and scored by experienced doctors. After being classified as images with or without lesions, they were evaluated using soft-reference IQA or bilinear CNN IQA, respectively. Experiments showed that for ultrasound images with lesions, our proposed soft-reference IQA achieved a PLCC of 0.8418 against doctors' annotations, while an existing end-to-end deep learning method that did not consider local lesion features achieved a PLCC of only 0.6606. Owing to the accuracy improvement on images with lesions, our proposed global-local integrated IQA framework outperformed the existing end-to-end deep learning method on the IQA task, with the PLCC improving from 0.8306 to 0.8851.
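    The PLCC values quoted above are Pearson linear correlation coefficients between predicted quality scores and the doctors' annotations; a minimal sketch:

```python
# Minimal sketch: PLCC between model-predicted quality scores and
# doctor-annotated scores.
from scipy.stats import pearsonr

def plcc(predicted_scores, doctor_scores) -> float:
    """Pearson linear correlation coefficient between two score lists."""
    r, _p_value = pearsonr(predicted_scores, doctor_scores)
    return r

# e.g. plcc(model_scores, annotations) -> 0.8418 for lesion images in the study
```
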
  • Article type: Journal Article
    BACKGROUND: Deep learning technology has been widely applied to medical image analysis. However, due to the limitations of its imaging principle, ultrasound suffers from low resolution and high speckle noise density, which not only hinder the diagnosis of patients' conditions but also affect the extraction of ultrasound image features by computer methods.
    OBJECTIVE: In this study, we investigate the robustness of deep convolutional neural networks (CNNs) for classification, segmentation, and target detection of breast ultrasound images under random salt-and-pepper noise and Gaussian noise.
    METHODS: We trained and validated 9 CNN architectures on 8617 breast ultrasound images but tested the models with a noisy test set. We then trained and validated the same 9 CNN architectures with different levels of noise in these breast ultrasound images and again tested the models with a noisy test set. The disease in each breast ultrasound image in our dataset was annotated and voted on by three sonographers based on malignancy suspiciousness. We used evaluation indexes to assess the robustness of each neural network algorithm.
    RESULTS: There is a moderate to high impact on model accuracy (a decrease of about 5%-40%) when salt-and-pepper noise, speckle noise, or Gaussian noise is introduced into the images individually. Based on the selected indexes, DenseNet, UNet++, and Yolov5 were identified as the most robust models. When any two of these three kinds of noise are introduced into the image at the same time, model accuracy is greatly affected.
    CONCLUSIONS: Our experimental results reveal new insights: the variation of accuracy with noise level in each network used for classification and object detection tasks has some unique characteristics. This finding provides a method to probe the black-box architecture of computer-aided diagnosis (CAD) systems. Moreover, this study explores the impact of adding noise directly to the images on neural network performance, which differs from existing work on robustness in medical image processing. Consequently, it provides a new way to evaluate the robustness of CAD systems in the future.
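    A minimal NumPy sketch of the three noise types used in the study, with assumed parameterizations (the paper's exact noise levels are not reproduced here):

```python
# Minimal sketch: injecting the three noise types into a grayscale image
# scaled to [0, 1]. Noise amounts are illustrative defaults.
import numpy as np

def salt_and_pepper(img: np.ndarray, amount: float = 0.05, rng=None) -> np.ndarray:
    rng = rng or np.random.default_rng()
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < amount / 2] = 0.0          # pepper
    out[mask > 1 - amount / 2] = 1.0      # salt
    return out

def gaussian(img: np.ndarray, sigma: float = 0.05, rng=None) -> np.ndarray:
    rng = rng or np.random.default_rng()
    return np.clip(img + rng.normal(scale=sigma, size=img.shape), 0.0, 1.0)

def speckle(img: np.ndarray, sigma: float = 0.1, rng=None) -> np.ndarray:
    rng = rng or np.random.default_rng()
    # multiplicative noise, the model usually used for ultrasound speckle
    return np.clip(img * (1.0 + rng.normal(scale=sigma, size=img.shape)), 0.0, 1.0)
```
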
  • Article type: Journal Article
    BACKGROUND: The artificial intelligence breast ultrasound diagnostic system (AIBUS) has been introduced as an alternative to handheld ultrasound (HHUS), but their results in BI-RADS categorization have not been compared.
    METHODS: This pilot study was based on a screening program conducted from May 2020 to October 2020 in southeast China. All participants who received both HHUS and AIBUS were included in the study (N = 344). The ultrasound videos from AIBUS scanning were watched independently by a senior radiologist and a junior radiologist. The agreement rate and weighted Kappa value were used to compare their BI-RADS categorizations with HHUS.
    RESULTS: The detection rate of breast nodules by HHUS was 14.83%, while the detection rates were 34.01% for AIBUS videos watched by the senior radiologist and 35.76% when watched by the junior radiologist. After AIBUS scanning, the weighted Kappa value for BI-RADS categorization between videos watched by the senior radiologist and HHUS was 0.497 (p < 0.001), with an agreement rate of 78.8%, indicating its potential use in breast cancer screening. However, the Kappa value of AIBUS videos watched by the junior radiologist was 0.39 when compared with HHUS.
    CONCLUSIONS: AIBUS breast scanning can obtain relatively clear images and detect more breast nodules. The results of AIBUS scanning watched by a senior radiologist are moderately consistent with HHUS and might be used in screening practice, especially in primary health care settings with limited numbers of radiologists.
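    A minimal sketch of the two reported statistics, agreement rate and weighted Kappa, for two raters' BI-RADS categories; the linear weighting is an assumption, as the abstract does not state the weighting scheme:

```python
# Minimal sketch: agreement rate and weighted Cohen's Kappa between two
# raters' ordinal BI-RADS categories.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def birads_agreement(rater_a, rater_b):
    rater_a, rater_b = np.asarray(rater_a), np.asarray(rater_b)
    agreement_rate = float((rater_a == rater_b).mean())
    weighted_kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
    return agreement_rate, weighted_kappa
```
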
  • Article type: Journal Article
    BACKGROUND: Breast ultrasound (BUS) imaging is one of the most prevalent approaches for the detection of breast cancers. Tumor segmentation of BUS images can help doctors localize tumors and is a necessary step for computer-aided diagnosis systems. While the majority of clinical BUS scans are normal ones without tumors, segmentation approaches such as U-Net often predict mass regions for these images. This false-positive problem becomes serious if a fully automatic artificial intelligence system is used for routine screening.
    METHODS: In this study, we proposed a novel model that is more suitable for routine BUS screening. The model contains a classification branch that determines whether the image is normal or contains tumors, and a segmentation branch that outlines tumors. The two branches share the same encoder network. We also built a new dataset containing 1600 BUS images from 625 patients for training and a test set of 130 images from 120 patients. The dataset is the largest of its kind with pixel-wise masks manually segmented by experienced radiologists. Our code is available at https://github.com/szhangNJU/BUS_segmentation.
    RESULTS: The area under the receiver operating characteristic curve (AUC) for classifying images into normal/abnormal categories was 0.991. The Dice similarity coefficient (DSC) for segmentation of mass regions was 0.898, better than that of state-of-the-art models. Testing on an external dataset gave similar performance, demonstrating the good transferability of our model. Moreover, we simulated the use of the model in actual clinical practice by processing videos recorded during BUS scans; the model gave very few false-positive predictions on normal images without sacrificing sensitivity for images with tumors.
    CONCLUSIONS: Our model achieved better segmentation performance than state-of-the-art models and showed good transferability on an external test set. The proposed deep learning architecture holds potential for use in fully automatic BUS health screening.
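    A minimal PyTorch sketch of the shared-encoder, two-branch design the abstract describes; the layer sizes are illustrative assumptions, not the paper's architecture:

```python
# Minimal sketch: one encoder feeding both a normal/abnormal classification
# head and a tumor segmentation head.
import torch
import torch.nn as nn

class SharedEncoderNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(               # shared by both branches
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.cls_head = nn.Sequential(              # normal vs. with-tumor
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )
        self.seg_head = nn.Sequential(              # tumor mask
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.cls_head(feats), self.seg_head(feats)
```
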
  • Article type: Journal Article
    BACKGROUND: The upgrade of high-risk breast lesions (HRLs) is closely related to subsequent treatment, but current predictors of upgrade are limited to intratumoral features from a single imaging mode.
    METHODS: We retrospectively reviewed 230 HRLs detected by mammography, ultrasound, and magnetic resonance imaging (MRI) before biopsy at the Fudan University Cancer Hospital from January 2017 to March 2018. Clinical features, imaging data according to the Breast Imaging Reporting and Data System (BI-RADS) lexicon, and tumor upgrade status were collected. Based on the different reported risks of upgrade, the lesions were classified into high-risk I [HR-I, with atypical hyperplasia (AH)] and high-risk II (HR-II, without AH). We analyzed the associations between clinicopathological and imaging factors and upgrade, and used receiver operating characteristic (ROC) curves to compare the efficacy of the three imaging modes for predicting upgrade.
    RESULTS: We included 230 HRLs in 230 women, with an overall upgrade rate of 20.4% (47/230). The upgrade rate was higher in HR-I than in HR-II (38.5% vs. 4.1%, P<0.01). Among patients with AH, estrogen receptor-positive (ER+) patients accounted for 81.0% (64/79). For all HRLs and for HR-I, the clinical characteristics of age, maximum lesion size, and menopausal status were significantly associated with upgrade (P<0.05). Among imaging factors, MRI background parenchymal enhancement (BPE) and MRI and ultrasound signs were significantly correlated with upgrade (P<0.05). Patients with negative MRI or ultrasound manifestations had lower upgrade rates (P<0.01). For HR-II, only BPE showed a significant difference between groups (P=0.001). Multifactorial analysis of all HRLs showed that age and BPE were independent predictors of upgrade (P<0.01). The areas under the ROC curve (AUCs) for predicting upgrade with mammography, ultrasound, and MRI were 0.606, 0.590, and 0.913, respectively, indicating that MRI diagnosis was significantly better than mammography and ultrasound (P<0.001).
    CONCLUSIONS: HRLs with AH had a higher rate of upgrade and increased ER expression. Among the three imaging modes, MRI was more effective than ultrasound and mammography in diagnosing the upgrade of HRLs. Older age and moderate to marked BPE can indicate malignant upgrade. MRI can provide value for the diagnosis and follow-up of HRLs.
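    For illustration only, a minimal sketch (on synthetic data, not the study's) of a multifactor logistic model of upgrade risk from age and BPE grade with its AUC, mirroring the kind of multifactorial analysis reported:

```python
# Minimal sketch: multifactor logistic model of upgrade risk and its AUC.
# All data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
age = rng.normal(50, 10, 300)
bpe = rng.integers(0, 4, 300)            # 0=minimal ... 3=marked
upgrade = (0.05 * (age - 50) + 0.8 * bpe + rng.normal(size=300) > 1.5).astype(int)

X = np.column_stack([age, bpe])
model = LogisticRegression().fit(X, upgrade)
print("AUC:", roc_auc_score(upgrade, model.predict_proba(X)[:, 1]))
```
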
  • Article type: Journal Article
    OBJECTIVE: Breast lesion segmentation is an important step in computer-aided diagnosis systems. However, speckle noise, heterogeneous structure, and similar intensity distributions make breast lesion segmentation challenging.
    METHODS: In this paper, we present a novel cascaded convolutional neural network integrating U-net, a bidirectional attention guidance network (BAGNet), and a refinement residual network (RFNet) for lesion segmentation in breast ultrasound images. Specifically, we first use U-net to generate a set of saliency maps containing low-level and high-level image structures. The bidirectional attention guidance network is then used to capture the context between global (low-level) and local (high-level) features from the saliency maps. The introduction of the global feature map reduces the interference of surrounding tissue on the lesion regions. Furthermore, we developed a refinement residual network, based on the core architecture of U-net, to learn the difference between rough saliency feature maps and ground-truth masks. Learning the residuals helps obtain a more complete lesion mask.
    RESULTS: To evaluate the segmentation performance of the network, we compared it with several state-of-the-art segmentation methods on the public breast ultrasound dataset (BUSIS) using six commonly used evaluation metrics. Our method achieves the highest scores on all six metrics, and p-values indicate significant differences between our method and the comparative methods.
    CONCLUSIONS: Experimental results show that our method achieves the most competitive segmentation results. In addition, we applied the network to renal ultrasound image segmentation. Overall, our method has good adaptability and robustness for ultrasound image segmentation.
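    A minimal PyTorch sketch of the residual-refinement idea: a small network predicts the difference between a rough saliency map and the ground-truth mask, and the refined mask adds that correction back. Layer sizes are assumptions, not RFNet's:

```python
# Minimal sketch: residual refinement of a rough saliency map.
import torch
import torch.nn as nn

class RefineResidual(nn.Module):
    def __init__(self):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, saliency: torch.Tensor) -> torch.Tensor:
        # refined mask = rough saliency + learned residual correction
        return torch.sigmoid(saliency + self.residual(saliency))

# training target: the residual branch learns (ground_truth - saliency)
```
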
  • Article type: Journal Article
    OBJECTIVE: This paper proposes an automatic breast tumor segmentation method for two-dimensional (2D) ultrasound images, which is significantly more accurate, robust, and adaptable than common deep learning models on small datasets.
    APPROACH: A generalized joint training and refined segmentation framework (JR) was established, involving a joint training module (Jmodule) and a refined segmentation module (Rmodule). In the Jmodule, two segmentation networks are trained simultaneously under the guidance of the proposed Jocor for Segmentation (JFS) algorithm. In the Rmodule, the output of the Jmodule is refined by the proposed area-first (AF) algorithm and marked watershed (MW) algorithm. The AF step mainly reduces false positives, which arise easily from the inherent features of breast ultrasound images, based on the area, distance, average radial derivative (ARD), and radial gradient index (RGI) of candidate contours. Meanwhile, the MW step avoids over-segmentation and refines the segmentation results. To verify its performance, the JR framework was evaluated on three breast ultrasound image datasets. Image dataset A contains 1036 images from local hospitals; image datasets B and C are two public datasets containing 562 and 163 images, respectively. The evaluation was followed by related ablation experiments.
    MAIN RESULTS: The JR outperformed the other state-of-the-art (SOTA) methods on the three image datasets, especially on image dataset B, where it improved the true positive ratio (TPR) and Jaccard index (JI) by 1.5% and 3.2%, respectively, and reduced the false positive ratio (FPR) by 3.7%. The results of the ablation experiments show that each component of the JR matters and contributes to the segmentation accuracy, particularly in the reduction of false positives.
    SIGNIFICANCE: This study successfully combines traditional segmentation methods with deep learning models. The proposed method can segment small-scale breast ultrasound image datasets efficiently and effectively, with excellent generalization performance.
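    A minimal sketch (not the JR implementation) of the two post-processing ideas: dropping candidate regions by area and refining a coarse mask with a marker-based watershed, using SciPy and scikit-image:

```python
# Minimal sketch: area filtering of connected components followed by
# marker-based watershed refinement of a coarse binary mask.
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def refine_mask(coarse_mask: np.ndarray, min_area: int = 100) -> np.ndarray:
    # area-first step: drop small connected components (likely false positives)
    labels, n = ndimage.label(coarse_mask)
    sizes = ndimage.sum(coarse_mask, labels, range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_area))

    # marked-watershed step: grow from sure-foreground markers inside the mask
    distance = ndimage.distance_transform_edt(keep)
    markers, _ = ndimage.label(distance > 0.5 * distance.max())
    return watershed(-distance, markers, mask=keep) > 0
```
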