Ultrasound images

  • Article type: Journal Article
    BACKGROUND: It is challenging to accurately distinguish atypical endometrial hyperplasia (AEH) from endometrial cancer (EC) on routine transvaginal ultrasound (TVU) examination. Our research aims to use a few-shot learning (FSL) method to identify non-atypical endometrial hyperplasia (NAEH), AEH, and EC from limited TVU images.
    METHODS: The TVU images of pathologically confirmed NAEH, AEH, and EC patients (n = 33 per class) were split into a support set (SS, n = 3 per class) and a query set (QS, n = 30 per class). Next, we used a dual-pretrained ResNet50V2, pretrained first on ImageNet and then on additionally collected TVU images, to extract 1×64 feature vectors from the TVU images in the SS and QS. Then, the Euclidean distance was calculated between each TVU image in the QS and each of the nine TVU images of the SS. Finally, the k-nearest neighbor (KNN) algorithm was used to diagnose the TVU images in the QS.
    RESULTS: The overall accuracy and macro precision of the proposed FSL model on the QS were 0.878 and 0.882, respectively, superior to automated machine learning models, a traditional ResNet50V2 model, a junior sonographer, and a senior sonographer. When identifying EC, the proposed FSL model achieved the highest precision of 0.964, the highest recall of 0.900, and the highest F1-score of 0.931.
    CONCLUSIONS: The proposed FSL model, combining the dual-pretrained ResNet50V2 feature extractor with a KNN classifier, performed well in identifying NAEH, AEH, and EC patients from limited TVU images, showing potential for application in computer-aided disease diagnosis.
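    The pipeline the abstract describes (one embedding per image, Euclidean distances to the nine support embeddings, KNN vote) can be sketched as follows. The random 64-dimensional vectors below are stand-ins for the dual-pretrained ResNet50V2 features; only the class names are taken from the abstract:

    ```python
    import numpy as np

    def knn_few_shot(support_vecs, support_labels, query_vec, k=3):
        """Classify a query embedding by k-nearest-neighbor vote over
        Euclidean distances to the support-set embeddings."""
        dists = np.linalg.norm(support_vecs - query_vec, axis=1)
        nearest = np.argsort(dists)[:k]
        votes = [support_labels[i] for i in nearest]
        # majority vote among the k nearest support embeddings
        return max(set(votes), key=votes.count)

    # Toy support set: 3 classes x 3 embeddings each (stand-ins for the
    # 1x64 vectors extracted by the dual-pretrained backbone)
    rng = np.random.default_rng(0)
    centers = {"NAEH": 0.0, "AEH": 5.0, "EC": 10.0}
    support_vecs = np.vstack([rng.normal(c, 0.5, size=(3, 64))
                              for c in centers.values()])
    support_labels = [name for name in centers for _ in range(3)]

    query = rng.normal(5.0, 0.5, size=64)  # lies near the "AEH" center
    print(knn_few_shot(support_vecs, support_labels, query))  # AEH
    ```

    With only three support images per class, the classifier has no trainable parameters beyond the feature extractor, which is the point of the few-shot setup.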

  • Article type: Journal Article
    Deep learning has been widely used in ultrasound image analysis, and it also benefits kidney ultrasound interpretation and diagnosis. However, the importance of ultrasound image resolution often goes overlooked within deep learning methodologies. In this study, we integrate the ultrasound image resolution into a convolutional neural network and explore the effect of the resolution on diagnosis of kidney tumors. In the process of integrating the image resolution information, we propose two different approaches to narrow the semantic gap between the features extracted by the neural network and the resolution features. In the first approach, the resolution is directly concatenated with the features extracted by the neural network. In the second approach, the features extracted by the neural network are first dimensionally reduced and then combined with the resolution features to form new composite features. We compare these two approaches incorporating the resolution with the method without incorporating the resolution on a kidney tumor dataset of 926 images consisting of 211 images of benign kidney tumors and 715 images of malignant kidney tumors. The area under the receiver operating characteristic curve (AUC) of the method without incorporating the resolution is 0.8665, and the AUCs of the two approaches incorporating the resolution are 0.8926 (P < 0.0001) and 0.9135 (P < 0.0001) respectively. This study has established end-to-end kidney tumor classification systems and has demonstrated the benefits of integrating image resolution, showing that incorporating image resolution into neural networks can more accurately distinguish between malignant and benign kidney tumors in ultrasound images.
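    The two fusion approaches can be sketched in a few lines, assuming the resolution is available as one scalar per image and using a fixed random projection as a stand-in for the paper's learned dimensionality reduction:

    ```python
    import numpy as np

    def concat_resolution(cnn_features, resolution):
        """Approach 1: append the resolution directly to the CNN features."""
        return np.concatenate([cnn_features, [resolution]])

    def reduce_then_combine(cnn_features, resolution, proj):
        """Approach 2: reduce the CNN features first (a fixed random
        projection stands in for the learned reduction), then combine
        with the resolution to form a composite feature."""
        reduced = proj @ cnn_features
        return np.concatenate([reduced, [resolution]])

    rng = np.random.default_rng(1)
    feats = rng.normal(size=512)        # stand-in for pooled CNN features
    proj = rng.normal(size=(32, 512))   # stand-in for a learned projection

    print(concat_resolution(feats, 0.2).shape)       # (513,)
    print(reduce_then_combine(feats, 0.2, proj).shape)  # (33,)
    ```

    The second approach narrows the semantic gap by shrinking the image features before the single resolution value joins them, so the scalar is not drowned out by hundreds of image dimensions.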

  • Article type: Journal Article
    Ultrasound images are susceptible to various forms of quality degradation that negatively impact diagnosis. Common degradations include speckle noise, Gaussian noise, salt-and-pepper noise, and blurring. This research proposes an ultrasound image denoising strategy that first detects the noise type so that a suitable denoising method can be applied to each corruption. The technique relies on convolutional neural networks to categorize the type of noise affecting an input ultrasound image. Pre-trained convolutional neural network models, including GoogleNet, VGG-19, AlexNet, and an AlexNet-support vector machine (SVM) hybrid, are developed and trained to perform this classification. A dataset of 782 numerically generated ultrasound images across different diseases and noise types is used for model training and evaluation. Results show that AlexNet-SVM achieves the highest accuracy of 99.2% in classifying noise types. The top-performing model is then applied to real ultrasound images with different noise corruptions to demonstrate the efficacy of the proposed detect-then-denoise system. RESEARCH HIGHLIGHTS: Proposes an ultrasound image denoising strategy based on detecting the noise type first. Uses pre-trained convolutional neural networks to categorize the noise type in input images. Evaluates GoogleNet, VGG-19, AlexNet, and AlexNet-SVM models on a dataset of 782 synthetic ultrasound images. AlexNet-SVM achieves the highest accuracy of 99.2% in classifying noise types. Demonstrates the efficacy of the proposed detect-then-denoise system on real ultrasound images.
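    The detect-then-denoise idea amounts to a dispatch on the classifier's output. The SciPy filters below are illustrative stand-ins, not the denoisers the paper pairs with each noise type:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter, gaussian_filter

    def denoise(image, noise_type):
        """Dispatch to a denoiser suited to the detected noise type.
        The filter choices are common defaults, not the paper's methods."""
        if noise_type == "salt_and_pepper":
            return median_filter(image, size=3)   # impulse noise: median
        if noise_type == "gaussian":
            return gaussian_filter(image, sigma=1.0)
        if noise_type == "speckle":
            # speckle is multiplicative; smoothing in the log domain is a
            # simple common approximation
            return np.expm1(gaussian_filter(np.log1p(image), sigma=1.0))
        return image  # unknown type: pass through unchanged

    rng = np.random.default_rng(2)
    clean = np.full((64, 64), 0.5)
    noisy = clean + rng.normal(0, 0.1, clean.shape)
    out = denoise(noisy, "gaussian")
    print(np.std(out) < np.std(noisy))  # smoothing reduces variance: True
    ```

    In the full system, `noise_type` would come from the CNN classifier rather than being passed in by hand.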

  • Article type: Journal Article
    BACKGROUND: Collision tumors are neoplasms comprising two histologically distinct tumors that coexist in the same mass without histological admixture. Their incidence is low, and they are clinically rare.
    OBJECTIVE: To investigate ultrasound images and application of ovarian-adnexal reporting and data system (O-RADS) to evaluate the risk and pathological characteristics of ovarian collision tumor.
    METHODS: This study retrospectively analyzed 17 cases of ovarian collision tumor diagnosed pathologically from January 2020 to December 2023. All clinical features, ultrasound images and histopathological features were collected and analyzed. The O-RADS score was used for classification. The O-RADS score was determined by two senior doctors in the gynecological ultrasound group. Lesions with O-RADS score of 1-3 were classified as benign tumors, and lesions with O-RADS score of 4 or 5 were classified as malignant tumors.
    RESULTS: There were 17 collision tumors detected in 16 of 6274 patients who underwent gynecological surgery. The average age of the 17 women with ovarian collision tumors was 36.7 years (range 20-68 years); in one patient the tumor occurred bilaterally, and in the rest unilaterally. The average tumor diameter was 10 cm: three tumors were 2-5 cm, 11 were 5-10 cm, and three were > 10 cm. Five (29.4%) tumors with an O-RADS score of 3 were endometriotic cysts with fibroma/serous cystadenoma, unilocular or multilocular cysts containing a small number of parenchymal components. Eleven (64.7%) tumors had an O-RADS score of 4, including two in category 4A, six in category 4B, and three in category 4C; all were multilocular cystic tumors with solid components or multiple papillary components. One (5.9%) tumor had an O-RADS score of 5; this case was a solid mass, and a small amount of pelvic effusion was detected on ultrasound. The pathology was high-grade serous cystic cancer combined with cystic mature teratoma. There were nine (52.9%) tumors with elevated serum carbohydrate antigen (CA)125 and two (11.8%) with elevated serum CA19-9. Histological and pathological results showed that epithelial-cell-derived tumors combined with other tumors were the most common, which differs from previous reports.
    CONCLUSIONS: Ultrasound images of ovarian collision tumors show certain characteristic features, but preoperative ultrasound diagnosis remains difficult. The combination of epithelial and mesenchymal cell tumors is one of the most common types of ovarian collision tumor. The O-RADS score of ovarian collision tumors is mostly ≥ 4, which can sensitively detect malignant tumors.
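    The study's O-RADS decision rule (scores 1-3 classified benign, 4-5 classified malignant) is simple enough to express directly; the case counts below echo the distribution reported above:

    ```python
    def orads_risk(score):
        """Map an O-RADS score to the study's benign/malignant split:
        scores 1-3 are classified benign, 4-5 malignant."""
        if score not in (1, 2, 3, 4, 5):
            raise ValueError("O-RADS score must be 1-5")
        return "benign" if score <= 3 else "malignant"

    # distribution reported in the study: five score-3 tumors,
    # eleven score-4 tumors, one score-5 tumor
    cases = [3] * 5 + [4] * 11 + [5]
    malignant = sum(orads_risk(s) == "malignant" for s in cases)
    print(malignant, "/", len(cases))  # 12 / 17
    ```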

  • Article type: Journal Article
    Objective. Automated detection and segmentation of breast masses in ultrasound images are critical for breast cancer diagnosis but remain challenging due to limited image quality and complex breast tissues. This study aims to develop a deep learning-based method that enables accurate breast mass detection and segmentation in ultrasound images. Approach. A novel convolutional neural network-based framework combining the You Only Look Once (YOLO) v5 network and the Global-Local (GOLO) strategy was developed. First, YOLOv5 was applied to locate the mass regions of interest (ROIs). Second, a Global Local-Connected Multi-Scale Selection (GOLO-CMSS) network was developed to segment the masses. GOLO-CMSS operated both on the entire images globally and on the mass ROIs locally, then integrated the two branches for the final segmentation output. In particular, in the global branch, CMSS applied Multi-Scale Selection (MSS) modules to automatically adjust the receptive fields, and Multi-Input (MLI) modules to enable fusion of shallow and deep features at different resolutions. The USTC dataset containing 28,477 breast ultrasound images was collected for training and testing. The proposed method was also tested on three public datasets: UDIAT, BUSI, and TUH. The segmentation performance of GOLO-CMSS was compared with that of other networks and three experienced radiologists. Main results. YOLOv5 outperformed other detection models, with average precisions of 99.41%, 95.15%, 93.69%, and 96.42% on the USTC, UDIAT, BUSI, and TUH datasets, respectively. The proposed GOLO-CMSS showed segmentation performance superior to other state-of-the-art networks, with Dice similarity coefficients (DSCs) of 93.19%, 88.56%, 87.58%, and 90.37% on the USTC, UDIAT, BUSI, and TUH datasets, respectively. The mean DSC between GOLO-CMSS and each radiologist was significantly better than that between radiologists (p < 0.001). Significance. The proposed method can accurately detect and segment breast masses with performance comparable to that of radiologists, highlighting its great potential for clinical implementation in breast ultrasound examination.
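    The Dice similarity coefficient (DSC) used to report segmentation quality above has a one-line definition for binary masks; the toy masks here are purely illustrative:

    ```python
    import numpy as np

    def dice(pred, truth, eps=1e-8):
        """Dice similarity coefficient between two binary masks:
        2*|A ∩ B| / (|A| + |B|)."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        return 2.0 * inter / (pred.sum() + truth.sum() + eps)

    a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1  # 16-pixel square mask
    b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1  # same square, shifted
    print(round(dice(a, b), 3))  # overlap is 9 px, so 2*9/32 = 0.562
    ```

    A DSC of 1.0 means perfect overlap and 0.0 no overlap, which is why the reported scores around 0.87-0.93 indicate close agreement with the reference masks.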

  • Article type: Journal Article
    The aim of this study is to propose a new diagnostic model based on "segmentation + classification" that improves routine thyroid nodule ultrasonography screening by exploiting key domain knowledge of medical diagnostic tasks. A multi-scale segmentation network based on a pyramid pooling structure with multiple parallel atrous branches is proposed. First, in the segmentation network, precise information about the underlying feature space is obtained via an Attention Gate. Second, the dilated-convolution part of Atrous Spatial Pyramid Pooling (ASPP) is cascaded for multiple downsampling steps. Finally, a three-branch classification network incorporating expert knowledge is designed, drawing on doctors' clinical diagnostic experience, to extract features from the original image of the nodule, the regional image of the nodule, and the edge image of the nodule, respectively, and to improve the classification accuracy of the model through a Coordinate Attention (CA) mechanism and cross-level feature fusion. The multi-scale segmentation network achieves a mean pixel accuracy (mPA) of 94.27%, a Dice value of 93.90%, and a mean intersection over union (MIoU) of 88.85%, while the accuracy, specificity, and sensitivity of the classification network reach 86.07%, 81.34%, and 90.19%, respectively. Comparison tests show that this method outperforms the classical U-Net, AGU-Net, and DeepLab V3+ models as well as the more recent nnU-Net, Swin UNETR, and MedFormer models. As an auxiliary diagnostic tool, this algorithm can help physicians more accurately assess whether thyroid nodules are benign or malignant. It provides objective quantitative indicators, reduces the bias of subjective judgment, and improves the consistency and accuracy of diagnosis. Code and models are available at https://github.com/enheliang/Thyroid-Segmentation-Network.git.
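    The reported accuracy, specificity, and sensitivity follow the standard confusion-matrix definitions. The counts below are hypothetical, chosen only to illustrate the calculation, not taken from the paper:

    ```python
    def diagnostic_metrics(tp, fp, tn, fn):
        """Accuracy, sensitivity (true-positive rate) and specificity
        (true-negative rate) from confusion-matrix counts."""
        accuracy = (tp + tn) / (tp + fp + tn + fn)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return accuracy, sensitivity, specificity

    # hypothetical counts on a 200-case test set
    acc, sens, spec = diagnostic_metrics(tp=90, fp=19, tn=81, fn=10)
    print(f"acc={acc:.3f} sens={sens:.3f} spec={spec:.3f}")
    # acc=0.855 sens=0.900 spec=0.810
    ```

    Note that sensitivity and specificity trade off against each other as the decision threshold moves, which is why papers report both alongside overall accuracy.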

  • Article type: Journal Article
    BACKGROUND: Breast cancer is the most common cancer among women, and ultrasound is a common tool for early screening. Nowadays, deep learning techniques are applied as auxiliary tools that provide predictive results to help doctors decide whether to pursue further examinations or treatments. This study aimed to develop a hybrid learning approach for breast ultrasound classification by extracting more potential features from local and multi-center ultrasound data.
    METHODS: We proposed a hybrid learning approach to classify breast tumors as benign or malignant. Three multi-center datasets (BUSI, BUS, OASBUD) were used to pretrain a model by federated learning, and the model was then fine-tuned locally on each dataset. The proposed model consisted of a convolutional neural network (CNN) and a graph neural network (GNN), aiming to extract features from images at the spatial level and from graphs at the geometric level. The input images are small and require no pixel-level labels, and the input graphs are generated automatically in an unsupervised manner, which saves labeling labor and memory.
    RESULTS: The classification AUROC of our proposed method is 0.911, 0.871, and 0.767 for BUSI, BUS, and OASBUD, respectively. The balanced accuracy is 87.6%, 85.2%, and 61.4%, respectively. The results show that our method outperforms conventional methods.
    CONCLUSIONS: Our hybrid approach can learn features shared across multi-center data as well as features specific to local data. It shows potential for aiding doctors in early-stage breast tumor classification on ultrasound.
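    A minimal sketch of one federated-averaging round across centers, assuming the standard FedAvg weighting by sample count; the center names and sizes below are stand-ins, not figures from the paper:

    ```python
    import numpy as np

    def fedavg(center_weights, center_sizes):
        """One round of federated averaging: each center's parameter
        arrays are weighted by its share of the total sample count."""
        total = sum(center_sizes)
        return [
            sum(w[i] * (n / total) for w, n in zip(center_weights, center_sizes))
            for i in range(len(center_weights[0]))
        ]

    # toy "models": one parameter array per center
    # (BUSI, BUS, OASBUD used as placeholder names)
    w_busi = [np.full(4, 1.0)]
    w_bus = [np.full(4, 2.0)]
    w_oasbud = [np.full(4, 4.0)]
    avg = fedavg([w_busi, w_bus, w_oasbud], center_sizes=[600, 200, 200])
    print(avg[0])  # 0.6*1 + 0.2*2 + 0.2*4 = 1.8 in every position
    ```

    After pretraining with rounds like this, each center fine-tunes the shared model on its own data, which is the local adaptation step the abstract describes.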

  • Article type: Journal Article
    One of the most dangerous conditions in clinical practice is breast cancer, because it affects women's entire lives. Nevertheless, existing techniques for diagnosing breast cancer are complicated, expensive, and inaccurate. Many trans-disciplinary and computerized systems have recently been created to prevent human error in both quantification and diagnosis. Ultrasonography is a crucial imaging technique for cancer detection. Therefore, it is essential to develop a system that enables the healthcare sector to detect breast cancer rapidly and effectively. Because of its strength in identifying crucial features in complicated breast cancer datasets, machine learning is widely employed in the categorization of breast cancer patterns, although the performance of machine learning models is limited in the absence of a successful feature enhancement strategy, and several issues with traditional breast cancer detection methods remain to be handled. Thus, a novel breast cancer detection model is designed based on machine learning approaches and ultrasound images. First, the ultrasound images used for analysis are acquired from benchmark resources and passed to a preprocessing phase, where filtering and contrast enhancement are applied to obtain the preprocessed images. The preprocessed images are then segmented using Fuzzy C-Means, active contour, and watershed algorithms. Next, the segmented images are passed to a pixel-selection phase, where pixels are selected by the developed hybrid Conglomerated Aphid with Galactic Swarm Optimization (CAGSO) model to obtain the final segmented pixels. The selected pixels are then fed into a feature extraction phase to obtain shape and texture features. Further, the acquired features are passed to an optimal weighted feature selection phase, with the weights also tuned by the developed CAGSO. Finally, the optimal weighted features are passed to the breast cancer detection phase. Throughout the experimental analysis, the developed breast cancer detection model achieved a higher performance rate than classical approaches.
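    As an illustrative stand-in for the Fuzzy C-Means stage named above, here is a minimal 1-D fuzzy C-means iteration on synthetic pixel intensities (the initialization and data are assumptions, not the paper's setup):

    ```python
    import numpy as np

    def fuzzy_cmeans(x, c=2, m=2.0, iters=50):
        """Minimal fuzzy C-means on 1-D intensities. Memberships are
        soft: u[i, k] is the degree to which sample k belongs to
        cluster i, and centers are membership-weighted means."""
        centers = np.linspace(x.min(), x.max(), c)  # deterministic init
        u = None
        for _ in range(iters):
            d = np.abs(x[None, :] - centers[:, None]) + 1e-9  # (c, n)
            inv = d ** (-2.0 / (m - 1.0))
            u = inv / inv.sum(axis=0)          # soft memberships
            um = u ** m
            centers = (um @ x) / um.sum(axis=1)  # weighted cluster means
        return centers, u

    # two intensity clusters: dark background vs bright lesion pixels
    rng = np.random.default_rng(3)
    x = np.concatenate([rng.normal(0.2, 0.02, 200),
                        rng.normal(0.8, 0.02, 200)])
    centers, u = fuzzy_cmeans(x)
    print(np.sort(centers))  # approximately [0.2, 0.8]
    ```

    On 2-D images the same update runs over flattened pixel intensities, after which thresholding the memberships yields a segmentation mask.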

  • Article type: Journal Article
    Congenital heart defects (CHD) are among the serious problems that arise during pregnancy. Early CHD detection reduces death rates and morbidity but is hampered by the relatively low detection rates (about 60%) of current screening technology. The detection rate could be increased by supplementing ultrasound imaging with fetal ultrasound image evaluation (FUSI) using deep learning techniques. As a result, non-invasive fetal ultrasound images have clear potential in the diagnosis of CHD and should be considered in addition to fetal echocardiography. This review paper highlights cutting-edge technologies for detecting CHD using ultrasound images, covering pre-processing, localization, segmentation, and classification. Existing preprocessing techniques include spatial-domain filters, non-linear mean filters, transform-domain filters, and denoising methods based on convolutional neural networks (CNNs); segmentation techniques include thresholding-based techniques, region-growing-based techniques, edge detection techniques, artificial neural network (ANN)-based segmentation methods, and other non-deep-learning and deep learning approaches. The paper also suggests future research directions for improving current methodologies.

  • Article type: Journal Article
    BACKGROUND: Thyroid nodules are commonly identified through ultrasound imaging, which plays a crucial role in the early detection of malignancy. The diagnostic accuracy, however, is significantly influenced by the expertise of radiologists, the quality of equipment, and image acquisition techniques. This variability underscores the critical need for computational tools that support diagnosis.
    METHODS: This retrospective study evaluates an artificial intelligence (AI)-driven system for thyroid nodule assessment, integrating clinical practices from multiple prominent Thai medical centers. We included patients who underwent thyroid ultrasonography complemented by ultrasound-guided fine needle aspiration (FNA) between January 2015 and March 2021. Participants formed a consecutive series, enhancing the study's validity. A comparative analysis was conducted between the diagnostic performance of the AI model and that of both an experienced radiologist and a third-year radiology resident, using a dataset of 600 ultrasound images from three distinguished Thai medical institutions, each verified with cytological findings.
    RESULTS: The AI system demonstrated superior diagnostic performance, with an overall sensitivity of 80% [95% confidence interval (CI): 59.3-93.2%] and specificity of 71.4% (95% CI: 53.7-85.4%). At Siriraj Hospital, the AI achieved a sensitivity of 90.0% (95% CI: 55.5-99.8%), specificity of 100.0% (95% CI: 69.2-100%), positive predictive value (PPV) of 100.0%, negative predictive value (NPV) of 90.9%, and an overall accuracy of 95.0%, indicating the benefits of the AI's extensive training across diverse datasets. The experienced radiologist's sensitivity was 40.0% (95% CI: 21.1-61.3%) and specificity was 80.0% (95% CI: 63.6-91.6%), showing that the AI significantly outperformed the radiologist in sensitivity (P=0.043) while maintaining comparable specificity. The inter-observer variability analysis indicated moderate agreement (K=0.53) between the radiologist and the resident, in contrast with fair agreement (K=0.37/0.33) when each was compared with the AI system. Notably, the 95% CIs for these diagnostic indexes highlight the AI system's consistent performance across different settings.
    CONCLUSIONS: The findings advocate for the integration of AI into clinical settings to enhance the diagnostic accuracy of radiologists in assessing thyroid nodules. The AI system, designed as a supportive tool rather than a replacement, promises to revolutionize thyroid nodule diagnosis and management by providing a high level of diagnostic precision.
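    The inter-observer agreement figures (K=0.53, K=0.37/0.33) are Cohen's kappa values, computable from a rater-agreement table. The 2×2 table below is hypothetical, chosen only to show the calculation:

    ```python
    import numpy as np

    def cohens_kappa(confusion):
        """Cohen's kappa from a square agreement matrix between two
        raters: (p_o - p_e) / (1 - p_e), where p_o is observed agreement
        and p_e is the agreement expected by chance from the marginals."""
        confusion = np.asarray(confusion, dtype=float)
        n = confusion.sum()
        p_o = np.trace(confusion) / n
        p_e = (confusion.sum(axis=0) @ confusion.sum(axis=1)) / n ** 2
        return (p_o - p_e) / (1.0 - p_e)

    # hypothetical benign/malignant calls by two readers on 100 nodules:
    # rows = reader A, columns = reader B
    table = [[40, 10],
             [10, 40]]
    print(round(cohens_kappa(table), 2))  # 0.6
    ```

    By the usual interpretation bands, values around 0.4-0.6 indicate moderate agreement and 0.2-0.4 fair agreement, matching how the abstract labels its kappa values.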