Computer-Aided Diagnosis

  • Article type: Journal Article
    OBJECTIVE: The incidence of facial fractures is on the rise globally, yet limited studies address the diverse forms of facial fractures present in 3D images. In particular, because of the nature of facial fractures, the fracture direction varies and there is no clear outline, so it is difficult to determine the exact location of a fracture in 2D images. Thus, 3D image analysis is required to find the exact fracture area, but it carries heavy computational complexity and requires expensive pixel-wise labeling for supervised learning. In this study, we tackle the problem of reducing the computational burden and increasing the accuracy of fracture localization by using weakly-supervised object localization without pixel-wise labeling in a 3D image space.
    METHODS: We propose a Very Fast, High-Resolution Aggregation 3D Detection CAM (VFHA-CAM) model, which can detect various facial fractures. To better detect tiny fractures, our model uses high-resolution feature maps and employs Ablation CAM to find an exact fracture location without pixel-wise labeling, where we use a rough fracture image detected with 3D box-wise labeling. To this end, we extract important features and use only essential features to reduce the computational complexity in 3D image space.
    RESULTS: Experimental findings demonstrate that VFHA-CAM surpasses state-of-the-art 2D detection methods by up to 20% in sensitivity/person and specificity/person, achieving sensitivity/person and specificity/person scores of 87% and 85%, respectively. In addition, our VFHA-CAM reduces location analysis time to 76 s without performance degradation, compared to a simple Ablation CAM method that takes more than 20 min.
    CONCLUSIONS: This study introduces a novel weakly-supervised object localization approach for bone fracture detection in 3D facial images. The proposed method employs a 3D detection model, which helps detect various forms of facial bone fractures accurately. The CAM algorithm adopted for fracture area segmentation within a 3D fracture detection box is key in quickly informing medical staff of the exact location of a facial bone fracture in a weakly-supervised object localization. In addition, we provide 3D visualization so that even non-experts unfamiliar with 3D CT images can identify the fracture status and location.
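    The abstract does not include code, but the core idea of Ablation CAM that it describes (scoring each feature channel by how much the detection confidence drops when that channel is zeroed out) can be illustrated briefly. The sketch below is a minimal 3D variant in PyTorch; the `score_fn` wrapper and feature shapes are hypothetical, not the authors' implementation.

    ```python
    import torch

    def ablation_cam_3d(features, score_fn):
        """Ablation-CAM over a 3D feature map.

        features: tensor of shape (C, D, H, W) from the detector's last conv layer
        score_fn: callable mapping a (C, D, H, W) feature map to a scalar
                  detection/class confidence (hypothetical wrapper around the head)
        """
        base_score = score_fn(features)
        weights = torch.zeros(features.shape[0])
        for c in range(features.shape[0]):          # ablate one channel at a time
            ablated = features.clone()
            ablated[c] = 0.0
            weights[c] = (base_score - score_fn(ablated)) / (base_score + 1e-8)
        # weighted sum of channels -> coarse 3D localization map
        cam = torch.relu((weights[:, None, None, None] * features).sum(dim=0))
        return cam / (cam.max() + 1e-8)
    ```

    Restricting this per-channel loop to the essential features inside a detected 3D box, as the abstract describes, is what keeps the analysis fast compared with running it over the full volume.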

  • Article type: Journal Article
    Deep learning has been widely used in ultrasound image analysis, and it also benefits kidney ultrasound interpretation and diagnosis. However, the importance of ultrasound image resolution is often overlooked in deep learning methodologies. In this study, we integrate the ultrasound image resolution into a convolutional neural network and explore the effect of resolution on the diagnosis of kidney tumors. To integrate the image resolution information, we propose two different approaches to narrow the semantic gap between the features extracted by the neural network and the resolution features. In the first approach, the resolution is directly concatenated with the features extracted by the neural network. In the second approach, the features extracted by the neural network are first dimensionally reduced and then combined with the resolution features to form new composite features. We compare these two resolution-incorporating approaches with a method that does not incorporate resolution on a kidney tumor dataset of 926 images, consisting of 211 images of benign kidney tumors and 715 images of malignant kidney tumors. The area under the receiver operating characteristic curve (AUC) of the method without the resolution is 0.8665, and the AUCs of the two approaches incorporating the resolution are 0.8926 (P < 0.0001) and 0.9135 (P < 0.0001), respectively. This study establishes end-to-end kidney tumor classification systems and demonstrates the benefit of integrating image resolution, showing that incorporating image resolution into neural networks can more accurately distinguish between malignant and benign kidney tumors in ultrasound images.
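    As a rough illustration of the two integration strategies described above (direct concatenation versus dimensionality reduction before combination), a minimal PyTorch sketch might look like the following; the backbone, feature sizes, and the scalar encoding of resolution are assumptions, not the paper's exact architecture.

    ```python
    import torch
    import torch.nn as nn

    class ResolutionFusionClassifier(nn.Module):
        """Sketch of the two resolution-integration strategies.

        mode="concat": image features are concatenated directly with the
                       resolution feature.
        mode="reduce": image features are first reduced to a low dimension,
                       then combined with the resolution feature.
        """
        def __init__(self, feat_dim=512, mode="concat"):
            super().__init__()
            self.mode = mode
            self.backbone = nn.Sequential(          # stand-in CNN feature extractor
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(16, feat_dim), nn.ReLU(),
            )
            if mode == "reduce":
                self.reduce = nn.Linear(feat_dim, 32)   # narrow the semantic gap
                self.head = nn.Linear(32 + 1, 2)        # benign vs. malignant
            else:
                self.head = nn.Linear(feat_dim + 1, 2)

        def forward(self, image, resolution):
            # resolution: (B, 1) tensor, e.g. physical mm-per-pixel of the scan
            f = self.backbone(image)
            if self.mode == "reduce":
                f = self.reduce(f)
            return self.head(torch.cat([f, resolution], dim=1))
    ```

    In the "reduce" mode, shrinking the image features before appending the single resolution value keeps the one-dimensional resolution feature from being swamped, which is one plausible reading of how the second approach narrows the semantic gap.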

  • Article type: English Abstract
    Objective: To build a VGG-based computer-aided diagnostic model for chronic sinusitis and evaluate its efficacy.
    Methods: ① A total of 5 000 frames of diagnosed sinus CT images were collected. The normal group consisted of 1 000 frames (250 frames each of normal maxillary, frontal, ethmoid, and sphenoid sinus images), and the abnormal group consisted of 4 000 frames (1 000 frames each of maxillary, frontal, ethmoid, and sphenoid sinusitis images); the images were preprocessed with size normalization and segmentation. ② The models were trained and evaluated in simulation experiments, yielding five classification models for the normal, sphenoid sinusitis, frontal sinusitis, ethmoid sinusitis, and maxillary sinusitis groups. The classification efficacy of the models was evaluated objectively in six dimensions: accuracy, precision, sensitivity, specificity, interpretation time, and area under the ROC curve (AUC). ③ Two hundred randomly selected images were read by the model and by three groups of physicians (low, middle, and high seniority) in a comparative experiment. The efficacy of the model was objectively evaluated using the aforementioned evaluation indexes in conjunction with clinical analysis.
    Results: ① Simulation experiment: the overall recognition accuracy of the model was 83.94%, with a precision of 89.52%, sensitivity of 83.94%, and specificity of 95.99%; the average interpretation time was 0.20 s per frame. The AUC was 0.865 (95% CI 0.849-0.881) for sphenoid sinusitis, 0.924 (95% CI 0.911-0.936) for frontal sinusitis, 0.895 (95% CI 0.880-0.909) for ethmoid sinusitis, and 0.974 (95% CI 0.967-0.982) for maxillary sinusitis. ② Comparative experiment: in recognition accuracy, the model scored 84.52%, versus 78.50% for the low-seniority, 80.50% for the middle-seniority, and 83.50% for the high-seniority physician group; in recognition precision, the model scored 85.67%, versus 79.72%, 82.67%, and 83.66%, respectively; in recognition sensitivity, the model scored 84.52%, versus 78.50%, 80.50%, and 83.50%, respectively; in recognition specificity, the model scored 96.58%, versus 94.63%, 95.13%, and 95.88%, respectively. In time consumption, the model averaged 0.20 s per frame, versus 2.35 s, 1.98 s, and 2.19 s per frame for the low-, middle-, and high-seniority physician groups, respectively.
    Conclusion: This study demonstrates the potential of a deep learning-based artificial intelligence diagnostic model to classify and diagnose chronic sinusitis; the model shows good classification performance and high diagnostic efficacy.
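    For reference, the per-class efficacy measures reported above follow directly from a binary confusion matrix; a minimal sketch is shown below (scikit-learn is used here for convenience and is an assumption, not necessarily the authors' tooling).

    ```python
    from sklearn.metrics import confusion_matrix, roc_auc_score

    def sinusitis_metrics(y_true, y_pred, y_score):
        """Per-class metrics of the kind reported above: accuracy, precision,
        sensitivity, specificity, and AUC for one binary (one-vs-rest) task."""
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        return {
            "accuracy":    (tp + tn) / (tp + tn + fp + fn),
            "precision":   tp / (tp + fp),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "auc":         roc_auc_score(y_true, y_score),
        }
    ```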

  • Article type: Journal Article
    Objective. The trend in the medical field is towards intelligent detection-based medical diagnostic systems. However, these methods are often seen as 'black boxes' due to their lack of interpretability. This situation presents challenges in identifying the reasons for misdiagnoses and improving accuracy, and it leads to potential risks of misdiagnosis and delayed treatment. Therefore, enhancing the interpretability of diagnostic models is crucial for improving patient outcomes and reducing treatment delays. So far, only limited research exists on deep learning-based prediction of spontaneous pneumothorax, a pulmonary disease that affects lung ventilation and venous return. Approach. This study develops an integrated medical image analysis system that uses an explainable deep learning model for image recognition and visualization to achieve an interpretable automatic diagnosis process. Main results. The system achieves an impressive 95.56% accuracy in pneumothorax classification, which emphasizes the significance of the blood vessel penetration defect in clinical judgment. Significance. This improves model trustworthiness, reduces uncertainty, and supports accurate diagnosis of various lung diseases, resulting in better medical outcomes for patients and better utilization of medical resources. Future research can focus on implementing new deep learning models to detect and diagnose other lung diseases, enhancing the generalizability of this system.
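    The abstract does not name the visualization technique used. As one common way such explanation heatmaps are produced for a CNN classifier, a generic Grad-CAM sketch is shown below; the model and target layer are placeholders, not the authors' system.

    ```python
    import torch

    def grad_cam(model, image, target_layer, class_idx):
        """Generic Grad-CAM heatmap: weight each feature channel by the mean
        gradient of the class score, then take a ReLU-rectified weighted sum."""
        feats, grads = [], []
        h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
        h2 = target_layer.register_full_backward_hook(
            lambda m, gi, go: grads.append(go[0]))
        score = model(image)[0, class_idx]
        model.zero_grad()
        score.backward()
        h1.remove(); h2.remove()
        w = grads[0].mean(dim=(2, 3), keepdim=True)     # per-channel weights
        cam = torch.relu((w * feats[0]).sum(dim=1))     # (B, H, W)
        return cam / (cam.max() + 1e-8)
    ```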

  • Article type: Journal Article
    Recent advancements in computational intelligence, deep learning, and computer-aided detection have had a significant impact on the field of medical imaging. The task of image segmentation, which involves accurately interpreting and identifying the content of an image, has garnered much attention. The main objective of this task is to separate objects from the background, thereby simplifying and enhancing the significance of the image. However, existing methods for image segmentation have their limitations when applied to certain types of images. This survey paper aims to highlight the importance of image segmentation techniques by providing a thorough examination of their advantages and disadvantages. The accurate detection of cancer regions in medical images is crucial for ensuring effective treatment. In this study, we also provide an extensive analysis of computer-aided diagnosis (CAD) systems for cancer identification, with a focus on recent research advancements. The paper critically assesses various techniques for cancer detection and compares their effectiveness. Convolutional neural networks (CNNs) have attracted particular interest due to their ability to segment and classify medical images in large datasets, thanks to their capacity for self-learning and decision-making.

  • Article type: Journal Article
    Chest X-ray (CXR) imaging is widely employed by radiologists to diagnose thoracic diseases. Recently, many deep learning techniques have been proposed as computer-aided diagnostic (CAD) tools to assist radiologists in minimizing the risk of incorrect diagnosis. From an application perspective, these models exhibit two major challenges: (1) they require large volumes of annotated data at the training stage, and (2) they lack explainable factors to justify their outcomes at the prediction stage. In the present study, we developed a class activation mapping (CAM)-based ensemble model, called Ensemble-CAM, to address both challenges via weakly supervised learning employing explainable AI (XAI) functions. Ensemble-CAM utilizes class labels to predict the location of disease in association with interpretable features. The proposed work leverages ensemble and transfer learning with class activation functions to achieve three objectives: (1) minimizing the dependency on strongly annotated data when locating thoracic diseases, (2) enhancing confidence in predicted outcomes by visualizing their interpretable features, and (3) optimizing cumulative performance via fusion functions. Ensemble-CAM was trained on three CXR image datasets and evaluated qualitatively and quantitatively via heatmaps and Jaccard indices. The results reflect enhanced performance and reliability in comparison to existing standalone and ensemble models.
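    The abstract names two ingredients without detailing them: a fusion function over per-model CAMs and Jaccard-index evaluation. A minimal sketch of both follows; weighted averaging is an assumed stand-in for the paper's actual fusion function.

    ```python
    import numpy as np

    def fuse_cams(cams, weights=None):
        """Fuse per-model CAM heatmaps (each in [0, 1]) into one ensemble map
        by (weighted) averaging."""
        cams = np.stack(cams)                               # (n_models, H, W)
        weights = np.ones(len(cams)) if weights is None else np.asarray(weights)
        fused = np.tensordot(weights / weights.sum(), cams, axes=1)
        return fused / (fused.max() + 1e-8)

    def jaccard_index(cam, mask, threshold=0.5):
        """Jaccard index between a thresholded heatmap and a ground-truth
        mask, the localization measure the study reports."""
        pred = cam >= threshold
        inter = np.logical_and(pred, mask).sum()
        union = np.logical_or(pred, mask).sum()
        return inter / union if union else 0.0
    ```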

  • Article type: Journal Article
    OBJECTIVE: Current automatic electrocardiogram (ECG) diagnostic systems can provide classification results but often lack explanations for them. This limitation hampers their application in clinical diagnosis. Previous supervised learning approaches could not highlight abnormal segments accurately enough for clinical application without manual labeling of large ECG datasets.
    METHODS: In this study, we present a multi-instance learning framework called MA-MIL, which uses a multi-layer, multi-instance structure that aggregates step by step at different scales. We evaluated our method using the public MIT-BIH dataset and our private dataset.
    RESULTS: The results show that our model performed well in both ECG classification and abnormal segment detection at the heartbeat and sub-heartbeat levels, with accuracy and F1 scores of 0.987 and 0.986 for ECG classification and 0.968 and 0.949 for heartbeat-level abnormality detection, respectively. Compared to visualization methods, the IoU values of MA-MIL improved by at least 17% and at most 31% across all categories.
    CONCLUSIONS: MA-MIL could accurately locate the abnormal ECG segment, offering more trustworthy results for clinical application.
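    The abstract describes step-by-step aggregation across scales but not the exact layers. The sketch below shows a generic two-level attention-based MIL pipeline (sub-heartbeat segments aggregated into heartbeat embeddings, then into a record-level prediction), with attention weights serving as per-instance abnormality scores; the encoder and dimensions are illustrative assumptions, not MA-MIL's actual architecture.

    ```python
    import torch
    import torch.nn as nn

    class AttnPool(nn.Module):
        """Attention-based MIL pooling; attention weights double as
        per-instance abnormality scores."""
        def __init__(self, dim):
            super().__init__()
            self.score = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, 1))

        def forward(self, x):                  # x: (n_instances, dim)
            a = torch.softmax(self.score(x), dim=0)
            return (a * x).sum(dim=0), a.squeeze(-1)

    class TwoLevelMIL(nn.Module):
        """Sub-heartbeat segments -> heartbeat embeddings -> record label."""
        def __init__(self, seg_len=32, dim=64, n_classes=2):
            super().__init__()
            self.encoder = nn.Linear(seg_len, dim)
            self.beat_pool = AttnPool(dim)
            self.record_pool = AttnPool(dim)
            self.head = nn.Linear(dim, n_classes)

        def forward(self, beats):              # beats: (n_beats, n_segs, seg_len)
            beat_embs, seg_attn = [], []
            for b in beats:                    # aggregate sub-heartbeat instances
                emb, a = self.beat_pool(self.encoder(b))
                beat_embs.append(emb); seg_attn.append(a)
            record, beat_attn = self.record_pool(torch.stack(beat_embs))
            return self.head(record), beat_attn, seg_attn
    ```

    Returning the attention weights alongside the record-level logits is what lets such a model localize abnormal segments without segment-level labels.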

  • Article type: Journal Article
    Brain magnetic resonance imaging (MRI) scans are available in a wide variety of sequences, view planes, and magnet strengths. A necessary preprocessing step for any automated diagnosis is to identify the MRI sequence, view plane, and magnet strength of the acquired image. Automatic identification of the MRI sequence can be useful in labeling massive online datasets used by data scientists in the design and development of computer-aided diagnosis (CAD) tools. This paper presents a deep learning (DL) approach for brain MRI sequence and view-plane identification using scans of different data types as input. A 12-class classification system is presented for commonly used MRI scans, covering T1-weighted, T2-weighted, proton density (PD), and fluid-attenuated inversion recovery (FLAIR) sequences in axial, coronal, and sagittal view planes. Multiple publicly available online datasets were used to train the system on multiple infrastructures. MobileNet-v2 offers an adequate performance accuracy of 99.76% with unprocessed MRI scans and comparable accuracy with skull-stripped scans, and it has been deployed in a tool for public use. The tool has been tested on unseen data from online and hospital sources with satisfactory performance accuracies of 99.84% and 86.49%, respectively.
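    As an illustration of the classification setup described above, a MobileNet-v2 can be adapted to the 12-way sequence/view-plane task in a few lines of PyTorch; the ImageNet initialization and head replacement are assumptions, not the paper's documented training recipe.

    ```python
    import torch.nn as nn
    from torchvision import models

    def build_sequence_classifier(n_classes=12):
        """MobileNet-v2 fine-tuned for 12 sequence/view-plane labels
        (T1/T2/PD/FLAIR x axial/coronal/sagittal)."""
        model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
        model.classifier[1] = nn.Linear(model.last_channel, n_classes)
        return model
    ```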

  • Article type: Journal Article
    OBJECTIVE: Malocclusion has emerged as a burgeoning global public health concern. Individuals with an anterior crossbite face an elevated risk of exhibiting characteristics such as a concave facial profile, negative overjet, and poor masticatory efficiency. In response to this issue, we proposed a convolutional neural network (CNN)-based model designed for the automated detection and classification of intraoral images and videos.
    METHODS: A total of 1865 intraoral images were included in this study, 1493 (80%) of which were allocated for training and 372 (20%) for testing the CNN. Additionally, we tested the models on 10 videos, spanning a cumulative duration of 124 seconds. To assess the performance of our predictions, metrics including accuracy, sensitivity, specificity, precision, F1-score, area under the precision-recall (AUPR) curve, and area under the receiver operating characteristic (ROC) curve (AUC) were employed.
    RESULTS: The trained model exhibited commendable classification performance, achieving an accuracy of 0.965 and an AUC of 0.986. Moreover, it demonstrated superior specificity (0.992 vs. 0.978 and 0.956, P < 0.05) in comparison to assessments by two orthodontists. Conversely, the CNN model displayed diminished sensitivity (0.89 vs. 0.96 and 0.92, P < 0.05) relative to the orthodontists. Notably, the CNN model accomplished a perfect classification rate, successfully identifying 100% of the videos in the test set.
    CONCLUSIONS: The deep learning (DL) model exhibited remarkable classification accuracy in identifying anterior crossbite through both intraoral images and videos. This proficiency holds the potential to expedite the detection of severe malocclusions, facilitating timely classification for appropriate treatment and, consequently, mitigating the risk of complications.
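    The abstract reports per-video classification but not how frame-level predictions are combined. One plausible aggregation is majority voting over frame probabilities, sketched below; the threshold and voting rule are assumptions, not the authors' method.

    ```python
    import numpy as np

    def classify_video(frame_probs, threshold=0.5):
        """Hypothetical frame-to-video aggregation: threshold each frame's
        crossbite probability, then take a majority vote over frames."""
        votes = (np.asarray(frame_probs) >= threshold).astype(int)
        return int(votes.mean() >= 0.5)          # 1 = anterior crossbite
    ```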

  • Article type: Journal Article
    OBJECTIVE: The effective segmentation of esophageal squamous carcinoma lesions in CT scans is significant for auxiliary diagnosis and treatment. However, accurate lesion segmentation remains a challenging task due to the irregular form and small size of the esophagus, the inconsistency of its spatio-temporal structure, and the low contrast between the esophagus and its surrounding tissues in medical images. The objective of this study is to improve the segmentation of esophageal squamous cell carcinoma lesions.
    METHODS: It is critical for a segmentation network to effectively extract 3D discriminative features to distinguish esophageal cancers from visually similar adjacent tissues and organs. In this work, an efficient HRU-Net architecture (High-Resolution U-Net) was exploited for esophageal carcinoma segmentation in CT slices. Based on the idea of localization first and segmentation later, the HRU-Net locates the esophageal region before segmentation. In addition, a Resolution Fusion Module (RFM) was designed to integrate information from adjacent-resolution feature maps, obtaining strong semantic information while preserving high-resolution features.
    RESULTS: Compared with the other five typical methods, the devised HRU-Net is capable of generating superior segmentation results.
    CONCLUSIONS: Our proposed HRU-Net improves the segmentation accuracy for esophageal squamous cell carcinoma and performs best among the compared models. The designed method may improve the efficiency of clinical diagnosis of esophageal squamous cell carcinoma lesions.
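    The abstract describes the RFM as integrating adjacent-resolution feature maps while preserving high-resolution detail. A minimal sketch of such a block (upsample the lower-resolution map, concatenate, mix with a 1x1 convolution) is shown below; the channel counts and exact fusion operation are assumptions, not the paper's specification.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ResolutionFusionModule(nn.Module):
        """RFM-style block: inject semantic information from a lower-resolution
        map into a high-resolution map without discarding spatial detail."""
        def __init__(self, high_ch, low_ch, out_ch):
            super().__init__()
            self.mix = nn.Conv2d(high_ch + low_ch, out_ch, kernel_size=1)

        def forward(self, high, low):
            # upsample the coarser (more semantic) map to the finer grid
            low_up = F.interpolate(low, size=high.shape[2:],
                                   mode="bilinear", align_corners=False)
            return torch.relu(self.mix(torch.cat([high, low_up], dim=1)))
    ```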
