nnUNet

  • Article type: Journal Article
    OBJECTIVE: Aortic dissection (AD) is a serious condition requiring rapid and accurate diagnosis. In this study, we aimed to improve the diagnostic accuracy of AD by presenting a novel method for aortic segmentation in computed tomography images that uses a combination of a transformer and a UNet cascade network with a Zoom-Out and Zoom-In scheme (ZOZI-seg).
    METHODS: The proposed method segments each compartment of the aorta, comprising the true lumen (TL), false lumen (FL), and thrombosis (TH), using a cascade strategy that captures both the global context (anatomical structure) and the local detail texture based on the dynamic patch size with ZOZI schemes. The ZOZI-seg model has a two-stage architecture using both a "3D transformer for panoptic context-awareness" and a "3D UNet for localized texture refinement." The unique ZOZI strategies for patching were demonstrated in an ablation study. The performance of our proposed ZOZI-seg model was tested using a dataset from Asan Medical Center and compared with those of existing models such as nnUNet and nnFormer.
    RESULTS: In terms of segmentation accuracy, our method yielded better results, with Dice similarity coefficients (DSCs) of 0.917, 0.882, and 0.630 for TL, FL, and TH, respectively. Furthermore, we indirectly compared our model with those in previous studies using an external dataset to evaluate its robustness and generalizability.
    CONCLUSIONS: This approach may help in the diagnosis and treatment of AD in different clinical situations and provide a strong basis for further research and clinical applications.
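    For reference, the per-compartment Dice similarity coefficient (DSC) quoted above can be computed directly from two label maps. A minimal NumPy sketch, assuming integer label values 1/2/3 for TL/FL/TH (the actual label convention is not given in the abstract):
```python
import numpy as np

def dice_per_label(pred: np.ndarray, gt: np.ndarray, labels=(1, 2, 3)) -> dict:
    """Dice similarity coefficient (DSC) per label for two integer label maps.

    DSC = 2 * |P ∩ G| / (|P| + |G|); labels 1/2/3 for TL/FL/TH are assumptions.
    """
    scores = {}
    for lab in labels:
        p = (pred == lab)
        g = (gt == lab)
        denom = p.sum() + g.sum()
        scores[lab] = 2.0 * np.logical_and(p, g).sum() / denom if denom > 0 else np.nan
    return scores

# Toy example on random volumes; replace with real segmentation arrays.
rng = np.random.default_rng(0)
pred = rng.integers(0, 4, size=(64, 64, 64))
gt = rng.integers(0, 4, size=(64, 64, 64))
print(dice_per_label(pred, gt))
```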

  • Article type: Journal Article
    UNASSIGNED: Automatic segmentation of corneal stromal cells can assist ophthalmologists to detect abnormal morphology in confocal microscopy images, thereby assessing the virus infection or conical mutation of corneas, and avoiding irreversible pathological damage. However, the corneal stromal cells often suffer from uneven illumination and disordered vascular occlusion, resulting in inaccurate segmentation.
    UNASSIGNED: In response to these challenges, this study proposes a novel approach: an nnUNet and nested Transformer-based network integrated with dual high-order channel attention, named U-NTCA. Unlike nnUNet, this architecture allows for the recursive transmission of crucial contextual features and direct interaction of features across layers to improve the accuracy of cell recognition in low-quality regions. The proposed methodology involves multiple steps. Firstly, three underlying features with the same channel number are sent into an attention channel named gnConv to facilitate higher-order interaction of local context. Secondly, we leverage different layers in U-Net to integrate Transformer nested with gnConv, and concatenate multiple Transformers to transmit multi-scale features in a bottom-up manner. We encode the downsampling features, corresponding upsampling features, and low-level feature information transmitted from lower layers to model potential correlations between features of varying sizes and resolutions. These multi-scale features play a pivotal role in refining the position information and morphological details of the current layer through recursive transmission.
    UNASSIGNED: Experimental results on a clinical dataset including 136 images show that the proposed method achieves competitive performance with a Dice score of 82.72% and an AUC (Area Under Curve) of 90.92%, which are higher than the performance of nnUNet.
    UNASSIGNED: The experimental results indicate that our model provides a cost-effective and high-precision segmentation solution for corneal stromal cells, particularly in challenging image scenarios.
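    As a rough illustration of the higher-order interaction idea behind gnConv, the sketch below follows the recursive gated-convolution design popularized by HorNet; the channel split, 7×7 depthwise kernel, and interaction order are assumptions, and it does not reproduce the authors' dual high-order channel attention or the nested Transformer of U-NTCA:
```python
import torch
import torch.nn as nn

class GnConv2d(nn.Module):
    """Simplified recursive gated convolution (gnConv-style), 2D variant.

    Higher-order spatial interactions are built by repeatedly gating
    pointwise-projected features with depthwise-convolved features.
    Channel split and interaction order are illustrative assumptions.
    """
    def __init__(self, dim: int, order: int = 3):
        super().__init__()
        # channel widths per order, e.g. dim=64, order=3 -> [16, 32, 64]
        self.dims = [dim // 2 ** i for i in range(order)][::-1]
        self.order = order
        self.proj_in = nn.Conv2d(dim, self.dims[0] + sum(self.dims), 1)
        self.dwconv = nn.Conv2d(sum(self.dims), sum(self.dims), 7,
                                padding=3, groups=sum(self.dims))
        self.pws = nn.ModuleList(
            nn.Conv2d(self.dims[i], self.dims[i + 1], 1) for i in range(order - 1))
        self.proj_out = nn.Conv2d(dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate, feats = torch.split(self.proj_in(x), [self.dims[0], sum(self.dims)], dim=1)
        dw = torch.split(self.dwconv(feats), self.dims, dim=1)
        out = gate * dw[0]                      # first-order interaction
        for i in range(self.order - 1):         # higher-order interactions
            out = self.pws[i](out) * dw[i + 1]
        return self.proj_out(out)

x = torch.randn(1, 64, 32, 32)
print(GnConv2d(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```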

  • Article type: Journal Article
    Three-dimensional (3D) ultrasound can assess the margins of resected tongue carcinoma during surgery. Manual segmentation (MS) is time-consuming, labour-intensive, and subject to operator variability. This study aims to investigate the use of a 3D deep learning model for fast intraoperative segmentation of tongue carcinoma in 3D ultrasound volumes. Additionally, it investigates the clinical effect of automatic segmentation. A 3D No New U-Net (nnUNet) was trained on 113 manually annotated ultrasound volumes of resected tongue carcinoma. The model was implemented on a mobile workstation and clinically validated on 16 prospectively included tongue carcinoma patients. Different prediction settings were investigated. Automatic segmentations with multiple islands were adjusted by selecting the best-representing island. The final margin status (FMS) based on automatic, semi-automatic, and manual segmentation was computed and compared with the histopathological margin. The standard 3D nnUNet resulted in the best-performing automatic segmentation with a mean (SD) Dice volumetric score of 0.65 (0.30), Dice surface score of 0.73 (0.26), average surface distance of 0.44 (0.61) mm, Hausdorff distance of 6.65 (8.84) mm, and prediction time of 8 seconds. FMS based on automatic segmentation had a low correlation with histopathology (r = 0.12, p = 0.67); MS resulted in a moderate but insignificant correlation with histopathology (r = 0.4, p = 0.12, n = 16). Implementing the 3D nnUNet yielded fast, automatic segmentation of tongue carcinoma in 3D ultrasound volumes. Correlation between FMS and histopathology obtained from these segmentations was lower than the moderate correlation between MS and histopathology.
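    The island adjustment described above (keeping one best-representing component from a multi-island prediction) is commonly approximated with a connected-component filter; the sketch below keeps the largest component, which may differ from the study's manually chosen best-representing island:
```python
import numpy as np
from scipy import ndimage

def keep_largest_island(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest connected component of a binary segmentation.

    Proxy for a 'best-representing island' selection (assumption).
    """
    labeled, n = ndimage.label(mask > 0)
    if n <= 1:
        return mask
    sizes = ndimage.sum(np.ones_like(labeled), labeled, index=np.arange(1, n + 1))
    keep = 1 + int(np.argmax(sizes))
    return (labeled == keep).astype(mask.dtype)

toy = np.zeros((10, 10, 10), dtype=np.uint8)
toy[1:3, 1:3, 1:3] = 1   # small island (removed)
toy[5:9, 5:9, 5:9] = 1   # large island (kept)
print(keep_largest_island(toy).sum())  # 64
```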

  • Article type: Journal Article
    UNASSIGNED: A linear accelerator (linac) incorporating a magnetic resonance (MR) imaging device that provides enhanced soft-tissue contrast is particularly suited for abdominal radiation therapy. In particular, accurate segmentation of the abdominal tumors and organs at risk (OARs) required for treatment planning is becoming possible. Currently, this segmentation is performed manually by radiation oncologists. This process is very time-consuming and subject to inter- and intra-operator variability. In this work, deep learning-based automatic segmentation solutions were investigated for abdominal OARs on 0.35 T MR images.
    UNASSIGNED: One hundred and twenty-one sets of abdominal MR images and their corresponding ground-truth segmentations were collected and used for this work. The OARs of interest included the liver, the kidneys, the spinal cord, the stomach, and the duodenum. Several UNet-based models were trained in 2D (the classical UNet, the ResAttention UNet, the EfficientNet UNet, and the nnUNet). The best model was then trained with a 3D strategy to investigate possible improvements. Geometrical metrics such as the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), and Hausdorff Distance (HD) were computed, and the calculated volumes were analysed with Bland-Altman plots to evaluate the results.
    UNASSIGNED: The nnUNet trained in 3D mode achieved the best performance, with DSC scores for the liver, the kidneys, the spinal cord, the stomach, and the duodenum of 0.96 ± 0.01, 0.91 ± 0.02, 0.91 ± 0.01, 0.83 ± 0.10, and 0.69 ± 0.15, respectively. The matching IoU scores were 0.92 ± 0.01, 0.84 ± 0.04, 0.84 ± 0.02, 0.54 ± 0.16 and 0.72 ± 0.13. The corresponding HD scores were 13.0 ± 6.0 mm, 16.0 ± 6.6 mm, 3.3 ± 0.7 mm, 35.0 ± 33.0 mm, and 42.0 ± 24.0 mm. The analysis of the calculated volumes followed the same behavior.
    UNASSIGNED: Although the segmentation results for the duodenum were not optimal, these findings imply a potential clinical application of the 3D nnUNet model for the segmentation of abdominal OARs for images from 0.35 T MR-Linac.
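    The Bland-Altman volume analysis mentioned above reduces to a bias and 95% limits-of-agreement computation over paired automatic and manual volumes; a minimal sketch on made-up example values:
```python
import numpy as np

def bland_altman(auto_ml, manual_ml):
    """Bland-Altman agreement between automatic and manual volumes (mL).

    Returns the mean difference (bias) and the 95% limits of agreement.
    """
    auto_ml = np.asarray(auto_ml, dtype=float)
    manual_ml = np.asarray(manual_ml, dtype=float)
    diff = auto_ml - manual_ml
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)

# Made-up liver volumes, for illustration only.
bias, limits = bland_altman([1510, 1620, 1480], [1495, 1650, 1470])
print(f"bias = {bias:.1f} mL, limits of agreement = {limits}")
```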

  • Article type: Journal Article
    Objective: Both local and global context information are crucial semantic features for brain tumor segmentation, while almost all CNN-based methods cannot learn global spatial dependencies very well due to the limitation of convolution operations. The purpose of this paper is to build a new framework to make full use of local and global features from multimodal MR images for improving the performance of brain tumor segmentation.
    Approach: A new automated segmentation method named nnUnetFormer was proposed based on nnUnet and transformer. It fused transformer modules into the deeper layers of the nnUnet framework to efficiently obtain both local and global features of lesion regions from multimodal MR images.
    Main results: We evaluated our method on the BraTS 2021 dataset by 5-fold cross-validation and achieved excellent performance, with Dice similarity coefficients (DSC) of 0.936, 0.921 and 0.872 and 95th percentiles of Hausdorff distance (HD95) of 3.96, 4.57 and 10.45 for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) regions, respectively, which outperformed recent state-of-the-art methods in terms of both average DSC and average HD95. Besides, ablation experiments showed that fusing the transformer into our modified nnUnet framework improves the performance of brain tumor segmentation, especially for the TC region. Moreover, to validate the generalization capacity of our method, we further conducted experiments on the FeTS 2021 dataset and achieved satisfactory segmentation performance on 11 unseen institutions, with DSC of 0.912, 0.872 and 0.759 and HD95 of 6.16, 8.81 and 38.50 for the WT, TC, and ET regions, respectively.
    Significance: Extensive qualitative and quantitative experimental results demonstrated that the proposed method has competitive performance against state-of-the-art methods, indicating its potential for clinical applications.
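    The HD95 values quoted above are 95th-percentile symmetric surface distances; an illustrative NumPy/SciPy implementation (border definitions and spacing handling vary slightly between evaluation toolkits):
```python
import numpy as np
from scipy import ndimage

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance between two binary masks (mm)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # surface voxels = foreground minus its erosion
    surf_pred = pred ^ ndimage.binary_erosion(pred)
    surf_gt = gt ^ ndimage.binary_erosion(gt)
    # distance from every voxel to the nearest surface voxel of the other mask
    d_to_gt = ndimage.distance_transform_edt(~surf_gt, sampling=spacing)
    d_to_pred = ndimage.distance_transform_edt(~surf_pred, sampling=spacing)
    distances = np.concatenate([d_to_gt[surf_pred], d_to_pred[surf_gt]])
    return float(np.percentile(distances, 95))

a = np.zeros((32, 32, 32), dtype=bool); a[8:20, 8:20, 8:20] = True
b = np.zeros_like(a); b[10:22, 8:20, 8:20] = True
print(hd95(a, b, spacing=(1.0, 1.0, 1.0)))
```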

  • Article type: Preprint
    The authors propose and apply a novel stepwise transfer learning approach to develop and externally validate a deep learning auto-segmentation model for pediatric low-grade glioma, with performance and clinical acceptability comparable to those of pediatric neuroradiologists and radiation oncologists.
    Imaging data available for training deep learning tumor segmentation in pediatric brain tumors are limited, and adult-centric models generalize poorly to the pediatric setting. Stepwise transfer learning demonstrated gains in deep learning segmentation performance over other approaches (Dice score: 0.877 [IQR 0.715-0.914]) and yielded segmentation accuracy comparable to human experts on external validation. In blinded clinical acceptability testing, the model received higher average Likert ratings and clinical acceptability than the other experts (Transfer-Encoder model vs. average expert: 80.2% vs. 65.4%). Turing tests showed that experts had a low ability to correctly identify the origin of Transfer-Encoder model segmentations as AI-generated versus human-generated (mean accuracy: 26%).
    UNASSIGNED: Artificial intelligence (AI)-automated tumor delineation for pediatric gliomas would enable real-time volumetric evaluation to support diagnosis, treatment response assessment, and clinical decision-making. Auto-segmentation algorithms for pediatric tumors are rare, due to limited data availability, and algorithms have yet to demonstrate clinical translation.
    UNASSIGNED: We leveraged two datasets from a national brain tumor consortium (n=184) and a pediatric cancer center (n=100) to develop, externally validate, and clinically benchmark deep learning neural networks for pediatric low-grade glioma (pLGG) segmentation using a novel in-domain, stepwise transfer learning approach. The best model [via Dice similarity coefficient (DSC)] was externally validated and subject to randomized, blinded evaluation by three expert clinicians wherein clinicians assessed clinical acceptability of expert- and AI-generated segmentations via 10-point Likert scales and Turing tests.
    UNASSIGNED: The best AI model, which utilized in-domain, stepwise transfer learning, outperformed the baseline model (median DSC: 0.877 [IQR 0.715-0.914] vs. 0.812 [IQR 0.559-0.888]; p<0.05). On external testing (n=60), the AI model yielded accuracy comparable to inter-expert agreement (median DSC: 0.834 [IQR 0.726-0.901] vs. 0.861 [IQR 0.795-0.905], p=0.13). On clinical benchmarking (n=100 scans, 300 segmentations from 3 experts), the experts rated the AI model higher on average than the other experts (median Likert rating: 9 [IQR 7-9] vs. 7 [IQR 7-9], p<0.05 for each). Additionally, the AI segmentations had significantly higher (p<0.05) overall acceptability than the experts on average (80.2% vs. 65.4%). Experts correctly predicted the origins of AI segmentations in an average of 26.0% of cases.
    UNASSIGNED: Stepwise transfer learning enabled expert-level, automated pediatric brain tumor auto-segmentation and volumetric measurement with a high level of clinical acceptability. This approach may enable development and translation of AI imaging segmentation algorithms in limited data scenarios.
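    The abstract does not spell out the training recipe, but in-domain, stepwise transfer learning is commonly realized as pretrain-then-fine-tune with staged unfreezing; the PyTorch sketch below is illustrative only, and the encoder/decoder attribute names, checkpoint path, and data loaders are hypothetical placeholders:
```python
import torch
import torch.nn as nn

def stepwise_finetune(model: nn.Module, pretrained_ckpt: str,
                      loader_a, loader_b, loss_fn, device="cuda"):
    """Two-step in-domain transfer learning sketch (hypothetical recipe).

    Step 1: load weights pretrained on a related task and fine-tune only the
            decoder on the first in-domain dataset (encoder frozen).
    Step 2: unfreeze everything and fine-tune end-to-end on the second dataset.
    """
    model.load_state_dict(torch.load(pretrained_ckpt, map_location=device), strict=False)
    model.to(device)

    # Step 1: freeze encoder parameters (assumes submodules named encoder/decoder).
    for p in model.encoder.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
    train_one_stage(model, loader_a, loss_fn, opt, device)

    # Step 2: unfreeze all parameters and continue with a lower learning rate.
    for p in model.parameters():
        p.requires_grad = True
    opt = torch.optim.Adam(model.parameters(), lr=1e-5)
    train_one_stage(model, loader_b, loss_fn, opt, device)
    return model

def train_one_stage(model, loader, loss_fn, opt, device, epochs=1):
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            opt.zero_grad()
            loss = loss_fn(model(images.to(device)), targets.to(device))
            loss.backward()
            opt.step()
```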

  • Article type: Journal Article
    OBJECTIVE: Neuroendocrine tumors (NETs) are a rare form of cancer that can occur anywhere in the body and commonly metastasizes. The large variance in location and aggressiveness of the tumors makes it a difficult cancer to treat. Assessments of the whole-body tumor burden in a patient image allow for better tracking of disease progression and inform better treatment decisions. Currently, radiologists rely on qualitative assessments of this metric since manual segmentation is unfeasible within a typical busy clinical workflow.
    METHODS: We address these challenges by extending the application of the nnU-net pipeline to produce automatic NET segmentation models. We utilize the ideal imaging type of 68Ga-DOTATATE PET/CT to produce segmentation masks from which to calculate total tumor burden metrics. We provide a human-level baseline for the task and perform ablation experiments of model inputs, architectures, and loss functions.
    RESULTS: Our dataset comprises 915 PET/CT scans and is divided into a held-out test set (87 cases) and 5 training subsets for cross-validation. The proposed models achieve a test Dice score of 0.644, on par with our inter-annotator Dice score of 0.682 on a subset of 6 patients. If we apply our modified Dice score to the predictions, the test performance reaches a score of 0.80.
    CONCLUSIONS: In this paper, we demonstrate the ability to automatically generate accurate NET segmentation masks given PET images through supervised learning. We publish the model for extended use and to support the treatment planning of this rare cancer.
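    Whole-body tumor burden metrics of the kind described above follow directly from a predicted lesion mask and the voxel spacing; the sketch below uses SimpleITK to report total lesion volume and lesion count (the file name is a placeholder, and this is not the authors' exact metric definition):
```python
import numpy as np
import SimpleITK as sitk
from scipy import ndimage

def tumor_burden(mask_path: str) -> dict:
    """Total tumor volume (mL) and lesion count from a binary NET lesion mask."""
    img = sitk.ReadImage(mask_path)                        # e.g. a nnU-Net prediction
    mask = sitk.GetArrayFromImage(img) > 0
    voxel_ml = float(np.prod(img.GetSpacing())) / 1000.0   # mm^3 -> mL
    _, n_lesions = ndimage.label(mask)
    return {"total_volume_ml": mask.sum() * voxel_ml, "lesion_count": int(n_lesions)}

# Placeholder path; point this at a real prediction mask to use it.
print(tumor_burden("net_prediction.nii.gz"))
```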

  • Article type: Journal Article
    BACKGROUND: Endovascular thrombectomy (EVT) duration is an important predictor of neurological outcome. Recently it was shown that an internal carotid artery (ICA) angle of ≤ 90° is predictive of longer EVT duration. As manual angle measurement is non-trivial and time-consuming, deep learning (DL) could help identify difficult EVT cases in advance.
    METHODS: We included 379 CT angiographies (CTA) of patients who underwent EVT between January 2016 and December 2020. Manual segmentation of 121 CTAs was performed for the aortic arch, common carotid artery (CCA), and ICA. These were used to train an nnUNet. The remaining 258 CTAs were segmented using the trained nnUNet, with manual verification afterwards. Angles of the left and right ICAs were measured, resulting in two classes: acute angle ≤ 90° and > 90°. The segmentations, together with the angle measurements, were used to train a convolutional neural network (CNN) to determine the ICA angle. Segmentation performance was evaluated using Dice scores. The classification was evaluated using AUC and accuracy. Associations between ICA angle and procedural times were explored using medians and the Mann-Whitney U test.
    RESULTS: Median EVT duration for cases with ICA angle > 90° was 48 min and with ≤ 90° was 64 min (p = 0.001). Segmentation evaluation showed Dice scores of 0.94 for the aorta and 0.86 for CCA/ICA, respectively. Evaluation of ICA angle determination resulted in an AUC of 0.92 and accuracy of 0.85.
    CONCLUSIONS: The association between ICA angle and EVT duration could be verified and a DL-based method for semi-automatic assessment with the potential for full automation was developed. More anatomical features of interest could be examined in a similar fashion.
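    One way to obtain the ≤ 90° / > 90° classes described above is to measure the angle between proximal and distal vessel-direction vectors at a bend point of the ICA centerline; the helper below is a geometric sketch only, not the study's CNN-based determination, and the choice of centerline points is an assumption:
```python
import numpy as np

def vessel_angle_deg(p_prox, p_bend, p_dist) -> float:
    """Angle (degrees) at a bend point between proximal and distal centerline segments."""
    u = np.asarray(p_prox, float) - np.asarray(p_bend, float)
    v = np.asarray(p_dist, float) - np.asarray(p_bend, float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

def classify_ica(angle_deg: float) -> str:
    """Binary class used in the study: acute (<= 90 deg) vs. > 90 deg."""
    return "<=90" if angle_deg <= 90.0 else ">90"

angle = vessel_angle_deg([0, 0, 0], [10, 0, 0], [10, 12, 3])  # toy coordinates (mm)
print(angle, classify_ica(angle))
```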

  • Article type: Journal Article
    UNASSIGNED: Recipient-donor matching in liver transplantation can require precise estimation of liver volume. Currently utilized demographic-based organ volume estimates are imprecise and nonspecific. Manual organ annotation from medical imaging is effective; however, the process is cumbersome, often taking an undesirable length of time to complete. Additionally, manual organ segmentation and volume measurement incur additional direct costs to payers for either a clinician or a trained technician to complete. Deep learning-based automatic image segmentation tools are well positioned to address this clinical need.
    UNASSIGNED: To build a deep learning model that could accurately estimate liver volumes and create 3D organ renderings from computed tomography (CT) medical images.
    UNASSIGNED: We trained a nnU-Net deep learning model to identify liver borders in images of the abdominal cavity. We used 151 publicly available CT scans. For each CT scan, a board-certified radiologist annotated the liver margins (ground truth annotations). We split our image dataset into training, validation, and test sets. We trained our nnU-Net model on these data to identify liver borders in 3D voxels and integrated these to reconstruct a total organ volume estimate.
    UNASSIGNED: The nnU-Net model accurately identified the border of the liver with a mean overlap accuracy of 97.5% compared with ground truth annotations. Our calculated volume estimates achieved a mean percent error of 1.92% ± 1.54% on the test set.
    UNASSIGNED: Liver volume estimation from CT scans is accurate using an nnU-Net deep learning architecture. Appropriately deployed, an nnU-Net algorithm is accurate and quick, making it suitable for incorporation into the pretransplant clinical decision-making workflow.
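    The volume estimate and 3D rendering described above follow from the predicted mask: volume is the foreground voxel count times the voxel volume, and a surface mesh can be extracted with marching cubes; a hedged sketch with a placeholder file name:
```python
import numpy as np
import SimpleITK as sitk
from skimage import measure

def liver_volume_and_mesh(mask_path: str):
    """Liver volume (mL) and a triangle mesh from a binary liver mask."""
    img = sitk.ReadImage(mask_path)                 # e.g. an nnU-Net output
    mask = sitk.GetArrayFromImage(img) > 0          # array is in (z, y, x) order
    spacing_zyx = img.GetSpacing()[::-1]            # SimpleITK spacing is (x, y, z)
    volume_ml = mask.sum() * float(np.prod(spacing_zyx)) / 1000.0
    verts, faces, _, _ = measure.marching_cubes(mask.astype(np.uint8), level=0.5,
                                                spacing=spacing_zyx)
    return volume_ml, (verts, faces)

volume_ml, mesh = liver_volume_and_mesh("liver_prediction.nii.gz")  # placeholder path
print(f"estimated liver volume: {volume_ml:.1f} mL")
```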
