Cardiac segmentation

  • Article type: Journal Article
    The automatic segmentation of cardiac computed tomography (CT) and magnetic resonance imaging (MRI) plays a pivotal role in the prevention and treatment of cardiovascular diseases. In this study, we propose an efficient network based on the multi-scale, multi-head self-attention (MSMHSA) mechanism. The incorporation of this mechanism enables us to achieve larger receptive fields, facilitating the accurate segmentation of whole heart structures in both CT and MRI images. Within this network, features extracted from the shallow feature extraction network undergo an MHSA mechanism that closely aligns with human vision, allowing contextual semantic information to be extracted more comprehensively and accurately. To improve the precision of cardiac substructure segmentation across varying sizes, our proposed method introduces three MHSA networks at distinct scales. This approach allows for fine-tuning the accuracy of micro-object segmentation by adapting the size of the segmented images. The efficacy of our method is rigorously validated on the Multi-Modality Whole Heart Segmentation (MM-WHS) Challenge 2017 dataset, demonstrating competitive results and the accurate segmentation of seven cardiac substructures in both cardiac CT and MRI images. Through comparative experiments with advanced transformer-based models, our study provides compelling evidence that despite the remarkable achievements of transformer-based models, the fusion of CNN models and self-attention remains a simple yet highly effective approach for dual-modality whole heart segmentation.
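The global receptive field that multi-head self-attention provides can be illustrated with a minimal, dependency-free sketch: a toy single attention head with identity Q/K/V projections, an illustration only, not the paper's MSMHSA implementation.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(tokens):
    """Toy single-head self-attention: every token attends to every
    other token, which is the global-receptive-field property the
    abstract appeals to. Identity Q/K/V projections keep the sketch
    dependency-free."""
    d = len(tokens[0])
    out = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, tokens))
                    for i in range(d)])
    return out
```

With one-hot inputs, each output row is exactly the attention weight vector, so each row sums to 1 and a token attends most strongly to itself.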

  • Article type: Journal Article
    OBJECTIVE: Transformer, which is notable for its ability to model global context, has been used to remedy the shortcomings of convolutional neural networks (CNNs) and break their dominance in medical image segmentation. However, the self-attention module is both memory- and computation-inefficient, so many methods have to build their Transformer branch upon largely downsampled feature maps or adopt tokenized image patches to fit their model into accessible GPUs. This patch-wise operation restricts the network from extracting pixel-level intrinsic structures or dependencies inside each patch, hurting the performance of pixel-level classification tasks.
    METHODS: To tackle these issues, we propose a memory- and computation-efficient self-attention module to enable reasoning on relatively high-resolution features, promoting the efficiency of learning global information while effectively grasping fine spatial details. Furthermore, we design a novel Multi-Branch Transformer (MultiTrans) architecture to provide hierarchical features for handling objects with variable shapes and sizes in medical images. By building four parallel Transformer branches on different levels of the CNN, our hybrid network aggregates both multi-scale global contexts and multi-scale local features.
    RESULTS: MultiTrans achieves the highest segmentation accuracy on three medical image datasets with different modalities: Synapse, ACDC and M&Ms. Compared to the Standard Self-Attention (SSA), the proposed Efficient Self-Attention (ESA) largely reduces the training memory and computational complexity while even slightly improving the accuracy. Specifically, the training memory cost, FLOPs and Params of our ESA are 18.77%, 20.68% and 74.07% of those of the SSA.
    CONCLUSIONS: Experiments on three medical image datasets demonstrate the generality and robustness of the designed network. The ablation study shows the efficiency and effectiveness of our proposed ESA. Code is available at: https://github.com/Yanhua-Zhang/MultiTrans-extension.
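The memory and FLOP savings of an efficient self-attention come from shrinking the set of keys and values a query compares against. A back-of-the-envelope cost model, illustrative only (the `reduction` pooling factor is an assumption, one common efficiency trick, not necessarily the paper's exact ESA design):

```python
def attention_cost(n_tokens, dim, reduction=1):
    """Approximate multiply-accumulate count for one attention head:
    Q @ K^T plus the weighted sum over V. `reduction` pools keys and
    values down to n_tokens // reduction positions, one common way to
    tame the quadratic cost."""
    n_kv = n_tokens // reduction
    return 2 * n_tokens * n_kv * dim

# Standard attention on a 64x64 feature map vs. an 8x-reduced variant.
standard = attention_cost(64 * 64, 64)
efficient = attention_cost(64 * 64, 64, reduction=8)
ratio = efficient / standard  # one eighth of the standard cost
```

The cost scales linearly with the reduced key/value count, which is why such modules can run on relatively high-resolution feature maps.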

  • Article type: Journal Article
    Objectives. In this work, we proposed a deep-learning segmentation algorithm for cardiac magnetic resonance imaging to aid in contouring of the left ventricle, right ventricle, and myocardium (Myo). Approach. We proposed a shifted window multilayer perceptron (Swin-MLP) mixer network which is built upon a 3D U-shaped symmetric encoder-decoder structure. We evaluated our proposed network using public data from 100 individuals. The network performance was quantitatively evaluated using 3D volume similarity between the ground truth contours and the predictions using Dice score coefficient, sensitivity, and precision as well as 2D surface similarity using Hausdorff distance (HD), mean surface distance (MSD) and residual mean square distance (RMSD). We benchmarked the performance against two other current leading-edge networks known as Dynamic UNet and Swin-UNetr on the same public dataset. Results. The proposed network achieved the following volume similarity metrics when averaged over three cardiac segments: Dice = 0.952 ± 0.017, precision = 0.948 ± 0.016, sensitivity = 0.956 ± 0.022. The average surface similarities were HD = 1.521 ± 0.121 mm, MSD = 0.266 ± 0.075 mm, and RMSD = 0.668 ± 0.288 mm. The network shows statistically significant improvement in comparison to the Dynamic UNet and Swin-UNetr algorithms for most volumetric and surface metrics with p-value less than 0.05. Overall, the proposed Swin-MLP mixer network demonstrates better or comparable performance than competing methods. Significance. The proposed Swin-MLP mixer network demonstrates more accurate segmentation performance compared to current leading-edge methods. This robust method demonstrates the potential to streamline clinical workflows for multiple applications.
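The volume-similarity metrics reported above (Dice, precision, sensitivity) reduce to simple counts over binary masks; a minimal sketch on flattened 0/1 label lists:

```python
def overlap_metrics(pred, truth):
    """Dice, precision, and sensitivity for flat binary masks given
    as equal-length lists of 0/1 labels."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)      # true positives
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)  # false positives
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)  # false negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    return dice, precision, sensitivity
```

In practice these counts are taken over full 3D volumes and averaged per cardiac segment, as in the abstract's reported values.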

  • Article type: Journal Article
    OBJECTIVE: This study investigated the additional prognostic value of epicardial adipose tissue (EAT) volume for major adverse cardiovascular events (MACE) in patients undergoing stress cardiac magnetic resonance (CMR) imaging.
    METHODS: 730 consecutive patients [mean age: 63 ± 10 years; 616 men] who underwent stress CMR for known or suspected coronary artery disease were randomly divided into derivation (n = 365) and validation (n = 365) cohorts. MACE was defined as non-fatal myocardial infarction and cardiac death. A deep learning algorithm was developed and trained to quantify EAT volume from CMR. EAT volume was adjusted for height (EAT volume index). A composite CMR-based risk score was created by Cox analysis of the risk of MACE.
    RESULTS: In the derivation cohort, 32 patients (8.7%) developed MACE during a follow-up of 2103 days. Left ventricular ejection fraction (LVEF) < 35% (HR 4.407 [95% CI 1.903-10.202]; p < 0.001), stress perfusion defect (HR 3.550 [95% CI 1.765-7.138]; p < 0.001), late gadolinium enhancement (LGE) (HR 4.428 [95% CI 1.822-10.759]; p = 0.001) and EAT volume index (HR 1.082 [95% CI 1.045-1.120]; p < 0.001) were independent predictors of MACE. In a multivariate Cox regression analysis, adding EAT volume index to a composite risk score including LVEF, stress perfusion defect and LGE provided additional value in MACE prediction, with a net reclassification improvement of 0.683 (95% CI, 0.336-1.03; p < 0.001). The combined evaluation of risk score and EAT volume index showed a higher Harrell C-statistic as compared to the risk score (0.85 vs. 0.76; p < 0.001) and EAT volume index alone (0.85 vs. 0.74; p < 0.001). These findings were confirmed in the validation cohort.
    CONCLUSIONS: In patients with clinically indicated stress CMR, fully automated EAT volume measured by deep learning can provide additional prognostic information on top of standard clinical and imaging parameters.
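The Harrell C-statistic used above to compare risk models can be sketched as a pairwise concordance count. This is a toy version that ignores ties and the censoring subtleties a full survival-analysis implementation handles:

```python
def harrell_c(risk, time, event):
    """Toy Harrell concordance index: among usable pairs (the subject
    with the shorter follow-up time had the event), count how often
    the higher risk score belongs to that earlier-event subject."""
    concordant = usable = 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:
                usable += 1
                concordant += risk[i] > risk[j]
    return concordant / usable
```

A C-statistic of 1.0 means risk scores perfectly rank who has events first; 0.5 is chance level.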

  • Article type: Journal Article
    OBJECTIVE: Insight into the three-dimensional (3D) anatomy of the equine heart is essential in veterinary education and to develop minimally invasive intracardiac procedures. The aim was to create a 3D computer model simulating the in vivo anatomy of the adult equine heart.
    ANIMALS: Ten horses and five ponies.
    METHODS: Ten horses, euthanized for non-cardiovascular reasons, were used for in situ cardiac casting with polyurethane foam and subsequent computed tomography (CT) of the excised heart. In five anaesthetized ponies, a contrast-enhanced electrocardiogram-gated CT protocol was optimized to image the entire heart. Dedicated image processing software was used to create 3D models of all CT scans derived from both methods. Resulting models were compared regarding relative proportions, detail and ease of segmentation.
    RESULTS: The casting protocol produced high detail, but compliant structures such as the pulmonary trunk were disproportionally expanded by the foam. Optimization of the contrast-enhanced CT protocol, especially adding a delayed phase for visualization of the cardiac veins, resulted in sufficiently detailed CT images to create an anatomically correct 3D model of the pony heart. Rescaling was needed to obtain a horse-sized model.
    CONCLUSIONS: Three-dimensional computer models based on contrast-enhanced CT images appeared superior to those based on casted hearts to represent the in vivo situation and are preferred to obtain an anatomically correct heart model useful for education, client communication and research purposes. Scaling was, however, necessary to obtain an approximation of an adult horse heart as cardiac CT imaging is restricted by thoracic size.

  • Article type: Journal Article
    Artificial intelligence (AI) techniques have been proposed for automating analysis of short-axis (SAX) cine cardiac magnetic resonance (CMR), but no CMR analysis tool exists to automatically analyse large (unstructured) clinical CMR datasets. We develop and validate a robust AI tool for start-to-end automatic quantification of cardiac function from SAX cine CMR in large clinical databases.
    Our pipeline for processing and analysing CMR databases includes automated steps to identify the correct data, robust image pre-processing, an AI algorithm for biventricular segmentation of SAX CMR and estimation of functional biomarkers, and automated post-analysis quality control to detect and correct errors. The segmentation algorithm was trained on 2793 CMR scans from two NHS hospitals and validated on additional cases from this dataset (n = 414) and five external datasets (n = 6888), including scans of patients with a range of diseases acquired at 12 different centres using CMR scanners from all major vendors. Median absolute errors in cardiac biomarkers were within the range of inter-observer variability: <8.4 mL (left ventricle volume), <9.2 mL (right ventricle volume), <13.3 g (left ventricular mass), and <5.9% (ejection fraction) across all datasets. Stratification of cases according to phenotypes of cardiac disease and scanner vendors showed good performance across all groups.
    We show that our proposed tool, which combines image pre-processing steps, a domain-generalizable AI algorithm trained on a large-scale multi-domain CMR dataset and quality control steps, allows robust analysis of (clinical or research) databases from multiple centres, vendors, and cardiac diseases. This enables translation of our tool for use in fully automated processing of large multi-centre databases.
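The post-analysis quality-control idea, accepting automated biomarkers only when their error stays within inter-observer variability, can be sketched as follows. Function names and the gating rule are illustrative assumptions, not the pipeline's actual API:

```python
import statistics

def median_abs_error(pred, ref):
    """Median absolute error between automated and reference
    biomarker values (the summary statistic reported above)."""
    return statistics.median(abs(p - r) for p, r in zip(pred, ref))

def passes_qc(pred, ref, limit):
    """Hypothetical QC gate: accept the automated results only if
    their median absolute error stays within the inter-observer
    variability bound `limit`."""
    return median_abs_error(pred, ref) < limit

# E.g. LV volumes (mL) checked against the 8.4 mL bound quoted above.
ok = passes_qc([100.0, 110.0], [102.0, 105.0], 8.4)
```

A real pipeline would apply such gates per biomarker and route failing cases back for correction rather than simply rejecting them.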

  • Article type: Journal Article
    Automatic segmentation of the cardiac left ventricle with scars remains a challenging and clinically significant task, as it is essential for patient diagnosis and treatment pathways. This study aimed to develop a novel framework and cost function to achieve optimal automatic segmentation of the left ventricle with scars using LGE-MRI images. To ensure the generalization of the framework, an unbiased validation protocol was established using out-of-distribution (OOD) internal and external validation cohorts, and intra-observer and inter-observer variability ground truths. The framework employs a combination of traditional computer vision techniques and deep learning to achieve optimal segmentation results. The traditional approach uses multi-atlas techniques, active contours, and k-means methods, while the deep learning approach utilizes various deep learning techniques and networks. The study found that the traditional computer vision techniques delivered more accurate results than deep learning, except in cases with breathing misalignment errors. The optimal solution of the framework achieved robust and generalized results with Dice scores of 82.8 ± 6.4% and 72.1 ± 4.6% in the internal and external OOD cohorts, respectively. The developed framework offers a high-performance solution for automatic segmentation of the left ventricle with scars using LGE-MRI. Unlike existing state-of-the-art approaches, it achieves unbiased results across different hospitals and vendors without the need for training or tuning on hospital cohorts. This framework offers a valuable tool for experts to accomplish the task of fully automatic segmentation of the left ventricle with scars based on a single-modality cardiac scan.
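Of the traditional techniques listed (multi-atlas, active contours, k-means), the k-means step is the easiest to sketch. A toy 1-D intensity clustering, not the framework's actual implementation:

```python
def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D k-means over pixel intensities. Initial centroids are
    spread evenly across the intensity range; an empty cluster keeps
    its previous centroid."""
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids
```

In a segmentation context, the resulting intensity centroids separate, for example, blood pool from myocardium before contour refinement.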

  • Article type: Journal Article
    Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on three datasets with different organs and modalities, where it substantially outperforms existing techniques. Our code is available at: https://github.com/histocartography/generative-appearance-replay.

  • Article type: Journal Article
    Structural and functional heart abnormalities can be examined non-invasively with cardiac magnetic resonance imaging (CMR). Thanks to the development of MR devices, diagnostic scans can capture more and more relevant information about possible heart diseases. T1 and T2 mapping are such novel technologies, providing tissue-specific information even without the administration of contrast material. Artificial intelligence solutions based on deep learning have demonstrated state-of-the-art results in many application areas, including medical imaging. More specifically, automated tools applied to cine sequences have revolutionized volumetric CMR reporting in the past five years. Applying deep learning models to T1 and T2 mapping images can similarly improve the efficiency of post-processing pipelines and consequently facilitate diagnostic processes.
    In this paper, we introduce a deep learning model for myocardium segmentation trained on over 7,000 raw CMR images from 262 subjects of heterogeneous disease etiology. The data were labeled by three experts. As part of the evaluation, Dice score and Hausdorff distance among experts are calculated, and the expert consensus is compared with the model's predictions.
    Our deep learning method achieves 86% mean Dice score, while contours provided by three experts on the same data show 90% mean Dice score. The method's accuracy is consistent across epicardial and endocardial contours, and on basal and midventricular slices, with only 5% lower results on apical slices, which are often challenging even for experts.
    We trained and evaluated a deep learning based segmentation model on 262 heterogeneous CMR cases. Applying deep neural networks to T1 and T2 mapping could similarly improve diagnostic practices. Using the fine details of T1 and T2 mapping images and high-quality labels, the objective of this research is to approach human segmentation accuracy with deep learning.
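The Hausdorff distance used to compare expert contours can be computed directly from contour point sets; a minimal sketch using exhaustive search, fine for small contours:

```python
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two contours given as
    lists of (x, y) points: the worst-case distance from any point on
    one contour to the nearest point on the other."""
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(a, b), directed(b, a))
```

Unlike Dice, this surface metric is sensitive to a single outlying contour point, which is why both kinds of measure are typically reported together.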

  • Article type: Journal Article
    OBJECTIVE: The sheer volume of data generated by population imaging studies is unparalleled by current capabilities to extract objective and quantitative cardiac phenotypes; subjective and time-consuming manual image analysis remains the gold standard. Automated image analytics to compute quantitative imaging biomarkers of cardiac function are desperately needed. Data volumes and their variability pose a challenge to most state-of-the-art methods for endo- and epicardial contours, which lack robustness when applied to very large datasets. Our aim is to develop an analysis pipeline for the automatic quantification of cardiac function from cine magnetic resonance imaging data.
    METHODS: This work adopts 4,638 cardiac MRI cases from UK Biobank with ground truth available for left and right ventricle contours. A hybrid and robust algorithm is proposed to improve the accuracy of automatic left and right ventricle segmentation by harnessing the localization accuracy of deep learning and the morphological accuracy of 3D-ASM (three-dimensional active shape models). The contributions of this paper are three-fold. First, a fully automatic method is proposed for left and right ventricle initialization and cardiac MRI segmentation by taking full advantage of spatiotemporal constraints. Second, a deeply supervised network is introduced to train and segment the heart. Third, the 3D-ASM image search procedure is improved by combining image intensity models with convolutional neural network (CNN) derived distance maps, improving endo- and epicardial edge localization.
    RESULTS: The proposed architecture outperformed the state of the art for cardiac MRI segmentation on UK Biobank. The RV landmark detection errors for the tricuspid valve and RV apex are 4.17 mm and 5.58 mm, respectively. The overlap metric, mean contour distance, Hausdorff distance and cardiac functional parameters are calculated for the LV (left ventricle) and RV (right ventricle) contour segmentation. Bland-Altman analysis for clinical parameters shows that the results from our automated image analysis pipeline are in good agreement with results from expert manual analysis.
    CONCLUSIONS: Our hybrid scheme, combining deep learning and statistical shape modeling for automatic segmentation of the LV/RV from cardiac MRI datasets, is effective and robust and can compute cardiac functional indexes from population imaging.
