segmentation

  • Article type: Journal Article
    BACKGROUND: Medical imaging datasets frequently encounter a data imbalance issue, where the majority of pixels correspond to healthy regions and the minority belong to affected regions. This uneven distribution of pixels exacerbates the challenges associated with computer-aided diagnosis. Networks trained with imbalanced data tend to exhibit bias toward the majority classes, often demonstrating high precision but low sensitivity.
    METHODS: We have designed a new network based on adversarial learning, namely the conditional contrastive generative adversarial network (CCGAN), to tackle the problem of class imbalance in highly imbalanced MRI datasets. The proposed model has three new components: (1) class-specific attention, (2) a region rebalancing module (RRM), and (3) a supervised contrastive learning network (SCoLN). The class-specific attention focuses on the more discriminative areas of the input representation, capturing more relevant features. The RRM promotes a more balanced distribution of features across the various regions of the input representation, ensuring a more equitable segmentation process. The generator of the CCGAN learns pixel-level segmentation by receiving feedback from the SCoLN based on the true-negative and true-positive maps. This process ensures that the final semantic segmentation not only addresses the imbalanced data issue but also enhances classification accuracy.
    RESULTS: The proposed model has shown state-of-the-art performance on five highly imbalanced medical image segmentation datasets. Therefore, the suggested model holds significant potential for application in medical diagnosis in cases characterized by highly imbalanced data distributions. The CCGAN achieved the highest scores in terms of the dice similarity coefficient (DSC) on various datasets: 0.965 ± 0.012 for BUS2017, 0.896 ± 0.091 for DDTI, 0.786 ± 0.046 for LiTS MICCAI 2017, 0.712 ± 1.5 for the ATLAS dataset, and 0.877 ± 1.2 for the BRATS 2015 dataset. DeepLab-V3 follows closely, securing the second-best position with DSC scores of 0.948 ± 0.010 for BUS2017, 0.895 ± 0.014 for DDTI, 0.763 ± 0.044 for LiTS MICCAI 2017, 0.696 ± 1.1 for the ATLAS dataset, and 0.846 ± 1.4 for the BRATS 2015 dataset.
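    The reported DSC values summarize per-case overlap between predicted and reference masks. As a minimal sketch (not the authors' code), the metric can be computed per case and then aggregated as mean ± standard deviation:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: a 4x4 predicted mask and a reference mask standing in for one case;
# in practice per-case scores would be averaged and reported as mean +/- std.
pred = np.array([[0, 1, 1, 0]] * 4)
ref = np.array([[0, 1, 0, 0]] * 4)
print(f"DSC: {dice_coefficient(pred, ref):.3f}")
```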

  • Article type: Journal Article
    BACKGROUND: The volume measurement of intracerebral hemorrhage (ICH) and intraventricular hemorrhage (IVH) provides critical information for precise treatment of patients with spontaneous ICH but remains a big challenge, especially for IVH segmentation. However, the previously proposed ICH and IVH segmentation tools lack external validation and segmentation quality assessment.
    OBJECTIVE: This study aimed to develop a robust deep learning model for the segmentation of ICH and IVH with external validation, and to provide quality assessment for IVH segmentation.
    METHODS: In this study, a Residual Encoding Unet (REUnet) for the segmentation of ICH and IVH was developed using a dataset composed of 977 CT images (all contained ICH, and 338 contained IVH; a five-fold cross-validation procedure was adopted for training and internal validation), and externally tested using an independent dataset consisting of 375 CT images (all contained ICH, and 105 contained IVH). The performance of REUnet was compared with six other advanced deep learning models. Subsequently, three approaches, including Prototype Segmentation (ProtoSeg), Test Time Dropout (TTD), and Test Time Augmentation (TTA), were employed to derive segmentation quality scores in the absence of ground truth to provide a way to assess the segmentation quality in real practice.
    RESULTS: For ICH segmentation, the median (lower quantile-upper quantile) Dice scores obtained from REUnet were 0.932 (0.898-0.953) for internal validation and 0.888 (0.859-0.916) for the external test, both of which were better than those of the other models while comparable to that of nnUnet3D in the external test. For IVH segmentation, the Dice scores obtained from REUnet were 0.826 (0.757-0.868) for internal validation and 0.777 (0.693-0.827) for the external test, which were better than those of all other models. The concordance correlation coefficients between the volumes estimated from the REUnet-generated segmentations and those from the manual segmentations for both ICH and IVH ranged from 0.944 to 0.987. For IVH segmentation quality assessment, the segmentation quality score derived from ProtoSeg was correlated with the Dice score (Spearman r = 0.752 for the external test) and performed better than those from TTD (Spearman r = 0.718) and TTA (Spearman r = 0.260) in the external test. By setting a threshold on the segmentation quality score, we were able to identify low-quality IVH segmentation results using ProtoSeg.
    CONCLUSIONS: The proposed REUnet offers a promising tool for accurate and automated segmentation of ICH and IVH, and for effective IVH segmentation quality assessment, and thus exhibits the potential to facilitate therapeutic decision-making for patients with spontaneous ICH in clinical practice.

  • Article type: Journal Article
    Segmentation in medical images is inherently ambiguous. It is crucial to capture the uncertainty in lesion segmentations to assist cancer diagnosis and further interventions. Recent works have made great progress in generating multiple plausible segmentation results as diversified references to account for the uncertainty in lesion segmentations. However, the efficiency of existing models is limited, and the uncertainty information lying in multi-annotated datasets remains to be fully utilized. In this study, we propose a series of methods to jointly deal with the above limitations and leverage the abundant information in multi-annotated datasets: (1) a customized T-time Inner Sampling Network to promote modeling flexibility and efficiently generate samples matching the ground-truth distribution of a number of annotators; (2) an Uncertainty Degree defined for quantitatively measuring the uncertainty of each sample and the imbalance of the whole multi-annotated dataset from a brand-new perspective; (3) an Uncertainty-aware Data Augmentation Strategy to help probabilistic models adaptively fit samples with different ranges of uncertainty. We have evaluated each of them on both the publicly available lung nodule dataset and our in-house Liver Tumor dataset. Results show that our proposed methods achieve the overall best performance in both accuracy and efficiency, demonstrating their great potential in lesion segmentation and further downstream tasks in real clinical scenarios.
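    The study's exact Uncertainty Degree is not reproduced here; as a rough, commonly used proxy, per-pixel disagreement among annotators can be summarized by the entropy of the pixel-wise foreground frequency. A minimal sketch under that assumption:

```python
import numpy as np

def annotation_uncertainty(masks):
    """Mean per-pixel binary entropy of the foreground frequency across annotators.

    masks: array-like of shape (n_annotators, H, W) with binary annotations.
    Illustrative proxy only; not the Uncertainty Degree defined in the paper.
    """
    masks = np.asarray(masks, dtype=float)
    p = masks.mean(axis=0)  # per-pixel foreground frequency across annotators
    eps = 1e-12
    entropy = -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))
    return entropy.mean()

# Toy example: three annotators disagreeing about a lesion border in a 4x4 patch.
a1 = np.array([[0, 1, 1, 0]] * 4)
a2 = np.array([[0, 1, 0, 0]] * 4)
a3 = np.array([[0, 1, 1, 0]] * 4)
print(f"Mean annotation entropy: {annotation_uncertainty([a1, a2, a3]):.3f}")
```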

  • Article type: Journal Article
    Skull stripping is a crucial preprocessing step in magnetic resonance imaging (MRI), where experts manually create brain masks. This labor-intensive process heavily relies on the annotator's expertise, as automation faces challenges such as low tissue contrast, significant variations in image resolution, and blurred boundaries between the brain and surrounding tissues, particularly in rodents. In this study, we have developed a lightweight framework based on Swin-UNETR to automate the skull stripping process in MRI scans of mice and rats. The primary objective of this framework is to eliminate the need for preprocessing, reduce the workload, and provide an out-of-the-box solution capable of adapting to various MRI image resolutions. By employing a lightweight neural network, we aim to lower the performance requirements of the framework. To validate the effectiveness of our approach, we trained and evaluated the network using publicly available multi-center data, encompassing 1,037 rodents and 1,142 images from 89 centers, resulting in a preliminary mean Dice coefficient of 0.9914. The framework, data, and pre-trained models can be found on the following link: https://github.com/VitoLin21/Rodent-Skull-Stripping.
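    As a usage illustration only (not code from the linked repository), applying a predicted binary brain mask to an MRI volume with nibabel might look like the sketch below; the file names are placeholders.

```python
import nibabel as nib

# Placeholder paths: a rodent MRI scan and the brain mask predicted by a skull-stripping model.
image_path = "subject_T2w.nii.gz"
mask_path = "subject_brainmask.nii.gz"

img = nib.load(image_path)
mask = nib.load(mask_path)

data = img.get_fdata()
brain_only = data * (mask.get_fdata() > 0.5)  # zero out non-brain voxels

nib.save(nib.Nifti1Image(brain_only, img.affine, img.header),
         "subject_T2w_skullstripped.nii.gz")
```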

  • Article type: Journal Article
    Accurate prediction of Kirsten rat sarcoma (KRAS) mutation status is crucial for personalized treatment of advanced colorectal cancer patients. However, despite the excellent performance of deep learning models in certain aspects, they often overlook the synergistic promotion among multiple tasks and the consideration of both global and local information, which can significantly reduce prediction accuracy. To address these issues, this paper proposes an innovative method called the Multi-task Global-Local Collaborative Hybrid Network (CHNet), aimed at more accurately predicting patients' KRAS mutation status. CHNet consists of two branches that extract global and local features from the segmentation and classification tasks, respectively, and exchange complementary information to collaborate in executing these tasks. Within the two branches, we have designed a Channel-wise Hybrid Transformer (CHT) and a Spatial-wise Hybrid Transformer (SHT). These transformers integrate the advantages of both Transformer and CNN, employing cascaded hybrid attention and convolution to capture global and local information from the two tasks. Additionally, we have created an Adaptive Collaborative Attention (ACA) module to facilitate the guided collaborative fusion of segmentation and classification features. Furthermore, we introduce a novel Class Activation Map (CAM) loss to encourage CHNet to learn complementary information between the two tasks. We evaluate CHNet on a T2-weighted MRI dataset and achieve an accuracy of 88.93% in KRAS mutation status prediction, outperforming representative KRAS mutation status prediction methods. The results suggest that CHNet can more accurately predict patients' KRAS mutation status through multi-task collaborative facilitation and joint consideration of global and local information, which can assist doctors in formulating more personalized treatment strategies for patients.
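    The paper's actual training objective (including the CAM loss) is not given here; purely as an illustrative sketch of a joint segmentation-plus-classification objective of the kind described, a generic PyTorch combination of a soft Dice term and a cross-entropy term could look like this (the names and weighting are assumptions):

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss for binary segmentation; logits and target have shape (N, 1, H, W)."""
    probs = torch.sigmoid(logits)
    dims = (1, 2, 3)
    intersection = (probs * target).sum(dims)
    denom = probs.sum(dims) + target.sum(dims)
    return (1 - (2 * intersection + eps) / (denom + eps)).mean()

def multitask_loss(seg_logits, seg_target, cls_logits, cls_target, alpha=0.5):
    """Generic joint objective: segmentation Dice loss + classification cross-entropy.

    Not CHNet's actual loss; the CAM loss and attention modules are omitted.
    """
    return alpha * soft_dice_loss(seg_logits, seg_target) + \
           (1 - alpha) * F.cross_entropy(cls_logits, cls_target)

# Toy shapes: batch of 2, one-channel 32x32 masks, two mutation classes.
seg_logits = torch.randn(2, 1, 32, 32)
seg_target = torch.randint(0, 2, (2, 1, 32, 32)).float()
cls_logits = torch.randn(2, 2)
cls_target = torch.randint(0, 2, (2,))
print(multitask_loss(seg_logits, seg_target, cls_logits, cls_target))
```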

  • Article type: Journal Article
    The incorporation of automatic segmentation methodologies into dental X-ray images refined the paradigms of clinical diagnostics and therapeutic planning by facilitating meticulous, pixel-level articulation of both dental structures and proximate tissues. This underpins the pillars of early pathological detection and meticulous disease progression monitoring. Nonetheless, conventional segmentation frameworks often encounter significant setbacks attributable to the intrinsic limitations of X-ray imaging, including compromised image fidelity, obscured delineation of structural boundaries, and the intricate anatomical structures of dental constituents such as pulp, enamel, and dentin. To surmount these impediments, we propose the Deformable Convolution and Mamba Integration Network, an innovative 2D dental X-ray image segmentation architecture, which amalgamates a Coalescent Structural Deformable Encoder, a Cognitively-Optimized Semantic Enhance Module, and a Hierarchical Convergence Decoder. Collectively, these components bolster the management of multi-scale global features, fortify the stability of feature representation, and refine the amalgamation of feature vectors. A comparative assessment against 14 baselines underscores its efficacy, registering a 0.95% enhancement in the Dice Coefficient and a diminution of the 95th percentile Hausdorff Distance to 7.494.
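    The 95th-percentile Hausdorff distance (HD95) quoted above measures boundary agreement. A generic sketch (not the authors' evaluation code) that computes HD95 between two binary masks from their boundary voxels:

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def _surface_points(mask):
    """Coordinates of the boundary voxels of a binary mask."""
    mask = mask.astype(bool)
    boundary = mask & ~binary_erosion(mask)
    return np.argwhere(boundary)

def hd95(mask_a, mask_b):
    """95th-percentile symmetric Hausdorff distance between two binary masks (in voxels)."""
    pts_a, pts_b = _surface_points(mask_a), _surface_points(mask_b)
    d_ab, _ = cKDTree(pts_b).query(pts_a)  # nearest-surface distances A -> B
    d_ba, _ = cKDTree(pts_a).query(pts_b)  # nearest-surface distances B -> A
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

# Toy example: two slightly shifted square masks.
a = np.zeros((32, 32), dtype=bool); a[8:20, 8:20] = True
b = np.zeros((32, 32), dtype=bool); b[10:22, 10:22] = True
print(f"HD95: {hd95(a, b):.2f} voxels")
```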

  • Article type: Journal Article
    Meniscal injury is a common cause of knee joint pain and a precursor to knee osteoarthritis (KOA). The purpose of this study is to develop an automatic pipeline for meniscal injury classification and localization using fully and weakly supervised networks based on MRI images. In this retrospective study, data were from the Osteoarthritis Initiative (OAI). The MR images were reconstructed using a sagittal intermediate-weighted fat-suppressed turbo spin-echo sequence. (1) We used 130 knees from the OAI to develop the LGSA-UNet model, which fuses the features of adjacent slices and adjusts the blocks in Siam to enable the central slice to obtain rich contextual information. (2) One thousand seven hundred and fifty-six knees from the OAI were included to establish segmentation and classification models. The segmentation model achieved a DICE coefficient ranging from 0.84 to 0.93. The AUC values ranged from 0.85 to 0.95 in the binary models. The accuracy for the three types of menisci (normal, tear, and maceration) ranged from 0.60 to 0.88. Furthermore, 206 knees from the orthopedic hospital were used as an external validation data set to evaluate the performance of the model. The segmentation and classification models still performed well on the external validation set. To compare the diagnostic performance between the deep learning (DL) models and radiologists, the external validation set was sent to two radiologists. The binary classification model outperformed the diagnostic performance of the junior radiologist (0.82-0.87 versus 0.74-0.88). This study highlights the potential of DL in knee meniscus segmentation and injury classification, which can help improve diagnostic efficiency.
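    The binary-model AUC values reported above can be computed from per-case probabilities and labels, e.g. with scikit-learn; a minimal sketch with made-up toy data (not values from the study):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy data: binary labels (1 = torn meniscus) and model probabilities for ten knees.
labels = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
probabilities = np.array([0.10, 0.35, 0.80, 0.65, 0.20, 0.90, 0.40, 0.55, 0.75, 0.30])

print(f"AUC: {roc_auc_score(labels, probabilities):.3f}")
```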

  • Article type: Journal Article
    OBJECTIVE: To highlight progress and opportunities of measuring kidney size with MRI, and to inspire research into resolving the remaining methodological gaps and unanswered questions relating to kidney size assessment.
    METHODS: This work is not a comprehensive review of the literature but highlights valuable recent developments of MRI of kidney size.
    RESULTS: The links between renal (patho)physiology and kidney size are outlined. Common methodological approaches for MRI of kidney size are reviewed. Techniques tailored for renal segmentation and quantification of kidney size are discussed. Frontier applications of kidney size monitoring in preclinical models and human studies are reviewed. Future directions of MRI of kidney size are explored.
    CONCLUSIONS: MRI of kidney size matters. It will facilitate a growing range of (pre)clinical applications, and provide a springboard for new insights into renal (patho)physiology. As kidney size can be easily obtained from already established renal MRI protocols without the need for additional scans, this measurement should always accompany diagnostic MRI exams. Reconciling global kidney size changes with alterations in the size of specific renal layers is an important topic for further research. Acute kidney size measurements alone cannot distinguish between changes induced by alterations in the blood or the tubular volume fractions-this distinction requires further research into cartography of the renal blood and the tubular volumes.
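    Total kidney volume follows directly from a segmentation mask and the voxel spacing (voxel count × voxel volume), which is the arithmetic behind obtaining kidney size from established renal MRI protocols. A minimal sketch with illustrative numbers:

```python
import numpy as np

def volume_from_mask(mask, voxel_spacing_mm):
    """Volume of a binary segmentation in millilitres: voxel count x voxel volume."""
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0  # mm^3 -> mL

# Toy example: a 30x30x25-voxel block at 1.5 x 1.5 x 3.0 mm spacing (~152 mL).
mask = np.zeros((64, 64, 64), dtype=bool)
mask[10:40, 10:40, 10:35] = True
print(f"Kidney volume: {volume_from_mask(mask, (1.5, 1.5, 3.0)):.1f} mL")
```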

  • Article type: Journal Article
    BACKGROUND: Brain medical image segmentation is a critical task in medical image processing, playing a significant role in the prediction and diagnosis of diseases such as stroke, Alzheimer's disease, and brain tumors. However, substantial distribution discrepancies arise among datasets from different sources due to the large inter-site differences among scanners, imaging protocols, and populations. This leads to cross-domain problems in practical applications. In recent years, numerous studies have been conducted to address the cross-domain problem in brain image segmentation.
    METHODS: This review adheres to the standards of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) for data processing and analysis. We retrieved relevant papers from the PubMed, Web of Science, and IEEE databases from January 2018 to December 2023, extracting information about the medical domain, imaging modalities, methods for addressing cross-domain issues, experimental designs, and datasets from the selected papers. Moreover, we compared the performance of methods in stroke lesion segmentation, white matter segmentation, and brain tumor segmentation.
    RESULTS: A total of 71 studies were included and analyzed in this review. The methods for tackling the cross-domain problem include Transfer Learning, Normalization, Unsupervised Learning, Transformer models, and Convolutional Neural Networks (CNNs). On the ATLAS dataset, domain-adaptive methods showed an overall improvement of ~3 percent in stroke lesion segmentation tasks compared to non-adaptive methods. However, given the diversity of datasets and experimental methodologies among current studies of white matter segmentation (MICCAI 2017) and brain tumor segmentation (BraTS), it is challenging to intuitively compare the strengths and weaknesses of these methods.
    CONCLUSIONS: Although various techniques have been applied to address the cross-domain problem in brain image segmentation, there is currently a lack of unified dataset collections and experimental standards. For instance, many studies are still based on n-fold cross-validation, while methods directly based on cross-validation across sites or datasets are relatively scarce. Furthermore, due to the diverse types of medical images in the field of brain segmentation, it is not straightforward to make simple and intuitive comparisons of performance. These challenges need to be addressed in future research.
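    The review contrasts n-fold cross-validation with validation across sites or datasets. A minimal sketch of a leave-one-site-out split using scikit-learn's LeaveOneGroupOut, with site labels as the groups (toy data, not from any study reviewed here):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Toy data: eight scans, one label per scan, and the acquisition site of each scan.
X = np.random.rand(8, 16)  # e.g. image-level features
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])
sites = np.array(["A", "A", "A", "B", "B", "C", "C", "C"])

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(X, y, groups=sites):
    held_out = sites[test_idx][0]
    print(f"train on sites {sorted(set(sites[train_idx]))}, test on site {held_out}")
```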

  • Article type: Journal Article
    OBJECTIVE: To promote the development of auto-segmentation methods for head and neck (HaN) radiation treatment (RT) planning that exploit the information of computed tomography (CT) and magnetic resonance (MR) imaging modalities, we organized HaN-Seg: The Head and Neck Organ-at-Risk CT and MR Segmentation Challenge.
    METHODS: The challenge task was to automatically segment 30 organs-at-risk (OARs) of the HaN region in 14 withheld test cases given the availability of 42 publicly available training cases. Each case consisted of one contrast-enhanced CT and one T1-weighted MR image of the HaN region of the same patient, with up to 30 corresponding reference OAR delineation masks. The performance was evaluated in terms of the Dice similarity coefficient (DSC) and 95-percentile Hausdorff distance (HD95), and statistical ranking was applied for each metric by pairwise comparison of the submitted methods using the Wilcoxon signed-rank test.
    RESULTS: While 23 teams registered for the challenge, only seven submitted their methods for the final phase. The top-performing team achieved a DSC of 76.9% and an HD95 of 3.5 mm. All participating teams utilized architectures based on U-Net, with the winning team leveraging rigid MR-to-CT registration combined with network entry-level concatenation of both modalities.
    CONCLUSIONS: This challenge simulated a real-world clinical scenario by providing non-registered MR and CT images with varying fields-of-view and voxel sizes. Remarkably, the top-performing teams achieved segmentation performance surpassing the inter-observer agreement on the same dataset. These results set a benchmark for future research on this publicly available dataset and on paired multi-modal image segmentation in general.
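    The statistical ranking described above relies on pairwise Wilcoxon signed-rank tests over per-case metric values. A minimal sketch of one such pairwise comparison with SciPy, using made-up per-case DSC values for two hypothetical submissions:

```python
import numpy as np
from scipy.stats import wilcoxon

# Toy per-case DSC values for two submitted methods on the same 14 test cases.
dsc_method_a = np.array([0.78, 0.81, 0.75, 0.80, 0.77, 0.79, 0.74,
                         0.82, 0.76, 0.80, 0.73, 0.79, 0.78, 0.81])
dsc_method_b = np.array([0.74, 0.79, 0.73, 0.78, 0.75, 0.76, 0.72,
                         0.80, 0.74, 0.77, 0.71, 0.78, 0.75, 0.79])

# Paired, non-parametric comparison of the two methods across cases.
statistic, p_value = wilcoxon(dsc_method_a, dsc_method_b)
print(f"Wilcoxon signed-rank: statistic={statistic:.1f}, p={p_value:.4f}")
```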