medical image segmentation

  • Article type: Journal Article
    Medical image segmentation has made remarkable progress with advances in deep learning, although performance still depends heavily on the quality and quantity of labeled data. Although various deep learning model architectures and training methods have been proposed and high performance has been reported, limitations such as inter-class accuracy bias remain in actual clinical applications, especially because performance on small objects is significantly weaker in multi-organ segmentation tasks. In this paper, we propose an uncertainty-based contrastive learning technique, namely UncerNCE, with an optimal hybrid architecture for high classification and segmentation performance on small organs. Our backbone adopts a hybrid network that employs both convolutional and transformer layers, which have demonstrated remarkable performance in recent years. The key proposal of this study addresses the multi-class accuracy bias and resolves a common tradeoff in existing work between segmenting small object regions and reducing overall noise (i.e., false positives). Uncertainty-based contrastive learning on top of the proposed hybrid network performs spotlight learning on regions selected according to uncertainty and achieves accurate segmentation for all classes while suppressing noise. Comparison with state-of-the-art techniques demonstrates the superiority of our results on BTCV and 1K data.
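    A minimal sketch of the central idea, scoring each pixel by predictive entropy and applying a supervised contrastive (InfoNCE-style) loss only to the most uncertain pixels, is given below. It illustrates the general technique rather than the authors' UncerNCE implementation; all function names, tensor shapes, and hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def uncertainty_contrastive_loss(embeddings, logits, labels, top_k=256, tau=0.1):
    """Illustrative sketch: supervised contrastive loss restricted to the most
    uncertain pixels of each sample (names and shapes are assumptions).

    embeddings: (B, D, H, W) per-pixel feature vectors
    logits:     (B, C, H, W) segmentation logits
    labels:     (B, H, W)    ground-truth class indices
    """
    B, D, H, W = embeddings.shape
    probs = logits.softmax(dim=1)
    # Per-pixel predictive entropy serves as the uncertainty score.
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)   # (B, H, W)
    feats = embeddings.permute(0, 2, 3, 1).reshape(B, H * W, D)
    ent, lab = entropy.reshape(B, H * W), labels.reshape(B, H * W)

    losses = []
    for b in range(B):
        # "Spotlight" only the top-k most uncertain pixels of this sample.
        idx = ent[b].topk(min(top_k, H * W)).indices
        z = F.normalize(feats[b, idx], dim=1)                     # (K, D)
        y = lab[b, idx]
        sim = z @ z.t() / tau                                     # (K, K)
        self_mask = torch.eye(len(y), dtype=torch.bool, device=z.device)
        pos = (y[:, None] == y[None, :]).float().masked_fill(self_mask, 0.0)
        log_prob = sim - torch.logsumexp(sim.masked_fill(self_mask, float('-inf')),
                                         dim=1, keepdim=True)
        losses.append(-((log_prob * pos).sum(1) / pos.sum(1).clamp_min(1)).mean())
    return torch.stack(losses).mean()
```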

  • Article type: Journal Article
    OBJECTIVE: In recent years, the use of deep learning for medical image segmentation has become a popular trend, but its development also faces some challenges. Firstly, due to the specialized nature of medical data, precise annotation is time-consuming and labor-intensive. Training neural networks effectively with limited labeled data is a significant challenge in medical image analysis. Secondly, convolutional neural networks commonly used for medical image segmentation research often focus on local features in images. However, the recognition of complex anatomical structures or irregular lesions often requires the assistance of both local and global information, which has led to a bottleneck in its development. Addressing these two issues, in this paper, we propose a novel network architecture.
    METHODS: We integrate a shifted window mechanism to learn more comprehensive semantic information and employ a semi-supervised learning strategy that incorporates a flexible amount of unlabeled data. Specifically, a typical U-shaped encoder-decoder structure is applied to obtain rich feature maps. Each encoder is designed as a dual-branch structure containing Swin modules equipped with windows of different sizes to capture features at multiple scales. To effectively utilize unlabeled data, a level set function is introduced to establish consistency between function regression and pixel classification.
    RESULTS: We conducted experiments on the COVID-19 CT dataset and DRIVE dataset and compared our approach with various semi-supervised and fully supervised learning models. On the COVID-19 CT dataset, we achieved a segmentation accuracy of up to 74.56%. Our segmentation accuracy on the DRIVE dataset was 79.79%.
    CONCLUSIONS: The results demonstrate the outstanding performance of our method on several commonly used evaluation metrics. The high segmentation accuracy of our model demonstrates that utilizing Swin modules with different window sizes can enhance the feature extraction capability of the model, and the level set function can enable semi-supervised models to more effectively utilize unlabeled data. This provides meaningful insights for the application of deep learning in medical image segmentation. Our code will be released once the manuscript is accepted for publication.
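    The consistency between level-set regression and pixel classification described in the methods can be realized, for a binary task, by mapping the predicted signed distance function back to a soft foreground probability with a smooth Heaviside function and penalizing disagreement with the classifier. The sketch below is an illustration under that assumption, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def level_set_consistency(seg_logits, sdf_pred, k=1500.0):
    """Consistency between pixel classification and level-set regression.

    seg_logits: (B, 1, H, W) foreground logits from the classification head
    sdf_pred:   (B, 1, H, W) predicted signed distance function (negative inside)
    k:          steepness of the smooth Heaviside approximation
    """
    p_cls = torch.sigmoid(seg_logits)
    # Smooth Heaviside: inside the object (sdf < 0) maps to ~1, outside to ~0.
    p_sdf = torch.sigmoid(-k * sdf_pred)
    return F.mse_loss(p_cls, p_sdf)

# Usage on an unlabeled batch (head names are placeholders):
# loss_u = level_set_consistency(cls_head(x_unlabeled), sdf_head(x_unlabeled))
```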

  • Article type: Journal Article
    OBJECTIVE: The proximal isovelocity surface area (PISA) method is a well-established approach for mitral regurgitation (MR) quantification. However, it exhibits high inter-observer variability and inaccuracies in cases of non-hemispherical flow convergence and non-holosystolic MR. To address this, we present EasyPISA, a framework for automated integrated PISA measurements taken directly from 2-D color-Doppler sequences.
    METHODS: We trained convolutional neural networks (UNet/Attention UNet) on 1171 images from 196 recordings (54 patients) to detect and segment flow convergence zones in 2-D color-Doppler images. Different preprocessing schemes and model architectures were compared. Flow convergence surface areas were estimated, accounting for non-hemispherical convergence, and regurgitant volume (RVol) was computed by integrating the flow rate over time. EasyPISA was retrospectively applied to 26 MR patient examinations, comparing results with reference PISA RVol measurements, severity grades, and cMRI RVol measurements for 13 patients.
    RESULTS: The UNet trained on duplex images achieved the best results (precision: 0.63, recall: 0.95, Dice: 0.58, flow rate error: 10.4 ml/s). False-positive segmentations on the atrial side of the mitral valve were mitigated by integration with a mitral valve segmentation network. The intraclass correlation coefficient was 0.83 between EasyPISA and PISA, and 0.66 between EasyPISA and cMRI. Relative standard deviations were 46% and 53%, respectively. Receiver operating characteristic analysis demonstrated a mean area under the curve between 0.90 and 0.97 for EasyPISA RVol estimates against the reference severity grades.
    CONCLUSIONS: EasyPISA demonstrates promising results for fully automated integrated PISA measurements in MR, offering potential benefits in workload reduction and mitigating inter-observer variability in MR assessment.
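    For context, regurgitant volume in the PISA framework is the time integral of the instantaneous flow rate. The sketch below uses the classical hemispherical formulation Q = 2*pi*r^2*v_alias with trapezoidal integration; EasyPISA additionally corrects for non-hemispherical convergence, which this illustration does not attempt. All variable names and the synthetic radius curve are assumptions.

```python
import numpy as np

def pisa_flow_rate(radius_m, aliasing_velocity_ms):
    """Instantaneous flow rate Q = 2*pi*r^2 * v_alias (hemispherical assumption)."""
    return 2.0 * np.pi * np.asarray(radius_m) ** 2 * aliasing_velocity_ms

def regurgitant_volume(times_s, radii_m, aliasing_velocity_ms):
    """Integrate the flow rate over time (trapezoidal rule); result in m^3."""
    q = pisa_flow_rate(radii_m, aliasing_velocity_ms)
    t = np.asarray(times_s)
    return float(np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(t)))

# Example: synthetic PISA radii sampled every 10 ms during systole, 0.4 m/s aliasing velocity.
t = np.arange(0, 0.30, 0.01)
r = 0.008 * np.sin(np.pi * t / 0.30)            # peak radius 8 mm
rvol_ml = regurgitant_volume(t, r, 0.4) * 1e6   # m^3 -> ml
print(f"RVol ~ {rvol_ml:.1f} ml")
```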

  • Article type: Journal Article
    Currently, deep learning is developing rapidly in the field of image segmentation, and medical image segmentation is one of its key applications. Conventional CNNs have achieved great success in general medical image segmentation tasks, but they suffer from feature loss in the feature extraction stage and lack the ability to explicitly model long-range dependencies, which makes them difficult to adapt to human organ segmentation. Although methods containing attention mechanisms have made good progress in semantic segmentation, most current attention mechanisms are limited to a single sample, whereas human organ images come in large numbers of samples, and ignoring the correlation between samples is detrimental to segmentation. To solve these problems, an internal and external dual-attention segmentation network (IEA-Net) is proposed in this paper, in which the ICSwR (interleaved convolutional system with residual) module and the IEAM module are designed. The ICSwR contains interleaved convolutions and skip connections, which are used for the initial extraction of features in the encoder. The IEAM module (internal and external dual-attention module) consists of the LGGW-SA (local-global Gaussian-weighted self-attention) module and the EA module arranged in tandem. The LGGW-SA module focuses on learning local-global feature correlations within individual samples for efficient feature extraction, while the EA module is designed to capture inter-sample connections, addressing multi-sample complexities. Additionally, skip connections are incorporated into each IEAM module in both the encoder and decoder to reduce feature loss. We tested our method on the Synapse multi-organ segmentation dataset and the ACDC cardiac segmentation dataset, and the experimental results show that the proposed method achieves better performance than other state-of-the-art methods.
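    The EA module is described as capturing inter-sample connections. In the literature, external attention achieves this by attending over small learnable memory units that are shared across all samples instead of over the sample's own tokens. The sketch below follows that generic recipe (two linear memories with double normalization) and is not the authors' exact IEA-Net module.

```python
import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    """Generic external attention over learnable memory units shared across samples."""
    def __init__(self, dim, mem_size=64):
        super().__init__()
        self.mk = nn.Linear(dim, mem_size, bias=False)   # external key memory
        self.mv = nn.Linear(mem_size, dim, bias=False)   # external value memory

    def forward(self, x):                                # x: (B, N, dim), N = H*W tokens
        attn = self.mk(x)                                # (B, N, mem_size)
        attn = attn.softmax(dim=1)                       # normalize over the token axis
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-9)  # double normalization
        return self.mv(attn)                             # (B, N, dim)

# Usage on a flattened feature map (shapes assumed):
# ea = ExternalAttention(dim=256); y = ea(feat.flatten(2).transpose(1, 2))
```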

  • Article type: Journal Article
    Medical image segmentation is crucial for healthcare, yet convolution-based methods like U-Net face limitations in modeling long-range dependencies. To address this, Transformers designed for sequence-to-sequence prediction have been integrated into medical image segmentation. However, a comprehensive understanding of Transformers' self-attention in U-Net components is lacking. TransUNet, first introduced in 2021, is widely recognized as one of the first models to integrate the Transformer into medical image analysis. In this study, we present the versatile framework of TransUNet, which encapsulates Transformers' self-attention into two key modules: (1) a Transformer encoder tokenizing image patches from a convolutional neural network (CNN) feature map, facilitating global context extraction, and (2) a Transformer decoder refining candidate regions through cross-attention between proposals and U-Net features. These modules can be flexibly inserted into the U-Net backbone, resulting in three configurations: Encoder-only, Decoder-only, and Encoder+Decoder. TransUNet provides a library encompassing both 2D and 3D implementations, enabling users to easily tailor the chosen architecture. Our findings highlight the encoder's efficacy in modeling interactions among multiple abdominal organs and the decoder's strength in handling small targets like tumors. It excels in diverse medical applications, such as multi-organ segmentation, pancreatic tumor segmentation, and hepatic vessel segmentation. Notably, our TransUNet achieves a significant average Dice improvement of 1.06% and 4.30% for multi-organ segmentation and pancreatic tumor segmentation, respectively, compared to the highly competitive nn-UNet, and surpasses the top-1 solution in the BraTS2021 challenge. 2D and 3D code and models are available at https://github.com/Beckschen/TransUNet and https://github.com/Beckschen/TransUNet-3D, respectively.
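    The Encoder-only configuration described above tokenizes a CNN feature map into a patch sequence and processes it with a Transformer encoder for global context. The sketch below shows that step with standard PyTorch modules; it is a simplified stand-in for the released TransUNet code, and the channel counts, depth, and token count are illustrative.

```python
import torch
import torch.nn as nn

class CNNFeatureTokenizer(nn.Module):
    """Tokenize a CNN feature map into a sequence and run a Transformer encoder."""
    def __init__(self, in_ch=1024, embed_dim=768, depth=4, heads=8, num_tokens=196):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=1)      # 1x1 patch embedding
        self.pos = nn.Parameter(torch.zeros(1, num_tokens, embed_dim))
        layer = nn.TransformerEncoderLayer(embed_dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, feat):                                        # feat: (B, in_ch, H, W)
        x = self.proj(feat)                                         # (B, embed_dim, H, W)
        B, D, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2) + self.pos[:, :H * W] # (B, H*W, embed_dim)
        tokens = self.encoder(tokens)
        return tokens.transpose(1, 2).reshape(B, D, H, W)           # back to a feature map

# Example: a 14x14 feature map from a CNN backbone (196 tokens).
y = CNNFeatureTokenizer()(torch.randn(2, 1024, 14, 14))
```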

  • Article type: Journal Article
    Medical image segmentation demands precise accuracy and the capability to assess segmentation uncertainty for informed clinical decision-making. Denoising Diffusion Probabilistic Models (DDPMs), with their advances in image generation, can treat segmentation as a conditional generation task, providing accurate segmentation and uncertainty estimation. However, current DDPMs used in medical image segmentation suffer from low inference efficiency and prediction errors caused by excessive noise at the end of the forward process. To address this issue, we propose an accelerated denoising diffusion probabilistic model via a truncated inverse process (ADDPM) that is specifically designed for medical image segmentation. The inverse process of ADDPM starts from a non-Gaussian distribution and terminates early once a prediction with relatively low noise is obtained after multiple iterations of denoising. We employ a separate, powerful segmentation network to obtain a pre-segmentation and construct the non-Gaussian distribution of the segmentation based on the forward diffusion rule. By further adopting a separate denoising network, the final segmentation can be obtained with just one denoising step from the predictions with low noise. ADDPM greatly reduces the number of denoising steps to approximately one-tenth of that in vanilla DDPMs. Our experiments on four segmentation tasks demonstrate that ADDPM outperforms both vanilla DDPMs and existing representative accelerated DDPM methods. Moreover, ADDPM can be easily integrated with existing advanced segmentation models to improve segmentation performance and provide uncertainty estimation. Implementation code: https://github.com/Guoxt/ADDPM.
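    The truncation idea can be summarized as follows: noise a pre-segmentation with the standard forward diffusion rule up to a small timestep, then denoise from there instead of from pure Gaussian noise. The sketch below illustrates a single-step variant under standard DDPM notation; `pre_seg_net` and `denoise_net` are placeholder models, not the released ADDPM networks.

```python
import torch

def truncated_reverse_step(image, pre_seg_net, denoise_net, alphas_cumprod, t_start):
    """Start reverse diffusion from a noised pre-segmentation instead of pure noise.

    image:          (B, C, H, W) conditioning image
    pre_seg_net:    model producing a rough segmentation mask in [0, 1]
    denoise_net:    model predicting x0 from (noisy mask, image, timestep)
    alphas_cumprod: (T,) tensor with the cumulative product of the DDPM alphas
    t_start:        truncation timestep (small, e.g. T // 10)
    """
    with torch.no_grad():
        x0_rough = pre_seg_net(image)                    # non-Gaussian starting point
        a_bar = alphas_cumprod[t_start]
        noise = torch.randn_like(x0_rough)
        # Forward diffusion rule: q(x_t | x_0) = sqrt(a_bar) * x_0 + sqrt(1 - a_bar) * eps
        x_t = a_bar.sqrt() * x0_rough + (1 - a_bar).sqrt() * noise
        # One denoising step back to an estimate of x_0 (the final segmentation).
        t = torch.full((image.shape[0],), t_start, device=image.device, dtype=torch.long)
        return denoise_net(x_t, image, t)
```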

  • Article type: Journal Article
    BACKGROUND: A single learning algorithm can produce deep learning-based image segmentation models that vary in performance purely due to random effects during training. This study assessed the effect of these random performance fluctuations on the reliability of standard methods of comparing segmentation models.
    METHODS: The influence of random effects during training was assessed by running a single learning algorithm (nnU-Net) with 50 different random seeds for three multiclass 3D medical image segmentation problems, including brain tumour, hippocampus, and cardiac segmentation. Recent literature was sampled to find the most common methods for estimating and comparing the performance of deep learning segmentation models. Based on this, segmentation performance was assessed using both hold-out validation and 5-fold cross-validation, and the statistical significance of performance differences was measured using the paired t-test and the Wilcoxon signed-rank test on Dice scores.
    RESULTS: For the different segmentation problems, the seed producing the highest mean Dice score statistically significantly outperformed between 0% and 76% of the remaining seeds when estimating performance using hold-out validation, and between 10% and 38% when estimating performance using 5-fold cross-validation.
    CONCLUSIONS: Random effects during training can cause high rates of statistically significant performance differences between segmentation models from the same learning algorithm. While statistical testing is widely used in contemporary literature, our results indicate that a statistically significant difference in segmentation performance is a weak and unreliable indicator of a true performance difference between two learning algorithms.
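    The statistical comparison examined in this study, a paired t-test and a Wilcoxon signed-rank test on per-case Dice scores from two models evaluated on the same cases, can be reproduced with SciPy as shown below; the Dice arrays are synthetic placeholders for real per-case results.

```python
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

rng = np.random.default_rng(0)
# Placeholder per-case Dice scores for two models evaluated on the same 50 cases.
dice_model_a = np.clip(rng.normal(0.85, 0.05, size=50), 0, 1)
dice_model_b = np.clip(dice_model_a + rng.normal(0.005, 0.02, size=50), 0, 1)

t_stat, p_t = ttest_rel(dice_model_a, dice_model_b)
w_stat, p_w = wilcoxon(dice_model_a, dice_model_b)
print(f"paired t-test:        t = {t_stat:.3f}, p = {p_t:.4f}")
print(f"Wilcoxon signed-rank: W = {w_stat:.3f}, p = {p_w:.4f}")
```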

  • Article type: Journal Article
    Recently, ViTs and CNNs based on encoder-decoder architectures have become the dominant models in the field of medical image segmentation. However, each has shortcomings: (1) CNNs struggle to capture interactions between two locations that are far apart; (2) ViTs cannot capture the interactions of local contextual information and carry high computational complexity. To remedy these deficiencies, we propose a new network for medical image segmentation called FCSU-Net. FCSU-Net uses the proposed collaborative fusion of multi-scale feature blocks, which enables the network to obtain richer and more accurate features. In addition, FCSU-Net fuses full-scale feature information through the FFF (Full-scale Feature Fusion) structure instead of simple skip connections, and establishes long-range dependencies across multiple dimensions through the CS (Cross-dimension Self-attention) mechanism, in which the dimensions complement each other. The CS mechanism also retains the advantage of convolutions in capturing local contextual weights. Finally, FCSU-Net is validated on several datasets, and the results show that FCSU-Net not only has a relatively small number of parameters but also delivers leading segmentation performance.
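    The abstract does not detail the FFF structure, but a generic full-scale fusion (resampling the features from every encoder stage to a common resolution and width before concatenation) conveys the idea of replacing simple skip connections with full-scale information flow. The sketch below is such a generic module, not the authors' FCSU-Net code; channel widths are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FullScaleFusion(nn.Module):
    """Generic full-scale fusion: resample features from every encoder stage to a
    target resolution, project each to a common width, and concatenate."""
    def __init__(self, in_channels=(64, 128, 256, 512), width=64):
        super().__init__()
        self.proj = nn.ModuleList([nn.Conv2d(c, width, kernel_size=1) for c in in_channels])
        self.fuse = nn.Conv2d(width * len(in_channels), width * len(in_channels), 3, padding=1)

    def forward(self, feats, target_hw):                 # feats: list of (B, C_i, H_i, W_i)
        resampled = [
            F.interpolate(p(f), size=target_hw, mode="bilinear", align_corners=False)
            for p, f in zip(self.proj, feats)
        ]
        return self.fuse(torch.cat(resampled, dim=1))

# Example: fuse four encoder stages of a 256x256 input at the 64x64 decoder level.
feats = [torch.randn(1, c, 256 // 2**i, 256 // 2**i) for i, c in enumerate((64, 128, 256, 512))]
y = FullScaleFusion()(feats, target_hw=(64, 64))
```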

  • Article type: Journal Article
    The Swin Transformer is an important work among the attempts to reduce the computational complexity of Transformers while maintaining their excellent performance in computer vision. Window-based patch self-attention can exploit the local connectivity of image features, and shifted-window-based patch self-attention enables the communication of information between different patches across the entire image. Through an in-depth study of the effect of different shifted window sizes on the efficiency of patch information communication, this article proposes a Dual-Scale Transformer with a double-sized shifted window attention method. The proposed method surpasses CNN-based methods such as U-Net, AttenU-Net, ResU-Net, and CE-Net by a considerable margin (approximately a 3%-6% increase) and outperforms the Transformer-based single-scale Swin Transformer (SwinT) (approximately a 1% increase) on the Kvasir-SEG, ISIC2017, MICCAI EndoVisSub-Instrument, and CadVesSet datasets. The experimental results verify that the proposed dual-scale shifted window attention benefits the communication of patch information and can enhance segmentation results to the state of the art. We also conduct an ablation study on the effect of the shifted window size on information flow efficiency and verify that dual-scale shifted window attention is the optimized network design. Our study highlights the significant impact of network structure design on visual performance, providing valuable insights for the design of networks based on Transformer architectures.
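    The dual-size design amounts to partitioning the same feature map into shifted windows at two different window sizes and running window attention on each partition in parallel. The sketch below shows only the partitioning and cyclic-shift step in the Swin style; how the two branches are combined afterwards is not specified here, and the example shapes are assumptions.

```python
import torch

def window_partition(x, ws):
    """Split (B, H, W, C) into non-overlapping ws x ws windows: (num_windows*B, ws*ws, C)."""
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

def dual_scale_shifted_windows(x, sizes=(4, 8), shift=True):
    """Partition the same feature map at two window sizes, each cyclically
    shifted by half a window (Swin-style), for two parallel attention branches."""
    outputs = []
    for ws in sizes:
        shifted = torch.roll(x, shifts=(-ws // 2, -ws // 2), dims=(1, 2)) if shift else x
        outputs.append(window_partition(shifted, ws))
    return outputs

# Example: a 32x32 feature map with 96 channels split into 4x4 and 8x8 shifted windows.
small, large = dual_scale_shifted_windows(torch.randn(2, 32, 32, 96))
# small: (128, 16, 96), large: (32, 64, 96) -> feed each to its own window attention.
```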

  • Article type: Journal Article
    We propose a shape-prior-representation-constrained multi-scale feature fusion segmentation network for medical image segmentation, comprising training and testing stages. The novelty of our training framework lies in two modules: the shape prior constraint and the multi-scale feature fusion. The shape prior learning model is embedded into a segmentation neural network to address low contrast and neighboring organs with intensities similar to the target organ. The multi-scale feature fusion provides both local and global context to address large variations in patient posture as well as in organ shape. In the testing stage, we propose a circular collaboration strategy that combines a shape generator auto-encoder network with the segmentation network, allowing the two models to collaborate with each other and produce a cooperative effect that leads to accurate segmentations. Our proposed method is evaluated and demonstrated on three different datasets: the ACDC MICCAI'17 Challenge dataset, the COVID-19 CT lung dataset, and the LiTS2017 liver CT dataset, and its results are compared with the recent state of the art in these areas. Our method ranked 1st on the ACDC dataset in terms of Dice score and achieved very competitive performance on COVID-19 CT lung and LiTS2017 liver segmentation.
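    One common way to impose a shape prior with an auto-encoder, which is broadly consistent with the framework described above but offered here only as an illustration, is to pre-train the auto-encoder on ground-truth masks and penalize segmentations that it cannot reconstruct well. The function below sketches that term; it is not the authors' implementation, and the loss weighting is an assumption.

```python
import torch
import torch.nn.functional as F

def shape_prior_loss(pred_mask, shape_autoencoder):
    """Penalize segmentations that the mask auto-encoder (pre-trained on label
    masks) cannot reconstruct well, i.e. masks outside the learned shape manifold."""
    with torch.no_grad():
        reconstructed = shape_autoencoder(pred_mask)   # auto-encoder weights are frozen
    return F.mse_loss(pred_mask, reconstructed)

# Illustrative total training objective (weights are assumptions):
# loss = dice_loss(pred_mask, gt_mask) + 0.1 * shape_prior_loss(pred_mask, shape_ae)
```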