image segmentation

  • Article type: Journal Article
    OBJECTIVE: Myocardial contrast echocardiography (MCE) plays a crucial role in diagnosing ischemia, infarction, masses and other cardiac conditions. In the realm of MCE image analysis, accurate and consistent myocardial segmentation results are essential for enabling automated analysis of various heart diseases. However, current manual diagnostic methods in MCE suffer from poor repeatability and limited clinical applicability. MCE images often exhibit low quality and high noise due to the instability of ultrasound signals, while interference structures can further disrupt segmentation consistency.
    METHODS: To overcome these challenges, we propose a deep-learning network for the segmentation of MCE. This architecture leverages dilated convolutions to capture large-scale information without sacrificing positional accuracy and modifies multi-head self-attention to enhance global context and ensure consistency, effectively overcoming issues related to low image quality and interference. Furthermore, we also adapt the cascaded application of transformers with convolutional neural networks for improved segmentation in MCE.
    RESULTS: In our experiments, our architecture achieved the best Dice score of 84.35% on standard MCE views compared with several state-of-the-art segmentation models. For non-standard views and frames with interfering structures (masses), our model also attained the best Dice scores of 83.33% and 83.97%, respectively.
    CONCLUSIONS: These results demonstrate that our architecture offers excellent shape consistency and robustness, allowing it to handle the segmentation of various types of MCE. Our relatively precise and consistent myocardial segmentation results provide an essential foundation for the automated analysis of various heart diseases, with the potential to uncover underlying pathological features and reduce healthcare costs.
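    The abstract names two architectural ingredients (dilated convolutions for large-scale context and modified multi-head self-attention for global consistency) but not their exact configuration. Below is a minimal, hypothetical PyTorch sketch of how such a block could be composed; the layer sizes, residual connection, and normalization are illustrative assumptions, not the published design.

        import torch
        import torch.nn as nn

        class DilatedAttentionBlock(nn.Module):
            """Illustrative block: a dilated convolution enlarges the receptive field
            without downsampling, then multi-head self-attention mixes global context
            across all spatial positions."""
            def __init__(self, channels=64, dilation=2, num_heads=4):
                super().__init__()
                self.dilated_conv = nn.Conv2d(channels, channels, kernel_size=3,
                                              padding=dilation, dilation=dilation)
                self.attn = nn.MultiheadAttention(embed_dim=channels,
                                                  num_heads=num_heads, batch_first=True)
                self.norm = nn.LayerNorm(channels)

            def forward(self, x):                      # x: (B, C, H, W)
                x = torch.relu(self.dilated_conv(x))
                b, c, h, w = x.shape
                tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C)
                attn_out, _ = self.attn(tokens, tokens, tokens)
                tokens = self.norm(tokens + attn_out)  # residual + normalization
                return tokens.transpose(1, 2).reshape(b, c, h, w)

        # Example: a 64-channel feature map from a downsampled MCE frame
        feats = torch.randn(1, 64, 32, 32)
        print(DilatedAttentionBlock()(feats).shape)    # torch.Size([1, 64, 32, 32])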

  • Article type: Journal Article
    Despite recent advances, the adoption of computer vision methods into clinical and commercial applications has been hampered by the limited availability of accurate ground truth tissue annotations required to train robust supervised models. Generating such ground truth can be accelerated by annotating tissue molecularly using immunofluorescence staining (IF) and mapping these annotations to a post-IF H&E (terminal H&E). Mapping the annotations between the IF and the terminal H&E increases both the scale and accuracy with which ground truth can be generated. However, discrepancies between terminal H&E and conventional H&E caused by IF tissue processing have limited this implementation. We sought to overcome this challenge and achieve compatibility between these parallel modalities using synthetic image generation, in which a cycle-consistent generative adversarial network (CycleGAN) was applied to transfer the appearance of conventional H&E such that it emulates the terminal H&E. These synthetic emulations allowed us to train a deep learning (DL) model for the segmentation of epithelium in the terminal H&E that could be validated against the IF staining of epithelial-based cytokeratins. The combination of this segmentation model with the CycleGAN stain transfer model enabled performant epithelium segmentation in conventional H&E images. The approach demonstrates that the training of accurate segmentation models for the breadth of conventional H&E data can be executed free of human-expert annotations by leveraging molecular annotation strategies such as IF, so long as the tissue impacts of the molecular annotation protocol are captured by generative models that can be deployed prior to the segmentation process.
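    At inference time, the approach amounts to composing two trained models: the CycleGAN generator that maps a conventional H&E tile into the terminal-H&E appearance, followed by the epithelium segmenter trained on terminal H&E. A hedged sketch of that composition is below; `stain_transfer` and `epi_segmenter` are placeholder stand-ins for the trained models, which are not published here.

        import torch
        import torch.nn as nn

        # Placeholder stand-ins for the trained CycleGAN generator (conventional ->
        # terminal H&E appearance) and the epithelium model trained on terminal H&E.
        stain_transfer = nn.Identity()
        epi_segmenter = nn.Sequential(nn.Conv2d(3, 1, kernel_size=1), nn.Sigmoid())

        def segment_conventional_he(rgb_tile: torch.Tensor) -> torch.Tensor:
            """Two-step inference: translate the stain appearance first, then segment
            epithelium on the translated tile."""
            with torch.no_grad():
                terminal_like = stain_transfer(rgb_tile)        # emulate terminal H&E
                epithelium_prob = epi_segmenter(terminal_like)  # pixel-wise probability
            return (epithelium_prob > 0.5).float()              # binary epithelium mask

        mask = segment_conventional_he(torch.rand(1, 3, 256, 256))
        print(mask.shape)  # torch.Size([1, 1, 256, 256])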

  • Article type: Journal Article
    Image segmentation plays a pivotal role in medical image analysis, particularly for accurately isolating tumors and lesions. Effective segmentation improves diagnostic precision and facilitates quantitative analysis, which is vital for medical professionals. However, traditional segmentation methods often struggle with multilevel thresholding due to the associated computational complexity. Determining the optimal threshold set is an NP-hard problem, highlighting the pressing need for efficient optimization strategies to overcome these challenges. This paper introduces a multi-threshold image segmentation (MTIS) method that integrates a hybrid approach combining Differential Evolution (DE) and the Crayfish Optimization Algorithm (COA), known as HADECO. Utilizing two-dimensional (2D) Kapur's entropy and a 2D histogram, this method aims to enhance the efficiency and accuracy of subsequent image analysis and diagnosis. HADECO is a hybrid algorithm that combines DE and COA by exchanging information based on predefined rules, leveraging the strengths of both for superior optimization results. It employs Latin Hypercube Sampling (LHS) to generate a high-quality initial population. HADECO introduces an improved DE algorithm (IDE) with adaptive and dynamic adjustments to key DE parameters and new mutation strategies to enhance its search capability. In addition, it incorporates an adaptive COA (ACOA) with dynamic adjustments to the switching probability parameter, effectively balancing exploration and exploitation. To evaluate the effectiveness of HADECO, its performance is initially assessed using the CEC'22 benchmark functions. HADECO is evaluated against several contemporary algorithms using the Wilcoxon signed-rank test (WSRT) and the Friedman test (FT) to integrate the results. The findings highlight HADECO's superior optimization abilities, demonstrated by its lowest average Friedman ranking of 1.08. Furthermore, the HADECO-based MTIS method is evaluated using knee MRI images and CT scans of brain intracranial hemorrhage (ICH). Quantitative results in brain hemorrhage image segmentation show that the proposed method achieves a superior average peak signal-to-noise ratio (PSNR) and feature similarity index (FSIM) of 1.5 and 1.7 at the 6-level threshold. In knee image segmentation, it attains an average PSNR and FSIM of 1.3 and 1.2 at the 5-level threshold, demonstrating the method's effectiveness in solving image segmentation problems.
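    The fitness function driving HADECO is Kapur's entropy computed on a 2D histogram. As a hedged illustration of the criterion itself, the sketch below implements the simpler 1D form of Kapur's entropy for a candidate set of thresholds; the paper's 2D-histogram variant adds a joint grey-level/local-average dimension that is not reproduced here.

        import numpy as np

        def kapur_entropy(hist, thresholds):
            """Kapur's entropy objective for multilevel thresholding (1-D form).
            hist: 256-bin grey-level histogram (counts); thresholds: sorted cut points.
            The optimizer (HADECO in the paper) searches for thresholds maximizing this."""
            p = hist.astype(float) / (hist.sum() + 1e-12)
            edges = [0] + sorted(int(t) for t in thresholds) + [len(p)]
            total = 0.0
            for lo, hi in zip(edges[:-1], edges[1:]):
                w = p[lo:hi].sum()
                if w <= 0:
                    continue
                q = p[lo:hi] / w
                q = q[q > 0]
                total += -(q * np.log(q)).sum()   # entropy of this grey-level class
            return total

        # Example: score two candidate threshold vectors on a synthetic bimodal image
        rng = np.random.default_rng(0)
        pixels = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)])
        hist, _ = np.histogram(pixels.clip(0, 255), bins=256, range=(0, 255))
        print(kapur_entropy(hist, [100, 150]), kapur_entropy(hist, [30, 220]))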

  • Article type: Journal Article
    Image segmentation is a crucial process in the field of image processing. Multilevel threshold segmentation is an effective image segmentation method, where an image is segmented into different regions based on multilevel thresholds for information analysis. However, the complexity of multilevel thresholding increases dramatically as the number of thresholds increases. To address this challenge, this article proposes a novel hybrid algorithm, termed the differential evolution-golden jackal optimizer (DEGJO), for multilevel thresholding image segmentation using the minimum cross-entropy (MCE) as the fitness function. The DE algorithm is combined with the GJO algorithm for the iterative updating of positions, which enhances the search capacity of the GJO algorithm. The performance of the DEGJO algorithm is assessed on the CEC2021 benchmark functions and compared with state-of-the-art optimization algorithms. Additionally, the efficacy of the proposed algorithm is evaluated by performing multilevel segmentation experiments on benchmark images. The experimental results demonstrate that the DEGJO algorithm achieves superior performance in terms of fitness values compared to other metaheuristic algorithms. Moreover, it also yields good results in quantitative performance metrics such as the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and feature similarity index (FSIM).
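    The fitness function here is the minimum cross-entropy (MCE) criterion. As a hedged sketch, the following implements one common form of the multilevel MCE objective over a grey-level histogram and shows how candidate threshold vectors (the kind DEGJO would evolve) can be scored; it illustrates the criterion, not the published code.

        import numpy as np

        def min_cross_entropy(hist, thresholds):
            """Minimum cross-entropy criterion for multilevel thresholding (to minimize).
            hist: 256-bin grey-level histogram (counts); thresholds: sorted cut points."""
            h = hist.astype(float)
            levels = np.arange(1, len(h) + 1, dtype=float)     # 1-based to avoid log(0)
            edges = [0] + sorted(int(t) for t in thresholds) + [len(h)]
            d = 0.0
            for lo, hi in zip(edges[:-1], edges[1:]):
                hh, ll = h[lo:hi], levels[lo:hi]
                mass = (ll * hh).sum()
                if mass <= 0:
                    continue
                mu = mass / (hh.sum() + 1e-12)                 # class mean grey level
                d += (ll * hh * np.log(ll / mu)).sum()         # cross-entropy term
            return d

        # Score two candidate threshold vectors, as a metaheuristic like DEGJO would
        rng = np.random.default_rng(1)
        hist, _ = np.histogram(rng.integers(0, 256, 10000), bins=256, range=(0, 255))
        candidates = [[64, 128, 192], [50, 120, 200]]
        print(min(candidates, key=lambda t: min_cross_entropy(hist, t)))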

  • Article type: Journal Article
    BACKGROUND: Fully automatic analysis of myocardial perfusion MRI datasets enables rapid and objective reporting of stress/rest studies in patients with suspected ischemic heart disease. Developing deep learning techniques that can analyze multi-center datasets despite limited training data and variations in software (pulse sequence) and hardware (scanner vendor) is an ongoing challenge.
    METHODS: Datasets from 3 medical centers acquired at 3T (n = 150 subjects; 21,150 first-pass images) were included: an internal dataset (inD; n = 95) and two external datasets (exDs; n = 55) used for evaluating the robustness of the trained deep neural network (DNN) models against differences in pulse sequence (exD-1) and scanner vendor (exD-2). A subset of inD (n = 85) was used for training/validation of a pool of DNNs for segmentation, all using the same spatiotemporal U-Net architecture and hyperparameters but with different parameter initializations. We employed a space-time sliding-patch analysis approach that automatically yields a pixel-wise "uncertainty map" as a byproduct of the segmentation process. In our approach, dubbed Data Adaptive Uncertainty-Guided Space-time (DAUGS) analysis, a given test case is segmented by all members of the DNN pool and the resulting uncertainty maps are leveraged to automatically select the "best" one among the pool of solutions. For comparison, we also trained a DNN using the established approach with the same settings (hyperparameters, data augmentation, etc.).
    RESULTS: The proposed DAUGS analysis approach performed similarly to the established approach on the internal dataset (Dice score for the testing subset of inD: 0.896 ± 0.050 vs. 0.890 ± 0.049; p = n.s.), whereas it significantly outperformed it on the external datasets (Dice for exD-1: 0.885 ± 0.040 vs. 0.849 ± 0.065, p < 0.005; Dice for exD-2: 0.811 ± 0.070 vs. 0.728 ± 0.149, p < 0.005). Moreover, the number of image series with "failed" segmentation (defined as having myocardial contours that include the blood pool or are noncontiguous in ≥1 segment) was significantly lower for the proposed vs. the established approach (4.3% vs. 17.1%, p < 0.0005).
    CONCLUSIONS: The proposed DAUGS analysis approach has the potential to improve the robustness of deep learning methods for segmentation of multi-center stress perfusion datasets with variations in the choice of pulse sequence, site location or scanner vendor.
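    The core DAUGS idea is model selection rather than averaging: every member of the DNN pool segments the test case, and the per-pixel uncertainty maps decide which single solution to keep. The sketch below illustrates one plausible selection rule (lowest mean uncertainty inside the predicted myocardium); the actual rule used in the paper may differ, and the arrays here are toy data.

        import numpy as np

        def daugs_style_select(segmentations, uncertainty_maps):
            """Pick the pool member whose mean uncertainty within its own predicted
            mask is lowest (illustrative selection rule, not the published one)."""
            scores = [u[s > 0].mean() if (s > 0).any() else np.inf
                      for s, u in zip(segmentations, uncertainty_maps)]
            best = int(np.argmin(scores))
            return segmentations[best], best

        # Toy example: three pool members on a 64 x 64 frame
        rng = np.random.default_rng(0)
        segs = [(rng.random((64, 64)) > 0.5).astype(np.uint8) for _ in range(3)]
        uncs = [rng.random((64, 64)) for _ in range(3)]
        mask, idx = daugs_style_select(segs, uncs)
        print("selected pool member:", idx)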

  • Article type: Journal Article
    With the advancement of computer-aided diagnosis, the automatic segmentation of COVID-19 infection areas holds great promise for assisting in the timely diagnosis and recovery of patients in clinical practice. Currently, methods relying on U-Net face challenges in effectively utilizing fine-grained semantic information from input images and bridging the semantic gap between the encoder and decoder. To address these issues, we propose FMD-UNet, a dual-decoder U-Net network for COVID-19 infection segmentation, which integrates a Fine-grained Feature Squeezing (FGFS) decoder and a Multi-scale Dilated Semantic Aggregation (MDSA) decoder. The FGFS decoder produces fine feature maps through the compression of fine-grained features and a weighted attention mechanism, guiding the model to capture detailed semantic information. The MDSA decoder consists of three hierarchical MDSA modules designed for different stages of the input information. These modules progressively fuse dilated convolutions at different scales to process the shallow and deep semantic information from the encoder and use the extracted feature information to bridge the semantic gaps at various stages. This design captures extensive contextual information while decoding and predicting the segmentation, thereby suppressing the growth in model parameters. To better validate the robustness and generalizability of FMD-UNet, we conducted comprehensive performance evaluations and ablation experiments on three public datasets and achieved leading Dice Similarity Coefficient (DSC) scores of 84.76%, 78.56%, and 61.99% for COVID-19 infection segmentation, respectively. Compared with previous methods, FMD-UNet has fewer parameters and a shorter inference time, which also demonstrates its competitiveness.
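    The MDSA decoder is described as progressively fusing dilated convolutions at several scales to aggregate context. A hedged PyTorch sketch of one such multi-scale dilated fusion module is shown below; the dilation rates, channel counts, and fusion by concatenation are illustrative assumptions rather than the published FMD-UNet configuration.

        import torch
        import torch.nn as nn

        class MultiScaleDilatedFusion(nn.Module):
            """Parallel dilated convolutions at several rates, fused by a 1x1 convolution
            to aggregate context at different scales (illustrative MDSA-style module)."""
            def __init__(self, channels=64, rates=(1, 2, 4)):
                super().__init__()
                self.branches = nn.ModuleList(
                    nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
                    for r in rates)
                self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

            def forward(self, x):
                multi = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
                return torch.relu(self.fuse(multi))   # fused multi-scale features

        x = torch.randn(1, 64, 56, 56)
        print(MultiScaleDilatedFusion()(x).shape)     # torch.Size([1, 64, 56, 56])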

  • Article type: Journal Article
    The brain atlas, which provides information about the distribution of genes, proteins, neurons, or anatomical regions, plays a crucial role in contemporary neuroscience research. To analyze the spatial distribution of those substances based on images from different brain samples, we often need to warp and register individual brain images to a standard brain template. However, the process of warping and registration may lead to spatial errors, thereby severely reducing the accuracy of the analysis. To address this issue, we develop an automated method for segmenting neuropils in the Drosophila brain from fluorescence images in the FlyCircuit database. This technique allows future brain atlas studies to be conducted accurately at the individual level without warping and aligning to a standard brain template. Our method, LYNSU (Locating by YOLO and Segmenting by U-Net), consists of two stages. In the first stage, we use the YOLOv7 model to quickly locate neuropils and rapidly extract small-scale 3D images as input for the second-stage model. This stage achieves a 99.4% accuracy rate in neuropil localization. In the second stage, we employ the 3D U-Net model to segment neuropils. LYNSU can achieve high segmentation accuracy using a small training set consisting of images from merely 16 brains. We demonstrate LYNSU on six distinct neuropils or structures, achieving a segmentation accuracy comparable to professional manual annotations, with a 3D Intersection-over-Union (IoU) reaching up to 0.869. Our method takes only about 7 s to segment a neuropil while achieving a level of performance similar to that of human annotators. To demonstrate a use case of LYNSU, we applied it to all female Drosophila brains from the FlyCircuit database to investigate the asymmetry of the mushroom bodies (MBs), the learning center of fruit flies. We used LYNSU to segment the bilateral MBs and compared the volumes between left and right for each individual. Notably, of 8,703 valid brain samples, 10.14% showed bilateral volume differences that exceeded 10%. The study demonstrates the potential of the proposed method in high-throughput anatomical analysis and connectomics construction of the Drosophila brain.
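    LYNSU is a two-stage pipeline: a detector localizes the neuropil, a small sub-volume is cropped around the detection, and a 3D U-Net segments only that crop. The sketch below shows the general crop-then-segment flow with placeholder callables standing in for the trained YOLOv7 detector and 3D U-Net; box handling is simplified to a single 3D bounding box and is not the authors' implementation.

        import numpy as np

        def locate_then_segment(volume, detector, segmenter, margin=4):
            """Stage 1: `detector` returns a 3D bounding box (z0, y0, x0, z1, y1, x1).
            Stage 2: `segmenter` labels only the cropped sub-volume; the result is
            pasted back into a full-size mask."""
            z0, y0, x0, z1, y1, x1 = detector(volume)
            z0, y0, x0 = max(z0 - margin, 0), max(y0 - margin, 0), max(x0 - margin, 0)
            crop = volume[z0:z1 + margin, y0:y1 + margin, x0:x1 + margin]
            crop_mask = segmenter(crop)
            mask = np.zeros(volume.shape, dtype=np.uint8)
            mask[z0:z0 + crop_mask.shape[0],
                 y0:y0 + crop_mask.shape[1],
                 x0:x0 + crop_mask.shape[2]] = crop_mask
            return mask

        # Toy run with dummy detector/segmenter on a 64^3 volume
        vol = np.random.default_rng(0).random((64, 64, 64)).astype(np.float32)
        dummy_detector = lambda v: (10, 10, 10, 40, 40, 40)
        dummy_segmenter = lambda c: (c > 0.5).astype(np.uint8)
        print(locate_then_segment(vol, dummy_detector, dummy_segmenter).sum())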

  • Article type: Journal Article
    BACKGROUND: The landscape of prostate cancer (PCa) segmentation within multiparametric magnetic resonance imaging (MP-MRI) was fragmented, with a noticeable lack of consensus on incorporating background details, culminating in inconsistent segmentation outputs. Given the complex and heterogeneous nature of PCa, conventional imaging segmentation algorithms frequently fell short, prompting the need for specialized research and refinement.
    OBJECTIVE: This study sought to dissect and compare various segmentation methods, emphasizing the role of background information and gland masks in achieving superior PCa segmentation. The goal was to systematically refine segmentation networks to ascertain the most efficacious approach.
    METHODS: A cohort of 232 patients (ages 61-73 years, prostate-specific antigen: 3.4-45.6 ng/mL), who had undergone MP-MRI followed by prostate biopsies, was analyzed. An advanced segmentation model, namely Attention-Unet, which combines U-Net with attention gates, was employed for training and validation. The model was further enhanced through a multiscale module and a composite loss function, culminating in the development of Matt-Unet. Performance metrics included the Dice Similarity Coefficient (DSC) and accuracy (ACC).
    RESULTS: The Matt-Unet model, which integrated background information and gland masks, outperformed the baseline U-Net model using raw images, yielding significant gains (DSC: 0.7215 vs. 0.6592; ACC: 0.8899 vs. 0.8601, p < 0.001).
    CONCLUSIONS: A targeted and practical PCa segmentation method was designed, which could significantly improve PCa segmentation on MP-MRI by combining background information and gland masks. The Matt-Unet model showcased promising capabilities for effectively delineating PCa, enhancing the precision of MP-MRI analysis.
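    The abstract states that Matt-Unet adds a composite loss function to Attention-Unet but does not specify its terms. The sketch below shows one commonly used composite segmentation loss (Dice combined with binary cross-entropy) purely as an illustration of the idea; the weighting and the choice of terms are assumptions, not the published loss.

        import torch
        import torch.nn as nn

        def dice_bce_loss(pred_logits, target, smooth=1.0, dice_weight=0.5):
            """Composite loss: weighted sum of soft-Dice loss and binary cross-entropy."""
            bce = nn.functional.binary_cross_entropy_with_logits(pred_logits, target)
            prob = torch.sigmoid(pred_logits)
            inter = (prob * target).sum()
            dice = (2 * inter + smooth) / (prob.sum() + target.sum() + smooth)
            return dice_weight * (1 - dice) + (1 - dice_weight) * bce

        logits = torch.randn(2, 1, 128, 128)                  # raw network outputs
        target = (torch.rand(2, 1, 128, 128) > 0.7).float()   # binary PCa mask
        print(dice_bce_loss(logits, target).item())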

  • Article type: Journal Article
    Machine vision is a desirable non-contact measurement method for hot forgings, but image segmentation has remained a challenging issue in terms of performance and robustness owing to the diversity of working conditions for hot forgings. This paper therefore proposes an efficient and robust active contour model and a corresponding image segmentation approach for forging images, and verification experiments that measure the geometric parameters of forging parts are conducted to prove the performance of the segmentation method. Specifically, three types of continuity parameters are defined based on the geometric continuity of the equivalent grayscale surfaces of forging images; a new image force and external energy functional are then proposed to form a new active contour model, Geometric Continuity Snakes (GC Snakes), which is more sensitive to the grayscale distribution characteristics of forging images and robustly improves the convergence of the active contour. In addition, a strategy for generating the initial control points of GC Snakes is proposed to compose an efficient and robust image segmentation approach. The experimental results show that the proposed GC Snakes achieves better segmentation performance than existing active contour models for forging images of different temperatures and sizes, providing better performance and efficiency in geometric parameter measurement for hot forgings. The maximum positioning and dimension errors of GC Snakes are 0.5525 mm and 0.3868 mm, respectively, compared with errors of 0.7873 mm and 0.6868 mm for the Snakes model.
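    GC Snakes modifies the image force and external energy of the classic Snakes model using geometric-continuity parameters of the grayscale surface. As a point of reference only, the sketch below runs the classic Snakes baseline with scikit-image on a synthetic bright part against a dark background; the GC Snakes energy itself is not reproduced, and the smoothing and elasticity parameters are illustrative.

        import numpy as np
        from skimage.filters import gaussian
        from skimage.segmentation import active_contour

        # Synthetic "forging" image: a bright circular part on a dark background
        img = np.zeros((200, 200), dtype=float)
        rr, cc = np.ogrid[:200, :200]
        img[(rr - 100) ** 2 + (cc - 100) ** 2 < 60 ** 2] = 1.0

        # Initial contour: a circle of radius 90 around the part, in (row, col) order
        s = np.linspace(0, 2 * np.pi, 200)
        init = np.column_stack([100 + 90 * np.sin(s), 100 + 90 * np.cos(s)])

        # Classic edge-driven snake; GC Snakes would replace the image force here
        snake = active_contour(gaussian(img, sigma=3), init,
                               alpha=0.015, beta=10, gamma=0.001)
        print(snake.shape)   # (200, 2) converged contour points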

  • Article type: Journal Article
    Deep learning has emerged as a robust tool for automating feature extraction from three-dimensional images, offering an efficient alternative to labour-intensive and potentially biased manual image segmentation methods. However, there has been limited exploration of optimal training set sizes, including assessing whether artificial expansion by data augmentation can achieve consistent results in less time and how consistent these benefits are across different types of traits. In this study, we manually segmented 50 planktonic foraminifera specimens from the genus Menardella to determine the minimum number of training images required to produce accurate volumetric and shape data from internal and external structures. The results reveal, unsurprisingly, that deep learning models improve with a larger number of training images, with eight specimens being required to achieve 95% accuracy. Furthermore, data augmentation can enhance network accuracy by up to 8.0%. Notably, predicting both volumetric and shape measurements for the internal structure poses a greater challenge compared with the external structure, owing to low contrast differences between different materials and increased geometric complexity. These results provide novel insight into optimal training set sizes for precise image segmentation of diverse traits and highlight the potential of data augmentation for enhancing multivariate feature extraction from three-dimensional images.
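    Data augmentation here means artificially expanding the training set by applying label-preserving transformations to the 3D image/label pairs. The snippet below is a generic, hedged example of such geometric augmentation (random flips and 90-degree rotations) in NumPy; the study's actual augmentation pipeline is not specified in the abstract and may differ.

        import numpy as np

        def augment_volume(vol, mask, rng):
            """Random flips and a random 90-degree rotation applied identically to a
            3-D image and its label volume (label-preserving geometric augmentation)."""
            for axis in range(3):
                if rng.random() < 0.5:
                    vol, mask = np.flip(vol, axis), np.flip(mask, axis)
            k = int(rng.integers(0, 4))
            vol = np.rot90(vol, k, axes=(1, 2))
            mask = np.rot90(mask, k, axes=(1, 2))
            return vol.copy(), mask.copy()

        rng = np.random.default_rng(42)
        vol = rng.random((64, 64, 64)).astype(np.float32)
        mask = (vol > 0.8).astype(np.uint8)
        aug_vol, aug_mask = augment_volume(vol, mask, rng)
        print(aug_vol.shape, int(aug_mask.sum()) == int(mask.sum()))  # shape kept, labels preserved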
