segmentation

  • Article type: Journal Article
    Recent advances in foundation models have revolutionized model development in digital pathology, reducing dependence on extensive manual annotations required by traditional methods. The ability of foundation models to generalize well with few-shot learning addresses critical barriers in adapting models to diverse medical imaging tasks. This work presents the Granular Box Prompt Segment Anything Model (GB-SAM), an improved version of the Segment Anything Model (SAM) fine-tuned using granular box prompts with limited training data. The GB-SAM aims to reduce the dependency on expert pathologist annotators by enhancing the efficiency of the automated annotation process. Granular box prompts are small box regions derived from ground truth masks, conceived to replace the conventional approach of using a single large box covering the entire H&E-stained image patch. This method allows a localized and detailed analysis of gland morphology, enhancing the segmentation accuracy of individual glands and reducing the ambiguity that larger boxes might introduce in morphologically complex regions. We compared the performance of our GB-SAM model against U-Net trained on different sizes of the CRAG dataset. We evaluated the models across histopathological datasets, including CRAG, GlaS, and Camelyon16. GB-SAM consistently outperformed U-Net, with reduced training data, showing less segmentation performance degradation. Specifically, on the CRAG dataset, GB-SAM achieved a Dice coefficient of 0.885 compared to U-Net's 0.857 when trained on 25% of the data. Additionally, GB-SAM demonstrated segmentation stability on the CRAG testing dataset and superior generalization across unseen datasets, including challenging lymph node segmentation in Camelyon16, which achieved a Dice coefficient of 0.740 versus U-Net's 0.491. Furthermore, compared to SAM-Path and Med-SAM, GB-SAM showed competitive performance. GB-SAM achieved a Dice score of 0.900 on the CRAG dataset, while SAM-Path achieved 0.884. On the GlaS dataset, Med-SAM reported a Dice score of 0.956, whereas GB-SAM achieved 0.885 with significantly less training data. These results highlight GB-SAM's advanced segmentation capabilities and reduced dependency on large datasets, indicating its potential for practical deployment in digital pathology, particularly in settings with limited annotated datasets.
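The Dice coefficients quoted throughout this abstract compare a predicted mask against the ground truth; a minimal NumPy sketch of the metric (binary masks assumed, `eps` added only for numerical safety):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))
```

Identical masks score 1.0; masks with no overlap score approximately 0.0.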

  • Article type: Journal Article
    BACKGROUND: Bladder cancer (BC) segmentation on MRI images is the first step to determining the presence of muscular invasion. This study aimed to assess the tumor segmentation performance of three deep learning (DL) models on multi-parametric MRI (mp-MRI) images.
    METHODS: We studied 53 patients with bladder cancer. Bladder tumors were segmented on each slice of T2-weighted (T2WI), diffusion-weighted imaging/apparent diffusion coefficient (DWI/ADC), and T1-weighted contrast-enhanced (T1WI) images acquired on a 3 Tesla MRI scanner. We trained Unet, MAnet, and PSPnet using three loss functions: cross-entropy (CE), Dice similarity coefficient loss (DSC), and focal loss (FL). We evaluated the model performances using DSC, Hausdorff distance (HD), and expected calibration error (ECE).
    RESULTS: The MAnet algorithm with the CE+DSC loss function gave the highest DSC values on the ADC, T2WI, and T1WI images. PSPnet with CE+DSC obtained the smallest HDs on the ADC, T2WI, and T1WI images. The segmentation accuracy overall was better on the ADC and T1WI than on the T2WI. The ECEs were the smallest for PSPnet with FL on the ADC images, while they were the smallest for MAnet with CE+DSC on the T2WI and T1WI.
    CONCLUSIONS: Compared to Unet, MAnet and PSPnet with a hybrid CE+DSC loss function displayed better performances in BC segmentation depending on the choice of the evaluation metric.
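The hybrid CE+DSC objective combines pixel-wise cross-entropy with a soft Dice term; a plain-NumPy sketch (the equal weighting is an assumption — the paper does not state its weights):

```python
import numpy as np

def soft_dice_loss(probs: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """1 - soft Dice between predicted probabilities and a binary target."""
    inter = (probs * target).sum()
    return float(1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps))

def bce_loss(probs: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Pixel-wise binary cross-entropy on predicted probabilities."""
    p = np.clip(probs, eps, 1.0 - eps)
    return float(-(target * np.log(p) + (1 - target) * np.log(1 - p)).mean())

def hybrid_ce_dsc_loss(probs, target, w_ce=0.5, w_dsc=0.5):
    """Weighted sum of cross-entropy and Dice loss (weights are illustrative)."""
    return w_ce * bce_loss(probs, target) + w_dsc * soft_dice_loss(probs, target)
```

A perfect prediction drives both terms to near zero, while a fully wrong one is penalized by both the overlap term and the per-pixel term.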

  • Article type: Journal Article
    Gastric cancer (GC) is a significant healthcare concern, and the identification of high-risk patients is crucial. Indeed, gastric precancerous conditions present significant diagnostic challenges, particularly early intestinal metaplasia (IM) detection. This study developed a deep learning system to assist in IM detection using image patches from gastric corpus examined using virtual chromoendoscopy in a Western country. Utilizing a retrospective dataset of endoscopic images from Sant'Andrea University Hospital of Rome, collected between January 2020 and December 2023, the system extracted 200 × 200 pixel patches, classifying them with a voting scheme. The specificity and sensitivity on the patch test set were 76% and 72%, respectively. The optimization of a learnable voting scheme on a validation set achieved a specificity of 70% and sensitivity of 100% for entire images. Despite data limitations and the absence of pre-trained models, the system shows promising results for preliminary screening in gastric precancerous condition diagnostics, providing an explainable and robust Artificial Intelligence approach.
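The image-level decision from patch classifications can be sketched as a simple voting rule; both thresholds below are illustrative placeholders, since the paper learns its voting scheme on a validation set:

```python
def classify_image_from_patches(patch_probs, patch_threshold=0.5, vote_fraction=0.3):
    """Call the whole image positive for IM when at least `vote_fraction`
    of its patches are classified positive. Thresholds are assumptions."""
    votes = [p >= patch_threshold for p in patch_probs]
    return int(sum(votes) / len(votes) >= vote_fraction)
```

Lowering `vote_fraction` trades specificity for sensitivity, which mirrors the patch-level versus image-level operating points reported above.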

  • Article type: Journal Article
    OBJECTIVE: Segmentation of the femur in Dual-Energy X-ray (DXA) images poses challenges due to reduced contrast, noise, bone shape variations, and inconsistent X-ray beam penetration. In this study, we investigate the relationship between noise and certain deep learning (DL) techniques for semantic segmentation of the femur to enhance segmentation and bone mineral density (BMD) accuracy by incorporating noise reduction methods into DL models.
    METHODS: Convolutional neural network (CNN)-based models were employed to segment femurs in DXA images and evaluate the effects of noise reduction filters on segmentation accuracy and their effect on BMD calculation. Various noise reduction techniques were integrated into DL-based models to enhance image quality before training. We assessed the performance of the fully convolutional neural network (FCNN) in comparison to noise reduction algorithms and manual segmentation methods.
    RESULTS: Our study demonstrated that the FCNN outperformed noise reduction algorithms in enhancing segmentation accuracy and enabling precise calculation of BMD. The FCNN-based segmentation approach achieved a segmentation accuracy of 98.84% and a correlation coefficient of 0.9928 for BMD measurements, indicating its effectiveness in the clinical diagnosis of osteoporosis.
    CONCLUSIONS: In conclusion, integrating noise reduction techniques into DL-based models significantly improves femur segmentation accuracy in DXA images. The FCNN model, in particular, shows promising results in enhancing BMD calculation and clinical diagnosis of osteoporosis. These findings highlight the potential of DL techniques in addressing segmentation challenges and improving diagnostic accuracy in medical imaging.
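As one example of the kind of noise reduction that can precede training, a 3×3 median filter — a standard impulse-noise reducer; the paper does not name its specific filters — in plain NumPy:

```python
import numpy as np

def median_filter_3x3(img: np.ndarray) -> np.ndarray:
    """3x3 median filter with edge padding, written without SciPy.
    Illustrative pre-processing; not the paper's exact filter set."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # collect the nine shifted views of each pixel's neighbourhood
    windows = np.stack([padded[dy:dy + h, dx:dx + w]
                        for dy in range(3) for dx in range(3)])
    return np.median(windows, axis=0)
```

A single impulse-noise pixel in a flat region is removed entirely, which is why median filtering is a common choice for DXA speckle.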

  • Article type: Journal Article
    Acute lymphoblastic leukemia, commonly referred to as ALL, is a type of cancer that can affect both the blood and the bone marrow. The process of diagnosis is a difficult one since it often calls for specialist testing, such as blood tests, bone marrow aspiration, and biopsy, all of which are highly time-consuming and expensive. It is essential to obtain an early diagnosis of ALL in order to start therapy in a timely and suitable manner. In recent medical diagnostics, substantial progress has been achieved through the integration of artificial intelligence (AI) and Internet of Things (IoT) devices. Our proposal introduces a new AI-based Internet of Medical Things (IoMT) framework designed to automatically identify leukemia from peripheral blood smear (PBS) images. In this study, we present a novel deep learning-based fusion model to detect ALL types of leukemia. The system seamlessly delivers the diagnostic reports to the centralized database, inclusive of patient-specific devices. After collecting blood samples from the hospital, the PBS images are transmitted to the cloud server through a WiFi-enabled microscopic device. In the cloud server, a new fusion model that is capable of classifying ALL from PBS images is configured. The fusion model is trained using a dataset including 6512 original and segmented images from 89 individuals. Two input channels are used for the purpose of feature extraction in the fusion model. These channels include both the original and the segmented images. VGG16 is responsible for extracting features from the original images, whereas DenseNet-121 is responsible for extracting features from the segmented images. The two output features are merged together, and dense layers are used for the categorization of leukemia. The fusion model that has been suggested obtains an accuracy of 99.89%, a precision of 99.80%, and a recall of 99.72%, which places it in an excellent position for the categorization of leukemia. The proposed model outperformed several state-of-the-art Convolutional Neural Network (CNN) models in terms of performance. Consequently, this proposed model has the potential to save lives and effort. For a more comprehensive simulation of the entire methodology, a web application (Beta Version) has been developed in this study. This application is designed to determine the presence or absence of leukemia in individuals. The findings of this study hold significant potential for application in biomedical research, particularly in enhancing the accuracy of computer-aided leukemia detection.
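The fusion step — concatenating the two branch embeddings (e.g. the VGG16 and DenseNet-121 feature vectors) and classifying with a dense softmax layer — can be sketched as follows; the shapes and the single dense layer are simplifications, not the paper's exact classification head:

```python
import numpy as np

def fuse_and_classify(feat_original: np.ndarray, feat_segmented: np.ndarray,
                      weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """Concatenate the two branch feature vectors and apply one
    dense layer followed by a softmax. Illustrative head only."""
    fused = np.concatenate([feat_original, feat_segmented])
    logits = weights @ fused + bias
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()
```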

  • Article type: Journal Article
    This paper presents a novel segmentation algorithm specially developed for applications in 3D point clouds with high variability and noise, particularly suitable for heritage building 3D data. The method can be categorized within the segmentation procedures based on edge detection. In addition, it uses a graph-based topological structure generated from the supervoxelization of the 3D point clouds, which is used to make the closure of the edge points and to define the different segments. The algorithm provides a valuable tool for generating results that can be used in subsequent classification tasks and broader computer applications dealing with 3D point clouds. One of the characteristics of this segmentation method is that it is unsupervised, which makes it particularly advantageous for heritage applications where labelled data is scarce. It is also easily adaptable to different edge point detection and supervoxelization algorithms. Finally, the results show that the 3D data can be segmented into different architectural elements, which is important for further classification or recognition. Extensive testing on real data from historic buildings demonstrated the effectiveness of the method. The results show superior performance compared to three other segmentation methods, both globally and in the segmentation of planar and curved zones of historic buildings.
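The closure-and-labelling step over the supervoxel graph can be approximated as connected components after removing the edge-point supervoxels; a sketch with a plain adjacency dict (the paper's actual closure operation on the graph topology is more involved):

```python
from collections import deque

def segment_supervoxel_graph(adjacency, edge_nodes):
    """Connected components of a supervoxel adjacency graph after removing
    edge-point supervoxels; each component is one candidate segment."""
    edge_nodes = set(edge_nodes)
    seen, segments = set(), []
    for start in adjacency:
        if start in edge_nodes or start in seen:
            continue
        seen.add(start)
        component, queue = [], deque([start])
        while queue:
            node = queue.popleft()
            component.append(node)
            for neighbour in adjacency[node]:
                if neighbour not in edge_nodes and neighbour not in seen:
                    seen.add(neighbour)
                    queue.append(neighbour)
        segments.append(sorted(component))
    return segments
```

Removing a supervoxel flagged as an edge splits the chain of its neighbours into separate segments, which is the basic mechanism behind edge-based segmentation on the graph.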

  • Article type: Journal Article
    Brain stroke, or a cerebrovascular accident, is a devastating medical condition that disrupts the blood supply to the brain, depriving it of oxygen and nutrients. Each year, according to the World Health Organization, 15 million people worldwide experience a stroke. This results in approximately 5 million deaths and another 5 million individuals suffering permanent disabilities. The complex interplay of various risk factors highlights the urgent need for sophisticated analytical methods to more accurately predict stroke risks and manage their outcomes. Machine learning and deep learning technologies offer promising solutions by analyzing extensive datasets including patient demographics, health records, and lifestyle choices to uncover patterns and predictors not easily discernible by humans. These technologies enable advanced data processing, analysis, and fusion techniques for a comprehensive health assessment. We conducted a comprehensive review of 25 review papers published between 2020 and 2024 on machine learning and deep learning applications in brain stroke diagnosis, focusing on classification, segmentation, and object detection. Furthermore, all these reviews explore the performance evaluation and validation of advanced sensor systems in these areas, enhancing predictive health monitoring and personalized care recommendations. Moreover, we also provide a collection of the most relevant datasets used in brain stroke analysis. The selection of the papers was conducted according to PRISMA guidelines. Furthermore, this review critically examines each domain, identifies current challenges, and proposes future research directions, emphasizing the potential of AI methods in transforming health monitoring and patient care.

  • Article type: Journal Article
    A point cloud is a representation of objects or scenes utilising unordered points comprising 3D positions and attributes. The ability of point clouds to mimic natural forms has gained significant attention from diverse applied fields, such as virtual reality and augmented reality. However, the point cloud, especially those representing dynamic scenes or objects in motion, must be compressed efficiently due to its huge data volume. The latest video-based point cloud compression (V-PCC) standard for dynamic point clouds divides the 3D point cloud into many patches using computationally expensive normal estimation, segmentation, and refinement. The patches are projected onto a 2D plane to apply existing video coding techniques. This process often results in losing proximity information and some original points. This loss induces artefacts that adversely affect user perception. The proposed method segments dynamic point clouds based on shape similarity and occlusion before patch generation. This segmentation strategy helps maintain the points' proximity and retain more original points by exploiting the density and occlusion of the points. The experimental results establish that the proposed method significantly outperforms the V-PCC standard and other relevant methods regarding rate-distortion performance and subjective quality testing for both geometric and texture data of several benchmark video sequences.
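The patch projection at the core of V-PCC maps each 3D patch onto an axis-aligned plane, keeping the dropped coordinate as a depth value; a highly simplified sketch (real V-PCC selects one of six projection planes per patch from the estimated normals):

```python
import numpy as np

def project_patch(points: np.ndarray, axis: int = 2):
    """Orthographic projection of an (N, 3) patch onto an axis-aligned plane:
    the two kept coordinates index the 2D map, the dropped one is the depth."""
    keep = [i for i in range(3) if i != axis]
    uv = points[:, keep]      # 2D map coordinates
    depth = points[:, axis]   # depth-map values
    return uv, depth
```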

  • Article type: Journal Article
    OBJECTIVE: Brain tumor segmentation is highly contributive in diagnosing and treatment planning. Manual brain tumor delineation is a time-consuming and tedious task and varies depending on the radiologist's skill. Automated brain tumor segmentation is of high importance and does not depend on either inter- or intra-observation. The objective of this study is to automate the delineation of brain tumors from the Fluid-attenuated inversion recovery (FLAIR), T1-weighted (T1W), T2-weighted (T2W), and T1W contrast-enhanced (T1ce) magnetic resonance (MR) sequences through a deep learning approach, with a focus on determining which MR sequence alone or which combination thereof would lead to the highest accuracy therein.
    METHODS: The BraTS-2020 challenge dataset, containing 370 subjects with four MR sequences and manually delineated tumor masks, is applied to train a residual neural network. This network is trained and assessed separately for each one of the MR sequences (single-channel input) and any combination thereof (dual- or multi-channel input).
    RESULTS: The quantitative assessment of the single-channel models reveals that the FLAIR sequence would yield higher segmentation accuracy compared to its counterparts with a 0.77 ± 0.10 Dice index. As to considering the dual-channel models, the model with FLAIR and T2W inputs yields a 0.80 ± 0.10 Dice index, exhibiting higher performance. The joint tumor segmentation on the entire four MR sequences yields the highest overall segmentation accuracy with a 0.82 ± 0.09 Dice index.
    CONCLUSIONS: The FLAIR MR sequence is considered the best choice for tumor segmentation on a single MR sequence, while the joint segmentation on the entire four MR sequences would yield higher tumor delineation accuracy.
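The single-, dual-, and multi-channel experiments differ only in which MR sequences are stacked into the network input; a sketch of that stacking (array shapes are illustrative):

```python
import numpy as np

def build_network_input(volumes: dict, channels: list) -> np.ndarray:
    """Stack the selected, co-registered MR sequences channel-wise so the same
    architecture can be trained single-, dual-, or multi-channel."""
    return np.stack([volumes[name] for name in channels], axis=0)
```

The same helper covers all three settings: pass one sequence name for the single-channel model, two for the dual-channel model, or all four for the joint model.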

  • Article type: Journal Article
    BACKGROUND: The non-perfused volume divided by total fibroid load (NPV/TFL) is a predictive outcome parameter for MRI-guided high-intensity focused ultrasound (MR-HIFU) treatments of uterine fibroids, which is related to long-term symptom relief. In current clinical practice, the MR-HIFU outcome parameters are typically determined by visual inspection, so an automated computer-aided method could facilitate objective outcome quantification. The objective of this study was to develop and evaluate a deep learning-based segmentation algorithm for volume measurements of the uterus, uterine fibroids, and NPVs in MRI in order to automatically quantify the NPV/TFL.
    METHODS: A segmentation pipeline was developed and evaluated using expert manual segmentations of MRI scans of 115 uterine fibroid patients, screened for and/or undergoing MR-HIFU treatment. The pipeline contained three separate neural networks, one per target structure. The first step in the pipeline was uterus segmentation from contrast-enhanced (CE)-T1w scans. This segmentation was subsequently used to remove non-uterus background tissue for NPV and fibroid segmentation. In the following step, NPVs were segmented from uterus-only CE-T1w scans. Finally, fibroids were segmented from uterus-only T2w scans. The segmentations were used to calculate the volume for each structure. Reliability and agreement between manual and automatic segmentations, volumes, and NPV/TFLs were assessed.
    RESULTS: For treatment scans, the Dice similarity coefficients (DSC) between the manually and automatically obtained segmentations were 0.90 (uterus), 0.84 (NPV) and 0.74 (fibroid). Intraclass correlation coefficients (ICC) were 1.00 [0.99, 1.00] (uterus), 0.99 [0.98, 1.00] (NPV) and 0.98 [0.95, 0.99] (fibroid) between manually and automatically derived volumes. For manually and automatically derived NPV/TFLs, the mean difference was 5% [-41%, 51%] (ICC: 0.66 [0.32, 0.85]).
    CONCLUSIONS: The algorithm presented in this study automatically calculates uterus volume, fibroid load, and NPVs, which could lead to more objective outcome quantification after MR-HIFU treatments of uterine fibroids in comparison to visual inspection. When robustness has been ascertained in a future study, this tool may eventually be employed in clinical practice to automatically measure the NPV/TFL after MR-HIFU procedures of uterine fibroids.