segmentation

  • Article type: Journal Article
    This comprehensive review explores the role of deep learning (DL) in glioma segmentation using multiparametric magnetic resonance imaging (MRI) data. The study surveys advanced techniques such as multiparametric MRI for capturing the complex nature of gliomas. It delves into the integration of DL with MRI, focusing on convolutional neural networks (CNNs) and their remarkable capabilities in tumor segmentation. Clinical applications of DL-based segmentation are highlighted, including treatment planning, monitoring treatment response, and distinguishing between tumor progression and pseudo-progression. Furthermore, the review examines the evolution of DL-based segmentation studies, from early CNN models to recent advancements such as attention mechanisms and transformer models. Challenges in data quality, vanishing gradients, and model interpretability are discussed. The review concludes with insights into future research directions, emphasizing the importance of addressing tumor heterogeneity, integrating genomic data, and ensuring responsible deployment of DL-driven healthcare technologies. EVIDENCE LEVEL: N/A. TECHNICAL EFFICACY: Stage 2.
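    To make the kind of model this review surveys concrete, the following minimal sketch shows a toy fully-convolutional network producing a voxel-wise glioma mask from multiparametric MRI. It is an illustration only: the four input modalities (e.g. T1, T1ce, T2, FLAIR), the layer sizes, and the PyTorch implementation are assumptions, not details taken from the review.

```python
# Illustrative sketch only: a tiny fully-convolutional network for voxel-wise
# glioma segmentation from multiparametric MRI, in the spirit of the early CNN
# models the review surveys. The four input channels (e.g. T1, T1ce, T2, FLAIR)
# and all layer sizes are assumptions, not taken from the review.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_channels=4, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution maps features to per-voxel class scores.
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        return self.head(self.encoder(x))

# One multiparametric MRI slice: batch of 1, 4 modalities, 128x128 voxels.
slice_mpmri = torch.randn(1, 4, 128, 128)
logits = TinySegNet()(slice_mpmri)      # shape: (1, 2, 128, 128)
tumor_mask = logits.argmax(dim=1)       # per-voxel predicted label
print(tumor_mask.shape)
```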

  • Article type: Journal Article
    BACKGROUND: Retinal prostheses offer hope for individuals with degenerative retinal diseases by stimulating the remaining retinal cells to partially restore their vision. This review delves into the current advancements in retinal prosthesis technology, with a special emphasis on the pivotal role that image processing and machine learning techniques play in this evolution.
    METHODS: We provide a comprehensive analysis of the existing implantable devices and optogenetic strategies, delineating their advantages, limitations, and challenges in addressing complex visual tasks. The review extends to various image processing algorithms and deep learning architectures that have been implemented to enhance the functionality of retinal prosthetic devices. We also illustrate testing results drawn from clinical trials or from Simulated Prosthetic Vision (SPV) based on phosphene simulations, a critical aspect of modelling the visual perception of retinal prosthesis users.
    RESULTS: Our review highlights the significant progress in retinal prosthesis technology, particularly its capacity to augment visual perception among the visually impaired. It discusses the integration of image processing and deep learning, illustrating through clinical trials their impact on how individuals interact with and navigate their environment, and it notes the limitations of some techniques for use with current devices: some approaches are tested only in simulation, even with sighted participants, or rely on qualitative analysis, and only some of them consider realistic perception models.
    CONCLUSIONS: This interdisciplinary field holds promise for the future of retinal prostheses, with the potential to significantly enhance the quality of life of their users. Future research directions should pivot towards optimizing phosphene simulations for SPV approaches, accounting for the distorted and confusing nature of phosphene perception, thereby enriching the visual perception provided by these prosthetic devices. This endeavor will not only improve navigational independence but also facilitate a more immersive interaction with the environment.
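    As a concrete illustration of the Simulated Prosthetic Vision (SPV) mentioned in the methods, the sketch below samples a grayscale frame on a coarse electrode-like grid and renders each sample as a soft blob. The grid size, blob width, and NumPy implementation are assumptions for illustration, not parameters from any reviewed device.

```python
# Illustrative sketch of Simulated Prosthetic Vision (SPV): sample a grayscale
# image on a coarse "electrode" grid and render each sample as a soft blob.
# Grid size and blob width are arbitrary assumptions, not values from the review.
import numpy as np

def simulate_phosphenes(image, grid=16, sigma=4.0):
    h, w = image.shape
    ys = np.linspace(0, h - 1, grid)        # phosphene centre rows
    xs = np.linspace(0, w - 1, grid)        # phosphene centre columns
    yy, xx = np.mgrid[0:h, 0:w]
    out = np.zeros((h, w))
    for cy in ys:
        for cx in xs:
            brightness = image[int(cy), int(cx)]     # sampled intensity
            blob = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
            out = np.maximum(out, brightness * blob)  # brightest blob wins
    return out

img = np.random.rand(128, 128)              # stand-in for a camera frame
spv = simulate_phosphenes(img)
print(spv.shape, spv.max())
```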

  • Article type: Journal Article
    Brain stroke, or a cerebrovascular accident, is a devastating medical condition that disrupts the blood supply to the brain, depriving it of oxygen and nutrients. Each year, according to the World Health Organization, 15 million people worldwide experience a stroke. This results in approximately 5 million deaths and another 5 million individuals suffering permanent disabilities. The complex interplay of various risk factors highlights the urgent need for sophisticated analytical methods to more accurately predict stroke risks and manage their outcomes. Machine learning and deep learning technologies offer promising solutions by analyzing extensive datasets including patient demographics, health records, and lifestyle choices to uncover patterns and predictors not easily discernible by humans. These technologies enable advanced data processing, analysis, and fusion techniques for a comprehensive health assessment. We conducted a comprehensive review of 25 review papers published between 2020 and 2024 on machine learning and deep learning applications in brain stroke diagnosis, focusing on classification, segmentation, and object detection. Furthermore, all these reviews explore the performance evaluation and validation of advanced sensor systems in these areas, enhancing predictive health monitoring and personalized care recommendations. Moreover, we also provide a collection of the most relevant datasets used in brain stroke analysis. The selection of the papers was conducted according to PRISMA guidelines. Furthermore, this review critically examines each domain, identifies current challenges, and proposes future research directions, emphasizing the potential of AI methods in transforming health monitoring and patient care.
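    As a minimal illustration of the tabular risk-prediction models these reviews cover, the sketch below fits a logistic-regression classifier to synthetic demographic and lifestyle features with scikit-learn. All feature names, data, and thresholds are placeholders, not drawn from the reviewed studies.

```python
# Illustrative sketch of the kind of tabular stroke-risk model the reviewed
# papers evaluate: a classifier over demographics, health records and lifestyle
# features. Data and feature names here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(65, 12, n),     # age
    rng.normal(140, 20, n),    # systolic blood pressure
    rng.integers(0, 2, n),     # smoker (0/1)
    rng.normal(27, 4, n),      # body-mass index
])
# Synthetic outcome loosely tied to the risk factors above.
risk = 0.04 * (X[:, 0] - 65) + 0.03 * (X[:, 1] - 140) + 0.8 * X[:, 2]
y = (risk + rng.normal(0, 1, n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```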

  • Article type: Journal Article
    The field of image processing is experiencing significant advancements to support professionals in analyzing histological images obtained from biopsies. The primary objective is to enhance the process of diagnosis and prognostic evaluation. Various forms of cancer can be diagnosed by employing different segmentation techniques followed by postprocessing approaches that can identify distinct neoplastic areas. Using computational approaches facilitates more objective and efficient analysis by experts. The progressive advancement of histological image analysis holds significant importance in modern medicine. This paper provides an overview of the current advances in segmentation and classification approaches for images of follicular lymphoma. This research analyzes the primary image processing techniques utilized in the various stages of preprocessing, segmentation of the region of interest, classification, and postprocessing, as described in the existing literature. The study also examines the strengths and weaknesses associated with these approaches. Additionally, this study encompasses an examination of validation procedures and an exploration of prospective future research directions in the segmentation of neoplasias.
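    The preprocess-segment-postprocess pipeline described above can be illustrated with a minimal sketch: grayscale conversion, intensity thresholding, and removal of small connected components. The threshold, minimum object size, and the synthetic input tile are assumptions for illustration only.

```python
# Illustrative sketch of the preprocess -> segment -> postprocess pipeline the
# review describes for histological images. The threshold and minimum object
# size are arbitrary assumptions; the input is a synthetic stand-in for a
# stained biopsy tile.
import numpy as np
from scipy import ndimage

rgb = np.random.rand(256, 256, 3)                 # placeholder histology tile
gray = rgb @ np.array([0.299, 0.587, 0.114])      # preprocessing: grayscale

mask = gray < np.percentile(gray, 30)             # segmentation: darker = stained
labels, n = ndimage.label(mask)                   # connected components

sizes = np.bincount(labels.ravel())               # postprocessing: drop tiny blobs
keep = sizes >= 50
keep[0] = False                                   # label 0 is background
clean = keep[labels]

print(f"{n} raw components, {int(keep.sum())} kept after size filtering")
```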

  • Article type: Journal Article
    BACKGROUND: Brain medical image segmentation is a critical task in medical image processing, playing a significant role in the prediction and diagnosis of diseases such as stroke, Alzheimer's disease, and brain tumors. However, substantial distribution discrepancies arise among datasets from different sources due to large inter-site differences in scanners, imaging protocols, and populations. This leads to cross-domain problems in practical applications. In recent years, numerous studies have been conducted to address the cross-domain problem in brain image segmentation.
    METHODS: This review adheres to the standards of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) for data processing and analysis. We retrieved relevant papers from the PubMed, Web of Science, and IEEE databases from January 2018 to December 2023, extracting information about the medical domain, imaging modalities, methods for addressing cross-domain issues, experimental designs, and datasets from the selected papers. Moreover, we compared the performance of methods in stroke lesion segmentation, white matter segmentation, and brain tumor segmentation.
    RESULTS: A total of 71 studies were included and analyzed in this review. The methods for tackling the cross-domain problem include transfer learning, normalization, unsupervised learning, Transformer models, and convolutional neural networks (CNNs). On the ATLAS dataset, domain-adaptive methods showed an overall improvement of about 3% in stroke lesion segmentation tasks compared to non-adaptive methods. However, given the diversity of datasets and experimental methodologies across current studies of the MICCAI 2017 white matter segmentation task and the BraTS brain tumor segmentation task, it is challenging to compare the strengths and weaknesses of these methods directly.
    CONCLUSIONS: Although various techniques have been applied to address the cross-domain problem in brain image segmentation, there is currently a lack of unified dataset collections and experimental standards. For instance, many studies are still based on n-fold cross-validation, while methods validated directly across sites or datasets are relatively scarce. Furthermore, due to the diverse types of medical images in the field of brain segmentation, it is not straightforward to make simple and intuitive comparisons of performance. These challenges need to be addressed in future research.
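    The contrast drawn in the conclusions between n-fold cross-validation and validation across sites can be illustrated with a minimal leave-one-site-out split. The site names and the training/evaluation stubs below are placeholders, not taken from any included study.

```python
# Illustrative sketch of the cross-site (leave-one-site-out) validation scheme
# the review contrasts with ordinary n-fold cross-validation. Site names and
# the evaluation stub are placeholders, not from any of the included studies.
from collections import defaultdict

scans = [  # (scan_id, acquisition_site) pairs -- synthetic examples
    ("scan01", "siteA"), ("scan02", "siteA"),
    ("scan03", "siteB"), ("scan04", "siteB"),
    ("scan05", "siteC"), ("scan06", "siteC"),
]

by_site = defaultdict(list)
for scan_id, site in scans:
    by_site[site].append(scan_id)

for held_out in by_site:                  # one fold per acquisition site
    test = by_site[held_out]
    train = [s for site, ids in by_site.items() if site != held_out for s in ids]
    # train_model(train); evaluate(test)  # placeholders for the real pipeline
    print(f"hold out {held_out}: train on {len(train)} scans, test on {len(test)}")
```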

  • Article type: Journal Article
    Auditory impairment stands as a pervasive global issue, exerting significant effects on individuals' daily functioning and interpersonal engagements. Cochlear implants (CIs) have risen as a cutting-edge solution for severe to profound hearing loss, directly stimulating the auditory nerve with electrical signals. The success of CI procedures hinges on precise pre-operative planning and post-operative evaluation, highlighting the significance of advanced three-dimensional (3D) inner ear reconstruction software. Accurate pre-operative imaging is vital for identifying anatomical landmarks and assessing cochlear deformities. Tools like 3D Slicer, Amira and OTOPLAN provide detailed depictions of cochlear anatomy, aiding surgeons in simulating implantation scenarios and refining surgical approaches. Post-operative scans play a crucial role in detecting complications and ensuring CI longevity. Despite technological advancements, challenges such as standardization and optimization persist. This review explores the role of 3D inner ear reconstruction software in patient selection, surgical planning, and post-operative assessment, tracing its evolution and emphasizing features like image segmentation and virtual simulation. It addresses software limitations and proposes solutions, advocating for their integration into clinical practice. Ultimately, this review underscores the impact of 3D inner ear reconstruction software on cochlear implantation, connecting innovation with precision medicine.
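    The segmentation-to-surface step that reconstruction tools such as 3D Slicer perform can be illustrated with a minimal marching-cubes sketch. The spherical phantom volume and the 0.3 mm isotropic voxel spacing are assumptions for demonstration, not values from the review.

```python
# Illustrative sketch of the segmentation -> 3D surface reconstruction step that
# tools such as 3D Slicer perform on inner-ear CT. The spherical phantom volume
# and the isotropic 0.3 mm voxel spacing are assumptions for demonstration only.
import numpy as np
from skimage import measure

# Synthetic binary "segmentation": a sphere standing in for a labelled structure.
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
mask = ((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2) < 20 ** 2

verts, faces, normals, values = measure.marching_cubes(
    mask.astype(float), level=0.5, spacing=(0.3, 0.3, 0.3)  # mm per voxel
)
print(f"surface mesh: {len(verts)} vertices, {len(faces)} triangles")
```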

  • Article type: Journal Article
    Research on transformers in remote sensing (RS), which began to increase after 2021, faces a relative lack of review. To understand the trends of transformers in RS, we undertook a quantitative analysis of the major research on transformers over the past two years by dividing their applications into eight domains: land use/land cover (LULC) classification, segmentation, fusion, change detection, object detection, object recognition, registration, and others. Quantitative results show that transformers achieve higher accuracy in LULC classification and fusion, with more stable performance in segmentation and object detection. Combining the analysis results on LULC classification and segmentation, we found that transformers need more parameters than convolutional neural networks (CNNs). Additionally, further research is needed on inference speed to improve transformers' performance. The most common application scenarios for transformers in our database are urban areas, farmland, and water bodies. We also found that transformers are employed in the natural sciences, such as agriculture and environmental protection, rather than in the humanities or economics. Finally, this work summarizes the analysis results on transformers in remote sensing obtained during the research process and provides a perspective on future directions of development.
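    The finding that transformers require more parameters than CNNs can be illustrated by counting the parameters of two toy PyTorch models. Both architectures and all layer sizes below are arbitrary assumptions, not networks evaluated in the survey.

```python
# Illustrative sketch of the parameter-count comparison behind the review's
# finding that transformers need more parameters than CNNs. Both toy models and
# all layer sizes are arbitrary assumptions, not architectures from the survey.
import torch.nn as nn

def n_params(model):
    return sum(p.numel() for p in model.parameters())

cnn = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 10),
)

encoder_layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
transformer = nn.TransformerEncoder(encoder_layer, num_layers=4)

print("CNN parameters:        ", n_params(cnn))
print("Transformer parameters:", n_params(transformer))
```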

  • Article type: Journal Article
    Chronic liver disease is responsible for significant morbidity and mortality worldwide. Abdominal computed tomography (CT) and magnetic resonance imaging (MRI) can fully visualise the liver and adjacent structures in the upper abdomen, providing a reproducible assessment of the liver and biliary system, and can detect features of portal hypertension. Subjective interpretation of CT and MRI in the assessment of liver parenchyma for early and advanced stages of fibrosis (pre-cirrhosis), as well as severity of portal hypertension, is limited. Quantitative and reproducible measurements of hepatic and splenic volumes have been shown to correlate with fibrosis staging, clinical outcomes, and mortality. In this review, we will explore the role of volumetric measurements in relation to diagnosis, assessment of severity and prediction of outcomes in chronic liver disease patients. We conclude that volumetric analysis of the liver and spleen can provide important information in such patients, has the potential to stratify patients' stage of hepatic fibrosis and disease severity, and can provide critical prognostic information. CRITICAL RELEVANCE STATEMENT: This review highlights the role of volumetric measurements of the liver and spleen using CT and MRI in relation to diagnosis, assessment of severity, and prediction of outcomes in chronic liver disease patients. KEY POINTS: Volumetry of the liver and spleen using CT and MRI correlates with hepatic fibrosis stages and cirrhosis. Volumetric measurements correlate with chronic liver disease outcomes. Fully automated methods for volumetry are required for implementation into routine clinical practice.
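    Volumetry itself reduces to multiplying the number of segmented voxels by the voxel volume taken from the image header. The sketch below shows this on a synthetic mask; the mask and the voxel spacing are placeholders, not measurements from the review.

```python
# Illustrative sketch of how liver or spleen volume is derived from a
# segmentation mask: voxel count times voxel volume. The mask and the voxel
# spacing (in mm, as read from a CT/MRI header) are synthetic placeholders.
import numpy as np

liver_mask = np.zeros((40, 256, 256), dtype=bool)   # placeholder segmentation
liver_mask[10:30, 80:180, 60:200] = True

spacing_mm = (5.0, 0.7, 0.7)                        # slice thickness, row, column
voxel_volume_ml = np.prod(spacing_mm) / 1000.0      # mm^3 -> millilitres

volume_ml = liver_mask.sum() * voxel_volume_ml
print(f"segmented volume: {volume_ml:.0f} mL")
```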

  • Article type: Journal Article
    BACKGROUND: Automated segmentation of 3-dimensional pulp space on cone-beam computed tomography images presents a significant opportunity for enhancing diagnosis, treatment planning, and clinical education in endodontics. The aim of this systematic review was to investigate the performance of artificial intelligence-driven automated pulp space segmentation on cone-beam computed tomography images.
    METHODS: A comprehensive electronic search was performed using PubMed, Web of Science, and Cochrane databases, up until February 2024. Two independent reviewers participated in the selection of studies, data extraction, and evaluation of the included studies. Any disagreements were resolved by a third reviewer. The Quality Assessment of Diagnostic Accuracy Studies-2 tool was used to assess the risk of bias.
    RESULTS: Thirteen studies that met the eligibility criteria were included. Most studies demonstrated high accuracy in their respective segmentation methods, although there was some variation across structures (pulp chamber, root canal) and tooth types (single-rooted, multirooted). Automated segmentation performed slightly better for the pulp chamber than for the root canal, and for single-rooted teeth than for multirooted ones. Furthermore, segmentation of the second mesiobuccal (MB2) canal also demonstrated high performance. In terms of time efficiency, the minimum time required for segmentation was 13 seconds.
    CONCLUSIONS: Artificial intelligence-driven models demonstrated outstanding performance in pulp space segmentation. Nevertheless, these findings warrant careful interpretation, and their generalizability is limited by the potential risk of bias and the low level of evidence arising from inadequately detailed methodologies and inconsistent assessment techniques. In addition, there is room for further improvement, specifically in root canal segmentation and in testing artificial intelligence performance on artifact-affected images.

  • Article type: Journal Article
    BACKGROUND: Accurate segmentation of lung tumors on chest computed tomography (CT) scans is crucial for effective diagnosis and treatment planning. Deep Learning (DL) has emerged as a promising tool in medical imaging, particularly for lung cancer segmentation. However, its efficacy across different clinical settings and tumor stages remains variable.
    METHODS: We conducted a comprehensive search of PubMed, Embase, and Web of Science until November 7, 2023. We assessed the quality of these studies by using the Checklist for Artificial Intelligence in Medical Imaging and the Quality Assessment of Diagnostic Accuracy Studies-2 tools. This analysis included data from various clinical settings and stages of lung cancer. Key performance metrics, such as the Dice similarity coefficient, were pooled, and factors affecting algorithm performance, such as clinical setting, algorithm type, and image processing techniques, were examined.
    RESULTS: Our analysis of 37 studies revealed a pooled Dice score of 79% (95% CI: 76%-83%), indicating moderate accuracy. Radiotherapy studies had a slightly lower score of 78% (95% CI: 74%-82%). A temporal increase was noted, with recent studies (post-2022) showing improvement from 75% (95% CI: 70%-81%) to 82% (95% CI: 81%-84%). Key factors affecting performance included algorithm type, resolution adjustment, and image cropping. QUADAS-2 assessment identified an ambiguous risk of bias in 78% of studies due to omitted data intervals and concerns about generalizability in 8% due to the exclusion of nodules by size; the CLAIM criteria highlighted areas for improvement, with an average score of 27.24 out of 42.
    CONCLUSIONS: This meta-analysis demonstrates the promising but variable efficacy of DL algorithms in lung cancer segmentation, with higher efficacy noted particularly in early stages. The results highlight the critical need for continued development of tailored DL models to improve segmentation accuracy across diverse clinical settings, especially in advanced cancer stages, which pose greater challenges. As recent studies demonstrate, ongoing advancements in algorithmic approaches are crucial for future applications.
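    The pooled Dice estimate with its 95% confidence interval reflects meta-analytic pooling of per-study results. The sketch below shows fixed-effect, inverse-variance pooling on synthetic study values; the actual meta-analysis would more likely have used a random-effects model, so this shows only the mechanics, not the reported computation.

```python
# Illustrative sketch of inverse-variance pooling of per-study Dice scores,
# the kind of calculation behind the pooled estimate reported above. The study
# values are synthetic, and a published meta-analysis would more likely use a
# random-effects model; this fixed-effect version only shows the mechanics.
import numpy as np

dice = np.array([0.74, 0.81, 0.79, 0.83, 0.76])   # per-study mean Dice
se = np.array([0.03, 0.02, 0.04, 0.02, 0.03])     # per-study standard errors

w = 1.0 / se ** 2                                  # inverse-variance weights
pooled = np.sum(w * dice) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"pooled Dice: {pooled:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```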