tumor segmentation

  • Article type: Journal Article
    BACKGROUND: Accurate tumor target contouring and T staging are vital for precision radiation therapy in nasopharyngeal carcinoma (NPC). Manually identifying the T-stage and contouring the gross tumor volume (GTV) is a laborious and highly time-consuming process. Previous deep learning-based studies have mainly focused on tumor segmentation, and few have specifically addressed tumor staging in NPC.
    OBJECTIVE: To bridge this gap, we aim to devise a model that can simultaneously identify the T-stage and perform accurate segmentation of the GTV in NPC.
    METHODS: We developed a transformer-based multi-task deep learning model that performs two tasks simultaneously: delineating the tumor contour and identifying the T-stage. Our retrospective study involved contrast-enhanced T1-weighted images (CE-T1WI) of 320 NPC patients (T-stage: T1-T4) collected between 2017 and 2020 at our institution; patients were randomly allocated into three cohorts for three-fold cross-validation, and external validation was conducted on an independent test set. We evaluated predictive performance using the area under the receiver operating characteristic curve (ROC-AUC) and accuracy (ACC), with 95% confidence intervals (CI), and contouring performance using the Dice similarity coefficient (DSC) and average surface distance (ASD).
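    DSC and ASD are standard contouring metrics; for reference, here is a minimal NumPy/SciPy sketch of both, assuming binary 3D masks and isotropic voxel spacing (the function names are ours, not the authors'):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def average_surface_distance(pred: np.ndarray, gt: np.ndarray,
                             spacing_mm: float = 1.0) -> float:
    """Symmetric average surface distance in mm (isotropic spacing assumed)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Surface voxels: mask minus its erosion.
    pred_surf = pred ^ binary_erosion(pred)
    gt_surf = gt ^ binary_erosion(gt)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_gt = distance_transform_edt(~gt_surf) * spacing_mm
    dist_to_pred = distance_transform_edt(~pred_surf) * spacing_mm
    surface_dists = np.concatenate([dist_to_gt[pred_surf], dist_to_pred[gt_surf]])
    return float(surface_dists.mean())
```
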
    RESULTS: Our multi-task model exhibited sound performance in GTV contouring (median DSC: 0.74; ASD: 0.97 mm) and T staging (AUC: 0.85, 95% CI: 0.82-0.87) across 320 patients. In early T category tumors, the model achieved a median DSC of 0.74 and an ASD of 0.98 mm, while in advanced T category tumors it reached a median DSC of 0.74 and an ASD of 0.96 mm. The accuracy of automated T staging was 76% (126 of 166) for early stages (T1-T2) and 64% (99 of 154) for advanced stages (T3-T4). Moreover, experimental results show that our multi-task model outperformed the other single-task models.
    CONCLUSIONS: This study highlights the potential of a multi-task model for simultaneously delineating the tumor contour and identifying the T-stage. The multi-task model harnesses the synergy between these interrelated learning tasks, improving the performance of both. These results suggest that our approach can be a practical tool for supporting clinical precision radiation therapy.
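    To make the multi-task idea concrete (a toy sketch only, not the authors' transformer architecture), a shared encoder can feed both a voxel-wise segmentation head and a global T-stage classification head, so gradients from both losses update the same features:

```python
import torch
import torch.nn as nn

class MultiTaskSegStage(nn.Module):
    """Toy shared-encoder model: GTV segmentation + T-stage classification."""
    def __init__(self, in_ch=1, n_stages=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv3d(32, 1, 1)           # voxel-wise tumor logits
        self.cls_head = nn.Sequential(                # global T-stage logits
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, n_stages),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.cls_head(feats)

# Joint loss: the two tasks share gradients through the encoder.
model = MultiTaskSegStage()
img = torch.randn(2, 1, 32, 64, 64)                   # (batch, ch, D, H, W)
seg_logits, stage_logits = model(img)
loss = (nn.functional.binary_cross_entropy_with_logits(
            seg_logits, torch.zeros_like(seg_logits))  # dummy mask target
        + nn.functional.cross_entropy(stage_logits, torch.tensor([0, 2])))
loss.backward()
```
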

  • Article type: Editorial
    The integration of automated whole-body tumor segmentation using 18F-FDG PET/CT images represents a pivotal shift in oncologic diagnostics, enhancing the precision and efficiency of tumor burden assessment. This editorial examines the transition toward automation, propelled by advancements in artificial intelligence, notably through deep learning techniques. We highlight the current availability of commercial tools and the academic efforts that have set the stage for these developments. Further, we comment on the challenges of data diversity, validation needs, and regulatory barriers. The role of metabolic tumor volume and total lesion glycolysis as vital metrics in cancer management underscores the significance of this evaluation. Despite promising progress, we call for increased collaboration across academia, clinical users, and industry to better realize the clinical benefits of automated segmentation, thus helping to streamline workflows and improve patient outcomes in oncology.

  • Article type: Journal Article
    BACKGROUND: The different tumor appearance of head and neck cancer across imaging modalities, scanners, and acquisition parameters accounts for the highly subjective nature of the manual tumor segmentation task. The variability of the manual contours is one of the causes of the lack of generalizability and the suboptimal performance of deep learning (DL) based tumor auto-segmentation models. Therefore, a DL-based method was developed that outputs predicted tumor probabilities for each PET-CT voxel in the form of a probability map instead of one fixed contour. The aim of this study was to show that DL-generated probability maps for tumor segmentation are clinically relevant, intuitive, and a more suitable solution to assist radiation oncologists in gross tumor volume segmentation on PET-CT images of head and neck cancer patients.
    METHODS: A graphical user interface (GUI) was designed, and a prototype was developed to allow the user to interact with tumor probability maps. Furthermore, a user study was conducted in which nine experts in tumor delineation interacted with the interface prototype and its functionality. The participants' experience was assessed qualitatively and quantitatively.
    RESULTS: The interviews with radiation oncologists revealed their preference for using a rainbow colormap to visualize tumor probability maps during contouring, which they found intuitive. They also appreciated the slider feature, which facilitated interaction by allowing the selection of threshold values to create single contours for editing and use as a starting point. Feedback on the prototype highlighted its excellent usability and positive integration into clinical workflows.
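    The slider described here effectively thresholds the voxel-wise probability map; a minimal sketch (ours, assuming a NumPy probability volume in [0, 1]) of turning a slider value into a single editable contour mask:

```python
import numpy as np

def contour_from_probability(prob_map: np.ndarray, threshold: float) -> np.ndarray:
    """Binarize a DL probability map at the slider-selected threshold."""
    return (prob_map >= threshold).astype(np.uint8)

prob = np.random.rand(16, 128, 128)                       # stand-in voxel probabilities
mask_conservative = contour_from_probability(prob, 0.9)   # high-confidence core
mask_liberal = contour_from_probability(prob, 0.3)        # wider starting contour
```
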
    CONCLUSIONS: This study shows that DL-generated tumor probability maps are explainable, transparent, and intuitive, and offer a better alternative to the single output of tumor segmentation models.

  • Article type: Journal Article
    OBJECTIVE: This study aims to develop an innovative deep model for thymoma risk stratification using preoperative CT images. Current algorithms predominantly focus on radiomic features or 2D deep features and require manual tumor segmentation by radiologists, limiting their practical applicability.
    METHODS: The deep model was trained and tested on a dataset comprising CT images from 147 patients (82 female; mean age, 54 years ± 10) who underwent surgical resection and received subsequent pathological confirmation. The eligible participants were divided into a training cohort (117 patients) and a testing cohort (30 patients) based on CT scan time. The model consists of two stages: 3D tumor segmentation and risk stratification. A radiomic model and a deep model (2D) were constructed for comparative analysis. Model performance was evaluated using the Dice coefficient, area under the curve (AUC), and accuracy.
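    As a hedged sketch of the two-stage pipeline this paragraph describes, the segmentation output can drive an ROI crop that feeds the risk classifier (seg_model and cls_model are placeholders, not the published networks):

```python
import torch

def two_stage_inference(ct_volume, seg_model, cls_model, margin=8):
    """Stage 1: segment the tumor; stage 2: classify risk on the cropped ROI.

    ct_volume: (1, 1, D, H, W) tensor; assumes the tumor is detected (non-empty mask).
    """
    with torch.no_grad():
        mask = (torch.sigmoid(seg_model(ct_volume)) > 0.5).squeeze()
        idx = mask.nonzero()                              # (N, 3) tumor voxel coords
        lo = (idx.min(dim=0).values - margin).clamp(min=0).tolist()
        hi = (idx.max(dim=0).values + margin).tolist()
        roi = ct_volume[..., lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        return cls_model(roi)                             # risk logits
```
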
    RESULTS: In both the training and testing cohorts, the deep model demonstrated better performance in differentiating thymoma risk, with AUCs of 0.998 and 0.893, respectively, compared to the radiomic model (AUCs of 0.773 and 0.769) and the deep model (2D) (AUCs of 0.981 and 0.760). Notably, the deep model was capable of simultaneously identifying lesions, segmenting the region of interest (ROI), and differentiating thymoma risk on arterial phase CT images. Its diagnostic performance outperformed that of the baseline models.
    CONCLUSIONS: The deep model has the potential to serve as an innovative decision-making tool, assisting in clinical prognosis evaluation and the selection of suitable treatments for different thymoma pathological subtypes.
    CONCLUSIONS: • This study incorporated both tumor segmentation and risk stratification. • The deep model, using clinical and 3D deep features, effectively predicted thymoma risk. • The deep model improved AUCs by 16.1 and 17.5 percentage points compared to the radiomic model and the deep model (2D), respectively.

  • Article type: Journal Article
    MR imaging is central to the assessment of tumor burden and changes over time in neuro-oncology. Several response assessment guidelines have been set forth by the Response Assessment in Pediatric Neuro-Oncology (RAPNO) working groups for different tumor histologies; however, the visual delineation of tumor components on MRI is not always straightforward, and complexities not currently addressed by these criteria can introduce inter- and intra-observer variability in manual assessments. Differentiating non-enhancing tumor from peritumoral edema, mild enhancement from absence of enhancement, and various cystic components can be challenging, particularly given the lack of sufficient and uniform imaging protocols in clinical practice. Automated tumor segmentation with artificial intelligence (AI) may be able to provide more objective delineations, but it relies on accurate and consistent manually created training data (ground truth). This paper reviews existing challenges and potential solutions in identifying and defining subregions of pediatric brain tumors (PBTs) that are not explicitly addressed by current guidelines. The goal is to assert the importance of defining and adopting criteria to address these challenges, as this will be critical to achieving standardized tumor measurements and reproducible response assessment in PBTs, ultimately leading to more precise outcome metrics and accurate comparisons among clinical studies.

  • Article type: Journal Article
    Accurate segmentation of tumor regions in automated breast ultrasound (ABUS) images is of paramount importance in computer-aided diagnosis systems. However, the inherent diversity of tumors and imaging interference pose great challenges to ABUS tumor segmentation. In this paper, we propose a global and local feature interaction model combined with graph fusion (GLGM) for 3D ABUS tumor segmentation. In GLGM, we construct a dual-branch encoder-decoder in which both local and global features can be extracted. Besides, a global and local feature fusion module is designed, which employs the deepest semantic interaction to facilitate information exchange between local and global features. Additionally, to improve segmentation performance for small tumors, a graph convolution-based shallow feature fusion module is designed. It exploits shallow features to enhance the feature expression of small tumors in both local and global domains. The proposed method is evaluated on a private ABUS dataset and a public ABUS dataset. In the private ABUS dataset, small tumors (volume smaller than 1 cm³) account for over 50% of the entire dataset. Experimental results show that the proposed GLGM model outperforms several state-of-the-art segmentation models in 3D ABUS tumor segmentation, particularly in segmenting small tumors.

  • Article type: Journal Article
    OBJECTIVE: Automatic tumor segmentation plays a crucial role in cancer diagnosis and treatment planning. Computed tomography (CT) and positron emission tomography (PET) are extensively employed for their complementary medical information. However, existing methods ignore bilateral cross-modal interaction of global features during feature extraction, and they underutilize multi-stage tumor boundary features.
    METHODS: To address these limitations, we propose a dual-branch tumor segmentation network based on global cross-modal interaction and boundary guidance in PET/CT images (DGCBG-Net). DGCBG-Net consists of: 1) a global cross-modal interaction module that extracts global contextual information from PET/CT images and promotes bilateral cross-modal interaction of global features; 2) a shared multi-path downsampling module that learns complementary features from PET/CT modalities to mitigate the impact of misleading features and decrease the loss of discriminative features during downsampling; 3) a boundary prior-guided branch that extracts potential boundary features from CT images at multiple stages, assisting the semantic segmentation branch in improving the accuracy of tumor boundary segmentation.
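    The fusion module above is specific to DGCBG-Net; purely as a generic illustration of injecting pooled global context back into local feature maps (all names below are ours, not from the paper), a sketch might look like:

```python
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    """Generic fusion: broadcast pooled global context onto local feature maps."""
    def __init__(self, ch):
        super().__init__()
        self.global_proj = nn.Linear(ch, ch)
        self.merge = nn.Conv3d(2 * ch, ch, kernel_size=1)

    def forward(self, local_feats):               # (B, C, D, H, W)
        b, c = local_feats.shape[:2]
        ctx = local_feats.mean(dim=(2, 3, 4))     # global average pool -> (B, C)
        ctx = self.global_proj(ctx).view(b, c, 1, 1, 1).expand_as(local_feats)
        return self.merge(torch.cat([local_feats, ctx], dim=1))
```
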
    RESULTS: Extensive experiments were conducted on the STS and Hecktor 2022 datasets to evaluate the proposed method. The average Dice scores of our DGCBG-Net on the two datasets are 80.33% and 79.29%, with average IOU scores of 67.64% and 70.18%. DGCBG-Net outperformed the current state-of-the-art methods with a 1.77% higher Dice score and a 2.12% higher IOU score.
    CONCLUSIONS: Extensive experimental results demonstrate that DGCBG-Net outperforms existing segmentation methods and is competitive with the state of the art.

  • Article type: Journal Article
    Radiotherapy is one of the primary treatment methods for tumors, but organ movement caused by respiration limits its accuracy. Recently, 3D imaging from a single X-ray projection has received extensive attention as a promising approach to address this issue. However, current methods can only reconstruct 3D images without directly locating the tumor and are validated only for fixed-angle imaging, which fails to fully meet the requirements of motion control in radiotherapy. In this study, a novel imaging method, RT-SRTS, is proposed which integrates 3D imaging and tumor segmentation into one network based on multi-task learning (MTL) and achieves real-time simultaneous 3D reconstruction and tumor segmentation from a single X-ray projection at any angle. Furthermore, attention-enhanced calibrator (AEC) and uncertain-region elaboration (URE) modules are proposed to aid feature extraction and improve segmentation accuracy. The proposed method was evaluated on fifteen patient cases and compared with three state-of-the-art methods. It not only delivers superior 3D reconstruction but also demonstrates commendable tumor segmentation results. Simultaneous reconstruction and segmentation can be completed in approximately 70 ms, significantly faster than the time threshold required for real-time tumor tracking. The efficacy of both AEC and URE has been validated in ablation studies. The code for this work is available at https://github.com/ZywooSimple/RT-SRTS.
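    The authors' implementation is available at the URL above; as a minimal, hedged sketch of the multi-task objective such a network optimizes (our own formulation with an assumed weighting term, not necessarily the RT-SRTS loss):

```python
import torch.nn.functional as F

def mtl_loss(recon_pred, recon_gt, seg_logits, seg_gt, seg_weight=1.0):
    """Joint objective: 3D reconstruction error + tumor segmentation error.

    seg_gt is assumed to be a float mask in {0., 1.}.
    """
    recon_term = F.mse_loss(recon_pred, recon_gt)               # volumetric fidelity
    seg_term = F.binary_cross_entropy_with_logits(seg_logits, seg_gt)
    return recon_term + seg_weight * seg_term
```
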

  • Article type: Journal Article
    Objective. Recently, deep learning techniques have found extensive application in the accurate and automated segmentation of tumor regions. However, owing to the variety of tumor shapes, complex types, and unpredictable spatial distribution, tumor segmentation still faces major challenges. Taking cues from deep supervision and adversarial learning, we devised a cascade-based methodology incorporating multi-scale adversarial learning and difficult-region supervision learning to tackle these challenges. Approach. Overall, the method adheres to a coarse-to-fine strategy, first roughly locating the target region and then refining the target object with multi-stage cascaded binary segmentation, which converts complex multi-class segmentation problems into multiple simpler binary segmentation problems. In addition, a multi-scale adversarial learning difficult supervised UNet (MSALDS-UNet) is proposed as our fine-segmentation model, which applies multiple discriminators along the decoding path of the segmentation network to implement multi-scale adversarial learning, thereby enhancing the accuracy of network segmentation. Meanwhile, in MSALDS-UNet we introduce a difficult-region supervision loss to effectively utilize structural information for segmenting difficult-to-distinguish areas, such as blurry boundary areas. Main results. Thorough validation on three independent public databases (KiTS21 and the MSD Brain and Pancreas datasets) shows that our model achieves satisfactory tumor segmentation results in terms of key evaluation metrics, including the Dice similarity coefficient, Jaccard similarity coefficient, and HD95. Significance. This paper introduces a cascade approach that combines multi-scale adversarial learning and difficult-region supervision to achieve precise tumor segmentation. It confirms that the combination can improve segmentation performance, especially for small objects (our code is publicly available at https://zhengshenhai.github.io/).
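    As a schematic of the coarse-to-fine cascade described above, where each stage solves one binary problem restricted to the previous stage's output (the stage models are placeholders; this is not the MSALDS-UNet code):

```python
import torch

def cascaded_binary_segmentation(volume, stage_models):
    """Coarse-to-fine: each binary stage refines within the previous mask.

    Assumes each model returns logits with the same shape as `volume`.
    """
    region = torch.ones_like(volume, dtype=torch.bool)
    masks = []
    for model in stage_models:                    # e.g. [organ_net, tumor_net]
        logits = model(volume)
        mask = (torch.sigmoid(logits) > 0.5) & region
        masks.append(mask)
        region = mask                             # restrict the next stage
    return masks
```
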

  • Article type: Journal Article
    Automatic breast ultrasound image segmentation plays an important role in medical image processing. However, current methods for breast ultrasound segmentation suffer from high computational complexity and large model parameters, particularly when dealing with complex images. In this paper, we take the Unext network as a basis and utilize its encoder-decoder features; taking inspiration from the mechanisms of cellular apoptosis and division, we design apoptosis and division algorithms to improve model performance. We propose a novel segmentation model that integrates the division and apoptosis algorithms and introduces spatial and channel convolution blocks into the model. Our proposed model not only improves the segmentation performance for breast ultrasound tumors but also reduces model parameters and computation time. The model was evaluated on a public breast ultrasound image dataset and our collected dataset. The experiments show that the SC-Unext model achieved a Dice score of 75.29% and an accuracy of 97.09% on the BUSI dataset, and on the collected dataset it reached a Dice score of 90.62% and an accuracy of 98.37%. Meanwhile, we compared the model's inference speed on CPUs to verify its efficiency in resource-constrained environments. The results indicated that the SC-Unext model achieved an inference speed of 92.72 ms per instance on devices equipped only with CPUs. The model's number of parameters and computational cost are 1.46 M and 2.13 GFlops, respectively, which are lower than those of other network models. Due to its lightweight nature, the model holds significant value for various practical applications in the medical field.
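    The per-instance CPU latency quoted above can be reproduced in spirit with a small timing harness like the following (a generic sketch; the model and input are placeholders, and we report the median over repeated runs):

```python
import time
import torch

def cpu_latency_ms(model, example_input, warmup=5, runs=50):
    """Median per-instance inference latency in milliseconds on CPU."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):                   # warm up caches / allocator
            model(example_input)
        times = []
        for _ in range(runs):
            t0 = time.perf_counter()
            model(example_input)
            times.append((time.perf_counter() - t0) * 1000.0)
    return sorted(times)[len(times) // 2]
```
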