segmentation

  • Article type: Journal Article
    OBJECTIVE: Tuberculosis (TB) remains a significant global infectious disease, posing a considerable health threat, particularly in resource-constrained regions. Radiologists face challenges in accurately diagnosing TB from X-ray images due to the diversity of datasets. This study proposes an innovative approach leveraging image processing techniques to enhance TB diagnostic accuracy within an automatic segmentation and classification (AuSC) framework for healthcare.
    METHODS: The AuSC framework for detection of TB (AuSC-DTB) comprises several steps: image preprocessing involving resizing and median filtering, segmentation using the random walker algorithm, and feature extraction using local binary pattern (LBP) and histogram of oriented gradients (HOG) descriptors. The extracted features are then classified with a support vector machine to distinguish between healthy and infected chest X-ray images. The effectiveness of the proposed technique was evaluated on four distinct datasets: the Japanese Society of Radiological Technology (JSRT), Montgomery, National Library of Medicine (NLM), and Shenzhen datasets.
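The median-filtering step in the preprocessing stage above can be illustrated with a minimal pure-Python sketch; this is a generic illustration (with a toy image and window size), not the authors' implementation, which would typically rely on something like `scipy.ndimage.median_filter`:

```python
def median_filter(img, k=3):
    """Apply a k x k median filter with edge replication.

    `img` is a list of lists of pixel intensities; `k` must be odd.
    """
    h, w, r = len(img), len(img[0]), k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the k x k neighborhood, clamping coordinates at the edges.
            window = [
                img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in range(-r, r + 1)
                for dx in range(-r, r + 1)
            ]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out

# A single bright outlier (salt noise) is suppressed by the median:
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
print(median_filter(noisy)[1][1])  # -> 10
```

Unlike mean filtering, the median discards the outlier entirely rather than smearing it into neighboring pixels, which is why it is a common choice before segmentation.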
    RESULTS: Experimental results demonstrate promising outcomes, with accuracy rates of 94%, 95%, 95%, and 93% achieved on the JSRT, Montgomery, NLM, and Shenzhen datasets, respectively. Comparative analysis against recent studies indicates superior performance of the proposed hybrid approach.
    CONCLUSIONS: The presented hybrid approach within the AuSC framework demonstrates improved diagnostic accuracy for TB detection across diverse X-ray image datasets. Furthermore, the methodology holds promise for generalizing to other diseases diagnosed through X-ray imaging, and it can be adapted to computed tomography and magnetic resonance imaging, extending its applicability in healthcare diagnostics.

  • Article type: Journal Article
    BACKGROUND: Medical imaging datasets frequently suffer from data imbalance, where the majority of pixels correspond to healthy regions and only a minority belong to affected regions. This uneven distribution of pixels exacerbates the challenges of computer-aided diagnosis. Networks trained with imbalanced data tend to exhibit bias toward the majority class, often demonstrating high precision but low sensitivity.
    METHODS: We designed a new network based on adversarial learning, the conditional contrastive generative adversarial network (CCGAN), to tackle class imbalance in highly imbalanced MRI datasets. The proposed model has three new components: (1) class-specific attention, (2) a region rebalancing module (RRM), and (3) a supervised contrastive learning network (SCoLN). The class-specific attention focuses on the more discriminative areas of the input representation, capturing more relevant features. The RRM promotes a more balanced distribution of features across regions of the input representation, ensuring a more equitable segmentation process. The generator of the CCGAN learns pixel-level segmentation by receiving feedback from the SCoLN based on true-negative and true-positive maps. This process ensures that the final semantic segmentation not only addresses imbalanced data but also enhances classification accuracy.
    RESULTS: The proposed model showed state-of-the-art performance on five highly imbalanced medical image segmentation datasets, and therefore holds significant potential for medical diagnosis in cases characterized by highly imbalanced data distributions. The CCGAN achieved the highest Dice similarity coefficient (DSC) scores across datasets: 0.965 ± 0.012 for BUS2017, 0.896 ± 0.091 for DDTI, 0.786 ± 0.046 for LiTS MICCAI 2017, 0.712 ± 1.5 for the ATLAS dataset, and 0.877 ± 1.2 for the BRATS 2015 dataset. DeepLab-V3 followed closely in second place, with DSC scores of 0.948 ± 0.010 for BUS2017, 0.895 ± 0.014 for DDTI, 0.763 ± 0.044 for LiTS MICCAI 2017, 0.696 ± 1.1 for the ATLAS dataset, and 0.846 ± 1.4 for the BRATS 2015 dataset.
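The Dice similarity coefficient (DSC) reported here, and throughout the studies in this listing, can be computed for a pair of binary masks in a few lines. This is a generic sketch of the metric itself, not any study's evaluation code:

```python
def dice_coefficient(pred, target):
    """Dice similarity coefficient (DSC) between two binary masks.

    Both masks are flat sequences of 0/1 labels of equal length.
    DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1.
    """
    assert len(pred) == len(target)
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks agree perfectly.
    return 1.0 if total == 0 else 2.0 * intersection / total

pred   = [1, 1, 0, 0, 1]
target = [1, 0, 0, 1, 1]
print(dice_coefficient(pred, target))  # -> 0.666...
```

Because the DSC weights the intersection against the combined mask sizes, it is far less forgiving of a small lesion missed entirely than plain pixel accuracy, which is why it dominates reporting in imbalanced segmentation tasks.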

  • Article type: Journal Article
    Ovarian cysts pose significant health risks, including torsion, infertility, and cancer, necessitating rapid and accurate diagnosis. Ultrasonography is commonly employed for screening, yet its effectiveness is hindered by challenges such as weak contrast, speckle noise, and hazy boundaries in images. This study proposes an adaptive deep learning-based segmentation technique using a database of ovarian ultrasound cyst images. A Guided Trilateral Filter (GTF) is applied for noise reduction in pre-processing. Segmentation uses an Adaptive Convolutional Neural Network (AdaResU-net) for precise cyst size identification and benign/malignant classification, optimized via the Wild Horse Optimization (WHO) algorithm. The Dice loss and weighted cross-entropy objective functions are optimized to enhance segmentation accuracy. Classification of cyst types is performed with a Pyramidal Dilated Convolutional (PDC) network. The method achieves a segmentation accuracy of 98.87%, surpassing existing techniques and thereby promising improved diagnostic accuracy and patient care outcomes.
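The two objective functions named above, Dice loss and weighted cross-entropy, can be sketched as follows. The blending weight `alpha` and the positive-class weight `pos_weight` are illustrative assumptions, not values from the paper:

```python
import math

def dice_loss(probs, targets, eps=1e-7):
    """Soft Dice loss: 1 - DSC computed on predicted probabilities."""
    inter = sum(p * t for p, t in zip(probs, targets))
    total = sum(probs) + sum(targets)
    return 1.0 - (2.0 * inter + eps) / (total + eps)

def weighted_cross_entropy(probs, targets, pos_weight=2.0, eps=1e-7):
    """Binary cross-entropy with extra weight on the positive (cyst) class."""
    loss = 0.0
    for p, t in zip(probs, targets):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        loss += -(pos_weight * t * math.log(p) + (1 - t) * math.log(1 - p))
    return loss / len(probs)

def combined_loss(probs, targets, alpha=0.5):
    """Blend of the two objectives; alpha balances them (assumed value)."""
    return alpha * dice_loss(probs, targets) + \
        (1 - alpha) * weighted_cross_entropy(probs, targets)

probs   = [0.9, 0.8, 0.2, 0.1]  # toy per-pixel cyst probabilities
targets = [1, 1, 0, 0]
print(round(combined_loss(probs, targets), 4))
```

Combining the two is a common design choice: the Dice term directly optimizes overlap on the (rare) cyst pixels, while the weighted cross-entropy term keeps per-pixel gradients well behaved early in training.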

  • Article type: Journal Article
    OBJECTIVE: This study aimed to train a 3D U-Net convolutional neural network (CNN) for mandible and lower dentition segmentation from cone-beam computed tomography (CBCT) scans.
    METHODS: In an ambispective cross-sectional design, CBCT scans from two hospitals (2009-2019 and 2021-2022) constituted an internal dataset and external validation set, respectively. Manual segmentation informed CNN training, and evaluations employed Dice similarity coefficient (DSC) for volumetric accuracy. A blinded oral maxillofacial surgeon performed qualitative grading of CBCT scans and object meshes. Statistical analyses included independent t-tests and ANOVA tests to compare DSC across patient subgroups of gender, race, body mass index (BMI), test dataset used, age, and degree of metal artifact. Tests were powered for a minimum detectable difference in DSC of 0.025, with alpha of 0.05 and power level of 0.8.
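The independent t-tests used for the subgroup comparisons above reduce to the pooled-variance two-sample t statistic. A minimal sketch with toy DSC values follows; a complete test would also need the t distribution to obtain the p-value (e.g. via `scipy.stats.ttest_ind`):

```python
def two_sample_t(a, b):
    """Pooled-variance (Student) two-sample t statistic, as used by an
    independent t-test comparing mean DSC between two subgroups."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5

group_a = [0.95, 0.94, 0.96, 0.93]  # toy DSC values for one age group
group_b = [0.91, 0.90, 0.92, 0.93]  # toy DSC values for another
print(round(two_sample_t(group_a, group_b), 3))  # -> 3.286
```

The study's power calculation (minimum detectable DSC difference of 0.025 at alpha 0.05, power 0.8) determines how small a between-group difference a statistic like this could reliably flag.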
    RESULTS: 648 CBCT scans from 490 patients were included in the study. The CNN achieved high accuracy (average DSC: 0.945 internal, 0.940 external). No DSC differences were observed across test dataset used, gender, BMI, or race. Significant differences in DSC were identified based on age group and degree of metal artifact. The majority (80%) of object meshes produced by both manual and automatic segmentation were rated as acceptable or higher quality.
    CONCLUSIONS: We developed a model for automatic mandible and lower dentition segmentation from CBCT scans in a demographically diverse cohort including a high degree of metal artifacts. The model demonstrated good accuracy on internal and external test sets, with majority acceptable quality from a clinical grader.

  • Article type: Journal Article
    Segmentation of infarcts is clinically important in ischemic stroke management and prognostication. It is unclear what role the combination of DWI, ADC, and FLAIR MRI sequences plays for deep learning in infarct segmentation. Recent technologies in model self-configuration have promised greater performance and generalizability through automated optimization. We assessed the utility of DWI, ADC, and FLAIR sequences for ischemic stroke segmentation, compared self-configuring nnU-Net models to conventional U-Net models without manual optimization, and evaluated the generalizability of the results on an external clinical dataset. 3D self-configuring nnU-Net models and standard 3D U-Net models built with MONAI were trained on 200 infarcts using DWI, ADC, and FLAIR sequences separately and in all combinations. Segmentation results were compared between models using paired t-tests on a hold-out test set of 50 cases. The highest-performing model was externally validated on a clinical dataset of 50 MRIs. nnU-Net with DWI sequences attained a Dice score of 0.810 ± 0.155. There was no statistically significant difference when DWI sequences were supplemented with ADC and FLAIR images (Dice score of 0.813 ± 0.150; p = 0.15). nnU-Net models significantly outperformed standard U-Net models for all sequence combinations (p < 0.001). On the external dataset, Dice scores measured 0.704 ± 0.199 for positive cases, with false positives arising in cases of intracranial hemorrhage. Highly optimized neural networks such as nnU-Net provide excellent stroke segmentation even when provided only DWI images, with no significant improvement from the other sequences. This differs from, and significantly outperforms, standard U-Net architectures. The results translated well to the external clinical environment and provide the groundwork for optimized acute stroke segmentation on MRI.

  • Article type: Journal Article
    OBJECTIVE: To determine the performance of volumetric dual energy low kV and iodine radiomic features for the differentiation of intrathoracic lymph node histopathology, and influence of contrast protocol.
    METHODS: Intrathoracic lymph nodes with histopathologic correlation (neoplastic, granulomatous sarcoid, or benign) within 90 days of DECT chest imaging were volumetrically segmented. 1691 volumetric radiomic features were extracted from each of the iodine maps and low-kV images, totaling 3382 features. Univariate analysis was performed using two-sample t-tests and filtered for false discoveries. Multivariable analysis was used to compute AUCs for lymph node classification tasks.
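Filtering thousands of univariate t-test results for false discoveries is commonly done with the Benjamini-Hochberg procedure; the abstract does not name the correction used, so this pure-Python sketch is an assumption about the general technique, not this paper's code:

```python
def benjamini_hochberg(p_values, q=0.05):
    """Return indices of hypotheses rejected at false-discovery rate q.

    Benjamini-Hochberg: sort the m p-values, find the largest k with
    p_(k) <= (k / m) * q, and reject the k smallest hypotheses.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * q:
            k_max = rank  # keep the largest rank passing the threshold
    return sorted(order[:k_max])

# Toy p-values from 8 hypothetical feature comparisons:
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(benjamini_hochberg(pvals, q=0.05))  # -> [0, 1]
```

Note that several p-values below 0.05 are still rejected here: with 3382 features tested, controlling the false-discovery rate rather than the raw threshold is what keeps the reported "199 features differed" meaningful.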
    RESULTS: 129 lymph nodes from 72 individuals (mean age 61 ± 15 years) were included: 52 neoplastic, 51 benign, and 26 granulomatous-sarcoid. Among all contrast-enhanced DECT protocol exams (routine, PE, and CTA), univariate analysis demonstrated no significant differences in iodine and low-kV features between neoplastic and non-neoplastic lymph nodes; in the subset of neoplastic versus benign lymph nodes with the routine DECT protocol, 199 features differed (p = .01 to < .05). Multivariable analysis using both iodine and low-kV features yielded AUCs > 0.8 for differentiating neoplastic from non-neoplastic lymph nodes (AUC 0.86), including the subsets of neoplastic from granulomatous (AUC 0.86) and neoplastic from benign (AUC 0.9) lymph nodes, among all contrast protocols.
    CONCLUSIONS: Volumetric DECT radiomic features demonstrate strong collective performance in differentiation of neoplastic from non-neoplastic intrathoracic lymph nodes, and are influenced by contrast protocol.

  • Article type: Journal Article
    With increasing numbers of magnetic resonance imaging (MRI) datasets becoming publicly available, researchers and clinicians alike have turned to automated methods of segmentation to enable population-level analyses of these data. Although prior research has evaluated the extent to which automated methods recapitulate "gold standard" manual segmentation methods in the human brain, such an evaluation has not yet been carried out for segmentation of MRIs of the macaque brain. Macaques offer the important opportunity to bridge gaps between microanatomical studies using invasive methods like tract tracing, neural recordings, and high-resolution histology and non-invasive macroanatomical studies using methods like MRI. As such, it is important to evaluate whether automated tools derive data of sufficient quality from macaque MRIs to bridge these gaps. We tested the relationship between automated registration-based segmentation using an open source and actively maintained NHP imaging analysis pipeline (AFNI) and gold standard manual segmentation of 4 structures (2 cortical: anterior cingulate cortex and insula; 2 subcortical: amygdala and caudate) across 37 rhesus macaques (Macaca mulatta). We identified some variability in the strength of correlation between automated and manual segmentations across neural regions and differences in relationships with demographic variables like age and sex between the two techniques.

  • Article type: Journal Article
    To study plant organs, it is necessary to investigate the three-dimensional (3D) structures of plants. In recent years, non-destructive measurements through computed tomography (CT) have been used to understand the 3D structures of plants. In this study, we use the Chrysanthemum seticuspe capitulum inflorescence as an example and focus on contact points between the receptacles and florets within the 3D capitulum inflorescence bud structure to investigate the 3D arrangement of the florets on the receptacle. To determine the 3D order of the contact points, we constructed slice images from the CT volume data and detected the receptacles and florets in the image. However, because each CT sample comprises hundreds of slice images to be processed and each C. seticuspe capitulum inflorescence comprises several florets, manually detecting the receptacles and florets is labor-intensive. Therefore, we propose an automatic contact point detection method based on CT slice images using image recognition techniques. The proposed method improves the accuracy of contact point detection using prior knowledge that contact points exist only around the receptacle. In addition, the integration of the detection results enables the estimation of the 3D position of the contact points. According to the experimental results, we confirmed that the proposed method can detect contacts on slice images with high accuracy and estimate their 3D positions through clustering. Additionally, the sample-independent experiments showed that the proposed method achieved the same detection accuracy as sample-dependent experiments.
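The integration step that turns per-slice detections into estimated 3D contact-point positions could look like the following greedy centroid merge; the paper's actual clustering algorithm is not specified in this abstract, so this is an illustrative stand-in with a toy `radius`:

```python
def cluster_points(points, radius=1.0):
    """Greedy clustering: a point within `radius` of a cluster's centroid
    merges into it; otherwise it starts a new cluster.

    Returns the list of centroids, one per estimated 3D contact point.
    """
    clusters = []  # each entry: [sum_x, sum_y, sum_z, count]
    for x, y, z in points:
        for c in clusters:
            cx, cy, cz = c[0] / c[3], c[1] / c[3], c[2] / c[3]
            if ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5 <= radius:
                c[0] += x; c[1] += y; c[2] += z; c[3] += 1
                break
        else:
            clusters.append([x, y, z, 1])
    return [(c[0] / c[3], c[1] / c[3], c[2] / c[3]) for c in clusters]

# Detections of the same contact point on neighbouring slices collapse to one
# centroid, while a distant detection stays separate:
detections = [(1.0, 1.0, 1.0), (1.1, 0.9, 1.0), (5.0, 5.0, 5.0)]
print(cluster_points(detections, radius=0.5))
```

In practice the merge radius would be chosen from the CT slice spacing and floret size, and a density-based method such as DBSCAN would handle noise detections more robustly.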

  • Article type: Journal Article
    BACKGROUND: Volume measurement of intracerebral hemorrhage (ICH) and intraventricular hemorrhage (IVH) provides critical information for the precise treatment of patients with spontaneous ICH but remains a major challenge, especially for IVH segmentation. However, previously proposed ICH and IVH segmentation tools lack external validation and segmentation quality assessment.
    OBJECTIVE: This study aimed to develop a robust deep learning model for the segmentation of ICH and IVH with external validation, and to provide quality assessment for IVH segmentation.
    METHODS: In this study, a Residual Encoding Unet (REUnet) for the segmentation of ICH and IVH was developed using a dataset composed of 977 CT images (all contained ICH, and 338 contained IVH; a five-fold cross-validation procedure was adopted for training and internal validation), and externally tested using an independent dataset consisting of 375 CT images (all contained ICH, and 105 contained IVH). The performance of REUnet was compared with six other advanced deep learning models. Subsequently, three approaches, including Prototype Segmentation (ProtoSeg), Test Time Dropout (TTD), and Test Time Augmentation (TTA), were employed to derive segmentation quality scores in the absence of ground truth to provide a way to assess the segmentation quality in real practice.
    RESULTS: For ICH segmentation, the median (lower quantile, upper quantile) Dice scores obtained from REUnet were 0.932 (0.898, 0.953) for internal validation and 0.888 (0.859, 0.916) for the external test, both better than those of the other models and comparable to nnUnet3D in the external test. For IVH segmentation, the Dice scores obtained from REUnet were 0.826 (0.757, 0.868) for internal validation and 0.777 (0.693, 0.827) for the external test, better than those of all other models. The concordance correlation coefficients between the volumes estimated from the REUnet-generated segmentations and those from the manual segmentations ranged from 0.944 to 0.987 for both ICH and IVH. For IVH segmentation quality assessment, the quality score derived from ProtoSeg correlated with the Dice score (Spearman r = 0.752 in the external test) and performed better than those from TTD (Spearman r = 0.718) and TTA (Spearman r = 0.260). By setting a threshold on the segmentation quality score, we were able to identify low-quality IVH segmentation results from ProtoSeg.
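The Spearman correlations reported above are simply the Pearson correlation computed on ranks; a minimal sketch with toy quality and Dice scores (not data from the study):

```python
def spearman_r(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.

    Ties receive the average of their positional (1-based) ranks.
    """
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1  # extend over a run of tied values
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

quality_scores = [0.2, 0.5, 0.6, 0.8, 0.9]  # toy segmentation quality scores
dice_scores    = [0.4, 0.5, 0.7, 0.8, 0.95]
print(spearman_r(quality_scores, dice_scores))  # ≈ 1.0 (perfectly monotone)
```

Because it depends only on ranks, Spearman's r is the natural choice here: a quality score only needs to order segmentations correctly to be useful for flagging low-quality results, not to predict Dice values on the same scale.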
    CONCLUSIONS: The proposed REUnet offers a promising tool for accurate and automated segmentation of ICH and IVH, and for effective IVH segmentation quality assessment, and thus exhibits the potential to facilitate therapeutic decision-making for patients with spontaneous ICH in clinical practice.

  • Article type: Journal Article
    BACKGROUND: Accurate and automated segmentation of targets and organs-at-risk (OARs) is crucial for the successful clinical application of online adaptive radiotherapy (ART). Current methods for cone-beam computed tomography (CBCT) auto-segmentation face challenges, with results often failing to reach clinical acceptability, and overlook the wealth of information available from initial planning and prior adaptive fractions that could enhance segmentation precision.
    METHODS: We introduce a novel framework that incorporates data from a patient's initial plan and previous adaptive fractions, harnessing this additional temporal context to significantly refine segmentation accuracy for the current fraction's CBCT images. We present LSTM-UNet, an architecture that integrates Long Short-Term Memory (LSTM) units into the skip connections of the traditional U-Net framework to retain information from previous fractions. The models underwent initial pre-training with simulated data, followed by fine-tuning on a clinical dataset.
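The LSTM units placed in the skip connections carry state across treatment fractions via the standard gated recurrence. The scalar sketch below shows that recurrence with toy weights; real LSTM-UNet layers would operate on feature maps (e.g. convolutional LSTMs), and all values here are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_cell_step(x, h_prev, c_prev, W):
    """One scalar LSTM step: gates decide what to keep from prior fractions.

    W maps gate name -> (w_x, w_h, b). Returns the new hidden and cell state.
    """
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])    # forget gate
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])    # input gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2])  # candidate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])    # output gate
    c = f * c_prev + i * g    # cell state blends old memory with new input
    h = o * math.tanh(c)      # hidden state passed on through the skip path
    return h, c

# Feed a sequence of "fraction features" through the cell; the state
# accumulates context from earlier fractions (toy weights throughout).
W = {k: (0.5, 0.5, 0.0) for k in ("f", "i", "g", "o")}
h, c = 0.0, 0.0
for x in [1.0, 0.8, 1.2]:  # toy per-fraction skip-connection features
    h, c = lstm_cell_step(x, h, c, W)
print(h, c)
```

The forget gate is what distinguishes this from simply averaging prior fractions: the network can learn to discard stale anatomy (e.g. after large inter-fraction changes) while retaining stable context.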
    RESULTS: Our proposed model's segmentation predictions yield an average Dice similarity coefficient of 79% across 8 head-and-neck organs and targets, compared with 52% for a baseline model without prior knowledge and 78% for a baseline model with prior knowledge but no memory.
    CONCLUSIONS: Our proposed model outperforms baseline segmentation frameworks by effectively utilizing information from prior fractions, reducing the effort clinicians spend revising auto-segmentation results. Moreover, it works together with registration-based methods that offer better prior knowledge. Our model holds promise for integration into the online ART workflow, offering precise segmentation capabilities on synthetic CT images.
