Auto-segmentation

  • Article type: Journal Article
    OBJECTIVE: Artificial intelligence (AI)-aided methods have made significant progress in the auto-delineation of normal tissues. However, these approaches struggle with the auto-contouring of radiotherapy target volume. Our goal is to model the delineation of target volume as a clinical decision-making problem, resolved by leveraging large language model-aided multimodal learning approaches.
    METHODS: A vision-language model, termed Medformer, has been developed, employing the hierarchical vision transformer as its backbone, and incorporating large language models to extract text-rich features. The contextually embedded linguistic features are seamlessly integrated into visual features for language-aware visual encoding through the visual language attention module. Metrics, including Dice similarity coefficient (DSC), intersection over union (IOU), and 95th percentile Hausdorff distance (HD95), were used to quantitatively evaluate the performance of our model. The evaluation was conducted on an in-house prostate cancer dataset and a public oropharyngeal carcinoma (OPC) dataset, totaling 668 subjects.
    RESULTS: Our Medformer achieved a DSC of 0.81 ± 0.10 versus 0.72 ± 0.10, IOU of 0.73 ± 0.12 versus 0.65 ± 0.09, and HD95 of 9.86 ± 9.77 mm versus 19.13 ± 12.96 mm for delineation of gross tumor volume (GTV) on the prostate cancer dataset. Similarly, on the OPC dataset, it achieved a DSC of 0.77 ± 0.11 versus 0.72 ± 0.09, IOU of 0.70 ± 0.09 versus 0.65 ± 0.07, and HD95 of 7.52 ± 4.8 mm versus 13.63 ± 7.13 mm, representing significant improvements (p < 0.05). For delineating the clinical target volume (CTV), Medformer achieved a DSC of 0.91 ± 0.04, IOU of 0.85 ± 0.05, and HD95 of 2.98 ± 1.60 mm, comparable to other state-of-the-art algorithms.
    CONCLUSIONS: Auto-delineation of the treatment target based on multimodal learning outperforms conventional approaches that rely purely on visual features. Our method could be adopted into routine practice to rapidly contour CTV/GTV.
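
    The overlap metrics reported in this abstract (DSC and IOU) are standard quantities computed from a pair of binary masks. As a hedged illustration only (not the authors' code), the sketch below shows one common way to compute them with NumPy; the function name and the toy spherical masks are hypothetical.
```python
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Dice similarity coefficient and intersection over union for two
    binary masks of identical shape (e.g. predicted and reference GTV)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dsc = 2.0 * intersection / (pred.sum() + gt.sum())  # 1.0 means perfect overlap
    iou = intersection / union
    return float(dsc), float(iou)

# Toy example: two slightly offset spheres inside a 64^3 volume.
zz, yy, xx = np.ogrid[:64, :64, :64]
gt = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
pred = (zz - 30) ** 2 + (yy - 32) ** 2 + (xx - 33) ** 2 < 15 ** 2
print(dice_and_iou(pred, gt))
```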

  • Article type: Journal Article
    Medical image auto-segmentation techniques are basic and critical for numerous image-based analysis applications that play an important role in developing advanced and personalized medicine. Compared with manual segmentation, auto-segmentation is expected to contribute to a more efficient clinical routine and workflow by requiring fewer human interventions or revisions of the auto-segmentations. However, current auto-segmentation methods are usually developed with the help of popular segmentation metrics that do not directly consider human correction behavior. The Dice Coefficient (DC) focuses on the truly segmented areas, while the Hausdorff Distance (HD) only measures the maximal distance between the auto-segmentation boundary and the ground truth boundary. Boundary-length-based metrics such as surface DC (surDC) and Added Path Length (APL) try to distinguish truly predicted boundary pixels from wrong ones. It is uncertain whether these metrics can reliably indicate the manual mending effort required for their application in segmentation research. Therefore, in this paper, the potential use of the above four metrics, as well as a novel metric called the Mendability Index (MI), to predict the human correction effort is studied with linear and support vector regression models. A total of 265 3D computed tomography (CT) samples for 3 objects of interest from 3 institutions, with corresponding auto-segmentations and ground truth segmentations, are utilized to train and test the prediction models. Five-fold cross-validation experiments demonstrate that meaningful human effort prediction can be achieved using segmentation metrics, with varying prediction errors for different objects. The improved variant of MI, called MIhd, generally shows the best prediction performance, suggesting its potential to reliably indicate the clinical value of auto-segmentations.
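
    As a rough, non-authoritative sketch of the prediction setup described above (segmentation metrics as inputs, human correction effort as the regression target, linear and support vector regression evaluated with five-fold cross-validation), the following scikit-learn snippet uses placeholder data; the feature layout, effort units, and error scoring are illustrative assumptions, not the paper's implementation.
```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# Placeholder data: one row per case, columns = segmentation metrics
# (e.g. DC, HD, surDC, APL, MI); target = measured manual correction effort.
rng = np.random.default_rng(0)
X = rng.random((265, 5))        # hypothetical metric values for 265 cases
y = rng.random(265) * 60.0      # hypothetical correction effort (e.g. minutes)

for name, model in [("linear", LinearRegression()), ("svr", SVR(kernel="rbf"))]:
    # Five-fold cross-validation, scored with mean absolute error.
    mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
    print(f"{name}: MAE = {mae.mean():.2f} +/- {mae.std():.2f}")
```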

  • Article type: Journal Article
    Quantitative analysis of the dynamic properties of thoraco-abdominal organs such as lungs during respiration could lead to more accurate surgical planning for disorders such as Thoracic Insufficiency Syndrome (TIS). This analysis can be done from semi-automatic delineations of the aforesaid organs in scans of the thoraco-abdominal body region. Dynamic magnetic resonance imaging (dMRI) is a practical and preferred imaging modality for this application, although automatic segmentation of the organs in these images is very challenging. In this paper, we describe an auto-segmentation system we built and evaluated based on dMRI acquisitions from 95 healthy subjects. For the three recognition approaches, the system achieves a best average location error (LE) of about 1 voxel for the lungs. The standard deviation (SD) of LE is about 1-2 voxels. For the delineation approach, the average Dice coefficient (DC) is about 0.95 for the lungs. The standard deviation of DC is about 0.01 to 0.02 for the lungs. The system seems to be able to cope with the challenges posed by low resolution, motion blur, inadequate contrast, and image intensity non-standardness quite well. We are in the process of testing its effectiveness on TIS patient dMRI data and on other thoraco-abdominal organs including liver, kidneys, and spleen.
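
    The abstract does not specify how the location error (LE) of the recognition step is defined; one common convention is the distance between the geometric centres of the recognized object and the reference delineation. The sketch below follows that assumption only for illustration, and the function name is hypothetical.
```python
import numpy as np
from scipy import ndimage

def location_error_voxels(auto_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Location error as the Euclidean distance (in voxels) between the
    geometric centres of the recognized object and the reference object."""
    c_auto = np.array(ndimage.center_of_mass(auto_mask.astype(float)))
    c_ref = np.array(ndimage.center_of_mass(ref_mask.astype(float)))
    return float(np.linalg.norm(c_auto - c_ref))
```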

  • Article type: Journal Article
    Artificial intelligence (AI) is a technique that tries to think like humans and mimic human behaviors. It has been considered an alternative for many human-dependent steps in radiotherapy (RT), since human participation is a principal source of uncertainty in RT. The aim of this work is to provide a systematic summary of the current literature on AI applications in RT and to clarify their role in RT practice from a clinical point of view.
    A systematic literature search of PubMed and Google Scholar was performed to identify original articles involving AI applications in RT from inception to 2022. Studies were included if they reported original data and explored clinical applications of AI in RT.
    The selected studies were categorized into three aspects of RT: organ and lesion segmentation, treatment planning, and quality assurance. For each aspect, this review discusses how these AI tools can be involved in the RT protocol.
    Our study revealed that AI is a potential alternative for the human-dependent steps in the complex process of RT.

  • Article type: Journal Article
    The aim of the present study is to characterize a deep learning-based auto-segmentation software (DL) for prostate cone beam computed tomography (CBCT) images and to evaluate its applicability in the clinical adaptive radiation therapy routine.
    Ten patients who received exclusive radiation therapy with definitive intent to the prostate gland and seminal vesicles were selected. The femoral heads, bladder, rectum, prostate, and seminal vesicles were retrospectively contoured by four different expert radiation oncologists on the patients' CBCTs acquired during treatment. Consensus contours (CC) were generated starting from these data and compared with those created by DL with different algorithms, trained on CBCT (DL-CBCT) or computed tomography (DL-CT). The Dice similarity coefficient (DSC), centre of mass (COM) shift, and volume relative variation (VRV) were chosen as comparison metrics. Since no tolerance limit can be defined, results were also compared with the inter-operator variability (IOV) using the same metrics.
    The best agreement between DL and CC was observed for the femoral heads (DSC of 0.96 for both DL-CBCT and DL-CT). Performance worsened for low-contrast soft-tissue organs: the worst results were found for the seminal vesicles (DSC of 0.70 and 0.59 for DL-CBCT and DL-CT, respectively). The analysis shows that it is appropriate to use algorithms trained on the specific imaging modality. Furthermore, the statistical analysis showed that, for almost all considered structures, there is no significant difference between DL-CBCT and the human operators in terms of IOV.
    The accuracy of DL-CBCT is in accordance with the CC; its use in clinical practice is justified by the comparison with the inter-operator variability.
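
    For reference, the comparison metrics named above (COM shift and VRV) can be computed from binary masks roughly as sketched below. The exact definitions used in the study are not reproduced here, so the VRV formula (volume difference relative to the consensus contour) and the helper name are assumptions.
```python
import numpy as np
from scipy import ndimage

def com_shift_and_vrv(auto_mask, ref_mask, spacing=(1.0, 1.0, 1.0)):
    """Centre-of-mass (COM) shift in mm and volume relative variation (VRV)
    of an auto-segmented structure with respect to a reference contour."""
    spacing = np.asarray(spacing, dtype=float)
    com_auto = np.array(ndimage.center_of_mass(auto_mask.astype(float))) * spacing
    com_ref = np.array(ndimage.center_of_mass(ref_mask.astype(float))) * spacing
    com_shift = float(np.linalg.norm(com_auto - com_ref))
    voxel_volume = float(np.prod(spacing))
    v_auto = auto_mask.astype(bool).sum() * voxel_volume
    v_ref = ref_mask.astype(bool).sum() * voxel_volume
    vrv = float((v_auto - v_ref) / v_ref)  # assumed: relative to the reference volume
    return com_shift, vrv
```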

  • Article type: Journal Article
    OBJECTIVE: Bio-medical image segmentation models typically attempt to predict one segmentation that resembles a ground-truth structure as closely as possible. However, as medical images are not perfect representations of anatomy, obtaining this ground truth is not possible. A commonly used surrogate is to have multiple expert observers define the same structure for a dataset. When multiple observers define the same structure on the same image, there can be significant differences depending on the structure, the image quality/modality, and the region being defined. It is often desirable to estimate this type of aleatoric uncertainty in a segmentation model to help understand the region in which the true structure is likely to be positioned. Furthermore, obtaining these datasets is resource intensive, so training such models using limited data may be required. With a small dataset size, differing patient anatomy is likely not well represented, causing epistemic uncertainty, which should also be estimated so that it can be determined for which cases the model is effective.
    METHODS: We use a 3D probabilistic U-Net to train a model from which several segmentations can be sampled to estimate the range of uncertainty seen between multiple observers. To ensure that the regions where observers disagree most are emphasised in model training, we expand the Generalised Evidence Lower Bound (ELBO) with a Constrained Optimisation (GECO) loss function with an additional contour loss term to give attention to these regions. Ensemble and Monte-Carlo dropout (MCDO) uncertainty quantification methods are used during inference to estimate model confidence on an unseen case. We apply our methodology to two radiotherapy clinical trial datasets, a gastric cancer trial (TOPGEAR, TROG 08.08) and a post-prostatectomy prostate cancer trial (RAVES, TROG 08.03). Each dataset contains only 10 cases for model development to segment the clinical target volume (CTV), which was defined by multiple observers on each case. An additional 50 cases are available as a hold-out dataset for each trial, in which only one observer defined the CTV structure on each case. Up to 50 samples were generated using the probabilistic model for each case in the hold-out dataset. To assess performance, each manually defined structure was matched to the closest sampled segmentation based on commonly used metrics.
    RESULTS: The TOPGEAR CTV model achieved a Dice Similarity Coefficient (DSC) and Surface DSC (sDSC) of 0.7 and 0.43, respectively, with the RAVES model achieving 0.75 and 0.71, respectively. Segmentation quality across cases in the hold-out datasets was variable; however, both the ensemble and MCDO uncertainty estimation approaches were able to accurately estimate model confidence, with a p-value < 0.001 for both TOPGEAR and RAVES when comparing with the DSC using the Pearson correlation coefficient.
    CONCLUSIONS: We demonstrated that training auto-segmentation models that can estimate aleatoric and epistemic uncertainty using limited datasets is possible. Having the model estimate prediction confidence is important for understanding for which unseen cases a model is likely to be useful.
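
    A minimal sketch of the hold-out evaluation step described in the methods, matching a manually defined structure to the closest of the sampled segmentations by DSC, is shown below; the per-voxel agreement map added as an uncertainty proxy and the function names are illustrative assumptions rather than the authors' implementation.
```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def best_matching_sample(samples: np.ndarray, manual: np.ndarray):
    """Match one manually defined structure to the closest of N sampled
    segmentations (array of shape (N, D, H, W)) using the Dice score."""
    scores = np.array([dice(s, manual) for s in samples])
    best = samples[int(scores.argmax())]
    # Fraction of samples labelling each voxel as foreground: values near
    # 0 or 1 indicate agreement, values near 0.5 indicate high variability.
    agreement = samples.astype(float).mean(axis=0)
    return best, float(scores.max()), agreement
```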

  • Article type: Journal Article
    OBJECTIVE: One of the current roadblocks to the widespread use of Total Marrow Irradiation (TMI) and Total Marrow and Lymphoid Irradiation (TMLI) is the challenging tumor target contouring workflow. This study aims to develop a hybrid neural network model that promotes accurate, automatic, and rapid segmentation of multi-class clinical target volumes.
    METHODS: Patients who underwent TMI and TMLI from January 2018 to May 2022 were included. Two independent oncologists manually contoured eight target volumes for patients on CT images. A novel Dual-Encoder Alignment Network (DEA-Net) was developed and trained using 46 patients from one internal institution and independently evaluated on a total of 39 internal and external patients. Performance was evaluated on accuracy metrics and delineation time.
    RESULTS: The DEA-Net achieved a mean Dice similarity coefficient of 90.1% ± 1.8% for the internal testing dataset (23 patients) and 91.1% ± 2.5% for the external testing dataset (16 patients). The 95% Hausdorff distance and average symmetric surface distance were 2.04 ± 0.62 mm and 0.57 ± 0.11 mm for the internal testing dataset, and 2.17 ± 0.68 mm and 0.57 ± 0.20 mm for the external testing dataset, respectively, outperforming most existing state-of-the-art methods. In addition, the automatic segmentation workflow reduced delineation time by 98% compared to the conventional manual contouring process (mean 173 ± 29 s vs. 12168 ± 1690 s; P < 0.001). An ablation study validated the effectiveness of the hybrid structure.
    CONCLUSIONS: The proposed deep learning framework achieved comparable or superior target volume delineation accuracy, significantly accelerating the radiotherapy planning process.
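
    The boundary metrics reported above (95% Hausdorff distance and average symmetric surface distance) are typically computed from surface-to-surface distances; a hedged SciPy sketch follows. The surface extraction and the directed-percentile convention for HD95 are common choices, not necessarily those used in this study.
```python
import numpy as np
from scipy import ndimage

def _surface_distances(a, b, spacing):
    """Distances from the surface voxels of mask a to the surface of mask b."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)
    surf_b = b ^ ndimage.binary_erosion(b)
    # Euclidean distance map to the surface of b, sampled on the surface of a.
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def hd95_and_assd(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile Hausdorff distance and average symmetric surface
    distance (in mm when the voxel spacing is given in mm)."""
    d_ab = _surface_distances(a, b, spacing)
    d_ba = _surface_distances(b, a, spacing)
    hd95 = max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
    assd = (d_ab.sum() + d_ba.sum()) / (d_ab.size + d_ba.size)
    return float(hd95), float(assd)
```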

  • Article type: Editorial
    No abstract available.

  • Article type: Journal Article
    OBJECTIVE: Deep-learning networks for super-resolution (SR) reconstruction enhance the spatial resolution of 3D magnetic resonance imaging (MRI) for MR-guided radiotherapy (MRgRT). However, variations between MRI scanners and patients affect the quality of SR for real-time 3D low-resolution (LR) cine MRI. In this study, we present a personalized super-resolution (psSR) network that incorporates transfer learning to overcome the challenges of inter-scanner SR of 3D cine MRI.
    APPROACH: Development of the proposed psSR network comprises two stages: (1) a cohort-specific SR (csSR) network using clinical patient datasets, and (2) a psSR network using transfer learning to target datasets. The csSR network was developed by training on breath-hold and respiratory-gated high-resolution (HR) 3D MRIs and their k-space down-sampled LR MRIs from 53 thoracoabdominal patients scanned at 1.5 T. The psSR network was developed through transfer learning to retrain the csSR network using a single breath-hold HR MRI and a corresponding 3D cine MRI from 5 healthy volunteers scanned at 0.55 T. Image quality was evaluated using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Clinical feasibility was assessed by liver contouring on the psSR MRI using an auto-segmentation network and quantified using the Dice similarity coefficient (DSC).
    RESULTS: Mean PSNR and SSIM values of psSR MRIs increased by 57.2% (from 13.8 to 21.7) and 94.7% (from 0.38 to 0.74) compared to cine MRIs, with the 0.55 T breath-hold HR MRI as reference. In the contour evaluation, the DSC increased by 15% (from 0.79 to 0.91). The average time was 90 s for transfer learning, 4.51 ms per volume for psSR, and 210 ms for auto-segmentation.
    SIGNIFICANCE: The proposed psSR reconstruction substantially increased the image and segmentation quality of cine MRI in an average of 215 ms across scanners and patients, with less than 2 min of prerequisite transfer learning. This approach would be effective in overcoming the cohort- and scanner-dependency of deep learning for MRgRT.
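
    The image-quality figures quoted above correspond to the standard PSNR and SSIM metrics; a minimal scikit-image sketch is given below, with the data-range handling chosen only for illustration and not taken from the paper.
```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality(reference: np.ndarray, test: np.ndarray) -> tuple[float, float]:
    """PSNR and SSIM of a reconstructed (e.g. super-resolved) volume against
    a reference volume; both arrays must share the same shape."""
    data_range = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, test, data_range=data_range)
    ssim = structural_similarity(reference, test, data_range=data_range)
    return float(psnr), float(ssim)
```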

  • Article type: Journal Article
    Auto-segmentation promises greater speed and lower inter-reader variability than manual segmentation in radiation oncology clinical practice. This study aims to implement and evaluate the accuracy of the auto-segmentation algorithm "Masked Image modeling using the vision Transformers (SMIT)" for neck nodal metastases on longitudinal T2-weighted (T2w) MR images in oropharyngeal squamous cell carcinoma (OPSCC) patients.
    This prospective clinical trial study included 123 human papillomavirus-positive (HPV+) OPSCC patients who received concurrent chemoradiotherapy. T2w MR images were acquired on 3 T at pre-treatment (Tx), week 0, and intra-Tx weeks (1-3). Manual delineations of metastatic neck nodes from the 123 OPSCC patients were used for the SMIT auto-segmentation, and total tumor volumes were calculated. Standard statistical analyses compared contour volumes from SMIT vs manual segmentation (Wilcoxon signed-rank test [WSRT]), and Spearman's rank correlation coefficients (ρ) were computed. Segmentation accuracy was evaluated on the test data set using the Dice similarity coefficient (DSC) metric. P-values <0.05 were considered significant.
    There was no significant difference between manually and SMIT-delineated tumor volumes at pre-Tx (8.68 ± 7.15 vs 8.38 ± 7.01 cm3, P = 0.26 [WSRT]), and the Bland-Altman method established the limits of agreement as -1.71 to 2.31 cm3, with a mean difference of 0.30 cm3. SMIT and manually delineated tumor volume estimates were highly correlated (ρ = 0.84-0.96, P < 0.001). The mean DSC values were 0.86, 0.85, 0.77, and 0.79 at pre-Tx and intra-Tx weeks (1-3), respectively.
    The SMIT algorithm provides sufficient segmentation accuracy for oncological applications in HPV+ OPSCC.
    This is the first evaluation of auto-segmentation with SMIT using longitudinal T2w MRI in HPV+ OPSCC.
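
    The statistical comparisons described above (paired Wilcoxon signed-rank test, Spearman correlation, and Bland-Altman limits of agreement) can be reproduced in outline with SciPy; the sketch below uses hypothetical volume arrays and the conventional mean difference ± 1.96 SD limits, which may differ in detail from the study's analysis.
```python
import numpy as np
from scipy import stats

def compare_volumes(manual_cm3: np.ndarray, auto_cm3: np.ndarray) -> dict:
    """Paired comparison of manual vs auto-delineated tumour volumes."""
    _, wsrt_p = stats.wilcoxon(manual_cm3, auto_cm3)        # paired, non-parametric
    rho, rho_p = stats.spearmanr(manual_cm3, auto_cm3)      # rank correlation
    diff = auto_cm3 - manual_cm3
    mean_diff = float(diff.mean())
    sd = float(diff.std(ddof=1))
    return {
        "wilcoxon_p": float(wsrt_p),
        "spearman_rho": float(rho),
        "spearman_p": float(rho_p),
        "mean_difference": mean_diff,
        "limits_of_agreement": (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd),
    }
```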
