image segmentation

  • Article type: Journal Article
    OBJECTIVE: Qualitative findings in Crohn's disease (CD) can be challenging to reliably report and quantify. We evaluated machine learning methodologies to both standardize the detection of common qualitative findings of ileal CD and determine finding spatial localization on CT enterography (CTE).
    METHODS: Subjects with ileal CD and a CTE from a single center retrospective study between 2016 and 2021 were included. 165 CTEs were reviewed by two fellowship-trained abdominal radiologists for the presence and spatial distribution of five qualitative CD findings: mural enhancement, mural stratification, stenosis, wall thickening, and mesenteric fat stranding. A Random Forest (RF) ensemble model using automatically extracted specialist-directed bowel features and an unbiased convolutional neural network (CNN) were developed to predict the presence of qualitative findings. Model performance was assessed using area under the curve (AUC), sensitivity, specificity, accuracy, and kappa agreement statistics.
    RESULTS: In 165 subjects with 29,895 individual qualitative finding assessments, agreement between radiologists for localization was good to very good (κ = 0.66 to 0.73), except for mesenteric fat stranding (κ = 0.47). RF prediction models had excellent performance, with an overall AUC, sensitivity, and specificity of 0.91, 0.81, and 0.85, respectively. RF model and radiologist agreement for localization of CD findings approximated agreement between radiologists (κ = 0.67 to 0.76). Unbiased CNN models without the benefit of disease knowledge had very similar performance to RF models that used specialist-defined imaging features.
    CONCLUSIONS: Machine learning techniques for CTE image analysis can identify the presence, location, and distribution of qualitative CD findings with similar performance to experienced radiologists.
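The kappa values quoted above measure chance-corrected agreement between two readers. A minimal sketch of Cohen's kappa (the labels below are hypothetical, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical presence (1) / absence (0) calls by two readers on ten segments.
reader_1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
reader_2 = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]
print(round(cohens_kappa(reader_1, reader_2), 3))  # 0.6
```

Raw agreement here is 8/10, but chance agreement from the marginals is 0.5, which is why kappa lands at 0.6 rather than 0.8.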

  • Article type: Journal Article
    Accurate delineation of Gross Tumor Volume (GTV) is crucial for radiotherapy. Deep learning-driven GTV segmentation technologies excel in rapidly and accurately delineating GTV, providing a basis for radiologists in formulating radiation plans. The existing 2D and 3D segmentation models of GTV based on deep learning are limited by the loss of spatial features and by anisotropy, respectively, and both are affected by the variability of tumor characteristics, blurred boundaries, and background interference. All these factors seriously affect segmentation performance. To address these issues, a Layer-Volume Parallel Attention (LVPA)-UNet model based on a 2D-3D architecture is proposed in this study, in which three strategies are introduced. Firstly, 2D and 3D workflows are introduced in LVPA-UNet. They work in parallel and can guide each other. Both the fine features of each slice of 2D MRI and the 3D anatomical structure and spatial features of the tumor can be extracted by them. Secondly, parallel multi-branch depth-wise strip convolutions adapt the model to tumors of varying shapes and sizes within slices and volumetric spaces, and achieve refined processing of blurred boundaries. Lastly, a Layer-Channel Attention mechanism is proposed to adaptively adjust the weights of slices and channels according to their different tumor information, and then to highlight slices and channels containing tumor. Experiments with LVPA-UNet on 1010 nasopharyngeal carcinoma (NPC) MRI datasets from three centers show a DSC of 0.7907, precision of 0.7929, recall of 0.8025, and HD95 of 1.8702 mm, outperforming eight typical models. Compared to the baseline model, it improves DSC by 2.14%, precision by 2.96%, and recall by 1.01%, while reducing HD95 by 0.5434 mm. Consequently, while ensuring segmentation efficiency through deep learning, LVPA-UNet is able to provide superior GTV delineation results for radiotherapy and offer technical support for precision medicine.
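The DSC reported above is the standard overlap metric for segmentation. A minimal sketch on toy 1-D masks standing in for flattened GTV segmentations (illustrative data, not the study's):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two flattened binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0  # both empty counts as perfect

pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(dice(pred, truth))  # 2*3 / (4+4) = 0.75
```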

  • Article type: Journal Article
    Deep learning can automate delineation in radiation therapy, reducing time and variability. Yet, its efficacy varies across different institutions, scanners, or settings, emphasizing the need for adaptable and robust models in clinical environments. Our study demonstrates the effectiveness of the transfer learning (TL) approach in enhancing the generalizability of deep learning models for auto-segmentation of organs-at-risk (OARs) in cervical brachytherapy.
    A pre-trained model was developed using 120 scans with a ring and tandem applicator on a 3T magnetic resonance (MR) scanner (RT3). Four OARs were segmented and evaluated. Segmentation performance was evaluated by the volumetric Dice Similarity Coefficient (vDSC), 95% Hausdorff Distance (HD95), surface DSC, and Added Path Length (APL). The model was fine-tuned on three out-of-distribution target groups. Pre- and post-TL outcomes, and the influence of the number of fine-tuning scans, were compared. A model trained with one group (Single) and a model trained with all four groups (Mixed) were evaluated on both seen and unseen data distributions.
    TL enhanced segmentation accuracy across target groups, matching the pre-trained model's performance. The first five fine-tuning scans led to the most noticeable improvements, with performance plateauing as more data were added. TL outperformed training-from-scratch given the same training data. The Mixed model performed similarly to the Single model on RT3 scans but demonstrated superior performance on unseen data.
    TL can improve a model's generalizability for OAR segmentation in MR-guided cervical brachytherapy, requiring less fine-tuning data and reduced training time. These results provide a foundation for developing adaptable models to accommodate clinical settings.
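HD95 above is the 95th percentile of nearest-surface distances, which is more robust to outliers than the maximum Hausdorff distance. A sketch over 2-D contour points (hypothetical data; real evaluations run over 3-D surface voxels, and percentile conventions vary slightly between toolkits):

```python
import math

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets."""
    def directed(src, dst):
        # For each point in src, distance to its nearest neighbour in dst.
        return sorted(min(math.dist(s, d) for d in dst) for s in src)

    def pct95(dists):
        return dists[min(len(dists) - 1, math.ceil(0.95 * len(dists)) - 1)]

    return max(pct95(directed(points_a, points_b)),
               pct95(directed(points_b, points_a)))

# Two toy contours lying one unit apart everywhere.
contour_a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
contour_b = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
print(hd95(contour_a, contour_b))  # 1.0
```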

  • Article type: Journal Article
    This study aims to automatically analyze and extract abnormalities in the lung field due to Coronavirus Disease 2019 (COVID-19). Types of abnormalities that can be detected are Ground Glass Opacity (GGO) and consolidation. The proposed method can also identify the location of the abnormality in the lung field, that is, the central and peripheral lung area. The location and type of these abnormalities affect the severity and confidence level of a patient suffering from COVID-19. The detection results using the proposed method are compared with the results of manual detection by radiologists. From the experimental results, the proposed system can provide an average error of 0.059 for the severity score and 0.069 for the confidence level. This method has been implemented in a web-based application for general users.
    • A method to detect the appearance of viral pneumonia imaging features, namely Ground Glass Opacity (GGO) and consolidation, on chest Computed Tomography (CT) scan images.
    • This method can separate the lung field into the right lung and the left lung, and it can also identify the detected imaging feature's location in the central or peripheral region of the lung field.
    • The severity level and confidence level of the patient's condition are measured.
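One plausible reading of the "average error" against the radiologists' scores is a mean absolute error; that interpretation, and the scores below, are assumptions for illustration only:

```python
def mean_abs_error(auto_scores, manual_scores):
    """Mean absolute error between automated and radiologist scores."""
    return sum(abs(a - m) for a, m in zip(auto_scores, manual_scores)) / len(auto_scores)

# Hypothetical severity scores (0-1 scale) for four patients.
auto   = [0.50, 0.30, 0.80, 0.10]
manual = [0.45, 0.35, 0.75, 0.20]
print(round(mean_abs_error(auto, manual), 4))  # 0.0625
```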

  • Article type: Journal Article
    The aim of this validation study was to comprehensively evaluate the performance and generalization capability of a deep learning-based periapical lesion detection algorithm on a clinically representative cone-beam computed tomography (CBCT) dataset and to test for non-inferiority. The evaluation involved 195 CBCT images of adult upper and lower jaws, where sensitivity and specificity metrics were calculated for all teeth, stratified by jaw, and stratified by tooth type. Furthermore, each lesion was assigned a periapical index score based on its size to enable a score-based evaluation. Non-inferiority tests were conducted with proportions of 90% for sensitivity and 82% for specificity. The algorithm achieved an overall sensitivity of 86.7% and a specificity of 84.3%. The non-inferiority test indicated rejection of the null hypothesis for specificity but not for sensitivity. However, when excluding lesions with a periapical index score of one (i.e., very small lesions), the sensitivity improved to 90.4%. Despite the challenges posed by the dataset, the algorithm demonstrated promising results. Nevertheless, further improvements are needed to enhance the algorithm's robustness, particularly in detecting very small lesions and in handling artifacts and outliers commonly encountered in real-world clinical scenarios.
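Sensitivity, specificity, and a one-sided test of a proportion against a fixed margin can be sketched as follows. The counts are illustrative only, and the z-approximation here is a simplification; the study's exact test procedure may differ:

```python
import math

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 confusion table."""
    return tp / (tp + fn), tn / (tn + fp)

def noninferiority_z(successes, n, margin):
    """One-sided z-statistic for H0: true proportion <= margin
    (normal approximation to the binomial)."""
    p_hat = successes / n
    se = math.sqrt(margin * (1 - margin) / n)
    return (p_hat - margin) / se

# Illustrative counts, not the study's data.
sens, spec = sens_spec(tp=130, fn=20, tn=140, fp=26)
print(round(sens, 3), round(spec, 3))  # 0.867 0.843
z = noninferiority_z(successes=140, n=166, margin=0.82)
```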

  • Article type: Journal Article
    3D printing is becoming increasingly important as time passes, with the latest technologies driving innovation in many fields, including ophthalmology. However, more knowledge is needed about how clinicians can become innovators in their daily practice without expert engineering knowledge of the underlying technologies. We aimed to address that shortcoming by developing a pipeline clinicians can use to 3D print. This workflow was named SS3DP: Segment, Slice, and 3D Print. It was tested by fabricating a 3D-printed eyeball. In terms of the results of this work, we observed that the segmentation process was imperfect due to the difficulty of segmenting small structures. The learning curve was steep initially, but the technique improved with more time spent on the segmentation platform. No quantitative analysis was carried out. Innovation in medicine is stifled if its leading participants, clinicians, cannot engage with it due to a lack of knowledge.

  • Article type: Journal Article
    OBJECTIVE: The aim of this study was to investigate the generalization performance of deep learning segmentation models on a large cohort intravascular ultrasound (IVUS) image dataset over the lumen and external elastic membrane (EEM), and to assess the consistency and accuracy of automated IVUS quantitative measurement parameters.
    METHODS: A total of 11,070 IVUS images from 113 patients and pullbacks were collected and annotated by cardiologists to train and test deep learning segmentation models. A comparison of five state-of-the-art medical image segmentation models was performed by evaluating the segmentation of the lumen and EEM. The Dice similarity coefficient (DSC), intersection over union (IoU), and Hausdorff distance (HD) were calculated overall and for subsets of different IVUS image categories. Further, the agreement between the IVUS quantitative measurement parameters calculated by automatic segmentation and those calculated by manual segmentation was evaluated. Finally, the segmentation performance of our model was also compared with previous studies.
    RESULTS: CENet achieved the best performance in DSC (0.958 for lumen, 0.921 for EEM) and IoU (0.975 for lumen, 0.951 for EEM) among all models, while Res-UNet was the best performer in HD (0.219 for lumen, 0.178 for EEM). The mean intraclass correlation coefficient (ICC) and Bland-Altman plots demonstrated extremely strong agreement (0.855, 95% CI 0.822-0.887) between the model's automatic predictions and manual measurements.
    CONCLUSIONS: Deep learning models based on large cohort image datasets were capable of achieving state-of-the-art (SOTA) results in lumen and EEM segmentation. They can be used for IVUS clinical evaluation and achieve excellent agreement with clinicians on quantitative parameter measurements.
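The ICC quantifies agreement between the automated and manual measurements. A minimal one-way ICC(1,1) sketch; the study's exact ICC form isn't stated here, so the formula choice and the measurements below are assumptions for illustration:

```python
def icc_oneway(pairs):
    """One-way random-effects ICC(1,1) for paired measurements
    (e.g. automated vs. manual lumen areas per frame)."""
    n, k = len(pairs), 2
    grand = sum(sum(p) for p in pairs) / (n * k)
    row_means = [sum(p) / k for p in pairs]
    # Between-subject and within-subject mean squares.
    msb = k * sum((rm - grand) ** 2 for rm in row_means) / (n - 1)
    msw = sum((x - rm) ** 2
              for p, rm in zip(pairs, row_means) for x in p) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical (automated, manual) measurements for four frames.
measurements = [(1.0, 1.1), (2.0, 2.0), (3.0, 2.9), (4.0, 4.2)]
print(round(icc_oneway(measurements), 3))  # 0.996
```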

  • Article type: Journal Article
    Computational hemodynamics is increasingly being used to quantify hemodynamic characteristics in and around abdominal aortic aneurysms (AAA) in a patient-specific fashion. However, time-consuming manual annotation hinders the clinical translation of computational hemodynamic analysis. Thus, we investigated the feasibility of using deep-learning-based image segmentation methods to reduce the time required for manual segmentation. Two of the latest deep-learning-based image segmentation methods, ARU-Net and CACU-Net, were used to test the feasibility of automated computer model creation for computational hemodynamic analysis. Morphological features and hemodynamic metrics of 30 computed tomography angiography (CTA) scans were compared between predictions and manual models. The DICE score for both networks was 0.916, and the correlation value was above 0.95, indicating their ability to generate models comparable to human segmentation. The Bland-Altman analysis shows good agreement between deep learning and manual segmentation results. Compared with manual (computational hemodynamics) model recreation, the time for automated computer model generation was significantly reduced (from ∼2 h to ∼10 min). Automated image segmentation can significantly reduce the time spent on the recreation of patient-specific AAA models. Moreover, our study showed that both CACU-Net and ARU-Net could accomplish AAA segmentation, and CACU-Net outperformed ARU-Net in terms of accuracy and time-saving.
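The Bland-Altman analysis above reduces to a mean bias and 95% limits of agreement over the paired differences. A sketch with hypothetical diameters (not the study's data):

```python
import statistics

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical AAA diameters (cm): manual vs. automated models.
manual    = [5.1, 4.8, 6.0, 5.5, 4.9]
automated = [5.0, 4.9, 5.8, 5.6, 4.8]
bias, (lo, hi) = bland_altman(manual, automated)
print(round(bias, 2), round(lo, 2), round(hi, 2))  # 0.04 -0.22 0.3
```

Good agreement means a bias near zero and limits narrow enough to be clinically acceptable.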

  • Article type: Journal Article
    Pulmonary hypertension (PH), defined by a mean pulmonary arterial blood pressure above 20 mmHg in the main pulmonary artery, is a cardiovascular disease impacting the pulmonary vasculature. PH is accompanied by chronic vascular remodeling, wherein vessels become stiffer, large vessels dilate, and smaller vessels constrict. Some types of PH, including hypoxia-induced PH (HPH), also lead to microvascular rarefaction. This study analyzes the change in pulmonary arterial morphometry in the presence of HPH using novel methods from topological data analysis (TDA). We employ persistent homology to quantify arterial morphometry for control and HPH mice, characterizing normalized arterial trees extracted from micro-computed tomography (micro-CT) images. We normalize generated trees using three pruning algorithms before comparing the topology of control and HPH trees. This proof-of-concept study shows that the pruning method affects the spatial tree statistics and complexity. We find that HPH trees are stiffer than control trees but have more branches and a higher depth. Relative directional complexities are lower in HPH animals in the right, ventral, and posterior directions. For the radius-pruned trees, this difference is more significant at lower perfusion pressures, enabling analysis of remodeling of larger vessels. At higher pressures, the arterial networks include more distal vessels. Results show that the right, ventral, and posterior relative directional complexities increase in HPH trees, indicating the remodeling of distal vessels in these directions. Strahler order pruning enables us to generate trees of comparable size, and results, at all pressures, show that HPH trees have lower complexity than the control trees.
Our analysis is based on data from 6 animals (3 control and 3 HPH mice), and even though our analysis is performed in a small dataset, this study provides a framework and proof-of-concept for analyzing properties of biological trees using tools from Topological Data Analysis (TDA). Findings derived from this study bring us a step closer to extracting relevant information for quantifying remodeling in HPH.
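Strahler-order pruning relies on each vessel's Strahler order: terminal vessels get order 1, and a parent's order increases only when the maximum child order is attained by two or more children. A minimal sketch on a toy tree (hypothetical structure, not the mouse data):

```python
def strahler(children, node):
    """Strahler order of `node` in a rooted tree given a child-list mapping."""
    kids = children.get(node, [])
    if not kids:
        return 1  # terminal vessel
    orders = [strahler(children, k) for k in kids]
    top = max(orders)
    # Order increases only when two or more children tie at the maximum.
    return top + 1 if orders.count(top) > 1 else top

# Toy arterial tree: root 0 -> {1, 2}; 1 -> {3, 4}; 2, 3, 4 are terminals.
tree = {0: [1, 2], 1: [3, 4]}
print(strahler(tree, 0), strahler(tree, 1), strahler(tree, 2))  # 2 2 1
```

Pruning to a fixed minimum order then yields trees of comparable size across animals.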

  • Article type: Journal Article
    The growing use of multimodal high-resolution volumetric data in pre-clinical studies leads to challenges related to the management and handling of these large datasets. In contrast to the clinical context, there are currently no standard guidelines to regulate the use of image compression in pre-clinical contexts as a potential alleviation of this problem. In this work, the authors study the application of lossy image coding to compress high-resolution volumetric biomedical data. The impact of compression on the metrics and interpretation of volumetric data was quantified for a correlated multimodal imaging study to characterize murine tumor vasculature, using volumetric high-resolution episcopic microscopy (HREM), micro-computed tomography (µCT), and micro-magnetic resonance imaging (µMRI). The effects of compression were assessed by measuring task-specific performances of several biomedical experts who interpreted and labeled multiple data volumes compressed to different degrees. We defined trade-offs between data volume reduction and preservation of visual information, which ensured the preservation of relevant vasculature morphology at maximum compression efficiency across scales. Using the Jaccard Index (JI) and the average Hausdorff Distance (HD) after vasculature segmentation, we could demonstrate that, in this study, compression yielding a 256-fold reduction in data size kept the compression-induced error below the inter-observer variability, with minimal impact on the assessment of the tumor vasculature across scales.
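The Jaccard Index used above measures overlap between the vessel masks segmented from the original and the compressed volumes. A minimal sketch with toy masks (illustrative data only):

```python
def jaccard(mask_a, mask_b):
    """Jaccard index (intersection over union) of two flattened binary masks."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    union = sum(a | b for a, b in zip(mask_a, mask_b))
    return inter / union if union else 1.0  # both empty counts as perfect

# Toy vessel masks: segmentation of the original volume vs. the same
# segmentation after lossy compression of the input data.
original   = [1, 1, 1, 0, 0, 1, 0, 0]
compressed = [1, 1, 0, 0, 0, 1, 0, 1]
print(jaccard(original, compressed))  # 3/5 = 0.6
```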