Intensity normalization

  • Article type: Journal Article
    BACKGROUND: Magnetic resonance imaging (MRI) scans are highly sensitive to acquisition and reconstruction parameters, which affect feature stability and model generalizability in radiomics research. This work investigates the effect of image pre-processing and harmonization methods on the stability of brain MRI radiomic features and on the predictive performance of radiomic models in patients with brain metastases (BMs).
    METHODS: Two T1 contrast-enhanced brain MRI datasets were used in this study. The first contained 25 BM patients scanned at two different time points and was used for feature stability analysis. The effects of gray level discretization (GLD), intensity normalization (Z-score, Nyul, WhiteStripe, and an in-house method named N-Peaks), and ComBat harmonization on feature stability were investigated; features with an intraclass correlation coefficient >0.8 were considered stable. The second dataset, containing 64 BM patients, was used for a classification task to investigate the informativeness of the stable features and the effect of harmonization methods on radiomic model performance.
    RESULTS: Applying fixed bin number (FBN) GLD resulted in a higher number of stable features compared to fixed bin size (FBS) discretization (10 ± 5.5% higher). Harmonization in the feature domain improved stability for non-normalized images and for images normalized with the Z-score and WhiteStripe methods. For the classification task, keeping only the stable features yielded good performance solely for images normalized with N-Peaks combined with FBS discretization.
    CONCLUSIONS: To develop a robust MRI-based radiomic model, we recommend using an intensity normalization method based on a reference tissue (e.g., N-Peaks) followed by FBS discretization.
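
    A minimal sketch of two of the ingredients above, assuming a NumPy image array and a boolean brain mask: Z-score intensity normalization and IBSI-style FBN/FBS gray level discretization (the authors' in-house N-Peaks method is not reproduced here).

    ```python
    import numpy as np

    def zscore_normalize(image, mask):
        """Z-score normalization: center/scale intensities by brain-mask statistics."""
        vals = image[mask]
        return (image - vals.mean()) / vals.std()

    def discretize_fbn(image, mask, n_bins=64):
        """Fixed bin number (FBN): rescale the masked intensity range into n_bins levels."""
        lo, hi = image[mask].min(), image[mask].max()
        levels = np.floor(n_bins * (image - lo) / (hi - lo)) + 1
        return np.clip(levels, 1, n_bins)

    def discretize_fbs(image, mask, bin_width=25.0):
        """Fixed bin size (FBS): constant-width bins anchored at the masked minimum."""
        lo = image[mask].min()
        return np.floor((image - lo) / bin_width) + 1
    ```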

  • Article type: Journal Article
    Objective. Multi-parametric magnetic resonance imaging (mpMRI) has become an important tool for the detection of prostate cancer in the past two decades. Despite the high sensitivity of MRI for tissue characterization, it often suffers from a lack of specificity. Several well-established pre-processing tools are publicly available for improving image quality and removing both intra- and inter-patient variability in order to increase the diagnostic accuracy of MRI. To date, most of these pre-processing tools have largely been assessed individually. In this study we present a systematic evaluation of a multi-step mpMRI pre-processing pipeline to automate tumor localization within the prostate using a previously trained model. Approach. The study was conducted on 31 treatment-naïve prostate cancer patients with a PI-RADS-v2 compliant mpMRI examination. Multiple methods were compared for each pre-processing step: (1) bias field correction, (2) normalization, and (3) deformable multi-modal registration. Optimal parameter values were estimated for each step on the basis of relevant individual metrics. Tumor localization was then carried out via a model-based approach that takes both mpMRI and prior clinical knowledge features as input. A sequential optimization approach was adopted for determining the optimal parameters and techniques in each step of the pipeline. Main results. The application of bias field correction alone increased the accuracy of tumor localization (area under the curve (AUC) = 0.77; p-value = 0.004) over unprocessed data (AUC = 0.74). Adding normalization to the pre-processing pipeline further improved the diagnostic accuracy of the model to an AUC of 0.85 (p-value = 0.00012). Multi-modal registration of apparent diffusion coefficient images to T2-weighted images improved the alignment of tumor locations in all but one patient, resulting in a slight decrease in accuracy (AUC = 0.84; p-value = 0.30). Significance. Overall, our findings suggest that the combined effect of multiple pre-processing steps with optimal values has the ability to improve the quantitative classification of prostate cancer using mpMRI. Clinical trials: NCT03378856 and NCT03367702.
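
    Bias field correction, the first pipeline step above, is commonly performed with N4; a hedged sketch using SimpleITK's built-in filter (the file names are hypothetical and all N4 parameters are left at their defaults):

    ```python
    import SimpleITK as sitk

    # Hypothetical input path; read as float for N4.
    raw = sitk.ReadImage("t2w_prostate.nii.gz", sitk.sitkFloat32)
    # Rough foreground mask via Otsu thresholding (background 0, foreground 1).
    mask = sitk.OtsuThreshold(raw, 0, 1, 200)
    # N4 bias field correction with default convergence settings.
    corrected = sitk.N4BiasFieldCorrection(raw, mask)
    sitk.WriteImage(corrected, "t2w_prostate_n4.nii.gz")
    ```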

  • Article type: Journal Article
    BACKGROUND: The ratio of T1-weighted (T1w) and T2-weighted (T2w) magnetic resonance imaging (MRI) images is often used as a proxy measure of cortical myelin. However, the T1w/T2w-ratio is based on signal intensities that are inherently non-quantitative and known to be affected by extrinsic factors. To account for this, a variety of processing methods have been proposed, but a systematic evaluation of their efficacy is lacking. Given the dependence of the T1w/T2w-ratio on scanner hardware and on the T1w and T2w protocols, it is important to ensure that processing pipelines also perform well across different sites.
    METHODS: We assessed a variety of processing methods for computing cortical T1w/T2w-ratio maps, including correction methods for nonlinear field inhomogeneities, local outliers, and partial volume effects as well as intensity normalisation. These were implemented in 33 processing pipelines which were applied to four test-retest datasets, with a total of 170 pairs of T1w and T2w images acquired on four different MRI scanners. We assessed processing pipelines across datasets in terms of their reproducibility of expected regional distributions of cortical myelin, lateral intensity biases, and test-retest reliability regionally and across the cortex. Regional distributions were compared both qualitatively with histology and quantitatively with two reference datasets, YA-BC and YA-B1+, from the Human Connectome Project.
    RESULTS: Reproducibility of raw T1w/T2w-ratio distributions was overall high with the exception of one dataset. For this dataset, Spearman rank correlations increased from 0.27 to 0.70 after N3 bias correction relative to the YA-BC reference and from -0.04 to 0.66 after N4ITK bias correction relative to the YA-B1+ reference. Partial volume and outlier corrections had only marginal effects on the reproducibility of T1w/T2w-ratio maps and test-retest reliability. Before intensity normalisation, we found large coefficients of variation (CVs) and low intraclass correlation coefficients (ICCs), with total whole-cortex CV of 10.13% and whole-cortex ICC of 0.58 for the raw T1w/T2w-ratio. Intensity normalisation with WhiteStripe, RAVEL, and Z-Score improved total whole-cortex CVs to 5.91%, 5.68%, and 5.19% respectively, whereas Z-Score and Least Squares improved whole-cortex ICCs to 0.96 and 0.97 respectively.
    CONCLUSIONS: In the presence of large intensity nonuniformities, bias field correction is necessary to achieve acceptable correspondence with known distributions of cortical myelin, but it can be detrimental in datasets with less intensity inhomogeneity. Intensity normalisation can improve test-retest reliability and inter-subject comparability. However, both bias field correction and intensity normalisation methods vary greatly in their efficacy and may affect the interpretation of results. The choice of T1w/T2w-ratio processing method must therefore be informed by both scanner and acquisition protocol as well as the given study objective. Our results highlight limitations of the T1w/T2w-ratio, but also suggest concrete ways to enhance its usefulness in future studies.
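
    For orientation, the underlying map is a voxel-wise ratio of coregistered images, optionally followed by one of the intensity normalisations evaluated above; a minimal sketch under those assumptions (not the paper's exact pipelines):

    ```python
    import numpy as np

    def t1w_t2w_ratio(t1w, t2w, eps=1e-6):
        """Voxel-wise T1w/T2w ratio; assumes the two volumes are already coregistered."""
        return t1w / np.maximum(t2w, eps)

    def zscore_over_cortex(ratio_map, cortex_mask):
        """One simple intensity normalisation of the ratio map (the Z-Score variant)."""
        vals = ratio_map[cortex_mask]
        return (ratio_map - vals.mean()) / vals.std()
    ```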

  • Article type: Journal Article
    Introduction: [18F]-FDG PET is a widely used imaging modality that visualizes cellular glucose uptake and provides functional information on the metabolic state of different tissues in vivo. Various quantification methods can be used to evaluate glucose metabolism in the brain, including the cerebral metabolic rate of glucose (CMRglc) and standard uptake values (SUVs). Especially in the brain, these (semi-)quantitative measures can be affected by several physiological factors, such as blood glucose level, age, gender, and stress. In addition to this inter- and intra-subject variability, the use of different PET acquisition protocols across studies has created a need for the standardization and harmonization of brain PET evaluation. In this study we present a framework for statistical voxel-based analysis of glucose uptake in the rat brain using histogram-based intensity normalization. Methods: [18F]-FDG PET images of 28 normal rat brains were coregistered and averaged voxel-wise. Ratio images were generated by dividing each of these images voxel-wise by the group average. The most prevalent value in the ratio image was used as the normalization factor. The normalized PET images were averaged voxel-wise to generate a normal rat brain atlas. The variability of voxel intensities across the normalized PET images was compared to images that were either normalized by whole-brain normalization or not normalized. To illustrate the added value of this normal rat brain atlas, 9 animals with a striatal hemorrhagic lesion (intracerebral hemorrhage, ICH) and 9 control animals were intravenously injected with [18F]-FDG, and the PET images of these animals were compared voxel-wise to the normal atlas by group and individual analyses. Results: The average coefficient of variation of the voxel intensities in the brain across normal [18F]-FDG PET images was 6.7% for the histogram-based normalized images, 11.6% for whole-brain normalized images, and 31.2% when no normalization was applied. Statistical voxel-based analysis using the normal template indicated regions of significantly decreased glucose uptake at the site of the ICH lesion in the ICH animals, but not in control animals. Conclusion: In summary, histogram-based intensity normalization of [18F]-FDG uptake in the brain is a suitable data-driven approach for standardized voxel-based comparison of brain PET images.
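
    The histogram-based normalization described in the Methods reduces to three steps: divide each coregistered image voxel-wise by the group-average template, take the most prevalent (modal) value of the ratio image as the scaling factor, and divide the image by that factor. A sketch under those assumptions:

    ```python
    import numpy as np

    def histogram_norm(image, template, mask, n_bins=256):
        """Scale `image` by the mode of its voxel-wise ratio to the group template."""
        ratio = image[mask] / template[mask]
        counts, edges = np.histogram(ratio, bins=n_bins)
        k = np.argmax(counts)
        factor = 0.5 * (edges[k] + edges[k + 1])  # most prevalent ratio value
        return image / factor
    ```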

  • Article type: Journal Article
    Image normalization is a building block in medical image analysis. Conventional approaches are customarily employed on a per-dataset basis. This strategy, however, prevents current normalization algorithms from fully exploiting the complex joint information available across multiple datasets. Consequently, ignoring such joint information has a direct impact on downstream segmentation algorithms. This paper proposes to revisit the conventional image normalization approach by, instead, learning a common normalizing function across multiple datasets. Jointly normalizing multiple datasets is shown to yield consistent normalized images as well as improved image segmentation when intensity shifts are large. To do so, a fully automated adversarial and task-driven normalization approach is employed, as it facilitates the training of realistic and interpretable images while keeping performance on par with the state-of-the-art. The adversarial training of our network aims at finding the optimal transfer function to jointly improve both the segmentation accuracy and the generation of realistic images. We have evaluated the performance of our normalizer on both infant and adult brain images from the iSEG, MRBrainS and ABIDE datasets. The results indicate that our contribution does provide improved realism to the normalized images, while retaining segmentation accuracy on par with state-of-the-art learnable normalization approaches.
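
    The sketch below only illustrates the joint adversarial, task-driven objective in PyTorch; the toy modules, shapes, and loss weighting are assumptions, not the paper's actual networks:

    ```python
    import torch
    import torch.nn as nn

    normalizer = nn.Conv3d(1, 1, kernel_size=1)                    # stand-in intensity transfer
    segmenter = nn.Conv3d(1, 2, kernel_size=1)                     # stand-in task network
    discriminator = nn.Sequential(nn.Flatten(), nn.LazyLinear(1))  # stand-in realism critic

    x = torch.randn(2, 1, 8, 8, 8)         # toy batch pooled from two datasets
    y = torch.randint(0, 2, (2, 8, 8, 8))  # toy segmentation labels

    normalized = normalizer(x)
    task_loss = nn.CrossEntropyLoss()(segmenter(normalized), y)
    realism_loss = nn.BCEWithLogitsLoss()(discriminator(normalized), torch.ones(2, 1))
    loss = task_loss + 0.1 * realism_loss  # jointly optimize segmentation and realism
    loss.backward()
    ```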

  • Article type: Journal Article
    In brain MRI radiomics studies, the non-biological variation introduced by different image acquisition settings, namely scanner effects, affects the reliability and reproducibility of radiomics results. This paper assesses how preprocessing methods (including N4 bias field correction and image resampling) and harmonization methods (either of six intensity normalization methods working on the brain MRI images, or the ComBat method working on the radiomic features) help to remove scanner effects and improve radiomic feature reproducibility in brain MRI radiomics. The analyses were based on in vitro datasets (homogeneous and heterogeneous phantom data) and in vivo datasets (brain MRI images collected from healthy volunteers and clinical patients with brain tumors). The results show that the ComBat method is essential for removing scanner effects in brain MRI radiomic studies. Moreover, the intensity normalization methods, while unable to remove scanner effects at the radiomic feature level, still yield more comparable MRI images and improve the robustness of the harmonized features to the choice among ComBat implementations.
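
    ComBat operates on the extracted feature table rather than on the images. A hedged usage sketch with the neuroCombat Python package (the inputs are hypothetical; the feature matrix is assumed to be shaped features × scans):

    ```python
    import numpy as np
    import pandas as pd
    from neuroCombat import neuroCombat  # pip install neuroCombat

    features = np.random.rand(100, 40)  # hypothetical radiomic features (100) x scans (40)
    covars = pd.DataFrame({"scanner": ["A"] * 20 + ["B"] * 20})  # scanner label per scan
    harmonized = neuroCombat(dat=features, covars=covars, batch_col="scanner")["data"]
    ```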

  • Article type: Journal Article
    BACKGROUND: The objective of the study is to define the most appropriate region for intensity normalization in brain 18FDG PET semi-quantitative analysis. The best option could be based on previous absolute quantification studies, which showed that the metabolic changes related to ageing affect nearly all brain regions in healthy subjects. Consequently, brain metabolic changes related to ageing were evaluated in two populations of healthy controls who underwent conventional (n = 56) or digital (n = 78) 18FDG PET/CT. The median correlation coefficients between age and the metabolism of each of the 120 atlas brain regions were reported for 120 distinct intensity normalizations (one per region). SPM linear regression analyses with age were performed on the most significant normalizations (FWE, p < 0.05).
    RESULTS: The cerebellum and pons were the only two regions showing median correlation coefficients with age below -0.5. With SPM, intensity normalization by the pons provided significant cluster volumes at least 1.7-fold (conventional PET) and 2.5-fold (digital PET) larger than those of the other normalizations.
    CONCLUSIONS: The pons is the most appropriate area for brain 18FDG PET intensity normalization when examining metabolic changes through ageing.
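
    In practice this normalization is a single division by the mean uptake in the reference region; a minimal sketch assuming a NumPy PET volume and a boolean pons mask:

    ```python
    import numpy as np

    def reference_region_normalize(pet, pons_mask):
        """SUVr-style scaling: divide the volume by the mean uptake in the pons."""
        return pet / pet[pons_mask].mean()
    ```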

  • Article type: Journal Article
    In multisite neuroimaging studies there is often unwanted technical variation across scanners and sites. These "scanner effects" can hinder detection of biological features of interest, produce inconsistent results, and lead to spurious associations. We propose mica (multisite image harmonization by cumulative distribution function alignment), a tool to harmonize images taken on different scanners by identifying and removing within-subject scanner effects. Our goals in the present study were to (1) establish a method that removes scanner effects by leveraging multiple scans collected on the same subject, and, building on this, (2) develop a technique to quantify scanner effects in large multisite studies so these can be reduced as a preprocessing step. We illustrate scanner effects in a brain MRI study in which the same subject was measured twice on seven scanners, and assess our method's performance in a second study in which ten subjects were scanned on two machines. We found that unharmonized images were highly variable across site and scanner type, and our method effectively removed this variability by aligning intensity distributions. We further studied the ability to predict image harmonization results for a scan taken on an existing subject at a new site using cross-validation.
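
    The core operation, aligning intensity cumulative distribution functions across scanners, can be illustrated with simple quantile mapping; this is a generic sketch of the idea, not the authors' mica implementation:

    ```python
    import numpy as np

    def cdf_align(source, reference, n_quantiles=1001):
        """Map source intensities onto the reference distribution by matching quantiles."""
        q = np.linspace(0.0, 1.0, n_quantiles)
        src_q = np.quantile(source, q)   # empirical quantiles of the image to harmonize
        ref_q = np.quantile(reference, q)  # quantiles of the target distribution
        return np.interp(source, src_q, ref_q)
    ```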

  • Article type: Journal Article
    The lack of standardization of intensity normalization methods, and their unknown effect on the quantification output, is recognized as a major drawback for the harmonization of brain FDG-PET quantification protocols. The aim of this work is a ground-truth-based evaluation of the effect of different intensity normalization methods on brain FDG-PET quantification output.
    Realistic FDG-PET images were generated using Monte Carlo simulation from activity and attenuation maps directly derived from 25 healthy subjects (adding theoretical relative hypometabolism in 6 regions of interest and at 5 hypometabolism levels). Single-subject statistical parametric mapping (SPM) was applied to compare each simulated FDG-PET image with a healthy database after intensity normalization based on reference region methods, using the brain stem (RRBS), cerebellum (RRC), and the temporal lobe contralateral to the lesion (RRTL), and on data-driven methods, namely proportional scaling (PS), a histogram-based method (HN), and iterative versions of both (iPS and iHN). The performance of these methods was evaluated in terms of the recovery of the introduced theoretical hypometabolic pattern and the appearance of unspecific hypometabolic and hypermetabolic findings.
    Detected hypometabolic patterns had significantly lower volumes than the introduced hypometabolisms for all intensity normalization methods, particularly for slight reductions in metabolism. Among the intensity normalization methods, RRC and HN provided the largest recovered hypometabolic volumes, while RRBS showed the smallest recovery. In general, data-driven methods outperformed reference region methods, and among them the iterative methods outperformed the non-iterative ones. Unspecific hypermetabolic volumes were similar for all methods, with the exception of PS, for which they became a major limitation (up to 250 cm³) for extended and intense hypometabolism. On the other hand, unspecific hypometabolism was similar for all methods and was usually resolved with appropriate clustering.
    Our findings show that inappropriate use of intensity normalization methods can introduce remarkable bias into the detected hypometabolism and represents a serious concern in terms of false positives. Based on these findings, we recommend the use of histogram-based intensity normalization methods. Reference region methods performed on par with data-driven methods only when the selected reference region was large and stable.
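
    Proportional scaling and the spirit of its iterative variant can be sketched as follows (a generic illustration assuming NumPy volumes, a brain mask, and a healthy template; the exclusion threshold is hypothetical, not the study's setting):

    ```python
    import numpy as np

    def proportional_scaling(pet, mask, target=50.0):
        """PS: rescale so the global (masked) mean equals a fixed target value."""
        return pet * (target / pet[mask].mean())

    def iterative_scaling(pet, template, mask, n_iter=3, z_thresh=2.0):
        """iPS-like idea: re-estimate the factor after excluding voxels that deviate
        strongly from the template, so lesions bias the scaling factor less."""
        keep = mask.copy()
        for _ in range(n_iter):
            scale = template[keep].mean() / pet[keep].mean()
            resid = pet * scale - template
            z = (resid - resid[keep].mean()) / resid[keep].std()
            keep = mask & (np.abs(z) < z_thresh)
        return pet * scale
    ```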

  • Article type: Journal Article
    Image synthesis learns a transformation from the intensity features of an input image to yield a different tissue contrast in the output image. This process has been shown to have application in many medical image analysis tasks, including imputation, registration, and segmentation. To carry out synthesis, the intensities of the input images are typically scaled (i.e., normalized), both in training, to learn the transformation, and in testing, when applying the transformation, but it is not presently known what type of input scaling is optimal. In this paper, we consider seven different intensity normalization algorithms and three different synthesis methods to evaluate the impact of normalization. Our experiments demonstrate that intensity normalization as a preprocessing step improves the synthesis results across all investigated synthesis algorithms. Furthermore, we show evidence that suggests intensity normalization is vital for successful deep learning-based MR image synthesis.
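
    A classic member of this family of algorithms is Nyul-style piecewise-linear histogram standardization; a simplified sketch (the full method first rescales each training image's landmarks to a common range before averaging; no claim is made that this exactly matches any of the seven evaluated algorithms):

    ```python
    import numpy as np

    PCTS = [1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 99]  # landmark percentiles

    def learn_standard_scale(images, masks):
        """Training: average landmark intensities over the cohort to get the standard scale."""
        lms = [np.percentile(img[m], PCTS) for img, m in zip(images, masks)]
        return np.mean(lms, axis=0)

    def standardize(image, mask, standard_landmarks):
        """Testing: piecewise-linear map from the image's landmarks to the standard scale."""
        lm = np.percentile(image[mask], PCTS)
        return np.interp(image, lm, standard_landmarks)
    ```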