normalization

  • Article type: Journal Article
    BACKGROUND: Vaccines have revolutionized public health by providing protection against infectious diseases. They stimulate the immune system and generate memory cells to defend against targeted diseases. Clinical trials evaluate vaccine performance, including dosage, administration routes, and potential side effects. ClinicalTrials.gov is a valuable repository of clinical trial information, but its vaccine data lack standardization, leading to challenges in automatic concept mapping, vaccine-related knowledge development, evidence-based decision-making, and vaccine surveillance.
    RESULTS: In this study, we developed a cascaded framework that capitalizes on multiple domain knowledge sources, including clinical trials, the Unified Medical Language System (UMLS), and the Vaccine Ontology (VO), to enhance the performance of domain-specific language models for automatically mapping vaccine mentions from clinical trials to the VO. The VO is a community-based ontology developed to promote vaccine data standardization, integration, and computer-assisted reasoning. Our methodology involved extracting and annotating data from various sources. We then performed pre-training on the PubMedBERT model, leading to the development of CTPubMedBERT. Subsequently, we enhanced CTPubMedBERT by incorporating SAPBERT, which was pretrained using the UMLS, resulting in CTPubMedBERT + SAPBERT. Further refinement through fine-tuning on the Vaccine Ontology corpus and vaccine data from clinical trials yielded the CTPubMedBERT + SAPBERT + VO model. Finally, we combined this collection of pre-trained models in a weighted rule-based ensemble to normalize the vaccine corpus and improve the accuracy of the process. The ranking step in concept normalization prioritizes and orders candidate concepts to identify the most suitable match for a given context. Ranking the top 10 concepts, our experimental results demonstrate that the proposed cascaded framework consistently outperformed strong existing baselines on vaccine mapping, achieving 71.8% top-1 accuracy and 90.0% top-10 accuracy.
    CONCLUSIONS: This study provides detailed insight into a cascaded framework of fine-tuned domain-specific language models that improves the mapping of vaccine mentions from clinical trials to the VO. By effectively leveraging domain-specific information and applying a weighted rule-based ensemble of different pre-trained BERT models, our framework can significantly enhance this mapping.
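The fine-tuned models themselves cannot be reproduced from an abstract, but the ranking step it describes, scoring each candidate VO concept against a mention and combining scores from several encoders with fixed weights, can be sketched. Everything here is illustrative: `trigrams` is a toy character-trigram stand-in for the BERT encoders, and the labels and weights are hypothetical.

```python
from collections import Counter
from math import sqrt

def trigrams(text):
    # Toy stand-in for a fine-tuned BERT encoder: character-trigram counts.
    padded = f"  {text.lower()} "
    return Counter(padded[i:i + 3] for i in range(len(padded) - 2))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(v * b[k] for k, v in a.items())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_candidates(mention, vo_labels, encoders, weights, k=10):
    # Weighted ensemble ranking: each encoder scores every candidate VO
    # label against the mention, the scores are combined with fixed
    # weights, and the k best-scoring labels are returned in order.
    scored = sorted(
        ((sum(w * cosine(enc(mention), enc(label))
              for enc, w in zip(encoders, weights)), label)
         for label in vo_labels),
        reverse=True)
    return [label for _, label in scored[:k]]

# Hypothetical VO labels; the study ranks the top 10 candidates.
labels = ["influenza vaccine", "rabies vaccine", "COVID-19 vaccine"]
top = rank_candidates("flu vaccine", labels, [trigrams], [1.0], k=2)
```

With more encoders, `encoders` and `weights` would hold one entry per pre-trained model; a single toy encoder suffices here to show the top-k ranking.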

  • Article type: Journal Article
    Transcriptomes and proteomes can be normalized with a handful of RNAs or proteins (or their peptides), such as GAPDH, β-actin, RPBMS, and/or GAP43. Even with hundreds of standards, normalization cannot be achieved across different molecular mass ranges for small molecules, such as lipids and metabolites, owing to the non-linearity of the mass-to-charge ratio even over the smallest part of the spectrum. We define the amount (or range of amounts) of a metabolite and/or lipid per defined amount of a protein, consistently identified in all samples of a multiple-model organism comparison, as the normative level of that metabolite or lipid. The defined protein amount (or range) is a normalized value for one cohort of complete samples for which intrasample relative protein quantification is available. For example, the amount of citrate (a metabolite) per µg of aconitate hydratase (normalized protein amount) identified in the proteome is the normative level of citrate with aconitase. We define normativity as the amount (or amount range) of a metabolite detected when compared to normalized protein levels. We use axon regeneration as an example to illustrate the need for advanced approaches to protein normalization. Comparison across different pharmacologically induced axon regeneration mouse models entails comparing axon regeneration studied at different time points in several models designed using different agents. To normalize proteins across the different pharmacologically induced models, we perform peptide doping (fixed amounts of known peptides) in each sample to normalize the proteome across samples. We develop Regen V peptides, divided into Regen III (SEB, LLO, CFP) and Regen II (HH4B, A1315), for pre- and post-extraction comparisons, performed with the addition of defined, digested peptides (bovine serum albumin tryptic digest) for protein abundance normalization beyond commercial labeled relative quantification (for example, 18-plex tandem mass tags). We also illustrate the concept of normativity by applying this normalization technique to regenerative metabolome/lipidome profiles. As normalized protein amounts differ between biological states (control versus axon regeneration), normative metabolite or lipid amounts are expected to differ for specific biological states. These concepts and standardization approaches are important for the integration of different datasets across different models of axon regeneration.
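The normative-level definition above reduces to a ratio, which can be made concrete with a small sketch. All numbers here are invented for illustration; `normative_level` is not a function from the paper.

```python
def normative_level(metabolite_amount, protein_amount):
    # Amount of a metabolite per defined amount (here 1 unit, e.g. 1 ug)
    # of a reference protein quantified in the same sample's proteome.
    return metabolite_amount / protein_amount

# Hypothetical per-sample measurements: (citrate signal, ug of aconitate
# hydratase). The ratio is the normative level of citrate with aconitase.
samples = {"control": (250.0, 2.0), "regenerating": (180.0, 1.2)}
normative = {name: normative_level(citrate, aconitase)
             for name, (citrate, aconitase) in samples.items()}
```

Because the denominator is an in-sample protein quantity rather than a global standard, the normative level stays comparable across samples even when total material differs, which is the point of the approach.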

  • Article type: Journal Article
    Widespread clinical use of MRI radiomic tumor profiling for prognostication and treatment planning in cancers faces major obstacles due to limitations in the standardization of radiomic features. The purpose of the current work was to assess the impact of different MRI scanning and normalization protocols on the statistical analyses of tumor radiomic data in two patient cohorts with uterine endometrial (EC) (n = 136) and cervical (CC) (n = 132) cancer. Imaging at 1.5 T and 3 T comprised T1-weighted MRI 2 min post-contrast injection, T2-weighted turbo spin echo imaging, and diffusion-weighted imaging. Radiomic features were extracted from within manually segmented tumors in 3D and normalized either using z-score normalization or a linear regression model (LRM) accounting for linear dependencies with MRI acquisition parameters. Patients were clustered into two groups based on radiomic profile. The impact of MRI scanning parameters on cluster composition and prognostication was analyzed using Kruskal-Wallis tests, Kaplan-Meier plots, log-rank tests, random survival forests, and LASSO Cox regression with time-dependent area under the curve (tdAUC) (α = 0.05). A large proportion of the radiomic features was statistically associated with MRI scanning protocol in both cohorts (EC: 162/385 [42%]; CC: 180/292 [62%]). A substantial number of EC (49/136 [36%]) and CC (50/132 [38%]) patients changed cluster when clustering was performed after z-score versus LRM normalization. Prognostic modeling based on cluster groups yielded similar outputs for the two normalization methods in the EC/CC cohorts (log-rank test; z-score: p = 0.02/0.33; LRM: p = 0.01/0.45). Mean tdAUC for prognostic modeling of disease-specific survival (DSS) by the radiomic features in EC/CC was similar for the two normalization methods (random survival forests; z-score: mean tdAUC = 0.77/0.78; LRM: mean tdAUC = 0.80/0.75; LASSO Cox regression; z-score: mean tdAUC = 0.64/0.76; LRM: mean tdAUC = 0.76/0.75). Severe biases in tumor radiomics data due to MRI scanning parameters exist. Z-score normalization does not eliminate these biases, whereas LRM normalization effectively does. Still, radiomic cluster groups after z-score and LRM normalization were similarly associated with DSS in EC and CC patients.
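The contrast between the two normalization schemes can be sketched on a toy feature. This is an illustrative simplification, z-scoring versus residualizing on a single acquisition parameter; the study's LRM accounts for dependencies on several MRI acquisition parameters.

```python
def zscore(values):
    # Standard z-score: center and scale; ignores what drives the variance.
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / sd for v in values]

def lrm_normalize(values, covariate):
    # LRM-style normalization: fit a simple linear regression of the
    # feature on one acquisition parameter and keep the residuals,
    # removing the linear dependence that z-scoring leaves in place.
    n = len(values)
    mc = sum(covariate) / n
    mv = sum(values) / n
    beta = (sum((c - mc) * (v - mv) for c, v in zip(covariate, values))
            / sum((c - mc) ** 2 for c in covariate))
    return [v - (mv + beta * (c - mc)) for c, v in zip(covariate, values)]

field = [1.5, 1.5, 3.0, 3.0]        # hypothetical field strength per scan
feature = [10.0, 12.0, 20.0, 22.0]  # feature inflated on the 3 T scanner
residuals = lrm_normalize(feature, field)  # scanner-driven shift removed
```

After `lrm_normalize`, the 1.5 T and 3 T scans are on a common scale; `zscore(feature)` would preserve the scanner-driven separation between the two groups.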

  • Article type: Journal Article
    BACKGROUND: Brain medical image segmentation is a critical task in medical image processing, playing a significant role in the prediction and diagnosis of diseases such as stroke, Alzheimer's disease, and brain tumors. However, substantial distribution discrepancies arise among datasets from different sources due to the large inter-site variability in scanners, imaging protocols, and populations. This leads to cross-domain problems in practical applications. In recent years, numerous studies have been conducted to address the cross-domain problem in brain image segmentation.
    METHODS: This review adheres to the standards of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) for data processing and analysis. We retrieved relevant papers published from January 2018 to December 2023 from the PubMed, Web of Science, and IEEE databases, extracting information about the medical domain, imaging modalities, methods for addressing cross-domain issues, experimental designs, and datasets from the selected papers. Moreover, we compared the performance of methods for stroke lesion segmentation, white matter segmentation, and brain tumor segmentation.
    RESULTS: A total of 71 studies were included and analyzed in this review. The methods for tackling the cross-domain problem include transfer learning, normalization, unsupervised learning, Transformer models, and convolutional neural networks (CNNs). On the ATLAS dataset, domain-adaptive methods showed an overall improvement of approximately 3% in stroke lesion segmentation tasks compared to non-adaptive methods. However, given the diversity of datasets and experimental methodologies across current studies of white matter segmentation (MICCAI 2017) and brain tumor segmentation (BraTS), it is challenging to compare the strengths and weaknesses of these methods directly.
    CONCLUSIONS: Although various techniques have been applied to address the cross-domain problem in brain image segmentation, there is currently a lack of unified dataset collections and experimental standards. For instance, many studies are still based on n-fold cross-validation, while methods directly based on cross-validation across sites or datasets are relatively scarce. Furthermore, due to the diverse types of medical images in the field of brain segmentation, simple and intuitive performance comparisons are not straightforward. These challenges need to be addressed in future research.

  • Article type: Journal Article
    BACKGROUND: Brain cancer is a frequently occurring disease around the globe, mostly developing due to the presence of tumors in or around the brain. Generally, the prevalence and incidence of brain cancer are much lower than those of other cancer types (breast, skin, lung, etc.). However, brain cancers are associated with high mortality rates, especially in adults, due to misidentification of tumor types and delayed diagnosis. Therefore, minimizing false detection of brain tumor types and enabling early diagnosis play a crucial role in improving patient survival. To achieve this, many researchers have recently developed deep learning (DL)-based approaches, which have shown remarkable performance, particularly in classification tasks.
    METHODS: This article proposes a novel DL architecture named BrainCDNet. The model concatenates pooling layers and addresses overfitting by initializing the layer weights with 'He Normal' initialization, together with batch normalization and global average pooling (GAP). Initially, we sharpen the input images using a Nimble filter, which preserves edges and fine details. After that, we employ the proposed BrainCDNet for the extraction of relevant features and for classification. In this work, two different forms of magnetic resonance imaging (MRI) databases, binary (healthy vs. pathological) and multiclass (glioma vs. meningioma vs. pituitary), are utilized to perform all the experiments.
    RESULTS: Empirical evidence suggests that the presented model attained significant accuracy on both datasets compared to state-of-the-art approaches, with 99.45% (binary) and 96.78% (multiclass), respectively. Hence, the proposed model can be used as a decision-support tool for radiologists during the diagnosis of brain cancer patients.
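Two of the regularization ingredients named above, 'He Normal' initialization and global average pooling, are standard and can be sketched outside any deep learning framework. This plain-Python version is illustrative only; the actual BrainCDNet layers and shapes are not specified in the abstract.

```python
import random

def he_normal(fan_in, fan_out, rng=None):
    # 'He Normal' initialization: weights drawn from N(0, 2 / fan_in),
    # which keeps activation variance roughly stable through ReLU layers.
    rng = rng or random.Random(0)
    std = (2.0 / fan_in) ** 0.5
    return [[rng.gauss(0.0, std) for _ in range(fan_out)]
            for _ in range(fan_in)]

def global_average_pooling(feature_maps):
    # GAP collapses each HxW feature map to its mean, one value per
    # channel, replacing large dense layers and curbing overfitting.
    return [sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
            for fmap in feature_maps]

weights = he_normal(fan_in=64, fan_out=32)
fmaps = [[[1.0, 3.0], [5.0, 7.0]],   # channel 0 (2x2 feature map)
         [[0.0, 0.0], [2.0, 2.0]]]   # channel 1
pooled = global_average_pooling(fmaps)
```

In a real network these would be the framework's built-ins (e.g. an He-normal initializer and a GAP layer); the sketch only shows what each one computes.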

  • Article type: Journal Article
    Genotype-to-phenotype mapping is an essential problem in the current genomic era. While qualitative case-control predictions have received significant attention, less emphasis has been placed on predicting quantitative phenotypes. This emerging field holds great promise in revealing intricate connections between microbial communities and host health. However, the heterogeneity present in microbiome datasets poses a substantial challenge to prediction accuracy and undermines the reproducibility of models. To tackle this challenge, we investigated 22 normalization methods aimed at removing heterogeneity across multiple datasets, conducted a comprehensive review of them, and evaluated their effectiveness in predicting quantitative phenotypes in three simulation scenarios and 31 real datasets. The results indicate that none of these methods demonstrates significant superiority in predicting quantitative phenotypes or attains a noteworthy reduction in the root mean squared error (RMSE) of the predictions. Given the frequent occurrence of batch effects and the satisfactory performance of batch correction methods on datasets affected by them, we strongly recommend utilizing batch correction methods as the initial step in predicting quantitative phenotypes. In summary, the performance of normalization methods in predicting metagenomic data remains a dynamic and ongoing research area. Our study contributes to this field by undertaking a comprehensive evaluation of diverse methods and offering valuable insights into their effectiveness in predicting quantitative phenotypes.
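As a sketch of why batch correction helps before prediction, a minimal location-only correction can be written in a few lines. This is a deliberately simplified stand-in, not the ComBat-style methods the recommendation refers to, which additionally model batch variances.

```python
def center_by_batch(values, batches):
    # Minimal batch correction: subtract each batch's own mean so that
    # batch-specific location shifts are removed. Real tools such as
    # ComBat also adjust batch variances with empirical Bayes shrinkage.
    groups = {}
    for v, b in zip(values, batches):
        groups.setdefault(b, []).append(v)
    means = {b: sum(vs) / len(vs) for b, vs in groups.items()}
    return [v - means[b] for v, b in zip(values, batches)]

# One taxon's abundance, measured in two hypothetical sequencing batches
# whose offsets would otherwise swamp the phenotype signal.
abundance = [5.0, 7.0, 15.0, 17.0]
batch = ["A", "A", "B", "B"]
corrected = center_by_batch(abundance, batch)
```

After correction, the within-batch structure (±1 around each batch mean) is preserved while the 10-unit batch offset disappears, so a downstream regressor sees biological rather than technical variation.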

  • Article type: Journal Article
    BACKGROUND: Extracellular vesicle-derived (EV)-miRNAs have the potential to serve as biomarkers for the diagnosis of various diseases. miRNA microarrays are widely used to quantify circulating EV-miRNA levels, and the preprocessing of miRNA microarray data is critical for analytical accuracy and reliability. However, although microarray data have been used in various studies, the effects of preprocessing have not been studied for Toray's 3D-Gene chip, a widely used measurement platform. We aimed to evaluate the batch effect, missing value imputation accuracy, and the influence of preprocessing on measured values across 18 different preprocessing pipelines for EV-miRNA microarray data from two cohorts of patients with amyotrophic lateral sclerosis, using 3D-Gene technology.
    RESULTS: Eighteen pipelines with different types and orders of missing value imputation and normalization were used to preprocess the 3D-Gene microarray EV-miRNA data. Batch effects were notably suppressed in all pipelines that used the batch effect correction method ComBat. Furthermore, pipelines utilizing missForest for missing value imputation showed high agreement with measured values. In contrast, imputation using constant values for missing data exhibited low agreement.
    CONCLUSIONS: This study highlights the importance of selecting an appropriate preprocessing strategy for EV-miRNA microarray data when using 3D-Gene technology. These findings emphasize the importance of validating preprocessing approaches, particularly for batch effect correction and missing value imputation, for reliable data analysis in biomarker discovery and disease research.
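The contrast between constant-value and data-driven imputation can be sketched with a simple column-mean imputer. This is illustrative only; the study's best-performing method, missForest, is a random-forest-based iterative imputer, and the example matrix is invented.

```python
def impute(matrix, strategy="mean", constant=0.0):
    # Column-wise imputation of missing entries (None). 'mean' fills with
    # the column mean of observed values; 'constant' fills with a fixed
    # value. missForest instead predicts each missing entry iteratively
    # from the other features with random forests.
    columns = list(zip(*matrix))
    filled = []
    for col in columns:
        observed = [v for v in col if v is not None]
        fill = (sum(observed) / len(observed)
                if strategy == "mean" else constant)
        filled.append([fill if v is None else v for v in col])
    return [list(row) for row in zip(*filled)]

# Hypothetical sample-by-miRNA intensities with two missing entries.
expr = [[1.0, 4.0], [3.0, None], [None, 8.0]]
by_mean = impute(expr, "mean")
by_zero = impute(expr, "constant", 0.0)
```

Constant filling ignores the observed distribution entirely (here forcing a 0 into a column whose values are 4 and 8), which is one intuition for why it agreed poorly with measured values in the study.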

  • Article type: Journal Article
    BACKGROUND: Registration to a standardized template (i.e. "normalization") is a critical step when performing neuroimaging studies. We present a comparative study evaluating general-purpose registration algorithms for pediatric patients with shunt-treated hydrocephalus. Our sample dataset presents a number of intersecting challenges for registration: potentially large deformations to both brain structures and overall brain shape, artifacts from shunts, and morphological differences corresponding to age. The current study assesses the normalization accuracy of shunt-treated hydrocephalus patients using freely available neuroimaging registration tools.
    METHODS: Anatomical neuroimages from eight pediatric patients with shunt-treated hydrocephalus were normalized. Four non-linear registration algorithms were assessed, in addition to the preprocessing steps of skull-stripping and bias-correction. Registration accuracy was assessed using the Dice coefficient (DC) and Hausdorff distance (HD) in subcortical and cortical regions.
    RESULTS: A total of 592 registrations were performed. On average, normalizations performed on skull-stripped and bias-corrected images had a higher DC and lower HD compared to full-head, non-bias-corrected images. The most accurate registration was achieved using SyN from ANTs with skull-stripped and bias-corrected images. Without preprocessing, the DARTEL Toolbox produced normalized images with comparable accuracy. Using a pediatric template as an intermediate registration target did not improve normalization.
    CONCLUSIONS: Using structural neuroimages from pediatric patients with shunt-treated hydrocephalus, we demonstrated that several tools perform well after the specified preprocessing steps. Overall, these results provide insight into the performance of registration programs that can be used for normalization of brains with complex pathologies.
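The two accuracy metrics used above have compact definitions that can be sketched directly. The voxel sets below are tiny hypothetical masks, not study data.

```python
def dice(a, b):
    # Dice coefficient between two voxel sets: 2|A ∩ B| / (|A| + |B|);
    # 1.0 means perfect overlap, 0.0 means none.
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    # Symmetric Hausdorff distance: the worst-case distance from a point
    # in one set to its nearest neighbour in the other set.
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    h_ab = max(min(dist(p, q) for q in b) for p in a)
    h_ba = max(min(dist(q, p) for p in a) for q in b)
    return max(h_ab, h_ba)

seg = {(0, 0), (0, 1), (1, 0)}   # hypothetical normalized segmentation
ref = {(0, 0), (0, 1), (1, 1)}   # corresponding template region
```

Dice rewards bulk overlap while Hausdorff penalizes the single worst boundary error, which is why the study reports both: a registration can score well on one and poorly on the other.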

  • Article type: Journal Article
    BACKGROUND: Recent advances in imaging-based spatially resolved transcriptomics (im-SRT) technologies now enable high-throughput profiling of targeted genes and their locations in fixed tissues. Normalization of gene expression data is often needed to account for technical factors that may confound underlying biological signals.
    RESULTS: Here, we investigate the potential impact of different gene count normalization methods with different targeted gene panels on the analysis and interpretation of im-SRT data. Using different simulated gene panels that overrepresent genes expressed in specific tissue regions or cell types, we demonstrate how normalization methods based on detected gene counts per cell differentially impact normalized gene expression magnitudes in a region- or cell type-specific manner. We show that these normalization-induced effects may reduce the reliability of downstream analyses, including differential gene expression, gene fold change, and spatially variable gene analysis, introducing false positive and false negative results when compared to results obtained from gene panels that are more representative of the gene expression of the tissue's component cell types. These effects are not observed with normalization approaches that do not use detected gene counts for gene expression magnitude adjustment, such as cell volume or cell area normalization.
    CONCLUSIONS: We recommend using non-gene count-based normalization approaches when feasible, and evaluating gene panel representativeness before using gene count-based normalization methods if necessary. Overall, we caution that the choice of normalization method and gene panel may impact the biological interpretation of im-SRT data.
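The count-based versus count-free normalization effect described above can be reproduced on a two-cell toy example. `normalize_counts` and all numbers are hypothetical; the point is only that a total-count denominator couples a gene's normalized value to the rest of the panel, while a volume denominator does not.

```python
def normalize_counts(counts, denominators, scale=100.0):
    # Scale each cell's gene counts by a per-cell denominator: total
    # detected counts (gene count-based) or cell volume (count-free).
    return [[c * scale / d for c in cell]
            for cell, d in zip(counts, denominators)]

# Two cells with identical expression of gene 0; cell 1 also expresses a
# panel-overrepresented gene, which inflates its total detected counts.
counts = [[10, 0], [10, 90]]
totals = [sum(cell) for cell in counts]     # 10 vs 100
volumes = [100.0, 100.0]                    # hypothetical, equal volumes

by_total = normalize_counts(counts, totals)    # gene 0 now differs 10-fold
by_volume = normalize_counts(counts, volumes)  # gene 0 stays identical
```

The spurious 10-fold difference in gene 0 under total-count normalization is exactly the panel-induced false positive the study warns about; volume normalization leaves the two cells correctly equal.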

  • Article type: Journal Article
    BACKGROUND: Understanding COVID-19 outcomes remains a challenge. While numerous biomarkers have been proposed for severity at admission, markers of the infection course, especially the requirement for oxygen therapy, remain underexplored. This study investigates the potential of eosinophil count normalization as a predictor of oxygen weaning during the initial wave of the pandemic.
    METHODS: A retrospective study was conducted between March and April 2020 (first wave) among adults admitted directly to a medicine ward. Biological abnormalities, including lymphocyte count, eosinophil count, and C-reactive protein (CRP), were gathered daily during the first week of admission along with oxygen level. In cases of worsening, oxygen level was censored at 15 L/min. The primary aim was to assess whether eosinophil count normalization predicts a subsequent decrease in oxygen requirements.
    RESULTS: Overall, 132 patients were admitted, with a mean age of 59.0 ± 16.3 years. Of the patients, 72% required oxygen, and 20.5% were admitted to the intensive care unit after a median delay of 48 hours. The median CRP at admission was 79 (26-130) mg/L, and the median eosinophil count was 10 (0-60)/mm3. Eosinophil count normalization (≥100/mm3) by day 2 correlated significantly with decreased oxygen needs (<2 L), with a hazard ratio (HR) of 3.7 [1.1-12.9] (p = 0.04). Likewise, CRP < 80 mg/L was associated with reduced oxygen requirements (p < 0.001). Other predictors, including underlying chronic respiratory disease, exhibited a trend toward a negative association (p = 0.06).
    CONCLUSIONS: The study highlights the relationship between eosinophil count and CRP, with implications for predicting oxygen weaning during COVID-19. Further research is warranted to explore the relevance of these biomarkers in other respiratory infections.