normalization

  • Article type: Journal Article
    BACKGROUND: Vaccines have revolutionized public health by providing protection against infectious diseases. They stimulate the immune system and generate memory cells to defend against targeted diseases. Clinical trials evaluate vaccine performance, including dosage, administration routes, and potential side effects. ClinicalTrials.gov is a valuable repository of clinical trial information, but its vaccine data lack standardization, leading to challenges in automatic concept mapping, vaccine-related knowledge development, evidence-based decision-making, and vaccine surveillance.
    RESULTS: In this study, we developed a cascaded framework that capitalized on multiple domain knowledge sources, including clinical trials, the Unified Medical Language System (UMLS), and the Vaccine Ontology (VO), to enhance the performance of domain-specific language models for automated mapping of VO from clinical trials. The Vaccine Ontology (VO) is a community-based ontology that was developed to promote vaccine data standardization, integration, and computer-assisted reasoning. Our methodology involved extracting and annotating data from various sources. We then performed pre-training on the PubMedBERT model, leading to the development of CTPubMedBERT. Subsequently, we enhanced CTPubMedBERT by incorporating SAPBERT, which was pretrained using the UMLS, resulting in CTPubMedBERT + SAPBERT. Further refinement was accomplished through fine-tuning using the Vaccine Ontology corpus and vaccine data from clinical trials, yielding the CTPubMedBERT + SAPBERT + VO model. Finally, we utilized a collection of pre-trained models, along with a weighted rule-based ensemble approach, to normalize the vaccine corpus and improve the accuracy of the process. The ranking process in concept normalization involves prioritizing and ordering potential concepts to identify the most suitable match for a given context. We ranked the top 10 candidate concepts, and our experimental results demonstrate that our proposed cascaded framework consistently outperformed existing effective baselines on vaccine mapping, achieving 71.8% top-1 accuracy and 90.0% top-10 accuracy.
    CONCLUSIONS: This study provides detailed insight into a cascaded framework of fine-tuned domain-specific language models for improving the mapping of VO from clinical trials. By effectively leveraging domain-specific information and applying weighted rule-based ensembles of different pre-trained BERT models, our framework can significantly enhance the mapping of VO from clinical trials.
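The concept-normalization ranking this abstract describes (order candidate concepts, then score top-k accuracy) can be sketched as follows. This is a minimal illustration with cosine similarity over generic embeddings; the function names and toy vectors are ours, not from the paper, which uses BERT-derived embeddings and a weighted rule-based ensemble.

```python
import numpy as np

def rank_candidates(mention_vec, concept_vecs):
    """Rank candidate ontology concepts by cosine similarity to a mention
    embedding; returns concept indices, best match first."""
    sims = concept_vecs @ mention_vec / (
        np.linalg.norm(concept_vecs, axis=1) * np.linalg.norm(mention_vec))
    return np.argsort(-sims)

def top_k_accuracy(gold_ids, ranked_lists, k):
    """Fraction of mentions whose gold concept appears among the top-k
    ranked candidates (the paper's top-1 / top-10 accuracy)."""
    hits = sum(g in ranked[:k] for g, ranked in zip(gold_ids, ranked_lists))
    return hits / len(gold_ids)
```

With real embeddings, `top_k_accuracy` over a labeled test set yields the kind of top-1/top-10 figures the abstract reports.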

  • Article type: Journal Article
    Objective: This work proposes, for the first time, an image-based end-to-end self-normalization framework for positron emission tomography (PET) using conditional generative adversarial networks (cGANs). Approach: We evaluated different approaches by exploring each of the following three methodologies. First, we used images that were either unnormalized or corrected for geometric factors, which encompass all time-invariant factors, as input data types. Second, we set the input tensor shape as either a single axial slice (2D) or three contiguous axial slices (2.5D). Third, we chose either Pix2Pix or polarized self-attention (PSA) Pix2Pix, which we developed for this work, as a deep learning network. The targets for all approaches were the axial slices of images normalized using the direct normalization method. We performed Monte Carlo simulations of ten voxelized phantoms with the SimSET simulation tool and produced 26,000 pairs of axial image slices for training and testing. Main results: The results showed that 2.5D PSA Pix2Pix trained with geometric-factors-corrected input images achieved the best performance among all the methods we tested. All approaches improved the general image-quality figures of merit, peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), from ∼15% to ∼55%, and 2.5D PSA Pix2Pix showed the highest PSNR (28.074) and SSIM (0.921). Lesion detectability, measured with region of interest (ROI) PSNR, SSIM, normalized contrast recovery coefficient, and contrast-to-noise ratio, was generally improved for all approaches, and 2.5D PSA Pix2Pix trained with geometric-factors-corrected input images achieved the highest ROI PSNR (28.920) and SSIM (0.973). Significance: This study demonstrates the potential of an image-based end-to-end self-normalization framework using cGANs for improving PET image quality and lesion detectability without the need for separate normalization scans.
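The PSNR and SSIM figures of merit used above can be sketched in a few lines. Note this is a simplified, single-window SSIM for illustration; the standard metric averages SSIM over local sliding windows, and the function names here are ours.

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB of img against a reference image."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=1.0):
    """Single-window (global) SSIM; the usual metric averages local windows."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Both are computed between a network output and the directly normalized target slice; higher values indicate a closer match.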

  • Article type: Journal Article
    Transcriptomes and proteomes can be normalized with a handful of RNAs or proteins (or their peptides), such as GAPDH, β-actin, RPBMS, and/or GAP43. Even with hundreds of standards, normalization cannot be achieved across different molecular mass ranges for small molecules, such as lipids and metabolites, due to the non-linearity of mass-to-charge ratio for even the smallest part of the spectrum. We define the amount (or range of amounts) of metabolites and/or lipids per a defined amount of a protein, consistently identified in all samples of a multiple-model organism comparison, as the normative level of that metabolite or lipid. The defined protein amount (or range) is a normalized value for one cohort of complete samples for which intrasample relative protein quantification is available. For example, the amount of citrate (a metabolite) per µg of aconitate hydratase (normalized protein amount) identified in the proteome is the normative level of citrate with aconitase. We define normativity as the amount of metabolites (or amount range) detected when compared to normalized protein levels. We use axon regeneration as an example to illustrate the need for advanced approaches to the normalization of proteins. Comparison across different pharmacologically induced axon regeneration mouse models entails the comparison of axon regeneration, studied at different time points in several models designed using different agents. For the normalization of the proteins across different pharmacologically induced models, we perform peptide doping (fixed amounts of known peptides) in each sample to normalize the proteome across the samples. We developed Regen V peptides, divided into Regen III (SEB, LLO, CFP) and II (HH4B, A1315), for pre- and post-extraction comparisons, performed with the addition of defined, digested peptides (bovine serum albumin tryptic digest) for protein abundance normalization beyond commercial labeled relative quantification (for example, 18-plex tandem mass tags). We also illustrate the concept of normativity by using this normalization technique on regenerative metabolome/lipidome profiles. As normalized protein amounts are different in different biological states (control versus axon regeneration), normative metabolite or lipid amounts are expected to be different for specific biological states. These concepts and standardization approaches are important for the integration of different datasets across different models of axon regeneration.
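The two arithmetic steps above, spike-in rescaling of the proteome and expressing a metabolite per defined amount of a reference protein, reduce to simple ratios. A minimal sketch with hypothetical function names and toy values (the paper works with Regen V peptide spikes and MS intensities):

```python
def spike_normalize(intensities, spike_observed, spike_expected):
    """Rescale a sample's protein intensities so the doped (spiked-in)
    peptide matches its known, fixed amount."""
    factor = spike_expected / spike_observed
    return [v * factor for v in intensities]

def normative_level(metabolite_amount, reference_protein_ug):
    """Amount of a metabolite (or lipid) per defined amount of a reference
    protein, e.g. citrate per µg of aconitate hydratase."""
    return metabolite_amount / reference_protein_ug
```

After spike normalization puts samples on a common scale, normative levels become comparable across models and time points.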

  • Article type: Journal Article
    Widespread clinical use of MRI radiomic tumor profiling for prognostication and treatment planning in cancers faces major obstacles due to limitations in standardization of radiomic features. The purpose of the current work was to assess the impact of different MRI scanning and normalization protocols on the statistical analyses of tumor radiomic data in two patient cohorts with endometrial (EC) (n = 136) and cervical (CC) (n = 132) cancer. At 1.5 T and 3 T, T1-weighted MRI 2 min post-contrast injection, T2-weighted turbo spin echo imaging, and diffusion-weighted imaging were acquired. Radiomic features were extracted from within manually segmented tumors in 3D and normalized either using z-score normalization or a linear regression model (LRM) accounting for linear dependencies with MRI acquisition parameters. Patients were clustered into two groups based on radiomic profile. Impact of MRI scanning parameters on cluster composition and prognostication were analyzed using Kruskal-Wallis tests, Kaplan-Meier plots, log-rank tests, random survival forests, and LASSO Cox regression with time-dependent area under the curve (tdAUC) (α = 0.05). A large proportion of the radiomic features was statistically associated with MRI scanning protocol in both cohorts (EC: 162/385 [42%]; CC: 180/292 [62%]). A substantial number of EC (49/136 [36%]) and CC (50/132 [38%]) patients changed cluster when clustering was performed after z-score versus LRM normalization. Prognostic modeling based on cluster groups yielded similar outputs for the two normalization methods in the EC/CC cohorts (log-rank test; z-score: p = 0.02/0.33; LRM: p = 0.01/0.45). Mean tdAUC for prognostic modeling of disease-specific survival (DSS) by the radiomic features in EC/CC was similar for the two normalization methods (random survival forests; z-score: mean tdAUC = 0.77/0.78; LRM: mean tdAUC = 0.80/0.75; LASSO Cox; z-score: mean tdAUC = 0.64/0.76; LRM: mean tdAUC = 0.76/0.75). Severe biases in tumor radiomics data due to MRI scanning parameters exist. Z-score normalization does not eliminate these biases, whereas LRM normalization effectively does. Still, radiomic cluster groups after z-score and LRM normalization were similarly associated with DSS in EC and CC patients.
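The contrast between the two normalization schemes compared above is easy to see in code: z-scoring rescales a feature globally, while an LRM regresses the feature on acquisition parameters and keeps the residuals. A minimal sketch, assuming a single acquisition covariate (the study fits linear dependencies on several MRI parameters):

```python
import numpy as np

def z_score(feature):
    """Z-score normalization: zero mean, unit variance across patients.
    Does not remove dependence on acquisition parameters."""
    return (feature - feature.mean()) / feature.std()

def lrm_normalize(feature, acq_param):
    """Fit a linear model of the radiomic feature on an MRI acquisition
    parameter and return the residuals as the normalized values."""
    X = np.column_stack([np.ones_like(acq_param), acq_param])
    beta, *_ = np.linalg.lstsq(X, feature, rcond=None)
    return feature - X @ beta
```

A feature that depends linearly on scanner settings stays scanner-dependent after `z_score`, but its `lrm_normalize` residuals do not, which is the bias-removal behavior the study reports for LRM.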

  • Article type: Journal Article
    OBJECTIVE: This study aimed to assess the use of cross-assembly phage (crAssphage) as an endogenous control employing a multivariate normalization analysis and its application as a severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) data normalizer.
    RESULTS: A total of 188 twelve-hour composite raw sewage samples were obtained from eight wastewater treatment plants (WWTP) during a 1-year monitoring period. Employing the N1 and N2 target regions, SARS-CoV-2 RNA was detected in 94% (177) and 90% (170) of the samples, respectively, with a global median of 5 log10 genomic copies per liter (GC l-1). CrAssphage was detected in 100% of the samples, ranging from 8.29 to 10.43 log10 GC l-1, with a median of 9.46 ± 0.40 log10 GC l-1, presenting both spatial and temporal variabilities.
    CONCLUSIONS: Although SARS-CoV-2 data normalization employing crAssphage revealed a correlation with clinical cases occurring during the study period, crAssphage normalization by the flow per capita per day of each WWTP increased this correlation, corroborating the importance of normalizing wastewater surveillance data in disease trend monitoring.
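The two normalizations compared in this abstract (target concentration relative to the endogenous crAssphage control, and scaling by per-capita daily flow) are simple ratios. A minimal sketch; function names and toy numbers are illustrative, not from the study:

```python
def crassphage_ratio(target_gc_per_l, crassphage_gc_per_l):
    """Target virus concentration relative to the endogenous crAssphage
    control measured in the same sample."""
    return target_gc_per_l / crassphage_gc_per_l

def per_capita_daily_load(gc_per_l, flow_l_per_day, population_served):
    """Genomic copies per inhabitant per day, normalizing a concentration
    by the WWTP's flow and the population it serves."""
    return gc_per_l * flow_l_per_day / population_served
```

The conclusion above corresponds to correlating such flow-normalized loads, rather than raw concentrations, with reported clinical cases.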

  • Article type: Journal Article
    BACKGROUND: Treatment options for people with haemophilia are evolving at a rapid pace and a range of prophylactic treatment options using various technologies are currently available, each with their own distinct safety and efficacy profile.
    UNASSIGNED: The access to replacement therapy and prophylaxis has driven a dramatic reduction in mortality and resultant increase in life expectancy. Beyond this, the abolition of bleeds and preservation of joint health represent the expected, but rarely attained, goals of haemophilia treatment and care. These outcomes also do not address the complexity of health-related quality of life impacted by haemophilia and its treatment.
    CONCLUSIONS: Capitalizing on the major potential of therapeutic innovations, 'Normalization' of haemostasis, as a concept, should include the aspiration of enabling individuals to live as normal a life as possible, free from haemophilia-imposed limitations. To achieve this, supported by the data reviewed in this manuscript, the concept of haemostatic and life Normalization needs to be explored and debated within the wider multidisciplinary teams and haemophilia community.

  • Article type: Journal Article
    Wastewater-based epidemiology (WBE) is a critical tool for monitoring community health. Although much attention has focused on severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), a causative agent of coronavirus disease 2019 (COVID-19), other pathogens also pose significant health risks. This study quantified the presence of SARS-CoV-2, influenza A virus (Inf-A), and noroviruses of genogroups I (NoV-GI) and II (NoV-GII) in wastewater samples collected weekly (n = 170) from July 2023 to February 2024 from five wastewater treatment plants (WWTPs) in Yamanashi Prefecture, Japan, by quantitative PCR. Inf-A RNA exhibited localized prevalence with positive ratios of 59%-82% in different WWTPs, suggesting regional outbreaks within specific areas. NoV-GI (94%, 160/170) and NoV-GII (100%, 170/170) RNA were highly prevalent, with NoV-GII (6.1 ± 0.8 log10 copies/L) consistently exceeding NoV-GI (5.4 ± 0.7 log10 copies/L) RNA concentrations. SARS-CoV-2 RNA was detected in 100% of the samples, with mean concentrations of 5.3 ± 0.5 log10 copies/L in WWTP E and 5.8 ± 0.4 log10 copies/L each in other WWTPs. Seasonal variability was evident, with higher concentrations of all pathogenic viruses during winter. Non-normalized and normalized virus concentrations by fecal indicator bacteria (Escherichia coli and total coliforms), an indicator virus (pepper mild mottle virus (PMMoV)), and turbidity revealed significant positive associations with the reported disease cases. Inf-A and NoV-GI + GII RNA concentrations showed strong correlations with influenza and acute gastroenteritis cases, particularly when normalized to E. coli (Spearman's ρ = 0.70-0.81) and total coliforms (ρ = 0.70-0.81), respectively. For SARS-CoV-2, non-normalized concentrations showed a correlation of 0.61, decreasing to 0.31 when normalized to PMMoV, suggesting that PMMoV is unsuitable. Turbidity normalization also yielded suboptimal results. This study underscored the importance of selecting suitable normalization parameters tailored to specific pathogens for accurate disease trend monitoring using WBE, demonstrating its utility beyond COVID-19 surveillance.
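The Spearman's ρ values that this study uses to compare normalization parameters are the Pearson correlation of rank-transformed data. A minimal sketch (no tie handling, so it assumes distinct values; real analyses would use a library routine such as `scipy.stats.spearmanr`):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Simplified: assumes no tied values, so ranks are plain orderings."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

Applied per normalization scheme (raw, E. coli-, PMMoV-, or turbidity-normalized concentration versus weekly case counts), the scheme giving the highest ρ is the one the study recommends for that pathogen.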

  • Article type: Journal Article
    UNASSIGNED: Brain medical image segmentation is a critical task in medical image processing, playing a significant role in the prediction and diagnosis of diseases such as stroke, Alzheimer's disease, and brain tumors. However, substantial distribution discrepancies among datasets from different sources arise due to the large inter-site discrepancy among different scanners, imaging protocols, and populations. This leads to cross-domain problems in practical applications. In recent years, numerous studies have been conducted to address the cross-domain problem in brain image segmentation.
    UNASSIGNED: This review adheres to the standards of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) for data processing and analysis. We retrieved relevant papers from PubMed, Web of Science, and IEEE databases from January 2018 to December 2023, extracting information about the medical domain, imaging modalities, methods for addressing cross-domain issues, experimental designs, and datasets from the selected papers. Moreover, we compared the performance of methods in stroke lesion segmentation, white matter segmentation and brain tumor segmentation.
    UNASSIGNED: A total of 71 studies were included and analyzed in this review. The methods for tackling the cross-domain problem include Transfer Learning, Normalization, Unsupervised Learning, Transformer models, and Convolutional Neural Networks (CNNs). On the ATLAS dataset, domain-adaptive methods showed an overall improvement of ~3 percent in stroke lesion segmentation tasks compared to non-adaptive methods. However, given the diversity of datasets and experimental methodologies in current studies based on the methods for white matter segmentation tasks in MICCAI 2017 and those for brain tumor segmentation tasks in BraTS, it is challenging to intuitively compare the strengths and weaknesses of these methods.
    UNASSIGNED: Although various techniques have been applied to address the cross-domain problem in brain image segmentation, there is currently a lack of unified dataset collections and experimental standards. For instance, many studies are still based on n-fold cross-validation, while methods directly based on cross-validation across sites or datasets are relatively scarce. Furthermore, due to the diverse types of medical images in the field of brain segmentation, it is not straightforward to make simple and intuitive comparisons of performance. These challenges need to be addressed in future research.

  • Article type: Journal Article
    UNASSIGNED: Brain cancer is a frequently occurring disease around the globe and mostly develops due to the presence of tumors in/around the brain. Generally, the prevalence and incidence of brain cancer are much lower than those of other cancer types (breast, skin, lung, etc.). However, brain cancers are associated with high mortality rates, especially in adults, due to false identification of tumor types and delays in diagnosis. Therefore, minimizing false detection of brain tumor types and enabling early diagnosis play a crucial role in improving the patient survival rate. To achieve this, many researchers have recently developed deep learning (DL)-based approaches, since they have shown remarkable performance, particularly in classification tasks.
    UNASSIGNED: This article proposes a novel DL architecture named BrainCDNet. This model was made by concatenating the pooling layers and dealing with the overfitting issues by initializing the weights into layers using 'He Normal' initialization along with the batch norm and global average pooling (GAP). Initially, we sharpen the input images using a Nimble filter, which results in maintaining the edges and fine details. After that, we employed the suggested BrainCDNet for the extraction of relevant features and classification. In this work, two different forms of magnetic resonance imaging (MRI) databases such as binary (healthy vs. pathological) and multiclass (glioma vs. meningioma vs. pituitary) are utilized to perform all these experiments.
    UNASSIGNED: Empirical evidence suggests that the presented model attained a significant accuracy on both datasets compared to the state-of-the-art approaches, with 99.45% (binary) and 96.78% (multiclass), respectively. Hence, the proposed model can be used as a decision-supportive tool for radiologists during the diagnosis of brain cancer patients.

  • Article type: Journal Article
    Genotype-to-phenotype mapping is an essential problem in the current genomic era. While qualitative case-control predictions have received significant attention, less emphasis has been placed on predicting quantitative phenotypes. This emerging field holds great promise in revealing intricate connections between microbial communities and host health. However, the presence of heterogeneity in microbiome datasets poses a substantial challenge to the accuracy of predictions and undermines the reproducibility of models. To tackle this challenge, we investigated 22 normalization methods that aimed at removing heterogeneity across multiple datasets, conducted a comprehensive review of them, and evaluated their effectiveness in predicting quantitative phenotypes in three simulation scenarios and 31 real datasets. The results indicate that none of these methods demonstrate significant superiority in predicting quantitative phenotypes or attain a noteworthy reduction in Root Mean Squared Error (RMSE) of the predictions. Given the frequent occurrence of batch effects and the satisfactory performance of batch correction methods in predicting datasets affected by these effects, we strongly recommend utilizing batch correction methods as the initial step in predicting quantitative phenotypes. In summary, the performance of normalization methods in predicting metagenomic data remains a dynamic and ongoing research area. Our study contributes to this field by undertaking a comprehensive evaluation of diverse methods and offering valuable insights into their effectiveness in predicting quantitative phenotypes.
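The RMSE criterion by which the 22 normalization methods are compared above is a standard one-liner; a minimal sketch (the function name is ours):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error of quantitative phenotype predictions;
    lower is better when comparing normalization methods."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```

In the evaluation design described here, one would normalize the training and test datasets with each candidate method, fit the same regressor, and compare the resulting `rmse` values; the study found no method consistently lowered it.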
