Dimensionality reduction

  • Article type: Journal Article
    Recent advancements in biomedical technologies and the proliferation of high-dimensional Next Generation Sequencing (NGS) datasets have led to significant growth in the volume and density of data. NGS data are high-dimensional, characterized by a large number of genomic, transcriptomic, proteomic, and metagenomic features relative to the number of biological samples, which makes reducing feature dimensionality a central challenge. This high dimensionality complicates data analysis, increasing the computational burden, the risk of overfitting, and the difficulty of interpreting results. Feature selection and feature extraction are two pivotal techniques employed to address these challenges by reducing the dimensionality of the data, thereby enhancing model performance, interpretability, and computational efficiency; both can be categorized into statistical and machine learning methods. The present study conducts a comprehensive and comparative review of statistical, machine learning, and deep learning-based feature selection and extraction techniques specifically tailored to the interpretation of human NGS and microarray data. A thorough literature search was performed to gather information on these techniques, focusing on array-based and NGS data analysis. Various techniques, including deep learning architectures, machine learning algorithms, and statistical methods, have been explored for the microarray, bulk RNA-Seq, and single-cell RNA-Seq (scRNA-Seq) datasets surveyed here. The study provides an overview of these techniques, highlighting their applications, advantages, and limitations in the context of high-dimensional NGS data. This review offers readers better insight into applying feature selection and feature extraction techniques to enhance the performance of predictive models, uncover underlying biological patterns, and gain deeper insights into massive and complex NGS and microarray data.
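    To make the distinction concrete, the sketch below contrasts the two families on a synthetic, NGS-like matrix (many features, few samples): feature selection keeps a subset of the original genes, while feature extraction builds new components from all of them. It uses scikit-learn; the data, sizes, and parameter choices are illustrative, not drawn from the reviewed studies.

```python
# Minimal sketch: feature selection vs. feature extraction on synthetic,
# NGS-like data (many "genes", few samples). All names and sizes are illustrative.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5000))          # 60 samples x 5000 "genes"
y = rng.integers(0, 2, size=60)          # binary phenotype
X[y == 1, :20] += 1.5                    # make 20 genes weakly informative

# Feature selection: keep original features, ranked by an ANOVA F-test.
selector = SelectKBest(score_func=f_classif, k=50).fit(X, y)
X_selected = selector.transform(X)       # 60 x 50, still interpretable genes

# Feature extraction: build new, uncorrelated components from all features.
extractor = PCA(n_components=10).fit(X)
X_extracted = extractor.transform(X)     # 60 x 10 principal components

print(X_selected.shape, X_extracted.shape)
```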

  • Article type: Journal Article
    With the advent and improvement of ontological dictionaries (WordNet, BabelNet), the use of synset-based text representations is gaining popularity in classification tasks. More recently, ontological dictionaries have been used to reduce dimensionality in this kind of representation (e.g., the Semantic Dimensionality Reduction System (SDRS) (Vélez de Mendizabal et al., 2020)). These approaches combine semantically related columns by taking advantage of semantic information extracted from ontological dictionaries. Their main advantage is that they not only eliminate features but can also combine them, minimizing (low-loss) or avoiding (lossless) the loss of information. The most recent (and accurate) techniques in this group use evolutionary algorithms to find out how many features can be grouped in order to reduce the false positive (FP) and false negative (FN) errors obtained. The main limitation of these evolutionary-based schemes is the computational requirement derived from the use of optimization algorithms. The contribution of this study is a new lossless feature reduction scheme exploiting information from ontological dictionaries, which achieves slightly better accuracy (especially in FP errors) than optimization-based approaches while using far fewer computational resources. Instead of using computationally expensive evolutionary algorithms, our proposal determines whether two columns (synsets) can be combined by observing whether the instances in a dataset (e.g., the training dataset) that contain these synsets belong mostly to the same class. The study includes experiments using three datasets and a detailed comparison with two previous optimization-based approaches.
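    The following is a minimal sketch of the merging criterion as described in the abstract: two synset columns are combined only if the documents containing each synset belong (mostly) to the same class. The purity threshold and the OR-style merge are our own assumptions, not the paper's exact lossless rule.

```python
# Sketch of class-homogeneity-based column merging; thresholds and the merge
# rule are assumptions for illustration, not the paper's exact criterion.
import numpy as np

def dominant_class(column, labels):
    """Return (majority class, purity) over documents containing the synset."""
    mask = column > 0
    if not mask.any():
        return None, 0.0
    classes, counts = np.unique(labels[mask], return_counts=True)
    return classes[counts.argmax()], counts.max() / counts.sum()

def can_merge(col_a, col_b, labels, min_purity=1.0):
    ca, pa = dominant_class(col_a, labels)
    cb, pb = dominant_class(col_b, labels)
    return ca is not None and ca == cb and pa >= min_purity and pb >= min_purity

# Toy document-synset matrix (rows: documents, columns: synsets) and labels.
X = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 0],
              [0, 0, 1]])
y = np.array([0, 0, 0, 1])

if can_merge(X[:, 0], X[:, 1], y):
    merged = np.maximum(X[:, 0], X[:, 1])   # combine by logical OR
    print("merged column:", merged)
else:
    print("columns kept separate")
```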

  • Article type: Journal Article
    Studies have linked auditory hallucinations (AH) in schizophrenia spectrum disorders (SCZ) to altered cerebral white matter microstructure within the language and auditory processing circuitry (LAPC). However, the specificity to the LAPC remains unclear. Here, we used diffusion tensor imaging (DTI) to investigate the relationship between AH and white matter microstructure among patients with SCZ.
    We included patients with SCZ with (AH+; n = 59) and without (AH-; n = 81) current AH, and 140 age- and sex-matched controls. Fractional anisotropy (FA), mean diffusivity (MD), radial diffusivity (RD), and axial diffusivity (AD) were extracted from 39 fiber tracts. We used principal component analysis (PCA) to identify general factors of variation across fiber tracts and DTI metrics. Regression models adjusted for sex, age, and age² were used to compare tract-wise DTI metrics and PCA factors between AH+, AH-, and healthy controls, and to assess associations with clinical characteristics.
    Widespread differences relative to controls were observed for MD and RD in patients without current AH. Only limited differences in two fiber tracts were observed between AH+ and controls. Unimodal PCA factors based on MD, RD, and AD, as well as multimodal PCA factors, differed significantly relative to controls for AH-, but not AH+. We did not find any significant associations between PCA factors and clinical characteristics.
    Contrary to previous studies, DTI metrics differed mainly in patients without current AH compared to controls, indicating a widespread neuroanatomical distribution. This challenges the notion that altered DTI metrics within the LAPC are a specific feature underlying AH.
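    As a hedged illustration of the analysis pattern described above (not the authors' code or data), the sketch below reduces tract-wise DTI metrics to PCA factors and regresses one factor on group while adjusting for sex, age, and age squared; all data and variable names are synthetic.

```python
# Sketch: PCA factors from tract-wise DTI metrics, then a covariate-adjusted
# group comparison. Synthetic data; variable names are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 280
dti = rng.normal(size=(n, 39))                      # e.g., MD from 39 fiber tracts
factors = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(dti))

df = pd.DataFrame({
    "factor1": factors[:, 0],
    "group": rng.choice(["AH_plus", "AH_minus", "control"], size=n),
    "sex": rng.choice(["F", "M"], size=n),
    "age": rng.uniform(18, 60, size=n),
})
# Regression adjusted for sex, age, and age squared.
model = smf.ols("factor1 ~ C(group) + C(sex) + age + I(age**2)", data=df).fit()
print(model.summary().tables[1])
```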

  • Article type: Journal Article
    As the dimensionality, throughput and complexity of cytometry data increase, so does the demand for user-friendly, interactive analysis tools that leverage high-performance machine learning frameworks. Here we introduce FlowAtlas: an interactive web application that enables dimensionality reduction of cytometry data without down-sampling and that is compatible with datasets stained with non-identical panels. FlowAtlas bridges the user-friendly environment of FlowJo and computational tools in Julia developed by the scientific machine learning community, eliminating the need for coding and bioinformatics expertise. New population discovery and detection of rare populations in FlowAtlas are intuitive and rapid. We demonstrate the capabilities of FlowAtlas using a human multi-tissue, multi-donor immune cell dataset, highlighting key immunological findings. FlowAtlas is available at https://github.com/gszep/FlowAtlas.jl.git.

  • Article type: Journal Article
    Latent variable analysis is an important part of psychometric research. In this context, factor analysis and other related techniques have been widely applied to investigate the internal structure of psychometric tests. However, these methods perform a linear dimensionality reduction under a series of assumptions that cannot always be verified in psychological data. Predictive techniques, such as artificial neural networks, can complement and improve the exploration of the latent space, overcoming the limits of traditional methods. In this study, we explore the latent space generated by a particular artificial neural network: the variational autoencoder. This autoencoder can perform a nonlinear dimensionality reduction and encourages the latent features to follow a predefined distribution (usually a normal distribution) while learning the most important relationships hidden in the data. We investigate the capacity of the autoencoder to model item-factor relationships in simulated data encompassing linear and nonlinear associations, and we extend the investigation to a real dataset. Results on simulated data show that the variational autoencoder performs similarly to factor analysis when the relationships among observed and latent variables are linear, and that it is able to reproduce the factor scores. Moreover, results on nonlinear data show that, unlike factor analysis, it can also learn to reproduce nonlinear relationships among observed variables and factors; the factor score estimates are also more accurate than those obtained with factor analysis. The results on the real dataset confirm the potential of the autoencoder to reduce dimensionality under mild assumptions on the input data and to recognize the function that links observed and latent variables.
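    The sketch below shows a minimal variational autoencoder of the kind the abstract describes: a nonlinear encoder maps item responses to a small latent space pulled toward a standard normal, and the latent means can be read as factor-score-like estimates. It uses PyTorch; layer sizes, the loss weighting, and the toy data are illustrative assumptions.

```python
# Minimal variational autoencoder sketch; sizes and data are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, n_items=20, n_latent=2, n_hidden=32):
        super().__init__()
        self.enc = nn.Linear(n_items, n_hidden)
        self.mu = nn.Linear(n_hidden, n_latent)
        self.logvar = nn.Linear(n_hidden, n_latent)
        self.dec = nn.Sequential(nn.Linear(n_latent, n_hidden), nn.Tanh(),
                                 nn.Linear(n_hidden, n_items))

    def forward(self, x):
        h = torch.tanh(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.dec(z), mu, logvar

def loss_fn(x, x_hat, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="sum")                 # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # pull latent toward N(0, 1)
    return recon + kl

x = torch.randn(256, 20)                     # toy "responses" to 20 items
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    x_hat, mu, logvar = model(x)
    loss = loss_fn(x, x_hat, mu, logvar)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("latent factor scores:", model(x)[1].shape)   # mu plays the role of factor scores
```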

  • Article type: Journal Article
    Images, texts, voices, and signals can be synthesized by latent spaces in a multidimensional vector, which can be explored without the hurdles of noise or other interfering factors. In this paper, we present a practical use case that demonstrates the power of latent space in exploring complex realities such as image space. We focus on DaVinciFace, an AI-based system that explores the StyleGAN2 space to create a high-quality portrait for anyone in the style of the Renaissance genius Leonardo da Vinci. The user enters one of their portraits and receives the corresponding Da Vinci-style portrait as an output. Since most of Da Vinci's artworks depict young and beautiful women (e.g., "La Belle Ferroniere", "Beatrice de' Benci"), we investigate the ability of DaVinciFace to account for other social categorizations, including gender, race, and age. The experimental results evaluate the effectiveness of our methodology on 1158 portraits, acting on the vector representations of the latent space to produce high-quality portraits that retain the facial features of the subject's social categories, and conclude that sparser vectors have a greater effect on these features. To objectively evaluate and quantify our results, we solicited human feedback via a crowd-sourcing campaign. Analysis of the human feedback showed a high tolerance for the loss of important identity features in the resulting portraits when the Da Vinci style is more pronounced, with some exceptions, including Africanized individuals.

  • Article type: Journal Article
    Alzheimer's disease (AD) is affecting a growing number of individuals. As a result, there is a pressing need for accurate and early diagnosis methods. This study aims to achieve this goal by developing an optimal data analysis strategy to enhance computational diagnosis. Although various modalities of AD diagnostic data are collected, past research on computational methods of AD diagnosis has mainly focused on using single-modal inputs. We hypothesize that integrating, or "fusing," various data modalities as inputs to prediction models could enhance diagnostic accuracy by offering a more comprehensive view of an individual's health profile. However, a potential challenge arises as this fusion of multiple modalities may result in significantly higher-dimensional data. We hypothesize that employing suitable dimensionality reduction methods across heterogeneous modalities would not only help diagnosis models extract latent information but also enhance accuracy. Therefore, it is imperative to identify optimal strategies for both data fusion and dimensionality reduction. In this paper, we have conducted a comprehensive comparison of over 80 statistical machine learning methods, considering various classifiers, dimensionality reduction techniques, and data fusion strategies to assess our hypotheses. Specifically, we have explored three primary strategies: (1) Simple data fusion, which involves straightforward concatenation (fusion) of datasets before inputting them into a classifier; (2) Early data fusion, in which datasets are concatenated first, and then a dimensionality reduction technique is applied before feeding the resulting data into a classifier; and (3) Intermediate data fusion, in which dimensionality reduction methods are applied individually to each dataset before concatenating them to construct a classifier. For dimensionality reduction, we have explored several commonly used techniques such as principal component analysis (PCA), autoencoder (AE), and LASSO. Additionally, we have implemented a new dimensionality-reduction method called the supervised encoder (SE), which involves slight modifications to standard deep neural networks. Our results show that SE substantially improves prediction accuracy compared to PCA, AE, and LASSO, especially in combination with intermediate fusion for multiclass diagnosis prediction.
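    To illustrate the difference between two of the fusion strategies, the sketch below contrasts early fusion (concatenate, then reduce) with intermediate fusion (reduce each modality, then concatenate) on synthetic data, using PCA as a stand-in for any dimensionality-reduction step; the supervised encoder itself is not reproduced here.

```python
# Sketch: early vs. intermediate data fusion with PCA on synthetic modalities.
# Modality names, sizes, and labels are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
mri = rng.normal(size=(n, 300))          # modality 1 (illustrative)
genetics = rng.normal(size=(n, 500))     # modality 2 (illustrative)
y = rng.integers(0, 3, size=n)           # e.g., three diagnostic classes

# Early fusion: concatenate first, then reduce once.
early = PCA(n_components=20).fit_transform(np.hstack([mri, genetics]))

# Intermediate fusion: reduce each modality separately, then concatenate.
intermediate = np.hstack([PCA(n_components=10).fit_transform(mri),
                          PCA(n_components=10).fit_transform(genetics)])

clf = LogisticRegression(max_iter=1000)
print("early fusion CV accuracy:", cross_val_score(clf, early, y, cv=5).mean())
print("intermediate fusion CV accuracy:", cross_val_score(clf, intermediate, y, cv=5).mean())
```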

  • Article type: Journal Article
    The architecture of the brain is too complex to be intuitively surveyable without the use of compressed representations that project its variation into a compact, navigable space. The task is especially challenging with high-dimensional data, such as gene expression, where the joint complexity of anatomical and transcriptional patterns demands maximum compression. The established practice is to use standard principal component analysis (PCA), whose computational felicity is offset by limited expressivity, especially at high compression ratios. Employing whole-brain, voxel-wise Allen Brain Atlas transcription data, here we systematically compare compressed representations based on the most widely supported linear and non-linear methods: PCA, kernel PCA, non-negative matrix factorisation (NMF), t-distributed stochastic neighbour embedding (t-SNE), uniform manifold approximation and projection (UMAP), and deep auto-encoding. We quantify reconstruction fidelity, anatomical coherence, and predictive utility across signalling, microstructural, and metabolic targets drawn from large-scale open-source MRI and PET data. We show that deep auto-encoders yield superior representations across all metrics of performance and target domains, supporting their use as the reference standard for representing transcription patterns in the human brain.
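    A minimal sketch of the reconstruction-fidelity part of such a comparison is given below, using two of the listed linear methods (PCA and NMF) on synthetic non-negative data; the deep auto-encoder and the anatomical and predictive criteria are omitted, and all sizes are illustrative.

```python
# Sketch: compare reconstruction error of two dimensionality-reduction methods.
# Synthetic non-negative "voxels x genes" matrix; sizes are illustrative.
import numpy as np
from sklearn.decomposition import PCA, NMF

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(500, 200)))          # voxels x genes (illustrative)

def reconstruction_error(model, X):
    Z = model.fit_transform(X)                   # compressed representation
    return np.mean((X - model.inverse_transform(Z)) ** 2)

for name, model in [("PCA", PCA(n_components=10)),
                    ("NMF", NMF(n_components=10, init="nndsvda", max_iter=500))]:
    print(name, reconstruction_error(model, X))
```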

  • Article type: Journal Article
    Hyperspectral imaging (HSI) is gaining increasing relevance in medicine, with an innovative application being the intraoperative assessment of the outcome of laser ablation treatment used for minimally invasive tumor removal. However, the high dimensionality and complexity of HSI data create a need for end-to-end image processing workflows specifically tailored to handle these data. This study addresses this challenge by proposing a multi-stage workflow for the analysis of hyperspectral data that allows the performance of different components and modalities to be investigated for ablation detection and segmentation. To address dimensionality reduction, we integrated principal component analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE) to capture dominant variations and reveal intricate structures, respectively. Additionally, we employed the Faster Region-based Convolutional Neural Network (Faster R-CNN) to accurately localize ablation areas. The two-stage detection process of Faster R-CNN, along with the choice of dimensionality reduction technique and data modality, significantly influenced the performance in detecting ablation areas. The evaluation of ablation detection on an independent test set demonstrated a mean average precision of approximately 0.74, which validates the generalization ability of the models. In the segmentation component, the Mean Shift algorithm showed high-quality segmentation without manual cluster definition. Our results show that the integration of PCA, t-SNE, and Faster R-CNN enables improved interpretation of hyperspectral data, leading to the development of reliable ablation detection and segmentation systems.
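    The sketch below illustrates the dimensionality-reduction and clustering steps on a synthetic hyperspectral cube: PCA for dominant spectral variation, t-SNE for visualising structure, and Mean Shift for cluster-based segmentation; the Faster R-CNN detector is not reproduced, and all sizes are illustrative.

```python
# Sketch: PCA + t-SNE + Mean Shift on a synthetic hyperspectral cube.
# Cube dimensions and parameters are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(0)
cube = rng.normal(size=(32, 32, 100))            # H x W x spectral bands
pixels = cube.reshape(-1, cube.shape[-1])        # flatten to pixels x bands

scores = PCA(n_components=5).fit_transform(pixels)                 # dominant variation
embedding = TSNE(n_components=2, perplexity=30,
                 init="pca").fit_transform(scores)                 # nonlinear structure

bandwidth = estimate_bandwidth(scores, quantile=0.2, n_samples=500)
labels = MeanShift(bandwidth=bandwidth).fit_predict(scores)        # unsupervised clusters
segmentation = labels.reshape(32, 32)            # cluster map over the image plane
print(embedding.shape, np.unique(labels).size)
```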

  • Article type: Journal Article
    Associations between brain structure and body mass index (BMI) are increasingly gaining attention. Although BMI-related regional alterations in brain morphology have been previously reported, the effect of BMI on microstructural profiles, which provide a proxy for neuronal density within the cortex, remains unexplored. In this study, we investigated the links between cortical layer-specific microstructural profiles and BMI in 302 neurologically healthy young adults. Using a microstructure-sensitive proxy based on the T1- and T2-weighted ratio, we estimated microstructural profile covariance (MPC) by calculating linear correlations of cortical depth-wise intensity profiles between different brain regions. Then, low-dimensional gradients of the MPC matrix were estimated using dimensionality reduction techniques, and the gradients were associated with BMI. Significant effects were observed in heteromodal association areas. The BMI-gradient association map was related to geodesic distance along the cortical surface, curvature, and sulcal depth, suggesting that the microstructural alterations occurred along the cortical topology. The BMI-gradient association map was further linked to cognitive states related to negative emotions. Our findings may provide insights into understanding the atypical cortical microstructure associated with BMI.
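    As a rough illustration of the MPC-and-gradient construction (not the authors' pipeline), the sketch below correlates synthetic depth-wise intensity profiles between regions and then extracts low-dimensional gradients from the resulting matrix, with PCA standing in for the gradient-mapping step.

```python
# Sketch: microstructural profile covariance (MPC) and low-dimensional gradients.
# Profiles are synthetic; PCA is used here as a generic gradient-mapping stand-in.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_regions, n_depths = 200, 14
profiles = rng.normal(size=(n_regions, n_depths))    # intensity sampled across cortical depths

mpc = np.corrcoef(profiles)                          # region-by-region profile covariance
np.fill_diagonal(mpc, 0)                             # ignore self-correlations

gradients = PCA(n_components=2).fit_transform(mpc)   # low-dimensional gradients of the MPC matrix
print(gradients.shape)                               # one value per region, per gradient
```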
