Multi-modal

  • Article type: Journal Article
    Humans express emotions through various modalities such as facial expressions and natural language. However, the relationships between emotions expressed through different modalities and their correlations with neural activities remain uncertain. Here, we aimed to unveil some of these uncertainties by investigating the similarity of emotion representations across modalities and brain regions. First, we represented various emotion categories as multi-dimensional vectors derived from visual (face), linguistic, and visio-linguistic data, and used representational similarity analysis to compare these modalities. Second, we examined the linear transferability of emotion representation from other modalities to the visual modality. Third, we compared the representational structure derived in the first step with those from brain activities across 360 regions. Our findings revealed that emotion representations share commonalities across modalities with modality-type-dependent variations, and they can be linearly mapped from other modalities to the visual modality. Additionally, emotion representations in uni-modalities showed relatively higher similarity with specific brain regions, while multi-modal emotion representation was most similar to representations across the entire brain. These findings suggest that emotional experiences are represented differently across various brain regions with varying degrees of similarity to different modality types, and that they may be multi-modally conveyable in visual and linguistic domains.
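The representational similarity analysis (RSA) step described above can be sketched in a few lines: build each modality's representational dissimilarity matrix (RDM) from its emotion-category vectors, then rank-correlate the RDMs' upper triangles. The embeddings below are invented toy data, not the paper's actual face, linguistic, or visio-linguistic features.

```python
# Minimal RSA sketch: compare two modalities' representational geometries.
import math
from itertools import combinations

def rdm_upper(vectors):
    """Upper triangle of the RDM: pairwise Euclidean distances."""
    return [math.dist(vectors[i], vectors[j])
            for i, j in combinations(range(len(vectors)), 2)]

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(a, b):
    """Spearman correlation = Pearson correlation of ranks (no ties here)."""
    ra, rb = ranks(a), ranks(b)
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = math.sqrt(sum((x - ma) ** 2 for x in ra))
    vb = math.sqrt(sum((y - mb) ** 2 for y in rb))
    return cov / (va * vb)

# Toy embeddings for 4 emotion categories in two modalities.
visual = [[0.0, 0.0], [2.0, 0.0], [0.0, 1.0], [3.0, 3.0]]
linguistic = [[0.0, 0.1], [2.1, 0.0], [0.1, 1.2], [2.9, 3.1]]
similarity = spearman(rdm_upper(visual), rdm_upper(linguistic))
```

A high correlation between the two RDMs indicates that the two modalities arrange the emotion categories similarly, which is the sense of "shared commonality" the abstract reports.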

  • Article type: Journal Article
    OBJECTIVE: To develop a deep learning model combining CT scans and clinical information to predict overall survival in advanced hepatocellular carcinoma (HCC).
    METHODS: This retrospective study included immunotherapy-treated advanced HCC patients from 52 multi-national in-house centers between 2018 and 2022. A multi-modal prognostic model using baseline and first follow-up CT images and 7 clinical variables was proposed. A convolutional-recurrent neural network (CRNN) was developed to extract spatial-temporal information from automatically selected representative 2D CT slices to provide a radiological score, which was then fused with a Cox-based clinical score to provide the survival risk. The model's effectiveness was assessed using a time-dependent area under the receiver operating curve (AUC), and risk group stratification using the log-rank test. Prognostic performances of multi-modal inputs were compared to models of missing modality, and the size-based RECIST criteria.
    RESULTS: Two hundred seven patients (mean age, 61 years ± 12 [SD], 180 men) were included. The multi-modal CRNN model reached AUCs of 0.777 and 0.704 for 1-year overall survival prediction in the validation and test sets, respectively. The model achieved significant risk stratification in the validation (hazard ratio [HR] = 3.330, p = 0.008) and test sets (HR = 2.024, p = 0.047) based on the median risk score of the training set. Models with missing modalities (the single-modal imaging-based model and the model incorporating only baseline scans) could still achieve favorable risk stratification performance (all p < 0.05, except for one, p = 0.053). Moreover, the results proved the superiority of the deep learning-based model over the RECIST criteria.
    CONCLUSIONS: Deep learning analysis of CT scans and clinical data can offer significant prognostic insights for patients with advanced HCC.
    UNASSIGNED: The established model can help monitor patients' disease statuses and identify those with poor prognosis at the time of first follow-up, helping clinicians make informed treatment decisions, as well as early and timely interventions.
    CONCLUSIONS: An AI-based prognostic model was developed for advanced HCC using multi-national patients. The model extracts spatial-temporal information from CT scans and integrates it with clinical variables to prognosticate. The model demonstrated superior prognostic ability compared to the conventional size-based RECIST method.
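The risk-stratification step the abstract describes (fusing a radiological score with a clinical score, then splitting patients at the median training-set risk) can be sketched as follows. The fusion weights and all scores are invented for illustration; in the paper the radiological score comes from the CRNN and the clinical score from a Cox model.

```python
# Toy sketch: fuse two modality scores into one risk, stratify by training median.

def fuse_risk(radiological, clinical, w_rad=0.6, w_clin=0.4):
    """Weighted fusion of the two modality scores (weights are assumptions)."""
    return w_rad * radiological + w_clin * clinical

# Training-set risks define the stratification cutoff.
train_pairs = [(0.2, 0.1), (0.8, 0.9), (0.5, 0.4), (0.9, 0.7)]
train_risks = sorted(fuse_risk(r, c) for r, c in train_pairs)
n = len(train_risks)
median = (train_risks[n // 2 - 1] + train_risks[n // 2]) / 2  # even-length median

def stratify(risk, cutoff):
    return "high" if risk > cutoff else "low"

# New (test-set) patients are stratified against the training median.
test_groups = [stratify(fuse_risk(r, c), median)
               for r, c in [(0.1, 0.2), (0.95, 0.8)]]
```

The log-rank test in the paper then checks whether the resulting high- and low-risk groups have significantly different survival curves.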

  • Article type: Journal Article
    This study aims to explore methods for classifying and describing volleyball training videos using deep learning techniques. By developing an innovative model that integrates Bi-directional Long Short-Term Memory (BiLSTM) and attention mechanisms, referred to as BiLSTM-Multimodal Attention Fusion Temporal Classification (BiLSTM-MAFTC), the study enhances the accuracy and efficiency of volleyball video content analysis. Initially, the model encodes features from various modalities into feature vectors, capturing different types of information such as positional and modal data. The BiLSTM network is then used to model multi-modal temporal information, while spatial and channel attention mechanisms are incorporated to form a dual-attention module. This module establishes correlations between different modality features, extracting valuable information from each modality and uncovering complementary information across modalities. Extensive experiments validate the method's effectiveness and state-of-the-art performance. Compared to conventional recurrent neural network algorithms, the model achieves recognition accuracies exceeding 95% under Top-1 and Top-5 metrics for action recognition, with a recognition speed of 0.04 s per video. The study demonstrates that the model can effectively process and analyze multimodal temporal information, including athlete movements, positional relationships on the court, and ball trajectories. Consequently, precise classification and description of volleyball training videos are achieved. This advancement significantly enhances the efficiency of coaches and athletes in volleyball training and provides valuable insights for broader sports video analysis research.
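The modality-fusion idea behind the dual-attention module can be illustrated with a minimal softmax-weighted fusion over per-modality feature vectors. All features and raw scores below are toy values; the actual module additionally applies spatial and channel attention and a BiLSTM over time.

```python
# Hedged sketch of softmax-weighted fusion across modality features.
import math

def attention_fuse(modal_feats, scores):
    """modal_feats: one feature vector per modality; scores: one raw score each."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]          # softmax over modalities
    dim = len(modal_feats[0])
    fused = [sum(w * f[j] for w, f in zip(weights, modal_feats))
             for j in range(dim)]
    return fused, weights

# Two toy modalities (e.g. player position vs. ball trajectory) with equal scores.
fused, weights = attention_fuse([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
```

With equal raw scores each modality receives weight 0.5; learned scores would let the model emphasize whichever modality carries the complementary information for a given action.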

  • Article type: Journal Article
    UNASSIGNED: Timely and accurate detection of Autism Spectrum Disorder (ASD) is essential for early intervention and improved patient outcomes. This study aims to harness the power of machine learning (ML) techniques to improve ASD detection by incorporating temporal eye-tracking data. We developed a novel ML model to leverage eye scan paths, sequences of distances of eye movement, and a sequence of fixation durations, enhancing the temporal aspect of the analysis for more effective ASD identification.
    UNASSIGNED: We utilized a dataset of eye-tracking data without augmentation to train our ML model, which consists of a CNN-GRU-ANN architecture. The model was trained using gaze maps, the sequences of distances between eye fixations, and durations of fixations and saccades. Additionally, we employed a validation dataset to assess the model's performance and compare it with other works.
    UNASSIGNED: Our ML model demonstrated superior performance in ASD detection compared to the VGG-16 model. By incorporating temporal information from eye-tracking data, our model achieved higher accuracy, precision, and recall. The novel addition of sequence-based features allowed our model to effectively distinguish between ASD and typically developing individuals, achieving an impressive precision value of 93.10% on the validation dataset.
    UNASSIGNED: This study presents an ML-based approach to ASD detection by utilizing machine learning techniques and incorporating temporal eye-tracking data. Our findings highlight the potential of temporal analysis for improved ASD detection and provide a promising direction for further advancements in the field of eye-tracking-based diagnosis and intervention for neurodevelopmental disorders.
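The sequence-based features the study highlights (inter-fixation distances and fixation durations) are straightforward to derive from raw fixations. A small sketch with made-up fixation coordinates; the study feeds such sequences into a CNN-GRU-ANN model.

```python
# Derive temporal sequence features from a list of eye fixations.
import math

def sequence_features(fixations):
    """fixations: list of (x, y, duration_ms) tuples, in temporal order."""
    distances = [math.dist(a[:2], b[:2])
                 for a, b in zip(fixations, fixations[1:])]
    durations = [f[2] for f in fixations]
    return distances, durations

fixations = [(0, 0, 180), (3, 4, 220), (6, 8, 150)]
dists, durs = sequence_features(fixations)
```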

  • Article type: Journal Article
    BACKGROUND: Compound-protein interaction (CPI) prediction plays a crucial role in drug discovery and drug repositioning. Early researchers relied on time-consuming and labor-intensive wet laboratory experiments. However, the advent of deep learning has significantly accelerated this progress. Most existing deep learning methods utilize deep neural networks to extract compound features from sequences and graphs, either separately or in combination. Our team's previous research has demonstrated that compound images contain valuable information that can be leveraged for the CPI task. However, there is a scarcity of multimodal methods that effectively combine sequence and image representations of compounds in CPI. Currently, the use of text-image pairs for contrastive language-image pre-training is a popular approach in the multimodal field. Further research is needed to explore how the integration of sequence and image representations can enhance the accuracy of the CPI task.
    RESULTS: This paper presents a novel method called MMCL-CPI, which encompasses two key highlights: 1) Firstly, we propose extracting compound features from two modalities: one-dimensional SMILES and two-dimensional images. This approach enables us to capture both sequence and spatial features, enhancing the prediction accuracy for CPI. Based on this, we design a novel multimodal model. 2) Secondly, we introduce a multimodal pre-training strategy that leverages contrastive learning on a large-scale unlabeled dataset to establish the correspondence between SMILES strings and compound images. This pre-training approach significantly improves compound feature representations for the downstream CPI task. Our method has shown competitive results on multiple datasets.
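The contrastive pre-training objective in highlight 2 can be illustrated with an InfoNCE-style loss over paired SMILES and image embeddings: matched pairs are pulled together, mismatched pairs pushed apart. The embeddings and temperature below are toy values, and MMCL-CPI's actual encoders and loss details may differ.

```python
# Hedged InfoNCE-style contrastive loss sketch for SMILES-image pairs.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def info_nce(smiles_emb, image_emb, temperature=0.1):
    """Mean -log softmax probability of matching each SMILES to its own image."""
    loss = 0.0
    for i, s in enumerate(smiles_emb):
        logits = [cosine(s, v) / temperature for v in image_emb]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        loss += log_denom - logits[i]   # cross-entropy at the true pairing
    return loss / len(smiles_emb)

# Aligned pairs should yield a much lower loss than mismatched pairs.
aligned = info_nce([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
misaligned = info_nce([[1.0, 0.0], [0.0, 1.0]], [[0.0, 1.0], [1.0, 0.0]])
```

Minimizing this loss on unlabeled pairs is what establishes the SMILES-image correspondence before fine-tuning on the CPI task.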

  • Article type: Journal Article
    Despite the advent of new diagnostics, drugs and regimens, multi-drug resistant pulmonary tuberculosis (MDR-PTB) remains a global health threat. It has a long treatment cycle, low cure rate and heavy disease burden. Factors such as demographics, disease characteristics, lung imaging, biomarkers, therapeutic schedule and adherence to medications are associated with MDR-PTB prognosis. However, thus far, the majority of existing studies have focused on predicting treatment outcomes through static single-scale or low-dimensional information. Hence, multi-modal deep learning based on dynamic, multi-dimensional data can provide a deeper understanding of personalized treatment plans to aid in the clinical management of patients.

  • Article type: Journal Article
    CONCLUSIONS: One of the first steps in single-cell omics data analysis is visualization, which allows researchers to see how well-separated cell-types are from each other. When visualizing multiple datasets at once, data integration/batch correction methods are used to merge the datasets. While needed for downstream analyses, these methods modify the feature space (e.g. gene expression) or PCA space in order to mix cell-types between batches as well as possible. This obscures sample-specific features and breaks down local embedding structures that can be seen when a sample is embedded alone. Therefore, to improve visual comparisons between large numbers of samples (e.g., multiple patients, omic modalities, different time points), we introduce Compound-SNE, which performs what we term a soft alignment of samples in embedding space. We show that Compound-SNE is able to align cell-types in embedding space across samples, while preserving local embedding structures from when samples are embedded independently.
    AVAILABILITY: Python code for Compound-SNE is available for download at https://github.com/HaghverdiLab/Compound-SNE.
    SUPPLEMENTARY INFORMATION: Available online; provides algorithmic details and additional tests.

  • Article type: Journal Article
    UNASSIGNED: Schizophrenia (SZ) is a psychiatric condition that adversely affects an individual's cognitive, emotional, and behavioral aspects. The etiology of SZ, although extensively studied, remains unclear, as multiple factors come together to contribute toward its development. There is a consistent body of evidence documenting the presence of structural and functional deviations in the brains of individuals with SZ. Moreover, the hereditary aspect of SZ is supported by the significant involvement of genomics markers. Therefore, the need to investigate SZ from a multi-modal perspective and develop approaches for improved detection arises.
    UNASSIGNED: Our proposed method employed a deep learning framework combining features from structural magnetic resonance imaging (sMRI), functional magnetic resonance imaging (fMRI), and genetic markers such as single nucleotide polymorphism (SNP). For sMRI, we used a pre-trained DenseNet to extract the morphological features. To identify the most relevant functional connections in fMRI and SNPs linked to SZ, we applied a 1-dimensional convolutional neural network (CNN) followed by layerwise relevance propagation (LRP). Finally, we concatenated these obtained features across modalities and fed them to the extreme gradient boosting (XGBoost) tree-based classifier to classify SZ from healthy control (HC).
    UNASSIGNED: Experimental evaluation on a clinical dataset demonstrated that, compared to the outcomes obtained from each modality individually, our proposed multi-modal approach performed classification of SZ individuals from HC with an improved accuracy of 79.01%.
    UNASSIGNED: We proposed a deep learning-based framework that selects multi-modal (sMRI, fMRI and genetic) features efficiently and fuses them to obtain improved classification scores. Additionally, by using Explainable AI (XAI), we were able to pinpoint and validate significant functional network connections and SNPs that contributed the most toward SZ classification, providing necessary interpretation behind our findings.
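The late-fusion step described above, concatenating per-modality features before classification, is simple to sketch. Dimensions and values are invented; the paper feeds the concatenated vector to an XGBoost classifier.

```python
# Minimal sketch of cross-modality feature concatenation before classification.

def fuse_modalities(smri_feats, fmri_feats, snp_feats):
    """Concatenate per-modality feature vectors into one multimodal vector."""
    return list(smri_feats) + list(fmri_feats) + list(snp_feats)

# Toy features: 2 sMRI morphological, 3 fMRI connectivity, 3 SNP indicators.
features = fuse_modalities([0.2, 0.7], [1.3, 0.1, 0.5], [0, 1, 1])
```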

  • Article type: Journal Article
    Microorganisms can play a key role in selenium (Se) bioremediation and the fabrication of Se-based nanomaterials by reducing toxic forms (Se(VI) and Se(IV)) into Se(0). In recent years, omics have become a useful tool in understanding the metabolic pathways involved in the reduction process. This paper aims to elucidate the specific molecular mechanisms involved in Se(VI) reduction by the bacterium Stenotrophomonas bentonitica. Both cytoplasmic and membrane fractions were able to reduce Se(VI) to Se(0) nanoparticles (NPs) with different morphologies (nanospheres and nanorods) and allotropes (amorphous, monoclinic, and trigonal). Proteomic analyses indicated an adaptive response against Se(VI) through the alteration of several metabolic pathways including those related to energy acquisition, synthesis of proteins and nucleic acids, and transport systems. Whilst the thioredoxin system and the Painter reactions were identified to play a crucial role in Se reduction, flagellin may also be involved in the allotropic transformation of Se. These findings suggest a multi-modal reduction mechanism is involved, providing new insights for developing novel strategies in bioremediation and nanoparticle synthesis for the recovery of critical materials within the concept of circular economy.

  • Article type: Journal Article
    UNASSIGNED: With the increased reliance on multi-omics data for bulk and single cell analyses, the availability of robust approaches to perform unsupervised analysis for clustering, visualization, and feature selection is imperative. Joint dimensionality reduction methods can be applied to multi-omics datasets to derive a global sample embedding analogous to single-omic techniques such as Principal Components Analysis (PCA). Multiple co-inertia analysis (MCIA) is a method for joint dimensionality reduction that maximizes the covariance between block- and global-level embeddings. Current implementations of MCIA are not optimized for large datasets, such as those arising from single cell studies, and lack capabilities with respect to embedding new data.
    UNASSIGNED: We introduce nipalsMCIA, an MCIA implementation that solves the objective function using an extension to Non-linear Iterative Partial Least Squares (NIPALS), and shows significant speed-up over earlier implementations that rely on eigendecompositions for single cell multi-omics data. It also removes the dependence on an eigendecomposition for calculating the variance explained, and allows users to perform out-of-sample embedding for new data. nipalsMCIA provides users with a variety of pre-processing and parameter options, as well as ease of functionality for downstream analysis of single-omic and global-embedding factors.
    UNASSIGNED: nipalsMCIA is available as a BioConductor package at https://bioconductor.org/packages/release/bioc/html/nipalsMCIA.html, and includes detailed documentation and application vignettes. Supplementary Materials are available online.
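The NIPALS iteration that nipalsMCIA builds on finds a leading component by alternating between loadings and scores, avoiding a full eigendecomposition. A pure-Python sketch on a toy single-block, mean-centered matrix; the package itself handles multi-block multi-omics data and deflation across components.

```python
# Compact NIPALS sketch: leading principal component without eigendecomposition.

def nipals_first_pc(X, n_iter=100, tol=1e-12):
    n, p = len(X), len(X[0])
    t = [row[0] for row in X]                 # initialize scores from column 0
    for _ in range(n_iter):
        # Loadings: w = X^T t, normalized to unit length.
        w = [sum(X[i][j] * t[i] for i in range(n)) for j in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        w = [x / norm for x in w]
        # Scores: t = X w.
        t_new = [sum(X[i][j] * w[j] for j in range(p)) for i in range(n)]
        converged = sum((a - b) ** 2 for a, b in zip(t, t_new)) < tol
        t = t_new
        if converged:
            break
    return t, w

# Toy centered data whose variance lies along the direction [1, 1] / sqrt(2).
X = [[-2.0, -2.0], [-1.0, -1.0], [1.0, 1.0], [2.0, 2.0]]
scores, loadings = nipals_first_pc(X)
```

On this collinear toy matrix the loadings converge to the dominant direction [1, 1] / sqrt(2), matching what PCA would return for the first component.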
