Multi-modal

  • Article type: Journal Article
    This study aims to explore methods for classifying and describing volleyball training videos using deep learning techniques. By developing an innovative model that integrates Bi-directional Long Short-Term Memory (BiLSTM) and attention mechanisms, referred to as BiLSTM-Multimodal Attention Fusion Temporal Classification (BiLSTM-MAFTC), the study enhances the accuracy and efficiency of volleyball video content analysis. Initially, the model encodes features from various modalities into feature vectors, capturing different types of information such as positional and modal data. The BiLSTM network is then used to model multi-modal temporal information, while spatial and channel attention mechanisms are incorporated to form a dual-attention module. This module establishes correlations between different modality features, extracting valuable information from each modality and uncovering complementary information across modalities. Extensive experiments validate the method's effectiveness and state-of-the-art performance. Compared to conventional recurrent neural network algorithms, the model achieves recognition accuracies exceeding 95% under Top-1 and Top-5 metrics for action recognition, with a recognition speed of 0.04 s per video. The study demonstrates that the model can effectively process and analyze multimodal temporal information, including athlete movements, positional relationships on the court, and ball trajectories. Consequently, precise classification and description of volleyball training videos are achieved. This advancement significantly enhances the efficiency of coaches and athletes in volleyball training and provides valuable insights for broader sports video analysis research.
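    The dual-attention design described above can be outlined compactly. The following is a minimal, illustrative PyTorch sketch (not the authors' released code); the feature dimension, the assumption that the modalities are already fused into one feature sequence, and the attention-pooling choice are placeholders for demonstration.

```python
# Illustrative sketch: a BiLSTM over a fused multimodal feature sequence with a
# dual-attention module (channel re-weighting + temporal attention pooling).
import torch
import torch.nn as nn

class DualAttentionBiLSTM(nn.Module):
    def __init__(self, feat_dim=256, hidden=128, num_classes=10):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        d = hidden * 2
        # Channel attention: weight each feature channel of the BiLSTM output.
        self.channel_att = nn.Sequential(nn.Linear(d, d // 4), nn.ReLU(),
                                         nn.Linear(d // 4, d), nn.Sigmoid())
        # Temporal ("spatial") attention: weight each time step.
        self.temporal_att = nn.Linear(d, 1)
        self.classifier = nn.Linear(d, num_classes)

    def forward(self, x):                  # x: (batch, time, feat_dim), pre-fused modalities
        h, _ = self.bilstm(x)              # (batch, time, 2*hidden)
        h = h * self.channel_att(h.mean(dim=1)).unsqueeze(1)   # channel re-weighting
        w = torch.softmax(self.temporal_att(h), dim=1)         # (batch, time, 1)
        pooled = (w * h).sum(dim=1)                            # attention pooling over time
        return self.classifier(pooled)

# Example: 8 clips, 32 time steps, 256-d fused features (pose + position + ball).
logits = DualAttentionBiLSTM()(torch.randn(8, 32, 256))
```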

  • Article type: Journal Article
    BACKGROUND: Timely and accurate detection of Autism Spectrum Disorder (ASD) is essential for early intervention and improved patient outcomes. This study aims to harness the power of machine learning (ML) techniques to improve ASD detection by incorporating temporal eye-tracking data. We developed a novel ML model to leverage eye scan paths, sequences of eye-movement distances, and sequences of fixation durations, enhancing the temporal aspect of the analysis for more effective ASD identification.
    METHODS: We utilized a dataset of eye-tracking data without augmentation to train our ML model, which consists of a CNN-GRU-ANN architecture. The model was trained using gaze maps, the sequences of distances between eye fixations, and durations of fixations and saccades. Additionally, we employed a validation dataset to assess the model's performance and compare it with other works.
    RESULTS: Our ML model demonstrated superior performance in ASD detection compared to the VGG-16 model. By incorporating temporal information from eye-tracking data, our model achieved higher accuracy, precision, and recall. The novel addition of sequence-based features allowed our model to effectively distinguish between ASD and typically developing individuals, achieving an impressive precision value of 93.10% on the validation dataset.
    CONCLUSIONS: This study presents an ML-based approach to ASD detection by utilizing machine learning techniques and incorporating temporal eye-tracking data. Our findings highlight the potential of temporal analysis for improved ASD detection and provide a promising direction for further advancements in the field of eye-tracking-based diagnosis and intervention for neurodevelopmental disorders.
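    A minimal sketch of a CNN-GRU-ANN-style fusion is shown below, assuming a single-channel gaze map per subject and a per-fixation sequence of (distance, duration) pairs; layer sizes and input shapes are placeholders, not the published architecture.

```python
# Illustrative sketch: CNN branch for gaze maps, GRU branch for fixation
# distance/duration sequences, fused by a small fully connected (ANN) head
# for binary ASD vs. typically-developing classification.
import torch
import torch.nn as nn

class CnnGruAnn(nn.Module):
    def __init__(self, seq_feat=2, gru_hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                       # gaze-map branch (1x64x64 assumed)
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten())
        self.gru = nn.GRU(seq_feat, gru_hidden, batch_first=True)  # (distance, duration) per step
        self.head = nn.Sequential(nn.Linear(32 + gru_hidden, 32), nn.ReLU(),
                                  nn.Linear(32, 1))

    def forward(self, gaze_map, seq):
        img_feat = self.cnn(gaze_map)                   # (batch, 32)
        _, h = self.gru(seq)                            # h: (1, batch, gru_hidden)
        return self.head(torch.cat([img_feat, h[-1]], dim=1))  # one logit per subject

model = CnnGruAnn()
logit = model(torch.randn(4, 1, 64, 64), torch.randn(4, 50, 2))  # 4 subjects, 50 fixations each
```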

  • Article type: Journal Article
    BACKGROUND: Compound-protein interaction (CPI) prediction plays a crucial role in drug discovery and drug repositioning. Early researchers relied on time-consuming and labor-intensive wet laboratory experiments. However, the advent of deep learning has significantly accelerated this progress. Most existing deep learning methods utilize deep neural networks to extract compound features from sequences and graphs, either separately or in combination. Our team's previous research has demonstrated that compound images contain valuable information that can be leveraged for the CPI task. However, there is a scarcity of multimodal methods that effectively combine sequence and image representations of compounds in CPI. Currently, the use of text-image pairs for contrastive language-image pre-training is a popular approach in the multimodal field. Further research is needed to explore how the integration of sequence and image representations can enhance the accuracy of the CPI task.
    RESULTS: This paper presents a novel method called MMCL-CPI, which encompasses two key highlights: 1) Firstly, we propose extracting compound features from two modalities: one-dimensional SMILES and two-dimensional images. This approach enables us to capture both sequence and spatial features, enhancing the prediction accuracy for CPI. Based on this, we design a novel multimodal model. 2) Secondly, we introduce a multimodal pre-training strategy that leverages contrastive learning on a large-scale unlabeled dataset to establish the correspondence between SMILES strings and compound images. This pre-training approach significantly improves compound feature representations for the downstream CPI task. Our method has shown competitive results on multiple datasets.
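    The pre-training step can be illustrated with a standard symmetric InfoNCE objective. The sketch below covers only the contrastive loss between already-computed SMILES and image embeddings; the encoders, batch construction, and temperature value are assumptions, not the MMCL-CPI implementation.

```python
# Illustrative sketch: symmetric InfoNCE-style contrastive loss that pulls the
# embedding of a SMILES string toward the embedding of the same compound's image.
import torch
import torch.nn.functional as F

def contrastive_loss(smiles_emb, image_emb, temperature=0.07):
    """smiles_emb, image_emb: (batch, dim) embeddings of the same batch of compounds."""
    s = F.normalize(smiles_emb, dim=1)
    v = F.normalize(image_emb, dim=1)
    logits = s @ v.t() / temperature           # pairwise cosine similarities
    targets = torch.arange(s.size(0))          # i-th SMILES matches i-th image
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Example with random placeholder embeddings for a batch of 16 compounds.
loss = contrastive_loss(torch.randn(16, 128), torch.randn(16, 128))
```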

  • Article type: Journal Article
    Despite the advent of new diagnostics, drugs and regimens, multi-drug resistant pulmonary tuberculosis (MDR-PTB) remains a global health threat. It has a long treatment cycle, low cure rate and heavy disease burden. Factors such as demographics, disease characteristics, lung imaging, biomarkers, therapeutic schedule and adherence to medications are associated with MDR-PTB prognosis. However, thus far, the majority of existing studies have focused on predicting treatment outcomes through static single-scale or low-dimensional information. Hence, multi-modal deep learning based on dynamic, multi-dimensional data can provide a deeper understanding of personalized treatment plans to aid in the clinical management of patients.

  • Article type: Journal Article
    MOTIVATION: One of the first steps in single-cell omics data analysis is visualization, which allows researchers to see how well-separated cell-types are from each other. When visualizing multiple datasets at once, data integration/batch correction methods are used to merge the datasets. While needed for downstream analyses, these methods modify the feature space (e.g. gene expression) or PCA space in order to mix cell-types between batches as well as possible. This obscures sample-specific features and breaks down local embedding structures that can be seen when a sample is embedded alone. Therefore, to improve visual comparisons between large numbers of samples (e.g., multiple patients, omic modalities, different time points), we introduce Compound-SNE, which performs what we term a soft alignment of samples in embedding space. We show that Compound-SNE is able to align cell-types in embedding space across samples, while preserving local embedding structures from when samples are embedded independently.
    AVAILABILITY: Python code for Compound-SNE is available for download at https://github.com/HaghverdiLab/Compound-SNE.
    SUPPLEMENTARY INFORMATION: Available online. Provides algorithmic details and additional tests.
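    For readers unfamiliar with the alignment problem, the sketch below shows a naive baseline, not the Compound-SNE algorithm: embed each sample independently with t-SNE and then rigidly align the embeddings through shared cell-type centroids. Compound-SNE instead performs a soft alignment during embedding; all names and shapes here are placeholders.

```python
# Illustrative baseline: independent per-sample t-SNE embeddings followed by a
# rigid (orthogonal Procrustes) alignment of shared cell-type centroids.
import numpy as np
from sklearn.manifold import TSNE
from scipy.linalg import orthogonal_procrustes

def embed(sample_x):                        # sample_x: (cells, features)
    return TSNE(n_components=2, random_state=0).fit_transform(sample_x)

def centroid_align(emb_ref, labels_ref, emb_qry, labels_qry):
    """Rotate/reflect emb_qry so its cell-type centroids match emb_ref.
    labels_* are NumPy arrays of cell-type strings, one per cell."""
    shared = sorted(set(labels_ref) & set(labels_qry))
    c_ref = np.stack([emb_ref[labels_ref == t].mean(0) for t in shared])
    c_qry = np.stack([emb_qry[labels_qry == t].mean(0) for t in shared])
    rotation, _ = orthogonal_procrustes(c_qry - c_qry.mean(0), c_ref - c_ref.mean(0))
    return (emb_qry - c_qry.mean(0)) @ rotation + c_ref.mean(0)
```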

  • Article type: Journal Article
    BACKGROUND: Schizophrenia (SZ) is a psychiatric condition that adversely affects an individual's cognitive, emotional, and behavioral aspects. The etiology of SZ, although extensively studied, remains unclear, as multiple factors come together to contribute toward its development. There is a consistent body of evidence documenting the presence of structural and functional deviations in the brains of individuals with SZ. Moreover, the hereditary aspect of SZ is supported by the significant involvement of genomics markers. Therefore, the need arises to investigate SZ from a multi-modal perspective and develop approaches for improved detection.
    METHODS: Our proposed method employed a deep learning framework combining features from structural magnetic resonance imaging (sMRI), functional magnetic resonance imaging (fMRI), and genetic markers such as single nucleotide polymorphisms (SNPs). For sMRI, we used a pre-trained DenseNet to extract the morphological features. To identify the most relevant functional connections in fMRI and SNPs linked to SZ, we applied a 1-dimensional convolutional neural network (CNN) followed by layerwise relevance propagation (LRP). Finally, we concatenated these obtained features across modalities and fed them to the extreme gradient boosting (XGBoost) tree-based classifier to classify SZ from healthy controls (HC).
    RESULTS: Experimental evaluation on a clinical dataset demonstrated that, compared to the outcomes obtained from each modality individually, our proposed multi-modal approach performed classification of SZ individuals from HC with an improved accuracy of 79.01%.
    CONCLUSIONS: We proposed a deep learning-based framework that selects multi-modal (sMRI, fMRI and genetic) features efficiently and fuses them to obtain improved classification scores. Additionally, by using Explainable AI (XAI), we were able to pinpoint and validate significant functional network connections and SNPs that contributed the most toward SZ classification, providing necessary interpretation behind our findings.
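    The late-fusion and classification step can be sketched as follows. Upstream feature extraction (DenseNet on sMRI, 1D CNN + LRP selection on fMRI and SNPs) is assumed to have already produced per-subject feature vectors; array sizes and hyperparameters are placeholders, not the study's settings.

```python
# Illustrative sketch: concatenate per-modality feature vectors and classify
# SZ vs. healthy control with an XGBoost tree ensemble.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
smri_feat = rng.normal(size=(200, 64))    # DenseNet morphological features (assumed dim)
fmri_feat = rng.normal(size=(200, 32))    # LRP-selected functional connections (assumed dim)
snp_feat  = rng.normal(size=(200, 16))    # LRP-selected SNP features (assumed dim)
y = rng.integers(0, 2, size=200)          # 1 = SZ, 0 = healthy control (synthetic labels)

X = np.concatenate([smri_feat, fmri_feat, snp_feat], axis=1)   # cross-modal concatenation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```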

  • Article type: Journal Article
    Microorganisms can play a key role in selenium (Se) bioremediation and the fabrication of Se-based nanomaterials by reducing toxic forms (Se(VI) and Se(IV)) into Se(0). In recent years, omics have become a useful tool in understanding the metabolic pathways involved in the reduction process. This paper aims to elucidate the specific molecular mechanisms involved in Se(VI) reduction by the bacterium Stenotrophomonas bentonitica. Both cytoplasmic and membrane fractions were able to reduce Se(VI) to Se(0) nanoparticles (NPs) with different morphologies (nanospheres and nanorods) and allotropes (amorphous, monoclinic, and trigonal). Proteomic analyses indicated an adaptive response against Se(VI) through the alteration of several metabolic pathways including those related to energy acquisition, synthesis of proteins and nucleic acids, and transport systems. Whilst the thioredoxin system and the Painter reactions were identified to play a crucial role in Se reduction, flagellin may also be involved in the allotropic transformation of Se. These findings suggest a multi-modal reduction mechanism is involved, providing new insights for developing novel strategies in bioremediation and nanoparticle synthesis for the recovery of critical materials within the concept of circular economy.

  • Article type: Journal Article
    MOTIVATION: With the increased reliance on multi-omics data for bulk and single-cell analyses, the availability of robust approaches to perform unsupervised analysis for clustering, visualization, and feature selection is imperative. Joint dimensionality reduction methods can be applied to multi-omics datasets to derive a global sample embedding analogous to single-omic techniques such as Principal Components Analysis (PCA). Multiple co-inertia analysis (MCIA) is a method for joint dimensionality reduction that maximizes the covariance between block- and global-level embeddings. Current implementations of MCIA are not optimized for large datasets such as those arising from single-cell studies, and lack capabilities with respect to embedding new data.
    RESULTS: We introduce nipalsMCIA, an MCIA implementation that solves the objective function using an extension to Non-linear Iterative Partial Least Squares (NIPALS), and shows significant speed-up over earlier implementations that rely on eigendecompositions for single-cell multi-omics data. It also removes the dependence on an eigendecomposition for calculating the variance explained, and allows users to perform out-of-sample embedding for new data. nipalsMCIA provides users with a variety of pre-processing and parameter options, as well as ease of functionality for downstream analysis of single-omic and global-embedding factors.
    AVAILABILITY: nipalsMCIA is available as a Bioconductor package at https://bioconductor.org/packages/release/bioc/html/nipalsMCIA.html, and includes detailed documentation and application vignettes. Supplementary Materials are available online.
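    As background for the algorithmic idea, the sketch below shows the classic NIPALS iteration for the first principal component in plain NumPy. It illustrates the iterative projection style that nipalsMCIA extends to multiple co-inertia analysis (further components follow by deflating the matrix); it is not the package's R/Bioconductor code.

```python
# Illustrative sketch: NIPALS iteration for the first principal component.
import numpy as np

def nipals_pc(X, n_iter=500, tol=1e-9):
    """Return the first score vector t and loading vector p of column-centered X."""
    X = X - X.mean(axis=0)
    t = X[:, [0]]                              # initialize scores with the first column
    for _ in range(n_iter):
        p = X.T @ t / (t.T @ t)                # project onto scores -> loadings
        p = p / np.linalg.norm(p)              # normalize loadings
        t_new = X @ p                          # project onto loadings -> scores
        if np.linalg.norm(t_new - t) < tol:    # converged
            t = t_new
            break
        t = t_new
    return t.ravel(), p.ravel()

scores, loadings = nipals_pc(np.random.default_rng(0).normal(size=(100, 20)))
```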

  • Article type: Journal Article
    BACKGROUND: Existing criteria for predicting patient survival from immunotherapy are primarily centered on the PD-L1 status of patients. We tested the hypothesis that noninvasively captured baseline whole-lung radiomics features from CT images and baseline clinical parameters, combined with advanced machine learning approaches, can help to build models of patient survival that compare favorably with PD-L1 status for predicting 'less-than-median-survival risk' in the metastatic NSCLC setting for patients on durvalumab. With a total of 1062 patients, inclusive of model training and validation, this is the largest such study yet.
    METHODS: To ensure a sufficient sample size, we combined data from the treatment arms of three metastatic NSCLC studies. About 80% of this data was used for model training, and the remainder was held out for validation. We first trained two independent models: Model-C, trained to predict survival using clinical data, and Model-R, trained to predict survival using whole-lung radiomics features. Finally, we created Model-C+R, which leveraged both clinical and radiomics features.
    RESULTS: The classification accuracy (for median survival) of Model-C, Model-R, and Model-C+R was 63%, 55%, and 68%, respectively. Sensitivity analysis of survival prediction across different training and validation cohorts showed concordance indices ([95th percentile]) of 0.64 ([0.63, 0.65]), 0.60 ([0.59, 0.60]), and 0.66 ([0.65, 0.67]), respectively. We additionally evaluated the generalization of these models on a comparable cohort of 144 patients from an independent study, demonstrating classification accuracies of 65%, 62%, and 72%, respectively.
    CONCLUSIONS: Machine learning models combining baseline whole-lung CT radiomics and clinical features may be a useful tool for patient selection in immunotherapy. Further validation through prospective studies is needed.
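    The modeling comparison can be outlined schematically: fit a clinical-only model, a radiomics-only model, and a combined model for the binary below-median-survival label. The classifier choice, feature counts, and synthetic data below are assumptions for illustration, not the study's pipeline.

```python
# Illustrative sketch: compare clinical-only, radiomics-only, and combined
# feature sets for predicting below-median survival.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
clinical  = rng.normal(size=(1000, 12))     # baseline clinical parameters (synthetic)
radiomics = rng.normal(size=(1000, 100))    # whole-lung CT radiomics features (synthetic)
y = rng.integers(0, 2, size=1000)           # 1 = below-median survival (synthetic labels)

def fit_and_score(X, y):
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
    return clf.score(X_va, y_va)             # held-out classification accuracy

acc_c  = fit_and_score(clinical, y)                                   # "Model-C"
acc_r  = fit_and_score(radiomics, y)                                  # "Model-R"
acc_cr = fit_and_score(np.concatenate([clinical, radiomics], 1), y)   # "Model-C+R"
```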

  • Article type: Journal Article
    OBJECTIVE: To identify significant relationships between quantitative cytometric tissue features and quantitative MR (qMRI) values intratumorally in preclinical undifferentiated pleomorphic sarcomas (UPS).
    METHODS: In a prospective study of genetically engineered mouse models of UPS, we registered imaging libraries consisting of matched multi-contrast in vivo MRI, three-dimensional (3D) multi-contrast high-resolution ex vivo MR histology (MRH), and two-dimensional (2D) tissue slides. From digitized histology, we generated quantitative cytometric feature maps from whole-slide automated nuclear segmentation. We automatically segmented intratumoral regions of distinct qMRI values and measured corresponding cytometric features. Linear regression analysis was performed to compare intratumoral qMRI and tissue cytometric features, and results were corrected for multiple comparisons. Linear correlations between qMRI and cytometric features with p values of <0.05 after correction for multiple comparisons were considered significant.
    RESULTS: Three features correlated with the ex vivo apparent diffusion coefficient (ADC), and no features correlated with in vivo ADC. Six features demonstrated significant linear relationships with ex vivo T2*, and fifteen features correlated significantly with in vivo T2*. In both cases, nuclear Haralick texture features were the most prevalent type of feature correlated with T2*. A small group of nuclear topology features also correlated with one or both T2* contrasts, and positive trends were seen between T2* and nuclear size metrics.
    CONCLUSIONS: Registered multi-parametric imaging datasets can identify quantitative tissue features which contribute to the UPS MR signal. T2* may provide quantitative information about nuclear morphology and pleomorphism, adding histological insights to the radiological interpretation of UPS.
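    The per-feature regression with multiple-comparison correction can be sketched as follows, using synthetic placeholder data; the specific correction method shown (Benjamini-Hochberg FDR) is an assumption, as the abstract does not state which correction was applied.

```python
# Illustrative sketch: regress each cytometric feature against intratumoral qMRI
# values (e.g., T2*) and correct the resulting p-values for multiple comparisons.
import numpy as np
from scipy.stats import linregress
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
t2star = rng.normal(size=80)                 # one qMRI value per intratumoral region (synthetic)
features = rng.normal(size=(80, 25))         # 25 cytometric features per region (synthetic)

pvals = [linregress(t2star, features[:, j]).pvalue for j in range(features.shape[1])]
reject, p_corrected, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("features significant after correction:", int(reject.sum()))
```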