multimodal

  • Article type: Journal Article
    BACKGROUND: The status of BRCA1/2 genes plays a crucial role in the treatment decision-making process for multiple cancer types. However, due to high costs and limited resources, the demand for BRCA1/2 genetic testing among patients is currently unmet. Notably, not all patients with BRCA1/2 mutations achieve favorable outcomes with poly (ADP-ribose) polymerase inhibitors (PARPi), indicating the necessity for risk stratification. In this study, we aimed to develop and validate a multimodal model for predicting BRCA1/2 gene status and prognosis with PARPi treatment.
    METHODS: We included 1695 slides from 1417 patients with ovarian, breast, prostate, and pancreatic cancers across three independent cohorts. Using a self-attention mechanism, we constructed a multi-instance attention model (MIAM) to detect BRCA1/2 gene status from hematoxylin and eosin (H&E) pathological images. We further combined tissue features from the MIAM model, cell features, and clinical factors (the MIAM-C model) to predict BRCA1/2 mutations and progression-free survival (PFS) with PARPi therapy. Model performance was evaluated using the area under the curve (AUC) and Kaplan-Meier analysis. Morphological features contributing to MIAM-C were analyzed for interpretability.
    RESULTS: Across the four cancer types, MIAM-C outperformed the deep learning-based MIAM in identifying the BRCA1/2 genotype. Interpretability analysis revealed that high-attention regions included high-grade tumors and lymphocytic infiltration, which correlated with BRCA1/2 mutations. Notably, high lymphocyte ratios appeared characteristic of BRCA1/2 mutations. Furthermore, MIAM-C predicted PARPi therapy response (log-rank p < 0.05) and served as an independent prognostic factor for patients with BRCA1/2-mutant ovarian cancer (p < 0.05, hazard ratio: 0.4, 95% confidence interval: 0.16-0.99).
    CONCLUSIONS: The MIAM-C model accurately detected BRCA1/2 gene status and effectively stratified prognosis for patients with BRCA1/2 mutations.
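    The attention-based aggregation described in the Methods (a self-attention mechanism pooling H&E tile features into a slide-level BRCA1/2 prediction) can be illustrated with a gated attention multiple-instance pooling layer; a minimal PyTorch sketch is shown below. The dimensions, layer names, and classifier head are illustrative assumptions, not the authors' MIAM implementation.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Aggregate a bag of tile embeddings into one slide-level prediction
    using learned attention weights (gated-attention variant)."""
    def __init__(self, in_dim=512, hidden_dim=128, n_classes=2):
        super().__init__()
        self.attn_v = nn.Linear(in_dim, hidden_dim)   # tanh branch
        self.attn_u = nn.Linear(in_dim, hidden_dim)   # sigmoid gate
        self.attn_w = nn.Linear(hidden_dim, 1)        # scalar score per tile
        self.classifier = nn.Linear(in_dim, n_classes)

    def forward(self, tiles):                          # tiles: (n_tiles, in_dim)
        scores = self.attn_w(torch.tanh(self.attn_v(tiles)) *
                             torch.sigmoid(self.attn_u(tiles)))   # (n_tiles, 1)
        weights = torch.softmax(scores, dim=0)                    # sums to 1 over tiles
        slide_emb = (weights * tiles).sum(dim=0)                  # (in_dim,)
        return self.classifier(slide_emb), weights

# toy usage: 1000 tile embeddings from a frozen backbone
logits, attn = AttentionMILPooling()(torch.randn(1000, 512))
```

In this kind of model, the per-tile attention weights returned alongside the logits are what make interpretability maps (e.g., highlighting high-grade tumor or lymphocytic infiltration) possible.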

  • Article type: Journal Article
    BACKGROUND: Previous studies based on resting-state functional magnetic resonance imaging (rs-fMRI) and voxel-based morphometry (VBM) have demonstrated significant abnormalities in brain structure and resting-state functional brain activity in patients with early-onset schizophrenia (EOS) compared with healthy controls (HCs), and these alterations were closely related to the pathogenesis of EOS. However, previous studies suffer from the limitations of small sample sizes and high heterogeneity of results. Therefore, the present study aimed to effectively integrate previous studies to identify common and specific brain functional and structural abnormalities in patients with EOS.
    METHODS: The PubMed, Web of Science, Embase, Chinese National Knowledge Infrastructure (CNKI), and WanFang databases were systematically searched to identify publications on abnormalities in resting-state regional functional brain activity and gray matter volume (GMV) in patients with EOS. We then used the Seed-based d Mapping with Permutation of Subject Images (SDM-PSI) software to conduct whole-brain voxel-wise meta-analyses of the rs-fMRI and VBM studies separately, followed by multimodal overlap of the resulting maps to comprehensively identify brain structural and functional abnormalities in patients with EOS.
    RESULTS: A total of 27 original studies (28 datasets) were included in the present meta-analysis, comprising 12 studies (13 datasets) on resting-state functional brain activity (496 EOS patients, 395 HCs) and 15 studies (15 datasets) on GMV (458 EOS patients, 531 HCs). Overall, in the functional meta-analysis, patients with EOS showed significantly increased resting-state functional brain activity in the left middle frontal gyrus (extending to the triangular part of the left inferior frontal gyrus) and the right caudate nucleus. In the structural meta-analysis, patients with EOS showed significantly decreased GMV in the right superior temporal gyrus (extending to the right rolandic operculum), the right middle temporal gyrus, and the temporal pole (superior temporal gyrus).
    CONCLUSIONS: This meta-analysis revealed that several brain regions in EOS exhibit significant structural or functional abnormalities, such as the temporal gyri, prefrontal cortex, and striatum. These findings may help deepen our understanding of the underlying pathophysiological mechanisms of EOS and provide potential biomarkers for its diagnosis or treatment.
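    The multimodal overlap step is carried out within the SDM-PSI software, but conceptually it reduces to a voxel-wise conjunction of the thresholded functional and structural meta-analytic maps. The NumPy sketch below illustrates that idea only; the threshold, array shapes, and inputs are arbitrary assumptions and this is not the SDM-PSI procedure itself.

```python
import numpy as np

def conjunction_mask(func_z, struct_z, z_thresh=1.96):
    """Voxel-wise overlap of two thresholded statistical maps:
    keep voxels whose |z| exceeds the threshold in BOTH modalities."""
    return (np.abs(func_z) >= z_thresh) & (np.abs(struct_z) >= z_thresh)

# toy example on a small 3-D grid of z-values
rng = np.random.default_rng(0)
overlap = conjunction_mask(rng.normal(size=(4, 4, 4)), rng.normal(size=(4, 4, 4)))
print(int(overlap.sum()), "overlapping voxels")
```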

  • Article type: Journal Article
    Camellia oleifera is a crop of high economic value, yet it is particularly susceptible to various diseases and pests that significantly reduce its yield and quality. Consequently, the precise segmentation and classification of diseased Camellia leaves are vital for managing pests and diseases effectively. Deep learning exhibits significant advantages in the segmentation of plant diseases and pests, particularly in complex image processing and automated feature extraction. However, when employing single-modal models to segment Camellia oleifera diseases, three critical challenges arise: (A) lesions may closely resemble the colors of the complex background; (B) small sections of diseased leaves overlap; and (C) multiple diseases may be present on a single leaf. These factors considerably hinder segmentation accuracy. A novel multimodal model, the CNN-Transformer Dual U-shaped Network (CTDUNet), based on a CNN-Transformer architecture, has been proposed to integrate image and text information. This model first utilizes text data to address the shortcomings of single-modal image features, enhancing its ability to distinguish lesions from environmental characteristics, even under conditions where they closely resemble one another. Additionally, we introduce Coordinate Space Attention (CSA), which focuses on the positional relationships between targets, thereby improving the segmentation of overlapping leaf edges. Furthermore, cross-attention (CA) is employed to align image and text features effectively, preserving local information and enhancing the perception and differentiation of various diseases. The CTDUNet model was evaluated on a self-made multimodal dataset and compared against several models, including DeeplabV3+, UNet, PSPNet, Segformer, HrNet, and Language meets Vision Transformer (LViT). The experimental results demonstrate that CTDUNet achieved a mean Intersection over Union (mIoU) of 86.14%, surpassing both multimodal models and the best single-modal model by 3.91% and 5.84%, respectively. Additionally, CTDUNet exhibits a high balance in the multi-class segmentation of Camellia oleifera diseases and pests. These results indicate the successful application of fused image and text multimodal information to Camellia disease segmentation, achieving outstanding performance.
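    As a rough illustration of the cross-attention (CA) fusion described above, the sketch below lets image patch tokens attend to text tokens so that textual cues can help separate lesions from a similarly colored background. It is a generic PyTorch cross-attention block under assumed token dimensions, not the CTDUNet implementation (which also includes Coordinate Space Attention and a dual U-shaped structure).

```python
import torch
import torch.nn as nn

class ImageTextCrossAttention(nn.Module):
    """Image tokens query text tokens; the attended text context is added
    back to the image tokens as a residual fusion step."""
    def __init__(self, dim=256, n_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, txt_tokens):
        # img_tokens: (B, N_img, dim), txt_tokens: (B, N_txt, dim)
        attended, _ = self.cross_attn(query=img_tokens, key=txt_tokens, value=txt_tokens)
        return self.norm(img_tokens + attended)

# toy usage: 196 image patches and 32 text tokens per sample
fused = ImageTextCrossAttention()(torch.randn(2, 196, 256), torch.randn(2, 32, 256))
```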

  • Article type: Journal Article
    The brain-computer interface (BCI) is one of the most powerful tools in neuroscience and generally includes a recording system, a processor system, and a stimulation system. Optogenetics has the advantages of bidirectional regulation, high spatiotemporal resolution, and cell-specific regulation, which expands the application scenarios of BCIs. In recent years, optogenetic BCIs have become widely used in the laboratory with the development of materials and software. These systems have been designed to be more integrated, lightweight, biocompatible, and power efficient, as have wireless-transmission and chip-level embedded BCIs. The software is also constantly improving, with better real-time performance and accuracy and lower power consumption. On the other hand, as a cutting-edge technology spanning multidisciplinary fields including molecular biology, neuroscience, material engineering, and information processing, optogenetic BCIs have great application potential in neural decoding, enhancing brain function, and treating neural diseases. Here, we review the development and application of optogenetic BCIs. In the future, combined with other functional imaging techniques such as functional near-infrared spectroscopy (fNIRS) and functional magnetic resonance imaging (fMRI), optogenetic BCIs could modulate the function of specific circuits, facilitate neurological rehabilitation, assist perception, establish brain-to-brain interfaces, and be applied in a wider range of scenarios.

  • Article type: Journal Article
    This study aims to explore the efficacy of a hybrid deep learning and radiomics approach, supplemented with patient metadata, in the noninvasive dermoscopic imaging-based diagnosis of skin lesions. We analyzed dermoscopic images from the International Skin Imaging Collaboration (ISIC) dataset, spanning 2016-2020, encompassing a variety of skin lesions. Our approach integrates deep learning with a comprehensive radiomics analysis, utilizing a vast array of quantitative image features to precisely quantify skin lesion patterns. The dataset includes cases of three, four, and eight different skin lesion types. Our methodology was benchmarked against seven classification methods from the ISIC 2020 challenge and prior research using a binary decision framework. The proposed hybrid model demonstrated superior performance in distinguishing benign from malignant lesions, achieving area under the receiver operating characteristic curve (AUROC) scores of 99%, 95%, and 96%, and multiclass decoding AUROCs of 98.5%, 94.9%, and 96.4%, with sensitivities of 97.6%, 93.9%, and 96.0% and specificities of 98.4%, 96.7%, and 96.9% in the internal ISIC 2018 challenge, as well as in the external Jinan and Longhua datasets, respectively. Our findings suggest that the integration of radiomics and deep learning, utilizing dermoscopic images, effectively captures the heterogeneity and pattern expression of skin lesions.
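    The hybrid pipeline above fuses deep image features, radiomics features, and patient metadata before classification. A minimal sketch of such feature-level fusion with scikit-learn is given below; the feature dimensions, the logistic-regression classifier, and the random placeholder data are assumptions for illustration, not the study's actual model or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Placeholder per-lesion feature blocks (in practice these would come from a CNN,
# a radiomics extractor, and encoded patient metadata, respectively).
rng = np.random.default_rng(0)
n = 200
deep_feats, radiomic_feats, metadata = (rng.normal(size=(n, d)) for d in (512, 100, 5))
y = rng.integers(0, 2, size=n)                          # benign (0) vs malignant (1)

X = np.hstack([deep_feats, radiomic_feats, metadata])   # simple feature-level fusion
clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print("hold-out AUROC:", roc_auc_score(y[150:], clf.predict_proba(X[150:])[:, 1]))
```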

  • Article type: Journal Article
    Deep learning is playing an increasingly important role in the accurate prediction of molecular properties. Prior to being processed by a deep learning model, a molecule is typically represented in the form of a text or a graph. While some methods attempt to integrate these two forms of molecular representation, the misalignment of graph and text embeddings presents a significant challenge to fusing the two modalities. To solve this problem, we propose a method that aligns and fuses graph and text features in the embedding space using a contrastive loss and cross-attention. Additionally, we enhance the molecular representation by incorporating multi-granularity information of molecules at the levels of atoms, functional groups, and molecules. Extensive experiments show that our model outperforms state-of-the-art models in downstream tasks of molecular property prediction, achieving superior performance with less pretraining data. The source code and data are available at https://github.com/zzr624663649/multimodal_molecular_property.
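    The alignment objective described above (pulling matched graph and text embeddings together in a shared space) is commonly implemented as a symmetric InfoNCE-style contrastive loss. The sketch below shows such a loss in PyTorch; the temperature, embedding sizes, and function name are assumptions, not the repository's code.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(graph_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE: embeddings of the same molecule's graph and text are
    pulled together, mismatched pairs within the batch are pushed apart."""
    g = F.normalize(graph_emb, dim=-1)            # (B, d)
    t = F.normalize(text_emb, dim=-1)             # (B, d)
    logits = g @ t.T / temperature                # (B, B) similarity matrix
    targets = torch.arange(g.size(0), device=g.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

# toy batch of 8 molecules with 256-dimensional embeddings from each encoder
loss = contrastive_alignment_loss(torch.randn(8, 256), torch.randn(8, 256))
```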

  • Article type: Journal Article
    The floor constitutes one of the largest areas within a building with which users interact most frequently in daily activities. Employing floor sensors is vital for smart-building digital twins, wherein triboelectric nanogenerators demonstrate wide application potential due to their good performance and self-powering characteristics. However, their sensing stability, reliability, and multimodality require further enhancement to meet the rapidly evolving demands. Thus, this work introduces a multimodal intelligent flooring system, implementing a 4 × 4 floor array for multimodal information detection (including position, pressure, material, user identity, and activity) and human-machine interactions. The floor unit incorporates a hybrid structure of triboelectric pressure sensors and a top-surface material sensor, facilitating linear and enhanced sensitivity across a wide pressure range (0-800 N), alongside the material recognition capability. The floor array is implemented by an advanced output-ratio method with minimalist output channels, which is insensitive to environmental factors such as humidity and temperature. In addition to multimodal sensing, energy harvesting is co-designed with the pressure sensors for scavenging waste energy to power smart-building sensor nodes. This developed flooring system enables multimodal sensing, energy harvesting, and smart-sport interactions in smart buildings, greatly expanding the floor sensing scenarios and applications.
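    The output-ratio readout itself is a hardware design, but its key property, insensitivity to amplitude drift from humidity or temperature, can be illustrated with a toy decoder: if each cell splits its signal between two shared channels with a cell-dependent ratio, the ratio alone identifies the cell regardless of the overall signal amplitude. The splitting ratios and channel layout below are invented for illustration and are not the authors' design.

```python
import numpy as np

def decode_cell(ch_a, ch_b, n_cells=4):
    """Toy ratio-based decoding along one axis of a 4x4 floor array."""
    ratio = ch_a / (ch_a + ch_b)                              # amplitude cancels out
    design_ratios = (np.arange(n_cells) + 1) / (n_cells + 1)  # assumed: 0.2, 0.4, 0.6, 0.8
    return int(np.argmin(np.abs(design_ratios - ratio)))

# a step on the cell designed with ratio 0.6, at an arbitrary overall amplitude
amplitude = 3.7
print(decode_cell(0.6 * amplitude, 0.4 * amplitude))          # -> 2
```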

  • Article type: Journal Article
    BACKGROUND: Lung cancer is a malignant tumor, for which pulmonary nodules are considered to be significant indicators. Early recognition and timely treatment of pulmonary nodules can contribute to improving the survival rate of patients with cancer. Positron emission tomography-computed tomography (PET/CT) is a noninvasive, fusion imaging technique that can obtain both functional and structural information of lung regions. However, studies of pulmonary nodules based on computer-aided diagnosis have primarily focused on the nodule level due to a reliance on the annotation of nodules, which is superficial and unable to contribute to the actual clinical diagnosis. The aim of this study was thus to develop a fully automated classification framework for a more comprehensive assessment of pulmonary nodules in PET/CT imaging data.
    METHODS: We developed a two-stage multimodal learning framework for the diagnosis of pulmonary nodules in PET/CT imaging. In this framework, Stage I focuses on pulmonary parenchyma segmentation using a pretrained U-Net and PET/CT registration. Stage II aims to extract, integrate, and recognize image-level and feature-level features by employing a three-dimensional (3D) Inception-residual net (ResNet) convolutional block attention module architecture and a dense-voting fusion mechanism.
    RESULTS: In the experiments, the proposed model's performance was comprehensively validated using a set of real clinical data, achieving mean scores of 89.98%, 89.21%, 84.75%, 93.38%, and 86.83% for accuracy, precision, recall, specificity, and F1 score, respectively, and an area under the curve of 0.9227.
    CONCLUSIONS: This paper presents a two-stage multimodal learning approach for the automatic diagnosis of pulmonary nodules. The findings reveal that the main factor limiting model performance is the nonsolitary property of nodules in pulmonary nodule diagnosis, providing direction for future research.
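    The dense-voting fusion in Stage II combines predictions from the image-level and feature-level branches. Purely as a stand-in, the sketch below fuses several branches' class probabilities by weighted soft voting; the actual dense-voting mechanism and branch weights in the paper may differ.

```python
import numpy as np

def soft_voting_fusion(prob_list, weights=None):
    """Combine class-probability outputs from several branches by weighted voting."""
    probs = np.stack(prob_list)                    # (n_branches, n_samples, n_classes)
    if weights is None:
        weights = np.full(len(prob_list), 1.0 / len(prob_list))
    fused = np.tensordot(weights, probs, axes=1)   # (n_samples, n_classes)
    return fused.argmax(axis=1), fused

# three branches scoring five nodules as benign/malignant
rng = np.random.default_rng(1)
branch_probs = [rng.dirichlet(np.ones(2), size=5) for _ in range(3)]
labels, fused = soft_voting_fusion(branch_probs)
```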

  • Article type: Journal Article
    The primary aim of this study is to assess the viability of employing multimodal radiomics techniques for distinguishing between cervical spinal cord injury and spinal cord concussion in cervical magnetic resonance imaging. This is a multicenter study involving 288 patients from a major medical center as the training group, and 75 patients from two other medical centers as the testing group. Data regarding the presence of spinal cord injury symptoms and their recovery status within 72 h were documented. These patients underwent sagittal T1-weighted and T2-weighted imaging using cervical magnetic resonance imaging. Radiomics techniques were used to help diagnose whether these patients had cervical spinal cord injury or spinal cord concussion. A total of 1197 radiomics features were extracted for each modality of each patient. In the testing group, the T1 model achieved an accuracy of 0.773 with an AUC of 0.799; the T2 model achieved an accuracy of 0.707 with an AUC of 0.813; and the combined T1 + T2 model achieved an accuracy of 0.800 with an AUC of 0.840. Our research indicates that multimodal radiomics techniques utilizing cervical magnetic resonance imaging can effectively diagnose the presence of cervical spinal cord injury or spinal cord concussion.
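    The per-modality and combined radiomics classifiers reported above can be sketched as follows: fit one classifier per feature set (T1, T2, and their concatenation) and compare accuracy and AUC on the testing group. The random placeholder features, the logistic-regression classifier, and the absence of feature selection or harmonization are simplifying assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n_train, n_test, n_feat = 288, 75, 1197        # cohort sizes and features per modality (from the abstract)
X_t1 = rng.normal(size=(n_train + n_test, n_feat))
X_t2 = rng.normal(size=(n_train + n_test, n_feat))
y = rng.integers(0, 2, size=n_train + n_test)  # toy labels: 0 = concussion, 1 = cord injury

def evaluate(X):
    clf = LogisticRegression(max_iter=2000).fit(X[:n_train], y[:n_train])
    p = clf.predict_proba(X[n_train:])[:, 1]
    return accuracy_score(y[n_train:], p > 0.5), roc_auc_score(y[n_train:], p)

for name, X in [("T1", X_t1), ("T2", X_t2), ("T1+T2", np.hstack([X_t1, X_t2]))]:
    print(name, evaluate(X))
```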

  • Article type: Journal Article
    OBJECTIVE: Deep learning can enhance the performance of multimodal image analysis, which is known for its noninvasive attributes and complementary efficacy, in predicting axillary lymph node (ALN) metastasis. Therefore, we established a multimodal deep learning model incorporating ultrasound (US) and magnetic resonance imaging (MRI) images to predict ALN metastasis in patients with breast cancer.
    METHODS: A retrospective cohort of patients with histologically confirmed breast cancer from two hospitals comprised the primary cohort (n = 465) and the external validation cohort (n = 123). All patients had undergone both preoperative US and MRI scans. After data preprocessing, three convolutional neural network models were used to analyze the US and MRI images, respectively. After integrating the US and MRI deep learning prediction results (DLUS and DLMRI, respectively), a multimodal deep learning (DLMRI+US+Clinical parameter) model was constructed. The predictive ability of the proposed model was compared to that of the DLUS, DLMRI, combined bimodal (DLMRI+US), and clinical parameter models. Evaluation was performed using the area under the receiver operating characteristic curve (AUC) and decision curve analysis.
    RESULTS: A total of 588 patients with breast cancer participated in this study. The DLMRI+US+Clinical parameter model outperformed the alternative models, achieving the highest AUCs of 0.819 (95% confidence interval [CI] 0.734-0.903) and 0.809 (95% CI 0.723-0.895) on the internal and external validation sets, respectively. The decision curve analysis confirmed its clinical usefulness.
    CONCLUSIONS: The DLMRI+US+Clinical parameter model demonstrated feasible and reliable performance for predicting ALN metastasis in patients with breast cancer.
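    The late-fusion idea behind the DLMRI+US+Clinical parameter model (combining the two deep-learning prediction scores with clinical parameters in a final classifier) can be sketched as below. The random placeholder scores, the logistic-regression fusion layer, and the train/validation split are assumptions for illustration, not the study's architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Placeholder per-patient inputs: DL probabilities of ALN metastasis from the
# US and MRI models, plus encoded clinical parameters.
rng = np.random.default_rng(0)
n = 465
p_us, p_mri = rng.uniform(size=(2, n))
clinical = rng.normal(size=(n, 4))
y = rng.integers(0, 2, size=n)

X = np.column_stack([p_us, p_mri, clinical])    # DLMRI+US+Clinical-style late fusion
fusion = LogisticRegression(max_iter=1000).fit(X[:350], y[:350])
print("validation AUC:", roc_auc_score(y[350:], fusion.predict_proba(X[350:])[:, 1]))
```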