Multi-modal

  • Article Type: Journal Article
    OBJECTIVE: To develop a deep learning model combining CT scans and clinical information to predict overall survival in advanced hepatocellular carcinoma (HCC).
    METHODS: This retrospective study included immunotherapy-treated advanced HCC patients from 52 multi-national in-house centers between 2018 and 2022. A multi-modal prognostic model using baseline and first follow-up CT images together with 7 clinical variables was proposed. A convolutional-recurrent neural network (CRNN) was developed to extract spatial-temporal information from automatically selected representative 2D CT slices and produce a radiological score, which was then fused with a Cox-based clinical score to estimate survival risk. The model's effectiveness was assessed using the time-dependent area under the receiver operating characteristic curve (AUC), and risk group stratification was assessed using the log-rank test. The prognostic performance of the multi-modal inputs was compared with that of models with missing modalities and with the size-based RECIST criteria.
    RESULTS: Two hundred seven patients (mean age, 61 years ± 12 [SD]; 180 men) were included. The multi-modal CRNN model reached AUCs of 0.777 and 0.704 for 1-year overall survival prediction in the validation and test sets, respectively. The model achieved significant risk stratification in the validation (hazard ratio [HR] = 3.330, p = 0.008) and test sets (HR = 2.024, p = 0.047) based on the median risk score of the training set. Models with missing modalities (the single-modal imaging-based model and the model incorporating only baseline scans) could still achieve favorable risk stratification performance (all p < 0.05, except for one at p = 0.053). Moreover, the results demonstrated the superiority of the deep learning-based model over the RECIST criteria.
    CONCLUSIONS: Deep learning analysis of CT scans and clinical data can offer significant prognostic insights for patients with advanced HCC.
    UNASSIGNED: The established model can help monitor patients' disease status and identify those with poor prognosis at the time of first follow-up, helping clinicians make informed treatment decisions and enabling early and timely interventions.
    CONCLUSIONS: An AI-based prognostic model was developed for advanced HCC using multi-national patients. The model extracts spatial-temporal information from CT scans and integrates it with clinical variables to prognosticate. The model demonstrated superior prognostic ability compared to the conventional size-based RECIST method.
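    A minimal sketch of the late-fusion and median-threshold risk stratification described above, assuming a precomputed CRNN radiological score and using the lifelines library; the column names, the fusion weight alpha, and the weighting scheme are illustrative assumptions rather than the authors' implementation:

```python
# Hypothetical late fusion of a deep-learning radiological score with a Cox-based
# clinical score, followed by stratification at the training-set median risk.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

def fuse_and_stratify(train: pd.DataFrame, test: pd.DataFrame, clinical_cols, alpha=0.5):
    # Clinical score: Cox proportional-hazards model on the clinical variables.
    cph = CoxPHFitter()
    cph.fit(train[clinical_cols + ["time", "event"]], duration_col="time", event_col="event")
    for df in (train, test):
        clinical_score = np.log(cph.predict_partial_hazard(df[clinical_cols]).values)
        # "radiological_score" is assumed to come from the CRNN on baseline + follow-up CT.
        df["risk"] = alpha * clinical_score + (1 - alpha) * df["radiological_score"]
    cutoff = train["risk"].median()          # threshold fixed on the training set
    high, low = test[test["risk"] > cutoff], test[test["risk"] <= cutoff]
    # Log-rank test between the high- and low-risk groups of the held-out set.
    return logrank_test(high["time"], low["time"],
                        event_observed_A=high["event"], event_observed_B=low["event"])
```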
  • Article Type: Journal Article
    This study aims to explore methods for classifying and describing volleyball training videos using deep learning techniques. By developing an innovative model that integrates Bi-directional Long Short-Term Memory (BiLSTM) and attention mechanisms, referred to as BiLSTM-Multimodal Attention Fusion Temporal Classification (BiLSTM-MAFTC), the study enhances the accuracy and efficiency of volleyball video content analysis. Initially, the model encodes features from various modalities into feature vectors, capturing different types of information such as positional and modal data. The BiLSTM network is then used to model multi-modal temporal information, while spatial and channel attention mechanisms are incorporated to form a dual-attention module. This module establishes correlations between different modality features, extracting valuable information from each modality and uncovering complementary information across modalities. Extensive experiments validate the method's effectiveness and state-of-the-art performance. Compared to conventional recurrent neural network algorithms, the model achieves recognition accuracies exceeding 95% under Top-1 and Top-5 metrics for action recognition, with a recognition speed of 0.04 s per video. The study demonstrates that the model can effectively process and analyze multimodal temporal information, including athlete movements, positional relationships on the court, and ball trajectories. Consequently, precise classification and description of volleyball training videos are achieved. This advancement significantly enhances the efficiency of coaches and athletes in volleyball training and provides valuable insights for broader sports video analysis research.
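    A rough PyTorch sketch of the dual-attention BiLSTM idea outlined above, with channel-wise gating and temporal attention applied to an already-fused multi-modal feature sequence; the layer sizes, the attention formulations, and the fusion step are assumptions, not the paper's BiLSTM-MAFTC code:

```python
# Minimal BiLSTM + dual-attention sketch (assumed layer sizes and fusion strategy).
import torch
import torch.nn as nn

class BiLSTMDualAttention(nn.Module):
    def __init__(self, feat_dim=256, hidden=128, n_classes=10):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.channel_attn = nn.Sequential(nn.Linear(2 * hidden, 2 * hidden), nn.Sigmoid())
        self.temporal_attn = nn.Linear(2 * hidden, 1)   # attention over time steps
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                    # x: (batch, time, feat_dim), fused modalities
        h, _ = self.bilstm(x)                # (batch, time, 2*hidden)
        h = h * self.channel_attn(h)         # channel-wise re-weighting
        w = torch.softmax(self.temporal_attn(h), dim=1)  # temporal attention weights
        ctx = (w * h).sum(dim=1)             # attention-weighted temporal pooling
        return self.head(ctx)                # logits for the action categories
```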
  • Article Type: Journal Article
    UNASSIGNED: Timely and accurate detection of Autism Spectrum Disorder (ASD) is essential for early intervention and improved patient outcomes. This study aims to harness the power of machine learning (ML) techniques to improve ASD detection by incorporating temporal eye-tracking data. We developed a novel ML model that leverages eye scan paths, sequences of eye-movement distances, and sequences of fixation durations, enhancing the temporal aspect of the analysis for more effective ASD identification.
    UNASSIGNED: We utilized a dataset of eye-tracking data without augmentation to train our ML model, which consists of a CNN-GRU-ANN architecture. The model was trained using gaze maps, the sequences of distances between eye fixations, and durations of fixations and saccades. Additionally, we employed a validation dataset to assess the model's performance and compare it with other works.
    UNASSIGNED: Our ML model demonstrated superior performance in ASD detection compared to the VGG-16 model. By incorporating temporal information from eye-tracking data, our model achieved higher accuracy, precision, and recall. The novel addition of sequence-based features allowed our model to effectively distinguish between ASD and typically developing individuals, achieving an impressive precision value of 93.10% on the validation dataset.
    UNASSIGNED: This study presents an ML-based approach to ASD detection by utilizing machine learning techniques and incorporating temporal eye-tracking data. Our findings highlight the potential of temporal analysis for improved ASD detection and provide a promising direction for further advancements in the field of eye-tracking-based diagnosis and intervention for neurodevelopmental disorders.
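    A hedged sketch of a CNN-GRU-ANN hybrid in the spirit of the model described above: a small CNN encodes the gaze map, a GRU encodes the fixation-distance/duration sequences, and a dense head produces ASD vs. typically-developing logits; all layer sizes and the exact input encodings are assumptions:

```python
# Sketch of a CNN-GRU-ANN hybrid for eye-tracking data (assumed architecture details).
import torch
import torch.nn as nn

class GazeASDNet(nn.Module):
    def __init__(self, seq_feat=2, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                     # gaze-map branch (1-channel image)
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten())
        self.gru = nn.GRU(seq_feat, hidden, batch_first=True)  # distance/duration sequence
        self.ann = nn.Sequential(nn.Linear(32 + hidden, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, gaze_map, sequence):
        img = self.cnn(gaze_map)                      # (batch, 32)
        _, h = self.gru(sequence)                     # h: (1, batch, hidden)
        return self.ann(torch.cat([img, h[-1]], dim=1))   # logits: ASD vs. TD
```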
  • Article Type: Journal Article
    Despite the advent of new diagnostics, drugs, and regimens, multi-drug-resistant pulmonary tuberculosis (MDR-PTB) remains a global health threat, with a long treatment cycle, a low cure rate, and a heavy disease burden. Factors such as demographics, disease characteristics, lung imaging, biomarkers, therapeutic schedule, and adherence to medications are associated with MDR-PTB prognosis. However, the majority of existing studies have thus far focused on predicting treatment outcomes from static, single-scale, or low-dimensional information. Hence, multi-modal deep learning based on dynamic, multi-dimensional data can provide a deeper understanding of personalized treatment plans to aid in the clinical management of patients.
  • Article Type: Journal Article
    SUMMARY: One of the first steps in single-cell omics data analysis is visualization, which allows researchers to see how well separated cell types are from each other. When visualizing multiple datasets at once, data integration/batch correction methods are used to merge the datasets. While needed for downstream analyses, these methods modify the feature space (e.g., gene expression) or PCA space so that cell types are mixed between batches as well as possible. This obscures sample-specific features and breaks down local embedding structures that can be seen when a sample is embedded alone. Therefore, to improve visual comparisons across large numbers of samples (e.g., multiple patients, omic modalities, or different time points), we introduce Compound-SNE, which performs what we term a soft alignment of samples in embedding space. We show that Compound-SNE is able to align cell types in embedding space across samples while preserving the local embedding structures obtained when samples are embedded independently.
    AVAILABILITY: Python code for Compound-SNE is available for download at https://github.com/HaghverdiLab/Compound-SNE.
    SUPPLEMENTARY INFORMATION: Available online; provides algorithmic details and additional tests.
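    For illustration only (this is not the Compound-SNE algorithm): one simple way to nudge per-sample embeddings toward a common layout is to initialise each sample's t-SNE at reference cell-type positions, so that shared cell types land in similar places while local structure is still optimised per sample; the helper below and its inputs are hypothetical:

```python
# Hypothetical anchored t-SNE: initialise each cell at its cell-type's reference
# position, then let t-SNE refine the layout per sample.
import numpy as np
from sklearn.manifold import TSNE

def anchored_tsne(X, cell_types, ref_centroids, perplexity=30, seed=0):
    """X: (cells, features); cell_types: per-cell labels;
    ref_centroids: dict cell_type -> (2,) position from a reference embedding."""
    init = np.array([ref_centroids[ct] for ct in cell_types], dtype=float)
    init += np.random.default_rng(seed).normal(scale=1e-3, size=init.shape)  # break ties
    return TSNE(n_components=2, init=init, perplexity=perplexity,
                random_state=seed).fit_transform(X)
```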
  • Article Type: Journal Article
    UNASSIGNED: Schizophrenia (SZ) is a psychiatric condition that adversely affects an individual's cognitive, emotional, and behavioral aspects. The etiology of SZ, although extensively studied, remains unclear, as multiple factors come together to contribute toward its development. There is a consistent body of evidence documenting the presence of structural and functional deviations in the brains of individuals with SZ. Moreover, the hereditary aspect of SZ is supported by the significant involvement of genomics markers. Therefore, the need to investigate SZ from a multi-modal perspective and develop approaches for improved detection arises.
    UNASSIGNED: Our proposed method employed a deep learning framework combining features from structural magnetic resonance imaging (sMRI), functional magnetic resonance imaging (fMRI), and genetic markers such as single nucleotide polymorphisms (SNPs). For sMRI, we used a pre-trained DenseNet to extract morphological features. To identify the functional connections in fMRI and the SNPs most relevant to SZ, we applied a 1-dimensional convolutional neural network (CNN) followed by layerwise relevance propagation (LRP). Finally, we concatenated the obtained features across modalities and fed them to an extreme gradient boosting (XGBoost) tree-based classifier to classify SZ versus healthy controls (HC).
    UNASSIGNED: Experimental evaluation on a clinical dataset demonstrated that, compared to the outcomes obtained from each modality individually, our proposed multi-modal approach classified SZ individuals from HC with an improved accuracy of 79.01%.
    UNASSIGNED: We proposed a deep learning-based framework that efficiently selects multi-modal (sMRI, fMRI, and genetic) features and fuses them to obtain improved classification scores. Additionally, by using explainable AI (XAI), we were able to pinpoint and validate the significant functional network connections and SNPs that contributed the most toward SZ classification, providing the necessary interpretation behind our findings.
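    A minimal sketch of the late-fusion step described above: concatenate the per-modality feature vectors (assumed to be pre-extracted by the DenseNet and 1D-CNN/LRP stages) and classify SZ vs. HC with XGBoost; the feature arrays, hyperparameters, and cross-validation scheme are assumptions:

```python
# Late fusion of per-modality features followed by XGBoost classification.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

def classify_sz(smri_feats, fmri_feats, snp_feats, labels):
    X = np.concatenate([smri_feats, fmri_feats, snp_feats], axis=1)  # (subjects, features)
    clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                        eval_metric="logloss")
    return cross_val_score(clf, X, labels, cv=5, scoring="accuracy")  # per-fold accuracy
```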
  • Article Type: Journal Article
    UNASSIGNED: With the increased reliance on multi-omics data for bulk and single-cell analyses, the availability of robust approaches to perform unsupervised analysis for clustering, visualization, and feature selection is imperative. Joint dimensionality reduction methods can be applied to multi-omics datasets to derive a global sample embedding analogous to single-omic techniques such as Principal Components Analysis (PCA). Multiple co-inertia analysis (MCIA) is a method for joint dimensionality reduction that maximizes the covariance between block-level and global-level embeddings. Current implementations of MCIA are not optimized for large datasets such as those arising from single-cell studies, and lack the capability to embed new data.
    UNASSIGNED: We introduce nipalsMCIA, an MCIA implementation that solves the objective function using an extension of Non-linear Iterative Partial Least Squares (NIPALS) and shows a significant speed-up over earlier implementations that rely on eigendecompositions for single-cell multi-omics data. It also removes the dependence on an eigendecomposition for calculating the variance explained and allows users to perform out-of-sample embedding of new data. nipalsMCIA provides users with a variety of pre-processing and parameter options, as well as convenient functionality for downstream analysis of single-omic and global-embedding factors.
    UNASSIGNED: nipalsMCIA is available as a Bioconductor package at https://bioconductor.org/packages/release/bioc/html/nipalsMCIA.html, and includes detailed documentation and application vignettes. Supplementary materials are available online.
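    nipalsMCIA itself is an R/Bioconductor package; as a language-agnostic illustration of why NIPALS avoids a full eigendecomposition, the sketch below computes one leading component by iterating score/loading updates (generic single-block NIPALS, not the package's multi-block implementation):

```python
# Generic NIPALS iteration for one leading component (illustrative only).
import numpy as np

def nipals_component(X, tol=1e-8, max_iter=500):
    t = X[:, 0].copy()                       # initial score vector
    for _ in range(max_iter):
        p = X.T @ t / (t @ t)                # loading vector
        p /= np.linalg.norm(p)
        t_new = X @ p                        # updated scores
        if np.linalg.norm(t_new - t) < tol:
            return t_new, p
        t = t_new
    return t, p

# Deflate X by the fitted component and repeat to obtain further components:
#   X_deflated = X - np.outer(t, p)
```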
  • Article Type: Journal Article
    UNASSIGNED: Existing criteria for predicting patient survival from immunotherapy are primarily centered on the PD-L1 status of patients. We tested the hypothesis that noninvasively captured baseline whole-lung radiomics features from CT images and baseline clinical parameters, combined with advanced machine learning approaches, can help to build models of patient survival that compare favorably with PD-L1 status for predicting 'less-than-median-survival risk' in the metastatic NSCLC setting for patients on durvalumab. With a total of 1062 patients, inclusive of model training and validation, this is the largest such study yet.
    UNASSIGNED: To ensure a sufficient sample size, we combined data from the treatment arms of three metastatic NSCLC studies. About 80% of these data were used for model training, and the remainder was held out for validation. We first trained two independent models: Model-C, trained to predict survival using clinical data, and Model-R, trained to predict survival using whole-lung radiomics features. Finally, we created Model-C+R, which leveraged both clinical and radiomics features.
    UNASSIGNED: The classification accuracy (for median survival) of Model-C, Model-R, and Model-C+R was 63%, 55%, and 68%, respectively. Sensitivity analysis of survival prediction across different training and validation cohorts showed concordance indices (95% percentile intervals) of 0.64 ([0.63, 0.65]), 0.60 ([0.59, 0.60]), and 0.66 ([0.65, 0.67]), respectively. We additionally evaluated the generalization of these models on a comparable cohort of 144 patients from an independent study, demonstrating classification accuracies of 65%, 62%, and 72%, respectively.
    UNASSIGNED: Machine learning models combining baseline whole-lung CT radiomic and clinical features may be a useful tool for patient selection in immunotherapy. Further validation through prospective studies is needed.
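    A hedged sketch of the three-way model comparison described above, training classifiers for 'less-than-median survival' on clinical features, radiomics features, and their concatenation; the estimator choice, feature arrays, and evaluation split are assumptions, not the study's pipeline:

```python
# Compare clinical-only, radiomics-only, and combined models on a held-out split.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

def compare_models(clinical, radiomics, y, train_idx, test_idx):
    inputs = {"Model-C": clinical,
              "Model-R": radiomics,
              "Model-C+R": np.hstack([clinical, radiomics])}
    scores = {}
    for name, X in inputs.items():
        clf = GradientBoostingClassifier().fit(X[train_idx], y[train_idx])
        scores[name] = accuracy_score(y[test_idx], clf.predict(X[test_idx]))
    return scores   # e.g. {"Model-C": ..., "Model-R": ..., "Model-C+R": ...}
```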
  • Article Type: Journal Article
    UNASSIGNED: To identify significant relationships between quantitative cytometric tissue features and quantitative MRI (qMRI) values intratumorally in preclinical undifferentiated pleomorphic sarcomas (UPS).
    UNASSIGNED: In a prospective study of genetically engineered mouse models of UPS, we registered imaging libraries consisting of matched multi-contrast in vivo MRI, three-dimensional (3D) multi-contrast high-resolution ex vivo MR histology (MRH), and two-dimensional (2D) tissue slides. From digitized histology we generated quantitative cytometric feature maps from whole-slide automated nuclear segmentation. We automatically segmented intratumoral regions of distinct qMRI values and measured corresponding cytometric features. Linear regression analysis was performed to compare intratumoral qMRI and tissue cytometric features, and results were corrected for multiple comparisons. Linear correlations between qMRI and cytometric features with p values of <0.05 after correction for multiple comparisons were considered significant.
    UNASSIGNED: Three features correlated with ex vivo apparent diffusion coefficient (ADC), and no features correlated with in vivo ADC. Six features demonstrated significant linear relationships with ex vivo T2*, and fifteen features correlated significantly with in vivo T2*. In both cases, nuclear Haralick texture features were the most prevalent type of feature correlated with T2*. A small group of nuclear topology features also correlated with one or both T2* contrasts, and positive trends were seen between T2* and nuclear size metrics.
    UNASSIGNED: Registered multi-parametric imaging datasets can identify quantitative tissue features which contribute to UPS MR signal. T2* may provide quantitative information about nuclear morphology and pleomorphism, adding histological insights to radiological interpretation of UPS.
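    A small sketch of the per-feature statistical test described above: regress each cytometric feature against the qMRI values of matched intratumoral regions and correct the p-values for multiple comparisons; the Benjamini-Hochberg method and the input layout are assumptions:

```python
# Per-feature linear regression against qMRI with multiple-comparison correction.
import numpy as np
from scipy.stats import linregress
from statsmodels.stats.multitest import multipletests

def significant_features(qmri, feature_table):
    """qmri: (regions,); feature_table: dict name -> (regions,) cytometric values."""
    names = list(feature_table)
    pvals = [linregress(qmri, feature_table[n]).pvalue for n in names]
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    return [(n, p) for n, p, keep in zip(names, p_adj, reject) if keep]
```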
  • Article Type: Journal Article
    Cannabis use is common in young adulthood, yet little is known about the prevalence and patterns of multi-modal (i.e., use of more than one mode) cannabis use.
    UNASSIGNED: We aimed to (1) determine the past 30-day prevalence of five modes (smoke, vape, edible, dab, other) of cannabis use, (2) describe the prevalence of multi-modal cannabis use (single vs. dual vs. poly-modal), and (3) identify socio-demographic correlates of multi-modal use among young adults.
    UNASSIGNED: Participants were 764 22- to 30-year-olds who currently used cannabis, drawn from Wave 9 (Spring 2019) of the Marketing and Promotions Across Colleges in Texas Project. Participants were 25.11 years old on average (SD = 1.81); 63.6% were female; 38.7% identified as non-Hispanic white, 30.6% as Hispanic/Latino, 13.0% as Asian, 9.4% as Black, and 8.2% with two or more races or another race/ethnicity. Bivariate analyses and a multinomial regression were used to examine the study questions.
    UNASSIGNED: Smoking was the most common mode of cannabis use followed by vaping and then edibles. Nearly 43% of participants reported single-modal cannabis use, 33% reported dual-modal use, and 24% reported poly-modal use. Males and those identifying as non-heterosexual were at a greater risk than their counterparts for using multiple modes of cannabis. Participants identifying as Black were at a reduced risk for poly-modal compared to single-modal use.
    UNASSIGNED: Multi-modal use is common among young adults who currently use cannabis, indicating a need for universal efforts aimed at all young adults. Tailored interventions aimed at those at elevated risk for multi-modal use are also needed.
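    A hedged sketch of the multinomial regression mentioned above, modelling mode-of-use category (single/dual/poly-modal) as a function of socio-demographic covariates with statsmodels; the variable names and coding are assumed, not the study's actual data schema:

```python
# Multinomial logistic regression of cannabis mode-of-use category (illustrative schema).
import pandas as pd
import statsmodels.formula.api as smf

def fit_mode_model(df: pd.DataFrame):
    # Assumed columns: use_category (0=single, 1=dual, 2=poly), age, male,
    # sexual_minority, race_ethnicity (categorical).
    model = smf.mnlogit("use_category ~ age + male + sexual_minority + C(race_ethnicity)",
                        data=df)
    result = model.fit()
    return result.summary()   # coefficients relative to the single-modal base category
```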