Keywords: Biomarkers; Clinical Trials; Huntington’s Disease; Machine Learning; Neuroimaging; Stratification

Source: DOI: 10.1016/j.nicl.2024.103650 | PDF (PubMed)

Abstract:
BACKGROUND: In Huntington's disease clinical trials, recruitment and stratification approaches rely primarily on genetic load and on cognitive and motor assessment scores. They focus less on in vivo brain imaging markers, which reflect neuropathology well before clinical diagnosis. Machine learning methods offer a degree of sophistication that could significantly improve prognosis and stratification by leveraging multimodal biomarkers from large datasets. Such models, specifically tailored to HD gene expansion carriers, could further enhance the efficacy of the stratification process.
OBJECTIVE: To improve the stratification of individuals with Huntington's disease for clinical trials.
METHODS: We used data from 451 gene-positive individuals with Huntington's disease (both premanifest and diagnosed) from previously published cohorts (PREDICT, TRACK, TrackON, and IMAGE). We applied whole-brain parcellation to longitudinal brain scans and measured the rate of lateral ventricular enlargement over 3 years, which served as the target variable for our prognostic random forest regression models. The models were trained on various combinations of baseline features, including genetic load, cognitive and motor assessment scores, as well as brain imaging-derived features. Furthermore, a simplified stratification model was developed to classify individuals into two homogeneous groups (low risk and high risk) based on their anticipated rate of ventricular enlargement.
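As a rough illustration of the pipeline described above, the following is a minimal sketch (not the authors' code) of a random forest regression predicting the rate of ventricular enlargement from baseline features, followed by a simplified two-group stratification step; the file name, feature names, and risk cut-point are assumptions made purely for illustration.

```python
# Minimal sketch of the described approach using a scikit-learn style workflow.
# The file name, feature names, and the risk cut-point below are illustrative
# assumptions, not taken from the paper.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import cross_val_score, cross_val_predict

df = pd.read_csv("hd_baseline_features.csv")  # hypothetical baseline feature table

# Baseline predictors: genetic load, cognitive/motor scores, imaging-derived measures.
feature_cols = ["cag_repeat_length", "cognitive_score", "motor_score",
                "ventricular_volume_baseline", "caudate_volume_baseline"]  # assumed names
X = df[feature_cols].to_numpy()
y = df["ventricular_enlargement_rate"].to_numpy()  # target: mm^3/year over 3 years

# Prognostic model: random forest regression, scored by cross-validated MAE.
reg = RandomForestRegressor(n_estimators=500, random_state=0)
cv_mae = -cross_val_score(reg, X, y, cv=5, scoring="neg_mean_absolute_error").mean()
print(f"cross-validated MAE: {cv_mae:.0f} mm^3/year")

# Simplified stratification: dichotomise individuals into low-risk vs high-risk
# groups by their observed enlargement rate; the median split is an assumption.
risk = (y > np.median(y)).astype(int)  # 1 = high risk / fast progressor
clf = RandomForestClassifier(n_estimators=500, random_state=0)
cv_pred = cross_val_predict(clf, X, risk, cv=5)
```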
RESULTS: The predictive accuracy of the prognostic models improved substantially when brain imaging features were integrated alongside genetic load and cognitive and motor biomarkers: a 24% reduction in the cross-validated mean absolute error, yielding an error of 530 mm³/year. The stratification model had a cross-validated accuracy of 81% in differentiating between moderate and fast progressors (precision = 83%, recall = 80%).
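Continuing the sketch above, the reported cross-validated classification metrics could be computed as below. Note that, assuming the 24% reduction is quoted relative to the model without imaging features, it would imply a pre-imaging cross-validated MAE of roughly 530 / (1 − 0.24) ≈ 700 mm³/year.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Cross-validated classification metrics for the stratification model
# (risk and cv_pred come from the sketch above).
acc = accuracy_score(risk, cv_pred)
prec = precision_score(risk, cv_pred)
rec = recall_score(risk, cv_pred)
print(f"accuracy={acc:.2f}  precision={prec:.2f}  recall={rec:.2f}")
```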
CONCLUSIONS: This study validated the effectiveness of machine learning in differentiating between low- and high-risk individuals based on the rate of ventricular enlargement. The models were trained exclusively on features from HD individuals, an approach that offers a more disease-specific, simplified, and accurate basis for prognostic enrichment than relying on features extracted from healthy control groups, as in previous studies. The proposed method has the potential to enhance clinical utility by: i) enabling more targeted recruitment of individuals for clinical trials, ii) improving post-hoc evaluation of individuals, and iii) ultimately leading to better outcomes through personalized treatment selection.