Keywords: brain PET; deep learning; image processing; multi-tracer; parametric image

MeSH: Alzheimer Disease / diagnostic imaging; Aniline Compounds; Brain; Deep Learning; Fluorodeoxyglucose F18; Humans; Positron-Emission Tomography

Source: DOI: 10.1002/mp.15073

Abstract:
OBJECTIVE: Positron emission tomography (PET) imaging with various tracers is increasingly used in Alzheimer's disease (AD) studies. However, access to PET scans using new or less-available tracers that require sophisticated synthesis and short half-life isotopes may be very limited. Therefore, it is of great significance and interest in AD research to assess the feasibility of generating synthetic PET images of less-available tracers from the PET image of another common tracer, in particular 18F-FDG.
METHODS: We implemented advanced deep learning methods using the U-Net model to predict 11C-UCB-J PET images of synaptic vesicle protein 2A (SV2A), a surrogate of synaptic density, from 18F-FDG PET data. Dynamic 18F-FDG and 11C-UCB-J scans were performed in 21 participants with normal cognition (CN) and 33 participants with Alzheimer's disease (AD). The cerebellum was used as the reference region for both tracers. For 11C-UCB-J image prediction, four network models were trained and tested: 1) 18F-FDG SUV ratio (SUVR) to 11C-UCB-J SUVR, 2) 18F-FDG Ki ratio to 11C-UCB-J SUVR, 3) 18F-FDG SUVR to 11C-UCB-J distribution volume ratio (DVR), and 4) 18F-FDG Ki ratio to 11C-UCB-J DVR. The normalized root mean square error (NRMSE), structural similarity index (SSIM), and Pearson's correlation coefficient were calculated to evaluate overall image prediction accuracy. Mean bias across various brain ROIs and correlation plots between predicted and true images were calculated to assess ROI-based prediction accuracy. Following a similar training and evaluation strategy, an 18F-FDG SUVR to 11C-PiB SUVR network was also trained and tested for 11C-PiB static image prediction.
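As a concrete illustration of the whole-image evaluation described above, the following is a minimal Python sketch of how NRMSE, SSIM, and Pearson's correlation coefficient could be computed between a predicted and a measured PET volume. The function and variable names are hypothetical, the volumes are assumed to be co-registered NumPy arrays, and the choice to normalize the RMSE by the range of the true image is an assumption (the paper may use a different normalization).

```python
import numpy as np
from scipy.stats import pearsonr
from skimage.metrics import structural_similarity


def evaluate_prediction(pred, true, brain_mask):
    """Compare a predicted 3D PET volume against the measured one.

    pred, true : 3D numpy arrays (e.g., 11C-UCB-J SUVR volumes)
    brain_mask : boolean 3D array restricting voxel-wise statistics to the brain
    All inputs are assumed to be co-registered and in the same image space.
    """
    p = pred[brain_mask].astype(np.float64)
    t = true[brain_mask].astype(np.float64)

    # Root mean square error, normalized here by the intensity range of the true image
    rmse = np.sqrt(np.mean((p - t) ** 2))
    nrmse = rmse / (t.max() - t.min())

    # Structural similarity index computed over the full 3D volumes
    ssim = structural_similarity(
        true, pred, data_range=float(true.max() - true.min())
    )

    # Pearson's correlation coefficient over brain voxels
    r, _ = pearsonr(p, t)

    return {"NRMSE": nrmse, "SSIM": ssim, "Pearson_r": r}
```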
RESULTS: All four network models produced satisfactory 11C-UCB-J static and parametric images. For 11C-UCB-J SUVR prediction, the mean ROI bias was -0.3% ± 7.4% for the AD group and -0.5% ± 7.3% for the CN group with 18F-FDG SUVR as the input, and -0.7% ± 8.1% for the AD group and -1.3% ± 7.0% for the CN group with 18F-FDG Ki ratio as the input. For 11C-UCB-J DVR prediction, the mean ROI bias was -1.3% ± 7.5% for the AD group and -2.0% ± 6.9% for the CN group with 18F-FDG SUVR as the input, and -0.7% ± 9.0% for the AD group and -1.7% ± 7.8% for the CN group with 18F-FDG Ki ratio as the input. For 11C-PiB SUVR image prediction, which appears to be a more challenging task, incorporating additional diagnostic information into the network was needed to keep the bias below 5% for most ROIs.
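To make the ROI-based accuracy measure concrete, here is a hedged sketch of how the per-ROI percent bias could be computed; the names are hypothetical, and it assumes an atlas volume labeling each voxel with an integer ROI index:

```python
import numpy as np


def roi_percent_bias(pred, true, atlas, roi_labels):
    """Percent bias of predicted vs. true mean values within each ROI.

    pred, true : 3D numpy arrays (e.g., SUVR or DVR volumes)
    atlas      : 3D integer array assigning each voxel to a ROI
    roi_labels : iterable of ROI label values to evaluate
    Returns a dict mapping ROI label -> percent bias.
    """
    bias = {}
    for label in roi_labels:
        mask = atlas == label
        mean_pred = pred[mask].mean()
        mean_true = true[mask].mean()
        bias[label] = 100.0 * (mean_pred - mean_true) / mean_true
    return bias

# Group-level summaries such as -0.3% +/- 7.4% would then be the mean and
# standard deviation of these per-ROI biases pooled over the test subjects.
```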
CONCLUSIONS: It is feasible to use 3D U-Net-based methods to generate synthetic 11C-UCB-J PET images from 18F-FDG images with reasonable prediction accuracy. It is also possible to predict 11C-PiB SUVR images from 18F-FDG images, though the incorporation of additional non-imaging information is needed.