OBJECTIVE: The number of sequential PET/CT studies that oncology patients can undergo during their treatment follow-up course is limited by radiation dose. We propose an artificial intelligence (AI) tool to produce attenuation-corrected PET (AC-PET) images from non-attenuation-corrected PET (NAC-PET) images, reducing the need for low-dose CT scans.
METHODS: A deep learning algorithm based on the 2D Pix-2-Pix generative adversarial network (GAN) architecture was developed from paired AC-PET and NAC-PET images. 18F-DCFPyL PSMA PET-CT studies from 302 prostate cancer patients were split into training, validation, and testing cohorts (n = 183, 60, and 59, respectively). Models were trained with two normalization strategies: standardized uptake value (SUV)-based and SUV-Nyul-based. Scan-level performance was evaluated by normalized mean square error (NMSE), mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Lesion-level analysis was performed on regions of interest prospectively delineated by nuclear medicine physicians. SUV metrics were evaluated using the intraclass correlation coefficient (ICC), repeatability coefficient (RC), and linear mixed-effects modeling.
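The scan-level image-quality metrics named above can be sketched in NumPy. Note these are illustrative definitions only: the abstract reports NMSE and MAE as percentages, so the exact normalizations (sum-of-squares of the reference for NMSE, and whatever normalization yields a percentage MAE) are assumptions here, and the study's implementation may differ.

```python
import numpy as np

def nmse_percent(pred, ref):
    """Normalized mean squared error in percent.
    Normalization by the reference sum of squares is an assumed convention."""
    return 100.0 * np.sum((pred - ref) ** 2) / np.sum(ref ** 2)

def mae(pred, ref):
    """Plain mean absolute error. The paper reports MAE as a percentage,
    so some normalization by the reference signal is likely applied there."""
    return np.mean(np.abs(pred - ref))

def psnr(pred, ref):
    """Peak signal-to-noise ratio in dB, using the reference maximum as peak."""
    mse = np.mean((pred - ref) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

# Example on toy images: a uniform reference and a 10%-darker prediction.
ref = np.ones((4, 4))
pred = np.full((4, 4), 0.9)
print(nmse_percent(pred, ref))  # ~1.0 (percent)
print(mae(pred, ref))           # ~0.1
print(psnr(pred, ref))          # ~20 dB
```

SSIM is omitted from the sketch because a faithful implementation (Gaussian-windowed local statistics) is longer; in practice it is typically computed with an existing library routine rather than by hand.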
RESULTS: Median NMSE, MAE, SSIM, and PSNR were 13.26%, 3.59%, 0.891, and 26.82, respectively, in the independent test cohort. ICCs for SUVmax and SUVmean were 0.88 and 0.89, respectively, indicating high correlation between original and AI-generated quantitative imaging markers. Lesion location, density (Hounsfield units), and lesion uptake were all shown to affect the relative error in generated SUV metrics (all p < 0.05).
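For the ICC values reported above, a minimal paired-measurement implementation can be sketched as follows. The abstract does not state which ICC variant was used, so ICC(3,1) (two-way mixed effects, single measurement, consistency) is assumed here purely for illustration.

```python
import numpy as np

def icc_3_1(x, y):
    """ICC(3,1): two-way mixed, single measurement, consistency.
    x, y are paired measurements (e.g., original vs. AI-generated SUVmax).
    This variant is an assumption; the study may have used another form."""
    data = np.stack([x, y], axis=1)          # n subjects x 2 "raters"
    n, k = data.shape
    mean_subj = data.mean(axis=1)            # per-subject means
    mean_rater = data.mean(axis=0)           # per-rater means
    grand = data.mean()
    # ANOVA sums of squares: subjects, raters, residual
    ss_subj = k * np.sum((mean_subj - grand) ** 2)
    ss_resid = np.sum(
        (data - mean_subj[:, None] - mean_rater[None, :] + grand) ** 2
    )
    ms_subj = ss_subj / (n - 1)
    ms_resid = ss_resid / ((n - 1) * (k - 1))
    return (ms_subj - ms_resid) / (ms_subj + (k - 1) * ms_resid)

# A constant offset does not hurt consistency ICC:
x = np.array([1.0, 2.0, 3.0, 4.0])
print(icc_3_1(x, x + 0.5))  # 1.0
```

In practice, a validated routine (e.g., `pingouin.intraclass_corr`) would normally be used instead of a hand-rolled ANOVA.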
CONCLUSIONS: The Pix-2-Pix GAN model for generating AC-PET yields SUV metrics that correlate highly with those of the original images. AI-generated PET images show clinical potential to reduce the need for CT-based attenuation correction while preserving quantitative markers and image quality.