Keywords: Monte Carlo simulation; PET; Pix2Pix; conditional generative adversarial networks (cGAN); deep learning; normalization; self-attention

MeSH: Deep Learning; Positron-Emission Tomography; Image Processing, Computer-Assisted / methods; Humans; Phantoms, Imaging; Monte Carlo Method

Source: DOI: 10.1088/1361-6560/ad69fb

Abstract:
Objective. This work proposes, for the first time, an image-based end-to-end self-normalization framework for positron emission tomography (PET) using conditional generative adversarial networks (cGANs). Approach. We evaluated different approaches by exploring each of the following three methodologies. First, we used images that were either unnormalized or corrected for geometric factors, which encompass all time-invariant factors, as the input data type. Second, we set the input tensor shape to either a single axial slice (2D) or three contiguous axial slices (2.5D). Third, we chose either Pix2Pix or polarized self-attention (PSA) Pix2Pix, which we developed for this work, as the deep learning network. The targets for all approaches were the axial slices of images normalized with the direct normalization method. We performed Monte Carlo simulations of ten voxelized phantoms with the SimSET simulation tool and produced 26,000 pairs of axial image slices for training and testing. Main results. The results showed that 2.5D PSA Pix2Pix trained with geometric-factors-corrected input images achieved the best performance among all the methods we tested. All approaches improved the general image-quality figures of merit, peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), from ∼15% to ∼55%, and 2.5D PSA Pix2Pix showed the highest PSNR (28.074) and SSIM (0.921). Lesion detectability, measured with region-of-interest (ROI) PSNR, SSIM, normalized contrast recovery coefficient, and contrast-to-noise ratio, was generally improved for all approaches, and 2.5D PSA Pix2Pix trained with geometric-factors-corrected input images achieved the highest ROI PSNR (28.920) and SSIM (0.973). Significance. This study demonstrates the potential of an image-based end-to-end self-normalization framework using cGANs for improving PET image quality and lesion detectability without the need for separate normalization scans.
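To make the 2.5D input described above concrete, the minimal Python sketch below shows one way three contiguous axial slices could be stacked as input channels, with the directly normalized centre slice as the target. The array names, the (z, y, x) layout, and the edge handling are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (assumptions, not the paper's code) of assembling a 2.5D
# training pair: three contiguous axial slices of the geometric-factors-
# corrected volume as input channels, and the directly normalized centre
# slice as the target. Volumes are assumed to be (z, y, x) numpy arrays.
import numpy as np

def make_2p5d_pair(corrected_volume: np.ndarray,
                   normalized_volume: np.ndarray,
                   z: int):
    """Return (input, target) tensors for the axial slice at index z."""
    nz = corrected_volume.shape[0]
    # Clamp the neighbours at the volume edges so every sample has 3 channels.
    zs = [max(z - 1, 0), z, min(z + 1, nz - 1)]
    x = np.stack([corrected_volume[k] for k in zs], axis=0)   # (3, H, W)
    y = normalized_volume[z][np.newaxis]                      # (1, H, W)
    return x.astype(np.float32), y.astype(np.float32)
```

For the 2D variant, the same idea collapses to a single-channel input containing only the slice at index z.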
Chinese abstract (translated):
Normalization in positron emission tomography (PET) corrects for sensitivity non-uniformity across all system lines of response (LORs). Self-normalization is a framework that aims to estimate the normalization components from the emission data without requiring a separate scan of a normalization phantom. In this work, we propose, for the first time, an image-based end-to-end self-normalization framework using conditional generative adversarial networks (cGANs). We evaluated different approaches by exploring each of the following three methodologies. First, we used either unnormalized images or images corrected for geometric factors, which encompass all time-invariant factors, as the input data type. Second, we set the input tensor shape to either a single axial slice (2D) or three contiguous axial slices (2.5D). Third, we chose either Pix2Pix or polarized self-attention (PSA) Pix2Pix, which we developed for this work, as the deep learning network. The targets for all approaches were the axial slices of images normalized with the direct normalization method. We performed Monte Carlo simulations of ten voxelized phantoms with the SimSET simulation tool and produced 26,000 pairs of axial image slices for training and testing. The results showed that 2.5D PSA Pix2Pix trained with geometric-factors-corrected input images achieved the best performance among all the methods we tested. All approaches improved the general image-quality figures of merit, peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), from ∼15% to ∼55%, and 2.5D PSA Pix2Pix showed the highest PSNR (28.074) and SSIM (0.921). Lesion detectability, measured with region-of-interest (ROI) PSNR, SSIM, normalized contrast recovery coefficient (NCRC), and contrast-to-noise ratio (CNR), was generally improved for all approaches, and 2.5D PSA Pix2Pix trained with geometric-factors-corrected input images achieved the highest ROI PSNR (28.920) and SSIM (0.973).
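The image-quality figures of merit cited in both abstracts can be computed with standard tools. The sketch below uses scikit-image for PSNR and SSIM and a common simplified form of the ROI contrast-to-noise ratio; the function names, ROI masks, and the CNR formula are assumptions for illustration, and the paper's exact ROI definitions may differ.

```python
# Rough sketch (assumptions, not the paper's exact definitions) of the reported
# figures of merit. PSNR and SSIM come from scikit-image; the ROI contrast-to-
# noise ratio uses a common simplified form. `ref` and `test` are 2D numpy
# arrays; `lesion_mask` and `background_mask` are boolean ROI masks.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_foms(ref: np.ndarray, test: np.ndarray) -> dict:
    """Whole-image PSNR and SSIM of `test` against the reference image."""
    rng = float(ref.max() - ref.min())
    return {
        "psnr": peak_signal_noise_ratio(ref, test, data_range=rng),
        "ssim": structural_similarity(ref, test, data_range=rng),
    }

def roi_cnr(img: np.ndarray,
            lesion_mask: np.ndarray,
            background_mask: np.ndarray) -> float:
    """Contrast-to-noise ratio between a lesion ROI and a background ROI."""
    lesion_mean = img[lesion_mask].mean()
    background_mean = img[background_mask].mean()
    return (lesion_mean - background_mean) / img[background_mask].std()
```

The ROI versions of PSNR and SSIM reported in the paper would restrict the same computations to the masked region around each lesion.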