Keywords: CT image; CoordConv; Lung deformation; Neural network; Respiratory motion

MeSH: Humans; Lung Neoplasms / diagnostic imaging, radiotherapy; Lung / diagnostic imaging; Four-Dimensional Computed Tomography / methods; Neural Networks, Computer; Thorax; Respiration

Source: DOI: 10.1016/j.cmpb.2023.107998

Abstract:
OBJECTIVE: Estimating the three-dimensional (3D) deformation of the lung is important for accurate dose delivery in radiotherapy and precise surgical guidance in lung surgery navigation. Additional 4D-CT information is often required to eliminate the effect of individual variations and obtain a more accurate estimation of lung deformation. However, this results in increased radiation dose. Therefore, we propose a novel method that estimates lung tissue deformation from depth maps and two CT phases per patient.
METHODS: The method models the 3D motion of each voxel as a linear displacement along a direction vector, with an amplitude and phase that vary with voxel location (the motion model and the CoordConv layer are sketched after the abstract). The direction vector and amplitude are derived from registering the CT images at the end-of-exhale (EOE) and end-of-inhale (EOI) phases. The voxel phase is estimated by a neural network. Coordinate convolution (CoordConv) is used to fuse multimodal data and embed absolute position information. The network takes the front and side views, as well as the previous-phase views, as inputs to enhance accuracy.
RESULTS: We evaluate the proposed method on two datasets, DIR-Lab and 4D-Lung, and obtain average errors of 2.11 mm and 1.36 mm, respectively. The method achieves real-time performance of less than 7 ms per frame on an NVIDIA GeForce RTX 2080 Ti GPU.
CONCLUSIONS: Compared with previous methods, our method achieves comparable or even better accuracy with fewer CT phases.
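The abstract describes the motion model only in words. The following is a minimal sketch of one consistent reading; the symbols (d, a, s, φ, u) are hypothetical notation of my own, not the paper's:

```latex
% Sketch of the per-voxel linear motion model described in METHODS.
% All symbols are hypothetical; the abstract does not give the paper's notation.
\[
  \mathbf{d}(\mathbf{x}, t) \;=\; a(\mathbf{x})\,
      s\bigl(t + \varphi(\mathbf{x})\bigr)\,\mathbf{u}(\mathbf{x})
\]
% d(x, t): displacement of the voxel at position x at time t
% u(x), a(x): direction vector and amplitude, from the EOE-EOI registration
% phi(x): per-voxel phase, estimated by the neural network
% s(.): scalar respiratory-progress function with values in [0, 1]
```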
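CoordConv itself (Liu et al., 2018) simply concatenates normalized coordinate channels to a feature map before convolving, so the filters can condition on absolute position. The paper's network architecture is not given in the abstract; below is a generic PyTorch sketch of such a layer, with the class name, channel counts, and input sizes all hypothetical:

```python
import torch
import torch.nn as nn


class CoordConv2d(nn.Module):
    """Minimal CoordConv layer: concatenate normalized x/y coordinate
    channels to the input before a standard convolution, so the filters
    can use absolute position information."""

    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        # Two extra input channels carry the coordinate maps.
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        # Coordinate grids, normalized to [-1, 1].
        ys = torch.linspace(-1.0, 1.0, h, device=x.device, dtype=x.dtype)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device, dtype=x.dtype)
        grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
        coords = torch.stack([grid_x, grid_y]).expand(b, -1, -1, -1)
        return self.conv(torch.cat([x, coords], dim=1))


# Hypothetical usage: fuse a front depth map, a side depth map, and a
# previous-phase view as three input channels (sizes are illustrative).
layer = CoordConv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
out = layer(torch.randn(1, 3, 128, 128))  # -> shape (1, 16, 128, 128)
```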