Keywords: convolutional neural network; cross modality; image-guided radiation therapy; multimodal image registration

MeSH: Algorithms; Humans; Image Processing, Computer-Assisted; Multimodal Imaging; Radiographic Image Enhancement; Radiotherapy, Image-Guided

Source: DOI:10.1088/1361-6560/ac195e

Abstract:
A long-standing problem in image-guided radiotherapy is that inferior intraoperative images pose a difficult problem for automatic registration algorithms. In particular, for digital radiography (DR) and digitally reconstructed radiographs (DRR), the blurred, low-contrast, and noisy DR images make multimodal DR-DRR registration challenging. Therefore, we propose a novel CNN-based method called CrossModalNet that exploits the high-quality preoperative modality (DRR) to compensate for the limitations of the intraoperative images (DR), thereby improving registration accuracy. The method consists of two parts: DR-DRR contour prediction and contour-based rigid registration. We designed the CrossModal Attention Module and the CrossModal Refine Module to fully exploit multiscale crossmodal features and to implement crossmodal interactions during the feature encoding and decoding stages. The predicted anatomical contours of DR-DRR are then registered by the classic mutual information method. We collected 2486 patient scans to train CrossModalNet and 170 scans to test its performance. The results show that it outperforms classic and state-of-the-art methods, with a 95th percentile Hausdorff distance of 5.82 pixels and a registration accuracy of 81.2%. The code is available at https://github.com/lc82111/crossModalNet.
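The second stage of the pipeline registers the predicted contours with classic mutual information (MI). As a minimal illustration of the principle (not the authors' implementation, which uses the full rigid transform on contour images), the sketch below computes MI from a joint intensity histogram in NumPy and maximizes it over an exhaustive search of integer translations:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two equally shaped images via their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability p(x, y)
    px = pxy.sum(axis=1)               # marginal p(x)
    py = pxy.sum(axis=0)               # marginal p(y)
    nz = pxy > 0                       # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def register_translation(fixed, moving, max_shift=5):
    """Find the integer (dy, dx) shift of `moving` that maximizes MI."""
    best_mi, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            mi = mutual_information(fixed, shifted)
            if mi > best_mi:
                best_mi, best_shift = mi, (dy, dx)
    return best_shift
```

A full rigid registration would add rotation and a continuous optimizer (as in ITK's Mattes MI metric), but the histogram-based objective is the same.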