Keywords: Cross-patch feature interaction; Downsampling enhancement; Dual decoder; Retinal vessel segmentation; Spatial context guide

MeSH: Retinal Vessels / diagnostic imaging; Humans; Deep Learning; Image Processing, Computer-Assisted / methods; Algorithms

Source: DOI:10.1016/j.compbiomed.2024.108443

Abstract:
Retinal vessel segmentation based on deep learning is an important auxiliary method for assisting clinicians in diagnosing retinal diseases. However, existing methods often produce mis-segmentation when dealing with low-contrast images and thin vessels, which affects the continuity and integrity of the vessel skeleton. In addition, existing deep learning methods tend to lose a great deal of detail information during training, which affects segmentation accuracy. To address these issues, we propose a novel dual-decoder-based Cross-patch Feature Interactive Net with Edge Refinement (CFI-Net) for end-to-end retinal vessel segmentation. In the encoder, a joint refinement down-sampling method (JRDM) is proposed to compress feature information while reducing the image size, so as to reduce the loss of thin-vessel and vessel-edge information during encoding. In the decoder, we adopt a dual-path model based on edge detection and propose a Cross-patch Interactive Attention Mechanism (CIAM) in the main path to enhance multi-scale spatial-channel features and transfer cross-spatial information. Consequently, it improves the network's ability to segment complete and continuous vessel skeletons, reducing fractures in the segmented vessels. Finally, an Adaptive Spatial Context Guide Method (ASCGM) is proposed to fuse the prediction results of the two decoder paths, which enhances segmentation details while removing part of the background noise. We evaluated our model on two retinal image datasets and one coronary angiography dataset, achieving outstanding performance on comprehensive segmentation assessment metrics such as AUC and CAL. The experimental results show that the proposed CFI-Net has superior segmentation performance compared with other existing methods, especially for thin vessels and vessel edges. The code is available at https://github.com/kita0420/CFI-Net.
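The abstract only names the main components, so the sketch below is a minimal, assumption-based illustration of the dual-decoder layout it describes (encoder, a main vessel decoder plus an auxiliary edge-detection decoder, and a fusion step). Plain convolutional blocks stand in for JRDM, CIAM, and ASCGM, whose internals are not given here; all class and function names are hypothetical, and the authors' actual implementation is in the linked repository.

    # Hypothetical sketch of the dual-decoder layout described in the abstract.
    # JRDM, CIAM, and ASCGM are NOT reproduced; plain conv blocks stand in for them.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def conv_block(in_ch, out_ch):
        # Stand-in block; the paper's JRDM/CIAM modules would replace these.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    class DualDecoderSketch(nn.Module):
        """Encoder -> (main vessel decoder, edge decoder) -> fused prediction."""
        def __init__(self, in_ch=3, base=32):
            super().__init__()
            # Encoder: one downsampling stage (the paper uses JRDM here).
            self.enc1 = conv_block(in_ch, base)
            self.enc2 = conv_block(base, base * 2)
            self.pool = nn.MaxPool2d(2)
            # Main (vessel) decoder path; CIAM would sit on this path.
            self.dec_main = conv_block(base * 2 + base, base)
            self.head_main = nn.Conv2d(base, 1, 1)
            # Auxiliary edge-detection decoder path.
            self.dec_edge = conv_block(base * 2 + base, base)
            self.head_edge = nn.Conv2d(base, 1, 1)
            # Simple 1x1-conv fusion as a placeholder for ASCGM.
            self.fuse = nn.Conv2d(2, 1, 1)

        def forward(self, x):
            e1 = self.enc1(x)                       # full-resolution features
            e2 = self.enc2(self.pool(e1))           # downsampled features
            up = F.interpolate(e2, scale_factor=2, mode="bilinear",
                               align_corners=False)
            skip = torch.cat([up, e1], dim=1)       # skip connection
            vessel = self.head_main(self.dec_main(skip))  # main-path prediction
            edge = self.head_edge(self.dec_edge(skip))    # edge-path prediction
            fused = self.fuse(torch.cat([vessel, edge], dim=1))
            return torch.sigmoid(fused), torch.sigmoid(edge)

    # Usage: a dummy 64x64 RGB fundus patch.
    if __name__ == "__main__":
        model = DualDecoderSketch()
        seg, edge = model(torch.randn(1, 3, 64, 64))
        print(seg.shape, edge.shape)  # both torch.Size([1, 1, 64, 64])

The point of the layout is that the edge path produces a separate boundary prediction that is fused with the main vessel map, which is the mechanism the abstract credits with sharpening thin vessels and vessel edges; the actual fusion (ASCGM) is adaptive rather than a fixed 1x1 convolution.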