Keywords: Coronary angiography segmentation; Graph neural network; Graph-based vessel representation; Multi-scale edge feature attention; Retinal vessel segmentation; Vision transformer

MeSH: Humans; Neural Networks, Computer; Retinal Vessels / diagnostic imaging; Algorithms; Coronary Angiography / methods; Coronary Vessels / diagnostic imaging / physiology; Image Processing, Computer-Assisted / methods; Deep Learning; Coronary Artery Disease / diagnostic imaging / physiopathology

Source: DOI: 10.1016/j.neunet.2024.106356

Abstract:
Blood vessel segmentation is a crucial stage in extracting morphological characteristics of vessels for the clinical diagnosis of fundus and coronary artery disease. However, traditional convolutional neural networks (CNNs) are confined to learning local vessel features, making it challenging to capture graph-structural information and failing to perceive the global context of vessels. Therefore, we propose a novel graph neural network-guided vision transformer enhanced network (G2ViT) for vessel segmentation. G2ViT skillfully orchestrates the Convolutional Neural Network, Graph Neural Network, and Vision Transformer to enhance comprehension of the entire graphical structure of blood vessels. To achieve deeper insights into the global graph structure and higher-level global context cognizance, we investigate a graph neural network-guided vision transformer module. This module constructs a graph-structured representation in a novel manner, using the high-level features extracted by CNNs for graph reasoning. To increase the receptive field while ensuring minimal loss of edge information, G2ViT introduces a multi-scale edge feature attention module (MEFA), leveraging dilated convolutions with different dilation rates and the Sobel edge detection algorithm to obtain multi-scale edge information of vessels. To avoid critical information loss during upsampling and downsampling, we design a multi-level feature fusion module (MLF2) to fuse complementary information between coarse and fine features. Experiments on retinal vessel datasets (DRIVE, STARE, CHASE_DB1, and HRF) and coronary angiography datasets (DCA1 and CHUAC) indicate that G2ViT excels in robustness, generality, and applicability. Furthermore, it has acceptable inference time and computational complexity and presents a new solution for blood vessel segmentation.
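
The abstract describes graph reasoning over high-level CNN features. The following is a minimal, hypothetical sketch of that general idea only: coarse feature-map positions are treated as graph nodes, each node is linked to its k most similar nodes in feature space, and one normalized-adjacency (GCN-style) propagation step is applied. The class name `GraphReasoningSketch`, the cosine-similarity top-k graph construction, and the choice of k are illustrative assumptions, not the paper's GNN-guided vision transformer module.

```python
# Illustrative sketch only: graph reasoning over high-level CNN features.
# Graph construction rule, k, and module name are assumptions for illustration.
import torch
import torch.nn as nn


class GraphReasoningSketch(nn.Module):
    def __init__(self, channels: int, k: int = 8):
        super().__init__()
        self.k = k
        self.proj = nn.Linear(channels, channels)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (N, C, H, W) high-level CNN features -> nodes of shape (N, H*W, C)
        n, c, h, w = feat.shape
        nodes = feat.flatten(2).transpose(1, 2)
        # Cosine similarity between node features defines candidate edges.
        normed = nn.functional.normalize(nodes, dim=-1)
        sim = normed @ normed.transpose(1, 2)              # (N, HW, HW)
        # Keep the top-k most similar neighbours per node as the adjacency.
        topk = sim.topk(self.k, dim=-1)
        adj = torch.zeros_like(sim).scatter_(-1, topk.indices, 1.0)
        adj = (adj + adj.transpose(1, 2)).clamp(max=1.0)   # symmetrize
        # Row-normalize (D^-1 A), then one propagation + projection step.
        adj = adj / adj.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        out = torch.relu(self.proj(adj @ nodes)) + nodes   # residual update
        return out.transpose(1, 2).reshape(n, c, h, w)


if __name__ == "__main__":
    x = torch.randn(1, 64, 16, 16)
    print(GraphReasoningSketch(64)(x).shape)  # torch.Size([1, 64, 16, 16])
```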
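
The MEFA module is described as combining dilated convolutions at several dilation rates with Sobel edge detection. The sketch below illustrates that combination under stated assumptions: the channel sizes, the 1x1 fusion, and the sigmoid gating are invented for illustration and do not reproduce the authors' MEFA implementation.

```python
# Illustrative sketch only: multi-scale dilated convolutions + fixed Sobel edges.
# Fusion and gating details are assumptions, not the paper's MEFA module.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleEdgeSketch(nn.Module):
    def __init__(self, channels: int, dilation_rates=(1, 2, 4)):
        super().__init__()
        # One 3x3 dilated convolution per rate; padding preserves spatial size.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in dilation_rates
        )
        # Fixed Sobel kernels (x and y), applied channel-wise via grouped conv.
        sobel_x = torch.tensor([[-1., 0., 1.],
                                [-2., 0., 2.],
                                [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        kernel = torch.stack([sobel_x, sobel_y]).unsqueeze(1)   # (2, 1, 3, 3)
        self.register_buffer("sobel_kernel", kernel.repeat(channels, 1, 1, 1))
        self.channels = channels
        # 1x1 convolution fusing dilated branches and the edge magnitude (assumed).
        self.fuse = nn.Conv2d(channels * (len(dilation_rates) + 1),
                              channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = [branch(x) for branch in self.branches]
        # Channel-wise Sobel responses: groups=C yields (N, 2C, H, W).
        grad = F.conv2d(x, self.sobel_kernel, padding=1, groups=self.channels)
        gx, gy = grad[:, 0::2], grad[:, 1::2]
        edge = torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)             # edge magnitude
        fused = self.fuse(torch.cat(multi_scale + [edge], dim=1))
        # Attention-style gating: reweight the input by the fused response.
        return x * torch.sigmoid(fused)


if __name__ == "__main__":
    feat = torch.randn(1, 32, 64, 64)
    print(MultiScaleEdgeSketch(32)(feat).shape)  # torch.Size([1, 32, 64, 64])
```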