retinal vessel segmentation

  • Article type: Journal Article
    Retinal vessel segmentation is crucial for the diagnosis of ophthalmic and cardiovascular diseases. However, retinal vessels are densely and irregularly distributed, with many capillaries blending into the background, and exhibit low contrast. Moreover, encoder-decoder-based networks for retinal vessel segmentation suffer from an irreversible loss of detailed features due to repeated encoding and decoding, leading to incorrect segmentation of the vessels. Meanwhile, single-dimensional attention mechanisms are limited, neglecting the importance of multidimensional features. To solve these issues, in this paper, we propose a detail-enhanced attention feature fusion network (DEAF-Net) for retinal vessel segmentation. First, the detail-enhanced residual block (DERB) module is proposed to strengthen the capacity for detailed representation, ensuring that intricate features are efficiently maintained during the segmentation of delicate vessels. Second, the multidimensional collaborative attention encoder (MCAE) module is proposed to optimize the extraction of multidimensional information. Then, the dynamic decoder (DYD) module is introduced to preserve spatial information during the decoding process and reduce the information loss caused by upsampling operations. Finally, the proposed detail-enhanced feature fusion (DEFF) module, composed of the DERB, MCAE and DYD modules, fuses feature maps from both encoding and decoding and achieves effective aggregation of multi-scale contextual information. Experiments conducted on the DRIVE, CHASEDB1, and STARE datasets achieve Sen of 0.8305, 0.8784, and 0.8654, and AUC of 0.9886, 0.9913, and 0.9911, respectively, demonstrating the performance of our proposed network, particularly in the segmentation of fine retinal vessels.
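The Sen and AUC figures above follow standard definitions. As a small illustrative sketch (not the authors' code), sensitivity and AUC for a binary vessel mask can be computed as follows; the AUC uses the Mann-Whitney rank formula, with ties broken arbitrarily:

```python
import numpy as np

def sensitivity(pred, gt):
    """Sen = TP / (TP + FN) over binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return tp / (tp + fn)

def auc_score(scores, gt):
    """AUC via the Mann-Whitney rank statistic (ties broken arbitrarily)."""
    gt = gt.astype(bool).ravel()
    order = scores.ravel().argsort()          # ascending score order
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, order.size + 1)
    n_pos, n_neg = gt.sum(), (~gt).sum()
    return (ranks[gt].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

On full images these run over flattened pixel arrays; thresholded predictions feed `sensitivity`, raw probabilities feed `auc_score`.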

  • Article type: Journal Article
    Accurate segmentation of retinal vessels is of great significance for computer-aided diagnosis and treatment of many diseases. Due to the limited number of retinal vessel samples and the scarcity of labeled samples, and since grey theory excels in handling problems of "few data, poor information", this paper proposes a novel grey relational-based method for retinal vessel segmentation. Firstly, a noise-adaptive discrimination filtering algorithm based on grey relational analysis (NADF-GRA) is designed to enhance the image. Secondly, a threshold segmentation model based on grey relational analysis (TS-GRA) is designed to segment the enhanced vessel image. Finally, a post-processing stage involving hole filling and removal of isolated pixels is applied to obtain the final segmentation output. The performance of the proposed method is evaluated using multiple different measurement metrics on the publicly available DRIVE, STARE and HRF digital retinal datasets. Experimental analysis showed that the average accuracy and specificity on the DRIVE dataset were 96.03% and 98.51%. The mean accuracy and specificity on the STARE dataset were 95.46% and 97.85%. Precision, F1-score, and Jaccard index on the HRF dataset all demonstrated high-performance levels. The method proposed in this paper is superior to the current mainstream methods.
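NADF-GRA and TS-GRA are built on grey relational analysis; their exact formulation is not given in the abstract, but the classical Deng grey relational coefficient they build on can be sketched as follows (`rho` is the usual distinguishing coefficient, 0.5 by convention):

```python
import numpy as np

def grey_relational_coeff(reference, candidates, rho=0.5):
    """Grey relational coefficients of each candidate sequence against a
    reference sequence. A coefficient near 1 means the candidate tracks
    the reference closely at that point."""
    delta = np.abs(candidates - reference)   # deviation sequences
    d_min, d_max = delta.min(), delta.max()  # global two-level min/max
    return (d_min + rho * d_max) / (delta + rho * d_max)
```

In a filtering or thresholding context, `reference` would be an ideal local pattern (e.g. a noise-free neighborhood or a vessel prototype) and `candidates` the observed pixel neighborhoods to score.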

  • Article type: Journal Article
    Retinal vessel segmentation is crucial for diagnosing and monitoring various eye diseases such as diabetic retinopathy, glaucoma, and hypertension. In this study, we examine how sharpness-aware minimization (SAM) can improve RF-UNet's generalization performance. RF-UNet is a novel model for retinal vessel segmentation. We focused our experiments on the digital retinal images for vessel extraction (DRIVE) dataset, which is a benchmark for retinal vessel segmentation, and our test results show that adding SAM to the training procedure leads to notable improvements. Compared to the non-SAM model (training loss of 0.45709 and validation loss of 0.40266), the SAM-trained RF-UNet model achieved a significant reduction in both training loss (0.094225) and validation loss (0.08053). Furthermore, compared to the non-SAM model (training accuracy of 0.90169 and validation accuracy of 0.93999), the SAM-trained model demonstrated higher training accuracy (0.96225) and validation accuracy (0.96821). Additionally, the model performed better in terms of sensitivity, specificity, AUC, and F1 score, indicating improved generalization to unseen data. Our results corroborate the notion that SAM facilitates the learning of flatter minima, thereby improving generalization, and are consistent with other research highlighting the advantages of advanced optimization methods. With wider implications for other medical imaging tasks, these results imply that SAM can successfully reduce overfitting and enhance the robustness of retinal vessel segmentation models. Prospective research avenues encompass verifying the model on larger and more diverse datasets and investigating its practical implementation in real-world clinical situations.
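SAM itself is well documented: each update first ascends to a worst-case perturbation within a rho-ball around the current weights, then descends using the gradient taken at the perturbed point. A minimal sketch on a toy quadratic loss (illustrative only, not the RF-UNet training code):

```python
import numpy as np

def sam_step(w, grad_fn, rho=0.05, lr=0.1):
    """One sharpness-aware minimization update:
    1) first-order ascent to the worst-case point w + eps in a rho-ball,
    2) descent using the gradient evaluated at those perturbed weights."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction, scaled to the ball
    return w - lr * grad_fn(w + eps)             # descend with the "sharp" gradient

# toy quadratic loss L(w) = ||w||^2 / 2, whose gradient is w itself
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, lambda v: v)
```

The loop drives the toy loss toward its flat minimum; in practice `grad_fn` would be a backprop pass, making SAM roughly twice the cost per step of plain SGD.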

  • Article type: Journal Article
    BACKGROUND: The retinal vasculature, a crucial component of the human body, mirrors various illnesses such as cardiovascular disease, glaucoma, and retinopathy. Accurate segmentation of retinal vessels in funduscopic images is essential for diagnosing and understanding these conditions. However, existing segmentation models often struggle with images from different sources, making accurate segmentation in crossing-source fundus images challenging.
    METHODS: To address the crossing-source segmentation issues, this paper proposes a novel Multi-level Adversarial Learning and Pseudo-label Denoising-based Self-training Framework (MLAL&PDSF). Expanding on our previously proposed Multiscale Context Gating with Breakpoint and Spatial Dual Attention Network (MCG&BSA-Net), MLAL&PDSF introduces a multi-level adversarial network that operates at both the feature and image layers to align distributions between the target and source domains. Additionally, it employs a distance comparison technique to refine pseudo-labels generated during the self-training process. By comparing the distance between the pseudo-labels and the network predictions, the framework identifies and corrects inaccuracies, thus enhancing the accuracy of the fine vessel segmentation.
    RESULTS: We have conducted extensive validation and comparative experiments on the CHASEDB1, STARE, and HRF datasets to evaluate the efficacy of the MLAL&PDSF. The evaluation metrics included the area under the receiver operating characteristic curve (AUC), sensitivity (SE), specificity (SP), accuracy (ACC), and balanced F-score (F1). The performance results from unsupervised domain adaptive segmentation are remarkable: for DRIVE to CHASEDB1, results are AUC: 0.9806, SE: 0.7400, SP: 0.9737, ACC: 0.9874, and F1: 0.8851; for DRIVE to STARE, results are AUC: 0.9827, SE: 0.7944, SP: 0.9651, ACC: 0.9826, and F1: 0.8326.
    CONCLUSIONS: These results demonstrate the effectiveness and robustness of MLAL&PDSF in achieving accurate segmentation results from crossing-domain retinal vessel datasets. The framework lays a solid foundation for further advancements in cross-domain segmentation and enhances the diagnosis and understanding of related diseases.
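The distance-comparison refinement of pseudo-labels is described only at a high level. One plausible minimal reading, with an assumed `margin` threshold and ignore value (both illustrative, not from the paper), is:

```python
import numpy as np

def denoise_pseudo_labels(pseudo, prob, margin=0.3):
    """Mark pseudo-labelled pixels whose label disagrees with the current
    network probability by more than `margin` as ignore (-1), so they drop
    out of the self-training loss. `margin` and the ignore value are
    illustrative assumptions, not the paper's exact scheme."""
    refined = pseudo.astype(float)
    refined[np.abs(refined - prob) > margin] = -1.0
    return refined
```

During self-training, the refined map would replace the raw pseudo-labels, and the loss would skip pixels flagged with the ignore value.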

  • Article type: Journal Article
    Accurate segmentation of retinal vessels in fundus images is of great importance for the diagnosis of numerous ocular diseases. However, due to the complex characteristics of fundus images, such as various lesions, image noise and complex backgrounds, the pixel features of some vessels differ significantly, which makes it easy for segmentation networks to misjudge these vessels as noise, affecting the accuracy of the overall segmentation. Accurately segmenting retinal vessels in complex situations therefore remains a great challenge. To address this problem, a partial class activation mapping guided graph convolution cascaded U-Net for retinal vessel segmentation is proposed. The core idea of the proposed network is first to use the partial class activation mapping guided graph convolutional network to eliminate the differences of local vessels and generate feature maps with global consistency; these feature maps are subsequently refined by the segmentation network U-Net to achieve better segmentation results. Specifically, a new neural network block, called EdgeConv, is stacked in multiple layers to form a graph convolutional network that transfers and updates information from local to global, gradually enhancing the feature consistency of graph nodes. Simultaneously, to suppress noise information that may be transmitted through graph convolution and thus reduce its adverse effects on the segmentation results, partial class activation mapping is introduced. Partial class activation mapping guides the information transmission between graph nodes and effectively activates vessel features through classification labels, thereby improving segmentation accuracy. The performance of the proposed method is validated on four different fundus image datasets. Compared with existing state-of-the-art methods, the proposed method can improve the integrity of vessels to a certain extent when the pixel features of local vessels differ significantly, as caused by objective factors such as inappropriate illumination and exudates. Moreover, the proposed method shows robustness when segmenting complex retinal vessels.
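EdgeConv is a known graph-convolution operator (from dynamic graph CNNs): for each node it builds edge features from the node and its neighbor offsets, projects them, and max-pools over the neighborhood. A minimal dense-feature sketch, with an assumed shared linear projection in place of the usual MLP:

```python
import numpy as np

def edge_conv(x, neighbors, weight):
    """EdgeConv aggregation: for each node i, form edge features
    [x_i, x_j - x_i] over its neighbors j, apply a shared linear
    projection `weight` of shape (2*d, d_out), and max-pool over
    the neighborhood."""
    out = np.empty((x.shape[0], weight.shape[1]))
    for i, js in enumerate(neighbors):
        edges = np.hstack([np.repeat(x[i:i + 1], len(js), axis=0),
                           x[js] - x[i]])      # (k, 2*d) edge features
        out[i] = np.maximum.reduce(edges @ weight)  # max over neighbors
    return out
```

Stacking several such layers lets node features propagate from local neighborhoods toward global consistency, which is the role the abstract assigns to the EdgeConv stack.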

  • Article type: Journal Article
    Retinal vessels play a pivotal role as biomarkers in the detection of retinal diseases, including hypertensive retinopathy. The manual identification of these retinal vessels is both resource-intensive and time-consuming. The fidelity of vessel segmentation in automated methods directly depends on the quality of the fundus images. In instances of sub-optimal image quality, applying deep learning-based methodologies emerges as a more effective approach for precise segmentation. We propose a heterogeneous neural network combining the benefits of local semantic information extraction by convolutional neural networks and long-range spatial feature mining by transformer network structures. Such a cross-attention network structure boosts the model's ability to tackle vessel structures in retinal images. Experiments on four publicly available datasets demonstrate our model's superior performance on vessel segmentation and its great potential for hypertensive retinopathy quantification.
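The cross-attention coupling of the CNN and transformer branches can be sketched in its generic single-head form; the paper's exact wiring is not specified here, and `wq`, `wk`, `wv` are assumed learned projections:

```python
import numpy as np

def cross_attention(q_feats, kv_feats, wq, wk, wv):
    """Single-head cross-attention: queries come from one branch
    (e.g. CNN tokens), keys/values from the other (e.g. transformer
    tokens), so each branch can attend over the other's features."""
    q, k, v = q_feats @ wq, kv_feats @ wk, kv_feats @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])           # scaled dot-product
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)          # row-wise softmax
    return attn @ v
```

With zero queries the attention is uniform, so the output is simply the mean of the value tokens; useful as a sanity check of the softmax.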

  • Article type: Journal Article
    Retinal vessel segmentation plays a crucial role in medical image analysis, aiding ophthalmologists in disease diagnosis, monitoring, and treatment guidance. However, due to the complex boundary structure and rich texture features in retinal blood vessel images, existing methods have challenges in the accurate segmentation of blood vessel boundaries. In this study, we propose the texture-driven Swin-UNet with enhanced boundary-wise perception. Firstly, we designed a Cross-level Texture Complementary Module (CTCM) to fuse feature maps at different scales during the encoding stage, thereby recovering detailed features lost in the downsampling process. Additionally, we introduced a Pixel-wise Texture Swin Block (PT Swin Block) to improve the model's ability to localize vessel boundary and contour information. Finally, we introduced an improved Hausdorff distance loss function to further enhance the accuracy of vessel boundary segmentation. The proposed method was evaluated on the DRIVE and CHASEDB1 datasets, and the experimental results demonstrate that our model obtained superior performance in terms of Accuracy (ACC), Sensitivity (SE), Specificity (SP), and F1 score (F1), and the accuracy of vessel boundary segmentation was significantly improved.
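The improved Hausdorff loss is not specified in the abstract, but the plain symmetric Hausdorff distance it starts from is standard: the larger of the two directed worst-case boundary-to-boundary distances.

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two boundary point sets
    a (n, 2) and b (m, 2): the max over both directed distances."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (n, m) pairwise
    return max(d.min(axis=1).max(),   # directed a -> b
               d.min(axis=0).max())   # directed b -> a
```

As a loss this is typically softened (e.g. averaged or distance-transform based) to be differentiable and robust to outliers, which is presumably the direction of the paper's "improved" variant.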

  • Article type: Journal Article
    Many major diseases of the retina often show symptoms of lesions in the fundus of the eye. The extraction of blood vessels from retinal fundus images is essential to assist doctors. Some existing methods do not fully extract the detailed features of retinal images or lose some information, making it difficult to accurately segment capillaries located at the edges of the images. In this paper, we propose a multi-scale retinal vessel segmentation network (SCIE_Net) based on skip connection information enhancement. Firstly, the network processes retinal images at multiple scales so that it captures features at different scales. Secondly, a feature aggregation module is proposed to aggregate the rich information of the shallow network. Further, the skip connection information enhancement module is proposed to take into account the detailed features of the shallow layers and the advanced features of the deeper network, avoiding incomplete information interaction between the layers of the network. Finally, SCIE_Net achieves better vessel segmentation performance on the publicly available retinal image standard datasets DRIVE, CHASE_DB1, and STARE.
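Processing at multiple scales and aggregating the results back at full resolution can be sketched with average pooling and nearest-neighbor upsampling; this illustrates the idea only, not SCIE_Net's actual modules:

```python
import numpy as np

def multiscale_aggregate(img, scales=(1, 2, 4)):
    """Pool a 2D map at several scales, upsample each back to full
    resolution, and average, so fine detail and coarse context are
    combined in a single map."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for s in scales:
        # average-pool by factor s (cropping any remainder)
        pooled = img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        # nearest-neighbor upsample back and accumulate
        out[:(h // s) * s, :(w // s) * s] += np.kron(pooled, np.ones((s, s)))
    return out / len(scales)
```

In a real network, each scale would pass through learned convolutions before aggregation; here the pooling/upsampling skeleton is the point.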

  • Article type: Journal Article
    Blood vessel segmentation is a crucial stage in extracting morphological characteristics of vessels for the clinical diagnosis of fundus and coronary artery disease. However, traditional convolutional neural networks (CNNs) are confined to learning local vessel features, making it challenging to capture graph structural information and causing them to fail to perceive the global context of vessels. Therefore, we propose a novel graph neural network-guided vision transformer enhanced network (G2ViT) for vessel segmentation. G2ViT skillfully orchestrates the Convolutional Neural Network, Graph Neural Network, and Vision Transformer to enhance comprehension of the entire graphical structure of blood vessels. To achieve deeper insights into the global graph structure and higher-level global context cognizance, we investigate a graph neural network-guided vision transformer module. This module constructs graph-structured representations in an unprecedented manner using the high-level features extracted by CNNs for graph reasoning. To increase the receptive field while ensuring minimal loss of edge information, G2ViT introduces a multi-scale edge feature attention module (MEFA), leveraging dilated convolutions with different dilation rates and the Sobel edge detection algorithm to obtain multi-scale edge information of vessels. To avoid critical information loss during upsampling and downsampling, we design a multi-level feature fusion module (MLF2) to fuse complementary information between coarse and fine features. Experiments on retinal vessel datasets (DRIVE, STARE, CHASE_DB1, and HRF) and coronary angiography datasets (DCA1 and CHUAC) indicate that G2ViT excels in robustness, generality, and applicability. Furthermore, it has acceptable inference time and computational complexity and presents a new solution for blood vessel segmentation.
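The Sobel component of MEFA is standard edge detection; a direct (unoptimized, valid-convolution) gradient-magnitude sketch:

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude of a 2D image: the edge cue MEFA feeds
    alongside its dilated convolutions. Output is (h-2, w-2), valid mode."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()  # horizontal gradient
            gy[i, j] = (patch * ky).sum()  # vertical gradient
    return np.hypot(gx, gy)
```

For thin vessels, the gradient magnitude responds on both sides of the vessel wall, which is why such edge maps are combined with learned features rather than used alone.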

  • Article type: Journal Article
    Retinal vessel segmentation based on deep learning is an important auxiliary method for assisting clinical doctors in diagnosing retinal diseases. However, existing methods often produce mis-segmentation when dealing with low contrast images and thin blood vessels, which affects the continuity and integrity of the vessel skeleton. In addition, existing deep learning methods tend to lose a lot of detailed information during training, which affects the accuracy of segmentation. To address these issues, we propose a novel dual-decoder based Cross-patch Feature Interactive Net with Edge Refinement (CFI-Net) for end-to-end retinal vessel segmentation. In the encoder part, a joint refinement down-sampling method (JRDM) is proposed to compress feature information in the process of reducing image size, so as to reduce the loss of thin vessels and vessel edge information during the encoding process. In the decoder part, we adopt a dual-path model based on edge detection, and propose a Cross-patch Interactive Attention Mechanism (CIAM) in the main path to enhance multi-scale spatial channel features and transfer cross-spatial information. Consequently, it improves the network's ability to segment complete and continuous vessel skeletons, reducing vessel segmentation fractures. Finally, the Adaptive Spatial Context Guide Method (ASCGM) is proposed to fuse the prediction results of the two decoder paths, which enhances segmentation details while removing part of the background noise. We evaluated our model on two retinal image datasets and one coronary angiography dataset, achieving outstanding performance in comprehensive segmentation assessment metrics such as AUC and CAL. The experimental results showed that the proposed CFI-Net has superior segmentation performance compared with other existing methods, especially for thin vessels and vessel edges. The code is available at https://github.com/kita0420/CFI-Net.
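ASCGM's fusion of the two decoder paths is described only as adaptive; one plausible minimal form is a per-pixel gated convex combination. The gate here is supplied directly, whereas the actual module presumably learns it from spatial context:

```python
import numpy as np

def fuse_decoder_paths(main_prob, edge_prob, gate):
    """Per-pixel gated fusion of two decoder probability maps: `gate`
    in [0, 1] decides how much the edge path contributes at each pixel
    (e.g. more near boundaries, less in the background). Illustrative
    reading of an adaptive fusion step, not the paper's exact ASCGM."""
    return gate * edge_prob + (1.0 - gate) * main_prob
```

A convex combination keeps the output a valid probability map whenever both inputs are, which is one reason gated fusion is a common choice for merging decoder branches.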
