BACKGROUND: The retinal vasculature, a crucial component of the human body, reflects various diseases such as cardiovascular disease, glaucoma, and retinopathy. Accurate segmentation of retinal vessels in fundus images is essential for diagnosing and understanding these conditions. However, existing segmentation models often struggle with images from different sources, making accurate segmentation of cross-source fundus images challenging.
METHODS: To address the cross-source segmentation problem, this paper proposes a novel Multi-level Adversarial Learning and Pseudo-label Denoising-based Self-training Framework (MLAL&PDSF). Building on our previously proposed Multiscale Context Gating with Breakpoint and Spatial Dual Attention Network (MCG&BSA-Net), MLAL&PDSF introduces a multi-level adversarial network that operates at both the feature and image levels to align the distributions of the target and source domains. In addition, it employs a distance-comparison technique to refine the pseudo-labels generated during self-training: by comparing the distance between pseudo-labels and network predictions, the framework identifies and corrects inaccurate regions, improving the accuracy of fine-vessel segmentation.
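The segmentation-network side of the multi-level adversarial objective can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the function names (`bce`, `multi_level_adv_loss`), the loss weights, and the use of plain binary cross-entropy are all hypothetical stand-ins for the actual discriminator losses at the feature and image levels.

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy between predicted probabilities p and labels y."""
    p = np.clip(p, 1e-7, 1 - 1e-7)  # avoid log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def multi_level_adv_loss(d_feat_tgt, d_img_tgt, lam_feat=0.01, lam_img=0.001):
    """Segmentation-network side of a two-level adversarial objective.

    d_feat_tgt / d_img_tgt: discriminator outputs (prob. "source") for
    target-domain features and target-domain output images. The generator
    is rewarded for fooling both discriminators into predicting "source"
    (label 1), which pushes target and source distributions together.
    Loss weights lam_feat / lam_img are illustrative values only.
    """
    ones_f = np.ones_like(d_feat_tgt)
    ones_i = np.ones_like(d_img_tgt)
    return lam_feat * bce(d_feat_tgt, ones_f) + lam_img * bce(d_img_tgt, ones_i)
```

When the discriminators are fooled (outputs near 1 on target data), this loss is small; when they confidently spot the target domain, it grows, driving the alignment.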
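The distance-comparison idea for pseudo-label denoising can likewise be sketched in a few lines. Again a hypothetical simplification, assuming binary pseudo-labels and per-pixel probability predictions: the function name `denoise_pseudo_labels`, the absolute-difference distance, and the threshold `tau` are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def denoise_pseudo_labels(pseudo_labels, predictions, tau=0.3):
    """Distance-based pseudo-label refinement (illustrative sketch).

    pseudo_labels: binary vessel map {0, 1} from a previous self-training round.
    predictions:   current network vessel probabilities in [0, 1].
    Pixels where |pseudo_label - prediction| exceeds tau are treated as noisy
    and flipped to agree with the current prediction; a boolean mask of the
    trusted (unchanged) pixels is returned alongside the refined labels.
    """
    dist = np.abs(pseudo_labels.astype(float) - predictions)
    noisy = dist > tau
    refined = pseudo_labels.copy()
    refined[noisy] = (predictions[noisy] > 0.5).astype(pseudo_labels.dtype)
    return refined, ~noisy
```

In a full self-training loop, the refined labels (or only the trusted subset) would supervise the next round, limiting the error accumulation that raw pseudo-labels cause on thin vessels.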
RESULTS: We conducted extensive validation and comparative experiments on the CHASEDB1, STARE, and HRF datasets to evaluate the efficacy of MLAL&PDSF. The evaluation metrics were the area under the receiver operating characteristic curve (AUC), sensitivity (SE), specificity (SP), accuracy (ACC), and balanced F-score (F1). The unsupervised domain-adaptive segmentation results are strong: for DRIVE to CHASEDB1, AUC: 0.9806, SE: 0.7400, SP: 0.9737, ACC: 0.9874, and F1: 0.8851; for DRIVE to STARE, AUC: 0.9827, SE: 0.7944, SP: 0.9651, ACC: 0.9826, and F1: 0.8326.
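For readers reproducing the comparison, the per-pixel metrics SE, SP, ACC, and F1 follow directly from the binary confusion matrix (AUC additionally requires the continuous probability map and is omitted here). A small self-contained sketch, with the helper name `vessel_metrics` being my own:

```python
import numpy as np

def vessel_metrics(pred, gt):
    """SE, SP, ACC, F1 for a binary vessel prediction vs. ground truth.

    SE  = TP / (TP + FN)        (sensitivity / recall)
    SP  = TN / (TN + FP)        (specificity)
    ACC = (TP + TN) / N         (pixel accuracy)
    F1  = 2TP / (2TP + FP + FN) (balanced F-score / Dice)
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    return {
        "SE": tp / (tp + fn),
        "SP": tn / (tn + fp),
        "ACC": (tp + tn) / gt.size,
        "F1": 2 * tp / (2 * tp + fp + fn),
    }
```

Note that retinal vessel benchmarks conventionally compute these only inside the field-of-view mask; the sketch above ignores that masking step.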
CONCLUSIONS: These results demonstrate the effectiveness and robustness of MLAL&PDSF in achieving accurate segmentation on cross-domain retinal vessel datasets. The framework lays a solid foundation for further advances in cross-domain segmentation and supports the diagnosis and understanding of related diseases.