BACKGROUND: Glaucoma is a worldwide eye disease that can cause irreversible vision loss. Early detection of
glaucoma is important to reduce vision loss, and retinal fundus image examination is one of the most commonly used solutions for
glaucoma diagnosis due to its low cost. Clinically, the cup-disc ratio of fundus images is an important indicator for
glaucoma diagnosis. In recent years, an increasing number of algorithms have been proposed for segmentation and recognition of the optic disc (OD) and optic cup (OC), but these algorithms generally suffer from poor generalizability, limited segmentation performance, and low segmentation accuracy.
METHODS: We improved the YOLOv8 algorithm for the segmentation of the OD and OC. Firstly, a set of algorithms was designed to adapt the REFUGE dataset's ground-truth images to the input format of the YOLOv8 algorithm. Secondly, to improve segmentation performance, the network structure of YOLOv8 was improved by adding a ROI (Region of Interest) module and changing the bounding box regression loss function from CIoU to Focal-EIoU. Finally, the improved YOLOv8 algorithm was evaluated by training and testing on the REFUGE dataset.
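The format-adaptation step can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: it assumes the common REFUGE mask encoding (white 255 background, gray 128 optic disc, black 0 optic cup) and targets the YOLOv8 segmentation label format (one line per object: `class x1 y1 x2 y2 …` with coordinates normalized to [0, 1]). The boundary tracing here (angular sort of edge pixels around the centroid) is a simplification that is adequate for the roughly elliptical OD/OC regions.

```python
import numpy as np

# Assumed REFUGE gray levels: background 255, optic disc 128, optic cup 0.
OD_LEVEL, OC_LEVEL = 128, 0

def mask_to_yolo_polygon(mask, level, cls_id, n_points=32):
    """Convert one gray level of a REFUGE mask into a YOLOv8-seg label line.

    Simplified sketch: boundary pixels are ordered by angle around the
    region centroid, which works for convex, roughly elliptical regions.
    """
    h, w = mask.shape
    # OC (0) is nested inside OD (128), so "<= level" selects the full region.
    region = (mask <= level).astype(np.uint8)
    ys, xs = np.nonzero(region)
    cy, cx = ys.mean(), xs.mean()
    # Boundary pixels: region pixels missing at least one 4-neighbour.
    padded = np.pad(region, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    edge = region & (1 - interior)
    ey, ex = np.nonzero(edge)
    order = np.argsort(np.arctan2(ey - cy, ex - cx))
    # Subsample to n_points and normalize to [0, 1] as YOLOv8 expects.
    idx = np.linspace(0, len(order) - 1, n_points).astype(int)
    pts = [(ex[order[i]] / w, ey[order[i]] / h) for i in idx]
    coords = " ".join(f"{x:.4f} {y:.4f}" for x, y in pts)
    return f"{cls_id} {coords}"
```

One label line per class (OD and OC) would then be written to the image's `.txt` file, as YOLOv8 segmentation training expects.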
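The loss change above replaces CIoU with Focal-EIoU. A minimal single-pair sketch of that loss, following the published Focal-EIoU formulation (EIoU adds explicit width and height penalties to the IoU and center-distance terms, and the focal factor IoU^γ concentrates the loss on higher-quality boxes); the γ = 0.5 default and the box representation are assumptions, not details confirmed by this work:

```python
def focal_eiou_loss(pred, target, gamma=0.5, eps=1e-7):
    """Focal-EIoU loss for two axis-aligned boxes in (x1, y1, x2, y2) form."""
    px1, py1, px2, py2 = pred
    tx1, ty1, tx2, ty2 = target
    # Intersection over union.
    iw = max(0.0, min(px2, tx2) - max(px1, tx1))
    ih = max(0.0, min(py2, ty2) - max(py1, ty1))
    inter = iw * ih
    area_p = (px2 - px1) * (py2 - py1)
    area_t = (tx2 - tx1) * (ty2 - ty1)
    iou = inter / (area_p + area_t - inter + eps)
    # Smallest enclosing box of the pair.
    cw = max(px2, tx2) - min(px1, tx1)
    ch = max(py2, ty2) - min(py1, ty1)
    c2 = cw ** 2 + ch ** 2 + eps
    # EIoU penalties: center distance, width gap, height gap.
    rho2 = (((px1 + px2) / 2 - (tx1 + tx2) / 2) ** 2
            + ((py1 + py2) / 2 - (ty1 + ty2) / 2) ** 2)
    dw2 = ((px2 - px1) - (tx2 - tx1)) ** 2
    dh2 = ((py2 - py1) - (ty2 - ty1)) ** 2
    eiou = 1 - iou + rho2 / c2 + dw2 / (cw ** 2 + eps) + dh2 / (ch ** 2 + eps)
    # Focal reweighting: higher-IoU (higher-quality) boxes get more weight.
    return iou ** gamma * eiou
```

For identical boxes the loss is ~0, and it grows as the predicted box drifts from the target.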
RESULTS: The experimental results show that the improved YOLOv8 algorithm achieves good segmentation performance on the REFUGE dataset. In the OD and OC segmentation tests, the F1 score is 0.999.
CONCLUSIONS: We improved the YOLOv8 algorithm and applied the resulting model to the segmentation of the OD and OC in fundus images. The results show that our improved model far outperforms the mainstream U-Net model in training speed, segmentation performance, and segmentation accuracy.