optic cup

  • Article type: Journal Article
    Accurate image segmentation plays a crucial role in computer vision and medical image analysis. In this study, we developed a novel uncertainty-guided deep learning strategy (UGLS) to enhance the performance of an existing neural network (i.e., U-Net) in segmenting multiple objects of interest from images of varying modalities. In the developed UGLS, a boundary uncertainty map was introduced for each object based on its coarse segmentation (obtained by the U-Net) and then combined with the input images for the fine segmentation of the objects. We validated the developed method by segmenting optic cup (OC) regions from color fundus images and left and right lung regions from X-ray images. Experiments on public fundus and X-ray image datasets showed that the developed method achieved an average Dice score (DS) of 0.8791 and a sensitivity (SEN) of 0.8858 for OC segmentation, and 0.9605, 0.9607, 0.9621, and 0.9668 for left and right lung segmentation, respectively. Our method significantly improved the segmentation performance of the U-Net, making it comparable or superior to five sophisticated networks (i.e., AU-Net, BiO-Net, AS-Net, Swin-Unet, and TransUNet).
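The coarse-to-fine idea above can be sketched with a simple boundary-uncertainty map. The formulation below, where uncertainty peaks at pixels whose coarse foreground probability is 0.5, is a minimal illustration under that assumption, not necessarily the exact map defined in the paper.

```python
import numpy as np

def boundary_uncertainty(prob):
    """Per-pixel uncertainty from a coarse foreground probability map.

    Highest (1.0) where prob == 0.5 (ambiguous boundary pixels),
    lowest (0.0) where prob is 0 or 1 (confident pixels).
    """
    return 1.0 - np.abs(2.0 * prob - 1.0)

def augment_input(image, prob):
    """Stack the uncertainty map onto the input image as an extra
    channel, forming the input for the fine-segmentation pass."""
    u = boundary_uncertainty(prob)
    return np.concatenate([image, u[..., None]], axis=-1)

# Confident pixels carry no uncertainty; boundary pixels carry the most.
p = np.array([[0.0, 0.5, 1.0]])
print(boundary_uncertainty(p))  # [[0. 1. 0.]]
```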

  • Article type: Journal Article
    Semi-supervised segmentation plays an important role in computer vision and medical image analysis and can alleviate the burden of acquiring abundant expert-annotated images. In this paper, we developed a residual-driven semi-supervised segmentation method (termed RDMT) based on the classical mean teacher (MT) framework by introducing a novel model-level residual perturbation and an exponential Dice (eDice) loss. The introduced perturbation was integrated into the exponential moving average (EMA) scheme to enhance the performance of the MT, while the eDice loss was used to improve the detection sensitivity of a given network to object boundaries. We validated the developed method by applying it to segment the 3D left atrium (LA) and 2D optic cup (OC) from the public LASC and REFUGE datasets based on the V-Net and U-Net, respectively. Extensive experiments demonstrated that the developed method achieved average Dice scores of 0.8776 and 0.7751 for the LA and OC regions depicted in the LASC and REFUGE datasets when trained on 10% and 20% labeled images, respectively. It significantly outperformed the MT and can compete with several existing semi-supervised segmentation methods (i.e., HCMT, UAMT, DTC, and SASS).
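The mean-teacher backbone that RDMT builds on can be sketched as follows. This shows only the classical EMA parameter update and a plain soft Dice score; the paper's model-level residual perturbation and eDice reshaping are not reproduced here, since their exact forms are not given in the abstract.

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Classical mean-teacher EMA update: each teacher parameter is an
    exponential moving average of the corresponding student parameter.
    (RDMT additionally perturbs this update with a model-level
    residual term.)"""
    return {k: alpha * teacher[k] + (1.0 - alpha) * student[k]
            for k in teacher}

def soft_dice(pred, target, eps=1e-6):
    """Plain soft Dice score; the paper's eDice loss reshapes this to
    be more sensitive to boundary errors."""
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
```

With alpha close to 1, the teacher changes slowly and acts as a temporal ensemble of past student weights, which is what makes its predictions stable enough to supervise unlabeled images.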

  • Article type: Journal Article
    Proper estimation of the cup-to-disc ratio (C/D ratio) plays a significant role in ophthalmic examinations, and it is urgent to improve the efficiency of automatic C/D ratio measurement. Therefore, we propose a new method for measuring the C/D ratio from OCT images of normal subjects. First, an end-to-end deep convolutional network is used to segment and detect the inner limiting membrane (ILM) and the two Bruch's membrane opening (BMO) terminations. Then, we introduce an ellipse fitting technique to post-process the edge of the optic disc. Finally, the proposed method is evaluated on 41 normal subjects using the optic-disc-area scanning mode of three machines: BV1000, Topcon 3D OCT-1, and Nidek ARK-1. In addition, pairwise correlation analyses are carried out to compare the C/D ratio measurement method of the BV1000 to existing commercial OCT machines as well as other state-of-the-art methods. The correlation coefficient between the C/D ratio calculated by the BV1000 and that calculated from manual annotation is 0.84, indicating that the proposed method correlates strongly with the manual annotations of ophthalmologists. Moreover, in a practical screening comparison among normal subjects between the BV1000, Topcon, and Nidek machines, the proportion of C/D ratios less than 0.6 calculated by the BV1000 is 96.34%, the closest to clinical statistics among the three OCT machines. These experimental results show that the proposed method performs well in cup and disc detection and C/D ratio measurement; compared with existing commercial OCT equipment, its C/D ratio measurements are relatively close to reality, which gives it clinical application value.
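Once the cup and disc extents are known, the C/D ratio itself is a simple quantity. A minimal sketch of the vertical cup-to-disc ratio computed from binary masks follows; the paper instead derives the extents from ILM/BMO landmarks and ellipse fitting, which is not reproduced here.

```python
import numpy as np

def vertical_extent(mask):
    """Height (in pixels) of the vertical span covered by a binary mask."""
    rows = np.where(mask.any(axis=1))[0]
    return 0 if rows.size == 0 else int(rows[-1] - rows[0] + 1)

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio: cup height over disc height."""
    return vertical_extent(cup_mask) / vertical_extent(disc_mask)
```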

  • Article type: Journal Article
    Automatic and accurate segmentation of the optic disc (OD) and optic cup (OC) in fundus images is a fundamental task in computer-aided ocular pathology diagnosis. Complex structures, such as blood vessels and the macular region, and the existence of lesions in fundus images bring great challenges to the segmentation task. Recently, convolutional neural network-based methods have exhibited their potential in fundus image analysis. In this paper, we propose a cascaded two-stage network architecture for robust and accurate OD and OC segmentation in fundus images. In the first stage, a U-Net-like framework with an improved attention mechanism and focal loss is proposed to detect an accurate and reliable OD location from full-resolution fundus images. Based on the outputs of the first stage, a refined segmentation network in the second stage that integrates a multi-task framework and adversarial learning is further designed for OD and OC segmentation separately. The multi-task framework predicts the OD and OC masks while simultaneously estimating contours and distance maps as auxiliary tasks, which guarantees the smoothness and shape of the objects in the segmentation predictions. The adversarial learning technique is introduced to encourage the segmentation network to produce outputs consistent with the true labels in spatial and shape distribution. We evaluate the performance of our method on two public retinal fundus image datasets (RIM-ONE-r3 and REFUGE). Extensive ablation studies and comparison experiments with existing methods demonstrate that our approach produces competitive performance compared with state-of-the-art methods.
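The focal loss used in the first stage has a standard binary form: it down-weights easy, well-classified pixels so training concentrates on hard ones such as ambiguous OD boundaries. A minimal sketch follows; the alpha and gamma values are the common defaults, not necessarily those used in the paper.

```python
import numpy as np

def focal_loss(prob, target, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t),
    averaged over pixels. p_t is the predicted probability of the
    true class; (1 - p_t)^gamma suppresses easy examples."""
    prob = np.clip(prob, eps, 1.0 - eps)
    pt = np.where(target == 1, prob, 1.0 - prob)
    at = np.where(target == 1, alpha, 1.0 - alpha)
    return float(np.mean(-at * (1.0 - pt) ** gamma * np.log(pt)))
```

A confidently correct pixel (p_t near 1) contributes almost nothing, while a hard pixel near the decision boundary dominates the gradient.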

  • Article type: Journal Article
    Glaucoma is the leading cause of irreversible blindness. For glaucoma screening, the cup-to-disc ratio (CDR) is a significant indicator, whose calculation relies on the segmentation of the optic disc (OD) and optic cup (OC) in color fundus images. This study proposes a residual multi-scale convolutional neural network with a context semantic extraction module to jointly segment the OD and OC. The proposed method uses a W-shaped backbone network, including an image pyramid multi-scale input with the side output layer as an early classifier to generate local prediction output. The proposed method includes a context extraction module that extracts contextual semantic information from receptive fields of multiple sizes and adaptively recalibrates channel-wise feature responses. It can effectively extract global information and reduce the semantic gaps in the fusion of deep and shallow semantic information. We validated the proposed method on four datasets, including DRISHTI-GS1, REFUGE, RIM-ONE r3, and a private dataset. The overlap errors are 0.0540, 0.0684, 0.0492, 0.0511 in OC segmentation and 0.2332, 0.1777, 0.2372, 0.2547 in OD segmentation, respectively. Experimental results indicate that the proposed method can estimate the CDR for large-scale glaucoma screening.
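The overlap error reported above is, in the usual convention for OD/OC papers, one minus the Jaccard index between the predicted and ground-truth masks; a minimal sketch:

```python
import numpy as np

def overlap_error(pred, truth):
    """Overlap error E = 1 - |S ∩ G| / |S ∪ G|, i.e. one minus the
    Jaccard index of predicted mask S and ground-truth mask G."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return 1.0 - inter / union
```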

  • Article type: Journal Article
    Optic disc (OD) and optic cup (OC) segmentation are fundamental for fundus image analysis. Manual annotation is time-consuming, expensive, and highly subjective, whereas an automated system is invaluable to the medical community. The aim of this study is to develop a deep learning system to segment the OD and OC in fundus photographs and to evaluate how the algorithm compares against manual annotations.
    A total of 1200 fundus photographs, including 120 glaucoma cases, were collected. The OD and OC annotations were labeled by seven licensed ophthalmologists, and glaucoma diagnoses were based on comprehensive evaluations of the subjects' medical records. A deep learning system for OD and OC segmentation was developed, and its segmentation and cup-to-disc ratio (CDR)-based glaucoma discrimination performance was compared against the manual annotations.
    The algorithm achieved an OD Dice of 0.938 (95% confidence interval [CI] = 0.934-0.941), an OC Dice of 0.801 (95% CI = 0.793-0.809), and a CDR mean absolute error (MAE) of 0.077 (95% CI = 0.073-0.082). For glaucoma discrimination based on CDR calculations, the algorithm obtained an area under the receiver operating characteristic curve (AUC) of 0.948 (95% CI = 0.920-0.973), with a sensitivity of 0.850 (95% CI = 0.794-0.923) and a specificity of 0.853 (95% CI = 0.798-0.918).
    We demonstrated the potential of the deep learning system to assist ophthalmologists in analyzing OD and OC segmentation and discriminating glaucoma from nonglaucoma subjects based on CDR calculations.
    We investigated the segmentation of the OD and OC by a deep learning system compared against manual annotations.
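Discriminating glaucoma from a CDR threshold, as evaluated above, reduces to a confusion-matrix computation; a minimal sketch (the 0.6 threshold in the test values below is illustrative, not the paper's operating point):

```python
import numpy as np

def sens_spec(cdr, labels, thresh):
    """Sensitivity and specificity of flagging glaucoma (label 1)
    whenever the measured CDR exceeds the threshold."""
    pred = cdr > thresh
    tp = np.sum(pred & (labels == 1))
    fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0))
    fp = np.sum(pred & (labels == 0))
    return float(tp / (tp + fn)), float(tn / (tn + fp))
```

Sweeping the threshold over its range and plotting sensitivity against (1 - specificity) yields the ROC curve whose area is the reported AUC.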

  • Article type: Journal Article
    BACKGROUND: The optic cup is an important structure in ophthalmologic diagnoses such as glaucoma. Automatic optic cup segmentation is also a key issue in computer-aided diagnosis based on digital fundus images. However, current methods do not effectively solve the problem of edge blurring caused by blood vessels around the optic cup.
    METHODS: In this study, an improved Bertalmio-Sapiro-Caselles-Ballester (BSCB) model was proposed to eliminate the noise induced by blood vessels. First, morphological operations were performed to obtain an enhanced green-channel image. Then, blood vessels were extracted and filled by the improved BSCB model. Finally, a local Chan-Vese model was used to segment the optic cup. A total of 94 samples, comprising 32 glaucoma fundus images and 62 normal fundus images, were used in the experiments.
    RESULTS: Against the experts' results, the proposed method achieved an F-score of 0.7955 ± 0.0724 and a boundary distance of 11.42 ± 3.61. The average vertical optic cup-to-disc ratios of the normal and glaucoma samples achieved by the proposed method were 0.4369 ± 0.1193 and 0.7156 ± 0.0698, respectively, which were also close to the experts' values. In addition, 39 glaucoma images from the public RIM-ONE dataset were used for methodology evaluation.
    CONCLUSIONS: The results showed that the proposed method could overcome the influence of blood vessels to some degree and was competitive with other current optic cup segmentation algorithms. This methodology is expected to be used clinically for early glaucoma detection.
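The vessel-filling step can be approximated with a much simpler diffusion inpainting than the BSCB model. The sketch below is a crude stand-in (iterative neighbour averaging inside the vessel mask), not the paper's improved BSCB transport/diffusion formulation.

```python
import numpy as np

def diffuse_inpaint(img, mask, iters=200):
    """Crude diffusion inpainting: pixels under the vessel mask are
    repeatedly replaced by the mean of their 4-neighbours, so the
    surrounding intensities flow into the masked region."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()  # rough initial fill
    for _ in range(iters):
        nb = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
              np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = nb[mask]       # update only the masked pixels
    return out
```

After inpainting, the vessel pixels no longer pull the subsequent Chan-Vese-style region segmentation toward spurious edges.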
