Fundus image

  • Article type: Journal Article
    BACKGROUND: Chronic kidney disease (CKD) is a significant global health concern, emphasizing the necessity of early detection to facilitate prompt clinical intervention. Because the retina offers a unique window into systemic vascular health, it emerges as an attractive, non-invasive option for early CKD detection. Integrating this approach with existing invasive methods could provide a comprehensive understanding of patient health, enhancing diagnostic accuracy and treatment effectiveness.
    OBJECTIVE: The purpose of this review is to critically assess the potential of retinal imaging to serve as a diagnostic tool for CKD detection based on retinal vascular changes. The review tracks the evolution from conventional manual evaluations to the latest state of the art in deep learning.
    METHODS: A comprehensive examination of the literature was carried out, using targeted database searches and a three-step methodology for article evaluation: identification, screening, and inclusion based on PRISMA guidelines. Priority was given to unique and new research concerning the detection of CKD with retinal imaging. A total of 70 publications from the 457 initially discovered satisfied our inclusion criteria and were thus subjected to analysis. Of the 70 studies included, 35 investigated the correlation between diabetic retinopathy and CKD, 23 centered on the detection of CKD via retinal imaging, and four attempted to automate detection through the combination of artificial intelligence and retinal imaging.
    RESULTS: Significant retinal features such as arteriolar narrowing, venular widening, specific retinopathy markers (like microaneurysms, hemorrhages, and exudates), and changes in the arteriovenous ratio (AVR) have shown strong correlations with CKD progression. We also found that combining deep learning with retinal imaging for CKD detection could provide a very promising pathway. Accordingly, leveraging retinal imaging through this technique is expected to enhance the precision and prognostic capacity of CKD detection, offering a non-invasive diagnostic alternative that could transform patient care practices.
    CONCLUSIONS: In summary, retinal imaging holds high potential as a diagnostic tool for CKD because it is non-invasive, facilitates early detection through observable microvascular changes, offers predictive insights into renal health, and, when paired with deep learning algorithms, enhances the accuracy and effectiveness of CKD screening.
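    The arteriovenous ratio (AVR) highlighted above is conventionally derived by collapsing the calibres of the largest arterioles and venules into summary values (CRAE and CRVE) and taking their quotient. The sketch below is a simplified, iterative variant of the Knudtson revised pairing procedure; the pairing constants (0.88 for arterioles, 0.95 for venules) follow the published formulas, but the pairing loop is a simplification and the vessel widths are purely illustrative, not taken from the reviewed studies.

```python
import math

def combine(widths, k):
    """Iteratively pair the widest with the narrowest calibre until one value remains.

    Simplified variant of the Knudtson pairing scheme: each pair (lo, hi)
    is merged as k * sqrt(lo^2 + hi^2).
    """
    widths = sorted(widths)
    while len(widths) > 1:
        lo, hi = widths.pop(0), widths.pop(-1)
        widths.append(k * math.sqrt(lo ** 2 + hi ** 2))
        widths.sort()  # keep the list ordered for the next pairing
    return widths[0]

def avr(arteriole_widths, venule_widths):
    crae = combine(arteriole_widths, 0.88)  # central retinal artery equivalent
    crve = combine(venule_widths, 0.95)     # central retinal vein equivalent
    return crae / crve

# Invented calibres (micrometres) for the six widest vessels of each type:
a = [110, 105, 98, 95, 90, 85]
v = [140, 135, 128, 120, 115, 110]
print(round(avr(a, v), 3))
```

    An AVR well below unity is expected because venules are wider than arterioles; the review associates changes in AVR (typically decreases driven by arteriolar narrowing) with CKD progression.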
  • Article type: Journal Article
    Glaucoma is one of the most common causes of blindness in the world. Screening for glaucoma from retinal fundus images based on deep learning is a common method at present. In deep learning-based glaucoma diagnosis, the blood vessels within the optic disc interfere with the diagnosis, and there is also pathological information outside the optic disc in fundus images. Therefore, integrating the original fundus image with the vessel-removed optic disc image can improve diagnostic efficiency. In this paper, we propose a novel multi-step framework named MSGC-CNN that can better diagnose glaucoma. In the framework, (1) we combine glaucoma pathological knowledge with a deep learning model, fuse the features of the original fundus image and of the optic disc region, in which the interference of blood vessels is specifically removed by a U-Net, and make the glaucoma diagnosis based on the fused features. (2) Targeting the characteristics of glaucoma fundus images, such as the small amount of data, high resolution, and rich feature information, we design a new feature extraction network, RA-ResNet, and combine it with transfer learning. To verify our method, we conduct binary classification experiments on three public datasets, Drishti-GS, RIM-ONE-R3, and ACRIMA, achieving accuracies of 92.01%, 93.75%, and 97.87%, respectively. The results demonstrate a significant improvement over earlier results.
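    The fusion step described above — combining features of the whole fundus image with features of a vessel-removed optic-disc crop before classification — can be sketched in Python. The CNN backbones (RA-ResNet) and the U-Net vessel removal are stood in for by random projections here, so every dimension and name below is illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image, dim=128):
    # Stand-in for a learned backbone: a random projection of the pixels.
    proj = rng.standard_normal((dim, image.size))
    return proj @ image.ravel()

def fuse_and_score(full_image, disc_crop, w, b):
    # Concatenate the two branches' features, then apply a linear head.
    fused = np.concatenate([extract_features(full_image),
                            extract_features(disc_crop)])
    logit = w @ fused + b
    return 1.0 / (1.0 + np.exp(-logit))  # P(glaucoma), via sigmoid

full = rng.random((64, 64))   # whole fundus image (toy resolution)
disc = rng.random((32, 32))   # vessel-removed optic-disc crop (toy)
w = rng.standard_normal(256) * 0.01
p = fuse_and_score(full, disc, w, 0.0)
print(p)
```

    The design point this illustrates is late fusion: each modality keeps its own feature extractor, and only the concatenated representation feeds the classifier.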
  • Article type: Journal Article
    PURPOSE: An enlarged cup-to-disc ratio (CDR) is a hallmark of glaucomatous optic neuropathy. Manual assessment of the CDR may be less accurate and more time-consuming than automated methods. Here, we sought to develop and validate a deep learning-based algorithm to automatically determine the CDR from fundus images.
    DESIGN: Algorithm development for estimating CDR using fundus data from a population-based observational study.
    PARTICIPANTS: A total of 181 768 fundus images from the United Kingdom Biobank (UKBB), Drishti_GS, and EyePACS.
    METHODS: The FastAI and PyTorch libraries were used to train a convolutional neural network-based model on fundus images from the UKBB. Models were constructed to determine image gradability (classification analysis) as well as to estimate CDR (regression analysis). The best-performing model was then validated for use in glaucoma screening using a multiethnic dataset from EyePACS and Drishti_GS.
    MAIN OUTCOME MEASURES: The area under the receiver operating characteristic curve and the coefficient of determination.
    RESULTS: Our gradability model, VGG19 with batch normalization (vgg19_bn), achieved an accuracy of 97.13% on a validation set of 16 045 images, with 99.26% precision and an area under the receiver operating characteristic curve of 96.56%. Using regression analysis, our best-performing model (trained on the vgg19_bn architecture) attained a coefficient of determination of 0.8514 (95% confidence interval [CI]: 0.8459-0.8568), while the mean squared error was 0.0050 (95% CI: 0.0048-0.0051) and the mean absolute error was 0.0551 (95% CI: 0.0543-0.0559) on a validation set of 12 183 images for determining CDR. Regression outputs were converted into classification metrics using a tolerance of 0.2 across 20 classes; the resulting classification accuracy was 99.20%. The EyePACS dataset (98 172 healthy, 3270 glaucoma) was then used to externally validate the model for glaucoma classification, with an accuracy, sensitivity, and specificity of 82.49%, 72.02%, and 82.83%, respectively.
    CONCLUSIONS: Our models were precise in determining image gradability and estimating CDR. Although our artificial intelligence-derived CDR estimates achieve high accuracy, the CDR threshold for glaucoma screening will vary depending on other clinical parameters.
    FINANCIAL DISCLOSURES: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
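    One plausible reading of the regression-to-classification conversion reported above — CDR values in [0, 1] binned into 20 classes, with a prediction counted correct when within ±0.2 of the true value — is sketched below; the authors' exact rule may differ, and the sample CDR values are invented.

```python
def cdr_class(cdr, n_classes=20):
    # Bin a CDR in [0, 1] into one of 20 classes of width 0.05.
    return min(int(cdr * n_classes), n_classes - 1)

def tolerance_accuracy(y_true, y_pred, tol=0.2):
    # A prediction counts as correct when within +/- tol of the true CDR.
    hits = sum(abs(t - p) <= tol for t, p in zip(y_true, y_pred))
    return hits / len(y_true)

true_cdr = [0.30, 0.45, 0.62, 0.80]
pred_cdr = [0.35, 0.40, 0.85, 0.78]
print(tolerance_accuracy(true_cdr, pred_cdr))  # 3 of 4 within +/- 0.2
```

    A tolerance of 0.2 spans four bins on either side, which is why the tolerance-based accuracy (99.20% in the abstract) is much higher than an exact-bin accuracy would be.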
  • Article type: Journal Article
    PURPOSE: We designed a dual-modal fusion network to detect glaucomatous optic neuropathy, utilizing both retinal nerve fiber layer thickness from optical coherence tomography reports and fundus images.
    METHODS: A total of 327 healthy subjects (410 eyes) and 87 patients with glaucomatous optic neuropathy (113 eyes) were included. Retinal nerve fiber layer thickness from optical coherence tomography reports and fundus images were used as predictors in the dual-modal fusion network to diagnose glaucoma. The area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity were measured to compare our method with other approaches.
    RESULTS: The accuracy of our dual-modal fusion network using both retinal nerve fiber layer thickness from optical coherence tomography reports and fundus images was 0.935, and it achieved a significantly larger area under the receiver operating characteristic curve of 0.968 (95% confidence interval [CI], 0.937-0.999). Using retinal nerve fiber layer thickness alone, our optical coherence tomography net reached 0.916 (95% CI, 0.855-0.977), compared with three other approaches: 0.841 (95% CI, 0.749-0.933) with clock-sector division, 0.862 (95% CI, 0.757-0.968) with inferior, superior, nasal, and temporal sector division, and 0.886 (95% CI, 0.815-0.957) with optic disc sector division. Using fundus images alone, our image net reached 0.867 (95% CI, 0.781-0.952), compared with two other approaches: 0.774 (95% CI, 0.670-0.878) with ResNet50 and 0.747 (95% CI, 0.628-0.866) with VGG16.
    CONCLUSIONS: Our dual-modal fusion network utilizing both retinal nerve fiber layer thickness from optical coherence tomography reports and fundus images can diagnose glaucoma with much better performance than current approaches based on optical coherence tomography alone or fundus images alone.
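    The areas under the receiver operating characteristic curve compared throughout this abstract can be computed directly from scores and labels via the Mann-Whitney formulation: the probability that a randomly chosen positive (glaucomatous) eye scores above a randomly chosen negative one, counting ties as half. A minimal Python sketch with invented data:

```python
def auc(labels, scores):
    # Mann-Whitney form of the ROC AUC: fraction of positive/negative
    # pairs in which the positive example scores higher (ties count 0.5).
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]          # 1 = glaucoma, 0 = healthy (toy data)
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
print(auc(labels, scores))
```

    This pairwise form is exactly equivalent to integrating the ROC curve, which is why an AUC of 0.968 can be read as "a glaucomatous eye outranks a healthy one 96.8% of the time".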
  • Article type: Journal Article
    Diabetic retinopathy (DR) is the leading cause of visual impairment globally. It occurs due to long-term diabetes with fluctuating blood glucose levels. It has become a significant concern for people in the working-age group, as it can lead to vision loss in the future. Manual examination of fundus images is time-consuming and requires considerable effort and expertise to determine the severity of the retinopathy. To diagnose and evaluate the disease, deep learning-based technologies have been used, which analyze blood vessels, microaneurysms, exudates, the macula, the optic disc, and hemorrhages for the initial detection and grading of DR. This study examines the fundamentals of diabetes, its prevalence, complications, and treatment strategies that use artificial intelligence methods such as machine learning (ML), deep learning (DL), and federated learning (FL). The research covers future studies, performance assessments, biomarkers, screening methods, and current datasets. Various neural network designs, including recurrent neural networks (RNNs), generative adversarial networks (GANs), and applications of ML, DL, and FL in the processing of fundus images, such as convolutional neural networks (CNNs) and their variants, are thoroughly examined. Potential research directions, such as developing DL models and incorporating heterogeneous data sources, are also outlined. Finally, the challenges and future directions of this research are discussed.
  • Article type: Journal Article
    Accurate image segmentation plays a crucial role in computer vision and medical image analysis. In this study, we developed a novel uncertainty-guided deep learning strategy (UGLS) to enhance the performance of an existing neural network (i.e., U-Net) in segmenting multiple objects of interest from images with varying modalities. In the developed UGLS, a boundary uncertainty map was introduced for each object based on its coarse segmentation (obtained by the U-Net) and then combined with the input images for the fine segmentation of the objects. We validated the developed method by segmenting optic cup (OC) regions from color fundus images and left and right lung regions from X-ray images. Experiments on public fundus and X-ray image datasets showed that the developed method achieved an average Dice score (DS) of 0.8791 and a sensitivity (SEN) of 0.8858 for the OC segmentation, and 0.9605, 0.9607, 0.9621, and 0.9668 for the left and right lung segmentation, respectively. Our method significantly improved the segmentation performance of the U-Net, making it comparable or superior to five sophisticated networks (i.e., AU-Net, BiO-Net, AS-Net, Swin-Unet, and TransUNet).
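    A hedged sketch of the two quantities central to this abstract: a boundary uncertainty map derived from coarse U-Net probabilities — here modeled as binary entropy, which peaks where the network is least decided (typically at object boundaries); the paper's exact formulation may differ — and the Dice score used for evaluation. The probability map below is invented.

```python
import numpy as np

def boundary_uncertainty(prob, eps=1e-7):
    # Binary entropy of a foreground-probability map; maximal at p = 0.5.
    p = np.clip(prob, eps, 1 - eps)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def dice(pred, target):
    # Dice score: 2|A ∩ B| / (|A| + |B|) for boolean masks.
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())

coarse = np.array([[0.05, 0.50, 0.95],
                   [0.10, 0.55, 0.90]])   # toy coarse U-Net probabilities
u = boundary_uncertainty(coarse)
print(u.argmax())                         # flat index of the 0.50 pixel

pred = coarse > 0.5
target = np.array([[0, 1, 1], [0, 1, 1]], dtype=bool)
print(dice(pred, target))
```

    In the full strategy, a map like `u` would be concatenated with the input image and fed to a second segmentation pass, concentrating the network's capacity on the ambiguous boundary pixels.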
  • Article type: Journal Article
    BACKGROUND: The timely diagnosis of medical conditions, particularly diabetic retinopathy, relies on the identification of retinal microaneurysms. However, the commonly used retinography method poses a challenge due to the diminutive dimensions and limited differentiation of microaneurysms in images.
    OBJECTIVE: Automated identification of microaneurysms becomes crucial, necessitating the use of comprehensive ad-hoc processing techniques. Although fluorescein angiography enhances detectability, its invasiveness limits its suitability for routine preventative screening.
    OBJECTIVE: This study proposes a novel approach for detecting retinal microaneurysms using a fundus scan, leveraging circular reference-based shape features (CR-SF) and radial gradient-based texture features (RG-TF).
    METHODS: The proposed technique involves extracting CR-SF and RG-TF for each candidate microaneurysm, employing a robust back-propagation machine learning method for training. During testing, extracted features from test images are compared with training features to categorize microaneurysm presence.
    RESULTS: The experimental assessment utilized four datasets (MESSIDOR, Diaretdb1, e-ophtha-MA, and ROC), employing various measures. The proposed approach demonstrated high accuracy (98.01%), sensitivity (98.74%), specificity (97.12%), and area under the curve (91.72%).
    CONCLUSIONS: The presented approach showcases a successful method for detecting retinal microaneurysms using a fundus scan, providing promising accuracy and sensitivity. This non-invasive technique holds potential for effective screening in diabetic retinopathy and other related medical conditions.
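    The accuracy, sensitivity, and specificity reported above all derive from a binary confusion matrix over candidate microaneurysms. A minimal Python sketch with illustrative counts (not the study's data):

```python
def metrics(tp, fp, tn, fn):
    # Standard binary-classification metrics from confusion-matrix counts.
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),  # recall on microaneurysm-present cases
        "specificity": tn / (tn + fp),  # correct rejection of negatives
    }

m = metrics(tp=90, fp=5, tn=95, fn=10)
print(m)
```

    For a screening task like this, sensitivity is usually the metric to protect: a missed microaneurysm (false negative) delays diagnosis, whereas a false positive only triggers a manual review.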
  • Article type: Journal Article
    Automated grading of diabetic retinopathy (DR) is an important means of assisting clinical diagnosis and preventing further retinal damage. However, imbalances and similarities between categories in DR datasets make it highly challenging to accurately grade the severity of the condition. Furthermore, DR images encompass various lesions, and the pathological relationship information among these lesions can easily be overlooked. For instance, under different severity levels, the contributions of different lesions to accurate model grading differ significantly. To address these issues, we design a transformer-guided category-relation attention network (CRA-Net). Specifically, we propose a novel category attention block that enhances feature information within each class from the perspective of DR image categories, thereby alleviating the class imbalance problem. Additionally, we design a lesion relation attention block that captures relationships between lesions by incorporating attention mechanisms in two primary aspects: capsule attention models the relative importance of different lesions, allowing the model to focus on more "informative" ones, while spatial attention captures the global positional relationships between lesion features under transformer guidance, facilitating more accurate localization of lesions. Experimental and ablation studies on two datasets, DDR and APTOS 2019, demonstrate the effectiveness of CRA-Net, which obtains competitive performance.
  • Article type: Journal Article
    Ultra-wide-field fundus imaging (UFI) provides comprehensive visualization of crucial eye components, including the optic disc, fovea, and macula. This in-depth view helps doctors accurately diagnose diseases and recommend suitable treatments. This study investigated the application of various deep learning models for detecting eye diseases using UFI. We developed an automated system that processes and enhances a dataset of 4697 images. Our approach involves brightness and contrast enhancement, followed by feature extraction, data augmentation, and image classification, integrated with convolutional neural networks. These networks utilize layer-wise feature extraction and transfer learning from pre-trained models to accurately represent and analyze medical images. Among the five evaluated models, including ResNet152, Vision Transformer, InceptionResNetV2, RegNet, and ConvNeXt, ResNet152 is the most effective, achieving a testing area under the curve (AUC) score of 96.47% (with a 95% confidence interval (CI) of 0.931-0.974). Additionally, the paper presents visualizations of the model's predictions, including confidence scores and heatmaps that highlight the model's focal points, particularly where lesions due to damage are evident. By streamlining the diagnosis process and providing intricate prediction details without human intervention, our system serves as a pivotal tool for ophthalmologists. This research underscores the compatibility and potential of utilizing ultra-wide-field images in conjunction with deep learning.
  • Article type: Journal Article
    Fundus tessellation (FT) is a prevalent clinical feature associated with myopia and has implications for the development of myopic maculopathy, which causes irreversible visual impairment. Accurate classification of FT in color fundus photographs can help predict disease progression and prognosis. However, the lack of precise detection and classification tools has created an unmet medical need, underscoring the importance of exploring the clinical utility of FT. To address this gap, we introduce an automatic FT grading system (called DeepGraFT) using classification-and-segmentation co-decision models built by deep learning. ConvNeXt, utilizing transfer learning from pretrained ImageNet weights, was employed for the classification algorithm, aligned with a region of interest based on the ETDRS grading system to boost performance. A segmentation model was developed to detect FT extent, complementing the classification for improved grading accuracy. The training set of DeepGraFT came from our in-house cohort (MAGIC), and the validation sets consisted of the remainder of the in-house cohort and an independent public cohort (UK Biobank). DeepGraFT demonstrated high performance in the training stage and achieved an impressive accuracy in the validation phase (in-house cohort: 86.85%; public cohort: 81.50%). Furthermore, our findings demonstrated that DeepGraFT surpasses machine learning-based classification models in FT classification, achieving a 5.57% increase in accuracy. Ablation analysis revealed that the introduced modules significantly enhanced classification effectiveness, elevating accuracy from 79.85% to 86.85%. Further analysis using the results provided by DeepGraFT revealed a significant negative association between FT and spherical equivalent (SE) in the UK Biobank cohort. In conclusion, DeepGraFT demonstrates the potential benefits of deep learning for automating FT grading and could serve as a clinical decision support tool for predicting the progression of pathological myopia.