retinal image

  • Article type: Journal Article
    Fundus cameras are widely used by ophthalmologists for monitoring and diagnosing retinal pathologies. Unfortunately, no optical system is perfect, and the visibility of retinal images can be greatly degraded by problematic illumination, intraocular scattering, or blurriness caused by sudden movements. To improve image quality, different retinal image restoration/enhancement techniques have been developed, which play an important role in improving the performance of various clinical and computer-assisted applications. This paper gives a comprehensive review of these restoration/enhancement techniques, discusses their underlying mathematical models, and shows how they may be effectively applied in real-life practice to increase the visual quality of retinal images for potential clinical applications, including diagnosis and retinal structure recognition. All three main topics of retinal image restoration/enhancement, i.e., illumination correction, dehazing, and deblurring, are addressed. Finally, some considerations about the challenges and future scope of retinal image restoration/enhancement techniques are discussed.
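    As a simple illustration of one of the three topics reviewed above (illumination correction), the sketch below estimates the slowly varying illumination field with a large Gaussian blur, divides it out, and then applies local contrast enhancement. It is a generic example assuming OpenCV and NumPy, not a specific method from the review; the function name and parameter values are illustrative only.
    ```python
    import cv2
    import numpy as np

    def correct_illumination(img_bgr, sigma=51):
        """Hedged sketch: normalize uneven illumination in a fundus image (assumes a BGR uint8 input)."""
        green = img_bgr[:, :, 1].astype(np.float32)           # green channel carries most retinal contrast
        background = cv2.GaussianBlur(green, (0, 0), sigma)   # low-frequency estimate of the illumination field
        normalized = green / (background + 1e-6)              # divide out the uneven illumination
        normalized = cv2.normalize(normalized, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return clahe.apply(normalized)                        # local contrast enhancement of the flattened image
    ```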

  • Article type: Journal Article
    OBJECTIVE: To compare the quality of images of gratings placed in a model eye and viewed through an extended depth of focus (EDoF) intraocular lens (IOL) with that through a diffractive bifocal IOL or a monofocal IOL.
    METHODS: Experimental laboratory investigation.
    METHODS: Nondiffractive wavefront shaping EDoF (CNAET0, Alcon Laboratories), echelette-designed EDoF (ZXR00V, Johnson & Johnson Vision), diffractive bifocal IOL with low power addition (SV25T, Alcon Laboratories), or monofocal IOL (CNA0T0, Alcon Laboratories) was placed in a fluid-filled model eye. A United States Air Force Resolution Grating Target was glued to the posterior surface of the model eye and viewed through a flat or a wide-angle contact lens. The contrast of the gratings viewed through the EDoF or multifocal IOLs was compared to that through the monofocal IOL. A wavefront analyzer was used to measure the spherical power of the central 4.5 mm optics of the EDoF, multifocal, and monofocal IOLs. The distribution of the dioptric power and the dioptric power map were compared.
    RESULTS: The gratings observed through the flat contact lens with CNAET0, ZXR00V, or SV25T were slightly blurred when viewed through the multifocal optics. The blurred area was in the circumferential area of CNAET0, the central area of SV25T, and the peripheral area of ZXR00V. The mean contrast for the 16.0 cyc/mm grating was 0.258 ± 0.020 for CNAET0, 0.227 ± 0.025 for ZXR00V, and 0.221 ± 0.020 for SV25T. The contrast was significantly lower for ZXR00V (P = .004) and SV25T (P = .004) than the 0.303 ± 0.015 for CNA0T0, but the difference was not significant for CNAET0. For the wide-angle contact lens, the contrast for the 16.0 cyc/mm grating was 0.182 ± 0.009 for CNAET0, 0.162 ± 0.011 for ZXR00V, and 0.163 ± 0.007 for SV25T, none of which differed significantly from the 0.188 ± 0.012 for CNA0T0. The dioptric variations of CNAET0 indicated a ring-shaped area of higher power corresponding to the circumferential blurred zone observed through the flat contact lens.
    CONCLUSIONS: The wavefront shaping and echelette-designed EDoF IOLs reduced the contrast of the grating more than the monofocal IOL when viewed through the flat contact lens. The degree of reduction depended on the design of the extended-focus optics. The difference was smaller through the wide-angle contact lens.
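    As a brief worked example of the contrast measurement reported above, Michelson contrast, C = (I_max - I_min) / (I_max + I_min), is one standard way to quantify grating contrast from an imaged region spanning several cycles. The abstract does not state the exact definition used, so the formula and the function below are illustrative assumptions.
    ```python
    import numpy as np

    def michelson_contrast(grating_roi):
        """grating_roi: 2-D intensity array covering several cycles of the imaged grating (e.g., 16.0 cyc/mm)."""
        profile = np.asarray(grating_roi, dtype=np.float64).mean(axis=0)  # average rows into a 1-D luminance profile
        i_max, i_min = profile.max(), profile.min()
        return (i_max - i_min) / (i_max + i_min)
    ```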

  • Article type: Journal Article
    OBJECTIVE: We developed the Infant Retinal Intelligent Diagnosis System (IRIDS), an automated system to aid the early diagnosis and monitoring of infantile fundus diseases and health conditions and to meet the urgent needs of ophthalmologists.
    METHODS: We developed IRIDS by combining convolutional neural networks and transformer structures, using a dataset of 7697 retinal images (1089 infants) from four hospitals. It identifies nine fundus diseases and conditions, namely retinopathy of prematurity (ROP) (mild ROP, moderate ROP, and severe ROP), retinoblastoma (RB), retinitis pigmentosa (RP), Coats disease, coloboma of the choroid, congenital retinal fold (CRF), and normal. IRIDS also includes depth attention modules, ResNet-18 (Res-18), and Multi-Axis Vision Transformer (MaxViT). Performance was compared to that of ophthalmologists using 450 retinal images. IRIDS employed a five-fold cross-validation approach to generate the classification results.
    RESULTS: Several baseline models achieved best values of 94.62% (95% CI, 94.34%-94.90%) for accuracy, 94.07% (95% CI, 93.32%-94.82%) for precision, 90.56% (95% CI, 88.64%-92.48%) for recall, 92.34% (95% CI, 91.87%-92.81%) for F1-score (F1), 91.15% (95% CI, 90.37%-91.93%) for kappa, and 99.08% (95% CI, 99.07%-99.09%) for area under the receiver operating characteristic curve (AUC). In comparison, IRIDS showed promising results relative to ophthalmologists, demonstrating an average accuracy, precision, recall, F1, kappa, and AUC of 96.45% (95% CI, 96.37%-96.53%), 95.86% (95% CI, 94.56%-97.16%), 94.37% (95% CI, 93.95%-94.79%), 95.03% (95% CI, 94.45%-95.61%), 94.43% (95% CI, 93.96%-94.90%), and 99.51% (95% CI, 99.51%-99.51%), respectively, in multi-label classification on the test dataset using the Res-18 and MaxViT models. These results suggest that, particularly in terms of AUC, IRIDS achieves performance that warrants further investigation for the detection of retinal abnormalities.
    CONCLUSIONS: IRIDS accurately identifies nine infantile fundus diseases and conditions. It may aid non-ophthalmologist personnel in underserved areas in infantile fundus disease screening, thus preventing severe complications. IRIDS serves as an example of integrating artificial intelligence into ophthalmology to achieve better outcomes in predictive, preventive, and personalized medicine (PPPM/3PM) in the treatment of infantile fundus diseases.
    The online version contains supplementary material available at 10.1007/s13167-024-00350-y.
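    A minimal sketch, under stated assumptions, of a two-branch multi-label classifier in the spirit of the IRIDS description above: a ResNet-18 branch and a MaxViT branch whose sigmoid outputs cover nine labels. The logit-averaging fusion, the use of torchvision's maxvit_t, and the loss choice are assumptions for illustration, not the authors' exact architecture or training code.
    ```python
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18, maxvit_t  # torchvision >= 0.14 provides maxvit_t

    class TwoBranchMultiLabel(nn.Module):
        """Assumed fusion of a CNN branch and a MaxViT branch for 9-label classification of 224x224 RGB images."""
        def __init__(self, num_classes=9):
            super().__init__()
            self.cnn = resnet18(weights=None)
            self.cnn.fc = nn.Linear(self.cnn.fc.in_features, num_classes)
            self.vit = maxvit_t(weights=None)
            self.vit.classifier[-1] = nn.Linear(self.vit.classifier[-1].in_features, num_classes)

        def forward(self, x):
            logits = (self.cnn(x) + self.vit(x)) / 2      # average the two branches' logits
            return torch.sigmoid(logits)                  # independent per-disease probabilities

    model = TwoBranchMultiLabel()
    loss_fn = nn.BCELoss()                                # multi-label targets are 0/1 vectors of length 9
    ```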

  • Article type: Journal Article
    BACKGROUND: The precise identification of retinal disorders is of utmost importance in the prevention of both temporary and permanent visual impairment. Prior research has yielded encouraging results in the classification of retinal images pertaining to a specific retinal condition. In clinical practice, however, it is not uncommon for a single patient to present with multiple retinal disorders concurrently. Hence, classifying retinal images into multiple labels remains a significant obstacle for existing methodologies, but accomplishing it would yield valuable insights into a diverse array of conditions simultaneously.
    METHODS: This study presents a novel vision transformer architecture called retinal ViT, which incorporates the self-attention mechanism into the field of medical image analysis. Because the study aims to demonstrate that transformer-based models can achieve performance competitive with CNN-based models, all convolutional modules were removed from the proposed model. The model ends with a multi-label classifier that uses a two-layer feed-forward network with a sigmoid activation function.
    RESULTS: The experimental findings show that the proposed model outperforms state-of-the-art approaches such as ResNet, VGG, DenseNet, and MobileNet on the publicly available ODIR-2019 dataset in terms of Kappa, F1 score, AUC, and AVG.
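    A minimal sketch of a convolution-free ViT-style classifier with a two-layer sigmoid multi-label head, matching the description above in spirit. The patch size, embedding width, depth, and the label count of 8 for ODIR-2019 are illustrative assumptions, not the authors' configuration.
    ```python
    import torch
    import torch.nn as nn

    class RetinalViTSketch(nn.Module):
        def __init__(self, img_size=224, patch=16, dim=256, depth=6, heads=8, num_classes=8):
            super().__init__()
            self.patch = patch
            n_patches = (img_size // patch) ** 2
            self.embed = nn.Linear(3 * patch * patch, dim)            # linear patch embedding, no convolutions
            self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))   # learned positional embedding
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
            self.head = nn.Sequential(nn.Linear(dim, dim // 2), nn.ReLU(),
                                      nn.Linear(dim // 2, num_classes))  # two-layer multi-label head

        def forward(self, x):                                          # x: (B, 3, H, W)
            b, c, _, _ = x.shape
            p = self.patch
            x = x.unfold(2, p, p).unfold(3, p, p)                      # (B, C, H/p, W/p, p, p)
            x = x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)  # flatten into patch tokens
            x = self.encoder(self.embed(x) + self.pos)
            return torch.sigmoid(self.head(x.mean(dim=1)))             # mean-pool tokens, sigmoid per label
    ```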

  • Article type: Journal Article
    OBJECTIVE: The purpose of this study is to identify predictive activation biomarkers in the retinal microvascular characteristics of non-exudative macular neovascularization (MNV) and to avoid delayed treatment or overtreatment of subclinical MNV. The main objective is to contribute to the international debate on a new understanding of the role of retinal vessel features in the pathogenesis and progression of non-exudative MNV and age-related macular degeneration (AMD). A discussion on revising the related clinical protocols is presented.
    METHODS: In this retrospective study, the authors included eyes with non-exudative MNV, eyes with exudative AMD, and normal eyes of age-matched healthy subjects. The parameters were obtained by optical coherence tomography (OCT) and optical coherence tomography angiography (OCTA).
    RESULTS: In total, 21 eyes with exudative AMD, 21 eyes with non-exudative MNV, and 20 eyes of 20 age-matched healthy subjects without retinal pathology were included. The vessel density (VD) of the deep vascular complex (DVC) in eyes with non-exudative MNV was significantly greater than that in eyes with exudative AMD (p = 0.002), whereas for superficial vascular plexus (SVP) metrics, no sector-wise VD differences were observed between eyes with non-exudative MNV and eyes with exudative AMD.
    CONCLUSIONS: The reduction in retinal vessel density, especially in the DVC, appears to be involved in or to accompany non-exudative MNV activation and should be closely monitored during follow-up visits to ensure prompt anti-angiogenic therapy. A discussion of applicable clinical protocols is presented, aiming to provide new insights into the development of ophthalmology services directed at this specific type of patient and diagnosis.
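    A hedged sketch of the kind of computation behind the comparison above: a sector-level vessel density (the perfused-pixel fraction of a binarized OCTA slab) and a between-group test. OCTA software reports VD directly and the abstract does not name its statistical test, so the Mann-Whitney U test and the placeholder values below are illustrative assumptions only.
    ```python
    import numpy as np
    from scipy.stats import mannwhitneyu

    def vessel_density(binary_vessel_mask):
        """Fraction of pixels flagged as perfused vessels within the analyzed sector."""
        mask = np.asarray(binary_vessel_mask, dtype=bool)
        return mask.sum() / mask.size

    # Placeholder per-eye DVC vessel densities for illustration only (not study data).
    vd_nonexudative_mnv = [0.48, 0.51, 0.47, 0.50]
    vd_exudative_amd = [0.41, 0.43, 0.40, 0.44]
    stat, p_value = mannwhitneyu(vd_nonexudative_mnv, vd_exudative_amd, alternative="two-sided")
    ```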

  • Article type: Journal Article
    Retinal blood vessel segmentation is a valuable tool for clinicians to diagnose conditions such as atherosclerosis, glaucoma, and age-related macular degeneration. This paper presents a new framework for segmenting blood vessels in retinal images. The framework has two stages: a multi-layer preprocessing stage and a subsequent segmentation stage employing a U-Net with a multi-residual attention block. The multi-layer preprocessing stage has three steps. The first step is noise reduction, employing a U-shaped convolutional neural network with matrix factorization (CNN with MF) and a detailed U-shaped U-Net (D_U-Net) to minimize image noise, culminating in the selection of the most suitable image based on PSNR and SSIM values. The second step is dynamic data imputation, utilizing multiple models to fill in missing data. The third step is data augmentation using a latent diffusion model (LDM) to expand the training dataset. In the second, segmentation stage, U-Nets with a multi-residual attention block are used to segment the preprocessed, denoised retinal images. The experiments show that the framework is effective at segmenting retinal blood vessels, achieving a Dice score of 95.32, accuracy of 93.56, precision of 95.68, and recall of 95.45. The CNN with MF and D_U-Net also removed noise effectively, as measured by PSNR and SSIM, at noise levels of 0.1, 0.25, 0.5, and 0.75. In the augmentation step, the LDM achieved an inception score of 13.6 and an FID of 46.2.
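    For reference, the reported segmentation metrics (Dice, precision, recall) can be computed from a predicted and a ground-truth binary vessel mask as in the sketch below; this reproduces only the standard metric definitions, not the proposed framework itself.
    ```python
    import numpy as np

    def segmentation_metrics(pred, gt, eps=1e-8):
        """Dice, precision, and recall for binary vessel masks of the same shape."""
        pred, gt = np.asarray(pred, dtype=bool), np.asarray(gt, dtype=bool)
        tp = np.logical_and(pred, gt).sum()
        fp = np.logical_and(pred, ~gt).sum()
        fn = np.logical_and(~pred, gt).sum()
        dice = 2 * tp / (2 * tp + fp + fn + eps)
        precision = tp / (tp + fp + eps)
        recall = tp / (tp + fn + eps)
        return dice, precision, recall
    ```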

  • Article type: Journal Article
    BACKGROUND: Due to limited imaging conditions, the quality of fundus images is often unsatisfactory, especially for images photographed by handheld fundus cameras. Here, we developed an automated image-enhancement method based on combining two mirror-symmetric generative adversarial networks (GANs).
    METHODS: A total of 1047 retinal images were included. The raw images were enhanced by a GAN-based deep enhancer and by another method based on luminosity and contrast adjustment. All raw and enhanced images were anonymously assessed by three experienced ophthalmologists and classified into six quality levels. The quality classification and quality change of the images were compared. In addition, detailed image-reading results for the number of dubiously pathological fundi were compared.
    RESULTS: After GAN enhancement, 42.9% of images increased in quality, 37.5% remained stable, and 19.6% decreased. After excluding images already at the highest level (level 0) before enhancement, a large proportion (75.6%) of images increased in quality classification and only a minority (9.3%) decreased. The GAN-based method was superior to the luminosity and contrast adjustment method for quality improvement (P < 0.001). In terms of image-reading results, the consistency rate ranged from 86.6% to 95.6%, and for the specific disease subtypes both the discrepancy count and the discrepancy rate were below 15 and 15%, respectively, for the two ophthalmologists.
    CONCLUSIONS: Learning the style of high-quality retinal images with the proposed deep enhancer may be an effective way to improve the quality of retinal images photographed by handheld fundus cameras.
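    The "two mirror-symmetric GANs" suggest a CycleGAN-style setup with forward and backward generators tied together by a cycle-consistency loss. The sketch below illustrates that loss under this assumption; the generators G and F are placeholders, and this is not the authors' training code.
    ```python
    import torch.nn as nn

    def cycle_consistency_loss(G, F, low_quality, high_quality, lam=10.0):
        """L1 reconstruction error after a round trip through both generators (assumed CycleGAN-style objective)."""
        l1 = nn.L1Loss()
        loss_low = l1(F(G(low_quality)), low_quality)     # low quality -> enhanced -> back to low quality
        loss_high = l1(G(F(high_quality)), high_quality)  # high quality -> degraded -> back to high quality
        return lam * (loss_low + loss_high)
    ```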

  • Article type: Journal Article
    Background and Objectives: Glaucoma is a major cause of irreversible visual impairment and blindness, so its timely detection is crucial. Retinal images from diabetic retinopathy screening programmes (DRSP) provide an opportunity to detect undiagnosed glaucoma. Our aim was to determine which retinal image indicators are most suitable for referring DRSP patients for glaucoma assessment and to estimate the glaucoma detection potential of the Slovenian DRSP. Materials and Methods: We reviewed retinal images of patients from the DRSP at the University Medical Centre Ljubljana (November 2019-January 2020, May-August 2020). Patients with at least one indicator and some randomly selected patients without indicators were invited for an eye examination. Patients with suspect glaucoma or glaucoma were considered accurately referred. Logistic regression (LOGIT) with patients as statistical units and a generalised estimating equation with logistic regression (GEE) with eyes as statistical units were used to determine the referral accuracy of the indicators. Results: Of the 2230 patients reviewed, 209 (10.1%) had at least one indicator on a retinal image of one or both eyes. A total of 149 patients (129 with at least one indicator and 20 without) attended the eye exam. Seventy-nine (53.0%) were glaucoma negative, 54 (36.2%) were suspect glaucoma, and 16 (10.7%) were glaucoma positive. Seven glaucoma patients were newly detected. A neuroretinal rim notch predicted glaucoma in all cases. The cup-to-disc ratio was the most important indicator for accurate referral (odds ratio 7.59, 95% CI 3.98-14.47; p < 0.001) and remained statistically significant in the multivariable models. Family history of glaucoma also showed an effect (odds ratio 3.06, 95% CI 1.02-9.19; p = 0.046) but remained statistically significant only in the LOGIT multivariable model. Other indicators and confounders were not statistically significant in the multivariable models. Conclusions: Our results suggest that the neuroretinal rim notch and the cup-to-disc ratio are the most important indicators for accurate glaucoma referral from retinal images in a DRSP. Approximately half of the glaucoma cases in DRSPs may be undiagnosed.
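    A hedged sketch of the two referral-accuracy models named above, using statsmodels: a patient-level logistic regression (LOGIT) and an eye-level GEE with a logit link and exchangeable correlation clustered on patient. The column names ("glaucoma", "cdr_indicator", "family_history", "patient_id") are illustrative, not the study's variables.
    ```python
    import numpy as np
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    def fit_referral_models(patient_df, eye_df):
        """patient_df: one row per patient; eye_df: one row per eye with a patient identifier (assumed names)."""
        logit = smf.logit("glaucoma ~ cdr_indicator + family_history", data=patient_df).fit()
        gee = smf.gee("glaucoma ~ cdr_indicator + family_history",
                      groups="patient_id", data=eye_df,
                      family=sm.families.Binomial(),
                      cov_struct=sm.cov_struct.Exchangeable()).fit()
        odds_ratios = np.exp(logit.params)   # e.g., the odds ratio for the cup-to-disc indicator
        return logit, gee, odds_ratios
    ```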

  • Article type: Journal Article
    Diabetic retinopathy (DR) causes blindness if it is not detected and treated in the early stages. To create an effective treatment strategy, the severity of the disease must first be divided into referral-warranted diabetic retinopathy (RWDR) and non-referral diabetic retinopathy (NRDR). However, fundus examinations are often insufficient owing to the lack of professional services in communities, particularly in developing countries. In this study, we introduce UGAN_Resnet_CBAM (URNet; UGAN is a generative adversarial network that uses U-Net for feature extraction), a two-stage end-to-end deep learning technique for the automatic detection of diabetic retinopathy. In the first stage, the characteristics of the DDR fundus dataset were used to design an adaptive image preprocessing module; gradient-weighted class activation mapping (Grad-CAM) and t-distributed stochastic neighbor embedding (t-SNE) were used as evaluation indices to analyze the preprocessing results. In the second stage, we enhanced the performance of the ResNet50 network by integrating the convolutional block attention module (CBAM). The results demonstrate that our proposed solution outperformed other current architectures, achieving precisions of 94.5% and 94.4% and recalls of 96.2% and 91.9% for NRDR and RWDR, respectively.
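    A sketch of a standard CBAM block (channel attention followed by spatial attention, after Woo et al., 2018) of the kind integrated into ResNet50 here. Where and how often the block is inserted in URNet is not stated in the abstract, so only the generic module is shown.
    ```python
    import torch
    import torch.nn as nn

    class CBAM(nn.Module):
        """Convolutional block attention module: channel attention then spatial attention."""
        def __init__(self, channels, reduction=16, spatial_kernel=7):
            super().__init__()
            self.mlp = nn.Sequential(                               # shared MLP for channel attention
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(),
                nn.Conv2d(channels // reduction, channels, 1))
            self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

        def forward(self, x):
            # Channel attention: average- and max-pool over space, pass both through the shared MLP, combine.
            avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
            mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
            x = x * torch.sigmoid(avg + mx)
            # Spatial attention: average- and max-pool over channels, convolve, reweight spatial locations.
            s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
            return x * torch.sigmoid(self.spatial(s))
    ```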

  • Article type: Journal Article
    In this research, we study the duality between cataractous retinal image dehazing and image denoising and propose that the dehazing task for cataractous retinal images can be achieved by combining image denoising with a sigmoid function. To do so, we introduce a double-pass fundus reflection model in the YPbPr color space and develop a multilevel stimulated denoising strategy termed MUTE. The transmission matrix of the cataract layer is expressed as a superposition of denoised raw images at different levels, weighted by pixel-wise sigmoid functions. We further design an intensity-based cost function to guide the updating of the model parameters, which are updated by gradient descent with adaptive momentum estimation, yielding the final refined transmission matrix of the cataract layer. We tested our method on cataractous retinal images from both public and proprietary databases and compared its performance with other state-of-the-art enhancement methods. Both visual and objective assessments show the superiority of the proposed method. We further demonstrate three potential applications, namely blood vessel segmentation, retinal image registration, and diagnosis with enhanced images, all of which may benefit substantially from the proposed method.
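    A minimal sketch of the core idea as described above: express the cataract-layer transmission as a superposition of denoised images at several levels, weighted by pixel-wise sigmoid functions, and refine the weights with Adam (gradient descent with adaptive momentum estimation). The weight normalization and the simple intensity cost below are stand-ins for the paper's actual formulation.
    ```python
    import torch

    def estimate_transmission(denoised_levels, observed, steps=200, lr=0.05):
        """denoised_levels: (L, H, W) tensor of denoised images; observed: (H, W) tensor of the raw channel."""
        L, H, W = denoised_levels.shape
        alpha = torch.zeros(L, H, W, requires_grad=True)       # pre-sigmoid, pixel-wise weight parameters
        opt = torch.optim.Adam([alpha], lr=lr)                 # adaptive momentum estimation
        for _ in range(steps):
            opt.zero_grad()
            w = torch.sigmoid(alpha)                           # pixel-wise sigmoid weights
            transmission = (w * denoised_levels).sum(dim=0) / (w.sum(dim=0) + 1e-6)
            loss = ((transmission - observed) ** 2).mean()     # placeholder intensity-based cost
            loss.backward()
            opt.step()
        return transmission.detach()                           # refined transmission matrix of the cataract layer
    ```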