retinal image

  • Article type: Journal Article
    Fundus cameras are widely used by ophthalmologists for monitoring and diagnosing retinal pathologies. Unfortunately, no optical system is perfect, and the visibility of retinal images can be greatly degraded due to the presence of problematic illumination, intraocular scattering, or blurriness caused by sudden movements. To improve image quality, different retinal image restoration/enhancement techniques have been developed, which play an important role in improving the performance of various clinical and computer-assisted applications. This paper gives a comprehensive review of these restoration/enhancement techniques, discusses their underlying mathematical models, and shows how they may be effectively applied in real-life practice to increase the visual quality of retinal images for potential clinical applications including diagnosis and retinal structure recognition. All three main topics of retinal image restoration/enhancement techniques, i.e., illumination correction, dehazing, and deblurring, are addressed. Finally, some considerations about challenges and the future scope of retinal image restoration/enhancement techniques will be discussed.
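For orientation, the three restoration topics named in this review are commonly formalized with degradation models of the following standard forms (a sketch of the usual notation, not necessarily the exact models discussed in the paper):

```latex
% Illumination correction: Retinex-style decomposition of the observed
% fundus image S into illumination L and reflectance R.
S(x) = L(x)\, R(x)

% Dehazing: atmospheric-scattering-style model for intraocular scattering,
% with observed image I, clear image J, transmission t, and ambient light A.
I(x) = J(x)\, t(x) + A\bigl(1 - t(x)\bigr)

% Deblurring: linear degradation with blur kernel (PSF) h, latent sharp
% image f, additive noise n, and * denoting convolution.
g(x) = (h * f)(x) + n(x)
```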

  • Article type: Journal Article
    Purpose: We developed an Infant Retinal Intelligent Diagnosis System (IRIDS), an automated system to aid early diagnosis and monitoring of infantile fundus diseases and health conditions to satisfy the urgent needs of ophthalmologists.
    Methods: We developed IRIDS by combining convolutional neural networks and transformer structures, using a dataset of 7697 retinal images (1089 infants) from four hospitals. It identifies nine fundus diseases and conditions, namely, retinopathy of prematurity (ROP) (mild ROP, moderate ROP, and severe ROP), retinoblastoma (RB), retinitis pigmentosa (RP), Coats disease, coloboma of the choroid, congenital retinal fold (CRF), and normal. IRIDS also includes depth attention modules, ResNet-18 (Res-18), and Multi-Axis Vision Transformer (MaxViT). Performance was compared to that of ophthalmologists using 450 retinal images. IRIDS employed a five-fold cross-validation approach to generate the classification results.
    Results: Several baseline models achieved the following metrics: accuracy, precision, recall, F1-score (F1), kappa, and area under the receiver operating characteristic curve (AUC), with best values of 94.62% (95% CI, 94.34%-94.90%), 94.07% (95% CI, 93.32%-94.82%), 90.56% (95% CI, 88.64%-92.48%), 92.34% (95% CI, 91.87%-92.81%), 91.15% (95% CI, 90.37%-91.93%), and 99.08% (95% CI, 99.07%-99.09%), respectively. In comparison with ophthalmologists, IRIDS showed promising results, demonstrating an average accuracy, precision, recall, F1, kappa, and AUC of 96.45% (95% CI, 96.37%-96.53%), 95.86% (95% CI, 94.56%-97.16%), 94.37% (95% CI, 93.95%-94.79%), 95.03% (95% CI, 94.45%-95.61%), 94.43% (95% CI, 93.96%-94.90%), and 99.51% (95% CI, 99.51%-99.51%), respectively, in multi-label classification on the test dataset, utilizing the Res-18 and MaxViT models. These results suggest that, particularly in terms of AUC, IRIDS achieved performance that warrants further investigation for the detection of retinal abnormalities.
    Conclusions: IRIDS identifies nine infantile fundus diseases and conditions accurately. It may aid non-ophthalmologist personnel in infantile fundus disease screening in underserved areas, thus preventing severe complications. IRIDS serves as an example of artificial intelligence integration into ophthalmology to achieve better outcomes in predictive, preventive, and personalized medicine (PPPM/3PM) in the treatment of infantile fundus diseases.
    The online version contains supplementary material available at 10.1007/s13167-024-00350-y.
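A minimal sketch of how a ResNet-18 branch and a MaxViT branch could be paired for the nine-condition multi-label task described in the abstract above. The actual IRIDS fusion strategy and its depth attention modules are not specified here; averaging the two branches' logits before a sigmoid is purely an assumption, and torchvision's stock model builders stand in for the study's networks.

```python
# Hedged sketch of a ResNet-18 + MaxViT pairing for 9-way multi-label
# prediction. The fusion rule (logit averaging) is an assumption.
import torch
import torch.nn as nn
from torchvision.models import resnet18, maxvit_t

NUM_CLASSES = 9  # mild/moderate/severe ROP, RB, RP, Coats, coloboma, CRF, normal

class DualBranchClassifier(nn.Module):
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.cnn = resnet18(num_classes=num_classes)   # convolutional branch
        self.vit = maxvit_t(num_classes=num_classes)   # transformer branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Average the two branches' logits, then squash with a sigmoid so each
        # of the nine conditions gets an independent probability.
        logits = 0.5 * (self.cnn(x) + self.vit(x))
        return torch.sigmoid(logits)

model = DualBranchClassifier()
probs = model(torch.randn(2, 3, 224, 224))   # maxvit_t expects 224x224 inputs
print(probs.shape)                            # torch.Size([2, 9])
```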

  • Article type: Journal Article
    Background: The precise identification of retinal disorders is of utmost importance in the prevention of both temporary and permanent visual impairment. Prior research has yielded encouraging results in the classification of retinal images pertaining to a specific retinal condition. In clinical practice, however, it is not uncommon for a single patient to present with multiple retinal disorders concurrently. Hence, classifying retinal images into multiple labels remains a significant obstacle for existing methodologies, but its successful accomplishment would yield valuable insight into several conditions simultaneously.
    Methods: This study presents a novel vision transformer architecture called retinal ViT, which incorporates the self-attention mechanism into the field of medical image analysis. Notably, the study aims to demonstrate that transformer-based models can achieve performance competitive with CNN-based models; hence, the convolutional modules have been eliminated from the proposed model. The model concludes with a multi-label classifier that utilizes a feed-forward network architecture; this classifier consists of two layers and employs a sigmoid activation function.
    Results: The experimental findings provide evidence of the improved performance of the proposed model compared to state-of-the-art approaches such as ResNet, VGG, DenseNet, and MobileNet on the publicly available ODIR-2019 dataset; the proposed approach outperforms these algorithms in terms of kappa, F1 score, AUC, and AVG.
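A minimal sketch of the kind of two-layer, sigmoid-activated multi-label head described above, attached to a generic ViT [CLS] embedding. The hidden width, the activation between the two layers, and the eight-label output size (chosen here to match ODIR-2019's eight categories) are assumptions, not details given in the abstract.

```python
# Hedged sketch of a two-layer feed-forward multi-label head with a sigmoid
# output, attached to a ViT encoder's [CLS] embedding.
import torch
import torch.nn as nn

class MultiLabelHead(nn.Module):
    def __init__(self, embed_dim: int = 768, hidden_dim: int = 256, num_labels: int = 8):
        super().__init__()
        self.ff = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),   # layer 1 (hidden width assumed)
            nn.GELU(),                          # activation assumed
            nn.Linear(hidden_dim, num_labels),  # layer 2
        )

    def forward(self, cls_token: torch.Tensor) -> torch.Tensor:
        # Sigmoid gives an independent probability per label, so several
        # retinal disorders can be predicted for the same image.
        return torch.sigmoid(self.ff(cls_token))

head = MultiLabelHead()
probs = head(torch.randn(4, 768))   # e.g., the [CLS] embedding of a ViT
print(probs.shape)                  # torch.Size([4, 8])
```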

  • Article type: Journal Article
    Purpose: The purpose of this study is to identify predictive activation biomarkers in the retinal microvascular characteristics of non-exudative macular neovascularization (MNV) and to avoid delayed treatment or overtreatment of subclinical MNV. The main objective is to contribute to the international debate on a new understanding of the role of retinal vessel features in the pathogenesis and progression of non-exudative MNV and age-related macular degeneration (AMD). A discussion on revising the related clinical protocols is presented.
    Methods: In this retrospective study, the authors included eyes with non-exudative MNV, eyes with exudative AMD, and normal eyes of age-matched healthy subjects. The parameters were obtained by optical coherence tomography (OCT) and optical coherence tomography angiography (OCTA).
    Results: In total, 21 eyes with exudative AMD, 21 eyes with non-exudative MNV, and 20 eyes of 20 age-matched healthy subjects without retinal pathology were included. Vessel density (VD) of the deep vascular complex (DVC) in eyes with non-exudative MNV was significantly greater than that in eyes with exudative AMD (p = 0.002), while for superficial vascular plexus (SVP) metrics, no VD differences among sectors were observed between eyes with non-exudative MNV and eyes with exudative AMD.
    Conclusions: The reduction in retinal vessel density, especially in the DVC, seems to be involved in or accompanied by non-exudative MNV activation and should be closely monitored during follow-up visits in order to ensure prompt anti-angiogenic therapy. A discussion on applicable clinical protocols is presented, aiming to contribute new insights into the development of ophthalmology services directed at this specific type of patient and diagnosis.
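A minimal sketch of how a between-group vessel-density comparison like the one reported above could be run. The abstract does not state which statistical test produced p = 0.002; a two-sided Mann-Whitney U test is used only as an example, and the VD values below are hypothetical placeholders.

```python
# Hedged sketch of a two-group vessel-density (VD) comparison; test choice
# and all numbers are assumptions, not the study's data.
from scipy.stats import mannwhitneyu

vd_nonexudative_mnv = [52.1, 49.8, 53.4, 51.2, 50.7]   # hypothetical DVC VD (%)
vd_exudative_amd    = [44.3, 46.0, 43.1, 45.5, 44.9]   # hypothetical DVC VD (%)

stat, p_value = mannwhitneyu(vd_nonexudative_mnv, vd_exudative_amd,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```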

  • Article type: Journal Article
    Retinal blood vessel segmentation is a valuable tool for clinicians to diagnose conditions such as atherosclerosis, glaucoma, and age-related macular degeneration. This paper presents a new framework for segmenting blood vessels in retinal images. The framework has two stages: a multi-layer preprocessing stage and a subsequent segmentation stage employing a U-Net with a multi-residual attention block. The multi-layer preprocessing stage has three steps. The first step is noise reduction, employing a U-shaped convolutional neural network with matrix factorization (CNN with MF) and a detailed U-shaped U-Net (D_U-Net) to minimize image noise, culminating in the selection of the most suitable image based on the PSNR and SSIM values. The second step is dynamic data imputation, utilizing multiple models to fill in missing data. The third step is data augmentation through a latent diffusion model (LDM) to expand the training dataset size. The second stage of the framework is segmentation, where the U-Nets with a multi-residual attention block are used to segment the retinal images after they have been preprocessed and the noise has been removed. The experiments show that the framework is effective at segmenting retinal blood vessels: it achieved a Dice score of 95.32, an accuracy of 93.56, a precision of 95.68, and a recall of 95.45. It also removed noise effectively using the CNN with matrix factorization (MF) and the D_U-Net, according to the PSNR and SSIM values at noise levels of 0.1, 0.25, 0.5, and 0.75. In the augmentation step, the LDM achieved an inception score of 13.6 and an FID of 46.2.
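A minimal sketch of the "select the most suitable denoised image by PSNR and SSIM" step described above. The two denoisers (CNN with MF and D_U-Net) are treated as black boxes, the clean reference image and the rule for combining the two scores are assumptions, and skimage's metric functions stand in for whatever implementation the paper used.

```python
# Hedged sketch: pick the denoised candidate with the best PSNR/SSIM score.
# The normalisation and summing of the two metrics is an assumed rule.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def pick_best_denoised(reference: np.ndarray, candidates: dict) -> str:
    """Return the name of the candidate with the best combined PSNR + SSIM."""
    scores = {}
    for name, img in candidates.items():
        psnr = peak_signal_noise_ratio(reference, img, data_range=1.0)
        ssim = structural_similarity(reference, img, data_range=1.0)
        scores[name] = psnr / 50.0 + ssim        # crude normalisation, assumed
    return max(scores, key=scores.get)

ref = np.clip(np.random.rand(256, 256), 0, 1)   # placeholder fundus channel
cands = {"cnn_mf": np.clip(ref + 0.01 * np.random.randn(256, 256), 0, 1),
         "d_unet": np.clip(ref + 0.02 * np.random.randn(256, 256), 0, 1)}
print(pick_best_denoised(ref, cands))
```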

  • Article type: Journal Article
    Background: Due to limited imaging conditions, the quality of fundus images is often unsatisfactory, especially for images photographed by handheld fundus cameras. Here, we have developed an automated method for image enhancement based on combining two mirror-symmetric generative adversarial networks (GANs).
    Methods: A total of 1047 retinal images were included. The raw images were enhanced by a GAN-based deep enhancer and by another method based on luminosity and contrast adjustment. All raw images and enhanced images were anonymously assessed and classified into six quality levels by three experienced ophthalmologists. The quality classification and quality change of the images were compared. In addition, detailed image-reading results regarding the number of dubiously pathological fundi were also compared.
    Results: After GAN enhancement, 42.9% of images increased in quality, 37.5% remained stable, and 19.6% decreased. After excluding images at the highest level (level 0) before enhancement, a large proportion (75.6%) of images showed an increase in quality classification, and only a minority (9.3%) showed a decrease. The GAN-based method was superior to the luminosity and contrast adjustment method in terms of quality improvement (P < 0.001). In terms of image-reading results, the consistency rate ranged from 86.6% to 95.6%, and for the specific disease subtypes, both the discrepancy number and the discrepancy rate were below 15 and 15%, respectively, for the two ophthalmologists.
    Conclusions: Learning the style of high-quality retinal images with the proposed deep enhancer may be an effective way to improve the quality of retinal images photographed by handheld fundus cameras.
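A minimal sketch of a generic luminosity-and-contrast adjustment baseline of the kind the GAN enhancer is compared against above. The study's exact baseline is not detailed in this abstract; gamma correction plus CLAHE is only one common way to implement such an adjustment, and the parameter values are assumptions.

```python
# Hedged sketch of a luminosity + contrast baseline: gamma correction for
# brightness, then CLAHE for local contrast. Parameters are assumed.
import numpy as np
from skimage import exposure

def luminosity_contrast_adjust(rgb: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """rgb: float image in [0, 1]. Brighten with gamma, then apply CLAHE."""
    brightened = exposure.adjust_gamma(rgb, gamma)                    # luminosity
    return exposure.equalize_adapthist(brightened, clip_limit=0.01)   # contrast

img = np.random.rand(256, 256, 3)            # placeholder retinal image
enhanced = luminosity_contrast_adjust(img)
print(enhanced.min(), enhanced.max())
```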

  • Article type: Journal Article
    Background and Objectives: Glaucoma is a major cause of irreversible visual impairment and blindness, so its timely detection is crucial. Retinal images from diabetic retinopathy screening programmes (DRSP) provide an opportunity to detect undiagnosed glaucoma. Our aim was to find out which retinal image indicators are most suitable for referring DRSP patients for glaucoma assessment and to determine the glaucoma detection potential of the Slovenian DRSP. Materials and Methods: We reviewed retinal images of patients from the DRSP at the University Medical Centre Ljubljana (November 2019-January 2020, May-August 2020). Patients with at least one indicator and some randomly selected patients without indicators were invited for an eye examination. Patients with suspected glaucoma or glaucoma were considered accurately referred. Logistic regression (LOGIT) with patients as statistical units and generalised estimating equations with logistic regression (GEE) with eyes as statistical units were used to determine the referral accuracy of the indicators. Results: Of the 2230 patients reviewed, 209 (10.1%) had at least one indicator on a retinal image of one or both eyes. A total of 149 (129 with at least one indicator and 20 without) attended the eye exam. Seventy-nine (53.0%) were glaucoma negative, 54 (36.2%) suspect glaucoma, and 16 (10.7%) glaucoma positive. Seven glaucoma patients were newly detected. A neuroretinal rim notch predicted glaucoma in all cases. The cup-to-disc ratio was the most important indicator for accurate referral (odds ratio 7.59, 95% CI 3.98-14.47; p < 0.001) and remained statistically significant in the multivariable analysis. Family history of glaucoma also showed an effect (odds ratio 3.06, 95% CI 1.02-9.19; p = 0.046) but remained statistically significant only in the LOGIT multivariable model. Other indicators and confounders were not statistically significant in the multivariable models. Conclusions: Our results suggest that the neuroretinal rim notch and the cup-to-disc ratio are the most important indicators for accurate glaucoma referral from retinal images in a DRSP. Approximately half of the glaucoma cases in DRSPs may be undiagnosed.
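A minimal sketch of the two modelling approaches named above: patient-level logistic regression (LOGIT) and eye-level logistic GEE with patients as clusters. The column names (glaucoma, cup_disc_ratio, rim_notch, family_history, patient_id), the simulated table, and collapsing eyes to patient level with a per-patient maximum are assumptions, not the study's variables or procedure.

```python
# Hedged sketch: eye-level logistic GEE (patients as clusters) and
# patient-level logistic regression, on simulated placeholder data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated eye-level table (two eyes per patient); names and effect sizes
# are made up purely for illustration.
rng = np.random.default_rng(0)
n_patients = 200
patient_id = np.repeat(np.arange(n_patients), 2)
cup_disc_ratio = np.clip(rng.normal(0.5, 0.15, n_patients * 2), 0.1, 0.95)
rim_notch = rng.integers(0, 2, n_patients * 2)
family_history = np.repeat(rng.integers(0, 2, n_patients), 2)
logit_p = -6 + 8 * cup_disc_ratio + 1.5 * rim_notch + 0.8 * family_history
glaucoma = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame(dict(patient_id=patient_id, cup_disc_ratio=cup_disc_ratio,
                       rim_notch=rim_notch, family_history=family_history,
                       glaucoma=glaucoma))

# Eye-level model: logistic GEE with an exchangeable working correlation so
# that the two eyes of one patient are not treated as independent.
gee = smf.gee("glaucoma ~ cup_disc_ratio + rim_notch + family_history",
              groups="patient_id", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())

# Patient-level model: plain logistic regression, one row per patient
# (here the per-patient maximum of each eye-level variable, an assumption).
patients = df.groupby("patient_id").max(numeric_only=True).reset_index()
logit = smf.logit("glaucoma ~ cup_disc_ratio + rim_notch + family_history",
                  data=patients).fit()
print(logit.summary())
```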

  • Article type: Journal Article
    Diabetic retinopathy (DR) will cause blindness if detection and treatment are not carried out in the early stages. To create an effective treatment strategy, the severity of the disease must first be divided into referral-warranted diabetic retinopathy (RWDR) and non-referral diabetic retinopathy (NRDR). However, fundus examinations are often insufficient owing to a lack of professional services in communities, particularly in developing countries. In this study, we introduce UGAN_Resnet_CBAM (URNet; UGAN is a generative adversarial network that uses Unet for feature extraction), a two-stage end-to-end deep learning technique for the automatic detection of diabetic retinopathy. In the first stage, an adaptive image preprocessing module was designed based on the characteristics of the DDR fundus dataset; Gradient-weighted Class Activation Mapping (Grad-CAM) and t-distributed stochastic neighbor embedding (t-SNE) were used as evaluation indices to analyze the preprocessing results. In the second stage, we enhanced the performance of the Resnet50 network by integrating the convolutional block attention module (CBAM). The outcomes demonstrate that our proposed solution outperformed other current structures, achieving precisions of 94.5% and 94.4% and recalls of 96.2% and 91.9% for NRDR and RWDR, respectively.
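A minimal sketch of a convolutional block attention module (CBAM) of the kind integrated into Resnet50 above. The abstract does not say where URNet inserts the module or which reduction ratio it uses; this follows the standard CBAM formulation (channel attention followed by spatial attention) purely for illustration.

```python
# Hedged sketch of a standard CBAM block: channel attention (shared MLP over
# avg- and max-pooled descriptors), then spatial attention (7x7 conv over
# channel-wise avg and max maps). Hyperparameters are assumed.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(                       # shared channel-attention MLP
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention: squeeze spatially with average and max pooling.
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: squeeze channels with average and max, then conv.
        sp = torch.cat([x.mean(dim=1, keepdim=True),
                        x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(sp))

feat = torch.randn(2, 2048, 7, 7)     # e.g., ResNet50 final-stage features
print(CBAM(2048)(feat).shape)         # torch.Size([2, 2048, 7, 7])
```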

  • Article type: Journal Article
    Eye diseases that are common, as well as many diseases that result in visual ailments, such as diabetes and vascular disease, can be diagnosed through retinal imaging. The enhancement of retinal images often helps in diagnosing diseases related to retinal organ failure. However, today's image enhancement methods may lead to artificial boundaries, sudden color gradation, and the loss of image details. Therefore, to prevent these side effects, a new method of retinal image enhancement is proposed. In this work, we propose a new method for enhancing the overall contrast of colored retinal images: low-light image enhancement using a new retinex method based on a powerful semi-decoupled retinex method. In particular, the illumination layer I gradually approximates the input image S, which leads to a complete Gaussian transformation model, while the reflectance layer R is estimated jointly from S and the intermediate I so that image noise is suppressed simultaneously during the estimation of R; the method is evaluated on the publicly available Messidor database. From our assessment measurements (PSNR and SSIM), we show that the proposed method is effective in comparison with relevant and recently proposed retinal imaging methods; moreover, the color, which is determined by the data, does not change the image structure. Finally, a technique is presented to improve the pronounced color of a retinal image, which is useful for ophthalmologists to screen for retinal disease more effectively. Moreover, this technique can be used in the development of robotics for imaging tests to search for clinical markers.
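A minimal sketch of the retinex idea underlying the method above: decompose an observed image S into an illumination layer I and a reflectance layer R with S = I · R, estimate I as a smoothed version of S, and recover R by division. The paper's semi-decoupled, iterative estimation is more involved; a single Gaussian smoothing step and the chosen sigma are assumptions used purely for illustration.

```python
# Hedged sketch of a single-scale retinex decomposition S = I * R, with the
# illumination estimated by Gaussian smoothing. Sigma is an assumed value.
import numpy as np
from skimage.filters import gaussian

def single_scale_retinex(s: np.ndarray, sigma: float = 25.0, eps: float = 1e-6):
    """s: grayscale image in (0, 1]. Returns (illumination, reflectance)."""
    illumination = gaussian(s, sigma=sigma)             # coarse illumination estimate
    reflectance = np.clip(s / (illumination + eps), 0, 1)
    return illumination, reflectance

s = np.clip(np.random.rand(256, 256), 0.01, 1.0)        # placeholder fundus channel
illum, refl = single_scale_retinex(s)
print(illum.shape, refl.shape)
```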

  • Article type: Journal Article
    Purpose: The curation of images using human resources is time intensive but an essential step for developing artificial intelligence (AI) algorithms. Our goal was to develop and implement an AI algorithm for image curation in a high-volume setting. We also explored AI tools that would assist in deploying a tiered approach, in which the AI model labels images and flags potential mislabels for human review.
    Design: Implementation of an AI algorithm.
    Subjects: Seven-field stereoscopic images from multiple clinical trials.
    Methods: The 7-field stereoscopic image protocol includes 7 pairs of images from various parts of the central retina along with images of the anterior part of the eye. All images were labeled with a field number by reading center graders. The model output included classification of the retinal images into 8 field numbers. Probability scores (0-1) were generated to identify misclassified images, with 1 indicating a high probability of a correct label.
    Main Outcome Measures: Agreement of the AI prediction with grader classification of field number, and the use of probability scores to identify mislabeled images.
    Results: The AI model was trained and validated on 17,529 images and tested on 3004 images. The pooled agreement of field numbers between grader classification and the AI model was 88.3% (kappa, 0.87). The pooled mean probability score was 0.97 (standard deviation [SD], 0.08) for images for which the graders agreed with the AI-generated labels and 0.77 (SD, 0.19) for images for which the graders disagreed with the AI-generated labels (P < 0.0001). Using receiver operating characteristic curves, a probability score of 0.99 was identified as the cutoff for distinguishing mislabeled images. A tiered workflow using a probability score of < 0.99 as the cutoff would include 27.6% of the 3004 images for human review and reduce the error rate from 11.7% to 1.5%.
    Conclusions: The implementation of AI algorithms requires measures in addition to model validation. Tools that flag potential errors in the labels generated by AI models will reduce inaccuracies, increase trust in the system, and provide data for continuous model development.
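A minimal sketch of the tiered curation workflow described above: keep the AI-generated label when the probability score reaches the cutoff, otherwise flag the image for human review. The 0.99 cutoff is the value reported in the study; the score array and function name below are placeholders.

```python
# Hedged sketch: route images with a probability score below the cutoff to
# human review and auto-accept the rest. Scores below are placeholders.
import numpy as np

CUTOFF = 0.99   # cutoff reported in the study

def tier_images(prob_scores: np.ndarray):
    """Split image indices into auto-accepted and human-review sets."""
    flagged = np.where(prob_scores < CUTOFF)[0]     # route to a human grader
    accepted = np.where(prob_scores >= CUTOFF)[0]   # keep the AI label
    return accepted, flagged

scores = np.array([0.999, 0.97, 0.995, 0.64, 0.992])
accepted, flagged = tier_images(scores)
print("auto-accepted:", accepted, "review:", flagged)
```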