Inception V3

  • Article type: Journal Article
    The process of brain tumour segmentation entails locating the tumour precisely in images. Magnetic Resonance Imaging (MRI) is typically used by doctors to find brain tumours or tissue abnormalities. Using region-based Convolutional Neural Network (R-CNN) masks, Grad-CAM, and transfer learning, this work offers an effective method for the detection of brain tumours. The goal is to help doctors make highly accurate diagnoses. A transfer learning-based model has been suggested that offers high sensitivity and accuracy scores for brain tumour detection when segmentation is done using R-CNN masks. To train the model, the Inception V3, VGG-16, and ResNet-50 architectures were utilised. The Brain MRI Images for Brain Tumour Detection dataset was used to develop this method. This work's performance is evaluated and reported in terms of recall, specificity, sensitivity, accuracy, precision, and F1 score. A thorough analysis compares the proposed model operating with three distinct architectures: VGG-16, Inception V3, and ResNet-50. Comparing the proposed model, which builds on VGG-16, with related works also revealed its performance. Achieving high sensitivity and accuracy percentages was the main goal. Using this approach, an accuracy and sensitivity of around 99% were obtained, which is much higher than in current efforts.
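    The recall, specificity, sensitivity, accuracy, precision, and F1 scores reported above all derive from the binary confusion matrix; a minimal sketch of those formulas (the counts are illustrative placeholders, not results from the paper):

```python
# Metrics from a binary confusion matrix; the counts passed in below are
# illustrative placeholders, not results from the paper.
def metrics(tp, fp, tn, fn):
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)            # also called recall
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

acc, sens, spec, prec, f1 = metrics(tp=9, fp=1, tn=9, fn=1)
print(acc, sens, spec, prec, f1)
```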

  • Article type: Journal Article
    Breast cancer (BC) is the leading cause of female cancer mortality and a major threat to women's health. Deep learning methods have recently been used extensively in many medical domains, especially in detection and classification applications. Studying histological images for the automatic diagnosis of BC is important for patients and their prognosis. Owing to the complexity and variety of histology images, manual examination can be difficult and error-prone, and thus requires the services of experienced pathologists. Therefore, the publicly accessible BreakHis and invasive ductal carcinoma (IDC) datasets are used in this study to analyze histopathological images of BC. First, using super-resolution generative adversarial networks (SRGANs), which create high-resolution images from low-quality images, the gathered images from BreakHis and IDC are pre-processed to provide useful results in the prediction stage. The components of conventional generative adversarial network (GAN) loss functions and effective sub-pixel nets were combined to create the SRGAN concept. Next, the high-quality images are sent to the data augmentation stage, where new data points are created by making small adjustments to the dataset using rotation, random cropping, mirroring, and color-shifting. Then, patch-based feature extraction using Inception V3 and ResNet-50 (PFE-INC-RES) is employed to extract features from the augmented images. After the features have been extracted, they are processed with a transductive long short-term memory (TLSTM) to improve classification accuracy by decreasing the number of false positives. The results of the suggested PFE-INC-RES are evaluated against existing methods on the BreakHis dataset, with respect to accuracy (99.84%), specificity (99.71%), sensitivity (99.78%), and F1-score (99.80%), while on the IDC dataset the suggested PFE-INC-RES achieved an F1-score of 99.08%, accuracy of 99.79%, specificity of 98.97%, and sensitivity of 99.17%.
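    The patch-based feature extraction step (PFE) that feeds Inception V3 and ResNet-50 can be pictured as tiling each histology image into fixed-size blocks; a minimal numpy sketch, where the 32-pixel patch size and image shape are assumptions rather than the paper's settings:

```python
import numpy as np

def extract_patches(image, patch=32):
    """Tile an H x W x C image into non-overlapping patch x patch blocks."""
    h, w, c = image.shape
    patches = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            patches.append(image[y:y + patch, x:x + patch])
    return np.stack(patches)            # (n_patches, patch, patch, C)

img = np.zeros((128, 128, 3), dtype=np.float32)  # stand-in histology tile
patches = extract_patches(img)
print(patches.shape)  # (16, 32, 32, 3)
```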

  • Article type: Journal Article
    Early and accurate detection of plant leaf diseases is crucial for safeguarding agricultural crop productivity and ensuring food security. During their life cycle, plant leaves become diseased owing to multiple factors such as bacteria, fungi, and weather conditions. In this work, the authors propose a model that aids early detection of leaf diseases using a novel hierarchical residual vision transformer built from improved Vision Transformer and ResNet9 models. The proposed model can extract more meaningful and discriminating details while reducing the number of trainable parameters and requiring fewer computations. The proposed method is evaluated on the Local Crop dataset, the Plant Village dataset, and the Extended Plant Village dataset, with 13, 38, and 51 leaf disease classes, respectively. The proposed model is trained using the best trial parameters of the improved Vision Transformer, and the features are classified using ResNet9. Performance evaluation across multiple aspects of the aforementioned datasets reveals that the proposed model outperforms other models such as Inception V3, MobileNetV2, and ResNet50.
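    The vision-transformer front end that the hierarchical residual model builds on begins by flattening image patches into tokens and projecting them linearly; a minimal numpy sketch of that tokenization step (the patch size and embedding width are assumptions, and the projection here is random rather than learned):

```python
import numpy as np

def patchify_tokens(image, patch=16, embed_dim=64, seed=0):
    """Split an image into patches, flatten each, project to embed_dim."""
    rng = np.random.default_rng(seed)
    h, w, c = image.shape
    n = (h // patch) * (w // patch)
    tokens = (image[:h // patch * patch, :w // patch * patch]
              .reshape(h // patch, patch, w // patch, patch, c)
              .transpose(0, 2, 1, 3, 4)
              .reshape(n, patch * patch * c))
    proj = rng.standard_normal((patch * patch * c, embed_dim)) * 0.02
    return tokens @ proj                # (n_tokens, embed_dim)

tokens = patchify_tokens(np.zeros((64, 64, 3), dtype=np.float32))
print(tokens.shape)  # (16, 64)
```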

  • Article type: Journal Article
    One of the major causes of blindness in human beings is diabetic retinopathy (DR). To prevent blindness, early detection of DR is therefore necessary. In this paper, a hybrid model is proposed for diagnosing DR from fundus images. A combination of morphological image processing and Inception v3 deep learning techniques is exploited to detect DR and to classify images as healthy, mild non-proliferative DR (NPDR), moderate NPDR, severe NPDR, or proliferative DR (PDR). The proposed algorithm proceeds in several steps: segmentation of blood vessels; localization and removal of the optic disc and macula; detection of abnormal features (microaneurysms, hemorrhages, and neovascularization); and classification. Microaneurysms and hemorrhages that appear in the retina are early signs of DR. In this work, we detect microaneurysms and hemorrhages by applying dynamic contrast-limited adaptive histogram equalization and a threshold value to overlapping patch images. An overall accuracy of 96.83% is obtained in classifying DR into five stages. This better performance demonstrates the effectiveness and novelty of the proposed work compared with recently reported work.
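    The lesion-detection step above combines contrast enhancement with thresholding; a minimal sketch using plain global histogram equalization in place of the paper's dynamic contrast-limited adaptive variant, applied to a toy grayscale array:

```python
import numpy as np

def equalize(gray):
    """Global histogram equalization of an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # map to [0, 1]
    return (cdf[gray] * 255).astype(np.uint8)

gray = np.array([[50, 50, 60], [60, 70, 200]], dtype=np.uint8)
enhanced = equalize(gray)
mask = enhanced > 128      # threshold candidate lesion pixels
print(mask.sum())
```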

  • Article type: Journal Article
    Continuous release of image databases with fully or partially identical inner categories dramatically deteriorates the production of autonomous Computer-Aided Diagnostics (CAD) systems for true comprehensive medical diagnostics. The first challenge is the frequent massive bulk release of medical image databases, which often suffer from two common drawbacks: image duplication and corruption. The many subsequent releases of the same data with the same classes or categories come with no clear evidence of success in the concatenation of those identical classes among image databases. This issue stands as a stumbling block in the path of hypothesis-based experiments for the production of a single learning model that can successfully classify all of them correctly. Removing redundant data, enhancing performance, and optimizing energy resources are among the most challenging aspects. In this article, we propose a global data aggregation scale model that incorporates six image databases selected from specific global resources. The proposed valid learner is based on training all the unique patterns within any given data release, thereby creating a unique dataset hypothetically. The Hash MD5 algorithm (MD5) generates a unique hash value for each image, making it suitable for duplication removal. The T-Distributed Stochastic Neighbor Embedding (t-SNE), with a tunable perplexity parameter, can represent data dimensions. Both the Hash MD5 and t-SNE algorithms are applied recursively, producing a balanced and uniform database containing equal samples per category: normal, pneumonia, and Coronavirus Disease of 2019 (COVID-19). We evaluated the performance of all proposed data and the new automated version using the Inception V3 pre-trained model with various evaluation metrics. 
The performance of the proposed scale model showed more favorable results than traditional data aggregation, achieving a high accuracy of 98.48%, along with high precision, recall, and F1-score. The results have been verified through a statistical t-test, yielding t-values and p-values. It is important to emphasize that all t-values are significant, and the p-values provide strong evidence against the null hypothesis. Furthermore, it is noteworthy that the Final dataset outperformed all other datasets across all metric values when diagnosing various lung infections with the same factors.
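    The duplication-removal step can be reproduced with the standard library alone: hash each image's bytes with MD5 and keep only the first occurrence of each digest. A minimal sketch over in-memory byte strings (real use would read image files from the aggregated databases):

```python
import hashlib

def deduplicate(images):
    """Keep the first copy of each image, keyed by MD5 of its raw bytes."""
    seen, unique = set(), []
    for img_bytes in images:
        digest = hashlib.md5(img_bytes).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(img_bytes)
    return unique

scans = [b"scan-A", b"scan-B", b"scan-A", b"scan-C", b"scan-B"]
print(len(deduplicate(scans)))  # 3
```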

  • Article type: Journal Article
    BACKGROUND: Ovarian cancer remains the leading gynecological cause of cancer mortality. Predicting the sensitivity of ovarian cancer to chemotherapy at the time of pathological diagnosis is a goal of precision medicine research that we have addressed in this study using a novel deep-learning neural network framework to analyze the histopathological images.
    METHODS: We have developed a method based on the Inception V3 deep learning algorithm that complements other methods for predicting response to standard platinum-based therapy of the disease. For the study, we used histopathological H&E images (pre-treatment) of high-grade serous carcinoma from The Cancer Genome Atlas (TCGA) Genomic Data Commons portal to train the Inception V3 convolutional neural network system to predict whether cancers had independently been labeled as sensitive or resistant to subsequent platinum-based chemotherapy. The trained model was then tested using data from patients left out of the training process. We used receiver operating characteristic (ROC) and confusion matrix analyses to evaluate model performance and Kaplan-Meier survival analysis to correlate the predicted probability of resistance with patient outcome. Finally, occlusion sensitivity analysis was piloted as a start toward correlating histopathological features with a response.
    RESULTS: The study dataset consisted of 248 patients with stage 2 to 4 serous ovarian cancer. For a held-out test set of forty patients, the trained deep learning network model distinguished sensitive from resistant cancers with an area under the curve (AUC) of 0.846 ± 0.009 (SE). The probability of resistance calculated from the deep-learning network was also significantly correlated with patient survival and progression-free survival. In confusion matrix analysis, the network classifier achieved an overall predictive accuracy of 85% with a sensitivity of 73% and specificity of 90% for this cohort based on the Youden-J cut-off. Stage, grade, and patient age were not statistically significant for this cohort size. Occlusion sensitivity analysis suggested histopathological features learned by the network that may be associated with sensitivity or resistance to the chemotherapy, but multiple marker studies will be necessary to follow up on those preliminary results.
    CONCLUSIONS: This type of analysis has the potential, if further developed, to improve the prediction of response to therapy of high-grade serous ovarian cancer and perhaps be useful as a factor in deciding between platinum-based and other therapies. More broadly, it may increase our understanding of the histopathological variables that predict response and may be adaptable to other cancer types and imaging modalities.
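    The Youden-J cut-off used for the confusion-matrix analysis is the probability threshold that maximizes sensitivity + specificity - 1; a minimal sketch on toy scores and labels (illustrative data, not the study's):

```python
def youden_j_cutoff(scores, labels):
    """Return the threshold maximizing J = sensitivity + specificity - 1."""
    best_j, best_t = -1.0, None
    for t in sorted(set(scores)):
        tp = sum(s >= t and y == 1 for s, y in zip(scores, labels))
        fn = sum(s < t and y == 1 for s, y in zip(scores, labels))
        tn = sum(s < t and y == 0 for s, y in zip(scores, labels))
        fp = sum(s >= t and y == 0 for s, y in zip(scores, labels))
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

scores = [0.1, 0.3, 0.4, 0.6, 0.8, 0.9]   # predicted resistance probabilities
labels = [0, 0, 1, 0, 1, 1]               # 1 = resistant, 0 = sensitive
t, j = youden_j_cutoff(scores, labels)
print(t, round(j, 3))
```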

  • Article type: Journal Article
    The coronavirus outbreak first started in the city of Wuhan, China, in December 2019. The virus belongs to the Coronaviridae family, which can infect both animals and humans. Coronavirus disease 2019 (COVID-19) is typically diagnosed by serology, genetic real-time reverse transcription-polymerase chain reaction (RT-PCR), and antigen testing. These testing methods have limitations such as limited sensitivity, high cost, and long turn-around time. It is therefore necessary to develop an automatic detection system for COVID-19 prediction. Chest X-ray is a lower-cost procedure than chest computed tomography (CT). Deep learning is a highly fruitful machine-learning technique that provides a useful means of learning from and screening a large number of chest X-ray images of COVID-19 and normal cases. There are many deep learning methods for prediction, but these methods have limitations such as overfitting, misclassification, and false predictions on poor-quality chest X-rays. To overcome these limitations, a novel hybrid model called "Inception V3 with VGG16 (Visual Geometry Group)" is proposed for the prediction of COVID-19 using chest X-rays. It is a combination of two deep learning models, Inception V3 and VGG16 (IV3-VGG). To build the hybrid model, 243 images were collected from the COVID-19 Radiography Database. Of the 243 X-rays, 121 are COVID-19 positive and 122 are normal images. The hybrid model is divided into two modules, pre-processing and IV3-VGG. In the dataset, images with different sizes and different color intensities are identified and pre-processed. The second module, IV3-VGG, consists of four blocks: the first block is VGG-16, blocks 2 and 3 are Inception V3 networks, and the final block 4 consists of four layers, namely average pooling, dropout, fully connected, and softmax. The experimental results show that the IV3-VGG model achieves the highest accuracy of 98% compared with five prominent existing deep learning models: Inception V3, VGG16, ResNet50, DenseNet121, and MobileNet.
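    The pre-processing module above must bring X-rays of different sizes to a common input shape; a minimal nearest-neighbour resize sketch in numpy (the 224 x 224 target is an assumption, not a value stated in the paper):

```python
import numpy as np

def resize_nearest(image, out_h=224, out_w=224):
    """Resize an H x W x C image by nearest-neighbour index sampling."""
    h, w = image.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return image[rows][:, cols]

xray = np.zeros((512, 480, 3), dtype=np.uint8)   # stand-in chest X-ray
print(resize_nearest(xray).shape)  # (224, 224, 3)
```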

  • Article type: Journal Article
    Curcuma longa (turmeric) and Curcuma zanthorrhiza (temulawak) are members of the Zingiberaceae family that contain curcuminoids, essential oils, starch, protein, fat, cellulose, and minerals. The nutritional composition of turmeric differs from that of temulawak, which implies a difference in economic value. However, only the few people familiar with herbal plants can identify the difference between them. This study aims to build a model that can distinguish between the two Zingiberaceae species based on images captured with a mobile phone camera. A collection of images of both types of rhizomes is used to build a model through transfer learning, specifically pre-trained VGG-19 and Inception V3 with ImageNet weights. Experimental results show that the accuracy rates of the models in classifying the rhizomes are 92.43% and 94.29%, respectively. These achievements are quite promising for various practical uses.
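    Transfer learning of the kind used here keeps the pre-trained backbone frozen and trains only a small classification head on the extracted features; a minimal sketch with random stand-in features and a softmax head trained by gradient descent (all dimensions and data are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Features from a frozen backbone (random stand-ins here) feed a small
# trainable softmax head; only W is updated, as in transfer learning.
features = rng.standard_normal((8, 32))       # 8 images, 32-dim features
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])   # 0 = turmeric, 1 = temulawak
W = np.zeros((32, 2))
for _ in range(200):                          # plain gradient descent
    probs = softmax(features @ W)
    onehot = np.eye(2)[labels]
    W -= 0.1 * features.T @ (probs - onehot) / len(labels)

preds = (features @ W).argmax(axis=1)
print((preds == labels).mean())
```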

  • Article type: Journal Article
    Brain tumors are serious conditions caused by uncontrolled and abnormal cell division. Tumors can have devastating implications if not accurately and promptly detected. Magnetic resonance imaging (MRI) is one of the methods frequently used to detect brain tumors owing to its excellent resolution. In the past few decades, substantial research has been conducted on classifying brain images, ranging from traditional methods to deep-learning techniques such as convolutional neural networks (CNNs). To accomplish classification, machine-learning methods require manually created features. In contrast, a CNN achieves classification by extracting visual features from unprocessed images. The size of the training dataset has a significant impact on the features a CNN extracts, and a CNN tends to overfit when the dataset is small. Deep CNNs (DCNNs) with transfer learning have therefore been developed. The aim of this work was to investigate the brain MR image categorization potential of pre-trained DCNN VGG-19, VGG-16, ResNet50, and Inception V3 models using data augmentation and transfer learning techniques. Validation on the test set using accuracy, recall, precision, and F1 score showed that the pre-trained VGG-19 model with transfer learning exhibited the best performance. In addition, these methods offer end-to-end classification of raw images without the need for manual attribute extraction.
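    The data augmentation mentioned above can be sketched with simple numpy array operations: flips and right-angle rotations multiply the effective training set without acquiring new scans (this particular transform set is an assumption; implementations vary):

```python
import numpy as np

def augment(image):
    """Yield simple label-preserving variants of an H x W x C image."""
    yield image
    yield np.fliplr(image)       # horizontal mirror
    yield np.flipud(image)       # vertical mirror
    yield np.rot90(image)        # 90-degree rotation
    yield np.rot90(image, 2)     # 180-degree rotation

mri = np.arange(2 * 2 * 1).reshape(2, 2, 1)   # tiny stand-in MR slice
variants = list(augment(mri))
print(len(variants))  # 5
```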

  • Article type: Journal Article
    Purpose: We aim to present effective computer-aided diagnostics in the field of ophthalmology and improve eye health. This study aims to create an automated deep learning-based system for categorizing fundus images into three classes, normal, macular degeneration, and tessellated fundus, for the timely recognition and treatment of diabetic retinopathy and other diseases. Methods: A total of 1,032 fundus images were collected from 516 patients using a fundus camera at the Health Management Center, Shenzhen University General Hospital, Shenzhen University, Shenzhen 518055, Guangdong, China. Inception V3 and ResNet-50 deep learning models were then used to classify the fundus images into the three classes. Results: The experimental results show that model recognition is best when Adam is used as the optimizer, the number of iterations is 150, and the learning rate is 0.00. With our proposed approach, we achieved the highest accuracies of 93.81% and 91.76% using ResNet-50 and Inception V3, respectively, after fine-tuning and adjusting the hyperparameters for our classification problem. Conclusion: Our research provides a reference for the clinical diagnosis or screening of diabetic retinopathy and other eye diseases. Our suggested computer-aided diagnostics framework will prevent incorrect diagnoses caused by low image quality, limited individual experience, and other factors. In future implementations, ophthalmologists can implement more advanced learning algorithms to improve diagnostic accuracy.
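    Adam, the optimizer the authors settled on, maintains per-parameter estimates of the first and second moments of the gradient; a minimal single-parameter sketch of the update rule (the hyperparameters are Adam's common defaults, not values from the paper):

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: returns (new_theta, new_m, new_v)."""
    m = b1 * m + (1 - b1) * grad           # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2      # second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 4):          # three steps; gradient of f(x) = x^2 / 2 is x
    theta, m, v = adam_step(theta, theta, m, v, t)
print(theta)
```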
