VGG
  • Article type: English Abstract
    Objective: To build a VGG-based computer-aided diagnostic model for chronic sinusitis and evaluate its efficacy. Methods: ①A total of 5,000 frames of diagnosed sinus CT images were collected. The normal group consisted of 1,000 frames (250 frames each of normal maxillary sinus, frontal sinus, ethmoid sinus, and sphenoid sinus images), while the abnormal group consisted of 4,000 frames (1,000 frames each of maxillary sinusitis, frontal sinusitis, ethmoid sinusitis, and sphenoid sinusitis images). The images were preprocessed by size normalization and segmentation. ②The models were trained and evaluated in simulation experiments to obtain five classification models for the normal group, the sphenoid sinusitis group, the frontal sinusitis group, the ethmoid sinusitis group, and the maxillary sinusitis group, respectively. The classification efficacy of the models was evaluated objectively in six dimensions: accuracy, precision, sensitivity, specificity, interpretation time, and area under the ROC curve (AUC). ③Two hundred randomly selected images were read by the model and by three groups of physicians (low, middle, and high seniority) to constitute a comparative experiment. The efficacy of the model was objectively evaluated using the aforementioned evaluation indexes in conjunction with clinical analysis. Results: ①Simulation experiment: the overall recognition accuracy of the model was 83.94%, with a precision of 89.52%, a sensitivity of 83.94%, and a specificity of 95.99%; the average interpretation time was 0.20 s per frame. The AUC was 0.865 (95%CI 0.849-0.881) for sphenoid sinusitis, 0.924 (0.911-0.936) for frontal sinusitis, 0.895 (0.880-0.909) for ethmoid sinusitis, and 0.974 (0.967-0.982) for maxillary sinusitis.
②Comparison experiment: in recognition accuracy, the model achieved 84.52%, versus 78.50% for the low-seniority physicians group, 80.50% for the middle-seniority group, and 83.50% for the high-seniority group. In recognition precision, the model achieved 85.67%, versus 79.72%, 82.67%, and 83.66% for the low-, middle-, and high-seniority groups, respectively. In recognition sensitivity, the model achieved 84.52%, versus 78.50%, 80.50%, and 83.50%, respectively. In recognition specificity, the model achieved 96.58%, versus 94.63%, 95.13%, and 95.88%, respectively. In time consumption, the model averaged 0.20 s per frame, versus 2.35 s, 1.98 s, and 2.19 s per frame for the low-, middle-, and high-seniority groups, respectively. Conclusion: This study demonstrates the feasibility of a deep learning-based artificial intelligence diagnostic model for classifying and diagnosing chronic sinusitis; the model shows good classification performance and high diagnostic efficacy.

  • Article type: Journal Article
    Cyanobacteria are the dominant microorganisms in aquatic environments, posing significant risks to public health due to toxin production in drinking water reservoirs. Traditional water quality assessments of the abundance of toxigenic genera in water samples are both time-consuming and error-prone, highlighting the urgent need for a fast and accurate automated approach. This study addresses this gap by introducing a novel public dataset, TCB-DS (Toxigenic Cyanobacteria Dataset), comprising 2,593 microscopic images of 10 toxigenic cyanobacterial genera, and subsequently an automated system to identify these genera, which can be divided into two parts. First, a feature-extractor Convolutional Neural Network (CNN) model was employed, with MobileNet emerging as the optimal choice after comparison with various other popular architectures such as MobileNetV2, VGG, etc. Second, to classify the features extracted in the first part, multiple approaches were tested; the experimental results indicate that a Fully Connected Neural Network (FCNN) had the optimal performance, with a weighted accuracy and F1-score of 94.79% and 94.91%, respectively. The highest macro accuracy and F1-score were 90.17% and 87.64%, acquired using MobileNetV2 as the feature extractor and an FCNN as the classifier. These results demonstrate that the proposed approach can be employed as an automated screening tool for identifying toxigenic cyanobacteria, with practical implications for water quality control, replacing the traditional estimation given by the lab operator following microscopic observation. The dataset and code of this paper are publicly available at https://github.com/iman2693/CTCB.
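The abstract reports both weighted and macro F1-scores; the two differ only in how per-class scores are averaged (by class support versus uniformly). A minimal numpy sketch on illustrative labels, not the TCB-DS data:

```python
import numpy as np

def f1_scores(y_true, y_pred, n_classes):
    """Per-class F1, plus macro (unweighted mean) and support-weighted mean."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    f1s, supports = [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
        supports.append(np.sum(y_true == c))      # class frequency in y_true
    f1s, supports = np.array(f1s), np.array(supports)
    macro = float(f1s.mean())
    weighted = float(np.sum(f1s * supports) / supports.sum())
    return f1s, macro, weighted

# imbalanced toy set: 8 samples of class 0, 2 of class 1
y_true = [0] * 8 + [1] * 2
y_pred = [0] * 9 + [1]
f1s, macro, weighted = f1_scores(y_true, y_pred, 2)
```

On a class-imbalanced set like this, the weighted score exceeds the macro score because the majority class is classified more reliably, mirroring the gap between the 94.91% (weighted) and 87.64% (macro) figures above.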

  • Article type: Journal Article
    With the continuous progress of technology, the life sciences play an increasingly important role, and the application of artificial intelligence in the medical field has attracted growing attention. Bell's facial palsy, a neurological ailment characterized by facial muscle weakness or paralysis, exerts a profound impact on patients' facial expressions and masticatory abilities, thereby inflicting considerable distress upon their overall quality of life and mental well-being. In this study, we designed a facial attribute recognition model specifically for individuals with Bell's facial palsy. The model utilizes an enhanced SSD network and scientific computing to perform a graded assessment of the patients' condition. By replacing the VGG network with a more efficient backbone, we improved the model's accuracy and significantly reduced its computational burden. The results show that the improved SSD network achieves an average precision of 87.9% in the classification of mild, moderate, and severe facial palsy and effectively classifies patients with facial palsy, with scientific calculations further increasing the precision of the classification. This is also one of the most significant contributions of this article, which provides intelligent means and objective data for future research on intelligent diagnosis and treatment as well as progressive rehabilitation.
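The detection result above is reported as average precision (AP). A minimal sketch of PASCAL-VOC-style 11-point interpolated AP over ranked detections, using made-up detections rather than the study's outputs:

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """11-point interpolated AP (PASCAL VOC style).
    scores: confidence per detection; is_tp: 1 if the detection matched a
    ground-truth box; n_gt: total number of ground-truth boxes."""
    order = np.argsort(scores)[::-1]              # rank by descending confidence
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1 - tp)
    recall = cum_tp / n_gt
    precision = cum_tp / (cum_tp + cum_fp)
    ap = 0.0
    for r in np.linspace(0, 1, 11):               # sample recall at 0, 0.1, ..., 1
        mask = recall >= r
        ap += (precision[mask].max() if mask.any() else 0.0) / 11
    return ap

# toy example: 3 detections, 2 ground-truth objects
ap = average_precision(scores=[0.9, 0.8, 0.7], is_tp=[1, 0, 1], n_gt=2)
```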

  • Article type: Journal Article
    Tradescantia is a complex plant system that is sensitive to environmental factors such as water supply, pH, temperature, light, radiation, impurities, and nutrient availability. It can be used as a biomonitor for environmental changes; however, the bioassays are time-consuming and have a strong human interference factor that might change the result depending on who is performing the analysis. We have developed computer vision models to study color variations in stamen hair cells of Tradescantia clone 4430, which can be stressed by air pollution and soil contamination. The study introduces a novel dataset, Trad-204, comprising single-cell images from Tradescantia clone 4430 captured during the Tradescantia stamen-hair mutation bioassay (Trad-SHM). The dataset contains images from two experiments, one focusing on air pollution by particulate matter and another based on soil contaminated by diesel oil. Both experiments were carried out in Curitiba, Brazil, between 2020 and 2023. The images represent single cells with different shapes, sizes, and colors, reflecting the plant's responses to environmental stressors. An automatic classification task was developed to distinguish between blue and pink cells, and the study explores a baseline model and three artificial neural network (ANN) architectures, namely TinyVGG, VGG-16, and ResNet34. Tradescantia revealed sensitivity to both airborne particulate matter concentration and diesel oil in soil. The results indicate that the residual network architecture outperforms the other models in terms of accuracy on both training and testing sets. The dataset and findings contribute to the understanding of plant cell responses to environmental stress and provide valuable resources for further research in automated image analysis of plant cells. The discussion highlights the impact of turgor pressure on cell shape and the potential implications for plant physiology. The comparison between ANN architectures aligns with previous research, emphasizing the superior performance of ResNet models in image classification tasks. Artificial intelligence identification of pink cells improves counting accuracy, avoiding human errors due to differences in color perception, fatigue, or inattention, in addition to facilitating and speeding up the analysis process. Overall, the study offers insights into plant cell dynamics and provides a foundation for future investigations such as changes in cell morphology. This research corroborates that biomonitoring should be considered an important tool for political action, being relevant to risk assessment and the development of new environmental public policies.
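Before training a CNN, the blue-versus-pink cell task admits a simple color baseline: map each cell's mean RGB value to hue and threshold. A hedged sketch in which the hue ranges and sample colors are illustrative guesses, not values from the paper:

```python
import colorsys

def classify_cell_rgb(r, g, b):
    """Naive color baseline: convert a cell's mean RGB to HSV and threshold
    the hue. Blue sits near hue ~0.6; pink/magenta near ~0.9 (assumed cutoffs)."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return "blue" if 0.5 <= h <= 0.75 else "pink"

blue_label = classify_cell_rgb(70, 90, 200)    # a bluish cell color
pink_label = classify_cell_rgb(220, 120, 180)  # a pinkish cell color
```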

  • Article type: Journal Article
    Accurate and reliable estimation of pelvic tilt is one of the essential pre-planning factors for total hip arthroplasty to prevent common post-operative complications such as implant impingement and dislocation. Inspired by the latest advances in deep learning-based systems, our focus in this paper has been to present an innovative and accurate method for estimating the functional pelvic tilt (PT) from a standing anterior-posterior (AP) radiography image. We introduce an encoder-decoder-style network based on a concurrent learning approach, called VGG-UNET (VGG embedded in U-NET), where a deep fully convolutional network known as VGG is embedded in the encoder part of an image segmentation network, i.e., U-NET. In the bottleneck of the VGG-UNET, in addition to the decoder path, we use another path utilizing lightweight convolutional and fully connected layers to combine all feature maps extracted from the final convolution layer of VGG and thus regress PT. In the test phase, we exclude the decoder path and consider only a single target task, i.e., PT estimation. The absolute errors obtained using VGG-UNET, VGG, and Mask R-CNN are 3.04 ± 2.49, 3.92 ± 2.92, and 4.97 ± 3.87, respectively. The VGG-UNET thus leads to a more accurate prediction with a lower standard deviation (STD). Our experimental results demonstrate that the proposed multi-task network achieves significantly improved performance compared to the best reported results based on cascaded networks.
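The concurrent-learning idea above, one shared encoder feeding both a segmentation decoder and a PT regression head during training, with only the regression path kept at test time, can be sketched in a few lines of numpy. Shapes and layers here are toy stand-ins, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_enc):
    return np.maximum(0, x @ W_enc)   # toy stand-in for the VGG encoder

def seg_head(f, W_seg):
    return f @ W_seg                  # toy stand-in for the U-NET decoder path

def pt_head(f, W_pt):
    return float(f @ W_pt)            # scalar pelvic-tilt regression path

x = rng.normal(size=64)               # flattened "radiograph"
W_enc = rng.normal(size=(64, 16))     # shared bottleneck features
W_seg = rng.normal(size=(16, 64))
W_pt = rng.normal(size=16)

f = encoder(x, W_enc)
mask, pt = seg_head(f, W_seg), pt_head(f, W_pt)   # training: both paths run
pt_only = pt_head(encoder(x, W_enc), W_pt)        # inference: decoder dropped
```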

  • Article type: Journal Article
    Auditory brainstem response (ABR) is the response of the brain stem through the auditory nerve. The ABR test is a method of testing for loss of hearing through electrical signals. Basically, the test is conducted on patients such as the elderly, the disabled, and infants who have difficulty in communication. This test has the advantage of being able to determine the presence or absence of objective hearing loss by brain stem reactions only, without any communication. This paper proposes the image preprocessing process required to construct an efficient graph image data set for deep learning models using auditory brainstem response data. To improve the performance of the deep learning model, we standardized the ABR image data measured on various devices with different forms. In addition, we applied the VGG16 model, a CNN-based deep learning network model developed by a research team at the University of Oxford, using preprocessed ABR data to classify the presence or absence of hearing loss and analyzed the accuracy of the proposed method. This experimental test was performed using 10,000 preprocessed data, and the model was tested with various weights to verify classification learning. Based on the learning results, we believe it is possible to help set the criteria for preprocessing and the learning process in medical graph data, including ABR graph data.
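The standardization step described above, bringing ABR graph images measured on different devices into one format, can be sketched as a fixed-size resample plus intensity rescaling. Nearest-neighbour resizing and [0, 1] scaling are assumptions for illustration; the paper's exact pipeline may differ:

```python
import numpy as np

def standardize_graph(img, size=(224, 224)):
    """Resize a 2-D graph image with nearest-neighbour sampling and rescale
    intensities to [0, 1], so graphs from different devices share one format."""
    img = np.asarray(img, dtype=float)
    rows = (np.arange(size[0]) * img.shape[0] / size[0]).astype(int)
    cols = (np.arange(size[1]) * img.shape[1] / size[1]).astype(int)
    out = img[np.ix_(rows, cols)]          # nearest-neighbour resample
    lo, hi = out.min(), out.max()
    return (out - lo) / (hi - lo) if hi > lo else np.zeros(size)

img = np.arange(12, dtype=float).reshape(3, 4)   # tiny stand-in "graph"
std = standardize_graph(img, size=(224, 224))    # VGG16's expected input size
```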

  • Article type: Journal Article
    It is well-understood that the performance of Deep Convolutional Neural Networks (DCNNs) in image recognition tasks is influenced not only by shape but also by texture information. Despite this, understanding the internal representations of DCNNs remains a challenging task. This study employs a simplified version of the Portilla-Simoncelli Statistics, termed "minPS," to explore how texture information is represented in a pre-trained VGG network. Using minPS features extracted from texture images, we perform a sparse regression on the activations across various channels in VGG layers. Our findings reveal that channels in the early to middle layers of the VGG network can be effectively described by minPS features. Additionally, we observe that the explanatory power of minPS sub-groups evolves as one ascends the network hierarchy. Specifically, sub-groups termed Linear Cross Scale (LCS) and Energy Cross Scale (ECS) exhibit weak explanatory power for VGG channels. To investigate the relationship further, we compare the original texture images with their synthesized counterparts, generated using VGG, in terms of minPS features. Our results indicate that the absence of certain minPS features suggests their non-utilization in VGG's internal representations.
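Sparse regression of channel activations on minPS features can be implemented with an L1 penalty (Lasso). The study does not specify its solver, so the following ISTA sketch on synthetic data is only illustrative:

```python
import numpy as np

def ista_lasso(X, y, lam=0.1, n_iter=2000):
    """Sparse (L1-penalised) regression via ISTA: explain one channel's
    activations y as a sparse combination of the feature columns of X."""
    n, d = X.shape
    lr = n / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz const. of gradient
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n          # gradient of (1/2n)||Xw - y||^2
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0)  # soft threshold
    return w

# synthetic "minPS features" X and "channel activations" y with 2 true features
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
w_true = np.zeros(20)
w_true[[2, 7]] = [3.0, -2.0]
y = X @ w_true + 0.01 * rng.normal(size=200)
w = ista_lasso(X, y, lam=0.05)
```

The soft-thresholding step zeroes out features with little explanatory power, which is what makes a statement such as "LCS and ECS sub-groups exhibit weak explanatory power" readable directly off the recovered coefficients.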

  • Article type: Journal Article
    Introduction: Fruit diseases have a serious impact on fruit production, causing a significant drop in the economic returns of agricultural products. Due to its excellent performance, deep learning is widely used for disease identification and severity diagnosis of crops. This paper focuses on leveraging the high-dimensional feature extraction capability of deep convolutional neural networks to improve classification performance. Methods: The proposed neural network combines the Inception module with the current state-of-the-art EfficientNetV2 for better multi-scale feature extraction and disease identification of citrus fruits. VGG is used to replace the U-Net backbone to enhance the segmentation performance of the network. Results: Compared to existing networks, the proposed method achieved a recognition accuracy of over 95%. In addition, the accuracies of the segmentation models were compared. VGG-U-Net, a network generated by replacing the backbone of U-Net with VGG, was found to have the best segmentation performance, with an accuracy of 87.66%. This method is most suitable for diagnosing the severity of citrus fruit diseases. Meanwhile, transfer learning is applied to improve the training cycle of the network model, in both the detection and severity diagnosis phases. Discussion: The results of the comparison experiments reveal that the proposed method is effective in identifying and diagnosing the severity of citrus fruit diseases.
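The Inception module mentioned above runs parallel branches with different receptive fields over the same input and concatenates their outputs. A toy 1-D numpy sketch of that idea; real Inception blocks use learned 2-D convolutions, 1x1 bottlenecks, and pooling branches:

```python
import numpy as np

def conv1d_same(x, k):
    """'Same'-padded 1-D convolution as a stand-in for one branch."""
    pad = len(k) // 2
    xp = np.pad(x, pad)
    return np.array([xp[i:i + len(k)] @ k for i in range(len(x))])

def inception_block(x, kernels):
    """Inception-style idea: run parallel branches with different receptive
    fields over the same input and concatenate their outputs."""
    return np.concatenate([conv1d_same(x, k) for k in kernels])

x = np.arange(8, dtype=float)
# three branches: identity, 3-tap average, 5-tap average
out = inception_block(x, [np.ones(1), np.ones(3) / 3, np.ones(5) / 5])
```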

  • Article type: Journal Article
    Uterine fibroids (UF) are common benign tumors affecting women of childbearing age and can be effectively treated with early identification and diagnosis. Automated diagnosis from medical images is an area where deep learning (DL)-based algorithms have demonstrated promising results. In this research, we evaluated the state-of-the-art DL architectures VGG16, ResNet50, and InceptionV3, along with our proposed innovative dual-path deep convolutional neural network (DPCNN) architecture, for the UF detection task. Using preprocessing methods including scaling, normalization, and data augmentation, an ultrasound image dataset from Kaggle was prepared for use. After the images were used to train and validate the DL models, model performance was evaluated using different measures. Compared to existing DL models, our proposed DPCNN architecture achieved the highest accuracy of 99.8%. The findings show that the performance of pre-trained deep learning models for UF diagnosis from medical images may improve significantly with the application of fine-tuning strategies. In particular, the InceptionV3 model achieved 90% accuracy, with the ResNet50 model achieving 89% accuracy. It should be noted that the VGG16 model was found to have a lower accuracy of 85%. Our findings show that DL-based methods can be effectively utilized to facilitate automated UF detection from medical images. Further research in this area holds great potential and could lead to the creation of cutting-edge computer-aided diagnosis systems. To further advance the state of the art in medical imaging analysis, the DL community is invited to investigate these lines of research. Although our proposed innovative DPCNN architecture performed best, fine-tuned versions of pre-trained models such as InceptionV3 and ResNet50 also delivered strong results. This work lays the foundation for future studies and has the potential to enhance the precision and suitability with which UF is detected.

  • Article type: Journal Article
    Coronavirus disease (COVID-19), caused by the SARS-CoV-2 virus, was initially identified in the latter half of 2019 and then evolved into a pandemic. If it is not identified at an early stage, infection and mortality rates increase with time. A timely and reliable approach to COVID-19 identification has therefore become important in order to prevent the disease from spreading rapidly. Many recently suggested methods for the detection of COVID-19 have various flaws, and fresh investigations are required to increase diagnostic performance. In this article, we propose automatically diagnosing COVID-19 using ECG images and deep learning approaches such as the Visual Geometry Group (VGG) and AlexNet architectures. The proposed method is able to classify between COVID-19, myocardial infarction, normal sinus rhythm, and other abnormal heartbeats using lead-II ECG images only. The efficacy of the proposed technique is validated using a publicly available ECG image database. We achieved an accuracy of 77.42% using the AlexNet model and 75% accuracy with the VGG19 model.
