Keywords: ABR; AlexNet; DenseNet121; DenseNet201; VGG16; VGG19; auditory brainstem response; deep learning; hearing loss; image processing

Source: DOI: 10.3390/diagnostics14121232; PDF (PubMed)

Abstract:
This study evaluates the efficacy of several Convolutional Neural Network (CNN) models for the classification of hearing loss in patients using preprocessed auditory brainstem response (ABR) image data. Specifically, we employed six CNN architectures (VGG16, VGG19, DenseNet121, DenseNet201, AlexNet, and InceptionV3) to differentiate between patients with hearing loss and those with normal hearing. A dataset comprising 7990 preprocessed ABR images was utilized to assess the performance and accuracy of these models. Each model was systematically tested to determine its capability to accurately classify hearing loss. A comparative analysis of the models focused on metrics of accuracy and computational efficiency. The results indicated that the AlexNet model exhibited superior performance, achieving an accuracy of 95.93%. The findings from this research suggest that deep learning models, particularly AlexNet in this instance, hold significant potential for automating the diagnosis of hearing loss using ABR graph data. Future work will aim to refine these models to enhance their diagnostic accuracy and efficiency, fostering their practical application in clinical settings.
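The comparative analysis described above can be sketched as follows: given each model's predicted labels on a held-out ABR image test set, compute accuracy and rank the architectures. This is a minimal illustrative sketch, not the authors' pipeline; the model names come from the abstract, but the labels and predictions below are hypothetical placeholders.

```python
# Sketch of ranking CNN architectures by test-set accuracy.
# All labels/predictions here are invented for illustration only.

def accuracy(y_true, y_pred):
    """Fraction of test images classified correctly."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# 1 = hearing loss, 0 = normal hearing (hypothetical ground truth).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]

# Hypothetical per-model predictions on the same test set.
predictions = {
    "AlexNet":     [1, 0, 1, 1, 0, 0, 1, 0],
    "VGG16":       [1, 0, 1, 0, 0, 0, 1, 0],
    "DenseNet121": [1, 0, 0, 1, 0, 1, 1, 0],
}

# Rank models from highest to lowest accuracy.
ranking = sorted(
    ((name, accuracy(y_true, pred)) for name, pred in predictions.items()),
    key=lambda item: item[1],
    reverse=True,
)
for name, acc in ranking:
    print(f"{name}: {acc:.2%}")
```

In the study itself this comparison was run over 7990 preprocessed ABR images, with AlexNet reaching the reported 95.93% accuracy.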