Audio signal classification

  • Article type: Journal Article
    OBJECTIVE: Even though workflow analysis in the operating room has come a long way, current systems are still limited to research. In the quest for a robust, universal setup, hardly any attention has been given to the audio modality, despite its numerous advantages such as low cost, independence from location and line of sight, and low processing requirements.
    METHODS: We present an approach for audio-based event detection that relies solely on two microphones capturing the sound in the operating room. To this end, a new data set comprising over 63 h of audio was recorded and annotated at the University Hospital rechts der Isar. Sound files were labeled, preprocessed, augmented, and subsequently converted to log-mel spectrograms that served as visual input for event classification with pretrained convolutional neural networks.
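    The log-mel-spectrogram conversion described above can be sketched in plain NumPy. The paper does not state its window size, hop length, or mel-band count, so the parameters below are illustrative assumptions, not the authors' configuration:

    ```python
    import numpy as np

    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    def mel_filterbank(n_mels, n_fft, sr):
        # Triangular filters spaced evenly on the mel scale.
        mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
        bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
        fb = np.zeros((n_mels, n_fft // 2 + 1))
        for i in range(1, n_mels + 1):
            l, c, r = bins[i - 1], bins[i], bins[i + 1]
            for k in range(l, c):
                fb[i - 1, k] = (k - l) / max(c - l, 1)
            for k in range(c, r):
                fb[i - 1, k] = (r - k) / max(r - c, 1)
        return fb

    def log_mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=64):
        # Frame the signal, apply a Hann window, take the magnitude STFT.
        window = np.hanning(n_fft)
        n_frames = 1 + (len(signal) - n_fft) // hop
        frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                           for i in range(n_frames)])
        power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
        # Project onto mel bands and log-compress -> (frames, n_mels) image.
        mel = power @ mel_filterbank(n_mels, n_fft, sr).T
        return np.log(mel + 1e-10)

    # One second of a 440 Hz tone as a stand-in for an OR recording.
    sr = 16000
    t = np.arange(sr) / sr
    spec = log_mel_spectrogram(np.sin(2 * np.pi * 440 * t), sr=sr)
    print(spec.shape)  # (61, 64)
    ```

    The resulting two-dimensional array is what gets treated as a single-channel image by the pretrained CNN.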
    RESULTS: Comparing multiple architectures, we were able to show that even lightweight models such as MobileNet can already provide promising results. Data augmentation additionally improved the classification of the 11 defined classes, including, inter alia, different types of coagulation, operating table movements, and an idle class. With the newly created audio data set, an overall accuracy of 90%, a precision of 91%, and an F1-score of 91% were achieved, demonstrating the feasibility of audio-based event recognition in the operating room.
    CONCLUSIONS: With this first proof of concept, we demonstrated that audio events can serve as a meaningful source of information that goes beyond spoken language and can easily be integrated into future workflow recognition pipelines using computationally inexpensive architectures.
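    Waveform-level augmentation of the kind credited with the accuracy gain can be illustrated with two common transforms, random time shift and additive noise at a target SNR. The abstract does not say which augmentations the authors used, so these two are hypothetical stand-ins:

    ```python
    import numpy as np

    def augment(x, rng, shift_max=0.1, noise_snr_db=20.0):
        # Random circular time shift of up to shift_max of the clip length.
        limit = int(shift_max * len(x))
        y = np.roll(x, rng.integers(-limit, limit + 1))
        # Additive white noise scaled to the requested signal-to-noise ratio.
        sig_power = np.mean(y ** 2)
        noise_power = sig_power / (10 ** (noise_snr_db / 10))
        return y + rng.normal(scale=np.sqrt(noise_power), size=len(y))

    rng = np.random.default_rng(42)
    clip = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
    batch = np.stack([augment(clip, rng) for _ in range(4)])
    print(batch.shape)  # (4, 16000)
    ```

    Each augmented copy is then converted to a spectrogram as usual, multiplying the effective training-set size without new recordings.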
  • Article type: Dataset
    OBJECTIVE: Cough audio signal classification is a potentially useful tool in screening for respiratory disorders, such as COVID-19. Since it is dangerous to collect data from patients with contagious diseases, many research teams have turned to crowdsourcing to quickly gather cough sound data. The COUGHVID dataset enlisted expert physicians to annotate and diagnose the underlying diseases present in a limited number of recordings. However, this approach suffers from potential cough mislabeling, as well as disagreement between experts.
    METHODS: In this work, we use a semi-supervised learning (SSL) approach - based on audio signal processing tools and interpretable machine learning models - to improve the labeling consistency of the COUGHVID dataset for 1) COVID-19 versus healthy cough sound classification, 2) distinguishing wet from dry coughs, and 3) assessing cough severity. First, we leverage SSL expert knowledge aggregation techniques to overcome the labeling inconsistencies and label sparsity in the dataset. Next, our SSL approach is used to identify a subsample of re-labeled COUGHVID audio samples that can be used to train or augment future cough classifiers.
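    The core difficulty the aggregation step addresses - experts who disagree and recordings most experts never saw - can be made concrete with a minimal consensus scheme. This is a deliberately simplified stand-in, not the paper's SSL method (which also uses audio features and interpretable models); the vote matrix, label encoding, and agreement threshold are all invented for illustration:

    ```python
    import numpy as np

    # Hypothetical expert annotations: rows = recordings, cols = experts,
    # entries in {1: covid, 0: healthy, -1: not annotated} (label sparsity).
    votes = np.array([
        [1,  1, -1],
        [0, -1,  0],
        [1,  0, -1],
        [-1, -1, 0],
    ])

    def aggregate(votes, min_agreement=0.66):
        # Keep a recording only when the experts who annotated it reach
        # consensus; otherwise leave it unlabeled (None) rather than guess.
        labels = []
        for row in votes:
            cast = row[row >= 0]
            if len(cast) == 0:
                labels.append(None)
                continue
            frac = cast.mean()
            if frac >= min_agreement:
                labels.append(1)
            elif 1 - frac >= min_agreement:
                labels.append(0)
            else:
                labels.append(None)
        return labels

    print(aggregate(votes))  # [1, 0, None, 0]
    ```

    Recordings left as None are exactly the ones a semi-supervised step would then try to label from the audio itself.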
    RESULTS: The consistency of the re-labeled COVID-19 and healthy data is demonstrated in that it exhibits a high degree of inter-class feature separability: 3x higher than that of the user-labeled data. Similarly, the SSL method increases this separability by 11.3x for cough type and 5.1x for severity classifications. Furthermore, the spectral differences in the user-labeled audio segments are amplified in the re-labeled data, resulting in significantly different power spectral densities between healthy and COVID-19 coughs in the 1-1.5 kHz range (p = 1.2×10⁻⁶⁴), which demonstrates both the increased consistency of the new dataset and its explainability from an acoustic perspective. Finally, we demonstrate how the re-labeled dataset can be used to train a COVID-19 classifier, achieving an AUC of 0.797.
    CONCLUSIONS: We propose an SSL expert knowledge aggregation technique for the field of cough sound classification for the first time, and demonstrate how it can be used to combine the medical knowledge of multiple experts in an explainable fashion, thus providing abundant, consistent data for cough classification tasks.
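    The two acoustic quantities the results hinge on - power in the 1-1.5 kHz band and inter-class feature separability - can both be sketched with NumPy. The cohorts below are synthetic noise signals invented purely to exercise the code (real coughs behave differently), and the Fisher-style ratio is one common separability measure, not necessarily the one the authors used:

    ```python
    import numpy as np

    def band_power(x, sr, lo, hi):
        # Periodogram power summed over the [lo, hi) Hz band.
        spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
        freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
        return spec[(freqs >= lo) & (freqs < hi)].sum()

    def fisher_separability(a, b):
        # Between-class vs. within-class variance for a 1-D feature.
        return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)

    rng = np.random.default_rng(0)
    sr, n = 8000, 4096
    t = np.arange(n) / sr
    # Toy cohorts: one class is broadband noise, the other has an extra
    # component inside the 1-1.5 kHz band (illustrative only).
    class_a = [rng.normal(size=n) for _ in range(50)]
    class_b = [rng.normal(size=n)
               + 0.8 * np.sin(2 * np.pi * 1200 * t + rng.uniform(0, 2 * np.pi))
               for _ in range(50)]

    feat_a = np.array([band_power(x, sr, 1000, 1500) for x in class_a])
    feat_b = np.array([band_power(x, sr, 1000, 1500) for x in class_b])
    print(fisher_separability(feat_a, feat_b) > 1.0)  # True
    ```

    A "3x higher separability" claim then corresponds to this ratio (or a similar statistic) computed on the re-labeled versus the user-labeled partitions.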