scalogram

  • Article type: Journal Article
    Cardiovascular diseases remain one of the main threats to human health, significantly affecting quality of life and life expectancy. Effective and prompt recognition of these diseases is crucial. This research aims to develop an effective novel hybrid method for automatically detecting dangerous arrhythmias based on cardiac patients' short electrocardiogram (ECG) fragments. This study suggests using a continuous wavelet transform (CWT) to convert ECG signals into images (scalograms) and examines the task of categorizing short 2-s segments of ECG signals into four groups of dangerous shockable arrhythmias: ventricular flutter (C1), ventricular fibrillation (C2), ventricular tachycardia torsade de pointes (C3), and high-rate ventricular tachycardia (C4). We propose a novel hybrid neural network with a deep learning architecture to classify dangerous arrhythmias. This work utilizes actual ECG data obtained from the PhysioNet database, alongside artificial ECG data generated by the Synthetic Minority Over-sampling Technique (SMOTE), to address the issue of imbalanced class distribution and obtain an accurately trained model. Experimental results demonstrate that the proposed approach achieves an accuracy, sensitivity, specificity, precision, and F1-score of 97.75%, 97.75%, 99.25%, 97.75%, and 97.75%, respectively, in classifying all four shockable classes of arrhythmias, outperforming traditional methods. Our work possesses significant clinical value in real-life scenarios, since it has the potential to significantly enhance the diagnosis and treatment of life-threatening arrhythmias in individuals with cardiac disease. Furthermore, our model has also demonstrated adaptability and generality on two other datasets.
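    As a rough illustration of the pipeline described above, the sketch below converts a 2-s ECG segment into a CWT scalogram with PyWavelets and balances the four classes with imbalanced-learn's SMOTE. The sampling rate, wavelet, scale range, and the choice to oversample the flattened raw segments are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): 2-s ECG segment -> CWT scalogram,
# with SMOTE oversampling of the minority shockable classes.
import numpy as np
import pywt
from imblearn.over_sampling import SMOTE

FS = 250                          # assumed sampling rate (Hz)
SEGMENT_LEN = 2 * FS              # 2-s ECG fragment

def ecg_to_scalogram(segment, scales=np.arange(1, 65), wavelet="morl"):
    """Return |CWT| coefficients of one ECG segment as a 2-D scalogram."""
    coeffs, _ = pywt.cwt(segment, scales, wavelet, sampling_period=1.0 / FS)
    return np.abs(coeffs)         # shape: (len(scales), SEGMENT_LEN)

# Toy data standing in for PhysioNet segments: X holds flattened segments,
# y holds the four shockable classes C1-C4 with an artificial imbalance.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, SEGMENT_LEN))
y = np.array([0] * 70 + [1] * 12 + [2] * 10 + [3] * 8)

# SMOTE oversamples the minority classes before the scalogram/CNN stage.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
scalograms = np.stack([ecg_to_scalogram(seg) for seg in X_bal])
print(scalograms.shape, np.bincount(y_bal))
```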

  • Article type: Journal Article
    Human Activity Recognition (HAR), alongside Ambient Assisted Living (AAL), is an integral component of smart homes, sports, surveillance, and investigation activities. To recognize daily activities, researchers are focusing on lightweight, cost-effective, wearable sensor-based technologies, as traditional vision-based technologies compromise the privacy of the elderly, a fundamental right of every human. However, it is challenging to extract potential features from 1D multi-sensor data. Thus, this research focuses on extracting distinguishable patterns and deep features from spectral images obtained by time-frequency-domain analysis of 1D multi-sensor data. Wearable sensor data, particularly accelerometer and gyroscope data, act as input signals of different daily activities and provide potential information through time-frequency analysis. This potential time-series information is mapped into spectral images known as 'scalograms', derived from the continuous wavelet transform. Deep activity features are extracted from the activity images using deep learning models such as CNN, MobileNetV3, ResNet, and GoogleNet and are subsequently classified using a conventional classifier. To validate the proposed model, the SisFall and PAMAP2 benchmark datasets are used. Based on the experimental results, the proposed model shows optimal performance for activity recognition, obtaining an accuracy of 98.4% for SisFall and 98.1% for PAMAP2 using Morlet as the mother wavelet with ResNet-101 and a softmax classifier, and outperforms state-of-the-art algorithms.
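    A minimal sketch of the described route from wearable signals to a ResNet-101 classifier is given below, assuming 3-axis accelerometer windows whose per-axis Morlet scalograms are stacked as image channels and resized to 224x224. The window length, sampling rate, and number of activity classes are illustrative, and the softmax head is shown for inference only.

```python
# Minimal sketch (assumptions: 3-axis scalograms as channels, 224x224 input,
# 12 activity classes). Not the authors' code.
import numpy as np
import pywt
import torch
import torch.nn as nn
from torchvision import models

def axis_scalograms(window, fs=50.0, scales=np.arange(1, 129), wavelet="morl"):
    """CWT (Morlet) scalogram per sensor axis; window shape = (n_axes, n_samples)."""
    mags = [np.abs(pywt.cwt(axis, scales, wavelet, sampling_period=1.0 / fs)[0])
            for axis in window]
    return np.stack(mags)                      # (n_axes, n_scales, n_samples)

# ResNet-101 backbone with a new head for activity classes.
NUM_ACTIVITIES = 12                            # assumed; depends on dataset split
backbone = models.resnet101(weights=None)      # load pretrained weights if desired
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_ACTIVITIES)
classifier = nn.Sequential(backbone, nn.Softmax(dim=1))   # softmax for inference only

# Toy forward pass: one window of 3-axis data resized to 224x224.
window = np.random.randn(3, 128)
img = torch.tensor(axis_scalograms(window), dtype=torch.float32).unsqueeze(0)
img = nn.functional.interpolate(img, size=(224, 224), mode="bilinear")
print(classifier(img).shape)                   # torch.Size([1, 12])
```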

  • Article type: Journal Article
    BACKGROUND/OBJECTIVE: Automatic apnea/hypopnea event classification, crucial for clinical applications, often faces challenges, particularly in hypopnea detection. This study aimed to evaluate the efficiency of a combined approach using nasal respiration flow (RF), peripheral oxygen saturation (SpO2), and ECG signals during polysomnography (PSG) for improved sleep apnea/hypopnea detection and obstructive sleep apnea (OSA) severity screening.
    METHODS: An Xception network was trained using main features from RF, SpO2, and ECG signals obtained during PSG. In addition, we incorporated demographic data for enhanced performance. The detection of apnea/hypopnea events was based on RF and SpO2 feature sets, while the screening and severity categorization of OSA utilized predicted apnea/hypopnea events in conjunction with demographic data.
    RESULTS: Using RF and SpO2 feature sets, our model achieved an accuracy of 94% in detecting apnea/hypopnea events. For OSA screening, an exceptional accuracy of 99% and an AUC of 0.99 were achieved. OSA severity categorization yielded an accuracy of 93% and an AUC of 0.91, with no misclassification between normal and mild OSA versus moderate and severe OSA. However, classification errors predominantly arose in cases with hypopnea-prevalent participants.
    CONCLUSIONS: The proposed method offers a robust automatic detection system for apnea/hypopnea events, requiring fewer sensors than traditional PSG, and demonstrates exceptional performance. Additionally, the classification algorithms for OSA screening and severity categorization exhibit significant discriminatory capacity.
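    The sketch below illustrates one plausible reading of the setup described in METHODS above: a Keras Xception branch over RF/SpO2-derived time-frequency images fused with a small dense branch over demographic data for a binary apnea/hypopnea output. Input sizes, the demographic feature count, and the fusion layers are assumptions; the paper's exact architecture and its separate OSA screening stage are not reproduced here.

```python
# Minimal two-branch sketch (assumed shapes and layer sizes), not the authors' model.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import Xception

# Image branch over RF/SpO2 time-frequency representations (size assumed).
signal_in = layers.Input(shape=(299, 299, 3), name="rf_spo2_image")
base = Xception(include_top=False, weights=None, input_shape=(299, 299, 3), pooling="avg")
cnn_features = base(signal_in)

# Demographic branch (feature count assumed, e.g. age, sex, BMI, neck circumference).
demo_in = layers.Input(shape=(4,), name="demographics")
demo_features = layers.Dense(16, activation="relu")(demo_in)

# Fuse both branches and predict apnea/hypopnea vs. normal breathing per segment.
fused = layers.Concatenate()([cnn_features, demo_features])
fused = layers.Dense(64, activation="relu")(fused)
output = layers.Dense(1, activation="sigmoid", name="apnea_hypopnea")(fused)

model = models.Model(inputs=[signal_in, demo_in], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```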

  • Article type: Journal Article
    This paper proposes fault diagnosis methods aimed at proactively preventing potential safety issues in robot systems, particularly human coexistence robots (HCRs) used in industrial environments. The data were collected from durability tests of the driving module for HCRs, gathering time-series vibration data until the module failed. In this study, to apply classification methods in the absence of post-failure data, the initial 50% of the collected data were designated as the normal section, and the data from the 10 h immediately preceding the failure were selected as the fault section. To generate additional data for the limited fault dataset, the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) model was utilized, and residual connections were added to the generator to maintain the basic structure while preventing the loss of key features of the data. Considering that the performance of image encoding techniques varies depending on the dataset type, this study applied and compared five image encoding methods and four CNN models to facilitate the selection of the most suitable algorithm. The time-series data were converted into image data using image encoding techniques including the recurrence plot, Gramian angular field, Markov transition field, spectrogram, and scalogram. These images were then applied to CNN models, including VGGNet, GoogleNet, ResNet, and DenseNet, to calculate the accuracy of fault diagnosis and compare the performance of each model. The experimental results demonstrated significant improvements in diagnostic accuracy when employing the WGAN-GP model to generate fault data; among the image encoding techniques and convolutional neural network models, the spectrogram and DenseNet exhibited superior performance, respectively.
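    For the data-augmentation step, the sketch below shows a generic WGAN-GP gradient penalty in PyTorch together with a residual generator block of the kind the authors describe; the window length, penalty weight, and critic are placeholders rather than the paper's implementation.

```python
# Minimal WGAN-GP gradient penalty sketch (assumed shapes and lambda), not the authors' code.
import torch
import torch.nn as nn

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Penalize deviation of the critic's gradient norm from 1 on interpolates."""
    alpha = torch.rand(real.size(0), 1, device=real.device)
    interp = (alpha * real + (1.0 - alpha) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True, retain_graph=True)[0]
    return lambda_gp * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

class ResidualBlock(nn.Module):
    """Generator block with a skip connection, helping preserve key signal features."""
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, x):
        return x + self.body(x)

# Toy usage on flattened vibration windows of length 1024 (assumed).
critic = nn.Sequential(nn.Linear(1024, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
real = torch.randn(8, 1024)
fake = torch.randn(8, 1024)
print(gradient_penalty(critic, real, fake))
```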

  • Article type: Journal Article
    The escalating global water usage and the increasing strain on major cities due to water shortages highlight the critical need for efficient water management practices. In water-stressed regions worldwide, significant water wastage is primarily attributed to leakages, inefficient use, and aging infrastructure. Undetected water leakages in buildings' pipelines contribute to the water waste problem. To address this issue, an effective water leak detection method is required. In this paper, we explore the application of edge computing in smart buildings to enhance water management. By integrating sensors and embedded machine learning models, known as TinyML, smart water management systems can collect real-time data, analyze it, and make accurate decisions for efficient water utilization. The transition to TinyML enables faster and more cost-effective local decision-making, reducing the dependence on centralized entities. In this work, we propose a solution that can be adapted for effective leakage detection in real-world scenarios with minimal human intervention using TinyML. We follow an approach that is similar to a typical machine learning lifecycle in production, spanning stages including data collection, training, hyperparameter tuning, offline evaluation, and model optimization for on-device resource efficiency before deployment. In this work, we considered an existing water leakage acoustic dataset for polyvinyl chloride pipelines. To prepare the acoustic data for analysis, we performed preprocessing to transform it into scalograms. We devised a water leak detection method by applying transfer learning to five distinct convolutional neural network (CNN) variants, namely EfficientNet, ResNet, AlexNet, MobileNet V1, and MobileNet V2. The CNN models were found to be able to detect leakages, with a maximum testing accuracy, recall, precision, and F1 score of 97.45%, 98.57%, 96.70%, and 97.63%, respectively, observed using the EfficientNet model. To enable seamless deployment on the Arduino Nano 33 BLE edge device, the EfficientNet model was compressed using quantization, resulting in a low inference time of 1932 ms, a peak RAM usage of 255.3 kilobytes, and a flash usage requirement of merely 48.7 kilobytes.
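    The on-device compression step can be illustrated with TensorFlow Lite post-training integer quantization, as in the hedged sketch below; the stand-in model, 96x96 scalogram input, and calibration data are placeholders and not the paper's trained EfficientNet.

```python
# Minimal post-training int8 quantization sketch for a TinyML deployment target.
import numpy as np
import tensorflow as tf

# Stand-in Keras model over scalogram inputs (assumed 96x96 grayscale).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),   # leak / no-leak
])

def representative_dataset():
    # In practice, a few calibration samples drawn from the training scalograms.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("leak_detector_int8.tflite", "wb") as f:
    f.write(tflite_model)
print(f"{len(tflite_model) / 1024:.1f} KiB")
```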

  • Article type: Journal Article
    ECG quality assessment is crucial for reducing false alarms and physician strain in the automated diagnosis of cardiovascular diseases. Recent research has focused on constructing an automatic noisy ECG record rejection mechanism. This work develops a noisy ECG record rejection system using the scalogram and Tucker tensor decomposition. The system can reject ECG records that cannot be analyzed or diagnosed. The scalograms of all 12-lead ECG signals per subject are stacked to form a 3-way tensor. Tucker tensor decomposition is applied with empirical settings to obtain the core tensor. The core tensor is reshaped to form the latent feature set. When tested on the PhysioNet Challenge 2011 dataset in a five-fold cross-validation setting, the RUSBoost ensemble classifier proved to be a very reliable option, producing an accuracy of 92.4% along with a sensitivity of 87.1% and a specificity of 93.5%. According to the experimental findings, combining the scalogram with Tucker tensor decomposition yields competitive performance and has the potential to be used in practical ECG quality assessment.
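    A minimal sketch of the described tensor pipeline is shown below, using tensorly for the Tucker decomposition and imbalanced-learn's RUSBoostClassifier; the scalogram size, Tucker ranks, and toy labels are assumptions standing in for the paper's empirical settings.

```python
# Minimal sketch: stack 12 per-lead scalograms into a 3-way tensor, take the Tucker
# core as latent features, and classify with RUSBoost. Not the authors' code.
import numpy as np
from tensorly.decomposition import tucker
from imblearn.ensemble import RUSBoostClassifier

N_SCALES, N_SAMPLES, N_LEADS = 32, 500, 12
CORE_RANK = [8, 16, 4]                      # empirical setting; assumed here

def record_features(record_scalograms):
    """record_scalograms: (N_SCALES, N_SAMPLES, N_LEADS) tensor -> flattened core."""
    core, _ = tucker(record_scalograms, rank=CORE_RANK)
    return core.ravel()

# Toy dataset standing in for PhysioNet/CinC Challenge 2011 records.
rng = np.random.default_rng(0)
X = np.stack([record_features(rng.random((N_SCALES, N_SAMPLES, N_LEADS)))
              for _ in range(20)])
y = np.array([1] * 12 + [0] * 8)            # 1 = acceptable, 0 = unacceptable

clf = RUSBoostClassifier(random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```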

  • Article type: Journal Article
    Sleep is a natural state of rest for the body and mind. It is essential for a human's physical and mental health because it helps the body restore itself. Insomnia is a sleep disorder that causes difficulty falling asleep or staying asleep and can lead to several health problems. Conventional sleep monitoring and insomnia detection systems are expensive, laborious, and time-consuming. This is the first study that integrates electrocardiogram (ECG) scalograms with a convolutional neural network (CNN) to develop a model for the accurate measurement of sleep quality in identifying insomnia. The continuous wavelet transform has been employed to convert 1-D time-domain ECG signals into 2-D scalograms. The obtained scalograms are fed to AlexNet, MobileNetV2, VGG16, and a newly developed CNN for the automated detection of insomnia. The proposed INSOMNet system is validated on the cyclic alternating pattern (CAP) and sleep disorder research center (SDRC) datasets. Six performance measures, accuracy (ACC), false omission rate (FOR), sensitivity (SEN), false discovery rate (FDR), specificity (SPE), and threat score (TS), have been calculated to evaluate the developed model. Our developed system attained a classification ACC of 98.91% and 98.68%, FOR of 1.5 and 0.66, SEN of 98.94% and 99.31%, FDR of 0.80 and 2.00, SPE of 98.87% and 98.08%, and TS of 0.98 and 0.97 on the CAP and SDRC datasets, respectively. The developed model is less complex and more accurate than transfer-learning networks. The prototype is ready to be tested with a huge dataset from diverse centers.
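    For reference, the six reported measures follow directly from a binary confusion matrix; the sketch below computes them with scikit-learn on made-up labels (not the paper's predictions).

```python
# Minimal sketch of the six reported measures for insomnia vs. normal classification.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 1])   # made-up labels
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 1])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

acc = (tp + tn) / (tp + tn + fp + fn)          # accuracy (ACC)
sen = tp / (tp + fn)                           # sensitivity (SEN)
spe = tn / (tn + fp)                           # specificity (SPE)
false_omission = fn / (fn + tn)                # false omission rate (FOR)
fdr = fp / (fp + tp)                           # false discovery rate (FDR)
ts = tp / (tp + fn + fp)                       # threat score (TS, critical success index)

print(acc, sen, spe, false_omission, fdr, ts)
```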

  • Article type: Journal Article
    The continuous advancements in healthcare technology have empowered the discovery, diagnosis, and prediction of diseases, revolutionizing the field. Artificial intelligence (AI) is expected to play a pivotal role in achieving the goals of precision medicine, particularly in disease prevention, detection, and personalized treatment. This study aims to determine the optimal combination of mother wavelet and AI model for the analysis of pediatric electroretinogram (ERG) signals. The dataset, consisting of signals and corresponding diagnoses, undergoes continuous wavelet transform (CWT) using commonly used wavelets to obtain a time-frequency representation. The wavelet images were used to train five widely used deep learning models, VGG-11, ResNet-50, DenseNet-121, ResNeXt-50, and Vision Transformer, to evaluate their accuracy in classifying healthy and unhealthy patients. The findings demonstrate that the combination of the Ricker wavelet and Vision Transformer consistently yields the highest median accuracy values for ERG analysis, as evidenced by the upper and lower quartile values. The median balanced accuracies of this combination for the three types of ERG signals considered in the article are 0.83, 0.85, and 0.88. However, other wavelet types also achieved high accuracy levels, indicating the importance of carefully selecting the mother wavelet for accurate classification. The study provides valuable insights into the effectiveness of different combinations of wavelets and models in classifying ERG wavelet scalograms.
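    A hedged sketch of the mother-wavelet sweep is shown below using PyWavelets, where 'mexh' corresponds to the Ricker (Mexican-hat) wavelet; the ERG trace, sampling rate, and scale range are placeholders, and training of the downstream models is omitted.

```python
# Minimal sketch of sweeping candidate mother wavelets over an ERG trace.
import numpy as np
import pywt

FS = 1000.0                                   # assumed ERG sampling rate (Hz)
t = np.arange(0, 0.25, 1.0 / FS)
erg = np.sin(2 * np.pi * 30 * t) * np.exp(-20 * t)   # toy waveform, not real ERG

CANDIDATE_WAVELETS = ["mexh", "morl", "gaus8", "cgau3"]   # 'mexh' = Ricker wavelet
scales = np.arange(1, 65)

scalograms = {}
for name in CANDIDATE_WAVELETS:
    coeffs, freqs = pywt.cwt(erg, scales, name, sampling_period=1.0 / FS)
    scalograms[name] = np.abs(coeffs)         # images later fed to VGG/ResNet/ViT
    print(f"{name}: scalogram {scalograms[name].shape}, "
          f"freq range {freqs.min():.1f}-{freqs.max():.1f} Hz")
```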

  • Article type: Journal Article
    BACKGROUND: Most existing automated sleep staging methods rely on multimodal data, and scoring a specific epoch requires not only the current epoch but also a sequence of consecutive epochs that precede and follow the epoch.
    OBJECTIVE: We proposed and tested a convolutional neural network called SleepInceptionNet, which allows sleep classification of a single epoch using a single-channel electroencephalogram (EEG).
    METHODS: SleepInceptionNet is based on our systematic evaluation of the effects of different EEG preprocessing methods, EEG channels, and convolutional neural networks on automatic sleep staging performance. The evaluation was performed using polysomnography data of 883 participants (937,975 thirty-second epochs). Raw data of individual EEG channels (ie, frontal, central, and occipital) and 3 specific transformations of the data, including power spectral density, continuous wavelet transform, and short-time Fourier transform, were used separately as the inputs of the convolutional neural network models. To classify sleep stages, 7 sequential deep neural networks were tested for the 1D data (ie, raw EEG and power spectral density), and 16 image classifier convolutional neural networks were tested for the 2D data (ie, continuous wavelet transform and short-time Fourier transform time-frequency images).
    RESULTS: The best model, SleepInceptionNet, which uses time-frequency images developed by the continuous wavelet transform method from central single-channel EEG data as input to the InceptionV3 image classifier algorithm, achieved a Cohen κ agreement of 0.705 (SD 0.077) in reference to the gold-standard polysomnography.
    CONCLUSIONS: SleepInceptionNet may allow real-time automated sleep staging in free-living conditions using a single-channel EEG, which may be useful for on-demand intervention or treatment during specific sleep stages.
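    The evaluation side of such a pipeline can be sketched as an InceptionV3 image classifier over CWT images scored against PSG labels with Cohen's kappa, as below; the five-stage head, input size, and simulated predictions are illustrative assumptions rather than SleepInceptionNet itself.

```python
# Minimal sketch: InceptionV3 over CWT time-frequency images plus kappa agreement.
import numpy as np
import tensorflow as tf
from sklearn.metrics import cohen_kappa_score

NUM_STAGES = 5                                   # W, N1, N2, N3, REM (assumed)
base = tf.keras.applications.InceptionV3(include_top=False, weights=None,
                                          input_shape=(299, 299, 3), pooling="avg")
model = tf.keras.Sequential([base, tf.keras.layers.Dense(NUM_STAGES, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Toy agreement check against "gold-standard" PSG labels.
rng = np.random.default_rng(0)
y_psg = rng.integers(0, NUM_STAGES, size=200)
y_model = np.where(rng.random(200) < 0.8, y_psg, rng.integers(0, NUM_STAGES, size=200))
print("Cohen's kappa:", round(cohen_kappa_score(y_psg, y_model), 3))
```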

  • Article type: Journal Article
    ECG wave recognition is one of the new topics in which only one of the ECG beat waves (P-QRS-T) is used to detect heart disease. Normal, tachycardia, and bradycardia heart rhythms are hard to detect using either time-domain or frequency-domain features alone, and a time-frequency analysis is required to extract representative features. This paper studies the performance of two different spectrum representations, the iris-spectrogram and the scalogram, for different ECG beat waves in terms of recognition of the normal, tachycardia, and bradycardia classes. These two different spectra are then sent to two different deep convolutional neural networks (CNNs), i.e., ResNet101 and ShuffleNet, for deep feature extraction and classification. The results show that the best accuracy for detection of beat rhythms was 98.3%, obtained using ResNet101 and the scalogram of the T-wave, while the accuracy was 94.4% using the iris-spectrogram of the QRS wave, also with ResNet101. Finally, based on these results, we note that using deep features from a time-frequency representation of a single ECG beat wave, we can accurately detect basic rhythms such as normal, tachycardia, and bradycardia.
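    The single-wave idea can be sketched as below: crop a window after a detected R peak, turn it into a Morlet scalogram, and pass it to a torchvision CNN (ShuffleNet here, with ResNet101 interchangeable); the peak-detection thresholds, window offsets, and toy signal are assumptions for illustration only.

```python
# Minimal single-wave scalogram sketch (assumed sampling rate, window, and model head).
import numpy as np
import pywt
import torch
import torch.nn as nn
from scipy.signal import find_peaks
from torchvision import models

FS = 360                                       # assumed sampling rate (Hz)

def wave_window(ecg, offset_s=0.20, width_s=0.30):
    """Crop a window after the first detected R peak (rough T-wave region)."""
    peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 98), distance=int(0.4 * FS))
    start = peaks[0] + int(offset_s * FS)
    return ecg[start:start + int(width_s * FS)]

def scalogram_image(wave, scales=np.arange(1, 65)):
    coeffs, _ = pywt.cwt(wave, scales, "morl", sampling_period=1.0 / FS)
    img = torch.tensor(np.abs(coeffs), dtype=torch.float32)[None, None]
    img = nn.functional.interpolate(img, size=(224, 224), mode="bilinear")
    return img.repeat(1, 3, 1, 1)              # grayscale -> 3 channels

model = models.shufflenet_v2_x1_0(weights=None)
model.fc = nn.Linear(model.fc.in_features, 3)  # normal / tachycardia / bradycardia

t = np.arange(0, 5, 1.0 / FS)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.4 * np.sin(2 * np.pi * 8 * t)  # toy signal
print(model(scalogram_image(wave_window(ecg))).shape)                # torch.Size([1, 3])
```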