signal processing

  • Article Type: Journal Article
    The integration of renewable energy sources into current power generation systems necessitates accurate forecasting to optimize and preserve the supply-demand balance in the electrical grid. Due to the highly random nature of environmental conditions, accurate prediction of PV power has limitations, particularly over long and short horizons. Thus, this research provides a new hybrid model for short-term PV power forecasting based on the fusion of multi-frequency information from different decomposition techniques, allowing a forecaster to provide reliable forecasts. We evaluate and provide insights into the performance of five multi-scale decomposition algorithms combined with a deep convolutional neural network (CNN). Additionally, we compare the suggested combination approach's performance to that of existing forecast models. An exhaustive assessment is carried out using three grid-connected PV power plants in Algeria with a total installed capacity of 73.1 MW. The developed fusion strategy displayed outstanding forecasting performance. A comparative analysis of the proposed combination method with the stand-alone forecast model and other hybridization techniques proves its superiority in terms of forecasting precision, with an RMSE varying in the range [0.454-1.54] for the three studied PV stations.
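The decompose-forecast-fuse idea described in this abstract can be sketched minimally: split the series into frequency components, forecast each, and sum the results. The moving-average split and the naive persistence forecaster below are illustrative stand-ins for the paper's five multi-scale decompositions and CNN, not the authors' method.

```python
# Toy decompose-forecast-fuse sketch (moving average + persistence,
# standing in for the paper's decompositions and CNN).

def moving_average(x, w):
    """Centered moving average with edge padding; a crude low-frequency scale."""
    half = w // 2
    padded = [x[0]] * half + list(x) + [x[-1]] * half
    return [sum(padded[i:i + w]) / w for i in range(len(x))]

def decompose(x, w=3):
    """Split a series into a smooth trend and a high-frequency residual."""
    trend = moving_average(x, w)
    residual = [xi - ti for xi, ti in zip(x, trend)]
    return trend, residual

def fused_forecast(x, w=3):
    """Forecast each component separately (persistence here) and fuse by summing."""
    trend, residual = decompose(x, w)
    return trend[-1] + residual[-1]

pv = [0.0, 1.2, 3.4, 5.1, 4.8, 2.9, 0.5]  # hypothetical hourly PV output
trend, residual = decompose(pv)
# The decomposition is exact: components sum back to the original series.
assert all(abs(t + r - x) < 1e-9 for t, r, x in zip(trend, residual, pv))
print(round(fused_forecast(pv), 2))  # -> 0.5
```

A real pipeline would replace the persistence step with a learned model per component; the fusion-by-summation step is the part this sketch illustrates.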

  • Article Type: Journal Article
    This study proposes a novel method for obtaining electrocardiogram (ECG)-derived respiration (EDR) from a single-lead ECG and a respiration-derived cardiogram (RDC) from a respiratory stretch sensor. The research aims to reconstruct the respiration waveform, determine the respiration rate from ECG QRS heartbeat-complex data, locate heartbeats, and calculate heart rate (HR) using the respiration signal. The accuracy of both methods will be evaluated by comparing located QRS complexes and inspiration maxima to reference positions. The findings of this study will ultimately contribute to the development of new, more accurate, and efficient methods for identifying heartbeats in respiratory signals, leading to better diagnosis and management of cardiovascular diseases, particularly during sleep, where respiration monitoring is paramount for detecting apnoea and other respiratory dysfunctions linked to decreased quality of life and known causes of cardiovascular disease. Additionally, this work could potentially assist in determining the feasibility of using simple, non-contact wearable devices for obtaining simultaneous cardiology and respiratory data from a single device.
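The evaluation step described above (locating QRS complexes and deriving HR) can be sketched with a simple local-maximum detector; this is an illustrative baseline, not the paper's algorithm, and the threshold and sampling rate are assumptions.

```python
# Illustrative QRS-peak location and HR derivation (not the paper's method).

def find_r_peaks(ecg, threshold=0.5):
    """Return sample indices of local maxima above an amplitude threshold."""
    peaks = []
    for i in range(1, len(ecg) - 1):
        if ecg[i] > threshold and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]:
            peaks.append(i)
    return peaks

def heart_rate_bpm(peaks, fs):
    """Mean heart rate from R-R intervals (samples -> seconds -> bpm)."""
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(rr) / len(rr))

# Toy ECG at fs = 100 Hz: unit spikes every 80 samples (0.8 s -> 75 bpm).
fs = 100
ecg = [0.0] * 400
for k in range(10, 400, 80):
    ecg[k] = 1.0
peaks = find_r_peaks(ecg)
print(heart_rate_bpm(peaks, fs))  # -> 75.0
```

Comparing `peaks` against annotated reference positions is exactly the accuracy check the abstract describes.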

  • Article Type: Journal Article
    OBJECTIVE: To investigate the potential of a signal processed by a smartphone case based on a single-lead electrocardiogram (ECG) for determining left ventricular diastolic dysfunction (LVDD) as a screening method.
    METHODS AND RESULTS: We included 446 subjects in the learning sample and 259 patients aged 39 to 74 years in the test sample, assessed for LVDD with 2D echocardiography, tissue Doppler imaging, and ECG using a smartphone-case-based single-lead ECG monitor. Spectral analysis of the ECG signals (spECG) was used in combination with advanced signal processing and artificial intelligence methods. Wave slopes, time intervals between waves, amplitudes at different points of the ECG complexes, energy of the ECG signal, and asymmetry indices were analyzed. The QTc interval indicated significant diastolic dysfunction with a sensitivity of 78% and a specificity of 65%; a Tpeak parameter >590 ms with 63% and 58%; a T-off value >695 ms with 63% and 74%; and QRSfi >674 ms with 74% and 57%, respectively. A combination of the threshold values of all 4 parameters increased sensitivity to 86% and specificity to 70% (OR 11.7 [2.7-50.9], P < .001). Algorithm approbation showed: sensitivity 95.6%, specificity 97.7%, diagnostic accuracy 96.5%, and repeatability 98.8%.
    CONCLUSIONS: Our results indicate great potential for a smartphone case based on a single-lead ECG as a novel screening tool for LVDD if spECG is used in combination with advanced signal processing and machine learning technologies.
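The four-parameter combination reported above can be sketched as a simple screening rule. The three cutoffs (Tpeak > 590 ms, T-off > 695 ms, QRSfi > 674 ms) come from the abstract; the QTc cutoff is not stated there, so it enters as a precomputed boolean. Whether the authors combine the parameters with AND, OR, or a weighted score is also unspecified, so requiring all four is purely illustrative.

```python
# Hedged sketch of a combined threshold rule (combination logic assumed).

def lvdd_flag(qtc_abnormal, tpeak_ms, t_off_ms, qrsfi_ms):
    """Flag possible LVDD when all four parameters are abnormal."""
    return (qtc_abnormal
            and tpeak_ms > 590
            and t_off_ms > 695
            and qrsfi_ms > 674)

print(lvdd_flag(True, 600, 700, 680))   # -> True
print(lvdd_flag(True, 580, 700, 680))   # -> False (Tpeak below cutoff)
```

Combining thresholds this way trades sensitivity of any single parameter for specificity, consistent with the direction of the reported improvement.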

  • Article Type: Journal Article
    Sentiment analysis is the automated coding of emotions expressed in text. Sentiment analysis and other analyses focused on the automatic coding of textual documents are increasingly popular in psychology and computer science. However, the potential of treating automatically coded text collected at regular sampling intervals as a signal is currently overlooked. We use the phrase "text as signal" to refer to the application of signal processing techniques to coded textual documents sampled with regularity. To illustrate the potential of treating text as signal, we introduce the reader to a variety of such techniques in a tutorial with two case studies in the realm of social media analysis. First, we apply finite impulse response (FIR) filtering to emotion-coded tweets posted during the US Election Week of 2020 and discuss the visualization of the resulting variation in the filtered signal. We use changepoint detection to highlight important changes in the emotional signals. Then we apply data interpolation, analysis of periodicity via the fast Fourier transform (FFT), and FFT filtering to personal-value-coded tweets from November 2019 to October 2020, and link the variation in the filtered signal to some of the epoch-defining events occurring during this period. Finally, we use block bootstrapping to estimate the variability/uncertainty in the resulting filtered signals. After working through the tutorial, readers will understand the basics of signal processing for analyzing regularly sampled coded text.
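The first case study's FIR filtering step can be sketched directly: a regularly sampled sentiment score is convolved with FIR coefficients to suppress high-frequency noise. The data here are synthetic (a daily swing plus alternating noise), not the tutorial's election-week tweets.

```python
# "Text as signal" sketch: FIR (moving-average) smoothing of a
# regularly sampled sentiment series. Synthetic data, not tweet codings.
import math

def fir_filter(signal, taps):
    """Convolve the signal with FIR coefficients (valid part only)."""
    n = len(taps)
    return [sum(t * s for t, s in zip(taps, signal[i:i + n]))
            for i in range(len(signal) - n + 1)]

# Hourly sentiment: slow daily swing plus high-frequency alternating noise.
sentiment = [math.sin(2 * math.pi * t / 24) + 0.3 * (-1) ** t
             for t in range(72)]
smoothed = fir_filter(sentiment, [0.25, 0.25, 0.25, 0.25])
# A 4-tap averager cancels the alternating +/-0.3 noise exactly,
# leaving the slow swing for visualization or changepoint detection.
print(len(smoothed))  # -> 69
```

The same series could then feed periodicity analysis via the FFT, as the second case study does.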

  • Article Type: Journal Article
    Sample size is a crucial concern in scientific research, and even more so in the behavioural neurosciences, where, despite best practices, it is not always possible to reach large experimental samples. In this study we investigated how research outcomes change in response to sample-size reduction. Three indices computed during a task involving the observation of four videos were considered in the analysis: two related to brain electroencephalographic (EEG) activity and one to autonomic physiological measures, i.e., heart rate and skin conductance. The modifications of these indices were investigated considering five subgroups of sample size (32, 28, 24, 20, 16), each subgroup consisting of 630 different combinations made by bootstrapping n (n = sample size) out of 36 subjects, with respect to the total population (i.e., 36 subjects). The correlation analysis, mean squared error (MSE), and standard deviation (STD) of the indices were studied under participant reduction, and three factors of influence were considered in the analysis: the type of index, the task, and its duration (time length). The findings showed a significant decrease in correlation associated with participant reduction, as well as a significant increase in MSE and STD (p < 0.05). A threshold number of subjects for which the outcomes remained significant and comparable was identified. The effects were to some extent sensitive to all the investigated variables, but the main effect was due to task length. Therefore, the minimum threshold of subjects for which the outcomes were comparable increased as the spot duration decreased.
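The subsampling scheme described above can be sketched by repeatedly drawing n-subject subgroups from the full pool and measuring how the spread of a summary statistic grows as n shrinks. The per-subject values and the number of draws here are toy assumptions, not the study's indices or its 630 combinations.

```python
# Sketch of n-of-36 subgroup bootstrapping (toy data, not the study's).
import random
import statistics

random.seed(0)
population = [random.gauss(0.5, 0.1) for _ in range(36)]  # 36 subjects

def subgroup_spread(n, draws=200):
    """STD of the subgroup mean across random n-of-36 subsamples."""
    means = [statistics.mean(random.sample(population, n))
             for _ in range(draws)]
    return statistics.stdev(means)

# Smaller subgroups -> larger variability of the estimate, mirroring
# the reported increase in STD under participant reduction.
print(subgroup_spread(16) > subgroup_spread(32))  # -> True
```

With sampling without replacement from a fixed pool of 36, the finite-population correction makes the n = 32 subgroups much more stable than the n = 16 ones, which is the effect the study quantifies.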

  • Article Type: Journal Article
    Unlike data augmentation, data generation for extremely rare cases is an approach that can spawn a significant number of high-quality samples from very few original data. This can be useful in anomaly detection and classification tasks limited by the scarcity of publicly available datasets for research purposes. Though other approaches, such as data augmentation techniques, have attempted to solve this problem, nothing ensured the characteristics of the synthesized samples. Previously, we introduced a framework called Data Augmentation and Generation for Anomalous Time-series Signals (DAGAT), built from several cooperating components: Data Augmentation, a Variational Autoencoder (VAE), a Data Picker (DP), a Signal Fragment Assembler (SFA), and a Quality Classifier (QC). An upgraded framework, called Advanced Data Generation for Anomalous Signals (ADGAS), was then introduced to eliminate the limitations of DAGAT, namely uncontrollable outputs and the possibility of bad data being included in the training set. By reforming the DAGAT architecture, ADGAS achieves better generated samples. Nonetheless, ADGAS could be improved through a better SFA, DP, and QC. Hence, this paper proposes a Data Generation Framework for Extremely Rare Case Signals. The proposed framework can generate reliable data for various objectives. We challenged this framework by using a 1D-CNN as the performance evaluator in multi-class anomaly classification, and by using the water treatment and water distribution testbeds (SWaT and WADI) as real-world anomaly datasets. The results show that it surpasses other baseline anomaly data augmentation and data generation techniques.
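The generate-then-screen idea behind the Quality Classifier stage can be sketched as follows: candidate synthetic samples are produced by perturbing the few rare-case seeds, and only those passing a quality check are kept for training. The perturbation and the distance-based check below are illustrative stand-ins for the framework's VAE, SFA, DP, and QC components, not their actual implementations.

```python
# Hedged sketch of generate-then-screen for rare-case signals.
import random

random.seed(1)
seeds = [[0.0, 1.0, 0.5], [0.1, 0.9, 0.6]]  # very few rare-case signals

def generate(seed, noise=0.05):
    """Spawn a candidate by perturbing a rare-case seed (stand-in for a VAE)."""
    return [v + random.uniform(-noise, noise) for v in seed]

def quality_ok(sample, seeds, tol=0.2):
    """Keep a sample only if it stays close to some rare-case seed (stand-in for QC)."""
    return any(max(abs(a - b) for a, b in zip(sample, s)) <= tol
               for s in seeds)

synthetic = [generate(random.choice(seeds)) for _ in range(100)]
accepted = [s for s in synthetic if quality_ok(s, seeds)]
print(len(accepted))  # -> 100 (noise 0.05 is within tol 0.2)
```

The screening step is what addresses the "nothing ensured the characteristics of the synthesized samples" limitation the abstract raises about plain augmentation.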

  • Article Type: Journal Article
    Alzheimer's Disease (AD)-related behavioral symptoms (i.e., agitation and/or pacing) develop in nearly 90% of AD patients. In this N = 1 study, we provide proof-of-concept of detecting changes in movement patterns that may reflect underlying behavioral symptoms using a highly novel radio sensor and identifying environmental triggers.
    The Emerald device is a Wi-Fi-like box without on-body sensors, which emits and processes radio waves to infer patient movement, spatial location, and activity. It was installed for 70 days in the room of patient 'E', who exhibited agitated behaviors.
    Daily motion-episode aggregation revealed motor activity fluctuation throughout the data collection period, which was associated with potential socio-environmental triggers. We did not detect any adverse events attributable to the use of the device.
    This N-of-1 study suggests the Emerald device is feasible to use and can potentially yield actionable data regarding behavioral symptom management. No active or potential device risks were encountered.

  • Article Type: Journal Article
    Vibrations of complex structures such as bridges mostly exhibit nonlinear and non-stationary behavior. Recently, one of the most common techniques for analyzing the nonlinear and non-stationary structural response is the Hilbert-Huang Transform (HHT). This paper aims to evaluate the performance of HHT based on the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) technique, using an Artificial Neural Network (ANN), as a proposed damage detection methodology. The performance of the proposed method is investigated for damage detection on a scaled steel-truss bridge model, experimentally established as the case study and subjected to white-noise excitations. To this end, four key features of the intrinsic mode functions (IMFs), namely energy, instantaneous amplitude (IA), unwrapped phase, and instantaneous frequency (IF), are extracted to assess the presence, severity, and location of damage. By analyzing the experimental results through different damage indices defined on the extracted features, the capabilities of the CEEMDAN-HT-ANN model in detecting damage, locating it, and classifying its severity are demonstrated. In addition, the energy-based damage index proves more effective in detecting damage than those based on the IA and unwrapped-phase parameters.
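An energy-based damage index of the kind this abstract finds most effective can be sketched by comparing the energy of a response component between baseline and inspected states. In real use the inputs would be IMFs from CEEMDAN; here plain toy signals stand in, and the relative-change formula is an illustrative choice, not the paper's exact index.

```python
# Illustrative energy-based damage index (toy signals, not IMFs).

def energy(x):
    """Signal energy: sum of squared samples."""
    return sum(v * v for v in x)

def damage_index(baseline, inspected):
    """Relative energy change; larger magnitude suggests more damage."""
    e0 = energy(baseline)
    return abs(energy(inspected) - e0) / e0

healthy = [0.0, 1.0, -1.0, 1.0, -1.0]   # energy 4.0
damaged = [0.0, 0.5, -0.5, 0.5, -0.5]   # energy 1.0 (damped response)
print(round(damage_index(healthy, damaged), 2))  # -> 0.75
```

Computing this index per IMF and per sensor location is what lets such a scheme address severity and location, as the abstract describes.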

  • Article Type: Journal Article
    The advent of 'Big Data' and 'Deep Learning' offers both a great challenge and a huge opportunity for personalised health care. In machine learning-based biomedical data analysis, feature extraction is a key step for 'feeding' the subsequent classifiers. With increasing amounts of biomedical data, extracting features from these 'big' data is an intensive and time-consuming task. In this case study, we employ a Graphics Processing Unit (GPU) via Python to extract features from a large corpus of snore sound data. Those features can subsequently be imported into many well-known deep learning training frameworks without any format processing. The snore sound data were collected from several hospitals (20 subjects, with 770-990 MB per subject - in total 17.20 GB). Experimental results show that our GPU-based processing significantly speeds up the feature extraction phase, by up to seven times compared to the previous CPU system.

  • Article Type: Journal Article
    The brain is equipped with a complex system for processing sensory information, including retinal circuitry that forms part of the central nervous system. Retinal stimulation can influence brain function via customized eyeglasses at both subcortical and cortical levels. We investigated the cortical effects of wearing therapeutic eyeglasses, hypothesizing that they can create measurable changes in electroencephalogram (EEG) tracings. A Z-BellSM test was performed on a participant to select optimal lenses. An EEG measurement was recorded before and after the participant wore the eyeglasses. Equivalent quantitative electroencephalography (QEEG) analyses (statistical analysis of raw EEG recordings) were performed and compared with baseline findings. With glasses on, the participant's readings were found to be closer to the normed database. The original objective of our investigation was met, and additional findings were revealed. The Z-BellSM test identified lenses that influence neurotypical brain activity, supporting the paradigm that eyeglasses can be utilized as a therapeutic intervention. Also, the EEG analysis demonstrated that encephalographic techniques can be used to identify channels through which neuro-optometric treatments work. This case study's preliminary exploration illustrates the potential role of QEEG analysis and EEG-derived brain imaging in neuro-optometric research endeavors to affect brain function.