Signal Processing, Computer-Assisted

  • Article type: Journal Article
    Objective: Electroencephalography (EEG) is a non-invasive technique used to record cortical neurons' electrical activity using electrodes placed on the scalp. It has become a promising avenue for research beyond state-of-the-art EEG research conducted under static conditions. EEG signals are always contaminated by artifacts and other physiological signals, and artifact contamination increases with the intensity of movement. Approach: In the last decade (since 2010), researchers have started to implement EEG measurements in dynamic setups to increase the overall ecological validity of the studies. Many different methods are used to remove non-brain activity from the EEG signal, and there are no clear guidelines on which method should be used in dynamic setups and for specific movement intensities. Main results: Currently, the most common methods for removing artifacts in movement studies are based on independent component analysis (ICA). However, the choice of artifact-removal method depends on the type and intensity of movement, which affect the characteristics of the artifacts and the EEG parameters of interest. When dealing with EEG under non-static conditions, special care must be taken as early as the design stage of an experiment, and software and hardware solutions must be combined to achieve sufficient removal of unwanted signals from EEG measurements. Significance: We provide recommendations for the use of each method depending on the intensity of the movement and highlight the advantages and disadvantages of the methods. However, given the current gap in the literature, further development and evaluation of artifact-removal methods for EEG data during locomotion is needed.
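
    The abstract names ICA-based decomposition as the current default for artifact removal in movement studies. Below is a minimal sketch of such a pipeline using the MNE-Python library; the file name and the reliance on a recorded EOG channel are assumptions for illustration, and excluded components should in practice be confirmed by visual inspection.

    ```python
    # Minimal ICA artifact-removal sketch (MNE-Python). The file name is
    # hypothetical; automatic EOG-based flagging assumes an EOG channel
    # was recorded alongside the EEG.
    import mne

    raw = mne.io.read_raw_fif("motion_eeg_raw.fif", preload=True)
    raw.filter(l_freq=1.0, h_freq=None)  # high-pass filtering improves ICA decomposition

    ica = mne.preprocessing.ICA(n_components=20, random_state=97)
    ica.fit(raw)

    eog_indices, eog_scores = ica.find_bads_eog(raw)  # correlate components with EOG
    ica.exclude = eog_indices

    raw_clean = ica.apply(raw.copy())  # reconstruct the signal without excluded components
    ```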

  • Article type: Journal Article
    The choice of EEG reference has been widely studied. However, the choice of the most appropriate re-referencing for EEG data is still debated. Moreover, the role of EEG reference in the estimation of functional Brain-Heart Interplay (BHI), together with different multivariate modelling strategies, has not been investigated yet.
    This study identifies the best methodology combining a proper EEG electrical reference with signal processing methods for an effective functional BHI assessment. The effects of the EEG reference among the common average, mastoids average, Laplacian reference, Cz reference, and the reference electrode standardization technique (REST) were explored across different BHI methods, including the synthetic data generation (SDG) model, heartbeat-evoked potentials, heartbeat-evoked oscillations, and the maximal information coefficient.
    The SDG model exhibited high robustness across EEG references, whereas the maximal information coefficient method exhibited high sensitivity. The common average and REST references for EEG showed good consistency in the between-method comparisons. The Laplacian and Cz references significantly biased the BHI measurements.
    The use of an EEG reference based on the common average outperforms the other references in terms of consistency when estimating directed functional BHI. We do not recommend the use of EEG references based on analytical derivations, as the experimental conditions may not meet the requirements for their optimal estimation, particularly in clinical settings.
    The use of a common average for EEG electrical reference is concluded to be the most appropriate choice for a quantitative, functional BHI assessment.
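
    For context on the recommended reference, the common average operation itself is simple: subtract the instantaneous mean over all electrodes from every channel. A minimal sketch on synthetic data, with an illustrative array shape:

    ```python
    # Common average re-referencing: subtract the instantaneous mean across
    # all electrodes from every channel.
    import numpy as np

    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((32, 1000))        # stand-in for 32 channels x 1000 samples

    car = eeg - eeg.mean(axis=0, keepdims=True)  # common average reference

    # After CAR the sum over channels is zero at every time point.
    assert np.allclose(car.sum(axis=0), 0.0)
    ```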

  • Article type: Journal Article
    Proton MR spectra of the brain, especially those measured at short and intermediate echo times, contain signals from mobile macromolecules (MM). A description of the main MM is provided in this consensus paper. These broad MM peaks underlie the narrower peaks of metabolites and often complicate their quantification, but they may also be of potential importance as biomarkers in specific diseases. Thus, separation of the broad MM signals from low-molecular-weight metabolites enables accurate determination of metabolite concentrations and is of primary interest in many studies. Other studies attempt to understand the origin of the MM spectrum, to decompose it into individual spectral regions or peaks, and to use the components of the MM spectrum as markers of various physiological or pathological conditions in biomedical research or clinical practice. The aim of this consensus paper is to provide an overview and some recommendations on how to handle the MM signals in different types of studies, together with a list of open issues in the field, all of which are summarized at the end of the paper.
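
    To make the broad-versus-narrow separation problem concrete, the toy sketch below approximates a broad MM-like background with a heavily smoothed copy of a synthetic spectrum and subtracts it. This is a deliberately naive illustration, not the consensus recommendation; practical MRS quantification handles MM via measured or parameterized basis components.

    ```python
    # Toy illustration of separating a broad MM-like background from narrow
    # metabolite peaks. All shapes and widths are synthetic; real MRS fitting
    # uses measured or parameterized MM basis spectra instead of smoothing.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    ppm = np.linspace(0.5, 4.5, 2048)
    narrow = np.exp(-((ppm - 2.0) ** 2) / (2 * 0.01 ** 2))       # metabolite-like peak
    broad = 0.5 * np.exp(-((ppm - 1.5) ** 2) / (2 * 0.5 ** 2))   # MM-like hump
    spectrum = narrow + broad

    background = gaussian_filter1d(spectrum, sigma=100)  # smooths away narrow features
    metabolite_estimate = spectrum - background          # narrow peaks remain
    ```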

  • Article type: Journal Article
    Once an MRS dataset has been acquired, several important steps must be taken to obtain the desired metabolite concentration measures. First, the data must be preprocessed to prepare them for analysis. Next, the intensity of the metabolite signal(s) of interest must be estimated. Finally, the measured metabolite signal intensities must be converted into scaled concentration units employing a quantitative reference signal to allow meaningful interpretation. In this paper, we review these three main steps in the post-acquisition workflow of a single-voxel MRS experiment (preprocessing, analysis and quantification) and provide recommendations for best practices at each step.
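
    A schematic of these three steps on synthetic data follows; the phase error, peak positions and integration bands are illustrative assumptions, not recommended values.

    ```python
    # Schematic single-voxel workflow: (1) preprocess (zero-order phase
    # correction), (2) estimate a peak intensity by integration, (3) scale it
    # against a reference signal. All numbers are illustrative.
    import numpy as np

    n = 2048
    freq = np.fft.fftshift(np.fft.fftfreq(n, d=1 / 2000.0))  # Hz axis, 2 kHz bandwidth

    spectrum = (np.exp(-((freq - 200.0) ** 2) / (2 * 5.0 ** 2))   # metabolite peak
                + 10 * np.exp(-(freq ** 2) / (2 * 8.0 ** 2)))     # reference peak at 0 Hz
    spectrum = spectrum * np.exp(1j * 0.3)                        # simulated phase error

    # 1) Preprocessing: zero-order phase correction.
    phased = (spectrum * np.exp(-1j * 0.3)).real

    # 2) Analysis: integrate the metabolite peak over its frequency band.
    metab_band = (freq > 180) & (freq < 220)
    metab_area = np.trapz(phased[metab_band], freq[metab_band])

    # 3) Quantification: express the metabolite area relative to the reference;
    # a known reference concentration then converts this ratio to units.
    ref_band = (freq > -30) & (freq < 30)
    ratio = metab_area / np.trapz(phased[ref_band], freq[ref_band])
    ```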

  • Article type: Journal Article
    The Human Proteome Organization's (HUPO) Human Proteome Project (HPP) developed Mass Spectrometry (MS) Data Interpretation Guidelines that have been applied since 2016. These guidelines have helped ensure that the emerging draft of the complete human proteome is highly accurate, with a low number of false-positive protein identifications. Here, we describe an update to these guidelines based on consensus-reaching discussions with the wider HPP community over the past year. The revised 3.0 guidelines address several major and minor identified gaps. We have added guidelines for emerging data independent acquisition (DIA) MS workflows and for use of the new Universal Spectrum Identifier (USI) system being developed by the HUPO Proteomics Standards Initiative (PSI). In addition, we discuss updates to the standard HPP pipeline for collecting MS evidence for all proteins in the HPP, including refinements to minimum evidence. We present a new plan for incorporating MassIVE-KB into the HPP pipeline for the next (HPP 2020) cycle in order to obtain more comprehensive coverage of public MS data sets. The main checklist has been reorganized under headings and subitems, and related guidelines have been grouped. In sum, Version 2.1 of the HPP MS Data Interpretation Guidelines has served well, and this timely update to version 3.0 will aid the HPP as it approaches its goal of collecting and curating MS evidence of translation and expression for all predicted ∼20 000 human proteins encoded by the human genome.
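
    As an aside on the USI system mentioned above: a USI is a colon-delimited string that pinpoints a single spectrum within a public dataset. The sketch below splits one into its parts; the example string follows the published layout but is used here purely for illustration.

    ```python
    # Parsing a Universal Spectrum Identifier (USI) into its components.
    # Layout: mzspec:<collection>:<run>:<index type>:<index>[:<interpretation>].
    # The example string is illustrative.
    usi = "mzspec:PXD000561:Adult_Frontalcortex_bRP_Elite_85_f09:scan:17555:VLHPLEGAVVIIFK/2"

    fields = usi.split(":")
    record = {
        "scheme": fields[0],          # always "mzspec"
        "collection": fields[1],      # e.g. a ProteomeXchange dataset accession
        "run": fields[2],             # MS run name within the dataset
        "index_type": fields[3],      # "scan", "index" or "nativeId"
        "index": fields[4],           # spectrum number within the run
        "interpretation": fields[5],  # peptidoform and charge, when present
    }
    print(record["collection"], record["interpretation"])
    ```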

  • Article type: Journal Article
    We report on novel supervised algorithms for single-trial brain state decoding. Their reliability and robustness are essential for running neurotechnological applications efficiently in closed loop. When brain activity is assessed by multichannel recordings, spatial filters computed by the source power comodulation (SPoC) algorithm allow identifying oscillatory subspaces that regress against a known continuous trial-wise variable reflecting, e.g., stimulus characteristics, cognitive processing or behavior. In small-dataset scenarios, this supervised method tends to overfit its training data, as the underlying recordings via electroencephalogram (EEG), magnetoencephalogram or local field potentials generally provide a low signal-to-noise ratio. To improve upon this, we propose and characterize three types of regularization techniques for SPoC: approaches using Tikhonov regularization (which requires model selection via cross-validation), combinations of Tikhonov regularization and covariance matrix normalization, and strategies exploiting analytical covariance matrix shrinkage. All proposed techniques were evaluated both in a novel simulation framework and on real-world data. The simulation findings confirmed our expectation that SPoC regularization generally yields the largest benefit for small training sets and under severe label noise. Of relevance to practitioners, we derived operating ranges of the regularization hyperparameters for cross-validation-based approaches and offer open-source code. Evaluating all methods additionally on real-world data, we observed improved regression performance mainly for datasets from subjects with initially poor performance. With this proof-of-concept paper, we provide a generalizable regularization framework for SPoC which may serve as a starting point for implementing advanced techniques in the future.
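
    A minimal sketch of SPoC with a shrinkage-regularized denominator covariance follows, on synthetic epochs. It uses the generic SPoC generalized-eigenvalue formulation; the shrinkage strength alpha is a hypothetical value one would select by cross-validation, as the abstract describes.

    ```python
    # SPoC with a shrinkage-regularized average covariance: spatial filters are
    # eigenvectors of a generalized eigenvalue problem between the target-weighted
    # covariance Cz and the (regularized) average covariance C.
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(7)
    n_epochs, n_ch, n_samp = 100, 16, 256
    epochs = rng.standard_normal((n_epochs, n_ch, n_samp))     # stand-in EEG epochs
    z = rng.standard_normal(n_epochs)
    z = (z - z.mean()) / z.std()                               # standardized target variable

    covs = np.einsum("ect,edt->ecd", epochs, epochs) / n_samp  # per-epoch covariances
    C = covs.mean(axis=0)                                      # average covariance
    Cz = np.einsum("e,ecd->cd", z, covs) / n_epochs            # target-weighted covariance

    alpha = 0.1                                                # shrinkage strength (tune via CV)
    C_reg = (1 - alpha) * C + alpha * (np.trace(C) / n_ch) * np.eye(n_ch)

    # Generalized eigenvalue problem; the filter with the largest |eigenvalue|
    # extracts the component whose band power covaries most with z.
    eigvals, filters = eigh(Cz, C_reg)
    best_filter = filters[:, np.argmax(np.abs(eigvals))]
    ```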

  • Article type: Journal Article
    Coherence is a widely used measure to determine the frequency-resolved functional connectivity between pairs of recording sites, but this measure is confounded by shared inputs to the pair. To remove shared inputs, the 'partial coherence' can be computed by conditioning the spectral matrices of the pair on all other recorded channels, which involves the calculation of a matrix (pseudo-) inverse. It has so far remained a challenge to use the time-resolved partial coherence to analyze intracranial recordings with a large number of recording sites. For instance, calculating the partial coherence using a pseudoinverse method produces a high number of false positives when it is applied to a large number of channels. To address this challenge, we developed a new method that randomly aggregated channels into a smaller number of effective channels on which the calculation of partial coherence was based. We obtained a 'consensus' partial coherence (cPCOH) by repeating this approach for several random aggregations of channels (permutations) and only accepting those activations in time and frequency with a high enough consensus. Using model data we show that the cPCOH method effectively filters out the effect of shared inputs and performs substantially better than the pseudo-inverse. We successfully applied the cPCOH procedure to human stereotactic EEG data and demonstrated three key advantages of this method relative to alternative procedures. First, it reduces the number of false positives relative to the pseudo-inverse method. Second, it allows for titration of the amount of false positives relative to the false negatives by adjusting the consensus threshold, thus allowing the data-analyst to prioritize one over the other to meet specific analysis demands. Third, it substantially reduced the number of identified interactions compared to coherence, providing a sparser network of connections from which clear spatial patterns emerged. These patterns can serve as a starting point of further analyses that provide insight into network dynamics during cognitive processes. These advantages likely generalize to other modalities in which shared inputs introduce confounds, such as electroencephalography (EEG) and magneto-encephalography (MEG).
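
    A sketch of ordinary (non-consensus) partial coherence via the pseudo-inverse of the cross-spectral matrix is shown below on synthetic data; the cPCOH procedure would repeat such a computation over random channel aggregations and keep only connections surviving a consensus threshold.

    ```python
    # Partial coherence from the (pseudo-)inverse of the cross-spectral matrix:
    # the coherence between channels i and j after conditioning on all others.
    import numpy as np
    from scipy.signal import csd

    rng = np.random.default_rng(1)
    fs, n_ch, n_samp = 250.0, 8, 5000
    x = rng.standard_normal((n_ch, n_samp))      # stand-in multichannel recording

    # Cross-spectral density matrix S[f] for all channel pairs.
    f, _ = csd(x[0], x[0], fs=fs, nperseg=512)
    S = np.empty((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, S[:, i, j] = csd(x[i], x[j], fs=fs, nperseg=512)

    G = np.linalg.pinv(S)                        # batched pseudo-inverse over frequencies
    d = np.sqrt(np.abs(G[:, np.arange(n_ch), np.arange(n_ch)]))
    pcoh = np.abs(G) / (d[:, :, None] * d[:, None, :])  # |G_ij| / sqrt(G_ii * G_jj)
    ```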

  • Article type: Journal Article
    BACKGROUND: The guidelines of the German Medical Association and the German Society for Clinical Neurophysiology and Functional Imaging (DGKN) require a high procedural and technical standard for electroencephalography (EEG) as an ancillary method for diagnosing the irreversible cessation of brain function (brain death). Nowadays, digital EEG systems are increasingly being applied in hospitals. So far it is unclear to what extent the digital EEG systems currently marketed in Germany meet the guidelines for diagnosing brain death.
    METHODS: In the present article, the technical and safety-related requirements for digital EEG systems and the EEG documentation for diagnosing brain death are described in detail. On behalf of the DGKN, the authors sent out a questionnaire to all identified distributors of digital EEG systems in Germany with respect to the following technical demands: repeated recording of the calibration signals during an ongoing EEG recording, repeated recording of all electrode impedances during an ongoing EEG recording, assessability of intrasystem noise, and galvanic isolation of the measurement earthing from the earthing conductor (floating input).
    RESULTS: For 15 of the 20 identified digital EEG systems, the specifications were provided by the distributors (among them all distributors based in Germany). All of these EEG systems provide galvanic isolation (floating input). The internal noise can be tested with all systems; however, some systems do not allow repeated recording of the calibration signals and/or the electrode impedances during an ongoing EEG recording.
    CONCLUSIONS: The majority but not all of the currently available digital EEG systems offered for clinical use are eligible for use in brain death diagnostics as per German guidelines.

  • Article type: Journal Article
    No abstract available.

  • Article type: Comparative Study
    Visual analysis of fetal heart rate (FHR) during labor is subject to inter- and intra-observer variability that is particularly troublesome for anomalous recordings. Automatic FHR analysis has been proposed as a promising way to reduce this variability. The major difficulty with automatic analysis is determining the baseline from which accelerations and decelerations will be detected. Eleven methods for automatic FHR analysis were reprogrammed from their descriptions in the literature and applied to 66 FHR recordings collected during the first stage of delivery. The FHR baselines produced by the automatic methods were compared with the baseline defined by agreement among a panel of three experts. The better performance of the automatic methods described by Mongelli, Lu, Wrobel and Pardey was noted despite their different approaches to signal processing. Nevertheless, for several recordings, none of the studied automatic methods produced a baseline similar to that defined by the experts.
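
    As a generic illustration of the baseline problem (not a reimplementation of any of the eleven methods compared here), a wide running median suppresses accelerations and decelerations and leaves a slowly varying baseline; the sampling rate, window length and threshold below are assumptions.

    ```python
    # Generic FHR baseline sketch: a two-minute running median flattens
    # accelerations/decelerations, and deviations from it can then be flagged.
    import numpy as np
    from scipy.ndimage import median_filter

    rng = np.random.default_rng(3)
    fs = 4                                    # samples per second
    t = np.arange(0, 600, 1 / fs)             # ten minutes of trace
    fhr = 140 + 5 * np.sin(2 * np.pi * t / 300) + rng.normal(0, 2, t.size)
    fhr[2000:2200] += 20                      # simulated 50 s acceleration

    baseline = median_filter(fhr, size=fs * 120)   # two-minute running median
    accelerations = (fhr - baseline) > 15          # candidate accelerations (> 15 bpm)
    ```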
