Keywords: Categories; EEG; Emotion; Frequency tagging; Voice

MeSH: Humans; Emotions / physiology; Brain / physiology; Anger; Happiness; Fear

Source: DOI:10.1007/s10548-023-00983-8   PDF (PubMed)

Abstract:
Seamlessly extracting emotional information from voices is crucial for efficient interpersonal communication. However, it remains unclear how the brain categorizes vocal expressions of emotion beyond the processing of their acoustic features. In our study, we developed a new approach combining electroencephalographic (EEG) recordings in humans with a frequency-tagging paradigm to 'tag' automatic neural responses to specific categories of emotion expressions. Participants were presented with a periodic stream of heterogeneous non-verbal emotional vocalizations belonging to five emotion categories (anger, disgust, fear, happiness and sadness) at 2.5 Hz (stimulus length of 350 ms with a 50 ms silent gap between stimuli). Importantly, unknown to the participant, a specific emotion category appeared at a target presentation rate of 0.83 Hz that would elicit an additional response in the EEG spectrum only if the brain discriminates the target emotion category from the other emotion categories and generalizes across heterogeneous exemplars of the target category. Stimuli were matched across emotion categories for harmonicity-to-noise ratio, spectral center of gravity and pitch. Additionally, participants were presented with a scrambled version of the stimuli with identical spectral content and periodicity but disrupted intelligibility. Both types of sequences had comparable envelopes and early auditory peripheral processing, computed via a simulation of the cochlear response. We observed that, in addition to the responses at the general presentation frequency (2.5 Hz) in both intact and scrambled sequences, a greater peak in the EEG spectrum at the target emotion presentation rate (0.83 Hz) and its harmonics emerged in the intact sequence compared to the scrambled sequence. The greater response at the target frequency in the intact sequence, together with our stimulus-matching procedure, suggests that the categorical brain response elicited by a specific emotion is at least partially independent of the low-level acoustic features of the sounds. Moreover, responses at the fearful and happy vocalization presentation rates elicited different topographies and different temporal dynamics, suggesting that different discrete emotions are represented differently in the brain. Our paradigm revealed the brain's ability to categorize non-verbal vocal emotion expressions automatically, objectively (at a predefined frequency of interest), without requiring a behavioral task, rapidly (within a few minutes of recording time) and robustly (with a high signal-to-noise ratio), making it a useful tool for studying vocal emotion processing and auditory categorization in general, and in populations where behavioral assessments are more challenging.
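To make the analysis logic described in the abstract concrete: a base rate of 2.5 Hz with a target rate of 0.83 Hz is consistent with every third stimulus belonging to the target category (2.5 / 3 ≈ 0.83 Hz), and the categorical response is read out as spectral peaks at the target rate and its harmonics, excluding harmonics that coincide with the base rate. The sketch below illustrates this kind of frequency-tagging readout in plain NumPy. It is not the authors' pipeline; the sampling rate, epoch length, synthetic data, and the `snr_at` helper are all assumptions for illustration.

```python
# Minimal frequency-tagging analysis sketch (illustrative, not the authors' code).
# Assumes a preprocessed EEG sequence of shape (n_channels, n_samples).
import numpy as np

fs = 512.0                             # assumed sampling rate (Hz)
base_hz = 2.5                          # general stimulus presentation rate
target_hz = base_hz / 3.0              # target category rate: every 3rd stimulus, ~0.83 Hz

rng = np.random.default_rng(0)
n_channels, n_samples = 64, int(fs * 60)            # stand-in for one 60 s sequence
eeg = rng.standard_normal((n_channels, n_samples))  # placeholder data

# Amplitude spectrum per channel via FFT over the whole sequence; long epochs
# give the fine frequency resolution that frequency tagging relies on.
spectrum = np.abs(np.fft.rfft(eeg, axis=1)) / n_samples
freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)

def snr_at(f, spectrum, freqs, n_neighbors=10, skip=1):
    """SNR at frequency f: amplitude at the nearest bin divided by the mean
    amplitude of surrounding bins (excluding `skip` immediately adjacent bins)."""
    idx = np.argmin(np.abs(freqs - f))
    neighbors = np.r_[idx - skip - n_neighbors: idx - skip,
                      idx + skip + 1: idx + skip + 1 + n_neighbors]
    return spectrum[:, idx] / spectrum[:, neighbors].mean(axis=1)

# Target-rate response: the target frequency and its first harmonics,
# skipping harmonics that coincide with the base rate (e.g. 3 x 0.83 = 2.5 Hz).
for h in (1, 2, 4, 5):
    f = h * target_hz
    print(f"{f:.2f} Hz (target harmonic {h}): mean SNR = {snr_at(f, spectrum, freqs).mean():.2f}")
print(f"{base_hz:.2f} Hz (base rate): mean SNR = {snr_at(base_hz, spectrum, freqs).mean():.2f}")
```

On white-noise placeholder data the SNR hovers around 1 everywhere; the abstract's finding corresponds to SNR well above 1 at the target rate and its harmonics for intact sequences only, while both intact and scrambled sequences show a response at the 2.5 Hz base rate.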