frequency tagging

  • Article type: Journal Article
    There has been much debate on whether color categories affect how we perceive color. Recent theories have emphasized the role of top-down influence on color perception, proposing that the originally continuous color space in the visual cortex may be transformed into categorical encoding through top-down modulation. To test the influence of color categories on color perception, we adopted an RSVP paradigm in which color stimuli were presented at a rapid rate of 100 ms per stimulus and were forward- and backward-masked by the preceding and following stimuli. Moreover, no explicit color naming or categorization was required. In theory, backward masking with such a short interval in a passive viewing task should constrain top-down influence from higher-level brain areas. To measure any potentially subtle differences in the brain responses elicited by different color categories, we embedded a sensitive frequency-tagging-based EEG paradigm within the RSVP stimulus stream, in which the oddball color stimuli were encoded at a different frequency from the base color stimuli. We showed that EEG responses to cross-category oddball colors at the frequency at which the oddball stimuli were presented were significantly larger than the responses to within-category oddball colors. Our study suggests that the visual cortex can automatically and implicitly encode color categories when color stimuli are presented rapidly.
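    As a rough illustration of the frequency-tagging logic described in this abstract, the sketch below builds a synthetic EEG trace containing a 10 Hz base-rate response (100 ms per stimulus) plus a weaker oddball-rate response, and quantifies the tagged response as the spectral amplitude at the target frequency relative to neighboring bins. The 2 Hz oddball rate, the sampling rate, the helper name `tagged_snr`, and the SNR definition are illustrative assumptions, not parameters or analyses reported in the paper.

```python
import numpy as np

def tagged_snr(eeg, sfreq, target_hz, n_neighbors=10):
    """Amplitude at a tagged frequency divided by the mean amplitude of
    neighboring bins, a common frequency-tagging signal-to-noise measure."""
    n = len(eeg)
    amp = np.abs(np.fft.rfft(eeg)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    target = np.argmin(np.abs(freqs - target_hz))
    # Neighbors on both sides, skipping the bins adjacent to the target.
    neighbors = np.r_[target - n_neighbors - 1:target - 1,
                      target + 2:target + n_neighbors + 2]
    return amp[target] / amp[neighbors].mean()

# Synthetic example: 10 Hz base-rate response (100 ms per stimulus) plus a
# weaker response at an assumed 2 Hz oddball rate, buried in noise.
sfreq, dur = 250.0, 60.0
t = np.arange(0, dur, 1.0 / sfreq)
eeg = (np.sin(2 * np.pi * 10 * t)           # base-rate response
       + 0.3 * np.sin(2 * np.pi * 2 * t)    # oddball-rate response
       + np.random.randn(t.size))           # background noise
print(tagged_snr(eeg, sfreq, target_hz=2.0))  # values > 1 indicate a tagged response
```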

  • Article type: Journal Article
    Speech comprehension requires listeners to rapidly parse continuous speech into hierarchically organized linguistic structures (i.e. syllables, words, phrases, and sentences) and entrain their neural activities to the rhythm of the different linguistic levels. Aging is accompanied by changes in speech processing, but it remains unclear how aging affects different levels of linguistic representation. Here, we recorded magnetoencephalography signals in older and younger groups while subjects actively and passively listened to continuous speech in which the hierarchical linguistic structures of words, phrases, and sentences were tagged at 4, 2, and 1 Hz, respectively. A newly developed parameterization algorithm was applied to separate the periodic linguistic tracking from the aperiodic component. We found enhanced lower-level (word-level) tracking, reduced higher-level (phrasal- and sentential-level) tracking, and a reduced aperiodic offset in older compared with younger adults. Furthermore, we observed that attentional modulation of sentential-level tracking was larger for younger than for older adults. Notably, the neuro-behavioral analyses showed that subjects' behavioral accuracy was positively correlated with higher-level linguistic tracking and inversely correlated with lower-level linguistic tracking. Overall, these results suggest that enhanced lower-level linguistic tracking, reduced higher-level linguistic tracking, and less flexible attentional modulation may underpin the aging-related decline in speech comprehension.
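    The abstract above mentions a newly developed parameterization algorithm for separating periodic linguistic tracking from the aperiodic component. The sketch below is not that algorithm; it is a minimal stand-in that fits a 1/f-like aperiodic trend in log-log space (excluding bins around the tagged rates) and reports how far the power at 1, 2, and 4 Hz rises above that trend. The helper name and all parameters are illustrative.

```python
import numpy as np

def periodic_above_aperiodic(freqs, power, tag_hz=(1.0, 2.0, 4.0), exclude_bw=0.3):
    """Fit a 1/f-like aperiodic trend in log-log space (excluding bins near the
    tagged rates) and return how far log-power at each tagged rate sits above it."""
    logf, logp = np.log10(freqs), np.log10(power)
    keep = np.ones(freqs.size, dtype=bool)
    for f0 in tag_hz:
        keep &= np.abs(freqs - f0) > exclude_bw
    slope, intercept = np.polyfit(logf[keep], logp[keep], 1)
    aperiodic_fit = slope * logf + intercept
    return {f0: float(logp[np.argmin(np.abs(freqs - f0))]
                      - aperiodic_fit[np.argmin(np.abs(freqs - f0))])
            for f0 in tag_hz}

# Synthetic spectrum: a 1/f background with narrow peaks at the tagged rates.
freqs = np.linspace(0.5, 10.0, 200)
power = 1.0 / freqs + 0.05 * np.random.rand(freqs.size)
for f0, gain in [(1.0, 0.5), (2.0, 0.8), (4.0, 1.5)]:
    power += gain * np.exp(-0.5 * ((freqs - f0) / 0.05) ** 2)
print(periodic_above_aperiodic(freqs, power))
```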

  • Article type: Journal Article
    Native speakers excel at parsing continuous speech into smaller elements and entraining their neural activities to the linguistic hierarchy at different levels (e.g., syllables, phrases, and sentences) to achieve speech comprehension. However, how a nonnative brain tracks hierarchical linguistic structures in second language (L2) speech comprehension and whether it relates to top-down attention and language proficiency remains elusive. Here, we applied a frequency-tagging paradigm in human adults and investigated the neural tracking responses to hierarchically organized linguistic structures (i.e., the syllabic rate of 4 Hz, the phrasal rate of 2 Hz, and the sentential rate of 1 Hz) in both first language (L1) and L2 listeners when they attended to a speech stream or ignored it. We revealed disrupted neural responses to higher-order linguistic structures (i.e., phrases and sentences) for L2 listeners, in which the phrasal-level tracking was functionally related to an L2 subject's language proficiency. We also observed less efficient top-down modulation of attention in L2 speech comprehension than in L1 speech comprehension. Our results indicate that the reduced δ-band neuronal oscillations that subserve the internal construction of higher-order linguistic structures may compromise listening comprehension in a nonnative language.
    SIGNIFICANCE STATEMENT: Low-frequency neural oscillations underpin speech comprehension in the native brain. How the nonnative brain tracks hierarchical linguistic structures in L2 speech, and whether this relates to attention and language proficiency, has remained unclear. Our study recorded electrophysiological responses to linguistic structures at the syllabic, phrasal, and sentential rates and found that, compared with L1 listeners, L2 listeners showed reduced tracking responses to higher-order linguistic structures in L2, which was related to behavioral L2 proficiency. Moreover, unlike native listeners, who track speech structures automatically, nonnative listeners failed to track higher-order linguistic structures in L2 speech during passive listening, indicating a different pattern of attentional modulation in the nonnative brain.
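    One common way to quantify neural tracking at tagged rates (not necessarily the analysis used in this study) is inter-trial phase coherence, sketched below for the 1, 2, and 4 Hz rates mentioned above. The trial count, trial length, sampling rate, and helper name `itc_at_rate` are arbitrary choices for illustration.

```python
import numpy as np

def itc_at_rate(trials, sfreq, rate_hz):
    """Inter-trial phase coherence at one tagged rate: take each trial's Fourier
    coefficient at that frequency and measure how consistently its phase aligns
    across trials (1 = perfectly consistent, near 0 = random)."""
    n_trials, n_times = trials.shape
    freqs = np.fft.rfftfreq(n_times, d=1.0 / sfreq)
    bin_idx = np.argmin(np.abs(freqs - rate_hz))
    coeffs = np.fft.rfft(trials, axis=1)[:, bin_idx]
    return np.abs(np.mean(coeffs / np.abs(coeffs)))

# Synthetic example: 40 trials of 4 s each, with phase-locked 1 Hz (sentence)
# and 4 Hz (syllable) components plus noise, following the tagging scheme above.
sfreq, n_trials, dur = 200.0, 40, 4.0
t = np.arange(0, dur, 1.0 / sfreq)
trials = (np.sin(2 * np.pi * 1 * t) + 0.5 * np.sin(2 * np.pi * 4 * t)
          + np.random.randn(n_trials, t.size))
for rate in (1.0, 2.0, 4.0):
    print(rate, itc_at_rate(trials, sfreq, rate))
```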

  • Article type: Journal Article
    Spatial hearing in humans is a high-level auditory process that is crucial to rapid sound localization in the environment. Both neurophysiological models with animals and neuroimaging evidence from human subjects in the wakefulness stage suggest that the localization of auditory objects is mainly located in the posterior auditory cortex. However, whether this cognitive process is preserved during sleep remains unclear. To fill this research gap, we investigated the sleeping brain's capacity to identify sound locations by recording simultaneous electroencephalographic (EEG) and magnetoencephalographic (MEG) signals during wakefulness and non-rapid eye movement (NREM) sleep in human subjects. Using the frequency-tagging paradigm, the subjects were presented with a basic syllable sequence at 5 Hz and a location change that occurred every three syllables, resulting in a sound localization shift at 1.67 Hz. The EEG and MEG signals were used for sleep scoring and neural tracking analyses, respectively. Neural tracking responses at 5 Hz reflecting basic auditory processing were observed during both wakefulness and NREM sleep, although the responses during sleep were weaker than those during wakefulness. Cortical responses at 1.67 Hz, which correspond to the sound location change, were observed during wakefulness regardless of attention to the stimuli but vanished during NREM sleep. These results for the first time indicate that sleep preserves basic auditory processing but disrupts the higher-order brain function of sound localization.
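    A minimal sketch of the tagging scheme described above: syllables presented at 5 Hz with the sound location switching every third syllable, which places the location change at 5/3 ≈ 1.67 Hz. The location labels and sequence length are placeholders, not the actual stimuli.

```python
import numpy as np

# Syllables at 5 Hz; the location switches every third syllable, so the
# location-change rhythm is tagged at 5 / 3 ≈ 1.67 Hz.
syllable_rate = 5.0                      # Hz
n_syllables = 300                        # 60 s of stimulation (placeholder length)
onsets = np.arange(n_syllables) / syllable_rate           # syllable onset times (s)
locations = np.tile(np.repeat(["left", "right"], 3),      # L L L R R R ...
                    n_syllables // 6)
location_change_rate = syllable_rate / 3                   # frequency tag for the shift
print(location_change_rate)                                # 1.666... Hz
print(list(zip(onsets[:7], locations[:7])))
```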

  • Article type: Journal Article
    Speech mental imagery is a quasi-perceptual experience that occurs in the absence of real speech stimulation. How imagined speech with higher-order structures such as words, phrases and sentences is rapidly organized and internally constructed remains elusive. To address this issue, subjects were tasked with imagining and perceiving poems along with a sequence of reference sounds presented at a rate of 4 Hz while magnetoencephalography (MEG) was recorded. Given that a sentence in a traditional Chinese poem comprises five syllables, a sentential rhythm was generated at a distinctive frequency of 0.8 Hz. Using frequency tagging, we concurrently tracked the neural processing of the top-down generation of rhythmic constructs embedded in speech mental imagery and of the bottom-up sensory-driven activity, which were precisely tagged at the sentence-level rate of 0.8 Hz and the stimulus-level rate of 4 Hz, respectively. We found similar neural responses induced by the internal construction of sentences from syllables for both imagined and perceived poems and further revealed shared and distinct cohorts of cortical areas corresponding to the sentence-level rhythm in imagery and perception. This study supports the view of a common mechanism between imagery and perception by illustrating the neural representations of higher-order rhythmic structures embedded in imagined and perceived speech.

  • Article type: Journal Article
    Speech contains rich acoustic and linguistic information. Using highly controlled speech materials, previous studies have demonstrated that cortical activity is synchronous to the rhythms of perceived linguistic units, for example, words and phrases, on top of basic acoustic features, for example, the speech envelope. When listening to natural speech, it remains unclear, however, how cortical activity jointly encodes acoustic and linguistic information. Here we investigate the neural encoding of words using electroencephalography and observe neural activity synchronous to multi-syllabic words when participants naturally listen to narratives. An amplitude modulation (AM) cue for word rhythm enhances the word-level response, but the effect is only observed during passive listening. Furthermore, words and the AM cue are encoded by spatially separable neural responses that are differentially modulated by attention. These results suggest that bottom-up acoustic cues and top-down linguistic knowledge separately contribute to cortical encoding of linguistic units in spoken narratives.
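    To make the amplitude-modulation cue concrete, the sketch below imposes a shallow AM at a hypothetical 2 Hz word rate on an audio signal. The modulation depth, word rate, noise carrier, and helper name `add_word_rate_am` are illustrative assumptions rather than the stimuli used in the study.

```python
import numpy as np

def add_word_rate_am(audio, sfreq, word_rate_hz, depth=0.2):
    """Impose a shallow amplitude modulation at the word rate on an audio
    signal, analogous in spirit to the AM cue for word rhythm described above.
    The modulation depth is an arbitrary illustrative value."""
    t = np.arange(audio.size) / sfreq
    modulator = 1.0 + depth * np.sin(2 * np.pi * word_rate_hz * t)
    return audio * modulator

# Example: a 10 s noise carrier standing in for speech, cued at a 2 Hz word rate.
sfreq = 16000
audio = np.random.randn(10 * sfreq)
cued = add_word_rate_am(audio, sfreq, word_rate_hz=2.0)
```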

  • Article type: Journal Article
    Many sensorimotor functions are intrinsically rhythmic and are underpinned by neural processes that are functionally distinct from neural responses related to the processing of transient events. EEG frequency tagging is a technique that is increasingly used in neuroscience to study these processes. It relies on the fact that perceiving and/or producing rhythms generates periodic neural activity that translates into periodic variations of the EEG signal. In the EEG spectrum, those variations appear as peaks localized at the frequency of the rhythm and its harmonics.
    Many natural rhythms, such as music or dance, are not strictly periodic and, instead, show fluctuations of their period over time. Here, we introduce a time-warping method to identify non-strictly-periodic EEG activities in the frequency domain.
    EEG time-warping can be used to characterize the sensorimotor activity related to the performance of self-paced rhythmic finger movements. Furthermore, the EEG time-warping method can disentangle auditory- and movement-related EEG activity produced when participants perform rhythmic movements synchronized to an acoustic rhythm. This is possible because the movement-related activity has different period fluctuations than the auditory-related activity.
    With the classic frequency-tagging approach, rhythm fluctuations result in a spreading of the peaks to neighboring frequencies, to the point that they cannot be distinguished from background noise.
    The proposed time-warping procedure is a simple and effective means to study natural, non-strictly-periodic rhythmic neural processes such as rhythmic movement production, acoustic rhythm perception, and sensorimotor synchronization.
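    The sketch below illustrates the general idea of such a time-warping step (not the authors' exact implementation): the EEG between consecutive rhythmic events is resampled so that every cycle has the same length, turning a rhythm with fluctuating periods into a strictly periodic one that then shows up as sharp spectral peaks. The event timings, sampling rate, and helper name `time_warp_to_common_period` are hypothetical.

```python
import numpy as np

def time_warp_to_common_period(eeg, event_samples, samples_per_cycle=200):
    """Resample the EEG between consecutive rhythmic events so that every cycle
    has the same length, making a non-strictly-periodic rhythm strictly periodic
    before spectral analysis."""
    warped_cycles = []
    for start, stop in zip(event_samples[:-1], event_samples[1:]):
        cycle = eeg[start:stop]
        old_grid = np.linspace(0.0, 1.0, cycle.size)
        new_grid = np.linspace(0.0, 1.0, samples_per_cycle)
        warped_cycles.append(np.interp(new_grid, old_grid, cycle))
    return np.concatenate(warped_cycles)

# Example: tapping-like events whose period drifts around 600 ms (sfreq = 500 Hz).
sfreq = 500
periods = np.random.normal(0.6, 0.05, size=100)            # fluctuating periods (s)
event_samples = np.cumsum(np.round(periods * sfreq)).astype(int)
event_samples = np.insert(event_samples, 0, 0)
eeg = np.random.randn(event_samples[-1] + 1)                # placeholder EEG trace
warped = time_warp_to_common_period(eeg, event_samples)
# After warping, rhythm-locked activity concentrates at 1 / cycle and its harmonics.
```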
