McGurk effect

  • Article type: Journal Article
    The processing of speech information from various sensory modalities is crucial for human communication. Both the left posterior superior temporal gyrus (pSTG) and the motor cortex are importantly involved in multisensory speech perception. However, how the primary sensory regions dynamically integrate with the pSTG and the motor cortex remains unclear. Here, we implemented a behavioral experiment based on the classical McGurk effect paradigm and acquired task functional magnetic resonance imaging (fMRI) data from 63 normal adults during synchronized audiovisual syllable perception. We conducted dynamic causal modeling (DCM) analysis to explore the cross-modal interactions among the left pSTG, left precentral gyrus (PrG), left middle superior temporal gyrus (mSTG), and left fusiform gyrus (FuG). Bayesian model selection favored a winning model that included modulations of connections to PrG (mSTG → PrG, FuG → PrG), from PrG (PrG → mSTG, PrG → FuG), and to pSTG (mSTG → pSTG, FuG → pSTG). Moreover, the coupling strength of these connections correlated with behavioral McGurk susceptibility, and differed significantly between strong and weak McGurk perceivers. Strong perceivers modulated a less inhibitory visual influence, allowed less excitatory auditory information to flow into PrG, but integrated more audiovisual information in pSTG. Taken together, our findings show that the PrG and pSTG interact dynamically with primary cortices during audiovisual speech, and support a specific functional role for the motor cortex in modulating the gain and salience between the auditory and visual modalities.
    The online version contains supplementary material available at 10.1007/s11571-023-09945-z.
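The Bayesian model selection step described above, reduced to its core, compares approximate log model evidences; under a flat model prior, posterior model probabilities are a softmax over those evidences. A minimal sketch, with made-up log-evidence values and an illustrative function name (not taken from the study):

```python
# Hedged sketch of Bayesian model selection as used in DCM analyses:
# pick the model with the highest approximate log model evidence, and
# convert evidences to posterior model probabilities assuming equal priors.
import math

def posterior_model_probs(log_evidences):
    """Posterior probability of each model under a flat model prior."""
    m = max(log_evidences)
    unnorm = [math.exp(le - m) for le in log_evidences]  # numerically stable softmax
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Three candidate DCMs; model 2 wins by ~3 nats (illustrative numbers).
log_ev = [-1204.0, -1201.0, -1207.5]
probs = posterior_model_probs(log_ev)
print([round(p, 3) for p in probs])  # [0.047, 0.951, 0.001]
```

A ~3-nat evidence gap already concentrates ~95% of the posterior mass on the winning model, which is why small log-evidence differences decide model selection.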

  • Article type: Journal Article
    Differences between autistic and non-autistic individuals in the perception of temporal relationships between sights and sounds are theorized to underlie difficulties in integrating relevant sensory information. These, in turn, are thought to contribute to problems with speech perception and higher-level social behaviour. However, the literature establishing this connection often involves limited sample sizes and focuses almost entirely on children. To determine whether these differences persist into adulthood, we compared 496 autistic and 373 non-autistic adults (aged 17 to 75 years). Participants completed an online version of the McGurk/MacDonald paradigm, a multisensory illusion indicative of the ability to integrate audiovisual speech stimuli. Audiovisual asynchrony was manipulated, and participants responded both to the syllable they perceived (revealing their susceptibility to the illusion) and to whether or not the audio and video were synchronized (allowing insight into temporal processing). In contrast with prior research on smaller, younger samples, we detected no evidence of impaired temporal or multisensory processing in autistic adults. Instead, we found that in both groups multisensory integration correlated strongly with age. This contradicts prior presumptions that differences in multisensory perception persist, and even increase in magnitude, over the lifespan of autistic individuals. It also suggests that the compensatory role that multisensory integration may play as the individual senses decline with age is intact. These findings challenge existing theories and provide an optimistic perspective on autistic development. They also underline the importance of expanding autism research to better reflect the age range of the autistic population.

  • Article type: Journal Article
    The visual system is not fully mature at birth and continues to develop throughout infancy, reaching adult levels through late childhood and adolescence. Disruption of vision during this postnatal period, prior to visual maturation, results in deficits of visual processing that in turn may affect the development of complementary senses. Studying people who have had one eye surgically removed during early postnatal development provides a useful model for understanding timelines of sensory development and the role of binocularity in visual system maturation. Adaptive auditory and audiovisual plasticity following the loss of one eye early in life has been observed for both low- and high-level visual stimuli. Notably, people who have had one eye removed early in life perceive the McGurk effect much less than binocular controls.
    The current study investigated whether multisensory compensatory mechanisms are also present in people who had one eye removed late in life, after postnatal visual system maturation, by measuring whether they perceive the McGurk effect compared with binocular controls and people who had one eye removed early in life.
    People who had one eye removed late in life perceived the McGurk effect similarly to binocular viewing controls, unlike those who had one eye removed early in life.
    This suggests that multisensory compensatory mechanisms differ with age at surgical eye removal. These results indicate that cross-modal adaptations to the loss of binocularity may depend on the level of plasticity during cortical development.

  • Article type: Journal Article
    Humans pay special attention to faces and speech from birth, but the interplay of developmental processes leading to specialization is poorly understood. We investigated the effects of face orientation on audiovisual (AV) speech perception in two age groups of infants (younger: 5- to 6.5-month-olds; older: 9- to 10.5-month-olds) and in adults. We recorded event-related potentials (ERPs) in response to videos of upright and inverted faces producing a /ba/ articulation dubbed with auditory syllables that either matched (/ba/) or mismatched (/ga/) the mouth movement. We observed an increased amplitude of the audiovisual mismatch response (AVMMR) to the incongruent visual /ba/-auditory /ga/ syllable, in comparison to other stimuli, in younger infants, while the older group of infants did not show a similar response. An AV mismatch response to the inverted visual /ba/-auditory /ga/ stimulus relative to congruent stimuli was also detected in right frontal areas in the younger group and in left and right frontal areas in adults. We show that face configuration affects the neural response to AV mismatch differently across the age groups. The novel finding of an AVMMR in response to inverted incongruent AV speech may imply featural face processing in younger infants and adults when processing inverted faces articulating incongruent speech. The lack of differential responses to upright and inverted incongruent stimuli in the older group of infants suggests a likely functional cortical reorganization in the processing of AV speech.

  • Article type: Journal Article
    In the McGurk effect, perception of a spoken consonant is altered when an auditory (A) syllable is presented with an incongruent visual (V) syllable (e.g., A/pa/ V/ka/ is often heard as /ka/ or /ta/). The McGurk effect provides a measure of the visual influence on speech perception: the lower the proportion of auditory-correct responses, the stronger the effect. Cross-language effects are studied to understand processing differences between one's own and foreign languages. Regarding the McGurk effect, it has sometimes been found to be stronger with foreign speakers. However, other studies have shown the opposite, or no difference between languages. Most studies have compared English with other languages. We investigated cross-language effects with native Finnish and Japanese speakers and listeners. Both groups of listeners had 49 participants. The stimuli (/ka/, /pa/, /ta/) were uttered by two female and male Finnish and Japanese speakers and presented in A, V, and AV modalities, including the McGurk stimulus A/pa/ V/ka/. The McGurk effect was stronger with Japanese stimuli in both listener groups. Differences in speech perception were prominent between individual speakers but less so between native languages. Unisensory perception correlated with McGurk perception. These findings suggest that stimulus-dependent features contribute to the McGurk effect, and these may have a stronger influence on syllable perception than cross-language factors.
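The susceptibility measure described above (the effect is stronger when fewer McGurk trials are reported as the true auditory syllable) can be sketched as a simple score over trial responses; the function name, syllable labels, and trial data below are illustrative, not the study's:

```python
# Hedged sketch: quantifying McGurk susceptibility from trial responses,
# following the abstract's definition (stronger effect = lower proportion
# of auditory-correct responses on the McGurk stimulus A/pa/ V/ka/).

def mcgurk_susceptibility(responses, auditory_syllable="pa"):
    """Fraction of McGurk trials NOT reported as the auditory syllable.

    responses: list of perceived syllables on A/pa/ V/ka/ trials.
    Returns 1 - P(auditory correct); higher means a stronger effect.
    """
    if not responses:
        raise ValueError("need at least one trial")
    auditory_correct = sum(r == auditory_syllable for r in responses)
    return 1.0 - auditory_correct / len(responses)

# Example: 10 trials, 3 reported as the true auditory /pa/,
# 7 fused or captured as /ta/ or /ka/.
trials = ["ta", "ka", "pa", "ta", "ta", "ka", "pa", "ta", "pa", "ka"]
print(round(mcgurk_susceptibility(trials), 3))  # 0.7
```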

  • Article type: Journal Article
    Network architectures and learning principles have been critical in developing complex cognitive capabilities in artificial neural networks (ANNs). Spiking neural networks (SNNs) are a subset of ANNs that incorporate additional biological features such as dynamic spiking neurons, biologically specified architectures, and efficient and useful paradigms. Here we focus on network architectures in SNNs, such as the meta-operator called the 3-node network motif, which is borrowed from biological networks. We propose a Motif-topology-improved SNN (M-SNN) and verify that it is efficient in explaining key cognitive phenomena such as the cocktail party effect (a typical noise-robust speech-recognition task) and the McGurk effect (a typical multi-sensory integration task). For the M-SNN, the motif topology is obtained by integrating spatial and temporal motifs. These spatial and temporal motifs are first generated from pre-training on spatial (e.g., MNIST) and temporal (e.g., TIDigits) datasets, respectively, and then applied to the two cognitive-effect tasks introduced above. The experimental results showed lower computational cost, higher accuracy, and better explanations of some key phenomena of these two effects, such as new-concept generation and robustness to background noise. This mesoscale network-motif topology leaves much room for future work.
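The "3-node network motif" meta-operator mentioned above refers to recurring 3-node connectivity patterns in a directed graph. Below is a minimal, illustrative census of 3-node subgraph patterns up to node relabeling; the paper's actual motif extraction from pre-trained SNNs is more involved, and all names here are assumptions for the sketch:

```python
# Hedged sketch: count 3-node subgraph patterns (motif candidates) in a
# directed graph, grouping isomorphic patterns via a canonical relabeling.
from itertools import combinations, permutations

def motif_census(edges, n_nodes):
    """Count 3-node subgraphs (with at least one internal edge) by pattern."""
    adj = set(edges)
    counts = {}
    for trio in combinations(range(n_nodes), 3):
        # Canonicalize: take the lexicographically smallest edge pattern
        # over all 6 relabelings of the three nodes.
        best = None
        for perm in permutations(trio):
            label = {node: i for i, node in enumerate(perm)}
            pattern = tuple(sorted(
                (label[u], label[v])
                for u, v in adj
                if u in label and v in label))
            if best is None or pattern < best:
                best = pattern
        if best:  # skip trios with no internal edges
            counts[best] = counts.get(best, 0) + 1
    return counts

# A feed-forward chain 0->1->2 plus shortcut 0->2 ("feed-forward loop" motif).
print(motif_census([(0, 1), (1, 2), (0, 2)], 3))
```

Real motif analyses additionally compare these counts against randomized null-model graphs to decide which patterns are over-represented.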

  • Article type: Journal Article
    We receive information about the world around us from multiple senses, which combine in a process known as multisensory integration. Multisensory integration has been shown to be dependent on attention; however, the neural mechanisms underlying this effect are poorly understood. The current study investigates whether changes in sensory noise explain the effect of attention on multisensory integration, and whether attentional modulations of multisensory integration occur via modality-specific mechanisms. A task based on the McGurk illusion was used to measure multisensory integration while attention was manipulated via a concurrent auditory or visual task. Sensory noise was measured within each modality based on variability in unisensory performance and was used to predict attentional changes to McGurk perception. Consistent with previous studies, reports of the McGurk illusion decreased when accompanied by a secondary task; however, this effect was stronger for the secondary visual (as opposed to auditory) task. While auditory noise was not influenced by either secondary task, visual noise increased specifically with the addition of the secondary visual task. Interestingly, visual noise accounted for significant variability in attentional disruptions to the McGurk illusion. Overall, these results strongly suggest that sensory noise may underlie attentional alterations to multisensory integration in a modality-specific manner. Further studies are needed to determine whether this finding generalizes to other types of multisensory integration and attentional manipulations. This line of research may inform future studies of attentional alterations to sensory processing in neurological disorders such as schizophrenia, autism, and ADHD.
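The analysis logic above — estimate per-modality sensory noise from variability in unisensory performance, then relate it to the attention-driven drop in illusion reports — can be sketched as follows. All participant data, function names, and numbers are illustrative placeholders, not the study's:

```python
# Hedged sketch: per-modality "sensory noise" as spread of unisensory
# accuracy, correlated with the dual-task drop in McGurk-illusion reports.
from statistics import pstdev, mean

def sensory_noise(unisensory_accuracy):
    """Noise proxy: spread of accuracy across unisensory blocks."""
    return pstdev(unisensory_accuracy)

def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

# Per-participant visual noise vs. drop in illusion rate under the
# secondary visual task (made-up values for five participants).
visual_noise = [0.05, 0.12, 0.08, 0.20, 0.15]
illusion_drop = [0.10, 0.30, 0.12, 0.38, 0.25]
print(round(pearson_r(visual_noise, illusion_drop), 2))
```

A strong positive correlation in such data would mirror the abstract's finding that visual noise accounts for attentional disruptions to the illusion.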

  • Article type: Journal Article
    Autistic children (AC) show less audiovisual speech integration in the McGurk task, which correlates with their reduced mouth-looking time. The present study examined whether audiovisual speech integration in the McGurk task could be increased in AC by increasing their mouth-looking time. We recruited 4- to 8-year-old AC and non-autistic children (NAC). In two experiments, we manipulated children's mouth-looking time, measured their audiovisual speech integration by employing the McGurk effect paradigm, and tracked their eye movements. In Experiment 1, we blurred the eyes in McGurk stimuli and compared children's performance in blurred-eyes and clear-eyes conditions. In Experiment 2, we cued children's attention to either the mouth or the eyes of McGurk stimuli, or asked them to view the McGurk stimuli freely. We found that both blurring the speaker's eyes and cuing to the speaker's mouth increased mouth-looking time and increased audiovisual speech integration in the McGurk task in AC. In addition, we found that blurring the speaker's eyes and cuing to the speaker's mouth also increased mouth-looking time in NAC, but neither manipulation increased their audiovisual speech integration in the McGurk task. Our findings suggest that audiovisual speech integration in the McGurk task in AC can be increased by increasing their attention to the mouth. They contribute to a deeper understanding of the relations between face attention and audiovisual speech integration, and provide insights for the development of professional supports to increase audiovisual speech integration in AC. HIGHLIGHTS: The present study examined whether audiovisual speech integration in the McGurk task in AC could be increased by increasing their attention to the speaker's mouth. Blurring the speaker's eyes increased mouth-looking time and audiovisual speech integration in the McGurk task in AC. Cuing to the speaker's mouth also increased mouth-looking time and audiovisual speech integration in the McGurk task in AC. Audiovisual speech integration in the McGurk task in AC could be increased by increasing attention to the speaker's mouth.

  • Article type: Journal Article
    Visual cues are especially vital for hearing impaired individuals such as cochlear implant (CI) users to understand speech in noise. Functional Near Infrared Spectroscopy (fNIRS) is a light-based imaging technology that is ideally suited for measuring the brain activity of CI users due to its compatibility with both the ferromagnetic and electrical components of these implants. In a preliminary step toward better elucidating the behavioral and neural correlates of audiovisual (AV) speech integration in CI users, we designed a speech-in-noise task and measured the extent to which 24 normal hearing individuals could integrate the audio of spoken monosyllabic words with the corresponding visual signals of a female speaker. In our behavioral task, we found that audiovisual pairings provided average improvements of 103% and 197% over auditory-alone listening conditions in -6 and -9 dB signal-to-noise ratios consisting of multi-talker background noise. In an fNIRS task using similar stimuli, we measured activity during auditory-only listening, visual-only lipreading, and AV listening conditions. We identified cortical activity in all three conditions over regions of middle and superior temporal cortex typically associated with speech processing and audiovisual integration. In addition, three channels active during the lipreading condition showed uncorrected correlations associated with behavioral measures of audiovisual gain as well as with the McGurk effect. Further work focusing primarily on the regions of interest identified in this study could test how AV speech integration may differ for CI users who rely on this mechanism for daily communication.
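The behavioral audiovisual gain reported above is a percentage improvement of AV accuracy over auditory-alone accuracy; a minimal sketch of that arithmetic follows (the study's exact scoring may differ, and the example accuracies are made up to land near the reported ~103% figure):

```python
# Hedged sketch: percent audiovisual (AV) gain over auditory-alone listening.

def av_gain_percent(av_correct, audio_only_correct):
    """Percent improvement of audiovisual over auditory-alone accuracy."""
    if audio_only_correct <= 0:
        raise ValueError("auditory-alone accuracy must be positive")
    return 100.0 * (av_correct - audio_only_correct) / audio_only_correct

# e.g. accuracy rising from 0.30 (audio-only) to 0.61 (AV) at -6 dB SNR
# gives roughly the ~103% average gain the abstract reports at that SNR.
print(round(av_gain_percent(0.61, 0.30), 1))  # 103.3
```

Note that the same absolute accuracy increase yields a larger percentage gain at harder SNRs, which is consistent with the larger (197%) gain reported at -9 dB.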

  • Article type: Journal Article
    A weaker McGurk effect is observed in individuals with autism spectrum disorder (ASD); this weaker integration is considered key to understanding how low-order atypical processing leads to their maladaptive social behaviors. However, the mechanism behind this weaker McGurk effect has not been fully understood. Here, we investigated (1) whether the weaker McGurk effect in individuals with high autistic traits is caused by poor lip-reading ability and (2) whether the hearing environment modifies the weaker McGurk effect in individuals with high autistic traits. To test these questions, we conducted two analogue studies among university students, based on the dimensional model of ASD. Results showed that individuals with high autistic traits have intact lip-reading ability as well as intact abilities to listen to and recognize audiovisually congruent speech (Experiment 1). Furthermore, the weaker McGurk effect in individuals with high autistic traits, which appeared under the no-noise condition, disappeared under the high-noise condition (Experiments 1 and 2). Our findings suggest that high background noise might shift weight onto the visual cue, thereby increasing the strength of the McGurk effect among individuals with high autistic traits.