language comprehension

  • Article Type: Journal Article
    It is currently accepted that sign languages and spoken languages have significant processing commonalities. The evidence supporting this often merely investigates frontotemporal pathways, perisylvian language areas, hemispheric lateralization, and event-related potentials in typical settings. However, recent evidence has explored beyond this and uncovered numerous modality-dependent processing differences between sign languages and spoken languages by accounting for confounds that previously invalidated processing comparisons and by delving into the specific conditions in which they arise. However, these processing differences are often shallowly dismissed as unspecific to language.
    This review examined recent neuroscientific evidence for processing differences between sign and spoken language modalities and the arguments against these differences' importance. Key distinctions exist in the topography of the left anterior negativity (LAN) and with modulations of event-related potential (ERP) components like the N400. There is also differential activation of typical spoken language processing areas, such as the conditional role of the temporal areas in sign language (SL) processing. Importantly, sign language processing uniquely recruits parietal areas for processing phonology and syntax and requires the mapping of spatial information to internal representations. Additionally, modality-specific feedback mechanisms distinctively involve proprioceptive post-output monitoring in sign languages, contrary to spoken languages' auditory and visual feedback mechanisms. The only study to find ERP differences post-production revealed earlier lexical access in sign than spoken languages. Themes of temporality, the validity of an analogous anatomical mechanisms viewpoint, and the comprehensiveness of current language models were also discussed to suggest improvements for future research.
    Current neuroscience evidence suggests various ways in which processing differs between sign and spoken language modalities that extend beyond simple differences between languages. Consideration and further exploration of these differences will be integral in developing a more comprehensive view of language in the brain.

  • Article Type: Journal Article
    According to traditional linguistic theories, the construction of complex meanings relies firmly on syntactic structure-building operations. Recently, however, new models have been proposed in which semantics is viewed as being partly autonomous from syntax. In this paper, we discuss some of the developmental implications of syntax-based and autonomous models of semantics. We review event-related brain potential (ERP) studies on semantic processing in infants and toddlers, focusing on experiments reporting modulations of N400 amplitudes using visual or auditory stimuli and different temporal structures of trials. Our review suggests that infants can relate or integrate semantic information from temporally overlapping stimuli across modalities by 6 months of age. The ability to relate or integrate semantic information over time, within and across modalities, emerges by 9 months. The capacity to relate or integrate information from spoken words in sequences and sentences appears by 18 months. We also review behavioral and ERP studies showing that grammatical and syntactic processing skills develop only later, between 18 and 32 months. These results provide preliminary evidence for the availability of some semantic processes prior to the full developmental emergence of syntax: non-syntactic meaning-building operations are available to infants, albeit in restricted ways, months before the abstract machinery of grammar is in place. We discuss this hypothesis in light of research on early language acquisition and human brain development.

  • Article Type: Journal Article
    Facial expressions constitute a rich source of non-verbal cues in face-to-face communication. They provide interlocutors with resources to express and interpret verbal messages, which may affect their cognitive and emotional processing. In contrast, computer-mediated communication (CMC), particularly text-based communication, is limited to the use of symbols to convey a message, where facial expressions cannot be transmitted naturally. In this scenario, people use emoticons as paralinguistic cues to convey emotional meaning. Research has shown that emoticons contribute to a greater social presence as a result of the enrichment of text-based communication channels. Additionally, emoticons constitute a valuable resource for language comprehension by providing expressivity to text messages. The latter findings have been supported by studies in neuroscience showing that particular brain regions involved in emotional processing are also activated when people are exposed to emoticons. To reach an integrated understanding of the influence of emoticons in human communication on both socio-cognitive and neural levels, we review the literature on emoticons in three different areas. First, we present relevant literature on emoticons in CMC. Second, we study the influence of emoticons in language comprehension. Finally, we show the incipient research in neuroscience on this topic. This mini review reveals that, while there are plenty of studies on the influence of emoticons in communication from a social psychology perspective, little is known about the neurocognitive basis of the effects of emoticons on communication dynamics.

  • Article Type: Journal Article
    BACKGROUND: The effect of speaker accent on listeners' comprehension has become a key focus of research given the increasing cultural diversity of society and the increased likelihood of an individual encountering a clinician with an unfamiliar accent.
    OBJECTIVE: To review the studies exploring the effect of an unfamiliar accent on language comprehension in typically developing (TD) children and in children with speech and language difficulties. This review provides a methodological analysis of the relevant studies by exploring the challenges facing this field of research and highlighting the current gaps in the literature.
    METHODS: A total of nine studies were identified using a systematic search and organized under studies investigating the effect of speaker accent on language comprehension in (1) TD children and (2) children with speech and/or language difficulties.
    RESULTS: This review synthesizes the evidence that an unfamiliar speaker accent may lead to a breakdown in language comprehension in TD children and in children with speech difficulties. Moreover, it exposes the inconsistencies found in this field of research and highlights the lack of studies investigating the effect of speaker accent in children with language deficits.
    CONCLUSIONS: Overall, research points towards a developmental trend in children's ability to comprehend accent-related variations in speech. Vocabulary size, language exposure, exposure to different accents and adequate processing resources (e.g. attention) seem to play a key role in children's ability to understand unfamiliar accents. This review uncovered some inconsistencies in the literature that highlight the methodological issues that must be considered when conducting research in this field. It explores how such issues may be controlled in order to increase the validity and reliability of future research. Key clinical implications are also discussed.

  • Article Type: Journal Article
    Language switching has been one of the main tasks to investigate language control, a process that restricts bilingual language processing to the target language. In the current review, we discuss the How (i.e., mechanisms) and Where (i.e., locus of these mechanisms) of language control in language switching. As regards the mechanisms of language control, we describe several empirical markers of language switching and their relation to inhibition, as a potentially important mechanism of language control. From this overview it becomes apparent that some, but not all, markers indicate the occurrence of inhibition during language switching and, thus, language control. In a second part, we turn to the potential locus of language control and the role of different processing stages (concept level, lemma level, phonology, orthography, and outside language processing). Previous studies provide evidence for the employment of several of these processing stages during language control so that either a complex control mechanism involving different processing stages and/or multiple loci of language control have to be assumed. Based on the discussed results, several established and new theoretical avenues are considered.

  • Article Type: Journal Article
    Comprehension and/or production of noun phrases and sentences requires the selection of lexical-syntactic attributes of nouns. These lexical-syntactic attributes include grammatical gender (masculine/feminine/neuter), number (singular/plural) and countability (mass/count). While there has been considerable discussion regarding gender and number, relatively little attention has focused on countability. Therefore, this article reviews empirical evidence for lexical-syntactic specification of nouns for countability. This includes evidence from studies of language production and comprehension with normal speakers and case studies which assess impairments of mass/count nouns in people with acquired brain damage. Current theories of language processing are reviewed and found to be lacking specification regarding countability. Subsequently, the theoretical implications of the empirical studies are discussed in the context of frameworks derived from these accounts of language production (Levelt, 1989; Levelt et al., 1999) and comprehension (Taler and Jarema, 2006). The review concludes that there is empirical support for specification of nouns for countability at a lexical-syntactic level.