MeSH: Humans; Stuttering / physiopathology; Male; Female; Adult; Speech Perception / physiology; Young Adult; Phonetics; Cues; Severity of Illness Index; Middle Aged; Acoustic Stimulation / methods; Adolescent; Speech / physiology; Auditory Perception / physiology

Source: DOI:10.1044/2024_JSLHR-24-00107

Abstract:
Purpose: We investigated speech and nonspeech auditory processing of temporal and spectral cues in people who do and do not stutter. We also asked whether self-reported stuttering severity was predicted by performance on the auditory processing measures.
Method: People who stutter (n = 23) and people who do not stutter (n = 28) completed a series of four auditory processing tasks online. The tasks used speech and nonspeech stimuli differing in spectral or temporal cues. We used independent-samples t tests to assess group differences in phonetic categorization slopes, and linear mixed-effects models to test group differences in nonspeech auditory processing and to predict stuttering severity from performance on all of the auditory processing tasks.
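As a rough illustration of the analysis steps named above, here is a minimal Python sketch on synthetic data; it is not the authors' code, and the column names, scales, and model formulas are assumptions:

```python
# Hypothetical sketch of the analysis pipeline described in the Method section.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# One categorization slope and one severity rating per participant
# (23 people who stutter, 28 who do not); values are synthetic.
subjects = pd.DataFrame({
    "subject": range(51),
    "group": ["PWS"] * 23 + ["PWNS"] * 28,
    "slope": np.concatenate([rng.normal(1.0, 0.3, 23),
                             rng.normal(1.3, 0.3, 28)]),
    "severity": rng.uniform(0, 10, 51),
})

# Independent-samples t test on phonetic categorization slopes.
t_stat, p_val = stats.ttest_ind(
    subjects.loc[subjects.group == "PWS", "slope"],
    subjects.loc[subjects.group == "PWNS", "slope"],
)

# Trial-level nonspeech discrimination data: accuracy modeled as a function
# of group, with random intercepts per participant (linear mixed-effects model).
trials = subjects.loc[subjects.index.repeat(40)].reset_index(drop=True)
trials["accuracy"] = rng.binomial(1, 0.8, len(trials)).astype(float)
mixed = smf.mixedlm("accuracy ~ group", trials, groups=trials["subject"]).fit()

# Severity predicted from task performance (the single slope here stands in
# for performance on all four auditory processing tasks).
severity_model = smf.ols("severity ~ slope + C(group)", subjects).fit()

print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
print(mixed.summary())
print(severity_model.summary())
```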
Results: We found statistically significant differences between people who do and do not stutter in phonetic categorization along a continuum differing in a temporal cue and in discrimination of nonspeech stimuli differing in a spectral cue. Performance on the auditory processing measures predicted a significant proportion of the variance in self-reported stuttering severity.
Conclusions: Taken together, these results suggest that people who stutter process both speech and nonspeech auditory information differently than people who do not stutter, and they may point to subtle differences in auditory processing that could contribute to stuttering. We also note that these patterns could be a consequence of listening to one's own speech rather than a cause of production differences.