Speech perception

  • Article type: Journal Article
    Greek uses H*, L+H*, and H*+L, all followed by L-L% edge tones, as nuclear pitch accents in statements. A previous analysis demonstrated that these accents are distinguished by F0 scaling and contour shape. This study extends the earlier investigation by exploring additional cues to the pitch-accent distinction, namely voice quality, amplitude, and duration, and by investigating individual variability in the selection of both F0 and non-F0 cues. Bayesian multivariate analysis and hierarchical clustering demonstrate that the accents are distinguished not only by F0 but also by the additional cues at the group level, with individual variability in cue selection.
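The individual-variability analysis can be illustrated with a small hierarchical-clustering sketch: speakers are grouped by their reliance on F0 versus non-F0 cues. The cue-weight matrix, speaker profiles, and Ward linkage below are illustrative assumptions, not the study's data or model:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical per-speaker cue weights: rows = speakers, columns =
# reliance on F0, duration, amplitude, and voice quality.
cue_weights = np.array([
    [0.9, 0.1, 0.1, 0.0],   # F0-dominant speakers
    [0.8, 0.2, 0.0, 0.1],
    [0.2, 0.7, 0.6, 0.1],   # duration/amplitude-oriented speakers
    [0.1, 0.8, 0.5, 0.2],
])

# Ward linkage on Euclidean distances between speakers' cue profiles.
Z = linkage(pdist(cue_weights), method="ward")

# Cut the dendrogram into two groups of cue-selection strategies.
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

With these toy profiles, the first two speakers form one cluster and the last two another, mirroring the kind of group-level cue-selection differences the abstract describes.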

  • Article type: Journal Article
    Task-related studies have consistently reported that listening to speech sounds activates the temporal and prefrontal regions of the brain. However, it is not well understood how the functional organization of auditory and language networks differs between processing speech sounds and the resting state. Knowledge of language network organization in typically developing infants could serve as an important biomarker for understanding the network-level disruptions expected in infants with hearing impairment. We hypothesized that topological differences of language networks can be characterized using functional connectivity measures in two experimental conditions: (1) complete silence (resting) and (2) in response to repetitive continuous speech sounds (steady). Thirty normal-hearing infants (14 males and 16 females, age: 7.8 ± 4.8 months) were recruited in this study. Brain activity was recorded from bilateral temporal and prefrontal regions associated with speech and language processing in two experimental conditions: resting and steady states. Topological differences of the functional language networks were characterized using graph-theoretical analysis. The normalized global efficiency and clustering coefficient were used as measures of functional integration and segregation, respectively. We found that, overall, the language networks of infants demonstrate an economic small-world organization in both resting and steady states. Moreover, language networks exhibited significantly higher functional integration and significantly lower functional segregation in the resting state compared to the steady state. A secondary analysis of developmental effects in infants aged 6 months or younger versus older than 6 months revealed that such topological differences in functional integration and segregation across resting and steady states can be reliably detected after the first 6 months of life. The higher functional integration observed in the resting state suggests that the language networks of infants facilitate more efficient parallel information processing across distributed language regions in the absence of speech stimuli. Moreover, the higher functional segregation in the steady state indicates that speech information processing occurs within densely interconnected specialized regions of the language network.
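The two graph measures named above (global efficiency for integration, clustering coefficient for segregation) can be sketched in plain Python on a toy network. The 8-node ring-with-shortcut graph is an illustrative stand-in for a recorded language network, and the study's normalization against random null networks is omitted:

```python
from collections import deque

def global_efficiency(adj):
    """Mean inverse shortest-path length over all node pairs (BFS)."""
    n = len(adj)
    total = 0.0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != s)
    return total / (n * (n - 1))

def average_clustering(adj):
    """Mean fraction of closed triangles around each node."""
    coeffs = []
    for u in range(len(adj)):
        nbrs = adj[u]
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for v in nbrs for w in nbrs if v < w and w in adj[v])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

# Toy "language network": 8 channels in a ring lattice plus a shortcut,
# standing in for temporal/prefrontal recording sites (hypothetical).
adj = {i: set() for i in range(8)}
def connect(a, b):
    adj[a].add(b); adj[b].add(a)
for i in range(8):
    connect(i, (i + 1) % 8)
    connect(i, (i + 2) % 8)  # lattice edges -> high clustering
connect(0, 4)                # shortcut -> shorter paths, integration

print(round(global_efficiency(adj), 3), round(average_clustering(adj), 3))
```

High clustering with short paths is the "small-world" signature the abstract refers to; the study additionally normalizes both measures by values from matched random graphs.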

  • Article type: Journal Article
    By 2050, 1 in 4 people worldwide will be living with hearing impairment. We propose a digital Speech Hearing Screener (dSHS) using short nonsense-word recognition to measure speech-hearing ability. The importance of hearing screening is increasing due to the anticipated rise in individuals with hearing impairment globally. We compare dSHS outcomes with standardized pure-tone averages (PTA) and speech-recognition thresholds (SRT). Fifty participants (aged 55 or older) underwent pure-tone and speech-recognition thresholding. One-way ANOVA was used to compare differences between hearing-impaired and unimpaired groups, as classified by the dSHS, with clinical thresholds of 35 dB for moderate hearing impairment and 50 dB for severe hearing impairment. dSHS results correlated significantly with PTAs/SRTs. ANOVA results revealed that the dSHS differed significantly (F(1,47) = 38.1, p < 0.001) between hearing-impaired and unimpaired groups. Classification analysis using the 35 dB threshold yielded an accuracy of 85.7% for PTA-based impairment and 81.6% for SRT-based impairment. At the 50 dB threshold, dSHS classification accuracy was 79.6% for PTA-based impairment (negative predictive value (NPV): 93%) and 83.7% (NPV: 100%) for SRT-based impairment. The dSHS successfully differentiates between hearing-impaired and unimpaired individuals in under 3 minutes. This hearing screener offers a time-saving, in-clinic hearing screening to streamline the triage of those with likely hearing impairment to the appropriate follow-up assessment, thereby improving the quality of services. Future work will investigate the ability of the dSHS to help rule out hearing impairment in clinical and research applications.
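The screening metrics reported above (classification accuracy and negative predictive value) follow directly from a 2x2 confusion table. The counts in this sketch are made up for illustration and are not the study's data:

```python
# Accuracy and NPV for a binary screener scored against a reference
# test (e.g. PTA- or SRT-based impairment classification).
def screening_metrics(tp, fp, tn, fn):
    """Return (accuracy, negative predictive value)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return accuracy, npv

# Hypothetical outcome: 49 listeners screened against a PTA reference.
acc, npv = screening_metrics(tp=20, fp=3, tn=24, fn=2)
print(f"accuracy={acc:.1%}, NPV={npv:.1%}")
```

A high NPV is what matters for "ruling out" impairment: when the screener says a listener is unimpaired, NPV is the probability that the reference test agrees.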

  • Article type: Journal Article
    Across the animal kingdom, neural responses in the auditory cortex are suppressed during vocalization, and humans are no exception. A common hypothesis is that suppression increases sensitivity to auditory feedback, enabling the detection of vocalization errors. This hypothesis has previously been confirmed in non-human primates; however, a direct link between auditory suppression and sensitivity in human speech monitoring remains elusive. To address this issue, we obtained intracranial electroencephalography (iEEG) recordings from 35 neurosurgical participants during speech production. We first characterized the detailed topography of auditory suppression, which varied across the superior temporal gyrus (STG). Next, we performed a delayed auditory feedback (DAF) task to determine whether the suppressed sites were also sensitive to auditory feedback alterations. Indeed, overlapping sites showed enhanced responses to feedback, indicating sensitivity. Importantly, there was a strong correlation between the degree of auditory suppression and feedback sensitivity, suggesting that suppression might be a key mechanism underlying speech monitoring. Further, we found that when participants produced speech with simultaneous auditory feedback, the posterior STG was selectively activated if participants were engaged in a DAF paradigm, suggesting that increased attentional load can modulate auditory feedback sensitivity.
    The brain lowers its response to inputs we generate ourselves, such as moving or speaking. Essentially, our brain ‘knows’ what will happen next when we carry out these actions, and therefore does not need to react as strongly as it would to unexpected events. This is why we cannot tickle ourselves, and why the brain does not react as much to our own voice as it does to someone else’s. Quieting down the brain’s response also allows us to focus on things that are new or important without getting distracted by our own movements or sounds. Studies in non-human primates showed that neurons in the auditory cortex (the region of the brain responsible for processing sound) displayed suppressed levels of activity when the animals made sounds. Interestingly, when the primates heard an altered version of their own voice, many of these same neurons became more active. But it was unclear whether this also happens in humans. To investigate, Ozker et al. used a technique called electrocorticography to record neural activity in different regions of the human brain while participants spoke. The results showed that most areas of the brain involved in auditory processing showed suppressed activity when individuals were speaking. However, when people heard a version of their own voice altered with an unexpected delay, those same areas displayed increased activity. In addition, Ozker et al. found that the higher the level of suppression in the auditory cortex, the more sensitive these areas were to changes in a person’s speech. These findings suggest that suppressing the brain’s response to self-generated speech may help in detecting errors during speech production. Speech deficits are common in various neurological disorders, such as stuttering, Parkinson’s disease, and aphasia. Ozker et al. hypothesize that these deficits may arise because individuals fail to suppress activity in auditory regions of the brain, leading to difficulty in detecting and correcting errors in their own speech. However, further experiments are needed to test this theory.
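The suppression-versus-sensitivity relationship described above can be illustrated with a toy suppression index: the normalized difference between a site's response while passively listening and while speaking. The index definition and site values below are illustrative assumptions, not the study's exact measures:

```python
# Toy speaking-induced suppression index for cortical recording sites.
# Positive values mean the site responds less during speaking than
# during listening (i.e. it is suppressed).
def suppression_index(listen_response, speak_response):
    return (listen_response - speak_response) / (listen_response + speak_response)

# Hypothetical mean responses (arbitrary units) for three STG sites:
# (response while listening, response while speaking).
sites = {"anterior": (1.0, 0.4), "middle": (0.9, 0.7), "posterior": (0.8, 0.8)}
for name, (listen, speak) in sites.items():
    print(name, round(suppression_index(listen, speak), 3))
```

In the study's framing, sites with a larger index would also be expected to show larger responses to delayed feedback; an index of zero means no suppression at all.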

  • Article type: Journal Article
    A test is proposed to characterize the performance of speech recognition systems. The QuickSIN test is used by audiologists to measure the ability of humans to recognize continuous speech in noise. This test yields the signal-to-noise ratio at which individuals can correctly recognize 50% of the keywords in low-context sentences. It is argued that such a metric for automatic speech recognizers will ground the performance of automatic speech-in-noise recognizers in human abilities. Here, it is demonstrated that the performance of modern recognizers, built using millions of hours of unsupervised training data, ranges from normal to mildly impaired in noise compared to human participants.
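The 50% point described above can be sketched as the SNR at which a psychometric function crosses 0.5. Note that QuickSIN itself derives its score from keyword counts at fixed SNR steps rather than by interpolation; this is a generic sketch with made-up scores:

```python
# Linearly interpolate the SNR (dB) at which keyword accuracy
# crosses 50% -- a simple stand-in for an SNR-50 speech reception
# threshold, applicable to humans or ASR systems alike.
def srt50(snrs_db, proportion_correct):
    pairs = sorted(zip(snrs_db, proportion_correct))
    for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
        if y0 <= 0.5 <= y1 or y1 <= 0.5 <= y0:
            if y1 == y0:
                return x0
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("performance never crosses 50%")

snrs = [0, 5, 10, 15, 20, 25]                  # dB SNR test points
scores = [0.05, 0.20, 0.45, 0.70, 0.90, 1.0]   # proportion of keywords correct
print(round(srt50(snrs, scores), 2))  # 11.0
```

A recognizer with a higher SRT-50 than normal-hearing listeners would land in the "mildly impaired" range the abstract mentions.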

  • Article type: Journal Article
    A speech intelligibility (SI) prediction model is proposed that includes an auditory preprocessing component based on the physiological anatomy and activity of the human ear, a hierarchical spiking neural network, and a decision back-end based on correlation analysis. The auditory preprocessing component effectively captures advanced physiological details of the auditory system, such as retrograde traveling waves, longitudinal coupling, and cochlear nonlinearity. The ability of the model to predict data from normal-hearing listeners under various additive noise conditions was evaluated, and the predictions closely matched the experimental test data under all conditions. Furthermore, we developed a lumped-mass model of a McGee stainless-steel piston coupled to the middle ear to study the recovery of individuals with otosclerosis. We show that the proposed SI model accurately simulates the effect of middle-ear intervention on SI. Consequently, the model establishes a model-based relationship between objective measures of human ear damage, such as distortion product otoacoustic emissions, and speech perception. Moreover, the SI model can serve as a robust tool for optimizing parameters and for preoperative assessment of artificial stimuli, providing a valuable reference for clinical treatment of conductive hearing loss.

  • Article type: Systematic Review
    In everyday acoustic environments, reverberation alters the speech signal received at the ears. Normal-hearing listeners are robust to these distortions, quickly recalibrating to achieve accurate speech perception. Over the past two decades, multiple studies have investigated the various adaptation mechanisms that listeners use to mitigate the negative impacts of reverberation and improve speech intelligibility. Following the PRISMA guidelines, we performed a systematic review of these studies with the aim of summarizing existing research, identifying open questions, and proposing future directions. Two researchers independently assessed a total of 661 studies, ultimately including 23 in the review. Our results showed that adaptation to reverberant speech is robust across diverse environments, experimental setups, speech units, and tasks, in noise-masked or unmasked conditions. The time course of adaptation is rapid, sometimes occurring in less than 1 s, but this can vary depending on the reverberation and noise levels of the acoustic environment. Adaptation is stronger in moderately reverberant rooms and minimal in rooms with very intense reverberation. While the mechanisms underlying the recalibration are largely unknown, adaptation to changes in amplitude modulation related to the direct-to-reverberant ratio appears to be the predominant candidate. However, additional factors need to be explored to provide a unified theory for the effect and its applications.
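The direct-to-reverberant ratio (DRR) the review identifies as a candidate driver of adaptation can be sketched from a room impulse response: the energy in a short direct-path window divided by the energy in the reverberant tail, in dB. The impulse response and the 2.5 ms window below are illustrative choices, not values from the reviewed studies:

```python
import math

# DRR (dB) of a room impulse response: energy up to shortly after the
# direct-path peak vs. energy in the remaining reverberant tail.
def drr_db(ir, fs, direct_ms=2.5):
    peak = max(range(len(ir)), key=lambda i: abs(ir[i]))
    split = peak + int(fs * direct_ms / 1000)
    direct = sum(x * x for x in ir[:split])
    reverb = sum(x * x for x in ir[split:])
    return 10 * math.log10(direct / reverb)

fs = 16000
# Toy impulse response: a strong direct spike plus an exponentially
# decaying reverberant tail.
ir = [1.0] + [0.1 * math.exp(-i / 200.0) for i in range(2000)]
print(round(drr_db(ir, fs), 1))
```

Strengthening the direct spike (e.g. moving the listener closer to the source) raises the DRR, which in turn preserves more of the speech's amplitude-modulation depth.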

  • Article type: Journal Article
    BACKGROUND: Speech sounds are processed in the human brain through intricate and interconnected cortical and subcortical structures. Two neural signatures, one largely from cortical sources (the mismatch response, MMR) and one largely from subcortical sources (the frequency-following response, FFR), are critical for assessing speech processing, as both show sensitivity to high-level linguistic information. However, there are distinct prerequisites for recording the MMR and FFR, making them difficult to acquire simultaneously.
    NEW METHOD: Using a new paradigm, our study aims to concurrently capture both signals and test them against the following criteria: (1) replicating the effect that the MMR to a native speech contrast significantly differs from the MMR to a nonnative speech contrast, and (2) demonstrating that FFRs to three speech sounds can be reliably differentiated.
    RESULTS: Using EEG from 18 adults, we observed a decoding accuracy of 72.2% between the MMRs to the native vs. nonnative speech contrasts. A significantly larger native MMR was observed in the expected time window. Similarly, a significant decoding accuracy of 79.6% was found for the FFRs. A high stimulus-to-response cross-correlation with a 9 ms lag suggested that the FFR closely tracks speech sounds.
    CONCLUSIONS: These findings demonstrate that our paradigm reliably captures the MMR and FFR concurrently, replicating and extending past research with far fewer trials (MMR: 50 trials; FFR: 200 trials) and a shorter experiment time (12 minutes). This study paves the way to understanding cortical-subcortical interactions in speech and language processing, with the ultimate goal of developing an assessment tool specific to early development.
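The stimulus-to-response cross-correlation reported in the results can be sketched with synthetic signals: slide the response against the stimulus and take the lag with the highest correlation. The sampling rate, toy stimulus, and simulated 9 ms delay below are illustrative, not the study's recordings:

```python
import numpy as np

fs = 1000                        # Hz (illustrative sampling rate)
t = np.arange(0, 0.5, 1 / fs)
stimulus = np.sin(2 * np.pi * 10 * t)   # toy periodic stimulus
lag_samples = 9                          # simulate a 9 ms neural delay
response = np.roll(stimulus, lag_samples)
response[:lag_samples] = 0.0             # no response before the delay

# Correlate the response at each candidate lag with the stimulus and
# pick the lag that maximizes the normalized correlation.
max_lag = 30
corrs = [np.corrcoef(stimulus[: len(t) - k], response[k:])[0, 1]
         for k in range(max_lag + 1)]
best_lag_ms = int(np.argmax(corrs)) * 1000 / fs
print(best_lag_ms)  # 9.0
```

The recovered lag corresponds to neural transmission delay; a high peak correlation at a short lag is what indicates that the FFR faithfully tracks the speech stimulus.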

  • Article type: Journal Article
    Past studies have explored formant centering, a corrective behavior of convergence over the duration of an utterance toward the formants of a putative target vowel. In this study, we establish the existence of a similar centering phenomenon for pitch in healthy elderly controls and examine how this corrective behavior is altered in Alzheimer's disease (AD). We found that the pitch-centering response in healthy elderly controls was similar when correcting pitch errors below and above the target (median) pitch. In contrast, patients with AD showed an asymmetry, with a larger correction for pitch errors below the target phonation than above it. These findings indicate that pitch centering is a robust compensatory behavior in human speech. Our findings also point to potential impacts on pitch centering from the neurodegenerative processes that affect speech in AD.
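One way to quantify the centering described above is to compare how far pitch sits from the speaker's median at utterance onset versus mid-utterance. The windowing scheme (thirds of the utterance) and the pitch values below are illustrative simplifications, not the study's exact procedure:

```python
import math
import statistics

def centering_cents(pitch_track_hz, target_hz):
    """Reduction (in cents) of the absolute deviation from the target
    pitch between the initial and middle thirds of an utterance;
    positive values indicate convergence toward the target."""
    n = len(pitch_track_hz)
    cents = [1200 * math.log2(f / target_hz) for f in pitch_track_hz]
    initial = statistics.mean(abs(c) for c in cents[: n // 3])
    middle = statistics.mean(abs(c) for c in cents[n // 3 : 2 * n // 3])
    return initial - middle

target = 200.0  # Hz: the speaker's median pitch (hypothetical)
# A trial that starts sharp and drifts toward the target pitch.
track = [220, 215, 210, 205, 202, 201, 200, 200, 200]
print(round(centering_cents(track, target), 1))
```

Computing this measure separately for trials starting above versus below the median pitch would expose the kind of asymmetry the abstract reports in AD.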

  • Article type: Journal Article
    Real-world evidence is increasingly used to support clinical and regulatory decisions globally and may be a useful tool to study the unique needs of cochlear implant users in China. The ability to recognize and understand speech in noise is critical for cochlear implant users; however, this remains a challenge in everyday settings with fluctuating competing noise levels. The Cochlear™ Sound Processor, Nucleus® 7 (CP1000), includes Forward Focus, a spatial noise algorithm aimed at improving speech-in-noise performance, and Made for iPhone/iPod/iPad functionality. We conducted a prospective, single-center, open-label, within-participant, real-world evidence investigation in participants with cochlear implants. The primary objective of this study, conducted in China, was to compare speech perception in spatially separated dynamic noise with the Nucleus 7 to the recipients' current older Cochlear sound processor, including the Freedom and Nucleus 5 sound processors. A follow-up study monitored participants from the initial study up to 12 months after the fitting of their Nucleus 7 and investigated hearing ability, satisfaction, and usability of the device via a questionnaire. Forty participants were included in the initial study (age range 3 to 49 years), and 29 continued to the follow-up study (age range 5 to 28 years). The participants were heterogeneous in terms of age, cochlear implant experience, and duration of hearing loss. The Nucleus 7 significantly improved participants' speech recognition performance in noise by 7.54 dB when compared with their current older sound processor (p < 0.0001). Overall satisfaction with the Nucleus 7 was 72%. Satisfaction in different hearing contexts ranged from 93.1% for understanding a 1:1 conversation in a quiet setting and 62.1% for understanding on the phone to 34.5% for hearing in complex noisy situations. The study demonstrated the benefits of the Nucleus 7 sound processor across different hearing environments in a Chinese population and showed improved hearing ability, usability, and satisfaction in real-world everyday environments.
