MeSH: Animals; Rats; Acoustic Stimulation; Models, Neurological; Computational Biology; Cochlea / physiology; Auditory Perception / physiology; Ferrets; Evoked Potentials, Auditory / physiology; Adaptation, Physiological / physiology; Humans; Machine Learning

Source: DOI: 10.1371/journal.pcbi.1012288 (PubMed)

Abstract:
Sounds are temporal stimuli that the auditory nervous system decomposes into numerous elementary components. For instance, a temporal to spectro-temporal transformation modelling the frequency decomposition performed by the cochlea is a widely adopted first processing step in today's computational models of auditory neural responses. Similarly, increments and decrements in sound intensity (i.e., of the raw waveform itself or of its spectral bands) constitute critical features of the neural code, with high behavioural significance. However, despite the growing attention of the scientific community to auditory OFF responses, their relationship with transient ON responses, sustained responses and adaptation remains unclear. In this context, we propose a new general model named AdapTrans, based on a pair of linear filters, that captures both sustained and transient ON and OFF responses within a unifying and easy-to-extend framework. We demonstrate that filtering audio cochleagrams with AdapTrans accurately renders known properties of neural responses measured in different mammalian species, such as the dependence of OFF responses on the stimulus fall time and on the preceding sound duration. Furthermore, by integrating our framework into gold-standard and state-of-the-art machine learning models that predict neural responses from audio stimuli, following supervised training on a large compilation of electrophysiology datasets (ready-to-deploy PyTorch models and pre-processed datasets are shared publicly), we show that AdapTrans systematically improves the prediction accuracy of estimated responses within different cortical areas of the rat and ferret auditory brain. Together, these results motivate the use of our framework by computational and systems neuroscientists wishing to increase the plausibility and performance of their models of audition.
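The abstract describes AdapTrans as a pair of linear filters applied to cochleagram channels, yielding sustained plus transient ON and OFF representations that can feed downstream response-prediction models. The sketch below is only an illustration of that idea in PyTorch; the filter shapes, time constants, rectification choices and function names here are placeholder assumptions, not the authors' released implementation (see the paper and the publicly shared PyTorch code for the actual model).

```python
# Minimal, illustrative sketch (not the authors' code): a pair of causal temporal
# filters per frequency channel of a cochleagram, producing a sustained channel
# and half-wave-rectified ON / OFF transient channels. All kernel choices and
# time constants below are assumptions made for demonstration only.
import torch
import torch.nn.functional as F

def adaptrans_like_frontend(cochleagram: torch.Tensor,
                            tau_sustained: int = 20,
                            tau_transient: int = 5) -> torch.Tensor:
    """cochleagram: (batch, n_freq, n_time) non-negative energy envelopes.
    Returns (batch, 3 * n_freq, n_time): sustained, ON-transient, OFF-transient."""
    b, f, t = cochleagram.shape
    x = cochleagram.reshape(b * f, 1, t)  # treat each frequency band as a 1-D signal

    # Sustained filter: a slow causal low-pass (moving-average) kernel.
    k_sus = torch.ones(1, 1, tau_sustained) / tau_sustained
    sustained = F.conv1d(F.pad(x, (tau_sustained - 1, 0)), k_sus)

    # Transient filter: fast low-pass minus slow low-pass (approximate derivative).
    k_fast = torch.ones(1, 1, tau_transient) / tau_transient
    fast = F.conv1d(F.pad(x, (tau_transient - 1, 0)), k_fast)
    transient = fast - sustained

    # Half-wave rectification splits the transient signal into ON (intensity
    # increments) and OFF (intensity decrements) channels.
    on = torch.relu(transient)
    off = torch.relu(-transient)

    out = torch.cat([sustained, on, off], dim=1)          # (b*f, 3, t)
    return out.reshape(b, f, 3, t).transpose(1, 2).reshape(b, 3 * f, t)

if __name__ == "__main__":
    coch = torch.rand(1, 32, 500)                          # 32 bands, 500 frames
    feats = adaptrans_like_frontend(coch)
    print(feats.shape)                                     # torch.Size([1, 96, 500])
```

The tripled feature map could then be passed to any encoding model (linear-nonlinear or neural network) that predicts neural responses from the stimulus representation, which is how the abstract describes integrating the front end into existing prediction pipelines.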