Keywords: backpropagation; brain-inspired learning; chaos; local minima; spiking neural networks; surrogate gradient

Source: DOI:10.1093/nsr/nwae037 | PDF (PubMed)

Abstract:
Spiking neural networks (SNNs) offer superior energy efficiency because their spiking signal transmission mimics biological nervous systems, but they are difficult to train effectively. Although surrogate-gradient-based methods offer a workable solution, trained SNNs frequently fall into local minima because training remains primarily driven by gradient dynamics. Inspired by the chaotic dynamics observed in animal brain learning, we propose a chaotic spiking backpropagation (CSBP) method that introduces an additional loss term to generate brain-like chaotic dynamics, and further exploits their ergodicity and pseudo-randomness to make SNN learning effective and robust. From a computational viewpoint, we find that CSBP significantly outperforms current state-of-the-art methods in both accuracy and robustness on neuromorphic data sets (e.g. DVS-CIFAR10 and DVS-Gesture) as well as large-scale static data sets (e.g. CIFAR100 and ImageNet). From a theoretical viewpoint, we show that the learning process of CSBP is initially chaotic, then undergoes various bifurcations, and eventually converges to gradient dynamics, consistent with observations of animal brain activity. Our work provides a superior core tool for direct SNN training and offers new insights into the learning process of the biological brain.
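The progression the abstract describes, from chaotic wandering through bifurcations down to plain gradient dynamics, can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration: the double-well loss, the logistic-map perturbation, and the exponential annealing schedule are not taken from the paper, whose actual method injects chaos through an added loss term during SNN backpropagation rather than through an external map.

```python
# Toy sketch of annealed chaotic search (NOT the authors' CSBP formulation):
# gradient descent on a non-convex scalar loss, plus a chaotic perturbation
# driven by a logistic map. Early on, the strong perturbation lets the state
# wander ergodically across basins; as the strength z is annealed toward
# zero, the update reduces to plain gradient descent.

def loss(w):
    # Double-well loss: global minimum near w ≈ -1.05, shallower local
    # minimum near w ≈ 0.95 (illustrative only).
    return 0.25 * w**4 - 0.5 * w**2 + 0.1 * w

def grad(w):
    # Analytic derivative of the loss above.
    return w**3 - w + 0.1

def chaotic_descent(w0, lr=0.05, z0=1.0, beta=0.99, steps=1000, x0=0.7):
    w, z, x = w0, z0, x0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)               # logistic map in its chaotic regime
        w = w - lr * grad(w) + z * (x - 0.5)  # gradient step + chaotic kick
        z *= beta                             # anneal the chaotic term away
    return w

# Start in the shallow local minimum; pure gradient descent would stay there,
# while the annealed chaotic dynamics can escape toward a deeper basin.
w_final = chaotic_descent(w0=0.95)
```

Because the logistic-map trajectory is bounded, the perturbation never exceeds z/2, and once z has decayed the remaining iterations are ordinary gradient descent, so the run always settles at a stationary point of the loss.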