MeSH: Neural Networks, Computer; Animals; Models, Neurological; Neurons / physiology; Learning / physiology; Algorithms; Nerve Net / physiology

Source: DOI:10.1038/s41593-024-01668-6   PDF (PubMed)

Abstract:
Flexible computation is a hallmark of intelligent behavior. However, little is known about how neural networks contextually reconfigure for different computations. In the present work, we identified an algorithmic neural substrate for modular computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses revealed learned computational strategies mirroring the modular subtask structure of the training task set. Dynamical motifs, which are recurring patterns of neural activity that implement specific computations through dynamics, such as attractors, decision boundaries and rotations, were reused across tasks. For example, tasks requiring memory of a continuous circular variable repurposed the same ring attractor. We showed that dynamical motifs were implemented by clusters of units when the unit activation function was restricted to be positive. Cluster lesions caused modular performance deficits. Motifs were reconfigured for fast transfer learning after an initial phase of learning. This work establishes dynamical motifs as a fundamental unit of compositional computation, intermediate between neuron and network. As whole-brain studies simultaneously record activity from multiple specialized systems, the dynamical motif framework will guide questions about specialization and generalization.
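The abstract describes probing recurrent networks with non-negative (e.g., rectified) unit activations for attractor-like dynamical motifs, and lesioning clusters of units to test for modular deficits. The following is a minimal illustrative sketch of those two analysis steps on a toy ReLU network, not the paper's actual model or code; the network size, weight scale, and cluster choice are arbitrary assumptions.

```python
import numpy as np

# Hypothetical toy example (not the paper's model): a small recurrent
# network with a positive (ReLU) activation function. We (1) run the
# autonomous dynamics to an approximate fixed point, a simple attractor
# motif, and (2) "lesion" a cluster of units by silencing them.

rng = np.random.default_rng(0)
n = 20
# Weights scaled well below 1 so the ReLU map is contractive and the
# state provably settles to a unique fixed point.
W = rng.normal(scale=0.4 / np.sqrt(n), size=(n, n))
b = 0.1 * np.ones(n)  # positive bias keeps activity non-negative

def step(h):
    """One step of the rectified recurrent dynamics."""
    return np.maximum(0.0, W @ h + b)

# (1) Iterate until the state stops moving; the residual measures how
# close we are to a fixed point of the dynamics.
h = rng.random(n)
for _ in range(500):
    h = step(h)
residual = np.linalg.norm(step(h) - h)

# (2) Lesion an arbitrary "cluster" (here, the first 5 units) and
# re-run; in the paper, such lesions produced modular deficits on the
# subtasks that cluster implemented.
mask = np.ones(n)
mask[:5] = 0.0

def lesioned_step(h):
    return mask * step(h)

h_les = rng.random(n)
for _ in range(500):
    h_les = lesioned_step(h_les)
```

A full analysis would additionally linearize the dynamics around each fixed point to classify the motif (attractor, decision boundary, rotation), and compare lesioned versus intact task performance.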