Likelihood Based Learning Rule for Temporal Coding In Recurrent Spiking Neural Networks
Paolo Muratore, Cristiano Capone, Pier Stanislao Paolucci
2020
Zenodo
Recurrent spiking neural networks (RSNN) in the human brain learn to perform a wide range of perceptual, cognitive and motor tasks very efficiently in terms of energy consumption, and they require only a few examples. This motivates the search for biologically inspired learning rules for RSNNs, both to improve our understanding of brain computation and to advance the efficiency of artificial intelligence. Several spiking models and learning rules have been proposed,
but it remains a challenge to design RSNNs whose learning relies on biologically plausible mechanisms and that are capable of solving complex temporal tasks. In this paper, we derive a learning rule, local to the synapse, from a simple mathematical principle: the maximization of the likelihood that the network solves a specific task. We propose a novel target-based learning scheme in which the rule derived from likelihood maximization is used to mimic a specific spiking pattern that encodes the solution to complex temporal tasks. This makes learning extremely rapid and precise, outperforming state-of-the-art algorithms for RSNNs. While error-based approaches (e.g., e-prop) optimize the internal sequence of spikes trial after trial in order to progressively minimize the mean squared error (MSE), we assume that a signal projected from an external origin (e.g., from other brain areas) directly defines a suitable target sequence. This facilitates the learning procedure, since the network is trained from the beginning on the desired internal sequence. We demonstrate the capacity of our model to tackle several problems, such as learning multidimensional trajectories and solving the classical temporal XOR benchmark. Finally, we show that an online approximation of the gradient ascent, in addition to guaranteeing complete locality in time and space, allows learning after very few presentations of the target output. Our model can be applied to different types of biological neurons. The analytically derived plasticity learning rule is [...]
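To illustrate the principle the abstract describes, the following is a minimal sketch of likelihood-gradient, target-based learning for a recurrent network of stochastic spiking units. It assumes a discrete-time neuron with sigmoidal spike probability; all names, sizes and constants (`N`, `T`, `tau`, `eta`, the random target pattern `x_star`) are illustrative assumptions, not the paper's actual model or parameters. For such a neuron, the log-likelihood gradient with respect to a synapse reduces to the local three-factor form `(target spike - spike probability) x presynaptic trace`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: N recurrent neurons, T time steps (assumed, not from the paper).
N, T = 50, 200
dt, tau = 1.0, 20.0          # time step and synaptic trace time constant (ms), assumed
alpha = np.exp(-dt / tau)    # per-step decay of the presynaptic trace

# A fixed target spike pattern x*, standing in for the externally
# projected target sequence described in the abstract (here random).
x_star = (rng.random((T, N)) < 0.05).astype(float)

w = rng.normal(0.0, 0.1, (N, N))   # recurrent weights
eta = 0.01                          # learning rate (assumed)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def train_trial(w):
    """One trial: clamp the network on the target sequence and ascend the
    log-likelihood sum_t log p(x*_t | past). For spike probability
    p_i = sigmoid(u_i), the gradient is (x*_i - p_i) * s_j, with s_j the
    exponentially filtered trace of presynaptic spikes: local in space and time."""
    s = np.zeros(N)                 # presynaptic spike traces
    for t in range(T):
        u = w @ s                   # membrane potentials from recurrent input
        p = sigmoid(u)              # instantaneous spike probabilities
        w += eta * np.outer(x_star[t] - p, s)   # local likelihood-gradient step
        s = alpha * s + x_star[t]   # advance traces with the clamped target spikes
    return w

def log_likelihood(w):
    """Diagnostic: Bernoulli log-likelihood of the target under current weights."""
    s, ll = np.zeros(N), 0.0
    for t in range(T):
        p = sigmoid(w @ s)
        ll += np.sum(x_star[t] * np.log(p + 1e-12)
                     + (1 - x_star[t]) * np.log(1 - p + 1e-12))
        s = alpha * s + x_star[t]
    return ll

before = log_likelihood(w)
for _ in range(20):
    w = train_trial(w)
after = log_likelihood(w)
```

Because training is clamped on the target sequence from the first trial, each weight update uses the desired internal spikes rather than self-generated ones; this is the sense in which a target-based scheme avoids the slow trial-after-trial optimization of error-based approaches.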
doi:10.5281/zenodo.4464128
fatcat:aae6glodcreevexdvrb2twc364