Parameter Space Structure of Continuous-Time Recurrent Neural Networks
Randall D. Beer
2006
Neural Computation
A fundamental challenge for any general theory of neural circuits is how to characterize the structure of the space of all possible circuits over a given model neuron. As a first step in this direction, this paper begins a systematic study of the global parameter space structure of continuous-time recurrent neural networks (CTRNNs), a class of neural models that, though simple, is dynamically universal. First, we explicitly compute the local bifurcation manifolds of CTRNNs. We then visualize the structure of these manifolds in net input space for small circuits. These visualizations reveal a set of extremal saddle-node bifurcation manifolds that divide CTRNN parameter space into regions of dynamics with different effective dimensionality. Next, we completely characterize the combinatorics and geometry of an asymptotically exact approximation to these regions for circuits of arbitrary size. Finally, we show how these regions can be used to calculate estimates of the probability of encountering different kinds of dynamics in CTRNN parameter space.

Continuous-time recurrent neural networks are among the simplest possible nonlinear continuous-time neural models. CTRNNs are defined by the vector differential equation

$$\tau \, \dot{y} = -y + W \sigma(y + \theta) + I \qquad (2.1)$$

where $\tau$, $\dot{y}$, $y$, $\theta$, and $I$ are length-$N$ vectors, $W = \{w_{ij}\}$ is an $N \times N$ matrix, and all vector operations (including the application of the output function $\sigma(x) = 1/(1 + e^{-x})$) are performed element-wise. The standard neurobiological interpretation of this model is that $y_i$ represents the mean membrane potential of the $i$th neuron, $\sigma(\cdot)$ represents its mean firing rate, $\tau_i$ represents its membrane time constant, $\theta_i$ represents its threshold/bias, $I_i$ represents an external input, the weights $w_{ij}$, $j \neq i$, represent synaptic connections from neuron $j$ to neuron $i$, and the self-interaction $w_{ii}$ represents a simple active conductance. This model can also be interpreted as representing nonspiking neurons (Dunn et al., 2004); in this case, $\sigma(\cdot)$ represents saturating nonlinearities in synaptic input. (A minimal simulation sketch of (2.1) appears at the end of this section.)

Note that the distinction between $I$ and $\theta$ is merely semantic: with respect to the output dynamics of (2.1), only the net input $I + \theta$ matters, since Eqn. (2.1) can be rewritten in the form $\tau \, \dot{x} = -x + \sigma(W x + I + \theta)$ using the substitution $y = W x + I$ (a short check of this substitution is sketched below). Without loss of generality, we will often assume that $I = 0$, so the net input to a CTRNN is given simply by $\theta$. Thus, an $N$-neuron CTRNN has $N$ time constants, $N$ net inputs, and $N^2$ weights, giving $\mathcal{C}_{\mathrm{CTRNN}}(N)$, the space of all possible CTRNNs on $N$ neurons, an $(N^2 + 2N)$-dimensional parameter space; a two-neuron circuit, for example, already has an 8-dimensional parameter space.

Compared to more biologically realistic neural models, the dynamics of an individual CTRNN neuron is quite trivial. However, small networks of CTRNNs can qualitatively reproduce the full range of nerve cell phenomenology, including spiking, plateau potentials, bursting, etc. More importantly, CTRNNs are known to be universal approximators of smooth dynamics (Funahashi & Nakamura, 1993).
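The equivalence of $I$ and $\theta$ can be checked directly. The following is a sketch only, under the simplifying assumptions of an invertible weight matrix $W$ and a common time constant $\tau$ (so that $\tau$ commutes with $W$). Substituting $y = W x + I$ into (2.1) gives

$$\tau W \dot{x} = -(W x + I) + W \sigma(W x + I + \theta) + I = -W x + W \sigma(W x + I + \theta),$$

and multiplying through by $W^{-1}$ yields $\tau \dot{x} = -x + \sigma(W x + I + \theta)$, in which $I$ and $\theta$ enter only through their sum $I + \theta$.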
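To make the model concrete, below is a minimal simulation sketch of Eqn. (2.1) using forward-Euler integration. The two-neuron weights, biases, time constants, step size, and duration are arbitrary illustrative assumptions, not values taken from the paper.

    import numpy as np

    def sigma(x):
        """Logistic output function: sigma(x) = 1 / (1 + exp(-x))."""
        return 1.0 / (1.0 + np.exp(-x))

    def ctrnn_step(y, W, theta, I, tau, dt):
        """One forward-Euler step of tau * dy/dt = -y + W @ sigma(y + theta) + I."""
        dydt = (-y + W @ sigma(y + theta) + I) / tau
        return y + dt * dydt

    # Arbitrary illustrative two-neuron circuit (assumed values, not from the paper).
    W = np.array([[5.0, -10.0],
                  [10.0,  0.0]])   # W[i, j] is the connection from neuron j to neuron i
    theta = np.array([-2.5, -5.0]) # biases; with I = 0 these are the net inputs
    I = np.zeros(2)                # external input, taken to be 0 as in the text
    tau = np.array([1.0, 1.0])     # membrane time constants

    y = np.zeros(2)                # initial membrane potentials
    outputs = []
    for _ in range(5000):          # 50 time units at dt = 0.01
        y = ctrnn_step(y, W, theta, I, tau, dt=0.01)
        outputs.append(sigma(y + theta))  # record firing rates

Note that the parameter count of this sketch matches the text: the two-neuron circuit has $2^2 + 2 \cdot 2 = 8$ parameters (four weights, two net inputs, two time constants).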
doi:10.1162/neco.2006.18.12.3009
pmid:17052157