
Learning dynamics

We extend here the activation dynamics to a generic on-line learning dynamics. A first article [10] presented a Hebbian learning process in a non-structured RRNN, which was found to reduce the complexity of the dynamics (an effect later called ``dynamics reduction''). We propose here a local Hebbian learning rule that relies on an on-line estimate of the covariance between afferent and efferent signals, as in [31]. We suppose hereafter that, at each time step, local estimates of the mean activation are available, stored in the vector $\mathbf{\hat{x}}(t)$ and updated according to $\mathbf{\hat{x}}(t)=(1-\beta)\mathbf{\hat{x}}(t-1)+\beta\mathbf{x}(t)$; we take $\beta=0.1$ in our simulations. The learning dynamics is then described by the following set of equations:
\begin{displaymath}
\forall t\geq 1,\ \forall (p,q)\in\{1,\ldots,P\}^2,\ \forall (i,j)\in\{1,\ldots,N^{(p)}\}\times\{1,\ldots,N^{(q)}\},
\left\{
\begin{array}{l}
\hat{x}_i^{(p)}(t)=(1-\beta)\,\hat{x}_i^{(p)}(t-1)+\beta\,x_i^{(p)}(t)\\[2mm]
J_{ij}^{(pq)}(t)=J_{ij}^{(pq)}(t-1)+\varepsilon^{(pq)}\,\phi_g\!\left(u_i^{(p)}(t)\right)\left(x_i^{(p)}(t)-\hat{x}_i^{(p)}(t)\right)\left(x_j^{(q)}(t-1)-\hat{x}_j^{(q)}(t-1)\right)
\end{array}
\right.
\end{displaymath} (2)

where $\phi_g(u)=\left(1-f_g(u)\right)$ is a function that prevents weight drift when the post-synaptic neuron is saturated, and $\varepsilon^{(pq)}$ is the learning parameter from population $q$ towards population $p$ (assumed to be small). Note that the rule takes into account the discrete time delay between the pre-synaptic neuron $j$ (taken at time $t-1$) and the post-synaptic neuron $i$ (taken at time $t$), which is important for learning temporal dependencies [19].
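
For concreteness, the following minimal NumPy sketch implements one such learning step for a single population pair $(p,q)$. All names (f_g, u_p, x_p, x_hat_p, x_q_prev, x_hat_q_prev, J_pq, eps_pq) are illustrative assumptions rather than identifiers from the original model; J_pq is assumed to hold the weights from population $q$ towards population $p$, and f_g the transfer function of the activation dynamics.

import numpy as np

BETA = 0.1  # smoothing rate of the local mean-activation estimate (illustrative)

def update_mean_estimate(x_hat, x, beta=BETA):
    # On-line estimate of the mean activation:
    # x_hat(t) = (1 - beta) * x_hat(t-1) + beta * x(t)
    return (1.0 - beta) * x_hat + beta * x

def hebbian_step(J_pq, eps_pq, f_g,
                 u_p, x_p, x_hat_p,        # post-synaptic potential, activation, mean estimate (time t)
                 x_q_prev, x_hat_q_prev):  # pre-synaptic activation and mean estimate (time t-1)
    # Saturation gate phi_g(u) = 1 - f_g(u): the update vanishes when the
    # post-synaptic neuron is saturated, preventing weight drift.
    phi = 1.0 - f_g(u_p)
    # Centered efferent signal (time t) and centered afferent signal (time t-1);
    # their outer product gives the on-line covariance term of the rule.
    post = phi * (x_p - x_hat_p)
    pre = x_q_prev - x_hat_q_prev
    return J_pq + eps_pq * np.outer(post, pre)

In a simulation, such a step would be applied to every population pair $(p,q)$ once per time step, after the activation update, with the mean estimates refreshed beforehand by update_mean_estimate.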