Learning and retrieval
In classical ``Hebbian'' studies on single-population recurrent
networks, connection weights are set according to a given set of
pre-defined sequences [19,15], without
dynamical interaction. In contrast, our system uses an
on-line learning rule: the weights are updated at
each time step during the learning process. Weight adaptation
is thus grounded in a real-time interaction between the input signal
and the system dynamics.
In order to evaluate the effect
of the learning rule on the dynamics, we alternate learning phases
(Eq. (2)) and testing phases (Eq. (1)).
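The following sketch illustrates this alternation of phases in a generic discrete-time rate network. It is not the paper's Eqs. (1) and (2), which are not reproduced here: the transfer function, the exact Hebbian increment, the network size, and the phase length are illustrative assumptions.

import numpy as np

def step(J, x, u, g=np.tanh):
    """One dynamical step (stands in for Eq. (1)): new activation from
    recurrent input J @ x plus external input u.  The transfer function
    g is an assumption, not the paper's exact equation."""
    return g(J @ x + u)

def hebbian_update(J, x_post, x_pre, eps):
    """On-line Hebbian increment (stands in for Eq. (2)): correlate the
    current activity with the activity one step earlier.  The precise
    form used in the paper (e.g. covariance terms) may differ."""
    return J + eps * np.outer(x_post, x_pre)

rng = np.random.default_rng(0)
N, eps, T = 100, 0.01, 200
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
x = rng.uniform(-1.0, 1.0, N)

for t in range(T):
    u = np.sin(2 * np.pi * t / 20) * rng.uniform(0.0, 1.0, N)  # toy input sequence
    x_new = step(J, x, u)
    learning_phase = (t // 50) % 2 == 0      # alternate learning and testing phases
    if learning_phase:
        J = hebbian_update(J, x_new, x, eps)  # weights adapt at every time step
    x = x_new

During the testing phases the weights are frozen, so the effect of the preceding learning phase on the dynamics can be observed in isolation.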
A first attempt to learn sequential patterns of activation from
background chaotic activity with an on-line Hebbian rule can
be found in [18]. In addition, a Hebbian learning rule
was proposed for our model in [10] for the
dynamical encoding of static input patterns.
For simulations on the ReST model, learning takes place on the inner
and feedback links, with a lighter learning ``strength'' on the
secondary-layer recurrent links. Quantitative effects of this
learning-strength parameter on the learning capacity can be found in
section 4.3. The weights from the primary layer remain unchanged.
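A minimal sketch of this link-class-specific learning is given below. The two-layer layout (primary layer indices first, secondary layer indices after), the block structure of the weight matrix, and the numerical rates are hypothetical; only the qualitative pattern follows the text: inner and feedback links learn (with a lighter rate on the secondary-layer recurrent links), while links from the primary layer are left unchanged.

import numpy as np

# Hypothetical layout: primary layer = indices 0..N1-1,
# secondary layer = indices N1..N1+N2-1.
N1, N2 = 50, 100
N = N1 + N2
rng = np.random.default_rng(1)
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))

# Per-link-class learning rates (illustrative values only).
eps = np.zeros((N, N))
eps[N1:, N1:] = 0.005   # inner links (secondary recurrent): lighter strength
eps[:N1, N1:] = 0.01    # feedback links (secondary -> primary)
eps[:, :N1] = 0.0       # links from the primary layer: remain unchanged

def hebbian_update(J, x_post, x_pre):
    """Same on-line Hebbian increment as before, but modulated per link
    class by the eps matrix, so only inner and feedback links adapt."""
    return J + eps * np.outer(x_post, x_pre)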