Multi-population recurrent model
The class of neural networks we start from is that of recurrent systems
whose weights are set by a random draw (Random Recurrent
Neural Networks, ``RRNNs''). In this section we present
a generic formalism for the design of
multi-population random recurrent systems. This formalism will
help to specify the sensory architecture we use in section
3.
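As an illustration of this formalism, the weight matrix of a multi-population random recurrent network can be drawn block by block, with the statistics of each block depending only on the source and target populations. The following is a minimal sketch, in which the function and parameter names are ours and the 1/N scaling of the moments is the convention customary in mean-field analyses, not necessarily the one adopted in the paper:

import numpy as np

def build_multipop_rrnn(sizes, mean_J, std_J, seed=0):
    """Draw the weight matrix of a multi-population random recurrent
    network. sizes[p] is the number of neurons in population p;
    mean_J[p][q] and std_J[p][q] give the mean and standard deviation
    of the connections from population q to population p, before the
    1/N_q scaling applied below."""
    rng = np.random.default_rng(seed)
    N = sum(sizes)
    offsets = np.cumsum([0] + list(sizes))
    J = np.zeros((N, N))
    for p in range(len(sizes)):
        for q in range(len(sizes)):
            rows = slice(offsets[p], offsets[p + 1])
            cols = slice(offsets[q], offsets[q + 1])
            # Each block is an independent random draw whose statistics
            # depend only on the (source, target) population pair.
            J[rows, cols] = rng.normal(
                mean_J[p][q] / sizes[q],
                std_J[p][q] / np.sqrt(sizes[q]),
                size=(sizes[p], sizes[q]))
    return J

# Example: one excitatory and one inhibitory population.
J = build_multipop_rrnn(
    sizes=[80, 20],
    mean_J=[[1.0, -2.0], [1.0, -2.0]],
    std_J=[[1.0, 1.0], [1.0, 1.0]])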
Random neural networks were introduced by Amari [2]
in a study of their large-size properties. Predictions on the mean
field of such systems can be obtained in the limit of large sizes
under a hypothesis of independence of the individual signals
[33,6]. This convergence towards mean-field
equations has recently been formally proved [25].
The emergence of several kinds of synchronized dynamics can thus be
established in a model with excitatory and inhibitory populations
[9].
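To fix ideas, here is a sketch of the generic form such mean-field equations take for a single population, under standard assumptions; it is not the exact statement of [33,6,25]. With discrete-time dynamics $x_i(t+1) = f\big(\sum_j J_{ij} x_j(t) - \theta\big)$ and weights $J_{ij}$ drawn i.i.d. with mean $\bar{J}/N$ and variance $\sigma^2/N$, the local field becomes Gaussian as $N \to \infty$ under the independence hypothesis, and the mean activity $m(t)$ and mean squared activity $q(t)$ obey the self-consistent recursion
$$ m(t+1) = \int f\big(\bar{J}\, m(t) - \theta + \sigma \sqrt{q(t)}\, \xi\big)\, d\gamma(\xi), \qquad
   q(t+1) = \int f\big(\bar{J}\, m(t) - \theta + \sigma \sqrt{q(t)}\, \xi\big)^2\, d\gamma(\xi), $$
where $\gamma$ is the standard Gaussian measure.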
Here we will mainly consider our random networks as finite-size
dynamical systems, which can display a generic quasi-periodicity
route to chaos under a continuous tuning of the gain parameter
[11]. Note that dynamical neural networks are subject to
specific time constraints that distinguish them from purely
feedforward associative systems. In particular, the time necessary
to reach an attractor is not fixed in advance: the system needs ``several time
steps'' or ``a certain time'' to reach a neighborhood of the
attractor. This transient time, which may be short, is unavoidable,
and occurs whenever a change takes place in the environment of
the system.
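A minimal simulation sketch of such a gain sweep follows. The tanh transfer function, the update rule, and the centered Gaussian weights are illustrative assumptions (the precise bifurcation sequence depends on the weight statistics and transfer function); the sketch also discards an initial transient, reflecting the point above that the time to reach the attractor is not fixed in advance:

import numpy as np

def simulate(J, g, T=500, seed=1):
    """Iterate x(t+1) = tanh(g * J x(t)) and record one unit's
    activity after discarding the transient toward the attractor.
    The gain g plays the role of the tuning parameter above."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-0.1, 0.1, size=J.shape[0])
    trace = []
    for t in range(T):
        x = np.tanh(g * (J @ x))
        if t >= T // 2:  # keep only the post-transient regime
            trace.append(x[0])
    return np.array(trace)

N = 200
rng = np.random.default_rng(0)
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
for g in (0.5, 1.5, 3.0, 6.0):
    tr = simulate(J, g)
    # Near-zero variability indicates a fixed point; growing
    # variability signals oscillatory or chaotic regimes.
    print(f"g={g}: activity std on the attractor = {tr.std():.4f}")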