Introduction

Understanding the global dynamics of the brain in functional terms is a central issue in neurophysiology. Depending on the spatial scale, one can consider neuronal, local field or global field dynamics, and study the temporal behavior of neurons or groups of neurons. Measures of complexity [32], temporal coincidence [1], local synchronization [16] and long-range synchronization [27] lead to the idea that perception and/or cognition rely on collective organization phenomena. Such organization (i) manifests itself in reproducible spatio-temporal patterns of firing that take place at the millisecond scale [1,23], (ii) is distributed over a whole sensory structure [32,23] or over the whole brain [27], and (iii) is transient, i.e. desynchronization follows synchronization [29,27]. This collective organization depends partly on the input sensory signal, but also (and this is the point we want to stress) on inner dynamical constraints. For instance, the change in the dynamics following input presentation is not coded in the input signal: it arises as a phase transition [32] in the dynamics of the inner system. This transition depends on (i) a long past history of adaptation between this particular stimulus and the system, and (ii) an inner dynamical context that may modulate the way a given stimulus is interpreted.

An artificial neural network with inner recurrent links can be seen as a dynamical system, as it can generate an inner signal that propagates through inner (recurrent) interactions. We call such a system a dynamical neural network (DNN). In order to perform computation with recurrent networks, one often tries to avoid interference between the inner signal and the input (command) signal. For instance, in classical applications of recurrent networks [38,12], inner recurrent states work as buffers that memorize a context, but active inner dynamics are avoided so that the response is specified by the input signal (i.e. the same input sequence always produces the same response). On the other hand, the classical Hopfield model [20] and its derivatives [19] are autonomous dynamical systems (i.e. there is no real-time interaction with an input signal), so the final attractor depends strictly on the initial conditions.
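To make this distinction concrete, the sketch below contrasts an input-driven random recurrent network with an autonomous one. It is a minimal illustration only, not the model developed in this paper (which is defined in section 2): the network size N, the gain g, the tanh transfer function and the toy input signal are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative assumptions: size, gain and transfer function are arbitrary.
N = 100                                # number of units
g = 1.5                                # coupling gain; g > 1 favors rich inner dynamics
W = g * rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # random recurrent weights

def step(x, u):
    """One discrete-time update: the inner signal W @ x interacts with the input u."""
    return np.tanh(W @ x + u)

# Input-driven regime: an external (command) signal interacts in real time
# with the inner dynamics, so the trajectory depends on both.
x = rng.normal(0.0, 0.1, N)
for t in range(200):
    u = 0.5 * np.sin(0.1 * t) * np.ones(N)   # toy input signal
    x = step(x, u)

# Autonomous regime (Hopfield-like setting): no input at all, so the
# attractor finally reached depends strictly on the initial condition.
x_auto = rng.normal(0.0, 0.1, N)
for t in range(200):
    x_auto = step(x_auto, np.zeros(N))
```

In the first loop the response is meant to be specified by the input sequence; in the second, everything is fixed by the initial state. The approach presented in this article sits between these two extremes, letting the input signal interact with active inner dynamics.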
In this article, we present an alternative approach in the framework of DNN computation. We have chosen a model that is simple in its design and highly complex in its behavior. Our idea is that such a generic system could shed new light on the natural processes of perception. So, more than the implementation details of our model, what is important is its:

We present in section 2 the generic structure of a multi-population random recurrent model, with an on-line rule for weight adaptation. Then, drawing a global analogy with biological sensory systems, we present in section 3 a model with a ``primary'' layer and a ``secondary'' layer, called ReST (for Resonant Spatio-Temporal system). In section 4, we show the effects of the learning rule, namely a reduction of the dynamics on the secondary layer and a feedback reinforcement from the secondary layer towards the primary layer. We then study the retrieval ability and the capacity of the model. In section 5, we present an example of artificial system design in the case of robot navigation and scene recognition; this preliminary experiment illustrates the ability of our system to deal with real-world data. Finally, we draw in section 6 parallels between the functioning of our model and biological observations, in terms of chaos, dynamical binding, and cortical and sub-cortical structures, and raise the question of temporal scales.