

Spontaneous dynamics

We call ``spontaneous dynamics'' the dynamics corresponding to Eq.(1) when the weights have been set by a random draw (no learning takes place). As we use rather large systems, the behavior of a given system is assumed to be representative of the behavior of the whole family of random systems defined with the same parameter set. This assumption is only exact in the limit of large sizes [25]. At finite size, we have checked the reproducibility and genericity of the behaviors described hereafter on several networks.

Table 1: Only a few parameters are necessary to define the initial system. The thresholds $\theta^{(1)}$ and $\theta^{(2)}$ are set high enough to keep activity low and avoid saturation. The feed-forward links and inner links are drawn at random according to $\mathcal{N}(0,(\sigma_J^{(21)})^2/N^{(1)})$ and $\mathcal{N}(0,(\sigma_J^{(22)})^2/N^{(2)})$ respectively (the feed-forward links are adapted to the statistics of the input signals, where $m_I^{(1)}$ is the mean sparsity of the input signal). Initially (before training), the feedback links are equal to zero, so that the secondary-layer activity has no influence on the primary layer. The lateral links in the primary layer are also equal to zero. The gain parameter $g=8$ allows for chaotic dynamics in the secondary layer. Typical learning parameters are also given in the right part of the table (see text).
\begin{tabular}{ccc|cc}
\multicolumn{3}{c|}{Parameters for the 2-layers ReST model} & \multicolumn{2}{c}{Typical learning parameters} \\
Thresholds & Weights standard deviation & Gain & Weight adaptation & Update rate of the mean activation \\
\hline
$\theta^{(1)}=0.5$ & $\sigma_J^{(11)}=0$, \ $\sigma_J^{(12)}=0$ & $g=8$ & $\alpha^{(11)}=0$, \ $\alpha^{(12)}=0.1$ & \\
$\theta^{(2)}=0.4$ & $\sigma_J^{(21)}=0.2/\sqrt{m_I^{(1)}}$, \ $\sigma_J^{(22)}=1$ & & $\alpha^{(21)}=0$, \ $\alpha^{(22)}=0.02$ & $\beta=0.1$ \\
\end{tabular}
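
As an illustration, the following minimal sketch (not the authors' code) draws an initial 2-layers network from the parameter set of Table 1. The layer sizes N1, N2 and the input sparsity value m_I are assumptions chosen only for the example; they are not specified in this section.

import numpy as np

rng = np.random.default_rng(0)

N1, N2 = 256, 256        # assumed layer sizes (not given in this section)
m_I = 0.1                # assumed mean sparsity of the input signal

theta1, theta2 = 0.5, 0.4            # thresholds (Table 1)
g = 8.0                              # gain, chosen for chaotic dynamics in layer 2

sigma_21 = 0.2 / np.sqrt(m_I)        # feed-forward weight scale, adapted to input statistics
sigma_22 = 1.0                       # recurrent weight scale in the secondary layer

# Feed-forward links ~ N(0, sigma_21^2 / N1); inner links ~ N(0, sigma_22^2 / N2)
J21 = rng.normal(0.0, sigma_21 / np.sqrt(N1), size=(N2, N1))
J22 = rng.normal(0.0, sigma_22 / np.sqrt(N2), size=(N2, N2))

# Before training, feedback links (J12) and primary-layer lateral links (J11) are zero
J12 = np.zeros((N1, N2))
J11 = np.zeros((N1, N1))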


The mean-field equations [6] have helped us determine the parameters of the system (displayed in Tab.1). The parameters have been chosen so that the inner dynamics is chaotic, so that the response of the system is not fully specified by the input sequence. More precisely, taking into account the sparsity of the input signal, the value of $\sigma_J^{(21)}$ is chosen such that the mean standard deviation of the feed-forward local field is $E(\sigma(\{h_i^{(21)}(t)\}_{t=1,...,+\infty}))=0.2$. Setting $\sigma_J^{(22)}=1$, the mean-field equations predict that $E(\sigma(\{h_i^{(22)}(t)\}_{t=1,...,+\infty})) \simeq 0.3$ at the thermodynamic limit, so that the inner signal amplitude is significantly stronger than the feed-forward signal amplitude. At a given time $t$, the spatial pattern of activation $\mathbf{x}^{(2)}(t)$ is such that 15% to 20% of the neurons are active, i.e. have an activation $> 0.5$ ($E(x_i^{(2)}(t))\simeq 0.18$ at the thermodynamic limit). One can also note that almost every neuron in the secondary layer is dynamically active, i.e. 80% of the neurons have an activation signal $\{x_i^{(2)}(t)\}_{t=1,...,+\infty}$ whose standard deviation is $>0.1$.
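
The statistics quoted above can be estimated numerically. The sketch below is only illustrative and rests on explicit assumptions: Eq.(1) is not reproduced in this section, so a standard sigmoidal rate update $x_i(t+1) = 1/(1+\exp(-g(h_i(t)-\theta)))$ is assumed, with $h_i$ the sum of feed-forward and recurrent local fields, and a random sparse binary signal stands in for the actual input; layer sizes are illustrative.

import numpy as np

rng = np.random.default_rng(1)
N1, N2, m_I = 256, 256, 0.1                 # assumed layer sizes and input sparsity
theta2, g = 0.4, 8.0
J21 = rng.normal(0.0, (0.2 / np.sqrt(m_I)) / np.sqrt(N1), size=(N2, N1))
J22 = rng.normal(0.0, 1.0 / np.sqrt(N2), size=(N2, N2))

T, burn = 2000, 200
x2 = rng.uniform(0.0, 1.0, N2)
X, Hff, Hrec = [], [], []
for t in range(T + burn):
    x1 = (rng.uniform(0.0, 1.0, N1) < m_I).astype(float)   # sparse surrogate input
    h_ff, h_rec = J21 @ x1, J22 @ x2                        # feed-forward and recurrent fields
    x2 = 1.0 / (1.0 + np.exp(-g * (h_ff + h_rec - theta2)))  # assumed sigmoidal update
    if t >= burn:
        X.append(x2.copy()); Hff.append(h_ff); Hrec.append(h_rec)
X, Hff, Hrec = np.array(X), np.array(Hff), np.array(Hrec)

print("mean temporal std of feed-forward field:", Hff.std(axis=0).mean())        # text: 0.2
print("mean temporal std of recurrent field:   ", Hrec.std(axis=0).mean())       # text: ~0.3
print("fraction of active units (x > 0.5):     ", (X > 0.5).mean())              # text: 15-20%
print("fraction of dynamic units (std > 0.1):  ", (X.std(axis=0) > 0.1).mean())  # text: ~80%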