
Recognition, retrieval and dynamical memory

The aim of this section is to illustrate the computational abilities of our system. The reproducibility of these simulations has been checked on several networks. For the sake of simplicity, we have performed learning on elementary input sequences. Elementary means that only one neuron of the primary layer is stimulated at a given time. The input sparsity is thus equal to $m_I^{(1)}=1/N^{(1)}$. This choice simplifies the notation and concentrates our attention on the temporal behavior of the system. A temporal input sequence is described by a vector containing the indices of the stimulated neurons, $\mathbf{s}=(i_1,...,i_\tau)$, so that $s(1)=i_1$, ..., $s(\tau)=i_\tau$. The length of the sequence is $\tau$. This sequence describes a periodic input signal of period $\tau$, which is repeatedly presented between $t=t_1$ and $t=t_2$ ($t_2 \gg t_1$): $\forall t \in \{t_1,...,t_2\}$, $\forall j \in \{1,...,N^{(1)}\}$, $I_j^{(1)}(t)=1$ if $s((t \mbox{ mod }\tau)+1)=j$, and $I_j^{(1)}(t)=0$ otherwise.
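To make this construction concrete, the following is a minimal Python sketch (not part of the original paper) of the periodic elementary input defined above; the function name `elementary_input` and the use of 1-based neuron indices, as in the text, are assumptions made for illustration.

import numpy as np

def elementary_input(s, N, t):
    """Periodic elementary input I^(1)(t): one active neuron per time step.

    s : sequence of neuron indices (i_1, ..., i_tau), 1-based as in the text
    N : number of neurons on the primary layer, N^(1)
    t : current time step
    Returns the length-N input vector I^(1)(t).
    """
    tau = len(s)              # period of the input signal
    I = np.zeros(N)
    j = s[t % tau]            # s((t mod tau) + 1), with Python's 0-based list indexing
    I[j - 1] = 1.0            # only neuron j is stimulated at time t
    return I

# Example: sequence s = (3, 1, 4) on a primary layer of N = 5 neurons, period tau = 3
for t in range(6):
    print(t, elementary_input((3, 1, 4), 5, t))

With this sparse coding, exactly one component of the input vector equals 1 at each step, so the input sparsity is 1/N, matching $m_I^{(1)}=1/N^{(1)}$ above.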

