Physica A 301 (2001) 362–374
www.elsevier.com/locate/physa
Categorization ability in a biologically motivated neural network

Crisógono R. da Silva

Departamento de Física, Universidade Federal de Alagoas, Cidade Universitária, BR 101 Km 14 Norte, 57072-340 Maceió, Brazil

Received 3 July 2001; received in revised form 16 August 2001
Abstract

In this work, we study the categorization properties of an extreme and asymmetrically diluted version of the Hopfield model for associative memory when the effect of the refractory period of the neurons is taken into account in the dynamics of the system. The simplest way of modeling these refractory periods is by means of a time-dependent threshold that acts only on those neurons that emit a signal and favors them to be at rest during a given time interval. The dynamic equations are derived in the limit of extreme dilution, using an approach that explicitly preserves the dependence of the system on its whole history. The categorization error is analyzed for different values of the parameters. In particular, we confront our analytical results with numerical simulations for the noiseless case T = 0. When the number of examples or their correlations increases, the system always categorizes, independently of the amplitude of the potential that mimics the effect of the refractory period. © 2001 Published by Elsevier Science B.V.

PACS: 87.10.+e; 84.35.+i; 64.60.Ht

Keywords: Hopfield model; Categorization error; Refractory periods
1. Introduction

One of the most interesting issues of neural network models is their categorization ability, i.e., the capacity of the system for grouping a given set of correlated patterns (the examples) into distinct classes (concepts), in such a way that each concept represents the common features of a set of examples and is a dynamical attractor. The categorization ability of attractor neural models, particularly the Hopfield model [1], was first studied by Fontanari [2]. He showed that when correlated patterns are stored in the fully connected Hopfield model, in spite of the loss of the retrieval capacity, it

E-mail address: [email protected] (C.R. da Silva).
0378-4371/01/$ - see front matter © 2001 Published by Elsevier Science B.V.
PII: S0378-4371(01)00420-4
displays an ability to work as a categorizing machine. This means that although the system may not recognize the stored patterns, it can decide to which category the input belongs. Categorization problems have been studied in models with two-state neurons [2–9], analog neurons [10], multi-state neurons [11,12], dilution and asymmetry in the synapses with two-state [13,14] and three-state neurons [15], and pseudo-inverse neural networks [16]. The aim of this paper is biologically motivated. Its starting point is the ultra-diluted Hopfield model introduced by Derrida et al. [17], in which the authors studied two new biological ingredients: asymmetry and dilution in the original Hopfield model. We add a new element, namely, the refractory period. It is well known that after firing a spike, a neuron is unable to fire again for a period of time of the order of 2 ms, irrespective of its afferent potential. This short period is known as the absolute refractory period (ARP). Following this short ARP, the neuron enters a new regime of about 4 ms, in which it partially recovers the capacity of emitting spikes, but with a greater excitation threshold which continuously decreases with time. This second interval is known as the relative refractory period (RRP). After this somewhat longer RRP, the threshold tends to return to its rest value and the neuron can fire again with typical intra-network potentials. In recent years, a lot of work has been devoted to studying different ways of including this biological feature in several neural models [18–27]. In a recent paper, da Silva et al. [26] introduced a threshold that mimics the refractory periods and studied the dynamical properties of an extremely diluted Hopfield model in the context of pattern recognition.
The simplest way to model this behavior is to introduce into the dynamics of the system a time-dependent threshold that acts only on those neurons that have emitted a signal and favors them to be at rest during a given time interval. In particular, they used a different mathematical approach that permits taking into account the whole history of the network. Our main aim is to use this formulation to study the effects of the refractory period on a totally different emergent property, namely, the categorization ability. The remainder of the paper is organized as follows. In Section 2, we introduce the model and describe the categorization task. In Section 3, we present the analytical approach and analyze the results in different limits. In Section 4, the analytic results are compared with those obtained through numerical simulations and, finally, in Section 5, we discuss the main results.
2. Model

The system is composed of $N$ binary neurons, each one modeled by an Ising-like variable $S_i$ ($i = 1, \ldots, N$), which can take the values $\{-1, +1\}$. The state of the network at time $t+1$ is governed by a synchronous stochastic dynamics ruled by the following probability:

$$S_i(t+1) = \pm 1 \quad \text{with probability} \quad \tfrac{1}{2}\left[1 \pm \tanh(\beta_0 h_i(t))\right]. \tag{1}$$
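As a minimal illustration (our own sketch, not code from the paper), the synchronous stochastic rule of Eq. (1) can be written in a few lines; the local field h and the inverse temperature beta0 are taken as given:

```python
import numpy as np

def synchronous_update(h, beta0, rng):
    # Eq. (1): S_i(t+1) = +1 with probability [1 + tanh(beta0 * h_i)] / 2,
    # applied to all neurons in parallel (synchronous dynamics).
    p_plus = 0.5 * (1.0 + np.tanh(beta0 * h))
    return np.where(rng.random(h.shape) < p_plus, 1, -1)
```

For $\beta_0 \to \infty$ the rule reduces to the deterministic $T = 0$ dynamics $S_i(t+1) = \mathrm{sign}(h_i(t))$.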
The parameter $\beta_0$ (the inverse of the temperature $T_0$) measures the noise level of the net and $h_i(t)$ denotes the local field on neuron $i$ at time $t$, defined by

$$h_i(t) = h_i^H(t) - \frac{\gamma_0}{2}\,(S_i(t) + 1), \tag{2}$$

where $h_i^H(t)$ denotes the usual Hopfield post-synaptic potential (PSP) with asymmetric dilution

$$h_i^H(t) = \sum_{j \neq i} J_{ij} S_j(t). \tag{3}$$
Here $\gamma_0$ is included to mimic the refractory period. If neuron $i$ emits a spike at time $t$ ($S_i(t) = +1$), then an extra contribution $-\gamma_0$ to the PSP acts like a time-dependent threshold which favors the neuron to be at rest at $t+1$. If, on the other hand, neuron $i$ is inactive ($S_i(t) = -1$), then it works like a usual Hopfield neuron. It is important to stress that this model does not distinguish between absolute and relative refractory periods. Instead, it considers a kind of average of the two: large $\gamma_0$ means being within the absolute refractory period, in which the neuron cannot emit another spike no matter how large the PSP may be; small $\gamma_0$ means that neurons can emit spikes, but the required PSP is higher than the usual threshold, characterizing the relative refractory period. $J_{ij}$ is the synaptic matrix defined by the Hebbian rule

$$J_{ij} = C_{ij} T_{ij}, \tag{4}$$
where the $C_{ij}$'s are random independent variables responsible for the dilution and asymmetry of the synaptic matrix, chosen according to the following probability distribution:

$$P(C_{ij}) = \frac{C}{N}\,\delta(C_{ij} - 1) + \left(1 - \frac{C}{N}\right)\delta(C_{ij}), \tag{5}$$

where $C/N$ represents the mean connectivity per neuron. We assume that the synaptic efficacies of those connections that survive after dilution are given by the following Hebbian rule [2]:

$$T_{ij} = \sum_{\mu=1}^{p} \sum_{\sigma=1}^{s} \xi_i^{\mu\sigma} \xi_j^{\mu\sigma}, \tag{6}$$
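A minimal sketch (our own illustration, with hypothetical array shapes) of Eqs. (4)–(6): given the examples $\xi$ as a `(p, s, N)` array, each directed bond $C_{ij}$ is kept independently with probability $C/N$, so $J_{ij}$ and $J_{ji}$ are diluted independently, which produces the asymmetry:

```python
import numpy as np

def diluted_hebb(xi, C, rng):
    # xi: array of shape (p, s, N) holding the examples xi^{mu,sigma}_i.
    p, s, N = xi.shape
    patterns = xi.reshape(p * s, N)
    T = patterns.T @ patterns            # Eq. (6): T_ij = sum_{mu,sigma} xi_i xi_j
    mask = rng.random((N, N)) < C / N    # Eq. (5): independent C_ij (asymmetric dilution)
    J = mask * T                          # Eq. (4)
    np.fill_diagonal(J, 0)               # no self-coupling
    return J
```

With $C = N$ (no dilution) the couplings reduce to the plain Hebbian matrix with zero diagonal.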
where the $\xi_i^{\mu\sigma}$ are random independent variables taking the values $\pm 1$. These memories are organized in a hierarchical tree and grouped in $p$ categories ($\mu = 1, 2, \ldots, p$), each one containing $s$ configurations ($\sigma = 1, 2, \ldots, s$) that represent different elements or examples of the category. The probability distribution of these variables is given by

$$P(\xi_i^{\mu\sigma}) = b_+\,\delta(\xi_i^{\mu\sigma} - \zeta_i^{\mu}) + b_-\,\delta(\xi_i^{\mu\sigma} + \zeta_i^{\mu}), \tag{7}$$

where $b_\pm = (1 \pm b)/2$. The $\zeta_i^{\mu}$ are random independent variables that can take the values $\pm 1$ with the same probability and represent the $p$ concepts, and the parameter $b$ describes the correlation between an example and its concept,

$$\langle\langle \xi_i^{\mu\sigma} \zeta_j^{\nu} \rangle\rangle = b\,\delta_{ij}\,\delta^{\mu\nu}, \tag{8}$$
and also between two different examples of the same concept:

$$\langle\langle \xi_i^{\mu\sigma} \xi_j^{\mu\sigma'} \rangle\rangle = \delta_{ij}\left[b^2 + \delta^{\sigma\sigma'}(1 - b^2)\right]. \tag{9}$$
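The hierarchical statistics of Eqs. (7)–(9) can be checked numerically; the sketch below (illustrative, not from the paper) draws concepts $\zeta$ and examples $\xi$ and estimates the correlations (8) and (9):

```python
import numpy as np

def hierarchical_patterns(p, s, N, b, rng):
    # Concepts zeta^mu: unbiased +-1 coin flips; examples from Eq. (7):
    # xi = +zeta with probability (1+b)/2, xi = -zeta otherwise.
    zeta = rng.choice([-1, 1], size=(p, N))
    flip = np.where(rng.random((p, s, N)) < (1 + b) / 2, 1, -1)
    xi = flip * zeta[:, None, :]
    return zeta, xi

rng = np.random.default_rng(1)
zeta, xi = hierarchical_patterns(p=1, s=2, N=200_000, b=0.2, rng=rng)
overlap = (xi[0, 0] * zeta[0]).mean()   # Eq. (8): should be close to b
cross = (xi[0, 0] * xi[0, 1]).mean()    # Eq. (9), sigma != sigma': close to b^2
```

For $b = 0.2$ the two empirical averages come out near $0.2$ and $0.04$, respectively.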
In the next section we solve the dynamics of our model. We consider the time evolution of the network and look for recurrence relations for the relevant macroscopic parameters. In the absence of the time-dependent threshold, the so-called strong dilution condition $C \ll \ln N$ allows one to obtain an exact recurrence equation for the categorization error [13]. But when the refractory period is included as a threshold that depends on the state of the same neuron, this development is no longer valid. Since an exact analytical treatment along these lines is too hard to implement, we solve the dynamics following a different mathematical approach [26]. The validity of this approach is tested in Section 4, where we perform numerical simulations.

3. Dynamical equations

First, we suppose that the initial configuration represents an example of the first category. Then we look for a solution with macroscopic overlap only with the first concept and with the same overlap $m$ with all its stored examples, that is, $m_{1\sigma}(0) = m_s$ for any $\sigma$ and $m_{\mu\sigma}(0) \sim O(N^{-1/2})$ for $\mu > 1$, where

$$m_{\mu\sigma}(t) = \frac{1}{N} \sum_{i=1}^{N} \xi_i^{\mu\sigma} \langle S_i(t) \rangle. \tag{10}$$

Here, $\langle \cdots \rangle$ refers both to a thermal average at temperature $T_0$ and to an average over an ensemble of initial conditions. In order to measure the categorization ability of the network, let us define the categorization error

$$\varepsilon = \frac{1 - m_1}{2}, \tag{11}$$

where $m_1$ is the overlap between the first concept and the state of the network,

$$m_1(t) = \frac{1}{N} \sum_{i=1}^{N} \zeta_i^{1} \langle S_i(t) \rangle. \tag{12}$$
After performing the thermal average, we obtain the following equations from the definitions of $m_{\mu\sigma}$ (symmetric solutions) and $m_1$:

$$m_s(t+1) = \left\langle\!\left\langle \xi_i^{11} \tanh \beta_0\!\left[ h_i^H(t) - \frac{\gamma_0}{2}\,(S_i(t) + 1) \right] \right\rangle\!\right\rangle, \tag{13}$$

$$m_1(t+1) = \left\langle\!\left\langle \zeta_i^{1} \tanh \beta_0\!\left[ h_i^H(t) - \frac{\gamma_0}{2}\,(S_i(t) + 1) \right] \right\rangle\!\right\rangle, \tag{14}$$

from which we will extract the long-time behavior of $\varepsilon$. Here, $\langle\langle \cdot \rangle\rangle$ stands for the averages over the sets of patterns $\xi_i^{\mu\sigma}$ and $\zeta_i^{\mu}$. Note that, because the refractory term depends on the state of the same neuron, an explicit dependence on $S_i(t)$ remains even though we have already taken the thermal average. So, to preserve the
dependence on the history of the system, we average $S_i(t)$ again according to (1). In doing so, we now include a dependence also on $t-1$. The same procedure can be repeated until we finally reach the initial state $S_i(0)$. It can easily be verified that this mechanism yields the following expressions:

$$m_s(t+1) = \left\langle\!\left\langle \xi_i^{11} \left[ \frac{1}{2} A_i(t) + \frac{1}{2^2} B_i(t) A_i(t-1) + \frac{1}{2^3} B_i(t) B_i(t-1) A_i(t-2) + \cdots + \frac{1}{2^{t+1}} B_i(t) B_i(t-1) \cdots B_i(1) A_i(0) \right] \right\rangle\!\right\rangle = \left\langle\!\left\langle \xi_i^{11} \sum_{l=0}^{t} \frac{1}{2^{l+1}} A_i(t-l) \prod_{q=1}^{l} B_i(t-(q-1)) \right\rangle\!\right\rangle, \tag{15}$$

$$m_1(t+1) = \left\langle\!\left\langle \zeta_i^{1} \sum_{l=0}^{t} \frac{1}{2^{l+1}} A_i(t-l) \prod_{q=1}^{l} B_i(t-(q-1)) \right\rangle\!\right\rangle, \tag{16}$$
where $A_i(t)$ and $B_i(t)$ are given by

$$A_i(t) = \tanh[\beta_0 (h_i^H(t) - \gamma_0)] + \tanh[\beta_0 h_i^H(t)],$$
$$B_i(t) = \tanh[\beta_0 (h_i^H(t) - \gamma_0)] - \tanh[\beta_0 h_i^H(t)]. \tag{17}$$
An analytic development is difficult to implement due to the temporal correlations between the different neurons. Derrida et al. [17] proved that in the limit of extreme dilution ($C \ll \ln N$) these temporal correlations are negligible and the quenched disorder can be treated as an annealed one. Next, we assume that the temporal correlations among states of the system at different times can be neglected, i.e., all factors at different times in each term of Eqs. (15) and (16) can be averaged independently:

$$\left\langle\!\left\langle A_i(t-l) \prod_{q=1}^{l} B_i(t-(q-1)) \right\rangle\!\right\rangle = \langle\langle A_i(t-l) \rangle\rangle \prod_{q=1}^{l} \langle\langle B_i(t-(q-1)) \rangle\rangle. \tag{18}$$

This assumption, together with the strong dilution condition, makes our problem mathematically tractable. Under these conditions, $h_i(t)$ can be considered as a random variable which, in the limit $C \to \infty$, $p \to \infty$ with $\alpha \equiv p/C$ constant, has a Gaussian distribution whose mean value and variance can be evaluated. After some standard calculations,
we obtain the following recurrence equations:

$$m_s(t+1) = \sum_{l=0}^{t} \frac{1}{2^{l+2}} \left[ A_-(t-l) \prod_{q=1}^{l} B_-(t-(q-1)) + (-1)^l A_+(t-l) \prod_{q=1}^{l} B_+(t-(q-1)) \right], \tag{19}$$

$$m_1(t+1) = \sum_{l=0}^{t} \frac{1}{2^{l+2}} \left[ F_-(t-l) \prod_{q=1}^{l} G_-(t-(q-1)) + (-1)^l F_+(t-l) \prod_{q=1}^{l} G_+(t-(q-1)) \right], \tag{20}$$
where $A_\pm(t)$, $B_\pm(t)$, $F_\pm(t)$ and $G_\pm(t)$ are given by

$$A_\pm(t) = \int Dz\,[\tanh(\beta(\Delta_z \pm \gamma)) + \tanh(\beta \Delta_z)],$$
$$B_\pm(t) = \int Dz\,[\tanh(\beta(\Delta_z \pm \gamma)) - \tanh(\beta \Delta_z)],$$
$$F_\pm(t) = \int Dz\,[\tanh(\beta(\Phi_z \pm \gamma)) + \tanh(\beta \Phi_z)],$$
$$G_\pm(t) = \int Dz\,[\tanh(\beta(\Phi_z \pm \gamma)) - \tanh(\beta \Phi_z)], \tag{21}$$

and

$$\Delta_z = L_s z + m_s(t)[1 + (s-1)b^2], \qquad \Phi_z = L_1 z + s\,m_s(t)\,b,$$
$$L_s = \left\{(s-1)\left[m_s^2(t)\left(1 + (s-2)b^2 - (s-1)b^4\right) + \alpha s b^4\right] + \alpha s\right\}^{1/2},$$
$$L_1 = \left\{m_s^2(t)\,s\,(1 - b^2) + \alpha s\,[1 + (s-1)b^4]\right\}^{1/2}. \tag{22}$$

Here, $\beta = 1/T = C/T_0$ and $\gamma = \gamma_0/C$ define the reduced temperature and the reduced refractory parameter, respectively, and $Dz$ is the Gaussian measure

$$Dz = \frac{dz}{\sqrt{2\pi}}\, e^{-z^2/2}.$$
Note that for $\gamma = 0$, $B_\pm(t) = 0$ and $G_\pm(t) = 0$ for all $t$, and only the term $l = 0$ survives in Eqs. (19) and (20). So, we recover the results obtained for an extreme and asymmetrically diluted version of the Hopfield model studied in [13]. For $s = 1$ and $b = 1$ ($A_\pm(t) = F_\pm(t)$ and $B_\pm(t) = G_\pm(t)$ for all $t$) we recover the expressions obtained by da Silva et al. [26] for the problem of pattern recognition.
In spite of our simplification, Eqs. (19) and (20) still depend on the whole history of the neuron. A first approach consists in truncating the series at a given finite order, i.e., taking into account only the $l$ previous states of the network. In doing so, we observe that the categorization error always converges to a fixed-point attractor, independently of the order of the truncation, but with different values. Thus, we assume that the system always evolves to a fixed point, and the corresponding equations take the form:

$$m_s = \frac{1}{4} \sum_{l=0}^{\infty} \frac{1}{2^l} \left[ A_- B_-^{\,l} + (-1)^l A_+ B_+^{\,l} \right], \tag{23}$$

$$m_1 = \frac{1}{4} \sum_{l=0}^{\infty} \frac{1}{2^l} \left[ F_- G_-^{\,l} + (-1)^l F_+ G_+^{\,l} \right], \tag{24}$$
where $A_\pm$, $B_\pm$, $F_\pm$ and $G_\pm$ are given by (21) with $m_s(t) = m_s$ and $m_1(t) = m_1$ for all $t$. It can easily be verified that these two geometric series converge to the following expressions:

$$m_s = \frac{1}{2} \sum_{\lambda = \pm 1} \frac{\int Dz\,[\tanh(\beta(\Delta_z - \lambda\gamma)) + \tanh(\beta \Delta_z)]}{2 - \lambda \int Dz\,[\tanh(\beta(\Delta_z - \lambda\gamma)) - \tanh(\beta \Delta_z)]}, \tag{25}$$

$$m_1 = \frac{1}{2} \sum_{\lambda = \pm 1} \frac{\int Dz\,[\tanh(\beta(\Phi_z - \lambda\gamma)) + \tanh(\beta \Phi_z)]}{2 - \lambda \int Dz\,[\tanh(\beta(\Phi_z - \lambda\gamma)) - \tanh(\beta \Phi_z)]}. \tag{26}$$

We now consider the problem of having an extensive number of concepts ($\alpha > 0$) and start by analyzing the noiseless limit $T = 0$, for which we have performed numerical simulations. After some simple calculations, Eqs. (25) and (26) take the following forms:

$$m_s = \frac{1}{2} \sum_{\lambda = \pm 1} \frac{\mathrm{erf}\!\left[\dfrac{m_s(1+(s-1)b^2) - \lambda\gamma}{\sqrt{2}\,L_s}\right] + \mathrm{erf}\!\left[\dfrac{m_s(1+(s-1)b^2)}{\sqrt{2}\,L_s}\right]}{2 - \lambda\left\{\mathrm{erf}\!\left[\dfrac{m_s(1+(s-1)b^2) - \lambda\gamma}{\sqrt{2}\,L_s}\right] - \mathrm{erf}\!\left[\dfrac{m_s(1+(s-1)b^2)}{\sqrt{2}\,L_s}\right]\right\}}, \tag{27}$$

$$m_1 = \frac{1}{2} \sum_{\lambda = \pm 1} \frac{\mathrm{erf}\!\left[\dfrac{m_s s b - \lambda\gamma}{\sqrt{2}\,L_1}\right] + \mathrm{erf}\!\left[\dfrac{m_s s b}{\sqrt{2}\,L_1}\right]}{2 - \lambda\left\{\mathrm{erf}\!\left[\dfrac{m_s s b - \lambda\gamma}{\sqrt{2}\,L_1}\right] - \mathrm{erf}\!\left[\dfrac{m_s s b}{\sqrt{2}\,L_1}\right]\right\}}. \tag{28}$$
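Under the notational assumptions made above ($\alpha = p/C$, $\gamma$ the reduced refractory parameter, $\lambda$ the $\pm 1$ sum index), the $T = 0$ fixed-point equations (27)–(28) can be iterated numerically. This is an illustrative sketch, not the authors' code:

```python
import math

def t0_fixed_point(alpha, b, s, gamma, n_iter=2000, ms0=0.8):
    # Iterate Eqs. (27)-(28) at T = 0 until m_s settles at the fixed point.
    def rhs(mean, L):
        # One lambda = +-1 sum of Eq. (27)/(28) with tanh -> erf (T = 0).
        total = 0.0
        for lam in (+1.0, -1.0):
            e1 = math.erf((mean - lam * gamma) / (math.sqrt(2) * L))
            e0 = math.erf(mean / (math.sqrt(2) * L))
            total += (e1 + e0) / (2.0 - lam * (e1 - e0))
        return 0.5 * total
    ms = ms0
    for _ in range(n_iter):
        Ls = math.sqrt((s - 1) * (ms**2 * (1 + (s - 2) * b**2 - (s - 1) * b**4)
                                  + alpha * s * b**4) + alpha * s)   # Eq. (22)
        ms = rhs(ms * (1 + (s - 1) * b**2), Ls)                      # Eq. (27)
    L1 = math.sqrt(ms**2 * s * (1 - b**2) + alpha * s * (1 + (s - 1) * b**4))
    m1 = rhs(s * ms * b, L1)                                         # Eq. (28)
    return ms, m1, (1 - m1) / 2   # categorization error, Eq. (11)
```

For $\gamma = 0$, $s = 1$, $b = 1$ this reproduces sharp retrieval at small $\alpha$, while for large $\gamma$ the overlap $m_1$ saturates near $0.5$ ($\varepsilon \to 0.25$), consistent with the discussion that follows.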
In Fig. 1, we present $\varepsilon$ as a function of the number of examples $s$ and of the refractory parameter $\gamma$ for $b = 0.2$ and $\alpha = 0.1$. For small values of $\gamma$, the categorization error displays the following behavior as a function of the number $s$ of examples: after an initial decrease, it starts to increase monotonically with $s$ until it attains a maximum or a plateau, from which it falls exponentially into a categorization phase. For large $\gamma$ this behavior changes significantly: the categorization error presents a maximum already for small $s$, and a large number of examples must be presented before the net becomes able to categorize. Notice that, in spite of its reduced categorization ability, the system categorizes for any value of $\gamma$.
Fig. 1. The categorization error $\varepsilon$ as a function of the number of examples $s$ and the refractory parameter $\gamma$ for $T = 0$, $b = 0.2$ and $\alpha = 0.1$.
Fig. 2. Critical surface in the space of parameters $(\alpha, s, \gamma)$ separating the categorization phase (below) for $b = 0.2$.
Since close to the transition $m_s \ll 1$, we expand Eqs. (27) and (28), obtaining the following relation among the parameters $\alpha$, $b$, $s$ and $\gamma$ on the critical surface:

$$\alpha = \frac{8}{\pi s} \frac{\left[(1 + (s-1)b^2)\left(1 + \mathrm{erf}(R/\sqrt{2}) + e^{-R^2/2}\right)\right]^2}{(1 + (s-1)b^4)\left[2 + \mathrm{erf}(R/\sqrt{2})\right]^4}, \tag{29}$$

where

$$R = \frac{\gamma}{\sqrt{\alpha s\,[1 + (s-1)b^4]}}.$$
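Since $R$ itself contains $\alpha$, Eq. (29) is an implicit relation; a simple way to solve it is direct iteration. The sketch below is our own illustration under the same notational assumptions:

```python
import math

def alpha_c(gamma, s, b, n_iter=200):
    # Solve the implicit critical-capacity relation, Eq. (29), by iteration;
    # R = gamma / sqrt(alpha*s*(1+(s-1)*b^4)) feeds back into alpha.
    c2 = (1 + (s - 1) * b**2) ** 2
    c4 = 1 + (s - 1) * b**4
    alpha = 2 / math.pi          # start from the gamma = 0, s = 1, b = 1 value
    for _ in range(n_iter):
        R = gamma / math.sqrt(alpha * s * c4)
        num = (1 + math.erf(R / math.sqrt(2)) + math.exp(-R**2 / 2)) ** 2
        den = (2 + math.erf(R / math.sqrt(2))) ** 4
        alpha = (8 / (math.pi * s)) * c2 / c4 * num / den
    return alpha
```

For $\gamma = 0$ this returns $\alpha_c = 2/\pi$, and for $\gamma \to \infty$ it approaches $\alpha^* = 32/81\pi$, the two asymptotic values quoted in the text for $s = 1$.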
Notice that for $s = 1$ and $b = 1$ we recover the critical storage capacity $\alpha_c(\gamma)$ obtained in [26]. Fig. 2 displays the phase diagram in the $(\alpha, s, \gamma)$ space for $b = 0.2$. Along the critical surface the system undergoes a continuous transition from the categorization phase (below) to the non-categorization phase (above). As $s$ increases, the critical storage capacity $\alpha_c$ quickly decreases until it reaches a minimum, after which it grows very
Fig. 3. The categorization error $\varepsilon$ as a function of the number of examples $s$ for $\gamma \to \infty$, $T = 0$, $b = 0.2$ and different values of $\alpha/\alpha^*$ (0.01, 0.05, 0.1, 0.12, 0.1479 and 0.2). Observe that for small $\alpha$ and $s \to \infty$ the categorization error $\varepsilon \to 0.25$.
slowly for larger values of $s$. Note that along the line $s = 1$, the maximum storage capacities for asymptotic values of the refractory parameter are $\alpha_c = 2/\pi \approx 0.6366$ [17] for $\gamma = 0$ and $\alpha^* = 32/81\pi \approx 0.1257$ for $\gamma \to \infty$ [26]. A similar behavior, not shown in this diagram, appears when $s \to \infty$ for asymptotic values of $\gamma$ (see Eq. (29)). As occurs with the recognition ability, the system always categorizes irrespective of the value of the refractory parameter $\gamma$. An interesting behavior happens when $\gamma$ is very large, i.e., in the limit $\gamma \to \infty$, for which Eqs. (27) and (28) take the simple forms

$$m_s = \frac{4\,\mathrm{erf}\!\left[m_s(1+(s-1)b^2)/\sqrt{2}\,L_s\right]}{9 - \mathrm{erf}^2\!\left[m_s(1+(s-1)b^2)/\sqrt{2}\,L_s\right]}, \tag{30}$$

$$m_1 = \frac{4\,\mathrm{erf}\!\left[m_s s b/\sqrt{2}\,L_1\right]}{9 - \mathrm{erf}^2\!\left[m_s s b/\sqrt{2}\,L_1\right]}, \tag{31}$$

where $L_s$ and $L_1$ are given in (22). Fig. 3 shows $\varepsilon$ as a function of $s$ for $\gamma \to \infty$, $b = 0.2$ and several values of $\alpha/\alpha^*$, where $\alpha^* = 32/81\pi$ is the critical value above which the system does not categorize for large $\gamma$. For small values of $\alpha$ the categorization error is a monotonically decreasing function of $s$. As $\alpha$ increases this behavior changes and $\varepsilon$ presents a maximum for $s > 1$. For $\alpha > \alpha_p$ a plateau emerges with $\varepsilon = 0.5$ for $s^- < s < s^+$, and at $s^+$ the system undergoes a smooth transition to a categorization phase. Observe that there is always a value of $s$ above which the system starts categorizing. Taking the limit $\gamma \to \infty$ in Eq. (29), we obtain a simple expression for $\alpha$ as a function of the parameters $b$ and $s$:

$$\alpha = \frac{32}{81\pi s} \frac{[1 + (s-1)b^2]^2}{1 + (s-1)b^4}. \tag{32}$$
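Eq. (32) is explicit and easy to evaluate; the sketch below (illustrative) checks that $\alpha$ reduces to $\alpha^* = 32/81\pi$ for $s = 1$ and first decreases with $s$, as described for Fig. 2:

```python
import math

def alpha_inf(s, b):
    # gamma -> infinity critical capacity, Eq. (32).
    return (32 / (81 * math.pi * s)) * (1 + (s - 1) * b**2) ** 2 / (1 + (s - 1) * b**4)
```

For $s = 1$ the fractions are both 1 and $\alpha = 32/81\pi$ regardless of $b$.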
Observe that for $s = 1$ or $b = 1$ we obtain the critical value $\alpha^* = 32/81\pi$. Comparing our equations for $\gamma \to \infty$ with those obtained in [13] for $\gamma = 0$, we observe that both are similar. Following the same development, from Eq. (32) it is possible to obtain the following relation for $s^\pm$:

$$s^\pm = -\frac{\alpha^*(1-b^2)}{b^2(\alpha^* - \alpha)} + \frac{\alpha(1-b^4)}{2b^4(\alpha^* - \alpha)}\left[1 \pm \sqrt{1 - \frac{4\alpha^* b^2}{\alpha(1+b^2)^2}}\,\right]. \tag{33}$$

The value $\alpha_p$ at which a plateau emerges is then given by

$$\alpha_p = \frac{128\,b^2}{81\pi (1+b^2)^2}. \tag{34}$$

At $\alpha = \alpha_p$, the categorization error reaches the value 0.5 at a unique point

$$s_p = \frac{1+b^2}{b^2}, \tag{35}$$

and for $\alpha < \alpha_p$ it never reaches the plateau. For $\alpha > \alpha_p$ the plateau emerges at $s^-$ and finishes at $s^+$. In Fig. 3, with $b = 0.2$, we obtain $s_p = 26$ and $\alpha_p/\alpha^* \approx 0.1479$. For $s \gg 1$ and small values of $\alpha$, $m_1 \to \frac{1}{2}$ and $\varepsilon = (1 - m_1)/2$ decreases exponentially as

$$\varepsilon = \frac{1}{4} + \frac{5}{16 Q}\sqrt{\frac{2}{\pi}}\,\exp\!\left(-\frac{Q^2}{2}\right), \tag{36}$$

where $Q$ is given by

$$Q = \frac{s\,m_s\,b}{\sqrt{m_s^2\,s\,(1 - b^2) + \alpha s\,[1 + (s-1)b^4]}} = \frac{s\,m_s\,b}{L_1}.$$
From Eq. (36) we observe that for large values of $s$ and small $\alpha$ the categorization error tends asymptotically to $\varepsilon = 0.25$. This behavior can be better understood by looking at the overlap $m_1$ between the first concept and the state of the network: for large values of $\gamma$ (see the expression for the local field, Eq. (2)) the refractory term is greater than the PSP, and the temporal evolution drives the system to a regime where (a) all those neurons for which the first concept is in an inactive state ($\zeta_i^1 = -1$) align with it ($S_i = -1$); (b) all those neurons for which the first concept is in an active state ($\zeta_i^1 = +1$) oscillate between the active and inactive states. So, half of the net is destabilized by the effect of the parameter $\gamma$, and for small $\alpha$ the maximum value of $m_1$ is 0.5. This is only possible due to the condition of extreme dilution, which makes the states $S_i$ of the network independent of each other. It is important to point out that for the fully connected Hopfield model with refractory periods, a large value of $\gamma$ destabilizes the stored patterns completely [25]. We also consider the $T \neq 0$ case. Fig. 4 shows $\varepsilon$ as a function of $s$ and $T$ for $\alpha = 0.05$, $b = 0.2$ and $\gamma = 0.4$. For small values of $T$ the categorization error is a monotonically decreasing function of $s$. As $T$ increases this behavior changes and $\varepsilon$ presents a plateau regime starting at $s = 1$. The temperature $T$ seems to play the same role as the parameters $\alpha$ and $\gamma$: both damage the categorization ability of the system.
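The plateau quantities of Eqs. (33)–(35) can be verified directly. The following sketch (our illustration) reproduces $s_p = 26$ and $\alpha_p/\alpha^* \approx 0.1479$ for $b = 0.2$, with a small guard against floating-point rounding where the square root in (33) vanishes at $\alpha = \alpha_p$:

```python
import math

ALPHA_STAR = 32 / (81 * math.pi)   # critical capacity for gamma -> infinity

def plateau_values(b):
    # Eq. (34): capacity at which the eps = 0.5 plateau first appears,
    # and Eq. (35): the unique example number s_p where it touches 0.5.
    alpha_p = 128 * b**2 / (81 * math.pi * (1 + b**2) ** 2)
    s_p = (1 + b**2) / b**2
    return alpha_p, s_p

def s_plus_minus(alpha, b):
    # Eq. (33): plateau borders s^- and s^+ for alpha >= alpha_p.
    disc = 1 - 4 * ALPHA_STAR * b**2 / (alpha * (1 + b**2) ** 2)
    root = math.sqrt(max(0.0, disc))   # disc -> 0 exactly at alpha = alpha_p
    base = -ALPHA_STAR * (1 - b**2) / (b**2 * (ALPHA_STAR - alpha))
    pref = alpha * (1 - b**4) / (2 * b**4 * (ALPHA_STAR - alpha))
    return base + pref * (1 - root), base + pref * (1 + root)
```

At $\alpha = \alpha_p$ the two borders $s^-$ and $s^+$ coincide at $s_p$, as they should.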
Fig. 4. The categorization error $\varepsilon$ as a function of the number of examples $s$ and the temperature $T$ for $\gamma = 0.4$, $b = 0.2$ and $\alpha = 0.05$.
4. Numerical simulations

To compare the analytical and numerical results, we have computed the categorization error in the noiseless limit ($T = 0$). Although the limit of extreme dilution seems hard to implement on a computer, because the conditions $C \ll \ln N$ and $C \gg 1$ require a huge number of neurons, Arenzon and Lemke [28] and da Silva et al. [25] showed that this limit can be reproduced successfully using the less strong condition $C \ll N$. Following these ideas, we performed numerical simulations to compute the categorization ability of our model using a synchronous updating dynamics. We worked with systems of sizes $N = 20\,000$ and $40\,000$ neurons. In order to make a configuration average, the simulations were repeated for 50 different samples, using different concepts, examples and initial configurations. In Figs. 5a and b we present the categorization error as a function of $s$ for $T = 0$, $\alpha = 0.1$ and $b = 0.2$ along two cuts, $\gamma = 0.3$ and $0.5$, respectively. Although we are working far from the limit obtained analytically ($C$ and $p \to \infty$), in both cases the analytical (full line) and numerical (symbols) results agree very well for almost all values of $s$. Only for $s \gg 1$ do we observe a systematic deviation from the theoretical data; this behavior is due to the fact that in this region the simulations were done for $s \gg p$, while the analytical results are only valid for $p \gg s$.

5. Conclusions

In this paper, we studied the influence of the refractory periods on the categorization ability of a diluted and asymmetric neural network. This ingredient is introduced through a time-dependent threshold that gives rise to feedback effects on the dynamics of the system. The long-time behavior of the categorization error was obtained through an analytical approach for the time evolution of the overlap $m_1$ between the state of the system and one of the concepts.
Fig. 5. The categorization error $\varepsilon$ versus the number of examples $s$ for $T = 0$, $\alpha = 0.1$ and $b = 0.2$. The full lines correspond to the analytical solutions. For the simulations we used $p = 2$, $C = 20$ and $N = 20\,000$ (circles) and $N = 40\,000$ (squares). (a) $\gamma = 0.3$, (b) $\gamma = 0.5$.
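For completeness, here is a heavily scaled-down sketch of the simulation protocol of Section 4 (illustrative parameter values only; the paper used $N = 20\,000$–$40\,000$ with 50 samples):

```python
import numpy as np

def simulate(N=2000, C=20, p=2, s=5, b=0.2, gamma=0.3, steps=30, seed=0):
    rng = np.random.default_rng(seed)
    zeta = rng.choice([-1, 1], size=(p, N))                     # concepts
    flip = np.where(rng.random((p, s, N)) < (1 + b) / 2, 1, -1)
    xi = flip * zeta[:, None, :]                                # examples, Eq. (7)
    patterns = xi.reshape(p * s, N)
    J = (rng.random((N, N)) < C / N) * (patterns.T @ patterns)  # Eqs. (4)-(6)
    np.fill_diagonal(J, 0)
    gamma0 = gamma * C                                          # gamma = gamma0 / C
    S = xi[0, 0].copy()          # start on the first example of the first concept
    for _ in range(steps):
        h = J @ S - 0.5 * gamma0 * (S + 1)                      # local field, Eq. (2)
        S = np.where(h >= 0, 1, -1)                             # T = 0 dynamics
    m1 = (zeta[0] * S).mean()    # overlap with the first concept, Eq. (12)
    return m1, (1 - m1) / 2      # and the categorization error, Eq. (11)
```

As a sanity check, with $s = 1$, $b = 1$ and $\gamma = 0$ the network simply retrieves the stored concept ($m_1 \approx 1$); reproducing the categorization curves of Fig. 5 would require the paper's sizes and averaging over many samples.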
We have observed that the inclusion of the refractory periods in the dynamics of the system reduces the performance of the network as a categorization device. For small values of the refractory parameter $\gamma$ the system begins categorizing with few examples $s$. As $\gamma$ grows, the system needs a larger number of examples to categorize. For large $\gamma$ and small values of $\alpha$ ($\alpha < \alpha^* \approx 0.1257$), the system always categorizes, but with low quality (the categorization error tends asymptotically to $\frac{1}{4}$ when $s \to \infty$). However, irrespective of the value of $\gamma$, there is always a value of $s$ above which the system starts categorizing. In summary, we have presented an analytical approach that allows us to study, in an approximate way, the long-time dynamics of a model that takes into account the effect of the refractory period. Although this analysis is valid in the ultra-diluted limit $C \ll \ln N$, the numerical simulations can easily be reproduced using the less restrictive condition $C \ll N$. We verify that our numerical simulations agree very well with the theoretical data. We believe that this new analytic approach can be used in other problems in which the state of the system depends on the states at previous times, i.e., in feedback networks.
Acknowledgements

We thank F.A. Tamarit for fruitful discussions and the Núcleo de Computação de Alto Desempenho (NACAD) da Coordenação de Projetos e Pesquisa em Engenharia (COPPE-UFRJ) for the use of the Cray. This work was partially supported by the Brazilian research agencies CNPq and CAPES and by the Alagoas state research agency FAPEAL.
References

[1] J.J. Hopfield, Proc. Natl. Acad. Sci. USA 79 (1982) 2554.
[2] J.F. Fontanari, J. Phys. France 51 (1990) 2421.
[3] E.N. Miranda, J. Phys. I France 1 (1991) 999.
[4] M.C. Branchtein, J.J. Arenzon, J. Phys. I France 2 (1992) 2019.
[5] P.R. Krebs, W.K. Theumann, J. Phys. A: Math. Gen. 26 (1993) 3983.
[6] J.A. Martins, W.K. Theumann, Physica A 253 (1998) 38.
[7] D.R.C. Dominguez, Phys. Rev. E 58 (1998) 4811.
[8] R.L. Costa, A. Theumann, Physica A 268 (1999) 499.
[9] R.L. Costa, A. Theumann, Phys. Rev. E 61 (2000) 4860.
[10] D.A. Stariolo, F.A. Tamarit, Phys. Rev. A 43 (1992) 5249.
[11] D.R.C. Dominguez, W.K. Theumann, J. Phys. A: Math. Gen. 29 (1996) 749.
[12] R. Erichsen, W.K. Theumann, D.R.C. Dominguez, Phys. Rev. E 60 (1999) 7321.
[13] C.R. da Silva, F.A. Tamarit, N. Lemke, J.J. Arenzon, E.M.F. Curado, J. Phys. A: Math. Gen. 28 (1995) 1593.
[14] P.R. Krebs, W.K. Theumann, Phys. Rev. E 60 (1999) 4580.
[15] D.R.C. Dominguez, D. Bollé, Phys. Rev. E 56 (1997) 7306.
[16] C.R. Neto, J.F. Fontanari, J. Phys. A: Math. Gen. 31 (1998) 531.
[17] B. Derrida, E. Gardner, A. Zippelius, Europhys. Lett. 4 (1987) 167.
[18] J. Buchmann, K. Schulten, Biol. Cybern. 54 (1986) 319.
[19] M.Y. Choi, Phys. Rev. Lett. 61 (1988) 2809.
[20] D. Horn, M. Usher, Phys. Rev. A 40 (1989) 1036.
[21] K. Aihara, T. Takabe, M. Toyoda, Phys. Lett. A 144 (1990) 333.
[22] D.J. Amit, M.R. Evans, M. Abeles, Network 1 (1990) 381.
[23] W. Gerstner, J.L. van Hemmen, Network 3 (1992) 139.
[24] F.A. Tamarit, D.A. Stariolo, S.A. Cannas, P. Serra, Phys. Rev. E 53 (1996) 5146.
[25] C.R. da Silva, F.A. Tamarit, E.M.F. Curado, Int. J. Mod. Phys. C 7 (1996) 43.
[26] C.R. da Silva, F.A. Tamarit, E.M.F. Curado, Phys. Rev. E 55 (1997) 3320.
[27] C.R. da Silva, F.A. Tamarit, E.M.F. Curado, Comput. Phys. Commun. 121–122 (1999) 103.
[28] J.J. Arenzon, N. Lemke, J. Phys. A: Math. Gen. 27 (1994) 5161.