Dynamical features simulated by recurrent neural networks


Neural Networks 12 (1999) 609–615

Contributed article

F. Botelho*, Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, USA

Received 18 November 1996; received in revised form 16 February 1999; accepted 16 February 1999

Abstract

The evolution of two-dimensional neural network models with rank one connecting matrices and saturated linear transfer functions is dynamically equivalent to that of piecewise linear maps on an interval. It is shown that their iterative behavior ranges from being highly predictable, where almost every orbit accumulates to an attracting fixed point, to the existence of chaotic regions with cycles of arbitrarily large period. © 1999 Elsevier Science Ltd. All rights reserved.

Keywords: Neural network; Brain-State-in-a-Box

1. Introduction

Recurrent neural networks are mathematical models that attempt to mimic some functions easily performed by the human brain (Grossberg, 1988). Applications of neural networks to associative memory and pattern recognition problems rely on the stability of existing fixed points and cycles (Hopfield, 1982). Attracting behavior of fixed points and control of their locations allow a network to retrieve information or to recall a stored item by just using some partial content of that item. Sequences of recalled memories play an important role in our ability to comprehend a course of events. Moreover, periodic activity vitally influences our daily lives, for example in the control of motor functions such as the heartbeat. Questions of how neural networks maintain periodic activity, and what makes them fail, are of fundamental interest. By contrast, more complex invariant regions, designated chaotic regions, appear in the motion of virtually all natural phenomena. Population models of insects, the metabolism of cells, the propagation of impulses along our nervous system, and the motion of electrons in atoms are known to behave in a chaotic way. Some of the most impressive examples are in mathematics, where the solutions of apparently simple equations show great complexity. The significance of chaos in biological systems is far from being understood, but it seems to play an important role in explaining the adaptability of individuals to unknown environmental conditions (Skarda & Freeman, 1987). An

elucidative example is the chaotic process a human brain follows before mastering a concept, which has a decisive influence on its future applicability and development. The Brain-State-in-a-Box (BSB) is an abstract neural model, introduced by Anderson and coworkers (Anderson, Silverstein, Ritz & Jones, 1977), to simulate several psychological and cognitive behaviors such as categorical perception (Anderson et al., 1977; Anderson, 1983), multistable perception (Kawamoto & Anderson, 1985), word perception (Golden, 1986), and lexical ambiguity (Kawamoto, 1985). In this article we consider the BSB class of models with two interconnected neurons of saturated linear type and rank one connecting matrices. We characterize the dynamical behavior of such systems via piecewise linear maps of the interval. We show that such simple mechanisms offer a large variety of dynamical behaviors, including LY-chaoticity for a large set of parameter values. Some techniques used in this article were introduced by Wang (1991) while addressing a similar question for a class of two-dimensional neural nets of sigmoidal type (see also Botelho & Garzon, 1997; Hui & Zak, 1992). It is important to mention that if the connecting matrices have rank two this complex dynamics is not present (cf. Zhou, 1996). Therefore, simulations of resting stages following chaotic ones should assume a learning process that directs the network weights, at least temporarily, into a parameter region that generates rank one connecting matrices.

* Corresponding author. Tel.: +1-901-678-3131. E-mail address: [email protected] (F. Botelho).
0893-6080/99/$ – see front matter © 1999 Elsevier Science Ltd. All rights reserved. PII: S0893-6080(99)00026-X

2. Basic definitions and background results

The architecture of the neural models considered in this


Fig. 1. A recurrent two-dimensional neural network.

article consists of two interconnected neurons as depicted in Fig. 1. The probability that a neuron fires in a certain time interval is measured by a real number in the interval [0, 1], called its activation state. A state vector is an assignment of real numbers to the neurons composing the network, represented by a pair of activation states (x1, x2) with xi ∈ [0, 1]. Information that passes from neuron j to neuron i changes linearly via a multiplicative factor vij, designated the connecting weight. Each neuron, generically labeled i, acts on the sum of all incoming signals, vi1 x1 + vi2 x2, by a saturated linear transfer function σ:

σ(x) = 0 if x ≤ 0;  x if 0 < x < 1;  1 if x ≥ 1.

Saturated linear neurons attempt to simulate neural indifference outside some preassigned threshold value. If the stimulus x is below a certain value, the neuron is at rest; otherwise its response increases linearly until it reaches a saturation point. The discrete time evolution of the network is given by the following rule:

T(x1, x2) = (σ(v11 x1 + v12 x2), σ(v21 x1 + v22 x2)),  with (x1, x2) ∈ [0, 1]².
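The update rule above can be sketched in a few lines of Python. This is our own minimal illustration, not part of the paper; the weight values are arbitrary samples chosen so that the connecting matrix has rank one (second column equal to 0.5 times the first).

```python
def sigma(x):
    """Saturated linear transfer function: 0 below 0, identity on (0, 1), 1 above 1."""
    return 0.0 if x <= 0 else (1.0 if x >= 1 else x)

def make_T(v11, v12, v21, v22):
    """Discrete-time network map T(x1, x2) = (sigma(v11*x1 + v12*x2), sigma(v21*x1 + v22*x2))."""
    def T(state):
        x1, x2 = state
        return (sigma(v11 * x1 + v12 * x2), sigma(v21 * x1 + v22 * x2))
    return T

# Rank-one sample weights: lambda = v12/v11 = 0.5, so v12 = 0.5*v11 and v22 = 0.5*v21.
T = make_T(0.5, 0.25, 0.8, 0.4)
state = (0.3, 0.9)
for _ in range(5):
    state = T(state)   # iterates stay in [0, 1]^2 because sigma clamps both components
```

Because σ clamps each component, any initial state in [0, 1]² yields an orbit that remains in the unit square, which is the invariant box that gives the BSB model its name.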

Let f0 : X0 → X0 and f1 : X1 → X1 be two continuous maps defined on compact metric spaces X0 and X1, respectively. We denote by ∘ the standard composition of maps.

Definition 2.1. (Devaney, 1986). The maps f0 and f1 are said to be topologically conjugate if there exists a homeomorphism φ from X0 onto X1 such that φ ∘ f0 = f1 ∘ φ.

Systems that are topologically conjugate have identical qualitative dynamics. We show that the dynamics of T (whose connecting matrix W = (vij), i, j = 1, 2, has rank one) can be reduced to the dynamics of a planar map that depends only on one activation state.

Remark. Without loss of generality we may assume v11 ≠ 0. In fact, if v11 = 0 and v22 ≠ 0 then the linear homeomorphism φ(x, y) = (y, x) defines a conjugacy between T and T0(x, y) = (σ(v22 x + v21 y), σ(v12 x + v11 y)). If v11 = v22 = 0, and v12 ≤ 0 or v21 ≤ 0, then either T or T² is identically 0. Otherwise (i.e. v12 > 0 and v21 > 0),

T²(x, y) = (σ(v12 σ(v21 x)), σ(v21 σ(v12 y))),

where each component is a real valued function of the form h(x) = σ(a σ(bx)) with a > 0 and b > 0. The function h is given by

h(x) = 0 if x ≤ 0;  abx if 0 ≤ x ≤ 1/(max{a, 1} b);  σ(a) if x ≥ 1/(max{a, 1} b).

Therefore h has one fixed point, two fixed points, or infinitely many fixed points according as ab < 1, ab > 1, or ab = 1, respectively. The previous remark assures that, without loss of generality, the connecting matrix W is of the form

W = ( v11  λv11 ; v21  λv21 ),  with v11 ≠ 0 and λ = v12/v11.

Definition 2.2. A map F : D → D, where D is a subset of R^n, is said to be i-translation invariant if for every real number a and every n-tuple (x1, …, xn) ∈ D, F(x1, …, xi + a, …, xn) = F(x1, …, xi, …, xn).

Proposition 2.1. If v11 is nonzero then T is topologically conjugate to a 2-translation invariant map.

Proof. We define the homeomorphism

Ψ(x, y) = (x + (v12/v11) y, y),

with inverse Ψ⁻¹(x, y) = (x − (v12/v11) y, y). The map T1 = Ψ ∘ T ∘ Ψ⁻¹ is 2-translation invariant and given by

T1(x, y) = (σ(v11 x) + (v12/v11) σ(v21 x), σ(v21 x)). □

Throughout the remainder of the article T1, defined on Ψ([0, 1]²) (cf. Proposition 2.1), is written as T1(x, y) = (f(x), g(x)), where

f(x) = σ(v11 x) + (v12/v11) σ(v21 x)  and  g(x) = σ(v21 x).

The techniques used in this article may be extended to an arbitrary number of neurons. We consider only two neurons, as this situation provides the easiest network structure presenting all the dynamical properties we are investigating.
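The reduction to a map of one activation state can be checked numerically. In this sketch (ours, with arbitrary sample weights satisfying the rank-one form) we build T1 = Ψ ∘ T ∘ Ψ⁻¹ and confirm that it ignores its second coordinate and agrees with (f(x), g(x)).

```python
def sigma(x):
    return 0.0 if x <= 0 else (1.0 if x >= 1 else x)

# Rank-one sample weights: v12 = lam*v11 and v22 = lam*v21.
v11, v21, lam = 0.7, 0.9, -0.5
v12, v22 = lam * v11, lam * v21

def T(x1, x2):
    return (sigma(v11 * x1 + v12 * x2), sigma(v21 * x1 + v22 * x2))

def Psi(x, y):
    return (x + (v12 / v11) * y, y)

def Psi_inv(x, y):
    return (x - (v12 / v11) * y, y)

def T1(x, y):
    """T1 = Psi o T o Psi_inv, the 2-translation invariant conjugate of T."""
    return Psi(*T(*Psi_inv(x, y)))

def f(x):
    return sigma(v11 * x) + (v12 / v11) * sigma(v21 * x)

def g(x):
    return sigma(v21 * x)

# T1 does not depend on y, and its components are (f(x), g(x)).
a1 = T1(0.4, 0.1)
a2 = T1(0.4, 0.8)
```

Changing the second coordinate from 0.1 to 0.8 leaves T1 unchanged up to floating-point error, which is exactly the 2-translation invariance of Proposition 2.1.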


Definition 2.3. The sequence {T^n(x)}_{n∈N} is called the orbit of x under T. An orbit is periodic if there exists n ∈ N so that T^n(x) = x, and the number of distinct elements in {T^n(x)}_{n∈N} is designated the period of x.

We introduce a definition of orbital equivalence between two continuous maps defined on metric spaces. This notion depends only on the existing periodic orbits and their respective periods.

Definition 2.4. Two maps are said to be orbitally equivalent if there exists a period preserving bijection between their periodic points.

Example 2.1. (1) Two maps that are topologically conjugate are also orbitally equivalent, since their conjugating homeomorphism is a period preserving bijection. (2) The maps i(x, y) = (x, 0) and id(x) = x, where x and y are real numbers, are orbitally equivalent but not topologically conjugate.

3. Overall dynamical behavior
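Definition 2.3 is straightforward to operationalize for interval maps. The sketch below is our own (the tolerance and iteration cap are arbitrary choices): it computes orbits and detects periods, which is exactly the data that orbital equivalence (Definition 2.4) compares.

```python
def orbit(F, x, n_steps):
    """The first n_steps + 1 points of the orbit {x, F(x), F^2(x), ...}."""
    points = [x]
    for _ in range(n_steps):
        points.append(F(points[-1]))
    return points

def period_of(F, x, max_period=1000, tol=1e-9):
    """Least n <= max_period with |F^n(x) - x| < tol, or None if none is found."""
    y = x
    for n in range(1, max_period + 1):
        y = F(y)
        if abs(y - x) < tol:
            return n
    return None

# Example: under f(x) = 1 - x every point is periodic; 1/2 is fixed,
# and every other point has period 2.
f = lambda x: 1.0 - x
```

A period preserving bijection between the periodic points of two maps only needs the output of `period_of`, which is why orbital equivalence is a strictly weaker notion than topological conjugacy.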

In Section 2 we defined a conjugacy between T and a 2-translation invariant continuous map T1. In this section we show that T is orbitally equivalent to f(x) = σ(v11 x) + (v12/v11) σ(v21 x) (cf. Proposition 3.1). Moreover, the nonwandering sets of T and f are naturally related, and the entropy of T is equal to the entropy of f. We also include a description of the iterative behavior of f as the parameter values vary in R³ and indicate a region in the parameter space where chaoticity is observed.

Proposition 3.1. T is orbitally equivalent to f(x) = σ(v11 x) + (v12/v11) σ(v21 x).

The proof of this proposition relies on Lemma 3.1.

Lemma 3.1. The point (x0, y0) is a periodic point of T1 of period n if and only if x0 is a periodic point of f of period n and y0 = g(f^{n−1}(x0)).

Proof. The orbit of the point (x0, y0) under T1 is the sequence {(x0, y0), (f(x0), g(x0)), (f²(x0), g(f(x0))), …, (f^n(x0), g(f^{n−1}(x0)))}. Therefore (x, y) is periodic of period n if and only if f^n(x) = x and y = g(f^{n−1}(x)). □

Proof of Proposition 3.1. It is sufficient to prove the existence of a period preserving bijection between the set of all periodic points of T1 and the set of all periodic points of f. If P0 = (x0, y0) is a periodic point of period n of T1 then Lemma 3.1 implies that x0 is a periodic point of f and y0 = g(f^{n−1}(x0)). Therefore, P0 is the only period-n periodic point of T1 with first component equal to x0. Conversely, if x is a periodic point of f of period n then (f(x), g(x)) is a periodic point of T1 of period n. Moreover, if x1 is another periodic point of f of period n such that (f(x), g(x)) = (f(x1), g(x1)), then f^{n−1}(f(x)) = f^{n−1}(f(x1)) and x = x1. This shows that T1 and f are orbitally equivalent, which implies that T and f are orbitally equivalent. □

Next, we review the definition of nonwandering set, discussed by Smale (1967). The nonwandering set of a continuous map is an invariant set that, in many cases, captures all of the interesting dynamics of the map (cf. Devaney, 1986).

Definition 3.1. The nonwandering set of a continuous map F, defined on a compact metric space X, is the closed invariant subset of X consisting of all points x ∈ X such that every neighborhood of x intersects one of its forward iterates:

Ω(F) = {x ∈ X : for every neighborhood W of x, there exists n ∈ N such that F^n(W) ∩ W ≠ ∅}.

We represent by P1 the projection on the first component.

Lemma 3.2. P1(Ω(T1)) = Ω(f).

Proof. Let x ∈ P1(Ω(T1)) and let y be such that (x, y) ∈ Ω(T1). For every neighborhood Wx of x there exists n ∈ N so that T1^n(W) ∩ W ≠ ∅, where W = Wx × Wy and Wy is a neighborhood of y. Let (a, b) be a point in W such that T1^n(a, b) = (f^n(a), g(f^{n−1}(a))) ∈ W. This implies that f^n(Wx) ∩ Wx ≠ ∅, hence x ∈ Ω(f). Conversely, assume x ∈ Ω(f) and x ∉ P1(Ω(T1)). Then for every y the point (x, y) ∉ Ω(T1), which implies the existence of a neighborhood Vy of (x, y) satisfying

T1^n(Vy) ∩ Vy = ∅ for all n ∈ N.

We select finitely many such neighborhoods Vy1, Vy2, …, Vyk, enough to cover {x} × [0, 1] ∩ Ψ([0, 1]²) (Ψ([0, 1]²) is the domain of T1). Let V* = ∩_{i=1,…,k} P1(Vyi); as V* is a neighborhood of x and x ∈ Ω(f), we may choose a ∈ V* with some forward image f^{n1}(a) ∈ V*. Therefore, there exists yi so that (f^{n1}(a), g(f^{n1−1}(a))) ∈ Vyi. Since T1 is a planar map depending only on the first component of the input state vector, we have T1^{n1}(a, yi) = (f^{n1}(a), g(f^{n1−1}(a))) ∈ Vyi, and T1^{n1}(Vyi) ∩ Vyi ≠ ∅. This contradicts our assumption and completes the proof. □


The definition of topological entropy, stated below, was introduced by R. Bowen. Topological entropy is a topological invariant that quantifies the complexity of the orbit structure of a continuous map. Two orbits of a continuous map ψ defined on a compact metric space (X, d) are ε-indistinguishable up to order n if corresponding terms of order k (k ≤ n) are ε-close (i.e. d(ψ^k(x), ψ^k(y)) < ε for all 0 ≤ k ≤ n). As X is compact, there is only a finite number, n(ε, k, ψ), of ε-distinct orbits up to order k. The number n(ε, k, ψ) is expected to grow exponentially fast with k; its growth rate is given by (1/k) log n(ε, k, ψ), and its asymptotic growth rate is given by the limit lim_{k→∞} (1/k) log n(ε, k, ψ). The topological entropy of ψ is the asymptotic growth rate of n(ε, k, ψ) as k approaches infinity and ε approaches 0.

Definition 3.2. (Bowen, 1978). E is an (n, ε)-spanning set for ψ if for every x ∈ X there is x0 ∈ E such that

|ψ^j(x) − ψ^j(x0)| < ε  for all j = 0, …, n − 1.

The minimal cardinality of an (n, ε)-spanning set is denoted by S_ψ(n, ε), and its asymptotic growth rate is given by

h_ε(ψ) = lim sup_{n→∞} (1/n) log S_ψ(n, ε).

The topological entropy of ψ, h(ψ), is defined to be h(ψ) = lim_{ε→0} h_ε(ψ).

Proposition 3.2. h(T) = h(f).

Proof. The mappings T and T1 are topologically conjugate, hence h(T) = h(T1); the equality h(T1) = h(f) follows from Lemma 3.3. □

Lemma 3.3. If F(x, y) = (w1(x), w2(x)), where w1 and w2 are continuous real valued functions defined on the closed interval I = [a, b], then h(F) = h(w1).

Proof. As w2 is uniformly continuous there is 0 < ε0 < ε/2 such that, whenever x and y are ε0-close, their images are ε/2-close. Let E be an (n, ε0)-spanning set for w1 with minimal cardinality; then for every x ∈ I there is x0 ∈ E such that

|w1^j(x) − w1^j(x0)| < ε0,  j = 0, …, n − 1.

The uniform continuity of w2 implies that |w2(w1^j(x)) − w2(w1^j(x0))| < ε/2. Now consider the set E × E0, where E0 = {a, a + ε0, a + 2ε0, …, b}. The number of elements in E0 is at most ⌊(b − a)/ε0⌋ + 1 (⌊(b − a)/ε0⌋ represents the largest integer less than or equal to (b − a)/ε0), and the set E × E0 is (n, ε)-spanning for F. Therefore, S_F(n, ε) ≤ S_{w1}(n, ε0) × (⌊(b − a)/ε0⌋ + 1) and h(F) ≤ h(w1). Conversely, given E an (n, ε)-spanning set for F with minimal
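Bowen's definition can be made concrete for interval maps. The sketch below is our own construction (the uniform grid is a finite approximation of "for every x ∈ X"): it checks whether a given finite set E is (n, ε)-spanning for a map ψ on [a, b], so S_ψ(n, ε) is the size of the smallest E that passes this test.

```python
def is_spanning(psi, E, n, eps, a, b, grid=2001):
    """Test whether E is (n, eps)-spanning for psi on [a, b] (Definition 3.2),
    checking the spanning condition on a uniform grid of sample points."""
    def orbit(x):
        points = []
        for _ in range(n):        # the n observations j = 0, ..., n-1
            points.append(x)
            x = psi(x)
        return points

    e_orbits = [orbit(x0) for x0 in E]
    for i in range(grid):
        x = a + (b - a) * i / (grid - 1)
        ox = orbit(x)
        # x must be eps-shadowed by some point of E for n steps
        if not any(all(abs(u - v) < eps for u, v in zip(ox, eo)) for eo in e_orbits):
            return False
    return True

# For the identity map, a 0.05-net of [0, 1] is (n, 0.1)-spanning for every n,
# while the two endpoints alone are not.
ident = lambda x: x
net = [k * 0.05 for k in range(21)]
```

For an expanding map the minimal passing E must grow roughly like e^{nh}, and the limits in the definition extract exactly that growth rate h.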

cardinality, the set P1(E) is (n, ε)-spanning for w1, so h(w1) ≤ h(F). □

We describe the qualitative dynamical behavior of T via the study of the 3-parameter family f(x) = σ(ax) + λσ(bx). It is proved in Proposition 3.1 that this function is orbitally equivalent to T when a = v11, λ = v12/v11 and b = v21. The domain of f is the closed interval P1(Ψ([0, 1]²)) = [min{λ, 0}, 1 + max{λ, 0}]. We define the function h(x) = (x + |x|)/2 (|x| denotes the absolute value of x), to be used in the statement of Lemma 3.4.

Lemma 3.4. If λ ≥ 0 then f has infinitely many fixed points, exactly one, or exactly two according as h(a) + λh(b) = 1, < 1, or > 1, respectively.

Proof. If λ ≥ 0, the domain of f is the closed interval [0, 1 + λ]. If, in addition, b ≤ 0 then f(x) = σ(ax) and the result clearly follows. If b > 0 and a ≤ 0, the homeomorphism Φ(x) = λx reduces this case to the previous one, as the map

(1/λ) σ(λax) + σ(λbx)   (λa ≤ 0)

is topologically conjugate to f. It remains to study the case a > 0 and b > 0. If 0 ≤ x ≤ min{1/max(a, b), 1 + λ}, then f(x) = (a + λb)x. Therefore f has infinitely many fixed points if a + λb = 1 and exactly one fixed point (x = 0) if a + λb < 1. When a + λb > 1, since the range of f is contained in [0, 1 + λ] and (a + λb)(1 + λ) > (1 + λ), f has another fixed point different from 0. □

Now we consider λ < 0, so the domain of f is [λ, 1]. We start by studying the orientation preserving case, i.e. a ≥ 0 and b ≤ 0. It is easy to see that if a = 1 or λb = 1 then f has infinitely many fixed points.

Lemma 3.5. If λ < 0, f is orientation preserving, and (a − 1)(λb − 1) ≠ 0, then f has at most three fixed points (x = λ, 0, 1). Moreover, x = λ is a fixed point of f if and only if λb ≥ 1, and x = 1 is a fixed point of f if and only if a ≥ 1.

Proof. The point x = 0 is always fixed under f. As we are assuming a > 0 and b < 0, f can be written as

f(x) = σ(ax) if x ≥ 0;  λσ(bx) if x < 0.

Each branch of f is a piecewise linear map with at most two pieces, one of which has positive slope, equal to a if x ≥ 0 and to λb if x ≤ 0. □

In the orientation reversing case (i.e. a < 0 and b > 0),

f(x) is given by

f(x) = σ(ax) if x ≤ 0;  λσ(bx) if x ≥ 0.

Once more, each branch of f is a piecewise linear map consisting of at most two pieces, one of which has negative slope (and the other one, if it exists, has zero slope). Under these assumptions f may have periodic points of period 2. Furthermore, f has infinitely many periodic points of period 2 iff abλ = 1; otherwise it has only one fixed point (x = 0), or a fixed point and a periodic point of period 2 (x = 0 and {λ, 1}). This is summarized in Lemma 3.6, whose proof is omitted as it follows similar techniques.

Lemma 3.6. If λ < 0, f is orientation reversing, and abλ ≠ 1, then f has a periodic point of period 2 ({λ, 1}) iff abλ > 1.

If λ < 0 and ab > 0 then we may simply assume a > 0 and b > 0: the homeomorphism Φ(x) = λx defines a conjugacy between f(x) and the function

(1/λ) σ(aλx) + σ(bλx)   (λa > 0 and λb > 0).

We include in Table 1 the dynamics encountered while varying the parameter values.

Table 1
Dynamical behavior of f for some parameter values (λ < 0, a > 0, and b > 0)

Periodic orbits of f          | Parameter values
Infinitely many fixed points  | a + λb = 1.
One fixed point               | a + λb < 1 and λ < −1 + max{1/a, 1/b}.
Two fixed points              | a + λb < 1 and λ = −1 + max{1/a, 1/b}; or a + λb > 1 and a < b; or a + λb > 1, a > b, and λb > −1.
Three fixed points            | a + λb < 1 and λ > −1 + max{1/a, 1/b}.
Route to chaos                | a + λb > 1, a > b, and λb ≤ −1.

For most parameter values the dynamics of f is easily predictable: every point is periodic or accumulates on a periodic orbit. If a > b > 0, a + λb > 1 and λb ≤ −1, the dynamics of f undergoes a series of period doubling bifurcations. These inequalities will be used in the proof of the forthcoming Theorem 4.3.

4. Chaotic dynamics

A neural network, as a biological model, should simulate some process of adaptation. An animal, while adjusting to a new environment, goes through a chaotic process of adaptability that apparently determines its future behavior when facing similar stimuli or environmental conditions. The evolution of scrambled sets may play the role of obscurity while searching for adaptability. The main goal of this section is to show that chaotic regions can be simulated by neural networks with rank one connecting matrices and saturated linear neurons. We include a definition of chaotic map due to Li and Yorke (1975), and we exhibit values for the weights that guarantee this chaoticity.

Definition 4.1. A map h defined on a compact metric space (X, d) is called chaotic in the sense of Li and Yorke (or LY-chaotic) if there exists an uncountable set S so that for every x, y in S, and every periodic point p of h, we have:

lim sup_{n→∞} |h^n(x) − h^n(y)| > 0,
lim inf_{n→∞} |h^n(x) − h^n(y)| = 0,
lim sup_{n→∞} |h^n(x) − h^n(p)| > 0.

A set S with these properties is called a scrambled set.

Proposition 4.1. T is LY-chaotic if and only if f is LY-chaotic.

Proof. Chaoticity is a topological invariant property. As T and T1 are topologically conjugate, it suffices to prove that T1 is LY-chaotic if and only if f is LY-chaotic. Let S be a scrambled set of f; then S* = {(x, x) : x ∈ S} is a scrambled set for T1. In fact, as g is uniformly continuous,

lim sup_{n→∞} |f^n(x) − f^n(y)| > 0  and  lim inf_{n→∞} |f^n(x) − f^n(y)| = 0,  for x, y ∈ S,

imply

lim sup_{n→∞} |T1^n(x, x) − T1^n(y, y)| > 0  and  lim inf_{n→∞} |g(f^{n−1}(x)) − g(f^{n−1}(y))| = 0.

The remaining part of the proof follows similarly. □
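The route-to-chaos row of Table 1 can be made tangible. In the sketch below the parameter values a = 5, b = 1, λ = −2 are our own choice inside that region (a + λb = 3 > 1, a > b > 0, λb = −2 ≤ −1); solving the piecewise linear branch equations f(f(f(x))) = x by hand yields the period-3 orbit {1/19, 3/19, 9/19}, which the code verifies.

```python
def sigma(x):
    return 0.0 if x <= 0 else (1.0 if x >= 1 else x)

# Parameters in the "route to chaos" region of Table 1:
# a > b > 0, a + lam*b > 1, lam*b <= -1.
a, b, lam = 5.0, 1.0, -2.0
f = lambda x: sigma(a * x) + lam * sigma(b * x)

# On [0, 1/5], f(x) = (a + lam*b)*x = 3x; on [1/5, 1/2], f(x) = 1 + lam*x = 1 - 2x.
# Following the itinerary (left, left, right) gives the period-3 point x = 1/19:
# 1/19 -> 3/19 -> 9/19 -> 1 - 18/19 = 1/19.
x0 = 1.0 / 19.0
x1 = f(x0)   # 3/19
x2 = f(x1)   # 9/19
x3 = f(x2)   # back to 1/19
```

A period-3 orbit forces, by Theorem 4.1, periodic orbits of every period, and Theorem 4.2 then gives LY-chaoticity for these parameter values.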

We state two well-known theorems for continuous maps on the interval that relate the existence of periodic points with overall chaoticity.

Theorem 4.1. (cf. Devaney, 1986). If f is a continuous interval map with a periodic orbit of period 3, then f has a periodic orbit of period n for every positive integer n.


Theorem 4.2. (cf. Jankova & Smítal, 1986). If f is continuous and has a periodic orbit with period divisible by an odd number greater than 1, then f is LY-chaotic.

We also include a folklore lemma that will be used in the proof of Theorem 4.3.

Lemma 4.1. Let f be a continuous function from the interval [a, b] into itself and let c be such that a < c < b. If f[a, c] ∩ f[c, b] ⊇ [a, b], then f has a periodic orbit of period 3.

Proof. (cf. Devaney, 1986). As f[a, c] ⊇ [a, b], there exist x0 and x1, the largest values in [a, c] whose images are c and b, respectively. Without loss of generality assume x0 < x1. Clearly f restricted to [x0, x1] has no fixed points and f³[x0, x1] ⊇ [x0, x1]. This implies the existence of a periodic point of period three in [x0, x1]. □

Theorem 4.3 establishes a sufficient condition on the connecting weights for the existence of LY-chaoticity. We consider v11 in [4, ∞) and λ < 0.

Theorem 4.3. If v11 > v21 > 0, 1 + λ < 0, and

λ ∈ ( (−v11 − √(v11² − 4v11)) / (2v21), (−v11 + √(v11² − 4v11)) / (2v21) ),

then T is LY-chaotic.

Remarks. We notice that the conditions of Theorem 4.3 imply that f satisfies the inequalities described in the last case of Table 1. If v11 > 0, v21 > 0, and λ lies in the interval above, then

v11 + λv21 > 1,  λv21 ≤ −1,  and  1 + λ(v21/v11) ≥ −1/(λv21).  (3)

Proof of Theorem 4.3. The condition

λ ∈ [ (−v11 − √(v11² − 4v11)) / (2v21), (−v11 + √(v11² − 4v11)) / (2v21) ]

is equivalent to the pair of inequalities

2λv21 / (v11 + √(v11² − 4v11)) ≥ −1,  (1)

2λv21 / (v11 − √(v11² − 4v11)) ≤ −1.  (2)

Inequalities (1) and (3), together with f(−1/(λv21)) = 0, imply f(1/v11) = 1 + λ(v21/v11) ≥ −1/(λv21) ≥ 1/v11 and f[0, 1/v11] ∩ f[1/v11, −1/(λv21)] ⊇ [0, −1/(λv21)]. Lemma 4.1 and Theorem 4.2 imply that f has a periodic point of odd period and is LY-chaotic. Therefore T is LY-chaotic by Proposition 4.1. □

5. Conclusions

Chaotic behavior can be simulated by BSB neural models with only two interconnected neurons and connecting matrices of rank one. We described a large region of the parameter space where such behavior occurs. Such networks are easy to implement, and they seem to capture several aspects of cognitive perception, including a complex learning path before stabilization in an attracting fixed point.

Acknowledgements

The author gratefully acknowledges several constructive suggestions made by the anonymous reviewers.

References

Anderson, J. (1983). Cognitive and psychological computation with neural models. IEEE Transactions on Systems, Man and Cybernetics, SMC-13, 799–815.
Anderson, J., Silverstein, J., Ritz, S., & Jones, R. (1977). Distinctive features, categorical perception and probability learning: some applications of a neural model. Psychological Review, 84, 413–451.
Botelho, F., & Garzon, M. (1997). Neural networks of rank one. Proceedings of the Thirteenth Annual CAM, 13, 47–57.
Bowen, R. (1978). On Axiom A diffeomorphisms. CBMS Regional Conference Series in Mathematics. Providence, RI: American Mathematical Society.
Devaney, R. (1986). An introduction to chaotic dynamical systems. Menlo Park, CA: Benjamin/Cummings.
Golden, R. (1986). The brain-state-in-a-box neural model is a gradient descent algorithm. Journal of Mathematical Psychology, 30, 73–80.
Grossberg, S. (1988). Nonlinear neural networks: principles, mechanisms and architectures. Neural Networks, 1, 17–61.
Hopfield, J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79, 2554–2558.
Hui, S., & Zak, S. (1992). Dynamical analysis of the brain-state-in-a-box (BSB) neural models. IEEE Transactions on Neural Networks, 3(1), 86–94.
Jankova, K., & Smítal, J. (1986). A characterization of chaos. Bulletin of the Australian Mathematical Society, 34, 283–292.
Kawamoto, A. (1985). Dynamical processes in the (re)solution of lexical ambiguity. PhD dissertation. Providence, RI: Brown University.
Kawamoto, A., & Anderson, J. (1985). A neural network model of multistable perception. Acta Psychologica, 59, 35–65.
Li, T., & Yorke, J. (1975). Period three implies chaos. American Mathematical Monthly, 82, 985–992.
Skarda, C., & Freeman, W. (1987). How brains make chaos in order to make sense of the world. Behavioral and Brain Sciences, 10, 161–195.
Smale, S. (1967). Differentiable dynamical systems. Bulletin of the American Mathematical Society, 73, 747–817.
Wang, X. (1991). Period-doublings to chaos in a simple neural network: an analytical proof. Complex Systems, 5, 425–441.
Zhou, M. (1996). Fault-tolerance for two neuron networks. Master's thesis, The University of Memphis.