
Volume 144, number 6,7    PHYSICS LETTERS A    12 March 1990

CHAOTIC NEURAL NETWORKS

K. AIHARA ¹, T. TAKABE and M. TOYODA
Department of Electronic Engineering, Faculty of Engineering, Tokyo Denki University, 2-2 Nishiki-cho, Kanda, Chiyoda, Tokyo 101, Japan

Received 15 March 1989; revised manuscript received 2 October 1989; accepted for publication 9 January 1990
Communicated by A.P. Fordy

A model of a single neuron with chaotic dynamics is proposed by considering the following properties of biological neurons: (1) graded responses, (2) relative refractoriness and (3) spatio-temporal summation of inputs. The model includes some conventional models of a neuron as its special cases; namely, chaotic dynamics is introduced as a natural extension of the former models. Chaotic solutions of both the single chaotic neuron and the chaotic neural network composed of such neurons are numerically demonstrated.

1. Introduction

The recent activity of studies on neurocomputing is forming a new trend toward parallel distributed processing based upon artificial neural networks [1,2]. Artificial neural networks are composed of simple elements of artificial neurons modeling biological neurons. A usual neuron model is a simple threshold element transforming a weighted summation of the inputs into the output through a nonlinear output function with threshold. However, from the viewpoint of neurophysiology, there is firm criticism that real neurons are far more complicated than such simple threshold elements [3,4]. One of the typical characteristics which biological neurons have but the usual artificial neurons lack is chaotic behavior experimentally observable in a single neuron [5-10]. For instance, it has been clarified not only experimentally with squid giant axons but also numerically with the Hodgkin-Huxley equations [11] that responses of a resting nerve membrane to periodic stimulation are not always periodic and that the apparently nonperiodic responses can be understood as deterministic chaos [7-9,12]. Although there exist interesting neural and similar network models the macroscopic or systemic behavior of which is chaotic [13-24], there are few simple models of a single neuron with chaotic dynamics. On the other hand, celebrated models of nerve membranes such as the Hodgkin-Huxley equations [11] and the FitzHugh-Nagumo equations [25,26] are too complicated for elements of artificial neural networks, even if these equations and their modified or generalized models [18,27,28] can reproduce the experimentally observed chaos [8,12,18,27-32]. In this report, we propose a model of a single neuron which can describe the experimentally observed chaotic responses qualitatively but still is simple.

2. Complete devil's staircases in responses of a neuron model

The history of modeling the dynamics of biological neurons traces back to early prominent models such as the McCulloch-Pitts neuron [33] and Caianiello's neuronic equation [34]. Caianiello's neuronic equation, which includes the McCulloch-Pitts model as a special case, reads

¹ Present address: Department of Mathematics, University of Western Australia, Nedlands, WA 6009, Australia.


x_i(t+1) = u\Big( \sum_{j=1}^{M} \sum_{r=0}^{t} T_{ij}^{(r)} x_j(t-r) - \theta_i \Big) ,   (1)


where x_i(t+1) is the output of the ith neuron at the discrete time t+1; x_i takes either 1 (firing) or 0 (non-firing); u is the unit step function such that u(y)=1 for y≥0 and u(y)=0 for y<0; M is the number of the neurons in the neural network; T_ij^(r) (for i≠j) is the connection weight with which the firing of the jth neuron affects the ith neuron after the r+1 time units; T_ii^(r) is the memory coefficient of relative refractoriness with which the firing of the ith neuron retains influence on itself after the r+1 time units; and θ_i is the threshold for the all-or-none firing of the ith neuron. The discontinuous output function u in eq. (1) is a mathematical representation of the so-called all-or-none law that the output of a neuron has the alternatives of presence and absence of a full-size action potential depending upon whether the strength of stimulation is more than the threshold or not. In 1971, Nagumo and Sato [35] analysed response characteristics of a neuron with a single input on the basis of a modified Caianiello model. They assumed that the influence of the refractoriness due to a past firing decreases exponentially with time [35,36]; namely, T_ii^(r) = -αk^r, where k takes a value between 0 and 1 and α is a positive parameter. Eq. (2) shows the Nagumo-Sato model [35]:

x(t+1) = u\Big( A(t) - \alpha \sum_{r=0}^{t} k^r x(t-r) - \theta \Big) ,   (2)

where A(t) is the strength of the input at the discrete time t, and k is the damping factor of the refractoriness. By defining a new variable y(t+1) corresponding to the internal state of the neuron as follows,

y(t+1) = A(t) - \alpha \sum_{r=0}^{t} k^r x(t-r) - \theta ,   (3)

eq. (2) can be simplified as eqs. (4) and (5) [35]:

y(t+1) = k\,y(t) - \alpha u(y(t)) + a(t) ,   (4)

x(t+1) = u(y(t+1)) ,   (5)

where

a(t) = A(t) - k A(t-1) - \theta (1-k) .   (6)

While the dynamical behavior of the neuron is calculated with eq. (4) on the internal state y, the value of the output is obtained by transforming the internal state y into the output x with eq. (5). In particular, when the input stimulation is composed of periodic pulses with the constant amplitude A, as frequently used in electrophysiological experiments, a(t) of eq. (6) is temporally constant as follows,

a = (A - \theta)(1 - k) .   (7)

Fig. 1. Response characteristics of eqs. (4) and (5) with the bifurcation parameter a of eq. (7), where k=0.5, α=1.0 and y(0)=0.1: (a) the bifurcation diagram, (b) the Lyapunov exponent λ and (c) the average firing rate ρ.

Response characteristics of eqs. (4) and (5) with the bifurcation parameter a have been analysed in detail [35,37,38]. Fig. 1a shows an example of the bifurcation diagram with changing value of the bifurcation parameter a. Figs. 1b and 1c show the corresponding characteristics of the Lyapunov exponent λ and the average firing rate, or the excitation number, ρ, respectively.


Here λ and ρ are defined as follows:

\lambda = \lim_{n \to +\infty} \frac{1}{n} \sum_{t=0}^{n-1} \ln \left| \frac{dy(t+1)}{dy(t)} \right| ,   (8)

\rho = \lim_{n \to +\infty} \frac{1}{n} \sum_{t=0}^{n-1} x(t) .   (9)

Since negative values of the Lyapunov exponent characterize periodic solutions, fig. 1b implies that almost all the responses are periodic. It has actually been clarified [35,37,38] that the response characteristics of eqs. (4) and (5) form complete devil's staircases [39-41]; that is, the equations have chaotic solutions only at a self-similar Cantor set of the parameter values with zero Lebesgue measure.
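As a numerical illustration of eqs. (4), (5) and (9) (not part of the original paper), the following Python sketch iterates the Nagumo-Sato map with the fig. 1 parameter values (k=0.5, α=1.0, y(0)=0.1) and estimates the excitation number ρ over a grid of the bifurcation parameter a; the scanned range of a, the iteration counts and the function names are illustrative assumptions.

```python
import numpy as np

def nagumo_sato_firing_rate(a, k=0.5, alpha=1.0, y0=0.1,
                            n_transient=1000, n_sample=10000):
    """Iterate eqs. (4), (5): y(t+1) = k*y(t) - alpha*u(y(t)) + a,
    x(t+1) = u(y(t+1)), and return the excitation number rho of eq. (9)."""
    u = lambda y: 1.0 if y >= 0.0 else 0.0
    y = y0
    for _ in range(n_transient):          # discard the transient
        y = k * y - alpha * u(y) + a
    fires = 0.0
    for _ in range(n_sample):
        y = k * y - alpha * u(y) + a
        fires += u(y)                     # x(t+1) = u(y(t+1))
    return fires / n_sample

# Scan the bifurcation parameter a (range chosen arbitrarily for illustration).
# Plotting rho against a reveals the complete devil's staircase; note that the
# Lyapunov exponent of eq. (8) equals ln k < 0 almost everywhere here, since
# dy(t+1)/dy(t) = k wherever the step function u is differentiable.
for a in np.linspace(0.0, 1.0, 21):
    print(f"a = {a:.3f}  rho = {nagumo_sato_firing_rate(a):.4f}")
```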

3. Chaotic responses of a biological neuron

In order to verify the response characteristics of complete devil's staircases predicted by eqs. (4) and (5), the corresponding experiment was carried out by stimulating squid giant axons with periodic pulses under a space-clamp condition keeping potentials and currents spatially uniform over some length of the axon [7,9]. Thereupon the experimental results demonstrated, contrary to the prediction by eqs. (4) and (5), that not only periodic but also chaotic responses can be observed easily and reproducibly and that the response characteristics of the real nerve membranes form incomplete devil's staircases with obvious chaos [7,9].

4. Modeling chaotic responses of a biological neuron

As is discussed in sections 2 and 3, though almost all the solutions of eqs. (4) and (5) are periodic, chaotic responses are experimentally detectable. This disagreement between the model and the experiment requires a modification of the basic equations (1) and (2).

It should be noted that the generation of action potentials by current stimulation of a single pulse does not obey the strict all-or-none law under the space-clamp condition [25,42,43]. In particular, Cole et al. [43] clearly showed, both experimentally with squid giant axons and numerically with the Hodgkin-Huxley equations, that the stimulus-response curve is not discontinuously all-or-none but continuously graded for the case of spatial uniformity; namely, the stimulus-response property of the nerve membrane is described not by an all-or-none step function such as the function u in eqs. (1) and (2) but by a continuously increasing function [25,42,43]. Although the space-clamp condition may seem to be too artificial, it should be noted that action potentials are actually initiated at a limited portion of the axon called an axon hillock or a trigger zone. Moreover, the length of the trigger zone, which was estimated to be about 1 mm by the experiment with squid giant axons in the state of a self-sustained oscillation of action potentials [44], approximately agrees with the length of the spatial uniformity in the experiment by Cole et al. [43]. Therefore, we assume that the output function of artificial neurons is a continuously increasing function f and replace the unit step function u in eq. (2) by the continuous function f as follows,

x(t+1) = f\Big( A(t) - \alpha \sum_{r=0}^{t} k^r g(x(t-r)) - \theta \Big) ,   (10)

where x(t+1) is the output of the neuron, or a graded action potential generated at the time t+1, which takes an analog value between 0 and 1; f is a continuous output function, e.g. the logistic function f(y) = 1/(1 + e^{-y/ε}) with the steepness parameter ε; and g is a function describing the relationship between the analog output and the magnitude of the refractoriness to the following stimulation. Although the function g may be complicated, as discussed later on, we keep g the identity function g(x) = x for the sake of simplicity and focus on effects of the continuous output function f in this report. As is the case with the Nagumo-Sato model, defining the internal state y(t+1) by

y(t+1) = A(t) - \alpha \sum_{r=0}^{t} k^r g(x(t-r)) - \theta   (11)

reduces eq. (10) to the following equations,


y(t+1) = k\,y(t) - \alpha g(f(y(t))) + a ,   (12)

x(t+1) = f(y(t+1)) .   (13)

Fig. 2 shows an example of the response characteristics of eqs. (12) and (13) with the logistic function f. The excitation number ρ is defined as

\rho = \lim_{n \to +\infty} \frac{1}{n} \sum_{t=0}^{n-1} h(x(t)) ,   (14)

where h is a function which denotes the waveform-shaping dynamics of the axon with a strict threshold for propagation of action potentials initiated at the trigger zone [25,45,46], assumed to be h(x)=1 for x≥0.5 and h(x)=0 for x<0.5. It should be noted that, unlike under the space-clamp condition, an all-or-none law holds for the propagation of action potentials along the axon if the length of the axon is sufficiently long [25,43,46]. The response characteristics in fig. 2 qualitatively reproduce alternating periodic-chaotic sequences of responses experimentally observed in squid giant axons [7-9,31]. Fig. 3 shows the classification of solutions to eq. (12) in the parameter space a × k. Shaded regions in fig. 3 correspond to chaotic solutions. Fig. 3 demonstrates that the chaotic neuron model of eqs. (12) and (13) has chaotic solutions in wide regions of the parameter space.
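As a concrete (non-authoritative) sketch of how the response characteristics of eqs. (12)-(14) can be scanned, the following Python fragment uses the fig. 2 parameter values (k=0.5, α=1.0, ε=0.04, y(0)=0.5, identity g, threshold 0.5 in h); the grid of a, the iteration counts and the function names are assumptions made for illustration.

```python
import numpy as np

def chaotic_neuron_scan(a, k=0.5, alpha=1.0, eps=0.04, y0=0.5,
                        n_transient=1000, n_sample=5000):
    """Iterate eqs. (12), (13) with the logistic output function and the
    identity refractory function g, returning (lambda, rho):
    lambda ... Lyapunov exponent of eq. (8), with dy(t+1)/dy(t) = k - alpha*f'(y)
    rho    ... excitation number of eq. (14), with h(x) = 1 for x >= 0.5."""
    f = lambda y: 1.0 / (1.0 + np.exp(-y / eps))
    y = y0
    for _ in range(n_transient):
        y = k * y - alpha * f(y) + a
    lyap, fires = 0.0, 0
    for _ in range(n_sample):
        fy = f(y)
        # derivative of the one-dimensional map of eq. (12):
        # dy(t+1)/dy(t) = k - alpha*f'(y), with f'(y) = f(y)(1-f(y))/eps
        lyap += np.log(abs(k - alpha * fy * (1.0 - fy) / eps))
        y = k * y - alpha * fy + a
        fires += 1 if f(y) >= 0.5 else 0   # h(x(t+1)) with x(t+1) = f(y(t+1))
    return lyap / n_sample, fires / n_sample

for a in np.linspace(0.0, 1.0, 11):
    lam, rho = chaotic_neuron_scan(a)
    print(f"a = {a:.2f}  lambda = {lam:+.3f}  rho = {rho:.3f}")
```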


Fig. 2. Response characteristics of eqs. (12) and (13) with the bifurcation parameter a, where k=0.5, α=1.0, f(y) = 1/(1 + e^{-y/0.04}) and y(0)=0.5: (a) the bifurcation diagram, (b) the Lyapunov exponent λ and (c) the average firing rate ρ.


Fig. 3. Classification of solutions to eq. (12) in the parameter space a × k, where α=1.0 and ε=0.02. (b) is an enlargement of an upper-middle part of (a). The periodicity and the Lyapunov exponent of each solution were examined with changing the parameter values of a and k by 1/500 and 1/300 in (a) and by 0.2/500 and 0.5/300 in (b), respectively. While each natural number designates a region of periodic solutions with the corresponding period, shaded regions correspond to chaotic solutions. Bubbles in shaded regions denote the existence of small regions of periodic solutions with higher periods.
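A minimal sketch (again, not from the paper) of how a fig. 3-style classification can be generated: the sign of the Lyapunov exponent of the map of eq. (12) is evaluated on a coarse grid of the (a, k) plane with α=1.0 and ε=0.02 as in the caption; the grid ranges and resolutions below are illustrative assumptions, far coarser than the 1/500 and 1/300 steps used for fig. 3.

```python
import numpy as np

def lyapunov(a, k, alpha=1.0, eps=0.02, y0=0.5, n_transient=500, n_sample=2000):
    """Lyapunov exponent of the one-dimensional map of eq. (12),
    y(t+1) = k*y(t) - alpha*f(y(t)) + a, with logistic f and identity g."""
    f = lambda y: 1.0 / (1.0 + np.exp(-y / eps))
    y = y0
    for _ in range(n_transient):
        y = k * y - alpha * f(y) + a
    s = 0.0
    for _ in range(n_sample):
        fy = f(y)
        s += np.log(abs(k - alpha * fy * (1.0 - fy) / eps))  # ln|dy(t+1)/dy(t)|
        y = k * y - alpha * fy + a
    return s / n_sample

# Coarse classification of the (a, k) plane: '#' marks lambda > 0 (chaotic),
# '.' marks lambda <= 0 (periodic), cf. the shaded regions of fig. 3.
for k in np.linspace(0.05, 0.95, 19):
    row = "".join("#" if lyapunov(a, k) > 0.0 else "."
                  for a in np.linspace(0.0, 1.0, 40))
    print(f"k = {k:.2f}  {row}")
```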

5. A model of chaotic neural networks

The neuron model with chaotic dynamics explained above can be generalized as an element of neural networks which we call "chaotic neural networks". Generally speaking, we need to consider two kinds of inputs as shown in fig. 4, namely feedback inputs from component neurons, as in Hopfield networks [1], and externally applied inputs, as in back-propagation networks [2,47], in order to design arbitrary architectures of artificial neural networks. The dynamics of the ith chaotic neuron in a neural network composed of M chaotic neurons can be modeled as

x_i(t+1) = f_i\Big( \sum_{j=1}^{M} W_{ij} \sum_{r=0}^{t} k^r h_j(x_j(t-r)) + \sum_{j=1}^{N} V_{ij} \sum_{r=0}^{t} k^r I_j(t-r) - \alpha \sum_{r=0}^{t} k^r g_i(x_i(t-r)) - \theta_i \Big) ,   (15)

where x_i(t+1) is the output of the ith chaotic neuron at the discrete time t+1; f_i is the continuous output function of the ith chaotic neuron; M is the number of the chaotic neurons in the neural network; W_ij is the connection weight from the jth chaotic neuron to the ith chaotic neuron; h_j is the transfer function of the axon for the propagating action potentials in the jth chaotic neuron; N is the number of externally applied inputs; V_ij is the connection weight from the jth externally applied input to the ith chaotic neuron; I_j(t-r) is the strength of the jth externally applied input at the time t-r; and g_i is the refractory function of the ith chaotic neuron. In eq. (15), effects of the past inputs are assumed to decay exponentially with time in the form of T_ij^(r) = W_ij k^r or V_ij k^r, where k is the same damping factor as that of the refractoriness. In summary, eq. (15) is a neuron model with the following three properties: (1) a continuous output function with graded action potentials, (2) an accumulating relative refractoriness with exponential temporal damping and (3) spatio-temporal summation of both the feedback inputs \sum_{j=1}^{M} W_{ij} \sum_{r=0}^{t} k^r h_j(x_j(t-r)) and the external inputs \sum_{j=1}^{N} V_{ij} \sum_{r=0}^{t} k^r I_j(t-r).

Fig. 4. A neuron model as an element of chaotic neural networks.

Eqs. (16) and (17) are the reduced forms of eq. (15):

y_i(t+1) = k\,y_i(t) + \sum_{j=1}^{M} W_{ij} h_j(f_j(y_j(t))) + \sum_{j=1}^{N} V_{ij} I_j(t) - \alpha g_i(f_i(y_i(t))) - \theta_i (1-k) ,   (16)

x_i(t+1) = f_i(y_i(t+1)) ,   (17)

where y_i(t+1) is the internal state of the ith chaotic neuron and defined as follows,

y_i(t+1) = \sum_{j=1}^{M} W_{ij} \sum_{r=0}^{t} k^r h_j(x_j(t-r)) + \sum_{j=1}^{N} V_{ij} \sum_{r=0}^{t} k^r I_j(t-r) - \alpha \sum_{r=0}^{t} k^r g_i(x_i(t-r)) - \theta_i .   (18)

Examples of dynamical behavior in simple chaotic neural networks are shown in fig. 5 with the Lyapunov spectra [48]. The temporal patterns with bursts of firing in fig. 5 are actually chaotic because the maximum Lyapunov exponents are positive. When k and α tend to 0 in eq. (15), eq. (15) is transformed into eq. (19):

x_i(t+1) = f_i\Big( \sum_{j=1}^{M} W_{ij} h_j(x_j(t)) + \sum_{j=1}^{N} V_{ij} I_j(t) - \theta_i \Big) .   (19)

Thus eq. (19) includes conventional models of neural networks such as McCulloch-Pitts networks [33], associative memory networks [49] and back-propagation networks [47].
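The following Python sketch (not part of the paper) iterates eqs. (16) and (17) for the three-neuron ring described in the caption of fig. 5 (W_12 = W_23 = W_31 = 0.5, all other W_ij and all V_ij zero, α=1.0, θ_i=0, ε=0.02, identity h_i and g_i), and estimates the Lyapunov spectrum by the standard QR re-orthonormalization procedure in the spirit of ref. [48]; the iteration counts and function names are assumptions made for illustration.

```python
import numpy as np

# Parameters taken from the caption of fig. 5: three chaotic neurons in a ring,
# W12 = W23 = W31 = 0.5 (all other W_ij zero), no external inputs, alpha = 1.0,
# theta_i = 0, f(y) = 1/(1 + exp(-y/0.02)), h_i and g_i the identity functions.
W = np.array([[0.0, 0.5, 0.0],
              [0.0, 0.0, 0.5],
              [0.5, 0.0, 0.0]])
alpha, theta, eps = 1.0, 0.0, 0.02

def f(y):
    return 1.0 / (1.0 + np.exp(-y / eps))

def step(y, k):
    """One iteration of eq. (16) with identity h, g and zero external input."""
    return k * y + W @ f(y) - alpha * f(y) - theta * (1.0 - k)

def jacobian(y, k):
    """Jacobian of the map: k*I + (W - alpha*I) * diag(f'(y))."""
    fp = f(y) * (1.0 - f(y)) / eps
    return k * np.eye(3) + (W - alpha * np.eye(3)) * fp[np.newaxis, :]

def lyapunov_spectrum(k, y0=np.array([1.0, 0.0, 0.0]),
                      n_transient=2000, n_sample=20000):
    y = y0.copy()
    for _ in range(n_transient):
        y = step(y, k)
    q = np.eye(3)
    logs = np.zeros(3)
    for _ in range(n_sample):
        q, r = np.linalg.qr(jacobian(y, k) @ q)   # QR re-orthonormalization
        logs += np.log(np.abs(np.diag(r)))
        y = step(y, k)
    return logs / n_sample

for k in (0.67503, 0.71, 0.769231):   # the three values of k used in fig. 5
    print(f"k = {k:.6f}  Lyapunov spectrum ~ {lyapunov_spectrum(k)}")
```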

Fig. 5. Temporal output patterns of chaotic neural networks composed of three neurons. The parameter values in eqs. (16) and (17) are: M=3 and α=1.0; W_12 = W_23 = W_31 = 0.5 and the other W_ij are 0.0; θ_i = 0.0, f(y) = 1/(1 + e^{-y/0.02}) and both h_i and g_i are the identity functions for i = 1, 2 and 3; all V_ij are 0.0; the initial conditions are y_1(0)=1.0, y_2(0)=0.0 and y_3(0)=0.0. The values of k are (a) 0.67503, (b) 0.71 and (c) 0.769231, respectively. The size of each square is proportional to the strength of the output. The Lyapunov spectra are (0.10, -0.11, -0.54) in (a), (0.11, -0.33, -0.79) in (b), and (0.13, -0.23, -1.29) in (c), respectively.

6. Discussion

We have proposed a neuron model with chaotic dynamics. The properties of the model are relevant to the following ones of biological neurons: (1) the continuous stimulus-response curve with graded action potentials [25,42,43], (2) the approximately exponential decay of relative refractoriness after firing [50] and the superposition of refractory effects due to preceding action potentials [51] and (3) the spatio-temporal summation of inputs through many synapses [52]. The model can qualitatively reproduce alternating periodic-chaotic sequences of responses experimentally observed with squid giant axons [7-9,31]. Although the output function f is assumed to be the logistic function in this report, eqs. (12) and (13) keep the similar chaotic dynamics for other continuous output functions, such as f(y) = (|y+ε| - |y-ε|)/4ε + 1/2 and other sigmoidal functions of y/ε.
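To make the role of the steepness parameter concrete, a short calculation (not in the original text, and assuming the identity refractory function g as above): the slope of the one-dimensional map of eq. (12) is

\frac{dy(t+1)}{dy(t)} = k - \alpha f'(y(t)) , \qquad f'(y) = \frac{f(y)\,(1-f(y))}{\varepsilon} \le \frac{1}{4\varepsilon} ,

so far from threshold the slope is close to k ∈ (0,1), while near threshold, where f(y) ≈ 1/2, it is approximately k - α/(4ε), which is less than -1 whenever ε < α/[4(k+1)]; with α = 1.0 and k = 0.5 this requires ε < 1/6, a condition satisfied by the values ε = 0.04 and 0.02 used in figs. 2 and 3.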

The one-dimensional map of eq. (12) is a kind of bimodal map with the (+ - +) successive signs of the slope on the three monotonic intervals [53]. As the value of the damping factor k in eq. (12) is between 0 and 1, the average gradients of the (+) branches, which correspond to the resting and firing states, are stable, or between 0 and 1. On the other hand, the average gradient of the (-) branch, which corresponds to the continuous threshold phenomenon [25], is unstable, or less than -1, if the steepness parameter ε of the logistic output function f is sufficiently small. Thus the model of eq. (12) adds a new family of bimodal maps related to neural dynamics to the various bimodal maps, such as cubic maps and circle maps, which have been studied extensively [53-61].

The refractory function g in eq. (12), which is neglected and assumed to be the identity function in this report, may actually be complicated, with not only the absolute and relative refractory phases but also the enhanced and depressed phases [25], the excitability fluctuations [3] and the supernormal phase [50]. Specification of the refractory function g would be necessary for a more precise description of the response characteristics of biological neurons.

Recently interesting studies have been progressing toward (1) possible functional roles of chaotic dynamics [16,21,22,28,29,62-65] and (2) complex spatio-temporal dynamics in high-dimensional chaotic systems such as coupled chaotic maps on lattices [66-68], both of which seem to have close relations to neural networks. Although it is still an open problem to explore the applicability of chaotic dynamics in neurocomputing, our framework of the chaotic neural networks at least makes it possible to introduce functions of deterministic chaos into artificial neural networks whenever necessary, because the chaotic neural network model includes some conventional models of artificial neural networks as its special cases.

References

[1] J.J. Hopfield, Proc. Natl. Acad. Sci. USA 79 (1982) 2554.
[2] D.E. Rumelhart, J.L. McClelland and the PDP Research Group, Parallel distributed processing, Vols. 1, 2 (MIT Press, Cambridge, 1986).
[3] J.P. Segundo, J. Theor. Neurobiol. 5 (1986) 1.
[4] G. Matsumoto, Future Generations Comput. Syst. 4 (1988) 39.
[5] A.V. Holden, W. Winlow and P.G. Haydon, Biol. Cybern. 43 (1982) 169.
[6] P.E. Rapp, I.D. Zimmerman, A.M. Albano, G.C. Deguzman and N.N. Greenbaun, Phys. Lett. A 110 (1985) 335.
[7] G. Matsumoto, K. Aihara, Y. Hanyu, N. Takahashi, S. Yoshizawa and J. Nagumo, Phys. Lett. A 123 (1987) 162.
[8] K. Aihara and G. Matsumoto, in: Chaos in biological systems, eds. H. Degn, A.V. Holden and L.F. Olsen (Plenum, New York, 1987) p. 121.
[9] G. Matsumoto, N. Takahashi and Y. Hanyu, in: Chaos in biological systems, eds. H. Degn, A.V. Holden and L.F. Olsen (Plenum, New York, 1987) p. 143.
[10] H. Hayashi and S. Ishizuka, in: Chaos in biological systems, eds. H. Degn, A.V. Holden and L.F. Olsen (Plenum, New York, 1987) p. 157.
[11] A.L. Hodgkin and A.F. Huxley, J. Physiol. (London) 117 (1952) 500.
[12] T. Takabe, K. Aihara and G. Matsumoto, Trans. Inst. Electron. Inf. Commun. Eng. J71-A (1988) 744 [in Japanese].
[13] M.Y. Choi and B.A. Huberman, Phys. Rev. A 28 (1983) 1204.
[14] M.Y. Choi and B.A. Huberman, Phys. Rev. B 28 (1983) 2547.
[15] G.A. Carpenter and S. Grossberg, Biol. Cybern. 48 (1983) 35.
[16] E. Harth, IEEE Trans. SMC 13 (1983) 782.
[17] M.A. Cohen and S. Grossberg, IEEE Trans. SMC 13 (1983) 815.
[18] G.B. Ermentrout, SIAM J. Appl. Math. 44 (1984) 80.
[19] K. Aoki, O. Ikezawa and K. Yamamoto, Phys. Lett. A 106 (1984) 343.
[20] K.E. Kürten and J.W. Clark, Phys. Lett. A 114 (1986) 413.
[21] W.J. Freeman, Biol. Cybern. 56 (1987) 139.
[22] I. Tsuda, E. Koerner and H. Shimizu, Prog. Theor. Phys. 78 (1987) 51.
[23] U. Riedel, R. Kühn and J.L. van Hemmen, Phys. Rev. A 38 (1988) 1105.
[24] H. Sompolinsky, A. Crisanti and H.J. Sommers, Phys. Rev. Lett. 61 (1988) 259.
[25] R. FitzHugh, in: Biological engineering, ed. H.P. Schwan (McGraw-Hill, New York, 1969) p. 1.
[26] J. Nagumo, S. Arimoto and S. Yoshizawa, Proc. IRE 50 (1962) 2061.

[27] G.A. Carpenter, SIAM J. Appl. Math. 36 (1979) 334.
[28] A. Lahiri, D.K. Goswami, U. Basu and B. Dasgupta, Phys. Lett. A 111 (1985) 246.
[29] M.R. Guevara, L. Glass, M.C. Mackey and A. Shrier, IEEE Trans. SMC 13 (1983) 790.
[30] K. Aihara, G. Matsumoto and Y. Ikegaya, J. Theor. Biol. 109 (1984) 249.
[31] K. Aihara, G. Matsumoto and M. Ichikawa, Phys. Lett. A 111 (1985) 251.
[32] K. Aihara and G. Matsumoto, in: Chaos, ed. A.V. Holden (Manchester Univ. Press/Princeton Univ. Press, Manchester/Princeton, 1986) p. 257.
[33] W.S. McCulloch and W.H. Pitts, Bull. Math. Biophys. 5 (1943) 115.
[34] E.R. Caianiello, J. Theor. Biol. 2 (1961) 204.
[35] J. Nagumo and S. Sato, Kybernetik 10 (1972) 155.
[36] E.R. Caianiello and A. De Luca, Kybernetik 3 (1966) 33.
[37] I. Tsuda, Phys. Lett. A 85 (1981) 4.
[38] M. Yamaguchi and M. Hata, in: Competition and cooperation in nerve nets, eds. S. Amari and M.A. Arbib (Springer, Berlin, 1982) p. 171.
[39] P. Bak and R. Bruinsma, Phys. Rev. Lett. 49 (1982) 249.
[40] P. Bak, Rep. Prog. Phys. 45 (1982) 587.
[41] M.H. Jensen, P. Bak and T. Bohr, Phys. Rev. Lett. 50 (1983) 1637.
[42] K.S. Cole, Membranes, ions and impulses (Univ. of California Press, Berkeley, 1968).
[43] K.S. Cole, R. Guttman and F. Bezanilla, Proc. Natl. Acad. Sci. 65 (1970) 884.
[44] G. Matsumoto, in: Nerve membrane - biochemistry and function of channel proteins, eds. G. Matsumoto and M. Kotani (Univ. of Tokyo Press, Tokyo, 1981) p. 203.
[45] A.F. Huxley, Ann. N.Y. Acad. Sci. 81 (1959) 221.
[46] J.W. Cooley and F.A. Dodge Jr., Biophys. J. 6 (1966) 583.
[47] D.E. Rumelhart, G.E. Hinton and R.J. Williams, Nature 323 (1986) 533.
[48] I. Shimada and T. Nagashima, Prog. Theor. Phys. 61 (1979) 1605.
[49] S. Amari and K. Maginu, Neural Networks 1 (1988) 63.
[50] I. Tasaki, Physiology and electrochemistry of nerve fibers (Academic Press, New York, 1982).
[51] T. Musha, Y. Kosugi, G. Matsumoto and M. Suzuki, IEEE Trans. BME 28 (1981) 616.
[52] D. Junge, Nerve and muscle excitation, 2nd Ed. (Sinauer Associates, 1981).
[53] R.S. MacKay and C. Tresser, Physica D 27 (1987) 412.
[54] R.M. May, Ann. N.Y. Acad. Sci. 316 (1979) 517.
[55] S. Fraser and R. Kapral, Phys. Rev. A 25 (1982) 3223.
[56] J. Coste and N. Peyraud, Physica D 5 (1982) 415.
[57] J. Belair and L. Glass, Phys. Lett. A 96 (1983) 113.
[58] J.P. Keener and L. Glass, J. Math. Biol. 21 (1984) 175.
[59] F.T. Arecchi, R. Badii and A. Politi, Phys. Rev. A 29 (1984) 1006.
[60] R.S. MacKay and C. Tresser, Physica D 19 (1986) 206.
[61] J. Ringland and M. Schell, Phys. Lett. A 136 (1989) 379.
[62] A.V. Holden, ed., Chaos (Manchester Univ. Press/Princeton Univ. Press, Manchester/Princeton, 1986).
[63] H. Degn, A.V. Holden and L.F. Olsen, eds., Chaos in biological systems (Plenum, New York, 1987).
[64] C.A. Skarda and W.J. Freeman, Behav. Brain Sci. 10 (1987) 161, and the open peer commentary.
[65] P.L. Christiansen and R.D. Parmentier, eds., Structure, coherence and chaos in dynamical systems (Manchester Univ. Press, Manchester, 1989).
[66] K. Aoki and N. Mugibayashi, Phys. Lett. A 114 (1986) 425.
[67] J.P. Crutchfield and K. Kaneko, in: Directions in chaos, ed. Hao Bai-lin (World Scientific, Singapore, 1987) p. 272.
[68] K. Kaneko, Physica D 34 (1989) 1.