On impulsive autoassociative neural networks


Neural Networks 13 (2000) 63–69 www.elsevier.com/locate/neunet

Contributed article

Zhi-Hong Guan a,*, James Lam b, Guanrong Chen c

a Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, People's Republic of China
b Department of Mechanical Engineering, The University of Hong Kong, Hong Kong, People's Republic of China
c Department of Electrical and Computer Engineering, University of Houston, Houston, TX 77204-4793, USA

Received 18 February 1998; accepted 12 October 1999

Abstract

Many systems existing in physics, chemistry, biology, engineering, and information science can be characterized by impulsive dynamics caused by abrupt jumps at certain instants during the process. These complex dynamical behaviors can be modeled by impulsive differential systems or impulsive neural networks. This paper formulates and studies a new model of impulsive autoassociative neural networks. Several fundamental issues, such as global exponential stability and existence and uniqueness of equilibria of such neural networks, are established. © 2000 Elsevier Science Ltd. All rights reserved.

Keywords: Autoassociative neural networks; Equilibria; Impulsive differential equations; Stability

☆ Supported by the National Natural Science Foundation of China under Grant 69774038, the China Natural Petroleum Corporation, and the HUST Foundation.
* Corresponding author. Tel.: +86-27-87543130; fax: +86-27-87543130. E-mail address: [email protected] (Z.-H. Guan).

1. Introduction

In the last three decades, neural network architectures have been extensively studied and developed (Carpenter, Cohen & Grossberg, 1987; Cohen & Grossberg, 1983; Fang & Kincaid, 1996; Grossberg, 1968, 1971, 1982, 1988; Guez, Protopopsecu & Barhen, 1988; Hopfield, 1982, 1984; Hou & Qian, 1998; Hunt, Sbarbaro, Zbikowski & Gawthrop, 1992; Li, Michel & Porod, 1988; Matsuoka, 1992; Michel & Gray, 1990; Si & Michel, 1994; Yang & Dillon, 1994). Various neural network architectures are inspired both by the principles governing biological neural systems and by well-established mathematical and engineering theories. The most widely used neural networks today fall into two groups: continuous and discrete networks. However, many networks existing in the real world display dynamics in between these two groups. These include, for example, many evolutionary processes, particularly some biological systems such as biological neural networks and bursting rhythm models in pathology. Other examples include optimal control models in economics, frequency-modulated signal processing systems, and flying object motions. All these systems are characterized by abrupt changes of states at certain instants (Bainov & Simeonov, 1989; Guan, Liu & Wen, 1995; Lakshmikantham, Bainov & Simeonov, 1989; Liu & Guan, 1996; Pandit & Deo, 1982). Moreover, impulsive phenomena can also be found in other fields of information science, electronics, automatic control systems, computer networking, artificial intelligence, robotics, and telecommunications (Gelig & Churilov, 1998). Many sudden and sharp changes occur instantaneously, in the form of impulses, and cannot be well described by purely continuous or purely discrete models. Therefore, it is important and, in fact, necessary to study impulsive systems. This paper is an attempt toward this goal. Specifically, in this work we introduce a new type of neural networks, impulsive neural networks, as an appropriate description of such phenomena of abrupt qualitative dynamical changes of essentially continuous systems.

With regard to neural networks, additive neural networks, known as Grossberg–Cohen–Hopfield neural networks, are especially important. They were first proposed by Grossberg in the 1960s (Grossberg, 1968, 1971) and then developed by Grossberg and his colleagues, and by Hopfield (Cohen & Grossberg, 1983; Grossberg, 1982; Hopfield, 1982, 1984). In recent years, additive neural networks have been extensively studied, in both continuous-time and discrete-time settings, and applied to associative memory, model identification, optimization problems, etc. Many essential features of these networks, such as qualitative properties of stability, oscillation, and convergence, have been investigated (e.g. Carpenter et al., 1987;



Fang & Kincaid, 1996; Grossberg, 1988; Guez et al., 1988; Hou & Qian, 1998; Hunt et al., 1992; Li et al., 1988; Matsuoka, 1992; Michel & Gray, 1990; Si & Michel, 1994; Yang & Dillon, 1994). This motivates the present investigation of impulsive autoassociative neural networks. More specifically, in this paper we first introduce a basic model of impulsive autoassociative neural networks and then study the existence and uniqueness of its equilibria, as well as the exponential stability of this new model. The paper is organized as follows. In Section 2, the impulsive autoassociative neural network model is described. Then, in Section 3, the problem of existence and uniqueness of equilibria of the model is studied. The global exponential stability property for the new model is finally established in Section 4, with conclusions given in Section 5.

2. The impulsive autoassociative neural network

Based on the structure of the Grossberg–Cohen–Hopfield model (Cohen & Grossberg, 1983; Grossberg, 1968, 1971, 1982, 1988; Hopfield, 1982, 1984), the proposed impulsive autoassociative neural network model is described by the following measure differential equation:

\[ C_i\,Dx_i = -\frac{x_i}{R_i}\,Du_i + \sum_{j=1}^{n} T_{ij}\,y_j\,Dv_j + I_i, \qquad y_i = g_i(x_i), \quad i = 1,\dots,n, \tag{1} \]

where \(C_i > 0\), \(R_i > 0\), and \(I_i\) are the capacity, resistance, and bias, and \(x_i\) and \(y_i\) are the input and output, of the ith neuron, respectively; all the functions \(\{g_i(\cdot)\}\) are the same as those originally used by Grossberg and Cohen, and by Hopfield, except that the symmetry of the matrix \(T = (T_{ij})_{n\times n}\) is not assumed here. We only assume that \(g_i \in C^1\) is invertible (denoted \(x_i = g_i^{-1}(y_i) =: G_i(y_i)\) below) and that \(g_i'\) satisfies \(0 < m_i \le g_i' \le M_i < \infty\) uniformly over the domain of \(g_i\), \(i = 1,\dots,n\). Then it follows from the local inversion theorem (Ambrosetti & Prodi, 1993) that

\[ \frac{1}{M_i} \le G_i' = (g_i^{-1})' = \frac{1}{g_i' \circ g_i^{-1}} \le \frac{1}{m_i}, \qquad i = 1,\dots,n. \tag{2} \]

In system (1), D denotes the distributional derivative, and \(u_i, v_i : J = [t_0, +\infty) \to \mathbb{R}\) are functions of bounded variation which are right-continuous on any compact subinterval of J. We remark that the model formulation given above implies that \(Du_i\) and \(Dv_i\) represent the effect of sudden changes in the states of the system at the discontinuity points of \(u_i\) and \(v_i\), \(i = 1,\dots,n\). They both can be identified with the usual Lebesgue–Stieltjes measure. In general, a right-continuous function of bounded variation is the sum of two parts: one is an absolutely continuous function and the other is a singular function. When the discontinuity points of the function are isolated and at most countable, the singular part has the form \(\sum_{k=1}^{\infty} a_k H_k(t)\). Accordingly, without loss of generality, we may assume that

\[ u_i(t) = t + \sum_{k=1}^{\infty} a_{ik}\,H_k(t) \quad\text{and}\quad v_i(t) = t + \sum_{k=1}^{\infty} b_{ik}\,H_k(t), \qquad i = 1,\dots,n, \tag{3} \]

where \(a_{ik}\) and \(b_{ik}\) are constants, with discontinuity points

\[ t_1 < t_2 < \cdots < t_k < \cdots, \qquad \lim_{k\to\infty} t_k = \infty, \]

and \(H_k(t)\) are Heaviside functions defined by

\[ H_k(t) = \begin{cases} 0, & t < t_k, \\ 1, & t \ge t_k. \end{cases} \]

It is easy to see that

\[ Du_i = 1 + \sum_{k=1}^{\infty} a_{ik}\,\delta(t - t_k) \quad\text{and}\quad Dv_i = 1 + \sum_{k=1}^{\infty} b_{ik}\,\delta(t - t_k), \]

where \(\delta(t)\) is the Dirac impulsive function. It is clear from Eq. (2) that system (1) is topologically equivalent to

\[ C_i\,G_i'(y_i)\,Dy_i = -\frac{G_i(y_i)}{R_i}\,Du_i + \sum_{j=1}^{n} T_{ij}\,y_j\,Dv_j + I_i, \qquad i = 1,\dots,n. \tag{4} \]
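To make the measure differential equation concrete, here is a minimal simulation sketch: between impulse instants \(t_k\), Eq. (1) is an ordinary additive network integrated by forward Euler, and at each \(t_k\) the singular parts of (3) contribute the jump \(C_i\,\Delta x_i = -(x_i/R_i)\,a_{ik} + \sum_j T_{ij}\,y_j\,b_{jk}\), evaluated here with left-limit values as an explicit approximation. All numerical values, and the choice \(g_i = \tanh\), are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_impulsive_network(T, C, R, I, g, a, b, t_imp, t_end, dt=1e-3, x0=None):
    """Forward-Euler sketch of system (1) with impulse functions (3).

    a[k], b[k] hold the impulse magnitudes a_ik, b_ik at instant t_imp[k]."""
    n = len(C)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    t, k = 0.0, 0
    while t < t_end:
        if k < len(t_imp) and t >= t_imp[k]:
            y = g(x)
            x = x + (-(x / R) * a[k] + T @ (y * b[k])) / C   # impulsive jump at t_k
            k += 1
        dx = (-x / R + T @ g(x) + I) / C                     # continuous part of (1)
        x = x + dt * dx
        t += dt
    return x

# illustrative two-neuron network with g_i = tanh
T = np.array([[-1.0, 0.2], [0.3, -1.5]])
C = np.array([1.0, 1.0]); R = np.array([1.0, 1.0]); I = np.array([0.1, -0.2])
a = [np.array([0.05, -0.05])]   # a_{i1}
b = [np.array([0.10, 0.10])]    # b_{i1}
x_end = simulate_impulsive_network(T, C, R, I, np.tanh, a, b, t_imp=[1.0], t_end=5.0)
```

Because the chosen weights are strongly diagonally dominant, the simulated state remains bounded and settles near a fixed point of the continuous dynamics.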

Hence, the study of the existence, uniqueness, and stability of the equilibrium of system (1) at \(x^* = (x_1^*,\dots,x_n^*)^T\) is equivalent to that of system (4) at \(y^* = (y_1^*,\dots,y_n^*)^T\).

In the subsequent discussion, the following norms will be used. For \(x = (x_1,\dots,x_n)^T \in \mathbb{R}^n\), \(\|x\|\) denotes a norm of the vector x, with

\[ \|x\|_1 = \sum_{i=1}^{n} |x_i|, \qquad \|x\|_\infty = \max_{1\le i\le n} |x_i|. \]

Correspondingly, for a matrix \(A = (a_{ij})_{n\times n} \in \mathbb{R}^{n\times n}\), \(\|A\|\) denotes its operator norm, with

\[ \|A\|_1 = \max_{1\le j\le n} \sum_{i=1}^{n} |a_{ij}|, \qquad \|A\|_\infty = \max_{1\le i\le n} \sum_{j=1}^{n} |a_{ij}|. \]
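Written out explicitly, \(\|A\|_1\) is the maximum absolute column sum and \(\|A\|_\infty\) the maximum absolute row sum. A small sketch (with illustrative data) confirming that these formulas agree with numpy's built-in operator norms:

```python
import numpy as np

def vec_norm_1(x):   return float(np.sum(np.abs(x)))
def vec_norm_inf(x): return float(np.max(np.abs(x)))
def mat_norm_1(A):   return float(np.max(np.sum(np.abs(A), axis=0)))   # max column sum
def mat_norm_inf(A): return float(np.max(np.sum(np.abs(A), axis=1)))   # max row sum

A = np.array([[1.0, -2.0], [3.0, 4.0]])
x = np.array([1.0, -5.0])
```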

For any function \(f \in C([a,b]; \mathbb{R}^n)\), its norm \(\|f\|\) is defined by \(\|f\| = \sup_{a\le t\le b} \|f(t)\|\).

3. Existence and uniqueness of network equilibria

In this section, we investigate the existence and uniqueness of the equilibrium of the impulsive autoassociative neural network (4). Note that an equilibrium state of system (4) is a solution of the following system of nonlinear impulsive algebraic equations:

\[ -\frac{G_i(y_i)}{R_i}\,Du_i + \sum_{j=1}^{n} T_{ij}\,y_j\,Dv_j + I_i = 0, \qquad i = 1,\dots,n. \tag{5} \]

It is not difficult to see that \(y_i(t)\) is a solution of Eq. (5) if and only if, for \(k = 1,2,\dots\), \(y_i(t)\) satisfies both

\[ -\frac{G_i(y_i(t))}{R_i} + \sum_{j=1}^{n} T_{ij}\,y_j(t) + I_i = 0, \qquad t \in [t_{k-1}, t_k), \tag{6} \]

and

\[ -\frac{G_i(y_i(t_k))}{R_i} + \sum_{j=1}^{n} T_{ij}\,y_j(t_k) + I_i = 0, \qquad -\frac{G_i(y_i(t_k))}{R_i}\,a_{ik} + \sum_{j=1}^{n} T_{ij}\,y_j(t_k)\,b_{jk} = 0, \tag{7} \]

where \(i = 1,\dots,n\).

Lemma 1. If one of the following conditions is satisfied, then system (6) has a unique equilibrium state at \(y_i = y_i^*\), \(i = 1,\dots,n\), in the interval \([t_{k-1}, t_k)\):

(i) \(T_{jj} - \frac{1}{R_j M_j} + \sum_{i=1, i\ne j}^{n} |T_{ij}| < 0\), \(j = 1,\dots,n\);
(ii) \(-T_{jj} + \frac{1}{R_j m_j} + \sum_{i=1, i\ne j}^{n} |T_{ij}| < 0\), \(j = 1,\dots,n\);
(iii) \(T_{ii} - \frac{1}{R_i M_i} + \sum_{j=1, j\ne i}^{n} |T_{ij}| < 0\), \(i = 1,\dots,n\);
(iv) \(-T_{ii} + \frac{1}{R_i m_i} + \sum_{j=1, j\ne i}^{n} |T_{ij}| < 0\), \(i = 1,\dots,n\).

Proof. Let \(F = (F_1,\dots,F_n)^T\) and \(y = (y_1,\dots,y_n)^T\), where

\[ F_i(t, y) = -\frac{G_i(y_i(t))}{R_i} + \sum_{j=1}^{n} T_{ij}\,y_j(t) + I_i, \qquad i = 1,\dots,n. \]

It is clear that \(F(t,y)\) is smooth over the domain \(t_{k-1} \le t \le t_k - \epsilon\), \(\|y\| < \infty\), for any sufficiently small constant \(\epsilon > 0\). For functions \(w \in C([t_{k-1}, t_k - \epsilon]; \mathbb{R}^n)\), define a map A:

\[ A: w \to w - \frac{1}{\ell}\,F(t, w), \tag{8} \]

where \(\ell \ne 0\) is a constant to be determined. It can be shown that A is a contraction mapping. Indeed, for any \(w_1, w_2 \in C([t_{k-1}, t_k - \epsilon]; \mathbb{R}^n)\), it follows from the Taylor expansion of \(F(t,\cdot)\) and the mean value theorem that there is a \(\xi\) in between \(w_1\) and \(w_2\) such that

\[ \begin{aligned} \|(Aw_2)(t) - (Aw_1)(t)\| &= \left\| w_2(t) - \frac{1}{\ell}F(t, w_2(t)) - w_1(t) + \frac{1}{\ell}F(t, w_1(t)) \right\| \\ &= \left\| \left[ E - \frac{1}{\ell}\frac{\partial F}{\partial y}(t, \xi) \right] (w_2(t) - w_1(t)) \right\| \\ &\le \left\| E - \frac{1}{\ell}\frac{\partial F}{\partial y}(t, \xi) \right\| \, \|w_2(t) - w_1(t)\|, \end{aligned} \]

where E is the \(n \times n\) identity matrix and \(\partial F(t,\xi)/\partial y = (\partial F_i/\partial y_j)_{n\times n}\), with

\[ \frac{\partial F_i}{\partial y_j} = \begin{cases} T_{ii} - G_i'(\xi_i)/R_i, & i = j, \\ T_{ij}, & i \ne j. \end{cases} \tag{9} \]

Next, if it can be verified that

\[ \left\| E - \frac{1}{\ell}\frac{\partial F}{\partial y} \right\| \le a, \qquad 0 \le a < 1, \tag{10} \]

then Eq. (8) yields

\[ \|Aw_2 - Aw_1\| \le a\,\|w_2 - w_1\|, \]

implying that A is a contraction mapping on \(C([t_{k-1}, t_k - \epsilon]; \mathbb{R}^n)\). Hence, there exists a unique fixed point \(y^* = (y_1^*,\dots,y_n^*)^T \in C([t_{k-1}, t_k - \epsilon]; \mathbb{R}^n)\) satisfying \(Ay^* = y^*\), i.e. \(F(t, y^*(t)) = 0\) for all \(t \in [t_{k-1}, t_k - \epsilon]\). Since \(\epsilon > 0\) is sufficiently small, it follows from the definition of F that system (6) has a unique solution \(y^*(t)\), \(t \in [t_{k-1}, t_k)\).

Next, we verify that any one of the four conditions (i)–(iv) implies Eq. (10), so that the above conclusion of existence and uniqueness holds. If the norm \(\|\cdot\| = \|\cdot\|_1\) is used, Eq. (9) implies

\[ \left\| E - \frac{1}{\ell}\frac{\partial F}{\partial y} \right\|_1 = \max_{1\le j\le n} \left\{ \frac{1}{|\ell|} \left( \left| \ell - \left( T_{jj} - \frac{G_j'(\xi_j)}{R_j} \right) \right| + \sum_{i=1, i\ne j}^{n} |T_{ij}| \right) \right\}. \tag{11} \]

Let \(s_j = T_{jj} - G_j'(\xi_j)/R_j\), \(j = 1,\dots,n\). Then, it follows from \(1/M_j \le G_j'(\xi_j) \le 1/m_j\) that

\[ T_{jj} - \frac{1}{R_j m_j} \le s_j \le T_{jj} - \frac{1}{R_j M_j}, \qquad j = 1,\dots,n, \]

which implies

\[ \ell - \left( T_{jj} - \frac{1}{R_j M_j} \right) \le \ell - s_j \le \ell - \left( T_{jj} - \frac{1}{R_j m_j} \right), \qquad j = 1,\dots,n. \tag{12} \]

Now, pick an \(\ell < 0\) such that \(\ell < T_{jj} - 1/(R_j m_j)\) for all \(j = 1,\dots,n\). Then, it follows from Eqs. (11) and (12) that

\[ \left\| E - \frac{1}{\ell}\frac{\partial F}{\partial y} \right\|_1 \le \max_{1\le j\le n} \left\{ \frac{1}{|\ell|} \left( T_{jj} - \frac{1}{R_j M_j} - \ell + \sum_{i=1, i\ne j}^{n} |T_{ij}| \right) \right\} =: a. \tag{13} \]

Obviously, \(a > 0\). Observe that

\[ \frac{1}{|\ell|} \left( T_{jj} - \frac{1}{R_j M_j} - \ell + \sum_{i=1, i\ne j}^{n} |T_{ij}| \right) < 1 \]


is equivalent to

\[ T_{jj} - \frac{1}{R_j M_j} + \sum_{i=1, i\ne j}^{n} |T_{ij}| < 0, \]

since \(|\ell| = -\ell\). Therefore, if condition (i) holds, then the constant a defined by Eq. (13) satisfies \(0 < a < 1\), yielding Eq. (10) immediately.

If condition (ii) is satisfied, we can pick \(\ell > 0\) such that \(\ell > T_{jj} - 1/(R_j M_j)\) for all \(j = 1,\dots,n\). Then it follows from Eqs. (11) and (12) that

\[ \left\| E - \frac{1}{\ell}\frac{\partial F}{\partial y} \right\|_1 \le \max_{1\le j\le n} \left\{ \frac{1}{\ell} \left( \ell - T_{jj} + \frac{1}{R_j m_j} + \sum_{i=1, i\ne j}^{n} |T_{ij}| \right) \right\} =: a. \tag{14} \]

Obviously, \(a > 0\). Notice that

\[ \frac{1}{\ell} \left( \ell - T_{jj} + \frac{1}{R_j m_j} + \sum_{i=1, i\ne j}^{n} |T_{ij}| \right) < 1 \]

is equivalent to

\[ -T_{jj} + \frac{1}{R_j m_j} + \sum_{i=1, i\ne j}^{n} |T_{ij}| < 0. \]

This implies that under condition (ii), the constant a defined by Eq. (14) satisfies \(0 < a < 1\), so that Eq. (10) holds. Finally, it is straightforward to repeat this procedure to verify that, if condition (iii) or (iv) is satisfied, then Eq. (10) holds when the norm \(\|\cdot\| = \|\cdot\|_\infty\) is used. This completes the proof of the lemma. □

Next, we investigate the existence and uniqueness of the equilibrium for Eq. (7), which, in vector form, is

\[ \begin{cases} T\,y(t_k) - R^{-1} G(y(t_k)) + I = 0, \\ T Q_k\,y(t_k) - \Lambda_k R^{-1} G(y(t_k)) = 0, \end{cases} \tag{15} \]

where

\[ \begin{cases} T = (T_{ij})_{n\times n}, \quad I = (I_1,\dots,I_n)^T, \quad R = \mathrm{diag}(R_1,\dots,R_n), \\ Q_k = \mathrm{diag}(b_{1k},\dots,b_{nk}), \quad \Lambda_k = \mathrm{diag}(a_{1k},\dots,a_{nk}), \\ y(t_k) = (y_1(t_k),\dots,y_n(t_k))^T, \\ G(y(t_k)) = (G_1(y_1(t_k)),\dots,G_n(y_n(t_k)))^T, \\ g(x(t_k)) = (g_1(x_1(t_k)),\dots,g_n(x_n(t_k)))^T. \end{cases} \tag{16} \]
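As a numerical companion to this section, the sketch below checks conditions (i)–(iv) of Lemma 1, locates the interval equilibrium of (6) by iterating the contraction \(A: w \mapsto w - F(w)/\ell\) of Eq. (8), and evaluates the closed-form impulse-instant solution \(y^*(t_k) = (TQ_k)^{-1}\Lambda_k(\tilde{Q}_k - \Lambda_k)^{-1}\tilde{Q}_k I\) that Lemma 2 below derives from (15). The two-neuron data, the linear choice \(g_i(x) = 2x\) (so \(G_i(y) = y/2\) and \(m_i = M_i = 2\)), and the value of \(\ell\) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lemma1_conditions(T, R, m, M):
    """Conditions (i)-(iv) of Lemma 1 as four booleans."""
    d = np.diag(T)
    col_off = np.sum(np.abs(T), axis=0) - np.abs(d)   # sum_{i != j} |T_ij|
    row_off = np.sum(np.abs(T), axis=1) - np.abs(d)   # sum_{j != i} |T_ij|
    return [bool(np.all(d - 1.0/(R*M) + col_off < 0)),    # (i)
            bool(np.all(-d + 1.0/(R*m) + col_off < 0)),   # (ii)
            bool(np.all(d - 1.0/(R*M) + row_off < 0)),    # (iii)
            bool(np.all(-d + 1.0/(R*m) + row_off < 0))]   # (iv)

def interval_equilibrium(T, R, I, G, ell, y0, iters=500):
    """Fixed-point iteration of the contraction map (8): w -> w - F(w)/ell."""
    y = np.asarray(y0, dtype=float)
    for _ in range(iters):
        y = y - (-G(y)/R + T @ y + I) / ell    # one step of the map A
    return y

def impulse_equilibrium(T, Qk, Lamk, I):
    """y*(t_k) = (T Q_k)^{-1} Lam_k (Qt - Lam_k)^{-1} Qt I with Qt = T Q_k T^{-1};
    T, Q_k and (T Q_k - Lam_k T) are assumed invertible (cf. Lemma 2)."""
    Qt = T @ Qk @ np.linalg.inv(T)
    w = np.linalg.solve(Qt - Lamk, Qt @ I)
    return np.linalg.solve(T @ Qk, Lamk @ w)

# illustrative two-neuron data with g_i(x) = 2x, hence G(y) = y/2 and m = M = 2
T = np.array([[-0.5, 0.1], [0.1, -0.5]])
R = np.array([1.0, 1.0]); I = np.array([0.2, -0.1])
m = np.array([2.0, 2.0]); M = np.array([2.0, 2.0])
ell = -2.0                                     # ell < T_jj - 1/(R_j m_j) = -1
y_star = interval_equilibrium(T, R, I, lambda y: y/2.0, ell, y0=[0.0, 0.0])
y_k = impulse_equilibrium(T, np.diag([0.3, 0.4]), np.diag([0.1, 0.2]), I)
```

Here condition (i) holds (\(-0.5 - 0.5 + 0.1 < 0\)), so the iteration converges to the unique solution of \(F(y) = 0\); `y_k` is the value the impulse constraints force at \(t_k\), provided the consistency condition (17) below also holds.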

Lemma 2. If T, \(Q_k\), and \((T Q_k - \Lambda_k T)\) are invertible, \(k = 1,2,\dots\), and

\[ g\big( R\,[\tilde{Q}_k - \Lambda_k]^{-1} (\tilde{Q}_k I) \big) = (T Q_k)^{-1} \big[ \Lambda_k (\tilde{Q}_k - \Lambda_k)^{-1} \tilde{Q}_k I \big], \tag{17} \]

where \(\tilde{Q}_k = T Q_k T^{-1}\), then Eq. (15) has a unique equilibrium state at \(y(t_k) = y^*(t_k)\).

Proof. Since T and \(Q_k\) are invertible, Eq. (15) is equivalent to

\[ \begin{cases} T Q_k\,y(t_k) - \tilde{Q}_k R^{-1} G(y(t_k)) + \tilde{Q}_k I = 0, \\ T Q_k\,y(t_k) - \Lambda_k R^{-1} G(y(t_k)) = 0. \end{cases} \]

Obviously, this equation is equivalent to

\[ \begin{cases} (\tilde{Q}_k - \Lambda_k)\,R^{-1} G(y(t_k)) = \tilde{Q}_k I, \\ T Q_k\,y(t_k) - \Lambda_k R^{-1} G(y(t_k)) = 0. \end{cases} \tag{18} \]

Since \((\tilde{Q}_k - \Lambda_k)\) is invertible, it follows from the consistency condition (17) that Eq. (18) has a unique solution:

\[ y^*(t_k) = (T Q_k)^{-1} \Lambda_k (\tilde{Q}_k - \Lambda_k)^{-1} \tilde{Q}_k I; \]

namely, Eq. (15) has a unique solution \(y^*(t_k)\). This completes the proof of the lemma. □

The following result follows from Lemmas 1 and 2 immediately.

Theorem 1. Assume that the matrices T, \(Q_k\), and \((T Q_k - \Lambda_k T)\) are invertible, \(k = 1,2,\dots\), and that the consistency condition (17) holds, where T, \(Q_k\), and \(\Lambda_k\) are given by Eq. (16). If one of the conditions (i)–(iv) given in Lemma 1 is satisfied, then the impulsive autoassociative neural network (4) has a unique equilibrium.

4. Stability of the equilibrium

The stability issue of the proposed impulsive autoassociative network (1), or system (4), is now addressed. First, observe that if \(y^*\) is the equilibrium state of system (4), then this equation can be rewritten as

\[ C_i\,G_i'(y_i)\,D(y_i - y_i^*) = -\frac{G_i(y_i) - G_i(y_i^*)}{R_i}\,Du_i + \sum_{j=1}^{n} T_{ij}\,(y_j - y_j^*)\,Dv_j, \qquad i = 1,\dots,n. \tag{19} \]

For convenience, we use the following notations:

\[ \alpha_k = \min_{1\le i\le n} \left\{ 1 + \frac{a_{ik}}{C_i R_i} \right\}, \qquad \beta_k = \frac{1}{\alpha_k}, \qquad \gamma_k = \frac{1}{\sigma_k} \max_{1\le i\le n} \left\{ \frac{C_i}{m_i} \right\}, \tag{20} \]

\[ \sigma_k = \min_{1\le i\le n} \left\{ \left( C_i + \frac{a_{ik}}{R_i} \right) \frac{1}{M_i} - \left( T_{ii} b_{ik} + \sum_{j=1, j\ne i}^{n} |T_{ij} b_{jk}| \right) \right\}, \tag{21} \]






\[ \lambda = \min_{1\le i\le n} \left\{ \frac{1}{R_i C_i} \right\}, \qquad \mu = \min_{1\le i\le n} \left\{ \frac{m_i}{C_i} \left( \frac{1}{R_i M_i} - T_{ii} - \sum_{j=1, j\ne i}^{n} |T_{ij}| \right) \right\}, \tag{22} \]

where \(a_{ik}\), \(b_{ik}\), \(m_i\), \(M_i\), \(C_i\), \(R_i\), and \(T_{ij}\) are given by (2), (3) and (19), respectively.

Theorem 2. Assume that for \(k = 1,2,\dots\), \(\alpha_k > 0\), and that there exist constants \(p_i > 0\), \(i = 1,\dots,n\), such that

(i) \(p_j T_{jj} + \sum_{i=1, i\ne j}^{n} p_i |T_{ij}| \le 0\), \(j = 1,\dots,n\);
(ii) \(p_j T_{jj} b_{jk} + \sum_{i=1, i\ne j}^{n} p_i |T_{ij} b_{jk}| \le 0\), \(j = 1,\dots,n\), \(k = 1,2,\dots\).

Then we have:
(a) \(\prod_{i=1}^{k} \beta_i \le c = \text{constant}\) implies that the equilibrium \(y^*\) of system (19) is globally exponentially stable;
(b) \(\beta_k \le c = \text{constant}\) and \([(\ln c)/\delta] - \lambda < 0\) together imply that the equilibrium \(y^*\) of system (19) is globally exponentially stable, where \(c \ge 1\) and \(t_k - t_{k-1} \ge \delta > 0\).

Proof. According to assumption (3), both \(u_j'\) and \(v_j'\) in system (19) exist on the interval \([t_{k-1}, t_k)\). It follows from system (19) that

\[ C_i\,G_i'(y_i(t))\,\frac{d(y_i(t) - y_i^*(t))}{dt} = -\frac{G_i(y_i(t)) - G_i(y_i^*(t))}{R_i} + \sum_{j=1}^{n} T_{ij}\,(y_j(t) - y_j^*(t)), \qquad t \in [t_{k-1}, t_k). \tag{23} \]

Construct a Lyapunov function of the form

\[ W(y) = \sum_{i=1}^{n} p_i C_i \left| \int_{y_i^*}^{y_i} G_i'(\nu)\,d\nu \right|. \]

Obviously, \(W(y) \ge 0\) and \(W(y^*) = 0\). Computing the Dini derivative of \(W(y)\) along the trajectory defined by Eq. (23), and using the Taylor expansion \(G_j(y_j) = G_j(y_j^*) + G_j'(\eta_j)(y_j - y_j^*)\) with \(\eta_j\) in between \(y_j\) and \(y_j^*\), we obtain, from condition (i), that

\[ \begin{aligned} D^+ W(y(t))\big|_{(23)} &= \sum_{i=1}^{n} p_i C_i\,G_i'(y_i)\,\frac{dy_i}{dt}\,\mathrm{sgn}[y_i - y_i^*] \\ &\le \sum_{j=1}^{n} \Big( p_j T_{jj} + \sum_{i=1, i\ne j}^{n} p_i |T_{ij}| \Big) |y_j - y_j^*| - \sum_{j=1}^{n} \frac{p_j}{R_j}\,|G_j(y_j) - G_j(y_j^*)| \\ &\le -\sum_{j=1}^{n} \frac{1}{R_j C_j}\,p_j C_j\,|G_j(y_j) - G_j(y_j^*)| \le -\lambda\,W(y(t)), \qquad t \in [t_{k-1}, t_k), \end{aligned} \tag{24} \]

where \(\lambda > 0\) is given by (22). Let \(W(t) = W(y(t))\). Then inequality (24) implies that

\[ W(t) \le W(t_{k-1})\,e^{-\lambda (t - t_{k-1})}, \qquad t \in [t_{k-1}, t_k). \tag{25} \]

On the other hand, we observe from system (4) that

\[ C_i \big[ G_i(y_i(t_k)) - G_i(y_i(t_k - h)) \big] = -\int_{t_k - h}^{t_k} \frac{G_i(y_i(s))}{R_i}\,du_i(s) + \int_{t_k - h}^{t_k} \sum_{j=1}^{n} T_{ij}\,y_j(s)\,dv_j(s) + \int_{t_k - h}^{t_k} I_i\,ds, \]

where \(h > 0\) is sufficiently small. As \(h \to 0^+\), we obtain

\[ C_i \big[ G_i(y_i(t_k)) - G_i(y_i(t_k^-)) \big] = -\frac{G_i(y_i(t_k))}{R_i}\,a_{ik} + \sum_{j=1}^{n} T_{ij}\,y_j(t_k)\,b_{jk}. \tag{26} \]

Similarly, for the equilibrium \(y^*\),

\[ C_i \big[ G_i(y_i^*(t_k)) - G_i(y_i^*(t_k^-)) \big] = -\frac{G_i(y_i^*(t_k))}{R_i}\,a_{ik} + \sum_{j=1}^{n} T_{ij}\,y_j^*(t_k)\,b_{jk}. \tag{27} \]

It then follows from Eqs. (26) and (27) that

\[ C_i \big[ G_i(y_i(t_k)) - G_i(y_i^*(t_k)) \big] = C_i \big[ G_i(y_i(t_k^-)) - G_i(y_i^*(t_k^-)) \big] - \frac{G_i(y_i(t_k)) - G_i(y_i^*(t_k))}{R_i}\,a_{ik} + \sum_{j=1}^{n} T_{ij} \big[ y_j(t_k) - y_j^*(t_k) \big]\,b_{jk}. \tag{28} \]

Multiplying both sides of (28) by \(p_i\,\mathrm{sgn}(y_i(t_k) - y_i^*(t_k))\), summing with respect to i from 1 to n, and then noting the definition of \(W(t)\), we obtain

\[ \sum_{i=1}^{n} \left( 1 + \frac{a_{ik}}{C_i R_i} \right) p_i C_i \big| G_i(y_i(t_k)) - G_i(y_i^*(t_k)) \big| \le W(t_k^-) + \sum_{i=1}^{n} p_i \sum_{j=1}^{n} T_{ij}\,(y_j(t_k) - y_j^*(t_k))\,b_{jk}\,\mathrm{sgn}(y_i(t_k) - y_i^*(t_k)), \]

which leads to

\[ \alpha_k\,W(t_k) \le W(t_k^-) + \sum_{j=1}^{n} \left[ p_j T_{jj} b_{jk} + \sum_{i=1, i\ne j}^{n} p_i |T_{ij} b_{jk}| \right] |y_j(t_k) - y_j^*(t_k)|, \tag{29} \]

where \(\alpha_k > 0\) is given by (20). In view of condition (ii), Eq. (29) implies

\[ W(t_k) \le \beta_k\,W(t_k^-), \tag{30} \]

where \(\beta_k > 0\) is given by (20).
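The hypotheses of Theorem 2 and the constants of (20) and (22) entering estimates (25) and (30) are directly computable; a sketch with illustrative data (the matrix, weights \(p_i\), and impulse magnitudes \(a_{ik}, b_{ik}\) are assumptions, not from the paper):

```python
import numpy as np

def theorem2_constants(T, C, R, p, a_k, b_k):
    """Check conditions (i)-(ii) of Theorem 2; return alpha_k, beta_k, lambda."""
    d = np.diag(T)
    col_off = p @ np.abs(T) - p * np.abs(d)        # sum_{i != j} p_i |T_ij|
    cond_i = bool(np.all(p * d + col_off <= 0))
    cond_ii = bool(np.all(p * d * b_k + np.abs(b_k) * col_off <= 0))
    alpha_k = np.min(1.0 + a_k / (C * R))          # alpha_k of (20)
    lam = np.min(1.0 / (R * C))                    # decay rate lambda of (22)
    return cond_i, cond_ii, alpha_k, 1.0 / alpha_k, lam

# illustrative data
T = np.array([[-1.0, 0.2], [0.3, -1.5]])
C = np.array([1.0, 1.0]); R = np.array([1.0, 1.0]); p = np.array([1.0, 1.0])
a_k = np.array([0.05, -0.05]); b_k = np.array([0.1, 0.1])
ci, cii, alpha_k, beta_k, lam = theorem2_constants(T, C, R, p, a_k, b_k)
# with c = 1.1 >= beta_k and delta = 1.0: (ln c)/delta - lam < 0, so case (b) applies
```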


By induction, it is easy to see, from (25) and (30), that

\[ W(t) \le \beta_{k-1} \cdots \beta_1\,W(t_0)\,e^{-\lambda (t - t_0)}, \qquad t \in [t_{k-1}, t_k), \tag{31} \]

where \(\lambda > 0\) is given by (22). Since \(1/M_i \le G_i'(y_i) \le 1/m_i\), we have

\[ a\,\|y(t) - y^*(t)\|_1 \le W(t) \le b\,\|y(t) - y^*(t)\|_1, \tag{32} \]

where

\[ a = \min_{1\le j\le n} \left\{ \frac{p_j C_j}{M_j} \right\}, \qquad b = \max_{1\le j\le n} \left\{ \frac{p_j C_j}{m_j} \right\}. \]

Now, consider the following two cases:

(a) When \(\prod_{i=1}^{k} \beta_i \le c = \text{constant}\), inequality (31) leads to

\[ W(t) \le c\,W(t_0)\,e^{-\lambda (t - t_0)}, \qquad t \ge t_0, \]

which, together with (32), implies that

\[ \|y(t) - y^*(t)\|_1 \le \frac{cb}{a}\,\|y(t_0) - y^*(t_0)\|_1\,e^{-\lambda (t - t_0)}, \qquad t \ge t_0. \tag{33} \]

Note that all norms in \(\mathbb{R}^n\) are equivalent. Therefore, it follows from (33) immediately that the equilibrium \(y^*\) of system (19) is globally exponentially stable.

(b) When \(\beta_k \le c\), \(c \ge 1\), and \(t_k - t_{k-1} \ge \delta > 0\), we have

\[ \beta_{k-1} \cdots \beta_1 \le c^{k-1} \le \exp\!\left( \frac{\ln c}{\delta}\,(t_{k-1} - t_0) \right) \le \exp\!\left( \frac{\ln c}{\delta}\,(t - t_0) \right), \qquad t \in [t_{k-1}, t_k). \tag{34} \]

It follows from (31) and (34) that

\[ W(t) \le W(t_0)\,\exp\!\left( \left[ \frac{\ln c}{\delta} - \lambda \right] (t - t_0) \right), \qquad t \ge t_0. \]

This, in turn, reduces to

\[ \|y(t) - y^*(t)\|_1 \le \frac{b}{a}\,\|y(t_0) - y^*(t_0)\|_1\,\exp\!\left( \left[ \frac{\ln c}{\delta} - \lambda \right] (t - t_0) \right), \qquad t \ge t_0, \]

which implies that the conclusion of the theorem holds. This completes the proof of the theorem. □

Theorem 3. Assume that for \(k = 1,2,\dots\), \(\alpha_k > 0\), \(\sigma_k > 0\), and

\[ T_{ii} - \frac{1}{R_i M_i} + \sum_{j=1, j\ne i}^{n} |T_{ij}| < 0, \qquad i = 1,\dots,n. \tag{35} \]

Then we have:
(a) \(\prod_{i=1}^{k} \gamma_i \le c = \text{constant}\) implies that the equilibrium \(y^*\) of system (19) is globally exponentially stable;
(b) \(\gamma_k \le c = \text{constant}\) and \([(\ln c)/\delta] - \mu < 0\) together imply that the equilibrium \(y^*\) of system (19) is globally exponentially stable, where \(c \ge 1\) and \(t_k - t_{k-1} \ge \delta > 0\).

Proof. Construct a Lyapunov function

\[ W(y) = \max_{1\le j\le n} |y_j - y_j^*| =: |y_l - y_l^*|. \]

Clearly, \(W(y) \ge 0\) and \(W(y^*) = 0\). Moreover, in view of (35), in a similar manner, we have

\[ \begin{aligned} D^+ W(y)\big|_{(23)} &= \frac{dy_l}{dt}\,\mathrm{sgn}[y_l - y_l^*] \\ &\le \frac{1}{C_l\,G_l'(y_l)} \left[ \Big( T_{ll} + \sum_{j=1, j\ne l}^{n} |T_{lj}| \Big) |y_l - y_l^*| - \frac{1}{R_l}\,|G_l(y_l) - G_l(y_l^*)| \right] \\ &\le \frac{1}{C_l\,G_l'(y_l)} \left[ T_{ll} - \frac{1}{R_l M_l} + \sum_{j=1, j\ne l}^{n} |T_{lj}| \right] |y_l - y_l^*| \\ &\le -\frac{m_l}{C_l} \left[ \frac{1}{R_l M_l} - T_{ll} - \sum_{j=1, j\ne l}^{n} |T_{lj}| \right] |y_l - y_l^*| \le -\mu\,W(y(t)), \qquad t \in [t_{k-1}, t_k), \end{aligned} \tag{36} \]

where \(\mu > 0\) is given by (22). Thus (36) yields

\[ W(t) \le W(t_{k-1})\,e^{-\mu (t - t_{k-1})}, \qquad t \in [t_{k-1}, t_k), \tag{37} \]

where \(W(t) = W(y(t))\).

On the other hand, similar to the reckoning from (26) to (28), from Eq. (4) we have

\[ C_l \big[ G_l(y_l(t_k)) - G_l(y_l^*(t_k)) \big] = C_l \big[ G_l(y_l(t_k^-)) - G_l(y_l^*(t_k^-)) \big] - \frac{G_l(y_l(t_k)) - G_l(y_l^*(t_k))}{R_l}\,a_{lk} + \sum_{j=1}^{n} T_{lj} \big[ y_j(t_k) - y_j^*(t_k) \big]\,b_{jk}. \tag{38} \]

Multiplying both sides of (38) by \(\mathrm{sgn}(y_l(t_k) - y_l^*(t_k))\), and then using the Taylor expansion \(G_l(y_l) = G_l(y_l^*) + G_l'(\eta_l)(y_l - y_l^*)\) with \(\eta_l\) in between \(y_l\) and \(y_l^*\), we obtain

\[ \left( C_l + \frac{a_{lk}}{R_l} \right) G_l'(\eta_l(t_k))\,|y_l(t_k) - y_l^*(t_k)| \le C_l\,G_l'(\eta_l(t_k^-))\,|y_l(t_k^-) - y_l^*(t_k^-)| + \left( T_{ll} b_{lk} + \sum_{j=1, j\ne l}^{n} |T_{lj} b_{jk}| \right) |y_l(t_k) - y_l^*(t_k)|. \tag{39} \]

Since \(\alpha_k > 0\), (39) leads to

\[ \left[ \left( C_l + \frac{a_{lk}}{R_l} \right) \frac{1}{M_l} - \left( T_{ll} b_{lk} + \sum_{j=1, j\ne l}^{n} |T_{lj} b_{jk}| \right) \right] |y_l(t_k) - y_l^*(t_k)| \le \frac{C_l}{m_l}\,|y_l(t_k^-) - y_l^*(t_k^-)|, \]

or

\[ W(t_k) \le \gamma_k\,W(t_k^-), \tag{40} \]

where \(\gamma_k > 0\) is given by (20). It then follows from (37) and (40) that

\[ W(t) \le \gamma_{k-1} \cdots \gamma_1\,W(t_0)\,e^{-\mu (t - t_0)}, \qquad t \in [t_{k-1}, t_k). \]
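The quantities entering Theorem 3, namely \(\sigma_k\) and \(\gamma_k\) of (20) and (21), the rate \(\mu\) of (22), and condition (35), can likewise be evaluated numerically; the data below are illustrative assumptions, not from the paper:

```python
import numpy as np

def theorem3_constants(T, C, R, m, M, a_k, b_k):
    """Evaluate condition (35), the rate mu of (22), and sigma_k, gamma_k of (20)-(21)."""
    d = np.diag(T)
    row_off = np.sum(np.abs(T), axis=1) - np.abs(d)        # sum_{j != i} |T_ij|
    cond_35 = bool(np.all(d - 1.0 / (R * M) + row_off < 0))
    mu = np.min((m / C) * (1.0 / (R * M) - d - row_off))
    row_off_b = np.sum(np.abs(T * b_k), axis=1) - np.abs(d * b_k)  # sum_{j != i} |T_ij b_jk|
    sigma_k = np.min((C + a_k / R) / M - (d * b_k + row_off_b))
    gamma_k = np.max(C / m) / sigma_k
    return cond_35, mu, sigma_k, gamma_k

# illustrative data
T = np.array([[-1.0, 0.2], [0.3, -1.5]])
C = np.array([1.0, 1.0]); R = np.array([1.0, 1.0])
m = np.array([0.5, 0.5]); M = np.array([2.0, 2.0])
a_k = np.array([0.05, -0.05]); b_k = np.array([0.1, 0.1])
cond_35, mu, sigma_k, gamma_k = theorem3_constants(T, C, R, m, M, a_k, b_k)
# with c = 3.4 >= gamma_k and delta = 2.0: (ln c)/delta - mu < 0, so case (b) applies
```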

Reasoning similarly to that from (31) to (34), we can easily deduce the conclusion of the theorem. □

5. Conclusions

In this paper, we have formulated and studied a new type of neural networks, the impulsive autoassociative neural networks. This neural network model is useful for describing evolutionary processes that undergo sequential abrupt changes; such networks cannot be appropriately represented by either purely continuous or purely discrete additive networks. Several fundamental issues, including the exponential stability and the existence and uniqueness of the equilibrium, were investigated, and some explicit and conclusive results about this new type of networks were derived. More real-world applications of these networks will be pursued in the near future.

References

Ambrosetti, A., & Prodi, G. (1993). A primer of nonlinear analysis. New York: Cambridge University Press.
Bainov, D. D., & Simeonov, P. S. (1989). Stability theory of differential equations with impulse effects: theory and applications. Chichester: Ellis Horwood.
Carpenter, G. A., Cohen, M. A., & Grossberg, S. (1987). Computing with neural networks. Science, 235, 1226–1227.
Cohen, M. A., & Grossberg, S. (1983). Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Transactions on Systems, Man and Cybernetics, 13, 815–826.
Fang, Y., & Kincaid, T. G. (1996). Stability analysis of dynamical neural networks. IEEE Transactions on Neural Networks, 7, 996–1006.
Gelig, A. K., & Churilov, A. N. (1998). Stability and oscillations of nonlinear pulse-modulated systems. Boston, Basel, Berlin: Birkhäuser.
Grossberg, S. (1968). Some physiological and biochemical consequences of psychological postulates. Proceedings of the National Academy of Sciences, 60, 758–765.
Grossberg, S. (1971). Pavlovian pattern learning by nonlinear neural networks. Proceedings of the National Academy of Sciences, 68, 828–831.
Grossberg, S. (1982). Studies of mind and brain. Amsterdam: Kluwer/Reidel.
Grossberg, S. (1988). Nonlinear neural networks: principles, mechanisms, and architectures. Neural Networks, 1, 17–61.
Guan, Z.-H., Liu, Y.-Q., & Wen, X.-C. (1995). Decentralized stabilization of singular and time-delay large-scale control systems with impulsive solutions. IEEE Transactions on Automatic Control, 40, 1437–1441.
Guez, A., Protopopsecu, V., & Barhen, J. (1988). On the stability, storage capacity and design of nonlinear continuous neural networks. IEEE Transactions on Systems, Man and Cybernetics, 18, 80–87.
Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79, 2554–2558.
Hopfield, J. J. (1984). Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences, 81, 3088–3092.
Hou, C., & Qian, J. (1998). Stability analysis for neural dynamics with time-varying delays. IEEE Transactions on Neural Networks, 9, 221–223.
Hunt, K. J., Sbarbaro, D., Zbikowski, R., & Gawthrop, P. J. (1992). Neural networks for control systems: a survey. Automatica, 28, 1083–1112.
Lakshmikantham, V., Bainov, D. D., & Simeonov, P. S. (1989). Theory of impulsive differential equations. Singapore: World Scientific.
Li, J. H., Michel, A. N., & Porod, W. (1988). Qualitative analysis and synthesis of a class of neural networks. IEEE Transactions on Circuits and Systems, 35, 976–985.
Liu, Y.-Q., & Guan, Z.-H. (1996). Stability, stabilization and control of measure large-scale systems with impulses. Guangzhou: The South China University of Technology Press.
Matsuoka, K. (1992). Stability conditions for nonlinear continuous neural networks with asymmetric connection weights. Neural Networks, 5, 495–499.
Michel, A. N., & Gray, D. L. (1990). Analysis and synthesis of neural networks with lower block triangular interconnecting structure. IEEE Transactions on Circuits and Systems, 37, 1267–1283.
Pandit, S. G., & Deo, S. G. (1982). Differential systems involving impulses. New York: Springer.
Si, J., & Michel, A. N. (1994). Analysis and synthesis of a class of discrete-time neural networks with nonlinear interconnections. IEEE Transactions on Circuits and Systems (I), 41, 52–58.
Yang, H., & Dillon, T. S. (1994). Exponential stability and oscillation of Hopfield graded response neural network. IEEE Transactions on Neural Networks, 5, 719–729.