Stability of non-autonomous bidirectional associative memory neural networks with delay

Minghui Jiang (a,*), Yi Shen (b)

(a) College of Science, China Three Gorges University, YiChang, Hubei 443000, China
(b) Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China

Neurocomputing 71 (2008) 863–874. doi:10.1016/j.neucom.2007.03.002

Received 19 October 2006; received in revised form 4 February 2007; accepted 5 March 2007. Communicated by J. Cao. Available online 18 March 2007.

* Corresponding author. E-mail address: [email protected] (M. Jiang).

Abstract

The paper discusses the stability of non-autonomous bidirectional associative memory neural networks. The existence and uniqueness of periodic solutions of the neural networks are investigated by using Gaines and Mawhin's continuation theorem of coincidence degree theory. Moreover, criteria for the asymptotic stability and exponential stability of periodic solutions of the neural networks are obtained by means of a matrix function inequality, and algorithms for checking these criteria are provided. Some of the results in the paper generalize and improve results in the existing references. Finally, an illustrative example is given to verify our results.

© 2007 Elsevier B.V. All rights reserved.

Keywords: Non-autonomous bidirectional associative memory neural networks; Exponential stability; Periodic solution; Matrix function inequality

1. Introduction

Bidirectional associative memory neural networks, first proposed by Kosko [13], are a class of two-layer hetero-associative networks with delays that have found various applications, such as the design of associative memories, the solution of optimization problems and automatic control engineering [13,18]. Such applications rely heavily on the stability properties of the equilibria or periodic solutions of the networks. Recently, models of autonomous bidirectional associative memory neural networks with delays have been studied extensively [4–8,14,15,19,20,24]. However, in practical bidirectional associative memory neural networks, vital data such as the neuron firing rate and the synaptic interconnection weights usually vary with time [10,16]. Moreover, parameter fluctuation over time is unavoidable when bidirectional associative memory neural networks are implemented on very-large-scale integration chips. To our knowledge, few authors [12] have considered the global stability of periodic oscillatory solutions of non-autonomous bidirectional associative memory neural networks with delays. Therefore, the stability analysis of such networks is important from both theoretical and applied points of view.

It is well known that the study of bidirectional associative memory neural networks with delays involves not only the stability analysis of equilibrium points but also that of periodic solutions [17,22]. In particular, the global stability of periodic solutions of non-autonomous bidirectional associative memory neural networks with delays is important, since the global stability of equilibrium points can be considered a special case of a periodic solution with zero period [1,2]. Hence, the stability analysis of periodic solutions is more general than that of equilibrium points. In this paper, based on a matrix function inequality, sufficient conditions for the global exponential stability and asymptotic stability of non-autonomous bidirectional associative memory neural networks with delays are presented. It should be noted that the results in this paper generalize and improve those reported so far in the literature.



Furthermore, the stability of non-autonomous bidirectional associative memory neural networks with delays can, in a certain sense, be viewed as robust stability of autonomous bidirectional associative memory neural networks. In the end, an example is given to demonstrate the feasibility of our main results.

This paper consists of the following sections. Section 2 describes some preliminaries. Existence, uniqueness and boundedness of solutions of non-autonomous bidirectional associative memory neural networks with delays are discussed in Section 3. Then, results on global stability are stated in Section 4. In Section 5, algorithms for checking the global stability criteria are provided. An illustrative example is given in Section 6. Finally, concluding remarks are made in Section 7.

2. Preliminaries

The model of non-autonomous bidirectional associative memory neural networks with delays considered herein is described by the following differential equations:

$$
\begin{cases}
\dot{x}_i(t) = -a_i(t)x_i(t) + \sum_{j=1}^{q} w_{ij}(t) f_j(y_j(t-\tau)) + I_i(t), & i = 1,2,\dots,p,\\
\dot{y}_j(t) = -b_j(t)y_j(t) + \sum_{i=1}^{p} v_{ji}(t) g_i(x_i(t-\delta)) + J_j(t), & j = 1,2,\dots,q,
\end{cases}
\tag{1}
$$

where $\dot{x}_i(t)$ denotes the derivative of $x_i(t)$. We call $x_i(t), y_j(t)$, $i = 1,\dots,p$, $j = 1,\dots,q$, the state variables; $a_i(t) > 0$ and $b_j(t) > 0$ denote the passive decay rates; $I_i(t)$ and $J_j(t)$ are external inputs; $w_{ij}(t)$ and $v_{ji}(t)$ are connection weights of the network; and the constant delays $\tau$ and $\delta$, corresponding to the finite speed of axonal signal transmission, are non-negative. We assume that $I_i(t)$, $J_j(t)$ are continuous periodic functions with period T. In addition, the following conditions are satisfied.

(H1) There exist positive constants $M_j$, $j = 1,\dots,q$, and $L_i$, $i = 1,\dots,p$, such that for any $\xi, \zeta \in \mathbb{R}$ with $\xi \ne \zeta$,

$$0 \le \frac{f_j(\xi) - f_j(\zeta)}{\xi - \zeta} \le M_j, \qquad 0 \le \frac{g_i(\xi) - g_i(\zeta)}{\xi - \zeta} \le L_i.$$

(H2) There are constants $a_i^M, a_i^m, b_i^M, b_i^m, w_{ij}^M, w_{ij}^m, v_{ji}^M, v_{ji}^m$ such that the continuous functions $a_i(t), b_i(t), w_{ij}(t), v_{ji}(t)$ satisfy

$$a_i^M \ge a_i(t) \ge a_i^m > 0, \qquad b_i^M \ge b_i(t) \ge b_i^m > 0,$$

and

$$w_{ij}^M \ge w_{ij}(t) \ge w_{ij}^m, \qquad v_{ji}^M \ge v_{ji}(t) \ge v_{ji}^m.$$

(H3) The continuous functions $f_j(\cdot)$ and $g_i(\cdot)$ are bounded on $\mathbb{R}$; in other words, there are constants $f_j^m, f_j^M, g_i^m, g_i^M$ such that the following inequalities hold:

$$f_j^m \le f_j(\cdot) \le f_j^M \quad \text{and} \quad g_i^m \le g_i(\cdot) \le g_i^M.$$

The initial conditions associated with the non-autonomous bidirectional associative memory neural networks (1) are of the form

$$x_i(s) = \phi_i(s), \quad y_j(s) = \varphi_j(s), \quad s \in [-\tau, 0], \quad i = 1,\dots,p, \ j = 1,\dots,q,$$

where $\phi_i(s)$, $\varphi_j(s)$ are continuous T-periodic functions. Define $x(t) = [x_1(t),\dots,x_p(t)]^T \in \mathbb{R}^p$ and $y(t) = [y_1(t),\dots,y_q(t)]^T \in \mathbb{R}^q$. For any solution $(x(t), y(t))^T = (x_1(t),\dots,x_p(t),y_1(t),\dots,y_q(t))^T$ with the above initial conditions and any periodic solution $(x^*(t), y^*(t))^T = (x_1^*(t),\dots,x_p^*(t),y_1^*(t),\dots,y_q^*(t))^T$, define

$$\|(\phi,\varphi)^T - (x^*,y^*)^T\| = \sum_{i=1}^{p} \max_{-\tau \le t \le 0} |\phi_i(t) - x_i^*(t)| + \sum_{j=1}^{q} \max_{-\tau \le t \le 0} |\varphi_j(t) - y_j^*(t)|.$$

For simplicity, the neural networks (1) can be rewritten in the following vector form:

$$
\begin{cases}
\dot{x}(t) = -A(t)x(t) + W(t)f(y(t-\tau)) + I(t),\\
\dot{y}(t) = -B(t)y(t) + V(t)g(x(t-\delta)) + J(t),
\end{cases}
\tag{2}
$$

where $x(t) = (x_1(t),\dots,x_p(t))^T$, $y(t) = (y_1(t),\dots,y_q(t))^T$, $A(t) = \mathrm{diag}(a_1(t),\dots,a_p(t))$, $B(t) = \mathrm{diag}(b_1(t),\dots,b_q(t))$, $W(t) = (w_{ij}(t))_{p\times q}$, $V(t) = (v_{ji}(t))_{q\times p}$, $f(y) = (f_1(y_1),\dots,f_q(y_q))^T$, $g(x) = (g_1(x_1),\dots,g_p(x_p))^T$, $I(t) = (I_1(t),\dots,I_p(t))^T$, $J(t) = (J_1(t),\dots,J_q(t))^T$.
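Before turning to existence results, a quick numerical view of (2) can be helpful. The following is a minimal sketch that integrates the vector form (2) with a fixed-step Euler scheme; the step size, horizon, weights, inputs and initial functions are illustrative assumptions (chosen to resemble Example 1 in Section 6), not values prescribed by the theory.

```python
import numpy as np

# Minimal Euler integration of the delayed system (2); all parameter
# choices below are illustrative assumptions, not the paper's method.
h, t_end = 0.01, 80.0
tau = 1.0                                # constant delays (tau = delta here)
nd = int(round(tau / h))                 # delay expressed in steps

A = lambda t: np.diag([1.0 + t * np.exp(-t), 10.0])   # a_i(t) > 0
B = lambda t: np.diag([1.0, 10.0])                    # b_j(t) > 0
W = np.array([[0.0, 1.0], [0.0, -1.0]])               # w_ij(t), constant here
V = np.array([[0.0, 1.0], [0.0, 1.0]])                # v_ji(t), constant here
f = g = np.tanh                                       # satisfy (H1) and (H3)
I = lambda t: np.array([np.sin(t), 2.0 * np.cos(t)])  # T-periodic inputs
J = lambda t: np.array([2.0 * np.cos(t), 3.0 * np.sin(t)])

steps = int(t_end / h)
x = np.zeros((steps + nd + 1, 2)); x[: nd + 1] = 1.0   # phi_i on [-tau, 0]
y = np.zeros((steps + nd + 1, 2)); y[: nd + 1] = -1.0  # varphi_j on [-tau, 0]

for k in range(nd, steps + nd):
    t = (k - nd) * h
    x[k + 1] = x[k] + h * (-A(t) @ x[k] + W @ f(y[k - nd]) + I(t))
    y[k + 1] = y[k] + h * (-B(t) @ y[k] + V @ g(x[k - nd]) + J(t))
# x[nd:], y[nd:] now hold the trajectory on [0, t_end]
```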

To investigate the existence of periodic solutions of the neural networks (1) in the following section, we introduce the following notation and lemma. Let X and Z be Banach spaces, L a linear mapping from $\operatorname{Dom} L \subset X$ to Z, and $N: X \to Z$ a continuous mapping.

Definition 1. The mapping L is called a Fredholm mapping of index zero if $\dim \operatorname{Ker} L = \operatorname{codim} \operatorname{Im} L < +\infty$ and $\operatorname{Im} L$ is closed in Z. If L is a Fredholm mapping of index zero, then there exist continuous projectors $P: X \to X$, $Q: Z \to Z$ and an isomorphism $J: \operatorname{Im} Q \to \operatorname{Ker} L$.

Lemma 1 (Rouche and Mawhin [21]). Let L be a Fredholm mapping of index zero and let N be L-compact on $\bar{\Omega}$, where $\Omega$ is an open bounded set. Suppose the following conditions hold:

(1) for each $\lambda \in (0,1)$ and $m \in \partial\Omega \cap \operatorname{Dom} L$, $Lm \ne \lambda Nm$;
(2) for each $m \in \partial\Omega \cap \operatorname{Ker} L$, $QNm \ne 0$;
(3) $\deg(JQN, \Omega \cap \operatorname{Ker} L, 0) \ne 0$, where $JQN: \operatorname{Ker} L \to \operatorname{Ker} L$.

Then the equation $Lm = Nm$ has at least one solution in $\bar{\Omega} \cap \operatorname{Dom} L$.

Throughout this paper, unless otherwise specified, $A \ge 0$ denotes a non-negative definite matrix, $A > 0$ a positive definite symmetric matrix, and so on. $A^T$ and $A^{-1}$ denote the transpose and the inverse of a square matrix A, respectively. Let $|\cdot|$ and $\|A\|$ denote the Euclidean norm in $\mathbb{R}^n$ and the 2-norm in $\mathbb{R}^{n\times n}$, respectively.

3. Existence, uniqueness and boundedness

In this section, in order to investigate the stability of periodic solutions of the neural networks (1), we first discuss the existence, uniqueness and boundedness of solutions of (1). We have the following result.

Theorem 1. Assume that (H1)–(H3) hold. Then there is a unique solution of the neural networks (1) through $(t_0, \phi)$, and all solutions of (1) are bounded for all $t \ge t_0$.

Proof. By the global existence and uniqueness theorem [21], there is a unique solution through $(t_0, \phi)$ if Assumptions (H1)–(H3) are satisfied. Any solution $(x(t), y(t))^T = (x_1(t),\dots,x_p(t),y_1(t),\dots,y_q(t))^T$ with initial conditions $x_i(s) = \phi_i(s)$, $y_j(s) = \varphi_j(s)$, $s \in [-\tau, 0]$, $i = 1,\dots,p$, $j = 1,\dots,q$, satisfies

$$
\begin{cases}
\dot{x}_i(t) = -a_i(t)x_i(t) + \sum_{j=1}^{q} w_{ij}(t) f_j(y_j(t-\tau)) + I_i(t), & i = 1,2,\dots,p,\\
\dot{y}_j(t) = -b_j(t)y_j(t) + \sum_{i=1}^{p} v_{ji}(t) g_i(x_i(t-\delta)) + J_j(t), & j = 1,2,\dots,q.
\end{cases}
\tag{3}
$$

Because $I_i(t)$ and $J_j(t)$ are periodic continuous functions, there are constants $I_i^m, I_i^M$ and $J_j^m, J_j^M$ such that

$$I_i^m \le I_i(t) \le I_i^M, \qquad J_j^m \le J_j(t) \le J_j^M.$$

Moreover, by Assumptions (H1)–(H3), we obtain from (3)

$$
\begin{cases}
\dot{x}_i(t) \le -\bar{a}_i x_i(t) + \sum_{j=1}^{q} \max\{|w_{ij}^m|, |w_{ij}^M|\}\,\max\{|f_j^m|, |f_j^M|\} + I_i^M, & i = 1,2,\dots,p,\\
\dot{y}_j(t) \le -\bar{b}_j y_j(t) + \sum_{i=1}^{p} \max\{|v_{ji}^m|, |v_{ji}^M|\}\,\max\{|g_i^m|, |g_i^M|\} + J_j^M, & j = 1,2,\dots,q,
\end{cases}
\tag{4}
$$

where

$$\bar{a}_i = \begin{cases} a_i^m, & x_i(t) \ge 0,\\ a_i^M, & x_i(t) < 0, \end{cases} \qquad \bar{b}_j = \begin{cases} b_j^m, & y_j(t) \ge 0,\\ b_j^M, & y_j(t) < 0. \end{cases}$$

Set

$$\mathcal{M}_i = \sum_{j=1}^{q} \max\{|w_{ij}^m|, |w_{ij}^M|\}\,\max\{|f_j^m|, |f_j^M|\} + I_i^M, \qquad \mathcal{N}_j = \sum_{i=1}^{p} \max\{|v_{ji}^m|, |v_{ji}^M|\}\,\max\{|g_i^m|, |g_i^M|\} + J_j^M;$$

then we derive

$$
\begin{cases}
\dot{x}_i(t) \le -\bar{a}_i x_i(t) + \mathcal{M}_i, & i = 1,2,\dots,p,\\
\dot{y}_j(t) \le -\bar{b}_j y_j(t) + \mathcal{N}_j, & j = 1,2,\dots,q.
\end{cases}
\tag{5}
$$

By the above inequalities and the comparison theorem [21], it follows that $x_i(t)$, $i = 1,2,\dots,p$, and $y_j(t)$, $j = 1,2,\dots,q$, are bounded; indeed, since $\bar{a}_i \ge a_i^m > 0$, (5) gives $x_i(t) \le \max\{x_i(t_0), \mathcal{M}_i/a_i^m\}$ for all $t \ge t_0$, with a similar lower bound, and likewise for $y_j(t)$. This completes the proof. □

We now consider the existence of periodic solutions of the neural networks (1).

Theorem 2. The neural networks (1) have at least one T-periodic solution if Assumptions (H1)–(H3) are satisfied.

Proof. Using Lemma 1, the proof is similar to that of Theorem 1 in [11] and is omitted here. □

Remark 1. From Theorem 2, it follows that the period of the periodic solution of the neural networks (1) is related to the period of the input functions $I_i(t), J_j(t)$ when the continuous functions $a_i(t), b_j(t), w_{ij}(t), v_{ji}(t)$ are bounded.

To show the uniqueness of the periodic solution, we use the transformation $u_i(t) = x_i(t) - x_i^*(t)$, $i = 1,\dots,p$, $z_j(t) = y_j(t) - y_j^*(t)$, $j = 1,\dots,q$, to shift the periodic solution $(x_1^*(t),\dots,x_p^*(t),y_1^*(t),\dots,y_q^*(t))$ of the neural networks (1) to the origin, which brings (1) into the form

$$
\begin{cases}
\dot{u}_i(t) = -a_i(t)u_i(t) + \sum_{j=1}^{q} w_{ij}(t)\, s_j(z_j(t-\tau)), & i = 1,2,\dots,p,\\
\dot{z}_j(t) = -b_j(t)z_j(t) + \sum_{i=1}^{p} v_{ji}(t)\, e_i(u_i(t-\delta)), & j = 1,2,\dots,q,
\end{cases}
\tag{6}
$$

where

$$s_j(z_j(t)) = f_j(z_j(t) + y_j^*(t)) - f_j(y_j^*(t)), \quad j = 1,2,\dots,q, \qquad e_i(u_i(t)) = g_i(u_i(t) + x_i^*(t)) - g_i(x_i^*(t)), \quad i = 1,2,\dots,p.$$

It is obvious that the functions $s_j(\cdot)$ and $e_i(\cdot)$ satisfy Assumptions (H1) and (H3). The neural networks (6) can be rewritten in the following vector form:

$$
\begin{cases}
\dot{u}(t) = -A(t)u(t) + W(t)S(z(t-\tau)),\\
\dot{z}(t) = -B(t)z(t) + V(t)E(u(t-\delta)),
\end{cases}
\tag{7}
$$

where $u(t) = (u_1(t),\dots,u_p(t))^T$, $z(t) = (z_1(t),\dots,z_q(t))^T$, $S(z) = (s_1(z_1),\dots,s_q(z_q))^T$, $E(u) = (e_1(u_1),\dots,e_p(u_p))^T$, and $A(t), B(t), W(t), V(t)$ are as in (2).

We next prove, in the following Theorem 3, that a T-periodic solution $(x_1^*(t),\dots,x_p^*(t),y_1^*(t),\dots,y_q^*(t))$ of the neural networks (1) satisfying suitable conditions is unique.

Theorem 3. Assume that Assumptions (H1)–(H3) hold. The neural networks (1) have a unique T-periodic solution if there exist two positive constants $\theta, \kappa$ and two diagonal positive definite matrices P, Q such that the following matrix function inequalities hold for all $t \ge 0$:

$$
\begin{cases}
L^{-1}PA(t)L^{-1} + 2\theta A(t)L^{-1} - \kappa I - \theta W(t)^T W(t) - V(t)^T Q B^{-1}(t) V(t) > 0,\\
M^{-1}QB(t)M^{-1} + 2\kappa B(t)M^{-1} - \theta I - \kappa V(t)^T V(t) - W(t)^T P A^{-1}(t) W(t) > 0,
\end{cases}
\tag{8}
$$

where $M = \mathrm{diag}(M_1,\dots,M_q)$, $L = \mathrm{diag}(L_1,\dots,L_p)$, and I is an identity matrix.

Proof. By Assumption (H1), it follows that $S(0) = 0$ and $E(0) = 0$, so the neural networks (7) have the zero equilibrium point. We only need to prove that the origin of (7) is the unique equilibrium point and is globally asymptotically stable.


Let $(u^*, z^*)$ be any equilibrium of the neural networks (7); then

$$
\begin{cases}
-A(t)u^* + W(t)S(z^*) = 0,\\
-B(t)z^* + V(t)E(u^*) = 0.
\end{cases}
\tag{9}
$$

We know that $u^* = z^* = 0$ if $S(z^*) = E(u^*) = 0$, and vice versa. Moreover, by (9), if $S(z^*) = 0$ then $E(u^*) = 0$, and vice versa. We now prove that $S(z^*) = E(u^*) = 0$. Suppose this were not true, i.e., $S(z^*) \ne 0$ or $E(u^*) \ne 0$. Multiplying both sides of the first equation in (9) by $2(Pu^* + \theta E(u^*))^T$ and of the second equation in (9) by $2(Qz^* + \kappa S(z^*))^T$ yields

$$
\begin{cases}
-2(u^*)^T PA(t)u^* + 2(u^*)^T PW(t)S(z^*) - 2\theta E(u^*)^T A(t)u^* + 2\theta E(u^*)^T W(t)S(z^*) = 0,\\
-2(z^*)^T QB(t)z^* + 2(z^*)^T QV(t)E(u^*) - 2\kappa S(z^*)^T B(t)z^* + 2\kappa S(z^*)^T V(t)E(u^*) = 0.
\end{cases}
\tag{10}
$$

Summing both equations of (10), and adding and subtracting the terms $\kappa E(u^*)^T E(u^*)$ and $\theta S(z^*)^T S(z^*)$, gives

$$
\begin{aligned}
&-2(u^*)^T PA(t)u^* + 2(u^*)^T PW(t)S(z^*) - 2\theta E(u^*)^T A(t)u^* + 2\theta E(u^*)^T W(t)S(z^*)\\
&\quad - 2(z^*)^T QB(t)z^* + 2(z^*)^T QV(t)E(u^*) - 2\kappa S(z^*)^T B(t)z^* + 2\kappa S(z^*)^T V(t)E(u^*)\\
&\quad + \kappa E(u^*)^T E(u^*) - \kappa E(u^*)^T E(u^*) + \theta S(z^*)^T S(z^*) - \theta S(z^*)^T S(z^*) = 0.
\end{aligned}
\tag{11}
$$

The inequality

$$\big[(PA(t))^{1/2}u^* - P^{1/2}A(t)^{-1/2}W(t)S(z^*)\big]^T \big[(PA(t))^{1/2}u^* - P^{1/2}A(t)^{-1/2}W(t)S(z^*)\big] \ge 0$$

holds, so we have

$$-(u^*)^T PA(t)u^* + 2(u^*)^T PW(t)S(z^*) \le S(z^*)^T W(t)^T PA(t)^{-1}W(t)S(z^*). \tag{12}$$

Similarly, we can obtain

$$-(z^*)^T QB(t)z^* + 2(z^*)^T QV(t)E(u^*) \le E(u^*)^T V(t)^T QB(t)^{-1}V(t)E(u^*), \tag{13}$$

$$-E(u^*)^T E(u^*) + 2S(z^*)^T V(t)E(u^*) \le S(z^*)^T V(t)V(t)^T S(z^*) \tag{14}$$

and

$$-S(z^*)^T S(z^*) + 2E(u^*)^T W(t)S(z^*) \le E(u^*)^T W(t)W(t)^T E(u^*). \tag{15}$$

Substituting (12)–(15) into (11) yields

$$
\begin{aligned}
0 \le\ & -(u^*)^T PA(t)u^* + S(z^*)^T W(t)^T PA(t)^{-1}W(t)S(z^*) - 2\theta E(u^*)^T A(t)L^{-1}E(u^*) + \theta E(u^*)^T W(t)^T W(t)E(u^*)\\
&- (z^*)^T QB(t)z^* + E(u^*)^T V(t)^T QB(t)^{-1}V(t)E(u^*) - \kappa S(z^*)^T B(t)M^{-1}S(z^*) + \kappa S(z^*)^T V(t)^T V(t)S(z^*)\\
&+ \kappa E(u^*)^T E(u^*) + \theta S(z^*)^T S(z^*).
\end{aligned}
\tag{16}
$$

Combining Assumption (H1) with (16) (using $(u^*)^T PA(t)u^* \ge E(u^*)^T L^{-1}PA(t)L^{-1}E(u^*)$ and $(z^*)^T QB(t)z^* \ge S(z^*)^T M^{-1}QB(t)M^{-1}S(z^*)$), we have

$$
\begin{aligned}
0 \le\ & E(u^*)^T\big({-L^{-1}PA(t)L^{-1}} - 2\theta A(t)L^{-1} + \kappa I + \theta W(t)^T W(t) + V(t)^T QB(t)^{-1}V(t)\big)E(u^*)\\
&+ S(z^*)^T\big({-M^{-1}QB(t)M^{-1}} - 2\kappa B(t)M^{-1} + \theta I + \kappa V(t)^T V(t) + W(t)^T PA(t)^{-1}W(t)\big)S(z^*).
\end{aligned}
\tag{17}
$$

On the other hand, by condition (8), if $S(z^*) \ne 0$ or $E(u^*) \ne 0$ then

$$
\begin{aligned}
&E(u^*)^T\big({-L^{-1}PA(t)L^{-1}} - 2\theta A(t)L^{-1} + \kappa I + \theta W(t)^T W(t) + V(t)^T QB(t)^{-1}V(t)\big)E(u^*)\\
&\quad + S(z^*)^T\big({-M^{-1}QB(t)M^{-1}} - 2\kappa B(t)M^{-1} + \theta I + \kappa V(t)^T V(t) + W(t)^T PA(t)^{-1}W(t)\big)S(z^*) < 0.
\end{aligned}
\tag{18}
$$

The contradiction between (17) and (18) implies that $S(z^*) = E(u^*) = 0$. This completes the proof. □

Remark 2. Condition (8) in Theorem 3 is in general difficult to verify. It becomes easier to check when the continuous functions $a_i(t), b_i(t), w_{ij}(t), v_{ji}(t)$ are T-periodic.

Corollary 1. Assume that Assumptions (H1) and (H3) hold and, moreover, that $a_i(t), b_i(t), w_{ij}(t), v_{ji}(t)$ are continuous T-periodic functions. The neural networks (1) have a unique T-periodic solution if there exist two positive constants $\theta, \kappa$ and two diagonal positive definite matrices P, Q such that the following matrix function inequalities hold for all $t \in [0, T]$:

$$
\begin{cases}
L^{-1}PA(t)L^{-1} + 2\theta A(t)L^{-1} - \kappa I - \theta W(t)^T W(t) - V(t)^T QB^{-1}(t)V(t) > 0,\\
M^{-1}QB(t)M^{-1} + 2\kappa B(t)M^{-1} - \theta I - \kappa V(t)^T V(t) - W(t)^T PA^{-1}(t)W(t) > 0,
\end{cases}
\tag{19}
$$

where $M = \mathrm{diag}(M_1,\dots,M_q)$, $L = \mathrm{diag}(L_1,\dots,L_p)$, and I is an identity matrix.

Proof. Because $a_i(t), b_i(t), w_{ij}(t), v_{ji}(t)$ are continuous T-periodic functions, the left-hand sides of the inequalities (19) are also continuous T-periodic matrix functions. Hence condition (19) is equivalent to condition (8), and Corollary 1 follows directly from Theorem 3. □

4. Global stability

By applying Theorem 2 and the second Lyapunov method, we obtain sufficient conditions for the global stability of the periodic solution of the neural networks (1) in the following theorems.

Theorem 4. Assume that Assumptions (H1)–(H3) hold. The neural networks (1) globally asymptotically converge to the unique T-periodic solution if there exist two positive constants $\theta, \kappa$ and two diagonal positive definite matrices P, Q such that the following matrix function inequalities hold for all $t \ge 0$:

$$
\begin{cases}
L^{-1}PA(t)L^{-1} + 2\theta A(t)L^{-1} - \kappa I - \theta W(t)^T W(t) - V(t)^T QB^{-1}(t)V(t) > 0,\\
M^{-1}QB(t)M^{-1} + 2\kappa B(t)M^{-1} - \theta I - \kappa V(t)^T V(t) - W(t)^T PA^{-1}(t)W(t) > 0,
\end{cases}
\tag{20}
$$

and

$$
\begin{cases}
L^{-1}PA(t)L^{-1} + 2\theta A(t)L^{-1} - \kappa I - \theta W(t)^T W(t) - V(t+\delta)^T QB^{-1}(t+\delta)V(t+\delta) > 0,\\
M^{-1}QB(t)M^{-1} + 2\kappa B(t)M^{-1} - \theta I - \kappa V(t)^T V(t) - W(t+\tau)^T PA^{-1}(t+\tau)W(t+\tau) > 0,
\end{cases}
\tag{21}
$$

where $M = \mathrm{diag}(M_1,\dots,M_q)$, $L = \mathrm{diag}(L_1,\dots,L_p)$, and I is an identity matrix.

Proof. By Theorem 3 and the matrix function inequality (20), we know that the unique T-periodic solution of the neural networks (1) exists. Now we choose the following Lyapunov functional:

$$U(u(t), z(t)) = V_1(u(t),z(t)) + V_2(u(t),z(t)) + V_3(u(t),z(t)) + V_4(u(t),z(t)), \tag{22}$$

where

$$V_1(u(t),z(t)) = u^T(t)Pu(t) + z^T(t)Qz(t),$$

$$V_2(u(t),z(t)) = 2\theta \sum_{i=1}^{p} \int_0^{u_i(t)} e_i(s)\,ds + 2\kappa \sum_{i=1}^{q} \int_0^{z_i(t)} s_i(s)\,ds,$$

$$V_3(u(t),z(t)) = \kappa \sum_{i=1}^{p} \int_{t-\delta}^{t} e_i^2(u_i(s))\,ds + \theta \sum_{i=1}^{q} \int_{t-\tau}^{t} s_i^2(z_i(s))\,ds,$$

$$
\begin{aligned}
V_4(u(t),z(t)) =\ & \int_{t-\tau}^{t} S(z(s))^T W(s+\tau)^T PA^{-1}(s+\tau)W(s+\tau)S(z(s))\,ds\\
&+ \int_{t-\delta}^{t} E(u(s))^T V(s+\delta)^T QB^{-1}(s+\delta)V(s+\delta)E(u(s))\,ds.
\end{aligned}
$$

The time derivatives of $V_1(u(t),z(t))$ and $V_2(u(t),z(t))$ along the trajectories of (7) turn out to be

$$\dot{V}_1(u(t),z(t)) = -2u^T(t)PA(t)u(t) + 2u^T(t)PW(t)S(z(t-\tau)) - 2z^T(t)QB(t)z(t) + 2z^T(t)QV(t)E(u(t-\delta)), \tag{23}$$

$$\dot{V}_2(u(t),z(t)) = -2\theta E^T(u(t))A(t)u(t) + 2\theta E^T(u(t))W(t)S(z(t-\tau)) - 2\kappa S^T(z(t))B(t)z(t) + 2\kappa S^T(z(t))V(t)E(u(t-\delta)). \tag{24}$$

Similarly, we have

$$\dot{V}_3(u(t),z(t)) = \kappa E^T(u(t))E(u(t)) - \kappa E^T(u(t-\delta))E(u(t-\delta)) + \theta S^T(z(t))S(z(t)) - \theta S^T(z(t-\tau))S(z(t-\tau)) \tag{25}$$

and

$$
\begin{aligned}
\dot{V}_4(u(t),z(t)) =\ & S(z(t))^T W^T(t+\tau)PA^{-1}(t+\tau)W(t+\tau)S(z(t)) - S(z(t-\tau))^T W(t)^T PA^{-1}(t)W(t)S(z(t-\tau))\\
&+ E(u(t))^T V^T(t+\delta)QB^{-1}(t+\delta)V(t+\delta)E(u(t)) - E(u(t-\delta))^T V(t)^T QB^{-1}(t)V(t)E(u(t-\delta)).
\end{aligned}
\tag{26}
$$

By (12)–(15) and (23)–(26) together with Assumption (H1), we have

$$
\begin{aligned}
\dot{U}(u(t),z(t)) \le\ & -u^T(t)PA(t)u(t) - z^T(t)QB(t)z(t) - 2\theta E^T(u(t))A(t)u(t) - 2\kappa S^T(z(t))B(t)z(t)\\
&+ \big[{-u^T(t)PA(t)u(t)} + 2u^T(t)PW(t)S(z(t-\tau)) - S(z(t-\tau))^T W(t)^T PA(t)^{-1}W(t)S(z(t-\tau))\big]\\
&+ \big[2\theta E^T(u(t))W(t)S(z(t-\tau)) - \theta S^T(z(t-\tau))S(z(t-\tau))\big] + \kappa E^T(u(t))E(u(t)) + \theta S^T(z(t))S(z(t))\\
&+ \big[{-z^T(t)QB(t)z(t)} + 2z^T(t)QV(t)E(u(t-\delta)) - E(u(t-\delta))^T V(t)^T QB(t)^{-1}V(t)E(u(t-\delta))\big]\\
&+ \big[2\kappa S^T(z(t))V(t)E(u(t-\delta)) - \kappa E^T(u(t-\delta))E(u(t-\delta))\big]\\
&+ S(z(t))^T W^T(t+\tau)PA^{-1}(t+\tau)W(t+\tau)S(z(t)) + E(u(t))^T V^T(t+\delta)QB^{-1}(t+\delta)V(t+\delta)E(u(t))\\
\le\ & -u^T(t)PA(t)u(t) - z^T(t)QB(t)z(t) - 2\theta E^T(u(t))A(t)u(t) - 2\kappa S^T(z(t))B(t)z(t)\\
&+ \theta E^T(u(t))W(t)^T W(t)E(u(t)) + \kappa E^T(u(t))E(u(t)) + \theta S^T(z(t))S(z(t)) + \kappa S^T(z(t))V(t)^T V(t)S(z(t))\\
&+ S(z(t))^T W^T(t+\tau)PA^{-1}(t+\tau)W(t+\tau)S(z(t)) + E(u(t))^T V^T(t+\delta)QB^{-1}(t+\delta)V(t+\delta)E(u(t)).
\end{aligned}
\tag{27}
$$

By (27), we get

$$
\begin{aligned}
\dot{U}(u(t),z(t)) \le\ & E(u(t))^T\big[{-L^{-1}PA(t)L^{-1}} - 2\theta A(t)L^{-1} + \kappa I + \theta W(t)^T W(t) + V^T(t+\delta)QB^{-1}(t+\delta)V(t+\delta)\big]E(u(t))\\
&+ S(z(t))^T\big[{-M^{-1}QB(t)M^{-1}} - 2\kappa B(t)M^{-1} + \theta I + \kappa V(t)^T V(t) + W^T(t+\tau)PA^{-1}(t+\tau)W(t+\tau)\big]S(z(t)) < 0,
\end{aligned}
\tag{28}
$$

the last inequality following from (21). Therefore, by the standard Lyapunov-type theorem [9], we conclude that the origin of the neural networks (7) is globally asymptotically stable, i.e., the unique T-periodic solution of the neural networks (1) is globally asymptotically stable. This completes the proof. □

Remark 3. If the period T is zero and $a_i^M = a_i^m$, $b_i^M = b_i^m$, $w_{ij}^M = w_{ij}^m$ and $v_{ji}^M = v_{ji}^m$, then Theorem 4 in this paper reduces to Theorem 2 in [1] on taking $P = Q = I$, $\theta = \kappa = 1$ and $\delta = \tau$. Therefore, Theorem 4 in this paper generalizes and improves the results in [12].

Corollary 2. Assume that Assumptions (H1) and (H3) hold and, moreover, that $a_i(t), b_i(t), w_{ij}(t), v_{ji}(t)$ are constants. The neural networks (1) globally asymptotically converge to the unique equilibrium (i.e., 0-periodic solution) if there exist two positive constants $\theta, \kappa$ and two diagonal positive definite matrices P, Q such that the following linear matrix inequalities (LMIs) hold:

$$
\begin{cases}
L^{-1}PAL^{-1} + 2\theta AL^{-1} - \kappa I - \theta W^T W - V^T QB^{-1}V > 0,\\
M^{-1}QBM^{-1} + 2\kappa BM^{-1} - \theta I - \kappa V^T V - W^T PA^{-1}W > 0,
\end{cases}
\tag{29}
$$

where $M = \mathrm{diag}(M_1,\dots,M_q)$, $L = \mathrm{diag}(L_1,\dots,L_p)$, and I is an identity matrix.
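In the constant-coefficient case, the LMIs (29) can be checked numerically with any semidefinite programming tool. The following hedged sketch uses Python with cvxpy as a stand-in for the Matlab LMI toolbox cited in Section 5; the matrices A, B, W, V, L, M are placeholders taken from Example 1 frozen at t = 0, and any other data satisfying (H1)–(H3) could be substituted.

```python
import numpy as np
import cvxpy as cp

# Feasibility check of the LMIs (29); data follow Example 1 at t = 0.
A = np.diag([1.0, 10.0]); B = np.diag([1.0, 10.0])
W = np.array([[0.0, 1.0], [0.0, -1.0]])
V = np.array([[0.0, 1.0], [0.0, 1.0]])
L = M = np.eye(2)                                   # Lipschitz bounds
Li, Mi, Ai, Bi = (np.linalg.inv(X) for X in (L, M, A, B))

p_d, q_d = cp.Variable(2), cp.Variable(2)           # diagonals of P and Q
theta, kappa = cp.Variable(), cp.Variable()
P, Q = cp.diag(p_d), cp.diag(q_d)
I2, eps = np.eye(2), 1e-4

lmi1 = Li @ P @ A @ Li + 2 * theta * (A @ Li) - kappa * I2 \
       - theta * (W.T @ W) - V.T @ Q @ Bi @ V
lmi2 = Mi @ Q @ B @ Mi + 2 * kappa * (B @ Mi) - theta * I2 \
       - kappa * (V.T @ V) - W.T @ P @ Ai @ W
# Symmetrize for the solver; both expressions are symmetric by construction.
cons = [p_d >= eps, q_d >= eps, theta >= eps, kappa >= eps,
        0.5 * (lmi1 + lmi1.T) >> eps * I2,
        0.5 * (lmi2 + lmi2.T) >> eps * I2]
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
print(prob.status, p_d.value, q_d.value, theta.value, kappa.value)
```

A reported status of "optimal" means a feasible $(P, Q, \theta, \kappa)$ exists, so Corollary 2 applies to the chosen data; "infeasible" means the sufficient condition (29) cannot be certified for it.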

By a similar method, we can prove that the T-periodic solution of the neural networks (1) is globally exponentially stable, as stated in the following result.

Theorem 5. Assume that Assumptions (H1) and (H3) hold and, moreover, that $a_i(t), b_i(t), w_{ij}(t), v_{ji}(t)$ are continuous T-periodic functions. The T-periodic solution of the neural networks (1) is unique and globally exponentially stable if there exist four diagonal positive definite matrices P, Q, G, F and positive constants $\theta, \kappa$ such that the following matrix function inequalities hold for $t \in [0, T]$:

$$
\begin{cases}
L^{-1}PA(t)L^{-1} - L^{-1}GA(t)L^{-1} + 2\theta A(t)L^{-1} - \kappa I - \theta W(t)^T W(t) - V(t)^T QB^{-1}(t)V(t) > 0,\\
M^{-1}QB(t)M^{-1} - M^{-1}FB(t)M^{-1} + 2\kappa B(t)M^{-1} - \theta I - \kappa V(t)^T V(t) - W(t)^T PA^{-1}(t)W(t) > 0,
\end{cases}
\tag{30}
$$

and

$$
\begin{cases}
L^{-1}PA(t)L^{-1} - L^{-1}GA(t)L^{-1} + 2\theta A(t)L^{-1} - \kappa I - \theta W(t)^T W(t) - V^T(t+\delta)QB^{-1}(t+\delta)V(t+\delta) > 0,\\
M^{-1}QB(t)M^{-1} - M^{-1}FB(t)M^{-1} + 2\kappa B(t)M^{-1} - \theta I - \kappa V(t)^T V(t) - W^T(t+\tau)PA^{-1}(t+\tau)W(t+\tau) > 0.
\end{cases}
\tag{31}
$$

Proof. Obviously, under the conditions of Theorem 5, the matrix function inequalities (30) imply that the inequalities (8) hold, so the T-periodic solution of the neural networks (1) is unique. In the sequel, we prove that the inequalities (31) guarantee the exponential stability of the T-periodic solution of (1). Choose the same Lyapunov functional as in (22). By (28) and Assumption (H2), we get

$$
\begin{aligned}
\dot{U}(u(t),z(t)) \le\ & -u^T(t)GA(t)u(t) - z^T(t)FB(t)z(t)\\
&+ E(u(t))^T\big[{-L^{-1}PA(t)L^{-1}} + L^{-1}GA(t)L^{-1} - 2\theta A(t)L^{-1} + \kappa I + \theta W(t)^T W(t) + V^T(t+\delta)QB^{-1}(t+\delta)V(t+\delta)\big]E(u(t))\\
&+ S(z(t))^T\big[{-M^{-1}QB(t)M^{-1}} + M^{-1}FB(t)M^{-1} - 2\kappa B(t)M^{-1} + \theta I + \kappa V(t)^T V(t) + W^T(t+\tau)PA^{-1}(t+\tau)W(t+\tau)\big]S(z(t))\\
<\ & -u^T(t)GA(t)u(t) - z^T(t)FB(t)z(t)\\
\le\ & -\min\{\|GA(t)\|, \|FB(t)\|\}\,(|u(t)|^2 + |z(t)|^2).
\end{aligned}
\tag{32}
$$

By Assumption (H2), we can take

$$h(\varepsilon) = -\min\{\|GA(t)\|, \|FB(t)\|\} + \max\{(\|P\| + \theta\|L\|), (\|Q\| + \kappa\|M\|)\}\,\varepsilon + \max\{\|\kappa L^T L + (V(t+\delta)L)^T QB^{-1}(t+\delta)V(t+\delta)L\|,\ \|\theta M^T M + (W(t+\tau)M)^T PA^{-1}(t+\tau)W(t+\tau)M\|\}\,\varepsilon e^{\varepsilon\tau}\tau.$$

Since $h'(\varepsilon) > 0$, $h(0) = -\min\{\|GA(t)\|, \|FB(t)\|\} < 0$ and $h(+\infty) = +\infty$, there exists a unique $\varepsilon > 0$ such that $h(\varepsilon) = 0$, i.e., satisfying

$$-\min\{\|GA(t)\|, \|FB(t)\|\} + \max\{(\|P\| + \theta\|L\|), (\|Q\| + \kappa\|M\|)\}\,\varepsilon + \max\{\|\kappa L^T L + (V(t+\delta)L)^T QB^{-1}(t+\delta)V(t+\delta)L\|,\ \|\theta M^T M + (W(t+\tau)M)^T PA^{-1}(t+\tau)W(t+\tau)M\|\}\,\varepsilon e^{\varepsilon\tau}\tau = 0. \tag{33}$$

From the definition of the Lyapunov functional U and applying Assumption (H2), we have

$$U(u(t),z(t)) \le \max\{(\|P\| + \theta\|L\|), (\|Q\| + \kappa\|M\|)\}(|u(t)|^2 + |z(t)|^2) + \max\{\|\kappa L^T L + (V(t+\delta)L)^T QB^{-1}(t+\delta)V(t+\delta)L\|,\ \|\theta M^T M + (W(t+\tau)M)^T PA^{-1}(t+\tau)W(t+\tau)M\|\}\int_{t-\tau}^{t}(|u(s)|^2 + |z(s)|^2)\,ds. \tag{34}$$

For simplicity, set

$$a = \min\{\|GA(t)\|, \|FB(t)\|\}, \qquad b = \max\{(\|P\| + \theta\|L\|), (\|Q\| + \kappa\|M\|)\},$$
$$R = \max\{\|\kappa L^T L + (V(t+\delta)L)^T QB^{-1}(t+\delta)V(t+\delta)L\|,\ \|\theta M^T M + (W(t+\tau)M)^T PA^{-1}(t+\tau)W(t+\tau)M\|\}.$$

For the $\varepsilon > 0$ satisfying (33), by (32) and (34) we get

$$\frac{d\big(e^{\varepsilon t}U(u(t),z(t))\big)}{dt} = \varepsilon e^{\varepsilon t}U(u(t),z(t)) + e^{\varepsilon t}\dot{U}(u(t),z(t)) \le e^{\varepsilon t}(-a + b\varepsilon)(|u(t)|^2 + |z(t)|^2) + \varepsilon e^{\varepsilon t}R\int_{t-\tau}^{t}(|u(s)|^2 + |z(s)|^2)\,ds. \tag{35}$$

Integrating both sides of (35) from 0 to an arbitrary $t \ge 0$, we can obtain

$$e^{\varepsilon t}U(u(t),z(t)) - U(u(0),z(0)) \le \int_0^t e^{\varepsilon s}(-a + b\varepsilon)(|u(s)|^2 + |z(s)|^2)\,ds + \int_0^t \varepsilon e^{\varepsilon s}R\int_{s-\tau}^{s}(|u(r)|^2 + |z(r)|^2)\,dr\,ds, \tag{36}$$

and

$$\int_0^t \varepsilon e^{\varepsilon s}R\int_{s-\tau}^{s}(|u(r)|^2 + |z(r)|^2)\,dr\,ds \le \int_{-\tau}^{t} \varepsilon R\tau e^{\varepsilon(r+\tau)}(|u(r)|^2 + |z(r)|^2)\,dr. \tag{37}$$

Substituting (37) into (36) and applying (33), we obtain

$$e^{\varepsilon t}U(u(t),z(t)) \le \sigma, \quad t \ge 0, \tag{38}$$

where $\sigma = U(u(0),z(0)) + \int_{-\tau}^{0} \varepsilon R\tau e^{\varepsilon(r+\tau)}(|u(r)|^2 + |z(r)|^2)\,dr$. By the definition of U and (38), we get

$$|u(t)|^2 + |z(t)|^2 \le \frac{\sigma}{\min\{\|P\|, \|Q\|\}}\,e^{-\varepsilon t}, \quad t \ge 0. \tag{39}$$

This implies that the origin of the neural networks (7) is globally exponentially stable with exponential convergence rate $\varepsilon$ determined by (33); that is, the neural networks (1) globally exponentially converge to the unique T-periodic solution. The proof is completed. □
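The convergence rate $\varepsilon$ is defined only implicitly by (33). Since $h(0) < 0$ and h is strictly increasing to $+\infty$, a bracketing root-finder recovers $\varepsilon$ numerically; a minimal sketch follows, with the constants a, b, R, $\tau$ as illustrative placeholders rather than quantities computed from a concrete model.

```python
import numpy as np
from scipy.optimize import brentq

# Solve h(eps) = -a + b*eps + R*tau*eps*exp(eps*tau) = 0 from (33).
# a, b, R stand for the norm bounds defined in the proof; the numbers
# below are illustrative placeholders.
a, b, R, tau = 5.0, 2.0, 1.0, 1.0

def h(eps):
    return -a + b * eps + R * tau * eps * np.exp(eps * tau)

hi = 1.0
while h(hi) < 0:                 # expand the bracket until h changes sign
    hi *= 2.0
eps_star = brentq(h, 0.0, hi)    # unique root: h is increasing, h(0) = -a < 0
print(f"exponential convergence rate eps = {eps_star:.4f}")
```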

Remark 4. Theorem 5 gives a sufficient condition for the global exponential stability of the T-periodic solution of the neural networks (1), whereas Refs. [22,17,1,2,9] only prove the asymptotic stability of the equilibrium or periodic solution of autonomous bidirectional associative memory neural networks. The following corollary is easily obtained from Theorem 5.

Corollary 3. Assume that Assumptions (H1) and (H3) hold and, moreover, that $a_i(t), b_i(t), w_{ij}(t), v_{ji}(t)$ are constants. The equilibrium of the neural networks (1) is unique and globally exponentially stable if there exist four diagonal positive definite matrices P, Q, G, F and positive constants $\theta, \kappa$ such that the following LMIs hold:

$$
\begin{cases}
L^{-1}PAL^{-1} - L^{-1}GAL^{-1} + 2\theta AL^{-1} - \kappa I - \theta W^T W - V^T QB^{-1}V > 0,\\
M^{-1}QBM^{-1} - M^{-1}FBM^{-1} + 2\kappa BM^{-1} - \theta I - \kappa V^T V - W^T PA^{-1}W > 0.
\end{cases}
\tag{40}
$$

5. Algorithms for the criteria

In this section, we give two algorithms for verifying the matrix inequalities (20)–(21) and (30)–(31), respectively. Applying the continuity properties of matrix functions and the LMI technique [3], we derive the following Algorithm A for the matrix inequalities (20) and (21).

Algorithm A.

Step 1: Let the initial time be $t_0 = 0$, set $i = 1$ and fix a maximum iteration number $N = N_0$ (for example, $N_0 = 1000$); then go to the next step.

Step 2: If there is a feasible solution $P_0, Q_0, \theta_0, \kappa_0$ of the matrix inequalities (20) and (21) with $t = t_0$, found by the LMI toolbox in Matlab, then go to Step 3; otherwise the matrix inequalities (20), (21) do not hold, and stop.

Step 3: Set $P = P_0$, $Q = Q_0$, $\theta = \theta_0$, $\kappa = \kappa_0$ in the matrix inequalities (20) and (21). There exists the number $s = \min_{1 \le k \le 2(p+q)}\{s_k\}$, where

$$
s_k = \begin{cases} \min\{t > t_0 \mid \Delta_k(t) = 0\} & \text{if there exists } t \text{ such that } \Delta_k(t) = 0,\\ +\infty & \text{otherwise}, \end{cases}
\tag{41}
$$

such that the determinants $\Delta_k(t)$ of the k-th leading principal minors of the matrix functions on the left-hand sides of (20) and (21) are positive on $[t_0, t_0 + s)$. If $s = +\infty$, then the matrix inequalities (20), (21) hold, and stop; if $s < +\infty$ and $i < N_0$, then set $t_0 = t_0 + s$, $i = i + 1$ and go to Step 2; if $i \ge N$, then the algorithm fails, and stop.

In the same spirit, Algorithm B for the matrix inequalities (30) and (31) is given as follows.

Algorithm B.

Step 1: Let the initial time be $t_0 = 0$, set $i = 1$ and fix a maximum iteration number $N = N_0$ (for example, $N_0 = 1000$); then go to the next step.

Step 2: If there is a feasible solution $P_0, Q_0, G_0, F_0, \theta_0, \kappa_0$ of the matrix inequalities (30) and (31) with $t = t_0$, found by the LMI toolbox in Matlab, then go to the next step; otherwise the matrix inequalities (30), (31) do not hold, and stop.

Step 3: Set $P = P_0$, $Q = Q_0$, $G = G_0$, $F = F_0$, $\theta = \theta_0$, $\kappa = \kappa_0$ in the matrix inequalities (30) and (31). There exists the number $s = \min_{1 \le k \le 2(p+q)}\{s_k\}$ ($s_k$ defined by (41)) such that the determinants $\Delta_k(t)$ of the k-th leading principal minors of the matrix functions on the left-hand sides of (30) and (31) are positive on $[t_0, t_0 + s)$. If $t_0 + s > T$, then the matrix inequalities (30), (31) hold, and stop; if $t_0 + s < T$ and $i < N_0$, then set $t_0 = t_0 + s$, $i = i + 1$ and go to Step 2; if $i \ge N$, then the algorithm fails, and stop.

Remark 5. The above algorithms provide a tool to verify the global stability criteria for the T-periodic solution of the neural networks (1); they are demonstrated with an example in the next section, and a numerical sketch of Step 3 is given immediately below.
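As a hedged illustration of Step 3 of Algorithm A, the following sketch evaluates the leading principal minors of the left-hand sides of (20) and (21) on a grid of t. This is a numerical surrogate for locating the first zero of $\Delta_k(t)$: positivity on a finite grid and horizon does not prove $s = +\infty$. The model data and the feasible solution $P_0, Q_0, \theta_0, \kappa_0$ are those of Example 1 below; the grid and horizon are assumptions.

```python
import numpy as np

# Grid check of Step 3: all leading principal minors of the left-hand
# sides of (20)-(21) should stay positive. Data follow Example 1.
A = lambda t: np.diag([1.0 + t * np.exp(-t), 10.0])
B = lambda t: np.diag([1.0, 10.0])
W = np.array([[0.0, 1.0], [0.0, -1.0]]); V = np.array([[0.0, 1.0], [0.0, 1.0]])
L = M = np.eye(2)
P0 = Q0 = np.diag([21.6141, 2.8002]); th = ka = 1.4030
tau = delta = 1.0
Li, Mi = np.linalg.inv(L), np.linalg.inv(M)

def H(t, td):   # first line of (20)/(21); td = t or t + delta
    return Li @ P0 @ A(t) @ Li + 2 * th * A(t) @ Li - ka * np.eye(2) \
           - th * (W.T @ W) - V.T @ Q0 @ np.linalg.inv(B(td)) @ V

def O(t, tt):   # second line of (20)/(21); tt = t or t + tau
    return Mi @ Q0 @ B(t) @ Mi + 2 * ka * B(t) @ Mi - th * np.eye(2) \
           - ka * (V.T @ V) - W.T @ P0 @ np.linalg.inv(A(tt)) @ W

def minors_positive(S):
    return all(np.linalg.det(S[:k, :k]) > 0 for k in range(1, S.shape[0] + 1))

ok = all(minors_positive(F) for t in np.linspace(0.0, 100.0, 10001)
         for F in (H(t, t), O(t, t), H(t, t + delta), O(t, t + tau)))
print("minors positive on grid:", ok)
```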

6. Illustrative example

To demonstrate clearly the effectiveness of the above algorithms and to compare our result with the one in [12], a simple example is given.

Example 1. Assume that the network parameters of the neural networks (1) are chosen as

$$\tau = \delta = 1, \qquad A(t) = \begin{pmatrix} 1 + t/e^{t} & 0\\ 0 & 10 \end{pmatrix}, \qquad B(t) = \begin{pmatrix} 1 & 0\\ 0 & 10 \end{pmatrix},$$

$$W(t) = \begin{pmatrix} 0 & 1\\ 0 & -1 \end{pmatrix}, \qquad V(t) = \begin{pmatrix} 0 & 1\\ 0 & 1 \end{pmatrix},$$

and

$$f_i(y_i) = \tanh(y_i), \qquad g_i(x_i) = \tanh(x_i), \qquad i = 1, 2,$$

with $M = L = \mathrm{diag}(1,1)$ and the inputs $I_1 = \sin(t)$, $I_2 = 2\cos(t)$, $J_1 = 2\cos(t)$, $J_2 = 3\sin(t)$. It is obvious that Assumptions (H1)–(H3) hold.

Applying Algorithm A to Example 1, we proceed as follows. In Step 1, take $t_0 = 0$, $i = 1$, $N = 100$, and go to Step 2. In Step 2, by computation with the LMI toolbox in Matlab 6.5, we get a feasible solution

$$P_0 = Q_0 = \begin{pmatrix} 21.6141 & 0\\ 0 & 2.8002 \end{pmatrix}, \qquad \theta_0 = \kappa_0 = 1.4030$$

of the inequalities (20) and (21) with $t = t_0$ in Theorem 4, and go to Step 3. In Step 3, setting $P = P_0$, $Q = Q_0$, $\theta = \theta_0$, $\kappa = \kappa_0$ in the inequalities (20) and (21), we have

$$H_1(t) = L^{-1}P_0A(t)L^{-1} + 2\theta_0 A(t)L^{-1} - \kappa_0 I - \theta_0 W(t)^T W(t) - V(t)^T Q_0 B(t)^{-1}V(t) = \begin{pmatrix} 23.0171 + 24.4202\,t/e^{t} & 0\\ 0 & 29.9595 \end{pmatrix},$$

$$O_1(t) = M^{-1}Q_0B(t)M^{-1} + 2\kappa_0 B(t)M^{-1} - \theta_0 I - \kappa_0 V(t)^T V(t) - W(t)^T P_0 A(t)^{-1}W(t) = \begin{pmatrix} 23.0171 & 0\\ 0 & 51.5735 - 21.6141\,e^{t}/(e^{t}+t) \end{pmatrix},$$

and

$$H_2(t) = L^{-1}P_0A(t)L^{-1} + 2\theta_0 A(t)L^{-1} - \kappa_0 I - \theta_0 W(t)^T W(t) - V(t+\delta)^T QB^{-1}(t+\delta)V(t+\delta) = \begin{pmatrix} 23.0171 + 24.4202\,t/e^{t} & 0\\ 0 & 29.9595 \end{pmatrix},$$


$$O_2(t) = M^{-1}Q_0B(t)M^{-1} + 2\kappa_0 B(t)M^{-1} - \theta_0 I - \kappa_0 V(t)^T V(t) - W(t+\tau)^T PA^{-1}(t+\tau)W(t+\tau) = \begin{pmatrix} 23.0171 & 0\\ 0 & 51.5735 - 21.6141\,e^{t+1}/(e^{t+1}+t+1) \end{pmatrix}.$$

[Fig. 1. Behaviors of the neural networks with four groups of random initial conditions in Example 1. Figure not reproduced; the panels plot the states $x_i(t)$, $y_i(t)$ against t.]

[Fig. 2. 2π-periodic solution of the neural networks in Example 1 on [0, 31.4]. Figure not reproduced; the panels plot the four state components against t.]

It is obvious that all the determinants $\Delta_k(t)$ of the k-th leading principal minors of the matrix functions $H_i(t), O_i(t)$, $i = 1, 2$, of Theorem 4 are positive for all $t \in [0, +\infty)$, i.e., $s = s_k = +\infty$. Therefore, by Theorem 4, the 2π-periodic solution of the above system is unique and globally asymptotically stable. Fig. 1 shows the behavior of the neural networks (1), and Fig. 2 depicts the 2π-periodic solution of the above system on [0, 31.4].

However, when the parameters of Example 1 are applied to Assumption (H5) in [12] with t = 0, we get

$$
\begin{cases}
w_1 - s > 0,\\
10w_2 - r_1 - r_2 - s > 0,\\
r_1 - s > 0,\\
10r_2 - w_1 - w_2 - s > 0.
\end{cases}
\tag{42}
$$

The LMI (42) has no feasible solution, so Theorem 4 on p. 179 of Ref. [12] cannot judge the stability of the above system. It is obvious that the criteria in [23] cannot be applied to Example 1 either.

7. Conclusion

In this paper, we have presented theoretical results on the existence, uniqueness, boundedness, global exponential stability and asymptotic stability of periodic solutions of non-autonomous bidirectional associative memory neural networks with delays. The conditions obtained here are easy to check in practice and are of prime importance and great interest in the design and applications of such networks. The criteria are more general than the respective criteria reported in existing references. Finally, an illustrative example was given to verify the effectiveness of the results.

Acknowledgments The work was supported by Natural Science Foundation of China Three Gorges University, Natural Science Foundation of Hubei (Nos. 2004ABA055, D200613002) and National Natural Science Foundation of China (No. 60574025). The authors would like to thank the anonymous reviewers for their helpful comments and suggestions.


References

[1] S. Arik, Global asymptotic stability analysis of bidirectional associative memory neural networks with time delays, IEEE Trans. Neural Networks 16 (2005) 580–586.
[2] S. Arik, V. Tavsanoglu, Global asymptotic stability analysis of bidirectional associative memory neural networks with constant time delays, Neurocomputing 68 (2005) 161–176.
[3] S. Boyd, L.E. Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, 1994.
[4] J. Cao, Daniel W.C. Ho, X. Huang, LMI-based criteria for global robust stability of bidirectional associative memory networks with time delay, Nonlinear Anal. 66 (2007) 1558–1572.
[5] J. Cao, J. Liang, Boundedness and stability for Cohen–Grossberg neural networks with time-varying delays, J. Math. Anal. Appl. 296 (2004) 665–685.
[6] J. Cao, J. Liang, J. Lam, Exponential stability of high-order bidirectional associative memory neural networks with time delays, Physica D 199 (2004) 425–436.
[7] J. Cao, Q. Song, Stability in Cohen–Grossberg type BAM networks with time-varying delays, Nonlinearity 19 (2006) 1601–1617.
[8] J. Cao, L. Wang, Exponential stability and periodic oscillatory solution in BAM networks with delays, IEEE Trans. Neural Networks 13 (2002) 457–463.
[9] M. Chen, Z. Chen, G. Chen, Approximate Solution of Operation Equations, World Scientific, Singapore, 1997.
[10] C. Feng, R. Plamondon, Robust stability of interval bidirectional associative memory neural network with time delays, IEEE Trans. Neural Networks 14 (2003) 1560–1565.
[11] M. Jiang, Y. Shen, X. Liao, Global stability of periodic solution for bidirectional associative memory neural networks with varying-time delay, Appl. Math. Comput. 182 (2006) 509–520.
[12] H. Jiang, Z. Teng, Boundedness and stability for nonautonomous bidirectional associative neural networks with delay, IEEE Trans. Circuits Syst. II 51 (2004) 174–180.
[13] B. Kosko, Bidirectional associative memories, IEEE Trans. Syst. Man Cybern. 18 (1988) 49–60.
[14] X. Liao, J. Yu, Qualitative analysis of bi-directional associative memory with time delay, Int. J. Circuit Theory Appl. 26 (1998) 219–229.
[15] X. Liao, J. Yu, G. Chen, Novel stability criteria for bidirectional associative memory neural networks with time delays, Int. J. Circuit Theory Appl. 30 (2002) 519–546.
[16] X. Liao, K. Wong, Robust stability of interval bidirectional associative memory neural network with time delays, IEEE Trans. Syst. Man Cybern. Part B: Cybern. 34 (2004) 124–131.
[17] R. Libin, LMI approach for global periodicity of neural networks with time-varying delays, IEEE Trans. Circuits Syst. I 52 (2005) 1451–1458.
[18] B. Maundy, E. Masry, A switched capacitor bidirectional associative memory, IEEE Trans. Circuits Syst. 37 (1990) 1568–1572.
[19] S. Mohamad, Global exponential stability in continuous-time and discrete-time delayed bidirectional neural networks, Physica D 159 (2001) 233–251.
[20] H. Park, A novel criterion for global asymptotic stability of BAM neural networks with time delays, Chaos Solitons Fractals 29 (2006) 446–453.
[21] N. Rouche, J. Mawhin, Ordinary Differential Equations: Stability and Periodic Solutions, Pitman, Boston, 1980.
[22] V. Singh, A generalized LMI-based approach to the global asymptotic stability of delayed cellular neural networks, IEEE Trans. Neural Networks 15 (2004) 223–225.
[23] Z. Yi, K.K. Tan, Convergence Analysis of Recurrent Neural Networks, Kluwer Academic Publishers, Dordrecht, 2003.
[24] H. Zhao, Global stability of bidirectional associative memory neural networks with distributed delays, Phys. Lett. A 30 (2002) 519–546.

Minghui Jiang received the B.S. degree in Mathematics from Huazhong Normal University and the M.S. degree in Applied Mathematics from Hunan University, China, in 1990 and 1997, respectively. He received his Ph.D. degree from the Department of Control Science and Engineering, Huazhong University of Science and Technology, China, in 2005. His present research interests include the analysis and control of complex systems and the analysis of neural networks for applications.

Yi Shen received the B.S. degree in Mathematics from Hunan College of Engineering and Science and the M.S. degree in Mathematics from Huazhong University of Science and Technology, China, in 1988 and 1995, respectively. He received his Ph.D. degree from the Department of Control Science and Engineering, Huazhong University of Science and Technology, China, in 1998. His present research interests include the analysis of hybrid systems and the theoretical analysis of neural networks.