Global asymptotical stability of continuous-time delayed neural networks without global Lipschitz activation functions


Commun Nonlinear Sci Numer Simulat 14 (2009) 3715–3722


Short communication

Yong Tan a,*, Mingjia Tan b

a School of Economy and Management, Wuhan Polytechnic University, No. 1 Zhonghuanxi Road, Wuhan, Hubei 430023, PR China
b School of Information Engineering, Hubei Institute for Nationalities, 10 Sankongqiao Road, Enshi, Hubei 445000, PR China

Article info

Article history: Received 13 October 2008; received in revised form 16 December 2008; accepted 20 January 2009; available online 28 February 2009.

PACS: 02.30.Ks; 02.30.Sa; 02.60.Cb

Abstract

This paper investigates the global asymptotic stability of the equilibrium for a class of continuous-time neural networks with delays. Based on suitable Lyapunov functionals and homeomorphism theory, some sufficient conditions for the existence and uniqueness of the equilibrium point are derived. These results extend previous works by assuming neither boundedness nor Lipschitz continuity of the activation functions, nor any symmetry of the interconnections. A numerical example is also given to show the improvements of the paper.

Keywords: Neural networks; Delays; Global asymptotic stability

1. Introduction

Recently, the dynamical behavior of neural networks has attracted increasing attention due to its applicability in solving optimization and pattern recognition problems. When neural networks are used to solve optimization problems, it is essential to determine the existence and uniqueness as well as the stability of the equilibrium points. As pointed out in [2], although neural networks can be implemented by electronic circuits, the finite switching speed of amplifiers and the transmission delays in the communication between neurons affect the dynamical behavior of the network. Therefore, discussing the effect of time delays on the equilibrium and stability properties of neural networks is of great importance. Many researchers have studied the equilibria and stability properties of neural networks with delays and presented various sufficient conditions for the uniqueness and global asymptotic stability of the equilibrium point (see, for example, [1–6,8–18]). Existing results on the existence, uniqueness, and global asymptotic stability (GAS) of the equilibrium mainly concern the case where the activation functions satisfy Lipschitz conditions [4,11,15]. The authors of [3,9] analyzed the global dissipativity of dynamical neural networks with time delays when the activation functions are only continuous and monotone nondecreasing. In this paper, under the same assumption, we focus on the GAS of dynamical neural networks with time delays. By applying homeomorphism techniques and constructing suitable Lyapunov functionals, we obtain some new criteria for the global asymptotic stability of dynamical neural networks with delays. A comparison between our results and previous ones is also given, which shows that our results generalize some results in [3,4,9,11,15].

* Corresponding author. E-mail addresses: [email protected] (Y. Tan), [email protected] (M. Tan). doi:10.1016/j.cnsns.2009.01.032


Consider a general delayed neural network model of the following form:

\dot x_i(t) = -d_i x_i(t) + Σ_{j=1}^n a_{ij} g_j(x_j(t)) + Σ_{j=1}^n b_{ij} g_j(x_j(t - τ_{ij})) + I_i,  i = 1, 2, ..., n,   (1.1)

where n denotes the number of neurons in the network, x_i(t) is the state of the ith neuron at time t, g_j(·) is the activation function of the jth neuron, the constant weight a_{ij} denotes the connection of the jth neuron to the ith neuron at time t, the constant weight b_{ij} denotes the connection of the jth neuron to the ith neuron at time t − τ_{ij}, I_i is the external bias on the ith neuron, and the d_i are positive constants.

In this paper, we consider activation functions in the following class.

Definition 1.1. A function g(x): R → R is said to be in class G if g(x) is continuous and monotone nondecreasing.

Let τ = max_{1≤i,j≤n} τ_{ij}. The initial conditions for (1.1) are of the form φ = (φ_1, φ_2, ..., φ_n), where each φ_i is a continuous function defined on the interval [−τ, 0]. Thus the phase space for (1.1) is C = C([−τ, 0], R^n), the space of continuous functions from [−τ, 0] into R^n; C is a Banach space with the norm ||φ||^2 = sup_{−τ≤t≤0} φ^T(t)φ(t). For φ = (φ_1, φ_2, ..., φ_n) in C, there exists a unique solution of system (1.1), denoted x(t; φ) = (x_1(t; φ), x_2(t; φ), ..., x_n(t; φ)), such that x_i(t; φ) = φ_i(t) for −τ ≤ t ≤ 0 and x(t; φ) satisfies system (1.1) for t ≥ 0. To simplify notation, the dependence on the initial condition φ will not be indicated unless necessary.

The following notation will be used in our discussion. Let B = (b_{ij}) be a real n × n matrix. B^T and B^{−1} denote, respectively, the transpose and the inverse of a square matrix B. The notation B > 0 (B < 0) means that B is symmetric and positive (negative) definite. ||B||_1 and ||B||_∞ denote the 1-norm and the infinity norm of B, respectively; that is, ||B||_1 = max_{1≤j≤n} Σ_{i=1}^n |b_{ij}| and ||B||_∞ = max_{1≤i≤n} Σ_{j=1}^n |b_{ij}|. I denotes the identity matrix. We will sometimes write x(t) as x, f(x(t)) as f(x), and the transpose of A as A^T.
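Readers who wish to experiment with model (1.1) can integrate it with a simple forward-Euler scheme and a history buffer for the delayed terms. The following is a minimal sketch; the step size, the delay matrix, and all weights below are invented for illustration (they satisfy the delay-independent stability condition derived later, since A + A^T + (||B||_1 + ||B||_∞)I = −2I < 0):

```python
import numpy as np

def simulate(d, A, B, tau, I, g, phi, T=10.0, h=0.001):
    """Forward-Euler integration of (1.1) with per-pair delays tau[i, j]."""
    n = len(d)
    delay_steps = np.rint(tau / h).astype(int)   # tau_ij expressed in steps
    lag = delay_steps.max()
    steps = int(T / h)
    x = np.empty((lag + steps + 1, n))
    for k in range(lag + 1):                      # initial history phi on [-tau_max, 0]
        x[k] = phi(-(lag - k) * h)
    for t in range(lag, lag + steps):
        # delayed coupling: sum_j b_ij * g_j(x_j(t - tau_ij))
        delayed = np.array([sum(B[i, j] * g(x[t - delay_steps[i, j], j])
                                for j in range(n)) for i in range(n)])
        x[t + 1] = x[t] + h * (-d * x[t] + A @ g(x[t]) + delayed + I)
    return x[lag:]

# invented two-neuron example
d = np.array([1.0, 1.0])
A = np.array([[-3.0, 0.5], [-0.5, -3.0]])
B = np.array([[1.0, -1.0], [1.0, 1.0]])
tau = np.array([[0.2, 0.5], [0.3, 0.1]])
I = np.array([1.0, -1.0])
traj = simulate(d, A, B, tau, I, g=np.tanh, phi=lambda t: np.array([0.5, -0.5]))
```

The trajectory settles at a point where the right-hand side of (1.1) vanishes, i.e., −d x* + (A + B) g(x*) + I = 0, consistent with the equilibrium equation (2.1) below.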
The paper is organized as follows. In Section 2, we give the existence and uniqueness conditions. In Section 3, we present a few new sufficient conditions which ensure the GAS of the equilibrium point of system (1.1). None of these conditions assumes Lipschitz continuity of the functions involved or symmetry of the interconnections; hence, they are less restrictive than those given in the earlier references. A comparison between our results and recent previous results is also made. Finally, some conclusions are drawn in Section 4.

2. Existence and uniqueness of the equilibrium of delayed neural networks (1.1)

The existence and uniqueness of the equilibrium is a prerequisite for investigating the global stability of neural networks. We now prove the existence and uniqueness of the equilibrium point of system (1.1) under very relaxed conditions. Before the proof, let us recall a lemma given in [6].

Lemma 2.1. A continuous map H(x): R^n → R^n is a homeomorphism if:

1. H(x) is injective;
2. lim_{||x||→∞} ||H(x)|| = ∞.

A point x^* = (x_1^*, x_2^*, ..., x_n^*)^T is an equilibrium of system (1.1) if and only if x^* is a solution of the following equations:

-d_i x_i^* + Σ_{j=1}^n (a_{ij} + b_{ij}) g_j(x_j^*) + I_i = 0,  i = 1, 2, ..., n.   (2.1)

Define the map H(x) as follows:

H(x) = (H_1(x), H_2(x), ..., H_n(x))^T,   (2.2)

where H_i(x) = -d_i x_i + Σ_{j=1}^n (a_{ij} + b_{ij}) g_j(x_j) + I_i. It is worth noting that a solution of Eq. (2.1) need not exist or be unique. However, we have the following theorems for g_i(·) ∈ G.

Theorem 2.2. Let g_i(·) ∈ G. If there exist positive constants p_i > 0, i = 1, 2, ..., n, such that

p_i(-a_{ii} - |b_{ii}|) - Σ_{j=1, j≠i}^n p_j(|a_{ji}| + |b_{ji}|) ≥ 0,  i = 1, 2, ..., n,

then the map H defined by (2.2) is a homeomorphism on R^n.
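Before the proof, note that the hypothesis of Theorem 2.2 is easy to test numerically for a given set of trial constants p_i. A minimal sketch (the matrices below are invented for illustration):

```python
import numpy as np

def theorem22_holds(A, B, p):
    """Check p_i(-a_ii - |b_ii|) - sum_{j != i} p_j(|a_ji| + |b_ji|) >= 0 for all i."""
    A, B, p = map(np.asarray, (A, B, p))
    n = A.shape[0]
    for i in range(n):
        off = sum(p[j] * (abs(A[j, i]) + abs(B[j, i])) for j in range(n) if j != i)
        if p[i] * (-A[i, i] - abs(B[i, i])) - off < 0:
            return False
    return True

A = np.array([[-5.0, 0.5], [-0.5, -5.0]])
B = np.array([[1.0, -1.0], [1.0, 1.0]])
print(theorem22_holds(A, B, p=[1.0, 1.0]))
```

The condition requires strongly negative self-connections a_ii; for this pair of matrices the trial weights p_i = 1 already work.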


Proof. From Lemma 2.1, we know that H(x) is a homeomorphism on R^n if H(x) ≠ H(y) for all x ≠ y, x, y ∈ R^n, and ||H(x)|| → ∞ as ||x|| → ∞. Let x and y be two vectors such that x ≠ y. Then there exists k ∈ {1, 2, ..., n} such that x_k ≠ y_k. Thus

D_1 ≡ Σ_{i=1}^n p_i sign(x_i − y_i)(H_i(x) − H_i(y))
    ≤ −Σ_{i=1}^n p_i d_i |x_i − y_i| + Σ_{i=1}^n p_i (a_{ii} + |b_{ii}|) |g_i(x_i) − g_i(y_i)|
      + Σ_{i=1}^n Σ_{j=1, j≠i}^n p_i (|a_{ij}| + |b_{ij}|) |g_j(x_j) − g_j(y_j)|
    = −Σ_{i=1}^n p_i d_i |x_i − y_i| + Σ_{i=1}^n p_i (a_{ii} + |b_{ii}|) |g_i(x_i) − g_i(y_i)|
      + Σ_{i=1}^n Σ_{j=1, j≠i}^n p_j (|a_{ji}| + |b_{ji}|) |g_i(x_i) − g_i(y_i)|
    = −Σ_{i=1}^n p_i d_i |x_i − y_i| + Σ_{i=1}^n [ p_i (a_{ii} + |b_{ii}|) + Σ_{j=1, j≠i}^n p_j (|a_{ji}| + |b_{ji}|) ] |g_i(x_i) − g_i(y_i)|
    ≤ −Σ_{i=1}^n p_i d_i |x_i − y_i| < 0.   (2.3)

Moreover, there exists k_0 ∈ {1, 2, ..., n} such that H_{k_0}(x) ≠ H_{k_0}(y); that is, H(x) ≠ H(y) for all x ≠ y. We will now show that the conditions of Theorem 2.2 also imply that ||H(x)|| → ∞ as ||x|| → ∞. To this end, setting y = 0 in the above estimate of D_1 yields

Σ_{i=1}^n p_i (H_i(x) − H_i(0)) sign(x_i) ≤ −Σ_{i=1}^n p_i d_i |x_i| ≤ −p_min Σ_{i=1}^n |x_i|,   (2.4)

where p_min = min_{1≤i≤n} p_i d_i. From (2.4), it follows that

p_min ||x||_1 ≤ | Σ_{i=1}^n p_i (H_i(x) − H_i(0)) | ≤ p_max Σ_{i=1}^n |H_i(x) − H_i(0)| = p_max ||H(x) − H(0)||_1,

where p_max = max{p_1, p_2, ..., p_n}. Using the fact that ||H(x) − H(0)||_1 ≤ ||H(x)||_1 + ||H(0)||_1, we obtain

p_min ||x||_1 ≤ p_max ||H(x)||_1 + p_max ||H(0)||_1.

Hence,

||H(x)||_1 ≥ (p_min ||x||_1 − p_max ||H(0)||_1) / p_max,

from which it can easily be concluded that ||H(x)|| → ∞ as ||x|| → ∞. Hence, we have proved that H(x) is a homeomorphism on R^n. □

Theorem 2.3. Let g_i(·) ∈ G with |g_i(x_i)| → ∞ as |x_i| → ∞. If there exists a positive constant r such that the following condition holds:

A + A^T + ((1/r) ||B||_1 + r ||B||_∞) I ≤ 0,

where A = (a_{ij})_{n×n} and B = (b_{ij})_{n×n}, then the map H defined by (2.2) is a homeomorphism on R^n.

Proof. Let x and y be two vectors such that x ≠ y, and write g(x) = (g_1(x_1), g_2(x_2), ..., g_n(x_n))^T. Under the assumptions on the activation functions, x ≠ y leads to two cases: (i) x ≠ y and g(x) − g(y) ≠ 0; (ii) x ≠ y and g(x) − g(y) = 0. First, consider case (i). Then there exists k ∈ {1, 2, ..., n} such that (x_k − y_k)(g_k(x_k) − g_k(y_k)) > 0, while (x_i − y_i)(g_i(x_i) − g_i(y_i)) ≥ 0 for i ≠ k since g_i ∈ G. Moreover, we have


D_2 ≡ 2 (g(x) − g(y))^T (H(x) − H(y))
    ≤ −2 Σ_{i=1}^n d_i (x_i − y_i)(g_i(x_i) − g_i(y_i)) + (g(x) − g(y))^T (A + A^T)(g(x) − g(y))
      + r Σ_{i=1}^n Σ_{j=1}^n |b_{ij}| (g_i(x_i) − g_i(y_i))^2 + (1/r) Σ_{i=1}^n Σ_{j=1}^n |b_{ij}| (g_j(x_j) − g_j(y_j))^2
    ≤ −2 Σ_{i=1}^n d_i (x_i − y_i)(g_i(x_i) − g_i(y_i)) + (g(x) − g(y))^T ( A + A^T + ((1/r) ||B||_1 + r ||B||_∞) I )(g(x) − g(y))
    ≤ −2 Σ_{i=1}^n d_i (x_i − y_i)(g_i(x_i) − g_i(y_i)) ≤ −2 d_k (x_k − y_k)(g_k(x_k) − g_k(y_k)) < 0,   (2.5)

where D = diag(d_1, d_2, ..., d_n). Hence H(x) − H(y) ≠ 0; that is, we have proved that H(x) ≠ H(y) when x ≠ y and g(x) ≠ g(y).

Now consider case (ii), where x ≠ y and g(x) − g(y) = 0. In this case we have

H(x) − H(y) = −D(x − y) ≠ 0,

implying that H(x) ≠ H(y) for all x ≠ y with g(x) = g(y). Hence, we have proved that H(x) ≠ H(y) whenever x ≠ y.

We will now show that the conditions of Theorem 2.3 also imply that ||H(x)|| → ∞ as ||x|| → ∞. To this end, setting y = 0 in (2.5) yields

2 (g(x) − g(0))^T (H(x) − H(0)) ≤ −2 Σ_{i=1}^n d_i x_i (g_i(x_i) − g_i(0)) ≤ −2 d x^T (g(x) − g(0)),   (2.6)

where d = min{d_1, d_2, ..., d_n}. From (2.6) and the fact that g_i(·) ∈ G, it follows that

0 ≤ d Σ_{i=1}^n x_i (g_i(x_i) − g_i(0)) ≤ Σ_{i=1}^n |(g_i(x_i) − g_i(0))(H_i(x) − H_i(0))|.   (2.7)

Suppose that lim_{||x||→∞} ||H(x)|| ≠ ∞. Then there exists a sequence {x^p} such that lim_{p→∞} ||x^p|| = ∞ and ||H(x^p)|| ≤ M_1 for all p, where 0 < M_1 < +∞ is a constant. Therefore, there exist a subsequence (for convenience still denoted {x^p}) and a nonempty set K ⊆ {1, 2, ..., n} such that the following hold:

(1) lim_{p→∞} |x_i^p| = ∞ for all i ∈ K;
(2) there exists a positive constant M_2 such that |x_i^p| ≤ M_2 for all p and all i ∈ {1, 2, ..., n} \ K;
(3) |H_i(x^p)| ≤ M_1 for all i and p.

Since g_i(s) is continuous on [−M_2, M_2] for all i ∈ {1, 2, ..., n} \ K, there exists a positive constant M_3 such that |g_i(s)| ≤ M_3 for all s ∈ [−M_2, M_2] and i ∈ {1, 2, ..., n} \ K. Thus |g_i(x_i^p)| ≤ M_3 for all p and i ∈ {1, 2, ..., n} \ K. Moreover, we have

D_3 ≡ Σ_{i=1}^n |(g_i(x_i^p) − g_i(0))(H_i(x^p) − H_i(0))|
    = Σ_{i∈K} |(g_i(x_i^p) − g_i(0))(H_i(x^p) − H_i(0))| + Σ_{i∉K} |(g_i(x_i^p) − g_i(0))(H_i(x^p) − H_i(0))|
    ≤ Σ_{i∈K} |g_i(x_i^p) − g_i(0)| (M_1 + |H_i(0)|) + Σ_{i∉K} (M_1 + |H_i(0)|)(M_3 + |g_i(0)|)
    ≤ M Σ_{i∈K} |g_i(x_i^p) − g_i(0)| + M,   (2.8)

where M = max{ max_{1≤j≤n} {M_1 + |H_j(0)|}, Σ_{i∉K} (M_1 + |H_i(0)|)(M_3 + |g_i(0)|) }. Using the fact that g_i(·) ∈ G, we then have

Σ_{i=1}^n x_i^p (g_i(x_i^p) − g_i(0)) = Σ_{i∈K} x_i^p (g_i(x_i^p) − g_i(0)) + Σ_{i∉K} x_i^p (g_i(x_i^p) − g_i(0)) ≥ Σ_{i∈K} |x_i^p| |g_i(x_i^p) − g_i(0)|.   (2.9)

Substituting (2.8) and (2.9) into (2.7), we obtain, for all p,

Σ_{i∈K} (d |x_i^p| − M) |g_i(x_i^p) − g_i(0)| ≤ M.   (2.10)

Since lim_{p→∞} |x_i^p| = ∞ for all i ∈ K and lim_{|x|→∞} |g_i(x)| = ∞, there exists a positive constant P such that for all p > P, |g_i(x_i^p) − g_i(0)| ≥ 1 and |x_i^p| > 2M/d for all i ∈ K. Hence,


Σ_{i∈K} (d |x_i^p| − M) |g_i(x_i^p) − g_i(0)| > M,   (2.11)

which contradicts (2.10). In summary, lim_{||x||→∞} ||H(x)|| = ∞, and so H(x) must be a homeomorphism on R^n. □
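Because r > 0 in Theorem 2.3 is a free parameter, the matrix condition can be tested numerically by scanning a grid of r values and checking negative semi-definiteness through the eigenvalues of the symmetric matrix. A minimal sketch (the matrices below are invented; the norms follow the definitions in Section 1):

```python
import numpy as np

def theorem23_holds(A, B, rs=np.logspace(-2, 2, 200)):
    """Search for r > 0 with A + A^T + ((1/r)||B||_1 + r||B||_inf) I <= 0."""
    b1 = np.abs(B).sum(axis=0).max()      # ||B||_1: max absolute column sum
    binf = np.abs(B).sum(axis=1).max()    # ||B||_inf: max absolute row sum
    S = A + A.T
    n = A.shape[0]
    for r in rs:
        M = S + (b1 / r + r * binf) * np.eye(n)
        if np.linalg.eigvalsh(M).max() <= 0:  # M is symmetric
            return True, r
    return False, None

A = np.array([[-4.1, 10.0], [-10.0, -4.1]])
B = np.array([[2.0, 2.0], [2.0, 2.0]])
ok, r = theorem23_holds(A, B)
```

Since (1/r)||B||_1 + r||B||_∞ is minimized at r = sqrt(||B||_1 / ||B||_∞), a coarse logarithmic grid around that value suffices; for the matrices above the condition holds near r = 1, in line with Example 3.10 below.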

Theorem 2.4. If the conditions of Theorem 2.2 (or Theorem 2.3) hold, then system (1.1) has a unique equilibrium point x^*.

Proof. Theorem 2.2 (or Theorem 2.3) ensures that H is a homeomorphism. Hence there is a unique point x^* such that H(x^*) = 0. □

3. The global asymptotic stability of neural system (1.1)

In this section, we aim to find sufficient conditions ensuring the global asymptotic stability of the equilibrium of (1.1). The equilibrium of (1.1) is said to be GAS if it is locally stable in the sense of Lyapunov and globally attractive, i.e., every solution of (1.1) corresponding to an arbitrary given set of initial values satisfies lim_{t→∞} x_i(t) = x_i^*, i = 1, 2, ..., n. To prove the GAS of the equilibrium, we will employ the Lyapunov direct method [7]. Namely, the equilibrium x^* is stable and every solution is bounded if there exists a continuously differentiable Lyapunov function V: R^n → R which is positive definite and radially unbounded, i.e., V(x^*) = 0, V(x) > 0 for x ≠ x^*, and lim_{||x−x^*||→∞} V(x) = ∞, such that the time derivative of V along the solutions of (1.1), given by \dot V(x) = ∇V(x)^T \dot x, is negative semi-definite. If, in addition, \dot V(x) is negative definite, then the equilibrium is globally asymptotically stable.

Let x^* = (x_1^*, x_2^*, ..., x_n^*) be the unique equilibrium point of system (1.1) given in the previous theorems. We first shift the equilibrium point of system (1.1) to the origin. By the transformation z(t) = x(t) − x^*, the equilibrium point x^* is shifted to the origin, and the neural network model (1.1) can be rewritten as

\dot z_i(t) = -d_i z_i(t) + Σ_{j=1}^n a_{ij} f_j(z_j(t)) + Σ_{j=1}^n b_{ij} f_j(z_j(t - τ_{ij})),   (3.1)

where

f_i(z_i(t)) = g_i(z_i(t) + x_i^*) − g_i(x_i^*),  i = 1, 2, ..., n.

Note that g_i(x) ∈ G if and only if f_i(x) ∈ G, and x^* is the globally asymptotically stable equilibrium point of (1.1) if and only if the origin of (3.1) is GAS. Thus, in the following we consider only the GAS of the origin of (3.1). We now present our main global asymptotic stability results.

Theorem 3.1. Suppose that in system (3.1), f_i(s) ∈ G and f_i(0) = 0 for all i = 1, 2, ..., n. If there exist positive constants p_i > 0, i = 1, 2, ..., n, such that

p_i(-a_{ii} - |b_{ii}|) - Σ_{j=1, j≠i}^n p_j(|a_{ji}| + |b_{ji}|) ≥ 0,  i = 1, 2, ..., n,

then the origin of neural system (3.1) is globally asymptotically stable. Proof. We employ the following positive-definite Lyapunov functional:

V(z(t)) = Σ_{i=1}^n p_i { |z_i(t)| + Σ_{j=1}^n ∫_{t−τ_{ij}}^{t} |b_{ij}| |f_j(z_j(v))| dv }.   (3.2)

Obviously, V(z) > 0 for any z ≠ 0, and lim_{||z||→∞} V(z) = ∞. Calculating the upper right derivative of V along the trajectories of (3.1), we have

\dot V(z(t)) ≤ Σ_{i=1}^n { −p_i d_i |z_i(t)| + p_i (a_{ii} + |b_{ii}|) |f_i(z_i(t))| + Σ_{j=1, j≠i}^n p_i (|a_{ij}| + |b_{ij}|) |f_j(z_j(t))| }
    = Σ_{i=1}^n { −p_i d_i |z_i(t)| + p_i (a_{ii} + |b_{ii}|) |f_i(z_i(t))| + Σ_{j=1, j≠i}^n p_j (|a_{ji}| + |b_{ji}|) |f_i(z_i(t))| }
    = −Σ_{i=1}^n p_i d_i |z_i(t)| − Σ_{i=1}^n { p_i (−a_{ii} − |b_{ii}|) − Σ_{j=1, j≠i}^n p_j (|a_{ji}| + |b_{ji}|) } |f_i(z_i(t))|
    ≤ −p Σ_{i=1}^n |z_i(t)| < 0 for z(t) ≠ 0,

where p = min_{1≤i≤n} {p_i d_i}. Therefore, the origin of (3.1) is globally asymptotically stable. □
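The mechanism of the proof can be illustrated numerically: discretize the functional (3.2) with a Riemann sum over the stored history and watch it decrease along an Euler trajectory of (3.1). The parameters, the common delay τ = 0.3, and the activation f = tanh below are all invented for the sketch; they satisfy the hypothesis of Theorem 3.1 with p_i = 1:

```python
import numpy as np

h, lag = 0.001, 300                      # step size; tau / h with tau = 0.3
d = np.array([1.0, 1.0])
A = np.array([[-5.0, 0.5], [-0.5, -5.0]])
B = np.array([[1.0, -1.0], [1.0, 1.0]])
p = np.array([1.0, 1.0])
f = np.tanh                              # f in class G with f(0) = 0

def V(hist):
    """Discretized (3.2); hist holds z(t - tau), ..., z(t) with hist[-1] = z(t)."""
    s = h * np.abs(f(hist[:-1])).sum(axis=0)   # ~ integral of |f_j(z_j(v))| over [t-tau, t]
    return p @ np.abs(hist[-1]) + p @ (np.abs(B) @ s)

hist = np.tile([1.5, -1.0], (lag + 1, 1)).astype(float)  # constant initial history
vals = [V(hist)]
for _ in range(5000):                    # Euler steps of (3.1) up to t = 5
    z_new = hist[-1] + h * (-d * hist[-1] + A @ f(hist[-1]) + B @ f(hist[0]))
    hist = np.vstack([hist[1:], z_new])
    vals.append(V(hist))
```

Up to discretization error, `vals` decreases toward zero, mirroring the estimate \dot V ≤ −p Σ |z_i(t)| above.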

Remark 3.2. In [4], the author also derived sufficient conditions for the global asymptotic stability of (3.1). However, the author of [4] assumed that f_i(s) = tanh(a_i s). The results obtained above improve the main results of [4].


Corollary 3.3. Let g_i(·) ∈ G. If there exist positive constants p_i > 0, i = 1, 2, ..., n, such that

p_i(-a_{ii} - |b_{ii}|) - Σ_{j=1, j≠i}^n p_j(|a_{ji}| + |b_{ji}|) ≥ 0,  i = 1, 2, ..., n,

then the neural system (1.1) has a unique equilibrium which is globally asymptotically stable.

Proof. It is easy to check that the conditions of Theorems 2.2 and 3.1 are satisfied by taking f_i(z) = g_i(z + x_i^*) − g_i(x_i^*); hence Theorems 2.2 and 3.1 imply Corollary 3.3. □

Theorem 3.4. Let f_i(·) ∈ G, |f_i(x_i)| → ∞ as |x_i| → ∞, and f_i(0) = 0 for all i ∈ {1, 2, ..., n}. If there exists a positive constant r such that the following condition holds:

A + A^T + ((1/r) ||B||_1 + r ||B||_∞) I ≤ 0,

where A = (a_{ij})_{n×n} and B = (b_{ij})_{n×n}, then the origin of neural system (3.1) is stable and every solution is bounded. If, in addition, s f_i(s) > 0 for s ≠ 0, then every solution approaches zero as t → ∞; that is, the origin of neural system (3.1) is globally asymptotically stable.

Proof. We employ the following positive-definite Lyapunov functional:

V(z(t)) = 2 Σ_{i=1}^n ∫_0^{z_i(t)} f_i(s) ds + r Σ_{i=1}^n Σ_{j=1}^n ∫_{t−τ_{ji}}^{t} |b_{ji}| f_i^2(z_i(v)) dv.   (3.3)

Obviously, V(z) > 0 for any z ≠ 0, and lim_{||z||→∞} V(z) = ∞. The time derivative of V along the solutions of (3.1) is given as follows:

\dot V(z(t)) = −2 Σ_{i=1}^n d_i f_i(z_i(t)) z_i(t) + 2 Σ_{i=1}^n Σ_{j=1}^n a_{ij} f_i(z_i(t)) f_j(z_j(t)) + 2 Σ_{i=1}^n Σ_{j=1}^n b_{ij} f_i(z_i(t)) f_j(z_j(t − τ_{ij}))
      + r Σ_{i=1}^n Σ_{j=1}^n |b_{ji}| f_i^2(z_i(t)) − r Σ_{i=1}^n Σ_{j=1}^n |b_{ji}| f_i^2(z_i(t − τ_{ji}))
    ≤ −2 Σ_{i=1}^n d_i f_i(z_i(t)) z_i(t) + 2 Σ_{i=1}^n Σ_{j=1}^n a_{ij} f_i(z_i(t)) f_j(z_j(t)) + 2 Σ_{i=1}^n Σ_{j=1}^n |b_{ij}| |f_i(z_i(t))| |f_j(z_j(t − τ_{ij}))|
      + r Σ_{i=1}^n Σ_{j=1}^n |b_{ji}| f_i^2(z_i(t)) − r Σ_{i=1}^n Σ_{j=1}^n |b_{ji}| f_i^2(z_i(t − τ_{ji})).   (3.4)

Since

2 |f_i(z_i(t))| |f_j(z_j(t − τ_{ij}))| ≤ (1/r) f_i^2(z_i(t)) + r f_j^2(z_j(t − τ_{ij}))

and g_i(s) ∈ G, (3.4) can be written as

\dot V(z(t)) ≤ −2 Σ_{i=1}^n d_i f_i(z_i(t)) z_i(t) + 2 Σ_{i=1}^n Σ_{j=1}^n a_{ij} f_i(z_i(t)) f_j(z_j(t)) + (1/r) Σ_{i=1}^n Σ_{j=1}^n |b_{ij}| f_i^2(z_i(t))
      + r Σ_{i=1}^n Σ_{j=1}^n |b_{ij}| f_j^2(z_j(t − τ_{ij})) + r Σ_{i=1}^n Σ_{j=1}^n |b_{ji}| f_i^2(z_i(t)) − r Σ_{i=1}^n Σ_{j=1}^n |b_{ji}| f_i^2(z_i(t − τ_{ji}))
    = −2 Σ_{i=1}^n d_i f_i(z_i(t)) z_i(t) + 2 Σ_{i=1}^n Σ_{j=1}^n a_{ij} f_i(z_i(t)) f_j(z_j(t)) + (1/r) Σ_{i=1}^n Σ_{j=1}^n |b_{ij}| f_i^2(z_i(t)) + r Σ_{i=1}^n Σ_{j=1}^n |b_{ji}| f_i^2(z_i(t))
    ≤ −2 Σ_{i=1}^n d_i f_i(z_i(t)) z_i(t) + f^T(z(t)) (A + A^T) f(z(t)) + (1/r) ||B||_1 f^T(z(t)) f(z(t)) + r ||B||_∞ f^T(z(t)) f(z(t))
    = −2 Σ_{i=1}^n d_i f_i(z_i(t)) z_i(t) + f^T(z(t)) ( A + A^T + ((1/r) ||B||_1 + r ||B||_∞) I ) f(z(t))
    ≤ −2 Σ_{i=1}^n d_i f_i(z_i(t)) z_i(t) ≤ 0,


where f(z(t)) = (f_1(z_1(t)), f_2(z_2(t)), ..., f_n(z_n(t)))^T. Therefore, the origin of neural system (3.1) is stable and every solution is bounded. If, in addition, s f_i(s) > 0 for s ≠ 0, then \dot V(z) is negative definite. Thus, the origin of neural system (3.1) is globally asymptotically stable. □

Corollary 3.5. Let g_i(·) ∈ G with |g_i(x_i)| → ∞ as |x_i| → ∞. If the following condition holds:

A + A^T + (||B||_1 + ||B||_∞) I ≤ 0,

where A = (a_{ij})_{n×n} and B = (b_{ij})_{n×n}, then the neural system (1.1) has a unique equilibrium which is stable, and every solution is bounded. If, in addition, the g_i(s) (i = 1, 2, ..., n) are strictly increasing functions, then the unique equilibrium of neural system (1.1) is globally asymptotically stable.

Proof. It is easy to check that the conditions of Theorems 2.3 and 3.4 are satisfied (with r = 1) by taking f_i(z) = g_i(z + x_i^*) − g_i(x_i^*); hence Theorems 2.3 and 3.4 imply Corollary 3.5. □

Now, our results will be compared with those presented in [3,9]. First, we restate the results of [3,9].

Definition 3.6 ([3,9]). The neural network defined by (1.1) is said to be a dissipative system if there exists a compact set S ⊂ R^n such that for every x_0 ∈ R^n there exists T > 0 with x(t; t_0, x_0) ∈ S for all t ≥ t_0 + T, where x(t; t_0, x_0) denotes the solution of (1.1) from initial state x_0 and initial time t_0. In this case, S is called a globally attractive set. A set S is called positive invariant if x_0 ∈ S implies x(t; t_0, x_0) ∈ S for t ≥ t_0.

Theorem 3.7 ([3]). If all conditions of Corollary 3.3 (Corollary 3.5) are satisfied and g(0) = 0, then the neural network defined by (1.1) is a dissipative system, and the set S_1 = {x : |x_i(t)| ≤ |u_i| / d_i, i = 1, 2, ..., n} is a positive invariant and globally attractive set.

Theorem 3.8 ([9]). Let g(x) ∈ G and g(0) = 0. If there exist





ε > 0 and k > 0 such that the matrix

( Q_11    Q_12 )
( Q_12^T  Q_22 )

is negative semi-definite, where Q_11 = (A + A^T)/2 + (k + ε) I_{n×n}, Q_12 = B/2, and Q_22 = −k I_{n×n}, then the neural network model (1.1) is a dissipative system and the set

S_2 = { x : Σ_{i=1}^n ( g_i(x_i) − u_i/(2ε) )^2 ≤ Σ_{i=1}^n u_i^2/(4ε^2) }

is a positive invariant and globally attractive set.

Fig. 1. Numerical simulation of the global asymptotic stability of the equilibrium point in Example 3.10: the states x_1(t) and x_2(t) plotted against time t over [0, 15].
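The qualitative content of Fig. 1 can be reproduced by Euler integration of the two-neuron system of Example 3.10 below (a = 10, g_i(x) = x^3). The delay value τ = 0.5, the step size, and the second initial history in this sketch are assumptions; the point being checked is that different initial conditions converge to the same equilibrium:

```python
import numpy as np

h, tau = 0.0005, 0.5        # step size and an assumed delay (tau is not fixed in the text)
lag = int(tau / h)
D = np.array([1.0, 2.0])
a = 10.0
A = np.array([[-4.1, a], [-a, -4.1]])
B = np.array([[2.0, 2.0], [2.0, 2.0]])
I = np.array([5.0, 5.0])
g = lambda v: v ** 3

def run(phi, T=15.0):
    """Euler integration; returns the state at time T."""
    steps = int(T / h)
    x = np.empty((lag + steps + 1, 2))
    for k in range(lag + 1):                 # history phi on [-tau, 0]
        x[k] = phi(-(lag - k) * h)
    for t in range(lag, lag + steps):
        x[t + 1] = x[t] + h * (-D * x[t] + A @ g(x[t]) + B @ g(x[t - lag]) + I)
    return x[-1]

# the initial condition used for Fig. 1, and a second, different history
x_star = run(lambda t: np.array([2 + 0.5 * t, 1 + 0.4 * t ** 2]))
x_star_b = run(lambda t: np.array([-1.0, 0.5]))
```

Both runs settle at the same point, as Corollary 3.5 predicts independently of the delay.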


Remark 3.9. In Corollary 3.3, we derived that the neural network defined by (1.1) has a unique equilibrium which is globally asymptotically stable, whereas the authors of [3] only proved that the network is globally dissipative under the same conditions plus g_i(0) = 0. Similarly, in Corollary 3.5 we derived that the network has a unique equilibrium which is stable and that every solution is bounded, whereas the authors of [3] again only proved global dissipativity under the same conditions plus g_i(0) = 0. Therefore, our results improve those of [3].

To show that the conditions obtained in this paper provide new sufficient criteria for determining the equilibrium and asymptotic stability of system (1.1), we consider the following example.

Example 3.10. Consider the following two-neuron network with delay:

\dot x = -D x + A g(x) + B g(x(t - τ)) + I,

with

D = diag(1, 2),  A = ( -4.1  a ; -a  -4.1 ),  B = ( 2  2 ; 2  2 ),  I = (5, 5)^T,  g_i(x) = x^3,

where a is any constant. Obviously, g_i(x) ∈ G and the g_i(x) are strictly monotone increasing for all i. However, g_i(x) does not satisfy a global Lipschitz condition, so the results in [4,11,15] fail when applied to this example. Since A + A^T + (||B||_1 + ||B||_∞) I = −0.2 I < 0, this neural network model has a unique equilibrium x^* which is globally asymptotically stable according to Corollary 3.5. By contrast, applying Theorems 3.7 and 3.8 yields only that the model is a dissipative system. Therefore, our results establish new criteria for the global asymptotic stability of delayed neural networks and improve the existing results. Fig. 1 shows that the dynamical system converges to the unique equilibrium point, where a = 10 and the initial condition

φ_1(t) = 2 + 0.5t, φ_2(t) = 1 + 0.4t^2, t ∈ [−τ, 0].

4. Conclusion

This paper is concerned with the global asymptotic stability properties of continuous-time neural networks with delays. Under the sole assumption that the activation functions are continuous and monotone nondecreasing, we apply the homeomorphism approach and the Lyapunov direct method to derive general sufficient conditions for the global asymptotic stability of delayed neural networks. A comparison between our results and previous results was also given, which shows that our results establish a new set of global asymptotic stability criteria for delayed neural networks. Therefore, our results improve those in the earlier references.

Acknowledgements

The authors thank the anonymous reviewers and the editor for their constructive comments, which led to a truly significant improvement of the manuscript.

References

[1] Arik S. Global asymptotical stability of a larger class of delayed neural networks. In: Proceedings of ISCAS, 2003. p. 721–4.
[2] Arik S. An analysis of exponential stability of delayed neural networks with time varying delays. Neural Networks 2004;17:1027–31.
[3] Arik S. On the global dissipativity of dynamical neural networks with time delays. Phys Lett A 2004;326:126–32.
[4] Chen T. Global convergence of delayed dynamical systems. IEEE Trans Neural Networks 2001;12(6):1532–6.
[5] Chen Z, Zhao DH, Ruan J. Dynamic analysis of high-order Cohen-Grossberg neural networks with time delay. Chaos Soliton Fract 2007;32(4):1538–46.
[6] Forti M, Tesi A. New conditions for global stability of neural networks with application to linear and quadratic programming problems. IEEE Trans Circ Syst I Fundam Theory Appl 1995;42(7):354–66.
[7] Hale JK, Verduyn Lunel SM. Introduction to functional differential equations. New York: Springer; 1993.
[8] Li CH, Yang SY. A further analysis on harmless delays in Cohen-Grossberg neural networks. Chaos Soliton Fract 2007;34(2):645–53.
[9] Liao X, Wang J. Global dissipativity of continuous-time recurrent neural networks with time delay. Phys Rev E 2003;68:016118.
[10] Liang J, Cao J. Exponential stability of continuous-time and discrete-time bidirectional associative memory networks with delay. Chaos Soliton Fract 2004;22(4):773–85.
[11] Mohamad S. Global exponential stability in DCNNs with distributed delays and unbounded activations. J Comput Appl Math 2007;205(1):161–73.
[12] Song Q, Cao J. Global exponential stability and existence of periodic solutions in BAM networks with delays and reaction-diffusion terms. Chaos Soliton Fract 2005;23(2):421–30.
[13] Zhao H, Cao J. New conditions for global exponential stability of cellular neural networks with delays. Neural Networks 2005;18:1332–40.
[14] Xia Y, Cao J, Lin M. New results on the existence and uniqueness of almost periodic solution for BAM neural networks with continuously distributed delays. Chaos Soliton Fract 2007;31:928–36.
[15] Zhao WR. Dynamics of Cohen-Grossberg neural network with variable coefficients and time-varying delays. Nonlinear Anal Real World Appl 2008;9(3):1024–37.
[16] Zhao WR, Zhang HS. Globally exponential stability of neural network with constant and variable delays. Phys Lett A 2006;352(4–5):350–7.
[17] Zhao WR, Tan Y. Harmless delays for global exponential stability of Cohen-Grossberg neural networks. Math Comput Simulat 2007;74(1):47–57.
[18] Zhao WR, Yan AZ. Stability analysis of neural networks with both variable and unbounded delays. Chaos Soliton Fract. doi:10.1016/j.chaos.2007.08.017.