Physics Letters A 318 (2003) 399–405. doi:10.1016/j.physleta.2003.09.052

Global asymptotic stability of Hopfield neural networks with transmission delays ✩

Qiang Zhang a,b,∗, Xiaopeng Wei b, Jin Xu b

a School of Mechanical Engineering, Dalian University of Technology, Dalian 116024, China
b Advanced Design Technology Center, Dalian University, Dalian 116622, China

Received 6 June 2003; received in revised form 13 September 2003; accepted 23 September 2003. Communicated by A.P. Fordy.

Abstract

The global asymptotic stability of Hopfield neural networks with transmission delays is considered in this Letter. We present a new sufficient condition for asymptotic stability; the condition depends on the size of the delay. The result is less conservative than those established in the earlier references. A numerical example is given to illustrate the applicability of this condition.

PACS: 87.10.+e; 85.40.Ls; 43.80.+p; 87.18.Sn

Keywords: Global asymptotic stability; Hopfield neural networks; Transmission delays

1. Introduction

Recently, there has been increasing interest in potential applications of the dynamics of neural networks in information processing systems. Among the most popular models in the literature are Hopfield neural networks (HNNs). HNNs have proved to be essential in solving some classes of optimization problems and have found important applications in pattern recognition and associative memory. However, time delays are inevitably present in the implementation of neural networks due to the finite switching speed of amplifiers, so the study of their effect on the stability of neural networks has received much attention; see, e.g., [1–28] and the references cited therein. Stability conditions come in delay-independent and delay-dependent forms. In general, delay-independent results are simple and straightforward, while delay-dependent results are more complex but less restrictive and conservative. So far, most existing results for the global asymptotic stability of delayed Hopfield neural networks are independent of the delay parameters [10]. However, in some applications the delays are fixed and their bounds are known. In this Letter, we present a new sufficient condition for asymptotic stability provided that the delays do not exceed certain (sufficiently small) bounds. The result given here improves and generalizes results in the references. To demonstrate the applicability of our result, we consider a specific example.

✩ The project is supported by the National Natural Science Foundation of China (Grant Nos. 60174037, 50275013).
∗ Corresponding author. E-mail address: [email protected] (Q. Zhang).

2. Main results

The dynamic behavior of a continuous-time delayed Hopfield neural network can be described by the following state equations:

$$x_i'(t) = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij} s_j\bigl(x_j(t-\tau)\bigr) + b_i, \quad i = 1, 2, \ldots, n, \tag{1}$$

or equivalently

$$x'(t) = -Cx(t) + AS\bigl(x(t-\tau)\bigr) + b, \tag{2}$$

where $x(t) = [x_1(t), \ldots, x_n(t)]^T \in R^n$, $S(x(t-\tau)) = [s_1(x_1(t-\tau)), \ldots, s_n(x_n(t-\tau))]^T \in R^n$, $C = \mathrm{diag}(c_i > 0)$ is a positive diagonal matrix, $A = \{a_{ij}\}$ represents the delayed feedback matrix, $b = [b_1, \ldots, b_n]^T$ is a constant input vector, and the delay $\tau$ is a nonnegative constant. In this Letter, we assume that the activation functions $s_i$, $i = 1, 2, \ldots, n$, satisfy the following hypotheses:

(H1): $s_i$ ($i = 1, 2, \ldots, n$) is a bounded function on $R$;
(H2): $0 \le \dfrac{s_i(\xi_1) - s_i(\xi_2)}{\xi_1 - \xi_2} \le L_i$, $\forall \xi_1, \xi_2 \in R$, $\xi_1 \ne \xi_2$.

This type of activation function is clearly more general than both the usual sigmoid activation functions and the piecewise linear function (PWL) $s_i(x) = \tfrac{1}{2}(|x+1| - |x-1|)$ used in [3]. The initial conditions associated with system (1) are of the form

$$x_i(s) = \phi_i(s), \quad s \in [-\tau, 0], \ i = 1, 2, \ldots, n,$$

in which $\phi_i(s)$ is continuous for $s \in [-\tau, 0]$.

In the following, we use the notation $A > 0$ (or $A < 0$) to denote that the matrix $A$ is symmetric and positive-definite (or negative-definite). The notations $A^T$ and $A^{-1}$ denote the transpose and the inverse of a square matrix $A$, respectively. If $A, B$ are symmetric matrices, $A > B$ ($A \ge B$) means that $A - B$ is positive-definite (positive-semidefinite). In order to obtain our results, we need the following lemmas.

Lemma 1. For any vectors $a, b \in R^n$, the inequality

$$-2a^T b \le a^T Y a + b^T Y^{-1} b \tag{3}$$

holds, in which $Y$ is any matrix with $Y > 0$.

Proof. Since $Y > 0$, we have

$$a^T Y a + 2a^T b + b^T Y^{-1} b = \bigl(Y^{1/2} a + Y^{-1/2} b\bigr)^T \bigl(Y^{1/2} a + Y^{-1/2} b\bigr) \ge 0.$$

From this, we can easily obtain inequality (3). ✷

Lemma 2. For a given matrix

$$S = \begin{pmatrix} S_{11} & S_{12} \\ S_{12}^T & S_{22} \end{pmatrix}, \quad \text{with } S_{11} = S_{11}^T,\ S_{22} = S_{22}^T,$$


the following conditions are equivalent:

(1) $S < 0$;
(2) $S_{22} < 0$, $S_{11} - S_{12} S_{22}^{-1} S_{12}^T < 0$.

Proof. Note that

$$\begin{pmatrix} I & -S_{12} S_{22}^{-1} \\ 0 & I \end{pmatrix} \begin{pmatrix} S_{11} & S_{12} \\ S_{12}^T & S_{22} \end{pmatrix} \begin{pmatrix} I & -S_{12} S_{22}^{-1} \\ 0 & I \end{pmatrix}^T = \begin{pmatrix} S_{11} - S_{12} S_{22}^{-1} S_{12}^T & 0 \\ 0 & S_{22} \end{pmatrix}.$$

This completes the proof. ✷

Lemma 3 [17]. There exists an equilibrium point for Eq. (1).

Note that Lemma 3 only ensures the existence of an equilibrium point of Eq. (1); its uniqueness follows from the global asymptotic stability established below. According to Lemma 3, assume $x^* = (x_1^*, x_2^*, \ldots, x_n^*)^T$ is an equilibrium point of Eq. (1). The transformation $y_i = x_i - x_i^*$ transforms system (1) or (2) into the following system:

$$y_i'(t) = -c_i y_i(t) + \sum_{j=1}^{n} a_{ij} f_j\bigl(y_j(t-\tau)\bigr), \quad \forall i, \tag{4}$$

where $f_j(y_j(t)) = s_j(y_j(t) + x_j^*) - s_j(x_j^*)$, or

$$y'(t) = -Cy(t) + Af\bigl(y(t-\tau)\bigr), \tag{5}$$

respectively. Note that since each function $s_j(\cdot)$ satisfies the hypotheses (H1) and (H2), each $f_j(\cdot)$ satisfies

$$f_j^2(\xi_j) \le L_j^2 \xi_j^2, \qquad f_j(0) = 0, \qquad \xi_j f_j(\xi_j) \ge \frac{f_j^2(\xi_j)}{L_j}, \qquad \forall \xi_j \in R. \tag{6}$$
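As a concrete illustration (ours, not part of the original Letter), the piecewise-linear activation $s(x) = \tfrac{1}{2}(|x+1| - |x-1|)$ used in Example 1 below satisfies (H1) and (H2) with $L_j = 1$, and the shifted function $f_j$ then obeys the bounds in (6). The following minimal Python sketch checks (6) numerically on a grid; the grid and the "equilibrium" value are arbitrary illustrative choices.

```python
import numpy as np

# PWL activation used in Example 1; it is bounded (H1) and has slope in [0, 1] (H2), so L = 1.
def s(x):
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

L = 1.0
x_star = 0.3                          # arbitrary "equilibrium" value, for illustration only
xi = np.linspace(-5.0, 5.0, 2001)
xi = xi[np.abs(xi) > 1e-9]            # avoid xi = 0 when checking (6)

# shifted activation f(xi) = s(xi + x*) - s(x*); f(0) = 0 by construction
f = s(xi + x_star) - s(x_star)

# property (6): f^2(xi) <= L^2 xi^2  and  xi f(xi) >= f^2(xi) / L
assert np.all(f**2 <= L**2 * xi**2 + 1e-12)
assert np.all(xi * f >= f**2 / L - 1e-12)
print("bounds in (6) hold on the test grid")
```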

To prove the stability of $x^*$ for Eq. (1), it is sufficient to prove the stability of the trivial solution of Eq. (4) or Eq. (5). We now present the main result for the global stability of Eq. (1). To this end, we rewrite (5) as

$$y'(t) = -Cy(t) + AG\bigl(y(t-\tau)\bigr)\, y(t-\tau), \tag{7}$$

where $G(y) = \mathrm{diag}(g_1(y_1), \ldots, g_n(y_n))$ and $0 \le g_i(y_i) = f_i(y_i)/y_i \le L_i$. Moreover, since $y(t-\tau) = y(t) - \int_{t-\tau}^{t} y'(\alpha)\, d\alpha$, substituting this into (7) gives

$$y'(t) = \bigl[-C + AG\bigl(y(t-\tau)\bigr)\bigr] y(t) - AG\bigl(y(t-\tau)\bigr) \int_{t-\tau}^{t} y'(\alpha)\, d\alpha. \tag{8}$$

Evidently, the solution of Eq. (8) is equivalent to that of Eq. (7). By this transformation, we obtain the following result.

Theorem 1. The equilibrium point of Eq. (1) is globally asymptotically stable if there exist a positive-definite matrix $P$ and positive diagonal matrices $D_1$, $D_2$ and $D_3$ such that the matrix $\Sigma$ defined by

$$\Sigma = \begin{pmatrix} -(PC + CP) + PAD_2A^TP + LD_2^{-1}L + \tau\bigl(PALD_1LA^TP + LD_3^{-1}L + CD_1^{-1}C\bigr) & -\tau CD_1^{-1}A \\ -\tau A^TD_1^{-1}C & \tau\bigl(A^TD_1^{-1}A - D_3^{-1}\bigr) \end{pmatrix}$$

is negative-definite.


Proof. We will employ the following positive-definite Lyapunov functional:

$$V(y_t) = y^T(t) P y(t) + \tau \int_{t-\tau}^{t} f^T\bigl(y(s)\bigr) D_3^{-1} f\bigl(y(s)\bigr)\, ds + \int_{t-\tau}^{t} \int_{\beta}^{t} (y')^T(\alpha)\, D_1^{-1}\, y'(\alpha)\, d\alpha\, d\beta.$$

The time derivative of $V(y_t)$ along the trajectories of system (8) is obtained as

$$\begin{aligned} D^+ V(y_t) ={} & 2y^T(t) P \bigl[-C + AG\bigl(y(t-\tau)\bigr)\bigr] y(t) - 2y^T(t) P A G\bigl(y(t-\tau)\bigr) \int_{t-\tau}^{t} y'(\alpha)\, d\alpha \\ & + \tau f^T\bigl(y(t)\bigr) D_3^{-1} f\bigl(y(t)\bigr) - \tau f^T\bigl(y(t-\tau)\bigr) D_3^{-1} f\bigl(y(t-\tau)\bigr) \\ & + \tau (y')^T(t) D_1^{-1} y'(t) - \int_{t-\tau}^{t} (y')^T(\alpha) D_1^{-1} y'(\alpha)\, d\alpha. \end{aligned} \tag{9}$$

We can write the following:

$$\begin{aligned} 0 &\le \bigl[D_2^{1/2} A^T P y(t) - D_2^{-1/2} G\bigl(y(t-\tau)\bigr) y(t)\bigr]^T \bigl[D_2^{1/2} A^T P y(t) - D_2^{-1/2} G\bigl(y(t-\tau)\bigr) y(t)\bigr] \\ &= y^T(t) P A D_2 A^T P y(t) - 2y^T(t) P A G\bigl(y(t-\tau)\bigr) y(t) + y^T(t) G\bigl(y(t-\tau)\bigr) D_2^{-1} G\bigl(y(t-\tau)\bigr) y(t) \\ &\le y^T(t) P A D_2 A^T P y(t) - 2y^T(t) P A G\bigl(y(t-\tau)\bigr) y(t) + y^T(t) L D_2^{-1} L y(t), \end{aligned}$$

hence,

$$2y^T(t) P A G\bigl(y(t-\tau)\bigr) y(t) \le y^T(t) \bigl(P A D_2 A^T P + L D_2^{-1} L\bigr) y(t). \tag{10}$$

Let $a = G(y(t-\tau)) A^T P y(t)$ and $b = y'(\alpha)$. By Lemma 1, we have

$$\begin{aligned} -2y^T(t) P A G\bigl(y(t-\tau)\bigr) y'(\alpha) &\le y^T(t) P A G\bigl(y(t-\tau)\bigr) D_1 G\bigl(y(t-\tau)\bigr) A^T P y(t) + (y')^T(\alpha) D_1^{-1} y'(\alpha) \\ &\le y^T(t) P A L D_1 L A^T P y(t) + (y')^T(\alpha) D_1^{-1} y'(\alpha). \end{aligned} \tag{11}$$

Moreover, we have

$$\begin{aligned} \tau (y')^T(t) D_1^{-1} y'(t) &= \tau \bigl[-Cy(t) + Af\bigl(y(t-\tau)\bigr)\bigr]^T D_1^{-1} \bigl[-Cy(t) + Af\bigl(y(t-\tau)\bigr)\bigr] \\ &= \tau \bigl[y^T(t) C D_1^{-1} C y(t) - 2y^T(t) C D_1^{-1} A f\bigl(y(t-\tau)\bigr) + f^T\bigl(y(t-\tau)\bigr) A^T D_1^{-1} A f\bigl(y(t-\tau)\bigr)\bigr]. \end{aligned} \tag{12}$$

Substituting (10), (11) (integrated over $\alpha \in [t-\tau, t]$) and (12) into (9), and using (6) to bound $\tau f^T(y(t)) D_3^{-1} f(y(t))$ by $\tau y^T(t) L D_3^{-1} L y(t)$, we obtain

$$\begin{aligned} D^+ V(y_t) \le{} & y^T(t) \bigl[-(PC + CP) + P A D_2 A^T P + L D_2^{-1} L + \tau P A L D_1 L A^T P + \tau L D_3^{-1} L + \tau C D_1^{-1} C\bigr] y(t) \\ & - \tau f^T\bigl(y(t-\tau)\bigr) D_3^{-1} f\bigl(y(t-\tau)\bigr) - 2\tau y^T(t) C D_1^{-1} A f\bigl(y(t-\tau)\bigr) + \tau f^T\bigl(y(t-\tau)\bigr) A^T D_1^{-1} A f\bigl(y(t-\tau)\bigr) \\ ={} & \begin{pmatrix} y(t) \\ f\bigl(y(t-\tau)\bigr) \end{pmatrix}^T \Sigma \begin{pmatrix} y(t) \\ f\bigl(y(t-\tau)\bigr) \end{pmatrix} < 0. \end{aligned}$$

Now, by a standard Lyapunov-type theorem for functional differential equations (see, e.g., [29]), the trivial solution of Eq. (8) is globally asymptotically stable, and therefore $x^*$ is globally asymptotically stable for Eq. (1). ✷

The condition in Theorem 1 contains several adjustable parameters, which can make it difficult to verify directly. To make the criterion more testable, we set $D_1 = C$ and $D_2 = D_3 = I$ in Theorem 1, which yields


Corollary 1. The equilibrium point of Eq. (1) is globally asymptotically stable if there exists a positive-definite matrix $P$ such that

$$\begin{pmatrix} -(PC + CP) + PAA^TP + L^2 + \tau\bigl(PALCLA^TP + L^2 + C\bigr) & -\tau A \\ -\tau A^T & \tau\bigl(A^TC^{-1}A - I\bigr) \end{pmatrix} < 0.$$

Furthermore, for Eq. (1) with $L = C = I$, we get

Corollary 2. The equilibrium point of Eq. (1) is globally asymptotically stable if there exists a positive-definite matrix $P$ such that

$$\begin{pmatrix} -2P + PAA^TP + I + \tau\bigl(PAA^TP + 2I\bigr) & -\tau A \\ -\tau A^T & \tau\bigl(A^TA - I\bigr) \end{pmatrix} < 0.$$

Remark 1. If $\tau = 0$, then Theorem 1 reduces to $-(PC + CP) + PAD_2A^TP + LD_2^{-1}L < 0$. This implies $A - CL^{-1} \in$ LDS, which is consistent with the elegant result given in [4] for the global asymptotic stability of neural networks without delays.

Remark 2. In [16], under the assumption of symmetry of the connection matrix and differentiability of the activation functions, Ye, Michel and Wang obtained a sufficient condition for global asymptotic stability. In [9], van den Driessche and Zou also obtained a sufficient condition for stability, which requires the activation functions to be continuously differentiable and strictly monotonically increasing. Note that our Theorem 1 only requires the activation functions to satisfy the hypotheses (H1) and (H2), which are milder than the restrictions in [16] and [9].
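Before turning to the example, note that once $P$, $D_1$, $D_2$, $D_3$ are fixed, the condition of Theorem 1 is a plain negative-definiteness test. The sketch below is ours, not part of the Letter; the names `theorem1_sigma` and `is_negative_definite` are illustrative. It assembles the block matrix $\Sigma$ of Theorem 1 with NumPy and tests its eigenvalues.

```python
import numpy as np

def theorem1_sigma(A, C, L, tau, P, D1, D2, D3):
    """Assemble the block matrix Sigma of Theorem 1 for given data and multiplier matrices."""
    inv = np.linalg.inv
    s11 = (-(P @ C + C @ P) + P @ A @ D2 @ A.T @ P + L @ inv(D2) @ L
           + tau * (P @ A @ L @ D1 @ L @ A.T @ P + L @ inv(D3) @ L + C @ inv(D1) @ C))
    s12 = -tau * C @ inv(D1) @ A
    s22 = tau * (A.T @ inv(D1) @ A - inv(D3))
    return np.block([[s11, s12], [s12.T, s22]])

def is_negative_definite(M):
    # Sigma is symmetric, so its eigenvalues are real; Sigma < 0 iff they are all negative.
    return bool(np.all(np.linalg.eigvalsh(M) < 0.0))
```

In a full design one would search over $P$ and the $D_i$ (for instance with an LMI solver); here, as in Example 1 below, they are simply chosen by hand and the test reduces to a single eigenvalue computation.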

3. An example

In this section, we consider a particular network of two neurons and give a supporting numerical simulation.

Example 1. Consider the following simple two-neuron neural network with delays:

$$\begin{aligned} x_1'(t) &= -0.7x_1(t) + 0.1s\bigl(x_1(t-\tau)\bigr) + 0.1s\bigl(x_2(t-\tau)\bigr) - 2, \\ x_2'(t) &= -0.7x_2(t) + 0.3s\bigl(x_1(t-\tau)\bigr) + 0.3s\bigl(x_2(t-\tau)\bigr) + 1, \end{aligned} \tag{13}$$

where the activation function is described by $s_i(x) = 0.5(|x+1| - |x-1|)$ and the time delay $\tau$ is a bounded constant. The delayed feedback matrix $A$ and the matrix $C$ are

$$C = \begin{pmatrix} 0.7 & 0 \\ 0 & 0.7 \end{pmatrix}, \qquad A = \begin{pmatrix} 0.1 & 0.1 \\ 0.3 & 0.3 \end{pmatrix}.$$

Clearly, $L = I$ and $s_i(x)$ satisfies (H1) and (H2). Let $\tau = 0.45$, $D_1 = 1.8C$, $P = D_2 = 1.6I$, $D_3 = 1.7I$ in Theorem 1 above; the matrix $\Sigma$ is obtained as

$$\Sigma = \begin{pmatrix} -3.2C + 1.6^3 AA^T + 1.6^{-1}I + 0.45\bigl(1.6^2 \cdot 1.8\, ACA^T + 1.7^{-1}I + 1.8^{-1}C\bigr) & -1.8^{-1} \cdot 0.45\, A \\ -1.8^{-1} \cdot 0.45\, A^T & 0.45\bigl(1.8^{-1} A^T C^{-1} A - 1.7^{-1} I\bigr) \end{pmatrix}.$$

By a straightforward calculation, we have

$$\Sigma = \begin{pmatrix} -1.0643 & 0.3329 & -0.0250 & -0.0250 \\ 0.3329 & -0.1767 & -0.0750 & -0.0750 \\ -0.0250 & -0.0750 & -0.2290 & 0.0357 \\ -0.0250 & -0.0750 & 0.0357 & -0.2290 \end{pmatrix}.$$


Fig. 1. Numerical simulation of solutions of Eq. (13).

The eigenvalues of $\Sigma$ are $-1.1753$, $-0.0008$, $-0.2582$ and $-0.2647$, respectively; thus $\Sigma < 0$. It follows from Theorem 1 that the equilibrium point of Eq. (13) is asymptotically stable for $\tau = 0.45$. In [10], the authors obtained the following result: for Eq. (13), if the delay $\tau \le \tau^* = 0.3638$, then the equilibrium point of Eq. (13) is asymptotically stable. Obviously, this result is more restrictive and conservative than ours. The numerical simulation is illustrated in Fig. 1.
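As a cross-check (ours, not part of the original Letter), the short script below rebuilds $\Sigma$ for the data of Example 1, inlining the construction to stay self-contained, and then integrates Eq. (13) with a crude fixed-step Euler scheme and a constant initial history. It should reproduce, up to rounding, the eigenvalues quoted above, and the trajectories settle to the equilibrium, consistent with the convergence illustrated in Fig. 1; the step size and initial history are arbitrary choices.

```python
import numpy as np

A = np.array([[0.1, 0.1], [0.3, 0.3]])
C = 0.7 * np.eye(2)
b = np.array([-2.0, 1.0])
tau = 0.45

# Sigma of Theorem 1 with L = I, D1 = 1.8C, P = D2 = 1.6I, D3 = 1.7I (L = I simplifies the products).
I2 = np.eye(2)
P, D1, D2, D3 = 1.6 * I2, 1.8 * C, 1.6 * I2, 1.7 * I2
inv = np.linalg.inv
s11 = (-(P @ C + C @ P) + P @ A @ D2 @ A.T @ P + inv(D2)
       + tau * (P @ A @ D1 @ A.T @ P + inv(D3) + C @ inv(D1) @ C))
s12 = -tau * C @ inv(D1) @ A
s22 = tau * (A.T @ inv(D1) @ A - inv(D3))
Sigma = np.block([[s11, s12], [s12.T, s22]])
print(np.round(np.linalg.eigvalsh(Sigma), 4))   # all negative, close to the values quoted above

# Crude forward-Euler integration of Eq. (13) with constant history x(s) = x0 on [-tau, 0].
def s_act(x):
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

h = 0.001                       # step size (illustrative choice)
d = int(round(tau / h))         # delay expressed in steps
steps = 40000                   # simulate 40 time units
x = np.tile(np.array([1.5, -1.0]), (steps + d + 1, 1))   # history rows followed by trajectory
for k in range(d, d + steps):
    x[k + 1] = x[k] + h * (-C @ x[k] + A @ s_act(x[k - d]) + b)
print("state at t =", steps * h, ":", np.round(x[-1], 4))
```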

Acknowledgement

The authors would like to express their sincere appreciation to the reviewers for their helpful comments in improving the presentation and quality of the Letter.

References

[1] Q. Zhang, R. Ma, J. Xu, Electron. Lett. 37 (2001) 575.
[2] Q. Zhang, R. Ma, C. Wang, J. Xu, IEEE Trans. Automat. Control 48 (2003) 794.
[3] L.O. Chua, L. Yang, IEEE Trans. Circuits Systems 35 (1988) 1257.
[4] M. Forti, A. Tesi, IEEE Trans. Circuits Systems I 42 (1995) 354.
[5] S. Arik, V. Tavsanoglu, IEEE Trans. Circuits Systems I 45 (1998) 168.
[6] X. Liao, G. Chen, E.N. Sanchez, IEEE Trans. Circuits Systems I 49 (2002) 1033.
[7] X. Liao, J. Yu, IEEE Trans. Neural Networks 9 (1998) 1042.
[8] X. Liao, K.W. Wong, Z. Wu, G. Chen, IEEE Trans. Circuits Systems I 48 (2001) 1355.
[9] P.V.D. Driessche, X. Zou, SIAM J. Appl. Math. 58 (1998) 1878.
[10] A. Chen, J. Cao, L. Huang, IEEE Trans. Circuits Systems I 49 (2002) 1028.
[11] S. Arik, IEEE Trans. Circuits Systems I 49 (2000) 1211.
[12] S. Arik, IEEE Trans. Circuits Systems I 47 (2000) 571.
[13] S. Arik, IEEE Trans. Neural Networks 13 (2002) 1239.
[14] S. Arik, Phys. Lett. A 311 (2003) 504.
[15] T.L. Liao, F.C. Wang, IEEE Trans. Neural Networks 11 (2000) 1481.
[16] H. Ye, A.N. Michel, K.N. Wang, Phys. Rev. E 50 (1994) 4206.
[17] J. Cao, IEEE Trans. Circuits Systems I 48 (2001) 1330.


[18] J. Cao, Phys. Rev. E 60 (1999) 3244.
[19] Z. Yi, P.A. Heng, K.S. Leung, IEEE Trans. Circuits Systems 48 (2001) 680.
[20] J. Zhang, Y. Yang, Int. J. Circuit Theor. Appl. 29 (2001) 185.
[21] H. Huang, J. Cao, J. Wang, Phys. Lett. A 298 (2002) 393.
[22] C. Sun, K. Zhang, S. Fei, C. Fong, Phys. Lett. A 298 (2002) 122.
[23] M. Gilli, IEEE Trans. Circuits Systems I 41 (1994) 518.
[24] M. Joy, Neural Networks 13 (2000) 613.
[25] H. Lu, Neural Networks 13 (2000) 1135.
[26] K. Gopalsamy, X. He, Physica D 76 (1994) 344.
[27] K. Gopalsamy, X. He, IEEE Trans. Neural Networks 5 (1994) 998.
[28] T. Roska, C.W. Wu, M. Balsi, L.O. Chua, IEEE Trans. Circuits Systems 39 (1992) 487.
[29] J. Hale, S.M. Verduyn Lunel, Introduction to Functional Differential Equations, Springer, New York, 1993.
