Nonlinear Analysis 67 (2007) 3037–3040
www.elsevier.com/locate/na

Global asymptotic stability for a class of nonlinear neural networks with multiple delays

Yi-You Hou a, Teh-Lu Liao a, Jun-Juh Yan b,∗

a Department of Engineering Science, National Cheng Kung University, Tainan 701, Taiwan, ROC
b Department of Computer and Communication, Shu-Te University, Kaohsiung 824, Taiwan, ROC

Received 25 August 2006; accepted 26 September 2006

Abstract

This paper investigates the global asymptotic stability (GAS) for a class of nonlinear neural networks with multiple delays. Based on Lyapunov stability theory and the linear matrix inequality (LMI) technique, a less conservative delay-dependent stability criterion is derived. The present result is shown to be less conservative than those given in the literature. © 2006 Elsevier Ltd. All rights reserved.

Keywords: Neural networks; Global asymptotic stability (GAS); Lyapunov stability theory; Linear matrix inequality (LMI)

1. Introduction

In recent years, the stability analysis of a class of neural networks, such as Hopfield neural networks (HNNs), cellular neural networks (CNNs), and Cohen–Grossberg neural networks (CGNNs), has been extensively investigated [1–6]. These networks have been applied to image processing and signal processing problems. However, in this class of neural networks, the interactions between neurons are generally asynchronous, which inevitably introduces time delay. The existence of time delay may cause oscillations and instability in neural networks, so the stability analysis of neural networks with time delay is an important issue. Several criteria for the stability of time delay neural networks have been proposed [1–6]. These criteria can be classified into two categories, namely delay-independent criteria and delay-dependent criteria. Generally speaking, the latter group is less conservative than the former when the time delays are small.

In this work, a general class of neural networks with multiple delays, which includes HNNs, CNNs, and CGNNs, is considered. Based on Lyapunov stability theory and the linear matrix inequality technique, a less conservative delay-dependent stability criterion is proposed for guaranteeing the asymptotic stability of neural networks with multiple delays. Finally, an illustrative example is provided to verify that the proposed criterion is less conservative than others reported in the literature [1–6].

∗ Corresponding author. Tel.: +886 7 6158000x4806; fax: +886 7 6159000x4899.

E-mail address: [email protected] (J.-J. Yan).

0362-546X/$ - see front matter © 2006 Elsevier Ltd. All rights reserved.
doi:10.1016/j.na.2006.09.049


2. Preliminaries

Consider a class of neural networks with multiple delays described as

$$\dot{x}_i(t) = -d_i(x_i(t))\left(c_i(x_i(t)) - \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) - \sum_{j=1}^{n} b_{ij} f_j(x_j(t-\tau_j)) + J_i\right),$$
$$x_i(t) = \phi_i(t), \quad t \in [-\tau_j, 0], \quad i, j = 1, \ldots, n, \tag{1}$$

where $n \ge 2$ is the number of neurons in the network, $x_i$ denotes the state variable associated with the $i$th neuron and $\phi_i(t)$ is a continuous initial function. $d_i(x_i)$ represents the amplification function, and $c_i(x_i)$ is an appropriately behaved function that keeps the solution of system (1) bounded. $A = (a_{ij})_{n \times n}$ and $B = (b_{ij})_{n \times n}$ denote the interconnection strengths among the neurons without and with delays $\tau_j$, respectively. The activation function $f_j$ describes how the neurons respond to one another, and $J_i$ is the external constant input. We assume that the model (1) has an equilibrium point $x^* = [x_1^*, \ldots, x_n^*]^T$. Defining $z(t) = x(t) - x^*$, (1) can be transformed into

$$\dot{z}_i(t) = -\tilde{d}_i(z_i(t))\left(\tilde{c}_i(z_i(t)) - \sum_{j=1}^{n} a_{ij}\tilde{f}_j(z_j(t)) - \sum_{j=1}^{n} b_{ij}\tilde{f}_j(z_j(t-\tau_j))\right), \tag{2}$$

where $\tilde{d}_i(z_i(t)) = d_i(z_i(t) + x_i^*)$, $\tilde{c}_i(z_i(t)) = c_i(z_i(t) + x_i^*) - c_i(x_i^*)$, and $\tilde{f}_j(z_j(t)) = f_j(z_j(t) + x_j^*) - f_j(x_j^*)$.

Lemma 2.1 (Schur Complement [7]). For a given matrix $S = \begin{bmatrix} S_{11} & S_{12} \\ S_{12}^T & S_{22} \end{bmatrix}$ with $S_{11} = S_{11}^T$ and $S_{22} = S_{22}^T$, the following conditions are equivalent:
(1) $S < 0$;
(2) $S_{22} < 0$ and $S_{11} - S_{12}S_{22}^{-1}S_{12}^T < 0$.

Regarding system (1), the following assumptions are made:
(A1) Each $f_i : \mathbb{R} \to \mathbb{R}$, $i \in \{1, 2, \ldots, n\}$, is bounded for all $x_i \in \mathbb{R}$ and satisfies the Lipschitz condition $|f_i(u) - f_i(v)| \le l_i|u - v|$ for all $u, v \in \mathbb{R}$.
(A2) Each $c_i(x_i(t))$ and $c_i^{-1}(x_i(t))$, $i \in \{1, 2, \ldots, n\}$, is continuously differentiable and $0 < \alpha_i \le c_i'(x_i(t))$, $\forall x_i \in \mathbb{R}$, where $c_i'(x_i)$ denotes the derivative with respect to $x_i$.
(A3) Each $d_i(x_i)$, $i \in \{1, 2, \ldots, n\}$, is positive, continuous and bounded for all $x_i \in \mathbb{R}$, i.e. $0 < m_i \le d_i(x_i) \le M_i < \infty$, $\forall x_i \in \mathbb{R}$.

By letting $z(t) = [z_1(t), \ldots, z_n(t)]^T \in \mathbb{R}^n$ and $g(z) = [\tilde{f}_1(z_1(t)), \ldots, \tilde{f}_n(z_n(t))]^T \in \mathbb{R}^n$, and by using (A1), we have

$$g^T(z(t))g(z(t)) = \|g(z(t))\|^2 \le z^T(t)L^2 z(t)$$

and

$$g^T(z(t-\tau))g(z(t-\tau)) = \|g(z(t-\tau))\|^2 \le z^T(t-\tau)L^2 z(t-\tau),$$

where $L = \mathrm{diag}[l_1, \ldots, l_n]$. From (A2), we have $-z_i(t)\tilde{c}_i(z_i(t)) \le -\alpha_i z_i^2(t)$.

3. Main result

Theorem 3.1. For the class of neural networks with multiple delays defined by system (1) with assumptions (A1)–(A3), the equilibrium point $x^*$ is globally asymptotically stable (GAS) if there exist positive definite diagonal matrices $P = \mathrm{diag}[p_1, \ldots, p_n] > 0$, $R = \mathrm{diag}[r_1, \ldots, r_n] > 0$ and $Q$ such that the following LMI holds:

$$\tilde{\Omega} = \begin{bmatrix} 2P\Sigma - LQL - L\tau RL & -PB & PA \\ -B^T P & \tau R & 0 \\ A^T P & 0 & Q \end{bmatrix} > 0, \tag{3}$$

where $\tau = \mathrm{diag}[\tau_1, \ldots, \tau_n]$ and $\Sigma = \mathrm{diag}[\alpha_1, \ldots, \alpha_n]$.
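To make the block structure of condition (3) concrete, the following Python sketch (not part of the paper) assembles $\tilde{\Omega}$ for the two-neuron data of Section 4 and checks numerically that Lemma 2.1's Schur-complement reduction agrees with the full LMI; the values of $P$, $Q$, $R$ are the rounded ones reported later in the example, so the definiteness test itself may be numerically marginal.

```python
import numpy as np

def omega_tilde(P, Q, R, A, B, tau, Sigma, L):
    """Block matrix of LMI (3)."""
    Z = np.zeros_like(P)
    top = 2 * P @ Sigma - L @ Q @ L - L @ tau @ R @ L
    return np.block([
        [top,      -P @ B,  P @ A],
        [-B.T @ P, tau @ R, Z],
        [A.T @ P,  Z,       Q],
    ])

def omega(P, Q, R, A, B, tau, Sigma, L):
    """Schur complement of the Q-block of (3), i.e. the matrix Omega of (6)."""
    top = (2 * P @ Sigma - P @ A @ np.linalg.inv(Q) @ A.T @ P
           - L @ Q @ L - L @ tau @ R @ L)
    return np.block([[top, -P @ B], [-B.T @ P, tau @ R]])

# Data from the illustrative example (A = 0, Sigma = L = I, rounded P, Q, R).
B = np.array([[0.861, 0.7944], [0.32, -0.8295]])
A = np.zeros((2, 2))
tau = np.diag([1.5, 0.9])
Sigma = np.eye(2)
L = np.eye(2)
P = np.diag([125.7680, 311.8648])
Q = np.diag([0.0037, 0.0545])
R = np.diag([83.8330, 346.8236])

Ot = omega_tilde(P, Q, R, A, B, tau, Sigma, L)
Om = omega(P, Q, R, A, B, tau, Sigma, L)

# Lemma 2.1: Ot > 0 holds exactly when Q > 0 and its Schur complement Om > 0.
pd_full = np.linalg.eigvalsh(Ot).min() > 0
pd_reduced = np.linalg.eigvalsh(Q).min() > 0 and np.linalg.eigvalsh(Om).min() > 0
print("Omega_tilde > 0:", pd_full, "| equivalent reduced test:", pd_reduced)
```

The two tests must agree for any data, which is exactly the equivalence invoked at the end of the proof of Theorem 3.1.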


Proof. Define the Lyapunov functional

$$V(z(t)) = 2\sum_{i=1}^{n} p_i \int_{0}^{z_i(t)} \frac{s}{\tilde{d}_i(s)}\,ds + \sum_{i=1}^{n} \tau_i r_i \int_{t-\tau_i}^{t} \tilde{f}_i^2(z_i(s))\,ds \tag{4}$$

and then evaluating the time derivative of $V$ along the trajectory of Eq. (2) gives

$$\begin{aligned}
\dot{V}(z(t)) &= -2\sum_{i=1}^{n} p_i z_i(t)\left(\tilde{c}_i(z_i(t)) - \sum_{j=1}^{n} a_{ij}\tilde{f}_j(z_j(t)) - \sum_{j=1}^{n} b_{ij}\tilde{f}_j(z_j(t-\tau_j))\right) + \sum_{i=1}^{n} \tau_i r_i \tilde{f}_i^2(z_i(t)) - \sum_{i=1}^{n} \tau_i r_i \tilde{f}_i^2(z_i(t-\tau_i)) \\
&= -2z^T(t)P\left(C(z(t)) - Ag(z(t)) - Bg(z(t-\tau))\right) + g^T(z(t))\tau R g(z(t)) - g^T(z(t-\tau))\tau R g(z(t-\tau)),
\end{aligned} \tag{5}$$

where $C(z(t)) = [\tilde{c}_1(z_1(t)), \ldots, \tilde{c}_n(z_n(t))]^T \in \mathbb{R}^n$. From (A2) and the fact that $2x^T y \le x^T Q x + y^T Q^{-1} y$ for all $x, y \in \mathbb{R}^n$ and $Q > 0$, we have

$$\begin{aligned}
\dot{V}(z(t)) &\le -2z^T(t)P\Sigma z(t) + 2z^T(t)PAg(z(t)) + 2z^T(t)PBg(z(t-\tau)) + g^T(z(t))\tau Rg(z(t)) - g^T(z(t-\tau))\tau Rg(z(t-\tau)) \\
&\le z^T(t)\left(-2P\Sigma + PAQ^{-1}A^T P + LQL + L\tau RL\right)z(t) + 2z^T(t)PBg(z(t-\tau)) - g^T(z(t-\tau))\tau Rg(z(t-\tau)) \\
&= -\begin{bmatrix} z(t) \\ g(z(t-\tau)) \end{bmatrix}^T \Omega \begin{bmatrix} z(t) \\ g(z(t-\tau)) \end{bmatrix},
\end{aligned} \tag{6}$$

where

$$\Omega = \begin{bmatrix} 2P\Sigma - PAQ^{-1}A^T P - LQL - L\tau RL & -PB \\ -B^T P & \tau R \end{bmatrix}.$$
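As a quick numerical sanity check (not from the paper), the completion-of-squares bound $2x^T y \le x^T Q x + y^T Q^{-1} y$ used in the step above can be exercised on random data; it follows from $(Q^{1/2}x - Q^{-1/2}y)^T(Q^{1/2}x - Q^{-1/2}y) \ge 0$ for any $Q > 0$.

```python
import numpy as np

# Verify 2 x^T y <= x^T Q x + y^T Q^{-1} y on random vectors and random Q > 0.
rng = np.random.default_rng(0)
n = 4
for _ in range(1000):
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)
    M = rng.standard_normal((n, n))
    Q = M @ M.T + n * np.eye(n)          # symmetric positive definite
    lhs = 2 * x @ y
    rhs = x @ Q @ x + y @ np.linalg.inv(Q) @ y
    assert lhs <= rhs + 1e-9             # the bound holds (up to roundoff)
print("inequality verified on 1000 random samples")
```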

If the diagonal matrices $P$, $R$ and $Q$ satisfy the condition $\Omega > 0$, then

$$\dot{V}(z(t)) \le -\begin{bmatrix} z(t) \\ g(z(t-\tau)) \end{bmatrix}^T \Omega \begin{bmatrix} z(t) \\ g(z(t-\tau)) \end{bmatrix} < 0 \quad \text{for all } z(t) \neq 0. \tag{7}$$

Based on Lyapunov stability theory, $z(t)$ therefore decreases asymptotically to zero. Furthermore, by Lemma 2.1, the LMI $\tilde{\Omega} > 0$ in (3) is equivalent to the matrix inequality $\Omega > 0$ (given $Q > 0$). This completes the proof. □

4. Illustrative example

In this section, we give an example to demonstrate the results obtained in the previous sections.

Example 4.1. Consider a two-dimensional cellular neural network with delays (DCNN) as given in (2) with the following parameters: $f_i(x_i) = \frac{1}{2}(|x_i + 1| - |x_i - 1|)$, $d_i(x_i(t)) = 1$, $c_i(x_i(t)) = x_i(t)$, $i = 1, 2$; $a_{11} = a_{12} = a_{21} = a_{22} = 0$, $b_{11} = 0.861$, $b_{12} = 0.7944$, $b_{21} = 0.32$, $b_{22} = -0.8295$, $\tau_1 = 1.5$, and $\tau_2 = 0.9$. The assumptions (A1)–(A3) are satisfied with $l_1 = l_2 = 1$, $\alpha_1 = \alpha_2 = 1$ and $M_1 = M_2 = m_1 = m_2 = 1$, respectively.

First, we check the stability of the DCNN by using the conditions proposed in [1–6]. From [1] we note that the DCNN is GAS if

$$\max_{1 \le j \le 2}\left\{\sum_{i=1}^{2} |b_{ij}| + a_{jj} + \sum_{1 \le i \neq j \le 2} |a_{ij}|\right\} < 1.$$

From [2,3] we know that the system is asymptotically stable if $\max\{\tau_1, \tau_2\}\|B\|_2^2 \le 1$. Furthermore, from [4], the DCNN has a unique and GAS equilibrium point if

$$r c_i > \frac{1}{d_i}\left(\sum_{j=1}^{n}\sum_{k=1}^{m} q_k |a_{ij}|^{r\alpha_{kj}/q_k} L_j + \sum_{j=1}^{n}\sum_{k=1}^{m} q_k |b_{ij}|^{r\beta_{kj}/q_k} L_j + \sum_{j=1}^{n} |a_{ji}|^{r\alpha_{m+1,i}} d_j L_j + \sum_{j=1}^{n} |b_{ji}|^{r\beta_{m+1,i}} d_j L_j\right) \quad (i = 1, 2)$$


with $d_1 = d_2 = r = m = 1$, $\alpha_{kj} = \beta_{kj} = 0$ ($k = 1, \ldots, m$; $j = 1, 2$), and $\alpha_{m+1,j} = \beta_{m+1,j} = 1$ ($j = 1, 2$). And in [5], the DCNN is stable if $A + A^T + S - 2I < 0$, where $S = \mathrm{diag}\left[\sum_{j=1}^{n}(|b_{1j}| + |b_{j1}|), \ldots, \sum_{j=1}^{n}(|b_{nj}| + |b_{jn}|)\right]$ ($n = 2$). It can be easily verified that $\max_{1 \le j \le 2}\{\sum_{i=1}^{2}|b_{ij}| + a_{jj} + \sum_{1 \le i \neq j \le 2}|a_{ij}|\} = 1.6239 > 1$, $\max\{\tau_1, \tau_2\}\|B\|_2^2 = 2.3442 > 1$, $c_1 = 1 < |b_{11}| + |b_{21}| = 1.181$, $c_2 = 1 < |b_{12}| + |b_{22}| = 1.6239$, and

$$A + A^T + S - 2I = \begin{bmatrix} 0.8364 & 0 \\ 0 & 0.7734 \end{bmatrix} > 0.$$

Obviously, none of the conditions in [1–5] is satisfied. Furthermore, it can be checked that the matrix inequality in [6] has no solution. Therefore, according to the criteria proposed in [1–6], we cannot conclude whether the DCNN is asymptotically stable. Now, applying the conditions of Theorem 3.1, the LMI (3) is satisfied with

$$P = \begin{bmatrix} 125.7680 & 0 \\ 0 & 311.8648 \end{bmatrix}, \quad Q = \begin{bmatrix} 0.0037 & 0 \\ 0 & 0.0545 \end{bmatrix}, \quad R = \begin{bmatrix} 83.8330 & 0 \\ 0 & 346.8236 \end{bmatrix}.$$
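The quantities quoted above for the criteria of [1], [2,3] and [5] can be reproduced from the example data; the following sketch (an illustration, not part of the paper) recomputes them with numpy (the spectral-norm value may differ from the quoted 2.3442 in the last digit due to rounding).

```python
import numpy as np

B = np.array([[0.861, 0.7944], [0.32, -0.8295]])
tau1, tau2 = 1.5, 0.9

# Criterion of [1]: max over columns of sum_i |b_ij| (the a-terms vanish since A = 0).
crit1 = np.abs(B).sum(axis=0).max()

# Criterion of [2,3]: max{tau1, tau2} * ||B||_2^2 (squared spectral norm).
crit23 = max(tau1, tau2) * np.linalg.norm(B, 2) ** 2

# Criterion of [5]: A + A^T + S - 2I with S = diag(sum_j |b_ij| + |b_ji|); A = 0.
S = np.diag(np.abs(B).sum(axis=1) + np.abs(B).sum(axis=0))
crit5 = S - 2 * np.eye(2)

print(round(crit1, 4))               # 1.6239 > 1, so [1] is inconclusive
print(round(crit23, 4))              # about 2.344 > 1, so [2,3] are inconclusive
print(np.round(np.diag(crit5), 4))   # [0.8364, 0.7734], both > 0, so [5] is inconclusive
```

Each criterion fails by a clear margin, consistent with the comparison made in the text.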

Thus, according to Theorem 3.1, the DCNN in this example is GAS. From the above comparisons, the result obtained here is less conservative than those derived in the literature [1–6].

5. Conclusions

A new delay-dependent stability criterion has been proposed for neural networks with multiple delays. The present result has been shown to be less conservative than both the delay-independent stability criteria [1,4,5] and the delay-dependent stability criteria [2,3,6] recently reported in the literature.

References

[1] Q. Zhang, R. Ma, J. Xu, Stability of cellular neural networks with delay, Electron. Lett. 37 (2001) 575–576.
[2] X. Li, L. Huang, H. Zhu, Global stability of cellular neural networks with constant and variable delays, Nonlinear Anal. 53 (2003) 319–333.
[3] Q. Zhang, X. Wei, J. Xu, Delay-dependent global stability results for delayed Hopfield neural networks, Chaos Solitons Fractals (2006), doi:10.1016/j.chaos.2006.07.034.
[4] H. Zhao, J. Cao, New conditions for global exponential stability of cellular neural networks with delays, Neural Netw. 18 (2005) 1332–1340.
[5] X. Liao, J. Wang, Algebraic criteria for global exponential stability of cellular neural networks with multiple time delays, IEEE Trans. Circuits Syst. I 50 (2003) 268–275.
[6] S. Xu, J. Lam, D. Ho, Y. Zou, Novel global asymptotic stability criteria for delayed cellular neural networks, IEEE Trans. Circuits Syst. II 52 (2005) 349–353.
[7] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, PA, 1994.