ARTICLE IN PRESS
Neurocomputing 71 (2007) 439–447 www.elsevier.com/locate/neucom
Letters
Further result on asymptotic stability criterion of neural networks with time-varying delays

Tao Li^a, Lei Guo^b,*, Changyin Sun^a

^a Research Institute of Automation, Southeast University, Nanjing 210096, China
^b School of Instrument Science and Opto-Electronics Engineering, Beihang University, Beijing 100083, China

Received 11 April 2007; received in revised form 23 July 2007; accepted 23 July 2007
Communicated by L.C. Jain
Available online 7 September 2007
Abstract

In this paper, the global asymptotic stability problem is dealt with for a class of neural networks (NNs) with time-varying delays. The activation functions are assumed to be neither monotonic, nor differentiable, nor bounded. By constructing an augmented Lyapunov functional which contains an integral term of the neuron state vector, an improved delay-dependent stability criterion for delayed NNs is established in terms of linear matrix inequalities (LMIs). It is shown that the obtained criterion can provide less conservative results than some existing ones. Numerical examples are given to demonstrate the applicability of the proposed approach.
© 2007 Elsevier B.V. All rights reserved.

Keywords: Delay-dependent; Asymptotic stability; Neural networks (NNs); Linear matrix inequality (LMI)
1. Introduction

Recently, there has been increasing interest in the study of neural networks (NNs), since NNs have found extensive applications in solving optimization problems, associative memory, classification of patterns, reconstruction of moving images, and other areas [7,6]. It is now well known that applications of NNs heavily depend on their dynamic behavior. Since stability is one of the most important issues related to such behavior, the problem of stability analysis of NNs has attracted considerable attention in recent years [1–5,9–12,14–24]. In the implementation of artificial NNs, time delays are unavoidable due to the finite switching speed of amplifiers. It has been shown that the existence of time delays in a NN may lead to oscillation, divergence or instability. Therefore, the stability of neural networks with time delays has recently drawn particular research interest, see [1–5,9–12,17–24]. For example, in [2,3,23,17], the delay-independent asymptotic stability problem is discussed using a variety of techniques, including the linear matrix inequality (LMI) approach, M-matrix theory, and topological degree theory. In [11], a delay-dependent stability condition is obtained by introducing a new Lyapunov functional, and the condition includes some existing delay-independent ones. In [9], a less conservative delay-dependent stability criterion for delayed NNs is proposed by considering a useful term when estimating the upper bound of the derivative of the Lyapunov functional. Although delay-dependent stability results for delayed NNs were proposed in [5,9–11,16,20], they are conservative to some extent, which leaves room for further improvement. In this paper, we propose an augmented Lyapunov functional, which takes into account an integral term of the neuron state vector.
Owing to the augmented Lyapunov functional, a new delay-dependent stability criterion for delayed NNs is derived in terms of LMIs, which can provide less conservative results than some existing ones. Two examples are given to demonstrate the applicability of the proposed approach.

*Corresponding author. Tel.: +86 10 82339013.
E-mail addresses: [email protected], [email protected] (L. Guo).

0925-2312/$ - see front matter © 2007 Elsevier B.V. All rights reserved.
doi:10.1016/j.neucom.2007.07.009
Notation. Throughout this paper, for a real symmetric matrix $P$, $P > 0$ ($P \ge 0$) denotes that $P$ is positive definite (positive semidefinite), and $A > (\ge)\ B$ means $A - B > (\ge)\ 0$. $I$ is used to denote an identity matrix with proper dimension. Matrices, if not explicitly stated, are assumed to have compatible dimensions. The symmetric terms in a symmetric matrix are denoted by $*$.

2. Problem formulation

Consider the following NNs with time-varying delays:
$$\dot{x}_i(t) = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij} g_j(x_j(t)) + \sum_{j=1}^{n} b_{ij} g_j(x_j(t-d(t))) + J_i, \quad i = 1, 2, \ldots, n, \qquad (1)$$
where $n$ denotes the number of neurons in the network, $x_i(t)$ corresponds to the state of the $i$th neuron at time $t$, $g_j(x_j(t))$ denotes the activation function of the $j$th neuron, $a_{ij}$ denotes the constant connection weight of the $j$th neuron on the $i$th neuron, $b_{ij}$ denotes the constant connection weight of the $j$th neuron on the $i$th neuron associated with the delayed state $x_j(t-d(t))$, $J_i$ is the external input on the $i$th neuron, $c_i > 0$ represents the rate with which the $i$th neuron resets its potential to the resting state in isolation when disconnected from the network and external input, and $d(t)$ corresponds to the time-varying transmission delay, satisfying $0 < d(t) \le h$, $\dot{d}(t) \le \mu < 1$. We can rewrite system (1) in the following matrix–vector form:
$$\dot{x}(t) = -Cx(t) + Ag(x(t)) + Bg(x(t-d(t))) + J, \qquad (2)$$
where $x(t) = [x_1(t), x_2(t), \ldots, x_n(t)]^T \in \mathbb{R}^n$, $g(x(t)) = [g_1(x_1(t)), g_2(x_2(t)), \ldots, g_n(x_n(t))]^T \in \mathbb{R}^n$, $g(x(t-d(t))) = [g_1(x_1(t-d(t))), g_2(x_2(t-d(t))), \ldots, g_n(x_n(t-d(t)))]^T \in \mathbb{R}^n$, $J = [J_1, J_2, \ldots, J_n]^T \in \mathbb{R}^n$, $C = \mathrm{diag}\{c_1, c_2, \ldots, c_n\}$, $A = (a_{ij})_{n \times n}$ and $B = (b_{ij})_{n \times n}$. In the following, we assume that each neuron activation function in (1), $g_i(\cdot)$, $i = 1, 2, \ldots, n$, satisfies the following condition:
$$k_i^- \le \frac{g_i(x) - g_i(y)}{x - y} \le k_i^+, \quad \forall x, y \in \mathbb{R},\ x \ne y,\ i = 1, 2, \ldots, n, \qquad (3)$$
where $k_i^-$, $k_i^+$, $i = 1, 2, \ldots, n$, are some constants.

Remark 1. The constants $k_i^-$, $k_i^+$ are allowed to be positive, negative or zero. Hence, the resulting activation functions can be nonmonotonic, and are more general than the usual sigmoid functions in [20,11,9] (see also [13,16] for details).
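The sector condition (3) can be checked numerically by sampling difference quotients of a given activation. The following sketch is illustrative only (the sample grid and the nonmonotonic test activation are our own choices, not from the paper); it estimates the tightest sector bounds and confirms that a nonmonotonic activation genuinely needs a negative lower bound $k_i^-$, as Remark 1 allows.

```python
import math

def sector_bounds(g, xs):
    # Empirical estimates of k^- and k^+: the extreme values of the
    # difference quotients (g(x) - g(y)) / (x - y) over all sample pairs.
    quots = [(g(x) - g(y)) / (x - y) for x in xs for y in xs if x != y]
    return min(quots), max(quots)

xs = [i / 50.0 for i in range(-150, 151)]   # grid on [-3, 3]

# tanh is monotonic: all difference quotients lie strictly between 0 and 1.
km, kp = sector_bounds(math.tanh, xs)

# A nonmonotonic activation (hypothetical test function): some difference
# quotients are negative, so condition (3) genuinely needs k^- < 0 here.
g = lambda x: 0.5 * math.sin(x) + 0.1 * x
km2, kp2 = sector_bounds(g, xs)

print(0 < km < kp <= 1.0)   # tanh fits a sector with 0 < k^- < k^+ <= 1
print(km2 < 0 < kp2)        # the nonmonotonic g needs a negative k^-
```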
Assume that $x^* = [x_1^*, x_2^*, \ldots, x_n^*]^T$ is an equilibrium of system (2). The transformation $z(\cdot) = x(\cdot) - x^*$ converts system (2) into the following system:
$$\dot{z}(t) = -Cz(t) + Af(z(t)) + Bf(z(t-d(t))), \qquad (4)$$
where $z(t) = [z_1(t), z_2(t), \ldots, z_n(t)]^T$ is the state vector of the transformed system, $f(z(t)) = [f_1(z_1(t)), f_2(z_2(t)), \ldots, f_n(z_n(t))]^T$, $f(z(t-d(t))) = [f_1(z_1(t-d(t))), \ldots, f_n(z_n(t-d(t)))]^T \in \mathbb{R}^n$ and $f_i(z_i(t)) = g_i(z_i(t) + x_i^*) - g_i(x_i^*)$, $i = 1, 2, \ldots, n$. Note that the functions $f_i(\cdot)$, $i = 1, 2, \ldots, n$, satisfy $f_i(0) = 0$ and the following condition:
$$k_i^- \le \frac{f_i(x) - f_i(y)}{x - y} \le k_i^+, \quad \forall x, y \in \mathbb{R},\ x \ne y,\ i = 1, 2, \ldots, n. \qquad (5)$$
The main purpose of this paper is to establish LMI-based sufficient conditions guaranteeing the global asymptotic stability of the delayed NNs (4). To obtain our main results, we need the following lemmas.

Lemma 1 (Schur complement). Given constant matrices $\Omega_1$, $\Omega_2$, $\Omega_3$, where $\Omega_1 = \Omega_1^T$ and $\Omega_2 > 0$, then $\Omega_1 + \Omega_3^T \Omega_2^{-1} \Omega_3 < 0$ if and only if
$$\begin{bmatrix} \Omega_1 & \Omega_3^T \\ \Omega_3 & -\Omega_2 \end{bmatrix} < 0, \quad \text{or} \quad \begin{bmatrix} -\Omega_2 & \Omega_3 \\ \Omega_3^T & \Omega_1 \end{bmatrix} < 0.$$
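For scalar blocks, the equivalence in Lemma 1 can be verified directly. The sketch below (random scalar instances, an illustrative choice of ours) checks that $\Omega_1 + \Omega_3^T \Omega_2^{-1} \Omega_3 < 0$ agrees with negative definiteness of the first block matrix, the latter tested via Sylvester's criterion:

```python
import random

def is_neg_def_2x2(m):
    # Sylvester's criterion for a symmetric 2x2 matrix to be negative
    # definite: first leading minor < 0 and determinant > 0.
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return m[0][0] < 0 and det > 0

random.seed(0)
agree = True
for _ in range(1000):
    o1 = random.uniform(-5.0, 5.0)    # Omega_1 = Omega_1^T (scalar case)
    o2 = random.uniform(0.1, 5.0)     # Omega_2 > 0
    o3 = random.uniform(-5.0, 5.0)    # Omega_3
    lhs = o1 + o3 * o3 / o2 < 0                    # Schur-complement test
    rhs = is_neg_def_2x2([[o1, o3], [o3, -o2]])    # block-matrix test
    agree = agree and (lhs == rhs)

print(agree)   # the two conditions agree on every sampled instance
```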
Lemma 2 (Gu [8]). For any positive symmetric constant matrix $M \in \mathbb{R}^{n \times n}$, a scalar $\gamma > 0$, and a vector function $\omega : [0, \gamma] \to \mathbb{R}^n$ such that the integrations concerned are well defined, it holds that
$$\left( \int_0^{\gamma} \omega(s)\,ds \right)^T M \left( \int_0^{\gamma} \omega(s)\,ds \right) \le \gamma \int_0^{\gamma} \omega^T(s) M \omega(s)\,ds.$$
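In the scalar case ($M = m > 0$, $\omega$ real-valued), Lemma 2 reduces to the Cauchy–Schwarz-type bound $m\left(\int_0^\gamma \omega\right)^2 \le \gamma \int_0^\gamma m\,\omega^2$. A quick midpoint-rule check (the test function and constants are arbitrary illustrative choices, not from the paper):

```python
import math

def midpoint_integral(f, a, b, n=2000):
    # Composite midpoint rule, accurate enough to illustrate the inequality.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

gamma, m = 2.0, 3.0
omega = lambda s: math.sin(3.0 * s) + 0.5 * s    # arbitrary test function

lhs = m * midpoint_integral(omega, 0.0, gamma) ** 2
rhs = gamma * midpoint_integral(lambda s: m * omega(s) ** 2, 0.0, gamma)

print(lhs <= rhs)   # the Jensen-type bound of Lemma 2 holds
```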
3. Main results

In this section, an augmented Lyapunov–Krasovskii functional containing an integral term of the neuron state vector is constructed, and the following asymptotic stability criterion is obtained.

Theorem 1. For given scalars $h \ge 0$ and $\mu$, let $K_1 = \mathrm{diag}\{k_1^+ k_1^-, k_2^+ k_2^-, \ldots, k_n^+ k_n^-\}$ and $K_2 = \mathrm{diag}\{k_1^+ + k_1^-, k_2^+ + k_2^-, \ldots, k_n^+ + k_n^-\}$. Then, for any delay $d(t)$ satisfying $0 \le d(t) \le h$ and $\dot{d}(t) \le \mu$, the origin of system (4) is asymptotically stable if there exist positive definite matrices $L_{11}$, $Z_{11}$, $Q_i$ $(i = 1, 2, 3)$, $R$, positive diagonal matrices $L = \mathrm{diag}\{l_1, l_2, \ldots, l_n\}$, $D_1 = \mathrm{diag}\{d_{11}, d_{12}, \ldots, d_{1n}\}$, $D_2 = \mathrm{diag}\{d_{21}, d_{22}, \ldots, d_{2n}\}$ and any matrices $L_{12}$, $L_{22}$, $Z_{12}$, $Z_{22}$, $P_i$ $(i = 3, \ldots, 12)$ such that the following LMIs hold:
$$\begin{bmatrix} L_{11} & L_{12} \\ * & L_{22} \end{bmatrix} \ge 0, \qquad (6)$$
$$\begin{bmatrix} Z_{11} & Z_{12} \\ * & Z_{22} \end{bmatrix} \ge 0, \qquad (7)$$
$$\begin{bmatrix}
\Gamma_{11} & \Gamma_{12} & -L_{12} - P_4^T + P_{11} & \Gamma_{14} & L_{11}B + h^2 Z_{12}B + P_9 & \Gamma_{16} & -hP_3^T & -hP_4^T & -C^T U \\
* & \Gamma_{22} & -P_6^T - P_{11} + P_{12} & -P_7 + P_8 & D_2 K_2 - P_9 + P_{10} & 0 & -hP_5^T & -hP_6^T & 0 \\
* & * & -Q_3 - P_{12} - P_{12}^T & -P_8^T & -P_{10}^T & -L_{22} & -hP_{11}^T & -hP_{12}^T & 0 \\
* & * & * & \Gamma_{44} & LB & A^T L_{12} & -hP_7^T & -hP_8^T & A^T U \\
* & * & * & * & -(1-\mu)Q_2 - 2D_2 & B^T L_{12} & -hP_9^T & -hP_{10}^T & B^T U \\
* & * & * & * & * & -Z_{11} & -hZ_{12} & -hZ_{12} & 0 \\
* & * & * & * & * & * & -U & -h^2 Z_{22} & 0 \\
* & * & * & * & * & * & * & -U & 0 \\
* & * & * & * & * & * & * & * & -U
\end{bmatrix} < 0, \qquad (8)$$
where
$$\Gamma_{11} = -L_{11}C - C^T L_{11} + P_3 + P_3^T + L_{12} + L_{12}^T + Q_1 + Q_3 + h^2 Z_{11} - h^2 Z_{12}C - h^2 C^T Z_{12}^T - 2D_1 K_1, \qquad (9)$$
$$\Gamma_{12} = -P_3^T + P_5 + P_4^T, \qquad (10)$$
$$\Gamma_{14} = L_{11}A - C^T L + h^2 Z_{12}A + D_1 K_2 + P_7, \qquad (11)$$
$$\Gamma_{16} = -C^T L_{12} + L_{22}, \qquad (12)$$
$$\Gamma_{22} = -P_5 - P_5^T + P_6 + P_6^T - (1-\mu)Q_1 - 2D_2 K_1, \qquad (13)$$
$$\Gamma_{44} = LA + A^T L + Q_2 - 2D_1, \qquad (14)$$
$$U = hR + h^2 Z_{22}. \qquad (15)$$
Proof. Consider the following augmented Lyapunov–Krasovskii functional candidate for system (4):
$$V(z(t)) = V_1(z(t)) + V_2(z(t)) + V_3(z(t)) + V_4(z(t)) + V_5(z(t)), \qquad (16)$$
where
$$V_1(z(t)) = \xi_0^T(t)\, EP\, \xi_0(t),$$
$$V_2(z(t)) = 2\sum_{i=1}^{n} l_i \int_0^{z_i(t)} f_i(s)\,ds,$$
$$V_3(z(t)) = \int_{-h}^{0}\int_{t+\theta}^{t} \dot{z}^T(s) R \dot{z}(s)\,ds\,d\theta,$$
$$V_4(z(t)) = h\int_{-h}^{0}\int_{t+\theta}^{t} \begin{bmatrix} z(s) \\ \dot{z}(s) \end{bmatrix}^T \begin{bmatrix} Z_{11} & Z_{12} \\ * & Z_{22} \end{bmatrix} \begin{bmatrix} z(s) \\ \dot{z}(s) \end{bmatrix} ds\,d\theta,$$
$$V_5(z(t)) = \int_{t-d(t)}^{t} \left[ z^T(s)Q_1 z(s) + f^T(z(s))Q_2 f(z(s)) \right] ds + \int_{t-h}^{t} z^T(s)Q_3 z(s)\,ds,$$
with
$$\xi_0(t) = \begin{bmatrix} z(t) \\ \int_{t-h}^{t} z(s)\,ds \\ z(t-d(t)) \\ f(z(t)) \\ f(z(t-d(t))) \\ z(t-h) \end{bmatrix}, \quad
P^T = \begin{bmatrix} L_{11} & L_{12} & P_3^T & P_4^T \\ L_{12}^T & L_{22} & 0 & 0 \\ 0 & 0 & P_5^T & P_6^T \\ 0 & 0 & P_7^T & P_8^T \\ 0 & 0 & P_9^T & P_{10}^T \\ 0 & 0 & P_{11}^T & P_{12}^T \end{bmatrix}, \quad
E = \begin{bmatrix} I & 0 & 0 & 0 \\ 0 & I & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$
and
$$\begin{bmatrix} L_{11} & L_{12} \\ * & L_{22} \end{bmatrix} \ge 0, \quad \begin{bmatrix} Z_{11} & Z_{12} \\ * & Z_{22} \end{bmatrix} \ge 0, \quad EP = P^T E^T \ge 0, \quad L_{11} > 0, \quad Z_{11} > 0, \quad R > 0, \quad Q_i > 0\ (i = 1, 2, 3).$$
Note that, by the structure of $E$ and $P$, $EP$ has the block $\begin{bmatrix} L_{11} & L_{12} \\ L_{12}^T & L_{22} \end{bmatrix}$ in its top-left corner and zeros elsewhere, so $EP \ge 0$ is guaranteed by (6).
On the other hand, from the Leibniz–Newton formula, the following identities hold:
$$\alpha_1 := z(t) - z(t-d(t)) - \int_{t-d(t)}^{t} \dot{z}(s)\,ds = 0, \qquad (17)$$
$$\alpha_2 := z(t-d(t)) - z(t-h) - \int_{t-h}^{t-d(t)} \dot{z}(s)\,ds = 0. \qquad (18)$$
Calculating the time derivative of $V_1(z(t))$ along the solutions of system (4) and using (17)–(18) yields
$$\dot{V}_1(z(t)) = 2\xi_0^T(t) P^T \begin{bmatrix} -Cz(t) + Af(z(t)) + Bf(z(t-d(t))) \\ z(t) - z(t-h) \\ \alpha_1 \\ \alpha_2 \end{bmatrix} = \xi_1^T(t)\, \Phi_1\, \xi_1(t), \qquad (19)$$
where
$$\xi_1(t) = \begin{bmatrix} z(t) \\ z(t-d(t)) \\ z(t-h) \\ f(z(t)) \\ f(z(t-d(t))) \\ \int_{t-h}^{t} z(s)\,ds \\ \int_{t-d(t)}^{t} \dot{z}(s)\,ds \\ \int_{t-h}^{t-d(t)} \dot{z}(s)\,ds \end{bmatrix}, \quad
\Phi_1 = \begin{bmatrix}
\Phi_{21} & \Gamma_{12} & -L_{12} - P_4^T + P_{11} & L_{11}A + P_7 & L_{11}B + P_9 & \Gamma_{16} & -P_3^T & -P_4^T \\
* & \Phi_{22} & -P_6^T - P_{11} + P_{12} & -P_7 + P_8 & -P_9 + P_{10} & 0 & -P_5^T & -P_6^T \\
* & * & -P_{12} - P_{12}^T & -P_8^T & -P_{10}^T & -L_{22} & -P_{11}^T & -P_{12}^T \\
* & * & * & 0 & 0 & A^T L_{12} & -P_7^T & -P_8^T \\
* & * & * & * & 0 & B^T L_{12} & -P_9^T & -P_{10}^T \\
* & * & * & * & * & 0 & 0 & 0 \\
* & * & * & * & * & * & 0 & 0 \\
* & * & * & * & * & * & * & 0
\end{bmatrix}$$
with $\Phi_{21} = -L_{11}C - C^T L_{11} + P_3 + P_3^T + L_{12} + L_{12}^T$, $\Phi_{22} = -P_5 - P_5^T + P_6 + P_6^T$, and $\Gamma_{12}$, $\Gamma_{16}$ defined in (10), (12). Next,
$$\dot{V}_2(z(t)) = 2\sum_{i=1}^{n} l_i f_i(z_i(t))\dot{z}_i(t) = 2f^T(z(t)) L \left[ -Cz(t) + Af(z(t)) + Bf(z(t-d(t))) \right], \qquad (20)$$
where $L = \mathrm{diag}\{l_1, l_2, \ldots, l_n\}$. It follows from Lemma 2 with $d(t) \le h$ that
$$\begin{aligned}
\dot{V}_3(z(t)) &= h\dot{z}^T(t)R\dot{z}(t) - \int_{t-h}^{t} \dot{z}^T(s)R\dot{z}(s)\,ds \\
&= h\dot{z}^T(t)R\dot{z}(t) - \int_{t-h}^{t-d(t)} \dot{z}^T(s)R\dot{z}(s)\,ds - \int_{t-d(t)}^{t} \dot{z}^T(s)R\dot{z}(s)\,ds \\
&\le h\dot{z}^T(t)R\dot{z}(t) - \frac{1}{h}\left(\int_{t-h}^{t-d(t)} \dot{z}(s)\,ds\right)^T R \left(\int_{t-h}^{t-d(t)} \dot{z}(s)\,ds\right) - \frac{1}{h}\left(\int_{t-d(t)}^{t} \dot{z}(s)\,ds\right)^T R \left(\int_{t-d(t)}^{t} \dot{z}(s)\,ds\right). \qquad (21)
\end{aligned}$$
Moreover, it can be verified that
$$\begin{aligned}
\dot{V}_4(z(t)) &= h^2 \begin{bmatrix} z(t) \\ \dot{z}(t) \end{bmatrix}^T \begin{bmatrix} Z_{11} & Z_{12} \\ * & Z_{22} \end{bmatrix} \begin{bmatrix} z(t) \\ \dot{z}(t) \end{bmatrix} - h\int_{t-h}^{t} \begin{bmatrix} z(s) \\ \dot{z}(s) \end{bmatrix}^T \begin{bmatrix} Z_{11} & Z_{12} \\ * & Z_{22} \end{bmatrix} \begin{bmatrix} z(s) \\ \dot{z}(s) \end{bmatrix} ds \\
&\le h^2 \begin{bmatrix} z(t) \\ \dot{z}(t) \end{bmatrix}^T \begin{bmatrix} Z_{11} & Z_{12} \\ * & Z_{22} \end{bmatrix} \begin{bmatrix} z(t) \\ \dot{z}(t) \end{bmatrix} - \begin{bmatrix} \int_{t-h}^{t} z(s)\,ds \\ \int_{t-h}^{t} \dot{z}(s)\,ds \end{bmatrix}^T \begin{bmatrix} Z_{11} & Z_{12} \\ * & Z_{22} \end{bmatrix} \begin{bmatrix} \int_{t-h}^{t} z(s)\,ds \\ \int_{t-h}^{t} \dot{z}(s)\,ds \end{bmatrix} \\
&= \xi_1^T(t)\, \Phi_2\, \xi_1(t), \qquad (22)
\end{aligned}$$
where the inequality uses Lemma 2, and the last step uses $\int_{t-h}^{t} \dot{z}(s)\,ds = \int_{t-d(t)}^{t} \dot{z}(s)\,ds + \int_{t-h}^{t-d(t)} \dot{z}(s)\,ds$ together with
$$\Phi_2 = \begin{bmatrix}
\Phi_{11} & 0 & 0 & h^2 Z_{12}A - h^2 C^T Z_{22}A & h^2 Z_{12}B - h^2 C^T Z_{22}B & 0 & 0 & 0 \\
* & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & h^2 A^T Z_{22}A & h^2 A^T Z_{22}B & 0 & 0 & 0 \\
* & * & * & * & h^2 B^T Z_{22}B & 0 & 0 & 0 \\
* & * & * & * & * & -Z_{11} & -Z_{12} & -Z_{12} \\
* & * & * & * & * & * & -Z_{22} & -Z_{22} \\
* & * & * & * & * & * & * & -Z_{22}
\end{bmatrix}$$
and $\Phi_{11} = h^2 Z_{11} + h^2 C^T Z_{22}C - h^2 Z_{12}C - h^2 C^T Z_{12}^T$. Finally,
$$\dot{V}_5(z(t)) \le \xi_1^T(t)\, \Phi_3\, \xi_1(t), \qquad (23)$$
with $\Phi_3 = \mathrm{diag}\{Q_1 + Q_3,\ -(1-\mu)Q_1,\ -Q_3,\ Q_2,\ -(1-\mu)Q_2,\ 0,\ 0,\ 0\}$. Moreover, one can infer from (5) that, for $i = 1, \ldots, n$,
$$\left( f_i(z_i(t)) - k_i^+ z_i(t) \right)\left( f_i(z_i(t)) - k_i^- z_i(t) \right) \le 0, \qquad (24)$$
$$\left( f_i(z_i(t-d(t))) - k_i^+ z_i(t-d(t)) \right)\left( f_i(z_i(t-d(t))) - k_i^- z_i(t-d(t)) \right) \le 0, \qquad (25)$$
which are equivalent to
$$\begin{bmatrix} z(t) \\ f(z(t)) \end{bmatrix}^T \begin{bmatrix} k_i^+ k_i^- e_i e_i^T & -\dfrac{k_i^+ + k_i^-}{2} e_i e_i^T \\ * & e_i e_i^T \end{bmatrix} \begin{bmatrix} z(t) \\ f(z(t)) \end{bmatrix} \le 0, \quad i = 1, \ldots, n, \qquad (26)$$
$$\begin{bmatrix} z(t-d(t)) \\ f(z(t-d(t))) \end{bmatrix}^T \begin{bmatrix} k_i^+ k_i^- e_i e_i^T & -\dfrac{k_i^+ + k_i^-}{2} e_i e_i^T \\ * & e_i e_i^T \end{bmatrix} \begin{bmatrix} z(t-d(t)) \\ f(z(t-d(t))) \end{bmatrix} \le 0, \quad i = 1, \ldots, n, \qquad (27)$$
where $e_i$ denotes the unit column vector having a ``1'' in its $i$th entry and zeros elsewhere. Letting $D_1 = \mathrm{diag}\{d_{11}, d_{12}, \ldots, d_{1n}\}$, $D_2 = \mathrm{diag}\{d_{21}, d_{22}, \ldots, d_{2n}\}$ and $K_1$, $K_2$ as in Theorem 1, we have from (19)–(23), (26) and (27) that
$$\begin{aligned}
\dot{V}(z(t)) &= \dot{V}_1(z(t)) + \dot{V}_2(z(t)) + \dot{V}_3(z(t)) + \dot{V}_4(z(t)) + \dot{V}_5(z(t)) \\
&\le \xi_1^T(t)(\Phi_1 + \Phi_2 + \Phi_3)\xi_1(t) + 2f^T(z(t)) L \left[ -Cz(t) + Af(z(t)) + Bf(z(t-d(t))) \right] + h\dot{z}^T(t)R\dot{z}(t) \\
&\quad - \frac{1}{h}\left(\int_{t-h}^{t-d(t)} \dot{z}(s)\,ds\right)^T R \left(\int_{t-h}^{t-d(t)} \dot{z}(s)\,ds\right) - \frac{1}{h}\left(\int_{t-d(t)}^{t} \dot{z}(s)\,ds\right)^T R \left(\int_{t-d(t)}^{t} \dot{z}(s)\,ds\right) \\
&\quad - 2\sum_{i=1}^{n} d_{1i} \begin{bmatrix} z(t) \\ f(z(t)) \end{bmatrix}^T \begin{bmatrix} k_i^+ k_i^- e_i e_i^T & -\frac{k_i^+ + k_i^-}{2} e_i e_i^T \\ * & e_i e_i^T \end{bmatrix} \begin{bmatrix} z(t) \\ f(z(t)) \end{bmatrix} \\
&\quad - 2\sum_{i=1}^{n} d_{2i} \begin{bmatrix} z(t-d(t)) \\ f(z(t-d(t))) \end{bmatrix}^T \begin{bmatrix} k_i^+ k_i^- e_i e_i^T & -\frac{k_i^+ + k_i^-}{2} e_i e_i^T \\ * & e_i e_i^T \end{bmatrix} \begin{bmatrix} z(t-d(t)) \\ f(z(t-d(t))) \end{bmatrix} \\
&= \xi_1^T(t)\, \Phi\, \xi_1(t), \qquad (28)
\end{aligned}$$
where $\Phi = \Phi_1 + \Phi_2 + \Phi_3 + \Phi_4$ and, expanding $\dot{z}(t) = -Cz(t) + Af(z(t)) + Bf(z(t-d(t)))$ in the $h\dot{z}^T R \dot{z}$ term,
$$\Phi_4 = \begin{bmatrix}
hC^T RC - 2D_1 K_1 & 0 & 0 & -hC^T RA + D_1 K_2 - C^T L & -hC^T RB & 0 & 0 & 0 \\
* & -2D_2 K_1 & 0 & 0 & D_2 K_2 & 0 & 0 & 0 \\
* & * & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & LA + A^T L - 2D_1 + hA^T RA & LB + hA^T RB & 0 & 0 & 0 \\
* & * & * & * & -2D_2 + hB^T RB & 0 & 0 & 0 \\
* & * & * & * & * & 0 & 0 & 0 \\
* & * & * & * & * & * & -\frac{1}{h}R & 0 \\
* & * & * & * & * & * & * & -\frac{1}{h}R
\end{bmatrix}.$$
In order to guarantee $\dot{V}(z(t)) < 0$, one needs to ensure that $\Phi < 0$. Scaling the last two blocks of $\xi_1(t)$ by $h$ (a congruence transformation, which preserves definiteness) and applying Lemma 1 (Schur complement) to $\Phi < 0$, the inequality given in (8) is obtained. Then it follows from the Lyapunov–Krasovskii stability theorem that if the conditions given in (6)–(8) are met, the delayed NNs (4) is guaranteed to be asymptotically stable.

Remark 2. Theorem 1 is based on a newly proposed augmented Lyapunov functional of the form (16), whose structure is more general than the traditional ones in [9–11,20,21] because an integral term of the state is involved in the first term of (16). This new type of Lyapunov functional enables us to establish less conservative results. In order to demonstrate the improvement over the existing criteria, Corollary 1 is established from Theorem 1 by setting $L_{12} = L_{22} = Z_{11} = Z_{12} = Z_{22} = 0$.
Corollary 1. For given scalars $h \ge 0$ and $\mu$, let $K_1$ and $K_2$ be defined as in Theorem 1. Then, for any delay $d(t)$ satisfying $0 \le d(t) \le h$ and $\dot{d}(t) \le \mu$, the origin of system (4) is asymptotically stable if there exist positive definite matrices $L_{11}$, $Q_i$ $(i = 1, 2, 3)$, $R$, positive diagonal matrices $L = \mathrm{diag}\{l_1, l_2, \ldots, l_n\}$, $D_1 = \mathrm{diag}\{d_{11}, \ldots, d_{1n}\}$, $D_2 = \mathrm{diag}\{d_{21}, \ldots, d_{2n}\}$ and any matrices $P_i$ $(i = 3, \ldots, 12)$ such that the following LMI holds:
$$\begin{bmatrix}
\bar{\Gamma}_{11} & \Gamma_{12} & -P_4^T + P_{11} & \bar{\Gamma}_{14} & L_{11}B + P_9 & -hP_3^T & -hP_4^T & -hC^T R \\
* & \Gamma_{22} & -P_6^T - P_{11} + P_{12} & -P_7 + P_8 & D_2 K_2 - P_9 + P_{10} & -hP_5^T & -hP_6^T & 0 \\
* & * & -Q_3 - P_{12} - P_{12}^T & -P_8^T & -P_{10}^T & -hP_{11}^T & -hP_{12}^T & 0 \\
* & * & * & \Gamma_{44} & LB & -hP_7^T & -hP_8^T & hA^T R \\
* & * & * & * & -(1-\mu)Q_2 - 2D_2 & -hP_9^T & -hP_{10}^T & hB^T R \\
* & * & * & * & * & -hR & 0 & 0 \\
* & * & * & * & * & * & -hR & 0 \\
* & * & * & * & * & * & * & -hR
\end{bmatrix} < 0, \qquad (29)$$
where
$$\bar{\Gamma}_{11} = -L_{11}C - C^T L_{11} + P_3 + P_3^T + Q_1 + Q_3 - 2D_1 K_1, \qquad \bar{\Gamma}_{14} = L_{11}A - C^T L + D_1 K_2 + P_7.$$

Remark 3. For $k_1^- = k_2^- = \cdots = k_n^- = 0$ (i.e., $K_1 = 0$), it is easy to show that Corollary 1 is equivalent to Theorem 1 in [9]. In [11], delayed NNs are studied and a global asymptotic stability result is obtained; that result can be recovered as a special case of Theorem 1 in [9] (see Corollary 1 in [9]). Thus Theorem 1 is an extension of the results given in [11,9].
In fact, Theorem 1 gives a criterion for the delayed NNs (4) with $d(t)$ satisfying $0 \le d(t) \le h$ and $\dot{d}(t) \le \mu$, where $\mu$ is a given constant. In many cases, $\mu$ is unknown. For this circumstance, a rate-independent criterion for a delay satisfying only $0 \le d(t) \le h$ is derived as follows by setting $Q_1 = Q_2 = 0$ in Theorem 1.

Corollary 2. For a given scalar $h \ge 0$, let $K_1$ and $K_2$ be defined as in Theorem 1. Then, for any delay $d(t)$ satisfying $0 \le d(t) \le h$, the origin of system (4) is asymptotically stable if there exist positive definite matrices $L_{11}$, $Z_{11}$, $Q_3$, $R$, positive diagonal matrices $L$, $D_1$, $D_2$ and matrices $L_{12}$, $L_{22}$, $Z_{12}$, $Z_{22}$, $P_i$ $(i = 3, \ldots, 12)$ such that (6), (7) and the following LMI hold:
$$\begin{bmatrix}
\hat{\Gamma}_{11} & \Gamma_{12} & -L_{12} - P_4^T + P_{11} & \Gamma_{14} & L_{11}B + h^2 Z_{12}B + P_9 & \Gamma_{16} & -hP_3^T & -hP_4^T & -C^T U \\
* & \hat{\Gamma}_{22} & -P_6^T - P_{11} + P_{12} & -P_7 + P_8 & D_2 K_2 - P_9 + P_{10} & 0 & -hP_5^T & -hP_6^T & 0 \\
* & * & -Q_3 - P_{12} - P_{12}^T & -P_8^T & -P_{10}^T & -L_{22} & -hP_{11}^T & -hP_{12}^T & 0 \\
* & * & * & \hat{\Gamma}_{44} & LB & A^T L_{12} & -hP_7^T & -hP_8^T & A^T U \\
* & * & * & * & -2D_2 & B^T L_{12} & -hP_9^T & -hP_{10}^T & B^T U \\
* & * & * & * & * & -Z_{11} & -hZ_{12} & -hZ_{12} & 0 \\
* & * & * & * & * & * & -U & -h^2 Z_{22} & 0 \\
* & * & * & * & * & * & * & -U & 0 \\
* & * & * & * & * & * & * & * & -U
\end{bmatrix} < 0, \qquad (30)$$
where
$$\hat{\Gamma}_{11} = -L_{11}C - C^T L_{11} + P_3 + P_3^T + L_{12} + L_{12}^T + Q_3 + h^2 Z_{11} - h^2 Z_{12}C - h^2 C^T Z_{12}^T,$$
$$\hat{\Gamma}_{22} = -P_5 - P_5^T + P_6 + P_6^T, \qquad \hat{\Gamma}_{44} = LA + A^T L - 2D_1,$$
and $\Gamma_{12}$, $\Gamma_{14}$, $\Gamma_{16}$ are defined in (10)–(12).

Remark 4. In Theorem 1 and Corollaries 1 and 2, the sufficient conditions guaranteeing the delayed NNs to be asymptotically stable are expressed in the form of LMIs, which can be easily checked by utilizing the LMI toolbox in Matlab 6.5, and no tuning of parameters is needed.
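The LMIs themselves must be handed to an SDP/LMI solver, but once a solver returns candidate matrices, strict feasibility can be double-checked independently by testing definiteness: a symmetric matrix $F$ satisfies $F < 0$ iff $-F$ admits a Cholesky factorization with strictly positive pivots. A minimal pure-Python sketch (the 3×3 example matrix is hypothetical, not from the paper):

```python
def is_neg_def(F):
    # F < 0  iff  -F > 0  iff  the Cholesky factorization of -F succeeds
    # with strictly positive pivots.
    n = len(F)
    a = [[-F[i][j] for j in range(n)] for i in range(n)]
    for k in range(n):
        for j in range(k):
            a[k][k] -= a[k][j] ** 2
        if a[k][k] <= 0:
            return False            # pivot not positive: -F is not > 0
        a[k][k] = a[k][k] ** 0.5
        for i in range(k + 1, n):
            for j in range(k):
                a[i][k] -= a[i][j] * a[k][j]
            a[i][k] /= a[k][k]
    return True

# A hypothetical diagonally dominant matrix with negative diagonal:
F = [[-4.0, 1.0, 0.5],
     [1.0, -3.0, 0.2],
     [0.5, 0.2, -2.0]]
print(is_neg_def(F))              # True: F is negative definite
print(is_neg_def([[1.0]]))        # False: a positive scalar is not < 0
```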
4. Numerical examples

In this section, we use two examples to show the benefits of our results.

Example 1. Consider the delayed NNs (4) with parameters
$$C = \mathrm{diag}\{1.2769,\ 0.6231,\ 0.9230,\ 0.4480\},$$
$$A = \begin{bmatrix} -0.0373 & 0.4852 & -0.3351 & 0.2336 \\ -1.6033 & 0.5988 & -0.3224 & 1.2352 \\ 0.3394 & -0.0860 & -0.3824 & -0.5785 \\ -0.1311 & 0.3253 & -0.9534 & -0.5015 \end{bmatrix}, \quad
B = \begin{bmatrix} 0.8674 & -1.2405 & -0.5325 & 0.0220 \\ 0.0474 & -0.9164 & 0.0360 & 0.9816 \\ 1.8495 & 2.6117 & -0.3788 & 0.8428 \\ -2.0413 & 0.5179 & -1.1734 & -0.2775 \end{bmatrix},$$
$$k_1^+ = 0.1137, \quad k_2^+ = 0.1279, \quad k_3^+ = 0.7994, \quad k_4^+ = 0.2368.$$
For $k_1^- = k_2^- = k_3^- = k_4^- = 0$, it can be checked that Theorem 1 in [1] and Theorem 1 in [21] are not satisfied, which means that they fail to conclude whether this system is asymptotically stable or not. However, when $\mu = 0$, by applying the results in [21,5,9] to this example, the maximum allowable bounds on $h$ are 1.4224 [21], 1.9321 [5] and 3.5841 [9], respectively, while by using Theorem 1 in this note we obtain $h = 4.0120$, which shows that our criterion is less conservative than those in [1,21,5,9]. For $k_1^- = -0.1$, $k_2^- = 0$, $k_3^- = 0$, $k_4^- = -0.2$, the maximum delay $h$ that guarantees the delayed NNs to be asymptotically stable is calculated to be 3.3668.

Example 2. Consider the delayed NNs (4) in [9] with
$$C = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}, \quad A = \begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix}, \quad B = \begin{bmatrix} 0.88 & 1 \\ 1 & 1 \end{bmatrix}, \quad k_1^+ = 0.4, \quad k_2^+ = 0.8.$$
For $k_1^- = k_2^- = 0$, the corresponding upper bound on $h$ is 0.8298 in [11,10] and 1.0880 in [9], while it is 1.1345 by using Corollary 2 in this note. It is seen that our results improve the existing ones in [9–11]. For $k_1^- = 0.2$, $k_2^- = 0.1$, the maximum delay is obtained as $h = 1.1380$ by solving LMIs (6), (7) and (30) with the Matlab LMI Control Toolbox.
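The conclusion of Example 2 can be cross-checked by direct simulation. The sketch below uses our own illustrative setup, not anything prescribed by the paper: activations $f_i(z) = k_i^+ \tanh(z)$, which satisfy sector condition (5) with $k_i^- = 0$; a constant delay $d = 0.5$, comfortably below the bound 1.1345; a constant initial history; and forward-Euler integration with a delay buffer. The trajectory of system (4) should decay toward the origin.

```python
import math

C = [[2.0, 0.0], [0.0, 2.0]]
A = [[1.0, 1.0], [-1.0, -1.0]]
B = [[0.88, 1.0], [1.0, 1.0]]
kp = [0.4, 0.8]                       # k_1^+ = 0.4, k_2^+ = 0.8

def f(z):
    # Sector-bounded activations: f_i(z) = k_i^+ * tanh(z), so condition
    # (5) holds with k_i^- = 0 and the stated k_i^+.
    return [kp[i] * math.tanh(z[i]) for i in range(2)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

dt, d, T = 0.001, 0.5, 30.0           # Euler step, constant delay, horizon
lag = int(d / dt)
hist = [[0.5, -0.3]] * (lag + 1)      # constant initial history on [-d, 0]
z = hist[-1][:]
for _ in range(int(T / dt)):
    zd = hist[0]                      # delayed state z(t - d)
    fz, fzd = f(z), f(zd)
    rhs = [-matvec(C, z)[i] + matvec(A, fz)[i] + matvec(B, fzd)[i]
           for i in range(2)]         # dz/dt = -Cz + A f(z) + B f(z(t-d))
    z = [z[i] + dt * rhs[i] for i in range(2)]
    hist.pop(0)
    hist.append(z[:])

norm = math.hypot(z[0], z[1])
print(norm < 1e-2)   # the state has decayed close to the origin
```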
5. Conclusion

In this paper, we have investigated the stability problem of neural networks with time-varying delays. By employing an augmented Lyapunov–Krasovskii functional, we proposed a new stability criterion for the considered systems. The obtained delay-dependent stability conditions can improve some existing ones. Finally, numerical examples are given to show the superiority of the proposed stability conditions.

Acknowledgments

This work was supported by the National Science Foundation of China under Grant no. 60474050.

References

[1] S. Arik, Global asymptotic stability of a larger class of neural networks with constant time delay, Phys. Lett. A 311 (2003) 504–511.
[2] J.D. Cao, L. Wang, Exponential stability and periodic oscillatory solution in BAM networks with delays, IEEE Trans. Neural Networks 13 (2002) 457–463.
[3] J.D. Cao, J. Wang, Global exponential stability and periodicity of recurrent neural networks with time delays, IEEE Trans. Circuits Syst. I Reg. Papers 52 (2005) 920–931.
[4] T. Chen, L. Rong, Delay-independent stability analysis of Cohen–Grossberg neural networks, Phys. Lett. A 317 (2002) 436–449.
[5] H.J. Cho, J.H. Park, Novel delay-dependent robust stability criterion of delayed cellular neural networks, Chaos Solitons Fractals 32 (2007) 1194–1200.
[6] L.O. Chua, L. Yang, Cellular neural networks: applications, IEEE Trans. Circuits Syst. 35 (1988) 1273–1290.
[7] A. Cichocki, R. Unbehauen, Neural Networks for Optimization and Signal Processing, Wiley, Chichester, 1993.
[8] K. Gu, An integral inequality in the stability problem of time-delay systems, in: Proceedings of the 39th IEEE Conference on Decision and Control, 2000, pp. 2805–2810.
[9] Y. He, G.P. Liu, D. Rees, New delay-dependent stability criteria for neural networks with time-varying delay, IEEE Trans. Neural Networks 18 (2007) 310–314.
[10] Y. He, Q.G. Wang, M. Wu, LMI-based stability criteria for neural networks with multiple time-varying delays, Physica D 212 (2005) 126–136.
[11] C.C. Hua, C.N. Long, X.P. Guan, New results on stability analysis of neural networks with time-varying delays, Phys. Lett. A 352 (2006) 335–340.
[12] X.F. Liao, G. Chen, E.N. Sanchez, Delay-dependent exponential stability analysis of delayed neural networks: an LMI approach, Neural Networks 15 (2002) 855–866.
[13] Y.R. Liu, Z. Wang, X.H. Liu, Design of exponential state estimators for neural networks with mixed time delays, Phys. Lett. A 364 (2007) 401–412.
[14] J.H. Park, Robust stability of bidirectional associative memory neural networks with time delays, Phys. Lett. A 359 (2006) 494–499.
[15] V. Singh, Global robust stability of delayed neural networks: an LMI approach, IEEE Trans. Circuits Syst. II Exp. Briefs 52 (2005) 33–36.
[16] Q.K. Song, Z. Wang, A delay-dependent LMI approach to dynamics analysis of discrete-time recurrent neural networks with time-varying delays, Phys. Lett. A (2007), doi:10.1016/j.physleta.2007.03.088.
[17] Q.K. Song, Z.J. Zhao, Y.M. Li, Global exponential stability of BAM neural networks with distributed delays and reaction–diffusion terms, Phys. Lett. A 335 (2005) 213–225.
[18] Z. Wang, Y. Liu, M. Li, X. Liu, Stability analysis for stochastic Cohen–Grossberg neural networks with mixed time delays, IEEE Trans. Neural Networks 17 (2006) 814–820.
[19] Z. Wang, Y. Liu, X. Liu, On global asymptotic stability of neural networks with discrete and distributed delays, Phys. Lett. A 345 (2005) 299–308.
[20] S. Xu, J. Lam, A new LMI condition for delay-dependent asymptotic stability of delayed Hopfield neural networks, IEEE Trans. Circuits Syst. II Exp. Briefs 53 (2006) 230–234.
[21] S. Xu, J. Lam, D.W.C. Ho, Y. Zou, Novel global asymptotic stability criteria for delayed cellular neural networks, IEEE Trans. Circuits Syst. II Exp. Briefs 52 (2005) 349–353.
[22] B.Y. Zhang, S. Xu, Y. Zou, Relaxed stability conditions for delayed recurrent neural networks with polytopic uncertainties, Int. J. Neural Syst. 16 (2006) 473–482.
[23] Q. Zhang, X. Wei, J. Xu, Global exponential stability for nonautonomous cellular neural networks with delays, Phys. Lett. A 351 (2006) 153–160.
[24] Q. Zhang, X. Wei, J. Xu, Stability analysis for cellular neural networks with variable delays, Chaos Solitons Fractals 28 (2006) 331–336.

Tao Li was born in 1979. He is now pursuing his Ph.D.
degree at the Research Institute of Automation, Southeast University, China. His current research interests include time-delay systems, neural networks, robust control, and fault detection and diagnosis.
Lei Guo was born in 1966. He received the Ph.D. degree in control engineering from Southeast University (SEU), PR China, in 1997. From 1999 to 2004, he worked at Hong Kong University, IRCCyN (France), Glasgow University, Loughborough University and UMIST, UK. He is now a Professor in the Research Institute of Automation, SEU, and in the School of Instrument Science and Opto-Electronics Engineering, Beihang University. He also holds a visiting professorship at the University of Manchester, UK, and an invitation fellowship at Okayama University, Japan. His research interests include robust control, stochastic systems, fault detection, filter design and nonlinear control with their applications.
Changyin Sun was born in 1975. He received the Ph.D. degree in control engineering from Southeast University (SEU), PR China, in 2004. Since December 2006, he has been with SEU, where he is now a Professor in the Department of Automation. His recent research interests include neural networks, nonlinear control and transient stability of power systems.