Neurocomputing 101 (2013) 1–9
New delay-dependent stability criteria for uncertain stochastic neural networks with discrete interval and distributed delays

Huabin Chen
Department of Mathematics, Nanchang University, Nanchang 330031, Jiangxi, PR China

Article history: Received 31 December 2011; received in revised form 23 March 2012; accepted 12 June 2012; available online 4 September 2012. Communicated by H. Jiang.

Abstract: This paper studies the global robust asymptotic stability in mean square of uncertain stochastic neural networks with discrete interval and distributed time-varying delays. By constructing an augmented Lyapunov–Krasovskii functional, delay-dependent criteria for the global robust asymptotic stability of such systems are formulated in terms of linear matrix inequalities (LMIs). Finally, two numerical examples are provided to illustrate the effectiveness of the obtained results.

Keywords: Uncertain stochastic neural networks; Lyapunov–Krasovskii functional; Linear matrix inequalities (LMIs); Interval time-varying delay; Distributed delay
1. Introduction

Over the past decades, neural networks have received considerable attention owing to their wide range of applications in areas such as signal processing, pattern recognition, static image processing, associative memory and combinatorial optimization. These applications depend largely on the stability of the equilibrium of the network; that is, stability is a central dynamical property when neural networks are designed. In practice, time delays are often encountered in engineering, biological and economic systems. Because of the finite speed of information processing, the presence of a time delay can cause oscillation, divergence, or even instability of a neural network. The existing stability criteria for delayed neural networks can be classified into two types: delay-independent criteria [7–9,29,31,32,52] and delay-dependent criteria [6,14,15,19,21,22,24–26,35–37,39,40,42–44]. Generally speaking, delay-dependent stability criteria are less conservative than delay-independent ones, especially when the size of the delay is small. Thus, obtaining delay-dependent stability criteria is not only of theoretical importance but also of practical value.

(This work was supported by the National Natural Science Foundation of China under Grant No. 11126278 and the Natural Science Foundation of Jiangxi Province of China under Grant No. 20114BAB211001. E-mail address: [email protected].)
However, most of the delayed neural network models discussed above are deterministic and apply only to the case in which no stochastic perturbation is present. In practice, stochastic neural networks with time delays reflect reality more faithfully, and many valuable results on the stability analysis of stochastic delayed neural networks have appeared recently [4,5,10,11,16–18,20,23,27,29,33,34,38,41,45,47–52]. For example, in [10,11,22,34,48] the stability analysis of uncertain stochastic neural networks with discrete and distributed delays was discussed. When the exponential stability of stochastic neural networks with time-varying delays is considered, it is usually assumed that the derivative of the time-varying delay is less than one; see [10,11]. As a result, the results obtained in [10,11] become invalid when the derivative of the time-varying delay equals or exceeds one. To overcome this restriction, the mean-square robust stability of stochastic Hopfield neural networks with time-varying delay and distributed delay was investigated in [18,20,23,27,45] by means of the free-weighting matrix technique. On the other hand, the foregoing stability criteria for stochastic neural networks with time-varying delay apply only when the lower bound of the delay is zero. In practical engineering systems there exists a special type of time delay, the interval time-varying delay $h_m \le h(t) \le h_M$ with $h_m$ not restricted to be zero, which commonly arises in networked control systems; the delay-dependent stability of neural networks with interval time-varying delay was widely studied in [6,36,46] and the references therein. Delay-dependent stability criteria for uncertain stochastic neural networks with discrete interval and distributed delays were obtained in [2,3,30] by constructing a modified Lyapunov–Krasovskii functional and using the
free-weighting matrix technique; there, however, the bounding technique is frequently employed to handle the discrete-delay terms, and its excessive use together with free-weighting matrices usually introduces considerable conservatism. Although the asymptotic stability of uncertain stochastic neural networks with discrete interval and distributed delays is also considered in [22], some non-negative terms are ignored there. The results obtained in [2,3,22,30] are therefore rather conservative, and how to derive less conservative stability criteria for uncertain stochastic neural networks with discrete interval and distributed delays remains a challenging problem, which motivates the present study.

Inspired by the statements above, we consider the global robust asymptotic stability in mean square of uncertain stochastic neural networks with discrete interval and distributed delays. By constructing an augmented Lyapunov–Krasovskii functional and using free-weighting matrices, LMI-based sufficient conditions ensuring the global robust asymptotic stability in mean square of such systems are derived; they are less conservative than some existing results. Moreover, in contrast to the results in [7–9,29,31,32,52], the proposed LMI-based conditions are computationally efficient, since they can be solved numerically with the LMI toolbox in Matlab. Finally, two illustrative examples are provided to show the effectiveness of the results.

Notation. $\mathbb{R}^n$ and $\mathbb{R}^{m\times n}$ denote the $n$-dimensional Euclidean space and the set of all $m\times n$ real matrices, respectively. $E$ stands for the identity matrix of appropriate dimensions. For two symmetric matrices $X$ and $Y$, $X>Y$ (respectively, $X\ge Y$) means that $X-Y$ is positive definite (respectively, positive semi-definite). $\|\cdot\|$ denotes the Euclidean vector norm, and the superscript $T$ denotes the transpose of a matrix or vector. $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},P)$ is a probability space with a filtration $\{\mathcal{F}_t\}_{t\ge 0}$ satisfying the usual conditions (i.e., the filtration contains all $P$-null sets and is right continuous). $L^2_{\mathcal{F}_0}([-\bar\tau,0];\mathbb{R}^n)$ denotes the family of all $\mathcal{F}_0$-measurable $C([-\bar\tau,0];\mathbb{R}^n)$-valued random variables $\xi=\{\xi(\theta):\ -\bar\tau\le\theta\le 0\}$ such that $\sup_{\theta\in[-\bar\tau,0]}\mathbb{E}\|\xi(\theta)\|^2<+\infty$, where $\mathbb{E}(\cdot)$ stands for the mathematical expectation. Matrices, if not explicitly stated, are assumed to have compatible dimensions.
2. Problem formulation

Consider the following uncertain stochastic neural network with discrete interval and distributed time-varying delays:

$$dx(t) = \Big[-A(t)x(t) + W_0(t)f(x(t)) + W_1(t)f(x(t-h(t))) + W_2(t)\int_{t-\tau(t)}^{t} f(x(s))\,ds\Big]dt + \Big[C(t)x(t) + D(t)x(t-h(t)) + B_0(t)f(x(t)) + B_1(t)f(x(t-h(t))) + B_2(t)\int_{t-\tau(t)}^{t} f(x(s))\,ds\Big]dw(t), \quad t\ge 0, \qquad (2.1)$$

$$x(\theta) = \varphi(\theta), \quad \theta\in[-\bar\tau,0], \quad \bar\tau = \max\{h_M,\tau\}, \qquad (2.2)$$

where $x(t)\in\mathbb{R}^n$ is the state vector associated with the neurons; $\varphi\in C([-\bar\tau,0];\mathbb{R}^n)$ is the initial function; $f(x(t)) = [f_1(x_1(t)), f_2(x_2(t)), \ldots, f_n(x_n(t))]^T$ denotes the neuron activation function; and $w(t) = [w_1(t), w_2(t), \ldots, w_n(t)]^T\in\mathbb{R}^n$ is an $n$-dimensional Brownian motion defined on a complete probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},P)$. The delays $h(t)$ and $\tau(t)$ satisfy

$$0 \le h_m \le h(t) \le h_M, \quad \dot h(t) \le \mu, \quad 0 \le \tau(t) \le \tau, \qquad (2.3)$$

where $h_m$, $h_M$, $\mu$ and $\tau$ are constants. $A(t)$, $W_0(t)$, $W_1(t)$, $W_2(t)$, $C(t)$, $D(t)$, $B_0(t)$, $B_1(t)$ and $B_2(t)$ are matrix functions with time-varying uncertainties, that is,

$$A(t) = A + \Delta A(t),\quad W_i(t) = W_i + \Delta W_i(t)\ (i=0,1,2),\quad C(t) = C + \Delta C(t),\quad D(t) = D + \Delta D(t),\quad B_i(t) = B_i + \Delta B_i(t)\ (i=0,1,2), \qquad (2.4)$$

where $A$, $W_0$, $W_1$, $W_2$, $C$, $D$, $B_0$, $B_1$ and $B_2$ are known real constant matrices, and $\Delta A(t), \Delta W_0(t), \Delta W_1(t), \Delta W_2(t), \Delta C(t), \Delta D(t), \Delta B_0(t), \Delta B_1(t), \Delta B_2(t)$ are unknown matrices representing time-varying parameter uncertainties in the system model. The uncertainties are assumed to be norm-bounded and of the form

$$[\Delta A(t)\ \ \Delta W_0(t)\ \ \Delta W_1(t)\ \ \Delta W_2(t)\ \ \Delta C(t)\ \ \Delta D(t)\ \ \Delta B_0(t)\ \ \Delta B_1(t)\ \ \Delta B_2(t)] = MF(t)[N_1\ \ N_2\ \ N_3\ \ N_4\ \ N_5\ \ N_6\ \ N_7\ \ N_8\ \ N_9], \qquad (2.5)$$

where $F(t)$ is an unknown, possibly time-varying, real matrix satisfying

$$F^T(t)F(t) \le E, \qquad (2.6)$$

and $M$, $N_1,\ldots,N_9$ are known real matrices of appropriate dimensions. The elements of $F(t)$ are assumed to be Lebesgue measurable. When $F(t)=0$, system (2.1) reduces to the nominal case

$$dx(t) = \Big[-Ax(t) + W_0 f(x(t)) + W_1 f(x(t-h(t))) + W_2\int_{t-\tau(t)}^{t} f(x(s))\,ds\Big]dt + \Big[Cx(t) + Dx(t-h(t)) + B_0 f(x(t)) + B_1 f(x(t-h(t))) + B_2\int_{t-\tau(t)}^{t} f(x(s))\,ds\Big]dw(t), \quad t\ge 0. \qquad (2.7)$$

To obtain the main results, we need the following assumption:

(H) The activation function $f(x)$ is bounded, satisfies $f(0)=0$, and satisfies the Lipschitz condition

$$|f(u_1) - f(u_2)| \le K|u_1 - u_2|, \quad \forall u_1, u_2\in\mathbb{R}^n,$$

where $K = \mathrm{diag}\{k_1, k_2, \ldots, k_n\}$ is a positive definite diagonal matrix.

Remark 1. Under condition (H), the stochastic neural network with discrete interval and distributed delays (2.7) has the trivial solution corresponding to the initial value $\varphi = 0$; see [28].

Lemma 2.1 (Gu et al. [12,13]). For any constant matrix $S\in\mathbb{R}^{n\times n}$ with $S = S^T > 0$, a scalar $\gamma>0$ and a vector function $\omega:[0,\gamma]\to\mathbb{R}^n$ such that the integrations below are well defined, the following inequality holds:

$$\Big(\int_0^{\gamma}\omega(s)\,ds\Big)^T S\,\Big(\int_0^{\gamma}\omega(s)\,ds\Big) \le \gamma\int_0^{\gamma}\omega^T(s)S\,\omega(s)\,ds.$$
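Lemma 2.1 is easy to sanity-check numerically by discretizing the integrals. The sketch below is not from the paper; the dimension, interval length and integrand samples are illustrative assumptions:

```python
import numpy as np

# Numerical sanity check of Lemma 2.1 (Jensen-type integral inequality).
# All data here are illustrative assumptions, not taken from the paper.
rng = np.random.default_rng(0)
n, gamma, m = 3, 2.0, 2000                     # dimension, interval length, grid points
ds = gamma / m
S = rng.normal(size=(n, n))
S = S @ S.T + n * np.eye(n)                    # S = S^T > 0
w = rng.normal(size=(m, n))                    # grid samples of omega(s)

v = w.sum(axis=0) * ds                         # Riemann sum for the integral of omega
lhs = v @ S @ v                                # (int omega)^T S (int omega)
rhs = gamma * ds * np.einsum("ij,jk,ik->", w, S, w)  # gamma * int omega^T S omega ds
print(lhs <= rhs + 1e-9)                       # True, up to discretization error
```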
Lemma 2.2 (Boyd et al. [1]). Given matrices $Q = Q^T$, $M$ and $N$ of appropriate dimensions, $Q + MF(t)N + N^T F^T(t)M^T < 0$ holds for all $F(t)$ satisfying $F^T(t)F(t)\le E$ if and only if there exists $\varepsilon>0$ such that

$$Q + \varepsilon^{-1}MM^T + \varepsilon N^T N < 0.$$

Lemma 2.3 (Boyd et al. [1], Schur complement). For a given matrix

$$S = \begin{bmatrix} S_{11} & S_{12} \\ S_{12}^T & S_{22} \end{bmatrix}$$

with $S_{11} = S_{11}^T$ and $S_{22} = S_{22}^T$, the following conditions are equivalent:

(1) $S < 0$;
(2) $S_{22} < 0$ and $S_{11} - S_{12}S_{22}^{-1}S_{12}^T < 0$;
(3) $S_{11} < 0$ and $S_{22} - S_{12}^T S_{11}^{-1}S_{12} < 0$.
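The equivalence in Lemma 2.3 can likewise be verified numerically; the following sketch is illustrative (the randomly generated negative definite test matrix is an assumption) and checks conditions (1) and (2) against each other:

```python
import numpy as np

# Illustrative check of Lemma 2.3 (Schur complement): S < 0 iff
# S22 < 0 and S11 - S12 S22^{-1} S12^T < 0. Test data are assumptions.
rng = np.random.default_rng(1)
G = rng.normal(size=(4, 4))
S = -(G @ G.T + np.eye(4))                       # a negative definite test matrix
S11, S12, S22 = S[:2, :2], S[:2, 2:], S[2:, 2:]

def negdef(M: np.ndarray) -> bool:
    return bool(np.all(np.linalg.eigvalsh(M) < 0))

schur = S11 - S12 @ np.linalg.solve(S22, S12.T)  # S11 - S12 S22^{-1} S12^T
print(negdef(S), negdef(S22) and negdef(schur))  # the two answers always agree
```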
3. Main results

In this section, sufficient conditions ensuring the global asymptotic stability of the stochastic neural network with discrete interval and distributed delays (2.7) are first obtained by constructing an augmented Lyapunov–Krasovskii functional and using free-weighting matrices. The following notation is needed:

$$y(t) = -Ax(t) + W_0 f(x(t)) + W_1 f(x(t-h(t))) + W_2\int_{t-\tau(t)}^{t} f(x(s))\,ds,$$
$$z(t) = Cx(t) + Dx(t-h(t)) + B_0 f(x(t)) + B_1 f(x(t-h(t))) + B_2\int_{t-\tau(t)}^{t} f(x(s))\,ds,$$
$$\sigma_1 = h_m^2/2, \quad \sigma_2 = (h_M^2 - h_m^2)/2, \quad h_{Mm} = h_M - h_m.$$

Theorem 3.1. Let $\alpha\in(0,1)$, let the delays $h(t)$, $\tau(t)$ satisfy condition (2.3), and let condition (H) hold. Suppose there exist matrices $P_{11}>0$, $P_{22}>0$, $P_{33}>0$, $P_{44}>0$, $P_{55}>0$, $P_{12}$, $P_{13}$, $P_{14}$, $P_{15}$, $P_{23}$, $P_{24}$, $P_{25}$, $P_{34}$, $P_{35}$, $P_{45}$, $Q^i = \begin{bmatrix} Q^i_{11} & Q^i_{12} \\ (Q^i_{12})^T & Q^i_{22}\end{bmatrix}>0$ $(i=1,2,3)$, $Q_l>0$ $(l=4,5,\ldots,8)$, $R_k>0$, $Z_j>0$ $(k,j=1,2,3,4)$, diagonal matrices $Q = \mathrm{diag}\{q_1,\ldots,q_n\}>0$, $Q_9 = \mathrm{diag}\{q^9_1,\ldots,q^9_n\}>0$, $Q_{10} = \mathrm{diag}\{q^{10}_1,\ldots,q^{10}_n\}>0$, and appropriately dimensioned matrices $L=[L_1^T\ L_2^T]^T$, $M=[M_1^T\ M_2^T]^T$, $J=[J_1^T\ J_2^T]^T$, $H=[H_1^T\ H_2^T]^T$ and $I=[I_1^T\ I_2^T]^T$, such that

$$P = (P_{ij})_{5\times 5} = \begin{bmatrix} P_{11} & P_{12} & P_{13} & P_{14} & P_{15} \\ * & P_{22} & P_{23} & P_{24} & P_{25} \\ * & * & P_{33} & P_{34} & P_{35} \\ * & * & * & P_{44} & P_{45} \\ * & * & * & * & P_{55} \end{bmatrix} > 0$$

and the following linear matrix inequalities (LMIs) hold:

$$\Xi^k = \begin{bmatrix} \Omega^k & (\Omega^k)_{12} \\ * & (\Omega^k)_{22} \end{bmatrix} < 0, \quad k = 1,2,3, \qquad (3.1)$$

where $\Omega^k = (\Omega^k_{ij})_{11\times 11}$ is symmetric with the following nonzero upper-triangular entries (all entries not listed are zero):

$$\Omega^k_{11} = v_k[-P_{11}A - A^T P_{11} + P_{12}^T + P_{12} + Q^1_{11} + h_m Q_5 + h_{Mm}Q_6 + K^T Q_9 K + L_1^T + L_1 + h_m(H_1^T + H_1) + h_{Mm}(I_1^T + I_1)],$$
$$\Omega^k_{12} = v_k[L_2 - M_1^T + J_1^T + h_m H_2 + h_{Mm}I_2], \quad \Omega^k_{13} = v_k[-L_2^T + M_1^T - P_{12} + P_{13}], \quad \Omega^k_{14} = v_k[-J_1 - P_{13}],$$
$$\Omega^k_{15} = v_k[P_{11}W_0 + P_{14} - A^T Q + Q^1_{12}], \quad \Omega^k_{16} = v_k P_{11}W_1, \quad \Omega^k_{17} = v_k[-P_{14} + P_{15}], \quad \Omega^k_{18} = -v_k P_{15}, \quad \Omega^k_{19} = v_k P_{11}W_2,$$
$$\Omega^k_{1,10} = v_k[-A^T P_{14} + P_{24}], \quad \Omega^k_{1,11} = v_k[-A^T P_{15} + P_{25}],$$
$$\Omega^k_{22} = v_k[-(1-\mu)Q^2_{11} - M_2 - M_2^T + J_2^T + J_2 + K^T Q_{10}K], \quad \Omega^k_{23} = v_k[-L_2 + M_2], \quad \Omega^k_{24} = -v_k J_2, \quad \Omega^k_{26} = -v_k(1-\mu)Q^2_{12},$$
$$\Omega^k_{33} = v_k[-Q^1_{11} + Q^2_{11} + Q^3_{11}], \quad \Omega^k_{37} = v_k[-Q^1_{12} + Q^2_{12} + Q^3_{12}], \quad \Omega^k_{3,10} = v_k[-P_{24} + P_{34}], \quad \Omega^k_{3,11} = v_k[-P_{25} + P_{35}],$$
$$\Omega^k_{44} = -v_k Q^3_{11}, \quad \Omega^k_{48} = -v_k Q^3_{12}, \quad \Omega^k_{4,10} = -v_k P_{34}, \quad \Omega^k_{4,11} = -v_k P_{35},$$
$$\Omega^k_{55} = v_k[-Q_9 + QW_0 + W_0^T Q + Q^1_{22} + h_m^2 Q_7 + h_{Mm}^2 Q_8 + \tau^2 Q_4], \quad \Omega^k_{56} = v_k QW_1, \quad \Omega^k_{59} = v_k QW_2,$$
$$\Omega^k_{5,10} = v_k[W_0^T P_{14} + P_{44}], \quad \Omega^k_{5,11} = v_k[W_0^T P_{15} + P_{45}],$$
$$\Omega^k_{66} = v_k[-(1-\mu)Q^2_{22} - Q_{10}], \quad \Omega^k_{6,10} = v_k W_1^T P_{14}, \quad \Omega^k_{6,11} = v_k W_1^T P_{15},$$
$$\Omega^k_{77} = v_k[-Q^1_{22} + Q^2_{22} + Q^3_{22}], \quad \Omega^k_{7,10} = v_k[-P_{44} + P_{45}^T], \quad \Omega^k_{7,11} = v_k[-P_{45} + P_{55}],$$
$$\Omega^k_{88} = -v_k Q^3_{22}, \quad \Omega^k_{8,10} = -v_k P_{45}^T, \quad \Omega^k_{8,11} = -v_k P_{55},$$
$$\Omega^k_{99} = -v_k Q_4, \quad \Omega^k_{9,10} = v_k W_2^T P_{14}, \quad \Omega^k_{9,11} = v_k W_2^T P_{15}, \quad \Omega^k_{10,10} = -v_k Q_7, \quad \Omega^k_{11,11} = -v_k Q_8,$$

and, with the auxiliary block vectors

$$\hat H = [-P_{12}^T A + P_{22} - H_1,\ -H_2,\ -P_{22} + P_{23},\ -P_{23},\ P_{12}^T W_0 + P_{24},\ P_{12}^T W_1,\ -P_{24} + P_{25},\ -P_{25},\ P_{12}^T W_2,\ 0,\ 0]^T,$$
$$\hat I = [-P_{13}^T A + P_{23}^T - I_1,\ -I_2,\ -P_{23}^T + P_{33},\ -P_{33},\ P_{13}^T W_0 + P_{34},\ P_{13}^T W_1,\ -P_{34} + P_{35},\ -P_{35},\ P_{13}^T W_2,\ 0,\ 0]^T,$$
$$\tilde L = [L_1\ L_2\ 0\ \cdots\ 0]^T, \quad \tilde M = [M_1\ M_2\ 0\ \cdots\ 0]^T, \quad \tilde J = [J_1\ J_2\ 0\ \cdots\ 0]^T, \quad \tilde H = [H_1\ H_2\ 0\ \cdots\ 0]^T, \quad \tilde I = [I_1\ I_2\ 0\ \cdots\ 0]^T,$$
$$\tilde A = [-A\ 0\ 0\ 0\ W_0\ W_1\ 0\ 0\ W_2\ 0\ 0], \quad \tilde C = [C\ D\ 0\ 0\ B_0\ B_1\ 0\ 0\ B_2\ 0\ 0],$$

the off-diagonal and diagonal blocks are

$$(\Omega^1)_{12} = [h_m\hat H,\ h_m\tilde L,\ -\sigma_1\tilde H,\ v_1\tilde C^T P_{11},\ v_1\tilde C^T KQ,\ h_m\tilde A^T R_1,\ \sigma_1\tilde A^T R_3,\ h_m\tilde C^T Z_1,\ \sigma_1\tilde C^T Z_3,\ v_1 L^T,\ v_1 M^T,\ v_1 J^T,\ h_m H^T],$$
$$(\Omega^2)_{12} = [h_{Mm}\hat I,\ h_{Mm}\tilde M,\ -\sigma_2\tilde I,\ v_2\tilde C^T P_{11},\ v_2\tilde C^T KQ,\ h_{Mm}\tilde A^T R_2,\ \sigma_2\tilde A^T R_4,\ h_{Mm}\tilde C^T Z_2,\ \sigma_2\tilde C^T Z_4,\ v_2 L^T,\ v_2 M^T,\ v_2 J^T,\ h_{Mm} I^T],$$
$$(\Omega^3)_{12} = [h_{Mm}\hat I,\ h_{Mm}\tilde J,\ -\sigma_2\tilde I,\ v_3\tilde C^T P_{11},\ v_3\tilde C^T KQ,\ h_{Mm}\tilde A^T R_2,\ \sigma_2\tilde A^T R_4,\ h_{Mm}\tilde C^T Z_2,\ \sigma_2\tilde C^T Z_4,\ v_3 L^T,\ v_3 M^T,\ v_3 J^T,\ h_{Mm} I^T],$$
$$(\Omega^1)_{22} = -\mathrm{diag}\{h_m Q_5,\ h_m R_1,\ \sigma_1 R_3,\ v_1 P_{11},\ v_1 KQ,\ h_m R_1,\ \sigma_1 R_3,\ h_m Z_1,\ \sigma_1 Z_3,\ v_1 Z_1,\ v_1 Z_2,\ v_1 Z_2,\ h_m Z_3\},$$
$$(\Omega^2)_{22} = -\mathrm{diag}\{h_{Mm} Q_6,\ h_{Mm} R_2,\ \sigma_2 R_4,\ v_2 P_{11},\ v_2 KQ,\ h_{Mm} R_2,\ \sigma_2 R_4,\ h_{Mm} Z_2,\ \sigma_2 Z_4,\ v_2 Z_1,\ v_2 Z_2,\ v_2 Z_2,\ h_{Mm} Z_4\},$$
$$(\Omega^3)_{22} = -\mathrm{diag}\{h_{Mm} Q_6,\ h_{Mm} R_2,\ \sigma_2 R_4,\ v_3 P_{11},\ v_3 KQ,\ h_{Mm} R_2,\ \sigma_2 R_4,\ h_{Mm} Z_2,\ \sigma_2 Z_4,\ v_3 Z_1,\ v_3 Z_2,\ v_3 Z_2,\ h_{Mm} Z_4\},$$

with $v_1 = \alpha$, $v_2 = v_3 = 1-\alpha$, and $*$ denoting symmetric terms. Then the stochastic neural network with discrete interval and distributed delays (2.7) is globally asymptotically stable in mean square.

Proof. Define the augmented Lyapunov–Krasovskii functional

$$V(t,x_t) = V_1(t,x_t) + V_2(t,x_t) + V_3(t,x_t), \qquad (3.2)$$
where, with the augmented state and the pair notation

$$\xi(t) = \Big[x^T(t),\ \int_{t-h_m}^{t} x^T(s)\,ds,\ \int_{t-h_M}^{t-h_m} x^T(s)\,ds,\ \int_{t-h_m}^{t} f^T(x(s))\,ds,\ \int_{t-h_M}^{t-h_m} f^T(x(s))\,ds\Big]^T, \quad \phi(t) = [x^T(t)\ \ f^T(x(t))]^T,$$

the three components are

$$V_1(t,x_t) = \xi^T(t)P\,\xi(t) + 2\sum_{i=1}^{n} q_i\int_0^{x_i} f_i(s)\,ds,$$

$$V_2(t,x_t) = \int_{t-h_m}^{t}\phi^T(s)Q^1\phi(s)\,ds + \int_{t-h(t)}^{t-h_m}\phi^T(s)Q^2\phi(s)\,ds + \int_{t-h_M}^{t-h_m}\phi^T(s)Q^3\phi(s)\,ds + \tau\int_{-\tau}^{0}\int_{t+\theta}^{t} f^T(x(s))Q_4 f(x(s))\,ds\,d\theta + \int_{-h_m}^{0}\int_{t+\theta}^{t} x^T(s)Q_5 x(s)\,ds\,d\theta + \int_{-h_M}^{-h_m}\int_{t+\theta}^{t} x^T(s)Q_6 x(s)\,ds\,d\theta + h_m\int_{-h_m}^{0}\int_{t+\theta}^{t} f^T(x(s))Q_7 f(x(s))\,ds\,d\theta + (h_M-h_m)\int_{-h_M}^{-h_m}\int_{t+\theta}^{t} f^T(x(s))Q_8 f(x(s))\,ds\,d\theta,$$

$$V_3(t,x_t) = \int_{-h_m}^{0}\int_{t+\theta}^{t} y^T(s)R_1 y(s)\,ds\,d\theta + \int_{-h_M}^{-h_m}\int_{t+\theta}^{t} y^T(s)R_2 y(s)\,ds\,d\theta + \int_{-h_m}^{0}\int_{\theta}^{0}\int_{t+\lambda}^{t} y^T(s)R_3 y(s)\,ds\,d\lambda\,d\theta + \int_{-h_M}^{-h_m}\int_{\theta}^{0}\int_{t+\lambda}^{t} y^T(s)R_4 y(s)\,ds\,d\lambda\,d\theta + \int_{-h_m}^{0}\int_{t+\theta}^{t} z^T(s)Z_1 z(s)\,ds\,d\theta + \int_{-h_M}^{-h_m}\int_{t+\theta}^{t} z^T(s)Z_2 z(s)\,ds\,d\theta + \int_{-h_m}^{0}\int_{\theta}^{0}\int_{t+\lambda}^{t} z^T(s)Z_3 z(s)\,ds\,d\lambda\,d\theta + \int_{-h_M}^{-h_m}\int_{\theta}^{0}\int_{t+\lambda}^{t} z^T(s)Z_4 z(s)\,ds\,d\lambda\,d\theta.$$

By Ito's formula, the stochastic differential of $V(t,x_t)$ along (2.7) is

$$dV(t,x_t) = \mathcal{L}V(t,x_t)\,dt + 2\big[\xi^T(t)P E_1 + f^T(x(t))Q\big]z(t)\,dw(t), \qquad (3.3)$$

where $E_1 = [E\ 0\ 0\ 0\ 0]^T$ selects the first block of $\xi(t)$, and

$$\mathcal{L}V(t,x_t) = \mathcal{L}V_1(t,x_t) + \mathcal{L}V_2(t,x_t) + \mathcal{L}V_3(t,x_t). \qquad (3.4)$$

A direct computation gives

$$\mathcal{L}V_1(t,x_t) = 2\xi^T(t)P\,\dot{\hat\xi}(t) + 2f^T(x(t))Q\,y(t) + z^T(t)P_{11}z(t), \qquad (3.5)$$

where $\dot{\hat\xi}(t) = [y^T(t),\ x^T(t)-x^T(t-h_m),\ x^T(t-h_m)-x^T(t-h_M),\ f^T(x(t))-f^T(x(t-h_m)),\ f^T(x(t-h_m))-f^T(x(t-h_M))]^T$ is the drift of $\xi(t)$; expanding (3.5) termwise produces exactly the $P$-dependent and $Q$-dependent entries of $\Omega^k$, $\hat H$ and $\hat I$ listed in Theorem 3.1.
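For orientation, the pattern behind (3.5) is the same as in the scalar case. For $dx = -ax\,dt + cx\,dw$ and $V = px^2$ with $p>0$ (a toy example, not part of the proof), Ito's formula gives

$$\mathcal{L}V = 2px\cdot(-ax) + pc^2x^2 = (-2ap + c^2p)\,x^2:$$

the drift enters linearly through the analogue of $2\xi^T(t)P(\cdot)$, while the diffusion $z(t)$ enters quadratically through the analogue of $z^T(t)P_{11}z(t)$.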
Using (2.3), the remaining terms are estimated as follows:

$$\mathcal{L}V_2(t,x_t) \le \phi^T(t)Q^1\phi(t) - (1-\mu)\phi^T(t-h(t))Q^2\phi(t-h(t)) - \phi^T(t-h_M)Q^3\phi(t-h_M) + \phi^T(t-h_m)[-Q^1 + Q^2 + Q^3]\phi(t-h_m) + x^T(t)[h_m Q_5 + (h_M-h_m)Q_6]x(t) + f^T(x(t))[\tau^2 Q_4 + h_m^2 Q_7 + (h_M-h_m)^2 Q_8]f(x(t)) - \tau\int_{t-\tau}^{t} f^T(x(s))Q_4 f(x(s))\,ds - \int_{t-h_m}^{t} x^T(s)Q_5 x(s)\,ds - \int_{t-h_M}^{t-h_m} x^T(s)Q_6 x(s)\,ds - h_m\int_{t-h_m}^{t} f^T(x(s))Q_7 f(x(s))\,ds - (h_M-h_m)\int_{t-h_M}^{t-h_m} f^T(x(s))Q_8 f(x(s))\,ds, \qquad (3.6)$$

$$\mathcal{L}V_3(t,x_t) \le y^T(t)[h_m R_1 + (h_M-h_m)R_2 + \sigma_1 R_3 + \sigma_2 R_4]y(t) + z^T(t)[h_m Z_1 + (h_M-h_m)Z_2 + \sigma_1 Z_3 + \sigma_2 Z_4]z(t) - \int_{t-h_m}^{t} y^T(s)R_1 y(s)\,ds - \int_{t-h_M}^{t-h_m} y^T(s)R_2 y(s)\,ds - \int_{-h_m}^{0}\int_{t+\theta}^{t} y^T(s)R_3 y(s)\,ds\,d\theta - \int_{-h_M}^{-h_m}\int_{t+\theta}^{t} y^T(s)R_4 y(s)\,ds\,d\theta - \int_{t-h_m}^{t} z^T(s)Z_1 z(s)\,ds - \int_{t-h_M}^{t-h_m} z^T(s)Z_2 z(s)\,ds - \int_{-h_m}^{0}\int_{t+\theta}^{t} z^T(s)Z_3 z(s)\,ds\,d\theta - \int_{-h_M}^{-h_m}\int_{t+\theta}^{t} z^T(s)Z_4 z(s)\,ds\,d\theta. \qquad (3.7)$$

From Lemma 2.1 it follows that

$$-\tau\int_{t-\tau}^{t} f^T(x(s))Q_4 f(x(s))\,ds \le -\Big(\int_{t-\tau(t)}^{t} f(x(s))\,ds\Big)^T Q_4\Big(\int_{t-\tau(t)}^{t} f(x(s))\,ds\Big), \qquad (3.8)$$

$$-h_m\int_{t-h_m}^{t} f^T(x(s))Q_7 f(x(s))\,ds \le -\Big(\int_{t-h_m}^{t} f(x(s))\,ds\Big)^T Q_7\Big(\int_{t-h_m}^{t} f(x(s))\,ds\Big), \qquad (3.9)$$

$$-(h_M-h_m)\int_{t-h_M}^{t-h_m} f^T(x(s))Q_8 f(x(s))\,ds \le -\Big(\int_{t-h(t)}^{t-h_m} f(x(s))\,ds\Big)^T Q_8\Big(\int_{t-h(t)}^{t-h_m} f(x(s))\,ds\Big) - \Big(\int_{t-h_M}^{t-h(t)} f(x(s))\,ds\Big)^T Q_8\Big(\int_{t-h_M}^{t-h(t)} f(x(s))\,ds\Big). \qquad (3.10)$$

From condition (H),

$$f^T(x(t))Q_9 f(x(t)) \le x^T(t)KQ_9Kx(t), \qquad f^T(x(t-h(t)))Q_{10}f(x(t-h(t))) \le x^T(t-h(t))KQ_{10}Kx(t-h(t)). \qquad (3.11)$$

Set $\eta(t) = [x^T(t)\ \ x^T(t-h(t))]^T$. By the Newton–Leibniz formula [14,15],

$$a_1(t) := 2\eta^T(t)L^T\Big[x(t) - x(t-h_m) - \int_{t-h_m}^{t} y(s)\,ds - \int_{t-h_m}^{t} z(s)\,dw(s)\Big] = 0, \qquad (3.12)$$

$$a_2(t) := 2\eta^T(t)M^T\Big[x(t-h_m) - x(t-h(t)) - \int_{t-h(t)}^{t-h_m} y(s)\,ds - \int_{t-h(t)}^{t-h_m} z(s)\,dw(s)\Big] = 0, \qquad (3.13)$$

$$a_3(t) := 2\eta^T(t)J^T\Big[x(t-h(t)) - x(t-h_M) - \int_{t-h_M}^{t-h(t)} y(s)\,ds - \int_{t-h_M}^{t-h(t)} z(s)\,dw(s)\Big] = 0, \qquad (3.14)$$

$$a_4(t) := 2\eta^T(t)H^T\Big[h_m x(t) - \int_{t-h_m}^{t} x(s)\,ds - \int_{-h_m}^{0}\int_{t+\theta}^{t} y(s)\,ds\,d\theta - \int_{-h_m}^{0}\int_{t+\theta}^{t} z(s)\,dw(s)\,d\theta\Big] = 0, \qquad (3.15)$$

$$a_5(t) := 2\eta^T(t)I^T\Big[(h_M-h_m)x(t) - \int_{t-h_M}^{t-h_m} x(s)\,ds - \int_{-h_M}^{-h_m}\int_{t+\theta}^{t} y(s)\,ds\,d\theta - \int_{-h_M}^{-h_m}\int_{t+\theta}^{t} z(s)\,dw(s)\,d\theta\Big] = 0. \qquad (3.16)$$

The stochastic cross terms in (3.12)–(3.16) are bounded as

$$-2\eta^T(t)L^T\int_{t-h_m}^{t} z(s)\,dw(s) \le \eta^T(t)L^T Z_1^{-1}L\,\eta(t) + \Big(\int_{t-h_m}^{t} z(s)\,dw(s)\Big)^T Z_1\Big(\int_{t-h_m}^{t} z(s)\,dw(s)\Big), \qquad (3.17)$$

$$-2\eta^T(t)M^T\int_{t-h(t)}^{t-h_m} z(s)\,dw(s) \le \eta^T(t)M^T Z_2^{-1}M\,\eta(t) + \Big(\int_{t-h(t)}^{t-h_m} z(s)\,dw(s)\Big)^T Z_2\Big(\int_{t-h(t)}^{t-h_m} z(s)\,dw(s)\Big), \qquad (3.18)$$

$$-2\eta^T(t)J^T\int_{t-h_M}^{t-h(t)} z(s)\,dw(s) \le \eta^T(t)J^T Z_2^{-1}J\,\eta(t) + \Big(\int_{t-h_M}^{t-h(t)} z(s)\,dw(s)\Big)^T Z_2\Big(\int_{t-h_M}^{t-h(t)} z(s)\,dw(s)\Big), \qquad (3.19)$$

$$-2\eta^T(t)H^T\int_{-h_m}^{0}\int_{t+\theta}^{t} z(s)\,dw(s)\,d\theta \le h_m\,\eta^T(t)H^T Z_3^{-1}H\,\eta(t) + \int_{-h_m}^{0}\Big(\int_{t+\theta}^{t} z(s)\,dw(s)\Big)^T Z_3\Big(\int_{t+\theta}^{t} z(s)\,dw(s)\Big)d\theta, \qquad (3.20)$$

$$-2\eta^T(t)I^T\int_{-h_M}^{-h_m}\int_{t+\theta}^{t} z(s)\,dw(s)\,d\theta \le (h_M-h_m)\,\eta^T(t)I^T Z_4^{-1}I\,\eta(t) + \int_{-h_M}^{-h_m}\Big(\int_{t+\theta}^{t} z(s)\,dw(s)\Big)^T Z_4\Big(\int_{t+\theta}^{t} z(s)\,dw(s)\Big)d\theta. \qquad (3.21)$$

On the other hand, by the Ito isometry (see [28]),

$$\mathbb{E}\Big(\int_{t-h_m}^{t} z(s)\,dw(s)\Big)^T Z_1\Big(\int_{t-h_m}^{t} z(s)\,dw(s)\Big) = \mathbb{E}\int_{t-h_m}^{t} z^T(s)Z_1 z(s)\,ds, \qquad (3.22)$$

$$\mathbb{E}\Big(\int_{t-h(t)}^{t-h_m} z(s)\,dw(s)\Big)^T Z_2\Big(\int_{t-h(t)}^{t-h_m} z(s)\,dw(s)\Big) = \mathbb{E}\int_{t-h(t)}^{t-h_m} z^T(s)Z_2 z(s)\,ds, \qquad (3.23)$$

$$\mathbb{E}\Big(\int_{t-h_M}^{t-h(t)} z(s)\,dw(s)\Big)^T Z_2\Big(\int_{t-h_M}^{t-h(t)} z(s)\,dw(s)\Big) = \mathbb{E}\int_{t-h_M}^{t-h(t)} z^T(s)Z_2 z(s)\,ds, \qquad (3.24)$$

$$\mathbb{E}\int_{-h_m}^{0}\Big(\int_{t+\theta}^{t} z(s)\,dw(s)\Big)^T Z_3\Big(\int_{t+\theta}^{t} z(s)\,dw(s)\Big)d\theta = \mathbb{E}\int_{-h_m}^{0}\int_{t+\theta}^{t} z^T(s)Z_3 z(s)\,ds\,d\theta. \qquad (3.25)$$
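The identities (3.22)–(3.25) rest on the Ito isometry, which can be illustrated by a quick Monte Carlo experiment in the scalar case (the integrand $z(s)=\cos s$ and all numerical settings below are assumptions chosen for simplicity):

```python
import numpy as np

# Monte Carlo illustration of the Ito isometry used in (3.22)-(3.26):
# E[(int_0^T z(s) dw(s))^2] = int_0^T z(s)^2 ds for deterministic z.
rng = np.random.default_rng(2)
T, m, paths = 1.0, 1000, 20000
dt = T / m
t = np.linspace(0.0, T, m, endpoint=False)
z = np.cos(t)                                   # illustrative integrand
dw = rng.normal(scale=np.sqrt(dt), size=(paths, m))
lhs = np.mean((z * dw).sum(axis=1) ** 2)        # E[(int z dw)^2], sampled
rhs = (z ** 2).sum() * dt                       # int z^2 ds, Riemann sum
print(f"{lhs:.4f} ~ {rhs:.4f}")                 # the two agree for large `paths`
```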
Analogously,

$$\mathbb{E}\int_{-h_M}^{-h_m}\Big(\int_{t+\theta}^{t} z(s)\,dw(s)\Big)^T Z_4\Big(\int_{t+\theta}^{t} z(s)\,dw(s)\Big)d\theta = \mathbb{E}\int_{-h_M}^{-h_m}\int_{t+\theta}^{t} z^T(s)Z_4 z(s)\,ds\,d\theta. \qquad (3.26)$$

Substituting (3.5)–(3.26) into (3.4) and taking the mathematical expectation yields

$$\mathbb{E}\,\mathcal{L}V(t,x_t) \le \mathbb{E}\,\frac{2}{h_m^3}\int_{-h_m}^{0}\int_{t+\theta}^{t}\int_{t-h_m}^{t}\zeta^T(t,\theta,u,v)\,\Xi^1\,\zeta(t,\theta,u,v)\,dv\,du\,d\theta + \mathbb{E}\,\frac{1}{h_{Mm}\sigma_2}\int_{-h_M}^{-h_m}\int_{t+\theta}^{t}\int_{t-h(t)}^{t-h_m}\zeta^T(t,\theta,u,v)\,\Xi^2\,\zeta(t,\theta,u,v)\,dv\,du\,d\theta + \mathbb{E}\,\frac{2}{h_{Mm}\sigma_2}\int_{-h_M}^{-h_m}\int_{t+\theta}^{t}\int_{t-h_M}^{t-h(t)}\zeta^T(t,\theta,u,v)\,\Xi^3\,\zeta(t,\theta,u,v)\,dv\,du\,d\theta, \qquad (3.27)$$

where $\zeta(t,\theta,u,v)$ collects the eleven coordinates of $\Omega^k$, namely $x(t)$, $x(t-h(t))$, $x(t-h_m)$, $x(t-h_M)$, $f(x(t))$, $f(x(t-h(t)))$, $f(x(t-h_m))$, $f(x(t-h_M))$, $\int_{t-\tau(t)}^{t}f(x(s))\,ds$, $\int_{t-h_m}^{t}f(x(s))\,ds$ and $\int_{t-h_M}^{t-h_m}f(x(s))\,ds$, together with the averaged coordinates $x(u)$, $y(u)$, $z(v)$ and the stochastic-integral terms corresponding to the columns of $(\Omega^k)_{12}$. Consequently, if (3.1) holds, then

$$\mathbb{E}\,\mathcal{L}V(t,x_t) \le -\lambda\,\mathbb{E}\|x(t)\|^2, \quad \lambda = \min\{\lambda_{\min}(-\Xi^1),\ \lambda_{\min}(-\Xi^2),\ \lambda_{\min}(-\Xi^3)\} > 0,$$

which implies that system (2.7) is globally asymptotically stable in mean square. □

Theorem 3.2. Let $\alpha\in(0,1)$, let the delays $h(t)$, $\tau(t)$ satisfy condition (2.3), and let condition (H) hold. Suppose there exist matrices $P_{11}>0,\ldots,P_{55}>0$, $P_{12},\ldots,P_{45}$, $Q^i>0$ $(i=1,2,3)$, $Q_l>0$ $(l=4,\ldots,8)$, $R_k>0$, $Z_j>0$ $(k,j=1,2,3,4)$, diagonal $Q>0$, $Q_9>0$, $Q_{10}>0$ as in Theorem 3.1, appropriately dimensioned matrices $L$, $M$, $J$, $H$, $I$, and positive scalars $\varepsilon_{1j}>0$, $\varepsilon_{2j}>0$ $(j=1,2,3)$, such that $P = (P_{ij})_{5\times 5} > 0$ and the following LMIs hold:

$$\Xi'^k = \begin{bmatrix} \tilde\Omega^k & (\Omega^k)_{12} & (\tilde\Omega^k)_{13} \\ * & (\Omega^k)_{22} & (\tilde\Omega^k)_{23} \\ * & * & (\tilde\Omega^k)_{33} \end{bmatrix} < 0, \quad k = 1,2,3, \qquad (3.28)$$

where $\tilde\Omega^k = (\tilde\Omega^k_{ij})_{11\times 11}$ coincides with $\Omega^k$ except for the entries

$$\tilde\Omega^k_{11} = \Omega^k_{11} + \varepsilon_{1k}N_1^T N_1 + \varepsilon_{2k}N_5^T N_5, \quad \tilde\Omega^k_{12} = \Omega^k_{12} + \varepsilon_{2k}N_5^T N_6, \quad \tilde\Omega^k_{15} = \Omega^k_{15} - \varepsilon_{1k}N_1^T N_2 + \varepsilon_{2k}N_5^T N_7,$$
$$\tilde\Omega^k_{16} = \Omega^k_{16} - \varepsilon_{1k}N_1^T N_3 + \varepsilon_{2k}N_5^T N_8, \quad \tilde\Omega^k_{19} = \Omega^k_{19} - \varepsilon_{1k}N_1^T N_4 + \varepsilon_{2k}N_5^T N_9,$$
$$\tilde\Omega^k_{22} = \Omega^k_{22} + \varepsilon_{2k}N_6^T N_6, \quad \tilde\Omega^k_{25} = \varepsilon_{2k}N_6^T N_7, \quad \tilde\Omega^k_{26} = \Omega^k_{26} + \varepsilon_{2k}N_6^T N_8, \quad \tilde\Omega^k_{29} = \varepsilon_{2k}N_6^T N_9,$$
$$\tilde\Omega^k_{55} = \Omega^k_{55} + \varepsilon_{1k}N_2^T N_2 + \varepsilon_{2k}N_7^T N_7, \quad \tilde\Omega^k_{56} = \Omega^k_{56} + \varepsilon_{1k}N_2^T N_3 + \varepsilon_{2k}N_7^T N_8, \quad \tilde\Omega^k_{59} = \Omega^k_{59} + \varepsilon_{1k}N_2^T N_4 + \varepsilon_{2k}N_7^T N_9,$$
$$\tilde\Omega^k_{66} = \Omega^k_{66} + \varepsilon_{1k}N_3^T N_3 + \varepsilon_{2k}N_8^T N_8, \quad \tilde\Omega^k_{69} = \varepsilon_{1k}N_3^T N_4 + \varepsilon_{2k}N_8^T N_9, \quad \tilde\Omega^k_{99} = \Omega^k_{99} + \varepsilon_{1k}N_4^T N_4 + \varepsilon_{2k}N_9^T N_9,$$

the blocks $(\tilde\Omega^k)_{13}$ and $(\tilde\Omega^k)_{23}$ stack the block vectors $\Gamma^k_{1d}$, $\Gamma^k_{2d}$ defined in the proof below, i.e. $[(\tilde\Omega^k)_{13}^T\ \ (\tilde\Omega^k)_{23}^T]^T = [\Gamma^k_{1d}\ \ \Gamma^k_{2d}]$, and

$$(\tilde\Omega^k)_{33} = -\begin{bmatrix} \varepsilon_{1k}E & 0 \\ 0 & \varepsilon_{2k}E \end{bmatrix}, \quad k = 1,2,3,$$

with $v_1 = \alpha$, $v_2 = v_3 = 1-\alpha$ and all other items as in Theorem 3.1. Then the stochastic neural network with mixed delays (2.1) is globally robustly asymptotically stable in mean square.

Proof. If $A$, $W_0$, $W_1$, $W_2$, $C$, $D$, $B_0$, $B_1$ and $B_2$ in (3.1) are replaced with $A+\Delta A(t)$, $W_0+\Delta W_0(t)$, $W_1+\Delta W_1(t)$, $W_2+\Delta W_2(t)$, $C+\Delta C(t)$, $D+\Delta D(t)$, $B_0+\Delta B_0(t)$, $B_1+\Delta B_1(t)$ and $B_2+\Delta B_2(t)$, where the uncertainties are described by (2.5), then conditions (3.1)–(3.3) for the uncertain stochastic neural network with discrete interval and distributed delays (2.1) are equivalent to

$$\Xi^k + \Gamma^k_{1d}F(t)\Gamma^k_{1e} + (\Gamma^k_{1e})^T F^T(t)(\Gamma^k_{1d})^T + \Gamma^k_{2d}F(t)\Gamma^k_{2e} + (\Gamma^k_{2e})^T F^T(t)(\Gamma^k_{2d})^T < 0, \quad k = 1,2,3, \qquad (3.29)$$

where, as block columns with $11+13=24$ block rows,

$$\Gamma^1_{1d} = [v_1 P_{11}M,\ 0,\ 0,\ 0,\ v_1 QM,\ 0,\ 0,\ 0,\ 0,\ v_1 P_{14}^T M,\ v_1 P_{15}^T M,\ h_m P_{12}^T M,\ 0,\ 0,\ 0,\ 0,\ h_m R_1 M,\ \sigma_1 R_3 M,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0]^T,$$
$$\Gamma^2_{1d} = \Gamma^3_{1d} = [v_2 P_{11}M,\ 0,\ 0,\ 0,\ v_2 QM,\ 0,\ 0,\ 0,\ 0,\ v_2 P_{14}^T M,\ v_2 P_{15}^T M,\ h_{Mm} P_{13}^T M,\ 0,\ 0,\ 0,\ 0,\ h_{Mm} R_2 M,\ \sigma_2 R_4 M,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0]^T,$$
$$\Gamma^1_{2d} = [0,\ \ldots,\ 0,\ v_1 P_{11}M,\ v_1 QKM,\ 0,\ 0,\ h_m Z_1 M,\ \sigma_1 Z_3 M,\ 0,\ 0,\ 0,\ 0]^T \text{ (nonzero only in block rows 15, 16, 19, 20)},$$
$$\Gamma^2_{2d} = \Gamma^3_{2d} = [0,\ \ldots,\ 0,\ v_2 P_{11}M,\ v_2 QKM,\ 0,\ 0,\ h_{Mm} Z_2 M,\ \sigma_2 Z_4 M,\ 0,\ 0,\ 0,\ 0]^T \text{ (nonzero only in block rows 15, 16, 19, 20)},$$

and, as block rows,

$$\Gamma^i_{1e} = [-N_1\ \ 0\ \ 0\ \ 0\ \ N_2\ \ N_3\ \ 0\ \ 0\ \ N_4\ \ 0\ \cdots\ 0], \quad \Gamma^i_{2e} = [N_5\ \ N_6\ \ 0\ \ 0\ \ N_7\ \ N_8\ \ 0\ \ 0\ \ N_9\ \ 0\ \cdots\ 0], \quad i = 1,2,3.$$

By Lemma 2.2, (3.29) holds for all admissible $F(t)$ if and only if there exist positive scalars $\varepsilon_{1k}>0$, $\varepsilon_{2k}>0$ $(k=1,2,3)$ such that

$$\Xi^k + \varepsilon_{1k}^{-1}\Gamma^k_{1d}(\Gamma^k_{1d})^T + \varepsilon_{1k}(\Gamma^k_{1e})^T\Gamma^k_{1e} + \varepsilon_{2k}^{-1}\Gamma^k_{2d}(\Gamma^k_{2d})^T + \varepsilon_{2k}(\Gamma^k_{2e})^T\Gamma^k_{2e} < 0, \quad k = 1,2,3. \qquad (3.30)$$
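The passage from (3.29) through (3.30) to (3.28) is easiest to see in the scalar case (a toy computation, not part of the paper): for $q<0$ and scalars $m$, $n$,

$$q + 2\,m f(t)\,n < 0\ \ \forall\,|f(t)|\le 1 \iff q + 2|mn| < 0 \iff \exists\,\varepsilon>0:\ q + \varepsilon^{-1}m^2 + \varepsilon n^2 < 0,$$

since $\varepsilon^{-1}m^2 + \varepsilon n^2 \ge 2|mn|$ with equality at $\varepsilon = |m/n|$ (this is Lemma 2.2); the Schur complement of Lemma 2.3 then rewrites $q + \varepsilon n^2 + \varepsilon^{-1}m^2 < 0$ as the LMI

$$\begin{bmatrix} q + \varepsilon n^2 & m \\ m & -\varepsilon \end{bmatrix} < 0,$$

which is linear in $\varepsilon$, exactly as the scalars $\varepsilon_{1k}$, $\varepsilon_{2k}$ enter (3.28).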
Applying Lemma 2.3 (Schur complement), (3.30) is equivalent to (3.28) for each $k$, which completes the proof. □

Remark 2. Sufficient conditions ensuring the global robust asymptotic stability of uncertain stochastic neural networks with discrete interval and distributed delays are given in Theorems 3.1 and 3.2. In Theorems 3.1 and 3.2, the expectation of the derivative of the Lyapunov–Krasovskii functional is ultimately written as the sum of the three parts in (3.27). This treatment differs from the one used in [22], and Theorems 3.1 and 3.2 can be less conservative than the results of [22], because some important information neglected in [22], in particular the term $\int_{t-h_M}^{t-h(t)} y^T(s)R_4\,y(s)\,ds$, is fully taken into account in this paper.

Remark 3. The augmented Lyapunov–Krasovskii functional $V(t,x_t)$, as pointed out in [6,37,47], plays an important role in reducing the conservatism of the results. More specifically, by taking the states $\int_{t-h_m}^{t}x^T(s)\,ds$, $\int_{t-h_M}^{t-h_m}x^T(s)\,ds$, $\int_{t-h_m}^{t}f^T(x(s))\,ds$, $\int_{t-h_M}^{t-h_m}f^T(x(s))\,ds$ and $\int_{-h_M}^{-h_m}\int_{t+\theta}^{t}y(s)\,ds\,d\theta$ as augmented variables, the stability conditions in Theorems 3.1 and 3.2 make fuller use of the information on the state variables, which yields less conservatism.

Remark 4. In contrast to results obtained by matrix-theoretic techniques [7–9,29,31,32,52], the proposed LMI-based conditions are computationally efficient, since they can be solved numerically with the LMI toolbox in Matlab.

4. Two illustrative examples

Example 4.1. Consider the uncertain system (2.1) with the parameters

$$A = \begin{bmatrix} 0.2 & 4 \\ 0.1 & 0.3 \end{bmatrix},\ W_0 = \begin{bmatrix} 0.4 & 0.2 \\ 0.1 & 0.7 \end{bmatrix},\ W_1 = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix},\ W_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},\ C = \begin{bmatrix} 0.3 & 0 \\ 0 & 0.1 \end{bmatrix},\ D = \begin{bmatrix} 0.2 & 0.6 \\ 0.5 & 0.1 \end{bmatrix},$$
$$B_0 = \begin{bmatrix} 0.5 & 0.1 \\ 0.5 & 0 \end{bmatrix},\ B_1 = \begin{bmatrix} 0.3 & 0.6 \\ 0.2 & 0.1 \end{bmatrix},\ B_2 = \begin{bmatrix} 0.2 & 0 \\ 0 & 0.2 \end{bmatrix},\ K = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},\ M = [0.1\ \ 0.1]^T,$$
$$N_1 = N_2 = \cdots = N_7 = [1\ \ 1], \quad N_8 = N_9 = [0\ \ 0].$$

Let $\alpha = 0.1$. Using Theorem 3.2, for given $\mu = 0.6$, $h_m = 1$ and $\tau = 1.6$, it is concluded that this system is globally robustly asymptotically stable in mean square for $h_M$ up to $7.0\times 10^8$. Moreover, when $\mu = 0.6$, $h_m = 1$, $h_M = 7.0\times 10^8$ and $\tau = 1.6$, solving the LMIs (3.28) of Theorem 3.2 with the Matlab LMI Control Toolbox [1] yields the following feasible solution:

$$P_{11} = \begin{bmatrix} 0.3799 & 0.0466 \\ 0.0466 & 0.1970 \end{bmatrix},\ P_{12} = 10^{-3}\begin{bmatrix} 0.2223 & 0.3306 \\ 0.2223 & 0.3306 \end{bmatrix},\ P_{22} = \begin{bmatrix} 0.0012 & 0.0018 \\ 0.0018 & 0.0138 \end{bmatrix},\ P_{13} = 10^{-10}\begin{bmatrix} 0.0526 & 0.0914 \\ 0.0232 & 0.1938 \end{bmatrix},$$
$$P_{14} = \begin{bmatrix} 0.0022 & 0.0030 \\ 0.0024 & 0.0037 \end{bmatrix},\ P_{15} = 10^{-9}\begin{bmatrix} 0.0025 & 0.0712 \\ 0.0498 & 0.1309 \end{bmatrix},\ P_{23} = 10^{-10}\begin{bmatrix} 0.0797 & 0.0680 \\ 0.0695 & 0.5625 \end{bmatrix},\ P_{33} = 10^{-10}\begin{bmatrix} 0.0010 & 0.0002 \\ 0.0002 & 0.0011 \end{bmatrix},$$
$$P_{24} = \begin{bmatrix} 0.0026 & 0.0039 \\ 0.1080 & 0.3057 \end{bmatrix},\ P_{25} = 10^{-10}\begin{bmatrix} 0.2659 & 0.0089 \\ 0.1038 & 0.2859 \end{bmatrix},\ P_{34} = 10^{-10}\begin{bmatrix} 0.0056 & 0.2987 \\ 0.2759 & 0.2866 \end{bmatrix},\ P_{35} = 10^{-10}\begin{bmatrix} 0.2302 & 0.2704 \\ 0.2759 & 0.2866 \end{bmatrix},$$
$$P_{44} = \begin{bmatrix} 0.0201 & 0.0022 \\ 0.0022 & 0.0782 \end{bmatrix},\ P_{45} = 10^{-8}\begin{bmatrix} 0.0203 & 0.0175 \\ 0.0256 & 0.1795 \end{bmatrix},\ P_{55} = 10^{-8}\begin{bmatrix} 0.0318 & 0.0293 \\ 0.0293 & 0.2092 \end{bmatrix},$$
$$Q^1_{11} = \begin{bmatrix} 0.6240 & 0.0312 \\ 0.0312 & 0.3636 \end{bmatrix},\ Q^1_{12} = \begin{bmatrix} 0.0430 & 0.7535 \\ 0.0752 & 0.0131 \end{bmatrix},\ Q^1_{22} = \begin{bmatrix} 0.3754 & 0.0042 \\ 0.0042 & 1.4221 \end{bmatrix},$$
$$Q^2_{11} = \begin{bmatrix} 0.6090 & 0.0217 \\ 0.0217 & 0.2219 \end{bmatrix},\ Q^2_{12} = \begin{bmatrix} 0.0906 & 0.6930 \\ 0.0041 & 0.0671 \end{bmatrix},\ Q^2_{22} = \begin{bmatrix} 0.0900 & 0.1278 \\ 0.1278 & 0.8969 \end{bmatrix},$$
$$Q^3_{11} = \begin{bmatrix} 0.0078 & 0.0055 \\ 0.0055 & 0.0808 \end{bmatrix},\ Q^3_{12} = \begin{bmatrix} 0.0246 & 0.0268 \\ 0.0433 & 0.0272 \end{bmatrix},\ Q^3_{22} = \begin{bmatrix} 0.1507 & 0.0523 \\ 0.0523 & 0.2325 \end{bmatrix},$$
$$Q_4 = \begin{bmatrix} 3.0451 & 0.0523 \\ 0.0523 & 0.2325 \end{bmatrix},\ Q_5 = \begin{bmatrix} 0.0020 & 0.0074 \\ 0.0074 & 0.0518 \end{bmatrix},\ Q_6 = 10^{-8}\begin{bmatrix} 0.0042 & 0.0165 \\ 0.0165 & 0.1129 \end{bmatrix},\ Q_7 = \begin{bmatrix} 0.0076 & 0.0233 \\ 0.0233 & 0.1208 \end{bmatrix},$$
$$Q_8 = 10^{-5}\begin{bmatrix} 0.0328 & 0.0325 \\ 0.0325 & 0.2133 \end{bmatrix},\ Q_9 = \begin{bmatrix} 9.3241 & 0 \\ 0 & 9.3241 \end{bmatrix},\ Q_{10} = \begin{bmatrix} 0.8942 & 0 \\ 0 & 0.8942 \end{bmatrix},\ Q = \begin{bmatrix} 0.0236 & 0 \\ 0 & 0.0236 \end{bmatrix},$$
$$R_1 = \begin{bmatrix} 0.0011 & 0.0024 \\ 0.0024 & 0.0135 \end{bmatrix},\ R_2 = 10^{-10}\begin{bmatrix} 0.1723 & 0.1830 \\ 0.1830 & 0.9494 \end{bmatrix},\ R_3 = \begin{bmatrix} 0.0022 & 0.0060 \\ 0.0060 & 0.0252 \end{bmatrix},\ R_4 = 10^{-18}\begin{bmatrix} 0.1124 & 0.1484 \\ 0.1484 & 0.5848 \end{bmatrix},$$
$$Z_1 = \begin{bmatrix} 0.0033 & 0.0026 \\ 0.0026 & 0.0120 \end{bmatrix},\ Z_2 = 10^{-10}\begin{bmatrix} 0.0489 & 0.1260 \\ 0.1260 & 0.5449 \end{bmatrix},\ Z_3 = \begin{bmatrix} 0.0030 & 0.0071 \\ 0.0071 & 0.0283 \end{bmatrix},\ Z_4 = 10^{-18}\begin{bmatrix} 0.0661 & 0.1760 \\ 0.1760 & 0.7589 \end{bmatrix},$$
$$\varepsilon_{11} = 0.0726,\ \varepsilon_{12} = 0.0248,\ \varepsilon_{21} = 0.0075,\ \varepsilon_{22} = 0.0028,\ \varepsilon_{31} = 0.0075,\ \varepsilon_{32} = 0.0028.$$

Remark 5. Obviously, the stability criteria given in [4,5,10,11,16–18,20,23,27,29,33,34,38,41,45,47–52] cannot be used to deal with this example.
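Remark 4 notes that such conditions are checked numerically; the paper uses the Matlab LMI Control Toolbox. For readers without Matlab, the following Python sketch shows the same feasibility workflow with cvxpy on a toy Lyapunov LMI; the matrix and the tolerance are illustrative assumptions standing in for the far larger $\Xi'^k$ of Theorem 3.2:

```python
import numpy as np
import cvxpy as cp

# Toy LMI feasibility problem in the spirit of (3.1)/(3.28):
# find P = P^T > 0 with A^T P + P A < 0 for a given Hurwitz A.
# A and eps are illustrative assumptions, not data from the paper.
A = np.array([[-2.0, 0.5],
              [0.1, -3.0]])
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),
               A.T @ P + P @ A << -eps * np.eye(2)]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve(solver=cp.SCS)
print(problem.status, np.linalg.eigvalsh(P.value))  # 'optimal' and positive eigenvalues
```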
Table 1
Allowable upper bound of $h_M$ for given $h_m$ with unknown $\mu$ in Example 4.2.

Methods | $h_m = 1.0$ | $h_m = 1.5$ | $h_m = 2.0$
Li et al. [22] | 1.7647 | 2.2417 | 2.7374
Theorem 3.1 ($\alpha = 0.1$) | 1.9913 | 2.5463 | 3.0907

Table 2
Allowable upper bound of $h_M$ for given $h_m$ with unknown $\mu$ in Example 4.2 when $B_0 = B_1 = 0$.

Methods | $h_m = 1.0$ | $h_m = 1.5$ | $h_m = 2.0$
Li et al. [22] | 1.9482 | 2.3740 | 2.8722
Theorem 3.1 ($\alpha = 0.1$) | 2.0643 | 2.6338 | 3.1880
Example 4.2. Consider the following stochastic neural network with interval time-varying delay [22]:

$$dx(t) = [-Ax(t) + W_0 f(x(t)) + W_1 f(x(t-h(t)))]\,dt + [Cx(t) + Dx(t-h(t)) + B_0 f(x(t)) + B_1 f(x(t-h(t)))]\,dw(t),$$

where

$$A = \begin{bmatrix} 4 & 0 \\ 0 & 5 \end{bmatrix},\quad W_0 = \begin{bmatrix} 0.4 & 0.7 \\ 0.1 & 0 \end{bmatrix},\quad W_1 = \begin{bmatrix} 0.2 & 0.6 \\ 0.5 & 0.1 \end{bmatrix},\quad C = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix},$$
$$D = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix},\quad B_0 = \begin{bmatrix} 0 & 0.5 \\ 0.5 & 0 \end{bmatrix},\quad B_1 = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix},\quad K = \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix}.$$

For unknown $\mu$ and different given lower bounds $h_m$, the upper bounds $h_M$ guaranteeing the global asymptotic stability of this system obtained by Theorem 3.1 are given in Table 1. Table 2 lists the upper bounds $h_M$ for different values of $h_m$ with unknown $\mu$ when $B_0 = B_1 = 0$. It can be seen that Theorem 3.1 is less conservative than the criteria proposed in [22].
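To visualize the kind of behaviour Theorem 3.1 certifies, one can simulate Example 4.2 by the Euler–Maruyama method. The sketch below is illustrative only: the delay $h(t)$, the activation $f(x) = 0.5\tanh(x)$ (consistent with $K = 0.5E$), the scalar noise and the step size are assumptions, while the parameter matrices are those listed above.

```python
import numpy as np

# Euler-Maruyama simulation sketch of Example 4.2 (illustrative assumptions:
# f(x) = 0.5*tanh(x), h(t) = 1 + 0.5*sin(t), scalar Brownian motion).
rng = np.random.default_rng(3)
A  = np.array([[4.0, 0.0], [0.0, 5.0]])
W0 = np.array([[0.4, 0.7], [0.1, 0.0]])
W1 = np.array([[0.2, 0.6], [0.5, 0.1]])
C  = 0.5 * np.eye(2)
D  = 0.1 * np.eye(2)
B0 = np.array([[0.0, 0.5], [0.5, 0.0]])
B1 = 0.1 * np.eye(2)
f = lambda x: 0.5 * np.tanh(x)

dt, steps = 1e-3, 20000
hist = int(2.0 / dt)                     # history buffer covering the maximum delay
x = np.ones((hist + steps + 1, 2))       # constant initial function phi = (1, 1)
for k in range(hist, hist + steps):
    t = (k - hist) * dt
    kd = k - int((1.0 + 0.5 * np.sin(t)) / dt)     # index of x(t - h(t))
    drift = -A @ x[k] + W0 @ f(x[k]) + W1 @ f(x[kd])
    diff = C @ x[k] + D @ x[kd] + B0 @ f(x[k]) + B1 @ f(x[kd])
    x[k + 1] = x[k] + drift * dt + diff * rng.normal(scale=np.sqrt(dt))
print(np.abs(x[-100:]).max())            # trajectories decay toward zero
```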
5. Conclusion

In this paper, the global robust asymptotic stability in mean square of uncertain stochastic neural networks with discrete interval and distributed delays has been considered. By constructing an augmented Lyapunov–Krasovskii functional and using free-weighting matrices, LMI-based sufficient conditions ensuring the global robust asymptotic stability in mean square of such systems have been derived; they are less conservative than some existing results. Moreover, in contrast to the results in [7–9,29,31,32,52], the proposed LMI-based conditions are computationally efficient, since they can be solved numerically with the LMI toolbox in Matlab. Finally, two illustrative examples have been provided to show the effectiveness of the results.
Acknowledgement

The author would like to thank the anonymous referees for their helpful comments and suggestions, which greatly improved this manuscript, and the editors for their careful reading of the paper.
References [1] S. Boyd, L.E. Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in Systems and Control Theory, SIAM, Philadelphia, 1994. [2] P. Balasubramaniam, R. Rakkiyappan, Delay-dependent robust stability analysis of uncertain stochastic neural networks with discrete interval and distributed time-varying delays, Neurocomputing 72 (2009) 3231–3237. [3] P. Balasubramaniam, S. Lakshmanan, R. Rakkiyappan, Delay-interval dependent robust stability criteria for stochastic neural networks with linear fractional uncertainties, Neurocomputing 72 (2009) 3675–3682. [4] H. Chen, P. Hu, Further results on delay-dependent exponential stability for uncertain stochastic neural networks with mixed delays and Markovian jump parameters, Neural Comput. Appl., http://dx.doi.org/10.1007/s00521-012-0810-z, in press. [5] H. Chen, Y. Zhang, P. Hu, Novel delay-dependent robust stability criteria for neutral stochastic delayed neural networks, Neurocomputing 73 (2010) 2554–2561. [6] J. Chen, J. Sun, G. Liu, D. Rees, New delay-dependent stability criteria for neural networks with time-varying interval delay, Phys. Lett. A 374 (2010) 4397–4405. [7] J. Cao, A set of stability criteria for delayed cellular neural networks, IEEE Trans. Circuits Syst. I 48 (2001) 494–498. [8] J. Cao, Global stability conditions for delayed CNNs, IEEE Trans. Circuits Syst. I 48 (2001) 1330–1333. [9] J. Cao, D. Zhou, Stability analysis of delayed cellular neural networks, Neural Networks 11 (1998) 1601–1605. [10] W. Chen, X. Lu, Mean square exponential stability of uncertain stochastic delayed neural networks, Phys. Lett. A 372 (2008) 1061–1069. [11] F. Deng, M. Hua, X. Liu, Y. Peng, J. Fei, Robust delay-dependent exponential stability for uncertain stochastic neural networks with mixed delays, Neurocomputing 74 (2011) 1503–1509. [12] K. Gu, V. Kharitonov, J. Chen, Stability of Time-delay Systems, Birkhauser, Boston, 2003. [13] K. Gu, An integral inequality in the stability problem of time-delay systems, in: Proceedings of 39th IEEE Conference on Decision and Control, Sydney, Australia, vol. 3, 2000, pp. 2805–2810. [14] Y. He, G. Liu, D. Rees, New delay-dependent stability criteria for neural networks with time-varying delay, IEEE Trans. Neural Networks 18 (2007) 310–314. [15] Y. He, G. Liu, D. Rees, M. Wu, Stability analysis for neural networks with timevarying interval delay, IEEE Trans. Neural Networks 18 (2007) 1850–1854. [16] C. Huang, J. Cao, Convergence dynamics of stochastic Cohen–Grossberg neural networks with unbounded distributed delays, IEEE Trans. Neural Networks 22 (2011) 561–572. [17] H. Huang, G. Feng, Delay-dependent stability for uncertain stochastic neural networks with time-varying delay, Physica A 381 (2007) 93–103. [18] M. Hua, X. Liu, F. Deng, J. Fei, New results on robust exponential stability of uncertain stochastic neural networks with mixed time-varying delays, Neural Process. Lett. 32 (2010) 219–233. [19] O. Kwon, J. Park, Exponential stability for uncertain cellular neural networks with discrete and distributed time-varying delays, Appl. Math. Comput. 203 (2008) 813–823. [20] O. Kwon, S. Lee, J. Park, Improved delay-dependent exponential stability for uncertain stochastic neural networks with time-varying delays, Phys. Lett. A 374 (2010) 1232–1241. [21] H. Karimi, H. Gao, New delay-dependent exponential H1 synchronization for uncertain neural networks with mixed time delays, IEEE Trans. Syst. Man Cybern. B 40 (2010) 173–185. [22] H. Li, K. Cheung, J. Lam, H. 
Gao, Robust stability for interval stochastic neural networks with time-varying discrete and distributed delay, Differ. Equ. Dyn. Syst. 19 (2011) 97–118. [23] H. Li, B. Chen, Q. Zhou, S. Fang, Robust exponential stability for uncertain stochastic neural networks with discrete and distributed time-varying delays, Phys. Lett. A 19 (2008) 3385–3394. [24] T. Li, Q. Luo, C. Sun, B. Zhang, Exponential stability of recurrent neural networks with time-varying discrete and distributed delays, Nonlinear Anal. Real World Appl. 10 (2009) 2581–2589. [25] K. Liu, H. Zhang, An improved global exponential stability criterion for delayed neural networks, Nonlinear Anal. Real World Appl. 10 (2009) 2613–2619. [26] X. Liao, G. Chen, E. Sanchez, Delay-dependent exponential stability analysis of delayed neural networks: an LMI approach, Neural Networks 15 (2002) 855–866. [27] L. Ma, F. Da, Mean-square exponential stability of stochastic Hopfield neural networks with time-varying discrete and distributed delays, Phys. Lett. A 373 (2009) 2154–2161. [28] X. Mao, Stochastic Differential Equations with their Applications, Horwood, Chichester, 1997. [29] X. Meng, M. Tian, S. Hu, Stability analysis of stochastic recurrent neural networks with unbounded time-varying delays, Neurocomputing 74 (2011) 949–953. [30] R. Rakkiyappan, P. Balasubramaniam, S. Lakshmanan, Robust stability results for uncertain stochastic neural networks with discrete interval and distributed time-varying delays, Phys. Lett. A 372 (2008) 5290–5298. [31] R. Samli, S. Arik, New results for global stability of a class of neutral-type neural networks with time-delays, Appl. Math. Comput. 210 (2009) 564–570.
[32] S. Senan, S. Arik, Global robust stability of bidirectional associative memory neural networks with multiple time delays, IEEE Trans. Syst. Man Cybern. Part B 37 (2007) 1375–1381. [33] Q. Song, Z. Wang, Stability analysis of impulsive stochastic Cohen–Grossberg neural networks with both time-varying and continuously distributed delays, Physica A 387 (2008) 3314–3326. [34] Z. Shu, J. Lam, Global exponential estimates of stochastic interval neural networks with discrete and distributed delays, Neurocomputing 71 (2008) 2950–2963. [35] J. Shao, T. Huang, S. Zhou, Global asymptotic robust stability and global exponential stability of neural networks with time-varying delays, Neural Process. Lett. 30 (2009) 229–241. [36] J. Tian, X. Zhou, Improved asymptotic stability criteria for neural networks with interval time-varying delay, Expert Syst. Appl. 37 (2010) 7521–7524. [37] J. Tian, S. Zhong, Improved delay-dependent stability criteria for neural networks with time-varying delay, Appl. Math. Comput. 217 (2011) 10278–10288. [38] J. Tian, S. Zhong, Stability analysis of stochastic recurrent neural networks with unbounded time-varying delays, Neurocomputing 74 (2011) 949–953. [39] G. Wang, J. Cao, J. Liang, Exponential stability in mean square for stochastic neural networks with mixed time-delays and Markovian jumping parameters, Nonlinear Dyn. 57 (2009) 209–218. [40] X. Wu, Y. Wang, L. Huang, Y. Zuo, Robust exponential stability criterion for uncertain neural networks with discontinuous activation functions and timevarying delay, Neurocomputing 73 (2010) 1265–1271. [41] Y. Wu, Y. Wu, Y. Chen, Mean square exponential stability of uncertain stochastic neural networks with time-varying delay, Neurocomputing 72 (2009) 2379–2384. [42] S. Xu, Y. Chu, J. Yu, New results on global exponential stability of recurrent neural networks with time-varying delays, Phys. Lett. A 352 (2006) 371–379. [43] S. Xu, J. Lam, Improved delay-dependent stability criteria for time-delay systems, IEEE Trans. Autom. Control 50 (2005) 587–594. [44] S. Xu, J. Lam, A new approach to exponential stability analysis of neural networks with time-varying delays, Neural Networks 19 (2006) 76–83. [45] J. Yu, K. Zhang, S. Fei, Further results on mean square exponential stability of uncertain stochastic delayed neural networks, Commun. Nonlinear Sci. Numer. Simulat. 14 (2009) 1582–1589.
[46] R. Yang, Z. Zhang, P. Shi, Exponential stability on stochastic neural networks with discrete interval and distributed delays, IEEE Trans. Neural Networks 21 (2010) 169–175. [47] B. Zhang, S. Xu, G. Zong, Y. Zou, Delay-dependent exponential stability for uncertain stochastic Hopfield neural networks with time-varying delays, IEEE Trans. Circuits Syst. I Reg. Pap. 56 (2009) 1241–1247. [48] J. Zhang, P. Shi, J. Qiu, H. Yang, A new criterion for exponential stability of uncertain stochastic neural networks with mixed delays, Math. Comput. Modelling 47 (2008) 1042–1051. [49] Q. Zhu, J. Cao, Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays, IEEE Trans. Syst. Man Cybern. B 41 (2011) 341–353. [50] Q. Zhu, J. Cao, pth moment exponential synchronization for stochastic delayed Cohen–Grossberg neural networks with Markovian switching, Nonlinear Dyn. 67 (2012) 829–845. [51] Q. Zhu, J. Cao, Robust exponential stability of Markovian jump impulsive stochastic Cohen–Grossberg neural networks with mixed time delays, IEEE Trans. Neural Networks 21 (2010) 1314–1325. [52] S. Zhu, Y. Shen, L. Liu, Exponential stability of uncertain stochastic neural networks with Markovian switching, Neural Process. Lett. 32 (2010) 293–309.
Huabin Chen was born in Hubei Province, China. He received the Ph.D. degree in 2009 from the School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan, Hubei Province, China. Since July 2009, he has been with the Department of Mathematics, School of Science, Nanchang University, Nanchang, Jiangxi Province, China. His current research interests include time-delay systems, stochastic systems and their applications.