Neurocomputing 72 (2009) 3357–3365

Contents lists available at ScienceDirect

Neurocomputing journal homepage: www.elsevier.com/locate/neucom

Letters

Exponential stability of hybrid stochastic neural networks with mixed time delays and nonlinearity

Wuneng Zhou a,b, Hongqian Lu a,d,*, Chunmei Duan c

a College of Information Science and Technology, Donghua University, Shanghai 201620, China
b Engineering Research Center of Digitized Textile & Fashion Technology, Ministry of Education, Donghua University, Shanghai 201620, China
c School of Management and Economics, Shandong Normal University, Shandong 250014, China
d School of Electronic Information and Control Engineering, Shandong Institute of Light Industry, Shandong 250353, China
Article history: Received 3 December 2008; received in revised form 13 April 2009; accepted 24 April 2009. Communicated by Z. Wang. Available online 10 May 2009.

Abstract: This paper is concerned with the problem of robust exponential stability for a class of hybrid stochastic neural networks with mixed time-delays and Markovian jumping parameters. Free-weighting matrices are employed to express the relationships among the terms in the Leibniz–Newton formula. Based on these relationships, a linear matrix inequality (LMI) approach is developed to establish the desired sufficient conditions for the mixed time-delay neural networks with Markovian jumping parameters. Finally, two simulation examples are provided to demonstrate the effectiveness of the developed results.
© 2009 Published by Elsevier B.V.

Keywords: Neural networks; Uncertain systems; Stochastic systems; Mixed time-delays; Exponential stability
1. Introduction

Neural networks (cellular neural networks, Hopfield neural networks and bi-directional associative memory networks) have been intensively studied over the past few decades and have found applications in a variety of areas, such as image processing, pattern recognition, associative memory and optimization problems [1–3]. In reality, time-delay systems are frequently encountered in various areas, e.g. in neural networks, where a time delay is often a source of instability and oscillation. Recently, both delay-independent and delay-dependent sufficient conditions have been proposed to verify the asymptotic or exponential stability of delayed neural networks, see e.g. [4–10]. On the other hand, stochastic modeling has come to play an important role in many real systems [11,12], as well as in neural networks. Neural networks have finite modes, which may jump from one to another at different times, and it has recently been shown in [13,14] that the jumping between different neural network modes can be governed by a Markov chain. Furthermore, in real nervous systems, synaptic transmission is a noisy process
$ This work was supported by the National "863" Key Program of China (2008AA042902).
* Corresponding author. E-mail addresses: [email protected] (W. Zhou), [email protected] (H. Lu).
doi:10.1016/j.neucom.2009.04.012
brought on by random fluctuations from the release of neurotransmitters and other probabilistic causes. It is also known that a neural network can be stabilized or destabilized by certain stochastic inputs [15]. Hence, the stability analysis problem for stochastic neural networks is increasingly significant, and some results related to this problem have recently been published, see e.g. [15–17]. To the best of the authors' knowledge, however, the robust exponential stability analysis problem for uncertain stochastic neural networks with mixed time-delays and Markovian jumping parameters has not yet been fully investigated and remains an open problem. In this paper, we study the global exponential stability problem for a class of hybrid stochastic neural networks with mixed time-delays and Markovian jumping parameters, where the mixed delays comprise discrete and distributed time-delays, the parameter uncertainties are norm-bounded, and the neural networks are subjected to stochastic disturbances described in terms of a Brownian motion. By utilizing a Lyapunov–Krasovskii functional candidate and the well-known S-procedure, we convert the addressed stability analysis problem into a convex optimization problem. In this letter, the free-weighting-matrix approach is employed to derive a linear matrix inequality (LMI) based delay-dependent exponential stability criterion for neural networks with mixed time-delays and Markovian jumping parameters. Note that LMIs can be easily solved by using the Matlab LMI toolbox, and no tuning of parameters is required.
Numerical examples demonstrate the effectiveness of this method.

Notation: The notation in this paper is quite standard. $\mathbb{R}^n$ and $\mathbb{R}^{n\times m}$ denote, respectively, the $n$-dimensional Euclidean space and the set of all $n\times m$ real matrices. The superscript "$T$" denotes the transpose, and the notation $X\ge Y$ (respectively, $X>Y$), where $X$ and $Y$ are symmetric matrices, means that $X-Y$ is positive semi-definite (respectively, positive definite). $I$ is the identity matrix with compatible dimension. Let $h>0$ and let $C([-h,0];\mathbb{R}^n)$ denote the family of continuous functions $\varphi$ from $[-h,0]$ to $\mathbb{R}^n$ with the norm $\|\varphi\|=\sup_{-h\le\theta\le 0}|\varphi(\theta)|$, where $|\cdot|$ is the Euclidean norm in $\mathbb{R}^n$. For a matrix $A$, we denote by $|A|$ its operator norm, i.e., $|A|=\sup\{|Ax| : |x|=1\}=\sqrt{\lambda_{\max}(A^TA)}$, where $\lambda_{\max}(\cdot)$ (respectively, $\lambda_{\min}(\cdot)$) means the largest (respectively, smallest) eigenvalue of $A$. $l^2[0,\infty)$ is the space of square-integrable vector functions. Moreover, let $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},P)$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\ge 0}$ satisfying the usual conditions (i.e., the filtration contains all $P$-null sets and is right continuous). Denote by $L^p_{\mathcal{F}_0}([-h,0];\mathbb{R}^n)$ the family of all $\mathcal{F}_0$-measurable $C([-h,0];\mathbb{R}^n)$-valued random variables $\xi=\{\xi(\theta) : -h\le\theta\le 0\}$ such that $\sup_{-h\le\theta\le 0}E|\xi(\theta)|^p<\infty$, where $E\{\cdot\}$ stands for the mathematical expectation operator with respect to the given probability measure $P$. Sometimes the arguments of a function will be omitted in the analysis when no confusion can arise. Finally, we use the symbol $\operatorname{maddt}(X)$ to represent $X+X^T$.

2. Problem formulation

In this letter, the neural network with mixed time-delays is described as follows:
$$\dot{u}(t) = -Au(t) + W_0 g_0(u(t)) + W_1 g_1(u(t-h)) + W_2\int_{t-\tau}^{t} g_2(u(s))\,ds + V, \qquad (1)$$
where $u(t)=[u_1(t),u_2(t),\dots,u_n(t)]^T\in\mathbb{R}^n$ is the state vector associated with the $n$ neurons, and the diagonal matrix $A=\operatorname{diag}(a_1,a_2,\dots,a_n)$ has positive entries $a_k>0$. $W_0=(w^0_{ij})_{n\times n}$, $W_1=(w^1_{ij})_{n\times n}$ and $W_2=(w^2_{ij})_{n\times n}$ are, respectively, the connection weight matrix, the discretely delayed connection weight matrix, and the distributively delayed connection weight matrix. $g_k(u(t))=[g_{k1}(u_1),g_{k2}(u_2),\dots,g_{kn}(u_n)]^T$ ($k=0,1,2$) denotes the neuron activation function with $g_k(0)=0$, and $V=[V_1,V_2,\dots,V_n]^T$ is a constant external input vector. The scalar $h>0$, which may be unknown, denotes the discrete time delay, while the scalar $\tau>0$ is the known distributed time delay.

Assumption 1. The neuron activation functions $g_k(\cdot)$ in (1) are bounded and satisfy the following Lipschitz condition:
$$|g_k(x)-g_k(y)| \le |G_k(x-y)|, \quad \forall x,y\in\mathbb{R}^n\ (k=0,1,2), \qquad (2)$$
where $G_k\in\mathbb{R}^{n\times n}$ are known constant matrices.

Remark 1. In this letter, none of the activation functions is required to be continuous, differentiable or monotonically increasing. Note that activation functions of the type in (2) have been used in many papers, see [10,13,15–17].

Let $u^*$ be the equilibrium point of (1). For the purpose of simplicity, we shift the intended equilibrium $u^*$ to the origin by letting $x=u-u^*$, and then the system (1) can be transformed into
$$\dot{x}(t) = -Ax(t) + W_0 l_0(x(t)) + W_1 l_1(x(t-h)) + W_2\int_{t-\tau}^{t} l_2(x(s))\,ds, \qquad (3)$$
where $x(t)=[x_1(t),x_2(t),\dots,x_n(t)]^T\in\mathbb{R}^n$ is the state vector of the transformed system. It follows from (2) that the transformed neuron activation functions $l_k(x)=g_k(x+u^*)-g_k(u^*)$ ($k=0,1,2$) satisfy
$$|l_k(x)| \le |G_k x|, \qquad (4)$$
where $G_k\in\mathbb{R}^{n\times n}$ ($k=0,1,2$) are specified in (2).

Based on the model (3), we are in a position to introduce the hybrid stochastic neural networks with mixed time delays and nonlinearity. Let $\{r(t),t\ge 0\}$ be a right-continuous Markov process on the probability space which takes values in the finite space $S=\{1,2,\dots,N\}$ with generator $\Gamma=(\pi_{ij})$ ($i,j\in S$) given by
$$P\{r(t+\Delta)=j \mid r(t)=i\} = \begin{cases} \pi_{ij}\Delta + o(\Delta) & \text{if } i\ne j,\\ 1+\pi_{ii}\Delta + o(\Delta) & \text{if } i=j,\end{cases}$$
where $\Delta>0$ and $\lim_{\Delta\to 0} o(\Delta)/\Delta = 0$. Here $\pi_{ij}\ge 0$ is the transition rate from $i$ to $j$ if $i\ne j$, and $\pi_{ii}=-\sum_{j\ne i}\pi_{ij}$.

We consider the following hybrid stochastic neural network with mixed time delays and nonlinearity, which is actually a modification of (3):
$$dx(t) = \Big[-(A(r(t))+\Delta A(r(t)))x(t) + (W_0(r(t))+\Delta W_0(r(t)))l_0(x(t)) + (W_1(r(t))+\Delta W_1(r(t)))l_1(x(t-h)) + (W_2(r(t))+\Delta W_2(r(t)))\int_{t-\tau}^{t} l_2(x(s))\,ds\Big]dt + \sigma(t,x(t),x(t-h),r(t))\,d\omega(t). \qquad (5)$$

For notational convenience, we rewrite (5) as
$$dx(t) = y(t,i)\,dt + \sigma(t,x(t),x(t-h),r(t))\,d\omega(t), \qquad (6)$$
where
$$y(t,i) = -(A(r(t))+\Delta A(r(t)))x(t) + (W_0(r(t))+\Delta W_0(r(t)))l_0(x(t)) + (W_1(r(t))+\Delta W_1(r(t)))l_1(x(t-h)) + (W_2(r(t))+\Delta W_2(r(t)))\int_{t-\tau}^{t} l_2(x(s))\,ds,$$
and $\omega(t)=[\omega_1(t),\omega_2(t),\dots,\omega_m(t)]^T\in\mathbb{R}^m$ is a Brownian motion defined on $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},P)$. Here,
$$\Delta A(r(t)) = M_A(r(t))F(t,r(t))N_A(r(t)), \qquad \Delta W_k(r(t)) = M_k(r(t))F(t,r(t))N_k(r(t)), \quad k=0,1,2, \qquad (7)$$
where $\Delta A(r(t))$ is a diagonal matrix, and $M_A(r(t))$, $N_A(r(t))$, $M_k(r(t))$, $N_k(r(t))$ ($k=0,1,2$) are known real constant matrices with appropriate dimensions at mode $r(t)$. The matrix $F(t,r(t))$, which may be time-varying, is unknown and satisfies
$$F^T(t,r(t))F(t,r(t)) \le I, \quad \forall t\ge 0,\ r(t)=i\in S. \qquad (8)$$

Assume that $\sigma:\mathbb{R}_+\times\mathbb{R}^n\times\mathbb{R}^n\times S\to\mathbb{R}^{n\times m}$ is locally Lipschitz continuous and satisfies the linear growth condition [16]. Moreover, $\sigma$ satisfies
$$\operatorname{trace}\big[\sigma^T(t,x(t),x(t-h),r(t))\,\sigma(t,x(t),x(t-h),r(t))\big] \le |S_{1,r(t)}x(t)|^2 + |S_{2,r(t)}x(t-h)|^2, \qquad (9)$$
where $S_{1i}$ and $S_{2i}$ are known constant matrices with appropriate dimensions.

Observe the system (5) and let $x(t;\xi)$ denote the state trajectory from the initial data $x(\theta)=\xi(\theta)$ on $-h\le\theta\le 0$ in $L^2_{\mathcal{F}_0}([-h,0];\mathbb{R}^n)$. Clearly, the system (5) admits an equilibrium point (trivial solution) $x(t;0)\equiv 0$ corresponding to the initial data $\xi=0$. For all $\delta\in[-\bar d,0]$ with $\bar d=\max\{\tau,h\}$, suppose that there exists $\varpi>0$ such that
$$|x(t+\delta)| \le \varpi|x(t)|. \qquad (10)$$
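To make the switching mechanism concrete, the following is a minimal simulation sketch (our illustration, not from the paper) of a right-continuous Markov process with a given generator $\Gamma$, using the standard construction: in mode $i$ the process waits an exponential holding time with rate $-\pi_{ii}$ and then jumps to $j\ne i$ with probability $\pi_{ij}/(-\pi_{ii})$. All names are illustrative.

```python
import numpy as np

def simulate_markov_chain(Gamma, T, i0=0, rng=None):
    """Sample one path of a right-continuous Markov chain on {0,...,N-1}
    with generator Gamma (off-diagonal rates pi_ij >= 0, rows summing to
    zero) over [0, T]; returns the jump times and the visited modes."""
    rng = np.random.default_rng() if rng is None else rng
    t, i = 0.0, i0
    times, modes = [t], [i]
    while True:
        rate = -Gamma[i, i]                 # total exit rate of mode i
        if rate <= 0:                       # absorbing mode: stay forever
            break
        t += rng.exponential(1.0 / rate)    # holding time ~ Exp(rate)
        if t >= T:
            break
        jump = Gamma[i].copy()
        jump[i] = 0.0
        i = rng.choice(len(jump), p=jump / rate)  # jump w.p. pi_ij / rate
        times.append(t)
        modes.append(i)
    return np.array(times), np.array(modes)

# e.g., the two-mode generator used later in Example 1:
Gamma = np.array([[-0.12, 0.12],
                  [ 0.11, -0.11]])
times, modes = simulate_markov_chain(Gamma, T=50.0)
```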
Recall that the Markov process $\{r(t),t\ge 0\}$ takes values in the finite set $S=\{1,2,\dots,N\}$. For the sake of simplicity, we denote
$$A(i)=A_i,\quad W_k(i)=W_{ki},\quad M_A(i)=M_{Ai},\quad N_A(i)=N_{Ai},\quad M_k(i)=M_{ki},\quad N_k(i)=N_{ki},\quad S_1(i)=S_{1i},\quad S_2(i)=S_{2i}. \qquad (11)$$
Note that the network mode we work on in what follows is $r(t)=i$, $\forall i\in S$.

Remark 2. The parameter uncertainty structure in (7), (8) has been widely used in the literature addressing problems of robust control systems and neural networks, see [14,16,17] and the references therein.

Remark 3. The condition (9) imposed on the stochastic disturbance term $\sigma(t,x(t),x(t-h),r(t))$ has been exploited in recent papers dealing with stochastic neural networks [18]. However, Markovian jumping parameters were not considered in [18].

The following stability concepts are needed in this paper.

Definition 1. For the system (5) and every $\xi\in L^2_{\mathcal{F}_0}([-h,0];\mathbb{R}^n)$, the equilibrium point is asymptotically stable in the mean square if, for every network mode,
$$\lim_{t\to\infty} E|x(t;\xi)|^2 = 0,$$
and is globally exponentially stable in the mean square if, for every network mode, there exist scalars $\alpha>0$ and $\beta>0$ such that
$$E|x(t;\xi)|^2 \le \alpha e^{-\beta t}\sup_{-h\le\theta\le 0}E|\xi(\theta)|^2.$$

The main purpose of the rest of this letter is to establish LMI-based stability criteria under which the system (5) is exponentially stable in the mean square.
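Before proceeding, a standard scalar example (ours, not taken from the paper) makes Definition 1 concrete and also illustrates the stabilizing or destabilizing role of stochastic inputs mentioned in the introduction:

```latex
% Scalar linear Ito equation (no delay, no switching), with a > 0:
%   dx(t) = -a x(t) dt + c x(t) d\omega(t).
% Ito's formula applied to x^2 gives
%   d x^2(t) = (-2a + c^2) x^2(t) dt + 2c x^2(t) d\omega(t),
% so m(t) := E|x(t)|^2 satisfies \dot m(t) = (-2a + c^2) m(t), i.e.
\begin{equation*}
  E|x(t)|^2 = e^{-(2a - c^2)t}\,|x(0)|^2 ,
\end{equation*}
% which meets Definition 1 with \alpha = 1 and \beta = 2a - c^2 precisely
% when c^2 < 2a: a sufficiently strong noise intensity c destroys the
% exponential mean-square stability of the deterministic part.
```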
3. Main results and proofs

We give the following lemmas, which will be frequently used in the proofs of our main results.

Lemma 1 (Wang et al. [16]). Let $x\in\mathbb{R}^n$, $y\in\mathbb{R}^n$ and $\epsilon>0$. Then we have
$$x^Ty + y^Tx \le \epsilon x^Tx + \epsilon^{-1}y^Ty.$$

Lemma 2 (Xie [20]). Let $M=M^T$, $H$ and $E$ be real matrices of appropriate dimensions, with $F$ satisfying $F^TF\le I$. Then
$$M + HFE + E^TF^TH^T < 0$$
if and only if there exists a positive scalar $\epsilon>0$ such that
$$M + \epsilon^{-1}HH^T + \epsilon E^TE < 0,$$
or equivalently
$$\begin{bmatrix} M & H & \epsilon E^T\\ * & -\epsilon I & 0\\ * & * & -\epsilon I \end{bmatrix} < 0.$$

Lemma 3 (Gu [19]). For any positive definite matrix $M>0$, scalar $\gamma>0$, and vector function $\omega:[0,\gamma]\to\mathbb{R}^n$ such that the integrations concerned are well defined, the following inequality holds:
$$\left(\int_0^\gamma \omega(s)\,ds\right)^T M\left(\int_0^\gamma \omega(s)\,ds\right) \le \gamma\int_0^\gamma \omega^T(s)M\,\omega(s)\,ds.$$

Firstly, we consider the uncertainty-free case, that is, the case in which there are no parameter uncertainties.

Theorem 1. The neural network (5) with $F(t,r(t))=0$ is globally exponentially stable in the mean square if, $\forall i\in S$, there exist positive scalars $\rho_i>0$, $\epsilon_{ki}>0$ ($k=1,2,\dots,8$), positive definite matrices $P_i$ ($i=1,2,\dots,N$), $Q_1$, $Q_2$, $S_1$, $S_2$, and matrices
$$H_i=[H_{1i}\ H_{2i}\ H_{3i}\ H_{4i}],\qquad R_i=[R_{1i}\ R_{2i}\ R_{3i}\ R_{4i}],\qquad L_i=[L_{1i}\ L_{2i}\ L_{3i}\ L_{4i}],$$
$$X_i^T=X_i=\begin{bmatrix} X_{11i} & X_{12i} & X_{13i} & X_{14i}\\ * & X_{22i} & X_{23i} & X_{24i}\\ * & * & X_{33i} & X_{34i}\\ * & * & * & X_{44i} \end{bmatrix},\qquad Y_i^T=Y_i=\begin{bmatrix} Y_{11i} & Y_{12i} & Y_{13i} & Y_{14i}\\ * & Y_{22i} & Y_{23i} & Y_{24i}\\ * & * & Y_{33i} & Y_{34i}\\ * & * & * & Y_{44i} \end{bmatrix},$$
such that the following linear matrix inequalities hold:
$$P_i \le \rho_i I, \qquad (12a)$$
$$\Phi_i=\begin{bmatrix}
\Phi_{11i} & \Phi_{12i} & \Phi_{13i} & \Phi_{14i} & P_iW_{0i} & P_iW_{1i} & P_iW_{2i} & H_{1i}^T & L_{1i}^T & R_{1i}^TW_{0i} & R_{1i}^TW_{1i} & R_{1i}^TW_{2i}\\
* & \Phi_{22i} & \Phi_{23i} & \Phi_{24i} & 0 & 0 & 0 & H_{2i}^T & L_{2i}^T & R_{2i}^TW_{0i} & R_{2i}^TW_{1i} & R_{2i}^TW_{2i}\\
* & * & \Phi_{33i} & \Phi_{34i} & 0 & 0 & 0 & H_{3i}^T & L_{3i}^T & R_{3i}^TW_{0i} & R_{3i}^TW_{1i} & R_{3i}^TW_{2i}\\
* & * & * & \Phi_{44i} & 0 & 0 & 0 & H_{4i}^T & L_{4i}^T & R_{4i}^TW_{0i} & R_{4i}^TW_{1i} & R_{4i}^TW_{2i}\\
* & * & * & * & -\epsilon_{1i}I & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
* & * & * & * & * & -\epsilon_{2i}I & 0 & 0 & 0 & 0 & 0 & 0\\
* & * & * & * & * & * & -\epsilon_{3i}I & 0 & 0 & 0 & 0 & 0\\
* & * & * & * & * & * & * & -\epsilon_{4i}I & 0 & 0 & 0 & 0\\
* & * & * & * & * & * & * & * & -\epsilon_{5i}I & 0 & 0 & 0\\
* & * & * & * & * & * & * & * & * & -\epsilon_{6i}I & 0 & 0\\
* & * & * & * & * & * & * & * & * & * & -\epsilon_{7i}I & 0\\
* & * & * & * & * & * & * & * & * & * & * & -\epsilon_{8i}I
\end{bmatrix} < 0, \qquad (12b)$$
$$\Psi_i=\begin{bmatrix} X_{11i}+Q_2-\epsilon_{3i}\tau G_2^TG_2-\epsilon_{8i}\tau G_2^TG_2 & X_{12i} & X_{13i} & X_{14i} & H_{1i}^T\\ * & X_{22i} & X_{23i} & X_{24i} & H_{2i}^T\\ * & * & X_{33i} & X_{34i} & H_{3i}^T\\ * & * & * & X_{44i} & H_{4i}^T\\ * & * & * & * & S_1 \end{bmatrix} \ge 0, \qquad (12c)$$
$$\Omega_i=\begin{bmatrix} Y_{11i} & Y_{12i} & Y_{13i} & Y_{14i} & L_{1i}^T\\ * & Y_{22i} & Y_{23i} & Y_{24i} & L_{2i}^T\\ * & * & Y_{33i} & Y_{34i} & L_{3i}^T\\ * & * & * & Y_{44i} & L_{4i}^T\\ * & * & * & * & S_2 \end{bmatrix} \ge 0, \qquad (12d)$$
where
$$\Phi_{11i} = -P_iA_i - A_i^TP_i + Q_1 + \tau Q_2 + \sum_{j=1}^{N}\pi_{ij}P_j + \epsilon_{1i}G_0^TG_0 + \rho_iS_{1i}^TS_{1i} + \epsilon_{6i}G_0^TG_0 + \epsilon_{4i}\tau S_{1i}^TS_{1i} + \epsilon_{5i}hS_{1i}^TS_{1i} + H_{1i}^T + H_{1i} + L_{1i}^T + L_{1i} - R_{1i}^TA_i - A_i^TR_{1i} + \tau X_{11i} + hY_{11i},$$
$$\Phi_{12i} = -H_{1i}^T + H_{2i} + L_{2i} - A_i^TR_{2i} + \tau X_{12i} + hY_{12i},$$
$$\Phi_{13i} = H_{3i} - L_{1i}^T + L_{3i} - A_i^TR_{3i} + \tau X_{13i} + hY_{13i},$$
$$\Phi_{14i} = H_{4i} + L_{4i} - R_{1i}^T - A_i^TR_{4i} + \tau X_{14i} + hY_{14i},$$
$$\Phi_{22i} = -H_{2i}^T - H_{2i} + \tau X_{22i} + hY_{22i},$$
$$\Phi_{23i} = -H_{3i} - L_{2i}^T + \tau X_{23i} + hY_{23i},$$
$$\Phi_{24i} = -H_{4i} - R_{2i}^T + \tau X_{24i} + hY_{24i},$$
$$\Phi_{33i} = \epsilon_{2i}G_1^TG_1 + \rho_iS_{2i}^TS_{2i} - Q_1 + \epsilon_{7i}G_1^TG_1 + \epsilon_{4i}\tau S_{2i}^TS_{2i} + \epsilon_{5i}hS_{2i}^TS_{2i} - L_{3i}^T - L_{3i} + \tau X_{33i} + hY_{33i},$$
$$\Phi_{34i} = -L_{4i} - R_{3i}^T + \tau X_{34i} + hY_{34i},$$
$$\Phi_{44i} = \tau S_1 + hS_2 - R_{4i}^T - R_{4i} + \tau X_{44i} + hY_{44i}.$$

Proof. Construct the following Lyapunov–Krasovskii functional candidate:
$$V(t,x(t),r(t)) = x^T(t)P(r(t))x(t) + \int_{t-h}^{t}x^T(s)Q_1x(s)\,ds + \int_{-\tau}^{0}\int_{t+s}^{t}x^T(\eta)Q_2x(\eta)\,d\eta\,ds + \epsilon_{4i}\int_{t-\tau}^{t}\int_{s}^{t}\big[\|S_1(r(v))x(v)\|^2+\|S_2(r(v))x(v-h)\|^2\big]dv\,ds + \int_{t-\tau}^{t}\int_{s}^{t}y^T(v,r(v))S_1y(v,r(v))\,dv\,ds + \epsilon_{5i}\int_{t-h}^{t}\int_{s}^{t}\big[\|S_1(r(v))x(v)\|^2+\|S_2(r(v))x(v-h)\|^2\big]dv\,ds + \int_{t-h}^{t}\int_{s}^{t}y^T(v,r(v))S_2y(v,r(v))\,dv\,ds, \qquad (13)$$
where $P_i>0$, $Q_1>0$, $Q_2>0$, $S_1>0$, $S_2>0$, $\epsilon_{4i}>0$, $\epsilon_{5i}>0$ ($i=1,2,\dots,N$) are to be determined. By the Itô differential formula, the stochastic derivative of $V(t,x(t),r(t))$ along (5) with $F(t,r(t))=0$ can be obtained as follows:
$$dV(t,x(t),i) = \mathcal{L}V(t,x(t),i)\,dt + 2x^T(t)P_i\sigma(t,x(t),x(t-h),i)\,d\omega(t), \qquad (14)$$
where
$$\begin{aligned}
\mathcal{L}V(t,x(t),i) ={}& x^T(t)\Big[\operatorname{maddt}(-P_iA_i) + Q_1 + \tau Q_2 + \sum_{j=1}^{N}\pi_{ij}P_j\Big]x(t) + \operatorname{maddt}\big(x^T(t)P_iW_{0i}l_0(x(t))\big) + \operatorname{maddt}\big(x^T(t)P_iW_{1i}l_1(x(t-h))\big)\\
&+ \operatorname{maddt}\Big(x^T(t)P_iW_{2i}\int_{t-\tau}^{t}l_2(x(s))\,ds\Big) + \operatorname{trace}\big[\sigma^T(t,x(t),x(t-h),i)\,P_i\,\sigma(t,x(t),x(t-h),i)\big]\\
&- x^T(t-h)Q_1x(t-h) - \int_{t-\tau}^{t}x^T(s)Q_2x(s)\,ds\\
&+ \epsilon_{4i}\tau\big[\|S_{1i}x(t)\|^2+\|S_{2i}x(t-h)\|^2\big] - \epsilon_{4i}\int_{t-\tau}^{t}\big[\|S_{1i}x(s)\|^2+\|S_{2i}x(s-h)\|^2\big]ds + \tau y^T(t,i)S_1y(t,i) - \int_{t-\tau}^{t}y^T(s,r(s))S_1y(s,r(s))\,ds\\
&+ \epsilon_{5i}h\big[\|S_{1i}x(t)\|^2+\|S_{2i}x(t-h)\|^2\big] - \epsilon_{5i}\int_{t-h}^{t}\big[\|S_{1i}x(s)\|^2+\|S_{2i}x(s-h)\|^2\big]ds + hy^T(t,i)S_2y(t,i) - \int_{t-h}^{t}y^T(s,r(s))S_2y(s,r(s))\,ds\\
&+ 2\big[x^T(t)H_{1i}^T + x^T(t-\tau)H_{2i}^T + x^T(t-h)H_{3i}^T + y^T(t,i)H_{4i}^T\big]\Big[x(t)-x(t-\tau)-\int_{t-\tau}^{t}dx(s)\Big]\\
&+ 2\big[x^T(t)L_{1i}^T + x^T(t-\tau)L_{2i}^T + x^T(t-h)L_{3i}^T + y^T(t,i)L_{4i}^T\big]\Big[x(t)-x(t-h)-\int_{t-h}^{t}dx(s)\Big]\\
&+ 2\big[x^T(t)R_{1i}^T + x^T(t-\tau)R_{2i}^T + x^T(t-h)R_{3i}^T + y^T(t,i)R_{4i}^T\big]\Big[-A_ix(t) + W_{0i}l_0(x(t)) + W_{1i}l_1(x(t-h)) + W_{2i}\int_{t-\tau}^{t}l_2(x(s))\,ds - y(t,i)\Big]\\
&+ \tau\xi_i^T(t)X_i\xi_i(t) - \int_{t-\tau}^{t}\xi_i^T(t)X_i\xi_i(t)\,ds + h\xi_i^T(t)Y_i\xi_i(t) - \int_{t-h}^{t}\xi_i^T(t)Y_i\xi_i(t)\,ds, \qquad (15)
\end{aligned}$$
with $\xi_i(t)=[x^T(t)\ x^T(t-\tau)\ x^T(t-h)\ y^T(t,i)]^T$.

From Lemma 1 and (4), we have
$$\operatorname{maddt}\big(x^T(t)P_iW_{0i}l_0(x(t))\big) \le \epsilon_{1i}^{-1}x^T(t)P_iW_{0i}W_{0i}^TP_ix(t) + \epsilon_{1i}l_0^T(x(t))l_0(x(t)) \le \epsilon_{1i}^{-1}x^T(t)P_iW_{0i}W_{0i}^TP_ix(t) + \epsilon_{1i}x^T(t)G_0^TG_0x(t), \qquad (16)$$
$$\operatorname{maddt}\big(x^T(t)P_iW_{1i}l_1(x(t-h))\big) \le \epsilon_{2i}^{-1}x^T(t)P_iW_{1i}W_{1i}^TP_ix(t) + \epsilon_{2i}x^T(t-h)G_1^TG_1x(t-h), \qquad (17)$$
$$\operatorname{maddt}\Big(x^T(t)P_iW_{2i}\int_{t-\tau}^{t}l_2(x(s))\,ds\Big) \le \epsilon_{3i}^{-1}x^T(t)P_iW_{2i}W_{2i}^TP_ix(t) + \epsilon_{3i}\Big(\int_{t-\tau}^{t}l_2(x(s))\,ds\Big)^T\Big(\int_{t-\tau}^{t}l_2(x(s))\,ds\Big) \le \epsilon_{3i}^{-1}x^T(t)P_iW_{2i}W_{2i}^TP_ix(t) + \epsilon_{3i}\tau\int_{t-\tau}^{t}x^T(s)G_2^TG_2x(s)\,ds, \qquad (18)$$
where the last step of (18) uses Lemma 3 together with (4). Next, it follows from the condition (9) that
$$\operatorname{trace}\big[\sigma^T(t,x(t),x(t-h),i)\,P_i\,\sigma(t,x(t),x(t-h),i)\big] \le \lambda_{\max}(P_i)\operatorname{trace}\big[\sigma^T(\cdot)\sigma(\cdot)\big] \le \lambda_{\max}(P_i)\big[x^T(t)S_{1i}^TS_{1i}x(t) + x^T(t-h)S_{2i}^TS_{2i}x(t-h)\big] \le \rho_i\big[x^T(t)S_{1i}^TS_{1i}x(t) + x^T(t-h)S_{2i}^TS_{2i}x(t-h)\big]. \qquad (19)$$
Furthermore, it can be seen that
$$-2\xi_i^T(t)H_i^T\int_{t-\tau}^{t}dx(s) = -2\xi_i^T(t)H_i^T\int_{t-\tau}^{t}y(s,r(s))\,ds - 2\xi_i^T(t)H_i^T\int_{t-\tau}^{t}\sigma(s,x(s),x(s-h),r(s))\,d\omega(s) \le -2\xi_i^T(t)H_i^T\int_{t-\tau}^{t}y(s,r(s))\,ds + \epsilon_{4i}^{-1}\xi_i^T(t)H_i^TH_i\xi_i(t) + \epsilon_{4i}\Big|\int_{t-\tau}^{t}\sigma(s,x(s),x(s-h),r(s))\,d\omega(s)\Big|^2.$$
Since, by the Itô isometry and (9),
$$\epsilon_{4i}E\Big|\int_{t-\tau}^{t}\sigma(s,x(s),x(s-h),r(s))\,d\omega(s)\Big|^2 \le \epsilon_{4i}\int_{t-\tau}^{t}E\big[\|S_1(r(s))x(s)\|^2 + \|S_2(r(s))x(s-h)\|^2\big]ds,$$
we obtain
$$E\Big[-2\xi_i^T(t)H_i^T\int_{t-\tau}^{t}dx(s)\Big] \le E\Big[-2\xi_i^T(t)H_i^T\int_{t-\tau}^{t}y(s,r(s))\,ds\Big] + \epsilon_{4i}^{-1}E\big[\xi_i^T(t)H_i^TH_i\xi_i(t)\big] + \epsilon_{4i}\int_{t-\tau}^{t}E\big[\|S_1(r(s))x(s)\|^2 + \|S_2(r(s))x(s-h)\|^2\big]ds. \qquad (20)$$
Similarly, we can obtain
$$E\Big[-2\xi_i^T(t)L_i^T\int_{t-h}^{t}dx(s)\Big] \le E\Big[-2\xi_i^T(t)L_i^T\int_{t-h}^{t}y(s,r(s))\,ds\Big] + \epsilon_{5i}^{-1}E\big[\xi_i^T(t)L_i^TL_i\xi_i(t)\big] + \epsilon_{5i}\int_{t-h}^{t}E\big[\|S_1(r(s))x(s)\|^2 + \|S_2(r(s))x(s-h)\|^2\big]ds, \qquad (21)$$
and, by Lemma 1 and (4),
$$2\xi_i^T(t)R_i^TW_{0i}l_0(x(t)) \le \epsilon_{6i}^{-1}\xi_i^T(t)R_i^TW_{0i}W_{0i}^TR_i\xi_i(t) + \epsilon_{6i}x^T(t)G_0^TG_0x(t), \qquad (22)$$
$$2\xi_i^T(t)R_i^TW_{1i}l_1(x(t-h)) \le \epsilon_{7i}^{-1}\xi_i^T(t)R_i^TW_{1i}W_{1i}^TR_i\xi_i(t) + \epsilon_{7i}x^T(t-h)G_1^TG_1x(t-h), \qquad (23)$$
$$2\xi_i^T(t)R_i^TW_{2i}\int_{t-\tau}^{t}l_2(x(s))\,ds \le \epsilon_{8i}^{-1}\xi_i^T(t)R_i^TW_{2i}W_{2i}^TR_i\xi_i(t) + \epsilon_{8i}\tau\int_{t-\tau}^{t}x^T(s)G_2^TG_2x(s)\,ds. \qquad (24)$$
Note also that
$$\epsilon_{4i}\tau\|S_{1i}x(t)\|^2 = \epsilon_{4i}\tau x^T(t)S_{1i}^TS_{1i}x(t),\qquad \epsilon_{4i}\tau\|S_{2i}x(t-h)\|^2 = \epsilon_{4i}\tau x^T(t-h)S_{2i}^TS_{2i}x(t-h),$$
$$\epsilon_{5i}h\|S_{1i}x(t)\|^2 = \epsilon_{5i}h\,x^T(t)S_{1i}^TS_{1i}x(t),\qquad \epsilon_{5i}h\|S_{2i}x(t-h)\|^2 = \epsilon_{5i}h\,x^T(t-h)S_{2i}^TS_{2i}x(t-h).$$
In view of (15)–(24), it follows that
$$E\{\mathcal{L}V(x(t),i)\} \le \xi_i^T(t)\Theta_i\xi_i(t) - E\int_{t-\tau}^{t}\begin{bmatrix}\xi_i(t)\\ y(s,r(s))\end{bmatrix}^T\Psi_i\begin{bmatrix}\xi_i(t)\\ y(s,r(s))\end{bmatrix}ds - E\int_{t-h}^{t}\begin{bmatrix}\xi_i(t)\\ y(s,r(s))\end{bmatrix}^T\Omega_i\begin{bmatrix}\xi_i(t)\\ y(s,r(s))\end{bmatrix}ds, \qquad (25)$$
where
$$\Theta_i = \begin{bmatrix}\bar\Theta_{11i} & 0 & 0 & 0\\ * & 0 & 0 & 0\\ * & * & \bar\Theta_{33i} & 0\\ * & * & * & \tau S_1 + hS_2\end{bmatrix} + \operatorname{maddt}\{H_i^T[I\ \ {-I}\ \ 0\ \ 0]\} + \epsilon_{4i}^{-1}H_i^TH_i + \operatorname{maddt}\{L_i^T[I\ \ 0\ \ {-I}\ \ 0]\} + \epsilon_{5i}^{-1}L_i^TL_i + \operatorname{maddt}\{R_i^T[-A_i\ \ 0\ \ 0\ \ {-I}]\} + \tau X_i + hY_i + \epsilon_{6i}^{-1}R_i^TW_{0i}W_{0i}^TR_i + \epsilon_{7i}^{-1}R_i^TW_{1i}W_{1i}^TR_i + \epsilon_{8i}^{-1}R_i^TW_{2i}W_{2i}^TR_i,$$
with
$$\bar\Theta_{11i} = -P_iA_i - A_i^TP_i + Q_1 + \tau Q_2 + \sum_{j=1}^{N}\pi_{ij}P_j + \epsilon_{1i}^{-1}P_iW_{0i}W_{0i}^TP_i + \epsilon_{1i}G_0^TG_0 + \epsilon_{2i}^{-1}P_iW_{1i}W_{1i}^TP_i + \epsilon_{3i}^{-1}P_iW_{2i}W_{2i}^TP_i + \rho_iS_{1i}^TS_{1i} + \epsilon_{6i}G_0^TG_0 + \epsilon_{4i}\tau S_{1i}^TS_{1i} + \epsilon_{5i}hS_{1i}^TS_{1i},$$
$$\bar\Theta_{33i} = \epsilon_{2i}G_1^TG_1 + \rho_iS_{2i}^TS_{2i} - Q_1 + \epsilon_{7i}G_1^TG_1 + \epsilon_{4i}\tau S_{2i}^TS_{2i} + \epsilon_{5i}hS_{2i}^TS_{2i},$$
which, together with (12) and the Schur complement (so that $\Theta_i<0$), implies that there exists a scalar $\alpha = \lambda_{\min}(-\Theta_i) > 0$ such that
$$E\{\mathcal{L}V(x(t),i)\} \le -\alpha E(|x(t)|^2). \qquad (26)$$

In the following, we prove the mean-square exponential stability of the system (5). To this end, define $\lambda_P = \max_{i\in S}\lambda_{\max}(P_i)$ and $\lambda_p = \min_{i\in S}\lambda_{\min}(P_i)$. According to $dx(t)=y(t,i)\,dt+\sigma(t,x(t),x(t-h),r(t))\,d\omega(t)$, $|x(t+\delta)|\le\varpi|x(t)|$ and (13), there exist positive scalars $d_1$, $d_2$ such that
$$\lambda_p E|x(t)|^2 \le EV(x(t),t,i) \le \lambda_P E|x(t)|^2 + d_1\int_{t-\bar d}^{t}E|x(s)|^2\,ds, \qquad (27)$$
$$EV(x(0),0,r(0)) \le d_2 E\|\phi\|^2, \qquad (28)$$
where $\bar d = \max\{\tau,h\}$ and $x(t)=\phi(t)$ for $t\in[-\bar d,0]$. Let $\delta>0$ be a root of the inequality
$$\delta\big(\lambda_P + \bar d\,d_1 e^{\delta\bar d}\big) \le \alpha. \qquad (29)$$
Considering $e^{\delta t}V(x(t),t,i)$ for each $r(t)=i$, $i\in S$, $t>0$, by Dynkin's formula one obtains
$$E\{e^{\delta t}V(x(t),t,i)\} = E\{V(x(0),0,r(0))\} + E\int_0^t e^{\delta s}\big[\delta V(x(s),s,r(s)) + \mathcal{L}V(x(s),s,r(s))\big]ds. \qquad (30)$$
It then follows from (26), (27) and (28) that
$$E\{e^{\delta t}V(x(t),t,i)\} \le d_2E\{\|\phi\|^2\} + E\int_0^t \delta e^{\delta s}\Big(\lambda_P|x(s)|^2 + d_1\int_{s-\bar d}^{s}|x(\beta)|^2\,d\beta\Big)ds - \alpha E\int_0^t e^{\delta s}|x(s)|^2\,ds. \qquad (31)$$
We can conclude that
$$\int_0^t e^{\delta s}\int_{s-\bar d}^{s} d_1|x(\beta)|^2\,d\beta\,ds \le \int_{-\bar d}^{t} d_1|x(\beta)|^2\int_{\beta}^{\beta+\bar d} e^{\delta s}\,ds\,d\beta \le \bar d e^{\delta\bar d}\int_{-\bar d}^{t} d_1|x(\beta)|^2 e^{\delta\beta}\,d\beta \le \bar d e^{\delta\bar d}\Big(\int_0^t d_1|x(s)|^2e^{\delta s}\,ds + \int_{-\bar d}^0 d_1|x(s)|^2e^{\delta s}\,ds\Big). \qquad (32)$$
From (27), (31), (32) and (29), we obtain
$$e^{\delta t}\lambda_p E|x(t)|^2 \le e^{\delta t}EV(x(t),t,i) \le \big(d_2 + \delta\bar d\,d_1 e^{\delta\bar d}\big)E\|\phi\|^2,$$
or
$$E|x(t)|^2 \le \lambda_p^{-1}\big(d_2 + \delta\bar d\,d_1 e^{\delta\bar d}\big)E\|\phi\|^2\,e^{-\delta t},$$
which implies that the trivial solution of system (5) is exponentially stable in the mean square. This completes the proof. $\square$

Remark 4. In the Lyapunov–Krasovskii functional candidate (13) we use several double-integral terms, by which, together with the free-weighting matrices, we deduce conditions involving the time-delay terms $\tau$ and $h$. These delay-dependent conditions are less conservative than delay-independent ones. It is obvious that the free-weighting matrices $H_i$, $L_i$, $R_i$ in (15) reflect the relationships among $\big(x(t),\ x(t-\tau),\ \int_{t-\tau}^{t}dx(s)\big)$, $\big(x(t),\ x(t-h),\ \int_{t-h}^{t}dx(s)\big)$ and $\big({-A_ix(t)},\ W_{0i}l_0(x(t)),\ W_{1i}l_1(x(t-h)),\ W_{2i}\int_{t-\tau}^{t}l_2(x(s))\,ds,\ y(t,i)\big)$, respectively, which are thereby taken into account. These free-weighting matrices can be easily determined by solving the LMIs (12).
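Remark 4 notes that the free-weighting matrices are obtained by solving the LMIs (12). As an illustration of how such a feasibility problem can be posed outside the Matlab LMI toolbox, the following is a minimal sketch using Python with cvxpy (our choice of tooling, not the authors'); it encodes a condition of the form (12a) together with a toy Lyapunov-type block LMI, using a mode-1 matrix from Example 1 with entries as printed (signs as extracted). The full conditions (12b)–(12d) would be assembled with cp.bmat in exactly the same way.

```python
import cvxpy as cp
import numpy as np

n = 2
A  = np.diag([2.6, 2.7])                   # mode-1 A matrix from Example 1
W0 = np.array([[1.2, 1.5], [1.7, 1.2]])    # mode-1 W0 (entries as printed)
I  = np.eye(n)

P   = cp.Variable((n, n), symmetric=True)  # Lyapunov matrix P_i
rho = cp.Variable(nonneg=True)             # scalar bound in (12a)
eps = cp.Variable(nonneg=True)             # an epsilon_{ki}-type multiplier

constraints = [
    P >> 1e-6 * I,                         # P_i = P_i^T > 0
    rho * I - P >> 0,                      # P_i <= rho_i * I, condition (12a)
    # A toy Lyapunov-type block LMI built with cp.bmat; the 12x12 block
    # matrix Phi_i in (12b) would be assembled in exactly the same way.
    cp.bmat([[-A.T @ P - P @ A, P @ W0],
             [W0.T @ P, -eps * I]]) << -1e-6 * np.eye(2 * n),
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status, "rho =", rho.value)
```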
By Theorem 1, we are now in a position to present the solution to the global exponential stability problem for the system (5) with uncertainty.

Theorem 2. The dynamics of the neural network (5) is globally exponentially stable in the mean square if, $\forall i\in S$, there exist positive scalars $\rho_i>0$, $\epsilon_{ji}>0$ ($j=1,2,\dots,8$), $\lambda_{ni}>0$ ($n=1,2,\dots,7$), positive definite matrices $P_i$ ($i=1,2,\dots,N$), $Q_1$, $Q_2$, $S_1$, $S_2$, and matrices
$$H_i=[H_{1i}\ H_{2i}\ H_{3i}\ H_{4i}],\qquad R_i=[R_{1i}\ R_{2i}\ R_{3i}\ R_{4i}],\qquad L_i=[L_{1i}\ L_{2i}\ L_{3i}\ L_{4i}],$$
together with symmetric matrices $X_i=X_i^T$ and $Y_i=Y_i^T$ partitioned as in Theorem 1, such that (12a), (12c), (12d) and the following linear matrix inequality hold:
$$\begin{bmatrix}
\Phi_i+\Phi_{ib} & \Gamma_{1i} & \Gamma_{2i} & \Gamma_{3i} & \Gamma_{4i} & \Gamma_{5i} & \Gamma_{6i} & \Gamma_{7i}\\
* & -\lambda_{1i}I & 0 & 0 & 0 & 0 & 0 & 0\\
* & * & -\lambda_{2i}I & 0 & 0 & 0 & 0 & 0\\
* & * & * & -\lambda_{3i}I & 0 & 0 & 0 & 0\\
* & * & * & * & -\lambda_{4i}I & 0 & 0 & 0\\
* & * & * & * & * & -\lambda_{5i}I & 0 & 0\\
* & * & * & * & * & * & -\lambda_{6i}I & 0\\
* & * & * & * & * & * & * & -\lambda_{7i}I
\end{bmatrix} < 0, \qquad (33)$$
where $\Phi_i$ is defined in (12b), and
$$\Phi_{ib} = \operatorname{diag}\big(\lambda_{1i}N_{Ai}^TN_{Ai},\ 0,\ 0,\ 0,\ \lambda_{2i}N_{0i}^TN_{0i},\ \lambda_{3i}N_{1i}^TN_{1i},\ \lambda_{4i}N_{2i}^TN_{2i},\ 0,\ 0,\ \lambda_{5i}N_{0i}^TN_{0i},\ \lambda_{6i}N_{1i}^TN_{1i},\ \lambda_{7i}N_{2i}^TN_{2i}\big),$$
and the $\Gamma_{ki}$ are the $12\times 1$ block columns
$$\Gamma_{1i}=\big[(-P_iM_{Ai}-R_{1i}^TM_{Ai})^T\ \ (-R_{2i}^TM_{Ai})^T\ \ (-R_{3i}^TM_{Ai})^T\ \ (-R_{4i}^TM_{Ai})^T\ \ 0\ \cdots\ 0\big]^T,$$
$$\Gamma_{2i}=\big[(P_iM_{0i})^T\ \ 0\ \cdots\ 0\big]^T,\qquad \Gamma_{3i}=\big[(P_iM_{1i})^T\ \ 0\ \cdots\ 0\big]^T,\qquad \Gamma_{4i}=\big[(P_iM_{2i})^T\ \ 0\ \cdots\ 0\big]^T,$$
$$\Gamma_{5i}=\big[(R_{1i}^TM_{0i})^T\ \ (R_{2i}^TM_{0i})^T\ \ (R_{3i}^TM_{0i})^T\ \ (R_{4i}^TM_{0i})^T\ \ 0\ \cdots\ 0\big]^T,$$
$$\Gamma_{6i}=\big[(R_{1i}^TM_{1i})^T\ \ (R_{2i}^TM_{1i})^T\ \ (R_{3i}^TM_{1i})^T\ \ (R_{4i}^TM_{1i})^T\ \ 0\ \cdots\ 0\big]^T,\qquad \Gamma_{7i}=\big[(R_{1i}^TM_{2i})^T\ \ (R_{2i}^TM_{2i})^T\ \ (R_{3i}^TM_{2i})^T\ \ (R_{4i}^TM_{2i})^T\ \ 0\ \cdots\ 0\big]^T.$$
Proof. Replacing $A_i$ and $W_{ki}$ ($k=0,1,2$) in (12b) with $A_i+M_{Ai}F_iN_{Ai}$ and $W_{ki}+M_{ki}F_iN_{ki}$, respectively, we have
$$\Phi_i + \operatorname{maddt}(\Gamma_{1i}F_i\Gamma_{1bi}) + \operatorname{maddt}(\Gamma_{2i}F_i\Gamma_{2bi}) + \operatorname{maddt}(\Gamma_{3i}F_i\Gamma_{3bi}) + \operatorname{maddt}(\Gamma_{4i}F_i\Gamma_{4bi}) + \operatorname{maddt}(\Gamma_{5i}F_i\Gamma_{5bi}) + \operatorname{maddt}(\Gamma_{6i}F_i\Gamma_{6bi}) + \operatorname{maddt}(\Gamma_{7i}F_i\Gamma_{7bi}) < 0, \qquad (34)$$
where
$$\Gamma_{1bi}=[N_{Ai}\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0],\qquad \Gamma_{2bi}=[0\ 0\ 0\ 0\ N_{0i}\ 0\ 0\ 0\ 0\ 0\ 0\ 0],\qquad \Gamma_{3bi}=[0\ 0\ 0\ 0\ 0\ N_{1i}\ 0\ 0\ 0\ 0\ 0\ 0],$$
$$\Gamma_{4bi}=[0\ 0\ 0\ 0\ 0\ 0\ N_{2i}\ 0\ 0\ 0\ 0\ 0],\qquad \Gamma_{5bi}=[0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ N_{0i}\ 0\ 0],\qquad \Gamma_{6bi}=[0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ N_{1i}\ 0],\qquad \Gamma_{7bi}=[0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ N_{2i}].$$
By Theorem 1, the system (5) is globally exponentially stable in the mean square if (34), (12a), (12c) and (12d) hold for all $F(t,r(t))$ satisfying (8). By Lemma 2, it follows that, for all $F(t,r(t))$ satisfying (8), (34) holds if and only if there exist scalars $\lambda_{ji}>0$ ($j=1,2,\dots,7$) such that
$$\Phi_i + \lambda_{1i}^{-1}\Gamma_{1i}\Gamma_{1i}^T + \lambda_{1i}\Gamma_{1bi}^T\Gamma_{1bi} + \lambda_{2i}^{-1}\Gamma_{2i}\Gamma_{2i}^T + \lambda_{2i}\Gamma_{2bi}^T\Gamma_{2bi} + \lambda_{3i}^{-1}\Gamma_{3i}\Gamma_{3i}^T + \lambda_{3i}\Gamma_{3bi}^T\Gamma_{3bi} + \lambda_{4i}^{-1}\Gamma_{4i}\Gamma_{4i}^T + \lambda_{4i}\Gamma_{4bi}^T\Gamma_{4bi} + \lambda_{5i}^{-1}\Gamma_{5i}\Gamma_{5i}^T + \lambda_{5i}\Gamma_{5bi}^T\Gamma_{5bi} + \lambda_{6i}^{-1}\Gamma_{6i}\Gamma_{6i}^T + \lambda_{6i}\Gamma_{6bi}^T\Gamma_{6bi} + \lambda_{7i}^{-1}\Gamma_{7i}\Gamma_{7i}^T + \lambda_{7i}\Gamma_{7bi}^T\Gamma_{7bi} < 0. \qquad (35)$$
By the Schur complement, it can easily be shown that (35) is equivalent to (33). This completes the proof. $\square$

Remark 5. Let $S=\{1\}$ and $F(t,r(t))=0$; then the neural network (5) reduces to (4) in [18]. Let $S=\{1\}$ and $F(t,r(t))\ne 0$; then the neural network (5) can be rewritten as (5) in [16]. Therefore, Theorems 1 and 2 in this letter can be regarded as extensions of Theorem 1 in [18] and Theorem 1 in [16], respectively. However, for delayed neural networks, the criteria proposed in [16] and [18] are only applicable to systems with some admissible time delay. As is well known, when the time delay is actually small, delay-independent conditions tend to be conservative. In this paper, the free-weighting-matrix approach is employed to derive a delay-dependent criterion for neural networks with mixed time delays.

Remark 6. Theorem 2 presents a sufficient condition guaranteeing the global exponential stability in the mean square for the hybrid stochastic neural networks with mixed time delays and nonlinearity. For neural networks with Markovian jumping parameters, the problem of global exponential stability analysis has been handled in [13], where both time delays and Markovian jumping parameters are considered. A general class of stochastic interval additive neural networks with time-varying delay and Markovian switching has been studied in [14]. It is worth noting that parameter uncertainties, mixed time delays and stochastic disturbances were not fully taken into account in these works; in particular, mixed time delays were not considered in [14]. Letting $F(t,r(t))=0$, $W_2(r(t))=0$ and $\sigma(t,x(t),x(t-h),r(t))=0$ in (5), we obtain system (5) in [13]. It can be seen that, up to now, the stability analysis problem for neural networks with Markovian jumping parameters, stochastic disturbance and mixed time delays has not been fully investigated, despite its practical importance. In this letter, this stability problem has been thoroughly discussed for the hybrid stochastic neural networks with mixed time delays and nonlinearity.

4. Numerical examples

We present two examples in order to illustrate the usefulness of our main results. Our aim is to examine the global exponential stability of a given delayed neural network with Markovian jumping parameters.

Example 1. Consider a two-neuron network (5) with two modes and without parameter uncertainties. The network parameters are given as follows:
$$A_1=\begin{bmatrix}2.6 & 0\\ 0 & 2.7\end{bmatrix},\quad A_2=\begin{bmatrix}2.5 & 0\\ 0 & 2.6\end{bmatrix},\quad G_0=\begin{bmatrix}0.2 & 0\\ 0 & 0.3\end{bmatrix},\quad G_1=\begin{bmatrix}0.4 & 0\\ 0 & 0.6\end{bmatrix},\quad G_2=\begin{bmatrix}0.3 & 0\\ 0 & 0.4\end{bmatrix},$$
$$W_{01}=\begin{bmatrix}1.2 & 1.5\\ 1.7 & 1.2\end{bmatrix},\quad W_{02}=\begin{bmatrix}1.1 & 1.6\\ 1.8 & 1.2\end{bmatrix},\quad W_{11}=\begin{bmatrix}1.1 & 0.5\\ 0.5 & 0.8\end{bmatrix},\quad W_{12}=\begin{bmatrix}1.6 & 0.1\\ 0.3 & 0.4\end{bmatrix},\quad W_{21}=\begin{bmatrix}0.6 & 0.1\\ 0.1 & 0.2\end{bmatrix},\quad W_{22}=\begin{bmatrix}0.8 & 0.2\\ 0.2 & 0.3\end{bmatrix},$$
$$S_{11}=\begin{bmatrix}0.08 & 0\\ 0 & 0.08\end{bmatrix},\quad S_{12}=\begin{bmatrix}0.07 & 0\\ 0 & 0.06\end{bmatrix},\quad S_{21}=\begin{bmatrix}0.09 & 0\\ 0 & 0.09\end{bmatrix},\quad S_{22}=\begin{bmatrix}0.08 & 0\\ 0 & 0.04\end{bmatrix},$$
"
G¼
0:12
0:12
0:11
0:11
# ;
t ¼ 0:12;
h ¼ 0:13,
sðt; xðtÞ; xðt hÞ; 1Þ ¼ ð0:4x1 ðt hÞ; 0:5x2 ðtÞÞT ,
ARTICLE IN PRESS 3364
W. Zhou et al. / Neurocomputing 72 (2009) 3357–3365
Fig. 1. The response of the state vector (Example 1).
Fig. 2. The response of the state vector (Example 2).
sðt; xðtÞ; xðt hÞ; 2Þ ¼ ð0:5x1 ðtÞ; 0:3x2 ðt hÞÞT , lk ðxðtÞÞ ¼ tanhðxðtÞÞ;
G0 ¼ G1 ¼ G2 ¼ 0:2I;
By using Matlab LMI toolbox, we solve the LMIs in (12) and obtain 17:6679 4:2912 4:6851 1:8324 ; P2 ¼ , P1 ¼ 4:2912 21:3771 1:8324 5:7081
0:4
0:3
0:2
0:1
6 W 11 ¼ 6 4 0:2 0:6 0:8 1:1 2 0:5 0:2 6 W 21 ¼ 6 4 0:3 0:7 1:2 1:1 " # 2 2 G¼ . 1 1
3
7 0:6 7 5; 1:2 3 0:1 7 0:3 7 5; 0:5
2
0:4
0:3
0:2
0:1
0:2 0:3
3
7 6 0:5 0:4 7 W 12 ¼ 6 5, 4 0:1 0:9 1:3 1:5 2 3 0:6 0:3 0:2 6 7 7 W 22 ¼ 6 4 0:2 0:6 0:5 5, 1:4 1:2 0:4
Fðt; 1Þ ¼ Fðt; 2Þ ¼ diagðsinð5tÞ; cosð5tÞ; sinð3tÞÞ,
t ¼ 0:12;
h ¼ 0:13;
lk ðxðtÞÞ ¼ tanhðxðtÞÞ; k ¼ 0; 1; 2,
By solving the LMIs (12a), (12c), (12d) and (33), we obtain 3 2 35:3471 3:3118 0:3826 7 6 0:8046 7 P1 ¼ 6 5, 4 3:3118 33:7867
Example 2. We consider neural network (5) with parameter uncertainties. The network data are given as follows: 2 3 2 3 2:7 0 0 2:6 0 0 6 7 6 7 7 6 7 A1 ¼ 6 4 0 2:8 0 5; A2 ¼ 4 0 2:4 0 5, 0 0 2:9 0 0 2:7 3 3 2 2 0:3 1:8 0:5 0:2 1:6 0:4 7 7 6 6 7 7 6 W 01 ¼ 6 4 1:1 1:6 1:1 5; W 02 ¼ 4 1:3 1:4 1:2 5, 0:8
¼ NA1 ¼ NA2 ¼ N 01 ¼ N02 ¼ N11 ¼ N 12 ¼ N 21 ¼ N22 ¼ 0:1I,
sðt; xðtÞ; xðt hÞ; 2Þ ¼ ð0:4x1 ðt hÞ; 0:6x2 ðt hÞ; 0:7x3 ðtÞÞT .
Therefore, it follows from Theorem 1 that the two-neuron neural network (5) without parameter uncertainties is globally exponentially stable in the mean square. The responses of the state vector xðtÞ of system (5) for Example 1 are shown in Fig. 1, which further illustrate the stability.
0:6
MA1 ¼ M A2 ¼ M 01 ¼ M 02 ¼ M 11 ¼ M 12 ¼ M 21 ¼ M 22
sðt; xðtÞ; xðt hÞ; 1Þ ¼ ð0:5x1 ðtÞ; 0:3x2 ðt hÞ; 0:4x3 ðtÞÞT ,
r1 ¼ 39:4315; r2 ¼ 24:6899; 11 ¼ 64:8119, 21 ¼ 32:1690; 31 ¼ 25:0637; 41 ¼ 24:4488, 51 ¼ 22:2422; 61 ¼ 32:2754; 71 ¼ 23:2970, 81 ¼ 22:6629; 12 ¼ 22:2509; 22 ¼ 17:9420, 32 ¼ 20:2979; 42 ¼ 24:6447; 52 ¼ 20:4882, 62 ¼ 39:1595; 72 ¼ 27:5308; 82 ¼ 25:4427.
2
S11 ¼ S12 ¼ S21 ¼ S22 ¼ 0:8I,
k ¼ 0; 1; 2.
2
0:3826
0:8046
30:9089
6:9012
2:2426
0:1964
6 P2 ¼ 6 4 2:2426 0:1964
5:3516 0:4075
3
7 0:4075 7 5, 4:1297
r1 ¼ 44:7997; r2 ¼ 21:8688; 11 ¼ 106:4891; 21 ¼ 76:2694; 31 ¼ 74:3977; 41 ¼ 33:2942; 51 ¼ 28:3790; 61 ¼ 61:1545; 71 ¼ 48:9937; 81 ¼ 48:0338; 12 ¼ 34:0457; 22 ¼ 33:0474; 32 ¼ 33:6439; 42 ¼ 36:4455; 52 ¼ 31:6921; 62 ¼ 60:7741; 72 ¼ 53:1769; 82 ¼ 52:0863; which indicates that the neural network (5) with parameter uncertainties is globally exponentially stable in the mean square. For Example 2, the responses of the state vector xðtÞof system (5) are shown as Fig. 2. The simulation results imply that the neural network (5) is globally exponentially stable.
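To complement the reported simulations, the following is a minimal Euler–Maruyama sketch (our illustration, not the authors' code) for the uncertainty-free two-mode network of Example 1. It makes simplifying assumptions: componentwise noise in place of the general matrix $\sigma$, a Bernoulli approximation of the Markov switching per step, a left Riemann sum for the distributed-delay integral, and matrix entries taken as printed above (signs as extracted).

```python
import numpy as np

rng = np.random.default_rng(0)
tau, h, dt, T = 0.12, 0.13, 1e-3, 10.0
n_tau, n_h = int(tau / dt), int(h / dt)
hist = max(n_tau, n_h) + 1                 # length of the stored history

A  = [np.diag([2.6, 2.7]), np.diag([2.5, 2.6])]
W0 = [np.array([[1.2, 1.5], [1.7, 1.2]]), np.array([[1.1, 1.6], [1.8, 1.2]])]
W1 = [np.array([[1.1, 0.5], [0.5, 0.8]]), np.array([[1.6, 0.1], [0.3, 0.4]])]
W2 = [np.array([[0.6, 0.1], [0.1, 0.2]]), np.array([[0.8, 0.2], [0.2, 0.3]])]
Gamma = np.array([[-0.12, 0.12], [0.11, -0.11]])

def sigma(x, xh, mode):
    # Mode-dependent noise intensities of Example 1, applied componentwise
    return np.array([0.4 * xh[0], 0.5 * x[1]]) if mode == 0 \
        else np.array([0.5 * x[0], 0.3 * xh[1]])

steps = int(T / dt)
x = np.zeros((hist + steps, 2))
x[:hist] = np.array([1.0, -1.0])           # constant initial history phi
mode = 0
for k in range(hist, hist + steps):
    if rng.random() < -Gamma[mode, mode] * dt:   # approximate CTMC switching
        mode = 1 - mode
    xt, xh = x[k - 1], x[k - 1 - n_h]
    integral = dt * np.tanh(x[k - n_tau:k]).sum(axis=0)  # ~ int l2(x(s)) ds
    drift = (-A[mode] @ xt + W0[mode] @ np.tanh(xt)
             + W1[mode] @ np.tanh(xh) + W2[mode] @ integral)
    dw = rng.normal(scale=np.sqrt(dt), size=2)
    x[k] = xt + dt * drift + sigma(xt, xh, mode) * dw
# x[hist:] holds one sample path of (5) for Example 1's data, cf. Fig. 1.
```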
5. Conclusions

In this paper, we have addressed the problem of global exponential stability analysis for a class of uncertain stochastic delayed neural networks in which both mixed time delays and Markovian jumping parameters exist. Free-weighting matrices are employed to express the relationships among the terms in the Leibniz–Newton formula, and an LMI-based global exponential stability criterion is derived for delayed neural networks with mixed time delays and Markovian jumping parameters. Finally, simulation examples demonstrate the usefulness of the proposed results.

Recent research has thrown new light on neural networks with time-delays. Zhang et al. have provided improved conditions for the global asymptotic stability of a class of neural networks with interval time-varying delays [21]. The contributions of Wang et al. are concerned with the state estimation problem for a class of Markovian neural networks with discrete and distributed time-delays [22]. Liu et al. have focused on the stability analysis problem for a new class of discrete-time recurrent neural networks with mixed time-delays [23]. In our future work, we will start from these new results and devote ourselves to dealing with discrete-time complex networks with time-delays.

References

[1] S. Yong, P. Scott, N. Nasrabadi, Object recognition using multilayer Hopfield neural network, IEEE Trans. Image Process. 6 (1997) 357–372.
[2] G. Joya, M. Atencia, F. Sandoval, Hopfield neural networks for optimization: study of the different dynamics, Neurocomputing 43 (2002) 219–237.
[3] W.J. Li, T. Lee, Hopfield neural networks for affine invariant matching, IEEE Trans. Neural Networks 12 (2001) 1400–1410.
[4] Z. Wang, S. Lauria, J. Fang, X. Liu, Exponential stability of uncertain stochastic neural networks with mixed time-delays, Chaos Solitons Fractals 32 (2007) 62–72.
[5] Y. He, Q. Wang, M. Wu, C. Lin, Delay-dependent state estimation for delayed neural networks, IEEE Trans. Neural Networks 17 (2006) 1077–1081.
[6] X. Liao, G. Chen, E.N. Sanchez, Delay-dependent exponential stability analysis of delayed neural networks: an LMI approach, Neural Networks 15 (2003) 1401–1402.
[7] H. Lu, Comments on 'A generalized LMI-based approach to the global asymptotic stability of delayed cellular neural networks', IEEE Trans. Neural Networks 16 (2005) 778–779.
[8] Q. Zhang, X. Wen, J. Xu, Delay-dependent exponential stability of cellular neural networks with time-varying delays, Chaos Solitons Fractals 23 (2005) 1363–1369.
[9] J. Cao, X. Li, Stability in delayed Cohen–Grossberg neural networks: LMI optimization approach, Phys. D 212 (2005) 1317–1329.
[10] Z. Wang, Y. Liu, X. Liu, On global asymptotic stability of neural networks with discrete and distributed delays, Phys. Lett. A 345 (2005) 299–308.
[11] X. Mao, Stochastic Differential Equations and their Applications, Horwood, Chichester, UK, 1997.
[12] L. Arnold, Stochastic Differential Equations: Theory and Applications, Wiley, New York, 1972.
[13] Z. Wang, Y. Liu, Y. Li, X. Liu, Exponential stability of delayed recurrent neural networks with Markovian jumping parameters, Phys. Lett. A 356 (2006) 346–352.
[14] H. Huang, D.W.C. Ho, Y. Qu, Robust stability of stochastic delayed additive neural networks with Markovian switching, Neural Networks 20 (2007) 799–809.
[15] Z. Wang, H. Shu, J. Fang, X. Liu, Robust stability for stochastic Hopfield neural networks with time delays, Nonlinear Anal. Real World Appl. 7 (2006) 1119–1128.
[16] Z. Wang, Y. Liu, K. Fraser, X. Liu, Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays, Phys. Lett. A 354 (2006) 288–297.
[17] Z. Wang, S. Lauria, J. Fang, X. Liu, Exponential stability of uncertain stochastic neural networks with mixed time-delays, Chaos Solitons Fractals 32 (2007) 62–72.
[18] Z. Wang, J. Fang, X. Liu, Global stability of stochastic high-order neural networks with discrete and distributed delays, Chaos Solitons Fractals 36 (2008) 388–396.
[19] K. Gu, An integral inequality in the stability problem of time-delay systems, in: Proceedings of the 39th IEEE Conference on Decision and Control, Sydney, Australia, December 2000, pp. 2805–2810.
[20] L. Xie, Output feedback H∞ control of systems with parameter uncertainty, Int. J. Control 63 (1996) 741–750.
[21] Y. Zhang, D. Yue, E. Tian, New stability criteria of neural networks with interval time-varying delay: a piecewise delay method, Appl. Math. Comput. 208 (2009) 249–259.
[22] Z. Wang, Y. Liu, X. Liu, State estimation for jumping recurrent neural networks with discrete and distributed delays, Neural Networks 22 (2009) 41–48.
[23] Y. Liu, Z. Wang, X. Liu, Asymptotic stability for neural networks with mixed time-delays: the discrete-time case, Neural Networks 22 (2009) 67–74.
Wuneng Zhou was born in 1959. He received a first-class B.S. degree from Huazhong Normal University in 1982 and obtained his Ph.D. degree from Zhejiang University in 2005. From 1982 to 1995, he worked as an assistant, lecturer and associate professor at Yunyang Teacher's College, Hubei Province. From 1995 to 2000, he worked as an associate professor and professor at Jingzhou Normal University, Hubei Province. From 2000 to 2006, he worked as a professor at Zhejiang Normal University, Zhejiang Province. He is now a professor at Donghua University, Shanghai. His research interests include regional stability analysis, singular systems robust control, networks control, stochastic systems control, neural networks, and chaotic control and synchronization.

Hongqian Lu received his Ph.D. in Control Theory and Control Engineering from Donghua University, Shanghai, China, in 2009. Currently, he works as a teacher at Shandong Institute of Light Industry, Jinan, China. His research interests include singular systems, robust control, networks control and neural networks.

Chunmei Duan received her Ph.D. in Computer Science from Shandong University, Jinan, China, in 2009. Currently, she works as a teacher at Shandong Normal University, Jinan, China. Her current research interests include pattern recognition, robot control, neural networks, computer vision and computer graphics.