Applied Mathematics and Computation 218 (2012) 5769–5781
Delay-dependent stochastic stability criteria for Markovian jumping neural networks with mode-dependent time-varying delays and partially known transition rates

Junkang Tian a,b,c,⇑, Yongming Li a, Jinzhou Zhao a, Shouming Zhong c

a State Key Laboratory of Oil and Gas Reservoir Geology and Exploitation, Southwest Petroleum University, Chengdu, Sichuan 610500, PR China
b School of Sciences, Southwest Petroleum University, Chengdu, Sichuan 610500, PR China
c School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, PR China

⇑ Corresponding author at: State Key Laboratory of Oil and Gas Reservoir Geology and Exploitation, Southwest Petroleum University, Chengdu, Sichuan 610500, PR China. E-mail address: [email protected] (J. Tian).
doi:10.1016/j.amc.2011.11.087

Keywords: Stochastic stability; Markovian jumping neural networks; Time-varying delays

Abstract: In this paper, the problem of stochastic stability of Markovian jumping neural networks with mode-dependent time-varying delays and partially known transition rates is considered. Some new delay-dependent stability criteria are derived by choosing a new class of Lyapunov functional. The obtained criteria are less conservative because the free-weighting matrix method and a convex optimization approach are employed. Finally, a numerical example is given to illustrate the effectiveness of the proposed method. © 2011 Elsevier Inc. All rights reserved.

1. Introduction

In recent decades, neural networks have been investigated extensively because of their successful applications in various areas such as pattern recognition, image processing, associative memory and combinatorial optimization. These successful applications, however, depend greatly on the dynamic behaviors of neural networks. As is now well known, stability is one of the main properties of neural networks and a crucial feature in their design. On the other hand, it has been recognized that time delays often occur in various neural networks and may cause undesirable dynamic behaviors such as oscillation and instability. Therefore, the stability analysis of delayed neural networks has become a topic of great theoretical and practical importance in recent years [1–27].

Recently, systems with Markovian jumps have been attracting increasing research attention. This class of systems is a family of hybrid systems with two components in the state. The first one refers to the mode, which is described by a continuous-time finite-state Markovian process, and the second one refers to the state, which is represented by a system of differential equations. Markovian jump systems have the advantage of modeling dynamic systems subject to abrupt variations in their structures, such as component failures or repairs, sudden environmental disturbances, changing subsystem interconnections, and operation at different points of a nonlinear plant [28]. Recently, there has been a growing interest in the study of neural networks with Markovian jumping parameters [29–38]. In [29], the problem of stochastic robust stability for uncertain delayed neural networks with Markovian jumping parameters is investigated. The state estimation problem for a class of Markovian neural networks with discrete and distributed time-delays is studied in [30]. Without assuming the boundedness, monotonicity and differentiability of the activation functions, some delay-dependent stochastic stability criteria for Markovian jumping Hopfield neural networks with time-delay are developed in [31].



Some new delay-dependent stochastic stability criteria for BAM neural networks with Markovian jumping parameters are derived in [32] based on a delay partitioning idea. To the best of our knowledge, the stochastic stability analysis of Markovian jumping neural networks with mode-dependent time-varying delays and partially known transition rates has never been tackled, and this situation motivates the present study.

In this paper, the problem of stochastic stability of Markovian jumping neural networks with mode-dependent time-varying delays and partially known transition rates is considered. By choosing a new class of Lyapunov functional, some new delay-dependent stochastic stability criteria are derived to guarantee the stochastic stability of Markovian jumping neural networks. The obtained criteria are less conservative because the free-weighting matrix method and a convex optimization approach are employed. Finally, a numerical example is given to show the effectiveness of the derived method.

2. Problem formulation

Consider the following delayed neural network:

\dot{x}(t) = -Ax(t) + Bg(x(t)) + Cg(x(t-h(t))) + D \int_{t-d(t)}^{t} g(x(s))\,ds + \mu,   (1)

x(t) = \Phi(t), \quad t \in [-\bar{h}, 0],   (2)

where x(t) = [x_1(t), x_2(t), \ldots, x_n(t)]^T \in R^n is the neuron state vector, g(x(\cdot)) = [g_1(x_1(\cdot)), g_2(x_2(\cdot)), \ldots, g_n(x_n(\cdot))]^T \in R^n denotes the neuron activation function, and \mu = (\mu_1, \mu_2, \ldots, \mu_n)^T \in R^n is a constant input vector. B, C, D \in R^{n \times n} are the connection weight matrix and the delayed connection weight matrices, respectively. A = diag(a_1, a_2, \ldots, a_n) with a_i > 0, i = 1, 2, \ldots, n. h(t) and d(t) are time-varying continuous functions that satisfy 0 \le h(t) \le h, 0 \le d(t) \le d, \dot{h}(t) \le u, where h, d and u are constants. The initial vector \Phi(t) is continuously differentiable on [-\bar{h}, 0], where \bar{h} = \max\{d, h\}. In addition, it is assumed that each neuron activation function g_i(\cdot), i = 1, 2, \ldots, n, is bounded and satisfies the following condition:

c_i^- \le \frac{g_i(x) - g_i(y)}{x - y} \le c_i^+, \quad \forall x, y \in R,\ x \ne y,\ i = 1, 2, \ldots, n,   (3)

where c_i^-, c_i^+, i = 1, 2, \ldots, n, are constants. Note that, by using Brouwer's fixed-point theorem, it can easily be proven that there exists at least one equilibrium point for system (1). Assuming that x^* = [x_1^*, x_2^*, \ldots, x_n^*]^T is the equilibrium point of (1) and using the transformation z(\cdot) = x(\cdot) - x^*, (1) can be converted to the following system:

\dot{z}(t) = -Az(t) + Bf(z(t)) + Cf(z(t-h(t))) + D \int_{t-d(t)}^{t} f(z(s))\,ds,   (4)

where z(t) = [z_1(t), z_2(t), \ldots, z_n(t)]^T, f(z(\cdot)) = [f_1(z_1(\cdot)), f_2(z_2(\cdot)), \ldots, f_n(z_n(\cdot))]^T and f_i(z_i(\cdot)) = g_i(z_i(\cdot) + x_i^*) - g_i(x_i^*), i = 1, 2, \ldots, n. According to inequality (3), one can obtain:

c_i^- \le \frac{f_i(z_i(t))}{z_i(t)} \le c_i^+, \quad f_i(0) = 0, \quad i = 1, 2, \ldots, n.   (5)
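As a concrete illustration of (3) and (5) (the specific activation below is an assumption made only for this sketch, not a choice made in the paper), the commonly used activation g_i(x) = tanh(x) satisfies the sector condition with c_i^- = 0 and c_i^+ = 1. A minimal Python check on random samples:

```python
import numpy as np

# Sketch: verify numerically that g(x) = tanh(x) satisfies the sector
# condition (3) with c^- = 0 and c^+ = 1, i.e.
#   0 <= (g(x) - g(y)) / (x - y) <= 1  for all x != y.
# The choice of tanh is illustrative only; the paper only assumes (3).
rng = np.random.default_rng(0)
x = rng.uniform(-10.0, 10.0, size=100_000)
y = rng.uniform(-10.0, 10.0, size=100_000)
keep = np.abs(x - y) > 1e-9                       # avoid division by ~0
ratio = (np.tanh(x[keep]) - np.tanh(y[keep])) / (x[keep] - y[keep])
print(ratio.min(), ratio.max())                   # stays inside [0, 1]
assert 0.0 <= ratio.min() and ratio.max() <= 1.0
```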

Consider the probability space (\Omega, \mathcal{F}, P), where \Omega is the sample space, \mathcal{F} is the \sigma-algebra of subsets of the sample space, and P is the probability measure defined on \mathcal{F}. Let S = \{1, 2, \ldots, N\} and let the random process \{r(t), t \in [0, +\infty)\} be a homogeneous, finite-state Markovian process with right-continuous trajectories, generator \Pi = (\pi_{ij})_{N \times N} and transition probability from mode i at time t to mode j at time t + \Delta t, i, j \in S:

P\{r(t+\Delta t) = j \mid r(t) = i\} = \begin{cases} \pi_{ij}\Delta t + o(\Delta t), & j \ne i, \\ 1 + \pi_{ii}\Delta t + o(\Delta t), & j = i, \end{cases}   (6)

with transition rates \pi_{ij} \ge 0 for i, j \in S, j \ne i, and \pi_{ii} = -\sum_{j=1, j\ne i}^{N} \pi_{ij}, where \Delta t > 0 and \lim_{\Delta t \to 0} o(\Delta t)/\Delta t = 0. In addition, the transition rates of the Markovian chain are considered to be only partially available; namely, some elements of the matrix \Pi are time-invariant but unknown. For instance, a system with three operation modes may have a transition rate matrix \Pi of the following form:

\Pi = \begin{bmatrix} \pi_{11} & ? & ? \\ ? & \pi_{22} & ? \\ \pi_{31} & \pi_{32} & \pi_{33} \end{bmatrix},

where "?" represents an inaccessible element. For notational clarity, for every i \in S we denote S = S_{kn}^{i} \cup S_{uk}^{i} with S_{kn}^{i} \triangleq \{j : \pi_{ij}\ \text{is known}\}, S_{uk}^{i} \triangleq \{j : \pi_{ij}\ \text{is unknown}\}, and \pi_{kn}^{i} \triangleq \sum_{j \in S_{kn}^{i}} \pi_{ij} throughout the paper. Furthermore, we assume that the diagonal elements of \Pi are known. Note that the set S comprises the various operational modes of the system under study. In this paper, we consider the Markovian jumping neural networks with mode-dependent time-varying delays and partially known transition rates described by the following nonlinear differential equations:

\dot{x}(t) = -A(r(t))x(t) + B(r(t))g(x(t)) + C(r(t))g(x(t - h(r(t), t))) + D(r(t)) \int_{t-d(r(t),t)}^{t} g(x(s))\,ds + \mu.   (7)

Similar to the above analysis, we can obtain:

\dot{z}(t) = -A(r(t))z(t) + B(r(t))f(z(t)) + C(r(t))f(z(t - h(r(t), t))) + D(r(t)) \int_{t-d(r(t),t)}^{t} f(z(s))\,ds,   (8)

where, when r(t) = i \in S, the matrices A(r(t)), B(r(t)), C(r(t)), D(r(t)) are denoted by A_i, B_i, C_i, D_i. In system (8), h_i(t) and d_i(t) denote the mode-dependent time-varying delays, which satisfy 0 \le h_i(t) \le h_i, 0 \le d_i(t) \le d_i, \dot{h}_i(t) \le u_i, and we set \tilde{h} = \max_{j\in S}\{h_j\}, \tilde{d} = \max_{j\in S}\{d_j\}, \tau = \max\{\tilde{h}, \tilde{d}\}.

The initial condition of system (8) is of the following form:

z(t) = \varphi(t), \quad t \in [-\tau, 0], \quad r(0) = r_0.   (9)
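The mode process r(t) in (6)–(9) is an ordinary continuous-time Markov chain, so once numerical values are fixed for every entry of \Pi (in the partially known case, admissible values must first be substituted for the unknown rates), it can be generated by drawing an exponential holding time with rate -\pi_{ii} in the current mode and then jumping with probabilities \pi_{ij}/(-\pi_{ii}). The following Python sketch is an illustration only; it is not part of the paper and the generator values are hypothetical.

```python
import numpy as np

def simulate_modes(Pi, r0, T, rng):
    """Simulate the mode process r(t) of (6): a continuous-time Markov chain
    with generator Pi (rows sum to zero, off-diagonal entries >= 0)."""
    t, r = 0.0, r0
    path = [(0.0, r0)]
    while t < T:
        rate = -Pi[r, r]                        # exponential holding time in mode r
        if rate <= 0.0:                         # absorbing mode, stop
            break
        t += rng.exponential(1.0 / rate)
        p = Pi[r].clip(min=0.0)
        p[r] = 0.0
        r = rng.choice(len(Pi), p=p / p.sum())  # jump with prob. pi_rj / (-pi_rr)
        path.append((t, r))
    return path                                 # piecewise-constant mode trajectory

# Hypothetical three-mode generator, used only to exercise the function.
Pi = np.array([[-0.8, 0.3, 0.5],
               [ 0.1, -0.8, 0.7],
               [ 0.7, 0.4, -1.1]])
print(simulate_modes(Pi, r0=0, T=10.0, rng=np.random.default_rng(1))[:5])
```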

Definition 1. The Markovian system (8) is said to be stochastically stable if, for any \varphi(t) defined on [-\tau, 0] and any r(0) \in S, the following condition is satisfied:

\lim_{t\to\infty} E\left\{ \int_{0}^{t} z^{T}(s)z(s)\,ds \,\Big|\, \varphi, r_0 \right\} < \infty.   (10)

Definition 2. Let \Theta_1, \Theta_2, \ldots, \Theta_N : R^m \to R^n be a given finite number of functions that have positive values in an open subset D of R^m. Then a reciprocally convex combination of these functions over D is a function of the form

\frac{1}{\alpha_1}\Theta_1 + \frac{1}{\alpha_2}\Theta_2 + \cdots + \frac{1}{\alpha_N}\Theta_N : D \to R^n,   (11)

where the real numbers \alpha_i satisfy \alpha_i > 0 and \sum_i \alpha_i = 1. The following Lemma 1 gives a lower bound for a reciprocally convex combination of scalar positive functions \Theta_i = f_i.

Lemma 1 [39]. Let f_1, f_2, \ldots, f_N : R^m \to R have positive values in an open subset D of R^m. Then the reciprocally convex combination of f_i over D satisfies

\min_{\{\alpha_i \,|\, \alpha_i > 0,\ \sum_i \alpha_i = 1\}} \sum_i \frac{1}{\alpha_i} f_i(t) = \sum_i f_i(t) + \max_{g_{i,j}(t)} \sum_{i\ne j} g_{i,j}(t)   (12)

subject to

g_{i,j} : R^m \to R, \quad g_{j,i}(t) \triangleq g_{i,j}(t), \quad \begin{bmatrix} f_i(t) & g_{i,j}(t) \\ g_{i,j}(t) & f_j(t) \end{bmatrix} \ge 0.   (13)
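In particular, for N = 2, which is the only case needed later in (49), Lemma 1 yields the following matrix bound (a standard consequence of Lemma 1, recorded here for the reader's convenience; it is not stated separately in the paper): for any vectors x_1, x_2, any matrix R = R^T > 0 and any \alpha \in (0, 1),

\frac{1}{\alpha}x_1^{T}Rx_1 + \frac{1}{1-\alpha}x_2^{T}Rx_2 \;\ge\; \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}^{T}\begin{bmatrix} R & S \\ S^{T} & R \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \quad \text{whenever} \quad \begin{bmatrix} R & S \\ S^{T} & R \end{bmatrix} \ge 0,

which follows from (12), (13) with f_1 = x_1^{T}Rx_1, f_2 = x_2^{T}Rx_2 and g_{1,2} = x_1^{T}Sx_2.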

Lemma 2 [40]. For any constant matrix Z \in R^{n\times n}, Z = Z^{T} > 0, and scalar h > 0 such that the following integrations are well defined, it holds that

-h \int_{t-h}^{t} x^{T}(s)Zx(s)\,ds \le -\left( \int_{t-h}^{t} x(s)\,ds \right)^{T} Z \left( \int_{t-h}^{t} x(s)\,ds \right).   (14)
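As a quick numerical illustration of Lemma 2 (not part of the paper), the following Python fragment approximates both sides of (14), rewritten with the signs flipped, for a sample trajectory x(s) and a randomly generated positive definite Z; the inequality holds with room to spare.

```python
import numpy as np

# Sketch: check h * int_{t-h}^{t} x(s)^T Z x(s) ds  >=  (int x ds)^T Z (int x ds),
# which is inequality (14) of Lemma 2 multiplied by -1.
rng = np.random.default_rng(0)
n, h, m = 2, 1.5, 4000
s = np.linspace(-h, 0.0, m)                          # integrate over [t - h, t], t = 0
ds = s[1] - s[0]
x = np.stack([np.sin(3 * s) + 0.5, np.cos(2 * s)])   # sample trajectory, shape (n, m)

M = rng.standard_normal((n, n))
Z = M @ M.T + n * np.eye(n)                          # symmetric positive definite Z

lhs = h * np.einsum('im,ij,jm->m', x, Z, x).sum() * ds   # h * int x^T Z x ds
v = x.sum(axis=1) * ds                                   # int x ds
rhs = v @ Z @ v
print(lhs, rhs)
assert lhs >= rhs
```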

3. Main results

In this section, a new Lyapunov functional is constructed to derive a delay-dependent stochastic stability criterion for system (8) when the time-varying delays are mode-dependent and the transition rates are partially known.

Theorem 1. For given scalars h_i \ge 0, d_i > 0 and u_i, the system (8) with mode-dependent time-varying delays and partially known transition rates is stochastically stable if there exist symmetric positive definite matrices P_i, Q_{1i}, Q_{2i}, Q_{3i}, Q_{4i}, R_1, R_2, R_3, R_4, R_5, R_6, R_7, positive diagonal matrices T_{i1}, T_{i2} and any matrices S_{12}, L_i, N_i, M_i with appropriate dimensions, for any i = 1, 2, \ldots, N, such that the following LMIs hold:

\begin{bmatrix} R_7 & S_{12} \\ * & R_7 \end{bmatrix} \ge 0,   (15)


\begin{bmatrix} (1+\pi_{kn}^{i})(-R_1+\pi_{ii}Q_{1i}) & \sum_{j\ne i,\, j\in S_{kn}^{i}} \pi_{ij}Q_{1j} \\ * & -\sum_{j\ne i,\, j\in S_{kn}^{i}} \pi_{ij}Q_{1j} \end{bmatrix} < 0,   (16)

\begin{bmatrix} -R_1+\pi_{ii}Q_{1i} & Q_{1j} \\ * & -Q_{1j} \end{bmatrix} < 0, \quad \forall j \in S_{uk}^{i},   (17)

\begin{bmatrix} (1+\pi_{kn}^{i})(-R_2+\pi_{ii}Q_{2i}) & \sum_{j\ne i,\, j\in S_{kn}^{i}} \pi_{ij}Q_{2j} \\ * & -\sum_{j\ne i,\, j\in S_{kn}^{i}} \pi_{ij}Q_{2j} \end{bmatrix} < 0,   (18)

\begin{bmatrix} -R_2+\pi_{ii}Q_{2i} & Q_{2j} \\ * & -Q_{2j} \end{bmatrix} < 0, \quad \forall j \in S_{uk}^{i},   (19)

\begin{bmatrix} (1+\pi_{kn}^{i})(-R_3+\pi_{ii}Q_{3i}) & \sum_{j\ne i,\, j\in S_{kn}^{i}} \pi_{ij}Q_{3j} \\ * & -\sum_{j\ne i,\, j\in S_{kn}^{i}} \pi_{ij}Q_{3j} \end{bmatrix} < 0,   (20)

\begin{bmatrix} -R_3+\pi_{ii}Q_{3i} & Q_{3j} \\ * & -Q_{3j} \end{bmatrix} < 0, \quad \forall j \in S_{uk}^{i},   (21)

\begin{bmatrix} (1+\pi_{kn}^{i})(-R_4+\pi_{ii}Q_{4i}) & \sum_{j\ne i,\, j\in S_{kn}^{i}} \pi_{ij}Q_{4j} \\ * & -\sum_{j\ne i,\, j\in S_{kn}^{i}} \pi_{ij}Q_{4j} \end{bmatrix} < 0,   (22)

\begin{bmatrix} -R_4+\pi_{ii}Q_{4i} & Q_{4j} \\ * & -Q_{4j} \end{bmatrix} < 0, \quad \forall j \in S_{uk}^{i},   (23)

\begin{bmatrix} (1+\pi_{kn}^{i})(-R_5+\pi_{ii}d_iQ_{4i}) & \sum_{j\ne i,\, j\in S_{kn}^{i}} \pi_{ij}d_jQ_{4i} \\ * & -\sum_{j\ne i,\, j\in S_{kn}^{i}} \pi_{ij}d_jQ_{4i} \end{bmatrix} < 0,   (24)

\begin{bmatrix} -R_5+\pi_{ii}d_iQ_{4i} & d_jQ_{4i} \\ * & -d_jQ_{4i} \end{bmatrix} < 0, \quad \forall j \in S_{uk}^{i},   (25)

\begin{bmatrix} (1+\pi_{kn}^{i})\Xi_i & h_i(1+\pi_{kn}^{i})L_i & \Xi_{1i}^{kn} \\ * & -h_i(1+\pi_{kn}^{i})R_6 & 0 \\ * & * & \Xi_{2i}^{kn} \end{bmatrix} < 0,   (26)

\begin{bmatrix} \Xi_i & h_iL_i & \Xi_{1i}^{uk} \\ * & -h_iR_6 & 0 \\ * & * & \Xi_{2i}^{uk} \end{bmatrix} < 0, \quad \forall j \in S_{uk}^{i},   (27)

\begin{bmatrix} (1+\pi_{kn}^{i})\Xi_i & h_i(1+\pi_{kn}^{i})N_i & \Xi_{1i}^{kn} \\ * & -h_i(1+\pi_{kn}^{i})R_6 & 0 \\ * & * & \Xi_{2i}^{kn} \end{bmatrix} < 0,   (28)

\begin{bmatrix} \Xi_i & h_iN_i & \Xi_{1i}^{uk} \\ * & -h_iR_6 & 0 \\ * & * & \Xi_{2i}^{uk} \end{bmatrix} < 0, \quad \forall j \in S_{uk}^{i},   (29)


where

\Xi_i = \begin{bmatrix}
\Xi_{11} & \Xi_{12} & \Xi_{13} & M_{i1}C_i & \Xi_{15} & \Xi_{16} & M_{i1}D_i \\
* & \Xi_{22} & \Xi_{23} & \Xi_{24} & 0 & 0 & 0 \\
* & * & \Xi_{33} & 0 & 0 & 0 & 0 \\
* & * & * & \Xi_{44} & C_i^{T}M_{i2}^{T} & 0 & 0 \\
* & * & * & * & \Xi_{55} & M_{i2}B_i & M_{i2}D_i \\
* & * & * & * & * & \Xi_{66} & 0 \\
* & * & * & * & * & * & -\frac{1}{d_i}Q_{4i}
\end{bmatrix},

\Xi_{1i}^{kn} = \begin{bmatrix} X_{i1} & 0 & 0 & 0 \\ 0 & X_{i2} & 0 & 0 \\ 0 & 0 & X_{i3} & 0 \\ 0 & 0 & 0 & X_{i4} \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad
\Xi_{2i}^{kn} = -\mathrm{diag}\{X_{i1}, X_{i2}, X_{i3}, X_{i4}\},

\Xi_{1i}^{uk} = \begin{bmatrix} P_j & 0 & 0 & 0 \\ 0 & h_jQ_{1i} & 0 & 0 \\ 0 & 0 & h_jQ_{2i} & 0 \\ 0 & 0 & 0 & h_jQ_{3i} \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad
\Xi_{2i}^{uk} = -\mathrm{diag}\{P_j, h_jQ_{1i}, h_jQ_{2i}, h_jQ_{3i}\},

\Xi_{11} = \pi_{ii}P_i + Q_{1i} + Q_{2i} + \tilde{h}(R_1+R_2) - R_7 + L_{i1} + L_{i1}^{T} - M_{i1}A_i - A_i^{T}M_{i1}^{T} - 2\Gamma_1 T_{i1}\Gamma_2,
\Xi_{12} = R_7 - S_{12}^{T} + L_{i2}^{T} - L_{i1} + N_{i1},
\Xi_{13} = S_{12}^{T} + L_{i3}^{T} - N_{i1},
\Xi_{15} = P_i - M_{i1} - A_i^{T}M_{i2}^{T},
\Xi_{16} = M_{i1}B_i + T_{i1}(\Gamma_1+\Gamma_2),
\Xi_{22} = -2R_7 + S_{12} + S_{12}^{T} - (1-u_i)Q_{1i} - L_{i2} - L_{i2}^{T} + N_{i2} + N_{i2}^{T} - 2\Gamma_1 T_{i2}\Gamma_2,
\Xi_{23} = R_7 - S_{12}^{T} - L_{i3}^{T} + N_{i3}^{T} - N_{i2},
\Xi_{24} = T_{i2}(\Gamma_1+\Gamma_2),
\Xi_{33} = -R_7 - Q_{2i} + \pi_{ii}h_iQ_{2i} - N_{i3} - N_{i3}^{T},
\Xi_{44} = -(1-u_i)Q_{3i} - 2T_{i2},
\Xi_{55} = \tilde{h}R_6 + \tilde{h}^{2}R_7 - M_{i2} - M_{i2}^{T},
\Xi_{66} = Q_{3i} + d_iQ_{4i} + \tilde{h}R_3 + \frac{\tilde{d}^{2}}{2}R_4 + \tilde{d}R_5 - 2T_{i1},
X_{i1} = \sum_{j\ne i,\, j\in S_{kn}^{i}} \pi_{ij}P_j, \quad X_{i2} = \sum_{j\ne i,\, j\in S_{kn}^{i}} \pi_{ij}h_jQ_{1i}, \quad X_{i3} = \sum_{j\ne i,\, j\in S_{kn}^{i}} \pi_{ij}h_jQ_{2i}, \quad X_{i4} = \sum_{j\ne i,\, j\in S_{kn}^{i}} \pi_{ij}h_jQ_{3i},
\Gamma_1 = \mathrm{diag}\{c_1^-, c_2^-, \ldots, c_n^-\}, \quad \Gamma_2 = \mathrm{diag}\{c_1^+, c_2^+, \ldots, c_n^+\},
e_1 = (I\ 0\ 0\ 0\ 0\ 0\ 0)^{T}, \quad e_2 = (0\ I\ 0\ 0\ 0\ 0\ 0)^{T}, \quad e_3 = (0\ 0\ I\ 0\ 0\ 0\ 0)^{T},
N_i = (N_{i1}^{T}\ N_{i2}^{T}\ N_{i3}^{T}\ 0\ 0\ 0\ 0)^{T}, \quad L_i = (L_{i1}^{T}\ L_{i2}^{T}\ L_{i3}^{T}\ 0\ 0\ 0\ 0)^{T}, \quad M_i = (M_{i1}^{T}\ 0\ 0\ 0\ M_{i2}^{T}\ 0\ 0)^{T}.

Proof. Construct a new class of Lyapunov functional candidate as follows:

V(z(t), i) = \sum_{j=1}^{9} V_j(z(t), i),   (30)

where

V_1(z(t), i) = z^{T}(t)P(r(t))z(t),
V_2(z(t), i) = \int_{t-h(r(t),t)}^{t} z^{T}(s)Q_1(r(t))z(s)\,ds,
V_3(z(t), i) = \int_{t-h(r(t))}^{t} z^{T}(s)Q_2(r(t))z(s)\,ds,
V_4(z(t), i) = \int_{t-h(r(t),t)}^{t} f^{T}(z(s))Q_3(r(t))f(z(s))\,ds,
V_5(z(t), i) = \int_{-d(r(t))}^{0} \int_{t+\theta}^{t} f^{T}(z(s))Q_4(r(t))f(z(s))\,ds\,d\theta,
V_6(z(t), i) = \int_{-\tilde{h}}^{0} \int_{t+\theta}^{t} z^{T}(s)(R_1+R_2)z(s)\,ds\,d\theta + \int_{-\tilde{h}}^{0} \int_{t+\theta}^{t} f^{T}(z(s))R_3 f(z(s))\,ds\,d\theta,
V_7(z(t), i) = \int_{-\tilde{d}}^{0} \int_{\theta}^{0} \int_{t+\lambda}^{t} f^{T}(z(s))R_4 f(z(s))\,ds\,d\lambda\,d\theta + \int_{-\tilde{d}}^{0} \int_{t+\theta}^{t} f^{T}(z(s))R_5 f(z(s))\,ds\,d\theta,
V_8(z(t), i) = \int_{-\tilde{h}}^{0} \int_{t+\theta}^{t} \dot{z}^{T}(s)R_6\dot{z}(s)\,ds\,d\theta,
V_9(z(t), i) = \tilde{h} \int_{-\tilde{h}}^{0} \int_{t+\theta}^{t} \dot{z}^{T}(s)R_7\dot{z}(s)\,ds\,d\theta,


where Pi, Q1i, Q2i, Q3i, Q4i, R1, R2, R3, R4, R5, R6, R7, i = 1, 2, . . . , N, are positive definite matrices and:

\sum_{j=1}^{N} \pi_{ij}Q_{kj} < R_k, \quad k = 1, 2, 3, 4,   (31)

\sum_{j=1}^{N} \pi_{ij}d_jQ_{4i} < R_5.   (32)

Remark 1. Our paper fully uses the information about the mode-dependent time-varying delays and the mode-dependent positive definite matrices, whereas [35] only uses the information about mode-dependent positive definite matrices when constructing the Lyapunov functional V(z(t), i). Hence the Lyapunov functional V(z(t), i) in our paper is more general than that in [35], and the stability criteria in our paper may be more widely applicable.

Let \mathcal{L} be the infinitesimal generator of the random process \{z_t, t \ge 0\}. Then, for each r(t) = i, i \in S, it can be shown that:

\mathcal{L}\left\{ \int_{t-h(r(t),t)}^{t} z^{T}(s)Q_1(r(t))z(s)\,ds \right\}
= \lim_{\Delta\to 0^{+}} \frac{1}{\Delta}\left[ E\left\{ \int_{t+\Delta-h(r(t+\Delta),\,t+\Delta)}^{t+\Delta} z^{T}(s)Q_1(r(t+\Delta))z(s)\,ds \,\Big|\, r(t)=i \right\} - \int_{t-h_i(t)}^{t} z^{T}(s)Q_{1i}z(s)\,ds \right]
= \lim_{\Delta\to 0^{+}} \frac{1}{\Delta}\left[ \int_{t+\Delta-h_i(t+\Delta)-\sum_{j=1}^{N}(\pi_{ij}\Delta+o(\Delta))h_j(t+\Delta)}^{t+\Delta} z^{T}(s)\Big( Q_{1i} + \sum_{j=1}^{N}(\pi_{ij}\Delta+o(\Delta))Q_{1j} \Big) z(s)\,ds - \int_{t-h_i(t)}^{t} z^{T}(s)Q_{1i}z(s)\,ds \right]
= \lim_{\Delta\to 0^{+}} \frac{1}{\Delta}\left[ \int_{t+\Delta-h_i(t+\Delta)-\sum_{j=1}^{N}(\pi_{ij}\Delta+o(\Delta))h_j(t+\Delta)}^{t+\Delta} z^{T}(s)Q_{1i}z(s)\,ds - \int_{t-h_i(t)}^{t} z^{T}(s)Q_{1i}z(s)\,ds \right]
\quad + \lim_{\Delta\to 0^{+}} \frac{1}{\Delta} \int_{t+\Delta-h_i(t+\Delta)-\sum_{j=1}^{N}(\pi_{ij}\Delta+o(\Delta))h_j(t+\Delta)}^{t+\Delta} z^{T}(s)\sum_{j=1}^{N}(\pi_{ij}\Delta+o(\Delta))Q_{1j}\,z(s)\,ds
= \lim_{\Delta\to 0^{+}} \frac{1}{\Delta} \int_{t}^{t+\Delta} z^{T}(s)Q_{1i}z(s)\,ds + \lim_{\Delta\to 0^{+}} \frac{1}{\Delta} \int_{t+\Delta-h_i(t+\Delta)-\sum_{j=1}^{N}(\pi_{ij}\Delta+o(\Delta))h_j(t+\Delta)}^{t-h_i(t)} z^{T}(s)Q_{1i}z(s)\,ds + \int_{t-h_i(t)}^{t} z^{T}(s)\Big(\sum_{j=1}^{N}\pi_{ij}Q_{1j}\Big)z(s)\,ds
= z^{T}(t)Q_{1i}z(t) - \Big(1 - \dot{h}_i(t) - \sum_{j=1}^{N}\pi_{ij}h_j(t)\Big) z^{T}(t-h_i(t))Q_{1i}z(t-h_i(t)) + \int_{t-h_i(t)}^{t} z^{T}(s)\Big(\sum_{j=1}^{N}\pi_{ij}Q_{1j}\Big)z(s)\,ds,

and, for a double-integral term,

\mathcal{L}\left\{ \int_{-h(r(t))}^{0} \int_{t+\theta}^{t} x^{T}(s)Q(r(t))x(s)\,ds\,d\theta \right\}\bigg|_{r(t)=i}
= \lim_{\Delta\to 0^{+}} \frac{1}{\Delta}\left[ E\left\{ \int_{-h(r(t+\Delta))}^{0} \int_{t+\Delta+\theta}^{t+\Delta} x^{T}(s)Q(r(t+\Delta))x(s)\,ds\,d\theta \,\Big|\, r(t)=i \right\} - \int_{-h_i}^{0} \int_{t+\theta}^{t} x^{T}(s)Q_i x(s)\,ds\,d\theta \right]
= \lim_{\Delta\to 0^{+}} \frac{1}{\Delta}\left[ \int_{-h_i-\sum_{j=1}^{N}(\pi_{ij}\Delta+o(\Delta))h_j}^{0} \int_{t+\Delta+\theta}^{t+\Delta} x^{T}(s)\Big(Q_i + \sum_{j=1}^{N}(\pi_{ij}\Delta+o(\Delta))Q_j\Big)x(s)\,ds\,d\theta - \int_{-h_i}^{0} \int_{t+\theta}^{t} x^{T}(s)Q_i x(s)\,ds\,d\theta \right]
= \sum_{j=1}^{N}\pi_{ij} \int_{-h_j}^{0} \int_{t+\theta}^{t} x^{T}(s)Q_j x(s)\,ds\,d\theta + \sum_{j=1}^{N}\pi_{ij}h_j \int_{t-h_i}^{t} x^{T}(s)Q_i x(s)\,ds + \int_{-h_i}^{0} \big[ x^{T}(t)Q_i x(t) - x^{T}(t+\theta)Q_i x(t+\theta) \big]\,d\theta
= \sum_{j=1}^{N}\pi_{ij} \int_{-h_j}^{0} \int_{t+\theta}^{t} x^{T}(s)Q_j x(s)\,ds\,d\theta - \Big(1 - \sum_{j=1}^{N}\pi_{ij}h_j\Big) \int_{t-h_i}^{t} x^{T}(s)Q_i x(s)\,ds + h_i x^{T}(t)Q_i x(t).

Similar to the process above, we can obtain:

\mathcal{L}V_1(z(t), i) = 2z^{T}(t)P_i\dot{z}(t) + z^{T}(t)\sum_{j=1}^{N}\pi_{ij}P_j\,z(t),   (33)

\mathcal{L}V_2(z(t), i) = z^{T}(t)Q_{1i}z(t) - (1-\dot{h}_i(t))z^{T}(t-h_i(t))Q_{1i}z(t-h_i(t)) + \sum_{j=1}^{N}\pi_{ij}h_j(t)\,z^{T}(t-h_i(t))Q_{1i}z(t-h_i(t)) + \int_{t-h_i(t)}^{t} z^{T}(s)\Big(\sum_{j=1}^{N}\pi_{ij}Q_{1j}\Big)z(s)\,ds,   (34)

\mathcal{L}V_3(z(t), i) = z^{T}(t)Q_{2i}z(t) - z^{T}(t-h_i)Q_{2i}z(t-h_i) + \sum_{j=1}^{N}\pi_{ij}h_j\,z^{T}(t-h_i)Q_{2i}z(t-h_i) + \int_{t-h_i}^{t} z^{T}(s)\Big(\sum_{j=1}^{N}\pi_{ij}Q_{2j}\Big)z(s)\,ds,   (35)

\mathcal{L}V_4(z(t), i) = f^{T}(z(t))Q_{3i}f(z(t)) - (1-\dot{h}_i(t))f^{T}(z(t-h_i(t)))Q_{3i}f(z(t-h_i(t))) + \sum_{j=1}^{N}\pi_{ij}h_j(t)\,f^{T}(z(t-h_i(t)))Q_{3i}f(z(t-h_i(t))) + \int_{t-h_i(t)}^{t} f^{T}(z(s))\Big(\sum_{j=1}^{N}\pi_{ij}Q_{3j}\Big)f(z(s))\,ds.   (36)

By Lemma 2, it is easy to obtain:

\mathcal{L}V_5(z(t), i) = \sum_{j=1}^{N}\pi_{ij}\int_{-d_j}^{0}\int_{t+\theta}^{t} f^{T}(z(s))Q_{4j}f(z(s))\,ds\,d\theta - \Big(1-\sum_{j=1}^{N}\pi_{ij}d_j\Big)\int_{t-d_i}^{t} f^{T}(z(s))Q_{4i}f(z(s))\,ds + d_i f^{T}(z(t))Q_{4i}f(z(t))
\le \sum_{j=1}^{N}\pi_{ij}\int_{-d_j}^{0}\int_{t+\theta}^{t} f^{T}(z(s))Q_{4j}f(z(s))\,ds\,d\theta + \sum_{j=1}^{N}\pi_{ij}d_j\int_{t-d_i}^{t} f^{T}(z(s))Q_{4i}f(z(s))\,ds - \frac{1}{d_i}\left(\int_{t-d_i(t)}^{t} f(z(s))\,ds\right)^{T} Q_{4i} \left(\int_{t-d_i(t)}^{t} f(z(s))\,ds\right) + d_i f^{T}(z(t))Q_{4i}f(z(t)),   (37)

\mathcal{L}V_6(z(t), i) = \tilde{h}z^{T}(t)(R_1+R_2)z(t) + \tilde{h}f^{T}(z(t))R_3 f(z(t)) - \int_{t-\tilde{h}}^{t} z^{T}(s)(R_1+R_2)z(s)\,ds - \int_{t-\tilde{h}}^{t} f^{T}(z(s))R_3 f(z(s))\,ds,   (38)

\mathcal{L}V_7(z(t), i) = \frac{\tilde{d}^{2}}{2} f^{T}(z(t))R_4 f(z(t)) + \tilde{d}f^{T}(z(t))R_5 f(z(t)) - \int_{-\tilde{d}}^{0}\int_{t+\theta}^{t} f^{T}(z(s))R_4 f(z(s))\,ds\,d\theta - \int_{t-\tilde{d}}^{t} f^{T}(z(s))R_5 f(z(s))\,ds,   (39)

\mathcal{L}V_8(z(t), i) = \tilde{h}\dot{z}^{T}(t)R_6\dot{z}(t) - \int_{t-\tilde{h}}^{t} \dot{z}^{T}(s)R_6\dot{z}(s)\,ds \le \tilde{h}\dot{z}^{T}(t)R_6\dot{z}(t) - \int_{t-h_i}^{t} \dot{z}^{T}(s)R_6\dot{z}(s)\,ds,   (40)

\mathcal{L}V_9(z(t), i) = \tilde{h}^{2}\dot{z}^{T}(t)R_7\dot{z}(t) - \tilde{h}\int_{t-\tilde{h}}^{t} \dot{z}^{T}(s)R_7\dot{z}(s)\,ds \le \tilde{h}^{2}\dot{z}^{T}(t)R_7\dot{z}(t) - h_i\int_{t-h_i}^{t} \dot{z}^{T}(s)R_7\dot{z}(s)\,ds.   (41)

By the Newton–Leibniz formula and (8), for any appropriately dimensioned matrices L_i, N_i, M_i, i = 1, 2, \ldots, N, one can obtain:

2\zeta^{T}(t)L_i\left[ z(t) - z(t-h_i(t)) - \int_{t-h_i(t)}^{t} \dot{z}(s)\,ds \right] = 0,   (42)

2\zeta^{T}(t)N_i\left[ z(t-h_i(t)) - z(t-h_i) - \int_{t-h_i}^{t-h_i(t)} \dot{z}(s)\,ds \right] = 0,   (43)

2\zeta^{T}(t)M_i\left[ -\dot{z}(t) - A_i z(t) + B_i f(z(t)) + C_i f(z(t-h_i(t))) + D_i \int_{t-d_i(t)}^{t} f(z(s))\,ds \right] = 0,   (44)

where

\zeta(t) = \left[ z^{T}(t)\ \ z^{T}(t-h_i(t))\ \ z^{T}(t-h_i)\ \ f^{T}(z(t-h_i(t)))\ \ \dot{z}^{T}(t)\ \ f^{T}(z(t))\ \ \left(\int_{t-d_i(t)}^{t} f(z(s))\,ds\right)^{T} \right]^{T}.

It is easy to obtain that:

-2\zeta^{T}(t)L_i\int_{t-h_i(t)}^{t} \dot{z}(s)\,ds \le h_i(t)\,\zeta^{T}(t)L_iR_6^{-1}L_i^{T}\zeta(t) + \int_{t-h_i(t)}^{t} \dot{z}^{T}(s)R_6\dot{z}(s)\,ds,   (45)

-2\zeta^{T}(t)N_i\int_{t-h_i}^{t-h_i(t)} \dot{z}(s)\,ds \le (h_i-h_i(t))\,\zeta^{T}(t)N_iR_6^{-1}N_i^{T}\zeta(t) + \int_{t-h_i}^{t-h_i(t)} \dot{z}^{T}(s)R_6\dot{z}(s)\,ds,   (46)

-\int_{t-h_i}^{t} \dot{z}^{T}(s)R_7\dot{z}(s)\,ds = -\int_{t-h_i}^{t-h_i(t)} \dot{z}^{T}(s)R_7\dot{z}(s)\,ds - \int_{t-h_i(t)}^{t} \dot{z}^{T}(s)R_7\dot{z}(s)\,ds.   (47)

The term \mathcal{L}V_9(z(t), i) is upper-bounded by

\mathcal{L}V_9(z(t), i) \le \tilde{h}^{2}\dot{z}^{T}(t)R_7\dot{z}(t) - \frac{h_i}{h_i-h_i(t)}\zeta^{T}(t)(e_2-e_3)R_7(e_2-e_3)^{T}\zeta(t) - \frac{h_i}{h_i(t)}\zeta^{T}(t)(e_1-e_2)R_7(e_1-e_2)^{T}\zeta(t)   (48)

\le \tilde{h}^{2}\dot{z}^{T}(t)R_7\dot{z}(t) - \zeta^{T}(t)\begin{bmatrix} e_2-e_3 & e_1-e_2 \end{bmatrix}\begin{bmatrix} R_7 & S_{12} \\ S_{12}^{T} & R_7 \end{bmatrix}\begin{bmatrix} (e_2-e_3)^{T} \\ (e_1-e_2)^{T} \end{bmatrix}\zeta(t),   (49)

where the inequality in (48) comes from Lemma 2 and that in (49) from Lemma 1, via

\begin{bmatrix} \sqrt{b/a}\,(e_2-e_3)^{T}\zeta(t) \\ -\sqrt{a/b}\,(e_1-e_2)^{T}\zeta(t) \end{bmatrix}^{T}\begin{bmatrix} R_7 & S_{12} \\ S_{12}^{T} & R_7 \end{bmatrix}\begin{bmatrix} \sqrt{b/a}\,(e_2-e_3)^{T}\zeta(t) \\ -\sqrt{a/b}\,(e_1-e_2)^{T}\zeta(t) \end{bmatrix} \ge 0,   (50)

where a = \frac{h_i-h_i(t)}{h_i}, b = \frac{h_i(t)}{h_i}. Note that when h_i(t) = h_i or h_i(t) = 0, one obtains \zeta^{T}(t)(e_2-e_3) = 0 or \zeta^{T}(t)(e_1-e_2) = 0, respectively, so relation (49) still holds. Furthermore, there exist positive diagonal matrices T_{i1}, T_{i2} such that the following inequalities hold based on (3):

-2f^{T}(z(t))T_{i1}f(z(t)) + 2z^{T}(t)T_{i1}(\Gamma_1+\Gamma_2)f(z(t)) - 2z^{T}(t)\Gamma_1 T_{i1}\Gamma_2 z(t) \ge 0,   (51)

-2f^{T}(z(t-h_i(t)))T_{i2}f(z(t-h_i(t))) + 2z^{T}(t-h_i(t))T_{i2}(\Gamma_1+\Gamma_2)f(z(t-h_i(t))) - 2z^{T}(t-h_i(t))\Gamma_1 T_{i2}\Gamma_2 z(t-h_i(t)) \ge 0.   (52)

From (30)–(52), one can obtain:

\mathcal{L}V(z(t), i) \le \zeta^{T}(t)\Phi_i\zeta(t),   (53)

where

\Phi_i = \Xi_i + h_i(t)L_iR_6^{-1}L_i^{T} + (h_i-h_i(t))N_iR_6^{-1}N_i^{T} + \mathrm{diag}\{W_{1i}, W_{2i}, W_{3i}, W_{4i}, 0, 0, 0\},

W_{1i} = \sum_{j\ne i,\, j\in S} \pi_{ij}P_j, \quad W_{2i} = \sum_{j\ne i,\, j\in S} \pi_{ij}h_jQ_{1i}, \quad W_{3i} = \sum_{j\ne i,\, j\in S} \pi_{ij}h_jQ_{2i}, \quad W_{4i} = \sum_{j\ne i,\, j\in S} \pi_{ij}h_jQ_{3i}.

Note that 0 \le h_i(t) \le h_i, so h_i(t)L_iR_6^{-1}L_i^{T} + (h_i-h_i(t))N_iR_6^{-1}N_i^{T} can be seen as a convex combination of h_iL_iR_6^{-1}L_i^{T} and h_iN_iR_6^{-1}N_i^{T} with respect to h_i(t). Therefore, \Phi_i < 0 holds if and only if

\Phi_{i1} = \Xi_i + h_iL_iR_6^{-1}L_i^{T} + \mathrm{diag}\{W_{1i}, W_{2i}, W_{3i}, W_{4i}, 0, 0, 0\} < 0,   (54)

\Phi_{i2} = \Xi_i + h_iN_iR_6^{-1}N_i^{T} + \mathrm{diag}\{W_{1i}, W_{2i}, W_{3i}, W_{4i}, 0, 0, 0\} < 0.   (55)

Applying the Schur complement, (31), (32), (54) and (55) can be rewritten respectively as:


\begin{bmatrix} (1+\pi_{kn}^{i})(-R_k+\pi_{ii}Q_{ki}) & \sum_{j\ne i,\, j\in S_{kn}^{i}} \pi_{ij}Q_{kj} \\ * & -\sum_{j\ne i,\, j\in S_{kn}^{i}} \pi_{ij}Q_{kj} \end{bmatrix} + \sum_{j\in S_{uk}^{i}} \pi_{ij}\begin{bmatrix} -R_k+\pi_{ii}Q_{ki} & Q_{kj} \\ * & -Q_{kj} \end{bmatrix} < 0, \quad k = 1, 2, 3, 4,   (56)

\begin{bmatrix} (1+\pi_{kn}^{i})(-R_5+\pi_{ii}d_iQ_{4i}) & \sum_{j\ne i,\, j\in S_{kn}^{i}} \pi_{ij}d_jQ_{4i} \\ * & -\sum_{j\ne i,\, j\in S_{kn}^{i}} \pi_{ij}d_jQ_{4i} \end{bmatrix} + \sum_{j\in S_{uk}^{i}} \pi_{ij}\begin{bmatrix} -R_5+\pi_{ii}d_iQ_{4i} & d_jQ_{4i} \\ * & -d_jQ_{4i} \end{bmatrix} < 0,   (57)

\begin{bmatrix} (1+\pi_{kn}^{i})\Xi_i & h_i(1+\pi_{kn}^{i})L_i & \Xi_{1i}^{kn} \\ * & -h_i(1+\pi_{kn}^{i})R_6 & 0 \\ * & * & \Xi_{2i}^{kn} \end{bmatrix} + \sum_{j\in S_{uk}^{i}} \pi_{ij}\begin{bmatrix} \Xi_i & h_iL_i & \Xi_{1i}^{uk} \\ * & -h_iR_6 & 0 \\ * & * & \Xi_{2i}^{uk} \end{bmatrix} < 0,   (58)

\begin{bmatrix} (1+\pi_{kn}^{i})\Xi_i & h_i(1+\pi_{kn}^{i})N_i & \Xi_{1i}^{kn} \\ * & -h_i(1+\pi_{kn}^{i})R_6 & 0 \\ * & * & \Xi_{2i}^{kn} \end{bmatrix} + \sum_{j\in S_{uk}^{i}} \pi_{ij}\begin{bmatrix} \Xi_i & h_iN_i & \Xi_{1i}^{uk} \\ * & -h_iR_6 & 0 \\ * & * & \Xi_{2i}^{uk} \end{bmatrix} < 0.   (59)

If (16)–(29) hold, then (56)–(59) also hold. From (53)–(55), it is easy to obtain that:

\Phi_i = \frac{h_i(t)}{h_i}\Phi_{i1} + \frac{h_i-h_i(t)}{h_i}\Phi_{i2}.   (60)

Setting \lambda_1 = \min\{\lambda_{\min}(-\Phi_{i1}), \lambda_{\min}(-\Phi_{i2}), i \in S\}, we have \lambda_1 > 0, and for any t \ge 0:

\mathcal{L}V(z(t), i) \le -\lambda_1\zeta^{T}(t)\zeta(t) \le -\lambda_1 z^{T}(t)z(t).   (61)

By Dynkin's formula, one can obtain:

E\{V(z(t), i)\} - E\{V(\varphi, r_0)\} \le -\lambda_1 E\left\{ \int_{0}^{t} z^{T}(s)z(s)\,ds \right\},   (62)

and hence:

E\left\{ \int_{0}^{t} z^{T}(s)z(s)\,ds \right\} \le \frac{1}{\lambda_1} E\{V(\varphi, r_0)\}, \quad t \ge 0.   (63)

Based on Definition 1, the system (8) is stochastically stable when the time-varying delays are mode-dependent and the transition rates are partially known. The proof is completed. □

Remark 2. Theorem 1 develops a stochastic stability criterion for Markovian jumping neural networks with mode-dependent time-varying delays and partially known transition rates. The result of Theorem 1 makes use of the information on the upper bounds of the subsystems' time-varying delays, which may lead to less conservativeness. Moreover, owing to the free-weighting matrix method, the upper bounds u_i are not restricted to be less than 1 in this paper. Therefore, our result is more natural and reasonable for Markovian jumping neural networks.

Remark 3. From (53), it can easily be seen that \Phi_i < 0 is not simply guaranteed by \Xi_i + h_iL_iR_6^{-1}L_i^{T} + h_iN_iR_6^{-1}N_i^{T} + \mathrm{diag}\{W_{1i}, W_{2i}, W_{3i}, W_{4i}, 0, 0, 0\} < 0, but is instead evaluated through the LMIs (54) and (55), which helps reduce much of the conservatism of some existing results.

Remark 4. Theorem 1 directly handles the inversely weighted convex combination of quadratic terms of integral quantities by utilizing the result of Lemma 1, which achieves performance identical to the approaches based on Lemma 2.

Now, the following corollary presents a sufficient condition for Markovian jumping neural networks with mode-dependent time-varying delays and completely known transition rates.

Corollary 1. For given scalars h_i \ge 0, d_i > 0 and u_i, the system (8) with mode-dependent time-varying delays and completely known transition rates is stochastically stable if there exist symmetric positive definite matrices P_i, Q_{1i}, Q_{2i}, Q_{3i}, Q_{4i}, R_1, R_2, R_3, R_4, R_5, R_6, R_7, positive diagonal matrices T_{i1}, T_{i2} and any matrices S_{12}, L_i, N_i, M_i with appropriate dimensions, for any i = 1, 2, \ldots, N, such that LMIs (15), (31), (32) and the following LMIs hold:

\begin{bmatrix} \bar{\Xi}_i & h_iL_i \\ * & -h_iR_6 \end{bmatrix} < 0,   (64)

\begin{bmatrix} \bar{\Xi}_i & h_iN_i \\ * & -h_iR_6 \end{bmatrix} < 0,   (65)

where

\bar{\Xi}_i = \begin{bmatrix}
\bar{\Xi}_{11} & \Xi_{12} & \Xi_{13} & M_{i1}C_i & \Xi_{15} & \Xi_{16} & M_{i1}D_i \\
* & \bar{\Xi}_{22} & \Xi_{23} & \Xi_{24} & 0 & 0 & 0 \\
* & * & \bar{\Xi}_{33} & 0 & 0 & 0 & 0 \\
* & * & * & \bar{\Xi}_{44} & C_i^{T}M_{i2}^{T} & 0 & 0 \\
* & * & * & * & \Xi_{55} & M_{i2}B_i & M_{i2}D_i \\
* & * & * & * & * & \Xi_{66} & 0 \\
* & * & * & * & * & * & -\frac{1}{d_i}Q_{4i}
\end{bmatrix},

\bar{\Xi}_{11} = \Xi_{11} + \sum_{j\ne i,\, j\in S} \pi_{ij}P_j, \quad \bar{\Xi}_{22} = \Xi_{22} + \sum_{j\ne i,\, j\in S} \pi_{ij}h_jQ_{1i}, \quad \bar{\Xi}_{33} = \Xi_{33} + \sum_{j\ne i,\, j\in S} \pi_{ij}h_jQ_{2i}, \quad \bar{\Xi}_{44} = \Xi_{44} + \sum_{j\ne i,\, j\in S} \pi_{ij}h_jQ_{3i}.

Proof. By Theorem 1, the desired result can be obtained easily according to (54) and (55). This completes the proof. □

Remark 5. It is easy to see that conditions (16)–(29) reduce to (31), (32), (64) and (65), respectively, when all the elements in the ith row of \Pi are available.
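Conditions such as (15)–(29) and (64), (65) are LMI feasibility problems and can be checked numerically with a semidefinite programming solver. The following Python sketch only illustrates the mechanics with cvxpy: it imposes condition (15) together with a simple Lyapunov-type inequality on hypothetical placeholder data, and it does not encode the full \Xi_i blocks of Theorem 1.

```python
import cvxpy as cp
import numpy as np

# Sketch of an LMI feasibility check (placeholder data, not the LMIs of Theorem 1).
n = 2
A = np.array([[-2.0, 0.3],
              [ 0.1, -1.5]])                 # a hypothetical Hurwitz mode matrix

P   = cp.Variable((n, n), symmetric=True)
R7  = cp.Variable((n, n), symmetric=True)
S12 = cp.Variable((n, n))

eps = 1e-6
constraints = [
    P  >> eps * np.eye(n),
    R7 >> eps * np.eye(n),
    cp.bmat([[R7, S12], [S12.T, R7]]) >> 0,  # condition (15)
    A.T @ P + P @ A << -eps * np.eye(n),     # stand-in for the Xi_i blocks
]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)                           # 'optimal' means the LMIs are feasible
```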

4. A numerical example

Consider the system (8) with the following parameters:

A_1 = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}, \quad B_1 = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}, \quad C_1 = \begin{bmatrix} 0.88 & 1 \\ 1 & 1 \end{bmatrix}, \quad D_1 = \begin{bmatrix} 0.8 & 0.4 \\ 0.5 & 0.6 \end{bmatrix}, \quad A_2 = \begin{bmatrix} 2.2 & 0 \\ 0 & 1.5 \end{bmatrix},
B_2 = \begin{bmatrix} 1 & 0.6 \\ 0.1 & 0.3 \end{bmatrix}, \quad C_2 = \begin{bmatrix} 1 & 0.1 \\ 0.1 & 0.2 \end{bmatrix}, \quad D_2 = \begin{bmatrix} 1.2 & 0.7 \\ 0.6 & 0.4 \end{bmatrix}, \quad A_3 = \begin{bmatrix} 2.3 & 0 \\ 0 & 2.5 \end{bmatrix}, \quad B_3 = \begin{bmatrix} 0.3 & 0.2 \\ 0.4 & 0.1 \end{bmatrix},
C_3 = \begin{bmatrix} 0.5 & 0.7 \\ 0.7 & 0.4 \end{bmatrix}, \quad D_3 = \begin{bmatrix} 0.5 & 0.3 \\ 0.2 & 1.2 \end{bmatrix}, \quad \Gamma_1 = \begin{bmatrix} 0.2 & 0 \\ 0 & 0.1 \end{bmatrix}, \quad \Gamma_2 = \begin{bmatrix} 0.4 & 0 \\ 0 & 0.8 \end{bmatrix}.

The three cases of transition rate matrices are considered as:

\mathrm{Case\ (1)}: \Pi = \begin{bmatrix} -0.8 & 0.3 & 0.5 \\ 0.1 & -0.8 & 0.7 \\ 0.7 & 0.4 & -1.1 \end{bmatrix}, \quad
\mathrm{Case\ (2)}: \Pi = \begin{bmatrix} -0.8 & ? & ? \\ 0.1 & -0.8 & 0.7 \\ 0.7 & 0.4 & -1.1 \end{bmatrix}, \quad
\mathrm{Case\ (3)}: \Pi = \begin{bmatrix} -0.8 & ? & ? \\ ? & -0.8 & ? \\ 0.7 & 0.4 & -1.1 \end{bmatrix}.
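Once numerical values are assigned to every transition rate (for Cases (2) and (3) this means substituting admissible values for the unknown entries), the jump system (8) with these mode matrices can also be simulated directly. The sketch below is an illustration only and is not part of the paper: it uses a simple Euler discretization, freezes the delays at constant values and takes f = tanh as a sector-bounded activation, all of which are assumptions made for the sketch.

```python
import numpy as np

def simulate_mjnn(A, B, C, D, Pi, h, d, T=20.0, dt=1e-3, seed=0):
    """Euler simulation of (8) with constant mode-dependent delays h[r], d[r],
    activation f = tanh, and the mode process generated from Pi as in (6).
    A, B, C, D are lists of mode matrices A_i, B_i, C_i, D_i."""
    rng = np.random.default_rng(seed)
    n, steps = A[0].shape[0], int(T / dt)
    lag = int(max(max(h), max(d)) / dt) + 1
    z = np.zeros((steps + lag, n))
    z[:lag] = 0.1                                   # constant initial function
    r = 0
    for k in range(lag, steps + lag):
        if rng.random() < -Pi[r, r] * dt:           # mode jump, cf. (6)
            p = Pi[r].clip(min=0.0); p[r] = 0.0
            r = rng.choice(len(Pi), p=p / p.sum())
        zk = z[k - 1]
        z_del = z[k - 1 - int(h[r] / dt)]           # z(t - h_r)
        dist = np.tanh(z[k - 1 - int(d[r] / dt):k]).sum(axis=0) * dt  # int f(z) ds
        dz = -A[r] @ zk + B[r] @ np.tanh(zk) + C[r] @ np.tanh(z_del) + D[r] @ dist
        z[k] = zk + dt * dz
    return z[lag:]

# Usage (names hypothetical): pass the mode matrices listed above and a fully
# specified generator, e.g.
# simulate_mjnn([A1, A2, A3], [B1, B2, B3], [C1, C2, C3], [D1, D2, D3],
#               Pi_case1, h=[1.3, 1.3, 1.3], d=[0.5, 0.5, 0.5])
```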

We consider Condition 1: h_1 = h_2 = h_3, u_1 = u_2 = u_3 = 0.1, d_1 = d_2 = d_3 = 0.5, and Condition 2: h_1 = h_2 = h_3, u_1 = u_2 = u_3 = 0.5, d_1 = d_2 = d_3 = 0.8, under each of the three cases above. Table 1 lists the corresponding upper bounds of \tilde{h}, which can be computed by the method of Theorem 1 in this paper. Table 1 shows that the upper bound of \tilde{h} decreases as the number of unknown elements increases.

Table 1. Allowable upper bound of \tilde{h} under the three cases.

              Case 1   Case 2   Case 3
Condition 1   1.34     1.21     1.11
Condition 2   1.21     1.04     0.87

Solving LMIs (15)–(29) for Case (1), Condition 1 and \tilde{h} = 1.34, one can obtain:

P_1 = \begin{bmatrix} 0.0031 & 0.0014 \\ 0.0014 & 0.0084 \end{bmatrix}, \quad P_2 = \begin{bmatrix} 0.0024 & 0.0012 \\ 0.0012 & 0.0072 \end{bmatrix}, \quad P_3 = \begin{bmatrix} 0.0025 & 0.0013 \\ 0.0013 & 0.0074 \end{bmatrix},

together with feasible matrices Q_{1i}, Q_{2i}, Q_{3i}, Q_{4i}, R_1–R_7, T_{i1}, T_{i2} (i = 1, 2, 3), which are omitted here for brevity.

Therefore, it follows from Corollary 1 that the system (8) with mode-dependent time-varying delays and completely known transition rates is stochastically stable. Solving LMIs (15)–(29) for Case (2), Condition 1 and \tilde{h} = 1.21, one can obtain:

P_1 = \begin{bmatrix} 0.0014 & 0.0007 \\ 0.0007 & 0.0041 \end{bmatrix},

together with feasible matrices P_2, P_3, Q_{1i}, Q_{2i}, Q_{3i}, Q_{4i}, R_1–R_7, T_{i1}, T_{i2} (i = 1, 2, 3), which are omitted here for brevity.

Solving LMIs (15)–(29) for Case (3), Condition 1 and \tilde{h} = 1.11, one can obtain:

P_1 = \begin{bmatrix} 0.0029 & 0.0014 \\ 0.0014 & 0.0087 \end{bmatrix},

together with feasible matrices P_2, P_3, Q_{1i}, Q_{2i}, Q_{3i}, Q_{4i}, R_1–R_7, T_{i1}, T_{i2} (i = 1, 2, 3), which are omitted here for brevity.

Therefore, it follows from Theorem 1 that the system (8) with mode-dependent time-varying delays and partially known transition rates is stochastically stable.

5. Conclusions

In this paper, the problem of stochastic stability of Markovian jumping neural networks with mode-dependent time-varying delays and partially known transition rates has been investigated. By choosing a new class of Lyapunov functional, some new delay-dependent stochastic stability criteria have been derived to guarantee the stochastic stability of Markovian jumping neural networks. The obtained criteria are less conservative because the free-weighting matrix method and a convex optimization approach are considered. Finally, a numerical example has been given to illustrate the effectiveness of the proposed method.

Acknowledgments

The authors thank the editors and the reviewers for their valuable suggestions and comments, which have led to a much improved paper. This work was supported by the demonstration project of oil and gas development for carbonate reservoirs in the Tarim basin under Grant 2011ZX05049 and the National Basic Research Program of China under Grant 2010CB732501.


References [1] J. Cao, L. Wang, Exponential stability and periodic oscillatory solution in BAM networks with delays, IEEE Trans. Neural Netw. 13 (2002) 457–463. [2] S. Arik, V. Tavsanoglu, Global asymptotic stability analysis of bidirectional associative memory neural networks with constant time delays, Neurocomputing 68 (2005) 161–176. [3] J. Tian, X. Zhou, Improved asymptotic stability criteria for neural networks with interval time-varying delay, Expert Syst. Appl. 37 (2010) 7521–7525. [4] Y. He, G.P. Liu, D. Rees, New delay-dependent stability criteria for neural networks with time-varying delay, IEEE Trans. Neural Netw. 18 (2007) 310– 314. [5] T. Li, Q. Luo, C.Y. Sun, B.Y. Zhang, Exponential stability of recurrent neural networks with time-varying discrete and distributed delays, Nonlinear Anal. Real World Appl. 10 (2009) 2581–2589. [6] O.M. Kwon, J.H. Park, Exponential stability analysis for uncertain neural networks with interval time-varying delays, Appl. Math. Comput. 212 (2009) 530–541. [7] Q.K. Song, Exponential stability of recurrent neural networks with both time-varying delays and general activation functions via LMI approach, Neurocomputing 71 (2008) 2823–2830. [8] R. Samli, S. Arik, New results for global stability of a class of neutral-type neural systems with time delays, Appl. Math. Comput. 210 (2009) 564–570. [9] J.H. Park, O.M. Kwon, Further results on state estimation for neural networks of neutral-type with time-varying delay, Appl. Math. Comput. 208 (2009) 69–75. [10] O.M. Kwon, J.H. Park, Improved delay-dependent stability criterion for neural networks with time-varying delays, Phys. Lett. A 373 (2009) 529–535. [11] J. Sun, G.P. Liu, J. Chen, D. Rees, Improved stability criteria for neural networks with time-varying delay, Phys. Lett. A 373 (2009) 342–348. [12] Y. Liu, Z. Wang, X. Liu, Global exponential stability of generalized recurrent neural networks with discrete and distributed delays, Neural Netw. 19 (2006) 667–675. [13] P. Balasubramaniam, M. Syed Ali, Sabri Arik, Global asymptotic stability of stochastic fuzzy cellular neural networks with multiple time-varying delays, Expert Syst. Appl. 37 (2010) 7737–7744. [14] J. Tian, S. Zhong, Improved delay-dependent stability criterion for neural networks with time-varying delay, Appl. Math. Comput. 217 (2011) 10278– 10288. [15] J. Tian, S. Zhong, New delay-dependent exponential stability criteria for neural networks with discrete and distributed time-varying delays, Neurocomputing 74 (2011) 3365–3375. [16] C.D. Zheng, L.B. Lu, Z.S. Wang, New LMI based delay-dependent criterion for global asymptotic stability of cellular neural networks, Neurocomputing 72 (2009) 3331–3336. [17] S. Xu, J. Lam, A new approach to exponential stability analysis of neural networks with time-varying delays, Neural Netw. 19 (2006) 76–83. [18] Q. Song, J. Zhang, Global exponential stability of impulsive Cohen–Grossberg neural network with time-varying delays, Nonlinear Anal. Real World Appl. 9 (2008) 500–510. [19] Z. Wang, Y. Liu, M. Li, X. Liu, Stability analysis for stochastic Cohen–Grossberg neural networks with mixed time delays, IEEE Trans. Neural Netw. 17 (2006) 814–820. [20] J.H. Park, O.M. Kwon, On improved delay-dependent criterion for global stability of bidirectional associative memory neural networks with timevarying delays, Appl. Math. Comput. 199 (2008) 435–446. [21] J. Tian, X. Xie, New asymptotic stability criteria for neural networks with time-varying delay, Phys. Lett. A 374 (2010) 938–943. [22] S. Mou, H. Gao, J. Lam, W. 
Qiang, New criterion of delay-dependent asymptotic stability for Hopfield neural networks with time delay, IEEE Trans. Neural Netw. 19 (2008) 532–535. [23] L. Ma, F. Da, Mean-square exponential stability of stochastic Hopfield neural networks with time-varying discrete and distributed delays, Phys. Lett. A 373 (2009) 2154–2161. [24] O.M. Kwon, S.M. Lee, J.H. Park, Improved delay-dependent exponential stability for uncertain stochastic neural networks with time-varying delays, Phys. Lett. A 374 (2010) 1232–1241. [25] O.M. Kwon, J.H. Park, New delay-dependent robust stability criterion for uncertain neural networks with time-varying delays, Appl. Math. Comput. 205 (2008) 417–427. [26] S.M. Lee, O.M. Kwon, J.H. Park, A novel delay-dependent criterion for delayed neural networks of neutral type, Phys. Lett. A 374 (2010) 1843–1848. [27] S.M. Lee, O.M. Kwon, J.H. Park, A new approach to stability analysis of neural networks with time-varying delay via novel Lyapunov–Krasovskii function, Chinese Phys. B 19 (2010) 050507 (1–6). [28] M.S. Mahmoud, P. Shi, Robust stability, stabilization and H1 control of time-delay systems with Markovian jump parameters, Int. J. Robust Nonlinear Control 13 (2003) 755–784. [29] L. Xie, Stochastic robust stability analysis for Markovian jumping neural networks with time delays, in: Proceedings IEEE International Conference on Networking, Sensing and Control, vol. 22, 2005, pp. 923–928. [30] Z. Wang, Y. Liu, X. Liu, State estimation for jumping recurrent neural networks with discrete and distributed delays, Neural Netw. 22 (2009) 41–48. [31] X. Lou, B. Cui, Delay-dependent stochastic stability of delayed Hopfield neural networks with Markovian jump parameters, J. Math. Anal. Appl. 328 (2007) 316–326. [32] H. Liu, Y. Ou, J. Hu, T. Liu, Delay-dependent stability analysis for continuous–time BAM neural networks with Markovian jumping parameters, Neural Netw. 23 (2010) 315–321. [33] W. Han, Y. Liu, L. Wang, Robust exponential stability of Markovian jumping neural networks with mode-dependent delay, Commun. Nonlinear Sci. Numer. Simul. 15 (2010) 2529–2535. [34] H. Li, B. Chen, Q. Zhou, C. Lin, Robust exponential stability for delayed uncertain Hopfield neural networks with Markovian jumping parameters, Phys. Lett. A 372 (2008) 4996–5003. [35] H. Liu, L. Zhao, Z. Zhang, Y. Ou, Stochastic stability of Markovian jumping Hopfield neural networks with constant and distributed delays, Neurocomputing 72 (2009) 3669–3674. [36] L. Wang, Z. Zhang, Y. Wang, Stochastic exponential stability of the delayed reaction–diffusion recurrent neural networks with Markovian jumping parameters, Phys. Lett. A 372 (2008) 3201–3209. [37] H. Bao, J. Cao, Stochastic global exponential stability for neutral-type impulsive neural networks with mixed time-delays and Markovian jumping parameters, Commun. Nonlinear Sci. Numer. Simul. 16 (2011) 3786–3791. [38] Q. Zhu, J. Cao, Stability analysis for stochastic neural networks of neutral type with both Markovian jump parameters and mixed time delays, Neurocomputing 73 (2010) 2671–2680. [39] P.G. Park, J.W. Ko, C.K. Jeong, Reciprocally convex approach to stability of systems with time-varying delays, Automatica 47 (2011) 235–238. [40] K. Gu, An integral inequality in the stability problem of time delay systems, in: Proceedings of the 39th IEEE Conference on Decision Control, 2000, pp. 2805–2810.