Delay-partitioning approach design for stochastic stability analysis of uncertain neutral-type neural networks with Markovian jumping parameters

Chun Yin^a,1,∗, Yuhua Cheng^a,1, Xuegang Huang^b, Shou-ming Zhong^c, Yuanyuan Li^a, Kaibo Shi^a

a School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, P.R. China
b China Aerodynamics Research & Development Center, Mianyang 621000, China
c School of Mathematics Science, University of Electronic Science and Technology of China, Chengdu 611731, P.R. China

Abstract

This paper investigates the problem of stability analysis for uncertain neutral-type neural networks with Markovian jumping parameters and interval time-varying delays. By separating the delay interval into multiple subintervals, a Lyapunov-Krasovskii functional is constructed that contains triple and quadruple integral terms. The time-varying delay is allowed to lie in any of the subintervals, which distinguishes the method from existing delay-partitioning approaches. Based on the proposed delay-partitioning approach, a stability criterion is derived that reduces conservatism. Numerical examples show the effectiveness of the proposed methods.

Keywords: Neutral-type neural networks; Markovian jumping parameters; Stability; Delay-partitioning

1. Introduction

In recent decades, delayed neural networks have received considerable attention because they arise in many areas, such as signal processing, model identification and optimization [1-5]. Since time delays are frequently encountered, the stability analysis of delayed neural networks has become an important topic in the field, and many important results have been reported in [6-11]. Furthermore, neutral-type systems have been widely investigated, because the past state of the network affects the current state. Due to the existence of parameter variations, modeling errors and process uncertainties, the stability analysis of uncertain neutral systems has also attracted much attention [12-14]. In addition, Markovian jump systems can be described by a set of linear systems whose transitions between modes are governed by a Markov chain on a finite mode set. Such systems have been applied to economic systems, production systems and other practical systems.

1 Chun Yin and Yuhua Cheng contributed equally to this work. This work was supported by the National Basic Research Program of China (Grant Nos. 61503064 and 51502338) and 2015HH0039.
∗ Corresponding author. Tel: +86 (028) 61831319; Email: [email protected] (C. Yin)


Markovian jump systems may encounter random abrupt variations in their structures as time goes by; for this kind of system, we refer readers to [15, 16].

Delay partitioning, which divides the delay interval into subintervals, can yield less conservative stability conditions [17]. After the delay-partitioning idea was first proposed in [17], many researchers have focused on designing delay-partitioning techniques [18-23]. For example, reference [23] considered a delay-partitioning approach to delay-dependent stability analysis for neutral-type neural networks. Moreover, the authors of [20, 21] improved the idea to analyze the stability of systems with time-varying delays.

This paper investigates the problem of stochastic stability analysis for neutral-type uncertain neural networks with Markovian jumping parameters and time-varying delays. A novel Lyapunov-Krasovskii functional that involves triple and quadruple integral terms is constructed to obtain less conservative stability conditions. The proposed delay-partitioning method divides the delay interval [τ2−, τ2+] finely by introducing the time variable ρ(t). According to [(k−1)h2, kh2] = [(k−1)h2, (k−1)h2+ρ(t)] ∪ [(k−1)h2+ρ(t), kh2], the time-varying delay τ2(t) can then be located within a specific subinterval. Moreover, the distributed delay is treated as the time-varying term ∫_{t−τ3(t)}^{t} f(x(s))ds, and the time variable ρ1(t) (ρ1(t) = τ3(t)/q) is introduced to handle the corresponding delay-partitioning problem. A new integral inequality is applied in terms of the reciprocally convex inequality to further reduce conservatism. Less conservative stability criteria are expressed in terms of linear matrix inequalities, and numerical examples are given to demonstrate the effectiveness of the proposed methods.

Notation: ℝ^n denotes the n-dimensional Euclidean space and ℝ^{n×n} is the set of all n×n real matrices. For a symmetric matrix X, X > 0 (X ≥ 0) means that X is real symmetric positive definite (positive semi-definite). For symmetric matrices X and Y, X > Y (X ≥ Y) means that X − Y is positive definite (positive semi-definite). A^T stands for the transpose of a matrix A, and sym(A) denotes A + A^T. The symbol ∗ denotes the elements below the main diagonal of a symmetric block matrix. I is the identity matrix of appropriate dimensions and O_{m×n} is the m×n zero matrix. ||·|| denotes the Euclidean norm of a vector and the induced norm of a matrix. (Ω, F, P) is a complete probability space with a filtration {F_t}_{t≥0}, where Ω, F and P denote, respectively, a sample space, the σ-algebra of subsets of the sample space, and the probability measure on F. L^2_{F0}([−τ, 0]; ℝ^n) is the family of all F0-measurable C([−τ, 0]; ℝ^n)-valued random variables ξ = {ξ(θ) : −τ ≤ θ ≤ 0} such that sup_{−τ≤θ≤0} E{||ξ(θ)||^2} < ∞, where E{·} is the mathematical expectation operator with respect to the probability measure P. col[x1, x2, ..., xn] means [x1^T, x2^T, ..., xn^T]^T.

2. Preliminaries

Let {r_t, t ≥ 0} be a right-continuous Markov process on the probability space, taking values in the finite set S = {1, 2, ..., N}, with generator π = (π_ij) (i, j ∈ S), also called the jumping transfer matrix, given by

P{r_{t+∆} = j | r_t = i} = { π_ij ∆ + o(∆),       if j ≠ i,
                           { 1 + π_ii ∆ + o(∆),   if j = i,        (1)

where ∆ > 0 and lim_{∆→0} o(∆)/∆ = 0; π_ij ≥ 0 is the transition rate from i to j for i ≠ j, and π_ii = −Σ_{j≠i} π_ij.
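To make the generator concrete, the following Python snippet (an illustrative sketch, not part of the paper) samples a right-continuous Markov chain with a given transfer matrix π. The function name and the two-mode generator are our own choices; the latter is shaped like the matrix Λ used in Example 1 of Section 4.

```python
import numpy as np

def simulate_markov_jumps(pi, T, r0=0, rng=None):
    """Sample a right-continuous Markov chain r_t on {0, ..., N-1} up to time T.
    pi is the generator: pi[i, j] >= 0 for j != i and each row sums to zero, so
    the holding time in mode i is Exp(-pi[i, i]) and the next mode is drawn
    with probabilities pi[i, j] / (-pi[i, i])."""
    rng = np.random.default_rng() if rng is None else rng
    t, r, times, modes = 0.0, r0, [0.0], [r0]
    while True:
        rate = -pi[r, r]                  # total exit rate of the current mode
        if rate <= 0.0:                   # absorbing mode: stay forever
            break
        t += rng.exponential(1.0 / rate)  # exponential holding time
        if t >= T:
            break
        probs = pi[r].copy()
        probs[r] = 0.0
        r = int(rng.choice(len(probs), p=probs / rate))
        times.append(t)
        modes.append(r)
    return np.array(times), np.array(modes)

# Two-mode generator of the same shape as Lambda in Example 1.
pi = np.array([[-0.4, 0.4],
               [ 0.5, -0.5]])
print(simulate_markov_jumps(pi, T=10.0, rng=np.random.default_rng(1)))
```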

Consider a class of neutral-type uncertain neural networks with Markovian jumping parameters and mixed delays:

ẏ(t) = E(r_t, t)ẏ(t − τ1(t)) + A(r_t, t)y(t) + B(r_t, t)g(y(t)) + C(r_t, t)g(y(t − τ2(t))) + D(r_t, t)∫_{t−τ3(t)}^{t} g(y(s))ds + I,        (2)

where y(t) = [y1(t), ..., yn(t)]^T ∈ ℝ^n is the neuron state vector, g(y(·)) = [g1(y1(·)), ..., gn(yn(·))]^T ∈ ℝ^n is the neuron activation function vector, and I = [I1, ..., In]^T is a constant external input. The following assumptions are used.

Assumption 2.1. The uncertain matrices E(r_t,t), A(r_t,t), B(r_t,t), C(r_t,t), D(r_t,t) of the system (2) denote interconnection weight matrices and can be described by

[E(r_t,t)  A(r_t,t)  B(r_t,t)  C(r_t,t)  D(r_t,t)] = [E(r_t)  A(r_t)  B(r_t)  C(r_t)  D(r_t)] + Y(r_t)I(r_t,t)[Me(r_t)  Ma(r_t)  Mb(r_t)  Mc(r_t)  Md(r_t)],        (3)

where A(r_t) = −diag(a1(r_t), ..., an(r_t)) < 0, and B(r_t), C(r_t), D(r_t), E(r_t), Y(r_t), Ma(r_t), Mb(r_t), Mc(r_t), Md(r_t), Me(r_t) are constant matrices. To simplify the notation, let r_t = i; then A(r_t) = A_i, B(r_t) = B_i, and so on. I(r_t,t) is an unknown time-varying matrix with Lebesgue measurable elements bounded by I^T(r_t,t)I(r_t,t) ≤ I, in which I is the identity matrix of appropriate dimensions.

Assumption 2.2. The time delays τ1(t), τ2(t), τ3(t) in (2) are continuous time-varying functions that satisfy

0 ≤ τ1− ≤ τ1(t) ≤ τ1+,  τ̇1(t) ≤ µ1;   0 ≤ τ2− ≤ τ2(t) ≤ τ2+,  τ̇2(t) ≤ µ2;   0 ≤ τ3(t) ≤ τ3+.        (4)

For any integers m ≥ 1, l ≥ 1, q ≥ 1, let h1 = τ2−/m, h2 = (τ2+ − τ2−)/l, h3 = τ3+/q, ρ1(t) = τ3(t)/q and ρ(t) = (τ2(t) − τ2−)/l. Then [0, τ2−] can be divided into m segments, [τ2−, τ2+] into l segments, and [0, τ3+] and [0, τ3(t)] each into q segments:

[0, τ2−] = ⋃_{i=1}^{m} [(i−1)h1, ih1],   [τ2−, τ2+] = ⋃_{i=1}^{l} [τ2− + (i−1)h2, τ2− + ih2],
[0, τ3+] = ⋃_{i=1}^{q} [(i−1)h3, ih3],   [0, τ3(t)] = ⋃_{i=1}^{q} [(i−1)ρ1(t), iρ1(t)].

For each subinterval [(k−1)h2, kh2], k = 1, 2, ..., l, it is easy to obtain [(k−1)h2, kh2] = [(k−1)h2, (k−1)h2 + ρ(t)] ∪ [(k−1)h2 + ρ(t), kh2]. On the other hand, for any t ≥ 0 there exists an integer k ∈ {1, 2, ..., l} such that τ2(t) ∈ [(k−1)h2, kh2]. A small numerical sketch of this bookkeeping is given below.
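The following Python function (our own illustration, with a hypothetical name and interface) computes h2, ρ(t), and the index k of the subinterval containing τ2(t), for given delay bounds.

```python
def partition_info(tau2_t, tau2_lo, tau2_hi, l):
    """Locate tau2(t) within the l-segment partition of [tau2_lo, tau2_hi].

    Returns (h2, rho, k) with h2 = (tau2_hi - tau2_lo)/l, rho(t) =
    (tau2(t) - tau2_lo)/l as in Section 2 (note 0 <= rho(t) <= h2 always),
    and k in {1, ..., l} such that tau2(t) - tau2_lo lies in [(k-1)h2, k*h2]."""
    assert tau2_lo <= tau2_t <= tau2_hi and l >= 1
    h2 = (tau2_hi - tau2_lo) / l
    rho = (tau2_t - tau2_lo) / l
    k = min(int((tau2_t - tau2_lo) // h2) + 1, l)
    return h2, rho, k

# Example: tau2(t) = 0.35 in [0.1, 0.7] with l = 3 lands in subinterval k = 2.
print(partition_info(0.35, 0.1, 0.7, 3))
```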

Assumption 2.3. Each activation function g_i(·) in the system (2) is continuous and bounded, and satisfies

σ_i^− ≤ (g_i(a) − g_i(b))/(a − b) ≤ σ_i^+,  i = 1, 2, ..., n,   and   g_i(0) = 0,        (5)

where a, b ∈ ℝ, a ≠ b, and σ_i^−, σ_i^+ are known constants.

Remark 2.1. The constants σ_i^−, σ_i^+ (i = 1, 2, ..., n) in Assumption 2.3 can be positive, negative, or zero. Consequently, this type of activation function is clearly more general than both the usual sigmoid activation function and the piecewise linear function g_i(u) = ½(|u_i + 1| − |u_i − 1|), which is useful for obtaining less conservative results.

Let the equilibrium point y* of (2) be shifted to the origin by setting x(t) = y(t) − y*. The system (2) can then be rewritten as

ẋ(t) = E(r_t,t)ẋ(t − τ1(t)) + A(r_t,t)x(t) + B(r_t,t)f(x(t)) + C(r_t,t)f(x(t − τ2(t))) + D(r_t,t)∫_{t−τ3(t)}^{t} f(x(s))ds,        (6)

where x(t) = [x1(t), ..., xn(t)]^T ∈ ℝ^n is the shifted neuron state vector and f_i(x_i(·)) = g_i(x_i(·) + y_i*) − g_i(y_i*), i = 1, 2, ..., n. From Assumption 2.3, the transformed neuron activation function satisfies

σ_i^− ≤ (f_i(a) − f_i(b))/(a − b) ≤ σ_i^+,  i = 1, 2, ..., n,   and   f_i(0) = 0.        (7)
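As a numerical illustration of Assumption 2.3 (a sketch of our own, not part of the original derivation), the snippet below estimates the sector constants of the piecewise linear activation used in the examples of Section 4 by scanning difference quotients over a grid.

```python
import numpy as np

def sector_bounds(g, lo=-5.0, hi=5.0, n=801):
    """Estimate sigma_minus and sigma_plus of Assumption 2.3 as the extreme
    difference quotients (g(a) - g(b)) / (a - b) over a grid. Sketch only."""
    u = np.linspace(lo, hi, n)
    a, b = np.meshgrid(u, u)
    mask = np.abs(a - b) > 1e-9
    q = (g(a) - g(b))[mask] / (a - b)[mask]
    return q.min(), q.max()

# Piecewise linear activation used in the examples of Section 4:
g = lambda u: 0.5 * (np.abs(u + 1.0) - np.abs(u - 1.0))
print(sector_bounds(g))   # about (0.0, 1.0): slope 1 on [-1, 1], 0 outside
```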

The purpose of this paper is to obtain stability theorems for the system (6). The following definition of robust stochastic stability is introduced.

Definition 2.1. [20] The trivial solution (equilibrium point) of the neutral-type neural networks with Markovian jumping parameters (2) is said to be robustly stochastically stable in the mean square if lim_{t→∞} E{||x(t)||^2} = 0 for all admissible uncertainties satisfying (3).

Before deriving the main results, the following lemmas are given.

Lemma 2.1. [24] For any constant matrix M ∈ ℝ^{n×n} with M = M^T > 0, scalar τ > 0, and vector function W : [t − τ, t] → ℝ^n such that the integrals below are well defined,

τ ∫_{t−τ}^{t} W^T(s)M W(s)ds ≥ (∫_{t−τ}^{t} W(s)ds)^T M (∫_{t−τ}^{t} W(s)ds),        (8)

(τ^2/2) ∫_{−τ}^{0} ∫_{t+θ}^{t} W^T(s)M W(s)ds dθ ≥ (∫_{−τ}^{0} ∫_{t+θ}^{t} W(s)ds dθ)^T M (∫_{−τ}^{0} ∫_{t+θ}^{t} W(s)ds dθ).        (9)
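As a sanity check, the scalar case of the double-integral Jensen inequality (9) can be verified numerically. The snippet below is an illustrative sketch of our own, using an arbitrary test function with M = 1 and t = 0.

```python
import numpy as np
from scipy.integrate import dblquad

# Scalar check of (9): integrate over the triangular region
# {(s, theta): -tau <= theta <= 0, theta <= s <= 0}, whose area is tau^2 / 2.
tau = 1.0
W = lambda s: np.exp(s) - 0.5 * s          # arbitrary test function

# dblquad integrates func(inner, outer) with outer = theta, inner = s.
I_quad = dblquad(lambda s, th: W(s) ** 2, -tau, 0.0,
                 lambda th: th, lambda th: 0.0)[0]
I_lin  = dblquad(lambda s, th: W(s),      -tau, 0.0,
                 lambda th: th, lambda th: 0.0)[0]

assert (tau ** 2 / 2.0) * I_quad >= I_lin ** 2 - 1e-9
print((tau ** 2 / 2.0) * I_quad, I_lin ** 2)
```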

Lemma 2.2. [25] For given matrices Q = Q^T, D, E of appropriate dimensions,

Q + DF(t)E + E^T F^T(t)D^T < 0        (10)

holds for all F(t) satisfying F^T(t)F(t) ≤ I if and only if there exists some ε > 0 such that

Q + εDD^T + ε^{−1}E^T E < 0.        (11)

Lemma 2.3. [26] Let f1, f2, ..., fN : ℝ^m → ℝ have positive values in an open subset D of ℝ^m. Then the reciprocally convex combination of f_i over D satisfies

min_{{α_i | α_i > 0, Σ_i α_i = 1}} Σ_i (1/α_i) f_i(t) = Σ_i f_i(t) + max_{g_{i,j}(t)} Σ_{i≠j} g_{i,j}(t)

subject to

g_{i,j} : ℝ^m → ℝ,  g_{j,i}(t) ≜ g_{i,j}(t),  [f_i(t)  g_{i,j}(t); g_{j,i}(t)  f_j(t)] ≥ 0.

Lemma 2.4. [26] For k_i(t) ∈ [0, 1] with Σ_{i=1}^N k_i(t) = 1, vectors η_i satisfying η_i = 0 when k_i(t) = 0, and matrices R_i > 0, if there exist matrices S_ij (i = 1, 2, ..., N−1; j = i+1, ..., N) satisfying [R_i  S_ij; ∗  R_j] ≥ 0, then the following inequality holds:

Σ_{i=1}^N (1/k_i(t)) η_i^T R_i η_i ≥ [η_1; η_2; ...; η_N]^T [R_1  ...  S_1N; ∗  ...  S_2N; ... ; ∗  ...  R_N] [η_1; η_2; ...; η_N].
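The scalar two-term case of Lemma 2.3 is easy to check numerically: if [[f1, g], [g, f2]] ≥ 0 then f1/α + f2/(1−α) ≥ f1 + f2 + 2g for all α ∈ (0, 1). The snippet below is a brute-force sketch of our own over random samples.

```python
import numpy as np

# Scalar check of Lemma 2.3 with N = 2.
rng = np.random.default_rng(0)
for _ in range(1000):
    f1, f2 = rng.uniform(0.1, 5.0, 2)
    gmax = np.sqrt(f1 * f2)            # PSD condition for the 2x2 block: g^2 <= f1*f2
    g = rng.uniform(-gmax, gmax)
    alphas = np.linspace(1e-3, 1 - 1e-3, 999)
    lhs = f1 / alphas + f2 / (1 - alphas)
    # true minimum over alpha is (sqrt(f1)+sqrt(f2))^2 >= f1 + f2 + 2g
    assert lhs.min() >= f1 + f2 + 2 * g - 1e-7
print("reciprocally convex bound verified on random samples")
```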

Lemma 2.5. [27] For any positive definite matrix Z > 0, the following inequality holds for any differentiable function x : [a, b] → ℝ^n:

∫_a^b ẋ^T(s) Z ẋ(s) ds ≥ (1/(b − a)) η^T Ẑ η,

where

η = col[x(b) − x(a),  x(b) + x(a) − (2/(b − a))∫_a^b x(s)ds]   and   Ẑ = diag[Z, 3Z].
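The scalar case of Lemma 2.5 can also be verified by quadrature; the snippet below is an illustrative sketch of our own with Z = 1 and an arbitrary smooth test signal.

```python
import numpy as np
from scipy.integrate import quad

# Scalar check of Lemma 2.5 (Wirtinger-based integral inequality) on [a, b].
a, b = 0.0, 2.0
x  = lambda s: np.sin(1.7 * s) + 0.3 * s ** 2
xd = lambda s: 1.7 * np.cos(1.7 * s) + 0.6 * s     # derivative of x

lhs = quad(lambda s: xd(s) ** 2, a, b)[0]
e1  = x(b) - x(a)
e2  = x(b) + x(a) - 2.0 / (b - a) * quad(x, a, b)[0]
rhs = (e1 ** 2 + 3.0 * e2 ** 2) / (b - a)          # eta^T diag(1,3) eta / (b-a)

assert lhs >= rhs - 1e-9
print(lhs, rhs)
```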

3. Main results

In this section, the main results are obtained by the LMI approach. First, the following notations are given:

Σ^+ = diag[σ_1^+, ..., σ_n^+],   Σ^− = diag[σ_1^−, ..., σ_n^−],

η_1(t) = col[x(t), ∫_{t−h_1}^t x(s)ds, ∫_{−h_3}^0∫_{t+θ}^t f(x(s))ds dθ],

η_2(t) = col[x(t), x(t−h_1), ..., x(t−(m−1)h_1)],
η_3(t) = col[x(t−τ_2^−), x(t−τ_2^−−h_2), ..., x(t−τ_2^−−(l−1)h_2)],
f(η_2(t)) = col[f(x(t)), f(x(t−h_1)), ..., f(x(t−(m−1)h_1))],
f(η_3(t)) = col[f(x(t−τ_2^−)), f(x(t−τ_2^−−h_2)), ..., f(x(t−τ_2^−−(l−1)h_2))],
η_4(t) = col[∫_{t−ρ_1(t)}^t f(x(s))ds, ∫_{t−2ρ_1(t)}^{t−ρ_1(t)} f(x(s))ds, ..., ∫_{t−τ_3(t)}^{t−(q−1)ρ_1(t)} f(x(s))ds],

ξ^T(t) = [η_2^T(t), η_3^T(t), x^T(t−τ_2^+), x^T(t−τ_2(t)), η_3^T(t−ρ(t)), f^T(η_2(t)), f^T(η_3(t)), f^T(x(t−τ_2^+)), f^T(x(t−τ_2(t))), ẋ^T(t), η_4^T(t), ∫_{t−τ_3(t)−ρ_1(t)}^{t−τ_3(t)} f^T(x(s))ds, ∫_{t−h_1}^t x^T(s)ds, (1/(τ_1(t)−τ_1^−))∫_{t−τ_1(t)}^{t−τ_1^−} x^T(s)ds, (1/(τ_1^+−τ_1(t)))∫_{t−τ_1^+}^{t−τ_1(t)} x^T(s)ds, (1/(τ_2(t)−τ_2^−))∫_{t−τ_2(t)}^{t−τ_2^−} x^T(s)ds, (1/(τ_2^+−τ_2(t)))∫_{t−τ_2^+}^{t−τ_2(t)} x^T(s)ds, ẋ^T(t−τ_1(t)), x^T(t−τ_1^−), x^T(t−τ_1(t)), x^T(t−τ_1^+), ∫_{−h_3}^0∫_{t+θ}^t f^T(x(s))ds dθ],

e_k = [O_{n×(k−1)n}, I_n, O_{n×(2m+3l+q+16−k)n}],   r_k = [O_{n×(k−1)n}, I_n, O_{n×(2m+3l+q+10−k)n}].

Next, the main results are discussed.

Theorem 3.1. Under Assumptions 2.1, 2.2 and 2.3, the neutral-type uncertain neural network (6) is robustly stochastically stable in the mean square if there exist matrices P = [P_ij]_{3×3} > 0; V_j > 0 (j = 1, 2, ..., m); F_k > 0, F_{k1} > 0, G_k (k = 1, 2, ..., l); R_11, R_12, R_13, R_r (r = 2, 3, ..., l); S_1 > 0, S_2 > 0, S_{1i} > 0, S_{2i} > 0; U_j > 0, J_j > 0 (j = 1, 2, 3, 4); U_1^*; Z_1 > 0, Z_2 > 0, Z_{1i} > 0, Z_{2i} > 0; T > 0, T_i > 0; Λ_{jp} (j = 1, 2, ..., m; p = 1, 2, ..., 2m+3l+q+16); N_{1i}, N_{2i}, N_{3i}, N_{4i}, M_{i1}, M_{i2}, M_{i3}, M_{i4}, P_1, P_2, H_1, H_2, Q_1, Q_2; positive definite diagonal matrices K_1, K_2, L_1, L_2; and positive scalars ε_i > 0 (i = 1, 2, ..., N) such that the following LMIs hold:

Φ_1 = [P_1  H_1; ∗  Q_1] > 0,   Φ_2 = [P_2  H_2; ∗  Q_2] > 0,        (12)

Φ_{3k} = [F_k  G_k^T; ∗  F_k] > 0   (k = 1, 2, ..., l),        (13)

Φ_{4k} = [F_1  R_11  R_12; ∗  F_1  R_13; ∗  ∗  F_1] > 0 if k = 1,   Φ_{4k} = [F_k  R_k; ∗  F_k] > 0 if k = 2, 3, ..., l,        (14)

Φ_5 = [U_1  U_1^*; ∗  U_1] > 0,        (15)

Φ_{6i} = [Z_{2i}^*  M_i; ∗  Z_{2i}^*] > 0,   M_i = [M_{i1}  M_{i2}; M_{i3}  M_{i4}],        (16)

Σ_{j=1}^N π_ij S_{1j} − S_1 < 0,  Σ_{j=1}^N π_ij S_{2j} − S_2 < 0,  Σ_{j=1}^N π_ij Z_{1j} − Z_1 < 0,  Σ_{j=1}^N π_ij Z_{2j} − Z_2 < 0,  Σ_{j=1}^N π_ij T_j − T < 0,        (17)

Φ_{7i}^{(k)} =
[Ξ_i^{(k)}  Λ_1  Λ_2  ...  Λ_m  Δ_{1i}^T  Δ_{2i}^T;
 ∗  −V_1  0  ...  0  0  0;
 ∗  ∗  −V_2  ...  0  0  0;
 ............................;
 ∗  ∗  ∗  ...  −V_m  0  0;
 ∗  ∗  ∗  ...  ∗  −ε_i^{−1}I  0;
 ∗  ∗  ∗  ...  ∗  ∗  −ε_i I] < 0,        (18)

where

Ξ_i^{(k)} = W_1 + W_1^T + Σ_{k=2}^{5} W_k + W_{6i} + W_7 + W_{8i} + Σ_{k=9}^{11} W_k + W_{12i} + W_13 + W_{14i}^T W_{15i} + W_{15i}^T W_{14i} + Σ_{j=1}^m Λ_j e_j + Σ_{j=1}^m e_j^T Λ_j^T − Σ_{j=1}^m Λ_j e_{j+1} − Σ_{j=1}^m e_{j+1}^T Λ_j^T + Υ + Υ^{(k)} + Υ̃^{(k)},

Λ_j = col[Λ_{j1}, Λ_{j2}, ..., Λ_{j,2m+3l+q+16}] (j = 1, 2, ..., m),

Z_{1i}^* = diag(Z_{1i}, 3Z_{1i}),  Z_{2i}^* = diag(Z_{2i}, 3Z_{2i}),  Σ_1 = Σ^+Σ^−,  Σ_2 = ½(Σ^+ + Σ^−),

K_1 = diag(k_{11}, ..., k_{1n}),  K_2 = diag(k_{21}, ..., k_{2n}),  L_1 = diag(l_{11}, ..., l_{1n}),  L_2 = diag(l_{21}, ..., l_{2n}),

W_{1,1} = col[e_1, e_{2m+3l+q+7}, e_{2m+3l+q+16}],  W_{1,2} = col[e_{2m+3l+5}, e_1 − e_2, h_3 e_{m+2l+3} − e_{2m+3l+6}],
W_1 = W_{1,1}^T P W_{1,2} + (e_{m+2l+3} − Σ^− e_1)^T K_1 e_{2m+3l+5} + (Σ^+ e_1 − e_{m+2l+3})^T K_2 e_{2m+3l+5},

W_2 = e_{2m+3l+5}^T (Σ_{k=1}^m h_1^2 V_k) e_{2m+3l+5},

W_{3,1} = col[e_1, ..., e_m],  W_{3,2} = col[e_{m+2l+3}, ..., e_{2m+2l+2}],  W_{3,3} = col[e_2, ..., e_{m+1}],  W_{3,4} = col[e_{m+2l+4}, ..., e_{2m+2l+3}],  W_{3,5} = col[e_{m+1}, ..., e_{m+l}],  W_{3,6} = col[e_{2m+2l+3}, ..., e_{2m+3l+2}],  W_{3,7} = col[e_{m+2}, ..., e_{m+l+1}],  W_{3,8} = col[e_{2m+2l+4}, ..., e_{2m+3l+3}],

W_3 = W_{3,1}^T P_1 W_{3,1} + W_{3,1}^T H_1 W_{3,2} + W_{3,2}^T H_1^T W_{3,1} + W_{3,2}^T Q_1 W_{3,2} − W_{3,3}^T P_1 W_{3,3} − W_{3,3}^T H_1 W_{3,4} − W_{3,4}^T H_1^T W_{3,3} − W_{3,4}^T Q_1 W_{3,4} + W_{3,5}^T P_2 W_{3,5} + W_{3,5}^T H_2 W_{3,6} + W_{3,6}^T H_2^T W_{3,5} + W_{3,6}^T Q_2 W_{3,6} − W_{3,7}^T P_2 W_{3,7} − W_{3,7}^T H_2 W_{3,8} − W_{3,8}^T H_2^T W_{3,7} − W_{3,8}^T Q_2 W_{3,8},

Υ = ((Ψ_ij) + (Ψ_ij)^T)_{(2m+3l+q+16)n×(2m+3l+q+16)n},  Υ^{(k)} = ((Ψ_ij^{(k)}) + (Ψ_ij^{(k)})^T)_{(2m+3l+q+16)n×(2m+3l+q+16)n},  Υ̃^{(k)} = ((Ψ̃_ij^{(k)}) + (Ψ̃_ij^{(k)})^T)_{(2m+3l+q+16)n×(2m+3l+q+16)n},

W_4 = Σ_{j=1}^l e_{m+j}^T F_{j1} e_{m+j} − (1 − µ_2/l) Σ_{j=1}^l e_{m+2+l+j}^T F_{j1} e_{m+2+l+j} + h_2^2 Σ_{j=1}^l e_{2m+3l+5}^T F_j e_{2m+3l+5},

W_5 = W_{3,1}^T S_1 W_{3,1} + W_{3,2}^T S_2 W_{3,2},

W_{6i} = W_{3,1}^T S_{1i} W_{3,1} + W_{3,2}^T S_{2i} W_{3,2} − W_{3,3}^T S_{1i} W_{3,3} − W_{3,4}^T S_{2i} W_{3,4},

W_{7,1} = e_{2m+3l+q+13} − e_{2m+3l+q+8},  W_{7,2} = e_{2m+3l+q+14} − e_{2m+3l+q+9},  W_{7,3} = e_{2m+3l+q+8} − e_{2m+3l+q+14},  W_{7,4} = e_{2m+3l+q+9} − e_{2m+3l+q+15},  W_{7,5} = col[e_{2m+3l+q+14} − e_{2m+3l+q+15}, e_{2m+3l+q+13} − e_{2m+3l+q+14}],

W_7 = e_{2m+3l+5}^T ((τ_1^+ − τ_1^−)^2 U_1 + U_2 + ½(τ_1^+ − τ_1^−)^2 (U_3 + U_4)) e_{2m+3l+5} − (1 − µ_1) e_{2m+3l+q+12}^T U_2 e_{2m+3l+q+12} + W_{7,1}^T(−2U_3)W_{7,1} + W_{7,2}^T(−2U_3)W_{7,2} + W_{7,3}^T(−2U_4)W_{7,3} + W_{7,4}^T(−2U_4)W_{7,4} + W_{7,5}^T(−Φ_5)W_{7,5},

W_{8,1} = col[e_1 − e_2, e_1 + e_2 − (2/h_1) e_{2m+3l+q+7}],  W_{8,2} = col[e_{m+l+2} − e_{m+l+1}, e_{m+l+2} + e_{m+l+1} − 2e_{2m+3l+q+11}],  W_{8,3} = col[e_{m+1} − e_{m+l+2}, e_{m+1} + e_{m+l+2} − 2e_{2m+3l+q+10}],  W_{8,4} = col[W_{8,2}, W_{8,3}],

W_{8i} = e_{2m+3l+5}^T [h_1^2 Z_{1i} + (τ_2^+ − τ_2^−)^2 Z_{2i}] e_{2m+3l+5} + W_{8,1}^T(−Z_{1i}^*)W_{8,1} + W_{8,4}^T(−Φ_{6i})W_{8,4},

W_9 = e_{2m+3l+5}^T (½h_1^3 Z_1 + ½(τ_2^+ − τ_2^−)((τ_2^+)^2 − (τ_2^−)^2) Z_2) e_{2m+3l+5},

W_{10,1} = col[e_{2m+3l+6}, ..., e_{2m+3l+q+5}],  W_{10,2} = col[e_{2m+3l+7}, ..., e_{2m+3l+q+6}],

W_10 = W_{10,1}^T J_1 W_{10,1} − W_{10,2}^T J_1 W_{10,2} + e_{m+2l+3}^T (h_3 J_2 + τ_3^+ J_3 + ½h_3^2 J_4) e_{m+2l+3} − (1/h_3) e_{2m+3l+6}^T J_2 e_{2m+3l+6} − Σ_{p=1}^q e_{2m+3l+5+p}^T (1/h_3) J_3 e_{2m+3l+5+p} − (2/(τ_3^+)^2) e_{2m+3l+q+16}^T J_4 e_{2m+3l+q+16},

W_11 = e_{2m+3l+5}^T (h_1^3 T/6) e_{2m+3l+5},

W_{12,1} = h_1 e_1 − e_{2m+3l+q+7},  W_{12i} = e_{2m+3l+5}^T (½h_1^2 T_i) e_{2m+3l+5} + W_{12,1}^T (−(2/h_1^2) T_i) W_{12,1},

W_{13,1} = col[e_1, e_{m+2l+3}],  W_{13,2} = col[e_{m+l+2}, e_{2m+3l+4}],  W_13 = W_{13,1}^T Γ_1 W_{13,1} + W_{13,2}^T Γ_2 W_{13,2},

W_{14i} = col[N_{1i}^T e_1 + N_{2i}^T e_{2m+3l+5} + N_{3i}^T e_{2m+3l+q+12} − Σ_{p=1}^q N_{4i}^T e_{2m+3l+5+p}],

W_{15i} = col[A_i e_1 + B_i e_{m+2l+3} + C_i e_{2m+3l+4} + Σ_{p=1}^q D_i e_{2m+3l+5+p} + E_i e_{2m+3l+q+12} − e_{2m+3l+5}],

Δ_{1i} = col[M_{ai} e_1 + M_{bi} e_{m+2l+3} + M_{ci} e_{2m+3l+4} + Σ_{p=1}^q M_{di} e_{2m+3l+5+p} + M_{ei} e_{2m+3l+q+12}],

Δ_{2i} = col[Y_i^T N_{1i}^T e_1 + Y_i^T N_{2i}^T e_{2m+3l+5} + Y_i^T N_{3i}^T e_{2m+3l+q+12} − Σ_{p=1}^q Y_i^T N_{4i}^T e_{2m+3l+5+p}],

Ψ_ij = { −½F_{i−m},                                          i = j = m+1;
         −½(F_{i−m} + F_{i−m−1}),                            m+2 ≤ i = j ≤ m+l;
         −½F_l,                                              i = j = m+l+1;
         −F_{i−m−l−2} + ½(G_{i−m−l−2} + G_{i−m−l−2}^T),      m+l+3 ≤ i = j ≤ m+2l+2;
         F_{i−m} − G_{i−m},                                  m+1 ≤ i ≤ m+l, j = m+l+2+i;
         F_{i−m−1} − G_{i−m−1},                              m+2 ≤ i ≤ m+l+1, j = m+l+1+i;
         G_{i−m},                                            m+1 ≤ i ≤ m+l, j = i+1;
         0,                                                  otherwise,

Ψ_ij^{(k)} = { ½F_k,                 i = j = k+m;
               −G_k,                 i = k+m, j = m+k+1;
               −F_k + G_k,           i = k+m, j = m+l+2+k;
               ½F_k,                 i = j = k+m+1;
               G_k^T − F_k,          i = m+k+1, j = m+l+2+k;
               F_k − ½(G_k + G_k^T), i = j = m+l+2+k;
               0,                    otherwise,

Ψ̄_ij^{(1)} = { −F_1 + ½(R_11 + R_11^T),     i = j = m+l+2;
               −½F_1,                       i = j = m+2;
               −F_1 + ½(R_13 + R_13^T),     i = j = m+l+3;
               −½F_1,                       i = j = m+1;
               F_1 − R_11,                  i = m+l+2, j = m+2;
               F_1 + R_12 − R_11 − R_13,    i = m+l+2, j = m+l+3;
               R_13 − R_12,                 i = m+l+2, j = m+1;
               R_11 − R_12,                 i = m+2, j = m+l+3;
               R_12,                        i = m+2, j = m+1;
               F_1 − R_13,                  i = m+l+3, j = m+1;
               0,                           otherwise,

Ψ̄_ij^{(k)} = { −F_k + ½(R_k + R_k^T),  i = j = m+l+2;
               −½F_k,                  i = j = m+1+k;
               −½F_k,                  i = j = m+k;
               F_k − R_k^T,            i = m+l+2, j = m+1+k;
               F_k − R_k,              i = m+l+2, j = m+k;
               R_k,                    i = m+1+k, j = m+k;
               0,                      otherwise,

Ψ̃_ij^{(k)} = Ψ̄_ij^{(1)} if k = 1, and Ψ̃_ij^{(k)} = Ψ̄_ij^{(k)} if k = 2, 3, ..., l.

Proof. Consider the following Lyapunov-Krasovskii functional candidate:

V(x_t, i, t) = Σ_{k=1}^{12} V_k(x_t, i, t),

where

V_1(x_t, i, t) = η_1^T(t) P η_1(t) + 2Σ_{k=1}^n k_{1k} ∫_0^{x_k(t)} (f_k(s) − σ_k^− s) ds + 2Σ_{k=1}^n k_{2k} ∫_0^{x_k(t)} (σ_k^+ s − f_k(s)) ds,

V_2(x_t, i, t) = h_1 Σ_{j=1}^m ∫_{−jh_1}^{−(j−1)h_1} ∫_{t+θ}^t ẋ^T(s) V_j ẋ(s) ds dθ,

V_3(x_t, i, t) = ∫_{t−h_1}^t [η_2(s); f(η_2(s))]^T [P_1  H_1; ∗  Q_1] [η_2(s); f(η_2(s))] ds + ∫_{t−h_2}^t [η_3(s); f(η_3(s))]^T [P_2  H_2; ∗  Q_2] [η_3(s); f(η_3(s))] ds,

V_4(x_t, i, t) = Σ_{k=1}^l ∫_{t−τ_2^−−(k−1)h_2−ρ(t)}^{t−τ_2^−−(k−1)h_2} x^T(s) F_{k1} x(s) ds + h_2 Σ_{k=1}^l ∫_{−τ_2^−−kh_2}^{−τ_2^−−(k−1)h_2} ∫_{t+θ}^t ẋ^T(s) F_k ẋ(s) ds dθ,

V_5(x_t, i, t) = ∫_{−h_1}^0 ∫_{t+θ}^t η_2^T(s) S_1 η_2(s) ds dθ + ∫_{−h_1}^0 ∫_{t+θ}^t f^T(η_2(s)) S_2 f(η_2(s)) ds dθ,

V_6(x_t, i, t) = ∫_{t−h_1}^t η_2^T(s) S_{1i} η_2(s) ds + ∫_{t−h_1}^t f^T(η_2(s)) S_{2i} f(η_2(s)) ds,

V_7(x_t, i, t) = (τ_1^+ − τ_1^−) ∫_{−τ_1^+}^{−τ_1^−} ∫_{t+θ}^t ẋ^T(s) U_1 ẋ(s) ds dθ + ∫_{t−τ_1(t)}^t ẋ^T(s) U_2 ẋ(s) ds + ∫_{−τ_1^+}^{−τ_1^−} ∫_η^{−τ_1^−} ∫_{t+θ}^t ẋ^T(s) U_3 ẋ(s) ds dθ dη + ∫_{−τ_1^+}^{−τ_1^−} ∫_{−τ_1^+}^η ∫_{t+θ}^t ẋ^T(s) U_4 ẋ(s) ds dθ dη,

V_8(x_t, i, t) = h_1 ∫_{−h_1}^0 ∫_{t+θ}^t ẋ^T(s) Z_{1i} ẋ(s) ds dθ + (τ_2^+ − τ_2^−) ∫_{−τ_2^+}^{−τ_2^−} ∫_{t+θ}^t ẋ^T(s) Z_{2i} ẋ(s) ds dθ,

V_9(x_t, i, t) = h_1 ∫_{−h_1}^0 ∫_β^0 ∫_{t+θ}^t ẋ^T(s) Z_1 ẋ(s) ds dθ dβ + (τ_2^+ − τ_2^−) ∫_{−τ_2^+}^{−τ_2^−} ∫_β^0 ∫_{t+θ}^t ẋ^T(s) Z_2 ẋ(s) ds dθ dβ,

V_10(x_t, i, t) = ∫_{t−ρ_1(t)}^t η_4^T(s) J_1 η_4(s) ds + ∫_{−h_3}^0 ∫_{t+θ}^t f^T(x(s)) J_2 f(x(s)) ds dθ + ∫_{−τ_3^+}^0 ∫_{t+θ}^t f^T(x(s)) J_3 f(x(s)) ds dθ + ∫_{−h_3}^0 ∫_β^0 ∫_{t+θ}^t f^T(x(s)) J_4 f(x(s)) ds dθ dβ,

V_11(x_t, i, t) = ∫_{−h_1}^0 ∫_β^0 ∫_α^0 ∫_{t+θ}^t ẋ^T(s) T ẋ(s) ds dθ dα dβ,

V_12(x_t, i, t) = ∫_{−h_1}^0 ∫_β^0 ∫_{t+θ}^t ẋ^T(s) T_i ẋ(s) ds dθ dβ.

Along the trajectories of (6), the weak infinitesimal operator satisfies

LV(x_t, i, t) = Σ_{k=1}^{12} LV_k(x_t, i, t).

For simplicity, LV_k(x_t, i, t) is written as LV_k in the sequel without confusion. By direct computation, one can conclude that

LV_1 = 2η_1^T(t) P η̇_1(t) + 2[f(x(t)) − Σ^− x(t)]^T K_1 ẋ(t) + 2[Σ^+ x(t) − f(x(t))]^T K_2 ẋ(t) = ξ^T(t) W_1 ξ(t) + ξ^T(t) W_1^T ξ(t),        (19)

LV_2 = ξ^T(t) W_2 ξ(t) − h_1 Σ_{j=1}^m ∫_{t−jh_1}^{t−(j−1)h_1} ẋ^T(s) V_j ẋ(s) ds ≤ ξ^T(t) W_2 ξ(t) − Σ_{j=1}^m (∫_{t−jh_1}^{t−(j−1)h_1} ẋ^T(s) ds) V_j (∫_{t−jh_1}^{t−(j−1)h_1} ẋ(s) ds),        (20)

LV_3 = ξ^T(t) W_3 ξ(t),        (21)

LV_4 = ξ^T(t)(Σ_{j=1}^l e_{m+j}^T F_{j1} e_{m+j} − (1 − ρ̇(t)) Σ_{j=1}^l e_{m+2+l+j}^T F_{j1} e_{m+2+l+j} + h_2^2 Σ_{j=1}^l e_{2m+3l+5}^T F_j e_{2m+3l+5})ξ(t) − h_2 Σ_{j=1}^l ∫_{t−τ_2^−−jh_2}^{t−τ_2^−−(j−1)h_2} ẋ^T(s) F_j ẋ(s) ds.

For τ_2^− + (k−1)h_2 ≤ τ_2(t) ≤ τ_2^− + kh_2, if [F_k  G_k^T; ∗  F_k] > 0 (k = 1, 2, ..., l), one can obtain, by utilizing Lemmas 2.1 and 2.3,

−h_2 Σ_{j=1}^l ∫_{t−τ_2^−−jh_2}^{t−τ_2^−−(j−1)h_2} ẋ^T(s) F_j ẋ(s) ds
= −h_2 Σ_{j=1}^l ∫_{t−τ_2^−−jh_2}^{t−τ_2^−−(j−1)h_2−ρ(t)} ẋ^T(s) F_j ẋ(s) ds − h_2 Σ_{j=1}^l ∫_{t−τ_2^−−(j−1)h_2−ρ(t)}^{t−τ_2^−−(j−1)h_2} ẋ^T(s) F_j ẋ(s) ds
≤ −Σ_{j=1}^l [h_2/(h_2 − ρ(t))] (∫_{t−τ_2^−−jh_2}^{t−τ_2^−−(j−1)h_2−ρ(t)} ẋ(s)ds)^T F_j (∫_{t−τ_2^−−jh_2}^{t−τ_2^−−(j−1)h_2−ρ(t)} ẋ(s)ds) − Σ_{j=1}^l [h_2/ρ(t)] (∫_{t−τ_2^−−(j−1)h_2−ρ(t)}^{t−τ_2^−−(j−1)h_2} ẋ(s)ds)^T F_j (∫_{t−τ_2^−−(j−1)h_2−ρ(t)}^{t−τ_2^−−(j−1)h_2} ẋ(s)ds)
≤ ξ^T(t)[−Σ_{j=1}^l (e_{m+l+2+j} − e_{m+1+j})^T F_j (e_{m+l+2+j} − e_{m+1+j}) − Σ_{j=1}^l (e_{m+j} − e_{m+l+2+j})^T F_j (e_{m+j} − e_{m+l+2+j}) − Σ_{j=1}^l (e_{m+l+2+j} − e_{m+1+j})^T G_j^T (e_{m+j} − e_{m+l+2+j}) − Σ_{j=1}^l (e_{m+j} − e_{m+l+2+j})^T G_j (e_{m+l+2+j} − e_{m+1+j}) + (e_{m+l+2+k} − e_{m+1+k})^T F_k (e_{m+l+2+k} − e_{m+1+k}) + (e_{m+k} − e_{m+l+2+k})^T F_k (e_{m+k} − e_{m+l+2+k}) + (e_{m+l+2+k} − e_{m+1+k})^T G_k^T (e_{m+k} − e_{m+l+2+k}) + (e_{m+k} − e_{m+l+2+k})^T G_k (e_{m+l+2+k} − e_{m+1+k})]ξ(t) − h_2 ∫_{t−τ_2^−−kh_2}^{t−τ_2^−−(k−1)h_2} ẋ^T(s) F_k ẋ(s) ds
= ξ^T(t)(Υ + Υ^{(k)})ξ(t) − h_2 ∫_{t−τ_2^−−kh_2}^{t−τ_2^−−(k−1)h_2} ẋ^T(s) F_k ẋ(s) ds.        (22)

Consider 1 ≤ k ≤ l. For k = 1, one has

−h_2 ∫_{t−τ_2^−−h_2}^{t−τ_2^−} ẋ^T(s) F_1 ẋ(s) ds
≤ −[h_2/(τ_2^− + h_2 − τ_2(t))] (∫_{t−τ_2^−−h_2}^{t−τ_2(t)} ẋ(s)ds)^T F_1 (∫_{t−τ_2^−−h_2}^{t−τ_2(t)} ẋ(s)ds)
− [h_2/ρ(t)] (∫_{t−τ_2^−−ρ(t)}^{t−τ_2^−} ẋ(s)ds)^T F_1 (∫_{t−τ_2^−−ρ(t)}^{t−τ_2^−} ẋ(s)ds)
− [h_2/(τ_2(t) − ρ(t) − τ_2^−)] (∫_{t−τ_2(t)}^{t−τ_2^−−ρ(t)} ẋ(s)ds)^T F_1 (∫_{t−τ_2(t)}^{t−τ_2^−−ρ(t)} ẋ(s)ds).        (23)

If [F_1  R_11  R_12; ∗  F_1  R_13; ∗  ∗  F_1] > 0, then, by utilizing Lemmas 2.3 and 2.4, one can derive

−h_2 ∫_{t−τ_2^−−h_2}^{t−τ_2^−} ẋ^T(s) F_1 ẋ(s) ds ≤ ξ^T(t) Υ̃^{(1)} ξ(t).        (24)

For 2 ≤ k ≤ l, one has

−h_2 ∫_{t−τ_2^−−kh_2}^{t−τ_2^−−(k−1)h_2} ẋ^T(s) F_k ẋ(s) ds
≤ −[h_2/(τ_2^− + kh_2 − τ_2(t))] (∫_{t−τ_2^−−kh_2}^{t−τ_2(t)} ẋ(s)ds)^T F_k (∫_{t−τ_2^−−kh_2}^{t−τ_2(t)} ẋ(s)ds)
− [h_2/(τ_2(t) − (k−1)h_2 − τ_2^−)] (∫_{t−τ_2(t)}^{t−τ_2^−−(k−1)h_2} ẋ(s)ds)^T F_k (∫_{t−τ_2(t)}^{t−τ_2^−−(k−1)h_2} ẋ(s)ds).        (25)

If [F_k  R_k; ∗  F_k] > 0, then, by applying Lemma 2.3, one can obtain

−h_2 ∫_{t−τ_2^−−kh_2}^{t−τ_2^−−(k−1)h_2} ẋ^T(s) F_k ẋ(s) ds ≤ ξ^T(t) Υ̃^{(k)} ξ(t),   (k = 2, ..., l).        (26)

Hence, based on the above analysis, one can conclude that

−h_2 ∫_{t−τ_2^−−kh_2}^{t−τ_2^−−(k−1)h_2} ẋ^T(s) F_k ẋ(s) ds ≤ ξ^T(t) Υ̃^{(k)} ξ(t),   (k = 1, ..., l).        (27)

Furthermore, one has

LV_4 ≤ ξ^T(t) W_4 ξ(t) + ξ^T(t)(Υ + Υ^{(k)} + Υ̃^{(k)})ξ(t),        (28)

LV_5 = ξ^T(t) W_5 ξ(t) − ∫_{t−h_1}^t η_2^T(s) S_1 η_2(s) ds − ∫_{t−h_1}^t f^T(η_2(s)) S_2 f(η_2(s)) ds,        (29)

LV_6 = ξ^T(t) W_{6i} ξ(t) + ∫_{t−h_1}^t Σ_{j=1}^N π_ij η_2^T(s) S_{1j} η_2(s) ds + ∫_{t−h_1}^t Σ_{j=1}^N π_ij f^T(η_2(s)) S_{2j} f(η_2(s)) ds,        (30)

LV_7 ≤ (τ_1^+ − τ_1^−)^2 ẋ^T(t) U_1 ẋ(t) − (τ_1^+ − τ_1^−) ∫_{t−τ_1^+}^{t−τ_1^−} ẋ^T(s) U_1 ẋ(s) ds + ẋ^T(t) U_2 ẋ(t) − (1 − µ_1) ẋ^T(t − τ_1(t)) U_2 ẋ(t − τ_1(t)) + ½(τ_1^+ − τ_1^−)^2 ẋ^T(t)(U_3 + U_4)ẋ(t) − ∫_{−τ_1^+}^{−τ_1^−} ∫_{t+θ}^{t−τ_1^−} ẋ^T(s) U_3 ẋ(s) ds dθ − ∫_{−τ_1^+}^{−τ_1^−} ∫_{t−τ_1^+}^{t+θ} ẋ^T(s) U_4 ẋ(s) ds dθ.        (31)

Setting α = (τ_1(t) − τ_1^−)/(τ_1^+ − τ_1^−) and β = (τ_1^+ − τ_1(t))/(τ_1^+ − τ_1^−), one can derive

−(τ_1^+ − τ_1^−) ∫_{t−τ_1^+}^{t−τ_1^−} ẋ^T(s) U_1 ẋ(s) ds ≤ −(1/α)(∫_{t−τ_1(t)}^{t−τ_1^−} ẋ(s)ds)^T U_1 (∫_{t−τ_1(t)}^{t−τ_1^−} ẋ(s)ds) − (1/β)(∫_{t−τ_1^+}^{t−τ_1(t)} ẋ(s)ds)^T U_1 (∫_{t−τ_1^+}^{t−τ_1(t)} ẋ(s)ds),        (32)

−∫_{−τ_1^+}^{−τ_1^−} ∫_{t+θ}^{t−τ_1^−} ẋ^T(s) U_3 ẋ(s) ds dθ = −∫_{−τ_1(t)}^{−τ_1^−} ∫_{t+θ}^{t−τ_1^−} ẋ^T(s) U_3 ẋ(s) ds dθ − ∫_{−τ_1^+}^{−τ_1(t)} ∫_{t+θ}^{t−τ_1^−} ẋ^T(s) U_3 ẋ(s) ds dθ, and the U_4 double integral is split analogously.        (33)

By the double-integral Jensen inequality (9), each piece is bounded as

−∫_{−τ_1(t)}^{−τ_1^−} ∫_{t+θ}^{t−τ_1^−} ẋ^T(s) U_3 ẋ(s) ds dθ ≤ −[2/(τ_1(t) − τ_1^−)^2] (∫_{−τ_1(t)}^{−τ_1^−}(x(t−τ_1^−) − x(t+θ))dθ)^T U_3 (∫_{−τ_1(t)}^{−τ_1^−}(x(t−τ_1^−) − x(t+θ))dθ) = ξ^T(t) W_{7,1}^T(−2U_3)W_{7,1} ξ(t),        (34)

−∫_{−τ_1^+}^{−τ_1(t)} ∫_{t+θ}^{t−τ_1(t)} ẋ^T(s) U_3 ẋ(s) ds dθ ≤ −[2/(τ_1^+ − τ_1(t))^2] (∫_{−τ_1^+}^{−τ_1(t)}(x(t−τ_1(t)) − x(t+θ))dθ)^T U_3 (∫_{−τ_1^+}^{−τ_1(t)}(x(t−τ_1(t)) − x(t+θ))dθ) = ξ^T(t) W_{7,2}^T(−2U_3)W_{7,2} ξ(t),        (35)

−∫_{−τ_1(t)}^{−τ_1^−} ∫_{t−τ_1(t)}^{t+θ} ẋ^T(s) U_4 ẋ(s) ds dθ ≤ −[2/(τ_1(t) − τ_1^−)^2] (∫_{−τ_1(t)}^{−τ_1^−}(x(t+θ) − x(t−τ_1(t)))dθ)^T U_4 (∫_{−τ_1(t)}^{−τ_1^−}(x(t+θ) − x(t−τ_1(t)))dθ) = ξ^T(t) W_{7,3}^T(−2U_4)W_{7,3} ξ(t),        (36)

−∫_{−τ_1^+}^{−τ_1(t)} ∫_{t−τ_1^+}^{t+θ} ẋ^T(s) U_4 ẋ(s) ds dθ ≤ −[2/(τ_1^+ − τ_1(t))^2] (∫_{−τ_1^+}^{−τ_1(t)}(x(t+θ) − x(t−τ_1^+))dθ)^T U_4 (∫_{−τ_1^+}^{−τ_1(t)}(x(t+θ) − x(t−τ_1^+))dθ) = ξ^T(t) W_{7,4}^T(−2U_4)W_{7,4} ξ(t).        (37)

If [U_1  U_1^*; ∗  U_1] > 0, the two U_1 terms in (32) can be combined by Lemma 2.3:

−(1/α)(∫_{t−τ_1(t)}^{t−τ_1^−} ẋ(s)ds)^T U_1 (∫_{t−τ_1(t)}^{t−τ_1^−} ẋ(s)ds) − (1/β)(∫_{t−τ_1^+}^{t−τ_1(t)} ẋ(s)ds)^T U_1 (∫_{t−τ_1^+}^{t−τ_1(t)} ẋ(s)ds) ≤ ξ^T(t) W_{7,5}^T (−[U_1  U_1^*; ∗  U_1]) W_{7,5} ξ(t) = ξ^T(t) W_{7,5}^T(−Φ_5)W_{7,5} ξ(t).        (38)

Therefore, one can further conclude that

LV_7 ≤ ξ^T(t) W_7 ξ(t).        (39)

LV_8 = h_1^2 ẋ^T(t) Z_{1i} ẋ(t) − h_1 ∫_{t−h_1}^t ẋ^T(s) Z_{1i} ẋ(s) ds + h_1 ∫_{−h_1}^0 ∫_{t+θ}^t Σ_{j=1}^N π_ij ẋ^T(s) Z_{1j} ẋ(s) ds dθ + (τ_2^+ − τ_2^−)^2 ẋ^T(t) Z_{2i} ẋ(t) − (τ_2^+ − τ_2^−) ∫_{t−τ_2^+}^{t−τ_2^−} ẋ^T(s) Z_{2i} ẋ(s) ds + (τ_2^+ − τ_2^−) ∫_{−τ_2^+}^{−τ_2^−} ∫_{t+θ}^t Σ_{j=1}^N π_ij ẋ^T(s) Z_{2j} ẋ(s) ds dθ.

By Lemma 2.5,

−h_1 ∫_{t−h_1}^t ẋ^T(s) Z_{1i} ẋ(s) ds ≤ −(x(t) − x(t−h_1))^T Z_{1i} (x(t) − x(t−h_1)) − 3[x(t) + x(t−h_1) − (2/h_1)∫_{t−h_1}^t x(s)ds]^T Z_{1i} [x(t) + x(t−h_1) − (2/h_1)∫_{t−h_1}^t x(s)ds] ≤ ξ^T(t) W_{8,1}^T(−Z_{1i}^*)W_{8,1} ξ(t).        (40)

By applying Lemmas 2.1, 2.3 and 2.5, one can obtain

−(τ_2^+ − τ_2^−) ∫_{t−τ_2^+}^{t−τ_2^−} ẋ^T(s) Z_{2i} ẋ(s) ds = −(τ_2^+ − τ_2^−) ∫_{t−τ_2^+}^{t−τ_2(t)} ẋ^T(s) Z_{2i} ẋ(s) ds − (τ_2^+ − τ_2^−) ∫_{t−τ_2(t)}^{t−τ_2^−} ẋ^T(s) Z_{2i} ẋ(s) ds
≤ −[(τ_2^+ − τ_2^−)/(τ_2^+ − τ_2(t))] {[x(t−τ_2(t)) − x(t−τ_2^+)]^T Z_{2i} [x(t−τ_2(t)) − x(t−τ_2^+)] + 3[x(t−τ_2(t)) + x(t−τ_2^+) − (2/(τ_2^+ − τ_2(t)))∫_{t−τ_2^+}^{t−τ_2(t)} x(s)ds]^T Z_{2i} [x(t−τ_2(t)) + x(t−τ_2^+) − (2/(τ_2^+ − τ_2(t)))∫_{t−τ_2^+}^{t−τ_2(t)} x(s)ds]}
− [(τ_2^+ − τ_2^−)/(τ_2(t) − τ_2^−)] {[x(t−τ_2^−) − x(t−τ_2(t))]^T Z_{2i} [x(t−τ_2^−) − x(t−τ_2(t))] + 3[x(t−τ_2^−) + x(t−τ_2(t)) − (2/(τ_2(t) − τ_2^−))∫_{t−τ_2(t)}^{t−τ_2^−} x(s)ds]^T Z_{2i} [x(t−τ_2^−) + x(t−τ_2(t)) − (2/(τ_2(t) − τ_2^−))∫_{t−τ_2(t)}^{t−τ_2^−} x(s)ds]}
= −[(τ_2^+ − τ_2^−)/(τ_2^+ − τ_2(t))] ξ^T(t) W_{8,2}^T Z_{2i}^* W_{8,2} ξ(t) − [(τ_2^+ − τ_2^−)/(τ_2(t) − τ_2^−)] ξ^T(t) W_{8,3}^T Z_{2i}^* W_{8,3} ξ(t).        (41)

If [Z_{2i}^*  M_i; ∗  Z_{2i}^*] > 0 with M_i = [M_{i1}  M_{i2}; M_{i3}  M_{i4}], one has

−(τ_2^+ − τ_2^−) ∫_{t−τ_2^+}^{t−τ_2^−} ẋ^T(s) Z_{2i} ẋ(s) ds ≤ ξ^T(t) W_{8,4}^T (−[Z_{2i}^*  M_i; ∗  Z_{2i}^*]) W_{8,4} ξ(t) = ξ^T(t) W_{8,4}^T(−Φ_{6i})W_{8,4} ξ(t).        (42)

Hence, one can conclude

LV_8 ≤ ξ^T(t) W_{8i} ξ(t) + h_1 ∫_{−h_1}^0 ∫_{t+θ}^t Σ_{j=1}^N π_ij ẋ^T(s) Z_{1j} ẋ(s) ds dθ + (τ_2^+ − τ_2^−) ∫_{−τ_2^+}^{−τ_2^−} ∫_{t+θ}^t Σ_{j=1}^N π_ij ẋ^T(s) Z_{2j} ẋ(s) ds dθ,        (43)

LV_9 = ξ^T(t) W_9 ξ(t) − h_1 ∫_{−h_1}^0 ∫_{t+θ}^t ẋ^T(s) Z_1 ẋ(s) ds dθ − (τ_2^+ − τ_2^−) ∫_{−τ_2^+}^{−τ_2^−} ∫_{t+θ}^t ẋ^T(s) Z_2 ẋ(s) ds dθ,        (44)

LV_10 = η_4^T(t) J_1 η_4(t) − η_4^T(t−h_3) J_1 η_4(t−h_3) + h_3 f^T(x(t)) J_2 f(x(t)) + τ_3^+ f^T(x(t)) J_3 f(x(t)) + ½h_3^2 f^T(x(t)) J_4 f(x(t)) − ∫_{t−h_3}^t f^T(x(s)) J_2 f(x(s)) ds − ∫_{t−τ_3^+}^t f^T(x(s)) J_3 f(x(s)) ds − ∫_{−h_3}^0 ∫_{t+θ}^t f^T(x(s)) J_4 f(x(s)) ds dθ ≤ ξ^T(t) W_10 ξ(t),        (45)

LV_11 = ξ^T(t) W_11 ξ(t) − ∫_{−h_1}^0 ∫_β^0 ∫_{t+θ}^t ẋ^T(s) T ẋ(s) ds dθ dβ,        (46)

LV_12 = ξ^T(t) W_{12i} ξ(t) + ∫_{−h_1}^0 ∫_β^0 ∫_{t+θ}^t Σ_{j=1}^N π_ij ẋ^T(s) T_j ẋ(s) ds dθ dβ.        (47)

From Assumption 2.3, [f_i(x_i(t)) − σ_i^+ x_i(t)][σ_i^− x_i(t) − f_i(x_i(t))] ≥ 0, i = 1, 2, ..., n, so for any diagonal matrices L_1 = diag(l_{11}, ..., l_{1n}) > 0 and L_2 = diag(l_{21}, ..., l_{2n}) > 0 it follows that

Γ^+ = [x(t); f(x(t))]^T [−L_1Σ_1  L_1Σ_2; ∗  −L_1] [x(t); f(x(t))] ≥ 0,        (48)

Γ^− = [x(t−τ_2(t)); f(x(t−τ_2(t)))]^T [−L_2Σ_1  L_2Σ_2; ∗  −L_2] [x(t−τ_2(t)); f(x(t−τ_2(t)))] ≥ 0.        (49)

Letting Γ_1 = [−L_1Σ_1  L_1Σ_2; ∗  −L_1] and Γ_2 = [−L_2Σ_1  L_2Σ_2; ∗  −L_2], one obtains

Γ^+ + Γ^− = ξ^T(t) W_{13,1}^T Γ_1 W_{13,1} ξ(t) + ξ^T(t) W_{13,2}^T Γ_2 W_{13,2} ξ(t) = ξ^T(t) W_13 ξ(t).        (50)

For any matrices Λ_j (j = 1, 2, ..., m) and N_{1i}, N_{2i}, N_{3i}, N_{4i} (i = 1, 2, ..., N), one has the free-weighting identities

Θ_j = 2ξ^T(t) Λ_j [x(t−(j−1)h_1) − x(t−jh_1) − ∫_{t−jh_1}^{t−(j−1)h_1} ẋ(s) ds] = 0,

0 = 2[x^T(t)N_{1i} + ẋ^T(t)N_{2i} + ẋ^T(t−τ_1(t))N_{3i} − ∫_{t−τ_3(t)}^t f^T(x(s))ds N_{4i}] [−ẋ(t) + E_i ẋ(t−τ_1(t)) + A_i x(t) + B_i f(x(t)) + C_i f(x(t−τ_2(t))) + D_i ∫_{t−τ_3(t)}^t f(x(s))ds]
= ξ^T(t)(W_{14i}^T W_{15i} + W_{15i}^T W_{14i} + Δ_{2i}^T I(r_t,t)Δ_{1i} + Δ_{1i}^T I^T(r_t,t)Δ_{2i})ξ(t).        (51)

Moreover, one also has

−2ξ^T(t) Λ_j ∫_{t−jh_1}^{t−(j−1)h_1} ẋ(s) ds ≤ ξ^T(t) Λ_j V_j^{−1} Λ_j^T ξ(t) + (∫_{t−jh_1}^{t−(j−1)h_1} ẋ^T(s)ds) V_j (∫_{t−jh_1}^{t−(j−1)h_1} ẋ(s)ds).        (52)

According to (19)-(52), the following inequality can be derived:

LV(x_t, i, t) ≤ ξ^T(t)(Ξ_i^{(k)} + Σ_{k=1}^m Λ_k V_k^{−1} Λ_k^T + Δ_{2i}^T I(r_t,t)Δ_{1i} + Δ_{1i}^T I^T(r_t,t)Δ_{2i})ξ(t)
+ ∫_{t−h_1}^t f^T(η_2(s))(Σ_{j=1}^N π_ij S_{2j} − S_2)f(η_2(s))ds + ∫_{t−h_1}^t η_2^T(s)(Σ_{j=1}^N π_ij S_{1j} − S_1)η_2(s)ds
+ h_1 ∫_{−h_1}^0 ∫_{t+θ}^t ẋ^T(s)(Σ_{j=1}^N π_ij Z_{1j} − Z_1)ẋ(s)ds dθ + (τ_2^+ − τ_2^−) ∫_{−τ_2^+}^{−τ_2^−} ∫_{t+θ}^t ẋ^T(s)(Σ_{j=1}^N π_ij Z_{2j} − Z_2)ẋ(s)ds dθ
+ ∫_{−h_1}^0 ∫_β^0 ∫_{t+θ}^t ẋ^T(s)(Σ_{j=1}^N π_ij T_j − T)ẋ(s)ds dθ dβ.        (53)

By applying Lemma 2.2, one has

Ξ_i^{(k)} + Σ_{k=1}^m Λ_k V_k^{−1} Λ_k^T + Δ_{2i}^T I(r_t,t)Δ_{1i} + Δ_{1i}^T I^T(r_t,t)Δ_{2i} ≤ Ξ_i^{(k)} + Σ_{k=1}^m Λ_k V_k^{−1} Λ_k^T + ε_i Δ_{1i}^T Δ_{1i} + ε_i^{−1} Δ_{2i}^T Δ_{2i}.        (54)

Then, according to (17) and (54), it can be concluded that

LV(x_t, i, t) ≤ ξ^T(t)(Ξ_i^{(k)} + Σ_{k=1}^m Λ_k V_k^{−1} Λ_k^T + ε_i Δ_{1i}^T Δ_{1i} + ε_i^{−1} Δ_{2i}^T Δ_{2i})ξ(t)
+ ∫_{t−h_1}^t f^T(η_2(s))(Σ_{j=1}^N π_ij S_{2j} − S_2)f(η_2(s))ds + ∫_{t−h_1}^t η_2^T(s)(Σ_{j=1}^N π_ij S_{1j} − S_1)η_2(s)ds
+ h_1 ∫_{−h_1}^0 ∫_{t+θ}^t ẋ^T(s)(Σ_{j=1}^N π_ij Z_{1j} − Z_1)ẋ(s)ds dθ + (τ_2^+ − τ_2^−) ∫_{−τ_2^+}^{−τ_2^−} ∫_{t+θ}^t ẋ^T(s)(Σ_{j=1}^N π_ij Z_{2j} − Z_2)ẋ(s)ds dθ
+ ∫_{−h_1}^0 ∫_β^0 ∫_{t+θ}^t ẋ^T(s)(Σ_{j=1}^N π_ij T_j − T)ẋ(s)ds dθ dβ
≤ ξ^T(t)(Ξ_i^{(k)} + Σ_{k=1}^m Λ_k V_k^{−1} Λ_k^T + ε_i Δ_{1i}^T Δ_{1i} + ε_i^{−1} Δ_{2i}^T Δ_{2i})ξ(t).        (55)

By the Schur complement lemma, negativity of the right-hand side of (55) is equivalent to (18). Hence E{LV(x_t, i, t)} ≤ 0 and, according to Definition 1 in [17], the neutral-type neural network with Markovian jumping parameters (2) is robustly stochastically stable in the mean square. This completes the proof.

Remark 3.1. The delay interval [0, τ_2^−] is decomposed into m equal subintervals, and the Newton-Leibniz formula is applied in each subinterval with different free-weighting matrices, which helps to obtain less conservative results.

Remark 3.2. In [25], the interval [0, τ_2^−], which is a subinterval of the delay interval [τ_2^−, τ_2^+], is decomposed into n equal segments, but no account is taken of which subinterval the time-varying delay lies in. To obtain more accurate results, this paper partitions the delay interval [τ_2^−, τ_2^+] and introduces the time variable ρ(t) (ρ(t) = (τ_2(t) − τ_2^−)/l). We have [(k−1)h_2, kh_2] = [(k−1)h_2, (k−1)h_2 + ρ(t)] ∪ [(k−1)h_2 + ρ(t), kh_2], so the time-varying delay τ_2(t) can be located within a specific subinterval.

If there are no uncertainties in the neural network (6), one has

ẋ(t) = E(r_t)ẋ(t−τ_1(t)) + A(r_t)x(t) + B(r_t)f(x(t)) + C(r_t)f(x(t−τ_2(t))) + D(r_t)∫_{t−τ_3(t)}^t f(x(s))ds.        (56)

Next, Theorem 3.1 is extended to the system (56). Hence, the following corollary is presented.

Corollary 3.1. Under Assumptions 2.1, 2.2 and 2.3, the neutral-type neural network (56) is robustly stochastically stable in the mean square if there exist matrices P = [P_ij]_{3×3} > 0; V_j > 0 (j = 1, ..., m); F_k > 0, F_{k1} > 0, G_k (k = 1, ..., l); R_11, R_12, R_13, R_r (r = 2, ..., l); S_1 > 0, S_2 > 0, S_{1i} > 0, S_{2i} > 0; U_j > 0, J_j > 0 (j = 1, 2, 3, 4); U_1^*; Z_1 > 0, Z_2 > 0, Z_{1i} > 0, Z_{2i} > 0; T > 0, T_i > 0; Λ_{jp} (j = 1, ..., m; p = 1, ..., 2m+3l+q+16); N_{1i}, N_{2i}, N_{3i}, N_{4i}, M_{i1}, M_{i2}, M_{i3}, M_{i4}, P_1, P_2, H_1, H_2, Q_1, Q_2; and positive definite diagonal matrices K_1, K_2, L_1, L_2 such that (12)-(17) and the following LMI hold:

(Φ_{7i}^{(k)})^* = [Ξ_i^{(k)}  Λ_1  Λ_2  ...  Λ_m; ∗  −V_1  0  ...  0; ∗  ∗  −V_2  ...  0; ...; ∗  ∗  ∗  ...  −V_m] < 0,        (57)

where Ξ_i^{(k)} and Λ_j (j = 1, ..., m) are given in Theorem 3.1. Similar to the analysis in the proof of Theorem 3.1, one can obtain the results of Corollary 3.1.

In addition, if there is no neutral term, the system (56) reduces to

ẋ(t) = A(r_t)x(t) + B(r_t)f(x(t)) + C(r_t)f(x(t−τ_2(t))) + D(r_t)∫_{t−τ_3(t)}^t f(x(s))ds.        (58)

Corollary 3.2. Under Assumptions 2.1, 2.2 and 2.3, the neural network (58) is robustly stochastically stable in the mean square if there exist matrices P = [P_ij]_{3×3} > 0; V_j > 0 (j = 1, ..., m); F_k > 0, F_{k1} > 0, G_k (k = 1, ..., l); R_11, R_12, R_13, R_r (r = 2, ..., l); S_1 > 0, S_2 > 0, S_{1i} > 0, S_{2i} > 0; J_j > 0 (j = 1, 2, 3, 4); Z_1 > 0, Z_2 > 0, Z_{1i} > 0, Z_{2i} > 0; T > 0, T_i > 0; Λ_{jp}^* (j = 1, ..., m; p = 1, ..., 2m+3l+q+10); N_{1i}, N_{2i}, N_{4i}, M_{i1}, M_{i2}, M_{i3}, M_{i4}, P_1, P_2, H_1, H_2, Q_1, Q_2; and positive definite diagonal matrices K_1, K_2, L_1, L_2 such that (12)-(14), (16)-(17) and the following LMI hold:

(Φ_{7i}^{(k)})^{∗∗} = [(Ξ_i^{(k)})^*  Λ_1^*  Λ_2^*  ...  Λ_m^*; ∗  −V_1  0  ...  0; ∗  ∗  −V_2  ...  0; ...; ∗  ∗  ∗  ...  −V_m] < 0,        (59)

where

(Ξ_i^{(k)})^* = W_1^* + (W_1^*)^T + Σ_{k=2}^5 W_k^* + W_{6i}^* + W_{8i}^* + Σ_{k=9}^{11} W_k^* + W_{12i}^* + W_13^* + (W_{14i}^*)^T W_{15i}^* + (W_{15i}^*)^T W_{14i}^* + Σ_{j=1}^m Λ_j^* r_j + Σ_{j=1}^m r_j^T (Λ_j^*)^T − Σ_{j=1}^m Λ_j^* r_{j+1} − Σ_{j=1}^m r_{j+1}^T (Λ_j^*)^T + Υ + Υ^{(k)} + Υ̃^{(k)},

Λ_j^* = col[Λ_{j1}^*, Λ_{j2}^*, ..., Λ_{j,2m+3l+q+10}^*] (j = 1, ..., m),

W_{1,1}^* = col[r_1, r_{2m+3l+q+7}, r_{2m+3l+q+10}],  W_{1,2}^* = col[r_{2m+3l+5}, r_1 − r_2, h_3 r_{m+2l+3} − r_{2m+3l+6}],
W_1^* = (W_{1,1}^*)^T P W_{1,2}^* + (r_{m+2l+3} − Σ^− r_1)^T K_1 r_{2m+3l+5} + (Σ^+ r_1 − r_{m+2l+3})^T K_2 r_{2m+3l+5},

W_2^* = r_{2m+3l+5}^T (Σ_{k=1}^m h_1^2 V_k) r_{2m+3l+5},

W_{3,1}^* = col[r_1, ..., r_m],  W_{3,2}^* = col[r_{m+2l+3}, ..., r_{2m+2l+2}],  W_{3,3}^* = col[r_2, ..., r_{m+1}],  W_{3,4}^* = col[r_{m+2l+4}, ..., r_{2m+2l+3}],  W_{3,5}^* = col[r_{m+1}, ..., r_{m+l}],  W_{3,6}^* = col[r_{2m+2l+3}, ..., r_{2m+3l+2}],  W_{3,7}^* = col[r_{m+2}, ..., r_{m+l+1}],  W_{3,8}^* = col[r_{2m+2l+4}, ..., r_{2m+3l+3}],

W_3^* has the same structure as W_3 in Theorem 3.1 with each W_{3,·} replaced by W_{3,·}^*,

Υ = ((Ψ_ij) + (Ψ_ij)^T)_{(2m+3l+q+10)n×(2m+3l+q+10)n}, and Υ^{(k)}, Υ̃^{(k)} are defined analogously with dimension (2m+3l+q+10)n,

W_4^* = Σ_{j=1}^l r_{m+j}^T F_{j1} r_{m+j} − (1 − µ_2/l) Σ_{j=1}^l r_{m+2+l+j}^T F_{j1} r_{m+2+l+j} + h_2^2 Σ_{j=1}^l r_{2m+3l+5}^T F_j r_{2m+3l+5},

W_5^* = (W_{3,1}^*)^T S_1 W_{3,1}^* + (W_{3,2}^*)^T S_2 W_{3,2}^*,

W_{6i}^* = (W_{3,1}^*)^T S_{1i} W_{3,1}^* + (W_{3,2}^*)^T S_{2i} W_{3,2}^* − (W_{3,3}^*)^T S_{1i} W_{3,3}^* − (W_{3,4}^*)^T S_{2i} W_{3,4}^*,

W_{8,1}^* = col[r_1 − r_2, r_1 + r_2 − (2/h_1) r_{2m+3l+q+7}],  W_{8,2}^* = col[r_{m+l+2} − r_{m+l+1}, r_{m+l+2} + r_{m+l+1} − 2r_{2m+3l+q+9}],  W_{8,3}^* = col[r_{m+1} − r_{m+l+2}, r_{m+1} + r_{m+l+2} − 2r_{2m+3l+q+8}],  W_{8,4}^* = col[W_{8,2}^*, W_{8,3}^*],

W_{8i}^* = r_{2m+3l+5}^T [h_1^2 Z_{1i} + (τ_2^+ − τ_2^−)^2 Z_{2i}] r_{2m+3l+5} + (W_{8,1}^*)^T(−Z_{1i}^*)W_{8,1}^* + (W_{8,4}^*)^T(−Φ_{6i})W_{8,4}^*,

W_9^* = r_{2m+3l+5}^T (½h_1^3 Z_1 + ½(τ_2^+ − τ_2^−)((τ_2^+)^2 − (τ_2^−)^2) Z_2) r_{2m+3l+5},

W_{10,1}^* = col[r_{2m+3l+6}, ..., r_{2m+3l+q+5}],  W_{10,2}^* = col[r_{2m+3l+7}, ..., r_{2m+3l+q+6}],

W_10^* = (W_{10,1}^*)^T J_1 W_{10,1}^* − (W_{10,2}^*)^T J_1 W_{10,2}^* + r_{m+2l+3}^T (h_3 J_2 + τ_3^+ J_3 + ½h_3^2 J_4) r_{m+2l+3} − (1/h_3) r_{2m+3l+6}^T J_2 r_{2m+3l+6} − Σ_{p=1}^q r_{2m+3l+5+p}^T (1/h_3) J_3 r_{2m+3l+5+p} − (2/(τ_3^+)^2) r_{2m+3l+q+10}^T J_4 r_{2m+3l+q+10},

W_11^* = r_{2m+3l+5}^T (h_1^3 T/6) r_{2m+3l+5},

W_{12,1}^* = h_1 r_1 − r_{2m+3l+q+7},  W_{12i}^* = r_{2m+3l+5}^T (½h_1^2 T_i) r_{2m+3l+5} + (W_{12,1}^*)^T(−(2/h_1^2) T_i)W_{12,1}^*,

W_{13,1}^* = col[r_1, r_{m+2l+3}],  W_{13,2}^* = col[r_{m+l+2}, r_{2m+3l+4}],  W_13^* = (W_{13,1}^*)^T Γ_1 W_{13,1}^* + (W_{13,2}^*)^T Γ_2 W_{13,2}^*,

W_{14i}^* = col[N_{1i}^T r_1 + N_{2i}^T r_{2m+3l+5} − Σ_{p=1}^q N_{4i}^T r_{2m+3l+5+p}],

W_{15i}^* = col[A_i r_1 + B_i r_{m+2l+3} + C_i r_{2m+3l+4} + Σ_{p=1}^q D_i r_{2m+3l+5+p} − r_{2m+3l+5}].

The other variables are given in Theorem 3.1.

Proof. Consider the Lyapunov-Krasovskii functional candidate in the proof of Theorem 3.1 and set U_i = 0 (i = 1, 2, 3, 4) and U_1^* = 0. Following the proof of Theorem 3.1, the results of Corollary 3.2 are obtained directly.

In the following, the neural network (6) without Markovian jumping parameters is considered:

ẋ(t) = Eẋ(t−τ_1(t)) + Ax(t) + Bf(x(t)) + Cf(x(t−τ_2(t))) + D∫_{t−τ_3(t)}^t f(x(s))ds.        (60)

From the proof of Theorem 3.1, one has the following theorem.

Theorem 3.2. Under Assumptions 2.2 and 2.3, the neutral-type neural network (60) is globally asymptotically stable if there exist matrices P = [P_ij]_{3×3} > 0; V_j > 0 (j = 1, ..., m); F_k > 0, F_{k1} > 0, G_k (k = 1, ..., l); R_11, R_12, R_13, R_r (r = 2, ..., l); S_1 > 0, S_2 > 0; U_j > 0, J_j > 0 (j = 1, 2, 3, 4); U_1^*; T_1 > 0, T_2 > 0; Λ_{jp} (j = 1, ..., m; p = 1, ..., 2m+3l+q+16); N_1, N_2, N_3, N_4, M_1, M_2, M_3, M_4, P_1, P_2, H_1, H_2, Q_1, Q_2; and positive definite diagonal matrices K_1, K_2, L_1, L_2 such that (12)-(15), (17) and the following LMIs hold:

Φ_6 = [T_2^*  M; ∗  T_2^*] > 0,   M = [M_1  M_2; M_3  M_4],        (61)

Φ_7^{(k)} = [Ξ^{(k)}  Λ_1  Λ_2  ...  Λ_m; ∗  −V_1  0  ...  0; ∗  ∗  −V_2  ...  0; ...; ∗  ∗  ∗  ...  −V_m] < 0,        (62)

where

Ξ^{(k)} = W_1 + W_1^T + Σ_{k=2}^4 W_k + Σ_{k=6}^8 W_k + W_10 + W_13 + W_14^T W_15 + W_15^T W_14 + Σ_{j=1}^m Λ_j e_j + Σ_{j=1}^m e_j^T Λ_j^T − Σ_{j=1}^m Λ_j e_{j+1} − Σ_{j=1}^m e_{j+1}^T Λ_j^T + Υ + Υ^{(k)} + Υ̃^{(k)},

T_1^* = diag(T_1, 3T_1),  T_2^* = diag(T_2, 3T_2),

W_6 = W_{3,1}^T S_1 W_{3,1} + W_{3,2}^T S_2 W_{3,2} − W_{3,3}^T S_1 W_{3,3} − W_{3,4}^T S_2 W_{3,4},

W_8 = e_{2m+3l+5}^T [h_1^2 T_1 + (τ_2^+ − τ_2^−)^2 T_2] e_{2m+3l+5} + W_{8,1}^T(−T_1^*)W_{8,1} + W_{8,4}^T(−Φ_6)W_{8,4},

W_14 = col[N_1^T e_1 + N_2^T e_{2m+3l+5} + N_3^T e_{2m+3l+q+12} − Σ_{p=1}^q N_4^T e_{2m+3l+5+p}],

W_15 = col[Ae_1 + Be_{m+2l+3} + Ce_{2m+3l+4} + Σ_{p=1}^q De_{2m+3l+5+p} + Ee_{2m+3l+q+12} − e_{2m+3l+5}].

The other variables are given in Theorem 3.1.

Proof. Consider the following Lyapunov-Krasovskii functional candidate:

V(x_t) = Σ_{k=1}^8 V_k(x_t),

where

V_1(x_t) = η_1^T(t) P η_1(t) + 2Σ_{k=1}^n k_{1k} ∫_0^{x_k(t)} (f_k(s) − σ_k^− s) ds + 2Σ_{k=1}^n k_{2k} ∫_0^{x_k(t)} (σ_k^+ s − f_k(s)) ds,

V_2(x_t) = h_1 Σ_{j=1}^m ∫_{−jh_1}^{−(j−1)h_1} ∫_{t+θ}^t ẋ^T(s) V_j ẋ(s) ds dθ,

V_3(x_t) = ∫_{t−h_1}^t [η_2(s); f(η_2(s))]^T [P_1  H_1; ∗  Q_1][η_2(s); f(η_2(s))] ds + ∫_{t−h_2}^t [η_3(s); f(η_3(s))]^T [P_2  H_2; ∗  Q_2][η_3(s); f(η_3(s))] ds,

V_4(x_t) = Σ_{k=1}^l ∫_{t−τ_2^−−(k−1)h_2−ρ(t)}^{t−τ_2^−−(k−1)h_2} x^T(s) F_{k1} x(s) ds + h_2 Σ_{k=1}^l ∫_{−τ_2^−−kh_2}^{−τ_2^−−(k−1)h_2} ∫_{t+θ}^t ẋ^T(s) F_k ẋ(s) ds dθ,

V_5(x_t) = ∫_{t−h_1}^t η_2^T(s) S_1 η_2(s) ds + ∫_{t−h_1}^t f^T(η_2(s)) S_2 f(η_2(s)) ds,

V_6(x_t) = (τ_1^+ − τ_1^−) ∫_{−τ_1^+}^{−τ_1^−} ∫_{t+θ}^t ẋ^T(s) U_1 ẋ(s) ds dθ + ∫_{t−τ_1(t)}^t ẋ^T(s) U_2 ẋ(s) ds + ∫_{−τ_1^+}^{−τ_1^−} ∫_η^{−τ_1^−} ∫_{t+θ}^t ẋ^T(s) U_3 ẋ(s) ds dθ dη + ∫_{−τ_1^+}^{−τ_1^−} ∫_{−τ_1^+}^η ∫_{t+θ}^t ẋ^T(s) U_4 ẋ(s) ds dθ dη,

V_7(x_t) = h_1 ∫_{−h_1}^0 ∫_{t+θ}^t ẋ^T(s) T_1 ẋ(s) ds dθ + (τ_2^+ − τ_2^−) ∫_{−τ_2^+}^{−τ_2^−} ∫_{t+θ}^t ẋ^T(s) T_2 ẋ(s) ds dθ,

V_8(x_t) = ∫_{t−ρ_1(t)}^t η_4^T(s) J_1 η_4(s) ds + ∫_{−h_3}^0 ∫_{t+θ}^t f^T(x(s)) J_2 f(x(s)) ds dθ + ∫_{−τ_3^+}^0 ∫_{t+θ}^t f^T(x(s)) J_3 f(x(s)) ds dθ + ∫_{−h_3}^0 ∫_β^0 ∫_{t+θ}^t f^T(x(s)) J_4 f(x(s)) ds dθ dβ.

Similar to the proof of Theorem 3.1, the above result can be proved.

Remark 3.3. The time-varying delay considered here is of the same kind as in [21]; however, the proposed delay-partitioning method yields less conservative results. In [21], the time-varying delay term τ(t) was usually estimated by its upper bound τ when bounding the functional. In this paper, the upper and lower bounds are estimated more precisely, since τ(t) is confined to the interval 0 ≤ τ_2^− ≤ τ_2(t) ≤ τ_2^+. The delay-partitioning method therefore generalizes the problem.

Remark 3.4. Different from the approaches applied in [20, 21], novel Lyapunov-Krasovskii functionals with multiple-integral terms are established, and Lemma 2.4 is used to handle them. The examples below show that the results reduce conservatism compared with other results.

Remark 3.5. In [23], the distributed delay ∫_{t−r}^t f(x(s))ds has a constant delay r, and the interval [0, r] is decomposed into l equal segments. Different from [23], we assume that the distributed delay is time-varying, ∫_{t−τ_3(t)}^t f(x(s))ds, and the time variable ρ_1(t) (ρ_1(t) = τ_3(t)/q) is introduced to deal with the corresponding delay-partitioning problem.

Remark 3.6. Different from existing delay-partitioning methods, the term V_4(x_t, i, t) is constructed in this paper. τ_2(t) is first assumed to belong to some subinterval [τ_2^− + (k−1)h_2, τ_2^− + kh_2], and ρ(t) is introduced; then this subinterval is decomposed into two segments, [τ_2^− + (k−1)h_2, τ_2^− + (k−1)h_2 + ρ(t)] and [τ_2^− + (k−1)h_2 + ρ(t), τ_2^− + kh_2]. Hence, the stability theorems proposed in this paper may be less conservative.

4. Illustrative examples

In this section, three examples are provided to show the effectiveness of the proposed methods.

Example 1. Consider the system (56) with the following Markovian jumping parameters:

E_1 = −[0.8  0; 0  0.2],  E_2 = −[0.3  0.2; 0.1  0.2],  A_1 = −[2  0; 0  3],  A_2 = −[6  0; 0  3],
B_1 = [0.7  0.2; −0.2  0.3],  B_2 = [−0.5  0.1; −0.3  0.4],  C_1 = [0.4  −0.5; −0.1  0.2],  C_2 = [−0.2  0.7; 0.1  −0.8],
Λ = [−0.4  0.4; 0.5  −0.5],  Σ^+ = diag(0.5, 0.7),  Σ^− = diag(0.3, 0.1),  Σ_1 = diag(0.15, 0.07),  Σ_2 = diag(0.4, 0.4),
D_1 = D_2 = 0,  τ_1^+ = 0.4,  τ_1^− = 0.1,  τ_2^− = 0.1,  τ_3^+ = 0.1,  µ_1 = 0.1,  l = q = 1.

The activation functions are taken as f_i(x_i) = 0.5(|x_i + 1| − |x_i − 1|), i = 1, 2. For each value of µ_2, the upper bounds on the unknown τ_2^+ derived by Corollary 3.1 and by the results in [20, 28] are listed in Table 1, which shows significant improvements over the existing results.

Table 1: Allowable upper bound of τ_2^+ for Example 1.

µ_2                       | 0.1    | 0.3    | 0.5    | 0.7    | 0.9    | 1
[28]                      | 0.6402 | 0.5973 | 0.5025 | 0.4850 | 0.4203 | 0.4039
Corollary 1 in [20], r=1  | 0.6694 | 0.6145 | 0.5298 | 0.5082 | 0.4942 | 0.4792
Corollary 1 in [20], r=3  | 0.6812 | 0.6158 | 0.5304 | 0.5128 | 0.5034 | 0.4941
Corollary 1 in [20], r=5  | 0.6921 | 0.6231 | 0.5491 | 0.5321 | 0.5107 | 0.5024
Corollary 3.1, r=1        | 0.7022 | 0.6328 | 0.5658 | 0.5421 | 0.5213 | 0.5106
Corollary 3.1, r=3        | 0.7156 | 0.6382 | 0.5733 | 0.5489 | 0.5286 | 0.5197
Corollary 3.1, r=5        | 0.7225 | 0.6412 | 0.5896 | 0.5521 | 0.5362 | 0.5269
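Tables of this kind are typically produced by combining an LMI feasibility test with a bisection search over the delay bound. The sketch below is our own illustration; the `feasible` callback is hypothetical and would wrap one solve of the LMIs in Corollary 3.1 for a fixed candidate τ_2^+.

```python
def max_delay_bound(feasible, lo=0.0, hi=2.0, tol=1e-4):
    """Bisection for the largest delay bound tau with feasible(tau) True.

    Assumes feasibility is monotone in tau (feasible for all smaller values),
    as is typical for delay-dependent LMI criteria of this type."""
    assert feasible(lo), "criterion must hold at the lower end"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid        # still feasible: push the bound up
        else:
            hi = mid        # infeasible: shrink from above
    return lo

# Toy stand-in: pretend the LMIs are feasible exactly for tau <= 0.7022.
print(max_delay_bound(lambda tau: tau <= 0.7022))   # ~0.7022, cf. Table 1
```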

Remark 4.1. It can be seen from Table 1 that our results are less conservative than those of [20, 28]. Furthermore, as the values of m and q increase, the admissible delay upper bound also becomes larger. Hence, the proposed delay-partitioning approach performs better.

Example 2. Consider the system (58) with the following parameters:

A_1 = −[5  0; 0  4],  A_2 = −[3  0; 0  6],  B_1 = [1  0.4; −2  0.1],  B_2 = [0.3  0.2; 0.4  0.1],
C_1 = [1  0.2; 0.1  0.2],  C_2 = [0.5  0.7; 0.7  0.4],  D_1 = [0.5  −0.3; 0.2  1.2],  D_2 = [1  −0.3; 0.2  1.2],
Λ = [−0.8  0.8; 0.3  −0.3],  Σ^+ = diag(0.8, 0.8),  Σ^− = diag(0.1, 0.1),  Σ_1 = diag(0.08, 0.08),  Σ_2 = diag(0.45, 0.45),
τ_3^+ = 0.2,  µ_2 = 0.3,  µ_1 = 0,  l = q = 1.

The activation functions are taken as f_i(x_i) = 0.5(|x_i + 1| − |x_i − 1|), i = 1, 2. For each value of τ_2^−, the upper bounds on the unknown τ_2^+ derived by Corollary 3.2 and by the results in [20, 29] are listed in Table 2, which shows the advantage of the proposed method.

Table 2: Allowable upper bound of τ_2^+ for Example 2.

τ_2^−                     | 0.1    | 0.2    | 0.4    | 0.6    | 0.8    | 1
[29]                      | 0.5029 | 0.6172 | 0.7935 | 0.9828 | 1.1642 | 1.3573
Corollary 1 in [20], r=1  | 0.7784 | 0.8685 | 1.0532 | 1.2417 | 1.4390 | 1.6390
Corollary 1 in [20], r=3  | 0.7787 | 0.8691 | 1.0549 | 1.2405 | 1.4410 | 1.6396
Corollary 1 in [20], r=5  | 0.7788 | 0.8692 | 1.0552 | 1.2455 | 1.4415 | 1.6401
Corollary 3.2, r=1        | 0.8224 | 0.8754 | 1.0752 | 1.2628 | 1.4495 | 1.6415
Corollary 3.2, r=3        | 0.8242 | 0.8796 | 1.0795 | 1.3018 | 1.4511 | 1.6455
Corollary 3.2, r=5        | 0.8315 | 0.8813 | 1.0810 | 1.3112 | 1.4528 | 1.5011

Example 3. Consider the system (60) with the following parameters:

A = −diag(1.6305, 1.9221, 2.5973, 1.3775),

B = [−2.5573  −1.3813   1.9547  −1.1398;
     −1.0226  −0.8845   0.5045  −0.2111;
      1.0378   1.5532   0.6645   1.1902;
     −0.3898   0.7079  −0.3398  −2.3162],

C = [ 0.2853   0.0265   0.1157   0.0578;
     −0.5955   0.3186  −0.1363  −0.0859;
     −0.1497   0.2037  −0.2049   0.0112;
     −0.4348  −0.3161  −0.2469  −0.0736],

D = [−0.0930  −0.0793   0.4694   0.5354;
      0.0742   1.3352  −0.9036   0.5529;
      0.1457  −0.6065  −0.1641  −0.2037;
      0.4424  −1.3474  −0.6275  −2.2543],

E = [−0.3054   0.3682   0.1761  −0.0235;
     −0.0546  −0.2089  −0.0754   0.2668;
      0.4563   0.0023   0.1440   0.6928;
     −0.0115  −0.2439   0.2004   0.1574],

Σ^+ = diag(1.0275, 0.9960, 0.3223, 0.2113),  Σ^− = Σ_1 = 0,  Σ_2 = diag(0.5138, 0.4980, 0.1611, 0.1056),  τ_3^+ = 0.2,  µ_1 = µ_2 = 0,  l = 1.

The activation functions are taken as f_i(x_i) = 0.5(|x_i + 1| − |x_i − 1|), i = 1, 2, 3, 4. By applying the MATLAB LMI Control Toolbox, the maximum allowable upper bounds τ_1^+, τ_2^+, τ_3^+ for different values of m and q are obtained and listed in Table 3, which shows the advantage of the proposed result.
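The paper solves the full LMIs with the MATLAB LMI Control Toolbox. As a hedged illustration of the same workflow in Python, the sketch below checks a much-simplified Lyapunov LMI (a toy stand-in, not the paper's conditions (12)-(18)) for mode 1 of Example 2 using cvxpy.

```python
import cvxpy as cp
import numpy as np

# Simplified feasibility check: find P > 0 with A^T P + P A < 0.
A = -np.diag([5.0, 4.0])                  # A1 from Example 2 (mode 1)
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
constraints = [P >> np.eye(n),            # P positive definite (>= I w.l.o.g.)
               A.T @ P + P @ A << -np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)                        # 'optimal' means the LMI is feasible
```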


[Figure 1 near here. Legend: x1(t), x2(t), x3(t), x4(t); vertical axis: amplitude; horizontal axis: t from 0 to 24.]

Figure 1: The state trajectories of system (60) for τ_1^+ = τ_2^+ = τ_3^+ = 2.4368 in Example 3.

Table 3: Maximum allowable upper bound of the time delays τ_1^+, τ_2^+, τ_3^+ for Example 3.

Case                         | Upper bound (τ_1^+ = τ_2^+ = τ_3^+)
[30]                         | 1.8320
Corollary 1 in [23], m=q=1   | 2.1679
Corollary 1 in [23], m=q=2   | 2.7442
Corollary 1 in [23], m=q=3   | 3.0962
Theorem 3.2, m=q=1           | 2.4368
Theorem 3.2, m=q=2           | 3.1009
Theorem 3.2, m=q=3           | 3.3740

Remark 4.2. When dealing with the data of Example 3, we set τ_1^+ = τ_2^+ = τ_3^+. Using the MATLAB LMI Control Toolbox, different values of τ_1^+, τ_2^+, τ_3^+ can be obtained; the common value reported is the largest one for which the LMIs remain feasible. Table 3 shows that the results in this paper are less conservative than those of [30] and [23]. Further, as the values of m and q increase, the admissible delay upper bound also becomes larger. The state trajectories of the system (60) are shown in Figure 1.

5. Conclusion

In this paper, delay-partitioning methods have been proposed for deriving stability criteria for uncertain neutral-type neural networks with Markovian jumping parameters and time-varying delays. Lyapunov-Krasovskii functional candidates with triple and quadruple integral terms have been introduced by applying delay-partitioning augmented factors involving subintervals. The time-varying delay is allowed to lie in any of the subintervals, which differs from existing delay-partitioning methods. Moreover, the Newton-Leibniz formula has been utilized in each subinterval with different free-weighting matrices. Based on the proposed delay-partitioning approaches, stability theorems have been given that reduce conservatism. Numerical examples have shown the benefits of the proposed results.

References

[1] K. Ratnavelu, M. Manikandan, P. Balasubramaniam, Synchronization of fuzzy bidirectional associative memory neural networks with various time delays, Applied Mathematics and Computation 270 (2015) 582-605.
[2] M. Syed Ali, Stability of Markovian jumping recurrent neural networks with discrete and distributed time-varying delays, Neurocomputing 149 (2015) 1280-1285.
[3] J. Cao, Z. Lin, Bayesian signal detection with compressed measurements, Information Sciences 289 (2014) 241-253.
[4] C. Yin, Y.Q. Chen, B. Stark, S. Zhong, E. Lau, Fractional-order adaptive minimum energy cognitive lighting control strategy for the hybrid lighting system, Energy and Buildings 87 (2015) 176-184.
[5] J. Cao, T. Chen, J. Fan, Landmark recognition with compact BoW histogram and ensemble ELM, Multimedia Tools and Applications 75 (2016) 2839-2857.
[6] C. Yin, Y.Q. Chen, S.M. Zhong, Fractional-order sliding mode based extremum seeking control of a class of nonlinear systems, Automatica 50 (2014) 3173-3181.
[7] Z. Shu, J. Lam, Exponential estimates and stabilization of uncertain singular systems with discrete and distributed delays, Int. J. Control 81 (2008) 865-882.
[8] X. Huang, C. Yin, J. Huang, X. Wen, Z. Zhao, J. Wu, S. Liu, Hypervelocity impact of TiB2-based composites as front bumpers for space shield applications, Materials & Design 97 (2016) 473-482.
[9] C. Yin, S. Zhong, W. Chen, Design of sliding mode controller for a class of fractional-order chaotic systems, Commun. Nonlinear Sci. Numer. Simulat. 17 (2012) 356-366.
[10] J. Cao, Y. Zhao, X. Lai, M. Ong, C. Yin, Z. Koh, N. Liu, Landmark recognition with sparse representation classification and extreme learning machine, Journal of the Franklin Institute 352 (2015) 4528-4545.
[11] C. Yin, Y. Cheng, Y.Q. Chen, B. Stark, S.M. Zhong, Adaptive fractional-order switching-type control method design for 3D fractional-order nonlinear systems, Nonlinear Dyn. 82 (2015) 39-52.
[12] Y. Song, S. Liu, G. Wei, Constrained robust distributed model predictive control for uncertain discrete-time Markovian jump linear system, Journal of the Franklin Institute 352 (2015) 73-92.
[13] M. Syed Ali, R. Saravanakumar, J.D. Cao, New passivity criteria for memristor-based neutral-type stochastic BAM neural networks with mixed time-varying delays, Neurocomputing 171 (2016) 1533-1547.
[14] Y. Song, X. Fang, Q. Diao, Mixed H2/H∞ distributed robust model predictive control for polytopic uncertain systems subject to actuator saturation and missing measurements, Int. J. Syst. Sci. 47(4) (2015) 777-790.
[15] Y.G. Kao, J. Xie, C.H. Wang, H.R. Karimi, A sliding mode approach to H∞ non-fragile observer-based control design for uncertain Markovian neutral-type stochastic systems, Automatica 52 (2015) 218-226.
[16] S. Muralisankar, A. Manivannan, P. Balasubramaniam, Robust stability criteria for uncertain neutral type stochastic system with Takagi-Sugeno fuzzy model and Markovian jumping parameters, Commun. Nonlinear Sci. Numer. Simulat. 17 (2012) 3876-3893.
[17] K.Q. Gu, A further refinement of discretized Lyapunov functional method for the stability of time-delay systems, Int. J. Control 74(10) (2001) 967-976.
[18] J.J. Hui, X.Y. Kong, H.X. Zhang, X. Zhou, Delay-partitioning approach for systems with interval time-varying delay and nonlinear perturbations, Journal of Computational and Applied Mathematics 281 (2015) 74-81.
[19] G. Wei, F. Han, L. Wang, Y. Song, Reliable H-infinity filtering for discrete piecewise linear systems with infinite distributed delays, Int. J. Gen. Syst. 43 (2014) 346-358.
[20] J.W. Xia, J.H. Park, H.B. Zeng, Improved delay-dependent robust stability analysis for neutral-type uncertain neural networks with Markovian jumping parameters and time-varying delays, Neurocomputing 149 (2015) 1198-1205.
[21] J.K. Tian, W.J. Xiong, F. Xu, Improved delay-partitioning method to stability analysis for neural networks with discrete and distributed time-varying delays, Applied Mathematics and Computation 233 (2014) 152-164.
[22] Y.S. Liu, Z.D. Wang, W. Wang, Reliable H∞ filtering for discrete time-delay systems with randomly occurred nonlinearities via delay-partitioning method, Signal Processing 91 (2011) 713-727.
[23] S. Lakshmanan, J.H. Park, H.Y. Jung, O.M. Kwon, R. Rakkiyappan, A delay partitioning approach to delay-dependent stability analysis for neutral type neural networks with discrete and distributed delays, Neurocomputing 111 (2013) 81-89.
[24] C. Li, X. Liao, Passivity analysis of neural networks with time delay, IEEE Trans. Circuits Syst. II: Express Briefs 52(8) (2005) 471-475.
[25] K. Gu, An integral inequality in the stability problem of time-delay systems, in: Proceedings of the 39th IEEE Conference on Decision and Control, Sydney, Australia, 2000, pp. 2805-2810.
[26] P. Park, J.W. Ko, C. Jeong, Reciprocally convex approach to stability of systems with time-varying delays, Automatica 47 (2011) 235-238.
[27] A. Seuret, F. Gouaisbaut, Wirtinger-based integral inequality: application to time-delay systems, Automatica 49 (2013) 2860-2866.
[28] W. Chen, L. Wang, Delay-dependent stability for neutral-type neural networks with time-varying delays and Markovian jumping parameters, Neurocomputing 120 (2013) 569-576.
[29] P. Balasubramaniam, S. Lakshmanan, A. Manivannan, Robust stability analysis for Markovian jumping interval neural networks with discrete and distributed time-varying delays, Chaos, Solitons and Fractals 45 (2012) 483-495.
[30] J. Feng, S. Xu, Y. Zou, Delay-dependent stability of neutral type neural networks with distributed delays, Neurocomputing 72 (2009) 2576-2580.