State estimation of neural networks with two Markovian jumping parameters and multiple time delays


PII: S0016-0032(16)30403-3
DOI: http://dx.doi.org/10.1016/j.jfranklin.2016.10.035
Reference: FI2778

To appear in: Journal of the Franklin Institute
Received date: 10 November 2015
Revised date: 3 May 2016
Accepted date: 26 October 2016

Cite this article as: Jiaojiao Ren, Xinzhi Liu, Hong Zhu, Shouming Zhong and Kaibo Shi, State estimation of neural networks with two Markovian jumping parameters and multiple time delays, Journal of the Franklin Institute, http://dx.doi.org/10.1016/j.jfranklin.2016.10.035

Jiaojiao Ren^{a,b,*}, Xinzhi Liu^{b}, Hong Zhu^{a}, Shouming Zhong^{c}, Kaibo Shi^{d}

^{a} School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, PR China
^{b} Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1
^{c} School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, PR China
^{d} School of Information Science and Engineering, Chengdu University, Chengdu 610106, China

Abstract: This paper studies the problem of state estimation for neural networks with two Markovian jumping parameters and leakage, discrete and distributed delays. The Markovian jumping parameters in the connection weight matrices and in the discrete time-varying delay are assumed to be different. By constructing an appropriate Lyapunov-Krasovskii functional and combining the reciprocally convex approach with the Wirtinger-based integral inequality (which gives a tighter upper bound), some sufficient conditions are established that guarantee that the estimation error converges to zero exponentially in the mean square sense. Compared with existing results, the obtained criteria are less conservative owing to a matrix decomposition method that makes fuller use of the information in the Lyapunov matrices. Numerical examples and simulations are given to demonstrate the reduced conservatism and effectiveness of the proposed method.

Keywords: Leakage delay; State estimation; Markovian jumping parameters; Matrix decomposition method.

1. Introduction

Over the past decades, neural networks have gained increasing attention because of their successful applications in signal processing, associative memories, combinatorial optimization and so on [1-4]. The evolution of such systems depends not only on the current state but also on past states, a phenomenon known as time delay. The existence of time delay has been recognized as one of the major sources of instability and poor performance in network dynamics. Therefore, many elegant results on the stability of delayed neural networks have been reported [5-10]. In particular, distributed delay [5, 10] should be included in the considered neural networks, since neural networks have a large number of parallel pathways with various axon sizes and lengths, so signal transmission is distributed over a period of time.

✩ This work was supported by a scholarship from the China Scholarship Council (CSC), the National Basic Research Program of China (2010CB732501), the Fundamental Research Funds for the Central Universities (No. ZYGX2014J070), the National Natural Science Foundation of China (61273015) and NSERC Canada.
∗ Corresponding author: School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, PR China.
Email address: [email protected] (Jiaojiao Ren)

Preprint submitted to The Journal of The Franklin Institute

October 28, 2016

On the other hand, information latching occurs frequently: the considered neural network may have finitely many modes obtained by extracting a finite state representation [11, 12], with each mode corresponding to a deterministic system. The switching between these modes is governed by a Markov chain [13-16]. As pointed out in [17], shuffle-exchange networks can model practical interconnection systems successfully owing to their small switching elements and uncomplicated configuration. For discrete-time systems, the authors in [18] considered the stability analysis and synchronization problems for a new class of discrete-time neural networks with Markovian jumping parameters and mode-dependent mixed time delays (both discrete and distributed). For continuous-time systems, the stability of Markovian jumping neural networks with discrete and distributed time-varying delays was investigated in [19-21]. Further, the exponential stability of delayed neural networks with Markovian jumping parameters was studied in [22, 23]. In particular, the stability and synchronization of a class of Markovian jump neural networks with partly unknown transition probabilities were addressed in [24] by the Kronecker product technique. So far, however, either the Markovian jumping parameters in the connection matrices and in the time delay are the same [14, 18], or the time delay is not related to the Markovian jumping parameters at all [12, 13, 16, 19-24]. For example, a typical Markovian jumping neural network with mixed delay [31] is described by $\dot{x}(t)=-A(r_t)x(t)+B(r_t)\sigma(x(t))+B_1(r_t)\sigma(x(t-h(t,r_t)))+D(r_t)\int_{t-d}^{t}\sigma(x(s))\,ds+J(r_t)$, where $A(r_t)$, $B(r_t)$, $B_1(r_t)$ and $D(r_t)$ are the connection weight matrices and $h(t,r_t)$ is a mode-dependent time-varying delay; the Markov chain in the connection weight matrices and in the time delay is the same one.
In fact, since dynamic systems are frequently subject to abrupt variations in their structures, the time delay may also have finitely many modes, and the switching between the delay modes can also be governed by a Markov chain that is different from the one in the connection weight matrices. However, there are few works [25] on the problem of state estimation for neural networks with two different Markovian jumping parameters; hence it is necessary and meaningful to study this problem. Due to the complexity of neural networks, exact and complete information about the neuron states is not always available from the network outputs. So that the neuron states can be fully utilized to achieve certain objectives [26], it is necessary to estimate them from the available output measurements. The problem of state estimation for time-delay neural networks was discussed in [27] using the linear matrix inequality (LMI) technique. Under the same assumption on the measurement nonlinearity, improved delay-dependent conditions for the error systems were derived in [28]. More recently, in [29], Wang and Liu investigated the problem of state estimation for Markovian jump delayed recurrent neural networks. However, the results of [29] depend on the distributed delay but not on the discrete delay. This problem was further discussed in [30], where the obtained criteria depend on both the distributed and the discrete delay. Nevertheless, in [30] only two mode-dependent matrices $P_i$ and $Q_i$ are contained in the Lyapunov functional, which may lead to more conservative conditions. Therefore, this problem was revisited in [31], where conditions were obtained by constructing a new Lyapunov-Krasovskii functional in which as many Lyapunov matrices as possible are chosen to be mode-dependent.
From the numerical examples we can see that the obtained results are less conservative than those in [30]. Although some significant results have been reported in the literature, the Lyapunov matrices have not been utilized thoroughly: in existing papers, the only property used is that the Lyapunov matrices are symmetric positive definite. If a matrix decomposition technique is introduced, a less conservative criterion may be obtained, because more information about the Lyapunov matrices can be used.

A special type of time delay, namely leakage delay, exists in the negative feedback terms of a system and has a destabilizing influence on its dynamical behavior [32, 33]. Neutral-type neural networks with time delay in the leakage term were discussed in [34] by the topological degree theory. In addition, in [35] the leakage delay was considered in stochastic neural networks, where the authors discussed the passivity of stochastic neural networks with time-varying delays and leakage delay. Moreover, Zheng and Wang [36] studied the stability of stochastic fuzzy neural networks with both Markovian jumping parameters and leakage delay under impulsive perturbations. Furthermore, the problem of state estimation for Markovian jumping neural networks with leakage time-varying delays was investigated in [37] using a discontinuous Lyapunov functional approach. Recently, the free-weighting matrix method and some stochastic analysis techniques [38] were used to study the exponential passivity of stochastic neural networks with leakage delay, distributed delays and Markovian jumping parameters. Therefore, it is necessary to add the leakage delay to the considered model. To the best of the authors' knowledge, the model considered in [25] does not take leakage delay into account; thus, this is the first attempt to study the problem of state estimation for neural networks with both two different Markovian jumping parameters and time delay in the leakage term.
Motivated by the above considerations, this paper studies the problem of state estimation for neural networks with two different Markovian jumping parameters and leakage, discrete and distributed delays. Because two different Markovian jumping parameters are introduced, a new weak infinitesimal operator is proposed to act on a Lyapunov-Krasovskii functional with two different Markovian jumping parameters. A matrix decomposition technique and the Wirtinger-based integral inequality are adopted: the former makes full use of the information in the Lyapunov matrices, while the latter gives a sharper upper bound than Jensen's inequality. By constructing an appropriate Lyapunov-Krasovskii functional that contains more mode-dependent Lyapunov matrices, several less conservative mode-dependent conditions are obtained which guarantee that the estimation error converges to zero exponentially in the mean square sense. Numerical examples are given to demonstrate the effectiveness of the proposed method.

This paper is organized as follows. In Section 2, the state estimation problem is described and some preliminaries are introduced. The main results are developed in Section 3. In Section 4, illustrative examples are provided to demonstrate the effectiveness of the proposed methods, and conclusions are drawn in Section 5.

Notation: Throughout this paper, the superscripts $-1$ and $T$ stand for the inverse and transpose of a matrix, respectively; $P>0$ ($P\ge 0$, $P<0$, $P\le 0$) means that the matrix $P$ is symmetric positive definite (positive semidefinite, negative definite, negative semidefinite); $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space; $\mathbb{R}^{m\times n}$ is the set of $m\times n$ real matrices; the identity matrix of order $n$ is denoted by $I_n$; $*$ denotes the symmetric block in a symmetric

matrix; $\lambda_{\max}(Q)$ and $\lambda_{\min}(Q)$ denote, respectively, the maximal and minimal eigenvalues of the matrix $Q$.

2. Problem statement and preliminaries

Let $\{r_t\}_{t\ge 0}$ and $\{\delta_t\}_{t\ge 0}$ be two right-continuous Markov chains defined on the probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},\mathcal{P})$, taking values in the finite sets $\mathcal{N}=\{1,2,\dots,N\}$ and $\mathcal{M}=\{1,2,\dots,M\}$, respectively. Their generators $\Pi=(\pi_{ij})_{N\times N}$ and $\Lambda=(p_{kl})_{M\times M}$ are given by
$$\Pr(r_{t+\Delta}=j\mid r_t=i)=\begin{cases}\pi_{ij}\Delta+o(\Delta), & j\ne i,\\ 1+\pi_{ii}\Delta+o(\Delta), & j=i,\end{cases}$$
$$\Pr(\delta_{t+\Delta}=l\mid \delta_t=k)=\begin{cases}p_{kl}\Delta+o(\Delta), & l\ne k,\\ 1+p_{kk}\Delta+o(\Delta), & l=k,\end{cases}$$
where $\Delta>0$ and $\lim_{\Delta\to 0^+}o(\Delta)/\Delta=0$; $\pi_{ij}\ge 0$, $\forall j\ne i$, is the transition rate from mode $i$ at time $t$ to mode $j$ at time $t+\Delta$, and $\pi_{ii}=-\sum_{j=1,\,j\ne i}^{N}\pi_{ij}$; $p_{kl}\ge 0$, $\forall l\ne k$, is the transition rate from mode $k$ at time $t$ to mode $l$ at time $t+\Delta$, and $p_{kk}=-\sum_{l=1,\,l\ne k}^{M}p_{kl}$.

The neural network with two Markovian jumping parameters and leakage, discrete and distributed delays considered in this paper is described by
$$\begin{cases}\dot{x}(t)=-C(r_t)x(t-\sigma)+A(r_t)g(x(t))+B(r_t)g(x(t-h(t,\delta_t)))+B_1(r_t)\int_{t-d}^{t}g(x(s))\,ds+J(r_t),\\ y(t)=D(r_t)x(t)+\tilde{g}(t,x(t)),\end{cases}\tag{1}$$
where $x(t)=[x_1(t),x_2(t),\cdots,x_n(t)]^T\in\mathbb{R}^n$ is the state vector associated with $n$ neurons; $y(t)\in\mathbb{R}^m$ is the network measurement; $g(x(t))=[g_1(x_1(t)),g_2(x_2(t)),\cdots,g_n(x_n(t))]^T$ is the neuron activation function; $\tilde{g}(t,x(t))$ is a nonlinear disturbance; $\sigma\ge 0$ is the leakage delay and $d$ is the distributed delay; for each $r_t\in\mathcal{N}$ and $\delta_t\in\mathcal{M}$, $h(t,\delta_t)$ is a discrete mode-dependent time-varying delay; $C(r_t)=\mathrm{diag}\{c_1(r_t),c_2(r_t),\dots,c_n(r_t)\}>0$; $A(r_t)$, $B(r_t)$, $B_1(r_t)$ and $D(r_t)\in\mathbb{R}^{m\times n}$ are the connection weight matrices; $J(r_t)$ is a constant external input vector.

Remark 1. To avoid unnecessarily complicated calculations, the Markovian jumping parameters in the connection weight matrices considered in this paper are the same. Indeed, the approach developed later can be easily applied to systems with different Markovian jumping connection weight matrices.

To simplify the notation, for each $r_t=i\in\mathcal{N}$, $C(r_t)$ is denoted by $C_i$, for example; for each $\delta_t=k\in\mathcal{M}$, $h(t,\delta_t)$ is


denoted by $h_k(t)$.

We consider the following full-order state estimator for system (1):
$$\begin{cases}\dot{\hat{x}}(t)=-C_i\hat{x}(t-\sigma)+A_ig(\hat{x}(t))+B_ig(\hat{x}(t-h_k(t)))+B_{1i}\int_{t-d}^{t}g(\hat{x}(s))\,ds+J_i+K_i[y(t)-\hat{y}(t)],\\ \hat{y}(t)=D_i\hat{x}(t)+\tilde{g}(t,\hat{x}(t)),\end{cases}\tag{2}$$
where $\hat{x}(t)$ is the estimate of the state $x(t)$, $\hat{y}(t)$ is the estimated network measurement, and $K_i$ are the gain matrices to be determined. Introducing the error state $e(t)=x(t)-\hat{x}(t)$ and denoting $f(e(t))=g(x(t))-g(\hat{x}(t))$, $\varphi(t,e(t))=\tilde{g}(t,x(t))-\tilde{g}(t,\hat{x}(t))$, we obtain the following estimation error system:
$$\dot{e}(t)=-C_ie(t-\sigma)+A_if(e(t))+B_if(e(t-h_k(t)))+B_{1i}\int_{t-d}^{t}f(e(s))\,ds-K_iD_ie(t)-K_i\varphi(t,e(t)).\tag{3}$$
Moreover, the above system has the following equivalent form:
$$\frac{d}{dt}\Big[e(t)-C_i\int_{t-\sigma}^{t}e(s)\,ds\Big]=-(C_i+K_iD_i)e(t)+A_if(e(t))+B_if(e(t-h_k(t)))+B_{1i}\int_{t-d}^{t}f(e(s))\,ds-K_i\varphi(t,e(t)).\tag{4}$$

Assumption 1. There exist scalars $h_k$, $\mu_k$ and $d$ such that, for each $\delta_t=k\in\mathcal{M}$,
$$0\le h_k(t)\le h_k,\qquad \dot{h}_k(t)\le\mu_k.\tag{5}$$
For convenience, let $h=\max\{h_k\,|\,k\in\mathcal{M}\}$ and $\tau=\max\{h,d\}$.

Assumption 2. For known constant matrices $\Gamma_1$, $\Gamma_2$, the activation function $g(\cdot)$ satisfies
$$[g(a)-g(b)-\Gamma_1(a-b)]^T[g(a)-g(b)-\Gamma_2(a-b)]\le 0,\qquad \forall a,b\in\mathbb{R}^n.\tag{6}$$

Assumption 3. There exist known constant matrices $\Pi_1$, $\Pi_2$ of appropriate dimensions such that the nonlinear disturbance function $\tilde{g}(\cdot)$ satisfies
$$[\tilde{g}(a)-\tilde{g}(b)-\Pi_1(a-b)]^T[\tilde{g}(a)-\tilde{g}(b)-\Pi_2(a-b)]\le 0,\qquad \forall a,b\in\mathbb{R}^n.\tag{7}$$
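As an illustrative aside (not part of the manuscript's development), the sector condition (6) is easy to check numerically for a concrete activation. The sketch below assumes a tanh activation together with the hypothetical sector bounds $\Gamma_1=0$ and $\Gamma_2=I_n$, which hold because each component of tanh is slope-restricted to $[0,1]$:

```python
import numpy as np

def sector_condition(g, Gamma1, Gamma2, a, b):
    """Left-hand side of (6): [g(a)-g(b)-Gamma1(a-b)]^T [g(a)-g(b)-Gamma2(a-b)]."""
    dg = g(a) - g(b)
    return float((dg - Gamma1 @ (a - b)) @ (dg - Gamma2 @ (a - b)))

n = 3
Gamma1, Gamma2 = np.zeros((n, n)), np.eye(n)  # hypothetical sector bounds for tanh
rng = np.random.default_rng(0)
worst = max(sector_condition(np.tanh, Gamma1, Gamma2,
                             rng.normal(size=n), rng.normal(size=n))
            for _ in range(10_000))
print(worst <= 1e-12)  # prints True: (6) holds at every sampled pair
```

Any slope-restricted activation admits such diagonal sector bounds; the random sampling above is only a sanity check, not a proof.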

Before proceeding further, the following definitions and lemmas are introduced.

Definition 1. [30] For any finite initial function $\phi(s)\in C([-\tau,0];\mathbb{R}^n)$ and any initial modes $r_0\in\mathcal{N}$ and $\delta_0\in\mathcal{M}$, the error system (3) or (4) is said to converge to zero exponentially in the mean square sense if there exist scalars $\alpha>0$, $\beta>0$ such that $\mathbb{E}|e(t,\phi)|^2\le\alpha e^{-\beta t}\|\phi\|^2$.

Definition 2. Let $e_t=e(t+s)$, $-\tau\le s\le 0$; then $\{e_t,r_t,\delta_t\}_{t\ge 0}$ is a $C([-\tau,0];\mathbb{R}^n)\times\mathcal{N}\times\mathcal{M}$-valued Markov process. The weak infinitesimal operator acting on a functional $V:C([-\tau,0];\mathbb{R}^n)\times\mathcal{N}\times\mathcal{M}\times\mathbb{R}_+\to\mathbb{R}$ is defined by
$$\mathcal{L}V(e_t,r_t,\delta_t,t)=\lim_{\Delta\to 0^+}\frac{1}{\Delta}\big\{\mathbb{E}[V(e_{t+\Delta},r_{t+\Delta},\delta_{t+\Delta},t+\Delta)]-V(e_t,i,k,t)\big\}=\dot{V}(e_t,i,k,t)+\sum_{j=1}^{N}\pi_{ij}V(e_t,j,k)+\sum_{l=1}^{M}p_{kl}V(e_t,i,l).$$
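For intuition about the two chains entering Definition 2, a right-continuous Markov chain with a given generator can be sampled by the standard exponential-holding-time construction. The sketch below is illustrative only (the two-mode generator is a hypothetical example, not taken from the paper); the second, independent chain $\{\delta_t\}$ would be sampled the same way from $\Lambda$:

```python
import numpy as np

def simulate_chain(Pi, r0, T, rng):
    """Sample a right-continuous Markov chain path on [0, T] from generator Pi.

    The holding time in mode r is exponential with rate -Pi[r, r]; the next
    mode is drawn with probabilities Pi[r, j] / (-Pi[r, r]) for j != r.
    """
    t, r, path = 0.0, r0, [(0.0, r0)]
    while True:
        rate = -Pi[r, r]
        if rate <= 0.0:          # absorbing mode: no further jumps
            break
        t += rng.exponential(1.0 / rate)
        if t >= T:
            break
        probs = np.clip(Pi[r], 0.0, None)   # zero out the (negative) diagonal
        probs = probs / probs.sum()
        r = int(rng.choice(len(probs), p=probs))
        path.append((t, r))
    return path

# hypothetical generator; each row sums to zero, as required of Pi
Pi = np.array([[-0.5, 0.5],
               [ 0.3, -0.3]])
path = simulate_chain(Pi, r0=0, T=100.0, rng=np.random.default_rng(1))
```

The returned `path` lists the jump times and the mode entered at each jump, which is exactly the piecewise-constant, right-continuous trajectory assumed for $\{r_t\}$.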

Remark 2. Note that the system considered in this paper has different Markovian jumping parameters in the time delay and in the connection weight matrices, which reflects practical situations better than the existing articles [11-15, 18-24, 29-31]. Accordingly, a weak infinitesimal operator acting on a Lyapunov-Krasovskii functional with two different Markovian jumping parameters is newly proposed in Definition 2.

Lemma 1. [33] For any constant matrix $Z^T=Z>0$ and scalar $h>0$ such that the following integrals are well defined,
$$-h\int_{t-h}^{t}\omega^T(s)Z\omega(s)\,ds\le-\int_{t-h}^{t}\omega^T(s)\,ds\;Z\int_{t-h}^{t}\omega(s)\,ds,$$
$$-\frac{h^2}{2}\int_{-h}^{0}\int_{t+\theta}^{t}\omega^T(s)Z\omega(s)\,ds\,d\theta\le-\int_{-h}^{0}\int_{t+\theta}^{t}\omega^T(s)\,ds\,d\theta\;Z\int_{-h}^{0}\int_{t+\theta}^{t}\omega(s)\,ds\,d\theta.$$

Lemma 2. [5] For any constant matrix $M>0$, the following inequality holds for all continuously differentiable $\varphi:[a,b]\to\mathbb{R}^n$:
$$(b-a)\int_{a}^{b}\varphi^T(s)M\varphi(s)\,ds\ge\int_{a}^{b}\varphi^T(s)\,ds\;M\int_{a}^{b}\varphi(s)\,ds+3\Phi^TM\Phi,$$
where $\Phi=\int_{a}^{b}\varphi(s)\,ds-\frac{2}{b-a}\int_{a}^{b}\int_{a}^{\theta}\varphi(s)\,ds\,d\theta$.

Remark 3. The inequality in Lemma 2 is called the Wirtinger-based integral inequality, and it gives a more accurate bound on $(b-a)\int_a^b\varphi^T(s)M\varphi(s)\,ds$ than Jensen's inequality. It is also worth noting that this improvement is obtained by using the extra signal $\int_a^b\int_a^{\theta}\varphi(s)\,ds\,d\theta$ and not only $\int_a^b\varphi(s)\,ds$.

Lemma 3. [6] Let $f_1,f_2,\dots,f_N:\mathbb{R}^m\to\mathbb{R}$ have positive values in an open subset $D$ of $\mathbb{R}^m$. Then the reciprocally convex combination of $f_i$ over $D$ satisfies
$$\min_{\{\alpha_i\,|\,\alpha_i>0,\ \sum_i\alpha_i=1\}}\sum_i\frac{1}{\alpha_i}f_i(t)=\sum_i f_i(t)+\max_{g_{i,j}(t)}\sum_{i\ne j}g_{i,j}(t)$$
subject to
$$\Big\{g_{i,j}:\mathbb{R}^m\to\mathbb{R},\ g_{j,i}(t)\triangleq g_{i,j}(t),\ \begin{bmatrix}f_i(t)&g_{i,j}(t)\\ g_{j,i}(t)&f_j(t)\end{bmatrix}\ge 0\Big\}.$$

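The gain that Remark 3 describes can be seen numerically. The following sketch (illustrative only; scalar case with $M=1$ on $[a,b]=[0,1]$) compares the Jensen-type lower bound of Lemma 1 with the Wirtinger-based bound of Lemma 2 for $\varphi(s)=s^2$; the extra $3\Phi^TM\Phi$ term tightens the bound, and for affine $\varphi$ the Wirtinger bound is exact:

```python
import numpy as np

def trap(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

def bounds(phi, a=0.0, b=1.0, num=200_001):
    """Return (LHS, Jensen bound, Wirtinger bound) of Lemma 2 for scalar phi, M = 1."""
    s = np.linspace(a, b, num)
    v = phi(s)
    lhs = (b - a) * trap(v * v, s)          # (b-a) * int_a^b phi^2 ds
    I = trap(v, s)                          # int_a^b phi ds
    # running integral of phi, giving int_a^b int_a^theta phi(s) ds dtheta
    running = np.concatenate(([0.0], np.cumsum((v[1:] + v[:-1]) / 2.0 * np.diff(s))))
    II = trap(running, s)
    Phi = I - 2.0 / (b - a) * II
    return lhs, I * I, I * I + 3.0 * Phi * Phi

lhs, jensen, wirtinger = bounds(lambda s: s ** 2)
print(jensen, wirtinger, lhs)   # about 0.111 <= 0.194 <= 0.200
```

For $\varphi(s)=s^2$ the exact values are $1/9\le 7/36\le 1/5$, so the Wirtinger bound recovers most of the gap that Jensen's inequality leaves open.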

3. Main results

In this section, based on the matrix decomposition method, we give a less conservative criterion guaranteeing that the estimation error system (3) (or (4)) converges to zero exponentially in the mean square sense. The advantage of the matrix decomposition method is that more information about the Lyapunov matrices can be utilized.

Theorem 3.1. For given scalars $h_k>0$, $\mu_k$ and $d>0$, the state of the estimation error system (3) or (4) converges to zero exponentially in the mean square sense if there exist real positive definite matrices $P_{ik}$, $Q_{ik}$, $Q$, $H_{ik}$, $H$, $\mathcal{R}_{ik}=\begin{bmatrix}\mathcal{R}_{ik1}&\mathcal{R}_{ik2}\\ *&\mathcal{R}_{ik3}\end{bmatrix}$, $\mathcal{R}=\begin{bmatrix}\mathcal{R}_1&\mathcal{R}_2\\ *&\mathcal{R}_3\end{bmatrix}$, $T_{ik}$, $T$, $Z_{ik}$, $Z$, positive scalars $\lambda_{1k}$, $\lambda_{2k}$, $\lambda_{3k}$, $\lambda_{4k}$, and any appropriately dimensioned matrices $W_{lik1}$, $W_{lik2}$, $W_{lik3}$, $W_{lik4}$ ($l=1,2$), $Y_i$, such that the following linear matrix inequalities (LMIs) are satisfied for $i=1,2,\dots,N$ and $k=1,2,\dots,M$:
$$\begin{bmatrix}S_{1ik1}&S_{1ik2}&W_{1ik1}&W_{1ik2}\\ *&S_{1ik3}&W_{1ik3}&W_{1ik4}\\ *&*&S_{1ik1}&S_{1ik2}\\ *&*&*&S_{1ik3}\end{bmatrix}\ge 0,\tag{8}$$
$$\begin{bmatrix}U_{1ik1}&U_{1ik2}&W_{2ik1}&W_{2ik2}\\ *&U_{1ik3}&W_{2ik3}&W_{2ik4}\\ *&*&U_{1ik1}&U_{1ik2}\\ *&*&*&U_{1ik3}\end{bmatrix}\ge 0,\tag{9}$$
$$\sum_{j=1}^{N}\pi_{ij}\mathcal{U}_{jk}+\sum_{l=1}^{M}p_{kl}\mathcal{U}_{il}-\mathcal{U}\le 0,\tag{10}$$
$$\sum_{j=1,\,j\ne i}^{N}\pi_{ij}\mathcal{S}_{jk}+\sum_{l=1,\,l\ne k}^{M}p_{kl}\mathcal{S}_{il}-\mathcal{S}\le 0,\tag{11}$$
$$\Theta_{ik}\big|_{h_k(t)=0}=(\Theta^{ik}_{mn})_{17\times 17}\big|_{h_k(t)=0}<0,\tag{12}$$
$$\Theta_{ik}\big|_{h_k(t)=h_k}=(\Theta^{ik}_{mn})_{17\times 17}\big|_{h_k(t)=h_k}<0,\tag{13}$$
where $\mathcal{R}_{ik}$ represents $F_{ik}$, $R_{ik}$, $S_{ik}$, $U_{ik}$; $\mathcal{R}$ represents $F$, $R$, $S$, $U$; $\mathcal{U}_{jk}$ represents $Q_{jk}$, $H_{jk}$, $T_{jk}$, $Z_{jk}$; $\mathcal{U}_{il}$ represents $Q_{il}$, $H_{il}$, $T_{il}$, $Z_{il}$; $\mathcal{U}$ represents $Q$, $H$, $T$, $Z$; $\mathcal{S}_{jk}$ represents $F_{jk}$, $R_{jk}$, $S_{jk}$, $U_{jk}$; $\mathcal{S}_{il}$ represents $F_{il}$, $R_{il}$, $S_{il}$, $U_{il}$; $\mathcal{S}$ represents $F$, $R$, $S$, $U$; $S_{ik}=h_kS_{1ik}+h_kS_{2ik}$, $U_{ik}=\frac{h_k^2}{2}U_{1ik}+\frac{h_k^2}{2}U_{2ik}$, $T_{sik}=\begin{bmatrix}T_{sik1}&T_{sik2}\\ *&T_{sik3}\end{bmatrix}$, where $T_{sik}$ represents $S_{sik}$, $U_{sik}$, respectively,

$s=1,2$, and
$$\begin{aligned}\Theta^{ik}_{1,1}=\;&-2P_{ik}C_i+\sum_{j=1}^{N}\pi_{ij}P_{jk}+\sum_{l=1}^{M}p_{kl}P_{il}+Q_{ik}+\sigma Q+\sigma H_{ik}+\frac{\sigma^2}{2}H+F_{ik1}+hF_1+R_{ik1}+hR_1+h_kS_{ik1}+\frac{h^2}{2}S_1\\ &-S_{1ik3}-4S_{2ik3}+\frac{h_k^2}{2}U_{ik1}+\frac{h^3}{6}U_1-\frac{h_k^2}{2}(W_{2ik4}+W_{2ik4}^T)-h_k^2U_{2ik3}-\lambda_{1k}M_1-\lambda_{4k}N_1,\end{aligned}$$
$\Theta^{ik}_{1,2}=P_{ik}C_i$, $\Theta^{ik}_{1,3}=S_{1ik3}-W_{1ik4}$, $\Theta^{ik}_{1,4}=W_{1ik4}-2S_{2ik3}$, $\Theta^{ik}_{1,5}=P_{ik}+R_{ik2}+hR_2+h_kS_{ik2}+\frac{h_k^2}{2}U_{ik2}+\frac{h^3}{6}U_2-D_i^TG_i^T$, $\Theta^{ik}_{1,7}=F_{ik2}+hF_2-\lambda_{1k}M_2$, $\Theta^{ik}_{1,10}=-\sum_{j=1}^{N}\pi_{ij}P_{jk}C_j-\sum_{l=1}^{M}p_{kl}P_{il}C_i+C_iP_{ik}C_i$,
$\Theta^{ik}_{1,11}=-\frac{h_k}{2}S_{1ik2}^T+h_kS_{2ik2}+3S_{2ik3}+\frac{h_k}{2}\big(h_k(t)U_{1ik3}+(h_k-h_k(t))W_{2ik4}\big)+\frac{h_k^2}{2}U_{2ik3}$,
$\Theta^{ik}_{1,12}=-\frac{h_k}{2}W_{1ik3}+h_kS_{2ik2}+3S_{2ik3}+\frac{h_k}{2}\big(h_k(t)W_{2ik4}+(h_k-h_k(t))U_{1ik3}\big)+\frac{h_k^2}{2}U_{2ik3}$,
$\Theta^{ik}_{1,13}=-3S_{2ik2}-\frac{h_k}{2}\big(h_k(t)U_{1ik2}^T+(h_k-h_k(t))W_{2ik2}^T\big)-\frac{h_k^2}{2}U_{2ik2}^T$,
$\Theta^{ik}_{1,14}=-3S_{2ik2}-\frac{h_k}{2}\big(h_k(t)W_{2ik3}+(h_k-h_k(t))U_{1ik2}^T\big)-\frac{h_k^2}{2}U_{2ik2}^T$,
$\Theta^{ik}_{1,17}=-\lambda_{4k}N_2$, $\Theta^{ik}_{2,2}=-Q_{ik}$, $\Theta^{ik}_{2,5}=-C_iY_i^T$, $\Theta^{ik}_{2,10}=-C_iP_{ik}C_i$,
$\Theta^{ik}_{3,3}=-(1-\mu_k)F_{ik1}-2S_{1ik3}+W_{1ik4}+W_{1ik4}^T-\lambda_{2k}M_1$, $\Theta^{ik}_{3,4}=-W_{1ik4}+S_{1ik3}$, $\Theta^{ik}_{3,8}=-(1-\mu_k)F_{ik2}-\lambda_{2k}M_2$,
$\Theta^{ik}_{3,11}=\frac{h_k}{2}S_{1ik2}^T-\frac{h_k}{2}W_{1ik2}^T$, $\Theta^{ik}_{3,12}=\frac{h_k}{2}W_{1ik3}-\frac{h_k}{2}S_{1ik2}^T$,
$\Theta^{ik}_{4,4}=-R_{ik1}-S_{1ik3}-4S_{2ik3}-\lambda_{3k}M_1$, $\Theta^{ik}_{4,6}=-R_{ik2}$, $\Theta^{ik}_{4,9}=-\lambda_{3k}M_2$,
$\Theta^{ik}_{4,11}=\frac{h_k}{2}W_{1ik2}^T+2h_kS_{2ik2}+3S_{2ik3}$, $\Theta^{ik}_{4,12}=\frac{h_k}{2}S_{1ik2}^T+2h_kS_{2ik2}+3S_{2ik3}$, $\Theta^{ik}_{4,13}=-3S_{2ik2}$, $\Theta^{ik}_{4,14}=-3S_{2ik2}$,
$\Theta^{ik}_{5,5}=R_{ik3}+hR_3+h_kS_{ik3}+\frac{h^2}{2}S_3+\frac{h_k^2}{2}U_{ik3}+\frac{h^3}{6}U_3-Y_i-Y_i^T$,
$\Theta^{ik}_{5,7}=Y_iA_i$, $\Theta^{ik}_{5,8}=Y_iB_i$, $\Theta^{ik}_{5,10}=-P_{ik}C_i$, $\Theta^{ik}_{5,15}=dY_iB_{1i}$, $\Theta^{ik}_{5,17}=-G_i$, $\Theta^{ik}_{6,6}=-R_{ik3}$,
$\Theta^{ik}_{7,7}=dT_{ik}+\frac{d^2}{2}T+\frac{d^2}{2}Z_{ik}+\frac{d^3}{6}Z+F_{ik3}+hF_3-\lambda_{1k}I$, $\Theta^{ik}_{8,8}=-(1-\mu_k)F_{ik3}-\lambda_{2k}I$, $\Theta^{ik}_{9,9}=-\lambda_{3k}I$,
$\Theta^{ik}_{10,10}=\sum_{j=1}^{N}\pi_{ij}C_jP_{jk}C_j+\sum_{l=1}^{M}p_{kl}C_iP_{il}C_i-\frac{1}{\sigma}H_{ik}$,
$\Theta^{ik}_{11,11}=-\frac{h_k^2}{4}S_{1ik1}-h_k^2S_{2ik1}-\frac{3h_k}{2}S_{2ik2}-\frac{3h_k}{2}S_{2ik2}^T-3S_{2ik3}-\frac{h_k^2}{4}U_{1ik3}-\frac{h_k^2}{4}U_{2ik3}$,
$\Theta^{ik}_{11,12}=-\frac{h_k^2}{4}W_{1ik1}-h_k^2S_{2ik1}-\frac{3h_k}{2}S_{2ik2}-\frac{3h_k}{2}S_{2ik2}^T-3S_{2ik3}-\frac{h_k^2}{4}W_{2ik4}-\frac{h_k^2}{4}U_{2ik3}$,
$\Theta^{ik}_{11,13}=\frac{h_k^2}{2}S_{2ik1}+\frac{h_k^2}{4}U_{1ik2}^T+3S_{2ik2}^T+\frac{h_k^2}{4}U_{2ik2}^T$, $\Theta^{ik}_{11,14}=\frac{h_k^2}{2}S_{2ik1}+\frac{h_k^2}{4}W_{2ik3}+3S_{2ik2}^T+\frac{h_k^2}{4}U_{2ik2}^T$,
$\Theta^{ik}_{12,12}=-\frac{h_k^2}{4}S_{1ik1}-h_k^2S_{2ik1}-\frac{3h_k}{2}S_{2ik2}-\frac{3h_k}{2}S_{2ik2}^T-3S_{2ik3}-\frac{h_k^2}{4}U_{1ik3}-\frac{h_k^2}{4}U_{2ik3}$,
$\Theta^{ik}_{12,13}=\frac{h_k^2}{2}S_{2ik1}+\frac{h_k^2}{4}W_{2ik2}^T+3S_{2ik2}^T+\frac{h_k^2}{4}U_{2ik2}^T$, $\Theta^{ik}_{12,14}=\frac{h_k^2}{2}S_{2ik1}+\frac{h_k^2}{4}U_{1ik2}^T+3S_{2ik2}^T+\frac{h_k^2}{4}U_{2ik2}^T$,
$\Theta^{ik}_{13,13}=-3S_{2ik1}-\frac{h_k^2}{4}U_{1ik1}-\frac{h_k^2}{4}U_{2ik1}$, $\Theta^{ik}_{13,14}=-3S_{2ik1}-\frac{h_k^2}{4}W_{2ik1}-\frac{h_k^2}{4}U_{2ik1}$, $\Theta^{ik}_{14,14}=-3S_{2ik1}-\frac{h_k^2}{4}U_{1ik1}-\frac{h_k^2}{4}U_{2ik1}$,
$\Theta^{ik}_{15,15}=-dT_{ik}-3dT_{ik}$, $\Theta^{ik}_{15,16}=6T_{ik}$, $\Theta^{ik}_{16,16}=-\frac{12}{d}T_{ik}-2Z_{ik}$, $\Theta^{ik}_{17,17}=-\lambda_{4k}I$,
with all other elements $\Theta^{ik}_{mn}=0$. Moreover, the gain matrices of the state estimator (2) can be designed as
$$K_i=Y_i^{-1}G_i\qquad(i=1,2,\dots,N).$$
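Conditions (8)-(13) are LMIs and would normally be handed to a semidefinite-programming solver. As a much-simplified illustration of the kind of mode-coupled Lyapunov test involved (this is not the theorem's actual condition, and the matrices are hypothetical), the sketch below checks the classical mean-square stability condition $A_i^TP_i+P_iA_i+\sum_j\pi_{ij}P_j<0$ for a delay-free two-mode jump system with the candidate $P_i=I$, for which the coupling term vanishes because the generator's rows sum to zero:

```python
import numpy as np

def msq_lyapunov_ok(A_modes, Pi, P_modes, tol=1e-9):
    """Check A_i^T P_i + P_i A_i + sum_j Pi[i, j] P_j < 0 for every mode i."""
    for i, Ai in enumerate(A_modes):
        L = Ai.T @ P_modes[i] + P_modes[i] @ Ai
        L = L + sum(Pi[i, j] * P_modes[j] for j in range(len(A_modes)))
        if np.max(np.linalg.eigvalsh((L + L.T) / 2.0)) >= -tol:
            return False
    return True

# hypothetical two-mode example; rows of the generator sum to zero
A_modes = [np.array([[-2.0, 0.5], [0.0, -1.5]]),
           np.array([[-1.0, 0.2], [0.1, -2.0]])]
Pi = np.array([[-0.4, 0.4], [0.6, -0.6]])
P_modes = [np.eye(2), np.eye(2)]   # candidate Lyapunov matrices
print(msq_lyapunov_ok(A_modes, Pi, P_modes))  # prints True for this example
```

Verifying a given candidate is of course much weaker than solving the LMIs; in practice the theorem's conditions would be fed to an SDP solver with $P_{ik}$, $Q_{ik}$, etc. as decision variables.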

Proof. Consider the following Lyapunov-Krasovskii functional candidate as V(et , rt , δt ) =

5

Vi (et , rt , δt ),

(14)

i=1

with η(t) = [eT (t), f T (e(t)]T , ζ(t) = [eT (t), e˙ T (t)]T and  t  t V1 (et , rt , δt ) =[e(t) − C i e(s)ds]T P(rt , δt )[e(t) − C i e(s)ds], t−σ t−σ  t  t  t  t  t V2 (et , rt , δt ) = eT (s)Q(rt , δt )e(s)ds + eT (s)Qe(s)dsdθ + eT (s)H(rt , δt )e(s)dsdθ t−σ t−σ θ t−σ θ  t  t t eT (s)He(s)dsdθdλ, + t−σ λ θ  t  t  t T η (s)F(rt , δt )η(s)ds + ηT (s)Fη(s)dsdθ, V3 (et , rt , δt ) = t−h(t,δt ) t T

 V4 (et , rt , δt ) =

ζ (s)R(rt , δt )ζ(s)ds +

t−hδt  t

+

t−h t

 +  V5 (et , rt , δt ) =



λ

t

θ

β

λ

λ

θ

t

θ



ζ T (s)Rζ(s)dsdθ +



ζ T (s)S ζ(s)dsdθdλ +

 t

t

λ

t−hδt

 t t

t−h



t−h

 t

θ

t−h t

t

θ

θ



t

θ

t−hδt t

t

ζ T (s)S (rt , δt )ζ(s)dsdθ

ζ T (s)U(rt , δt )ζ(s)dsdθdλ

ζ T (s)Uζ(s)dsdθdλdβ,

 t  t t f T (e(s))T (rt , δt ) f (e(s))dsdθ + f T (e(s))T f (e(s))dsdθdλ t−d θ t−d λ θ  t  t t  t  t t t + f T (e(s))Z(rt , δt ) f (e(s))dsdθdλ + f T (e(s))Z f (e(s))dsdθdλdβ, 

t

t−d

t

t−d

β

λ

θ

When rt = i and δt = k, the time derivative of V(e t , rt , δt ) can be bounded as  LV1 =2[e(t)−C i

t

e(s)ds]T Pik [˙e(t)−C i e(t)+Ci e(t−σ)]+

N

t−σ

+

M l=1

 pkl [e(t) − C i

t

 e(s)ds] Pil [e(t) − C i



t

(15)

t−σ



t

eT (s)( t−σ

N j=1

9

e(s)ds] t−σ

t

e(s)ds],

t−σ

t

e(s)ds]T P jk [e(t)−C j t−σ

j=1

T

LV2 =eT (t)Qik e(t) − eT (t − σ)Qik e(t − σ) +

 πi j [e(t)−C j

πi j Q jk +

M l=1

pkl Qil )e(s)ds



t

+ σeT (t)Qe(t) −  +



t

t

eT (s)(

t−σ N

πi j H jk +

θ

t−σ

eT (s)Qe(s)ds + σeT (t)Hik e(t) −

j=1

M

pkl Hil )e(s)dsdθ +

l=1

1 σ





t

t

eT (s)dsHik t−σ

σ2 T e (t)He(t) − 2



e(s)ds 

t

t−σ t

eT (s)He(s)dsdθ,

(16)

θ

t−σ

where Lemma 1 is adopted, and N

LV3 ≤η (t)F ik η(t) − (1 − μk )η (t − hk (t))F ik η(t − hk (t)) + T

+

T

M





t

ηT (s)F il η(s)ds + hηT (t)Fη(t) −

pkl t−hl (t)

l=1



N j=1

t

 πi j

+



ζ T (s)Rζ(s)ds + hk ζ T (t)S ik ζ(t) − 



t

pkl t−hl

t

θ

ζ T (s)S il ζ(s)dsdθ +

h2 + k ζ T (t)U ik ζ(t) − 2 +

t

ζ T (s)R jk ζ(s)ds+

t



t

t−hl

l=1



λ

t



t

2

h T ζ (t)S ζ(t) − 2

ζ (s)Uik ζ(s)dsdθ +

t−hk t

θ

θ

N



θ

t−h

 πi j

t

t

ζ T (s)Ril ζ(s)ds+hζ T (t)Rζ(t)





t

πi j

t−hk

t

θ

ζ T (s)S jk ζ(s)dsdθ

ζ T (s)S ζ(s)dsdθ

 t λ

t−hk

h3 T ζ (t)Uζ(t) − 6

t

t−hl

j=1



t

j=1

ζ T (s)Uil ζ(s)dsdθdλ +

 pkl

l=1

ζ T (s)S ik ζ(s)ds +

T

 t

pkl

(17)

M

N

t−hk

l=1

M

ηT (s)Fη(s)ds,

t−hk

t−h M

ηT (s)F jk η(s)ds

t−hk (t)

j=1 t

t

πi j

t−h

LV4 =ζ T (t)Rik ζ(t)−ζ T (t − hk )Rik ζ(t − hk )+ 





t

t

θ

ζ T (s)U jk ζ(s)dsdθdλ

 t

t−h

λ

t θ

ζ T (s)Uζ(s)dsdθdλ,

(18)

By noting (8) and S ik = hk S 1ik + hk S 2ik , and using Lemma 2 and 3, we get  t −hk ζ T (s)S 1ik ζ(s)ds t−hk

⎡ ⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎢ ⎢⎢ ≤ − ⎢⎢⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎢ ⎢⎣  −hk

⎤T ⎡ ⎥⎥⎥ ⎢⎢⎢ S 1ik1 ⎥⎥⎥ ⎢⎢⎢ ⎥⎥ ⎢⎢ e(t) − e(t − hk (t)) ⎥⎥⎥⎥ ⎢⎢⎢⎢ ∗ ⎥⎥⎥ ⎢⎢⎢  t−hk (t) ⎥⎥⎥ ⎢⎢⎢ e(s)ds ⎥⎥⎥ ⎢⎢⎢ ∗ t−hk ⎥⎥ ⎢⎢ e(t − hk (t)) − e(t − hk ) ⎦ ⎣ ∗

t

t

t−hk (t)

e(s)ds

⎡  t ⎢⎢⎢ e(s)ds ⎢ t−hk ≤ − ⎢⎢⎢⎢ ⎣ e(t) − e(t − h ) k t t−hk

W1ik1

S 1ik3

W1ik3



S 1ik1





⎤⎡ t W1ik2 ⎥⎥⎥⎥ ⎢⎢⎢⎢ e(s)ds t−hk (t) ⎥⎥⎥ ⎢⎢⎢ ⎥⎥⎥ ⎢⎢⎢ W1ik4 ⎥⎥ ⎢⎢ e(t) − e(t − hk (t)) ⎥⎥⎥ ⎢⎢⎢  t−hk (t) ⎥⎢ S 1ik2 ⎥⎥⎥⎥ ⎢⎢⎢⎢ e(s)ds t−hk ⎥⎥⎥⎦ ⎢⎢⎢⎣ e(t − hk (t)) − e(t − hk ) S 1ik3

⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦

(19)

ζ T (s)S 2ik ζ(s)ds

t−hk

with Φ = [

S 1ik2

eT (s)ds −

Since (9) and U ik =

2 hk

t

⎤T ⎡ ⎥⎥⎥ ⎢⎢⎢ S ⎥⎥⎥ ⎢⎢⎢ 2ik1 ⎥⎥⎦ ⎢⎢⎣ ∗ t

S 2ik3

⎤⎡  t ⎥⎥⎥ ⎢⎢⎢ e(s)ds ⎥⎥⎥ ⎢⎢⎢ t−hk ⎥⎥⎦ ⎢⎢⎣ e(t) − e(t − hk )

eT (s)dsdθ, −eT (t) − eT (t − hk ) +

t−hk θ h2k U 2 1ik + 2 U 2ik ,

h2k

S 2ik2

⎡ ⎤ ⎢⎢ S ⎥⎥⎥ ⎢⎢⎢ 2ik1 ⎥⎥⎥ T ⎢ − 3Φ ⎢⎢⎣ ⎥⎥⎦ ∗ 2 hk

t t−hk

eT (s)ds]T .

and by using Lemma 1 and 2, we have

10

S 2ik2 S 2ik3

⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ Φ,

(20)

h2 − k 2





t t−hk

⎡ ⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎢ ⎢⎢ ≤ − ⎢⎢⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎢ ⎢⎣

⎡ ⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎢ = − ⎢⎢⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎢ ⎢⎢⎣

t

θ

ζ T (s)U1ik ζ(s)dsdθ

⎤⎡ ⎤T ⎡ ⎤ t t ⎥⎥⎥ ⎢⎢⎢ U1ik1 U1ik2 W2ik1 W2ik2 ⎥⎥⎥ ⎢⎢⎢ ⎥⎥⎥ e(s)dsdθ ⎢ ⎥ ⎥ ⎥⎥⎥ ⎢ t−hk (t) θ t−hk (t) θ ⎥⎥⎥ ⎢⎢⎢ ⎥⎥⎥ ⎢⎢⎢ t  ⎥⎥⎥ t ⎥⎥⎥ ⎢⎢⎢ ⎥⎥⎥ ⎢⎢⎢ ⎥⎥⎥ hk (t)e(t) − t−h (t) e(s)ds hk (t)e(t) − t−h (t) e(s)ds U1ik3 W2ik3 W2ik4 ⎥⎥ ⎢⎢ ⎥⎥⎥ ⎢⎢⎢ ∗ k k ⎥⎥⎥ ⎢⎢⎢ ⎥⎥⎥ ⎥⎥⎥ ⎢⎢⎢  t−hk (t)  t  t−hk (t)  t ⎥ ⎥⎥⎥ ⎢ ⎥⎥⎥ ⎢⎢⎢ ⎥⎥⎥ ⎢⎢⎢ ∗ e(s)dsdθ ∗ U U e(s)dsdθ 1ik1 1ik2 ⎥⎥⎥ t−hk θ t−hk θ ⎥⎥⎥ ⎢⎢⎢ ⎥⎥⎥ ⎢⎢⎢ ⎥⎥  t−hk (t)  t−hk (t) ⎥⎦ ⎢⎣ ⎥⎦ ⎢⎣ (hk − hk (t))e(t) − t−h e(s)ds (hk − hk (t))e(t) − t−h e(s)ds ⎦ ∗ ∗ ∗ U1ik3 k k ⎤⎡ ⎤T ⎡ ⎤ ⎥⎥⎥ ⎢⎢⎢ ⎥⎥⎥ ⎢⎢⎢ Π ⎥⎥⎥ Π Π Π Π e(t) e(t) 11 12 13 14 15 ⎥⎥⎥ ⎢⎢⎢ ⎥⎥⎥ ⎥⎥⎥⎥ ⎢⎢⎢⎢ t  ⎥ ⎥⎥⎥ t ⎥⎥⎥ ⎢⎢⎢⎢ ⎥⎥⎥ ⎢⎢⎢ T ⎥⎥⎥ e(s)ds ∗ U W −U −W e(s)ds ⎥⎥⎥ ⎢⎢⎢ ⎢ 1ik3 2ik4 2ik3 ⎥ 1ik2 ⎥ ⎢ t−hk (t) t−h (t) k ⎥⎥⎥ ⎢⎢⎢ ⎥⎥⎥ ⎥⎥⎥ ⎢⎢⎢  t−hk (t)  t−h (t) ⎥⎥⎥ ⎢⎢⎢ ⎥⎥⎥ , k T T ⎥⎥⎥ ⎢⎢⎢ ∗ (21) e(s)ds ∗ U −W −U e(s)ds 1ik3 ⎥ ⎥⎥⎥ ⎢ 2ik2 1ik2 ⎥ ⎥⎥⎥ ⎢⎢⎢ ⎢⎢⎢ t−h t−hk ⎥ ⎥ ⎥⎥⎥  t k t   ⎥⎥⎥ ⎢⎢⎢ t ⎥⎥⎥ ⎢⎢⎢ t e(s)dsdθ ⎥⎥⎥ ⎢⎢⎢ ∗ ∗ ∗ U1ik1 W2ik1 ⎥⎥⎥ ⎢⎢⎢ t−h (t) θ e(s)dsdθ ⎥⎥⎥⎥ t−h (t) θ ⎥⎥⎥ ⎢⎢⎢  k  ⎥⎥⎥ ⎢⎢⎢ ⎥  t−hkk (t)  t ⎦ ⎣ t−hk (t) t e(s)dsdθ ⎥⎥⎦ ⎦ ⎣ ∗ e(s)dsdθ ∗ ∗ ∗ U 1ik1 t−h θ t−h θ t

t

e(s)dsdθ

k

where Π11 =

k

h2k (t)U 1ik3

+ hk (t)(hk − hk (t))(W2ik4 +

T W2ik4 )

T + (hk − hk (t)) U1ik3 , Π12 = −hk (t)U 1ik3 − (hk − hk (t))W2ik4 , 2

T T T Π13 = −hk (t)W2ik4 − (hk − hk (t))U 1ik3 , Π14 = hk (t)U 1ik2 + (hk − hk (t))W2ik2 , Π15 = hk (t)W2ik3 + (hk − hk (t))U 1ik2 .

From (9), we can obtain ⎡ ⎢⎢⎢ hk (t)In ⎢ Π11 = ⎢⎢⎢⎢ ⎣ (h − h (t))I

⎤T ⎡ ⎤⎡ ⎤ ⎥⎥⎥ ⎢⎢⎢ U ⎥⎢ ⎥⎥⎥ hk (t)In ⎥⎥⎥ ⎢⎢⎢ 1ik3 W2ik4 ⎥⎥⎥⎥⎥ ⎢⎢⎢⎢⎢ ⎥⎥⎥ ⎥⎥⎦ ⎢⎢⎣ ⎥⎥⎦ ⎢⎢⎣ ⎥⎥ ∗ U1ik3 (hk − hk (t))In ⎦ k k n ⎤T ⎡ ⎤⎡ T ⎥⎥⎥ ⎢⎢⎢ W2ik4 +W2ik4 ⎥⎥⎥ ⎢⎢⎢ W hk (t)In hk (t)In 2ik4 ⎥⎥⎥ ⎢⎢⎢ 2 ⎥⎥⎥⎥ ⎢⎢⎢⎢ T ⎥⎦ ⎢⎣ ⎥⎦ ⎢⎢⎣ W2ik4 +W2ik4 ⎥ (hk − hk (t))In ∗ (hk − hk (t))In 2

⎡ ⎢⎢⎢ ⎢ ≥ ⎢⎢⎢⎢ ⎣

⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦

h2k T (W2ik4 + W2ik4 ), 2 Combining (21) and (22) leads to =



h2k 2



t t−hk

 θ

t

(22)

ζ T (s)U1ik ζ(s)dsdθ

⎡ ⎤T ⎡ h2 ⎢⎢⎢ ⎥⎥⎥ ⎢⎢⎢ k (W + W T ) Π Π13 e(t) 12 ⎢⎢⎢ ⎥⎥⎥ ⎢⎢⎢ 2 2ik4 2ik4 ⎢⎢⎢  t ⎥⎥⎥ ⎢⎢⎢ ⎢⎢⎢ e(s)ds ⎥⎥⎥ ⎢⎢⎢ ∗ U1ik3 W2ik4 ⎢⎢⎢  t−hk (t) ⎥⎥⎥ ⎢⎢⎢ t−h (t) ⎢ ⎢ ⎥ k ≤− ⎢⎢⎢⎢ e(s)ds ⎥⎥⎥⎥ ⎢⎢⎢⎢ ∗ ∗ U1ik3 ⎢⎢⎢  t−hk  ⎥⎥⎥ ⎢⎢⎢ t t ⎢⎢⎢ ⎥ ⎢ ∗ ∗ ∗ ⎢⎢⎢ t−hk (t) θ e(s)dsdθ ⎥⎥⎥⎥⎥ ⎢⎢⎢⎢⎢ ⎢⎢⎣ t−hk (t)  t ⎥⎥⎦ ⎢⎢⎣ e(s)dsdθ ∗ ∗ ∗ t−hk θ ⎡  ⎤T t t  t  ⎢⎢⎢ e(s)dsdθ ⎥⎥⎥⎥⎥ h2k t ⎢⎢⎢ T t−hk θ ⎥ ζ (s)U2ik ζ(s)dsdθ ≤ − ⎢⎢ − ⎣ h e(t) −  t e(s)ds ⎥⎥⎦ 2 t−hk θ k

t−hk

By calculation of LV 5 (et , rt , δt ), one can obtain 11

Π14 T −U1ik2 T −W2ik2

U1ik1 ∗

⎡ ⎢⎢⎢ U ⎢⎢⎢ 2ik1 ⎢⎢⎣ ∗

⎤⎡ ⎤ ⎥⎥⎥ Π15 ⎥⎥⎥⎥ ⎢⎢⎢⎢ e(t) ⎥⎥⎥ ⎥⎥⎥ ⎢⎢⎢  ⎥⎥ t ⎥⎥⎥ ⎢⎢⎢ −W2ik3 ⎥⎥ ⎢⎢ e(s)ds ⎥⎥⎥⎥ ⎥⎥⎥ ⎢⎢⎢  t−hk (t) ⎥⎥⎥ t−h (t) T ⎥ ⎥⎥⎥⎥ ⎢⎢⎢⎢⎢ t−h k e(s)ds ⎥⎥⎥⎥⎥ , −U1ik2 k ⎥⎥⎥ ⎢⎢⎢  ⎥⎥⎥ t t W2ik1 ⎥⎥⎥⎥⎥ ⎢⎢⎢⎢⎢ t−h (t) θ e(s)dsdθ ⎥⎥⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎥⎦ ⎢⎢⎢⎣ t−hkk (t)  t U e(s)dsdθ⎦ 1ik1

U2ik2 U2ik3

t−hk

θ

⎤⎡  t t ⎥⎥⎥ ⎢⎢⎢ e(s)dsdθ ⎥⎥⎥ ⎢⎢⎢ t−hk θ t ⎥⎥⎦ ⎢⎢⎣ hk e(t) − t−h e(s)ds k

(23)

⎤ ⎥⎥⎥ ⎥⎥⎥ ⎥⎥⎦ . (24)



t

LV5 =d f (e(t))T ik f (e(t)) −

f (e(s))T ik f (e(s))ds +

T

T

N

t−d

+

M





t

pkl

θ

t−d

l=1

t

d T f (e(t))Zik f (e(t)) − 2

+

 M + pkl

 t

t





t

t θ

t−d



t

t−d

t

θ

f T (e(s))T jk f (e(s))dsdθ

d2 T f (e(t))T f (e(t)) − 2

f T (e(s))Zik f (e(s))dsdθ +

N



t



πi j

t

f T (e(s))T f (e(s))dsdθ

θ

t−d



j=1

t

 t

t−d

λ

θ

t

f T (e(s))Z jk f (e(s))dsdθdλ

 t  t t d T f (e(s))Zil f (e(s))dsdθdλ+ f (e(t))Z f (e(t))− f T (e(s))Z f (e(s))dsdθdλ. (25) 6 θ t−d λ θ t

3

T

λ

t−d

l=1

πi j

j=1

f T (e(s))T il f (e(s))dsdθ +

2



Using Lemma 2, it can be obtained that  t  t  1 t T 3 T − f (e(s))T ik f (e(s))ds ≤ − f (e(s))dsT ik f (e(s))ds − ΦT T ik Φ, d t−d d t−d t−d t

with Φ =

f (e(s))ds −

t−d

 t 2 t d t−d θ

(26)

f (e(s))dsdθ.

Then by Lemma 1, we can also get  t  t  t  t  t  t 2 − f T (e(s))Zik f (e(s))dsdθ ≤ − 2 f T (e(s))dsdθZik f (e(s))dsdθ. d t−d θ t−d θ t−d θ

(27)

Since πi j ≥ 0 (i  j), pkl ≥ 0 (k  l), F jk > 0, and F il > 0, it follows from (11) that  t  t N M πi j ηT (s)F jk η(s)ds + pkl ηT (s)F il η(s)ds t−hk (t)

j=1





N

πi j

t

j=1, ji t

N



ηT (s)(

t−h



M

ηT (s)F jk η(s)ds +

t−hk (t)



t−hl (t)

l=1

πi j F jk +

j=1, ji

ηT (s)F il η(s)ds

t−hl (t)

l=1,lk M

t

pkl 

pkl Fil )η(s)ds ≤

t

ηT (s)Fη(s)ds.

(28)

t−h

l=1,lk

Similarly, one can also get 

t

eT (s)(

N

t−σ



t−σ N

j=1

t

eT (s)(

θ

 πi j

N j=1

t

πi j

t

πi j H jk +



t

θ

t

eT (s)Qe(s)ds

(29)

t−σ M



t−σ

l=1 M



t

pkl Hil )e(s)dsdθ ≤ 

ζ (s)S jk ζ(s)dsdθ +

θ

t

eT (s)He(s)dsdθ, 

t

ζ (s)Ril ζ(s)ds ≤

t

T

pkl

l=1 T

t−hk

 pkl Qil )e(s)ds ≤

ζ (s)R jk ζ(s)ds + T



M l=1

t−hk

j=1 N

j=1



t

πi j Q jk +

t−hl M l=1

(30)

ζ T (s)Rζ(s)ds,

(31)

t−h



t



t

 ζ (s)S il ζ(s)dsdθ ≤

t



T

pkl t−hl

θ

t−h

12

θ

t

ζ T (s)S ζ(s)dsdθ,

(32)

Σ_{j=1}^{N} π_{ij} ∫_{t−h_k}^{t} ∫_{λ}^{t} ∫_{θ}^{t} ζ^T(s) U_{jk} ζ(s) ds dθ dλ + Σ_{l=1}^{M} p_{kl} ∫_{t−h_l}^{t} ∫_{λ}^{t} ∫_{θ}^{t} ζ^T(s) U_{il} ζ(s) ds dθ dλ ≤ ∫_{t−h}^{t} ∫_{λ}^{t} ∫_{θ}^{t} ζ^T(s) U ζ(s) ds dθ dλ,  (33)

Σ_{j=1}^{N} π_{ij} ∫_{t−d}^{t} ∫_{θ}^{t} f^T(e(s)) T_{jk} f(e(s)) ds dθ + Σ_{l=1}^{M} p_{kl} ∫_{t−d}^{t} ∫_{θ}^{t} f^T(e(s)) T_{il} f(e(s)) ds dθ ≤ ∫_{t−d}^{t} ∫_{θ}^{t} f^T(e(s)) T f(e(s)) ds dθ,  (34)

Σ_{j=1}^{N} π_{ij} ∫_{t−d}^{t} ∫_{λ}^{t} ∫_{θ}^{t} f^T(e(s)) Z_{jk} f(e(s)) ds dθ dλ + Σ_{l=1}^{M} p_{kl} ∫_{t−d}^{t} ∫_{λ}^{t} ∫_{θ}^{t} f^T(e(s)) Z_{il} f(e(s)) ds dθ dλ ≤ ∫_{t−d}^{t} ∫_{λ}^{t} ∫_{θ}^{t} f^T(e(s)) Z f(e(s)) ds dθ dλ.  (35)

It follows from Assumptions 2 and 3 that for any positive scalars λ_{1k}, λ_{2k}, λ_{3k} and λ_{4k},

−λ_{1k} η^T(t) [ M_1  M_2 ; *  I ] η(t) ≥ 0,
−λ_{2k} η^T(t − h_k(t)) [ M_1  M_2 ; *  I ] η(t − h_k(t)) ≥ 0,
−λ_{3k} η^T(t − h_k) [ M_1  M_2 ; *  I ] η(t − h_k) ≥ 0,
−λ_{4k} [ e(t) ; φ(t, e(t)) ]^T [ N_1  N_2 ; *  I ] [ e(t) ; φ(t, e(t)) ] ≥ 0,  (36)

where

M_1 = (Γ_1^T Γ_2 + Γ_2^T Γ_1)/2,  M_2 = −(Γ_1^T + Γ_2^T)/2,  N_1 = (Π_1^T Π_2 + Π_2^T Π_1)/2,  N_2 = −(Π_1^T + Π_2^T)/2.
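In the scalar case the first quadratic form in (36) can be verified directly. With Γ_1 = −1 and Γ_2 = 1 (the sector used in Section 4) one gets M_1 = −1 and M_2 = 0, so the form reduces to f² − e² ≤ 0 for the activation increment f(e) = g(x + e) − g(x). A small sketch, illustrative only, with g taken as the saturation activation of Section 4:

```python
# Scalar check of the sector condition behind (36): with Gamma1 = -1, Gamma2 = 1,
# M1 = -1 and M2 = 0, so eta^T [M1 M2; * I] eta = -e^2 + f^2 must be <= 0,
# where f(e) = g(x + e) - g(x) and g is the saturation activation g(x) = (|x+1|-|x-1|)/2.
def g(x):
    return 0.5 * (abs(x + 1.0) - abs(x - 1.0))

def sector_form(x, e):
    f = g(x + e) - g(x)
    return -e * e + f * f

# scan a grid of operating points x and errors e; the form should never be positive
worst = max(sector_form(0.1 * i, 0.1 * j)
            for i in range(-50, 51) for j in range(-50, 51))
```

Since g is 1-Lipschitz and monotone, |f(e)| ≤ |e| always holds, so the scanned maximum is zero (attained in the linear region of g).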

Meanwhile, we note that

2 ė^T(t) Y_i [ −ė(t) − C_i e(t − σ) + A_i f(e(t)) + B_i f(e(t − h_k(t))) + B_{1i} ∫_{t−d}^{t} f(e(s)) ds − K_i D_i e(t) − K_i φ(t, e(t)) ] = 0.  (37)

Let ξ(t) = [ e^T(t), e^T(t−σ), e^T(t−h_k(t)), e^T(t−h_k), ė^T(t), ė^T(t−h_k), f^T(e(t)), f^T(e(t−h_k(t))), f^T(e(t−h_k)), ∫_{t−σ}^{t} e^T(s) ds, (1/h_k(t)) ∫_{t−h_k(t)}^{t} e^T(s) ds, (1/h_k) ∫_{t−h_k}^{t} e^T(s) ds, (2/h_k²) ∫_{t−h_k(t)}^{t} ∫_{θ}^{t} e^T(s) ds dθ, (2/h_k²) ∫_{t−h_k}^{t} ∫_{θ}^{t} e^T(s) ds dθ, (1/d) ∫_{t−d}^{t} f^T(e(s)) ds, (2/d²) ∫_{t−d}^{t} ∫_{θ}^{t} f^T(e(s)) ds dθ, φ^T(t, e(t)) ]^T. By combining (14)-(37), one derives from (12) and (13) that

LV(e_t, r_t, δ_t) = L Σ_{i=1}^{5} V_i(e_t, r_t, δ_t) ≤ ξ^T(t) Θ_{ik} ξ(t) ≤ 0.  (38)

Following a similar approach to that in [33] and [31], we conclude that the estimation error converges to zero exponentially in the mean square sense.

Remark 4. In this paper, a new technique, called the matrix decomposition method, is introduced. This method makes full use of the information contained in the Lyapunov matrices, which leads to less conservative results. Owing to the introduction of two different Markovian jumping parameters, a new weak infinitesimal operator acting on a Lyapunov-Krasovskii functional with two different Markovian jumping parameters is proposed.

Remark 5. In existing works, there are no results on the state estimation problem for neural networks with two different Markovian jumping parameters and leakage, discrete and distributed delays. Including the leakage delay in the considered model (1) makes the model more accurate and closer to practical conditions.

Remark 6. In Theorem 1, some higher-order integral terms are considered (e.g. ∫_{t−h}^{t} ∫_{β}^{t} ∫_{λ}^{t} ∫_{θ}^{t} ζ^T(s) U ζ(s) ds dθ dλ dβ, ∫_{t−d}^{t} ∫_{λ}^{t} ∫_{θ}^{t} f^T(e(s)) T f(e(s)) ds dθ dλ and ∫_{t−d}^{t} ∫_{β}^{t} ∫_{λ}^{t} ∫_{θ}^{t} f^T(e(s)) Z f(e(s)) ds dθ dλ dβ); thus, there are more mode-dependent Lyapunov matrices (for example, U_{ik}, T_{ik} and Z_{ik}) in (14) than in [30] and [31]. To further illustrate the effectiveness of the matrix decomposition method and the Wirtinger-based integral inequality, let σ = 0 and h(t, δ_t) = h; the error system (3) then reduces to

ė(t) = −(C_i + K_i D_i) e(t) + A_i f(e(t)) + B_i f(e(t − h)) + B_{1i} ∫_{t−d}^{t} f(e(s)) ds + K_i φ(t, e(t)).  (39)

The system (39) is the same as the model considered in Theorem 1 of [30] and Theorem 2 of [31]. It is easy to obtain the following corollary in a similar fashion to the proof of Theorem 1.

Corollary 1. For given scalars h > 0 and d > 0, the state of the estimation error system (39) converges to zero exponentially in the mean square sense if there exist real positive matrices P_i, R_i = [ R_{i1}  R_{i2} ; *  R_{i3} ], R = [ R_1  R_2 ; *  R_3 ], T_i, T, Z_i, Z, positive scalars λ_{1i}, λ_{2i}, λ_{3i} and any appropriately dimensioned matrix Y_i such that the following LMIs are satisfied for i = 1, 2, . . . , N:

Σ_{j=1}^{N} π_{ij} U_j − U ≤ 0,  (40)

Θ_i = (Θ_{imn})_{11×11} < 0,  (41)

where R_i represents F_i, R_i, S_i, U_i; R represents F, R, S, U; U_j represents F_j, R_j, S_j, U_j, T_j, Z_j; U represents F, R, S, U, T, Z, respectively; S_i = h S_{1i} + h S_{2i}, S_{si} = [ S_{si1}  S_{si2} ; *  S_{si3} ], s = 1, 2, and

Θ_{i1,1} = Σ_{j=1}^{N} π_{ij} P_j + R_{i1} + h R_1 + h S_{i1} + (h²/2) S_1 + (h²/2) U_{i1} + (h³/6) U_1 + F_{i1} + h F_1 − S_{1i3} − 4 S_{2i3} − 2 U_{i3} − λ_{1i} M_1 − λ_{3i} N_1,
Θ_{i1,2} = S_{1i3} − 2 S_{2i3},
Θ_{i1,3} = P_i + R_{i2} + h R_2 + h S_{i2} + (h²/2) S_2 + (h²/2) U_{i2} + (h³/6) U_2 − C_i Y_i^T − D_i^T G_i^T,
Θ_{i1,5} = F_{i2} + h F_2 − λ_{1i} M_2,
Θ_{i1,7} = −(h/2) S_{1i2}^T + h S_{2i2}^T + 3 S_{2i3} + U_{i3},
Θ_{i1,8} = −3 S_{2i2}^T − U_{i2}^T,
Θ_{i1,11} = −λ_{3i} N_2,
Θ_{i2,2} = −R_{i1} − F_{i1} − S_{1i3} − 4 S_{2i3} − λ_{2i} M_1,
Θ_{i2,4} = −R_{i2},
Θ_{i2,6} = −F_{i2} − λ_{2i} M_2,
Θ_{i2,7} = (h/2) S_{1i2}^T + 2h S_{2i2}^T + 3 S_{2i3},
Θ_{i2,8} = −3 S_{2i2}^T,
Θ_{i3,3} = R_{i3} + h R_3 + h S_{i3} + (h²/2) S_3 + (h²/2) U_{i3} + (h³/6) U_3 − Y_i − Y_i^T,
Θ_{i3,5} = Y_i A_i,
Θ_{i3,6} = Y_i B_i,

Θ_{i3,7} = Y_i B_{1i},
Θ_{i3,11} = −G_i,
Θ_{i4,4} = −R_{i3},
Θ_{i5,5} = F_{i3} + h F_3 + d T_i + (d²/2) T + (d²/2) Z_i + (d³/6) Z − λ_{1i} I,
Θ_{i6,6} = −F_{i3} − λ_{2i} I,
Θ_{i7,7} = −(h/4) S_{1i1} − h² S_{2i1} − (3h/2) S_{2i2} − (3h/2) S_{2i2}^T − 3 S_{2i3} − U_{i3},
Θ_{i7,8} = (3h/2) S_{2i1} + 3 S_{2i2}^T + U_{i2}^T,
Θ_{i8,8} = −3 S_{2i1} − U_{i2},
Θ_{i9,9} = −4d T_i,
Θ_{i9,10} = 6 T_i,
Θ_{i10,10} = −(12/d) T_i − 2 Z_i,
Θ_{i11,11} = −λ_{3i} I.

Figure 1: The Markov chain generated by Π and ε = 0.01.

Figure 2: The complex dynamic behaviors exhibited by the system (39).
Remark 7. According to the difference between systems (1) and (39), the Lyapunov functional for system (39) was constructed by letting h(t, δ_t) = h_{δ_t} = h and σ = 0. In the proof of Corollary 1, Jensen's inequality was used to deal with the terms −h ∫_{t−h}^{t} ζ^T(s) S_{1i} ζ(s) ds and −∫_{t−h}^{t} ∫_{θ}^{t} ζ^T(s) U_i ζ(s) ds dθ, instead of the reciprocally convex method and the matrix decomposition technique, respectively.

Remark 8. Theorem 1 in [30] is a special case of Corollary 1 with R(r_t) = 0, R_2 = R_3 = 0, S(r_t) = 0, S = 0, U(r_t) = 0, U = 0, Z(r_t) = 0, Z = 0, T = 0 and T(r_t) a constant matrix. When R(r_t) = 0, R = 0, S_{i1} = S_{i2} = 0, S = 0, U_{i1} = U_{i2} = 0, U_1 = U_2 = 0 and T = 0, Theorem 2 in [31] is a special case of Corollary 1. The matrix decomposition technique is used in Corollary 1, which leads to a less conservative result, as shown by the following example.

4. Examples

In this section, two numerical examples are provided to show the effectiveness of the proposed method.

Example 4.1 Consider the system (39) with the following parameters:

C_1 = [ 3.6  0 ; 0  1.9 ],  A_1 = [ 1.3  −0.2 ; 0.5  0.8 ],  B_1 = [ 0.7  0.5 ; 0.8  −0.9 ],  B_{11} = [ 0.3  0.2 ; −1  0.7 ],  D_1 = [ 0.5  0 ],
C_2 = [ 2.5  0 ; 0  3 ],  A_2 = [ −0.6  0.7 ; 1.4  0.2 ],  B_2 = [ 0.6  −0.9 ; 0.1  0.8 ],  B_{12} = [ −0.5  0.7 ; 1  −0.9 ],  D_2 = [ −0.5  0.2 ],
C_3 = [ 2.7  0 ; 0  2.4 ],  A_3 = [ 0.1  −0.8 ; 0.8  0.1 ],  B_3 = [ 0.4  1.1 ; 0.4  0.5 ],  B_{13} = [ 1.2  0.3 ; 0.4  0.3 ],  D_3 = [ 0  0.6 ].


Figure 3: State responses and error trajectories in case of h = 0.3 and d = 1.5.

Here, the activation function and the nonlinear disturbance function are assumed to be g(x(t)) = (1/2)(|x + 1| − |x − 1|) and g̃(x(t)) = 0.2 sin(x_1(t)) + 0.2 cos(x_2(t)) + 0.6, respectively. It is known that Γ_1 = −I, Γ_2 = I, Π_1 = [−0.2, 0.2] and Π_2 = [0.4, 0.6]. The purpose is to compare the maximum allowable bounds of h that guarantee the asymptotic stability of the above system. For different d, the upper bounds of h obtained by the methods in [30], [31] and this paper are summarized in Table 1. From Table 1, it can be seen that the condition in Corollary 1 is less conservative than those in [30] and [31].

We assume Π = [ −0.4  0.2  0.2 ; 0.3  −0.4  0.1 ; 0.25  0.25  −0.5 ] and ε = 0.01; then a Markov chain can be generated, which is given in Fig. 1. Let h = 0.3 and d = 1.5; by Corollary 1, the gain matrices of the state estimator can be designed as

K_1 = [ 0.1093 ; −0.0699 ],  K_2 = [ −0.0497 ; −0.0561 ],  K_3 = [ 0.0690 ; 0.1637 ].

Therefore, it follows from Corollary 1 that the estimation error converges to zero exponentially. Let the initial conditions be x(s) = [50, −27]^T and x̂(s) = [−35, 65]^T for s ∈ [−0.3, 0] and r_0 = 1. The complex dynamic behaviors


Figure 4: The Markov chain generated by Π, Λ and ε = 0.01.


Figure 5: The complex dynamic behaviors exhibited by the system (1).

exhibited by the system (39) is given in Fig. 2. Fig. 3 depicts the trajectories of the true states and their estimations, and the responses of the error states e_1(t) and e_2(t) are also given in Fig. 3.

Table 1: Allowable upper bounds of h for different d.

Methods       d = 0.5   d = 1     d = 1.5   d = 2     d = 2.2
[30]          0.5248    0.3702    0.2574    0.0916    0.0056
[31]          0.5630    0.4051    0.2897    0.1334    0.0379
Corollary 1   0.6028    0.4521    0.3263    0.1735    0.0695
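As an informal cross-check of Example 4.1, the reduced error system (39) can be integrated by a forward Euler scheme. The sketch below freezes the Markov chain in mode 1, drops the disturbance term φ, and replaces the activation increment f(e) by the sector-bounded surrogate clamp(e, −1, 1); these simplifications are ours, so the run only illustrates that the reported gain K_1 keeps the error bounded and shrinking, not the full stochastic result.

```python
from collections import deque

# Forward-Euler sketch of the reduced error system (39), mode 1 of Example 4.1,
# with the disturbance dropped and f(e) approximated by clamp(e, -1, 1)
# (our simplifications). K is the gain K1 reported in the text; h = 0.3, d = 1.5.
C = [[3.6, 0.0], [0.0, 1.9]]
A = [[1.3, -0.2], [0.5, 0.8]]
B = [[0.7, 0.5], [0.8, -0.9]]
B1 = [[0.3, 0.2], [-1.0, 0.7]]
D = [0.5, 0.0]
K = [0.1093, -0.0699]

def clamp(v):
    return max(-1.0, min(1.0, v))

def simulate(T=5.0, dt=1e-3, h=0.3, d=1.5):
    nh, nd = round(h / dt), round(d / dt)
    e = [85.0, -92.0]                                    # e(0) = x(0) - xhat(0)
    hist = deque(list(e) for _ in range(nh + 1))         # delayed states e(t - h)
    fhist = deque([clamp(v) for v in e] for _ in range(nd))  # f(e(s)) on [t-d, t)
    integ = [nd * dt * clamp(v) for v in e]              # running ∫_{t-d}^t f(e(s)) ds
    for _ in range(round(T / dt)):
        eh = hist[0]
        fe = [clamp(v) for v in e]
        feh = [clamp(v) for v in eh]
        de = []
        for i in range(2):
            val = -(C[i][0] * e[0] + C[i][1] * e[1])
            val -= K[i] * (D[0] * e[0] + D[1] * e[1])    # -(K D) e term
            val += sum(A[i][j] * fe[j] + B[i][j] * feh[j] + B1[i][j] * integ[j]
                       for j in range(2))
            de.append(val)
        e = [e[i] + dt * de[i] for i in range(2)]
        hist.append(list(e)); hist.popleft()
        fnew = [clamp(v) for v in e]
        old = fhist.popleft(); fhist.append(fnew)
        integ = [integ[i] + dt * (fnew[i] - old[i]) for i in range(2)]
    return (e[0] ** 2 + e[1] ** 2) ** 0.5

final_norm = simulate()
```

Because the linear part −(C_1 + K_1 D_1) is stable and the nonlinear and delayed terms are bounded, the error norm decays from about 125 into a small residual ball.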

Remark 9. From Table 1, the results obtained in this paper are less conservative than those in [30] and [31]: larger upper bounds of the time delay are computed. This is because the matrix decomposition method and the Wirtinger-based integral inequality are adopted in this paper. The former sufficiently utilizes the information about the Lyapunov matrices, while the Wirtinger-based integral inequality gives a sharper upper bound than Jensen's inequality.

Example 4.2 Consider the system (1) with the following parameters:

C_1 = [ 1.3  0 ; 0  2.7 ],  A_1 = [ −1.5  −0.2 ; −4.7  0.8 ],  B_1 = [ −0.7  1.5 ; 0.8  −0.9 ],  B_{11} = [ 1.3  1.2 ; 1.5  2 ],  D_1 = [ 0  1 ],
C_2 = [ 0.5  0 ; 0  1.5 ],  A_2 = [ 2.2  1.7 ; −0.1  0.8 ],  B_2 = [ 0.5  0.9 ; 0.1  −0.8 ],  B_{12} = [ −0.5  0.7 ; 1  −0.9 ],  D_2 = [ 1.2  0 ].

In this example, a Markov chain is generated by Π = [ −3  3 ; 3  −3 ], Λ = [ −6  6 ; 6  −6 ] and ε = 0.01, which is shown in Fig. 4. The activation function and the nonlinear disturbance function are taken the same as in Example 4.1. Obviously, we have Γ_1 = −I, Γ_2 = I, Π_1 = [−0.2, 0.2] and Π_2 = [0.4, 0.6]. Considering the time delays h_1(t) = 0.8 + 0.8 sin t and h_2(t) = 1.2 + cos t, we can obtain h_1 = 1.6, h_2 = 2.2, μ_1 = 0.8, μ_2 = 1. And let σ = 0.5 and


Figure 6: State responses and error trajectories in case of σ = 0.5, h_1 = 1.6, h_2 = 2.2 and d = 1.5.

d = 1.5. By Theorem 1, the gain matrices of the state estimator can be designed as

K_1 = [ 0.0644 ; 0.1403 ],  K_2 = [ 0.3921 ; −0.0205 ].

According to Theorem 1, the state of the estimation error system (3) or (4) converges to zero exponentially. Let the initial conditions be x(s) = [56, −75]^T and x̂(s) = [−50, 62]^T for s ∈ [−2.2, 0], r_0 = 1 and δ_0 = 1. Some simulations are presented in Fig. 5 and Fig. 6. Fig. 5 shows the complex dynamic behaviors exhibited by the system (1), and Fig. 6 shows the trajectories of the true states and their estimations together with the responses of the error states e_1(t) and e_2(t).
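The two jump processes r_t and δ_t driving system (1) can be sampled independently from their generators. A rough sketch of the discretisation we assume was used for Fig. 4: with step ε = 0.01, the one-step transition matrix is approximated by P ≈ I + εG (the helper names below are ours).

```python
import random

# First-order discretisation of a continuous-time Markov chain: with step eps,
# P = I + eps * G approximates the transition matrix (valid while eps*max|G_ii| < 1).
def transition_matrix(G, eps):
    n = len(G)
    return [[(1.0 if i == j else 0.0) + eps * G[i][j] for j in range(n)]
            for i in range(n)]

def sample_chain(G, eps, steps, state=0, seed=0):
    rng = random.Random(seed)
    P = transition_matrix(G, eps)
    path = [state]
    for _ in range(steps):
        u, acc = rng.random(), 0.0
        for j, p in enumerate(P[state]):
            acc += p
            if u < acc:
                state = j
                break
        path.append(state)
    return path

Pi = [[-3.0, 3.0], [3.0, -3.0]]     # generator of r_t (Example 4.2)
Lam = [[-6.0, 6.0], [6.0, -6.0]]    # generator of delta_t
r_path = sample_chain(Pi, 0.01, 1000, seed=0)
d_path = sample_chain(Lam, 0.01, 1000, seed=1)
```

Since Λ has twice the transition rates of Π, the sampled δ_t path switches roughly twice as often as r_t, which is the qualitative behavior visible in Fig. 4.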

5. Conclusions

In this paper, we have investigated the problem of state estimation for neural networks with two Markovian jumping parameters and leakage, discrete and distributed delays. The considered model (1) contains two different Markovian jumping parameters, which describes the actual situation more accurately and encompasses the system (39) as a special case. Owing to the introduction of two Markovian jumping parameters, a new weak infinitesimal operator acting on the Lyapunov-Krasovskii functional has been proposed. By using the matrix decomposition method, which is proposed here for the first time, constructing an appropriate Lyapunov-Krasovskii functional and applying the Wirtinger-based integral inequality, sufficient conditions have been presented which are more effective than the existing results. Numerical examples and simulations have been given to demonstrate the effectiveness and usefulness of the proposed results. In future work, we will utilize the proposed method to deal with systems with parameter uncertainties or unknown nonlinear disturbances.

[1] M. Zavarei, M. Jamshidi, Time Delay Systems: Analysis, Optimization and Applications, North-Holland, 1987.
[2] J. Hale, S. Lunel, Introduction to Functional Differential Equations, Springer-Verlag, New York, 1993.
[3] G.P. Liu, Nonlinear Identification and Control: A Neural Network Approach, Springer, London, 2001.
[4] P. Watta, K. Wang, M. Hassoun, Recurrent neural nets as dynamical Boolean systems with application to associative memory, IEEE Trans. Neural Netw. 8 (5) (1997) 1268-1280.
[5] A. Seuret, F. Gouaisbaut, Wirtinger-based integral inequality: Application to time-delay systems, Automatica 49 (9) (2013) 2860-2866.
[6] P. Park, J.W. Ko, C. Jeong, Reciprocally convex approach to stability of systems with time-varying delays, Automatica 47 (1) (2011) 235-238.
[7] Y. He, Q. Wang, C. Lin, M. Wu, Delay-range-dependent stability for systems with time-varying delay, Automatica 43 (2007) 371-376.
[8] W. Lee, P. Park, Second-order reciprocally convex approach to stability of systems with interval time-varying delays, Appl. Math. Comput. 229 (2014) 245-253.
[9] P. Liu, Improved delay-dependent robust stability criteria for recurrent neural networks with time-varying delays, ISA Transactions 52 (2013) 30-35.
[10] Y. Liu, Z. Wang, X. Liu, Design of exponential state estimators for neural networks with mixed time delays, Physics Letters A 364 (2007) 401-412.
[11] P. Tino, M. Cernansky, L. Benuskova, Markovian architectural bias of recurrent neural networks, IEEE Trans. Neural Netw. 15 (2004) 6-15.
[12] H. Zhang, Y. Wang, Stability analysis of Markovian jumping stochastic Cohen-Grossberg neural networks with mixed time delays, IEEE Trans. Neural Netw. 19 (2008) 366-370.
[13] Q. Zhu, J. Cao, Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays, IEEE Trans. Syst. Man Cybern. B 41 (2) (2011) 341-353.
[14] X. Yang, J. Cao, J. Lu, Synchronization of Markovian coupled neural networks with nonidentical node-delays and random coupling strengths, IEEE Trans. Neural Netw. Learning Syst. 23 (1) (2012) 60-71.
[15] Y. Zhao, L. Zhang, S. Shen, H. Gao, Robust stability criterion for discrete-time uncertain Markovian jumping neural networks with defective statistics of modes transitions, IEEE Trans. Neural Netw. 22 (1) (2011) 164-170.
[16] P. Shi, Y. Zhang, M. Chadli, R.K. Agarwal, Mixed H∞ and passive filtering for discrete fuzzy neural networks with stochastic jumps and time delays, IEEE Trans. Neural Netw. Learning Syst. 27 (4) (2016) 903-909.
[17] I. Gunawan, Reliability analysis of shuffle-exchange network systems, Reliab. Eng. Syst. Saf. 93 (2008) 271-276.
[18] Y. Liu, Z. Wang, J. Liang, X. Liu, Stability and synchronization of discrete-time Markovian jumping neural networks with mixed mode-dependent time delays, IEEE Trans. Neural Netw. 20 (7) (2009) 1102-1116.
[19] H. Zhang, Y. Wang, Stability analysis of Markovian jumping stochastic Cohen-Grossberg neural networks with mixed time delays, IEEE Trans. Neural Netw. 19 (2) (2008) 366-370.
[20] Q. Zhu, J. Cao, Stability analysis of Markovian jump stochastic BAM neural networks with impulse control and mixed time delays, IEEE Trans. Neural Netw. Learning Syst. 23 (3) (2012) 467-479.
[21] M. Syed Ali, Stability of Markovian jumping recurrent neural networks with discrete and distributed time-varying delays, Neurocomputing 149 (2015) 1280-1285.
[22] Z. Wang, Y. Liu, L. Yu, X. Liu, Exponential stability of delayed recurrent neural networks with Markovian jumping parameters, Physics Letters A 356 (2006) 346-352.
[23] L. Wang, Z. Zhang, Y. Wang, Stochastic exponential stability of the delayed reaction-diffusion recurrent neural networks with Markovian jumping parameters, Physics Letters A 372 (2008) 3201-3209.
[24] Q. Ma, S. Xu, Y. Zou, Stability and synchronization for Markovian jump neural networks with partly unknown transition probabilities, Neurocomputing 74 (2011) 3404-3411.
[25] J. Ren, H. Zhu, S. Zhong, Y. Zeng, Y. Zhang, State estimation of recurrent neural networks with two Markovian jumping parameters and mixed delays, Proceedings of the 34th Chinese Control Conference, Hangzhou, China, July 28-30, 2015, 1577-1582.
[26] L. Jin, P. Nikiforuk, M. Gupta, Adaptive control of discrete-time nonlinear systems using recurrent neural networks, IEE Proceedings - Control Theory and Applications 141 (1994) 167-176.
[27] Z. Wang, D. Ho, X. Liu, State estimation for delayed neural networks, IEEE Trans. Neural Netw. 16 (2005) 279-284.
[28] H. Huang, G. Feng, J. Cao, Robust state estimation for uncertain neural networks with time-varying delay, IEEE Trans. Neural Netw. 19 (2008) 1329-1339.
[29] Z. Wang, Y. Liu, X. Liu, State estimation for jumping recurrent neural networks with discrete and distributed delays, Neural Netw. 22 (2009) 41-48.
[30] Y. Chen, W. Zheng, Stochastic state estimation for neural networks with distributed delays and Markovian jump, Neural Netw. 25 (2012) 14-20.
[31] H. Huang, T. Huang, X. Chen, A mode-dependent approach to state estimation of recurrent neural networks with Markovian jumping parameters and mixed delays, Neural Netw. 46 (2013) 50-61.
[32] K. Gopalsamy, Stability and Oscillations in Delay Differential Equations of Population Dynamics, Kluwer Academic Publishers, Dordrecht, 1992.
[33] S. Lakshmanan, J. Park, H. Yung, P. Balasubramaniam, Design of state estimator for neural networks with leakage, discrete and distributed delays, Appl. Math. Comput. 218 (2012) 11297-11310.
[34] X. Li, J. Cao, Delay-dependent stability of neural networks of neutral type with time delay in the leakage term, Nonlinearity 23 (2010) 1709-1726.
[35] Z. Zhao, Q. Song, S. He, Passivity analysis of stochastic neural networks with time-varying delays and leakage delay, Neurocomputing 125 (2014) 22-27.
[36] C. Zheng, Y. Wang, Z. Wang, Stability analysis of stochastic fuzzy Markovian jumping neural networks with leakage delay under impulsive perturbations, J. Frankl. Inst. 351 (2014) 1728-1755.
[37] R. Rakkiyappan, Q. Zhu, T. Radhika, Design of sampled data state estimator for Markovian jumping neural networks with leakage time-varying delays and discontinuous Lyapunov functional approach, Nonlinear Dyn. 73 (2013) 1367-1383.
[38] S. Senthilraj, R. Raja, Q. Zhu, R. Samidurai, Z. Yao, Exponential passivity analysis of stochastic neural networks with leakage, distributed delays and Markovian jumping parameters, Neurocomputing 175 (2016) 401-410.
