Asynchronous H∞ filtering for discrete-time Markov jump neural networks

Zhaowen Xu, Hongye Su, Huiling Xu, Zheng-Guang Wu

This work was supported by the National Natural Science Foundation of P.R. China (61304072, 61320106009, 61134007, 61174137). Z. Xu, H. Su (corresponding author), and Z.-G. Wu are with the National Laboratory of Industrial Control Technology, Institute of Cyber-Systems and Control, Zhejiang University, Yuquan Campus, Hangzhou, Zhejiang 310027, P.R. China (e-mail: [email protected]; [email protected]; [email protected]). H. Xu is with the School of Science, Nanjing University of Science and Technology, Nanjing 210094, P.R. China (e-mail: [email protected]).

Abstract—This paper is concerned with the asynchronous H∞ filtering problem for discrete-time Markov jump neural networks. The asynchronous phenomenon is considered: two different Markov chains are used to govern the jump mode of the filter and that of the neural networks, respectively, which means that their modes need not correspond to each other. A novel filter design method is proposed. By introducing a unified Lyapunov functional, a sufficient condition is derived in terms of a linear matrix inequality (LMI) such that the resultant filtering error system is stochastically stable. A numerical example is given to demonstrate the effectiveness of the proposed theoretical results.

Index Terms—Markov jump neural networks, asynchronous filter, linear matrix inequalities (LMI)

I. INTRODUCTION

Neural networks, viewed as mimic models of neural processing in the human brain, have been successfully applied in various areas over the last few decades, including pattern recognition, associative memory design, image processing and optimization problems [1]–[11]. Applications of neural networks can also be found in recent works on the controllability of neuronal networks [9], [10]. In addition, recent advances and challenges in the dynamics of complex neural networks and their applications have been reviewed in detail in [11]. Since it is very common for neural networks to exhibit the information-latching problem, which can be handled by extracting finite-state representations from trained networks [12]–[14], much attention has recently been paid to introducing a Markov chain to determine the switching or jumping between different neural network modes. Markov jump neural networks are now of great importance in modeling a class of neural networks with finitely many network modes, and many research results have been obtained. For example, in [12], the stability and synchronization problem of a class of Markovian jumping neural networks has been analyzed and a unified framework has been established to handle both the Markovian jumping parameters and mixed time delays. The work [15] discussed the issue of stability analysis for a class of impulsive stochastic BAM neural networks with both Markovian jump parameters and mixed time delays. In [16], the passivity for discrete-time stochastic neural networks

with both Markovian jumping parameters and mixed time delays has been addressed. For more details concerning Markov jump neural networks, see [17]–[19] and the references cited therein.

As is well known, the neuron states are generally not fully available in the network outputs. State estimation, as a strategy that uses available measurements to estimate the state of a dynamic system, plays an important role in the analysis of neural networks. For this reason, it is of practical interest to study the filtering problem of Markov jump neural networks, which has recently drawn particular research interest, see e.g. [20]–[22] and the references cited therein. For example, [20] investigated the robust H∞ state estimation problem for a general class of neural networks with probabilistic measurement delays and designed a full-order state estimator. In [22], a new convex combination technique is developed to derive a delay-dependent exponential stability condition for the estimator of Markovian jumping neural networks with time-varying discrete and distributed delays.

On the other hand, when considering the filtering problem of a Markov jump system, a very common assumption is that the mode of the system always corresponds to the mode of the filter, which leads to the so-called mode-dependent filter. This requires that the Markov chain state, i.e., the mode of the system, be available to the filter at any time, which may sometimes be impractical or even impossible to satisfy, as in networked control systems without time-stamp information [23]–[28]. To deal with this kind of situation, mode-independent filters have been introduced in recent years, see e.g. [29]–[31]. It should be noted that the mode-independent filter is quite powerful when the information of the system mode is completely inaccessible, but its neglect of all available mode information inevitably adds conservatism to the results. The asynchronous filter has therefore been proposed to handle the asynchronous phenomenon between the mode of the system and that of the filter [32]. However, to the best of the authors' knowledge, the asynchronous filtering problem for the general neural network case, catering for realistic network-induced phenomena, has not yet been investigated, which is the main motivation of this work.

In this paper, the asynchronous filtering problem is considered for a class of Markov jump neural networks. Time delay, the nonlinearity of neural networks and incomplete measurements are introduced to make the system model more realistic. Two different Markov chains are used to represent the filter modes and the system modes, respectively, which means that their modes need not be either identical or independent.


In this way, we make full use of the information of the system modes, which makes the results less conservative than those of previous methods. An asynchronous filter is then proposed and sufficient conditions are derived to ensure that the filtering error dynamics are stochastically stable with a prescribed H∞ performance. A numerical example is given to illustrate the effectiveness of the proposed filter design method.

Notation: The notation used throughout this paper is fairly standard. $\mathbb{R}^{\bar n}$ and $\mathbb{R}^{\bar m\times\bar n}$ denote the $\bar n$-dimensional Euclidean space and the set of all $\bar m\times\bar n$ real matrices, respectively. The notation $X>Y$ ($X\geq Y$), where $X$ and $Y$ are symmetric matrices, means that $X-Y$ is positive definite (positive semidefinite). $I$ and $0$ represent the identity matrix and a zero matrix, respectively. The superscript "T" represents the transpose. $\|\cdot\|$ denotes the Euclidean norm of a vector and its induced norm of a matrix. $(\Omega,\mathcal F,\mathcal P)$ is a probability space: $\Omega$ is the sample space, $\mathcal F$ is the $\sigma$-algebra of subsets of the sample space, and $\mathcal P$ is the probability measure on $\mathcal F$. $\mathbb E[\cdot]$ denotes the expectation operator with respect to the probability measure $\mathcal P$. For an arbitrary matrix $B$ and two symmetric matrices $A$ and $C$,
\[
\begin{bmatrix} A & B \\ * & C \end{bmatrix}
\]
denotes a symmetric matrix, where "$*$" denotes the term that is induced by symmetry. Matrices, if their dimensions are not explicitly stated, are assumed to have compatible dimensions for algebraic operations.

II. PRELIMINARIES

Fix a probability space $(\Omega,\mathcal F,\mathcal P)$ and consider the following discrete-time stochastic Markov jump neural network with randomly occurring nonlinearity and time delay:
\[
\begin{cases}
x(k+1) = A(r(k))x(k) + E(r(k))f(x(k)) + E_d(r(k))f(x(k-d(k))) + B(r(k))\upsilon(k)\\
y(k) = \delta(\alpha(k),1)\,\phi(C(r(k))x(k)) + \delta(\alpha(k),2)\,C(r(k))x(k) + D(r(k))\upsilon(k)\\
z(k) = L(r(k))x(k)\\
x(j) = \psi(j),\quad j=-d_M,\,-d_M+1,\ldots,-1,0,
\end{cases}\tag{1}
\]
where $x(k)\in\mathbb{R}^{\bar n}$ is the neural state vector, $y(k)\in\mathbb{R}^{\bar m}$ is the measured output, $z(k)\in\mathbb{R}^{\bar q}$ is the neural signal to be estimated, and $\upsilon(k)\in\mathbb{R}^{\bar p}$ is the disturbance input. $A(r(k))$ describes the rate with which each neuron resets its potential to the resting state in isolation when disconnected from the network and external inputs. $E(r(k))$ is the connection weight matrix and $E_d(r(k))$ is the delayed connection weight matrix. $B(r(k))$, $C(r(k))$, $D(r(k))$ and $L(r(k))$ are known real constant matrices with appropriate dimensions. The positive integer $d(k)$ denotes the time-varying delay satisfying
\[
d_m \leq d(k) \leq d_M,\qquad k\in\mathbb{N}^+,
\]

where the lower bound $d_m$ and the upper bound $d_M$ are known positive integers.

The parameter $\{r(k),k\geq 0\}$ is a Markov chain taking values in a finite set $\mathcal T_1=\{1,2,\cdots,T_1\}$ with transition probability matrix $\Pi=\{\pi_{ij}\}$ given by
\[
\Pr\{r(k+1)=j\,|\,r(k)=i\}=\pi_{ij},
\]
where $0\leq\pi_{ij}\leq 1$ for all $i,j\in\mathcal T_1$ and $\sum_{j=1}^{T_1}\pi_{ij}=1$ for all $i\in\mathcal T_1$.

$\phi$ is a nonlinear function representing the randomly occurring nonlinearity and satisfies the following sector condition [33]:
\[
(\phi(\eta)-K_1\eta)^T(\phi(\eta)-K_2\eta)\leq 0,\quad \eta\in\mathbb{R}^{\bar m},\tag{2}
\]
where $K_1$ and $K_2$ are known diagonal matrices with $K_2>K_1$. For technical convenience, the nonlinear function $\phi$ is decomposed into a linear and a nonlinear part as
\[
\phi(C(r(k))x(k))=\phi_s(C(r(k))x(k))+K_1C(r(k))x(k),\tag{3}
\]
which can be substituted into (2) to get
\[
\phi_s(C(r(k))x(k))^T\big(\phi_s(C(r(k))x(k))-KC(r(k))x(k)\big)\leq 0,\tag{4}
\]
where $K=K_2-K_1>0$. $\delta(\cdot,\cdot)$ is the Kronecker delta function defined by
\[
\delta(k,l)=\begin{cases}1, & k=l\\ 0, & k\neq l.\end{cases}
\]
The stochastic variable $\alpha(k)$ obeys the distribution
\[
\Pr\{\alpha(k)=i\}=\beta_i,
\]
where $\beta_i\in[0,1]$ $(i=1,2,3)$ are constants satisfying $\beta_1+\beta_2+\beta_3=1$.

Remark 1: Inspired by [34], the Kronecker delta function is introduced so that different values of $\alpha(k)$ represent different kinds of incomplete measurements. Specifically, $\alpha(k)=1$ means that the sensor nonlinearity occurs; when $\alpha(k)=2$, the nonlinearity disappears and the sensors work normally; and if $\alpha(k)=3$, the measurements contain only the disturbances. The model (1), as such, describes the phenomena of randomly occurring sensor nonlinearity and missing measurements in a unified way and reflects reality better.

Similar to [12], the activation function $f$ is assumed to be continuous and bounded, and there exist constants $\lambda_i$, $\rho_i$ such that
\[
\lambda_i \leq \frac{f_i(\alpha_1)-f_i(\alpha_2)}{\alpha_1-\alpha_2} \leq \rho_i,\quad i=1,2,\ldots,n,\tag{5}
\]
where $\alpha_1,\alpha_2\in\mathbb{R}$ and $\alpha_1\neq\alpha_2$.
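To make the sector condition (2) and the decomposition (3)–(4) concrete, the following minimal Python sketch checks them numerically for one admissible nonlinearity. The diagonal bounds K1 and K2 used here are hypothetical placeholders; only the functional form of phi matches the example used later in Section IV.

import numpy as np

# Hypothetical sector bounds (diagonal, with K2 > K1).
K1 = np.diag([0.2, 0.3])
K2 = np.diag([0.8, 0.9])
K = K2 - K1

def phi(eta):
    # A nonlinearity lying componentwise in the sector [K1, K2].
    return 0.5 * (K1 + K2) @ eta + 0.5 * (K2 - K1) @ np.sin(eta)

rng = np.random.default_rng(0)
for _ in range(1000):
    eta = rng.normal(size=2)
    p = phi(eta)
    # Sector condition (2)
    assert (p - K1 @ eta) @ (p - K2 @ eta) <= 1e-12
    # Decomposition (3) and the shifted condition (4)
    phi_s = p - K1 @ eta
    assert phi_s @ (phi_s - K @ eta) <= 1e-12
print("conditions (2) and (4) hold on all sampled eta")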

In this paper, we are interested in constructing the following neuron filter:
\[
\begin{cases}
\hat x(k+1) = A_f(\sigma(k))\hat x(k) + B_f(\sigma(k))y(k)\\
\hat z(k) = C_f(\sigma(k))\hat x(k),\qquad \hat x(0)=0,
\end{cases}\tag{6}
\]
where $\hat x(k)\in\mathbb{R}^{\bar n}$, $\hat z(k)\in\mathbb{R}^{\bar q}$, and the matrices $A_f(\sigma(k))$, $B_f(\sigma(k))$ and $C_f(\sigma(k))$ are the estimator matrices with appropriate dimensions, which are to be designed. The parameter $\{\sigma(k),k\geq 0\}$ is a Markov chain taking values in a finite set $\mathcal T_2=\{1,2,\cdots,T_2\}$ with transition probability matrix $\Gamma^{r(k+1)}=\{\chi^{r(k+1)}_{pq}\}$ given by
\[
\Pr\{\sigma(k+1)=q\,|\,\sigma(k)=p\}=\chi^{r(k+1)}_{pq},
\]
where $0\leq\chi^{r(k+1)}_{pq}\leq 1$ for all $p,q\in\mathcal T_2$ and $\sum_{q=1}^{T_2}\chi^{r(k+1)}_{pq}=1$ for all $p\in\mathcal T_2$. In the stochastic variation, the Markov chain $r(k)$ is assumed to be independent of $\mathcal F_{k-1}=\sigma\{\sigma(1),\sigma(2),\cdots,\sigma(k-1)\}$, where $\mathcal F_{k-1}$ is the $\sigma$-algebra generated by $\{\sigma(1),\sigma(2),\cdots,\sigma(k-1)\}$ (see [35]).

Remark 2: It is notable that in reality the filter mode does not necessarily match the mode of the neural network, especially when the jump mode of the neural network is not available. Here a different Markov chain $\sigma(k)$ is introduced into the filter (6). It can be regarded as an observation of the mode of the neural network $r(k)$, and it depends not only on $\sigma(k-1)$ but also on $r(k)$, which means that the mode information of the neural network can be fully exploited in the filter. It is easy to see that the mode of the filter (6) is asynchronous with that of the neural network, but depends on it according to certain probabilities. When $\mathcal T_2=\{1\}$, the filter reduces to a mode-independent filter, which is thus included in the filter form (6) as a special case.

Let $\bar z(k)=z(k)-\hat z(k)$ and $\bar x(k)=\begin{bmatrix} x(k)^T & \hat x(k)^T\end{bmatrix}^T$, and combine (1), (3) and (6); then we obtain the filtering error dynamic system
\[
\begin{cases}
\bar x(k+1) = \bar A_{ip}\bar x(k) + \bar B_{ip}\upsilon(k) + \beta_1 H_2B_{fp}\phi_s(C_ix(k)) + (\delta(\alpha(k),1)-\beta_1)H_2B_{fp}\phi_s(C_ix(k))\\
\qquad\qquad + (\delta(\alpha(k),1)-\beta_1)H_2B_{fp}K_1C_ix(k) + (\delta(\alpha(k),2)-\beta_2)H_2B_{fp}C_ix(k)\\
\qquad\qquad + H_1E_if(x(k)) + H_1E_{di}f(x(k-d(k)))\\
\bar z(k) = \bar L_{ip}\bar x(k),
\end{cases}\tag{7}
\]
where
\[
\bar A_{ip}=\begin{bmatrix} A_i & 0\\ \beta_1B_{fp}K_1C_i+\beta_2B_{fp}C_i & A_{fp}\end{bmatrix},\quad
H_1=\begin{bmatrix} I\\ 0\end{bmatrix},\quad
H_2=\begin{bmatrix} 0\\ I\end{bmatrix},\quad
\bar B_{ip}=\begin{bmatrix} B_i\\ B_{fp}D_i\end{bmatrix},\quad
\bar L_{ip}=\begin{bmatrix} L_i & -C_{fp}\end{bmatrix}.
\]

Definition 1: The filtering error dynamic system (7) is said to be stochastically stable in the mean square if, for any initial state $(\bar x(0),r_0,\sigma_0)$ and in the case of $\upsilon(k)=0$, the following condition holds:
\[
\mathbb{E}\left[\sum_{k=0}^{\infty}\|\bar x(k)\|^2\,\Big|\,\bar x(0),r_0,\sigma_0\right]<\infty.\tag{8}
\]

We are now in a position to formulate the problem addressed in this paper: for the given neural network (1) and a prescribed scalar $\gamma>0$, design a neuron filter of the form (6) such that the filtering error dynamic system (7) is stochastically stable in the case of $\upsilon(k)=0$ and satisfies
\[
\sum_{k=0}^{\infty}\mathbb{E}\left[\|\bar z(k)\|^2\right]<\gamma^2\sum_{k=0}^{\infty}\|\upsilon(k)\|^2\tag{9}
\]
under the zero-initial condition for any nonzero $\upsilon(k)$. In this case, the filtering error dynamic system (7) is said to be stochastically stable in the mean square with H∞ performance $\gamma$.
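The asynchronous mode evolution behind (1) and (6) can be illustrated with a short simulation sketch. The transition matrices Pi and Gamma below are hypothetical placeholders (two plant modes, two filter modes), and the assumed timing is that the plant mode jumps first and the filter mode is then drawn conditioned on the new plant mode, consistent with the notation Gamma^{r(k+1)}.

import numpy as np

Pi = np.array([[0.9, 0.1],          # Pr{r(k+1)=j | r(k)=i}
               [0.2, 0.8]])
Gamma = np.array([                  # Gamma[j][p][q] = Pr{sigma(k+1)=q | sigma(k)=p, r(k+1)=j}
    [[0.7, 0.3], [0.4, 0.6]],
    [[0.5, 0.5], [0.1, 0.9]],
])
beta = np.array([0.4, 0.4, 0.2])    # distribution of alpha(k) over {1, 2, 3}

rng = np.random.default_rng(1)
r, sigma = 0, 0
for k in range(20):
    r_next = rng.choice(2, p=Pi[r])                      # plant mode at k+1
    sigma_next = rng.choice(2, p=Gamma[r_next][sigma])   # filter mode observes r(k+1)
    alpha = rng.choice([1, 2, 3], p=beta)                # measurement type at time k
    print(f"k={k:2d}  r={r+1}  sigma={sigma+1}  alpha={alpha}")
    r, sigma = r_next, sigma_next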

III. MAIN RESULTS

In this section, we shall establish our main results based on the LMI approach. To start with, some lemmas that will be used in this paper are introduced.

Lemma 1 (Schur complement): For a given symmetric matrix
\[
S=\begin{bmatrix} S_{11} & S_{12}\\ S_{12}^T & S_{22}\end{bmatrix},
\]
the following two statements are equivalent:
(i) $S<0$;
(ii) $S_{22}<0$, $S_{11}-S_{12}S_{22}^{-1}S_{12}^T<0$.
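As a quick numerical illustration of Lemma 1, the numpy snippet below checks the equivalence on a single randomly generated negative-definite matrix (so only one direction of the equivalence is exercised); the partition sizes are arbitrary.

import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(5, 5))
S = -(M @ M.T + 0.1 * np.eye(5))          # negative definite by construction
S11, S12, S22 = S[:2, :2], S[:2, 2:], S[2:, 2:]

neg = lambda X: np.all(np.linalg.eigvalsh(X) < 0)
schur = S11 - S12 @ np.linalg.solve(S22, S12.T)
print(neg(S), neg(S22) and neg(schur))    # both True, as Lemma 1 asserts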

Lemma 2 ([12]): Let $x=(x_1,x_2,\ldots,x_n)^T\in\mathbb{R}^{\bar n}$ and let $f(x)=(f_1(x_1),f_2(x_2),\ldots,f_n(x_n))^T$ be a continuous nonlinear function satisfying
\[
\lambda_i\leq\frac{f_i(s)}{s}\leq\rho_i,\quad s\neq 0,\ s\in\mathbb{R},\ i=1,2,\ldots,n,
\]
with $\lambda_i$ and $\rho_i$ being constant scalars. Then for any diagonal matrix $G>0$,
\[
\begin{bmatrix} x\\ f(x)\end{bmatrix}^T
\begin{bmatrix} F_1G & -F_2G\\ * & G\end{bmatrix}
\begin{bmatrix} x\\ f(x)\end{bmatrix}\leq 0,
\]
where $F_1=\mathrm{diag}\{\lambda_1\rho_1,\lambda_2\rho_2,\ldots,\lambda_n\rho_n\}$ and $F_2=\mathrm{diag}\big\{\frac{\lambda_1+\rho_1}{2},\frac{\lambda_2+\rho_2}{2},\ldots,\frac{\lambda_n+\rho_n}{2}\big\}$.
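The bound of Lemma 2 can be checked numerically for the activation used later in Section IV, f_i(s) = 0.2 tanh(s), which satisfies the slope condition with lambda_i = 0 and rho_i = 0.2; the diagonal matrix G below is an arbitrary choice.

import numpy as np

n = 3
lam, rho = np.zeros(n), 0.2 * np.ones(n)
F1 = np.diag(lam * rho)
F2 = np.diag((lam + rho) / 2)
G = np.diag([1.0, 2.0, 0.5])                 # any positive-definite diagonal matrix
M = np.block([[F1 @ G, -F2 @ G], [-G @ F2, G]])

rng = np.random.default_rng(3)
worst = -np.inf
for _ in range(1000):
    x = rng.normal(scale=3.0, size=n)
    v = np.concatenate([x, 0.2 * np.tanh(x)])
    worst = max(worst, v @ M @ v)
print("largest quadratic form value:", worst)   # stays <= 0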

The following theorem gives a sufficient condition guaranteeing the stochastic stability of the filtering error dynamics.

Theorem 1: For given positive integers $d_M$ and $d_m$, the error dynamic system (7) with $\upsilon(k)=0$ is stochastically stable in the mean square for any time-varying delay $d(k)$ satisfying $d_m\leq d(k)\leq d_M$ if there exist symmetric positive-definite matrices $P_{ip}$, $Q$, positive-definite diagonal matrices $G$, $L$, and scalars $\varepsilon_{ip}$ such that for any $i\in\mathcal T_1$, $p\in\mathcal T_2$ the following inequality holds:
\[
\Theta=\begin{bmatrix}
\Theta_1+\Theta_2 & \Omega^T & S^T\\
* & -Y & 0\\
* & * & -X_{ip}
\end{bmatrix}<0,\tag{10}
\]


where
\[
\Theta_1=\begin{bmatrix}
-P_{ip} & 0 & 0 & 0 & 2\varepsilon_{ip}H_1C_i^TK^T\\
* & 0 & 0 & 0 & 0\\
* & * & 0 & 0 & 0\\
* & * & * & 0 & 0\\
* & * & * & * & -2\varepsilon_{ip}I
\end{bmatrix},\qquad
\Theta_2=\begin{bmatrix}
\Theta_2^{11} & \Theta_2^{12} & 0 & 0 & 0\\
* & dQ_{22}-L & 0 & 0 & 0\\
* & * & -Q_{11}-F_1G & -Q_{12}+F_2G & 0\\
* & * & * & -Q_{22}-G & 0\\
* & * & * & * & 0
\end{bmatrix},
\]
\[
\Theta_2^{11}=dH_1Q_{11}H_1^T-H_1F_1LH_1^T,\qquad
\Theta_2^{12}=dH_1Q_{12}+H_1F_2L,
\]
\[
X_{ip}=\sum_{j=1}^{T_1}\sum_{q=1}^{T_2}\pi_{ij}\chi^j_{pq}P_{jq},\qquad d=d_M-d_m+1,
\]
\[
F_1=\mathrm{diag}\{\lambda_1\rho_1,\lambda_2\rho_2,\ldots,\lambda_n\rho_n\},\qquad
F_2=\mathrm{diag}\Big\{\frac{\lambda_1+\rho_1}{2},\frac{\lambda_2+\rho_2}{2},\ldots,\frac{\lambda_n+\rho_n}{2}\Big\},
\]
\[
M=H_2B_{fp}K_1C_iH_1^T,\qquad N=H_2B_{fp}C_iH_1^T,
\]
\[
S=\begin{bmatrix} X_{ip}\bar A_{ip} & X_{ip}H_1E_i & 0 & X_{ip}H_1E_{di} & \beta_1X_{ip}H_2B_{fp}\end{bmatrix},\qquad
Y=\mathrm{diag}\{X_{ip},X_{ip},X_{ip}\},
\]
\[
J_1=\begin{bmatrix}\sqrt{\beta_1\beta_3}\,M^TX_{ip} & \sqrt{\beta_2\beta_3}\,N^TX_{ip} & \sqrt{\beta_1\beta_2}\,(M-N)^TX_{ip}\end{bmatrix},
\]
\[
J_2=\begin{bmatrix}\sqrt{\beta_1\beta_3}\,B_{fp}^TH_2^TX_{ip} & 0 & \sqrt{\beta_1\beta_2}\,B_{fp}^TH_2^TX_{ip}\end{bmatrix},
\]
\[
H_3=\begin{bmatrix} I & 0 & 0 & 0 & 0\end{bmatrix},\qquad
H_4=\begin{bmatrix} 0 & 0 & 0 & 0 & I\end{bmatrix},\qquad
\Omega=\begin{bmatrix} J_1\\ J_2\end{bmatrix}^T\begin{bmatrix} H_3\\ H_4\end{bmatrix}.
\]

Proof 1: Before we start, some calculations concerning the random variables need to be done. Using $\delta_1$ to denote $\delta(\alpha(k),1)$ and $\delta_2$ to denote $\delta(\alpha(k),2)$, we have
\[
\Pr\{\delta_1=1\}=\Pr\{\alpha(k)=1\}=\beta_1,\qquad
\Pr\{\delta_1=0\}=1-\Pr\{\alpha(k)=1\}=1-\beta_1,
\]
so that $\mathbb E[\delta_1]=\beta_1$, and similarly $\mathbb E[\delta_2]=\beta_2$. Furthermore,
\[
\mathbb E[\delta_1-\beta_1]=\mathbb E[\delta_2-\beta_2]=0,
\]
\[
\mathbb E[(\delta_1-\beta_1)^2]=\beta_1(1-\beta_1)^2+\beta_1^2(1-\beta_1)=\beta_1(1-\beta_1),
\]
\[
\mathbb E[(\delta_2-\beta_2)^2]=\beta_2(1-\beta_2)^2+\beta_2^2(1-\beta_2)=\beta_2(1-\beta_2),
\]
\[
\mathbb E[(\delta_1-\beta_1)(\delta_2-\beta_2)]
=-\beta_1(1-\beta_1)\beta_2-\beta_2\beta_1(1-\beta_2)+\beta_3\beta_1\beta_2
=-\beta_1\beta_2(2-\beta_1-\beta_2-\beta_3)=-\beta_1\beta_2.\tag{11}
\]
Besides, the Markov chain $\{\sigma(k)\}$ depends on $\{r(k)\}$, while $\{r(k)\}$ does not depend on $\{\sigma(k)\}$. Following Bayes' rule, we have the following joint transition probability:
\[
\begin{aligned}
\Pr\{r(k+1)=j,\sigma(k+1)=q\,|\,r(k)=i,\sigma(k)=p\}
&=\Pr\{\sigma(k+1)=q\,|\,r(k+1)=j,r(k)=i,\sigma(k)=p\}\\
&\quad\times\Pr\{r(k+1)=j\,|\,r(k)=i,\sigma(k)=p\}
=\pi_{ij}\chi^j_{pq}.
\end{aligned}\tag{12}
\]
Now we consider the following Lyapunov–Krasovskii functional:
\[
V(k)=V_1(k)+V_2(k),\tag{13}
\]
where
\[
V_1(k)=\bar x(k)^TP_{r(k)\sigma(k)}\bar x(k),\qquad
V_2(k)=\sum_{\beta=-d_M+1}^{-d_m+1}\ \sum_{\alpha=k-1+\beta}^{k-1}
\begin{bmatrix} x(\alpha)\\ f(x(\alpha))\end{bmatrix}^TQ\begin{bmatrix} x(\alpha)\\ f(x(\alpha))\end{bmatrix},
\]
with $P_{r(k)\sigma(k)}$ and $Q$ being positive-definite matrices. We then calculate the difference of $V(k)$ along the error dynamic system with $\upsilon(k)=0$ and take the mathematical expectation:
\[
\mathbb E[\Delta V_1(k)]=\mathbb E\big[\bar x(k+1)^TP_{r(k+1)\sigma(k+1)}\bar x(k+1)-\bar x(k)^TP_{r(k)\sigma(k)}\bar x(k)\big].\tag{14}
\]
For convenience, let $X_{ip}=\mathbb E\big[P_{r(k+1)\sigma(k+1)}\,|\,r(k)=i,\sigma(k)=p\big]$. Taking the expectation over all possible matrices $P$ at time $k+1$, we have
\[
X_{ip}=\sum_{j=1}^{T_1}\sum_{q=1}^{T_2}\pi_{ij}\chi^j_{pq}P_{jq}.
\]
On the other hand, it follows from (4) that for any scalar $\varepsilon_{ip}>0$,
\[
\varsigma(k)=2\varepsilon_{ip}\phi_s(C_ix(k))^T\big(\phi_s(C_ix(k))-KC_ix(k)\big)\leq 0.\tag{15}
\]
Note that $x(k)=H_1^T\bar x(k)$; then
\[
\begin{aligned}
\bar x(k+1)&=\big(\bar A_{ip}+(\delta_1-\beta_1)H_2B_{fp}K_1C_iH_1^T+(\delta_2-\beta_2)H_2B_{fp}C_iH_1^T\big)\bar x(k)
+\bar B_{ip}\upsilon(k)\\
&\quad+\beta_1H_2B_{fp}\phi_s(C_ix(k))+(\delta_1-\beta_1)H_2B_{fp}\phi_s(C_ix(k))
+H_1E_if(x(k))+H_1E_{di}f(x(k-d(k)))\\
&=\big(\bar A_{ip}+(\delta_1-\beta_1)M+(\delta_2-\beta_2)N\big)\bar x(k)
+\bar B_{ip}\upsilon(k)+\beta_1H_2B_{fp}\phi_s(C_ix(k))\\
&\quad+(\delta_1-\beta_1)H_2B_{fp}\phi_s(C_ix(k))+H_1E_if(x(k))+H_1E_{di}f(x(k-d(k))).
\end{aligned}\tag{16}
\]
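As an aside, the two quantities just prepared, the moment identity (11) and the conditional expectation X_ip, can be illustrated numerically. The sketch below uses hypothetical data (two plant modes, two filter modes, 2x2 blocks P_jq) generated for illustration only.

import numpy as np

rng = np.random.default_rng(4)
T1, T2, n = 2, 2, 2
Pi = np.array([[0.9, 0.1], [0.2, 0.8]])                 # pi_ij
Chi = rng.dirichlet(np.ones(T2), size=(T1, T2))         # Chi[j, p, q] = chi^j_pq

def rand_pd(k):
    A = rng.normal(size=(k, k))
    return A @ A.T + np.eye(k)                          # symmetric positive definite

P = [[rand_pd(n) for _ in range(T2)] for _ in range(T1)]  # P_jq > 0

def X(i, p):
    # X_ip = sum_j sum_q pi_ij * chi^j_pq * P_jq
    return sum(Pi[i, j] * Chi[j, p, q] * P[j][q] for j in range(T1) for q in range(T2))

print(np.linalg.eigvalsh(X(0, 0)))                      # positive eigenvalues, as expected

# Monte Carlo check of (11): E[(delta1-beta1)(delta2-beta2)] = -beta1*beta2.
beta = np.array([0.4, 0.4, 0.2])
alpha = rng.choice([1, 2, 3], size=200_000, p=beta)
d1, d2 = (alpha == 1).astype(float), (alpha == 2).astype(float)
print(np.mean((d1 - beta[0]) * (d2 - beta[1])), -beta[0] * beta[1])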


Recalling that $\beta_1+\beta_2+\beta_3=1$ and using (11), (15) and (16), we have
\[
\begin{aligned}
\mathbb E[\Delta V_1(k)]
&=\mathbb E\big[\bar x(k+1)^TX_{ip}\bar x(k+1)-\bar x(k)^TP_{ip}\bar x(k)\big]\\
&\leq\mathbb E\big[\bar x(k+1)^TX_{ip}\bar x(k+1)-\bar x(k)^TP_{ip}\bar x(k)-\varsigma(k)\big]\\
&=\mathbb E\Big\{\bar x(k)^T\big[\bar A_{ip}^TX_{ip}\bar A_{ip}+\beta_1\beta_3M^TX_{ip}M+\beta_2\beta_3N^TX_{ip}N
+\beta_1\beta_2(M-N)^TX_{ip}(M-N)\big]\bar x(k)\\
&\qquad+\phi_s(C_ix(k))^T\big[(\beta_1^2+\beta_1\beta_3+\beta_1\beta_2)B_{fp}^TH_2^TX_{ip}H_2B_{fp}\big]\phi_s(C_ix(k))\\
&\qquad+2\bar x(k)^T\big[\beta_1\bar A_{ip}^TX_{ip}H_2B_{fp}+\beta_1\beta_3M^TX_{ip}H_2B_{fp}
+\beta_1\beta_2(M-N)^TX_{ip}H_2B_{fp}\big]\phi_s(C_ix(k))\\
&\qquad+2\bar x(k)^T\bar A_{ip}^TX_{ip}H_1E_if(x(k))
+2\bar x(k)^T\bar A_{ip}^TX_{ip}H_1E_{di}f(x(k-d(k)))\\
&\qquad+2f(x(k))^TE_i^TH_1^TX_{ip}\beta_1H_2B_{fp}\phi_s(C_ix(k))
+f(x(k))^TE_i^TH_1^TX_{ip}H_1E_if(x(k))\\
&\qquad+2f(x(k))^TE_i^TH_1^TX_{ip}H_1E_{di}f(x(k-d(k)))
+2f(x(k-d(k)))^TE_{di}^TH_1^TX_{ip}\beta_1H_2B_{fp}\phi_s(C_ix(k))\\
&\qquad+f(x(k-d(k)))^TE_{di}^TH_1^TX_{ip}H_1E_{di}f(x(k-d(k)))
+\eta(k)^T\Theta_1\eta(k)\Big\}\\
&=\mathbb E\big[\eta(k)^T\big(\Omega^TY^{-1}\Omega+S^TX_{ip}^{-1}S+\Theta_1\big)\eta(k)\big],
\end{aligned}
\]
where $\eta(k)=\big[\bar x(k)^T\ \ f(x(k))^T\ \ x(k-d(k))^T\ \ f(x(k-d(k)))^T\ \ \phi_s(C_ix(k))^T\big]^T$.

Next, set $Q=\begin{bmatrix} Q_{11} & Q_{12}\\ * & Q_{22}\end{bmatrix}$; then
\[
\begin{aligned}
\mathbb E[\Delta V_2(k)]
&=\mathbb E\Bigg[d\begin{bmatrix} x(k)\\ f(x(k))\end{bmatrix}^T
\begin{bmatrix} Q_{11} & Q_{12}\\ * & Q_{22}\end{bmatrix}
\begin{bmatrix} x(k)\\ f(x(k))\end{bmatrix}
-\sum_{\alpha=k-d_M}^{k-d_m}\begin{bmatrix} x(\alpha)\\ f(x(\alpha))\end{bmatrix}^T
\begin{bmatrix} Q_{11} & Q_{12}\\ * & Q_{22}\end{bmatrix}
\begin{bmatrix} x(\alpha)\\ f(x(\alpha))\end{bmatrix}\Bigg]\\
&\leq\mathbb E\Bigg[d\begin{bmatrix} \bar x(k)\\ f(x(k))\end{bmatrix}^T
\begin{bmatrix} H_1Q_{11}H_1^T & H_1Q_{12}\\ * & Q_{22}\end{bmatrix}
\begin{bmatrix} \bar x(k)\\ f(x(k))\end{bmatrix}
-\begin{bmatrix} x(k-d(k))\\ f(x(k-d(k)))\end{bmatrix}^T
\begin{bmatrix} Q_{11} & Q_{12}\\ * & Q_{22}\end{bmatrix}
\begin{bmatrix} x(k-d(k))\\ f(x(k-d(k)))\end{bmatrix}\Bigg].
\end{aligned}\tag{17}
\]
Letting $L>0$ be a diagonal matrix, from Lemma 2 we have
\[
\begin{bmatrix} \bar x(k)\\ f(x(k))\end{bmatrix}^T
\begin{bmatrix} H_1F_1LH_1^T & -H_1F_2L\\ * & L\end{bmatrix}
\begin{bmatrix} \bar x(k)\\ f(x(k))\end{bmatrix}\leq 0.\tag{18}
\]
Similarly, for a diagonal matrix $G>0$,
\[
\begin{bmatrix} x(k-d(k))\\ f(x(k-d(k)))\end{bmatrix}^T
\begin{bmatrix} F_1G & -F_2G\\ * & G\end{bmatrix}
\begin{bmatrix} x(k-d(k))\\ f(x(k-d(k)))\end{bmatrix}\leq 0.\tag{19}
\]
Then, combining (17), (18) and (19), we have
\[
\mathbb E[\Delta V_2(k)]\leq\mathbb E\big[\eta(k)^T\Theta_2\eta(k)\big].
\]
Therefore it follows from (10) with the Schur complement that
\[
\mathbb E[\Delta V(k)]=\mathbb E[\Delta V_1(k)+\Delta V_2(k)]
\leq\mathbb E\big[\eta(k)^T\big(\Omega^TY^{-1}\Omega+S^TX_{ip}^{-1}S+\Theta_1+\Theta_2\big)\eta(k)\big]\leq 0.
\]
Let $\mu$ be the smallest eigenvalue of $-\big(\Omega^TY^{-1}\Omega+S^TX_{ip}^{-1}S+\Theta_1+\Theta_2\big)$; then
\[
\mathbb E[V(\infty)-V(0)]=\mathbb E\Bigg[\sum_{k=0}^{\infty}\Delta V(k)\Bigg]
\leq\mathbb E\Bigg[\sum_{k=0}^{\infty}\big(-\mu\,\eta(k)^T\eta(k)\big)\Bigg],
\]
hence
\[
\mathbb E\Bigg[\sum_{k=0}^{\infty}\bar x(k)^T\bar x(k)\Bigg]
\leq\mathbb E\Bigg[\sum_{k=0}^{\infty}\eta(k)^T\eta(k)\Bigg]
\leq\frac{1}{\mu}\mathbb E[V(0)-V(\infty)]\leq\frac{1}{\mu}\mathbb E[V(0)].
\]
By definition, the filtering error dynamic system with $\upsilon(k)=0$ is stochastically stable.

Now we are able to focus on the analysis of the H∞ performance of the filtering process in the following theorem.

Theorem 2: Let the filter parameters $A_f$, $B_f$ and $C_f$ be given and let $\gamma$ be a positive constant. The filtering error dynamic system (7) is stochastically stable in the mean square with H∞ performance $\gamma$ if there exist symmetric positive-definite matrices $P_{ip}$, $Q$, positive-definite diagonal matrices $G$, $L$, and scalars $\varepsilon_{ip}$ such that for any $i\in\mathcal T_1$, $p\in\mathcal T_2$ the following inequality holds:
\[
\Psi=\begin{bmatrix}
\Theta_1+\Theta_2 & 0 & \Omega^T & S^T & W^T\\
* & -\gamma^2I & 0 & \bar B_{ip}^TX_{ip} & 0\\
* & * & -Y & 0 & 0\\
* & * & * & -X_{ip} & 0\\
* & * & * & * & -I
\end{bmatrix}<0,\tag{20}
\]
where $W=\begin{bmatrix}\bar L_{ip} & 0 & 0 & 0 & 0\end{bmatrix}$.

Proof 2: First, it is easy to verify that $\Theta<0$ under the condition $\Psi<0$. Therefore, according to Theorem 1, the filtering error dynamic system (7) with $\upsilon(k)=0$ is stochastically stable in the mean square. We now consider the performance and show that, under the given conditions, the filtering error $\bar z(k)$ satisfies (9). For the H∞ performance analysis, we choose the same Lyapunov–Krasovskii functional and introduce the performance index
\[
J(n)=\sum_{k=0}^{n}\mathbb E\big[\bar z(k)^T\bar z(k)-\gamma^2\upsilon(k)^T\upsilon(k)\big].
\]


Then, under the zero-initial condition, we have
\[
\begin{aligned}
J(n)&=\sum_{k=0}^{n}\mathbb E\big[\bar z(k)^T\bar z(k)-\gamma^2\upsilon(k)^T\upsilon(k)+\Delta V(k)\big]-\mathbb E[V(n+1)]\\
&\leq\sum_{k=0}^{n}\mathbb E\big[\bar x(k)^T\bar L_{ip}^T\bar L_{ip}\bar x(k)-\gamma^2\upsilon(k)^T\upsilon(k)+\Delta V(k)\big]\\
&\leq\sum_{k=0}^{n}\mathbb E\Big[\bar x(k)^T\bar L_{ip}^T\bar L_{ip}\bar x(k)-\gamma^2\upsilon(k)^T\upsilon(k)
+2\bar x(k)^T\bar A_{ip}^TX_{ip}\bar B_{ip}\upsilon(k)
+2\beta_1\phi_s(C_ix(k))^TB_{fp}^TH_2^TX_{ip}\bar B_{ip}\upsilon(k)\\
&\qquad\qquad+2f(x(k))^TE_i^TH_1^TX_{ip}\bar B_{ip}\upsilon(k)
+2f(x(k-d(k)))^TE_{di}^TH_1^TX_{ip}\bar B_{ip}\upsilon(k)
+\upsilon(k)^T\bar B_{ip}^TX_{ip}\bar B_{ip}\upsilon(k)\\
&\qquad\qquad+\eta(k)^T\big(\Omega^TY^{-1}\Omega+S^TX_{ip}^{-1}S+\Theta_1+\Theta_2\big)\eta(k)\Big]\\
&\leq\sum_{k=0}^{n}\mathbb E\Bigg[\tilde\eta(k)^T\Bigg\{\Theta_3
+\begin{bmatrix}\Omega^T\\ 0\end{bmatrix}Y^{-1}\begin{bmatrix}\Omega^T\\ 0\end{bmatrix}^T
+\begin{bmatrix}S^T\\ \bar B_{ip}^TX_{ip}\end{bmatrix}X_{ip}^{-1}\begin{bmatrix}S^T\\ \bar B_{ip}^TX_{ip}\end{bmatrix}^T
+\begin{bmatrix}W^T\\ 0\end{bmatrix}\begin{bmatrix}W^T\\ 0\end{bmatrix}^T\Bigg\}\tilde\eta(k)\Bigg],
\end{aligned}
\]
where
\[
\tilde\eta(k)=\begin{bmatrix}\eta(k)^T & \upsilon(k)^T\end{bmatrix}^T,\qquad
\Theta_3=\begin{bmatrix}\Theta_1+\Theta_2 & 0\\ * & -\gamma^2I\end{bmatrix}.
\]
According to the Schur complement, we obtain from (20) that $J(n)<0$. Letting $n\to\infty$, we have
\[
\sum_{k=0}^{\infty}\mathbb E\big[\|\bar z(k)\|^2\big]<\gamma^2\sum_{k=0}^{\infty}\|\upsilon(k)\|^2,
\]
which completes the proof.

Theorem 3: For a given scalar $\gamma>0$, there exists an asynchronous filter of the form (6) such that the filtering error dynamic system (7) is stochastically stable with H∞ performance $\gamma$ if there exist symmetric positive-definite matrices
\[
P_{ip}=\begin{bmatrix}P_{ip}^1 & P_{ip}^2\\ * & P_{ip}^3\end{bmatrix},\qquad
Q=\begin{bmatrix}Q_{11} & Q_{12}\\ * & Q_{22}\end{bmatrix},
\]
positive-definite diagonal matrices $G$, $L$, matrices $G_{ip}$, $F_{ip}$, $V_p$, $\bar A_{fp}$, $\bar B_{fp}$, $\bar C_{fp}$ and scalars $\varepsilon_{ip}>0$ such that for any $i\in\mathcal T_1$, $p\in\mathcal T_2$ the following holds:
\[
\Psi=\begin{bmatrix}
\Theta_1+\Theta_2 & 0 & Z_1^T & Z_2^T & W^T\\
* & -\gamma^2I & 0 & Z_3 & 0\\
* & * & \bar Y & 0 & 0\\
* & * & * & \bar X_{ip} & 0\\
* & * & * & * & -I
\end{bmatrix}<0,\tag{21}
\]
where
\[
Z_1=\begin{bmatrix}
\sqrt{\beta_1\beta_3}\,\bar B_{fp}K_1C_i & \sqrt{\beta_1\beta_3}\,\bar B_{fp}\\
\sqrt{\beta_1\beta_3}\,\bar B_{fp}K_1C_i & \sqrt{\beta_1\beta_3}\,\bar B_{fp}\\
\sqrt{\beta_2\beta_3}\,\bar B_{fp}C_i & 0\\
\sqrt{\beta_2\beta_3}\,\bar B_{fp}C_i & 0\\
\sqrt{\beta_1\beta_2}\,\bar B_{fp}(K_1C_i-C_i) & \sqrt{\beta_1\beta_2}\,\bar B_{fp}\\
\sqrt{\beta_1\beta_2}\,\bar B_{fp}(K_1C_i-C_i) & \sqrt{\beta_1\beta_2}\,\bar B_{fp}
\end{bmatrix},\qquad
Z_2=\begin{bmatrix}
\Delta_1^T & \bar A_{fp} & G_{ip}E_i & 0 & G_{ip}E_{di} & \beta_1\bar B_{fp}\\
\Delta_2^T & \bar A_{fp} & F_{ip}E_i & 0 & F_{ip}E_{di} & \beta_1\bar B_{fp}
\end{bmatrix},
\]
\[
\Delta_1=A_i^TG_{ip}^T+\beta_1C_i^TK_1^T\bar B_{fp}^T+\beta_2C_i^T\bar B_{fp}^T,\qquad
\Delta_2=A_i^TF_{ip}+\beta_1C_i^TK_1^T\bar B_{fp}^T+\beta_2C_i^T\bar B_{fp}^T,
\]
\[
Z_3=\begin{bmatrix}B_i^TG_{ip}^T+D_i^T\bar B_{fp}^T & B_i^TF_{ip}^T+D_i^T\bar B_{fp}^T\end{bmatrix},\qquad
W=\begin{bmatrix}\bar L_{ip} & 0 & 0 & 0 & 0\end{bmatrix},
\]
\[
\bar X_{ip}=\begin{bmatrix}
X_{ip}^1-G_{ip}-G_{ip}^T & X_{ip}^2-V_p-F_{ip}^T\\
* & X_{ip}^3-V_p-V_p^T
\end{bmatrix},\qquad
\bar Y=\mathrm{diag}\{\bar X_{ip},\bar X_{ip},\bar X_{ip}\}.\tag{22}
\]
Moreover, if the above conditions have feasible solutions, the filter (6) can be obtained by
\[
A_{fp}=V_p^{-1}\bar A_{fp},\qquad B_{fp}=V_p^{-1}\bar B_{fp},\qquad C_{fp}=\bar C_{fp}.
\]

Proof 3: From Theorem 2, we know that there exists an asynchronous filter of the form (6) such that the filtering error dynamics (7) are stochastically stable with H∞ performance $\gamma$ if there exist proper matrices satisfying (20). Now we pre- and post-multiply (20) by $\mathrm{diag}\{I,I,\bar G_{ip}X_{ip}^{-1},\bar G_{ip}X_{ip}^{-1},I\}$ and its transpose, with each $I$ having the same dimension as the corresponding block in (20). Then (20) is transformed into the following inequality:
\[
\bar\Psi=\begin{bmatrix}
\Theta_1+\Theta_2 & 0 & Z_1^T & Z_2^T & W^T\\
* & -\gamma^2I & 0 & Z_3 & 0\\
* & * & \hat Y & 0 & 0\\
* & * & * & -\bar G_{ip}X_{ip}^{-1}\bar G_{ip}^T & 0\\
* & * & * & * & -I
\end{bmatrix}<0,
\]
where $\hat Y=\mathrm{diag}\{-\bar G_{ip}X_{ip}^{-1}\bar G_{ip}^T,\,-\bar G_{ip}X_{ip}^{-1}\bar G_{ip}^T,\,-\bar G_{ip}X_{ip}^{-1}\bar G_{ip}^T\}$. On the other hand, we can easily check that
\[
(X_{ip}-\bar G_{ip})X_{ip}^{-1}(X_{ip}-\bar G_{ip})^T>0,
\]
which is equivalent to
\[
-\bar G_{ip}X_{ip}^{-1}\bar G_{ip}^T<X_{ip}-\bar G_{ip}-\bar G_{ip}^T.
\]
Then we substitute the term $X_{ip}-\bar G_{ip}-\bar G_{ip}^T$ for $-\bar G_{ip}X_{ip}^{-1}\bar G_{ip}^T$ in $\bar\Psi$ to get $\tilde\Psi$. Since $\bar\Psi<\tilde\Psi$, the inequality $\bar\Psi<0$, and hence Theorem 2, will hold if $\tilde\Psi<0$. By choosing
\[
\bar G_{ip}=\begin{bmatrix}G_{ip} & F_{ip}\\ V_p & V_p\end{bmatrix},\qquad
\bar A_{fp}=V_pA_{fp},\qquad \bar B_{fp}=V_pB_{fp},\qquad \bar C_{fp}=C_{fp},\tag{23}
\]
and using $\tilde\Psi<0$, we finally obtain (21). Thus the proof is completed.
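If an SDP solver returns feasible values for the variables of Theorem 3, the filter gains follow from the recovery formulas above. The following minimal sketch shows only that last step; the numerical values below are placeholders standing in for a solver's output, not results from the paper.

import numpy as np

# Placeholder LMI-variable values standing in for a solver's output.
rng = np.random.default_rng(5)
Vp = rng.normal(size=(3, 3)) + 3 * np.eye(3)    # assumed invertible
Abar_fp = rng.normal(size=(3, 3))
Bbar_fp = rng.normal(size=(3, 2))
Cbar_fp = rng.normal(size=(2, 3))

# Recovery step of Theorem 3:  A_fp = Vp^{-1} Abar_fp, B_fp = Vp^{-1} Bbar_fp, C_fp = Cbar_fp.
A_fp = np.linalg.solve(Vp, Abar_fp)
B_fp = np.linalg.solve(Vp, Bbar_fp)
C_fp = Cbar_fp
print(A_fp.shape, B_fp.shape, C_fp.shape)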


Remark 3: The asynchronous H∞ filtering problem for a class of discrete-time Markov jump neural networks is proposed for the first time in the literature. An LMI-based sufficient condition is derived for the existence of asynchronous filters that ensure the mean-square stochastic stability of the resulting error dynamic system and reduce the effect of the disturbance to a prescribed level γ. The analysis of the conservatism of the proposed method will be further studied in the future. The feasibility of the filter design problem can readily be checked through the solvability of an LMI, for instance with the Matlab LMI toolbox. An illustrative example is provided in the next section to show the potential of the proposed method.

IV. NUMERICAL EXAMPLE

In this section, a numerical example is presented to demonstrate the usefulness of the asynchronous filter design method developed in this paper. Consider the discrete-time Markov jump neural network (1) with the following parameters:
\[
A_1=\begin{bmatrix}-0.6&0&0\\0&-0.2&0\\0&0&-0.6\end{bmatrix},\quad
B_1=\begin{bmatrix}-0.8\\0.3\\0.9\end{bmatrix},\quad
A_2=\begin{bmatrix}-0.2&0&0\\0&-0.8&0\\0&0&-0.2\end{bmatrix},\quad
B_2=\begin{bmatrix}0.9\\-0.3\\-0.6\end{bmatrix},
\]
\[
E_1=\begin{bmatrix}0.1&0.1&0\\0.1&0.2&0\\0.1&0.2&0.1\end{bmatrix},\quad
E_2=\begin{bmatrix}0.1&0.1&0.4\\0.1&0.6&0.2\\0.3&0.6&0.6\end{bmatrix},\quad
E_{d1}=\begin{bmatrix}0.1&0&0.1\\0.1&0.2&0\\0.1&0&0.1\end{bmatrix},\quad
E_{d2}=\begin{bmatrix}0.1&0&0.4\\0.1&0.6&0.2\\0.3&0&0.6\end{bmatrix},
\]
\[
C_1=\begin{bmatrix}-0.1&0.3&-0.1\\0.7&0.4&0.5\end{bmatrix},\quad
D_1=\begin{bmatrix}0.3\\-0.3\end{bmatrix},\quad
C_2=\begin{bmatrix}-0.2&-0.1&-0.2\\0.5&0.2&0.2\end{bmatrix},\quad
D_2=\begin{bmatrix}0.1\\0.5\end{bmatrix},
\]
\[
L_1=\begin{bmatrix}0&-0.1&-0.1\\0.15&-0.1&0.2\end{bmatrix},\quad
L_2=\begin{bmatrix}0.2&-0.15&0\\0.1&0.1&-0.2\end{bmatrix},
\]
\[
d_m=1,\quad d_M=3,\quad \lambda_i=0,\quad \rho_i=0.2.
\]
The activation function is $f(x)=0.2\tanh(x)$, and the randomly occurring nonlinearity is chosen as
\[
\phi(\eta)=\frac{K_1+K_2}{2}\eta+\frac{K_2-K_1}{2}\sin(\eta),
\]
where
\[
K_1=\begin{bmatrix}0.6&0\\0&0.7\end{bmatrix},\qquad
K_2=\begin{bmatrix}0.8&0\\0&0.8\end{bmatrix}.
\]
For the random variable α(k), β1 = β2 = 0.4 and β3 = 0.2. In this example, the transition matrix of the Markov chain r(k) is taken as
\[
\Pi=\begin{bmatrix}0.9&0.1\\0.2&0.8\end{bmatrix},
\]
and the transition matrices of the Markov chain σ(k) are chosen as
\[
\Gamma^1=\begin{bmatrix}0.1&0.3&0.6\\0.8&0.1&0.1\\0.2&0.6&0.2\end{bmatrix},\qquad
\Gamma^2=\begin{bmatrix}0.4&0.2&0.2\\0.3&0.5&0.2\\0.6&0.2&0.2\end{bmatrix},
\]
which implies that the asynchronous filter (6) has three modes. The objective here is to design an asynchronous filter (6) such that the resulting filtering error dynamics are stochastically stable with a guaranteed H∞ performance index. Taking γ = 0.6 and the above parameters, we solve the LMI and obtain
\[
A_{f1}=\begin{bmatrix}-0.5885&-0.1060&0.1640\\0.2553&-0.6287&0.2228\\0.6519&0.1768&-0.1489\end{bmatrix},\quad
B_{f1}=\begin{bmatrix}0.6520&-0.6140\\-0.1597&0.7833\\-1.1069&0.9649\end{bmatrix},\quad
C_{f1}=\begin{bmatrix}-0.1014&0.1616&0.0124\\-0.0485&0.0171&-0.0507\end{bmatrix},
\]
\[
A_{f2}=\begin{bmatrix}-0.7610&-0.0850&0.0090\\0.3443&-0.6416&0.3368\\0.7669&0.1293&-0.0565\end{bmatrix},\quad
B_{f2}=\begin{bmatrix}0.6407&-0.5119\\-0.2320&0.7927\\-1.2929&0.6971\end{bmatrix},\quad
C_{f2}=\begin{bmatrix}-0.0722&0.1751&0.0252\\-0.0351&0.0293&-0.0565\end{bmatrix},
\]
\[
A_{f3}=\begin{bmatrix}-0.7610&-0.0850&0.0090\\0.3443&-0.6416&0.3368\\0.7669&0.1293&-0.0565\end{bmatrix},\quad
B_{f3}=\begin{bmatrix}0.7959&0.8366\\-0.4438&0.6411\\-1.3735&0.8463\end{bmatrix},\quad
C_{f3}=\begin{bmatrix}-0.1134&0.1650&-0.0044\\-0.0634&0.0339&-0.0829\end{bmatrix}.
\]
We then use the designed filter to illustrate the efficiency of the proposed design method. Take the initial state $x(0)=\begin{bmatrix}-0.5&1&0\end{bmatrix}^T$ and the exogenous disturbance input $\upsilon(k)=\frac{\sin^2 k}{1+k^2}$. The neuron states are shown in Fig. 1, and the state errors and the filtering errors are presented in Fig. 2 and Fig. 3, respectively. As the figures show, they all settle within four or five steps and approach the equilibrium at zero.

Fig. 1. The state of the neural networks x(k).

Fig. 2. The filtering state error x̂(k) − x(k).

Fig. 3. The filtering error for the estimated signal z̄(k).

V. CONCLUSION

In this paper, we have studied the filtering problem for a class of discrete-time Markov jump neural networks. The neural networks under study involve time-varying delays,

stochastic disturbances, randomly occurring nonlinearities and incomplete measurements. An effective linear matrix inequality approach has been proposed to design asynchronous filters such that the filtering error dynamics are stochastically stable in the mean square and a prescribed H∞ disturbance attenuation level is guaranteed. We first investigated sufficient conditions for the filtering error dynamics to be stochastically stable and then derived the explicit expression of the filter parameters. The usefulness and effectiveness of the proposed design method have been demonstrated by a numerical example. Finally, it should be mentioned that it is possible to extend our main results to more complex systems under constrained measurements [36]–[38], multiple time-varying delays [39] or singular models [40], [41], and the corresponding results will appear in the near future.

REFERENCES

[1] H. Zeng, Y. He, M. Wu, and H. Xiao, “Improved conditions for passivity of neural networks with a time-varying delay,” IEEE Transactions on Cybernetics, vol. 44, no. 6, pp. 785–792, 2014.
[2] H. Zeng, Y. He, M. Wu, and C. Zhang, “Complete delay-decomposing approach to asymptotic stability for neural networks with time-varying delays,” IEEE Transactions on Neural Networks, vol. 22, no. 5, pp. 806–812, 2014.

[3] J.-N. Li and L.-S. Li, “Mean-square exponential stability for stochastic discrete-time recurrent neural networks with mixed time delays,” Neurocomputing, vol. 151, pp. 790–797, 2015.
[4] Z. Wu, H. Su, J. Chu, and W. Zhou, “Improved result on stability analysis of discrete stochastic neural networks with time delay,” Physics Letters A, vol. 373, no. 17, pp. 1546–1552, 2009.
[5] Z. Wu, H. Su, J. Chu, and W. Zhou, “Improved delay-dependent stability condition of discrete recurrent neural networks with time-varying delays,” IEEE Transactions on Neural Networks, vol. 21, no. 4, pp. 692–697, 2010.
[6] Z. Wu, P. Shi, H. Su, and J. Chu, “Delay-dependent exponential stability analysis for discrete-time switched neural networks with time-varying delay,” Neurocomputing, vol. 74, no. 10, pp. 1626–1631, 2011.
[7] Y. Tang, H. Gao, and K. Juergen, “Distributed robust synchronization of dynamical networks with stochastic coupling,” IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 61, no. 5, pp. 1508–1519, 2014.
[8] Y. Tang and W. Wai, “Distributed synchronization of coupled neural networks via randomly occurring control,” IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 3, pp. 435–447, 2013.
[9] Y. Tang, H. Gao, and J. Kurths, “Multiobjective identification of controlling areas in neuronal networks,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 10, no. 3, pp. 708–720, 2013.
[10] Y. Tang, Z. Wang, H. Gao, H. Qiao, and J. Kurths, “On controllability of neuronal networks with constraints on the average of control gains,” IEEE Transactions on Cybernetics, vol. 44, no. 12, pp. 2670–2681, 2014.
[11] Y. Tang, F. Qian, H. Gao, and J. Kurths, “Synchronization in complex networks and its application-a survey of recent advances and challenges,” Annual Reviews in Control, vol. 38, no. 2, pp. 184–198, 2014.
[12] Y. Liu, Z. Wang, J. Liang, and X. Liu, “Stability and synchronization of discrete-time Markovian jumping neural networks with mixed mode-dependent time delays,” IEEE Transactions on Neural Networks, vol. 20, pp. 1102–1116, 2009.
[13] M. Casey, “The dynamics of discrete-time computation with application to recurrent neural networks and finite state machine extraction,” Neurocomputing, vol. 8, no. 6, pp. 1135–1178, 1996.
[14] T. Shi, H. Su, and J. Chu, “Robust H∞ output feedback control for Markovian jump systems with actuator saturation,” Optimal Control Applications and Methods, vol. 33, no. 6, pp. 676–695, 2012.
[15] Q. Zhu and J. Cao, “Stability analysis of Markovian jump stochastic BAM neural networks with impulse control and mixed time delays,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 3, pp. 467–479, 2012.
[16] Z. Wu, P. Shi, H. Su, and J. Chu, “Passivity analysis for discrete-time stochastic Markovian jump neural networks with mixed time delays,” IEEE Transactions on Neural Networks, vol. 22, no. 10, pp. 1566–1574, 2011.
[17] H. Huang, D. Ho, and Y. Qu, “Robust stability of stochastic delayed additive neural networks with Markovian switching,” Neural Networks, vol. 20, no. 7, pp. 799–809, 2007.
[18] S. He and F. Liu, “Stochastic finite-time boundedness of Markovian jumping neural network with uncertain transition probabilities,” Applied Mathematical Modelling, vol. 35, no. 6, pp. 2631–2638, 2011.
[19] S. He and F. Liu, “Finite-time boundedness of uncertain time-delayed neural network with Markovian jumping parameters,” Neurocomputing, vol. 103, pp. 87–92, 2013.
[20] Z. Wang, Y. Liu, X. Liu, and Y. Shi, “Robust state estimation for discrete-time stochastic neural networks with probabilistic measurement delays,” Neurocomputing, vol. 74, pp. 256–264, 2010.
[21] Y. Liu, Z. Wang, and X. Liu, “State estimation for discrete-time neural networks with Markov-mode-dependent lower and upper bounds on the distributed delays,” Neural Processing Letters, vol. 36, pp. 1–19, 2012.
[22] D. Zhang and L. Yu, “Exponential state estimation for Markovian jumping neural networks with time-varying discrete and distributed delays,” Neural Networks, vol. 35, pp. 103–111, 2012.
[23] S. He and F. Liu, “Finite-time fuzzy control of nonlinear jump systems with time delays via dynamic observer-based state feedback,” IEEE Transactions on Fuzzy Systems, vol. 20, no. 4, pp. 605–614, 2012.
[24] R. Lu, Y. Xu, A. Xue, and J. Zheng, “Networked control with state reset and quantized measurements: Observer-based case,” IEEE Transactions on Industrial Electronics, vol. 60, pp. 5206–5213, 2013.
[25] R. Lu, Y. Xu, and A. Xue, “H∞ filtering for singular systems with communication delays,” Signal Processing, vol. 90, pp. 1240–1248, 2010.


[26] R. Lu, X. Zhou, F. Wu, and A. Xue, “Quantized H∞ output feedback control for linear discrete-time systems,” Journal of the Franklin Institute, vol. 350, pp. 2096–2108, 2013.
[27] R. Lu, H. Li, and Y. Zhu, “Quantized H∞ filtering for singular time-varying delay systems with unreliable communication channel,” Circuits, Systems, and Signal Processing, vol. 31, pp. 521–538, 2012.
[28] R. Lu, H. Li, A. Xue, J. Zheng, and Q. She, “Quantized H∞ filtering for different communication channels,” Circuits, Systems, and Signal Processing, vol. 31, pp. 501–519, 2012.
[29] C. de Souza, A. Trofino, and K. Barbosa, “Mode-independent H∞ filters for Markovian jump linear systems,” IEEE Transactions on Automatic Control, vol. 51, no. 11, pp. 1837–1841, 2006.
[30] A. Goncalves, A. Fioravanti, and J. Geromel, “H∞ filtering of discrete-time Markov jump linear systems through linear matrix inequalities,” IEEE Transactions on Automatic Control, vol. 54, pp. 1347–1351, 2009.
[31] O. Costa and G. Benites, “Robust mode-independent filtering for discrete-time Markov jump linear systems with multiplicative noises,” International Journal of Control, vol. 86, no. 5, pp. 779–793, 2013.
[32] Z. Wu, P. Shi, H. Su, and J. Chu, “Asynchronous l2−l∞ filtering for discrete-time stochastic Markov jump systems with randomly occurred sensor nonlinearities,” Automatica, vol. 50, pp. 180–186, 2013.
[33] G. Wei, Z. Wang, H. Shu, and J. Fang, “A delay-dependent approach to H∞ filtering for stochastic delayed jumping systems with sensor nonlinearities,” International Journal of Control, vol. 80, no. 6, pp. 885–897, 2007.
[34] B. Shen, Z. Wang, D. Ding, and H. Shu, “H∞ state estimation for complex networks with uncertain inner coupling and incomplete measurements,” IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 12, pp. 2027–2037, 2013.
[35] L. Zhang, “H∞ estimation for discrete-time piecewise homogeneous Markov jump linear systems,” Automatica, vol. 45, pp. 2570–2576, 2009.
[36] J.-N. Li, Y.-J. Pan, H. Su, and C. Wen, “Stochastic reliable control of a class of networked control systems with actuator faults and input saturation,” International Journal of Control, Automation and Systems, vol. 12, no. 3, pp. 564–571, 2014.
[37] T. Shi, “Finite-time control of linear systems under time-varying sampling,” Neurocomputing, vol. 151, pp. 1327–1331, 2015.
[38] T. Shi and H. Su, “Sampled-data MPC for LPV systems with input saturation,” IET Control Theory & Applications, vol. 8, no. 17, pp. 1781–1788, 2014.
[39] J.-N. Li, Y. Zhang, and Y.-J. Pan, “Mean-square exponential stability and stabilization of stochastic singular systems with multiple time-varying delays,” Circuits, Systems, and Signal Processing, DOI: 10.1007/s0003401409893-3.
[40] Z. Wu, H. Su, and J. Chu, “Robust stabilization for uncertain discrete singular systems with state delay,” International Journal of Robust and Nonlinear Control, vol. 18, no. 16, pp. 1532–1550, 2008.
[41] Z. Wu, H. Su, and J. Chu, “H∞ filtering for singular systems with time-varying delay,” International Journal of Robust and Nonlinear Control, vol. 20, no. 11, pp. 1269–1284, 2010.