Reliable filter design for discrete-time neural networks with Markovian jumping parameters and time-varying delay


Journal Pre-proof

Weifeng Xia, Shengyuan Xu, Junwei Lu, Zhengqiang Zhang, Yuming Chu

PII: S0016-0032(20)30123-X
DOI: https://doi.org/10.1016/j.jfranklin.2020.02.039
Reference: FI 4451
To appear in: Journal of the Franklin Institute
Received date: 13 February 2019
Revised date: 30 December 2019
Accepted date: 18 February 2020

Please cite this article as: Weifeng Xia, Shengyuan Xu, Junwei Lu, Zhengqiang Zhang, Yuming Chu, Reliable filter design for discrete-time neural networks with Markovian jumping parameters and time-varying delay, Journal of the Franklin Institute (2020), doi: https://doi.org/10.1016/j.jfranklin.2020.02.039

© 2020 Published by Elsevier Ltd on behalf of The Franklin Institute.

Weifeng Xia 1,2, Shengyuan Xu 1,†, Junwei Lu 3, Zhengqiang Zhang 4, Yuming Chu 5

1 School of Automation, Nanjing University of Science and Technology, Nanjing 210094, China
2 School of Engineering, Huzhou University, Huzhou, Zhejiang 313000, China
3 School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing 210023, China
4 School of Electrical Engineering and Automation, Qufu Normal University, Rizhao 276826, China
5 School of Science, Huzhou Teachers College, Huzhou, Zhejiang 313000, China

Abstract: This paper considers the problem of reliable filter design for discrete-time neural networks subject to Markovian jumping parameters and a time-varying delay. First, based on a matrix inequality, a new sufficient condition is established that guarantees the existence of a reliable filter such that the resulting filtering error system is stochastically stable and extended dissipative. Second, a less conservative stability criterion for neural networks is proposed. Then, the solvability condition for the filter design problem is given in terms of linear matrix inequalities (LMIs). Finally, three numerical examples are given to illustrate the effectiveness and advantages of the proposed filter design scheme.

Keywords: Neural networks, Markovian jump systems, filter, extended dissipative.

1 Introduction

Over the past several decades, neural networks have attracted considerable attention because of their extensive applications in fields such as image restoration [1], robotic control [2], system recognition [3], and optimization [4]. It is widely known that time delays arise commonly in the implementation of artificial neural networks and can render a neural network unstable [5–9]. Therefore, a great number of stability criteria for delayed neural networks have been reported in [10–17]. As is well known, a key issue when using the Lyapunov function method is to bound the cross terms. To address this issue, the Jensen inequality was introduced in [18], and stability criteria for delayed neural networks were obtained in [10, 11, 16]. Later, a more accurate summation/integral inequality, the Wirtinger-based inequality [19], was applied to the stability analysis of delayed neural networks in [12, 13, 17]. Furthermore, the reciprocally convex inequality [20] has played an important role in handling the time-varying delay when the Jensen or Wirtinger-based inequality is applied to deduce stability conditions. Very recently, an extended reciprocally convex inequality was proposed in [21], and the stability of discrete-time neural networks was analyzed based on this inequality. It should be mentioned that the stability criterion developed in [21] is less conservative than those in [14, 15]. Nevertheless, it should be noted that the results obtained in [14, 15, 21] are only sufficient conditions, and thus they

† Corresponding author. E-mail address: [email protected].


could be further improved. The main difficulty is how to deal with the time-varying delay term so as to obtain stability criteria that are less conservative than the existing results. This is the first motivation of this study.

On the other hand, it is well recognized that full information about the system states is often hard to obtain. Thus, it is necessary to estimate the states of a given system from the available measurement output. Filter design has proved to be an effective method for this problem: its main idea is to estimate the state variables of a given system using the corrupted measurement output. As a result, considerable attention has been devoted to the filter design problem for neural networks [22–25]. Among them, the H∞ filter design problem for neural networks was discussed in [22, 23]. The robust passive filtering problem was investigated in [24] for uncertain neutral-type neural networks. In [25], an L2 − L∞ filter was designed for T-S fuzzy neural networks with time-varying delay. In addition, the filtering problem for delayed neural networks with Markovian jumping parameters was also investigated in [26–28]. It is noted that the H∞ performance and passivity are special cases of dissipativity [29, 30]; unfortunately, the L2 − L∞ performance is not covered by dissipativity. Recently, extended dissipativity, which covers both dissipativity and the L2 − L∞ performance, was introduced in [31], making it possible to investigate dissipative and L2 − L∞ performance analysis in a unified framework. Since then, many results on extended dissipative controller or filter design for various dynamic systems have been reported in [32–34]. Although an extended dissipative filter was designed for neural networks with Markovian jumping parameters in [28], time delay was not taken into account.
To the best of our knowledge, the problem of extended dissipative filter design for discrete-time neural networks with Markovian jumping parameters and time-varying delay has not been fully studied yet and remains open. It should also be mentioned that the aforementioned filter design results were obtained under the assumption that the communication between the sensor and the filter is perfect, that is, the measurement output signals from the sensor are always received successfully by the filter. However, in most practical scenarios this assumption does not hold, since unexpected communication delays, disturbances, and actuator/sensor failures may affect the control/filtering scheme or degrade its performance [35, 36]. Therefore, to preserve the reliability of dynamic systems, the design of reliable controllers or filters is of both theoretical and practical importance. For neural networks, a reliable controller was designed in [37]; however, the reliable filtering problem for neural networks has not been fully studied. This further motivates the present research.

Motivated by the above discussion, this paper deals with reliable filter design for discrete-time neural networks in the presence of Markovian jumping parameters and time-varying delay. The main contributions lie in three aspects. First, we make the first attempt to design a reliable filter for the underlying system such that the filtering error system is stochastically stable and extended dissipative. Second, by employing a novel matrix inequality, a new stability condition for delayed neural networks is established, which is less conservative than some existing results. Third, the performance index considered in this paper is extended dissipativity; the merit of this general performance index is that the l2 − l∞ performance and dissipativity can be analyzed in a unified framework.

The rest of the paper is structured as follows. Section 2 formulates the problem. In Section 3, the reliable filter design scheme for the underlying system is developed. Examples and simulations are provided in Section 4 to illustrate the advantages and effectiveness of the proposed schemes. A conclusion is given in Section 5.

Notations. Throughout this paper, for a matrix A, the notation A > 0 means that A is symmetric and positive definite. The n-dimensional Euclidean space and the set of n × m real matrices are denoted by R^n and R^{n×m}, respectively. The n × n identity matrix is denoted by I_n; the n × n and n × m null matrices are denoted by 0_n and 0_{n,m}, respectively. l2[0, ∞) is the space of square summable infinite sequences; ‖·‖ denotes the spectral norm of a matrix; sym{A} stands for A + A^T; diag(·) denotes a block-diagonal matrix; ∗ denotes the symmetric elements of a symmetric matrix. Moreover, {Ω, F, P} is a complete probability space, and E denotes the mathematical expectation operator.

2 Problem statement and preliminaries

For a given probability space {Ω, F, P}, consider a class of neural networks with Markovian jumping parameters and a time-varying delay:

x(k + 1) = A(θ(k))x(k) + B(θ(k))f(x(k)) + B_d(θ(k))f(x(k − d(k))) + D(θ(k))ω(k)
y(k) = C(θ(k))x(k)
z(k) = L(θ(k))x(k)
x(k) = ϕ(k),  k ∈ {−d2, . . . , 0}        (1)

where x(k) ∈ R^n is the neural state vector; y(k) ∈ R^p is the measurement output; z(k) ∈ R^q is the linear combination of the system states to be estimated; ω(k) ∈ R^r is the disturbance input, which belongs to l2[0, ∞); f(x(k)) = [f1(x1(k)), f2(x2(k)), . . . , fn(xn(k))]^T ∈ R^n denotes the neural activation function; and ϕ(k) is an initial condition. The process {θ(k)} is a discrete-time Markov chain with finite state space L = {1, 2, . . . , N} and transition probability matrix Π = [πij]_{N×N} given by

Pr{θ(k + 1) = j | θ(k) = i} = πij

where 0 ≤ πij ≤ 1 for all i, j ∈ L and Σ_{j=1}^{N} πij = 1 for all i ∈ L. The matrices A(θ(k)), B(θ(k)), B_d(θ(k)), D(θ(k)), C(θ(k)) and L(θ(k)) are mode-dependent, known and real. To avoid unnecessarily complicated notation, for each θ(k) = i ∈ L, A(θ(k)), B(θ(k)), B_d(θ(k)), D(θ(k)), C(θ(k)) and L(θ(k)) are abbreviated as Ai, Bi, Bdi, Di, Ci and Li, respectively. The positive integer d(k) denotes the time-varying delay and satisfies d1 ≤ d(k) ≤ d2, where d1, d2 are known positive integers and d12 := d2 − d1. The activation function f(·) is assumed to be continuous and bounded, and to satisfy

γi^− ≤ (fi(x) − fi(y)) / (x − y) ≤ γi^+,  i = 1, 2, . . . , n,        (2)

where fi(0) = 0, x, y ∈ R, x ≠ y, and γi^−, γi^+ are known real scalars.

In this paper, we are interested in constructing the following filter:

x_f(k + 1) = A_f(θ(k))x_f(k) + B_f(θ(k))y_f(k)
z_f(k) = C_f(θ(k))x_f(k)        (3)

where x_f(k) ∈ R^n is the filter state; z_f(k) ∈ R^q is an estimate of z(k); A_f(θ(k)), B_f(θ(k)) and C_f(θ(k)) are filter matrices to be designed; and y_f(k) is the signal from the sensor, which may be faulty. For each θ(k) = i ∈ L, the failure model adopted here is

y_f(k) = Hi y(k)        (4)

where Hi is the sensor fault matrix of the i-th subsystem, defined as

Hi = diag(h_i1, h_i2, . . . , h_ip),  0 ≤ h̲_ij ≤ h_ij ≤ h̄_ij ≤ 1,  j = 1, 2, . . . , p,

with h̲_ij and h̄_ij known lower and upper bounds. Then, we define the matrices

H_0i = diag( (h̲_i1 + h̄_i1)/2, . . . , (h̲_ip + h̄_ip)/2 ),
H_1i = diag( (h̄_i1 − h̲_i1)/2, . . . , (h̄_ip − h̲_ip)/2 ).

It can be verified that

Hi = H_0i + ∆Hi = H_0i + diag(δ_i1, . . . , δ_ip)        (5)

where

|δ_ij| ≤ (h̄_ij − h̲_ij)/2,  j = 1, 2, . . . , p.        (6)
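For a concrete feel for the fault model (4)-(6), the following sketch (NumPy; the fault bounds are illustrative values loosely patterned on Example 2, not data taken from the paper) builds H_0i and H_1i from the bounds and checks the decomposition (5) and the bound (6) for a sampled admissible fault matrix:

```python
import numpy as np

# Hypothetical per-channel sensor fault bounds (lower, upper) for p = 2 channels.
h_lo = np.array([0.6, 0.7])
h_hi = np.array([0.8, 0.9])

# Nominal and radius matrices from (5)-(6): H = H0 + Delta, |delta_j| <= H1[j, j].
H0 = np.diag((h_lo + h_hi) / 2.0)
H1 = np.diag((h_hi - h_lo) / 2.0)

rng = np.random.default_rng(0)
# Sample an admissible fault matrix with h_lo <= h_j <= h_hi.
h = rng.uniform(h_lo, h_hi)
H = np.diag(h)
Delta = H - H0  # diag(delta_1, ..., delta_p)

# The decomposition (5) holds and the deviations obey the bound (6).
assert np.all(np.abs(np.diag(Delta)) <= np.diag(H1) + 1e-12)
```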

Let η(k) = col{x(k), x_f(k)} and e(k) = z(k) − z_f(k). The filtering error system obtained by interconnecting (1), (3) and (4) is

η(k + 1) = Ãi η(k) + B̃i f(x(k)) + B̃di f(x(k − d(k))) + D̃i ω(k)
e(k) = C̃i η(k)        (7)

where

Ãi = [ Ai  0 ; B_fi Hi Ci  A_fi ],  B̃i = [ Bi ; 0 ],  B̃di = [ Bdi ; 0 ],  D̃i = [ Di ; 0 ],  C̃i = [ Li  −C_fi ].

In this paper, we adopt the following notions of stochastic stability and extended dissipativity for system (7).

Definition 1 ([38]) The filtering error system (7) with ω(k) = 0 is said to be stochastically stable if, for every initial condition η(0) ∈ R^{2n} and every initial mode θ(0) ∈ L,

E{ Σ_{k=0}^{∞} η(k)^T η(k) | η(0), θ(0) } < ∞.
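Definition 1 can be illustrated empirically. The sketch below (all matrices and the transition matrix are illustrative assumptions, not taken from the paper's examples) simulates a two-mode jump system η(k+1) = Ã_θ(k) η(k), without delay or disturbance, and averages the accumulated energy Σ_k η(k)^T η(k) over sample paths; for a stochastically stable system this average stays bounded as the horizon grows:

```python
import numpy as np

# Toy 2-mode Markov jump linear system; both mode matrices are Schur stable.
A = {0: np.array([[0.5, 0.1], [0.0, 0.4]]),
     1: np.array([[0.3, 0.0], [0.2, 0.6]])}
Pi = np.array([[0.8, 0.2],
               [0.4, 0.6]])  # transition probability matrix, rows sum to 1

rng = np.random.default_rng(1)

def sample_energy(steps=200):
    """One sample path of sum_k eta(k)^T eta(k)."""
    theta, eta = 0, np.array([1.0, -1.0])
    total = 0.0
    for _ in range(steps):
        total += eta @ eta
        eta = A[theta] @ eta
        theta = rng.choice(2, p=Pi[theta])
    return total

# Averaging over sample paths approximates E{ sum_k eta^T eta }; for this
# stable toy system the average stays bounded as the horizon grows.
energies = [sample_energy() for _ in range(50)]
print(np.mean(energies))
```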

Definition 2 ([31, 32]) For given matrices Υ1 ≤ 0, Υ3 > 0, Υ4 ≥ 0 and any matrix Υ2 satisfying (‖Υ1‖ + ‖Υ2‖)‖Υ4‖ = 0, the filtering error system (7) is said to be extended dissipative if the following inequality holds for the zero initial condition and any T ≥ 0:

E{ Σ_{k=0}^{T} ( e(k)^T Υ1 e(k) + 2 e(k)^T Υ2 ω(k) + ω(k)^T Υ3 ω(k) ) } ≥ sup_{0≤k≤T} E{ e(k)^T Υ4 e(k) }.        (8)

Remark 1 The extended dissipativity reduces to the l2 − l∞ performance and to dissipativity by tuning the parameters Υ1, Υ2, Υ3 and Υ4. For instance, when Υ1 = 0, Υ2 = 0, Υ3 = γ²I and Υ4 = I, inequality (8) reduces to the l2 − l∞ performance; when Υ1 = X, Υ2 = Y, Υ3 = Z and Υ4 = 0, inequality (8) becomes (X, Y, Z)-dissipativity.

It follows from the conditions of Definition 2 that there always exist matrices Υ̃1 ≥ 0 and Υ̃4 ≥ 0 such that Υ1 = −Υ̃1^T Υ̃1 and Υ4 = Υ̃4^T Υ̃4. The main goal of this paper is to design a filter of the form (3) such that the following two conditions are fulfilled:

(1) The filtering error system (7) with ω(k) = 0 is stochastically stable in the sense of Definition 1.
(2) Under the zero initial condition, for any nonzero ω(k) ∈ l2[0, ∞), the filtering error system (7) is extended dissipative in the sense of Definition 2.

Now, we introduce the following lemmas, which play a pivotal role in the proof of our main results.

Lemma 1 ([19]) For a given positive-definite matrix R ∈ R^{n×n} and integers k1, k2 satisfying k2 > k1, the following inequality holds:

Σ_{k=k1}^{k2−1} δ(k)^T R δ(k) ≥ (1/(k2 − k1)) [ α ; β ]^T diag(R, 3R) [ α ; β ]

where δ(k) = x(k + 1) − x(k), α = x(k2) − x(k1), and β = x(k2) + x(k1) − (2/(k2 − k1 + 1)) Σ_{k=k1}^{k2} x(k).

Lemma 2 ([39]) For a given positive-definite matrix R ∈ R^{n×n} and any ε ∈ (0, 1), there exist matrices M1, M2 ∈ R^{2n×n} such that the following inequality holds:

[ (1/ε)R  0 ; ∗  (1/(1−ε))R ] ≥ sym{M1 [I_n 0_n] + M2 [0_n I_n]} − ε M1 R^{-1} M1^T − (1 − ε) M2 R^{-1} M2^T.        (9)

Lemma 3 ([40]) Let X and Y be real constant matrices. Then, for any scalar ε > 0, the following inequality holds:

X^T Y + Y^T X ≤ ε X^T X + ε^{-1} Y^T Y.

Lemma 4 ([41]) For any constant matrix X and any Y > 0, the following inequality holds:

−X^T Y^{-1} X ≤ Y − X − X^T.
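Lemma 1 (the discrete Wirtinger-based summation inequality) is easy to spot-check numerically; the sketch below verifies the bound on random data (dimensions, horizon and weight matrix are arbitrary choices):

```python
import numpy as np

# Numerical spot-check of Lemma 1 on random data.
rng = np.random.default_rng(2)
n, k1, k2 = 3, 0, 7

M = rng.standard_normal((n, n))
R = M @ M.T + n * np.eye(n)              # positive-definite weight matrix
x = rng.standard_normal((k2 + 2, n))     # x(0), ..., x(k2+1)

delta = x[1:] - x[:-1]                   # delta(k) = x(k+1) - x(k)
lhs = sum(delta[k] @ R @ delta[k] for k in range(k1, k2))

alpha = x[k2] - x[k1]
beta = x[k2] + x[k1] - (2.0 / (k2 - k1 + 1)) * x[k1:k2 + 1].sum(axis=0)
v = np.concatenate([alpha, beta])
Rbar = np.block([[R, np.zeros((n, n))], [np.zeros((n, n)), 3 * R]])
rhs = (v @ Rbar @ v) / (k2 - k1)

assert lhs >= rhs - 1e-9                 # the lemma's lower bound holds
```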

Remark 2 It is noted that inequality (9) includes the reciprocally convex inequality as a special case [39]. Therefore, one can expect that stability conditions obtained via inequality (9) will be less conservative than those derived using the reciprocally convex inequality.
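Lemmas 3 and 4 are completing-the-square bounds and can likewise be spot-checked numerically (random matrices, sizes arbitrary):

```python
import numpy as np

# Quick numerical checks of Lemmas 3 and 4.
rng = np.random.default_rng(3)
n = 4
X = rng.standard_normal((n, n))
Yg = rng.standard_normal((n, n))

def is_psd(S, tol=1e-9):
    return np.min(np.linalg.eigvalsh((S + S.T) / 2)) >= -tol

# Lemma 3: X^T Y + Y^T X <= eps X^T X + eps^{-1} Y^T Y for any eps > 0;
# follows from (sqrt(eps) X - Yg/sqrt(eps))^T (sqrt(eps) X - Yg/sqrt(eps)) >= 0.
for eps in (0.5, 1.0, 2.0):
    gap3 = eps * X.T @ X + Yg.T @ Yg / eps - (X.T @ Yg + Yg.T @ X)
    assert is_psd(gap3)

# Lemma 4: -X^T Y^{-1} X <= Y - X - X^T for any Y > 0;
# follows from (Y - X)^T Y^{-1} (Y - X) >= 0.
Y = Yg @ Yg.T + n * np.eye(n)
gap4 = (Y - X - X.T) + X.T @ np.linalg.inv(Y) @ X
assert is_psd(gap4)
```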

3 Main results

In this section, we first perform a stability analysis of the filtering error system (7). Then, on the basis of these results, a reliable extended dissipative filter design scheme for the neural network (1) is proposed.

3.1 Stability and performance analysis

In this subsection, we employ the matrix inequalities above, together with the Lyapunov function method, to analyze the stability and extended dissipativity of system (7). We first consider the case in which the sensor failure parameters of the filtering error system (7) are known.

Theorem 1 For given scalars d1, d2 and matrices Υ1 ≤ 0, Υ3 > 0, Υ4 ≥ 0 and any matrix Υ2 satisfying (‖Υ1‖ + ‖Υ2‖)‖Υ4‖ = 0, the filtering error system (7) is stochastically stable and extended dissipative if there exist matrices Pi > 0, Q1 > 0, Q2 > 0, Q3 > 0, R1 > 0, R2 > 0, diagonal matrices Sj > 0 (j = 1, . . . , 7), and any matrices M1, M2, such that the following inequalities hold for (µ, ν) ∈ {(1, 2), (2, 1)} and each i ∈ L:

Θ_iµν = [ Φi(dµ)  L1^T C̃i^T Υ̃1^T  D3^T Mν  Ξ1i(dµ) ;
          ∗        −I_q              0        0        ;
          ∗        ∗                 −R̃2     0        ;
          ∗        ∗                 ∗        Ξ2       ] < 0,        (10)

[ −I^T Pi I  C̃i^T Υ̃4^T ;
  ∗          −I_r         ] < 0        (11)

where

Φi(dµ) = −D1(dµ)^T Pi D1(dµ) + L2^T (Q1 + (d12 + 1)Q3) L2 + L3^T (−Q1 + Q2) L3 − L4^T Q3 L4 − L5^T Q2 L5 + (Wsi − W1)^T (d1² R1 + d12² R2)(Wsi − W1) − L6^T R̃1 L6 − 2 L1^T C̃i^T Υ2 W13 − W13^T Υ3 W13 − D3^T sym{M1 [I_2n 0_2n] + M2 [0_2n I_2n]} D3 + Σ_{j=1}^{7} φj(k),

Ξ1i(dµ) = [ √πi1 D2(dµ)^T  · · ·  √πiN D2(dµ)^T ],  Ξ2 = diag(−P1^{-1}, . . . , −PN^{-1}),  I = [I_2n 0_2n]^T,

R̃1 = diag(R1, 3R1),  R̃2 = diag(R2, 3R2),

Wj = [0_{n,(j−1)n}  I_n  0_{n,(12−j)n+r}], j = 1, 2, . . . , 12,  W13 = [0_{r,12n}  I_r],

L1 = col{W1, W2},  L2 = col{W1, W6},  L3 = col{W3, W7},  L4 = col{W4, W8},  L5 = col{W5, W9},
L6 = col{W1 − W3, W1 + W3 − 2W10},  L7 = col{W3 − W4, W3 + W4 − 2W11},  L8 = col{W4 − W5, W4 + W5 − 2W12},

D1(dµ) = col{L1, (d1 + 1)W10 − W1, (dµ − d1 + 1)W11 + (d2 − dµ + 1)W12 − W3 − W4},
D2(dµ) = col{W̃si, (d1 + 1)W10 − W3, (dµ − d1 + 1)W11 + (d2 − dµ + 1)W12 − W4 − W5},
D3 = col{L7, L8},

Wsi = [Ai  0_{n,4n}  Bi  0_n  Bdi  0_{n,4n}  Di],  W̃si = [Ãi  0_{2n,3n}  B̃i  0_{2n,n}  B̃di  0_{2n,4n}  D̃i],

φ1(k) = sym{(Σ1 W1 − W6)^T S1 (W6 − Σ2 W1)},
φj(k) = sym{(Σ1 W_{j+1} − W_{j+5})^T Sj (W_{j+5} − Σ2 W_{j+1})}, j = 2, 3, 4,
φ5(k) = sym{(Σ1 (W1 − W3) − (W6 − W7))^T S5 ((W6 − W7) − Σ2 (W1 − W3))},
φj(k) = sym{(Σ1 (W_{j−3} − W_{j−2}) − (W_{j+1} − W_{j+2}))^T Sj ((W_{j+1} − W_{j+2}) − Σ2 (W_{j−3} − W_{j−2}))}, j = 6, 7,

Σ1 = diag(γ1^+, . . . , γn^+),  Σ2 = diag(γ1^−, . . . , γn^−).        (12)

Proof. Consider the following Lyapunov-Krasovskii functional for system (7):

V(k) = V1(k) + V2(k) + V3(k)        (13)

where

V1(k) = ρ1(k)^T P(θ(k)) ρ1(k),

V2(k) = Σ_{g=k−d1}^{k−1} ρ2(g)^T Q1 ρ2(g) + Σ_{g=k−d2}^{k−d1−1} ρ2(g)^T Q2 ρ2(g) + Σ_{g=k−d(k)}^{k−1} ρ2(g)^T Q3 ρ2(g) + Σ_{l=−d2+1}^{−d1} Σ_{g=k+l}^{k−1} ρ2(g)^T Q3 ρ2(g),

V3(k) = d1 Σ_{l=−d1}^{−1} Σ_{g=k+l}^{k−1} δ(g)^T R1 δ(g) + d12 Σ_{l=−d2}^{−d1−1} Σ_{g=k+l}^{k−1} δ(g)^T R2 δ(g),

ρ1(k) = col{ η(k), Σ_{g=k−d1}^{k−1} x(g), Σ_{g=k−d2}^{k−d1−1} x(g) },  ρ2(k) = col{x(k), f(x(k))},  δ(k) = x(k + 1) − x(k).

Define ∆V(k) = E[V(k + 1, x(k + 1), θ(k + 1) = j) | x(k), θ(k) = i] − V(k, x(k), i) and

ξ(k) = col{ x(k), x_f(k), x(k − d1), x(k − d(k)), x(k − d2), f(x(k)), f(x(k − d1)), f(x(k − d(k))), f(x(k − d2)), (1/(d1 + 1)) Σ_{g=k−d1}^{k} x(g), (1/(d(k) − d1 + 1)) Σ_{g=k−d(k)}^{k−d1} x(g), (1/(d2 − d(k) + 1)) Σ_{g=k−d2}^{k−d(k)} x(g), ω(k) }.

Then, simple calculations yield

∆V1(k) = ρ1(k + 1)^T ( Σ_{j=1}^{N} πij Pj ) ρ1(k + 1) − ρ1(k)^T Pi ρ1(k)
       = ξ(k)^T [ D2(d(k))^T ( Σ_{j=1}^{N} πij Pj ) D2(d(k)) − D1(d(k))^T Pi D1(d(k)) ] ξ(k).        (14)

∆V2(k) = Σ_{g=k+1−d1}^{k} ρ2(g)^T Q1 ρ2(g) − Σ_{g=k−d1}^{k−1} ρ2(g)^T Q1 ρ2(g)
 + Σ_{g=k+1−d2}^{k−d1} ρ2(g)^T Q2 ρ2(g) − Σ_{g=k−d2}^{k−d1−1} ρ2(g)^T Q2 ρ2(g)
 + Σ_{g=k+1−d(k+1)}^{k} ρ2(g)^T Q3 ρ2(g) − Σ_{g=k−d(k)}^{k−1} ρ2(g)^T Q3 ρ2(g)
 + Σ_{l=−d2+1}^{−d1} [ Σ_{g=k+1+l}^{k} ρ2(g)^T Q3 ρ2(g) − Σ_{g=k+l}^{k−1} ρ2(g)^T Q3 ρ2(g) ]
≤ ρ2(k)^T (Q1 + (d12 + 1)Q3) ρ2(k) + ρ2(k − d1)^T (−Q1 + Q2) ρ2(k − d1) − ρ2(k − d(k))^T Q3 ρ2(k − d(k)) − ρ2(k − d2)^T Q2 ρ2(k − d2)
= ξ(k)^T [ L2^T (Q1 + (d12 + 1)Q3) L2 + L3^T (−Q1 + Q2) L3 − L4^T Q3 L4 − L5^T Q2 L5 ] ξ(k).        (15)

∆V3(k) = d1 Σ_{l=−d1}^{−1} [ Σ_{g=k+1+l}^{k} δ(g)^T R1 δ(g) − Σ_{g=k+l}^{k−1} δ(g)^T R1 δ(g) ] + d12 Σ_{l=−d2}^{−d1−1} [ Σ_{g=k+1+l}^{k} δ(g)^T R2 δ(g) − Σ_{g=k+l}^{k−1} δ(g)^T R2 δ(g) ]
= δ(k)^T (d1² R1 + d12² R2) δ(k) − d1 Σ_{g=k−d1}^{k−1} δ(g)^T R1 δ(g) − d12 Σ_{g=k−d2}^{k−d1−1} δ(g)^T R2 δ(g).        (16)

It follows from (16) and Lemma 1 that

−d1 Σ_{g=k−d1}^{k−1} δ(g)^T R1 δ(g) ≤ −v1^T diag(R1, 3R1) v1 = −ξ(k)^T L6^T R̃1 L6 ξ(k)        (17)

where v1 = col{ x(k) − x(k − d1), x(k) + x(k − d1) − (2/(d1 + 1)) Σ_{g=k−d1}^{k} x(g) }.

Splitting the second summation term of (16) into two parts, one has

−d12 Σ_{g=k−d2}^{k−d1−1} δ(g)^T R2 δ(g) = −d12 Σ_{g=k−d(k)}^{k−d1−1} δ(g)^T R2 δ(g) − d12 Σ_{g=k−d2}^{k−d(k)−1} δ(g)^T R2 δ(g).        (18)

Then, applying Lemma 1 again yields

−d12 Σ_{g=k−d(k)}^{k−d1−1} δ(g)^T R2 δ(g) ≤ −(d12/(d(k) − d1)) v2^T diag(R2, 3R2) v2 = −(d12/(d(k) − d1)) ξ(k)^T L7^T R̃2 L7 ξ(k)        (19)

where v2 = col{ x(k − d1) − x(k − d(k)), x(k − d1) + x(k − d(k)) − (2/(d(k) − d1 + 1)) Σ_{g=k−d(k)}^{k−d1} x(g) }, and

−d12 Σ_{g=k−d2}^{k−d(k)−1} δ(g)^T R2 δ(g) ≤ −(d12/(d2 − d(k))) v3^T diag(R2, 3R2) v3 = −(d12/(d2 − d(k))) ξ(k)^T L8^T R̃2 L8 ξ(k)        (20)

where v3 = col{ x(k − d(k)) − x(k − d2), x(k − d(k)) + x(k − d2) − (2/(d2 − d(k) + 1)) Σ_{g=k−d2}^{k−d(k)} x(g) }.

In view of Lemma 2 (applied with ε = (d(k) − d1)/d12) and (19)-(20), we have

−d12 Σ_{g=k−d2}^{k−d1−1} δ(g)^T R2 δ(g) ≤ −ξ(k)^T [ (d12/(d(k) − d1)) L7^T R̃2 L7 + (d12/(d2 − d(k))) L8^T R̃2 L8 ] ξ(k)
≤ −ξ(k)^T D3^T [ sym{M1 [I_2n 0_2n] + M2 [0_2n I_2n]} − ((d(k) − d1)/d12) M1 R̃2^{-1} M1^T − ((d2 − d(k))/d12) M2 R̃2^{-1} M2^T ] D3 ξ(k).        (21)

It then follows from (16), (17) and (21) that

∆V3(k) ≤ ξ(k)^T [ (Wsi − W1)^T (d1² R1 + d12² R2)(Wsi − W1) − L6^T R̃1 L6 − D3^T sym{M1 [I_2n 0_2n] + M2 [0_2n I_2n]} D3 + ((d(k) − d1)/d12) D3^T M1 R̃2^{-1} M1^T D3 + ((d2 − d(k))/d12) D3^T M2 R̃2^{-1} M2^T D3 ] ξ(k).        (22)

In light of the activation condition (2), define the function

φ(s, t) = 2 (Σ1 (s − t) − (f(s) − f(t)))^T S ((f(s) − f(t)) − Σ2 (s − t)) ≥ 0.        (23)

Then, with the weighting matrix S taken as Sj in the j-th term,

φ1(k) = φ(x(k), 0),  φ2(k) = φ(x(k − d1), 0),  φ3(k) = φ(x(k − d(k)), 0),  φ4(k) = φ(x(k − d2), 0),
φ5(k) = φ(x(k), x(k − d1)),  φ6(k) = φ(x(k − d1), x(k − d(k))),  φ7(k) = φ(x(k − d(k)), x(k − d2)).

In order to prove that the filtering error system (7) is extended dissipative, we define the performance index

J*(k) = e(k)^T Υ1 e(k) + 2 e(k)^T Υ2 ω(k) + ω(k)^T Υ3 ω(k).        (24)

It then follows from (13)-(16) and (22)-(24) that

∆V(k) − J*(k) + Σ_{j=1}^{7} φj(k) ≤ ξ(k)^T Φ̃i(d(k)) ξ(k)        (25)

where

Φ̃i(d(k)) = Φi(d(k)) + D2(d(k))^T ( Σ_{j=1}^{N} πij Pj ) D2(d(k)) + ((d(k) − d1)/d12) D3^T M1 R̃2^{-1} M1^T D3 + ((d2 − d(k))/d12) D3^T M2 R̃2^{-1} M2^T D3 − L1^T C̃i^T Υ1 C̃i L1,

and Φi(d(k)) is defined in (12).

Notice that Φ̃i(d(k)) is affine in d(k); thus, by convexity, Φ̃i(d(k)) < 0 for all d(k) ∈ [d1, d2] is equivalent to Φ̃i(d1) < 0 and Φ̃i(d2) < 0. Therefore, inequality (10) and the Schur complement equivalence lead to Φ̃i(d(k)) < 0. Recalling that φj(k) ≥ 0, j = 1, . . . , 7, one has

∆V(k) − J*(k) < 0.        (26)

Therefore, under the zero initial condition, using (26) and (11), we deduce that

E{ Σ_{k=0}^{T−1} J*(k) } ≥ E{ Σ_{k=0}^{T−1} ∆V(k) } = E{V(T)} ≥ E{ ρ1(T)^T Pi ρ1(T) } ≥ E{ e(T)^T Υ4 e(T) }.        (27)

According to Definition 2, we must show that inequality (8) holds for all matrices Υ1, Υ2, Υ3 and Υ4 satisfying the conditions of Definition 2. To this end, we divide the proof into two cases.

Case I: ‖Υ4‖ = 0. Then inequality (27) implies

E{ Σ_{k=0}^{T} J*(k) } ≥ E{ e(T)^T Υ4 e(T) } = 0.        (28)

Case II: ‖Υ4‖ > 0. Then the condition (‖Υ1‖ + ‖Υ2‖)‖Υ4‖ = 0 implies Υ1 = 0 and Υ2 = 0, so Υ3 > 0 leads to J*(α) = ω(α)^T Υ3 ω(α) ≥ 0. This fact and (27) yield

E{ Σ_{α=0}^{T} J*(α) } ≥ E{ Σ_{α=0}^{k−1} J*(α) } ≥ E{ e(k)^T Υ4 e(k) }        (29)

for all 0 ≤ k ≤ T. Thus, taking the supremum over 0 ≤ k ≤ T on both sides of (29) yields (8).

Therefore, Cases I and II together show that the filtering error system (7) is extended dissipative in the sense of Definition 2.

Finally, when ω(k) = 0, inequality (26) leads to

∆V(k) < J*(k) = e(k)^T Υ1 e(k) ≤ 0,        (30)

where the last inequality uses Υ1 ≤ 0; hence ∆V(k) < 0. Then, following a similar line of argument as in [38], we conclude that the filtering error system (7) is stochastically stable in the sense of Definition 1. The proof is complete. □

Remark 3 Theorem 1 provides mode-dependent conditions under which the filtering error system (7) is stochastically stable and extended dissipative. Note that the Lyapunov matrix P(θ(k)) is mode-dependent. Inspired by the idea of [31], the matrices Q1, Q2, Q3 could also be made dependent on the mode θ(k) in order to obtain less conservative stability conditions.

When L = {1} and ω(k) = 0, system (1) reduces to the following neural network with time-varying delay:

x(k + 1) = A x(k) + B f(x(k)) + B_d f(x(k − d(k))).        (31)

In the following, we derive a stability criterion for the neural network (31). To illustrate the advantages of our scheme, we adopt the same Lyapunov functional as in [21]:

Ṽ(k) = ρ̃1(k)^T P ρ̃1(k) + Σ_{g=k−d1}^{k−1} ρ2(g)^T Q1 ρ2(g) + Σ_{g=k−d2}^{k−d1−1} ρ2(g)^T Q2 ρ2(g) + d1 Σ_{l=−d1}^{−1} Σ_{g=k+l}^{k−1} δ(g)^T R1 δ(g) + d12 Σ_{l=−d2}^{−d1−1} Σ_{g=k+l}^{k−1} δ(g)^T R2 δ(g)        (32)

where ρ̃1(k) = col{ x(k), Σ_{g=k−d1}^{k−1} x(g), Σ_{g=k−d2}^{k−d1−1} x(g) }, and ρ2(k), δ(k) are defined as in (13). The following corollary is then obtained from (32) by a method similar to the proof of Theorem 1.

Corollary 1 For given d1, d2, the neural network (31) is asymptotically stable if there exist matrices P > 0, Q1 > 0, Q2 > 0, R1 > 0, R2 > 0, diagonal matrices Sj > 0 (j = 1, 2, . . . , 7), and any matrices M1, M2, such that the following LMIs hold for (µ, ν) ∈ {(1, 2), (2, 1)}:

[ Φ̌(dµ)  D̃3^T Mν ;
  ∗       −R̃2      ] < 0        (33)

where

Φ̌(dµ) = D̃2(dµ)^T P D̃2(dµ) − D̃1(dµ)^T P D̃1(dµ) + L̃2^T Q1 L̃2 + L̃3^T (−Q1 + Q2) L̃3 − L̃5^T Q2 L̃5 + (W̃s − W̃1)^T (d1² R1 + d12² R2)(W̃s − W̃1) − L̃6^T R̃1 L̃6 − D̃3^T sym{M1 [I_2n 0_2n] + M2 [0_2n I_2n]} D̃3 + Σ_{j=1}^{7} φ̃j(k),

W̃j = [0_{n,(j−1)n}  I_n  0_{n,(11−j)n}], j = 1, 2, . . . , 11,

L̃2 = col{W̃1, W̃5},  L̃3 = col{W̃2, W̃6},  L̃5 = col{W̃4, W̃8},
L̃6 = col{W̃1 − W̃2, W̃1 + W̃2 − 2W̃9},  L̃7 = col{W̃2 − W̃3, W̃2 + W̃3 − 2W̃10},  L̃8 = col{W̃3 − W̃4, W̃3 + W̃4 − 2W̃11},

D̃1(dµ) = col{W̃1, (d1 + 1)W̃9 − W̃1, (dµ − d1 + 1)W̃10 + (d2 − dµ + 1)W̃11 − W̃2 − W̃3},
D̃2(dµ) = col{W̃s, (d1 + 1)W̃9 − W̃2, (dµ − d1 + 1)W̃10 + (d2 − dµ + 1)W̃11 − W̃3 − W̃4},
D̃3 = col{L̃7, L̃8},  W̃s = [A  0_{n,3n}  B  0_n  B_d  0_{n,4n}],

φ̃j = sym{(Σ1 W̃j − W̃_{j+4})^T Sj (W̃_{j+4} − Σ2 W̃j)}, j = 1, 2, 3, 4,
φ̃j = sym{(Σ1 (W̃_{j−4} − W̃_{j−3}) − (W̃j − W̃_{j+1}))^T Sj ((W̃j − W̃_{j+1}) − Σ2 (W̃_{j−4} − W̃_{j−3}))}, j = 5, 6, 7.

Remark 4 Corollary 1 provides a new stability criterion for the delayed neural network (31). It should be pointed out that the stability condition in Corollary 1 is less conservative than those in [14, 15, 21]; this fact is illustrated by a numerical example in Section 4.

We are now in a position to analyze the stochastic stability and extended dissipativity of system (7) in the case where the sensor failure parameters are unknown but satisfy conditions (5) and (6).

Theorem 2 For given d1, d2 and matrices Υ1 ≤ 0, Υ3 > 0, Υ4 ≥ 0 and any matrix Υ2 satisfying (‖Υ1‖ + ‖Υ2‖)‖Υ4‖ = 0, the filtering error system (7) is stochastically stable and extended dissipative if there exist matrices Pi > 0, Q1 > 0, Q2 > 0, Q3 > 0, R1 > 0, R2 > 0, diagonal matrices Sj > 0 (j = 1, 2, . . . , 7), any matrices M1, M2, and scalars ε_gi > 0 (g = 1, 2, . . . , N), such that inequality (11) and the following LMIs hold for (µ, ν) ∈ {(1, 2), (2, 1)} and each i ∈ L:

[ Θ̂_iµν + Σ_{j=1}^{N} πij ε_ji Ĉi^T H_1i^T H_1i Ĉi   Λ1i ;
  ∗                                                   Λ2i ] < 0        (34)

where

Ĉi = [Ci  0_{p,13n+4Nn+q+r}],  Λ1i = [B̂_fi1^T · · · B̂_fiN^T],  Λ2i = diag(−ε_1i I_p, . . . , −ε_Ni I_p),
B̂_fi1^T = col{0_{15n+q+r,p}, B_fi, 0_{4Nn−2n,p}},  · · · ,  B̂_fiN^T = col{0_{11n+4Nn+q+r,p}, B_fi, 0_{2n,p}},

and Θ̂_iµν is Θ_iµν defined in (10) with Hi replaced by H_0i.

Proof. We only need to show that inequality (34) implies (10). Noting that Hi = H_0i + ∆Hi, it follows from (10) that

Θ_iµν = Θ̂_iµν + sym{ √πi1 B̂_fi1^T ∆Hi Ĉi + · · · + √πiN B̂_fiN^T ∆Hi Ĉi }.        (35)

In terms of Lemma 3 and (6), there exist scalars ε_gi > 0 (g = 1, 2, . . . , N) such that

sym{ √πi1 B̂_fi1^T ∆Hi Ĉi + · · · + √πiN B̂_fiN^T ∆Hi Ĉi }
≤ πi1 ε_1i Ĉi^T ∆Hi^T ∆Hi Ĉi + ε_1i^{-1} B̂_fi1^T B̂_fi1 + · · · + πiN ε_Ni Ĉi^T ∆Hi^T ∆Hi Ĉi + ε_Ni^{-1} B̂_fiN^T B̂_fiN
≤ Σ_{j=1}^{N} πij ε_ji Ĉi^T H_1i^T H_1i Ĉi + ε_1i^{-1} B̂_fi1^T B̂_fi1 + · · · + ε_Ni^{-1} B̂_fiN^T B̂_fiN.        (36)

On the other hand, applying the Schur complement equivalence to (34), we have

Θ̂_iµν + Σ_{j=1}^{N} πij ε_ji Ĉi^T H_1i^T H_1i Ĉi + ε_1i^{-1} B̂_fi1^T B̂_fi1 + · · · + ε_Ni^{-1} B̂_fiN^T B̂_fiN < 0.        (37)

Therefore, inequalities (35)-(37) imply that inequality (10) holds. The proof is complete. □

Remark 5 Theorem 2 provides a stability criterion for neural networks with Markovian jumping parameters, time-varying delay and sensor faults. It is noted that the resulting condition depends not only on the time delays and the system modes, but also on the sensor failure parameters.
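The conditions of Theorems 1 and 2 are LMI feasibility problems. As a minimal illustration of what an LMI certificate looks like (not the paper's conditions), recall that a matrix A is Schur stable iff some P > 0 satisfies A^T P A − P < 0; such a P can be produced by solving the discrete Lyapunov equation, here via a plain NumPy Kronecker-product solve (real designs would hand (34) to an SDP solver):

```python
import numpy as np

# Construct a Lyapunov LMI certificate P > 0 with A^T P A - P < 0 for an
# illustrative Schur-stable A, by solving A^T P A - P = -Q with Q = I.
A = np.array([[0.5, 0.2],
              [0.0, 0.4]])
n = A.shape[0]
Q = np.eye(n)

# vec(A^T P A) = (A^T kron A^T) vec(P), so vec(P) solves a linear system.
K = np.kron(A.T, A.T) - np.eye(n * n)
P = np.linalg.solve(K, -Q.reshape(-1)).reshape(n, n)
P = (P + P.T) / 2                      # symmetrize against round-off

assert np.all(np.linalg.eigvalsh(P) > 0)                 # P > 0
assert np.all(np.linalg.eigvalsh(A.T @ P @ A - P) < 0)   # LMI satisfied
```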

3.2 Reliable filter design

In this subsection, we solve the reliable filter design problem formulated in Section 2. Based on the result of Theorem 2, the desired reliable filter design scheme for neural networks of the form (1) is presented in the following theorem.

Theorem 3 For given d1, d2 and matrices Υ1 ≤ 0, Υ3 > 0, Υ4 ≥ 0 and any matrix Υ2 satisfying (‖Υ1‖ + ‖Υ2‖)‖Υ4‖ = 0, there exists a filter of the form (3) such that the filtering error system (7) is stochastically stable and extended dissipative if there exist matrices Pi > 0, Q1 > 0, Q2 > 0, Q3 > 0, R1 > 0, R2 > 0, diagonal matrices Sj > 0 (j = 1, 2, . . . , 7), Vi > 0, X_2i > 0, X_3i > 0, any matrices U_1i, U_2i, M1, M2, Ā_fi, B̄_fi, C̄_fi, and scalars ε_gi > 0 (g = 1, 2, . . . , N), such that inequality (11) and the following LMIs hold for (µ, ν) ∈ {(1, 2), (2, 1)} and each i ∈ L:

[ Θ⃗_iµν + Σ_{j=1}^{N} πij ε_ji Ĉi^T H_1i^T H_1i Ĉi   Λ⃗_1i ;
  ∗                                                   Λ2i  ] < 0        (38)

where

Θ⃗_iµν = [ Φi(dµ)  L1^T C̃i^T Υ̃1^T  D3^T Mν  Ξ⃗_1i(dµ) ;
           ∗        −I_q              0        0          ;
           ∗        ∗                 −R̃2     0          ;
           ∗        ∗                 ∗        Ξ⃗_2       ],

Λ⃗_1i = [B⃗_fi1^T · · · B⃗_fiN^T],  Ξ⃗_1i(dµ) = [ √πi1 D⃗_2(dµ)^T  · · ·  √πiN D⃗_2(dµ)^T ],

Ξ⃗_2 = diag(P1 − Xi − Xi^T, . . . , PN − Xi − Xi^T),

D⃗_2(dµ) = col{W⃗_si, X_2i((d1 + 1)W10 − W3), X_3i((dµ − d1 + 1)W11 + (d2 − dµ + 1)W12 − W4 − W5)},

W⃗_si = [A⃗_i  0_{2n,3n}  B⃗_i  0_{2n,n}  B⃗_di  0_{2n,4n}  D⃗_i],

A⃗_i = [ U_1i Ai + B̄_fi H_0i Ci  Ā_fi ;
        U_2i Ai + B̄_fi H_0i Ci  Ā_fi ],  B⃗_i = [ U_1i Bi ; U_2i Bi ],  B⃗_di = [ U_1i Bdi ; U_2i Bdi ],  D⃗_i = [ U_1i Di ; U_2i Di ],

and B⃗_fi1^T is B̂_fi1^T defined in Theorem 2 with B_fi replaced by B̄_fi.

In this case, a desired filter is obtained with the parameters

A_fi = Vi^{-1} Ā_fi,  B_fi = Vi^{-1} B̄_fi,  C_fi = C̄_fi.        (39)

Proof. In order to solve the filter design problem formulated in Section 2, define the matrix S = diag(I_{14n+q+r}, Xi, . . . , Xi, I_{Np}) (with N copies of Xi), where Xi = diag(X_1i, X_2i, X_3i) and X_1i = [ U_1i  Vi ; U_2i  Vi ]. Pre-multiplying (34) by S^T and post-multiplying it by S, we deduce that (34) is equivalent to

[ Θ̌_iµν + Σ_{j=1}^{N} πij ε_ji Ĉi^T H_1i^T H_1i Ĉi   Λ⃗_1i ;
  ∗                                                   Λ2i  ] < 0        (40)

where

Θ̌_iµν = [ Φi(dµ)  L1^T C̃i^T Υ̃1^T  D3^T Mν  Ξ⃗_1i(dµ) ;
           ∗        −I_q              0        0          ;
           ∗        ∗                 −R̃2     0          ;
           ∗        ∗                 ∗        Ξ̌_2       ],        (41)

Ξ̌_2 = diag(−Xi^T P1^{-1} Xi, . . . , −Xi^T PN^{-1} Xi).        (42)

Then, Lemma 4 and (41)-(42) lead to

Θ̌_iµν ≤ Θ⃗_iµν.        (43)

Therefore, from (40) and (43), we conclude that the conditions of Theorem 2 are satisfied when the LMIs in (11) and (38) hold. The proof is complete. □

Remark 6 Theorem 3 provides sufficient conditions for the existence of reliable filters for neural networks in the presence of Markovian jumping parameters and time-varying delay. It is worth mentioning that the conditions in Theorem 3 are strict LMIs, which can be checked easily.

Remark 7 The transition probability information adopted in this paper is assumed to be completely known. The approaches developed here could be extended to the filter design problem for system (1) with partially unknown transition probabilities.

4 Numerical examples

In this section, three examples are given to demonstrate the advantages and usefulness of the results in this paper.

Example 1. Consider the neural networks in the form of (31) with the following parameters [21]:

A = [ 0.1 0 ; 0 0.3 ],   B = [ 0.02 0 ; 0 0.004 ],   Bd = [ −0.01 0.01 ; −0.02 −0.01 ].

The activation function f(x) satisfies condition (2) with Σ1 = diag(1, 1) and Σ2 = diag(0, 0). Since the stability criterion in [21] is already less conservative than the results in [14, 15], we only compare our result with the one in [21]. Table 1 lists the maximal upper bounds of d2 for different d1 calculated by Corollary 1 and by the results in [21]. From this table it can be seen that Corollary 1 provides less conservative results than the one in [21]. It is worth mentioning that the reduced conservatism of Corollary 1 comes from employing inequality (9) in Lemma 2.

Table 1: Maximum allowable d2 for different d1

Methods                d1=2   d1=4   d1=6   d1=8   d1=10   d1=20
by Corollary 1 [21]      97     99    101    103     105     115
by Theorem 1 [21]        99    101    103    105     107     117
by Corollary 1          101    103    105    107     109     119

Example 2. Consider the neural networks with Markovian jumping parameters and time-varying delay in the form of (1) with the following parameters:

A1 = [ −0.8 0 ; 0 0.2 ],   A2 = [ −0.3 0 ; 0 −0.7 ],   B1 = [ 0.11 0 ; 0.1 0.2 ],   B2 = [ 0.11 0 ; 0.1 0.2 ],
Bd1 = [ 0.11 0 ; 0.1 0.2 ],   Bd2 = [ 0.12 0.38 ; 0.64 0.18 ],   D1 = [ −0.02 ; 0.4 ],   D2 = [ 0.04 ; −0.25 ],
C1 = [ 0.5 0.3 ],   C2 = [ 0.3 0.5 ],   L1 = [ 0.8 0.4 ],   L2 = [ 0.6 0.5 ],
π11 = 0.8, π12 = 0.2, π21 = 0.4, π22 = 0.6,   h11 = 0.6, h12 = 0.8, h21 = 0.7, h22 = 0.9,
Σ1 = diag(0.4, 0.2),   Σ2 = diag(0, 0).

(1) l2 − l∞ filter: Let Υ1 = 0, Υ2 = 0, Υ3 = γ²I, Υ4 = I, and d1 = 1. For different d2, Table 2 lists the minimum value of γ calculated by Theorem 3.

Table 2: Minimum γ for different d2

d2    2        4        6        8        10
γ     0.1768   0.1910   0.2094   0.2508   0.5000

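Delay margins like those in Table 1 are typically obtained by re-checking LMI feasibility while increasing d2, for example with an integer bisection. A sketch of that outer loop, where `feasible(d1, d2)` is a hypothetical oracle standing in for the Corollary 1 LMI test:

```python
def max_allowable_d2(d1, feasible, hi=1024):
    """Largest integer d2 >= d1 with feasible(d1, d2) True, assuming
    feasibility is monotone (holds for every d2 up to the delay margin)."""
    lo = d1
    if not feasible(d1, lo):
        return None
    while feasible(d1, hi):      # grow the bracket until infeasible
        lo, hi = hi, 2 * hi
    while hi - lo > 1:           # standard integer bisection
        mid = (lo + hi) // 2
        lo, hi = (mid, hi) if feasible(d1, mid) else (lo, mid)
    return lo

# Toy oracle encoding the "by Corollary 1" column of Table 1 (margin = 99 + d1),
# purely for illustration; a real oracle would solve the Corollary 1 LMIs.
toy = lambda d1, d2: d2 <= 99 + d1
print(max_allowable_d2(2, toy))   # matches the Table 1 entry 101
```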
Given d2 = 8 and γ = 0.2508, the parameters of the desired l2 − l∞ filter (3) are given as:

Af1 = [ −0.6734 0.0890 ; 0.2612 0.4094 ],   Bf1 = [ 0.2806 ; 0.2211 ],   Cf1 = [ −0.0735 −0.1886 ],
Af2 = [ −0.1426 1.1803 ; −0.1947 −0.8188 ],   Bf2 = [ 2.8079 ; 0.7873 ],   Cf2 = [ −0.0602 −0.1478 ].   (44)

(2) Dissipative filter: Let Υ1 = −0.25I, Υ2 = −2.5I, Υ3 = γI, Υ4 = 0. For different d2, the minimum value of γ obtained by Theorem 3 is listed in Table 3. Given d2 = 8 and γ = 1.7844, the parameters of the desired dissipative filter (3) are:

Af1 = [ −0.2612 0.3227 ; −0.6773 −0.2691 ],   Bf1 = [ 1.2162 ; −2.3341 ],   Cf1 = [ −0.9299 −0.5158 ],
Af2 = [ −0.5911 0.1229 ; 0.9003 0.2558 ],   Bf2 = [ 0.3907 ; 2.9361 ],   Cf2 = [ −0.8001 −0.8045 ].   (45)

Table 3: Minimum γ for different d2

d2    2        4        6        8        10
γ     0.7818   0.9552   1.2216   1.7844   5.1093

Let the initial conditions be x(0) = [0.5 −0.5]^T and xf(0) = [0 0]^T, and let the neural activation functions be f1(x1(k)) = 0.4 tanh(x1(k)) and f2(x2(k)) = 0.2 tanh(x2(k)). The disturbance input ω(k) is given by ω(k) = −0.5 for 5 ≤ k ≤ 6, ω(k) = 0.5 for 10 ≤ k ≤ 11, and ω(k) = 0 otherwise. Under these conditions, together with the filter parameters given in (44) and (45), we obtain the simulation results shown in Figures 1-4: Figures 1 and 3 depict the filter states, and the filter error signal e(k) is shown in Figures 2 and 4.
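The jump-mode trajectories shown in the simulation figures are sample paths of the mode process θ(k). A minimal sketch of sampling the two-state chain of Example 2, with the rows of Π taken from π11 = 0.8, π12 = 0.2, π21 = 0.4, π22 = 0.6:

```python
import numpy as np

Pi = np.array([[0.8, 0.2],
               [0.4, 0.6]])  # transition probabilities from Example 2

def sample_modes(Pi, k_max, theta0=0, seed=0):
    """Sample theta(0..k_max-1) with P(theta(k+1)=j | theta(k)=i) = Pi[i, j]."""
    rng = np.random.default_rng(seed)
    modes = np.empty(k_max, dtype=int)
    modes[0] = theta0
    for k in range(1, k_max):
        modes[k] = rng.choice(2, p=Pi[modes[k - 1]])
    return modes

modes = sample_modes(Pi, 100_000)
freq = np.bincount(modes, minlength=2) / modes.size
# The stationary distribution of Pi is (2/3, 1/3); long-run mode frequencies
# approach it, which is a quick consistency check on the sampler.
print(np.round(freq, 2))
```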

Figure 1: l2 − l∞ filter states xf (k) in Example 2.

Figure 2: l2 − l∞ filter error signal e(k) in Example 2.

Figure 3: Dissipative filter states xf (k) in Example 2.

Figure 4: Dissipative filter error signal e(k) in Example 2.

Remark 8 For different d2, Tables 2 and 3 list the minimum value of γ for the l2 − l∞ and dissipative filters, respectively. It can be clearly seen that the minimum γ becomes larger as the time-delay upper bound d2 increases. Moreover, the simulation results demonstrate the effectiveness of the proposed filter design scheme.

Example 3. In order to demonstrate the applications of the results proposed in this paper, we employ a biological network referred to as the synthetic regulatory network. The fundamental research work can be found in [42]. As in [43], when the stochastic jumping and unexpected time delay are taken into account, the network can be described by the following equations:

m(k + 1) = A1(θ(k))m(k) + B1(θ(k))f(p(k)) + Bd1(θ(k))f(p(k − d(k))) + D1(θ(k))ω(k),
p(k + 1) = A2(θ(k))p(k) + A3(θ(k))m(k) + D2(θ(k))ω(k),   (46)

where m(k) and p(k) are the concentrations of messenger RNA (mRNA) and protein. The diagonal matrices A1(θ(k)), A2(θ(k)), and A3(θ(k)) denote the decay rates of mRNA, the decay rates of protein, and the translation rates of mRNA, respectively. B1(θ(k)) and Bd1(θ(k)) are the coupling matrices. f(p(k)) stands for the feedback regulation of the protein on the transcription and is a nonlinear function. {θ(k)} is a Markov chain. Then, defining

x(k) = [ m(k) ; p(k) ],   f(x(k)) = [ f(m(k)) ; f(p(k)) ],
System (46) can be transformed into the form of (1) with the following parameters:

Ai = [ A1i 0 ; A3i A2i ],   Bi = [ 0 B1i ; 0 0 ],   Bdi = [ 0 Bd1i ; 0 0 ],   Di = [ D1i ; D2i ].

In this example, the regulation function is chosen as f(x) = x²/(1 + x²); then the corresponding parameters are γ⁺ = 0.65 and γ⁻ = 0. We assume that system (46) has two modes and that the transition probability matrix is given by

Π = [ 0.8 0.2 ; 0.3 0.7 ].

The rest of the parameters are borrowed from [43] and listed as follows:

A1 = [ 0.2I 0 ; 0.09I 0.1I ],   A2 = [ 0.1I 0 ; 0.08I 0.09I ],   U = [ 0 0 1 ; 1 0 0 ; 0 1 0 ],
B1 = [ 0 −0.5U ; 0 0 ],   B2 = [ 0 −0.8U ; 0 0 ],   Bd1 = [ 0 −0.2U ; 0 0 ],   Bd2 = [ 0 −0.1U ; 0 0 ],
D1 = [ −0.7 0 0.5 0 0.6 0 ; 0 0.2 0.3 0 0 0.1 ],   D2 = [ 0.3 0 −0.5 0.5 0 0 ; 0 0.3 0 0.5 0.2 0 ],
C1 = [ 1 0 −1 0 0 1 ; 0 0 1 0 0 0 ],   C2 = [ 0.3 0 −0.5 0.5 0 0 ; 0 0.3 0 0.5 0.2 0 ],
L1 = [ 1 1 0 0 1 0 ],   L2 = [ 1 0 0 1 0 0 ],

and the time delay d(k) satisfies 1 ≤ d(k) ≤ 5.
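The sector parameters γ⁺ = 0.65 and γ⁻ = 0 for the regulation function f(x) = x²/(1 + x²) can be verified numerically: on the nonnegative orthant (concentrations are nonnegative) its slope f′(x) = 2x/(1 + x²)² ranges over [0, 9/(8√3)] ≈ [0, 0.6495], so 0.65 is a valid, slightly rounded upper bound. A quick check:

```python
import numpy as np

# Derivative of the Example 3 regulation function f(x) = x^2/(1+x^2),
# evaluated on a dense grid of nonnegative concentrations.
x = np.linspace(0.0, 5.0, 100_001)
slope = 2.0 * x / (1.0 + x**2) ** 2

print(round(slope.max(), 4))  # 0.6495, i.e. 9/(8*sqrt(3)), below gamma+ = 0.65
print(slope.min())            # 0.0, consistent with gamma- = 0
```

The maximum of the slope occurs at x = 1/√3, which lies inside the grid, so the grid maximum is accurate to the stated digits.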


(1) l2 − l∞ filter: Let Υ1 = 0, Υ2 = 0, Υ3 = γ²I and Υ4 = I. We obtain the minimum value γ = 0.8915, and the desired l2 − l∞ filter parameters are:

Af1 = [ −0.0479 −0.1664  0.1801  0.2781 −0.3462 −0.1956 ;
        −0.4131  0.0227 −0.2134  0.0736 −0.3884  0.1309 ;
        −0.5173 −0.0501  0.0700  0.4788 −0.8379  0.0409 ;
         0.2754  0.0721  0.0125 −0.2368  0.4266 −0.0970 ;
         0.2804  0.2535 −0.1429 −0.4370  0.6158 −0.0654 ;
        −0.0766 −0.0295  0.0851  0.1851 −0.1842  0.1193 ],

Bf1 = [ −0.3006 −0.0760  0.1656 −0.1879  0.1231  0.0908 ;
        −0.0641 −0.5996 −0.4559 −0.0478  0.0033 −0.0432 ]^T,

Cf1 = [ −0.7208 −0.7641 −0.1984  0.3143 −0.8214  0.4950 ],

Af2 = [  0.0834 −0.0428  0.0568  0.0450 −0.0536 −0.2570 ;
         0.0269  0.1121 −0.0337 −0.0727  0.0576 −0.0732 ;
         0.0914  0.2599  0.1868 −0.0309  0.1195 −0.0680 ;
         0.2966  0.1115  0.1121 −0.1761  0.3450 −0.1496 ;
         0.1396  0.1753  0.0001 −0.1856  0.3241 −0.0358 ;
         0.1278  0.0565  0.0677 −0.1023  0.1759  0.0719 ],

Bf2 = [ −0.0146  0.2944 −0.5363 −0.4158 −0.2284 −0.0745 ;
         0.0056  0.3892  0.8924 −0.1081 −0.0969 −0.0173 ]^T,

Cf2 = [ −0.4596  0.1840 −0.1630 −0.3938  0.1299  0.5108 ].   (47)

(2) Dissipative filter: Let Υ1 = −0.5I, Υ2 = −4I, Υ3 = γI, Υ4 = 0. Then, by solving the LMIs in Theorem 3, we obtain the minimum value γ = 5.4330, and the desired dissipative filter parameters are:

Af1 = [  0.2725  0.0131  0.2411 −0.1128  0.0453  0.0352 ;
         0.1180 −0.1679 −0.3444  0.1355  0.1528  0.0797 ;
         0.1657 −0.1402  0.1646 −0.1296  0.0251  0.0613 ;
        −0.0881  0.0685 −0.0428  0.1811 −0.0672 −0.3534 ;
        −0.1525  0.0857 −0.2382  0.1400  0.0234 −0.3684 ;
         0.2085 −0.0447  0.1416 −0.1110  0.1297  0.3218 ],

Bf1 = [  0.0762 −0.0640 −0.2320 −0.1402 −0.1745  0.0805 ;
         0.5553 −0.3296 −0.4872 −0.1807 −0.6399  0.1296 ]^T,

Cf1 = [ −0.9470 −0.9174  0.0180 −0.0406 −0.9455 −0.1590 ],

Af2 = [  0.1654 −0.0209  0.2002 −0.3026  0.1282 −0.4401 ;
         0.1590 −0.0548 −0.2410  0.1283 −0.0643  0.0323 ;
        −0.1196  0.2010 −0.1110  0.5397 −0.2795  0.1959 ;
         0.2245 −0.0873  0.3658 −0.5176  0.1955 −0.2173 ;
         0.0493  0.0406 −0.0023 −0.0715  0.0720  0.0064 ;
        −0.0412  0.0285  0.0388  0.0319 −0.0426  0.0143 ],

Bf2 = [ −0.5974  0.0334  0.5054 −1.0617  0.0255  0.0494 ;
         0.0352  0.5239  0.8508 −0.1673 −0.2039  0.0092 ]^T,

Cf2 = [ −0.2793 −0.1365  0.0624 −1.3639  0.3057 −0.2608 ].   (48)

In the simulation, we choose the initial conditions x(0) = [−0.2 0.1 −0.3 0.2 −0.1 0.2]^T and xf(0) = [−0.1 −0.2 0.1 −0.1 0 −0.1]^T, and the disturbance input is assumed to be ω(k) = e^{−0.1k} sin(0.2k). Then, with the filter parameters in (47) and (48), we obtain the simulation results given in Figures 5-8: Figures 5 and 7 depict the filter states, and the filter error signal e(k) is shown in Figures 6 and 8.

Figure 5: l2 − l∞ filter states xf (k) in Example 3.

Figure 6: l2 − l∞ filter error signal e(k) in Example 3.

Figure 7: Dissipative filter states xf (k) in Example 3.

Figure 8: Dissipative filter error signal e(k) in Example 3.


5 Conclusion

In this paper, we have investigated the problem of reliable filter design for neural networks in the presence of Markovian jumping parameters and time-varying delay. A sufficient condition has been proposed which ensures that the resulting filtering error system is stochastically stable and extended dissipative. It should be emphasized that the stability condition obtained in this paper is less conservative than some existing results. Three numerical examples have been provided to demonstrate the advantages and usefulness of the developed schemes. In addition, the filter design method proposed in this paper can be extended to other systems with time-varying delays, such as T-S fuzzy systems, network systems [44, 45], and cyber-physical systems [46-48]. Moreover, the scheme developed in this paper can also be used to investigate the state estimation or H∞ control problem [49, 50] for delayed neural networks.

Acknowledgements This work was supported by the NSFC under Grants 61673215, 61673169 and 61374087, the 333 Project (BRA2017380), a project funded by the Priority Academic Program Development of Jiangsu, and the Key Laboratory of Jiangsu Province.

References

[1] J. K. Paik, A. K. Katsaggelos, Image restoration using a modified Hopfield network, IEEE Trans. Image Process. 1(1) (1992) 49-63.
[2] W. Xia, W. X. Zheng, S. Xu, Event-triggered filter design for Markovian jump delay systems with nonlinear perturbation using quantized measurement, Int. J. Robust Nonlinear Control 29(14) (2019) 4644-4664.
[3] S. R. Chu, R. Shoureshi, M. Tenorio, Neural networks for system identification, IEEE Control Syst. Mag. 10(3) (1990) 31-35.
[4] Y. Xia, J. Wang, A bi-projection neural network for solving constrained quadratic optimization problem, IEEE Trans. Neural Netw. Learn. Syst. 27(2) (2016) 214-224.
[5] Q. Ma, K. Gu, N. Choubedar, Strong stability of a class of difference equations of continuous time and structured singular value problem, Automatica 87 (2018) 32-39.
[6] W. Xia, W. X. Zheng, S. Xu, Realizability condition for digital filters with time delay using generalized overflow arithmetic, IEEE Trans. Circuits Syst. II, Exp. Briefs 66(1) (2018) 141-145.
[7] Z. Zhang, S. Xu, B. Zhang, Exact tracking control of nonlinear systems with time delays and dead-zone input, Automatica 52 (2015) 272-276.
[8] G. Zong, R. Wang, W. Zheng, L. Hou, Finite-time H∞ control for discrete-time switched nonlinear systems with time delay, Int. J. Robust Nonlinear Control 25(6) (2015) 914-936.
[9] W. Xia, S. Xu, Q. Ma, Z. Qi, Z. Zhang, Dissipative controller design for uncertain neutral systems with semi-Markovian jumping parameters, Optim. Control Appl. Meth. 39(2) (2018) 888-903.
[10] Z. Wu, J. H. Park, H. Su, J. Chu, Stochastic stability analysis of piecewise homogeneous Markovian jump neural networks with mixed time-delays, J. Frankl. Inst. 349(6) (2012) 2136-2150.
[11] Q. Zhu, J. Cao, Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays, IEEE Trans. Syst., Man, Cybern. B, Cybern. 41(2) (2011) 341-353.
[12] W. I. Lee, S. Y. Lee, P. Park, A combined reciprocal convexity approach for stability analysis of static neural networks with interval time-varying delays, Neurocomputing 221 (2017) 168-177.


[13] R. Dey, J. C. Martinez Garcia, Improved delay-range-dependent stability analysis for uncertain retarded systems based on affine Wirtinger-inequality, Int. J. Robust Nonlinear Control 27(16) (2017) 3028-3042.
[14] L. J. Banu, P. Balasubramaniam, K. Ratnavelu, Robust stability analysis for discrete-time uncertain neural networks with leakage time-varying delay, Neurocomputing 151 (2015) 808-816.
[15] L. J. Banu, P. Balasubramaniam, Robust stability analysis for discrete-time neural networks with time-varying leakage delays and random parameter uncertainties, Neurocomputing 179 (2016) 126-134.
[16] Z. G. Wu, P. Shi, H. Su, J. Chu, Passivity analysis for discrete-time stochastic Markovian jump neural networks with mixed time delays, IEEE Trans. Neural Netw. 22(10) (2011) 1566-1575.
[17] C. Hua, S. Wu, X. Guan, New robust stability condition for discrete-time recurrent neural networks with time-varying delays and nonlinear perturbations, Neurocomputing 219 (2017) 203-209.
[18] K. Gu, V. L. Kharitonov, J. Chen, Stability of Time-Delay Systems, Birkhäuser, Boston, 2003.
[19] A. Seuret, F. Gouaisbaut, E. Fridman, Stability of discrete-time systems with time-varying delays via a novel summation inequality, IEEE Trans. Autom. Control 60(10) (2015) 2740-2745.
[20] P. G. Park, J. W. Ko, C. Jeong, Reciprocally convex approach to stability of systems with time-varying delays, Automatica 47(1) (2011) 235-238.
[21] C. K. Zhang, Y. He, L. Jiang, Q. G. Wang, M. Wu, Stability analysis of discrete-time neural networks with time-varying delay via an extended reciprocally convex matrix inequality, IEEE Trans. Cybern. 47(10) (2017) 3040-3049.
[22] Z. Su, H. Wang, L. Yu, D. Zhang, Exponential H∞ filtering for switched neural networks with mixed delays, IET Control Theory Appl. 8(11) (2014) 987-995.
[23] J. Liu, J. Tang, S. Fei, Event-triggered H∞ filter design for delayed neural network with quantization, Neural Netw. 82 (2016) 39-48.
[24] X. Zhang, X. Fan, Y. Xue, Y. Wang, W. Cai, Robust exponential passive filtering for uncertain neutral-type neural networks with time-varying mixed delays via Wirtinger-based integral inequality, Int. J. Control Autom. Syst. 15(2) (2017) 585-594.
[25] H. D. Choi, C. K. Ahn, P. Shi, M. T. Lim, M. K. Song, L2 − L∞ filtering for Takagi-Sugeno fuzzy neural networks based on Wirtinger-type inequalities, Neurocomputing 153 (2015) 117-125.
[26] L. Zhang, Y. Zhu, W. X. Zheng, Energy-to-peak state estimation for Markov jump RNNs with time-varying delays via nonsynchronous filter with nonstationary mode transitions, IEEE Trans. Neural Netw. Learn. Syst. 26(10) (2015) 2346-2356.
[27] L. Zhang, Y. Zhu, P. Shi, Y. Zhao, Resilient asynchronous H∞ filtering for Markov jump neural networks with unideal measurements and multiplicative noises, IEEE Trans. Cybern. 45(12) (2015) 2840-2852.
[28] J. Tao, Z. G. Wu, H. Su, Y. Wu, D. Zhang, Asynchronous and resilient filtering for Markovian jump neural networks subject to extended dissipativity, IEEE Trans. Cybern. 49(7) (2018) 2504-2513.
[29] W. Xia, W. X. Zheng, S. Xu, Extended dissipativity analysis of digital filters with time delay and Markovian jumping parameters, Signal Process. 152 (2018) 247-254.
[30] W. Xia, S. Xu, Robust H∞ deconvolution filter for polytopic uncertain systems with distributed delay, Trans. Inst. Meas. Control 40(11) (2018) 3368-3376.
[31] B. Zhang, W. X. Zheng, S. Xu, Filtering of Markovian jump delay systems based on a new performance index, IEEE Trans. Circuits Syst. I, Reg. Papers 60(5) (2013) 1250-1263.
[32] Z. Feng, W. X. Zheng, On extended dissipativity of discrete-time neural networks with time delay, IEEE Trans. Neural Netw. Learn. Syst. 26(12) (2015) 3293-3300.
[33] G. Zhuang, S. Xu, B. Zhang, J. Xia, Y. Chu, Y. Zou, Unified filters design for singular Markovian jump systems with time-varying delays, J. Franklin Inst. 353 (2016) 3739-3768.
[34] W. Xia, Q. Ma, J. Lu, G. Zhuang, Reliable filtering with extended dissipativity for uncertain systems with discrete and distributed delays, Int. J. Syst. Sci. 48(12) (2017) 2644-2657.
[35] H. Shen, Z. G. Wu, J. H. Park, Reliable mixed passive and filtering for semi-Markov jump systems with randomly occurring uncertainties and sensor failures, Int. J. Robust Nonlinear Control 25(17) (2015) 3231-3251.


[36] Z. G. Wu, P. Shi, H. Su, J. Chu, Reliable H∞ control for discrete-time fuzzy systems with infinite-distributed delay, IEEE Trans. Fuzzy Syst. 20(1) (2012) 22-31.
[37] M. S. Ali, R. Vadivel, R. Saravanakumar, Design of robust reliable control for T-S fuzzy Markovian jumping delayed neutral type neural networks with probabilistic actuator faults and leakage delays: An event-triggered communication scheme, ISA Trans. 77 (2018) 30-48.
[38] L. X. Zhang, E. K. Boukas, Mode-dependent H∞ filtering for discrete-time Markovian jump linear systems with partly unknown transition probabilities, Automatica 45 (2009) 1462-1467.
[39] K. Liu, A. Seuret, Comparison of bounding methods for stability analysis of systems with time-varying delays, J. Franklin Inst. 354(7) (2017) 2979-2993.
[40] L. Xie, C. E. de Souza, Robust H∞ control for linear systems with norm-bounded time-varying uncertainty, IEEE Trans. Autom. Control 37(8) (1992) 1188-1191.
[41] E. K. Boukas, Control of Singular Systems with Random Abrupt Changes, Springer, Berlin, 2008.
[42] M. B. Elowitz, S. Leibler, A synthetic oscillatory network of transcriptional regulators, Nature 403(6767) (2000) 335-338.
[43] H. Shen, Y. Zhu, L. Zhang, J. H. Park, Extended dissipative state estimation for Markov jump neural networks with unreliable links, IEEE Trans. Neural Netw. Learn. Syst. 28(2) (2017) 346-358.
[44] W. Li, L. Liu, G. Feng, Cooperative control of multiple nonlinear benchmark systems perturbed by second-order moment processes, IEEE Trans. Cybern., doi: 10.1109/TCYB.2018.2869385.
[45] W. Li, L. Liu, G. Feng, Distributed output-feedback tracking of multiple nonlinear systems with unmeasurable states, IEEE Trans. Syst., Man, Cybern. Syst., doi: 10.1109/TSMC.2018.2875453.
[46] L. Su, D. Ye, A cooperative detection and compensation mechanism against denial-of-service attack for cyber-physical systems, Inf. Sci. 444 (2018) 122-134.
[47] D. Ye, X. Yang, L. Su, Fault-tolerant synchronization control for complex dynamical networks with semi-Markov jump topology, Appl. Math. Comput. 312 (2017) 36-48.
[48] D. Ye, T. Y. Zhang, G. Guo, Stochastic coding detection scheme in cyber-physical systems against replay attack, Inf. Sci. 481 (2019) 432-444.
[49] H. Shen, F. Li, H. Yan, H. R. Karimi, H. K. Lam, Finite-time event-triggered H∞ control for T-S fuzzy Markov jump systems, IEEE Trans. Fuzzy Syst. 26(5) (2018) 3122-3135.
[50] H. Shen, S. Huo, J. Cao, T. Huang, Generalized state estimation for Markovian coupled networks under round-robin protocol and redundant channels, IEEE Trans. Cybern. (2018) 1-10.


Declaration of interest statement


The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.