Filter design of delayed static neural networks with Markovian jumping parameters

Neurocomputing 153 (2015) 126–132

Lei Shao (a), He Huang (a,*), Heming Zhao (a), Tingwen Huang (b)

(a) School of Electronics and Information Engineering, Soochow University, Suzhou 215006, PR China
(b) Texas A&M University at Qatar, Doha 5825, Qatar

* Corresponding author. Tel./fax: +86 512 67871211. E-mail address: [email protected] (H. Huang).

http://dx.doi.org/10.1016/j.neucom.2014.11.045

Article history: Received 21 July 2014; received in revised form 23 September 2014; accepted 21 November 2014; available online 27 November 2014. Communicated by Guang Wu Zheng.

Abstract

This paper considers the H∞ filtering problem of static neural networks with Markovian jumping parameters and time-varying delay. A mode- and delay-dependent approach is presented to deal with it. By constructing a stochastic Lyapunov functional with triple-integral terms and employing a recently proposed integral inequality, a design criterion is derived under which the resulting filtering error system is stochastically stable with a guaranteed H∞ performance. Based on it, the proper gain matrices and the optimal H∞ performance index can be efficiently obtained by solving a convex optimization problem subject to some linear matrix inequalities. An advantage of this approach is that most of the Lyapunov matrices are distinct with respect to the system mode, so the choice of these matrices becomes much more flexible. Finally, an example is provided to illustrate the application and effectiveness of the developed result. © 2014 Elsevier B.V. All rights reserved.

Keywords: static neural networks; filter design; Markovian jumping parameters; time-varying delay; integral inequalities

1. Introduction

As is well known, the phenomenon of information latching frequently appears in recurrent neural networks. One promising means of tackling it is to extract a finite state representation such that the neural network under consideration has finitely many modes [1,2]. Furthermore, as discussed in [3–5], the switching between these modes is governed by a Markov chain. Consequently, a class of hybrid neural network models was established and named recurrent neural networks with Markovian jumping parameters (or Markovian jumping recurrent neural networks). On the other hand, time delay is inevitable in real systems owing to signal transmission and processing between components [6–11], and it should also be taken into account in recurrent neural networks [42,12–15]. More details can be found in the recent survey paper [16]. During the past few years, the study of delayed neural networks with Markovian jumping parameters has attracted a great deal of attention, and many interesting results have been reported in the literature (see, e.g., [17–20]).

Generally, it can be very difficult and expensive to acquire complete state information of all neurons in a relatively large-scale delayed neural network, especially when Markovian jumping parameters are involved. However, many neural-network-based applications rely heavily on this information [21,22]. That is to say, in order to achieve special objectives, it is necessary to know the neurons' state information in advance. Therefore, a feasible approach to estimating the states of all neurons is required; the estimated states can then be utilized in practice in place of the true states. Since the seminal work in [23], the problem of state estimation of delayed recurrent neural networks has been widely investigated [24–27]. More recently, the state estimation problem was considered for delayed local field neural networks with Markovian jumping parameters [28–31]. In [29], a mode-dependent approach was presented to study this problem for a class of local field neural networks with Markovian jumping parameters and mixed delays; it was proved that the design criterion in [29] includes the one in [30] as a special case. The authors in [31] discussed the state estimation problem for discrete-time delayed Markovian jumping neural networks, and a delay-dependent condition was provided by means of linear matrix inequalities (LMIs), which can be easily solved by available algorithms [32].

It should be noted that almost all the above results concern local-field-type neural networks. It has been recognized that static neural networks, another important kind of recurrent neural network, are distinct from local field ones [33]. This means that the design criteria in [28–31] cannot be directly applied to delayed static neural networks with Markovian jumping parameters. This is the first motivation of the current study. Meanwhile, in the theory of state estimation of delayed recurrent neural networks, performance analysis should be taken into account. In this framework, we can define an index to evaluate the


performance of various approaches in the literature. As a matter of fact, noise disturbances could be introduced during the hardware implementation of a recurrent neural network [34–36]. From this point of view, it is of great significance to include performance analysis in the state estimation theory of delayed static neural networks with Markovian jumping parameters. This also motivates our study.

In this paper, our attention focuses on the H∞ filtering problem for a class of delayed static neural networks with Markovian jumping parameters. Firstly, the mathematical model of this kind of neural network is presented. Then, inspired by [29], a mode- and delay-dependent approach is proposed to resolve the issue. By defining a suitable stochastic Lyapunov functional with triple-integral terms and employing the Wirtinger inequality [37] to cope with a cross term in the weak infinitesimal operator, a condition is derived such that the filtering error system is stochastically stable with a guaranteed H∞ performance. The optimal H∞ performance index and the gain matrices can then be efficiently worked out by solving an LMI-based convex optimization problem. The advantage of this approach is that most of the Lyapunov matrices to be determined are distinct with respect to the system mode, so the choice of these matrices is more flexible than with common (mode-independent) ones. A numerical example is finally given to demonstrate the effectiveness of our result.

2. Notations and problem description

Let R be the set of real numbers, R^n the n-dimensional Euclidean space, and R^{n×m} the set of all n×m real matrices. For t ≥ 0, r_t, taking values in a finite set M = {1, 2, …, M}, is a right-continuous Markov chain defined on a complete probability space (Ω, F, P). Its transition probabilities between different modes are given by

$$\mathrm{P}\{r_{t+h} = j \mid r_t = i\} = \begin{cases} \pi_{ij} h + o(h), & i \ne j, \\ 1 + \pi_{ii} h + o(h), & i = j, \end{cases}$$

where h > 0, o(h) is a higher-order infinitesimal of h (i.e., lim_{h→0+} o(h)/h = 0), π_ij ≥ 0 for j ≠ i, and for each i ∈ M,

$$\pi_{ii} = -\sum_{j=1,\, j\ne i}^{M} \pi_{ij}.$$
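Concretely, π_ij is the rate of jumping from mode i to mode j, so the chain stays in mode i for an exponentially distributed holding time with rate -π_ii before jumping. The following Python sketch (our own illustration, not part of the paper) samples such a right-continuous mode signal from a given transition rate matrix; the two-mode rates are borrowed from the example in Section 4:

```python
import numpy as np

def simulate_modes(Pi, r0, T, rng):
    """Sample a right-continuous Markov chain r_t on [0, T] from generator Pi.

    Pi[i, j] (j != i) is the jump rate pi_ij, and Pi[i, i] is minus the sum
    of the off-diagonal rates, so each row sums to zero.
    """
    times, modes = [0.0], [r0]
    t, i = 0.0, r0
    while t < T:
        rate = -Pi[i, i]                         # total exit rate of mode i
        if rate <= 0:                            # absorbing mode: stay forever
            break
        t += rng.exponential(1.0 / rate)         # holding time ~ Exp(rate)
        p = Pi[i].copy(); p[i] = 0.0
        i = int(rng.choice(len(p), p=p / rate))  # next mode with prob pi_ij/rate
        times.append(t); modes.append(i)
    return np.array(times), np.array(modes)

# Two modes with pi_12 = 3 and pi_21 = 5, the rates used in Section 4.
Pi = np.array([[-3.0, 3.0],
               [5.0, -5.0]])
times, modes = simulate_modes(Pi, r0=0, T=10.0, rng=np.random.default_rng(0))
print(list(zip(np.round(times, 3), modes))[:5])
```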

C([-d, 0]; R^n) denotes the family of continuous functions from [-d, 0] to R^n, and L_2[0, ∞) is the space of square-integrable vector functions on [0, ∞). For real square matrices X and Y, X > Y (X < Y, X ≥ Y, X ≤ Y) means that X - Y is symmetric and positive definite (negative definite, positive semi-definite, negative semi-definite, respectively). diag{Z_1, Z_2, …, Z_p} is the block diagonal matrix with diagonal blocks Z_1, Z_2, …, Z_p. I is an identity matrix of appropriate dimension. The superscripts "⊤" and "-1" denote the transpose and the inverse of a matrix or vector, respectively. The symbol * stands for a symmetric block in a symmetric matrix, e.g.,

$$\begin{bmatrix} X & Y \\ * & Z \end{bmatrix} = \begin{bmatrix} X & Y \\ Y^{\top} & Z \end{bmatrix}.$$

Matrices, if not explicitly stated, are assumed to have compatible dimensions.

It is known that a delayed static neural network can be expressed as

$$\dot{x}(t) = -A x(t) + f\big(W x(t-d(t)) + J\big), \tag{1}$$

where x(t) = [x_1(t), x_2(t), …, x_n(t)]^⊤ ∈ R^n is the state vector of the neural network, with x_k(t) the state of the kth neuron; A = diag{a_1, a_2, …, a_n} is a diagonal matrix with each element a_k > 0 being the firing rate of neuron k; W = [w_{jk}]_{n×n} is the delayed connection weight matrix; f(x(t)) = [f_1(x_1(t)), f_2(x_2(t)), …, f_n(x_n(t))]^⊤ is a continuous activation function; J = [J_1, J_2, …, J_n]^⊤ is an external input vector; and d(t) is a time-varying delay.

With the rapid development of science and technology, a single neural network may not be a good choice for handling a practical problem with high nonlinearity and complexity. In some real applications, one needs to combine several neural networks to


gain better performance; that is, each of them is designated to solve a specific subproblem of a comprehensive task. This inevitably leads to designing and investigating hybrid neural networks. On the other hand, the so-called information latching phenomenon is frequently encountered in recurrent neural networks. One promising way to tackle this issue is to take a finite state representation of the underlying neural network. In this sense, the network can be viewed as having finitely many modes, and the transition between these modes is governed by a Markov chain [1,4]. As a result, it is reasonable to introduce Markovian jumping parameters into recurrent neural networks with information latching. In recent years, much effort has been devoted to delayed recurrent neural networks with Markovian jumping parameters, and many interesting results are available on stability and passivity analysis and on state estimation of such delayed neural networks [3,5,17–20,28–31]. However, almost all of the results mentioned above concern local-field-type neural networks, and it is well known from [33] that static neural networks are generally not equivalent to local field ones. Based on these observations, in this study we consider delayed static neural networks with Markovian jumping parameters.

Similar to [18,29,30], the mathematical model of a delayed static neural network with Markovian jumping parameters is given by

$$\dot{x}(t) = -A_{r_t} x(t) + f_{r_t}\big(W_{r_t} x(t-d(t)) + J_{r_t}\big) + B_{1 r_t} w(t), \tag{2}$$

$$y(t) = C_{r_t} x(t) + D_{r_t} x(t-d(t)) + B_{2 r_t} w(t), \tag{3}$$

$$z(t) = E_{r_t} x(t), \tag{4}$$

$$x(t) = \phi(t), \quad t \in [-d, 0], \qquad r_t\big|_{t=0} = r_0, \tag{5}$$

where x(t) ∈ R^n and y(t) ∈ R^m are the state vector and the available output measurement, respectively; z(t) ∈ R^p is a linear combination of the states, which is to be estimated; w(t) ∈ R^q is a noise signal in the space L_2[0, ∞); and d(t) is a time-varying delay with an upper bound d > 0. For each r_t ∈ M, f_{r_t} is an activation function, A_{r_t}, W_{r_t}, B_{1r_t}, C_{r_t}, D_{r_t}, B_{2r_t}, and E_{r_t} are known real matrices with compatible dimensions, and J_{r_t} is an external input vector. φ(t) is an initial function belonging to C([-d, 0]; R^n), and r_0 is an initial mode.

In fact, for a fixed r_t, (2) becomes a delayed static neural network. Therefore, the hybrid neural network model (2) can be regarded as a combination of M different delayed static neural networks

$$\dot{x}(t) = -A_i x(t) + f_i\big(W_i x(t-d(t)) + J_i\big) + B_{1i} w(t)$$

with i = 1, 2, …, M, where the switching between them follows the movement of the Markov chain r_t (t ≥ 0). For convenience, for each r_t = i ∈ M, we denote the matrix X_{r_t} by X_i and the activation function f_{r_t}(x(t)) by f_i(x(t)). For example, when r_t = i, the matrices A_{r_t} and B_{1r_t} in (2) are written as A_i and B_{1i}, respectively. As defined before, the activation function f_i(x(t)) in (2) has the form f_i(x(t)) = [f_{i1}(x_1(t)), f_{i2}(x_2(t)), …, f_{in}(x_n(t))]^⊤ with i ∈ M. In this study, the following assumptions are made.

Assumption 1. For each i ∈ M and u ≠ v ∈ R, there are constants l_{ik}^- and l_{ik}^+ such that the activation function f_{ik}(·) satisfies

$$l_{ik}^{-} \le \frac{f_{ik}(u) - f_{ik}(v)}{u - v} \le l_{ik}^{+}, \qquad k = 1, 2, \ldots, n. \tag{6}$$
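For instance, the standard choice f_{ik}(x) = tanh(x) satisfies (6) with l_{ik}^- = 0 and l_{ik}^+ = 1, since by the mean value theorem, for some ξ between u and v,

$$\frac{\tanh u - \tanh v}{u - v} = \operatorname{sech}^{2}\xi \in (0, 1].$$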

Assumption 2. For t > 0, there are constants d > 0 and μ such that the time delay d(t) satisfies

$$0 \le d(t) \le d \quad \text{and} \quad \dot{d}(t) \le \mu. \tag{7}$$


To estimate z(t) in (4) with a guaranteed performance, for each r_t = i ∈ M, a causal filter is constructed as

$$\dot{\hat{x}}(t) = -A_i \hat{x}(t) + f_i\big(W_i \hat{x}(t-d(t)) + J_i\big) + K_i\big(y(t) - C_i \hat{x}(t) - D_i \hat{x}(t-d(t))\big), \tag{8}$$

$$\hat{z}(t) = E_i \hat{x}(t), \tag{9}$$

$$\hat{x}(t) = 0, \quad t \in [-d, 0], \tag{10}$$

where x̂(t) ∈ R^n, ẑ(t) ∈ R^p, and K_i (i = 1, 2, …, M) are the gain matrices to be designed later.

Define the error signals e(t) = x(t) - x̂(t) and z̄(t) = z(t) - ẑ(t). It immediately follows from (2)–(4), (8), and (9) that the filtering error system is

$$\dot{e}(t) = -(A_i + K_i C_i) e(t) - K_i D_i e(t-d(t)) + g_i\big(W_i e(t-d(t))\big) + (B_{1i} - K_i B_{2i}) w(t), \tag{11}$$

$$\bar{z}(t) = E_i e(t), \tag{12}$$

where g_i(W_i e(t-d(t))) = f_i(W_i x(t-d(t)) + J_i) - f_i(W_i x̂(t-d(t)) + J_i).

Remark 1. Although it is assumed that the time delay in the considered hybrid neural network is mode-independent, the approach can easily be extended to the case of mode-dependent time delays by following the idea in [29]. For simplicity of notation, only the mode-independent time delay is considered in this study.

Definition 1. For any initial function φ ∈ C([-d, 0]; R^n) and initial mode r_0 ∈ M, if

$$\lim_{t\to\infty} \mathrm{E}\left\{\int_0^t e^{\top}(s) e(s)\,ds\right\} < \infty$$

is satisfied, the filtering error system (11) with w(t) ≡ 0 is said to be stochastically stable.

The H∞ filtering problem to be addressed for the delayed static neural network with Markovian jumping parameters (2)–(5) via the filter (8)–(10) is specified as follows. For a prescribed level ρ > 0 of noise attenuation, we are required to find suitable gain matrices K_i (i = 1, 2, …, M) achieving two objectives: (i) when w(t) ≡ 0, the filtering error system (11) is stochastically stable; and (ii) for any nonzero w(t) ∈ L_2[0, ∞), the inequality

$$\|\bar{z}\|_{E2} < \rho \|w\|_2 \tag{13}$$

holds under the zero-initial condition (i.e., φ(t) = 0 for t ∈ [-d, 0]), where the two norms are defined by

$$\|\bar{z}\|_{E2} = \left\{\mathrm{E}\int_0^{\infty} \bar{z}^{\top}(t)\bar{z}(t)\,dt\right\}^{1/2}, \qquad \|w\|_2 = \left\{\int_0^{\infty} w^{\top}(t) w(t)\,dt\right\}^{1/2}.$$

3. H∞ filtering criterion

In this section, a mode- and delay-dependent approach is presented to handle the H∞ filtering problem of the delayed static neural network with Markovian jumping parameters (2)–(5). A design criterion is established by means of LMIs such that the two objectives defined above are ensured. It is shown that the gain matrices and the optimal H∞ performance index are obtained by solving a convex optimization problem subject to some LMI-based constraints.

First of all, for -d ≤ s ≤ 0, let e_t = e(t+s). According to [28,38,39], it is known that {e_t, r_t} (t ≥ 0) is a Markov process. For each r_t = i, the weak infinitesimal operator L of a stochastic Lyapunov functional V(t, e_t, i) is given by

$$\mathcal{L}V(t, e_t, i) = \lim_{h\to 0^{+}} \frac{1}{h}\Big[\mathrm{E}\big\{V(t+h, e_{t+h}, r_{t+h}) \mid e_t, r_t = i\big\} - V(t, e_t, i)\Big].$$

Then, by the well-known Dynkin formula,

$$\mathrm{E}V(t, e_t, r_t) = V(0, e_0, r_0) + \mathrm{E}\left\{\int_0^t \mathcal{L}V(s, e_s, r_s)\,ds\right\}.$$

Lemma 1 (Jensen inequality [6]). For any given real matrix X ∈ R^{n×n} with X > 0, scalars a < b, and continuous function ω: [a, b] → R^n, one has

$$\int_a^b \omega^{\top}(s) X \omega(s)\,ds \ge \frac{1}{b-a}\left(\int_a^b \omega(s)\,ds\right)^{\top} X \left(\int_a^b \omega(s)\,ds\right). \tag{14}$$

Lemma 2 (Wirtinger inequality [37]). For any given real matrix Y ∈ R^{n×n} with Y > 0, scalars a < b, and continuously differentiable function ω: [a, b] → R^n, one has

$$\int_a^b \dot{\omega}^{\top}(s) Y \dot{\omega}(s)\,ds \ge \frac{1}{b-a}\big(\omega(b)-\omega(a)\big)^{\top} Y \big(\omega(b)-\omega(a)\big) + \frac{3}{b-a}\,\Omega^{\top} Y \Omega, \tag{15}$$

where

$$\Omega = \omega(b) + \omega(a) - \frac{2}{b-a}\int_a^b \omega(s)\,ds.$$
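Both bounds are easy to sanity-check numerically. The sketch below (our own illustration; the grid, weight matrix, and test function are arbitrary choices, not from the paper) compares the two sides of (14) and (15) for a random cubic ω via trapezoidal quadrature:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, n = 0.0, 1.5, 3
M = rng.standard_normal((n, n))
X = M @ M.T + n * np.eye(n)                       # a positive definite weight

C = rng.standard_normal((4, n))                   # coefficients of a random cubic
s = np.linspace(a, b, 20001)
h = s[1] - s[0]
w = np.vander(s, 4, increasing=True) @ C          # omega(s), shape (N, n)
dw = (np.vander(s, 3, increasing=True) * np.array([1, 2, 3])) @ C[1:]  # omega'(s)

def integrate(f):                                 # trapezoidal rule, uniform grid
    return h * (f.sum(axis=0) - 0.5 * (f[0] + f[-1]))

# Jensen (14): int w' X w ds >= (1/(b-a)) (int w)' X (int w)
lhs14 = integrate(np.einsum('ti,ij,tj->t', w, X, w))
iw = integrate(w)
rhs14 = iw @ X @ iw / (b - a)

# Wirtinger (15) with Omega = w(b) + w(a) - (2/(b-a)) int w
lhs15 = integrate(np.einsum('ti,ij,tj->t', dw, X, dw))
diff = w[-1] - w[0]
Om = w[-1] + w[0] - 2.0 * iw / (b - a)
rhs15 = (diff @ X @ diff + 3.0 * Om @ X @ Om) / (b - a)

print(lhs14 >= rhs14, lhs15 >= rhs15)             # expected: True True
```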

Define

$$L_i^{-} = \mathrm{diag}\{l_{i1}^{-}, l_{i2}^{-}, \ldots, l_{in}^{-}\}, \qquad L_i^{+} = \mathrm{diag}\{l_{i1}^{+}, l_{i2}^{+}, \ldots, l_{in}^{+}\}.$$

The following theorem gives a delay- and mode-dependent result on the H∞ filter design of the delayed static neural network with Markovian jumping parameters (2)–(5).

Theorem 1. For given scalars d > 0 and μ in (7), let ρ > 0 be a prescribed level of noise attenuation. The H∞ filtering problem of the delayed static neural network with Markovian jumping parameters (2)–(5) is solvable if there exist real matrices P_i > 0, Q_{1i} > 0, Q_{2i} > 0, Q > 0, R_i > 0, R > 0, S_i > 0, S > 0, diagonal matrices Γ_i = diag{γ_{i1}, γ_{i2}, …, γ_{in}} > 0, and matrices X_i such that the following LMIs are satisfied for all i ∈ M:

$$\sum_{j=1,\, j\ne i}^{M} \pi_{ij} Q_{1j} + \sum_{j=1}^{M} \pi_{ij} Q_{2j} \le Q, \tag{16}$$

$$\sum_{j=1}^{M} \pi_{ij} R_j \le R, \tag{17}$$

$$\sum_{j=1}^{M} \pi_{ij} S_j \le S, \tag{18}$$


$$\begin{bmatrix}
\Omega_{i1} & -\frac{2}{d}S_i & -X_i D_i & \frac{6}{d^2}S_i & P_i & \Omega_{i2} & \Omega_{i3} \\
* & -Q_{2i}-\frac{4}{d}S_i & 0 & \frac{6}{d^2}S_i & 0 & 0 & 0 \\
* & * & \Omega_{i4} & 0 & \Omega_{i5} & 0 & -d D_i^{\top} X_i^{\top} \\
* & * & * & \Omega_{i6} & 0 & 0 & 0 \\
* & * & * & * & -2\Gamma_i & 0 & d P_i \\
* & * & * & * & * & -\rho^2 I & \Omega_{i7} \\
* & * & * & * & * & * & \Omega_{i8}
\end{bmatrix} < 0, \tag{19}$$

where

$$\begin{aligned}
\Omega_{i1} &= -P_i A_i - A_i^{\top} P_i - X_i C_i - C_i^{\top} X_i^{\top} + \sum_{j=1}^{M} \pi_{ij} P_j + Q_{1i} + dQ + Q_{2i} + d R_i + \tfrac{1}{2} d^2 R - \tfrac{4}{d} S_i + E_i^{\top} E_i, \\
\Omega_{i2} &= P_i B_{1i} - X_i B_{2i}, \qquad \Omega_{i3} = -d A_i^{\top} P_i - d C_i^{\top} X_i^{\top}, \\
\Omega_{i4} &= -(1-\mu) Q_{1i} - 2 W_i^{\top} L_i^{-} \Gamma_i L_i^{+} W_i, \qquad \Omega_{i5} = W_i^{\top} (L_i^{-} + L_i^{+}) \Gamma_i, \\
\Omega_{i6} &= -\tfrac{1}{d} R_i - \tfrac{12}{d^3} S_i, \qquad \Omega_{i7} = d B_{1i}^{\top} P_i - d B_{2i}^{\top} X_i^{\top}, \qquad \Omega_{i8} = -2 P_i + \tfrac{1}{d} S_i + \tfrac{1}{2} S.
\end{aligned} \tag{20}$$

Then the gain matrices K_i (i = 1, 2, …, M) can be designed as

$$K_i = P_i^{-1} X_i.$$

Proof. Since π_ij ≥ 0 for i ≠ j and π_ii = -∑_{j=1, j≠i}^M π_ij < 0, for Q_{1j} > 0 (j = 1, 2, …, M) one has

$$\sum_{j=1}^{M} \pi_{ij} \int_{t-d(t)}^{t} e^{\top}(s) Q_{1j} e(s)\,ds \le \sum_{j=1,\, j\ne i}^{M} \pi_{ij} \int_{t-d(t)}^{t} e^{\top}(s) Q_{1j} e(s)\,ds \le \sum_{j=1,\, j\ne i}^{M} \pi_{ij} \int_{t-d}^{t} e^{\top}(s) Q_{1j} e(s)\,ds. \tag{21}$$

It immediately follows from (16) and (21) that

$$\sum_{j=1}^{M} \pi_{ij} \int_{t-d(t)}^{t} e^{\top}(s) Q_{1j} e(s)\,ds + \sum_{j=1}^{M} \pi_{ij} \int_{t-d}^{t} e^{\top}(s) Q_{2j} e(s)\,ds - \int_{t-d}^{t} e^{\top}(s) Q e(s)\,ds \le 0. \tag{22}$$

Similarly, based on (17) and (18), one can easily verify that

$$\sum_{j=1}^{M} \pi_{ij} \int_{-d}^{0}\!\!\int_{t+\theta}^{t} e^{\top}(s) R_j e(s)\,ds\,d\theta - \int_{-d}^{0}\!\!\int_{t+\theta}^{t} e^{\top}(s) R e(s)\,ds\,d\theta \le 0, \tag{23}$$

$$\sum_{j=1}^{M} \pi_{ij} \int_{-d}^{0}\!\!\int_{t+\theta}^{t} \dot{e}^{\top}(s) S_j \dot{e}(s)\,ds\,d\theta - \int_{-d}^{0}\!\!\int_{t+\theta}^{t} \dot{e}^{\top}(s) S \dot{e}(s)\,ds\,d\theta \le 0. \tag{24}$$

By Lemmas 1 and 2, it is known that for R_i > 0 and S_i > 0,

$$-\int_{t-d}^{t} e^{\top}(s) R_i e(s)\,ds \le -\frac{1}{d}\left(\int_{t-d}^{t} e(s)\,ds\right)^{\top} R_i \left(\int_{t-d}^{t} e(s)\,ds\right), \tag{25}$$

$$-\int_{t-d}^{t} \dot{e}^{\top}(s) S_i \dot{e}(s)\,ds \le -\frac{1}{d}\big(e(t)-e(t-d)\big)^{\top} S_i \big(e(t)-e(t-d)\big) - \frac{3}{d}\Big(e(t)+e(t-d)-\frac{2}{d}\int_{t-d}^{t} e(s)\,ds\Big)^{\top} S_i \Big(e(t)+e(t-d)-\frac{2}{d}\int_{t-d}^{t} e(s)\,ds\Big). \tag{26}$$

From (6), for any diagonal matrix Γ_i = diag{γ_{i1}, γ_{i2}, …, γ_{in}} > 0, one has

$$0 \le -2 g_i^{\top}\big(W_i e(t-d(t))\big)\,\Gamma_i\, g_i\big(W_i e(t-d(t))\big) + 2 g_i^{\top}\big(W_i e(t-d(t))\big)\,\Gamma_i (L_i^{-} + L_i^{+}) W_i e(t-d(t)) - 2 e^{\top}(t-d(t)) W_i^{\top} L_i^{-} \Gamma_i L_i^{+} W_i e(t-d(t)). \tag{27}$$

Let S̄_i = (1/d)S_i + (1/2)S. In view of the fact that -P_i S̄_i^{-1} P_i ≤ -2P_i + S̄_i, one has from (19) that

$$\begin{bmatrix}
\Omega_{i1} & -\frac{2}{d}S_i & -X_i D_i & \frac{6}{d^2}S_i & P_i & \Omega_{i2} & \Omega_{i3} \\
* & -Q_{2i}-\frac{4}{d}S_i & 0 & \frac{6}{d^2}S_i & 0 & 0 & 0 \\
* & * & \Omega_{i4} & 0 & \Omega_{i5} & 0 & -d D_i^{\top} X_i^{\top} \\
* & * & * & \Omega_{i6} & 0 & 0 & 0 \\
* & * & * & * & -2\Gamma_i & 0 & d P_i \\
* & * & * & * & * & -\rho^2 I & \Omega_{i7} \\
* & * & * & * & * & * & -P_i \bar{S}_i^{-1} P_i
\end{bmatrix} < 0. \tag{28}$$

By noting (20), and pre- and post-multiplying (28) by diag{I, I, I, I, I, I, S̄_i P_i^{-1}} and its transpose, respectively, it is known that

$$\begin{bmatrix}
\bar{\Omega}_{i1} & -\frac{2}{d}S_i & -P_i K_i D_i & \frac{6}{d^2}S_i & P_i & \bar{\Omega}_{i2} & \bar{\Omega}_{i3} \\
* & -Q_{2i}-\frac{4}{d}S_i & 0 & \frac{6}{d^2}S_i & 0 & 0 & 0 \\
* & * & \Omega_{i4} & 0 & \Omega_{i5} & 0 & -d D_i^{\top} K_i^{\top} \bar{S}_i \\
* & * & * & \Omega_{i6} & 0 & 0 & 0 \\
* & * & * & * & -2\Gamma_i & 0 & d \bar{S}_i \\
* & * & * & * & * & -\rho^2 I & \bar{\Omega}_{i7} \\
* & * & * & * & * & * & -\bar{S}_i
\end{bmatrix} < 0, \tag{29}$$

where

$$\begin{aligned}
\bar{\Omega}_{i1} &= -P_i A_i - A_i^{\top} P_i - P_i K_i C_i - C_i^{\top} K_i^{\top} P_i + \sum_{j=1}^{M} \pi_{ij} P_j + Q_{1i} + dQ + Q_{2i} + d R_i + \tfrac{1}{2} d^2 R - \tfrac{4}{d} S_i + E_i^{\top} E_i, \\
\bar{\Omega}_{i2} &= P_i B_{1i} - P_i K_i B_{2i}, \qquad \bar{\Omega}_{i3} = -d A_i^{\top} \bar{S}_i - d C_i^{\top} K_i^{\top} \bar{S}_i, \qquad \bar{\Omega}_{i7} = d B_{1i}^{\top} \bar{S}_i - d B_{2i}^{\top} K_i^{\top} \bar{S}_i.
\end{aligned}$$

Let

$$\Sigma_{1i} = \begin{bmatrix}
\bar{\Omega}_{i1} & -\frac{2}{d}S_i & -P_i K_i D_i & \frac{6}{d^2}S_i & P_i & \bar{\Omega}_{i2} \\
* & -Q_{2i}-\frac{4}{d}S_i & 0 & \frac{6}{d^2}S_i & 0 & 0 \\
* & * & \Omega_{i4} & 0 & \Omega_{i5} & 0 \\
* & * & * & \Omega_{i6} & 0 & 0 \\
* & * & * & * & -2\Gamma_i & 0 \\
* & * & * & * & * & -\rho^2 I
\end{bmatrix},$$

$$\Sigma_{2i} = \big[\, -(A_i + K_i C_i) \;\; 0 \;\; -K_i D_i \;\; 0 \;\; I \;\; B_{1i} - K_i B_{2i} \,\big].$$

By Schur complement, it follows from (29) that

$$\Sigma_{1i} + d^2 \Sigma_{2i}^{\top} \bar{S}_i \Sigma_{2i} \le 0. \tag{30}$$

One can also derive from (29) that

$$\bar{\Sigma}_{1i} + d^2 \bar{\Sigma}_{2i}^{\top} \bar{S}_i \bar{\Sigma}_{2i} \le 0 \tag{31}$$

with

$$\bar{\Sigma}_{1i} = \begin{bmatrix}
\bar{\Omega}_{i1} & -\frac{2}{d}S_i & -P_i K_i D_i & \frac{6}{d^2}S_i & P_i \\
* & -Q_{2i}-\frac{4}{d}S_i & 0 & \frac{6}{d^2}S_i & 0 \\
* & * & \Omega_{i4} & 0 & \Omega_{i5} \\
* & * & * & \Omega_{i6} & 0 \\
* & * & * & * & -2\Gamma_i
\end{bmatrix},$$

$$\bar{\Sigma}_{2i} = \big[\, -(A_i + K_i C_i) \;\; 0 \;\; -K_i D_i \;\; 0 \;\; I \,\big].$$

The proof of this theorem is divided into two parts: (i) proof of the inequality (13); and (ii) proof of stochastic stability of the resulting filtering error system (11) with w(t) ≡ 0.

To verify (13), for each i ∈ M, a stochastic Lyapunov functional is constructed as

$$\begin{aligned}
V(t, e_t, i) ={}& e^{\top}(t) P_i e(t) + \int_{t-d(t)}^{t} e^{\top}(s) Q_{1i} e(s)\,ds + \int_{t-d}^{t} e^{\top}(s) Q_{2i} e(s)\,ds + \int_{-d}^{0}\!\!\int_{t+\theta}^{t} e^{\top}(s) Q e(s)\,ds\,d\theta \\
&+ \int_{-d}^{0}\!\!\int_{t+\theta}^{t} e^{\top}(s) R_i e(s)\,ds\,d\theta + \int_{-d}^{0}\!\!\int_{\theta}^{0}\!\!\int_{t+\alpha}^{t} e^{\top}(s) R e(s)\,ds\,d\alpha\,d\theta \\
&+ \int_{-d}^{0}\!\!\int_{t+\theta}^{t} \dot{e}^{\top}(s) S_i \dot{e}(s)\,ds\,d\theta + \int_{-d}^{0}\!\!\int_{\theta}^{0}\!\!\int_{t+\alpha}^{t} \dot{e}^{\top}(s) S \dot{e}(s)\,ds\,d\alpha\,d\theta.
\end{aligned} \tag{32}$$

Then one can deduce that

$$\begin{aligned}
\mathcal{L}V(t, e_t, i) ={}& 2 e^{\top}(t) P_i \big[ -(A_i + K_i C_i) e(t) - K_i D_i e(t-d(t)) + g_i\big(W_i e(t-d(t))\big) + (B_{1i} - K_i B_{2i}) w(t) \big] \\
&+ \sum_{j=1}^{M} \pi_{ij}\, e^{\top}(t) P_j e(t) + e^{\top}(t) Q_{1i} e(t) - (1-\dot{d}(t))\, e^{\top}(t-d(t)) Q_{1i} e(t-d(t)) + \sum_{j=1}^{M} \pi_{ij} \int_{t-d(t)}^{t} e^{\top}(s) Q_{1j} e(s)\,ds \\
&+ e^{\top}(t) Q_{2i} e(t) - e^{\top}(t-d) Q_{2i} e(t-d) + \sum_{j=1}^{M} \pi_{ij} \int_{t-d}^{t} e^{\top}(s) Q_{2j} e(s)\,ds + d\, e^{\top}(t) Q e(t) - \int_{t-d}^{t} e^{\top}(s) Q e(s)\,ds \\
&+ d\, e^{\top}(t) R_i e(t) - \int_{t-d}^{t} e^{\top}(s) R_i e(s)\,ds + \sum_{j=1}^{M} \pi_{ij} \int_{-d}^{0}\!\!\int_{t+\theta}^{t} e^{\top}(s) R_j e(s)\,ds\,d\theta \\
&+ \tfrac{1}{2} d^2\, e^{\top}(t) R e(t) - \int_{-d}^{0}\!\!\int_{t+\theta}^{t} e^{\top}(s) R e(s)\,ds\,d\theta + d\, \dot{e}^{\top}(t) S_i \dot{e}(t) - \int_{t-d}^{t} \dot{e}^{\top}(s) S_i \dot{e}(s)\,ds \\
&+ \sum_{j=1}^{M} \pi_{ij} \int_{-d}^{0}\!\!\int_{t+\theta}^{t} \dot{e}^{\top}(s) S_j \dot{e}(s)\,ds\,d\theta + \tfrac{1}{2} d^2\, \dot{e}^{\top}(t) S \dot{e}(t) - \int_{-d}^{0}\!\!\int_{t+\theta}^{t} \dot{e}^{\top}(s) S \dot{e}(s)\,ds\,d\theta.
\end{aligned} \tag{33}$$

By taking (21)–(27) and (30) into account, it is not difficult to obtain from (33) that

$$\mathcal{L}V(t, e_t, i) + \bar{z}^{\top}(t)\bar{z}(t) - \rho^2 w^{\top}(t) w(t) \le \chi_i^{\top}(t)\big[\Sigma_{1i} + d^2 \Sigma_{2i}^{\top} \bar{S}_i \Sigma_{2i}\big] \chi_i(t) < 0 \tag{34}$$

holds for any nonzero w(t), where

$$\chi_i(t) = \Big[\, e^{\top}(t),\; e^{\top}(t-d),\; e^{\top}(t-d(t)),\; \int_{t-d}^{t} e^{\top}(s)\,ds,\; g_i^{\top}\big(W_i e(t-d(t))\big),\; w^{\top}(t) \,\Big]^{\top}.$$

It is further known from (32) that, under the zero-initial condition, V(0, e_0, r_0) = 0 and EV(t, e_t, r_t) ≥ 0. According to the Dynkin formula, it is obvious that

$$\mathrm{E}\left\{\int_0^t \mathcal{L}V(s, e_s, r_s)\,ds\right\} \ge 0. \tag{35}$$

For t > 0, define an H∞ index function as

$$J_1(t) = \mathrm{E}\left\{\int_0^t \big[\bar{z}^{\top}(s)\bar{z}(s) - \rho^2 w^{\top}(s) w(s)\big]\,ds\right\}. \tag{36}$$

It follows from (34) and (35) that for any t > 0 and nonzero w(t),

$$J_1(t) \le \mathrm{E}\left\{\int_0^t \big[\mathcal{L}V(s, e_s, r_s) + \bar{z}^{\top}(s)\bar{z}(s) - \rho^2 w^{\top}(s) w(s)\big]\,ds\right\} < 0. \tag{37}$$

It thus means that (13) is satisfied for any w(t) ≠ 0 under the zero-initial condition φ(t) = 0.

When w(t) ≡ 0, the filtering error system (11) is of the form

$$\dot{e}(t) = -(A_i + K_i C_i) e(t) - K_i D_i e(t-d(t)) + g_i\big(W_i e(t-d(t))\big). \tag{38}$$

To show the stochastic stability of system (38), we still adopt the stochastic Lyapunov functional (32) and calculate the weak infinitesimal operator LV(t, e_t, i) with respect to (38). In this case, by following a similar line to the derivation of (34), one can also deduce from (31) that for each i ∈ M,

$$\mathcal{L}V(t, e_t, i) \le \bar{\chi}_i^{\top}(t)\big[\bar{\Sigma}_{1i} + d^2 \bar{\Sigma}_{2i}^{\top} \bar{S}_i \bar{\Sigma}_{2i}\big] \bar{\chi}_i(t) \le 0, \tag{39}$$

where

$$\bar{\chi}_i(t) = \Big[\, e^{\top}(t),\; e^{\top}(t-d),\; e^{\top}(t-d(t)),\; \int_{t-d}^{t} e^{\top}(s)\,ds,\; g_i^{\top}\big(W_i e(t-d(t))\big) \,\Big]^{\top}.$$

Therefore, based on the Dynkin formula and Definition 1, the filtering error system (11) with w(t) ≡ 0 is stochastically stable. This completes the proof. □

Remark 2. It is seen from (16)–(19) that a mode- and delay-dependent design criterion is provided in Theorem 1. In the stochastic Lyapunov functional (32), two triple-integral terms are introduced. The purpose of constructing such a stochastic Lyapunov functional is to make most of the Lyapunov matrices depend on the mode i. In fact, except for Q, R and S, the Lyapunov matrices in (32) are distinct for different modes. For example, to let the Lyapunov matrix R_i in ∫_{-d}^{0}∫_{t+θ}^{t} e^⊤(s)R_i e(s) ds dθ depend on the system mode i, the triple-integral term ∫_{-d}^{0}∫_{θ}^{0}∫_{t+α}^{t} e^⊤(s)R e(s) ds dα dθ is simultaneously introduced. As shown in the proof, the term ∑_{j=1}^{M} π_ij ∫_{-d}^{0}∫_{t+θ}^{t} e^⊤(s)R_j e(s) ds dθ is eliminated by -∫_{-d}^{0}∫_{t+θ}^{t} e^⊤(s)R e(s) ds dθ when (17) is satisfied. As a matter of fact, mode-dependent Lyapunov matrices are more general than common ones (i.e., a single matrix chosen for all modes); in this sense, the choice of Lyapunov matrices in Theorem 1 is much more flexible. In addition, the Wirtinger inequality is utilized to deal with the term -∫_{t-d}^{t} ė^⊤(s)S_i ė(s) ds in the weak infinitesimal operator. As a result, good performance can be achieved by Theorem 1.

Remark 3. Since the conditions in Theorem 1 are expressed in terms of LMIs, the optimal H∞ performance index can be efficiently found by resorting to standard algorithms [32] to solve the following convex optimization problem:

min ρ² subject to the LMIs (16)–(19).

Remark 4. Recently, some works have been reported on the stabilization of delayed neural networks via state feedback control. However, as discussed before, it may be difficult to fully know the states of all neurons. In such a circumstance, one could make use of the estimated states to achieve practical objectives [21]. Therefore, it is expected that the filtering result for delayed static neural networks with Markovian jumping parameters derived in this study will have potential applications in many fields, including state feedback control and signal processing [40,41].
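As a rough illustration of how the convex program in Remark 3 is coded in practice, the sketch below sets up conditions (16)–(18) with cvxpy (our tooling choice; the paper itself only refers to standard LMI algorithms [32]) and checks their feasibility. The complete design would additionally assemble the block LMI (19) for each mode and minimize ρ² over it:

```python
import cvxpy as cp
import numpy as np

n, M = 2, 2
Pi = np.array([[-3.0, 3.0],
               [5.0, -5.0]])                      # transition rates from Section 4

# Mode-dependent and common Lyapunov matrices as symmetric PSD variables.
Q1 = [cp.Variable((n, n), PSD=True) for _ in range(M)]
Q2 = [cp.Variable((n, n), PSD=True) for _ in range(M)]
Rm = [cp.Variable((n, n), PSD=True) for _ in range(M)]
Sm = [cp.Variable((n, n), PSD=True) for _ in range(M)]
Q = cp.Variable((n, n), PSD=True)
R = cp.Variable((n, n), PSD=True)
S = cp.Variable((n, n), PSD=True)

eps = 1e-6 * np.eye(n)                            # margin for strict definiteness
cons = [Q >> eps, R >> eps, S >> eps]
for i in range(M):
    # (16): sum_{j != i} pi_ij Q1_j + sum_j pi_ij Q2_j <= Q
    lhs = sum(Pi[i, j] * Q1[j] for j in range(M) if j != i) \
        + sum(Pi[i, j] * Q2[j] for j in range(M))
    cons.append(Q - lhs >> 0)
    # (17) and (18): sum_j pi_ij R_j <= R and sum_j pi_ij S_j <= S
    cons.append(R - sum(Pi[i, j] * Rm[j] for j in range(M)) >> 0)
    cons.append(S - sum(Pi[i, j] * Sm[j] for j in range(M)) >> 0)
    cons += [Q1[i] >> eps, Q2[i] >> eps, Rm[i] >> eps, Sm[i] >> eps]

# The full design would also build the 7x7 block LMI (19) per mode with
# cp.bmat([[...], ...]) and minimize rho**2 subject to all constraints.
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
print(prob.status)                                # expected: optimal (feasible)
```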


4. An illustrative example

Here, we give a numerical example to illustrate the effectiveness of Theorem 1 for the H∞ filter design of delayed static neural networks with Markovian jumping parameters. Assume that the neural network (2)–(5) has the following coefficients:

$$A_1 = \begin{bmatrix} 0.74 & 0 \\ 0 & 0.98 \end{bmatrix}, \quad W_1 = \begin{bmatrix} 0.32 & -0.17 \\ 0.29 & 0.43 \end{bmatrix}, \quad B_{11} = \begin{bmatrix} -0.05 & 0.21 \\ 0.13 & -0.32 \end{bmatrix},$$

$$C_1 = [\,0.20 \;\; -0.11\,], \quad D_1 = [\,-0.10 \;\; 0.14\,], \quad B_{21} = [\,0.08 \;\; 0.25\,],$$

$$E_1 = \begin{bmatrix} 0.78 & 0.53 \\ -1.02 & 0.46 \end{bmatrix}, \quad L_1^{-} = 0.3 I, \quad L_1^{+} = 0.8 I,$$

$$A_2 = \begin{bmatrix} 0.82 & 0 \\ 0 & 0.67 \end{bmatrix}, \quad W_2 = \begin{bmatrix} -0.13 & 0.74 \\ -0.48 & -0.17 \end{bmatrix}, \quad B_{12} = \begin{bmatrix} 0.12 & -0.30 \\ -0.54 & 0.06 \end{bmatrix},$$

$$C_2 = [\,0.26 \;\; 0.09\,], \quad D_2 = [\,0.37 \;\; -0.51\,], \quad B_{22} = [\,0.18 \;\; -0.20\,],$$

$$E_2 = \begin{bmatrix} 0.63 & 0.35 \\ 0.99 & -0.41 \end{bmatrix}, \quad L_2^{-} = 0.2 I, \quad L_2^{+} = 0.6 I.$$

Firstly, let π_11 = -π_12 = -3, π_21 = -π_22 = 5, and μ = 0.3. Varying the upper bound d of the time delay, the optimal H∞ performance indices and gain matrices obtained from Theorem 1 are presented in Table 1.

Table 1. The optimal H∞ performance indices ρ_min and gain matrices K_1, K_2 for different d.

d        0.4                 0.6                 0.8                 1.0                 1.2
ρ_min    1.6684              2.3948              3.3094              5.4958              53.6441
K_1^⊤    (0.1352, -0.7122)   (-0.4529, -0.1629)  (-1.0344, 0.5870)   (-0.9572, 0.8191)   (-0.8822, 1.0447)
K_2^⊤    (-0.3460, -0.2142)  (-0.3548, -0.2012)  (-0.3671, -0.1829)  (-0.4000, -0.1341)  (-0.4364, -0.0803)

Next, let d = 0.9, μ = 0.8, and π_11 = -π_12 = -0.5. For different π_22, the optimal H∞ performance indices and suitable gain matrices can also be obtained; the results are summarized in Table 2.

Table 2. The optimal H∞ performance indices ρ_min and gain matrices K_1, K_2 for different π_22.

π_22     -0.1                -0.3                -0.5                -0.7                -0.9
ρ_min    2.9829              4.8477              8.1468              14.5513             30.1896
K_1^⊤    (-0.9287, 0.9047)   (-0.8953, 1.0052)   (-0.8832, 1.0416)   (-0.8665, 1.0921)   (-0.8509, 1.1388)
K_2^⊤    (-0.4019, -0.1313)  (-0.4116, -0.1170)  (-0.4246, -0.0977)  (-0.4354, -0.0816)  (-0.4444, -0.0683)

The numerical results in Tables 1 and 2 confirm that Theorem 1 is effective for the H∞ filter design of delayed static neural networks with Markovian jumping parameters.

5. Conclusion

In this paper, the H∞ filtering problem has been addressed for a class of static neural networks with Markovian jumping parameters and time-varying delay. Based on a stochastic Lyapunov functional with triple-integral terms, a mode- and delay-dependent condition has been established by means of LMIs. It has been shown that the design of the desired gain matrices of a causal filter reduces to solving a convex optimization problem, which is readily handled by available algorithms. It should be pointed out that most of the Lyapunov matrices are distinct with respect to the mode, so the choice of these Lyapunov matrices is flexible and the result is of practical value. Finally, an example with numerical results has been provided to verify the effectiveness of the developed design criterion.

In this paper, it has been assumed that all the transition probabilities between different modes are known. When only partial information of the transition probability matrix is available, how to design a suitable filter is an interesting issue. On the other hand, it is of great significance to develop the potential applications of the filtering theory of this kind of hybrid neural network in engineering fields such as feedback control and signal processing. These will be our future research topics.

Acknowledgments

The authors would like to thank the associate editor and the anonymous reviewers for their constructive comments, which have greatly improved the quality of this paper. This work was jointly supported by the National Natural Science Foundation of China under Grant nos. 61005047, 61372146 and 61273122, and the Natural Science Foundation of Jiangsu Province of China under Grant no. BK2010214. This publication was also made possible by NPRP Grant #4-1162-1-181 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.

References

[1] P. Tino, M. Cernansky, L. Benuskova, Markovian architectural bias of recurrent neural networks, IEEE Trans. Neural Netw. 15 (1) (2004) 6–15.
[2] M. Kovacic, Timetable construction with Markovian neural network, Eur. J. Oper. Res. 69 (1) (1993) 92–96.
[3] Q. Zhu, J. Cao, Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays, IEEE Trans. Syst. Man Cybern. B 41 (2) (2011) 341–353.
[4] X. Yang, J. Cao, J. Lu, Synchronization of Markovian coupled neural networks with nonidentical node-delays and random coupling strengths, IEEE Trans. Neural Netw. Learn. Syst. 23 (1) (2012) 60–71.
[5] Y. Zhao, L. Zhang, S. Shen, H. Gao, Robust stability criterion for discrete-time uncertain Markovian jumping neural networks with defective statistics of modes transitions, IEEE Trans. Neural Netw. 22 (1) (2011) 164–170.
[6] K. Gu, V.L. Kharitonov, J. Chen, Stability of Time-delay Systems, Birkhauser, Massachusetts, 2003.
[7] J.K. Hale, Theory of Functional Differential Equations, Springer, New York, 1977.
[8] S. Xu, J. Lam, A survey of linear matrix inequality techniques in stability analysis of delay systems, Int. J. Syst. Sci. 39 (1) (2008) 1095–1113.
[9] R. Lu, Y. Xu, A. Xue, H∞ filtering for singular systems with communication delays, Signal Process. 90 (4) (2010) 1240–1248.
[10] R. Lu, H. Wu, J. Bai, New delay-dependent robust stability criteria for uncertain neutral systems with mixed delays, J. Frankl. Inst. 351 (3) (2014) 1386–1399.
[11] R. Lu, H. Li, Y. Zhu, Quantized H∞ filtering for singular time-varying delay systems with unreliable communication channel, Circuits Syst. Signal Process. 31 (2) (2012) 521–538.
[12] O. Faydasicok, S. Arik, A new upper bound for the norm of interval matrices with application to robust stability analysis of delayed neural networks, Neural Netw. 44 (2013) 64–71.
[13] Z. Zeng, W.X. Zheng, Multistability of neural networks with time-varying delays and concave–convex characteristics, IEEE Trans. Neural Netw. Learn. Syst. 23 (2) (2012) 293–305.
[14] X. Li, H. Gao, X. Yu, A unified approach to the stability of generalized static neural networks with linear fractional uncertainties and delays, IEEE Trans. Syst. Man Cybern. B 41 (2011) 1275–1286.
[15] Z.-G. Wu, P. Shi, H. Su, J. Chu, Exponential synchronization of neural networks with discrete and distributed delays under time-varying sampling, IEEE Trans. Neural Netw. Learn. Syst. 23 (9) (2012) 1368–1376.
[16] H. Zhang, Z. Wang, D. Liu, A comprehensive review of stability analysis of continuous-time recurrent neural networks, IEEE Trans. Neural Netw. Learn. Syst. 25 (7) (2014) 1229–1262.
[17] Q. Ma, S. Xu, Y. Zou, Stability and synchronization for Markovian jump neural networks with partly unknown transition probabilities, Neurocomputing 74 (2011) 3404–3411.
[18] R. Rakkiyappan, A. Chandrasekar, S. Lakshmanan, J.H. Park, Exponential stability of Markovian jumping stochastic Cohen–Grossberg neural networks with mode-dependent probabilistic time-varying delays and impulses, Neurocomputing 131 (2014) 265–277.
[19] Y. Liu, Z. Wang, J. Liang, X. Liu, Stability and synchronization of discrete-time Markovian jumping neural networks with mixed mode-dependent time delays, IEEE Trans. Neural Netw. 20 (7) (2009) 1102–1116.
[20] Z.-G. Wu, P. Shi, H. Su, J. Chu, Stochastic synchronization of Markovian jump neural networks with time-varying delay using sampled-data, IEEE Trans. Cybern. 43 (6) (2013) 1796–1806.
[21] H. Huang, T. Huang, X. Chen, C. Qian, Exponential stabilization of delayed recurrent neural networks: a state estimation based approach, Neural Netw. 48 (2013) 153–157.
[22] L. Jin, P.N. Nikiforuk, M.M. Gupta, Adaptive control of discrete-time nonlinear systems using recurrent neural networks, IEE Proc. Control Theory Appl. 141 (1994) 169–176.
[23] Z. Wang, D.W.C. Ho, X. Liu, State estimation for delayed neural networks, IEEE Trans. Neural Netw. 16 (1) (2005) 279–284.
[24] J.H. Park, O.M. Kwon, State estimation for neural networks of neutral-type with interval time-varying delays, Appl. Math. Comput. 203 (2008) 217–223.
[25] X. Liu, J. Cao, Robust state estimation for neural networks with discontinuous activations, IEEE Trans. Syst. Man Cybern. B 40 (6) (2010) 1425–1437.
[26] Y. Liu, S.M. Lee, O.M. Kwon, J.H. Park, A study on H∞ state estimation of static neural networks with time-varying delays, Appl. Math. Comput. 226 (2014) 589–597.
[27] H. Huang, G. Feng, J. Cao, State estimation for static neural networks with time-varying delay, Neural Netw. 23 (2010) 1202–1207.
[28] Z. Wang, Y. Liu, X. Liu, State estimation for jumping recurrent neural networks with discrete and distributed delays, Neural Netw. 22 (2009) 41–48.
[29] H. Huang, T. Huang, X. Chen, A mode-dependent approach to state estimation of recurrent neural networks with Markovian jumping parameters and mixed delays, Neural Netw. 46 (2013) 50–61.
[30] Y. Chen, W.X. Zheng, Stochastic state estimation for neural networks with distributed delays and Markovian jump, Neural Netw. 25 (2012) 14–20.
[31] Z.-G. Wu, H. Su, J. Chu, State estimation for discrete Markovian jumping neural networks with time delay, Neurocomputing 73 (2010) 2247–2254.
[32] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, PA, 1994.
[33] Z.-B. Xu, H. Qiao, J. Peng, B. Zhang, A comparative study of two modeling approaches in neural networks, Neural Netw. 17 (2004) 73–85.
[34] J. Misra, I. Saha, Artificial neural networks in hardware: a survey of two decades of progress, Neurocomputing 74 (2010) 239–255.
[35] Q. Duan, H. Su, Z.-G. Wu, H∞ state estimation of static neural networks with time-varying delay, Neurocomputing 97 (2012) 16–21.
[36] H. Huang, G. Feng, J. Cao, Guaranteed performance state estimation of static neural networks with time-varying delay, Neurocomputing 74 (2011) 606–616.
[37] A. Seuret, F. Gouaisbaut, Wirtinger-based integral inequality: application to time-delay systems, Automatica 49 (2013) 2860–2866.
[38] S. Xu, J. Lam, X. Mao, Delay-dependent H∞ control and filtering for uncertain Markovian jump systems with time-varying delays, IEEE Trans. Circuits Syst. I 54 (9) (2007) 2070–2077.
[39] A.V. Skorokhod, Asymptotic Methods in the Theory of Stochastic Differential Equations, Amer. Math. Soc., Providence, RI, 1989.
[40] S. Wen, T. Huang, Z. Zeng, Y. Chen, P. Li, Circuit design and exponential stabilization of memristive neural networks, Neural Netw., in press, http://dx.doi.org/10.1016/j.neunet.2014.10.011.
[41] X. He, C. Li, T. Huang, C. Li, J. Huang, A recurrent neural network for solving bilevel linear programming problem, IEEE Trans. Neural Netw. Learn. Syst. 25 (2014) 824–830.
[42] S. Wen, Z. Zeng, T. Huang, Dynamic behaviors of memristor-based delayed recurrent networks, Neural Comput. Appl. 23 (2013) 815–821.

Lei Shao received the B.S. degree in communication engineering from Soochow University, China, in 2000, and the M.S. degree in multimedia communication from Soochow University, China, in 2005. He is currently a lecturer at the School of Electronics and Information Engineering, Soochow University, Suzhou, China. His research interests include neural networks, signal and information processing, and embedded systems.

He Huang received the B.S. degree in mathematics from Gannan Normal University, China in 2000, the M.S. degree in applied mathematics from Southeast University, China in 2003, and the Ph.D. degree in automatic control from City University of Hong Kong, China in 2009. He is currently an Associate Professor at School of Electronics and Information Engineering, Soochow University, Suzhou, China. His research interests include neural networks, stochastic systems, nonlinear systems, and applied mathematics. He is an associate editor of Neurocomputing and Circuits, Systems & Signal Processing.

Heming Zhao received the M.Sc. degree from Soochow University in 1982. From 1984 to 1985, he studied at Tsinghua University, and from 1988 to 1990, he was a visiting scientist at the Technical University of Munich, Germany. Since 1982, he has worked at Soochow University. During 1995–1997, he was a director of the Department of Electrical Engineering, and he is now a doctoral tutor and the dean of the School of Electronics and Information Engineering. He is a member of IEEE, a senior member of the Chinese Electronic Institute, and a member of the editorial board of Signal Processing. His current research interests include speech signal processing and intelligent computing.

Tingwen Huang is a professor at Texas A&M University at Qatar. He received his B.S. degree from Southwest Normal University (now Southwest University), China, in 1990, his M.S. degree from Sichuan University, China, in 1993, and his Ph.D. degree from Texas A&M University, College Station, TX, in 2002. After graduating from Texas A&M University, he worked there as a visiting assistant professor. He then joined Texas A&M University at Qatar (TAMUQ) as an assistant professor in August 2003 and was promoted to professor in 2013. His research interests include neural networks, chaotic dynamical systems, complex networks, optimization, and control. He has authored and co-authored more than 100 refereed journal papers.