Robust state estimation for discrete-time neural networks with mixed time-delays, linear fractional uncertainties and successive packet dropouts

Neurocomputing 135 (2014) 130–138

Xiu Kan (a), Huisheng Shu (b), Zhenna Li (c)

(a) College of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
(b) School of Science, Donghua University, Shanghai 200051, China
(c) School of Information Science and Technology, Donghua University, Shanghai 200051, China

Article history: Received 7 October 2013; Received in revised form 2 December 2013; Accepted 11 December 2013; Communicated by Z. Wang; Available online 9 January 2014.

Abstract

This paper is concerned with the robust state estimation problem for a class of discrete-time delayed neural networks with linear fractional uncertainties (LFUs) and successive packet dropouts (SPDs). The mixed time-delays (MTDs), consisting of both discrete time-delays and infinite distributed delays, enter into the model of the addressed neural networks. A Bernoulli-distributed white sequence with a known conditional probability is introduced to govern the random occurrence of the SPDs. The main purpose of the problem under consideration is to design a state estimator such that the dynamics of the estimation error is globally asymptotically stable in the mean square. By using stochastic analysis and the Lyapunov stability theory, the desired state estimator is designed to be robust against LFUs and SPDs. Finally, a simulation example is provided to show the effectiveness of the proposed state estimator design scheme.

© 2014 Elsevier B.V. All rights reserved.

Keywords: Neural networks; State estimation; Successive packet dropouts; Fractional uncertainty; Global asymptotic stability; Mixed time delays

1. Introduction

In the past few decades, neural networks, including Hopfield neural networks, bidirectional associative memory neural networks, cellular neural networks, and Cohen–Grossberg neural networks, have been widely investigated, mainly because of their extensive applications in areas such as pattern recognition, affine invariant matching, associative memory, model identification, and combinatorial optimization. As is well known, these applications rely heavily on the dynamical behaviors of the underlying networks. Therefore, the stability analysis of neural networks has drawn a great deal of attention, and considerable research effort has been devoted to this area. For instance, by combining the comparison principle, the theory of monotone flows and monotone operators, sufficient conditions ensuring the existence, uniqueness and global exponential stability of the periodic solution have been derived in [1] for a class of neural networks. In [20], a set of necessary and sufficient conditions has been established for the global exponential stability of a class of generic discrete-time recurrent neural networks. The global asymptotic stability analysis problem has been dealt with in [22] for a class of uncertain stochastic Hopfield neural networks with discrete and distributed time-delays by means of Lyapunov theory.

E-mail address: [email protected] (X. Kan).
http://dx.doi.org/10.1016/j.neucom.2013.12.044

It is well known that state estimation is one of the foundational problems in the dynamics analysis of complex systems, including recurrent neural networks, complex networks, genetic regulatory networks and general engineering systems. Over the past few decades, many effective approaches have been proposed in this research area, see e.g. [3,5,6,12,21]. In particular, since modeling errors and incomplete statistical information are often encountered in real-time applications, robust state estimation schemes have recently received considerable research attention. On the other hand, time delays are often unavoidable owing to the finite speed of signal switching and transmission between neurons, and they may cause undesirable dynamic network behaviors such as oscillation and instability, see e.g. [9,14,18,23]. So far, two types of time delays, namely discrete and distributed delays, have gained considerable research attention. For example, the state estimation problem has been investigated in [14] for a class of discrete-time neural networks with Markovian jumping parameters and mode-dependent mixed time-delays. More recently, the robust H∞ state estimation problem has been studied in [23] for a general class of uncertain discrete-time stochastic neural networks with probabilistic measurement delays. Owing to unreliable measurements or network congestion, packet dropouts (or missing measurements), a frequently occurring network-induced phenomenon, have drawn considerable research attention during the past few years, see e.g. [2,4,8,16,17,19].


For example, the distributed finite-horizon filtering problem has been investigated in [4] for a class of discrete time-varying systems with randomly varying nonlinearities over lossy sensor networks involving quantization errors and SPDs. In [16], the optimal H2 filtering problem for linear systems with multiple packet dropouts has been tackled. The robust H∞ finite-horizon filtering problem has been investigated in [17] for discrete time-varying stochastic systems with norm-bounded uncertainties, multiple randomly occurring sector nonlinearities and SPDs. In [19], the optimal full-order linear filter in the linear minimum variance sense has been designed for discrete-time stochastic linear systems with multiple packet dropouts. It is worth mentioning that the missing measurement problem in neural networks has not yet been fully investigated; we therefore aim to study the state estimation problem for neural networks with SPDs.

In addition, owing to modeling errors and parameter drifting or fluctuation, uncertainties occur so frequently that they may lead to instability and poor performance of neural networks. Parameter uncertainties have mainly been categorized as norm-bounded uncertainties and interval uncertainties, where the interval type can usually be transformed into the norm-bounded type. For these two types of uncertainties, the state estimation and stability analysis problems have been investigated in [25,26] for neural networks. It should be pointed out that a more general kind of uncertainty, the LFU, has been proposed in [7,24]; it includes the common norm-bounded uncertainties as a special case. In [10], the state estimation problem has been investigated for a class of discrete-time neural networks with such LFUs and sensor saturations. The stability of generalized static neural networks with LFUs has been studied in [11]. Nevertheless, very little research effort has been made to account for delayed neural networks with both LFUs and SPDs. To shorten such a gap, in this paper we design an estimator for a class of delayed neural networks subject to MTDs, LFUs and SPDs.

Summarizing the discussions above, the main aim of this paper is to deal with the state estimation problem for a class of discrete-time neural networks with LFUs, MTDs and SPDs. The main contributions of this paper are threefold: (1) SPDs are used to model a class of missing measurements in the context of neural networks, whose occurrence is governed by a specified Bernoulli distribution; (2) LFUs are utilized to describe the parameter uncertainties of the discrete-time neural networks with MTDs; (3) the designed estimator is expected to be robust against LFUs as well as SPDs and to ensure that the error dynamics is globally asymptotically stable in the mean square.

The rest of this paper is outlined as follows. In Section 2, the discrete-time delayed neural networks with LFUs and SPDs are introduced and the problem under consideration is formulated. In Section 3, by employing the Lyapunov stability theory, sufficient conditions are established in the form of linear matrix inequalities (LMIs) and an explicit expression of the estimator gains is given. A simulation example is given in Section 4 to demonstrate the effectiveness of the main results.

Notation: The notation used here is fairly standard except where otherwise stated. $\mathbb{N}^+$ stands for the set of nonnegative integers. $\mathbb{R}^n$ and $\mathbb{R}^{n\times m}$ denote, respectively, the $n$-dimensional Euclidean space and the set of all $n\times m$ real matrices. For a vector $x=(x_1,x_2,\dots,x_n)^T\in\mathbb{R}^n$, $|x|$ is the Euclidean norm. The notation $X\ge Y$ (respectively, $X>Y$), where $X$ and $Y$ are real symmetric matrices, means that $X-Y$ is positive semi-definite (respectively, positive definite). $M^T$ represents the transpose of the matrix $M$. $I$ denotes the identity matrix of compatible dimension. If $A$ is a matrix, $\lambda_{\min}(A)$ (respectively, $\lambda_{\max}(A)$) stands for the smallest (respectively, largest) eigenvalue of $A$. $\mathrm{diag}\{\cdots\}$ stands for a block-diagonal matrix. The $*$ in a matrix denotes a term induced by symmetry. Moreover, let $(\Omega,\mathcal{F},\mathrm{Prob})$ be a probability space, where $\mathrm{Prob}$, the probability measure, has total mass 1. $\mathbb{E}\{x\}$ stands for the expectation of the stochastic variable $x$ with respect to $\mathrm{Prob}$. The symbol $\otimes$ denotes the Kronecker product. Matrices, if not explicitly specified, are assumed to have compatible dimensions.

2. Problem formulation and preliminaries

Consider a discrete-time $n$-neuron neural network with MTDs described as follows:

$$\begin{cases}
x(k+1)=A(k)x(k)+B_1(k)f(x(k))+B_2(k)g(x(k-\tau(k)))+B_3(k)\sum\limits_{d=1}^{+\infty}\mu_d h(x(k-d))+D(k)x(k)\omega(k)\\
\tilde y(k)=C(k)x(k)\\
x(s)=\phi(s),\quad s=-\tau_M,\,-\tau_M+1,\,\dots,\,-1,\,0
\end{cases}\tag{1}$$

where $x(k)=[x_1(k),x_2(k),\dots,x_n(k)]^T\in\mathbb{R}^n$ is the state vector of the neural network; $\tilde y(k)\in\mathbb{R}^m$ is the measurement output vector; the nonlinear vector-valued functions $f(x(k))=[f_1(x_1(k)),f_2(x_2(k)),\dots,f_n(x_n(k))]^T$, $g(x(k))=[g_1(x_1(k)),g_2(x_2(k)),\dots,g_n(x_n(k))]^T$ and $h(x(k))=[h_1(x_1(k)),h_2(x_2(k)),\dots,h_n(x_n(k))]^T$ are the neuron activation functions; the positive integer $\tau(k)$ describes the time-varying delay satisfying $0<\tau_m\le\tau(k)\le\tau_M$, where $\tau_m$ and $\tau_M$ are known positive integers representing the minimum and maximum delays, respectively; $\phi(s)$ is a given initial condition sequence; and $\omega(k)$ is a scalar Wiener process (Brownian motion) on $(\Omega,\mathcal F,\mathrm{Prob})$ with

$$\mathbb E\{\omega(k)\}=0,\qquad \mathbb E\{\omega^2(k)\}=1,\qquad \mathbb E\{\omega(i)\omega(j)\}=0\ \ (i\neq j).\tag{2}$$

The constant $\mu_d\ge 0$ satisfies the following convergence condition:

$$\bar\mu=\sum_{d=1}^{+\infty}\mu_d\quad\text{and}\quad\sum_{d=1}^{+\infty}d\mu_d<+\infty.\tag{3}$$

The matrices $A(k)=A+\Delta A(k)$, $B_1(k)=B_1+\Delta B_1(k)$, $B_2(k)=B_2+\Delta B_2(k)$, $B_3(k)=B_3+\Delta B_3(k)$, $C(k)=C+\Delta C(k)$ and $D(k)=D+\Delta D(k)$ are bounded matrices containing the parameter uncertainties $\Delta A(k)$, $\Delta B_1(k)$, $\Delta B_2(k)$, $\Delta B_3(k)$, $\Delta C(k)$ and $\Delta D(k)$, which satisfy the following conditions:

$$\begin{bmatrix}\Delta A(k)&\Delta B_1(k)&\Delta B_2(k)\\ \Delta C(k)&\Delta D(k)&\Delta B_3(k)\end{bmatrix}
=\begin{bmatrix}M_1\\ M_2\end{bmatrix}\Sigma(k)\big(I-J\Sigma(k)\big)^{-1}\begin{bmatrix}N_1&N_2&N_3\end{bmatrix},\tag{4}$$

$$J^TJ<I,\qquad \Sigma^T(k)\Sigma(k)\le I,\qquad \forall k\in\mathbb N^{+},\tag{5}$$

where $A=\mathrm{diag}\{a_1,a_2,\dots,a_n\}$ and $B_1,B_2,B_3,C,D,M_i\ (i=1,2)$ and $N_i\ (i=1,2,3)$ are known constant matrices with appropriate dimensions, and $\Sigma(k)$ denotes an unknown matrix function with Lebesgue measurable elements.

Remark 1. In the neural network model (1), the conditions (4) and (5) are referred to as the admissible conditions, and this kind of parametric uncertainty is precisely the LFU. As explained in Section 1, LFUs are more general than the common descriptions such as the norm-bounded type [22] and the interval type [25]. Notice that when $J=0$, the linear fractional parametric uncertainties reduce to the norm-bounded ones; interval uncertainties can, in turn, be viewed as a special case of norm-bounded ones. Therefore, the linear fractional form is the more appropriate description of parameter uncertainties in practical neural networks.

The nonlinear activation functions $f(\cdot)$, $g(\cdot)$ and $h(\cdot)$ are assumed to be continuous and to satisfy $f(0)=0$, $g(0)=0$, $h(0)=0$ together with the following sector-bounded conditions, namely for all $x,y\in\mathbb R^n$:

$$[f(x)-f(y)-U_1(x-y)]^T[f(x)-f(y)-U_2(x-y)]\le 0,\tag{6}$$

$$[g(x)-g(y)-V_1(x-y)]^T[g(x)-g(y)-V_2(x-y)]\le 0,\tag{7}$$

$$[h(x)-h(y)-W_1(x-y)]^T[h(x)-h(y)-W_2(x-y)]\le 0,\tag{8}$$

where $U_1,U_2,V_1,V_2,W_1$ and $W_2$ are real matrices of appropriate dimensions, and $U=U_1-U_2$, $V=V_1-V_2$, $W=W_1-W_2$ are symmetric positive definite matrices. It is customary to say that such nonlinear functions $f$, $g$ and $h$ belong to the sectors $[U_1,U_2]$, $[V_1,V_2]$ and $[W_1,W_2]$, respectively. Sector-bounded activation functions of this kind were first proposed in [13] and have been widely used since.

The actual network measurement subject to SPDs is expressed by

$$y(k)=\gamma(k)\tilde y(k)+(1-\gamma(k))\gamma(k-1)\tilde y(k-1)+\cdots+(1-\gamma(k))(1-\gamma(k-1))\cdots(1-\gamma(k-i+1))\gamma(k-i)\tilde y(k-i)+\cdots\tag{9}$$

where $y(k)\in\mathbb R^m$ is the actual signal received by the estimator and the stochastic variable $\gamma(k)\in\mathbb R$ is a Bernoulli-distributed white sequence taking values 0 or 1 with

$$\mathrm{Prob}\{\gamma(k)=1\}=\mathbb E\{\gamma(k)\}=\gamma,\qquad \mathrm{Prob}\{\gamma(k)=0\}=1-\mathbb E\{\gamma(k)\}=1-\gamma,\tag{10}$$

where $\gamma\in[0,1]$ is a known constant. The model (9) was introduced in [15] to describe SPDs. For example, if $\gamma(k)=1$, then $y(k)=\tilde y(k)$, which means that no packet dropout occurs; if $\gamma(k)=0$ but $\gamma(k-1)=1$, then $y(k)=\tilde y(k-1)$, which means that the measured output at time $k$ is missing but the one at time $k-1$ has been received. To facilitate the manipulation, the description (9) can be rewritten in the following compact form:

$$y(k)=\gamma(k)\tilde y(k)+(1-\gamma(k))y(k-1).\tag{11}$$

In this paper, we consider the following estimator for the neural network (1):

$$\begin{cases}\hat x(k+1)=\hat F\hat x(k)+\hat Gy(k)\\ \hat x(s)=0,\quad s=-\tau_M,\,-\tau_M+1,\,\dots,\,-1,\,0\end{cases}\tag{12}$$

where $\hat x(k)$ is the estimate of the neuron state $x(k)$ and $\hat F,\hat G$ are the estimator parameters to be determined.

Letting the estimation error be $e(k)\triangleq x(k)-\hat x(k)$, the error dynamics can be obtained from (1), (11) and (12) as follows:

$$\begin{aligned}
e(k+1)={}&[A(k)-\gamma(k)\hat GC(k)]e(k)+[A(k)-\hat F-\gamma(k)\hat GC(k)]\hat x(k)+B_1(k)f(x(k))+B_2(k)g(x(k-\tau(k)))\\
&+B_3(k)\sum_{d=1}^{+\infty}\mu_dh(x(k-d))-(1-\gamma(k))\hat Gy(k-1)+D(k)x(k)\omega(k).
\end{aligned}\tag{13}$$

By setting $\eta(k)=[x^T(k),\hat x^T(k),y^T(k-1),e^T(k)]^T$, the following augmented system can be obtained from (1), (11), (12) and (13):

$$\begin{cases}
\eta(k+1)=\big(\bar A_1(k)+(\gamma(k)-\gamma)\bar A_2(k)\big)\eta(k)+\bar B_1(k)f(S\eta(k))+\bar B_2(k)g(S\eta(k-\tau(k)))\\
\qquad\qquad\;+\bar B_3(k)\sum\limits_{d=1}^{+\infty}\mu_dh(S\eta(k-d))+\bar D(k)\eta(k)\omega(k)\\
\eta(s)=[\phi^T(s),0,0,\phi^T(s)]^T,\quad s=-\tau_M,\,-\tau_M+1,\,\dots,\,-1,\,0
\end{cases}\tag{14}$$

where

$$\bar A_1(k)=\begin{bmatrix}A(k)&0&0&0\\ \gamma\hat GC(k)&\hat F&(1-\gamma)\hat G&0\\ \gamma C(k)&0&(1-\gamma)I&0\\ 0&A(k)-\hat F-\gamma\hat GC(k)&-(1-\gamma)\hat G&A(k)-\gamma\hat GC(k)\end{bmatrix},$$

$$\bar A_2(k)=\begin{bmatrix}0&0&0&0\\ \hat GC(k)&0&-\hat G&0\\ C(k)&0&-I&0\\ 0&-\hat GC(k)&\hat G&-\hat GC(k)\end{bmatrix},$$

$$\bar B_1(k)=\begin{bmatrix}B_1(k)\\0\\0\\B_1(k)\end{bmatrix},\quad \bar B_2(k)=\begin{bmatrix}B_2(k)\\0\\0\\B_2(k)\end{bmatrix},\quad \bar B_3(k)=\begin{bmatrix}B_3(k)\\0\\0\\B_3(k)\end{bmatrix},\quad \bar D(k)=\begin{bmatrix}D(k)S\\0\\0\\D(k)S\end{bmatrix},\quad S=\begin{bmatrix}I\\0\\0\\0\end{bmatrix}^T.$$

Definition 1. The augmented system (14) is said to be globally asymptotically stable in the mean square if, for any solution $\eta(k)$ of (14), $\lim_{k\to\infty}\mathbb E\{|\eta(k)|^2\}=0$.

In this paper, we shall design an estimator of the form (12) to estimate the state of the neural network (1). In other words, we are interested in finding the estimator gains such that the augmented system (14) is robustly, globally, asymptotically stable in the mean square.

3. Main results

In this section, we first establish a stability analysis result for the augmented system (14), which will then be used in the subsequent estimator design stage. Before proceeding, we give the following lemmas, which will be used in the proof of our main results.

Lemma 1 (Liu et al. [13]). Let $M\in\mathbb R^{n\times n}$ be a positive semi-definite matrix, $x_i\in\mathbb R^n$ and $a_i\ge 0\ (i=1,2,\dots)$. If the series concerned are convergent, then the following inequality holds:

$$\Big(\sum_{i=1}^{+\infty}a_ix_i\Big)^TM\Big(\sum_{i=1}^{+\infty}a_ix_i\Big)\le\Big(\sum_{i=1}^{+\infty}a_i\Big)\sum_{i=1}^{+\infty}a_ix_i^TMx_i.\tag{15}$$

Lemma 2 (Schur complement). Given constant matrices $\Omega_1,\Omega_2,\Omega_3$, where $\Omega_1=\Omega_1^T$ and $\Omega_2>0$, then

$$\Omega_1+\Omega_3^T\Omega_2^{-1}\Omega_3<0\tag{16}$$

if and only if

$$\begin{bmatrix}\Omega_1&\Omega_3^T\\ \Omega_3&-\Omega_2\end{bmatrix}<0.\tag{17}$$

Lemma 3 (Xie [24]). Given matrices $\mathcal Q$, $\mathcal H$ and $\mathcal E$ of appropriate dimensions, with $\mathcal Q$ symmetric, then

$$\mathcal Q+\mathcal H\Xi\mathcal E+\mathcal E^T\Xi^T\mathcal H^T<0,\tag{18}$$

where $\Xi=\Delta(I-J\Delta)^{-1}$, $I-J^TJ>0$ and $\Delta^T\Delta\le I$, if and only if, for some $\varepsilon>0$,

$$\mathcal Q+\begin{bmatrix}\varepsilon^{-1}\mathcal E^T&\varepsilon\mathcal H\end{bmatrix}\begin{bmatrix}I&-J\\ -J^T&I\end{bmatrix}^{-1}\begin{bmatrix}\varepsilon^{-1}\mathcal E\\ \varepsilon\mathcal H^T\end{bmatrix}<0.\tag{19}$$

Lemma 4. Given matrices $\mathcal Q$, $\mathcal H_1$, $\mathcal H_2$, $\mathcal E_1$ and $\mathcal E_2$ of appropriate dimensions, with $\mathcal Q$ symmetric, then

$$\mathcal Q+\mathcal H_1\Xi_1\mathcal E_1+\mathcal E_1^T\Xi_1^T\mathcal H_1^T+\mathcal H_2\Xi_2\mathcal E_2+\mathcal E_2^T\Xi_2^T\mathcal H_2^T<0,\tag{20}$$

where $\Xi_i=\Delta_i(I-J_i\Delta_i)^{-1}$, $I-J_i^TJ_i>0$ and $\Delta_i^T\Delta_i\le I$ for $i=1,2$, if and only if there exist $\tilde\varepsilon_1>0$ and $\tilde\varepsilon_2>0$ such that

$$\begin{bmatrix}\mathcal Q&*&*&*&*\\ \tilde\varepsilon_1\mathcal E_1&-\tilde\varepsilon_1I&*&*&*\\ \mathcal H_1^T&\tilde\varepsilon_1J_1^T&-\tilde\varepsilon_1I&*&*\\ \tilde\varepsilon_2\mathcal E_2&0&0&-\tilde\varepsilon_2I&*\\ \mathcal H_2^T&0&0&\tilde\varepsilon_2J_2^T&-\tilde\varepsilon_2I\end{bmatrix}<0.\tag{21}$$
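Before turning to the proofs, it may help to see the measurement model (9)–(12) in action. The following minimal simulation sketch is not from the paper: the dimensions, dynamics and gain values are placeholder assumptions, chosen only to show how the Bernoulli variable $\gamma(k)$ in the compact form (11) switches between the fresh output and the last received one, and how the estimator recursion (12) consumes the received signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder dimensions and parameters (illustrative only, not from the paper).
n, m = 3, 3
A = 0.4 * np.eye(n)          # nominal state matrix
C = 0.3 * np.eye(m)          # nominal output matrix
F_hat = 0.2 * np.eye(n)      # estimator gain F-hat (assumed given)
G_hat = 0.1 * np.eye(n)      # estimator gain G-hat (assumed given)
gamma_bar = 0.95             # dropout probability parameter gamma in (10)

x = rng.standard_normal(n)   # network state
x_hat = np.zeros(n)          # estimator state, zero initial condition as in (12)
y_prev = np.zeros(m)         # y(k-1), needed by the compact SPD model (11)

for k in range(100):
    y_tilde = C @ x                              # ideal output from (1)
    gamma_k = rng.random() < gamma_bar           # Bernoulli variable, Prob{1} = gamma
    y = y_tilde if gamma_k else y_prev           # compact SPD model (11)
    x_hat = F_hat @ x_hat + G_hat @ y            # estimator recursion (12)
    x = A @ x + 0.01 * rng.standard_normal(n)    # simplified state update (delays omitted)
    y_prev = y
```

With $\gamma$ close to 1, most samples are fresh; each dropout simply repeats the last successfully received measurement, which is exactly the successive-dropout behavior that (9) encodes.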

Proof. By Lemma 3, (20) is equivalent to

$$\mathcal Q+\begin{bmatrix}\varepsilon_1^{-1}\mathcal E_1^T&\varepsilon_1\mathcal H_1\end{bmatrix}\begin{bmatrix}I&-J_1\\ -J_1^T&I\end{bmatrix}^{-1}\begin{bmatrix}\varepsilon_1^{-1}\mathcal E_1\\ \varepsilon_1\mathcal H_1^T\end{bmatrix}+\begin{bmatrix}\varepsilon_2^{-1}\mathcal E_2^T&\varepsilon_2\mathcal H_2\end{bmatrix}\begin{bmatrix}I&-J_2\\ -J_2^T&I\end{bmatrix}^{-1}\begin{bmatrix}\varepsilon_2^{-1}\mathcal E_2\\ \varepsilon_2\mathcal H_2^T\end{bmatrix}<0.\tag{22}$$

Applying Lemma 2 to (22), we get

$$\begin{bmatrix}\mathcal Q&*&*&*&*\\ \varepsilon_1^{-1}\mathcal E_1&-I&*&*&*\\ \varepsilon_1\mathcal H_1^T&J_1^T&-I&*&*\\ \varepsilon_2^{-1}\mathcal E_2&0&0&-I&*\\ \varepsilon_2\mathcal H_2^T&0&0&J_2^T&-I\end{bmatrix}<0.\tag{23}$$

Furthermore, define the matrix $\mathcal T=\mathrm{diag}\{I,\ \varepsilon_1^{-1}I,\ \varepsilon_1^{-1}I,\ \varepsilon_2^{-1}I,\ \varepsilon_2^{-1}I\}$. Pre- and post-multiplying (23) by $\mathcal T$, we obtain the following inequality:

$$\begin{bmatrix}\mathcal Q&*&*&*&*\\ \varepsilon_1^{-2}\mathcal E_1&-\varepsilon_1^{-2}I&*&*&*\\ \mathcal H_1^T&\varepsilon_1^{-2}J_1^T&-\varepsilon_1^{-2}I&*&*\\ \varepsilon_2^{-2}\mathcal E_2&0&0&-\varepsilon_2^{-2}I&*\\ \mathcal H_2^T&0&0&\varepsilon_2^{-2}J_2^T&-\varepsilon_2^{-2}I\end{bmatrix}<0.\tag{24}$$

Setting $\tilde\varepsilon_1=\varepsilon_1^{-2}$ and $\tilde\varepsilon_2=\varepsilon_2^{-2}$, we obtain (21). □
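Lemma 2 is used repeatedly above and in the proofs that follow. As a quick numerical illustration (not part of the paper), the sketch below checks the equivalence of (16) and (17) on a randomly generated instance; the matrices are arbitrary placeholders chosen to satisfy the lemma's hypotheses.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

M = rng.standard_normal((n, n))
Omega2 = np.eye(n) + 0.1 * M @ M.T          # symmetric positive definite
Omega3 = 0.1 * rng.standard_normal((n, n))  # small coupling block
Omega1 = -np.eye(n)                          # symmetric, negative enough for (16)

lhs = Omega1 + Omega3.T @ np.linalg.solve(Omega2, Omega3)   # left side of (16)
big = np.block([[Omega1, Omega3.T], [Omega3, -Omega2]])     # block matrix of (17)

# Both largest eigenvalues should be negative: (16) holds iff (17) holds.
print(np.linalg.eigvalsh(lhs).max() < 0,
      np.linalg.eigvalsh(big).max() < 0)
```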

The following theorem provides a sufficient condition under which the augmented system (14) is globally asymptotically stable in the mean square.

Theorem 1. Let the estimator gains $\hat F$ and $\hat G$ be given and the admissible conditions hold. Then, the augmented system (14) is globally asymptotically stable in the mean square if there exist positive definite matrices $P$, $Q$, $R$ and five positive scalars $\lambda_1$, $\lambda_2$, $\lambda_3$, $s_1$ and $s_2$ such that

$$\Pi=\begin{bmatrix}\Pi_1&*&*&*&*\\ s_1\mathcal E_1&-s_1I&*&*&*\\ \mathcal H_1^T&s_1J_1^T&-s_1I&*&*\\ s_2\mathcal E_2&0&0&-s_2I&*\\ \mathcal H_2^T&0&0&s_2J_2^T&-s_2I\end{bmatrix}<0\tag{25}$$

where $\tilde\gamma=\sqrt{\gamma(1-\gamma)}$,

$$\Pi_1=\begin{bmatrix}\Pi_{11}&*&*\\ K&L&*\\ Z&T&\tilde P\end{bmatrix},\qquad \Pi_{11}=\begin{bmatrix}\tilde\Pi_{11}&0\\ 0&\tilde\Pi_{22}\end{bmatrix},\qquad \tilde P=-I\otimes P,$$

$$K=\begin{bmatrix}-\lambda_1\bar U_2^T&0\\ 0&-\lambda_2\bar V_2^T\\ -\lambda_3\bar W_2^T&0\\ 0&0\end{bmatrix},\qquad L=\mathrm{diag}\Big\{-\lambda_1I,\;-\lambda_2I,\;\bar\mu R-\lambda_3I,\;-\tfrac1{\bar\mu}R\Big\},$$

$$Z=\begin{bmatrix}P\bar A_1&0\\ P\bar A_2&0\\ P\bar D&0\end{bmatrix},\qquad T=\begin{bmatrix}P\bar B_1&P\bar B_2&0&P\bar B_3\\ 0&0&0&0\\ 0&0&0&0\end{bmatrix},$$

$$\bar A_1=\begin{bmatrix}A&0&0&0\\ \gamma\hat GC&\hat F&(1-\gamma)\hat G&0\\ \gamma C&0&(1-\gamma)I&0\\ 0&A-\hat F-\gamma\hat GC&-(1-\gamma)\hat G&A-\gamma\hat GC\end{bmatrix},\qquad \bar A_2=\begin{bmatrix}0&0&0&0\\ \tilde\gamma\hat GC&0&-\tilde\gamma\hat G&0\\ \tilde\gamma C&0&-\tilde\gamma I&0\\ 0&-\tilde\gamma\hat GC&\tilde\gamma\hat G&-\tilde\gamma\hat GC\end{bmatrix},$$

$$\bar B_1=\begin{bmatrix}B_1\\0\\0\\B_1\end{bmatrix},\quad \bar B_2=\begin{bmatrix}B_2\\0\\0\\B_2\end{bmatrix},\quad \bar B_3=\begin{bmatrix}B_3\\0\\0\\B_3\end{bmatrix},\quad \bar D=\begin{bmatrix}DS\\0\\0\\DS\end{bmatrix},\quad \mathcal H_1=\begin{bmatrix}0\\0\\ \hat X_1\end{bmatrix},\quad \mathcal H_2=\begin{bmatrix}0\\0\\ \hat X_2\end{bmatrix},$$

$$\mathcal E_1=[\hat Y_1\ \ 0\ \ 0],\qquad \mathcal E_2=[0\ \ \hat Y_2\ \ 0],\qquad J_1=I\otimes J,\qquad J_2=\mathrm{diag}\{J,J\},$$

$$\hat X_1=\begin{bmatrix}PX_1&0&0\\ 0&PX_4&0\\ 0&0&PX_2\end{bmatrix},\qquad \hat X_2=\begin{bmatrix}PX_3&PX_2\\ 0&0\\ 0&0\end{bmatrix},\qquad \hat Y_1=\begin{bmatrix}Y_1&0\\ Y_1&0\\ Y_2&0\end{bmatrix},\qquad \hat Y_2=\begin{bmatrix}N_2&N_3&0&0\\ 0&0&0&N_3\end{bmatrix},$$

$$Y_1=\begin{bmatrix}N_1&0&0&0\\ 0&N_1&0&N_1\end{bmatrix},\qquad Y_2=[N_2\ \ 0\ \ 0\ \ 0],$$

$$X_1=\begin{bmatrix}M_1&0\\ \gamma\hat GM_2&0\\ \gamma M_2&0\\ 0&M_1-\gamma\hat GM_2\end{bmatrix},\quad X_4=\begin{bmatrix}0&0\\ \tilde\gamma\hat GM_2&0\\ \tilde\gamma M_2&0\\ 0&-\tilde\gamma\hat GM_2\end{bmatrix},\quad X_2=\begin{bmatrix}M_2\\0\\0\\M_2\end{bmatrix},\quad X_3=\begin{bmatrix}M_1\\0\\0\\M_1\end{bmatrix},$$

$$\tilde\Pi_{11}=-P+(\tau_M-\tau_m+1)Q-\lambda_1\bar U_1-\lambda_3\bar W_1,\qquad \tilde\Pi_{22}=-Q-\lambda_2\bar V_1,$$

$$\bar U_1=\tfrac12\big(S^TU_1^TU_2S+S^TU_2^TU_1S\big),\qquad \bar U_2=-\tfrac12\big(S^TU_1^T+S^TU_2^T\big),$$

$$\bar V_1=\tfrac12\big(S^TV_1^TV_2S+S^TV_2^TV_1S\big),\qquad \bar V_2=-\tfrac12\big(S^TV_1^T+S^TV_2^T\big),$$

$$\bar W_1=\tfrac12\big(S^TW_1^TW_2S+S^TW_2^TW_1S\big),\qquad \bar W_2=-\tfrac12\big(S^TW_1^T+S^TW_2^T\big).$$

Proof. To begin the stability analysis of system (14), we construct the following Lyapunov–Krasovskii functional:

$$V(k)=V_1(k)+V_2(k)+V_3(k)+V_4(k)\tag{26}$$

where

$$V_1(k)=\eta^T(k)P\eta(k),\tag{27}$$

$$V_2(k)=\sum_{i=k-\tau(k)}^{k-1}\eta^T(i)Q\eta(i),\tag{28}$$

$$V_3(k)=\sum_{j=k-\tau_M+1}^{k-\tau_m}\ \sum_{i=j}^{k-1}\eta^T(i)Q\eta(i),\tag{29}$$

$$V_4(k)=\sum_{d=1}^{+\infty}\mu_d\sum_{i=k-d}^{k-1}h^T(S\eta(i))Rh(S\eta(i)).\tag{30}$$

Denoting $\mathcal A(k)=\bar A_1(k)+(\gamma(k)-\gamma)\bar A_2(k)$, and noting that $\mathbb E\{\gamma(k)-\gamma\}=0$, $\mathbb E\{(\gamma(k)-\gamma)^2\}=\gamma(1-\gamma)=\tilde\gamma^2$ and $\mathbb E\{\omega^2(k)\}=1$, we can calculate that

$$\mathbb E\{\Delta V(k)\}=\mathbb E\{\Delta V_1(k)+\Delta V_2(k)+\Delta V_3(k)+\Delta V_4(k)\}\tag{31}$$

where

$$\begin{aligned}
\mathbb E\{\Delta V_1(k)\}={}&\mathbb E\{V_1(k+1)-V_1(k)\}=\mathbb E\{\eta^T(k+1)P\eta(k+1)-\eta^T(k)P\eta(k)\}\\
={}&\mathbb E\Big\{\eta^T(k)\mathcal A^T(k)P\mathcal A(k)\eta(k)+f^T(S\eta(k))\bar B_1^T(k)P\bar B_1(k)f(S\eta(k))\\
&+g^T(S\eta(k-\tau(k)))\bar B_2^T(k)P\bar B_2(k)g(S\eta(k-\tau(k)))\\
&+\Big(\sum_{d=1}^{+\infty}\mu_dh(S\eta(k-d))\Big)^T\bar B_3^T(k)P\bar B_3(k)\Big(\sum_{d=1}^{+\infty}\mu_dh(S\eta(k-d))\Big)\\
&+\eta^T(k)\bar D^T(k)P\bar D(k)\eta(k)+2\eta^T(k)\mathcal A^T(k)P\bar B_1(k)f(S\eta(k))\\
&+2\eta^T(k)\mathcal A^T(k)P\bar B_2(k)g(S\eta(k-\tau(k)))+2f^T(S\eta(k))\bar B_1^T(k)P\bar B_2(k)g(S\eta(k-\tau(k)))\\
&+2\eta^T(k)\mathcal A^T(k)P\bar B_3(k)\sum_{d=1}^{+\infty}\mu_dh(S\eta(k-d))+2f^T(S\eta(k))\bar B_1^T(k)P\bar B_3(k)\sum_{d=1}^{+\infty}\mu_dh(S\eta(k-d))\\
&+2g^T(S\eta(k-\tau(k)))\bar B_2^T(k)P\bar B_3(k)\sum_{d=1}^{+\infty}\mu_dh(S\eta(k-d))-\eta^T(k)P\eta(k)\Big\},
\end{aligned}\tag{32}$$

$$\begin{aligned}
\mathbb E\{\Delta V_2(k)\}={}&\mathbb E\Big\{\sum_{i=k-\tau(k+1)+1}^{k}\eta^T(i)Q\eta(i)-\sum_{i=k-\tau(k)}^{k-1}\eta^T(i)Q\eta(i)\Big\}\\
={}&\mathbb E\Big\{\eta^T(k)Q\eta(k)-\eta^T(k-\tau(k))Q\eta(k-\tau(k))+\sum_{i=k-\tau(k+1)+1}^{k-1}\eta^T(i)Q\eta(i)-\sum_{i=k-\tau(k)+1}^{k-1}\eta^T(i)Q\eta(i)\Big\}\\
\le{}&\mathbb E\Big\{\eta^T(k)Q\eta(k)-\eta^T(k-\tau(k))Q\eta(k-\tau(k))+\sum_{i=k-\tau_M+1}^{k-\tau_m}\eta^T(i)Q\eta(i)\Big\},
\end{aligned}\tag{33}$$

$$\begin{aligned}
\mathbb E\{\Delta V_3(k)\}={}&\mathbb E\Big\{\sum_{j=k-\tau_M+2}^{k-\tau_m+1}\sum_{i=j}^{k}\eta^T(i)Q\eta(i)-\sum_{j=k-\tau_M+1}^{k-\tau_m}\sum_{i=j}^{k-1}\eta^T(i)Q\eta(i)\Big\}\\
={}&\mathbb E\Big\{(\tau_M-\tau_m)\eta^T(k)Q\eta(k)-\sum_{j=k-\tau_M+1}^{k-\tau_m}\eta^T(j)Q\eta(j)\Big\},
\end{aligned}\tag{34}$$

$$\begin{aligned}
\mathbb E\{\Delta V_4(k)\}={}&\mathbb E\Big\{\sum_{d=1}^{+\infty}\mu_d\Big(h^T(S\eta(k))Rh(S\eta(k))-h^T(S\eta(k-d))Rh(S\eta(k-d))\Big)\Big\}\\
\le{}&\mathbb E\Big\{\bar\mu h^T(S\eta(k))Rh(S\eta(k))-\frac1{\bar\mu}\Big(\sum_{d=1}^{+\infty}\mu_dh(S\eta(k-d))\Big)^TR\Big(\sum_{d=1}^{+\infty}\mu_dh(S\eta(k-d))\Big)\Big\}\quad(\text{by Lemma 1}).
\end{aligned}\tag{35}$$

Denoting $\xi(k)=\Big[\eta^T(k),\ \eta^T(k-\tau(k)),\ f^T(S\eta(k)),\ g^T(S\eta(k-\tau(k))),\ h^T(S\eta(k)),\ \Big(\sum_{d=1}^{+\infty}\mu_dh(S\eta(k-d))\Big)^T\Big]^T$, from (32) to (35) we obtain

$$\mathbb E\{\Delta V(k)\}=\sum_{i=1}^{4}\mathbb E\{\Delta V_i(k)\}\le\mathbb E\{\xi^T(k)\hat\Pi_1(k)\xi(k)+\xi^T(k)\Phi_1^T(k)P\Phi_1(k)\xi(k)+\xi^T(k)\Phi_2^T(k)(I\otimes P)\Phi_2(k)\xi(k)\}\tag{36}$$

where

$$\Phi_1(k)=[\bar A_1(k),\ 0,\ \bar B_1(k),\ \bar B_2(k),\ 0,\ \bar B_3(k)],\qquad \Phi_2(k)=\begin{bmatrix}\tilde\gamma\bar A_2(k)&0&0&0&0&0\\ \bar D(k)&0&0&0&0&0\end{bmatrix},$$

$$\hat\Pi_1(k)=\mathrm{diag}\Big\{\hat\Pi_{11},\ -Q,\ 0,\ 0,\ \bar\mu R,\ -\tfrac1{\bar\mu}R\Big\},\qquad \hat\Pi_{11}=-P+(\tau_M-\tau_m+1)Q.$$

It follows from (6) to (8) that

$$\begin{bmatrix}S\eta(k)\\ f(S\eta(k))\end{bmatrix}^T\begin{bmatrix}\hat U_1&\hat U_2\\ \hat U_2^T&I\end{bmatrix}\begin{bmatrix}S\eta(k)\\ f(S\eta(k))\end{bmatrix}\le 0,\tag{37}$$

$$\begin{bmatrix}S\eta(k-\tau(k))\\ g(S\eta(k-\tau(k)))\end{bmatrix}^T\begin{bmatrix}\hat V_1&\hat V_2\\ \hat V_2^T&I\end{bmatrix}\begin{bmatrix}S\eta(k-\tau(k))\\ g(S\eta(k-\tau(k)))\end{bmatrix}\le 0,\tag{38}$$

$$\begin{bmatrix}S\eta(k)\\ h(S\eta(k))\end{bmatrix}^T\begin{bmatrix}\hat W_1&\hat W_2\\ \hat W_2^T&I\end{bmatrix}\begin{bmatrix}S\eta(k)\\ h(S\eta(k))\end{bmatrix}\le 0,\tag{39}$$

where

$$\hat U_1=\tfrac12\big(U_1^TU_2+U_2^TU_1\big),\qquad \hat U_2=-\tfrac12\big(U_1^T+U_2^T\big),$$

$$\hat V_1=\tfrac12\big(V_1^TV_2+V_2^TV_1\big),\qquad \hat V_2=-\tfrac12\big(V_1^T+V_2^T\big),$$

$$\hat W_1=\tfrac12\big(W_1^TW_2+W_2^TW_1\big),\qquad \hat W_2=-\tfrac12\big(W_1^T+W_2^T\big).$$
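As a side note before the proof continues, the functional (26)–(30) can be evaluated numerically along a simulated trajectory, which is a useful sanity check that $V(k)$ indeed decreases in the mean for a designed estimator. The sketch below is not the authors' code: the infinite sum in (30) is truncated at `d_max` terms (a reasonable approximation since $\mu_d$ is summable), all argument names are illustrative, and the trajectory must cover indices back to $k-d_{\max}$.

```python
def lk_functional(eta_hist, k, P, Q, R, S, h, tau, tau_m, tau_M, mu, d_max=50):
    """Evaluate the Lyapunov-Krasovskii functional (26)-(30) at time k.

    eta_hist[j] is the numpy vector eta(j); tau(k) and mu(d) are callables;
    requires k >= d_max so that every index used below is nonnegative.
    """
    eta = eta_hist
    V1 = eta[k] @ P @ eta[k]                                          # (27)
    V2 = sum(eta[i] @ Q @ eta[i] for i in range(k - tau(k), k))       # (28)
    V3 = sum(eta[i] @ Q @ eta[i]                                      # (29)
             for j in range(k - tau_M + 1, k - tau_m + 1)
             for i in range(j, k))
    V4 = sum(mu(d) * sum(h(S @ eta[i]) @ R @ h(S @ eta[i])            # (30), truncated
                         for i in range(k - d, k))
             for d in range(1, d_max + 1))
    return V1 + V2 + V3 + V4
```

Averaging `lk_functional(..., k+1, ...) - lk_functional(..., k, ...)` over many noise realizations approximates $\mathbb E\{\Delta V(k)\}$, which Theorem 1 forces to be negative.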

Then, from (36) to (39), one has

$$\begin{aligned}
\mathbb E\{\Delta V(k)\}\le{}&\mathbb E\{\Delta V(k)\}-\mathbb E\Bigg\{\lambda_1\begin{bmatrix}S\eta(k)\\ f(S\eta(k))\end{bmatrix}^T\begin{bmatrix}\hat U_1&\hat U_2\\ \hat U_2^T&I\end{bmatrix}\begin{bmatrix}S\eta(k)\\ f(S\eta(k))\end{bmatrix}\\
&+\lambda_2\begin{bmatrix}S\eta(k-\tau(k))\\ g(S\eta(k-\tau(k)))\end{bmatrix}^T\begin{bmatrix}\hat V_1&\hat V_2\\ \hat V_2^T&I\end{bmatrix}\begin{bmatrix}S\eta(k-\tau(k))\\ g(S\eta(k-\tau(k)))\end{bmatrix}\\
&+\lambda_3\begin{bmatrix}S\eta(k)\\ h(S\eta(k))\end{bmatrix}^T\begin{bmatrix}\hat W_1&\hat W_2\\ \hat W_2^T&I\end{bmatrix}\begin{bmatrix}S\eta(k)\\ h(S\eta(k))\end{bmatrix}\Bigg\}\\
\le{}&\mathbb E\{\xi^T(k)\hat\Pi_2(k)\xi(k)+\xi^T(k)\Phi_1^T(k)P\Phi_1(k)\xi(k)+\xi^T(k)\Phi_2^T(k)(I\otimes P)\Phi_2(k)\xi(k)\}
\end{aligned}\tag{40}$$

where

$$\hat\Pi_2(k)=\begin{bmatrix}\Pi_{11}&*\\ K&L\end{bmatrix}$$

with $\Pi_{11}$, $K$ and $L$ defined previously. In order to deal with the stability, we shall first prove that the following inequality holds:

$$\hat\Pi_2(k)+\Phi_1^T(k)P\Phi_1(k)+\Phi_2^T(k)(I\otimes P)\Phi_2(k)<0.\tag{41}$$

By Lemma 2, (41) is equivalent to

$$\begin{bmatrix}\hat\Pi_2&*&*\\ P\Phi_1(k)&-P&*\\ (I\otimes P)\Phi_2(k)&0&-(I\otimes P)\end{bmatrix}<0,$$

i.e.,

$$\hat\Pi_3(k)=\begin{bmatrix}\Pi_{11}&*&*\\ K&L&*\\ Z(k)&T(k)&\tilde P\end{bmatrix}<0\tag{42}$$

where

$$Z(k)=\begin{bmatrix}P\bar A_1(k)&0\\ \tilde\gamma P\bar A_2(k)&0\\ P\bar D(k)&0\end{bmatrix},\qquad T(k)=\begin{bmatrix}P\bar B_1(k)&P\bar B_2(k)&0&P\bar B_3(k)\\ 0&0&0&0\\ 0&0&0&0\end{bmatrix}.\tag{43}$$

On the other hand, we know that $Z(k)$ and $T(k)$ can be decomposed as follows:

$$Z(k)=Z+\Delta Z(k),\qquad T(k)=T+\Delta T(k)\tag{44}$$

where

$$\Delta Z(k)=\begin{bmatrix}P\Delta\bar A_1(k)&0\\ \tilde\gamma P\Delta\bar A_2(k)&0\\ P\Delta\bar D(k)&0\end{bmatrix},\qquad \Delta T(k)=\begin{bmatrix}P\Delta\bar B_1(k)&P\Delta\bar B_2(k)&0&P\Delta\bar B_3(k)\\ 0&0&0&0\\ 0&0&0&0\end{bmatrix}$$

with

$$\Delta\bar A_1(k)=\begin{bmatrix}\Delta A(k)&0&0&0\\ \gamma\hat G\Delta C(k)&0&0&0\\ \gamma\Delta C(k)&0&0&0\\ 0&\Delta A(k)-\gamma\hat G\Delta C(k)&0&\Delta A(k)-\gamma\hat G\Delta C(k)\end{bmatrix},$$

$$\tilde\gamma\Delta\bar A_2(k)=\begin{bmatrix}0&0&0&0\\ \tilde\gamma\hat G\Delta C(k)&0&0&0\\ \tilde\gamma\Delta C(k)&0&0&0\\ 0&-\tilde\gamma\hat G\Delta C(k)&0&-\tilde\gamma\hat G\Delta C(k)\end{bmatrix},$$

$$\Delta\bar B_i(k)=\begin{bmatrix}\Delta B_i(k)\\0\\0\\ \Delta B_i(k)\end{bmatrix}\ (i=1,2,3),\qquad \Delta\bar D(k)=\begin{bmatrix}\Delta D(k)S\\0\\0\\ \Delta D(k)S\end{bmatrix}.$$

It follows easily from (4) and (5) that

$$\Delta\bar A_1(k)=\begin{bmatrix}M_1\\ \gamma\hat GM_2\\ \gamma M_2\\ 0\end{bmatrix}\Sigma(k)(I-J\Sigma(k))^{-1}[N_1\ \ 0\ \ 0\ \ 0]+\begin{bmatrix}0\\0\\0\\ M_1-\gamma\hat GM_2\end{bmatrix}\Sigma(k)(I-J\Sigma(k))^{-1}[0\ \ N_1\ \ 0\ \ N_1]=X_1\Xi_2Y_1\tag{45}$$

where

$$\Xi_2=\begin{bmatrix}\Sigma(k)&0\\ 0&\Sigma(k)\end{bmatrix}\Bigg(I-\begin{bmatrix}J&0\\ 0&J\end{bmatrix}\begin{bmatrix}\Sigma(k)&0\\ 0&\Sigma(k)\end{bmatrix}\Bigg)^{-1}=\Delta_2(I-J_2\Delta_2)^{-1}.\tag{46}$$

Similarly, we have

$$\tilde\gamma\Delta\bar A_2(k)=X_4\Xi_2Y_1,\qquad \Delta\bar B_1(k)=X_3\Sigma(k)(I-J\Sigma(k))^{-1}N_2,\qquad \Delta\bar B_2(k)=X_3\Sigma(k)(I-J\Sigma(k))^{-1}N_3,$$

$$\Delta\bar B_3(k)=X_2\Sigma(k)(I-J\Sigma(k))^{-1}N_3,\qquad \Delta\bar D(k)=X_2\Sigma(k)(I-J\Sigma(k))^{-1}Y_2.\tag{47}$$

Thus, it follows from (45) and (47) that

$$\Delta Z(k)=\begin{bmatrix}PX_1&0&0\\ 0&PX_4&0\\ 0&0&PX_2\end{bmatrix}\begin{bmatrix}\Xi_2&0&0\\ 0&\Xi_2&0\\ 0&0&\Sigma(k)(I-J\Sigma(k))^{-1}\end{bmatrix}\begin{bmatrix}Y_1&0\\ Y_1&0\\ Y_2&0\end{bmatrix}=\hat X_1(I\otimes\Sigma(k))\big(I-(I\otimes J)(I\otimes\Sigma(k))\big)^{-1}\hat Y_1=\hat X_1\Xi_1\hat Y_1,\tag{48}$$

$$\Delta T(k)=\begin{bmatrix}PX_3&PX_2\\ 0&0\\ 0&0\end{bmatrix}\begin{bmatrix}\Sigma(k)(I-J\Sigma(k))^{-1}&0\\ 0&\Sigma(k)(I-J\Sigma(k))^{-1}\end{bmatrix}\begin{bmatrix}N_2&N_3&0&0\\ 0&0&0&N_3\end{bmatrix}=\hat X_2\Xi_2\hat Y_2\tag{49}$$

where $X_1$, $X_2$, $X_3$, $X_4$, $Y_1$, $Y_2$, $\hat X_1$, $\hat X_2$, $\hat Y_1$ and $\hat Y_2$ are defined previously. Also, $\hat\Pi_3(k)$ can be decomposed as

$$\hat\Pi_3(k)=\hat\Pi_3+\Delta\hat\Pi_{31}(k)+(\Delta\hat\Pi_{31}(k))^T+\Delta\hat\Pi_{32}(k)+(\Delta\hat\Pi_{32}(k))^T\tag{50}$$

where

$$\hat\Pi_3=\Pi_1=\begin{bmatrix}\Pi_{11}&*&*\\ K&L&*\\ Z&T&\tilde P\end{bmatrix},\qquad \Delta\hat\Pi_{31}(k)=\begin{bmatrix}0&0&0\\ 0&0&0\\ \Delta Z(k)&0&0\end{bmatrix},\qquad \Delta\hat\Pi_{32}(k)=\begin{bmatrix}0&0&0\\ 0&0&0\\ 0&\Delta T(k)&0\end{bmatrix}.$$

Furthermore, from (48) and (49), one has

$$\Delta\hat\Pi_{31}(k)=\begin{bmatrix}0\\ 0\\ \hat X_1\end{bmatrix}\Xi_1[\hat Y_1\ \ 0\ \ 0]=\mathcal H_1\Xi_1\mathcal E_1,\tag{51}$$

$$\Delta\hat\Pi_{32}(k)=\begin{bmatrix}0\\ 0\\ \hat X_2\end{bmatrix}\Xi_2[0\ \ \hat Y_2\ \ 0]=\mathcal H_2\Xi_2\mathcal E_2.\tag{52}$$

Thus, we have

$$\hat\Pi_3(k)=\hat\Pi_3+\mathcal H_1\Xi_1\mathcal E_1+\mathcal E_1^T\Xi_1^T\mathcal H_1^T+\mathcal H_2\Xi_2\mathcal E_2+\mathcal E_2^T\Xi_2^T\mathcal H_2^T\tag{53}$$

where $\mathcal H_1$, $\mathcal H_2$, $\mathcal E_1$ and $\mathcal E_2$ are defined previously. Subsequently, by Lemma 4, the LMI (25) guarantees that $\hat\Pi_3(k)<0$, hence (41) holds; that is, there exists a scalar $\lambda>0$ satisfying

$$\hat\Pi_2(k)+\Phi_1^T(k)P\Phi_1(k)+\Phi_2^T(k)(I\otimes P)\Phi_2(k)<-\lambda I.$$

Furthermore, we have

$$\mathbb E\{\Delta V(k)\}\le-\lambda\mathbb E\{|\xi(k)|^2\}\le-\lambda\mathbb E\{|\eta(k)|^2\}.\tag{54}$$

It follows from the Lyapunov stability theory that the augmented system (14) is globally asymptotically stable in the mean square, and the proof is now complete. □

In Theorem 1, a sufficient condition has been given that guarantees the global asymptotic stability in the mean square of the augmented system (14). In what follows, we consider two special cases based on Theorem 1. The proofs of the corollaries below follow directly from Theorem 1 and are therefore omitted.

Corollary 1. Suppose that there are no parameter uncertainties, that is, $\Sigma(k)=0$. Let the estimator gains $\hat F$ and $\hat G$ be given and the admissible conditions hold. Then, the augmented system (14) is globally asymptotically stable in the mean square if there exist positive definite matrices $P$, $Q$, $R$ and three positive scalars $\lambda_1$, $\lambda_2$ and $\lambda_3$ such that

$$\Pi_1=\begin{bmatrix}\Pi_{11}&*&*\\ K&L&*\\ Z&T&\tilde P\end{bmatrix}<0\tag{55}$$

where $\Pi_1$, $\Pi_{11}$, $K$, $L$, $Z$ and $T$ are defined previously.

Corollary 2. Suppose that the distributed delay term disappears. Let the estimator gains $\hat F$ and $\hat G$ be given and the admissible conditions hold. Then, the augmented system (14) is globally asymptotically stable in the mean square if there exist positive definite matrices $P$, $Q$ and four positive scalars $\lambda_1$, $\lambda_2$, $s_1$ and $s_2$ such that

$$\Pi=\begin{bmatrix}\Pi_1&*&*&*&*\\ s_1\mathcal E_1&-s_1I&*&*&*\\ \mathcal H_1^T&s_1J_1^T&-s_1I&*&*\\ s_2\mathcal E_2&0&0&-s_2I&*\\ \mathcal H_2^T&0&0&s_2J_2^T&-s_2I\end{bmatrix}<0\tag{56}$$

where

$$\Pi_1=\begin{bmatrix}\Pi_{11}&*&*\\ K&L&*\\ Z&T&\tilde P\end{bmatrix},\qquad K=\begin{bmatrix}-\lambda_1\bar U_2^T&0\\ 0&-\lambda_2\bar V_2^T\end{bmatrix},\qquad L=\begin{bmatrix}-\lambda_1I&0\\ 0&-\lambda_2I\end{bmatrix},\qquad \tilde P=\begin{bmatrix}-P&0\\ 0&-P\end{bmatrix},$$

$$\tilde\Pi_{11}=-P+(\tau_M-\tau_m+1)Q-\lambda_1\bar U_1,\qquad \tilde\Pi_{22}=-Q-\lambda_2\bar V_1,$$

and $Z$, $T$, together with $\bar A_1$, $\bar A_2$, $\bar B_1$, $\bar B_2$, $\bar D$, $X_1$, $X_2$, $X_3$, $Y_1$, $Y_2$, $\hat X_1$, $\hat X_2$, $\hat Y_1$, $\hat Y_2$, $\mathcal H_1$, $\mathcal H_2$, $J$, $\mathcal E_1$, $\mathcal E_2$, $\bar U_1$, $\bar U_2$, $\bar V_1$ and $\bar V_2$, are defined previously, with the blocks associated with $h(\cdot)$ and the distributed delays deleted.

Up to now, a series of conditions has been obtained for a general class of discrete-time delayed neural networks with LFUs and SPDs to be globally asymptotically stable in the mean square. Theorem 1 offers a sufficient condition that guarantees the global asymptotic stability in the mean square of the estimation error dynamics, and the two corollaries give simplified conditions for the special yet practical cases in which there are no LFUs, or no distributed delay term. Next, we investigate the estimator design problem on the basis of the stability condition derived in Theorem 1. In the following theorem, the design approach is given in terms of the solution to an LMI.

Theorem 2. Let the admissible conditions hold. Then, the state estimation problem for the neural network (1) is solvable if there exist positive definite matrices $Q$, $R$, $P=I\otimes P_0$, matrices $\hat F_f$, $\hat G_f$ and five positive scalars $\lambda_1$, $\lambda_2$, $\lambda_3$, $s_1$ and $s_2$ such that

$$\tilde\Pi=\begin{bmatrix}\tilde\Pi_1&*&*&*&*\\ s_1\mathcal E_1&-s_1I&*&*&*\\ \tilde{\mathcal H}_1^T&s_1J_1^T&-s_1I&*&*\\ s_2\mathcal E_2&0&0&-s_2I&*\\ \mathcal H_2^T&0&0&s_2J_2^T&-s_2I\end{bmatrix}<0\tag{57}$$

where

$$\tilde\Pi_1=\begin{bmatrix}\Pi_{11}&*&*\\ K&L&*\\ \tilde Z&T&\tilde P\end{bmatrix},\qquad \tilde Z=\begin{bmatrix}A_{1f}&0\\ A_{2f}&0\\ P\bar D&0\end{bmatrix},\qquad \tilde{\mathcal H}_1=\begin{bmatrix}0\\ 0\\ \tilde X_1\end{bmatrix},\qquad \tilde X_1=\begin{bmatrix}X_1&0&0\\ 0&X_4&0\\ 0&0&PX_2\end{bmatrix},$$

$$A_{1f}=\begin{bmatrix}P_0A&0&0&0\\ \gamma\hat G_fC&\hat F_f&(1-\gamma)\hat G_f&0\\ \gamma P_0C&0&(1-\gamma)P_0&0\\ 0&P_0A-\hat F_f-\gamma\hat G_fC&-(1-\gamma)\hat G_f&P_0A-\gamma\hat G_fC\end{bmatrix},$$

$$A_{2f}=\begin{bmatrix}0&0&0&0\\ \tilde\gamma\hat G_fC&0&-\tilde\gamma\hat G_f&0\\ \tilde\gamma P_0C&0&-\tilde\gamma P_0&0\\ 0&-\tilde\gamma\hat G_fC&\tilde\gamma\hat G_f&-\tilde\gamma\hat G_fC\end{bmatrix},$$

$$X_1=\begin{bmatrix}P_0M_1&0\\ \gamma\hat G_fM_2&0\\ \gamma P_0M_2&0\\ 0&P_0M_1-\gamma\hat G_fM_2\end{bmatrix},\qquad X_4=\begin{bmatrix}0&0\\ \tilde\gamma\hat G_fM_2&0\\ \tilde\gamma P_0M_2&0\\ 0&-\tilde\gamma\hat G_fM_2\end{bmatrix},$$

and the other matrices are defined as in Theorem 1. Furthermore, the estimator gains can be designed as $\hat F=P_0^{-1}\hat F_f$ and $\hat G=P_0^{-1}\hat G_f$.

Proof. The proof follows readily from Theorem 1 and is therefore omitted. □

Remark 2. In Theorem 1, a sufficient condition is presented to ensure the global asymptotic stability in the mean square of the estimation error dynamics. Moreover, Theorem 2 shows that the estimator design problem can be solved for the addressed discrete-time delayed neural network with MTDs, LFUs and SPDs. The criteria in Theorems 1 and 2 are expressed in terms of the solution to certain LMIs, which can be readily solved by resorting to the Matlab LMI toolbox.
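The feasibility step behind Theorem 2 can be reproduced in any semidefinite-programming front end, not only the Matlab LMI toolbox mentioned above. Purely as an illustration of the workflow (this is not the authors' code), the following Python/CVXPY sketch solves a toy LMI with the same affine structure as (57): a discrete-time Lyapunov inequality in a single matrix variable. The dynamics matrix, dimensions and tolerance below are placeholder assumptions.

```python
import cvxpy as cp
import numpy as np

# Toy stand-in for the feasibility step of Theorem 2: find P > 0 with
# A^T P A - P < 0 (mean-square stability of x(k+1) = A x(k)). The actual
# LMI (57) has the same affine structure in (P0, Q, R, F_f, G_f, s1, s2),
# just with larger blocks, which are omitted here.
A = np.array([[0.4, 0.0],
              [0.1, 0.5]])          # placeholder dynamics
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6                          # strictness margin for the inequalities
constraints = [P >> eps * np.eye(n),
               A.T @ P @ A - P << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)                  # 'optimal' means the LMI is feasible

# For (57) one would add matrix variables F_f, G_f, assemble the block
# matrix with cp.bmat, impose "<< 0", and then recover the gains as
# F_hat = inv(P0) @ F_f and G_hat = inv(P0) @ G_f, as in Theorem 2.
```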

4. An illustrative example

In this section, we present an example to demonstrate the effectiveness of the proposed estimation approach for the neural network (1). The considered neural network is modeled by (1) with the following parameters:

$$A=\begin{bmatrix}0.4&0&0\\ 0&0.52&0\\ 0&0&0.48\end{bmatrix},\quad B_1=\begin{bmatrix}0.04&-0.08&0.04\\ 0.02&0.04&-0.06\\ -0.1&0.2&0.18\end{bmatrix},\quad B_2=\begin{bmatrix}-0.06&0.02&0.07\\ 0.3&0.4&-0.2\\ 0.25&0.15&0.2\end{bmatrix},$$

$$B_3=\begin{bmatrix}-0.01&0.01&0.01\\ 0.02&0.03&0.02\\ 0.03&0.024&0.025\end{bmatrix},\quad C=\begin{bmatrix}0.45&0.3&0.12\\ -0.25&0.4&0.2\\ 0.2&-0.1&0.3\end{bmatrix},\quad D=\begin{bmatrix}-0.04&0.08&0.048\\ 0.12&0.16&-0.06\\ 0.04&0.08&0.052\end{bmatrix},$$

$$M_1=\begin{bmatrix}0.02\\ 0.01\\ 0.02\end{bmatrix},\quad M_2=\begin{bmatrix}0\\ 0.01\\ -0.01\end{bmatrix},\quad N_1=[0.01\ \ 0.01\ \ 0.02],\quad N_2=[0.03\ \ 0.01\ \ 0.015],\quad N_3=[0.02\ \ 0.01\ \ 0.02],$$

$$\mu_d=2^{-(d+3)},\qquad \bar\mu=\frac18,\qquad \tau(k)=\frac32+\frac12\sin\frac{k\pi}2,\qquad \tau_m=1,\qquad \tau_M=2,$$

and the nonlinear activation functions

$$f(x)=\begin{bmatrix}-0.3x_1\\ 0.2x_2+\tanh(0.6x_2)\\ -0.2x_1+\tanh(0.4x_1)+0.1x_2\end{bmatrix},\qquad g(x)=h(x)=\begin{bmatrix}-0.2x_1+\tanh(0.4x_1)+0.1x_2\\ 0.1x_1-0.3x_2+\tanh(0.6x_2)-0.1x_3+\tanh(0.1x_3)\\ -0.1x_1+\tanh(0.1x_2)-0.3x_3+\tanh(0.6x_3)\end{bmatrix}.$$

It is not difficult to verify that the above nonlinear functions satisfy the sector-bounded conditions (6)–(8) with

$$U_1=\begin{bmatrix}-0.3&0&0\\ 0&0.2&0\\ -0.2&0.1&0\end{bmatrix},\qquad U_2=\begin{bmatrix}-0.3&0&0\\ 0&0.8&0\\ 0.2&0.1&0\end{bmatrix},$$

$$W_1=V_1=\begin{bmatrix}0.2&0.1&0\\ 0.1&0.3&0\\ -0.1&0.1&0.3\end{bmatrix},\qquad W_2=V_2=\begin{bmatrix}-0.2&0.1&0\\ 0.1&-0.3&-0.1\\ -0.1&0&-0.3\end{bmatrix}.$$

In this example, the probability is taken as $\gamma=0.95$. With the above parameters, using Matlab with YALMIP 3.0, we solve the LMI (57) and obtain a set of feasible solutions including

$$P_0=\begin{bmatrix}3.8487&0.3561&-0.0467\\ 0.3561&6.8113&1.3803\\ -0.0467&1.3803&5.4363\end{bmatrix},\qquad R=\begin{bmatrix}0.8093&0.0411&0.0303\\ 0.0411&0.8415&0.0451\\ 0.0303&0.0451&0.8167\end{bmatrix},$$

$$\hat F_f=\begin{bmatrix}0.4990&0.3275&0.0128\\ 0.4374&2.6673&0.6096\\ 0.0594&0.6785&1.1097\end{bmatrix},\qquad \hat G_f=\begin{bmatrix}1.6404&-1.3433&0.0062\\ 3.1841&4.6967&-2.4429\\ -0.4769&3.2517&5.0681\end{bmatrix}.$$

Therefore, according to Theorem 2, the desired estimator gains can be designed as

$$\hat F=P_0^{-1}\hat F_f=\begin{bmatrix}0.1242&0.0500&0.0010\\ 0.0583&0.3833&0.0507\\ -0.0028&0.0279&0.1913\end{bmatrix},\qquad \hat G=P_0^{-1}\hat G_f=\begin{bmatrix}0.3784&-0.4013&0.0685\\ 0.4900&0.6220&-0.5812\\ -0.2089&0.4368&1.0804\end{bmatrix}.$$

It then follows from Theorem 2 that the system (14) with the given parameters is globally asymptotically stable in the mean square, which is further verified by the simulation results given in Figs. 1–3.

Fig. 1. Ideal output $\tilde y_i(k)$ and actual output $y_i(k)$ ($i=1,2,3$).

Fig. 2. State $x_i(k)$ and its estimate $\hat x_i(k)$ ($i=1,2,3$).

Fig. 3. Estimation errors $e_i(k)$ ($i=1,2,3$).
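To reproduce the flavor of Figs. 1–3, the following sketch (not the authors' simulation code) runs the estimator (12) with the gains computed above on a simplified version of (1): the delayed terms, distributed delays, uncertainties and noise are deliberately omitted here for brevity, and the transcribed parameter values should be checked against the published ones.

```python
import numpy as np

rng = np.random.default_rng(42)

# Nominal parameters and designed gains, transcribed from the example above.
A = np.diag([0.4, 0.52, 0.48])
B1 = np.array([[0.04, -0.08, 0.04], [0.02, 0.04, -0.06], [-0.1, 0.2, 0.18]])
C = np.array([[0.45, 0.3, 0.12], [-0.25, 0.4, 0.2], [0.2, -0.1, 0.3]])
F_hat = np.array([[0.1242, 0.0500, 0.0010],
                  [0.0583, 0.3833, 0.0507],
                  [-0.0028, 0.0279, 0.1913]])
G_hat = np.array([[0.3784, -0.4013, 0.0685],
                  [0.4900, 0.6220, -0.5812],
                  [-0.2089, 0.4368, 1.0804]])
gamma_bar = 0.95

def f(x):
    """Activation function f of the example."""
    return np.array([-0.3 * x[0],
                     0.2 * x[1] + np.tanh(0.6 * x[1]),
                     -0.2 * x[0] + np.tanh(0.4 * x[0]) + 0.1 * x[1]])

x = np.array([0.5, -0.3, 0.2])   # arbitrary initial state
x_hat = np.zeros(3)              # zero initial estimate, as in (12)
y_prev = np.zeros(3)

for k in range(60):
    y = C @ x if rng.random() < gamma_bar else y_prev   # SPD model (11)
    x_hat = F_hat @ x_hat + G_hat @ y                   # estimator (12)
    x = A @ x + B1 @ f(x)                               # delay-free core of (1)
    y_prev = y
    # np.linalg.norm(x - x_hat) decays toward zero, mirroring Fig. 3.
```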

5. Conclusions

In this paper, the robust state estimation problem has been investigated for a class of discrete-time neural networks with MTDs, LFUs and SPDs. SPDs have been used to model multiple missing measurements, which occur according to a Bernoulli-distributed white sequence with a known conditional probability. By constructing a Lyapunov–Krasovskii functional and using the LMI technique, sufficient conditions have been established to guarantee the global asymptotic stability in the mean square of the error system. Based on these conditions, an estimator has been designed in terms of the solution to an LMI. Finally, an illustrative example has been provided to demonstrate the effectiveness and applicability of the techniques presented in this paper.

References

[1] B. Chen, J. Wang, Global exponential periodicity and global exponential stability for a class of neural networks, Phys. Lett. A 329 (1–2) (2004) 36–48.
[2] D. Ding, Z. Wang, J. Hu, H. Shu, Dissipative control for state-saturated discrete time-varying systems with randomly occurring nonlinearities and missing measurements, Int. J. Control 86 (4) (2013) 674–688.
[3] H. Dong, Z. Wang, H. Gao, Variance-constrained H∞ filtering for nonlinear time-varying systems with multiple missing measurements: the finite-horizon case, IEEE Trans. Signal Process. 58 (5) (2010) 2534–2543.
[4] H. Dong, Z. Wang, H. Gao, Distributed filtering for a class of time-varying systems over sensor networks with quantization errors and successive packet dropouts, IEEE Trans. Signal Process. 60 (6) (2012) 3164–3173.
[5] H. Dong, Z. Wang, H. Gao, Distributed H∞ filtering for a class of Markovian jump nonlinear time-delay systems over lossy sensor networks, IEEE Trans. Ind. Electron. 60 (10) (2013) 4665–4672.
[6] V.T.S. Elanayar, Y.C. Shin, Radial basis function neural network for approximation and estimation of nonlinear stochastic dynamic systems, IEEE Trans. Neural Netw. 5 (4) (1994) 594–603.
[7] L. Elghaoui, G. Scorletti, Control of rational systems using linear-fractional representations and linear matrix inequalities, Automatica 32 (9) (1996) 1273–1284.
[8] J. Hu, Z. Wang, B. Shen, H. Gao, Quantised recursive filtering for a class of nonlinear systems with multiplicative noises and missing measurements, Int. J. Control 86 (4) (2013) 650–663.
[9] J. Hu, Z. Wang, B. Shen, H. Gao, Gain-constrained recursive filtering with stochastic nonlinearities and probabilistic sensor delays, IEEE Trans. Signal Process. 61 (5) (2013) 1230–1238.
[10] X. Kan, Z. Wang, H. Shu, State estimation for discrete-time delayed neural networks with linear fractional uncertainties and sensor saturations, Neurocomputing 117 (2013) 64–71.
[11] X. Li, H. Gao, X. Yu, A unified approach to the stability of generalized static neural networks with linear fractional uncertainties and delays, IEEE Trans. Syst. Man Cybern. Part B: Cybern. 41 (5) (2011) 1275–1286.
[12] J. Liang, J. Lam, Robust state estimation for stochastic genetic regulatory networks, Int. J. Syst. Sci. 41 (1) (2011) 47–63.
[13] Y. Liu, Z. Wang, X. Liu, Global exponential stability of generalized recurrent neural networks with discrete and distributed delays, Neural Netw. 19 (5) (2006) 667–675.
[14] Y. Liu, Z. Wang, X. Liu, State estimation for discrete-time Markovian jumping neural networks with mixed mode-dependent delays, Phys. Lett. A 372 (48) (2008) 7147–7155.
[15] M. Sahebsara, T. Chen, S.L. Shah, Optimal H2 filtering with random sensor delay, multiple packet dropout and uncertain observations, Int. J. Control 80 (2) (2007) 292–301.
[16] M. Sahebsara, T. Chen, S.L. Shah, Optimal H2 filtering in networked control systems with multiple packet dropout, IEEE Trans. Autom. Control 52 (8) (2007) 1508–1513.
[17] B. Shen, Z. Wang, H. Shu, H∞ filtering for uncertain time-varying systems with multiple randomly occurred nonlinearities and successive packet dropouts, Int. J. Robust Nonlinear Control 21 (14) (2011) 1693–1709.
[18] H. Shu, Z. Wang, Z. Lv, Global asymptotic stability of uncertain stochastic bi-directional associative memory networks with discrete and distributed delays, Math. Comput. Simul. 80 (3) (2009) 490–505.
[19] S. Sun, L. Xie, W. Xiao, Optimal full-order and reduced-order estimators for discrete-time systems with multiple packet dropout, IEEE Trans. Signal Process. 56 (8) (2008) 4031–4038.
[20] L. Wang, Z. Xu, Sufficient and necessary conditions for global exponential stability of discrete-time recurrent neural networks, IEEE Trans. Circuits Syst. I: Regul. Pap. 53 (6) (2006) 1373–1380.
[21] Z. Wang, D.W.C. Ho, X. Liu, State estimation for delayed neural networks, IEEE Trans. Neural Netw. 16 (1) (2005) 279–284.
[22] Z. Wang, Y. Liu, K. Fraser, X. Liu, Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays, Phys. Lett. A 354 (4) (2006) 288–297.
[23] Z. Wang, Y. Liu, X. Liu, Y. Shi, Robust state estimation for discrete-time stochastic neural networks with probabilistic measurement delays, Neurocomputing 74 (1–3) (2010) 256–264.
[24] L. Xie, Output feedback H∞ control of systems with parameter uncertainty, Int. J. Control 63 (4) (1996) 741–750.
[25] S. Xu, J. Lam, D.W.C. Ho, Y. Zou, Global robust exponential stability analysis for interval recurrent neural networks, Phys. Lett. A 325 (2) (2004) 124–133.
[26] W. Zhou, M. Li, Mixed time-delays dependent exponential stability for uncertain stochastic high-order neural networks, Appl. Math. Comput. 215 (2) (2009) 503–513.

Xiu Kan received the B.S. degree in Mathematics in 2007 from Ningxia University, Yinchuan, China, and the M.Sc. degree in Applied Mathematics in 2009 and the Ph.D. degree in Control Engineering in 2013, both from Donghua University, Shanghai, China. She is currently a Lecturer with the College of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, China. From October 2010 to October 2011, she was a Visiting Ph.D. Student in the Department of Information Systems and Computing, Brunel University, UK. Her research interests include nonlinear control and filtering, as well as complex networks and their applications. She is a very active reviewer for many international journals.

Huisheng Shu received his B.Sc. degree in Mathematics in 1984 from Anhui Normal University, Wuhu, China, and the M.Sc. degree in Applied Mathematics in 1990 and the Ph.D. degree in Control Theory in 2005, both from Donghua University, Shanghai, China. He is currently a Professor at Donghua University, Shanghai, China. He has published 20 papers in refereed international journals. His research interests include mathematical theory of stochastic systems, robust control and robust filtering.

Zhenna Li received her B.S. degree in Mathematics and Applied Mathematics from Linyi University, Shandong, China, in 2010. She is currently pursuing her Ph.D. degree in the School of Information Science and Technology, Donghua University, Shanghai, China, and is now a Visiting Ph.D. Student in the Department of Information Systems and Computing, Brunel University, UK. Her current research interests primarily include fault diagnosis and isolation, nonlinear stochastic systems and networked control systems. She is a very active reviewer for many international journals.