Stochastic sampled-data control for state estimation of time-varying delayed neural networks

Neural Networks 46 (2013) 99–108

Tae H. Lee a, Ju H. Park a,∗, O.M. Kwon b, S.M. Lee c

a Nonlinear Dynamics Group, Department of Electrical Engineering, Yeungnam University, 214-1 Dae-Dong, Kyongsan 712-749, Republic of Korea
b School of Electrical Engineering, Chungbuk National University, 52 Naesudong-ro, Cheongju 361-763, Republic of Korea
c Department of Electronic Engineering, Daegu University, Gyungsan 712-714, Republic of Korea

Article history: Received 29 November 2012; received in revised form 29 January 2013; accepted 2 May 2013.

Keywords: State estimator; Neural networks; Time-varying delay; Sampled-data; Stochastic sampling

Abstract

This study examines the state estimation problem for neural networks with a time-varying delay. Unlike other studies, sampled-data with stochastic sampling is used to design the state estimator, together with a novel approach that divides the bounding interval of the activation function into two subintervals. To fully use the sawtooth structure of the sampling input delay, a discontinuous Lyapunov functional is proposed based on the extended Wirtinger inequality. The desired estimator gain can be characterized in terms of the solution to linear matrix inequalities (LMIs). Finally, the proposed method is applied to two numerical examples to show the effectiveness of our result.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Over the last decades, considerable attention has been devoted to the study of neural networks because they can be used to solve certain problems related to signal processing, static image treatment, image processing, pattern recognition, optimization, etc. (Cichoki & Unbehauen, 1993; Hagan, Demuth, & Beale, 1996). In the implementation of neural networks, however, significant differences between an ideal and a practical neural network are encountered due to limitations of hardware, e.g., the finite switching speed of amplifiers, the threshold value of the communication line, etc. These differences can cause unpredictable problems such as the existence of time-delays, uncertainties, disturbances, etc. Among these factors, time-delays are encountered frequently, and their existence is often a source of oscillation, poor performance and instability of the neural network. As a result, many studies have focused on proposing either delay-independent or delay-dependent conditions for delayed neural networks, and a large number of papers have been published on various neural networks with time delays. Notable examples include Balasubramaniam and Chandran (2011), Balasubramaniam, Nagamani, and Rakkiyappan (2011), Gao, Chen, and Lam (2008), Huang and Cao (2010), Kwon, Lee, Park, and Cha (2012), Liu, Chen, Cao, and Lu (2011), Mou, Gao, Lam, and



∗ Corresponding author. E-mail address: [email protected] (J.H. Park).

0893-6080/$ – see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.neunet.2013.05.001

Qiang (2008), Park, Kwon, Park, and Lee (2012), Park, Kwon, Park, Lee, and Cha (2012), Wu, Park, Su, and Chu (2012a, 2012b) and the references therein. On the other hand, because the neuron states are seldom fully available in the network outputs, it is important to estimate the neuron state through the available measurements to utilize the neural networks. Recently, the state estimation problem for the neural networks has attracted interest (Balasubramaniam, Vembarasan, & Rakkiyappan, 2012; Bao & Cao, 2011; Huang, Feng, & Cao, 2010; Wang, Ho, & Liu, 2005; Wu, Su, & Chu, 2010). Wang et al. (2005) first investigated the state estimation problem for neural networks with time-varying delays. Wu et al. (2010) analyzed the state estimation problem for time-varying delayed discrete neural networks with Markovian jumping parameters, of which the transition probabilities were assumed to be partially unknown. In Bao and Cao (2011), the state estimation problem for a class of discrete-time stochastic neural networks with random delays, which is characterized by a Bernoulli stochastic variable, was investigated. The delay-dependent robust asymptotic state estimation of fuzzy neural networks with a mixed interval time-varying delay was studied using the Takagi–Sugeno fuzzy model representation (Balasubramaniam et al., 2012). A range of control methods have recently been applied to the design of a state estimator for neural networks (Duan, Su, & Wu, 2012; Huang, Feng, & Cao, 2011; Salam & Zhang, 2001). For example, Salam and Zhang (2001) designed an adaptive state estimator using optimization theory, calculus of the variations and gradient descent dynamics. Huang et al. (2011) examined the H∞ control method for a state estimation


of static neural networks with time-varying delays, where both delay-dependent and delay-independent criteria guaranteeing the global asymptotic stability of the resulting error system and H∞ or H2 performance are presented.

It is natural nowadays to consider control input signals that are discontinuous, because the signals of most hardware, such as sensors, transmitters, controllers, etc., are obtained from a sampler and data hold and are therefore discontinuous. The controller design problem using sampled-data has thus attracted considerable attention, and many studies have been published in recent years (Astrom & Wittenmark, 1989; Gao, Meng, & Chen, 2008; Gao, Wu, & Shi, 2009; Hu & Michel, 2000; Lam, 2012; Li, Zhang, Hu, & Nie, 2011; Li, Zhang, & Jing, 2009; Lu & Hill, 2008; Mikheev, Sobolev, & Fridman, 1988; Ozdemir & Townley, 2003; Tahara, Fujii, & Yokoyama, 2007; Zhu & Wang, 2011). In sampled-data control systems, selecting the proper sampling interval is very important for designing suitable controllers. Traditionally, many researchers have focused on constant sampling. On the other hand, time-varying sampling is more useful because it copes well with problems such as changes in the network situation, the limited calculating speed of hardware, etc. (Hu & Michel, 2000; Li et al., 2009; Ozdemir & Townley, 2003; Tahara et al., 2007). Tahara et al. (2007) used the variable sampling deadbeat control method for a MW (megawatt) class PWM (pulse width modulation) inverter system to overcome the poor control performance that comes from the limitations of hardware. Ozdemir and Townley (2003) proposed sampled-data integral control for a large class of infinite-dimensional systems using convergent adaptive sampling. Hu and Michel (2000) discussed the stability problem of digital feedback control systems with time-varying sampling periods. In addition, the possibility of a random change in the sampling interval was considered (Gao, Meng et al., 2008; Gao et al., 2009), which can be seen as a further extension of the time-varying sampling case. Therefore, the necessity of the controller design problem using sampled-data with a stochastically varying sampling interval has been highlighted. To the best of the authors' knowledge, there are no reports on the problem of state estimation for neural networks using sampled-data with a stochastically varying sampling interval.

Motivated by these studies, this paper addresses the design problem of a state estimator for time-varying delayed neural networks. Unlike previous studies, the states of the proposed neural networks are estimated using sampled-data with stochastic sampling. In addition, a state-of-the-art technique (Kwon et al., 2012), which divides the bounding interval of the activation function, is applied to handle the activation function of the neural networks. Based on a common Lyapunov functional, this paper proposes a discontinuous Lyapunov functional approach that makes full use of the sawtooth structure characteristic of the sampling input delays by defining the stochastic variables ρij(t). Two numerical examples are given to illustrate the effectiveness and reduced conservatism of the proposed method.

Notations: Rn is the n-dimensional Euclidean space; Rm×n denotes the set of m × n real matrices. X > 0 (respectively, X ≥ 0) means that the matrix X is real symmetric positive definite (respectively, positive semi-definite). 0m×n denotes the m × n zero matrix.
Γ (i, j) denotes the ith row, jth column element (or block matrix) of matrix Γ . ⋆ in a matrix represents the elements below the main diagonal of a symmetric matrix. diag{· · ·} denotes a diagonal matrix. ∥ · ∥ refers to the Euclidean vector norm and induced matrix norm. E{x} and E{x|y}, respectively, mean the expectation of a stochastic variable x and the expectation of the stochastic variable x conditional on the stochastic variable y. Pr{α} is the occurrence probability of an event α .

2. Problem formulation

Consider the following time-varying delayed neural network:
$$\dot{x}(t) = -Ax(t) + B_1 f(x(t)) + B_2 f(x(t-d(t))) + J, \qquad y(t) = Cx(t), \tag{1}$$
where $x(t) = [x_1(t)\; x_2(t)\; \cdots\; x_n(t)]^T \in \mathbb{R}^n$ is the neuron state vector, $n$ is the number of neurons in the neural network, $y(t) \in \mathbb{R}^q$ is the output of the network, $f(x(t)) = [f_1(x_1(t))\; f_2(x_2(t))\; \cdots\; f_n(x_n(t))]^T \in \mathbb{R}^n$ is the neuron activation function, $A = \mathrm{diag}\{a_i\} \in \mathbb{R}^{n\times n}$ is a positive diagonal matrix, $B_1 = (b^1_{ij})_{n\times n} \in \mathbb{R}^{n\times n}$ and $B_2 = (b^2_{ij})_{n\times n} \in \mathbb{R}^{n\times n}$ are the interconnection matrices representing the weight coefficients of the neurons, and $J = [J_1\; J_2\; \cdots\; J_n]^T \in \mathbb{R}^n$ is an external input vector. The delay $d(t)$ is a time-varying continuous function satisfying
$$0 \le d(t) \le d, \qquad \dot{d}(t) \le \mu,$$
where $d$ and $\mu$ are known constants.

Assumption 1. Each activation function $f_i(\cdot)$ in (1) is continuous and bounded, and satisfies
$$L_i^- \le \frac{f_i(a) - f_i(b)}{a - b} \le L_i^+, \qquad i = 1, 2, \ldots, n,$$
where $f(0) = 0$, $a, b \in \mathbb{R}$, $a \ne b$, and $L_i^-$ and $L_i^+$ are known real scalars.

The aim of this paper is to present an efficient estimation algorithm to observe the neuron states from the available network output. To this end, the following full-order observer is proposed:
$$\dot{\hat{x}}(t) = -A\hat{x}(t) + B_1 f(\hat{x}(t)) + B_2 f(\hat{x}(t-d(t))) + J + u(t), \qquad \hat{y}(t) = C\hat{x}(t), \tag{2}$$
where $\hat{x}(t) \in \mathbb{R}^n$ is the estimate of the neuron state $x(t)$, $\hat{y}(t) \in \mathbb{R}^q$ is the estimated output vector, and $u(t) \in \mathbb{R}^n$ is the control input. Define the error vector $e(t) = x(t) - \hat{x}(t)$. The error dynamical system follows from (1) and (2):
$$\dot{e}(t) = -Ae(t) + B_1 g(t) + B_2 g(t-d(t)) - u(t), \tag{3}$$
where $g(t) = f(x(t)) - f(\hat{x}(t))$.

In this paper, the controller is assumed to use sampled-data with stochastic sampling as follows:
$$u(t) = K(y(t_k) - \hat{y}(t_k)) = KCe(t_k), \qquad t_k \le t < t_{k+1},\; k = 0, 1, 2, \ldots, \tag{4}$$
where $K$ is the gain matrix of the feedback controller to be determined later, $t_k$ is the updating instant of the Zero-Order-Hold (ZOH), and the sampling interval is defined as $h = t_{k+1} - t_k$.

Remark 1. Most controllers currently in use are digital controllers or are networked to the system. Such control systems can be modeled as sampled-data control systems, so the sampled-data control approach should be examined.

The sampling interval $h$ is assumed to take one of $m$ values, $t_{k+1} - t_k = h_i$, where the index $i$ occurs stochastically in the set $\{1, \ldots, m\}$ with $0 = h_0 < h_1 < \cdots < h_m$, and the probability of each occurrence can be expressed as
$$\Pr\{h = h_i\} = \beta_i, \qquad i = 1, 2, \ldots, m,$$
where the $\beta_i \in [0, 1]$ are known constants and $\sum_{i=1}^m \beta_i = 1$. To design the controller using sampled-data with stochastic sampling, the concept of the time-varying delayed control input proposed by Astrom and Wittenmark (1989) and Mikheev et al.
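To make the sampling model concrete, the following short Python sketch (an illustration added for the reader, not the authors' code) generates a sequence of ZOH updating instants whose intervals are drawn from $\{h_1, \ldots, h_m\}$ with probabilities $\beta_i$; the values $h = \{0.2, 0.4\}$ and $\beta_1 = \beta_2 = 0.5$ anticipate the setting of Example 1.

```python
import numpy as np

rng = np.random.default_rng(0)
h_vals = np.array([0.2, 0.4])   # candidate sampling intervals h_1 < h_2
beta = np.array([0.5, 0.5])     # occurrence probabilities beta_i (sum to 1)

def sampling_instants(t_end):
    """Generate ZOH updating instants t_k with Pr{t_{k+1} - t_k = h_i} = beta_i."""
    t, instants = 0.0, [0.0]
    while t < t_end:
        t += rng.choice(h_vals, p=beta)
        instants.append(t)
    return np.array(instants)

print(sampling_instants(2.0))
```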


(1988) was adopted in this paper. Therefore, by defining $\tau(t) = t - t_k$, $t_k \le t < t_{k+1}$, the controller (4) can be represented as
$$u(t) = KCe(t_k) = KCe(t - \tau(t)), \qquad t_k \le t < t_{k+1}.$$
Here, the time-varying delay $\tau(t)$ satisfies $\dot{\tau}(t) = 1$ and the following probability rule:
$$\Pr\{0 \le \tau(t) < h_1\} = \frac{h_1}{h_m}, \quad \Pr\{h_1 \le \tau(t) < h_2\} = \frac{h_2 - h_1}{h_m}, \quad \ldots, \quad \Pr\{h_{m-1} \le \tau(t) < h_m\} = \frac{h_m - h_{m-1}}{h_m}.$$

The stochastic variables $\alpha_i(t)$ and $\beta_i(t)$ are defined such that
$$\alpha_i(t) = \begin{cases} 1, & h_{i-1} \le \tau(t) < h_i \\ 0, & \text{otherwise,} \end{cases} \qquad \beta_i(t) = \begin{cases} 1, & h = h_i \\ 0, & \text{otherwise,} \end{cases} \qquad i = 1, \ldots, m,$$
with the following probabilities:
$$\Pr\{\alpha_i(t) = 1\} = \Pr\{h_{i-1} \le \tau(t) < h_i\} = \sum_{j=i}^m \beta_j \frac{h_i - h_{i-1}}{h_j} = \alpha_i, \tag{5}$$
$$\Pr\{\beta_i(t) = 1\} = \Pr\{h = h_i\} = \beta_i, \tag{6}$$
where $i = 1, \ldots, m$ and $\sum_{i=1}^m \alpha_i = 1$. Therefore, system (3) with $m$ sampling intervals can be expressed as
$$\dot{e}(t) = -Ae(t) + B_1 g(t) + B_2 g(t - d(t)) - \sum_{i=1}^m \alpha_i(t) KCe(t - \tau_i(t)), \tag{7}$$
where $h_{i-1} \le \tau_i(t) < h_i$.

Definition 1 (Gao, Meng et al., 2008). The error system (7) is said to be mean-square stable if for any $\varepsilon > 0$ there exists $\sigma(\varepsilon) > 0$ such that $\mathrm{E}\{\|e(t)\|^2\} < \varepsilon$, $t > 0$, when $\mathrm{E}\{\|e(0)\|^2\} < \sigma(\varepsilon)$. In addition, if $\lim_{t\to\infty} \mathrm{E}\{\|e(t)\|^2\} = 0$ for any initial conditions, the error system (7) is said to be globally mean-square asymptotically stable.

Remark 2. The time-varying delay $\tau_i(t)$ in Eq. (7) is independent of the stochastic interval. Therefore, by introducing the stochastic variables $\alpha_i(t)$, system (3) can be remodeled as system (7), which is a general time-varying delay system. Here, the probability of $\alpha_i(t)$ is indicated in Eq. (5), which originated from Gao et al. (2009).
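As a concrete illustration of (5) (a worked case added here; the numbers anticipate Example 1), for $m = 2$ the probabilities reduce to
$$\alpha_1 = \beta_1\frac{h_1}{h_1} + \beta_2\frac{h_1}{h_2} = \beta_1 + \beta_2\frac{h_1}{h_2}, \qquad \alpha_2 = \beta_2\frac{h_2 - h_1}{h_2},$$
so that $\alpha_1 + \alpha_2 = \beta_1 + \beta_2 = 1$. With $h_1 = 0.2$, $h_2 = 0.4$ and $\beta_1 = \beta_2 = 0.5$, this gives $\alpha_1 = 0.75$ and $\alpha_2 = 0.25$.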

3. Main results

In this section, the design problem of the state estimator for time-varying delayed neural networks using sampled-data with stochastic sampling intervals is investigated via a discontinuous Lyapunov functional approach. Before proceeding further, the following lemmas are given.

Lemma 1 (Gu, Kharitonov, & Chen, 2003). For any matrix $M > 0$, scalars $\gamma_1$ and $\gamma_2$ satisfying $\gamma_2 > \gamma_1$, and a vector function $x : [\gamma_1, \gamma_2] \to \mathbb{R}^n$ such that the integrations concerned are well defined,
$$\left(\int_{\gamma_1}^{\gamma_2} x(s)\,ds\right)^{\!T} M \left(\int_{\gamma_1}^{\gamma_2} x(s)\,ds\right) \le (\gamma_2 - \gamma_1) \int_{\gamma_1}^{\gamma_2} x^T(s) M x(s)\,ds.$$

Lemma 2 (Liu, Suplin, & Fridman, 2011). Let $x(t) \in W[a, b)$ and $x(a) = 0$. For any matrix $R > 0$, the following inequality holds:
$$\int_a^b x(s)^T R x(s)\,ds \le \frac{4(b-a)^2}{\pi^2} \int_a^b \dot{x}(s)^T R \dot{x}(s)\,ds.$$

The following is the main result of this paper.

Theorem 1. For given positive constants $\mu$, $\gamma$, $\beta_i$, $h_i$ ($i = 1, \ldots, m$) and diagonal matrices $L_1 = \mathrm{diag}\{L_1^-, \ldots, L_n^-\}$ and $L_2 = \mathrm{diag}\{L_1^+, \ldots, L_n^+\}$, the error system (7) is globally mean-square asymptotically stable if there exist positive-definite matrices $P$, $S_i$, $U_i$, $X_i$, $Y_i$, $Z_i$ ($i = 1, \ldots, m$), $Q = \begin{bmatrix} Q_1 & Q_2 \\ \star & Q_3 \end{bmatrix}$, $R = \begin{bmatrix} R_1 & R_2 \\ \star & R_3 \end{bmatrix}$, positive diagonal matrices $\Lambda_i = \mathrm{diag}\{\lambda_{i1}, \ldots, \lambda_{in}\}$ ($i = 1, \ldots, 4$), symmetric matrices $T_i$, $W_i$ ($i = 1, \ldots, m$), and any matrices $G$, $H$, $V_i$ ($i = 1, \ldots, m$) satisfying the following LMIs:
$$\Phi^a = \begin{bmatrix}
\Phi_1^a & \Gamma_1 & 0 & 0 & \Phi_2 & \Phi_3^a & GB_2 & 0 \\
\star & \Gamma_2 & \multicolumn{6}{c}{\Gamma_3} \\
\star & \star & \Phi_4^a & 0 & 0 & 0 & \Phi_5^a & 0 \\
\star & \star & \star & -R_1 & 0 & 0 & 0 & -R_2 \\
\star & \star & \star & \star & \Phi_6 & \gamma GB_1 & \gamma GB_2 & 0 \\
\star & \star & \star & \star & \star & \Phi_7^a & 0 & 0 \\
\star & \star & \star & \star & \star & \star & \Phi_8^a & 0 \\
\star & \star & \star & \star & \star & \star & \star & -R_3
\end{bmatrix} < 0, \qquad \forall a = 1, 2, \tag{8}$$
$$\begin{bmatrix} U_i & V_i \\ \star & U_i \end{bmatrix} \ge 0, \tag{9}$$
$$\begin{bmatrix} X_i & T_i \\ \star & Y_i \end{bmatrix} > 0, \tag{10}$$
$$\begin{bmatrix} X_i & W_i \\ \star & Y_i \end{bmatrix} > 0, \qquad \forall i = 1, \ldots, m, \tag{11}$$
where the block row of $\Phi^a$ associated with $e_m(t)$ (defined in the proof below) is $[\star\;\; \Gamma_2\;\; \Gamma_3]$, $\Gamma_3$ covering the last six block columns, and the notations of $\Phi^a$ are defined in Appendix A. Then, the desired control gain matrix of (4) is given by $K = G^{-1}H$.
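The reconstruction step $K = G^{-1}H$ is the standard linearizing change of variables $H = GK$. As a minimal, self-contained illustration of this mechanism (a sketch added here, not the authors' code; it implements only the delay-free core of the design, not the full conditions (8)–(11)), the following Python/CVXPY fragment computes an output-injection gain $K$ making $-(A + KC)$ Hurwitz from an LMI in $P$ and $H = PK$:

```python
import numpy as np
import cvxpy as cp

# Toy data standing in for the error dynamics e' = -(A + K C) e (delay terms dropped)
A = np.eye(2)
C = np.array([[1.0, 1.0]])
n, q = A.shape[0], C.shape[0]

P = cp.Variable((n, n), symmetric=True)   # Lyapunov matrix
H = cp.Variable((n, q))                   # H = P K (linearizing substitution)
eps = 1e-3

# -(A + K C) Hurwitz  <=>  P A + A^T P + H C + C^T H^T > 0 with P > 0
lmi = P @ A + A.T @ P + H @ C + C.T @ H.T
constraints = [P >> eps * np.eye(n),
               (lmi + lmi.T) / 2 >> eps * np.eye(n)]  # symmetrize explicitly for CVXPY
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)

K = np.linalg.solve(P.value, H.value)     # recover the gain, mirroring K = G^{-1} H
print(prob.status, K.ravel())
```

The full design in Theorem 1 follows the same pattern, with the LMIs (8)–(11) replacing the toy stability condition; the original computations used MATLAB with YALMIP and SeDuMi (see Section 4).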

Proof. Denote $\eta(t) = [e^T(t)\;\; g^T(t)]^T$ and consider the following discontinuous Lyapunov functional for the error system (7):
$$V(e_t) = V_1(e_t) + V_2(e_t) + V_3(e_t) + V_4(e_t) + V_5(e_t), \qquad t \in [t_k, t_{k+1}), \tag{12}$$
where
$$V_1(e_t) = e^T(t) P e(t),$$
$$V_2(e_t) = \int_{t-d(t)}^{t} \eta^T(s) Q \eta(s)\,ds + \int_{t-d}^{t} \eta^T(s) R \eta(s)\,ds,$$
$$V_3(e_t) = \sum_{i=1}^m \alpha_i(t) \left[ \int_{t-h_i}^{t-h_{i-1}} e^T(s) Z_i e(s)\,ds + p_i \int_{-h_i}^{-h_{i-1}} \int_{t+s}^{t} \dot{e}^T(u) U_i \dot{e}(u)\,du\,ds \right],$$
$$V_4(e_t) = \sum_{i=1}^m \alpha_i(t) \int_{-h_i}^{-h_{i-1}} \int_{t+s}^{t} \left[ e^T(u) X_i e(u) + \dot{e}^T(u) Y_i \dot{e}(u) \right] du\,ds,$$
$$V_5(e_t) = \sum_{i=1}^m V_{5i}(e_t),$$
$$V_{5i}(e_t) = \beta_i(t) \left[ h_i^2 \int_{t_k}^{t} \dot{e}^T(s) S_i \dot{e}(s)\,ds - \frac{\pi^2}{4} \int_{t_k}^{t} (e(s) - e(t_k))^T S_i (e(s) - e(t_k))\,ds \right].$$

$V_{5i}(t) \ge 0$ follows easily from Lemma 2. In addition, $\lim_{t \to t_k^-} V(t) \ge V(t_k)$, because $V_{5i}(t)$ vanishes at $t = t_k$.

Define the infinitesimal operator $\mathcal{L}$ of $V(e_t)$ as
$$\mathcal{L}V(e_t) = \lim_{h \to 0^+} \frac{1}{h} \left\{ \mathrm{E}\{V(e_{t+h}) \mid e_t\} - V(e_t) \right\}. \tag{13}$$
From (12) and (13), the following are obtained:
$$\mathrm{E}\{\mathcal{L}V_1(t)\} = \mathrm{E}\{2 e^T(t) P \dot{e}(t)\}, \tag{14}$$
$$\mathrm{E}\{\mathcal{L}V_2(t)\} \le \mathrm{E}\{\eta^T(t) Q \eta(t) - (1-\mu)\eta^T(t-d(t)) Q \eta(t-d(t)) + \eta^T(t) R \eta(t) - \eta^T(t-d) R \eta(t-d)\}, \tag{15}$$
$$\mathrm{E}\{\mathcal{L}V_3(t)\} = \mathrm{E}\left\{\sum_{i=1}^m \alpha_i \left[ e^T(t-h_{i-1}) Z_i e(t-h_{i-1}) - e^T(t-h_i) Z_i e(t-h_i) + p_i^2 \dot{e}^T(t) U_i \dot{e}(t) - p_i \int_{t-h_i}^{t-h_{i-1}} \dot{e}^T(s) U_i \dot{e}(s)\,ds \right]\right\}, \tag{16}$$
$$\mathrm{E}\{\mathcal{L}V_4(t)\} = \mathrm{E}\left\{\sum_{i=1}^m \alpha_i \left[ p_i e^T(t) X_i e(t) - \int_{t-h_i}^{t-h_{i-1}} e^T(s) X_i e(s)\,ds + p_i \dot{e}^T(t) Y_i \dot{e}(t) - \int_{t-h_i}^{t-h_{i-1}} \dot{e}^T(s) Y_i \dot{e}(s)\,ds \right]\right\}. \tag{17}$$

The definition $\tau(t) = t - t_k$ shows that the last integration term of $V_{5i}$ can be written as
$$-\frac{\pi^2}{4}\int_{t_k}^{t} (e(s)-e(t_k))^T S_i (e(s)-e(t_k))\,ds = -\sum_{j=1}^{i} \alpha_j(t)\, \frac{\pi^2}{4} \int_{t-\tau_j(t)}^{t-h_{j-1}} (e(s) - e(t-\tau_j(t)))^T S_i (e(s) - e(t-\tau_j(t)))\,ds.$$
Therefore, to fully use the information of the stochastically varying interval delay $\tau(t)$, the stochastic variables $\rho_{ij}(t)$ are introduced such that
$$\rho_{ij}(t) = \begin{cases} 1, & \beta_i(t)\alpha_j(t) = 1 \\ 0, & \text{otherwise,} \end{cases} \qquad j \le i = 1, \ldots, m,$$
with probability
$$\Pr\{\rho_{ij}(t) = 1\} = \beta_i \frac{h_j - h_{j-1}}{h_i} = \rho_{ij},$$
where $\sum_{i=1}^m \sum_{j=1}^i \rho_{ij} = 1$. Then the Lyapunov functional $V_{5i}(t)$ can be rewritten as
$$V_{5i}(t) = \beta_i(t) \left[ h_i^2 \int_{t-\tau(t)}^{t} \dot{e}^T(s) S_i \dot{e}(s)\,ds - \sum_{j=1}^{i} \rho_{ij}(t)\,\frac{\pi^2}{4} \int_{t-\tau_j(t)}^{t-h_{j-1}} (e(s) - e(t-\tau_j(t)))^T S_i (e(s) - e(t-\tau_j(t)))\,ds \right]. \tag{18}$$

From (18), the following can be derived:
$$\mathrm{E}\{\mathcal{L}V_{5i}(t)\} = \mathrm{E}\left\{ \beta_i h_i^2 \dot{e}^T(t) S_i \dot{e}(t) - \sum_{j=1}^{i} \rho_{ij}\,\frac{\pi^2}{4} \begin{bmatrix} e(t-h_{j-1}) \\ e(t-\tau_j(t)) \end{bmatrix}^T \begin{bmatrix} S_i & -S_i \\ \star & S_i \end{bmatrix} \begin{bmatrix} e(t-h_{j-1}) \\ e(t-\tau_j(t)) \end{bmatrix} \right\}, \quad i = 1, 2, \ldots, m. \tag{19}$$

The following inequality for the integration term of $\mathcal{L}V_3(t)$ is obtained using Lemma 1:
$$-\alpha_i p_i \int_{t-h_i}^{t-h_{i-1}} \dot{e}^T(s) U_i \dot{e}(s)\,ds = -\alpha_i p_i \left[ \int_{t-\tau_i(t)}^{t-h_{i-1}} \dot{e}^T(s) U_i \dot{e}(s)\,ds + \int_{t-h_i}^{t-\tau_i(t)} \dot{e}^T(s) U_i \dot{e}(s)\,ds \right]$$
$$\le -\alpha_i \left[ \frac{p_i}{\tau_i(t) - h_{i-1}}\, \delta_{1i}^T(t) U_i \delta_{1i}(t) + \frac{p_i}{h_i - \tau_i(t)}\, \delta_{2i}^T(t) U_i \delta_{2i}(t) \right]$$
$$= -\alpha_i \left[ \left(1 + \frac{h_i - \tau_i(t)}{\tau_i(t) - h_{i-1}}\right) \delta_{1i}^T(t) U_i \delta_{1i}(t) + \left(1 + \frac{\tau_i(t) - h_{i-1}}{h_i - \tau_i(t)}\right) \delta_{2i}^T(t) U_i \delta_{2i}(t) \right], \tag{20}$$
where $\delta_{1i}(t) = \int_{t-\tau_i(t)}^{t-h_{i-1}} \dot{e}(s)\,ds$ and $\delta_{2i}(t) = \int_{t-h_i}^{t-\tau_i(t)} \dot{e}(s)\,ds$.

According to Park, Ko, and Jeong (2011), if (9) holds, then it is clear that
$$\frac{h_i - \tau_i(t)}{\tau_i(t) - h_{i-1}}\, \delta_{1i}^T(t) U_i \delta_{1i}(t) + \frac{\tau_i(t) - h_{i-1}}{h_i - \tau_i(t)}\, \delta_{2i}^T(t) U_i \delta_{2i}(t) \ge \delta_{1i}^T(t) V_i \delta_{2i}(t) + \delta_{2i}^T(t) V_i^T \delta_{1i}(t). \tag{21}$$

From (20)–(21), a new upper bound for the integration term of (16) is obtained:
$$-\alpha_i p_i \int_{t-h_i}^{t-h_{i-1}} \dot{e}^T(s) U_i \dot{e}(s)\,ds \le -\alpha_i \left[ \delta_{1i}^T(t) U_i \delta_{1i}(t) + \delta_{2i}^T(t) U_i \delta_{2i}(t) + \delta_{1i}^T(t) V_i \delta_{2i}(t) + \delta_{2i}^T(t) V_i^T \delta_{1i}(t) \right]$$
$$= \alpha_i \begin{bmatrix} e(t-h_{i-1}) \\ e(t-\tau_i(t)) \\ e(t-h_i) \end{bmatrix}^T \begin{bmatrix} -U_i & U_i - V_i & V_i \\ \star & -2U_i + V_i + V_i^T & -V_i + U_i \\ \star & \star & -U_i \end{bmatrix} \begin{bmatrix} e(t-h_{i-1}) \\ e(t-\tau_i(t)) \\ e(t-h_i) \end{bmatrix}. \tag{22}$$

Inspired by the work of Kim, Park, and Jeong (2010), the following zero equalities with arbitrary symmetric matrices $T_i$ and $W_i$ are considered:
$$0 = \alpha_i \left[ e^T(t-h_{i-1}) T_i e(t-h_{i-1}) - e^T(t-\tau_i(t)) T_i e(t-\tau_i(t)) - 2\int_{t-\tau_i(t)}^{t-h_{i-1}} e^T(s) T_i \dot{e}(s)\,ds \right],$$
$$0 = \alpha_i \left[ e^T(t-\tau_i(t)) W_i e(t-\tau_i(t)) - e^T(t-h_i) W_i e(t-h_i) - 2\int_{t-h_i}^{t-\tau_i(t)} e^T(s) W_i \dot{e}(s)\,ds \right].$$
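As a quick sanity check on the total probability of the $\rho_{ij}(t)$ introduced above (a worked verification added here; it follows directly from (5)–(6)), note that for each fixed $i$,
$$\sum_{j=1}^{i} \rho_{ij} = \sum_{j=1}^{i} \beta_i \frac{h_j - h_{j-1}}{h_i} = \beta_i \frac{h_i - h_0}{h_i} = \beta_i,$$
so that $\sum_{i=1}^m \sum_{j=1}^i \rho_{ij} = \sum_{i=1}^m \beta_i = 1$. For the two-interval setting of Example 1 ($h_1 = 0.2$, $h_2 = 0.4$, $\beta_1 = \beta_2 = 0.5$), this gives $\rho_{11} = 0.5$, $\rho_{21} = 0.25$ and $\rho_{22} = 0.25$.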

With the above zero equalities, if (10)–(11) hold, then the upper bound of $\mathcal{L}V_4(t)$ can be expressed as
$$\mathrm{E}\{\mathcal{L}V_4(t)\} = \mathrm{E}\Bigg\{\sum_{i=1}^m \alpha_i \Bigg[ p_i e^T(t) X_i e(t) + e^T(t-h_{i-1}) T_i e(t-h_{i-1}) + e^T(t-\tau_i(t))(W_i - T_i) e(t-\tau_i(t)) - e^T(t-h_i) W_i e(t-h_i) + p_i \dot{e}^T(t) Y_i \dot{e}(t)$$
$$\qquad - \int_{t-\tau_i(t)}^{t-h_{i-1}} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix}^T \begin{bmatrix} X_i & T_i \\ \star & Y_i \end{bmatrix} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix} ds - \int_{t-h_i}^{t-\tau_i(t)} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix}^T \begin{bmatrix} X_i & W_i \\ \star & Y_i \end{bmatrix} \begin{bmatrix} e(s) \\ \dot{e}(s) \end{bmatrix} ds \Bigg]\Bigg\}$$
$$\le \mathrm{E}\left\{\sum_{i=1}^m \alpha_i \left[ p_i e^T(t) X_i e(t) + e^T(t-h_{i-1}) T_i e(t-h_{i-1}) + e^T(t-\tau_i(t))(W_i - T_i) e(t-\tau_i(t)) - e^T(t-h_i) W_i e(t-h_i) + p_i \dot{e}^T(t) Y_i \dot{e}(t) \right]\right\}. \tag{23}$$

On the other hand, in this paper the bounding of the activation function, $L_i^- \le g_i(u)/u \le L_i^+$, is considered over two subintervals, $L_i^- \le g_i(u)/u \le (L_i^- + L_i^+)/2$ and $(L_i^- + L_i^+)/2 \le g_i(u)/u \le L_i^+$, based on the method presented by Kwon et al. (2012). Therefore, the following inequalities hold in each case.

• Case I: $L_i^- \le g_i(u)/u \le (L_i^- + L_i^+)/2$:
$$0 \le -2\sum_{i=1}^n \lambda_{1i}\left[g_i(t) - L_i^- e_i(t)\right]\left[g_i(t) - \frac{L_i^- + L_i^+}{2} e_i(t)\right] - 2\sum_{i=1}^n \lambda_{2i}\left[g_i(t-d(t)) - L_i^- e_i(t-d(t))\right]\left[g_i(t-d(t)) - \frac{L_i^- + L_i^+}{2} e_i(t-d(t))\right]$$
$$= \eta^T(t)\, \Upsilon_1^1\, \eta(t) + \eta^T(t-d(t))\, \Upsilon_2^1\, \eta(t-d(t)). \tag{24}$$

• Case II: $(L_i^- + L_i^+)/2 \le g_i(u)/u \le L_i^+$:
$$0 \le -2\sum_{i=1}^n \lambda_{3i}\left[g_i(t) - \frac{L_i^- + L_i^+}{2} e_i(t)\right]\left[g_i(t) - L_i^+ e_i(t)\right] - 2\sum_{i=1}^n \lambda_{4i}\left[g_i(t-d(t)) - \frac{L_i^- + L_i^+}{2} e_i(t-d(t))\right]\left[g_i(t-d(t)) - L_i^+ e_i(t-d(t))\right]$$
$$= \eta^T(t)\, \Upsilon_1^2\, \eta(t) + \eta^T(t-d(t))\, \Upsilon_2^2\, \eta(t-d(t)), \tag{25}$$
where the $\Upsilon_i^j$ ($i, j = 1, 2$) are defined in Appendix A.

According to the error system (7), for any appropriately dimensioned matrix $G$, the following equation holds:
$$\mathrm{E}\left\{ 2\left[e^T(t) G + \gamma \dot{e}^T(t) G\right]\left[-\dot{e}(t) - Ae(t) + B_1 g(t) + B_2 g(t-d(t)) - \sum_{i=1}^m \alpha_i KCe(t-\tau_i(t))\right]\right\} = 0, \tag{26}$$
where $H = GK$.

Using the relationships (14)–(17), (19), and (22)–(26), the following new bound for $\mathrm{E}\{\mathcal{L}V(t)\}$ can be derived:
$$\mathrm{E}\{\mathcal{L}V(t)\} \le \mathrm{E}\{\zeta^T(t)\, \Phi^a\, \zeta(t)\}, \qquad a = 1, 2,$$
where the matrices $\Phi^a$ ($a = 1, 2$) are defined in Theorem 1 and
$$\zeta(t) = \left[ e^T(t)\;\; e_m^T(t)\;\; e^T(t-d(t))\;\; e^T(t-d)\;\; \dot{e}^T(t)\;\; g^T(t)\;\; g^T(t-d(t))\;\; g^T(t-d) \right]^T,$$
$$e_m(t) = \left[ e^T(t-\tau_1(t))\;\; e^T(t-h_1)\;\; e^T(t-\tau_2(t))\;\; e^T(t-h_2)\;\; \cdots\;\; e^T(t-\tau_m(t))\;\; e^T(t-h_m) \right]^T.$$

Therefore, if LMIs (8)–(11) hold, then the error system (7) is mean-square stable according to Definition 1, which completes the proof. □

Remark 3. The discontinuous Lyapunov functional $V_5(t)$, which was first reported by Liu and Fridman (2012), makes full use of the sawtooth structure characteristics of sampling input delays. In this paper, however, the interval of integration of $V_5(t)$, $[t_k, t]$, occurs stochastically because of the definition $\tau(t) = t - t_k$. If the sampling interval $h(t) = h_2$, then $t_k$ lies in the two intervals $[t - \tau_1(t), t]$ and $[t - \tau_2(t), t - h_1]$ with probabilities $h_1/h_2$ and $(h_2 - h_1)/h_2$, respectively. Therefore, introducing the stochastic variable $\rho_{ij}(t)$ makes full use of the information of the sampling input delay $\tau(t)$, which has stochastically varying intervals. This is the key idea of this paper. To the best of the authors' knowledge, the discontinuous Lyapunov functional approach has never before been applied to a sampled-data control system with stochastic sampling intervals.

Remark 4. If the sampling interval takes $m$ values, then the dimensions of the matrices defined in Theorem 1 are as follows: $\Phi^a \in \mathbb{R}^{(2m+7)n \times (2m+7)n}$, $\Gamma_1 \in \mathbb{R}^{n \times 2mn}$, $\Gamma_2 \in \mathbb{R}^{2mn \times 2mn}$, $\Gamma_3 \in \mathbb{R}^{2mn \times 6n}$. To aid intelligibility, the matrix $\tilde{\Phi}^a$ for $m = 3$ is presented in Appendix B.

The presence of the functional $V_5(t)$ determines whether the Lyapunov functional (12) is discontinuous or not. If $S_i = 0$ in (12), the Lyapunov functional (12) is continuous, and the following corollary is obtained.

Corollary 1. For given positive constants $\mu$, $\gamma$, $\beta_i$, $h_i$ ($i = 1, \ldots, m$) and diagonal matrices $L_1 = \mathrm{diag}\{L_1^-, \ldots, L_n^-\}$ and $L_2 = \mathrm{diag}\{L_1^+, \ldots, L_n^+\}$, if there exist positive-definite matrices $P$, $S_i$, $U_i$, $X_i$, $Y_i$, $Z_i$ ($i = 1, \ldots, m$), $Q = \begin{bmatrix} Q_1 & Q_2 \\ \star & Q_3 \end{bmatrix}$, $R = \begin{bmatrix} R_1 & R_2 \\ \star & R_3 \end{bmatrix}$, positive diagonal matrices $\Lambda_i = \mathrm{diag}\{\lambda_{i1}, \ldots, \lambda_{in}\}$ ($i = 1, \ldots, 4$), symmetric matrices $T_i$, $W_i$ ($i = 1, \ldots, m$), and any matrices $G$, $H$, $V_i$ ($i = 1, \ldots, m$) satisfying the LMIs $(8)|_{S_i = 0}$ ($\forall i = 1, \ldots, m$) and (9)–(11), then the error system (7) is globally mean-square asymptotically stable with a stochastic sampled-data controller. Moreover, the desired control gain matrix in (4) is given by $K = G^{-1}H$.

Remark 5. The main difference between the two Lyapunov functionals used in Theorem 1 and Corollary 1 is the presence of $V_5(t)$, which makes full use of the sawtooth structure characteristic of the sampling input delays. Therefore, theoretically, Theorem 1 is less conservative than Corollary 1, which will be validated by the numerical examples in the next section.

4. Numerical examples

In this section, two numerical examples are provided to demonstrate the effectiveness of the proposed method. For convenience, the number of sampling intervals chosen in this section is two ($m = 2$). In addition, the tuning parameter $\gamma$ is selected as 1, and MATLAB with YALMIP 3.0 and SeDuMi 1.3 is used to solve the LMI problems in the two examples.

Table 1
Maximum values of h2 for different h1 (β1 = 0.6).

h1            0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9
Theorem 1     2.31   2.20   2.11   2.03   1.95   1.85   1.71   1.50   ×
Corollary 1   2.08   1.96   1.83   1.67   1.49   1.25   ×      ×      ×

Table 2
Maximum values of h2 for different β1 (h1 = 0.5).

β1            0.9    0.8    0.7    0.6    0.5    0.4    0.3    0.2    0.1
Theorem 1     10.21  3.91   2.47   1.95   1.67   1.49   1.36   1.26   1.18
Corollary 1   4.71   2.36   1.76   1.49   1.33   1.22   1.13   1.07   1.03

Example 1. Consider the closed-loop system (7) with the following parameters:
$$A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad B_1 = \begin{bmatrix} 0.3 & 0.4 \\ 0.2 & -0.5 \end{bmatrix}, \quad B_2 = \begin{bmatrix} 0.6 & -0.7 \\ 0.5 & 0.4 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 1 \end{bmatrix},$$
$$J = \begin{bmatrix} 2\sin \pi t + 0.03t^2 \\ 3\cos 5t + 0.005t^2 \end{bmatrix}, \quad f_i(x_i(t)) = \frac{1}{5}\left(|x_i(t) + 1| - |x_i(t) - 1|\right), \quad d(t) = 0.92\sin t + 1.$$
From the functions $f$ and $d(t)$, the bound values of the function $g(t)$ are $L_i^- = -0.5$ and $L_i^+ = 0.4$, and $\mu = 0.93$.

From Theorem 1 and Corollary 1, the maximum sampling interval $h_2$ can be obtained for a variety of cases, as listed in Tables 1 and 2, which show the maximum values of $h_2$ with respect to different $h_1$ and $\beta_1$ values, respectively. Tables 1 and 2 show that Theorem 1 gives better results than Corollary 1, as mentioned in Remark 5. Applying Theorem 1 to the above system with $h_1 = 0.2$, $h_2 = 0.4$, $\beta_1 = 0.5$ and initial conditions $x(0) = (-10, -7)$, $\hat{x}(0) = (10, 7)$, the LMIs given in Theorem 1 are feasible and the following control gain is obtained:
$$K = \begin{bmatrix} 0.5953 \\ 0.6623 \end{bmatrix}.$$

[Fig. 1. True state x(t) and its estimate x̂(t) in Example 1.]



Fig. 1 presents the simulation results obtained by applying the state estimator with $h_1 = 0.2$, $h_2 = 0.4$, $\beta_1 = 0.5$ and the gain $K$ obtained above. As shown in Fig. 1, the responses of the state estimator track the true states under the designed stochastic sampled-data controller. Fig. 2 shows the sampled control inputs, and Fig. 3 displays the sampling interval $h$, which varies stochastically.

[Fig. 2. Sampled-data control input with two sampling intervals in Example 1.]

[Fig. 3. Stochastic parameters, h, of Example 1.]
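For readers who wish to reproduce responses of this kind, the following Python sketch (an illustration added here; the original simulations used MATLAB, and the arrangement of $B_1$ and $B_2$ follows the reconstruction above) integrates the network (1) and observer (2) with a forward-Euler step and the stochastic sampled-data input (4):

```python
import numpy as np

rng = np.random.default_rng(1)

# Example 1 data (B1, B2 arrangement as reconstructed above)
A  = np.eye(2)
B1 = np.array([[0.3, 0.4], [0.2, -0.5]])
B2 = np.array([[0.6, -0.7], [0.5, 0.4]])
C  = np.array([[1.0, 1.0]])
K  = np.array([[0.5953], [0.6623]])
f  = lambda x: 0.2 * (np.abs(x + 1.0) - np.abs(x - 1.0))
d  = lambda t: 0.92 * np.sin(t) + 1.0
J  = lambda t: np.array([2*np.sin(np.pi*t) + 0.03*t**2,
                         3*np.cos(5*t) + 0.005*t**2])

dt, T = 1e-3, 10.0
N, hist = int(T/dt), int(2.0/dt)                 # prehistory buffer covers d(t) <= 1.92
x  = np.tile([-10.0, -7.0], (hist + N + 1, 1))   # true state, constant prehistory
xh = np.tile([ 10.0,  7.0], (hist + N + 1, 1))   # observer state
u, next_update = np.zeros(2), 0.0
for k in range(N):
    t = k * dt
    if t >= next_update:                         # ZOH update at stochastic instants t_k
        u = K @ (C @ (x[hist + k] - xh[hist + k]))
        next_update = t + rng.choice([0.2, 0.4], p=[0.5, 0.5])
    kd = hist + k - int(round(d(t) / dt))        # index of the delayed state
    x[hist+k+1]  = x[hist+k]  + dt*(-A @ x[hist+k]  + B1 @ f(x[hist+k])  + B2 @ f(x[kd])  + J(t))
    xh[hist+k+1] = xh[hist+k] + dt*(-A @ xh[hist+k] + B1 @ f(xh[hist+k]) + B2 @ f(xh[kd]) + J(t) + u)
```

The error $e(t) = x(t) - \hat{x}(t)$ should decay toward zero, mirroring the behavior reported in Fig. 1.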

Example 2. Originally, neural networks embody the characteristics of real biological neurons that are connected or functionally related in a nervous system. However, neural networks can represent not only biological neurons but also other practical systems. One of them is the quadruple-tank process, shown in Fig. 4. The quadruple-tank process consists of four interconnected water tanks and two pumps. The inputs are the voltages applied to the two pumps, and the outputs are the water levels of Tanks 1 and 2. As shown in Fig. 4, the quadruple-tank process can be expressed using a neural network model. Haoussi, Tissir, Tadeo, and Hmamed (2011), Huang, Li, Duan, and Starzyk (2012) and Johansson (2000) proposed the state-space equation of the quadruple-tank process and designed a state feedback controller as follows:
$$\dot{\bar{x}}(t) = \bar{A}_0 \bar{x}(t) + \bar{A}_1 \bar{x}(t - \tau_1) + \bar{B}_0 \bar{u}(t - \tau_2) + \bar{B}_1 \bar{u}(t - \tau_3), \tag{27}$$
where
$$\bar{A}_0 = \begin{bmatrix} -0.0021 & 0 & 0 & 0 \\ 0 & -0.0021 & 0 & 0 \\ 0 & 0 & -0.0424 & 0 \\ 0 & 0 & 0 & -0.0424 \end{bmatrix},$$

Table 3
Maximum values of h2 for different h1 (β1 = 0.5).

h1            1     2     5     8     9     13
Theorem 1     26    24    22    21    20    16
Corollary 1   19    19    18    14    ×     ×

Table 4
Maximum values of h2 for different β1 and h1.

              h1 = 12         h1 = 8
β1            0.9    0.1      0.9    0.1
Theorem 1     40     15       143    15
Corollary 1   ×      ×        31     12

[Fig. 4. Schematic representation of the quadruple-tank process. Source: Johansson (2000).]

$$\bar{A}_1 = \begin{bmatrix} 0 & 0 & 0.0424 & 0 \\ 0 & 0 & 0 & 0.0424 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \qquad \bar{B}_0 = \begin{bmatrix} 0.1113\gamma_1 & 0 & 0 & 0 \\ 0 & 0.1042\gamma_2 & 0 & 0 \end{bmatrix}^T,$$
$$\bar{B}_1 = \begin{bmatrix} 0 & 0 & 0 & 0.1113(1-\gamma_1) \\ 0 & 0 & 0.1042(1-\gamma_2) & 0 \end{bmatrix}^T, \qquad C = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}.$$

For simplicity, it is assumed that $\tau_1 = 0$, $\tau_2 = 0$ and $\tau_3 = d(t)$. Here, the control input $\bar{u}(t)$ represents the amount of water supplied by the pumps. Therefore, $\bar{u}(t)$ has a threshold value due to the limited cross-section of the hoses and the capacity of the pumps, and it is natural to consider $\bar{u}(t)$ as a nonlinear function:
$$\bar{u}(t) = \bar{K}\bar{f}(\bar{x}(t)), \qquad \gamma_1 = 0.333, \quad \gamma_2 = 0.307,$$
$$\bar{K} = \begin{bmatrix} -0.1609 & -0.1765 & -0.0795 & -0.2073 \\ -0.1977 & -0.1579 & -0.2288 & -0.0772 \end{bmatrix},$$
$$\bar{f}(\bar{x}(t)) = \left[\bar{f}_1(\bar{x}_1(t)), \ldots, \bar{f}_4(\bar{x}_4(t))\right]^T, \qquad \bar{f}_i(\bar{x}_i(t)) = 0.01\left(|\bar{x}_i(t) + 1| - |\bar{x}_i(t) - 1|\right), \quad i = 1, \ldots, 4.$$

The quadruple-tank process (27) can then be rewritten in the form of system (1):
$$\dot{x}(t) = -Ax(t) + B_1 f(x(t)) + B_2 f(x(t-d(t))) + J, \qquad y(t) = Cx(t), \tag{28}$$
where $A = -\bar{A}_0 - \bar{A}_1$, $B_1 = \bar{B}_0\bar{K}$, $B_2 = \bar{B}_1\bar{K}$, $f(\cdot) = \bar{f}(\cdot)$ and $J = [0\;\; 0\;\; 0\;\; 0]^T$. In addition, $d(t) = 0.98\sin\frac{\pi}{2}t + 1$, and $L_i^- = -0.05$, $L_i^+ = 0.05$ and $\mu = 0.99$ can be obtained from the above system parameters. Under this setting, Tables 3 and 4, which list the values of the maximum sampling interval $h_2$, are obtained using Theorem 1 and Corollary 1.

In the quadruple-tank process, the water levels of the two lower tanks are the only accessible and usable information, whereas the water levels of the two upper tanks are not measured. Even if the water in the upper tanks were to overflow, no action could be taken. Therefore, there is a strong need to know the water levels of the upper tanks, and designing a state estimator is one way to obtain this information. Until now, research on the quadruple-tank process has focused on designing stabilizing controllers, so in the sense of practical applications the quadruple-tank process deserves to be treated with the proposed state estimation method. With the sampling intervals $h_1 = 5$, $h_2 = 10$, the sampling-interval probability $\beta_1 = 0.5$, and initial conditions $x(0) = (-4, 4, 6, -5)$, $\hat{x}(0) = (10, -10, -10, 10)$, the control gain matrix calculated by Theorem 1 is
$$K = \begin{bmatrix} 0.0243 & 0.0023 \\ 0.0024 & 0.0241 \\ 0.0031 & 0.0039 \\ 0.0040 & 0.0027 \end{bmatrix}.$$
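The conversion from (27) to (28) is easy to mechanize. The following Python sketch (an illustration added here; the matrix arrangement follows the reconstruction above) assembles $A$, $B_1$ and $B_2$ from the quadruple-tank data:

```python
import numpy as np

g1, g2 = 0.333, 0.307
A0 = np.diag([-0.0021, -0.0021, -0.0424, -0.0424])
A1 = np.zeros((4, 4)); A1[0, 2] = A1[1, 3] = 0.0424
B0 = np.array([[0.1113*g1, 0, 0, 0],
               [0, 0.1042*g2, 0, 0]]).T
B1bar = np.array([[0, 0, 0, 0.1113*(1 - g1)],
                  [0, 0, 0.1042*(1 - g2), 0]]).T
Kbar = np.array([[-0.1609, -0.1765, -0.0795, -0.2073],
                 [-0.1977, -0.1579, -0.2288, -0.0772]])

# (27) -> (28) with tau1 = tau2 = 0 and tau3 = d(t):
A  = -(A0 + A1)      # so that (28) reads xdot = -A x + ...
B1 = B0 @ Kbar       # weight on f(x(t))
B2 = B1bar @ Kbar    # weight on f(x(t - d(t)))
C  = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.]])
print(A, B1, B2, sep="\n")
```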



Fig. 5 shows the state estimation result. From Fig. 5, the designed state estimator for a neural network (28) is well followed to the true states by the stochastic sampled-data control signals which are shown in Fig. 6. Fig. 7 presents the stochastically varying sampling interval h. 5. Conclusions In contrast to studies on the design of a state estimator for neural networks, this paper used the sampled-data with stochastically varying intervals. The discontinuous type Lyapunov functional was designed using the extended Wirtinger inequality to fully use the information of the sawtooth structural sampling delay. The results showed that the use of the discontinuous Lyapunov functional results in less conservatism than the use of the continuous Lyapunov

106

T.H. Lee et al. / Neural Networks 46 (2013) 99–108

Fig. 5. True state x(t ) and its estimate xˆ (t ) in Example 2.

Fig. 6. Sampled-data control input with two sampling intervals in Example 2.

functional. In addition, the new approach proposed by Kwon et al. (2012), which divides the bounding of the activation function into two intervals, was applied. The effectiveness of the proposed idea was demonstrated by two numerical examples. Acknowledgments The work was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (20100009373).

Appendix A

$$\Phi_1^a = Q_1 + R_1 - GA - A^T G^T + \alpha_1 Z_1 - \alpha_1 U_1 + \alpha_1 T_1 + \Upsilon_{11}^a - \bar{S}_1 + \sum_{i=1}^m \alpha_i p_i X_i,$$
$$\Phi_2 = P - G - \gamma A^T G^T,$$
$$\Phi_3^a = Q_2 + R_2 + GB_1 + \Upsilon_{12}^a,$$
$$\Phi_4^a = -(1-\mu)Q_1 + \Upsilon_{21}^a,$$
$$\Phi_5^a = -(1-\mu)Q_2 + \Upsilon_{22}^a,$$

$$\tilde{\Phi}^a = \begin{bmatrix}
\Phi_1^a & \Gamma_{11}^1 & \Gamma_{12}^1 & \Gamma_{13}^1 & 0 & \Gamma_{15}^1 & 0 & 0 & 0 & \Phi_2 & \Phi_3^a & GB_2 & 0 \\
\star & \Gamma_{11}^2 & \Gamma_{12}^2 & 0 & 0 & 0 & 0 & 0 & 0 & \Gamma_{11}^{32} & 0 & 0 & 0 \\
\star & \star & \Gamma_{22}^2 & \Gamma_{23}^2 & \Gamma_{24}^2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\star & \star & \star & \Gamma_{33}^2 & \Gamma_{34}^2 & 0 & 0 & 0 & 0 & \Gamma_{31}^{32} & 0 & 0 & 0 \\
\star & \star & \star & \star & \Gamma_{44}^2 & \Gamma_{45}^2 & \Gamma_{46}^2 & 0 & 0 & 0 & 0 & 0 & 0 \\
\star & \star & \star & \star & \star & \Gamma_{55}^2 & \Gamma_{56}^2 & 0 & 0 & \Gamma_{51}^{32} & 0 & 0 & 0 \\
\star & \star & \star & \star & \star & \star & \Gamma_{66}^2 & 0 & 0 & 0 & 0 & 0 & 0 \\
\star & \star & \star & \star & \star & \star & \star & \Phi_4^a & 0 & 0 & 0 & \Phi_5^a & 0 \\
\star & \star & \star & \star & \star & \star & \star & \star & -R_1 & 0 & 0 & 0 & -R_2 \\
\star & \star & \star & \star & \star & \star & \star & \star & \star & \Phi_6 & \gamma GB_1 & \gamma GB_2 & 0 \\
\star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \Phi_7^a & 0 & 0 \\
\star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \Phi_8^a & 0 \\
\star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & \star & -R_3
\end{bmatrix} < 0$$

Box I.

$$\Gamma_1(1,1) = \alpha_1 U_1 - \alpha_1 V_1 + \bar{S}_1 - \alpha_1 HC, \qquad \Gamma_1(1,2) = \alpha_1 V_1, \qquad \Gamma_1(1,i) = -\alpha_{\frac{i+1}{2}} HC, \quad i = 3, 5, 7, \ldots, 2m-1,$$
$$\Gamma_2(i,i) = \alpha_{\frac{i+1}{2}}\left(-2U_{\frac{i+1}{2}} + V_{\frac{i+1}{2}} + V_{\frac{i+1}{2}}^T + W_{\frac{i+1}{2}} - T_{\frac{i+1}{2}}\right) - \bar{S}_{\frac{i+1}{2}}, \quad i = 1, 3, 5, \ldots, 2m-1,$$
$$\Gamma_2(i,i) = \alpha_{\frac{i}{2}}\left(-Z_{\frac{i}{2}} - U_{\frac{i}{2}} - W_{\frac{i}{2}}\right) + \alpha_{\frac{i}{2}+1}\left(Z_{\frac{i}{2}+1} - U_{\frac{i}{2}+1} + T_{\frac{i}{2}+1}\right) - \bar{S}_{\frac{i}{2}+1}, \quad i = 2, 4, 6, \ldots, 2m,$$
$$\Gamma_2(i,i+1) = \alpha_{\frac{i+1}{2}}\left(U_{\frac{i+1}{2}} - V_{\frac{i+1}{2}}\right), \quad i = 1, 3, 5, \ldots, 2m-1,$$
$$\Gamma_2(i,i+1) = \alpha_{\frac{i}{2}+1}\left(U_{\frac{i}{2}+1} - V_{\frac{i}{2}+1}\right) + \bar{S}_{\frac{i}{2}+1}, \quad i = 2, 4, 6, \ldots, 2m-2,$$
$$\Gamma_2(i,i+2) = \alpha_{\frac{i}{2}+1} V_{\frac{i}{2}+1}, \quad i = 2, 4, 6, \ldots, 2m-2,$$
$$\Gamma_3 = \begin{bmatrix} 0_{2nm\times n} & 0_{2nm\times n} & \Gamma_3^2 & 0_{2nm\times n} & 0_{2nm\times n} & 0_{2nm\times n} \end{bmatrix}, \qquad \Gamma_3^2(i,1) = -\alpha_{\frac{i+1}{2}}\, \gamma C^T H^T, \quad i = 1, 3, 5, \ldots, 2m-1,$$
with all other entries of $\Gamma_1$, $\Gamma_2$, $\Gamma_3$ equal to $0$; $h_0 = 0$; and $\alpha_i, Z_i, U_i, \bar{S}_i = 0$ for $i > m$. Furthermore,
$$\Phi_6 = -\gamma(G + G^T) + \sum_{i=1}^m \left(\alpha_i p_i^2 U_i + \alpha_i p_i Y_i + \beta_i h_i^2 S_i\right),$$
$$\Phi_7^a = Q_3 + R_3 + \Upsilon_{13}^a, \qquad \Phi_8^a = -(1-\mu)Q_3 + \Upsilon_{23}^a,$$
$$\bar{S}_i = \sum_{j=i}^m \beta_j\, \frac{\pi^2}{4h_j}\, p_i S_j, \qquad p_i = h_i - h_{i-1},$$
$$\begin{bmatrix} \Upsilon_{11}^1 & \Upsilon_{12}^1 \\ \star & \Upsilon_{13}^1 \end{bmatrix} = \begin{bmatrix} -L_1(L_1 + L_2)\Lambda_1 & \frac{3L_1 + L_2}{2}\Lambda_1 \\ \star & -2\Lambda_1 \end{bmatrix}, \qquad \begin{bmatrix} \Upsilon_{11}^2 & \Upsilon_{12}^2 \\ \star & \Upsilon_{13}^2 \end{bmatrix} = \begin{bmatrix} -L_2(L_1 + L_2)\Lambda_3 & \frac{L_1 + 3L_2}{2}\Lambda_3 \\ \star & -2\Lambda_3 \end{bmatrix},$$
$$\begin{bmatrix} \Upsilon_{21}^1 & \Upsilon_{22}^1 \\ \star & \Upsilon_{23}^1 \end{bmatrix} = \begin{bmatrix} -L_1(L_1 + L_2)\Lambda_2 & \frac{3L_1 + L_2}{2}\Lambda_2 \\ \star & -2\Lambda_2 \end{bmatrix}, \qquad \begin{bmatrix} \Upsilon_{21}^2 & \Upsilon_{22}^2 \\ \star & \Upsilon_{23}^2 \end{bmatrix} = \begin{bmatrix} -L_2(L_1 + L_2)\Lambda_4 & \frac{L_1 + 3L_2}{2}\Lambda_4 \\ \star & -2\Lambda_4 \end{bmatrix}.$$

Appendix B

$\tilde{\Phi}^a$ for $m = 3$ is given in Box I, where
$$\Gamma_{11}^1 = \Gamma_1(1,1), \qquad \Gamma_{12}^1 = \Gamma_1(1,2), \qquad \Gamma_{13}^1 = -\alpha_2 HC, \qquad \Gamma_{15}^1 = -\alpha_3 HC,$$
$$\Gamma_{11}^2 = \alpha_1(-2U_1 + V_1 + V_1^T + W_1 - T_1) - \bar{S}_1, \qquad \Gamma_{12}^2 = \alpha_1(U_1 - V_1),$$
$$\Gamma_{22}^2 = -\alpha_1(Z_1 + U_1 + W_1) + \alpha_2(Z_2 - U_2 + T_2) - \bar{S}_2, \qquad \Gamma_{23}^2 = \alpha_2(U_2 - V_2) + \bar{S}_2, \qquad \Gamma_{24}^2 = \alpha_2 V_2,$$
$$\Gamma_{33}^2 = \alpha_2(-2U_2 + V_2 + V_2^T + W_2 - T_2) - \bar{S}_2, \qquad \Gamma_{34}^2 = \alpha_2(U_2 - V_2),$$
$$\Gamma_{44}^2 = -\alpha_2(Z_2 + U_2 + W_2) + \alpha_3(Z_3 - U_3 + T_3) - \bar{S}_3, \qquad \Gamma_{45}^2 = \alpha_3(U_3 - V_3) + \bar{S}_3, \qquad \Gamma_{46}^2 = \alpha_3 V_3,$$
$$\Gamma_{55}^2 = \alpha_3(-2U_3 + V_3 + V_3^T + W_3 - T_3) - \bar{S}_3, \qquad \Gamma_{56}^2 = \alpha_3(U_3 - V_3), \qquad \Gamma_{66}^2 = -\alpha_3(Z_3 + U_3 + W_3),$$
$$\Gamma_{11}^{32} = -\alpha_1 \gamma C^T H^T, \qquad \Gamma_{31}^{32} = -\alpha_2 \gamma C^T H^T, \qquad \Gamma_{51}^{32} = -\alpha_3 \gamma C^T H^T,$$

and the other notations are defined in Appendix A.

References

Astrom, K., & Wittenmark, B. (1989). Adaptive control. MA: Addison-Wesley.
Balasubramaniam, P., & Chandran, R. (2011). Delay decomposition approach to stability analysis for uncertain fuzzy Hopfield neural networks with time-varying delay. Communications in Nonlinear Science and Numerical Simulation, 16, 2098–2108.
Balasubramaniam, P., Nagamani, G., & Rakkiyappan, R. (2011). Passivity analysis for neural networks of neutral type with Markovian jumping parameters and time delay in the leakage term. Communications in Nonlinear Science and Numerical Simulation, 16, 4422–4437.
Balasubramaniam, P., Vembarasan, V., & Rakkiyappan, R. (2012). Delay-dependent robust asymptotic state estimation of Takagi–Sugeno fuzzy Hopfield neural networks with mixed interval time-varying delays. Expert Systems with Applications, 39, 472–481.
Bao, H., & Cao, J. (2011). Delay-distribution-dependent state estimation for discrete-time stochastic neural networks with random delay. Neural Networks, 24, 19–28.
Cichoki, A., & Unbehauen, R. (1993). Neural networks for optimization and signal processing. Chichester: Wiley.
Duan, Q., Su, H., & Wu, Z. G. (2012). H∞ state estimation of static neural networks with time-varying delay. Neurocomputing, 97, 16–21.
Gao, H., Chen, T., & Lam, J. (2008). A new delay system approach to network-based control. Automatica, 44, 39–52.
Gao, H., Meng, X., & Chen, T. (2008). Stabilization of networked control systems with new delay characterization. IEEE Transactions on Automatic Control, 53, 2142–2148.
Gao, H., Wu, J., & Shi, P. (2009). Robust sampled-data H∞ control with stochastic sampling. Automatica, 45, 1729–1736.
Gu, K., Kharitonov, V. K., & Chen, J. (2003). Stability of time-delay systems. Boston: Birkhauser.
Hagan, M. T., Demuth, H. B., & Beale, M. (1996). Neural network design. Boston, MA: PWS Publishing Company.
Haoussi, F. E., Tissir, E. H., Tadeo, F., & Hmamed, A. (2011). Delay-dependent stabilisation of systems with time-delayed state and control: application to a quadruple-tank process. International Journal of Systems Science, 42, 41–49.
Hu, B., & Michel, A. N. (2000). Stability analysis of digital feedback control systems with time-varying sampling periods. Automatica, 36, 897–905.
Huang, G., & Cao, J. (2010). Delay-dependent multistability in recurrent neural networks. Neural Networks, 23, 201–209.
Huang, H., Feng, G., & Cao, J. (2010). State estimation for static neural networks with time-varying delay. Neural Networks, 23, 1202–1207.
Huang, H., Feng, G., & Cao, J. (2011). Guaranteed performance state estimation of static neural networks with time-varying delay. Neurocomputing, 74, 606–616.
Huang, T., Li, C., Duan, S., & Starzyk, J. A. (2012). Robust exponential stability of uncertain delayed neural networks with stochastic perturbation and impulse effects. IEEE Transactions on Neural Networks and Learning Systems, 23, 866–875.
Johansson, K. H. (2000). The quadruple-tank process: a multivariable laboratory process with an adjustable zero. IEEE Transactions on Control Systems Technology, 8, 456–465.
Kim, S. H., Park, P. G., & Jeong, C. (2010). Robust H∞ stabilisation of networked control systems with packet analyser. IET Control Theory and Applications, 4, 1828–1837.
Kwon, O. M., Lee, S. M., Park, Ju H., & Cha, E. J. (2012). New approaches on stability criteria for neural networks with interval time-varying delays. Applied Mathematics and Computation, 218, 9953–9964.
Lam, H. K. (2012). Stabilization of nonlinear systems using sampled-data output-feedback fuzzy controller based on polynomial-fuzzy-model-based control approach. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 42, 258–267.
Li, N., Zhang, Y., Hu, J., & Nie, Z. (2011). Synchronization for general complex dynamical networks with sampled-data. Neurocomputing, 74, 805–811.
Li, Y., Zhang, Q., & Jing, C. (2009). Stochastic stability of networked control systems with time-varying sampling periods. International Journal of Information and Systems Sciences, 5, 494–502.
Liu, X., Chen, T., Cao, J., & Lu, W. (2011). Dissipativity and quasi-synchronization for neural networks with discontinuous activations and parameter mismatches. Neural Networks, 24, 1013–1021.
Liu, K., & Fridman, E. (2012). Wirtinger's inequality and Lyapunov-based sampled-data stabilization. Automatica, 48, 102–108.
Liu, K., Suplin, V., & Fridman, E. (2011). Stability of linear systems with general sawtooth delay. IMA Journal of Mathematical Control and Information, 27, 419–436.
Lu, J. G., & Hill, D. J. (2008). Global asymptotical synchronization of chaotic Lur'e systems using sampled data: a linear matrix inequality approach. IEEE Transactions on Circuits and Systems II, 55, 586–590.
Mikheev, Y., Sobolev, B., & Fridman, E. (1988). Asymptotic analysis of digital control systems. Automation and Remote Control, 49, 1175–1180.
Mou, S., Gao, H., Lam, J., & Qiang, W. (2008). A new criterion of delay-dependent asymptotic stability for Hopfield neural networks with time delay. IEEE Transactions on Neural Networks, 19, 532–535.
Ozdemir, N., & Townley, T. (2003). Integral control by variable sampling based on steady-state data. Automatica, 39, 135–140.
Park, P. G., Ko, J. W., & Jeong, C. (2011). Reciprocally convex approach to stability of systems with time-varying delays. Automatica, 47, 235–238.
Park, M. J., Kwon, O. M., Park, Ju H., & Lee, S. M. (2012). Simplified stability criteria for fuzzy Markovian jumping Hopfield neural networks of neutral type with interval time-varying delays. Expert Systems with Applications, 39, 5625–5633.
Park, M. J., Kwon, O. M., Park, Ju H., Lee, S. M., & Cha, E. J. (2012). Synchronization criteria for coupled stochastic neural networks with time-varying delays and leakage delay. Journal of the Franklin Institute, 349, 1699–1720.
Salam, F. M., & Zhang, J. (2001). Adaptive neural observer with forward co-state propagation. In Proceedings of the international joint conference on neural networks, Vol. 1.
Tahara, S., Fujii, T., & Yokoyama, T. (2007). Variable sampling quasi multirate deadbeat control method for single phase PWM inverter in low carrier frequency. In Proceedings of the power conversion conference (pp. 804–809).
Wang, Z., Ho, D. W. C., & Liu, X. (2005). State estimation for delayed neural networks. IEEE Transactions on Neural Networks, 16, 279–284.
Wu, Z. G., Park, Ju H., Su, H., & Chu, J. (2012a). New results on exponential passivity of neural networks with time-varying delays. Nonlinear Analysis: Real World Applications, 13, 1593–1599.
Wu, Z. G., Park, Ju H., Su, H., & Chu, J. (2012b). Passivity analysis of Markov jump neural networks with mixed time-delays and piecewise-constant transition rates. Nonlinear Analysis: Real World Applications, 13, 2423–2431.
Wu, Z. G., Su, H., & Chu, J. (2010). State estimation for discrete Markovian jumping neural networks with time delay. Neurocomputing, 73, 2247–2254.
Zhu, X. L., & Wang, Y. (2011). Stabilization for sampled-data neural-network-based control systems. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 41, 210–221.