ARTICLE IN PRESS
Neurocomputing 71 (2008) 2857–2867 www.elsevier.com/locate/neucom

An LMI approach to delay-dependent state estimation for delayed neural networks

He Huang^a,*, Gang Feng^a, Jinde Cao^b

^a Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Hong Kong, PR China
^b Department of Mathematics, Southeast University, Nanjing 210096, PR China

Received 31 May 2007; received in revised form 30 July 2007; accepted 15 August 2007. Communicated by Z. Wang. Available online 29 August 2007.
Abstract

This paper is concerned with the state estimation problem for a class of neural networks with time-varying delay. Compared with some existing results in the literature, restrictions such as requiring the time-varying delay to be differentiable, or even requiring its time-derivative to be smaller than one, are removed; the time-varying delay is only assumed to be bounded. A delay-dependent condition is developed to estimate the neuron states through observed output measurements such that the error-state system is globally asymptotically stable. The criterion is formulated in terms of a linear matrix inequality (LMI), which can be checked readily by standard numerical packages. An example with simulation results is given to illustrate the effectiveness of the proposed result and its improvement over existing ones.
© 2007 Elsevier B.V. All rights reserved.

Keywords: Delayed neural networks; State estimation; Delay-dependent; Global asymptotical stability; Linear matrix inequalities
1. Introduction

Various classes of neural networks have been studied increasingly in the past few years, due to their practical importance and successful applications in many areas such as combinatorial optimization, signal processing and communication [8,13,16]. These applications depend heavily on the dynamic behaviors of the underlying neural networks. As is well known, time delay may occur in the process of information storage and transmission in neural networks. In electronic implementations of neural networks, the time delay is often time-variant, and may even vary dramatically with time because of the finite switching speed of amplifiers and faults in the electrical circuits. The stability analysis of delayed neural networks has therefore attracted considerable attention, and a large number of results are available in the literature; see, for example, [1,3–5,7,9,15,17,21–28].

On the other hand, in many applications the neuron states are often not completely available in the network outputs. The state estimation problem for neural networks is therefore significant [14,20]. Its main objective is to estimate the neuron states through the available output measurements such that the dynamics of the error-state system is globally stable. Recently, this problem has attracted some attention and progress has been made [6,12,19]. Wang et al. [20] first investigated the state estimation problem for neural networks with time-varying delay. Under the precondition that the time-derivative of the time-varying delay was smaller than 1, a linear matrix inequality (LMI) condition was derived to guarantee the existence of the expected state estimator. In [14], the state estimation problem was also addressed for delayed neural networks under the weaker assumption that the time-varying delay was merely required to be differentiable.
*Corresponding author. Tel.: +852 2194 2938; fax: +852 2788 8017.
E-mail addresses: [email protected] (H. Huang), [email protected] (G. Feng), [email protected] (J. Cao).
0925-2312/$ - see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.neucom.2007.08.008

However, the condition proposed in [14] was expressed in terms of a matrix inequality rather than an LMI, which corresponds to a nonlinear programming problem. The authors in [18] dealt with the state estimation problem for a class of neural networks with discrete and distributed delays. The existing results on this issue can generally be classified into two categories: delay-independent criteria [20] and delay-dependent criteria [14,18]. A delay-independent criterion is irrespective of the size of the time delay, whereas a delay-dependent criterion takes the size of the delay into account. Generally speaking, delay-dependent criteria are less conservative than delay-independent ones, especially when the time delay is small.

In this paper, the state estimation problem is studied for a class of neural networks with time-varying delays. As mentioned above, the time-varying delay in [14,20] must be differentiable, with its time-derivative less than 1 or bounded by a constant, which greatly limits the applicability of the results proposed there. Here, such restrictions on the time-varying delay are removed: by defining a new Lyapunov–Krasovskii functional, only the boundedness of the time-varying delay is required. A delay-dependent condition is developed to estimate the neuron states through the available output measurements such that the error-state system is globally asymptotically stable. The criterion is formulated in terms of an LMI, which can be checked efficiently by standard numerical packages [2,10].

Notations: For a real square matrix $X$, the notation $X > 0$ ($X \geq 0$, $X < 0$, $X \leq 0$) means that $X$ is symmetric and positive definite (positive semi-definite, negative definite, negative semi-definite, respectively). The shorthand $\mathrm{diag}\{M_1, M_2, \ldots, M_N\}$ denotes a block-diagonal matrix with diagonal blocks $M_1, M_2, \ldots, M_N$. $I$ is the identity matrix of appropriate dimension. The superscript "T" denotes the transpose.
$|\cdot|$ is the Euclidean norm in $\mathbb{R}^n$. Let $L^2_{\mathcal{F}_0}([-d, 0]; \mathbb{R}^n)$ denote the family of all $\mathcal{F}_0$-measurable $C([-d, 0]; \mathbb{R}^n)$-valued variables $\xi = \{\xi(\theta) : -d \leq \theta \leq 0\}$ such that $\sup_{-d \leq \theta \leq 0} |\xi(\theta)| < \infty$. Matrices, if their dimensions are not explicitly stated, are assumed to have compatible dimensions for algebraic operations.

2. Problem formulation

The model of neural networks with time-varying delay considered in this paper is described by the following state equation:

$\dot{x}(t) = -Ax(t) + W_0 g(x(t)) + W_1 g(x(t - \tau(t))) + J,$   (1)

where $x(t) = [x_1(t), x_2(t), \ldots, x_n(t)]^T \in \mathbb{R}^n$ is the state vector associated with $n$ neurons, and $A = \mathrm{diag}(a_1, a_2, \ldots, a_n)$ is a diagonal matrix with positive entries $a_i > 0$. The matrices $W_0$ and $W_1$ are, respectively, the connection weight matrix and the delayed connection weight matrix. $g(x(t)) = [g_1(x_1(t)), g_2(x_2(t)), \ldots, g_n(x_n(t))]^T$ denotes the neuron activation function, and $J = (J_1, J_2, \ldots, J_n)^T$ is an external input vector. $\tau(t)$ is the time-varying delay satisfying

$0 \leq \tau(t) \leq d,$   (2)
where $d$ is a scalar constant.

Remark 1. A sufficient condition for the state estimation of delayed neural networks was given in [20] based on the assumption that the time-varying delay is differentiable with its time-derivative smaller than 1. By assuming that the time-varying delay is differentiable with a bounded time-derivative, the state estimation problem for delayed neural networks was also discussed in [14]. These restrictions on the time-varying delay are removed in this paper: as stated in (2), the time delay is only required to be bounded.

As in [14,20], the neuron activation function $g(\cdot)$ is assumed to satisfy the following Lipschitz condition:

$|g(x_1) - g(x_2)| \leq |G(x_1 - x_2)|,$   (3)

with $G \in \mathbb{R}^{n \times n}$ a known constant matrix.

It is known [14,20] that in relatively large-scale neural networks, the information on the neuron states available from the network measurements is commonly incomplete; that is, in practice the neuron states are often not fully obtainable from the network outputs. The purpose of this study is to present an efficient estimation algorithm to observe the neuron states from the available network outputs. The network measurements are therefore assumed to satisfy

$y(t) = Cx(t) + f(t, x(t)),$   (4)

where $y(t) \in \mathbb{R}^m$ is the measurement output, $C \in \mathbb{R}^{m \times n}$ is a known constant matrix, and $f : \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^m$ is the neuron-dependent nonlinear disturbance on the network outputs, satisfying the following Lipschitz condition:

$|f(t, x_1) - f(t, x_2)| \leq |F(x_1 - x_2)|,$   (5)

where the constant matrix $F \in \mathbb{R}^{n \times n}$ is also known.
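As an illustration, the standing assumptions (2), (3) and (5) can be spot-checked numerically for the scalar functions used later in the example of Section 4 ($g(x) = \frac{1}{4}[|x+1| - |x-1|]$ with $G = 0.5I$, $f(t, x) = 0.4 \cos x$ with $F = 0.4I$, and $\tau(t) = |\sin t|$ with $d = 1$). The sketch below is illustrative and not part of the original development:

```python
import numpy as np

rng = np.random.default_rng(0)
g = lambda x: 0.25 * (np.abs(x + 1.0) - np.abs(x - 1.0))  # activation of Section 4
f = lambda x: 0.4 * np.cos(x)                             # output disturbance of Section 4
tau = lambda t: np.abs(np.sin(t))                         # time-varying delay of Section 4

# Lipschitz conditions (3) and (5) with G = 0.5 and F = 0.4 (scalar case).
a, b = rng.uniform(-5, 5, 10000), rng.uniform(-5, 5, 10000)
assert np.all(np.abs(g(a) - g(b)) <= 0.5 * np.abs(a - b) + 1e-12)
assert np.all(np.abs(f(a) - f(b)) <= 0.4 * np.abs(a - b) + 1e-12)

# Boundedness (2): 0 <= tau(t) <= d = 1, although tau is not
# differentiable: the one-sided difference quotients at t = pi disagree.
t = np.linspace(0.0, 20.0, 200001)
assert np.all((tau(t) >= 0.0) & (tau(t) <= 1.0))
h = 1e-6
left = (tau(np.pi) - tau(np.pi - h)) / h    # about -1
right = (tau(np.pi + h) - tau(np.pi)) / h   # about +1
assert abs(left + 1.0) < 1e-3 and abs(right - 1.0) < 1e-3
```

Note that $\tau(t) = |\sin t|$ satisfies (2) but fails the differentiability assumptions of [14,20], which is exactly the gap this paper closes.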
The full-order state estimator is of the form

$\dot{\hat{x}}(t) = -A\hat{x}(t) + W_0 g(\hat{x}(t)) + W_1 g(\hat{x}(t - \tau(t))) + J + K[y(t) - C\hat{x}(t) - f(t, \hat{x}(t))],$   (6)

where $\hat{x}(t)$ is the estimation of the neuron state, and $K \in \mathbb{R}^{n \times m}$ is the estimator gain matrix to be designed.

Define the error state to be $e(t) = x(t) - \hat{x}(t)$ and

$\varphi(t) = g(x(t)) - g(\hat{x}(t)), \quad \psi(t) = f(t, x(t)) - f(t, \hat{x}(t));$

then the error-state system can be expressed as

$\dot{e}(t) = -(A + KC)e(t) + W_0 \varphi(t) + W_1 \varphi(t - \tau(t)) - K\psi(t).$   (7)

Let $e(t; \xi)$ be the state trajectory of system (7) under the initial condition $e(\theta) = \xi(\theta)$ on $-d \leq \theta \leq 0$ with $\xi \in L^2_{\mathcal{F}_0}([-d, 0]; \mathbb{R}^n)$. It is obvious that system (7) admits a trivial solution $e(t; 0) = 0$.
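To make the structure of (1), (4), (6) and (7) concrete, here is a minimal forward-Euler simulation of a hypothetical scalar instance. All parameter values, including the hand-picked gain $k$, are illustrative assumptions and not results of the paper:

```python
import numpy as np

# Forward-Euler simulation of a hypothetical scalar instance of the
# plant (1), measurement (4) and observer (6); all numbers, including
# the hand-picked observer gain k, are illustrative assumptions.
a, w0, w1, J = 5.0, 0.1, 0.1, 0.5        # plant data (n = 1)
c, k = 1.0, 1.0                           # output map and observer gain
g = np.tanh                               # 1-Lipschitz activation (G = 1)
f = lambda x: 0.4 * np.cos(x)             # output disturbance (F = 0.4)
tau = lambda t: np.abs(np.sin(t))         # bounded delay, d = 1

dt, T = 0.001, 10.0
n = int(T / dt)
x = np.empty(n + 1)
xh = np.empty(n + 1)
x[0], xh[0] = 1.0, 0.0                    # initial estimation error e(0) = 1
for i in range(n):
    t = i * dt
    jd = max(0, i - int(round(tau(t) / dt)))  # index of t - tau(t)
    y = c * x[i] + f(x[i])                    # measurement (4)
    x[i + 1] = x[i] + dt * (-a * x[i] + w0 * g(x[i]) + w1 * g(x[jd]) + J)
    xh[i + 1] = xh[i] + dt * (-a * xh[i] + w0 * g(xh[i]) + w1 * g(xh[jd])
                              + J + k * (y - c * xh[i] - f(xh[i])))
e0, eT = abs(x[0] - xh[0]), abs(x[-1] - xh[-1])
assert eT < 1e-2 * e0                     # estimation error has decayed
```

Because the external input $J$ enters the plant and the observer identically, it cancels in the error dynamics (7); the decay of $e(t)$ is driven by $-(a + kc)$ dominating the Lipschitz coupling terms.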
Definition 1. For the system (7) and every $\xi \in L^2_{\mathcal{F}_0}([-d, 0]; \mathbb{R}^n)$, the trivial solution is said to be globally asymptotically stable if it is locally stable in the sense of Lyapunov and globally attractive.

We end this section by recalling two lemmas.

Lemma 1 (Gu et al. [11]). For any constant matrix $N \in \mathbb{R}^{m \times m}$ with $N = N^T > 0$, scalar $\gamma > 0$, and vector function $\omega : [0, \gamma] \to \mathbb{R}^m$ such that the integrations concerned are well defined,

$\gamma \int_0^\gamma \omega^T(s) N \omega(s)\, ds \geq \left( \int_0^\gamma \omega(s)\, ds \right)^T N \left( \int_0^\gamma \omega(s)\, ds \right).$

Lemma 2 (Schur complement [2]). The LMI

$\begin{bmatrix} \Omega_{11} & \Omega_{12} \\ \Omega_{12}^T & -\Omega_{22} \end{bmatrix} < 0,$

where $\Omega_{11} = \Omega_{11}^T$ and $\Omega_{22} = \Omega_{22}^T$, is equivalent to

$\Omega_{22} > 0, \quad \Omega_{11} + \Omega_{12} \Omega_{22}^{-1} \Omega_{12}^T < 0.$
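Both lemmas admit quick numerical spot-checks. The instances below (a discretized scalar form of Lemma 1, and hypothetical $1 \times 1$ blocks for Lemma 2) are illustrative assumptions for demonstration only:

```python
import numpy as np

# Lemma 1, discretized scalar form (N = q > 0): for a Riemann sum on
# [0, gamma] the inequality reduces to Cauchy-Schwarz, so it holds
# for any sampled omega.
rng = np.random.default_rng(1)
gamma, n, q = 2.0, 1000, 3.0
ds = gamma / n
w = rng.normal(size=n)                          # sampled omega(s)
lhs = gamma * np.sum(w * q * w) * ds            # gamma * int omega^T N omega ds
rhs = (np.sum(w) * ds) * q * (np.sum(w) * ds)   # (int omega)^T N (int omega)
assert lhs >= rhs

# Lemma 2 (Schur complement) on hypothetical 1x1 blocks.
O11, O12, O22 = np.array([[-2.0]]), np.array([[1.0]]), np.array([[1.0]])
block = np.block([[O11, O12], [O12.T, -O22]])
schur = O11 + O12 @ np.linalg.inv(O22) @ O12.T
assert np.linalg.eigvalsh(block).max() < 0      # block LMI holds
assert np.linalg.eigvalsh(O22).min() > 0        # O22 > 0
assert np.linalg.eigvalsh(schur).max() < 0      # equivalent Schur condition
```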
3. State estimator for delayed neural networks

This section is dedicated to designing a state estimator for the neural network with time-varying delay (1), such that the error-state system is globally asymptotically stable. The following theorem presents a delay-dependent condition, based on an LMI approach, for the existence of the desired state estimator.

Theorem 1. The error-state system (7) of the delayed neural network described by (1) and (4) is globally asymptotically stable if there exist three positive scalars $\alpha > 0$, $\beta > 0$, $\gamma > 0$ and real matrices $P > 0$, $Q > 0$, $R$, $S$, $T$ such that the LMI

$\begin{bmatrix}
\Omega & S + T^T & -S & PW_0 & PW_1 & -R & -dA^T P - dC^T R^T \\
S^T + T & -T - T^T + \beta G^T G & -T & 0 & 0 & 0 & 0 \\
-S^T & -T^T & -Q & 0 & 0 & 0 & 0 \\
W_0^T P & 0 & 0 & -\alpha I & 0 & 0 & dW_0^T P \\
W_1^T P & 0 & 0 & 0 & -\beta I & 0 & dW_1^T P \\
-R^T & 0 & 0 & 0 & 0 & -\gamma I & -dR^T \\
-dPA - dRC & 0 & 0 & dPW_0 & dPW_1 & -dR & -2P + Q
\end{bmatrix} < 0$   (8)

holds, where $\Omega = -PA - A^T P - RC - C^T R^T + S + S^T + \alpha G^T G + \gamma F^T F$.
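To show how condition (8) is used, the sketch below assembles the LMI for a hypothetical scalar instance ($n = m = 1$) with hand-picked decision variables. Every number is an illustrative assumption, chosen only so that (8) happens to be feasible; in practice the decision variables would come from an LMI solver:

```python
import numpy as np

# Hypothetical scalar data and hand-picked decision variables for
# which LMI (8) is feasible (all values are illustrative assumptions).
A, W0, W1, G, F, C, d = 5.0, 0.1, 0.1, 0.5, 0.4, 1.0, 0.01
P, Q, R, S, T = 3.0, 4.0, 15.0, -1.0, 1.0
alpha, beta, gamma = 1.0, 1.0, 50.0

Om = -P*A - A*P - R*C - C*R + S + S + alpha*G*G + gamma*F*F
# The 7x7 block matrix of (8); in the scalar case S + T^T = S^T + T = S + T.
M = np.array([
    [Om,              S + T,              -S, P*W0,   P*W1,   -R,     -d*(A*P + C*R)],
    [S + T,           -2*T + beta*G*G,    -T, 0,      0,       0,      0            ],
    [-S,              -T,                 -Q, 0,      0,       0,      0            ],
    [P*W0,            0,                   0, -alpha, 0,       0,      d*W0*P       ],
    [P*W1,            0,                   0, 0,      -beta,   0,      d*W1*P       ],
    [-R,              0,                   0, 0,      0,      -gamma, -d*R          ],
    [-d*(P*A + R*C),  0,                   0, d*P*W0, d*P*W1, -d*R,   -2*P + Q      ],
])
assert np.allclose(M, M.T)                  # LMI matrix is symmetric
assert np.linalg.eigvalsh(M).max() < 0      # condition (8) holds
K = R / P                                   # estimator gain K = P^{-1} R
```

With $P$, $Q$, $R$ found by a solver rather than by hand, the same two checks (symmetry and negative definiteness) certify the design, and the gain is recovered as $K = P^{-1}R$.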
The estimator gain can then be designed as $K = P^{-1}R$.

Proof. Define a Lyapunov–Krasovskii functional candidate as

$V(e(t)) = V_1(e(t)) + V_2(e(t)),$   (9)

with

$V_1(e(t)) = e^T(t) P e(t), \qquad V_2(e(t)) = d \int_{t-d}^{t} (s - t + d)\, \dot{e}^T(s) Q \dot{e}(s)\, ds,$

where $P = P^T > 0$ and $Q = Q^T > 0$ are to be determined.

Noting that $\int_{t-\tau(t)}^{t} \dot{e}(s)\, ds = e(t) - e(t - \tau(t))$, for any matrices $S$ and $T$ one has

$2\left[e^T(t) S + e^T(t - \tau(t)) T\right]\left[ e(t) - e(t - \tau(t)) - \int_{t-\tau(t)}^{t} \dot{e}(s)\, ds \right] = 0.$

Adding this zero term while directly computing the time-derivative of $V_1(e)$ along (7), one can deduce that

$\dot{V}_1(e) = 2e^T(t) P \left[-(A + KC)e(t) + W_0 \varphi(t) + W_1 \varphi(t - \tau(t)) - K\psi(t)\right]$
$\qquad + 2\left[e^T(t) S + e^T(t - \tau(t)) T\right]\left[ e(t) - e(t - \tau(t)) - \int_{t-\tau(t)}^{t} \dot{e}(s)\, ds \right]$
$\quad = e^T(t)\left[-P(A + KC) - (A + KC)^T P + S + S^T\right] e(t) + 2e^T(t)(S + T^T) e(t - \tau(t))$
$\qquad - 2e^T(t - \tau(t))\, T\, e(t - \tau(t)) + 2e^T(t) P W_0 \varphi(t) + 2e^T(t) P W_1 \varphi(t - \tau(t))$
$\qquad - 2e^T(t) P K \psi(t) - 2e^T(t) S \int_{t-\tau(t)}^{t} \dot{e}(s)\, ds - 2e^T(t - \tau(t))\, T \int_{t-\tau(t)}^{t} \dot{e}(s)\, ds.$   (10)

Calculating the time-derivative of $V_2(e)$ along the trajectories of system (7) and noting that $0 \leq \tau(t) \leq d$, one obtains

$\dot{V}_2(e) = d^2 \dot{e}^T(t) Q \dot{e}(t) - d \int_{t-d}^{t} \dot{e}^T(s) Q \dot{e}(s)\, ds \leq d^2 \dot{e}^T(t) Q \dot{e}(t) - \tau(t) \int_{t-\tau(t)}^{t} \dot{e}^T(s) Q \dot{e}(s)\, ds.$

It then follows from Lemma 1 that

$\dot{V}_2(e) \leq d^2 \dot{e}^T(t) Q \dot{e}(t) - \left( \int_{t-\tau(t)}^{t} \dot{e}(s)\, ds \right)^T Q \left( \int_{t-\tau(t)}^{t} \dot{e}(s)\, ds \right).$

Since the functions $g(\cdot)$ and $f(\cdot)$ satisfy (3) and (5), respectively, it is clear that

$\varphi^T(t)\varphi(t) = |g(x(t)) - g(\hat{x}(t))|^2 \leq |G e(t)|^2 = e^T(t) G^T G e(t),$
$\psi^T(t)\psi(t) = |f(t, x(t)) - f(t, \hat{x}(t))|^2 \leq |F e(t)|^2 = e^T(t) F^T F e(t).$

Then, for positive scalars $\alpha$, $\beta$ and $\gamma$,

$\alpha\left[e^T(t) G^T G e(t) - \varphi^T(t)\varphi(t)\right] \geq 0,$
$\beta\left[e^T(t - \tau(t)) G^T G e(t - \tau(t)) - \varphi^T(t - \tau(t))\varphi(t - \tau(t))\right] \geq 0,$
$\gamma\left[e^T(t) F^T F e(t) - \psi^T(t)\psi(t)\right] \geq 0.$   (11)

Combining (10) with (11), one obtains

$\dot{V}(e) \leq \xi^T(t)\left(\Phi + d^2 \Gamma^T Q \Gamma\right)\xi(t),$   (12)
where

$\xi(t) = \left[ e^T(t),\ e^T(t - \tau(t)),\ \left( \int_{t-\tau(t)}^{t} \dot{e}(s)\, ds \right)^T,\ \varphi^T(t),\ \varphi^T(t - \tau(t)),\ \psi^T(t) \right]^T,$

$\Phi = \begin{bmatrix}
\Theta & S + T^T & -S & PW_0 & PW_1 & -PK \\
S^T + T & -T - T^T + \beta G^T G & -T & 0 & 0 & 0 \\
-S^T & -T^T & -Q & 0 & 0 & 0 \\
W_0^T P & 0 & 0 & -\alpha I & 0 & 0 \\
W_1^T P & 0 & 0 & 0 & -\beta I & 0 \\
-K^T P & 0 & 0 & 0 & 0 & -\gamma I
\end{bmatrix},$

$\Theta = -P(A + KC) - (A + KC)^T P + S + S^T + \alpha G^T G + \gamma F^T F,$
$\Gamma = \left[-(A + KC)\ \ 0\ \ 0\ \ W_0\ \ W_1\ \ {-K}\right].$

If $\Phi + d^2 \Gamma^T Q \Gamma < 0$, there must be a small positive scalar $\epsilon$ such that $\Phi + d^2 \Gamma^T Q \Gamma + \mathrm{diag}\{\epsilon I, 0, 0, 0, 0, 0\} \leq 0$. It follows from (12) that

$\dot{V}(e) \leq -\epsilon\, e^T(t) e(t) < 0$   (13)

for all $e(t) \neq 0$, which implies that the error-state system (7) of the delayed neural network described by (1) and (4) is globally asymptotically stable.

In the following, we show that $\Phi + d^2 \Gamma^T Q \Gamma < 0$. Firstly, by Lemma 2, $\Phi + d^2 \Gamma^T Q \Gamma < 0$ is equivalent to
$\begin{bmatrix}
\Theta & S + T^T & -S & PW_0 & PW_1 & -PK & -d(A + KC)^T Q \\
S^T + T & -T - T^T + \beta G^T G & -T & 0 & 0 & 0 & 0 \\
-S^T & -T^T & -Q & 0 & 0 & 0 & 0 \\
W_0^T P & 0 & 0 & -\alpha I & 0 & 0 & dW_0^T Q \\
W_1^T P & 0 & 0 & 0 & -\beta I & 0 & dW_1^T Q \\
-K^T P & 0 & 0 & 0 & 0 & -\gamma I & -dK^T Q \\
-dQ(A + KC) & 0 & 0 & dQW_0 & dQW_1 & -dQK & -Q
\end{bmatrix} < 0.$   (14)
Pre- and post-multiplying (14) by $\mathrm{diag}\{I, I, I, I, I, I, PQ^{-1}\}$ and $\mathrm{diag}\{I, I, I, I, I, I, Q^{-1}P\}$, respectively, one easily derives that (14) is equivalent to

$\begin{bmatrix}
\Theta & S + T^T & -S & PW_0 & PW_1 & -PK & -d(A + KC)^T P \\
S^T + T & -T - T^T + \beta G^T G & -T & 0 & 0 & 0 & 0 \\
-S^T & -T^T & -Q & 0 & 0 & 0 & 0 \\
W_0^T P & 0 & 0 & -\alpha I & 0 & 0 & dW_0^T P \\
W_1^T P & 0 & 0 & 0 & -\beta I & 0 & dW_1^T P \\
-K^T P & 0 & 0 & 0 & 0 & -\gamma I & -dK^T P \\
-dP(A + KC) & 0 & 0 & dPW_0 & dPW_1 & -dPK & -PQ^{-1}P
\end{bmatrix} < 0,$   (15)

which, under the change of variable $R = PK$ (i.e., $K = P^{-1}R$), has the same entries as (8) except for the $(7,7)$ block $-PQ^{-1}P$.
It is noted that (15) is not an LMI condition because of the term $PQ^{-1}P$. In view of the inequality $PQ^{-1}P \geq 2P - Q$, which results from $(P - Q)^T Q^{-1} (P - Q) = PQ^{-1}P - 2P + Q \geq 0$, the LMI condition (8) guarantees that (15) holds. That is, $\Phi + d^2 \Gamma^T Q \Gamma < 0$ is implied by the LMI condition (8). This completes the proof. □

Remark 2. Theorem 1 presents an LMI-based condition for the design of the desired state estimator for delayed neural networks. This criterion depends on the size of the time-varying delay. Two slack variables $S$ and $T$, which are not required to be symmetric, have been introduced into the obtained condition. The state estimation problem for delayed neural networks was studied by Wang et al. [20], where a delay-independent condition without slack variables was derived by means of the feasibility of an LMI. It is thus expected that Theorem 1 is less conservative than the result proposed in [20], due to the increased freedom of the introduced slack variables.

Remark 3. The delay-dependent state estimation condition developed in [14] was formulated as a matrix inequality, not an LMI. That is to say, to check the existence of the state estimator, one needs to handle a nonlinear programming problem, which cannot be easily solved. By contrast, the delay-dependent condition in Theorem 1 is expressed in terms of an LMI, which can be solved efficiently by the Matlab LMI Control Toolbox [2,10]. Therefore, Theorem 1 is more practical and more easily applied than the results in [14].

As seen in the proof of Theorem 1, the following corollary is immediately obtained.

Corollary 1. Let the estimator gain $K$ be given. The error-state system (7) is globally asymptotically stable if there exist scalars $\alpha > 0$, $\beta > 0$, $\gamma > 0$ and real matrices $P > 0$, $Q > 0$, $S$, $T$ such that the LMI (14) holds.

In the past few decades, the stability analysis of recurrent neural networks has been an active research topic. A number of delay-independent or delay-dependent stability criteria have been reported in the literature; see, for example, [1,3–5,7,9,15,17,21–28]. It should be pointed out that the developed LMI approach can also be applied to the global asymptotic stability analysis of neural networks with time-varying delays. Consider the delayed neural network (1), and let $x^*$ be its unique equilibrium point. By the transformation $z(t) = x(t) - x^*$, system (1) can be rewritten as

$\dot{z}(t) = -Az(t) + W_0 \hat{g}(z(t)) + W_1 \hat{g}(z(t - \tau(t))),$   (16)

where $\hat{g}(z(t)) = g(z(t) + x^*) - g(x^*)$. It follows from (3) that

$|\hat{g}(z)| \leq |Gz|.$   (17)

The following theorem can be easily obtained under the assumption that the time-varying delay only satisfies (2).
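The matrix inequality $PQ^{-1}P \geq 2P - Q$, which was used in the proof of Theorem 1 to replace the nonconvex term $-PQ^{-1}P$ in (15) by the linear term $-2P + Q$ in (8), can be spot-checked numerically for randomly generated positive definite $P$ and $Q$ (an illustrative sketch):

```python
import numpy as np

# Check P Q^{-1} P >= 2P - Q, which follows from
# (P - Q)^T Q^{-1} (P - Q) >= 0, on random positive definite P, Q.
rng = np.random.default_rng(2)
n = 4
Mp = rng.normal(size=(n, n))
Mq = rng.normal(size=(n, n))
P = Mp @ Mp.T + n * np.eye(n)     # positive definite by construction
Q = Mq @ Mq.T + n * np.eye(n)     # positive definite by construction

gap = P @ np.linalg.inv(Q) @ P - (2.0 * P - Q)
# gap equals (P - Q) Q^{-1} (P - Q), hence positive semi-definite
# (symmetrize before the eigenvalue test to absorb rounding).
assert np.linalg.eigvalsh((gap + gap.T) / 2).min() > -1e-8
```

The price of this convexification is some conservatism: (8) is sufficient but slightly stronger than (15).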
Theorem 2. The system (16) is globally asymptotically stable if there exist real matrices $P > 0$, $Q > 0$, $S$, $T$ and scalars $\alpha > 0$, $\beta > 0$ such that

$\begin{bmatrix}
\Delta & S + T^T & -S & PW_0 & PW_1 & -dA^T Q \\
S^T + T & -T - T^T + \beta G^T G & -T & 0 & 0 & 0 \\
-S^T & -T^T & -Q & 0 & 0 & 0 \\
W_0^T P & 0 & 0 & -\alpha I & 0 & dW_0^T Q \\
W_1^T P & 0 & 0 & 0 & -\beta I & dW_1^T Q \\
-dQA & 0 & 0 & dQW_0 & dQW_1 & -Q
\end{bmatrix} < 0,$

where $\Delta = -PA - A^T P + S + S^T + \alpha G^T G$.

Proof. To show that the neural network (16) is globally asymptotically stable, choose the following Lyapunov–Krasovskii functional candidate:

$V(z(t)) = z^T(t) P z(t) + d \int_{t-d}^{t} (s - t + d)\, \dot{z}^T(s) Q \dot{z}(s)\, ds.$

The remainder of the proof is similar to that of Theorem 1 and is thus omitted. □
Remark 4. Theorem 2 gives a delay-dependent condition to test the global asymptotic stability of the delayed neural network (1). The stability criterion is expressed in terms of an LMI, which can be checked readily by standard algorithms such as the interior-point method [2]. In [1,4,7,27,28], several stability conditions were proposed for neural networks with time-varying delay by requiring the time-derivative of the time delay to be less than 1. This restriction has been removed in Theorem 2: the time-varying delay is only required to be bounded. That is, differentiability of the time-varying delay is not necessary for investigating the stability of delayed neural networks, and thus Theorem 2 is less restrictive than those in [1,4,7,27,28]. Moreover, two slack variables $S$ and $T$ have been introduced to reduce the conservatism of the proposed criterion.

4. A numerical example

A simple example with simulation results is provided to demonstrate the effectiveness of the developed LMI approach to the state estimator design for delayed neural networks. Consider the delayed neural network with the following parameters:

$A = \begin{bmatrix} 3.6 & 0 & 0 \\ 0 & 4.2 & 0 \\ 0 & 0 & 5 \end{bmatrix}, \quad
W_0 = \begin{bmatrix} 0.2 & 0.1 & 0 \\ 0.1 & 0.3 & 0.2 \\ 0.2 & 0.1 & 0.2 \end{bmatrix}, \quad
W_1 = \begin{bmatrix} 0.1 & 1 & 0.2 \\ 0.1 & 0.2 & 0.1 \\ 0.2 & 0.1 & 0.4 \end{bmatrix},$

$J = \begin{bmatrix} \cos t + 0.4 \sin t + 0.005 t^2 \\ 0.5 \cos t + 0.5 \sin t + 0.004 t^2 \\ 1.2 \cos t + 0.5 \sin t - 0.01 t^2 \end{bmatrix}, \quad C = I.$

The activation function is taken as $g(x) = \frac{1}{4}[|x + 1| - |x - 1|]$, so that $G = 0.5I$. The nonlinear disturbance is of the form $f(t, x(t)) = 0.4 \cos x(t)$, so that $F = 0.4I$. Finally, the time-varying delay is given as $\tau(t) = |\sin t|$, with $d = 1$. Obviously, $|\sin t|$ is not differentiable at $t = k\pi$ $(k = 0, 1, 2, \ldots)$, which means that the criteria proposed in [14,20] fail to solve this state estimation problem. However, Theorem 1 is valid in this case. By solving the LMI in Theorem 1,
a feasible solution is obtained as

$P = \begin{bmatrix} 9.5518 & 0.0580 & 0.5724 \\ 0.0580 & 7.5670 & 0.1084 \\ 0.5724 & 0.1084 & 5.8168 \end{bmatrix}, \quad
Q = \begin{bmatrix} 3.7321 & 0.0285 & 0.3792 \\ 0.0285 & 2.1975 & 0.0733 \\ 0.3792 & 0.0733 & 0.9594 \end{bmatrix},$

$R = \begin{bmatrix} 18.7496 & 0.1476 & 2.0494 \\ 0.1215 & 18.1875 & 0.3627 \\ 1.1083 & 0.2589 & 18.3591 \end{bmatrix}, \quad
S = \begin{bmatrix} 0.9753 & 0.0096 & 0.0462 \\ 0.0285 & 1.1511 & 0.0218 \\ 0.0105 & 0.0205 & 0.7089 \end{bmatrix},$

$T = \begin{bmatrix} 2.3152 & 0.0198 & 0.2033 \\ 0.0046 & 1.7024 & 0.0480 \\ 0.1909 & 0.0466 & 0.8576 \end{bmatrix},$

$\alpha = 3.6675, \quad \beta = 2.2576, \quad \gamma = 50.0955.$

The state estimation gain can then be designed as

$K = \begin{bmatrix} 1.9631 & 0.0009 & 0.0256 \\ 0.0010 & 2.4035 & 0.0025 \\ 0.0027 & 0.0004 & 3.1536 \end{bmatrix}.$

The simulation results are shown in Figs. 1–4. Figs. 1–3 depict the responses of the true states $x_1(t)$, $x_2(t)$, $x_3(t)$ and their estimations $\hat{x}_1(t)$, $\hat{x}_2(t)$, $\hat{x}_3(t)$, respectively, and Fig. 4 shows the response of the error state $e(t)$.
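As a quick sanity check (a sketch using the entries of the feasible solution as printed above), the matrices $P$ and $Q$ can be verified to satisfy the definiteness requirements $P > 0$ and $Q > 0$ of Theorem 1:

```python
import numpy as np

# Definiteness check for the feasible solution reported above:
# Theorem 1 requires P > 0 and Q > 0.
P = np.array([[9.5518, 0.0580, 0.5724],
              [0.0580, 7.5670, 0.1084],
              [0.5724, 0.1084, 5.8168]])
Q = np.array([[3.7321, 0.0285, 0.3792],
              [0.0285, 2.1975, 0.0733],
              [0.3792, 0.0733, 0.9594]])
assert np.allclose(P, P.T) and np.allclose(Q, Q.T)
assert np.linalg.eigvalsh(P).min() > 0   # P > 0
assert np.linalg.eigvalsh(Q).min() > 0   # Q > 0
```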
[Figure] Fig. 1. The responses of the true state $x_1(t)$ (solid) and its estimation $\hat{x}_1(t)$ (dash-dot); amplitude versus time, $0 \leq t \leq 10$.

[Figure] Fig. 2. The responses of the true state $x_2(t)$ (solid) and its estimation $\hat{x}_2(t)$ (dash-dot); amplitude versus time, $0 \leq t \leq 10$.

[Figure] Fig. 3. The responses of the true state $x_3(t)$ (solid) and its estimation $\hat{x}_3(t)$ (dash-dot); amplitude versus time, $0 \leq t \leq 10$.

[Figure] Fig. 4. The response of the error state $e(t)$; amplitude versus time, $0 \leq t \leq 10$.
The simulation results demonstrate the effectiveness of the developed approach to the design of the state estimator for delayed neural networks.

5. Conclusion

In this paper, the delay-dependent state estimation problem has been studied for a class of neural networks with time-varying delays. Differentiability of the time-varying delay is no longer needed to address this issue. A sufficient condition has been presented to guarantee the existence of the desired state estimator for delayed neural networks. The criterion depends on the size of the time-varying delay and is formulated by means of the feasibility of a strict LMI. Two slack variables have been introduced to reduce the conservatism of the obtained results. In addition, the developed approach has been applied to discuss the global asymptotic stability of delayed neural networks. Finally, an example with simulation results has been given to illustrate the effectiveness of the result and its improvement over existing ones.

Acknowledgments

The authors would like to thank the associate editor and the anonymous reviewers for their constructive comments that have greatly improved the quality of this paper. The work was partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region of China [Project no. CityU 1353/04E], and partially supported by the National Natural Science Foundation of China under Grant no. 60574043, an International Joint Project funded by the NSFC and the Royal Society of the United Kingdom, and the Natural Science Foundation of Jiangsu Province of China under Grant no. BK2006093.

References

[1] S. Arik, An analysis of exponential stability of delayed neural networks with time varying delays, Neural Networks 17 (2004) 1027–1031.
[2] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, PA, 1994.
[3] J. Cao, A set of stability criteria for delayed cellular neural networks, IEEE Trans. Circuits Syst. I 48 (2001) 494–498.
[4] J. Cao, J. Wang, Global asymptotic stability of a general class of recurrent neural networks with time-varying delays, IEEE Trans. Circuits Syst. I 50 (2003) 34–44.
[5] J. Cao, J. Wang, Global asymptotic and robust stability of recurrent neural networks with time delays, IEEE Trans. Circuits Syst. I 52 (2005) 417–426.
[6] V.T.S. Elanayar, Y.C. Shin, Approximation and estimation of nonlinear stochastic dynamic systems using radial basis function neural networks, IEEE Trans. Neural Networks 5 (1994) 594–603.
[7] T. Ensari, S. Arik, Global stability analysis of neural networks with multiple time varying delays, IEEE Trans. Autom. Control 50 (2005) 1781–1785.
[8] R. Fantacci, M. Forti, M. Marini, L. Pancani, Cellular neural network approach to a class of communication problems, IEEE Trans. Circuits Syst. I 46 (1999) 1457–1467.
[9] M. Forti, A. Tesi, New conditions for global stability of neural networks with application to linear and quadratic programming problems, IEEE Trans. Circuits Syst. I 42 (1995) 354–366.
[10] P. Gahinet, A. Nemirovski, A.J. Laub, M. Chilali, LMI Control Toolbox: For Use with Matlab, The MathWorks, Inc., 1995.
[11] K. Gu, V.L. Kharitonov, J. Chen, Stability of Time-delay Systems, Birkhäuser, Massachusetts, 2003.
[12] R. Habtom, L. Litz, Estimation of unmeasured inputs using recurrent neural networks and the extended Kalman filter, in: Proceedings of the International Conference on Neural Networks, vol. 4, Houston, TX, 1997, pp. 2067–2071.
[13] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice-Hall, Englewood Cliffs, NJ, 1998.
[14] Y. He, Q.-G. Wang, M. Wu, C. Lin, Delay-dependent state estimation for delayed neural networks, IEEE Trans. Neural Networks 17 (2006) 1077–1081.
[15] H. Huang, J. Cao, J. Wang, Global exponential stability and periodic solutions of recurrent neural networks with delays, Phys. Lett. A 298 (2002) 393–404.
[16] G. Joya, M.A. Atencia, F. Sandoval, Hopfield neural networks for optimization: study of the different dynamics, Neurocomputing 43 (2002) 219–237.
[17] Y. Liu, Z. Wang, X. Liu, Global exponential stability of generalized recurrent neural networks with discrete and distributed delays, Neural Networks 19 (2006) 667–675.
[18] Y. Liu, Z. Wang, X. Liu, Design of exponential state estimators for neural networks with mixed time delays, Phys. Lett. A 364 (2007) 401–412.
[19] F.M. Salam, J. Zhang, Adaptive neural observer with forward co-state propagation, in: Proceedings of the International Joint Conference on Neural Networks (IJCNN'01), vol. 1, Washington, DC, 2001, pp. 675–680.
[20] Z. Wang, D.W.C. Ho, X. Liu, State estimation for delayed neural networks, IEEE Trans. Neural Networks 16 (2005) 279–284.
[21] Z. Wang, Y. Liu, M. Li, X. Liu, Stability analysis for stochastic Cohen–Grossberg neural networks with mixed time delays, IEEE Trans. Neural Networks 17 (2006) 814–820.
[22] Z. Wang, Y. Liu, X. Liu, On global asymptotic stability of neural networks with discrete and distributed delays, Phys. Lett. A 345 (2005) 299–308.
[23] Z. Wang, H. Shu, J. Fang, X. Liu, Robust stability for stochastic Hopfield neural networks with time delays, Nonlinear Anal.: Real World Appl. 7 (2006) 1119–1128.
[24] Z. Wang, H. Shu, Y. Liu, D.W.C. Ho, X. Liu, Robust stability analysis of generalized neural networks with discrete and distributed time delays, Chaos Solitons Fractals 30 (2006) 886–896.
[25] S. Xu, J. Lam, D.W.C. Ho, Y. Zou, Improved global robust asymptotic stability criteria for delayed cellular neural networks, IEEE Trans. Syst. Man Cybern. B 35 (2005) 1317–1321.
[26] S. Xu, J. Lam, D.W.C. Ho, Y. Zou, Novel global asymptotic stability criteria for delayed cellular neural networks, IEEE Trans. Circuits Syst. II 52 (2005) 349–353.
[27] H. Zhang, X. Liao, LMI-based robust stability analysis of neural networks with time-varying delay, Neurocomputing 67 (2005) 306–312.
[28] Q. Zhang, X. Wei, J. Xu, Delay-dependent exponential stability of cellular neural networks with time-varying delays, Chaos Solitons Fractals 23 (2005) 1363–1369.

He Huang received the B.S. degree in mathematics from Gannan Normal University, Ganzhou, China, in 2000, and the M.S. degree in applied mathematics from Southeast University, Nanjing, China, in 2003. He is now working toward his Ph.D. degree at City University of Hong Kong, Hong Kong, China. His current research interests include neural networks, intelligent control, nonlinear systems, and applied mathematics.
Gang Feng received the B.Eng. and M.Eng. degrees in automatic control (of electrical engineering) from Nanjing Aeronautical Institute, Nanjing, China, in 1982 and 1984, respectively, and the Ph.D. degree in electrical engineering from the University of Melbourne, Melbourne, Australia, in 1992. He has been an Associate Professor and then Professor at City University of Hong Kong since 2000, and was a Lecturer/Senior Lecturer at the School of Electrical Engineering, University of New South Wales, Australia, from 1992 to 1999. He was a Visiting Fellow at the National University of Singapore (1997) and Aachen Technology University, Germany (1997–1998). He has authored and/or coauthored numerous refereed technical papers. His current research interests include robust adaptive control, signal processing, piecewise linear systems, and intelligent systems and control. Dr. Feng was awarded an Alexander von Humboldt Fellowship in 1997–1998. He is an Associate Editor of the IEEE Transactions on Automatic Control, the IEEE Transactions on Fuzzy Systems and the Journal of Control Theory and Applications. He was an Associate Editor of the IEEE Transactions on Systems, Man, and Cybernetics, Part C, and a member of the Conference Editorial Board of the IEEE Control Systems Society.

Jinde Cao received the B.S. degree from Anhui Normal University, Wuhu, China, the M.S. degree from Yunnan University, Kunming, China, and the Ph.D. degree from Sichuan University, Chengdu, China, all in mathematics/applied mathematics, in 1986, 1989, and 1998, respectively. From March 1989 to May 2000, he was with Yunnan University. In May 2000, he joined the Department of Mathematics, Southeast University, Nanjing, China. From July 2001 to June 2002, he was a Postdoctoral Research Fellow in the Department of Automation and Computer-aided Engineering, Chinese University of Hong Kong, Hong Kong. From August 2002 to October 2002, he was a Senior Visiting Scholar at the Institute of Mathematics, Fudan University, Shanghai, China. From February 2003 to May 2003, he was a Senior Research Associate in the Department of Mathematics, City University of Hong Kong, Hong Kong. From July 2003 to September 2003, he was a Senior Visiting Scholar in the Institute of Intelligent Machines, Chinese Academy of Sciences, Hefei, China. From January 2004 to April 2004, he was a Research Fellow in the Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Hong Kong. From January 2005 to April 2005, he was a Research Fellow in the Department of Mathematics, City University of Hong Kong, Hong Kong. From January 2006 to April 2006, he was a Research Fellow in the Department of Electronics Engineering, City University of Hong Kong, Hong Kong. From July 2006 to September 2006, he was a Visiting Research Fellow of the Royal Society in the School of Information Systems, Computing and Mathematics, Brunel University, UK. From February 2007 to April 2007, he was a Research Fellow in the Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Hong Kong. He is currently a Professor and Doctoral Advisor at Southeast University. Prior to this, he was a Professor at Yunnan University from 1996 to 2000. He is the author or coauthor of more than 130 journal papers and five edited books, and a reviewer for Mathematical Reviews and Zentralblatt-Math. His research interests include nonlinear systems, neural networks, complex systems and complex networks, control theory, and applied mathematics. Professor Cao is a Senior Member of the IEEE, and an Associate Editor of the IEEE Transactions on Neural Networks, Journal of the Franklin Institute, Mathematics and Computers in Simulation, and Neurocomputing.