Expert Systems with Applications 39 (2012) 3345–3355

An LMI approach for global robust dissipativity analysis of T–S fuzzy neural networks with interval time-varying delays

S. Muralisankar a,*, N. Gopalakrishnan a, P. Balasubramaniam b

a School of Mathematics, Madurai Kamaraj University, Madurai 625 021, Tamilnadu, India
b Department of Mathematics, Gandhigram Rural University, Gandhigram 624 302, Tamilnadu, India

Keywords: Global dissipativity; T–S fuzzy model; Neural networks; Linear matrix inequality; Time-varying delays

Abstract: Takagi–Sugeno (T–S) fuzzy models are often used to represent complex nonlinear systems by means of fuzzy sets and fuzzy reasoning applied to a set of linear sub-models. In this paper, the global robust dissipativity of T–S fuzzy neural networks with interval time-varying delays is investigated. By constructing a proper Lyapunov–Krasovskii functional and using the linear matrix inequality (LMI) technique, delay-dependent criteria for checking the global dissipativity and global exponential dissipativity of fuzzy neural networks are derived in terms of LMIs, which can be solved numerically using the LMI toolbox in MATLAB. Finally, numerical examples are given to illustrate the effectiveness of the theoretical results. © 2011 Elsevier Ltd. All rights reserved.

1. Introduction

In recent years, the dynamics of neural networks (NNs) have been widely studied due to their extensive applications in aerospace, defense, robotics, telecommunications, signal processing, pattern recognition, etc. (Feng, Yang, & Wu, 2009; Haykin, 1998; Li, 2010a, 2010b). Fuzzy logic theory has been shown to be an appealing and efficient approach to the analysis and synthesis problems for complex nonlinear systems. Takagi and Sugeno (1985) proposed an effective way to transform a nonlinear dynamic system into a set of linear sub-models via fuzzy rules, by defining a linear input/output relationship as the consequent of each individual plant rule. In Cao and Frank (2000), the standard T–S fuzzy model was extended to one with time delays and some stability conditions were presented in terms of LMIs. Recently, the Lyapunov–Krasovskii approach and the Lyapunov–Razumikhin method have been used to study the stability of delayed fuzzy systems (Cao & Frank, 2000, 2001). Moreover, the concept of incorporating fuzzy logic into NNs has grown into a popular research topic (Huang, 2006; Liu & Tang, 2004).

The notion of a dissipative dynamical system was first introduced by Willems (1972), and it was subsequently generalized in Zhang, Yan, and Chen (2010) via various approaches. Dissipative systems theory has wide-ranging implications and applications in control theory. Applications of dissipativeness in the stability analysis of linear systems with certain nonlinear feedback were first discussed in Willems (1972). In addition, dissipativeness was crucially used in the stability analysis of nonlinear systems (Hill & Moylan, 1976). The theory of dissipative systems generalizes basic tools including the passivity theorem, the bounded real lemma, the Kalman–Yakubovich lemma and the circle criterion (Tan, Soh, & Xie, 1999).

It is well known that the stability problem is central to the analysis of a dynamic system, and various types of stability of an equilibrium point have captured the attention of researchers. Nevertheless, from a practical point of view, it is not always the case that every NN has its orbits approaching a single equilibrium point; in some situations there may be no equilibrium point at all. Therefore, the concept of dissipativity was introduced in Hale (1988). Dissipativity of dynamical systems is a more general concept and has found applications in areas such as stability theory, chaos and synchronization theory, system norm estimation and robust control. Recently, many interesting results have been proposed for the dissipativity of delayed NNs (Arik, 2004; Cao, Yuan, Ho, & Lam, 2006; Huang, Xu, & Yang, 2007; Liao & Wang, 2003; Lou & Cui, 2008; Masubuchi, 2006; Song & Cao, 2008, 2010; Song & Zhao, 2005; Wang, Cao, & Wang, 2009; Zhang et al., 2010).

Motivated by the above discussion, we generalize the ordinary dissipativity analysis of uncertain NNs to the dissipativity of T–S fuzzy uncertain NNs with interval time-varying delays. The main purpose of this paper is to study the global robust dissipativity of T–S fuzzy NNs with interval time-varying delays. To the best of the authors' knowledge, there are no previous results on global robust dissipativity analysis of T–S fuzzy NNs with mixed interval time-varying delays expressed in terms of LMIs, which can be easily solved by the MATLAB LMI toolbox. The main advantage of LMI-based approaches is that the LMI conditions can be solved numerically using effective interior-point algorithms. We also provide numerical examples to demonstrate the effectiveness of the proposed dissipativity results.
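As a minimal illustration of this last point (not part of the original paper, and using CVXPY rather than the MATLAB LMI toolbox used by the authors), the sketch below tests feasibility of a simple Lyapunov-type LMI with made-up data; the delay-dependent conditions derived later in the paper can be checked numerically in the same way, by declaring the matrix variables of the theorems and passing the block inequalities to an SDP solver.

```python
# A minimal sketch (not the authors' code): checking feasibility of a simple
# Lyapunov-type LMI with CVXPY instead of the MATLAB LMI toolbox.
import cvxpy as cp
import numpy as np

A = np.array([[-2.5, 0.2],
              [0.0, -2.0]])                    # hypothetical system matrix
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)        # Lyapunov matrix variable
Q = cp.Variable((n, n), symmetric=True)

eps = 1e-6
constraints = [
    P >> eps * np.eye(n),                      # P > 0
    Q >> eps * np.eye(n),                      # Q > 0
    A.T @ P + P @ A + Q << -eps * np.eye(n),   # A^T P + P A + Q < 0
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)                      # any SDP-capable solver works
print("feasible:", prob.status == cp.OPTIMAL)
print("P =\n", P.value)
```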


Notation: Throughout this paper, $\mathbb{R}^n$ and $\mathbb{R}^{n\times n}$ denote, respectively, the $n$-dimensional Euclidean space and the set of all $n \times n$ real matrices. The superscript $T$ denotes transposition, and the notation $X \ge Y$ (respectively, $X > Y$), where $X$ and $Y$ are symmetric matrices, means that $X - Y$ is positive semi-definite (respectively, positive definite). $I_n$ is the $n \times n$ identity matrix, $\|\cdot\|$ is the Euclidean norm in $\mathbb{R}^n$, and $*$ denotes the symmetric block in a symmetric matrix. Sometimes the arguments of a function or a matrix are omitted when no confusion can arise.

2. Problem description and preliminaries

In this paper, we consider the following neural network with mixed time-varying delays:

$$\frac{dx(t)}{dt} = -Ax(t) + W_1 f(x(t)) + W_2 f(x(t-\tau(t))) + W_3\int_{t-r(t)}^{t} f(x(s))\,ds + u \qquad (1)$$
for $t \ge 0$, where $x(t) = [x_1(t),\ldots,x_n(t)]^T \in \mathbb{R}^n$ is the state vector of the network at time $t$ and $n$ corresponds to the number of neurons; $A = \mathrm{diag}(a_1, a_2, \ldots, a_n) > 0$ is a positive diagonal matrix; $W_1 = (a_{ij})_{n\times n}$, $W_2 = (b_{ij})_{n\times n}$ and $W_3 = (c_{ij})_{n\times n}$ represent the connection weight matrix, the discretely delayed connection weight matrix and the distributively delayed connection weight matrix, respectively; $f(x(t)) = [f_1(x_1(t)),\ldots,f_n(x_n(t))]^T$ denotes the neuron activation function at time $t$; $u = (u_1,\ldots,u_n)^T \in \mathbb{R}^n$ is a constant external input vector; and $\tau(t)$ and $r(t)$ denote the discrete and distributed time-varying delays, respectively.

Assumption (H1). The time-varying delays $\tau(t)$ and $r(t)$ satisfy
$$0 \le h_1 \le \tau(t) < h_2, \qquad \dot{\tau}(t) \le \mu, \qquad 0 \le r(t) \le r,$$
where $h_1$, $h_2$, $r$ and $\mu$ are constants.

Assumption (H2). For any $j \in \{1,2,\ldots,n\}$, $f_j(0) = 0$ and there exist constants $l_j^-$ and $l_j^+$ such that
$$l_j^- \le \frac{f_j(\alpha_1) - f_j(\alpha_2)}{\alpha_1 - \alpha_2} \le l_j^+$$
for all $\alpha_1 \ne \alpha_2$.

The $k$th rule of the T–S fuzzy neural network with parameter uncertainties is of the following form.

Plant Rule $k$: IF $\theta_1(t)$ is $\mu_{k1}$ and $\ldots$ and $\theta_p(t)$ is $\mu_{kp}$ THEN
$$\frac{dx(t)}{dt} = -(A_k + \Delta A_k(t))x(t) + (W_{1k} + \Delta W_{1k}(t))f(x(t)) + (W_{2k} + \Delta W_{2k}(t))f(x(t-\tau(t))) + (W_{3k} + \Delta W_{3k}(t))\int_{t-r(t)}^{t} f(x(s))\,ds + u, \qquad (2)$$
$$x(t) = \varphi(t), \quad t \in [-\rho, 0], \quad \rho = \max(h_2, r), \quad k = 1,2,\ldots,r, \qquad (3)$$
where $\mu_{ki}$ $(i = 1,2,\ldots,p)$ is the fuzzy set, $\theta(t) = [\theta_1(t),\ldots,\theta_p(t)]^T$ is the premise variable vector, and $r$ is the number of IF–THEN rules. The norm is defined by $\|\varphi\|_\rho = \max\{\sup_{-\rho\le t\le 0}\|\varphi(t)\|,\ \sup_{-\rho\le t\le 0}\|\dot{\varphi}(t)\|\}$. $A_k$, $W_{1k}$, $W_{2k}$ and $W_{3k}$ are known real constant matrices, and $\Delta A_k(t)$, $\Delta W_{1k}(t)$, $\Delta W_{2k}(t)$ and $\Delta W_{3k}(t)$ denote the time-varying parameter uncertainties.

Assumption (H3). The parameter uncertainties $\Delta A_k(t)$, $\Delta W_{1k}(t)$, $\Delta W_{2k}(t)$ and $\Delta W_{3k}(t)$ are of the form
$$[\Delta A_k(t)\ \ \Delta W_{1k}(t)\ \ \Delta W_{2k}(t)\ \ \Delta W_{3k}(t)] = G_kF_k(t)\,[E^A_k\ \ E^{W_1}_k\ \ E^{W_2}_k\ \ E^{W_3}_k],$$
where $G_k$, $E^A_k$, $E^{W_1}_k$, $E^{W_2}_k$ and $E^{W_3}_k$ are known real constant matrices with appropriate dimensions, and $F_k(t)$ is a time-varying uncertain matrix which satisfies
$$F_k^T(t)F_k(t) \le I. \qquad (4)$$

Remark 2.1. In Song and Cao (2008), the authors studied the problem of global dissipativity analysis for uncertain NNs with mixed time-varying delays. The present paper deals for the first time with T–S fuzzy NNs with interval time-varying delays: the global robust dissipativity problem is investigated for T–S fuzzy NNs with interval time-varying delays. Thus, our results extend the existing ones.

The defuzzified output of the T–S fuzzy system (2) is represented as follows:
$$\frac{dx(t)}{dt} = \sum_{k=1}^{r}\omega_k(\theta(t))\Big\{-(A_k + \Delta A_k(t))x(t) + (W_{1k} + \Delta W_{1k}(t))f(x(t)) + (W_{2k} + \Delta W_{2k}(t))f(x(t-\tau(t))) + (W_{3k} + \Delta W_{3k}(t))\int_{t-r(t)}^{t} f(x(s))\,ds + u\Big\}, \qquad (5)$$
where
$$\omega_k(\theta(t)) = \frac{\nu_k(\theta(t))}{\sum_{j=1}^{r}\nu_j(\theta(t))}, \qquad \nu_k(\theta(t)) = \prod_{j=1}^{p}\mu_{kj}(\theta_j(t)),$$
in which $\mu_{kj}(\theta_j(t))$ is the grade of membership of $\theta_j(t)$ in $\mu_{kj}$. According to the theory of fuzzy sets, we have
$$\nu_k(\theta(t)) \ge 0, \quad k = 1,2,\ldots,r, \qquad \sum_{k=1}^{r}\nu_k(\theta(t)) > 0 \quad \text{for all } t.$$
Therefore,
$$\omega_k(\theta(t)) \ge 0, \quad k = 1,2,\ldots,r, \qquad \sum_{k=1}^{r}\omega_k(\theta(t)) = 1 \quad \text{for all } t.$$

Definition 2.2. The neural network (5) is said to be globally dissipative if there exists a compact set $S \subseteq \mathbb{R}^n$ such that for every $x_0 \in \mathbb{R}^n$ there exists $T(x_0) > 0$ with $x(t; t_0, x_0) \in S$ whenever $t \ge t_0 + T(x_0)$, where $x(t; t_0, x_0)$ denotes the solution of (5) from initial state $x_0$ at initial time $t_0$. In this case, $S$ is called a globally attractive set. A set $S$ is called positive invariant if $x_0 \in S$ implies $x(t; t_0, x_0) \in S$ for $t \ge t_0$.

Definition 2.3. Let $S$ be a globally attractive set of the neural network (5). The neural network (5) is said to be globally exponentially dissipative if there exists a compact set $S^* \supset S$ in $\mathbb{R}^n$ such that for every $x_0 \in \mathbb{R}^n\setminus S^*$ there exist constants $M(x_0) > 0$ and $\alpha > 0$ such that
$$\inf_{\tilde{x}\in S^*}\|x(t; t_0, x_0) - \tilde{x}\| \le M(x_0)\,e^{-\alpha(t-t_0)}, \qquad x \in \mathbb{R}^n\setminus S^*.$$
The set $S^*$ is called a globally exponentially attractive set.
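As a brief illustration of the defuzzification step in (5) (this sketch is not from the original paper; the membership functions and the premise variable used here are hypothetical), the normalized weights $\omega_k(\theta(t))$ can be computed as follows.

```python
# Hypothetical illustration of the normalized fuzzy weights in Eq. (5).
import numpy as np

def memberships(theta):
    """Grades mu_{kj}(theta_j) for r = 2 rules and p = 1 premise variable.
    These bell-shaped functions are made up for illustration only."""
    mu_1 = np.exp(-theta**2)          # rule 1: "theta is small"
    mu_2 = 1.0 - np.exp(-theta**2)    # rule 2: "theta is large"
    return np.array([mu_1, mu_2])

def fuzzy_weights(theta):
    nu = memberships(theta)           # nu_k = prod_j mu_{kj}(theta_j) (p = 1 here)
    return nu / nu.sum()              # omega_k = nu_k / sum_j nu_j

theta = 0.8
w = fuzzy_weights(theta)
print(w, w.sum())                     # weights are nonnegative and sum to 1
```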

The following lemmas will be used to prove our main results.

Lemma 2.4 (Rakkiyappan & Balasubramaniam, 2010). Let $D$ and $N$ be real constant matrices of appropriate dimensions, and let the matrix $F(t)$ satisfy $F^T(t)F(t) \le I$. Then
(i) for any scalar $\epsilon > 0$, $DF(t)N + N^TF^T(t)D^T \le \epsilon^{-1}DD^T + \epsilon N^TN$;
(ii) for any $P > 0$ and vectors $a$, $b$ of compatible dimensions, $2a^Tb \le a^TP^{-1}a + b^TPb$.
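A quick numerical sanity check of Lemma 2.4(i) (added here as an illustration, not part of the original paper): for a random contraction $F$, the matrix $\epsilon^{-1}DD^T + \epsilon N^TN - (DFN + N^TF^TD^T)$ should be positive semi-definite.

```python
# Numerical check of Lemma 2.4(i) for randomly generated data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3
D = rng.standard_normal((n, m))
N = rng.standard_normal((m, n))
F = rng.standard_normal((m, m))
F = F / max(1.0, np.linalg.norm(F, 2))      # enforce F^T F <= I (spectral norm <= 1)

eps = 0.7                                    # any eps > 0 works
lhs = D @ F @ N + (D @ F @ N).T
rhs = (1.0 / eps) * D @ D.T + eps * N.T @ N
gap = rhs - lhs
print(np.min(np.linalg.eigvalsh(gap)) >= -1e-9)   # True: the gap is PSD
```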


Lemma 2.5 (Gu, 1994). For any constant matrix $M \in \mathbb{R}^{n\times n}$, $M = M^T > 0$, scalar $\gamma > 0$ and vector function $x : [0,\gamma] \to \mathbb{R}^n$ such that the integrations below are well defined, the following inequality holds:
$$\Big(\int_0^{\gamma} x(s)\,ds\Big)^T M \Big(\int_0^{\gamma} x(s)\,ds\Big) \le \gamma \int_0^{\gamma} x^T(s) M x(s)\,ds.$$

Lemma 2.6 (Boyd, Ghoui, Feron, & Balakrishnan, 1994). Let $M$, $P$, $Q$ be given matrices such that $Q > 0$. Then
$$\begin{bmatrix} P & M^T \\ M & -Q \end{bmatrix} < 0 \iff P + M^TQ^{-1}M < 0.$$

3. Main result

For convenience, we set
$$A + \Delta A(t) = \sum_{k=1}^{r}\omega_k(\theta(t))(A_k + \Delta A_k(t)), \qquad W_1 + \Delta W_1(t) = \sum_{k=1}^{r}\omega_k(\theta(t))(W_{1k} + \Delta W_{1k}(t)),$$
$$W_2 + \Delta W_2(t) = \sum_{k=1}^{r}\omega_k(\theta(t))(W_{2k} + \Delta W_{2k}(t)), \qquad W_3 + \Delta W_3(t) = \sum_{k=1}^{r}\omega_k(\theta(t))(W_{3k} + \Delta W_{3k}(t));$$
then system (5) can be rewritten as
$$\frac{dx(t)}{dt} = -(A + \Delta A(t))x(t) + (W_1 + \Delta W_1(t))f(x(t)) + (W_2 + \Delta W_2(t))f(x(t-\tau(t))) + (W_3 + \Delta W_3(t))\int_{t-r(t)}^{t} f(x(s))\,ds + u. \qquad (6)$$

Now we discuss the global dissipativity of the neural network model (6).

Theorem 3.1. Suppose that (H1)–(H3) hold. If there exist symmetric positive definite matrices $P > 0$, $Q_i > 0$ $(i = 1,2,3,4)$, $R_j, Y_j, Z_j > 0$ $(j = 1,2)$ and $Q > 0$, positive diagonal matrices $V > 0$ and $W > 0$, matrices $S_1$, $S_2$, $M_1$ and $N_l$ $(l = 1,\ldots,6)$, and positive scalars $\epsilon_k > 0$ such that the LMIs
$$\Xi_k = \begin{bmatrix} \Omega^k & \mathcal{M}G_k \\ * & -\epsilon_kI \end{bmatrix} < 0 \qquad (7)$$
hold for $k = 1,2,\ldots,r$, where $\Omega^k = (\Omega^k_{i,j})_{15\times 15}$ with
$\Omega^k_{1,1} = Q_1 + h_1^2Q_3 + (h_2-h_1)Q_4 - 2L_1V + \epsilon_k(E^A_k)^TE^A_k + 2N_1 + 2Q$, $\Omega^k_{1,2} = -N_1 + N_2^T$, $\Omega^k_{1,5} = P - A_k^TM_1^T$, $\Omega^k_{1,6} = S_1 + L_2V$, $\Omega^k_{1,12} = -N_1$,
$\Omega^k_{2,2} = -(1-\mu)Q_2 - 2L_1W - 2N_2 + 2N_3 - 2N_6$, $\Omega^k_{2,3} = -N_5^T + N_6$, $\Omega^k_{2,4} = -N_3 + N_4^T$, $\Omega^k_{2,7} = -(1-\mu)S_2 + L_2W$, $\Omega^k_{2,12} = -N_2$, $\Omega^k_{2,13} = -N_3$, $\Omega^k_{2,14} = -N_6$,
$\Omega^k_{3,3} = -Q_1 + Q_2 + 2N_5$, $\Omega^k_{3,8} = -S_1 + S_2$, $\Omega^k_{3,14} = -N_5$,
$\Omega^k_{4,4} = -2N_4$, $\Omega^k_{4,13} = -N_4$,
$\Omega^k_{5,5} = h_2R_1 + (h_2-h_1)R_2 - 2M_1$, $\Omega^k_{5,6} = M_1W_{1k}$, $\Omega^k_{5,7} = M_1W_{2k}$, $\Omega^k_{5,15} = M_1W_{3k}$,
$\Omega^k_{6,6} = Y_1 + Z_1 + r^2Z_2 - 2V + \epsilon_k(E^{W_1}_k)^TE^{W_1}_k$,
$\Omega^k_{7,7} = -(1-\mu)Y_2 - 2W + \epsilon_k(E^{W_2}_k)^TE^{W_2}_k$,
$\Omega^k_{8,8} = -Y_1 + Y_2$, $\Omega^k_{9,9} = -Z_1$, $\Omega^k_{10,10} = -Q_3$, $\Omega^k_{11,11} = -\frac{1}{h_2-h_1}Q_4$, $\Omega^k_{12,12} = -\frac{1}{h_2}R_1$, $\Omega^k_{13,13} = -\frac{1}{h_2-h_1}(R_1+R_2)$, $\Omega^k_{14,14} = -\frac{1}{h_2-h_1}R_2$, $\Omega^k_{15,15} = -Z_2 + \epsilon_k(E^{W_3}_k)^TE^{W_3}_k$,
all remaining entries $\Omega^k_{i,j}$ $(i \le j)$ being zero, and
$$\mathcal{M} = [0\ \ 0\ \ 0\ \ 0\ \ M_1\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0\ \ 0]^T,$$
then the neural network (6) is globally dissipative, and
$$S = \Big\{x : \|x\| \le \frac{\|M_1u\|}{\lambda_{\min}(Q)}\Big\}$$
is a positive invariant and globally attractive set.
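The block form of condition (7) is a Schur complement (Lemma 2.6) of the uncertainty term $\epsilon_k^{-1}\mathcal{M}G_kG_k^T\mathcal{M}^T$. The following sketch is only illustrative: it uses a small random negative definite stand-in for $\Omega^k$ rather than the actual 15 × 15 matrix of the theorem, and simply checks the equivalence numerically.

```python
# Illustrative check that [Omega, M@G; (M@G).T, -eps*I] < 0  iff
# Omega + (1/eps) * (M@G) @ (M@G).T < 0  (Schur complement, Lemma 2.6).
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 2
X = rng.standard_normal((n, n))
Omega = -(X @ X.T) - 5.0 * np.eye(n)       # a negative definite stand-in
MG = 0.1 * rng.standard_normal((n, m))     # plays the role of M @ G_k
eps = 2.0

big = np.block([[Omega, MG],
                [MG.T, -eps * np.eye(m)]])
reduced = Omega + (1.0 / eps) * MG @ MG.T

is_nd = lambda S: np.max(np.linalg.eigvalsh((S + S.T) / 2)) < 0
print(is_nd(big), is_nd(reduced))          # both True or both False
```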


Proof. Consider the following positive radially unbounded Lyapunov–Krasovskii functional candidate for model (6):
$$V(t) = V_1(t) + V_2(t) + V_3(t) + V_4(t) + V_5(t),$$
where
$$V_1(t) = x^T(t)Px(t),$$
$$V_2(t) = \int_{t-h_1}^{t}\begin{bmatrix} x(s) \\ f(x(s)) \end{bmatrix}^T\begin{bmatrix} Q_1 & S_1 \\ S_1^T & Y_1 \end{bmatrix}\begin{bmatrix} x(s) \\ f(x(s)) \end{bmatrix}ds + \int_{t-\tau(t)}^{t-h_1}\begin{bmatrix} x(s) \\ f(x(s)) \end{bmatrix}^T\begin{bmatrix} Q_2 & S_2 \\ S_2^T & Y_2 \end{bmatrix}\begin{bmatrix} x(s) \\ f(x(s)) \end{bmatrix}ds,$$
$$V_3(t) = h_1\int_{-h_1}^{0}\int_{t+\theta}^{t} x^T(s)Q_3x(s)\,ds\,d\theta + \int_{-h_2}^{-h_1}\int_{t+\theta}^{t} x^T(s)Q_4x(s)\,ds\,d\theta,$$
$$V_4(t) = \int_{-h_2}^{0}\int_{t+\theta}^{t}\dot{x}^T(s)R_1\dot{x}(s)\,ds\,d\theta + \int_{-h_2}^{-h_1}\int_{t+\theta}^{t}\dot{x}^T(s)R_2\dot{x}(s)\,ds\,d\theta,$$
$$V_5(t) = \int_{t-r}^{t} f^T(x(s))Z_1f(x(s))\,ds + r\int_{-r}^{0}\int_{t+\theta}^{t} f^T(x(s))Z_2f(x(s))\,ds\,d\theta.$$

Calculating the time derivative of $V(t)$ along the trajectories of (6) and using Lemma 2.5, we have
$$\frac{dV_1(t)}{dt} = 2x^T(t)P\dot{x}(t), \qquad (8)$$
$$\frac{dV_2(t)}{dt} \le x^T(t)Q_1x(t) - x^T(t-h_1)Q_1x(t-h_1) + 2x^T(t)S_1f(x(t)) - 2x^T(t-h_1)S_1f(x(t-h_1)) + f^T(x(t))Y_1f(x(t)) - f^T(x(t-h_1))Y_1f(x(t-h_1)) + x^T(t-h_1)Q_2x(t-h_1) - (1-\mu)x^T(t-\tau(t))Q_2x(t-\tau(t)) + 2x^T(t-h_1)S_2f(x(t-h_1)) - 2(1-\mu)x^T(t-\tau(t))S_2f(x(t-\tau(t))) + f^T(x(t-h_1))Y_2f(x(t-h_1)) - (1-\mu)f^T(x(t-\tau(t)))Y_2f(x(t-\tau(t))), \qquad (9)$$
$$\frac{dV_3(t)}{dt} \le h_1^2x^T(t)Q_3x(t) - \Big(\int_{t-h_1}^{t}x(s)ds\Big)^TQ_3\Big(\int_{t-h_1}^{t}x(s)ds\Big) + (h_2-h_1)x^T(t)Q_4x(t) - \frac{1}{h_2-h_1}\Big(\int_{t-h_2}^{t-h_1}x(s)ds\Big)^TQ_4\Big(\int_{t-h_2}^{t-h_1}x(s)ds\Big), \qquad (10)$$
$$\frac{dV_4(t)}{dt} \le h_2\dot{x}^T(t)R_1\dot{x}(t) + (h_2-h_1)\dot{x}^T(t)R_2\dot{x}(t) - \frac{1}{h_2-h_1}\Big(\int_{t-h_2}^{t-\tau(t)}\dot{x}(s)ds\Big)^T(R_1+R_2)\Big(\int_{t-h_2}^{t-\tau(t)}\dot{x}(s)ds\Big) - \frac{1}{h_2}\Big(\int_{t-\tau(t)}^{t}\dot{x}(s)ds\Big)^TR_1\Big(\int_{t-\tau(t)}^{t}\dot{x}(s)ds\Big) - \frac{1}{h_2-h_1}\Big(\int_{t-\tau(t)}^{t-h_1}\dot{x}(s)ds\Big)^TR_2\Big(\int_{t-\tau(t)}^{t-h_1}\dot{x}(s)ds\Big), \qquad (11)$$
$$\frac{dV_5(t)}{dt} \le f^T(x(t))Z_1f(x(t)) - f^T(x(t-r))Z_1f(x(t-r)) + r^2f^T(x(t))Z_2f(x(t)) - \Big(\int_{t-r(t)}^{t}f(x(s))ds\Big)^TZ_2\Big(\int_{t-r(t)}^{t}f(x(s))ds\Big). \qquad (12)$$

From Assumption (H2), we have
$$\big(f_i(x_i(t)) - l_i^-x_i(t)\big)\big(f_i(x_i(t)) - l_i^+x_i(t)\big) \le 0, \quad i = 1,2,\ldots,n,$$
which is equivalent to
$$\begin{bmatrix} x(t) \\ f(x(t)) \end{bmatrix}^T\begin{bmatrix} l_i^-l_i^+e_ie_i^T & -\frac{l_i^-+l_i^+}{2}e_ie_i^T \\ -\frac{l_i^-+l_i^+}{2}e_ie_i^T & e_ie_i^T \end{bmatrix}\begin{bmatrix} x(t) \\ f(x(t)) \end{bmatrix} \le 0, \quad i = 1,2,\ldots,n,$$
where $e_i$ denotes the unit column vector having a one in its $i$th entry and zeros elsewhere. Let $V = \mathrm{diag}\{v_1,v_2,\ldots,v_n\}$ and $W = \mathrm{diag}\{w_1,w_2,\ldots,w_n\}$; then
$$2\begin{bmatrix} x(t) \\ f(x(t)) \end{bmatrix}^T\begin{bmatrix} L_1V & -L_2V \\ -L_2V & V \end{bmatrix}\begin{bmatrix} x(t) \\ f(x(t)) \end{bmatrix} \le 0, \qquad (13)$$
where $L_1 = \mathrm{diag}\{l_1^-l_1^+,\ldots,l_n^-l_n^+\}$ and $L_2 = \mathrm{diag}\{\frac{l_1^-+l_1^+}{2},\ldots,\frac{l_n^-+l_n^+}{2}\}$ are of appropriate dimensions. Similarly,
$$2\begin{bmatrix} x(t-\tau(t)) \\ f(x(t-\tau(t))) \end{bmatrix}^T\begin{bmatrix} L_1W & -L_2W \\ -L_2W & W \end{bmatrix}\begin{bmatrix} x(t-\tau(t)) \\ f(x(t-\tau(t))) \end{bmatrix} \le 0. \qquad (14)$$

Moreover, the following equalities hold for any matrices $M_1$ and $N_1,\ldots,N_6$ with appropriate dimensions:
$$0 = 2\dot{x}^T(t)M_1\Big(-\dot{x}(t) - (A+\Delta A(t))x(t) + (W_1+\Delta W_1(t))f(x(t)) + (W_2+\Delta W_2(t))f(x(t-\tau(t))) + (W_3+\Delta W_3(t))\int_{t-r(t)}^{t}f(x(s))ds + u\Big), \qquad (15)$$
$$0 = 2\big(x^T(t)N_1 + x^T(t-\tau(t))N_2\big)\Big(x(t) - x(t-\tau(t)) - \int_{t-\tau(t)}^{t}\dot{x}(s)ds\Big), \qquad (16)$$
$$0 = 2\big(x^T(t-\tau(t))N_3 + x^T(t-h_2)N_4\big)\Big(x(t-\tau(t)) - x(t-h_2) - \int_{t-h_2}^{t-\tau(t)}\dot{x}(s)ds\Big), \qquad (17)$$
$$0 = 2\big(x^T(t-h_1)N_5 + x^T(t-\tau(t))N_6\big)\Big(x(t-h_1) - x(t-\tau(t)) - \int_{t-\tau(t)}^{t-h_1}\dot{x}(s)ds\Big). \qquad (18)$$

It follows from inequalities (8)–(14) and from using Assumption (H3) and Lemma 2.4 in (15)–(18) that, after adding and subtracting $2x^T(t)Qx(t) + 2\dot{x}^T(t)M_1u$ and bounding the uncertain terms arising from (15) by Lemma 2.4(i) — which introduces the terms $\epsilon_k^{-1}\dot{x}^T(t)M_1G_kG_k^TM_1^T\dot{x}(t)$, $\epsilon_kx^T(t)(E^A_k)^TE^A_kx(t)$, $\epsilon_kf^T(x(t))(E^{W_1}_k)^TE^{W_1}_kf(x(t))$, $\epsilon_kf^T(x(t-\tau(t)))(E^{W_2}_k)^TE^{W_2}_kf(x(t-\tau(t)))$ and $\epsilon_k\big(\int_{t-r(t)}^{t}f(x(s))ds\big)^T(E^{W_3}_k)^TE^{W_3}_k\big(\int_{t-r(t)}^{t}f(x(s))ds\big)$ —
$$\frac{dV(t)}{dt} \le -2\big(x^T(t)Qx(t) + \dot{x}^T(t)M_1u\big) + \xi^T(t)\Upsilon\xi(t), \qquad (19)$$

where
$$\xi(t) = \Big[x^T(t),\ x^T(t-\tau(t)),\ x^T(t-h_1),\ x^T(t-h_2),\ \dot{x}^T(t),\ f^T(x(t)),\ f^T(x(t-\tau(t))),\ f^T(x(t-h_1)),\ f^T(x(t-r)),\ \Big(\int_{t-h_1}^{t}x(s)ds\Big)^T,\ \Big(\int_{t-h_2}^{t-h_1}x(s)ds\Big)^T,\ \Big(\int_{t-\tau(t)}^{t}\dot{x}(s)ds\Big)^T,\ \Big(\int_{t-h_2}^{t-\tau(t)}\dot{x}(s)ds\Big)^T,\ \Big(\int_{t-\tau(t)}^{t-h_1}\dot{x}(s)ds\Big)^T,\ \Big(\int_{t-r(t)}^{t}f(x(s))ds\Big)^T\Big]^T$$
and
$$\Upsilon = \sum_{k=1}^{r}\omega_k(\theta(t))\Upsilon_k, \qquad \Upsilon_k = \Omega^k + \mathrm{diag}\big\{0,0,0,0,\ \epsilon_k^{-1}M_1G_kG_k^TM_1^T,\ 0,0,0,0,0,0,0,0,0,0\big\}.$$

Considering $\omega_k(\theta(t)) \ge 0$ and $\Xi_k < 0$ $(k = 1,2,\ldots,r)$ in Theorem 3.1, we have $\sum_{k=1}^{r}\omega_k(\theta(t))\Xi_k < 0$. Noting that $\sum_{k=1}^{r}\omega_k(\theta(t)) = 1$, we obtain $\Upsilon < 0$ by using Lemma 2.6. Thus, from condition (7) and inequality (19), we get
$$\frac{dV(t)}{dt} \le -2\big(x^T(t)Qx(t) + \dot{x}^T(t)M_1u\big) \le -2\|x\|\big(\lambda_{\min}(Q)\|x\| - \|M_1u\|\big) < 0$$
when $x \in \mathbb{R}^n\setminus S$. Therefore, the neural network (6) is a globally dissipative system, and the set $S$ is a positive invariant and globally attractive set when the LMIs (7) hold. This completes the proof.

Remark 3.2. In this paper, the integral term $-\int_{t-h_2}^{t}\dot{x}^T(s)R_1\dot{x}(s)ds$ is divided into the two parts $-\int_{t-h_2}^{t-\tau(t)}\dot{x}^T(s)R_1\dot{x}(s)ds$ and $-\int_{t-\tau(t)}^{t}\dot{x}^T(s)R_1\dot{x}(s)ds$, and the integral term $-\int_{t-h_2}^{t-h_1}\dot{x}^T(s)R_2\dot{x}(s)ds$ is divided into the two parts $-\int_{t-h_2}^{t-\tau(t)}\dot{x}^T(s)R_2\dot{x}(s)ds$ and $-\int_{t-\tau(t)}^{t-h_1}\dot{x}^T(s)R_2\dot{x}(s)ds$, which may lead to less conservative results.

In the following, we consider the neural network without fuzzy rules and uncertainties; then (5) becomes
$$\frac{dx(t)}{dt} = -Ax(t) + W_1f(x(t)) + W_2f(x(t-\tau(t))) + W_3\int_{t-r(t)}^{t}f(x(s))ds + u. \qquad (20)$$

Corollary 3.3. Suppose that (H1)–(H3) hold. If there exist symmetric positive definite matrices $P > 0$, $Q_i > 0$ $(i = 1,2,3,4)$, $R_j, Y_j, Z_j > 0$ $(j = 1,2)$ and $Q > 0$, positive diagonal matrices $V > 0$ and $W > 0$, and matrices $S_1$, $S_2$, $M_1$ and $N_l$ $(l = 1,\ldots,6)$ such that the LMI
$$\Omega < 0 \qquad (21)$$
holds, where $\Omega = (\Omega_{i,j})_{15\times 15}$ is defined as in Theorem 3.1, then the neural network (20) is globally dissipative and $S = \{x : \|x\| \le \|M_1u\|/\lambda_{\min}(Q)\}$ is a positive invariant and globally attractive set.

In the following theorem, we extend the above result to global exponential dissipativity of the neural network (6).

Theorem 3.4. Under the conditions of Theorem 3.1, the neural network (6) is globally exponentially dissipative and $S = \{x : \|x\| \le \|M_1u\|/\lambda_{\min}(Q)\}$ is a positive invariant and globally attractive set. Further, the exponential dissipativity rate index $\kappa = \varepsilon/2$ can be estimated from the following LMIs:
$$\Xi_k = \begin{bmatrix}\Psi^k & \mathcal{M}G_k \\ * & -\epsilon_kI\end{bmatrix} < 0, \qquad k = 1,2,\ldots,r, \qquad (22)$$
where $\Psi^k = (\Psi^k_{i,j})_{15\times 15}$ with


$\Psi^k_{1,1} = \varepsilon P + e^{\varepsilon h_1}Q_1 + h_1^2e^{\varepsilon h_1}Q_3 + (h_2-h_1)e^{\varepsilon h_2}Q_4 - 2L_1V + \epsilon_k(E^A_k)^TE^A_k + 2N_1 + 2Q$, $\Psi^k_{1,2} = -N_1 + N_2^T$, $\Psi^k_{1,5} = P - A_k^TM_1^T$, $\Psi^k_{1,6} = e^{\varepsilon h_1}S_1 + L_2V$, $\Psi^k_{1,12} = -N_1$,
$\Psi^k_{2,2} = -(1-\mu)Q_2 - 2L_1W - 2N_2 + 2N_3 - 2N_6$, $\Psi^k_{2,3} = -N_5^T + N_6$, $\Psi^k_{2,4} = -N_3 + N_4^T$, $\Psi^k_{2,7} = -(1-\mu)S_2 + L_2W$, $\Psi^k_{2,12} = -N_2$, $\Psi^k_{2,13} = -N_3$, $\Psi^k_{2,14} = -N_6$,
$\Psi^k_{3,3} = -Q_1 + e^{\varepsilon(h_2-h_1)}Q_2 + 2N_5$, $\Psi^k_{3,8} = -S_1 + e^{\varepsilon(h_2-h_1)}S_2$, $\Psi^k_{3,14} = -N_5$,
$\Psi^k_{4,4} = -2N_4$, $\Psi^k_{4,13} = -N_4$,
$\Psi^k_{5,5} = h_2e^{\varepsilon h_2}R_1 + (h_2-h_1)e^{\varepsilon h_2}R_2 - 2M_1$, $\Psi^k_{5,6} = M_1W_{1k}$, $\Psi^k_{5,7} = M_1W_{2k}$, $\Psi^k_{5,15} = M_1W_{3k}$,
$\Psi^k_{6,6} = e^{\varepsilon h_1}Y_1 + e^{\varepsilon r}Z_1 + r^2e^{\varepsilon r}Z_2 - 2V + \epsilon_k(E^{W_1}_k)^TE^{W_1}_k$,
$\Psi^k_{7,7} = -(1-\mu)Y_2 - 2W + \epsilon_k(E^{W_2}_k)^TE^{W_2}_k$,
$\Psi^k_{8,8} = -Y_1 + e^{\varepsilon(h_2-h_1)}Y_2$, $\Psi^k_{9,9} = -Z_1$, $\Psi^k_{10,10} = -Q_3$, $\Psi^k_{11,11} = -\frac{1}{h_2-h_1}Q_4$, $\Psi^k_{12,12} = -\frac{1}{h_2}R_1$, $\Psi^k_{13,13} = -\frac{1}{h_2-h_1}(R_1+R_2)$, $\Psi^k_{14,14} = -\frac{1}{h_2-h_1}R_2$, $\Psi^k_{15,15} = -Z_2 + \epsilon_k(E^{W_3}_k)^TE^{W_3}_k$,
all remaining entries being zero, and $\mathcal{M} = [0\ 0\ 0\ 0\ M_1\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0]^T$ as before.

Proof. Consider the following positive radially unbounded Lyapunov–Krasovskii functional candidate for model (6):
$$V(t) = V_1(t) + V_2(t) + V_3(t) + V_4(t) + V_5(t),$$
where
$$V_1(t) = e^{\varepsilon t}x^T(t)Px(t),$$
$$V_2(t) = \int_{t-h_1}^{t}e^{\varepsilon(s+h_1)}\begin{bmatrix}x(s)\\ f(x(s))\end{bmatrix}^T\begin{bmatrix}Q_1 & S_1\\ S_1^T & Y_1\end{bmatrix}\begin{bmatrix}x(s)\\ f(x(s))\end{bmatrix}ds + \int_{t-\tau(t)}^{t-h_1}e^{\varepsilon(s+h_2)}\begin{bmatrix}x(s)\\ f(x(s))\end{bmatrix}^T\begin{bmatrix}Q_2 & S_2\\ S_2^T & Y_2\end{bmatrix}\begin{bmatrix}x(s)\\ f(x(s))\end{bmatrix}ds,$$
$$V_3(t) = h_1\int_{-h_1}^{0}\int_{t+\theta}^{t}e^{\varepsilon(s+h_1)}x^T(s)Q_3x(s)\,ds\,d\theta + \int_{-h_2}^{-h_1}\int_{t+\theta}^{t}e^{\varepsilon(s+h_2)}x^T(s)Q_4x(s)\,ds\,d\theta,$$
$$V_4(t) = \int_{-h_2}^{0}\int_{t+\theta}^{t}e^{\varepsilon(s+h_2)}\dot{x}^T(s)R_1\dot{x}(s)\,ds\,d\theta + \int_{-h_2}^{-h_1}\int_{t+\theta}^{t}e^{\varepsilon(s+h_2)}\dot{x}^T(s)R_2\dot{x}(s)\,ds\,d\theta,$$
$$V_5(t) = \int_{t-r}^{t}e^{\varepsilon(s+r)}f^T(x(s))Z_1f(x(s))\,ds + r\int_{-r}^{0}\int_{t+\theta}^{t}e^{\varepsilon(s+r)}f^T(x(s))Z_2f(x(s))\,ds\,d\theta.$$

Calculating the time derivative of $V(t)$, we have
$$\frac{dV_1(t)}{dt} = \varepsilon e^{\varepsilon t}x^T(t)Px(t) + 2e^{\varepsilon t}x^T(t)P\dot{x}(t), \qquad (23)$$
$$\frac{dV_2(t)}{dt} \le e^{\varepsilon(t+h_1)}x^T(t)Q_1x(t) - e^{\varepsilon t}x^T(t-h_1)Q_1x(t-h_1) + 2e^{\varepsilon(t+h_1)}x^T(t)S_1f(x(t)) - 2e^{\varepsilon t}x^T(t-h_1)S_1f(x(t-h_1)) + e^{\varepsilon(t+h_1)}f^T(x(t))Y_1f(x(t)) - e^{\varepsilon t}f^T(x(t-h_1))Y_1f(x(t-h_1)) + e^{\varepsilon(t+h_2-h_1)}x^T(t-h_1)Q_2x(t-h_1) - (1-\mu)e^{\varepsilon t}x^T(t-\tau(t))Q_2x(t-\tau(t)) + 2e^{\varepsilon(t+h_2-h_1)}x^T(t-h_1)S_2f(x(t-h_1)) - 2(1-\mu)e^{\varepsilon t}x^T(t-\tau(t))S_2f(x(t-\tau(t))) + e^{\varepsilon(t+h_2-h_1)}f^T(x(t-h_1))Y_2f(x(t-h_1)) - (1-\mu)e^{\varepsilon t}f^T(x(t-\tau(t)))Y_2f(x(t-\tau(t))), \qquad (24)$$
$$\frac{dV_3(t)}{dt} \le h_1^2e^{\varepsilon(t+h_1)}x^T(t)Q_3x(t) - e^{\varepsilon t}\Big(\int_{t-h_1}^{t}x(s)ds\Big)^TQ_3\Big(\int_{t-h_1}^{t}x(s)ds\Big) + (h_2-h_1)e^{\varepsilon(t+h_2)}x^T(t)Q_4x(t) - \frac{e^{\varepsilon t}}{h_2-h_1}\Big(\int_{t-h_2}^{t-h_1}x(s)ds\Big)^TQ_4\Big(\int_{t-h_2}^{t-h_1}x(s)ds\Big), \qquad (25)$$
$$\frac{dV_4(t)}{dt} \le h_2e^{\varepsilon(t+h_2)}\dot{x}^T(t)R_1\dot{x}(t) + (h_2-h_1)e^{\varepsilon(t+h_2)}\dot{x}^T(t)R_2\dot{x}(t) - \frac{e^{\varepsilon t}}{h_2-h_1}\Big(\int_{t-h_2}^{t-\tau(t)}\dot{x}(s)ds\Big)^T(R_1+R_2)\Big(\int_{t-h_2}^{t-\tau(t)}\dot{x}(s)ds\Big) - \frac{e^{\varepsilon t}}{h_2}\Big(\int_{t-\tau(t)}^{t}\dot{x}(s)ds\Big)^TR_1\Big(\int_{t-\tau(t)}^{t}\dot{x}(s)ds\Big) - \frac{e^{\varepsilon t}}{h_2-h_1}\Big(\int_{t-\tau(t)}^{t-h_1}\dot{x}(s)ds\Big)^TR_2\Big(\int_{t-\tau(t)}^{t-h_1}\dot{x}(s)ds\Big), \qquad (26)$$
$$\frac{dV_5(t)}{dt} \le e^{\varepsilon(t+r)}f^T(x(t))Z_1f(x(t)) - e^{\varepsilon t}f^T(x(t-r))Z_1f(x(t-r)) + r^2e^{\varepsilon(t+r)}f^T(x(t))Z_2f(x(t)) - e^{\varepsilon t}\Big(\int_{t-r(t)}^{t}f(x(s))ds\Big)^TZ_2\Big(\int_{t-r(t)}^{t}f(x(s))ds\Big). \qquad (27)$$


It follows from inequalities (13), (14) and (23)–(27) and from using Assumption (H3) and Lemma 2.4 in (15)–(18) that
$$\frac{dV(t)}{dt} \le e^{\varepsilon t}\big[-2\big(x^T(t)Qx(t) + \dot{x}^T(t)M_1u\big) + \xi^T(t)\Theta\xi(t)\big],$$
where $\xi(t)$ is defined as in the proof of Theorem 3.1 and
$$\Theta = \sum_{k=1}^{r}\omega_k(\theta(t))\Theta_k, \qquad \Theta_k = \Psi^k + \mathrm{diag}\big\{0,0,0,0,\ \epsilon_k^{-1}M_1G_kG_k^TM_1^T,\ 0,0,0,0,0,0,0,0,0,0\big\}. \qquad (28)$$

Considering $\omega_k(\theta(t)) \ge 0$ and $\Xi_k < 0$ $(k = 1,2,\ldots,r)$ in Theorem 3.4, we have $\sum_{k=1}^{r}\omega_k(\theta(t))\Xi_k < 0$. Noting that $\sum_{k=1}^{r}\omega_k(\theta(t)) = 1$, we obtain $\Theta < 0$ by using Lemma 2.6. From inequality (28) we get
$$\frac{dV(t)}{dt} \le -2\big(x^T(t)Qx(t) + \dot{x}^T(t)M_1u\big) \le -2\|x\|\big(\lambda_{\min}(Q)\|x\| - \|M_1u\|\big) < 0 \qquad (29)$$
when $x \in \mathbb{R}^n\setminus S$. Integrating both sides of (29) from $0$ to an arbitrary $t > 0$, we have
$$V(x(t)) \le V(x(0)). \qquad (30)$$
From the definition of $V(x(t))$, we know that
$$V(x(t)) \ge e^{\varepsilon t}x^T(t)Px(t). \qquad (31)$$
From (30) and (31), we get
$$\|x\| \le \Big(\frac{V(x(0))}{\lambda_{\min}(P)}\Big)^{1/2}e^{-(\varepsilon/2)t},$$
which means that the neural network (6) is globally exponentially dissipative, and the set $S$ is a positive invariant and globally attractive set when the LMIs (22) hold. This completes the proof.

In the following, we consider the neural network without fuzzy rules and uncertainties; then (5) becomes
$$\frac{dx(t)}{dt} = -Ax(t) + W_1f(x(t)) + W_2f(x(t-\tau(t))) + W_3\int_{t-r(t)}^{t}f(x(s))ds + u. \qquad (32)$$

Corollary 3.5. Under the conditions of Corollary 3.3, the neural network (32) is globally exponentially dissipative and $S = \{x : \|x\| \le \|M_1u\|/\lambda_{\min}(Q)\}$ is a positive invariant and globally attractive set. Further, the exponential dissipativity rate index $\kappa = \varepsilon/2$ can be estimated from the LMI
$$\Psi < 0, \qquad (33)$$
where $\Psi = (\Psi_{i,j})_{15\times 15}$ is defined as in Theorem 3.4.

Remark 3.6. In Liao and Wang (2003), Arik (2004), Song and Zhao (2005), Cao et al. (2006), Masubuchi (2006), Huang et al. (2007), Lou and Cui (2008), Song and Cao (2008), Wang et al. (2009), Zhang et al. (2010) and Song and Cao (2010), the authors studied various types of neural networks, and only a few of them proposed results in an LMI framework. In this paper, the global robust dissipativity problem is solved for T–S fuzzy neural networks using an LMI approach. Therefore, our results are less conservative than those given in the previous literature.

4. Numerical examples

In this section, we give examples showing the effectiveness of the established theoretical results.

Example 1. Consider the system (6) with two fuzzy rules ($r = 2$), activation functions $f_1(x) = f_2(x) = \tanh(x)$, a constant external input $u$, and $2\times 2$ parameter and uncertainty-structure matrices $A_k$, $W_{1k}$, $W_{2k}$, $W_{3k}$, $G_k$, $E^A_k = E^{W_1}_k$ and $E^{W_2}_k = E^{W_3}_k$ $(k = 1,2)$. It is easy to check that Assumptions (H1) and (H3) are satisfied with $h_1 = 0.66$, $h_2 = 0.9928$, $r = 0.3$, $l_i^- = 0$ and $l_i^+ = 2$, so that
$$L_1 = \begin{bmatrix}0 & 0\\ 0 & 0\end{bmatrix}, \qquad L_2 = \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}.$$
By the MATLAB LMI Control Toolbox, a feasible solution to the LMI in Eq. (7) is found, with, in particular, $\epsilon_1 = 0.0246$ and $\epsilon_2 = 0.0370$. Therefore, the neural network (6) is globally dissipative, and the positive invariant and globally attractive set is $S = \{x : \|x\| \le 0.0175\}$. Figs. 1 and 2 depict the transient state responses of the considered network for the initial conditions $x_1(t) = 0.3$, $x_2(t) = -0.2$ and $x_1(t) = 0.4$, $x_2(t) = -0.3$, $t \in [-1, 0]$.

Fig. 1. The transient responses of time and state for k = 1 for Example 1.

Fig. 2. The transient responses of time and state for k = 2 for Example 1.
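Trajectories such as those in Figs. 1 and 2 can be reproduced with a simple fixed-step Euler scheme. The sketch below is a rough illustration only: the $2\times 2$ matrices, the input and the constant delays are assumed values, not the exact data of Example 1.

```python
# Hypothetical sketch: Euler simulation of a delayed neural network of the form (1).
import numpy as np

A  = np.diag([2.5, 2.0])                       # assumed parameters, for illustration
W1 = np.array([[-2.5, 0.6], [0.4, -2.5]])
W2 = np.array([[ 0.5, -0.4], [0.3,  0.8]])
W3 = np.array([[ 0.2, 0.05], [0.35, 0.8]])
u  = np.array([2.0, 0.0])
f  = np.tanh

dt, T, tau, r = 0.001, 2.0, 0.9, 0.3           # step, horizon, discrete/distributed delays
n_tau, n_r = int(tau / dt), int(r / dt)
steps = int(T / dt)

x = np.zeros((steps + 1, 2))
x[0] = [0.3, -0.2]                              # initial state (constant pre-history)
hist = lambda k: x[max(k, 0)]                   # history lookup with constant extension

for k in range(steps):
    distributed = dt * sum(f(hist(k - j)) for j in range(n_r))   # ~ int_{t-r}^{t} f(x(s)) ds
    dx = -A @ x[k] + W1 @ f(x[k]) + W2 @ f(hist(k - n_tau)) + W3 @ distributed + u
    x[k + 1] = x[k] + dt * dx

print(np.linalg.norm(x[-1]))                    # the trajectory settles into a bounded set
```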

Example 2. Consider the system (20) with $2\times 2$ parameter matrices $A$, $W_1$, $W_2$, $W_3$, a constant external input $u$, and activation functions $f_1(x) = f_2(x) = \tanh(x)$. It is easy to check that Assumptions (H1) and (H3) are satisfied with $h_1 = 0.8$, $h_2 = 1.0173$, $r = 0.3$, $l_i^- = 0$ and $l_i^+ = 2$, so that
$$L_1 = \begin{bmatrix}0 & 0\\ 0 & 0\end{bmatrix}, \qquad L_2 = \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}.$$
By the MATLAB LMI Control Toolbox, a feasible solution to the LMI in Eq. (21) is found. Therefore, the neural network (20) is globally dissipative, and the positive invariant and globally attractive set is $S = \{x : \|x\| \le 0.0028\}$. Fig. 3 depicts the transient state response of the considered network for the initial conditions $x_1(t) = 0.3$, $x_2(t) = -0.5$, $t \in [-1, 0]$.

Fig. 3. The transient responses of time and state for Example 2.
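The radii of the attractive sets reported in Examples 1 and 2 follow from the bound $S = \{x : \|x\| \le \|M_1u\|/\lambda_{\min}(Q)\}$ of Theorem 3.1; once the LMI solver returns feasible $M_1$ and $Q$, the radius is a one-line computation. The matrices below are placeholders, not the solver output of these examples.

```python
# Hypothetical illustration of the attractive-set radius ||M1 u|| / lambda_min(Q).
import numpy as np

M1 = np.array([[0.33, 0.02], [0.02, 0.35]])   # placeholder feasible matrices
Q  = np.array([[2.7, 1.4], [1.4, 2.6]])
u  = np.array([0.03, 0.04])                   # constant external input (assumed)

radius = np.linalg.norm(M1 @ u) / np.min(np.linalg.eigvalsh(Q))
print("S = { x : ||x|| <=", radius, "}")
```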

Example 3. Consider the system (6) with two fuzzy rules ($r = 2$), activation functions $f_1(x) = f_2(x) = \tanh(x)$, a constant external input $u$, and $2\times 2$ parameter and uncertainty-structure matrices $A_k$, $W_{1k}$, $W_{2k}$, $W_{3k}$, $G_k$, $E^A_k = E^{W_1}_k$ and $E^{W_2}_k = E^{W_3}_k$ $(k = 1,2)$. It is easy to check that Assumptions (H1) and (H3) are satisfied with $h_1 = 1$, $h_2 = 1.3953$, $r = 0.3$, $l_i^- = 0$ and $l_i^+ = 1.4$, so that
$$L_1 = \begin{bmatrix}0 & 0\\ 0 & 0\end{bmatrix}, \qquad L_2 = \begin{bmatrix}0.7 & 0\\ 0 & 0.7\end{bmatrix}.$$
By the MATLAB LMI Control Toolbox, a feasible solution to the LMI in Eq. (22) is found, with, in particular, $\epsilon_1 = 214.4287$ and $\epsilon_2 = 8.7415$. Therefore, the neural network (6) is globally exponentially dissipative. Moreover, from Eq. (22) we obtain the exponential dissipativity rate index $\kappa = \varepsilon/2 = 0.0131$, and it is easy to compute that the positive invariant and globally attractive set is $S = \{x : \|x\| \le 0.0053\}$. Figs. 4 and 5 depict the transient state responses of the considered network for the initial conditions $x_1(t) = 0.3$, $x_2(t) = -0.5$, $t \in [-1, 0]$.

Fig. 4. The transient responses of time and state for k = 1 for Example 3.

Fig. 5. The transient responses of time and state for k = 2 for Example 3.
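For the exponentially dissipative cases, the decay of the transient toward the attractive set can also be checked empirically by fitting a line to $\log\|x(t)\|$ over a transient window. The sketch below assumes a trajectory array x and step dt produced, for instance, by the Euler loop shown after Example 1, and gives only a rough numerical companion to the rate index $\kappa = \varepsilon/2$.

```python
# Rough empirical estimate of an exponential decay rate from a simulated trajectory.
# Assumes `x` (shape [steps+1, n]) and `dt` from the earlier simulation sketch.
import numpy as np

def estimate_rate(x, dt, t_start=0.5, t_end=2.0):
    k0, k1 = int(t_start / dt), int(t_end / dt)
    t = dt * np.arange(k0, k1)
    log_norm = np.log(np.linalg.norm(x[k0:k1], axis=1) + 1e-12)
    slope, _ = np.polyfit(t, log_norm, 1)      # log||x(t)|| ~ const + slope * t
    return -slope                               # positive value ~ empirical decay rate

# usage (hypothetical): rate = estimate_rate(x, 0.001); print(rate)
```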

Example 4. Consider the system (32) with $2\times 2$ parameter matrices $A$, $W_1$, $W_2$, $W_3$, a constant external input $u$, and activation functions $f_1(x) = f_2(x) = \tanh(x)$. It is easy to check that Assumptions (H1) and (H3) are satisfied with $h_1 = 0.7$, $h_2 = 1.0529$, $r = 0.3$, $l_i^- = 0$ and $l_i^+ = 1.2$, so that
$$L_1 = \begin{bmatrix}0 & 0\\ 0 & 0\end{bmatrix}, \qquad L_2 = \begin{bmatrix}0.6 & 0\\ 0 & 0.6\end{bmatrix}.$$
By the MATLAB LMI Control Toolbox, a feasible solution to the LMI in Eq. (33) is found. Therefore, the neural network (32) is globally exponentially dissipative. Moreover, from Eq. (33) we obtain the exponential dissipativity rate index $\kappa = \varepsilon/2 = 0.2261$, and it is easy to compute that the positive invariant and globally attractive set is $S = \{x : \|x\| \le 0.0255\}$. Fig. 6 depicts the transient state response of the considered network for the initial conditions $x_1(t) = 0.4$, $x_2(t) = -0.5$, $t \in [-1, 0]$. It should be pointed out that Theorems 1 and 2 and Corollaries 1 and 3 in Song and Cao (2008) cannot be applied to this example, since the inequalities in Song and Cao (2008) do not have feasible solutions.

Fig. 6. The transient responses of time and state for Example 4.

Remark 5. From the above examples, it can be seen that the proposed results in this paper improve and generalize those in Song and Cao (2008).

5. Conclusion

In this paper, we have studied the global robust dissipativity of T–S fuzzy neural networks with interval time-varying delays. By constructing a proper Lyapunov–Krasovskii functional and employing analytic techniques, sufficient conditions ensuring the global dissipativity and global exponential dissipativity of T–S fuzzy neural networks have been derived in terms of LMIs. The new results given in this paper improve the earlier dissipativity results. Numerical examples have been provided to demonstrate the effectiveness of the proposed results.

References

Arik, S. (2004). On the global dissipativity of dynamical neural networks with time delays. Physics Letters A, 326, 126–132.
Boyd, B., Ghoui, L. E., Feron, E., & Balakrishnan, V. (1994). Linear matrix inequalities in system and control theory. Philadelphia: SIAM.


Cao, Y. Y., & Frank, P. M. (2000). Analysis and synthesis of nonlinear time delay systems via fuzzy control approach. IEEE Transactions on Fuzzy Systems, 8, 200–211.
Cao, Y. Y., & Frank, P. M. (2001). Stability analysis and synthesis of nonlinear time-delay systems via linear Takagi–Sugeno fuzzy models. IEEE Transactions on Fuzzy Systems, 124, 213–229.
Cao, J., Yuan, K., Ho, D. W. C., & Lam, J. (2006). Global point dissipativity of neural networks with mixed time-varying delays. Chaos, 16, 013105.
Feng, W., Yang, S. X., & Wu, H. (2009). On robust stability of uncertain stochastic neural networks with distributed and interval time-varying delays. Chaos, Solitons and Fractals, 42, 2095–2104.
Gu, K. (1994). Integral inequality in the stability problem of time-delay systems. In Proceedings of the 39th IEEE CDC, Sydney, Philadelphia.
Hale, J. K. (1988). Asymptotic behavior of dissipative systems. Mathematical Surveys and Monographs (Vol. 25). Providence, RI, USA: American Mathematical Society.
Haykin, S. (1998). Neural networks: A comprehensive foundation. NJ: Prentice Hall.
Hill, D. J., & Moylan, P. J. (1976). Stability of nonlinear dissipative systems. IEEE Transactions on Automatic Control, 21, 708–711.
Huang, T. (2006). Exponential stability of fuzzy cellular neural networks with distributed delay. Physics Letters A, 351, 48–52.
Huang, Y., Xu, D., & Yang, Z. (2007). Dissipativity and periodic attractor for non-autonomous neural networks with time-varying delays. Neurocomputing, 70, 16–18.
Li, X. (2010a). Existence and global exponential stability of periodic solution for delayed neural networks with impulsive and stochastic effects. Neurocomputing, 73, 749–758.
Li, X. (2010b). Global robust stability for stochastic interval neural networks with continuously distributed delays of neutral type. Applied Mathematics and Computation, 215, 4370–4384.
Liao, X., & Wang, J. (2003). Global dissipativity of continuous-time recurrent neural networks with time delay. Physical Review E, 68, 1–7.
Liu, Y., & Tang, W. (2004). Exponential stability of fuzzy cellular neural networks with constant and time-varying delays. Physics Letters A, 323, 224–233.
Lou, X. Y., & Cui, B. T. (2008). Global robust dissipativity for integro-differential systems modeling neural networks with delays. Chaos, Solitons and Fractals, 36, 469–478.
Masubuchi, I. (2006). Dissipativity inequalities for continuous-time descriptor systems with applications to synthesis of control gains. Systems & Control Letters, 55, 158–164.
Rakkiyappan, R., & Balasubramaniam, P. (2010). Delay-probability-distribution-dependent stability of uncertain stochastic genetic regulatory networks with mixed time-varying delays: An LMI approach. Nonlinear Analysis: Hybrid Systems, 4, 600–607.
Song, Q., & Cao, J. (2008). Global dissipativity analysis on uncertain neural networks with mixed time-varying delays. Chaos, 18, 043126.
Song, Q., & Cao, J. (2010). Global dissipativity on uncertain discrete-time neural networks with time-varying delays. Discrete Dynamics in Nature and Society. doi:10.1155/2010/810408.
Song, Q., & Zhao, Z. (2005). Global dissipativity of neural networks with both variable and unbounded delays. Chaos, Solitons and Fractals, 25, 393–401.
Takagi, T., & Sugeno, M. (1985). Fuzzy identification of systems and its applications to modeling and control. IEEE Transactions on Systems, Man and Cybernetics, 15, 116–132.
Tan, Z. Q., Soh, Y. C., & Xie, L. H. (1999). Dissipative control for linear discrete-time systems. Automatica, 35, 1557–1564.
Wang, G., Cao, J., & Wang, L. (2009). Global dissipativity of stochastic neural networks with time delay. Journal of the Franklin Institute, 346, 794–807.
Willems, J. C. (1972). Dissipative dynamical systems – Part 1: General theory. Archive for Rational Mechanics and Analysis, 45, 321–351.
Zhang, H., Yan, H., & Chen, Q. (2010). Stability and dissipative analysis for a class of stochastic system with time-delay. Journal of the Franklin Institute, 347, 882–893.