Robust passivity analysis of fuzzy Cohen–Grossberg BAM neural networks with time-varying delays

Applied Mathematics and Computation 218 (2011) 3799–3809
journal homepage: www.elsevier.com/locate/amc
doi:10.1016/j.amc.2011.09.024

R. Sakthivel a,*, A. Arunkumar b, K. Mathiyalagan b, S. Marshal Anthoni b

a Department of Mathematics, Sungkyunkwan University, Suwon 440-746, South Korea
b Department of Mathematics, Anna University of Technology, Coimbatore 641 047, India
* Corresponding author. E-mail address: [email protected] (R. Sakthivel).

Keywords: Fuzzy Cohen–Grossberg BAM neural networks; Passivity analysis; Linear matrix inequality; Delay fractioning technique

Abstract: This paper is concerned with the problem of passivity analysis for a class of fuzzy Cohen–Grossberg bidirectional associative memory (BAM) neural networks with time-varying delays. By employing the delay fractioning technique and a linear matrix inequality (LMI) optimization approach, delay-dependent passivity criteria are established that guarantee the passivity of fuzzy Cohen–Grossberg BAM neural networks with uncertainties. The passivity conditions are expressed in terms of LMIs, which can be solved efficiently by standard convex optimization algorithms. Finally, a numerical example is given to illustrate the effectiveness of the proposed results. © 2011 Elsevier Inc. All rights reserved.

1. Introduction

Neural networks have found a large number of successful applications in various fields of science and engineering. The Cohen–Grossberg neural network model was first proposed by Cohen and Grossberg [5] in 1983, and many researchers have done extensive work on this subject because of its important applications in fields such as pattern recognition, parallel computing, associative memory, image processing and optimization. Bidirectional associative memory (BAM) neural networks are a special class of recurrent neural networks that can store bipolar vector pairs; they are formed by neurons arranged in two layers, the x-layer and the y-layer [18,20,26]. The neurons in one layer are fully interconnected to the neurons in the other layer, while there are no interconnections among neurons within the same layer.

In reality, time delays often occur due to the finite switching speeds of amplifiers and communication time, and it is observed both experimentally and numerically that time delays in neural networks may induce instability. Neural networks with time delays have therefore become a topic of active research, and many stability results for delayed neural networks have been reported in [11,19,21,22]. On the other hand, because of modeling errors, external disturbances or parameter fluctuations during physical implementation, uncertainty is unavoidable and may affect the stability of the whole system; the presence of parameter uncertainties often breaks the stability of dynamical systems. Hence, stability analysis for neural networks with uncertainties has been widely investigated [9,27]. Sheng et al. [24] studied the global robust stability problem for TS fuzzy Hopfield neural networks with parameter uncertainties and stochastic perturbations by using the Lyapunov method and a stochastic analysis approach.

Besides delay effects, the mathematical modeling of real-world problems involves other difficulties, for example complexity and uncertainty or vagueness. Fuzzy theory provides a natural setting for taking such vagueness into account. Fuzzy systems in the form of the Takagi–Sugeno (TS) model [28] have attracted rapidly growing interest in recent years. TS fuzzy systems are nonlinear systems described by a set of IF–THEN


rules. Some nonlinear dynamic systems can be approximated by an overall fuzzy linear TS model for the purpose of dynamical analysis. Sathy and Balasubramaniam [23] studied the asymptotic stability of uncertain fuzzy Markovian jumping Cohen–Grossberg BAM neural networks with discrete and distributed time-varying delays by using the LMI approach. Li et al. [12] established a new set of sufficient conditions for a class of fuzzy Cohen–Grossberg neural networks with time delays by employing an inequality technique and M-matrix theory.

Passivity theory, first proposed in circuit analysis [2], has found applications in diverse areas such as stability, complexity, signal processing, chaos control and synchronization, and fuzzy control. The passivity framework is a promising approach to the stability analysis of delayed neural networks, because it can lead to general conclusions on stability. Recently, the passivity analysis problem has been investigated for continuous-time neural networks with time-varying delays [4,6,8,14,15,17,25]. Chen et al. [3] discussed the passivity of stochastic neural networks for both delay-independent and delay-dependent cases by using a Lyapunov–Krasovskii functional and the LMI technique. Very recently, Kwon et al. [10] studied passivity analysis for uncertain neural networks with time-varying delay by constructing new augmented Lyapunov–Krasovskii functionals. When studying passivity or stability, a main issue is how to reduce the conservatism induced by the choice of Lyapunov–Krasovskii functional when dealing with time delays. Delay partitioning, fractioning or decomposition techniques have therefore attracted increasing interest, since they yield much less conservative results when dynamical behavior is studied via the LMI approach [16,29,31]. Wang et al. [30] studied the robust stability problem for a class of uncertain genetic regulatory networks with and without noise perturbations by employing a novel Lyapunov–Krasovskii functional and the delay fractioning technique. A novel Lyapunov functional based on a delay fractioning approach was constructed in [29] for obtaining new synchronization criteria. Hu et al. [7] derived exponential stability criteria for BAM neural networks by treating the time delays of the two layers separately rather than as a whole, using the idea of delay fractioning. Li et al. [13] studied passivity for neural networks with discrete and distributed delays by constructing a novel Lyapunov functional based on delay fractioning, with the criteria expressed as convex optimization problems.

However, the passivity analysis of uncertain fuzzy Cohen–Grossberg BAM neural networks with time-varying delays has not yet been investigated. It is therefore necessary and important to obtain new passivity conditions for this class of networks. Motivated by the above discussion, the objective of this paper is to study the passivity of a class of fuzzy Cohen–Grossberg BAM neural networks with time-varying delays and parameter uncertainties. The passivity conditions are obtained by using a new Lyapunov–Krasovskii functional built on the delay fractioning idea together with a free-weighting-matrix approach.
The resulting less conservative passivity conditions are expressed in terms of linear matrix inequalities, which can easily be solved numerically, for example with the Matlab LMI control toolbox. Finally, we provide a numerical example with simulation results to demonstrate the effectiveness of the proposed method.

2. Problem formulation and preliminaries

In this section, we introduce some definitions, notation and basic results that will be used throughout the paper. The superscripts T and −1 stand for matrix transposition and matrix inversion, respectively; ℝⁿˣⁿ denotes the set of n × n real matrices; the notation P > 0 means that P is real, symmetric and positive definite; I and 0 denote the identity matrix and the zero matrix of compatible dimensions; diag{·} stands for a block-diagonal matrix; an asterisk (∗) represents a term that is induced by symmetry; and sym(A) is defined as A + Aᵀ. Matrices not explicitly described are assumed to have dimensions compatible with the indicated matrix multiplications. Consider the following uncertain Cohen–Grossberg BAM neural network with time-varying delays:

$$
\begin{aligned}
\dot u_{1i}(t) &= -a_{1i}(u_{1i}(t))\Big[b_{1i}(u_{1i}(t)) - \sum_{j=1}^{n}(c_{1ji}+\Delta c_{1ji})f_j(u_{2j}(t)) - \sum_{j=1}^{n}(d_{1ji}+\Delta d_{1ji})f_j(u_{2j}(t-\rho_j(t))) - J_{1i}(t)\Big],\\
\dot u_{2j}(t) &= -a_{2j}(u_{2j}(t))\Big[b_{2j}(u_{2j}(t)) - \sum_{i=1}^{m}(c_{2ij}+\Delta c_{2ij})g_i(u_{1i}(t)) - \sum_{i=1}^{m}(d_{2ij}+\Delta d_{2ij})g_i(u_{1i}(t-\tau_i(t))) - J_{2j}(t)\Big],\\
u_{1i}(s) &= \phi_{1i}(s),\quad s\in[-\tau_i,0],\ i=1,2,\dots,n,\\
u_{2j}(s) &= \phi_{2j}(s),\quad s\in[-\rho_j,0],\ j=1,2,\dots,n,
\end{aligned}\tag{1}
$$

where u₁ᵢ and u₂ⱼ are the activations of the ith and jth neurons; gᵢ(·) and fⱼ(·) stand for the signal functions of the ith and jth neurons; a₁ᵢ(u₁ᵢ(t)) and a₂ⱼ(u₂ⱼ(t)) are positive amplification functions representing the rate at which cells i and j reset their potential to the resting state when isolated from the other cells and inputs; b₁ᵢ(u₁ᵢ(t)) and b₂ⱼ(u₂ⱼ(t)) are the well-behaved functions; c₁ⱼᵢ, c₂ᵢⱼ, d₁ⱼᵢ and d₂ᵢⱼ are the synaptic connection weights; J₁ᵢ(t) and J₂ⱼ(t) represent the external inputs; and the bounded functions τ(t) and ρ(t) represent the unknown delays of the system and satisfy

$$0 \le h_1 \le \tau(t) \le h_2,\quad \dot\tau(t)\le\mu_1,\qquad 0 \le \rho_1 \le \rho(t) \le \rho_2,\quad \dot\rho(t)\le\mu_2.\tag{2}$$

Here each time-varying delay is split into a constant part and a time-varying part, as in [7]; that is, τ(t) = h₁ + τ*(t) and ρ(t) = ρ₁ + ρ*(t), where τ*(t) and ρ*(t) satisfy 0 ≤ τ*(t) ≤ h₂ − h₁ and 0 ≤ ρ*(t) ≤ ρ₂ − ρ₁, with h₂ > h₁ > 0, ρ₂ > ρ₁ > 0 and μ₁, μ₂ > 0 constants; gᵢ(·) and fⱼ(·) are taken as the outputs of the BAM neural network. The matrices ΔC₁ = (Δc₁ⱼᵢ)ₙₓₙ, ΔC₂ = (Δc₂ᵢⱼ)ₙₓₙ, ΔD₁ = (Δd₁ⱼᵢ)ₙₓₙ and ΔD₂ = (Δd₂ᵢⱼ)ₙₓₙ are the parameter uncertainties, of appropriate dimensions. Throughout this paper, we make the following assumptions:

(H1) aₖᵢ(uᵢ(t)) > 0 and aₖᵢ is bounded; that is, there exist $\underline{a}_{ki}, \bar a_{ki} > 0$ such that $\underline{a}_{ki} \le a_{ki} \le \bar a_{ki}$, i = 1, 2, …, n, k = 1, 2, with $\bar a = \max\{\bar a_{ki}\}$, $\underline{a} = \min\{\underline{a}_{ki}\}$.

(H2) $\dfrac{b_{ki}(x)-b_{ki}(y)}{x-y} \ge \gamma_{ki} > 0$ for any x, y ∈ ℝ with x ≠ y.

For notational convenience, we shift the equilibrium point $(u_{11}^*, u_{12}^*, \dots, u_{1n}^*, u_{21}^*, u_{22}^*, \dots, u_{2n}^*)^T$ to the origin by the transformations $x_i(t) = u_{1i} - u_{1i}^*$, $y_j(t) = u_{2j} - u_{2j}^*$, $a_{1i}(x_i(t)) = a_{1i}(x_i(t)+u_{1i}^*)$, $a_{2j}(y_j(t)) = a_{2j}(y_j(t)+u_{2j}^*)$, $b_{1i}(x_i(t)) = b_{1i}(x_i(t)+u_{1i}^*) - b_{1i}(u_{1i}^*)$, $b_{2j}(y_j(t)) = b_{2j}(y_j(t)+u_{2j}^*) - b_{2j}(u_{2j}^*)$, $g_i(x_i(t)) = g_i(x_i(t)+u_{1i}^*) - g_i(u_{1i}^*)$, $f_j(y_j(t)) = f_j(y_j(t)+u_{2j}^*) - f_j(u_{2j}^*)$, $u_i(t) = J_{1i} - J_{1i}^*$, $v_j(t) = J_{2j} - J_{2j}^*$, which yields the following system:

$$
\begin{aligned}
\dot x_i(t) &= -a_{1i}(x_i(t))\Big[b_{1i}(x_i(t)) - \sum_{j=1}^{n}(c_{1ji}+\Delta c_{1ji})f_j(y_j(t)) - \sum_{j=1}^{n}(d_{1ji}+\Delta d_{1ji})f_j(y_j(t-\rho_j(t))) - u_i(t)\Big],\\
\dot y_j(t) &= -a_{2j}(y_j(t))\Big[b_{2j}(y_j(t)) - \sum_{i=1}^{m}(c_{2ij}+\Delta c_{2ij})g_i(x_i(t)) - \sum_{i=1}^{m}(d_{2ij}+\Delta d_{2ij})g_i(x_i(t-\tau_i(t))) - v_j(t)\Big].
\end{aligned}\tag{3}
$$

System (3) is then rewritten in vector form as

$$
\begin{aligned}
\dot x(t) &= -a_1(x(t))\big[b_1(x(t)) - \bar C_1 f(y(t)) - \bar D_1 f(y(t-\rho(t))) - u(t)\big],\\
\dot y(t) &= -a_2(y(t))\big[b_2(y(t)) - \bar C_2 g(x(t)) - \bar D_2 g(x(t-\tau(t))) - v(t)\big],
\end{aligned}\tag{4}
$$

where x(t) = [x₁(t), …, xₙ(t)]ᵀ, y(t) = [y₁(t), …, yₙ(t)]ᵀ, g(x(t)) = [g₁(x₁(t)), …, gₙ(xₙ(t))]ᵀ, f(y(t)) = [f₁(y₁(t)), …, fₙ(yₙ(t))]ᵀ, τ(t) = [τ₁(t), τ₂(t), …, τₙ(t)]ᵀ, ρ(t) = [ρ₁(t), ρ₂(t), …, ρₙ(t)]ᵀ, a₁(x(t)) = diag{a₁₁(x₁(t)), …, a₁ₙ(xₙ(t))}, a₂(y(t)) = diag{a₂₁(y₁(t)), …, a₂ₙ(yₙ(t))}, b₁(x(t)) = [b₁₁(x₁(t)), …, b₁ₙ(xₙ(t))]ᵀ, b₂(y(t)) = [b₂₁(y₁(t)), …, b₂ₙ(yₙ(t))]ᵀ, and

$$\bar C_1 = C_1 + \Delta C_1,\quad \bar D_1 = D_1 + \Delta D_1,\quad \bar C_2 = C_2 + \Delta C_2,\quad \bar D_2 = D_2 + \Delta D_2,$$
$$C_1 = (c_{1ji}),\quad C_2 = (c_{2ij}),\quad D_1 = (d_{1ji}),\quad D_2 = (d_{2ij}).$$

Further, the activation functions satisfy the following assumption:

(H3) For any i = 1, 2, …, n and j = 1, 2, …, n there exist constants $G_i^-$, $G_i^+$, $F_j^-$ and $F_j^+$ such that

$$G_i^- \le \frac{g_i(x_1)-g_i(x_2)}{x_1-x_2} \le G_i^+ \quad \text{for all } x_1, x_2 \in \mathbb{R},\ x_1 \ne x_2,$$
$$F_j^- \le \frac{f_j(y_1)-f_j(y_2)}{y_1-y_2} \le F_j^+ \quad \text{for all } y_1, y_2 \in \mathbb{R},\ y_1 \ne y_2.$$

Next, we consider an uncertain fuzzy BAM neural network with time-varying delays, represented by a TS fuzzy model composed of a set of fuzzy implications, each of which is expressed as a linear system model; see [27]. The gth rule of this TS fuzzy model has the following form:

Plant Rule g: IF u₁(t) and v₁(t) are $h_g^1$, …, and u_p(t) and v_p(t) are $h_g^p$, THEN

$$
\begin{aligned}
\dot x(t) &= -a_{1g}(x(t))\big[b_{1g}(x(t)) - \bar C_{1g} f(y(t)) - \bar D_{1g} f(y(t-\rho(t))) - u(t)\big],\\
\dot y(t) &= -a_{2g}(y(t))\big[b_{2g}(y(t)) - \bar C_{2g} g(x(t)) - \bar D_{2g} g(x(t-\tau(t))) - v(t)\big],
\end{aligned}\tag{5}
$$

where $h_g^i$ (i = 1, 2, …, p; g = 1, 2, …, r) are fuzzy sets, (u₁(t), u₂(t), …, u_p(t), v₁(t), v₂(t), …, v_p(t))ᵀ is the premise variable vector, x(t) and y(t) are the state variables, and r is the number of IF–THEN rules. Further, we assume that the parameter uncertainties ΔC₁g, ΔD₁g, ΔC₂g and ΔD₂g are time-varying and described by:

(A1) Structured perturbations:

$$[\Delta C_{1g}\ \Delta D_{1g}\ \Delta C_{2g}\ \Delta D_{2g}] = M_g F_g(t)[N_{11g}\ N_{12g}\ N_{21g}\ N_{22g}],\tag{6}$$

where N₁₁g, N₁₂g, N₂₁g, N₂₂g and M_g are known constant matrices of appropriate dimensions, and F_g(t) is an unknown time-varying matrix with Lebesgue measurable elements bounded by $F_g^T(t)F_g(t) \le I$, where I is the identity matrix of appropriate dimension. Let λ_g(ξ(t)) be the normalized membership function of the inferred fuzzy set β_g(ξ(t)). The defuzzified system is as follows:

_ xðtÞ ¼ _ yðtÞ ¼

9 > kg ðnðtÞÞfa1g ðxðtÞÞ½b1g ðxðtÞÞ  C 1g f ðyðtÞÞ  D1g f ðyðt  qðtÞÞÞ  uðtg; > > = g¼1 r P

r P

g¼1

> > kg ðnðtÞÞfa2g ðyðtÞÞ½b2g ðyðtÞÞ  C 2g gðxðtÞÞ  D2g gðxðt  sðtÞÞÞ  v ðtÞg; > ;

ð7Þ

Q ; bg ðnðtÞÞ ¼ pi¼1 hgi ðzi ðtÞÞ, and hgi ðzi ðtÞÞ is the grade of the membership function of zi(t) in hgi . We Pr Pr assume bg ðnðtÞÞ P 0; g ¼ 1; 2; . . . ; r; g¼1 bg ðnðtÞÞ > 0, and kg(n(t)) satisfy, kg ðnðtÞÞ P 0; g ¼ 1; 2; . . . ; r; g¼1 kg ðnðtÞÞ ¼ 1 for any n(t). Before giving our main results, we will present the definitions and lemmas which are used in the proof of the theorems. b ðnðtÞÞ

where kg ðnðtÞÞ ¼ Pr g

b ðnðtÞÞ g¼1 g
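As an aside, the normalization producing λ_g in (7) is straightforward to compute. The following is a minimal illustrative sketch (the helper name is hypothetical, and the two rule grades are chosen only in the spirit of the example in Section 4):

```python
import numpy as np

def normalized_memberships(beta):
    # lambda_g = beta_g / sum_g beta_g, as in (7): nonnegative and summing to 1.
    beta = np.asarray(beta, dtype=float)
    assert np.all(beta >= 0) and beta.sum() > 0, "needs beta_g >= 0 and sum beta_g > 0"
    return beta / beta.sum()

# Two rules with grades beta_1 = exp(-2u), beta_2 = 1 - beta_1 (u >= 0):
u = 0.3
lam = normalized_memberships([np.exp(-2 * u), 1 - np.exp(-2 * u)])
print(lam, lam.sum())  # lam >= 0 and lam.sum() == 1
```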

Definition 2.1. The BAM neural network (7) is said to be passive if there exists a scalar ϑ ≥ 0 such that

$$2\int_{0}^{t_p}\big[f^{T}(y(s))\ g^{T}(x(s))\big]\begin{bmatrix}u(s)\\ v(s)\end{bmatrix}ds \;\ge\; -\vartheta\int_{0}^{t_p}\big[u^{T}(s)\ v^{T}(s)\big]\begin{bmatrix}u(s)\\ v(s)\end{bmatrix}ds$$

for all t_p ≥ 0 and for all solutions of (7) with x₀ = 0, y₀ = 0.

Lemma 2.2 [23]. Given a positive definite matrix S ∈ ℝⁿˣⁿ, S = Sᵀ > 0, and scalars h₁ < τ(t) < h₂, for a vector function x = [x₁(t), x₂(t), …, xₙ(t)]ᵀ we have

$$(h_2 - h_1)\int_{t-h_2}^{t-h_1}\dot x^{T}(s)S\dot x(s)\,ds \;\ge\; \Big(\int_{t-h_2}^{t-h_1}\dot x(s)\,ds\Big)^{T} S \Big(\int_{t-h_2}^{t-h_1}\dot x(s)\,ds\Big).$$
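As an illustrative sanity check of Lemma 2.2 (not part of the development), both sides of the inequality can be discretized and compared numerically. A minimal sketch with an arbitrary positive definite S and an arbitrary vector function:

```python
import numpy as np

rng = np.random.default_rng(0)
n, h1, h2, N = 3, 0.5, 2.0, 2000

A = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)                                          # S = S^T > 0
ts = np.linspace(h1, h2, N)
ds = ts[1] - ts[0]
xdot = np.stack([np.sin((i + 1) * ts) + 0.1 * i for i in range(n)])  # arbitrary x'(s)

# (h2 - h1) * int x'(s)^T S x'(s) ds   vs   (int x'(s) ds)^T S (int x'(s) ds)
lhs = (h2 - h1) * np.einsum('it,ij,jt->', xdot, S, xdot) * ds
v = xdot.sum(axis=1) * ds
rhs = v @ S @ v
print(lhs >= rhs)  # True: the quadratic integral dominates, as the lemma states
```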

Lemma 2.3 [1]. Given matrices Φ = Φᵀ, M, N and F(t) of appropriate dimensions, Φ + MF(t)N + NᵀFᵀ(t)Mᵀ < 0 holds for all F(t) satisfying Fᵀ(t)F(t) ≤ I if and only if there exists a scalar ε > 0 such that

$$\Phi + \varepsilon M M^{T} + \varepsilon^{-1} N^{T} N < 0.$$

3. Main results

In this section, we consider passivity criteria for the neural network (7) when ΔC₁g = 0, ΔC₂g = 0, ΔD₁g = 0 and ΔD₂g = 0. By employing the delay fractioning idea [7], we introduce a new Lyapunov–Krasovskii functional candidate for fuzzy Cohen–Grossberg BAM neural networks with time-varying delays. This result is then extended to obtain a new passivity criterion for uncertain fuzzy Cohen–Grossberg BAM neural networks. For presentation convenience, we denote

$$F_1 = \operatorname{diag}\{F_1^- F_1^+,\ F_2^- F_2^+,\ \dots,\ F_n^- F_n^+\},\qquad F_2 = \operatorname{diag}\Big\{\tfrac{F_1^- + F_1^+}{2},\ \tfrac{F_2^- + F_2^+}{2},\ \dots,\ \tfrac{F_n^- + F_n^+}{2}\Big\},$$
$$G_1 = \operatorname{diag}\{G_1^- G_1^+,\ G_2^- G_2^+,\ \dots,\ G_n^- G_n^+\},\qquad G_2 = \operatorname{diag}\Big\{\tfrac{G_1^- + G_1^+}{2},\ \tfrac{G_2^- + G_2^+}{2},\ \dots,\ \tfrac{G_n^- + G_n^+}{2}\Big\}.$$

Theorem 3.1. Given integers l, k ≥ 1, the fuzzy Cohen–Grossberg BAM neural network (7) without uncertainty is passive if there exist symmetric positive definite matrices P₁ > 0, P₂ > 0, Qᵢ > 0 (i = 1, 2, …, 8) and Sₖ > 0 (k = 1, 2, 3, 4), positive diagonal matrices Rᵢ > 0, Hᵢ > 0 (i = 1, 2), a scalar ϑ > 0 and appropriately dimensioned matrices L, X such that the following LMI holds for g = 1, 2, …, r:

$$\Xi < 0,\tag{8}$$

where

$$
\begin{aligned}
\Xi ={}& W_{p_1}^{T}\bar P_1 W_{p_1} + W_{p_2}^{T}\bar P_2 W_{p_2} + W_{Q_{11}}^{T}\bar Q_{11} W_{Q_{11}} + W_{Q_{12}}^{T}\bar Q_{12} W_{Q_{12}} + W_{Q_{21}}^{T}\bar Q_{21} W_{Q_{21}} + W_{Q_{22}}^{T}\bar Q_{22} W_{Q_{22}} + W_{S_1}^{T}\bar S_1 W_{S_1}\\
&+ W_{S_{13}}^{T}\bar S_{13} W_{S_{13}} + W_{S_2}^{T}\bar S_2 W_{S_2} + W_{S_4}^{T}\bar S_4 W_{S_4} + \operatorname{sym}\big(W_{\xi_3}^{T} L W_L + W_{\xi_4}^{T} X W_X\big) + W_{R_{11}}\bar R_1 W_{R_{12}} + W_{H_{11}}\bar H_1 W_{H_{12}}\\
&+ W_{R_{21}}\bar R_2 W_{R_{22}} + W_{H_{21}}\bar H_2 W_{H_{22}} - \vartheta W_{\vartheta}^{T} W_{\vartheta} - W_{\vartheta_1}^{T} W_{\vartheta},
\end{aligned}
$$

with the block matrices

$$\bar P_1 = \begin{bmatrix} 0 & P_1 \\ P_1 & 0 \end{bmatrix},\quad \bar P_2 = \begin{bmatrix} 0 & P_2 \\ P_2 & 0 \end{bmatrix},\quad \bar R_i = \begin{bmatrix} R_i & 0 \\ 0 & R_i \end{bmatrix},\quad \bar H_i = \begin{bmatrix} H_i & 0 \\ 0 & H_i \end{bmatrix}\ (i=1,2),$$
$$\bar Q_{11} = \operatorname{diag}\{Q_1, -Q_1, Q_2, -Q_2, Q_3, -Q_3\},\qquad \bar Q_{12} = \operatorname{diag}\{Q_4, -Q_4\},$$
$$\bar Q_{21} = \operatorname{diag}\{Q_5, -Q_5, Q_6, -Q_6, Q_7, -Q_7\},\qquad \bar Q_{22} = \operatorname{diag}\{Q_8, -Q_8\},$$
$$\bar S_1 = \operatorname{diag}\{S_1, S_2, S_3, S_4\},\quad \bar S_{13} = \operatorname{diag}\{-S_1, -S_3\},\quad \bar S_2 = \operatorname{diag}\{-S_2, -S_2\},\quad \bar S_4 = \operatorname{diag}\{-S_4, -S_4\}.$$

Here $W_{p_1}$, $W_{p_2}$, $W_{Q_{11}}$, $W_{Q_{12}}$, $W_{Q_{21}}$, $W_{Q_{22}}$, $W_{S_1}$, $W_{S_{13}}$, $W_{S_2}$, $W_{S_4}$, $W_{\xi_3}$, $W_{\xi_4}$, $W_{R_{11}}$, $W_{R_{12}}$, $W_{H_{11}}$, $W_{H_{12}}$, $W_{R_{21}}$, $W_{R_{22}}$, $W_{H_{21}}$, $W_{H_{22}}$, $W_\vartheta$ and $W_{\vartheta_1}$ are constant block-entry (selection) matrices of appropriate dimensions: each row block extracts the indicated component of the augmented vector ξ(t) defined in the proof below, or a weighted combination of such components, with weights such as $\sqrt{h_1/l}$, $\sqrt{h_2-h_1}$, $\sqrt{\rho_1/k}$, $\sqrt{\rho_2-\rho_1}$, $\sqrt{1-\mu_1}$ and $\sqrt{1-\mu_2}$ arising from Lemma 2.2 and the delay-derivative bounds (2), and with the sector matrices $G_1$, $G_2$, $F_1$, $F_2$ entering through $W_{R_{11}}$, $W_{R_{12}}$, $W_{H_{11}}$, $W_{H_{12}}$, $W_{R_{21}}$, $W_{R_{22}}$, $W_{H_{21}}$, $W_{H_{22}}$. The matrices $W_L$ and $W_X$ collect the coefficient matrices $(-\mathcal C_{1g}, C_{1g}, D_{1g}, I, -I)$ and $(-\mathcal C_{2g}, C_{2g}, D_{2g}, I, -I)$ of the two equations of (7), as used in the free-weighting identities (22) and (23) of the proof.

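An LMI feasibility condition of the form (8) is a semidefinite program. The paper solves such conditions with the Matlab LMI control toolbox; purely as an illustration of the same mechanics (an assumption on tooling, and with a small stand-in LMI rather than the full block matrix Ξ of Theorem 3.1), an equivalent check in Python with CVXPY looks like this:

```python
import cvxpy as cp
import numpy as np

# Small stand-in LMI:  A^T P + P A + Q < 0,  P > 0,  Q > 0  (not the full Xi of (8)).
A = np.array([[-3.6, 0.0], [0.0, -2.4]])
n, eps = A.shape[0], 1e-6

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),
               Q >> eps * np.eye(n),
               A.T @ P + P @ A + Q << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)  # 'optimal' here means the LMI is strictly feasible
```

The small margins eps turn the strict inequalities of (8) into non-strict ones that a numerical solver can handle, a standard device when checking strict LMIs.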
Proof. In order to prove the passivity criteria, we consider the following Lyapunov–Krasovskii functional, based on the idea of delay fractioning:

$$V(t,x(t),y(t)) = \sum_{i=1}^{3} V_i(t,x(t),y(t)),\tag{9}$$

where

$$V_1(t,x(t),y(t)) = x^{T}(t)P_1x(t) + y^{T}(t)P_2y(t),$$

$$
\begin{aligned}
V_2(t,x(t),y(t)) ={}& \int_{t-\frac{h_1}{l}}^{t}\gamma_1^{T}(s)Q_1\gamma_1(s)\,ds + \int_{t-\tau(t)}^{t}x^{T}(s)Q_2x(s)\,ds + \int_{t-h_2}^{t}x^{T}(s)Q_3x(s)\,ds + \int_{t-\tau(t)}^{t}g^{T}(x(s))Q_4g(x(s))\,ds\\
&+ \int_{t-\frac{\rho_1}{k}}^{t}\gamma_2^{T}(s)Q_5\gamma_2(s)\,ds + \int_{t-\rho(t)}^{t}y^{T}(s)Q_6y(s)\,ds + \int_{t-\rho_2}^{t}y^{T}(s)Q_7y(s)\,ds + \int_{t-\rho(t)}^{t}f^{T}(y(s))Q_8f(y(s))\,ds,
\end{aligned}
$$

$$
\begin{aligned}
V_3(t,x(t),y(t)) ={}& \int_{-\frac{h_1}{l}}^{0}\int_{t+\theta}^{t}\dot x^{T}(s)S_1\dot x(s)\,ds\,d\theta + \int_{-h_2}^{-h_1}\int_{t+\theta}^{t}\dot x^{T}(s)S_2\dot x(s)\,ds\,d\theta\\
&+ \int_{-\frac{\rho_1}{k}}^{0}\int_{t+\theta}^{t}\dot y^{T}(s)S_3\dot y(s)\,ds\,d\theta + \int_{-\rho_2}^{-\rho_1}\int_{t+\theta}^{t}\dot y^{T}(s)S_4\dot y(s)\,ds\,d\theta,
\end{aligned}
$$

with

$$\gamma_1^{T}(s) = \Big[x^{T}(s),\ x^{T}\Big(s-\tfrac{h_1}{l}\Big),\ \dots,\ x^{T}\Big(s-\tfrac{h_1(l-1)}{l}\Big)\Big],\qquad \gamma_2^{T}(s) = \Big[y^{T}(s),\ y^{T}\Big(s-\tfrac{\rho_1}{k}\Big),\ \dots,\ y^{T}\Big(s-\tfrac{\rho_1(k-1)}{k}\Big)\Big],$$

where l, k ≥ 1 are the fractioning numbers. Taking the time derivative of Vᵢ(t, x(t), y(t)) (i = 1, 2, 3) along the trajectories of (7), we have

$$\dot V_1(t,x(t),y(t)) = 2x^{T}(t)P_1\dot x(t) + 2y^{T}(t)P_2\dot y(t),\tag{10}$$

$$
\begin{aligned}
\dot V_2(t,x(t),y(t)) ={}& \gamma_1^{T}(t)Q_1\gamma_1(t) - \gamma_1^{T}\Big(t-\tfrac{h_1}{l}\Big)Q_1\gamma_1\Big(t-\tfrac{h_1}{l}\Big) + x^{T}(t)Q_2x(t) - (1-\dot\tau(t))x^{T}(t-\tau(t))Q_2x(t-\tau(t))\\
&+ x^{T}(t)Q_3x(t) - x^{T}(t-h_2)Q_3x(t-h_2) + g^{T}(x(t))Q_4g(x(t)) - (1-\dot\tau(t))g^{T}(x(t-\tau(t)))Q_4g(x(t-\tau(t)))\\
&+ \gamma_2^{T}(t)Q_5\gamma_2(t) - \gamma_2^{T}\Big(t-\tfrac{\rho_1}{k}\Big)Q_5\gamma_2\Big(t-\tfrac{\rho_1}{k}\Big) + y^{T}(t)Q_6y(t) - (1-\dot\rho(t))y^{T}(t-\rho(t))Q_6y(t-\rho(t))\\
&+ y^{T}(t)Q_7y(t) - y^{T}(t-\rho_2)Q_7y(t-\rho_2) + f^{T}(y(t))Q_8f(y(t)) - (1-\dot\rho(t))f^{T}(y(t-\rho(t)))Q_8f(y(t-\rho(t))),
\end{aligned}\tag{11}
$$

$$
\begin{aligned}
\dot V_3(t,x(t),y(t)) ={}& \frac{h_1}{l}\dot x^{T}(t)S_1\dot x(t) - \int_{t-\frac{h_1}{l}}^{t}\dot x^{T}(s)S_1\dot x(s)\,ds + (h_2-h_1)\dot x^{T}(t)S_2\dot x(t) - \int_{t-h_2}^{t-h_1}\dot x^{T}(s)S_2\dot x(s)\,ds\\
&+ \frac{\rho_1}{k}\dot y^{T}(t)S_3\dot y(t) - \int_{t-\frac{\rho_1}{k}}^{t}\dot y^{T}(s)S_3\dot y(s)\,ds + (\rho_2-\rho_1)\dot y^{T}(t)S_4\dot y(t) - \int_{t-\rho_2}^{t-\rho_1}\dot y^{T}(s)S_4\dot y(s)\,ds.
\end{aligned}\tag{12}
$$

From (10)–(12), we have

$$
\begin{aligned}
\dot V(t,x(t),y(t)) \le{}& \xi^{T}(t)\big(W_{p_1}^{T}\bar P_1 W_{p_1} + W_{p_2}^{T}\bar P_2 W_{p_2} + W_{Q_{11}}^{T}\bar Q_{11} W_{Q_{11}} + W_{Q_{12}}^{T}\bar Q_{12} W_{Q_{12}} + W_{Q_{21}}^{T}\bar Q_{21} W_{Q_{21}} + W_{Q_{22}}^{T}\bar Q_{22} W_{Q_{22}}\big)\xi(t)\\
&+ \frac{h_1}{l}\dot x^{T}(t)S_1\dot x(t) + (h_2-h_1)\dot x^{T}(t)S_2\dot x(t) + \frac{\rho_1}{k}\dot y^{T}(t)S_3\dot y(t) + (\rho_2-\rho_1)\dot y^{T}(t)S_4\dot y(t)\\
&- \int_{t-\frac{h_1}{l}}^{t}\dot x^{T}(s)S_1\dot x(s)\,ds - \int_{t-h_2}^{t-h_1}\dot x^{T}(s)S_2\dot x(s)\,ds - \int_{t-\frac{\rho_1}{k}}^{t}\dot y^{T}(s)S_3\dot y(s)\,ds - \int_{t-\rho_2}^{t-\rho_1}\dot y^{T}(s)S_4\dot y(s)\,ds.
\end{aligned}\tag{13}
$$

By applying Lemma 2.2, we can get the following inequalities:

$$-\int_{t-\frac{h_1}{l}}^{t}\dot x^{T}(s)S_1\dot x(s)\,ds \le -\frac{l}{h_1}\Big(\int_{t-\frac{h_1}{l}}^{t}\dot x(s)\,ds\Big)^{T} S_1 \Big(\int_{t-\frac{h_1}{l}}^{t}\dot x(s)\,ds\Big),\tag{14}$$

$$-\int_{t-h_2}^{t-h_1}\dot x^{T}(s)S_2\dot x(s)\,ds = -\int_{t-h_2}^{t-\tau(t)}\dot x^{T}(s)S_2\dot x(s)\,ds - \int_{t-\tau(t)}^{t-h_1}\dot x^{T}(s)S_2\dot x(s)\,ds,\tag{15}$$

$$-\int_{t-h_2}^{t-\tau(t)}\dot x^{T}(s)S_2\dot x(s)\,ds \le -\frac{1}{h_2-h_1}\Big(\int_{t-h_2}^{t-\tau(t)}\dot x(s)\,ds\Big)^{T} S_2 \Big(\int_{t-h_2}^{t-\tau(t)}\dot x(s)\,ds\Big),\tag{16}$$

$$-\int_{t-\tau(t)}^{t-h_1}\dot x^{T}(s)S_2\dot x(s)\,ds \le -\frac{1}{h_2-h_1}\Big(\int_{t-\tau(t)}^{t-h_1}\dot x(s)\,ds\Big)^{T} S_2 \Big(\int_{t-\tau(t)}^{t-h_1}\dot x(s)\,ds\Big),\tag{17}$$

$$-\int_{t-\frac{\rho_1}{k}}^{t}\dot y^{T}(s)S_3\dot y(s)\,ds \le -\frac{k}{\rho_1}\Big(\int_{t-\frac{\rho_1}{k}}^{t}\dot y(s)\,ds\Big)^{T} S_3 \Big(\int_{t-\frac{\rho_1}{k}}^{t}\dot y(s)\,ds\Big),\tag{18}$$

$$-\int_{t-\rho_2}^{t-\rho_1}\dot y^{T}(s)S_4\dot y(s)\,ds = -\int_{t-\rho_2}^{t-\rho(t)}\dot y^{T}(s)S_4\dot y(s)\,ds - \int_{t-\rho(t)}^{t-\rho_1}\dot y^{T}(s)S_4\dot y(s)\,ds,\tag{19}$$

$$-\int_{t-\rho_2}^{t-\rho(t)}\dot y^{T}(s)S_4\dot y(s)\,ds \le -\frac{1}{\rho_2-\rho_1}\Big(\int_{t-\rho_2}^{t-\rho(t)}\dot y(s)\,ds\Big)^{T} S_4 \Big(\int_{t-\rho_2}^{t-\rho(t)}\dot y(s)\,ds\Big),\tag{20}$$

$$-\int_{t-\rho(t)}^{t-\rho_1}\dot y^{T}(s)S_4\dot y(s)\,ds \le -\frac{1}{\rho_2-\rho_1}\Big(\int_{t-\rho(t)}^{t-\rho_1}\dot y(s)\,ds\Big)^{T} S_4 \Big(\int_{t-\rho(t)}^{t-\rho_1}\dot y(s)\,ds\Big).\tag{21}$$

On the other hand, for any matrices L, X of appropriate dimensions the following identities hold:

$$2\xi_3^{T}(t)L\big[-\mathcal C_{1g}x(t) + C_{1g}f(y(t)) + D_{1g}f(y(t-\rho(t))) + u(t) - \dot x(t)\big] = 0,\tag{22}$$
$$2\xi_4^{T}(t)X\big[-\mathcal C_{2g}y(t) + C_{2g}g(x(t)) + D_{2g}g(x(t-\tau(t))) + v(t) - \dot y(t)\big] = 0,\tag{23}$$

where

$$\xi_3^{T}(t) = \big[\gamma_1^{T}(t),\ x^{T}(t-\tau(t)),\ \dot x^{T}(t)\big],\qquad \xi_4^{T}(t) = \big[\gamma_2^{T}(t),\ y^{T}(t-\rho(t)),\ \dot y^{T}(t)\big].$$

From assumption (H3), for any i = 1, 2, …, n we have

$$\big(g_i(x_i(t)) - G_i^- x_i(t)\big)\big(g_i(x_i(t)) - G_i^+ x_i(t)\big) \le 0,$$


which is equivalent to

$$\begin{bmatrix} x(t) \\ g(x(t)) \end{bmatrix}^{T}\begin{bmatrix} G_i^- G_i^+ e_i e_i^{T} & -\frac{G_i^- + G_i^+}{2} e_i e_i^{T} \\ -\frac{G_i^- + G_i^+}{2} e_i e_i^{T} & e_i e_i^{T} \end{bmatrix}\begin{bmatrix} x(t) \\ g(x(t)) \end{bmatrix} \le 0,$$

where eᵢ denotes the unit column vector with a 1 in its ith entry and zeros elsewhere. Let R₁ = diag{r₁₁, r₁₂, …, r₁ₙ}, H₁ = diag{h₁₁, h₁₂, …, h₁ₙ}, R₂ = diag{r₂₁, r₂₂, …, r₂ₙ} and H₂ = diag{h₂₁, h₂₂, …, h₂ₙ}. Then

$$\sum_{i=1}^{n} r_{1i}\begin{bmatrix} x(t) \\ g(x(t)) \end{bmatrix}^{T}\begin{bmatrix} G_i^- G_i^+ e_i e_i^{T} & -\frac{G_i^- + G_i^+}{2} e_i e_i^{T} \\ -\frac{G_i^- + G_i^+}{2} e_i e_i^{T} & e_i e_i^{T} \end{bmatrix}\begin{bmatrix} x(t) \\ g(x(t)) \end{bmatrix} \le 0.$$

That is,

$$\begin{bmatrix} x(t) \\ g(x(t)) \end{bmatrix}^{T}\begin{bmatrix} G_1 R_1 & -G_2 R_1 \\ -G_2 R_1 & R_1 \end{bmatrix}\begin{bmatrix} x(t) \\ g(x(t)) \end{bmatrix} \le 0.\tag{24}$$

Similarly, one can get

$$\begin{bmatrix} y(t) \\ f(y(t)) \end{bmatrix}^{T}\begin{bmatrix} F_1 R_2 & -F_2 R_2 \\ -F_2 R_2 & R_2 \end{bmatrix}\begin{bmatrix} y(t) \\ f(y(t)) \end{bmatrix} \le 0,\tag{25}$$

$$\begin{bmatrix} x(t-\tau(t)) \\ g(x(t-\tau(t))) \end{bmatrix}^{T}\begin{bmatrix} G_1 H_1 & -G_2 H_1 \\ -G_2 H_1 & H_1 \end{bmatrix}\begin{bmatrix} x(t-\tau(t)) \\ g(x(t-\tau(t))) \end{bmatrix} \le 0,\tag{26}$$

$$\begin{bmatrix} y(t-\rho(t)) \\ f(y(t-\rho(t))) \end{bmatrix}^{T}\begin{bmatrix} F_1 H_2 & -F_2 H_2 \\ -F_2 H_2 & H_2 \end{bmatrix}\begin{bmatrix} y(t-\rho(t)) \\ f(y(t-\rho(t))) \end{bmatrix} \le 0.\tag{27}$$

From (13)–(27) and Definition 2.1, we have

$$\dot V(t,x(t),y(t)) - 2\big[f^{T}(y(t))\ g^{T}(x(t))\big]\begin{bmatrix}u(t)\\ v(t)\end{bmatrix} - \vartheta\big[u^{T}(t)\ v^{T}(t)\big]\begin{bmatrix}u(t)\\ v(t)\end{bmatrix} \le \xi^{T}(t)\,\Xi\,\xi(t),\tag{28}$$

where $\xi^{T}(t) = \big[\xi_1^{T}(t),\ \xi_2^{T}(t)\big]$ with

$$\xi_1^{T}(t) = \big[\gamma_1^{T}(t),\ x^{T}(t-h_1),\ x^{T}(t-\tau(t)),\ x^{T}(t-h_2),\ \dot x^{T}(t),\ g^{T}(x(t)),\ g^{T}(x(t-\tau(t))),\ u^{T}(t)\big],$$
$$\xi_2^{T}(t) = \big[\gamma_2^{T}(t),\ y^{T}(t-\rho_1),\ y^{T}(t-\rho(t)),\ y^{T}(t-\rho_2),\ \dot y^{T}(t),\ f^{T}(y(t)),\ f^{T}(y(t-\rho(t))),\ v^{T}(t)\big].$$

Thus we conclude that if the LMI (8) holds, then

$$\dot V(t,x(t),y(t)) - 2\big[f^{T}(y(t))\ g^{T}(x(t))\big]\begin{bmatrix}u(t)\\ v(t)\end{bmatrix} - \vartheta\big[u^{T}(t)\ v^{T}(t)\big]\begin{bmatrix}u(t)\\ v(t)\end{bmatrix} \le 0.\tag{29}$$

Integrating (29) with respect to t over the interval [0, t_p] gives

$$2\int_{0}^{t_p}\big[f^{T}(y(s))\ g^{T}(x(s))\big]\begin{bmatrix}u(s)\\ v(s)\end{bmatrix}ds \ge V(t_p,x(t_p),y(t_p)) - V(0,x(0),y(0)) - \vartheta\int_{0}^{t_p}\big[u^{T}(s)\ v^{T}(s)\big]\begin{bmatrix}u(s)\\ v(s)\end{bmatrix}ds \ge -\vartheta\int_{0}^{t_p}\big[u^{T}(s)\ v^{T}(s)\big]\begin{bmatrix}u(s)\\ v(s)\end{bmatrix}ds,\tag{30}$$

since for x₀ = 0, y₀ = 0 we have V(0, x(0), y(0)) = 0. Hence the BAM neural network (7) is passive in the sense of Definition 2.1. This completes the proof. □

Next, we consider robust passivity analysis of fuzzy Cohen–Grossberg BAM neural networks with time-varying delays and uncertainties. The result above is extended to the time-varying structured uncertainties described in (6). By taking the same Lyapunov functional as in Theorem 3.1 and following essentially the same proof, we obtain a robust passivity criterion for the uncertain fuzzy BAM neural network (7) with time-varying delays.

Theorem 3.2. Under assumption (A1) and given integers l, k ≥ 1, the neural network (7) is robustly passive if there exist symmetric positive definite matrices P₁ > 0, P₂ > 0, Qᵢ > 0 (i = 1, 2, …, 8) and Sₖ > 0 (k = 1, 2, 3, 4), positive diagonal matrices Rᵢ > 0, Hᵢ > 0 (i = 1, 2), scalars ϑ > 0 and ε₁, ε₂ > 0, and appropriately dimensioned matrices L, X such that the following LMI holds for g = 1, 2, …, r:

$$\begin{bmatrix} \hat\Xi & W_{\xi_3}^{T}L^{T}M_g & W_{\xi_4}^{T}X^{T}M_g \\ * & -\varepsilon_1 I & 0 \\ * & * & -\varepsilon_2 I \end{bmatrix} < 0,\tag{31}$$

where

$$\hat\Xi = \Xi + \varepsilon_1 \bar N_1^{T}\bar N_1 + \varepsilon_2 \bar N_2^{T}\bar N_2,$$

and $\bar N_1$, $\bar N_2$ are constant row block matrices of appropriate dimensions that place $N_{11g}$ and $N_{12g}$ in the columns of ξ(t) corresponding to f(y(t)) and f(y(t−ρ(t))), and $N_{21g}$ and $N_{22g}$ in the columns corresponding to g(x(t)) and g(x(t−τ(t))), respectively, with all remaining blocks zero,

and the remaining parameters are as defined in Theorem 3.1.

Proof. Under the uncertainties described in (6), the matrices $\bar C_{1g}$, $\bar D_{1g}$, $\bar C_{2g}$ and $\bar D_{2g}$ in (7) become $C_{1g} + M_gF_g(t)N_{11g}$, $D_{1g} + M_gF_g(t)N_{12g}$, $C_{2g} + M_gF_g(t)N_{21g}$ and $D_{2g} + M_gF_g(t)N_{22g}$. Following the same steps as in the proof of Theorem 3.1, we get

$$\Xi + W_{\xi_3}^{T}LM_gF_g(t)\bar N_1 + \bar N_1^{T}F_g^{T}(t)M_g^{T}L^{T}W_{\xi_3} + W_{\xi_4}^{T}XM_gF_g(t)\bar N_2 + \bar N_2^{T}F_g^{T}(t)M_g^{T}X^{T}W_{\xi_4} < 0.$$

By applying Lemma 2.3 to this inequality, we obtain

$$\Xi + \varepsilon_1^{-1}W_{\xi_3}^{T}LM_gM_g^{T}L^{T}W_{\xi_3} + \varepsilon_1\bar N_1^{T}\bar N_1 + \varepsilon_2^{-1}W_{\xi_4}^{T}XM_gM_g^{T}X^{T}W_{\xi_4} + \varepsilon_2\bar N_2^{T}\bar N_2 < 0.\tag{32}$$

Applying the Schur complement to (32), it is easy to get (31). This completes the proof. □
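The Schur-complement step used above is the standard equivalence

$$\begin{bmatrix} A & B \\ B^{T} & -C \end{bmatrix} < 0 \iff C > 0 \ \text{ and } \ A + BC^{-1}B^{T} < 0;$$

taking $A = \hat\Xi$, $B = \big[W_{\xi_3}^{T}L^{T}M_g \;\; W_{\xi_4}^{T}X^{T}M_g\big]$ and $C = \operatorname{diag}\{\varepsilon_1 I, \varepsilon_2 I\}$ recovers exactly the $\varepsilon_1^{-1}$ and $\varepsilon_2^{-1}$ terms in (32).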

4. Numerical simulation

In this section, we present an example to demonstrate the effectiveness of the proposed results.

Example 4.1. Consider the fuzzy Cohen–Grossberg BAM neural network with time-varying delays, in the absence of parameter uncertainties, with the following two rules:

Plant Rule 1: IF u₁(t) and v₁(t) are $h_1^1$, THEN

$$
\begin{aligned}
\dot x(t) &= -\mathcal C_{11}x(t) + C_{11}f(y(t)) + D_{11}f(y(t-\rho(t))) + u(t),\\
\dot y(t) &= -\mathcal C_{21}y(t) + C_{21}g(x(t)) + D_{21}g(x(t-\tau(t))) + v(t);
\end{aligned}
$$

Plant Rule 2: IF u₂(t) and v₂(t) are $h_2^2$, THEN

$$
\begin{aligned}
\dot x(t) &= -\mathcal C_{12}x(t) + C_{12}f(y(t)) + D_{12}f(y(t-\rho(t))) + u(t),\\
\dot y(t) &= -\mathcal C_{22}y(t) + C_{22}g(x(t)) + D_{22}g(x(t-\tau(t))) + v(t),
\end{aligned}
$$

with the following parameters:

$$\mathcal C_{11} = \begin{bmatrix} 3.6 & 0 \\ 0 & 2.4 \end{bmatrix},\quad \mathcal C_{21} = \begin{bmatrix} 3.7 & 0 \\ 0 & 2.9 \end{bmatrix},\quad \mathcal C_{12} = \begin{bmatrix} 3.6 & 0 \\ 0 & 2.8 \end{bmatrix},\quad \mathcal C_{22} = \begin{bmatrix} 3.7 & 0 \\ 0 & 2.9 \end{bmatrix},$$

$$C_{11} = \begin{bmatrix} 0.1 & 0.2 \\ 0.1 & 0.2 \end{bmatrix},\quad C_{21} = \begin{bmatrix} 0.6 & 0.5 \\ 0.3 & 0.8 \end{bmatrix},\quad C_{12} = \begin{bmatrix} 0.12 & 0.13 \\ 0.12 & 0.16 \end{bmatrix},\quad C_{22} = \begin{bmatrix} 0.22 & 0.12 \\ 0.32 & 0.17 \end{bmatrix},$$

$$D_{11} = \begin{bmatrix} 0.2 & 0.1 \\ 0.3 & 0.2 \end{bmatrix},\quad D_{21} = \begin{bmatrix} 0.25 & 0.3 \\ 0.4 & 0.6 \end{bmatrix},\quad D_{12} = \begin{bmatrix} 0.15 & 0.02 \\ 0.12 & 0.17 \end{bmatrix},\quad D_{22} = \begin{bmatrix} 0.10 & 0.16 \\ 0.33 & 0.15 \end{bmatrix}.$$

The activation functions are described by g₁(x) = tanh(2x), g₂(x) = tanh(4x), f₁(y) = tanh(2y) and f₂(y) = tanh(4y), and the membership functions for Rule 1 and Rule 2 are $h_1^1 = e^{-2u_{11}(t)}$ and $h_2^2 = 1 - h_1^1$. Clearly, assumption (H3) is satisfied with $G_1^- = 0$, $G_1^+ = 2$, $G_2^- = 0$, $G_2^+ = 4$ and $F_1^- = 0$, $F_1^+ = 2$, $F_2^- = 0$, $F_2^+ = 4$.
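These sector bounds can also be verified numerically. A minimal illustrative sketch, checking that every difference quotient of g₁(x) = tanh(2x) lies in the sector [0, 2]:

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.uniform(-5.0, 5.0, 100000)
x2 = rng.uniform(-5.0, 5.0, 100000)
m = x1 != x2
q = (np.tanh(2 * x1[m]) - np.tanh(2 * x2[m])) / (x1[m] - x2[m])
print(q.min() >= 0.0, q.max() <= 2.0)  # True True: sector [G^-, G^+] = [0, 2]
```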


Table 1
Calculated upper bounds of h₂, ρ₂ for various values of μ₁, μ₂.

μ₁, μ₂:        0.1     0.2     0.3     0.4     0.5     0.6 ≤ μ₁, μ₂ ≤ 1.0
l = 1, k = 1   2.835   2.182   2.165   2.156   2.152   2.152
l = 2, k = 2   3.086   2.262   2.208   2.197   2.191   2.191
l = 3, k = 3   3.776   2.365   2.257   2.243   2.234   2.230
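The upper bounds in Table 1 are the largest h₂ (= ρ₂) for which the LMI (8) remains feasible, and such a bound can be located by bisection over h₂ with any LMI feasibility oracle. A sketch, assuming a user-supplied wrapper lmi_feasible(h2, **kw) around the conditions of Theorem 3.1 (the wrapper is hypothetical; the full Ξ is lengthy):

```python
def max_delay_bound(lmi_feasible, lo=2.0, hi=10.0, tol=1e-3, **kw):
    """Largest h2 (= rho2) keeping the LMI of Theorem 3.1 feasible, by bisection.
    lmi_feasible(h2, **kw) -> bool is assumed monotone: feasibility is lost
    as h2 grows."""
    if not lmi_feasible(lo, **kw):
        raise ValueError("infeasible already at the lower bound")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lmi_feasible(mid, **kw) else (lo, mid)
    return lo

# e.g. max_delay_bound(lmi_feasible, mu=0.1, l=3, k=3) would return
# roughly 3.776, matching the last row of Table 1.
```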

[Fig. 1 shows the state trajectories x(t) = (x₁, x₂) and y(t) = (y₁, y₂) plotted against time t over [0, 30] for the two rules.]
Fig. 1. State trajectories of the fuzzy Cohen–Grossberg BAM neural networks when g = 1, g = 2 and h₂, ρ₂ = 3.776.
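The trajectories in Fig. 1 can be reproduced qualitatively by integrating the defuzzified system (7) with u = v = 0. The following is a minimal sketch only: the constant delay τ(t) = ρ(t) = 1, the constant initial history, the forward-Euler scheme and the clipping of the rule weights are all simplifying assumptions not specified in the paper, and the matrix entries follow Example 4.1 as printed:

```python
import numpy as np

dt, T, tau = 0.01, 30.0, 1.0
steps, lag = int(T / dt), int(tau / dt)

Cc1 = [np.diag([3.6, 2.4]), np.diag([3.6, 2.8])]   # script-C_{1g}, rules g = 1, 2
Cc2 = [np.diag([3.7, 2.9]), np.diag([3.7, 2.9])]   # script-C_{2g}
C1 = [np.array([[0.1, 0.2], [0.1, 0.2]]), np.array([[0.12, 0.13], [0.12, 0.16]])]
C2 = [np.array([[0.6, 0.5], [0.3, 0.8]]), np.array([[0.22, 0.12], [0.32, 0.17]])]
D1 = [np.array([[0.2, 0.1], [0.3, 0.2]]), np.array([[0.15, 0.02], [0.12, 0.17]])]
D2 = [np.array([[0.25, 0.3], [0.4, 0.6]]), np.array([[0.10, 0.16], [0.33, 0.15]])]

f = g = lambda z: np.array([np.tanh(2 * z[0]), np.tanh(4 * z[1])])

x = np.tile([0.5, 1.0], (lag + steps, 1)).astype(float)  # constant history on [-tau, 0]
y = np.tile([0.5, 2.0], (lag + steps, 1)).astype(float)
for t in range(lag, lag + steps - 1):
    b1 = np.clip(np.exp(-2 * x[t, 0]), 0.0, 1.0)         # rule-1 grade, clipped to [0, 1]
    lam = np.array([b1, 1.0 - b1])
    dx = sum(lam[r] * (-Cc1[r] @ x[t] + C1[r] @ f(y[t]) + D1[r] @ f(y[t - lag]))
             for r in range(2))
    dy = sum(lam[r] * (-Cc2[r] @ y[t] + C2[r] @ g(x[t]) + D2[r] @ g(x[t - lag]))
             for r in range(2))
    x[t + 1] = x[t] + dt * dx
    y[t + 1] = y[t] + dt * dy
# As in Fig. 1, x(t) and y(t) settle to the zero equilibrium.
```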

Thus

$$G_1 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix},\quad G_2 = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix},\quad F_1 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix},\quad F_2 = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}.$$

If the delay-fractioning numbers l and k, the time-delay lower bounds h₁ and ρ₁, and the delay-derivative bounds μ₁ and μ₂ are given, the LMI conditions in Theorem 3.1 can be readily solved with existing numerical software. Setting l, k = 1, 2, 3 and the lower bounds h₁, ρ₁ = 2, the time-delay upper bounds obtained for different values of μ₁, μ₂ are given in Table 1. It is clear that the calculated upper bounds h₂ and ρ₂ increase as the fractioning numbers l and k increase; note, however, that the computational complexity also increases correspondingly, so the fractioning numbers trade reduced conservatism against computational effort. In particular, for given l, k, h₁, ρ₁, μ₁ and μ₂, feasible solutions can be obtained by solving the LMIs in Theorem 3.1 via the Matlab LMI toolbox (not reported here due to page limits). Therefore, by Theorem 3.1, the model (1) with the above parameters is passive. If the external inputs u(t) = v(t) = 0 and the initial values of the state variables are chosen as (x₁(t), x₂(t)) = (0.5, 1) and (y₁(t), y₂(t)) = (0.5, 2), the trajectories of the state variables are as shown in Fig. 1. The simulation results reveal that both x(t) and y(t) converge to the zero equilibrium point, so we conclude that the considered fuzzy Cohen–Grossberg BAM neural network is internally stable.

5. Conclusion

In this paper, by utilizing a new Lyapunov functional based on the idea of the delay fractioning technique together with the LMI technique, we derived a set of sufficient conditions guaranteeing robust passivity of uncertain fuzzy Cohen–Grossberg BAM neural networks with time-varying delays. The derived criteria are expressed in terms of LMIs that can easily be solved using standard LMI software packages. Finally, a numerical example is presented to illustrate the applicability of the obtained results.

Acknowledgement

The work of R. Sakthivel is supported by the Korean Research Foundation Grant funded by the Korean Government (Grant No. KRF 2011-0005449).

References

[1] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, 1994.
[2] V. Belevitch, Classical Network Synthesis, Van Nostrand, New York, 1968.


[3] Y. Chen, H. Wang, A. Xue, Passivity analysis of stochastic time-delay neural networks, Nonlinear Dynamics 61 (2010) 71–82.
[4] Y. Chen, W. Li, W. Bi, Improved results on passivity analysis of uncertain neural networks with time-varying discrete and distributed delays, Neural Processing Letters 30 (2009) 155–169.
[5] M. Cohen, S. Grossberg, Absolute stability of global pattern formation and parallel memory storage by competitive neural networks, IEEE Transactions on Systems, Man and Cybernetics 13 (1983) 815–826.
[6] J. Fu, H. Zhang, T. Ma, Q. Zhang, On passivity analysis for stochastic neural networks with interval time-varying delay, Neurocomputing 73 (2010) 795–801.
[7] L. Hu, H. Liu, Y. Zhao, New stability criteria for BAM neural networks with time-varying delays, Neurocomputing 72 (2009) 3245–3252.
[8] D.H. Ji, J.H. Koo, S.C. Won, S.M. Lee, Ju.H. Park, Passivity-based control for Hopfield neural networks using convex representation, Applied Mathematics and Computation 217 (2011) 6168–6175.
[9] O.M. Kwon, J.H. Park, Exponential stability analysis for uncertain neural networks with interval time-varying delays, Applied Mathematics and Computation 212 (2009) 530–541.
[10] O.M. Kwon, J.H. Park, S.M. Lee, E.J. Cha, A new augmented Lyapunov–Krasovskii functional approach to exponential passivity for neural networks with time-varying delays, Applied Mathematics and Computation 217 (24) (2011) 10231–10238.
[11] S.M. Lee, O.M. Kwon, Ju.H. Park, A novel delay-dependent criterion for delayed neural networks of neutral type, Physics Letters A 374 (2010) 1843–1848.
[12] C. Li, Y. Li, Y. Ye, Exponential stability of fuzzy Cohen–Grossberg neural networks with time delays and impulsive effects, Communications in Nonlinear Science and Numerical Simulation 15 (2010) 3599–3606.
[13] C. Li, Ch. Li, X. Liao, T. Huang, Impulsive effects on stability of high-order BAM neural networks with time delays, Neurocomputing 74 (2011) 1541–1550.
[14] H. Li, H. Gao, P. Shi, New passivity analysis for neural networks with discrete and distributed delays, IEEE Transactions on Neural Networks 21 (2010) 1842–1847.
[15] C.Y. Lu, H.H. Tsai, T.J. Su, Delay-dependent approach to passivity analysis for uncertain neural networks with time-varying delay, Neural Processing Letters 27 (2008) 237–246.
[16] S. Mou, H. Gao, J. Lam, W. Qiang, A new criterion of delay-dependent asymptotic stability for Hopfield neural networks with time delay, IEEE Transactions on Neural Networks 19 (2008) 532–535.
[17] J.H. Park, Further results on passivity analysis of delayed cellular neural networks, Chaos, Solitons and Fractals 34 (2007) 1546–1551.
[18] J.H. Park, S.M. Lee, O.M. Kwon, On exponential stability of bidirectional associative memory neural networks with time-varying delays, Chaos, Solitons and Fractals 39 (2009) 1083–1091.
[19] J.H. Park, O.M. Kwon, Analysis on global stability of stochastic neural networks of neutral type, Modern Physics Letters B 22 (2008) 3159–3170.
[20] R. Sakthivel, R. Samidurai, S. Marshal Anthoni, New exponential stability criteria for stochastic BAM neural networks with impulses, Physica Scripta 82 (2010) 045802.
[21] R. Sakthivel, R. Samidurai, S. Marshal Anthoni, Exponential stability for stochastic neural networks of neutral type with impulsive effects, Modern Physics Letters B 24 (2010) 1099–1110.
[22] R. Sakthivel, R. Samidurai, S. Marshal Anthoni, Asymptotic stability of stochastic delayed recurrent neural networks with impulsive effects, Journal of Optimization Theory and Applications 147 (2010) 583–596.
[23] R. Sathy, P. Balasubramaniam, Stability analysis of fuzzy Markovian jumping Cohen–Grossberg BAM neural networks with mixed time-varying delays, Communications in Nonlinear Science and Numerical Simulation 16 (2011) 2054–2064.
[24] L. Sheng, M. Gao, H. Yang, Delay-dependent robust stability for uncertain stochastic fuzzy Hopfield neural networks with time-varying delays, Fuzzy Sets and Systems 160 (2009) 3503–3517.
[25] Q. Song, J. Liang, Z. Wang, Passivity analysis of discrete-time stochastic neural networks with time-varying delays, Neurocomputing 72 (2009) 1782–1788.
[26] Q. Song, Z. Zhao, Y. Li, Global exponential stability of BAM neural networks with distributed delays and reaction-diffusion terms, Physics Letters A 335 (2005) 213–225.
[27] M. Syed Ali, P. Balasubramaniam, Robust stability of uncertain stochastic fuzzy BAM neural networks with time varying delay, Physics Letters A 372 (2008) 5159–5166.
[28] T. Takagi, M. Sugeno, Fuzzy identification of systems and its applications to modeling and control, IEEE Transactions on Systems, Man and Cybernetics 15 (1985) 116–132.
[29] Y. Wang, Z. Wang, J. Liang, A delay fractioning approach to global synchronization of delayed complex networks with stochastic disturbances, Physics Letters A 372 (2008) 6066–6073.
[30] Y. Wang, Z. Wang, J. Liang, On robust stability of stochastic genetic regulatory networks with time delays: a delay fractioning approach, IEEE Transactions on Systems, Man and Cybernetics 40 (2010) 729–740.
[31] Y. Zhao, H. Gao, J. Lam, B. Du, Stability and stabilization of delayed TS fuzzy systems: a delay partitioning approach, IEEE Transactions on Fuzzy Systems 17 (2009) 750–762.