Delay-interval dependent robust stability criteria for stochastic neural networks with linear fractional uncertainties

Neurocomputing 72 (2009) 3675–3682

P. Balasubramaniam, S. Lakshmanan, R. Rakkiyappan
Department of Mathematics, Gandhigram Rural University, Gandhigram 624 302, Tamilnadu, India

Article history: Received 19 December 2008; received in revised form 14 March 2009; accepted 24 June 2009; communicated by S. Arik; available online 3 August 2009.

Abstract

In this paper, we study delay-interval dependent robust stability criteria for stochastic neural networks with linear fractional uncertainties. The time-varying delay is assumed to belong to an interval and is allowed to be a fast time-varying function. The uncertainty under consideration is of the linear fractional form, which includes norm-bounded uncertainty as a special case. Based on a new Lyapunov–Krasovskii functional, inequality techniques and stochastic stability theory, delay-interval dependent stability criteria are obtained in terms of linear matrix inequalities (LMIs). Finally, numerical examples are provided to demonstrate the reduced conservatism and effectiveness of the proposed LMI conditions.

MSC: 34K20; 34K50; 92B20

Keywords: Delay-interval dependent stability; Linear matrix inequality (LMI); Lyapunov–Krasovskii functional; Stochastic neural networks

1. Introduction

In the past two decades, neural networks (NNs) have received increasing interest owing to their applications in various areas such as aerospace, defense, robotics, telecommunications, signal processing, pattern recognition, static image processing, associative memory and combinatorial optimization [1]. Since integration and communication delays are unavoidably encountered in both biological and artificial neural systems, and may result in oscillation and instability, increasing interest has been focused on the stability analysis of NNs with time delays. Indeed, time delay is frequently a source of oscillation, divergence, or even instability and deterioration of NNs. Generally speaking, the stability results obtained so far for delayed NNs can be classified into two types: delay-independent stability [2] and delay-dependent stability [3]; the former does not include any information on the size of the delay, while the latter employs such information. For the delay-dependent type, much attention has been paid to reducing the conservatism of stability conditions. In practice, a time-varying interval delay is often encountered, that is, the range

The work of the authors was supported by UGC-SAP (DRS), New Delhi, India under sanction no. F510/6/DRS/2004 (SAP-1). The work of the third author was supported by CSIR-SRF under grant no. 09/715(0013)/2009-EMR-I. Corresponding author: Tel.: +91 451 2452371; fax: +91 451 2453071. E-mail address: [email protected] (P. Balasubramaniam).

doi:10.1016/j.neucom.2009.06.006

of the delay varies in an interval whose lower bound is not restricted to zero. Most of the results on stability of NNs with discrete interval time-varying delays have been reported in [4–7]. As discussed in [8–10], distributed delays should be incorporated into the model, since in some cases there may exist a distribution of propagation delays over a period of time. Therefore, both discrete and distributed delays should be taken into account when modelling realistic NNs. Uncertainties are frequently encountered in various engineering and communication systems. The characteristics of dynamic systems are significantly affected by the presence of uncertainty, even to the extent of instability in extreme situations. Recently, a new type of uncertainty, namely the linear fractional form, was considered in [11]; it includes norm-bounded uncertainty as a special case. In general, two kinds of disturbances are considered for a single NN model: parameter uncertainties and stochastic perturbations. Recently, some results on the stability of stochastic NNs with finite distributed delays have been reported in [8–10]. To the best of our knowledge, delay-interval dependent robust stability criteria for stochastic NNs with linear fractional uncertainties have not yet been investigated; this problem is important in both theory and applications and is also very challenging. Motivated by the above discussion, in this paper we investigate delay-interval dependent robust stability criteria for stochastic neural networks with linear fractional uncertainties. By


constructing a new Lyapunov–Krasovskii functional and employing some analysis techniques, sufficient conditions are derived for the considered stochastic system in terms of LMIs, which can be easily checked with the MATLAB LMI Control Toolbox. Numerical examples are given to illustrate the effectiveness and reduced conservatism of the proposed method.

Notations: Throughout this paper, $\mathbb{R}^n$ and $\mathbb{R}^{n\times n}$ denote, respectively, the $n$-dimensional Euclidean space and the set of all $n\times n$ real matrices. The superscript $T$ denotes transposition, and the notation $X\geq Y$ (respectively, $X>Y$), where $X$ and $Y$ are symmetric matrices, means that $X-Y$ is positive semi-definite (respectively, positive definite). $I_n$ is the $n\times n$ identity matrix and $|\cdot|$ is the Euclidean norm in $\mathbb{R}^n$. Moreover, let $(\Omega,\mathcal{F},\mathcal{P})$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\geq 0}$ satisfying the usual conditions, that is, the filtration contains all $\mathcal{P}$-null sets and is right continuous. The symbol $*$ denotes the symmetric block in a symmetric matrix. The arguments of a function or a matrix are sometimes omitted in the analysis when no confusion can arise.

2. Problem description and preliminaries

Consider the following Hopfield neural network with both discrete and distributed time-varying delays:
$$\dot y_i(t) = -a_i y_i(t) + \sum_{j=1}^{n} b_{ij}^{0} g_j(y_j(t)) + \sum_{j=1}^{n} c_{ij}^{1} g_j(y_j(t-\tau(t))) + \sum_{j=1}^{n} d_{ij}^{1} \int_{t-r(t)}^{t} g_j(y_j(s))\,ds + I_i, \quad i=1,2,\ldots,n, \tag{1}$$
or, equivalently, in vector form
$$\dot y(t) = -Ay(t) + Bg(y(t)) + Cg(y(t-\tau(t))) + D\int_{t-r(t)}^{t} g(y(s))\,ds + I, \tag{2}$$
where $y(t)=[y_1(t),y_2(t),\ldots,y_n(t)]^T\in\mathbb{R}^n$ denotes the state vector associated with the $n$ neurons. The matrix $A=\mathrm{diag}(a_1,a_2,\ldots,a_n)$ is a diagonal matrix with positive entries $a_i>0$; $B=(b_{ij}^0)_{n\times n}$, $C=(c_{ij}^1)_{n\times n}$ and $D=(d_{ij}^1)_{n\times n}$ denote, respectively, the connection weights, the discrete delayed connection weights and the distributed delayed connection weights of the $j$-th neuron on the $i$-th neuron. $g(y)=[g_1(y_1(t)),g_2(y_2(t)),\ldots,g_n(y_n(t))]^T\in\mathbb{R}^n$ is the activation function with $g(0)=0$, and $I=[I_1,I_2,\ldots,I_n]^T$ is a constant external input.

In order to obtain our main results, the following assumptions are made throughout this paper.

$(A_1)$ The activation function $g$ is bounded, continuously differentiable with $g(0)=0$, and satisfies the Lipschitz condition
$$|g_i(x_1)-g_i(x_2)| \leq l_i|x_1-x_2|, \quad \forall x_1,x_2\in\mathbb{R},\ i=1,\ldots,n,$$
where $L=\mathrm{diag}(l_1,l_2,\ldots,l_n)>0$ is a positive diagonal matrix. By $(A_1)$ we then have $|g_i(x)|\leq l_i|x|$ for all $x\in\mathbb{R}$, $i=1,\ldots,n$.

$(A_2)$ The time-varying delay $\tau(t)$ satisfies
$$0\leq h_1\leq\tau(t)\leq h_2, \qquad \dot\tau(t)\leq\mu<1,$$
where $h_1$, $h_2$ and $\mu$ are constants.

Assume that $y^{*}=(y_1^{*},y_2^{*},\ldots,y_n^{*})^T$ is an equilibrium point of system (2). The transformation $x(t)=y(t)-y^{*}$ transforms system (2) into
$$\dot x(t) = -Ax(t) + Bf(x(t)) + Cf(x(t-\tau(t))) + D\int_{t-r(t)}^{t} f(x(s))\,ds, \tag{3}$$
where $x(t)$ is the state vector of the transformed system and $f_j(x_j(t)) = g_j(x_j(t)+y_j^{*}) - g_j(y_j^{*})$, with $f_j(0)=0$ for $j=1,2,\ldots,n$.

Consider now the following Hopfield neural network with parameter uncertainties and stochastic perturbations:
$$dx(t) = \Big[-A(t)x(t) + B(t)f(x(t)) + C(t)f(x(t-\tau(t))) + D(t)\int_{t-r(t)}^{t} f(x(s))\,ds\Big]dt + \Big[A_0(t)x(t) + A_1(t)x(t-\tau(t)) + B_1(t)f(x(t)) + C_1(t)f(x(t-\tau(t))) + D_1(t)\int_{t-r(t)}^{t} f(x(s))\,ds\Big]dw(t), \tag{4}$$
$$x(t)=\phi(t), \quad \forall t\in[-2\bar h,0], \qquad \bar h=\max\{h_2,\bar r\}, \quad \bar r=\max_t r(t), \tag{5}$$
where $w(t)$ denotes a one-dimensional Brownian motion satisfying $\mathbb{E}\{dw(t)\}=0$ and $\mathbb{E}\{dw(t)^2\}=dt$. The matrices are $A(t)=A+\Delta A(t)$, $B(t)=B+\Delta B(t)$, $C(t)=C+\Delta C(t)$, $D(t)=D+\Delta D(t)$, $A_0(t)=A_0+\Delta A_0(t)$, $A_1(t)=A_1+\Delta A_1(t)$, $B_1(t)=B_1+\Delta B_1(t)$, $C_1(t)=C_1+\Delta C_1(t)$ and $D_1(t)=D_1+\Delta D_1(t)$, where $A=\mathrm{diag}(a_1,a_2,\ldots,a_n)$ has positive entries $a_i>0$ and $A_0$, $A_1$, $B_1$, $C_1$, $D_1$ are connection weight matrices of appropriate dimensions. The parametric uncertainties are assumed to be of the form
$$[\Delta A(t),\Delta B(t),\Delta C(t),\Delta D(t),\Delta A_0(t),\Delta A_1(t),\Delta B_1(t),\Delta C_1(t),\Delta D_1(t)] = H\Delta(t)[E_1,E_2,E_3,E_4,E_5,E_6,E_7,E_8,E_9], \tag{6}$$
where $H$ and $E_1,\ldots,E_9$ are known matrices. The class of parametric uncertainties $\Delta(t)$ considered satisfies
$$\Delta(t) = [I - F(t)J]^{-1}F(t), \tag{7}$$
where $J$ is also a known matrix satisfying
$$I - JJ^T > 0, \tag{8}$$
and $F(t)$ is an uncertain matrix satisfying
$$F^T(t)F(t) \leq I. \tag{9}$$
It is assumed that all elements of $F(t)$ are Lebesgue measurable. The matrices $\Delta A(t)$, $\Delta B(t)$, $\Delta C(t)$, $\Delta D(t)$, $\Delta A_0(t)$, $\Delta A_1(t)$, $\Delta B_1(t)$, $\Delta C_1(t)$, $\Delta D_1(t)$ are said to be admissible if (6)–(9) hold. Here $\phi(t)\in C([-2\bar h,0];\mathbb{R}^n)$ is the initial function and $f(x)=[f_1(x_1),f_2(x_2),\ldots,f_n(x_n)]^T\in\mathbb{R}^n$ is the activation function with $f(0)=0$.

Definition 2.1. The stochastic neural network (4) is said to be stochastically stable if there exists a positive scalar $c>0$ such that
$$\lim_{T\to\infty} \mathbb{E}\int_0^T x^T(t)x(t)\,dt \leq c \sup_{s\in[-\bar h,0]} \mathbb{E}\|\phi(s)\|^2.$$
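To make the model concrete, the following minimal Python sketch simulates one trajectory of a system of the form (4) by the Euler–Maruyama method. It is an illustration added to this text rather than part of the original analysis: the 2×2 weights, the tanh activation (which satisfies $(A_1)$ with $L=I$) and the constant delays are placeholder assumptions, and the uncertainty perturbations are set to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2x2 data (not from the paper); f = tanh satisfies (A1) with L = I.
A  = np.diag([1.5, 2.0])
B  = np.array([[0.2, -0.1], [0.1, 0.3]])
C  = np.array([[0.1,  0.2], [-0.2, 0.1]])
D  = np.array([[0.05, 0.0], [0.0, 0.05]])
A0 = 0.1 * np.eye(2)
A1 = 0.1 * np.eye(2)
f  = np.tanh

dt, T  = 1e-3, 10.0
tau, r = 0.8, 0.5                      # constant delays, for simplicity
n_tau, n_r = int(tau / dt), int(r / dt)

N = int(T / dt)
x = np.zeros((N + 1, 2))
x[: n_tau + 1] = np.array([0.5, -0.3])   # constant initial history phi

for k in range(n_tau, N):
    x_del = x[k - n_tau]                                     # x(t - tau)
    dist  = f(x[max(0, k - n_r): k + 1]).sum(axis=0) * dt    # ~ int_{t-r}^t f(x(s)) ds
    drift = -A @ x[k] + B @ f(x[k]) + C @ f(x_del) + D @ dist
    diff  = A0 @ x[k] + A1 @ x_del                           # stochastic perturbation
    dw    = rng.normal(0.0, np.sqrt(dt))                     # Brownian increment
    x[k + 1] = x[k] + drift * dt + diff * dw

print("final state:", x[-1])
```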

Now, we give the following lemmas, which are essential for the proof of the main results.

Lemma 2.2 (Schur complement). Given constant matrices $\Omega_1$, $\Omega_2$ and $\Omega_3$ of appropriate dimensions, where $\Omega_1^T=\Omega_1$ and $\Omega_2^T=\Omega_2>0$, then $\Omega_1+\Omega_3^T\Omega_2^{-1}\Omega_3<0$ if and only if
$$\begin{bmatrix} \Omega_1 & \Omega_3^T \\ * & -\Omega_2 \end{bmatrix}<0 \quad\text{or}\quad \begin{bmatrix} -\Omega_2 & \Omega_3 \\ * & \Omega_1 \end{bmatrix}<0.$$

Lemma 2.3 (Gu [12]). For any $n\times n$ constant matrix $M>0$, any scalars $a$ and $b$ with $a<b$, and a vector function $x(t):[a,b]\to\mathbb{R}^n$ such that the integrations concerned are well defined, the following inequality holds:
$$\left[\int_a^b x(s)\,ds\right]^T M \left[\int_a^b x(s)\,ds\right] \leq (b-a)\int_a^b x^T(s)Mx(s)\,ds.$$
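As a quick numerical sanity check of Lemma 2.2 (an illustration added here, not from the original paper), the following Python snippet builds a random instance and confirms that the condensed condition and its block form agree; all matrices are randomly generated placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Random symmetric Omega1, random Omega3, random positive definite Omega2.
O1 = rng.standard_normal((n, n)); O1 = (O1 + O1.T) / 2 - 3 * np.eye(n)
O3 = rng.standard_normal((n, n))
O2 = rng.standard_normal((n, n)); O2 = O2 @ O2.T + np.eye(n)

def negdef(X):
    return np.linalg.eigvalsh((X + X.T) / 2).max() < 0

cond  = negdef(O1 + O3.T @ np.linalg.solve(O2, O3))   # Omega1 + Omega3^T Omega2^{-1} Omega3 < 0
block = negdef(np.block([[O1, O3.T], [O3, -O2]]))     # equivalent block LMI
assert cond == block
print("Schur complement equivalence holds:", cond, block)
```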

Lemma 2.4 (Zhang et al. [13]). Suppose $\Delta(t)$ is given by (7)–(9). Given matrices $M=M^T$, $S$ and $N$ of appropriate dimensions, the inequality
$$M + S\Delta(t)N + N^T\Delta^T(t)S^T < 0$$
holds for all $F(t)$ such that $F^T(t)F(t)\leq I$ if and only if, for some $\delta>0$,
$$\begin{bmatrix} M & S & \delta N^T \\ S^T & -\delta I & \delta J^T \\ \delta N & \delta J & -\delta I \end{bmatrix} < 0.$$
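Note that conditions (8) and (9) guarantee that $I-F(t)J$ is nonsingular, since $\|J\|<1$ and $\|F(t)\|\leq 1$ imply $\|F(t)J\|<1$; hence the linear fractional uncertainty (7) is always well defined. The short Python check below (an illustration added here, not from the paper) samples admissible $F$ for a fixed $J$ with $\|J\|<1$ and confirms invertibility.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

# Fixed J with ||J|| < 1, i.e. I - J J^T > 0  (condition (8)).
J = rng.standard_normal((n, n))
J *= 0.9 / np.linalg.norm(J, 2)

for _ in range(1000):
    # Random F with ||F|| <= 1, i.e. F^T F <= I  (condition (9)).
    F = rng.standard_normal((n, n))
    s = np.linalg.norm(F, 2)
    if s > 1.0:
        F /= s
    # ||F J|| < 1, hence I - F J is nonsingular and Delta(t) is well defined.
    assert np.linalg.norm(F @ J, 2) < 1.0
    Delta = np.linalg.solve(np.eye(n) - F @ J, F)   # Delta = (I - F J)^{-1} F

print("all sampled Delta(t) well defined")
```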

3. Main result

Define two new state variables for the stochastic neural network (4):
$$y(t) = -A(t)x(t) + B(t)f(x(t)) + C(t)f(x(t-\tau(t))) + D(t)\int_{t-r(t)}^{t} f(x(s))\,ds \tag{10}$$
and
$$g(t) = A_0(t)x(t) + A_1(t)x(t-\tau(t)) + B_1(t)f(x(t)) + C_1(t)f(x(t-\tau(t))) + D_1(t)\int_{t-r(t)}^{t} f(x(s))\,ds; \tag{11}$$
then system (4) becomes
$$dx(t) = y(t)\,dt + g(t)\,dw(t). \tag{12}$$
Moreover, the following equality holds:
$$x(t)-x(t-\tau(t)) = \int_{t-\tau(t)}^{t} dx(s) = \int_{t-\tau(t)}^{t} y(s)\,ds + \int_{t-\tau(t)}^{t} g(s)\,dw(s). \tag{13}$$

In order to discuss robust stability of the stochastic neural network (4) with linear fractional parametric uncertainties (6), we first consider the case in which the matrices $A$, $B$, $C$, $D$, $A_0$, $A_1$, $B_1$, $C_1$, $D_1$ are fixed, that is, $\Delta A=0$, $\Delta B=0$, $\Delta C=0$, $\Delta D=0$, $\Delta A_0=0$, $\Delta A_1=0$, $\Delta B_1=0$, $\Delta C_1=0$, $\Delta D_1=0$. For this case, the following theorem holds.

Theorem 3.1. Consider NN (4) satisfying assumptions $(A_1)$ and $(A_2)$. The equilibrium solution of the stochastic neural network (4) is globally asymptotically stable in the mean square if there exist positive definite matrices $P_1=P_1^T>0$, $R_l=R_l^T>0$ $(l=1,2,3)$, $Q_j=Q_j^T>0$ $(j=1,2,3,4)$, diagonal matrices $K>0$, $K_1>0$, $K_2>0$, and any matrices $P_i$ $(i=2,\ldots,46)$ such that the following LMI holds:
$$\Pi_1 = \begin{bmatrix} \Omega & N & M & S \\ * & -\frac{1}{h_2}R_2 & 0 & 0 \\ * & * & -\frac{1}{h_2-h_1}R_3 & 0 \\ * & * & * & -\frac{1}{h_2-h_1}(R_2+R_3) \end{bmatrix} < 0, \tag{14}$$
where $\Omega=(\Omega_{ij})_{9\times 9}$ is the symmetric block matrix
$$\Omega = \begin{bmatrix} \Omega_{11} & \Omega_{12} & \cdots & \Omega_{19} \\ * & \Omega_{22} & \cdots & \Omega_{29} \\ \vdots & & \ddots & \vdots \\ * & * & \cdots & \Omega_{99} \end{bmatrix} < 0 \tag{15}$$
with
$\Omega_{11}=Q_1+Q_2+Q_3+P_2+P_2^T-P_{29}A-A^TP_{29}^T+P_{38}A_0+A_0^TP_{38}^T$;
$\Omega_{12}=-P_2+P_3^T-P_{11}+P_{20}-A^TP_{30}^T+A_0^TP_{39}^T+P_{38}A_1$;
$\Omega_{13}=P_4^T+P_{11}-A^TP_{31}^T+A_0^TP_{40}^T$;
$\Omega_{14}=P_5^T-P_{20}-A^TP_{32}^T+A_0^TP_{41}^T$;
$\Omega_{15}=P_1+P_6^T-P_{29}-A^TP_{33}^T+A_0^TP_{42}^T$;
$\Omega_{16}=P_7^T-A^TP_{34}^T-P_{38}+A_0^TP_{43}^T$;
$\Omega_{17}=P_8^T+P_{29}B-A^TP_{35}^T+P_{38}B_1+A_0^TP_{44}^T+LK_1$;
$\Omega_{18}=P_9^T+P_{29}C-A^TP_{36}^T+P_{38}C_1+A_0^TP_{45}^T$;
$\Omega_{19}=P_{10}^T+P_{29}D-A^TP_{37}^T+P_{38}D_1+A_0^TP_{46}^T$;
$\Omega_{22}=-(1-\mu)Q_1-P_3-P_3^T-P_{12}-P_{12}^T+P_{21}+P_{21}^T+P_{39}A_1+A_1^TP_{39}^T$;
$\Omega_{23}=-P_4^T+P_{12}-P_{13}^T+P_{22}^T+A_1^TP_{40}^T$;
$\Omega_{24}=-P_5^T-P_{14}^T-P_{21}+P_{23}^T+A_1^TP_{41}^T$;
$\Omega_{25}=-P_6^T-P_{15}^T-P_{30}+P_{24}^T+A_1^TP_{42}^T$;
$\Omega_{26}=-P_7^T-P_{16}^T-P_{39}+P_{25}^T+A_1^TP_{43}^T$;
$\Omega_{27}=-P_8^T-P_{17}^T+P_{26}^T+P_{30}B+P_{39}B_1+A_1^TP_{44}^T$;
$\Omega_{28}=-P_9^T-P_{18}^T+P_{27}^T+P_{30}C+P_{39}C_1+A_1^TP_{45}^T+LK_2$;
$\Omega_{29}=-P_{10}^T-P_{19}^T+P_{28}^T+P_{30}D+P_{39}D_1+A_1^TP_{46}^T$;
$\Omega_{33}=-Q_2+P_{13}+P_{13}^T$;
$\Omega_{34}=P_{14}^T-P_{22}$;
$\Omega_{35}=P_{15}^T-P_{31}$;
$\Omega_{36}=P_{16}^T-P_{40}$;
$\Omega_{37}=P_{17}^T+P_{31}B+P_{40}B_1$;
$\Omega_{38}=P_{18}^T+P_{31}C+P_{40}C_1$;
$\Omega_{39}=P_{19}^T+P_{31}D+P_{40}D_1$;
$\Omega_{44}=-Q_3-P_{23}-P_{23}^T$;
$\Omega_{45}=-P_{24}^T-P_{32}$;
$\Omega_{46}=-P_{25}^T-P_{41}$;
$\Omega_{47}=-P_{26}^T+P_{32}B+P_{41}B_1$;
$\Omega_{48}=-P_{27}^T+P_{32}C+P_{41}C_1$;
$\Omega_{49}=-P_{28}^T+P_{32}D+P_{41}D_1$;
$\Omega_{55}=h_2R_2+(h_2-h_1)R_3-P_{33}-P_{33}^T$;
$\Omega_{56}=-P_{34}^T-P_{42}$;
$\Omega_{57}=K+P_{33}B-P_{35}^T+P_{42}B_1$;
$\Omega_{58}=P_{33}C-P_{36}^T+P_{42}C_1$;
$\Omega_{59}=P_{33}D-P_{37}^T+P_{42}D_1$;
$\Omega_{66}=P_1+K-P_{43}-P_{43}^T$;
$\Omega_{67}=P_{34}B+P_{43}B_1-P_{44}^T$;
$\Omega_{68}=P_{34}C+P_{43}C_1-P_{45}^T$;
$\Omega_{69}=P_{34}D+P_{43}D_1-P_{46}^T$;
$\Omega_{77}=-2K_1+\bar rR_1+Q_4+P_{35}B+B^TP_{35}^T+P_{44}B_1+B_1^TP_{44}^T$;
$\Omega_{78}=P_{35}C+B^TP_{36}^T+P_{44}C_1+B_1^TP_{45}^T$;
$\Omega_{79}=P_{35}D+B^TP_{37}^T+P_{44}D_1+B_1^TP_{46}^T$;
$\Omega_{88}=-(1-\mu)Q_4+P_{36}C+C^TP_{36}^T+P_{45}C_1+C_1^TP_{45}^T-2K_2$;
$\Omega_{89}=P_{36}D+C^TP_{37}^T+P_{45}D_1+C_1^TP_{46}^T$;
$\Omega_{99}=P_{37}D+D^TP_{37}^T+P_{46}D_1+D_1^TP_{46}^T-\frac{1}{\bar r}R_1$;
and
$$N = [P_2\ P_3\ P_4\ P_5\ P_6\ P_7\ P_8\ P_9\ P_{10}]^T, \quad M = [P_{11}\ P_{12}\ P_{13}\ P_{14}\ P_{15}\ P_{16}\ P_{17}\ P_{18}\ P_{19}]^T,$$
$$S = [P_{20}\ P_{21}\ P_{22}\ P_{23}\ P_{24}\ P_{25}\ P_{26}\ P_{27}\ P_{28}]^T, \quad U = [P_{29}\ P_{30}\ P_{31}\ P_{32}\ P_{33}\ P_{34}\ P_{35}\ P_{36}\ P_{37}]^T,$$
$$V = [P_{38}\ P_{39}\ P_{40}\ P_{41}\ P_{42}\ P_{43}\ P_{44}\ P_{45}\ P_{46}]^T.$$

Proof. Consider the Lyapunov–Krasovskii functional
$$V(x_t,t) = V_1(x_t,t)+V_2(x_t,t)+V_3(x_t,t)+V_4(x_t,t)+V_5(x_t,t), \tag{16}$$
where
$$V_1(x_t,t) = \xi^T(t)EP\xi(t), \qquad V_2(x_t,t) = 2\sum_{i=1}^{n} k_i \int_0^{x_i(t)} f_i(s)\,ds,$$
$$V_3(x_t,t) = \int_{-\bar r}^{0}\int_{t+\theta}^{t} f^T(x(s))R_1 f(x(s))\,ds\,d\theta,$$
$$V_4(x_t,t) = \int_{-h_2}^{0}\int_{t+\theta}^{t} y^T(s)R_2 y(s)\,ds\,d\theta + \int_{-h_2}^{-h_1}\int_{t+\theta}^{t} y^T(s)R_3 y(s)\,ds\,d\theta,$$
$$V_5(x_t,t) = \int_{t-\tau(t)}^{t} x^T(s)Q_1 x(s)\,ds + \int_{t-h_1}^{t} x^T(s)Q_2 x(s)\,ds + \int_{t-h_2}^{t} x^T(s)Q_3 x(s)\,ds + \int_{t-\tau(t)}^{t} f^T(x(s))Q_4 f(x(s))\,ds,$$
with $k_i$ the diagonal entries of $K$, the extended state vector
$$\xi^T(t) = \Big[x^T(t)\ \ x^T(t-\tau(t))\ \ x^T(t-h_1)\ \ x^T(t-h_2)\ \ y^T(t)\ \ g^T(t)\ \ f^T(x(t))\ \ f^T(x(t-\tau(t)))\ \ \int_{t-r(t)}^{t} f^T(x(s))\,ds\Big]$$
and the structured matrices
$$E^T = \begin{bmatrix} I & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}, \qquad P = \begin{bmatrix} P_1 & 0 & 0 & \cdots & 0 \\ P_2 & P_3 & P_4 & \cdots & P_{10} \\ P_{11} & P_{12} & P_{13} & \cdots & P_{19} \\ P_{20} & P_{21} & P_{22} & \cdots & P_{28} \\ P_{29} & P_{30} & P_{31} & \cdots & P_{37} \\ P_{38} & P_{39} & P_{40} & \cdots & P_{46} \end{bmatrix},$$
which satisfy
$$EP = P^TE^T \geq 0.$$
It is noted that $\xi^T(t)EP\xi(t)$ is actually $x^T(t)P_1x(t)$. Then it can be obtained by Itô's formula that
$$dV(x_t,t) = \mathcal{L}V(x_t,t)\,dt + 2x^T(t)P_1g(t)\,dw(t). \tag{17}$$

On the other hand, from stochastic theory the following equalities are true:
$$Z_1(t) = x(t)-x(t-\tau(t))-\int_{t-\tau(t)}^{t} y(s)\,ds - \int_{t-\tau(t)}^{t} g(s)\,dw(s) = 0,$$
$$Z_2(t) = x(t-h_1)-x(t-\tau(t))-\int_{t-\tau(t)}^{t-h_1} y(s)\,ds - \int_{t-\tau(t)}^{t-h_1} g(s)\,dw(s) = 0,$$
$$Z_3(t) = x(t-\tau(t))-x(t-h_2)-\int_{t-h_2}^{t-\tau(t)} y(s)\,ds - \int_{t-h_2}^{t-\tau(t)} g(s)\,dw(s) = 0,$$
$$Z_4(t) = -A(t)x(t)+B(t)f(x(t))+C(t)f(x(t-\tau(t)))+D(t)\int_{t-r(t)}^{t} f(x(s))\,ds - y(t) = 0,$$
$$Z_5(t) = A_0(t)x(t)+A_1(t)x(t-\tau(t))+B_1(t)f(x(t))+C_1(t)f(x(t-\tau(t)))+D_1(t)\int_{t-r(t)}^{t} f(x(s))\,ds - g(t) = 0.$$
Therefore, since each $Z_i(t)=0$, these null terms may be added freely, and
$$\mathcal{L}V_1(x_t,t) = 2x^T(t)P_1y(t) + g^T(t)P_1g(t) + 2\xi^T(t)P^T\begin{bmatrix}0\\ Z_1(t)\\ Z_2(t)\\ Z_3(t)\\ Z_4(t)\\ Z_5(t)\end{bmatrix} = 2x^T(t)P_1y(t) + g^T(t)P_1g(t) + 2\xi^T(t)\big[NZ_1(t)+MZ_2(t)+SZ_3(t)+UZ_4(t)+VZ_5(t)\big], \tag{18}$$
$$\mathcal{L}V_2(x_t,t) = 2f^T(x(t))Ky(t) + g^T(t)Kg(t), \tag{19}$$
$$\mathcal{L}V_3(x_t,t) \leq \bar r f^T(x(t))R_1f(x(t)) - \int_{t-r(t)}^{t} f^T(x(s))R_1f(x(s))\,ds, \tag{20}$$
$$\mathcal{L}V_4(x_t,t) = h_2y^T(t)R_2y(t) - \int_{t-h_2}^{t} y^T(s)R_2y(s)\,ds + (h_2-h_1)y^T(t)R_3y(t) - \int_{t-h_2}^{t-h_1} y^T(s)R_3y(s)\,ds, \tag{21}$$
$$\mathcal{L}V_5(x_t,t) \leq x^T(t)Q_1x(t) - (1-\mu)x^T(t-\tau(t))Q_1x(t-\tau(t)) + x^T(t)Q_2x(t) - x^T(t-h_1)Q_2x(t-h_1) + x^T(t)Q_3x(t) - x^T(t-h_2)Q_3x(t-h_2) + f^T(x(t))Q_4f(x(t)) - (1-\mu)f^T(x(t-\tau(t)))Q_4f(x(t-\tau(t))). \tag{22}$$
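For readers less familiar with the stochastic calculus behind (17)–(19), the quadratic terms $g^T(t)P_1g(t)$ and $g^T(t)Kg(t)$ are the Itô corrections. For a twice continuously differentiable $V$ and $dx(t)=y(t)\,dt+g(t)\,dw(t)$ with scalar Brownian motion $w(t)$, Itô's formula gives (a standard fact recalled here for convenience, not a statement from the original paper)
$$dV(x(t)) = \Big[(\nabla V(x(t)))^T y(t) + \tfrac{1}{2}\,g^T(t)\,\nabla^2V(x(t))\,g(t)\Big]dt + (\nabla V(x(t)))^T g(t)\,dw(t).$$
Taking $V(x)=x^TP_1x$ yields $\nabla V=2P_1x$ and $\nabla^2V=2P_1$, which produces exactly the drift $2x^T(t)P_1y(t)+g^T(t)P_1g(t)$ and the martingale term $2x^T(t)P_1g(t)\,dw(t)$ appearing in (17) and (18).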

It is obvious that
$$x^T(t)LK_1f(x(t)) - f^T(x(t))K_1f(x(t)) \geq 0, \tag{23}$$
$$x^T(t-\tau(t))LK_2f(x(t-\tau(t))) - f^T(x(t-\tau(t)))K_2f(x(t-\tau(t))) \geq 0. \tag{24}$$
Then, by Lemma 2.3 and using $0\leq h_1\leq\tau(t)\leq h_2$ and $0<r(t)\leq\bar r$, we have
$$-\int_{t-r(t)}^{t} f^T(x(s))R_1f(x(s))\,ds \leq -\frac{1}{\bar r}\left[\int_{t-r(t)}^{t} f(x(s))\,ds\right]^T R_1 \left[\int_{t-r(t)}^{t} f(x(s))\,ds\right], \tag{25}$$
$$-\int_{t-\tau(t)}^{t} y^T(s)R_2y(s)\,ds \leq -\frac{1}{h_2}\left[\int_{t-\tau(t)}^{t} y(s)\,ds\right]^T R_2 \left[\int_{t-\tau(t)}^{t} y(s)\,ds\right], \tag{26}$$
$$-\int_{t-h_2}^{t-\tau(t)} y^T(s)(R_2+R_3)y(s)\,ds \leq -\frac{1}{h_2-h_1}\left[\int_{t-h_2}^{t-\tau(t)} y(s)\,ds\right]^T (R_2+R_3) \left[\int_{t-h_2}^{t-\tau(t)} y(s)\,ds\right], \tag{27}$$
$$-\int_{t-\tau(t)}^{t-h_1} y^T(s)R_3y(s)\,ds \leq -\frac{1}{h_2-h_1}\left[\int_{t-\tau(t)}^{t-h_1} y(s)\,ds\right]^T R_3 \left[\int_{t-\tau(t)}^{t-h_1} y(s)\,ds\right]. \tag{28}$$
Substituting (18)–(28) into (17), we have
$$dV(x_t,t) \leq \psi^T(t)\Pi_1\psi(t)\,dt + \zeta(dw(t)), \tag{29}$$
where $\Pi_1$ is defined in Theorem 3.1, with
$$\psi^T(t) = \Big[\xi^T(t)\ \ \int_{t-\tau(t)}^{t} y^T(s)\,ds\ \ \int_{t-\tau(t)}^{t-h_1} y^T(s)\,ds\ \ \int_{t-h_2}^{t-\tau(t)} y^T(s)\,ds\Big]$$
and
$$\zeta(dw(t)) = -2\xi^T(t)N\int_{t-\tau(t)}^{t} g(s)\,dw(s) - 2\xi^T(t)M\int_{t-\tau(t)}^{t-h_1} g(s)\,dw(s) - 2\xi^T(t)S\int_{t-h_2}^{t-\tau(t)} g(s)\,dw(s) + 2x^T(t)P_1g(t)\,dw(t).$$
Since $\Pi_1<0$, there exists a scalar $\alpha>0$ such that
$$\Pi_1 + \mathrm{diag}\{\alpha I_n,0,\ldots,0\} < 0.$$
Taking expectations in (29), the stochastic integrals in $\zeta(dw(t))$ have zero mean, and hence we have
$$\mathbb{E}\frac{dV(x_t,t)}{dt} \leq \mathbb{E}\big(\psi^T(t)\Pi_1\psi(t)\big) \leq -\alpha\,\mathbb{E}|x(t)|^2.$$
Thus, if $\Pi_1<0$, the stochastic system (4) is asymptotically stable in the mean square. The proof is completed. □

Theorem 3.2. Consider NN (4) satisfying assumptions $(A_1)$ and $(A_2)$. The equilibrium solution of the stochastic neural network (4) with linear fractional uncertainties (6) is globally asymptotically stable in the mean square if there exist scalars $\epsilon_1>0$, $\epsilon_2>0$, positive definite matrices $P_1=P_1^T>0$, $R_l=R_l^T>0$ $(l=1,2,3)$, $Q_j=Q_j^T>0$ $(j=1,2,3,4)$, diagonal matrices $K>0$, $K_1>0$, $K_2>0$, and any matrices $P_i$ $(i=2,\ldots,46)$ such that the following LMI holds:
$$\begin{bmatrix} \Omega & N & M & S & S_1 & \epsilon_1N_1^T & S_2 & \epsilon_2N_2^T \\ * & -\frac{1}{h_2}R_2 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & -\frac{1}{h_2-h_1}R_3 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & -\frac{1}{h_2-h_1}(R_2+R_3) & 0 & 0 & 0 & 0 \\ * & * & * & * & -\epsilon_1I & \epsilon_1J & 0 & 0 \\ * & * & * & * & * & -\epsilon_1I & 0 & 0 \\ * & * & * & * & * & * & -\epsilon_2I & \epsilon_2J \\ * & * & * & * & * & * & * & -\epsilon_2I \end{bmatrix} < 0, \tag{30}$$
where $\Omega$ is defined in (15) and
$$S_1 = [HP_{29}\ HP_{30}\ HP_{31}\ HP_{32}\ HP_{33}\ HP_{34}\ HP_{35}\ HP_{36}\ HP_{37}]^T, \quad N_1 = [E_1\ 0\ 0\ 0\ 0\ 0\ E_2\ E_3\ E_4],$$
$$S_2 = [HP_{38}\ HP_{39}\ HP_{40}\ HP_{41}\ HP_{42}\ HP_{43}\ HP_{44}\ HP_{45}\ HP_{46}]^T, \quad N_2 = [E_5\ E_6\ 0\ 0\ 0\ 0\ E_7\ E_8\ E_9].$$

Proof. Assume that inequality (30) holds. It can be seen that (30) can be rewritten as
$$\widetilde\Psi = \begin{bmatrix} \Pi_1 & S_1 & \epsilon_1N_1^T & S_2 & \epsilon_2N_2^T \\ * & -\epsilon_1I & \epsilon_1J & 0 & 0 \\ * & * & -\epsilon_1I & 0 & 0 \\ * & * & * & -\epsilon_2I & \epsilon_2J \\ * & * & * & * & -\epsilon_2I \end{bmatrix} < 0,$$
where $\Pi_1$ is defined in (14). Thus, according to Lemma 2.4,
$$\Psi = \Pi_1 + S_1\Delta(t)N_1 + N_1^T\Delta^T(t)S_1^T + S_2\Delta(t)N_2 + N_2^T\Delta^T(t)S_2^T < 0$$
holds for all $F(t)$ with $F^T(t)F(t)\leq I$. It can be verified that $\Psi$ is exactly $\Pi_1$ of (14) with $A$, $B$, $C$, $D$, $A_0$, $A_1$, $B_1$, $C_1$ and $D_1$ replaced by $A+H\Delta(t)E_1$, $B+H\Delta(t)E_2$, $C+H\Delta(t)E_3$, $D+H\Delta(t)E_4$, $A_0+H\Delta(t)E_5$, $A_1+H\Delta(t)E_6$, $B_1+H\Delta(t)E_7$, $C_1+H\Delta(t)E_8$ and $D_1+H\Delta(t)E_9$, respectively, so the conclusion follows from Theorem 3.1. □
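The criteria (14) and (30) are feasibility problems in the decision variables $P_1$, $R_l$, $Q_j$, $K$, $K_1$, $K_2$ and $P_2,\ldots,P_{46}$, and can be checked with any semidefinite-programming solver (the authors used the MATLAB LMI Control Toolbox). The following Python/cvxpy fragment is a deliberately small illustration of how such a test is posed; it checks a classical delay-independent-style Lyapunov–Krasovskii LMI rather than the full block LMI above, and the system data are placeholder assumptions.

```python
import cvxpy as cp
import numpy as np

# Placeholder system data (illustrative only).
A  = np.diag([1.5, 2.0])                 # -A x(t) drift, A > 0 diagonal
C  = np.array([[0.1, 0.2], [-0.2, 0.1]])
mu = 0.5                                 # bound on the delay derivative

n = 2
P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)

# Block LMI for dx = (-A x(t) + C x(t - tau(t))) dt, feasibility in P, Q.
lmi = cp.bmat([[-A.T @ P - P @ A + Q, P @ C],
               [C.T @ P, -(1 - mu) * Q]])
lmi = (lmi + lmi.T) / 2                  # symmetrize for the solver

eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n), Q >> eps * np.eye(n),
                   lmi << -eps * np.eye(2 * n)])
prob.solve(solver=cp.SCS)
print("LMI feasible:", prob.status == cp.OPTIMAL)
```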

Remark 3.3. It is clear that if we set $J=0$, the linear fractional uncertainty reduces to the routine norm-bounded uncertainty. Therefore, one can easily derive corresponding results for routine norm-bounded uncertainties from Theorem 3.2.

In the following, we discuss robust stability for the following uncertain stochastic neural network with time-varying delays:
$$dx(t) = [-A(t)x(t)+B(t)f(x(t))+C(t)f(x(t-\tau(t)))]\,dt + [A_0(t)x(t)+A_1(t)x(t-\tau(t))+B_1(t)f(x(t))+C_1(t)f(x(t-\tau(t)))]\,dw(t), \tag{31}$$
where the time delay $\tau(t)$ satisfies $0\leq h_1\leq\tau(t)\leq h_2$, $\dot\tau(t)\leq\mu$. Then we have the following result.

Theorem 3.4. Consider NN (31) satisfying assumptions $(A_1)$ and $(A_2)$. The equilibrium solution of the stochastic neural network (31) with linear fractional uncertainties (6) is globally asymptotically stable in the mean square if there exist scalars $\epsilon_1>0$, $\epsilon_2>0$, positive definite matrices $P_1=P_1^T>0$, $R_l=R_l^T>0$ $(l=2,3)$, $Q_j=Q_j^T>0$ $(j=1,2,3,4)$, diagonal matrices $K>0$, $K_1>0$, $K_2>0$, and any matrices $P_i$ $(i=2,\ldots,41)$ such that the following LMI holds:

$$\begin{bmatrix} \bar\Omega & \bar N & \bar M & \bar S & \bar S_1 & \epsilon_1\bar N_1^T & \bar S_2 & \epsilon_2\bar N_2^T \\ * & -\frac{1}{h_2}R_2 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & -\frac{1}{h_2-h_1}R_3 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & -\frac{1}{h_2-h_1}(R_2+R_3) & 0 & 0 & 0 & 0 \\ * & * & * & * & -\epsilon_1I & \epsilon_1J & 0 & 0 \\ * & * & * & * & * & -\epsilon_1I & 0 & 0 \\ * & * & * & * & * & * & -\epsilon_2I & \epsilon_2J \\ * & * & * & * & * & * & * & -\epsilon_2I \end{bmatrix} < 0, \tag{32}$$
where $\bar\Omega=(\bar\Omega_{ij})$ $(i,j=1,2,\ldots,8)$ with
$\bar\Omega_{11}=Q_1+Q_2+Q_3+P_2+P_2^T-P_{26}A-A^TP_{26}^T+P_{34}A_0+A_0^TP_{34}^T$;
$\bar\Omega_{12}=-P_2+P_3^T-P_{10}+P_{18}-A^TP_{27}^T+A_0^TP_{35}^T+P_{34}A_1$;
$\bar\Omega_{13}=P_4^T+P_{10}-A^TP_{28}^T+A_0^TP_{36}^T$;
$\bar\Omega_{14}=P_5^T-P_{18}-A^TP_{29}^T+A_0^TP_{37}^T$;
$\bar\Omega_{15}=P_1+P_6^T-P_{26}-A^TP_{30}^T+A_0^TP_{38}^T$;
$\bar\Omega_{16}=P_7^T-A^TP_{31}^T-P_{34}+A_0^TP_{39}^T$;
$\bar\Omega_{17}=P_8^T+P_{26}B-A^TP_{32}^T+P_{34}B_1+A_0^TP_{40}^T+LK_1$;
$\bar\Omega_{18}=P_9^T+P_{26}C-A^TP_{33}^T+P_{34}C_1+A_0^TP_{41}^T$;
$\bar\Omega_{22}=-(1-\mu)Q_1-P_3-P_3^T-P_{11}-P_{11}^T+P_{19}+P_{19}^T+P_{35}A_1+A_1^TP_{35}^T$;
$\bar\Omega_{23}=-P_4^T+P_{11}-P_{12}^T+P_{20}^T+A_1^TP_{36}^T$;
$\bar\Omega_{24}=-P_5^T-P_{13}^T-P_{19}+P_{21}^T+A_1^TP_{37}^T$;
$\bar\Omega_{25}=-P_6^T-P_{14}^T-P_{27}+P_{22}^T+A_1^TP_{38}^T$;
$\bar\Omega_{26}=-P_7^T-P_{15}^T-P_{35}+P_{23}^T+A_1^TP_{39}^T$;
$\bar\Omega_{27}=-P_8^T-P_{16}^T+P_{24}^T+P_{27}B+P_{35}B_1+A_1^TP_{40}^T$;
$\bar\Omega_{28}=-P_9^T-P_{17}^T+P_{25}^T+P_{27}C+P_{35}C_1+A_1^TP_{41}^T+LK_2$;
$\bar\Omega_{33}=-Q_2+P_{12}+P_{12}^T$;
$\bar\Omega_{34}=P_{13}^T-P_{20}$;
$\bar\Omega_{35}=P_{14}^T-P_{28}$;
$\bar\Omega_{36}=P_{15}^T-P_{36}$;
$\bar\Omega_{37}=P_{16}^T+P_{28}B+P_{36}B_1$;
$\bar\Omega_{38}=P_{17}^T+P_{28}C+P_{36}C_1$;
$\bar\Omega_{44}=-Q_3-P_{21}-P_{21}^T$;
$\bar\Omega_{45}=-P_{22}^T-P_{29}$;
$\bar\Omega_{46}=-P_{23}^T-P_{37}$;
$\bar\Omega_{47}=-P_{24}^T+P_{29}B+P_{37}B_1$;
$\bar\Omega_{48}=-P_{25}^T+P_{29}C+P_{37}C_1$;
$\bar\Omega_{55}=h_2R_2+(h_2-h_1)R_3-P_{30}-P_{30}^T$;
$\bar\Omega_{56}=-P_{31}^T-P_{38}$;
$\bar\Omega_{57}=K+P_{30}B-P_{32}^T+P_{38}B_1$;
$\bar\Omega_{58}=P_{30}C-P_{33}^T+P_{38}C_1$;
$\bar\Omega_{66}=P_1+K-P_{39}-P_{39}^T$;
$\bar\Omega_{67}=P_{31}B+P_{39}B_1-P_{40}^T$;
$\bar\Omega_{68}=P_{31}C+P_{39}C_1-P_{41}^T$;
$\bar\Omega_{77}=-2K_1+Q_4+P_{32}B+B^TP_{32}^T+P_{40}B_1+B_1^TP_{40}^T$;
$\bar\Omega_{78}=P_{32}C+B^TP_{33}^T+P_{40}C_1+B_1^TP_{41}^T$;
$\bar\Omega_{88}=-(1-\mu)Q_4+P_{33}C+C^TP_{33}^T+P_{41}C_1+C_1^TP_{41}^T-2K_2$;
and
$$\bar N = [P_2\ P_3\ P_4\ P_5\ P_6\ P_7\ P_8\ P_9]^T, \quad \bar M = [P_{10}\ P_{11}\ P_{12}\ P_{13}\ P_{14}\ P_{15}\ P_{16}\ P_{17}]^T,$$
$$\bar S = [P_{18}\ P_{19}\ P_{20}\ P_{21}\ P_{22}\ P_{23}\ P_{24}\ P_{25}]^T, \quad \bar U = [P_{26}\ P_{27}\ P_{28}\ P_{29}\ P_{30}\ P_{31}\ P_{32}\ P_{33}]^T,$$
$$\bar V = [P_{34}\ P_{35}\ P_{36}\ P_{37}\ P_{38}\ P_{39}\ P_{40}\ P_{41}]^T,$$
$$\bar S_1 = [HP_{26}\ HP_{27}\ HP_{28}\ HP_{29}\ HP_{30}\ HP_{31}\ HP_{32}\ HP_{33}]^T, \quad \bar N_1 = [E_1\ 0\ 0\ 0\ 0\ 0\ E_2\ E_3],$$
$$\bar S_2 = [HP_{34}\ HP_{35}\ HP_{36}\ HP_{37}\ HP_{38}\ HP_{39}\ HP_{40}\ HP_{41}]^T, \quad \bar N_2 = [E_4\ E_5\ 0\ 0\ 0\ 0\ E_6\ E_7].$$

Proof. Consider the Lyapunov–Krasovskii functional
$$V(x_t,t) = V_1(x_t,t)+V_2(x_t,t)+V_3(x_t,t)+V_4(x_t,t),$$
where
$$V_1(x_t,t) = \xi^T(t)EP\xi(t), \qquad V_2(x_t,t) = 2\sum_{i=1}^{n} k_i \int_0^{x_i(t)} f_i(s)\,ds,$$
$$V_3(x_t,t) = \int_{t-\tau(t)}^{t} x^T(s)Q_1x(s)\,ds + \int_{t-h_1}^{t} x^T(s)Q_2x(s)\,ds + \int_{t-h_2}^{t} x^T(s)Q_3x(s)\,ds + \int_{t-\tau(t)}^{t} f^T(x(s))Q_4f(x(s))\,ds,$$
$$V_4(x_t,t) = \int_{-h_2}^{0}\int_{t+\theta}^{t} y^T(s)R_2y(s)\,ds\,d\theta + \int_{-h_2}^{-h_1}\int_{t+\theta}^{t} y^T(s)R_3y(s)\,ds\,d\theta,$$
with
$$E^T = \begin{bmatrix} I & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}, \qquad P = \begin{bmatrix} P_1 & 0 & 0 & \cdots & 0 \\ P_2 & P_3 & P_4 & \cdots & P_9 \\ P_{10} & P_{11} & P_{12} & \cdots & P_{17} \\ P_{18} & P_{19} & P_{20} & \cdots & P_{25} \\ P_{26} & P_{27} & P_{28} & \cdots & P_{33} \\ P_{34} & P_{35} & P_{36} & \cdots & P_{41} \end{bmatrix}$$
and
$$\xi^T(t) = \big[x^T(t)\ \ x^T(t-\tau(t))\ \ x^T(t-h_1)\ \ x^T(t-h_2)\ \ y^T(t)\ \ g^T(t)\ \ f^T(x(t))\ \ f^T(x(t-\tau(t)))\big].$$
The remaining part of the proof follows immediately from Theorem 3.2. This completes the proof. □

4. Numerical examples

In this section we give three examples showing the effectiveness of the established theory.

Example 4.1. Consider system (4) with the following matrices:
$$A=\begin{bmatrix}4 & 0\\ 0 & 6\end{bmatrix},\quad B=\begin{bmatrix}0.2 & 4\\ 0.1 & 0.3\end{bmatrix},\quad C=\begin{bmatrix}0.4 & 0.2\\ 0.1 & 0.7\end{bmatrix},\quad A_0=\begin{bmatrix}0.3 & 0\\ 0 & 0.3\end{bmatrix},$$
$$A_1=\begin{bmatrix}0.5 & 0.1\\ 0.5 & 0\end{bmatrix},\quad B_1=\begin{bmatrix}0.2 & 0.6\\ 0.5 & 0.1\end{bmatrix},\quad C_1=\begin{bmatrix}0.3 & 0.6\\ 0.2 & 0.1\end{bmatrix},\quad D=D_1=\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix},\quad L=0.2I.$$
Using the MATLAB LMI Control Toolbox to solve LMI (14) (without uncertainties), the upper bound of the time-varying delay is found to be $h_2=7.2546$ for $\mu=0$. This shows that the approach developed in this paper is effective and less conservative than some existing results.

Example 4.2. Consider system (4) with the same matrices $A$, $B$, $C$, $A_0$, $A_1$, $B_1$, $C_1$, $D$, $D_1$ and $L$ as in Example 4.1, together with $E_1=E_2=\cdots=E_9=[1\ \ 1]$.

It was reported in [9] that this system is robustly asymptotically stable in the mean square when $0<\tau(t)\leq 2.15$, $0<r(t)\leq 2.15$. However, by our Theorem 3.2, using the MATLAB LMI Toolbox with $\mu=0$, $h_1=0$ and $J=0$, the equilibrium solution of the uncertain stochastic neural network (4) is found to be robustly asymptotically stable in the mean square for any $\tau(t)$ and $r(t)$ satisfying $0<\tau(t)\leq h_2=7.2353$, $0<r(t)\leq 7.2353$. The results established in this paper are therefore sharper than the previous ones, since the stability region is valid up to the upper bound 7.2353 instead of 2.15 in [9]. Table 1 lists, for $h_1=0$ and $\mu=0$ fixed, the values of $h_2=\bar r$ obtained for different values of $J$; Table 2 shows that, for fixed $h_1$, different values of $h_2$ are obtained for different combinations of $J$ and $\mu$.

Table 1. Varying J (h1 = 0, mu = 0).

  h1    h2 = r̄    J      mu
  0     7.2349    0.1    0
  0     7.2326    0.3    0
  0     7.2271    0.5    0
  0     7.2114    0.7    0
  0     7.1876    0.8    0
  0     7.1121    0.9    0

Table 2. Varying J and mu (h1 = 0).

  h1    h2 = r̄    J      mu
  0     7.1245    0      0.1
  0     6.3494    0      0.5
  0     1.8250    0      0.9
  0     7.1163    0.5    0.1
  0     6.3417    0.5    0.5
  0     1.8251    0.5    0.9

Example 4.3. Consider the stochastic neural network with linear fractional uncertainties
$$dx(t) = [-A(t)x(t)+B(t)f(x(t))+C(t)f(x(t-\tau(t)))]\,dt + [A_0(t)x(t)+A_1(t)x(t-\tau(t))+B_1(t)f(x(t))+C_1(t)f(x(t-\tau(t)))]\,dw(t), \tag{33}$$
where
$$A=\begin{bmatrix}4 & 0\\ 0 & 5\end{bmatrix},\quad B=\begin{bmatrix}0.4 & 0\\ 0.1 & 0.5\end{bmatrix},\quad C=\begin{bmatrix}0.2 & 0.6\\ 0.5 & 0.1\end{bmatrix},\quad A_0=\begin{bmatrix}0.5 & 0\\ 0 & 0.5\end{bmatrix},\quad A_1=\begin{bmatrix}0.5 & 0\\ 0.5 & 0.7\end{bmatrix},$$
$$B_1=\begin{bmatrix}0.1 & 0\\ 0 & 0.1\end{bmatrix},\quad C_1=\begin{bmatrix}0.1 & 0\\ 0 & 0.1\end{bmatrix},\quad H=\begin{bmatrix}0.1\\ 0.1\end{bmatrix},\quad L=0.5I,$$
$$E_1=E_2=E_3=[0.2\ \ {-0.3}],\quad E_4=E_5=E_6=E_7=[0.1\ \ 0.1].$$
Applying Theorem 2 in [14], Theorem 2 in [9] and Theorem 3.3 in [6] to this system, the equilibrium solution of the stochastic neural network (33) is found to be robustly asymptotically stable in the mean square for delays $\tau(t)$ satisfying $0<\tau(t)\leq 0.5730$, $0<\tau(t)\leq 0.7056$ and $0<\tau(t)\leq 3.8795$, respectively. However, applying Theorem 3.4 of this paper, we conclude that system (33) is robustly asymptotically stable in the mean square whenever $0<\tau(t)<\infty$, which is sharper than the previous results based on a finite upper bound. Table 3 lists the values of $h_2$ obtained for different values of $J$ when $h_1=0$ and $\mu=0$ are fixed.

Table 3. Varying J (h1 = 0, mu = 0).

  h1    h2     J      mu
  0     < ∞    0.1    0
  0     < ∞    0.3    0
  0     < ∞    0.5    0
  0     < ∞    0.9    0

Table 4. Varying J and mu (h1 = 0).

  h1    h2     J      mu
  0     < ∞    0      0.1
  0     < ∞    0      0.5
  0     < ∞    0      0.9
  0     < ∞    0.5    0.1
  0     < ∞    0.5    0.5
  0     < ∞    0.5    0.9

Table 4 shows that, for fixed $h_1$, the same conclusion holds for different combinations of $J$ and $\mu$.
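The delay bounds reported in Tables 1–4 are obtained by repeatedly testing LMI feasibility while increasing $h_2$. A generic bisection driver of the following form can reproduce such sweeps; it is a sketch added here for illustration, and `lmi_feasible` stands for any user-supplied routine that assembles and tests the LMI (14), (30) or (32) for given (h1, h2, mu, J) values. Such a routine is assumed, not provided by the paper.

```python
def max_h2(lmi_feasible, h1=0.0, mu=0.0, J=0.0, h2_max=100.0, tol=1e-4):
    """Largest h2 in (h1, h2_max] for which the stability LMI is feasible,
    found by bisection; assumes feasibility is monotone in h2."""
    lo, hi = h1, h2_max
    if not lmi_feasible(h1, lo + tol, mu, J):
        return None                      # infeasible even for a tiny delay
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lmi_feasible(h1, mid, mu, J):
            lo = mid                     # feasible: push the bound up
        else:
            hi = mid                     # infeasible: shrink the interval
    return lo
```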

5. Conclusion

In this paper, several sufficient conditions guaranteeing delay-interval dependent robust asymptotic stability of stochastic neural networks with linear fractional uncertainties have been proposed. Less conservative stability criteria have been obtained by considering the relationship between the time-varying delay and its lower and upper bounds when estimating the upper bound of the derivative of the new Lyapunov–Krasovskii functional. The numerical comparisons show significant improvements over recent existing results.

References

[1] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice-Hall, Englewood Cliffs, NJ, 1998.
[2] T. Chen, L. Rong, Delay-independent stability analysis of Cohen–Grossberg neural networks, Phys. Lett. A 317 (2003) 436–449.
[3] Y. Chen, Y. Wu, Novel delay-dependent stability criteria of neural networks with time-varying delay, Neurocomputing 72 (2009) 1065–1070.
[4] W. Feng, S.X. Yang, W. Fu, H. Wu, Robust stability analysis of uncertain stochastic neural networks with interval time-varying delay, Chaos Solitons Fractals 41 (2009) 414–424.
[5] Y. He, G.P. Liu, D. Rees, M. Wu, Stability analysis for neural networks with time-varying interval delays, IEEE Trans. Neural Networks 18 (2007) 1850–1854.
[6] R. Rakkiyappan, P. Balasubramaniam, S. Lakshmanan, Robust stability results for uncertain stochastic neural networks with discrete interval and distributed time-varying delays, Phys. Lett. A 372 (2008) 5290–5298.
[7] Y.-Y. Hou, T.-L. Liao, C.-H. Lien, J.-J. Yan, Stability analysis for neural networks with interval time-varying delays, Chaos 17 (2007) 033120.
[8] Z. Wang, Y. Liu, K. Fraser, X. Liu, Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays, Phys. Lett. A 354 (2006) 288–297.
[9] H. Li, B. Chen, Q. Zhou, S. Fang, Robust exponential stability for uncertain stochastic neural network with discrete and distributed time-varying delays, Phys. Lett. A 372 (2008) 3385–3394.
[10] Z. Wang, S. Lauria, J. Fang, X. Liu, Exponential stability of uncertain stochastic neural networks with mixed time delays, Chaos Solitons Fractals 32 (2007) 62–72.
[11] T. Li, L. Guo, C. Sun, Robust stability for neural networks with time-varying delays and linear fractional uncertainties, Neurocomputing 71 (2007) 421–427.
[12] K. Gu, An integral inequality in the stability problem of time-delay systems, in: Proceedings of the 39th IEEE Conference on Decision and Control, 2000, pp. 2805–2815.
[13] Q. Zhang, X. Wei, J. Xu, Global exponential stability for nonautonomous cellular neural networks with delays, Phys. Lett. A 351 (2006) 153–160.
[14] W.-H. Chen, X. Lu, Mean square exponential stability of uncertain stochastic delayed neural networks, Phys. Lett. A 372 (2008) 1061–1069.

P. Balasubramaniam received his postgraduate degree from the Department of Mathematics of Gobi Arts College, affiliated to Bharathiar University, Coimbatore, in 1989. He was awarded the Master of Philosophy in 1990 and the Doctor of Philosophy (Ph.D.) in 1994 in Mathematics, with a specialization in control theory, from the Department of Mathematics, Bharathiar University, Coimbatore, Tamilnadu, India. Soon after completing his Ph.D., he served as a Lecturer in Mathematics at Kumaraguru College of Technology and at Kongu Engineering College for three years. From February 1997 he served as a Lecturer in Mathematics for four years and as a Reader in Mathematics for five years at Gandhigram Rural University, Gandhigram, Tamilnadu, India, where he has been Professor and Head of the Department of Mathematics since November 2006. He was selected as a Visiting Research Professor in 2001 and in 2005–2006 to promote research in control theory and neural networks at Pusan National University, Pusan, South Korea. He has 15 years of teaching and research experience and has published 40 research papers in SCI journals with impact factors, as well as research articles in national journals and international conference proceedings. He serves as a reviewer for several SCI journals and is a member of the editorial board of the Journal of Computer Science. He received the Tamilnadu Scientist Award (TANSA) 2005 in Mathematical Sciences from the Tamilnadu State Council for Science and Technology, and the Bharat Jyoti Award 2005 for contributions to science and technology. His research interests include control theory, stochastic differential equations, soft computing, neural networks and cryptography.

S. Lakshmanan graduated in Mathematics during 2002–2005 from Government Arts College, Salem-7, and received his postgraduate degree in Mathematics from Sri Ramakrishna Mission Vidyalaya College of Arts and Science, affiliated to Bharathiar University, Coimbatore, Tamilnadu, India, during 2005–2007. He was awarded the Master of Philosophy in 2008 in Mathematics, with a specialization in the stability of stochastic differential equations, from the Department of Mathematics, Gandhigram Rural University, Gandhigram, Tamilnadu, India. His research interests are in the qualitative theory of stochastic systems and neural networks.

R. Rakkiyappan graduated in Mathematics during 1999–2002 from Sri Ramakrishna Mission Vidyalaya College of Arts and Science, and received his postgraduate degree in Mathematics from PSG College of Arts and Science, affiliated to Bharathiar University, Coimbatore, Tamilnadu, India, during 2002–2004. His research interests are in the qualitative theory of stochastic and impulsive systems, neural networks and delay differential systems.