Stability analysis of discrete-time stochastic neural networks with time-varying delays

Neurocomputing 73 (2010) 740–748

Yan Ou a,*, Hongyang Liu a, Yulin Si a, Zhiguang Feng b

a Space Control and Inertial Technology Research Center, Harbin Institute of Technology, Harbin, Heilongjiang 150001, PR China
b Department of Mechanical Engineering, University of Hong Kong, Hong Kong

Article history: Received 2 July 2009; received in revised form 9 September 2009; accepted 30 October 2009; available online 18 November 2009. Communicated by J. Liang.

Abstract

This paper investigates the problem of stability analysis for a class of discrete-time stochastic neural networks (DSNNs) with time-varying delays. In the concerned model, stochastic disturbances are described by a Brownian motion, and the time-varying delay $d(k)$ satisfies $d_m \le d(k) \le d_M$. Based on the delay partitioning idea and some inequalities, a new stability criterion with reduced conservatism, expressed in terms of linear matrix inequalities (LMIs), is proposed by introducing a novel Lyapunov–Krasovskii functional combined with a free-weighting matrix method. The condition can be checked with numerical software, and a numerical example is provided to show its usefulness. © 2009 Elsevier B.V. All rights reserved.

Keywords: Asymptotic stability; Delay partitioning; Discrete-time neural networks; Stochastic neural networks; Linear matrix inequality; Time-varying delays

1. Introduction

In the past few years, neural networks have attracted much attention owing to their wide range of applications, such as associative memory, pattern recognition, signal processing, optimization, and model identification (see, e.g., [10,31]). On the other hand, because of the finite switching speed of amplifiers in electronic networks, time delays (also called hereditary effects), either constant or time-varying, are often encountered in engineering practice. Such delays are a main source of oscillation, divergence, and instability in neural networks, and therefore deserve careful attention. Recently, many important results on the stability analysis of neural networks with time delay have been reported; see, e.g., [1–4,7,9,13–16,21,23,26,28,33] and the references therein.

It is worth pointing out that most of these results concern continuous-time networks. However, discrete-time neural networks (DNNs), which are better suited to implementation and application, have gradually attracted much attention. DNNs are important as discrete-time analogues of continuous-time neural networks, providing convenient ways to simulate and compute the continuous-time systems. Therefore, both analysis and synthesis problems for DNNs have been extensively studied, and a great number of important results have been reported in the literature; see, for instance, [11,22,30,34] and the references therein.

In practice, when modeling real neural systems, stochastic disturbances are probably among the main sources of undesirable network behavior; indeed, neural networks can be destabilized by certain stochastic inputs. The reasons are as follows. A stochastic process is a non-deterministic factor: the system's later states are determined both by its predictable dynamics and by random elements. In real neural networks, synaptic transmission is a noisy process driven by random fluctuations in neurotransmitter release and other probabilistic causes. The corresponding stability problems have therefore attracted increasing interest, and some results related to stochastic disturbances have been published; see [12,24,27,29,32] and the references therein. Given the widespread presence of stochastic disturbances, neural networks promise applications in many areas, including financial markets, insurance, pattern recognition, and constrained optimization.

* Corresponding author.
E-mail address: [email protected] (Y. Ou).

doi:10.1016/j.neucom.2009.10.017


Based on the discussions above, the problem of stability analysis for DSNNs with time-varying delays has been investigated recently. In [18], exponential stability of uncertain discrete-time stochastic neural networks with time-varying delays is studied by converting the stability analysis problem into a convex optimization problem. However, the resulting criterion is delay-independent and hence inevitably conservative; it has been improved in several recent papers. In [25], by combining a free-weighting matrix method with a new Lyapunov–Krasovskii functional, a delay-dependent stability condition was obtained, which proves to be less conservative than that of [18]. Although [25] and other recently published papers (e.g., [20]) improve considerably on [18], they still leave room for improvement. It is our observation that the results presented in [20] can be further improved by employing the idea of delay partitioning, which motivates the present study.

In this paper, the delay partitioning idea is used to solve the problem of asymptotic stability analysis in the mean square for a class of DSNNs with time-varying delays; the aim is a more tractable and less conservative stability condition. In the concerned model, stochastic disturbances are described by a Brownian motion, and the time-varying delay $d(k)$ satisfies $d_m \le d(k) \le d_M$. Note that the dimensions of the LMI stability condition depend on the partitioning number $m$. By utilizing a novel Lyapunov–Krasovskii functional combined with the delay partitioning idea, which has been applied to other classes of time-delay systems [8], and some inequalities, we convert the stability analysis problem into an LMI feasibility problem, which can then be checked by numerical software. A numerical example is provided to show the usefulness of the proposed condition.

Notation: The notation is quite standard. Throughout this paper, $\mathbb{N}$ stands for the set of integers, $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space, and $\mathbb{R}^{m \times n}$ is the set of all $m \times n$ real matrices. The notation $P > 0$ ($\ge 0$) means that $P$ is real symmetric and positive definite (semi-definite), and the superscript "$T$" stands for matrix transposition. $I$ is the identity matrix with compatible dimensions. If $A$ is a matrix, $\|A\|$ denotes its spectral (induced Euclidean) norm, i.e., $\|A\| = \sup\{\|Ax\| : \|x\| = 1\} = \sqrt{\lambda_{\max}(A^T A)}$, where $\lambda_{\max}(A)$ (respectively $\lambda_{\min}(A)$) denotes the largest (respectively smallest) eigenvalue of $A$. Moreover, let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, \mathcal{P})$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t \ge 0}$ satisfying the usual conditions (i.e., the filtration contains all $\mathcal{P}$-null sets and is right continuous). $\mathbb{E}\{\cdot\}$ stands for the mathematical expectation operator with respect to the given probability measure. The asterisk $(*)$ represents a term that is induced by symmetry, and $\operatorname{diag}(\cdot)$ stands for a block-diagonal matrix. Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations.

2. Problem formulation

In this paper, we consider the following neural network model:

$$x(k+1) = A x(k) + B G(x(k)) + D H(x(k-d(k))) + \sigma(x(k), x(k-d(k)), k)\, w(k), \tag{1}$$

with initial condition $x_0$, where $x(k) = (x_1(k), x_2(k), \ldots, x_n(k))^T \in \mathbb{R}^n$ and $x_i(k)$ is the state of the $i$-th neuron at time $k$; $G(x(k)) = (g_1(x_1(k)), g_2(x_2(k)), \ldots, g_n(x_n(k)))^T \in \mathbb{R}^n$ and $H(x(k)) = (h_1(x_1(k)), h_2(x_2(k)), \ldots, h_n(x_n(k)))^T \in \mathbb{R}^n$, where $g_j(x_j(k))$ and $h_j(x_j(k))$ denote the activation functions; $A = \operatorname{diag}(a_1, a_2, \ldots, a_n)$ is a real constant diagonal matrix with entries $|a_i| < 1$; $B = (b_{ij})_{n \times n}$ and $D = (d_{ij})_{n \times n}$ are the connection weight matrix and the discretely delayed connection weight matrix, respectively; the positive integer $d(k)$ denotes the time-varying delay satisfying $d_m \le d(k) \le d_M$, where $d_m$ and $d_M$ are constant positive scalars representing the minimum and maximum delays, respectively. The lower delay bound can always be written as $d_m = \tau m$, where $\tau$ and $m$ are integers; the time delay $d(k)$ is then split into a constant part $\tau m$ and a time-varying part $h(k)$, that is, $d(k) = \tau m + h(k)$, where $h(k)$ satisfies $0 \le h(k) \le d_M - \tau m$.

Remark 1. The above assumption on the time delay $d(k)$ has been used in many papers (see, e.g., [5,6,19]) and is quite general; it characterizes the real situation in many promising applications. A typical example is the delay induced by network transmission, which is time-varying and can, without loss of generality, be assumed to have minimum and maximum bounds. Clearly, if $d_m = d_M$, the time-varying delay reduces to a constant delay. For instance, a lower bound $d_m = 6$ admits the partitions $\tau = 3$, $m = 2$ or $\tau = 2$, $m = 3$ (cf. Table 1 below).

In the DSNNs in (1), $w(k)$ is a scalar Wiener process (Brownian motion) on $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, \mathcal{P})$ with

$$\mathbb{E}\{w(k)\} = 0, \qquad \mathbb{E}\{w^2(k)\} = 1, \qquad \mathbb{E}\{w(i)\, w(j)\} = 0 \ (i \ne j).$$
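To make the model concrete, the following is a minimal simulation sketch of a system of the form (1). It is ours, not the authors': the weights, activations, delay sequence, and noise intensity $\sigma$ are illustrative assumptions ($\sigma$ is chosen to satisfy Assumption 1 below with $\rho_1 = \rho_2 = 0.05$), and the scalar process $w(k)$ is realized as an i.i.d. standard normal sequence, consistent with the moment conditions above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-neuron instance of model (1); all parameters are assumptions.
A = np.diag([0.4, 0.5, 0.3])        # diagonal, |a_i| < 1
B = 0.1 * np.ones((3, 3))           # connection weights (assumed)
D = 0.1 * np.eye(3)                 # delayed connection weights (assumed)
G = lambda x: np.tanh(0.4 * x)      # activation G (assumed)
H = lambda x: np.tanh(0.2 * x)      # activation H (assumed)
d_m, d_M = 2, 6                     # delay bounds

def sigma(x, xd, k):
    # Noise intensity; satisfies sigma^T sigma <= 0.05 x^T x + 0.05 xd^T xd
    return np.sqrt(0.05) * (np.abs(x) + np.abs(xd)) / np.sqrt(2)

K = 100
x_hist = [rng.standard_normal(3) for _ in range(d_M + 1)]  # initial condition
for k in range(K):
    d_k = d_m + (k % (d_M - d_m + 1))   # a time-varying delay in [d_m, d_M]
    x, xd = x_hist[-1], x_hist[-1 - d_k]
    w = rng.standard_normal()           # E{w}=0, E{w^2}=1, independent in k
    x_next = A @ x + B @ G(x) + D @ H(xd) + sigma(x, xd, k) * w
    x_hist.append(x_next)

print("final state:", x_hist[-1])
```

In repeated runs of such a sketch, trajectories of a mean-square stable instance concentrate around the trivial solution, which is the behavior the criterion below certifies.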

Throughout this paper, we make the following assumptions.

Assumption 1. There exist two positive constants $\rho_1$ and $\rho_2$ such that

$$\sigma^T(x, y, k)\,\sigma(x, y, k) \le \rho_1 x^T x + \rho_2 y^T y, \qquad \forall x, y \in \mathbb{R}^n. \tag{2}$$

Assumption 2. For $j \in \{1, 2, \ldots, n\}$, the neuron activation functions in the DSNNs in (1) satisfy

$$l_j^- \le \frac{g_j(s_1) - g_j(s_2)}{s_1 - s_2} \le l_j^+, \qquad s_1, s_2 \in \mathbb{R}, \tag{3}$$

$$u_j^- \le \frac{h_j(s_1) - h_j(s_2)}{s_1 - s_2} \le u_j^+, \qquad s_1, s_2 \in \mathbb{R}, \tag{4}$$

for all $s_1 \ne s_2$, where $l_j^+$, $l_j^-$, $u_j^+$, $u_j^-$ are constants.

Assumption 3. The DSNNs in (1) satisfy

$$G(0) = H(0) = 0. \tag{5}$$
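As a worked instance of ours (using an activation that also appears in Section 4), $g(s) = \tanh(0.4 s)$ satisfies (3) with $l^- = 0$ and $l^+ = 0.4$, since $\tanh$ is nondecreasing and $1$-Lipschitz:

$$0 \le \frac{\tanh(0.4 s_1) - \tanh(0.4 s_2)}{s_1 - s_2} \le 0.4, \qquad s_1 \ne s_2.$$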

Remark 2. Assumption 2 on the activation functions, in the context of stability analysis of neural networks, was first proposed in [17]. The constants $l_j^-$, $l_j^+$, $u_j^-$, $u_j^+$ are allowed to be positive, negative, or zero. Therefore, the activation functions covered by this assumption may be non-monotonic and are more general than the usual ones. With such an assumption, the stability analysis problem for DSNNs can be investigated properly; see [17], for example.

Remark 3. Under Assumptions 1 and 2, it is easy to check that the functions $G$, $H$, and $\sigma$ satisfy the linear growth condition. Therefore, for any initial data $x_0$, the DSNNs in (1) have a unique solution denoted by $x(k)$. Moreover, by Assumptions 1 and 3, $x(k) \equiv 0$ is a trivial solution (equilibrium point) of the DSNNs in (1).

Lemma 1 (Schur complement). Given constant matrices $\Sigma_{11}$, $\Sigma_{12}$, $\Sigma_{21}$, and $\Sigma_{22}$ with $\Sigma_{11} = \Sigma_{11}^T$, $\Sigma_{22} = \Sigma_{22}^T$, and $\Sigma_{21} = \Sigma_{12}^T$, the condition

$$\begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix} < 0$$

is equivalent to

$$\Sigma_{22} < 0 \quad \text{and} \quad \Sigma_{11} - \Sigma_{12}\,\Sigma_{22}^{-1}\,\Sigma_{12}^T < 0.$$

The main purpose of this paper is to establish a sufficient criterion of asymptotic stability in the mean square for the DSNNs in (1).
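For concreteness, here is a small numerical instance of Lemma 1 (ours, not from the paper), with scalar blocks $\Sigma_{11} = -2$, $\Sigma_{12} = \Sigma_{21} = 1$, $\Sigma_{22} = -1$:

$$\begin{bmatrix} -2 & 1 \\ 1 & -1 \end{bmatrix} < 0 \iff -1 < 0 \ \text{ and } \ -2 - (1)(-1)^{-1}(1) = -1 < 0,$$

and indeed the eigenvalues of the $2 \times 2$ matrix are $(-3 \pm \sqrt{5})/2 \approx -0.38$ and $-2.62$, both negative.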

3. Main results

For notational convenience, we denote in the following

$$L_1 = \operatorname{diag}(l_1^+ l_1^-,\ l_2^+ l_2^-,\ \ldots,\ l_n^+ l_n^-), \qquad L_2 = \operatorname{diag}\Big(\frac{l_1^+ + l_1^-}{2},\ \frac{l_2^+ + l_2^-}{2},\ \ldots,\ \frac{l_n^+ + l_n^-}{2}\Big),$$

$$U_1 = \operatorname{diag}(u_1^+ u_1^-,\ u_2^+ u_2^-,\ \ldots,\ u_n^+ u_n^-), \qquad U_2 = \operatorname{diag}\Big(\frac{u_1^+ + u_1^-}{2},\ \frac{u_2^+ + u_2^-}{2},\ \ldots,\ \frac{u_n^+ + u_n^-}{2}\Big).$$

Our main result is given in the following theorem.

Theorem 1. Suppose that Assumptions 1–3 hold. The DSNNs in (1) are asymptotically stable in the mean square if there exist a scalar $\lambda^* > 0$, matrices $P > 0$, $R > 0$, $M > 0$, $N > 0$, $Q_i > 0$, $S_i > 0$ ($i = 1, 2$), diagonal matrices $\Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n) > 0$ and $\Gamma = \operatorname{diag}(\gamma_1, \gamma_2, \ldots, \gamma_n) > 0$, and matrices $X$, $Y$, and $Z$ satisfying

$$P + \tau S_1 + (d_M - \tau m) S_2 < \lambda^* I, \tag{6}$$

$$\begin{bmatrix}
\Phi + \Psi + \Psi^T + \Upsilon - W_1 - W_2 & \Xi_1^T P & \Xi_2^T Q_2 & \sqrt{d_M - \tau m + 1}\,\Xi_2^T R & \sqrt{\tau}\,\Xi_3^T S_1 & \sqrt{d_M - \tau m}\,\Xi_3^T S_2 \\
* & -P & 0 & 0 & 0 & 0 \\
* & * & -Q_2 & 0 & 0 & 0 \\
* & * & * & -R & 0 & 0 \\
* & * & * & * & -S_1 & 0 \\
* & * & * & * & * & -S_2
\end{bmatrix} < 0, \tag{7}$$

$$\Pi_1 = \begin{bmatrix} M & X \\ * & S_1 \end{bmatrix} \ge 0, \qquad \Pi_2 = \begin{bmatrix} N & Y \\ * & S_2 \end{bmatrix} \ge 0, \qquad \Pi_3 = \begin{bmatrix} N & Z \\ * & S_2 \end{bmatrix} \ge 0, \tag{8}$$

where

$$\Phi = -\Xi_2^T P\,\Xi_2 + \lambda^* \rho_1\,\Xi_2^T \Xi_2 + \Xi_4^T(\lambda^* \rho_2 I - R)\,\Xi_4 + W_{Q1}^T \bar{Q}_1 W_{Q1} - W_{Q2}^T Q_2 W_{Q2},$$

$$\Psi = X J_1 + Y J_2 + Z J_3, \qquad \Upsilon = \tau M + (d_M - \tau m) N, \qquad \bar{Q}_1 = \begin{bmatrix} Q_1 & 0 \\ 0 & -Q_1 \end{bmatrix},$$

$$J_1 = [\,I_n\ \ {-I_n}\ \ 0_{n,mn+3n}\,], \qquad J_2 = [\,0_{n,mn}\ \ I_n\ \ {-I_n}\ \ 0_{n,3n}\,], \qquad J_3 = [\,0_{n,mn+n}\ \ I_n\ \ {-I_n}\ \ 0_{n,2n}\,],$$

$$W_1 = \begin{bmatrix} \Xi_2 \\ \Xi_5 \end{bmatrix}^T \begin{bmatrix} \Lambda L_1 & -\Lambda L_2 \\ -\Lambda L_2 & \Lambda \end{bmatrix} \begin{bmatrix} \Xi_2 \\ \Xi_5 \end{bmatrix}, \qquad W_2 = \begin{bmatrix} \Xi_4 \\ \Xi_6 \end{bmatrix}^T \begin{bmatrix} \Gamma U_1 & -\Gamma U_2 \\ -\Gamma U_2 & \Gamma \end{bmatrix} \begin{bmatrix} \Xi_4 \\ \Xi_6 \end{bmatrix},$$

$$W_{Q1} = \begin{bmatrix} I_{mn} & 0_{mn,n} & 0_{mn,4n} \\ 0_{mn,n} & I_{mn} & 0_{mn,4n} \end{bmatrix}, \qquad W_{Q2} = [\,0_{n,mn+2n}\ \ I_n\ \ 0_{n,2n}\,],$$

$$\Xi_1 = [\,A\ \ 0_{n,mn+2n}\ \ B\ \ D\,], \qquad \Xi_2 = [\,I_n\ \ 0_{n,mn+4n}\,], \qquad \Xi_3 = \Xi_1 - \Xi_2, \qquad \Xi_4 = [\,0_{n,mn+n}\ \ I_n\ \ 0_{n,3n}\,],$$

$$\Xi_5 = [\,0_{n,mn+3n}\ \ I_n\ \ 0_{n,n}\,], \qquad \Xi_6 = [\,0_{n,mn+4n}\ \ I_n\,].$$
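To fix the dimensions involved (this bookkeeping is ours, but it follows directly from the definitions above and the vector $\alpha(k)$ used in the proof below): $\alpha(k)$ stacks the $mn$-dimensional block $G(k)$ with five $n$-dimensional blocks, so

$$\alpha(k) \in \mathbb{R}^{(m+5)n}, \qquad \text{e.g., } n = 3,\ m = 2 \ \Rightarrow\ \alpha(k) \in \mathbb{R}^{21},$$

hence the leading diagonal block of the LMI (7) has order $(m+5)n$, and the size of the condition grows with the partitioning number $m$, which is the price paid for the reduced conservatism.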

Proof. Firstly, based on the delay partitioning idea, we introduce a new Lyapunov–Krasovskii functional

$$V(k) = V_1(k) + V_2(k) + V_3(k) + V_4(k),$$

with

$$V_1(k) = x^T(k) P x(k),$$

$$V_2(k) = \sum_{i=k-\tau}^{k-1} G^T(i) Q_1 G(i) + \sum_{i=k-d_M}^{k-1} x^T(i) Q_2 x(i),$$

$$V_3(k) = \sum_{j=-d_M+1}^{-\tau m+1} \ \sum_{i=k-1+j}^{k-1} x^T(i) R\, x(i),$$

$$V_4(k) = \sum_{i=-\tau}^{-1} \ \sum_{j=k+i}^{k-1} \delta^T(j) S_1 \delta(j) + \sum_{i=-d_M}^{-\tau m-1} \ \sum_{j=k+i}^{k-1} \delta^T(j) S_2 \delta(j),$$

where

$$\delta(j) = x(j+1) - x(j), \qquad G(i) = \begin{bmatrix} x(i) \\ x(i-\tau) \\ \vdots \\ x(i-(m-1)\tau) \end{bmatrix}.$$

Calculating the forward difference of $V(k)$ along the solution of the DSNNs in (1) and taking the mathematical expectation, we have

$$\mathbb{E}\{\Delta V(k)\} = \mathbb{E}\{\Delta V_1(k)\} + \mathbb{E}\{\Delta V_2(k)\} + \mathbb{E}\{\Delta V_3(k)\} + \mathbb{E}\{\Delta V_4(k)\}, \tag{9}$$

where

$$\begin{aligned}
\mathbb{E}\{\Delta V_1(k)\} &= \mathbb{E}\{V_1(k+1) - V_1(k)\} \\
&= \mathbb{E}\{[A x(k) + B G(x(k)) + D H(x(k-d(k)))]^T P\,[A x(k) + B G(x(k)) + D H(x(k-d(k)))] \\
&\qquad + \sigma^T(x(k), x(k-d(k)), k)\, P\, \sigma(x(k), x(k-d(k)), k) - x^T(k) P x(k)\},
\end{aligned} \tag{10}$$

$$\mathbb{E}\{\Delta V_2(k)\} = \mathbb{E}\{G^T(k) Q_1 G(k) - G^T(k-\tau) Q_1 G(k-\tau) + x^T(k) Q_2 x(k) - x^T(k-d_M) Q_2 x(k-d_M)\}, \tag{11}$$

$$\mathbb{E}\{\Delta V_3(k)\} = \mathbb{E}\Big\{(d_M - \tau m + 1)\, x^T(k) R\, x(k) - \sum_{i=k-d_M}^{k-\tau m} x^T(i) R\, x(i)\Big\} \le \mathbb{E}\{(d_M - \tau m + 1)\, x^T(k) R\, x(k) - x^T(k-d(k)) R\, x(k-d(k))\}, \tag{12}$$

$$\begin{aligned}
\mathbb{E}\{\Delta V_4(k)\} &= \mathbb{E}\Big\{\delta^T(k)(\tau S_1 + (d_M - \tau m) S_2)\,\delta(k) - \sum_{j=k-\tau}^{k-1} \delta^T(j) S_1 \delta(j) - \sum_{j=k-d(k)}^{k-\tau m-1} \delta^T(j) S_2 \delta(j) - \sum_{j=k-d_M}^{k-d(k)-1} \delta^T(j) S_2 \delta(j)\Big\} \\
&= \mathbb{E}\Big\{[A x(k) + B G(x(k)) + D H(x(k-d(k))) - x(k)]^T (\tau S_1 + (d_M - \tau m) S_2)\,[A x(k) + B G(x(k)) + D H(x(k-d(k))) - x(k)] \\
&\qquad + \sigma^T(x(k), x(k-d(k)), k)(\tau S_1 + (d_M - \tau m) S_2)\, \sigma(x(k), x(k-d(k)), k) \\
&\qquad - \sum_{j=k-\tau}^{k-1} \delta^T(j) S_1 \delta(j) - \sum_{j=k-d(k)}^{k-\tau m-1} \delta^T(j) S_2 \delta(j) - \sum_{j=k-d_M}^{k-d(k)-1} \delta^T(j) S_2 \delta(j)\Big\}.
\end{aligned} \tag{13}$$

From (10)–(13), it follows that

$$\begin{aligned}
\mathbb{E}\{\Delta V(k)\} &\le \mathbb{E}\Big\{[A x(k) + B G(x(k)) + D H(x(k-d(k)))]^T P\,[A x(k) + B G(x(k)) + D H(x(k-d(k)))] \\
&\qquad + \sigma^T(x(k), x(k-d(k)), k)(P + \tau S_1 + (d_M - \tau m) S_2)\, \sigma(x(k), x(k-d(k)), k) \\
&\qquad + [A x(k) + B G(x(k)) + D H(x(k-d(k))) - x(k)]^T (\tau S_1 + (d_M - \tau m) S_2)\,[A x(k) + B G(x(k)) + D H(x(k-d(k))) - x(k)] \\
&\qquad - x^T(k) P x(k) + G^T(k) Q_1 G(k) - G^T(k-\tau) Q_1 G(k-\tau) + x^T(k) Q_2 x(k) - x^T(k-d_M) Q_2 x(k-d_M) \\
&\qquad + (d_M - \tau m + 1)\, x^T(k) R\, x(k) - x^T(k-d(k)) R\, x(k-d(k)) \\
&\qquad - \sum_{j=k-\tau}^{k-1} \delta^T(j) S_1 \delta(j) - \sum_{j=k-d(k)}^{k-\tau m-1} \delta^T(j) S_2 \delta(j) - \sum_{j=k-d_M}^{k-d(k)-1} \delta^T(j) S_2 \delta(j)\Big\}.
\end{aligned} \tag{14}$$


Notice that, from Assumption 1 and condition (6), it is easy to see that

$$\begin{aligned}
\sigma^T(x(k), x(k-d(k)), k)(P + \tau S_1 + (d_M - \tau m) S_2)\, \sigma(x(k), x(k-d(k)), k)
&\le \lambda_{\max}(P + \tau S_1 + (d_M - \tau m) S_2)\, \sigma^T(\cdot)\,\sigma(\cdot) \\
&\le \lambda^* \big(\rho_1\, x^T(k) x(k) + \rho_2\, x^T(k-d(k))\, x(k-d(k))\big).
\end{aligned} \tag{15}$$

Substituting (15) into (14) yields

$$\begin{aligned}
\mathbb{E}\{\Delta V(k)\} &\le \mathbb{E}\Big\{\alpha^T(k)\,\Xi_1^T P\,\Xi_1\, \alpha(k) + \lambda^* \rho_1\, \alpha^T(k)\,\Xi_2^T \Xi_2\, \alpha(k) + \lambda^* \rho_2\, \alpha^T(k)\,\Xi_4^T \Xi_4\, \alpha(k) \\
&\qquad + \alpha^T(k)\,\Xi_3^T(\tau S_1 + (d_M - \tau m) S_2)\,\Xi_3\, \alpha(k) - \alpha^T(k)\,\Xi_2^T P\,\Xi_2\, \alpha(k) + \alpha^T(k)\, W_{Q1}^T \bar{Q}_1 W_{Q1}\, \alpha(k) \\
&\qquad + \alpha^T(k)\,\Xi_2^T Q_2\, \Xi_2\, \alpha(k) - \alpha^T(k)\, W_{Q2}^T Q_2 W_{Q2}\, \alpha(k) + (d_M - \tau m + 1)\,\alpha^T(k)\,\Xi_2^T R\,\Xi_2\, \alpha(k) - \alpha^T(k)\,\Xi_4^T R\,\Xi_4\, \alpha(k) \\
&\qquad - \sum_{j=k-\tau}^{k-1} \delta^T(j) S_1 \delta(j) - \sum_{j=k-d(k)}^{k-\tau m-1} \delta^T(j) S_2 \delta(j) - \sum_{j=k-d_M}^{k-d(k)-1} \delta^T(j) S_2 \delta(j)\Big\},
\end{aligned} \tag{16}$$

where

$$\alpha(k) = [\,G^T(k)\ \ x^T(k-\tau m)\ \ x^T(k-d(k))\ \ x^T(k-d_M)\ \ G^T(x(k))\ \ H^T(x(k-d(k)))\,]^T.$$

According to the definition of $\delta(j)$, for any matrices $X$, $Y$, and $Z$ the following equations always hold:

$$2\alpha^T(k)\, X \Big[x(k) - x(k-\tau) - \sum_{j=k-\tau}^{k-1} \delta(j)\Big] = 0, \tag{17}$$

$$2\alpha^T(k)\, Y \Big[x(k-\tau m) - x(k-d(k)) - \sum_{j=k-d(k)}^{k-\tau m-1} \delta(j)\Big] = 0, \tag{18}$$

$$2\alpha^T(k)\, Z \Big[x(k-d(k)) - x(k-d_M) - \sum_{j=k-d_M}^{k-d(k)-1} \delta(j)\Big] = 0. \tag{19}$$

On the other hand, for any appropriately dimensioned matrices $M > 0$ and $N > 0$, the following equations are true:

$$0 = \tau\,\alpha^T(k) M \alpha(k) - \sum_{j=k-\tau}^{k-1} \alpha^T(k) M \alpha(k), \tag{20}$$

$$0 = (d_M - \tau m)\,\alpha^T(k) N \alpha(k) - \sum_{j=k-d(k)}^{k-\tau m-1} \alpha^T(k) N \alpha(k) - \sum_{j=k-d_M}^{k-d(k)-1} \alpha^T(k) N \alpha(k). \tag{21}$$

Substituting (17)–(21) into (16) yields

$$\begin{aligned}
\mathbb{E}\{\Delta V(k)\} &\le \mathbb{E}\Big\{\alpha^T(k)\big(\Phi + \Psi + \Psi^T + \Upsilon + \Xi_1^T P\,\Xi_1 + \Xi_2^T Q_2\,\Xi_2 + (d_M - \tau m + 1)\,\Xi_2^T R\,\Xi_2 + \Xi_3^T(\tau S_1 + (d_M - \tau m) S_2)\,\Xi_3\big)\alpha(k) \\
&\qquad - \sum_{j=k-\tau}^{k-1} \beta^T(k, j)\,\Pi_1\, \beta(k, j) - \sum_{j=k-d(k)}^{k-\tau m-1} \beta^T(k, j)\,\Pi_2\, \beta(k, j) - \sum_{j=k-d_M}^{k-d(k)-1} \beta^T(k, j)\,\Pi_3\, \beta(k, j)\Big\},
\end{aligned} \tag{22}$$

where

$$\beta(k, j) = [\,\alpha^T(k)\ \ \delta^T(j)\,]^T.$$

From (3)–(5), it follows that

$$(g_j(x_j(k)) - l_j^+ x_j(k))(g_j(x_j(k)) - l_j^- x_j(k)) \le 0, \qquad j = 1, 2, \ldots, n, \tag{23}$$

$$(h_j(x_j(k-d(k))) - u_j^+ x_j(k-d(k)))(h_j(x_j(k-d(k))) - u_j^- x_j(k-d(k))) \le 0, \qquad j = 1, 2, \ldots, n, \tag{24}$$

which are equivalent to

$$\begin{bmatrix} x(k) \\ G(x(k)) \end{bmatrix}^T
\begin{bmatrix} l_j^+ l_j^-\, e_j e_j^T & -\dfrac{l_j^+ + l_j^-}{2}\, e_j e_j^T \\[6pt] -\dfrac{l_j^+ + l_j^-}{2}\, e_j e_j^T & e_j e_j^T \end{bmatrix}
\begin{bmatrix} x(k) \\ G(x(k)) \end{bmatrix} \le 0, \qquad j = 1, 2, \ldots, n, \tag{25}$$

$$\begin{bmatrix} x(k-d(k)) \\ H(x(k-d(k))) \end{bmatrix}^T
\begin{bmatrix} u_j^+ u_j^-\, e_j e_j^T & -\dfrac{u_j^+ + u_j^-}{2}\, e_j e_j^T \\[6pt] -\dfrac{u_j^+ + u_j^-}{2}\, e_j e_j^T & e_j e_j^T \end{bmatrix}
\begin{bmatrix} x(k-d(k)) \\ H(x(k-d(k))) \end{bmatrix} \le 0, \qquad j = 1, 2, \ldots, n, \tag{26}$$

where $e_j$ denotes the unit column vector with a one in its $j$-th entry and zeros elsewhere.


Then, from (22), (25), and (26), we have

$$\begin{aligned}
\mathbb{E}\{\Delta V(k)\} &\le \mathbb{E}\Big\{\alpha^T(k)\big[\Phi + \Psi + \Psi^T + \Upsilon + \Xi_1^T P\,\Xi_1 + \Xi_2^T Q_2\,\Xi_2 + (d_M - \tau m + 1)\,\Xi_2^T R\,\Xi_2 + \Xi_3^T(\tau S_1 + (d_M - \tau m) S_2)\,\Xi_3\big]\alpha(k) \\
&\qquad - \sum_{j=k-\tau}^{k-1} \beta^T(k, j)\,\Pi_1\, \beta(k, j) - \sum_{j=k-d(k)}^{k-\tau m-1} \beta^T(k, j)\,\Pi_2\, \beta(k, j) - \sum_{j=k-d_M}^{k-d(k)-1} \beta^T(k, j)\,\Pi_3\, \beta(k, j) \\
&\qquad - \begin{bmatrix} x(k) \\ G(x(k)) \end{bmatrix}^T \begin{bmatrix} \Lambda L_1 & -\Lambda L_2 \\ -\Lambda L_2 & \Lambda \end{bmatrix} \begin{bmatrix} x(k) \\ G(x(k)) \end{bmatrix}
 - \begin{bmatrix} x(k-d(k)) \\ H(x(k-d(k))) \end{bmatrix}^T \begin{bmatrix} \Gamma U_1 & -\Gamma U_2 \\ -\Gamma U_2 & \Gamma \end{bmatrix} \begin{bmatrix} x(k-d(k)) \\ H(x(k-d(k))) \end{bmatrix}\Big\} \\
&= \mathbb{E}\Big\{\alpha^T(k)\big[\Phi + \Psi + \Psi^T + \Upsilon - W_1 - W_2 + \Xi_1^T P\,\Xi_1 + \Xi_2^T Q_2\,\Xi_2 + (d_M - \tau m + 1)\,\Xi_2^T R\,\Xi_2 + \Xi_3^T(\tau S_1 + (d_M - \tau m) S_2)\,\Xi_3\big]\alpha(k) \\
&\qquad - \sum_{j=k-\tau}^{k-1} \beta^T(k, j)\,\Pi_1\, \beta(k, j) - \sum_{j=k-d(k)}^{k-\tau m-1} \beta^T(k, j)\,\Pi_2\, \beta(k, j) - \sum_{j=k-d_M}^{k-d(k)-1} \beta^T(k, j)\,\Pi_3\, \beta(k, j)\Big\},
\end{aligned} \tag{27}$$

where the first inequality multiplies (25) by $\lambda_j \ge 0$ and (26) by $\gamma_j \ge 0$ and sums over $j$. According to Lemma 1, (7) implies that

$$\begin{aligned}
\mathbb{E}\Big\{&\alpha^T(k)\big[\Phi + \Psi + \Psi^T + \Upsilon - W_1 - W_2 + \Xi_1^T P\,\Xi_1 + \Xi_2^T Q_2\,\Xi_2 + (d_M - \tau m + 1)\,\Xi_2^T R\,\Xi_2 + \Xi_3^T(\tau S_1 + (d_M - \tau m) S_2)\,\Xi_3\big]\alpha(k) \\
&- \sum_{j=k-\tau}^{k-1} \beta^T(k, j)\,\Pi_1\, \beta(k, j) - \sum_{j=k-d(k)}^{k-\tau m-1} \beta^T(k, j)\,\Pi_2\, \beta(k, j) - \sum_{j=k-d_M}^{k-d(k)-1} \beta^T(k, j)\,\Pi_3\, \beta(k, j)\Big\} < 0.
\end{aligned} \tag{28}$$

From (27) and (28), we obtain

$$\mathbb{E}\{\Delta V(k)\} \le -\varepsilon\, \|\alpha(k)\|^2, \tag{29}$$

where $\varepsilon$ is a positive scalar. Based on (29), we know from Lyapunov stability theory that the DSNNs in (1) with time-varying delays are asymptotically stable in the mean square (see [18,20]); this completes the proof. □

If $\tau m = d(k) = d_M$, namely, the time delay is constant, we can derive the following corollary.

Corollary 1. Suppose that Assumptions 1–3 hold. Given positive integers $\tau$ and $m$, the DSNNs in (1) are asymptotically stable in the mean square if there exist a scalar $\tilde{\lambda}^* > 0$, matrices $P > 0$, $M > 0$, $Q_i > 0$ ($i = 1, 2$), $S > 0$, diagonal matrices $\Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n) > 0$ and $\Gamma = \operatorname{diag}(\gamma_1, \gamma_2, \ldots, \gamma_n) > 0$, and a matrix $X$ satisfying

$$P + \tau S < \tilde{\lambda}^* I,$$

$$\begin{bmatrix}
\tilde{\Phi} + \tilde{\Psi} + \tilde{\Psi}^T + \tilde{\Upsilon} - \tilde{W}_1 - \tilde{W}_2 & \tilde{\Xi}_1^T P & \tilde{\Xi}_2^T Q_2 & \sqrt{\tau}\,\tilde{\Xi}_3^T S \\
* & -P & 0 & 0 \\
* & * & -Q_2 & 0 \\
* & * & * & -S
\end{bmatrix} < 0,
\qquad
\tilde{\Pi} = \begin{bmatrix} M & X \\ * & S \end{bmatrix} \ge 0,$$

where

$$\tilde{\Phi} = -\tilde{\Xi}_2^T P\,\tilde{\Xi}_2 + \tilde{\lambda}^* \rho_1\,\tilde{\Xi}_2^T \tilde{\Xi}_2 + \tilde{\Xi}_4^T(\tilde{\lambda}^* \rho_2 I - Q_2)\,\tilde{\Xi}_4 + \tilde{W}_{Q1}^T \bar{Q}_1 \tilde{W}_{Q1},$$

$$\tilde{\Psi} = X\,[\,I_n\ \ {-I_n}\ \ 0_{n,mn+n}\,], \qquad \tilde{\Upsilon} = \tau M, \qquad \bar{Q}_1 = \begin{bmatrix} Q_1 & 0 \\ 0 & -Q_1 \end{bmatrix},$$

$$\tilde{W}_1 = \begin{bmatrix} \tilde{\Xi}_2 \\ \tilde{\Xi}_5 \end{bmatrix}^T \begin{bmatrix} \Lambda L_1 & -\Lambda L_2 \\ -\Lambda L_2 & \Lambda \end{bmatrix} \begin{bmatrix} \tilde{\Xi}_2 \\ \tilde{\Xi}_5 \end{bmatrix}, \qquad \tilde{W}_2 = \begin{bmatrix} \tilde{\Xi}_4 \\ \tilde{\Xi}_6 \end{bmatrix}^T \begin{bmatrix} \Gamma U_1 & -\Gamma U_2 \\ -\Gamma U_2 & \Gamma \end{bmatrix} \begin{bmatrix} \tilde{\Xi}_4 \\ \tilde{\Xi}_6 \end{bmatrix},$$

$$\tilde{W}_{Q1} = \begin{bmatrix} I_{mn} & 0_{mn,n} & 0_{mn,2n} \\ 0_{mn,n} & I_{mn} & 0_{mn,2n} \end{bmatrix},$$

$$\tilde{\Xi}_1 = [\,A\ \ 0_{n,mn}\ \ B\ \ D\,], \qquad \tilde{\Xi}_2 = [\,I_n\ \ 0_{n,mn+2n}\,], \qquad \tilde{\Xi}_3 = \tilde{\Xi}_1 - \tilde{\Xi}_2, \qquad \tilde{\Xi}_4 = [\,0_{n,mn}\ \ I_n\ \ 0_{n,2n}\,],$$

$$\tilde{\Xi}_5 = [\,0_{n,mn+n}\ \ I_n\ \ 0_{n,n}\,], \qquad \tilde{\Xi}_6 = [\,0_{n,mn+2n}\ \ I_n\,].$$

Proof. Choose the Lyapunov functional candidate

$$\tilde{V}(k) = \tilde{V}_1(k) + \tilde{V}_2(k) + \tilde{V}_3(k),$$

where

$$\tilde{V}_1(k) = x^T(k) P x(k), \qquad \tilde{V}_2(k) = \sum_{i=k-\tau}^{k-1} G^T(i) Q_1 G(i) + \sum_{i=k-d_M}^{k-1} x^T(i) Q_2 x(i), \qquad \tilde{V}_3(k) = \sum_{i=-\tau}^{-1} \ \sum_{j=k+i}^{k-1} \delta^T(j) S \delta(j),$$

with $\delta(j) = x(j+1) - x(j)$ and $G(i)$ defined as in the proof of Theorem 1. The proof then follows by the same method as in Theorem 1, which completes the proof. □

Remark 4. Based on the delay partitioning idea and some inequalities, Theorem 1 presents a new stability criterion for the DSNNs in (1) in terms of LMIs, which yields a less conservative result. The asymptotic stability can be checked by solving the set of LMIs with numerical software. We divide $d(k)$ into a constant part and a time-varying part, and the delay partitioning idea is applied only to the lower bound of the delay range. Note that the dimensions of the LMI stability condition depend on the partitioning number $m$.

4. Illustrative example

In this section, a numerical example is presented to demonstrate the usefulness of the proposed method for the asymptotic stability of the DSNNs in (1) with time-varying delays.

Example 1. Consider the DSNNs in (1) with the following parameters [18]:

$$A = \begin{bmatrix} 0.4 & 0 & 0 \\ 0 & 0.5 & 0 \\ 0 & 0 & 0.3 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 & 0.3 & 0.1 \\ 0 & 0.4 & 0.1 \\ 0.1 & 0.1 & 0.2 \end{bmatrix}, \qquad D = \begin{bmatrix} 0.2 & 0.2 & 0.1 \\ 0.2 & 0.2 & 0.1 \\ 0.1 & 0.2 & 0.3 \end{bmatrix}, \qquad \rho_1 = \rho_2 = 0.05.$$

Take the activation functions as

$$g_1(s) = \tanh(0.6 s) - 0.2 \sin s, \qquad g_2(s) = \tanh(0.4 s), \qquad g_3(s) = \tanh(0.2 s),$$
$$h_1(s) = \tanh(0.4 s) + 0.2 \sin s, \qquad h_2(s) = \tanh(0.2 s), \qquad h_3(s) = \tanh(0.4 s).$$

From the parameters above, it can be verified that

$$L_1 = \begin{bmatrix} -0.16 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad L_2 = \begin{bmatrix} 0.3 & 0 & 0 \\ 0 & 0.2 & 0 \\ 0 & 0 & 0.1 \end{bmatrix}, \qquad U_1 = \begin{bmatrix} -0.12 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad U_2 = \begin{bmatrix} 0.2 & 0 & 0 \\ 0 & 0.1 & 0 \\ 0 & 0 & 0.2 \end{bmatrix}.$$

Firstly, our purpose is to find the maximum upper bound $d_M$ for different $d_m$. Assume the lower bound is $d_m = 2$; the condition in [18] then gives a maximum upper bound of $d_M = 6$. However, by utilizing the method proposed in this paper,

we find that the maximum upper bound is $d_M = 9$ ($\tau = 2$, $m = 1$). A more detailed comparison between the two conditions is given in Table 1, which displays the achieved upper bounds $d_M$ for different lower bounds $d_m$. From Table 1, we can see that Theorem 1 is less conservative than the criterion proposed in [18].

Table 1
Allowable upper bound of $d_M$ for various $d_m$.

Methods      d_m=2         d_m=4         d_m=6         d_m=8         d_m=10        d_m=15        d_m=20
[18]         6             8             10            12            14            19            24
Theorem 1    9             11            13            15            17            22            27
             (tau=2, m=1)  (tau=2, m=2)  (tau=3, m=2)  (tau=2, m=4)  (tau=5, m=2)  (tau=5, m=3)  (tau=5, m=4)
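To illustrate how such feasibility tests can be run in practice, here is a minimal sketch in Python with CVXPY. The paper only says the conditions are "checked by numerical software" and does not specify a tool, so this is our assumption; for brevity the sketch encodes a much simpler delay-free quadratic stability LMI for the linear part of (1), not the full conditions (6)–(8) of Theorem 1.

```python
# Minimal LMI feasibility sketch (assumption: CVXPY + SCS as the "numerical
# software"). Checks P > 0 and A^T P A - P < 0 for the diagonal matrix A of
# Example 1; this is a toy stand-in for the full conditions (6)-(8).
import cvxpy as cp
import numpy as np

A = np.diag([0.4, 0.5, 0.3])   # matrix A from Example 1 (as reconstructed)
n = A.shape[0]
eps = 1e-6                     # small margin to enforce strict inequalities

P = cp.Variable((n, n), symmetric=True)
constraints = [
    P >> eps * np.eye(n),                    # P > 0
    A.T @ P @ A - P << -eps * np.eye(n),     # A^T P A - P < 0
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("LMI feasibility status:", prob.status)  # 'optimal' => feasible
```

A full implementation of Theorem 1 would declare $P$, $R$, $M$, $N$, $Q_1$, $Q_2$, $S_1$, $S_2$, $\Lambda$, $\Gamma$, $X$, $Y$, $Z$ as variables, assemble the block matrices of (6)–(8) (e.g., with cp.bmat), and, for each pair $(\tau, m)$, sweep $d_M$ upward until the problem becomes infeasible; this is how comparison tables like Table 1 are typically produced.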

5. Conclusions

In this paper, the stability analysis problem for DSNNs with time-varying delays has been investigated. By utilizing a new Lyapunov–Krasovskii functional combined with the delay partitioning idea and some inequalities, we have converted the stability analysis problem into an LMI feasibility problem, which can be checked by numerical software. A numerical example has been provided to show that the proposed condition is less conservative than those in the existing literature.

References

[1] S. Arik, An analysis of exponential stability of delayed neural networks with time varying delays, Neural Networks 17 (7) (2004) 1027–1031.
[2] J. Cao, M. Xiao, Stability and Hopf bifurcation in a simplified BAM neural network with two time delays, IEEE Trans. Neural Networks 18 (2007) 416–430.
[3] J. Cao, K. Yuan, H. Li, Global asymptotical stability of recurrent neural networks with multiple discrete delays and distributed delays, IEEE Trans. Neural Networks 17 (2006) 1646–1651.
[4] T.P. Chen, W.L. Lu, G.R. Chen, Dynamical behaviors of a large class of general delayed neural networks, Neural Comput. 17 (2005) 949–968.
[5] H. Gao, T. Chen, New results on stability of discrete-time systems with time-varying state delay, IEEE Trans. Automat. Control 52 (2007) 328–334.
[6] H. Gao, J. Lam, C. Wang, Y. Wang, Delay-dependent output-feedback stabilization of discrete-time systems with time-varying state delay, IEE Proc. Control Theory Appl. 151 (6) (2004) 691–698.
[7] S. Mou, H. Gao, J. Lam, W. Qiang, A new criterion of delay-dependent asymptotic stability for Hopfield neural networks with time delay, IEEE Trans. Neural Networks 19 (2008) 532–535.
[8] F. Gouaisbaut, D. Peaucelle, Delay-dependent stability analysis of linear time delay systems, in: IFAC Workshop on Time Delay Systems, L'Aquila, Italy, 2006.
[9] Y. He, G. Wang, M. Wu, LMI-based stability criteria for neural networks with multiple time-varying delays, Physica D 212 (2005) 126–136.
[10] D. Ho, J. Lam, J. Xu, H. Ka Tam, Neural computation for robust approximate pole assignment—a survey, Neurocomputing 25 (1) (1999) 191–211.
[11] S. Hu, J. Wang, Global robust stability of a class of discrete-time interval neural networks, IEEE Trans. Circuits Syst. 53 (1) (2006) 129–138.
[12] H. Huang, D.W.C. Ho, J. Lam, Stochastic stability analysis of fuzzy Hopfield neural networks with time-varying delays, IEEE Trans. Circuits Syst. (I) 52 (5) (2005) 251–255.
[13] H. Li, B. Chen, Q. Zhou, S. Fang, Robust exponential stability for uncertain stochastic neural networks with discrete and distributed time-varying delays, Phys. Lett. A 372 (2008) 3385–3394.
[14] H. Li, B. Chen, C. Lin, Q. Zhou, Mean square exponential stability of stochastic fuzzy Hopfield neural networks with discrete and distributed time-varying delays, Neurocomputing 72 (2009) 2017–2023.
[15] H. Li, B. Chen, Q. Zhou, W. Qian, Robust stability for uncertain delayed fuzzy Hopfield neural networks with Markovian jumping parameters, IEEE Trans. Syst. Man Cybern. B 39 (2009) 94–102.
[16] H. Li, B. Chen, Q. Zhou, C. Lin, Robust exponential stability for delayed uncertain Hopfield neural networks with Markovian jumping parameters, Phys. Lett. A 372 (2008) 4996–5003.
[17] Y. Liu, Z. Wang, X. Liu, Global exponential stability of generalized recurrent neural networks with discrete and distributed delays, Neural Networks 19 (5) (2006) 667–675.
[18] Y. Liu, Z. Wang, X. Liu, Robust stability of discrete-time stochastic neural networks with time-varying delays, Neurocomputing 71 (4–6) (2008) 823–833.
[19] Y. Liu, Z. Wang, A. Serrano, X. Liu, Discrete-time recurrent neural networks with time-varying delays: exponential stability analysis, Phys. Lett. A 362 (2007) 480–488.
[20] M. Luo, S. Zhong, R. Wang, W. Kang, Robust stability analysis of discrete-time stochastic neural networks systems with time-varying delays, Appl. Math. Comput. 209 (4) (2009) 305–313.
[21] K. Patan, Stability analysis and the stabilization of a class of discrete-time dynamic neural networks, IEEE Trans. Neural Networks 18 (3) (2007) 660–673.
[22] P. Shi, E.K. Boukas, Y. Shi, On stochastic stabilization of discrete-time Markovian jump systems with delay in state, Stochastic Anal. Appl. 21 (2003) 935–951.
[23] P. Shi, E.K. Boukas, Y. Shi, R.K. Agarwal, Optimal guaranteed cost control of uncertain discrete time-delay systems, J. Comput. Appl. Math. 157 (2003) 435–451.
[24] P. Shi, Y. Xia, G.P. Liu, D. Rees, On designing of sliding-mode control for stochastic jump systems, IEEE Trans. Autom. Control 51 (1) (2006) 97–103.
[25] Q. Song, J. Liang, Z. Wang, Passivity analysis of discrete-time stochastic neural networks with time-varying delays, Neurocomputing 72 (2009) 1782–1788.
[26] Q. Song, Z. Wang, An analysis on existence and global exponential stability of periodic solutions for BAM neural networks with time-varying delays, Nonlinear Anal. Real World Appl. 8 (2007) 1224–1234.
[27] Z. Wang, D.W.C. Ho, X. Liu, Variance-constrained filtering for uncertain stochastic systems with missing measurements, IEEE Trans. Automat. Control 48 (7) (2003) 1254–1258.
[28] Z. Wang, D.W.C. Ho, X. Liu, A note on the robust stability of uncertain stochastic fuzzy systems with time-delays, IEEE Trans. Syst. Man Cybern. Part A 34 (2004) 570–576.
[29] Z. Wang, D.W.C. Ho, X. Liu, Variance-constrained control for uncertain stochastic systems with missing measurement, IEEE Trans. Syst. Man Cybern. Part A 35 (2005) 746–753.
[30] Z. Wang, B. Huang, H. Unbehauen, Robust H∞ observer design of linear state delayed systems with parametric uncertainty: the discrete-time case, Automatica 35 (1999) 1161–1167.
[31] Z. Wang, Y. Liu, X. Liu, Stability analysis for stochastic Cohen–Grossberg neural networks with mixed time delays, IEEE Trans. Neural Networks 17 (3) (2006) 814–820.
[32] Z. Wang, Y.R. Liu, K. Fraser, X.H. Liu, Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays, Phys. Lett. A 354 (4) (2006) 288–297.
[33] Z. Wang, H. Shu, J. Fang, X. Liu, Robust stability for stochastic Hopfield neural networks with time delays, Nonlinear Anal. Real World Appl. 7 (2006) 1119–1128.
[34] B. Zhang, S. Xu, Y. Zou, Improved delay-dependent exponential stability criteria for discrete-time recurrent neural networks with time-varying delays, Neurocomputing 72 (2008) 321–330.

ARTICLE IN PRESS 748

Y. Ou et al. / Neurocomputing 73 (2010) 740–748 Yan Ou is currently Student of School of Astronautics of Harbin Institute of Technology. His research interests are robust stability analysis based on neural network, system modeling and programming algorithm.

Hongyang Liu was born in Heilongjiang Province, China, in 1987. He is currently studying toward the B.S. degree at Harbin Institute of Technology, Harbin, China. His research interests include stochastic control, neural networks based control, filter theory, time-delay systems.

Yulin Si received the B.S. degree in School of Astronautics from Harbin Institute of Technology, Harbin, China, in 2009. He is studying for the M.S. degree in Control Science and Engineering in Harbin Institute of Technology, Harbin, China. His research interests include neural networks, robust control and flight control.

Zhiguang Feng received the B.S. degree in automation from Qufu Normal University, Rizhao, China, in 2006, and the M.S. degree in Control Science and Engineering from Harbin Institute of Technology, Harbin, China, in 2009, respectively. He is studying for the Ph.D. Degree in the Department of Mechanical Engineering, The University of Hong Kong, Hong Kong. His research interests include robust control, singular systems, time-delay systems and dissipative systems.