Dissipativity analysis for discrete stochastic neural networks with Markovian delays and partially known transition matrix


Applied Mathematics and Computation 228 (2014) 292–310

Contents lists available at ScienceDirect

Applied Mathematics and Computation journal homepage: www.elsevier.com/locate/amc

Magdi S. Mahmoud ⇑, Gulam Dastagir Khan

Systems Engineering Department, KFUPM, P.O. Box 5067, Dhahran 31261, Saudi Arabia

Keywords: Delay-dependent stability; Dissipativity; Neural networks; Markov chain; Time-delays; Partially known transition matrix

Abstract: The problem of dissipativity analysis for a class of discrete-time stochastic neural networks with discrete and finite-distributed delays is considered in this paper. System parameters are described by a discrete-time Markov chain. A discretized Jensen inequality and a lower bounds lemma are employed to reduce the number of decision variables and to handle the finite-sum quadratic terms efficiently. A sufficient condition is derived to ensure that the neural networks under consideration are globally delay-dependent asymptotically stable in the mean square and strictly $(Z, S, G)$-$\alpha$-dissipative. Next, the case in which the transition probabilities of the Markovian channels are partially known is discussed. Numerical examples are given to emphasize the reduced conservatism of the developed results.

© 2013 Elsevier Inc. All rights reserved.

1. Introduction

Research investigations into neural networks have received considerable attention during the past several years. This stems from the fact that neural networks have been successfully applied in a variety of areas, such as signal processing, pattern recognition, and combinatorial optimization [1–6]. In recent years, neural networks with time-delay have also been studied extensively, because time delays do occur in electronic implementations of analog neural networks as a result of signal transmission and the finite switching speed of amplifiers. In turn, this may lead to instability and poor performance [7–11]. A great number of important and interesting results have been obtained on the analysis and synthesis of time-delay neural networks, including stability analysis, state estimation, and passivity. Typically, the stability problem has been investigated for continuous-time neural networks with time delay in [9,12], and stability conditions have been proposed based on the linear matrix inequality (LMI) approach. In the discrete-time setting, some sufficient criteria have been established in [13] to ensure the delay-dependent stability of neural networks with time-varying delay. In [14–16], the state estimation problem has been investigated for continuous-time neural networks with time-delay, and some algorithms have been presented to compute the desired state estimators. Results on the state estimation problem of discrete-time neural networks with time-delay can be found in [25–28]. The problem of passivity analysis for time-delay neural networks has been considered in [17,19], and some types of delay-dependent passivity conditions have been derived. It should be pointed out that all the time-delays considered in the above-mentioned references are of the discrete nature. It is well known that neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths. In effect, there will be a distribution of propagation delays, and hence the signal propagation is no longer instantaneous and cannot be modeled with discrete delays [20,21].

⇑ Corresponding author. E-mail addresses: [email protected], [email protected] (M.S. Mahmoud).
0096-3003/$ - see front matter © 2013 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.amc.2013.11.087


Generally speaking, the distributed delays in neural networks can be classified into two types: finite-distributed delays and infinite-distributed delays. Continuous-time neural networks with infinite-distributed delays have been investigated in [21]. Based on the LMI method and other approaches, continuous-time neural networks with finite-distributed delays have also been discussed in [22–24]. The corresponding results for discrete-time neural networks with distributed delays are relatively few. However, it has been shown that discrete-time neural networks are more relevant than continuous-time ones in our digital world. In [29], the authors made the first attempt to discuss the problem of stability analysis for discrete-time neural networks with infinite-distributed delays. The results obtained include both deterministic and stochastic cases, and thus are very general and powerful. The problem of passivity analysis for a class of uncertain discrete-time stochastic neural networks with infinite-distributed delays has been investigated in [30], where some delay-dependent criteria have been established to ensure passivity. In [31], finite-distributed delays have been introduced in the discrete-time setting for the first time. The authors further investigated the state estimation problem for discrete-time neural networks with Markov jumping parameters as well as mode-dependent mixed time-delays. In both cases, sufficient conditions have been established that guarantee the existence of the state estimators. Some recently reported results [32–37,50,51] have dealt with several important aspects that are closely related to the topic under consideration in this paper.

On another research front, the past decade has witnessed the rapid development of dissipative theory in the systems and control area. The reason is mainly twofold:
1. Dissipative theory gives a framework for the design and analysis of control systems using an input–output description based on energy-related considerations [38], and
2. Dissipative theory serves as a powerful, or even indispensable, tool in characterizing important system behaviors, such as stability and passivity, and has close connections with the passivity theorem, the bounded real lemma, the Kalman–Yakubovich lemma, and the circle criterion [39].

Very recently, in [40], the robust reliable dissipative filtering problem has been investigated for uncertain discrete-time singular systems with interval time-varying delays and sensor failures. The problem of static output-feedback dissipative control has been studied for linear continuous-time systems based on an augmented system approach in [44]: a necessary and sufficient condition for the existence of a desired controller has been given, and a corresponding iterative algorithm has been developed to solve it. In this regard, we note that the problem of dissipativity analysis has also been investigated for neural networks with time-delay in [45,46], where some delay-dependent sufficient conditions have been given to guarantee dissipativity. However, the neural networks considered in [45–47] are of the continuous-time nature. Up to now, there is little information in the published literature about the dissipativity analysis problem for discrete-time stochastic neural networks with discrete and finite-distributed delays. It is, therefore, the main purpose of the present research to bridge this gap by making the first attempt to deal with the dissipativity analysis problem for discrete-time stochastic neural networks with both time-varying discrete and finite-distributed delays.

In this paper, we consider the problem of dissipativity analysis for discrete-time stochastic neural networks with both time-varying discrete and finite-distributed delays. Based on the discretized Jensen inequality and the lower bounds lemma, a condition is established ensuring the stability and strict $(Z, S, G)$-$\alpha$-dissipativity of the considered neural networks. The developed condition depends not only on the discrete delay but also on the finite-distributed delay. Using the derived condition, we also develop results on several special cases. In all cases, the obtained results have advantages over the existing ones: they not only exhibit less conservatism but also require fewer decision variables. Three numerical examples are given to illustrate the effectiveness and superiority of the proposed methods.

Notation: The notation used throughout this paper is fairly standard. $\mathbb{R}^r$ and $\mathbb{R}^{p \times r}$ denote the $r$-dimensional Euclidean space and the set of all $p \times r$ real matrices, respectively. The notation $X > Y$ ($X \ge Y$), where $X$ and $Y$ are symmetric matrices, means that $X - Y$ is positive definite (positive semidefinite). $I$ and $0$ represent the identity matrix and a zero matrix, respectively. $(\Omega, \mathcal{F}, \mathcal{P})$ is a probability space: $\Omega$ is the sample space, $\mathcal{F}$ is the $\sigma$-algebra of subsets of the sample space, and $\mathcal{P}$ is the probability measure on $\mathcal{F}$. $\mathbb{E}[\cdot]$ denotes the expectation operator with respect to the probability measure $\mathcal{P}$. For integers $a$ and $b$ with $a < b$, $\mathbb{N}[a, b] = \{a, a+1, \ldots, b-1, b\}$. The superscript "$T$" represents the transpose, and $\mathrm{diag}\{\ldots\}$ stands for a block-diagonal matrix. $\|\cdot\|$ denotes the Euclidean norm of a vector and its induced norm of a matrix, and $l_2[0, \infty)$ is the space of square-summable infinite sequences. In symmetric block matrices or complex matrix expressions, the symbol $*$ represents a term induced by symmetry. Matrices, if not explicitly specified, are assumed to have compatible dimensions.

2. Preliminaries

Consider the following discrete-time stochastic neural network with mixed time-delays:

$$\begin{cases} x(k+1) = Dx(k) + Ag(x(k)) + Bg(x(k-d(k))) + u(k) + C\sum_{v=1}^{\tau(k)} g(x(k-v)) + \hat\sigma(k, x(k), x(k-d(k)))\,\omega(k),\\ y(k) = g(x(k)),\\ x(k) = \phi(k), \quad k \in \mathbb{N}[-\max\{\bar d_1, \bar d_2\}, 0], \end{cases} \tag{1}$$

where $x(k) = [x_1(k)\; x_2(k) \ldots x_r(k)]^T \in \mathbb{R}^r$ is the state, $g(x(k)) = [g_1(x_1(k))\; g_2(x_2(k)) \ldots g_r(x_r(k))]^T$, and $x_q(k)$ is the state of the $q$-th neuron at time $k$; $g_q(x_q(k))$ denotes the activation function of the $q$-th neuron at time $k$; $u(k) \in l_2[0, \infty)$ is the input; $y(k) \in \mathbb{R}^q$ is the measured output; the function $\phi(k)$ is the initial condition; $D = \mathrm{diag}\{d_1, d_2, \ldots, d_r\}$ describes the rate with which each neuron resets its potential to the resting state in isolation when disconnected from the network and external inputs; $A = (a_{ql})_{r\times r}$, $B = (b_{ql})_{r\times r}$, and $C = (c_{ql})_{r\times r}$ are, respectively, the connection weight matrix, the discretely delayed connection weight matrix, and the distributively delayed connection weight matrix; and $\omega(k)$ is a scalar Wiener process (Brownian motion) on $(\Omega, \mathcal{F}, \mathcal{P})$ with

$$\mathbb{E}[\omega(k)] = 0, \qquad \mathbb{E}[\omega(k)^2] = 1, \qquad \mathbb{E}[\omega(q)\omega(l)] = 0 \;\; (q \ne l). \tag{2}$$

Here $d(k)$ and $\tau(k)$ denote the discrete delay and the finite-distributed delay, respectively, and satisfy $0 < \underline d_1 \le d(k) \le \bar d_1$ and $0 < \underline d_2 \le \tau(k) \le \bar d_2$, where $\underline d_1, \bar d_1, \underline d_2, \bar d_2$ are positive integers. $d(k)$ and $\tau(k)$ are assumed to be modeled as two independent homogeneous Markov chains, taking values in $S_1 = \{0, 1, \ldots, \bar d_1\}$ and $S_2 = \{0, 1, \ldots, \bar d_2\}$, respectively. The transition probabilities of $d(k)$ (jumping from mode $i$ to $j$) and $\tau(k)$ (jumping from mode $m$ to $n$) are defined by

$$\pi_{ij} = \Pr(d(k+1) = j \mid d(k) = i), \qquad \lambda_{mn} = \Pr(\tau(k+1) = n \mid \tau(k) = m),$$

collected (for a three-mode example) in the matrices

$$\pi = \begin{bmatrix} \pi_{11} & \pi_{12} & \pi_{13}\\ \pi_{21} & \pi_{22} & \pi_{23}\\ \pi_{31} & \pi_{32} & \pi_{33}\end{bmatrix}, \qquad \lambda = \begin{bmatrix} \lambda_{11} & \lambda_{12} & \lambda_{13}\\ \lambda_{21} & \lambda_{22} & \lambda_{23}\\ \lambda_{31} & \lambda_{32} & \lambda_{33}\end{bmatrix},$$

where $\pi_{ij} \ge 0$ for $i, j \in S_1$, $\lambda_{mn} \ge 0$ for $m, n \in S_2$, $\sum_{j=0}^{\bar d_1} \pi_{ij} = 1$, and $\sum_{n=0}^{\bar d_2} \lambda_{mn} = 1$. The transition probabilities also satisfy the structural condition

$$\pi_{ij} = 0 \;\text{ if } j \ne i+1 \text{ and } j \ne 0; \qquad \lambda_{mn} = 0 \;\text{ if } n \ne m+1 \text{ and } n \ne 0.$$
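The structural condition above says that, from delay mode $i$, the chain can only step up to mode $i+1$ or reset to mode $0$, so each row of $\pi$ has at most two nonzero entries. As a rough illustration (the bound $\bar d_1 = 3$, the growth probability, and the seed below are invented for this sketch), such a matrix can be built and the delay chain simulated as follows:

```python
import numpy as np

def structured_delay_matrix(d_bar, p_grow):
    """Transition matrix on modes {0, ..., d_bar}: from mode i the delay
    either grows to i+1 (probability p_grow) or resets to 0."""
    n = d_bar + 1
    pi = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n:
            pi[i, i + 1] = p_grow
            pi[i, 0] = 1.0 - p_grow
        else:
            pi[i, 0] = 1.0   # the top mode can only reset
    return pi

pi = structured_delay_matrix(d_bar=3, p_grow=0.6)

# every row is a probability distribution, as required
assert np.allclose(pi.sum(axis=1), 1.0)

# simulate the delay mode d(k) for a few steps
rng = np.random.default_rng(0)
d, trace = 0, [0]
for _ in range(50):
    d = rng.choice(len(pi), p=pi[d])
    trace.append(int(d))
print(min(trace) >= 0 and max(trace) <= 3)   # stays inside S1 = {0,...,3}
```

Any entry outside the two allowed positions is identically zero, which is exactly what the partially-known-probability machinery of Section 3.2 later exploits.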

In addition, it is considered hereafter that the transition probabilities of the Markov chains are only partially available, that is, some of the elements in the matrices $\pi$ and $\lambda$ are not known. For instance, a system (1) with three modes may have transition probability matrices $\pi$ and $\lambda$ of the form

$$\pi = \begin{bmatrix} \pi_{11} & \pi_{12} & ?\\ ? & \pi_{22} & \pi_{23}\\ \pi_{31} & ? & ?\end{bmatrix}, \qquad \lambda = \begin{bmatrix} ? & \lambda_{12} & ?\\ ? & ? & ?\\ ? & \lambda_{32} & ?\end{bmatrix},$$

where the unknown elements are represented by $?$. For notational clarity, for all $i \in \mathcal{I} = \{1, 2, \ldots\}$, we denote

$$I_K^i := \{j : \pi_{ij} \text{ is known}\}, \qquad I_{uK}^i := \{j : \pi_{ij} \text{ is unknown}\},$$
$$I_K^m := \{n : \lambda_{mn} \text{ is known}\}, \qquad I_{uK}^m := \{n : \lambda_{mn} \text{ is unknown}\}.$$

Moreover, if $I_K^i \ne \emptyset$, it is further described as

$$I_K^i = \{\mathcal{K}_1^i, \ldots, \mathcal{K}_r^i\}, \qquad 1 \le r \le N,$$

where $\mathcal{K}_r^i \in \mathbb{N}^+$ represents the $r$-th known element with the index $\mathcal{K}_r^i$ in the $i$-th row of the matrices $\pi$ and $\lambda$. Also, we denote

$$\pi_K^i := \sum_{j \in I_K^i} \pi_{ij}, \qquad \mathcal{P}_K^{(i,m)} := \sum_{j \in I_K^i} \sum_{n \in I_K^m} \lambda_{mn}\pi_{ij}\, P(j, n), \qquad \mathcal{P}_{uK}^{(i,m)} := \sum_{j \in I_{uK}^i} \sum_{n \in I_{uK}^m} \lambda_{mn}\pi_{ij}\, P(j, n), \qquad \bar{\mathcal{P}}(i, m) := \mathcal{P}_K^{(i,m)} \cup \mathcal{P}_{uK}^{(i,m)}.$$
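Computationally, a partially known row can be encoded with NaN in the unknown positions; the index sets $I_K^i$, $I_{uK}^i$ and the known mass $\pi_K^i$ then fall out directly. A small sketch (the known values 0.3, 0.7, etc. are made up for illustration):

```python
import numpy as np

# '?' entries encoded as NaN; the known values here are invented
pi = np.array([[0.3,    0.7,    np.nan],
               [np.nan, 0.5,    0.5   ],
               [0.2,    np.nan, np.nan]])

def known_sets(P):
    """Per-row index sets I_K (known columns) and I_uK (unknown columns)."""
    I_K  = [[int(j) for j in np.flatnonzero(~np.isnan(row))] for row in P]
    I_uK = [[int(j) for j in np.flatnonzero(np.isnan(row))] for row in P]
    return I_K, I_uK

def known_mass(P, i):
    """pi_K^i: the total probability mass of the known entries in row i."""
    return float(np.nansum(P[i]))

I_K, I_uK = known_sets(pi)
print(I_K[2], I_uK[2], known_mass(pi, 2))   # -> [0] [1, 2] 0.2
```

The quantity `1 - known_mass(pi, i)` is the probability mass that must be distributed over the unknown entries, which is how the partially-known case of Theorem 3.3 bounds the unknown terms.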

The following assumptions are imposed on neural network (1); they will be needed to develop the main results.

Assumption 2.1 [22]. The activation function $g_q(\cdot)$ in (1) is bounded and continuous, $g_q(0) = 0$, and there exist constants $\delta_q$ and $\rho_q$ such that

$$\delta_q \le \frac{g_q(a_1) - g_q(a_2)}{a_1 - a_2} \le \rho_q, \qquad q = 1, 2, \ldots, r, \tag{3}$$

where $a_1, a_2 \in \mathbb{R}$ and $a_1 \ne a_2$.
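For a concrete activation such as $\tanh$, the sector constants in (3) are $\delta_q = 0$ and $\rho_q = 1$, since $\tanh$ is nondecreasing and 1-Lipschitz. A quick numerical probe of the sector condition ($\tanh$ is chosen only for illustration; Assumption 2.1 allows any function satisfying (3)):

```python
import numpy as np

def sector_ratio(g, a1, a2):
    """The difference quotient (g(a1) - g(a2)) / (a1 - a2) from (3)."""
    return (g(a1) - g(a2)) / (a1 - a2)

rng = np.random.default_rng(1)
a1 = rng.uniform(-5.0, 5.0, 10_000)
a2 = rng.uniform(-5.0, 5.0, 10_000)
mask = np.abs(a1 - a2) > 1e-6        # Assumption 2.1 requires a1 != a2

r = sector_ratio(np.tanh, a1[mask], a2[mask])
delta, rho = 0.0, 1.0                # sector bounds for tanh
# small tolerance guards against floating-point cancellation
print(bool(r.min() >= delta - 1e-6) and bool(r.max() <= rho + 1e-6))
```

The same check applies to any candidate activation; a function that violates the bounds (e.g. a ratio exceeding $\rho_q$) falls outside the class covered by the results below.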


Assumption 2.2 [19]. $\hat\sigma(k) = \hat\sigma(k, x(k), x(k-d(k))) : \mathbb{N} \times \mathbb{R}^r \times \mathbb{R}^r \to \mathbb{R}^r$ is the diffusion coefficient vector and is assumed to satisfy

$$\hat\sigma(k)^T \hat\sigma(k) \le \begin{bmatrix} x(k)\\ x(k-d(k))\end{bmatrix}^T \begin{bmatrix} G_1 & G_2\\ * & G_3\end{bmatrix} \begin{bmatrix} x(k)\\ x(k-d(k))\end{bmatrix}, \tag{4}$$

where $\begin{bmatrix} G_1 & G_2\\ * & G_3\end{bmatrix} \ge 0$ is a known constant matrix.

Now we introduce the following definitions.

Definition 2.1 [41]. Given matrices $Z \in \mathbb{R}^{q\times q}$, $G \in \mathbb{R}^{q\times q}$, $S \in \mathbb{R}^{q\times q}$, the discrete-time stochastic neural network (1) is called $(Z, S, G)$-dissipative (in the mean square sense) if, for some real function $\eta(\cdot)$ with $\eta(0) = 0$,

$$\mathbb{E}\left\{\sum_{s=0}^{k_p} \Delta W(s)\right\} = \mathbb{E}\left\{\sum_{s=0}^{k_p} \begin{bmatrix} y(s)\\ u(s)\end{bmatrix}^T \begin{bmatrix} Z & S\\ * & G\end{bmatrix} \begin{bmatrix} y(s)\\ u(s)\end{bmatrix}\right\} \ge -\eta(x_0) \tag{5}$$

holds for all $k_p \ge 0$, all solutions of (1) under the condition $x(k) = 0$ for $k \le 0$, and all nonzero $u(k)$ for $k \ge 0$ with $u(k) = 0$ for $k < 0$. Furthermore, if for some scalar $\alpha > 0$

$$\mathbb{E}\left\{\sum_{s=0}^{k_p} \Delta W(s)\right\} = \mathbb{E}\left\{\sum_{s=0}^{k_p} \begin{bmatrix} y(s)\\ u(s)\end{bmatrix}^T \begin{bmatrix} Z & S\\ * & G - \alpha I\end{bmatrix} \begin{bmatrix} y(s)\\ u(s)\end{bmatrix}\right\} \ge -\eta(x_0), \tag{6}$$

then the discrete-time stochastic neural network (1) is called strictly $(Z, S, G)$-$\alpha$-dissipative in the mean square sense.

Proceeding further, we consider that the neural network (1) satisfies the following definition.

Definition 2.2. The unforced ($u(k) = 0$) neural network (1) is said to be globally asymptotically stable in the mean square if the following equality holds for each solution $x(k)$ of (1):

$$\lim_{k \to +\infty} \mathbb{E}\left[\|x(k)\|^2\right] = 0. \tag{7}$$
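Definition 2.2 can be probed by Monte Carlo simulation: average $\|x(k)\|^2$ over many noise realizations and check that it decays. The scalar toy system below is not from the paper — its coefficients are invented so that $d^2 + \sigma^2 < 1$, which makes it mean-square stable:

```python
import numpy as np

# toy system: x(k+1) = d*x(k) + sigma*x(k)*w(k), w(k) white noise as in (2)
# its second moment obeys E[x(k+1)^2] = (d^2 + sigma^2) E[x(k)^2]
d, sigma = 0.5, 0.4                  # d^2 + sigma^2 = 0.41 < 1

rng = np.random.default_rng(2)
n_paths, horizon = 20_000, 40
x = np.ones(n_paths)                 # x(0) = 1 on every path
ms = [float(np.mean(x ** 2))]        # sample estimate of E[||x(k)||^2]
for _ in range(horizon):
    w = rng.standard_normal(n_paths)
    x = d * x + sigma * x * w
    ms.append(float(np.mean(x ** 2)))

print(ms[0], ms[-1] < 1e-3)          # second moment decays toward zero
```

Such a simulation only suggests mean-square stability for one parameter choice; the LMI conditions of Section 3 certify it for the whole class of admissible delays and noises.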

Remark 2.1. It is interesting to note that for classes of time-delay systems without Markovian jump parameters, the dissipativity inequality (5) reduces to

$$\sum_{s=0}^{k_p} \Delta W(s) = \sum_{s=0}^{k_p} \begin{bmatrix} y(s)\\ u(s)\end{bmatrix}^T \begin{bmatrix} Z & S\\ * & G\end{bmatrix} \begin{bmatrix} y(s)\\ u(s)\end{bmatrix} \ge -\eta(x_0), \tag{8}$$

which has been largely used in recent works [42]. It should be noted from (1) that $y(k)$ is a random signal, and therefore it is natural to apply the mathematical expectation operator to the left-hand side of (8). In turn, this yields (5), and hence the dissipativity concept in Definition 2.1 is quite natural.

Remark 2.2. More importantly, in view of [43], the concept of strict $(Z, S, G)$-dissipativity provides an effective performance measure for systems whose stability behavior has been examined a priori via Lyapunov theory. Therefore, for the stable system under consideration, dissipativity gives a guaranteed assessment of performance, of which standard measures like disturbance attenuation, strict positive realness, and passivity are special cases. To see this, we recall the following important cases:
1. Setting $Z = -I$, $S = 0$, $G - \alpha I = \gamma^2 I$ in (6) yields asymptotic stability with disturbance attenuation $\gamma$.
2. Setting $Z = 0$, $S = I$, $G = 0$ in (6) yields asymptotic stability with strict positive realness.
3. Setting $Z = 0$, $S = I$, $G - \alpha I = \beta I$ in (6) yields asymptotic stability with passivity.

On the basis of Definition 2.1, the objective of this paper is to derive delay-dependent dissipativity conditions for the discrete-time stochastic neural network (1) based on the LMI approach such that the following two requirements are met concurrently:
1. the unforced ($u(k) = 0$) neural network (1) is globally asymptotically stable in the mean square;
2. the neural network (1) is strictly $(Z, S, G)$-$\alpha$-dissipative.
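All three special cases in Remark 2.2 instantiate one quadratic supply rate $W(s) = y^T Z y + 2u^T S y + u^T(G - \alpha I)u$. The helper below makes the correspondence concrete (the vectors $y$, $u$ and the levels $\gamma$, $\beta$ are arbitrary illustration values):

```python
import numpy as np

def supply_rate(y, u, Z, S, G_minus_aI):
    """Quadratic supply rate W = y'Zy + 2u'Sy + u'(G - aI)u from (6)."""
    return float(y @ Z @ y + 2 * u @ S @ y + u @ G_minus_aI @ u)

q = 2
y = np.array([1.0, -2.0])
u = np.array([0.5, 1.0])
I = np.eye(q)

# 1) disturbance attenuation: Z = -I, S = 0, G - aI = gamma^2 * I
gamma = 2.0
w_hinf = supply_rate(y, u, -I, np.zeros((q, q)), gamma**2 * I)
assert np.isclose(w_hinf, gamma**2 * (u @ u) - y @ y)

# 2) strict positive realness: Z = 0, S = I, G = 0
w_spr = supply_rate(y, u, np.zeros((q, q)), I, np.zeros((q, q)))
assert np.isclose(w_spr, 2 * (u @ y))

# 3) passivity: Z = 0, S = I, G - aI = beta * I
beta = 0.1
w_pass = supply_rate(y, u, np.zeros((q, q)), I, beta * I)
assert np.isclose(w_pass, 2 * (u @ y) + beta * (u @ u))
print("all special cases match")
```

Dissipativity then amounts to the expected running sum of this one function staying bounded below, regardless of which of the three weight choices is plugged in.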


To establish the foregoing results, the following lemmas are needed.

Lemma 2.1 (Discretized Jensen inequality [48]). For any matrix $M > 0$, integers $c_1$ and $c_2$ satisfying $c_2 > c_1$, and a vector function $x : \mathbb{N}[c_1, c_2] \to \mathbb{R}^r$ such that the sums concerned are well defined,

$$(c_2 - c_1 + 1) \sum_{a=c_1}^{c_2} x(a)^T M x(a) \ge \left(\sum_{a=c_1}^{c_2} x(a)\right)^T M \left(\sum_{a=c_1}^{c_2} x(a)\right). \tag{9}$$

Lemma 2.2 [49]. For any matrix $\begin{bmatrix} M & S\\ * & M\end{bmatrix} > 0$, integers $\underline d_1$, $\bar d_1$, $d(k)$ satisfying $\underline d_1 \le d(k) \le \bar d_1$, and a vector function $x(k + \cdot) : \mathbb{N}[-\bar d_1, -\underline d_1] \to \mathbb{R}^r$ such that the sums concerned are well defined,

$$-(\bar d_1 - \underline d_1) \sum_{a=k-\bar d_1}^{k-\underline d_1 - 1} \zeta(a)^T M \zeta(a) \le \varpi(k)^T \Psi\, \varpi(k), \tag{10}$$

where $\zeta(l) = x(l+1) - x(l)$,

$$\varpi(k) = \left[x^T(k - \underline d_1),\; x^T(k - d(k)),\; x^T(k - \bar d_1)\right]^T, \qquad \Psi = \begin{bmatrix} -M & M - S & S\\ * & -2M + S + S^T & -S + M\\ * & * & -M\end{bmatrix}.$$

3. Dissipativity analysis

This section deals mainly with the dissipativity analysis of neural network (1). In what follows, Theorem 3.1 carries out the dissipativity analysis under the assumption that all the elements of the transition probability matrices describing the delays $d(k)$ and $\tau(k)$ are completely known. Later, in Theorem 3.3, this study is extended to the case of partially known transition probability matrices. The conditions involved are

$$\bar{\mathcal{P}}(i, m) + \bar d_1^2 Z_1 + d_{12}^2 Z_2 \le \rho I, \tag{11a}$$

$$\Xi^{(i,m)} = \begin{bmatrix}
\Psi_{11(i,m)} & Z_1 & \rho G_2 & 0 & \Psi_{15(i,m)} & \Psi_{16(i,m)} & \Psi_{17(i,m)} & \Psi_{18(i,m)}\\
* & \Psi_{22(i,m)} & \Psi_{23(i,m)} & S & 0 & 0 & 0 & 0\\
* & * & \Psi_{33(i,m)} & \Psi_{34(i,m)} & 0 & F_2 H & 0 & 0\\
* & * & * & \Psi_{44(i,m)} & 0 & 0 & 0 & 0\\
* & * & * & * & \Psi_{55(i,m)} & \Psi_{56(i,m)} & \Psi_{57(i,m)} & \Psi_{58(i,m)}\\
* & * & * & * & * & \Psi_{66(i,m)} & \Psi_{67(i,m)} & \Psi_{68(i,m)}\\
* & * & * & * & * & * & \Psi_{77(i,m)} & \Psi_{78(i,m)}\\
* & * & * & * & * & * & * & \Psi_{88(i,m)}
\end{bmatrix} < 0, \tag{11b}$$

$$\begin{bmatrix} Z_2 & S\\ * & Z_2\end{bmatrix} > 0. \tag{11c}$$

3.1. Completely known transition probability matrices

For simplicity, throughout the rest of this paper we denote $d(k) = i \in S_1$, $\tau(k) = m \in S_2$, and

$$F_1 = \mathrm{diag}\{\delta_1\rho_1, \delta_2\rho_2, \ldots, \delta_r\rho_r\}, \qquad F_2 = \mathrm{diag}\left\{\frac{\delta_1 + \rho_1}{2}, \frac{\delta_2 + \rho_2}{2}, \ldots, \frac{\delta_r + \rho_r}{2}\right\},$$
$$d_{12} = \bar d_1 - \underline d_1, \qquad \vartheta = \frac{\bar d_2(\bar d_2 + \underline d_2)(\bar d_2 - \underline d_2 + 1)}{2}, \qquad \underline d_1 = \min_k d(k), \qquad \underline d_2 = \min_k \tau(k),$$
$$\underline\pi = \min\{\pi_{ii},\, i \in S_1\}, \qquad \underline\lambda = \min\{\lambda_{mm},\, m \in S_2\}, \qquad \mu = 1 + (1 - \underline\pi)(\bar d_1 - \underline d_1).$$


Theorem 3.1. Assume that all the elements of the transition probability matrices $\lambda$ and $\pi$ describing the distributions of the discrete delay $d(k)$ and the finite-distributed delay $\tau(k)$ are completely known in advance. Under Assumptions 2.1 and 2.2, neural network (1) is globally asymptotically stable in the mean square and strictly $(Z, S, G)$-$\alpha$-dissipative if there exist matrices $P(i, m) = P^T(i, m) > 0$, $Q > 0$, $Z_1 > 0$, $Z_2 > 0$, $R > 0$, $S$, diagonal matrices $Y > 0$, $H > 0$, and scalars $\rho > 0$ and $\alpha > 0$ such that inequalities (11a)–(11c) are satisfied, where

$$\begin{aligned}
\Psi_{11(i,m)} &= D\bar{\mathcal{P}}(i,m)D - P(i,m) + Q + d_{12}Q + R + \bar d_1^2(D-I)Z_1(D-I) - Z_1 + d_{12}^2(D-I)Z_2(D-I) + \rho G_1 - F_1 Y,\\
\Psi_{15(i,m)} &= D\bar{\mathcal{P}}(i,m)A + \bar d_1^2(D-I)Z_1 A + d_{12}^2(D-I)Z_2 A + F_2 Y,\\
\Psi_{16(i,m)} &= D\bar{\mathcal{P}}(i,m)B + \bar d_1^2(D-I)Z_1 B + d_{12}^2(D-I)Z_2 B,\\
\Psi_{17(i,m)} &= D\bar{\mathcal{P}}(i,m)C + \bar d_1^2(D-I)Z_1 C + d_{12}^2(D-I)Z_2 C,\\
\Psi_{18(i,m)} &= D\bar{\mathcal{P}}(i,m) + \bar d_1^2(D-I)Z_1 + d_{12}^2(D-I)Z_2,\\
\Psi_{22(i,m)} &= -R - Z_1 - Z_2, \qquad \Psi_{23(i,m)} = Z_2 - S,\\
\Psi_{33(i,m)} &= -Q - 2Z_2 + S + S^T + \rho G_3 - F_1 H,\\
\Psi_{34(i,m)} &= -S + Z_2, \qquad \Psi_{44(i,m)} = -Z_2,\\
\Psi_{55(i,m)} &= A^T\bar{\mathcal{P}}(i,m)A + \bar d_1^2 A^T Z_1 A + d_{12}^2 A^T Z_2 A + \vartheta R - Y - Z,\\
\Psi_{56(i,m)} &= A^T\bar{\mathcal{P}}(i,m)B + \bar d_1^2 A^T Z_1 B + d_{12}^2 A^T Z_2 B,\\
\Psi_{57(i,m)} &= A^T\bar{\mathcal{P}}(i,m)C + \bar d_1^2 A^T Z_1 C + d_{12}^2 A^T Z_2 C,\\
\Psi_{58(i,m)} &= -S + A^T\bar{\mathcal{P}}(i,m) + \bar d_1^2 A^T Z_1 + d_{12}^2 A^T Z_2,\\
\Psi_{66(i,m)} &= B^T\bar{\mathcal{P}}(i,m)B + \bar d_1^2 B^T Z_1 B + d_{12}^2 B^T Z_2 B - H,\\
\Psi_{67(i,m)} &= B^T\bar{\mathcal{P}}(i,m)C + \bar d_1^2 B^T Z_1 C + d_{12}^2 B^T Z_2 C,\\
\Psi_{68(i,m)} &= B^T\bar{\mathcal{P}}(i,m) + \bar d_1^2 B^T Z_1 + d_{12}^2 B^T Z_2,\\
\Psi_{77(i,m)} &= C^T\bar{\mathcal{P}}(i,m)C + \bar d_1^2 C^T Z_1 C + d_{12}^2 C^T Z_2 C - R,\\
\Psi_{78(i,m)} &= C^T\bar{\mathcal{P}}(i,m) + \bar d_1^2 C^T Z_1 + d_{12}^2 C^T Z_2,\\
\Psi_{88(i,m)} &= \bar{\mathcal{P}}(i,m) + \bar d_1^2 Z_1 + d_{12}^2 Z_2 - G + \alpha I,
\end{aligned}$$

and $\bar{\mathcal{P}}(i, m) := \sum_{n=0}^{\bar d_2} \sum_{j=0}^{\bar d_1} \lambda_{mn}\pi_{ij}\, P(j, n)$.

Proof. First, stability of the unforced neural network (1) is established. To this end, define $\eta(k) = x(k+1) - x(k)$ and consider the Lyapunov functional

$$V(k, x(k)) = \sum_{s=1}^{7} V_s(k, x(k)), \tag{12}$$

where

$$\begin{aligned}
V_1(k, x(k)) &= x(k)^T P(i, m)\, x(k), & V_2(k, x(k)) &= \sum_{a=k-d(k)}^{k-1} x(a)^T Q x(a),\\
V_3(k, x(k)) &= \sum_{h=-\bar d_1+1}^{-\underline d_1} \sum_{a=k+h}^{k-1} x(a)^T Q x(a), & V_4(k, x(k)) &= \sum_{a=k-\underline d_1}^{k-1} x(a)^T R x(a),\\
V_5(k, x(k)) &= \bar d_1 \sum_{h=-\bar d_1}^{-1} \sum_{a=k+h}^{k-1} \eta(a)^T Z_1 \eta(a), & V_6(k, x(k)) &= d_{12} \sum_{h=-\bar d_1}^{-\underline d_1-1} \sum_{a=k+h}^{k-1} \eta(a)^T Z_2 \eta(a),\\
V_7(k, x(k)) &= \bar d_2 \sum_{h=\underline d_2}^{\bar d_2} \sum_{v=1}^{h} \sum_{a=k-v}^{k-1} g(x(a))^T R\, g(x(a)).
\end{aligned}$$


Let $\mathbb{E}[\Delta V(k)] = \mathbb{E}[V(k+1, x(k+1)) - V(k, x(k))]$. Along the solutions of neural network (1) with $u(k) = 0$, we have

$$\begin{aligned}
\mathbb{E}[\Delta V_1(k)] &= \mathbb{E}\left[x(k+1)^T \sum_{n=0}^{\bar d_2}\sum_{j=0}^{\bar d_1} \lambda_{mn}\pi_{ij}P(j,n)\, x(k+1) - x(k)^T P(i,m)\, x(k)\right]\\
&= \mathbb{E}\Big[x(k)^T D\bar{\mathcal{P}}(i,m)D\, x(k) + 2x(k)^T D\bar{\mathcal{P}}(i,m)A\, g(x(k)) + 2x(k)^T D\bar{\mathcal{P}}(i,m)B\, g(x(k-d(k)))\\
&\quad + 2x(k)^T D\bar{\mathcal{P}}(i,m)C \textstyle\sum_{v=1}^{\tau(k)} g(x(k-v)) + g(x(k))^T A^T\bar{\mathcal{P}}(i,m)A\, g(x(k))\\
&\quad + 2g(x(k))^T A^T\bar{\mathcal{P}}(i,m)B\, g(x(k-d(k))) + 2g(x(k))^T A^T\bar{\mathcal{P}}(i,m)C \textstyle\sum_{v=1}^{\tau(k)} g(x(k-v))\\
&\quad + g(x(k-d(k)))^T B^T\bar{\mathcal{P}}(i,m)B\, g(x(k-d(k))) + 2g(x(k-d(k)))^T B^T\bar{\mathcal{P}}(i,m)C \textstyle\sum_{v=1}^{\tau(k)} g(x(k-v))\\
&\quad + \Big(\textstyle\sum_{v=1}^{\tau(k)} g(x(k-v))\Big)^T C^T\bar{\mathcal{P}}(i,m)C \textstyle\sum_{v=1}^{\tau(k)} g(x(k-v))\\
&\quad + \hat\sigma(k)^T \bar{\mathcal{P}}(i,m)\,\hat\sigma(k) - x(k)^T P(i,m)\, x(k)\Big],
\end{aligned} \tag{13}$$

where $\bar{\mathcal{P}}(i, m) := \sum_{n=0}^{\bar d_2}\sum_{j=0}^{\bar d_1} \lambda_{mn}\pi_{ij}\, P(j, n)$.

$$\begin{aligned}
\mathbb{E}[\Delta V_2(k)] &= \mathbb{E}[V_2(x(k+1), k+1 \mid x(k), k) - V_2(x(k), k)]\\
&= \sum_{a=k+1-d(k+1)}^{k} x^T(a)Qx(a) - \sum_{a=k-d(k)}^{k-1} x^T(a)Qx(a)\\
&= x^T(k)Qx(k) - x^T(k-d(k))Qx(k-d(k)) + \sum_{a=k+1-d(k+1)}^{k-d(k)} x^T(a)Qx(a)\\
&\le x^T(k)Qx(k) - x^T(k-d(k))Qx(k-d(k)) + \sum_{a=k-\bar d_1+1}^{k-\underline d_1} x^T(a)Qx(a),
\end{aligned} \tag{14}$$

$$\begin{aligned}
\mathbb{E}[\Delta V_3(k)] &= \sum_{h=-\bar d_1+1}^{-\underline d_1}\left(\sum_{a=k+1+h}^{k} x^T(a)Qx(a) - \sum_{a=k+h}^{k-1} x^T(a)Qx(a)\right)\\
&= (\bar d_1 - \underline d_1)\, x^T(k)Qx(k) - \sum_{a=k-\bar d_1+1}^{k-\underline d_1} x^T(a)Qx(a),
\end{aligned} \tag{15}$$

$$\mathbb{E}[\Delta V_4(k)] = \sum_{a=k+1-\underline d_1}^{k} x^T(a)Rx(a) - \sum_{a=k-\underline d_1}^{k-1} x^T(a)Rx(a) = x^T(k)Rx(k) - x^T(k-\underline d_1)Rx(k-\underline d_1), \tag{16}$$

$$\mathbb{E}[\Delta V_5(k)] = \mathbb{E}\left[\bar d_1^2\, \eta(k)^T Z_1 \eta(k) - \bar d_1 \sum_{a=k-\bar d_1}^{k-1} \eta(a)^T Z_1 \eta(a)\right], \tag{17}$$

$$\mathbb{E}[\Delta V_6(k)] = \mathbb{E}\left[d_{12}^2\, \eta(k)^T Z_2 \eta(k) - d_{12} \sum_{a=k-\bar d_1}^{k-\underline d_1-1} \eta(a)^T Z_2 \eta(a)\right], \tag{18}$$


$$\begin{aligned}
\mathbb{E}[\Delta V_7(k)] &= \mathbb{E}\left[\vartheta\, g(x(k))^T R\, g(x(k))\right] - \mathbb{E}\left[\bar d_2 \sum_{h=\underline d_2}^{\bar d_2}\sum_{v=1}^{h} g(x(k-v))^T R\, g(x(k-v))\right]\\
&\le \mathbb{E}\left[\vartheta\, g(x(k))^T R\, g(x(k))\right] - \mathbb{E}\left[\bar d_2 \sum_{v=1}^{\tau(k)} g(x(k-v))^T R\, g(x(k-v))\right].
\end{aligned} \tag{19}$$

By applying Lemma 2.1, we have

$$-\bar d_1 \sum_{a=k-\bar d_1}^{k-1} \eta(a)^T Z_1 \eta(a) \le -\left(\sum_{a=k-\bar d_1}^{k-1}\eta(a)\right)^T Z_1 \left(\sum_{a=k-\bar d_1}^{k-1}\eta(a)\right) = -x(k)^T Z_1 x(k) + 2x(k)^T Z_1 x(k-\bar d_1) - x(k-\bar d_1)^T Z_1 x(k-\bar d_1). \tag{20}$$

Recall that $\eta(k) = x(k+1) - x(k)$. Therefore, for $u(k) = 0$,

$$\eta(k) = (D - I)x(k) + Ag(x(k)) + Bg(x(k-d(k))) + C\sum_{v=1}^{\tau(k)} g(x(k-v)) + \hat\sigma(k, x(k), x(k-d(k)))\,\omega(k). \tag{21}$$

Based on (17), (20), and (21), we obtain

$$\begin{aligned}
\mathbb{E}[\Delta V_5(k)] &\le \mathbb{E}\Big[\bar d_1^2\, x(k)^T(D-I)Z_1(D-I)x(k) + 2\bar d_1^2\, x(k)^T(D-I)Z_1 A\, g(x(k))\\
&\quad + 2\bar d_1^2\, x(k)^T(D-I)Z_1 B\, g(x(k-d(k))) + 2\bar d_1^2\, x(k)^T(D-I)Z_1 C \textstyle\sum_{v=1}^{\tau(k)} g(x(k-v))\\
&\quad + \bar d_1^2\, g(x(k))^T A^T Z_1 A\, g(x(k)) + 2\bar d_1^2\, g(x(k))^T A^T Z_1 B\, g(x(k-d(k)))\\
&\quad + 2\bar d_1^2\, g(x(k))^T A^T Z_1 C \textstyle\sum_{v=1}^{\tau(k)} g(x(k-v)) + \bar d_1^2\, g(x(k-d(k)))^T B^T Z_1 B\, g(x(k-d(k)))\\
&\quad + 2\bar d_1^2\, g(x(k-d(k)))^T B^T Z_1 C \textstyle\sum_{v=1}^{\tau(k)} g(x(k-v))\\
&\quad + \bar d_1^2 \Big(\textstyle\sum_{v=1}^{\tau(k)} g(x(k-v))\Big)^T C^T Z_1 C \textstyle\sum_{v=1}^{\tau(k)} g(x(k-v))\\
&\quad + \bar d_1^2\, \hat\sigma(k)^T Z_1 \hat\sigma(k) - x(k)^T Z_1 x(k) + 2x(k)^T Z_1 x(k-\bar d_1) - x(k-\bar d_1)^T Z_1 x(k-\bar d_1)\Big].
\end{aligned} \tag{22}$$

Also according to Lemma 2.2, we have

h 2 T E½DV 6 ðkÞ 6 E d12 xðkÞ ðD  IÞZ 2 ðD  IÞxðkÞ 2

T

2

T

þ 2d12 xðkÞ ðD  IÞZ 2 AgðxðkÞÞ þ 2d12 xðkÞ ðD  IÞZ 2 Bgðxðk  dðkÞÞÞ XsðkÞ 2 T 2 T þ 2d12 xðkÞ ðD  IÞZ 2 C v ¼1 gðxðk  v ÞÞ þ d12 gðxðkÞÞ AT Z 2 AgðxðkÞÞ XsðkÞ 2 T 2 T þ 2d12 gðxðkÞÞ AT Z 2 Bgðxðk  dðkÞÞÞ þ 2d12 gðxðkÞÞ AT Z 2 C v ¼1 gðxðk  v ÞÞ XsðkÞ 2 T 2 T þ d12 gðxðk  dðkÞÞÞ BT Z 2 Bgðxðk  dðkÞÞÞ þ 2d12 gðxðk  dðkÞÞÞ BT Z 2 C v ¼1 gðxðk  v ÞÞ XsðkÞ XsðkÞ 2 T 2 ^ ðkÞT Z 2 r ^ ðkÞ  xðk  d1 ÞT Z 2 xðk  d1 Þ þ d12 v ¼1 gðxðk  v ÞÞ C T Z 2 C v ¼1 gðxðk  v ÞÞ þ d12 r T

T

þ 2xðk  d1 Þ ðZ 2  SÞxðk  dðkÞÞ þ 2xðk  d1 Þ Sxðk  d1 Þ T

T

þ xðk  dðkÞÞ ð2Z 2 þ S þ S Þxðk  dðkÞÞ T

þ 2xðk  dðkÞÞ ðS þ Z 2 Þxðk  d1 Þ i T xðk  d1 Þ Z 2 xðk  d1 Þ :

ð23Þ

Using Lemma 2.1 once again, we have

$$-\bar d_2 \sum_{v=1}^{\tau(k)} g(x(k-v))^T R\, g(x(k-v)) \le -\left(\sum_{v=1}^{\tau(k)} g(x(k-v))\right)^T R \left(\sum_{v=1}^{\tau(k)} g(x(k-v))\right). \tag{24}$$

Thus

$$\mathbb{E}[\Delta V_7(k)] \le \mathbb{E}\left[\vartheta\, g(x(k))^T R\, g(x(k))\right] - \mathbb{E}\left[\left(\sum_{v=1}^{\tau(k)} g(x(k-v))\right)^T R \left(\sum_{v=1}^{\tau(k)} g(x(k-v))\right)\right]. \tag{25}$$

From Assumption 2.2 and (11a), we have

$$\hat\sigma(k)^T\left(\bar{\mathcal{P}}(i,m) + \bar d_1^2 Z_1 + d_{12}^2 Z_2\right)\hat\sigma(k) \le \rho\, x(k)^T G_1 x(k) + 2\rho\, x(k)^T G_2 x(k-d(k)) + \rho\, x(k-d(k))^T G_3 x(k-d(k)). \tag{26}$$

From Assumption 2.1 and [22], we have that for any $q = 1, 2, \ldots, r$,

$$\left(g_q(x_q(k)) - \rho_q x_q(k)\right)\left(g_q(x_q(k)) - \delta_q x_q(k)\right) \le 0, \tag{27}$$


which is equivalent to

$$\begin{bmatrix} x(k)\\ g(x(k))\end{bmatrix}^T \begin{bmatrix} \delta_q\rho_q\, e_q e_q^T & -\frac{\delta_q+\rho_q}{2}\, e_q e_q^T\\ -\frac{\delta_q+\rho_q}{2}\, e_q e_q^T & e_q e_q^T\end{bmatrix} \begin{bmatrix} x(k)\\ g(x(k))\end{bmatrix} \le 0, \tag{28}$$

where $e_q$ denotes the unit column vector having a 1 in its $q$-th entry and zeros elsewhere. Thus, for any appropriately dimensioned diagonal matrix $Y > 0$, the following inequality holds:

$$\begin{bmatrix} x(k)\\ g(x(k))\end{bmatrix}^T \begin{bmatrix} F_1 Y & -F_2 Y\\ * & Y\end{bmatrix} \begin{bmatrix} x(k)\\ g(x(k))\end{bmatrix} \le 0, \tag{29}$$

that is,

$$-x(k)^T F_1 Y x(k) + 2x(k)^T F_2 Y g(x(k)) - g(x(k))^T Y g(x(k)) \ge 0. \tag{30}$$

Similarly, for any appropriately dimensioned diagonal matrix $H > 0$, the following inequality also holds:

$$-x(k-d(k))^T F_1 H\, x(k-d(k)) + 2x(k-d(k))^T F_2 H\, g(x(k-d(k))) - g(x(k-d(k)))^T H\, g(x(k-d(k))) \ge 0. \tag{31}$$

Hence, adding the left-hand sides of (30) and (31), we can obtain from (13)–(16) and (22)–(26) that

$$\mathbb{E}[\Delta V(k)] = \mathbb{E}\left[\sum_{s=1}^{7} \Delta V_s(k, x(k))\right] \le \mathbb{E}\left[\Theta(k)^T \hat\Xi\, \Theta(k)\right], \tag{32}$$

where

$$\Theta_1(k) = \left[x(k)^T,\; x(k-\underline d_1)^T,\; x(k-d(k))^T,\; x(k-\bar d_1)^T\right]^T, \qquad \Theta_2(k) = \left[g(x(k))^T,\; g(x(k-d(k)))^T,\; \Big(\textstyle\sum_{v=1}^{\tau(k)} g(x(k-v))\Big)^T\right]^T,$$
$$\Theta(k) = \left[\Theta_1(k)^T,\; \Theta_2(k)^T\right]^T,$$

and

$$\hat\Xi = \begin{bmatrix}
\Psi_{11(i,m)} & Z_1 & \rho G_2 & 0 & \Psi_{15(i,m)} & \Psi_{16(i,m)} & \Psi_{17(i,m)}\\
* & \Psi_{22(i,m)} & \Psi_{23(i,m)} & S & 0 & 0 & 0\\
* & * & \Psi_{33(i,m)} & \Psi_{34(i,m)} & 0 & F_2 H & 0\\
* & * & * & \Psi_{44(i,m)} & 0 & 0 & 0\\
* & * & * & * & \Psi_{55(i,m)} & \Psi_{56(i,m)} & \Psi_{57(i,m)}\\
* & * & * & * & * & \Psi_{66(i,m)} & \Psi_{67(i,m)}\\
* & * & * & * & * & * & \Psi_{77(i,m)}
\end{bmatrix}.$$

Therefore, we conclude from (11b) that there exists a scalar $\bar q > 0$ such that

$$\mathbb{E}[\Delta V(k)] < -\bar q\, \mathbb{E}\left[\|x(k)\|^2\right], \tag{33}$$

which implies that for any $k \ge 0$,

$$\mathbb{E}[V(k+1, x(k+1))] - \mathbb{E}[V(0, x(0))] = \sum_{l=0}^{k} \mathbb{E}[\Delta V(l)] \le -\bar q \sum_{l=0}^{k} \mathbb{E}\left[\|x(l)\|^2\right]. \tag{34}$$

In turn, the following inequality holds:

$$\sum_{l=0}^{k} \mathbb{E}\left[\|x(l)\|^2\right] \le \frac{1}{\bar q}\, \mathbb{E}[V(0, x(0))] < \infty, \tag{35}$$

which in turn implies $\lim_{k\to+\infty} \mathbb{E}[\|x(k)\|^2] = 0$. Therefore, according to Definition 2.2, the neural network (1) is globally asymptotically stable in the mean square.

Next, we study the dissipativity of neural network (1). To this end, considering the Lyapunov functional (12), it follows that


$$\sum_{k=0}^{k_p} \mathbb{E}[\Delta V(k)] - \mathbb{E}\sum_{k=0}^{k_p}\left[y^T(k)Zy(k) + 2u^T(k)Sy(k) + u^T(k)(G - \alpha I)u(k)\right] \le \sum_{k=0}^{k_p} \mathbb{E}\left[\bar\Theta(k)^T \Xi\, \bar\Theta(k)\right], \tag{36}$$

where

$$\bar\Theta(k) = \begin{bmatrix} \Theta(k)\\ u(k)\end{bmatrix}.$$

Writing $J_{s,u} := \mathbb{E}\left\{\sum_{k=0}^{s} \Delta W(k)\right\}$ for the expected supply, we can get from (11b) and (36) that

$$\sum_{k=0}^{s} \mathbb{E}[\Delta V(k)] \le J_{s,u}, \tag{37}$$

which implies

$$\mathbb{E}[V(x(s+1))] - \mathbb{E}[V(x(0))] \le J_{s,u}. \tag{38}$$

Thus, (6) holds under the zero initial condition. Therefore, according to Definition 2.1, neural network (1) is strictly $(Z, S, G)$-$\alpha$-dissipative. This completes the proof. □

Remark 3.1. From the above analysis, it is clear that the condition established to ensure that neural network (1) is globally asymptotically stable in the mean square and strictly $(Z, S, G)$-$\alpha$-dissipative is delay-dependent: it depends not only on the discrete delay $d(k)$ but also on the finite-distributed delay $\tau(k)$. Moreover, the expression of the LMIs obtained in Theorem 3.1 is quite simple, as each mode pair contributes a single matrix inequality collecting all the decision variables. It can also be noticed that the LMIs in (11) are affine not only in the matrix variables but also in the scalar $\alpha$. This implies that by setting $\delta = -\alpha$ and minimizing $\delta$ subject to (11), we can obtain the optimal dissipativity performance $\alpha$ (as $\alpha = -\delta$). It is also worth mentioning that for different $\underline d_1$, $\bar d_1$, $\underline d_2$, and $\bar d_2$, the optimal dissipativity performance $\alpha$ achieved will differ; this is illustrated through a numerical example in Section 4.

We now give conditions on the passivity of neural network (1) by choosing $Z = 0$, $S = I$, and $G - \alpha I = \beta I$.

Corollary 3.1. Under Assumptions 2.1 and 2.2, neural network (1) is passive if there exist matrices $P(i, m) = P^T(i, m) > 0$, $Q > 0$, $Z_1 > 0$, $Z_2 > 0$, $R > 0$, $S$, diagonal matrices $Y > 0$, $H > 0$, and scalars $\rho > 0$ and $\beta > 0$ such that (11a), (11c), and (39) hold:
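The optimization suggested in Remark 3.1 — pushing the dissipativity margin $\alpha$ as high as the LMIs allow — is typically run as a bisection around an LMI feasibility oracle, since feasibility is monotone in $\alpha$. The sketch below shows only the bisection shell; `lmi_feasible` stands in for a real SDP solve of (11a)–(11c) and is replaced here by a made-up threshold so the logic is runnable:

```python
def maximize_alpha(lmi_feasible, lo=0.0, hi=10.0, tol=1e-6):
    """Bisection for the largest alpha with lmi_feasible(alpha) True.
    Assumes monotonicity: feasible below some alpha*, infeasible above."""
    if not lmi_feasible(lo):
        return None                   # not even marginally dissipative
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lmi_feasible(mid):
            lo = mid                  # feasible: push alpha up
        else:
            hi = mid                  # infeasible: back off
    return lo

# stand-in oracle: pretend (11a)-(11c) are feasible iff alpha <= 1.37
alpha_star = maximize_alpha(lambda a: a <= 1.37)
print(round(alpha_star, 4))           # -> 1.37
```

In practice each call to the oracle is one semidefinite program over $P(i, m)$, $Q$, $Z_1$, $Z_2$, $R$, $S$, $Y$, $H$, $\rho$ with $\alpha$ fixed, so the bisection multiplies the solve cost by roughly $\log_2((hi - lo)/tol)$.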

$$\Xi^{(i,m)} = \begin{bmatrix}
\Psi_{11(i,m)} & Z_1 & \rho G_2 & 0 & \Psi_{15(i,m)} & \Psi_{16(i,m)} & \Psi_{17(i,m)} & \Psi_{18(i,m)}\\
* & \Psi_{22(i,m)} & \Psi_{23(i,m)} & S & 0 & 0 & 0 & 0\\
* & * & \Psi_{33(i,m)} & \Psi_{34(i,m)} & 0 & F_2 H & 0 & 0\\
* & * & * & \Psi_{44(i,m)} & 0 & 0 & 0 & 0\\
* & * & * & * & \hat\Psi_{55(i,m)} & \Psi_{56(i,m)} & \Psi_{57(i,m)} & \hat\Psi_{58(i,m)}\\
* & * & * & * & * & \Psi_{66(i,m)} & \Psi_{67(i,m)} & \Psi_{68(i,m)}\\
* & * & * & * & * & * & \Psi_{77(i,m)} & \Psi_{78(i,m)}\\
* & * & * & * & * & * & * & \hat\Psi_{88(i,m)}
\end{bmatrix} < 0, \tag{39}$$

where $\Psi_{11(i,m)}$, $\Psi_{15(i,m)}$, $\Psi_{16(i,m)}$, $\Psi_{17(i,m)}$, $\Psi_{18(i,m)}$, $\Psi_{22(i,m)}$, $\Psi_{23(i,m)}$, $\Psi_{33(i,m)}$, $\Psi_{34(i,m)}$, $\Psi_{44(i,m)}$, $\Psi_{56(i,m)}$, $\Psi_{57(i,m)}$, $\Psi_{66(i,m)}$, $\Psi_{67(i,m)}$, $\Psi_{68(i,m)}$, $\Psi_{77(i,m)}$, and $\Psi_{78(i,m)}$ follow the same definitions as those in Theorem 3.1, and

$$\begin{aligned}
\hat\Psi_{55(i,m)} &= A^T\bar{\mathcal{P}}(i,m)A + \bar d_1^2 A^T Z_1 A + d_{12}^2 A^T Z_2 A + \vartheta R - Y,\\
\hat\Psi_{58(i,m)} &= -I + A^T\bar{\mathcal{P}}(i,m) + \bar d_1^2 A^T Z_1 + d_{12}^2 A^T Z_2,\\
\hat\Psi_{88(i,m)} &= \bar{\mathcal{P}}(i,m) + \bar d_1^2 Z_1 + d_{12}^2 Z_2 - \beta I.
\end{aligned}$$

Remark 3.2. In Corollary 3.1, a delay-dependent sufficient condition is given to guarantee the passivity of neural network (1). It is clear that by optimizing $\beta$ subject to (11a), (11c), and (39), the optimal passivity performance $\beta$ can be obtained.


Remark 3.3. In light of Remark 2.2, an $H_\infty$ performance condition for neural network (1) can be readily obtained from Theorem 3.1 by selecting $Z = -I$, $S = 0$, and $G - \alpha I = \gamma^2 I$.

In the following, we consider the neural network with zero finite-distributed delay. Neural network (1) then reduces to

$$\begin{cases} x(k+1) = Dx(k) + Ag(x(k)) + Bg(x(k-d(k))) + u(k) + \hat\sigma(k, x(k), x(k-d(k)))\,\omega(k),\\ y(k) = g(x(k)),\\ x(k) = \phi(k), \quad k \in \mathbb{N}[-\bar d_1, 0], \end{cases} \tag{40}$$

and the corresponding Lyapunov functional is given by

$$\hat V(k, x(k)) = \sum_{s=1}^{6} V_s(k, x(k)), \tag{41}$$

where the $V_s(k, x(k))$ follow the same definitions as those in (12). Then, following the same method as used in Theorem 3.1, we obtain the following result on the dissipativity of neural network (40).

2

Nði;mÞ

6 6 6 6 6 6 ¼6 6 6 6 6 4

W11ði;mÞ

Z1

qG2

0

 

W22ði;mÞ

Z2  S



W33ði;mÞ

S S þ Z 2

0 0

0 F2H







Q 3  Z 2

0









0  W55ði;mÞ











W56ði;mÞ W66ði;mÞ













W15ði;mÞ W16ði;mÞ W18ði;mÞ

3

7 7 7 7 7 0 7 7 < 0: ^ 58ði;mÞ 7 7 W 7 W68ði;mÞ 7 5 ^ 88ði;mÞ W 0 0

ð43Þ

Theorem 3.2. Under Assumptions 2.1 and 2.2, neural network (40) is globally asymptotically stable in the mean square and strictly ðZ; S; GÞ  a-dissipative, if there exist Pði; mÞ ¼ PT ði; mÞ > 0; Q > 0; Z 1 > 0; Z 2 > 0; S, diagonal matrices Y > 0; H > 0, and scalars q > 0 and a > 0 such that (11a), (11c), and (42) hold

$$\Xi_{(i,m)} = \begin{bmatrix}
W_{11(i,m)} & Z_1 & \rho G_2 & 0 & W_{15(i,m)} & W_{16(i,m)} & W_{18(i,m)} \\
\ast & W_{22(i,m)} & Z_2 - S & S & 0 & 0 & 0 \\
\ast & \ast & W_{33(i,m)} & -S + Z_2 & 0 & F_2 H & 0 \\
\ast & \ast & \ast & -Q_3 - Z_2 & 0 & 0 & 0 \\
\ast & \ast & \ast & \ast & \tilde W_{55(i,m)} & W_{56(i,m)} & W_{58(i,m)} \\
\ast & \ast & \ast & \ast & \ast & W_{66(i,m)} & W_{68(i,m)} \\
\ast & \ast & \ast & \ast & \ast & \ast & W_{88(i,m)}
\end{bmatrix} < 0, \tag{42}$$

where

$W_{11(i,m)}$, $W_{15(i,m)}$, $W_{16(i,m)}$, $W_{18(i,m)}$, $W_{22(i,m)}$, $W_{33(i,m)}$, $W_{56(i,m)}$, $W_{58(i,m)}$, $W_{66(i,m)}$, $W_{68(i,m)}$, and $W_{88(i,m)}$ follow the same definitions as those in Theorem 3.1, and

$$\tilde W_{55(i,m)} = A^{T} \bar P(i,m) A + d_1^{2} A^{T} Z_1 A + d_{12}^{2} A^{T} Z_2 A - Y - Z.$$

In a similar way, we can also specialize Theorem 3.2 to the passivity analysis of neural network (40) by choosing $Z = 0$, $S = I$, and $G - \alpha I = \beta I$.

Corollary 3.2. Under Assumptions 2.1 and 2.2, neural network (40) is passive if there exist matrices $P(i,m) = P^{T}(i,m) > 0$, $Q > 0$, $Z_1 > 0$, $Z_2 > 0$, $S$, diagonal matrices $Y > 0$ and $H > 0$, and scalars $\rho > 0$ and $\beta > 0$ such that (11a), (11c), and (43) hold, where $W_{11(i,m)}$, $W_{15(i,m)}$, $W_{16(i,m)}$, $W_{18(i,m)}$, $W_{22(i,m)}$, $W_{33(i,m)}$, $W_{56(i,m)}$, $W_{66(i,m)}$, and $W_{68(i,m)}$ follow the same definitions as those in Theorem 3.1, $\hat W_{58(i,m)}$ and $\hat W_{88(i,m)}$ follow the same definitions as those in Corollary 3.1, and

$$\bar W_{55(i,m)} = A^{T} \bar P(i,m) A + d_1^{2} A^{T} Z_1 A + d_{12}^{2} A^{T} Z_2 A - Y.$$

Remark 3.4. Building on inequality (10) of Lemma 2.2, Corollary 3.2 takes full advantage of the information on the time-varying delay $d(k)$. The result derived above therefore has reduced conservatism and involves fewer decision variables than that of [18].


3.2. Dissipativity analysis with partially known transition probability matrices

The main results are summarized by the following theorem.

Theorem 3.3. Consider neural network (1) with partially known transition probability matrices $\Lambda$ and $\Pi$. Under Assumptions 2.1 and 2.2, neural network (1) is globally asymptotically stable in the mean square and strictly $(Z, S, G) - \alpha$-dissipative if there exist matrices $P(i,m) = P^{T}(i,m) > 0$, $Q > 0$, $Z_1 > 0$, $Z_2 > 0$, $R > 0$, $S$, diagonal matrices $Y > 0$ and $H > 0$, and scalars $\rho > 0$ and $\alpha > 0$ such that inequalities (11a), (11c), (46), and (47) hold.

Proof. First, we know that neural network (1) is globally asymptotically stable in the mean square and strictly $(Z, S, G) - \alpha$-dissipative under completely known transition probability matrices if inequalities (11a)–(11c) hold. Let us express (11b) as



$$\Xi_{(i,m)} = \begin{bmatrix} W_{(i,m)1} & W_{(i,m)2} \\ \ast & W_{(i,m)3} \end{bmatrix} < 0, \tag{44}$$

$$W_{(i,m)1} = \begin{bmatrix}
W_{11(i,m)} & Z_1 & \rho G_2 & 0 \\
\ast & W_{22(i,m)} & W_{23(i,m)} & S \\
\ast & \ast & W_{33(i,m)} & W_{34(i,m)} \\
\ast & \ast & \ast & W_{44(i,m)}
\end{bmatrix}, \qquad
W_{(i,m)2} = \begin{bmatrix}
W_{15(i,m)} & W_{16(i,m)} & W_{17(i,m)} & W_{18(i,m)} \\
0 & 0 & 0 & 0 \\
0 & F_2 H & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix},$$

$$W_{(i,m)3} = \begin{bmatrix}
W_{55(i,m)} & W_{56(i,m)} & W_{57(i,m)} & W_{58(i,m)} \\
\ast & W_{66(i,m)} & W_{67(i,m)} & W_{68(i,m)} \\
\ast & \ast & W_{77(i,m)} & W_{78(i,m)} \\
\ast & \ast & \ast & W_{88(i,m)}
\end{bmatrix},$$

where $W_{11(i,m)}, W_{15(i,m)}, \ldots, W_{88(i,m)}$ follow the same definitions as those in Theorem 3.1. We also know that

$$\begin{cases}
\bar P(i,m) := \sum_{n \in S_2} \sum_{j \in S_1} \lambda_{mn}\, \pi_{ij}\, P(j,n), \\
\bar P_{K}^{(i,m)} := \sum_{n \in I_{K}^{m}} \sum_{j \in I_{K}^{i}} \lambda_{mn}\, \pi_{ij}\, P(j,n), \\
\bar P_{UK}^{(i,m)} := \sum_{n \in I_{UK}^{m}} \sum_{j \in I_{UK}^{i}} \lambda_{mn}\, \pi_{ij}\, P(j,n), \\
\bar P(i,m) = \bar P_{K}^{(i,m)} + \bar P_{UK}^{(i,m)}.
\end{cases} \tag{45}$$

Note that, in view of (45), (44) can be written as

$$\Xi_{(i,m)} = \sum_{n \in I_{K}^{m}} \sum_{j \in I_{K}^{i}} \lambda_{mn}\, \pi_{ij} \begin{bmatrix} W_{(i,m)1} & W_{(i,m)2} \\ \ast & W_{(i,m)3} \end{bmatrix}
+ \sum_{n \in I_{UK}^{m}} \sum_{j \in I_{UK}^{i}} \lambda_{mn}\, \pi_{ij} \begin{bmatrix} W_{(i,m)1} & W_{(i,m)2} \\ \ast & W_{(i,m)3} \end{bmatrix},$$

where


$$W_{(i,m)1} = \begin{bmatrix}
W_{11(i,m)} & Z_1 & \rho G_2 & 0 \\
\ast & W_{22(i,m)} & W_{23(i,m)} & S \\
\ast & \ast & W_{33(i,m)} & W_{34(i,m)} \\
\ast & \ast & \ast & W_{44(i,m)}
\end{bmatrix}, \qquad
W_{(i,m)2} = \begin{bmatrix}
W_{15(i,m)} & W_{16(i,m)} & W_{17(i,m)} & W_{18(i,m)} \\
0 & 0 & 0 & 0 \\
0 & F_2 H & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix},$$

$$W_{(i,m)3} = \begin{bmatrix}
W_{55(i,m)} & W_{56(i,m)} & W_{57(i,m)} & W_{58(i,m)} \\
\ast & W_{66(i,m)} & W_{67(i,m)} & W_{68(i,m)} \\
\ast & \ast & W_{77(i,m)} & W_{78(i,m)} \\
\ast & \ast & \ast & W_{88(i,m)}
\end{bmatrix},$$

where

$$\begin{aligned}
W_{11(i,m)} &= D^{T} P(j,n) D - P(i,m) + Q + (d_{12} + 1) Q + R + d_1^{2} (D - I)^{T} Z_1 (D - I) - Z_1 \\
&\quad + d_{12}^{2} (D - I)^{T} Z_2 (D - I) + \rho G_1 - F_1 Y, \\
W_{15(i,m)} &= D^{T} P(j,n) A + d_1^{2} (D - I)^{T} Z_1 A + d_{12}^{2} (D - I)^{T} Z_2 A + F_2 Y, \\
W_{16(i,m)} &= D^{T} P(j,n) B + d_1^{2} (D - I)^{T} Z_1 B + d_{12}^{2} (D - I)^{T} Z_2 B, \\
W_{17(i,m)} &= D^{T} P(j,n) C + d_1^{2} (D - I)^{T} Z_1 C + d_{12}^{2} (D - I)^{T} Z_2 C, \\
W_{18(i,m)} &= D^{T} P(j,n) + d_1^{2} (D - I)^{T} Z_1 + d_{12}^{2} (D - I)^{T} Z_2, \\
W_{55(i,m)} &= A^{T} P(j,n) A + d_1^{2} A^{T} Z_1 A + d_{12}^{2} A^{T} Z_2 A + \vartheta R - Y - Z, \\
W_{56(i,m)} &= A^{T} P(j,n) B + d_1^{2} A^{T} Z_1 B + d_{12}^{2} A^{T} Z_2 B, \\
W_{57(i,m)} &= A^{T} P(j,n) C + d_1^{2} A^{T} Z_1 C + d_{12}^{2} A^{T} Z_2 C, \\
W_{58(i,m)} &= S + A^{T} P(j,n) + d_1^{2} A^{T} Z_1 + d_{12}^{2} A^{T} Z_2, \\
W_{66(i,m)} &= B^{T} P(j,n) B + d_1^{2} B^{T} Z_1 B + d_{12}^{2} B^{T} Z_2 B - H, \\
W_{67(i,m)} &= B^{T} P(j,n) C + d_1^{2} B^{T} Z_1 C + d_{12}^{2} B^{T} Z_2 C, \\
W_{68(i,m)} &= B^{T} P(j,n) + d_1^{2} B^{T} Z_1 + d_{12}^{2} B^{T} Z_2, \\
W_{77(i,m)} &= C^{T} P(j,n) C + d_1^{2} C^{T} Z_1 C + d_{12}^{2} C^{T} Z_2 C - R, \\
W_{78(i,m)} &= C^{T} P(j,n) + d_1^{2} C^{T} Z_1 + d_{12}^{2} C^{T} Z_2, \\
W_{88(i,m)} &= P(j,n) + d_1^{2} Z_1 + d_{12}^{2} Z_2 - G + \alpha I.
\end{aligned}$$

Accordingly,

$$\Xi_{(i,m)} = \begin{bmatrix} W_{(i,m)1}\big|_{\bar P_K} & W_{(i,m)2}\big|_{\bar P_K} \\ \ast & W_{(i,m)3}\big|_{\bar P_K} \end{bmatrix}
+ \sum_{n \in I_{UK}^{m}} \sum_{j \in I_{UK}^{i}} \lambda_{mn}\, \pi_{ij} \begin{bmatrix} W_{(i,m)1} & W_{(i,m)2} \\ \ast & W_{(i,m)3} \end{bmatrix},$$

where $W_{(i,m)s}\big|_{\bar P_K}$ denotes the terms in which the elements $\pi_{ij}$ and $\lambda_{mn}$ are known. Therefore, if one has

$$\begin{bmatrix} W_{(i,m)1}\big|_{\bar P_K} & W_{(i,m)2}\big|_{\bar P_K} \\ \ast & W_{(i,m)3}\big|_{\bar P_K} \end{bmatrix} < 0, \tag{46}$$

$$\begin{bmatrix} W_{(i,m)1} & W_{(i,m)2} \\ \ast & W_{(i,m)3} \end{bmatrix} < 0, \tag{47}$$

then we have $\Xi_{(i,m)} < 0$; hence, the system is stochastically stable and strictly $(Z, S, G)$-dissipative under partially known transition probabilities. This follows from the obvious fact that no knowledge of $\pi_{ij}$ for $j \in I_{UK}^{i}$ or of $\lambda_{mn}$ for $n \in I_{UK}^{m}$ is required in (46) and (47). Thus, for $\pi_{K}^{i}, \lambda_{K}^{m} \ne 0$ and $\pi_{K}^{i}, \lambda_{K}^{m} = 0$, respectively, one readily recovers (44), since if $\pi_{K}^{i}, \lambda_{K}^{m} = 0$, conditions (46) and (47) reduce to (44). □
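The known/unknown split in (45) amounts to accumulating $\lambda_{mn}\pi_{ij}P(j,n)$ only over index pairs whose probabilities are available. The following is a minimal pure-Python sketch of assembling the known part $\bar P_K^{(i,m)}$, with `None` marking an unknown entry; the probability values and the matrices $P(j,n)$ are hypothetical illustrations, not data from the paper:

```python
def known_part(lam_row, pi_row, P):
    """Sum lam_row[n] * pi_row[j] * P[j][n] over index pairs (j, n)
    for which both probabilities are known (not None); P[j][n] is 2x2."""
    acc = [[0.0, 0.0], [0.0, 0.0]]
    for n, lam_mn in enumerate(lam_row):
        for j, pi_ij in enumerate(pi_row):
            if lam_mn is None or pi_ij is None:
                continue  # term belongs to the unknown part
            w = lam_mn * pi_ij
            for r in range(2):
                for c in range(2):
                    acc[r][c] += w * P[j][n][r][c]
    return acc

# hypothetical data: two modes per chain, P[j][n] = scaled 2x2 identity
P = [[[[1.0, 0.0], [0.0, 1.0]], [[2.0, 0.0], [0.0, 2.0]]],
     [[[3.0, 0.0], [0.0, 3.0]], [[4.0, 0.0], [0.0, 4.0]]]]
lam_row = [0.3, None]   # row m of Lambda: second entry unknown
pi_row = [0.5, 0.5]     # row i of Pi: fully known
print(known_part(lam_row, pi_row, P))
```

Only the weight $0.3 \cdot 0.5$ pairs contribute here; the column with the unknown $\lambda_{mn}$ is skipped entirely, mirroring the fact that (46) never touches the unknown probabilities.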


Remark 3.5. The passivity analysis of neural networks (1) and (40), obtained by choosing $Z = 0$, $S = I$, and $G - \alpha I = \beta I$, can be carried out in a similar fashion to the case of completely known transition probability matrices. Since it follows the same procedure, it is omitted.

4. Numerical examples

In this section, three numerical examples are presented to illustrate the effectiveness of the developed dissipativity analysis for discrete-time stochastic neural networks with time-varying delays. The first example demonstrates the validity of the dissipativity condition in Theorem 3.1 under the assumption that the transition probability matrices $\pi_{ij}$ and $\lambda_{mn}$, describing the delays $d(k)$ and $\tau(k)$ respectively, are completely known. In the second example, the same analysis is repeated using Theorem 3.3, but under the assumption that some of the elements of the transition probability matrices $\pi_{ij}$ and $\lambda_{mn}$ are missing (unknown). In the third example, the reduced conservatism of the developed passivity criterion is demonstrated.

4.1. Example 1

Example 4.1. Consider a neural network of the type (1) with



$$D = \begin{bmatrix} 0.05 & 0 \\ 0 & 0.01 \end{bmatrix}, \quad
A = \begin{bmatrix} 0.03 & 0.02 \\ 0.06 & 0.06 \end{bmatrix}, \quad
B = \begin{bmatrix} 0.03 & 0.04 \\ 0.03 & 0.03 \end{bmatrix}, \quad
C = \begin{bmatrix} 0.05 & 0.04 \\ -0.07 & 0.09 \end{bmatrix},$$

and the activation functions are taken as follows:

$$f_1(a) = \tfrac{1}{10}(|a + 1| + |a - 1|), \qquad f_2(a) = \tfrac{1}{20}(|a + 1| + |a - 1|).$$

It can be verified that Assumption 2.1 is satisfied with $\delta_1 = 0.1$, $\rho_1 = 0.1$, $\delta_2 = 0.2$, and $\rho_2 = 0.2$. Thus

$$F_1 = \begin{bmatrix} 0.02 & 0 \\ 0 & 0.05 \end{bmatrix}, \qquad F_2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}.$$

The noise diffusion coefficient vector $\hat\sigma(k)$ satisfies Assumption 2.2 with

$$\begin{bmatrix} G_1 & G_2 \\ \ast & G_3 \end{bmatrix} =
\begin{bmatrix}
0.03 & 0.05 & 0 & 0 \\
0.04 & 0.01 & 0.011 & 0 \\
0.011 & 0.01 & 0.06 & 0.012 \\
0.022 & 0.05 & 0.02 & 0.043
\end{bmatrix}.$$

In this example, we choose

$$Z = \begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}, \qquad
S = \begin{bmatrix} 2 & 0 \\ 1 & 2 \end{bmatrix}, \qquad
G = \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}.$$

Our purpose hereafter is to discuss the relationship between the optimal dissipativity performance $\alpha$ and the delays $d(k)$ and $\tau(k)$. It is assumed that the distributed delay $\tau(k)$ and the discrete delay $d(k)$ are characterized by two independent homogeneous Markov chains. Firstly, the dependency of the optimal dissipativity performance $\alpha$ on the discrete delay $d(k)$ is analyzed. Assume that the distributed delay $\tau(k)$, characterized by a Markov chain, takes values in a finite set $S_2 = \{1, 2, 3\}$ whose modes correspond to delays of 4, 5, and 6, that is, $d_2 = 4$ and $\bar d_2 = 6$. We first fix the lower bound of the discrete delay $d(k)$ to 6 and set its upper bound to 8; the discrete delay $d(k)$, modeled by a Markov chain, then takes values in a finite set $S_1 = \{1, 2\}$. From these two random sequences, the transition probability matrices can be calculated as

$$\pi_{ij} = \begin{bmatrix} 0.5293 & 0.4707 \\ 0.864 & 0.136 \end{bmatrix}, \qquad
\lambda_{mn} = \begin{bmatrix} 0.29 & 0.5 & 0.21 \\ 0.6 & 0.1 & 0.3 \\ 0.5 & 0.3 & 0.2 \end{bmatrix}.$$
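Transition probability matrices of this kind are empirical estimates from observed delay sequences: count one-step transitions between modes and normalize each row by its total. A small pure-Python sketch of this estimator, applied to a hypothetical mode sequence (not the data used in the paper):

```python
def estimate_transition_matrix(modes, n_states):
    # count one-step transitions, then normalize each row
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(modes, modes[1:]):
        counts[a][b] += 1
    matrix = []
    for row in counts:
        total = sum(row)
        matrix.append([c / total if total else 0.0 for c in row])
    return matrix

# hypothetical observed sequence of the delay mode, states {0, 1}
modes = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0]
pi_hat = estimate_transition_matrix(modes, 2)
for row in pi_hat:
    print(row)  # each row sums to 1
```

Longer sample paths give estimates closer to the chain's true transition probabilities; by construction every estimated row is stochastic whenever its state has been visited at least once.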


By using Theorem 3.1, the optimal dissipativity performance obtained is $\alpha = 2.387$. However, if the upper bound $\bar d_1$ is increased to 9, the transition probability matrix $\pi_{ij}$ becomes

$$\pi_{ij} = \begin{bmatrix} 0.5293 & 0.235 & 0.2357 \\ 0.864 & 0.111 & 0.025 \\ 0.6 & 0.321 & 0.079 \end{bmatrix}.$$

The optimal dissipativity performance obtained in this case is $\alpha = 2.2345$. Similarly, different values of $\alpha$ are obtained by varying $\bar d_1$ while keeping the value of $d_1$ fixed. A detailed comparison for different values of $\bar d_1$ is provided in Table 1, which shows that for a fixed $d_1$, a larger $\bar d_1$ corresponds to a smaller optimal dissipativity performance $\alpha$.

Next, the upper bound of the discrete delay $d(k)$ is fixed to 9. By using Theorem 3.1, the optimal dissipativity performance is analyzed for different values of $d_1$; the values of $\alpha$ obtained are tabulated in Table 2. When the value of $d_1$ is taken as 12, the optimal dissipativity performance obtained is $\alpha = 2.9455$. This shows that when the upper bound of the discrete delay $d(k)$ is fixed, a smaller $d_1$ usually yields a smaller optimal dissipativity performance $\alpha$.

The second task in this example is to show the relationship between the optimal dissipativity performance $\alpha$ and the distributed delay $\tau(k)$. To this end, we assume $d_1 = 5$ and $\bar d_1 = 9$, that is, the discrete delay satisfies $5 \le d(k) \le 9$.

1. We first select $d_2 = 3$ as a fixed value and vary $\bar d_2$. When $\bar d_2 = 5$ (corresponding to $3 \le \tau(k) \le 5$), the optimal dissipativity performance obtained by Theorem 3.1 is $\alpha = 2.953$. When $\bar d_2 = 6$ (corresponding to $3 \le \tau(k) \le 6$), the optimal dissipativity performance obtained is $\alpha = 2.754$. This indicates that for the same $d_2$, a larger $\bar d_2$ corresponds to a smaller optimal dissipativity performance $\alpha$. A more detailed comparison for different values of $\bar d_2$ is provided in Table 3.
2. Next, we fix $\bar d_2 = 13$. According to Theorem 3.1, Table 4 gives the optimal dissipativity performance $\alpha$ for different values of $d_2$. We can find from Table 4 that for the same $\bar d_2$, a larger $d_2$ corresponds to a larger optimal dissipativity performance $\alpha$.

4.2. Example 2

Consider the same neural network as in Example 4.1, but with partially known transition probability matrices.
That is, the transition probability matrices describing the discrete delay $d(k)$ and the distributed delay $\tau(k)$ have some unknown elements. The main goal here is again to discuss the relationship between the optimal dissipativity performance $\alpha$ and the delays $d(k)$ and $\tau(k)$. Firstly, we assume that the distributed delay $\tau(k)$ is bounded as $4 \le \tau(k) \le 8$. Then, the same analysis is carried out as in Example 4.1, where the optimal dissipativity performance is obtained against various values of $\bar d_1$ with $d_1$ held constant. The transition probability matrices considered in this case are

$$\pi_{ij} = \begin{bmatrix} ? & ? \\ 0.864 & 0.136 \end{bmatrix}, \qquad
\lambda_{mn} = \begin{bmatrix} 0.29 & 0.5 & 0.21 \\ ? & ? & ? \\ ? & 0.3 & ? \end{bmatrix}.$$
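Before Theorem 3.3 is applied, the known entries of such a partially known matrix should at least be consistent with row-stochasticity: every known entry lies in $[0, 1]$, the known mass of a row is at most 1, and a fully known row sums to exactly 1. A minimal pure-Python check, with `None` marking an unknown entry (the sample rows below are hypothetical):

```python
def partially_known_rows_ok(matrix, tol=1e-9):
    # consistency check for a partially known transition probability matrix
    for row in matrix:
        known = [p for p in row if p is not None]
        if any(p < -tol or p > 1 + tol for p in known):
            return False                 # entry outside [0, 1]
        s = sum(known)
        if s > 1 + tol:
            return False                 # known mass cannot exceed 1
        if len(known) == len(row) and abs(s - 1) > tol:
            return False                 # fully known row must sum to 1
    return True

lam = [[0.29, 0.5, 0.21],  # fully known row: must sum to 1
       [None, None, 0.3],  # partially known row: known mass <= 1
       [None, 0.3, None]]
print(partially_known_rows_ok(lam))   # True
bad = [[0.7, 0.7, None]]              # known mass 1.4 > 1
print(partially_known_rows_ok(bad))   # False
```

This is only a sanity check on the data; feasibility of conditions (46) and (47) themselves would still be established with an LMI solver.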

By using Theorem 3.3, the optimal dissipativity performance $\alpha$ obtained in this case is as provided in Table 5. Next, the upper bound of the discrete delay $d(k)$ is fixed; using Theorem 3.3, the optimal dissipativity performance is analyzed for different values of $d_1$, and the values of $\alpha$ obtained are tabulated in Table 6. In the second part of this example, the relationship between the optimal dissipativity performance $\alpha$ and the distributed delay $\tau(k)$ is highlighted. The discrete delay $d(k)$, characterized by a Markov chain, is assumed to satisfy $5 \le d(k) \le 9$, that is, $d_1 = 5$ and $\bar d_1 = 9$. As in Example 4.1, we select $d_2 = 3$ as a fixed value and vary $\bar d_2$. The optimal dissipativity performance $\alpha$

Table 1
Optimal dissipativity performance for fixed $d_1$.

  $\bar d_1$      8        9        10       11
  Theorem 3.1     2.387    2.3345   2.239    1.881

Table 2
Optimal dissipativity performance for fixed $\bar d_1$.

  $d_1$           10       11       12       13
  Theorem 3.1     2.75     2.91     2.9455   3.432


Table 3
Optimal dissipativity performance for fixed $d_2$.

  $\bar d_2$      4        5        6        7
  Theorem 3.1     3.083    2.953    2.754    2.502

Table 4
Optimal dissipativity performance for fixed $\bar d_2$.

  $d_2$           7        8        9        10
  Theorem 3.1     2.235    2.2864   2.433    2.501

Table 5
Optimal dissipativity performance for different $\bar d_1$ and partially known transition matrices.

  $\bar d_1$      8        9       10      11
  Theorem 3.3     2.333    2.26    2.06    1.953

Table 6
Optimal dissipativity performance for different $d_1$ and partially known transition matrices.

  $d_1$           10      11      12      13
  Theorem 3.3     2.33    2.84    2.89    3.12

Table 7
Optimal dissipativity performance for different $\bar d_2$ and partially known transition matrices.

  $\bar d_2$      4       5        6       7
  Theorem 3.3     3.13    2.332    2.23    1.983

Table 8
Optimal dissipativity performance for different $d_2$ and partially known transition matrices.

  $d_2$           7        8       9       10
  Theorem 3.3     2.532    2.81    2.84    3.008

obtained by Theorem 3.3 is as provided in Table 7. Next, the upper bound of the distributed delay $\tau(k)$ is fixed to a certain value, and the optimal dissipativity performance $\alpha$ is analyzed for different values of $d_2$; the detailed analysis is provided in Table 8.

Remark 4.1. It can be observed that the results obtained in the case of partially known transition probability matrices are close to those obtained with completely known transition probability matrices.

4.3. Example 3

In this example, we demonstrate the reduced conservatism of the developed passivity criterion. Consider neural network (40), where the finite-distributed delay $\tau(k)$ is taken as zero. The rate matrix $D$, the connection weight matrix $A$, and the discretely delayed connection weight matrix $B$ are taken as:

$$D = \begin{bmatrix} 0.9 & 0.002 \\ 0.006 & 0.1 \end{bmatrix}, \qquad
A = \begin{bmatrix} 0.005 & 0.01 \\ 0.01 & 0.004 \end{bmatrix}, \qquad
B = \begin{bmatrix} 0.09 & 0.03 \\ 0.3 & 0.08 \end{bmatrix}.$$

The activation functions considered are such that Assumption 2.1 holds, with the corresponding matrices $F_1$ and $F_2$ given below.

Table 9
Optimal dissipativity performance for different $\bar d_1$.

  $\bar d_1$       5        6        7         8
  Corollary 3.2    2.206    2.284    2.3053    2.365
  [52]             2.374    2.453    2.53      2.83
  [18]             3.21     3.54     3.64      3.87

Table 10
Optimal dissipativity performance for different $d_1$.

  $d_1$            3        4        5        6
  Corollary 3.2    7.95     7.88     7.263    7.218
  [52]             8.32     8.233    8.01     7.932
  [18]             13.33    13.12    12.82    12.32

$$F_1 = \begin{bmatrix} 0.001 & 0 \\ 0 & 0.0003 \end{bmatrix}, \qquad
F_2 = \begin{bmatrix} 0.3 & 0.01 \\ 0.002 & 0.4 \end{bmatrix}.$$

The noise diffusion coefficient matrix is considered as:

$$\begin{bmatrix} G_1 & G_2 \\ \ast & G_3 \end{bmatrix} =
\begin{bmatrix}
0.001 & 0.001 & 0 & 0.001 \\
0 & 0.011 & 0.0032 & 0.0054 \\
0.0031 & 0.003 & 0.0054 & 0.0042 \\
0 & 0 & 0 & 0.0032
\end{bmatrix}.$$

1. We first fix the lower bound of the discrete delay $d(k)$ to 3 and set its upper bound to 5, so that the discrete delay $d(k)$, modeled by a Markov chain, takes values in a finite set $S_1 = \{1, 2, 3\}$. The transition probability matrix $\pi_{ij}$ in this case is given as

$$\pi_{ij} = \begin{bmatrix} 0.55 & 0.35 & 0.1 \\ 0.8 & 0.2 & 0 \\ 0.6 & 0.2 & 0.2 \end{bmatrix}.$$

By using Corollary 3.2, the optimal dissipativity performance obtained is $\alpha = 2.206$. For the same bounds on the discrete delay $d(k)$, the optimal dissipativity performances obtained in [52] and [18] are 2.374 and 3.21, respectively. Similarly, different values of the optimal dissipativity performance $\alpha$ are obtained by varying $\bar d_1$ while keeping $d_1$ constant; these are tabulated in Table 9. From this analysis, it is clear that the proposed passivity criterion has the potential of reduced conservatism. The optimal dissipativity performance $\alpha$ for different values of $d_1$ is recorded in Table 10.

5. Conclusions

This paper has considered the problem of dissipativity analysis for discrete-time stochastic neural networks with discrete and finite-distributed delays whose parameters are described by a discrete-time Markov chain. The discretized Jensen inequality and the lower bounds lemma have been used to deal with the finite sum quadratic terms. A delay-dependent condition has been provided to ensure that the considered neural network is globally asymptotically stable in the mean square and strictly $(Z, S, G) - \alpha$-dissipative. The derived condition depends not only on the discrete delay but also on the finite-distributed delay. A special case has been discussed in which the transition probabilities of the Markovian channels are only partially known. It has been established that the derived results involve fewer decision variables and possess reduced conservatism. Numerical examples have been given to show the effectiveness and advantages of the proposed methods.

Acknowledgments

The authors would like to thank the reviewers for their helpful comments on our submission. This work is supported by the deanship of scientific research (DSR) at KFUPM through research group project No. RG-1316-1.


References [1] S. Arik, Global asymptotic stability of a class of dynamical neural networks, IEEE Trans. Circuits Syst. I, Fundam. Theory Appl. 47 (4) (2000) 568–571. [2] M.S. Mahmoud, Novel robust exponential stability criteria for neural networks, Neurocomputing 73 (11) (2009) 331–335. [3] M.S. Mahmoud, S.Z. Selim, P. Shi, Global exponential stability criteria for neural networks with probabilistic delays, IET Control Theory Appl. 4 (11) (2010) 2405–2415. [4] M.S. Mahmoud, Y. Xia, LMI-based exponential stability criterion for bidirectional associative memory neural networks, Neurocomputing 74 (3) (2010) 284–290. [5] M.S. Mahmoud, A. Ismail, Improved results on robust exponential stability criteria for neutral-type delayed neural networks, Appl. Math. Comput. 217 (5) (2010) 3011–3019. [6] Y. Zhao, L. Zhang, S. Shen, H. Gao, Robust stability criterion for discrete-time uncertain Markovian jumping neural networks with defective statistics of modes transitions, IEEE Trans. Neural Networks 22 (1) (2011) 164–170. [7] M.S. Mahmoud, A.Y. Al-Rayyah, Adaptive control of systems with mismatched nonlinearities and time-varying delays using state-measurements, IET Control Theory Appl. 4 (1) (2010) 27–36. [8] M.S. Mahmoud, S. Elferik, New stability and stabilization methods for nonlinear systems with time-varying delays, Optimal Control Appl. Methods 31 (2) (2010) 273–287. [9] J. Lam, S. Xu, D.W.C. Ho, Y. Zou, On global asymptotic for a class of delayed neural networks, Int. J. Circuit Theory Appl. 40 (11) (2012) 1165–1174. [10] H. Zhang, Z. Liu, G. Huang, Novel delay-dependent robust stability analysis for switched neutral-type neural networks with time-varying delays via SC technique, IEEE Trans. Syst. Man Cybern. Part B Cybern. 40 (6) (2010) 1480–1491. [11] Y. He, G. Liu, D. Rees, New delay-dependent stability criteria for neural networks with time-varying delay, IEEE Trans. Neural Networks 18 (1) (2007) 310–314. [12] Z. Wu, P. Shi, H. Su, J. 
Chu, Delay-dependent stability analysis for switched neural networks with time-varying delay, IEEE Trans. Syst. Man Cybern. Part B Cybern. 41 (6) (2011) 1522–1530. [13] Y. Zhao, H. Gao, J. Lam, K. Che, Stability analysis of discrete-time recurrent neural networks with stochastic delay, IEEE Trans. Neural Networks 20 (8) (2009) 1330–1339. [14] Z. Wang, D.W.C. Ho, X. Liu, State estimation for delayed neural networks, IEEE Trans. Neural Networks 16 (1) (2005) 279–284. [15] M.S. Mahmoud, Extended state estimator design method for neutral-type neural networks with time-varying delays, Int. J. Syst. Control Commun. 3 (1/ 2) (2011) 1–19. [16] M.S. Mahmoud, New exponentially convergent state estimation method for delayed neural networks, Neurocomputing 72 (5) (2009) 3935–3942. [17] C. Li, X. Liao, Passivity analysis of neural networks with time delay, IEEE Trans. Circuits Syst. II, Exp. Briefs 52 (8) (2005) 471–475. [18] Q. Song, J. Liang, Z. Wang, Passivity analysis of discrete-time stochastic neural networks with time-varying delays, Neurocomputing 72 (7–9) (2009) 1782–1788. [19] Z. Wu, P. Shi, H. Su, J. Chu, Passivity analysis for discrete time stochastic Markovian jump neural networks with mixed time delays, IEEE Trans. Neural Networks 22 (10) (2011) 1566–1575. [20] Z. Wang, Y. Liu, G. Wei, X. Liu, A note on control of a class of discrete-time stochastic systems with distributed delays and nonlinear disturbances, Automatica 46 (3) (2010) 543–548. [21] Z. Wang, H. Zhang, Global asymptotic stability of reaction diffusion Cohen–Grossberg neural networks with continuously distributed delays, IEEE Trans. Neural Networks 20 (1) (2010) 39–49. [22] Y. Liu, Z. Wang, X. Liu, Global exponential stability of generalized recurrent neural networks with discrete and distributed delays, Neural Networks 19 (5) (2006) 667–675. [23] Z. Wang, Y. Liu, M. Li, X. Liu, Stability analysis for stochastic Cohen–Grossberg neural networks with mixed time delays, IEEE Trans. 
Neural Networks 17 (3) (2006) 814–820. [24] M.S. Mahmoud, Robust global stability of discrete-time recurrent neural networks, Proc. IMechEng Part I–J. Syst. Control Eng. 223 (8) (2009) 1045– 1053. [25] M.S. Mahmoud, Novel robust exponential stability criteria for neural networks, Neurocomputing 73 (11) (2009) 331–335. [26] M.S. Mahmoud, Y. Xia, LMI-based exponential stability criterion for bidirectional associative memory neural networks, Neurocomputing 74 (3) (2010) 284–290. [27] M.S. Mahmoud, Y. Xia, Improved exponential stability analysis for delayed recurrent neural networks, J. Franklin Inst. 348 (1) (2011) 201–211. [28] J. Qiu, K. Lu, P. Shi, M.S. Mahmoud, Robust exponential stability for discrete-time interval BAM neural networks with delays and Markovian jump parameters, Int. J. Adapt. Control Signal Process. 24 (9) (2010) 760–785. [29] Y. Liu, Z. Wang, X. Liu, Asymptotic stability for neural networks with mixed time-delays: the discrete-time case, Neural Networks 22 (1) (2009) 67–74. [30] H. Li, C. Wang, P. Shi, H. Gao, New passivity results for uncertain discrete-time stochastic neural networks with mixed time delays, Neurocomputing 73 (16–18) (2010) 3291–3299. [31] Y. Liu, Z. Wang, X. Liu, State estimation for discrete-time Markovian jumping neural networks with mixed mode-dependent delays, Phys. Lett. A 372 (48) (2008) 7147–7155. [32] M.S. Mahmoud, New filter design for linear time-delay systems, Linear Algebra and Its Appl. 434 (4) (2011) 1080–1093. [33] M.S. Mahmoud, Delay-dependent dissipativity of singular time-delay systems, IMA J. Math. Control Inf. 26 (1) (2009) 45–58. [34] M.S. Mahmoud, Y. Shi, F.M. AL-Sunni, Dissipativity analysis and synthesis of a class of nonlinear systems with time-varying delays, J. Franklin Inst. 346 (2009) 570–592. [35] M.S. Mahmoud, H.N. Nounou, Y. Xia, Dissipative control for internet-based switching systems, J. Franklin Inst. 347 (1) (2010) 154–172. [36] J. Qiu, K. Lu, M.S. Mahmoud, N. Yao, X. 
Du, Robust passive control for uncertain nonlinear neutral markovian jump systems with mode-dependent time–delays, ICIC Express Lett. 5 (1) (2011) 119–125. [37] M.S. Mahmoud, Delay-dependent dissipativity analysis and synthesis of switched delay systems, Int. J. Robust Nonlinear Control 21 (1) (2011) 1–20. [38] W.M. Haddad, V. Chellaboina, Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach, Princeton Univ. Press, Princeton, NJ, 2008. [39] D.J. Hill, P.J. Moylan, The stability of nonlinear dissipative systems, IEEE Trans. Autom. Control 21 (5) (1976) 708–711. [40] Z. Feng, J. Lam, Robust reliable dissipative filtering for discrete delay singular systems, Signal Process. 92 (12) (2012) 3010–3025. [41] M.S. Mahmoud, H.N. Nounou, Dissipative analysis and synthesis of time-delay systems, Mediterranean J. Meas. Control 1 (2005) 97–108. [42] M.S. Mahmoud, P. Shi, Methodologies for Control of Jumping Time-Delay Systems, Kluwer Academic Publishers, Amsterdam, 2003. [43] M.S. Mahmoud, Yuanqing Xia, A generalized approach to stabilization of linear interconnected time-delay systems, Asian J. Control 14 (6) (2012) 1539–1552. [44] Z. Feng, J. Lam, Z. Shu, Dissipative control for linear systems by static output feedback, Int. J. Syst. Sci., 2012 (to be published). [45] Z. Feng, J. Lam, Stability and dissipativity analysis of distributed delay cellular neural networks, IEEE Trans. Neural Networks 22 (6) (2011) 976–981. [46] Z. Wu, J. Lam, H. Su, J. Chu, Stability and dissipativity analysis of static neural networks with time delay, IEEE Trans. Neural Networks Learn. Syst. 47 (2) (2012) 199–210. [47] J.C. Willems, Dissipative dynamical systems, part I: General theory, Arch. Ration. Mech. Anal. 45 (5) (1972) 321–351. [48] X. Zhu, Y. Wang, G. Yang, New delay-dependent stability results for discrete-time recurrent neural networks with time-varying delay, Neurocomputing 72 (13–15) (2009) 3376–3383. [49] Z. Wu, Ju H. Park, H. Su, J. 
Chu, Admissibility and dissipativity analysis for discrete-time singular systems with mixed time-varying delays, Appl. Math. Comput. 47 (13) (2012) 199–210.


[50] P. Li, J. Lam, Z. Shu, On the transient and steady-state estimates of interval genetic regulatory networks, IEEE Trans. Syst. Man Cybern. Part B: Cybern. 40 (2) (2010) 336–349. [51] Z. Wang, H. Gao, J. Cao, X. Liu, On delayed genetic regulatory networks with polytopic uncertainties: robust stability analysis, IEEE Trans. Nanobiosci. 7 (2) (2008) 154–163. [52] Z. Wu, P. Shi, J. Chu, Dissipativity analysis for discrete-time stochastic neural networks with time-varying delays, IEEE Trans. Neural Networks Learn. Syst. 24 (3) (2013) 345–355.