J. Math. Anal. Appl. 328 (2007) 316–326 www.elsevier.com/locate/jmaa
Delay-dependent stochastic stability of delayed Hopfield neural networks with Markovian jump parameters

Xuyang Lou, Baotong Cui *

Research Center of Control Science and Engineering, Southern Yangtze University, 1800 Lihu Rd., Wuxi, Jiangsu 214122, PR China

Received 9 November 2005; available online 16 June 2006; submitted by William F. Ames
Abstract

In this paper, the problem of stochastic stability for a class of time-delay Hopfield neural networks with Markovian jump parameters is investigated. The jumping parameters are modeled as a continuous-time, discrete-state Markov process. Without assuming boundedness, monotonicity, or differentiability of the activation functions, delay-dependent stochastic stability criteria for Markovian jumping delayed Hopfield neural networks (MJDHNNs) are developed. We show that the resulting sufficient conditions can be solved in terms of linear matrix inequalities.

© 2006 Elsevier Inc. All rights reserved.

Keywords: Hopfield neural networks; Markovian jump parameters; Stochastic stability; Linear matrix inequality
1. Introduction

Hopfield neural networks [1,2] have been extensively studied in past years and have found many applications in different areas such as pattern recognition, associative memory, and combinatorial optimization. Many essential features of these networks, such as qualitative properties of stability, oscillation, and convergence, have been investigated by many authors (e.g., [3–7]). Such applications rely on the existence of an equilibrium point, or of a unique equilibrium point,

✩ This work is supported by the National Natural Science Foundation of China (No. 10371072) and the Science Foundation of Southern Yangtze University (No. 103000-21050323).
* Corresponding author. E-mail address: [email protected] (B.T. Cui).
doi:10.1016/j.jmaa.2006.05.041
and its stability. The study of the stability of Hopfield neural networks has therefore attracted much attention. Hopfield neural networks with time delays have been extensively investigated over the years, and various sufficient conditions for the stability of the equilibrium point of this class of neural networks have been presented in [8–12].

Markovian jump systems, introduced in [13], are hybrid systems with two components in the state. The first is the mode, described by a continuous-time finite-state Markov process; the second is the state, represented by a system of differential equations. Jump systems have the advantage of modeling dynamic systems subject to abrupt variations in their structures, such as component failures or repairs, sudden environmental disturbances, changing subsystem interconnections, and operation at different points of a nonlinear plant [14]. Over the last three decades, much progress has been made in Markovian jump control theory; see [15–18] and the references therein. The problem of stochastic robust stability (SRST) for uncertain delayed neural networks with Markovian jumping parameters was investigated via the linear matrix inequality (LMI) technique in [19]. To the best of our knowledge, only a few works address stochastic stability analysis for neural networks with Markovian jumping parameters, and the stochastic stability of Hopfield neural networks with Markovian jump parameters has not previously been tackled.

The purpose of this paper is to establish stochastic asymptotic stability conditions for a class of MJDHNNs that includes delayed Hopfield neural networks and BAM neural networks as special cases. The derived conditions are expressed in terms of linear matrix inequalities (LMIs), which can be checked numerically very efficiently using recently developed standard algorithms such as interior-point methods [25].
By using a less conservative fact, our results extend and improve those established in earlier references and are less restrictive.

Notations and facts. In the sequel, A^T and A^{-1} denote the transpose and the inverse of a square matrix A. We use A > 0 (A < 0) to denote a symmetric positive (negative) definite matrix A, and I denotes the n × n identity matrix. R denotes the set of real numbers, R^n the n-dimensional Euclidean space, and R^{n×m} the set of all n × m real matrices. diag[·] denotes a block diagonal matrix. The symbol "*" within a matrix represents the symmetric term of the matrix. The mathematical expectation operator with respect to the given probability measure P is denoted by E{·}.

Fact 1 (Schur complement). Given constant matrices Ω1, Ω2, Ω3, where Ω1 = Ω1^T and 0 < Ω2 = Ω2^T, then Ω1 + Ω3^T Ω2^{-1} Ω3 < 0 if and only if

  [ Ω1    Ω3^T ]            [ −Ω2   Ω3 ]
  [ Ω3   −Ω2   ]  < 0   or  [ Ω3^T  Ω1 ]  < 0.

Fact 2. For any real matrices Σ1, Σ2, Σ3 with appropriate dimensions and 0 < Σ3 = Σ3^T, it follows that

  Σ1^T Σ2 + Σ2^T Σ1 ≤ ε Σ1^T Σ3 Σ1 + ε^{-1} Σ2^T Σ3^{-1} Σ2,   ∀ε > 0.

Fact 3. Assume that a(s) ∈ R^{na}, b(s) ∈ R^{nb} are given for s ∈ Ω, and N ∈ R^{na×nb}. Then, for any matrices X ∈ R^{na×na}, Y ∈ R^{na×nb} and Z ∈ R^{nb×nb} satisfying

  [ X    Y ]
  [ Y^T  Z ]  ≥ 0,

the following holds:

  −2 ∫_Ω a^T(s) N b(s) ds ≤ ∫_Ω [a(s); b(s)]^T [ X, Y − N; Y^T − N^T, Z ] [a(s); b(s)] ds.
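Fact 1 can be spot-checked numerically: for random data with Ω2 > 0, negativity of Ω1 + Ω3^T Ω2^{-1} Ω3 and negativity of the block matrix hold or fail together. The sketch below (illustrative only; the matrix names match Fact 1, all data randomly generated) uses NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random symmetric Omega1, symmetric positive definite Omega2, arbitrary Omega3.
A = rng.standard_normal((n, n))
Omega1 = -(A @ A.T) - 5.0 * np.eye(n)     # symmetric
B = rng.standard_normal((n, n))
Omega2 = B @ B.T + np.eye(n)              # symmetric positive definite
Omega3 = rng.standard_normal((n, n))

def is_neg_def(M):
    """True iff the symmetric part of M is negative definite."""
    return np.max(np.linalg.eigvalsh((M + M.T) / 2.0)) < 0

lhs = Omega1 + Omega3.T @ np.linalg.solve(Omega2, Omega3)
block = np.block([[Omega1, Omega3.T],
                  [Omega3, -Omega2]])

# Schur complement: the two negativity conditions hold or fail together.
assert is_neg_def(lhs) == is_neg_def(block)
```

This equivalence is exactly what is used later to remove the R^{-1} and U^{-1} terms from the stability conditions.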
2. Problem statement

Consider the following continuous-time delayed Hopfield neural network:

  ẋp(t) = −cp xp(t) + Σ_{q=1}^{n} wpq sq(xq(t − τ)) + Jp,   p = 1, 2, . . . , n,   (1)

where C = diag(cp), W = (wpq)_{n×n}, x(t) = (x1(t), x2(t), . . . , xn(t))^T ∈ R^n is the state vector, C is a positive diagonal matrix, W represents the delayed feedback matrix, J = (J1, J2, . . . , Jn)^T is a constant input vector, and the delay τ is a nonnegative constant. Throughout this paper, we assume that

(H1) there exist positive numbers Lq such that

  0 ≤ (sq(ξ1) − sq(ξ2)) / (ξ1 − ξ2) ≤ Lq,   q = 1, 2, . . . , n,   (2)

for all ξ1, ξ2 ∈ R, ξ1 ≠ ξ2. The initial conditions associated with system (1) are of the form

  xp(s) = φ0,   p = 1, 2, . . . , n,   (3)
where φ0 are continuous functions defined on [−τ, 0].

Now, let x* = (x1*, x2*, . . . , xn*)^T be the equilibrium of system (1), and set y(t) = x(t) − x*. Then it follows from (1) that

  ẏ(t) = −C y(t) + W f(y(t − τ)),   (4)

where y(t) = (y1(t), y2(t), . . . , yn(t))^T is the state vector of the transformed system and

  f(y(t)) = (f1(y1(t)), f2(y2(t)), . . . , fn(yn(t)))^T,

with

  fq(yq(t)) = sq(yq(t) + xq*) − sq(xq*),   q = 1, 2, . . . , n,

and fq(0) = 0 for q = 1, 2, . . . , n. Note that since each function sq(·) satisfies hypothesis (H1), each fq(·) satisfies

  fq²(ξq) ≤ Lq² ξq²,   ξq fq(ξq) ≥ fq²(ξq) / Lq,   ∀ξq ∈ R.   (5)
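For instance, sq(u) = tanh(u) (used in the example of Section 4) satisfies (H1) with Lq = 1, since its difference quotients lie in [0, 1]. A small numerical spot-check (illustrative only, NumPy):

```python
import numpy as np

# Spot-check hypothesis (H1) for s(u) = tanh(u) with L = 1:
# 0 <= (s(x1) - s(x2)) / (x1 - x2) <= 1 for all x1 != x2.
rng = np.random.default_rng(1)
x1 = rng.uniform(-10.0, 10.0, size=100_000)
x2 = rng.uniform(-10.0, 10.0, size=100_000)
mask = x1 != x2
q = (np.tanh(x1[mask]) - np.tanh(x2[mask])) / (x1[mask] - x2[mask])

# Allow a tiny tolerance for floating-point rounding when x1 is close to x2.
assert np.all(q >= -1e-9) and np.all(q <= 1.0 + 1e-9)
```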
Suppose (H1) holds; then it is clear that the conditions of Lemma 2 in [11] hold for the functions sq(·). Therefore, similarly to the proof of Theorem 1 in [11], we can conclude that the equilibrium point of system (1) exists and is unique. It can be seen that the asymptotic stability of x* of system (1) is equivalent to that of the trivial solution of system (4).

Given a probability space (Ω, Υ, P), where Ω is the sample space, Υ is the algebra of events, and P is the probability measure defined on Υ, let S = {1, 2, . . . , N} and let the random process {ηt, t ∈ [0, +∞)} be a homogeneous, finite-state Markov process with right-continuous trajectories, generator Λ = (πij), and transition probability from mode i at time t to mode j at time t + δ, i, j ∈ S, given by

  pij = Pr(η_{t+δ} = j | ηt = i) = { πij δ + o(δ),       if i ≠ j,
                                   { 1 + πii δ + o(δ),   if i = j,     (6)

with transition probability rates πij ≥ 0 for i, j ∈ S, i ≠ j, and πii = −Σ_{j=1, j≠i}^{N} πij, where δ > 0 and lim_{δ→0} o(δ)/δ = 0. Note that the set S comprises the various operational modes of the system under study.

In this paper, we consider the delayed Hopfield neural networks with Markovian jumping parameters described by the nonlinear differential equations

  ẋp(t) = −cp(ηt) xp(t) + Σ_{q=1}^{n} wpq(ηt) sq(xq(t − τ)) + Jp,   p = 1, 2, . . . , n.   (7)
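The mode process ηt in (6)–(7) is an ordinary continuous-time Markov chain and can be sampled directly: the sojourn time in mode i is exponential with rate −πii, after which the chain jumps to j ≠ i with probability πij/(−πii). A hypothetical simulation sketch (Python/NumPy; the two-mode generator here is only an example):

```python
import numpy as np

# Example two-mode generator (rows sum to zero), consistent with (6).
Lam = np.array([[-0.4, 0.4],
                [ 0.3, -0.3]])
assert np.allclose(Lam.sum(axis=1), 0.0)

def simulate_modes(Lam, eta0, T, rng):
    """Sample one trajectory of the mode process eta_t on [0, T].

    Assumes every mode has a positive exit rate -Lam[i, i].
    Returns a list of (jump_time, mode) pairs starting with (0.0, eta0).
    """
    t, i, path = 0.0, eta0, [(0.0, eta0)]
    while True:
        t += rng.exponential(1.0 / -Lam[i, i])   # exponential sojourn time
        if t >= T:
            return path
        probs = Lam[i].clip(min=0.0)
        probs = probs / probs.sum()              # jump distribution over j != i
        i = int(rng.choice(len(Lam), p=probs))
        path.append((t, i))

path = simulate_modes(Lam, eta0=0, T=50.0, rng=np.random.default_rng(2))
modes = [m for _, m in path]
assert all(a != b for a, b in zip(modes, modes[1:]))  # successive modes differ
```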
Suppose the activation functions sq(·) (q = 1, 2, . . . , n) also satisfy (H1). Similarly to the above analysis, we obtain

  ẏ(t) = −C(ηt) y(t) + W(ηt) f(y(t − τ)).   (8)

Accordingly, the trivial solution of system (8) exists. We now present the main result for MJDHNN (7), equivalently (8). To this end, we rewrite (8) as
  ẏ(t) = −C(ηt) y(t) + W(ηt) G(y(t − τ)) y(t − τ),   (9)

where G(y(t)) = diag(g1(y1(t)), . . . , gn(yn(t))) and 0 ≤ gj(yj(t)) = fj(yj(t))/yj(t) ≤ Lj. Moreover, since y(t − τ) = y(t) − ∫_{t−τ}^{t} ẏ(α) dα, substituting this into (9) yields

  ẏ(t) = −C(ηt) y(t) + W(ηt) G(y(t − τ)) y(t) − W(ηt) G(y(t − τ)) ∫_{t−τ}^{t} ẏ(α) dα.   (10)

Evidently, the solution of (10) is equivalent to that of (8).

Definition 1. The MJDHNN (7) is said to be stochastically asymptotically stable if, for all finite initial states φ0 and modes η0, the following relation holds:

  lim_{T→∞} E{ ∫_{0}^{T} ‖x(t, φ0, η0)‖² dt | φ0, η0 } = 0.   (11)

In the sequel, for simplicity, when ηt = i the matrices C(ηt), W(ηt) are written Ci, Wi.
3. Main results

In this section, some sufficient conditions for stochastic asymptotic stability of MJDHNN (7), equivalently (8), are obtained.

Theorem 1. Suppose (H1) holds. The MJDHNN (7) is stochastically asymptotically stable if there exist symmetric positive definite matrices Pi > 0 (i ∈ S) and X > 0, Y > 0, Z > 0, U > 0, R > 0 satisfying the following LMIs:

  [ X    Y ]
  [ Y^T  Z ]  ≥ 0,   (12)

  [ (1,1)   Y Wi − 2Ci^T Wi   Pi Wi   L^T Wi^T Pi ]
  [   *          (2,2)          0          0      ]
  [   *            *           −R          0      ]  < 0,   (13)
  [   *            *            0         −U      ]

where

  (1,1) = −2Pi Ci + L^T R L + X − 2Ci^T Y + 2Ci^T L^T Wi^T Pi + Ci^T Z Ci + Σ_{j=1}^{N} πij Pj,
  (2,2) = Wi^T U Wi + Wi^T Z Wi.
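For fixed candidate matrices, checking (12)–(13) reduces to eigenvalue tests on the assembled blocks. The helper below is a hypothetical NumPy sketch that mirrors the printed inequalities and shows how the block matrix of (13) is put together; in practice the Pi, X, Y, Z, U, R would be searched for with an LMI solver (e.g., the MATLAB LMI toolbox used in Section 4), not checked pointwise.

```python
import numpy as np

def max_eig(M):
    """Largest eigenvalue of the symmetric part of M."""
    return np.max(np.linalg.eigvalsh((M + M.T) / 2.0))

def theorem1_holds(C, W, P, L, X, Y, Z, U, R, Lam):
    """Check LMIs (12)-(13) of Theorem 1 for fixed candidate matrices.

    C[i], W[i], P[i] are the per-mode matrices, L the Lipschitz matrix,
    Lam the generator (pi_ij).  Returns True iff (12) holds and the
    4x4 block matrix of (13) is negative definite for every mode i.
    """
    n = X.shape[0]
    if max_eig(-np.block([[X, Y], [Y.T, Z]])) > 0:    # (12) fails
        return False
    N = len(P)
    for i in range(N):
        Ci, Wi, Pi = C[i], W[i], P[i]
        blk11 = (-2.0 * Pi @ Ci + L.T @ R @ L + X - 2.0 * Ci.T @ Y
                 + 2.0 * Ci.T @ L.T @ Wi.T @ Pi + Ci.T @ Z @ Ci
                 + sum(Lam[i, j] * P[j] for j in range(N)))
        blk22 = Wi.T @ U @ Wi + Wi.T @ Z @ Wi
        B12 = Y @ Wi - 2.0 * Ci.T @ Wi
        B13 = Pi @ Wi
        B14 = L.T @ Wi.T @ Pi
        O = np.zeros((n, n))
        M = np.block([[blk11, B12,   B13, B14],
                      [B12.T, blk22, O,   O],
                      [B13.T, O,     -R,  O],
                      [B14.T, O,     O,   -U]])
        if max_eig(M) >= 0:                           # (13) fails for mode i
            return False
    return True
```

For instance, with a single mode and Z chosen negative definite, condition (12) fails and the helper returns False.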
Proof. For mode i ∈ S, construct the Lyapunov functional

  V(t, y(t), ηt = i) = y^T(t) Pi y(t).   (14)

By Itô's rule, the weak infinitesimal operator ℒ[·] of the process {y(t), ηt, t ≥ 0} for system (10) at the point {t, y(t), ηt} is given by [14,26]

  ℒ[V] = ∂V/∂t + ẏ^T(t) (∂V/∂y)|_{ηt=i} + Σ_{j=1}^{N} πij V(t, y(t), j).   (15)

Using (10) in (14)–(15), manipulating the terms, applying the argument of completing the squares, and over-bounding the result, we get

  ℒ[V] ≤ 2y^T(t) Pi [−Ci y(t) + Wi G(y(t − τ)) y(t)]
         − 2y^T(t) Pi Wi G(y(t − τ)) ∫_{t−τ}^{t} ẏ(α) dα + y^T(t) Σ_{j=1}^{N} πij Pj y(t).   (16)

From Fact 2, it follows that

  2y^T(t) Pi Wi G(y(t − τ)) y(t) ≤ y^T(t) [Pi Wi R^{-1} Wi^T Pi + L^T R L] y(t).   (17)

In addition, by Fact 3, it follows that
  −2y^T(t) Pi Wi G(y(t − τ)) ∫_{t−τ}^{t} ẏ(α) dα

    ≤ ∫_{t−τ}^{t} [ y(t); ẏ(α) ]^T [ X,  Y − Pi Wi G(y(t − τ));  Y^T − (Pi Wi G(y(t − τ)))^T,  Z ] [ y(t); ẏ(α) ] dα.   (18)
Substituting (17) and (18) into (16), we obtain

  ℒ[V] ≤ −2y^T(t) Pi Ci y(t) + y^T(t) [Pi Wi R^{-1} Wi^T Pi + L^T R L] y(t)
         + y^T(t) X y(t) − 2y^T(t) Ci^T Y y(t) + 2f^T(y(t − τ)) Wi^T Y^T y(t)
         + 2y^T(t) Ci^T [Pi Wi G(y(t − τ))]^T y(t)
         − 2f^T(y(t − τ)) Wi^T [Pi Wi G(y(t − τ))]^T y(t)
         + y^T(t) Ci^T Z Ci y(t) + f^T(y(t − τ)) Wi^T Z Wi f(y(t − τ))
         − 2y^T(t) Ci^T Wi f(y(t − τ)) + y^T(t) Σ_{j=1}^{N} πij Pj y(t).   (19)

Using Fact 2, we have

  −2f^T(y(t − τ)) Wi^T [Pi Wi G(y(t − τ))]^T y(t)
    ≤ f^T(y(t − τ)) Wi^T U Wi f(y(t − τ)) + y^T(t) G^T(y(t − τ)) Wi^T Pi^T U^{-1} Pi Wi G(y(t − τ)) y(t)
    ≤ f^T(y(t − τ)) Wi^T U Wi f(y(t − τ)) + y^T(t) L^T Wi^T Pi^T U^{-1} Pi Wi L y(t).   (20)

Therefore, we get

  ℒ[V] ≤ −2y^T(t) Pi Ci y(t) + y^T(t) [Pi Wi R^{-1} Wi^T Pi + L^T R L] y(t)
         + y^T(t) X y(t) − 2y^T(t) Ci^T Y y(t) + 2f^T(y(t − τ)) Wi^T Y^T y(t)
         + 2y^T(t) Ci^T L^T Wi^T Pi^T y(t) + f^T(y(t − τ)) Wi^T U Wi f(y(t − τ))
         + y^T(t) L^T Wi^T Pi^T U^{-1} Pi Wi L y(t)
         + y^T(t) Ci^T Z Ci y(t) − 2y^T(t) Ci^T Wi f(y(t − τ))
         + f^T(y(t − τ)) Wi^T Z Wi f(y(t − τ)) + y^T(t) Σ_{j=1}^{N} πij Pj y(t)

       = [y(t); f(y(t − τ))]^T Ω1 [y(t); f(y(t − τ))],   (21)

where
  Ω1 = [ (1,1)   Y Wi − 2Ci^T Wi ]
       [   *          (2,2)      ],

  (1,1) = −2Pi Ci + L^T R L + X − 2Ci^T Y + 2Ci^T L^T Wi^T Pi^T + Ci^T Z Ci + Σ_{j=1}^{N} πij Pj
          + Pi Wi R^{-1} Wi^T Pi + L^T Wi^T Pi^T U^{-1} Pi Wi L,
  (2,2) = Wi^T U Wi + Wi^T Z Wi.

By the Schur complement, LMIs (12)–(13) guarantee Ω1 < 0 for all i ∈ S, so ℒ[V] < 0 for all y ≠ 0 and ℒ[V] ≤ 0 for all y. This completes the proof of Theorem 1. □

By constructing another Lyapunov functional, we can obtain the following result.

Theorem 2. Suppose (H1) holds. The MJDHNN (7) is stochastically asymptotically stable if there exist symmetric positive definite matrices Pi > 0, Qi > 0 (i ∈ S), matrices X > 0, Y > 0, Z > 0, U > 0, R > 0, and scalars εi > 0 (i ∈ S) satisfying the following LMIs:

  [ X    Y ]
  [ Y^T  Z ]  ≥ 0,   (22)

  [ (1,1)   Y Wi − 2Ci^T Wi   Pi Wi   L^T Wi^T Pi ]
  [   *          (2,2)          0          0      ]
  [   *            *           −R          0      ]  < 0,   (23)
  [   *            *            0         −U      ]

where

  (1,1) = −2Pi Ci + L^T R L + X − 2Ci^T Y + 2Ci^T L^T Wi^T Pi + Ci^T Z Ci + τ L^T Qi L
          + Σ_{j=1}^{N} πij Pj + εi L^T (Σ_{j=1}^{N} πij Qj) L,
  (2,2) = Wi^T U Wi + Wi^T Z Wi − τ Qi − εi^{-1} Σ_{j=1}^{N} πij Qj.
Proof. For mode i ∈ S, construct the Lyapunov functional

  V(t, y(t), ηt = i) = y^T(t) Pi y(t) + τ ∫_{t−τ}^{t} f^T(y(s)) Qi f(y(s)) ds.   (24)

Using (10) in (15) and (24), manipulating the terms, applying the argument of completing the squares, and over-bounding the result, we get

  ℒ[V] ≤ 2y^T(t) Pi [−Ci y(t) + Wi G(y(t − τ)) y(t)] − 2y^T(t) Pi Wi G(y(t − τ)) ∫_{t−τ}^{t} ẏ(α) dα
         + τ f^T(y(t)) Qi f(y(t)) − τ f^T(y(t − τ)) Qi f(y(t − τ))
         + y^T(t) Σ_{j=1}^{N} πij Pj y(t) + εi y^T(t) L^T (Σ_{j=1}^{N} πij Qj) L y(t)
         − εi^{-1} f^T(y(t − τ)) (Σ_{j=1}^{N} πij Qj) f(y(t − τ)).   (25)
Similarly to the proof of Theorem 1, we obtain

  ℒ[V] ≤ −2y^T(t) Pi Ci y(t) + y^T(t) [Pi Wi R^{-1} Wi^T Pi + L^T R L] y(t)
         + y^T(t) X y(t) − 2y^T(t) Ci^T Y y(t) + 2f^T(y(t − τ)) Wi^T Y^T y(t)
         + 2y^T(t) Ci^T L^T Wi^T Pi^T y(t) + f^T(y(t − τ)) Wi^T U Wi f(y(t − τ))
         + y^T(t) L^T Wi^T Pi^T U^{-1} Pi Wi L y(t) + y^T(t) Ci^T Z Ci y(t)
         − 2y^T(t) Ci^T Wi f(y(t − τ)) + f^T(y(t − τ)) Wi^T Z Wi f(y(t − τ))
         + τ y^T(t) L^T Qi L y(t) − τ f^T(y(t − τ)) Qi f(y(t − τ))
         + y^T(t) Σ_{j=1}^{N} πij Pj y(t) + εi y^T(t) L^T (Σ_{j=1}^{N} πij Qj) L y(t)
         − εi^{-1} f^T(y(t − τ)) (Σ_{j=1}^{N} πij Qj) f(y(t − τ))

       = [y(t); f(y(t − τ))]^T Ω2 [y(t); f(y(t − τ))],   (26)

where

  Ω2 = [ (1,1)   Y Wi − 2Ci^T Wi ]
       [   *          (2,2)      ],

  (1,1) = −2Pi Ci + L^T R L + X − 2Ci^T Y + 2Ci^T L^T Wi^T Pi^T + Ci^T Z Ci + τ L^T Qi L
          + Σ_{j=1}^{N} πij Pj + εi L^T (Σ_{j=1}^{N} πij Qj) L
          + Pi Wi R^{-1} Wi^T Pi + L^T Wi^T Pi^T U^{-1} Pi Wi L,
  (2,2) = Wi^T U Wi + Wi^T Z Wi − τ Qi − εi^{-1} Σ_{j=1}^{N} πij Qj.
By the Schur complement, LMIs (22)–(23) guarantee Ω2 < 0 for all i ∈ S, so ℒ[V] < 0 for all y ≠ 0 and ℒ[V] ≤ 0 for all y. This completes the proof of Theorem 2. □

The conditions in Theorem 2 involve several adjustable parameters, which may make them difficult to apply. To make the criteria more easily testable, we let U = R = I and Qi = I, for all i ∈ S, in Theorem 2; we then obtain the following corollary.

Corollary 1. Suppose (H1) holds. The MJDHNN (7) is stochastically asymptotically stable if there exist symmetric positive definite matrices Pi > 0 (i ∈ S), matrices X > 0, Y > 0, Z > 0, and scalars εi > 0 (i ∈ S) satisfying the following LMIs:
  [ X    Y ]
  [ Y^T  Z ]  ≥ 0,   (27)

  [ (1,1)   Y Wi − 2Ci^T Wi ]
  [   *          (2,2)      ]  < 0,   (28)

where

  (1,1) = −2Pi Ci + (1 + τ) L² + X − 2Ci^T Y + 2Ci^T L^T Wi^T Pi + Ci^T Z Ci
          + Pi Wi Wi^T Pi + L^T Wi^T Pi² Wi L + Σ_{j=1}^{N} πij Pj + εi Σ_{j=1}^{N} πij L²,
  (2,2) = Wi^T Wi + Wi^T Z Wi − τ I − εi^{-1} Σ_{j=1}^{N} πij I.

Furthermore, for system (10) with L = I and Ci = I (i ∈ S), we get the following corollary.

Corollary 2. Suppose (H1) holds. The MJDHNN (7) is stochastically asymptotically stable if there exist symmetric positive definite matrices Pi > 0 (i ∈ S), matrices X > 0, Y > 0, Z > 0, and scalars εi > 0 (i ∈ S) satisfying the following LMIs:

  [ X    Y ]
  [ Y^T  Z ]  ≥ 0,   (29)

  [ (1,1)   Y Wi − 2Ci^T Wi ]
  [   *          (2,2)      ]  < 0,   (30)

where

  (1,1) = −2Pi Ci + (1 + τ) I + X − 2Ci^T Y + 2Ci^T Wi^T Pi + Ci^T Z Ci
          + Pi Wi Wi^T Pi + Wi^T Pi² Wi + Σ_{j=1}^{N} πij Pj + εi Σ_{j=1}^{N} πij I,
  (2,2) = Wi^T Wi + Wi^T Z Wi − τ I − εi^{-1} Σ_{j=1}^{N} πij I.
Remark 1. Owing to the use of the less conservative Fact 3, our results improve those established in earlier references. Most previous works [3,4] assume that the activation functions are differentiable; in our Theorems 1 and 2 we only require that the activation functions satisfy hypothesis (H1). In addition, our conditions do not require symmetry of the weight matrices, and are therefore milder than the restrictions in [9].

Remark 2. When the Markovian jumping parameters in system (7) are neglected, system (7) reduces to model (1), analyzed in [3,4,9,10]. In several studies of related neural networks [3,4,9,10,20–24], the signal propagation functions si(·) were assumed to be bounded in order to obtain the existence, uniqueness, and stability of the equilibrium point, an assumption that may greatly restrict the application domain of the networks. Clearly, our results contain those given in [3,4,9,10], and our conditions are less restrictive than those in [3,4,9,10].
4. A numerical example

In this section, we present a numerical example to illustrate our results.

Example 1. We consider the MJDHNN (7) with n = 2. Let the Markov process governing the mode switching have generator

  Λ = [ −0.4   0.4 ]
      [  0.3  −0.3 ].

For the two operating conditions (modes), the associated data are:

  C(1) = [ 5  0 ]     W(1) = [ 0.8  −0.7 ]     J(1) = [ 1 ]
         [ 0  3 ],           [ 0.4   0.6 ],           [ 2 ],

  C(2) = [ 2  0 ]     W(2) = [ 0.6   0.4 ]     J(2) = [ −3  ]
         [ 0  3 ],           [ −0.5  0.4 ],           [ 1.5 ].

Suppose the activation function is sq(u) = tanh(u) = (e^u − e^{−u}) / (e^u + e^{−u}), q = 1, 2. Then L = I, and the sq(u) satisfy (H1). Let τ = 0.6 in Theorem 1 above; then, using the MATLAB LMI toolbox, we obtain the following feasible solution of LMIs (12) and (13):

  P(1) = [ 9.9414   0.9802 ] > 0,     P(2) = [ 21.6623   4.7188 ] > 0,
         [ 0.9802  23.0880 ]                 [  4.7188  17.0562 ]

  R = [ 32.4567  14.1032 ] > 0,       U = [  0.6883  10.1100 ] > 0,
      [ 14.2584  40.2103 ]                [ −9.6676   1.4997 ]

  X = [ 24.2911   5.6939 ] > 0,       Y = [ 2.5083  −0.0043 ] > 0,
      [ −0.4695   3.4984 ]                [ 5.8063  36.9536 ]

  Z = [   0.4521  10.6998 ] > 0.   (31)
      [ −10.6019   0.3055 ]
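As a complementary illustration (not part of the original analysis), the example system can also be simulated directly with a forward-Euler scheme and a sampled mode process; with the data above the trajectories remain bounded and track the mode-dependent equilibria. A hypothetical sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

# Data of Example 1: two modes, n = 2, tau = 0.6.
Lam = np.array([[-0.4, 0.4], [0.3, -0.3]])
C = [np.diag([5.0, 3.0]), np.diag([2.0, 3.0])]
W = [np.array([[0.8, -0.7], [0.4, 0.6]]),
     np.array([[0.6, 0.4], [-0.5, 0.4]])]
J = [np.array([1.0, 2.0]), np.array([-3.0, 1.5])]
tau, dt, T = 0.6, 0.001, 30.0

d = int(round(tau / dt))                     # delay expressed in Euler steps
steps = int(round(T / dt))
x = np.zeros((steps + 1, 2))
x[: d + 1] = rng.uniform(-1.0, 1.0, size=2)  # constant initial function on [-tau, 0]

mode = 0
for k in range(d, steps):
    # Switch mode with probability -pi_ii * dt (Euler discretization of (6)).
    if rng.random() < -Lam[mode, mode] * dt:
        mode = 1 - mode
    x[k + 1] = x[k] + dt * (-C[mode] @ x[k] + W[mode] @ np.tanh(x[k - d]) + J[mode])

# Trajectories stay bounded despite the random mode switches.
assert np.all(np.abs(x) < 10.0)
```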
Then the conditions of Theorem 1 are satisfied. Therefore, MJDHNN (7) is stochastically asymptotically stable.

5. Conclusions

Without assuming boundedness, monotonicity, or differentiability of the activation functions, some sufficient conditions for delay-dependent stochastic asymptotic stability of delayed Hopfield neural networks with Markovian jump parameters have been obtained. The network model considered here is general and includes delayed Hopfield-type neural networks and BAM neural networks as special cases. Some of our results improve on previous works established by other researchers and are less conservative.

References

[1] J.J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. 79 (1982) 2554–2558.
[2] J.J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Natl. Acad. Sci. 81 (1984) 3088–3092.
[3] K. Gopalsamy, X.Z. He, Stability in asymmetric Hopfield nets with transmission delays, Phys. D 76 (1994) 344–358.
[4] C. Hou, J. Qian, Stability analysis for neural dynamics with time-varying delays, IEEE Trans. Neural Networks 9 (1998) 221–223.
[5] J.H. Li, A.N. Michel, W. Porod, Qualitative analysis and synthesis of a class of neural networks, IEEE Trans. Circuits Syst. 35 (1988) 976–985.
[6] P.K. Das II, W.C. Schieve, A bifurcation analysis of the four dimensional generalized Hopfield neural network, Phys. D 88 (1) (1995) 14–28.
[7] X.X. Liao, Stability of Hopfield neural networks, Sci. China Ser. A 23 (1992) 1025–1035.
[8] S. Arik, Stability analysis of delayed neural networks, IEEE Trans. Circuits Syst. I 47 (2000) 1089–1092.
[9] Z.B. Xu, Global convergence and asymptotic stability of asymmetric Hopfield neural networks, J. Math. Anal. Appl. 191 (1995) 405–427.
[10] Q. Zhang, X.P. Wei, J. Xu, Global asymptotic stability of Hopfield neural networks with transmission delays, Phys. Lett. A 318 (2003) 399–405.
[11] H.Y. Zhao, Global asymptotic stability of Hopfield neural network involving distributed delays, Neural Networks 17 (2004) 47–53.
[12] J.Y. Zhang, X.S. Jin, Global stability analysis in delayed Hopfield neural network models, Neural Networks 13 (2000) 745–753.
[13] N.M. Krasovskii, E.A. Lidskii, Analytical design of controllers in systems with random attributes, Autom. Remote Control 22 (1961) 1021–1025.
[14] M.S. Mahmoud, P. Shi, Robust stability, stabilization and H∞ control of time-delay systems with Markovian jump parameters, Int. J. Robust Nonlinear Control 13 (2003) 755–784.
[15] D. Sworder, Feedback control of a class of linear systems with jump parameters, IEEE Trans. Automat. Control 14 (1) (1969) 9–14.
[16] Y. Ji, H.J. Chizeck, Controllability, stability and continuous-time Markovian jump linear quadratic control, IEEE Trans. Automat. Control 35 (7) (1990) 777–788.
[17] X. Feng, K.A. Loparo, Y. Ji, H.J. Chizeck, Stochastic stability properties of jump linear systems, IEEE Trans. Automat. Control 37 (1) (1992) 38–53.
[18] W.H. Chen, Z.H. Guan, X. Lu, Delay-dependent output feedback stabilization of Markovian jump system with time-delay, IEE Proc. Control Theory Appl. 151 (5) (2004) 561–566.
[19] L. Xie, Stochastic robust stability analysis for Markovian jumping neural networks with time delays, in: Proceedings of the IEEE International Conference on Networking, Sensing and Control, 2005, pp. 923–928.
[20] B.T. Cui, X.Y. Lou, Global asymptotic stability of BAM neural networks with distributed delays and reaction-diffusion terms, Chaos Solitons Fractals 27 (2006) 1347–1354.
[21] X.Y. Lou, B.T. Cui, Global asymptotic stability of delay BAM neural networks with impulses, Chaos Solitons Fractals 29 (4) (2006) 1023–1031.
[22] X.Y. Lou, B.T. Cui, Absolute exponential stability analysis of delayed bi-directional associative memory neural networks, Chaos Solitons Fractals 31 (3) (2006) 695–701.
[23] J.D. Cao, On exponential stability and periodic solutions of CNNs with delays, Phys. Lett. A 267 (2000) 312–318.
[24] J.D. Cao, On stability of delayed cellular neural networks, Phys. Lett. A 267 (2000) 303–308.
[25] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM Stud. Appl. Math., SIAM, Philadelphia, PA, 1994.
[26] H. Kushner, Stochastic Stability and Control, Academic Press, New York, 1967.