Nonlinear Analysis: Hybrid Systems 3 (2009) 408–417
Dynamic analysis of Markovian jumping impulsive stochastic Cohen–Grossberg neural networks with discrete interval and distributed time-varying delays

R. Rakkiyappan, P. Balasubramaniam∗
Department of Mathematics, Gandhigram Rural University, Gandhigram - 624302, Tamil Nadu, India
Article history: Received 22 February 2009; Accepted 23 February 2009

Keywords: Asymptotic stability; Cohen–Grossberg neural networks; Linear matrix inequality; Lyapunov–Krasovskii functional; Markovian jumping parameters; Impulsive stochastic neural networks
Abstract: In this paper, the dynamic analysis problem is considered for a new class of Markovian jumping impulsive stochastic Cohen–Grossberg neural networks (CGNNs) with discrete interval and distributed delays. The parameter uncertainties are assumed to be norm bounded, and the discrete delay is assumed to be time-varying and to belong to a given interval, which means that both the lower and upper bounds of the interval time-varying delay are available. Based on a Lyapunov–Krasovskii functional and stochastic stability theory, delay-interval-dependent stability criteria are obtained in terms of linear matrix inequalities (LMIs). The asymptotic stability criteria are formulated as the feasibility of an LMI, which can be checked easily by the Matlab LMI Toolbox. A numerical example is provided to show that the proposed results significantly improve the allowable upper bounds of delays over some existing results in the literature.
1. Introduction

Since the Cohen–Grossberg neural network (CGNN) was proposed and studied by Cohen and Grossberg in 1983 [1], it has gained extensive research attention and has promising application potential in classification, associative memory, parallel computation and nonlinear optimization problems. During hardware implementation, time delays arise from the finite switching speed of the amplifiers and from communication time; thus, delays should be incorporated into the model equations of the network. For this model, [2] introduced delays and studied the qualitative properties of CGNNs. Some other, more detailed justifications for introducing delays into the model equations of neural networks can be found in [3–8] and the references therein. Dynamical systems are often classified into two categories, continuous-time or discrete-time, and both are widely studied in population models and neural networks. Yet there is a somewhat newer category that is neither purely continuous-time nor purely discrete-time: dynamical systems with impulses. This third category displays a combination of characteristics of both continuous-time and discrete-time systems. A fundamental theory of impulsive differential equations has been developed in [9]. For instance, in the implementation of electronic networks, the state of the network is subject to instantaneous perturbations and experiences abrupt changes at certain instants, which may be caused by switching phenomena, frequency changes or other sudden noise; that is, the network exhibits impulsive effects [10–12]. Neural networks are often subject to impulsive perturbations that in turn affect the dynamical behaviors of the systems. Therefore, it is necessary to take both time delays and
This work was supported by UGC-SAP (DRS), New Delhi, India under sanction No. F510/6/DRS/2004 (SAP-1).
∗ Corresponding author. Tel.: +91 451 2452371; fax: +91 451 2453071. E-mail address: [email protected] (P. Balasubramaniam).
impulsive effects into account when studying the dynamical behaviors of neural networks [13–16]. Furthermore, research on impulsive Cohen–Grossberg neural networks has received much interest in recent years [17–19]. However, the models studied in all the above-mentioned papers have been largely restricted to deterministic differential equations. These models do not take into account the inherent randomness that is associated with signal transmission. As pointed out in [20], in real nervous systems and in the implementation of artificial neural networks, synaptic transmission is a noisy process brought on by random fluctuations in the release of neurotransmitters and other probabilistic causes; hence, noise is unavoidable and should be taken into consideration in modeling. Therefore, it is of practical importance to study the stochastic effects on the stability of delayed Cohen–Grossberg neural networks; see, for example, [21–24]. Markovian jump systems (MJSs) are a special class of hybrid systems, specified by two components in the state. The first refers to the mode, which is described by a continuous-time finite-state Markovian process, and the second refers to the state, which is represented by a system of differential equations. MJSs have the advantage of modeling dynamic systems subject to abrupt variations in their structures, such as component failures or repairs, sudden environmental disturbances, changing subsystem interconnections, and operation at different points of a nonlinear plant [25]. In other words, a neural network may have finite modes, the mode may switch from one to another at different times, and the switching between different neural network modes can be governed by a Markovian chain. Hence, stochastic neural networks with such a jumping character are of great significance in modeling a class of neural networks with finite modes. Impulsive stochastic Cohen–Grossberg neural networks with mixed delays have been studied in [26]. The stability of stochastic CGNNs with Markovian jumping and mixed time delays has been studied in [27] and [28], where [28] considers impulsive perturbations. Recently, Balasubramaniam and Rakkiyappan [29] have studied the robust stability analysis of Markovian jumping stochastic Cohen–Grossberg neural networks with discrete interval and distributed time-varying delays. To the best of our knowledge, the stability analysis problem for Markovian jumping impulsive stochastic CGNNs with discrete interval and distributed time-varying delays has not been fully investigated; it is very challenging and remains an open issue. Motivated by the above discussion, in this paper we consider Markovian jumping impulsive stochastic Cohen–Grossberg neural networks with discrete interval and distributed time-varying delays. This paper contributes to the development of global robust stability in the mean square for such networks. The delay-dependent robust stability conditions are obtained in terms of LMIs, which can be readily verified by using the Matlab LMI Toolbox. A numerical example is given to illustrate the effectiveness and reduced conservatism of the proposed method.

Notations: Throughout this paper, R^n and R^{n×n} denote, respectively, the n-dimensional Euclidean space and the set of all n × n real matrices.
The superscript T denotes transposition, and the notation X ≥ Y (respectively, X > Y), where X and Y are symmetric matrices, means that X − Y is positive semi-definite (respectively, positive definite). I_n is the n × n identity matrix. | · | is the Euclidean norm in R^n. Moreover, let (Ω, F, P) be a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions (i.e., the filtration contains all P-null sets and is right continuous). The notation ∗ always denotes the symmetric block in a symmetric matrix. Sometimes, the arguments of a function or a matrix will be omitted in the analysis when no confusion can arise.

2. Problem description and preliminaries

In this paper, the impulsive CGNNs with discrete and distributed delays are described as follows:

$$\frac{du(t)}{dt} = -a(u(t))\Big[b(u(t)) - Ag(u(t)) - Bg(u(t-\tau(t))) - C\int_{t-r(t)}^{t} g(u(s))\,ds + I\Big], \quad t \neq t_k,$$
$$u(t_k) = D_k u(t_k^-), \quad t = t_k, \tag{1}$$
for t > 0 and k = 1, 2, . . ., where u(t) = [u_1(t), u_2(t), . . . , u_n(t)]^T ∈ R^n is the state vector associated with the n neurons at time t. The first part is the continuous part of model (1), which describes the continuous evolution processes of the neural network: g(u(t)) is the activation function, a(u(t)) is the amplification function, and b(u(t)) denotes the behaved function. The matrices A = (a_{ij})_{n×n}, B = (b_{ij})_{n×n} and C = (c_{ij})_{n×n} are, respectively, the connection weight matrix, the discretely delayed connection weight matrix, and the distributively delayed connection weight matrix. I = [I_1, I_2, . . . , I_n]^T is a constant external input vector. The second part is the discrete part of model (1), which describes abrupt changes of state at the moments t_k: u(t_k) = D_k u(t_k^-) is the impulse at moment t_k, the fixed moments of time t_k satisfy t_1 < t_2 < · · ·, lim_{k→∞} t_k = +∞, and u(t^-) = lim_{s→t^-} u(s); D_k is a constant real matrix associated with the moment t_k. Here PC([−h_2, 0], R^n) denotes the set of piecewise continuous functions φ : [−h_2, 0] → R^n with the sup-norm |φ| = sup_{−h_2 ≤ s ≤ 0} ‖φ(s)‖. For given t_0 and ϕ(t) ∈ PC([−h_2, 0], R^n), the initial condition of (1) is u(t_0 + t) = ϕ(t) for t ∈ [−h_2, 0]. It is well known that bounded activation functions always guarantee the existence of an equilibrium point for CGNNs (1). For convenience, we shift the equilibrium point u^* = [u_1^*, u_2^*, . . . , u_n^*]^T to the origin by the transformation x(t) = u(t) − u^*, which yields the following system:

$$\frac{dx(t)}{dt} = -\alpha(x(t))\Big[\beta(x(t)) - Af(x(t)) - Bf(x(t-\tau(t))) - C\int_{t-r(t)}^{t} f(x(s))\,ds\Big], \quad t \neq t_k,$$
$$x(t_k) = D_k x(t_k^-), \quad t = t_k,$$
$$x(t_0 + t) = \psi(t), \quad t \in [-h_2, 0], \tag{2}$$

where x(t) = [x_1(t), x_2(t), . . . , x_n(t)]^T ∈ R^n is the state vector of the transformed system, and

α(x(t)) = diag(α_1(x_1(t)), α_2(x_2(t)), . . . , α_n(x_n(t))), with α_l(x_l(t)) = a_l(x_l(t) + u_l^*), l = 1, 2, . . . , n,
β(x(t)) = [β_1(x_1(t)), β_2(x_2(t)), . . . , β_n(x_n(t))]^T, with β_l(x_l(t)) = b_l(x_l(t) + u_l^*) − b_l(u_l^*),
f(x(·)) = [f_1(x_1(·)), f_2(x_2(·)), . . . , f_n(x_n(·))]^T, with f_l(x_l(·)) = g_l(x_l(·) + u_l^*) − g_l(u_l^*),
x(t_k) = u(t_k) − u^* = D_k x(t_k^-),
ψ(t) = u(t_0 + t) − u^*. In the real world, neural networks are often disturbed by environmental noise, which affects the stability of the equilibrium point, and by varying structure parameters that follow a Markov process. Now, based on model (2), we are in a position to introduce the delayed impulsive CGNNs with Markovian jumping parameters and stochastic perturbations. Let r(t), t ≥ 0, be a right-continuous Markov chain on the probability space taking values in a finite state space S = {1, 2, . . . , N} with generator Γ = (γ_{ij})_{N×N} given by

$$P\{r(t+\delta) = j \mid r(t) = i\} = \begin{cases} \gamma_{ij}\delta + o(\delta), & i \neq j, \\ 1 + \gamma_{ii}\delta + o(\delta), & i = j, \end{cases}$$

where δ > 0 and lim_{δ→0} o(δ)/δ = 0. Here, γ_{ij} ≥ 0 is the transition rate from i to j if i ≠ j, while γ_{ii} = −Σ_{j=1, j≠i}^{N} γ_{ij}.
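For illustration only, such a chain can be sampled by the standard exponential holding-time construction; the Python sketch below (with a hypothetical two-mode generator, not data from the paper) simulates one path of r(t):

```python
import numpy as np

def simulate_markov_chain(Gamma, r0, T, rng=np.random.default_rng(0)):
    """Sample a continuous-time Markov chain with generator Gamma on [0, T]:
    hold state i for an Exp(-Gamma[i, i]) time, then jump to j != i with
    probability Gamma[i, j] / (-Gamma[i, i])."""
    times, states = [0.0], [r0]
    t, i = 0.0, r0
    while t < T:
        rate = -Gamma[i, i]
        if rate <= 0:                      # absorbing state
            break
        t += rng.exponential(1.0 / rate)   # exponential holding time
        probs = Gamma[i].copy()
        probs[i] = 0.0
        probs /= rate                      # jump distribution over j != i
        i = rng.choice(len(probs), p=probs)
        times.append(min(t, T))
        states.append(i)
    return np.array(times), np.array(states)

# hypothetical two-mode generator with zero row sums
Gamma = np.array([[-7.0, 7.0],
                  [6.0, -6.0]])
times, states = simulate_markov_chain(Gamma, r0=0, T=1.0)
```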
In this paper, we consider Markovian jumping impulsive stochastic CGNNs with discrete interval and distributed delays as follows:

$$dx(t) = -\alpha(x(t), r(t))\Big[\beta(x(t), r(t)) - A(r(t))f(x(t)) - B(r(t))f(x(t-\tau(t))) - C(r(t))\int_{t-r(t)}^{t} f(x(s))\,ds\Big]dt + \sigma\Big(t, x(t), x(t-\tau(t)), \int_{t-r(t)}^{t} f(x(s))\,ds\Big)dw(t), \quad t \neq t_k,$$
$$x(t_k) = D_k x(t_k^-), \quad t = t_k,$$
$$x(t_0 + t) = \psi(t), \quad t \in [-h_2, 0], \tag{3}$$
where σ(·) ∈ R^n and w(t) is a standard Brownian motion. In order to obtain our main results, we assume that the following conditions hold.

(A1) Each neuron activation function f_j(·) is bounded on R and Lipschitz continuous; that is, there exist constants l_j > 0 such that

$$|f_j(x_1) - f_j(x_2)| \le l_j|x_1 - x_2|, \quad \forall x_1, x_2 \in \mathbb{R}.$$
(A2) The time-varying delay τ(t) satisfies

$$0 \le h_1 \le \tau(t) \le h_2, \qquad \dot{\tau}(t) \le \mu < 1,$$

where h_1, h_2 are constants. Furthermore, the bounded function r(t) represents the distributed delay of the system, with 0 < r(t) ≤ r̄. Now we consider the following Markovian jumping impulsive stochastic Cohen–Grossberg neural networks with uncertain parameters:
$$dx(t) = -\alpha(x(t), r(t))\Big[\beta(x(t), r(t)) - \big(A(r(t)) + \Delta A(r(t))\big)f(x(t)) - \big(B(r(t)) + \Delta B(r(t))\big)f(x(t-\tau(t))) - \big(C(r(t)) + \Delta C(r(t))\big)\int_{t-r(t)}^{t} f(x(s))\,ds\Big]dt + \sigma\Big(t, x(t), x(t-\tau(t)), \int_{t-r(t)}^{t} f(x(s))\,ds\Big)dw(t), \quad t \neq t_k,$$
$$x(t_k) = D_k(r(t_k))\,x(t_k^-), \quad t = t_k,$$
$$x(t_0 + t) = \psi(t), \quad t \in [-h_2, 0]. \tag{4}$$
For convenience, each possible value of r(t) is denoted by i, i ∈ S, in what follows. Then we have

A_i = A(r(t)), B_i = B(r(t)), C_i = C(r(t)), ΔA_i = ΔA(r(t)), ΔB_i = ΔB(r(t)), ΔC_i = ΔC(r(t)),
where A_i, B_i and C_i, for any i ∈ S, are known constant matrices of appropriate dimensions, and ΔA_i, ΔB_i and ΔC_i, for i ∈ S, are unknown matrices that represent the time-varying parameter uncertainties and are assumed to be of the form

$$[\Delta A_i \;\; \Delta B_i \;\; \Delta C_i] = H_i F_i(t) [T_{1i} \;\; T_{2i} \;\; T_{3i}], \tag{5}$$

where H_i, T_{1i}, T_{2i} and T_{3i} are known real constant matrices and F_i(t), ∀i ∈ S, are unknown time-varying matrix functions satisfying

$$F_i^T(t)F_i(t) \le I, \quad \forall i \in S. \tag{6}$$
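Numerically, an admissible perturbation triple satisfying (5)–(6) can be generated from any matrix F whose spectral norm is at most one; a small illustrative sketch with hypothetical structure matrices (none of these values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2
# hypothetical structure matrices H, T1, T2, T3
H = np.eye(n)
T1, T2, T3 = (0.2 * np.eye(n) for _ in range(3))

# any F with spectral norm <= 1 yields admissible perturbations via (5)
F = rng.uniform(-1.0, 1.0, (n, n))
F /= max(1.0, np.linalg.norm(F, 2))   # enforce the norm bound (6)
dA, dB, dC = H @ F @ T1, H @ F @ T2, H @ F @ T3

# sanity check: I - F^T F is positive semi-definite
assert np.all(np.linalg.eigvalsh(np.eye(n) - F.T @ F) >= -1e-12)
```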
It is assumed that all the elements of F_i(t) are Lebesgue measurable. The matrices ΔA_i, ΔB_i and ΔC_i are said to be admissible if both (5) and (6) hold. In mode i, system (4) can be written as

$$dx(t) = -\alpha_i(x(t))\Big[\beta_i(x(t)) - A_i(t)f(x(t)) - B_i(t)f(x(t-\tau(t))) - C_i(t)\int_{t-r(t)}^{t} f(x(s))\,ds\Big]dt + \sigma_i\Big(t, x(t), x(t-\tau(t)), \int_{t-r(t)}^{t} f(x(s))\,ds\Big)dw(t), \quad t \neq t_k,$$
$$x(t_k) = D_{ik}\, x(t_k^-), \quad t = t_k,$$
$$x(t_0 + t) = \psi(t), \quad t \in [-h_2, 0], \tag{7}$$
where A_i(t) = A_i + ΔA_i(t), B_i(t) = B_i + ΔB_i(t), C_i(t) = C_i + ΔC_i(t).
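To build intuition for (7), its nominal version (ΔA_i = ΔB_i = ΔC_i = 0) can be integrated by an Euler–Maruyama scheme that holds the Markov mode constant over each step and applies the impulse map at the jump instants. The sketch below is only an illustration under hypothetical choices (α_i ≡ 1, β_i(x) = Δ_i x, f = tanh, constant delays, diagonal noise); none of these values come from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dt, T = 2, 1e-3, 5.0
tau, r_bar = 0.5, 0.5                       # constant delays (hypothetical)
d_tau, d_r = int(tau / dt), int(r_bar / dt)

# hypothetical two-mode data
A = [np.array([[0.2, -0.1], [0.0, 0.1]]), np.array([[0.1, 0.0], [-0.2, 0.2]])]
B = [0.1 * np.eye(n), 0.2 * np.eye(n)]
C = [0.05 * np.eye(n), 0.05 * np.eye(n)]
Delta = [2.0 * np.eye(n), 3.0 * np.eye(n)]  # beta_i(x) = Delta_i @ x
Gamma = np.array([[-1.0, 1.0], [2.0, -2.0]])
D_imp = 0.5 * np.eye(n)                     # impulse gain D_k
imp_every = int(1.0 / dt)                   # impulses at t_k = 1, 2, ...
sig = 0.1                                   # sigma(.) = 0.1 x(t), diagonal

steps = int(T / dt)
x = np.ones((steps + 1, n))                 # constant initial history psi
mode = 0
for k in range(d_tau, steps):
    if rng.random() < -Gamma[mode, mode] * dt:    # mode jump ~ generator
        mode = 1 - mode
    f_now = np.tanh(x[k])
    f_del = np.tanh(x[k - d_tau])                 # f(x(t - tau))
    f_int = np.tanh(x[k - d_r:k]).sum(axis=0) * dt  # distributed-delay term
    drift = -(Delta[mode] @ x[k] - A[mode] @ f_now
              - B[mode] @ f_del - C[mode] @ f_int)
    dw = np.sqrt(dt) * rng.standard_normal(n)
    x[k + 1] = x[k] + drift * dt + sig * x[k] * dw
    if (k + 1) % imp_every == 0:                  # impulsive instant t_k
        x[k + 1] = D_imp @ x[k + 1]
```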
Next, we make the following assumptions:

$$0 < \underline{\alpha}_{il} \le \alpha_{il}(\cdot) \le \bar{\alpha}_{il}, \qquad \underline{\alpha}_i = \min_{1\le l\le n}(\underline{\alpha}_{il}), \qquad \bar{\alpha}_i = \max_{1\le l\le n}(\bar{\alpha}_{il}),$$
$$x_l(t)\beta_{il}(x_l(t)) \ge \mu_{il} x_l^2(t), \quad \mu_{il} > 0, \qquad \Delta_i = \mathrm{diag}(\mu_{i1}, \mu_{i2}, \ldots, \mu_{in}), \qquad |f_j(x)| \le |l_j x|,$$
$$\mathrm{tr}\Big(\sigma_i^T\Big(t, x(t), x(t-\tau(t)), \int_{t-r(t)}^{t} f(x(s))ds\Big)\,\sigma_i\Big(t, x(t), x(t-\tau(t)), \int_{t-r(t)}^{t} f(x(s))ds\Big)\Big) \le |\Sigma_{i1}x(t)|^2 + |\Sigma_{i2}x(t-\tau(t))|^2 + \Big|\Sigma_{i3}\int_{t-r(t)}^{t} f(x(s))ds\Big|^2,$$

where Σ_{i1}, Σ_{i2} and Σ_{i3} are known diagonal matrices. To derive our main results, the following definition and lemmas are first introduced; they are essential for the proofs in what follows.
Definition 2.1. The trivial solution of (7) is asymptotically stable in the mean square if, for any ε > 0, there exists δ > 0 such that E‖x(t)‖² < ε whenever t > t_0 and ‖x(t_0)‖² < δ, and moreover lim_{t→∞} E‖x(t)‖² = 0.
Definition 2.2 ([15]). The function V : R^n × [t_0, ∞) → R_+ belongs to the class v_0 if
(1) the function V is continuous on each of the sets R^n × [t_{k−1}, t_k) and V(0, t) = 0 for all t ≥ t_0;
(2) V(x, t) is locally Lipschitzian in x ∈ R^n;
(3) for each k = 1, 2, . . ., there exist finite limits

$$\lim_{(q,t)\to(x,t_k^-)} V(q,t) = V(x, t_k^-), \qquad \lim_{(q,t)\to(x,t_k^+)} V(q,t) = V(x, t_k^+),$$

with V(x, t_k^+) = V(x, t_k) satisfied.

Lemma 2.3. For any constant matrix M > 0, any scalars a and b with a < b, and a vector function x(t) : [a, b] → R^n such that the integrals concerned are well defined, the following holds:

$$\Big[\int_a^b x(s)\,ds\Big]^T M \Big[\int_a^b x(s)\,ds\Big] \le (b-a)\int_a^b x^T(s)Mx(s)\,ds.$$
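Lemma 2.3 is the Jensen-type integral inequality used later in (21)–(24). A quick numerical sanity check of the discretized bound, with an arbitrary positive definite M and a smooth sample path (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 2000                      # state dimension, quadrature points
a, b = 0.0, 2.0
s = np.linspace(a, b, N)
ds = s[1] - s[0]
x = np.sin(np.outer(s, np.arange(1, n + 1)))  # arbitrary smooth x(s) in R^n

A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)         # arbitrary positive definite M

v = x.sum(axis=0) * ds              # approximates integral of x(s) ds
lhs = v @ M @ v
rhs = (b - a) * ds * np.einsum('si,ij,sj->', x, M, x)
assert lhs <= rhs + 1e-9            # Jensen-type bound of Lemma 2.3
```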
Lemma 2.4. Let M, E and F(t) be real matrices of appropriate dimensions with F(t) satisfying F^T(t)F(t) ≤ I. Then Ψ + MF(t)E + [MF(t)E]^T < 0 holds if and only if there exists a scalar ε > 0 satisfying

$$\Psi + \varepsilon^{-1}MM^T + \varepsilon E^T E < 0.$$

3. Main results

In this section, we derive delay-interval-dependent robust stability in the mean square for the Markovian jumping impulsive stochastic uncertain Cohen–Grossberg neural networks with discrete interval and distributed time-varying delays (7).

Theorem 3.1. Assume (A1) and (A2) hold. For given scalars h_2 > h_1 ≥ 0 and µ, system (7) is globally robustly asymptotically stable in the mean square if there exist matrices Q_l = Q_l^T ≥ 0, l = 1, 2, . . . , 4, R > 0, Y_1 > 0, Y_2 > 0, X_1 > 0, X_2 > 0, X_3 > 0, diagonal matrices K_1 > 0, K_2 > 0, and positive scalars ε_i > 0, ρ_i > 0 satisfying the following LMIs:
$$P_i = \begin{bmatrix} P_{1i} & P_{2i} & P_{3i} & P_{4i} \\ * & P_{5i} & P_{6i} & P_{7i} \\ * & * & P_{8i} & P_{9i} \\ * & * & * & P_{10i} \end{bmatrix} > 0, \tag{8}$$

$$\begin{bmatrix} \Xi_i & \bar{P}_i H_i & \varepsilon_i \bar{T}_i^T \\ * & -\varepsilon_i I & 0 \\ * & * & -\varepsilon_i I \end{bmatrix} < 0, \tag{9}$$

$$D_{ik}^T P_{1j} D_{ik} - P_{1i} < 0, \tag{10}$$

$$P_{1i} < \rho_i I, \tag{11}$$
where Ξ_i = (ϕ_{m,n,i})_{10×10} is symmetric with blocks

ϕ_{1,1,i} = ( −2P_{1i}Δ_i α_i / ᾱ_i² + Σ_{j=1}^{N} γ_{ij}P_{1j} + ρ_i Σ_{i1}^T Σ_{i1} + Q_1 + Q_2 + Q_3 + P_{2i} + h_2 Y_1 + (h_2 − h_1)Y_2 ) ᾱ_i²,
ϕ_{1,2,i} = P_{5i}^T + Σ_{j=1}^{N} γ_{ij}P_{2j} + X_1 − α_i Δ_i^T P_{2i},  ϕ_{1,3,i} = P_{6i} + Σ_{j=1}^{N} γ_{ij}P_{3j} − α_i Δ_i^T P_{3i},
ϕ_{1,4,i} = P_{7i} + Σ_{j=1}^{N} γ_{ij}P_{4j} − α_i Δ_i^T P_{4i},  ϕ_{1,5,i} = −(1−µ)P_{2i} + (1−µ)P_{3i} − (1−µ)P_{4i},
ϕ_{1,6,i} = P_{4i},  ϕ_{1,7,i} = P_{3i},  ϕ_{1,8,i} = P_{1i}A_i + K_1,  ϕ_{1,9,i} = P_{1i}B_i,  ϕ_{1,10,i} = P_{1i}C_i,
ϕ_{2,2,i} = ( Σ_{j=1}^{N} γ_{ij}P_{5j} − (1/h_2)Y_1 ) ᾱ_i²,  ϕ_{2,3,i} = Σ_{j=1}^{N} γ_{ij}P_{6j},  ϕ_{2,4,i} = Σ_{j=1}^{N} γ_{ij}P_{7j},
ϕ_{2,5,i} = −(1−µ)P_{5i} + (1−µ)P_{6i} − (1−µ)P_{7i} − (1−µ)X_1,  ϕ_{2,6,i} = P_{7i},  ϕ_{2,7,i} = −P_{6i},
ϕ_{2,8,i} = P_{2i}^T A_i,  ϕ_{2,9,i} = P_{2i}^T B_i,  ϕ_{2,10,i} = P_{2i}^T C_i,
ϕ_{3,3,i} = ( Σ_{j=1}^{N} γ_{ij}P_{8j} − (1/(h_2−h_1))(Y_1+Y_2) ) ᾱ_i²,  ϕ_{3,4,i} = Σ_{j=1}^{N} γ_{ij}P_{9j},
ϕ_{3,5,i} = −(1−µ)P_{6i}^T + (1−µ)P_{8i} − (1−µ)P_{9i} + (1−µ)X_2^T,  ϕ_{3,6,i} = P_{9i},  ϕ_{3,7,i} = −P_{8i} − X_2^T,
ϕ_{3,8,i} = P_{3i}^T A_i,  ϕ_{3,9,i} = P_{3i}^T B_i,  ϕ_{3,10,i} = P_{3i}^T C_i,
ϕ_{4,4,i} = ( Σ_{j=1}^{N} γ_{ij}P_{10j} − (1/(h_2−h_1))Y_2 ) ᾱ_i²,  ϕ_{4,5,i} = −(1−µ)P_{7i}^T + (1−µ)P_{9i}^T − (1−µ)P_{10i} − (1−µ)X_3^T,
ϕ_{4,6,i} = P_{10i} + X_3^T,  ϕ_{4,7,i} = −P_{9i}^T,  ϕ_{4,8,i} = P_{4i}^T A_i,  ϕ_{4,9,i} = P_{4i}^T B_i,  ϕ_{4,10,i} = P_{4i}^T C_i,
ϕ_{5,5,i} = −(1−µ)Q_1 + ρ_i Σ_{i2}^T Σ_{i2},  ϕ_{5,6,i} = ϕ_{5,7,i} = ϕ_{5,8,i} = 0,  ϕ_{5,9,i} = K_2,  ϕ_{5,10,i} = 0,
ϕ_{6,6,i} = −Q_2,  ϕ_{6,7,i} = ϕ_{6,8,i} = ϕ_{6,9,i} = ϕ_{6,10,i} = 0,
ϕ_{7,7,i} = −Q_3,  ϕ_{7,8,i} = ϕ_{7,9,i} = ϕ_{7,10,i} = 0,
ϕ_{8,8,i} = Q_4 + r̄R − 2K_1,  ϕ_{8,9,i} = ϕ_{8,10,i} = 0,
ϕ_{9,9,i} = −(1−µ)Q_4 − 2K_2,  ϕ_{9,10,i} = 0,
ϕ_{10,10,i} = −(1/r̄)R + ρ_i Σ_{i3}^T Σ_{i3},
P̄_i = [P_{1i}, P_{2i}, P_{3i}, P_{4i}, 0, 0, 0, 0, 0, 0]^T,
T̄_i = [0, 0, 0, 0, 0, 0, 0, T_{1i}^T, T_{2i}^T, T_{3i}^T]^T.
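The unknowns enter (8)–(11) linearly, so feasibility can be tested with any semidefinite programming solver; the paper itself uses the Matlab LMI Toolbox. As a minimal illustration (deliberately not the full theorem), the following CVXPY sketch checks only the impulsive coupling conditions (10)–(11) for two modes, with hypothetical impulse matrices D_ik taken constant in k:

```python
import cvxpy as cp
import numpy as np

n, N = 2, 2
# hypothetical impulse gain matrices D_i (contractive here)
D = [np.diag([0.5, 0.8]), np.diag([0.6, 0.4])]

P = [cp.Variable((n, n), symmetric=True) for _ in range(N)]   # P_{1i}
rho = [cp.Variable(nonneg=True) for _ in range(N)]            # rho_i

eps = 1e-6   # small margin to emulate strict inequalities
cons = []
for i in range(N):
    cons += [P[i] >> eps * np.eye(n)]        # P_{1i} > 0 (part of (8))
    cons += [P[i] << rho[i] * np.eye(n)]     # (11): P_{1i} < rho_i I
    for j in range(N):
        # (10): D_i^T P_{1j} D_i - P_{1i} < 0 for every target mode j
        cons += [D[i].T @ P[j] @ D[i] - P[i] << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
print(prob.status)   # 'optimal' indicates the sampled LMIs are feasible
```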
Proof. From the assumptions, we know that the amplification function α_i(x(t)) is nonlinear and satisfies α_i(x(t))α_i(x(t)) ≤ ᾱ_i² I. Pre- and post-multiplying the left-hand side of inequality (9) by diag(α_i(x(t)), α_i(x(t)), α_i(x(t)), α_i(x(t)), I, I, I, I, I, I, I, I), it follows that

$$\Pi = \begin{bmatrix} \tilde{\Xi}_i & \bar{P}_i H_i & \varepsilon_i \bar{T}_i^T \\ * & -\varepsilon_i I & 0 \\ * & * & -\varepsilon_i I \end{bmatrix} < 0, \tag{12}$$
where Ξ̃_i agrees with Ξ_i except in the entries (1,1), (1,8), (1,9), (1,10), (2,2), (2,8), (2,9), (2,10), (3,3), (3,8), (3,9), (3,10), (4,4), (4,8), (4,9), (4,10), for which

ϕ̃_{1,1,i} = −2P_{1i}Δ_i α_i + Σ_{j=1}^{N} γ_{ij}P_{1j} + ρ_i Σ_{i1}^T Σ_{i1} + Q_1 + Q_2 + Q_3 + P_{2i} + h_2 Y_1 + (h_2 − h_1)Y_2,
ϕ̃_{1,8,i} = α_i(x(t))P_{1i}A_i = P_{1i}α_i(x(t))A_i,  ϕ̃_{1,9,i} = α_i(x(t))P_{1i}B_i = P_{1i}α_i(x(t))B_i,  ϕ̃_{1,10,i} = α_i(x(t))P_{1i}C_i = P_{1i}α_i(x(t))C_i,
ϕ̃_{2,2,i} = Σ_{j=1}^{N} γ_{ij}P_{5j} − (1/h_2)Y_1,
ϕ̃_{2,8,i} = α_i(x(t))P_{2i}^T A_i = P_{2i}^T α_i(x(t))A_i,  ϕ̃_{2,9,i} = α_i(x(t))P_{2i}^T B_i = P_{2i}^T α_i(x(t))B_i,  ϕ̃_{2,10,i} = α_i(x(t))P_{2i}^T C_i = P_{2i}^T α_i(x(t))C_i,
ϕ̃_{3,3,i} = Σ_{j=1}^{N} γ_{ij}P_{8j} − (1/(h_2−h_1))(Y_1+Y_2),
ϕ̃_{3,8,i} = α_i(x(t))P_{3i}^T A_i = P_{3i}^T α_i(x(t))A_i,  ϕ̃_{3,9,i} = α_i(x(t))P_{3i}^T B_i = P_{3i}^T α_i(x(t))B_i,  ϕ̃_{3,10,i} = α_i(x(t))P_{3i}^T C_i = P_{3i}^T α_i(x(t))C_i,
ϕ̃_{4,4,i} = Σ_{j=1}^{N} γ_{ij}P_{10j} − (1/(h_2−h_1))Y_2,
ϕ̃_{4,8,i} = α_i(x(t))P_{4i}^T A_i = P_{4i}^T α_i(x(t))A_i,  ϕ̃_{4,9,i} = α_i(x(t))P_{4i}^T B_i = P_{4i}^T α_i(x(t))B_i,  ϕ̃_{4,10,i} = α_i(x(t))P_{4i}^T C_i = P_{4i}^T α_i(x(t))C_i,

for j, k = 1, 2, . . . , 10.
Define the Lyapunov–Krasovskii functional as follows:

$$V(x, t, r(t)) = V_i(x, t) = V_{i1}(x, t) + V_{i2}(x, t) + V_{i3}(x, t) + V_{i4}(x, t) + V_{i5}(x, t), \tag{13}$$

where

$$V_{i1}(x,t) = \xi_1^T(t)P_i\xi_1(t),$$
$$V_{i2}(x,t) = \int_{t-\tau(t)}^{t} x^T(s)Q_1x(s)ds + \int_{t-h_1}^{t} x^T(s)Q_2x(s)ds + \int_{t-h_2}^{t} x^T(s)Q_3x(s)ds + \int_{t-\tau(t)}^{t} f^T(x(s))Q_4f(x(s))ds,$$
$$V_{i3}(x,t) = \int_{-\bar{r}}^{0}\int_{t+\theta}^{t} f^T(x(s))Rf(x(s))\,ds\,d\theta,$$
$$V_{i4}(x,t) = \int_{-h_2}^{0}\int_{t+\theta}^{t} x^T(s)Y_1x(s)\,ds\,d\theta + \int_{-h_2}^{-h_1}\int_{t+\theta}^{t} x^T(s)Y_2x(s)\,ds\,d\theta,$$
$$V_{i5}(x,t) = \Big[\int_{t-\tau(t)}^{t} x(s)ds\Big]^T X_1 \Big[\int_{t-\tau(t)}^{t} x(s)ds\Big] + \Big[\int_{t-h_2}^{t-\tau(t)} x(s)ds\Big]^T X_2 \Big[\int_{t-h_2}^{t-\tau(t)} x(s)ds\Big] + \Big[\int_{t-\tau(t)}^{t-h_1} x(s)ds\Big]^T X_3 \Big[\int_{t-\tau(t)}^{t-h_1} x(s)ds\Big],$$

with

$$\xi_1^T(t) = \Big[x^T(t), \Big(\int_{t-\tau(t)}^{t} x(s)ds\Big)^T, \Big(\int_{t-h_2}^{t-\tau(t)} x(s)ds\Big)^T, \Big(\int_{t-\tau(t)}^{t-h_1} x(s)ds\Big)^T\Big].$$
For t = t_k, using the techniques in [28], we have

$$V(x, t_k, j) - V(x, t_k^-, i) = \xi_1^T(t_k)P_j\xi_1(t_k) - \xi_1^T(t_k^-)P_i\xi_1(t_k^-) = x^T(t_k^-)\big(D_{ik}^T P_{1j} D_{ik} - P_{1i}\big)x(t_k^-) < 0. \tag{14}$$
For t ∈ [t_{k−1}, t_k), applying the weak infinitesimal generator L to (13) and using τ̇(t) ≤ µ, we obtain, writing

$$g_i(t) = -\beta_i(x(t)) + A_i(t)f(x(t)) + B_i(t)f(x(t-\tau(t))) + C_i(t)\int_{t-r(t)}^{t} f(x(s))ds,$$
$$v_2(t) = \int_{t-\tau(t)}^{t} x(s)ds, \qquad v_3(t) = \int_{t-h_2}^{t-\tau(t)} x(s)ds, \qquad v_4(t) = \int_{t-\tau(t)}^{t-h_1} x(s)ds,$$

the bound

$$
\begin{aligned}
\mathcal{L}V_i(x,t) \le{}& 2\big[x^T(t)P_{1i} + v_2^T(t)P_{2i}^T + v_3^T(t)P_{3i}^T + v_4^T(t)P_{4i}^T\big]\,\alpha_i(x(t))\,g_i(t) \\
&+ \operatorname{tr}\big(\sigma_i^T(\cdot)P_{1i}\sigma_i(\cdot)\big) + \sum_{j=1}^{N}\gamma_{ij}\,\xi_1^T(t)P_j\xi_1(t) \\
&+ 2\big[x^T(t)P_{2i} + v_2^T(t)P_{5i} + v_3^T(t)P_{6i}^T + v_4^T(t)P_{7i}^T\big]\big[x(t) - (1-\mu)x(t-\tau(t))\big] \\
&+ 2\big[x^T(t)P_{3i} + v_2^T(t)P_{6i} + v_3^T(t)P_{8i} + v_4^T(t)P_{9i}^T\big]\big[(1-\mu)x(t-\tau(t)) - x(t-h_2)\big] \\
&+ 2\big[x^T(t)P_{4i} + v_2^T(t)P_{7i} + v_3^T(t)P_{9i} + v_4^T(t)P_{10i}\big]\big[x(t-h_1) - (1-\mu)x(t-\tau(t))\big] \\
&+ x^T(t)Q_1x(t) - (1-\mu)x^T(t-\tau(t))Q_1x(t-\tau(t)) + x^T(t)Q_2x(t) - x^T(t-h_1)Q_2x(t-h_1) \\
&+ x^T(t)Q_3x(t) - x^T(t-h_2)Q_3x(t-h_2) + f^T(x(t))Q_4f(x(t)) - (1-\mu)f^T(x(t-\tau(t)))Q_4f(x(t-\tau(t))) \\
&+ \bar{r}f^T(x(t))Rf(x(t)) - \int_{t-r(t)}^{t} f^T(x(s))Rf(x(s))ds + h_2x^T(t)Y_1x(t) - \int_{t-h_2}^{t} x^T(s)Y_1x(s)ds \\
&+ (h_2-h_1)x^T(t)Y_2x(t) - \int_{t-h_2}^{t-h_1} x^T(s)Y_2x(s)ds \\
&+ 2\big[x^T(t) - x^T(t-\tau(t))\big]X_1 v_2(t) + 2\big[x^T(t-h_2) - x^T(t-\tau(t))\big]X_2 v_3(t) \\
&+ 2\big[x^T(t-h_1) - x^T(t-\tau(t))\big]X_3 v_4(t).
\end{aligned} \tag{15}
$$
From the given assumptions we have

$$-2x^T(t)P_{1i}\alpha_i(x(t))\beta_i(x(t)) = -2\sum_{j=1}^{n} x_j(t)\,p_{1ij}\,\alpha_{ij}(x(t))\,\beta_{ij}(x_j(t)) \le -2\underline{\alpha}_i\sum_{j=1}^{n} p_{1ij}\mu_{ij}x_j^2(t) = -2\underline{\alpha}_i\, x^T(t)P_{1i}\Delta_i x(t), \tag{16}$$
$$-\Big(\int_{t-\tau(t)}^{t} x(s)ds\Big)^T P_{2i}\alpha_i(x(t))\beta_i(x(t)) = -\sum_{j=1}^{n}\Big(\int_{t-\tau(t)}^{t} x_j(s)ds\Big)p_{2ij}\,\alpha_{ij}(x(t))\,\beta_{ij}(x_j(t)) \le -\underline{\alpha}_i\Big(\int_{t-\tau(t)}^{t} x(s)ds\Big)^T P_{2i}\Delta_i x(t), \tag{17}$$
$$-\Big(\int_{t-h_2}^{t-\tau(t)} x(s)ds\Big)^T P_{3i}\alpha_i(x(t))\beta_i(x(t)) \le -\underline{\alpha}_i\Big(\int_{t-h_2}^{t-\tau(t)} x(s)ds\Big)^T P_{3i}\Delta_i x(t), \tag{18}$$
$$-\Big(\int_{t-\tau(t)}^{t-h_1} x(s)ds\Big)^T P_{4i}\alpha_i(x(t))\beta_i(x(t)) \le -\underline{\alpha}_i\Big(\int_{t-\tau(t)}^{t-h_1} x(s)ds\Big)^T P_{4i}\Delta_i x(t), \tag{19}$$
$$\operatorname{tr}\big(\sigma_i^T(\cdot)P_{1i}\sigma_i(\cdot)\big) \le \rho_i x^T(t)\Sigma_{i1}^T\Sigma_{i1}x(t) + \rho_i x^T(t-\tau(t))\Sigma_{i2}^T\Sigma_{i2}x(t-\tau(t)) + \rho_i\Big(\int_{t-r(t)}^{t} f(x(s))ds\Big)^T\Sigma_{i3}^T\Sigma_{i3}\Big(\int_{t-r(t)}^{t} f(x(s))ds\Big). \tag{20}$$

According to Lemma 2.3, we obtain

$$-\int_{t-\tau(t)}^{t} x^T(s)Y_1x(s)ds \le -\frac{1}{h_2}\Big[\int_{t-\tau(t)}^{t} x(s)ds\Big]^T Y_1\Big[\int_{t-\tau(t)}^{t} x(s)ds\Big], \tag{21}$$
$$-\int_{t-h_2}^{t-\tau(t)} x^T(s)(Y_1+Y_2)x(s)ds \le -\frac{1}{h_2-h_1}\Big[\int_{t-h_2}^{t-\tau(t)} x(s)ds\Big]^T (Y_1+Y_2)\Big[\int_{t-h_2}^{t-\tau(t)} x(s)ds\Big], \tag{22}$$
$$-\int_{t-\tau(t)}^{t-h_1} x^T(s)Y_2x(s)ds \le -\frac{1}{h_2-h_1}\Big[\int_{t-\tau(t)}^{t-h_1} x(s)ds\Big]^T Y_2\Big[\int_{t-\tau(t)}^{t-h_1} x(s)ds\Big], \tag{23}$$
$$-\int_{t-r(t)}^{t} f^T(x(s))Rf(x(s))ds \le -\frac{1}{\bar{r}}\Big[\int_{t-r(t)}^{t} f(x(s))ds\Big]^T R\Big[\int_{t-r(t)}^{t} f(x(s))ds\Big]. \tag{24}$$

It is obvious from (A1) that

$$f^T(x(t))K_1x(t) - f^T(x(t))K_1f(x(t)) \ge 0, \tag{25}$$
$$f^T(x(t-\tau(t)))K_2x(t-\tau(t)) - f^T(x(t-\tau(t)))K_2f(x(t-\tau(t))) \ge 0. \tag{26}$$

Substituting (16)–(26) into (15), we have

$$\mathcal{L}V_i(x,t) \le \zeta^T(t)\,\Pi\,\zeta(t), \tag{27}$$

where Π is defined in (12) and

$$\zeta^T(t) = \Big[x^T(t), \Big(\int_{t-\tau(t)}^{t} x(s)ds\Big)^T, \Big(\int_{t-h_2}^{t-\tau(t)} x(s)ds\Big)^T, \Big(\int_{t-\tau(t)}^{t-h_1} x(s)ds\Big)^T, x^T(t-\tau(t)), x^T(t-h_1), x^T(t-h_2), f^T(x(t)), f^T(x(t-\tau(t))), \Big(\int_{t-r(t)}^{t} f(x(s))ds\Big)^T\Big].$$
From the conditions of Theorem 3.1, if ζ(t) ≠ 0, we obtain

$$\mathcal{L}V(x, t, i) < 0. \tag{28}$$

For t ∈ [t_{k−1}, t_k), in view of (14) and (28), we have

$$V(x, t_k, j) < V(x, t_k^-, i) < V(x, t_{k-1}, i). \tag{29}$$

By a similar argument and mathematical induction, we can derive that (29) is true for all i, j, r(0) = i_0 ∈ S, k ∈ N:

$$V(x, t_k, j) < V(x, t_k^-, i) < V(x, t_{k-1}, i) < \cdots < V(x, t_0, i_0).$$

This implies that, whenever ‖x(t_0)‖² < δ, we have lim_{t→∞} E‖x(t)‖² = 0 as k → ∞. Hence system (7) is asymptotically stable in the mean square.

Remark 3.2. In this paper, by constructing a new Lyapunov–Krasovskii functional and using the LMI method, we have derived sufficient stability conditions for Markovian jumping impulsive stochastic uncertain Cohen–Grossberg neural networks with discrete interval and distributed time-varying delays. Neither a system transformation nor free weighting matrices via the Newton–Leibniz formula are required. Theorem 3.1 provides delay-dependent conditions for the robust asymptotic stability (in the mean square sense) of such networks. These conditions are formulated in terms of LMIs and can be easily solved by using the Matlab LMI Control Toolbox. It is worth noting that, by applying convex optimization algorithms, we can calculate the maximum allowable upper bounds of the discrete and distributed delays, that is, h_2 and r̄, which guarantee the feasibility of the presented LMIs. The methods mentioned above have not been considered in the literature and may lead to an improved feasible region for delay-dependent stability criteria.

Remark 3.3. In (13), novel Lyapunov–Krasovskii functionals are used to derive the delay-dependent stability conditions, and a new type of Markovian jumping matrix P_i is introduced. A Markovian jump could occur at any moment; if the jump does not occur at an impulsive time instant, the Lyapunov parameter P_j should be written as P_i in (10).

4. Example

In this section, we give an example showing the effectiveness of the conditions derived here.

4.1. Example

Consider the impulsive stochastic CGNN with Markovian jumping parameters described by
$$dx(t) = -\alpha(x(t))\Big[\beta(x(t)) - A(t)f(x(t)) - B(t)f(x(t-\tau(t))) - C(t)\int_{t-r(t)}^{t} f(x(s))ds\Big]dt + \sigma\Big(t, x(t), x(t-\tau(t)), \int_{t-r(t)}^{t} f(x(s))ds\Big)dw(t), \tag{30}$$

with two modes and 2 × 2 parameter matrices A_1, A_2, B_1, B_2, C_1, C_2, H_1, H_2, Δ_1 = Δ_2, the diagonal noise-intensity matrices Σ_{11} = Σ_{12} = diag(0.4, 0.4), Σ_{21} = Σ_{22}, Σ_{31} = Σ_{32}, the uncertainty-structure matrices T_{11} = T_{12} = T_{13} = T_{21} = T_{22} = T_{23} = diag(0.2, 0.2), the generator Γ, and the amplification bounds α_1 = α_2 = 0.4, ᾱ_1 = ᾱ_2 = 0.8.
When µ = 0 and h_1 = 0, by using Theorem 3.1 we obtain the maximum allowable upper bounds h_2 = r̄ = 4.6818, for which system (30) is robustly asymptotically stable in the mean square. This shows that the approach developed in this paper is effective and less conservative than some existing results.
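Remark 3.2 above suggests computing such a maximum allowable h_2 by convex optimization. In practice this is a one-dimensional search wrapped around an LMI feasibility test; a generic Python sketch, assuming a user-supplied predicate lmis_feasible(h2) that assembles and solves (8)–(11) for the given bound (a hypothetical helper; the paper itself uses the Matlab LMI Toolbox):

```python
def max_delay_bound(lmis_feasible, lo=0.0, hi=10.0, tol=1e-4):
    """Bisection for the largest h2 with feasible LMIs, assuming
    feasibility is monotone in h2 (feasible on [lo, h2_max])."""
    if not lmis_feasible(lo):
        raise ValueError("LMIs infeasible even for h2 = lo")
    while lmis_feasible(hi):          # expand until infeasible
        lo, hi = hi, 2.0 * hi
    while hi - lo > tol:              # shrink the bracket
        mid = 0.5 * (lo + hi)
        if lmis_feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo
```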
5. Conclusion

In this paper, a new sufficient condition guaranteeing the robust asymptotic stability (in the mean square sense) of Markovian jumping impulsive stochastic Cohen–Grossberg neural networks with discrete interval and distributed time-varying delays has been proposed. Based on LMI methods, robust stability conditions for the impulsive stochastic CGNNs with Markovian jumping parameters have been obtained in the form of LMIs. Finally, a numerical example is given to illustrate the usefulness of the obtained results.

References

[1] M.A. Cohen, S. Grossberg, Absolute stability of global pattern formation and parallel memory storage by competitive neural networks, IEEE Trans. Syst. Man Cybern. 13 (1983) 815–826.
[2] H. Ye, A. Michel, K. Wang, Qualitative analysis of Cohen–Grossberg neural networks with multiple delays, Phys. Rev. E 50 (1995) 2611–2618.
[3] J. Cao, J. Liang, Boundedness and stability for Cohen–Grossberg neural networks with time-varying delays, J. Math. Anal. Appl. 296 (2004) 665–685.
[4] T. Chen, L. Rong, Delay-independent stability analysis of Cohen–Grossberg neural networks, Phys. Lett. A 317 (2003) 436–449.
[5] C. Huang, L. Huang, Dynamics of a class of Cohen–Grossberg neural networks with time-varying delays, Nonlinear Anal. RWA 8 (2007) 40–52.
[6] S. Arik, Z. Orman, Global stability analysis of Cohen–Grossberg neural networks with time-varying delays, Phys. Lett. A 341 (2005) 410–421.
[7] C. Ji, H.G. Zhang, Y. Wei, LMI approach for global robust stability of Cohen–Grossberg neural networks with multiple delays, Neurocomputing 71 (2008) 475–485.
[8] Z. Yuan, L. Huang, D. Hu, B. Liu, Convergence of nonautonomous Cohen–Grossberg-type neural networks with variable delays, IEEE Trans. Neural Netw. 19 (2008) 140–147.
[9] V. Lakshmikantham, D.D. Bainov, P.S. Simeonov, Theory of Impulsive Differential Equations, World Scientific, Singapore, 1989.
[10] D.Y. Xu, Z.C. Yang, Impulsive delay differential inequality and stability of neural networks, J. Math. Anal. Appl. 305 (2005) 107–120.
[11] T. Yang, Impulsive control, IEEE Trans. Automat. Control 44 (1999) 1081–1083.
[12] D.Y. Xu, W. Zhu, S.J. Long, Global exponential stability of impulsive integro-differential equation, Nonlinear Anal. 64 (2006) 2805–2816.
[13] Z.-T. Huang, Q.-G. Yang, X.-S. Luo, Exponential stability of impulsive neural networks with time-varying delays, Chaos Solitons Fractals 35 (2008) 770–780.
[14] Y. Xia, J. Cao, S.S. Cheng, Global exponential stability of delayed cellular neural networks with impulses, Neurocomputing 70 (2007) 2495–2501.
[15] Y. Zhang, J.T. Sun, Stability of impulsive neural networks with time delays, Phys. Lett. A 348 (2005) 44–50.
[16] R. Rakkiyappan, P. Balasubramaniam, J. Cao, Global exponential stability results for neutral-type impulsive neural networks, Nonlinear Anal. RWA, in press (doi:10.1016/j.nonrwa.2008.10.050).
[17] K. Li, Stability analysis for impulsive Cohen–Grossberg neural networks with time-varying delays and distributed delays, Nonlinear Anal. RWA (in press).
[18] Q. Song, J. Zhang, Global exponential stability of impulsive Cohen–Grossberg neural network with time-varying delays, Nonlinear Anal. RWA 9 (2008) 500–510.
[19] Z. Chen, J. Ruan, Global dynamic analysis of general Cohen–Grossberg neural networks with impulse, Chaos Solitons Fractals 32 (2007) 1830–1837.
[20] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, New York, 1994.
[21] Z. Wang, Y. Liu, M. Li, X. Liu, Stability analysis for stochastic Cohen–Grossberg neural networks with mixed time delays, IEEE Trans. Neural Netw. 17 (2006) 814–820.
[22] E. Zhu, H. Zhang, Y. Wang, J. Zou, Z. Yu, Z. Hou, pth moment exponential stability of stochastic Cohen–Grossberg neural networks with time-varying delays, Neural Process. Lett. 26 (2007) 191–200.
[23] H. Zhao, N. Ding, Dynamic analysis of stochastic Cohen–Grossberg neural networks with time delays, Appl. Math. Comput. 183 (2006) 464–470.
[24] C. Huang, Y. He, L. Huang, Stability analysis of non-autonomous stochastic Cohen–Grossberg neural networks, Nonlinear Dyn., in press (doi:10.1007/s11071-008-9456-x).
[25] M. Mariton, Jump Linear Systems in Automatic Control, Marcel Dekker, New York, 1990.
[26] X. Wang, Q. Guo, D. Xu, Exponential p-stability of impulsive stochastic Cohen–Grossberg neural networks with mixed delays, Math. Comput. Simulat., in press (doi:10.1016/j.matcom.2008.08.008).
[27] H. Zhang, Y. Wang, Stability analysis of Markovian jumping stochastic Cohen–Grossberg neural networks with mixed time delays, IEEE Trans. Neural Netw. 19 (2008) 366–370.
[28] M. Dong, H. Zhang, Y. Wang, Dynamics analysis of impulsive stochastic Cohen–Grossberg neural networks with Markovian jumping and mixed time delays, Neurocomputing (in press).
[29] P. Balasubramaniam, R. Rakkiyappan, Delay-dependent robust stability analysis for Markovian jumping stochastic Cohen–Grossberg neural networks with discrete interval and distributed time-varying delays, Nonlinear Anal. Hybrid Syst., doi:10.1016/j.nahs.2009.01.002.