Accepted manuscript, to appear in Neurocomputing. DOI: http://dx.doi.org/10.1016/j.neucom.2014.10.023
Received 8 May 2014; revised 26 August 2014; accepted 9 October 2014.
Robust delay-dependent stability criteria for uncertain neural networks with two additive time-varying delay components

Yajuan Liu^a, S. M. Lee^{a,1}, H. G. Lee^b

^a Department of Electronic Engineering, Daegu University, Gyeongsan 712-714, Republic of Korea
^b Department of Information and Communication Engineering, Daegu University, Gyeongsan 712-714, Republic of Korea
Abstract This paper considers the problem of robust stability of uncertain neural networks with two additive time-varying delay components. The activation functions are monotone nondecreasing with known lower and upper bounds. By constructing a modified augmented Lyapunov functional, some new stability criteria are established in terms of linear matrix inequalities, which are easily solved by various convex optimization techniques. Compared with existing works, the obtained criteria are less conservative owing to the reciprocally convex technique and an improved integral inequality, which provides a more accurate upper bound than the Jensen inequality for dealing with the cross-term. Finally, two numerical examples are given to illustrate the effectiveness of the proposed method.
Keywords: Asymptotic stability, neural networks, additive time-varying delay
1
Introduction
In the past few decades, neural networks have been one of the hottest research topics owing to their successful applications in various fields such as signal processing, pattern recognition, model identification and optimization [1-4]. In hardware implementations of neural networks, it is well known that time delays occur frequently, and the existence of a time delay may cause instability and poor performance [5-10]. Therefore, much effort has been devoted to the delay-dependent stability analysis of delayed neural networks [11-19], since delay-dependent stability criteria are generally less conservative than delay-independent ones, especially when the size of the time delay is small. In recent years, robustness analysis for uncertain neural networks has received much attention due to the existence of modeling errors, external disturbances, and parameter fluctuations [20-24].

In the systems considered above, the time delay in the state was assumed to appear in a single form. However, in practical situations, especially in networked control systems, signals are sometimes transmitted from one point to another through two segments of networks. Therefore, a system with two additive time-varying delay components has been considered as a new model of time-delay systems arising from variable transmission conditions [25-34]. Zhao et al. [30] proposed a stability criterion for neural networks with two additive time-varying delay components by using the free-weighting matrix method. Shao and Han [31] improved the result in [30] by employing a convex polyhedron method. Tian and Zhong [32] derived a less conservative stability criterion by constructing an augmented Lyapunov functional and using the reciprocally convex method. In [33], some less conservative results were derived by using the convex polyhedron method or the reciprocally convex method. Recently, delay-dependent stability criteria for generalized neural networks with two delay components were investigated in [34]. Although these results and analytic tools are elegant for the stability of neural networks with two additive time-varying delay components, there is still room for further improvement. On the one hand, the Jensen inequality [30-34], which neglects some terms, was employed to estimate the upper bound of some derivatives of the Lyapunov-Krasovskii functional. On the other hand, the activation functions were not fully utilized in constructing the Lyapunov-Krasovskii functional. In addition, to the best of our knowledge, the robust stability of neural networks with two additive delay components has not yet been investigated.

1 Corresponding author. E-mail address: [email protected]; Tel.: +82-53-850-6647; Fax: +82-53-850-6619.
Inspired by the above discussion, in this paper we aim to give a robust stability criterion for uncertain neural networks with additive time-varying delay components, and to provide a less conservative stability criterion for delayed neural networks without uncertainties by some new techniques. The two additive time-varying delays are defined as h(t) = h1(t) + h2(t), where h1(t) is the time delay induced from sensor to controller and h2(t) is the delay induced from controller to actuator [25-34]. Based on a modified augmented Lyapunov-Krasovskii functional, which fully utilizes the information of the activation functions, some stability criteria are derived by the reciprocally convex method [35] and an improved inequality [36], which provides a more accurate upper bound than the Jensen inequality for dealing with the cross-term. Finally, two numerical examples are given to demonstrate that the proposed conditions are less conservative than some existing ones.

Notations: Throughout this paper, * denotes the elements below the main diagonal of a symmetric block matrix, I denotes the identity matrix with appropriate dimensions, R^n denotes the n-dimensional Euclidean space, R^{m×n} is the set of all m × n real matrices, and ||·|| refers to the Euclidean vector norm and the induced matrix norm. For symmetric matrices A and B, the notation A > B (respectively, A ≥ B) means that the matrix A − B is positive definite (respectively, nonnegative definite). diag{...} denotes the block diagonal matrix.
2
Problem statement
The uncertain delayed neural network is described by

ẏ(t) = −(C + ΔC)y(t) + (A + ΔA)g(y(t)) + (B + ΔB)g(y(t − h1(t) − h2(t))) + J,   (1)

where y(t) = [y1(t), y2(t), ..., yn(t)]^T ∈ R^n is the neuron state vector associated with n neurons, C = diag{c1, c2, ..., cn} > 0, g(y(·)) = [g1(y1(·)), g2(y2(·)), ..., gn(yn(·))]^T ∈ R^n denotes the continuous activation function, J = [J1, J2, ..., Jn]^T is an exogenous input vector, and A ∈ R^{n×n} and B ∈ R^{n×n} are the connection weight matrix and the delayed connection weight matrix, respectively. h1(t) and h2(t) are two time-varying delays satisfying

0 ≤ h1(t) ≤ h1,  ḣ1(t) ≤ µ1,  0 ≤ h2(t) ≤ h2,  ḣ2(t) ≤ µ2,   (2)

where h1, h2, µ1 and µ2 are known constants. Naturally, we denote h(t) = h1(t) + h2(t), h = h1 + h2, µ = µ1 + µ2. The uncertainties satisfy the following condition:

[ΔC ΔA ΔB] = DF(t)[E1 E2 E3],   (3)

where D, E1, E2 and E3 are known constant matrices, and F(t) ∈ R^{n×n} is an unknown real time-varying matrix with Lebesgue measurable elements bounded by

F^T(t)F(t) ≤ I,  ∀t ≥ 0.   (4)
The activation function gi(·) satisfies

ki^− ≤ (gi(ξ1) − gi(ξ2))/(ξ1 − ξ2) ≤ ki^+,  gi(0) = 0,  ξ1, ξ2 ∈ R, ξ1 ≠ ξ2, i = 1, 2, ..., n,   (5)

where ki^−, ki^+ are some known constants. By Brouwer's fixed-point theorem, there exists an equilibrium point for the neural network. Assume that y* = [y1*, y2*, ..., yn*]^T is an equilibrium point of system (1); using the transformation x(·) = y(·) − y*, system (1) can be converted to the following form:

ẋ(t) = −(C + ΔC)x(t) + (A + ΔA)f(x(t)) + (B + ΔB)f(x(t − h(t))),   (6)

where x(t) = [x1(t), x2(t), ..., xn(t)]^T ∈ R^n is the state vector of the transformed system, f(x(·)) = [f1(x1(·)), f2(x2(·)), ..., fn(xn(·))]^T ∈ R^n and fi(xi(·)) = gi(xi(·) + yi*) − gi(yi*) (i = 1, 2, ..., n). Then, according to inequality (5), one can obtain

ki^− ≤ (fi(ξ1) − fi(ξ2))/(ξ1 − ξ2) ≤ ki^+,  fi(0) = 0,  ξ1, ξ2 ∈ R, ξ1 ≠ ξ2, i = 1, 2, ..., n,   (7)

where ki^−, ki^+ are some known constants. Now, system (6) can be rewritten as

ẋ(t) = −Cx(t) + Af(x(t)) + Bf(x(t − h(t))) + Dp(t),
p(t) = F(t)q(t),   (8)
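The sector condition (7) can be checked numerically for a concrete activation. The sketch below uses the piecewise-linear activations of Example 4.2, which satisfy (7) with K1 = 0 and K2 = diag{0.4, 0.8}:

```python
import numpy as np

# Activation functions from Example 4.2; the sector bounds in (7) are
# k1^- = k2^- = 0, k1^+ = 0.4, k2^+ = 0.8.
def f1(s): return 0.2 * (abs(s + 1) - abs(s - 1))
def f2(s): return 0.4 * (abs(s + 1) - abs(s - 1))

def check_sector(f, k_minus, k_plus, trials=10_000, seed=0):
    """Sample pairs (xi1, xi2) and verify k- <= (f(xi1)-f(xi2))/(xi1-xi2) <= k+."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        a, b = rng.uniform(-5, 5, size=2)
        if abs(a - b) < 1e-9:
            continue
        slope = (f(a) - f(b)) / (a - b)
        assert k_minus - 1e-12 <= slope <= k_plus + 1e-12
    return True

print(check_sector(f1, 0.0, 0.4), check_sector(f2, 0.0, 0.8))
```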
q(t) = E1 x(t) + E2 f(x(t)) + E3 f(x(t − h(t))).

In what follows, some essential lemmas are introduced.

Lemma 2.1 [36] For a given matrix R > 0, the following inequality holds for every continuously differentiable function x(t): [a, b] → R^n:

−(b − a) ∫_a^b ẋ^T(s)Rẋ(s)ds ≤ −[x(b) − x(a)]^T R[x(b) − x(a)] − 3Ω^T RΩ,   (9)

where Ω = x(b) + x(a) − (2/(b − a)) ∫_a^b x(s)ds.
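A quick numerical check of Lemma 2.1 on a sample trajectory (x(t) = sin 3t on [0, 1] with R = 1, an arbitrary choice) illustrates that the Wirtinger-based bound (9) sits between the true integral and the Jensen bound:

```python
import numpy as np

def integral(y, t):
    """Composite trapezoidal rule on a uniform grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

a, b = 0.0, 1.0
t = np.linspace(a, b, 200_001)
x = np.sin(3 * t)                         # sample smooth trajectory, R = 1
xdot = 3 * np.cos(3 * t)

lhs = -(b - a) * integral(xdot**2, t)     # -(b-a) * int xdot^T R xdot
jensen = -(x[-1] - x[0])**2               # Jensen bound
omega = x[-1] + x[0] - (2 / (b - a)) * integral(x, t)
wirtinger = jensen - 3 * omega**2         # Lemma 2.1 bound (9)

assert lhs <= wirtinger + 1e-9 <= jensen + 1e-9
```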
Remark 2.1 Since R > 0, Ω^T RΩ is nonnegative. Hence, Lemma 2.1 provides a tighter bound than the Jensen inequality −(b − a) ∫_a^b ẋ^T(s)Rẋ(s)ds ≤ −[x(b) − x(a)]^T R[x(b) − x(a)], owing to the additional term −3Ω^T RΩ.

Lemma 2.2 [35] Let f1, f2, ..., fN : R^m → R have positive values in an open subset D of R^m. Then, the reciprocally convex combination of fi over D satisfies

min_{ {αi | αi > 0, Σ_i αi = 1} } Σ_i (1/αi) fi(t) = Σ_i fi(t) + max_{gij(t)} Σ_{i≠j} gij(t)

subject to

gij : R^m → R,  gji(t) ≜ gij(t),  [ fi(t)  gij(t) ; gij(t)  fj(t) ] ≥ 0.
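For N = 2 and scalar f1, f2, the lemma reduces to the following: whenever g^2 ≤ f1 f2 (so that the 2 × 2 matrix above is nonnegative definite), (1/α)f1 + (1/(1 − α))f2 ≥ f1 + f2 + 2g for all α ∈ (0, 1). A brute-force numerical check with random positive values (a hypothetical setup, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    fa, fb = rng.uniform(0.1, 5.0, size=2)     # f1(t), f2(t) > 0
    g = rng.uniform(-1, 1) * np.sqrt(fa * fb)  # ensures [[f1, g], [g, f2]] >= 0
    for alpha in np.linspace(0.01, 0.99, 99):
        lhs = fa / alpha + fb / (1 - alpha)    # reciprocally convex combination
        assert lhs >= fa + fb + 2 * g - 1e-9
```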
Lemma 2.3 (Finsler’s lemma [37]). Let ξ ∈ Rn , Φ = ΦT ∈ Rn×n , and B ∈ Rm×n such that rank(B) < n. The following statements are equivalent (i) ξ T Φξ < 0, ∀Bξ = 0, ξ = 0, T
(ii) B ⊥ ΦB ⊥ < 0, where B ⊥ is a right orthogonal complement of B.
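A numerical illustration of the equivalence in Lemma 2.3, with a randomly generated B and a Φ chosen (hypothetically) to be negative definite on ker(B) while indefinite in general:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 5, 2
B = rng.standard_normal((m, n))     # rank(B) = 2 < n

# Right orthogonal complement via SVD: the trailing rows of Vt span ker(B).
_, _, Vt = np.linalg.svd(B)
Bperp = Vt[m:].T                    # n x (n - m), satisfies B @ Bperp ~ 0

# Phi is indefinite in general, but negative definite on ker(B):
Phi = 10.0 * B.T @ B - np.eye(n)

# (ii): Bperp^T Phi Bperp < 0
assert np.max(np.linalg.eigvalsh(Bperp.T @ Phi @ Bperp)) < 0
# (i): xi^T Phi xi < 0 for xi in ker(B), xi != 0
for _ in range(200):
    c = rng.standard_normal(n - m)
    xi = Bperp @ c
    assert xi @ Phi @ xi < 0 or np.allclose(c, 0)
```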
3
Main results
In this section, we first propose a stability criterion for uncertain neural networks with two additive time-varying delay components. For simplicity of matrix and vector representation, ei ∈ R^{15n×n} (i = 1, 2, ..., 15) are defined as block entry matrices (for example, e4 = [0 0 0 I 0 0 0 0 0 0 0 0 0 0 0]^T). The other notations are defined as:

ξ^T(t) = [ x^T(t)  x^T(t − h1)  x^T(t − h1(t))  x^T(t − h2)  x^T(t − h2(t))  x^T(t − h)  x^T(t − h(t))  f^T(x(t))  f^T(x(t − h1(t)))  f^T(x(t − h2(t)))  f^T(x(t − h(t)))  ẋ^T(t)  (1/(h − h(t))) ∫_{t−h}^{t−h(t)} x^T(s)ds  (1/h(t)) ∫_{t−h(t)}^{t} x^T(s)ds  p^T(t) ],

Ξ1 = [e1  (h − h(t))e13 + h(t)e14] P [e12  e1 − e6]^T + [e12  e1 − e6] P [e1  (h − h(t))e13 + h(t)e14]^T,
Ξ2 = [e8 − e1K1]Ω e12^T + e12 Ω^T [e8 − e1K1]^T + [e1K2 − e8]Λ e12^T + e12 Λ^T [e1K2 − e8]^T,
Ξ3 = [e1 e8](Q1 + Q2 + Q3)[e1 e8]^T − (1 − µ1)[e3 e9]Q1[e3 e9]^T − (1 − µ2)[e5 e10]Q2[e5 e10]^T − (1 − µ)[e7 e11]Q3[e7 e11]^T,
Ξ4 = e1(R1 + R2 + R3)e1^T − e2R1e2^T − e4R2e4^T − e6R3e6^T,
Ξ5 = h^2 e12 S e12^T − Π1 [ S̄  T ; ∗  S̄ ] Π1^T,
Π1 = [e7 − e6  e7 + e6 − 2e13  e1 − e7  e1 + e7 − 2e14],
Φ = −2e1K1H1K2e1^T + e1(K1 + K2)H1e8^T + e8H1^T(K1 + K2)^T e1^T − 2e8H1e8^T
  − 2e3K1H2K2e3^T + e3(K1 + K2)H2e9^T + e9H2^T(K1 + K2)^T e3^T − 2e9H2e9^T
  − 2e5K1H3K2e5^T + e5(K1 + K2)H3e10^T + e10H3^T(K1 + K2)^T e5^T − 2e10H3e10^T
  − 2e7K1H4K2e7^T + e7(K1 + K2)H4e11^T + e11H4^T(K1 + K2)^T e7^T − 2e11H4e11^T,
Ψ = ε(e1E1^TE1e1^T + e1E1^TE2e8^T + e8E2^TE1e1^T + e8E2^TE2e8^T + e8E2^TE3e11^T + e11E3^TE2e8^T + e1E1^TE3e11^T + e11E3^TE1e1^T + e11E3^TE3e11^T − e15e15^T),
Υ = Ξ1 + Ξ2 + Ξ3 + Ξ4 + Ξ5 + Φ + Ψ,
Γ = [−C  0  0  0  0  0  0  A  0  0  B  −I  0  0  D].

Now, we have the following theorem.

Theorem 3.1 For given scalars h1, h2, µ1, µ2 and diagonal matrices K1 = diag{k1^−, k2^−, ..., kn^−} and K2 = diag{k1^+, k2^+, ..., kn^+}, the system (6) is globally asymptotically stable if there exist symmetric positive definite matrices P ∈ R^{2n×2n}, Qi ∈ R^{2n×2n}, Ri ∈ R^{n×n} (i = 1, 2, 3), S ∈ R^{n×n}, positive diagonal matrices Ω, Λ, Hi (i = 1, 2, 3, 4), a positive scalar ε, and any matrix T ∈ R^{2n×2n} such that the following LMIs hold for all h(t) ∈ [0, h]:

(Γ^⊥)^T ΥΓ^⊥ < 0,   (10)

[ S̄  T ; ∗  S̄ ] ≥ 0,   (11)

where S̄ = [ S  0 ; ∗  3S ].

Proof Let us consider the following Lyapunov-Krasovskii functional candidate:

V(t) = Σ_{i=1}^{5} Vi(t),   (12)
where

V1(t) = [ x(t) ; ∫_{t−h}^{t} x(s)ds ]^T P [ x(t) ; ∫_{t−h}^{t} x(s)ds ],
V2(t) = 2 Σ_{i=1}^{n} ∫_0^{xi(t)} [ ωi(fi(s) − ki^− s) + λi(ki^+ s − fi(s)) ] ds,
V3(t) = ∫_{t−h1(t)}^{t} [ x(s) ; f(x(s)) ]^T Q1 [ x(s) ; f(x(s)) ] ds + ∫_{t−h2(t)}^{t} [ x(s) ; f(x(s)) ]^T Q2 [ x(s) ; f(x(s)) ] ds + ∫_{t−h(t)}^{t} [ x(s) ; f(x(s)) ]^T Q3 [ x(s) ; f(x(s)) ] ds,
V4(t) = ∫_{t−h1}^{t} x^T(s)R1x(s)ds + ∫_{t−h2}^{t} x^T(s)R2x(s)ds + ∫_{t−h}^{t} x^T(s)R3x(s)ds,
V5(t) = h ∫_{−h}^{0} ∫_{t+α}^{t} ẋ^T(s)Sẋ(s)ds dα.
Taking the time derivative of Vi(t) (i = 1, 2, ..., 5), one can obtain

V̇1(t) = 2 [ x(t) ; ∫_{t−h}^{t−h(t)} x(s)ds + ∫_{t−h(t)}^{t} x(s)ds ]^T P [ ẋ(t) ; x(t) − x(t − h) ] = ξ^T(t)Ξ1ξ(t),   (13)

V̇2(t) = 2 Σ_{i=1}^{n} [ ωi(fi(xi(t)) − ki^− xi(t)) + λi(ki^+ xi(t) − fi(xi(t))) ] ẋi(t)
      = 2[f(x(t)) − K1x(t)]^T Ωẋ(t) + 2[K2x(t) − f(x(t))]^T Λẋ(t) = ξ^T(t)Ξ2ξ(t),   (14)

V̇3(t) ≤ [ x(t) ; f(x(t)) ]^T (Q1 + Q2 + Q3) [ x(t) ; f(x(t)) ]
      − (1 − µ1) [ x(t − h1(t)) ; f(x(t − h1(t))) ]^T Q1 [ x(t − h1(t)) ; f(x(t − h1(t))) ]
      − (1 − µ2) [ x(t − h2(t)) ; f(x(t − h2(t))) ]^T Q2 [ x(t − h2(t)) ; f(x(t − h2(t))) ]
      − (1 − µ) [ x(t − h(t)) ; f(x(t − h(t))) ]^T Q3 [ x(t − h(t)) ; f(x(t − h(t))) ]
      = ξ^T(t)Ξ3ξ(t),   (15)

V̇4(t) = x^T(t)(R1 + R2 + R3)x(t) − x^T(t − h1)R1x(t − h1) − x^T(t − h2)R2x(t − h2) − x^T(t − h)R3x(t − h) = ξ^T(t)Ξ4ξ(t),   (16)

V̇5(t) = h^2 ẋ^T(t)Sẋ(t) − h ∫_{t−h}^{t} ẋ^T(s)Sẋ(s)ds
      = h^2 ẋ^T(t)Sẋ(t) − h ∫_{t−h}^{t−h(t)} ẋ^T(s)Sẋ(s)ds − h ∫_{t−h(t)}^{t} ẋ^T(s)Sẋ(s)ds
      ≤ h^2 ẋ^T(t)Sẋ(t) − (h/(h − h(t)))(α1^T(t)Sα1(t) + 3α2^T(t)Sα2(t)) − (h/h(t))(α3^T(t)Sα3(t) + 3α4^T(t)Sα4(t))
      = h^2 ẋ^T(t)Sẋ(t) − [α1(t); α2(t)]^T S̄[α1(t); α2(t)] − (h(t)/(h − h(t)))[α1(t); α2(t)]^T S̄[α1(t); α2(t)]
        − [α3(t); α4(t)]^T S̄[α3(t); α4(t)] − ((h − h(t))/h(t))[α3(t); α4(t)]^T S̄[α3(t); α4(t)],   (17)

where the inequality follows from Lemma 2.1 applied to each integral, and α1(t), ..., α4(t) are defined below.

If [ S̄  T ; ∗  S̄ ] ≥ 0, the following inequality is satisfied by the reciprocally convex method [35]:

[ √(h(t)/(h − h(t))) [α1(t); α2(t)] ; −√((h − h(t))/h(t)) [α3(t); α4(t)] ]^T [ S̄  T ; ∗  S̄ ] [ √(h(t)/(h − h(t))) [α1(t); α2(t)] ; −√((h − h(t))/h(t)) [α3(t); α4(t)] ] ≥ 0,   (18)

which implies

−(h(t)/(h − h(t)))[α1(t); α2(t)]^T S̄[α1(t); α2(t)] − ((h − h(t))/h(t))[α3(t); α4(t)]^T S̄[α3(t); α4(t)]
      ≤ −[α1(t); α2(t)]^T T[α3(t); α4(t)] − [α3(t); α4(t)]^T T^T[α1(t); α2(t)].   (19)

Then, from (17) and (19), we get

V̇5(t) ≤ h^2 ẋ^T(t)Sẋ(t) − [α1(t); α2(t)]^T S̄[α1(t); α2(t)] − [α3(t); α4(t)]^T S̄[α3(t); α4(t)]
      − [α1(t); α2(t)]^T T[α3(t); α4(t)] − [α3(t); α4(t)]^T T^T[α1(t); α2(t)]
      = h^2 ẋ^T(t)Sẋ(t) − α^T(t) [ S̄  T ; ∗  S̄ ] α(t) = ξ^T(t)Ξ5ξ(t),   (20)

where α(t) = [α1^T(t), α2^T(t), α3^T(t), α4^T(t)]^T, α1(t) = x(t − h(t)) − x(t − h), α2(t) = x(t − h(t)) + x(t − h) − (2/(h − h(t))) ∫_{t−h}^{t−h(t)} x(s)ds, α3(t) = x(t) − x(t − h(t)), α4(t) = x(t) + x(t − h(t)) − (2/h(t)) ∫_{t−h(t)}^{t} x(s)ds, and S̄ = [ S  0 ; ∗  3S ].

Furthermore, from (7), for the positive diagonal matrices H1, H2, H3, H4 we have the following inequalities:

−2f^T(x(t))H1f(x(t)) + 2x^T(t)(K1 + K2)H1f(x(t)) − 2x^T(t)K1H1K2x(t) ≥ 0,   (21)
−2f^T(x(t − h1(t)))H2f(x(t − h1(t))) + 2x^T(t − h1(t))(K1 + K2)H2f(x(t − h1(t))) − 2x^T(t − h1(t))K1H2K2x(t − h1(t)) ≥ 0,   (22)

−2f^T(x(t − h2(t)))H3f(x(t − h2(t))) + 2x^T(t − h2(t))(K1 + K2)H3f(x(t − h2(t))) − 2x^T(t − h2(t))K1H3K2x(t − h2(t)) ≥ 0,   (23)

−2f^T(x(t − h(t)))H4f(x(t − h(t))) + 2x^T(t − h(t))(K1 + K2)H4f(x(t − h(t))) − 2x^T(t − h(t))K1H4K2x(t − h(t)) ≥ 0.   (24)
By Eq. (3), the following inequality holds:

p^T(t)p(t) ≤ q^T(t)q(t),   (25)

so there exists a positive scalar ε satisfying

ε[q^T(t)q(t) − p^T(t)p(t)] = ξ^T(t)Ψξ(t) ≥ 0.   (26)

From Eqs. (13)-(26) and by application of the S-procedure [38], if Eq. (10) holds, then an upper bound of V̇(t) is

V̇(t) ≤ ξ^T(t)Υξ(t).   (27)
Based on Lemma 2.3, ξ T (t)Υξ(t) < 0 with Γξ(t) = 0 is equivalent to (Γ⊥ )T ΥΓ⊥ < 0. This
completes the proof.
Remark 3.1 Without uncertainties, system (6) reduces to

ẋ(t) = −Cx(t) + Af(x(t)) + Bf(x(t − h(t))).   (28)

Based on Theorem 3.1, it is easy to obtain a stability condition for system (28). For simplicity of matrix and vector representation, ẽi ∈ R^{14n×n} (i = 1, 2, ..., 14) are defined as block entry matrices (for example, ẽ4 = [0 0 0 I 0 0 0 0 0 0 0 0 0 0]^T). The other notations are defined as:

ξ̃^T(t) = [ x^T(t)  x^T(t − h1)  x^T(t − h1(t))  x^T(t − h2)  x^T(t − h2(t))  x^T(t − h)  x^T(t − h(t))  f^T(x(t))  f^T(x(t − h1(t)))  f^T(x(t − h2(t)))  f^T(x(t − h(t)))  ẋ^T(t)  (1/(h − h(t))) ∫_{t−h}^{t−h(t)} x^T(s)ds  (1/h(t)) ∫_{t−h(t)}^{t} x^T(s)ds ],

Ξ̃1 = [ẽ1  (h − h(t))ẽ13 + h(t)ẽ14] P [ẽ12  ẽ1 − ẽ6]^T + [ẽ12  ẽ1 − ẽ6] P [ẽ1  (h − h(t))ẽ13 + h(t)ẽ14]^T,
Ξ̃2 = [ẽ8 − ẽ1K1]Ω ẽ12^T + ẽ12 Ω^T [ẽ8 − ẽ1K1]^T + [ẽ1K2 − ẽ8]Λ ẽ12^T + ẽ12 Λ^T [ẽ1K2 − ẽ8]^T,
Ξ̃3 = [ẽ1 ẽ8](Q1 + Q2 + Q3)[ẽ1 ẽ8]^T − (1 − µ1)[ẽ3 ẽ9]Q1[ẽ3 ẽ9]^T − (1 − µ2)[ẽ5 ẽ10]Q2[ẽ5 ẽ10]^T − (1 − µ)[ẽ7 ẽ11]Q3[ẽ7 ẽ11]^T,
Ξ̃4 = ẽ1(R1 + R2 + R3)ẽ1^T − ẽ2R1ẽ2^T − ẽ4R2ẽ4^T − ẽ6R3ẽ6^T,
Ξ̃5 = h^2 ẽ12 S ẽ12^T − Π̃1 [ S̄  T ; ∗  S̄ ] Π̃1^T,
Π̃1 = [ẽ7 − ẽ6  ẽ7 + ẽ6 − 2ẽ13  ẽ1 − ẽ7  ẽ1 + ẽ7 − 2ẽ14],
Φ̃ = −2ẽ1K1H1K2ẽ1^T + ẽ1(K1 + K2)H1ẽ8^T + ẽ8H1^T(K1 + K2)^T ẽ1^T − 2ẽ8H1ẽ8^T
  − 2ẽ3K1H2K2ẽ3^T + ẽ3(K1 + K2)H2ẽ9^T + ẽ9H2^T(K1 + K2)^T ẽ3^T − 2ẽ9H2ẽ9^T
  − 2ẽ5K1H3K2ẽ5^T + ẽ5(K1 + K2)H3ẽ10^T + ẽ10H3^T(K1 + K2)^T ẽ5^T − 2ẽ10H3ẽ10^T
  − 2ẽ7K1H4K2ẽ7^T + ẽ7(K1 + K2)H4ẽ11^T + ẽ11H4^T(K1 + K2)^T ẽ7^T − 2ẽ11H4ẽ11^T,
Υ̃ = Ξ̃1 + Ξ̃2 + Ξ̃3 + Ξ̃4 + Ξ̃5 + Φ̃,
Γ̃ = [−C  0  0  0  0  0  0  A  0  0  B  −I  0  0].

Now, we have the following corollary.

Corollary 3.1 For given scalars h1, h2, µ1, µ2 and diagonal matrices K1 = diag{k1^−, k2^−, ..., kn^−} and K2 = diag{k1^+, k2^+, ..., kn^+}, the system (28) is globally asymptotically stable if there exist symmetric positive definite matrices P ∈ R^{2n×2n}, Qi ∈ R^{2n×2n}, Ri ∈ R^{n×n} (i = 1, 2, 3), S ∈ R^{n×n}, positive diagonal matrices Ω, Λ, Hi (i = 1, 2, 3, 4), and any matrix T ∈ R^{2n×2n} such that the following LMIs hold for all h(t) ∈ [0, h]:

(Γ̃^⊥)^T Υ̃Γ̃^⊥ < 0,   (29)

[ S̄  T ; ∗  S̄ ] ≥ 0,   (30)

where S̄ = [ S  0 ; ∗  3S ].

Remark 3.2 The reduced conservatism of the proposed stability criteria over [31-34] relies on two aspects. First, Lemma 2.1 combined with the reciprocally convex approach [35], that is, Lemma 2.2, is used to deal with the term ∫_{t−h}^{t} ẋ^T(s)Sẋ(s)ds. Second, in constructing the Lyapunov-Krasovskii functional, more information about the activation function is used; that is, V3 plays a key role in enlarging the feasible region of the stability criterion.

Table 1: Upper delay bound h2 for different h1 under µ1 = 0.7 and µ2 = 0.1.

h1              0.8      1        1.2
[31]            1.5666   1.3668   1.1664
[32]            2.0164   1.8203   1.6197
[33]            1.9528   1.7992   1.6441
[34]            1.9666   1.8351   1.6803
Corollary 3.1   2.5191   2.3191   2.1191
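Condition (30) is a standard semidefinite constraint; once candidate S and T are produced by an SDP solver, it can be verified by a symmetric-eigenvalue test. A sketch with hypothetical 2 × 2 values (n = 2), not the paper's solver output:

```python
import numpy as np

# Hypothetical candidate values just to illustrate checking LMI (30).
S = np.eye(2)
T = 0.1 * np.eye(4)

Z = np.zeros((2, 2))
Sbar = np.block([[S, Z], [Z, 3 * S]])        # Sbar = diag{S, 3S}
M = np.block([[Sbar, T], [T.T, Sbar]])       # the block matrix in (30)

assert np.allclose(M, M.T)                   # symmetry
assert np.min(np.linalg.eigvalsh(M)) >= 0    # LMI (30) holds
```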
Remark 3.3 Differently from the Lyapunov-Krasovskii functionals constructed in [31-34], the terms ∫_{t−h1}^{t} ẋ^T(s)S1ẋ(s)ds and ∫_{t−h2}^{t} ẋ^T(s)S2ẋ(s)ds are not considered here, since they do not affect the result obtained by the method employed in this paper.
4
Numerical examples
In this section, two numerical examples are given to show the effectiveness of the proposed method.

Example 4.1 Consider the system (28) with the following parameters:

C = [ 2  0 ; 0  2 ],  A = [ 1  1 ; −1  −1 ],  B = [ 0.88  1 ; 0  1 ],
K1 = [ 0  0 ; 0  0 ],  K2 = [ 0.4  0 ; 0  0.8 ].

The maximum values of the upper bound h2, compared with the results in [31-34] for different h1 under µ1 = 0.7, µ2 = 0.1 and µ1 = 0.7, µ2 = 0.2, are listed in Table 1 and Table 2, respectively. From Table 1 and Table 2, one can see clearly that the results obtained by Corollary 3.1 provide larger admissible upper bounds than the stability criteria in [31-34].
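The delayed dynamics (28) with the Example 4.1 parameters can also be simulated directly. The sketch below uses a forward-Euler scheme with a history buffer; the activation f and the delay signals h1(t), h2(t) are hypothetical choices that respect the sector bounds K1 = 0, K2 = diag{0.4, 0.8} and the delay bounds of Table 1 (h1 = 0.8, µ1 = 0.7, µ2 = 0.1, h2 ≤ 2.5191):

```python
import numpy as np

C = np.diag([2.0, 2.0])
A = np.array([[1.0, 1.0], [-1.0, -1.0]])
B = np.array([[0.88, 1.0], [0.0, 1.0]])

def f(x):  # hypothetical activation within the sector [0, K2]
    return np.array([0.2, 0.4]) * (np.abs(x + 1) - np.abs(x - 1))

def h(t):  # h(t) = h1(t) + h2(t); h1 <= 0.8, dh1 <= 0.7; h2 <= 2.5, dh2 <= 0.1
    return (0.4 + 0.4 * np.sin(1.75 * t)) + (1.25 + 1.25 * np.sin(0.08 * t))

dt, T = 1e-3, 30.0
n_steps = int(T / dt)
hist = int(3.4 / dt)              # history buffer longer than the max delay
x = np.zeros((n_steps + hist, 2))
x[:hist] = [0.5, -0.5]            # constant initial condition on [-h, 0]

for k in range(hist, n_steps + hist):
    t = (k - hist) * dt
    xd = x[k - int(round(h(t) / dt))]          # x(t - h(t))
    x[k] = x[k - 1] + dt * (-C @ x[k - 1] + A @ f(x[k - 1]) + B @ f(xd))

print(np.linalg.norm(x[-1]))
```

Consistent with Corollary 3.1, the state norm decays toward zero.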
Table 2: Upper delay bound h2 for different h1 under µ1 = 0.7 and µ2 = 0.2.

h1              0.8      1        1.2
[31]            0.8515   0.6596   0.4616
[32]            0.8703   0.6713   0.4715
[33]            1.1364   0.9454   0.7207
[34]            1.1296   0.9603   0.7743
Corollary 3.1   1.3390   1.1390   0.9390
Example 4.2 Consider the system (1) with the following parameters:

C = [ 1  0 ; 0  1 ],  A = [ −1  0.5 ; 0.5  −1.5 ],  B = [ −2  0.5 ; 0.5  −2 ],  D = [ 0  0 ; −0.1  −0.1 ],
E1 = [ 1  0 ; 0  0.5 ],  E2 = [ 0.5  0 ; 0  1 ],  E3 = [ 0.1  0.1 ; 0  0 ],
K1 = [ 0  0 ; 0  0 ],  K2 = [ 0.4  0 ; 0  0.8 ].

Taking the activation functions as f1(s) = 0.2(|s + 1| − |s − 1|), f2(s) = 0.4(|s + 1| − |s − 1|), with h1 = 0.5, µ1 = 0.7, µ2 = 0.1, by solving LMIs (10) and (11) in Theorem 3.1 we obtain that the allowable upper bound of h2 is 0.7688. Then, letting the two additive time-varying delay components be h1(t) = 0.25 + 0.25 sin(2.8t), h2(t) = 0.3844 + 0.3844 sin(0.2601t), and the initial condition x(0) = [0.5, −0.5]^T, a simulation of the state responses is given in Figure 1. The corresponding feasible solutions of LMIs (10) and (11) are as follows:

P = [ 0.1478  0.0219  0.0038  0.0121 ; 0.0219  0.1432  0.0006  0.0079 ; 0.0038  0.0006  0.0414  0.0067 ; 0.0121  0.0079  0.0067  0.0449 ],
Ω = diag{0.4240, 0.3737},  Λ = diag{0.1484, 0.1360},
Q1 = [ 0.0199  0.0080  −0.0510  −0.0103 ; 0.0080  0.0032  −0.0203  −0.0041 ; −0.0510  −0.0203  0.2917  0.0584 ; −0.0103  −0.0041  0.0584  0.0117 ],
Q2 = [ 0.0163  0.0065  −0.0335  −0.0068 ; 0.0065  0.0026  −0.0133  −0.0027 ; −0.0335  −0.0133  0.1962  0.0393 ; −0.0068  −0.0027  0.0393  0.0079 ],
Q3 = [ 0.0204  0.0082  −0.0836  −0.0330 ; 0.0082  0.0033  −0.0337  −0.0132 ; −0.0836  −0.0337  0.9233  −0.4238 ; −0.0330  −0.0132  −0.4238  1.1541 ],
R1 = R2 = [ 0.0104  0.0041 ; 0.0041  0.0017 ],  R3 = [ 0.0630  −0.0214 ; −0.0214  0.2182 ],  S = [ 0.0360  −0.0062 ; −0.0062  0.0901 ],
T = [ −0.0331  0.0081  −0.0014  −0.0033 ; 0.0072  −0.0893  0.0003  −0.0027 ; −0.0043  −0.0034  −0.0182  −0.0115 ; 0.0172  0.0139  0.0028  −0.0213 ],
H1 = diag{1.0664, 0.4603},  H2 = 10^{−4} diag{0.6723, 0.0260},  H3 = 10^{−3} diag{1.2382, 0.5475},  H4 = diag{0.2155, 0.0079},  ε = 0.0599.

5
Conclusion
In this paper, the delay-dependent stability problem for uncertain delayed neural networks with two additive time-varying delay components has been studied. Based on a modified augmented Lyapunov functional, some new delay-dependent asymptotic stability criteria have been derived by using the reciprocally convex method and a more accurate upper bound for the integral term. The effectiveness of the theoretical results has been demonstrated by two numerical examples.
6
Acknowledgments
This research was supported by the Daegu University Research Scholarship Grants and the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2013R1A1A2063350 and 2014R1A1A4A01003860).
References

[1] J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proc. Natl. Acad. Sci. U.S.A., vol. 81, pp. 3088-3092, 1984.
[2] K. Gopalsamy, X.-Z. He, "Delay-independent stability in bidirectional associative memory networks," IEEE Trans. Neural Netw., vol. 5, no. 6, pp. 998-1002, Nov. 1994.
[3] Y. Tang, H. Gao, J. Kurths, "Multiobjective identification of controlling areas in neuronal networks," IEEE/ACM Trans. Comput. Biol. Bioinform., vol. 10, no. 3, pp. 708-720, May-June 2013.
[4] Y. Tang, W. K. Wong, "Distributed synchronization of coupled neural networks via randomly occurring control," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 3, pp. 435-447, Mar. 2013.
[5] R. Lu, H. Su, J. Chu, A. Xue, "A simple approach to robust D-stability analysis for uncertain singular delay systems," Asian J. Control, vol. 11, no. 4, pp. 411-419, 2009.
[6] R. Lu, Y. Xu, A. Xue, "H∞ filtering for singular systems with communication delays," Signal Processing, vol. 90, no. 4, pp. 1240-1248, 2010.
[7] R. Lu, H. Li, Y. Zhu, "Quantized H∞ filtering for singular time-varying delay systems with unreliable communication channel," Circuits Syst. Signal Process., vol. 31, no. 2, pp. 521-538, 2012.
[8] D. Zhang, L. Yu, "Exponential state estimation for Markovian jumping neural networks with time-varying discrete and distributed delays," Neural Networks, vol. 35, pp. 103-111, 2012.
[9] D. Zhang, W. J. Cai, Q. G. Wang, "Mixed H∞ and passivity based state estimation for fuzzy neural networks with Markovian-type estimator gain change," Neurocomputing, vol. 139, pp. 321-327, 2014.
[10] R. Lu, H. Wu, J. Bai, "New delay-dependent robust stability criteria for uncertain neutral systems with mixed delays," J. Franklin Inst., vol. 351, no. 3, pp. 1386-1399, 2014.
[11] S. Mou, H. Gao, J. Lam, "A new criterion of delay-dependent asymptotic stability for Hopfield neural networks with time delay," IEEE Trans. Neural Netw., vol. 19, no. 3, pp. 532-535, Mar. 2008.
[12] S. Xiao, X. Zhang, "New globally asymptotic stability criteria for delayed cellular neural networks," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 56, no. 8, pp. 659-663, Aug. 2009.
[13] H.-B. Zeng, Y. He, M. Wu, S.-P. Xiao, "Passivity analysis for neural networks with a time-varying delay," Neurocomputing, vol. 74, no. 5, pp. 730-734, 2011.
[14] H.-B. Zeng, S.-P. Xiao, B. Liu, "New stability criteria for recurrent neural networks with a time-varying delay," Int. J. Autom. Comput., vol. 8, no. 1, pp. 128-133, 2011.
[15] H. Zeng, Y. He, M. Wu, C. Zhang, "Complete delay-decomposing approach to asymptotic stability for neural networks with time-varying delays," IEEE Trans. Neural Netw., vol. 22, no. 5, pp. 806-812, May 2011.
[16] Y. Liu, W. B. Ma, M. S. Mahmoud, "New results for global exponential stability of neural networks with varying delays," Neurocomputing, vol. 97, pp. 357-363, Nov. 2012.
[17] X. Zhou, J. Tian, H. Ma, S. Zhong, "Improved delay-dependent stability criteria for recurrent neural networks with time-varying delays," Neurocomputing, in press, 2013.
[18] O. M. Kwon, Ju H. Park, S. M. Lee, E. J. Cha, "Analysis on delay-dependent stability for neural networks with time-varying delays," Neurocomputing, vol. 103, pp. 114-120, Mar. 2013.
[19] H. Zeng, Y. He, M. Wu, H.-Q. Xiao, "Improved conditions for passivity of neural networks with a time-varying delay," IEEE Trans. Cybern., vol. 44, no. 6, pp. 785-792, 2014.
[20] H. Huang, G. Feng, J. Cao, "Robust state estimation for uncertain neural networks with time-varying delay," IEEE Trans. Neural Netw., vol. 19, no. 8, pp. 1329-1339, 2008.
[21] O. M. Kwon, Ju H. Park, "New delay-dependent robust stability criterion for uncertain neural networks with time-varying delays," Appl. Math. Comput., vol. 205, no. 1, pp. 417-427, Nov. 2008.
[22] H. Chen, "New delay-dependent stability criteria for uncertain stochastic neural networks with discrete interval and distributed delay," Neurocomputing, vol. 101, pp. 1-9, Feb. 2013.
[23] S. Arik, "A new condition for robust stability of uncertain neural networks with time delays," Neurocomputing, vol. 128, pp. 476-482, Mar. 2014.
[24] S. Arik, "An improved robust stability result for uncertain neural networks with multiple time delays," Neural Networks, vol. 54, pp. 1-10, Jun. 2014.
[25] J. Lam, H. Gao, C. Wang, "Stability analysis for continuous systems with two additive time-varying delay components," Syst. Control Lett., vol. 56, no. 1, pp. 16-24, 2007.
[26] H. Gao, T. Chen, J. Lam, "A new delay system approach to network-based control," Automatica, vol. 44, no. 1, pp. 39-52, 2008.
[27] H. Wu, X. Liao, W. Feng, S. Guo, W. Zhang, "Robust stability analysis of uncertain systems with two additive time-varying delay components," Appl. Math. Model., vol. 33, no. 12, pp. 4345-4353, Dec. 2009.
[28] R. Dey, G. Ray, S. Ghosh, A. Rakshit, "Stability analysis for continuous system with additive time-varying delays: A less conservative result," Appl. Math. Comput., vol. 215, no. 10, pp. 3740-3745, Jan. 2010.
[29] H. Shao, Q.-L. Han, "On stabilization for systems with two additive time-varying input delays arising from networked control systems," J. Franklin Inst., vol. 349, no. 6, pp. 2033-2046, 2012.
[30] Y. Zhao, H. Gao, S. Mou, "Asymptotic stability analysis of neural networks with successive time delay components," Neurocomputing, vol. 71, no. 13-15, pp. 2848-2856, 2008.
[31] H. Shao, Q.-L. Han, "New delay-dependent stability criteria for neural networks with two additive time-varying delay components," IEEE Trans. Neural Netw., vol. 22, no. 5, pp. 812-818, 2011.
[32] J. Tian, S. Zhong, "Improved delay-dependent stability criteria for neural networks with two additive time-varying delay components," Neurocomputing, vol. 77, no. 1, pp. 114-119, Feb. 2012.
[33] N. Xiao, Y. Jia, "New approaches on stability criteria for neural networks with two additive time-varying delay components," Neurocomputing, vol. 118, pp. 150-156, Oct. 2013.
[34] C.-K. Zhang, Y. He, L. Jiang, Q. H. Wu, M. Wu, "Delay-dependent stability criteria for generalized neural networks with two delay components," IEEE Trans. Neural Netw. Learn. Syst., in press, 2013.
[35] P. Park, J. W. Ko, C. Jeong, "Reciprocally convex approach to stability of systems with time-varying delays," Automatica, vol. 47, no. 1, pp. 235-238, Jan. 2011.
[36] A. Seuret, F. Gouaisbaut, "Wirtinger-based integral inequality: Application to time-delay systems," Automatica, vol. 49, no. 9, pp. 2860-2866, 2013.
[37] R. E. Skelton, T. Iwasaki, K. M. Grigoriadis, A Unified Algebraic Approach to Linear Control Design, Taylor & Francis, New York, 1997.
[38] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, 1994.
Figure 1: State responses of x(t) under x(0) = [0.5, −0.5]^T in Example 4.2.