Stability Analysis of Delayed Hopfield Neural Networks with Impulses via Inequality Techniques

Adnène Arbi*, Chaouki Aouiti*, Farouk Chérif**, Abderrahmane Touati* and Adel M. Alimi***

* University of Carthage, Department of Mathematics, Faculty of Sciences of Bizerta, BP W, Jarzouna 7021, Bizerta, Tunisia.
** University of Sousse, Department of Computer Science, ISSATS, Laboratory of Math Physics, Special Functions and Applications LR11ES35, Ecole Supérieure des Sciences et de Technologie, Sousse 4002, Tunisia.
*** University of Sfax, ENIS, REGIM-Lab. (Research Groups in Intelligent Machines), BP 1173, Sfax 3038, Tunisia.
Abstract

In this paper, the stability of a class of time-delay Hopfield neural networks with impulsive perturbations is investigated. The existence of a unique equilibrium point is proved by means of the Arzelà-Ascoli theorem and Rolle's theorem. Sufficient criteria for the uniform stability, uniform asymptotic stability, global asymptotic stability and global exponential stability of the system are derived by the Lyapunov functional method and the linear matrix inequality approach, through an estimate of the upper bound of the derivative of the Lyapunov functional. The exponential convergence rate towards the equilibrium point is also estimated. Finally, we analyse and interpret some numerical examples showing the efficiency of our theoretical results.

Keywords: Hopfield neural networks; Lyapunov functionals; Arzelà-Ascoli theorem; Rolle's theorem; Time-varying delay; Impulse; Uniform stability; Uniform asymptotic stability; Global asymptotic stability; Global exponential stability.
Email addresses: [email protected] (A. Arbi), [email protected] (C. Aouiti), [email protected] (F. Chérif), [email protected] (A. Touati), [email protected] (A.M. Alimi).

1. Introduction

Hopfield neural networks (HNNs) were first introduced by Hopfield in 1982 [16]. They have been extensively studied and developed in recent years, and they have attracted much attention in the literature on Hopfield neural networks
with time delays ([23], [31]). There has been a steady increase in the excitement and interest in the potential applications of Hopfield neural networks (HNNs), and so far HNNs have found many important applications in pattern recognition, image processing, associative memory, optimization problems, automatic control, model identification, etc. (we refer the reader to [13], [17], [25], [38], [40]). Although most neural systems are realized by software simulations, only hardware implementation can fully exploit their advantages of parallel processing and error tolerance. Until now, efforts to construct hardware realizations of artificial neural networks were devoted primarily to the implementation of models which ignore the dynamical behaviors of neural networks. As is well known, stability is one of the preconditions in the design, application and VLSI implementation of neural networks; therefore, the dynamical behaviors of neural networks have received a great deal of interest ([1], [5], [24], [27]). In implementations of artificial neural networks, time delays are unavoidable due to the finite switching speed of the amplifiers. The existence of time delays may cause oscillations and instability of neural networks. Therefore, it is important to investigate the stability of delayed neural networks ([2], [3], [4], [7], [8], [17], [18], [20], [21], [22], [23], [35], [36], [39], [41]). Besides the delay effect, impulsive phenomena can be found in a wide variety of evolutionary processes, particularly in some biological systems such as biological neural networks and bursting rhythm models in pathology, as well as in optimal control models in economics, frequency-modulated signal processing systems, and flying object motions, in which many sudden and sharp changes occur instantaneously, in the form of impulses. Examples of impulsive phenomena can also be found in other fields such as information science, electronics, automatic control systems, computer networking, artificial intelligence, robotics, and telecommunications. Many interesting results on impulsive effects have been obtained ([15], [26], [28], [34], [37]). As artificial electronic systems, neural networks are often subject to impulsive perturbations, which can affect the dynamical behaviors of the systems just as time delays do. However, only a few authors have considered both impulsive effects and delay effects on the stability of neural networks ([15], [28], [34], [37]). As pointed out in [12], stability analysis for delay systems can be classified into two categories according to its dependence on information about the size of the delays, namely delay-independent stability criteria and delay-dependent stability criteria. A delay-independent criterion does not involve the size of the delays, whereas a delay-dependent criterion does. In general, for small delays, delay-independent criteria are likely to be conservative. Therefore, increasing attention has been focused on delay-dependent stability analysis of delay differential systems ([9], [12], [19], [33]). However, most of these results have focused on deterministic systems with delays; to the best of the authors' knowledge, the combined effect of impulses and time-varying delays has received much less attention, and this paper is an attempt towards this goal. By using a Lyapunov-Krasovskii-type functional and inequality techniques, we obtain sufficient conditions for global exponential stability and global asymptotic stability of impulsive neural networks with time-varying delays.
These conditions can be divided into two classes: delay-dependent criteria and delay-independent ones. Since delay-dependent stability criteria make use of information on the length of the delays, they are less conservative than delay-independent ones. The approach used in this paper consists of two steps.

• Step 1: Prove the existence and uniqueness of the equilibrium.
• Step 2: Prove various types of stability of the equilibrium.

The paper is organized as follows. In the following section we introduce some notations, definitions and lemmas. In Section 3, the existence and uniqueness of the equilibrium point are proved. In Section 4, we consider the uniform asymptotic stability, the global exponential stability and the global asymptotic stability of the equilibrium of (1); three theorems and four corollaries are given (two theorems belong to the delay-dependent category and one to the delay-independent category). The new stability conditions are simpler and less restrictive versions of some recent results. In Section 5, some examples are given to illustrate the effectiveness of our theoretical results. Finally, some conclusions are drawn in Section 6.

2. Preliminaries

Let $\mathbb{R}$ denote the set of real numbers, $\mathbb{Z}_+$ the positive integers and $\mathbb{R}^n$ the $n$-dimensional real space equipped with the Euclidean norm $\|\cdot\|$. The identity matrix, with appropriate dimensions, is denoted by $Id$ and $\mathrm{diag}(\ldots)$ denotes the block diagonal matrix. Consider the following delayed HNN model with impulses:
$$
\begin{cases}
\dot{x}_i(t) = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij} g_j(x_j(t-\tau(t))) + I_i, & \text{if } t \neq t_k,\\[4pt]
\Delta x_i|_{t=t_k} = x_i(t_k) - x_i(t_k^-), & i = 1,\ldots,n,\ k \in \mathbb{Z}_+,
\end{cases}
\tag{1}
$$
where $n \ge 2$ corresponds to the number of units in the neural network; the impulsive times $t_k$ satisfy $0 \le t_0 < t_1 < \cdots < t_k < \cdots$, $\lim_{k\to+\infty} t_k = +\infty$; $x_i(t)$ corresponds to the state of unit $i$ at time $t$; $c_i$ is a positive constant; $f_j$, $g_j$ denote, respectively, the activation responses of unit $j$ to its incoming potentials at times $t$ and $t-\tau(t)$; the constant $a_{ij}$ denotes the synaptic connection weight of unit $j$ on unit $i$ at time $t$; the constant $b_{ij}$ denotes the synaptic connection weight of unit $j$ on unit $i$ at time $t-\tau(t)$; $I_i$ is the input of unit $i$; $\tau(t)$ is the transmission delay, with $0 < \tau(t) \le \tau$ and $\dot{\tau}(t) \le \rho < 1$ for $t \ge t_0$, where $\tau$ and $\rho$ are constants. For all $r > 0$ and $D \subseteq \mathbb{R}^n$, define:
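For readers who want to experiment with model (1), the following Python sketch integrates a small instance with an explicit Euler scheme and a history buffer for the delayed term. It is purely illustrative: the network size, weights, activations, delay and impulse factors below are placeholder values chosen for the demonstration, not parameters used elsewhere in the paper.

```python
import numpy as np

# Illustrative simulation of the impulsive delayed HNN (1); all parameters below
# are placeholder values, not values taken from the paper.
n, h, T = 2, 0.001, 20.0                      # units, Euler step, horizon
c = np.array([2.0, 1.5])                      # self-decay rates c_i > 0
A = np.array([[0.1, -0.2], [0.2, 0.1]])       # weights a_ij (undelayed term)
B = np.array([[0.05, 0.1], [-0.1, 0.05]])     # weights b_ij (delayed term)
I = np.array([0.5, -0.3])                     # external inputs I_i
tau = lambda t: 0.5 + 0.2 * np.sin(t)         # time-varying delay with tau(t) <= 0.7
f = g = np.tanh                               # activations satisfying (H1)-(H2) with L_i = M_i = 1
t_imp = np.arange(1.0, T, 1.0)                # impulse instants t_k
d = -0.5                                      # impulse factor: x(t_k) = (1 + d) x(t_k^-)

steps = int(T / h)
hist = int(0.7 / h) + 1                       # history length covering the maximal delay
x = np.zeros((steps + hist, n)); x[:hist] = 0.2   # constant initial function on [t0 - tau, t0]
imp_idx = set((hist + t_imp / h).astype(int))

for k in range(hist, hist + steps - 1):
    t = (k - hist) * h
    lag = int(tau(t) / h)
    x_del = x[k - lag]                        # x(t - tau(t)) read from the buffer
    dx = -c * x[k] + A @ f(x[k]) + B @ g(x_del) + I
    x[k + 1] = x[k] + h * dx
    if k + 1 in imp_idx:                      # impulsive jump at t_k
        x[k + 1] = (1.0 + d) * x[k + 1]

print("state near the end of the run:", x[-2])
```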
$PC([-r,0],D) = \{\psi : [-r,0] \to D \mid \psi$ is continuous everywhere except at a finite number of points $t_k$, at which $\psi(t_k^+)$ and $\psi(t_k^-)$ exist and $\psi(t_k^+) = \psi(t_k)\}$. For $\psi \in PC([-r,0],D)$, the norm of $\psi$ is defined by $\|\psi\|_r = \sup_{-r\le\theta\le 0}\|\psi(\theta)\|$.

For any $t_0 \ge 0$, let $PC_\delta(t_0) = \{\psi \in PC([-\tau,0],\mathbb{R}^n) : \|\psi\|_\tau < \delta\}$. The initial conditions associated with system (1) are of the form
$$
x(s) = \phi(s), \quad s \in [t_0-\tau, t_0],
\tag{2}
$$
where $x(s) = (x_1(s), x_2(s), \ldots, x_n(s))^T$ and $\phi(s) = (\phi_1(s), \phi_2(s), \ldots, \phi_n(s))^T \in PC([-\tau,0],\mathbb{R}^n)$.

In this paper, we assume that some conditions are satisfied so that the equilibrium point of system (1) exists; see Theorem 3.2 and Theorem 3.3 below. Assume that $\bar{x} = (\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_n)$ is the equilibrium point of system (1). The impulsive operator is viewed as a perturbation of the equilibrium point $\bar{x}$ of the system without impulsive effects. We assume that
$$
\Delta x_i|_{t=t_k} = x_i(t_k) - x_i(t_k^-) = d_k^{(i)}\big(x_i(t_k^-) - \bar{x}_i\big), \quad d_k^{(i)} \in \mathbb{R},\ i = 1,2,\ldots,n,\ k = 1,2,\ldots
$$
Henceforth we assume that each activation function $f_i(\cdot)$, $g_i(\cdot)$, $i = 1,2,\ldots,n$, satisfies the following conditions for all $u, y \in \mathbb{R}$:

(H1) $|f_i(u+y) - f_i(u)| \le L_i |y|$ and $f_i(0) = 0$,
(H2) $|g_i(u+y) - g_i(u)| \le M_i |y|$ and $g_i(0) = 0$, with $L_i, M_i \ge 0$,

and we set:

(H3) $c_{\max} = \max_i c_i$, $c_{\min} = \min_i c_i$, $L = \max_i L_i$, $M = \max_i M_i$, $i \in \Lambda = \{1,2,\ldots,n\}$,
(H4) $D_k = \mathrm{diag}\big(1+d_k^{(1)}, 1+d_k^{(2)}, \ldots, 1+d_k^{(n)}\big)$.

Since $\bar{x}$ is an equilibrium point of system (1), the transformation $y_i = x_i - \bar{x}_i$, $i = 1,\ldots,n$, transforms system (1) into the following system:
$$
\begin{cases}
\dot{y}_i(t) = -c_i y_i(t) + \sum_{j=1}^{n} a_{ij} F_j(y_j(t)) + \sum_{j=1}^{n} b_{ij} G_j(y_j(t-\tau(t))), & \text{if } t \neq t_k,\ t \ge t_0,\\[4pt]
y_i(t_k) = \big(1 + d_k^{(i)}\big)\, y_i(t_k^-), & i = 1,\ldots,n,\ k \in \mathbb{Z}_+,
\end{cases}
\tag{3}
$$
where $F_j(y_j(t)) = f_j(\bar{x}_j + y_j(t)) - f_j(\bar{x}_j)$ and $G_j(y_j(t-\tau(t))) = g_j(\bar{x}_j + y_j(t-\tau(t))) - g_j(\bar{x}_j)$. To prove the stability of $\bar{x}$ for system (1), it suffices to prove the stability of the zero solution of system (3). Using (H4), system (3) can be written in the following matrix-vector form:
$$
\begin{cases}
\dot{y}(t) = -C y(t) + A F(y(t)) + B G(y(t-\tau(t))), & \text{if } t \neq t_k,\ t \ge t_0,\\
y(t_k) = D_k\, y(t_k^-), & k \in \mathbb{Z}_+,\\
y(t_0 + \theta) = \varphi(\theta),
\end{cases}
\tag{4}
$$
where $y(t) = (y_1(t), \ldots, y_n(t))^T$; $y(t-\tau(t)) = (y_1(t-\tau(t)), \ldots, y_n(t-\tau(t)))^T$; $C = \mathrm{diag}(c_1, \ldots, c_n)$; $A = (a_{ij})_{n\times n}$; $B = (b_{ij})_{n\times n}$; $F(y) = (F_1(y_1), F_2(y_2), \ldots, F_n(y_n))^T$; $G(y) = (G_1(y_1), G_2(y_2), \ldots, G_n(y_n))^T$.

In the following, $X^T$ and $X^{-1}$ denote the transpose and the inverse of a square matrix $X$. We write $X > 0$ (respectively $X < 0$, $X \ge 0$, $X \le 0$) to denote that the matrix $X$ is symmetric and positive definite (respectively negative definite, positive semidefinite, negative semidefinite). Let $\lambda_{\max}(X)$ and $\lambda_{\min}(X)$ denote, respectively, the largest and smallest eigenvalue of the matrix $X$.

Remark 2.1. From (H1) and (H2), it is clear that $f(\cdot)$ and $g(\cdot)$ satisfy $|f_i(y)| \le L_i|y|$ and $|g_i(y)| \le M_i|y|$, $L_i, M_i > 0$. Obviously, $F_j(\cdot)$ and $G_j(\cdot)$ enjoy the same properties.

Some definitions and lemmas concerning the stability of system (1) at its equilibrium point are introduced as follows.

Definition 2.1. ([11]) $\bar{x} \in \mathbb{R}^n$ is said to be an equilibrium point of system (1) if
$$
-c_i \bar{x}_i + \sum_{j=1}^{n} a_{ij} f_j(\bar{x}_j) + \sum_{j=1}^{n} b_{ij} g_j(\bar{x}_j) + I_i = 0, \quad i = 1,\ldots,n.
$$
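As a side remark, an equilibrium of (1) can also be located numerically: at an equilibrium the delayed and undelayed arguments coincide, so $\bar{x}$ solves $C\bar{x} = A f(\bar{x}) + B g(\bar{x}) + I$. The sketch below runs a plain Picard (fixed-point) iteration on this relation for placeholder data; it is only illustrative and is not the constructive argument used in Section 3, which relies on the Arzelà-Ascoli and Rolle theorems.

```python
import numpy as np

# Illustrative fixed-point search for an equilibrium of (1); the parameters are
# placeholders, and convergence is only guaranteed when the map is a contraction.
C = np.diag([2.0, 1.5])
A = np.array([[0.1, -0.2], [0.2, 0.1]])
B = np.array([[0.05, 0.1], [-0.1, 0.05]])
I = np.array([0.5, -0.3])
f = g = np.tanh

x = np.zeros(2)
for _ in range(200):                                      # plain Picard iteration
    x_new = np.linalg.solve(C, A @ f(x) + B @ g(x) + I)   # x <- C^{-1}(A f(x) + B g(x) + I)
    if np.max(np.abs(x_new - x)) < 1e-12:
        break
    x = x_new

residual = C @ x - (A @ f(x) + B @ g(x) + I)
print("equilibrium estimate:", x, "residual:", residual)
```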
Now, we need the following basic lemmas used in our work.

Lemma 2.1. Let $X \in \mathbb{R}^{n\times n}$ be a symmetric matrix. Then $\lambda_{\min}(X)\, a^T a \le a^T X a \le \lambda_{\max}(X)\, a^T a$ for any $a \in \mathbb{R}^n$.

Proof. See [6].

Lemma 2.2. For any $a, b \in \mathbb{R}^n$, the inequality $\pm 2 a^T b \le a^T X a + b^T X^{-1} b$ holds, where $X$ is any $n \times n$ matrix with $X > 0$.

Proof. Proved in [32].

[Figure 1 about here: diagram relating Stability, Uniform Stability (US), Asymptotic Stability (AS), Uniform Asymptotic Stability (UAS), Global Uniform Stability (GUS), Global Asymptotic Stability (GAS), Exponential Stability (ES) and Global Exponential Stability (GES).]

Figure 1: Relationship between the various types of stability.
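Lemmas 2.1 and 2.2 are the only matrix inequalities used in the sequel; the following lines give a quick numerical sanity check of both on random data. This is purely illustrative and does not replace the proofs cited from [6] and [32].

```python
import numpy as np

# Numerical sanity check of Lemmas 2.1 and 2.2 on a random sample.
rng = np.random.default_rng(0)
n = 4
Mrand = rng.standard_normal((n, n))
X = Mrand @ Mrand.T + n * np.eye(n)          # symmetric positive definite X
a, b = rng.standard_normal(n), rng.standard_normal(n)

lam = np.linalg.eigvalsh(X)                  # eigenvalues in ascending order
quad = a @ X @ a
# Lemma 2.1: lambda_min(X) a^T a <= a^T X a <= lambda_max(X) a^T a
assert lam[0] * (a @ a) <= quad <= lam[-1] * (a @ a)

# Lemma 2.2: +-2 a^T b <= a^T X a + b^T X^{-1} b for X > 0
bound = a @ X @ a + b @ np.linalg.solve(X, b)
assert 2 * abs(a @ b) <= bound
print("both inequalities hold on this sample")
```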
Various types of stability [14]. Let $y(t) = y(t_0,\varphi)(t)$ be the solution of (3) through $(t_0,\varphi)$. Then the zero solution of (3) is said to be (see Figure 1):

Stability: (P1) stable, if for any $\varepsilon > 0$ and $t_0 \ge 0$, there exists some $\delta(\varepsilon, t_0) > 0$ such that $\varphi \in PC_\delta(t_0)$ implies $\|y(t_0,\varphi)(t)\| < \varepsilon$, $t \ge t_0$.

Uniform stability: (P2) uniformly stable, if the $\delta$ in (P1) is independent of $t_0$.

Uniform attractivity: (P3) uniformly attractive, if there exists some $\delta > 0$ such that for any $\varepsilon > 0$ there exists some $T = T(\varepsilon,\delta) > 0$ such that $t_0 \ge 0$ and $\varphi \in PC_\delta(t_0)$ imply $\|y(t_0,\varphi)(t)\| < \varepsilon$, $t \ge t_0 + T$.

Uniform asymptotic stability: (P4) uniformly asymptotically stable, if (P2) and (P3) hold.

Global asymptotic stability: (P5) globally asymptotically stable, if (P1) holds and for any given initial value $y_0 = \varphi$, $\|y(t_0,\varphi)(t)\| \to 0$ as $t \to +\infty$.

Global exponential stability: (P6) globally exponentially stable, if there exist constants $\alpha > 0$, $\beta \ge 1$ such that for any initial value $\varphi$, $\|y(t_0,\varphi)(t)\| \le \beta \|\varphi\|_\tau e^{-\alpha(t-t_0)}$.

3. Existence and uniqueness of the Hopfield neural network equilibrium

Let $J \subset \mathbb{R}_+$ be an interval of the form $[a,b[$ where $0 \le a < b \le \infty$, and let $D \subset \mathbb{R}^n$ be an open set. Given a constant $r > 0$, if $x \in PC([t_0-r,\infty[, \mathbb{R}^n)$ where $t_0 \in \mathbb{R}_+$, then for each $t \ge t_0$ we define $x_t \in PC([-r,0],\mathbb{R}^n)$ by $x_t(s) = x(t+s)$ for $-r \le s \le 0$. In addition, we define the norm $\|x(t)\|_{\{\xi,\infty\}} = \max_{i=1,\ldots,n} |\xi_i^{-1} x_i(t)|$, where $\xi_i > 0$.
We introduce some definitions and a lemma as follows [29]:

Definition 3.1. A functional $H : J \times PC([-r,0],D) \to \mathbb{R}^n$ is said to be composite-PC if for each $t_0 \in J$ and $0 < \alpha \le \infty$ with $[t_0, t_0+\alpha[ \subset J$, whenever $x \in PC([t_0-r, t_0+\alpha], D)$, the composite function $K$ defined by $K(t) = H(t, x_t)$ is an element of the function class $PC([t_0, t_0+\alpha[, \mathbb{R}^n)$.

Definition 3.2. A functional $H : J \times PC([-r,0],D) \to \mathbb{R}^n$ is said to be quasi-bounded if for each $t_0 \in J$ and $0 < \alpha \le \infty$ with $[t_0, t_0+\alpha] \subset J$, and for each compact set $F \subset D$, there exists some $\ell > 0$ such that $\|H(t,\psi)\| \le \ell$ for all $(t,\psi) \in [t_0, t_0+\alpha] \times PC([-r,0],F)$.

Definition 3.3. A functional $H : J \times PC([-r,0],D) \to \mathbb{R}^n$ is said to be continuous in $\psi$ if for each fixed $t \in J$, $H(t,\psi)$ is a continuous function of $\psi$ on $PC([-r,0],D)$.

System (3) can be written as follows:
$$
\begin{cases}
\dot{x}(t) = H(t, x_t), & \text{if } t \neq t_k,\ t \ge t_0,\\
\Delta x|_{t=t_k} = x(t_k) - x(t_k^-), & k \in \mathbb{Z}_+,
\end{cases}
\tag{5}
$$
where $H(t, x_t) = -C x(t) + A F(x(t)) + B G(x(t-\tau(t)))$.

The following lemma introduces an equivalent integral formulation of system (3)-(2).
Lemma 3.1. Suppose $H$ is composite-PC. Then a function $x \in PC([t_0-r, t_0+\alpha], D)$, where $\alpha > 0$ and $[t_0, t_0+\alpha] \subset J$, that experiences the impulsive effects at the points $T = \{t_k\}_{k=1}^{m}$, where $t_0 < t_1 < t_2 < \cdots < t_m \le t_0+\alpha$, is a solution of (3)-(2) if and only if $x$ satisfies
$$
x(t) =
\begin{cases}
\phi(t-t_0), & t \in [t_0-r, t_0],\\[4pt]
\phi(0) + \int_{t_0}^{t} H(s, x_s)\,ds, & t \in\, ]t_0, t_i[,\\[4pt]
x(t_k^-) + I(t_k, x_{t_k^-}) + \int_{t_k}^{t} H(s, x_s)\,ds, & t \in [t_k, t_{k+1}[,\ k = i, i+1, \ldots,
\end{cases}
\tag{6}
$$
for $t \in [t_0-r, t_0+\alpha]$, where $t_0 \in [t_{i-1}, t_i[$ for some $i$, or, equivalently,
$$
x(t) =
\begin{cases}
\phi(t-t_0), & t \in [t_0-r, t_0],\\[4pt]
\phi(0) + \int_{t_0}^{t} H(s, x_s)\,ds + \sum_{\{k :\, t_k \in\, ]t_0, t]\}} I(t_k, x_{t_k^-}), & t \in\, ]t_0, t_0+\alpha].
\end{cases}
\tag{7}
$$

Remark 3.1. If $x$ is defined on an interval of the form $[t_0-r, t_0+\beta[$ for some $0 < \beta \le +\infty$, where $[t_0, t_0+\beta[ \subset J$, then Lemma 3.1 also gives the equivalent integral formulation of a solution of system (3)-(2).

Theorem 3.2. (Existence) Assume $H$ is composite-PC, quasi-bounded and continuous in its second variable. Then for each $(t_0,\phi) \in J \times PC([-r,0],D)$ there exists a solution $x = x(t_0,\phi)$ of system (3)-(2) on $[t_0-r, t_0+\beta]$ for some $\beta > 0$.

Proof. See Appendix A.

Theorem 3.3. (Uniqueness) Let $a_{ii}^+ = \max\{0, a_{ii}\}$. Suppose that assumptions (H1)-(H2) hold. If there are positive constants $\xi_1, \ldots, \xi_n$ and $\alpha > 0$ such that
$$
\xi_i\big(-c_i + \alpha + a_{ii}^+ L_i\big) + \sum_{\substack{j=1\\ j\neq i}}^{n} \xi_j |a_{ij}| L_j + \sum_{j=1}^{n} \xi_j e^{\alpha\tau} |b_{ij}| M_j \le 0,
\tag{8}
$$
then the HNN system (1)-(2) has a unique equilibrium.

Proof. See Appendix B.

4. Stability criteria for HNNs

We start by establishing a theorem which proves the uniform stability of system (1).

Theorem 4.1. Assume that there exists an $n \times n$ symmetric positive definite matrix $Q$ which verifies the following conditions:

(i) $\dfrac{\lambda_{\max}(Q)}{n} \le \min_{i,j=1,\ldots,n} \dfrac{|b_{ij}|}{c_i}$,

(ii) $\displaystyle \max_{1\le i\le n}\frac{1}{c_i}\sum_{j=1}^{n} a_{ij} + \max_{1\le j\le n} L_j^2 \sum_{i=1}^{n} \frac{a_{ij}}{c_i} + \Big(\prod_{s=1}^{k} \xi_s c_{\max}\Big) \max_{1\le j\le n} M_j^2 \sum_{i=1}^{n} \frac{|b_{ij}|}{c_i} + \Big(\prod_{s=1}^{k} \xi_s c_{\max}\Big)^{-1} \lambda_{\max}\big(C^{-1} B Q^{-1} B^T C^{-1}\big) < 2$ for all $k \in \mathbb{Z}_+$,

where $\sup_{k\in\mathbb{Z}_+} \prod_{s=1}^{k} \xi_s c_{\max} < \infty$ and $\xi_s$ is the largest eigenvalue of $D_s C^{-1} D_s$.

Then the equilibrium point of system (1) is uniformly stable.

Proof. See Appendix C.

If $Q = \frac{1}{60} Id$ in Theorem 4.1, then we have:

Corollary 4.2. The equilibrium point of system (1) is uniformly stable if the following conditions are satisfied:

(i) $\dfrac{1}{60 n} \le \min_{i,j=1,\ldots,n} \dfrac{|b_{ij}|}{c_i}$,

(ii) $\displaystyle \max_{1\le i\le n}\frac{1}{c_i}\sum_{j=1}^{n} a_{ij} + \max_{1\le j\le n} L_j^2 \sum_{i=1}^{n} \frac{a_{ij}}{c_i} + \Big(\prod_{s=1}^{k} c_{\max}\max_{i\in\Lambda}\big(1+d_s^{(i)}\big)^2\Big) \max_{1\le j\le n} M_j^2 \sum_{i=1}^{n} \frac{|b_{ij}|}{c_i} + 60\Big(\prod_{s=1}^{k} c_{\max}\max_{i\in\Lambda}\big(1+d_s^{(i)}\big)^2\Big)^{-1} \lambda_{\max}\big(C^{-1} B B^T C^{-1}\big) < 2$ for all $k \in \mathbb{Z}_+$,

where $\sup_{k\in\mathbb{Z}_+} \prod_{s=1}^{k} \max_{i\in\Lambda}\big(1+d_s^{(i)}\big)^2 < \infty$.
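The criteria above are stated in terms of two matrix quantities: $\lambda_{\max}(C^{-1}BQ^{-1}B^TC^{-1})$ and $\xi_k = \lambda_{\max}(D_kC^{-1}D_k)$. The snippet below shows how one might evaluate them numerically; the matrices are placeholder values, and verifying the full conditions (i)-(ii) for a given network is left to the reader.

```python
import numpy as np

# Evaluating the two matrix quantities used throughout Section 4 (placeholder data).
C = np.diag([2.0, 1.5])                         # C = diag(c_1, ..., c_n)
B = np.array([[0.3, -0.1], [0.2, 0.4]])         # delayed connection weights b_ij
Q = np.eye(2) / 60.0                            # the choice of Q made in Corollary 4.2
d_k = np.array([0.05, -0.10])                   # impulse factors d_k^{(i)} at one instant t_k
D_k = np.diag(1.0 + d_k)

C_inv = np.linalg.inv(C)
Mmat = C_inv @ B @ np.linalg.inv(Q) @ B.T @ C_inv   # C^{-1} B Q^{-1} B^T C^{-1}
xi_k = np.linalg.eigvalsh(D_k @ C_inv @ D_k).max()  # largest eigenvalue of D_k C^{-1} D_k

print("lambda_max(C^-1 B Q^-1 B^T C^-1) =", np.linalg.eigvalsh(Mmat).max())
print("xi_k =", xi_k)
```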
Now, we shall establish some theorems which provide sufficient conditions for uniform asymptotic stability and global asymptotic stability of system (1).

Theorem 4.3. System (1) is uniformly stable if there exist $\varepsilon^* \in [0,1]$, $\sigma > 0$ and a positive definite symmetric $n\times n$ matrix $Q$ such that:

(i) $\displaystyle \sigma\cdot\frac{\lambda_{\max}(Q)\,\max_i c_i}{n\,\min_{i,j}|b_{ij}|}\cdot\frac{\tau^2+4+\tau\sqrt{\tau^2+4}}{\tau^2+4-\tau\sqrt{\tau^2+4}} \le 1$,

(ii) $\displaystyle \max_{1\le i\le n}\frac{1}{c_i}\sum_{j=1}^{n}a_{ij} + \max_{1\le j\le n}L_j^2\sum_{i=1}^{n}\frac{a_{ij}}{c_i} + \max_{1\le j\le n}M_j^2\sum_{i=1}^{n}\frac{|b_{ij}|}{c_i} + \frac{\varepsilon^*}{c_{\min}} + \frac{1}{\sigma}\lambda_{\max}\big(C^{-1}BQ^{-1}B^TC^{-1}\big) < 2$,

(iii) $\displaystyle \frac{\prod_{t_0<t_k\le t}\max\{c_{\max}\xi_k,\,1\}}{1+\varepsilon^*(t-t_0)^2} < \infty$, where $\xi_k$ is the largest eigenvalue of $D_kC^{-1}D_k$ and $k \in \mathbb{Z}_+$.

In addition, if we have

(iv) $\displaystyle \frac{\prod_{t_0<t_k\le t}\max\{c_{\max}\xi_k,\,1\}}{1+\varepsilon^*(t-t_0)^2} \to 0$ as $t \to +\infty$,

then system (1) is uniformly asymptotically stable and globally asymptotically stable.

Remark 4.1. Theorems 4.1 and 4.3 both study uniform stability. One can see that the delay in Theorem 4.1 is treated as discrete and constant, while in Theorem 4.3 a time-varying delay is considered. It should be mentioned that both uniform stability and uniform asymptotic stability are studied in Theorem 4.3.

Proof. See Appendix D.

If $\prod_{t_0<t_k\le t}\max\{c_{\max}\xi_k,\,1\} < \infty$, then we can get the following criterion for stability with $\varepsilon^* = 0$.

Corollary 4.4. Assume that there are a constant $\sigma > 0$ and an $n\times n$ symmetric positive definite matrix $Q$ such that:

(i) $\displaystyle \sigma\cdot\frac{\lambda_{\max}(Q)\,\max_i c_i}{n\,\min_{i,j}|b_{ij}|}\cdot\frac{\tau^2+4+\tau\sqrt{\tau^2+4}}{\tau^2+4-\tau\sqrt{\tau^2+4}} \le 1$,

(ii) $\displaystyle \max_{1\le i\le n}\frac{1}{c_i}\sum_{j=1}^{n}a_{ij} + \max_{1\le j\le n}L_j^2\sum_{i=1}^{n}\frac{a_{ij}}{c_i} + \max_{1\le j\le n}M_j^2\sum_{i=1}^{n}\frac{|b_{ij}|}{c_i} + \frac{1}{\sigma}\lambda_{\max}\big(C^{-1}BQ^{-1}B^TC^{-1}\big) < 2$.

Then the equilibrium point of system (1) is uniformly asymptotically stable and globally asymptotically stable.

If $\sigma = \dfrac{\tau^2+4-\tau\sqrt{\tau^2+4}}{\tau^2+4+\tau\sqrt{\tau^2+4}}$ and $Q = \frac{1}{60}Id$ in Corollary 4.4, then we can get the following criterion for stability.

Corollary 4.5. Assume that the following conditions are satisfied:

(i) $\displaystyle \frac{\max_i c_i}{60\,n\,\min_{i,j}|b_{ij}|} \le 1$,

(ii) $\displaystyle \max_{1\le i\le n}\frac{1}{c_i}\sum_{j=1}^{n}a_{ij} + \max_{1\le j\le n}L_j^2\sum_{i=1}^{n}\frac{a_{ij}}{c_i} + \max_{1\le j\le n}M_j^2\sum_{i=1}^{n}\frac{|b_{ij}|}{c_i} + \frac{60}{\sigma}\lambda_{\max}\big(C^{-1}BB^TC^{-1}\big) < 2$.

Then the equilibrium point of system (1) is uniformly asymptotically stable and globally asymptotically stable.

In addition, a new theorem which provides sufficient conditions for global exponential stability of system (1) is proposed.

Theorem 4.6. Assume that:

(i) there exist $\bar{\varepsilon} > 0$, $\sigma > 0$ and an $n\times n$ symmetric positive definite matrix $Q$ which satisfy
$$
\sigma e^{\bar{\varepsilon}\tau}\cdot\frac{\lambda_{\max}(Q)\cdot\max_i c_i}{n\,\min_{i,j}|b_{ij}|} < 1, \quad \forall i,j \in \{1,2,\ldots,n\},
$$
(ii) $\displaystyle \frac{\bar{\varepsilon}}{c_{\min}} + \max_{1\le i\le n}\frac{1}{c_i}\sum_{j=1}^{n}a_{ij} + \max_{1\le j\le n}L_j^2\sum_{i=1}^{n}\frac{a_{ij}}{c_i} + \frac{\lambda_{\max}\big(C^{-1}BQ^{-1}B^TC^{-1}\big)}{\sigma} + \max_{1\le j\le n}M_j^2\sum_{i=1}^{n}\frac{|b_{ij}|}{c_i} < 2$,

(iii) there exist constants $\nu \ge 0$, $\bar{\alpha} \in [0,\bar{\varepsilon}[$ such that
$$
\sum_{k=1}^{m}\ln\big(\max\{\xi_k c_{\max},\,1\}\big) < \nu + \bar{\alpha}(t_m - t_0), \quad \forall m \in \mathbb{Z}_+,
$$
where $\xi_k$ is the largest eigenvalue of $D_kC^{-1}D_k$.

Then the equilibrium point of system (1) is globally exponentially stable and the approximate exponential convergence rate is $\frac{\bar{\varepsilon}-\bar{\alpha}}{2}$.

Proof. See Appendix E.

If, in Theorem 4.6, we have $Q = Id$, then we obtain the following result.

Corollary 4.7. Assume that there exist constants $\bar{\varepsilon} > 0$, $\sigma > 0$ such that:

(i) $\displaystyle \sigma e^{\bar{\varepsilon}\tau}\frac{\max_i c_i}{n\,\min_{i,j}|b_{ij}|} < 1$,

(ii) $\displaystyle \frac{\bar{\varepsilon}}{c_{\min}} + \max_{1\le i\le n}\frac{1}{c_i}\sum_{j=1}^{n}a_{ij} + \max_{1\le j\le n}L_j^2\sum_{i=1}^{n}\frac{a_{ij}}{c_i} + \frac{\lambda_{\max}\big(C^{-1}BB^TC^{-1}\big)}{\sigma} + \max_{1\le j\le n}M_j^2\sum_{i=1}^{n}\frac{|b_{ij}|}{c_i} < 2$,

(iii) there are constants $\nu \ge 0$, $\bar{\alpha} \in [0,\bar{\varepsilon}]$ such that
$$
\sum_{k=1}^{m}\ln\Big(\max\Big\{c_{\max}\max_{i=1,\ldots,n}\big(1+d_k^{(i)}\big)^2,\,1\Big\}\Big) < \nu + \bar{\alpha}(t_m - t_0) \quad \text{for all } m \in \mathbb{Z}_+.
$$
Then the equilibrium point of system (1) is globally exponentially stable and the approximate exponential convergence rate is $\frac{\bar{\varepsilon}-\bar{\alpha}}{2}$.

5. Comparison with previous results and numerical examples

In this section, we present three numerical examples to illustrate that our conditions are more feasible than those given in earlier references ([10], [36], [39], [40]). Based on Theorem 3.2 and Theorem 3.3, it is easy to verify the existence and uniqueness of the equilibrium points of systems (9), (10) and (11).

Example 5.1. Consider the two-neuron delayed neural network with impulses:
$$
\begin{cases}
\dot{x}_1(t) = -2.5 x_1(t) - 0.5 f_1(x_1(t)) + 0.1 f_2(x_2(t)) - 0.1 g_1(x_1(t-\tau)) + 0.2 g_2(x_2(t-\tau)) - 1,\\[2pt]
\dot{x}_2(t) = -2 x_2(t) + 0.2 f_1(x_1(t)) - 0.1 f_2(x_2(t)) + 0.2 g_1(x_1(t-\tau)) + 0.1 g_2(x_2(t-\tau)) + 4, & \text{if } t \neq t_k,\ t \ge t_0,\\[2pt]
\Delta x_i|_{t=t_k} = x_i(t_k) - x_i(t_k^-) = d_k^{(i)}\big(x_i(t_k^-) - \bar{x}_i\big), & k \in \mathbb{Z}_+,\ i = 1,2,
\end{cases}
\tag{9}
$$
where $\tau = 0.87$, the activation functions are $f_1(x) = f_2(x) = g_1(x) = g_2(x) = 0.5(|x+1| - |x-1|)$, and
$$
d_k^{(1)} = \sqrt{1 + \tfrac{1}{5k^2}} - 1, \qquad d_k^{(2)} = \sqrt{1 + \tfrac{1}{6k^2}} - 1, \qquad t_k = k,\ k \in \mathbb{Z}_+.
$$
By Matlab, we note that $\lambda_{\max}\big(C^{-1}BB^TC^{-1}\big) = 0.0125$. Considering the activation functions $f_1, f_2, g_1$ and $g_2$, we can choose $L_i = 1$, $M_i = 1$, $i = 1,2$. On the other hand, by using the Mathematica software, we notice that
$$
\max_{i=1,2}\prod_{s=1}^{\infty}\big(1+d_s^{(i)}\big)^2 = \max_{i=1,2}\prod_{s=1}^{\infty}\Big(1+\frac{1}{5s^2}\Big) < 1.4.
$$
By Corollary 4.2, the equilibrium point of (9) is uniformly stable. For Corollary 4.5, we take
$$
\sigma = \frac{\tau^2+4-\tau\sqrt{\tau^2+4}}{\tau^2+4+\tau\sqrt{\tau^2+4}} \simeq 0.4297, \qquad \frac{1}{\sigma} \simeq 2.3272.
$$
Thus,
$$
\max_{1\le i\le n}\frac{1}{c_i}\sum_{j=1}^{n}a_{ij} + \max_{1\le j\le n}M_j^2\sum_{i=1}^{n}\frac{|b_{ij}|}{c_i} + \max_{1\le j\le n}L_j^2\sum_{i=1}^{n}\frac{a_{ij}}{c_i} + \frac{60}{\sigma}\lambda_{\max}\big(C^{-1}BB^TC^{-1}\big) < 2,
$$
hence, by Corollary 4.5, the equilibrium point $(0.01, 2.5)^T$ of system (9) is uniformly asymptotically stable and globally asymptotically stable (see Figure 2 and Figure 3).

Table 1: Values of the maximal delay $\tau$.

          In [10]     In [40]     In this work
  $\tau$  0.0279      0.17        0.87

Remark 5.1. For this example, we additionally obtain that the equilibrium point of system (9) is uniformly asymptotically stable and globally asymptotically stable with an upper bound of the delay $\tau = 0.87 > 0.17$. However, the criteria given in [40] and [10] are invalid for $\tau \ge 0.87$. Therefore, our results are less conservative and more efficient than those given in [10] and [40] (see Table 1).
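The numbers quoted above ($\lambda_{\max}(C^{-1}BB^TC^{-1}) = 0.0125$, $\sigma \simeq 0.4297$ and the left-hand side of condition (ii) of Corollary 4.5) can be cross-checked with a few lines of Python. The grouping of the maxima below follows the reading of Corollary 4.5 given in Section 4 and should be treated as such.

```python
import numpy as np

# Numerical cross-check of Example 5.1 against (our reading of) Corollary 4.5.
C = np.diag([2.5, 2.0])                              # c_1, c_2
A = np.array([[-0.5, 0.1], [0.2, -0.1]])             # a_ij
B = np.array([[-0.1, 0.2], [0.2, 0.1]])              # b_ij
L = M = np.array([1.0, 1.0])                         # Lipschitz constants of f, g
tau, n = 0.87, 2
c = np.diag(C)

lam = np.linalg.eigvalsh(np.linalg.inv(C) @ B @ B.T @ np.linalg.inv(C)).max()
sigma = (tau**2 + 4 - tau*np.sqrt(tau**2 + 4)) / (tau**2 + 4 + tau*np.sqrt(tau**2 + 4))

cond_i = C.max() / (60 * n * np.abs(B).min())        # condition (i): should be <= 1
lhs = ((A.sum(axis=1) / c).max()                     # max_i (1/c_i) sum_j a_ij
       + (M**2 * (np.abs(B) / c[:, None]).sum(axis=0)).max()
       + (L**2 * (A / c[:, None]).sum(axis=0)).max()
       + 60 / sigma * lam)                           # condition (ii): should be < 2

print("lambda_max =", lam)         # ~0.0125
print("sigma =", sigma)            # ~0.4297
print("condition (i):", cond_i)    # ~0.21 <= 1
print("condition (ii):", lhs)      # ~1.93 < 2
```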
Figure 2: The convergence dynamics of system (9) in Example 5.1.

Figure 3: The orbit $x_1$-$x_2$ of system (9) for $t \in [0, 1000]$.
Example 5.2. Consider the two-neuron delayed neural network with impulses:
$$
\begin{cases}
\dot{x}_1(t) = -x_1(t) + \frac{1}{8} f_1(x_1(t)) + \frac{1}{4} f_2(x_2(t)) + \frac{1}{3} g_1(x_1(t-\tau)) - \frac{1}{6} g_2(x_2(t-\tau)),\\[4pt]
\dot{x}_2(t) = -x_2(t) + \frac{1}{4} f_1(x_1(t)) + \frac{1}{8} f_2(x_2(t)) - \frac{1}{6} g_1(x_1(t-\tau)) + \frac{1}{4} g_2(x_2(t-\tau)), & \text{if } t \neq t_k,\ t \ge t_0,\\[4pt]
x(t_k) = \gamma_k\, x(t_k^-), & k = 1, 2, \ldots
\end{cases}
\tag{10}
$$
where $t_k - t_{k-1} = 1$, $\gamma_k = (-1)^k\frac{e^{0.224}+4}{5}$, $k \in \mathbb{Z}_+$. Here we consider $\tau = 0.7$ and $f_1(x) = f_2(x) = g_1(x) = g_2(x) = 0.5(|x+1| - |x-1|)$.

Next we show that the equilibrium point of system (10) is globally exponentially stable with $\tau \le 0.7$. It is easy to calculate that
$$
L = M = 1, \qquad d_k^{(i)} = (-1)^k\frac{e^{0.224}+4}{5} - 1.
$$
Then we may choose $\bar{\varepsilon} = 0.0695$, $\bar{\alpha} = 0.0495$, $\sigma = e^{-1.15}$, $\nu = 0$, $Q = Id$.

Figure 4: The convergence dynamics of system (10) in Example 5.2.

Figure 5: The orbit $x_1$-$x_2$ of system (10) for $t \in [0, 10000]$.

From Corollary 4.7, the equilibrium point of system (10) is globally exponentially stable (see Figure 4 and Figure 5) with approximate exponential convergence rate 0.01. But for any $\alpha$, $A + A^T + \alpha I$ is not negative definite. Hence, the result in [20] cannot be applied in this case, because the matrix $-(A+A^T)$ is obtained as
$$
-(A+A^T) = -\begin{pmatrix} \frac{1}{4} & \frac{1}{2} \\[2pt] \frac{1}{2} & \frac{1}{4} \end{pmatrix},
$$
and it is obvious that $-(A+A^T)$ is not positive definite. Therefore, the conditions in ([7], [21], [22], [36]) do not hold.

Example 5.3. Consider the two-neuron delayed neural network with impulses:
$$
\begin{cases}
\dot{x}_1(t) = -2 x_1(t) - f_1(x_1(t)) + 0.5 f_2(x_2(t)) - 0.5 g_1(x_1(t-\tau)) + 0.5 g_2(x_2(t-\tau)),\\[2pt]
\dot{x}_2(t) = -3.5 x_2(t) + 0.5 f_1(x_1(t)) - f_2(x_2(t)) + 0.5 g_1(x_1(t-\tau)) + 0.5 g_2(x_2(t-\tau)), & \text{if } t \neq t_k,\ t \ge t_0,\\[2pt]
x_i(t_k) = d_k^{(i)}\big(x_i(t_k^-) - \bar{x}_i\big), & k = 1, 2, \ldots
\end{cases}
\tag{11}
$$
where $t_k = k$, $k \in \mathbb{Z}_+$. Here we consider $\tau = 0.5$, $f_1(x) = f_2(x) = g_1(x) = g_2(x) = 0.5(|x+1| - |x-1|)$,
$$
C = \begin{pmatrix} 2 & 0 \\ 0 & 3.5 \end{pmatrix}, \qquad A = \begin{pmatrix} -1 & 0.5 \\ 0.5 & -1 \end{pmatrix}, \qquad B = \begin{pmatrix} -0.5 & 0.5 \\ 0.5 & 0.5 \end{pmatrix},
$$
$$
L = M = 1, \qquad d_k^{(i)} = (-1)^k\frac{e^{0.224}+4}{5} - 1.
$$
Then we may choose $\bar{\varepsilon} = 0.2$, $\bar{\alpha} = 0.0208$, $\sigma = 0.125$, $\nu = 0$, $Q = Id$.

Figure 6: The convergence dynamics of system (11) in Example 5.3.

Figure 7: The orbit $x_1$-$x_2$ of system (11) for $t \in [0, 500]$.
From Corollary 4.7, the equilibrium point $(0,0)^T$ of system (11) is globally exponentially stable (see Figure 6 and Figure 7) with approximate exponential convergence rate 0.8. When $k = 0.25$ (note that the exponential convergence rate is $k/2$ in [39]), Theorem 1 in [39] shows the system to be globally exponentially stable, but Theorem 3 in [8] fails to verify that. Furthermore, $C - |A|L - |B|M$, where $|X| = (|x_{ij}|)_{n\times n}$, $X = A, B$, is not an M-matrix [6]. It follows that the condition of Theorem 3 in [8] is not applicable to ascertain the stability of such a neural network. However, Corollary 4.7 in this paper shows the system to be globally exponentially stable, even for $k = 0.8$ and $\tau = 0.5$.

Remark 5.2. In ([10], [36], [39] and [40]), systems (9), (10) and (11) are considered without impulses. However, in our study, (9), (10) and (11) are considered with impulses.

6. Conclusion

In this paper, a class of HNNs with delays and impulsive perturbations is considered. We obtain some new criteria ensuring the existence, uniqueness, uniform asymptotic stability, global asymptotic stability and global exponential stability of the equilibrium point of such systems by using the Arzelà-Ascoli theorem, Rolle's theorem, the Lyapunov method and linear matrix inequalities. Our stability results show the effects of delays and impulsive perturbations on the stability of HNNs and do not require the connection weight matrices to be symmetric. In comparison with some results reported in the literature, the present results provide new stability criteria, simpler to verify, for delayed Hopfield neural networks with impulses. Our work extends and generalizes the results of ([10], [30], [36], [39], [40]), since our study considers various types of stability, estimates the exponential convergence rate, and deals with systems with impulses. As future work, we propose to adapt our results to impulsive high-order Hopfield-type neural networks with delays and to apply our methods of stability analysis to some more complex systems.

Acknowledgements

We are thankful to the anonymous reviewers for their constructive comments and suggestions, which helped us to improve the manuscript.

Appendix A. Proof of Theorem 3.2

Proof. Let $(t_0,\phi) \in J \times PC([-r,0],D)$ and choose $\alpha > 0$ so that $[t_0, t_0+\alpha] \subset J$. Since $\phi(0) \in D$ and $D$ is open, choose $\lambda > 0$ such that $F_1 = \{z \in \mathbb{R}^n : \|z - \phi(0)\| \le \lambda\} \subset D$. Since $\phi \in PC([-r,0],D)$, the closure of the range of $\phi$, which we denote by $F_2$, is a compact subset of $D$. So define $F = F_1 \cup F_2$.
Then $F$ is a compact subset of $D$. Since $H$ is quasi-bounded, there must exist some $\ell > 0$ such that $\|H(t,\psi)\| \le \ell$ for all $(t,\psi) \in [t_0, t_0+\alpha] \times PC([-r,0],F)$. Let $\beta > 0$ be sufficiently small so that $\beta \le \alpha$, $\ell\beta \le \lambda$ and $t_k \notin\, ]t_0, t_0+\beta]$ for all $k$. For $0 < \beta_1 \le \beta$ define
$$
R(t_0,\phi,\lambda,\beta_1) = \big\{x \in PC([t_0-r, t_0+\beta_1], D) \mid x_{t_0} = \phi,\ x \text{ is continuous on } ]t_0, t_0+\beta_1] \text{ and } \|x(t)-\phi(0)\| \le \lambda\ \forall t \in\, ]t_0, t_0+\beta_1]\big\}.
$$
If $x \in R(t_0,\phi,\lambda,\beta_1)$ then $x_t \in PC([-r,0],F)$ for all $t \in [t_0, t_0+\beta_1]$ by the definition of $F$, and so $\|H(t,x_t)\| \le \ell$ for $t \in [t_0, t_0+\beta_1]$. Moreover, if $x \in R(t_0,\phi,\lambda,\beta_1)$ then the composite function $H(t,x_t)$ is in $PC([t_0, t_0+\beta_1], \mathbb{R}^n)$, since $H$ is composite-PC. Note that, when restricted to the domain $[t_0, t_0+\beta_1]$, functions in $R(t_0,\phi,\lambda,\beta_1)$ are continuous, since they are continuous on $]t_0, t_0+\beta_1]$ and right-continuous at $t_0$. For $\mu = 1, 2, 3, \ldots$ define
$$
x^{(\mu)}(t) =
\begin{cases}
\phi(t-t_0), & t \in [t_0-r, t_0],\\[2pt]
\phi(0), & t \in \big[t_0,\ t_0+\frac{\beta}{\mu}\big],\\[2pt]
\phi(0) + \int_{t_0}^{t-\frac{\beta}{\mu}} H(s, x^{(\mu)}_s)\,ds, & t \in \big]t_0+\frac{\beta}{\mu},\ t_0+\beta\big].
\end{cases}
\tag{A.1}
$$
We first prove that each function $x^{(\mu)}$ is well defined and is in $R(t_0,\phi,\lambda,\beta)$. This is obviously true for $x^{(1)}$. For any $\mu \ge 2$, the first two expressions in (A.1) define $x^{(\mu)}$ for $t \in \big[t_0-r,\ t_0+\frac{\beta}{\mu}\big]$ and, restricted to this interval, $x^{(\mu)} \in R(t_0,\phi,\lambda,\frac{\beta}{\mu})$. Thus $H(t, x^{(\mu)}_t)$ is piecewise continuous, and consequently integrable, on $\big[t_0,\ t_0+\frac{\beta}{\mu}\big]$. Therefore, the third expression in (A.1) defines $x^{(\mu)}$ as a continuous function for $t \in \big]t_0+\frac{\beta}{\mu},\ t_0+\frac{2\beta}{\mu}\big]$. Moreover, for $t \in \big]t_0+\frac{\beta}{\mu},\ t_0+\frac{2\beta}{\mu}\big]$ we have
$$
\|x^{(\mu)}(t) - \phi(0)\| \le \int_{t_0}^{t-\frac{\beta}{\mu}} \|H(s, x^{(\mu)}_s)\|\,ds \le \int_{t_0}^{t_0+\frac{\beta}{\mu}} \ell\,ds = \frac{\ell\beta}{\mu} \le \lambda.
\tag{A.2}
$$
This shows that $x^{(\mu)}$ is well defined on $\big[t_0-r,\ t_0+\frac{2\beta}{\mu}\big]$ and, when restricted to this interval, is in $R(t_0,\phi,\lambda,\frac{2\beta}{\mu})$. Now suppose that $x^{(\mu)}$ is well defined on $\big[t_0-r,\ t_0+\frac{k\beta}{\mu}\big]$ for some $1 < k < \mu$ and, when restricted to this interval, is in $R(t_0,\phi,\lambda,\frac{k\beta}{\mu})$. Then $\|H(t,x^{(\mu)}_t)\| \le \ell$ and $H(t,x^{(\mu)}_t)$ is piecewise continuous for $t \in \big[t_0,\ t_0+\frac{k\beta}{\mu}\big]$. Thus (A.1) defines $x^{(\mu)}$ as a continuous function for $t \in \big]t_0+\frac{k\beta}{\mu},\ t_0+\frac{(k+1)\beta}{\mu}\big]$. Also, inequality (A.2) holds for all $t$ in this interval, which shows that $x^{(\mu)}$ restricted to this interval is in $R(t_0,\phi,\lambda,\frac{(k+1)\beta}{\mu})$. So, by induction, $x^{(\mu)}$ is a well-defined function in $R(t_0,\phi,\lambda,\beta)$.

For each $\mu$, let $y^{(\mu)}$ denote the restriction of $x^{(\mu)}$ to $[t_0, t_0+\beta]$. Then $y^{(\mu)}$ is continuous on $[t_0, t_0+\beta]$. Moreover, for $t \in [t_0, t_0+\beta]$, $\|y^{(\mu)}(t)\| \le \lambda + \|\phi(0)\|$, so the functions $y^{(\mu)}$ are uniformly bounded. In addition, for any $t_1, t_2 \in [t_0, t_0+\beta]$ we have
$$
\|y^{(\mu)}(t_1) - y^{(\mu)}(t_2)\| \le \Big|\int_{t_2-\frac{\beta}{\mu}}^{t_1-\frac{\beta}{\mu}} \|H(s, x^{(\mu)}_s)\|\,ds\Big| \le \ell\,|t_1 - t_2|
\tag{A.3}
$$
for all $\mu$, which implies that the functions $y^{(\mu)}$ are equicontinuous on the interval $[t_0, t_0+\beta]$. Hence, by the Arzelà-Ascoli theorem there exists a subsequence $\{y^{(\mu_k)}\}$ of the sequence of functions $\{y^{(\mu)}\}$ which converges uniformly to some continuous function $y$ on $[t_0, t_0+\beta]$ as $k \to +\infty$. Define
$$
x(t) =
\begin{cases}
\phi(t-t_0), & t \in [t_0-r, t_0],\\
y(t), & t \in\, ]t_0, t_0+\beta].
\end{cases}
\tag{A.4}
$$
For each fixed $t \in [t_0, t_0+\beta]$, $\|x^{(\mu_k)}_t - x_t\|_r \to 0$ as $k \to +\infty$, and since $H(t,\psi)$ is assumed to be continuous in $\psi$ for fixed $t$,
$$
\lim_{k\to+\infty} H(t, x^{(\mu_k)}_t) = H(t, x_t).
$$
Moreover, since $x^{(\mu_k)} \in R(t_0,\phi,\lambda,\beta)$, we have $\|H(t, x^{(\mu_k)}_t)\| \le \ell$ for $t \in [t_0, t_0+\beta]$. By Lebesgue's dominated convergence theorem we obtain
$$
\lim_{k\to+\infty} \int_{t_0}^{t} H(s, x^{(\mu_k)}_s)\,ds = \int_{t_0}^{t} H(s, x_s)\,ds, \quad \text{for all } t \in [t_0, t_0+\beta].
\tag{A.5}
$$
From (A.1) we get
$$
x^{(\mu_k)}(t) = \phi(0) + \int_{t_0}^{t} H(s, x^{(\mu_k)}_s)\,ds - \int_{t-\frac{\beta}{\mu_k}}^{t} H(s, x^{(\mu_k)}_s)\,ds, \quad t \in \Big]t_0+\frac{\beta}{\mu_k},\ t_0+\beta\Big],
\tag{A.6}
$$
where the second integral tends to zero as $k \to +\infty$. By taking the limit as $k \to +\infty$ in (A.6) and using (A.5) we find
$$
x(t) =
\begin{cases}
\phi(t-t_0), & t \in [t_0-r, t_0],\\
\phi(0) + \int_{t_0}^{t} H(s, x_s)\,ds, & t \in\, ]t_0, t_0+\beta],
\end{cases}
\tag{A.7}
$$
+
n
bij {gj (¯ xj (t − τ (t))) − gj (¯ yj (t − τ (t)))} , i = 1, ..., n.
(B.1)
j=1
By Rolle’s theorem we can obtain n n dwi (t) = −ci wi (t) + aij fj (θj )wj (t) + bij gj (ηj )wj (t − τ (t)), i = 1, ..., n, dt j=1 j=1
(B.2)
where θj is between x¯j (t) and y¯j (t); ηj is between x¯j (t − τ (t)) and y¯j (t − τ (t)). Let z(t) = eαt w(t). Then we have n dzi (t) = (−ci + α)zi (t) + aij fj (θj )e−αt zj (t) dt j=1
+
n
bij gj (ηj )e−α(t−τ (t)) zj (t − τ (t)), i = 1, ..., n.
j=1
19
(B.3)
So, n dzi (t) aij fj (θj )zj (t) ≤ (−ci + α)zi (t) + dt j=1
+
n
bij gj (ηj )eατ zj (t − τ (t)), i = 1, ..., n.
(B.4)
j=1
It exists i0 ∈ {1, ..., n} such that zi0 (t)|, z(t){ξ,∞} = |ξi−1 0 then ξi0
dz(t){ξ,∞} d|zi0 (t)| = ≤ sign{zi0 (t)}{ξi0 (−ci0 + α)ξi−1 zi0 (t) 0 dt dt n n + ξj ai0 fj (θj )ξj−1 zj (t) + ξj bi0 gj (ηj )eατ ξj−1 zj (t − τ (t))} j=1
j=1
−1 ≤ ξi0 (−ci0 + α + a+ i0 i0 Li0 )|ξi0 zi0 (t)| +
⎧ ⎪ n ⎨ ⎪ ⎩j=1
ξj |ai0 j |Lj
⎫ ⎪ ⎬ ⎪ ⎭
z(t){ξ,∞}
j=i
+
n
ξj eατ |bi0 j |Mj z(t − τ (t)ξ,∞ .
(B.5)
j=1
Furthermore, let Ω(t) = then
sup
t−τ (t)≤s≤t
z(s){ξ,∞} ,
z(t){ξ,∞} ≤ Ω(t),
and if z(t){ξ,∞} = Ω(t), by (8) and (B.5), we have ⎧ ⎫ ⎪ ⎪ n n ⎨ ⎬ dz(t){ξ,∞} ατ Ω(t) ≤ 0, ≤ ξi0 (−ci0 + α + a+ L ) + ξ |a |L + ξ e |b |M j i0 j j j i0 j j i0 i0 i0 ⎪ ⎪ dt ⎩ ⎭ j=1 j=1 j=i
therefore, Ω(t) decreases monotone, which implies z(t){ξ,∞} = O(1), and
w(t){ξ,∞} = O(e−αt ).
20
Appendix C. Proof. We only prove that the zero solution of system (3) is uniformly stable (Proof of Theorem 4.1). For any > 0, we may choose δ=
k
where β = sup
k∈Z+ s=1
√ β·
# cmax cmin
+ cmax βτ max
1≤j≤n
Mj2
n i=1
|bij | ci
(C.1)
ξs cmax .
For any t0 ≥ 0, ϕ ∈ P Cδ (t0 ), let y(t) = y(t0 , ϕ)(t) be the solution of (3) through (t0 , ϕ). Consider the following Lyapunov functional: ⎛ ⎞ " n n n 1 2 |bij | t ⎝ V (t) = yi (t)+ ξs cmax ⎠ G2j (yj (s))ds, ∀t ≥ t0 . c c i i t−τ (t) i=1 i=1 j=1 ts ≤t
By using (H3), we have: 1 cmax
1
2
y(t) ≤ V (t) ≤
2
cmin
y(t) +β
n n |bij | i=1 j=1
ci
Mj2
"
t
t−τ (t)
yj2 (s)ds.
(C.2)
Therefore, $ V (t) ≤
1
Mj2
+ βτ max
cmin
1≤j≤n
n |bij | i=1
% y(t)2 .
ci
(C.3)
For any t ∈ [tk , tk+1 [, k = 1, 2, ..., one has $ k % n n n |bij | 1 ∂V (t) =2 yi (t)y˙i (t) + ξs cmax G2j (yj (t)) ∂t c c i i s=1 i=1 i=1 j=1 $ −
k
% ξs cmax
s=1
n n |bij | i=1 j=1
ci
G2j (yj (t − τ (t)))(1 − τ˙ (t)).
Using (H1) and (H2), we obtain ∂V (t) ∂t
≤ −2
n
yi2 (t)
i=1
+ 2
n n i=1 j=1
+
n n aij i=1 j=1
ci
yi2 (t)
+
n n aij i=1 j=1
bij yi (t)Gj (yj (t − τ (t))) − ci
21
$
k s=1
ci
$ L2j yj2 (t)
ξs cmax
%
+
k s=1
n n i=1 j=1
% ξs cmax
n n |bij | i=1 j=1
ci
|bij | 2 G (yj (t − τ (t))). ci j
Mj2 yj2 (t)
From Lemmas 2.2 and 2.1, we get: 2
n n bij
ci
i=1 j=1
yi (t)Gj (yj (t−τ (t))) = 2y T (t)C −1 BG(y(t−τ (t))) = 2GT (y(t−τ (t)))B T C −1 y(t) ⎛
( ⎤T ) k ) = ⎣G(y(t − τ (t)))* ξs cmax ⎦ ⎡
⎜ ⎟ ⎜ T −1 ⎟ ⎜B C y(t) # 1 ⎟ ⎜ ⎟ k ⎝ ⎠ ξs cmax
s=1
$ ≤
T
ξs cmax
G (y(t−τ (t)))QG(y(t−τ (t)))+
s=1
≤ $ +
%
k
ξs cmax
s=1 k
$
%
k
$
⎞
s=1
k
%−1 y T (t)C −1 BQ−1 B T C −1 y(t)
ξs cmax
s=1 n n λmax (Q) 2 Gj (yj (t − τ (t))) n i=1 j=1
%−1
λmax (C −1 BQ−1 B T C −1 )
ξs cmax
s=1
n
yi2 (t).
(C.4)
i=1
Substitue (C.4) in derivative of V (t), we obtain easily ∂V (t) ∂t
≤ −2 $ + $ + $ −
n
yi2 (t) +
i=1 k
%
ξs cmax
s=1 k
% ξs cmax
s=1 k
% ξs cmax
s=1
n n aij i=1 j=1
ci
yi2 (t) +
i=1 j=1
n n |bij | i=1 j=1
ci
λmax (Q) n n n i=1 j=1
n n aij
$
Mj2 yj2 (t)
+
ci
k
L2j yj2 (t) %−1 λmax (C −1 BQ−1 B T C −1 )
ξs cmax
s=1
n n
i=1
G2j (yj (t − τ (t)))
i=1 j=1
|bij | 2 G (yj (t − τ (t))). ci j
Therefore, ⎛ ∂V (t) ∂t
≤
⎝−2 + max
1≤i≤n ⎩ ci
$ +
k
% ξs cmax
s=1
$ +
⎧ n ⎨1
k
aij
⎭
max
1≤j≤n
%−1 ξs cmax
j=1
⎫ ⎬
Mj2
+ max
1≤j≤n
n |bij | i=1
ci
L2j
n aij i=1
ci
⎞ −1 λmax C BQ−1 B T C −1 ⎠ y(t)2
s=1
22
n
yi2 (t)
$ +
k
% ξs cmax
s=1
n n λmax (Q)
n
i=1 j=1
|bij | − ci
G2j (yj (t − τ (t))).
By using (i) and (ii), we obtain ∂V (t) < 0. ∂t
(C.5)
Moreover, from Lemma 2.1 and system (4) we have ⎛ ⎞ " n n n 1 2 |bij | tk ⎝ V (tk ) = yi (tk ) + ξs cmax ⎠ G2j (yj (s))ds c c i i t −τ (t ) k k i=1 i=1 j=1 ts ≤tk ⎛ ⎞ " n n |bij | t− k −1 ⎝ = y T (t− Dk y(t− ξs cmax ⎠ G2j (yj (s))ds k )Dk C k)+ − − c i t −τ (t ) k k i=1 j=1 ts ≤tk ⎛ ⎞ − " n n |bij | tk − − ⎝ ≤ ξk y T (tk )y(tk ) + ξk · cmax ξs cmax ⎠ G2j (yj (s))ds − ci t− k −τ (tk ) i=1 j=1 ts ≤tk−1
≤ +
ξk · cmax y ξk · cmax
T
−1 (t− y(t− k )C k)
" n n |bij | i=1 j=1
Therefore,
ci
⎛
t− k
− t− k −τ (tk )
⎝
⎞
ξs cmax ⎠ G2j (yj (s))ds.
ts ≤tk−1
V (tk ) ≤ ξk · cmax V (t− k ).
(C.6)
By induction and from (H3), (C.2), (C.6) it follows that ∀k ≥ 1,
1 cmax
y(t)2 ≤ V (t) ≤ V (t0 )
ξk · cmax .
(C.7)
t0
By taking t = t0 in (C.3) and by using (C.7), we obtain $
% n |bij | cmax 2 2 + cmax βτ max Mj δ 2 β, ∀t ≥ t0 . y(t) ≤ 1≤j≤n cmin c i i=1 It follows from (C.1) y(t) < , ∀t ≥ t0 . Therefore, the zero solution of system (3) is uniformly stable, i.e., the equilibrium point of system (1) is uniformly stable.
23
Appendix D. Proof. First, we prove that the equilibrium point of system (1) is uniformly stable (Proof of Theorem 4.3). We consider this Lyapunov function: " n n n 1 2 | bij | t V (y)(t) = [1+ (t−t0 ) ] yi (t)+ (1+(s−t0 )2 )G2j (yj (s))ds. c c i t−τ (t) i=1 i i=1 j=1 ∗
2
From condition (iii), there is a constant M ∗ > 0, such that: max{cmax ξk , 1} t0
(D.1)
For any t0 ≥ 0, let y(t0 , ϕ)(t) be a solution of system (1). So, ∀ > 0, we choose δ of the following manner: 1 δ= (
) n ) |bij | 1 2 *cmax Mj ci τ+ cmin + max 1≤i≤n
j=1
τ3 3
.
(D.2)
M∗
Then, we can prove when ϕ ∈ P Cδ (t0 ), that y(t) < , t ≥ t0 . From system (3), we have
⎧ n n n n ⎨ / 0 aij 1 2 ∂V (y)(t) = 1 + ∗ (t − t0 )2 yi2 (t) + 2 yi (t)Fj (yj (t)) + 2 ∗ (t − t0 ) yi (t) −2 ⎩ ∂t c c i i=1 i=1 j=1 i=1 i ⎫ n n n n ⎬ bij | bij | +2 yi (t)Gj (yj (t − τ (t))) + (1+(t−t0 )2 )G2j (yj (t)) ⎭ c c i i i=1 j=1 i=1 j=1 −
n n | bij | (1 + (t − τ (t) − t0 )2 )G2j (yj (t − τ (t)))(1 − τ˙ (t)). (D.3) c i i=1 j=1
We have from Lemma 2.2: n n 1 bij yi (t)Gj (yj (t − τ (t))) = 2y T (t)C −1 BG(y(t − τ (t))) c i i=1 j=1
/ √ 0T 1 T T −1 T −1 = 2G (y(t − τ (t)))B C y(t) = 2 G(y(t − τ (t))) σ B C y(t) √ σ 1 ≤ σGT (y(t − τ (t)))QG(y(t − τ (t))) + y T (t)C −1 BQ−1 B T C −1 y(t) σ n n n 1 λmax (Q) G2j (yj (t−τ (t)))+ λmax (C −1 BQ−1 B T C −1 ) yi2 (t). ≤σ n σ i=1 j=1 i=1
2
(D.4) 24
So, from (D.3) and (D.4): ∂V (y)(t) ∂t
=
+
−2(1 + ∗ (t − t0 )2 )
n
yi2 (t) + (1 + ∗ (t − t0 )2 )
⎧ n n ⎨ a
i=1 j=1
+
i=1
2 ∗ (t − t0 )
n i=1
n
(1 + (t − τ (t) − t0 )2 )
+
(1 + (t − t0 )2 )
≤ + + −
(1 + (t − t0 )2 )
n
λmax (Q) 2 1 2 yi (t) + σ(1 + ∗ (t − t0 )2 ) Gj (yj (t − τ (t))) ci n i=1 j=1
−
Then we obtain, ∂V (y)(t) ∂t
ij
y 2 (t) c i i=1 i=1 j=1 i ⎫ n n n ⎬ 1 aij 2 Fj (yj (t)) + λmax (C −1 BQ−1 B T C −1 ) yi2 (t) ⎭ ci σ ⎩
n n | bij | 2 Gj (yj (t − τ (t))) ci i=1 j=1
n n | bij | 2 Gj (yj (t)). ci i=1 j=1
⎧ ⎨ ⎩
−2
n
yi2 (t) +
i=1
n n aij i=1 j=1
ci
yi2 (t) +
n n aij i=1 j=1
ci
Fj2 (yj (t))
n n n | bij | 2 1 Gj (yj (t)) + λmax (C −1 BQ−1 B T C −1 ) yi2 (t) c σ i i=1 j=1 i=1
n n n ∗ 2 (t − t0 ) 1 2 λmax (Q) y (t) + (1 + (t − t0 )2 )σ · 1 + (t − t0 )2 i=1 ci i n i=1 j=1 | bij | (1 + (t − τ (t) − t0 )2 ) G2j (yj (t − τ (t))). ci
By using (H1), (H2) and (H3), ⎧ ⎫ ⎧
n n ⎨1 ⎬ ⎨ ∂V (y)(t) aij 2 2 2 2 aij y(t) + max Lj y(t)2 ≤ (1 + (t − t0 ) ) −2y(t) + max 1≤i≤n ⎩ ci 1≤j≤n ⎭ ⎩ ∂t c i j=1 i=1
n |bij | 1
∗ + max Mj2 y(t)2 y(t)2 + λmax C −1 BQ−1 B T C −1 y(t)2 + 1≤j≤n ci σ cmin i=1 n n λmax (Q) | bij | − (1 + (t − τ (t) − t0 )2 ) + (1 + (t − t0 )2 )σ · G2j (yj (t − τ (t))). n c i i=1 j=1 So,
⎧ ⎨
n n n ∂V (y)(t) aij | bij | 1 2 2 2 ≤ (1+(t−t0 ) ) −2 + max { aij } + max {Lj } + max {Mj } 1≤i≤n ci 1≤j≤n 1≤j≤n ⎩ ∂t c ci j=1 i=1 i i=1
25
+ −
n n 1
∗ λmax (Q) λmax (C −1 BQ−1 B T C −1 ) + y(t)2 + (1 + (t − t0 )2 )σ · σ cmin n i=1 j=1 | bij | (1 + (t − τ (t) − t0 )2 ) G2j (yj (t − τ (t))). ci
From (i), we prove | bij | λmax (Q) 1 + (t − t0 )2 σ · ≤ 1 + (t − τ (t) − t0 )2 . n ci Indeed, it is sufficient that: (1 + (t − τ (t) − t0 )2 ) σ · λmax (Q)ci ≤ . n | bij | (1 + (t − t0 )2 ) Let u(t) =
1+(t−τ (t))2 , 1+t2
next we show that, ∀t ≥ 0 √ τ2 + 4 − τ τ2 + 4 √ . u(t) ≥ τ2 + 4 + τ τ2 + 4
(D.5)
First, for t ∈ [τ, +∞[, we have: u(t) ≥
1 + (t − τ )2 = v(t). 1 + t2
It is easy to compute for t ≥ 0 % $ √ √ τ2 + 4 − τ τ2 + 4 τ + τ2 + 4 √ = vmin = v . 2 τ2 + 4 + τ τ2 + 4 Also, we obtain v(τ ) > vmin , that is √ τ2 + 4 − τ τ2 + 4 1 √ ≥ . 1 + τ2 τ2 + 4 + τ τ2 + 4
(D.6)
Second, for t ∈ [0, τ [, we have: u(t) =
1 + (t − τ (t))2 1 ≥ . 1 + t2 1 + τ2
(D.7)
In view of (D.6) and (D.7), we obtain that (D.5) holds also for t ∈ [0, τ [. Therefore, we have proved (D.5) holds for all t ∈ [0, +∞[. From (i) and (ii) we obtain,
∂V (y)(t) < 0, ∂t
26
From system (4) and Lemma 2.1, we have for all k ≥ 1: V (y)(tk ) = = +
[1 + (tk − t0 )2 ]
" n n n 1 2 | bij | tk yi (tk ) + (1 + (s − t0 )2 )G2j (yj (s))ds c c i i tk −τ (tk ) i=1 i=1 j=1
−1 [1 + (tk − t0 )2 ]y T (t− Dk y(t− k )Dk C k) − " n n t | bij | k (1 + (s − t0 )2 )G2j (yj (s))ds − − c i t −τ (t ) k k i=1 j=1
≤
− [1 + (tk − t0 )2 ]ξk y T (t− k )y(tk ) +
≤
[1 + (tk − t0 )2 ]
+ ≤ Therefore,
" − n n | bij | tk (1 + (s − t0 )2 )G2j (yj (s))ds − − c i t −τ (t ) k k i=1 j=1
ξk −1 y T (t− y(t− k )C k) λmin (C −1 )
" − n n | bij | tk (1 + (s − t0 )2 )G2j (yj (s))ds − − c i t −τ (t ) k k i=1 j=1 ξk , 1 V (t− max k ). λmin (C −1 ) V (y)(tk ) ≤ max{ξk cmax , 1}V (t− k ).
(D.8)
Using (H3) and (D.8), it follows that 1 cmax
(1 + ∗ (t − t0 )2 )y(t)2 ≤ V (t) ≤ V (t0 )
We have
max{ξk cmax , 1}.
(D.9)
t0
⎧ n ⎨
⎫ ⎞
⎬ 3 3 1 | bij | (t − t0 ) − (t − t0 − τ (t)) ⎠ V (t) ≤ ⎝ y(t)2 . (1 + ∗ (t − t0 )2 ) + max Mj2 τ (t) + 1≤i≤n ⎩ ⎭ cmin c 3 i j=1 ⎛
For t = t0 , we have: ⎧ ⎛ n ⎨ | bij 1 V (t0 ) ≤ ⎝ + max Mj2 cmin 1≤i≤n ⎩j=1 ci From (D.9) and (D.10), we obtain ⎧ ⎛ n ⎨ 1 | bij + max M2 y(t)2 ≤ ⎝ cmin 1≤i≤n ⎩j=1 j ci
(D.10)
⎫ ⎞ {ξk cmax , 1}
⎬ 3 | τ ⎠ t
Using (D.1) and (D.2), this implies ⎧ ⎛ n ⎨ | bij 1 2 ⎝ y(t) ≤ + max M2 cmin 1≤i≤n ⎩j=1 j ci 27
⎫ ⎞
|⎬ τ3 ⎠ τ+ ϕ2 . ⎭ 3
⎫
|⎬
⎞ τ3 ⎠ τ+ cmax δ 2 M ∗ ≤ 2 . ⎭ 3
Hence, the zero solution of system (1) is uniformly stable. In view of condition (iv), it is obvious that : lim sup y(t)2 = 0, so the equit−→+∞
librium point of system (1) is also uniformly asymptotically stable and globally asymptotically stable. Appendix E. Proof. To prove the Theorem 4.6, consider the Lyapunov functional as follows: " n n n 1 ¯t 2 |bij | t e yi (t) + e¯s G2j (yj (s))ds. V (y)(t) = c c i t−τ (t) i=1 i i=1 j=1 It is clear that: V (y)(t) > 0, ∀y = 0. By using the assumption (H2), it is easy to show that
M2 1 1 + max B e¯t y(t)2 . (1 − e−¯τ ) V (y)(t) ≤ cmin cmin
¯
(E.1)
Besides from system (4) and Lemma 2.1, we have for all k ≥ 1: " n n n 1 ¯tk 2 |bij | tk V (y)(tk ) = e yi (tk ) + e¯s G2j (yj (s))ds c c i i t −τ (t ) k k i=1 i=1 j=1 " tk n n |bij | e¯s G2j (yj (s))ds = e¯tk y T (tk )C −1 y(tk ) + c i t −τ (t ) k k i=1 j=1 = ≤ ≤
e
¯tk T
y
−1 (t− Dk y(t− k )Dk C k)
− e¯tk ξk y T (t− k )y(tk ) +
e¯tk
+
n n i=1 j=1
" n n |bij | i=1 j=1
|bij | ci
"
ci
t− k
− t− k −τ (tk )
n
n
t− k
− t− k −τ (tk )
e¯s G2j (yj (s))ds
e¯s G2j (yj (s))ds
|bij | ξk −1 y T (t− y(t− k )C k)+ −1 λmin (C ) ci i=1 j=1
"
t− k
− t− k −τ (tk )
e¯s G2j (yj (s))ds.
Then, V (y(tk )) ≤ max{ξk .cmax , 1}V (t− k ). On the other hand, from (H3) and (E.2), we have: 1 ¯t e y(t)2 ≤ V (t) ≤ V (t0 ) max{ξk .cmax , 1}. cmax
(E.2)
(E.3)
t0
Using (E.1) we have V (t0 ) ≤
1 cmin
+
2 Mmax B cmin
28
1 (1 − e−¯τ ) e¯t0 ϕ2 .
¯
(E.4)
By combining (E.4) and (E.3), we obtain
2 cmax cmax .Mmax 1 (1 − e−¯τ ) + B e−¯(t−t0 ) ϕ2 × y(t)2 ≤ cmin cmin
¯
max{ξk .cmax , 1}.
t0 ≤tk ≤t
Using condition (iii), this last inequality gives
1
¯ 0) y(t) ≤ M ϕe− 2 (¯−α)(t−t , ∀t ≥ t0 ,
where
M =
#
2 cmax cmax .Mmax + B cmin cmin
1 −¯ τ (1 − e ) eν ≥ 1.
¯
Hence, the zero solution of (1) is globally exponentially stable. Now, we turn our attention to the function V (.). First, we have: n n 1 2 1 ∂V (y)(t) =¯
e¯t yi (t) + e¯t 2yi (t)y˙i (t) ∂t c c i i=1 i=1 i
n n 2 |bij | 1 ¯t 2 e Gj (yj (t)) − e¯(t−τ (t)) G2j (yj (t − τ (t)))(1 − τ˙ (t)) . + ci i=1 j=1
Using system (3), one obtains n n n ∂V (y)(t) 1 2 1 =¯
e¯t yi (t) + e¯t 2yi (t)(−ci yi (t) + aij Fj (yj (t)) ∂t c c i=1 i i=1 i j=1
+
n
bij Gj (yj (t−τ (t)))+
j=1
n n 2 |bij | 1 ¯t 2 e Gj (yj (t)) − e¯(t−τ (t)) G2j (yj (t − τ (t)))(1 − τ˙ (t)) . ci i=1 j=1
Then, n n n n 1 2 1 ∂V (y)(t) ≤¯
e¯t yi (t) − 2e¯t yi2 (t) + 2e¯t aij yi (t)Fj (yj (t)) ∂t c c i i=1 i=1 i=1 j=1 i
+2e¯t
n n n n n n 1 |bij | ¯t 2 |bij | 2 bij yi (t)Gj (yj (t−τ (t)))+ e Gj (yj (t))−e¯(t−τ (t)) G (yj (t−τ (t))). c ci ci j i=1 j=1 i i=1 j=1 i=1 j=1
Therefore, n n n n n n ∂V (y)(t) 1 2 1 1 ≤¯
e¯t yi (t)−2e¯t yi2 (t)+e¯t aij yi2 (t)+e¯t aij Fj2 (yj (t)) ∂t c c c i i i i=1 i=1 i=1 j=1 i=1 j=1
+2e¯t
n n n n n n 1 |bij | ¯t 2 |bij | 2 bij yi (t)Gj (yj (t−τ (t)))+ e Gj (yj (t))−e¯(t−τ (t)) G (yj (t−τ (t))). c c ci j i i=1 j=1 i i=1 j=1 i=1 j=1
(E.5) 29
Moreover, we have by Lemma 2.2: n n 1 bij yi (t)Gj (yj (t − τ (t))) = 2y T (t)C −1 BG(y(t − τ (t))) c i i=1 j=1
/ √ 0T 1 = 2GT (y(t − τ (t)))B T C −1 y(t) = 2 G(y(t − τ (t))) σ B T C −1 y(t) √ σ 1 ≤ σGT (y(t − τ (t)))QG(y(t − τ (t))) + y T (t) C −1 BQ−1 B T C −1 y(t) σ n n n 1 λmax (Q) 2 Gj (yj (t−τ (t)))+ λmax C −1 BQ−1 B T C −1 yi2 (t). ≤σ n σ i=1 j=1 i=1
2
(E.6) By substituting (E.6) in (E.5), we obtain ∂V (y)(t) ∂t
≤
+ +
n n n n n n 1 2 1 1 yi (t) − 2e¯t yi2 (t) + e¯t aij yi2 (t) + e¯t aij Fj2 (yj (t)) c c c i=1 i i=1 i=1 j=1 i i=1 j=1 i ⎧ ⎫ n n n ⎨ λ ⎬ (Q) 1 max e¯t σ G2j (yj (t − τ (t))) + λmax (C −1 BQ−1 B T C −1 ) yi2 (t) ⎩ ⎭ n σ i=1 j=1 i=1
e¯t ¯
n n |bij | i=1 j=1
ci
e¯t G2j (yj (t)) − e¯(t−τ (t))
n n |bij | i=1 j=1
ci
G2j (yj (t − τ (t))).
Using (H3), we obtain ⎧ ⎫ ⎛
n n ⎨ ⎬
¯ ∂V (y)(t) 1 1 a ij + λmax (C −1 BQ−1 B T C −1 ) ≤ ⎝ − 2 + max aij + max L2j 1≤i≤n ⎩ ci ⎭ 1≤j≤n ∂t cmin c σ j=1 i=1 i ⎧
% n n n n ⎨ 1 2 ¯t 2 ¯t λmax (Q) + max Mj |bij | e yi (t) + e σ G2j (yj (t − τ (t))) 1≤j≤n ⎩ c n i=1 i i=1 i=1 j=1 ⎫ n n ⎬ |bij | 2 − e¯(t−τ (t)) Gj (yj (t − τ (t))) . ⎭ ci i=1 j=1
It follows from conditions (i) and (ii) that ∂V (y)(t) < 0. ∂t This completes the proof of the theorem. [1] B. Ammar, F. Ch´erif, A.M. Alimi, Existence and Uniqueness of Pseudo Almost-Periodic Solutions of Recurrent Neural Networks with Time-Varying Coefficients and Mixed Delays. IEEE Transactions on Neural Networks and Learning Systems, (23)(2012) 109-118. 30
[2] A. Arbi, C. Aouiti, A. Touati, A new sufficient conditions of stability for discrete time non-autonomous delayed Hopfield neural networks. World Academy of Science, Engineering and Technology, (6) (2012) 605-610. [3] A. Arbi, C. Aouiti, A. Touati, Uniform Asymptotic Stability and Global Asymptotic Stability for Time-Delay Hopfield Neural Networks. IFIP Advances in Information and Communication Technology, (381) (1) (2012) 483-492. [4] S. Arik, V. Tavsanoglu, On the global asymptotic stability of delayed cellular neural networks. IEEE Trans. Circuits Syst. I 47 (4)(2000) 571-574. [5] A. Balavoine, J. Romberg, C.J. Rozell, Convergence and Rate Analysis of Neural Networks for Sparse Approximation. IEEE Transactions on Neural Networks and Learning Systems, (23) (2012) 1377-1389. [6] A. Berman, R.J. Plemmons, Nonnegative matrices in the mathematical science. New York: Academic, (1979). [7] J. Cao, Global stability conditions for delayed CNNs. IEEE Trans. Circuits Syst. I 48 (11)(2001) 1330-1333. [8] J. Cao, J. Wang, Global asymptotic stability of recurrent neural networks with Lipschitz-continuous activation functions and time-varying delays. IEEE Trans. Circuits Syst. I (50)(2003) 34-44. [9] J. Cao, X. Li, Stability in delayed Cohen-Grossberg neural networks: LMI optimization approach. Physica D (212)(2005) 54-65. [10] A. Chen, J. Cao, L. Huang, An estimation of upperbound of delays for global asymptotic stability of delayed Hopfield neural networks. IEEE Trans Circuits Syst I (49)(2002) 1028-1032. [11] G. Chen, Z. Pu, J. Zhang, The global exponential stability and global attractivity for variably delayed Hopfield neural network models. Chinese Journal of Engineering Mathematics 22 (5)(2005) 821-826. [12] Wu-H. Chen, Zhi-H. Guan, X. Lu, Delay-dependent exponential stability of uncertain stochastic systems with multiple delays: an LMI approach. Syst. Control Lett. (54)(2005) 547-555. [13] Z. Chen, J. Ruan, Global stability analysis of impulsive Cohen-Grossberg neural networks with delay. Physica A (345)(2005) 101-111. [14] X.L. Fu, B.Q. Yan, Y.S. Liu, Introduction of Impulsive Differential Systems. Science Press, Beijing (2005). [15] Zhi-H. Guan, J. Lam, G. Chen, On impulsive autoassociative neural networks. Neural Networks (13)(2000) 63-69.
[16] J.J. Hopfield, Neural networks and physical systems with emergent collective computational abilities. Proc. Nat. Acad. Sci. USA, (79)(1982) 2554-2558. [17] H. Huang, J. Cao, On global asymptotic stability of recurrent neural networks with time-varying delays. Applied Mathematics and Computation (142)(2003) 143-154. [18] T. Huang, C. Li, S. Duan, J.A. Starzky, Robust exponential stability of uncertain delayed neural networks with stochastic perturbation and impulse effects. IEEE Transactions on Neural Networks and Learning Systems, (23)(2012) 866-875. [19] C. Li, X. Liao, R. Zhang, Delay-dependent exponential stability analysis of bi-directional associative memory neural networks with time delay: an LMI approach. Chaos, Solitons and Fractals (24)(2005) 1119-1134. [20] X. Li, L. Huang, J. Wu, A new method of Lyapunov functionals for delayed cellular neural networks. IEEE Trans. Circuits Syst. I Regul. Pap., (51)(2004) 2263-2270. [21] T.-L. Liao, F.-C. Wang, Global stability condition for cellular neural networks with delay. IEEE Electron. Lett. (35)(1999) 1347-1349. [22] T.-L. Liao, F.-C. Wang, Global stability for cellular neural networks with time delay. IEEE Transactions on Neural Networks (11)(2000) 1481-1484. [23] X. Liao, Kwok-W. Wong, Z. Wu, G. Chen, Novel robust stability criteria for interval-delayed Hopfield neural networks. IEEE Transactions on Circuits and Systems I (48)(2001) 1355-1359. [24] X.X. Liao, Mathematical theory of cellular neural networks (II). Sci. China (Ser. A) (38)(1995) 542-551. [25] B. Liu, Almost periodic solutions for Hopfield neural networks with continuously distributed delays. Mathematics and Computers in Simulation (73)(2007) 327-335. [26] B. Liu, X. Liu, X. Liao, Robust stability of uncertain impulsive dynamical systems. J. Math. Anal. Appl. (290)(2004) 519-533. [27] X. Liu, R. Dickson, Stability analysis of Hopfield neural networks with uncertainty. Mathematical and Computer Modelling (34)(2001) 353-363. [28] X. Liu, G. Ballinger, Boundedness for impulsive delay differential equations and applications to population growth models. Nonlinear Analysis: Theory, Methods & Applications (53)(2003) 1041-1062. [29] X. Liu, G. Ballinger, Existence and continuability of solutions for differential equations with delays and state-dependent impulses. Nonlinear Anal. 51 (2002) 633-647.
[30] S. Long, D. Xu, Delay-dependent stability analysis for impulsive neural networks with time varying delays. Neurocomputing (71)(2008) 1705-1713. [31] J. Peng, H. Qiao, Zong-b. Xu, A new approach to stability of neural networks with time-varying delays. Neural Networks (15)(2002) 95-103. [32] Edgar N. Sanchez, Joze P. Perez, Input-to-state stability (ISS) analysis for dynamic NN. IEEE Trans. Circuits Syst. I 46 (1999) 1395-1398. [33] W. Xiang, J. Xiao, Stability analysis and control synthesis of switched impulsive systems. Int. J. Robust. Nonlinear Control (22) (2012) 1440-1459. [34] Z. Yang, D. Xu, Stability analysis of delay neural networks with impulsive effects. IEEE Trans. Circuits Syst II: Express Briefs (52)(2005) 517-521. [35] H. Zhang, F. Yang, X. Liu, Q. Zhang, Stability Analysis for Neural Networks With Time-Varying Delay Based on Quadratic Convex Combination. IEEE Transactions on Neural Networks and Learning Systems, (24)(2013) 513-521. [36] H. Zhang, G. Wang, New criteria of global exponential stability for a class of generalized neural networks with time-varying delays. Neurocomputing 70 (2007) 2486-2494. [37] Y. Zhang, J. Sun, Boundedness of the solutions of impulsive differential systems with time-varying delay. Appl.Math.Comput. (154)(2004) 279-288. [38] Y. Zhang, J. Sun, Stability of impulsive neural networks with time delays. Physica A (384)(2005) 44-50. [39] Q. Zhang, X. Wei, J. Xu, Delay-dependent exponential stability of cellular neural networks with time-varying delays. Chaos, Solitons and Fractals, (23)(2005) 1363-1369. [40] Q. Zhang, X. Wei, J. Xu, Delay-dependent global stability results for delayed Hopfield neural networks. Chaos Solitons and Fractals (34)(2007) 662-668. [41] Q. Zhang, X. Wei, J. Xu, Delay-dependent global stability condition for delayed Hopfield neural networks. Nonlinear Analysis: Real World Applications (8)(2007) 997-1002.