Finite-time synchronization of memristor-based Cohen–Grossberg neural networks with time-varying delays


PII: S0925-2312(16)00192-2
DOI: http://dx.doi.org/10.1016/j.neucom.2016.02.012
Reference: NEUCOM16737

To appear in: Neurocomputing
Received date: 23 September 2015
Revised date: 31 December 2015
Accepted date: 8 February 2016

Cite this article as: Mei Liu, Haijun Jiang and Cheng Hu, Finite-time synchronization of memristor-based Cohen-Grossberg neural networks with time-varying delays, Neurocomputing, http://dx.doi.org/10.1016/j.neucom.2016.02.012

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting galley proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Finite-time synchronization of memristor-based Cohen-Grossberg neural networks with time-varying delays Mei Liu, Haijun Jiang∗ and Cheng Hu College of Mathematics and System Sciences, Xinjiang University, Urumqi, 830046, Xinjiang, P.R. China

Abstract. This paper concerns the problem of global and local finite-time synchronization for a class of memristor-based Cohen-Grossberg neural networks with time-varying delays by designing an appropriate feedback controller. Through a nonlinear transformation, we derive an alternative system from the considered memristor-based Cohen-Grossberg neural networks. Then, by considering finite-time synchronization of the alternative system, we obtain some novel and effective finite-time synchronization criteria for the considered memristor-based Cohen-Grossberg neural networks. These results generalize and extend some previously known works on conventional Cohen-Grossberg neural networks. Finally, numerical simulations are given to demonstrate the effectiveness of the theoretical results. Key words: Cohen-Grossberg neural network; Time-varying delay; Finite-time synchronization; Memristor

1 Introduction

Memristor is a contraction of "memory resistor"; the device was first postulated by Chua (1971) [1] to describe the relationship between electric charge and magnetic flux. Memristors can be used to fabricate new neural network models that emulate the human brain. However, memristors attracted little attention from researchers until Strukov et al. (2008) [2] reported that a nanometer-scale solid-state two-terminal memristor had been constructed by a team at the Hewlett-Packard Company. Recently, studies of memristive neural networks have flourished, because a memristive neuro-dynamic system is an entirely new artificial neural weighting circuit with potential applications in many areas, such as systems control, associative memory, pattern recognition, signal processing, optimization, and new classes of artificial neural systems. A memristor-based neural network can be seen as a particular case of a switched network in which the switching rule depends on the network state, but its analysis differs from that of general switched systems because of its special characteristics. The study of memristor-based neural networks with discontinuous right-hand sides is therefore not easy. However, after Filippov proposed a regularization method [3] that transforms a differential equation with discontinuous right-hand side into a differential inclusion [4], the dynamic behavior of memristor-based neural networks became a popular topic [5, 6, 7]. Synchronization of memristor-based neural networks with time-varying delay via an adaptive feedback controller was investigated in [5]. Global exponential stability in the Lagrange sense for memristive recurrent neural networks with time-varying delays was considered in [6]. Global Mittag-Leffler stability and synchronization of memristor-based fractional-order neural networks were studied in [7]. Recently, in order to synchronize two identical or different chaotic systems, many synchronization strategies have been developed, including adaptive control [8, 9, 10], impulsive control [11, 12, 13], fuzzy control [14], feedback control [5], and periodically intermittent control [15].

∗E-mail: [email protected]; [email protected].
This indicates that a suitable controller can make the slave system synchronize to the master system over an infinite horizon. Nevertheless, for the purposes of supervision and control, the networks may be expected to reach synchronization in finite time, especially in engineering applications. In general, exponential synchronization cannot guarantee fast convergence of the controlled system [16], whereas finite-time synchronization possesses this property. However, as far as we know, few published papers have investigated finite-time synchronization of memristor-based Cohen-Grossberg neural networks. It is therefore important to consider this problem. Motivated by the above analysis, this paper presents a memristor-based Cohen-Grossberg neural network model with discrete time-varying delays and then considers global finite-time synchronization of the model via an appropriate controller. The main contributions of this paper can be summarized as follows. First, based on the properties of the behaved function and the amplification function and on the derivative theorem for inverse functions, we derive a transformed system from the considered memristor-based Cohen-Grossberg neural networks. Then, by investigating finite-time synchronization of the transformed system, we obtain corresponding finite-time synchronization conditions for the considered model. Moreover, the criteria obtained in this paper are easy to verify and improve the conditions given in [17, 18, 19]. Finally, numerical simulations are given to illustrate the effectiveness of the theoretical results.

The rest of this paper is organized as follows. In Section 2, the model of memristor-based Cohen-Grossberg neural networks with time-varying delays and some preliminaries are given. Finite-time synchronization of the considered model under the appropriate controller is investigated in Section 3. In Section 4, the effectiveness and feasibility of the developed methods are demonstrated by a numerical example. Finally, conclusions are drawn in Section 5.

2 Preliminaries

For convenience, the following notations are introduced. Let $\mathbb{R}^n$ be the space of $n$-dimensional real column vectors. For $x = (x_1, \ldots, x_n)^T \in \mathbb{R}^n$, $\|x\|$ denotes the vector norm defined by $\|x\| = \big(\sum_{i=1}^{n} x_i^2\big)^{1/2}$. For a matrix $P \in \mathbb{R}^{n \times n}$, $P^T$ denotes its transpose, $\lambda_{\max}(P)$ and $\lambda_{\min}(P)$ denote the maximum and minimum eigenvalues of $P$, respectively, and $P > 0$ means that $P$ is positive definite. Let $W_{ij} = \max\{|\hat{w}_{ij}|, |\check{w}_{ij}|\}$, $C_{ij} = \max\{|\hat{c}_{ij}|, |\check{c}_{ij}|\}$, and denote $W = (W_{ij})_{n \times n}$.

Based on the relevant models in [17, 18, 19] for memristor-based recurrent neural networks, we investigate a memristor-based Cohen-Grossberg neural network model with time-varying delays described by

$$\dot{u}_i(t) = -a_i(u_i(t))\Big[b_i(u_i(t)) - \sum_{j=1}^{n} w_{ij}(u_i(t)) f_j(u_j(t)) - \sum_{j=1}^{n} c_{ij}(u_i(t)) f_j(u_j(t-\tau_j(t))) - I_i\Big], \quad i \in \mathcal{I} = \{1, 2, \ldots, n\}, \eqno(1)$$

where $u_i(t)$ denotes the state variable of the $i$th neuron at time $t$, $I_i$ is the external input to the $i$th neuron, $a_i(u_i(t))$ and $b_i(u_i(t))$ represent the amplification function and the appropriately behaved function at time $t$, respectively, $f_j(u_j(t))$ denotes the output of the $j$th neuron at time $t$, and the time-varying delay $\tau_j(t)$ corresponds to the finite speed of axonal signal transmission. The memristive connection weights $w_{ij}(u_i(t))$ and $c_{ij}(u_i(t))$ satisfy

$$w_{ij}(u_i(t)) = \frac{M_{ij}}{C_i} \times \mathrm{sign}_{ij}, \qquad c_{ij}(u_i(t)) = \frac{\tilde{M}_{ij}}{C_i} \times \mathrm{sign}_{ij}, \qquad \mathrm{sign}_{ij} = \begin{cases} 1, & i \neq j, \\ -1, & i = j, \end{cases}$$

in which $M_{ij}$, $\tilde{M}_{ij}$ denote the memductances of the memristors $R_{ij}$, $\tilde{R}_{ij}$, respectively. In addition, $R_{ij}$ represents the memristor between the feedback function $f_i(u_i(t))$ and $u_i(t)$, and $\tilde{R}_{ij}$ represents the memristor between the feedback function $f_i(u_i(t-\tau_i(t)))$ and $u_i(t)$.

As is well known, the capacitor $C_i$ is fixed, while the memductances $M_{ij}$, $\tilde{M}_{ij}$ respond to changes in the pinched hysteresis loops. Thus $w_{ij}(u_i(t))$ and $c_{ij}(u_i(t))$ change as the pinched hysteresis loops change. According to the feature of the memristor and its current-voltage characteristic,

$$w_{ij}(u_i(t)) = \begin{cases} \hat{w}_{ij}, & |u_i(t)| < T_i, \\ \check{w}_{ij}, & |u_i(t)| \ge T_i, \end{cases} \eqno(2)$$

$$c_{ij}(u_i(t)) = \begin{cases} \hat{c}_{ij}, & |u_i(t)| < T_i, \\ \check{c}_{ij}, & |u_i(t)| \ge T_i, \end{cases} \eqno(3)$$

where the switching jumps $T_i > 0$ and $\hat{w}_{ij}, \check{w}_{ij}, \hat{c}_{ij}, \check{c}_{ij}$, $i, j \in \mathcal{I}$, are constants.

In order to obtain our main results, the following assumptions are needed:

(H1): The parameters $w_{ij}(u_i(t))$ and $c_{ij}(u_i(t))$ satisfy (2) and (3), and there exists a constant $\tau > 0$ such that $0 \le \tau_j(t) \le \tau$, where $\tau = \max_{j \in \mathcal{I}}\{\sup_{t \ge 0} \tau_j(t)\}$.

(H2): $a_i(u)$ is continuous and monotone, and there exist positive constants $\underline{a}_i$ and $\bar{a}_i$ such that $0 < \underline{a}_i \le a_i(u) \le \bar{a}_i$, $u \in \mathbb{R}$, $i \in \mathcal{I}$.



(H3): There exists a positive constant $l_i$ such that $b_i'(u) \ge l_i$, where $b_i'(u)$ denotes the derivative of $b_i(u)$, $u \in \mathbb{R}$, and $b_i(0) = 0$, $i \in \mathcal{I}$.

(H4): There exists a positive constant $L_i$ such that $|f_i(x) - f_i(y)| \le L_i |x - y|$, $\forall x, y \in \mathbb{R}$, $x \neq y$, $i \in \mathcal{I}$.

(H5): There exists a positive constant $M_i$ such that $|f_i(x)| \le M_i$, $\forall x \in \mathbb{R}$, $i \in \mathcal{I}$.

From (H2), the antiderivative of $\frac{1}{a_i(u_i)}$ exists. We choose the antiderivative $h_i(u_i)$ of $\frac{1}{a_i(u_i)}$ that satisfies $h_i(0) = 0$. Obviously, $\frac{d}{du_i} h_i(u_i) = \frac{1}{a_i(u_i)}$. Since $a_i(u_i) > 0$, $h_i(u_i)$ is strictly monotonically increasing in $u_i$. In light of the derivative theorem for inverse functions, the inverse function $h_i^{-1}$ of $h_i$ is differentiable and $\frac{d}{du_i} h_i^{-1}(u_i) = a_i(h_i^{-1}(u_i))$. By (H3), the composite function $b_i(h_i^{-1}(z))$ is differentiable. Denote $x_i(t) = h_i(u_i(t))$; then $\dot{x}_i(t) = \frac{\dot{u}_i(t)}{a_i(u_i(t))}$ and $u_i(t) = h_i^{-1}(x_i(t))$. Substituting these equalities into system (1), we get

$$\dot{x}_i(t) = -b_i(h_i^{-1}(x_i(t))) + \sum_{j=1}^{n} w_{ij}(h_i^{-1}(x_i(t))) f_j(h_j^{-1}(x_j(t))) + \sum_{j=1}^{n} c_{ij}(h_i^{-1}(x_i(t))) f_j(h_j^{-1}(x_j(t-\tau_j(t)))) + I_i. \eqno(4)$$
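Numerically, the change of variables $x_i = h_i(u_i)$ can be carried out by quadrature. The sketch below uses an arbitrary amplification function of our own choosing that satisfies (H2) with $\underline{a} = 2$, $\bar{a} = 2.5$; all names and values are illustrative, not taken from the paper's example:

```python
# Trapezoidal approximation of h(u) = integral_0^u ds / a(s) with h(0) = 0,
# for an illustrative amplification function with 2 <= a(s) <= 2.5.
def a(s):
    return 2.0 + 0.5 * abs(s) / (1.0 + abs(s))

def h(u, n=10000):
    """Antiderivative of 1/a vanishing at 0 (trapezoidal rule)."""
    if u == 0.0:
        return 0.0
    step = u / n
    acc = 0.5 * (1.0 / a(0.0) + 1.0 / a(u))
    for k in range(1, n):
        acc += 1.0 / a(k * step)
    return acc * step
```

Because $1/a$ is positive and bounded, $h$ is strictly increasing with slope in $[1/\bar{a}, 1/\underline{a}]$, so $h^{-1}$ exists and has derivative $a(h^{-1}(\cdot))$, which is exactly the fact used to pass from (1) to (4).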

From conditions (2) and (3), one can see that system (1) is a differential equation with a discontinuous right-hand side. In this case, a solution of system (1) in the conventional sense does not exist. Nevertheless, we can discuss the dynamical behavior of system (1) by means of Filippov solutions. We first recall the following definitions, which are needed to define a Filippov solution.

Definition 1 [20]. Let $E \subset \mathbb{R}^n$. A map $x \mapsto F(x)$ is called a set-valued map from $E$ to $\mathbb{R}^n$ if to each point $x$ of the set $E$ there corresponds a nonempty set $F(x) \subset \mathbb{R}^n$.

Definition 2 [20]. A set-valued map $F$ with nonempty values is said to be upper semi-continuous at $x_0 \in E \subset \mathbb{R}^n$ if, for any open set $N$ containing $F(x_0)$, there exists a neighborhood $M$ of $x_0$ such that $F(M) \subset N$. $F(x)$ is said to have a closed (convex, compact) image if, for each $x \in E$, $F(x)$ is closed (convex, compact).

Now we introduce the concept of a Filippov solution [3]. Consider the following non-autonomous differential equation in vector notation:

$$\frac{dx}{dt} = f(t, x),$$

where $t$ denotes time, $x$ represents the state vector, $\frac{dx}{dt}$ denotes the time derivative of $x$, and $f(t, x)$ is not continuous with respect to $x$.

Definition 3 [20]. Consider the set-valued map $F : \mathbb{R} \times \mathbb{R}^n \to 2^{\mathbb{R}^n}$ defined as

$$F(t, x) = \bigcap_{\varepsilon > 0} \bigcap_{\mu(N) = 0} \overline{\mathrm{co}}\big[f\big(t, B(x, \varepsilon) \setminus N\big)\big],$$

where $\overline{\mathrm{co}}[\cdot]$ denotes the closure of the convex hull, $B(x, \varepsilon)$ is the ball of center $x$ and radius $\varepsilon$, and $\mu(N)$ is the Lebesgue measure of the set $N$; the intersection is taken over all sets $N$ of measure zero and over all $\varepsilon > 0$. A solution $x(t)$ in Filippov's sense of the Cauchy problem for the equation $\frac{dx}{dt} = f(t, x)$ with initial condition $x(t_0) = x_0$ is an absolutely continuous vector-valued function on any compact subinterval of $[0, T)$ that satisfies $x(t_0) = x_0$ and the differential inclusion

$$\frac{dx}{dt} \in F(t, x), \quad \text{for a.e. } t \in [0, T).$$

In the following, let us consider system (1). Since $w_{ij}(u_i(t))$ and $c_{ij}(u_i(t))$ in system (1) are discontinuous functions, the classical definition of a solution is invalid for this differential equation with discontinuous right-hand side. For this reason, by applying the above theories of set-valued maps and differential inclusions, we extend the Filippov solution concept to the memristor-based Cohen-Grossberg neural networks (1) as follows.

Definition 4 [20]. A vector function $u = (u_1, u_2, \ldots, u_n)^T : [-\tau, T) \to \mathbb{R}^n$, $T \in (0, +\infty]$, is a state solution of the delayed and discontinuous system (1) on $[-\tau, T)$ if

(i) $u$ is continuous on $[-\tau, T)$ and absolutely continuous on any compact subinterval of $[0, T)$;

(ii) for a.e. $t \in [0, T)$, $u(t) = (u_1(t), u_2(t), \ldots, u_n(t))^T$ satisfies

$$\dot{u}_i(t) \in -a_i(u_i(t))\Big[b_i(u_i(t)) - \sum_{j=1}^{n} \mathrm{co}[w_{ij}(u_i(t))] f_j(u_j(t)) - \sum_{j=1}^{n} \mathrm{co}[c_{ij}(u_i(t))] f_j(u_j(t-\tau_j(t))) - I_i\Big] \triangleq F_i(t, u), \quad i \in \mathcal{I}, \eqno(5)$$

where $\mathrm{co}[w_{ij}(u_i(t))]$ and $\mathrm{co}[c_{ij}(u_i(t))]$ are defined by

$$\mathrm{co}[w_{ij}(u_i(t))] = \begin{cases} \hat{w}_{ij}, & |u_i(t)| < T_i, \\ \mathrm{co}\{\hat{w}_{ij}, \check{w}_{ij}\}, & |u_i(t)| = T_i, \\ \check{w}_{ij}, & |u_i(t)| > T_i, \end{cases} \eqno(6)$$

$$\mathrm{co}[c_{ij}(u_i(t))] = \begin{cases} \hat{c}_{ij}, & |u_i(t)| < T_i, \\ \mathrm{co}\{\hat{c}_{ij}, \check{c}_{ij}\}, & |u_i(t)| = T_i, \\ \check{c}_{ij}, & |u_i(t)| > T_i. \end{cases} \eqno(7)$$
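In simulation code, the single-valued switching law (2)-(3) is what one actually implements; the convex hulls (6)-(7) only matter on the measure-zero set $|u_i(t)| = T_i$. A sketch (all names are illustrative):

```python
# Sketch of the state-dependent weight switching of Eqs. (2)-(3).
# T_i is the switching jump of neuron i; w_hat / w_check are the two
# constant levels the memristive weight toggles between.
def memristive_weight(u_i, T_i, w_hat, w_check):
    """Return w_ij(u_i): w_hat when |u_i| < T_i, w_check otherwise."""
    return w_hat if abs(u_i) < T_i else w_check
```

At the threshold $|u_i| = T_i$ the right-hand side is discontinuous, which is precisely why the Filippov framework replaces the single value by $\mathrm{co}\{\hat{w}_{ij}, \check{w}_{ij}\}$ there.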

Obviously, it is easy to check that the set-valued map $(t, u) \mapsto (F_1(t, u), F_2(t, u), \ldots, F_n(t, u))^T$ has nonempty compact convex values. Furthermore, it is upper semi-continuous, and hence measurable [4]. By the measurable selection theorem [21], if $u$ is a solution of system (1), then there exist bounded measurable functions $w_{ij}(u_i(t)), c_{ij}(u_i(t)) : [0, T) \to \mathbb{R}$ with $w_{ij}(u_i(t)) \in \mathrm{co}[w_{ij}(u_i(t))]$, $c_{ij}(u_i(t)) \in \mathrm{co}[c_{ij}(u_i(t))]$ for a.e. $t \in [0, T)$ such that

$$\dot{u}_i(t) = -a_i(u_i(t))\Big[b_i(u_i(t)) - \sum_{j=1}^{n} w_{ij}(u_i(t)) f_j(u_j(t)) - \sum_{j=1}^{n} c_{ij}(u_i(t)) f_j(u_j(t-\tau_j(t))) - I_i\Big], \quad i \in \mathcal{I}. \eqno(8)$$

Under this definition, $u_i(t)$ is a solution of system (1) in the sense of Filippov. Based on the above definitions and the theory of differential inclusions, it can be deduced from (4) that

$$\dot{x}_i(t) \in -b_i(h_i^{-1}(x_i(t))) + \sum_{j=1}^{n} \mathrm{co}[w_{ij}(h_i^{-1}(x_i(t)))] f_j(h_j^{-1}(x_j(t))) + \sum_{j=1}^{n} \mathrm{co}[c_{ij}(h_i^{-1}(x_i(t)))] f_j(h_j^{-1}(x_j(t-\tau_j(t)))) + I_i, \eqno(9)$$

where

$$\mathrm{co}[w_{ij}(h_i^{-1}(x_i(t)))] = \begin{cases} \hat{w}_{ij}, & |h_i^{-1}(x_i(t))| < T_i, \\ \mathrm{co}\{\hat{w}_{ij}, \check{w}_{ij}\}, & |h_i^{-1}(x_i(t))| = T_i, \\ \check{w}_{ij}, & |h_i^{-1}(x_i(t))| > T_i, \end{cases} \eqno(10)$$

$$\mathrm{co}[c_{ij}(h_i^{-1}(x_i(t)))] = \begin{cases} \hat{c}_{ij}, & |h_i^{-1}(x_i(t))| < T_i, \\ \mathrm{co}\{\hat{c}_{ij}, \check{c}_{ij}\}, & |h_i^{-1}(x_i(t))| = T_i, \\ \check{c}_{ij}, & |h_i^{-1}(x_i(t))| > T_i, \end{cases} \eqno(11)$$

or, equivalently, there exist $w_{ij}(h_i^{-1}(x_i(t))) \in \mathrm{co}[w_{ij}(h_i^{-1}(x_i(t)))]$ and $c_{ij}(h_i^{-1}(x_i(t))) \in \mathrm{co}[c_{ij}(h_i^{-1}(x_i(t)))]$ such that

$$\dot{x}_i(t) = -b_i(h_i^{-1}(x_i(t))) + \sum_{j=1}^{n} w_{ij}(h_i^{-1}(x_i(t))) f_j(h_j^{-1}(x_j(t))) + \sum_{j=1}^{n} c_{ij}(h_i^{-1}(x_i(t))) f_j(h_j^{-1}(x_j(t-\tau_j(t)))) + I_i. \eqno(12)$$
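As a one-dimensional illustration of Definitions 3 and 4 (our example, not from the paper), consider the discontinuous scalar field $f(x) = -\mathrm{sign}(x)$. The Filippov regularization fills in the jump at the origin:

```latex
% Filippov set-valued map of f(x) = -sign(x)
F(x) \;=\; \bigcap_{\varepsilon > 0}\ \bigcap_{\mu(N) = 0}
  \overline{\mathrm{co}}\big[f\big(B(x,\varepsilon) \setminus N\big)\big]
  \;=\;
  \begin{cases}
    \{-1\}, & x > 0, \\
    [-1,\, 1], & x = 0, \\
    \{+1\}, & x < 0.
  \end{cases}
```

The inclusion $\dot{x} \in F(x)$ then admits the absolutely continuous solution $x(t) = \max\{x_0 - t,\, 0\}$ for $x_0 > 0$, which reaches the origin in the finite time $x_0$ and stays there; the sign terms of the feedback controller introduced below exploit the same mechanism to enforce finite-time convergence.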

Let system (1) be the driving system; we construct a controlled response system described by

$$\dot{v}_i(t) = -a_i(v_i(t))\Big[b_i(v_i(t)) - \sum_{j=1}^{n} w_{ij}(v_i(t)) f_j(v_j(t)) - \sum_{j=1}^{n} c_{ij}(v_i(t)) f_j(v_j(t-\tau_j(t))) - I_i\Big] + R_i(t), \eqno(13)$$

where $R_i(t)$ is an appropriate feedback controller. The initial conditions $\varphi(s) = (\varphi_1(s), \varphi_2(s), \ldots, \varphi_n(s))^T$ of system (1) and $\phi(s) = (\phi_1(s), \phi_2(s), \ldots, \phi_n(s))^T$ of system (13) are of the form

$$u_i(s) = \varphi_i(s), \qquad v_i(s) = \phi_i(s), \eqno(14)$$

where $s \in [-\tau, 0]$, $i \in \mathcal{I}$, and $\varphi_i(s)$, $\phi_i(s)$ are continuous functions.

Now define the synchronization error $e(t) = (e_1(t), e_2(t), \ldots, e_n(t))^T$, where $e_i(t) = y_i(t) - x_i(t)$. We choose the following control strategy:

$$R_i(t) = -p_i(v_i(t) - u_i(t)) - \eta_i\,\mathrm{sign}(v_i(t) - u_i(t)) - \sum_{j=1}^{n} k_{ij}\,\mathrm{sign}(v_j(t) - u_j(t)) - \sum_{j=1}^{n} \delta_{ij}\,\mathrm{sign}(v_i(t) - u_i(t))\,\big|v_j(t-\tau_j(t)) - u_j(t-\tau_j(t))\big|, \eqno(15)$$

where $p_i, \eta_i > 0$ are constant control strengths, $\delta_{ij}$ is the controller gain, and $K = (k_{ij})_{n \times n} > 0$ is a positive definite matrix, which will be suitably chosen to synchronize the master-slave systems in finite time.

Similarly to the analysis of (9)-(12), from (13) and (15) we obtain

$$\begin{aligned} \dot{y}_i(t) \in {} & -b_i(h_i^{-1}(y_i(t))) + \sum_{j=1}^{n} \mathrm{co}[w_{ij}(h_i^{-1}(y_i(t)))] f_j(h_j^{-1}(y_j(t))) + \sum_{j=1}^{n} \mathrm{co}[c_{ij}(h_i^{-1}(y_i(t)))] f_j(h_j^{-1}(y_j(t-\tau_j(t)))) + I_i \\ & - \frac{p_i}{a_i(h_i^{-1}(y_i(t)))}\big(h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t))\big) - \frac{\eta_i}{a_i(h_i^{-1}(y_i(t)))}\,\mathrm{sign}\big(h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t))\big) \\ & - \frac{1}{a_i(h_i^{-1}(y_i(t)))} \sum_{j=1}^{n} k_{ij}\,\mathrm{sign}\big(h_j^{-1}(y_j(t)) - h_j^{-1}(x_j(t))\big) \\ & - \frac{1}{a_i(h_i^{-1}(y_i(t)))} \sum_{j=1}^{n} \delta_{ij}\,\mathrm{sign}\big(h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t))\big)\,\big|h_j^{-1}(y_j(t-\tau_j(t))) - h_j^{-1}(x_j(t-\tau_j(t)))\big|, \end{aligned} \eqno(16)$$

where $y_i(t) = h_i(v_i(t))$ and

$$\mathrm{co}[w_{ij}(h_i^{-1}(y_i(t)))] = \begin{cases} \hat{w}_{ij}, & |h_i^{-1}(y_i(t))| < T_i, \\ \mathrm{co}\{\hat{w}_{ij}, \check{w}_{ij}\}, & |h_i^{-1}(y_i(t))| = T_i, \\ \check{w}_{ij}, & |h_i^{-1}(y_i(t))| > T_i, \end{cases} \eqno(17)$$

$$\mathrm{co}[c_{ij}(h_i^{-1}(y_i(t)))] = \begin{cases} \hat{c}_{ij}, & |h_i^{-1}(y_i(t))| < T_i, \\ \mathrm{co}\{\hat{c}_{ij}, \check{c}_{ij}\}, & |h_i^{-1}(y_i(t))| = T_i, \\ \check{c}_{ij}, & |h_i^{-1}(y_i(t))| > T_i, \end{cases} \eqno(18)$$

or, equivalently, there exist $w_{ij}(h_i^{-1}(y_i(t))) \in \mathrm{co}[w_{ij}(h_i^{-1}(y_i(t)))]$ and $c_{ij}(h_i^{-1}(y_i(t))) \in \mathrm{co}[c_{ij}(h_i^{-1}(y_i(t)))]$ such that

$$\begin{aligned} \dot{y}_i(t) = {} & -b_i(h_i^{-1}(y_i(t))) + \sum_{j=1}^{n} w_{ij}(h_i^{-1}(y_i(t))) f_j(h_j^{-1}(y_j(t))) + \sum_{j=1}^{n} c_{ij}(h_i^{-1}(y_i(t))) f_j(h_j^{-1}(y_j(t-\tau_j(t)))) + I_i \\ & - \frac{p_i}{a_i(h_i^{-1}(y_i(t)))}\big(h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t))\big) - \frac{\eta_i}{a_i(h_i^{-1}(y_i(t)))}\,\mathrm{sign}\big(h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t))\big) \\ & - \frac{1}{a_i(h_i^{-1}(y_i(t)))} \sum_{j=1}^{n} k_{ij}\,\mathrm{sign}\big(h_j^{-1}(y_j(t)) - h_j^{-1}(x_j(t))\big) \\ & - \frac{1}{a_i(h_i^{-1}(y_i(t)))} \sum_{j=1}^{n} \delta_{ij}\,\mathrm{sign}\big(h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t))\big)\,\big|h_j^{-1}(y_j(t-\tau_j(t))) - h_j^{-1}(x_j(t-\tau_j(t)))\big|. \end{aligned} \eqno(19)$$

Therefore, from (12) and (19), we obtain the following synchronization error system:

$$\begin{aligned} \dot{e}_i(t) = {} & -\big[b_i(h_i^{-1}(y_i(t))) - b_i(h_i^{-1}(x_i(t)))\big] + \sum_{j=1}^{n} \big[w_{ij}(h_i^{-1}(y_i(t))) f_j(h_j^{-1}(y_j(t))) - w_{ij}(h_i^{-1}(x_i(t))) f_j(h_j^{-1}(x_j(t)))\big] \\ & + \sum_{j=1}^{n} \big[c_{ij}(h_i^{-1}(y_i(t))) f_j(h_j^{-1}(y_j(t-\tau_j(t)))) - c_{ij}(h_i^{-1}(x_i(t))) f_j(h_j^{-1}(x_j(t-\tau_j(t))))\big] \\ & - \frac{p_i}{a_i(h_i^{-1}(y_i(t)))}\big(h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t))\big) - \frac{\eta_i}{a_i(h_i^{-1}(y_i(t)))}\,\mathrm{sign}\big(h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t))\big) \\ & - \frac{1}{a_i(h_i^{-1}(y_i(t)))} \sum_{j=1}^{n} k_{ij}\,\mathrm{sign}\big(h_j^{-1}(y_j(t)) - h_j^{-1}(x_j(t))\big) \\ & - \frac{1}{a_i(h_i^{-1}(y_i(t)))} \sum_{j=1}^{n} \delta_{ij}\,\mathrm{sign}\big(h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t))\big)\,\big|h_j^{-1}(y_j(t-\tau_j(t))) - h_j^{-1}(x_j(t-\tau_j(t)))\big|. \end{aligned} \eqno(20)$$

In addition, the following definition is given in order to state our results.

Definition 5 [22]. The zero solution of system (20) is said to be finite-time stable (on an open neighborhood $U \subset D$ of the origin) if:

(i) there exists a function $T : U \setminus \{0\} \to (0, \infty)$ such that, for all $e_0 \in U$, the solution $\psi(t, e_0)$ of system (20) is defined, $\psi(t, e_0) \in U \setminus \{0\}$ for $t \in [0, T(e_0))$, and $\lim_{t \to T(e_0)} \psi(t, e_0) = 0$; then $T(e_0)$ is called the settling time;

(ii) for all $\varepsilon > 0$, there exists $\delta(\varepsilon) > 0$ such that for every $e_0 \in (B_{\|\cdot\|_2, n}(\delta(\varepsilon)) \setminus \{0\}) \cap U$, $e(t, e_0) \in B_{\|\cdot\|_2, n}(\varepsilon)$ for all $t \in [0, T(e_0))$.

When $U = D = \mathbb{R}^n$, the zero solution is said to be globally finite-time stable. Furthermore, if only (i) is fulfilled, the origin of system (20) is said to be finite-time attractive.
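The control strategy (15) is easy to implement componentwise. The sketch below is our illustration with generic gain arrays (all values are assumptions, not the paper's example); `e` and `e_tau` denote the current and delayed error vectors $v - u$:

```python
# Sketch of the feedback controller R_i(t) of Eq. (15).
# p, eta: per-neuron gain lists; K, delta: n-by-n gain matrices (lists of lists).
def sgn(x):
    return (x > 0) - (x < 0)

def controller(i, e, e_tau, p, eta, K, delta):
    """R_i = -p_i e_i - eta_i sgn(e_i) - sum_j K_ij sgn(e_j)
             - sum_j delta_ij sgn(e_i) |e_tau_j|."""
    n = len(e)
    r = -p[i] * e[i] - eta[i] * sgn(e[i])
    r -= sum(K[i][j] * sgn(e[j]) for j in range(n))
    r -= sum(delta[i][j] * sgn(e[i]) * abs(e_tau[j]) for j in range(n))
    return r
```

Intuitively, the $-p_i e_i$ term drives exponential decay, the sign terms dominate the bounded weight-mismatch terms (assumption (H6)), and the delayed terms dominate the delayed couplings (assumption (H7)).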

In order to prove that the error system (20) converges to the origin in finite time, the following lemmas are needed.

Lemma 1 [23]. Assume there exist a continuous, positive-definite function $V : D \to \mathbb{R}_+$, real numbers $\alpha > 0$, $0 < \eta < 1$, and an open neighborhood $U \subset D$ of the origin such that

$$\dot{V}(x) \le -\alpha V^{\eta}(x), \quad \forall x \in U \setminus \{0\}.$$

Then the origin of the system is finite-time stable, i.e., $V$ satisfies

$$V^{1-\eta}(x(t)) \le V^{1-\eta}(x_0) - \alpha(1-\eta)(t - t_0), \quad x_0 \in U,$$

with the settling time $T(x)$ given by

$$T(x) \le \frac{V^{1-\eta}(x_0)}{\alpha(1-\eta)}.$$

In addition, if $U = \mathbb{R}^n$ and $V$ is proper and radially unbounded, then the origin is globally finite-time stable.

Lemma 2 [24]. Suppose there is a Lyapunov function $V(x)$ defined on a neighborhood $U \subset \mathbb{R}^n$ of the origin such that

$$\dot{V}(x) \le -\alpha V^{\eta}(x) - \theta V(x), \quad \forall x \in U \setminus \{0\},$$

where $\alpha, \theta > 0$ and $0 < \eta < 1$ are constants. Then the origin of the system is finite-time stable, and the settling time $T(x)$ satisfies

$$T(x) \le \frac{\ln\big(1 + \frac{\theta}{\alpha} V^{1-\eta}(x_0)\big)}{\theta(1-\eta)}.$$

In addition, if $U = \mathbb{R}^n$ and $V$ is proper and radially unbounded, then the origin is globally finite-time stable.

Lemma 3 [10]. Suppose there is a Lyapunov function $V(x)$ defined on a neighborhood $U \subset \mathbb{R}^n$ of the origin such that

$$\dot{V}(x) \le -\alpha V^{\eta}(x) + \theta V(x), \quad \forall x \in U \setminus \{0\},$$

where $\alpha, \theta > 0$ and $0 < \eta < 1$ are constants. Then the origin of the system is finite-time stable. The set

$$N = \Big\{x \,\Big|\, V^{1-\eta}(x) < \frac{\alpha}{\theta}\Big\} \cap U$$

is contained in the domain of attraction of the origin, and the settling time satisfies

$$T(x) \le \frac{-\ln\big(1 - \frac{\theta}{\alpha} V^{1-\eta}(x_0)\big)}{\theta(1-\eta)}, \quad x_0 \in N.$$

In addition, if $U = \mathbb{R}^n$ and $V$ is proper and radially unbounded, then the origin is globally finite-time stable.

Lemma 4 [25]. Let $x_1, x_2, \ldots, x_n \in \mathbb{R}^n$ be any vectors and let $0 \le q < 2$ be a real number. Then

$$\|x_1\|^q + \|x_2\|^q + \cdots + \|x_n\|^q \ge \big(\|x_1\|^2 + \|x_2\|^2 + \cdots + \|x_n\|^2\big)^{q/2}.$$
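Lemma 4 is easy to spot-check numerically (an illustrative check of ours, not a proof):

```python
import math

# Numerical spot-check of Lemma 4: for 0 < q < 2,
#   sum_k ||x_k||^q  >=  (sum_k ||x_k||^2)^(q/2).
def lemma4_holds(vectors, q):
    norms = [math.sqrt(sum(c * c for c in v)) for v in vectors]
    lhs = sum(r ** q for r in norms)
    rhs = sum(r * r for r in norms) ** (q / 2.0)
    return lhs >= rhs - 1e-12
```

In the proof of Theorem 1 the lemma is applied with $q = 1$ and scalar components $e_i$, giving $\sum_{i=1}^{n} |e_i(t)| \ge \big(\sum_{i=1}^{n} e_i^2(t)\big)^{1/2} = \sqrt{2V(t)}$.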

3 Finite-time synchronization

In this section, by using the Lyapunov functional method, differential inclusion theory, and some inequality techniques, several sufficient conditions are obtained to ensure finite-time synchronization of memristor-based Cohen-Grossberg neural networks under the feedback controller $R_i(t)$. In the following, we give some assumptions on the system parameters and control strengths.

(H6): $\eta_i \ge \bar{a}_i \sum_{j=1}^{n} \big(|\hat{w}_{ij} - \check{w}_{ij}| + |\hat{c}_{ij} - \check{c}_{ij}|\big) M_j$ for any $i \in \mathcal{I}$.

(H7): $\delta_{ij}\, \underline{a}_j \ge C_{ij} L_j \bar{a}_j \bar{a}_i$ for any $i, j \in \mathcal{I}$.

Theorem 1. Suppose that assumptions (H1)-(H7) hold and $K$ is a positive definite matrix. If there exist constants $p_i > 0$ such that

$$\theta = 2 \min_{i \in \mathcal{I}}\Big(l_i \underline{a}_i + \frac{p_i \underline{a}_i}{\bar{a}_i}\Big) - \max_{i \in \mathcal{I}}\{L_i \bar{a}_i\}\, \lambda_{\max}(W + W^T) = 0,$$

then the origin of system (20) is globally finite-time stable with settling time

$$T(x) \le \frac{\sqrt{2}\, \max_{i \in \mathcal{I}}\{\bar{a}_i\}\, V^{1/2}(0)}{\lambda_{\min}(K)},$$

where $V(0) = \frac{1}{2}\sum_{i=1}^{n} e_i^2(0)$.

Proof. Since $b_i(u)$ and $h_i^{-1}(\lambda)$ are strictly monotonically increasing and differentiable, with $b_i(0) = 0$ and $h_i^{-1}(0) = 0$, the composite function $b_i(h_i^{-1}(\lambda))$ is strictly monotonically increasing and differentiable for $\lambda \in \mathbb{R}$. Hence

$$-\mathrm{sign}(e_i(t))\big[b_i(h_i^{-1}(y_i(t))) - b_i(h_i^{-1}(x_i(t)))\big] = -\mathrm{sign}(e_i(t))\, b_i'(\xi_1)\big(h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t))\big) = -\mathrm{sign}(e_i(t))\, b_i'(\xi_1)\,(h_i^{-1})'(\xi_2)\,\big(y_i(t) - x_i(t)\big) \le -l_i\,\underline{a}_i\,|e_i(t)|, \eqno(21)$$

where $\xi_1$ is between $h_i^{-1}(y_i(t))$ and $h_i^{-1}(x_i(t))$, and $\xi_2$ is between $y_i(t)$ and $x_i(t)$.

Similarly to (21), we can get

$$-\mathrm{sign}(e_i(t))\, \frac{p_i}{a_i(h_i^{-1}(y_i(t)))}\big(h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t))\big) \le -\frac{p_i \underline{a}_i}{\bar{a}_i}\, |e_i(t)|, \eqno(22)$$

$$-\mathrm{sign}(e_i(t))\, \frac{\eta_i}{a_i(h_i^{-1}(y_i(t)))}\, \mathrm{sign}\big(h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t))\big) \le -\frac{\eta_i}{\bar{a}_i}, \eqno(23)$$

and

$$-\mathrm{sign}(e_i(t))\, \frac{1}{a_i(h_i^{-1}(y_i(t)))} \sum_{j=1}^{n} \delta_{ij}\, \mathrm{sign}\big(h_i^{-1}(y_i(t)) - h_i^{-1}(x_i(t))\big)\, \big|h_j^{-1}(y_j(t-\tau_j(t))) - h_j^{-1}(x_j(t-\tau_j(t)))\big| \le -\frac{1}{\bar{a}_i} \sum_{j=1}^{n} \delta_{ij}\, \underline{a}_j\, |e_j(t-\tau_j(t))|. \eqno(24)$$

From (10), (11), (20), and under assumptions (H2), (H4), and (H5), we can get

$$\big|w_{ij}(h_i^{-1}(y_i(t))) f_j(h_j^{-1}(y_j(t))) - w_{ij}(h_i^{-1}(x_i(t))) f_j(h_j^{-1}(x_j(t)))\big| \le \big|w_{ij}(h_i^{-1}(y_i(t)))\big|\, \big|f_j(h_j^{-1}(y_j(t))) - f_j(h_j^{-1}(x_j(t)))\big| + \big|w_{ij}(h_i^{-1}(y_i(t))) - w_{ij}(h_i^{-1}(x_i(t)))\big|\, \big|f_j(h_j^{-1}(x_j(t)))\big| \le W_{ij} L_j \bar{a}_j\, |e_j(t)| + \big|\hat{w}_{ij} - \check{w}_{ij}\big|\, M_j. \eqno(25)$$

In a similar way, we can get

$$\big|c_{ij}(h_i^{-1}(y_i(t))) f_j(h_j^{-1}(y_j(t-\tau_j(t)))) - c_{ij}(h_i^{-1}(x_i(t))) f_j(h_j^{-1}(x_j(t-\tau_j(t))))\big| \le C_{ij} L_j \bar{a}_j\, |e_j(t-\tau_j(t))| + \big|\hat{c}_{ij} - \check{c}_{ij}\big|\, M_j. \eqno(26)$$

In the following, we define the Lyapunov functional

$$V(t) = \frac{1}{2} \sum_{i=1}^{n} e_i^2(t),$$

and calculate the upper right Dini derivative of $V(t)$ along the solutions of the error system (20).

$$\begin{aligned} D^+ V(t) = {} & \sum_{i=1}^{n} |e_i(t)|\,\mathrm{sign}(e_i(t))\, D^+ e_i(t) \\ \le {} & \sum_{i=1}^{n} |e_i(t)| \Big[ -\Big(l_i \underline{a}_i + \frac{p_i \underline{a}_i}{\bar{a}_i}\Big)|e_i(t)| + \sum_{j=1}^{n} L_j \bar{a}_j \big(W_{ij}|e_j(t)| + C_{ij}|e_j(t-\tau_j(t))|\big) - \frac{\eta_i}{\bar{a}_i} + \sum_{j=1}^{n}\big(|\hat{w}_{ij}-\check{w}_{ij}| + |\hat{c}_{ij}-\check{c}_{ij}|\big)M_j \\ & \qquad - \frac{1}{\bar{a}_i}\sum_{j=1}^{n} k_{ij}\,\mathrm{sign}(e_i(t))\,\mathrm{sign}\big(h_j^{-1}(y_j(t)) - h_j^{-1}(x_j(t))\big) - \frac{1}{\bar{a}_i}\sum_{j=1}^{n} \delta_{ij}\,\underline{a}_j\, |e_j(t-\tau_j(t))| \Big] \\ \le {} & \sum_{i=1}^{n} \Big[ -\Big(l_i \underline{a}_i + \frac{p_i \underline{a}_i}{\bar{a}_i}\Big)|e_i(t)|^2 + \sum_{j=1}^{n} L_j \bar{a}_j W_{ij}\, |e_i(t)|\,|e_j(t)| \Big] - \frac{1}{\max_{i \in \mathcal{I}}\{\bar{a}_i\}} \sum_{i=1}^{n}\sum_{j=1}^{n} |e_i(t)|\, k_{ij}\,\mathrm{sign}(e_i(t))\,\mathrm{sign}(e_j(t)) \\ \le {} & -\Big[2\min_{i \in \mathcal{I}}\Big(l_i \underline{a}_i + \frac{p_i \underline{a}_i}{\bar{a}_i}\Big) - \max_{i \in \mathcal{I}}\{L_i \bar{a}_i\}\, \lambda_{\max}(W + W^T)\Big] V(t) - \frac{\lambda_{\min}(K)}{\max_{i \in \mathcal{I}}\{\bar{a}_i\}} \sum_{i=1}^{n} |e_i(t)| \\ \le {} & -\theta V(t) - \frac{\sqrt{2}\, \lambda_{\min}(K)}{\max_{i \in \mathcal{I}}\{\bar{a}_i\}}\, V^{1/2}(t) = -\alpha V^{1/2}(t) - \theta V(t), \end{aligned} \eqno(27)$$

where $\alpha = \frac{\sqrt{2}\, \lambda_{\min}(K)}{\max_{i \in \mathcal{I}}\{\bar{a}_i\}}$ and $\theta = 2\min_{i \in \mathcal{I}}\big(l_i \underline{a}_i + \frac{p_i \underline{a}_i}{\bar{a}_i}\big) - \max_{i \in \mathcal{I}}\{L_i \bar{a}_i\}\, \lambda_{\max}(W + W^T)$. Here, assumptions (H6) and (H7) eliminate the weight-mismatch terms and the delayed terms, respectively; $\mathrm{sign}(h_j^{-1}(y_j(t)) - h_j^{-1}(x_j(t))) = \mathrm{sign}(e_j(t))$ by the strict monotonicity of $h_j^{-1}$; and the last inequality uses Lemma 4 with $q = 1$, i.e. $\sum_{i=1}^{n}|e_i(t)| \ge \big(\sum_{i=1}^{n} e_i^2(t)\big)^{1/2} = \sqrt{2V(t)}$.

If the constant $\theta = 0$, then $D^+ V(t) \le -\alpha V^{1/2}(t)$. By Lemma 1, the origin of system (20) is globally finite-time stable, and for any given $t_0$, $V(t)$ satisfies

$$V^{1/2}(t) \le V^{1/2}(t_0) - \frac{1}{2}\alpha(t - t_0), \quad t_0 \le t \le t_1,$$

and $V(t) \equiv 0$ for all $t \ge t_1$, with $t_1$ given by

$$t_1 \le t_0 + \frac{\sqrt{2}\, \max_{i \in \mathcal{I}}\{\bar{a}_i\}\, V^{1/2}(t_0)}{\lambda_{\min}(K)}.$$

Letting $t_0 = 0$, the conclusion of Theorem 1 holds, and the proof is completed. $\square$
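For $\theta = 0$ the comparison inequality $D^+ V \le -\alpha V^{1/2}$ can be integrated in closed form, which makes the settling-time bound of Theorem 1 easy to visualize (the numbers below are illustrative choices of ours):

```python
import math

# Closed-form solution of the scalar comparison equation dV/dt = -alpha*sqrt(V):
#   V(t) = (sqrt(V0) - alpha*t/2)^2 until it hits zero at T = 2*sqrt(V0)/alpha,
# which matches the bound of Theorem 1 once alpha = sqrt(2)*lambda_min(K)/max(a_bar).
def V(t, V0, alpha):
    r = math.sqrt(V0) - 0.5 * alpha * t
    return r * r if r > 0.0 else 0.0

def settling_time(V0, alpha):
    return 2.0 * math.sqrt(V0) / alpha
```

Unlike an exponentially stable system, $V(t)$ here is exactly zero after the finite instant $T$, not merely asymptotically small.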



Theorem 2. Suppose that assumptions (H1)-(H7) hold and $K$ is a positive definite matrix. If there exist constants $p_i > 0$ such that $\theta = 2\min_{i \in \mathcal{I}}\big(l_i \underline{a}_i + \frac{p_i \underline{a}_i}{\bar{a}_i}\big) - \max_{i \in \mathcal{I}}\{L_i \bar{a}_i\}\, \lambda_{\max}(W + W^T) > 0$, then the origin of system (20) is globally finite-time stable with settling time

$$T(x) \le \frac{2}{\theta} \ln\Big(1 + \frac{\theta\, \max_{i \in \mathcal{I}}\{\bar{a}_i\}}{\sqrt{2}\, \lambda_{\min}(K)}\, V^{1/2}(0)\Big).$$

Proof. Similarly to the proof of Theorem 1, if the constant $\theta > 0$, then $D^+ V(t) \le -\alpha V^{1/2}(t) - \theta V(t)$. By Lemma 2, the origin of system (20) is globally finite-time stable, and the settling time $t_1$ satisfies

$$t_1 \le \frac{2}{\theta} \ln\Big(1 + \frac{\theta\, \max_{i \in \mathcal{I}}\{\bar{a}_i\}}{\sqrt{2}\, \lambda_{\min}(K)}\, V^{1/2}(0)\Big). \qquad \square$$

Theorem 3. Suppose that assumptions (H1)-(H7) hold and $K$ is a positive definite matrix. If there exist constants $p_i > 0$ such that $\theta = 2\min_{i \in \mathcal{I}}\big(l_i \underline{a}_i + \frac{p_i \underline{a}_i}{\bar{a}_i}\big) - \max_{i \in \mathcal{I}}\{L_i \bar{a}_i\}\, \lambda_{\max}(W + W^T) < 0$, then the origin of system (20) is globally finite-time stable with settling time

$$T(x) \le \frac{2}{\theta} \ln\Big(1 + \frac{\theta\, \max_{i \in \mathcal{I}}\{\bar{a}_i\}}{\sqrt{2}\, \lambda_{\min}(K)}\, V^{1/2}(0)\Big).$$

Proof. Similarly to the proof of Theorem 1, if the constant $\theta < 0$, then $D^+ V(t) \le -\alpha V^{1/2}(t) + (-\theta) V(t)$. By Lemma 3, the origin of system (20) is globally finite-time stable, and the settling time $t_1$ satisfies

$$t_1 \le \frac{2}{\theta} \ln\Big(1 + \frac{\theta\, \max_{i \in \mathcal{I}}\{\bar{a}_i\}}{\sqrt{2}\, \lambda_{\min}(K)}\, V^{1/2}(0)\Big). \qquad \square$$
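The three settling-time bounds of Theorems 1-3 can be collected in one helper (a sketch of ours; here `V0` is $V(0)$, `alpha` is $\sqrt{2}\,\lambda_{\min}(K)/\max_i \bar{a}_i$, and `theta` is the constant $\theta$ of the theorems):

```python
import math

# Settling-time bounds of Theorems 1-3 (Lemmas 1-3 with eta = 1/2).
def settling_bound(V0, alpha, theta):
    if theta == 0.0:                       # Theorem 1
        return 2.0 * math.sqrt(V0) / alpha
    arg = 1.0 + theta * math.sqrt(V0) / alpha
    if arg <= 0.0:                         # theta < 0 and V0 too large (Lemma 3 region)
        raise ValueError("initial state outside the attraction region")
    return 2.0 / theta * math.log(arg)     # Theorems 2 (theta > 0) and 3 (theta < 0)
```

For $\theta > 0$ the extra $-\theta V$ term accelerates convergence, so the bound is tighter than the $\theta = 0$ bound; for $\theta < 0$ it is looser and is finite only while $1 + \theta V^{1/2}(0)/\alpha > 0$.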

In system (1), if the amplification functions are constants, $a_i(u_i(t)) = a_i$ for $t > 0$, then system (1) reduces to

$$\dot{u}_i(t) = -a_i\Big[b_i(u_i(t)) - \sum_{j=1}^{n} w_{ij}(u_i(t)) f_j(u_j(t)) - \sum_{j=1}^{n} c_{ij}(u_i(t)) f_j(u_j(t-\tau_j(t))) - I_i\Big], \quad i \in \mathcal{I}. \eqno(28)$$

Obviously, assumption (H2) is satisfied in this case. Accordingly, the response system takes the form

$$\dot{v}_i(t) = -a_i\Big[b_i(v_i(t)) - \sum_{j=1}^{n} w_{ij}(v_i(t)) f_j(v_j(t)) - \sum_{j=1}^{n} c_{ij}(v_i(t)) f_j(v_j(t-\tau_j(t))) - I_i\Big] + R_i(t), \quad i \in \mathcal{I}, \eqno(29)$$
where $R_i(t)$ is the controller (15). In this case, we use the following hypotheses, analogous to assumptions (H6) and (H7):

(H6'): $\eta_i \ge a_i \sum_{j=1}^{n} \big(|\hat{w}_{ij} - \check{w}_{ij}| + |\hat{c}_{ij} - \check{c}_{ij}|\big) M_j$ for any $i \in \mathcal{I}$.

(H7'): $\delta_{ij} \ge C_{ij} L_j a_i$ for any $i, j \in \mathcal{I}$.

From Theorems 1, 2, and 3, we can easily obtain the following corollaries.

Corollary 1. Assume (H1)-(H5), (H6'), and (H7') hold. If $K = kI$ is a scalar matrix with $k > 0$ and there exist $p_i > 0$ such that

$$\theta = 2\min_{i \in \mathcal{I}}(l_i a_i + p_i) - \max_{i \in \mathcal{I}}\{L_i a_i\}\, \lambda_{\max}(W + W^T) = 0,$$

then the zero solution of system (20) is globally finite-time stable, implying that the slave system (13) synchronizes with the master system (1) in finite time, with settling time

$$t_1 \le \frac{\sqrt{2}\, \max_{i \in \mathcal{I}}\{a_i\}\, V^{1/2}(0)}{k}.$$

Corollary 2. Assume (H1)-(H5), (H6'), and (H7') hold. If $K = kI$ is a scalar matrix with $k > 0$ and there exist $p_i > 0$ such that

$$\theta = 2\min_{i \in \mathcal{I}}(l_i a_i + p_i) - \max_{i \in \mathcal{I}}\{L_i a_i\}\, \lambda_{\max}(W + W^T) > 0,$$

then the zero solution of system (20) is globally finite-time stable, implying that the slave system (13) synchronizes with the master system (1) in finite time, with settling time

$$t_1 \le \frac{2}{\theta} \ln\Big(1 + \frac{\theta\, \max_{i \in \mathcal{I}}\{a_i\}}{\sqrt{2}\, k}\, V^{1/2}(0)\Big).$$

Corollary 3. Assume (H1)-(H5), (H6'), and (H7') hold. If $K = kI$ is a scalar matrix with $k > 0$ and there exist $p_i > 0$ such that

$$\theta = 2\min_{i \in \mathcal{I}}(l_i a_i + p_i) - \max_{i \in \mathcal{I}}\{L_i a_i\}\, \lambda_{\max}(W + W^T) < 0,$$

then the zero solution of system (20) is globally finite-time stable, implying that the slave system (13) synchronizes with the master system (1) in finite time, with settling time

$$t_1 \le \frac{2}{\theta} \ln\Big(1 + \frac{\theta\, \max_{i \in \mathcal{I}}\{a_i\}}{\sqrt{2}\, k}\, V^{1/2}(0)\Big).$$
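Before the full numerical example of Section 4, a minimal Euler-discretized simulation of a constant-amplification master-slave pair of type (28)-(29) under the controller (15) with $K = kI$ already shows the error being driven to zero. Everything below (dimensions, weights, gains, delay) is an illustrative choice of ours, not the paper's example:

```python
import math

# Euler simulation of a 2-neuron master-slave pair of type (28)-(29):
# a_i = 1, b_i(u) = u, f_j = tanh, I_i = 0, constant delay,
# memristive weights switching at |u| = 1, controller (15) with K = kI.
def sgn(x):
    return (x > 0) - (x < 0)

def mw(u, pair):                      # memristive weight: (hat, check) pair
    return pair[0] if abs(u) < 1.0 else pair[1]

def simulate(steps=3000, dt=0.001, tau_steps=50):
    n = 2
    W = [[(0.5, 0.4), (-0.3, -0.2)], [(0.2, 0.1), (0.4, 0.3)]]
    C = [[(0.1, 0.05), (0.1, 0.05)], [(0.05, 0.1), (0.1, 0.05)]]
    p, eta, k, delta = 5.0, 2.0, 1.0, 1.0
    u, v = [0.2, -0.3], [1.0, 0.5]
    hu = [list(u) for _ in range(tau_steps + 1)]   # constant initial history
    hv = [list(v) for _ in range(tau_steps + 1)]
    for _ in range(steps):
        ud, vd = hu[0], hv[0]                      # delayed states
        e = [v[m] - u[m] for m in range(n)]
        ed = [vd[m] - ud[m] for m in range(n)]
        un, vn = list(u), list(v)
        for i in range(n):
            su = sum(mw(u[i], W[i][j]) * math.tanh(u[j])
                     + mw(u[i], C[i][j]) * math.tanh(ud[j]) for j in range(n))
            sv = sum(mw(v[i], W[i][j]) * math.tanh(v[j])
                     + mw(v[i], C[i][j]) * math.tanh(vd[j]) for j in range(n))
            R = (-p * e[i] - (eta + k) * sgn(e[i])
                 - delta * sgn(e[i]) * sum(abs(ed[j]) for j in range(n)))
            un[i] = u[i] + dt * (-u[i] + su)
            vn[i] = v[i] + dt * (-v[i] + sv + R)
        u, v = un, vn
        hu.pop(0); hu.append(list(u))
        hv.pop(0); hv.append(list(v))
    return math.sqrt(sum((v[m] - u[m]) ** 2 for m in range(n)))
```

With these gains the error norm collapses from about 1.13 to the chattering level of the explicit Euler discretization; the residual ripple is an artifact of discretizing the sign terms, not of the continuous-time theory.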

When $a_i(u_i(t)) = 1$, analogously to assumptions (H6) and (H7), we obtain assumptions (H6'') and (H7''), stated as follows:

(H6''): $\eta_i \ge \sum_{j=1}^{n} \big(|\hat{w}_{ij} - \check{w}_{ij}| + |\hat{c}_{ij} - \check{c}_{ij}|\big) M_j$ for any $i \in \mathcal{I}$.

(H7''): $\delta_{ij} \ge C_{ij} L_j$ for any $i, j \in \mathcal{I}$.

According to Theorems 1, 2, and 3, we can easily derive the following corollaries.

Corollary 4. Assume (H1)-(H5), (H6''), and (H7'') hold. If $K = kI$ is a scalar matrix with $k > 0$ and there exist $p_i > 0$ such that

$$\theta = 2\min_{i \in \mathcal{I}}(l_i + p_i) - \max_{i \in \mathcal{I}}\{L_i\}\, \lambda_{\max}(W + W^T) = 0,$$

then the zero solution of system (20) is globally finite-time stable, implying that the slave system (13) synchronizes with the master system (1) in finite time, with settling time

$$t_1 \le \frac{\sqrt{2}\, V^{1/2}(0)}{k}.$$

Corollary 5. Assume (H1)-(H5), (H6''), and (H7'') hold. If $K = kI$ is a scalar matrix with $k > 0$ and there exist $p_i > 0$ such that

$$\theta = 2\min_{i \in \mathcal{I}}(l_i + p_i) - \max_{i \in \mathcal{I}}\{L_i\}\, \lambda_{\max}(W + W^T) > 0,$$

then the zero solution of system (20) is globally finite-time stable, implying that the slave system (13) synchronizes with the master system (1) in finite time, with settling time

$$t_1 \le \frac{2}{\theta} \ln\Big(1 + \frac{\theta}{\sqrt{2}\, k}\, V^{1/2}(0)\Big).$$

Corollary 6. Assume (H1)-(H5), (H6''), and (H7'') hold. If $K = kI$ is a scalar matrix with $k > 0$ and there exist $p_i > 0$ such that

$$\theta = 2\min_{i \in \mathcal{I}}(l_i + p_i) - \max_{i \in \mathcal{I}}\{L_i\}\, \lambda_{\max}(W + W^T) < 0,$$

then the zero solution of system (20) is globally finite-time stable, implying that the slave system (13) synchronizes with the master system (1) in finite time, with settling time

$$t_1 \le \frac{2}{\theta} \ln\Big(1 + \frac{\theta}{\sqrt{2}\, k}\, V^{1/2}(0)\Big).$$

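To see how these settling-time bounds are evaluated numerically, the following sketch (Python/NumPy) computes $\theta$ and the corresponding bound for the case $a_i \equiv 1$ of the corollaries above. The parameter values and the helper name `settling_time_bound` are illustrative choices, not taken from the paper.

```python
import numpy as np

def settling_time_bound(l, p, L, W, k, V0):
    """Settling-time bound in the spirit of Corollaries 4-6 (case a_i = 1).

    l, p, L : per-neuron constants l_i, p_i, L_i (1-D arrays)
    W       : coupling matrix; k : gain of the scalar feedback matrix K = k*I
    V0      : initial value V(x_0) of the Lyapunov function
    """
    lam_max = np.max(np.linalg.eigvalsh(W.T + W))        # W^T + W is symmetric
    theta = 2.0 * np.min(l + p) - np.max(L) * lam_max
    if np.isclose(theta, 0.0):                           # Corollary 4
        return np.sqrt(2.0) * np.sqrt(V0) / k
    # Corollaries 5 (theta > 0) and 6 (theta < 0) share one formula
    return 2.0 * np.log(1.0 + theta / (np.sqrt(2.0) * k) * np.sqrt(V0)) / theta

# Hypothetical data, chosen in the spirit of the numerical example in Section 4
l = np.array([1.8, 1.6]); p = np.array([8.0, 7.0]); L = np.array([1.0, 1.0])
W = np.array([[2.0, 0.3], [2.5, 0.6]])
t1 = settling_time_bound(l, p, L, W, k=1.0, V0=1.0)
```

Note that for $\theta < 0$ the logarithm is only defined while its argument stays positive, which is exactly the condition under which Corollary 6 provides a finite bound.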
Remark 1. When $a_i(\cdot) \equiv 1$, the memristor-based Cohen-Grossberg neural networks in this paper reduce to the memristor-based recurrent neural networks studied in [26]. When $w_{ij}(u_i(t))$ and $c_{ij}(u_i(t))$ are constants, model (1) becomes the system studied in [27]. When $a_i(\cdot) \equiv 1$ and $w_{ij}(u_i(t))$, $c_{ij}(u_i(t))$ are constants, model (1) reduces to the model considered in [17]. Therefore, the model in this paper is quite general.

Remark 2. How to deal with a general amplification function $a_i(\cdot)$ is a key problem in studying the synchronization of Cohen-Grossberg neural networks. Synchronization of Cohen-Grossberg neural networks with a constant amplification function $a_i(\cdot)$ has been extensively investigated, e.g., in [27, 28]. However, there are few results on synchronization of Cohen-Grossberg neural networks with a time-varying amplification function $a_i(\cdot)$. In this paper, this issue is addressed and sufficient conditions are derived by designing a feedback controller.

Remark 3. In [22], Jiang and Wang studied finite-time synchronization control of a class of memristor-based recurrent neural networks and obtained the following differential inclusion:
$$\dot{e}_i(t) \in -e_i(t) + \sum_{j=1}^{n}\,[\underline{a}_{ij}, \bar{a}_{ij}]\,f_j(e_j(t)) + u_i(t).$$
Unfortunately, as pointed out in [5], the above formula does not always hold. In our paper, however, this formula is not needed. Hence, in some sense, the results derived here are more reasonable.

Remark 4. Compared with previous works, our paper has two advantages. First, based on differential inclusions and the derivative theorem for inverse functions, we transform the memristor-based Cohen-Grossberg neural networks (1) and (13) into (12) and (19), respectively, and then obtain finite-time synchronization criteria between (1) and (13). In this way, we do not need special assumptions such as
$$\dot{\tau}_j(t) \le \varepsilon < 1, \qquad |a_i(u_i)-a_i(v_i)| \le N_i|u_i-v_i|, \qquad \frac{a_i(u_i)b_i(u_i)-a_i(v_i)b_i(v_i)}{u_i-v_i} \ge \gamma_i,$$
where $\varepsilon$, $N_i$, $\gamma_i$ are constants, which are required in [28, 29]. Moreover, the Lyapunov functions conventionally used to study the stability and synchronization of Cohen-Grossberg neural networks are relatively complicated in many existing papers, whereas in our paper a simple Lyapunov function suffices to prove synchronization, thanks to the transformation technique involved.

Remark 5. In Theorems 1, 2 and 3, the finite-time synchronization of a class of Cohen-Grossberg neural networks with time-varying delays is investigated. In practical engineering, it is generally more desirable that the synchronization objective be realized in finite time rather than merely asymptotically. Thus, the results of this paper extend and improve the previous results in [27, 28, 29].

4 Numerical Simulations

In this section, a chaotic network is presented to demonstrate the effectiveness of the results obtained in this paper.

Example. Consider the following memristor-based neural network model with time-varying delays:
$$\dot{u}_i(t) = -a_i(u_i(t))\Big[b_i(u_i(t)) - \sum_{j=1}^{2} w_{ij}(u_i(t))\,f_j(u_j(t)) - \sum_{j=1}^{2} c_{ij}(u_i(t))\,f_j(u_j(t-\tau_j(t))) - I_i\Big], \quad i=1,2, \tag{30}$$

where
$$w_{11}(u_1)=\begin{cases}2, & |u_1(t)|<0.4,\\ 1.5, & |u_1(t)|\ge 0.4,\end{cases}\qquad
w_{12}(u_1)=\begin{cases}-0.1, & |u_1(t)|<0.4,\\ -0.3, & |u_1(t)|\ge 0.4,\end{cases}$$
$$w_{21}(u_2)=\begin{cases}-2.5, & |u_2(t)|<0.4,\\ -2, & |u_2(t)|\ge 0.4,\end{cases}\qquad
w_{22}(u_2)=\begin{cases}0.4, & |u_2(t)|<0.4,\\ 0.6, & |u_2(t)|\ge 0.4,\end{cases}$$
$$c_{11}(u_1)=\begin{cases}-1.5, & |u_1(t)|<0.4,\\ -1.2, & |u_1(t)|\ge 0.4,\end{cases}\qquad
c_{12}(u_1)=\begin{cases}-0.6, & |u_1(t)|<0.4,\\ -0.4, & |u_1(t)|\ge 0.4,\end{cases}$$
$$c_{21}(u_2)=\begin{cases}0.5, & |u_2(t)|<0.4,\\ 0.8, & |u_2(t)|\ge 0.4,\end{cases}\qquad
c_{22}(u_2)=\begin{cases}-2.5, & |u_2(t)|<0.4,\\ -2.3, & |u_2(t)|\ge 0.4.\end{cases}$$
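Such state-dependent weights are straightforward to encode as switching functions. The sketch below (Python; the helper name `memristive_weight` is a hypothetical choice, not from the paper) builds all eight weights of system (30) from one factory and makes the 0.4 switching threshold explicit:

```python
def memristive_weight(inner, outer, threshold=0.4):
    """Weight that takes `inner` when |u| < threshold and `outer` otherwise."""
    return lambda u: inner if abs(u) < threshold else outer

# The eight memristive weights of system (30); w_ij and c_ij switch on u_i
w11 = memristive_weight(2.0, 1.5);   w12 = memristive_weight(-0.1, -0.3)
w21 = memristive_weight(-2.5, -2.0); w22 = memristive_weight(0.4, 0.6)
c11 = memristive_weight(-1.5, -1.2); c12 = memristive_weight(-0.6, -0.4)
c21 = memristive_weight(0.5, 0.8);   c22 = memristive_weight(-2.5, -2.3)
```

Note that each weight in row $i$ depends only on its own neuron state $u_i$, exactly as in the definitions above.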

We take system (30) as the drive system, and the corresponding response system is defined as in (13) with $i=1,2$. Let $a_i(u_i) = 1 + \dfrac{0.3}{2+\tanh u_i}$, $i=1,2$, $b_1(u_1)=1.8u_1$, $b_2(u_2)=1.6u_2$, $\tau_1(t)=\tau_2(t)=\dfrac{e^t}{1+e^t}$, and take the activation functions $f_i(u_i)=\tanh(u_i(t))$, $i=1,2$, with $I_1=I_2=0$. Obviously, $0 \le \tau_1(t), \tau_2(t) \le 1$, $1.1 \le a_i(u_i) \le 1.3$, $b_1'(u_1)=1.8$, $b_2'(u_2)=1.6$, $b_i(0)=0$, $|f_i(u_i)-f_i(v_i)| \le |u_i-v_i|$, $|f_i(u_i)| \le 1$, $i,j=1,2$. Therefore, $(H_1)$–$(H_5)$ hold for system (30). With the initial conditions $u_1(s)=0.6$, $u_2(s)=0.4$ for all $s\in[-1,0)$, model (30) has a chaotic attractor, as shown in Fig. 1. Figs. 2 and 3 depict the state variables $u_1(t), v_1(t)$ of system (30) and $u_2(t), v_2(t)$ of its response system, respectively.

By simple computation, we get $\bar{w}_{11}=2$, $\bar{w}_{12}=0.3$, $\bar{w}_{21}=2.5$, $\bar{w}_{22}=0.6$, $\bar{c}_{11}=1.5$, $\bar{c}_{12}=0.6$, $\bar{c}_{21}=0.8$, $\bar{c}_{22}=2.5$, $\bar{a}_1=\bar{a}_2=1.3$, $\underline{a}_1=\underline{a}_2=1.1$, $l_1=1.8$, $l_2=1.6$, $\tau=1$, $L_1=L_2=1$, $M_1=M_2=1$, $\eta_1\ge 1.56$, $\eta_2\ge 1.56$, $\delta_{11}\ge 2.3045$, $\delta_{12}\ge 0.9218$, $\delta_{21}\ge 1.2291$, $\delta_{22}\ge 3.8409$, $p_i\ge 2.3220$. Now, let $p_1=8$, $p_2=7$, $\eta_1=\eta_2=1.6$, $k_{11}=1$, $k_{12}=k_{21}=0$, $k_{22}=0.8$, $\delta_{11}=2.4$, $\delta_{12}=0.93$, $\delta_{21}=2.5$, $\delta_{22}=4$. Then conditions $(H_6)$–$(H_7)$ are satisfied. Therefore, according to Theorem 1, system (30) and the corresponding response system achieve finite-time synchronization under the feedback control (15) within the finite time $t_1 \le 14.5935$. The synchronization trajectories of $u_1(t), v_1(t)$ and $u_2(t), v_2(t)$ are shown in Figs. 4 and 5, respectively, and the synchronization error curve $\|e(t)\|$ is presented in Fig. 6.
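A minimal way to reproduce a drive trajectory of (30) is a fixed-step Euler scheme with a history buffer for the delayed term. The sketch below (Python/NumPy) uses illustrative numerical choices (step size `dt`, horizon `T`, rounding of the delayed index) that are not taken from the paper:

```python
import numpy as np

def W(u):   # memristive weights w_ij(u_i), switching at |u_i| = 0.4
    return np.array([[2.0 if abs(u[0]) < 0.4 else 1.5,
                      -0.1 if abs(u[0]) < 0.4 else -0.3],
                     [-2.5 if abs(u[1]) < 0.4 else -2.0,
                      0.4 if abs(u[1]) < 0.4 else 0.6]])

def C(u):   # memristive weights c_ij(u_i)
    return np.array([[-1.5 if abs(u[0]) < 0.4 else -1.2,
                      -0.6 if abs(u[0]) < 0.4 else -0.4],
                     [0.5 if abs(u[1]) < 0.4 else 0.8,
                      -2.5 if abs(u[1]) < 0.4 else -2.3]])

a = lambda u: 1.0 + 0.3 / (2.0 + np.tanh(u))     # amplification, in [1.1, 1.3]
b = lambda u: np.array([1.8, 1.6]) * u           # behaved functions b_i(u_i)
f = np.tanh                                      # activation f_i
tau = lambda t: 1.0 / (1.0 + np.exp(-t))         # delay e^t/(1+e^t), in (0, 1)

dt, T = 0.001, 20.0
steps = int(T / dt)
hist = int(1.0 / dt)                              # history covers tau(t) <= 1
u = np.zeros((hist + steps + 1, 2))
u[:hist + 1] = [0.6, 0.4]                         # u(s) = (0.6, 0.4) on [-1, 0]

for k in range(steps):
    t = k * dt
    ut = u[hist + k]
    kd = hist + k - int(round(tau(t) / dt))       # index of u(t - tau(t))
    ud = u[max(kd, 0)]
    du = -a(ut) * (b(ut) - W(ut) @ f(ut) - C(ut) @ f(ud))   # I_i = 0
    u[hist + k + 1] = ut + dt * du
```

The response system under the feedback control (15) could be integrated in the same loop; only the drive trajectory, whose phase portrait should resemble the attractor of Fig. 1, is generated here.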


Fig. 1. The chaotic attractor of memristive neural networks (30).

Fig. 2. The states of u1 (t) and v1 (t).

Fig. 3. The states of u2 (t) and v2 (t).


Fig. 4. State trajectories of u1(t) and v1(t) under the feedback control (15).

Fig. 5. State trajectories of u2(t) and v2(t) under the feedback control (15).

Fig. 6. Synchronization error $\|e(t)\| = |v_1 - u_1| + |v_2 - u_2|$.

5 Conclusion

In this paper, the finite-time synchronization of memristor-based Cohen-Grossberg neural networks with time-varying delays has been studied. In particular, several new sufficient conditions guaranteeing the finite-time synchronization of memristor-based Cohen-Grossberg neural networks are obtained via a feedback controller. Evidently, compared with the corresponding previous works (see [30, 31]), our results are more general and less conservative. Finally, some numerical simulations are presented to illustrate the effectiveness of the proposed theory.

Acknowledgements

This work was supported by the National Natural Science Foundation of P.R. China (Grants No. 61164004, No. 61473244, No. 11402223).

References

[1] L. Chua, Memristor - the missing circuit element, IEEE Trans. Circuits Syst. 18 (1971) 507-519.
[2] D. Strukov, G. Snider, D. Stewart, R. Williams, The missing memristor found, Nature 453 (2008) 80-83.
[3] A. Filippov, Differential Equations with Discontinuous Right-Hand Sides, Kluwer, Dordrecht, 1988.
[4] J. Aubin, A. Cellina, Differential Inclusions, Springer-Verlag, Berlin, Germany, 1984.
[5] N. Li, J. Cao, New synchronization criteria for memristor-based networks: Adaptive control and feedback control schemes, Neural Netw. 61 (2015) 1-9.
[6] G. Zhang, Y. Shen, C. Xu, Global exponential stability in a Lagrange sense for memristive recurrent neural networks with time-varying delays, Neurocomputing 149 (2015) 1330-1336.
[7] J. Chen, Z. Zeng, P. Jiang, Global Mittag-Leffler stability and synchronization of memristor-based fractional-order neural networks, Neural Netw. 51 (2014) 1-8.
[8] H. Zhang, Y. Xie, Z. Wang, Adaptive synchronization between two different chaotic neural networks with time delay, IEEE Trans. Neural Netw. 18(6) (2007) 1841-1845.
[9] Q. Wang, Q. Lu, Int. J. Non-Linear Mech. (2009). doi:10.1016/j.ijnonlinmec.2009.01.001.
[10] Y. Shen, X. Xia, Semi-global finite-time observers for nonlinear systems, Automatica 44 (2008) 3152-3156.
[11] C. Li, X. Liao, Complete and lag synchronization of hyperchaotic systems using small impulses, Chaos Solitons Fractals 22(4) (2004) 857-867.
[12] Y. Yang, J. Cao, Exponential lag synchronization of a class of chaotic delayed neural networks with impulsive effects, Phys. A 386(1) (2007) 492-502.
[13] W. Ding, M. Han, M. Li, Exponential lag synchronization of delayed fuzzy cellular neural networks with impulses, Phys. Lett. A 373(8) (2009) 832-837.
[14] L. Tang, D. Li, H. Wang, Lag synchronization for fuzzy chaotic system based on fuzzy observer, Appl. Math. Mech. 30 (2009) 803-810.
[15] G. Zhang, Y. Shen, Exponential synchronization of delayed memristor-based chaotic neural networks via periodically intermittent control, Neural Netw. 55 (2014) 1-10.
[16] C. Hu, J. Yu, H. Jiang, Exponential lag synchronization for neural networks with mixed delays via periodically intermittent control, Chaos 20 (2010) 023108.
[17] J. Wang, H. Jiang, C. Hu, T. Ma, Convergence behavior of delayed discrete cellular neural networks without periodic coefficients, Neural Netw. 53(17) (2014) 61-68.
[18] A. Wu, S. Wen, Z. Zeng, Synchronization control of a class of memristor-based recurrent neural networks, Inf. Sci. 183(1) (2012) 106-116.
[19] G. Zhang, Y. Shen, J. Sun, Global exponential stability of a class of memristor-based recurrent neural networks with time-varying delays, Neurocomputing 97 (2012) 149-154.
[20] Z. Cai, L. Huang, Functional differential inclusions and dynamic behaviors for memristor-based BAM neural networks with time-varying delays, Commun. Nonlinear Sci. Numer. Simulat. 19 (2014) 1279-1300.
[21] J. Aubin, H. Frankowska, Set-Valued Analysis, Birkhäuser, Boston, 1990.
[22] M. Jiang, S. Wang, J. Mei, Y. Shen, Finite-time synchronization control of a class of memristor-based recurrent neural networks, Neural Netw. 63 (2015) 133-140.
[23] X. Yang, J. Cao, Finite-time stochastic synchronization of complex networks, Appl. Math. Model. 34 (2010) 3631-3641.
[24] Y. Shen, Y. Huang, Uniformly observable and globally Lipschitzian nonlinear systems admit global finite-time observers, IEEE Trans. Autom. Control 54 (2009) 2621-2625.
[25] J. Mei, M. Jiang, B. Wang, B. Long, Finite-time parameter identification and adaptive synchronization between two chaotic neural networks, J. Frankl. Inst. 350 (2013) 1617-1633.
[26] A. Wu, Z. Zeng, Dynamic behaviors of memristor-based recurrent neural networks with time-varying delays, Neural Netw. 36 (2012) 1-10.
[27] J. Yu, C. Hu, H. Jiang, Z. Teng, Exponential synchronization of Cohen-Grossberg neural networks via periodically intermittent control, Neurocomputing 74 (2011) 1776-1782.
[28] Q. Zhu, J. Cao, Adaptive synchronization of chaotic Cohen-Grossberg neural networks with mixed time delays, Nonlinear Dyn. 61 (2010) 517.
[29] Q. Gan, Adaptive synchronization of Cohen-Grossberg neural networks with unknown parameters and mixed time-varying delays, Commun. Nonlinear Sci. Numer. Simulat. 17(7) (2012) 3040-3049.
[30] X. Yang, J. Cao, W. Yu, Exponential synchronization of memristive Cohen-Grossberg neural networks with mixed delays, Cogn. Neurodyn. 8 (2014) 239-249.
[31] Q. Liu, S. Zhang, Adaptive lag synchronization of chaotic Cohen-Grossberg neural networks with discrete delays, Chaos 22(3) (2012) 033123.