Stability analysis of fractional-order Hopfield neural networks with discontinuous activation functions




Neurocomputing (2015), in press. Contents lists available at ScienceDirect. Journal homepage: www.elsevier.com/locate/neucom

Shuo Zhang, Yongguang Yu*, Qing Wang
Department of Mathematics, Beijing Jiaotong University, Beijing 100044, PR China

Article info

Abstract

Article history: Received 5 March 2015; received in revised form 7 July 2015; accepted 18 July 2015. Communicated by Lixian Zhang.

Fractional-order Hopfield neural networks are often used to model the information processing of neuronal interactions. For a class of such networks with discontinuous activation functions, the existence and stability conditions of their solutions need to be investigated. Under the framework of Filippov solutions, a growth condition is first given to guarantee the existence of solutions. Then, some sufficient conditions for the boundedness and stability of the solutions of such discontinuous networks are proposed by employing Lyapunov functionals. Finally, a numerical example is presented to demonstrate the effectiveness of the theoretical results. © 2015 Elsevier B.V. All rights reserved.

Keywords: Fractional-order; Neural networks; Discontinuous activation functions; Stability; Filippov solution

1. Introduction

In the past decade, neural networks have found extensive applications in optimization, classification, the solution of nonlinear algebraic equations, signal and image processing, pattern recognition, automatic control, associative memories, and many other areas [1-3]. Considerable attention has therefore been paid to the recurrently connected neural networks introduced by Cohen, Grossberg, and Hopfield [4-6]. Hopfield neural networks and their various generalizations in particular have been a hot topic, because of their ability to handle computational and optimization problems and their significance for hardware implementation, and many papers have analyzed the dynamical properties of integer-order Hopfield neural networks [7-11]. In the literature [7-11], the activation functions of the studied models are assumed to be continuous or even Lipschitz continuous. However, it is well known that network models with discontinuous activation functions may be more realistic when designing and implementing an artificial neural network. For instance, in Ref. [5], the standard assumption for the classical Hopfield neural networks is that the activations are used in the high-gain limit, where they closely approach discontinuous comparator functions. Recently, neural networks with discontinuous activation functions have received a great deal of attention,

(Footnote: Supported by the National Nature Science Foundation of China (No. 11371049) and the Fundamental Research Funds for the Central Universities (No. 2015YJS174). Corresponding author: Y. Yu, e-mail [email protected].)

due to their many engineering applications, such as dry friction, impacting machines, power circuits, and switching in electronic circuits. The global stability of a neural network modeled by a differential equation with discontinuous activation functions was first studied by Forti and Nistri [12], based on a Lyapunov diagonally stable matrix and the construction of a suitable Lyapunov function. Under the assumption that the activation functions are nondecreasing, the global exponential stability and global convergence of the network system were analyzed in Refs. [13,14]. Further, without assuming boundedness or monotonicity of the activation functions, the dynamical behavior of discontinuous neural networks was discussed using Lyapunov stability theory in Refs. [15,16]. Note that all the works mentioned above study only integer-order models of neural networks. Fractional calculus was a purely mathematical notion for over three centuries. In the past two decades, many researchers have pointed out that fractional-order derivatives and integrals are well suited to both theoretical and applied aspects of numerous branches of science and engineering, such as electromagnetic waves, dielectric polarization, viscoelastic systems, colored noise, and finance [17-19]. Compared with classical integer-order models, fractional-order derivatives provide an excellent instrument for describing the memory and hereditary properties of various materials and processes. In the neural setting, replacing the common capacitor of the continuous-time integer-order Hopfield neural network by the fractance gives rise to the so-called fractional-order Hopfield neural network model [20]. Fractional-order neural models offer a better description of the memory

http://dx.doi.org/10.1016/j.neucom.2015.07.077 0925-2312/© 2015 Elsevier B.V. All rights reserved.

Please cite this article as: S. Zhang, et al., Stability analysis of fractional-order Hopfield neural networks with discontinuous activation functions, Neurocomputing (2015), http://dx.doi.org/10.1016/j.neucom.2015.07.077


and hereditary properties of various processes than integer-order ones, as well as a fundamental and general computational ability that can contribute to efficient information processing, stimulus anticipation, and frequency-independent phase shifts of oscillatory neuronal firing [21]. For the stability analysis of fractional-order Hopfield neural networks, several important results have been reported using different analytical methods, such as the Laplace transform, the linear stability theory of fractional-order systems, and the second method of Lyapunov [22-24]. Nevertheless, all networks in the studied fractional-order models [22-24] are continuous. To the best of our knowledge, there are few works on the dynamics of fractional-order neural networks with discontinuous activation functions. In Ref. [25], the authors used a Lyapunov method to study global Mittag-Leffler stability and synchronization of memristor-based fractional-order neural networks; in their model, the memristive connection weights are discontinuous, but the activation functions are still continuous.

Motivated by the above discussion, the stability of fractional-order Hopfield neural networks with discontinuous activation functions is investigated here. To ensure the existence of solutions of such networks, a growth condition is proposed. Then, using Lyapunov methods, sufficient conditions are presented for the boundedness and stability of the studied networks. Moreover, the uniqueness of the equilibrium point of the network system is discussed under some given conditions.

The rest of the paper is organized as follows. Section 2 gives some preliminaries, including the fractional-order Hopfield neural network model and Mittag-Leffler stability for fractional-order differential equations. Section 3 analyzes the stability of fractional-order Hopfield neural networks with discontinuous activation functions. A numerical example in Section 4 shows the effectiveness of our results. Finally, the paper is concluded in Section 5.

2. Preliminaries

In this section, we recall some notions and lemmas on fractional calculus and on the discontinuous Hopfield neural network model. The following notation is used throughout the paper. For a function $f(t)$, its one-sided limits are $f(t^{+}) \triangleq \lim_{\tau\to t^{+}} f(\tau)$ and $f(t^{-}) \triangleq \lim_{\tau\to t^{-}} f(\tau)$. $\|\cdot\|$ denotes an arbitrary norm and $\|\cdot\|_{p}$ the $p$-norm. $\overline{\mathrm{co}}[\cdot]$ denotes the convex closure of a set, $\Gamma(\cdot)$ the Gamma function, and $\mathcal{L}\{\cdot\}$ the Laplace transform. The sign function is
$$\mathrm{sgn}(x)=\begin{cases}1, & x>0,\\ 0, & x=0,\\ -1, & x<0.\end{cases}$$
The Caputo fractional-order differential operator ${}_{t_0}D_t^{\alpha}$ and the Mittag-Leffler functions $E_{\alpha,\beta}(\cdot)$, $E_{\alpha}(\cdot)$ are defined in the remainder of this section.

2.1. Fractional-order Hopfield neural network model

In nonlinear science, fractional calculus plays an important role, and there are three common definitions of the fractional differential operator: the Grunwald-Letnikov, Riemann-Liouville, and Caputo definitions [26]. We discuss only the Caputo derivative in this paper, because its initial conditions coincide with those of integer-order derivatives, which are well understood in physical situations and more applicable to real-world problems.

Definition 1 (Caputo fractional-order derivative). The Caputo derivative of order $\alpha$ of a function $f(t)\in C^{n+1}([t_0,+\infty),\mathbb{R})$ is defined as
$${}_{t_0}D_t^{\alpha}f(t)=\frac{1}{\Gamma(n-\alpha)}\int_{t_0}^{t}\frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha+1-n}}\,d\tau, \qquad (1)$$
where $\alpha>0$ and $n$ is a positive integer such that $n-1<\alpha\le n$. The Laplace transform of the Caputo derivative is
$$\mathcal{L}\{{}_{t_0}D_t^{\alpha}f(t);s\}=s^{\alpha}F(s)-\sum_{k=0}^{n-1}s^{\alpha-k-1}f^{(k)}(t_0), \qquad n-1<\alpha\le n,$$
where $s$ is the variable in the Laplace domain. Some properties of the Caputo derivative are as follows.

Property 1. ${}_{t_0}D_t^{\alpha}C=0$ holds, where $C$ is any constant.

Property 2. For constants $\mu$ and $\nu$, the Caputo derivative is linear:
$${}_{t_0}D_t^{\alpha}(\mu f(t)+\nu g(t))=\mu\,{}_{t_0}D_t^{\alpha}f(t)+\nu\,{}_{t_0}D_t^{\alpha}g(t).$$

In this paper, we consider the fractional-order Hopfield neural networks described by
$${}_{0}D_t^{\alpha}x(t)=-Ax(t)+Bf(x(t))+w, \qquad (2)$$
where $0<\alpha<1$ is the fractional order; $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^{T}\in\mathbb{R}^{n}$ is the state vector associated with the neurons; $A=\mathrm{diag}\{a_1,a_2,\ldots,a_n\}$ is an $n\times n$ diagonal matrix with charging rates $a_i>0$, $i=1,2,\ldots,n$; $B=(b_{ij})_{n\times n}$ is the connection weight matrix; $f(x(t))=(f_1(x_1),f_2(x_2),\ldots,f_n(x_n))^{T}\in\mathbb{R}^{n}$, where $f_i(x_i)$ is the activation function of the $i$th neuron; and the constant vector $w=(w_1,w_2,\ldots,w_n)^{T}$ is the external input.

For networks (2), we consider all activation functions $f_i(\cdot)$, $i=1,2,\ldots,n$, to be discontinuous. Each $f_i(\cdot)$ is continuously differentiable except on a countable set of isolated points $\{t_k^{i}\}$, where the left and right limits $f_i(t_k^{i-})$ and $f_i(t_k^{i+})$ exist. Since each $f_i(\cdot)$ is discontinuous, the networks (2) admit no solution in the classical sense. Filippov provided a solution concept for integer-order differential equations with a discontinuous right-hand side, and it applies equally to fractional-order differential equations under the same discontinuity conditions. Based on the Filippov solution [27] of integer-order differential equations, we introduce the Filippov solution of a fractional-order differential equation. Consider the $n$-dimensional fractional-order system
$${}_{0}D_t^{\alpha}x(t)=f(t,x), \qquad (3)$$
where $f(t,x)$ is discontinuous in $x$.

Definition 2. A set-valued map $F:\mathbb{R}\times\mathbb{R}^{n}\to\mathbb{R}^{n}$ is defined as
$$F(t,x)=\bigcap_{\delta>0}\ \bigcap_{\mu(N)=0}\overline{\mathrm{co}}[\,f(t,B(x,\delta)\setminus N)\,],$$
where $B(x,\delta)=\{y:\|y-x\|\le\delta\}$ and $\mu(N)$ is the Lebesgue measure of the set $N$. A vector function $x(t)$ defined on a nondegenerate interval $I\subseteq\mathbb{R}$ is called a Filippov solution of system (3) if it is absolutely continuous on every subinterval $[t_1,t_2]$ of $I$ and, for a.e. $t\in I$, $x(t)$ satisfies the differential inclusion
$${}_{0}D_t^{\alpha}x(t)\in F(t,x).$$
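For a scalar activation with jump discontinuities, the set-valued map of Definition 2 reduces at each point to the interval between the two one-sided limits. The following minimal sketch (not part of the paper's development) approximates that interval numerically; `co_closure`, `eps`, and the sample activation are hypothetical helpers chosen for illustration only.

```python
import math

# Numerical sketch of the convex closure co[f(x)] from Definition 2 for a
# scalar activation with a jump: only the one-sided limits f(x-) and f(x+)
# matter. co_closure and eps are hypothetical helpers that approximate the
# one-sided limits by sampling f just off the point x.
def co_closure(f, x, eps=1e-9):
    left, right = f(x - eps), f(x + eps)
    return min(left, right), max(left, right)

# A discontinuous activation in the style of Section 4 (an assumption here):
f = lambda x: math.tanh(x) + x + (1.0 if x > 0 else -1.0)

jump = co_closure(f, 0.0)    # jump point: a genuine interval, roughly [-1, 1]
smooth = co_closure(f, 1.0)  # continuity point: a degenerate interval
```

At a continuity point the interval collapses to a single value, matching the fact that the closure of a continuous single-valued map is the map itself.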


Now consider the networks (2), and denote the set-valued map
$$F(x)\triangleq\overline{\mathrm{co}}[f(x)]=(\overline{\mathrm{co}}[f_1(x_1)],\ldots,\overline{\mathrm{co}}[f_n(x_n)])^{T}.$$
By the conditions on $f_i(\cdot)$, we have $\overline{\mathrm{co}}[f_i(x_i)]=[\min\{f_i(x_i^{-}),f_i(x_i^{+})\},\max\{f_i(x_i^{-}),f_i(x_i^{+})\}]$ for $i=1,2,\ldots,n$. The Filippov solution of system (2) is then defined as follows.

Definition 3. In the sense of Filippov, a function $x(t)$ is called a solution of system (2) on $[0,T)$ with initial condition $x(0)=x_0$ if $x(t)$ is absolutely continuous on every compact subinterval of $[0,T)$, $x(0)=x_0$, and
$${}_{0}D_t^{\alpha}x(t)\in -Ax(t)+BF(x(t))+w \qquad (4)$$
for a.e. $t\in[0,T)$. Equivalently to condition (4), there exists a measurable function $\gamma=(\gamma_1,\gamma_2,\ldots,\gamma_n)^{T}:[0,T)\to\mathbb{R}^{n}$ such that $\gamma(t)\in F(x(t))$ and
$${}_{0}D_t^{\alpha}x(t)=-Ax(t)+B\gamma(t)+w \qquad (5)$$
for a.e. $t\in[0,T)$, where the single-valued function $\gamma$ is the so-called measurable selection of $F$.

2.2. Mittag-Leffler stability for fractional-order differential equations

Mittag-Leffler stability is an important dynamic property of fractional-order differential equations, and the related theory is collected in this subsection. The Mittag-Leffler function is introduced first; it plays the role in solving fractional-order differential equations that the exponential function plays in integer-order ones.

Definition 4 (Podlubny [26]). The two-parameter Mittag-Leffler function is defined as
$$E_{\alpha,\beta}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+\beta)},$$
where $\alpha>0$, $\beta>0$ and $z\in\mathbb{C}$. For $\beta=1$, its one-parameter form is
$$E_{\alpha}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+1)}=E_{\alpha,1}(z).$$
In particular, $E_{1,1}(z)=e^{z}$. The Laplace transform of the two-parameter Mittag-Leffler function is
$$\mathcal{L}\{t^{\beta-1}E_{\alpha,\beta}(-\lambda t^{\alpha})\}=\frac{s^{\alpha-\beta}}{s^{\alpha}+\lambda} \qquad (\mathrm{Re}(s)>|\lambda|^{1/\alpha}),$$
where $t\ge 0$, $s$ is the variable in the Laplace domain, $\mathrm{Re}(s)$ is the real part of $s$, and $\lambda\in\mathbb{R}$.

Consider an $n$-dimensional fractional-order system
$${}_{0}D_t^{\alpha}x(t)=g(t,x(t)), \qquad (6)$$
where $\alpha\in(0,1)$, $x=(x_1,x_2,\ldots,x_n)^{T}\in\mathbb{R}^{n}$, and $g:[0,+\infty)\times\mathbb{R}^{n}\to\mathbb{R}^{n}$ is piecewise continuous in $t$. Its solution can be written as
$$x(t)=x_0+\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-\tau)^{\alpha-1}g(\tau,x(\tau))\,d\tau,$$
where $x_0=x(0)$ is the initial value of system (6).

Definition 5. The constant $\bar{x}$ is an equilibrium point of the Caputo fractional-order dynamic system (6) if and only if $g(t,\bar{x})=0$.

Remark 1. Without loss of generality, all definitions and theorems are stated for the case when the equilibrium point is the origin, i.e., $\bar{x}=0$, because any equilibrium point can be translated to the origin via a change of variables based on Properties 1 and 2. If the equilibrium point of (6) is $\bar{x}\ne 0$, then with the change of variable $y(t)=x(t)-\bar{x}$, system (6) can be rewritten as
$${}_{0}D_t^{\alpha}y(t)={}_{0}D_t^{\alpha}(x(t)-\bar{x})=g(t,x(t))=g(t,y(t)+\bar{x})\triangleq\psi(t,y),$$
where $\psi(t,0)=0$, so the new system has its equilibrium point at the origin in the new variable $y$.

Lemma 1 (Existence and uniqueness theorem, Kilbas et al. [28]). System (6) has a unique solution if $\bar{x}=0$ is the equilibrium point and $g(t,x)$ is locally Lipschitz in $x$.

Lemma 2 (Li et al. [24]). If $\bar{x}=0$ is an equilibrium point of system (6), and $g(t,x)$ is piecewise continuous in $t$ and locally Lipschitz in $x$ with Lipschitz constant $l$, then the solution of (6) satisfies
$$\|x(t)\|\le\|x_0\|\,E_{\alpha}(lt^{\alpha}),$$
where $\alpha\in(0,1)$.

Definition 6 (Mittag-Leffler stability, Li et al. [24]). If $\bar{x}=0$ is an equilibrium point of system (6), the solution of (6) is said to be Mittag-Leffler stable if
$$\|x(t)\|\le[m(x_0)E_{\alpha}(-\lambda t^{\alpha})]^{b}, \qquad (7)$$
where $\lambda>0$, $b>0$, $m(0)=0$, $m(x)\ge 0$, and $m(x)$ is locally Lipschitz in $x\in\mathbb{R}^{n}$ with Lipschitz constant $m_0$. Mittag-Leffler stability implies asymptotic stability, i.e., $\|x(t)\|\to 0$ as $t\to+\infty$.

The following Lemma 3 is used to analyze the asymptotic stability of fractional-order differential equations.

Lemma 3 (Extended second method of Lyapunov, Zhang et al. [29]). The fractional-order system (6) is Mittag-Leffler stable at the equilibrium point $\bar{x}=0$ if there exists a continuous function $V(t,x(t))$ satisfying
$$\alpha_1\|x(t)\|^{a}\le V(t,x(t))\le\alpha_2\|x(t)\|^{ab},$$
$${}_{0}D_t^{\beta}V(t^{+},x(t^{+}))\le-\alpha_3\|x(t)\|^{ab} \quad \text{for a.e. } t\in[0,+\infty), \qquad (8)$$
where $V(t,x(t)):[0,\infty)\times D\to\mathbb{R}$ is locally Lipschitz in $x$, $\dot{V}(t,x(t))$ is piecewise continuous, and $\dot{V}(t^{+},x(t^{+}))$ exists for every $t\in[0,\infty)$; $D\subseteq\mathbb{R}^{n}$ is a domain containing the origin; $t\ge 0$, $\beta\in(0,1)$, and $\alpha_1$, $\alpha_2$, $\alpha_3$, $a$, $b$ are arbitrary positive constants. If the assumptions hold globally on $\mathbb{R}^{n}$, then $\bar{x}=0$ is globally Mittag-Leffler stable.

Besides, the following Lemmas 4 and 5 are useful in the next sections.

Lemma 4 (Zhang et al. [29]). If $h(t)\in C^{1}([0,+\infty),\mathbb{R})$ is a continuously differentiable function, the following inequality holds:
$${}_{0}D_t^{\alpha}|h(t^{+})|\le\mathrm{sgn}(h(t))\,{}_{0}D_t^{\alpha}h(t) \quad \text{for a.e. } t\in[0,+\infty), \qquad (9)$$
where $0<\alpha<1$. Note that inequality (9) also holds almost everywhere when $h'(t)$ exists.

Lemma 5 (Ye et al. [30]). For a constant $\beta>0$, suppose $a(t)$ is a nonnegative, nondecreasing function locally integrable on $0\le t<T$ (some $T\le+\infty$) and $b(t)\le M$ is a nonnegative, nondecreasing continuous function defined on $0\le t<T$, where $M$ is a constant. If $u(t)$ is nonnegative and locally integrable on $0\le t<T$ and satisfies
$$u(t)\le a(t)+b(t)\int_{0}^{t}(t-s)^{\beta-1}u(s)\,ds$$


on that interval, then
$$u(t)\le a(t)E_{\beta}(b(t)\Gamma(\beta)t^{\beta}).$$

3. Stability analysis of fractional-order Hopfield neural networks with discontinuous activation functions

In order to analyze the stability of fractional-order Hopfield neural networks with discontinuous activation functions, Theorem 1 is derived at the beginning of this section.

Theorem 1. For a solution $x(t)$ of system (6), we have $\lim_{t\to+\infty}x(t)=0$ if $x(0)\in D$ and there exist positive constants $\alpha_1$, $\alpha_2$, $\alpha_3$, $a$, $b$, $\gamma$, a continuous function $V(t,x(t))$ and a piecewise smooth $h(t):[0,\infty)\to\mathbb{R}$ satisfying
$$\alpha_1\|x\|^{a}\le V(t,x(t))\le\alpha_2\|x\|^{ab}, \qquad (10)$$
$${}_{0}D_t^{\beta}V(t^{+},x(t^{+}))\le-\alpha_3\|x\|^{ab}+h(t) \quad \text{for a.e. } t\in[0,+\infty), \qquad (11)$$
$$\int_{0}^{+\infty}|h(t)|\,dt=\gamma<+\infty, \qquad (12)$$
$$\lim_{t\to+\infty}h(t)=0, \qquad (13)$$
where $t\ge 0$, $\beta\in(0,1)$, $D\subseteq\mathbb{R}^{n}$ is a domain which contains the origin, $V(t,x(t)):[0,\infty)\times D\to\mathbb{R}$ is locally Lipschitz in $x$, $\dot{V}(t,x(t))$ is piecewise continuous, and $\dot{V}(t^{+},x(t^{+}))$ exists for every $t\in[0,\infty)$. If the assumptions hold globally on $\mathbb{R}^{n}$, then $\lim_{t\to+\infty}x(t)=0$ for any $x(0)\in\mathbb{R}^{n}$.

Proof. From inequalities (10) and (11), the following inequality holds almost everywhere:
$${}_{0}D_t^{\beta}V(t^{+},x(t^{+}))\le-\frac{\alpha_3}{\alpha_2}V(t,x(t))+h(t).$$
Then there exists a nonnegative function $m(t)$ satisfying
$${}_{0}D_t^{\beta}V(t^{+},x(t^{+}))+m(t)=-\frac{\alpha_3}{\alpha_2}V(t,x(t))+h(t). \qquad (14)$$
Taking the Laplace transform of (14) gives
$$s^{\beta}V^{+}(s)-V(0^{+})s^{\beta-1}+M(s)=-\frac{\alpha_3}{\alpha_2}V(s)+H(s), \qquad (15)$$
where $V(0^{+})=\lim_{\tau\to 0^{+}}V(\tau,x(\tau))$, $V^{+}(s)=\mathcal{L}\{V(t^{+},x(t^{+}))\}$, $V(s)=\mathcal{L}\{V(t,x(t))\}$, $M(s)=\mathcal{L}\{m(t)\}$ and $H(s)=\mathcal{L}\{h(t)\}$. By the continuity of the function $V(t,x(t))$ and (15), we obtain $V(t^{+},x(t^{+}))=V(t,x(t))$, $V^{+}(s)=V(s)$ and
$$V(s)=\frac{V(0)s^{\beta-1}-M(s)+H(s)}{s^{\beta}+\alpha_3/\alpha_2}.$$
Because $V(t,x(t))$ is locally Lipschitz in $x$, the solution $V(t)$ is unique by Lemma 1. Then, with the inverse Laplace transform, the unique solution of (14) is
$$V(t)=V(0)E_{\beta}\Big(-\frac{\alpha_3}{\alpha_2}t^{\beta}\Big)+[h(t)-m(t)]\ast\Big[t^{\beta-1}E_{\beta,\beta}\Big(-\frac{\alpha_3}{\alpha_2}t^{\beta}\Big)\Big].$$
Since $t^{\beta-1}$ and $E_{\beta,\beta}(-\frac{\alpha_3}{\alpha_2}t^{\beta})$ are nonnegative functions, this becomes
$$V(t)\le V(0)E_{\beta}\Big(-\frac{\alpha_3}{\alpha_2}t^{\beta}\Big)+h(t)\ast\Big[t^{\beta-1}E_{\beta,\beta}\Big(-\frac{\alpha_3}{\alpha_2}t^{\beta}\Big)\Big]. \qquad (16)$$
Certainly $\lim_{t\to+\infty}V(0)E_{\beta}(-\frac{\alpha_3}{\alpha_2}t^{\beta})=0$ holds, so it remains to prove $\lim_{t\to+\infty}h(t)\ast[t^{\beta-1}E_{\beta,\beta}(-\frac{\alpha_3}{\alpha_2}t^{\beta})]=0$. For convenience, denote $\varphi(t)\triangleq t^{\beta-1}E_{\beta,\beta}(-\frac{\alpha_3}{\alpha_2}t^{\beta})$. Since $\lim_{t\to+\infty}\varphi(t)=0$, for any $\frac{\epsilon_1}{2\gamma}>0$ there exists $N_1>0$ such that $0\le\varphi(t)<\frac{\epsilon_1}{2\gamma}$ for all $t\ge N_1$. So according to (12), we gain
$$|h(t)\ast\varphi(t)|\le\int_{0}^{t}|h(t-\tau)|\varphi(\tau)\,d\tau=\int_{0}^{N_1}|h(t-\tau)|\varphi(\tau)\,d\tau+\int_{N_1}^{t}|h(t-\tau)|\varphi(\tau)\,d\tau$$
$$\le\int_{0}^{N_1}|h(t-\tau)|\varphi(\tau)\,d\tau+\frac{\epsilon_1}{2\gamma}\int_{N_1}^{t}|h(t-\tau)|\,d\tau=\int_{0}^{N_1}|h(t-\tau)|\varphi(\tau)\,d\tau+\frac{\epsilon_1}{2\gamma}\int_{0}^{t-N_1}|h(\xi)|\,d\xi$$
$$\le\int_{0}^{N_1}|h(t-\tau)|\varphi(\tau)\,d\tau+\frac{\epsilon_1}{2} \qquad (17)$$
for any $t\ge N_1$. Due to (13), for any $\frac{\epsilon_1}{2\int_{0}^{N_1}\varphi(\tau)\,d\tau}>0$ there exists $N_2>0$ such that $|h(t)|<\frac{\epsilon_1}{2\int_{0}^{N_1}\varphi(\tau)\,d\tau}$ for all $t\ge N_2$. Substituting into inequality (17), we have
$$|h(t)\ast\varphi(t)|\le\frac{\epsilon_1}{2\int_{0}^{N_1}\varphi(\tau)\,d\tau}\int_{0}^{N_1}\varphi(\tau)\,d\tau+\frac{\epsilon_1}{2}=\epsilon_1$$
for any $t\ge N_1+N_2$. Therefore we get $\lim_{t\to+\infty}h(t)\ast[t^{\beta-1}E_{\beta,\beta}(-\frac{\alpha_3}{\alpha_2}t^{\beta})]=0$ and $\lim_{t\to+\infty}\alpha_1\|x(t)\|^{a}\le\lim_{t\to+\infty}V(t)=0$ based on inequality (10), which implies that $\lim_{t\to+\infty}x(t)=0$. □

Now we discuss the existence of the solutions of the neural networks (2), i.e., of the fractional-order differential inclusion (4) [31]. To state the theorems conveniently, we give the following assumption.

Assumption 1. $F$ satisfies a growth condition (g.c.): there exist constants $k_i>0$ and $h_i$ such that
$$|F_i(x_i)|\triangleq\sup_{\xi\in F_i(x_i)}|\xi|\le k_i|x_i|+h_i, \qquad i=1,2,\ldots,n. \qquad (18)$$

Theorem 2. In the sense of Eq. (5), there exists at least one solution of system (2) for any initial value $x(0)$, if Assumption 1 holds.

Proof. Because the set-valued map $x(t)\hookrightarrow -Ax(t)+BF(x(t))+w$ is upper semicontinuous with nonempty compact convex values, the local existence of a solution $x(t)$ of Eq. (5) is guaranteed. According to Eqs. (5) and (18), for a.e. $t\in[0,+\infty)$ we obtain
$$\|-Ax(t)+BF(x(t))+w\|_{p}\le\|A\|_{p}\|x(t)\|_{p}+\|B\|_{p}(K\|x(t)\|_{p}+H)+\|w\|_{p}=\bar{K}\|x(t)\|_{p}+\bar{H}, \qquad (19)$$
where $K=\max\{k_1,k_2,\ldots,k_n\}$, $H=\max\{h_1,h_2,\ldots,h_n\}$, $\bar{K}=\|A\|_{p}+\|B\|_{p}K$ and $\bar{H}=\|B\|_{p}H+\|w\|_{p}$. Based on the solution expression of the fractional-order system, one has
$$\|x(t)\|_{p}\le\|x(0)\|_{p}+\frac{1}{\Gamma(\alpha)}\Big\|\int_{0}^{t}(t-\tau)^{\alpha-1}[-Ax(\tau)+BF(x(\tau))+w]\,d\tau\Big\|_{p}$$
$$\le\|x(0)\|_{p}+\frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-\tau)^{\alpha-1}(\bar{K}\|x(\tau)\|_{p}+\bar{H})\,d\tau=\|x(0)\|_{p}+\frac{\bar{H}t^{\alpha}}{\alpha\Gamma(\alpha)}+\frac{\bar{K}}{\Gamma(\alpha)}\int_{0}^{t}(t-\tau)^{\alpha-1}\|x(\tau)\|_{p}\,d\tau. \qquad (20)$$
According to Lemma 5 and inequality (20), we gain
$$\|x(t)\|_{p}\le\Big(\|x(0)\|_{p}+\frac{\bar{H}t^{\alpha}}{\alpha\Gamma(\alpha)}\Big)E_{\alpha}(\bar{K}t^{\alpha}).$$
So $x(t)$ remains bounded on $[0,+\infty)$, which ensures that the solution of system (2) exists. □
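The Mittag-Leffler factors appearing in the bounds above (Lemma 2, Theorem 2) can be evaluated directly from the series in Definition 4. A minimal truncated-series sketch; the fixed term cap is an assumption of this sketch, adequate only for moderate $|z|$:

```python
import math

# Truncated series for the two-parameter Mittag-Leffler function E_{a,b}(z)
# from Definition 4. The series defines an entire function, but a fixed term
# cap keeps math.gamma() inside floating-point range.
def mittag_leffler(z, alpha, beta=1.0, terms=50):
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

e1 = mittag_leffler(1.0, 1.0)   # E_{1,1}(1) = e
c2 = mittag_leffler(4.0, 2.0)   # E_{2,1}(z^2) = cosh(z), here z = 2
```

The two checks use the special cases $E_{1,1}(z)=e^{z}$ and $E_{2,1}(z^{2})=\cosh(z)$, both of which follow term by term from the series.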


Remark 2. For the corresponding integer-order networks with discontinuous activation functions, the existence of solutions in the sense of Filippov has been proved under the same conditions in Ref. [32]; that result is clearly the special case of Theorem 2 with $\alpha=1$.

To analyze the properties of the solutions of system (2), the following assumptions are given.

Assumption 2. For the discontinuous activation functions $f_i$, there exist constants $l_i>0$ and $m_i\ge 0$ such that
$$|f_i(x)-f_i(y)|\le l_i|x-y|+m_i$$
for any $x,y\in\mathbb{R}$ and $i=1,2,\ldots,n$.

Assumption 3. For $i=1,2,\ldots,n$, there exist positive constants $c_i$ such that
$$c_i=a_i-\sum_{j=1}^{n}|b_{ji}|l_i>0. \qquad (21)$$

Theorem 3. If Assumptions 1-3 hold, every solution of system (2) is bounded, and there exists $T\ge 0$ such that for all $t\ge T$ and any solution $x(t)$,
$$\|x(t)\|_{1}\le\frac{M}{c}+\epsilon,$$
where $c=\min\{c_1,c_2,\ldots,c_n\}$, $M=\sum_{i=1}^{n}(|w_i|+\sum_{j=1}^{n}|b_{ij}|(m_j+|f_j(0)|))$ and $0<\epsilon\ll 1$ is an arbitrarily small constant.

Proof. Construct a Lyapunov function as
$$V(t,x(t))=\|x(t)\|_{1}=\sum_{i=1}^{n}|x_i(t)|.$$
According to Assumptions 2, 3 and Lemma 4, the following inequality holds almost everywhere:
$${}_{0}D_t^{\alpha}V(t^{+},x(t^{+}))=\sum_{i=1}^{n}{}_{0}D_t^{\alpha}|x_i(t^{+})|\le\sum_{i=1}^{n}\mathrm{sgn}(x_i(t))\,{}_{0}D_t^{\alpha}x_i(t)$$
$$=\sum_{i=1}^{n}\mathrm{sgn}(x_i(t))\Big[-a_ix_i(t)+\sum_{j=1}^{n}b_{ij}(f_j(x_j(t))-f_j(0))+\sum_{j=1}^{n}b_{ij}f_j(0)+w_i\Big]$$
$$\le\sum_{i=1}^{n}\Big[-a_i|x_i(t)|+\sum_{j=1}^{n}l_j|b_{ij}||x_j(t)|+\sum_{j=1}^{n}|b_{ij}|(m_j+|f_j(0)|)+|w_i|\Big]$$
$$=-\sum_{i=1}^{n}\Big[a_i-\sum_{j=1}^{n}l_i|b_{ji}|\Big]|x_i(t)|+\sum_{i=1}^{n}\Big(|w_i|+\sum_{j=1}^{n}|b_{ij}|(m_j+|f_j(0)|)\Big)$$
$$\le-cV(t,x(t))+M=-cV(t^{+},x(t^{+}))+M. \qquad (22)$$
With the change of variable $z(t)=V(t^{+},x(t^{+}))-\frac{M}{c}$, we obtain
$${}_{0}D_t^{\alpha}z(t)\le-cz(t).$$
Then there exists a nonnegative function $m(t)$ satisfying
$${}_{0}D_t^{\alpha}z(t)+m(t)=-cz(t). \qquad (23)$$
Taking the Laplace transform of (23) gives $s^{\alpha}Z(s)-z(0)s^{\alpha-1}+M(s)=-cZ(s)$, where $Z(s)=\mathcal{L}\{z(t)\}$ and $M(s)=\mathcal{L}\{m(t)\}$. So one has
$$Z(s)=\frac{z(0)s^{\alpha-1}-M(s)}{s^{\alpha}+c}.$$
Because $-cz(t)$ is locally Lipschitz in $z(t)$, the solution $z(t)$ is unique by Lemma 1. Then, with the inverse Laplace transform, the unique solution of (23) is
$$z(t)=z(0)E_{\alpha}(-ct^{\alpha})-m(t)\ast[t^{\alpha-1}E_{\alpha,\alpha}(-ct^{\alpha})].$$
Since $t^{\alpha-1}$ and $E_{\alpha,\alpha}(-ct^{\alpha})$ are nonnegative functions, this becomes
$$z(t)\le z(0)E_{\alpha}(-ct^{\alpha})\to 0 \quad\text{as } t\to+\infty.$$
Thus, for any $\epsilon>0$, there exists $T\ge 0$ such that for all $t\ge T$ and any solution $V(t,x(t))$,
$$V(t,x(t))-\frac{M}{c}=V(t^{+},x(t^{+}))-\frac{M}{c}\le\epsilon.$$
Then we gain
$$\|x(t)\|_{1}=V(t,x(t))\le\frac{M}{c}+\epsilon,$$
and the boundedness of the solutions of system (2) follows. □

Remark 3. If each $m_i=0$ $(i=1,2,\ldots,n)$ in Assumption 2, every $f_i$ is Lipschitz continuous. Theorem 4 in Ref. [29] gives the boundedness condition for the solutions of the continuous networks, which is the special case of Theorem 3 with $m_i=0$ $(i=1,2,\ldots,n)$. When $m_i=w_i=f_i(0)=0$ $(i=1,2,\ldots,n)$, $\bar{x}=0$ is Mittag-Leffler stable under Assumptions 2 and 3 [29]; this is also a special case of Theorem 3. Further, to obtain the stability of system (2), two additional assumptions need to be considered.

Assumption 4. For the discontinuous activation functions $f_i$, there exist constants $p_i>0$ and $r>\frac{M}{c}$ such that for any $x,y\in[-r,r]$ and $i=1,2,\ldots,n$,
$$|f_i(x)-f_i(y)|\le p_i|x-y|.$$

Assumption 5. For $i=1,2,\ldots,n$, there exist positive constants $d_i$ such that
$$d_i=a_i-\sum_{j=1}^{n}|b_{ji}|p_i>0. \qquad (24)$$

Theorem 4. If there exists an equilibrium point $\bar{x}$ of system (2) and Assumptions 1-5 hold, then $\bar{x}$ must be the unique equilibrium point and $\|\bar{x}\|_{1}\le\frac{M}{c}$.

Proof. According to Assumptions 1-3 and Theorem 3, any solution $x(t)$ of system (2) is bounded: for any $\epsilon>0$, there exists $T\ge 0$ such that
$$\|x(t)\|_{1}\le\frac{M}{c}+\epsilon$$
for all $t\ge T$ and any solution $x(t)$. Due to Definition 5, we have $-A\bar{x}+Bf(\bar{x})+w=0$, so $\bar{x}$ is also a solution of system (2) and $\|\bar{x}\|_{1}\le\frac{M}{c}$. Next, we prove that $\bar{x}$ is the unique equilibrium point of system (2). Suppose, to the contrary, that system (2) has another equilibrium point $\bar{y}$ satisfying $-A\bar{y}+Bf(\bar{y})+w=0$ and $\|\bar{y}\|_{1}\le\frac{M}{c}<r$. Then we obtain
$$\|A(\bar{y}-\bar{x})\|_{1}=\|B(f(\bar{y})-f(\bar{x}))\|_{1}.$$
Due to Assumptions 4 and 5, we have
$$\|B(f(\bar{y})-f(\bar{x}))\|_{1}\le\sum_{i=1}^{n}\sum_{j=1}^{n}|b_{ij}||f_j(\bar{y}_j)-f_j(\bar{x}_j)|\le\sum_{i=1}^{n}\sum_{j=1}^{n}p_j|b_{ij}||\bar{y}_j-\bar{x}_j|=\sum_{i=1}^{n}\Big(\sum_{j=1}^{n}p_i|b_{ji}|\Big)|\bar{y}_i-\bar{x}_i|$$


$$<\sum_{i=1}^{n}a_i|\bar{y}_i-\bar{x}_i|=\|A(\bar{y}-\bar{x})\|_{1},$$
which contradicts $\|A(\bar{y}-\bar{x})\|_{1}=\|B(f(\bar{y})-f(\bar{x}))\|_{1}$. Hence $\bar{x}$ is the unique equilibrium point of system (2). □

Theorem 5. If system (2) has an equilibrium point $\bar{x}$ and Assumptions 1-5 hold, then system (2) is globally attractive, i.e.,
$$\lim_{t\to+\infty}x(t)=\bar{x}.$$

Proof. According to Assumptions 1-3 and Theorem 3, any solution $x(t)$ of system (2) is bounded: for any positive constant $\epsilon\ll r-\frac{M}{c}$, there exists $T\ge 0$ such that
$$\|x(t)\|_{1}\le\frac{M}{c}+\epsilon$$
for all $t\ge T$ and any solution $x(t)$. Due to Theorem 4, $\bar{x}$ is the unique equilibrium point of system (2) and $\|\bar{x}\|_{1}\le\frac{M}{c}$. Then, with the change of variable $y(t)=x(t)-\bar{x}$, system (2) can be rewritten as
$${}_{0}D_t^{\alpha}y(t)=-Ay(t)+B(f(x(t))-f(\bar{x})). \qquad (25)$$
Construct the Lyapunov function
$$V(t,y(t))=\|y(t)\|_{1}=\sum_{i=1}^{n}|y_i(t)|.$$
When $t\ge T$, every solution satisfies $\|x(t)\|_{1}<r$. According to Assumptions 4, 5 and Lemma 4, the following inequality holds for a.e. $t\in[T,+\infty)$:
$${}_{0}D_t^{\alpha}V(t^{+},y(t^{+}))=\sum_{i=1}^{n}{}_{0}D_t^{\alpha}|y_i(t^{+})|\le\sum_{i=1}^{n}\mathrm{sgn}(y_i(t))\,{}_{0}D_t^{\alpha}y_i(t)$$
$$=\sum_{i=1}^{n}\mathrm{sgn}(y_i(t))\Big[-a_iy_i(t)+\sum_{j=1}^{n}b_{ij}(f_j(x_j(t))-f_j(\bar{x}_j))\Big]\le\sum_{i=1}^{n}\Big[-a_i|y_i(t)|+\sum_{j=1}^{n}p_j|b_{ij}||y_j(t)|\Big]$$
$$=-\sum_{i=1}^{n}\Big[a_i-\sum_{j=1}^{n}p_i|b_{ji}|\Big]|y_i(t)|\le-d\|y(t)\|_{1}, \qquad (26)$$
where $d=\min\{d_1,d_2,\ldots,d_n\}$. Thus we obtain, for all $x(0)\in\mathbb{R}^{n}$ and a.e. $t\in[0,+\infty)$,
$${}_{0}D_t^{\alpha}V(t^{+},y(t^{+}))\le-d\|y(t)\|_{1}+h(t,x(t)), \qquad (27)$$
where $h(t,x(t))$ is an undetermined function. From inequality (26), we know $h(t,x(t))=0$ for all $t\in[T,+\infty)$. Because of the boundedness of $x(t)$, $h(t,x(t))$ is bounded and
$$\int_{0}^{+\infty}|h(t,x(t))|\,dt<+\infty.$$
Based on Theorem 1, we obtain $\lim_{t\to+\infty}y(t)=\lim_{t\to+\infty}x(t)-\bar{x}=0$ and $\lim_{t\to+\infty}x(t)=\bar{x}$. □

4. Numerical simulations

The following example is given to demonstrate the effectiveness of the proposed theoretical results. A predictor-corrector scheme is employed for the approximate numerical solution of the fractional-order neural networks. Consider the two-cell neural network of $n=2$ neurons [16] described by system (2) with the parameters
$$\alpha=0.8, \qquad A=\mathrm{diag}\{3,3\}, \qquad B=(b_{ij})_{2\times 2}=\begin{pmatrix}0.3 & -0.6\\ 0.6 & 0.3\end{pmatrix},$$
$$f_i(x_i)=\begin{cases}\tanh(x_i)+x_i+1, & x_i>0,\\ \tanh(x_i)+x_i-1, & x_i\le 0,\end{cases} \qquad i=1,2.$$
So the neural network model (2) can be rewritten as
$$\begin{cases}{}_{0}D_t^{0.8}x_1(t)=-3x_1(t)+0.3f_1(x_1(t))-0.6f_2(x_2(t))+w_1,\\ {}_{0}D_t^{0.8}x_2(t)=-3x_2(t)+0.6f_1(x_1(t))+0.3f_2(x_2(t))+w_2.\end{cases} \qquad (28)$$

Fig. 1. (a) The trajectories of the activation functions $f_i(x_i)$, $i=1,2$, in system (28). (b) The trajectories of the activation functions' convex closures $\overline{\mathrm{co}}[f_i(x_i)]$, $i=1,2$, in system (28).

Fig. 1 shows the trajectories of the activation functions $f_i(x_i)$, $i=1,2$, as well as their convex closures $\overline{\mathrm{co}}[f_i(x_i)]$. Both $f_i(x_i)$, $i=1,2$, are discontinuous because $f_i(0^{+})\ne f_i(0^{-})$. We now analyze two cases with different external inputs $w=(w_1,w_2)^{T}$.

Case 1. When $w=(0,0)^{T}$, we get $l_1=l_2=2$, $m_1=m_2=2$, $|f_1(0)|=|f_2(0)|=1$, $c_1=c_2=1.2$ and $M=5.4$, satisfying Assumptions 1-3. So according to Theorem 3, any solution of system (28) is bounded; in other words, there exists $T\ge 0$ such that $\|x(t)\|_{1}\le 4.5001$ for all $t\ge T$ and any solution $x(t)$. With initial value $x(0)=(-6,6)^{T}$, Fig. 2 shows that the states of system (28) are


6

2 x (t)

1.5

1

1

x1(t)

5

x2(t)

x (t) 2

4

0.5

3

0

2 −0.5

1

−1

0

−1.5 −2

−1 0

5

10

15

20

0

5

10

a

2

4

1

3

0 −−− [f (x (t))] co 1 1

−−− [f (x (t))] co 1 1

5

1

−1 −2

0

−3

−1

−4

−2

0

5

10

15

−5

20

0

5

t

b

0

0 −−− [f (x (t))] co 2 2

−−− [f (x (t))] co 2 2

1

−1 −2

15

20

−2 −3

−4

−4

10

20

−1

−3

5

15

2

1

0

10 t

2

−5

20

Fig. 4. The states for Case 1 of system (28) with initial value xð0Þ ¼ ð6; 6ÞT .

Fig. 2. The states for Case 1 of system (28) with initial value xð0Þ ¼ ð  6; 6ÞT .

2

15

t

t

15

−5

20

0

t

5

10 t

Fig. 3. The evolutions of the activation functions' convex closures for corresponding states in Fig. 2: (a) co½ f 1 ðx1 ðtÞÞ and (b) co½ f 2 ðx2 ðtÞÞ.

Fig. 5. The evolutions of the activation functions' convex closures for corresponding states in Fig. 4: (a) co½ f 1 ðx1 ðtÞÞ and (b) co½ f 2 ðx2 ðtÞÞ.

bounded, with $\|x(t)\|_1 \le 4.5001$ for $t \ge 10$, which verifies the result of Theorem 3. Corresponding to the states in Fig. 2, Fig. 3 shows the evolutions of the activation functions' convex closures $\overline{\mathrm{co}}[f_i(x_i(t))]$, $i = 1, 2$; the discontinuities of the activation functions $f_i(x_i(t))$ occur at high frequency after $t = 10$. With another initial value, $x(0) = (6, 6)^T$, Fig. 4 shows the bounded states of system (28), and Fig. 5 shows the evolutions of the corresponding $\overline{\mathrm{co}}[f_i(x_i(t))]$, $i = 1, 2$. These results also confirm the effectiveness of Theorem 3.
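The bounded behaviour of Case 1 can be sketched numerically. The code below is an assumption-laden illustration, not the authors' simulation: the fractional derivative ${}_0D_t^{0.8}$ is approximated by the standard Grünwald–Letnikov (GL) scheme (initial-condition effects of this approximation decay like $t^{-\alpha}$), and the explicit form of the activation $f_i$, which is not restated in this excerpt, is inferred from the Case 2 change of variables as $f(x) = \tanh(x) - x + 1$ for $x > 0$ and $f(x) = \tanh(x) - x - 1$ for $x \le 0$.

```python
import math

# Activation inferred from the Case 2 change of variables (an assumption;
# the excerpt does not restate f_i explicitly).
def f(x):
    return math.tanh(x) - x + (1.0 if x > 0 else -1.0)

def rhs(x1, x2, w1=0.0, w2=0.0):
    # Right-hand side of system (28); Case 1 uses w = (0, 0)^T.
    return (-3*x1 + 0.3*f(x1) - 0.6*f(x2) + w1,
            -3*x2 + 0.6*f(x1) + 0.3*f(x2) + w2)

alpha, h, steps = 0.8, 0.05, 1000        # order 0.8, step size, horizon t = 50

# GL weights c_j = (-1)^j * binom(alpha, j), built by the standard recursion.
c = [1.0]
for j in range(1, steps + 1):
    c.append(c[-1] * (1.0 - (1.0 + alpha) / j))

ha = h ** alpha
xs = [(-6.0, 6.0)]                       # initial value x(0) = (-6, 6)^T
for k in range(1, steps + 1):
    f1, f2 = rhs(*xs[-1])
    # Memory term of the GL scheme: sum_{j=1}^{k} c_j * x_{k-j}.
    s1 = sum(c[j] * xs[k - j][0] for j in range(1, k + 1))
    s2 = sum(c[j] * xs[k - j][1] for j in range(1, k + 1))
    xs.append((ha * f1 - s1, ha * f2 - s2))

x1, x2 = xs[-1]
print(abs(x1) + abs(x2))   # l1-norm of the state at t = 50; should stay within 4.5001
```

In this sketch the trajectory settles near the origin and stays well inside the $\|x(t)\|_1 \le 4.5001$ bound of Theorem 3; the numerical chattering of $f_i(x_i(t))$ near $x_i = 0$ mirrors the high-frequency discontinuities visible in Fig. 3.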

Case 2. When $w = (27.9, -22.2)^T$, we have $M = 55.5$, so it is not convenient to analyze the stability of system (28) directly. However, according to Remark 1, system (28) can be rewritten with the change of variable $y(t) = x(t) - (7, -8)^T$:

$$\left\{\begin{aligned} {}_{0}D_t^{0.8} y_1(t) &= -3y_1(t) + 0.3g_1(y_1(t)) - 0.6g_2(y_2(t)), \\ {}_{0}D_t^{0.8} y_2(t) &= -3y_2(t) + 0.6g_1(y_1(t)) + 0.3g_2(y_2(t)), \end{aligned}\right. \tag{29}$$

where

$$g_1(y_1) = \begin{cases} \tanh(y_1+7) - y_1 + 1, & y_1 > -7, \\ \tanh(y_1+7) - y_1 - 1, & y_1 \le -7, \end{cases} \qquad g_2(y_2) = \begin{cases} \tanh(y_2-8) - y_2 + 1, & y_2 > 8, \\ \tanh(y_2-8) - y_2 - 1, & y_2 \le 8. \end{cases}$$

Fig. 6. The trajectories of the activation functions' convex closures in system (29): (a) $\overline{\mathrm{co}}[g_1(y_1)]$ and (b) $\overline{\mathrm{co}}[g_2(y_2)]$.

Fig. 7. The states for Case 2 of system (28) with initial value $x(0) = (-10, 10)^T$.

Fig. 8. The evolutions of the activation functions' convex closures for the corresponding states in Fig. 7: (a) $\overline{\mathrm{co}}[f_1(x_1(t))]$ and (b) $\overline{\mathrm{co}}[f_2(x_2(t))]$.

Fig. 9. The states for Case 2 of system (28) with initial value $x(0) = (6, 6)^T$.
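As a quick arithmetic cross-check (a sketch using the piecewise forms of $g_1$ and $g_2$ given for system (29), not part of the original analysis), one can verify that the reported equilibrium $y^* \approx (0.56, 0.08)$ annihilates the right-hand side of system (29). Since $\tanh(7.56)$ and $\tanh(-7.92)$ are within about $10^{-6}$ of $\pm 1$, the residuals are tiny:

```python
import math

def g1(y1):
    # First activation of system (29); discontinuous at y1 = -7.
    return math.tanh(y1 + 7) - y1 + (1.0 if y1 > -7 else -1.0)

def g2(y2):
    # Second activation of system (29); discontinuous at y2 = 8.
    return math.tanh(y2 - 8) - y2 + (1.0 if y2 > 8 else -1.0)

y1, y2 = 0.56, 0.08          # candidate equilibrium y* reported in the text
r1 = -3*y1 + 0.3*g1(y1) - 0.6*g2(y2)
r2 = -3*y2 + 0.6*g1(y1) + 0.3*g2(y2)
print(r1, r2)                # both residuals are on the order of 1e-7
```

Indeed, replacing $\tanh(\cdot)$ by its limits $\pm 1$ reduces the equilibrium conditions to a linear system whose exact solution is $y_1 = 0.56$, $y_2 = 0.08$, consistent with the stated value.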

Fig. 6 shows the trajectories of the activation functions' convex closures $\overline{\mathrm{co}}[g_i(y_i)]$, $i = 1, 2$. Both $g_1(y_1)$ and $g_2(y_2)$ are discontinuous because $g_1(-7^+) \ne g_1(-7^-)$ and $g_2(8^+) \ne g_2(8^-)$. For system (29), we have $l_1 = l_2 = 2$, $m_1 = m_2 = 2$, $f_1(0) \approx 2.0000$, $f_2(0) \approx 2.0000$, $c_1 = c_2 = 1.2$ and $M = 7.2$, so Assumptions 1–3 are satisfied. Note that $r = 7 > M/c = 6$ and $p_1 = p_2 = 2$ satisfy Assumptions 4 and 5. Based on Theorems 4 and 5, system (29) is globally attractive to the


unique equilibrium point $y^* = x^* - (7, -8)^T \approx (0.5600, 0.0800)^T$. Equivalently, $\lim_{t \to +\infty} x(t) = x^* \approx (7.5600, -7.9200)^T$ for system (28). With initial value $x(0) = (-10, 10)^T$, Fig. 7 shows that the states of system (28) converge to the equilibrium point $x^* \approx (7.5600, -7.9200)^T$, which verifies the results of Theorems 4 and 5. Corresponding to the states in Fig. 7, Fig. 8 shows the evolutions of the activation functions' convex closures $\overline{\mathrm{co}}[f_i(x_i(t))]$, $i = 1, 2$; the activation functions $f_i(x_i(t))$ are discontinuous only at $t \approx 0.2$. With initial value $x(0) = (6, 6)^T$, Fig. 9 shows that the states of system (28) converge to the same equilibrium point, and Fig. 10 shows the evolutions of the corresponding $\overline{\mathrm{co}}[f_i(x_i(t))]$, $i = 1, 2$. These results also verify the effectiveness of Theorems 4 and 5.

Fig. 10. The evolutions of the activation functions' convex closures for the corresponding states in Fig. 9: (a) $\overline{\mathrm{co}}[f_1(x_1(t))]$ and (b) $\overline{\mathrm{co}}[f_2(x_2(t))]$.

5. Conclusion

In this paper, based on the concept of Filippov solutions, the stability of fractional-order Hopfield neural networks with discontinuous activation functions has been studied. Consistent with the integer-order case, a growth condition is established to guarantee the existence of solutions of the fractional-order networks. By Lyapunov methods, two groups of sufficient conditions are derived to guarantee the boundedness and the stability of such networks, respectively. In addition, the uniqueness of the equilibrium point of the network system is analyzed under the proposed conditions. Furthermore, the effectiveness of the given results is demonstrated by numerical simulations.

Although the existence of solutions of the fractional-order networks is obtained, their uniqueness cannot be established under the usual conditions for integer-order discontinuous systems; the corresponding uniqueness conditions will be investigated in our future work. Moreover, time delays, which are unavoidable in practice, will also be considered in subsequent studies.

References

[1] Y.Q. Yang, J.D. Cao, A feedback neural network for solving convex constraint optimization problems, Appl. Math. Comput. 201 (2008) 340–350.
[2] E. Kaslik, S. Sivasundaram, Impulsive hybrid discrete-time Hopfield neural networks with delays and multistability analysis, Neural Netw. 24 (2011) 370–377.
[3] Y.R. Liu, Z.D. Wang, J.L. Liang, X.H. Liu, Stability and synchronization of discrete-time Markovian jumping neural networks with mixed mode-dependent time delays, IEEE Trans. Neural Netw. 20 (2009) 1102–1116.
[4] M.A. Cohen, S. Grossberg, Absolute stability of global pattern formation and parallel memory storage by competitive neural networks, IEEE Trans. Syst. Man Cybern. 13 (1983) 815–826.
[5] J.J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Natl. Acad. Sci. USA 81 (1984) 3088–3092.
[6] D.W. Tank, J.J. Hopfield, Simple neural optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit, IEEE Trans. Circuits Syst. 33 (1986) 533–541.
[7] J. Zhang, X. Jin, Global stability analysis in delayed Hopfield neural network models, Neural Netw. 13 (7) (2000) 745–753.
[8] Z. Wang, Y. Liu, K. Fraser, Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays, Phys. Lett. A 354 (4) (2006) 288–297.
[9] H. Huang, Q. Du, X. Kang, Global exponential stability of neutral high-order stochastic Hopfield neural networks with Markovian jump parameters and mixed time delays, ISA Trans. 52 (6) (2013) 759–767.
[10] L. Zhang, Y. Zhu, W. Zheng, Energy-to-peak state estimation for Markov jump RNNs with time-varying delays via nonsynchronous filter with nonstationary mode transitions, IEEE Trans. Neural Netw. Learn. Syst., in press.
[11] L. Zhang, Y. Zhu, P. Shi, Y. Zhao, Resilient asynchronous H-infinity filtering for Markov jump neural networks with unideal measurements and multiplicative noises, IEEE Trans. Cybern., in press.
[12] M. Forti, P. Nistri, Global convergence of neural networks with discontinuous neuron activations, IEEE Trans. Circuits Syst. I 50 (2003) 1421–1435.
[13] M. Forti, M. Grazzini, P. Nistri, Generalized Lyapunov approach for convergence of neural networks with discontinuous or non-Lipschitz activations, Physica D 214 (2006) 88–99.
[14] W.L. Lu, T.P. Chen, Dynamical behaviors of Cohen–Grossberg neural networks with discontinuous activation functions, Neural Netw. 18 (2005) 231–242.
[15] J.F. Wang, L.H. Huang, Z.Y. Guo, Global asymptotic stability of neural networks with discontinuous activations, Neural Netw. 22 (2009) 931–937.
[16] J. Xiao, Z.G. Zeng, W.W. Shen, Global asymptotic stability of delayed neural networks with discontinuous neuron activations, Neurocomputing 118 (2013) 322–328.
[17] E. Ahmed, A. Elgazzar, On fractional order differential equations model for nonlocal epidemics, Physica A: Stat. Mech. Appl. 379 (2) (2007) 607–614.
[18] G. Cottone, M. Di Paola, R. Santoro, A novel exact representation of stationary colored Gaussian processes (fractional differential approach), J. Phys. A: Math. Theor. 43 (8) (2010) 085002.
[19] N. Özalp, E. Demirci, A fractional order SEIR model with vertical transmission, Math. Comput. Model. 54 (1) (2011) 1–6.
[20] A. Boroomand, M. Menhaj, Fractional-order Hopfield neural networks, in: Advances in Neuro-Information Processing, Springer, Berlin, Heidelberg, 2009, pp. 883–890.
[21] B. Lundstrom, M. Higgs, W. Spain, Fractional differentiation by neocortical pyramidal neurons, Nat. Neurosci. 11 (11) (2008) 1335–1342.
[22] R. Wu, X. Hei, L. Chen, Finite-time stability of fractional-order neural networks with delay, Commun. Theor. Phys. 60 (2013) 189–193.
[23] E. Kaslik, S. Sivasundaram, Nonlinear dynamics and chaos in fractional-order neural networks, Neural Netw. 32 (2012) 245–256.
[24] Y. Li, Y. Chen, I. Podlubny, Stability of fractional-order nonlinear dynamic systems: Lyapunov direct method and generalized Mittag–Leffler stability, Comput. Math. Appl. 59 (5) (2010) 1810–1821.
[25] J.J. Chen, Z.G. Zeng, P. Jiang, Global Mittag–Leffler stability and synchronization of memristor-based fractional-order neural networks, Neural Netw. 51 (2014) 1–8.
[26] I. Podlubny, Fractional Differential Equations, Academic Press, New York, 1999.
[27] A.F. Filippov, Differential Equations with Discontinuous Right-Hand Sides, Mathematics and its Applications (Soviet Series), Kluwer Academic Publishers, Boston, 1988.
[28] A. Kilbas, H. Srivastava, J. Trujillo, Theory and Applications of Fractional Differential Equations, Elsevier, San Diego, 2006.
[29] S. Zhang, Y.G. Yu, H. Wang, Mittag–Leffler stability of fractional-order Hopfield neural networks, Nonlinear Anal.: Hybrid Syst. 16 (2015) 104–121.


[30] H.P. Ye, J.M. Gao, Y.S. Ding, A generalized Gronwall inequality and its application to a fractional differential equation, J. Math. Anal. Appl. 328 (2007) 1075–1081.
[31] J. Henderson, A. Ouahab, Fractional functional differential inclusions with finite delay, Nonlinear Anal. 70 (2009) 2091–2105.
[32] X.Y. Liu, T.P. Chen, J.D. Cao, W.L. Lu, Dissipativity and quasi-synchronization for neural networks with discontinuous activations and parameter mismatches, Neural Netw. 24 (2011) 1013–1021.

Shuo Zhang received the B.S. degree from the Department of Mathematical Science, Shandong Normal University, China, in 2011. He is currently a Ph.D. candidate in the Department of Mathematics, School of Science, Beijing Jiaotong University, China. His current research interests include neural networks, chaos synchronization, complex networks, nonlinear dynamics and control.

Yongguang Yu received his M.S. degree from the Department of Mathematical Science, Inner Mongolia University, China, in 2001, and the Ph.D. degree from the Institute of Applied Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, China, in 2004. From 2007 to 2009, he was a Research Fellow at City University of Hong Kong, China. Since 2010, he has been a Professor with the Department of Mathematics, School of Science, Beijing Jiaotong University, China. His research interests include chaotic dynamics, chaos control and synchronization, complex networks, nonlinear control and multi-agent systems.

Qing Wang received the B.S. degree from the Department of Mathematical Science, Dalian Maritime University, China, in 2013. She is currently an M.S. candidate in the Department of Mathematics, School of Science, Beijing Jiaotong University, China. Her current research interests focus on stochastic dynamical systems, nonlinear dynamics and control.
