Mean-square dissipativity of numerical methods for a class of stochastic neural networks with fractional Brownian motion and jumps


Author's Accepted Manuscript

Mean-square dissipativity of numerical methods for a class of stochastic neural networks with fractional Brownian motion and jumps
Weijun Ma, Baocang Ding, Hongfu Yang, Qimin Zhang

www.elsevier.com/locate/neucom

PII: S0925-2312(15)00439-7
DOI: http://dx.doi.org/10.1016/j.neucom.2015.03.072
Reference: NEUCOM15350
To appear in: Neurocomputing

Received date: 31 October 2014
Revised date: 7 February 2015
Accepted date: 23 March 2015

Cite this article as: Weijun Ma, Baocang Ding, Hongfu Yang, Qimin Zhang, Mean-square dissipativity of numerical methods for a class of stochastic neural networks with fractional Brownian motion and jumps, Neurocomputing, http://dx.doi.org/10.1016/j.neucom.2015.03.072

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting galley proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Mean-square dissipativity of numerical methods for a class of stochastic neural networks with fractional Brownian motion and jumps

Weijun Ma¹, Baocang Ding¹,*, Hongfu Yang², Qimin Zhang²,³

¹ School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, PR China
² School of Mathematics and Information Science, Beifang University of Nationalities, Yinchuan 750021, PR China
³ School of Mathematics and Computer Science, Ningxia University, Yinchuan 750021, PR China

Abstract: In this paper, we introduce a class of stochastic neural networks with fractional Brownian motion (fBm) and Poisson jumps, and study the mean-square dissipativity of numerical methods applied to this class of networks. Conditions under which the underlying systems are mean-square dissipative are established. It is shown that mean-square dissipativity is preserved by the compensated split-step backward Euler method and the compensated backward Euler method without any restriction on the stepsize, while the split-step backward Euler method and the backward Euler method reproduce mean-square dissipativity only under a stepsize constraint. These results indicate that the compensated numerical methods are superior to the non-compensated ones in terms of mean-square dissipativity. Finally, an example is given for illustration.

Keywords: Stochastic neural networks; Mean-square dissipativity; Numerical methods; Fractional Brownian motion; Poisson jumps

1 Introduction

In recent years, neural networks have been widely investigated because of their extensive applications in classification, parallel computation, optimization and many other fields, e.g., [1-4]. In such applications, it is of prime importance to ensure that

*Corresponding author. E-mail addresses: weijunma [email protected](W. Ma), [email protected] (B. Ding), hongfu [email protected] (H. Yang), [email protected] (Q. Zhang).


the designed neural network is stable. Therefore, the stability of neural networks has received much attention [5-10].

Many problems in physics and engineering are modeled by dissipative differential equations, in which an energy-loss mechanism is present. Such systems possess a bounded, positively invariant absorbing set that all trajectories starting from any bounded set enter in finite time and thereafter remain inside. The concept of dissipativity in dynamical systems, introduced in the 1970s, generalizes the idea of Lyapunov stability. It has applications in diverse areas such as stability theory, chaos and synchronization theory, system norm estimation, and robust control. It is therefore a very interesting topic to investigate the dissipativity of dynamical systems, and some results on the dissipativity of neural networks exist [11-17].

On the other hand, in the real world neural networks are often subject to environmental disturbances; in particular, the signal transfer within neural networks is always affected by stochastic perturbations. Hence, in order to reflect more realistic dynamical behaviors, Brownian motion has been introduced to model stochastic phenomena in neural networks; see [18-27] and the references therein. Recently, Song et al. [28] studied the global synchronization of complex networks perturbed by Poisson noise, and Zhang et al. [29] analyzed the global synchronization of complex networks perturbed by Brownian motion and Poisson jumps. Moreover, most stochastic neural networks, like stochastic differential equations in general, do not have explicit solutions. Thus, appropriate numerical approximation schemes, such as the split-step backward Euler scheme, are needed in order to apply stochastic neural networks in practice or to study their properties.
There is little work on the mean-square dissipativity of numerical methods for stochastic neural networks with fBm and jumps, although there are many papers concerning the numerical solutions of stochastic differential equations [30-38]. The fractional Brownian motion (fBm) is a family of centered Gaussian processes with continuous sample paths indexed by the Hurst parameter $H \in (0, 1)$. It is a self-similar process with stationary increments, and it has long memory when $H > \frac{1}{2}$. These significant properties make fBm a natural candidate as a model for noise in a wide variety of physical phenomena, such as mathematical finance, communication networks, hydrology and medicine. Therefore, it is important to study the stochastic calculus with respect to fBm and related issues (see Mishura [39] and the references therein for a more complete presentation of this subject). Recently, stochastic differential equations driven by fBm have attracted much attention. The first results were established by Ferrante and Rovira [40]. Since then, based on different settings, various forms of equations have been studied; for example, finite-dimensional equations have been investigated in [41-46], and infinite-dimensional equations in a Hilbert space were considered in [47-50]. An advantage of fBm models over classical Brownian motion models is that fBm systems have long memory when $H > \frac{1}{2}$. In view of this fact, the incorporation of a memory term into a neural network model is an extremely important improvement. Therefore, it is necessary and interesting to study stochastic neural networks with fBm both in theory and in practice.

In this paper, we consider a class of stochastic neural networks with fBm and Poisson jumps of the form
$$\begin{cases} dx(t) = [-Ax(t^-) + Bf(x(t^-))]\,dt + g(t, x(t^-))\,dB^H(t) + h(t, x(t^-))\,dN(t), & t \in [0, T], \\ x(0^-) = x_0, \end{cases} \tag{1.1}$$
where $0 < T < +\infty$, $x_0$ is an $n$-dimensional random variable with $E|x_0|^2 < +\infty$, $E$ denoting the mathematical expectation with respect to $P$; $N(t)$ is a scalar Poisson process; $x(t^-) := \lim_{s \to t^-} x(s)$; $x(t) = (x_1(t), x_2(t), \cdots, x_n(t))^T \in \mathbb{R}^n$ is the state vector associated with the neurons; $A = \mathrm{diag}(a_1, a_2, \cdots, a_n) > 0$ with $a_i > 0$ represents the self-feedback connection weight matrix; $B = (b_{ij})_{n \times n}$ is the connection weight matrix; $f(x(t)) = (f(x_1(t)), \cdots, f(x_n(t)))^T$ is a vector-valued activation function; $g : [0, T] \times \mathbb{R}^n \to \mathbb{R}^{n \times m}$ is the noise intensity function matrix; and $h : [0, T] \times \mathbb{R}^n \to \mathbb{R}^n$ is a jump coefficient.

A new stochastic neural network system is given by model (1.1). It extends Song et al. [28] and Zhang et al. [29]: accounting for stochastic environmental noise leads to the stochastic neural network system with fBm and jumps (1.1), which is more realistic. Motivated by the above discussions, in this paper we study the mean-square dissipativity of numerical methods for stochastic neural networks with fBm and Poisson jumps under the given conditions. This work differs from existing results (see, e.g., Song et al. [28], Zhang et al. [29], Li et al. [37] and Rathinasamy [38]) in that (a) both compensated and non-compensated numerical methods are considered, and (b) mean-square dissipativity and fBm are involved.

The rest of the paper is organized as follows. In Section 2, we introduce some notations, definitions and conditions which will be used throughout the rest of the paper. In Section 3, some criteria for the mean-square dissipativity of (1.1) are given. The mean-square dissipativity of the split-step backward Euler method and the compensated split-step backward Euler method for (1.1) is established in Section 4. Section 5 is devoted to proving that the backward Euler method and the compensated backward Euler method can inherit the mean-square dissipativity of (1.1). In Section 6 we give an example to illustrate the main results. Finally, a conclusion is drawn in Section 7.

2 Preliminaries

Throughout this paper, unless otherwise specified, we use the following notations. If $A$ is a vector or matrix, its transpose is denoted by $A^T$. Let $\mathbb{R}^n$ be the Euclidean space with inner product $\langle x, y\rangle = y^T x$ and corresponding norm $|x| = \sqrt{\langle x, x\rangle}$ for $x = (x_1, x_2, \cdots, x_n)^T$, $y = (y_1, y_2, \cdots, y_n)^T \in \mathbb{R}^n$. $\mathbb{R}^{n \times m}$ denotes the set of $n \times m$ real matrices. If $A \in \mathbb{R}^{n \times n}$, its operator norm is $\|A\| = \sup\{|Ax| : |x| = 1\} = \sqrt{\lambda_{\max}(A^T A)}$, where $\lambda_{\max}(A^T A)$ denotes the maximum eigenvalue of $A^T A$; $\lambda_{\min}(A)$ denotes the minimum eigenvalue of $A$.

Let $(\Omega, \mathcal{F}, P)$ be a complete probability space. Let $B^H(t)$ be an $m$-dimensional fBm with Hurst parameter $H \in (\frac{1}{2}, 1)$ and $N(t)$ a scalar Poisson process with intensity $\lambda > 0$, both defined on this probability space. We assume that the Poisson process $N(t)$ is independent of the fBm $B^H(t)$.

Definition 2.1. A fractional Brownian motion (fBm) $B^H = \{B^H(t), t \in \mathbb{R}\}$ (we use the same symbol $B^H$ for one-dimensional fBm without confusion) with Hurst parameter $0 < H < 1$ is a continuous, centered Gaussian process with covariance function
$$R_H(t, s) = E[B^H(t)B^H(s)] = \frac{1}{2}\left(|t|^{2H} + |s|^{2H} - |t - s|^{2H}\right), \quad t, s \in \mathbb{R}.$$
For $H = \frac{1}{2}$, the fBm is a standard Brownian motion.

Remark 2.2. By Definition 2.1, a standard fBm $B^H$ has the following properties:
(i) $B^H(0) = 0$ and $E[B^H(t)] = 0$ for all $t \geq 0$.
(ii) $B^H$ has homogeneous increments, i.e., $B^H(t + s) - B^H(s)$ has the same law as $B^H(t)$ for $s, t \geq 0$.
(iii) $B^H$ is a Gaussian process and $E[B^H(t)^2] = t^{2H}$, $t \geq 0$, for all $H \in (0, 1)$.
(iv) $B^H$ has continuous trajectories.
(v) For any $\alpha > 0$ and every $s, t \in \mathbb{R}$, $E[|B^H(t) - B^H(s)|^{\alpha}] = E[|B^H(1)|^{\alpha}]\,|t - s|^{\alpha H}$.

It is known that $B^H(t)$ with $H > \frac{1}{2}$ admits the following Volterra representation:
$$B^H(t) = \int_0^t K(t, s)\, dB(s),$$
where $B$ is a standard Brownian motion and the Volterra kernel $K(t, s)$ is given by
$$K(t, s) = c_H \int_s^t (u - s)^{H - \frac{3}{2}} \left(\frac{u}{s}\right)^{H - \frac{1}{2}} du, \quad t \geq s,$$
where $c_H = \sqrt{\frac{H(2H - 1)}{\beta(2 - 2H,\, H - \frac{1}{2})}}$, with $\beta(\cdot, \cdot)$ denoting the Beta function. We take $K(t, s) = 0$ if $t \leq s$. For a deterministic function $\varphi \in L^2([0, T])$, the fractional Wiener integral of $\varphi$ with respect to $B^H$ is defined by
$$\int_0^T \varphi(s)\, dB^H(s) = \int_0^T (K_H^* \varphi)(s)\, dB(s),$$
where $(K_H^* \varphi)(s) = \int_s^T \varphi(r)\, \frac{\partial K}{\partial r}(r, s)\, dr$.

The following Itô formula for generalized functionals of an fBm with arbitrary Hurst parameter was proved in [51-52].

Lemma 2.3. Let $f, g : [0, T] \to \mathbb{R}$ be deterministic continuous functions. If
$$x(t) = x_0 + \int_0^t f(s)\, ds + \int_0^t g(s)\, dB^H(s), \quad t \in [0, T],$$
where $x_0$ is a constant, and $F \in C^{1,2}([0, T] \times \mathbb{R})$, then
$$F(t, x(t)) = F(0, x_0) + \int_0^t \frac{\partial F}{\partial s}(s, x(s))\, ds + \int_0^t \frac{\partial F}{\partial x}(s, x(s))\, dx(s) + H \int_0^t s^{2H - 1} g^2(s)\, \frac{\partial^2 F}{\partial x^2}(s, x(s))\, ds. \tag{2.1}$$

Remark 2.4. Taking $H = \frac{1}{2}$ in Lemma 2.3, the fBm-Itô formula (2.1) reduces to the classical Itô formula [4].

Lemma 2.5. For any $a, b \in \mathbb{R}^n$ and any constant $\sigma > 0$,
$$2a^T b \leq \sigma a^T X a + \sigma^{-1} b^T X^{-1} b, \tag{2.2}$$
where $X$ is any matrix with $X > 0$.
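Lemma 2.5 is a weighted Young-type inequality and is easy to spot-check numerically. The following sketch (dimension, sampling and tolerances are our own, purely illustrative choices) draws random vectors and random symmetric positive definite matrices and verifies (2.2):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
for _ in range(200):
    a, b = rng.standard_normal(n), rng.standard_normal(n)
    M = rng.standard_normal((n, n))
    X = M @ M.T + n * np.eye(n)              # symmetric positive definite
    sigma = float(rng.uniform(0.1, 10.0))
    lhs = 2 * a @ b                          # 2 a^T b
    rhs = sigma * a @ X @ a + b @ np.linalg.solve(X, b) / sigma
    assert lhs <= rhs + 1e-9                 # inequality (2.2)
```

Equality is attained when $\sigma X a = b$; e.g. for $a = b$, $X = I$ and $\sigma = 1$ both sides equal $2|a|^2$.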

Definition 2.6. Eq. (1.1) is said to be mean-square dissipative if there exists a bounded set $\mathcal{B} \subset \mathbb{R}$ such that, for any given bounded set $D \subset \mathbb{R}^n$, there is a time $t^* = t^*(D)$ such that, for any given initial value contained in $D$, the second moment $E|x(t)|^2$ of the corresponding solution is contained in $\mathcal{B}$ for all $t \geq t^*$. Here $\mathcal{B}$ is referred to as a mean-square absorbing set of (1.1).

Furthermore, we impose the following conditions:

(c1) There exist constants $k_1 \geq 0$, $k_2 \geq 0$ such that for all $x \in \mathbb{R}^n$,
$$|f(x)|^2 \leq k_1 + k_2 |x|^2. \tag{2.3}$$

(c2) There exist constants $k_3 \geq 0$, $k_4 \geq 0$ such that for all $x \in \mathbb{R}^n$,
$$|g(t, x)|^2 \leq k_3 |x|^2, \quad |h(t, x)|^2 \leq k_4 |x|^2. \tag{2.4}$$

Remark 2.7. The activation function $f(x(t))$ in (2.3) is not required to be monotone or differentiable. Hence, condition (2.3) is weaker than those given in the earlier literature (see, e.g., [11-17, 21]).

Remark 2.8. The model (1.1) is new and more general than those discussed in the previous literature. For example, if we do not consider Markovian jumps and let $H = \frac{1}{2}$ and $h(t, x(t)) \equiv 0$, then model (1.1) coincides with the stochastic Hopfield neural networks in [24]. Furthermore, when $H = \frac{1}{2}$, $g(t, x(t)) = Dx(t)$ ($D \in \mathbb{R}^{n \times n}$) and $h(t, x(t)) \equiv 0$, model (1.1) is the stochastic recurrent neural network in [25]. Therefore, model (1.1) extends those reported in [24-25].
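Before turning to the dissipativity analysis, we note that Definition 2.1 characterizes fBm entirely through its covariance $R_H$, so the driving noise in (1.1) can be sampled exactly on a finite grid by a Cholesky factorization of the covariance matrix. A minimal sketch (the function name, grid size and Hurst value are our own illustrative choices, not from the paper):

```python
import numpy as np

def fbm_sample(n_steps, T, H, rng):
    """Exact sample of (B^H(t_1), ..., B^H(t_n)) on a uniform grid,
    obtained by Cholesky factorization of R_H(t_i, t_j)."""
    t = np.linspace(T / n_steps, T, n_steps)
    ti, tj = np.meshgrid(t, t, indexing="ij")
    # R_H(t, s) = (|t|^{2H} + |s|^{2H} - |t - s|^{2H}) / 2  (Definition 2.1)
    cov = 0.5 * (ti**(2 * H) + tj**(2 * H) - np.abs(ti - tj)**(2 * H))
    L = np.linalg.cholesky(cov)
    return t, L @ rng.standard_normal(n_steps)

t, bH = fbm_sample(1000, 1.0, 0.75, np.random.default_rng(0))
```

For $H = \frac{1}{2}$ the covariance reduces to $\min(t, s)$ and the sample is a standard Brownian path; for $H > \frac{1}{2}$ the increments are positively correlated, which is the long-memory effect discussed in the introduction.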

3 Mean-square dissipativity of stochastic neural networks with fBm and jumps

In this section, we study the mean-square dissipativity of (1.1). Firstly, we state the main theorem.

Theorem 3.1. Assume that system (1.1) satisfies (2.3)-(2.4) with
$$l := -2\lambda_{\min}(A) + \sigma k_2 \|B\|^2 + \sigma^{-1} + 2Hk_3 T^{2H-1} + 2\lambda\sqrt{k_4} + \lambda k_4 < 0.$$
Then,
(i) for any given $\epsilon > 0$, there exists a positive number $t^*$ such that
$$E|x(t)|^2 \leq -\frac{\sigma k_1 \|B\|^2}{l} + \epsilon, \quad t \geq t^*;$$
(ii) for any given $\epsilon > 0$, (1.1) is mean-square dissipative with mean-square absorbing set $\mathcal{B} = \left(0, -\frac{\sigma k_1 \|B\|^2}{l} + \epsilon\right)$.

Proof. From (1.1), applying the fBm-Itô formula (2.1) to $|x(t^-)|^2$ yields
$$\begin{aligned} |x(t^-)|^2 ={}& |x_0|^2 + \int_0^t 2\langle x(s^-), -Ax(s^-)\rangle\, ds + \int_0^t 2\langle x(s^-), Bf(x(s^-))\rangle\, ds \\ &+ 2H\int_0^t s^{2H-1}|g(s, x(s^-))|^2\, ds + 2\int_0^t \langle x(s^-), g(s, x(s^-))\rangle\, dB^H(s) \\ &+ \int_0^t |h(s, x(s^-))|^2\, dN(s) + 2\int_0^t \langle x(s^-), h(s, x(s^-))\rangle\, dN(s) \\ ={}& |x_0|^2 + \int_0^t 2\langle x(s^-), -Ax(s^-)\rangle\, ds + \int_0^t 2\langle x(s^-), Bf(x(s^-))\rangle\, ds \\ &+ 2H\int_0^t s^{2H-1}|g(s, x(s^-))|^2\, ds + 2\int_0^t \langle x(s^-), g(s, x(s^-))\rangle\, dB^H(s) \\ &+ \int_0^t \left[2\langle x(s^-), h(s, x(s^-))\rangle + |h(s, x(s^-))|^2\right] d\bar{N}(s) \\ &+ \int_0^t \left[2\lambda\langle x(s^-), h(s, x(s^-))\rangle + \lambda|h(s, x(s^-))|^2\right] ds, \end{aligned} \tag{3.1}$$
where $\bar{N}(t) := N(t) - \lambda t$ is the compensated Poisson process. Taking expectations on both sides of (3.1), we obtain
$$\begin{aligned} E|x(t^-)|^2 ={}& E|x_0|^2 + \int_0^t 2E\langle x(s^-), -Ax(s^-)\rangle\, ds + \int_0^t 2E\langle x(s^-), Bf(x(s^-))\rangle\, ds \\ &+ 2H\int_0^t s^{2H-1}E|g(s, x(s^-))|^2\, ds + \int_0^t \left[2\lambda E\langle x(s^-), h(s, x(s^-))\rangle + \lambda E|h(s, x(s^-))|^2\right] ds. \end{aligned} \tag{3.2}$$
By $\langle x, y\rangle = y^T x$, (2.3) and Lemma 2.5, we get
$$\langle x(t), -Ax(t)\rangle = -(Ax(t))^T x(t) = -x^T(t)Ax(t) \leq -\lambda_{\min}(A)|x(t)|^2, \tag{3.3}$$
and
$$\begin{aligned} 2\langle x(t), Bf(x(t))\rangle &= 2(Bf(x(t)))^T x(t) = 2f^T(x(t))B^T x(t) \\ &\leq \sigma f^T(x(t))B^T B f(x(t)) + \sigma^{-1} x^T(t)x(t) \\ &\leq \sigma\lambda_{\max}(B^T B)|f(x(t))|^2 + \sigma^{-1}|x(t)|^2 \\ &= \sigma\|B\|^2 |f(x(t))|^2 + \sigma^{-1}|x(t)|^2 \\ &\leq \sigma k_1 \|B\|^2 + (\sigma k_2 \|B\|^2 + \sigma^{-1})|x(t)|^2. \end{aligned} \tag{3.4}$$
Therefore, using (2.4), (3.3), (3.4) and the Cauchy-Schwarz inequality, it follows that for any $t \in [0, T]$,
$$\begin{aligned} E|x(t^-)|^2 &\leq E|x_0|^2 + \left(-2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1} + 2Hk_3 T^{2H-1} + 2\lambda\sqrt{k_4} + \lambda k_4\right)\int_0^t E|x(s^-)|^2\, ds + \int_0^t \sigma k_1\|B\|^2\, ds \\ &= E|x_0|^2 + \int_0^t \sigma k_1\|B\|^2\, ds + l\int_0^t E|x(s^-)|^2\, ds. \end{aligned} \tag{3.5}$$
By the generalized Bellman-Gronwall-type inequality (see [53]), one can show that
$$E|x(t)|^2 \leq \left(E|x_0|^2 + \int_0^t \sigma k_1\|B\|^2 \exp\Big(-\int_0^u l\, dv\Big)\, du\right)\exp\Big(\int_0^t l\, du\Big) = -\frac{\sigma k_1\|B\|^2}{l} + \left(E|x_0|^2 + \frac{\sigma k_1\|B\|^2}{l}\right)e^{lt}. \tag{3.6}$$
Let $J = \sup_{x \in D} |x(t)|^2$ for any given bounded set $D$. If $E|x_0|^2 + \frac{\sigma k_1\|B\|^2}{l} \geq 0$ for any given initial value $x_0 \in D$, then the fact that $\mathcal{B} = \left(0, -\frac{\sigma k_1\|B\|^2}{l} + \epsilon\right)$ is a mean-square absorbing set of (1.1) may be seen from (3.6) by choosing $t^*$ such that
$$\left(J + \frac{\sigma k_1\|B\|^2}{l}\right)e^{lt^*} \leq \epsilon,$$
where $\epsilon > 0$. If $E|x_0|^2 + \frac{\sigma k_1\|B\|^2}{l} < 0$, it is easy to see that $E|x(t)|^2 \leq -\frac{\sigma k_1\|B\|^2}{l} + \epsilon$ for all $0 \leq t \leq T$. The proof is complete. □

Remark 3.2. In [21], some sufficient conditions for the global dissipativity of stochastic neural networks with time delay were obtained by constructing a Lyapunov function or a Lyapunov-Krasovskii functional, using linear matrix inequalities (LMIs), etc. In general, it is difficult to construct a suitable Lyapunov function for stochastic differential equations with retarded arguments, even with constant delays, in order to handle dissipativity. The LMI approach to the dissipativity problem of neural networks also involves some difficulty in determining the constraint conditions on the network parameters, as it requires testing the positive definiteness of high-dimensional matrices. In this work, however, a novel sufficient condition for the mean-square dissipativity of stochastic neural networks with fBm and jumps is obtained via properties of the inner product, the fBm-Itô formula and matrix theory.

Remark 3.3. By the mean-square dissipativity in Theorem 3.1, the solution of system (1.1) is bounded in the mean-square sense.
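The dissipativity test of Theorem 3.1 reduces to a single scalar inequality, so it is cheap to evaluate numerically. The sketch below computes $l$ and, when $l < 0$, the absorbing-set radius $-\sigma k_1\|B\|^2/l$; all parameter values are hypothetical choices for illustration (they are not the example of Section 6), and the function name is our own:

```python
import numpy as np

def dissipativity_constants(A, B, H, T, lam, k1, k2, k3, k4, sigma):
    """Evaluate l from Theorem 3.1 and, for l < 0, the radius
    -sigma*k1*||B||^2 / l of the mean-square absorbing set."""
    lam_min_A = np.linalg.eigvalsh(A).min()
    B_norm_sq = np.linalg.eigvalsh(B.T @ B).max()     # ||B||^2 = lambda_max(B^T B)
    l = (-2 * lam_min_A + sigma * k2 * B_norm_sq + 1 / sigma
         + 2 * H * k3 * T**(2 * H - 1) + 2 * lam * np.sqrt(k4) + lam * k4)
    radius = -sigma * k1 * B_norm_sq / l if l < 0 else np.inf
    return l, radius

# hypothetical network parameters, for illustration only
A = np.diag([10.0, 12.0])
B = np.array([[1.0, 0.5], [-0.5, 1.0]])
l, r = dissipativity_constants(A, B, H=0.75, T=10.0, lam=1.0,
                               k1=1.0, k2=1.0, k3=0.2, k4=0.1, sigma=1.0)
```

For these values $l \approx -16.07 < 0$, so the hypothetical system would be mean-square dissipative with a small absorbing radius; increasing the noise constants $k_3, k_4$ or the jump intensity eventually makes $l$ positive and the criterion inconclusive.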

4 Mean-square dissipativity of the split-step backward Euler method and the compensated split-step backward Euler method

According to Theorem 3.1, we investigate the mean-square dissipativity of the split-step backward Euler (SSBE) method and the compensated split-step backward Euler (CSSBE) method for Eq. (1.1). The SSBE method for (1.1) reads
$$y_n^* = y_n - Ay_n^* h + hBf(y_n^*), \tag{4.1a}$$
$$y_{n+1} = y_n^* + g(t_n, y_n^*)\Delta B_n^H + h(t_n, y_n^*)\Delta N_n, \tag{4.1b}$$
with initial value $y_0 = x_0$. Here $h > 0$ is the stepsize and $y_n$ is the approximation to $x(t_n)$ for $t_n = nh$; $\Delta B_n^H = B^H(t_{n+1}) - B^H(t_n)$ and $\Delta N_n = N(t_{n+1}) - N(t_n)$ denote the increments of the fBm and the Poisson process, respectively.

Theorem 4.1. Assume that system (1.1) satisfies (2.3)-(2.4) with $l = -2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1} + 2Hk_3 T^{2H-1} + 2\lambda\sqrt{k_4} + \lambda k_4 < 0$. Let $y_n$ be the numerical solution generated by the SSBE method (4.1a)-(4.1b). Then for any given $\epsilon > 0$, there exists an $n_1$ such that
$$E|y_n|^2 \leq -\frac{\sigma k_1\|B\|^2\left(1 + (2Hk_3 T^{2H-1} + 2\lambda\sqrt{k_4} + \lambda k_4)h + \lambda^2 h^2 k_4\right)}{l + \lambda^2 k_4 h} + \epsilon, \quad n \geq n_1,\ h < h_0,$$
where
$$h_0 = \begin{cases} -\dfrac{l}{\lambda^2 k_4}, & k_4 > 0, \\ +\infty, & k_4 = 0. \end{cases}$$

Proof. From (4.1a), we have
$$|y_n^* + Ay_n^* h - hBf(y_n^*)|^2 = |y_n|^2. \tag{4.2}$$
Therefore, by (3.3)-(3.4), we get
$$\begin{aligned} |y_n^*|^2 &\leq -2h\langle y_n^*, Ay_n^*\rangle + 2h\langle y_n^*, Bf(y_n^*)\rangle + |y_n|^2 \\ &\leq (-2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1})h|y_n^*|^2 + \sigma k_1\|B\|^2 h + |y_n|^2. \end{aligned}$$
Since $l < 0$, we have $1 - (-2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1})h > 0$, and hence
$$|y_n^*|^2 \leq \frac{\sigma k_1\|B\|^2 h}{1 - (-2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1})h} + \frac{|y_n|^2}{1 - (-2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1})h}. \tag{4.3}$$
It follows from (4.1b) that
$$\begin{aligned} |y_{n+1}|^2 ={}& |y_n^*|^2 + |g(t_n, y_n^*)|^2|\Delta B_n^H|^2 + |h(t_n, y_n^*)|^2|\Delta N_n|^2 + 2\langle y_n^*, g(t_n, y_n^*)\Delta B_n^H\rangle \\ &+ 2\langle y_n^*, h(t_n, y_n^*)\Delta N_n\rangle + 2\langle g(t_n, y_n^*)\Delta B_n^H, h(t_n, y_n^*)\Delta N_n\rangle. \end{aligned} \tag{4.4}$$
By $E(\Delta B_n^H) = 0$, $E(\Delta B_n^H)^2 = h^{2H}$, $E(\Delta N_n) = \lambda h$ and $E(\Delta N_n)^2 = \lambda h + \lambda^2 h^2$, we have
$$E|y_{n+1}|^2 = E|y_n^*|^2 + h^{2H}E|g(t_n, y_n^*)|^2 + (\lambda h + \lambda^2 h^2)E|h(t_n, y_n^*)|^2 + 2\lambda h E\langle y_n^*, h(t_n, y_n^*)\rangle. \tag{4.5}$$
Therefore, substituting (4.3) into (4.5) and applying the Cauchy-Schwarz inequality and (2.4), one obtains
$$\begin{aligned} E|y_{n+1}|^2 &\leq \left(1 + k_3 h^{2H} + (2\lambda\sqrt{k_4} + \lambda k_4)h + \lambda^2 h^2 k_4\right)E|y_n^*|^2 \\ &= \left(1 + (k_3 h^{2H-1} + 2\lambda\sqrt{k_4} + \lambda k_4)h + \lambda^2 h^2 k_4\right)E|y_n^*|^2 \\ &\leq \left(1 + (2Hk_3 T^{2H-1} + 2\lambda\sqrt{k_4} + \lambda k_4)h + \lambda^2 h^2 k_4\right)E|y_n^*|^2 \\ &\leq C_1 E|y_n|^2 + D_1, \end{aligned} \tag{4.6}$$
where
$$C_1 = \frac{1 + (2Hk_3 T^{2H-1} + 2\lambda\sqrt{k_4} + \lambda k_4)h + \lambda^2 h^2 k_4}{1 - (-2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1})h}, \quad D_1 = \frac{\sigma k_1\|B\|^2 h\left(1 + (2Hk_3 T^{2H-1} + 2\lambda\sqrt{k_4} + \lambda k_4)h + \lambda^2 h^2 k_4\right)}{1 - (-2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1})h}.$$
By recursion, we derive from (4.6) that
$$E|y_n|^2 \leq \frac{D_1(1 - C_1^n)}{1 - C_1} + E|x_0|^2 C_1^n.$$
It is straightforward to check that $0 < C_1 < 1$ when $h < -\frac{l}{\lambda^2 k_4}$ and $k_4 > 0$; if $k_4 = 0$, then $0 < C_1 < 1$ for all $h > 0$. Then
$$\limsup_{n\to\infty} E|y_n|^2 \leq \frac{D_1}{1 - C_1} = -\frac{\sigma k_1\|B\|^2\left(1 + (2Hk_3 T^{2H-1} + 2\lambda\sqrt{k_4} + \lambda k_4)h + \lambda^2 h^2 k_4\right)}{l + \lambda^2 k_4 h}.$$
Therefore, for any given $\epsilon > 0$, there exists an $n_1$ such that
$$E|y_n|^2 \leq -\frac{\sigma k_1\|B\|^2\left(1 + (2Hk_3 T^{2H-1} + 2\lambda\sqrt{k_4} + \lambda k_4)h + \lambda^2 h^2 k_4\right)}{l + \lambda^2 k_4 h} + \epsilon, \quad n \geq n_1,\ h < h_0,$$
with $h_0$ as above. The proof is complete. □

Next, we discuss the mean-square dissipativity of the CSSBE method. Given a stepsize $h > 0$, an adaptation of the CSSBE method to (1.1) leads to
$$y_n^* = y_n - Ay_n^* h + hf_\lambda(t_n, y_n^*), \tag{4.7a}$$
$$y_{n+1} = y_n^* + g(t_n, y_n^*)\Delta B_n^H + h(t_n, y_n^*)\Delta \bar{N}_n, \tag{4.7b}$$
where $t_n = nh$, $y_n \approx x(t_n)$, $y_0 = x_0$ and $f_\lambda(t, x) = Bf(x) + \lambda h(t, x)$; $\Delta B_n^H = B^H(t_{n+1}) - B^H(t_n)$ and $\Delta\bar{N}_n = \bar{N}(t_{n+1}) - \bar{N}(t_n)$ represent the increments of the fBm and the compensated Poisson process, respectively.

Theorem 4.2. Assume that system (1.1) satisfies (2.3)-(2.4) with $l = -2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1} + 2Hk_3 T^{2H-1} + 2\lambda\sqrt{k_4} + \lambda k_4 < 0$. Let $y_n$ be the numerical solution produced by the CSSBE method (4.7a)-(4.7b). Then for any given $\epsilon > 0$, there exists an $n_2$ such that
$$E|y_n|^2 \leq -\frac{\sigma k_1\|B\|^2\left(1 + (2Hk_3 T^{2H-1} + \lambda k_4)h\right)}{l} + \epsilon, \quad n \geq n_2,\ h > 0.$$

Proof. Using condition (2.4) and (3.4), we obtain
$$2\langle x, f_\lambda(t, x)\rangle = 2\langle x, Bf(x)\rangle + 2\lambda\langle x, h(t, x)\rangle \leq \sigma k_1\|B\|^2 + \left(\sigma k_2\|B\|^2 + \sigma^{-1} + 2\lambda\sqrt{k_4}\right)|x|^2. \tag{4.8}$$
A combination of (4.7a) and (4.8) gives
$$\begin{aligned} |y_n^*|^2 &\leq -2h\langle y_n^*, Ay_n^*\rangle + 2h\langle y_n^*, f_\lambda(t_n, y_n^*)\rangle + |y_n|^2 \\ &\leq \sigma k_1\|B\|^2 h + \left(-2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1} + 2\lambda\sqrt{k_4}\right)h|y_n^*|^2 + |y_n|^2. \end{aligned}$$
Since $l < 0$, $k_3 \geq 0$, $\lambda > 0$ and $k_4 \geq 0$, we have $-2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1} + 2\lambda\sqrt{k_4} < -(2Hk_3 T^{2H-1} + \lambda k_4) \leq 0$ and $1 - (-2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1} + 2\lambda\sqrt{k_4})h > 0$, so
$$|y_n^*|^2 \leq \frac{|y_n|^2}{1 - (-2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1} + 2\lambda\sqrt{k_4})h} + \frac{\sigma k_1\|B\|^2 h}{1 - (-2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1} + 2\lambda\sqrt{k_4})h}. \tag{4.9}$$
Noting that $E(\Delta B_n^H) = 0$, $E(\Delta B_n^H)^2 = h^{2H}$, $E(\Delta\bar{N}_n) = 0$ and $E(\Delta\bar{N}_n)^2 = \lambda h$, using (2.4) and (4.9) yields
$$E|y_{n+1}|^2 \leq \left(1 + (2Hk_3 T^{2H-1} + \lambda k_4)h\right)E|y_n^*|^2 \leq C_2 E|y_n|^2 + D_2, \tag{4.10}$$
where
$$C_2 = \frac{1 + (2Hk_3 T^{2H-1} + \lambda k_4)h}{1 - (-2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1} + 2\lambda\sqrt{k_4})h}, \quad D_2 = \frac{\sigma k_1\|B\|^2 h\left(1 + (2Hk_3 T^{2H-1} + \lambda k_4)h\right)}{1 - (-2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1} + 2\lambda\sqrt{k_4})h}.$$
From (4.10) we obtain the estimate
$$E|y_n|^2 \leq \frac{D_2(1 - C_2^n)}{1 - C_2} + E|x_0|^2 C_2^n.$$
It is straightforward to check that $0 < C_2 < 1$ for all $h > 0$. Then
$$\limsup_{n\to\infty} E|y_n|^2 \leq \frac{D_2}{1 - C_2} = -\frac{\sigma k_1\|B\|^2\left(1 + (2Hk_3 T^{2H-1} + \lambda k_4)h\right)}{l}.$$
Therefore, for any given $\epsilon > 0$, there exists an $n_2$ such that
$$E|y_n|^2 \leq -\frac{\sigma k_1\|B\|^2\left(1 + (2Hk_3 T^{2H-1} + \lambda k_4)h\right)}{l} + \epsilon, \quad n \geq n_2,\ h > 0.$$
The proof is complete. □
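One step of the SSBE scheme (4.1a)-(4.1b) and of the CSSBE scheme (4.7a)-(4.7b) can be sketched as follows. This is not the authors' code: the implicit stage is solved here by plain fixed-point iteration (contractive for small $h$ under the linear growth condition (2.3)), scalar driving noises are assumed, and all helper names are our own:

```python
import numpy as np

def implicit_stage(yn, h, drift, iters=60):
    """Solve y* = yn + h*drift(y*) by fixed-point iteration."""
    y = yn.copy()
    for _ in range(iters):
        y = yn + h * drift(y)
    return y

def ssbe_step(yn, tn, h, A, Bf, g, jmp, dBH, dN):
    # (4.1a): y* = yn - A y* h + h B f(y*)
    ystar = implicit_stage(yn, h, lambda y: -A @ y + Bf(y))
    # (4.1b): fBm and Poisson increments added explicitly
    return ystar + g(tn, ystar) * dBH + jmp(tn, ystar) * dN

def cssbe_step(yn, tn, h, A, Bf, g, jmp, lam, dBH, dN):
    # (4.7a): drift compensated by lam * h(t, y)   (jmp plays the role of h(t, x))
    ystar = implicit_stage(yn, h, lambda y: -A @ y + Bf(y) + lam * jmp(tn, y))
    # (4.7b): driven by the compensated increment dN - lam*h
    return ystar + g(tn, ystar) * dBH + jmp(tn, ystar) * (dN - lam * h)
```

The only difference between the two steps is the compensation: CSSBE moves the mean $\lambda h(t, y)$ of the jump term into the implicit stage, which is exactly what removes the stepsize restriction in Theorem 4.2.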

5 Mean-square dissipativity of the backward Euler method and the compensated backward Euler method

In this section, we discuss the ability of the backward Euler (BE) method and the compensated backward Euler (CBE) method to reproduce the mean-square dissipativity of (1.1). For a given constant stepsize $h > 0$, the BE method for (1.1) is defined by $y_0 = x_0$ and
$$y_{n+1} = y_n - Ay_{n+1}h + hBf(y_{n+1}) + g(t_n, y_n)\Delta B_n^H + h(t_n, y_n)\Delta N_n. \tag{5.1}$$
Here $y_n$ is the approximation to $x(t_n)$ for $t_n = nh$, and $\Delta B_n^H = B^H(t_{n+1}) - B^H(t_n)$ and $\Delta N_n = N(t_{n+1}) - N(t_n)$ denote the increments of the fBm and the Poisson process, respectively. The compensated Poisson process motivates an alternative to the BE method (5.1). We introduce the CBE method for (1.1) by $y_0 = x_0$ and
$$y_{n+1} = y_n - Ay_{n+1}h + hf_\lambda(t_{n+1}, y_{n+1}) + g(t_n, y_n)\Delta B_n^H + h(t_n, y_n)\Delta\bar{N}_n, \tag{5.2}$$
where $\Delta\bar{N}_n = \bar{N}(t_{n+1}) - \bar{N}(t_n)$.

The following two theorems give sufficient conditions for the mean-square dissipativity of the BE and CBE methods.

Theorem 5.1. Assume that conditions (2.3)-(2.4) are satisfied and $l = -2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1} + 2Hk_3 T^{2H-1} + 2\lambda\sqrt{k_4} + \lambda k_4 < 0$. Then for any given $\epsilon > 0$, there exists an $n_3$ such that
$$E|y_n|^2 \leq -\frac{\sigma k_1\|B\|^2}{l + \lambda^2 k_4 h} + \epsilon, \quad n \geq n_3,\ h < h_0,$$
where
$$h_0 = \begin{cases} -\dfrac{l}{\lambda^2 k_4}, & k_4 > 0, \\ +\infty, & k_4 = 0, \end{cases}$$
and $y_n$ is defined by (5.1).

Proof. It follows from (5.1) that
$$|y_{n+1} + Ay_{n+1}h - hBf(y_{n+1})|^2 = |g(t_n, y_n)\Delta B_n^H + h(t_n, y_n)\Delta N_n + y_n|^2. \tag{5.3}$$

Hence
$$\begin{aligned} |y_{n+1}|^2 \leq{}& -2h\langle y_{n+1}, Ay_{n+1}\rangle + 2h\langle y_{n+1}, Bf(y_{n+1})\rangle + |y_n|^2 + |g(t_n, y_n)|^2|\Delta B_n^H|^2 \\ &+ |h(t_n, y_n)|^2|\Delta N_n|^2 + 2\langle y_n, g(t_n, y_n)\Delta B_n^H\rangle + 2\langle y_n, h(t_n, y_n)\Delta N_n\rangle \\ &+ 2\langle g(t_n, y_n)\Delta B_n^H, h(t_n, y_n)\Delta N_n\rangle. \end{aligned} \tag{5.4}$$

Using the independence properties of $\Delta B_n^H$ and $\Delta N_n$ yields
$$\begin{aligned} E|y_{n+1}|^2 \leq{}& -2hE\langle y_{n+1}, Ay_{n+1}\rangle + 2hE\langle y_{n+1}, Bf(y_{n+1})\rangle + E|y_n|^2 \\ &+ h^{2H}E|g(t_n, y_n)|^2 + (\lambda h + \lambda^2 h^2)E|h(t_n, y_n)|^2 + 2\lambda h E\langle y_n, h(t_n, y_n)\rangle. \end{aligned} \tag{5.5}$$
Now, the Cauchy-Schwarz inequality together with (2.4), (3.3) and (3.4) gives
$$\left(1 - (-2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1})h\right)E|y_{n+1}|^2 \leq \left(1 + (2Hk_3 T^{2H-1} + 2\lambda\sqrt{k_4} + \lambda k_4)h + \lambda^2 h^2 k_4\right)E|y_n|^2 + \sigma k_1\|B\|^2 h. \tag{5.6}$$
Since $1 - (-2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1})h > 0$, one obtains
$$E|y_{n+1}|^2 \leq C_3 E|y_n|^2 + D_3, \tag{5.7}$$
where
$$C_3 = \frac{1 + (2Hk_3 T^{2H-1} + 2\lambda\sqrt{k_4} + \lambda k_4)h + \lambda^2 h^2 k_4}{1 - (-2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1})h}, \quad D_3 = \frac{\sigma k_1\|B\|^2 h}{1 - (-2\lambda_{\min}(A) + \sigma k_2\|B\|^2 + \sigma^{-1})h}.$$
By (5.7), we have
$$E|y_n|^2 \leq \frac{D_3(1 - C_3^n)}{1 - C_3} + E|x_0|^2 C_3^n.$$
It is straightforward to check that $0 < C_3 < 1$ when $h < -\frac{l}{\lambda^2 k_4}$ and $k_4 > 0$; in the case $k_4 = 0$, we deduce $0 < C_3 < 1$ for all $h > 0$. Then
$$\limsup_{n\to\infty} E|y_n|^2 \leq \frac{D_3}{1 - C_3} = -\frac{\sigma k_1\|B\|^2}{l + \lambda^2 k_4 h}.$$
Therefore, for any given $\epsilon > 0$, there exists an $n_3$ such that
$$E|y_n|^2 \leq -\frac{\sigma k_1\|B\|^2}{l + \lambda^2 k_4 h} + \epsilon, \quad n \geq n_3,\ h < h_0,$$
with $h_0$ as above.

σk1 B2 + , l

where yn is given by (5.2). 13

n ≥ n4 , h > 0,

Proof. Similarly, it follows from (5.2) that |yn+1 + Ayn+1h − hfλ (tn+1 , yn+1 )|2 = |g(tn , yn )ΔBnH + h(tn , yn )ΔN¯n + yn |2 . (5.8) ¯n yields Using the dependence properties of ΔBnH and ΔN E|yn+1|2 ≤ −2hEyn+1 , Ayn+1  + 2hEyn+1, fλ (tn+1 , yn+1 ) + E|yn |2 + h2H E|g(tn , yn )|2 + λhE|h(tn , yn )|2 .

(5.9)

Applying the condition (2.4),(3.3) and (4.8), we obtain from (5.9) √ (1 − (−2λmin (A) + σk2 B2 + σ −1 + 2λ k4 )h)E|yn+1|2 ≤ (1 + (2Hk3 T 2H−1 + λk4 )h)E|yn |2 + σk1 B2 h. (5.10) √ Since 1 − (−2λmin (A) + σk2 B2 + σ −1 + 2λ k4 )h > 0, one can get E|yn+1 |2 ≤ C4 E|yn |2 + D4 , 2H−1

(5.11) 2

+λk4 )h σk1 B h 3T √ √ D4 = 1−(−2λ (A)+σk where C4 = 1−(−2λ 1+(2Hk 2 −1 +2λ k )h , 2 −1 +2λ k )h . 4 2 B +σ 4 min (A)+σk2 B +σ min By recursive method, we can derive from (5.11) that

E|yn |2 ≤

D4 (1 − C4n ) + E|x0 |2 C4n . 1 − C4

It is straightforward to check that 0 < C4 < 1 for all h > 0. Then we have lim sup E|yn |2 ≤ n→∞

D4 σk1 B2 . =− 1 − C4 l

Hence, for any given  > 0, there exists an n4 such that E|yn |2 ≤ −

σk1 B2 + , l

n ≥ n4 , h > 0.

The proof is complete.  Remark 5.3. It is shown that split-step backward Euler method and backward Euler method can preserve mean-square dissipativity within a stepsize constraint, whereas compensated split-step backward Euler method and compensated backward Euler method could reproduce mean-square dissipativity without any restriction on stepsize. Furthermore, in view of mean-square dissipativity in Theorems 4.1-5.2, the numerical solutions of system (1.1) are also bounded in mean square sense.

14

6

One example

In this section we discuss an example to illustrate our theory. Consider a two-dimensional stochastic neural networks with fBm and jumps √ ⎧ ⎪ dx 60 0 x 2 1 (t) (t) 0.5x (t) + 1 1 1 1 ⎪ ⎪ √ = [− + ]dt ⎪ ⎪ 0 60 −1 2 0.5x2 (t) + 1 dx2 (t) x2 (t) ⎪ ⎨ √ tanh(x sin(x1 (t)) (t)) 1 ⎪ + dB H (t) + 3 dN(t), in [0, 100], ⎪ ⎪ (t)) sin(x tanh(x2 (t)) ⎪ 2 ⎪ ⎪ ⎩ (x1 (0), x2 (0))T = (0.1, 1)T , (6.1) where B H (t) be a scalar fBm with Hurst parameter H = 34 , and N(t) be a scalar T Poisson process with intensity λ = 9. Set h = N (N ∈ Z + ) and tn = nh (n = 0, 1, 2 · ··, N). Obviously, λmin (A) = 60, B2 = 5, the functions f, g and h satisfy conditions (2.3)-(2.4) with k2 = k3 = 1, k1 = 2, k4 = 3. Taking σ = 7.5, h = 0.02,  = 0.01, we have l = −9.189752130426868 < 0, h0 = 0.037817910001757, E|x(t)|2 ≤ 8.171264736583947. It follows from Theorem 3.1 that the system (6.1) is mean-square dissipative with a mean-square absorbing set B = (0, 8.171264736583947). On the other hand, according to the Theorems 4.1-5.2, we obtain the following estimates: E|yn |2 E|yn |2 E|yn |2 E|yn |2

≤ 44.367128542003876, ≤ 15.026727115314463, ≤ 17.332007759507889, ≤ 8.171264736583947,

h < h0 , h < h0 ,

(SSBE method) (CSSBE method) (BE method) (CBE method)

where yn = (yn1 , yn2 )T . The figures 1, 3, 5, 7 depict the mean-square dissipativity of numerical solutions which are obtained by different methods SSBE, CSSBE, BE and CBE for system (6.1), respectively. The figures 2, 4, 6, 8 depict the states of the numerical solutions. If we take h = 0.04 > h0 and use the SSBE method and the BE method to solve the system (6.1), then we can get E|yn |2 ≤ −6.104417569666075e+002 and E|yn |2 ≤ −1.414332839878782e + 002 respectively, which are invalid. So, the compensated numerical methods are better than the non-compensated numerical methods in the light of mean-square dissipativity.

15

7

Conclusion

As is well known, the noises do exist in a neural network, due to random fluctuations and probabilistic causes in the network. Thus, it is necessary and rewarding to investigate stochastic effects to the mean-square dissipativity of neural networks. In this paper, the mean-square dissipativity of numerical methods for a class of stochastic neural networks with fBm and jumps are considered. Owing to fractional Brownian motion B H is neither a semimartingale nor a Markov process, so the powerful tools from the theories of such processes are not applicable when studying B H . By using fBm-Itˆo formula, inner product properties and matrix theory, some sufficient conditions are derived for the mean-square dissipativity of the system. An example confirms our theoretical results. Therefore, by Remark 5.3, we have a reason to believe that compensated numerical methods are better than non-compensated numerical methods on the basis of the mean-square dissipativity. So far, few authors have discussed the stochastic neural networks with fBm and jumps. Our method is different from the above-mentioned literature, and is also suitable to more general stochastic delayed neural network models with fBm and jumps. Furthermore, stability, synchronization and attractor of this model can be discussed in the future.

Acknowledgements The authors thank the editors and reviewers for their valuable comments. This work is supported by the National Natural Science Foundation of China (NSFC, Grant nos. 61174095, 11261043, 61261044 and 11461053).

References [1]

J. Lu, D. W. C. Ho, J. Cao, J. Kurths, Exponential synchronization of linearly coupled neural networks with impulsive disturbances, IEEE Trans. Neural Netw. 22(2)(2011)329-335. [2] X. Su, P. Shi, L. Wu, S. K. Nguang, Induced 2 filtering of fuzzy stochastic systems with time-varying delays, IEEE Trans. Cybern. 43(4)(2013) 1251-1264. [3] X. Su, P. Shi, L. Wu, M. V. Basin, Reliable filtering with strict dissipativity for T-S fuzzy time-delay systems, IEEE Trans. Cybern. 44(12)(2014)2470-2483. [4] X. Mao, Stochastic Differential Equations and Applications, 2nd ed., Horwood Publishing, Chichester, 2007.

16

[5] P. Balasubramaniam, M. Syed Ali, Robust exponential stability of uncertain fuzzy Cohen-Grossberg neural networks with time-varying delays, Fuzzy Sets Syst. 161(2010) 608-618.
[6] Z. Orman, New sufficient conditions for global stability of neutral-type neural networks with time delays, Neurocomputing 97(2012) 141-148.
[7] L. Wang, Y. Gao, Global exponential robust stability of reaction-diffusion interval neural networks with time-varying delays, Phys. Lett. A 350(2006) 342-348.
[8] J. J. Oliveira, Global asymptotic stability for neural network models with distributed delays, Math. Comput. Model. 50(2009) 81-91.
[9] P. Liu, Delay-dependent robust stability analysis for recurrent neural networks with time-varying delay, Int. J. Innov. Comput. Inf. Control 9(8)(2013) 3341-3355.
[10] Y. Li, J. Li, M. Hua, New results of H∞ filtering for neural network with time-varying delay, Int. J. Innov. Comput. Inf. Control 10(6)(2014) 2309-2323.
[11] S. Arik, On the global dissipativity of dynamical neural networks with time delays, Phys. Lett. A 326(2004) 126-132.
[12] C. Li, X. Liao, Passivity analysis of neural networks with time delay, IEEE Trans. Circuits Syst. II 52(8)(2005) 471-475.
[13] X. Liao, Q. Luo, Z. Zeng, Positive invariant and global exponential attractive sets of neural networks with time-varying delays, Neurocomputing 71(2008) 513-518.
[14] X. Liao, J. Wang, Global dissipativity of continuous-time recurrent neural networks with time delay, Phys. Rev. E 68(2003) 1-7.
[15] Q. Song, Z. Zhao, Global dissipativity of neural networks with both variable and unbounded delays, Chaos Solitons Fract. 25(2005) 393-401.
[16] Z. Zhang, S. Mou, J. Lam, H. Gao, New passivity criteria for neural networks with time-varying delay, Neural Networks 22(2009) 864-868.
[17] Z. Guo, J. Wang, Z. Yan, Global exponential dissipativity and stabilization of memristor-based recurrent neural networks with time-varying delays, Neural Networks 48(2013) 158-172.
[18] L. Wan, Q. Zhou, Attractor and ultimate boundedness for stochastic cellular neural networks with delays, Nonlinear Anal.: RWA 12(2011) 2561-2566.
[19] Q. Zhu, J. Cao, Mean-square exponential input-to-state stability of stochastic delayed neural networks, Neurocomputing 131(2014) 157-163.
[20] Y. Sun, G. Feng, J. Cao, Stochastic stability of Markovian switching genetic regulatory networks, Phys. Lett. A 373(2009) 1646-1652.
[21] G. Wang, J. Cao, L. Wang, Global dissipativity of stochastic neural networks with time delay, J. Franklin Inst. 346(2009) 794-807.
[22] Z. Wang, H. Shu, J. Fang, X. Liu, Robust stability for stochastic Hopfield neural networks with time delays, Nonlinear Anal.: RWA 7(2006) 1119-1128.

[23] S. Blythe, X. Mao, X. Liao, Stability of stochastic delay neural networks, J. Franklin Inst. 338(2001) 481-495.
[24] S. Zhu, Y. Shen, G. Chen, Noise suppress or express exponential growth for hybrid Hopfield neural networks, Phys. Lett. A 374(2010) 2035-2043.
[25] S. Zhu, Y. Shen, Robustness analysis for connection weight matrices of global exponential stability of stochastic recurrent neural networks, Neural Networks 38(2013) 17-22.
[26] J. Lu, D. W. C. Ho, L. Wu, Exponential stabilization of switched stochastic dynamical networks, Nonlinearity 22(2009) 889-911.
[27] X. Yang, J. Cao, J. Lu, Stochastic synchronization of complex networks with nonidentical nodes via hybrid adaptive and impulsive control, IEEE Trans. Circuits Syst. I 59(2)(2012) 371-384.
[28] B. Song, J. H. Park, Z. Wu, Y. Zhang, Global synchronization of complex networks perturbed by the Poisson noise, Appl. Math. Comput. 219(2012) 3831-3839.
[29] Y. Zhang, B. Song, J. H. Park, G. Shi, Z. Wu, Global synchronization of complex networks perturbed by Brown noises and Poisson noises, Circuits Syst. Signal Process. 33(2014) 2827-2849.
[30] D. Higham, P. Kloeden, Numerical methods for nonlinear stochastic differential equations with jumps, Numer. Math. 101(2005) 101-119.
[31] D. Higham, P. Kloeden, Convergence and stability of implicit methods for jump-diffusion systems, Int. J. Numer. Anal. Model. 3(2006) 125-140.
[32] X. Wang, S. Gan, Compensated stochastic theta methods for stochastic differential equations with jumps, Appl. Numer. Math. 60(2010) 877-887.
[33] X. Mao, S. Sabanis, Numerical solutions of stochastic differential delay equations under local Lipschitz condition, J. Comput. Appl. Math. 151(2003) 215-227.
[34] M. Liu, W. Cao, Z. Fan, Convergence and stability of the semi-implicit Euler method for a linear stochastic differential delay equation, J. Comput. Appl. Math. 170(2004) 255-268.
[35] C. Huang, Mean square stability and dissipativity of two classes of theta methods for systems of stochastic delay differential equations, J. Comput. Appl. Math. 259(2014) 77-86.
[36] Q. Ma, D. Ding, X. Ding, Mean-square dissipativity of several numerical methods for stochastic differential equations with jumps, Appl. Numer. Math. 82(2014) 44-50.
[37] R. Li, W. K. Pang, P. K. Leung, Exponential stability of numerical solutions to stochastic delay Hopfield neural networks, Neurocomputing 73(2010) 920-926.
[38] A. Rathinasamy, The split-step θ-methods for stochastic delay Hopfield neural networks, Appl. Math. Modelling 36(2012) 3477-3485.

[39] Y. S. Mishura, Stochastic Calculus for Fractional Brownian Motion and Related Processes, Springer, 2008.
[40] M. Ferrante, C. Rovira, Stochastic delay differential equations driven by fractional Brownian motion with Hurst parameter H > 1/2, Bernoulli 12(1)(2006) 85-100.
[41] M. Besalú, C. Rovira, Stochastic delay equations with non-negativity constraints driven by fractional Brownian motion, Bernoulli 18(1)(2012) 24-45.
[42] B. Boufoussi, S. Hajji, Functional differential equations driven by a fractional Brownian motion, Comput. Math. Appl. 62(2011) 746-754.
[43] N. T. Dung, Mackey-Glass equation driven by fractional Brownian motion, Physica A 391(2012) 5465-5472.
[44] J. León, S. Tindel, Malliavin calculus for fractional delay equations, J. Theor. Probab. 25(3)(2012) 854-889.
[45] W. Xiao, W. Zhang, W. Xu, X. Zhang, The valuation of equity warrants in a fractional Brownian environment, Physica A 391(2012) 1742-1752.
[46] A. Neuenkirch, I. Nourdin, S. Tindel, Delay equations driven by rough paths, Electronic J. Probab. 13(2008) 2031-2068.
[47] B. Boufoussi, S. Hajji, Neutral stochastic functional differential equations driven by a fractional Brownian motion in a Hilbert space, Statist. Probab. Lett. 82(2012) 1549-1558.
[48] T. Caraballo, M. J. Garrido-Atienza, T. Taniguchi, The existence and exponential behavior of solutions to stochastic delay evolution equations with a fractional Brownian motion, Nonlinear Anal.: TMA 74(2011) 3671-3684.
[49] Y. Ren, X. Cheng, R. Sakthivel, On time-dependent stochastic evolution equations driven by fractional Brownian motion in a Hilbert space with finite delay, Math. Meth. Appl. Sci. 37(2014) 2177-2184.
[50] W. Ma, Q. Zhang, C. Han, Numerical analysis for stochastic age-dependent population equations with fractional Brownian motion, Commun. Nonlinear Sci. Numer. Simul. 17(2012) 1884-1893.
[51] C. Bender, An Itô formula for generalized functionals of a fractional Brownian motion with arbitrary Hurst parameter, Stochastic Process. Appl. 104(2003) 81-106.
[52] L. Yan, Maximal inequalities for the iterated fractional integrals, Statist. Probab. Lett. 69(2004) 69-79.
[53] X. Mao, Exponential Stability of Stochastic Differential Equations, Marcel Dekker, Basel, 1994.


Figure 1: Mean-square dissipativity of SSBE method of (6.1). (E|y_n^1|^2 and E|y_n^2|^2 plotted against t over [0, 100].)

Figure 2: Mean-square transient response of state variables y_n^1, y_n^2 of (6.1) (SSBE method). (E|y_n^2|^2 plotted against E|y_n^1|^2.)

Figure 3: Mean-square dissipativity of CSSBE method of (6.1). (E|y_n^1|^2 and E|y_n^2|^2 plotted against t over [0, 100].)

Figure 4: Mean-square transient response of state variables y_n^1, y_n^2 of (6.1) (CSSBE method). (E|y_n^2|^2 plotted against E|y_n^1|^2.)

Figure 5: Mean-square dissipativity of BE method of (6.1). (E|y_n^1|^2 and E|y_n^2|^2 plotted against t over [0, 100].)

Figure 6: Mean-square transient response of state variables y_n^1, y_n^2 of (6.1) (BE method). (E|y_n^2|^2 plotted against E|y_n^1|^2.)

Figure 7: Mean-square dissipativity of CBE method of (6.1). (E|y_n^1|^2 and E|y_n^2|^2 plotted against t over [0, 100].)

Figure 8: Mean-square transient response of state variables y_n^1, y_n^2 of (6.1) (CBE method). (E|y_n^2|^2 plotted against E|y_n^1|^2.)
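The curves in the figures above are Monte Carlo estimates of the second moments E|y_n|^2 along a numerical trajectory. Since example (6.1) is not reproduced in this excerpt, the sketch below uses a hypothetical scalar linear test equation dy = -a y dt + σ y dB_H(t) + γ y dN(t) as a stand-in; the function names (`fbm_increments`, `mean_square_cbe`) and all parameter values are ours, and the fBm integral is discretized naively by increments, which for H ≠ 1/2 is only a heuristic. It illustrates the compensated backward Euler (CBE) idea: move the jump mean λ·u(y) into the drift and use the centred increment ΔN - λh.

```python
import numpy as np

def fbm_increments(n_steps, h, hurst, n_paths, rng):
    # Exact-in-law fBm increments on a uniform grid via Cholesky of the
    # stationary increment covariance (O(n^3); fine for short grids in a sketch).
    d = np.abs(np.arange(n_steps)[:, None] - np.arange(n_steps)[None, :]).astype(float)
    cov = 0.5 * h ** (2 * hurst) * (
        (d + 1.0) ** (2 * hurst) + np.abs(d - 1.0) ** (2 * hurst) - 2.0 * d ** (2 * hurst)
    )
    chol = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))
    return rng.standard_normal((n_paths, n_steps)) @ chol.T

def mean_square_cbe(a, sigma, gamma, lam, hurst, y0, h, n_steps, n_paths, seed=0):
    # Compensated backward Euler for the hypothetical scalar test equation
    #   dy = -a*y dt + sigma*y dB_H(t) + gamma*y dN(t),
    # N a Poisson process of rate lam, B_H an fBm with Hurst index `hurst`.
    # Compensated drift: f_lam(y) = -a*y + lam*gamma*y; jump increment centred
    # as dN - lam*h. Returns the Monte Carlo estimates of E|y_n|^2 on the grid.
    rng = np.random.default_rng(seed)
    dB = fbm_increments(n_steps, h, hurst, n_paths, rng)
    dN = rng.poisson(lam * h, size=(n_paths, n_steps))
    y = np.full(n_paths, float(y0))
    ms = [float(np.mean(y ** 2))]
    for n in range(n_steps):
        # The linear drift makes the implicit step solvable in closed form:
        # y_{n+1} (1 + (a - lam*gamma) h) = y_n + sigma y_n dB_n + gamma y_n (dN_n - lam h)
        rhs = y + sigma * y * dB[:, n] + gamma * y * (dN[:, n] - lam * h)
        y = rhs / (1.0 + (a - lam * gamma) * h)
        ms.append(float(np.mean(y ** 2)))
    return ms
```

For a strongly contractive drift the estimated second moment decays toward a bounded absorbing level for any stepsize h, which is the qualitative behaviour the figures report for the compensated schemes; the non-compensated BE and SSBE methods exhibit the same decay only under the stepsize restriction derived in the paper.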