Nonlinear Analysis: Real World Applications 9 (2008) 1590 – 1606 www.elsevier.com/locate/na
Convergence dynamics of stochastic reaction–diffusion recurrent neural networks with continuously distributed delays☆

Yan Lv a,∗, Wei Lv b, Jianhua Sun c

a Department of Statistics and Financial Mathematics, Nanjing University of Science and Technology, Nanjing 210094, China
b Department of Mathematics, Xi'an Jiaotong University, Xi'an 710049, China
c Department of Mathematics, Nanjing University, Nanjing 210093, China
Received 22 September 2006; accepted 18 April 2007
Abstract

Convergence dynamics of reaction–diffusion recurrent neural networks (RNNs) with continuously distributed delays and stochastic influence are considered. Sufficient conditions guaranteeing, respectively, the almost sure exponential stability, the mean value exponential stability and the mean square exponential stability of an equilibrium solution are obtained. Our approach combines the Lyapunov functional method, M-matrix properties, inequality techniques and the nonnegative semimartingale convergence theorem. The criteria show that, for stochastic continuously distributed delayed reaction–diffusion RNNs whose structure satisfies them, diffusion and delays are harmless, but random fluctuations are important. Two examples are given to demonstrate our results.
© 2007 Elsevier Ltd. All rights reserved.

Keywords: Stochastic recurrent neural networks; Reaction–diffusion; Continuously distributed delay; Exponential stability; Lyapunov functional; Martingale convergence theorem
1. Introduction

The stability of recurrent neural networks (RNNs), including cellular neural networks (CNNs) and Hopfield neural networks (HNNs), is very important in the study of associative content-addressable memories, pattern recognition and optimization. It is well known that time delays are ubiquitous both in neural processing and in signal transmission, and over the past decade a variety of results on the stability of RNN models with constant fixed delays have accumulated in the literature; see [1,6,10,11,21,31]. Although constant fixed delays in models of delayed feedback systems serve as a good approximation in simple circuits consisting of a small number of cells, neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths. There is then a distribution of conduction velocities along these pathways and a distribution of propagation delays, which cannot be modelled adequately with discrete delays; an appropriate way is to incorporate continuously distributed delays. For recent expositions on HNNs that incorporate distributed delays, we refer to [9,12,32]. There is also some work considering time-varying delays or mixed time-varying delays; see the recent work [7,8].

☆ This work was partly supported by the National Natural Science Foundation of China (No. 10171044) and the Natural Science Foundation of Jiangsu Province (No. BK2001024).
∗ Corresponding author.
E-mail addresses: [email protected] (Y. Lv), [email protected] (W. Lv), [email protected] (J. Sun).
1468-1218/$ - see front matter © 2007 Elsevier Ltd. All rights reserved. doi:10.1016/j.nonrwa.2007.04.003
Obviously, the applications mentioned above depend on the convergence dynamics of the neural networks. If an RNN depends only on time, or on the current time and a time delay, the model is in fact an ordinary differential equation or a functional differential equation. In practice, however, diffusion phenomena cannot be ignored in neural networks and electric circuits once electrons transport in a nonuniform electromagnetic field. Hence, it is essential to consider state variables that vary with both time and space. Neural networks with diffusion terms are commonly expressed by partial differential equations; for studies of the stability of reaction–diffusion neural networks see, for instance, [17,20,22,26,29,30] and the references therein.

In this paper we further consider stochastic influence in reaction–diffusion RNNs with continuously distributed delays. A real system is usually affected by external perturbations which in many cases are of great uncertainty and hence may be treated as random; as pointed out in [19], in real nervous systems synaptic transmission is a noisy process brought on by random fluctuations in the release of neurotransmitters and other probabilistic causes. To the best of our knowledge, however, there are few results in the literature on the effect of stochastic perturbations on the stability of neural networks with delays. Chua and Yang [14,15] proposed a novel class of information-processing systems, called CNNs, with applications to image processing and pattern recognition, and a CNN model with additive noise current sources inserted into the cell state capacitor was considered in [13]; noise has also been found to be extremely useful in some biological learning processes [4]. In [16] the authors described a network where noise is injected into the first hidden layer (only), and used the result to obtain error derivatives, thereby avoiding back-propagation. To date, the approaches to incorporating such stochastic effects into the stability analysis of neural networks have been to use probabilistic threshold models (e.g. [19]), or to view neural networks as nonlinear dynamical systems with intrinsic noise, i.e., to include a representation of the inherent stochasticity in the neurodynamics (e.g. [16]). Liao and Mao [23,24] supposed that there exists a stochastic perturbation to neural networks with discrete delays, and initiated the study of stability and instability of stochastic neural networks. Blythe et al. [3] then continued this research, discussing almost sure exponential stability for a class of stochastic neural networks with discrete delays by means of the nonnegative semimartingale convergence theorem, and the recent paper [28] studied the stability of stochastic reaction–diffusion RNNs with constant delays. The stability of stochastic delay differential equations has been studied intensively; see, for example, [2,18,25]. However, stochastic delay neural networks have their own characteristics, and it is desirable to obtain stability criteria that make full use of these characteristics.

In the following sections, we investigate the convergence dynamics of stochastic reaction–diffusion RNNs with continuously distributed delays. First, in Section 2, the model concerned is introduced, together with necessary notations, assumptions and lemmas that will be useful later.
By using the Lyapunov functional method, M-matrix properties and the nonnegative semimartingale convergence theorem, sufficient conditions guaranteeing the almost sure exponential stability of an equilibrium solution are given in Section 3. In Section 4, we first prove the mean square exponential stability of the equilibrium solution under the same conditions as in Section 3; then sufficient conditions assuring the mean value exponential stability of an equilibrium solution are given. Finally, our results are compared with previous results derived in the literature for continuously distributed delayed RNNs without diffusion or stochastic perturbation. Two examples are given in Section 5 to demonstrate the main results, and conclusions are drawn in Section 6.

2. Model description and preliminaries

Consider the stochastically perturbed reaction–diffusion RNN with continuously distributed delays

$$ dy_i(t) = \sum_{k=1}^{m} \frac{\partial}{\partial x_k}\Big( D_{ik} \frac{\partial y_i}{\partial x_k} \Big)\,dt + \Big[ -c_i h_i(y_i(t,x)) + \sum_{j=1}^{n} a_{ij} f_j(y_j(t,x)) + \sum_{j=1}^{n} b_{ij} \int_{-\infty}^{t} k_{ij}(t-s)\, g_j(y_j(s,x))\,ds + J_i \Big]\,dt + \sum_{l=1}^{\infty} \sigma_{il}(y_i(t,x))\,dw_{il}(t), \quad x \in X, \qquad (2.1) $$
subject to the boundary and initial conditions
$$ \frac{\partial y_i}{\partial n} := \Big( \frac{\partial y_i}{\partial x_1}, \ldots, \frac{\partial y_i}{\partial x_m} \Big)^{\mathrm T} = 0, \quad t \ge 0,\ x \in \partial X, $$
$$ y_i(s,x) = \phi_i(s,x), \quad -\infty < s \le 0,\ x \in X, $$
for $1 \le i \le n$ and $t \ge 0$. In the above model, $n \ge 2$ is the number of neurons in the network; $x$ is the space variable; $y_i(t,x)$ is the state of the $i$th neuron at time $t$ and space point $x$; $f_j(y_j(t,x))$ and $g_j(y_j(t,x))$ denote the output of the $j$th unit at time $t$ and space point $x$; the smooth function $D_{ik} = D_{ik}(t,x,y) \ge 0$ is a diffusion coefficient; $X$ is a compact set in $\mathbb R^m$ with smooth boundary $\partial X$ and measure $\operatorname{mes} X > 0$. The relations $\partial y_i/\partial n|_{\partial X} = 0$, $t \ge 0$, and $\phi_i(s,x)$ give the boundary value and initial value, respectively. The quantities $c_i, a_{ij}, b_{ij}, J_i$ are constants: $c_i > 0$ represents the rate with which the $i$th unit resets its potential to the resting state in isolation when disconnected from the network and the external stochastic perturbation; $J_i$ denotes the external bias on the $i$th unit; $a_{ij}$ and $b_{ij}$ weight the strength of the $j$th unit on the $i$th unit at time $t$. The kernels $k_{ij}$, $i,j = 1,2,\ldots,n$, are real-valued nonnegative piecewise continuous functions defined on $[0,\infty)$ which satisfy (i) $\int_0^\infty k_{ij}(t)\,dt = 1$, (ii) $\int_0^\infty t\, k_{ij}(t)\,dt < \infty$, and (iii) there exists a positive constant $\beta$ such that $\int_0^\infty t\, e^{\beta t} k_{ij}(t)\,dt < \infty$.
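For instance, the exponential kernel $k(t) = e^{-t}$ used in the examples of Section 5 satisfies (i)–(iii); for (iii) any $0 < \beta < 1$ works, since $\int_0^\infty t e^{\beta t} e^{-t}\,dt = 1/(1-\beta)^2 < \infty$. A quick numerical confirmation (our own illustration, not part of the paper) using standard quadrature:

```python
import numpy as np
from scipy.integrate import quad

k = lambda t: np.exp(-t)   # the exponential kernel of Section 5
beta = 0.5                 # any 0 < beta < 1 works for condition (iii)
print(quad(k, 0, np.inf)[0])                                      # (i):   ~1.0
print(quad(lambda t: t * k(t), 0, np.inf)[0])                     # (ii):  ~1.0
print(quad(lambda t: t * np.exp(beta * t) * k(t), 0, np.inf)[0])  # (iii): ~4.0
```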
Moreover, $\{w_{il}(t) : i = 1,\ldots,n,\ l \in \mathbb N\}$ are independent scalar standard Wiener processes defined on a complete probability space $(\Omega, \mathcal F, P)$ equipped with the natural filtration $\{\mathcal F_t\}_{t \ge 0}$ generated by $\{w_{il}(s) : 0 \le s \le t\}$, where $\Omega$ is the canonical space, $\mathcal F$ the associated $\sigma$-algebra and $P$ the probability measure.

Let $u = (u_1, \ldots, u_n)^{\mathrm T}$, and let $L^2(X)$ be the space of scalar-valued Lebesgue measurable functions on $X$, which is a Banach space under the $L^2$-norm
$$ \|v\|_2 = \Big( \int_X |v(x)|^2\,dx \Big)^{1/2}, \quad v \in L^2(X); $$
we then define the norm $\|u\|$ by
$$ \|u\| = \Big( \sum_{i=1}^{n} \|u_i\|_2^2 \Big)^{1/2}. $$

Assume $\phi = \{(\phi_1(s,x), \ldots, \phi_n(s,x))^{\mathrm T} : -\infty < s \le 0\} \in C((-\infty,0]; L^2(X, \mathbb R^n))$ is an $\mathcal F_0$-measurable $\mathbb R^n$-valued random variable, where $C((-\infty,0]; L^2(X, \mathbb R^n))$ is the space of all continuous $\mathbb R^n$-valued functions defined on $(-\infty,0] \times X$, equipped with the norm
$$ \|\phi\|_C = \sup_{-\infty < t \le 0} \|\phi(t,\cdot)\|. $$
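Although the analysis below is purely analytical, it may help to see how a trajectory of (2.1) could be approximated numerically. The following is a minimal sketch of our own (not the paper's method): explicit Euler–Maruyama in time, central differences on $X=[0,1]$ with the Neumann boundary condition, and the exponential kernel $k(t)=e^{-t}$ folded into an auxiliary variable. The parameter values are borrowed from Example 1 of Section 5, with the assumptions $\sigma_{il}(y)=L_{il}y$ and $J_i=0$, so that $y^*=0$.

```python
import numpy as np

rng = np.random.default_rng(0)

n, M = 2, 64                                   # neurons, spatial grid points
dx, dt, T = 1.0 / (M - 1), 1e-4, 0.5           # grid spacing, time step, horizon
C = np.array([16.0, 9.0])                      # c_i
A = np.array([[1.0, 2.0], [3.0, 1.0]])         # a_ij
B = np.array([[-3.0, 1.0], [2.0, -2.0]])       # b_ij
L = np.array([[4.0, 1.0], [1.0, 1.0]])         # L_il, with sigma_il(y) = L_il*y
D = 0.1                                        # constant diffusion coefficient

h = lambda y: y + np.expm1(y)                  # h(y) = y + e^y - 1, h(0) = 0
f, g = np.arctan, np.tanh                      # f(0) = g(0) = 0, so y* = 0

y = 0.5 * rng.standard_normal((n, M))          # initial data phi_i(0, x)
# Exponential kernel: m_j(t,x) = int_{-inf}^t e^{-(t-s)} g(y_j(s,x)) ds
# satisfies dm/dt = g(y) - m, so no history storage is needed.
m = g(y)                                       # constant-history start

for _ in range(int(T / dt)):
    lap = np.empty_like(y)                     # Neumann (zero-flux) Laplacian
    lap[:, 1:-1] = (y[:, 2:] - 2.0 * y[:, 1:-1] + y[:, :-2]) / dx**2
    lap[:, 0] = 2.0 * (y[:, 1] - y[:, 0]) / dx**2
    lap[:, -1] = 2.0 * (y[:, -2] - y[:, -1]) / dx**2
    drift = D * lap - C[:, None] * h(y) + A @ f(y) + B @ m
    dW = rng.normal(scale=np.sqrt(dt), size=L.shape)  # w_il are scalar in time
    y, m = (y + drift * dt + (L * dW).sum(axis=1)[:, None] * y,
            m + (g(y) - m) * dt)

print("||y_i||_2 at time T:", np.sqrt((y**2).sum(axis=1) * dx))
```

The explicit scheme requires $D\,\Delta t/\Delta x^2 < 1/2$; the values above satisfy this comfortably.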
Throughout this paper, we make the following assumptions for system (2.1):

(A1) $h_i : \mathbb R \to \mathbb R$ is differentiable and $\lambda_i = \inf_{x \in \mathbb R} h_i'(x) > 0$, $h_i(0) = 0$, for $i = 1, \ldots, n$.

(A2) $f_j$, $g_j$ and $\sigma_{il}$ are Lipschitz continuous with Lipschitz constants $\mu_j > 0$, $\nu_j > 0$ and $L_{il} > 0$, respectively, and $\sum_{l=1}^{\infty} L_{il}^2 < \infty$, for $i, j = 1, 2, \ldots, n$, $l \in \mathbb N$.

(A3) $C\Lambda - A^+ U - B^+ V$ is a nonsingular M-matrix, where
$$ \Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_n), \quad U = \operatorname{diag}(\mu_1, \ldots, \mu_n), \quad V = \operatorname{diag}(\nu_1, \ldots, \nu_n), $$
$$ C = \operatorname{diag}(c_1, \ldots, c_n), \quad A^+ = (|a_{ij}|)_{n \times n}, \quad B^+ = (|b_{ij}|)_{n \times n}. $$
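Whether a given matrix is a nonsingular M-matrix (formally defined in Definition 2.1 below) can be tested numerically. A small sketch of such a test, written by us for illustration:

```python
import numpy as np

def is_nonsingular_m_matrix(A):
    """Test Definition 2.1: A = sI - B with B >= 0 and s > rho(B)."""
    A = np.asarray(A, dtype=float)
    if (A - np.diag(np.diag(A)) > 0).any():  # off-diagonal entries must be <= 0
        return False
    s = A.diagonal().max()                   # this choice makes B nonnegative
    B = s * np.eye(A.shape[0]) - A
    return s > np.abs(np.linalg.eigvals(B)).max()

# e.g. the matrix arising from (A5) in Example 1 of Section 5:
print(is_nonsingular_m_matrix([[4.0, -3.0], [-5.0, 5.0]]))  # True
```

The test relies on the fact that, for a matrix with nonpositive off-diagonal entries written as $A = sI - B$ with $s \ge \max_i a_{ii}$, $A$ is a nonsingular M-matrix exactly when $s$ exceeds the spectral radius $\rho(B)$.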
For the deterministic system
$$ dy_i(t) = \sum_{k=1}^{m} \frac{\partial}{\partial x_k}\Big( D_{ik} \frac{\partial y_i}{\partial x_k} \Big)\,dt + \Big[ -c_i h_i(y_i(t,x)) + \sum_{j=1}^{n} a_{ij} f_j(y_j(t,x)) + \sum_{j=1}^{n} b_{ij} \int_{-\infty}^{t} k_{ij}(t-s)\, g_j(y_j(s,x))\,ds + J_i \Big]\,dt, \quad x \in X, \qquad (2.2) $$
by a method similar to that of [27], one can easily prove that, under assumptions (A1)–(A3), system (2.2) has a unique equilibrium point $y^* = (y_1^*, \ldots, y_n^*)^{\mathrm T}$ and that it is globally exponentially stable. Suppose further that

(A4) $\sigma_{il}(y_i^*) = 0$, for $i = 1, \ldots, n$, $l \in \mathbb N$.

Then $y^* = (y_1^*, \ldots, y_n^*)^{\mathrm T}$ is also an equilibrium point of system (2.1) provided that system (2.1) satisfies (A1)–(A4). System (2.1) is now equivalent to
$$ d(y_i(t) - y_i^*) = \sum_{k=1}^{m} \frac{\partial}{\partial x_k}\Big( D_{ik} \frac{\partial (y_i(t) - y_i^*)}{\partial x_k} \Big)\,dt + \Big[ -c_i [h_i(y_i(t,x)) - h_i(y_i^*)] + \sum_{j=1}^{n} a_{ij} [f_j(y_j(t,x)) - f_j(y_j^*)] + \sum_{j=1}^{n} b_{ij} \int_{-\infty}^{t} k_{ij}(t-s)\, [g_j(y_j(s,x)) - g_j(y_j^*)]\,ds \Big]\,dt + \sum_{l=1}^{\infty} \sigma_{il}(y_i(t,x))\,dw_{il}(t), \quad x \in X, \qquad (2.3) $$
$$ \frac{\partial (y_i - y_i^*)}{\partial n} := \Big( \frac{\partial y_i}{\partial x_1}, \ldots, \frac{\partial y_i}{\partial x_m} \Big)^{\mathrm T} = 0, \quad t \ge 0,\ x \in \partial X, $$
$$ y_i(s,x) - y_i^* = \phi_i(s,x) - y_i^*, \quad -\infty < s \le 0,\ x \in X, $$
since $J_i = c_i h_i(y_i^*) - \sum_{j=1}^{n} a_{ij} f_j(y_j^*) - \sum_{j=1}^{n} b_{ij} g_j(y_j^*)$. By the theory of stochastic differential equations (see [2,18,25]), it is known that under conditions (A1) and (A2), Eq. (2.3) has a global solution on $t \ge 0$, denoted by $y(t;\phi)$, or simply $y(t)$ if no confusion occurs. To describe the effect of stochastic forces on the stability of the continuously distributed delayed RNN (2.3), we will study the almost sure exponential stability, the mean square exponential stability and the mean value exponential stability of its equilibrium solution $y(t) \equiv y^*$ in the following sections. For completeness, we give the following definitions [28], in which $E$ denotes expectation with respect to $P$.

Definition 2.1. A matrix $A = (a_{ij})_{n \times n}$ is said to be a nonsingular M-matrix if $A$ has the form
$$ A = sI - B, \quad s > 0,\ B \ge 0, $$
where $I$ denotes the identity matrix and $s > \rho(B)$, with $\rho(B)$ the spectral radius of the square matrix $B$. It follows from [5] that $A$ being a nonsingular M-matrix is equivalent to either of the following properties:

• There exist $r_j > 0$ such that $\sum_{j=1}^{n} a_{ij} r_j > 0$, $1 \le i \le n$.
• There exist $r_j > 0$ such that $\sum_{j=1}^{n} a_{ji} r_j > 0$, $1 \le i \le n$.

Definition 2.2. Eq. (2.3) is said to be almost surely exponentially stable if there exists a positive constant $\gamma$ such that for each pair of $t_0$ and $\phi$ there is a finite positive random variable $K$ such that
$$ \|y(t; t_0, \phi) - y^*\| \le K e^{-\gamma (t - t_0)}, \quad P\text{-a.s.}, $$
for all $t \ge t_0$. In this case
$$ \limsup_{t \to \infty} \frac{1}{t} \ln \|y(t; t_0, \phi) - y^*\| \le -\gamma. \qquad (2.4) $$
The left-hand side of (2.4) is called the almost sure Lyapunov exponent of the solution.

Definition 2.3. Eq. (2.3) is said to be $p$th moment exponentially stable if there exists a pair of positive constants $\gamma$ and $K$ such that
$$ E\|y(t; \phi) - y^*\|^p \le K\, E\|\phi - y^*\|^p\, e^{-\gamma t}, \quad t \ge 0, $$
for any $\phi$. In this case
$$ \limsup_{t \to \infty} \frac{1}{t} \ln\big( E\|y(t; \phi) - y^*\|^p \big) \le -\gamma. \qquad (2.5) $$
The left-hand side of (2.5) is called the $p$th moment Lyapunov exponent of the solution. When $p = 1$ or $2$, one usually speaks of exponential stability in mean value or in mean square, respectively.

The following lemmas are important in our approach.

Lemma 2.4 (Nonnegative semimartingale convergence theorem, Blythe et al. [3]). Suppose $A(t)$ and $U(t)$ are two continuous adapted increasing processes on $t \ge 0$ with $A(0) = U(0) = 0$ a.s. Let $M(t)$ be a real-valued continuous local martingale with $M(0) = 0$ a.s., and let $\zeta$ be a nonnegative $\mathcal F_0$-measurable random variable with $E\zeta < \infty$. Define $X(t) = \zeta + A(t) - U(t) + M(t)$ for $t \ge 0$. If $X(t)$ is nonnegative, then
$$ \Big\{ \lim_{t \to \infty} A(t) < \infty \Big\} \subset \Big\{ \lim_{t \to \infty} X(t) < \infty \Big\} \cap \Big\{ \lim_{t \to \infty} U(t) < \infty \Big\} \quad \text{a.s.}, $$
where $B \subset D$ a.s. denotes $P(B \cap D^c) = 0$. In particular, if $\lim_{t \to \infty} A(t) < \infty$ a.s., then for almost all $\omega \in \Omega$ both $\lim_{t \to \infty} X(t, \omega) < \infty$ and $\lim_{t \to \infty} U(t, \omega) < \infty$, i.e., both $X(t)$ and $U(t)$ converge to finite random variables.

Lemma 2.5 (Burkholder–Davis–Gundy inequality). For any $0 < p < \infty$ there exists a universal constant $K_p$ such that for every continuous local martingale $M$ vanishing at zero and any stopping time $\tau$,
$$ E(|M_\tau|^p) \le E\Big( \sup_{0 \le s \le \tau} |M_s|^p \Big) \le K_p\, E\big( \langle M, M \rangle_\tau^{p/2} \big), $$
where $\langle M, M \rangle$ is the cross-variation of $M$.

3. Almost sure exponential stability of an equilibrium solution

For Eq. (2.3), we have the following result.

Theorem 3.1. Under assumptions (A1), (A2), (A4) and

(A5) $C\Lambda - \bar C - A^+ U - B^+ V$ is a nonsingular M-matrix, where $\bar C = \operatorname{diag}(\bar c_1, \ldots, \bar c_n)$,
$$ \bar c_i := -c_i \lambda_i + \sum_{j=1}^{n} |a_{ij}| \mu_j + \sum_{j=1}^{n} |b_{ij}| \nu_j + \sum_{l=1}^{\infty} L_{il}^2 \ge 0, \quad 1 \le i \le n, $$
and the other notations are the same as in (A3),

Eq. (2.3) is almost surely exponentially stable.

Proof. Let $y^*$ be an equilibrium point and $y(t,x) = (y_1(t,x), \ldots, y_n(t,x))^{\mathrm T}$ a solution of system (2.1). From (A5), there exist constants $r_i > 0$, $1 \le i \le n$, such that
$$ r_i (c_i \lambda_i - \bar c_i) - \sum_{j=1}^{n} |a_{ji}| \mu_i r_j - \sum_{j=1}^{n} |b_{ji}| \nu_i r_j > 0. \qquad (3.1) $$
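A vector $r$ with property (3.1) can in fact be produced explicitly rather than searched for: for a nonsingular M-matrix $W$ the inverse is entrywise nonnegative, so $r = (W^{\mathrm T})^{-1}(1,\ldots,1)^{\mathrm T}$ is componentwise positive and satisfies $W^{\mathrm T} r = (1,\ldots,1)^{\mathrm T} > 0$, which is exactly (3.1). A sketch (our illustration, not the paper's procedure, using the matrix of Example 1 in Section 5):

```python
import numpy as np

W = np.array([[4.0, -3.0], [-5.0, 5.0]])  # C*Lambda - Cbar - A+ U - B+ V
r = np.linalg.solve(W.T, np.ones(2))      # (W^T)^{-1} (1, 1)^T
print(r)        # [2.  1.4] -- both entries positive
print(W.T @ r)  # [1. 1.]   -- the left-hand side of (3.1), strictly positive
```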
Since $\bar c_i \ge 0$, we have
$$ r_i c_i \lambda_i - \sum_{j=1}^{n} |a_{ji}| \mu_i r_j - \sum_{j=1}^{n} |b_{ji}| \nu_i r_j > 0, $$
which implies that (A3) holds. Notice that (2.1) is equivalent to (2.3). Let $z_i = y_i - y_i^*$. It follows from (3.1) that there exists a sufficiently small constant $\varepsilon > 0$ such that
$$ r_i (c_i \lambda_i - \bar c_i - \varepsilon) - \sum_{j=1}^{n} |a_{ji}| \mu_i r_j - \sum_{j=1}^{n} |b_{ji}| \nu_i r_j \int_0^\infty k_{ji}(u)\, e^{\varepsilon u}\,du \ge 0. \qquad (3.2) $$

Take $V(z(t), t) = e^{\varepsilon t} \sum_{i=1}^{n} r_i z_i^2(t)$. Then Itô's formula yields
$$\begin{aligned}
V(z(t), t) ={}& V(z(0), 0) + \int_0^t \varepsilon e^{\varepsilon s} \sum_{i=1}^{n} r_i z_i^2(s)\,ds\\
&+ \int_0^t 2 e^{\varepsilon s} \sum_{i=1}^{n} r_i z_i(s) \Big\{ \sum_{k=1}^{m} \frac{\partial}{\partial x_k}\Big( D_{ik} \frac{\partial (y_i(s) - y_i^*)}{\partial x_k} \Big) - c_i [h_i(y_i(s,x)) - h_i(y_i^*)]\\
&\qquad + \sum_{j=1}^{n} a_{ij} [f_j(y_j(s,x)) - f_j(y_j^*)] + \sum_{j=1}^{n} b_{ij} \int_{-\infty}^{s} k_{ij}(s-u)\, [g_j(y_j(u,x)) - g_j(y_j^*)]\,du \Big\}\,ds\\
&+ \int_0^t 2 e^{\varepsilon s} \sum_{i=1}^{n} \sum_{l=1}^{\infty} r_i z_i(s)\, \sigma_{il}(y_i(s,x))\,dw_{il}(s) + \int_0^t e^{\varepsilon s} \sum_{i=1}^{n} \sum_{l=1}^{\infty} r_i\, \sigma_{il}^2(y_i(s,x))\,ds.
\end{aligned}$$

It is easy to see from the boundary condition that
$$ \int_X z_i \sum_{k=1}^{m} \frac{\partial}{\partial x_k}\Big( D_{ik} \frac{\partial z_i}{\partial x_k} \Big)\,dx = - \int_X \sum_{k=1}^{m} D_{ik} \Big( \frac{\partial z_i}{\partial x_k} \Big)^2 dx \le 0. $$

Moreover,
$$\begin{aligned}
\int_X z_i(s) \int_{-\infty}^{s} k_{ij}(s-u)\, [g_j(y_j(u,x)) - g_j(y_j^*)]\,du\,dx
&\le \int_X |z_i(s)| \int_{-\infty}^{s} k_{ij}(s-u)\, \nu_j |z_j(u)|\,du\,dx\\
&= \nu_j \int_{-\infty}^{s} k_{ij}(s-u) \int_X |z_i(s)|\,|z_j(u)|\,dx\,du\\
&\le \nu_j \int_{-\infty}^{s} k_{ij}(s-u)\, \|z_i(s)\|_2 \|z_j(u)\|_2\,du \quad (\text{by the Schwarz inequality})\\
&\le \frac{1}{2} \nu_j \|z_i(s)\|_2^2 + \frac{1}{2} \nu_j \Big( \int_{-\infty}^{s} k_{ij}(s-u)\, \|z_j(u)\|_2\,du \Big)^2 \quad (\text{by the Cauchy inequality})\\
&= \frac{1}{2} \nu_j \|z_i(s)\|_2^2 + \frac{1}{2} \nu_j \Big( \int_{-\infty}^{s} k_{ij}^{1/2}(s-u)\, k_{ij}^{1/2}(s-u)\, \|z_j(u)\|_2\,du \Big)^2\\
&\le \frac{1}{2} \nu_j \|z_i(s)\|_2^2 + \frac{1}{2} \nu_j \int_{-\infty}^{s} k_{ij}(s-u)\, \|z_j(u)\|_2^2\,du \quad \Big(\text{by the Schwarz inequality and } \int_0^\infty k_{ij} = 1\Big). \qquad (3.3)
\end{aligned}$$
Integrating $V(z(t),t)$ over $X$ and using (A1), (A2), (A4), (3.3) and the Hölder inequality, we have
$$\begin{aligned}
\int_X V(z(t), t)\,dx \le{}& \sum_{i=1}^{n} r_i \|z_i(0)\|_2^2 + \int_0^t \varepsilon e^{\varepsilon s} \sum_{i=1}^{n} r_i \|z_i(s)\|_2^2\,ds\\
&+ \int_0^t e^{\varepsilon s} \sum_{i=1}^{n} r_i \Big\{ -2 c_i \lambda_i \|z_i(s)\|_2^2 + \sum_{j=1}^{n} |a_{ij}| \mu_j \big( \|z_i(s)\|_2^2 + \|z_j(s)\|_2^2 \big)\\
&\qquad + \sum_{j=1}^{n} |b_{ij}| \nu_j \Big( \|z_i(s)\|_2^2 + \int_{-\infty}^{s} k_{ij}(s-u)\, \|z_j(u)\|_2^2\,du \Big) \Big\}\,ds\\
&+ \int_0^t 2 e^{\varepsilon s} \sum_{i=1}^{n} \sum_{l=1}^{\infty} r_i \int_X z_i(s)\, \sigma_{il}(y_i(s,x))\,dx\,dw_{il}(s) + \int_0^t e^{\varepsilon s} \sum_{i=1}^{n} \sum_{l=1}^{\infty} r_i L_{il}^2 \|z_i(s)\|_2^2\,ds,
\end{aligned}$$
where we have used $\int_X z_i [h_i(y_i) - h_i(y_i^*)]\,dx \ge \lambda_i \|z_i\|_2^2$, the elementary inequality $2\|z_i\|_2 \|z_j\|_2 \le \|z_i\|_2^2 + \|z_j\|_2^2$, and the nonpositivity of the diffusion term.
Notice that
$$\begin{aligned}
\int_0^t e^{\varepsilon s} \int_{-\infty}^{s} k_{ij}(s-u)\, \|z_j(u)\|_2^2\,du\,ds
&= \int_0^t e^{\varepsilon s} \int_0^\infty k_{ij}(u)\, \|z_j(s-u)\|_2^2\,du\,ds
= \int_0^\infty k_{ij}(u) \int_0^t e^{\varepsilon s} \|z_j(s-u)\|_2^2\,ds\,du\\
&= \int_0^\infty k_{ij}(u) \int_{-u}^{t-u} e^{\varepsilon (u+s)} \|z_j(s)\|_2^2\,ds\,du\\
&\le \int_0^\infty k_{ij}(u)\, e^{\varepsilon u} \int_{-u}^{0} e^{\varepsilon s} \|z_j(s)\|_2^2\,ds\,du + \int_0^t e^{\varepsilon s} \|z_j(s)\|_2^2 \int_0^\infty k_{ij}(u)\, e^{\varepsilon u}\,du\,ds.
\end{aligned}$$
Hence
$$\begin{aligned}
\int_X V(z(t), t)\,dx \le{}& \sum_{i=1}^{n} r_i \|z_i(0)\|_2^2 - \int_0^t e^{\varepsilon s} \sum_{i=1}^{n} \Big\{ r_i \Big[ 2 c_i \lambda_i - \sum_{j=1}^{n} |a_{ij}| \mu_j - \sum_{j=1}^{n} |b_{ij}| \nu_j - \sum_{l=1}^{\infty} L_{il}^2 - \varepsilon \Big]\\
&\qquad - \sum_{j=1}^{n} |a_{ji}| \mu_i r_j - \sum_{j=1}^{n} |b_{ji}| \nu_i r_j \int_0^\infty k_{ji}(u)\, e^{\varepsilon u}\,du \Big\} \|z_i(s)\|_2^2\,ds\\
&+ \sum_{i=1}^{n} r_i \sum_{j=1}^{n} |b_{ij}| \nu_j \int_0^\infty k_{ij}(u)\, e^{\varepsilon u} \int_{-u}^{0} e^{\varepsilon s} \|z_j(s)\|_2^2\,ds\,du\\
&+ \int_0^t 2 e^{\varepsilon s} \sum_{i=1}^{n} \sum_{l=1}^{\infty} r_i \int_X z_i(s)\, \sigma_{il}(y_i(s,x))\,dx\,dw_{il}(s)\\
={}& \sum_{i=1}^{n} r_i \|z_i(0)\|_2^2 - \int_0^t e^{\varepsilon s} \sum_{i=1}^{n} \Big\{ r_i [ c_i \lambda_i - \bar c_i - \varepsilon ] - \sum_{j=1}^{n} |a_{ji}| \mu_i r_j - \sum_{j=1}^{n} |b_{ji}| \nu_i r_j \int_0^\infty k_{ji}(u)\, e^{\varepsilon u}\,du \Big\} \|z_i(s)\|_2^2\,ds\\
&+ \sum_{i=1}^{n} r_i \sum_{j=1}^{n} |b_{ij}| \nu_j \int_0^\infty k_{ij}(u)\, e^{\varepsilon u} \int_{-u}^{0} e^{\varepsilon s} \|z_j(s)\|_2^2\,ds\,du + \int_0^t 2 e^{\varepsilon s} \sum_{i=1}^{n} \sum_{l=1}^{\infty} r_i \int_X z_i(s)\, \sigma_{il}(y_i(s,x))\,dx\,dw_{il}(s)\\
\le{}& \sum_{i=1}^{n} r_i \|z_i(0)\|_2^2 + \sum_{i=1}^{n} r_i \sum_{j=1}^{n} |b_{ij}| \nu_j \int_0^\infty k_{ij}(u)\, e^{\varepsilon u} \int_{-u}^{0} e^{\varepsilon s} \|z_j(s)\|_2^2\,ds\,du\\
&+ \int_0^t 2 e^{\varepsilon s} \sum_{i=1}^{n} \sum_{l=1}^{\infty} r_i \int_X z_i(s)\, \sigma_{il}(y_i(s,x))\,dx\,dw_{il}(s) \qquad (\text{using (A5) and (3.2)}). \qquad (3.4)
\end{aligned}$$
It is obvious that the right-hand side of (3.4) is a nonnegative semimartingale. From Lemma 2.4, its limit is finite almost surely as $t \to \infty$, which shows that
$$ \limsup_{t \to \infty} \int_X V(z(t), t)\,dx < +\infty, \quad P\text{-a.s.} $$
Since
$$ \int_X V(z(t), t)\,dx = e^{\varepsilon t} \sum_{i=1}^{n} r_i \|z_i(t)\|_2^2 \ge \min_{1 \le i \le n} \{r_i\}\, e^{\varepsilon t} \sum_{i=1}^{n} \|z_i(t)\|_2^2, $$
we have
$$ \limsup_{t \to \infty} \Big( \min_{1 \le i \le n} \{r_i\}\, e^{\varepsilon t} \sum_{i=1}^{n} \|z_i(t)\|_2^2 \Big) < +\infty, \quad P\text{-a.s.}, $$
which implies
$$ \limsup_{t \to \infty} \frac{1}{t} \ln \Big( \sum_{i=1}^{n} \|z_i(t)\|_2^2 \Big) \le -\varepsilon, \quad P\text{-a.s.}, $$
that is,
$$ \limsup_{t \to \infty} \frac{1}{t} \ln \|z(t)\| \le -\frac{\varepsilon}{2}, \quad P\text{-a.s.} $$
The proof is complete. □
4. Moment exponential stability of an equilibrium solution

In this section, we study the first and second moment exponential stability of an equilibrium solution of system (2.3), and then make some comparisons with the usual continuously distributed delayed RNNs without diffusion or stochastic perturbation.

Theorem 4.1. Under the assumptions of Theorem 3.1, Eq. (2.3) is exponentially stable in mean square.
Proof. Taking expectations on both sides of (3.4) and noticing that
$$ E \int_0^t 2 e^{\varepsilon s} \sum_{i=1}^{n} \sum_{l=1}^{\infty} r_i \int_X z_i(s)\, \sigma_{il}(y_i(s,x))\,dx\,dw_{il}(s) = 0, $$
we have
$$\begin{aligned}
\min_{1 \le i \le n} \{r_i\}\, e^{\varepsilon t} \sum_{i=1}^{n} E\|z_i(t)\|_2^2
&\le \sum_{i=1}^{n} r_i\, E\|z_i(0)\|_2^2 + \sum_{i=1}^{n} r_i \sum_{j=1}^{n} |b_{ij}| \nu_j \int_0^\infty k_{ij}(u)\, e^{\varepsilon u} \int_{-u}^{0} e^{\varepsilon s}\, E\|z_j(s)\|_2^2\,ds\,du\\
&\le \max_{1 \le i \le n} \{r_i\} \sum_{i=1}^{n} \Big( E\|z_i(0)\|_2^2 + \sum_{j=1}^{n} |b_{ij}| \nu_j \int_0^\infty k_{ij}(u)\, e^{\varepsilon u} \int_{-u}^{0} e^{\varepsilon s}\, E\|z_j(s)\|_2^2\,ds\,du \Big).
\end{aligned}$$
It is then easy to see that there exists a positive constant $K$ such that
$$ E\|z(t)\|^2 = E \sum_{i=1}^{n} \|z_i(t)\|_2^2 \le K\, E\Big( \sum_{i=1}^{n} \|z_i(0)\|_2^2 \Big)\, e^{-\varepsilon t}. $$
The proof is complete. □
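The mean square decay asserted by Theorem 4.1 can be observed numerically. The following Monte Carlo sketch is our own illustration (not the paper's method): it treats a one-neuron special case of (2.3) without diffusion, with $h(y)=y$, $f=\arctan$, $g=\tanh$, kernel $k(t)=e^{-t}$ and the assumed noise $\sigma(y)=L_{11}y$, $J=0$, so that $y^*=0$; the exponential kernel is absorbed into the auxiliary variable $m(t)=\int_{-\infty}^t e^{-(t-s)}g(y(s))\,ds$, which obeys $dm=(g(y)-m)\,dt$.

```python
import numpy as np

rng = np.random.default_rng(1)
c, a, b, L11 = 16.0, 1.0, -3.0, 4.0   # parameters borrowed from Example 1
dt, T, paths = 1e-4, 0.5, 2000

y = np.ones(paths)                    # constant history phi = 1
m = np.tanh(y)                        # delay term under that constant history
ms = []                               # running estimates of E||z(t)||^2
for _ in range(int(T / dt)):
    dW = rng.normal(scale=np.sqrt(dt), size=paths)
    y, m = (y + (-c * y + a * np.arctan(y) + b * m) * dt + L11 * y * dW,
            m + (np.tanh(y) - m) * dt)
    ms.append((y**2).mean())

t = dt * np.arange(1, len(ms) + 1)
print("estimated mean-square exponent:", np.polyfit(t, np.log(ms), 1)[0])
```

With these parameters the estimated exponent is strictly negative, consistent with Theorem 4.1; the particular value depends on the discretization and the number of sample paths.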
Since $(E\|z(t)\|)^2 \le E\|z(t)\|^2$, condition (A5) also assures the mean value exponential stability of an equilibrium solution of system (2.3). For the first moment, however, this condition is rather crude. In the following result we give a sharper, more practical condition.

Theorem 4.2. Under assumptions (A1), (A2), (A4) and

(A6) $C\Lambda - \tilde C - A^+ U - B^+ V$ is a nonsingular M-matrix, where $\tilde C = \operatorname{diag}(\tilde c_1, \ldots, \tilde c_n)$,
$$ \tilde c_i = \frac{1}{2} \sum_{l=1}^{\infty} L_{il}^2 + K_1 \Big( \sum_{l=1}^{\infty} L_{il}^2 \Big)^{1/2} \sqrt{c_i \lambda_i}, \quad 1 \le i \le n, $$
$K_1$ is the positive constant provided by Lemma 2.5 (for $p = 1$) and the other notations are the same as in (A3),

Eq. (2.3) is exponentially stable in mean value.

Proof. Since $C\Lambda - \tilde C - A^+ U - B^+ V$ is a nonsingular M-matrix, there exist constants $r_i > 0$, $1 \le i \le n$, such that
$$ r_i (c_i \lambda_i - \tilde c_i) - \sum_{j=1}^{n} |a_{ji}| \mu_i r_j - \sum_{j=1}^{n} |b_{ji}| \nu_i r_j > 0. \qquad (4.1) $$
Clearly,
$$ r_i c_i \lambda_i - \sum_{j=1}^{n} |a_{ji}| \mu_i r_j - \sum_{j=1}^{n} |b_{ji}| \nu_i r_j > 0, $$
which implies that $C\Lambda - A^+ U - B^+ V$ is a nonsingular M-matrix. As stated in Section 2, (2.1) then has an equilibrium point and is equivalent to (2.3). Moreover, it follows from (4.1) that there exists a sufficiently small constant $\varepsilon > 0$ such that
$$ r_i (c_i \lambda_i - \tilde c_i - \varepsilon) - \sum_{j=1}^{n} |a_{ji}| \mu_i r_j - \sum_{j=1}^{n} |b_{ji}| \nu_i r_j \int_0^\infty k_{ji}(u)\, e^{\varepsilon u}\,du > 0. \qquad (4.2) $$
Let $y^*$ be an equilibrium point and $y(t,x) = (y_1(t,x), \ldots, y_n(t,x))^{\mathrm T}$ a solution of system (2.1), and let $z_i = y_i - y_i^*$. Applying Itô's formula to $z_i^2$ and integrating with respect to $x$, we have
$$\begin{aligned}
d\|z_i\|_2^2 ={}& \int_X 2 z_i \Big\{ \sum_{k=1}^{m} \frac{\partial}{\partial x_k}\Big( D_{ik} \frac{\partial (y_i(t) - y_i^*)}{\partial x_k} \Big)\,dt + \Big[ -c_i [h_i(y_i(t,x)) - h_i(y_i^*)] + \sum_{j=1}^{n} a_{ij} [f_j(y_j(t,x)) - f_j(y_j^*)]\\
&\qquad + \sum_{j=1}^{n} b_{ij} \int_{-\infty}^{t} k_{ij}(t-s)\, [g_j(y_j(s,x)) - g_j(y_j^*)]\,ds \Big]\,dt + \sum_{l=1}^{\infty} \sigma_{il}(y_i(t,x))\,dw_{il}(t) \Big\}\,dx + \int_X \sum_{l=1}^{\infty} \sigma_{il}^2(y_i(t,x))\,dt\,dx.
\end{aligned}$$
Then, applying Itô's formula to $(\|z_i\|_2^2)^{1/2}$ and adding and subtracting $c_i \lambda_i \|z_i\|_2\,dt$, we have
$$\begin{aligned}
d\|z_i\|_2 ={}& \frac{1}{2\|z_i\|_2} \int_X 2 z_i \Big\{ \sum_{k=1}^{m} \frac{\partial}{\partial x_k}\Big( D_{ik} \frac{\partial (y_i(t) - y_i^*)}{\partial x_k} \Big)\,dt + \Big[ -c_i [h_i(y_i(t,x)) - h_i(y_i^*)] + \sum_{j=1}^{n} a_{ij} [f_j(y_j(t,x)) - f_j(y_j^*)]\\
&\qquad + \sum_{j=1}^{n} b_{ij} \int_{-\infty}^{t} k_{ij}(t-s)\, [g_j(y_j(s,x)) - g_j(y_j^*)]\,ds \Big]\,dt + \sum_{l=1}^{\infty} \sigma_{il}(y_i(t,x))\,dw_{il}(t) \Big\}\,dx\\
&+ \frac{1}{2\|z_i\|_2} \int_X \sum_{l=1}^{\infty} \sigma_{il}^2(y_i(t,x))\,dx\,dt - \frac{1}{2\|z_i\|_2^3} \sum_{l=1}^{\infty} \Big( \int_X z_i\, \sigma_{il}(y_i(t,x))\,dx \Big)^2 dt - c_i \lambda_i \|z_i\|_2\,dt + c_i \lambda_i \|z_i\|_2\,dt.
\end{aligned}$$

By the variation-of-parameters method, (A1), (A2), (3.4) and the Hölder inequality, we have, for $t \ge 0$ and $i = 1, 2, \ldots, n$,
$$\begin{aligned}
\|z_i(t)\|_2 ={}& e^{-c_i \lambda_i t} \|z_i(0)\|_2 + \int_0^t e^{-c_i \lambda_i (t-s)} \Big\{ \frac{1}{\|z_i\|_2} \int_X z_i \Big[ \sum_{k=1}^{m} \frac{\partial}{\partial x_k}\Big( D_{ik} \frac{\partial (y_i(s) - y_i^*)}{\partial x_k} \Big) - c_i [h_i(y_i(s,x)) - h_i(y_i^*)]\\
&\qquad + \sum_{j=1}^{n} a_{ij} [f_j(y_j(s,x)) - f_j(y_j^*)] + \sum_{j=1}^{n} b_{ij} \int_{-\infty}^{s} k_{ij}(s-u)\, [g_j(y_j(u,x)) - g_j(y_j^*)]\,du \Big]\,dx + c_i \lambda_i \|z_i\|_2 \Big\}\,ds\\
&+ \int_0^t e^{-c_i \lambda_i (t-s)} \Big\{ \frac{1}{2\|z_i\|_2} \int_X \sum_{l=1}^{\infty} \sigma_{il}^2(y_i(s,x))\,dx - \frac{1}{2\|z_i\|_2^3} \sum_{l=1}^{\infty} \Big( \int_X z_i\, \sigma_{il}(y_i(s,x))\,dx \Big)^2 \Big\}\,ds\\
&+ \int_0^t e^{-c_i \lambda_i (t-s)} \frac{1}{\|z_i\|_2} \int_X z_i \sum_{l=1}^{\infty} \sigma_{il}(y_i(s,x))\,dw_{il}(s)\,dx\\
\le{}& e^{-c_i \lambda_i t} \|z_i(0)\|_2 + \int_0^t e^{-c_i \lambda_i (t-s)} \Big\{ \sum_{j=1}^{n} |a_{ij}| \mu_j \|z_j\|_2 + \sum_{j=1}^{n} |b_{ij}| \nu_j \int_{-\infty}^{s} k_{ij}(s-u)\, \|z_j(u)\|_2\,du + \frac{1}{2} \sum_{l=1}^{\infty} L_{il}^2 \|z_i\|_2 \Big\}\,ds\\
&+ \int_0^t e^{-c_i \lambda_i (t-s)} \frac{1}{\|z_i\|_2} \int_X z_i \sum_{l=1}^{\infty} \sigma_{il}(y_i(s,x))\,dw_{il}(s)\,dx,
\end{aligned}$$
since the diffusion term is nonpositive, $\int_X z_i [h_i(y_i) - h_i(y_i^*)]\,dx \ge \lambda_i \|z_i\|_2^2$, and the quadratic correction term is nonpositive. For the above constant $\varepsilon$ satisfying (4.2), we have
$$\begin{aligned}
\sup_{0 \le \theta \le t} \big( \|z_i(\theta)\|_2\, e^{\varepsilon \theta} \big) \le{}& \sup_{0 \le \theta \le t} e^{(\varepsilon - c_i \lambda_i)\theta} \|z_i(0)\|_2 + \sup_{0 \le \theta \le t} e^{\varepsilon \theta} \int_0^\theta e^{-c_i \lambda_i (\theta-s)} \Big\{ \sum_{j=1}^{n} |a_{ij}| \mu_j \|z_j(s)\|_2\\
&+ \sum_{j=1}^{n} |b_{ij}| \nu_j \int_{-\infty}^{s} k_{ij}(s-u)\, \|z_j(u)\|_2\,du + \frac{1}{2} \sum_{l=1}^{\infty} L_{il}^2 \|z_i(s)\|_2 \Big\}\,ds\\
&+ \sup_{0 \le \theta \le t} e^{\varepsilon \theta} \Big| \int_0^\theta e^{-c_i \lambda_i (\theta-s)} \frac{1}{\|z_i(s)\|_2} \int_X z_i(s) \sum_{l=1}^{\infty} \sigma_{il}(y_i(s,x))\,dx\,dw_{il}(s) \Big|.
\end{aligned}$$

It is easy to obtain
$$ \sup_{0 \le \theta \le t} e^{\varepsilon \theta} \int_0^\theta e^{-c_i \lambda_i (\theta-s)} \sum_{j=1}^{n} |a_{ij}| \mu_j \|z_j(s)\|_2\,ds
\le \sup_{0 \le \theta \le t} \int_0^\theta e^{(\varepsilon - c_i \lambda_i)(\theta-s)}\,ds\; \sum_{j=1}^{n} |a_{ij}| \mu_j \sup_{0 \le s \le t} \big( e^{\varepsilon s} \|z_j(s)\|_2 \big)
\le (c_i \lambda_i - \varepsilon)^{-1} \sum_{j=1}^{n} |a_{ij}| \mu_j \sup_{0 \le s \le t} \big( e^{\varepsilon s} \|z_j(s)\|_2 \big) $$
and
$$ \sup_{0 \le \theta \le t} e^{\varepsilon \theta} \int_0^\theta e^{-c_i \lambda_i (\theta-s)}\, \frac{1}{2} \sum_{l=1}^{\infty} L_{il}^2 \|z_i(s)\|_2\,ds
\le (c_i \lambda_i - \varepsilon)^{-1}\, \frac{1}{2} \sum_{l=1}^{\infty} L_{il}^2 \sup_{0 \le s \le t} \big( e^{\varepsilon s} \|z_i(s)\|_2 \big). $$
One can also see that
$$\begin{aligned}
\sup_{0 \le \theta \le t} e^{\varepsilon \theta} \int_0^\theta e^{-c_i \lambda_i (\theta-s)} \sum_{j=1}^{n} |b_{ij}| \nu_j \int_{-\infty}^{s} k_{ij}(s-u)\, \|z_j(u)\|_2\,du\,ds
&= \sum_{j=1}^{n} |b_{ij}| \nu_j \sup_{0 \le \theta \le t} e^{\varepsilon \theta} \int_0^\theta e^{-c_i \lambda_i (\theta-s)} \int_0^\infty k_{ij}(u)\, \|z_j(s-u)\|_2\,du\,ds\\
&= \sum_{j=1}^{n} |b_{ij}| \nu_j \sup_{0 \le \theta \le t} \int_0^\infty k_{ij}(u)\, e^{\varepsilon u} \int_0^\theta e^{(\varepsilon - c_i \lambda_i)(\theta-s)}\, e^{\varepsilon(s-u)} \|z_j(s-u)\|_2\,ds\,du\\
&\le (c_i \lambda_i - \varepsilon)^{-1} \sum_{j=1}^{n} |b_{ij}| \nu_j \int_0^\infty k_{ij}(u)\, e^{\varepsilon u} \sup_{-u \le s \le t} \big( e^{\varepsilon s} \|z_j(s)\|_2 \big)\,du\\
&\le (c_i \lambda_i - \varepsilon)^{-1} \sum_{j=1}^{n} |b_{ij}| \nu_j \int_0^\infty k_{ij}(u)\, e^{\varepsilon u}\,du \Big( \sup_{-\infty < s \le 0} \big( e^{\varepsilon s} \|z_j(s)\|_2 \big) + \sup_{0 \le s \le t} \big( e^{\varepsilon s} \|z_j(s)\|_2 \big) \Big).
\end{aligned}$$

It is well known that the cross-variation of the processes $w_{il}(t)$ is given by
$$ \langle w_{il}, w_{jl} \rangle_t = \delta_{ij}\, t, \quad 1 \le i, j \le n,\ l \in \mathbb N. $$
It follows from Lemma 2.5, the properties of the stochastic integral, the Hölder inequality, (A2) and (A4) that
$$\begin{aligned}
E \sup_{0 \le \theta \le t} e^{\varepsilon \theta} \Big| \int_0^\theta e^{-c_i \lambda_i (\theta-s)} \frac{1}{\|z_i(s)\|_2} \int_X z_i(s) \sum_{l=1}^{\infty} \sigma_{il}(y_i(s,x))\,dx\,dw_{il}(s) \Big|
&\le K_1\, E \Big( \int_0^t e^{2\varepsilon t - 2 c_i \lambda_i (t-s)}\, \frac{1}{\|z_i(s)\|_2^2} \sum_{l=1}^{\infty} \Big( \int_X z_i(s)\, \sigma_{il}(y_i(s,x))\,dx \Big)^2 ds \Big)^{1/2}\\
&\le K_1\, E \Big( \int_0^t e^{2\varepsilon t - 2 c_i \lambda_i (t-s)} \sum_{l=1}^{\infty} L_{il}^2 \|z_i(s)\|_2^2\,ds \Big)^{1/2}\\
&\le K_1 \Big( \sum_{l=1}^{\infty} L_{il}^2 \Big)^{1/2} \Big( \int_0^t e^{2(\varepsilon - c_i \lambda_i)(t-s)}\,ds \Big)^{1/2} E \sup_{0 \le s \le t} \big( e^{\varepsilon s} \|z_i(s)\|_2 \big)\\
&\le K_1 \Big( \sum_{l=1}^{\infty} L_{il}^2 \Big)^{1/2} \frac{1}{\sqrt{c_i \lambda_i - \varepsilon}}\, G_i(t)
\le K_1 \Big( \sum_{l=1}^{\infty} L_{il}^2 \Big)^{1/2} \frac{\sqrt{c_i \lambda_i}}{c_i \lambda_i - \varepsilon}\, G_i(t),
\end{aligned}$$
where
$$ G_i(t) = E \Big[ \sup_{0 \le \theta \le t} \|z_i(\theta)\|_2\, e^{\varepsilon \theta} \Big], \quad i = 1, \ldots, n. $$
Thus we have
$$\begin{aligned}
G_i(t) \le{}& E\|z_i(0)\|_2 + (c_i \lambda_i - \varepsilon)^{-1} \sum_{j=1}^{n} |b_{ij}| \nu_j \int_0^\infty k_{ij}(u)\, e^{\varepsilon u}\,du\; E \Big[ \sup_{-\infty < s \le 0} \big( e^{\varepsilon s} \|z_j(s)\|_2 \big) \Big]\\
&+ (c_i \lambda_i - \varepsilon)^{-1} \sum_{j=1}^{n} \Big( |a_{ij}| \mu_j + |b_{ij}| \nu_j \int_0^\infty k_{ij}(u)\, e^{\varepsilon u}\,du \Big) G_j(t)
+ (c_i \lambda_i - \varepsilon)^{-1} \Big\{ \frac{1}{2} \sum_{l=1}^{\infty} L_{il}^2 + K_1 \Big( \sum_{l=1}^{\infty} L_{il}^2 \Big)^{1/2} \sqrt{c_i \lambda_i} \Big\} G_i(t).
\end{aligned}$$
Obviously, there exist positive constants $\delta_i$ ($i = 1, \ldots, n$) such that
$$ \delta_i\, E\|z_i(0)\|_2 \ge (c_i \lambda_i - \varepsilon)^{-1} \sum_{j=1}^{n} |b_{ij}| \nu_j \int_0^\infty k_{ij}(u)\, e^{\varepsilon u}\,du\; E \Big[ \sup_{-\infty < s \le 0} \big( e^{\varepsilon s} \|z_j(s)\|_2 \big) \Big]. $$
So
$$ \sum_{i=1}^{n} r_i \Big\{ c_i \lambda_i - \varepsilon - \frac{1}{2} \sum_{l=1}^{\infty} L_{il}^2 - K_1 \Big( \sum_{l=1}^{\infty} L_{il}^2 \Big)^{1/2} \sqrt{c_i \lambda_i} \Big\} G_i(t)
\le \sum_{i=1}^{n} (c_i \lambda_i - \varepsilon) r_i (1 + \delta_i)\, E\|z_i(0)\|_2 + \sum_{i=1}^{n} \Big\{ \sum_{j=1}^{n} |a_{ji}| \mu_i r_j + \sum_{j=1}^{n} |b_{ji}| \nu_i r_j \int_0^\infty k_{ji}(u)\, e^{\varepsilon u}\,du \Big\} G_i(t). $$
Taking (4.2) into account and setting
$$ \tilde K = \min_{1 \le i \le n} \Big\{ r_i \Big[ c_i \lambda_i - \varepsilon - \frac{1}{2} \sum_{l=1}^{\infty} L_{il}^2 - K_1 \Big( \sum_{l=1}^{\infty} L_{il}^2 \Big)^{1/2} \sqrt{c_i \lambda_i} \Big] - \sum_{j=1}^{n} |a_{ji}| \mu_i r_j - \sum_{j=1}^{n} |b_{ji}| \nu_i r_j \int_0^\infty k_{ji}(u)\, e^{\varepsilon u}\,du \Big\} > 0, $$
we have
$$ \tilde K \sum_{i=1}^{n} E\|z_i(t)\|_2\, e^{\varepsilon t} \le \tilde K \sum_{i=1}^{n} G_i(t) \le \max_{1 \le i \le n} \{ c_i \lambda_i r_i (1 + \delta_i) \} \sum_{i=1}^{n} E\|z_i(0)\|_2, $$
that is,
$$ \sum_{i=1}^{n} E\|z_i(t)\|_2 \le K \sum_{i=1}^{n} E\|z_i(0)\|_2\, e^{-\varepsilon t}, $$
with $K = \tilde K^{-1} \max_{1 \le i \le n} \{ c_i \lambda_i r_i (1 + \delta_i) \}$, which completes the proof. □
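As Remark 4.3 below points out, condition (A6) is convenient to check in applications. A numerical sketch of that check (our own illustration; the data are those of Example 2 in Section 5, where $K_1 = \frac58$ is assumed):

```python
import numpy as np

c = np.array([16.0, 9.0]); lam = mu = nu = np.ones(2)  # c_i, lambda, mu, nu
Ap = np.abs(np.array([[1.0, 2.0], [3.0, 1.0]]))        # A+
Bp = np.abs(np.array([[-3.0, 1.0], [2.0, -2.0]]))      # B+
L = np.array([[2**0.5 / 2, 2**0.5 / 2], [2**0.5 / 4, 7**0.5 / 4]])
K1 = 5.0 / 8.0                                         # assumed BDG constant, p = 1

L2 = (L**2).sum(axis=1)                                # sum_l L_il^2
c_tilde = 0.5 * L2 + K1 * np.sqrt(L2) * np.sqrt(c * lam)
W = np.diag(c * lam - c_tilde) - Ap * mu - Bp * nu     # C*Lambda - C~ - A+U - B+V
print(c_tilde)  # [3.     1.6875]          (1.6875 = 27/16)
print(W)        # [[ 9. -3.] [-5.  4.3125]] (4.3125 = 69/16)
```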
Remark 4.3. Clearly, (A6) is more accurate than (A5) for the mean value exponential stability of an equilibrium solution of system (2.1). Moreover, condition (A6) is more convenient to check in practical applications.

Now we compare our results with previous results derived in the literature for the usual continuously distributed delayed RNNs without diffusion or stochastic perturbation. Set $\sigma_{il} \equiv 0$ ($i = 1, \ldots, n$, $l \in \mathbb N$). Then system (2.1) becomes the deterministic continuously distributed delayed reaction–diffusion RNN (2.2), and condition (A6) reduces to (A3). We then have the following result from Theorem 4.2.

Corollary 4.4. Under assumptions (A1)–(A3), Eq. (2.2) is globally exponentially stable.

Set $D_{ik} = 0$ ($i = 1, \ldots, n$, $k = 1, \ldots, m$) and $\sigma_{il} \equiv 0$ ($i = 1, \ldots, n$, $l \in \mathbb N$). Then system (2.1) reduces to the usual continuously distributed delayed RNN
$$ \dot y_i(t) = -c_i h_i(y_i(t)) + \sum_{j=1}^{n} a_{ij} f_j(y_j(t)) + \sum_{j=1}^{n} b_{ij} \int_{-\infty}^{t} k_{ij}(t-s)\, g_j(y_j(s))\,ds + J_i, \qquad (4.3) $$
which has been extensively studied [12,32]. Clearly, assumptions (A1)–(A3) are independent of $D_{ik}$ ($i = 1, \ldots, n$, $k = 1, \ldots, m$). Thus, the following result obtained in [12,32] is immediate.

Corollary 4.5. Under assumptions (A1)–(A3), Eq. (4.3) is globally exponentially stable.

Remark 4.6. The results in this paper show that the exponential stability criteria for stochastic continuously distributed delayed reaction–diffusion RNNs are independent of the magnitude of the delays and of the diffusion effect, but do depend on the magnitude of the noise; in this sense diffusion and delays are harmless, but noisy fluctuations are important.

5. Two examples

In this section, we give two examples to demonstrate our results. The stochastic reaction–diffusion neural network models described here are somewhat artificial, but they clearly illustrate how the results of this paper can be applied.

Example 1. Consider the stochastic reaction–diffusion neural network with continuously distributed delays
$$\begin{aligned}
dy_1(t) ={}& \Big\{ \sum_{k=1}^{2} \frac{\partial}{\partial x_k}\Big( D_{1k} \frac{\partial y_1}{\partial x_k} \Big) - c_1 h(y_1(t)) + a_{11} f(y_1(t)) + a_{12} f(y_2(t))\\
&\quad + b_{11} \int_{-\infty}^{t} k(t-s)\, g(y_1(s))\,ds + b_{12} \int_{-\infty}^{t} k(t-s)\, g(y_2(s))\,ds \Big\}\,dt + L_{11} y_1(t)\,dw_{11}(t) + L_{12} y_1(t)\,dw_{12}(t),\\
dy_2(t) ={}& \Big\{ \sum_{k=1}^{2} \frac{\partial}{\partial x_k}\Big( D_{2k} \frac{\partial y_2}{\partial x_k} \Big) - c_2 h(y_2(t)) + a_{21} f(y_1(t)) + a_{22} f(y_2(t))\\
&\quad + b_{21} \int_{-\infty}^{t} k(t-s)\, g(y_1(s))\,ds + b_{22} \int_{-\infty}^{t} k(t-s)\, g(y_2(s))\,ds \Big\}\,dt + L_{21} y_2(t)\,dw_{21}(t) + L_{22} y_2(t)\,dw_{22}(t),
\end{aligned}$$
where $h(y) = y + e^y - 1$, $f(y) = \arctan y$, $g(y) = (e^y - e^{-y})/(e^y + e^{-y})$, $k(t) = e^{-t}$, and $D_{ik} = x_k$ or a constant, $i, k = 1, 2$. It is obvious that $\lambda_i = \mu_j = \nu_j = 1$ and $h(0) = 0$, $i, j = 1, 2$. Take
$$ C = \begin{pmatrix} 16 & 0 \\ 0 & 9 \end{pmatrix}, \quad A = \begin{pmatrix} 1 & 2 \\ 3 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} -3 & 1 \\ 2 & -2 \end{pmatrix}, \quad L = \begin{pmatrix} 4 & 1 \\ 1 & 1 \end{pmatrix}. $$
Therefore, we have
$$ -c_1 \lambda_1 + \sum_{j=1}^{2} |a_{1j}| \mu_j + \sum_{j=1}^{2} |b_{1j}| \nu_j + \sum_{l=1}^{2} L_{1l}^2 = 8, \qquad -c_2 \lambda_2 + \sum_{j=1}^{2} |a_{2j}| \mu_j + \sum_{j=1}^{2} |b_{2j}| \nu_j + \sum_{l=1}^{2} L_{2l}^2 = 1, $$
and
$$ C\Lambda - \bar C - A^+ U - B^+ V = \begin{pmatrix} 4 & -3 \\ -5 & 5 \end{pmatrix}, $$
which is a nonsingular M-matrix. It follows from Theorems 3.1 and 4.1 that this system is almost surely exponentially stable and exponentially stable in mean square.
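An end-to-end numerical check of (A5) for this example (a sketch of our own; it reproduces the values of $\bar c_i$ and the matrix displayed above):

```python
import numpy as np

c = np.array([16.0, 9.0]); lam = mu = nu = np.ones(2)
A = np.array([[1.0, 2.0], [3.0, 1.0]])
B = np.array([[-3.0, 1.0], [2.0, -2.0]])
L = np.array([[4.0, 1.0], [1.0, 1.0]])

c_bar = -c * lam + np.abs(A) @ mu + np.abs(B) @ nu + (L**2).sum(axis=1)
W = np.diag(c * lam - c_bar) - np.abs(A) * mu - np.abs(B) * nu
print(c_bar)  # [8. 1.]
print(W)      # [[ 4. -3.] [-5.  5.]]
```

Together with the M-matrix test sketched in Section 2, this verifies (A5) end to end.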
Example 2. For the system in Example 1, take $K_1 = \frac{5}{8}$ and
$$ L = \begin{pmatrix} \dfrac{\sqrt 2}{2} & \dfrac{\sqrt 2}{2} \\[4pt] \dfrac{\sqrt 2}{4} & \dfrac{\sqrt 7}{4} \end{pmatrix}. $$
Then we have
$$ \tilde C = \begin{pmatrix} 3 & 0 \\ 0 & \dfrac{27}{16} \end{pmatrix}, \qquad C\Lambda - \tilde C - A^+ U - B^+ V = \begin{pmatrix} 9 & -3 \\ -5 & \dfrac{69}{16} \end{pmatrix}, $$
which is a nonsingular M-matrix. It follows from Theorem 4.2 that this system is exponentially stable in mean value.

6. Conclusions

As pointed out in Section 1, noise does exist in a neural network, owing to random fluctuations and probabilistic causes in the network. Thus, it is necessary and rewarding to study stochastic effects on the stability of neural networks. In this paper, the almost sure exponential stability, mean value exponential stability and mean square exponential stability of stochastic reaction–diffusion RNNs with continuously distributed delays are studied without assuming boundedness, monotonicity or differentiability of the output functions. Sufficient conditions guaranteeing the almost sure exponential stability, mean value exponential stability and mean square exponential stability of an equilibrium solution are given, respectively, by using the Lyapunov functional method, M-matrix properties, the variation-of-parameters method, inequality techniques and stochastic analysis. Notice that the exponential stability criteria for stochastic continuously distributed delayed reaction–diffusion RNNs are independent of the magnitude of the delays and of the diffusion effect, but do depend on the magnitude of the noise; in this sense delays and diffusion are harmless, but noisy fluctuations are important. Our methods are also suitable for more general stochastic reaction–diffusion neural network models with time delays.

Acknowledgment

The authors would like to thank the reviewers and the editor for their valuable comments and suggestions.

References

[1] S. Arik, Global robust stability of delayed neural networks, IEEE Trans. Circuits Syst. 50 (1) (2003) 156–160.
[2] L. Arnold, Stochastic Differential Equations: Theory and Applications, Wiley, New York, 1972.
[3] S. Blythe, X. Mao, X. Liao, Stability of stochastic delay neural networks, J. Franklin Inst. 338 (2001) 481–495.
[4] J. Buhmann, K. Schulten, Influence of noise on the function of a "physiological" neural network, Biol. Cybern. 56 (1987) 313–327.
[5] J. Cao, Exponential stability and periodic solutions of delayed cellular neural networks, Sci. China Ser. E 43 (3) (2000) 328–336.
[6] J. Cao, Global stability conditions for delayed CNNs, IEEE Trans. Circuits Syst. 48 (11) (2001) 1330–1333.
[7] J. Cao, Q. Song, Stability in Cohen–Grossberg type BAM neural networks with time-varying delays, Nonlinearity 19 (7) (2006) 1601–1617.
[8] J. Cao, K. Yuan, D.W. Ho, J. Lam, Global point dissipativity of neural networks with mixed time-varying delays, Chaos 16 (2006) 013105.
[9] J. Cao, K. Yuan, H. Li, Global asymptotical stability of generalized recurrent neural networks with multiple discrete delays and distributed delays, IEEE Trans. Neural Networks 17 (6) (2006) 1646–1651.
[10] T. Chen, S. Amari, Stability of asymmetric Hopfield networks, IEEE Trans. Neural Networks 12 (1) (2001) 159–163.
[11] T. Chen, S. Amari, New theorems on global convergence of some dynamical systems, Neural Networks 14 (3) (2001) 251–255.
[12] Y. Chen, Global stability of neural networks with distributed delays, Neural Networks 15 (7) (2002) 867–871.
[13] L.O. Chua, T. Roska, The CNN paradigm, IEEE Trans. Circuits Syst. 40 (3) (1993) 147–156.
[14] L.O. Chua, L. Yang, Cellular neural networks: applications, IEEE Trans. Circuits Syst. 35 (10) (1988) 1273–1290.
[15] L.O. Chua, L. Yang, Cellular neural networks: theory, IEEE Trans. Circuits Syst. 35 (10) (1988) 1257–1272.
[16] Y. Le Cun, C.C. Galland, G.E. Hinton, GEMINI: gradient estimation through matrix inversion after noise injection, in: D.S. Touretzky (Ed.), Advances in Neural Information Processing Systems, vol. I, Morgan Kaufmann, San Mateo, 1989, pp. 138–141.
[17] J.W. Evans, Nerve axon equations: II. Stability at rest, Indiana Univ. Math. J. 21 (1) (1972) 75–90.
[18] A. Friedman, Stochastic Differential Equations and Applications, Academic Press, New York, 1976.
[19] S. Haykin, Neural Networks, Prentice-Hall, Upper Saddle River, NJ, 1994.
[20] J. Liang, J. Cao, Global exponential stability of reaction–diffusion recurrent neural networks with time-varying delays, Phys. Lett. A 314 (2003) 434–442.
[21] X. Liao, Absolute Stability of Nonlinear Control Systems, Kluwer Academic Publishers, Dordrecht, 1993.
[22] X. Liao, J. Li, Stability in Gilpin–Ayala competition models with diffusion, Nonlinear Anal. 28 (10) (1997) 1751–1758.
[23] X. Liao, X. Mao, Stability and instability of stochastic neural networks, Stochast. Anal. Appl. 14 (2) (1996) 165–185.
[24] X. Liao, X. Mao, Stability of stochastic neural networks, Neural Parallel Sci. Comput. 4 (2) (1996) 205–224.
[25] X. Mao, Stochastic Differential Equations and Applications, Horwood Publishing, Chichester, 1997.
[26] Q. Song, J. Cao, Dynamics of bidirectional associative memory networks with distributed delays and reaction–diffusion terms, Nonlinear Anal. Real World Appl. 8 (1) (2007) 345–361.
[27] Q. Song, J. Cao, Z. Zhao, Periodic solutions and its exponential stability of reaction–diffusion recurrent neural networks with continuously distributed delays, Nonlinear Anal. Real World Appl. 7 (2006) 65–80.
[28] J. Sun, L. Wan, Convergence dynamics of stochastic reaction–diffusion recurrent neural networks with delays, Int. J. Bifurcation Chaos 15 (7) (2005) 2131–2144.
[29] L. Wang, D. Xu, Global exponential stability of Hopfield reaction–diffusion neural networks with time-varying delays, Sci. China Ser. F 46 (6) (2003) 466–474.
[30] L. Wang, D. Xu, Asymptotic behavior of a class of reaction–diffusion equations with delays, J. Math. Anal. Appl. 281 (2003) 439–453.
[31] J. Zhang, X. Jin, Global stability analysis in delayed Hopfield neural network models, Neural Networks 13 (2000) 745–753.
[32] Q. Zhang, X. Wei, J. Xu, Global exponential stability of Hopfield neural networks with continuously distributed delays, Phys. Lett. A 315 (2003) 431–436.