Commun Nonlinear Sci Numer Simulat 17 (2012) 3708–3718
Adaptive synchronization for stochastic competitive neural networks with mixed time-varying delays

Qintao Gan (corresponding author, e-mail: [email protected]), Renxi Hu, Yuhua Liang
Department of Basic Science, Shijiazhuang Mechanical Engineering College, Shijiazhuang 050003, PR China

This work was supported by the National Natural Science Foundation of China (Nos. 10671209, 11071254).

Article history: Received 10 October 2011; received in revised form 23 December 2011; accepted 14 January 2012; available online 2 February 2012.
Keywords: Exponential synchronization; Stochastic competitive neural networks; Mixed time-varying delays; Adaptive control; p-Norm
Abstract: This paper deals with the synchronization problem for competitive neural networks with different time scales, mixed time-varying delays (both discrete and distributed time-varying delays), and stochastic disturbance. By using stochastic analysis approaches and constructing a novel Lyapunov–Krasovskii functional, an adaptive feedback controller is proposed to guarantee the exponential synchronization of the proposed competitive neural networks in terms of p-norm. The synchronization results presented in this paper generalize and improve many known results. An illustrative example with numerical simulations is also given to show the feasibility and effectiveness of the theoretical results.
1. Introduction

In the past decade, there has been great interest in neural networks due to their wide range of applications, such as associative memory, pattern recognition, signal processing, image processing, fault diagnosis, automatic control engineering, and combinatorial optimization. In fact, time delays are unavoidable in the information processing of neurons for various reasons. For example, time delays can be caused by the finite switching speed of amplifier circuits in neural networks, or they may be deliberately introduced to accomplish motion-related tasks such as moving image processing. In addition, the axonal transmission delays in neural networks are often time-varying [13,28,29,37]. Meanwhile, a neural network usually has a spatial nature due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, so it is desirable to model this by introducing distributed delays. Therefore, both discrete and distributed delays, especially discrete and distributed time-varying delays, should be taken into account when modeling realistic neural networks [14,17,31,32,34,35].

In 1983, Cohen and Grossberg [4] proposed competitive neural networks. Recently, Meyer-Bäse et al. [21–23] proposed the so-called competitive neural networks with different time scales, which can be seen as an extension of Hopfield neural networks [11,12], Grossberg's shunting network [7], and Amari's model for primitive neuronal competition [1]. In the competitive neural network model, there are two types of state variables: the short-term memory (STM) variable describing the fast neural activity, and the long-term memory (LTM) variable describing the slow unsupervised synaptic modifications. Therefore, there are two time scales in the competitive neural network model, one corresponding to the fast change of the state and the other to the slow change of the synapses driven by external stimuli. Recently, many scientific and technical
workers have entered this field with great interest, and various interesting results for competitive neural networks with different time scales have been reported in [9,19,24,25].

It was found in [3] that some delayed neural networks can exhibit chaotic dynamics. As special complex networks, delayed competitive neural networks with different time scales have been found to exhibit complex and unpredictable behaviors, including stable equilibria, periodic oscillations, bifurcation, and chaotic attractors. Therefore, some works dealing with chaos synchronization phenomena in delayed competitive neural networks with different time scales have also been published [6,18]. Gan et al. [6] investigated the adaptive synchronization problem of competitive neural networks with different time scales, discrete constant delays, and unknown parameters by utilizing Lyapunov stability theory and a parameter identification technique, and they also demonstrated the effectiveness of the proposed adaptive feedback scheme in secure communication. Without assuming the activation functions to be differentiable and bounded, some delay-independent or delay-dependent criteria for the exponential synchronization of a class of competitive neural networks with different time scales and time-varying delays were derived in [18], where the controller gain matrix was designed by using Lyapunov functionals, the free-weighting matrix approach, the linear matrix inequality approach, and the Leibniz–Newton formula.

Actually, the synaptic transmission in real neural networks can be viewed as a noisy process introduced by random fluctuations from the release of neurotransmitters and other probabilistic causes [10,40]. Hence, noise is unavoidable and should be taken into consideration in modeling, and considerable attention has been paid to the study of stochastic neural networks; various interesting results have been reported in [2,8,15,26,27,30,33,38,39]. In particular, Gu [8] proposed an adaptive feedback controller to achieve complete synchronization of coupled competitive neural networks with different time scales, discrete constant delays, and stochastic perturbations by using the LaSalle-type invariance principle for stochastic differential delay equations. In [38], Yang et al. investigated the problem of lag synchronization for a class of competitive neural networks with different time scales, discrete and distributed constant delays, as well as uncertain nonlinear external and stochastic perturbations, by designing a simple but robust adaptive controller. In [39], the problem of exponential synchronization of switched stochastic competitive neural networks with different time scales, interval time-varying delays, and distributed delays was studied based on multiple Lyapunov–Krasovskii functionals, the free-weighting matrix method, the Newton–Leibniz formulation, and the invariance principle of stochastic differential equations; there, the distributed delays were unbounded or bounded, the stochastic disturbance took the form of a multi-dimensional Brownian motion, and the networks were governed by switching signals with average dwell time. Based on the Lyapunov second method and an LMI (linear matrix inequality) optimization approach, Park et al. [30] proposed a dynamic feedback controller to guarantee the asymptotic mean-square synchronization of two identical delayed discrete-time complex networks with stochastic disturbances.
From the above analysis, we know that the effects of mixed time-varying delays and stochastic noise disturbance on the dynamic behaviors of competitive neural networks with different time scales cannot be neglected in modeling. As is pointed out in [13], there are few results, or even no results, concerning synchronization schemes for complex networks, in particular stochastic complex networks, based on p-norm. Integrating mixed time-varying delays and stochastic noise disturbance into the study of synchronization for competitive neural networks with different time scales requires more complicated analysis. Therefore, it is interesting to study this problem both in theory and in applications, and there is still room for further improvement. This situation motivates our present investigation. This paper is concerned with the exponential synchronization of competitive neural networks with different time scales, mixed time-varying delays, and stochastic perturbations under adaptive control in terms of p-norm. By introducing a novel Lyapunov–Krasovskii functional based on the idea of delay partitioning and employing stochastic analysis approaches, an adaptive controller is proposed for the considered competitive neural networks. Our results are more general and effectually complement or improve previously known results.

The organization of this paper is as follows. In the next section, the problem statement and preliminaries are presented. In Section 3, an adaptive controller is proposed to ensure the exponential synchronization of competitive neural networks with different time scales, mixed time-varying delays, and stochastic perturbations in terms of p-norm. A numerical example is given in Section 4 to demonstrate the effectiveness and feasibility of our theoretical results. Finally, conclusions are drawn in Section 5.

Notation: Throughout this paper, $\mathbb{R}^n$ and $\mathbb{R}^{n\times m}$ denote the n-dimensional Euclidean space and the set of all $n\times m$ real matrices, respectively; $(\Omega, \mathcal{F}, \mathcal{P})$ is a complete probability space, where $\Omega$ is the sample space, $\mathcal{F}$ is the $\sigma$-algebra of subsets of the sample space, and $\mathcal{P}$ is the probability measure on $\mathcal{F}$; $\mathbb{E}\{\cdot\}$ stands for the mathematical expectation operator with respect to the given probability measure $\mathcal{P}$; "sgn" is the sign function defined in the Filippov sense [5].

2. Modeling and preliminaries

Motivated by the discussions in the above section, in this paper we consider the competitive neural networks with different time scales and mixed time-varying delays described by
STM: $\varepsilon\dot{x}_i(t) = -a_i x_i(t) + \sum_{k=1}^{N} D_{ik} f_k(x_k(t)) + \sum_{k=1}^{N} D^s_{ik} f_k(x_k(t-\tau(t))) + \sum_{k=1}^{N} D^r_{ik} \int_{t-\sigma(t)}^{t} f_k(x_k(s))\,ds + B_i \sum_{j=1}^{P} m_{ij}(t) w_j,$   (2.1)
LTM: $\dot{m}_{ij}(t) = -c_i m_{ij}(t) + w_j f_i(x_i(t)),$

where $i = 1, 2, \ldots, N$ and $j = 1, 2, \ldots, P$; $x_i(t)$ is the neuron current activity level, $f_i(\cdot)$ is the output of the neurons, $m_{ij}(t)$ is the synaptic efficiency, and $w_j$ is the constant external stimulus; $a_i > 0$ represents the time constant of the neuron; $c_i > 0$ represents a disposable scaling constant; $0 < \tau(t) \le \tau$ and $0 < \sigma(t) \le \sigma$ are the discrete time-varying delay and the distributed time-varying delay, respectively; $D_{ik}$, $D^s_{ik}$ and $D^r_{ik}$ represent the connection weight, the delayed connection strength, and the distributed-delay connection strength between the ith neuron and the kth neuron, respectively; $B_i$ is the strength of the external stimulus, and $\varepsilon > 0$ is the time scale of the STM state [22].

After setting $S_i(t) = \sum_{j=1}^{P} m_{ij}(t) w_j = m_i^T(t)w$, where $w = (w_1, w_2, \ldots, w_P)^T$ and $m_i(t) = (m_{i1}(t), m_{i2}(t), \ldots, m_{iP}(t))^T$, system (2.1) can be rewritten in the state-space form
STM: $\varepsilon\dot{x}_i(t) = -a_i x_i(t) + \sum_{k=1}^{N} D_{ik} f_k(x_k(t)) + \sum_{k=1}^{N} D^s_{ik} f_k(x_k(t-\tau(t))) + \sum_{k=1}^{N} D^r_{ik} \int_{t-\sigma(t)}^{t} f_k(x_k(s))\,ds + B_i S_i(t),$
LTM: $\dot{S}_i(t) = -c_i S_i(t) + |w|^2 f_i(x_i(t)),$   (2.2)

where $|w|^2 = w_1^2 + w_2^2 + \cdots + w_P^2$ is a constant. Without loss of generality, the input stimulus vector is assumed to be normalized with unit magnitude, $|w|^2 = 1$, and the fast time-scale parameter $\varepsilon$ is also assumed to be unity, so the competitive neural networks (2.2) simplify to
STM: $\dot{x}_i(t) = -a_i x_i(t) + \sum_{k=1}^{N} D_{ik} f_k(x_k(t)) + \sum_{k=1}^{N} D^s_{ik} f_k(x_k(t-\tau(t))) + \sum_{k=1}^{N} D^r_{ik} \int_{t-\sigma(t)}^{t} f_k(x_k(s))\,ds + B_i S_i(t),$
LTM: $\dot{S}_i(t) = -c_i S_i(t) + f_i(x_i(t)),$   (2.3)
or, in compact form,

STM: $\dot{x}(t) = -Ax(t) + Df(x(t)) + D^s f(x(t-\tau(t))) + D^r \int_{t-\sigma(t)}^{t} f(x(s))\,ds + BS(t),$
LTM: $\dot{S}(t) = -CS(t) + f(x(t)),$   (2.4)

where $t \in \mathbb{R}^+ = [0, \infty)$, $x(t) = (x_1(t), x_2(t), \ldots, x_N(t))^T$, $S(t) = (S_1(t), S_2(t), \ldots, S_N(t))^T$, $A = \mathrm{diag}(a_1, a_2, \ldots, a_N)$, $B = \mathrm{diag}(B_1, B_2, \ldots, B_N)$, $C = \mathrm{diag}(c_1, c_2, \ldots, c_N)$, $D = (D_{ik})_{N\times N}$, $D^s = (D^s_{ik})_{N\times N}$, $D^r = (D^r_{ik})_{N\times N}$, and $f(x(\cdot)) = (f_1(x_1(\cdot)), f_2(x_2(\cdot)), \ldots, f_N(x_N(\cdot)))^T$. System (2.4) is supplemented with the initial values
$x(t) = \phi^x(t) \in C([-\tau_{\max}, 0], \mathbb{R}^N), \qquad S(t) = \phi^S(t) \in C([-\tau_{\max}, 0], \mathbb{R}^N),$   (2.5)

for any $\phi^x(t) = (\phi^x_1(t), \phi^x_2(t), \ldots, \phi^x_N(t))^T$ and $\phi^S(t) = (\phi^S_1(t), \phi^S_2(t), \ldots, \phi^S_N(t))^T$, where $\tau_{\max} = \max\{\tau, \sigma\}$ and $C([-\tau_{\max}, 0], \mathbb{R}^N)$ denotes the family of all continuous $\mathbb{R}^N$-valued functions $\phi^x(\theta)$, $\phi^S(\theta)$ on $[-\tau_{\max}, 0]$ with the p-norm (p is a positive integer) defined by

$\|\phi^x\|_p = \left(\sum_{i=1}^{N} \sup_{-\tau_{\max}\le\theta\le 0} |\phi^x_i(\theta)|^p\right)^{1/p}, \qquad \|\phi^S\|_p = \left(\sum_{i=1}^{N} \sup_{-\tau_{\max}\le\theta\le 0} |\phi^S_i(\theta)|^p\right)^{1/p}.$
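To make the model concrete, the following minimal NumPy sketch (our own illustration, not part of the original paper) integrates the drive network (2.3) with a fixed-step explicit Euler scheme: the delayed terms are read from a stored history buffer and the distributed-delay integral is approximated by a rectangle rule. The function and parameter names (simulate_drive, the step size h, the horizon T) are assumptions of this sketch.

```python
import numpy as np

def simulate_drive(A, B, C, D, Ds, Dr, f, tau, sigma, phi_x, phi_S,
                   T=10.0, h=1e-3, tau_max=2.3):
    """Explicit-Euler integration of the drive system (2.3) with discrete
    delay tau(t) and finitely distributed delay sigma(t); phi_x, phi_S are
    the initial history functions on [-tau_max, 0]."""
    N = A.shape[0]
    n_hist = int(np.ceil(tau_max / h))          # history samples on [-tau_max, 0]
    n_steps = int(np.ceil(T / h))
    x = np.zeros((n_hist + n_steps + 1, N))
    S = np.zeros((n_hist + n_steps + 1, N))
    for m in range(n_hist + 1):                 # fill the initial history
        t = (m - n_hist) * h
        x[m], S[m] = phi_x(t), phi_S(t)
    for m in range(n_hist, n_hist + n_steps):
        t = (m - n_hist) * h
        d = int(round(tau(t) / h))              # index offset for x(t - tau(t))
        r = int(round(sigma(t) / h))            # window length for the integral
        integral = h * f(x[m - r:m]).sum(axis=0)   # rectangle rule for the distributed term
        dx = -A @ x[m] + D @ f(x[m]) + Ds @ f(x[m - d]) + Dr @ integral + B @ S[m]
        dS = -C @ S[m] + f(x[m])
        x[m + 1] = x[m] + h * dx                # STM update (epsilon = 1)
        S[m + 1] = S[m] + h * dS                # LTM update
    return x[n_hist:], S[n_hist:]
```

With f = numpy.tanh and coefficients of the kind used in Section 4, a scheme of this sort produces trajectories comparable to the ones shown later; the step size and quadrature rule are simulation choices, not part of the model.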
In order to deal with synchronization, we need to design a control input for the driven system so that it achieves synchronization with the driving system, provided that the two systems start from different initial conditions. The driving (master) system is given by system (2.4). Suppose that the response (slave) system with stochastic perturbations is designed as
" STM :
s
dyðtÞ ¼ AyðtÞ þ Df ðyðtÞÞ þ D f ðyðt sðtÞÞÞ þ D
r
Z
t
# f ðyðsÞÞds þ BRðtÞ þ uðtÞ dt
trðtÞ
þ HðeðtÞ; eðt sðtÞÞ; eðt rðtÞÞÞdxðtÞ; LTM :
ð2:6Þ
dRðtÞ ¼ ½CRðtÞ þ f ðyðtÞÞdt;
where $e(t) = (e_1(t), e_2(t), \ldots, e_N(t))^T = y(t) - x(t)$ and $z(t) = (z_1(t), z_2(t), \ldots, z_N(t))^T = R(t) - S(t)$ are the synchronization error states, and $u(t) = (u_1(t), u_2(t), \ldots, u_N(t))^T$ is a feedback controller of the form

$u(t) = q\,e(t).$   (2.7)

Instead of the usual linear feedback, the feedback strength $q = \mathrm{diag}(q_1, q_2, \ldots, q_N)$ is updated by the following law:

$\dot{q}_i = -k_i |e_i(t)|^p e^{\mu t},$   (2.8)
where $k_i > 0$ ($i = 1, 2, \ldots, N$) is an arbitrary positive constant. Furthermore, $H = (h_{ik})_{N\times N}$ is the diffusion coefficient matrix (or noise intensity matrix), and the stochastic disturbance $\omega(t) = [\omega_1(t), \omega_2(t), \ldots, \omega_N(t)]^T \in \mathbb{R}^N$ is a Brownian motion defined on the complete probability space $(\Omega, \mathcal{F}, \mathcal{P})$ with

$\mathbb{E}\{d\omega(t)\} = 0, \qquad \mathbb{E}\{d\omega^2(t)\} = dt.$
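The way the noise term and the adaptive gain enter a numerical simulation can be sketched as follows (a minimal Euler–Maruyama step of our own, not the authors' code). The caller is assumed to assemble the delayed drift terms from its own history buffers, and the minus sign in the gain update follows the Lyapunov construction of Section 3.

```python
import numpy as np

rng = np.random.default_rng(0)

def em_step(y, R, x, q, drift_y_open_loop, drift_R, e_tau, e_sigma,
            H, k, h, t, p=2, mu=0.01):
    """One Euler-Maruyama step of the response system (2.6) with the adaptive
    law (2.7)-(2.8).  drift_y_open_loop collects -A y + D f(y) + D^s f(y(t-tau))
    + D^r * (distributed integral) + B R, assembled by the caller from its
    history buffers; H(e, e_tau, e_sigma) returns the noise intensity matrix."""
    e = y - x                                        # synchronization error e(t)
    u = q * e                                        # u(t) = q e(t), componentwise gains (2.7)
    dw = rng.normal(0.0, np.sqrt(h), size=y.shape)   # Brownian increments, E{dw_i^2} = h
    y_next = y + h * (drift_y_open_loop + u) + H(e, e_tau, e_sigma) @ dw
    R_next = R + h * drift_R
    q_next = q - h * k * np.abs(e) ** p * np.exp(mu * t)   # adaptive update (2.8)
    return y_next, R_next, q_next
```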
This type of stochastic perturbation can be regarded as resulting from internal errors that occur when the simulation circuits are constructed, such as an inaccurate design of the coupling strength and of some other important parameters [16]. Therefore, the stochastic perturbation may be introduced by the feedback controller u(t). Accordingly, the noise intensity matrix H also depends on the state variable x(t) of the master system (2.4). The initial condition for the response system (2.6) is given in the form
$y(\theta) = \psi^y(\theta), \qquad R(\theta) = \psi^R(\theta), \qquad -\tau_{\max} \le \theta \le 0,$   (2.9)

for any $\psi^y(\theta) = (\psi^y_1(\theta), \ldots, \psi^y_N(\theta))^T$, $\psi^R(\theta) = (\psi^R_1(\theta), \ldots, \psi^R_N(\theta))^T \in L^p_{\mathcal{F}_0}([-\tau_{\max}, 0], \mathbb{R}^N)$, where $L^p_{\mathcal{F}_0}([-\tau_{\max}, 0], \mathbb{R}^N)$ is the family of all $\mathcal{F}_0$-measurable $C([-\tau_{\max}, 0], \mathbb{R}^N)$-valued random variables satisfying $\sup_{-\tau_{\max}\le\theta\le 0} \mathbb{E}|\psi^y(\theta)|^p < \infty$ and $\sup_{-\tau_{\max}\le\theta\le 0} \mathbb{E}|\psi^R(\theta)|^p < \infty$.
In this paper, for systems (2.4) and (2.6), we assume that the activation functions, noise intensity functions and time-varying delays satisfy the following properties (a worked check for the functions used in Section 4 is given after assumption (H2) below):

(H1) The functions $f_k(\cdot)$ and $h_{ik}(\cdot)$ are such that there exist positive constants $L_k$ and $g_{ik}$ ($i, k = 1, 2, \ldots, N$) satisfying

$|f_k(\xi_1) - f_k(\xi_2)| \le L_k |\xi_1 - \xi_2|,$
$|h_{ik}(\xi_1, \bar{\xi}_1, \tilde{\xi}_1) - h_{ik}(\xi_2, \bar{\xi}_2, \tilde{\xi}_2)|^2 \le g_{ik}\big(|\xi_1 - \xi_2|^2 + |\bar{\xi}_1 - \bar{\xi}_2|^2 + |\tilde{\xi}_1 - \tilde{\xi}_2|^2\big)$

for any $\xi_1, \xi_2, \bar{\xi}_1, \bar{\xi}_2, \tilde{\xi}_1, \tilde{\xi}_2 \in \mathbb{R}$, and $h_{ik}(0, 0, 0) = 0$, $i, k = 1, 2, \ldots, N$.
(H2) The time-varying transmission delays $\tau(t)$ and $\sigma(t)$ satisfy $\dot{\tau}(t) \le \varrho < 1$ or $\dot{\tau}(t) \ge \varrho > 1$, and $\dot{\sigma}(t) \le \varrho^* < 1$ or $\dot{\sigma}(t) \ge \varrho^* > 1$, for all t, respectively, where $\varrho$ and $\varrho^*$ are constants.
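As a quick check of (H1) for the setting used later in Section 4 (a worked example we add here, not part of the original text): the activation $f_k(\xi) = \tanh(\xi)$ is globally Lipschitz with $L_k = 1$, and a linear noise intensity of the form $h(\xi, \bar\xi, \tilde\xi) = \xi + 2\bar\xi + 3\tilde\xi$ satisfies the quadratic growth condition by the Cauchy–Schwarz inequality:

$|\tanh(\xi_1) - \tanh(\xi_2)| \le \sup_{\vartheta\in\mathbb{R}}|1 - \tanh^2(\vartheta)|\,|\xi_1 - \xi_2| = |\xi_1 - \xi_2|,$
$\big|(\xi_1 + 2\bar\xi_1 + 3\tilde\xi_1) - (\xi_2 + 2\bar\xi_2 + 3\tilde\xi_2)\big|^2 \le (1^2 + 2^2 + 3^2)\big(|\xi_1 - \xi_2|^2 + |\bar\xi_1 - \bar\xi_2|^2 + |\tilde\xi_1 - \tilde\xi_2|^2\big),$

so for such an entry one may take the corresponding constant $g_{ik} = 14$.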
Subtracting (2.4) from (2.6) yields the error system

STM: $de(t) = \Big[-Ae(t) + Dg(e(t)) + D^s g(e(t-\tau(t))) + D^r \int_{t-\sigma(t)}^{t} g(e(s))\,ds + Bz(t) + u(t)\Big]dt + H(e(t), e(t-\tau(t)), e(t-\sigma(t)))\,d\omega(t),$
LTM: $dz(t) = [-Cz(t) + g(e(t))]\,dt,$   (2.10)

where
$g(e(\cdot)) = (g_1(e_1(\cdot)), g_2(e_2(\cdot)), \ldots, g_N(e_N(\cdot)))^T = f(y(\cdot)) - f(x(\cdot)).$

Before ending this section, we introduce some notation, the definition of exponential synchronization for the delayed competitive neural networks with different time scales (2.4) and (2.6) under the adaptive controller (2.7) and (2.8) in terms of p-norm, and a lemma which will come into play later on. For any $x(t) = (x_1(t), x_2(t), \ldots, x_N(t))^T \in \mathbb{R}^N$, define

$\|x(t)\|_p = \left(\sum_{i=1}^{N} |x_i(t)|^p\right)^{1/p}.$

Let $PC \triangleq PC([-\tau_{\max}, 0], \mathbb{R}^N)$ denote the set of piecewise left-continuous functions $\phi : [-\tau_{\max}, 0] \to \mathbb{R}^N$ with the norm

$\|\phi\|_p = \left(\sum_{i=1}^{N} \sup_{-\tau_{\max}\le s\le 0} |\phi_i(s)|^p\right)^{1/p}.$
Definition 2.1. The noise-perturbed response system (2.6) and the drive system (2.4) are said to be exponentially synchronized under the adaptive controller (2.7) and (2.8) in terms of p-norm if there exist constants $\mu, \mu^* > 0$ and $M, M^* \ge 1$ such that

$\mathbb{E}\{\|y(t) - x(t)\|_p\} + \mathbb{E}\{\|R(t) - S(t)\|_p\} \le M\,\mathbb{E}\{\|\psi^y - \phi^x\|_p\}\,e^{-\mu t} + M^*\,\mathbb{E}\{\|\psi^R - \phi^S\|_p\}\,e^{-\mu^* t}$

for $t \in [0, +\infty)$.
Lemma 2.1 (Hölder inequality). Assume that $F, G : [a, b] \to \mathbb{R}$ are continuous functions and that the constants $a$, $b$, $p$, $q$ satisfy $b > a$, $p, q > 1$ and $\frac{1}{p} + \frac{1}{q} = 1$. Then the following inequality holds:

$\int_a^b |F(x)G(x)|\,dx \le \left(\int_a^b |F(x)|^p\,dx\right)^{1/p} \left(\int_a^b |G(x)|^q\,dx\right)^{1/q}.$
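For later reference (a worked consequence we add here, not stated explicitly in the paper), taking $G \equiv 1$ and $q = p/(p-1)$ on the interval $[t-\sigma(t), t]$ shows how Lemma 2.1 controls the distributed-delay terms appearing in the proof of Theorem 3.1:

$\left(\int_{t-\sigma(t)}^{t} |e_k(s)|\,ds\right)^p \le \sigma(t)^{p-1} \int_{t-\sigma(t)}^{t} |e_k(s)|^p\,ds \le \sigma^{p-1} \int_{t-\sigma(t)}^{t} |e_k(s)|^p\,ds.$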
3. Synchronization analysis of competitive neural networks

In this section, based on the adaptive feedback control technique, theoretical results are developed to realize the exponential synchronization between systems (2.4) and (2.6) in terms of p-norm.

Theorem 3.1. Under assumptions (H1)–(H2), the noise-perturbed response system (2.6) and the drive system (2.4) can be exponentially synchronized under the adaptive feedback controller (2.7) and (2.8) in terms of p-norm if there exist nonnegative real numbers $\varpi_{li}$ and $\varpi^*_{li}$ with $\sum_{l=1}^{p} \varpi_{li} = 1$ and $\sum_{l=1}^{p} \varpi^*_{li} = 1$ satisfying the following condition:

(H3) $\quad -pc_i + |B_i|^{p\varpi_{pi}} + \sum_{l=1}^{p-1} L_i^{p\varpi^*_{li}} < 0$

for all $i = 1, 2, \ldots, N$.
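As a rough illustration of how criterion (H3) could be checked numerically (a sketch of our own; the uniform exponent choice below is an assumption, and the inequality is taken in the reconstructed form stated above):

```python
import numpy as np

def check_H3(c, B, L, p=2):
    """Evaluate the left-hand side of criterion (H3) for every neuron i,
    using the uniform choice varpi_{li} = varpi*_{li} = 1/p (just one
    admissible assignment of the free exponents; any nonnegative values
    summing to one over l = 1,...,p are allowed)."""
    c, B, L = map(lambda v: np.asarray(v, dtype=float), (c, B, L))
    w = 1.0 / p
    lhs = -p * c + np.abs(B) ** (p * w) + (p - 1) * L ** (p * w)
    return lhs, lhs < 0

# With the Section-4 data c = (2, 2), |B_i| = (0.4, 0.3), L = (1, 1), p = 2:
# check_H3([2, 2], [0.4, 0.3], [1, 1]) -> (array([-2.6, -2.7]), array([ True,  True]))
```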
Proof. Define the Lyapunov–Krasovskii functional

$V(t) = \sum_{i=1}^{N}\Big[ V_i(t) + e^{\mu\tau} a_i \int_{t-\tau(t)}^{t} V_i(s)\,ds + e^{\mu\tau} a_i [1 - r\,\mathrm{sgn}(1-\varrho)] \int_{t-\tau}^{t-\tau(t)} V_i(s)\,ds + e^{\mu\sigma} b_i \int_{t-\sigma(t)}^{t} V_i(s)\,ds + e^{\mu\sigma} b_i [1 - r^*\,\mathrm{sgn}(1-\varrho^*)] \int_{t-\sigma}^{t-\sigma(t)} V_i(s)\,ds + c_i \int_{-\sigma(t)}^{0}\!\!\int_{t+s}^{t} V_i(\theta)\,d\theta\,ds + e^{\mu t}|z_i(t)|^p + \frac{p}{2k_i}(q_i + s_i)^2 \Big],$   (3.1)

where $a_i, b_i, c_i > 0$ and $s_i$ are constants to be determined, $0 < r, r^* < 1$ and $\mu$ are suitable positive constants ($\mu$ may be very small), and

$V_i(t) = e^{\mu t}|e_i(t)|^p, \qquad i = 1, 2, \ldots, N.$
Employing Itô's differential rule [20], the stochastic derivative of V(t) along the trajectory of the error system (2.10) can be obtained as

$dV(t) = \mathcal{L}V(t)\,dt + H(e(t), e(t-\tau(t)), e(t-\sigma(t)))\,d\omega(t),$   (3.2)

where the operator $\mathcal{L}$ satisfies

$\mathcal{L}V(t) \le \sum_{i=1}^{N}\Big\{ \mu V_i(t) + p e^{\mu t}|e_i(t)|^{p-1}\Big[-a_i |e_i(t)| + \sum_{k=1}^{N} D_{ik}\, g_k(|e_k(t)|) + \sum_{k=1}^{N} D^s_{ik}\, g_k(|e_k(t-\tau(t))|) + \sum_{k=1}^{N} D^r_{ik} \int_{t-\sigma(t)}^{t} g_k(|e_k(s)|)\,ds + B_i|z_i(t)| + q_i|e_i(t)|\Big]$
$\quad + e^{\mu\tau} a_i\big[V_i(t) - (1-\dot{\tau}(t))V_i(t-\tau(t))\big] + e^{\mu\tau} a_i[1 - r\,\mathrm{sgn}(1-\varrho)]\big[(1-\dot{\tau}(t))V_i(t-\tau(t)) - V_i(t-\tau)\big] + e^{\mu\sigma} b_i\big[V_i(t) - (1-\dot{\sigma}(t))V_i(t-\sigma(t))\big] + e^{\mu\sigma} b_i[1 - r^*\,\mathrm{sgn}(1-\varrho^*)]\big[(1-\dot{\sigma}(t))V_i(t-\sigma(t)) - V_i(t-\sigma)\big]$
$\quad + c_i\Big[\sigma V_i(t) - \int_{t-\sigma(t)}^{t} V_i(s)\,ds\Big] + \mu e^{\mu t}|z_i(t)|^p + p e^{\mu t}|z_i(t)|^{p-1}\big[-c_i|z_i(t)| + g_i(|e_i(t)|)\big] - p(q_i + s_i)e^{\mu t}|e_i(t)|^p$
$\quad + \frac{p(p-1)}{2} e^{\mu t}|e_i(t)|^{p-2}\sum_{k=1}^{N} g_{ik}\big(e_k^2(t) + e_k^2(t-\tau(t)) + e_k^2(t-\sigma(t))\big)\Big\}.$   (3.3)
It follows from (H1) and the elementary inequality

$a_1^p + a_2^p + \cdots + a_p^p \ge p\,a_1 a_2 \cdots a_p \qquad (a_1, a_2, \ldots, a_p \ge 0)$

that

$p|e_i(t)|^{p-1}\sum_{k=1,k\ne i}^{N} D_{ik}\, g_k(|e_k(t)|) \le p|e_i(t)|^{p-1}\sum_{k=1,k\ne i}^{N} |D_{ik}| L_k |e_k(t)| = \sum_{k=1,k\ne i}^{N} p\Big[\prod_{l=1}^{p-1}|D_{ik}|^{\xi_{lik}} L_k^{\zeta_{lik}} |e_i(t)|\Big]|D_{ik}|^{\xi_{pik}} L_k^{\zeta_{pik}} |e_k(t)| \le \sum_{k=1,k\ne i}^{N}\sum_{l=1}^{p-1} |D_{ik}|^{p\xi_{lik}} L_k^{p\zeta_{lik}} |e_i(t)|^p + \sum_{k=1,k\ne i}^{N} |D_{ik}|^{p\xi_{pik}} L_k^{p\zeta_{pik}} |e_k(t)|^p,$   (3.4)

where $\xi_{lik}$ and $\zeta_{lik}$ are nonnegative real numbers satisfying $\sum_{l=1}^{p}\xi_{lik} = 1$ and $\sum_{l=1}^{p}\zeta_{lik} = 1$, respectively. Similarly, we have

$p|e_i(t)|^{p-1}\sum_{k=1}^{N} D^s_{ik}\, g_k(|e_k(t-\tau(t))|) \le \sum_{k=1}^{N}\sum_{l=1}^{p-1} |D^s_{ik}|^{p\xi^*_{lik}} L_k^{p\zeta^*_{lik}} |e_i(t)|^p + \sum_{k=1}^{N} |D^s_{ik}|^{p\xi^*_{pik}} L_k^{p\zeta^*_{pik}} |e_k(t-\tau(t))|^p,$   (3.5)

$p|e_i(t)|^{p-1}\sum_{k=1}^{N} D^r_{ik} \int_{t-\sigma(t)}^{t} g_k(|e_k(s)|)\,ds \le \sum_{k=1}^{N}\sum_{l=1}^{p-1} |D^r_{ik}|^{p\xi^{**}_{lik}} L_k^{p\zeta^{**}_{lik}} |e_i(t)|^p + \sum_{k=1}^{N} |D^r_{ik}|^{p\xi^{**}_{pik}} L_k^{p\zeta^{**}_{pik}} \Big[\int_{t-\sigma(t)}^{t} |e_k(s)|\,ds\Big]^p,$   (3.6)

$p|e_i(t)|^{p-1} B_i |z_i(t)| \le \sum_{l=1}^{p-1} |B_i|^{p\varpi_{li}} |e_i(t)|^p + |B_i|^{p\varpi_{pi}} |z_i(t)|^p,$   (3.7)
$p|z_i(t)|^{p-1} g_i(|e_i(t)|) \le \sum_{l=1}^{p-1} L_i^{p\varpi^*_{li}} |z_i(t)|^p + L_i^{p\varpi^*_{pi}} |e_i(t)|^p,$   (3.8)

$p|e_i(t)|^{p-2}\sum_{k=1,k\ne i}^{N} g_{ik} |e_k(t)|^2 = \sum_{k=1,k\ne i}^{N} p\Big[\prod_{l=1}^{p-2} g_{ik}^{\vartheta_{lik}} |e_i(t)|\Big] g_{ik}^{\vartheta_{(p-1)ik}} |e_k(t)|\, g_{ik}^{\vartheta_{pik}} |e_k(t)| \le \sum_{k=1,k\ne i}^{N}\sum_{l=1}^{p-2} g_{ik}^{p\vartheta_{lik}} |e_i(t)|^p + \sum_{k=1,k\ne i}^{N}\big(g_{ik}^{p\vartheta_{(p-1)ik}} + g_{ik}^{p\vartheta_{pik}}\big)|e_k(t)|^p,$   (3.9)

$p|e_i(t)|^{p-2}\sum_{k=1}^{N} g_{ik} |e_k(t-\tau(t))|^2 \le \sum_{k=1}^{N}\sum_{l=1}^{p-2} g_{ik}^{p\vartheta^*_{lik}} |e_i(t)|^p + \sum_{k=1}^{N}\big(g_{ik}^{p\vartheta^*_{(p-1)ik}} + g_{ik}^{p\vartheta^*_{pik}}\big)|e_k(t-\tau(t))|^p,$   (3.10)

and

$p|e_i(t)|^{p-2}\sum_{k=1}^{N} g_{ik} |e_k(t-\sigma(t))|^2 \le \sum_{k=1}^{N}\sum_{l=1}^{p-2} g_{ik}^{p\vartheta^{**}_{lik}} |e_i(t)|^p + \sum_{k=1}^{N}\big(g_{ik}^{p\vartheta^{**}_{(p-1)ik}} + g_{ik}^{p\vartheta^{**}_{pik}}\big)|e_k(t-\sigma(t))|^p,$   (3.11)
where $\xi^*_{lik}$, $\zeta^*_{lik}$, $\xi^{**}_{lik}$, $\zeta^{**}_{lik}$, $\vartheta_{lik}$, $\vartheta^*_{lik}$ and $\vartheta^{**}_{lik}$ are nonnegative real numbers satisfying $\sum_{l=1}^{p}\xi^*_{lik} = \sum_{l=1}^{p}\zeta^*_{lik} = \sum_{l=1}^{p}\xi^{**}_{lik} = \sum_{l=1}^{p}\zeta^{**}_{lik} = \sum_{l=1}^{p}\vartheta_{lik} = \sum_{l=1}^{p}\vartheta^*_{lik} = \sum_{l=1}^{p}\vartheta^{**}_{lik} = 1$.

By applying (3.4)–(3.11) and assumptions (H1)–(H2) to (3.3), we have

$\mathcal{L}V(t) \le \sum_{i=1}^{N}\Bigg\{\Big[\mu - p\Big(a_i + s_i - D_{ii}L_i - \frac{(p-1)g_{ii}}{2}\Big) + \sum_{k=1,k\ne i}^{N}\sum_{l=1}^{p-1}|D_{ik}|^{p\xi_{lik}}L_k^{p\zeta_{lik}} + \sum_{k=1}^{N}\sum_{l=1}^{p-1}\big(|D^s_{ik}|^{p\xi^*_{lik}}L_k^{p\zeta^*_{lik}} + |D^r_{ik}|^{p\xi^{**}_{lik}}L_k^{p\zeta^{**}_{lik}}\big) + \sum_{l=1}^{p-1}|B_i|^{p\varpi_{li}} + L_i^{p\varpi^*_{pi}} + \frac{p-1}{2}\sum_{k=1,k\ne i}^{N}\sum_{l=1}^{p-2}g_{ik}^{p\vartheta_{lik}} + \frac{p-1}{2}\sum_{k=1}^{N}\sum_{l=1}^{p-2}\big(g_{ik}^{p\vartheta^*_{lik}} + g_{ik}^{p\vartheta^{**}_{lik}}\big) + e^{\mu\tau}a_i + e^{\mu\sigma}b_i + \sigma c_i\Big]e^{\mu t}|e_i(t)|^p$
$\quad - r a_i e^{\mu\tau}|1-\varrho|\,V_i(t-\tau(t)) - r^* b_i e^{\mu\sigma}|1-\varrho^*|\,V_i(t-\sigma(t)) - c_i\int_{t-\sigma(t)}^{t}V_i(s)\,ds + \Big[\mu - pc_i + |B_i|^{p\varpi_{pi}} + \sum_{l=1}^{p-1}L_i^{p\varpi^*_{li}}\Big]e^{\mu t}|z_i(t)|^p$
$\quad + \sum_{k=1,k\ne i}^{N}\Big[|D_{ik}|^{p\xi_{pik}}L_k^{p\zeta_{pik}} + \frac{p-1}{2}\big(g_{ik}^{p\vartheta_{(p-1)ik}} + g_{ik}^{p\vartheta_{pik}}\big)\Big]V_k(t) + \sum_{k=1}^{N}\Big[|D^s_{ik}|^{p\xi^*_{pik}}L_k^{p\zeta^*_{pik}} + \frac{p-1}{2}\big(g_{ik}^{p\vartheta^*_{(p-1)ik}} + g_{ik}^{p\vartheta^*_{pik}}\big)\Big]V_k(t-\tau(t))$
$\quad + \sum_{k=1}^{N}\frac{p-1}{2}\big(g_{ik}^{p\vartheta^{**}_{(p-1)ik}} + g_{ik}^{p\vartheta^{**}_{pik}}\big)V_k(t-\sigma(t)) + \sum_{k=1}^{N}|D^r_{ik}|^{p\xi^{**}_{pik}}L_k^{p\zeta^{**}_{pik}}\,e^{\mu t}\Big[\int_{t-\sigma(t)}^{t}|e_k(s)|\,ds\Big]^p\Bigg\}.$   (3.12)

From (H3), it is easy to see that for $i = 1, 2, \ldots, N$ (recalling that $\mu$ may be taken arbitrarily small) we have

$\mu - pc_i + |B_i|^{p\varpi_{pi}} + \sum_{l=1}^{p-1}L_i^{p\varpi^*_{li}} \le 0.$

Let

$a_i = \frac{1}{r|1-\varrho|}\sum_{k=1}^{N}\Big[|D^s_{ki}|^{p\xi^*_{pki}}L_i^{p\zeta^*_{pki}} + \frac{p-1}{2}\big(g_{ki}^{p\vartheta^*_{(p-1)ki}} + g_{ki}^{p\vartheta^*_{pki}}\big)\Big],$

$b_i = \frac{1}{r^*|1-\varrho^*|}\sum_{k=1}^{N}\frac{p-1}{2}\big(g_{ki}^{p\vartheta^{**}_{(p-1)ki}} + g_{ki}^{p\vartheta^{**}_{pki}}\big),$   (3.13)
$c_i = \sigma^{p-1}\sum_{k=1}^{N}|D^r_{ki}|^{p\xi^{**}_{pki}}L_i^{p\zeta^{**}_{pki}},$

$s_i = \frac{1}{p}\Big\{\mu - p\Big(a_i - D_{ii}L_i - \frac{(p-1)g_{ii}}{2}\Big) + \sum_{k=1,k\ne i}^{N}\sum_{l=1}^{p-1}|D_{ik}|^{p\xi_{lik}}L_k^{p\zeta_{lik}} + \sum_{k=1}^{N}\sum_{l=1}^{p-1}\big(|D^s_{ik}|^{p\xi^*_{lik}}L_k^{p\zeta^*_{lik}} + |D^r_{ik}|^{p\xi^{**}_{lik}}L_k^{p\zeta^{**}_{lik}}\big) + \sum_{l=1}^{p-1}|B_i|^{p\varpi_{li}} + L_i^{p\varpi^*_{pi}} + \frac{p-1}{2}\sum_{k=1,k\ne i}^{N}\sum_{l=1}^{p-2}g_{ik}^{p\vartheta_{lik}} + \frac{p-1}{2}\sum_{k=1}^{N}\sum_{l=1}^{p-2}\big(g_{ik}^{p\vartheta^*_{lik}} + g_{ik}^{p\vartheta^{**}_{lik}}\big) + e^{\mu\tau}a_i + e^{\mu\sigma}b_i + \sigma c_i\Big\} + \sum_{k=1,k\ne i}^{N}\Big[|D_{ki}|^{p\xi_{pki}}L_i^{p\zeta_{pki}} + \frac{p-1}{2}\big(g_{ki}^{p\vartheta_{(p-1)ki}} + g_{ki}^{p\vartheta_{pki}}\big)\Big].$

It then follows from (3.12), (3.13) and Lemma 2.1 that

$\mathcal{L}V(t) \le 0.$   (3.14)
Taking the mathematical expectation of both sides of (3.2) and noting (3.14), we obtain

$\frac{d\,\mathbb{E}\{V(t)\}}{dt} \le 0,$   (3.15)

which implies that

$\mathbb{E}\{V(t)\} \le \mathbb{E}\{V(0)\}.$   (3.16)

In the sequel, we define

$W(t) = \sum_{i=1}^{N}\frac{p}{2k_i}\big(q_i(t) + s_i\big)^2.$   (3.17)
Note that

$\mathbb{E}\{V(0)\} = \sum_{i=1}^{N}\Big[V_i(0) + e^{\mu\tau}a_i\int_{-\tau(0)}^{0}V_i(s)\,ds + e^{\mu\tau}a_i[1 - r\,\mathrm{sgn}(1-\varrho)]\int_{-\tau}^{-\tau(0)}V_i(s)\,ds + e^{\mu\sigma}b_i\int_{-\sigma(0)}^{0}V_i(s)\,ds + e^{\mu\sigma}b_i[1 - r^*\,\mathrm{sgn}(1-\varrho^*)]\int_{-\sigma}^{-\sigma(0)}V_i(s)\,ds + c_i\int_{-\sigma(0)}^{0}\!\!\int_{s}^{0}V_i(\theta)\,d\theta\,ds + |z_i(0)|^p\Big] + W(0)$

$\le \Big[1 + \tau e^{\mu\tau}\max_{1\le i\le N}\{a_i(1 - r\,\mathrm{sgn}(1-\varrho))\} + \sigma e^{\mu\sigma}\max_{1\le i\le N}\{b_i(1 - r^*\,\mathrm{sgn}(1-\varrho^*))\} + \sigma^2\max_{1\le i\le N}\{c_i\} + W(0)\Big]\,\mathbb{E}\{\|\psi^y - \phi^x\|_p^p\} + \mathbb{E}\{\|\psi^R - \phi^S\|_p^p\}$   (3.18)

and

$\mathbb{E}\{V(t)\} \ge \mathbb{E}\Big\{\sum_{i=1}^{N} e^{\mu t}|e_i(t)|^p\Big\} + \mathbb{E}\Big\{\sum_{i=1}^{N} e^{\mu t}|z_i(t)|^p\Big\} = e^{\mu t}\mathbb{E}\{\|y(t)-x(t)\|_p^p\} + e^{\mu t}\mathbb{E}\{\|R(t)-S(t)\|_p^p\}.$   (3.19)

Let

$M = \Big[1 + \tau e^{\mu\tau}\max_{1\le i\le N}\{a_i(1 - r\,\mathrm{sgn}(1-\varrho))\} + \sigma e^{\mu\sigma}\max_{1\le i\le N}\{b_i(1 - r^*\,\mathrm{sgn}(1-\varrho^*))\} + \sigma^2\max_{1\le i\le N}\{c_i\} + W(0)\Big]^{1/p} \ge 1, \qquad M^* = 1, \qquad \mu^* = \mu > 0.$

It follows from (3.16)–(3.19) that

$\mathbb{E}\{\|y(t)-x(t)\|_p\} + \mathbb{E}\{\|R(t)-S(t)\|_p\} \le M\,\mathbb{E}\{\|\psi^y - \phi^x\|_p\}\,e^{-\mu t} + M^*\,\mathbb{E}\{\|\psi^R - \phi^S\|_p\}\,e^{-\mu^* t},$

which implies that the noise-perturbed response system (2.6) and the drive system (2.4) are exponentially synchronized under the adaptive feedback controller (2.7) and (2.8) in terms of p-norm. This completes the proof of Theorem 3.1. □
Remark 1. Under the precondition that the derivatives of the discrete time-varying delay $\tau(t)$ and of the finitely distributed time-varying delay $\sigma(t)$ are smaller than one, Tang et al. [36] dealt with the problem of lag synchronization and parameter identification for a class of chaotic neural networks with stochastic perturbation involving both discrete and distributed time-varying delays. In this paper, a novel Lyapunov–Krasovskii functional V(t) is employed to deal with competitive neural networks with different time scales, mixed time-varying delays and stochastic perturbations. In the functional (3.1), the integral terms $\int_{t-\tau}^{t} V_i(s)\,ds$ and $\int_{t-\sigma}^{t} V_i(s)\,ds$ are each split into two parts, namely $a_i\int_{t-\tau(t)}^{t} V_i(s)\,ds$ together with $a_i[1 - r\,\mathrm{sgn}(1-\varrho)]\int_{t-\tau}^{t-\tau(t)} V_i(s)\,ds$, and $b_i\int_{t-\sigma(t)}^{t} V_i(s)\,ds$ together with $b_i[1 - r^*\,\mathrm{sgn}(1-\varrho^*)]\int_{t-\sigma}^{t-\sigma(t)} V_i(s)\,ds$, respectively. This new construction may lead to less conservative restrictions on the time derivatives of the time-varying delays $\tau(t)$ and $\sigma(t)$ than those imposed in [36].

Remark 2. As far as we know, the existing results concerning the synchronization of competitive neural networks based on 1-norm or 2-norm [6,8,18,38,39] have not considered the situation of mixed time-varying delays. In this paper, the model discussed is universal and unifies discrete and distributed time-varying delays in terms of p-norm. Hence, in a sense, our results generalize and improve existing results reported recently in the literature.

Remark 3. In order to achieve exponential synchronization of the coupled competitive neural networks with different time scales (2.4) and (2.6), the variable feedback strength related to the synchronization errors is automatically adapted to a suitable strength, and the adaptive synchronization scheme is simple to implement in practical engineering design. This differs from the traditional linear feedback in [18,39], where the feedback strength is fixed and must therefore be chosen maximal, which amounts to a kind of waste in practice.

Remark 4. Our analysis is carried out under the assumption $p \ge 2$ throughout this paper. Evidently, there is an interesting open problem concerning the exponential synchronization of competitive neural networks with different time scales, mixed time-varying delays and stochastic noise disturbance by adaptive control for p = 1, i.e., based on the 1-norm.

Remark 5. There are generally two kinds of continuously distributed delays in neural network models, namely finitely distributed delays and infinitely distributed delays. In this paper, the distributed delays are finite and time-varying. In fact, Lemma 2.1 cannot be used to deal with infinitely distributed delays in terms of p-norm; therefore, our theoretical results cannot handle the synchronization problem for competitive neural networks with different time scales, mixed time-varying delays (both discrete time-varying and infinitely distributed delays) and stochastic disturbance. This is another problem that we shall study in the future.

Remark 6. For the parameters $\varrho$ and $\varrho^*$ in assumption (H2), only two cases are considered for each delay in this paper, namely (1) $\dot{\tau}(t) \le \varrho < 1$ for all t (slowly varying delay); (2) $\dot{\tau}(t) \ge \varrho > 1$ for all t (fast-varying delay), and, correspondingly,
(1) $\dot{\sigma}(t) \le \varrho^* < 1$ for all t (slowly varying distributed delay); (2) $\dot{\sigma}(t) \ge \varrho^* > 1$ for all t (fast-varying distributed delay). We do not treat mixed time-varying delays without restriction on the upper/lower bound of the derivative of the delays. Obviously, this is an important and interesting open problem, which we leave for future work.

4. A numerical example

In this section, we give an example with numerical simulations to illustrate the effectiveness of the theoretical results obtained above. For the sake of simplicity, we consider a delayed competitive neural network model described by (drive system)
STM: $\dot{x}(t) = -Ax(t) + Df(x(t)) + D^s f(x(t-\tau(t))) + D^r \int_{t-\sigma(t)}^{t} f(x(s))\,ds + BS(t),$
LTM: $\dot{S}(t) = -CS(t) + f(x(t)),$   (4.1)

where $x(t) = (x_1(t), x_2(t))^T$, $S(t) = (S_1(t), S_2(t))^T$, $f(\cdot) = \tanh(\cdot)$,

$A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 0.4 & 0 \\ 0 & 0.3 \end{pmatrix}, \quad C = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix},$

$D = \begin{pmatrix} 1.5 & 2 \\ 3 & 3.5 \end{pmatrix}, \quad D^s = \begin{pmatrix} 1 & 2 \\ 1 & 3 \end{pmatrix}, \quad D^r = \begin{pmatrix} 1 & 1 \\ 3 & 3 \end{pmatrix},$

$\tau(t) = 1.8 + 0.1\sin(t), \qquad \sigma(t) = 2.2 + 0.1\cos(t).$
Fig. 1 shows the chaotic behavior of the delayed competitive neural network (4.1) with the above coefficients and initial values $x_1(\theta) = 0.5$, $x_2(\theta) = 0.3$, $S_1(\theta) = 0.3$, $S_2(\theta) = 0.1$ for all $\theta \in [-2.3, 0]$.

[Fig. 1. Chaotic attractor of the delayed competitive neural network (4.1).]

The noise-perturbed response system is described by
" STM :
dyðtÞ ¼ AyðtÞ þ Df ðyðtÞÞ þ Ds f ðyðt sðtÞÞÞ þ Dr
Z
#
t
f ðyðsÞÞds þ BRðtÞ þ uðtÞ dt
trðtÞ
þ HðeðtÞ; eðt sðtÞÞ; eðt rðtÞÞÞdxðtÞ; LTM :
ð4:2Þ
dRðtÞ ¼ ½CRðtÞ þ f ðyðtÞÞdt;
where y(t) = (y1(t), y2(t))T, R(t) = (R1(t), R2(t))T,
h11 ðe1 ðtÞ; e1 ðt sðtÞÞ; e1 ðt rðtÞÞÞ ¼ e1 ðtÞ þ 2e1 ðt sðtÞÞ þ 3e1 ðt rðtÞÞ;
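A complete coupled simulation of this example can be set up along the following lines (a sketch of our own, not the authors' code): the drive and response systems are integrated together with an Euler–Maruyama scheme, the coefficient values are taken exactly as printed above, and the step size, random seed and quadrature rule are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Coefficients of Section 4, with values as printed above.
A = np.diag([1.0, 1.0]); B = np.diag([0.4, 0.3]); C = np.diag([2.0, 2.0])
D  = np.array([[1.5, 2.0], [3.0, 3.5]])
Ds = np.array([[1.0, 2.0], [1.0, 3.0]])
Dr = np.array([[1.0, 1.0], [3.0, 3.0]])
f = np.tanh
tau   = lambda t: 1.8 + 0.1 * np.sin(t)
sigma = lambda t: 2.2 + 0.1 * np.cos(t)
k = np.array([0.1, 0.1]); p, mu = 2, 0.01
h, T, tau_max = 1e-3, 10.0, 2.3
n_hist, n_steps = int(tau_max / h), int(T / h)

def H(e, et, es):                            # noise intensity matrix of the example
    return np.diag([e[0] + 2*et[0] + 3*es[0], 2*e[1] + 3*et[1] + 4*es[1]])

# history buffers (constant initial functions on [-2.3, 0])
x = np.tile([0.5, 0.3], (n_hist + n_steps + 1, 1)); S = np.tile([0.3, 0.1], (n_hist + n_steps + 1, 1))
y = np.ones_like(x); R = np.ones_like(S); q = np.zeros(2)

for m in range(n_hist, n_hist + n_steps):
    t = (m - n_hist) * h
    d, r = int(tau(t) / h), int(sigma(t) / h)
    Ix = h * f(x[m - r:m]).sum(axis=0)       # distributed-delay integrals (rectangle rule)
    Iy = h * f(y[m - r:m]).sum(axis=0)
    e, et, es = y[m] - x[m], y[m - d] - x[m - d], y[m - r] - x[m - r]
    dw = rng.normal(0.0, np.sqrt(h), 2)
    x[m+1] = x[m] + h * (-A @ x[m] + D @ f(x[m]) + Ds @ f(x[m-d]) + Dr @ Ix + B @ S[m])
    S[m+1] = S[m] + h * (-C @ S[m] + f(x[m]))
    y[m+1] = y[m] + h * (-A @ y[m] + D @ f(y[m]) + Ds @ f(y[m-d]) + Dr @ Iy + B @ R[m] + q * e) \
             + H(e, et, es) @ dw
    R[m+1] = R[m] + h * (-C @ R[m] + f(y[m]))
    q = q - h * k * np.abs(e) ** p * np.exp(mu * t)   # adaptive law (2.8), sign as reconstructed

# e(t) = y - x and z(t) = R - S can then be plotted and compared with Figs. 2 and 3.
```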
5. Conclusions

In this paper, an adaptive controller has been proposed to ensure the exponential synchronization of a class of competitive neural networks with different time scales, mixed time-varying delays and stochastic noise perturbations in terms of p-norm. The problem considered here is more general in many aspects and incorporates as special cases various problems which have been studied extensively in the literature. Some remarks and a numerical example have been used to demonstrate the effectiveness of the obtained results.

In fact, due to different parameters, activation functions and neural network architectures, which are unavoidable in real implementations, the master system and the response system are generally not identical, and the resulting synchronization is not exact. Therefore, it is important and challenging to study the synchronization problems of non-identical chaotic neural networks. However, to the best of our knowledge, there are few results concerning the synchronization problem for non-identical competitive neural networks with different time scales, mixed time-varying delays and stochastic disturbance. This is an interesting problem and will become our future investigative direction.

References

[1] Amari SI. Field theory of self-organizing neural nets. IEEE Trans Syst Man Cybern 1983;13:741–8.
[2] Chen W, Zheng W. Robust stability analysis for stochastic neural networks with time-varying delay. IEEE Trans Neural Networks 2010;21:508–14.
[3] Gilli M. Strange attractors in delayed cellular neural networks. IEEE Trans Circuits Syst I 1993;40:849–53.
[4] Cohen MA, Grossberg S. Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans Syst Man Cybern B 1983;13:815–26.
[5] Filippov AF. Differential equations with discontinuous right hand sides. Kluwer Academic Publishers; 1988.
[6] Gan Q, Xu R, Kang X. Synchronization of unknown chaotic delayed competitive neural networks with different time scales based on adaptive control and parameter identification. Nonlinear Dyn 2012;67:1893–902.
[7] Grossberg S. Adaptive pattern classification and universal recoding. Biological Cybern 1976;23:121–34.
[8] Gu H. Adaptive synchronization for competitive neural networks with different time scales and stochastic perturbation. Neurocomputing 2009;73:350–6.
[9] Gu H, Jiang H, Teng Z. Existence and global exponential stability of equilibrium of competitive neural networks with different time scales and multiple delays. J Franklin Institute 2010;347:719–31.
[10] Haykin S. Neural networks. Englewood Cliffs, NJ: Prentice-Hall; 1994.
[11] Hopfield J. Neural networks and physical systems with emergent collective computational abilities. Proc Nat Acad Sci USA 1982;79:2554–8.
[12] Hopfield J. Neurons with graded response have collective computational properties like those of two-state neurons. Proc Nat Acad Sci USA 1984;81:3088–92.
[13] Hu C, Yu J, Jiang H, Teng Z. Exponential stabilization and synchronization of neural networks with time-varying delays via periodically intermittent control. Nonlinearity 2010;23:2369–91.
[14] Hua M, Liu X, Deng F, Fei J. New results on robust exponential stability of uncertain stochastic neural networks with mixed time-varying delays. Neural Process Lett 2010;32:219–33.
[15] Kwon OM, Lee SM, Park Ju H. Improved delay-dependent exponential stability for uncertain stochastic neural networks with time-varying delays. Phys Lett A 2010;374:1232–41.
[16] Lin W, He Y. Complete synchronization of the noise-perturbed Chua's circuits. Chaos 2005;15:023705.
[17] Liu Y, Wang Z, Liu X. Global exponential stability of generalized recurrent neural networks with discrete and distributed delays. Neural Networks 2006;19:667–75.
[18] Lou X, Cui B. Synchronization of competitive neural networks with different time scales. Physica A 2007;380:563–76.
[19] Lu H, He Z. Global exponential stability of delayed competitive neural networks with different time scales. Neural Networks 2005;18:243–50.
[20] Mao X. Stochastic differential equations and their applications. Chichester, UK: Horwood; 1997.
[21] Meyer-Bäse A, Ohl F, Scheich H. Singular perturbation analysis of competitive neural networks with different time scales. Neural Comput 1996;8:1731–42.
[22] Meyer-Bäse A, Pilyugin SS, Chen Y. Global exponential stability of competitive neural networks with different time scales. IEEE Trans Neural Networks 2003;14:716–9.
[23] Meyer-Bäse A, Pilyugin SS, Wismüller A, Foo S. Local exponential stability of competitive neural networks with different time scales. Eng Appl Artif Intell 2004;17:227–32.
[24] Meyer-Bäse A, Roberts R, Thümmler V. Local uniform stability of competitive neural networks with different time-scales under vanishing perturbations. Neurocomputing 2010;73:770–5.
[25] Nie X, Cao J. Multistability of competitive neural networks with time-varying and distributed delays. Nonlinear Anal RWA 2009:928–42.
[26] Park Ju H, Kwon OM. Analysis on global stability of stochastic neural networks of neutral type. Mod Phys Lett B 2008;22:3159–70.
[27] Park Ju H, Kwon OM. Synchronization of neural networks of neutral type with stochastic perturbation. Mod Phys Lett B 2009;23:1743–51.
[28] Park Ju H, Kwon OM. Further results on state estimation for neural networks of neutral-type with time-varying delay. Appl Math Comput 2009;208:69–75.
[29] Park Ju H, Kwon OM, Lee SM. State estimation for neural networks of neutral-type with interval time-varying delays. Appl Math Comput 2008;203:217–23.
[30] Park Ju H, Lee SM, Jung HY. LMI optimization approach to synchronization of stochastic delayed discrete-time complex networks. J Optimiz Theory Appl 2009;143:357–67.
[31] Phat VN, Trinh H. Exponential stabilization of neural networks with various activation functions and mixed time-varying delays. IEEE Trans Neural Networks 2010;21:1180–4.
[32] Sathy R, Balasubramaniam P. Stability analysis of fuzzy Markovian jumping Cohen–Grossberg BAM neural networks with mixed time-varying delays. Commun Nonlinear Sci Numer Simulat 2011;16:2054–64.
[33] Su W, Chen Y. Global robust stability criteria of stochastic Cohen–Grossberg neural networks with discrete and distributed time-varying delays. Commun Nonlinear Sci Numer Simulat 2009;14:520–8.
[34] Syed Ali M, Balasubramaniam P. Global asymptotic stability of stochastic fuzzy cellular neural networks with multiple discrete and distributed time-varying delays. Commun Nonlinear Sci Numer Simulat 2011;16:2907–16.
[35] Tang Y, Fang J, Xia M, Yu D. Delay distribution dependent stability of stochastic discrete-time neural networks with randomly mixed time-varying delays. Neurocomputing 2009;72:3830–8.
[36] Tang Y, Qiu R, Fang J, Miao Q, Xia M. Adaptive lag synchronization in unknown stochastic chaotic neural networks with discrete and distributed time-varying delays. Phys Lett A 2008;372:4425–33.
[37] Wu Z, Su H, Chu J, Zhou W. Improved delay-dependent stability condition of discrete recurrent neural networks with time-varying delays. IEEE Trans Neural Networks 2010;21:692–7.
[38] Yang X, Cao J, Long Y, Wei R. Adaptive lag synchronization for competitive neural networks with mixed delays and uncertain hybrid perturbations. IEEE Trans Neural Networks 2010;21:1656–67.
[39] Yang X, Huang C, Cao J. An LMI approach for exponential synchronization of switched stochastic competitive neural networks with mixed delays. Neural Comput Appl, doi:10.1007/s00521-011-0626-2, in press.
[40] Zhu Q, Cao J. Robust exponential stability of Markovian jump impulsive stochastic Cohen–Grossberg neural networks with mixed time delays. IEEE Trans Neural Networks 2010;21:1314–25.