Adaptive lag synchronization in unknown stochastic chaotic neural networks with discrete and distributed time-varying delays


Physics Letters A 372 (2008) 4425–4433


Yang Tang a,∗, Runhe Qiu a, Jian-an Fang a, Qingying Miao a,b, Min Xia a

a College of Information Science and Technology, Donghua University, Shanghai 201620, PR China
b School of Continuing Education, Shanghai Jiaotong University, Shanghai 200030, PR China

Article history: Received 4 March 2008; Received in revised form 26 March 2008; Accepted 7 April 2008; Available online 18 April 2008. Communicated by A.R. Bishop.

PACS: 05.45.Xt; 05.45.Gg; 05.45.-a

Abstract

In this Letter, we deal with the problem of lag synchronization and parameter identification for a class of chaotic neural networks with stochastic perturbation, which involve both discrete and distributed time-varying delays. By the adaptive feedback technique, several sufficient conditions are derived to ensure the synchronization of stochastic chaotic neural networks. Moreover, all the connection weight matrices can be estimated while lag synchronization is achieved in mean square. Simulation results are given to show the effectiveness of the proposed method.

Keywords: Chaotic neural networks; Adaptive feedback synchronization; Discrete and distributed time-varying delays; Stochastic systems; Parameter identification

1. Introduction

It is widely believed that chaos synchronization plays a more and more significant role in nonlinear science [1–7]. So far, a wide variety of synchronization phenomena have been discovered, such as complete synchronization and lag synchronization [8]. It is worth mentioning that, in many practical situations, a propagation delay appears in the electronic implementation of dynamical systems. Therefore, it is very important to investigate lag synchronization.

On the other hand, in the past few years, synchronization of complex dynamical networks [9–13] has been studied extensively. As a special class of complex networks, neural networks have received increasing interest regarding their synchronization [14–26]. Recently, the study of neural networks with stochastic perturbations has attracted much attention [25,27–29]. In Ref. [25], a synchronization scheme was developed to synchronize stochastic neural networks with constant delay by the adaptive feedback method [30–32].

The distributed delay in neural networks [26,33–36] has also become an active subject recently. Much of the literature has been devoted to the stability analysis of neural networks with distributed delay [34–36] based on the linear matrix inequality (LMI) approach. However, there are few works on the synchronization of neural networks with distributed delay. Ref. [26] gives sufficient conditions for the complete synchronization of neural networks without noise perturbation and with parameters that are assumed to be known beforehand. In real-life applications this assumption is not realistic, since stochastic perturbations widely exist in practical situations and some system parameters cannot be known exactly a priori. To the best of the authors' knowledge, lag synchronization for unknown stochastic chaotic neural networks with both discrete and distributed time-varying delays via state coupling has not been addressed yet; it remains an open and challenging problem.

Inspired by the above discussion, a more general model of neural networks, which includes stochastic perturbation together with discrete and distributed time-varying delays, is considered. By employing a Lyapunov function candidate, we propose an adaptive feedback scheme for the synchronization of unknown delayed stochastic neural networks. It is shown that global synchronization of the coupled neural networks can be achieved in mean square while the unknown parameters are identified simultaneously.



✩ This research was supported by the National Natural Science Foundation of PR China (10571024).
∗ Corresponding author. E-mail addresses: [email protected] (Y. Tang), [email protected] (J.-a. Fang).

doi:10.1016/j.physleta.2008.04.032


2. Neural network models and preliminaries

Notations. Throughout this Letter, R^n and R^{n×m} denote, respectively, the n-dimensional Euclidean space and the set of all n × m real matrices. The superscript 'T' denotes matrix transposition, and the notation X ≥ Y (respectively, X > Y), where X and Y are symmetric matrices, means that X − Y is positive semi-definite (respectively, positive definite). The vector norm ||x|| on R^n is defined by ||x|| = (Σ_{i=1}^{n} x_i^2)^{1/2}, so that ||x||^2 = x^T x. E{·} stands for the mathematical expectation operator and I is the identity matrix. Let C^{1,2}(R_+ × R^n; R_+) denote the family of all nonnegative functions V(t, x) on R_+ × R^n which are continuously twice differentiable in x and once differentiable in t.

Consider the following unknown neural network with discrete and distributed time-varying delays:

\[
dx_i(t) = \Big[ -c_i x_i(t) + \sum_{j=1}^{n} a_{ij} \tilde f_j\big(x_j(t)\big) + \sum_{j=1}^{n} b_{ij} \tilde g_j\big(x_j(t-\tau_1(t))\big) + \sum_{j=1}^{n} d_{ij} \int_{t-\tau_2(t)}^{t} \tilde h_j\big(x_j(s)\big)\,ds + J_i \Big]\,dt, \qquad i = 1, 2, \ldots, n, \tag{1}
\]

or equivalently,

\[
dx(t) = \Big[ -C x(t) + A \tilde f\big(x(t)\big) + B \tilde g\big(x(t-\tau_1(t))\big) + D \int_{t-\tau_2(t)}^{t} \tilde h\big(x(s)\big)\,ds + J \Big]\,dt, \tag{2}
\]

where x(t) = (x1(t), x2(t), ..., xn(t))^T ∈ R^n is the state vector associated with the neurons; C = diag(c1, c2, ..., cn) > 0 is an unknown matrix; A = (a_ij)_{n×n}, B = (b_ij)_{n×n} and D = (d_ij)_{n×n} denote the unknown connection weight matrix, the time-varying delayed connection weight matrix and the distributively delayed connection weight matrix, respectively; J = (J1, J2, ..., Jn)^T is an external input vector; τ1(t) is the discrete time-varying delay and τ2(t) is the distributed time-varying delay; f̃, g̃ and h̃ are the activation functions of the neurons, where

\[
\tilde f\big(x(t)\big) = \big( \tilde f_1(x_1(t)), \tilde f_2(x_2(t)), \ldots, \tilde f_n(x_n(t)) \big)^T \in R^n,
\]
\[
\tilde g\big(x(t-\tau_1(t))\big) = \big( \tilde g_1(x_1(t-\tau_1(t))), \tilde g_2(x_2(t-\tau_1(t))), \ldots, \tilde g_n(x_n(t-\tau_1(t))) \big)^T \in R^n
\]
and
\[
\tilde h\big(x(t)\big) = \big( \tilde h_1(x_1(t)), \tilde h_2(x_2(t)), \ldots, \tilde h_n(x_n(t)) \big)^T \in R^n.
\]
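For readers who wish to reproduce the model numerically, the drift of the vector-form drive system (2) can be coded directly. The following Python sketch is illustrative only: it assumes tanh activations (as in Section 4), a uniformly sampled history buffer for the distributed-delay integral, and hypothetical names such as `drive_drift`.

```python
import numpy as np

def drive_drift(x, x_tau1, x_hist, dt, C, A, B, D, J):
    """Deterministic drift of the drive network (2).

    x      : current state x(t), shape (n,)
    x_tau1 : delayed state x(t - tau1(t)), shape (n,)
    x_hist : samples of x(s) for s in [t - tau2(t), t], shape (m, n)
    dt     : spacing of the history samples (used for the integral)
    """
    f = np.tanh                                        # f~, g~, h~ = tanh in the example of Section 4
    distributed = np.trapz(f(x_hist), dx=dt, axis=0)   # approximates the integral of h~(x(s)) ds
    return -C @ x + A @ f(x) + B @ f(x_tau1) + D @ distributed + J
```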

To realize lag synchronization, the noise-perturbed response system is given as

\[
\begin{aligned}
dy_i(t) ={}& \Big[ -\hat c_i y_i(t) + \sum_{j=1}^{n} \hat a_{ij} \tilde f_j\big(y_j(t)\big) + \sum_{j=1}^{n} \hat b_{ij} \tilde g_j\big(y_j(t-\tau_1(t))\big) + \sum_{j=1}^{n} \hat d_{ij} \int_{t-\tau_2(t)}^{t} \tilde h_j\big(y_j(s)\big)\,ds + J_i + \varepsilon_i(t)\, e_i(t) \Big]\,dt \\
& + \sigma_i\big(t,\, y_i(t)-x_i(t-\delta),\, y_i(t-\tau_1(t)) - x_i(t-\tau_1(t)-\delta)\big)\,d\omega_i(t), \qquad i = 1, 2, \ldots, n,
\end{aligned}
\tag{3}
\]

or equivalently,

\[
\begin{aligned}
dy(t) ={}& \Big[ -\hat C y(t) + \hat A \tilde f\big(y(t)\big) + \hat B \tilde g\big(y(t-\tau_1(t))\big) + \hat D \int_{t-\tau_2(t)}^{t} \tilde h\big(y(s)\big)\,ds + J + \varepsilon(t) \odot e(t) \Big]\,dt \\
& + \sigma\big(t,\, y(t)-x(t-\delta),\, y(t-\tau_1(t)) - x(t-\tau_1(t)-\delta)\big)\,d\omega(t),
\end{aligned}
\tag{4}
\]

where Ĉ, Â, B̂ and D̂ are the estimates of the unknown matrices C, A, B and D, respectively; e(t) = y(t) − x(t − δ) = (y1(t) − x1(t − δ), y2(t) − x2(t − δ), ..., yn(t) − xn(t − δ))^T ∈ R^n denotes the lag synchronization error; δ is the propagation delay; ε(t) = (ε1(t), ε2(t), ..., εn(t))^T ∈ R^n is the updated feedback gain; the operation ⊙ is defined by ε ⊙ e(t) = (ε1 e1(t), ε2 e2(t), ..., εn en(t))^T; and ω(t) is an n-dimensional Brownian motion satisfying E{dω(t)} = 0 and E{[dω(t)]^2} = dt.
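Before deriving the error system, it may help to see how the controlled, noise-perturbed response system (4) is advanced in time. The sketch below is a minimal Euler–Maruyama step under assumed tanh activations, a diagonal noise intensity, and Gaussian increments for ω(t); none of the function or variable names come from the Letter.

```python
import numpy as np

def response_em_step(y, y_tau1, y_hist, e, e_tau1, eps, dt,
                     C_hat, A_hat, B_hat, D_hat, J, sigma_diag, rng):
    """One Euler-Maruyama step of the controlled response system (4).

    e, e_tau1  : lag-synchronization errors e(t) and e(t - tau1(t))
    eps        : current feedback gains epsilon_i(t)
    sigma_diag : callable returning the diagonal of sigma(t, e, e_tau1)
    """
    f = np.tanh
    integral = np.trapz(f(y_hist), dx=dt, axis=0)          # distributed-delay term
    drift = (-C_hat @ y + A_hat @ f(y) + B_hat @ f(y_tau1)
             + D_hat @ integral + J + eps * e)              # eps (.) e(t) control term
    dW = rng.standard_normal(y.size) * np.sqrt(dt)          # increments of omega(t)
    return y + drift * dt + sigma_diag(e, e_tau1) * dW
```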

Let C̃ = C − Ĉ, Ã = A − Â, B̃ = B − B̂ and D̃ = D − D̂ be the estimation errors of the parameters C, A, B and D. Subtracting (2) from (4) yields the following error dynamical system:

\[
\begin{aligned}
de(t) ={}& \Big[ -C e(t) + A f\big(e(t)\big) + B g\big(e(t-\tau_1(t))\big) + D \int_{t-\tau_2(t)}^{t} h\big(e(s)\big)\,ds
+ \tilde C y(t) - \tilde A \tilde f\big(y(t)\big) - \tilde B \tilde g\big(y(t-\tau_1(t))\big) - \tilde D \int_{t-\tau_2(t)}^{t} \tilde h\big(y(s)\big)\,ds + \varepsilon \odot e(t) \Big]\,dt \\
& + \sigma\big(t,\, y(t)-x(t-\delta),\, y(t-\tau_1(t)) - x(t-\tau_1(t)-\delta)\big)\,d\omega(t),
\end{aligned}
\tag{5}
\]

where

\[
f\big(e(t)\big) = \tilde f\big(e(t) + x(t-\delta)\big) - \tilde f\big(x(t-\delta)\big), \qquad
g\big(e(t-\tau_1(t))\big) = \tilde g\big(e(t-\tau_1(t)) + x(t-\tau_1(t)-\delta)\big) - \tilde g\big(x(t-\tau_1(t)-\delta)\big)
\]

and

\[
h\big(e(t)\big) = \tilde h\big(e(t) + x(t-\delta)\big) - \tilde h\big(x(t-\delta)\big).
\]

Throughout this Letter, the following assumptions are needed:

(A1) The activation functions f̃_i(x) and g̃_i(x) satisfy the Lipschitz condition; that is, for all i = 1, 2, ..., n there exist constants λ_i > 0 and φ_i > 0 such that
\[
\big| \tilde f_i(x) - \tilde f_i(y) \big| \le \lambda_i |x - y|, \qquad \big| \tilde g_i(x) - \tilde g_i(y) \big| \le \phi_i |x - y|, \qquad \forall x, y \in R.
\]

(A2) The activation function h̃(x) satisfies the Lipschitz condition in vector form:
\[
\big\| \tilde h(x) - \tilde h(y) \big\| \le \big\| H (x - y) \big\|, \qquad \forall x, y \in R^n,
\]
where H ∈ R^{n×n} is a constant matrix.

(A3) f(0) = g(0) = h(0) = 0 and σ(t, 0, 0) = 0.

(A4) σ(t, u, v) satisfies the Lipschitz condition. Furthermore, there exist known constant matrices Σ1 and Σ2 of appropriate dimensions such that
\[
\mathrm{trace}\big[ \sigma^T(t, u, v)\, \sigma(t, u, v) \big] \le \| \Sigma_1 u \|^2 + \| \Sigma_2 v \|^2, \qquad \forall (t, u, v) \in R_+ \times R^n \times R^n.
\]

(A5) τ1(t) and τ2(t) are bounded and continuously differentiable functions with 0 ≤ τ1(t) ≤ τ1, 0 ≤ τ2(t) ≤ τ2, 0 ≤ τ̇1(t) ≤ μ < 1 and 0 ≤ τ̇2(t) ≤ ϱ < 1.

Let y(t; ξ) denote the state trajectory of the neural network (4) from the initial data y(φ) = ξ(φ) on −τ* ≤ φ ≤ 0 (τ* = max{τ1, τ2}) in L^2_{F0}([−τ*, 0], R^n), where L^2_{F0}([−τ*, 0], R^n) is the family of all F0-measurable C([−τ*, 0]; R^n)-valued random variables satisfying sup_{−τ* ≤ φ ≤ 0} E|ξ(φ)|^2 < ∞, and C([−τ*, 0]; R^n) denotes the family of all continuous R^n-valued functions ξ(φ) on [−τ*, 0] with the norm ||ξ|| = sup_{−τ* ≤ φ ≤ 0} |ξ(φ)|.

Definition 1. The master system (2) and the response system (4) are said to be lag synchronized if the error system (5) is asymptotically stable in mean square, i.e.,

\[
\lim_{t \to \infty} E\, \big\| e(t) \big\|^2 = 0. \tag{6}
\]
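Condition (6) can be checked empirically by averaging over independent noise realizations. The short Python sketch below (a Monte Carlo estimate of E||e(t)||^2, with an assumed array layout) is illustrative only and not part of the Letter.

```python
import numpy as np

def mean_square_error(err_runs):
    """Monte Carlo estimate of E||e(t)||^2.

    err_runs: array of shape (runs, steps, n) storing e(t) for each
    independent noise realization; lag synchronization in the sense of
    Definition 1 corresponds to the returned curve decaying to zero.
    """
    return np.mean(np.sum(err_runs ** 2, axis=-1), axis=0)
```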

3. Main results

The following lemmas will be essential in deriving the synchronization criteria.

Lemma 1. (See Ref. [37].) Let Ω1, Ω2, Ω3 be real matrices of appropriate dimensions with Ω3 > 0. Then for any vectors x, y of appropriate dimensions, the following inequality holds:
\[
2 x^T \Omega_1^T \Omega_2 y \le x^T \Omega_1^T \Omega_3 \Omega_1 x + y^T \Omega_2^T \Omega_3^{-1} \Omega_2 y.
\]

Lemma 2. (See Ref. [27].) For any positive definite matrix N > 0, scalar ν > ν(t) > 0 and vector function w : [0, ν] → R^n such that the integrations concerned are well defined, the following inequality holds:
\[
\Big( \int_0^{\nu(t)} w(s)\,ds \Big)^{\!T} N \Big( \int_0^{\nu(t)} w(s)\,ds \Big) \le \nu(t) \int_0^{\nu(t)} w^T(s)\, N\, w(s)\,ds.
\]
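A quick numerical sanity check of Lemma 1 (not part of the Letter) can be run with random data; in the sketch below, Ω3 is made positive definite by construction and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x, y = rng.standard_normal(n), rng.standard_normal(n)
Omega1, Omega2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
M = rng.standard_normal((n, n))
Omega3 = M @ M.T + n * np.eye(n)            # positive definite by construction

lhs = 2 * x @ Omega1.T @ Omega2 @ y
rhs = (x @ Omega1.T @ Omega3 @ Omega1 @ x
       + y @ Omega2.T @ np.linalg.inv(Omega3) @ Omega2 @ y)
assert lhs <= rhs + 1e-9                    # Lemma 1 holds for this sample
```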

Theorem 1. Under assumptions (A1)–(A5), let the feedback strength ε(t) = (ε1(t), ε2(t), ..., εn(t))^T and the estimated parameters Ĉ, Â, B̂ and D̂ be adapted according to the following update laws:

\[
\begin{cases}
\dot\varepsilon_i = -\alpha_i e_i^2(t), & i = 1, 2, \ldots, n,\\
\dot{\hat c}_i = \gamma_i\, e_i(t)\, y_i(t), & i = 1, 2, \ldots, n,\\
\dot{\hat a}_{ij} = -\psi_{ij}\, e_i(t)\, \tilde f_j\big(y_j(t)\big), & i, j = 1, 2, \ldots, n,\\
\dot{\hat b}_{ij} = -\beta_{ij}\, e_i(t)\, \tilde g_j\big(y_j(t-\tau_1(t))\big), & i, j = 1, 2, \ldots, n,\\
\dot{\hat d}_{ij} = -\rho_{ij}\, e_i(t) \displaystyle\int_{t-\tau_2(t)}^{t} \tilde h_j\big(y_j(s)\big)\,ds, & i, j = 1, 2, \ldots, n,
\end{cases}
\tag{7}
\]

in which α_i > 0, γ_i > 0, ψ_{ij} > 0, β_{ij} > 0 and ρ_{ij} > 0 are arbitrary constants. Then the controlled response system (4) and the drive system (2) achieve lag synchronization in mean square.
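Numerically, the update laws (7) are coupled ordinary differential equations driven by the measured error. A minimal Euler-discretized sketch is given below; the tanh activations, the sampled history buffer and all variable names are assumptions for illustration, not part of the Letter.

```python
import numpy as np

def adapt_step(eps, C_hat, A_hat, B_hat, D_hat, e, y, y_tau1, y_hist, dt,
               alpha, gamma, psi, beta, rho):
    """One Euler step of the update laws (7); the gains are positive scalars
    or arrays broadcast over the corresponding entries."""
    f = np.tanh                                        # example activations
    integral = np.trapz(f(y_hist), dx=dt, axis=0)      # integral of h~_j(y_j(s)) ds
    eps    = eps    + dt * (-alpha * e ** 2)
    C_hat  = C_hat  + dt * np.diag(gamma * e * y)      # only the diagonal c_i is adapted
    A_hat  = A_hat  + dt * (-psi  * np.outer(e, f(y)))
    B_hat  = B_hat  + dt * (-beta * np.outer(e, f(y_tau1)))
    D_hat  = D_hat  + dt * (-rho  * np.outer(e, integral))
    return eps, C_hat, A_hat, B_hat, D_hat
```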

Proof. For each V ∈ C^{1,2}(R_+ × R^n; R_+), define an operator LV associated with the error system (5) by

\[
\begin{aligned}
\mathcal{L} V\big(t, e(t)\big) ={}& V_t\big(t, e(t)\big) + V_e\big(t, e(t)\big) \Big[ -C e(t) + A f\big(e(t)\big) + B g\big(e(t-\tau_1(t))\big) + D \int_{t-\tau_2(t)}^{t} h\big(e(s)\big)\,ds \\
& + \tilde C y(t) - \tilde A \tilde f\big(y(t)\big) - \tilde B \tilde g\big(y(t-\tau_1(t))\big) - \tilde D \int_{t-\tau_2(t)}^{t} \tilde h\big(y(s)\big)\,ds + \varepsilon \odot e(t) \Big] \\
& + \frac{1}{2}\,\mathrm{trace}\Big[ \sigma^T\big(t, e(t), e_{\tau_1}(t)\big)\, V_{ee}\big(t, e(t)\big)\, \sigma\big(t, e(t), e_{\tau_1}(t)\big) \Big],
\end{aligned}
\tag{8}
\]

where

\[
V_t\big(t, e(t)\big) = \frac{\partial V(t, e(t))}{\partial t}, \qquad
V_e\big(t, e(t)\big) = \Big( \frac{\partial V(t, e(t))}{\partial e_1}, \frac{\partial V(t, e(t))}{\partial e_2}, \ldots, \frac{\partial V(t, e(t))}{\partial e_n} \Big), \qquad
V_{ee}\big(t, e(t)\big) = \Big( \frac{\partial^2 V(t, e(t))}{\partial e_i\, \partial e_j} \Big)_{n \times n},
\]

and e(t) = y(t) − x(t − δ), e_{τ1}(t) = y(t − τ1(t)) − x(t − τ1(t) − δ).

Construct the following Lyapunov functional candidate:

\[
V\big(t, e(t)\big) = \frac{1}{2} e^T(t) e(t) + \frac{1}{2} \int_{t-\tau_1(t)}^{t} e^T(\zeta) P e(\zeta)\,d\zeta + \frac{1}{2} \int_{t-\tau_2(t)}^{t}\!\! \int_{s}^{t} e^T(\theta) Q e(\theta)\,d\theta\,ds
+ \frac{1}{2} \sum_{i=1}^{n} \Big[ \frac{1}{\alpha_i} (\varepsilon_i + l)^2 + \frac{1}{\gamma_i} \tilde c_i^2 + \sum_{j=1}^{n} \frac{1}{\psi_{ij}} \tilde a_{ij}^2 + \sum_{j=1}^{n} \frac{1}{\beta_{ij}} \tilde b_{ij}^2 + \sum_{j=1}^{n} \frac{1}{\rho_{ij}} \tilde d_{ij}^2 \Big], \tag{9}
\]

where P and Q are positive definite matrices and l is a constant to be determined later. By the Itô differential formula [38], the stochastic derivative of V along the trajectory of the error system (5) is

\[
dV\big(t, e(t)\big) = \mathcal{L} V\big(t, e(t)\big)\,dt + V_e\big(t, e(t)\big)\, \sigma\big(t, e(t), e_{\tau_1}(t)\big)\,d\omega(t),
\]

where, in view of the Lyapunov functional (9) and the update laws (7), the operator L takes the form

\[
\begin{aligned}
\mathcal{L} V\big(t, e(t)\big)
={}& e^T(t) \Big[ -C e(t) + A f\big(e(t)\big) + B g\big(e(t-\tau_1(t))\big) + D \int_{t-\tau_2(t)}^{t} h\big(e(s)\big)\,ds \\
& \qquad + \tilde C y(t) - \tilde A \tilde f\big(y(t)\big) - \tilde B \tilde g\big(y(t-\tau_1(t))\big) - \tilde D \int_{t-\tau_2(t)}^{t} \tilde h\big(y(s)\big)\,ds + \varepsilon \odot e(t) \Big] \\
& + \frac{1}{2} e^T(t) P e(t) - \frac{1-\dot\tau_1(t)}{2}\, e^T\big(t-\tau_1(t)\big) P e\big(t-\tau_1(t)\big)
+ \frac{1}{2} \tau_2(t)\, e^T(t) Q e(t) - \frac{1-\dot\tau_2(t)}{2} \int_{t-\tau_2(t)}^{t} e^T(s) Q e(s)\,ds \\
& - \sum_{i=1}^{n} (\varepsilon_i + l)\, e_i^2(t) - \sum_{i=1}^{n} \tilde c_i e_i(t) y_i(t)
+ \sum_{i=1}^{n}\sum_{j=1}^{n} \tilde a_{ij} e_i(t) \tilde f_j\big(y_j(t)\big)
+ \sum_{i=1}^{n}\sum_{j=1}^{n} \tilde b_{ij} e_i(t) \tilde g_j\big(y_j(t-\tau_1(t))\big) \\
& + \sum_{i=1}^{n}\sum_{j=1}^{n} \tilde d_{ij} e_i(t) \int_{t-\tau_2(t)}^{t} \tilde h_j\big(y_j(s)\big)\,ds
+ \frac{1}{2}\,\mathrm{trace}\Big[ \sigma^T\big(t, e(t), e_{\tau_1}(t)\big)\, \sigma\big(t, e(t), e_{\tau_1}(t)\big) \Big].
\end{aligned}
\tag{10}
\]

It can be obtained that

\[
\begin{cases}
e^T(t)\,\big(\varepsilon \odot e(t)\big) = \sum_{i=1}^{n} \varepsilon_i e_i^2(t),\\[2pt]
e^T(t)\, \tilde C y(t) = \sum_{i=1}^{n} \tilde c_i e_i(t) y_i(t),\\[2pt]
e^T(t)\, \tilde A \tilde f\big(y(t)\big) = \sum_{i=1}^{n}\sum_{j=1}^{n} \tilde a_{ij} e_i(t) \tilde f_j\big(y_j(t)\big),\\[2pt]
e^T(t)\, \tilde B \tilde g\big(y(t-\tau_1(t))\big) = \sum_{i=1}^{n}\sum_{j=1}^{n} \tilde b_{ij} e_i(t) \tilde g_j\big(y_j(t-\tau_1(t))\big),\\[2pt]
e^T(t)\, \tilde D \displaystyle\int_{t-\tau_2(t)}^{t} \tilde h\big(y(s)\big)\,ds = \sum_{i=1}^{n}\sum_{j=1}^{n} \tilde d_{ij} e_i(t) \int_{t-\tau_2(t)}^{t} \tilde h_j\big(y_j(s)\big)\,ds.
\end{cases}
\tag{11}
\]

By using Lemma 1, we can get

\[
e^T(t)\, A f\big(e(t)\big) \le \frac{1}{2} e^T(t) A A^T e(t) + \frac{1}{2} f^T\big(e(t)\big) f\big(e(t)\big) \tag{12}
\]

and

\[
e^T(t)\, B g\big(e(t-\tau_1(t))\big) \le \frac{1}{2} e^T(t) B B^T e(t) + \frac{1}{2} g^T\big(e(t-\tau_1(t))\big)\, g\big(e(t-\tau_1(t))\big). \tag{13}
\]

According to (11)–(13), we can get


\[
\begin{aligned}
\mathcal{L} V\big(t, e(t)\big) \le{}& -e^T(t) C e(t) + \frac{1}{2} e^T(t) A A^T e(t) + \frac{1}{2} f^T\big(e(t)\big) f\big(e(t)\big) + \frac{1}{2} e^T(t) B B^T e(t) + \frac{1}{2} g^T\big(e(t-\tau_1(t))\big)\, g\big(e(t-\tau_1(t))\big) \\
& + e^T(t) D \int_{t-\tau_2(t)}^{t} h\big(e(s)\big)\,ds + \frac{1}{2} e^T(t) P e(t) - \frac{1-\dot\tau_1(t)}{2}\, e^T\big(t-\tau_1(t)\big) P e\big(t-\tau_1(t)\big) - l\, e^T(t) e(t) \\
& + \frac{1}{2} \tau_2(t)\, e^T(t) Q e(t) - \frac{1-\dot\tau_2(t)}{2} \int_{t-\tau_2(t)}^{t} e^T(s) Q e(s)\,ds
+ \frac{1}{2}\,\mathrm{trace}\Big[ \sigma^T\big(t, e(t), e_{\tau_1}(t)\big)\, \sigma\big(t, e(t), e_{\tau_1}(t)\big) \Big].
\end{aligned}
\tag{14}
\]

From assumption (A1), we can get the following inequalities:

\[
f^T\big(e(t)\big) f\big(e(t)\big) = \sum_{i=1}^{n} f_i^2\big(e_i(t)\big) \le \sum_{i=1}^{n} \lambda_i^2 e_i^2(t) \le \Lambda\, e^T(t) e(t) \tag{15}
\]

and

\[
g^T\big(e(t)\big) g\big(e(t)\big) = \sum_{i=1}^{n} g_i^2\big(e_i(t)\big) \le \sum_{i=1}^{n} \phi_i^2 e_i^2(t) \le \Phi\, e^T(t) e(t), \tag{16}
\]

where Λ = max{λ_i^2 : i = 1, 2, ..., n} and Φ = max{φ_i^2 : i = 1, 2, ..., n}. From assumption (A5), the following inequalities hold:

\[
-\frac{1-\dot\tau_1(t)}{2} \le -\frac{1-\mu}{2}, \qquad -\frac{1-\dot\tau_2(t)}{2} \le -\frac{1-\varrho}{2}. \tag{17}
\]

According to (15)–(17), we have

\[
\begin{aligned}
\mathcal{L} V\big(t, e(t)\big) \le{}& -e^T(t) C e(t) + \frac{1}{2} e^T(t) A A^T e(t) + \frac{1}{2} e^T(t) B B^T e(t) + \Big( \frac{\Lambda}{2} - l \Big) e^T(t) e(t) + \frac{1}{2} e^T(t) P e(t) \\
& + \frac{\Phi}{2}\, e^T\big(t-\tau_1(t)\big) e\big(t-\tau_1(t)\big) - \frac{1-\mu}{2}\, e^T\big(t-\tau_1(t)\big) P e\big(t-\tau_1(t)\big)
+ e^T(t) D \int_{t-\tau_2(t)}^{t} h\big(e(s)\big)\,ds \\
& + \frac{1}{2} \tau_2\, e^T(t) Q e(t) - \frac{1-\varrho}{2} \int_{t-\tau_2(t)}^{t} e^T(s) Q e(s)\,ds
+ \frac{1}{2}\,\mathrm{trace}\Big[ \sigma^T\big(t, e(t), e_{\tau_1}(t)\big)\, \sigma\big(t, e(t), e_{\tau_1}(t)\big) \Big].
\end{aligned}
\tag{18}
\]

Taking Ω1, Ω2 and Ω3 in Lemma 1 to be identity matrices of compatible dimensions, we have

\[
\begin{aligned}
e^T(t)\, D \int_{t-\tau_2(t)}^{t} h\big(e(s)\big)\,ds
&= \big( D^T e(t) \big)^T \int_{t-\tau_2(t)}^{t} h\big(e(s)\big)\,ds \\
&\le \frac{1}{2} \big( D^T e(t) \big)^T \big( D^T e(t) \big) + \frac{1}{2} \Big( \int_{t-\tau_2(t)}^{t} h\big(e(s)\big)\,ds \Big)^{\!T} \Big( \int_{t-\tau_2(t)}^{t} h\big(e(s)\big)\,ds \Big) \\
&= \frac{1}{2} e^T(t) D D^T e(t) + \frac{1}{2} \Big( \int_{t-\tau_2(t)}^{t} h^T\big(e(s)\big)\,ds \Big) \Big( \int_{t-\tau_2(t)}^{t} h\big(e(s)\big)\,ds \Big).
\end{aligned}
\tag{19}
\]

From assumption (A2), we obtain

\[
\big\| h\big(e(t)\big) \big\| = \big\| \tilde h\big(e(t) + x(t-\delta)\big) - \tilde h\big(x(t-\delta)\big) \big\| \le \big\| H e(t) \big\|, \tag{20}
\]

and hence h^T(e(t)) h(e(t)) ≤ ||He(t)||^2 = e^T(t) H^T H e(t). According to Lemma 2, we have

\[
\frac{1}{2} \Big( \int_{t-\tau_2(t)}^{t} h^T\big(e(s)\big)\,ds \Big) \Big( \int_{t-\tau_2(t)}^{t} h\big(e(s)\big)\,ds \Big)
\le \frac{\tau_2}{2} \int_{t-\tau_2(t)}^{t} h^T\big(e(s)\big)\, h\big(e(s)\big)\,ds
\le \frac{\tau_2}{2} \int_{t-\tau_2(t)}^{t} e^T(s) H^T H e(s)\,ds. \tag{21}
\]

It can be seen from assumption (A4) that

\[
\frac{1}{2}\,\mathrm{trace}\Big[ \sigma^T\big(t, e(t), e_{\tau_1}(t)\big)\, \sigma\big(t, e(t), e_{\tau_1}(t)\big) \Big]
\le \frac{1}{2} e^T(t)\, \Sigma_1 \Sigma_1^T\, e(t) + \frac{1}{2} e^T\big(t-\tau_1(t)\big)\, \Sigma_2 \Sigma_2^T\, e\big(t-\tau_1(t)\big). \tag{22}
\]

Let Q = \frac{\tau_2}{1-\varrho} H^T H, which is a positive definite matrix. Using inequalities (19), (21) and (22), we obtain from (18) that

\[
\begin{aligned}
\mathcal{L} V\big(t, e(t)\big) \le{}& e^T(t) \Big[ -C + \frac{1}{2} \tau_2 Q + \frac{1}{2} A A^T + \frac{1}{2} B B^T + \Big( \frac{\Lambda}{2} - l \Big) I + \frac{1}{2} D D^T + \frac{1}{2} \Sigma_1 \Sigma_1^T + \frac{1}{2} P \Big] e(t) \\
& + e^T\big(t-\tau_1(t)\big) \Big[ \frac{1}{2} \Sigma_2 \Sigma_2^T + \frac{\Phi}{2} I - \frac{1-\mu}{2} P \Big] e\big(t-\tau_1(t)\big) \\
\le{}& e^T(t) \Big[ \lambda_{\max}(-C) + \lambda_{\max}\Big(\frac{1}{2} A A^T\Big) + \lambda_{\max}\Big(\frac{1}{2} B B^T\Big) + \lambda_{\max}\Big(\frac{1}{2} \tau_2 Q\Big) + \lambda_{\max}\Big(\frac{1}{2} D D^T\Big) + \lambda_{\max}\Big(\frac{1}{2} \Sigma_1 \Sigma_1^T\Big) + \frac{1}{2} P + \frac{\Lambda}{2} - l \Big] e(t) \\
& + e^T\big(t-\tau_1(t)\big) \Big[ \lambda_{\max}\Big(\frac{1}{2} \Sigma_2 \Sigma_2^T\Big) + \frac{\Phi}{2} - \frac{1-\mu}{2} P \Big] e\big(t-\tau_1(t)\big),
\end{aligned}
\tag{23}
\]

where, for any matrix W, λ_max(W) denotes the maximum eigenvalue of W. Then we take

\[
l = \lambda_{\max}(-C) + \lambda_{\max}\Big(\frac{1}{2} A A^T\Big) + \lambda_{\max}\Big(\frac{1}{2} B B^T\Big) + \lambda_{\max}\Big(\frac{1}{2} \tau_2 Q\Big) + \lambda_{\max}\Big(\frac{1}{2} D D^T\Big) + \lambda_{\max}\Big(\frac{1}{2} \Sigma_1 \Sigma_1^T\Big) + \frac{\Lambda}{2} + \frac{1}{2(1-\mu)}\Big[ \lambda_{\max}\big(\Sigma_2 \Sigma_2^T\big) + \Phi \Big] + 1 \tag{24}
\]

and

\[
P = \frac{1}{1-\mu}\Big[ \lambda_{\max}\big(\Sigma_2 \Sigma_2^T\big) + \Phi \Big] I. \tag{25}
\]

Then, it can be obtained that

\[
\mathcal{L} V\big(t, e(t)\big) \le -e^T(t)\, e(t). \tag{26}
\]

According to the invariance principle of stochastic differential equations proposed in [39], it can be derived that E||e(t; ξ)||^2 → 0, Ĉ → C, Â → A, B̂ → B and D̂ → D as t → ∞. Hence the unknown parameters of the drive system are identified at the same time as synchronization in mean square is achieved. This completes the proof of Theorem 1. □

When the stochastic perturbation is removed from the response system, the response system becomes

\[
dy(t) = \Big[ -\hat C y(t) + \hat A \tilde f\big(y(t)\big) + \hat B \tilde g\big(y(t-\tau_1(t))\big) + \hat D \int_{t-\tau_2(t)}^{t} \tilde h\big(y(s)\big)\,ds + \varepsilon(t) \odot e(t) + J \Big]\,dt, \tag{27}
\]

which leads to the following result.

Corollary 1. Under assumptions (A1)–(A3) and (A5), let the feedback strength ε(t) = (ε1(t), ε2(t), ..., εn(t))^T and the estimated parameters Ĉ, Â, B̂ and D̂ be adapted according to the following update laws:

\[
\begin{cases}
\dot\varepsilon_i = -\alpha_i e_i^2(t), & i = 1, 2, \ldots, n,\\
\dot{\hat c}_i = \gamma_i\, e_i(t)\, y_i(t), & i = 1, 2, \ldots, n,\\
\dot{\hat a}_{ij} = -\psi_{ij}\, e_i(t)\, \tilde f_j\big(y_j(t)\big), & i, j = 1, 2, \ldots, n,\\
\dot{\hat b}_{ij} = -\beta_{ij}\, e_i(t)\, \tilde g_j\big(y_j(t-\tau_1(t))\big), & i, j = 1, 2, \ldots, n,\\
\dot{\hat d}_{ij} = -\rho_{ij}\, e_i(t) \displaystyle\int_{t-\tau_2(t)}^{t} \tilde h_j\big(y_j(s)\big)\,ds, & i, j = 1, 2, \ldots, n,
\end{cases}
\tag{28}
\]

in which α_i > 0, γ_i > 0, ψ_{ij} > 0, β_{ij} > 0 and ρ_{ij} > 0 are arbitrary constants. Then the controlled noiseless response system (27) and the drive system (2) achieve lag synchronization.

When the distributed delays are removed from the neural network, that is, D = 0, the drive system (2) and the response system (4) can be rewritten as

\[
dx(t) = \Big[ -C x(t) + A \tilde f\big(x(t)\big) + B \tilde g\big(x(t-\tau_1(t))\big) + J \Big]\,dt \tag{29}
\]

and

\[
\begin{aligned}
dy(t) ={}& \Big[ -\hat C y(t) + \hat A \tilde f\big(y(t)\big) + \hat B \tilde g\big(y(t-\tau_1(t))\big) + J + \varepsilon(t) \odot e(t) \Big]\,dt \\
& + \sigma\big(t,\, y(t)-x(t-\delta),\, y(t-\tau_1(t)) - x(t-\tau_1(t)-\delta)\big)\,d\omega(t),
\end{aligned}
\tag{30}
\]

respectively. We can then obtain the following result.

Corollary 2. Under assumptions (A1)–(A5), if the time-varying feedback strength ε(t) = (ε1(t), ε2(t), ..., εn(t))^T and the estimated parameters Ĉ, Â and B̂ are adapted according to the following update laws:

\[
\begin{cases}
\dot\varepsilon_i = -\alpha_i e_i^2(t), & i = 1, 2, \ldots, n,\\
\dot{\hat c}_i = \gamma_i\, e_i(t)\, y_i(t), & i = 1, 2, \ldots, n,\\
\dot{\hat a}_{ij} = -\psi_{ij}\, e_i(t)\, \tilde f_j\big(y_j(t)\big), & i, j = 1, 2, \ldots, n,\\
\dot{\hat b}_{ij} = -\beta_{ij}\, e_i(t)\, \tilde g_j\big(y_j(t-\tau_1(t))\big), & i, j = 1, 2, \ldots, n,
\end{cases}
\tag{31}
\]

then the drive system (29) and the noise-perturbed response system (30) achieve lag synchronization in mean square.

Fig. 1. Chaotic dynamics of the response system with stochastic noise perturbation.

Fig. 2. t − e1(t) − e2(t).

Furthermore, when the matrices C, A, B and D are known a priori, the following corollary can be obtained by the adaptive feedback approach.

Corollary 3. Under assumptions (A1)–(A3) and (A5), if there exist arbitrary positive constants α_i (i = 1, 2, ..., n) such that the feedback strength ε(t) = (ε1(t), ε2(t), ..., εn(t))^T is adapted according to the update law

\[
\dot\varepsilon_i(t) = -\alpha_i e_i^2(t), \qquad i = 1, 2, \ldots, n, \tag{32}
\]

then the drive chaotic neural network with known parameters and the noise-perturbed response system achieve lag synchronization in mean square.

Proof. Construct the following Lyapunov functional:

\[
V\big(t, e(t)\big) = \frac{1}{2} e^T(t) e(t) + \frac{1}{2} \int_{t-\tau_1(t)}^{t} e^T(\zeta) P e(\zeta)\,d\zeta + \frac{1}{2} \int_{t-\tau_2(t)}^{t}\!\! \int_{s}^{t} e^T(\theta) Q e(\theta)\,d\theta\,ds + \frac{1}{2} \sum_{i=1}^{n} \frac{1}{\alpha_i} (\varepsilon_i + l)^2. \tag{33}
\]

The result then follows along the same lines as the proof of Theorem 1; the details are straightforward and hence omitted. □

4. Numerical simulations

We consider the following second-order stochastic chaotic neural network with discrete and distributed time-varying delays:

\[
dx(t) = \Big[ -C x(t) + A \tilde f\big(x(t)\big) + B \tilde g\big(x(t-\tau_1(t))\big) + D \int_{t-\tau_2(t)}^{t} \tilde h\big(x(s)\big)\,ds + J \Big]\,dt, \tag{34}
\]

with x(t) = (x1(t), x2(t))^T, f̃(x(t)) = g̃(x(t)) = h̃(x(t)) = tanh(x(t)) = (tanh(x1), tanh(x2))^T,

\[
C = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad
A = \begin{pmatrix} 1.8 & -0.15 \\ -5.2 & 3.5 \end{pmatrix}, \qquad
B = \begin{pmatrix} -1.7 & -0.12 \\ -0.26 & -2.5 \end{pmatrix}, \qquad
D = \begin{pmatrix} 0.6 & 0.15 \\ -2 & -0.12 \end{pmatrix},
\]

τ1(t) = e^t/(e^t + 1), τ2(t) = 1 and J = (0, 0)^T,

respectively. The neural network model then has a chaotic attractor with initial values x1(t) = 0.2, x2(t) = 0.5, ∀t ∈ [−1, 0]. The noise-perturbed response system is designed as

\[
\begin{aligned}
dy(t) ={}& \Big[ -\hat C y(t) + \hat A \tilde f\big(y(t)\big) + \hat B \tilde g\big(y(t-\tau_1(t))\big) + \hat D \int_{t-\tau_2(t)}^{t} \tilde h\big(y(s)\big)\,ds + J + \varepsilon(t) \odot e(t) \Big]\,dt \\
& + \sigma\big(t,\, y(t)-x(t-\delta),\, y(t-\tau_1(t)) - x(t-\tau_1(t)-\delta)\big)\,d\omega(t),
\end{aligned}
\tag{35}
\]

with

\[
\hat C = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad
\hat A = \begin{pmatrix} 1.8 & -0.15 \\ -5.2 & \hat a_{22} \end{pmatrix}, \qquad
\hat B = \begin{pmatrix} -1.7 & -0.12 \\ -0.26 & \hat b_{22} \end{pmatrix}, \qquad
\hat D = \begin{pmatrix} 0.6 & 0.15 \\ -2 & \hat d_{22} \end{pmatrix}.
\]

The noise intensity is chosen as

\[
\sigma\big(t, e(t), e_{\tau_1}(t)\big) = \begin{pmatrix} a_1 e_1(t) + b_1 e_1(t-\tau_1(t)) & 0 \\ 0 & a_2 e_2(t) + b_2 e_2(t-\tau_1(t)) \end{pmatrix}.
\]

Fig. 3. t − x1(t) − y1(t).

Fig. 4. t − x2(t) − y2(t).

Fig. 5. t − ε1(t) − ε2(t).

Fig. 6. â22(t), b̂22(t), d̂22(t).
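The simulation setup described in the next paragraph (Euler–Maruyama integration of (34)–(35) together with the update laws (7)) can be reproduced along the following lines. This is a self-contained but simplified Python sketch: the step size, the noise coefficients a1 = a2 = b1 = b2 = 0.1, and the rectangular approximation of the distributed-delay integral are assumptions for illustration, not values taken from the Letter.

```python
import numpy as np

# Drive/response data from (34)-(35); adaptation gains as in the simulations.
C = np.eye(2)
A = np.array([[1.8, -0.15], [-5.2, 3.5]])
B = np.array([[-1.7, -0.12], [-0.26, -2.5]])
D = np.array([[0.6, 0.15], [-2.0, -0.12]])
J = np.zeros(2)
delta, dt, T = 1.2, 0.001, 30.0
alpha, gain = 30.0, 50.0                      # alpha_i and gamma/psi/beta/rho
a_n = b_n = 0.1                               # assumed noise coefficients a_i, b_i

tau1 = lambda t: np.exp(t) / (np.exp(t) + 1.0)
tau2 = 1.0
steps, lag, hist = int(T / dt), int(delta / dt), int(tau2 / dt)

rng = np.random.default_rng(1)
x = np.zeros((steps, 2)); x[0] = [0.2, 0.5]
y = np.zeros((steps, 2)); y[0] = [1.0, 1.0]
eps = np.ones(2)
A_h, B_h, D_h, C_h = A.copy(), B.copy(), D.copy(), C.copy()
A_h[1, 1], B_h[1, 1], D_h[1, 1] = 3.4, -2.4, -0.2   # initial guesses of the unknown entries

def delayed(traj, k, lag_steps):
    return traj[max(k - lag_steps, 0)]

for k in range(steps - 1):
    t = k * dt
    d1 = int(tau1(t) / dt)
    x_d, y_d = delayed(x, k, d1), delayed(y, k, d1)
    x_int = np.tanh(x[max(k - hist, 0):k + 1]).sum(axis=0) * dt   # distributed-delay integral
    y_int = np.tanh(y[max(k - hist, 0):k + 1]).sum(axis=0) * dt
    e = y[k] - delayed(x, k, lag)                                 # e(t) = y(t) - x(t - delta)
    e_d = y_d - delayed(x, max(k - d1, 0), lag)                   # e(t - tau1(t))
    # drive system (34)
    x[k + 1] = x[k] + dt * (-C @ x[k] + A @ np.tanh(x[k]) + B @ np.tanh(x_d) + D @ x_int + J)
    # response system (35) with adaptive control and noise
    drift = (-C_h @ y[k] + A_h @ np.tanh(y[k]) + B_h @ np.tanh(y_d)
             + D_h @ y_int + J + eps * e)
    noise = (a_n * e + b_n * e_d) * rng.standard_normal(2) * np.sqrt(dt)
    y[k + 1] = y[k] + dt * drift + noise
    # update laws (7)
    eps += dt * (-alpha * e ** 2)
    C_h += dt * np.diag(gain * e * y[k])
    A_h += dt * (-gain * np.outer(e, np.tanh(y[k])))
    B_h += dt * (-gain * np.outer(e, np.tanh(y_d)))
    D_h += dt * (-gain * np.outer(e, y_int))
```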

In (35) and in the noise intensity above, ω(t) is a two-dimensional Brownian motion satisfying E[dω(t)] = 0 and E[dω(t)]^2 = dt. From (A4), we have Σ1 = diag(|a1|, |a2|) and Σ2 = diag(|b1|, |b2|).

In the simulations, the Euler–Maruyama numerical method is employed to simulate the drive system (34) and the response system (35). The initial values of the response system are taken as y1(t) = 1, y2(t) = 1, ∀t ∈ [−1, 0]. We take the initial conditions of the feedback strength and the unknown parameters as (ε1(0), ε2(0))^T = (1, 1)^T and (â22(0), b̂22(0), d̂22(0))^T = (3.4, −2.4, −0.2)^T, respectively, and set α_i = 30, γ_i = β_{ij} = ψ_{ij} = ρ_{ij} = 50. The propagation delay is δ = 1.2. Fig. 1 shows the chaotic behavior of the noise-perturbed response system (35) in phase space. Fig. 2 depicts the error states of system (5). Figs. 3 and 4 show the lag synchronization between the unknown drive system (34) and the response system (35) with propagation delay δ = 1.2. Figs. 5 and 6 show the time evolution of the adaptive parameters ε1, ε2, â22, b̂22 and d̂22, respectively.

5. Conclusion

In this Letter, an adaptive scheme for the problem of lag synchronization and parameter identification of stochastic chaotic neural networks with discrete and distributed time-varying delays has been investigated in detail. The chaotic neural networks are subjected to stochastic disturbances described in terms of a Brownian motion. By the adaptive feedback technique, a simple, rigorous and systematic synchronization-based parameter identification scheme is proposed to solve the problem addressed. The method used in this Letter is simple to implement in practice: the variable feedback strength is automatically adapted to a suitable strength, and the synchronization and parameter identification speed can be adjusted by regulating the adaptive gains. Finally, numerical simulations are given to show the feasibility of the proposed approach.

Acknowledgements

The authors are grateful to the anonymous reviewers for their kind help and constructive comments, which helped improve the presentation of the Letter.

References

[1] G.R. Chen, X. Dong, From Chaos to Order: Methodologies, Perspectives, and Applications, World Scientific, Singapore, 1998.
[2] L.M. Pecora, T.L. Carroll, Phys. Rev. Lett. 64 (1990) 821.
[3] J.G. Ojalvo, R. Roy, Phys. Rev. Lett. 86 (22) (2001) 5204.
[4] T. Yang, L.O. Chua, IEEE Trans. Circuits Syst. I 44 (1997) 976.
[5] S. Boccaletti, J. Kurths, G. Osipov, D.L. Valladares, C.S. Zhou, Phys. Rep. 366 (2002) 1.
[6] Y. Tang, J.A. Fang, Phys. Lett. A 372 (2008) 1816.
[7] D.M. Li, Z.D. Wang, J. Z, J.A. Fang, J.N. Ni, Chaos Solitons Fractals, doi:10.1016/j.chaos.2007.01.057.
[8] E.M. Shahverdiev, S. Sivaprakasam, K.A. Shore, Phys. Lett. A 292 (2002) 320.
[9] J. Lü, G. Chen, IEEE Trans. Automat. Control 50 (2005) 841.
[10] J. Lü, X.H. Yu, G. Chen, D.Z. Cheng, IEEE Trans. Circuits Syst. I 51 (2004) 787.
[11] W.W. Yu, J. Cao, J. Lü, SIAM J. Appl. Dynam. Syst. 7 (2008) 108.
[12] J. Zhou, J.A. Lu, J. Lü, IEEE Trans. Automat. Control 51 (2006) 652.
[13] X.F. Wang, G. Chen, Int. J. Bifur. Chaos 12 (2002) 187.
[14] J. Lu, J. Cao, Physica A 384 (2007) 432.
[15] W. Yu, J. Cao, Physica A 373 (2007) 252.
[16] J. Cao, P. Li, W. Wang, Phys. Lett. A 353 (4) (2006) 318.
[17] W. He, J. Cao, Phys. Lett. A 372 (2008) 408.
[18] X. Lou, C. Bao, Physica A 380 (2007) 563.
[19] W.L. Lu, T.P. Chen, Physica D 198 (2004) 148.
[20] W.L. Lu, T.P. Chen, IEEE Trans. Circuits Syst. I 51 (2004) 2491.
[21] P. Li, J.D. Cao, Z.D. Wang, Physica A 373 (2007) 261.
[22] G.R. Chen, J. Zhou, Z.R. Liu, Int. J. Bifur. Chaos 14 (7) (2004) 2229.
[23] C.J. Cheng, T.L. Liao, C.C. Hwang, Chaos Solitons Fractals 24 (2005) 197.
[24] Z. Li, G. Chen, IEEE Trans. Circuits Syst. 53 (2006) 28.
[25] Y. Sun, J. Cao, Phys. Lett. A 364 (2007) 277.
[26] K. Wang, Z. Teng, H. Jiang, Physica A 387 (2008) 631.
[27] Z. Wang, S. Lauria, J.A. Fang, X. Liu, Chaos Solitons Fractals 32 (2007) 62.
[28] Z. Wang, Y. Liu, K. Fraser, X. Liu, Phys. Lett. A 354 (2006) 288.
[29] L. Wan, J. Sun, Phys. Lett. A 343 (4) (2005) 306.
[30] D.B. Huang, Phys. Rev. E 71 (2005) 037203.
[31] D.B. Huang, Phys. Rev. E 69 (2004) 067201.
[32] D.B. Huang, Phys. Rev. E 73 (2006) 066204.
[33] S. Ruan, R. Filfil, Physica D 191 (2004) 323.
[34] Z. Wang, Y. Liu, M. Li, X. Liu, IEEE Trans. Neural Networks 17 (2006) 814.
[35] Z. Wang, J.A. Fang, X. Liu, Chaos Solitons Fractals 36 (2008) 388.
[36] Z. Wang, S. Shu, Y. Liu, D.W.C. Ho, X. Liu, Chaos Solitons Fractals 28 (2006) 793.
[37] S. Xu, T. Chen, J. Lam, IEEE Trans. Automat. Control 48 (2003) 900.
[38] A. Friedman, Stochastic Differential Equations and Applications, Academic Press, New York, 1976.
[39] X. Mao, J. Math. Anal. Appl. 268 (2002) 125.