Global exponential stability of a class of memristor-based recurrent neural networks with time-varying delays


Neurocomputing 97 (2012) 149–154


Guodong Zhang a,b,c, Yi Shen a,b,*, Junwei Sun a,b

a Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
b Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan 430074, China
c College of Mathematics and Statistics, Hubei Normal University, Huangshi 435002, China

Article info

Article history: Received 13 November 2011; received in revised form 15 February 2012; accepted 16 May 2012; available online 30 May 2012. Communicated by W. Lu.

Abstract

This paper analyzes a general class of memristor-based recurrent neural networks with time-varying delays (DRNNs). The dynamic analysis employs the theory of differential equations with a discontinuous right-hand side, as introduced by Filippov, and new conditions for global exponential stability are obtained. These conditions do not require the activation functions to be differentiable, the connection weight matrices to be symmetric, or the delay functions to be differentiable, so the results are mild and general. Finally, numerical simulations illustrate the effectiveness of the results.

Keywords: Global exponential stability; Recurrent neural networks; Memristor; Time-varying delays

1. Introduction

After Prof. L.O. Chua postulated in 1971 the existence of a new two-terminal circuit element called the memristor (a contraction of memory and resistor) [1,2], it took almost 40 years before a practical memristor device was built, announced by scientists at Hewlett-Packard Laboratories in the 1 May 2008 issue of Nature [3,4]. The name reflects the device's function: it memorizes its history. A memristor is a two-terminal passive device whose value (i.e., memristance) depends on the magnitude and polarity of the voltage applied to it and on how long that voltage has been applied. When the voltage is turned off, the memristor remembers its most recent value until the next time it is turned on. Because of this feature, such devices can be used to build new neural network models that emulate the human brain, with potential applications in next-generation computers and powerful brain-like neural computers (see [3–10]).

As we know, recurrent neural networks are very important nonlinear circuit networks because of their wide applications in associative memory, pattern recognition, signal processing, systems control, data compression, and optimization problems (see [11–20]).

* Corresponding author at: Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China. E-mail addresses: [email protected] (G. Zhang), [email protected] (Y. Shen).

http://dx.doi.org/10.1016/j.neucom.2012.05.002
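To make the memristor behavior described above concrete, the switching of the HP device is commonly summarized by the linear dopant-drift model of [3]. The following Python sketch integrates that model under a sinusoidal drive; the device constants and all identifiers are illustrative assumptions of ours, not values taken from [1–4].

```python
import numpy as np

# Assumed device constants (illustrative only).
R_ON, R_OFF = 100.0, 16e3        # fully doped / undoped resistance (ohm)
D_THICK, MU_V = 10e-9, 1e-14     # film thickness (m), dopant mobility (m^2 s^-1 V^-1)

def memristance_trace(v_of_t, t, w0=0.5 * D_THICK):
    """Forward-Euler integration of the linear dopant-drift model [3]:
    M(w) = R_ON*w/D + R_OFF*(1 - w/D),  dw/dt = MU_V*(R_ON/D)*i(t)."""
    w, trace = w0, []
    for k in range(len(t) - 1):
        m = R_ON * w / D_THICK + R_OFF * (1.0 - w / D_THICK)
        i = v_of_t(t[k]) / m                   # Ohm's law gives the current
        w += MU_V * (R_ON / D_THICK) * i * (t[k + 1] - t[k])
        w = min(max(w, 0.0), D_THICK)          # dopant front stays inside the film
        trace.append(m)
    return np.array(trace)

t = np.linspace(0.0, 2.0, 20001)
m = memristance_trace(lambda s: np.sin(2.0 * np.pi * s), t)
print(m[0], m[-1])  # memristance has drifted and is retained when the drive stops
```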

However, many of the memristor-based networks constructed so far have been found to be computationally restrictive, which strongly limits their applicability. For this reason, Wu et al. [7–9] recently turned their attention to the following general class of memristor-based recurrent neural networks:

$$\frac{dx_i(t)}{dt}=-d_i(x_i)x_i(t)+\sum_{j=1}^{n}a_{ij}(x_i)f_j(x_j(t))+\sum_{j=1}^{n}b_{ij}(x_i)g_j(x_j(t-\tau_{ij}(t)))+I_i,\quad t\ge0,\ i=1,2,\dots,n,\qquad(1)$$

where

$$d_i(x_i)=\begin{cases}d_i^{*},&|x_i(t)|<T_i,\\ d_i^{**},&|x_i(t)|>T_i,\end{cases}\qquad a_{ij}(x_i)=\begin{cases}a_{ij}^{*},&|x_i(t)|<T_i,\\ a_{ij}^{**},&|x_i(t)|>T_i,\end{cases}\qquad b_{ij}(x_i)=\begin{cases}b_{ij}^{*},&|x_i(t)|<T_i,\\ b_{ij}^{**},&|x_i(t)|>T_i,\end{cases}$$

in which the switching jumps $T_i>0$ and $d_i^{*}>0$, $d_i^{**}>0$, $a_{ij}^{*}$, $a_{ij}^{**}$, $b_{ij}^{*}$, $b_{ij}^{**}$, $i,j=1,2,\dots,n$, are all constants, and $I_i$ is an external constant input.
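Although the analysis in this paper is purely analytical, the state-dependent switching in (1) is straightforward to express in code. The following Python sketch (our own illustration; all identifiers are hypothetical) evaluates the right-hand side of (1) for given switching data; at the jump $|x_i(t)|=T_i$ the coefficients are set-valued, which is exactly why the Filippov framework of Section 2 is needed.

```python
import numpy as np

def rhs(x, x_delayed, p):
    """Right-hand side of network (1).

    x          : current state, shape (n,)
    x_delayed  : x_delayed[i][j] = x_j(t - tau_ij(t)), shape (n, n)
    p          : dict with keys d_lo/d_hi (n,), a_lo/a_hi, b_lo/b_hi (n, n),
                 T (n,), I (n,), and the vectorized activations f, g.
    """
    n = len(x)
    dx = np.empty(n)
    for i in range(n):
        below = abs(x[i]) < p["T"][i]                 # side of the switching jump
        d = p["d_lo"][i] if below else p["d_hi"][i]   # d_i* or d_i**
        a = p["a_lo"][i] if below else p["a_hi"][i]   # row i of (a_ij)
        b = p["b_lo"][i] if below else p["b_hi"][i]   # row i of (b_ij)
        dx[i] = (-d * x[i]
                 + np.dot(a, p["f"](x))
                 + np.dot(b, p["g"](x_delayed[i]))
                 + p["I"][i])
    return dx
```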


Different from the previous works [7–9], in this paper we deal with the problem of global exponential stability analysis for the general memristor-based recurrent neural network (1). The organization of this paper is as follows. Some preliminaries are introduced in Section 2. New algebraic conditions for global exponential stability are derived in Section 3. Numerical simulations demonstrating the effectiveness of the proposed approach are given in Section 4. Finally, Section 5 concludes the paper.

2. Preliminaries

For convenience, we first make the following preparations. Throughout this paper, solutions of all the systems considered are intended in Filippov's sense (see [21]), and $[\cdot\,,\cdot]$ denotes an interval. $C([-\tau,0],\mathbb{R}^n)$ is the Banach space of all continuous functions mapping $[-\tau,0]$ into $\mathbb{R}^n$, where $\tau=\max_{1\le i,j\le n}\{\tau_{ij}(t)\}$. For a vector $v=(v_1,v_2,\dots,v_n)^T\in\mathbb{R}^n$, $|v|$ denotes the Euclidean norm, i.e., $|v|=[\sum_{i=1}^{n}v_i^2]^{1/2}$. Let $\bar d_i=\max\{d_i^{*},d_i^{**}\}$, $\underline d_i=\min\{d_i^{*},d_i^{**}\}$, $\bar a_{ij}=\max\{a_{ij}^{*},a_{ij}^{**}\}$, $\underline a_{ij}=\min\{a_{ij}^{*},a_{ij}^{**}\}$, $\bar b_{ij}=\max\{b_{ij}^{*},b_{ij}^{**}\}$, $\underline b_{ij}=\min\{b_{ij}^{*},b_{ij}^{**}\}$, for $i,j=1,2,\dots,n$. $co[\underline\xi_i,\bar\xi_i]$ denotes the convex hull of $[\underline\xi_i,\bar\xi_i]$; clearly, in this paper $[\underline\xi_i,\bar\xi_i]=co[\underline\xi_i,\bar\xi_i]$. Matrices are written $D=\mathrm{diag}(\underline d_i)_{n\times n}$, $\hat A=(\hat a_{ij})_{n\times n}$, $\hat B=(\hat b_{ij})_{n\times n}$. For a continuous function $k(t):\mathbb{R}\to\mathbb{R}$, $D^+k(t)$ is the upper right Dini derivative, defined as $D^+k(t)=\limsup_{h\to0^+}(1/h)(k(t+h)-k(t))$. The initial condition of (1) is $x(s)=\phi(s)=(\phi_1(s),\phi_2(s),\dots,\phi_n(s))^T\in C([-\tau,0],\mathbb{R}^n)$.

We first make the following assumption on system (1).

Assumption 1. For $i=1,2,\dots,n$ and all $s_1,s_2\in\mathbb{R}$ with $s_1\neq s_2$, the neuron activation functions $f_i$, $g_i$ in (1) are bounded and satisfy

$$0\le\frac{f_i(s_1)-f_i(s_2)}{s_1-s_2}\le\sigma_i,\qquad 0\le\frac{g_i(s_1)-g_i(s_2)}{s_1-s_2}\le\rho_i,\qquad(2)$$

where $\sigma_i$, $\rho_i$ are nonnegative constants.

Now, as in the literature [7–9], applying the theories of set-valued maps and differential inclusions [21–23] to (1), we have

$$\frac{dx_i(t)}{dt}\in-co[\underline d_i,\bar d_i]x_i(t)+\sum_{j=1}^{n}co[\underline a_{ij},\bar a_{ij}]f_j(x_j(t))+\sum_{j=1}^{n}co[\underline b_{ij},\bar b_{ij}]g_j(x_j(t-\tau_{ij}(t)))+I_i,\quad t\ge0,\qquad(3)$$

or equivalently, for $i,j=1,2,\dots,n$, there exist $\hat d_i\in co[\underline d_i,\bar d_i]$, $\hat a_{ij}\in co[\underline a_{ij},\bar a_{ij}]$, $\hat b_{ij}\in co[\underline b_{ij},\bar b_{ij}]$, such that

$$\frac{dx_i(t)}{dt}=-\hat d_ix_i(t)+\sum_{j=1}^{n}\hat a_{ij}f_j(x_j(t))+\sum_{j=1}^{n}\hat b_{ij}g_j(x_j(t-\tau_{ij}(t)))+I_i,\quad t\ge0,\ i=1,2,\dots,n.\qquad(4)$$

Definition 1. A constant vector $x^{*}=(x_1^{*},x_2^{*},\dots,x_n^{*})^T$ is called an equilibrium point of network (1) if, for $i,j=1,2,\dots,n$,

$$0\in-co[\underline d_i,\bar d_i]x_i^{*}+\sum_{j=1}^{n}co[\underline a_{ij},\bar a_{ij}]f_j(x_j^{*})+\sum_{j=1}^{n}co[\underline b_{ij},\bar b_{ij}]g_j(x_j^{*})+I_i,\qquad(5)$$

or equivalently, there exist $\hat d_i\in co[\underline d_i,\bar d_i]$, $\hat a_{ij}\in co[\underline a_{ij},\bar a_{ij}]$, $\hat b_{ij}\in co[\underline b_{ij},\bar b_{ij}]$, such that

$$-\hat d_ix_i^{*}+\sum_{j=1}^{n}\hat a_{ij}f_j(x_j^{*})+\sum_{j=1}^{n}\hat b_{ij}g_j(x_j^{*})+I_i=0.$$

If $x^{*}=(x_1^{*},x_2^{*},\dots,x_n^{*})^T$ is an equilibrium point of network (1), then letting $y_i(t)=x_i(t)-x_i^{*}$, $i=1,2,\dots,n$, we have

$$\frac{dy_i(t)}{dt}\in-co[\underline d_i,\bar d_i]y_i(t)+\sum_{j=1}^{n}co[\underline a_{ij},\bar a_{ij}]\tilde f_j(y_j(t))+\sum_{j=1}^{n}co[\underline b_{ij},\bar b_{ij}]\tilde g_j(y_j(t-\tau_{ij}(t))),$$

or equivalently, there exist $\hat d_i\in co[\underline d_i,\bar d_i]$, $\hat a_{ij}\in co[\underline a_{ij},\bar a_{ij}]$, $\hat b_{ij}\in co[\underline b_{ij},\bar b_{ij}]$, such that

$$\frac{dy_i(t)}{dt}=-\hat d_iy_i(t)+\sum_{j=1}^{n}\hat a_{ij}\tilde f_j(y_j(t))+\sum_{j=1}^{n}\hat b_{ij}\tilde g_j(y_j(t-\tau_{ij}(t))),\qquad(6)$$

where $\tilde f_j(y_j(t))=f_j(y_j(t)+x_j^{*})-f_j(x_j^{*})$ and $\tilde g_j(y_j(t))=g_j(y_j(t)+x_j^{*})-g_j(x_j^{*})$.

Definition 2. Let $x^{*}$ be an equilibrium point of system (1). The equilibrium point $x^{*}$ is said to be globally exponentially stable if there exist positive constants $\beta$ and $m$ such that every solution $x(t)$ of (1) with initial condition $\phi(s)=(\phi_1(s),\phi_2(s),\dots,\phi_n(s))^T\in C([-\tau,0],\mathbb{R}^n)$ satisfies

$$|x(t)-x^{*}|\le m e^{-\beta t}\sup_{-\tau\le s\le0}|\phi(s)-x^{*}|\quad\text{for all }t>0.$$

To derive the global exponential stability conditions, the following lemmas are needed.

Lemma 1 (Filippov [21]). Under Assumption 1, there is at least a local solution $x(t)=(x_1(t),x_2(t),\dots,x_n(t))^T$ of system (1) with initial condition $\phi(s)=(\phi_1(s),\phi_2(s),\dots,\phi_n(s))^T$, $s\in[-\tau,0]$, which is essentially bounded. Moreover, this local solution can be extended to the interval $[0,+\infty)$ in the sense of Filippov.

Under Assumption 1, since $d_i^{*}>0$, $d_i^{**}>0$ and $a_{ij}^{*}$, $a_{ij}^{**}$, $b_{ij}^{*}$, $b_{ij}^{**}$, $I_i$ are all constants, it follows from [21,24] that, in order to study the memristor-based recurrent neural network (1), we may turn to the qualitative analysis of the relevant differential inclusion (3).

Lemma 2 (Zhou and Cao [20]). Let $a,b$ be constants with $0<b<a$, and let $x(t)$ be a continuous nonnegative function on $t\ge t_0-\tau$ satisfying, for $t\ge t_0$,

$$D^+x(t)\le-ax(t)+b\bar x(t),$$

where $\bar x(t)\triangleq\sup_{t-\tau\le s\le t}\{x(s)\}$. Then $x(t)\le\bar x(t_0)e^{-\varepsilon(t-t_0)}$, where $\varepsilon$ is a bound on the exponential convergence rate and is the unique positive solution of $\varepsilon=a-be^{\varepsilon\tau}$.

Lemma 3. Under Assumption 1, system (1) has at least one equilibrium point.

Proof. By Assumption 1, the neuron activation functions $f_i$, $g_i$ in (1) are Lipschitz continuous and bounded. Since $d_i^{*}>0$, $d_i^{**}>0$ and $a_{ij}^{*}$, $a_{ij}^{**}$, $b_{ij}^{*}$, $b_{ij}^{**}$, $I_i$ are all constants, system (4) clearly has an equilibrium point; hence the differential inclusion (3), and therefore system (1), has at least one equilibrium point. □

In the following section we analyze the global exponential stability of system (1).
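The convergence rate $\varepsilon$ in Lemma 2 is defined only implicitly, as the unique positive root of $\varepsilon=a-be^{\varepsilon\tau}$. Since $h(\varepsilon)=a-be^{\varepsilon\tau}-\varepsilon$ is strictly decreasing with $h(0)=a-b>0$, the root is easy to bracket and bisect; a minimal sketch of ours, assuming $0<b<a$:

```python
import math

def conv_rate(a, b, tau, tol=1e-12):
    """Unique positive root of h(eps) = a - b*exp(eps*tau) - eps (0 < b < a)."""
    h = lambda eps: a - b * math.exp(eps * tau) - eps
    lo, hi = 0.0, 1.0
    while h(hi) > 0.0:          # expand until a sign change brackets the root
        hi *= 2.0
    while hi - lo > tol:        # plain bisection: h is strictly decreasing
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(conv_rate(a=1.0, b=0.5, tau=2.0))  # illustrative values; approx. 0.22
```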

3. Main results

The main results of the paper are given in the following theorems.

Theorem 1. Let $x^{*}$ be an equilibrium point of system (1) and $0\le\tau_{ij}(t)\le\tau$. Then, under Assumption 1, the equilibrium point $x^{*}$ of system (1) is globally exponentially stable if

$$-\frac12\min_{1\le i\le n}\{\underline d_i\}+\sum_{j=1}^{n}\sum_{k=1}^{n}\frac{1}{\underline d_j}\bigl(\hat a_{jk}^2\sigma_k^2+\hat b_{jk}^2\rho_k^2\bigr)<0,\qquad(7)$$

where $\hat a_{jk}=\max\{|\underline a_{jk}|,|\bar a_{jk}|\}$, $\hat b_{jk}=\max\{|\underline b_{jk}|,|\bar b_{jk}|\}$, $i,j,k=1,2,\dots,n$.

Proof. To analyze the exponential stability of the equilibrium point $x^{*}$ of system (1), we turn to the equivalent system (5) or (6). Choose the Lyapunov functional

$$V(y,t)=\sum_{i=1}^{n}\frac12y_i^2(t).\qquad(8)$$

Computing the upper right derivative of $V(y,t)$ along the solutions of system (5) or (6), we derive

$$D^+V(y,t)\big|_{(5)\,\mathrm{or}\,(6)}=\sum_{i=1}^{n}y_i(t)y_i'(t)\in\sum_{i=1}^{n}y_i(t)\Bigl\{-co[\underline d_i,\bar d_i]y_i(t)+\sum_{j=1}^{n}co[\underline a_{ij},\bar a_{ij}]\tilde f_j(y_j(t))+\sum_{j=1}^{n}co[\underline b_{ij},\bar b_{ij}]\tilde g_j(y_j(t-\tau_{ij}(t)))\Bigr\}.\qquad(9)$$

According to (2), it is easily checked that the transformed neuron activation functions satisfy

$$0\le\sup_{y_i\neq0}\frac{\tilde f_i(y_i)}{y_i}\le\sigma_i,\qquad 0\le\sup_{y_i\neq0}\frac{\tilde g_i(y_i)}{y_i}\le\rho_i,\qquad(10)$$

where $i=1,2,\dots,n$ and $\sigma_i$, $\rho_i$ are nonnegative constants. By (9) and (10), we get

$$D^+V(y,t)\big|_{(5)\,\mathrm{or}\,(6)}\le\sum_{i=1}^{n}\Bigl\{-\underline d_iy_i^2(t)+|y_i(t)|\sum_{j=1}^{n}\hat a_{ij}\sigma_j|y_j(t)|+|y_i(t)|\sum_{j=1}^{n}\hat b_{ij}\rho_j|y_j(t-\tau_{ij}(t))|\Bigr\}.$$

By the mean-value inequality $uv\le\frac{\underline d_i}{4}u^2+\frac{1}{\underline d_i}v^2$, splitting $-\underline d_iy_i^2(t)$ into two halves gives

$$D^+V(y,t)\big|_{(5)\,\mathrm{or}\,(6)}\le\sum_{i=1}^{n}\Bigl\{-\frac12\underline d_iy_i^2(t)+\frac{1}{\underline d_i}\Bigl(\sum_{j=1}^{n}\hat a_{ij}\sigma_j|y_j(t)|\Bigr)^2+\frac{1}{\underline d_i}\Bigl(\sum_{j=1}^{n}\hat b_{ij}\rho_j|y_j(t-\tau_{ij}(t))|\Bigr)^2\Bigr\}.\qquad(11)$$

By the Cauchy–Schwarz inequality, we obtain

$$\Bigl(\sum_{j=1}^{n}\hat a_{ij}\sigma_j|y_j(t)|\Bigr)^2\le\sum_{k=1}^{n}\hat a_{ik}^2\sigma_k^2\sum_{j=1}^{n}y_j^2(t)\qquad(12)$$

and

$$\Bigl(\sum_{j=1}^{n}\hat b_{ij}\rho_j|y_j(t-\tau_{ij}(t))|\Bigr)^2\le\sum_{k=1}^{n}\hat b_{ik}^2\rho_k^2\sum_{j=1}^{n}y_j^2(t-\tau_{ij}(t)).\qquad(13)$$

It follows from (11)–(13), in view of $0\le\tau_{ij}(t)\le\tau$, $i,j=1,2,\dots,n$, that

$$D^+V(y,t)\big|_{(5)\,\mathrm{or}\,(6)}\le\sum_{i=1}^{n}\Bigl[-\frac12\underline d_i+\sum_{j=1}^{n}\sum_{k=1}^{n}\frac{1}{\underline d_j}\hat a_{jk}^2\sigma_k^2\Bigr]y_i^2(t)+\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n}\frac{1}{\underline d_j}\hat b_{jk}^2\rho_k^2\,y_i^2(t-\tau_{ji}(t))\le-aV(y,t)+b\sup_{t-\tau\le s\le t}V(s),\qquad(14)$$

where

$$a=\min_{1\le i\le n}\{\underline d_i\}-2\sum_{j=1}^{n}\sum_{k=1}^{n}\frac{1}{\underline d_j}\hat a_{jk}^2\sigma_k^2,\qquad b=2\sum_{j=1}^{n}\sum_{k=1}^{n}\frac{1}{\underline d_j}\hat b_{jk}^2\rho_k^2.\qquad(15)$$

By (7), we obtain $a>b>0$. Thus, by Lemma 2, there exists $\varepsilon>0$ such that

$$V(y,t)\le\bar V(0)e^{-\varepsilon t},\quad t>0,\qquad(16)$$

where $\bar V(t)=\sup_{t-\tau\le s\le t}V(s)$. That is,

$$\sum_{i=1}^{n}y_i^2(t)\le e^{-\varepsilon t}\sup_{-\tau\le s\le0}|\phi(s)-x^{*}|^2,\quad t>0.$$

Because $y_i=x_i-x_i^{*}$, we have

$$\sum_{i=1}^{n}(x_i(t)-x_i^{*})^2\le e^{-\varepsilon t}\sup_{-\tau\le s\le0}|\phi(s)-x^{*}|^2,\quad t>0.\qquad(17)$$

Hence, for all $t>0$, the equilibrium point $x^{*}$ of system (1) is globally exponentially stable. This completes the proof. □

Theorem 2. Let $x^{*}$ be an equilibrium point of system (1) and $0\le\tau_{ij}(t)\le\tau$. Then, under Assumption 1, the equilibrium point $x^{*}$ of system (1) is globally exponentially stable if

$$\min_{1\le i\le n}\Bigl\{\frac12\underline d_i-\sum_{j=1}^{n}\frac{\hat a_{ji}\sigma_i^2}{\underline d_j\gamma_i}\Bigr\}>\frac{1}{\min_{1\le i\le n}\{\gamma_i\}}\sum_{j=1}^{n}\sum_{k=1}^{n}\frac{\gamma_j\hat b_{jk}^2\rho_k^2}{\underline d_j}>0,\qquad(18)$$

where the constants $\gamma_i>0$ satisfy $\sum_{j=1}^{n}\gamma_i\hat a_{ij}=1$, and $\hat a_{ji}=\max\{|\underline a_{ji}|,|\bar a_{ji}|\}$, $\hat b_{jk}=\max\{|\underline b_{jk}|,|\bar b_{jk}|\}$, $i,j,k=1,2,\dots,n$.
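Both criteria are directly computable from the network data $D$, $\hat A$, $\hat B$, $\sigma$, $\rho$. As a convenience (our own check code, not part of the original derivation), condition (7) can be evaluated as follows; run on the example of Section 4, it returns $-0.1548$, matching (30).

```python
import numpy as np

def theorem1_condition(d_lo, a_hat, b_hat, sigma, rho):
    """Left-hand side of (7); the criterion of Theorem 1 holds iff it is < 0."""
    lhs = -0.5 * d_lo.min() + np.sum(
        (a_hat**2 * sigma**2 + b_hat**2 * rho**2) / d_lo[:, None])
    return lhs

# Data of the example in Section 4:
d_lo = np.array([1.0, 1.0])
a_hat = np.array([[1/6, 1/5], [1/5, 1/8]])
b_hat = np.array([[1/4, 1/6], [1/7, 1/3]])
sigma = rho = np.ones(2)
print(theorem1_condition(d_lo, a_hat, b_hat, sigma, rho))  # -0.1548 < 0
```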


Proof. Again we turn to the equivalent system (5) or (6), and choose the Lyapunov functional

$$V(y,t)=\sum_{i=1}^{n}\frac{\gamma_i}{2}y_i^2(t).\qquad(19)$$

Computing the upper right derivative of $V(y,t)$ along the solutions of system (5) or (6), and using (10) and the mean-value inequality as in the proof of Theorem 1, we derive

$$D^+V(y,t)\big|_{(5)\,\mathrm{or}\,(6)}=\sum_{i=1}^{n}\gamma_iy_i(t)y_i'(t)\le\sum_{i=1}^{n}\Bigl\{-\frac12\underline d_i\gamma_iy_i^2(t)+\frac{1}{\underline d_i\gamma_i}\Bigl(\sum_{j=1}^{n}\gamma_i\hat a_{ij}\sigma_j|y_j(t)|\Bigr)^2+\frac{1}{\underline d_i\gamma_i}\Bigl(\sum_{j=1}^{n}\gamma_i\hat b_{ij}\rho_j|y_j(t-\tau_{ij}(t))|\Bigr)^2\Bigr\}.\qquad(20)$$

By the Cauchy–Schwarz inequality and $\sum_{j=1}^{n}\gamma_i\hat a_{ij}=1$, we obtain

$$\Bigl(\sum_{j=1}^{n}\gamma_i\hat a_{ij}\sigma_j|y_j(t)|\Bigr)^2\le\sum_{j=1}^{n}\gamma_i\hat a_{ij}\sigma_j^2y_j^2(t)\qquad(21)$$

and

$$\Bigl(\sum_{j=1}^{n}\gamma_i\hat b_{ij}\rho_j|y_j(t-\tau_{ij}(t))|\Bigr)^2\le\sum_{k=1}^{n}\gamma_i^2\hat b_{ik}^2\rho_k^2\sum_{j=1}^{n}y_j^2(t-\tau_{ij}(t)).\qquad(22)$$

It follows from (20)–(22), in view of $0\le\tau_{ij}(t)\le\tau$, $i,j=1,2,\dots,n$, that

$$D^+V(y,t)\big|_{(5)\,\mathrm{or}\,(6)}\le\sum_{i=1}^{n}\Bigl[-\frac12\underline d_i+\sum_{j=1}^{n}\frac{\hat a_{ji}\sigma_i^2}{\underline d_j\gamma_i}\Bigr]\gamma_iy_i^2(t)+\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n}\frac{\gamma_j}{\underline d_j}\hat b_{jk}^2\rho_k^2\,y_i^2(t-\tau_{ji}(t))\le-aV(y,t)+b\sup_{t-\tau\le s\le t}V(s),\qquad(23)$$

where

$$a=2\min_{1\le i\le n}\Bigl\{\frac12\underline d_i-\sum_{j=1}^{n}\frac{\hat a_{ji}\sigma_i^2}{\underline d_j\gamma_i}\Bigr\},\qquad b=\frac{2}{\min_{1\le i\le n}\{\gamma_i\}}\sum_{j=1}^{n}\sum_{k=1}^{n}\frac{\gamma_j\hat b_{jk}^2\rho_k^2}{\underline d_j}.\qquad(24)$$

By (18), we obtain $a>b>0$. Thus, by Lemma 2, there exists $\varepsilon>0$ such that

$$V(y,t)\le\bar V(0)e^{-\varepsilon t}\le\frac12\max_{1\le i\le n}\{\gamma_i\}e^{-\varepsilon t}\sup_{-\tau\le s\le0}|\phi(s)-x^{*}|^2,\quad t>0,\qquad(25)$$

where $\bar V(t)=\sup_{t-\tau\le s\le t}V(s)$. Since $V(y,t)\ge\frac12\min_{1\le i\le n}\{\gamma_i\}\sum_{i=1}^{n}y_i^2(t)$, this yields

$$\sum_{i=1}^{n}y_i^2(t)\le Me^{-\varepsilon t}\sup_{-\tau\le s\le0}|\phi(s)-x^{*}|^2,\quad t>0,\qquad(26)$$

where $M=\max_{1\le i\le n}\{\gamma_i\}/\min_{1\le i\le n}\{\gamma_i\}\ge1$. Because $y_i=x_i-x_i^{*}$, we have

$$\sum_{i=1}^{n}(x_i(t)-x_i^{*})^2\le Me^{-\varepsilon t}\sup_{-\tau\le s\le0}|\phi(s)-x^{*}|^2,\quad t>0.\qquad(27)$$

Hence, for all $t>0$, the equilibrium point $x^{*}$ of system (1) is globally exponentially stable. This completes the proof. □

Theorem 3. Let $x^{*}$ be an equilibrium point of system (1) and $0\le\tau_{ij}(t)\le\tau$. Then, under Assumption 1, the equilibrium point $x^{*}$ of system (1) is globally exponentially stable if

$$-\min_{1\le i\le n}\Bigl\{\frac12\underline d_i-\sum_{j=1}^{n}\frac{\hat a_{ji}^2\sigma_i^2}{\underline d_j}\Bigr\}+\sum_{j=1}^{n}\sum_{k=1}^{n}\frac{\hat b_{jk}^2\rho_k^2}{\underline d_j}<0,\qquad(28)$$

where $\hat a_{ji}=\max\{|\underline a_{ji}|,|\bar a_{ji}|\}$, $\hat b_{jk}=\max\{|\underline b_{jk}|,|\bar b_{jk}|\}$, $i,j,k=1,2,\dots,n$. The proof is similar to that of Theorem 1 and is omitted.

Remark 1. Theorems 1–3 also imply the uniqueness of the equilibrium point of system (5) or (6), that is, the uniqueness of the equilibrium point of system (1).

4. Numerical examples

We now present numerical simulations, carried out in MATLAB 7.0, to illustrate the analysis.

System 1. Consider the two-dimensional memristor-based recurrent neural network

$$\begin{cases}\dfrac{dx_1(t)}{dt}=-d_1(x_1)x_1(t)+a_{11}(x_1)f_1(x_1(t))+a_{12}(x_1)f_2(x_2(t))+b_{11}(x_1)g_1(x_1(t-\tau_1(t)))+b_{12}(x_1)g_2(x_2(t-\tau_2(t)))+I_1,\\[2mm]\dfrac{dx_2(t)}{dt}=-d_2(x_2)x_2(t)+a_{21}(x_2)f_1(x_1(t))+a_{22}(x_2)f_2(x_2(t))+b_{21}(x_2)g_1(x_1(t-\tau_1(t)))+b_{22}(x_2)g_2(x_2(t-\tau_2(t)))+I_2,\end{cases}\qquad(29)$$

where

$$d_1(x_1)=\begin{cases}1.2,&|x_1(t)|<1,\\1,&|x_1(t)|>1,\end{cases}\qquad d_2(x_2)=\begin{cases}1,&|x_2(t)|<1,\\1.2,&|x_2(t)|>1,\end{cases}$$

$$a_{11}(x_1)=\begin{cases}\tfrac16,&|x_1(t)|<1,\\-\tfrac16,&|x_1(t)|>1,\end{cases}\qquad a_{12}(x_1)=\begin{cases}\tfrac15,&|x_1(t)|<1,\\-\tfrac15,&|x_1(t)|>1,\end{cases}$$

$$a_{21}(x_2)=\begin{cases}\tfrac15,&|x_2(t)|<1,\\-\tfrac15,&|x_2(t)|>1,\end{cases}\qquad a_{22}(x_2)=\begin{cases}\tfrac18,&|x_2(t)|<1,\\-\tfrac18,&|x_2(t)|>1,\end{cases}$$

$$b_{11}(x_1)=\begin{cases}\tfrac14,&|x_1(t)|<1,\\-\tfrac14,&|x_1(t)|>1,\end{cases}\qquad b_{12}(x_1)=\begin{cases}\tfrac16,&|x_1(t)|<1,\\-\tfrac16,&|x_1(t)|>1,\end{cases}$$

$$b_{21}(x_2)=\begin{cases}\tfrac17,&|x_2(t)|<1,\\-\tfrac17,&|x_2(t)|>1,\end{cases}\qquad b_{22}(x_2)=\begin{cases}\tfrac13,&|x_2(t)|<1,\\-\tfrac13,&|x_2(t)|>1.\end{cases}$$

Take $\tau_1(t)=1.8+0.5\sin t$, $\tau_2(t)=2-0.8\cos t$, $I=(I_1,I_2)^T$, and the activation functions

$$f_1(x)=f_2(x)=g_1(x)=g_2(x)=\tfrac12(|x+1|-|x-1|).$$

The parameters are then

$$D=\begin{pmatrix}1&0\\0&1\end{pmatrix},\qquad\hat A=\begin{pmatrix}\tfrac16&\tfrac15\\[1mm]\tfrac15&\tfrac18\end{pmatrix},\qquad\hat B=\begin{pmatrix}\tfrac14&\tfrac16\\[1mm]\tfrac17&\tfrac13\end{pmatrix}.$$

For Theorems 1 and 3 we can take $\sigma_i=\rho_i=1$, $i=1,2$, and from (7) and (28) we obtain

$$-\frac12\min_{1\le i\le2}\{\underline d_i\}+\sum_{j=1}^{2}\sum_{k=1}^{2}\frac{1}{\underline d_j}\bigl(\hat a_{jk}^2\sigma_k^2+\hat b_{jk}^2\rho_k^2\bigr)=-0.1548<0\qquad(30)$$

and

$$-\min_{1\le i\le2}\Bigl\{\frac12\underline d_i-\sum_{j=1}^{2}\frac{\hat a_{ji}^2\sigma_i^2}{\underline d_j}\Bigr\}+\sum_{j=1}^{2}\sum_{k=1}^{2}\frac{\hat b_{jk}^2\rho_k^2}{\underline d_j}=-0.2104<0.\qquad(31)$$

Thus, according to Theorem 1 or Theorem 3, the equilibrium point $x^{*}$ of (29) is globally exponentially stable. The simulation results of network (29) with 15 initial values (randomly chosen in a bounded interval) are depicted in Fig. 1 for external input $I=(0,0)^T$ and in Fig. 2 for external input $I=(9,10)^T$.
In this paper, basing on the previous work [7–9,18,20,24], we made an effort to deal with the problem of global exponential stability analysis for a class of general memristor-based recurrent neural networks with time-varying delays. To investigate the dynamic properties of the switching system, under the framework of Filippov’s solution, we can turn to the qualitative analysis of a relevant differential inclusion which is upper semicontinuous and easier to handle. The algebraic conditions concerning global exponential stability obtained in this paper are new and do not require the activation functions to be differentiable, the connection weight matrices to be symmetric and the delay functions to be differentiable, which are different from the previous works [7–9]. A numerical example is given to illustrate effectiveness of the proposed theory. The derived analysis in this paper maybe further extended to provide a basis for understanding the characteristics of memristor devices in a wider range of applications.

150

100

50 States

0

Fig. 2. Transient behavior of the DRNN (29) with input I ¼ ð9,10ÞT .

 18 , (1

b12 ðx1 Þ ¼

−100

0

−50

−100

Acknowledgments

−150

0

1

2

3 time T

4

5

Fig. 1. Transient behavior of the DRNN (29) with input I ¼ ð0; 0ÞT .

6

The authors gratefully acknowledge anonymous referees’ comments and patient work. This work is supported by the National Science Foundation of China under Grant no. 60974021 and

154

G. Zhang et al. / Neurocomputing 97 (2012) 149–154

Key Program of National Natural Science Foundation of China (No. 61134012). References [1] L.O. Chua, Memristor-the missing circuit element, IEEE Trans. Circuit Theory CT-18 (1971) 507–519. [2] L.O. Chua, S.M. Kang, Memristive devices and systems, Proc. IEEE 64 (1976) 209–223. [3] D.B. Strukov, G.S. Snider, G.R. Stewart, R.S. Williams, The missing memristor found, Nature 453 (2008) 80–83. [4] J.M. Tour, T. He, The fourth element, Nature 453 (2008) 42–43. [5] M. Itoh, L.O. Chua, Memristor oscillators, Int. J. Bifur. Chaos 18 (2008) 3183–3206. [6] M. Itoh, L.O. Chua, Memristor cellular automata and memristor discrete-time cellular neural networks, Int. J. Bifur. Chaos 19 (2009) 3605–3656. [7] A.L. Wu, J. Zhang, Z.G. Zeng, Dynamic behaviors of a class of memristor-based Hopfield networks, Phys. Lett. A 375 (2011) 1661–1665. [8] A.L. Wu, Z.G. Zeng, X.S. Zhu, J. Zhang, Exponential synchronization of memristor-based recurrent neural networks with time delays, Neurocomputing 74 (2011) 3043–3050. [9] A.L. Wu, S.P. Wen, Z.G. Zeng, Synchronization control of a class of memristorbased recurrent neural networks, Inf. Sci. 183 (2012) 106–116. [10] F. Merrikh-Bayat, S.B. Shouraki, Memristor-based circuits for performing basic arithmetic operations, Procedia Comput. Sci. 3 (2011) 128–132. [11] A.H. Wan, J.G. Peng, M.S. Wang, Exponential stability of a class of generalized neural networks with time-varying delays, Neurocomputing 69 (2006) 959–963. [12] L. Hu, H.J. Gao, W.X. Zheng, Novel stability of cellular neural networks with interval time-varying delay, Neural Networks 21 (2008) 1458–1463. [13] R.L. Wang, Z. Tang, Q.P. Cao, A learning method in Hopfield neural network for combinatorial optimization problem, Neurocomputing 48 (2002) 1021–1024. [14] L. Zhang, Z. Yi, Selectable and unselectable sets of neurons in recurrent neural networks with saturated piecewise linear transfer function, IEEE Trans. Neural Networks 22 (2011) 1021–1031. [15] S.Y. Xu, Y.M. Chu, J.W. Lu, New results on global exponential stability of recurrent neural networks with time-varying delays, Phys. Lett. A 352 (2006) 371–379. [16] L. Zhang, Z. Yi, S.L. Zhang, P.A. Heng, Activity invariant sets and exponentially stable attractors of linear threshold discrete-time recurrent neural networks, IEEE Trans. Autom. Control 54 (2009) 1341–1347. [17] L. Zhang, Z. Yi, J.L. Yu, Multiperiodicity and attractivity of delayed recurrent neural networks with unsaturating piecewise linear transfer functions, IEEE Trans. Neural Networks 19 (2008) 158–167. [18] H.G. Zhang, G. Wang, New criteria of global exponential stability for a class of generalized neural networks with time-varying delays, Neurocomputing 70 (2007) 2486–2494. [19] Y. Shen, J. Wang, An improved algebraic criterion for global exponential stability of recurrent neural networks with time-varying delays, IEEE Trans. Neural Networks 19 (2008) 528–531. [20] D.M. Zhou, J.D. Cao, Globally exponential stability conditions for cellular neural networks with time-varying delays, Appl. Math. Comput. 131 (2–3) (2002) 487–496. [21] A.F. Filippov, Differential Equations with Discontinuous Right-hand Sides, Kluwer, Dordrecht, 1988.

[22] A.N. Michel, L. Hou, D. Liu, Stability of Dynamical Systems: Continuous, Discontinuous and Discrete Systems, Birkhauser, Boston, MA, 2007. [23] A. Bacciotti, L. Rosier, in: Lyapunov Functions and Stability in Control Theory, Lecture Notes in Control and Information Science, vol. 267, Springer, 2005. [24] J. Hu, J. Wang, Global uniform asymptotic stability of memristor-based recurrent neural networks with time delays, in: 2010 International Joint Conference on Neural Networks, IJCNN 2010, Barcelona, Spain, 2010, pp. 1–8.

Guodong Zhang received his B.S. degrees from Huanghuai University, Zhumadian, China, and his M.S. degrees in Applied Mathematics from Hubei Normal University, Huangshi, China, in 2008 and 2011, respectively. He is currently a doctoral candidate in the Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan, China, and also in the Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan, China. His current research interests include stability analysis of dynamic systems, neural networks and memristors.

Yi Shen received the M.S. degree in Applied Mathematics and the Ph.D. degree in Systems Engineering from the Huazhong University of Science and Technology, Wuhan, China, in 1995 and 1998, respectively. He was a Post-Doctoral Fellow with the Huazhong University of Science and Technology from July 1999 to June 2001. He is currently a Professor with the Department of Control Science and Engineering with the same university. He has published over 30 international journal papers. His current research interests include the areas of neural networks, nonlinear stochastic systems, and memristors.

Junwei Sun received his B.S. degree in School of Mathematics and Information Science, Shangqiu Normal University, Shangqiu, China, and his M.S. degree in College of Electrical and Electronic Engineering, Zhengzhou University of Light Industry, Zhengzhou, China, in 2008 and 2011. He is currently a doctoral candidate in the Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan, China, and also in the Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Wuhan, China. His main research interest lies in the fields of intelligent control and DNA computation.