Commun Nonlinear Sci Numer Simulat 19 (2014) 3857–3879
Almost periodic dynamical behaviors for generalized Cohen–Grossberg neural networks with discontinuous activations via differential inclusions

Dongshu Wang (a,b), Lihong Huang (a,c,*)

(a) College of Mathematics and Econometrics, Hunan University, 410082 Changsha, Hunan, PR China
(b) School of Mathematical Sciences, Huaqiao University, 362021 Quanzhou, Fujian, PR China
(c) Hunan Women's University, 410004 Changsha, Hunan, PR China
Article history: Received 29 July 2013; Received in revised form 11 February 2014; Accepted 12 February 2014; Available online 1 March 2014.

Keywords: Almost periodic solution; Global exponential stability; Cohen–Grossberg neural networks; Discontinuous activation; Differential inclusion
Abstract. In this paper, we investigate the almost periodic dynamical behaviors of a class of general Cohen–Grossberg neural networks with discontinuous right-hand sides and with time-varying and distributed delays. By means of retarded differential inclusion theory and nonsmooth analysis with a generalized Lyapunov approach, we obtain the existence, uniqueness and global stability of the almost periodic solution of the neural network system. It is worth pointing out that our results remain valid without assuming boundedness or monotonicity of the discontinuous neuron activation functions. Finally, we give some numerical examples to show the applicability and effectiveness of our main results. © 2014 Elsevier B.V. All rights reserved.
1. Introduction

Recurrently connected neural networks have been successfully applied in various science and engineering fields such as classification, signal processing, pattern recognition, parallel computation and complicated optimization problems [1–4]. Such applications rely heavily on the dynamical behaviors of the networks, so for practical design it is important and necessary to analyze these behaviors. For example, periodic oscillations in recurrent neural networks have found many applications, such as associative memories [5], pattern recognition [6,7], machine learning [8,9] and robot motion control [10]. Thus it is interesting to study the existence and stability of periodic solutions of neural networks. Strictly speaking, a periodic system requires its parameters to be exactly periodic for all time. From the long-term dynamical point of view, however, the parameters of a dynamical system often experience uncertain perturbations; that is, the parameters can be regarded as periodic only up to a small error. In this circumstance, almost periodic oscillatory behavior is more accordant with reality. Almost periodic neural networks are a natural extension of periodic ones, just as almost periodic functions generalize periodic functions. Because of the importance of the almost periodic property, much effort has been devoted to studying the dynamical behaviors of recurrently connected neural networks with almost periodic parameters [11–16].
Project is supported by the Natural Science Foundation of China (11371127).
* Corresponding author at: College of Mathematics and Econometrics, Hunan University, 410082 Changsha, Hunan, PR China.
E-mail addresses: [email protected] (D. Wang), [email protected] (L. Huang).
http://dx.doi.org/10.1016/j.cnsns.2014.02.016
1007-5704/© 2014 Elsevier B.V. All rights reserved.
However, the results of the papers mentioned above rest on the assumption that the activation functions are bounded, continuous, globally Lipschitz, or even smooth (continuously differentiable). Recently, neural network systems with discontinuous neuron activations have received much attention, since Forti and Nistri dealt with the global stability of a neural network modeled by a differential equation with a discontinuous right-hand side [17]. Moreover, as pointed out by Forti and Nistri [17], neural network systems with discontinuous neuron activations are important and frequently arise in applications. Thus it is of practical importance to investigate the dynamical behaviors of discontinuous neural network systems, and much effort has been devoted to analyzing the dynamical behavior of neural networks with discontinuous activations [18–31], such as equilibria, periodic oscillations, almost periodic oscillations and stability. Considering the practical importance of almost periodic phenomena, Allegretto et al. [32] proved the common asymptotic behavior of almost periodic solutions for discontinuous, delayed and impulsive neural networks. In [33,34], the authors investigated the existence, uniqueness and global stability of almost periodic solutions of delayed Hopfield neural network systems with discontinuous activations. In [35], based on retarded differential inclusions, Wang and Huang proved the existence, uniqueness and global stability of periodic and almost periodic solutions of delayed Cohen–Grossberg neural network systems with discontinuous activations. However, all the discussions in these papers rest on the assumption that the discontinuous neuron activation functions are monotone nondecreasing. It is worth pointing out that the detailed shape of the activation does not matter very much, as long as its growth is dominated by the passive decay that stabilizes the model [36].
Hence a more practical and interesting neural network model is one in which the discontinuous activation functions need not be monotone nondecreasing. One contribution of this paper is that, dropping the monotonicity assumptions on the discontinuous activation functions, we study the global exponential stability of almost periodic solutions of neural network systems. On the other hand, time delays are often inevitable in practical applications of neural networks, owing to the finite switching speed of amplifiers and to communication time, for example in control, image processing, pattern recognition, signal processing and associative memory. According to the way they occur, time delays fall into two types: discrete delays and distributed delays. In reality, discrete (time-varying) delays and distributed delays always occur simultaneously. For more on the practical design and application of time-delayed neural networks we refer to [35–46]. Some works analyze the dynamical behavior of neural networks with both discrete (time-varying) and distributed delays; see [11,29,30,33,42,43]. It is well known that time delays can affect the stability of neural network systems and may lead to complex dynamic behaviors such as oscillation, chaos and instability; moreover, stability analysis of neural network systems is a significant research topic in neural network theory. Thus it is of great importance to study the dynamical behaviors of neural networks with both discrete (time-varying) and distributed delays, such as the existence, uniqueness and global exponential stability of almost periodic solutions.
The main contribution of this paper is that, dropping the monotonicity assumptions on the discontinuous activation functions, we investigate the existence, uniqueness and global exponential stability of the almost periodic solution of delayed neural networks with discontinuous neuron activations and with discrete and distributed time-varying delays. The paper is organized as follows. In Section 2, the neural network model studied in this paper and some preliminaries are introduced. In Section 3, the existence of an almost periodic solution of the neural network system is derived by constructing suitable generalized Lyapunov functions. In Section 4, the uniqueness and global exponential stability of the almost periodic solution are obtained. After that, some numerical examples are given to show the effectiveness of the theoretical results. Finally, the paper ends with a conclusion.
2. Neural network model and preliminaries

In this paper, we consider the following generalized Cohen–Grossberg neural network model with discontinuous activations:
" # Z t n n n X X X dxi ðtÞ ¼ qi ðxi ðtÞÞ di ðtÞxi ðtÞ þ aij ðtÞfj ðxj ðtÞÞ þ bij ðtÞfj ðxj ðt sij ðtÞÞÞ þ cij ðtÞ fj ðxj ðsÞÞds þ Ii ðtÞ ; dt trij ðtÞ j¼1 j¼1 j¼1 i ¼ 1; 2; . . . ; n;
ð2:1Þ
where $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^T\in\mathbb{R}^n$ and $x_i(t)$ denotes the state variable of the potential of the $i$th neuron; $d_i(t)$ represents the self-inhibition with which the $i$th neuron resets its potential to the resting state in isolation when disconnected from the network; $a_{ij}(t)$ denotes the connection strength of the $j$th neuron on the $i$th neuron; $b_{ij}(t)$ and $c_{ij}(t)$ are the delayed feedbacks of the $j$th neuron on the $i$th neuron, with time-varying delay and distributed delay, respectively; $\tau_{ij}(t)$ and $\sigma_{ij}(t)$ denote the discrete and distributed time-varying delays, respectively; $f_i(x_i(t))$ represents the activation function of the $i$th neuron; $I_i(t)$ denotes the external input to the $i$th neuron; and $q_i(x_i(t))$ denotes the amplification function of the $i$th neuron. For the discontinuous neuron activations in system (2.1), we present the following assumptions:
(H1) For $i=1,2,\ldots,n$, $f_i$ is continuous except on a countable set of isolated points $\rho_k^i$, where there exist the finite right limits $\lim_{x_i\to(\rho_k^i)^+}f_i(x_i)\triangleq f_i^+(\rho_k^i)$ and left limits $\lim_{x_i\to(\rho_k^i)^-}f_i(x_i)\triangleq f_i^-(\rho_k^i)$, respectively. Moreover, $f_i$ has a finite number of discontinuity points on any compact interval of $\mathbb{R}$.

(H2) For $i=1,2,\ldots,n$, $f_i=g_i+h_i$, where $g_i$ is continuous on $\mathbb{R}$ and $h_i$ is continuous except on a countable set of isolated points $\rho_k^i$. For all $u,v\in\mathbb{R}$, there exist positive constants $L_i$ such that $|g_i(u)-g_i(v)|\le L_i|u-v|$, $i=1,2,\ldots,n$. Moreover, $h_i$ ($i=1,2,\ldots,n$) is monotonically decreasing on $\mathbb{R}$.

(H2*) For $i=1,2,\ldots,n$, $f_i=g_i+h_i$, where $g_i$ is continuous on $\mathbb{R}$ and $h_i$ is continuous except on a countable set of isolated points $\rho_k^i$. For all $u,v\in\mathbb{R}$, there exist positive constants $L_i$ such that $|g_i(u)-g_i(v)|\le L_i|u-v|$, $i=1,2,\ldots,n$. Moreover, $h_i$ ($i=1,2,\ldots,n$) is monotonically nondecreasing on $\mathbb{R}$.

Remark 1. Note that $\overline{co}[f_i(x_i)]=[\min\{f_i^-(x_i),f_i^+(x_i)\},\max\{f_i^-(x_i),f_i^+(x_i)\}]$ for $i=1,2,\ldots,n$ and $x_i\in\mathbb{R}$; that is, $\overline{co}[f_i(x_i)]$ is an interval with non-empty interior when $f_i$ is discontinuous at $x_i=\rho_k^i$, while $\overline{co}[f_i(x_i)]=\{f_i(x_i)\}$ is a singleton when $f_i$ is continuous at $x_i\ne\rho_k^i$. For example, four classes of different situations are illustrated in Figs. 1–4, where the neuron activation function $f$ is discontinuous at $x=0$ and continuous at $x\ne 0$. Obviously, in Fig. 1, $f(x)=\operatorname{sgn}x-x$ is discontinuous at $x=0$ and $\overline{co}[f(0)]=[-1,1]$, where $f^-(0)=-1<1=f^+(0)$; in Fig. 2, $f(x)=x-\operatorname{sgn}x$ is discontinuous at $x=0$ and $\overline{co}[f(0)]=[-1,1]$, where $f^-(0)=1>-1=f^+(0)$; in Fig. 3, $f(x)=x^2+1$ for $x\le 0$ and $f(x)=x-1$ for $x>0$ is discontinuous at $x=0$ and $\overline{co}[f(0)]=[-1,1]$, where $f^+(0)=-1<1=f^-(0)$; in Fig. 4, $f(x)=0.5x-1$ for $x\le 0$ and $f(x)=e^x$ for $x>0$ is discontinuous at $x=0$ and $\overline{co}[f(0)]=[-1,1]$, where $f^-(0)=-1<1=f^+(0)$. It is worth pointing out that the neuron activation functions $f$ in Figs. 1–4 are nonmonotone.

Remark 2. In this paper, the neuron activation functions $f_i(x_i)$ ($i=1,2,\ldots,n$) are allowed to be unbounded, of superlinear (or nonlinear) growth, and even of superexponential growth.
For example, four such discontinuous neuron activation functions are illustrated in Figs. 1–4. Therefore, the restrictive conditions imposed on the neuron activation functions in the literature [17,21–25,28,31] have been successfully removed, such as:

(1) $f_i(x_i)$ ($i=1,2,\ldots,n$) are bounded;

(2) $f_i(x_i)$ ($i=1,2,\ldots,n$) satisfy the growth condition: there exist nonnegative constants $k_i,h_i$ such that
$$|\overline{co}[f_i(x_i)]|=\sup_{\gamma_i\in \overline{co}[f_i(x_i)]}|\gamma_i|\le k_i|x_i|+h_i,\quad i=1,2,\ldots,n;$$

(3) $f_i(x_i)$ ($i=1,2,\ldots,n$) satisfy the nonlinear growth condition: there exist a nonnegative constant $c$ and $0<\alpha<1$ such that
$$\|\overline{co}[f(x)]\|=\sup_{\gamma\in \overline{co}[f(x)]}\|\gamma\|\le c(1+\|x\|^{\alpha}),$$
where $\overline{co}[f(x)]=(\overline{co}[f_1(x_1)],\overline{co}[f_2(x_2)],\ldots,\overline{co}[f_n(x_n)])^T$.

Moreover, the restriction $f_i^-(\rho_k^i)<f_i^+(\rho_k^i)$ (where $f_i$ is discontinuous at $\rho_k^i$) imposed in the literature [17,19–24,30,31,33–36] has also been removed; see Figs. 2 and 3. Therefore, the neuron activation functions $f_i(x_i)$ ($i=1,2,\ldots,n$) in this paper are more general and more practical. We also need the following assumptions for the delayed system (2.1).
Fig. 1. Example of a discontinuous neuron activation, $f(x)=\operatorname{sgn}x-x$.
Fig. 2. Example of a discontinuous neuron activation, $f(x)=x-\operatorname{sgn}x$.
Fig. 3. Example of a discontinuous neuron activation $f(x)$, where $f(x)=x^2+1$ for $x\le 0$ and $f(x)=x-1$ for $x>0$.
(H3) For $i=1,2,\ldots,n$ and $s\in\mathbb{R}$, $q_i(s)$ is continuous and there exist positive constants $q_i^l,q_i^M$ such that $0<q_i^l\le q_i(s)\le q_i^M$; the delays $\tau_{ij}(t)$ and $\sigma_{ij}(t)$ are nonnegative continuous almost periodic functions; and $d_i(t),a_{ij}(t),b_{ij}(t),c_{ij}(t),I_i(t)$ are continuous almost periodic functions. That is, for any $\epsilon>0$ there exists $l=l(\epsilon)>0$ such that for any interval $[a,a+l]$ there is $\omega\in[a,a+l]$ such that
$$|d_i(t+\omega)-d_i(t)|<\epsilon,\quad |a_{ij}(t+\omega)-a_{ij}(t)|<\epsilon,\quad |b_{ij}(t+\omega)-b_{ij}(t)|<\epsilon,\quad |c_{ij}(t+\omega)-c_{ij}(t)|<\epsilon,$$
$$|I_i(t+\omega)-I_i(t)|<\epsilon,\quad |\tau_{ij}(t+\omega)-\tau_{ij}(t)|<\epsilon,\quad |\sigma_{ij}(t+\omega)-\sigma_{ij}(t)|<\epsilon$$
hold for all $i,j=1,2,\ldots,n$ and $t\in\mathbb{R}$.
Fig. 4. Example of a discontinuous neuron activation $f(x)$, where $f(x)=0.5x-1$ for $x\le 0$ and $f(x)=e^x$ for $x>0$.
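As a quick numerical check of Remark 1, the following sketch evaluates the one-sided limits at $0$ of the four example activations (assuming the piecewise formulas stated in the captions of Figs. 1–4) and confirms that each yields $\overline{co}[f(0)]=[-1,1]$:

```python
import math

# The four example activations of Figs. 1-4 (piecewise formulas assumed
# from the captions).
f1 = lambda x: math.copysign(1.0, x) - x               # Fig. 1: sgn(x) - x
f2 = lambda x: x - math.copysign(1.0, x)               # Fig. 2: x - sgn(x)
f3 = lambda x: x**2 + 1 if x <= 0 else x - 1           # Fig. 3
f4 = lambda x: 0.5 * x - 1 if x <= 0 else math.exp(x)  # Fig. 4

def one_sided_limits(f, x0=0.0, h0=1e-6):
    """Crude numerical left/right limits of f at x0."""
    return f(x0 - h0), f(x0 + h0)

for f, (left, right) in zip(
    (f1, f2, f3, f4),
    ((-1, 1), (1, -1), (1, -1), (-1, 1)),
):
    fl, fr = one_sided_limits(f)
    # co[f(0)] = [min(fl, fr), max(fl, fr)] = [-1, 1] in all four cases.
    assert abs(fl - left) < 1e-3 and abs(fr - right) < 1e-3
```

Note that in Figs. 2 and 3 the left limit exceeds the right limit, which is exactly the situation excluded by the restriction $f_i^-(\rho_k^i)<f_i^+(\rho_k^i)$ removed in this paper.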
(H4) The delays $\tau_{ij}(t)$ and $\sigma_{ij}(t)$ are continuously differentiable and satisfy $\tau'_{ij}(t)<1$ for $i,j=1,2,\ldots,n$. Moreover, there exist positive constants $\xi_1,\xi_2,\ldots,\xi_n$ and $\delta>0$ such that
$$\Gamma_i(t)<0\quad\text{and}\quad \Lambda_i(t)<0,\qquad i=1,2,\ldots,n,$$
where
$$\Gamma_i(t)=-\xi_i d_i(t)+\frac{\xi_i\delta}{q_i^l}+\sum_{j=1}^{n}\xi_j L_i|a_{ji}(t)|+\sum_{j=1}^{n}\xi_j L_i\frac{e^{\delta\tau_{ji}^M}|b_{ji}(u_{ji}^{-1}(t))|}{1-\tau'_{ji}(u_{ji}^{-1}(t))}+\sum_{j=1}^{n}\xi_j L_i\int_{-\sigma_{ji}(t)}^{0}|c_{ji}(t-s)|e^{-\delta s}\,ds+\sum_{j=1}^{n}\xi_j L_i\,\sigma'_{ji}(t)\int_{t-\sigma_{ji}(t)}^{t}|c_{ji}(\rho+\sigma_{ji}(t))|e^{\delta[\rho+\sigma_{ji}(t)-t]}\,d\rho,$$
$$\Lambda_i(t)=-\xi_i a_{ii}(t)+\sum_{j=1,j\ne i}^{n}\xi_j|a_{ji}(t)|+\sum_{j=1}^{n}\xi_j\frac{e^{\delta\tau_{ji}^M}|b_{ji}(u_{ji}^{-1}(t))|}{1-\tau'_{ji}(u_{ji}^{-1}(t))}+\sum_{j=1}^{n}\xi_j\int_{-\sigma_{ji}(t)}^{0}|c_{ji}(t-s)|e^{-\delta s}\,ds+\sum_{j=1}^{n}\xi_j\,\sigma'_{ji}(t)\int_{t-\sigma_{ji}(t)}^{t}|c_{ji}(\rho+\sigma_{ji}(t))|e^{\delta[\rho+\sigma_{ji}(t)-t]}\,d\rho,$$
and $u_{ij}^{-1}$ is the inverse function of $u_{ij}(t)=t-\tau_{ij}(t)$, $i,j=1,2,\ldots,n$.

(H4*) The delays $\tau_{ij}(t)$ and $\sigma_{ij}(t)$ are continuously differentiable and satisfy $\tau'_{ij}(t)<1$ for $i,j=1,2,\ldots,n$. Moreover, there exist positive constants $\xi_1,\xi_2,\ldots,\xi_n$ and $\delta>0$ such that
$$\Gamma_i(t)<0\quad\text{and}\quad \widehat{\Lambda}_i(t)<0,\qquad i=1,2,\ldots,n,$$
where $\Gamma_i(t)$ is defined as above and
$$\widehat{\Lambda}_i(t)=\xi_i a_{ii}(t)+\sum_{j=1,j\ne i}^{n}\xi_j|a_{ji}(t)|+\sum_{j=1}^{n}\xi_j\frac{e^{\delta\tau_{ji}^M}|b_{ji}(u_{ji}^{-1}(t))|}{1-\tau'_{ji}(u_{ji}^{-1}(t))}+\sum_{j=1}^{n}\xi_j\int_{-\sigma_{ji}(t)}^{0}|c_{ji}(t-s)|e^{-\delta s}\,ds+\sum_{j=1}^{n}\xi_j\,\sigma'_{ji}(t)\int_{t-\sigma_{ji}(t)}^{t}|c_{ji}(\rho+\sigma_{ji}(t))|e^{\delta[\rho+\sigma_{ji}(t)-t]}\,d\rho.$$

Since the right-hand side of the neural network system (2.1) is a piecewise continuous vector function and the classical definition of solutions is invalid for differential equations with a discontinuous right-hand side, we need to specify what is meant by a solution of the delayed differential equation (2.1). For this purpose, we extend the concept of Filippov solution to the delayed differential equation (2.1) as follows.

Definition 2.1. A vector function $x=(x_1,x_2,\ldots,x_n)^T:[-\bar{\tau},T)\to\mathbb{R}^n$, where $\bar{\tau}=\max_{1\le i,j\le n}\{\tau_{ij}^M,\sigma_{ij}^M\}$ and $T\in(0,+\infty]$, is a state solution of the discontinuous system (2.1) on $[-\bar{\tau},T)$ if

(1) $x$ is continuous on $[-\bar{\tau},T)$ and absolutely continuous on any compact interval of $[0,T)$;

(2) there exists a measurable function $\gamma=(\gamma_1,\gamma_2,\ldots,\gamma_n)^T:[-\bar{\tau},T)\to\mathbb{R}^n$ such that $\gamma_j(t)\in \overline{co}[f_j(x_j(t))]$ for a.e. $t\in[-\bar{\tau},T)$ and
" # Z t n n n X X X dxi ðtÞ ¼ qi ðxi ðtÞÞ di ðtÞxi ðtÞ þ aij ðtÞcj ðtÞ þ bij ðtÞcj ðt sij ðtÞÞ þ cij ðtÞ cj ðsÞds þ Ii ðtÞ ; dt trij ðtÞ j¼1 j¼1 j¼1 2 ½0; T Þ;
i ¼ 1; 2; . . . ; n:
for a:e: t ð2:2Þ
Any function c ¼ ðc1 ; c2 ; . . . ; cn ÞT satisfying (2.2) is called an output solution associated with the state x ¼ ðx1 ; x2 ; . . . ; xn ÞT . With this definition, it turns out that the state x ¼ ðx1 ; x2 ; . . . ; xn ÞT is a solution of (2.1) in the sense of Filippov since it satisfies
" # Z t n n n X X X dxi ðtÞ 2 qi ðxi ðtÞÞ di ðtÞxi ðtÞ þ aij ðtÞco½fj ðxj ðtÞÞ þ bij ðtÞco½fj ðxj ðt sij ðtÞÞÞ þ cij ðtÞ co½fj ðxj ðsÞÞds þ Ii ðtÞ ; dt trij ðtÞ j¼1 j¼1 j¼1
for a:e: t 2 ½0; T Þ;
i ¼ 1; 2; . . . ; n:
ð2:3Þ
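A state/output pair in the sense of (2.2) can be approximated by a simple explicit Euler scheme with a history buffer for the delayed and distributed terms. The sketch below uses an assumed two-neuron instance with illustrative coefficients (not the paper's numerical examples) and the Fig. 1 activation, picking the selection $\gamma=0\in\overline{co}[f(0)]$ at the discontinuity:

```python
import math

# Illustrative two-neuron instance of (2.2); all coefficients and delays are
# assumptions made for this sketch, not the paper's numerical examples.
n, dt, steps = 2, 1e-3, 5000
tau = 0.1                                  # constant tau_ij(t) = sigma_ij(t)
lag = int(tau / dt)

q = lambda x: 1.0 + 0.5 / (1.0 + x * x)    # amplification, 1 <= q_i(x) <= 1.5
d = lambda t: 3.0 + math.sin(t)            # almost periodic self-inhibition
I = lambda i, t: math.cos(t) if i == 0 else math.sin(math.sqrt(2.0) * t)
a = [[0.5, -0.2], [0.1, 0.4]]
b = [[0.1, 0.05], [-0.05, 0.1]]
c = [[0.05, 0.0], [0.0, 0.05]]
# Fig. 1 activation; at x = 0 we pick the selection 0, which lies in [-1, 1].
f = lambda x: math.copysign(1.0, x) - x if x != 0.0 else 0.0

hist = [[0.1, -0.2] for _ in range(lag + 1)]   # constant initial function phi
for k in range(steps):
    t = k * dt
    x = hist[-1]
    # Rectangle-rule approximation of the distributed-delay integrals.
    integ = [dt * sum(f(hist[-1 - m][j]) for m in range(lag)) for j in range(n)]
    new = []
    for i in range(n):
        rhs = (-d(t) * x[i]
               + sum(a[i][j] * f(x[j]) for j in range(n))
               + sum(b[i][j] * f(hist[-1 - lag][j]) for j in range(n))
               + sum(c[i][j] * integ[j] for j in range(n))
               + I(i, t))
        new.append(x[i] + dt * q(x[i]) * rhs)
    hist.append(new)

# The strong damping d(t) >= 2 keeps the trajectory bounded, mirroring the
# boundedness argument of Lemma 3.1 below.
assert all(abs(v) < 10.0 for v in hist[-1])
```

The dominant self-inhibition $-d_i(t)x_i(t)$ here outweighs the Lipschitz growth of the interconnection terms, which is the same mechanism that conditions (H4) formalize.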
For an initial value problem (IVP) associated with the neural network system (2.1), we follow the definition introduced by Forti et al. [17–20].

Definition 2.2 (IVP). For any continuous function $\phi=(\phi_1,\phi_2,\ldots,\phi_n)^T:[-\bar{\tau},0]\to\mathbb{R}^n$ and any measurable selection $\psi=(\psi_1,\psi_2,\ldots,\psi_n)^T:[-\bar{\tau},0]\to\mathbb{R}^n$ such that $\psi_j(s)\in \overline{co}[f_j(\phi_j(s))]$ ($j=1,2,\ldots,n$) for a.e. $s\in[-\bar{\tau},0]$, by an initial value problem associated with (2.1) with initial condition $[\phi,\psi]$ we mean the following problem: find a couple of functions $[x,\gamma]:[-\bar{\tau},T)\to\mathbb{R}^n\times\mathbb{R}^n$ such that $x$ is a solution of system (2.1) on $[-\bar{\tau},T)$ for some $T>0$, $\gamma$ is an output solution associated with $x$, and
" # 8 n n n X X X Rt > > > dxdti ðtÞ ¼ qi ðxi ðtÞÞ bi ðtÞxi ðtÞ þ a ðtÞ c ðtÞ þ b ðtÞ c ðt s ðtÞÞ þ c ðtÞ c ðsÞds þ I ðtÞ ; for a:e: t 2 ½0;T Þ; i ¼ 1; 2; .. .; n; > ij ij ij ij i j j trij ðtÞ j > > > j¼1 j¼1 j¼1 > > < cj ðtÞ 2 co½fj ðxj ðtÞÞ; for a:e: t 2 ½0;T Þ; > > > > > xðsÞ ¼ /ðsÞ; 8s 2 ½1; 0; > > > > : cðsÞ ¼ wðsÞ; for a:e: s 2 ½1; 0: ð2:4Þ
Definition 2.3. Let $x^*(t)=(x_1^*(t),x_2^*(t),\ldots,x_n^*(t))^T$ be a solution of the given IVP of system (2.1). Then $x^*(t)$ is said to be globally exponentially stable if, for any solution $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^T$ of (2.1), there exist constants $M>0$ and $\delta>0$ such that
$$\|x(t)-x^*(t)\|\le M e^{-\delta t}\quad\text{for }t\ge t_0\ge 0,$$
where $\|x(t)-x^*(t)\|=\sum_{i=1}^{n}|x_i(t)-x_i^*(t)|$.

As for the concept of almost periodic function, we use the definition introduced by Fink [47] and He [48].

Definition 2.4. A continuous function $x(t):\mathbb{R}\to\mathbb{R}^n$ is said to be almost periodic on $\mathbb{R}$ if for any $\epsilon>0$ the set $T(x,\epsilon)=\{\omega:\|x(t+\omega)-x(t)\|<\epsilon,\ \forall t\in\mathbb{R}\}$ is relatively dense; that is, for any $\epsilon>0$ it is possible to find a real number $l=l(\epsilon)>0$ such that any interval of length $l(\epsilon)$ contains a number $\omega=\omega(\epsilon)$ with $\|x(t+\omega)-x(t)\|<\epsilon$ for all $t\in\mathbb{R}$.

Suppose that $x(t):[0,+\infty)\to\mathbb{R}^n$ is absolutely continuous on any compact interval of $[0,+\infty)$. We give a chain rule for computing the time derivative of the composed function $V(x(t)):[0,+\infty)\to\mathbb{R}$ as follows.

Lemma 2.1 (Chain rule [49,50]). Suppose that $V(x):\mathbb{R}^n\to\mathbb{R}$ is C-regular and that $x(t):[0,+\infty)\to\mathbb{R}^n$ is absolutely continuous on any compact interval of $[0,+\infty)$. Then $x(t)$ and $V(x(t)):[0,+\infty)\to\mathbb{R}$ are differentiable for a.e. $t\in[0,+\infty)$, and we have
$$\frac{dV(x(t))}{dt}=\Big\langle \zeta(t),\frac{dx(t)}{dt}\Big\rangle,\qquad \forall \zeta(t)\in\partial V(x(t)).$$

For convenience, we shall introduce the notations
$$g^l=\inf_{s\in\mathbb{R}}g(s),\qquad g^M=\sup_{s\in\mathbb{R}}g(s),$$
where $g(t)$ is an almost periodic function.

3. Existence of almost periodic solution

In this section, we investigate the existence of an almost periodic solution of the neural network system (2.1) with discontinuous activation functions.

Lemma 3.1. Suppose that the assumptions (H1), (H2), (H3) and (H4) are satisfied. Then for any IVP associated with (2.1), there exists a solution $[x,\gamma]$ of the neural network system (2.1) on $[0,+\infty)$; i.e., the solution $x$ of (2.1) is defined for $t\in[0,+\infty)$ and $\gamma$ is defined for $t\in[0,+\infty)$ up to a set of measure zero. Moreover, there exists a constant $M>0$ such that $\|x\|<M$ for $t\in[-\bar{\tau},+\infty)$ and $\|\gamma\|<M$ for a.e. $t\in[-\bar{\tau},+\infty)$.

Proof. Based on the detailed discussion in Section 2, the set-valued map
$$x_i(t)\hookrightarrow q_i(x_i(t))\Big[-d_i(t)x_i(t)+\sum_{j=1}^{n}a_{ij}(t)\overline{co}[f_j(x_j(t))]+\sum_{j=1}^{n}b_{ij}(t)\overline{co}[f_j(x_j(t-\tau_{ij}(t)))]+\sum_{j=1}^{n}c_{ij}(t)\int_{t-\sigma_{ij}(t)}^{t}\overline{co}[f_j(x_j(s))]\,ds+I_i(t)\Big],\quad i=1,2,\ldots,n,$$
is upper semicontinuous with nonempty compact convex values, so the local existence of a solution $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^T$ of (2.3) is obvious [49,51,52]. That is, the IVP of system (2.2) has at least one solution $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^T$ on $[0,T)$ for some $T\in(0,+\infty]$, and the derivative of $x_i(t)$ is a measurable selection from
$$q_i(x_i(t))\Big[-d_i(t)x_i(t)+\sum_{j=1}^{n}a_{ij}(t)\overline{co}[f_j(x_j(t))]+\sum_{j=1}^{n}b_{ij}(t)\overline{co}[f_j(x_j(t-\tau_{ij}(t)))]+\sum_{j=1}^{n}c_{ij}(t)\int_{t-\sigma_{ij}(t)}^{t}\overline{co}[f_j(x_j(s))]\,ds+I_i(t)\Big],\quad i=1,2,\ldots,n.$$
It follows from the continuation theorem [51, Theorem 2, p. 78] that either $T=+\infty$, or $T<+\infty$ and $\lim_{t\to T^-}\|x(t)\|=+\infty$, where $\|x(t)\|=\sum_{i=1}^{n}|x_i(t)|$ is defined as above. Next, we show that $\lim_{t\to T^-}\|x(t)\|<+\infty$ if $T<+\infty$, which means that the maximal existence interval of $x(t)$ can be extended to $+\infty$. According to Definition 2.1, there exists $\gamma=(\gamma_1,\gamma_2,\ldots,\gamma_n)^T:[-\bar{\tau},T)\to\mathbb{R}^n$ such that $\gamma_j(t)\in \overline{co}[f_j(x_j(t))]$ for a.e. $t\in[-\bar{\tau},T)$ and
$$\frac{dx_i(t)}{dt}=q_i(x_i(t))\Big[-d_i(t)x_i(t)+\sum_{j=1}^{n}a_{ij}(t)\gamma_j(t)+\sum_{j=1}^{n}b_{ij}(t)\gamma_j(t-\tau_{ij}(t))+\sum_{j=1}^{n}c_{ij}(t)\int_{t-\sigma_{ij}(t)}^{t}\gamma_j(s)\,ds+I_i(t)\Big]\quad\text{for a.e. }t\in[0,T),\ i=1,2,\ldots,n.$$
Note that $f_i=g_i+h_i$. Hence there exists a vector function $\eta=(\eta_1,\eta_2,\ldots,\eta_n)^T:[-\bar{\tau},T)\to\mathbb{R}^n$ such that $\eta_i(t)+g_i(x_i(t))=\gamma_i(t)$ ($i=1,2,\ldots,n$), where $\eta_i(t)\in \overline{co}[h_i(x_i(t))]$ for a.e. $t\in[-\bar{\tau},T)$.
Now, we consider the following candidate Lyapunov function components:
$$V_1(t)=\sum_{i=1}^{n}\xi_i e^{\delta t}\Big|\int_{0}^{x_i(t)}\frac{d\rho}{q_i(\rho)}\Big|,$$
$$V_2(t)=\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\int_{t-\tau_{ij}(t)}^{t}\frac{|b_{ij}(u_{ij}^{-1}(\rho))|}{1-\tau'_{ij}(u_{ij}^{-1}(\rho))}\big[|g_j(x_j(\rho))|+|\eta_j(\rho)|\big]e^{\delta(\rho+\tau_{ij}^M)}\,d\rho,$$
$$V_3(t)=\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\int_{-\sigma_{ij}(t)}^{0}\int_{t+s}^{t}|c_{ij}(\rho-s)|\big[|g_j(x_j(\rho))|+|\eta_j(\rho)|\big]e^{\delta(\rho-s)}\,d\rho\,ds.$$
Obviously, $V_1(t),V_2(t),V_3(t)$ are regular [48,49]. Meanwhile, the solutions $x(t)$ of system (2.1) are absolutely continuous. Then $V_1(t),V_2(t),V_3(t)$ are differentiable for a.e. $t\ge 0$, and the time derivative can be evaluated by Lemma 2.1. From (2.2), we obtain
$$\frac{d}{dt}\int_{0}^{x_i(t)}\frac{d\rho}{q_i(\rho)}=\frac{1}{q_i(x_i(t))}\frac{dx_i(t)}{dt}=-d_i(t)x_i(t)+\sum_{j=1}^{n}a_{ij}(t)\gamma_j(t)+\sum_{j=1}^{n}b_{ij}(t)\gamma_j(t-\tau_{ij}(t))+\sum_{j=1}^{n}c_{ij}(t)\int_{t-\sigma_{ij}(t)}^{t}\gamma_j(s)\,ds+I_i(t)$$
$$=-d_i(t)x_i(t)+\sum_{j=1}^{n}a_{ij}(t)\big[g_j(x_j(t))+\eta_j(t)\big]+\sum_{j=1}^{n}b_{ij}(t)\big[g_j(x_j(t-\tau_{ij}(t)))+\eta_j(t-\tau_{ij}(t))\big]+\sum_{j=1}^{n}c_{ij}(t)\int_{t-\sigma_{ij}(t)}^{t}\big[g_j(x_j(s))+\eta_j(s)\big]\,ds+I_i(t).$$
Moreover, we have
$$\frac{d}{dt}\Big|\int_{0}^{x_i(t)}\frac{d\rho}{q_i(\rho)}\Big|=\partial\Big|\int_{0}^{x_i(t)}\frac{d\rho}{q_i(\rho)}\Big|\cdot\frac{1}{q_i(x_i(t))}\frac{dx_i(t)}{dt}=\nu_i(t)\,\frac{1}{q_i(x_i(t))}\frac{dx_i(t)}{dt},$$
where $\nu_i(t)=\operatorname{sign}\big\{\int_{0}^{x_i(t)}\frac{d\rho}{q_i(\rho)}\big\}=\operatorname{sign}\{x_i(t)\}$ if $x_i(t)\ne 0$, while $\nu_i(t)$ can be chosen arbitrarily in $[-1,1]$ if $x_i(t)=0$. In particular, we can choose $\nu_i(t)$ as follows:
$$\nu_i(t)=\begin{cases}0,&\text{if }x_i(t)=\gamma_i(t)=0,\\ -\operatorname{sign}\{\eta_i(t)\},&\text{if }x_i(t)=0\text{ and }\gamma_i(t)\ne 0,\\ \operatorname{sign}\{x_i(t)\},&\text{if }x_i(t)\ne 0.\end{cases}$$
Thus, we have
$$\nu_i(t)x_i(t)=|x_i(t)|,\qquad \nu_i(t)\eta_i(t)=-|\eta_i(t)|,\qquad i=1,2,\ldots,n.$$
Let $V(t)=V_1(t)+V_2(t)+V_3(t)$. Applying the chain rule in Lemma 2.1 and computing the time derivative of $V(t)$ along the solution trajectories of system (2.1) in the sense of Eq. (2.2), we get, for a.e. $t\ge 0$, that
$$\begin{aligned}
\frac{dV(t)}{dt}&=\sum_{i=1}^{n}\delta\xi_i e^{\delta t}\Big|\int_{0}^{x_i(t)}\frac{d\rho}{q_i(\rho)}\Big|+\sum_{i=1}^{n}\xi_i e^{\delta t}\nu_i(t)\frac{1}{q_i(x_i(t))}\frac{dx_i(t)}{dt}+\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\frac{|b_{ij}(u_{ij}^{-1}(t))|}{1-\tau'_{ij}(u_{ij}^{-1}(t))}\big[|g_j(x_j(t))|+|\eta_j(t)|\big]e^{\delta(t+\tau_{ij}^M)}\\
&\quad-\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i|b_{ij}(t)|\big[|g_j(x_j(t-\tau_{ij}(t)))|+|\eta_j(t-\tau_{ij}(t))|\big]e^{\delta(t-\tau_{ij}(t)+\tau_{ij}^M)}\\
&\quad+\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\sigma'_{ij}(t)\int_{t-\sigma_{ij}(t)}^{t}|c_{ij}(\rho+\sigma_{ij}(t))|\big[|g_j(x_j(\rho))|+|\eta_j(\rho)|\big]e^{\delta(\rho+\sigma_{ij}(t))}\,d\rho\\
&\quad+\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\int_{-\sigma_{ij}(t)}^{0}|c_{ij}(t-s)|\big[|g_j(x_j(t))|+|\eta_j(t)|\big]e^{\delta(t-s)}\,ds-\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\int_{t-\sigma_{ij}(t)}^{t}|c_{ij}(t)|\big[|g_j(x_j(s))|+|\eta_j(s)|\big]e^{\delta t}\,ds\\
&\le\sum_{i=1}^{n}\delta\xi_i e^{\delta t}\Big|\int_{0}^{x_i(t)}\frac{d\rho}{q_i(\rho)}\Big|-\sum_{i=1}^{n}\xi_i e^{\delta t}d_i(t)|x_i(t)|+\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i e^{\delta t}|a_{ij}(t)||g_j(x_j(t))|-\sum_{i=1}^{n}\xi_i e^{\delta t}a_{ii}(t)|\eta_i(t)|\\
&\quad+\sum_{i=1}^{n}\sum_{j=1,j\ne i}^{n}\xi_i e^{\delta t}|a_{ij}(t)||\eta_j(t)|+\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\frac{|b_{ij}(u_{ij}^{-1}(t))|}{1-\tau'_{ij}(u_{ij}^{-1}(t))}\big[|g_j(x_j(t))|+|\eta_j(t)|\big]e^{\delta(t+\tau_{ij}^M)}\\
&\quad+\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\sigma'_{ij}(t)\int_{t-\sigma_{ij}(t)}^{t}|c_{ij}(\rho+\sigma_{ij}(t))|\big[|g_j(x_j(\rho))|+|\eta_j(\rho)|\big]e^{\delta(\rho+\sigma_{ij}(t))}\,d\rho\\
&\quad+\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\int_{-\sigma_{ij}(t)}^{0}|c_{ij}(t-s)|\big[|g_j(x_j(t))|+|\eta_j(t)|\big]e^{\delta(t-s)}\,ds+\sum_{i=1}^{n}\xi_i e^{\delta t}|I_i(t)|\\
&\le\sum_{i=1}^{n}e^{\delta t}\Gamma_i(t)|x_i(t)|+\sum_{i=1}^{n}e^{\delta t}\Lambda_i(t)|\eta_i(t)|+\sum_{i=1}^{n}\xi_i e^{\delta t}|I_i(t)|.
\end{aligned}\qquad(3.1)$$
It follows from assumption (H4) that there exist positive constants $\vartheta_i,\iota_i$ ($i=1,2,\ldots,n$) such that, for $t\ge 0$,
$$\Gamma_i(t)\le-\vartheta_i<0\quad\text{and}\quad \Lambda_i(t)\le-\iota_i<0,\qquad i=1,2,\ldots,n.\qquad(3.2)$$
Combining inequalities (3.1) and (3.2), we have
$$\frac{dV(t)}{dt}\le\sum_{i=1}^{n}\xi_i e^{\delta t}I_i^M\quad\text{for a.e. }t\ge 0,$$
where $I_i^M=\sup_{s\in\mathbb{R}}|I_i(s)|$. Consequently,
$$V(t)\le V(0)+\frac{1}{\delta}\sum_{i=1}^{n}\xi_i I_i^M e^{\delta t}\quad\text{for }t\ge 0.$$
Note that $V(t)\ge\frac{1}{\bar{q}}e^{\delta t}\|x(t)\|$, where $\bar{q}=\max_{1\le i\le n}\{q_i^M\}$. Hence
$$\|x(t)\|\le \bar{q}V(0)e^{-\delta t}+\frac{\bar{q}}{\delta}\sum_{i=1}^{n}\xi_i I_i^M\le \bar{q}V(0)+\frac{\bar{q}}{\delta}\sum_{i=1}^{n}\xi_i I_i^M$$
for $t\in[0,T)$, which shows that $x(t)$ is bounded on its existence interval $[-\bar{\tau},T)$. Therefore $\lim_{t\to T^-}\|x(t)\|<+\infty$, which means $T=+\infty$; that is, the neural network system (2.1) has a global solution for any IVP. Moreover, we have
$$\|x(t)\|=\sum_{i=1}^{n}|x_i(t)|\le M_0\quad\text{for }t\in[-\bar{\tau},+\infty),\qquad(3.3)$$
where $M_0=\bar{q}V(0)+\frac{\bar{q}}{\delta}\sum_{i=1}^{n}\xi_i I_i^M+\|\phi\|$. Note that $f_i$ has a finite number of discontinuity points on any compact interval of $\mathbb{R}$; in particular, $f_i$ has a finite number of discontinuity points on the compact interval $[-M_0,M_0]$. Without loss of generality, let $f_i$ be discontinuous at the points $\{\rho_k^i:k=1,2,\ldots,l_i\}$ of the interval $[-M_0,M_0]$, and assume that $-M_0<\rho_1^i<\rho_2^i<\cdots<\rho_{l_i}^i<M_0$. Let us consider the following series of continuous functions:
$$f_i^0(x)=\begin{cases}f_i(x),&\text{if }x\in[-M_0,\rho_1^i),\\ f_i(\rho_1^i-0),&\text{if }x=\rho_1^i,\end{cases}$$
$$f_i^k(x)=\begin{cases}f_i(\rho_k^i+0),&\text{if }x=\rho_k^i,\\ f_i(x),&\text{if }x\in(\rho_k^i,\rho_{k+1}^i),\\ f_i(\rho_{k+1}^i-0),&\text{if }x=\rho_{k+1}^i,\end{cases}\qquad k=1,2,\ldots,l_i-1,$$
$$f_i^{l_i}(x)=\begin{cases}f_i(\rho_{l_i}^i+0),&\text{if }x=\rho_{l_i}^i,\\ f_i(x),&\text{if }x\in(\rho_{l_i}^i,M_0].\end{cases}$$
Denote
$$M_i=\max\Big\{\max_{x\in[-M_0,\rho_1^i]}f_i^0(x),\ \max_{1\le k\le l_i-1}\max_{x\in[\rho_k^i,\rho_{k+1}^i]}f_i^k(x),\ \max_{x\in[\rho_{l_i}^i,M_0]}f_i^{l_i}(x)\Big\}$$
and
$$m_i=\min\Big\{\min_{x\in[-M_0,\rho_1^i]}f_i^0(x),\ \min_{1\le k\le l_i-1}\min_{x\in[\rho_k^i,\rho_{k+1}^i]}f_i^k(x),\ \min_{x\in[\rho_{l_i}^i,M_0]}f_i^{l_i}(x)\Big\}.$$
Obviously, we have
$$|\overline{co}[f_i(x_i(t))]|\le\max\{|M_i|,|m_i|\},\qquad i=1,2,\ldots,n.$$
Note that $\gamma_i(t)\in \overline{co}[f_i(x_i(t))]$ for a.e. $t\in[-\bar{\tau},+\infty)$ and $i=1,2,\ldots,n$. Thus
$$|\gamma_i(t)|\le\max\{|M_i|,|m_i|\}\quad\text{for a.e. }t\in[-\bar{\tau},+\infty)\text{ and }i=1,2,\ldots,n.$$
Hence
$$\|\gamma(t)\|\le\max\Big\{\sum_{i=1}^{n}|M_i|,\sum_{i=1}^{n}|m_i|\Big\}\quad\text{for a.e. }t\in[-\bar{\tau},+\infty).\qquad(3.4)$$
Set $M=\max\{M_0,\sum_{i=1}^{n}|M_i|,\sum_{i=1}^{n}|m_i|\}$. Then, from (3.3) and (3.4), we have
$$\|x(t)\|\le M\quad\text{for }t\in[-\bar{\tau},+\infty)\qquad(3.5)$$
and
$$\|\gamma(t)\|\le M\quad\text{for a.e. }t\in[-\bar{\tau},+\infty).\qquad(3.6)$$
The proof of Lemma 3.1 is completed. □
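The device used at the end of the proof, bounding every output selection by the extrema $M_i,m_i$ of the continuous restrictions $f_i^k$, can be illustrated numerically. The sketch below assumes the Fig. 3 activation with a single discontinuity at $\rho=0$ and a hypothetical bound $M_0=2$:

```python
# Bounding a discontinuous activation on [-M0, M0] by the extrema of its
# continuous pieces, as in the proof of Lemma 3.1. Hypothetical example:
# f(x) = x**2 + 1 for x <= 0, f(x) = x - 1 for x > 0 (Fig. 3), with M0 = 2
# and a single discontinuity point rho = 0.
M0, rho = 2.0, 0.0

def sample_extrema(piece, lo, hi, m=10001):
    """Min and max of a continuous piece over [lo, hi] on a uniform grid."""
    xs = [lo + (hi - lo) * k / (m - 1) for k in range(m)]
    vals = [piece(x) for x in xs]
    return min(vals), max(vals)

# f^0: restriction to [-M0, rho], closed with the left limit at rho.
lo0, hi0 = sample_extrema(lambda x: x * x + 1.0, -M0, rho)
# f^1: restriction to [rho, M0], closed with the right limit at rho.
lo1, hi1 = sample_extrema(lambda x: x - 1.0, rho, M0)

M_i, m_i = max(hi0, hi1), min(lo0, lo1)
bound = max(abs(M_i), abs(m_i))
# Every selection gamma in co[f(x)], x in [-M0, M0], satisfies |gamma| <= bound.
assert (M_i, m_i) == (5.0, -1.0) and bound == 5.0
```

Because each closed piece attains its extrema, the bound covers the whole convex closure, including the interval $\overline{co}[f(0)]=[-1,1]$ at the discontinuity.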
Lemma 3.2. Suppose that the assumptions (H1), (H2), (H3) and (H4) are satisfied. Then any solution $x(t)$ of system (2.1) associated with an output $\gamma(t)$ is asymptotically almost periodic; i.e., for any $\epsilon>0$, there exist $T>0$, $l=l(\epsilon)$ and $\omega=\omega(\epsilon)$ in any interval of length $l(\epsilon)$ such that $\|x(t+\omega)-x(t)\|\le\epsilon$ for all $t\ge T$.

Proof. It follows from Lemma 3.1 that there is $M=\max\{M_0,\sum_{i=1}^{n}|M_i|,\sum_{i=1}^{n}|m_i|\}>0$ such that
$$\|x(t)\|\le M\quad\text{for }t\in[-\bar{\tau},+\infty)$$
and
$$\|\gamma(t)\|\le M\quad\text{for a.e. }t\in[-\bar{\tau},+\infty).$$
According to assumption (H3), for any $\epsilon>0$ there is $l=l(\epsilon)>0$ such that for any $a\in\mathbb{R}$ there is $\omega\in[a,a+l]$ satisfying the following inequalities:
$$|d_i(t+\omega)-d_i(t)|<\frac{\underline{\xi}\,\delta\epsilon}{24n\bar{q}M\overline{\xi}},\quad |a_{ij}(t+\omega)-a_{ij}(t)|<\frac{\underline{\xi}\,\delta\epsilon}{24n^2\bar{q}M\overline{\xi}},\quad |b_{ij}(t+\omega)-b_{ij}(t)|<\frac{\underline{\xi}\,\delta\epsilon}{24n^2\bar{q}M\overline{\xi}},$$
$$|I_i(t+\omega)-I_i(t)|<\frac{\underline{\xi}\,\delta\epsilon}{24n\bar{q}\overline{\xi}},\quad |c_{ij}(t+\omega)-c_{ij}(t)|<\frac{\underline{\xi}\,\delta\epsilon}{24n^2\bar{\tau}\bar{q}M\overline{\xi}},\quad |\sigma_{ij}(t+\omega)-\sigma_{ij}(t)|<\frac{\underline{\xi}\,\delta\epsilon}{24n^2|c|_{ij}^M\bar{q}M\overline{\xi}},$$
where $|c|_{ij}^M=\sup_{s\in\mathbb{R}}|c_{ij}(s)|$, $i,j=1,2,\ldots,n$, and $0<\underline{\xi}\triangleq\min_{1\le i\le n}\{\xi_i\}\le\max_{1\le i\le n}\{\xi_i\}\triangleq\overline{\xi}$. Moreover, in view of assumption (H1) and $\gamma_j(t)\in \overline{co}[f_j(x_j(t))]$ ($j=1,2,\ldots,n$) for a.e. $t\in[-\bar{\tau},+\infty)$, we obtain
$$|\gamma_j(t+\omega-\tau_{ij}(t+\omega))-\gamma_j(t+\omega-\tau_{ij}(t))|<\epsilon_1\quad\text{for a.e. }t\in[-\bar{\tau},+\infty)$$
and
$$\Big|\int_{t+\omega-\sigma_{ij}(t+\omega)}^{t+\omega}\gamma_j(s)\,ds-\int_{t+\omega-\sigma_{ij}(t)}^{t+\omega}\gamma_j(s)\,ds\Big|<\epsilon_1\quad\text{for all }t\in[-\bar{\tau},+\infty),$$
where $\epsilon_1>0$ can be taken small enough that the estimates (3.7) and (3.8) below hold. Thus
$$|e_i(t,\omega)|<\frac{\underline{\xi}\,\delta\epsilon}{4n\bar{q}\overline{\xi}}\quad\text{for a.e. }t\in[-\bar{\tau},+\infty),\qquad(3.7)$$
$$|\Delta_i(t,\omega)|<\frac{\underline{\xi}\,\delta\epsilon}{4n\bar{q}\overline{\xi}}\quad\text{for a.e. }t\in[-\bar{\tau},+\infty),\qquad(3.8)$$
where
$$e_i(t,\omega)=I_i(t+\omega)-I_i(t)-[d_i(t+\omega)-d_i(t)]x_i(t+\omega)+\sum_{j=1}^{n}[a_{ij}(t+\omega)-a_{ij}(t)]\gamma_j(t+\omega)+\sum_{j=1}^{n}[b_{ij}(t+\omega)-b_{ij}(t)]\gamma_j(t+\omega-\tau_{ij}(t+\omega))+\sum_{j=1}^{n}[c_{ij}(t+\omega)-c_{ij}(t)]\int_{t+\omega-\sigma_{ij}(t+\omega)}^{t+\omega}\gamma_j(s)\,ds$$
and
$$\Delta_i(t,\omega)=\sum_{j=1}^{n}b_{ij}(t)\big[\gamma_j(t+\omega-\tau_{ij}(t+\omega))-\gamma_j(t+\omega-\tau_{ij}(t))\big]+\sum_{j=1}^{n}c_{ij}(t)\Big[\int_{t+\omega-\sigma_{ij}(t+\omega)}^{t+\omega}\gamma_j(s)\,ds-\int_{t+\omega-\sigma_{ij}(t)}^{t+\omega}\gamma_j(s)\,ds\Big].$$
Now, we consider the following candidate Lyapunov function components:
$$W_1(t)=\sum_{i=1}^{n}\xi_i e^{\delta t}\Big|\int_{x_i(t)}^{x_i(t+\omega)}\frac{d\rho}{q_i(\rho)}\Big|,$$
$$W_2(t)=\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\int_{t-\tau_{ij}(t)}^{t}\frac{|b_{ij}(u_{ij}^{-1}(\rho))|}{1-\tau'_{ij}(u_{ij}^{-1}(\rho))}\big[|g_j(x_j(\rho+\omega))-g_j(x_j(\rho))|+|\eta_j(\rho+\omega)-\eta_j(\rho)|\big]e^{\delta(\rho+\tau_{ij}^M)}\,d\rho,$$
$$W_3(t)=\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\int_{-\sigma_{ij}(t)}^{0}\int_{t+s}^{t}|c_{ij}(\rho-s)|\big[|g_j(x_j(\rho+\omega))-g_j(x_j(\rho))|+|\eta_j(\rho+\omega)-\eta_j(\rho)|\big]e^{\delta(\rho-s)}\,d\rho\,ds.$$
Obviously, $W_1(t),W_2(t),W_3(t)$ are regular [48,49]. Meanwhile, the solutions $x(t+\omega)$ and $x(t)$ of the neural network system (2.1) are absolutely continuous. Then $W_1(t),W_2(t),W_3(t)$ are differentiable for a.e. $t\ge 0$, and the time derivative can be evaluated by Lemma 2.1. From (2.2), we obtain
$$\frac{d}{dt}\int_{x_i(t)}^{x_i(t+\omega)}\frac{d\rho}{q_i(\rho)}=\frac{1}{q_i(x_i(t+\omega))}\frac{dx_i(t+\omega)}{dt}-\frac{1}{q_i(x_i(t))}\frac{dx_i(t)}{dt}$$
$$=e_i(t,\omega)+\Delta_i(t,\omega)-d_i(t)[x_i(t+\omega)-x_i(t)]+\sum_{j=1}^{n}a_{ij}(t)[\gamma_j(t+\omega)-\gamma_j(t)]+\sum_{j=1}^{n}b_{ij}(t)[\gamma_j(t+\omega-\tau_{ij}(t))-\gamma_j(t-\tau_{ij}(t))]+\sum_{j=1}^{n}c_{ij}(t)\int_{t-\sigma_{ij}(t)}^{t}[\gamma_j(s+\omega)-\gamma_j(s)]\,ds,$$
where $e_i(t,\omega)$ and $\Delta_i(t,\omega)$ are defined as above. Moreover, we have
$$\frac{d}{dt}\Big|\int_{x_i(t)}^{x_i(t+\omega)}\frac{d\rho}{q_i(\rho)}\Big|=\partial\Big|\int_{x_i(t)}^{x_i(t+\omega)}\frac{d\rho}{q_i(\rho)}\Big|\cdot\frac{d}{dt}\int_{x_i(t)}^{x_i(t+\omega)}\frac{d\rho}{q_i(\rho)}=\nu_i(t)\Big[\frac{1}{q_i(x_i(t+\omega))}\frac{dx_i(t+\omega)}{dt}-\frac{1}{q_i(x_i(t))}\frac{dx_i(t)}{dt}\Big],$$
where $\nu_i(t)=\operatorname{sign}\big\{\int_{x_i(t)}^{x_i(t+\omega)}\frac{d\rho}{q_i(\rho)}\big\}=\operatorname{sign}\{x_i(t+\omega)-x_i(t)\}$ if $x_i(t+\omega)\ne x_i(t)$, while $\nu_i(t)$ can be chosen arbitrarily in $[-1,1]$ if $x_i(t+\omega)=x_i(t)$. In particular, we can choose $\nu_i(t)$ as follows:
$$\nu_i(t)=\begin{cases}0,&\text{if }x_i(t+\omega)-x_i(t)=\gamma_i(t+\omega)-\gamma_i(t)=0,\\ -\operatorname{sign}\{\eta_i(t+\omega)-\eta_i(t)\},&\text{if }x_i(t+\omega)=x_i(t)\text{ and }\gamma_i(t+\omega)\ne\gamma_i(t),\\ \operatorname{sign}\{x_i(t+\omega)-x_i(t)\},&\text{if }x_i(t+\omega)\ne x_i(t).\end{cases}$$
Thus, for $i=1,2,\ldots,n$, we have
$$\nu_i(t)[x_i(t+\omega)-x_i(t)]=|x_i(t+\omega)-x_i(t)|,\qquad \nu_i(t)[\eta_i(t+\omega)-\eta_i(t)]=-|\eta_i(t+\omega)-\eta_i(t)|.$$
Let $W(t)=W_1(t)+W_2(t)+W_3(t)$. Applying the chain rule in Lemma 2.1 and computing the time derivative of $W(t)$ along the solution trajectories of system (2.1) in the sense of Eq. (2.2), we get, for a.e. $t\ge 0$, that
$$\begin{aligned}
\frac{dW(t)}{dt}&\le\sum_{i=1}^{n}\delta\xi_i e^{\delta t}\Big|\int_{x_i(t)}^{x_i(t+\omega)}\frac{d\rho}{q_i(\rho)}\Big|+\sum_{i=1}^{n}\xi_i e^{\delta t}\big[|e_i(t,\omega)|+|\Delta_i(t,\omega)|\big]-\sum_{i=1}^{n}\xi_i e^{\delta t}d_i(t)|x_i(t+\omega)-x_i(t)|\\
&\quad+\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i e^{\delta t}|a_{ij}(t)||g_j(x_j(t+\omega))-g_j(x_j(t))|-\sum_{i=1}^{n}\xi_i e^{\delta t}a_{ii}(t)|\eta_i(t+\omega)-\eta_i(t)|+\sum_{i=1}^{n}\sum_{j=1,j\ne i}^{n}\xi_i e^{\delta t}|a_{ij}(t)||\eta_j(t+\omega)-\eta_j(t)|\\
&\quad+\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\frac{|b_{ij}(u_{ij}^{-1}(t))|}{1-\tau'_{ij}(u_{ij}^{-1}(t))}\big[|g_j(x_j(t+\omega))-g_j(x_j(t))|+|\eta_j(t+\omega)-\eta_j(t)|\big]e^{\delta(t+\tau_{ij}^M)}\\
&\quad+\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\sigma'_{ij}(t)\int_{t-\sigma_{ij}(t)}^{t}|c_{ij}(\rho+\sigma_{ij}(t))|\big[|g_j(x_j(\rho+\omega))-g_j(x_j(\rho))|+|\eta_j(\rho+\omega)-\eta_j(\rho)|\big]e^{\delta(\rho+\sigma_{ij}(t))}\,d\rho\\
&\quad+\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\int_{-\sigma_{ij}(t)}^{0}|c_{ij}(t-s)|\big[|g_j(x_j(t+\omega))-g_j(x_j(t))|+|\eta_j(t+\omega)-\eta_j(t)|\big]e^{\delta(t-s)}\,ds\\
&\le\sum_{i=1}^{n}e^{\delta t}\Gamma_i(t)|x_i(t+\omega)-x_i(t)|+\sum_{i=1}^{n}e^{\delta t}\Lambda_i(t)|\eta_i(t+\omega)-\eta_i(t)|+\sum_{i=1}^{n}\xi_i e^{\delta t}\big[|e_i(t,\omega)|+|\Delta_i(t,\omega)|\big].
\end{aligned}\qquad(3.9)$$
Combining inequalities (3.2), (3.7), (3.8) and (3.9), we have

$$\frac{dW(t)}{dt}\le\sum_{i=1}^{n}\frac{\xi_i\,\delta\,\epsilon}{2n\bar q\,N}\,e^{\delta t}\quad\text{for a.e. } t\ge 0.$$

Thus

$$\frac{\xi}{\bar q}\,e^{\delta t}\,\|x(t+\omega)-x(t)\|\le W(t)\le W(0)+\sum_{i=1}^{n}\frac{\xi_i\,\epsilon}{2n\bar q\,N}\,e^{\delta t}\le W(0)+\frac{\xi\,\epsilon}{2\bar q}\,e^{\delta t}.$$

Note that W(0) is a constant, so we can choose a sufficiently large T > 0 such that

$$\frac{\bar q}{\xi}\,e^{-\delta t}\,W(0)<\frac{\epsilon}{2}\quad\text{for } t\ge T.$$

Therefore, we have

$$\|x(t+\omega)-x(t)\|\le\frac{\bar q}{\xi}\,e^{-\delta t}\,W(0)+\frac{\epsilon}{2}<\epsilon$$

for t ≥ T. The proof of Lemma 3.2 is completed. □
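For concreteness, the threshold T used in the last step admits an explicit choice (our notation T_*, assuming W(0) > 0; when W(0) = 0 any T > 0 works), obtained by solving (q̄/ξ)e^{−δT}W(0) = ε/2:

```latex
T_\ast=\frac{1}{\delta}\,\ln\!\frac{2\,\bar q\,W(0)}{\xi\,\epsilon},
\qquad\text{so that}\qquad
\frac{\bar q}{\xi}\,e^{-\delta t}\,W(0)<\frac{\epsilon}{2}\quad\text{for all } t>T_\ast .
```

Any T strictly larger than T_* then satisfies the inequality required in the proof for all t ≥ T.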
Theorem 3.1. Suppose that assumptions (H1), (H2), (H3) and (H4) are satisfied. Then there exists at least one almost periodic solution of the neural network system (2.1).

Proof. Let x(t) be any solution of the neural network system (2.1); that is, there exists γ = (γ₁, γ₂, …, γₙ)^T such that γ_j(t) ∈ co[f_j(x_j(t))] for a.e. t and

$$\frac{dx_i(t)}{dt}=q_i(x_i(t))\Big[-d_i(t)x_i(t)+\sum_{j=1}^{n}a_{ij}(t)\gamma_j(t)+\sum_{j=1}^{n}b_{ij}(t)\gamma_j(t-\tau_{ij}(t))+\sum_{j=1}^{n}c_{ij}(t)\int_{t-\sigma_{ij}(t)}^{t}\gamma_j(s)\,ds+I_i(t)\Big],\qquad i=1,2,\ldots,n.$$
It follows from (3.7) and (3.8) that we can select a sequence {t_k} satisfying lim_{k→+∞} t_k = +∞ and such that

$$|\varepsilon_i(t,t_k)|\le\frac{1}{k}\quad\text{for a.e. } t,\qquad(3.10)$$

$$|\Delta_i(t,t_k)|\le\frac{1}{k}\quad\text{for a.e. } t,\qquad(3.11)$$

where

$$\begin{aligned}
\varepsilon_i(t,t_k)={}&I_i(t+t_k)-I_i(t)-[d_i(t+t_k)-d_i(t)]\,x_i(t+t_k)+\sum_{j=1}^{n}[a_{ij}(t+t_k)-a_{ij}(t)]\,\gamma_j(t+t_k)\\
&+\sum_{j=1}^{n}[b_{ij}(t+t_k)-b_{ij}(t)]\,\gamma_j(t+t_k-\tau_{ij}(t+t_k))+\sum_{j=1}^{n}[c_{ij}(t+t_k)-c_{ij}(t)]\int_{t+t_k-\sigma_{ij}(t+t_k)}^{t+t_k}\gamma_j(s)\,ds
\end{aligned}$$
and

$$\Delta_i(t,t_k)=\sum_{j=1}^{n}b_{ij}(t)\big[\gamma_j(t+t_k-\tau_{ij}(t+t_k))-\gamma_j(t+t_k-\tau_{ij}(t))\big]+\sum_{j=1}^{n}c_{ij}(t)\left[\int_{t+t_k-\sigma_{ij}(t+t_k)}^{t+t_k}\gamma_j(s)\,ds-\int_{t+t_k-\sigma_{ij}(t)}^{t+t_k}\gamma_j(s)\,ds\right].$$
From (3.5) and (3.6), it is easy to see that there exists M > 0 such that ‖x′(t)‖ ≤ M for a.e. t. Thus, the sequence {x(t+t_k)}_{k∈ℕ} is equicontinuous and uniformly bounded. In view of the Arzelà–Ascoli theorem and the diagonal selection principle, we can select a subsequence of {t_k} (for convenience, still denoted by {t_k}) such that x(t+t_k) converges uniformly to a continuous function x*(t) on any compact subset of ℝ. In addition, from (3.5) and (3.6), the sequence {γ(t+t_k)}_{k∈ℕ} is uniformly bounded; therefore, we can select a further subsequence of {t_k} (still denoted by {t_k}) such that γ(t+t_k) converges weakly to a measurable function γ*(t). The rest of the proof is divided into three steps.

Step 1. For γ*(t) = (γ₁*(t), γ₂*(t), …, γₙ*(t))^T, we show that γ_j*(t) ∈ co[f_j(x_j*(t))] for a.e. t, where x*(t) = (x₁*(t), x₂*(t), …, xₙ*(t))^T. Note that co[f_i(·)] is an upper semicontinuous set-valued map and x(t+t_k) → x*(t) as k → +∞; hence, for any ε > 0, there exists N > 0 such that

$$co[f_i(x_i(t+t_k))]\subseteq co[f_i(x_i^*(t))]+\epsilon B$$

for k > N, where B = {x ∈ ℝⁿ : ‖x‖ ≤ 1}. Hence, for any k > N, γ_i(t+t_k) ∈ co[f_i(x_i*(t))] + εB. Then, by the compactness of co[f_i(x_i*(t))] + εB, we have

$$\gamma_i^*(t)=\lim_{k\to+\infty}\gamma_i(t+t_k)\in co[f_i(x_i^*(t))]+\epsilon B,$$

which shows that γ_i*(t) ∈ co[f_i(x_i*(t))] for a.e. t by the arbitrariness of ε.

Step 2. We now show that x*(t) is a solution of the neural network system (2.1). According to Lebesgue's dominated convergence theorem, for any t and any h ∈ ℝ, we have
$$\begin{aligned}
x_i^*(t+h)-x_i^*(t)={}&\lim_{k\to+\infty}\big[x_i(t+t_k+h)-x_i(t+t_k)\big]\\
={}&\lim_{k\to+\infty}\int_{t}^{t+h}q_i(x_i(t_k+\theta))\Big[I_i(t_k+\theta)-d_i(t_k+\theta)x_i(t_k+\theta)+\sum_{j=1}^{n}a_{ij}(t_k+\theta)\gamma_j(t_k+\theta)\\
&\quad+\sum_{j=1}^{n}b_{ij}(t_k+\theta)\gamma_j(t_k+\theta-\tau_{ij}(t_k+\theta))+\sum_{j=1}^{n}c_{ij}(t_k+\theta)\int_{t_k+\theta-\sigma_{ij}(t_k+\theta)}^{t_k+\theta}\gamma_j(s)\,ds\Big]\,d\theta\\
={}&\lim_{k\to+\infty}\int_{t}^{t+h}q_i(x_i(t_k+\theta))\Big[\varepsilon_i(\theta,t_k)+\Delta_i(\theta,t_k)+I_i(\theta)-d_i(\theta)x_i(t_k+\theta)+\sum_{j=1}^{n}a_{ij}(\theta)\gamma_j(t_k+\theta)\\
&\quad+\sum_{j=1}^{n}b_{ij}(\theta)\gamma_j(t_k+\theta-\tau_{ij}(\theta))+\sum_{j=1}^{n}c_{ij}(\theta)\int_{\theta-\sigma_{ij}(\theta)}^{\theta}\gamma_j(t_k+s)\,ds\Big]\,d\theta\\
={}&\int_{t}^{t+h}q_i(x_i^*(\theta))\Big[I_i(\theta)-d_i(\theta)x_i^*(\theta)+\sum_{j=1}^{n}a_{ij}(\theta)\gamma_j^*(\theta)+\sum_{j=1}^{n}b_{ij}(\theta)\gamma_j^*(\theta-\tau_{ij}(\theta))\\
&\quad+\sum_{j=1}^{n}c_{ij}(\theta)\int_{\theta-\sigma_{ij}(\theta)}^{\theta}\gamma_j^*(s)\,ds\Big]\,d\theta+\lim_{k\to+\infty}\int_{t}^{t+h}q_i(x_i(t_k+\theta))\big[\varepsilon_i(\theta,t_k)+\Delta_i(\theta,t_k)\big]\,d\theta,\qquad(3.12)
\end{aligned}$$
where

$$\begin{aligned}
\varepsilon_i(\theta,t_k)={}&I_i(\theta+t_k)-I_i(\theta)-[d_i(\theta+t_k)-d_i(\theta)]\,x_i(\theta+t_k)+\sum_{j=1}^{n}[a_{ij}(\theta+t_k)-a_{ij}(\theta)]\,\gamma_j(\theta+t_k)\\
&+\sum_{j=1}^{n}[b_{ij}(\theta+t_k)-b_{ij}(\theta)]\,\gamma_j(\theta+t_k-\tau_{ij}(\theta+t_k))+\sum_{j=1}^{n}[c_{ij}(\theta+t_k)-c_{ij}(\theta)]\int_{\theta+t_k-\sigma_{ij}(\theta+t_k)}^{\theta+t_k}\gamma_j(s)\,ds
\end{aligned}$$

and

$$\Delta_i(\theta,t_k)=\sum_{j=1}^{n}b_{ij}(\theta)\big[\gamma_j(\theta+t_k-\tau_{ij}(\theta+t_k))-\gamma_j(\theta+t_k-\tau_{ij}(\theta))\big]+\sum_{j=1}^{n}c_{ij}(\theta)\left[\int_{\theta+t_k-\sigma_{ij}(\theta+t_k)}^{\theta+t_k}\gamma_j(s)\,ds-\int_{\theta+t_k-\sigma_{ij}(\theta)}^{\theta+t_k}\gamma_j(s)\,ds\right]$$
are defined as above. It follows from (3.10) and (3.11) that

$$\lim_{k\to+\infty}\int_{t}^{t+h}q_i(x_i(t_k+\theta))\big[\varepsilon_i(\theta,t_k)+\Delta_i(\theta,t_k)\big]\,d\theta=0.\qquad(3.13)$$

Eqs. (3.12) and (3.13) imply
$$x_i^*(t+h)-x_i^*(t)=\int_{t}^{t+h}q_i(x_i^*(\theta))\Big[I_i(\theta)-d_i(\theta)x_i^*(\theta)+\sum_{j=1}^{n}a_{ij}(\theta)\gamma_j^*(\theta)+\sum_{j=1}^{n}b_{ij}(\theta)\gamma_j^*(\theta-\tau_{ij}(\theta))+\sum_{j=1}^{n}c_{ij}(\theta)\int_{\theta-\sigma_{ij}(\theta)}^{\theta}\gamma_j^*(s)\,ds\Big]\,d\theta.$$
Hence, x*(t) is a solution of the neural network system (2.1).

Step 3. Finally, we show that x*(t) is an almost periodic solution of the neural network system (2.1). By the proof of Lemma 3.2, for any ε > 0 there exist T > 0 and l = l(ε) such that every interval of length l(ε) contains an ω = ω(ε) with ‖x(t+ω) − x(t)‖ ≤ ε for all t ≥ T. Hence, there exists a sufficiently large constant K > 0 such that ‖x(t+t_k+ω) − x(t+t_k)‖ ≤ ε for all t and all k > K. Letting k → +∞, we easily obtain ‖x*(t+ω) − x*(t)‖ ≤ ε for all t; that is, x*(t) is an almost periodic solution of the neural network system (2.1). The proof of Theorem 3.1 is completed. □

Similarly to the proof of Theorem 3.1, we can obtain the following theorem.

Theorem 3.2. Suppose that assumptions (H1), (H2*), (H3) and (H4*) are satisfied. Then there exists at least one almost periodic solution of the neural network system (2.1).
4. Uniqueness and global exponential stability

In this section, we study the uniqueness and global exponential stability of the almost periodic solution obtained in Section 3 for the neural network system (2.1) with discontinuous activation functions.

Theorem 4.1. Suppose that assumptions (H1), (H2), (H3) and (H4) are satisfied. Then the system (2.1) has a unique almost periodic solution, which is globally exponentially stable.

Proof. Let x(t) and x̃(t) be any two solutions of the neural network system (2.1) associated with the outputs γ(t) and γ̃(t), and let [φ, ψ] and [φ̃, ψ̃] be the corresponding initial values, respectively. Note that f_i = g_i + h_i (i = 1, 2, …, n). There exist two vector-valued functions η(t) = (η₁(t), η₂(t), …, ηₙ(t))^T and η̃(t) = (η̃₁(t), η̃₂(t), …, η̃ₙ(t))^T such that η_i(t) + g_i(x_i(t)) = γ_i(t) and η̃_i(t) + g_i(x̃_i(t)) = γ̃_i(t) (i = 1, 2, …, n), where η_i(t) ∈ co[h_i(x_i(t))] and η̃_i(t) ∈ co[h_i(x̃_i(t))] for a.e. t, respectively. Now, we consider the following candidate Lyapunov function:
$$U_1(t)=\sum_{i=1}^{n}\xi_i e^{\delta t}\left|\int_{\tilde x_i(t)}^{x_i(t)}\frac{d\rho}{q_i(\rho)}\right|,$$

$$U_2(t)=\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\int_{t-\tau_{ij}(t)}^{t}\frac{|b_{ij}(u_{ij}^{-1}(\rho))|}{1-\tau_{ij}'(u_{ij}^{-1}(\rho))}\big[|g_j(x_j(\rho))-g_j(\tilde x_j(\rho))|+|\eta_j(\rho)-\tilde\eta_j(\rho)|\big]e^{\delta(\rho+\tau_{ij}^{M})}\,d\rho,$$

$$U_3(t)=\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\int_{-\sigma_{ij}(t)}^{0}\int_{t+s}^{t}|c_{ij}(\rho-s)|\big[|g_j(x_j(\rho))-g_j(\tilde x_j(\rho))|+|\eta_j(\rho)-\tilde\eta_j(\rho)|\big]e^{\delta(\rho-s)}\,d\rho\,ds.$$
Obviously, U₁(t), U₂(t) and U₃(t) are regular [48,49]. Meanwhile, the solutions x(t) and x̃(t) of the system (2.1) are absolutely continuous. Then U₁(t), U₂(t) and U₃(t) are differentiable for a.e. t ≥ 0, and the time derivative can be evaluated by Lemma 2.1. From (2.2), we obtain
$$\begin{aligned}
\frac{d}{dt}\int_{\tilde x_i(t)}^{x_i(t)}\frac{d\rho}{q_i(\rho)}={}&\frac{1}{q_i(x_i(t))}\frac{dx_i(t)}{dt}-\frac{1}{q_i(\tilde x_i(t))}\frac{d\tilde x_i(t)}{dt}\\
={}&-d_i(t)[x_i(t)-\tilde x_i(t)]+\sum_{j=1}^{n}a_{ij}(t)[\gamma_j(t)-\tilde\gamma_j(t)]+\sum_{j=1}^{n}b_{ij}(t)[\gamma_j(t-\tau_{ij}(t))-\tilde\gamma_j(t-\tau_{ij}(t))]\\
&+\sum_{j=1}^{n}c_{ij}(t)\int_{t-\sigma_{ij}(t)}^{t}[\gamma_j(s)-\tilde\gamma_j(s)]\,ds\\
={}&-d_i(t)[x_i(t)-\tilde x_i(t)]+\sum_{j=1}^{n}a_{ij}(t)[\eta_j(t)-\tilde\eta_j(t)]+\sum_{j=1}^{n}a_{ij}(t)[g_j(x_j(t))-g_j(\tilde x_j(t))]\\
&+\sum_{j=1}^{n}b_{ij}(t)[\eta_j(t-\tau_{ij}(t))-\tilde\eta_j(t-\tau_{ij}(t))]+\sum_{j=1}^{n}b_{ij}(t)[g_j(x_j(t-\tau_{ij}(t)))-g_j(\tilde x_j(t-\tau_{ij}(t)))]\\
&+\sum_{j=1}^{n}c_{ij}(t)\int_{t-\sigma_{ij}(t)}^{t}[\eta_j(s)-\tilde\eta_j(s)]\,ds+\sum_{j=1}^{n}c_{ij}(t)\int_{t-\sigma_{ij}(t)}^{t}[g_j(x_j(s))-g_j(\tilde x_j(s))]\,ds.
\end{aligned}$$
Moreover, we have

$$\frac{d}{dt}\left|\int_{\tilde x_i(t)}^{x_i(t)}\frac{d\rho}{q_i(\rho)}\right|=\nu_i(t)\left[\frac{1}{q_i(x_i(t))}\frac{dx_i(t)}{dt}-\frac{1}{q_i(\tilde x_i(t))}\frac{d\tilde x_i(t)}{dt}\right],$$

where ν_i(t) = sign{∫_{x̃_i(t)}^{x_i(t)} dρ/q_i(ρ)} = sign{x_i(t) − x̃_i(t)} if x_i(t) ≠ x̃_i(t), while ν_i(t) can be chosen arbitrarily in [−1, 1] if x_i(t) = x̃_i(t). In particular, we can choose ν_i(t) as follows:

$$\nu_i(t)=\begin{cases}0,&\text{if }x_i(t)-\tilde x_i(t)=\gamma_i(t)-\tilde\gamma_i(t)=0,\\ \operatorname{sign}\{\eta_i(t)-\tilde\eta_i(t)\},&\text{if }x_i(t)=\tilde x_i(t)\text{ and }\gamma_i(t)\ne\tilde\gamma_i(t),\\ \operatorname{sign}\{x_i(t)-\tilde x_i(t)\},&\text{if }x_i(t)\ne\tilde x_i(t).\end{cases}$$

Thus, we have

$$\nu_i(t)\{x_i(t)-\tilde x_i(t)\}=|x_i(t)-\tilde x_i(t)|,\qquad \nu_i(t)\{\eta_i(t)-\tilde\eta_i(t)\}=|\eta_i(t)-\tilde\eta_i(t)|,\qquad i=1,2,\ldots,n.$$

Let U(t) = U₁(t) + U₂(t) + U₃(t). Applying the chain rule in Lemma 2.1 and computing the time derivative of U(t) along the solution trajectories of the system (2.1) in the sense of Eq. (2.2), we obtain, for a.e. t ≥ 0,
$$\begin{aligned}
\frac{dU(t)}{dt}={}&\sum_{i=1}^{n}\delta\xi_i e^{\delta t}\left|\int_{\tilde x_i(t)}^{x_i(t)}\frac{d\rho}{q_i(\rho)}\right|+\sum_{i=1}^{n}\xi_i e^{\delta t}\nu_i(t)\left[\frac{1}{q_i(x_i(t))}\frac{dx_i(t)}{dt}-\frac{1}{q_i(\tilde x_i(t))}\frac{d\tilde x_i(t)}{dt}\right]\\
&+\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\frac{|b_{ij}(u_{ij}^{-1}(t))|}{1-\tau_{ij}'(u_{ij}^{-1}(t))}\big[|g_j(x_j(t))-g_j(\tilde x_j(t))|+|\eta_j(t)-\tilde\eta_j(t)|\big]e^{\delta(t+\tau_{ij}^{M})}\\
&-\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i|b_{ij}(t)|\big[|g_j(x_j(t-\tau_{ij}(t)))-g_j(\tilde x_j(t-\tau_{ij}(t)))|+|\eta_j(t-\tau_{ij}(t))-\tilde\eta_j(t-\tau_{ij}(t))|\big]e^{\delta(t-\tau_{ij}(t)+\tau_{ij}^{M})}\\
&+\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\sigma_{ij}'(t)\int_{t-\sigma_{ij}(t)}^{t}|c_{ij}(\rho+\sigma_{ij}(t))|\big[|g_j(x_j(\rho))-g_j(\tilde x_j(\rho))|+|\eta_j(\rho)-\tilde\eta_j(\rho)|\big]e^{\delta(\rho+\sigma_{ij}(t))}\,d\rho\\
&+\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\int_{-\sigma_{ij}(t)}^{0}|c_{ij}(t-s)|\big[|g_j(x_j(t))-g_j(\tilde x_j(t))|+|\eta_j(t)-\tilde\eta_j(t)|\big]e^{\delta(t-s)}\,ds\\
&-\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\int_{t-\sigma_{ij}(t)}^{t}|c_{ij}(t)|\big[|g_j(x_j(s))-g_j(\tilde x_j(s))|+|\eta_j(s)-\tilde\eta_j(s)|\big]e^{\delta t}\,ds\\
\le{}&\sum_{i=1}^{n}\delta\xi_i e^{\delta t}\left|\int_{\tilde x_i(t)}^{x_i(t)}\frac{d\rho}{q_i(\rho)}\right|-\sum_{i=1}^{n}\xi_i e^{\delta t}d_i(t)|x_i(t)-\tilde x_i(t)|+\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i e^{\delta t}|a_{ij}(t)||g_j(x_j(t))-g_j(\tilde x_j(t))|\\
&+\sum_{i=1}^{n}\xi_i e^{\delta t}a_{ii}(t)|\eta_i(t)-\tilde\eta_i(t)|+\sum_{i=1}^{n}\sum_{j=1,\,j\ne i}^{n}\xi_i e^{\delta t}|a_{ij}(t)||\eta_j(t)-\tilde\eta_j(t)|\\
&+\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\frac{|b_{ij}(u_{ij}^{-1}(t))|}{1-\tau_{ij}'(u_{ij}^{-1}(t))}\big[|g_j(x_j(t))-g_j(\tilde x_j(t))|+|\eta_j(t)-\tilde\eta_j(t)|\big]e^{\delta(t+\tau_{ij}^{M})}\\
&+\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\sigma_{ij}'(t)\int_{t-\sigma_{ij}(t)}^{t}|c_{ij}(\rho+\sigma_{ij}(t))|\big[|g_j(x_j(\rho))-g_j(\tilde x_j(\rho))|+|\eta_j(\rho)-\tilde\eta_j(\rho)|\big]e^{\delta(\rho+\sigma_{ij}(t))}\,d\rho\\
&+\sum_{i=1}^{n}\sum_{j=1}^{n}\xi_i\int_{-\sigma_{ij}(t)}^{0}|c_{ij}(t-s)|\big[|g_j(x_j(t))-g_j(\tilde x_j(t))|+|\eta_j(t)-\tilde\eta_j(t)|\big]e^{\delta(t-s)}\,ds\\
\le{}&\sum_{i=1}^{n}e^{\delta t}\Gamma_i(t)|x_i(t)-\tilde x_i(t)|+\sum_{i=1}^{n}e^{\delta t}\Omega_i(t)|\eta_i(t)-\tilde\eta_i(t)|.\hspace{6cm}(4.1)
\end{aligned}$$
Hence, from (4.1) and (3.2), we have

$$\frac{dU(t)}{dt}\le 0\quad\text{for a.e. } t\ge 0.$$

Note that

$$\frac{\xi}{\bar q}\,e^{\delta t}\sum_{i=1}^{n}|x_i(t)-\tilde x_i(t)|\le\sum_{i=1}^{n}\xi_i e^{\delta t}\left|\int_{\tilde x_i(t)}^{x_i(t)}\frac{d\rho}{q_i(\rho)}\right|\le U(t).$$
Hence

$$\|x(t)-\tilde x(t)\|=\sum_{i=1}^{n}|x_i(t)-\tilde x_i(t)|\le\frac{\bar q\,U(t)}{\xi}\,e^{-\delta t}\le\frac{\bar q\,U(0)}{\xi}\,e^{-\delta t},\qquad(4.2)$$

where ξ = min_{1≤i≤n}{ξ_i} > 0 and q̄ = max_{1≤i≤n}{q_i^M} > 0 are defined as above. In view of Theorem 3.1, the existence of an almost periodic solution x*(t) of the system (2.1) is obvious. Therefore, by (4.2), the almost periodic solution x*(t) of the system (2.1) is globally exponentially stable.

Next, we show the uniqueness of the almost periodic solution of the neural network system (2.1). Suppose, to the contrary, that x*(t) and x̃*(t) are two different almost periodic solutions of the neural network system (2.1). Then there exists t₀ ∈ ℝ such that

$$\|x^*(t_0)-\tilde x^*(t_0)\|=M_0>0.\qquad(4.3)$$

In addition, similar to the proof of (4.2), there exist constants M₁ > 0 and δ > 0 such that

$$\|x^*(t)-\tilde x^*(t)\|<M_1\,e^{-\delta t}$$

for t ≥ t₀. Hence there exists a sufficiently large T₁ such that

$$\|x^*(t)-\tilde x^*(t)\|<\frac{M_0}{2},\qquad\forall t\ge T_1,$$

which leads to

$$\|x^*(t_0+\omega)-\tilde x^*(t_0+\omega)\|<\frac{M_0}{2},\qquad\forall \omega\ge T_1-t_0.$$

Thus

$$\|[x^*(t_0+\omega)-\tilde x^*(t_0+\omega)]-[x^*(t_0)-\tilde x^*(t_0)]\|\ge\|x^*(t_0)-\tilde x^*(t_0)\|-\|x^*(t_0+\omega)-\tilde x^*(t_0+\omega)\|\ge\frac{M_0}{2},\qquad\forall \omega\ge T_1-t_0.$$

Note that x*(t) − x̃*(t) is almost periodic since x*(t) and x̃*(t) are both almost periodic; hence there exists l > 0 such that the interval [T₁ − t₀, T₁ − t₀ + l] contains an ω satisfying

$$\|[x^*(t_0+\omega)-\tilde x^*(t_0+\omega)]-[x^*(t_0)-\tilde x^*(t_0)]\|<\frac{M_0}{2}.$$
This is a contradiction. Consequently, the almost periodic solution x*(t) of the system (2.1) is unique. The proof of Theorem 4.1 is completed. □

Similarly to the proof of Theorem 4.1, we can obtain the following theorem.

Theorem 4.2. Suppose that assumptions (H1), (H2*), (H3) and (H4*) are satisfied. Then the system (2.1) has a unique almost periodic solution, which is globally exponentially stable.

Remark 3. In the earlier literature [32–35], neural network systems with discontinuous neuron activations were investigated by means of generalized Lyapunov functions, linear matrix inequalities (LMIs), and related techniques, and a series of results on the global exponential stability of almost periodic solutions was obtained. By comparison, Theorems 4.1 and 4.2 of this section, on the global exponential stability of the almost periodic solution of the neural network system (2.1) with discontinuous neuron activations and mixed time delays, make the following improvements:

(1) In Theorems 4.1 and 4.2, the neuron activation functions f_i(x_i) (i = 1, 2, …, n) are allowed to be non-monotonic. In contrast, many results in the existing literature [33–35] on the stability of almost periodic solutions of neural networks with discontinuous activation functions assume that each f_i(x_i) is monotonically non-decreasing on ℝ. This assumption is not required in Theorems 4.1 and 4.2. Furthermore, the restriction f_i^+(ρ_{ik}) > f_i^-(ρ_{ik}) (where f_i is discontinuous at ρ_{ik}) imposed in the papers [32–35] has also been removed; see Theorem 4.1. Therefore, the discontinuous activation functions considered in this paper are more general and more practical.
(2) To the best of our knowledge, most of the existing results on delayed neural network systems with discontinuous neuron activation functions do not consider discrete time-varying delays and distributed delays simultaneously [32,34,35]. (3) To the best of our knowledge, most of the existing results on delayed neural network systems with discontinuous neuron activation functions do not consider Cohen–Grossberg models [32–34]. Therefore, Theorems 4.1 and 4.2 are much more general and practical.
Remark 4. In Theorems 4.1 and 4.2, the neuron activation functions f_i(x_i) (i = 1, 2, …, n) are allowed to be unbounded, of superlinear (or nonlinear) growth, and even of superexponential growth. Furthermore, in the existing literature [17,21–25,28,31], many results on the existence of periodic solutions or equilibrium points for neural networks with discontinuous activation functions are obtained under the following assumptions. For each i = 1, 2, …, n, f_i(·) is bounded, i.e., there exists a constant M > 0 such that |f_i(·)| ≤ M. For each i = 1, 2, …, n, co[f_i(·)] satisfies a linear growth condition, i.e., there exist constants k_i ≥ 0 and h_i such that

$$\|co[f_i(x_i)]\|=\sup_{\zeta\in co[f_i(x_i)]}|\zeta|\le k_i|x_i|+h_i.$$

For each i = 1, 2, …, n, f_i(x_i) satisfies the nonlinear growth condition: there exist a nonnegative constant c and 0 < α < 1 such that

$$\|co[f(x)]\|=\sup_{\zeta\in co[f(x)]}\|\zeta\|\le c\,(1+\|x\|^{\alpha}),$$

where co[f(x)] = (co[f₁(x₁)], co[f₂(x₂)], …, co[fₙ(xₙ)])^T. It is noted that these assumptions are not required in Theorems 4.1 and 4.2. Moreover, in the earlier literature [24–28,31], many results on the stability analysis of periodic solutions or equilibrium points for neural networks with discontinuous activation functions are obtained under the unilateral Lipschitz-like condition: for each i = 1, 2, …, n, there exists a constant L_i such that for any two different numbers u, v ∈ ℝ and all ζ_i ∈ co[f_i(u)], η_i ∈ co[f_i(v)],

$$\frac{\zeta_i-\eta_i}{u-v}\ge -L_i.$$

It is worth pointing out that in Theorems 4.1 and 4.2 the neuron activation functions f_i(x_i) (i = 1, 2, …, n) need not satisfy this unilateral Lipschitz-like condition. Therefore, the activation functions of this paper are more general and more practical.
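To see concretely why a downward jump at a discontinuity defeats every unilateral Lipschitz-like bound, consider the following toy check; the piecewise activation here is hypothetical (not one of the paper's examples). The difference quotient across the jump tends to −∞ as the gap shrinks, so no finite constant L_i can satisfy the condition.

```python
# hypothetical activation with a downward jump at x = 0: f(0-) = 1 > -1 = f(0+)
def f(x):
    return x + 1.0 if x < 0 else x - 1.0

# difference quotients straddling the jump: (f(e) - f(-e)) / (2e) = 1 - 1/e -> -inf
quotients = [(f(e) - f(-e)) / (2 * e) for e in (0.1, 0.01, 0.001)]
```

Activations with downward jumps of exactly this kind occur in Example 1 below (at x = ±0.5π for f₁), which is why the unilateral Lipschitz-like condition fails there.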
5. Numerical examples

In this section, we consider some numerical examples, in which the delayed neural network systems have different discontinuous neuron activation functions, to show the applicability and effectiveness of the theoretical results given in the previous sections.

Example 1. Consider the following general Cohen–Grossberg neural networks:

$$\begin{aligned}
\frac{dx_1(t)}{dt}&=[0.3+0.1\cos(x_1(t))]\big[-x_1(t)+0.3f_1(x_1(t))-0.01f_2(x_2(t))+0.01f_1(x_1(t-\tau(t)))-0.01f_2(x_2(t-\tau(t)))\\
&\qquad+0.2\sin\sqrt{2}\,t+0.1\sin\sqrt{5}\,t\big],\\
\frac{dx_2(t)}{dt}&=[0.3-0.1\sin(x_2(t))]\big[-x_2(t)+0.01f_1(x_1(t))+0.3f_2(x_2(t))+0.01f_1(x_1(t-\tau(t)))-0.01f_2(x_2(t-\tau(t)))\\
&\qquad+0.3\cos\sqrt{3}\,t-0.1\sin t\big],
\end{aligned}\qquad(5.1)$$

where

$$f_1(x)=\begin{cases}\sin x+x^2,& x<-0.5\pi,\\ \sin x,& |x|\le 0.5\pi,\\ \sin x-x^2,& x>0.5\pi,\end{cases}\qquad
f_2(x)=\begin{cases}e^{-x}+\arctan x,& x<-1,\\ \arctan x,& |x|\le 1,\\ -e^{-x}+\arctan x,& x>1,\end{cases}$$

and τ(t) ≡ 1.
Consider the IVP of the system (5.1) with the five initial conditions φ(s) = (−1, −1)^T, (−0.5, −0.5)^T, (0, 0)^T, (0.5, 0.5)^T and (1, 1)^T for s ∈ [−1, 0]. It is easy to verify that the system (5.1) satisfies all the assumptions of Theorem 4.1. Therefore, it follows from Theorem 4.1 that the non-autonomous system (5.1) has a unique almost periodic solution, which is globally exponentially stable. As shown in Figs. 5–8, numerical simulations in MATLAB also confirm that all these solutions converge to the unique almost periodic solution of the system (5.1).
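The convergence just described can also be reproduced with an elementary delay-ODE integrator. The sketch below is our own minimal forward-Euler scheme with a constant initial history; the function names and the reconstructed signs of (5.1) are our assumptions, not code from the paper:

```python
import numpy as np

def f1(x):  # discontinuous activation f1 of system (5.1)
    if x < -0.5 * np.pi: return np.sin(x) + x**2
    if x >  0.5 * np.pi: return np.sin(x) - x**2
    return np.sin(x)

def f2(x):  # discontinuous activation f2 of system (5.1)
    if x < -1.0: return np.exp(-x) + np.arctan(x)
    if x >  1.0: return -np.exp(-x) + np.arctan(x)
    return np.arctan(x)

def simulate(x0, T=20.0, dt=0.001, tau=1.0):
    n, lag = int(T / dt), int(tau / dt)
    x = np.empty((n + 1, 2)); x[0] = x0
    hist = np.tile(np.asarray(x0, float), (lag, 1))  # constant history on [-tau, 0]
    for k in range(n):
        xd = hist[k % lag].copy(); hist[k % lag] = x[k]  # delayed state x(t - tau)
        t = k * dt
        r1 = (-x[k, 0] + 0.3 * f1(x[k, 0]) - 0.01 * f2(x[k, 1])
              + 0.01 * f1(xd[0]) - 0.01 * f2(xd[1])
              + 0.2 * np.sin(np.sqrt(2) * t) + 0.1 * np.sin(np.sqrt(5) * t))
        r2 = (-x[k, 1] + 0.01 * f1(x[k, 0]) + 0.3 * f2(x[k, 1])
              + 0.01 * f1(xd[0]) - 0.01 * f2(xd[1])
              + 0.3 * np.cos(np.sqrt(3) * t) - 0.1 * np.sin(t))
        x[k + 1, 0] = x[k, 0] + dt * (0.3 + 0.1 * np.cos(x[k, 0])) * r1
        x[k + 1, 1] = x[k, 1] + dt * (0.3 - 0.1 * np.sin(x[k, 1])) * r2
    return x

xa = simulate([ 1.0,  1.0])
xb = simulate([-1.0, -1.0])
```

Starting from two of the listed initial conditions, both trajectories remain bounded and approach each other, consistent with the global exponential stability asserted by Theorem 4.1.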
Fig. 5. Time-domain behavior of the state variable x1 for the system (5.1) with 5 random initial conditions.

Fig. 6. Time-domain behavior of the state variable x2 for the system (5.1) with 5 random initial conditions.

Remark 5. The discontinuous activation functions of the system (5.1) are described by

$$f_1(x)=\begin{cases}\sin x+x^2,& x<-0.5\pi,\\ \sin x,& |x|\le 0.5\pi,\\ \sin x-x^2,& x>0.5\pi,\end{cases}\qquad
f_2(x)=\begin{cases}e^{-x}+\arctan x,& x<-1,\\ \arctan x,& |x|\le 1,\\ -e^{-x}+\arctan x,& x>1.\end{cases}$$

It is easy to see that the activation functions f₁(x) and f₂(x) are discontinuous, unbounded, non-monotonic, and of superlinear growth (in fact, f₂(x) satisfies an exponential growth condition). Meanwhile, the activation functions f₁(x) and f₂(x) do not satisfy the unilateral Lipschitz-like condition. Therefore, the results in the papers [33–36] cannot be applied to discuss the stability (or global exponential stability) of the almost periodic solution of the system (5.1). Moreover, the activation functions f₁(x) and f₂(x) are discontinuous at x = ±0.5π and x = ±1, respectively. By an easy calculation, we obtain f₁⁻(−0.5π) = −1 + 0.25π² > −1 = f₁⁺(−0.5π), f₁⁻(0.5π) = 1 > 1 − 0.25π² = f₁⁺(0.5π), f₂⁻(−1) = e − 0.25π > −0.25π = f₂⁺(−1) and f₂⁻(1) = 0.25π > −e⁻¹ + 0.25π = f₂⁺(1). Thus, the restriction f_i⁺(ρ_{ik}) > f_i⁻(ρ_{ik}) (where f_i is discontinuous at ρ_{ik}) in the papers [33–36] has also been eliminated successfully. Therefore, the results of this paper are more general and more practical.

Example 2. Consider the following general Cohen–Grossberg neural networks:
$$\begin{aligned}
\frac{dx_1(t)}{dt}&=[0.2+0.1\sin(x_1(t))]\big[-2x_1(t)-(0.5+0.01\sin\sqrt{2}\,t)f_1(x_1(t))-0.01f_2(x_2(t))+0.01f_1(x_1(t-\tau(t)))\\
&\qquad-0.01f_2(x_2(t-\tau(t)))+0.3\sin\sqrt{3}\,t+0.2\sin\sqrt{5}\,t\big],\\
\frac{dx_2(t)}{dt}&=[0.2-0.1\cos(x_2(t))]\big[-2x_2(t)+0.01f_1(x_1(t))-(0.5-0.01\cos\sqrt{2}\,t)f_2(x_2(t))+0.01f_1(x_1(t-\tau(t)))\\
&\qquad-0.01f_2(x_2(t-\tau(t)))+0.2\cos\sqrt{3}\,t-0.3\sin t\big],
\end{aligned}\qquad(5.2)$$

where

$$f_1(x)=f_2(x)=\begin{cases}x-0.1,& x\le 0,\\ x^2+0.1,& x>0,\end{cases}$$

and τ(t) ≡ 0.1.
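The exponential convergence that Theorem 4.2 asserts for this system can be probed numerically. The sketch below is a hedged forward-Euler scheme of our own (constant history, reconstructed signs of (5.2) assumed), not the authors' MATLAB code; two trajectories started two units apart should collapse onto the same almost periodic solution:

```python
import numpy as np

def f(x):  # discontinuous activation of system (5.2)
    return x - 0.1 if x <= 0 else x**2 + 0.1

def simulate(x0, T=30.0, dt=0.001, tau=0.1):
    n, lag = int(T / dt), int(tau / dt)
    x = np.empty((n + 1, 2)); x[0] = x0
    hist = np.tile(np.asarray(x0, float), (lag, 1))  # constant history on [-tau, 0]
    for k in range(n):
        xd = hist[k % lag].copy(); hist[k % lag] = x[k]  # delayed state x(t - tau)
        t = k * dt
        r1 = (-2*x[k, 0] - (0.5 + 0.01*np.sin(np.sqrt(2)*t))*f(x[k, 0]) - 0.01*f(x[k, 1])
              + 0.01*f(xd[0]) - 0.01*f(xd[1])
              + 0.3*np.sin(np.sqrt(3)*t) + 0.2*np.sin(np.sqrt(5)*t))
        r2 = (-2*x[k, 1] + 0.01*f(x[k, 0]) - (0.5 - 0.01*np.cos(np.sqrt(2)*t))*f(x[k, 1])
              + 0.01*f(xd[0]) - 0.01*f(xd[1])
              + 0.2*np.cos(np.sqrt(3)*t) - 0.3*np.sin(t))
        x[k + 1, 0] = x[k, 0] + dt*(0.2 + 0.1*np.sin(x[k, 0]))*r1
        x[k + 1, 1] = x[k, 1] + dt*(0.2 - 0.1*np.cos(x[k, 1]))*r2
    return x

# terminal gap between trajectories started at (1,1) and (-1,-1)
gap = np.abs(simulate([1.0, 1.0])[-1] - simulate([-1.0, -1.0])[-1]).max()
```

With these parameters the gap decays from 2 to the level of the numerical chatter caused by the discontinuity, in line with the global exponential stability of the unique almost periodic solution.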
Fig. 7. Phase response of the state variables x1 and x2 for the system (5.1) with 5 random initial conditions.

Fig. 8. Three-dimensional trajectory of the state variables x1 and x2 for the system (5.1) with 5 random initial conditions.
Consider the IVP of the system (5.2) with the five initial conditions φ(s) = (−1, −1)^T, (−0.5, −0.5)^T, (0, 0)^T, (0.5, 0.5)^T and (1, 1)^T for s ∈ [−1, 0]. It is easy to verify that the system (5.2) satisfies all the assumptions of Theorem 4.2. Therefore, it follows from Theorem 4.2 that the non-autonomous system (5.2) has a unique almost periodic solution, which is globally exponentially stable. As shown in Figs. 9–12, numerical simulations in MATLAB also confirm that all these solutions converge to the unique almost periodic solution of the system (5.2).

Remark 6. The discontinuous activation functions of the system (5.2) are described by
$$f_1(x)=f_2(x)=\begin{cases}x-0.1,& x\le 0,\\ x^2+0.1,& x>0.\end{cases}$$

It is easy to see that the activation functions f₁(x) and f₂(x) are discontinuous, unbounded and non-monotonic. Therefore, the results in the papers [33–36] cannot be applied to discuss the stability (or global exponential stability) of the almost periodic solution of the system (5.2). Thus, the results of this paper are more general and more practical.

Remark 7. Consider the following general Cohen–Grossberg neural networks:
$$\begin{aligned}
\frac{dx_1(t)}{dt}&=[0.2+0.1\sin(x_1(t))]\big[-2x_1(t)+(0.5+0.01\sin\sqrt{2}\,t)f_1(x_1(t))-0.01f_2(x_2(t))+0.01f_1(x_1(t-\tau(t)))\\
&\qquad-0.01f_2(x_2(t-\tau(t)))+0.3\sin\sqrt{3}\,t+0.2\sin\sqrt{5}\,t\big],\\
\frac{dx_2(t)}{dt}&=[0.2-0.1\cos(x_2(t))]\big[-2x_2(t)+0.01f_1(x_1(t))+(0.4-0.01\cos\sqrt{2}\,t)f_2(x_2(t))+0.01f_1(x_1(t-\tau(t)))\\
&\qquad-0.01f_2(x_2(t-\tau(t)))+0.2\cos\sqrt{3}\,t-0.3\sin t\big],
\end{aligned}\qquad(5.3)$$

where

$$f_1(x)=f_2(x)=\begin{cases}x-1,& x\le 0,\\ x^2+1,& x>0,\end{cases}$$

and τ(t) ≡ 0.1. Consider the IVP of the system (5.3) with the five initial conditions φ(s) = (−1, −1)^T, (−0.5, −0.5)^T, (0, 0)^T, (0.5, 0.5)^T and (1, 1)^T for s ∈ [−1, 0]. It is easy to verify that the system (5.3) does not satisfy assumption (H4*) of Theorem 4.2; hence the existence and uniqueness of an almost periodic solution of the system (5.3) cannot be guaranteed. However, as shown in Figs. 13–16, numerical simulations in MATLAB indicate that the system (5.3) possesses at least two almost periodic solutions. In fact, the dynamical behaviors of the system (5.3) are quite different from those of the system (5.2), although the two systems are identical except for the coefficients a_ii(t) and the activation functions f_i(x) (i = 1, 2). Therefore, it would be interesting to study the existence of multiple almost periodic solutions and the multistability of neural network systems with general discontinuous activations; we leave this for future research.

Fig. 9. Time-domain behavior of the state variable x1 for the system (5.2) with 5 random initial conditions.

Fig. 10. Time-domain behavior of the state variable x2 for the system (5.2) with 5 random initial conditions.

Fig. 11. Phase response of the state variables x1 and x2 for the system (5.2) with 5 random initial conditions.

Fig. 12. Three-dimensional trajectory of the state variables x1 and x2 for the system (5.2) with 5 random initial conditions.

Fig. 13. Time-domain behavior of the state variable x1 for the system (5.3) with 5 random initial conditions.

Fig. 14. Time-domain behavior of the state variable x2 for the system (5.3) with 5 random initial conditions.

Fig. 15. Phase response of the state variables x1 and x2 for the system (5.3) with 5 random initial conditions.

Fig. 16. Three-dimensional trajectory of the state variables x1 and x2 for the system (5.3) with 5 random initial conditions.
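The multistability conjectured in Remark 7 can be probed with the same kind of elementary integrator used for the earlier examples. This is a hedged sketch under the reconstructed signs of (5.3) (our assumptions), not the authors' simulation code; a separation `sep` that stays well above the numerical chatter level would indicate coexisting attractors rather than a unique global one:

```python
import numpy as np

def f(x):  # discontinuous activation of system (5.3)
    return x - 1.0 if x <= 0 else x**2 + 1.0

def simulate(x0, T=50.0, dt=0.001, tau=0.1):
    n, lag = int(T / dt), int(tau / dt)
    x = np.empty((n + 1, 2)); x[0] = x0
    hist = np.tile(np.asarray(x0, float), (lag, 1))  # constant history on [-tau, 0]
    for k in range(n):
        xd = hist[k % lag].copy(); hist[k % lag] = x[k]  # delayed state x(t - tau)
        t = k * dt
        r1 = (-2*x[k, 0] + (0.5 + 0.01*np.sin(np.sqrt(2)*t))*f(x[k, 0]) - 0.01*f(x[k, 1])
              + 0.01*f(xd[0]) - 0.01*f(xd[1])
              + 0.3*np.sin(np.sqrt(3)*t) + 0.2*np.sin(np.sqrt(5)*t))
        r2 = (-2*x[k, 1] + 0.01*f(x[k, 0]) + (0.4 - 0.01*np.cos(np.sqrt(2)*t))*f(x[k, 1])
              + 0.01*f(xd[0]) - 0.01*f(xd[1])
              + 0.2*np.cos(np.sqrt(3)*t) - 0.3*np.sin(t))
        x[k + 1, 0] = x[k, 0] + dt*(0.2 + 0.1*np.sin(x[k, 0]))*r1
        x[k + 1, 1] = x[k, 1] + dt*(0.2 - 0.1*np.cos(x[k, 1]))*r2
    return x

high = simulate([1.0, 1.0])
low  = simulate([-1.0, -1.0])
# maximal separation over the final 10 time units
sep = np.abs(high[-10000:] - low[-10000:]).max()
```

Both trajectories remain bounded, so the failure of (H4*) concerns uniqueness rather than dissipativity in this example.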
6. Conclusion

In this paper, a class of general Cohen–Grossberg neural networks with discontinuous right-hand sides, time-varying delays and distributed delays has been investigated. Based on the theory of retarded differential inclusions and nonsmooth analysis together with a generalized Lyapunov approach, the existence, uniqueness and global stability of the almost periodic solution of the neural networks are obtained. It is worth pointing out that our results remain valid without assuming the boundedness or monotonicity of the discontinuous neuron activation functions. Moreover, our results extend previous works not
only on time-varying and distributed delayed neural networks with continuous or even Lipschitz continuous activations, but also on time-varying and distributed delayed neural networks with discontinuous activations. Finally, we give some numerical examples to show the applicability and effectiveness of our main results. We think it would be interesting to investigate the possibility of extending the results to more complex discontinuous neural network systems with time-varying and distributed delays, such as uncertain network systems and stochastic neural network systems. These issues will be the topic of our future research.

References

[1] Liu D, Xiong X, DasGupta B, Zhang H. Motif discoveries in unaligned molecular sequences using self-organizing neural networks. IEEE Trans Neural Netw 2006;17:919–28.
[2] Rutkowski L. Adaptive probabilistic neural networks for pattern classification in time-varying environment. IEEE Trans Neural Netw 2004;15:811–27.
[3] Haykin S. Neural networks: a comprehensive foundation. Englewood Cliffs, NJ: Prentice-Hall; 1999.
[4] Xia Y, Wang J. A general projection neural network for solving monotone variational inequalities and related optimization problems. IEEE Trans Neural Netw 2004;15:318–28.
[5] Nishikawa T, Lai Y, Hoppensteadt F. Capacity of oscillatory associative-memory networks with error-free retrieval. Phys Rev Lett 2004;92:108101.
[6] Wang D. Emergent synchrony in locally coupled neural oscillators. IEEE Trans Neural Netw 1995;6:941–8.
[7] Chen K, Wang D, Liu X. Weight adaptation and oscillatory correlation for image segmentation. IEEE Trans Neural Netw 2000;11:1106–23.
[8] Ruiz A, Owens D, Townley S. Existence, learning, and replication of periodic motion in recurrent neural networks. IEEE Trans Neural Netw 1998;9:651–61.
[9] Townley S, Ilchmann A, Weiss M, McClements W, Ruiz A, Owens D, Prätzel-Wolters D. Existence and learning of oscillations in recurrent neural networks. IEEE Trans Neural Netw 2000;11:205–14.
[10] Jin H, Zacksenhouse M. Oscillatory neural networks for robotic yo-yo control. IEEE Trans Neural Netw 2003;14:317–25.
[11] Bai C. Existence and stability of almost periodic solutions of Hopfield neural networks with continuously distributed delays. Nonlinear Anal 2009;71:5850–9.
[12] Jiang H, Zhang L, Teng Z. Existence and global exponential stability of almost periodic solution for cellular neural networks with variable coefficient and time-varying delays. IEEE Trans Neural Netw 2005;16:1340–51.
[13] Xia Y, Cao J, Huang Z. Existence and exponential stability of almost periodic solution for shunting inhibitory cellular neural networks with impulses. Chaos Solitons Fract 2007;34:1599–607.
[14] Li Y, Fan X. Existence and globally exponential stability of almost periodic solution for Cohen–Grossberg BAM neural networks with variable coefficients. Appl Math Model 2009;33:2114–20.
[15] Liu Y, You Z, Cao L. On the almost periodic solution of generalized Hopfield neural networks with time-varying delays. Neurocomputing 2006;69:1760–7.
[16] Liu B, Huang L. New results of almost periodic solutions for recurrent neural networks. J Comput Appl Math 2007;206:193–205.
[17] Forti M, Nistri P. Global convergence of neural networks with discontinuous neuron activations. IEEE Trans Circuits Syst I Fundam Theory Appl 2003;50:1421–35.
[18] Forti M, Nistri P, Quincampoix M. Generalized neural network for nonsmooth nonlinear programming problems. IEEE Trans Circuits Syst I 2004;51:1741–54.
[19] Forti M, Nistri P, Papini D. Global exponential stability and global convergence in finite time of delayed neural networks with infinite gain. IEEE Trans Neural Netw 2005;16(6):1449–63.
[20] Forti M, Grazzini M, Nistri P. Generalized Lyapunov approach for convergence of neural networks with discontinuous or non-Lipschitz activations. Physica D 2006;214:88–99.
[21] Lu W, Chen T. Dynamical behaviors of Cohen–Grossberg neural networks with discontinuous activation functions. Neural Netw 2005;18:231–42.
[22] Liu X, Chen T, Cao J, Lu W. Dissipativity and quasi-synchronization for neural networks with discontinuous activations and parameter mismatches. Neural Netw 2011;24:1013–21.
[23] Liu X, Cao J. Robust state estimations for neural networks with discontinuous activations. IEEE Trans Syst Man Cybern Part B 2010;40(6):1425–37.
[24] Huang L, Guo Z. Global convergence of periodic solution of neural networks with discontinuous activation functions. Chaos Solitons Fract 2009;42:2351–6.
[25] Cai Z, Huang L, Guo Z, Chen X. On the periodic dynamics of a class of time-varying delayed neural networks via differential inclusions. Neural Netw 2012;33:97–113.
[26] Wu H. Stability analysis for periodic solution of neural networks with discontinuous neuron activations. Nonlinear Anal Real World Appl 2009;10:1717–29.
[27] Xiao J, Zeng Z, Shen W. Global asymptotic stability of delayed neural networks with discontinuous neuron activations. Neurocomputing, in press. http://dx.doi.org/10.1016/j.neucom.2013.02.021.
[28] Huang L, Wang J, Zhou X. Existence and global asymptotic stability of periodic solutions for Hopfield neural networks with discontinuous activations. Nonlinear Anal Real World Appl 2009;10:1651–61.
[29] Wang D, Huang L, Cai Z. On the periodic dynamics of a general Cohen–Grossberg BAM neural networks via differential inclusions. Neurocomputing 2013;118:203–14.
[30] Liu J, Liu X, Xie W. Global convergence of neural networks with mixed time-varying delays and discontinuous neuron activations. Inf Sci 2012;183:92–105.
[31] Li Y, Wu H. Global stability analysis for periodic solution in discontinuous neural networks with nonlinear growth activations. Adv Differ Equ 2009. http://dx.doi.org/10.1155/2009/798685.
[32] Allegretto W, Papini D, Forti M. Common asymptotic behavior of solutions and almost periodicity for discontinuous, delayed, and impulsive neural networks. IEEE Trans Neural Netw 2010;21:1110–25.
[33] Lu W, Chen T. Almost periodic solution of a class of delayed neural networks with discontinuous activations. Neural Comput 2008;20(4):1065–90.
[34] Qin S, Xue X, Wang P. Global exponential stability of almost periodic solution of delayed neural networks with discontinuous activations. Inf Sci 2013;220:367–78.
[35] Wang J, Huang L. Almost periodicity for a class of delayed Cohen–Grossberg neural networks with discontinuous activations. Chaos Solitons Fract 2012;45:1157–70.
[36] Wang J, Huang L, Guo Z. Global asymptotic stability of neural networks with discontinuous activations. Neural Netw 2009;22:931–7.
[37] Civalleri P, Gilli M, Pandolfi L. On stability of cellular neural networks with delay. IEEE Trans Circuits Syst I 1993;40:157–65.
[38] He Y, Wu M. An improved global asymptotic stability criterion for delayed cellular neural networks. IEEE Trans Neural Netw 2006;17(1):250–3.
[39] Wu Z, Su H, Chu J, Zhou W. Improved delay-dependent stability condition of discrete recurrent neural networks with time-varying delays. IEEE Trans Neural Netw 2010;21(4):692–7.
[40] Hou C, Qian J. Stability analysis for neural dynamics with time-varying delays. IEEE Trans Neural Netw 1998;9(1):221–3.
[41] Huang H, Ho D, Lam J. Stochastic stability analysis of fuzzy Hopfield neural networks with time-varying delays. IEEE Trans Circuits Syst II 2005;52(5):251–5.
[42] Chen Y. Global stability of neural networks with distributed delays. Neural Netw 2002;15(7):867–71.
[43] Jessop J, Campbell SA. Approximating the stability region of a neural network with a general distribution of delays. Neural Netw 2010;23(10):1187–201.
[44] Marcus C, Westervelt R. Stability of analog neural networks with delay. Phys Rev A 1989;39:347–59.
[45] Izhikevich E. Polychronization: computation with spikes. Neural Comput 2006;18:245–82.
[46] Zhang D, Li Y. Exponential state estimation for Markovian jumping neural networks with time-varying discrete and distributed delays. Neural Netw 2012. http://dx.doi.org/10.1016/j.neunet.2012.08.005.
[47] Fink AM. Almost periodic differential equations. Lecture notes in mathematics. Berlin: Springer; 1974.
[48] He C. Almost periodic differential equation. Beijing: Higher Education Publishing House; 1992 [in Chinese].
[49] Huang L, Guo Z, Wang J. Theory and applications of differential equations with discontinuous right-hand sides. Beijing: Science Press; 2011 [in Chinese].
[50] Clarke F. Optimization and nonsmooth analysis. New York: Wiley; 1983.
[51] Aubin J, Cellina A. Differential inclusions. Berlin: Springer-Verlag; 1984.
[52] Filippov A. Differential equations with discontinuous right-hand side. Mathematics and its applications (Soviet series). Boston, MA: Kluwer Academic; 1988.