Neurocomputing 69 (2006) 1776–1781 www.elsevier.com/locate/neucom
Letters
Stability in static delayed neural networks: A nonlinear measure approach

Ping Li, Jinde Cao*

Department of Mathematics, Southeast University, Nanjing 210096, China

Received 11 September 2005; received in revised form 10 December 2005; accepted 12 December 2005. Available online 3 February 2006. Communicated by R.W. Newcomb.
Abstract

In this paper, the global exponential stability is discussed for static recurrent neural networks. Without assuming boundedness, monotonicity, or differentiability of the activation functions, a new sufficient condition is obtained, based on a nonlinear measure, that ensures the existence and uniqueness of the equilibrium. The condition obtained also guarantees the global exponential stability of the delayed neural networks via the construction of a proper Lyapunov functional. The results, which are independent of the time delay, can be checked easily by convex optimization algorithms. At the end of this paper, two illustrative examples are given to show the effectiveness of our results.

© 2006 Elsevier B.V. All rights reserved.

Keywords: Exponential stability; Recurrent neural networks; Time delay; Nonlinear measure; Lyapunov functional
This work was jointly supported by the National Natural Science Foundation of China under Grant Nos. 60574043 and 60373067, and the Natural Science Foundation of Jiangsu Province, China under Grant No. BK2003053. Corresponding author: J. Cao. Tel.: +86 2583792315; fax: +86 2583792316. E-mail address: [email protected] (J. Cao).

1. Introduction

Recurrently connected neural networks have been extensively studied in the past decade and successfully applied to different areas such as combinatorial optimization, pattern recognition, and associative memory. These applications rely heavily on the dynamical behavior of the neural networks, so dynamical analysis is a necessary step in the practical design and application of neural networks. According to whether the neuron states (the external states of neurons) or the local field states (the internal states of neurons) are taken as the basic variables, neural networks can be classified as static neural networks or local field neural networks [13,19]. For instance, the recurrent back-propagation neural networks, which are described by
the following ordinary differential equation in vector-matrix form:
\[
\frac{dx(t)}{dt} = -Ax(t) + f(Wx(t)+J),
\tag{1}
\]
are static neural networks. Here, $x_i$ is the state variable of neuron $i$, with $y_i = \sum_{j=1}^{n} w_{ij} x_j + J_i$ being its local field state; $f_i$ is the activation function of neuron $i$; $w_{ij}$ is the connection weight between neuron $i$ and neuron $j$; and $n$ is the number of neurons in the network. On the other hand, Hopfield neural networks [5], first introduced by Hopfield, are local field neural networks:
\[
\frac{dx(t)}{dt} = -Ax(t) + Wf(x(t)) + J,
\tag{2}
\]
where $x_i$ is the local field state variable with $v_i = f_i(x_i)$ as the output of neuron $i$, and $J_i$ is the constant input from outside the system. It should be noted that neural networks (1) and (2) are not always equivalent. One can easily check that, under the assumption that $WA = AW$ holds and $W$ is nonsingular, (1) can be transformed into (2) by the substitution $y(t) = Wx(t) + J$.
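To see this explicitly (a short verification added here for clarity; it is not spelled out in the original), differentiate the substitution and use $WA = AW$:
\[
\frac{dy(t)}{dt} = W\frac{dx(t)}{dt} = W\bigl(-Ax(t) + f(Wx(t)+J)\bigr) = -AWx(t) + Wf(y(t)) = -Ay(t) + Wf(y(t)) + AJ,
\]
which is of the local field form (2) with external input $AJ$; the invertibility of $W$ is what allows recovering $x(t) = W^{-1}(y(t) - J)$.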
However, in many real applications of neural networks, it is not reasonable to assume the invertibility of the matrix $W$. Many neural systems, such as the head-direction system [20] or the oculomotor integrator [15], are modelled by non-invertible networks. To the best of our knowledge, system (2) has received considerable attention and many theoretical results have been obtained; see [16,8,2,17,9,3] and the references cited therein. However, very little attention has been paid to the static neural networks (1); see [18,7,6]. In [19], the authors gave a theoretical comparison of the dynamical behavior of the neural networks (1) and (2).

In the light of Lyapunov theory [4], a fundamental requirement on a neural network system is that it have at least one equilibrium point. In the previous literature, the existence of an equilibrium is usually assumed [18,8,6]; in addition, the activation functions are usually required to be bounded so that the Brouwer–Schauder fixed point theorem can be applied to ensure the existence of an equilibrium point; see, for example, [2]. However, especially in some important engineering problems, some activation functions are not bounded. For example, when neural networks are designed to solve optimization problems in the presence of linear or quadratic constraints, the activation functions needed are unbounded, modelled by diode-like exponential-type functions. On the other hand, when a neural network is applied to optimization problems, the model should be designed so that there is only one equilibrium point and it is globally stable [17]. In practice, the choice of activation functions may strongly affect the complexity and performance of the network. Many theoretical results have been obtained under the assumption that the following condition (H) or (H*) is satisfied:
\[
(\mathrm{H}):\quad |f_i(x_1) - f_i(x_2)| \le l_i\, |x_1 - x_2|, \qquad i = 1, 2, \ldots, n,
\]
\[
(\mathrm{H}^*):\quad 0 \le \frac{f_i(x_1) - f_i(x_2)}{x_1 - x_2} \le l_i, \qquad i = 1, 2, \ldots, n,
\]
for each $x_1, x_2 \in \mathbb{R}$ with $x_1 \ne x_2$, where the $l_i$ are positive constants. It should be noted that activation functions satisfying condition (H*) are nondecreasing, and that the class of functions satisfying condition (H) includes the class of functions satisfying condition (H*). In [10], the author pointed out that, when a network is applied as an associative memory, its absolute capacity can be remarkably improved by replacing the usual sigmoid transfer functions with nonmonotonic ones. Besides, it has been observed both experimentally and numerically that time delay, due to the finite switching speed of amplifiers and to communication time, can lead to oscillation and, furthermore, to instability and poor performance of a neural network [9].
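As a quick numerical illustration (added here; not part of the original), one can check by difference quotients that an unbounded, nonmonotonic function such as $f(x) = \frac{1}{3}\sin x + \frac{1}{6}x$ (used later in Example 1) satisfies (H) with $l_i = \frac{1}{2}$ but violates (H*):

```python
import numpy as np

# f satisfies (H) with l = 1/2 but is nonmonotone, so (H*) fails.
f = lambda x: np.sin(x) / 3.0 + x / 6.0
x = np.linspace(-50.0, 50.0, 200_001)
slopes = np.diff(f(x)) / np.diff(x)   # difference quotients (f(x1)-f(x2))/(x1-x2)
print(slopes.max())                   # close to  0.5    -> (H) holds with l = 1/2
print(slopes.min())                   # close to -1/6 < 0 -> (H*) fails
```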
Motivated by the above discussion, in this paper we consider the stability of the following neural network:
\[
\begin{cases}
\dfrac{dx(t)}{dt} = -Ax(t) + f(Wx(t-\tau)+J), & t > 0,\\[1mm]
x(t) = \phi(t), & t \in [-\tau, 0],
\end{cases}
\tag{3}
\]
where $x(t) = [x_1(t), x_2(t), \ldots, x_n(t)]^T$, $A = \mathrm{diag}(a_1, a_2, \ldots, a_n)$, $\phi(t) = [\phi_1(t), \phi_2(t), \ldots, \phi_n(t)]^T$ and $f(x(t)) = [f_1(x_1(t)), f_2(x_2(t)), \ldots, f_n(x_n(t))]^T$. In the remainder of this paper, a sufficient condition ensuring the global exponential stability of (3) will be obtained based on a nonlinear measure. It should be pointed out that we only need assumption (H) on the activation functions; the monotonicity restriction as well as the boundedness of the activations are removed. In some sense, our nonlinear measure plays an important role in designing the neural network.

Throughout this paper, let $\mathbb{R}^n$ be the $n$-dimensional real vector space with the norm $\|x\|_2 = (\sum_{i=1}^{n} x_i^2)^{1/2}$ for any $x \in \mathbb{R}^n$; $\langle x, y\rangle$ denotes the inner product of two vectors $x, y \in \mathbb{R}^n$. We use $x_t$ to represent the segment of $x(\theta)$ on $[t-\tau, t]$, that is, $x_t: [-\tau, 0] \to \mathbb{R}^n$ with $\|x_t\| = \sup_{t-\tau \le \theta \le t} \|x(\theta)\|$. For any real matrix $P$, the notation $P^T$, $P^{-1}$, $\lambda_{\max}(P)$ and $\lambda_{\min}(P)$ denotes, respectively, its transpose, inverse, maximum eigenvalue and minimum eigenvalue. For real symmetric matrices $P$ and $Q$, $P \ge Q$ (respectively, $P > Q$) means that $P - Q$ is positive semidefinite (respectively, positive definite). $I$ represents the identity matrix of appropriate dimensions.

2. Main results

To obtain our main results, we need the following definitions and lemmas.

Lemma 1 (Sanchez and Perez [14]). Given any real matrices $X$, $Y$ and $Q > 0$ with appropriate dimensions, the following matrix inequality holds:
\[
X^T Y + Y^T X \le X^T Q X + Y^T Q^{-1} Y.
\]

Lemma 2 (Schur complement [1]). The linear matrix inequality (LMI)
\[
\begin{bmatrix} Q(x) & S(x) \\ S^T(x) & R(x) \end{bmatrix} > 0,
\]
where $Q(x) = Q^T(x)$, $R(x) = R^T(x)$ and $S(x)$ depend affinely on $x$, is equivalent to each of the following conditions:

(i) $Q(x) > 0$, $R(x) - S^T(x) Q^{-1}(x) S(x) > 0$;
(ii) $R(x) > 0$, $Q(x) - S(x) R^{-1}(x) S^T(x) > 0$.

Definition 1 (Qiao, Peng and Xu [12]). Suppose that $\Omega$ is an open set of $\mathbb{R}^n$, and $F$ is an operator from $\Omega$ into $\mathbb{R}^n$. The constant
\[
\mu_\Omega(F) \triangleq \sup_{\substack{x \ne y \\ x, y \in \Omega}} \frac{\langle F(x) - F(y), \,\mathrm{sign}(x-y)\rangle}{\|x - y\|_1},
\tag{4}
\]
where $\|x - y\|_1 = \sum_{i=1}^{n} |x_i - y_i|$, is called the nonlinear measure of $F$ on $\Omega$.

Motivated by this definition, we extend the concept to the norm $\|\cdot\|_2$:

Definition 2. Suppose that $\Omega$ is an open set of $\mathbb{R}^n$, and $F: \Omega \to \mathbb{R}^n$ is an operator. The constant
\[
\mu_\Omega(F) \triangleq \sup_{\substack{x \ne y \\ x, y \in \Omega}} \frac{\langle F(x) - F(y), \,x - y\rangle}{\|x - y\|_2^2}
= \sup_{\substack{x \ne y \\ x, y \in \Omega}} \frac{(x-y)^T (F(x) - F(y))}{\|x - y\|_2^2}
\tag{5}
\]
is called the nonlinear measure of $F$ on $\Omega$ with the norm $\|\cdot\|_2$.

As an analog of Lemma 1 in [12], we obtain the following lemma:

Lemma 3. If $\mu_\Omega(F) < 0$, then $F$ is an injective mapping on $\Omega$. In addition, if $\Omega = \mathbb{R}^n$, then $F$ is a homeomorphism of $\mathbb{R}^n$.

Proof. First, we show that $F$ is injective. Suppose $s_1, s_2 \in \Omega$ satisfy $F(s_1) = F(s_2)$ but $s_1 \ne s_2$. Then we find from (5) that
\[
\mu_\Omega(F) = \sup_{\substack{x \ne y \\ x, y \in \Omega}} \frac{\langle F(x) - F(y), x - y\rangle}{\|x - y\|_2^2}
\ge \frac{\langle F(s_1) - F(s_2), s_1 - s_2\rangle}{\|s_1 - s_2\|_2^2} = 0,
\]
which contradicts $\mu_\Omega(F) < 0$. Hence $F$ is injective. When $\Omega = \mathbb{R}^n$, for any $x, y \in \mathbb{R}^n$ with $x \ne y$, the Hölder inequality yields
\[
\mu_{\mathbb{R}^n}(F) \ge \frac{\langle F(x) - F(y), x - y\rangle}{\|x - y\|_2^2} \ge -\frac{\|F(x) - F(y)\|_2}{\|x - y\|_2},
\]
which implies that
\[
\|F(x) - F(y)\|_2 \ge -\mu_{\mathbb{R}^n}(F)\, \|x - y\|_2.
\]
Consequently, for any fixed point $y$, $\|F(x)\| \to \infty$ whenever $\|x\| \to \infty$; that is, $F$ is a norm-coercive mapping. By the norm-coerciveness theorem [11], $F$ is therefore a homeomorphism of $\mathbb{R}^n$. This proves Lemma 3. □

Remark 1. Lemma 3 shows, in particular, that $F(x) = 0$ has one and only one solution whenever $\Omega = \mathbb{R}^n$ and $\mu_\Omega(F) < 0$. Compared with [12], the nonlinear measure with the norm $\|\cdot\|_2$, combined with some matrix techniques, is convenient in practice, as will be shown later.
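As a simple worked special case (added for illustration; it does not appear in the original), consider an affine operator $F(x) = Mx + b$ with $M \in \mathbb{R}^{n \times n}$. Then
\[
\frac{\langle F(x) - F(y), x - y\rangle}{\|x-y\|_2^2} = \frac{(x-y)^T M (x-y)}{\|x-y\|_2^2},
\qquad\text{so}\qquad
\mu_{\mathbb{R}^n}(F) = \lambda_{\max}\!\left(\tfrac{1}{2}(M + M^T)\right),
\]
which is exactly the classical matrix measure (logarithmic norm) of $M$ induced by $\|\cdot\|_2$; on linear operators, Definition 2 thus reduces to the matrix measure.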
Theorem 1. The neural network model (3) has exactly one equilibrium point, which is globally exponentially stable, if there exist a matrix $P = P^T > 0$ and a diagonal matrix $Q = \mathrm{diag}(q_1, q_2, \ldots, q_n) > 0$ such that either of the following matrix inequalities holds:
\[
\text{(i)}\quad X_1 = \begin{bmatrix} -PA - AP + W^T L Q L W & P \\ P & -Q \end{bmatrix} < 0,
\]
\[
\text{(ii)}\quad X_2 = -PA - AP + W^T L Q L W + P Q^{-1} P < 0,
\]
where $L = \mathrm{diag}(l_1, l_2, \ldots, l_n)$.
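The feasibility of condition (i) can be tested numerically with any semidefinite programming tool. Below is a minimal sketch (an addition for illustration, assuming CVXPY with the SCS solver is installed; the paper itself uses the Matlab LMI Control Toolbox, and `theorem1_feasible` is an illustrative name):

```python
import cvxpy as cp
import numpy as np

def theorem1_feasible(A, W, L, eps=1e-6):
    """Search for P = P^T > 0 and diagonal Q > 0 satisfying LMI (i) of Theorem 1."""
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    q = cp.Variable(n)                      # diagonal entries of Q
    Q = cp.diag(q)
    top_left = -P @ A - A @ P + W.T @ L @ Q @ L @ W
    X1 = cp.bmat([[top_left, P], [P, -Q]])
    X1 = 0.5 * (X1 + X1.T)                  # symmetrize the LMI block for the parser
    cons = [P >> eps * np.eye(n), q >= eps, X1 << -eps * np.eye(2 * n)]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status == cp.OPTIMAL, P.value, q.value
```

Strict inequalities are approximated here with a small margin `eps`, a standard device since SDP solvers handle non-strict constraints.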
Proof. According to Lemma 2, one can easily check that condition (i) is equivalent to condition (ii). We prove the theorem in two steps: first, the existence and uniqueness of the equilibrium; then, that the given conditions also ensure the equilibrium to be globally exponentially stable.

Step 1: Define an operator $F: \mathbb{R}^n \to \mathbb{R}^n$ by
\[
F_i(x) = -a_i x_i + f_i\Bigl(\sum_{j=1}^{n} w_{ij} x_j + J_i\Bigr),
\]
where $x = [x_1, x_2, \ldots, x_n]^T \in \mathbb{R}^n$ and $F(x) = [F_1(x), F_2(x), \ldots, F_n(x)]^T$, and consider the following system:
\[
\frac{dy(t)}{dt} = P F(y(t)).
\tag{6}
\]
By the invertibility of the matrix $P$, systems (3) and (6) have the same equilibrium set. Next, we prove that $\mu_{\mathbb{R}^n}(PF) < 0$. From Definition 2, we have
\[
\mu_{\mathbb{R}^n}(PF) = \sup_{\substack{x \ne y \\ x, y \in \mathbb{R}^n}} \frac{\langle PF(x) - PF(y), x - y\rangle}{\|x - y\|_2^2},
\]
where $\langle PF(x) - PF(y), x - y\rangle = (x-y)^T (PF(x) - PF(y)) = (x-y)^T P (F(x) - F(y))$. Using Lemma 1, we deduce
\begin{align*}
(x-y)^T P (F(x) - F(y))
&= (x-y)^T P\bigl(-Ax + f(Wx+J) + Ay - f(Wy+J)\bigr)\\
&= -(x-y)^T P A (x-y) + (x-y)^T P\bigl(f(Wx+J) - f(Wy+J)\bigr)\\
&\le -(x-y)^T P A (x-y) + \tfrac{1}{2}(x-y)^T P Q^{-1} P (x-y)\\
&\quad + \tfrac{1}{2}\bigl(f(Wx+J) - f(Wy+J)\bigr)^T Q \bigl(f(Wx+J) - f(Wy+J)\bigr)\\
&= \tfrac{1}{2}(x-y)^T(-PA - AP + PQ^{-1}P)(x-y)\\
&\quad + \tfrac{1}{2}\bigl(f(Wx+J) - f(Wy+J)\bigr)^T Q \bigl(f(Wx+J) - f(Wy+J)\bigr)\\
&\le \tfrac{1}{2}(x-y)^T(-PA - AP + PQ^{-1}P)(x-y) + \tfrac{1}{2}(Wx - Wy)^T L Q L (Wx - Wy)\\
&= \tfrac{1}{2}(x-y)^T(-PA - AP + PQ^{-1}P + W^T L Q L W)(x-y)\\
&= \tfrac{1}{2}(x-y)^T X_2 (x-y);
\end{align*}
hence $(x-y)^T P (F(x) - F(y)) < 0$ since $X_2 < 0$. This implies $\mu_{\mathbb{R}^n}(PF) < 0$; by Lemma 3, we conclude that system (6), or equivalently system (3), has a unique equilibrium point $x^*$.

Step 2: To simplify the proof, we shift the equilibrium point $x^*$ of (3) to the origin via the transformation $z(t) = x(t) - x^*$; then Eq. (3) takes the form
\[
\frac{dz(t)}{dt} = -Az(t) + g(Wz(t-\tau)),
\tag{7}
\]
where $z(t) = [z_1(t), z_2(t), \ldots, z_n(t)]^T$ and $g(Wz(t)) = [g_1(w_1 z(t)), g_2(w_2 z(t)), \ldots, g_n(w_n z(t))]^T$ with $g_i(w_i z(t)) = f_i(w_i z(t) + w_i x^* + J_i) - f_i(w_i x^* + J_i)$, where $w_i = [w_{i1}, w_{i2}, \ldots, w_{in}]$ denotes the $i$th row of the matrix $W$. Moreover, according to condition (H), it is easy to see that
\[
|g_i(y_1) - g_i(y_2)| \le l_i\, |y_1 - y_2|, \qquad g_i(0) = 0
\tag{8}
\]
for any $y_1, y_2 \in \mathbb{R}$ with $y_1 \ne y_2$, $i = 1, 2, \ldots, n$. In view of (i), we can ascertain that there exists a scalar $\beta > 0$ such that
\[
\beta P - PA - AP + PQ^{-1}P + e^{\beta\tau}\, W^T L Q L W < 0.
\tag{9}
\]
Let us consider the following Lyapunov functional candidate:
\[
V(t, z_t) \triangleq e^{\beta t} z(t)^T P z(t) + \int_{t-\tau}^{t} e^{\beta(s+\tau)}\, g^T(Wz(s))\, Q\, g(Wz(s))\, ds,
\tag{10}
\]
where $z_t(\theta) = z(t+\theta)$, $\theta \in [-\tau, 0]$. The time derivative of $V$ along the trajectory of (7) takes the form
\begin{align*}
\frac{dV(t, z_t)}{dt}
&= 2 e^{\beta t} z(t)^T P \dot z(t) + \beta e^{\beta t} z(t)^T P z(t)
 + e^{\beta(t+\tau)} g^T(Wz(t))\, Q\, g(Wz(t)) - e^{\beta t} g^T(Wz(t-\tau))\, Q\, g(Wz(t-\tau))\\
&= 2 e^{\beta t} z(t)^T P\bigl(-Az(t) + g(Wz(t-\tau))\bigr) + \beta e^{\beta t} z(t)^T P z(t)\\
&\quad + e^{\beta(t+\tau)} g^T(Wz(t))\, Q\, g(Wz(t)) - e^{\beta t} g^T(Wz(t-\tau))\, Q\, g(Wz(t-\tau))\\
&\le e^{\beta t}\bigl[z(t)^T(-PA - AP) z(t) + z(t)^T (PQ^{-1}P + \beta P) z(t)\bigr]
 + e^{\beta(t+\tau)} z(t)^T W^T L Q L W z(t)\\
&= e^{\beta t} z(t)^T\bigl[\beta P - PA - AP + PQ^{-1}P + e^{\beta\tau} W^T L Q L W\bigr] z(t),
\end{align*}
where the inequality applies Lemma 1 to the cross term $2 z(t)^T P g(Wz(t-\tau))$, cancels the resulting delayed quadratic term, and bounds $g^T Q g$ via (8). Clearly, (9) then implies $dV(t, z_t)/dt \le 0$; hence $V(t, z_t) \le V(0)$ for all $t > 0$. From (10), we easily obtain
\[
V(0) \le \bigl[\lambda_{\max}(P) + \tau e^{\beta\tau} \lambda_{\max}(W^T L Q L W)\bigr] \|\phi\|^2;
\tag{11}
\]
on the other hand, from (10) we also have
\[
V(t, z_t) \ge \lambda_{\min}(P)\, e^{\beta t}\, \|z(t)\|^2, \qquad \forall t > 0.
\tag{12}
\]
Now, combining (11) with (12), we obtain
\[
\|z(t)\| \le \sqrt{\frac{\lambda_{\max}(P) + \tau e^{\beta\tau} \lambda_{\max}(W^T L Q L W)}{\lambda_{\min}(P)}}\; e^{-(\beta/2)t}\, \|\phi\|, \qquad \forall t > 0.
\]
This means that the origin of (7), or equivalently the equilibrium point $x^*$ of (3), is globally exponentially stable. This completes the proof. □
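Once a feasible pair $(P, Q)$ has been found, inequality (9) also yields a computable lower bound $\beta/2$ on the exponential decay rate: one simply searches for the largest $\beta$ keeping (9) negative definite. A hypothetical helper illustrating this (an addition, not part of the original; `max_decay_rate` and its arguments are illustrative names):

```python
import numpy as np

def max_decay_rate(A, W, L, P, q, tau, beta_hi=10.0, iters=60):
    """Bisect for the largest beta > 0 satisfying inequality (9):
       beta*P - P A - A P + P Q^{-1} P + exp(beta*tau) W^T L Q L W < 0."""
    Q, Qinv = np.diag(q), np.diag(1.0 / q)
    def satisfies_9(beta):
        M = (beta * P - P @ A - A @ P + P @ Qinv @ P
             + np.exp(beta * tau) * W.T @ L @ Q @ L @ W)
        return np.linalg.eigvalsh(M).max() < 0   # M is symmetric by construction
    if not satisfies_9(1e-9):                    # condition (ii) itself must hold first
        return 0.0
    lo, hi = 0.0, beta_hi
    for _ in range(iters):                       # the feasible set is an interval [0, beta*)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if satisfies_9(mid) else (lo, mid)
    return lo
```

Bisection is justified because the left-hand side of (9) is monotonically increasing in $\beta$ (both $\beta P$ and $e^{\beta\tau}$ grow with $\beta$).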
As in [8], Theorem 1 can be extended to the case of a time-varying delay $\tau = \tau(t)$; we state the result without proof.

Theorem 2. Suppose there exists $\eta < 1$ such that $\dot\tau(t) \le \eta$ for all $t > 0$. Then the neural network (3) has exactly one equilibrium point, which is globally exponentially stable, if there exist a matrix $P = P^T > 0$ and a diagonal matrix $Q = \mathrm{diag}(q_1, q_2, \ldots, q_n) > 0$ such that either of the following matrix inequalities holds:
\[
\text{(i)}\quad X_1 = \begin{bmatrix} -PA - AP + W^T L Q L W & P \\ P & -(1-\eta)\, Q \end{bmatrix} < 0,
\]
\[
\text{(ii)}\quad X_2 = -PA - AP + W^T L Q L W + \frac{1}{1-\eta}\, P Q^{-1} P < 0.
\]

Remark 2. In [18,7], the authors obtained the global asymptotic stability (respectively, the global exponential stability) of the neural network (3) (respectively, the neural network (1)) under both the prior existence of an equilibrium and condition (H*). In our paper, we only need the activation functions to satisfy condition (H), which removes the boundedness and monotonic nondecreasing requirements on the $f_i$.
Therefore, the criteria here are less restrictive than the results in [18,7].

Next, we give a simple condition ensuring the global exponential stability of the neural network (3).

Corollary 1. If $-2A + W^T W + L^2 < 0$ holds, then the neural network (3) has exactly one equilibrium point, which is globally exponentially stable.

Proof. Take $P = I$ and $Q = (L^{-1})^2$ in the proof of Theorem 1; the result follows directly, so the details are omitted. □
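Since the corollary involves only constant matrices, it can be checked with a single eigenvalue computation (an illustrative snippet added here, not from the original):

```python
import numpy as np

def corollary1_holds(A, W, L):
    """Check -2A + W^T W + L^2 < 0 via the largest eigenvalue (A and L diagonal)."""
    M = -2.0 * A + W.T @ W + L @ L    # symmetric, so eigvalsh applies
    return np.linalg.eigvalsh(M).max() < 0
```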
3. Two illustrative examples

In this section, we give two examples to show the effectiveness of our results.

Example 1. Consider the following recurrent neural network with delay:
\[
\frac{dx(t)}{dt} = -Ax(t) + f(Wx(t-\tau)+J),
\]
with parameters
\[
A = \begin{bmatrix} 3.0927 & 0 \\ 0 & 4.0118 \end{bmatrix}, \qquad
W = \begin{bmatrix} 2.0012 & 1.2816 \\ 2.1658 & 2.3418 \end{bmatrix},
\]
and activation functions $f_i(x) = \frac{1}{3}\sin x + \frac{1}{6}x$ ($i = 1, 2$). Obviously, $f_i$ satisfies condition (H) with $l_i = \frac{1}{2}$ ($i = 1, 2$), so $L = \mathrm{diag}(\frac{1}{2}, \frac{1}{2})$; on the other hand, it can be verified that $f_i$ is unbounded and nonmonotonic. Thus, the conditions in [18,7] fail to determine whether this neural network is globally stable. However, solving (i) in Theorem 1 with the Matlab LMI Control Toolbox, we obtain
\[
P = \begin{bmatrix} 0.6210 & 0.3440 \\ 0.3440 & 0.5474 \end{bmatrix}, \qquad
Q = \begin{bmatrix} 1.3834 & 0 \\ 0 & 1.3834 \end{bmatrix}.
\]
Therefore, by Theorem 1, this recurrent neural network has one and only one equilibrium point, which is globally exponentially stable. For numerical simulation, let $J = [62.2, 60.6]^T$ and the delay $\tau = 1$. Fig. 1 depicts the time responses of the state variables from 30 random constant initial states in the set $[-10, 10] \times [-10, 10]$, with step size $h = 0.01$. It confirms that the proposed condition leads to the existence and uniqueness of an equilibrium point which is globally exponentially stable.
Fig. 1. Transient response of state variables $x_1(t)$ and $x_2(t)$.
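A simulation in the spirit of Fig. 1 can be reproduced with a forward-Euler scheme for the delayed system (a minimal sketch added here, using the parameter values exactly as printed above; it is not the authors' code):

```python
import numpy as np

A = np.diag([3.0927, 4.0118])
W = np.array([[2.0012, 1.2816], [2.1658, 2.3418]])
J = np.array([62.2, 60.6])
f = lambda y: np.sin(y) / 3.0 + y / 6.0      # activation of Example 1
tau, h, T = 1.0, 0.01, 8.0
d = int(round(tau / h))                      # delay expressed in steps
steps = int(round(T / h))

rng = np.random.default_rng(0)
x = np.empty((steps + d + 1, 2))
x[: d + 1] = rng.uniform(-10.0, 10.0, size=2)   # constant initial function on [-tau, 0]
for k in range(d, steps + d):
    x[k + 1] = x[k] + h * (-A @ x[k] + f(W @ x[k - d] + J))
print(x[-1])                                  # approximate equilibrium x*
```

Repeating the loop for 30 random initial conditions and plotting $x_1(t)$ and $x_2(t)$ against $t$ gives trajectories that all converge to the same point, as in Fig. 1.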
Example 2. Consider the recurrent neural network (3) with $\tau = 0$ and parameters
\[
A = \begin{bmatrix} 2.1 & 0 \\ 0 & 4 \end{bmatrix}, \qquad
W = \begin{bmatrix} 2.1 & 0.1755 \\ 0.03 & 2.85 \end{bmatrix},
\]
and activation functions $f_1(x) = \frac{2}{3}\sin x + \frac{1}{3}x$, $f_2(x) = \frac{2}{15}\sin x + \frac{1}{15}x$. Obviously, $f_i$ satisfies condition (H) with $l_1 = 1$, $l_2 = \frac{1}{5}$; that is, $L = \mathrm{diag}(1, \frac{1}{5})$. For this neural network, it can be verified that the condition of Theorem 4 in [6] is not satisfied, so that result fails to determine whether this neural network is globally exponentially stable. However, solving (i) in Theorem 1 with the Matlab LMI Control Toolbox shows that it is feasible, with solutions
\[
P = \begin{bmatrix} 6.3435 & 0.0936 \\ 0.0936 & 2.0231 \end{bmatrix}, \qquad
Q = \begin{bmatrix} 3.0241 & 0 \\ 0 & 9.1369 \end{bmatrix}.
\]
Therefore, by Theorem 1, this recurrent neural network has one and only one equilibrium point, which is globally exponentially stable. Its numerical simulation is similar to that of Example 1 and is omitted here.

4. Conclusions

This paper is concerned with the global exponential stability of static delayed recurrent neural networks. Based on the nonlinear measure in the sense of $\|\cdot\|_2$, we obtain a new sufficient condition, in the form of a matrix inequality, for the existence and uniqueness of the equilibrium point. The condition obtained also ensures the global exponential stability of the neural network. In addition, the resulting criteria, which are independent of the delay and can be solved numerically by recently developed interior-point algorithms, are easy to check and apply. We believe that the relaxed restrictions of the result will be meaningful for the design and application of recurrent neural networks.

References

[1] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, 1994.
[2] J. Cao, J. Wang, Global asymptotic and robust stability of recurrent neural networks with time delays, IEEE Trans. Circuits Syst. I 52 (2) (2005) 417–426.
[3] T. Ensari, S. Arik, Global stability of a class of neural networks with time-varying delay, IEEE Trans. Circuits Syst. II 52 (3) (2005) 126–130.
[4] J.K. Hale, S.M. Verduyn Lunel, Introduction to Functional Differential Equations, Springer, New York, 1993.
[5] J.J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. USA 79 (1982) 2554–2558.
[6] S. Hu, J. Wang, Global stability of a class of continuous-time recurrent neural networks, IEEE Trans. Circuits Syst. I 49 (9) (2002) 1334–1347.
[7] J. Liang, J. Cao, A based-on LMI stability criterion for delayed recurrent neural networks, Chaos Solitons Fractals 28 (1) (2006) 154–160.
[8] X. Liao, G. Chen, E.N. Sanchez, LMI-based approach for asymptotically stability analysis of delayed neural networks, IEEE Trans. Circuits Syst. I 49 (7) (2002) 1033–1039.
[9] C.M. Marcus, R.M. Westervelt, Stability of analog neural networks with delay, Phys. Rev. A 39 (1) (1989) 347–359.
[10] M. Morita, Associative memory with nonmonotone dynamics, Neural Networks 6 (1) (1993) 115–126.
[11] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[12] H. Qiao, J. Peng, Z. Xu, Nonlinear measures: a new approach to exponential stability analysis for Hopfield-type neural networks, IEEE Trans. Neural Networks 12 (2) (2001) 360–370.
[13] H. Qiao, J. Peng, Z. Xu, B. Zhang, A reference model approach to stability analysis of neural networks, IEEE Trans. Syst. Man Cybern. B 33 (6) (2003) 925–936.
[14] E.N. Sanchez, J.P. Perez, Input-to-state stability (ISS) analysis for dynamic neural networks, IEEE Trans. Circuits Syst. I 46 (11) (1999) 1395–1398.
[15] H.S. Seung, How the brain keeps the eye still, Proc. Natl. Acad. Sci. USA 93 (1996) 13339–13344.
[16] V. Singh, A generalized LMI-based approach to the global asymptotic stability of delayed cellular neural networks, IEEE Trans. Neural Networks 15 (1) (2004) 223–225.
[17] Y. Xia, J. Wang, On the stability of globally projected dynamical systems, J. Optim. Theory Appl. 106 (1) (2000) 129–150.
[18] S. Xu, J. Lam, D.W.C. Ho, Y. Zou, Global robust exponential stability analysis for interval recurrent neural networks, Phys. Lett. A 325 (2) (2004) 124–133.
[19] Z. Xu, H. Qiao, J. Peng, B. Zhang, A comparative study of two modeling approaches in neural networks, Neural Networks 17 (1) (2004) 73–85.
[20] K. Zhang, Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory, J. Neurosci. 16 (6) (1996) 2112–2126.

Ping Li was born in Jiangsu Province, China, in 1981. He graduated from the Department of Mathematics of Southeast University, Nanjing, China, in 2004, and was then admitted to the graduate program at Southeast University, exempt from the admission test. He is now working toward the M.S. degree in mathematics at Southeast University. His current research interests include stability theory, neural networks, complex networks, and hybrid systems.
Jinde Cao received the B.S. degree from Anhui Normal University, Wuhu, China, the M.S. degree from Yunnan University, Kunming, China, and the Ph.D. degree from Sichuan University, Chengdu, China, all in mathematics/applied mathematics, in 1986, 1989, and 1998, respectively. From March 1989 to May 2000, he was with Yunnan University. In May 2000, he joined the Department of Mathematics, Southeast University, Nanjing, China. From July 2001 to June 2002, he was a Postdoctoral Research Fellow in the Department of Automation and Computer-Aided Engineering, Chinese University of Hong Kong, Hong Kong. From August 2002 to October 2002, he was a Senior Visiting Scholar at the Institute of Mathematics, Fudan University, Shanghai, China. From February 2003 to May 2003, he was a Senior Research Associate in the Department of Mathematics, City University of Hong Kong, Hong Kong. From July 2003 to September 2003, he was a Senior Visiting Scholar at the Institute of Intelligent Machines, Chinese Academy of Sciences, Hefei, China. From January 2004 to April 2004, he was a Research Fellow in the Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Hong Kong. From January 2005 to April 2005, he was a Research Fellow in the Department of Mathematics, City University of Hong Kong, Hong Kong. From January 2006 to April 2006, he was a Research Fellow in the Department of Electronics Engineering, City University of Hong Kong, Hong Kong. He is currently a Professor and Doctoral Advisor at Southeast University; prior to this, he was a Professor at Yunnan University from 1996 to 2000. He is the author or coauthor of more than 130 journal papers and five edited books, and a reviewer for Mathematical Reviews and Zentralblatt MATH. His research interests include nonlinear systems, neural networks, complex systems and complex networks, stability theory, and applied mathematics.