Convergence analysis for second-order interval Cohen–Grossberg neural networks

Sitian Qin*, Jingxue Xu, Xin Shi

Department of Mathematics, Harbin Institute of Technology at Weihai, Weihai 264209, China

*Corresponding author.
Article history: Received 18 July 2013; Accepted 11 January 2014; Available online 23 January 2014

Keywords: Second-order interval Cohen–Grossberg neural networks; Global robust stability; Homeomorphic mapping theorem; Lyapunov functional method
Abstract: This paper presents new theoretical results on the global stability of a class of second-order interval Cohen–Grossberg neural networks. New criteria are derived to ensure the existence, uniqueness and global stability of the equilibrium point of such networks under parameter uncertainties, and our results are compared with the corresponding existing results. Examples are provided to show the effectiveness of the obtained results.
1. Introduction

In 1983, Cohen and Grossberg proposed a class of neural networks, which are now called Cohen–Grossberg neural networks [1]. Cohen–Grossberg neural networks have received considerable attention due to their potential applications to classification, associative memory, parallel computation and optimization problems (see [2–5,7–14]). Such applications depend heavily on the dynamical behaviors of the networks, such as stability and oscillatory properties. In particular, applications of Cohen–Grossberg neural networks rely heavily on the stability of the networks. It is well known that, in electronic implementations, the stability of a neural network may be destroyed by unavoidable uncertainty arising from time delays, modeling errors, external disturbances and parameter fluctuations (see [15–29]). Hence, the study of global stability of interval Cohen–Grossberg neural networks has received considerable attention, and a variety of interesting results has been presented in the literature (see [2–5,7–14]). For example, in [7], the authors studied the exponential stability of interval Cohen–Grossberg neural networks using linear matrix inequality, matrix norm and Halanay inequality techniques. In [8], based on the comparison principle, the authors discussed the existence and uniqueness of solutions of interval fuzzy Cohen–Grossberg neural networks with piecewise constant argument. Via state transmission matrices, Wang et al. [13] studied the global exponential stability of a class of interval Cohen–Grossberg neural networks with both multiple time-varying delays and continuously distributed delays.

However, as far as we know, few works have discussed the stability (or robust stability) of general second-order Cohen–Grossberg neural networks. In fact, in some cases, second-order Cohen–Grossberg neural networks have wider applications than first-order neural networks, which motivates us to study them (see [4–6]). For example, in [4,5], the authors considered the following more general second-order Cohen–Grossberg neural network with multiple delays:
$$
\begin{cases}
\dfrac{dx_i(t)}{dt} = a_i(x_i(t))\Big\{-b_i(x_i(t)) + \sum\limits_{j=1}^{n} s_{ji} f_j[x_j(t-r_{ji}),\, y_j(t-\tau_{ji})] + I_i\Big\},\\[2mm]
\dfrac{dy_i(t)}{dt} = c_i(y_i(t))\Big\{-d_i(y_i(t)) + \sum\limits_{j=1}^{n} t_{ji} g_j[x_j(t-d_{ji}),\, y_j(t-\eta_{ji})] + J_i\Big\}.
\end{cases}\tag{1}
$$
By the topological degree method, the contraction mapping principle and the Lyapunov functional method, they studied the global robust exponential stability of the equilibrium solution of neural network (1) in [4,5]. Obviously, [4,5] only considered the delayed interconnection weight matrix and delayed activation functions; hence, the conclusions of [4,5] are conservative to some extent. Meanwhile, Ke and Miao [6] studied the following inertial Cohen–Grossberg-type neural network with time delays:

$$
\frac{d^2 x_i(t)}{dt^2} = -b_i\,\frac{dx_i(t)}{dt} - a_i(x_i(t))\Big[h_i(x_i(t)) - \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) - \sum_{j=1}^{n} \bar a_{ij} f_j(x_j(t-\tau_{ij})) + I_i\Big]. \tag{2}
$$
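To make the reduction mentioned below concrete: a standard substitution for inertial networks of this type sets $v_i(t) = \frac{dx_i(t)}{dt} + \xi_i x_i(t)$ for some constant $\xi_i > 0$ (a sketch only; the exact substitution used in [6] may differ). Then (2) becomes the first-order pair

$$
\begin{cases}
\dfrac{dx_i(t)}{dt} = -\xi_i x_i(t) + v_i(t),\\[1mm]
\dfrac{dv_i(t)}{dt} = \xi_i(b_i-\xi_i)x_i(t) - (b_i-\xi_i)v_i(t) - a_i(x_i(t))\Big[h_i(x_i(t)) - \sum\limits_{j=1}^{n} a_{ij} f_j(x_j(t)) - \sum\limits_{j=1}^{n} \bar a_{ij} f_j(x_j(t-\tau_{ij})) + I_i\Big],
\end{cases}
$$

which has the coupled second-order Cohen–Grossberg structure considered in this paper.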
By a properly chosen variable substitution of this kind, system (2) can be equivalently transformed into a second-order Cohen–Grossberg neural network, and they then studied the exponential stability of the transformed network. Motivated by the above works, in this paper we consider the following general second-order Cohen–Grossberg neural network:
$$
\begin{cases}
\dot x(t) = a(x(t))\{-b(x(t)) + U f[x(t), y(t)] + S f[x(t-\tau), y(t-\tau)] + I\},\\
\dot y(t) = c(y(t))\{-d(y(t)) + V g[x(t), y(t)] + T g[x(t-\tau), y(t-\tau)] + J\},
\end{cases}\tag{3}
$$
where $x = (x_1, x_2, \dots, x_n)^T \in R^n$ and $y = (y_1, y_2, \dots, y_n)^T \in R^n$ are the state vectors associated with the neurons; $a(x) = \mathrm{diag}\{a_i(x_i)\} \in R^{n\times n}$ and $c(y) = \mathrm{diag}\{c_i(y_i)\} \in R^{n\times n}$ are positive, continuous and bounded amplification functions; $b(x) = (b_1(x_1), \dots, b_n(x_n))^T \in R^n$ and $d(y) = (d_1(y_1), \dots, d_n(y_n))^T \in R^n$ are appropriately behaved functions; $f(x,y) = (f_1(x_1,y_1), \dots, f_n(x_n,y_n))^T \in R^n$ and $g(x,y) = (g_1(x_1,y_1), \dots, g_n(x_n,y_n))^T \in R^n$ are the activation functions; and $I = (I_1, \dots, I_n)^T$, $J = (J_1, \dots, J_n)^T \in R^n$ are the constant input vectors.

We will study the global stability of the equilibrium point of interval neural network (3). First, by the homeomorphism method, we discuss the existence and uniqueness of the equilibrium point of neural network (3). Then, we prove the global robust stability of neural network (3) by constructing a Lyapunov functional. Finally, some comparisons with previous works are made through examples to show the advantages of our results.

2. Preliminaries

In this section, we present some definitions and lemmas that are needed in the remainder of this paper. For a vector $x = (x_1, x_2, \dots, x_n)^T \in R^n$, $|x| = (|x_1|, |x_2|, \dots, |x_n|)^T \in R^n$ denotes the absolute-value vector and $\|x\| = \big(\sum_{i=1}^n x_i^2\big)^{1/2}$ denotes the Euclidean norm. For a matrix $A = (a_{ij})_{n\times n}$, $A^T$ denotes the transpose of $A$, $|A| = (|a_{ij}|)_{n\times n}$ denotes the absolute-value matrix, $A > (\ge)\, 0$ means that $A$ is a symmetric positive definite (semi-definite) matrix, $A < (\le)\, 0$ means that $A$ is a symmetric negative definite (semi-definite) matrix, $\lambda_m(A)$ denotes the smallest eigenvalue of $A$, and $\|A\|_2 = [\lambda_{\max}(A^T A)]^{1/2}$ denotes the spectral norm. For an interval matrix $[\underline A, \overline A]$, denote $A^* = \frac12(\overline A + \underline A)$ and $A_* = \frac12(\overline A - \underline A)$. Let $E$ be the unit matrix. In order to completely characterize the parameter uncertainties, the parameters $u_{ij}, v_{ij}, s_{ij}, t_{ij}$ in neural network (3) are assumed to satisfy the following conditions:
$$
\begin{aligned}
U_I &:= [\underline U, \overline U] = \{U = (u_{ij}) : \underline U \le U \le \overline U,\ \text{i.e.},\ \underline u_{ij} \le u_{ij} \le \overline u_{ij},\ i,j = 1,2,\dots,n\},\\
V_I &:= [\underline V, \overline V] = \{V = (v_{ij}) : \underline V \le V \le \overline V,\ \text{i.e.},\ \underline v_{ij} \le v_{ij} \le \overline v_{ij},\ i,j = 1,2,\dots,n\},\\
S_I &:= [\underline S, \overline S] = \{S = (s_{ij}) : \underline S \le S \le \overline S,\ \text{i.e.},\ \underline s_{ij} \le s_{ij} \le \overline s_{ij},\ i,j = 1,2,\dots,n\},\\
T_I &:= [\underline T, \overline T] = \{T = (t_{ij}) : \underline T \le T \le \overline T,\ \text{i.e.},\ \underline t_{ij} \le t_{ij} \le \overline t_{ij},\ i,j = 1,2,\dots,n\}.
\end{aligned}\tag{4}
$$

In this paper, we always assume that the following hypotheses hold for $i = 1, \dots, n$:

($A_1$): There exist positive constants $k_i^{(1)}, k_i^{(2)}, h_i^{(1)}, h_i^{(2)}$ such that
$$
\begin{aligned}
(a_1):\ & 0 \le |f_i(s_1,t_1) - f_i(s_2,t_2)|^2 \le k_i^{(1)2}|s_1-s_2|^2 + h_i^{(1)2}|t_1-t_2|^2,\\
(a_2):\ & 0 \le |g_i(s_1,t_1) - g_i(s_2,t_2)|^2 \le k_i^{(2)2}|s_1-s_2|^2 + h_i^{(2)2}|t_1-t_2|^2
\end{aligned}
$$
for any $s_1, s_2, t_1, t_2 \in R$.

($A_2$): $a_i(\cdot), c_i(\cdot): R \to R$ are continuous and there exist positive constants $m_j, n_j\ (j = 1,2)$ such that $0 < m_1 < a_i(t) < m_2$ and $0 < n_1 < c_i(t) < n_2$ for $t \in R$.

($A_3$): $b_i(\cdot), d_i(\cdot): R \to R$ are continuous and there exist positive constants $A_i, B_i, C_i, D_i$ satisfying
$$
b_i(s) = B_i s + q_i(s) \ \text{with}\ |q_i(s) - q_i(t)| \le A_i|s-t|, \qquad d_i(t) = D_i t + r_i(t) \ \text{with}\ |r_i(s) - r_i(t)| \le C_i|s-t|
$$

for $s, t \in R$. For simplicity, under the above assumptions, we denote

$$
\widetilde K = \mathrm{diag}\big(k_i^{(1)2} + k_i^{(2)2}\big),\quad \widetilde H = \mathrm{diag}\big(h_i^{(1)2} + h_i^{(2)2}\big),\quad A = \mathrm{diag}(A_i),\quad B = \mathrm{diag}(B_i),\quad C = \mathrm{diag}(C_i),\quad D = \mathrm{diag}(D_i).\tag{5}
$$
Definition 2.1. A point $(x^*, y^*)^T \in R^{2n}$ is said to be an equilibrium point of neural network (3) if

$$
\begin{cases}
a(x^*)\{-b(x^*) + U f[x^*, y^*] + S f[x^*, y^*] + I\} = 0,\\
c(y^*)\{-d(y^*) + V g[x^*, y^*] + T g[x^*, y^*] + J\} = 0.
\end{cases}
$$

Lemma 2.1 [15]. If the matrix $S$ (or $T$) in (4) satisfies $S \in [\underline S, \overline S]$ (or $T \in [\underline T, \overline T]$), then the following inequality holds:

$$
\|S\|_2 \le \|\widetilde S\|_2 \quad (\text{or } \|T\|_2 \le \|\widetilde T\|_2),
$$

where $\|\widetilde S\|_2 = \min\Big\{\|S^*\|_2 + \|S_*\|_2,\ \sqrt{\|S^*\|_2^2 + \|S_*\|_2^2 + 2\,\|S^{*T}|S_*|\|_2},\ \|\widehat S\|_2\Big\}$, $S^* = \frac12(\overline S + \underline S)$, $S_* = \frac12(\overline S - \underline S)$, and $\widehat S = (\hat s_{ij})_{n\times n}$ with $\hat s_{ij} = \max\{|\underline s_{ij}|, |\overline s_{ij}|\}$. $\widetilde T$ is defined similarly.
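To make Lemma 2.1 concrete, the bound $\|\widetilde S\|_2$ is easy to evaluate numerically. The following sketch is ours (not from [15]) and simply transcribes the formula above with NumPy:

```python
import numpy as np

def interval_norm_bound(S_lo, S_hi):
    """Upper bound from Lemma 2.1 on ||S||_2 over all S with S_lo <= S <= S_hi."""
    S_star = (S_hi + S_lo) / 2                      # midpoint matrix S^*
    S_rad = (S_hi - S_lo) / 2                       # radius matrix S_*
    S_hat = np.maximum(np.abs(S_lo), np.abs(S_hi))  # entrywise max modulus
    n2 = lambda M: np.linalg.norm(M, 2)             # spectral norm
    return min(n2(S_star) + n2(S_rad),
               np.sqrt(n2(S_star)**2 + n2(S_rad)**2
                       + 2 * n2(S_star.T @ np.abs(S_rad))),
               n2(S_hat))
```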
Lemma 2.2. If $H(x) \in C^0$ satisfies the following conditions:

(i) $H(x) \ne H(y)$ for all $x \ne y$;
(ii) $\|H(x)\| \to \infty$ as $\|x\| \to \infty$;

then $H(x)$ is a homeomorphism of $R^n$.

3. Existence and uniqueness analysis

In this section, we present sufficient conditions that guarantee the existence and uniqueness of the equilibrium point of Cohen–Grossberg neural network (3). We first introduce the following notation:
$$
\begin{aligned}
\widetilde U &= \mathrm{diag}(\tilde u_i), & \tilde u_i &= \sum_{j=1}^{n} \max(|\underline u_{ij}|^2, |\overline u_{ij}|^2),\\
\widetilde V &= \mathrm{diag}(\tilde v_i), & \tilde v_i &= \sum_{j=1}^{n} \max(|\underline v_{ij}|^2, |\overline v_{ij}|^2),\\
Q &= \mathrm{diag}(q_i), & q_i &= \|\widetilde S\|_2^2\, k_i^{(1)2} + \|\widetilde T\|_2^2\, k_i^{(2)2},\\
R &= \mathrm{diag}(r_i), & r_i &= \|\widetilde S\|_2^2\, h_i^{(1)2} + \|\widetilde T\|_2^2\, h_i^{(2)2}.
\end{aligned}
$$
Theorem 3.1. Under hypotheses ($A_1$)–($A_3$), Cohen–Grossberg neural network (3) has a unique equilibrium point if the following condition holds:

$$
\mathcal H = \begin{pmatrix} 2B - 2A - \widetilde U - Q - n\widetilde K - E & 0\\ 0 & 2D - 2C - \widetilde V - R - n\widetilde H - E \end{pmatrix} > 0, \tag{6}
$$

where $E \in R^{n\times n}$ is the unit matrix.

Proof. We first define the following map:
$$
H(x, y) = \begin{pmatrix} -b(x) + U f(x,y) + S f(x,y) + I\\ -d(y) + V g(x,y) + T g(x,y) + J \end{pmatrix}. \tag{7}
$$

Obviously, Cohen–Grossberg neural network (3) has a unique equilibrium point if and only if $H(x,y)$ is a homeomorphism on $R^{2n}$. Hence, we next prove that $H(x,y)$ is a homeomorphism in the following two steps.
Step 1: In this step, we prove that $H$ is an injective map. In fact, for any $(x_1, y_1) \in R^{2n}$ and $(x_2, y_2) \in R^{2n}$ with $(x_1, y_1) \ne (x_2, y_2)$, we have

$$
\begin{aligned}
2\begin{pmatrix} x_1 - x_2\\ y_1 - y_2\end{pmatrix}^T (H(x_1,y_1) - H(x_2,y_2)) ={}& -2(x_1-x_2)^T(b(x_1)-b(x_2)) + 2(x_1-x_2)^T(U+S)[f(x_1,y_1)-f(x_2,y_2)]\\
& -2(y_1-y_2)^T(d(y_1)-d(y_2)) + 2(y_1-y_2)^T(V+T)[g(x_1,y_1)-g(x_2,y_2)]\\
={}& W_1 + W_2,
\end{aligned}\tag{8}
$$

where $W_1 = -2(x_1-x_2)^T[b(x_1)-b(x_2)] + 2(x_1-x_2)^T(U+S)[f(x_1,y_1)-f(x_2,y_2)]$ and $W_2 = -2(y_1-y_2)^T[d(y_1)-d(y_2)] + 2(y_1-y_2)^T(V+T)[g(x_1,y_1)-g(x_2,y_2)]$. We first consider $W_1$.

(i) It is easy to obtain that

$$
\begin{aligned}
-2(x_1-x_2)^T[b(x_1)-b(x_2)] &= -2(x_1-x_2)^T[(Bx_1 + q(x_1)) - (Bx_2 + q(x_2))]\\
&= -2(x_1-x_2)^T B(x_1-x_2) - 2(x_1-x_2)^T(q(x_1)-q(x_2))\\
&\le -2(x_1-x_2)^T B(x_1-x_2) + 2|x_1-x_2|^T |q(x_1)-q(x_2)|\\
&\le -2(x_1-x_2)^T B(x_1-x_2) + 2|x_1-x_2|^T A|x_1-x_2|\\
&= (x_1-x_2)^T(-2B + 2A)(x_1-x_2).
\end{aligned}
$$

(ii) Next, we estimate the second term of $W_1$, i.e., $2(x_1-x_2)^T U[f(x_1,y_1)-f(x_2,y_2)]$:

$$
\begin{aligned}
2(x_1-x_2)^T U[f(x_1,y_1)-f(x_2,y_2)] &= 2\sum_{i,j=1}^{n} \big(x_i^{(1)} - x_i^{(2)}\big) u_{ij}\big[f_j(x_j^{(1)}, y_j^{(1)}) - f_j(x_j^{(2)}, y_j^{(2)})\big]\\
&\le 2\sum_{i,j=1}^{n} \frac{|x_i^{(1)}-x_i^{(2)}|^2 |u_{ij}|^2 + |f_j(x_j^{(1)},y_j^{(1)}) - f_j(x_j^{(2)},y_j^{(2)})|^2}{2}\\
&= \sum_{i=1}^n\sum_{j=1}^n |x_i^{(1)}-x_i^{(2)}|^2 |u_{ij}|^2 + \sum_{i=1}^n\sum_{j=1}^n |f_j(x_j^{(1)},y_j^{(1)}) - f_j(x_j^{(2)},y_j^{(2)})|^2\\
&\le \sum_{i=1}^n \Big(\sum_{j=1}^n |u_{ij}|^2\Big)|x_i^{(1)}-x_i^{(2)}|^2 + n\sum_{j=1}^n \big[k_j^{(1)2}|x_j^{(1)}-x_j^{(2)}|^2 + h_j^{(1)2}|y_j^{(1)}-y_j^{(2)}|^2\big]\\
&\le \sum_{i=1}^n \Big[\sum_{j=1}^n \max(|\underline u_{ij}|^2, |\overline u_{ij}|^2)\Big]|x_i^{(1)}-x_i^{(2)}|^2 + n\sum_{j=1}^n \big[k_j^{(1)2}|x_j^{(1)}-x_j^{(2)}|^2 + h_j^{(1)2}|y_j^{(1)}-y_j^{(2)}|^2\big]\\
&= (x_1-x_2)^T \widetilde U (x_1-x_2) + n\sum_{j=1}^n \big[k_j^{(1)2}|x_j^{(1)}-x_j^{(2)}|^2 + h_j^{(1)2}|y_j^{(1)}-y_j^{(2)}|^2\big],
\end{aligned}
$$

where $\widetilde U = \mathrm{diag}(\tilde u_i)$, $\tilde u_i = \sum_{j=1}^n \max(|\underline u_{ij}|^2, |\overline u_{ij}|^2)$.

(iii) For $2(x_1-x_2)^T S[f(x_1,y_1)-f(x_2,y_2)]$, we have

$$
\begin{aligned}
2(x_1-x_2)^T S[f(x_1,y_1)-f(x_2,y_2)] &\le (x_1-x_2)^T(x_1-x_2) + [f(x_1,y_1)-f(x_2,y_2)]^T S^T S [f(x_1,y_1)-f(x_2,y_2)]\\
&\le (x_1-x_2)^T(x_1-x_2) + \|S^T S\|_2\, |f(x_1,y_1)-f(x_2,y_2)|^T |f(x_1,y_1)-f(x_2,y_2)|\\
&\le (x_1-x_2)^T(x_1-x_2) + \|S\|_2^2 \sum_{i=1}^n \big[k_i^{(1)2}|x_i^{(1)}-x_i^{(2)}|^2 + h_i^{(1)2}|y_i^{(1)}-y_i^{(2)}|^2\big].
\end{aligned}
$$

From Lemma 2.1, we know that $\|S\|_2^2 \le \|\widetilde S\|_2^2$, so

$$
2(x_1-x_2)^T S[f(x_1,y_1)-f(x_2,y_2)] \le (x_1-x_2)^T(x_1-x_2) + \|\widetilde S\|_2^2 \sum_{i=1}^n \big[k_i^{(1)2}|x_i^{(1)}-x_i^{(2)}|^2 + h_i^{(1)2}|y_i^{(1)}-y_i^{(2)}|^2\big].
$$

From (i)–(iii), it is clear that

$$
W_1 \le (x_1-x_2)^T\big[{-2B} + 2A + \widetilde U + E\big](x_1-x_2) + n\sum_{j=1}^n \big[k_j^{(1)2}|x_j^{(1)}-x_j^{(2)}|^2 + h_j^{(1)2}|y_j^{(1)}-y_j^{(2)}|^2\big] + \|\widetilde S\|_2^2 \sum_{i=1}^n \big[k_i^{(1)2}|x_i^{(1)}-x_i^{(2)}|^2 + h_i^{(1)2}|y_i^{(1)}-y_i^{(2)}|^2\big].
$$

Similarly, for $W_2$,
$$
W_2 \le (y_1-y_2)^T\big[{-2D} + 2C + \widetilde V + E\big](y_1-y_2) + n\sum_{j=1}^n \big[k_j^{(2)2}|x_j^{(1)}-x_j^{(2)}|^2 + h_j^{(2)2}|y_j^{(1)}-y_j^{(2)}|^2\big] + \|\widetilde T\|_2^2 \sum_{i=1}^n \big[k_i^{(2)2}|x_i^{(1)}-x_i^{(2)}|^2 + h_i^{(2)2}|y_i^{(1)}-y_i^{(2)}|^2\big].
$$

Then,

$$
\begin{aligned}
W ={}& W_1 + W_2\\
\le{}& (x_1-x_2)^T\big[{-2B} + 2A + \widetilde U + E\big](x_1-x_2) + (y_1-y_2)^T\big[{-2D} + 2C + \widetilde V + E\big](y_1-y_2)\\
& + n\sum_{l=1}^{2}\sum_{j=1}^n \big[k_j^{(l)2}|x_j^{(1)}-x_j^{(2)}|^2 + h_j^{(l)2}|y_j^{(1)}-y_j^{(2)}|^2\big]\\
& + \|\widetilde S\|_2^2 \sum_{i=1}^n \big[k_i^{(1)2}|x_i^{(1)}-x_i^{(2)}|^2 + h_i^{(1)2}|y_i^{(1)}-y_i^{(2)}|^2\big] + \|\widetilde T\|_2^2 \sum_{i=1}^n \big[k_i^{(2)2}|x_i^{(1)}-x_i^{(2)}|^2 + h_i^{(2)2}|y_i^{(1)}-y_i^{(2)}|^2\big].
\end{aligned}\tag{9}
$$

On the other hand, it is easy to get that

$$
n\sum_{l=1}^{2}\sum_{j=1}^n \big[k_j^{(l)2}|x_j^{(1)}-x_j^{(2)}|^2 + h_j^{(l)2}|y_j^{(1)}-y_j^{(2)}|^2\big] = n\sum_{j=1}^n \big[\big(k_j^{(1)2}+k_j^{(2)2}\big)|x_j^{(1)}-x_j^{(2)}|^2 + \big(h_j^{(1)2}+h_j^{(2)2}\big)|y_j^{(1)}-y_j^{(2)}|^2\big] = n(x_1-x_2)^T \widetilde K (x_1-x_2) + n(y_1-y_2)^T \widetilde H (y_1-y_2) \tag{10}
$$

and

$$
\begin{aligned}
&\|\widetilde S\|_2^2 \sum_{i=1}^n \big[k_i^{(1)2}|x_i^{(1)}-x_i^{(2)}|^2 + h_i^{(1)2}|y_i^{(1)}-y_i^{(2)}|^2\big] + \|\widetilde T\|_2^2 \sum_{i=1}^n \big[k_i^{(2)2}|x_i^{(1)}-x_i^{(2)}|^2 + h_i^{(2)2}|y_i^{(1)}-y_i^{(2)}|^2\big]\\
&\quad= \sum_{i=1}^n \big[\|\widetilde S\|_2^2 k_i^{(1)2} + \|\widetilde T\|_2^2 k_i^{(2)2}\big]\big|x_i^{(1)}-x_i^{(2)}\big|^2 + \sum_{i=1}^n \big[\|\widetilde S\|_2^2 h_i^{(1)2} + \|\widetilde T\|_2^2 h_i^{(2)2}\big]\big|y_i^{(1)}-y_i^{(2)}\big|^2\\
&\quad= (x_1-x_2)^T Q (x_1-x_2) + (y_1-y_2)^T R (y_1-y_2),
\end{aligned}
$$

where $Q = \mathrm{diag}(q_i)$, $q_i = \|\widetilde S\|_2^2 k_i^{(1)2} + \|\widetilde T\|_2^2 k_i^{(2)2}$, and $R = \mathrm{diag}(r_i)$, $r_i = \|\widetilde S\|_2^2 h_i^{(1)2} + \|\widetilde T\|_2^2 h_i^{(2)2}$. Hence, by (9) and (10),
$$
\begin{aligned}
W = 2\begin{pmatrix} x_1-x_2\\ y_1-y_2\end{pmatrix}^T (H(x_1,y_1) - H(x_2,y_2)) &\le (x_1-x_2)^T\big[{-2B}+2A+\widetilde U+E+n\widetilde K+Q\big](x_1-x_2)\\
&\quad + (y_1-y_2)^T\big[{-2D}+2C+\widetilde V+E+n\widetilde H+R\big](y_1-y_2)\\
&= -\begin{pmatrix} x_1-x_2\\ y_1-y_2\end{pmatrix}^T \mathcal H \begin{pmatrix} x_1-x_2\\ y_1-y_2\end{pmatrix},
\end{aligned}
$$

where $\mathcal H$ defined in (6) is a positive definite matrix. Hence, for any $(x_1^T, y_1^T) \ne (x_2^T, y_2^T)$,

$$
2\begin{pmatrix} x_1-x_2\\ y_1-y_2\end{pmatrix}^T (H(x_1,y_1) - H(x_2,y_2)) \le -\begin{pmatrix} x_1-x_2\\ y_1-y_2\end{pmatrix}^T \mathcal H \begin{pmatrix} x_1-x_2\\ y_1-y_2\end{pmatrix} < 0. \tag{11}
$$
Then $H(x_1,y_1) \ne H(x_2,y_2)$ for all $(x_1^T, y_1^T) \ne (x_2^T, y_2^T)$, i.e., $H$ is an injective map.

Step 2: We next prove that $H$ is coercive, i.e., $\lim_{\|(x,y)\|\to\infty} \|H(x,y)\| = \infty$. From (11), we can get that

$$
2\begin{pmatrix} x_1-x_2\\ y_1-y_2\end{pmatrix}^T (H(x_1,y_1) - H(x_2,y_2)) \le -\lambda_m(\mathcal H)\left\|\begin{pmatrix} x_1-x_2\\ y_1-y_2\end{pmatrix}\right\|^2 < 0. \tag{12}
$$

In particular, letting $(x_2, y_2) = 0$,

$$
2(x_1^T, y_1^T)(H(x_1,y_1) - H(0,0)) \le -\lambda_m(\mathcal H)\|(x_1^T, y_1^T)\|^2.
$$

Hence,

$$
2\|(x_1^T, y_1^T)\|\, \|H(x_1,y_1) - H(0,0)\| \ge \big|2(x_1^T, y_1^T)(H(x_1,y_1) - H(0,0))\big| \ge \lambda_m(\mathcal H)\|(x_1^T, y_1^T)\|^2,
$$

which means that

$$
\|H(x_1,y_1)\| + \|H(0,0)\| \ge \|H(x_1,y_1) - H(0,0)\| \ge \frac{\lambda_m(\mathcal H)}{2}\|(x_1^T, y_1^T)\|.
$$

Then we obtain that

$$
\|H(x_1,y_1)\| \ge \frac{1}{2}\lambda_m(\mathcal H)\|(x_1^T, y_1^T)\| - \|H(0,0)\|. \tag{13}
$$
Therefore, $\|H(x_1,y_1)\| \to \infty$ as $\|(x_1^T, y_1^T)\| \to \infty$. Thus, from Lemma 2.2, the map $H(x,y): R^{2n} \to R^{2n}$ is a homeomorphism. Hence, Cohen–Grossberg neural network (3) has a unique equilibrium point. □
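Condition (6) is also easy to check numerically. A minimal sketch continuing the snippets above, with the matrices $A, B, C, D$ hypothetical placeholders rather than data from the paper:

```python
n = 2
K_tilde = np.diag(k1**2 + k2**2)
H_tilde = np.diag(h1**2 + h2**2)
A = np.diag([1/6, 1/5]); B = np.diag([3.0, 2.0])  # hypothetical A, B
C = np.diag([1/4, 1/5]); D = np.diag([4.0, 5.0])  # hypothetical C, D
E = np.eye(n)
block_x = 2*B - 2*A - U_tilde - Q - n*K_tilde - E
block_y = 2*D - 2*C - V_tilde - R - n*H_tilde - E
H_mat = np.block([[block_x, np.zeros((n, n))],
                  [np.zeros((n, n)), block_y]])
# Condition (6): smallest eigenvalue of the (symmetric) matrix must be positive.
print(np.linalg.eigvalsh((H_mat + H_mat.T) / 2).min() > 0)
```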
4. Stability analysis

In this section, we analyze the global asymptotic stability of the equilibrium point under the assumptions of Theorem 3.1, and we denote the unique equilibrium point by $(x^*, y^*)$. In order to simplify the proofs, we make the transformation

$$
[z(t)^T, w(t)^T] = [(x(t) - x^*)^T, (y(t) - y^*)^T];
$$

then the equilibrium point $[x^*, y^*]$ of neural network (3) is shifted to the origin, and neural network (3) is equivalent to the following system:
$$
\begin{cases}
\dot z(t) = a(z(t))\{-b(z(t)) + U\Phi[z(t), w(t)] + S\Phi[z(t-\tau), w(t-\tau)]\},\\
\dot w(t) = c(w(t))\{-d(w(t)) + V\Psi[z(t), w(t)] + T\Psi[z(t-\tau), w(t-\tau)]\},
\end{cases}\tag{14}
$$

where $z(t) = (z_1, z_2, \dots, z_n)^T \in R^n$, $w(t) = (w_1, w_2, \dots, w_n)^T \in R^n$, $\Phi(z,w) = (\phi_1(z_1,w_1), \dots, \phi_n(z_n,w_n))^T \in R^n$ with $\phi_i(z_i,w_i) = f_i(z_i + x_i^*, w_i + y_i^*) - f_i(x_i^*, y_i^*)$, and $\Psi(z,w) = (\psi_1(z_1,w_1), \dots, \psi_n(z_n,w_n))^T \in R^n$ with $\psi_i(z_i,w_i) = g_i(z_i + x_i^*, w_i + y_i^*) - g_i(x_i^*, y_i^*)$. It is obvious that $\phi_i(0,0) = 0$ and $\psi_i(0,0) = 0$.

Theorem 4.1. Under the assumptions of Theorem 3.1, the unique equilibrium point of Cohen–Grossberg neural network (3) is globally asymptotically stable.

Proof. Since neural network (3) is equivalent to neural network (14), we only need to study the stability of neural network (14). Consider the following Lyapunov function:
$$
V(z(t), w(t)) = V_1(z(t), w(t)) + V_2(z(t), w(t)),
$$

where

$$
\begin{aligned}
V_1(z(t), w(t)) &= 2\sum_{i=1}^n \int_0^{z_i(t)} \frac{s}{a_i(s)}\, ds + \int_{t-\tau}^{t} \Phi^T(z(\xi), w(\xi))\, S^T S\, \Phi(z(\xi), w(\xi))\, d\xi,\\
V_2(z(t), w(t)) &= 2\sum_{i=1}^n \int_0^{w_i(t)} \frac{s}{c_i(s)}\, ds + \int_{t-\tau}^{t} \Psi^T(z(\xi), w(\xi))\, T^T T\, \Psi(z(\xi), w(\xi))\, d\xi.
\end{aligned}
$$

We first calculate the derivative of $V_1$ as follows:
$$
\begin{aligned}
\dot V_1(z(t), w(t)) ={}& 2z^T(t)\, a^{-1}(z(t))\, \dot z(t) + \Phi^T(z(t), w(t)) S^T S \Phi(z(t), w(t)) - \Phi^T(z(t-\tau), w(t-\tau)) S^T S \Phi(z(t-\tau), w(t-\tau))\\
={}& -2z^T(t) b(z(t)) + 2z^T(t) U\Phi[z(t), w(t)] + 2z^T(t) S\Phi[z(t-\tau), w(t-\tau)]\\
& + \Phi^T(z(t), w(t)) S^T S \Phi(z(t), w(t)) - \Phi^T(z(t-\tau), w(t-\tau)) S^T S \Phi(z(t-\tau), w(t-\tau)).
\end{aligned}\tag{15}
$$

Then, for the first term $-2z^T(t) b(z(t))$ in (15),

$$
-2z^T(t) b(z(t)) = -2z^T(t) B z(t) - 2z^T(t) q(z(t)) \le -2z^T(t) B z(t) + 2|z(t)|^T |q(z(t))| \le -2z^T(t) B z(t) + 2|z(t)|^T A |z(t)| = -2z^T(t) B z(t) + 2z^T(t) A z(t). \tag{16}
$$
For the second term $2z^T(t) U\Phi[z(t), w(t)]$ in (15), we have

$$
\begin{aligned}
2z^T(t) U\Phi[z(t), w(t)] &= 2\sum_{i,j=1}^n z_i(t)\, u_{ij}\, \phi_j[z_j(t), w_j(t)] \le 2\sum_{i,j=1}^n \frac{|z_i(t)|^2|u_{ij}|^2 + |\phi_j[z_j(t), w_j(t)]|^2}{2}\\
&= \sum_{i,j=1}^n |z_i(t)|^2 |u_{ij}|^2 + \sum_{i,j=1}^n |\phi_j[z_j(t), w_j(t)]|^2 \le \sum_{i=1}^n\Big(\sum_{j=1}^n |u_{ij}|^2\Big)(z_i(t))^2 + n\sum_{j=1}^n \big[k_j^{(1)2}|z_j(t)|^2 + h_j^{(1)2}|w_j(t)|^2\big]\\
&\le z^T(t)\,\widetilde U\, z(t) + n\sum_{j=1}^n \big[k_j^{(1)2}|z_j(t)|^2 + h_j^{(1)2}|w_j(t)|^2\big].
\end{aligned}\tag{17}
$$
For the third term $2z^T(t) S\Phi[z(t-\tau), w(t-\tau)]$ in (15),

$$
2z^T(t) S\Phi[z(t-\tau), w(t-\tau)] \le z^T(t) z(t) + \Phi^T[z(t-\tau), w(t-\tau)]\, S^T S\, \Phi[z(t-\tau), w(t-\tau)]. \tag{18}
$$

For the fourth term $\Phi^T(z(t), w(t)) S^T S \Phi(z(t), w(t))$ in (15), we have

$$
\Phi^T(z(t), w(t)) S^T S \Phi(z(t), w(t)) \le \|S^T S\|_2\, |\Phi^T(z(t), w(t))|\,|\Phi(z(t), w(t))| \le \|S\|_2^2 \sum_{i=1}^n |\phi_i(z_i(t), w_i(t))|^2 \le \|\widetilde S\|_2^2 \sum_{i=1}^n \big[k_i^{(1)2}|z_i(t)|^2 + h_i^{(1)2}|w_i(t)|^2\big]. \tag{19}
$$
Therefore, according to (16)–(19) (note that the delayed terms arising in (15) and (18) cancel), we get that

$$
\dot V_1(z(t), w(t)) \le z^T(t)\big({-2B} + 2A + \widetilde U + E\big)z(t) + n\sum_{j=1}^n \big[k_j^{(1)2}|z_j(t)|^2 + h_j^{(1)2}|w_j(t)|^2\big] + \|\widetilde S\|_2^2 \sum_{i=1}^n \big[k_i^{(1)2}|z_i(t)|^2 + h_i^{(1)2}|w_i(t)|^2\big].
$$

In a similar way, we have

$$
\dot V_2(z(t), w(t)) \le w^T(t)\big({-2D} + 2C + \widetilde V + E\big)w(t) + n\sum_{j=1}^n \big[k_j^{(2)2}|z_j(t)|^2 + h_j^{(2)2}|w_j(t)|^2\big] + \|\widetilde T\|_2^2 \sum_{i=1}^n \big[k_i^{(2)2}|z_i(t)|^2 + h_i^{(2)2}|w_i(t)|^2\big].
$$

Hence,
$$
\begin{aligned}
\dot V_1(z(t), w(t)) + \dot V_2(z(t), w(t)) \le{}& \begin{pmatrix} z(t)\\ w(t)\end{pmatrix}^T \begin{pmatrix} -2B + 2A + \widetilde U + E & 0\\ 0 & -2D + 2C + \widetilde V + E\end{pmatrix}\begin{pmatrix} z(t)\\ w(t)\end{pmatrix}\\
& + n\Big\{\sum_{j=1}^n \big[k_j^{(1)2}|z_j(t)|^2 + h_j^{(1)2}|w_j(t)|^2\big] + \sum_{j=1}^n \big[k_j^{(2)2}|z_j(t)|^2 + h_j^{(2)2}|w_j(t)|^2\big]\Big\}\\
& + \|\widetilde S\|_2^2 \sum_{i=1}^n \big[k_i^{(1)2}|z_i(t)|^2 + h_i^{(1)2}|w_i(t)|^2\big] + \|\widetilde T\|_2^2 \sum_{i=1}^n \big[k_i^{(2)2}|z_i(t)|^2 + h_i^{(2)2}|w_i(t)|^2\big].
\end{aligned}\tag{20}
$$

On the other hand, it is clear that
$$
n\Big\{\sum_{j=1}^n \big[k_j^{(1)2}|z_j(t)|^2 + h_j^{(1)2}|w_j(t)|^2\big] + \sum_{j=1}^n \big[k_j^{(2)2}|z_j(t)|^2 + h_j^{(2)2}|w_j(t)|^2\big]\Big\} = n\sum_{j=1}^n \big[\big(k_j^{(1)2}+k_j^{(2)2}\big)|z_j(t)|^2 + \big(h_j^{(1)2}+h_j^{(2)2}\big)|w_j(t)|^2\big] = n\, z^T(t)\widetilde K z(t) + n\, w^T(t)\widetilde H w(t) \tag{21}
$$

and

$$
\|\widetilde S\|_2^2 \sum_{i=1}^n \big[k_i^{(1)2}|z_i(t)|^2 + h_i^{(1)2}|w_i(t)|^2\big] + \|\widetilde T\|_2^2 \sum_{i=1}^n \big[k_i^{(2)2}|z_i(t)|^2 + h_i^{(2)2}|w_i(t)|^2\big] = \sum_{i=1}^n \big[\|\widetilde S\|_2^2 k_i^{(1)2} + \|\widetilde T\|_2^2 k_i^{(2)2}\big]|z_i(t)|^2 + \sum_{i=1}^n \big[\|\widetilde S\|_2^2 h_i^{(1)2} + \|\widetilde T\|_2^2 h_i^{(2)2}\big]|w_i(t)|^2 = z^T(t) Q z(t) + w^T(t) R w(t). \tag{22}
$$

Hence,
$$
\dot V(z(t), w(t)) = \dot V_1(z(t), w(t)) + \dot V_2(z(t), w(t)) \le -\begin{pmatrix} z(t)\\ w(t)\end{pmatrix}^T \mathcal H \begin{pmatrix} z(t)\\ w(t)\end{pmatrix},
$$

where $\mathcal H$ from Theorem 3.1 is a positive definite matrix. Then,

$$
\dot V(z(t), w(t)) \le -\begin{pmatrix} z(t)\\ w(t)\end{pmatrix}^T \mathcal H \begin{pmatrix} z(t)\\ w(t)\end{pmatrix} \le -\lambda_m(\mathcal H)\big[z(t)^T z(t) + w(t)^T w(t)\big]. \tag{23}
$$
So we can ensure that $\dot V(z(t), w(t))$ is negative definite for all $(z(t), w(t)) \ne 0$. In addition, $V(z(t), w(t))$ is radially unbounded since $V(z(t), w(t)) \to \infty$ as $\|(z^T(t), w^T(t))\| \to \infty$. Thus, it can be concluded that the origin of neural network (14), or equivalently the equilibrium point of neural network (3), is globally asymptotically stable. □

Remark 1. Second-order Cohen–Grossberg neural network (3) is not a simple combination of two first-order Cohen–Grossberg neural networks, because the activation function $f$ depends not only on $x$ but also on $y$. For example, consider the following second-order Cohen–Grossberg neural network:
$$
\begin{cases}
\dot x(t) = (2 + \sin x(t))\big\{{-3x(t)} + \tfrac14 \sin x(t)\sin y(t) + 3\sin x(t-1)\sin y(t-1)\big\},\\
\dot y(t) = (3 - 2\cos x(t))\big\{{-4y(t)} + \tfrac15 \cos x(t)\cos y(t) + 2\cos x(t-1)\cos y(t-1)\big\}.
\end{cases}\tag{24}
$$
Obviously, neural network (24) cannot be expressed as two first-order Cohen–Grossberg neural networks. Hence, Theorems 3.1 and 4.1 are not direct extensions of results for first-order Cohen–Grossberg neural networks. Meanwhile, as a special case, Theorems 3.1 and 4.1 can be used to verify the robust stability of the following first-order Cohen–Grossberg neural network:

$$
\dot x(t) = a(x(t))\{-b(x(t)) + U f(x(t)) + S f(x(t-\tau)) + I\}. \tag{25}
$$
Corollary 4.1. Cohen–Grossberg neural network (25) has a unique equilibrium point which is globally asymptotically robustly stable if, for any $s, t \in R$ and $i = 1, 2, \dots, n$, the following conditions hold:

(i) there exists a positive constant $k_i$ such that $|f_i(s) - f_i(t)| \le k_i|s - t|$;
(ii) $a_i(\cdot): R \to R$ is continuous and there exist positive constants $m_1$ and $m_2$ such that $0 < m_1 < a_i(t) < m_2$;
(iii) $b_i(\cdot): R \to R$ is continuous and there exist positive constants $A_i, B_i$ satisfying $b_i(s) = B_i s + q_i(s)$ and $|q_i(s) - q_i(t)| \le A_i|s - t|$;
(iv) $2B - 2A - \widetilde U - Q - n\widetilde K - E > 0$, where $Q = \mathrm{diag}(q_i)$, $q_i = \|\widetilde S\|_2^2 k_i^2$.
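Returning to network (24) of Remark 1, such delayed systems are easy to integrate numerically. The following is a minimal constant-step Euler sketch with a delay buffer; the signs and coefficients follow our reading of (24) above, and the step size and initial history are arbitrary choices, not values from the paper:

```python
import numpy as np

dt, tau, t_end = 0.001, 1.0, 8.0
delay = int(tau / dt)
steps = delay + int(t_end / dt)
x = np.full(steps + 1, 0.5)  # indices 0..delay hold the constant history on [-tau, 0]
y = np.full(steps + 1, 0.5)
for k in range(delay, steps):
    xd, yd = x[k - delay], y[k - delay]   # delayed states x(t-1), y(t-1)
    x[k + 1] = x[k] + dt * (2 + np.sin(x[k])) * (
        -3 * x[k] + 0.25 * np.sin(x[k]) * np.sin(y[k])
        + 3 * np.sin(xd) * np.sin(yd))
    y[k + 1] = y[k] + dt * (3 - 2 * np.cos(x[k])) * (
        -4 * y[k] + 0.2 * np.cos(x[k]) * np.cos(y[k])
        + 2 * np.cos(xd) * np.cos(yd))
print(x[-1], y[-1])  # final state after t_end time units
```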
5. Comparisons and examples

In this section, we compare our results with the previous robust stability results in [4,5].

Example 1. Consider a second-order neural network (3) with the following network parameters:
$$
\underline U = \begin{pmatrix} -0.1 & -0.1\\ -0.1 & -0.1\end{pmatrix},\quad \overline U = \begin{pmatrix} 0.1 & 0.1\\ 0.1 & 0.1\end{pmatrix},\quad \underline V = \begin{pmatrix} -0.1 & -0.1\\ -0.1 & -0.1\end{pmatrix},\quad \overline V = \begin{pmatrix} 0.2 & 0.2\\ 0.2 & 0.2\end{pmatrix},
$$
$$
\underline S = \begin{pmatrix} 1.9 & 3.9\\ 2.9 & 1.9\end{pmatrix},\quad \overline S = \begin{pmatrix} 2.1 & 4.1\\ 3.1 & 2.1\end{pmatrix},\quad \underline T = \begin{pmatrix} 3.9 & 1.9\\ 2.9 & 2.9\end{pmatrix},\quad \overline T = \begin{pmatrix} 4.1 & 2.1\\ 3.1 & 3.1\end{pmatrix},\quad a(x) = \begin{pmatrix} 4 + 2\sin x_1 & 0\\ 0 & 3 - 2\cos x_2\end{pmatrix},
$$
$$
b(x) = \begin{pmatrix} b_1(x_1)\\ b_2(x_2)\end{pmatrix} = \begin{pmatrix} 3x_1 + \frac16 \sin x_1\\ 2x_2 + \frac15 \cos x_2\end{pmatrix},\quad c(y) = \begin{pmatrix} 2 + \frac12\cos y_1 & 0\\ 0 & 3 + \frac12\sin y_2\end{pmatrix},\quad d(y) = \begin{pmatrix} d_1(y_1)\\ d_2(y_2)\end{pmatrix} = \begin{pmatrix} 4y_1 - \frac14 \sin y_1\\ 5y_2 - \frac15 \cos y_2\end{pmatrix},
$$
$$
f(x,y) = \begin{pmatrix} f_1(x_1,y_1)\\ f_2(x_2,y_2)\end{pmatrix} = \begin{pmatrix} \frac14 x_1 + e^{-5y_1}\\ e^{-6x_2} + \frac14 y_2\end{pmatrix},\quad g(x,y) = \begin{pmatrix} g_1(x_1,y_1)\\ g_2(x_2,y_2)\end{pmatrix} = \begin{pmatrix} e^{-4x_1} + \frac13 y_1\\ \frac15 x_2 + \frac13 y_2\end{pmatrix},\quad I = J = 0,\quad \tau = 1.
$$
After simple calculation, we have
$$
B = \begin{pmatrix} 3 & 0\\ 0 & 2\end{pmatrix},\quad A = \begin{pmatrix} \frac16 & 0\\ 0 & \frac15\end{pmatrix},\quad D = \begin{pmatrix} 4 & 0\\ 0 & 5\end{pmatrix},\quad C = \begin{pmatrix} \frac14 & 0\\ 0 & \frac15\end{pmatrix}.
$$

Meanwhile, it is easy to get that $\mathcal H = \mathrm{diag}(0.2113,\ 0.4062,\ 0.1150,\ 0.0657) > 0$. By Theorem 4.1, the neural network defined in this example has a unique equilibrium point, which is globally asymptotically robustly stable.
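As a cross-check, the norm bounds of Lemma 2.1 for this example can be evaluated with the `interval_norm_bound` sketch from Section 2, using the interval endpoints as given above:

```python
S_lo = np.array([[1.9, 3.9], [2.9, 1.9]]); S_hi = np.array([[2.1, 4.1], [3.1, 2.1]])
T_lo = np.array([[3.9, 1.9], [2.9, 2.9]]); T_hi = np.array([[4.1, 2.1], [3.1, 3.1]])
print(interval_norm_bound(S_lo, S_hi))  # ||S~||_2
print(interval_norm_bound(T_lo, T_hi))  # ||T~||_2
```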
Fig. 1. Trajectories of neural network in Case 1 of Example 1 with three different initial points.
Fig. 2. Trajectories of neural network in Case 2 of Example 1 with three different initial points.
However, we cannot use Theorem 3.1 in [5] to verify the stability of the neural network in this example. In fact, Theorem 3.1 in [5] can only be used to verify the stability of neural network (1), i.e., the case $U = V = 0$ in neural network (3). On the other hand, even if we let $U = V = 0$ in this example, the conditions of Theorem 3.1 in [5] still do not hold. For example, it is easy to obtain that $r = 1.1542 > 1$, where $r$ is from Theorem 3.1 in [5]; this contradicts the basic assumption $r < 1$ in Theorem 3.1 of [5]. For details, see Case 2 and Fig. 2 below.

To test and verify the theoretical result, we give some simulations for the following cases:

Case 1. $U = \overline U$, $V = \overline V$, $S = \overline S$, $T = \overline T$; see Fig. 1.

Case 2. $U = \begin{pmatrix} 0 & 0\\ 0 & 0\end{pmatrix} \in U_I$, $V = \begin{pmatrix} 0 & 0\\ 0 & 0\end{pmatrix} \in V_I$, $S = \begin{pmatrix} 2 & 4\\ 3 & 2\end{pmatrix} \in S_I$, $T = \begin{pmatrix} 4 & 2\\ 3 & 3\end{pmatrix} \in T_I$; see Fig. 2.

Case 3. $U = \underline U$, $V = \underline V$, $S = \underline S$, $T = \underline T$; see Fig. 3.

One potential application of Theorem 4.1 is to verify the stability of Cohen–Grossberg neural networks. Next, by an example, we make a comparison with the corresponding conclusions of [4].

Example 2. Consider the following Cohen–Grossberg neural network:
$$
\begin{cases}
\dot x(t) = (2 + \sin x(t))\big\{{-\tfrac95 x(t)} + \tfrac14\big(x(t) + e^{y(t)}\big) + 3\big(x(t-1) + e^{y(t-1)}\big) + \tfrac{8}{19}\big\},\\[1mm]
\dot y(t) = (3 - 2\cos x(t))\big\{{-\tfrac{16}{3} y(t)} + \tfrac14\big(e^{x(t)} + y(t)\big) + 3\big(e^{x(t-1)} + y(t-1)\big)\big\}.
\end{cases}\tag{26}
$$
Fig. 3. Trajectories of neural network in Case 3 of Example 1 with three different initial points.
Fig. 4. Trajectories of neural network of Example 2 with three different initial points.
It is easy to verify that the conditions of Theorem 4.1 hold. Hence, the neural network defined in this example has a unique equilibrium point, which is globally asymptotically stable. Using MATLAB, we present a simulation of neural network (26) with three different initial points; see Fig. 4. However, the conclusions in [4] cannot be used to verify the global stability of this neural network, since the LMI condition in [4] does not hold for Cohen–Grossberg neural network (26).
6. Conclusions

In this paper, we present some new sufficient conditions for the global stability of second-order interval Cohen–Grossberg neural networks. The existence and uniqueness of the equilibrium point of second-order Cohen–Grossberg neural network (3) is derived via the homeomorphic mapping theorem, and global asymptotic stability is proved by the Lyapunov method through a suitable Lyapunov functional. We give some numerical examples to illustrate that our result is an improvement over other recent results.

Acknowledgement

This work is supported by the National Natural Science Foundation of China (Grants 11271099, 11126218, 11101107) and the China Postdoctoral Science Foundation (2013M530915).

References

[1] Cohen M, Grossberg S. Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans Syst Man Cybern 1983;13:815–26.
[2] Zhou CH, Zhang HY, Zhang HB, Dang CY. Global exponential stability of impulsive fuzzy Cohen–Grossberg neural networks with mixed delays and reaction–diffusion terms. Neurocomputing 2012;91:67–76.
[3] Huang ZK, Feng CH, Mohamad S. Multistability analysis for a general class of delayed Cohen–Grossberg neural networks. Inf Sci 2012;187:233–44.
[4] Zhang Z, Zhang T, Huang S, Xiao P. New global exponential stability result to a general Cohen–Grossberg neural networks with multiple delays. Nonlinear Dyn 2012;67:2419–32.
[5] Zhang ZQ, Zhou DM. Global robust exponential stability for second-order Cohen–Grossberg neural networks with multiple delays. Neurocomputing 2009;73:213–8.
[6] Ke YQ, Miao CF. Stability analysis of inertial Cohen–Grossberg-type neural networks with time delays. Neurocomputing 2013;117:196–205.
[7] Wang Z, Zhang H, Yu W. Robust stability criteria for interval Cohen–Grossberg neural networks with time varying delay. Neurocomputing 2009;72:1105–10.
[8] Bao G, Wen SP, Zeng ZG. Robust stability analysis of interval fuzzy Cohen–Grossberg neural networks with piecewise constant argument of generalized type. Neural Netw 2012;33:32–41.
[9] Li T, Fei S, Guo YQ, Zhu Q. Stability analysis on Cohen–Grossberg neural networks with both time-varying and continuously distributed delays. Nonlinear Anal: Real World Appl 2009;10:2600–12.
[10] Lien CH, Yu KW, Lin YF, Chung YJ, Chung LY. Stability conditions for Cohen–Grossberg neural networks with time-varying delays. Phys Lett A 2008;372:2264–8.
[11] Zhu QX, Cao JD. Robust exponential stability of Markovian jump impulsive stochastic Cohen–Grossberg neural networks with mixed time delays. IEEE Trans Neural Netw 2010;21:1314–25.
[12] Zhang HG, Wang ZS, Liu DR. Robust stability analysis for interval Cohen–Grossberg neural networks with unknown time-varying delays. IEEE Trans Neural Netw 2008;19:1942–55.
[13] Wang ZS, Zhang HG, Yu W. Robust stability of Cohen–Grossberg neural networks via state transmission matrix. IEEE Trans Neural Netw 2009;20:169–74.
[14] Zhang H, Wang Z, Liu D. Global asymptotic stability and robust stability of a class of Cohen–Grossberg neural networks with mixed delays. IEEE Trans Circuits Syst I 2009;56(3):616–29.
[15] Faydasicok O, Arik S. Robust stability analysis of a class of neural networks with discrete time delays. Neural Netw 2012;29–30:52–9.
[16] Ensari T, Arik S. New results for robust stability of dynamical neural networks with discrete time delays. Expert Syst Appl 2010;27:5925–30.
[17] Faydasicok O, Arik S. Equilibrium and stability analysis of delayed neural networks under parameter uncertainties. Appl Math Comput 2012;218:6716–26.
[18] Qin ST, Fan DJ, Yan M, Liu QH. Global robust exponential stability for interval delayed neural networks with possibly unbounded activation functions. Neural Process Lett (in press).
[19] Shao JL, Huang TZ. A note on global robust stability criteria for interval delayed neural networks via an LMI approach. IEEE Trans Circuits Syst II 2008;55(11):1198–202.
[20] Zheng C, Zhang H, Wang Z. Improved robust stability criteria for delayed cellular neural networks via the LMI approach. IEEE Trans Circuits Syst II 2010;57(1):41–5.
[21] Singh V. Improved global robust stability for interval-delayed Hopfield neural networks. Neural Process Lett 2008;27(3):257–65.
[22] Ensari T, Arik S. New results for robust stability of dynamical neural networks with discrete time delays. Expert Syst Appl 2010;27:5925–30.
[23] Ozcan N. A new sufficient condition for global robust stability of delayed neural networks. Neural Process Lett 2011;34:305–16.
[24] Shao JL, Huang TZ, Wang XP. Improved global robust exponential stability criteria for interval neural networks with time-varying delays. Expert Syst Appl 2011;38:15587–93.
[25] Huang T. Robust stability of delayed fuzzy Cohen–Grossberg neural networks. Comput Math Appl 2011;61:2247–50.
[26] Singh V. Modified criteria for global robust stability of interval delayed neural networks. Appl Math Comput 2009;215(15):3124–33.
[27] Lee SM, Kwon OM, Park JH. A novel delay-dependent criterion for delayed neural networks of neutral type. Phys Lett A 2010;374:1843–8.
[28] Yang R, Gao H, Shi P. Novel robust stability criteria for stochastic Hopfield neural networks with time delays. IEEE Trans Syst Man Cybern, Part B: Cybern 2009;39:467–74.
[29] Faydasicok O, Arik S. Further analysis of global robust stability of neural networks with multiple time delays. J Franklin Inst 2012;349:813–25.