Global dynamics and learning algorithm of non-autonomous neural networks with time-varying delays

Yurong Zhang, Zhichun Yang∗

College of Mathematics, Chongqing Normal University, Chongqing 400047, China

∗ Corresponding author. E-mail address: [email protected] (Z. Yang).
Article history: Received 22 August 2018; Revised 11 February 2019; Accepted 15 March 2019.

Keywords: Neural networks; Global dynamics; Learning algorithm; Input-to-state stability (ISS); Unbounded delays
Abstract

In this paper, a class of non-autonomous neural networks with time-varying delays is considered. By using a new differential inequality and M-matrix properties, we investigate the positive invariant set and global attracting set of the networks without assuming boundedness of the time delays or system coefficients. On this basis, we obtain sufficient conditions for uniform boundedness and for the existence of a periodic attractor, and we give its existence range for periodic neural networks. Furthermore, we offer weight learning algorithms to ensure input-to-state stability, and give the state estimate and attracting set for the system. Our results extend and improve earlier ones. Some examples and simulations are given to demonstrate the effectiveness of the obtained results.
1. Introduction

Dynamics of autonomous neural networks have been deeply investigated due to their extensive applications in optimization, pattern recognition, signal processing, associative memories, and so on (see [1–5]). In recent years, the stability properties of equilibrium points of autonomous neural networks with time delays have enjoyed remarkable progress, and a large number of stability criteria have been derived (see, e.g., [6–9]). From the viewpoint of reality, autonomous neural network systems with constant coefficients are simplified models of biological neural networks and artificial intelligence systems. In the practical evolutionary processes of the networks, the coefficients and external inputs are subject to environmental disturbances and vary with time. Even in the learning theory of neural networks, the connection weights among neurons are frequently updated to achieve some purpose. In such cases, non-autonomous neural network models depict the evolutionary processes of the networks more accurately (see, e.g., [10–15]). In addition, delay effects are inevitable owing to the finite switching speed of the amplifiers in the networks. The delays may vary with time, and may even be unbounded, for instance in neural networks with proportional delays (see, e.g., [16–18]). However, most previous works required the boundedness of the variable delays and system coefficients. Accordingly, it is still an interesting and challenging subject to
study further the global dynamics of non-autonomous neural networks without the assumption of boundedness on the system coefficients and time delays.

In non-autonomous neural networks, an equilibrium point may not exist, since the system coefficients and external inputs are time-varying. Important problems are then to investigate global dynamics, including the existence range and attracting region of the system states. This motivates recent research on positive invariant sets and globally attracting sets, which play important roles in analyzing boundedness and attractors of nonlinear dynamical systems. For example, attracting and invariant sets were obtained for autonomous neural networks with time delays in [19–23]. For non-autonomous neural networks with time-varying delays, some attracting and invariant sets were obtained in [13,14], but under the restrictive conditions of bounded delays and a common factor of the coefficients. Furthermore, it is meaningful to understand how external inputs affect the states of a nonlinear system, and input-to-state stability (ISS) has become a popular tool for studying the dynamical behavior of systems (see [24,25]). Recently, ISS properties were studied for various neural networks such as Hopfield neural networks [26–28], Cohen–Grossberg neural networks [29,30], BAM neural networks [31], reaction-diffusion neural networks [32], and other recurrent neural networks (see, e.g., [33–37]). The main methods include Lyapunov functionals and differential or integral inequalities, but few results are effective for non-autonomous neural networks with unbounded coefficients and unbounded delays. On the other hand, learning algorithms for dynamical neural networks have become a very active subject due to successful applications in neuro-identification,
neuro-control, associative memory and optimization solvers (see [38–40]). In [38], the authors suggested a sliding-mode learning law for a dynamic neural network to deal with the state observation problem of uncertain dynamical models via a Lyapunov method for the state estimation error system. In [39], a gradient learning algorithm was designed to render the neural networks passive, asymptotically stable and input-to-state stable. Ahn proposed a passive weight learning law for delayed switched Hopfield neural networks to ensure passivity and asymptotic stability of delayed neural networks [27], and gave an ISS weight learning algorithm for recurrent neural networks to guarantee robust stability and reduce the effect of external disturbances [28]. It is natural to further investigate ISS learning algorithms for non-autonomous neural networks with either bounded or unbounded delays.

According to the above discussion, our main contributions are summarized as follows. (1) By establishing a new differential inequality, we give global dynamics, including the positive invariant set and globally attracting set, for non-autonomous neural networks with unbounded delays, and provide the periodic attractor and its existence range for periodic neural networks. (2) When the coefficients of the networks have a common factor in every subsystem, we extend some existing results on invariant and attracting sets to the case of unbounded delays by M-matrix properties. (3) Some new ISS learning algorithms for non-autonomous neural networks with delays are proposed to give the state estimation and global attracting set, which guarantee input-to-state stability and exponential stability.

The organization of this paper is as follows. In Section 2, we describe a model of non-autonomous neural networks with delays, introduce some necessary definitions and basic lemmas, and construct a new generalized Halanay inequality without the requirement of bounded delays. In Section 3, we investigate global dynamics on the positive invariant and attracting sets, and further give the periodic attractor and its range. In Section 4, we propose a new learning algorithm to ensure input-to-state stability and to obtain the global attracting set. In Section 5, some examples and simulations are given to demonstrate the effectiveness of the obtained results. The paper concludes in Section 6.
2. Model description and preliminaries

Let R₊ = {x ∈ R : x ≥ 0}, let Rⁿ denote the real n-dimensional linear vector space and R^{n×n} the set of n × n real matrices. For any x ∈ Rⁿ, we define its norm |x| = max_i |x_i| and denote [x]⁺ = (|x₁|, |x₂|, ..., |x_n|)ᵀ. For real matrices A = (a_ij)_{n×n} and B = (b_ij)_{n×n}, we write A ≥ B if a_ij ≥ b_ij for all i, j ∈ N. For A ∈ R^{n×n}, the transpose, inverse, matrix norm, matrix measure and maximum (minimum) eigenvalue are denoted by Aᵀ, A⁻¹, ‖A‖ = √(λ_max(AᵀA)), μ(A) = (1/2)λ_max(A + Aᵀ), and λ_max(A) (λ_min(A), when A is a symmetric matrix), respectively. E is the identity matrix. C[X, Y] is the space of continuous mappings from the topological space X to the topological space Y. In particular, let C = C((−∞, t₀]; Rⁿ) denote the family of all bounded continuous Rⁿ-valued functions φ on (−∞, t₀] with ‖φ‖ = sup_{−∞<s≤t₀} |φ(s)|. Lⁿ_∞ is the set of all essentially bounded functions ψ : R₊ → Rⁿ equipped with the essential supremum norm ‖ψ‖_∞ = sup{|ψ(t)|, t ≥ t₀}. A function γ : R₊ → R₊ is a K-function, denoted γ ∈ K, if it is continuous, strictly increasing, and γ(0) = 0. A function ξ : R₊ × R₊ → R₊ is a KL-function, denoted ξ ∈ KL, if for each fixed t ≥ 0 the function ξ(·, t) is a K-function, and for each fixed s ≥ 0 the function ξ(s, ·) decreases to zero as t → ∞. Set W = {ω ∈ C(R, R₊) : ω(t) ≤ 1 if t ≤ t₀; ω(t) ≥ 1 if t > t₀; and ω(t) is monotonically increasing}.

In this paper, we consider a class of non-autonomous neural networks with time-varying delays of the form
x_i'(t) = −a_i(t)x_i(t) + Σ_{j=1}^n b_ij(t) f_j(x_j(t)) + Σ_{j=1}^n c_ij(t) g_j(x_j(t − τ_ij(t))) + I_i(t),  i ∈ N = {1, 2, ..., n},  (1)

where n corresponds to the number of units in the network, x_i(t) is the state of the ith unit at time t, f_j(x_j) and g_j(x_j) are the activation functions of the jth unit, a_i(t) ≥ 0 represents the rate with which the ith unit resets its potential to the resting state in isolation when disconnected from the network and external inputs, τ_ij(t) is the transmission delay, which satisfies t − τ_ij(t) → ∞ as t → ∞, b_ij(t) and c_ij(t) denote the strength of the jth neuron on the ith unit at times t and t − τ_ij(t), respectively, and I_i(t) is the external input on the ith unit. We always assume that the functions a_i(t), b_ij(t), c_ij(t) and I_i(t) are continuous for t ∈ R, i, j ∈ N, and that I(t) = (I₁(t), ..., I_n(t))ᵀ ∈ Lⁿ_∞. The time delays here may be bounded or unbounded, which covers several models of neural networks [6,12,13,15,19]. In this paper, we always suppose that solutions of system (1) exist globally and satisfy the following initial conditions:
x(t₀ + s) = φ(s), s ∈ (−∞, 0], t₀ ≥ 0, where φ(s) = (φ₁(s), ..., φ_n(s))ᵀ ∈ C. The solution of system (1) with initial condition x_{t₀} = φ is denoted by x(t, t₀, φ) or x_t(t₀, φ).
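Since the coefficients and delays in (1) are time-varying (and the delays possibly unbounded), trajectories are easy to explore numerically. The following is a minimal sketch of a fixed-step Euler integrator for (1); all model data (a, b, c, tau, I, f, g) and the history phi are user-supplied placeholders, not quantities fixed by the paper.

```python
import numpy as np

def simulate(n, a, b, c, tau, I, f, g, phi, t0=0.0, T=50.0, h=1e-3):
    """Euler scheme for x_i' = -a_i(t) x_i + sum_j b_ij(t) f_j(x_j)
       + sum_j c_ij(t) g_j(x_j(t - tau_ij(t))) + I_i(t).
    a(t): length-n array; b(t), c(t), tau(t): n x n arrays;
    I(t), f(x), g(x): length-n arrays; phi(t): history function."""
    steps = int((T - t0) / h)
    ts = t0 + h * np.arange(steps + 1)
    xs = np.zeros((steps + 1, n))
    xs[0] = phi(t0)

    def state(t, k):
        # x(t): history for t <= t0, else nearest already-computed grid point
        if t <= t0:
            return phi(t)
        idx = min(int(round((t - t0) / h)), k)
        return xs[idx]

    for k in range(steps):
        t, x = ts[k], xs[k]
        dx = -a(t) * x + b(t) @ f(x) + I(t)
        for i in range(n):
            for j in range(n):
                xd = state(t - tau(t)[i, j], k)
                dx[i] += c(t)[i, j] * g(xd)[j]
        xs[k + 1] = x + h * dx
    return ts, xs
```

This sketch stores the whole trajectory so that unbounded delays such as τ(t) = 0.9t can be looked up directly; it is meant for experimentation, not as the simulation method used by the authors.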
Firstly, we introduce the following definitions.

Definition 2.1. A set S ⊂ C is called a positive invariant set if, for any initial value φ ∈ S, we have x(t, t₀, φ) ∈ S, t ≥ t₀.

Definition 2.2. Solutions of system (1) are said to be uniformly bounded if, for any α > 0, there is a constant β = β(α) > 0 such that |x(t, t₀, φ)| ≤ β(α) whenever ‖φ‖ ≤ α, t ≥ t₀.

Definition 2.3. A set S₁ ⊂ C is called a global attracting set if, for any initial value φ ∈ C, the solution x_t(t₀, φ) converges to S₁ as t → ∞, that is,
dist(x_t(t₀, φ), S₁) → 0 as t → ∞, where dist(x_t(t₀, φ), S₁) = inf_{ϕ∈S₁} ‖x_t(t₀, φ) − ϕ‖.
Definition 2.4. The trivial solution of system (1) is said to be input-to-state stable if, for every φ ∈ C and every input I ∈ Lⁿ_∞, there exist two functions γ ∈ K and β ∈ KL such that

|x(t, φ)| ≤ β(‖φ‖, t) + γ(‖I‖_∞).

Especially, the trivial solution of system (1) is said to be exponentially input-to-state stable if

|x(t, φ)| ≤ k e^{−rt}‖φ‖ + λ‖I‖_∞.

The trivial solution is said to be globally asymptotically (exponentially) stable when I(t) = 0 in the above estimates, respectively.

To obtain our results, we need some basic lemmas.

Lemma 2.1 (see [41]). Given matrices X, Y and Λ with appropriate dimensions such that Λ = Λᵀ > 0, and any scalar ε > 0, then
XᵀY + YᵀX ≤ ε XᵀΛX + (1/ε) YᵀΛ⁻¹Y.
Lemma 2.2 (see [42]). Let P ∈ R^{n×n} be a symmetric positive definite matrix with P = QᵀQ. Then, for any x ∈ Rⁿ,

xᵀAᵀPAx ≤ ‖QAQ⁻¹‖² xᵀPx,  xᵀ(AᵀP + PA)x ≤ 2μ(QAQ⁻¹) xᵀPx.
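Both estimates in Lemma 2.2 are easy to sanity-check numerically; a small sketch with randomly generated data (all matrices below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Q = rng.standard_normal((n, n)) + n * np.eye(n)  # shift makes Q invertible in practice
P = Q.T @ Q                                      # symmetric positive definite
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)

M = Q @ A @ np.linalg.inv(Q)                     # Q A Q^{-1}
norm2 = np.linalg.norm(M, 2) ** 2                # ||QAQ^{-1}||^2 (spectral norm)
mu = np.linalg.eigvalsh((M + M.T) / 2).max()     # matrix measure mu(QAQ^{-1})

xPx = x @ P @ x
print(x @ A.T @ P @ A @ x <= norm2 * xPx + 1e-9)       # True
print(x @ (A.T @ P + P @ A) @ x <= 2 * mu * xPx + 1e-9)  # True
```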
Lemma 2.3 (see [43]). Let p ≥ 2 and ε, a, b > 0. Then

a^{p−1} b ≤ ((p − 1)ε/p) a^p + b^p/(p ε^{p−1}).
In order to study the dynamical behaviors of non-autonomous neural networks with unbounded delays, we first establish a differential inequality with unbounded delays and time-varying inputs.

Lemma 2.4. Let u_i(t) on [t₀, +∞) be a solution of the following delay differential inequality with the initial condition u_i(t) = φ_i(t), t ≤ t₀, where φ_i(t) is bounded and continuous:

D⁺u_i(t) ≤ Σ_{j=1}^n [α_ij(t) u_j(t) + β_ij(t) u_j(t − τ_ij(t))] + J_i(t), t ≥ t₀,  (2)

where α_ij(t) ≥ 0 for i ≠ j, β_ij(t) ≥ 0, J_i(t) ≥ 0, and τ_ij(t) ≥ 0 satisfies t − τ_ij(t) → ∞ as t → ∞. If there exist positive constants δ, z_i, i ∈ N, and a function ω(t) ∈ W satisfying ω'(t)/ω(t) ≤ δ and

Σ_{j=1}^n [α_ij(t) + β_ij(t) ω(t)/ω(t − τ_ij(t))] z_j < −δ z_i, t ≥ t₀, i ∈ N,  (3)

then

u_i(t) ≤ [χ(t) + κ ω⁻¹(t)‖φ‖] z_i, t ≥ t₀, i ∈ N,

where κ = 1/min_{i∈N} z_i, ‖φ‖ = sup_{−∞<s≤t₀} |φ(s)|, and χ(t) is a continuous nondecreasing function satisfying χ(t) ≥ J_i(t)/(δ z_i).

Proof. Define the functions v_i(t) = ω(t)(u_i(t) − z_i χ(t)). From ω(t) ∈ W, we easily get that v_i(t) ≤ κ‖φ‖z_i holds for all t ∈ (−∞, t₀]. We shall prove that v_i(t) < κ(‖φ‖ + ε)z_i for all t ≥ t₀ and any given number ε > 0. Otherwise, by the continuity of u_i(t), there must be a constant t* > t₀ and an integer m such that

v_m(t*) = κ(‖φ‖ + ε)z_m, D⁺v_m(t*) ≥ 0, v_i(t) ≤ κ(‖φ‖ + ε)z_i, ∀t ∈ [t₀, t*], i ∈ N.  (4)

Noting that χ(t) is nondecreasing, we have D⁺χ(t) ≥ 0 and χ(t − τ_ij(t)) ≤ χ(t). Moreover, according to conditions (2), (3) and (4), one can derive that
D⁺v_m(t)|_{t=t*}
= (D⁺u_m(t*) − z_m D⁺χ(t*)) ω(t*) + ω'(t*)(u_m(t*) − z_m χ(t*))
≤ D⁺u_m(t*) ω(t*) + ω'(t*)(u_m(t*) − z_m χ(t*))
≤ ω(t*) { Σ_{j=1}^n [α_mj(t*) u_j(t*) + β_mj(t*) u_j(t* − τ_mj(t*))] + J_m(t*) } + ω'(t*)(u_m(t*) − z_m χ(t*))
= ω(t*) Σ_{j=1}^n { α_mj(t*) [v_j(t*)/ω(t*) + z_j χ(t*)] + β_mj(t*) [v_j(t* − τ_mj(t*))/ω(t* − τ_mj(t*)) + z_j χ(t* − τ_mj(t*))] } + ω(t*) J_m(t*) + ω'(t*) v_m(t*)/ω(t*)
≤ κ(‖φ‖ + ε) { Σ_{j=1}^n [α_mj(t*) + β_mj(t*) ω(t*)/ω(t* − τ_mj(t*))] z_j + δ z_m } + ω(t*) χ(t*) { Σ_{j=1}^n [α_mj(t*) + β_mj(t*)] z_j + δ z_m }
< 0.

This contradicts D⁺v_m(t*) ≥ 0 in (4). Hence, for any ε > 0, we have v_i(t) < κ(‖φ‖ + ε)z_i, and so u_i(t) ≤ [χ(t) + κω⁻¹(t)‖φ‖]z_i, t ≥ t₀, i ∈ N. The proof is completed.

Remark 2.1. The above result is a new type of generalized Halanay inequality without the assumption of bounded delays. In fact, it is also an extension of the scalar inequality with constant coefficients given in [18], and of the inequalities with constant coefficients and inputs J(t) = 0 given in [17]. The result will play an important role in the study of the dynamics of non-autonomous neural networks with unbounded delays.

Remark 2.2. To seek a proper function ω(t) satisfying (3), one can refer to the specific functions ω(t) given in [17] for different cases of unbounded delays. Especially, when τ_ij(t) ≤ τ (τ is a constant), we may choose ω(t) = e^{λ(t−t₀)} ∈ W, so that ω'(t)/ω(t) = λ and ω(t)/ω(t − τ_ij(t)) ≤ e^{λ(t−t₀)}/e^{λ(t−t₀−τ)} = e^{λτ}. If

λ z_i + Σ_{j=1}^n [α_ij(t) + β_ij(t) e^{λτ}] z_j < 0,

then the conclusion of Lemma 2.4 becomes the following exponential estimate:

u_i(t) ≤ [χ(t) + κ e^{−λ(t−t₀)} ‖φ‖] z_i, t ≥ t₀,

which may lead to results on exponential (input-to-state) stability. When α_ij(t) ≡ p_ij, β_ij(t) ≡ q_ij, J_i(t) ≡ 0, we easily see that the above exponential estimate is consistent with the conclusion given in [44].
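For the bounded-delay case in Remark 2.2, an admissible decay rate λ can be located by bisection, since the left-hand side of the condition is increasing in λ. A minimal sketch, assuming constant coefficient matrices A = (α_ij), B = (β_ij); the particular data at the end are illustrative only:

```python
import numpy as np

def max_decay_rate(A, B, z, tau, lam_hi=10.0, iters=60):
    """Largest lambda (to bisection accuracy) such that
       lambda*z_i + sum_j (A[i,j] + B[i,j]*exp(lambda*tau)) z_j < 0 for all i."""
    def ok(lam):
        return np.all(lam * z + (A + B * np.exp(lam * tau)) @ z < 0)
    if not ok(0.0):              # the condition must already hold at lambda = 0
        return None
    lo, hi = 0.0, lam_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ok(mid) else (lo, mid)
    return lo

# illustrative data: strongly negative diagonal, small nonnegative B
A = np.array([[-4.0, 0.5], [0.5, -4.0]])
B = np.array([[0.5, 0.2], [0.2, 0.5]])
print(max_decay_rate(A, B, z=np.ones(2), tau=1.0))
```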
3. Global dynamics

Invariant sets and attracting sets describe two important dynamical behaviors of dynamical systems. They can be employed to discuss dynamics such as the existence of solutions, boundedness and attractors of the systems. In order to obtain our results, we introduce the following assumptions.

(H1) All f_j and g_j satisfy the global Lipschitz condition; that is, there exist k_j > 0 and l_j > 0 such that
|f_j(u) − f_j(v)| ≤ k_j|u − v|, |g_j(u) − g_j(v)| ≤ l_j|u − v|, ∀u, v ∈ R, j ∈ N.

(H2) Let J ⊂ N. There exist continuous nonnegative functions h_i(t) and constants â_i, b̂_ij ≥ 0, ĉ_ij ≥ 0, Î_i ≥ 0, k̂_j ≥ 0 such that, for t ≥ 0,

a_i(t) ≥ â_i h_i(t), ∀i ∈ N, with â_i > 0 for i ∈ N − J,
|b_ij(t)| ≤ h_i(t) b̂_ij, |c_ij(t)| ≤ h_i(t) ĉ_ij, |I_i(t)| ≤ h_i(t) Î_i, ∀i, j ∈ N,
b_jj(t) sgn(u − v)(f_j(u) − f_j(v)) ≤ −k̂_j b̂_jj h_j(t)|u − v|, ∀j ∈ J, ∀u, v ∈ R,

where sgn(s) is the sign function of s.

(H3) Â − P − Q is a nonsingular M-matrix, where Â = diag{â₁, ..., â_n}, P = (p_ij)_{n×n}, Q = (q_ij)_{n×n}, p_ij = b̂_ij[−k̂_i σ_ij + k_j(1 − σ_ij)], q_ij = ĉ_ij l_j, and σ_ij = 1 if i = j ∈ J (J given by (H2)), σ_ij = 0 otherwise.
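Assumption (H3) can be tested mechanically: a matrix with nonpositive off-diagonal entries is a nonsingular M-matrix if and only if all of its leading principal minors are positive. A small sketch of such a test:

```python
import numpy as np

def is_nonsingular_M_matrix(M, tol=1e-12):
    """Check (H3): M must be a Z-matrix (nonpositive off-diagonal entries)
    whose leading principal minors are all positive."""
    M = np.asarray(M, dtype=float)
    off = M - np.diag(np.diag(M))
    if np.any(off > tol):
        return False
    n = M.shape[0]
    return all(np.linalg.det(M[:k, :k]) > tol for k in range(1, n + 1))
```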
Remark 3.1. The above assumption (H2) shows that the time-varying parameters in the ith subsystem have a common factor h_i(t), which generalizes the assumption h₁(t) = ··· = h_n(t) = h(t) used in [13,14] to discuss dynamical behaviors of non-autonomous neural networks with bounded delays. Furthermore, for a given subset
J, the requirement on the activation functions is flexible, since we allow all or part of the activation functions f_j, j ∈ J, to be monotonically increasing with b_jj(t) ≤ 0, or monotonically decreasing with b_jj(t) ≥ 0.

By Lemma 2.4, we first give a generalized result on the positive invariant set and attracting set.

Theorem 3.1. Let (H1) hold and ζ(t) = (ζ₁(t), ..., ζ_n(t))ᵀ with ζ_i(t) = |I_i(t)| + Σ_{j=1}^n [|b_ij(t) f_j(0)| + |c_ij(t) g_j(0)|]. If there exists a positive constant δ such that

−a_i(t) + Σ_{j=1}^n (|b_ij(t)| k_j + |c_ij(t)| l_j) < −δ, t ≥ t₀, i = 1, ..., n,  (5)

then S = {φ ∈ C : ‖φ‖ ≤ ‖ζ‖_∞/δ} is a positive invariant set of system (1). Additionally, S is a globally attracting set of system (1) provided that there are a positive constant δ and a function ω(t) ∈ W satisfying lim_{t→∞} ω(t) = ∞ and ω'(t)/ω(t) ≤ δ such that

−a_i(t) + Σ_{j=1}^n [|b_ij(t)| k_j + |c_ij(t)| l_j ω(t)/ω(t − τ_ij(t))] < −δ, t ≥ t₀, i = 1, ..., n.  (6)
Proof. By (H1), we calculate the upper right derivative along system (1) as follows:

D⁺|x_i(t)| = sgn(x_i(t)) x_i'(t)
≤ −a_i(t)|x_i(t)| + Σ_{j=1}^n |b_ij(t)||f_j(x_j(t)) − f_j(0)| + Σ_{j=1}^n |c_ij(t)||g_j(x_j(t − τ_ij(t))) − g_j(0)| + ζ_i(t)
≤ −a_i(t)|x_i(t)| + Σ_{j=1}^n [|b_ij(t)| k_j |x_j(t)| + |c_ij(t)| l_j |x_j(t − τ_ij(t))|] + ζ_i(t).  (7)

When ‖φ‖ ≤ ‖ζ‖_∞/δ, we have |x_i(t)| ≤ ‖ζ‖_∞/δ for t ≤ t₀. We shall prove that |x_i(t)| ≤ ‖ζ‖_∞/δ + ε for t ≥ t₀ and any ε > 0. Otherwise, there must be m and t₁ > t₀ such that

|x_m(t₁)| = ‖ζ‖_∞/δ + ε, |x_i(t)| ≤ ‖ζ‖_∞/δ + ε, t ≤ t₁, i ∈ N, D⁺|x_m(t₁)| ≥ 0.  (8)

Then, from (5) and (7),

D⁺|x_m(t₁)| ≤ −a_m(t₁)|x_m(t₁)| + Σ_{j=1}^n [|b_mj(t₁)| k_j |x_j(t₁)| + |c_mj(t₁)| l_j |x_j(t₁ − τ_mj(t₁))|] + ζ_m(t₁)
≤ [−a_m(t₁) + Σ_{j=1}^n (|b_mj(t₁)| k_j + |c_mj(t₁)| l_j)] (‖ζ‖_∞/δ + ε) + ‖ζ‖_∞
≤ −δ(‖ζ‖_∞/δ + ε) + ‖ζ‖_∞ = −δε < 0.

This contradicts (8), which means that |x_i(t)| ≤ ‖ζ‖_∞/δ + ε for t ≥ t₀ and any ε > 0. So we obtain the positive invariant set S of system (1).

Furthermore, let p_ii(t) = −a_i(t) + k_i|b_ii(t)|, p_ij(t) = k_j|b_ij(t)| (i ≠ j), q_ij(t) = l_j|c_ij(t)| and χ(t) = ‖ζ‖_∞/δ. It follows from Lemma 2.4 and (6) that

|x_i(t)| ≤ ‖ζ‖_∞/δ + ω⁻¹(t)‖φ‖, t ≥ t₀.  (9)

Since lim_{t→∞} ω(t) = ∞, S is a globally attracting set of system (1). This completes the proof.
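Before applying Theorem 3.1, condition (5) can be screened numerically on a time grid (a grid check is only a heuristic, not a proof for all t ≥ t₀); the coefficient functions below are illustrative:

```python
import numpy as np

def check_condition_5(a, b, c, k, l, delta, ts):
    """Screen -a_i(t) + sum_j (|b_ij(t)| k_j + |c_ij(t)| l_j) < -delta on grid ts.
    a(t): length-n array; b(t), c(t): n x n arrays; k, l: Lipschitz constants."""
    for t in ts:
        lhs = -a(t) + np.abs(b(t)) @ k + np.abs(c(t)) @ l
        if np.any(lhs >= -delta):
            return False, t
    return True, None

# illustrative two-neuron data
a = lambda t: np.array([5 + abs(np.sin(t)), 5 + abs(np.cos(t))])
b = lambda t: 0.5 * np.eye(2)
c = lambda t: np.array([[0.5, 0.2], [0.2, 0.5]])
k = l = np.ones(2)
print(check_condition_5(a, b, c, k, l, delta=0.1, ts=np.linspace(0, 50, 2001)))
```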
When the variable parameters in the ith subsystem have the common factor h_i(t) given in (H2), we can obtain the following positive invariant set and globally attracting set by the nonsingular M-matrix method.

Theorem 3.2. Assume that conditions (H1)–(H3) are satisfied. Let S̄ = {φ ∈ C : [φ]⁺ ≤ N = (Â − P − Q)⁻¹ ζ̂}, where ζ̂ = (ζ̂₁, ..., ζ̂_n)ᵀ, ζ̂_i = sup_{t≥t₀} ζ_i(t)/h_i(t) (so that ζ_i(t) ≤ h_i(t)ζ̂_i), with ζ_i(t) given in Theorem 3.1. Then S̄ is a positive invariant set of system (1). Additionally, S̄ is a globally attracting set of system (1) provided that h_i, i ∈ N, satisfies lim_{t→∞} ∫₀ᵗ h_i(s)ds = ∞ and that, for any M > 0, there exist constants T, T' > 0 such that ∫₀ᵀ h_i(t − s)ds ≥ M for all t ≥ t₀ + T + T'.

Proof. Without loss of generality, we let ζ̂ > 0. Since Â − P − Q is a nonsingular M-matrix, we have (Â − P − Q)⁻¹ ≥ 0 and N = (N₁, ..., N_n)ᵀ = (Â − P − Q)⁻¹ ζ̂ > 0. We now prove that, for t ≥ t₀ and φ ∈ C with [φ]⁺ ≤ N,

[x(t)]⁺ ≤ N, ∀t ≥ t₀,

where x(t) = x(t; t₀, φ). First, we shall prove that, for any given d > 1, [φ]⁺ < dN implies [x(t)]⁺ < dN for all t ≥ t₀. If not, there must be m and t₂ > t₀ such that

|x_m(t₂)| = dN_m, |x_m(t)| < dN_m, t < t₂, and |x_i(t)| ≤ dN_i, ∀i ∈ N, t₀ ≤ t ≤ t₂.  (10)

On the other hand, by conditions (H1), (H2) and Eq. (1), we have

D⁺|x_i(t)| ≤ −a_i(t)|x_i(t)| + h_i(t) Σ_{j=1}^n [b̂_ij(−k̂_i σ_ij + k_j(1 − σ_ij))|x_j(t)| + ĉ_ij l_j |x_j(t − τ_ij(t))|] + h_i(t) ζ̂_i
≤ −â_i h_i(t)|x_i(t)| + h_i(t) [Σ_{j=1}^n p_ij|x_j(t)| + Σ_{j=1}^n q_ij|x_j(t − τ_ij(t))| + ζ̂_i]  (11)
= −(â_i − p_ii) h_i(t)|x_i(t)| + h_i(t) [Σ_{j=1, j≠i}^n p_ij|x_j(t)| + Σ_{j=1}^n q_ij|x_j(t − τ_ij(t))| + ζ̂_i], ∀i ∈ N, t ≥ t₀.  (12)

Let ĥ_i(t) = (â_i − p_ii) h_i(t) for i = 1, ..., n. Then

D⁺|x_i(t)| ≤ −ĥ_i(t)|x_i(t)| + h_i(t) [Σ_{j=1, j≠i}^n p_ij|x_j(t)| + Σ_{j=1}^n q_ij|x_j(t − τ_ij(t))| + ζ̂_i].  (13)
From [φ]⁺ < dN, (10), (11) and (13), the variation-of-constants formula gives

|x_m(t₂)| ≤ e^{−∫_{t₀}^{t₂} ĥ_m(s)ds} [φ_m]⁺ + ∫_{t₀}^{t₂} e^{−∫_s^{t₂} ĥ_m(θ)dθ} (ĥ_m(s)/(â_m − p_mm)) [Σ_{j≠m} p_mj|x_j(s)| + Σ_{j=1}^n q_mj|x_j(s − τ_mj(s))| + ζ̂_m] ds
≤ e^{−∫_{t₀}^{t₂} ĥ_m(s)ds} dN_m + (1 − e^{−∫_{t₀}^{t₂} ĥ_m(s)ds}) (1/(â_m − p_mm)) [Σ_{j≠m} p_mj dN_j + Σ_{j=1}^n q_mj dN_j + ζ̂_m].  (14)

Since Â − P − Q is a nonsingular M-matrix and N = (Â − P − Q)⁻¹ ζ̂, one can get (P + Q)N + ζ̂ = ÂN, that is,

Σ_{j=1, j≠i}^n p_ij N_j + Σ_{j=1}^n q_ij N_j + ζ̂_i = (â_i − p_ii) N_i, i = 1, ..., n.  (15)

Noting that e^{−∫_{t₀}^{t₂} ĥ_m(s)ds} ≤ 1 and ζ̂_m < dζ̂_m for d > 1, from (14) and (15) we obtain

|x_m(t₂)| < e^{−∫_{t₀}^{t₂} ĥ_m(s)ds} dN_m + (1 − e^{−∫_{t₀}^{t₂} ĥ_m(s)ds}) (d/(â_m − p_mm)) [Σ_{j≠m} p_mj N_j + Σ_{j=1}^n q_mj N_j + ζ̂_m]
= e^{−∫_{t₀}^{t₂} ĥ_m(s)ds} dN_m + (1 − e^{−∫_{t₀}^{t₂} ĥ_m(s)ds}) dN_m = dN_m,

which contradicts the equality in (10). Hence [x(t)]⁺ < dN holds for all t ≥ t₀. Letting d → 1 gives [x(t)]⁺ ≤ N, and so S̄ is a positive invariant set of system (1). In fact, for any k ≥ 1, we easily see from the above proof that S_k = {φ ∈ C : [φ]⁺ ≤ kN = k(Â − P − Q)⁻¹ ζ̂} is also a positive invariant set, since ζ̂ ≤ kζ̂.

In the following, we shall prove that S̄ is a globally attracting set. From (13), we have

[x(t)]⁺ ≤ e^{−∫_{t₀}^t Ĥ(s)ds} [φ]⁺ + ∫_{t₀}^t e^{−∫_s^t Ĥ(θ)dθ} Ĥ(s) Ā⁻¹ [P̄[x(s)]⁺ + Q[x(s − τ(s))]⁺ + ζ̂] ds.

Here Ā = diag{â₁ − p₁₁, ..., â_n − p_nn}, P̄ = (p̄_ij)_{n×n} with p̄_ii = 0 and p̄_ij = p_ij (i ≠ j), and Ĥ(t) = diag{ĥ₁(t), ..., ĥ_n(t)}. For any initial function x(t₀ + s) = φ(s), s ∈ (−∞, 0], we can find a k ≥ 1 such that [φ]⁺ ≤ kN = k(Â − P − Q)⁻¹ ζ̂. It follows from the positive invariance of S_k that [x(t, φ)]⁺ ≤ kN, so the non-negative vector

σ := lim sup_{t→+∞} [x(t)]⁺

is well defined. For any given ε > 0, by the definition of σ and since t − τ_ij(t) → ∞, there is a number T' > 0 such that [x(s)]⁺ ≤ σ + ε and [x(s − τ(s))]⁺ ≤ σ + ε (componentwise) for s ≥ t₀ + T'; and, by the assumption on h_i(t), there is a large enough number T > 0 satisfying e^{−∫₀ᵀ Ĥ(t−s)ds} ≤ εE for t ≥ t₀ + T + T'. Then, when t ≥ t₀ + T + T', splitting the integral at t − T and using [x(s)]⁺ ≤ kN on [t₀, t − T], we have

[x(t)]⁺ ≤ e^{−∫_{t₀}^t Ĥ(s)ds} kN + ∫_{t₀}^{t−T} e^{−∫_s^t Ĥ(θ)dθ} Ĥ(s)ds Ā⁻¹ [P̄kN + QkN + ζ̂] + ∫_{t−T}^{t} e^{−∫_s^t Ĥ(θ)dθ} Ĥ(s)ds Ā⁻¹ [P̄(σ + ε) + Q(σ + ε) + ζ̂]
= e^{−∫_{t₀}^t Ĥ(s)ds} kN + (e^{−∫_{t−T}^t Ĥ(θ)dθ} − e^{−∫_{t₀}^t Ĥ(θ)dθ}) Ā⁻¹ [P̄kN + QkN + ζ̂] + (E − e^{−∫_{t−T}^t Ĥ(θ)dθ}) Ā⁻¹ [P̄(σ + ε) + Q(σ + ε) + ζ̂]
≤ e^{−∫_{t₀}^t Ĥ(s)ds} kN + ε Ā⁻¹ [P̄kN + QkN + ζ̂] + Ā⁻¹ [P̄(σ + ε) + Q(σ + ε) + ζ̂],  (16)

where we used e^{−∫_{t−T}^t Ĥ(θ)dθ} = e^{−∫₀ᵀ Ĥ(t−s)ds} ≤ εE. Thus,

σ = lim sup_{t→+∞} [x(t)]⁺ ≤ ε Ā⁻¹ [P̄kN + QkN + ζ̂] + Ā⁻¹ [P̄(σ + ε) + Q(σ + ε) + ζ̂].

Letting ε → 0, we get σ ≤ Ā⁻¹ [P̄σ + Qσ + ζ̂], and so (Ā − P̄ − Q)σ ≤ ζ̂. Note that Ā − P̄ − Q = Â − P − Q is a nonsingular M-matrix, so (Â − P − Q)⁻¹ exists and is nonnegative. Therefore σ ≤ (Â − P − Q)⁻¹ ζ̂ = N. That is, S̄ is a globally attracting set of system (1). The proof is complete.

Remark 3.2. In [13], the authors investigated non-autonomous neural networks with bounded delays (i.e., τ_ij(t) ≤ τ). Under the stricter assumption h₁(t) = ··· = h_n(t), some results on the positive invariant set and globally attracting set were obtained there. Some similar results are
derived in [14,15]. In our Theorem 3.2, however, we remove these assumptions and obtain the positive invariant set and globally attracting set of neural networks with either bounded or unbounded delays.

Based on Theorems 3.1 and 3.2, we easily obtain the following result on boundedness, which also implies that solutions of system (1) exist globally.

Theorem 3.3. Assume that (H1) and (5), or (H1)–(H3), hold. Then the solutions of system (1) are uniformly bounded.

Proof. For any initial value φ ∈ C, there exists a vector J ≥ 0 such that [φ]⁺ ≤ J and ζ(t) ≤ J, where ζ is given in Theorem 3.1. From the proofs of Theorems 3.1 and 3.2, we obtain the positive invariant set S = {φ ∈ C : ‖φ‖ ≤ ‖J‖_∞/δ} when (H1) and (5) hold, or S̄ = {φ ∈ C : [φ]⁺ ≤ N = (Â − P − Q)⁻¹J} when (H1)–(H3) hold. Therefore, all solutions of (1) are uniformly bounded, and the proof is complete.

When system (1) is T-periodic, we further investigate the existence of a periodic attractor and its range by using the invariant sets obtained in Theorems 3.1 and 3.2.

Theorem 3.4. Let a_i(t), b_ij(t), c_ij(t), I_i(t), τ_ij(t) be T-periodic functions. If (H1) and (5) hold, then system (1) has exactly one T-periodic solution x_T(t) ∈ S = {φ ∈ C : ‖φ‖ ≤ ‖ζ‖_∞/δ}, which is globally asymptotically stable.

Proof. By a derivation similar to that in Theorem 3.1, it follows from Lemma 2.4 with J_i(t) = 0 that

‖x(t, φ) − x(t, ψ)‖ ≤ ω⁻¹(t)‖φ − ψ‖, t ≥ 0,

where x(t, φ) and x(t, ψ) are the solutions through (0, φ) and (0, ψ), respectively. Combining this with Lemma 8 given in [13], we easily see that system (1) has one globally asymptotically stable T-periodic solution x_T(t) ∈ S. The proof is complete.

In addition, when the coefficients have a common periodic factor in every subsystem of Eq. (1), we have:

Theorem 3.5. Let (H1)–(H3) hold with h_i(t + T) = h_i(t) > 0, T > 0. Then system (1) has exactly one T-periodic solution x_T(t) ∈ S̄ = {φ ∈ C : [φ]⁺ ≤ N = (Â − P − Q)⁻¹ ζ̂}, which is globally exponentially stable.

Proof. Let y_i(t) = x_i(t, φ) − x_i(t, ψ). By a proof similar to that of (11) under conditions (H2) and (H3), we have

D⁺|y_i(t)| ≤ −(â_i − p_ii) h_i(t)|y_i(t)| + h_i(t) [Σ_{j=1, j≠i}^n p_ij|y_j(t)| + Σ_{j=1}^n q_ij|y_j(t − τ_ij(t))|], ∀i ∈ N, t ≥ t₀.  (17)

Since Â − P − Q is a nonsingular M-matrix, there is a positive vector z = (z₁, ..., z_n)ᵀ such that (Â − P − Q)z > 0. From the periodic functions h_i(t) > 0 and τ_ij(t) ≤ τ, we can find a small enough number λ > 0 such that

λz_i − â_i z_i h_i(t) + h_i(t) Σ_{j=1}^n (p_ij + q_ij e^{λτ}) z_j < 0, i ∈ N.

It follows from Lemma 2.4 with ω(t) = e^{λt} and Remark 2.2 that |y_i(t)| ≤ e^{−λt} (z_i/min_{i∈N} z_i) ‖φ − ψ‖. By Theorem 3.2 and Lemma 8 given in [13], system (1) then has one globally exponentially stable T-periodic solution x_T(t) ∈ S̄.

Remark 3.3. In [13], the authors gave sufficient conditions for finding the periodic attractor and its range for periodic neural networks under the assumption h₁(t) = ··· = h_n(t). Theorem 3.5 removes this assumption and obtains the same conclusion.
4. Weight learning algorithm

In this section, we design two weight learning algorithms to obtain input-to-state stability and present some globally attracting sets for the neural networks with time delays. To this end, we always assume that f_j(0) = g_j(0) = 0, j ∈ N; that is, x = 0 is the trivial solution of system (1).

Theorem 4.1. Assume that (H1) holds and there exist positive constants ε, δ, z_i, p ≥ 2 and a function ω(t) ∈ W satisfying lim_{t→∞} ω(t) = ∞, ω'(t)/ω(t) ≤ δ and

(−a_i(t)p + (p − 1)ε) z_i + Σ_{j=1}^n [(p + ε^{−p+1}) k_j^p + (p − 1)ε l_j^p ω(t)/ω(t − τ_ij(t))] z_j < −δ z_i, i ∈ N.  (18)

If the weight matrices b_ij(t), c_ij(t) are updated as

b_ij(t) = |f_j(x_j(t))|^{p−1}/|x_i(t)|^{p−1} if x ≠ 0, and b_ij(t) = 0 if x = 0,  (19)

c_ij(t) = |f_j(x_j(t))| |g_j(x_j(t − τ_ij(t)))|^{p−2}/|x_i(t)|^{p−1} if x ≠ 0, and c_ij(t) = 0 if x = 0,  (20)

then the learning law makes the neural network (1) input-to-state stable in the following sense:

|x_i(t, φ)|^p ≤ (z_i/min_{i∈N} z_i) [ω⁻¹(t)‖φ‖^p + γ‖I‖^p_∞], t ≥ t₀,

and the set S = {φ ∈ C : [φ_i]⁺ ≤ (γ z_i/min_{i∈N} z_i)^{1/p} ‖I‖_∞} is a globally attracting set, where γ = ε^{−p+1} δ⁻¹.

Proof. Set V_i(t) = |x_i(t)|^p (p ≥ 2), i = 1, ..., n. Then ∂V_i/∂x_i = p|x_i|^{p−1} sgn(x_i), where sgn(·) is the sign function. The derivative of V_i(t) along the trajectory of (1) is

D⁺V_i(t) = p|x_i(t)|^{p−1} sgn(x_i(t)) [−a_i(t)x_i(t) + Σ_{j=1}^n b_ij(t) f_j(x_j(t)) + Σ_{j=1}^n c_ij(t) g_j(x_j(t − τ_ij(t))) + I_i(t)]
≤ −a_i(t)p|x_i(t)|^p + p|x_i(t)|^{p−1} Σ_{j=1}^n b_ij(t)|f_j(x_j(t))| + p|x_i(t)|^{p−1} Σ_{j=1}^n c_ij(t)|g_j(x_j(t − τ_ij(t)))| + p|x_i(t)|^{p−1}|I_i(t)|
≤ −a_i(t)p|x_i(t)|^p + p|x_i(t)|^{p−1} Σ_{j=1}^n b_ij(t)|f_j(x_j(t))| + p|x_i(t)|^{p−1} Σ_{j=1}^n c_ij(t)|g_j(x_j(t − τ_ij(t)))| + (p − 1)ε|x_i(t)|^p + ε^{−p+1}|I_i(t)|^p,

where the last step applies Lemma 2.3 to p|x_i(t)|^{p−1}|I_i(t)|.
If we use the updating law in (19) and (20) and Lemma 2.3, we derive

D⁺V_i(t) ≤ (−a_i(t)p + (p − 1)ε)|x_i(t)|^p + Σ_{j=1}^n p|f_j(x_j(t))|^p + Σ_{j=1}^n p|f_j(x_j(t))||g_j(x_j(t − τ_ij(t)))|^{p−1} + ε^{−p+1}|I_i(t)|^p
≤ (−a_i(t)p + (p − 1)ε)|x_i(t)|^p + Σ_{j=1}^n p|f_j(x_j(t))|^p + Σ_{j=1}^n [(p − 1)ε|g_j(x_j(t − τ_ij(t)))|^p + ε^{−p+1}|f_j(x_j(t))|^p] + ε^{−p+1}|I_i(t)|^p
≤ (−a_i(t)p + (p − 1)ε)|x_i(t)|^p + Σ_{j=1}^n (p + ε^{−p+1}) k_j^p |x_j(t)|^p + Σ_{j=1}^n (p − 1)ε l_j^p |x_j(t − τ_ij(t))|^p + ε^{−p+1}|I_i(t)|^p.

Therefore,

D⁺V_i(t) ≤ Σ_{j=1}^n [p_ij(t)V_j(t) + q_ij(t)V_j(t − τ_ij(t))] + ε^{−p+1}|I_i(t)|^p,

where

p_ii(t) = −a_i(t)p + (p − 1)ε + (p + ε^{−p+1})k_i^p, p_ij(t) = (p + ε^{−p+1})k_j^p (i ≠ j), q_ij(t) = (p − 1)ε l_j^p.

By (18) and Lemma 2.4, we obtain

|x_i(t)|^p ≤ (z_i/min_{i∈N} z_i) [ω⁻¹(t)‖φ‖^p + ε^{−p+1}δ⁻¹‖I‖^p_∞], t ≥ t₀.

Thus, lim sup_{t→∞} |x_i(t)|^p ≤ (z_i/min_{i∈N} z_i) γ‖I‖^p_∞. Clearly, the set S = {φ ∈ C : [φ_i]⁺ ≤ (γ z_i/min_{i∈N} z_i)^{1/p} ‖I‖_∞} is a globally attracting set of (1). This completes the proof.
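For intuition, the update law (19)–(20) simply recomputes the weights from the current and delayed states at each instant. A minimal sketch for general p ≥ 2; treating the x = 0 branch componentwise and using one common delayed state vector are simplifying assumptions, not details fixed by the paper:

```python
import numpy as np

def weights_from_law(x_now, x_delay, f, g, p=2):
    """ISS learning law (19)-(20): recompute b_ij(t), c_ij(t) from the
    current state x(t) and a delayed state x(t - tau)."""
    n = len(x_now)
    B = np.zeros((n, n))
    C = np.zeros((n, n))
    fx = np.abs(f(x_now))
    gx = np.abs(g(x_delay))
    for i in range(n):
        if x_now[i] == 0.0:           # the law sets weights to zero at x = 0
            continue
        denom = abs(x_now[i]) ** (p - 1)
        for j in range(n):
            B[i, j] = fx[j] ** (p - 1) / denom          # (19)
            C[i, j] = fx[j] * gx[j] ** (p - 2) / denom  # (20)
    return B, C
```

Plugged into an integrator such as the Euler sketch of Section 2, these weights reproduce the cancellation used in the proof of Theorem 4.1.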
When τ_ij(t) ≡ τ(t), we write system (1) in vector form:

x'(t) = −A(t)x(t) + B(t)f(x(t)) + C(t)g(x(t − τ(t))) + I(t),  (21)

where x(t) = (x₁(t), ..., x_n(t))ᵀ ∈ Rⁿ, I(t) = (I₁(t), ..., I_n(t))ᵀ, f(x(t)) = (f₁(x₁(t)), ..., f_n(x_n(t)))ᵀ, g(x(t − τ(t))) = (g₁(x₁(t − τ(t))), ..., g_n(x_n(t − τ(t))))ᵀ, B(t) = (b_ij(t))_{n×n}, C(t) = (c_ij(t))_{n×n}, A(t) = diag{a₁(t), ..., a_n(t)}. Let K = diag{k₁, ..., k_n} and L = diag{l₁, ..., l_n}. By a different Lyapunov function, we have:

Theorem 4.2. Let (H1) hold and a(t) = μ(−A(t)P − PA(t)). Assume that there exist positive constants ε, δ, an invertible matrix Q with P = QᵀQ, and a function ω(t) ∈ W satisfying lim_{t→∞} ω(t) = ∞, ω'(t)/ω(t) ≤ δ, such that

a(t) + ε + 2‖QKQ⁻¹‖² + 2‖QLQ⁻¹‖² ω(t)/ω(t − τ(t)) < −δ.  (22)

If the weight matrices B(t), C(t) are updated as

B(t) = P⁻¹x(t)fᵀ(x(t))P / (xᵀ(t)x(t)) if x ≠ 0, and B(t) = 0 if x = 0,  (23)

C(t) = P⁻¹x(t)gᵀ(x(t − τ(t)))P / (xᵀ(t)x(t)) if x ≠ 0, and C(t) = 0 if x = 0,  (24)

then the learning law makes the neural network (21) input-to-state stable, and the set S = {φ ∈ C : ‖φ‖ ≤ √(δ⁻¹ε⁻¹ λ_max(QᵀQ)/λ_min(QᵀQ)) ‖I‖_∞} is a globally attracting set.
When τ ij (t) ≡ τ (t), we write the system (1) in vector form:
B(t ) =
7
(23)
∀t ≥ t0 .
Therefore, we have λmin (P )xT (t )x(t ) ≤ xT (t )P x(t ) ≤ ω−1 (t )φ + δ −1 ε −1 J∞ . That is
x2 ≤
1
λmin (P )
ω−1 (t )φ +
δ −1 ε −1 J∞ . λmin (P )
And so the set S = {φ ∈ C |φ ≤ tive set. Then proof is complete.
δ −1 −1 J∞ λmin (P ) } is a globally attrac-
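Condition (22) is also easy to screen numerically once Q, K, L and a(t) are fixed; a short sketch (again a grid check only, with illustrative data in the call):

```python
import numpy as np

def check_condition_22(a, omega, tau, Q, K, L, eps, delta, ts):
    """Grid screen of (22): a(t) + eps + 2||QKQ^-1||^2
       + 2||QLQ^-1||^2 * omega(t)/omega(t - tau(t)) < -delta."""
    Qi = np.linalg.inv(Q)
    nK = np.linalg.norm(Q @ K @ Qi, 2) ** 2
    nL = np.linalg.norm(Q @ L @ Qi, 2) ** 2
    for t in ts:
        lhs = a(t) + eps + 2 * nK + 2 * nL * omega(t) / omega(t - tau(t))
        if lhs >= -delta:
            return False, t
    return True, None

# data in the spirit of Example 5.2, Case 2 below
ok, bad_t = check_condition_22(
    a=lambda t: -10.0, omega=lambda t: np.log(1 + 0.2 * max(t, 0.0)) + 1,
    tau=lambda t: 0.5 * t, Q=np.eye(2), K=np.eye(2), L=np.eye(2),
    eps=1.0, delta=0.2, ts=np.linspace(0.0, 100.0, 1001))
print(ok)
```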
Remark 4.1. Let τ_ij(t) ≤ τ (τ a constant) in system (1). According to Remark 2.2, Theorems 4.1 and 4.2 then give learning algorithms that guarantee exponential input-to-state stability and global exponential stability (when I(·) = 0) of the trivial solution.

Remark 4.2. It should be pointed out that the proposed method is different from the existing ones in [27,28]. By applying the generalized Halanay inequality, we can give some new learning algorithms to obtain dynamical properties of system (1) with unbounded delays.

5. Examples and simulation

In this section, some examples and simulations are given to demonstrate the effectiveness of the obtained results.
Fig. 1. The transient state of x1 (t), x2 (t) in Example 5.1.
Example 5.1. Consider the non-autonomous Hopfield neural network with unbounded delays

dx₁(t)/dt = −3x₁(t) − cos(t²)g₁(x₁(t − 0.9t)) + 2 sin(t²)g₂(x₂(t − 0.6 ln(1 + t))) + 2 sin(eᵗ),
dx₂(t)/dt = −3t x₂(t) + t sin(t²)g₁(x₁(t − 0.8 ln(1 + t))) − t cos(t²)g₂(x₂(t − 0.8t)) + t,  (25)
Fig. 2. τ (t) is bounded, the transient state of x1 (t), x2 (t) in Example 5.2.
where the sigmoid function is g_j(s) = arctan(s). In (H1)–(H3), we take h₁(t) ≡ 1, h₂(t) = t, k_j = l_j = 1, J = N, k̂_j = 0, â₁ = â₂ = 3, b̂_ij = 0, ĉ₁₁ = 1, ĉ₁₂ = 2, ĉ₂₁ = 1, ĉ₂₂ = 1, Î₁ = Î₂ = 1. We easily verify that

Â = [3, 0; 0, 3], P = [0, 0; 0, 0], Q = [1, 2; 1, 1], Â − P − Q = [2, −2; −1, 2].

Clearly, Â − P − Q is a nonsingular M-matrix. It follows from Theorem 3.2 that S = {φ ∈ C : ‖φ₁‖ ≤ 2, ‖φ₂‖ ≤ 1.5} is a positive invariant set and a globally attracting set. Fig. 1 shows the boundedness and attractiveness of solutions of system (25) for different initial values x_i(t) = x_i⁰ (constant), t ≤ 0. The conclusion requires no boundedness assumption on the coefficients and delays, and so cannot be obtained from the existing publications.
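The set S in Example 5.1 can be reproduced numerically from Theorem 3.2; a quick cross-check:

```python
import numpy as np

A_hat = np.diag([3.0, 3.0])
P = np.zeros((2, 2))
Q = np.array([[1.0, 2.0], [1.0, 1.0]])
M = A_hat - P - Q                    # [[2, -2], [-1, 2]]
# nonsingular M-matrix: nonpositive off-diagonal, positive leading minors
assert M[0, 1] <= 0 and M[1, 0] <= 0 and M[0, 0] > 0 and np.linalg.det(M) > 0
zeta_hat = np.array([1.0, 1.0])
N = np.linalg.solve(M, zeta_hat)
print(N)                             # -> [2.  1.5], matching S above
```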
Fig. 3. τ(t) is bounded: the transient state of x₁(t), x₂(t) with I(t) = 0 in Example 5.2.
Example 5.2. Consider the following non-autonomous neural network:

dx_i(t)/dt = −a_i(t)x_i(t) + Σ_{j=1}² b_ij(t)f_j(x_j(t)) + Σ_{j=1}² c_ij(t)g_j(x_j(t − τ(t))) + I_i(t),  (26)

where a₁(t) = 5 + |sin t|, a₂(t) = 5 + |cos t|, I₁(t) = I₂(t) = 0.1 sin t, f_j(s) = tanh(s), g_j(s) = arctan(s).

Case 1: bounded delay τ(t) = 2.5. Take ω(t) = e^{βt}, β = 0.1, p = 2, ε = k_i = l_i = z_i = 1, δ = 0.1. It is easy to check that all the conditions in Theorem 4.1 are satisfied with the learning algorithm given in (19) and (20). Then the system is exponentially input-to-state stable by Remark 4.1, and the set S = {φ ∈ C : [φ]⁺ ≤ √10 ‖I‖_∞} is a globally attracting set. Take the initial condition x(t) = (1.3, −1)ᵀ, t ≤ 0. Fig. 2 shows the states x₁(t), x₂(t) with input I(t), which indicates that system (26) is exponentially ISS, and Fig. 3
Fig. 4. τ (t) is unbounded, the transient state of x1 (t), x2 (t) in Example 5.2.
shows the states of x₁(t), x₂(t) without input, which indicates that the equilibrium point of system (26) is globally exponentially stable.

Case 2: unbounded delay τ(t) = 0.5t. In Theorem 4.2, we take Q = E, ω(t) = ln(1 + 0.2t) + 1 for t ≥ t₀ and ω(t) = 1 for t ≤ t₀, k_i = l_i = ε = 1, δ = 0.2. Then we derive that, for all t ≥ 0, ω'(t)/ω(t) ≤ 0.2, lim_{t→∞} ω(t)/ω(t − τ(t)) = 1 and a(t) ≤ −5, so condition (22) holds. Using the update learning law in (23) and (24), it follows from Theorem 4.2 that the network is input-to-state stable and the set S = {φ ∈ C : ‖φ‖ ≤ 0.1} is a globally attracting set. Take the initial condition x(t) = (1.3, −1)ᵀ, t ≤ 0. Fig. 4 shows the time response trajectories of system (26) with the disturbance input I(t).
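The properties of ω(t) claimed in Case 2 can be confirmed numerically; a quick sketch:

```python
import numpy as np

# omega(t) = ln(1 + 0.2 t) + 1 for t >= 0 (Case 2 of Example 5.2)
t = np.linspace(0.0, 1e4, 100001)[1:]       # skip t = 0 for the ratio below
omega = np.log(1 + 0.2 * t) + 1
domega = 0.2 / (1 + 0.2 * t)                # omega'(t)
print((domega / omega).max())               # < 0.2 (supremum 0.2 as t -> 0+)
ratio = omega / (np.log(1 + 0.1 * t) + 1)   # omega(t)/omega(t - 0.5 t)
print(ratio[-1])                            # ~1.09 at t = 1e4, tending to 1
```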
6. Conclusion

In this paper, we formulated a class of non-autonomous neural network models with time-varying delays, in which the time delays and system coefficients may be unbounded. By establishing a new generalized Halanay inequality and using M-matrix properties, we studied the global dynamical behaviors and gave estimates for the positive invariant set and global attracting set. Based on this, the periodic attractor and its existence range were given for the model. Furthermore, we provided weight learning algorithms to ensure input-to-state stability and gave the state estimate and attracting set for the system. The obtained results extend and improve earlier ones. The global dynamical behaviors and learning algorithms given in this paper should have important guiding significance for the design and applications of non-autonomous neural networks.

Declaration of interest

None.

Acknowledgments

The authors would like to thank the Associate Editor and anonymous reviewers for their valuable comments and suggestions. This work is supported partially by the National Natural Science Foundation of China (NSFC) under Grant nos. 11471061, 61673078 and 11701060, the Fundamental and Frontier Research Project of Chongqing under Grant nos. cstc2018jcyjAX0144 and KJ1704099, and the Program of Chongqing Innovation Team in University under Grant no. CXTDG201602008.
References

[1] L.O. Chua, L. Yang, Cellular neural networks: applications, IEEE Trans. Circuits Syst. 35 (1988) 1273–1290.
[2] M. Forti, New condition for global stability of neural networks with application to linear and quadratic programming problems, IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 42 (1995) 354–366.
[3] W.J. Freeman, Y. Yau, B. Burke, Central pattern generating and recognizing in olfactory bulb: a correlation learning rule, Neural Netw. 1 (1988) 277–288.
[4] C. Ulbricht, G. Dorffner, A. Lee, Neural networks for recognizing patterns in cardiotocograms, Artif. Intell. Med. 12 (1998) 271–284.
[5] X. He, C. Li, T. Huang, C. Li, J. Huang, A recurrent neural network for solving bilevel linear programming problem, IEEE Trans. Neural Netw. Learn. Syst. 25 (2014) 824–830.
[6] D. Xu, H. Zhao, H. Zhu, Global dynamics of Hopfield neural networks involving variable delays, Appl. Math. Comput. 42 (2001) 39–45.
[7] Q. Song, Exponential stability of recurrent neural networks with both time-varying delays and general activation functions via LMI approach, Neurocomputing 71 (2008) 2823–2830.
[8] D. Xu, Z. Yang, Impulsive delay differential inequality and stability of neural networks, J. Math. Anal. Appl. 305 (2005) 107–120.
[9] S. Mohamad, Stability in asymmetric Hopfield nets with transmission delays, Physica D 76 (1994) 344–358.
[10] L.V. Hien, V.N. Phat, H. Trinh, New generalized Halanay inequalities with applications to stability of nonlinear non-autonomous time-delay systems, Nonlinear Dyn. 82 (2015) 1–13.
[11] J. Oliveira, Global exponential stability of nonautonomous neural network models with unbounded delays, Neural Netw. 96 (2017) 71–79.
[12] B. Lu, H. Jiang, A. Abdurahman, H. Chen, Global generalized exponential stability for a class of nonautonomous cellular neural networks via generalized Halanay inequalities, Neurocomputing 214 (2016) 1046–1052.
[13] Y. Huang, D. Xu, Z. Yang, Dissipativity and periodic attractor for non-autonomous neural networks with time-varying delays, Neurocomputing 70 (2007) 2953–2958.
[14] Z. Yang, D. Xu, Global dynamics for non-autonomous reaction-diffusion neural networks with time-varying delays, Theor. Comput. Sci. 403 (2008) 3–10.
[15] D. Xu, S. Long, Attracting and quasi-invariant sets of non-autonomous neural networks with delays, Neurocomputing 77 (2012) 222–228.
[16] B. Liu, W. Lu, T. Chen, Generalized Halanay inequalities and their applications to neural networks with unbounded time-varying delays, IEEE Trans. Neural Netw. 22 (2011) 1508–1513.
[17] T. Chen, L. Wang, Global μ-stability of delayed neural networks with unbounded time-varying delays, IEEE Trans. Neural Netw. 18 (2007) 1836–1840.
[18] H. Li, C. Li, W. Zhang, et al., Global dissipativity of inertial neural networks with proportional delay via new generalized Halanay inequalities, Neural Process. Lett. 6 (2018) 1–19.
[19] D. Xu, H. Zhao, Invariant and attracting sets of Hopfield neural networks with delay, Int. J. Syst. Sci. 32 (2001) 863–866.
[20] D. He, D. Xu, Attracting and invariant sets of fuzzy Cohen–Grossberg neural networks with time-varying delays, Phys. Lett. A 372 (2008) 7057–7062.
[21] Y. Huang, W. Zhu, D. Xu, Invariant and attracting set of fuzzy cellular neural networks with variable delays, Appl. Math. Lett. 22 (2009) 478–483.
[22] D. Xu, Z. Yang, Attracting and invariant sets for a class of impulsive functional differential equations, J. Math. Anal. Appl. 329 (2007) 1036–1044.
[23] L. Xu, D. Xu, P-attracting and p-invariant sets for a class of impulsive stochastic functional differential equations, Comput. Math. Appl. 57 (2009) 54–61.
[24] E.D. Sontag, Smooth stabilization implies coprime factorization, IEEE Trans. Autom. Control 34 (1989) 435–443.
[25] E.D. Sontag, Further facts about input to state stabilization, IEEE Trans. Autom. Control 35 (1990) 473–476.
[26] E.N. Sanchez, J.P. Perez, Input-to-state stability (ISS) analysis for dynamic neural networks, IEEE Trans. Circuits Syst. I 46 (1999) 1395–1398.
[27] C.K. Ahn, Passive learning and input-to-state stability of switched Hopfield neural networks with time-delay, Inf. Sci. 180 (2010) 4582–4594.
[28] C.K. Ahn, Some new results on stability of Takagi–Sugeno fuzzy Hopfield neural networks, Fuzzy Sets Syst. 179 (2011) 100–111.
[29] Q. Zhu, J. Cao, R. Rakkiyappan, Exponential input-to-state stability of stochastic Cohen–Grossberg neural networks with mixed delays, Nonlinear Dyn. 79 (2014) 1085–1098.
[30] W. Zhou, L. Teng, D. Xu, Mean-square exponentially input-to-state stability of stochastic Cohen–Grossberg neural networks with time-varying delays, Neurocomputing 153 (2014) 54–61.
[31] J. Li, W. Zhou, Z. Yang, State estimation and input-to-state stability of impulsive stochastic BAM neural networks with mixed delays, Neurocomputing 227 (2017) 37–45.
[32] Z. Yang, W. Zhou, T. Huang, Input-to-state stability of delayed reaction-diffusion neural networks with impulsive effects, Neurocomputing 333 (2019) 261–272.
[33] W. Zhou, Z. Yang, Input-to-state stability for dynamical neural networks with time-varying delays, Abstr. Appl. Anal. 2012 (2012) 665–681.
[34] S. Zhu, Y. Shen, Two algebraic criteria for input-to-state stability of recurrent neural networks with time-varying delays, Neural Comput. Appl. 22 (2013) 1163–1169.
[35] Q. Zhu, J. Cao, Mean-square exponential input-to-state stability of stochastic delayed neural networks, Neurocomputing 131 (2014) 157–163.
[36] Y. Xu, W. Luo, K. Zhong, S. Zhu, Mean square input-to-state stability of a general class of stochastic recurrent neural networks with Markovian switching, Neural Comput. Appl. 25 (2014) 1657–1663.
[37] Z. Yang, W. Zhou, T. Huang, Exponential input-to-state stability of recurrent neural networks with multiple time-varying delays, Cogn. Neurodyn. 8 (2014) 47–54.
[38] I. Chairez, A. Poznyak, T. Poznyak, New sliding-mode learning law for dynamic neural network observer, IEEE Trans. Circuits Syst. II Exp. Briefs 53 (2006) 1338–1342.
[39] W. Yu, X. Li, Passivity analysis of dynamic neural networks with different time-scales, Neural Process. Lett. 25 (2007) 143–155.
[40] W. Yu, Nonlinear system identification using discrete-time recurrent neural networks with stable learning algorithms, Inf. Sci. 158 (2004) 131–147.
[41] A.S. Poznyak, E.N. Sanchez, Nonlinear systems approximation by neural networks: error stability analysis, Intell. Autom. Soft Comput. 1 (1995) 247–258.
[42] Z. Yang, D. Xu, Stability analysis and design of impulsive control systems with time delay, IEEE Trans. Autom. Control 52 (2007) 1448–1454.
[43] Z. Yang, D. Xu, L. Xiang, Exponential p-stability of impulsive stochastic differential equations with delays, Phys. Lett. A 359 (2006) 129–137.
[44] D. Xu, Z. Yang, Impulsive delay differential inequality and stability of neural networks, J. Math. Anal. Appl. 305 (2005) 107–120.

Yurong Zhang received her B.S. degree from Chongqing Normal University, Chongqing, China in 2017. She is currently a master candidate in the College of Mathematics of Chongqing Normal University, Chongqing, China. Her current research interests include stability theory, neural networks and stochastic functional systems.

Zhichun Yang received his B.Sc. degree from the Department of Mathematics, Southwest University, Chongqing, China in 1993, and his M.Sc. and Ph.D. degrees from the Department of Mathematics, Sichuan University, China in 2002 and 2006, respectively. From 1993 to 2006 he was at Chengdu Textile Institution and was promoted to associate professor in 2005. From 2006 to 2009, he was an associate professor in the Department of Mathematics, Chongqing Normal University. Since July 2009 he has been a professor in the College of Mathematics, Chongqing Normal University, China. He is author and coauthor of more than 70 journal papers. His current research interests include qualitative theory of impulsive systems and hybrid control, stochastic functional differential equations, biomathematics and neural networks.