Neurocomputing 356 (2019) 60–68
Finite-time anti-synchronization of neural networks with time-varying delays via inequality skills✩

Zhengqiu Zhang a, Ting Zheng a, Shenghua Yu b,∗

a College of Mathematics and Econometrics, Hunan University, Changsha 410082, China
b School of Economics and Trade, Hunan University, Changsha 410079, China
Article history: Received 31 January 2019 Revised 29 March 2019 Accepted 5 May 2019 Available online 10 May 2019
Abstract: In this paper, we consider finite-time anti-synchronization for master–slave neural networks with time delays. By combining integral inequality techniques with other inequality techniques, two novel criteria ensuring finite-time anti-synchronization of the discussed master–slave neural networks are presented under two classes of controllers. Our results and method differ from those in the existing literature.
Communicated by Dr. Jin-Liang Wang
© 2019 Elsevier B.V. All rights reserved.
Keywords: Master–slave neural networks; Finite-time anti-synchronization; Integral inequality techniques; Inequality techniques different from those in existing papers
1. Introduction

Synchronization of neural networks has been widely studied in recent years because of its potential applications in image processing, secure communication, information science, biological systems and many other fields. Up to now, many sufficient conditions ensuring synchronization of master–slave neural networks have been obtained; see, for instance, [1–9]. In practice, another interesting phenomenon also appears in symmetrical oscillators: anti-synchronization, which means that the two state vectors of the synchronized systems have the same absolute values but opposite signs. Anti-synchronization has important practical value. For instance, in laser applications, anti-synchronization provides a new way to generate pulses with special forms; in communication systems, switching continuously between synchronization and anti-synchronization during digital signal transmission can strengthen security and secrecy. As a result, the study of anti-synchronization is meaningful in both theory and practice.
✩ Project supported by the Innovation Platform Open Fund of Hunan Province Colleges and Universities of China (no. 201485).
∗ Corresponding author.
E-mail addresses: [email protected] (Z. Zhang), [email protected] (T. Zheng), [email protected] (S. Yu).
https://doi.org/10.1016/j.neucom.2019.05.012
To date, the exponential anti-synchronization and the finite-time anti-synchronization of master–slave neural networks have also been studied, and several criteria ensuring global anti-synchronization and finite-time anti-synchronization of the discussed master–slave neural networks have been obtained; see, for instance, [10–16]. So far, the sufficient conditions ensuring finite-time anti-synchronization of master–slave neural networks have been obtained mainly by means of the linear matrix inequality method, the Lyapunov functional method and some finite-time stability theorems [10–16], while results on anti-synchronization of master–slave neural networks obtained by other methods remain rare. Recently, in [17], the anti-synchronization of master–slave neural networks with time delays was studied. By combining the Hölder inequality with other inequality techniques, a new criterion ensuring anti-synchronization of the considered master–slave neural networks was presented in [17] under a control law that depends only on the system state at the present time t. In [17], the following neural networks were considered:

$$\frac{dw_{\bar r}(t)}{dt}=-d_{\bar r}w_{\bar r}(t)+\sum_{\bar h=1}^{\bar k}a_{\bar r\bar h}f_{\bar h}(w_{\bar h}(t))+\sum_{\bar h=1}^{\bar k}b_{\bar r\bar h}f_{\bar h}(w_{\bar h}(t-\tau_{\bar r\bar h}(t)))+I_{\bar r},\quad \bar r=1,2,\ldots,\bar k,\tag{1}$$
where w_r̄(t) denotes the state of the r̄th unit at time t; d_r̄ > 0 is the rate with which the r̄th unit resets its potential to the resting state in isolation when disconnected from the network and external inputs; a_r̄h̄ and b_r̄h̄ are the connection weight and the delayed connection weight on the r̄th unit, respectively; f_h̄(·) is the activation function; τ_r̄h̄(t) is the time delay; I_r̄ represents the external input, r̄, h̄ = 1, 2, ..., k̄. The initial value of system (1) is given as follows:
$$w_{\bar r}(\theta)=\hat\phi_{\bar r}(\theta),\quad \theta\in[-\bar\tau,0],\tag{2}$$
where φ̂_r̄(θ) ∈ C([−τ̄, 0], R), r̄ = 1, 2, ..., k̄, and C([−τ̄, 0], R) denotes the set of all continuous functions from [−τ̄, 0] to R. Let system (1) be the master system. Then the slave system is

$$\frac{dv_{\bar r}(t)}{dt}=-d_{\bar r}v_{\bar r}(t)+\sum_{\bar h=1}^{\bar k}a_{\bar r\bar h}G_{\bar h}(v_{\bar h}(t))+\sum_{\bar h=1}^{\bar k}b_{\bar r\bar h}G_{\bar h}(v_{\bar h}(t-\bar\tau_{\bar r\bar h}(t)))+I_{\bar r}+x_{\bar r}(t),\quad \bar r=1,2,\ldots,\bar k,\tag{3}$$
with the initial state
$$v_{\bar r}(\theta)=\hat\psi_{\bar r}(\theta),\quad \theta\in[-\bar\tau,0],\tag{4}$$
where ψ̂_r̄(θ) ∈ C([−τ̄, 0], R), r̄ = 1, 2, ..., k̄.

Recently, we studied the finite-time synchronization of master–slave neural networks by the integral inequality method [2,1]. Since, up to now, results on anti-synchronization of master–slave neural networks have been obtained mainly through linear matrix inequalities and finite-time stability theorems, we are motivated to study the finite-time anti-synchronization between the master system (1) and the slave system (3) by the integral inequality method. In [2,1], mainly by applying integral transformations together with integral inequality techniques, two novel sufficient conditions ensuring finite-time synchronization between the discussed master and slave neural networks were obtained. In [17], by applying the Hölder inequality and other techniques, a sufficient condition ensuring finite-time anti-synchronization between the discussed master and slave neural networks was obtained. Up to now, in many papers on the finite-time synchronization and the finite-time anti-synchronization of master–slave neural networks, power terms of the error, e^p(t) and e^q(t), have been designed into the controllers. Without introducing inequalities to transform them into non-exponential-type inequalities, the authors had to apply finite-time stability theorems to study the finite-time synchronization of such master–slave neural networks with exponential-type controllers. This motivates us to introduce two inequalities (Lemma 2.2 and Lemma 2.3) to transform the exponential-type terms in the controllers into non-exponential-type inequalities (see the proof of Theorem 3.1). In [2,1], by studying the coupled differential inequalities V1′(t) ≤ AV1(t) + BV2(t) and V2′(t) ≤ CV2(t) + DV1(t) (A, B, C, D are constants), the finite-time synchronization conditions were obtained. This motivates us to design the controller sign[r̂(t)] (r̂(t) is the error state) so as to construct the delayed differential inequality U′(t) ≤ AU(t) + BU(t − τ(t)) (τ(t) is the time-varying delay) to study the finite-time anti-synchronization (see the proof of Theorem 3.2).

Consequently, the objective of this paper is to obtain two novel sufficient conditions ensuring finite-time anti-synchronization of the master system (1) and the slave system (3) by introducing two inequalities (Lemma 2.2 and Lemma 2.3) to transform exponential-type controllers into non-exponential-type ones and by applying integral inequalities. As a result, the contribution of this paper can be summarized in two points: (1) a combination of two inequalities (Lemma 2.2 and Lemma 2.3) with integral inequality techniques is used to study the finite-time anti-synchronization of system (1) and system (3); (2) two new sufficient conditions are obtained to ensure the finite-time anti-synchronization of system (1) and system (3) under two classes of controllers.
The rest of the paper is organized as follows. In Section 2, some preliminaries are described. In Section 3, by combining integral inequality techniques with other inequality techniques, two criteria are given to ensure the finite-time anti-synchronization between the master system (1) and the slave system (3). Two examples are given in Section 4 to illustrate the effectiveness of our main results.

2. Preliminaries

In this section, some assumptions, definitions and lemmas are introduced. Letting e_r̄(t) = w_r̄(t) + v_r̄(t), we obtain the anti-synchronization error system as follows:
$$\frac{de_{\bar r}(t)}{dt}=-d_{\bar r}e_{\bar r}(t)+\sum_{\bar h=1}^{\bar k}a_{\bar r\bar h}\big[G_{\bar h}(w_{\bar h}(t))+G_{\bar h}(v_{\bar h}(t))\big]+\sum_{\bar h=1}^{\bar k}b_{\bar r\bar h}\big[G_{\bar h}(w_{\bar h}(t-\bar\tau_{\bar r\bar h}(t)))+G_{\bar h}(v_{\bar h}(t-\bar\tau_{\bar r\bar h}(t)))\big]+2I_{\bar r}+x_{\bar r}(t),\quad \bar r=1,2,\ldots,\bar k.\tag{5}$$
The controllers are designed as follows:
$$x_{\bar r}(t)=\operatorname{sign}[e_{\bar r}(t)]\big[\gamma_1+\bar L_1|e_{\bar r}(t)|^{p_1}+\bar L_2|e_{\bar r}(t)|^{q_1}\big]-\delta_1\operatorname{sign}[e_{\bar r}(t)]\sum_{\bar h=1}^{\bar k}l_{\bar h}|b_{\bar r\bar h}|\,|e_{\bar h}(t-\bar\tau_{\bar r\bar h}(t))|\tag{6}$$
and
$$x_{\bar r}(t)=\bar L_3\operatorname{sign}[e_{\bar r}(t)],\tag{7}$$

where

$$\operatorname{sign}[e_{\bar r}(t)]=\begin{cases}1,&e_{\bar r}(t)>0,\\0,&e_{\bar r}(t)=0,\\-1,&e_{\bar r}(t)<0,\end{cases}$$

and L̄1 > 0, L̄2 < 0, 0 < p1 < 1, q1 > 1, δ1 > 1, and L̄3, γ1 are constants.
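To make the structure of the two control laws concrete, the following short Python sketch (ours, not part of the original paper) evaluates controllers (6) and (7) componentwise; the function and variable names are our own illustrative choices, and the numerical values in the example call are taken from Example 4.1 of Section 4 purely to exercise the functions.

```python
import numpy as np

def controller_6(e, e_delay, B, l, gamma1, L1, L2, delta1, p1, q1):
    """Controller (6) for error e(t) and delayed error e(t - tau(t)).

    e, e_delay : arrays of shape (k,); B : (k, k) delayed weight matrix; l : (k,) Lipschitz constants.
    """
    s = np.sign(e)
    # state-dependent part: sign(e_r) * [gamma1 + L1*|e_r|^p1 + L2*|e_r|^q1]
    state_part = s * (gamma1 + L1 * np.abs(e) ** p1 + L2 * np.abs(e) ** q1)
    # delayed part: -delta1 * sign(e_r) * sum_h l_h * |b_rh| * |e_h(t - tau)|
    delay_part = -delta1 * s * (np.abs(B) @ (l * np.abs(e_delay)))
    return state_part + delay_part

def controller_7(e, L3):
    """Controller (7): x_r(t) = L3 * sign(e_r(t))."""
    return L3 * np.sign(e)

# example call with k = 2, using the Example 4.1 data
e = np.array([0.4, -0.1])
e_delay = np.array([0.3, -0.2])
B = np.array([[-0.2, 1.0], [0.7, 1.2]])
l = np.array([2.0, 2.0])
print(controller_6(e, e_delay, B, l, gamma1=-2.0, L1=2.0, L2=-1.0, delta1=2.0, p1=1/3, q1=3.0))
print(controller_7(e, L3=1.5))
```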
We use the following notations:

$$\hat I=2\sum_{\bar r=1}^{\bar k}|I_{\bar r}|+\bar k\gamma_1,\qquad U(t_0)=\sum_{\bar r=1}^{\bar k}|w_{\bar r}(t_0)-v_{\bar r}(t_0)|,$$

$$\hat d=\min_{1\le\bar r\le\bar k}\{d_{\bar r}\},\qquad \hat c=\max_{1\le\bar h\le\bar k}\sum_{\bar r=1}^{\bar k}|a_{\bar r\bar h}|l_{\bar h},\qquad \hat a=\max_{1\le\bar h\le\bar k}\sum_{\bar r=1}^{\bar k}|b_{\bar r\bar h}|l_{\bar h},$$

$$\bar A_0=\frac{1}{1-\tau_2}\int_{t_0-\bar\tau(t_0)}^{t_0}e^{|\hat d-\hat c|\eta}U(\eta)\,d\eta,\qquad \hat A=\frac{\hat a\,e^{|\hat d-\hat c|\tau_1}}{1-\tau_2},$$

$$\bar A_1=\frac{\hat I+\bar L_1(\bar k)^{1-p_1}(1-p_1)}{\hat c-\hat d},\qquad \bar A_2=\bar A_1+\frac{\bar L_2(\bar k)^{1-q_1}(1-q_1)}{\hat c-\hat d},$$

$$\bar A_3=1+\frac{\bar L_2 q_1(\bar k)^{p_1-q_1}}{\bar L_1 p_1},\qquad \bar A_5=\frac{\bar A_2}{\bar L_1 p_1(\bar k)^{1-p_1}+\hat c-\hat d},\qquad \bar A_4=\frac{U(t_0)+\bar A_2}{\bar L_1 p_1(\bar k)^{1-p_1}}-\frac{\bar A_2}{\bar L_1 p_1(\bar k)^{1-p_1}+\hat c-\hat d},$$

$$\bar A_6=U(t_0)+\frac{\hat I+\bar k\bar L_3}{\hat c-\hat d}e^{(\hat d-\hat c)t_0}+\bar A_0\hat a\,e^{|\hat d-\hat c|\tau_1}.$$
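As a rough illustration (ours, not the authors' code), the scalar quantities Î, d̂, ĉ and â above can be computed directly from the network data; the sketch below assumes the weights are stored as k×k NumPy arrays A = (a_r̄h̄) and B = (b_r̄h̄), and uses the Example 4.1 data of Section 4.

```python
import numpy as np

def network_constants(A, B, d, I, l, gamma1):
    """Scalars used by the criteria, following the definitions in the text.

    A, B : (k, k) weight and delayed-weight matrices; d, I, l : length-k arrays; gamma1 : scalar.
    """
    k = len(d)
    I_hat = 2 * np.sum(np.abs(I)) + k * gamma1      # I_hat = 2*sum_r |I_r| + k*gamma1
    d_hat = np.min(d)                               # d_hat = min_r d_r
    c_hat = np.max(np.sum(np.abs(A), axis=0) * l)   # max_h of sum_r |a_rh| * l_h
    a_hat = np.max(np.sum(np.abs(B), axis=0) * l)   # max_h of sum_r |b_rh| * l_h
    return I_hat, d_hat, c_hat, a_hat

# Example 4.1 data (gamma1 = -2, l1 = l2 = 2); for these data the sketch gives I_hat = -3.9, d_hat = 1.0
A = np.array([[0.5, -0.3], [-1.0, 1.3]])
B = np.array([[-0.2, 1.0], [0.7, 1.2]])
print(network_constants(A, B, d=np.array([1.0, 2.0]), I=np.array([-0.02, 0.03]),
                        l=np.array([2.0, 2.0]), gamma1=-2.0))
```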
Throughout the paper, we always suppose that:

(B̄1) G is an odd function and there exist positive constants l_r̄ such that

$$|G(w_{\bar r})-G(v_{\bar r})|\le l_{\bar r}|w_{\bar r}-v_{\bar r}|\quad \text{for all }w_{\bar r},v_{\bar r}\in R,\ \bar r=1,2,\ldots,\bar k,$$

where |·| is the norm on the Euclidean space R;

(B̄2) (k̄)^{q1} L̄1 p1 + L̄2 q1 (k̄)^{p1} > 0 and Ā2 + U(t0) < −U(t0) L̄1 p1 (k̄)^{1−p1} / (ĉ − d̂);

(B̄3) ĉ > d̂;

(B̄4) Î + k̄ L̄3 < 0;

(B̄5) d̂ > ĉ + Â.

Definition 2.1. The master system (1) and the slave system (3) are said to achieve finite-time anti-synchronization if there exists a positive constant T̄, depending on the initial time t0 and on the initial values of system (1) and system (3), such that for arbitrary solutions of system (1) and system (3), denoted by [w1(t), w2(t), ..., wk̄(t)]^T and [v1(t), v2(t), ..., vk̄(t)]^T, we have, for r̄ = 1, 2, ..., k̄,

$$\lim_{t\to t_0+\bar T}|w_{\bar r}(t)+v_{\bar r}(t)|=0\quad\text{and}\quad |w_{\bar r}(t)+v_{\bar r}(t)|=0,\ t\ge t_0+\bar T.$$

Lemma 2.1 (see [18,19]). Let a1, a2, ..., ak ≥ 0, 0 < r1 ≤ 1 and s1 > 1. Then the following two inequalities hold:

$$\sum_{w=1}^{k}a_w^{r_1}\le k^{1-r_1}\left(\sum_{w=1}^{k}a_w\right)^{r_1},\qquad \sum_{w=1}^{k}a_w^{s_1}\ge k^{1-s_1}\left(\sum_{w=1}^{k}a_w\right)^{s_1}.$$

Lemma 2.2 [26]. If z > 0 and 0 < p1 < 1, then z^{p1} ≤ p1 z + 1 − p1.

Proof. Set g(z) = z^{p1} − p1 z. Then g′(z) = p1(z^{p1−1} − 1). Since 0 < p1 < 1, we have g′(z) > 0 for 0 < z < 1 and g′(z) < 0 for z > 1. Consequently, g(1) = 1 − p1 = max_{z>0} g(z), so z^{p1} − p1 z ≤ 1 − p1. This completes the proof of Lemma 2.2.

Lemma 2.3 [26]. If b* > 1 and y* > 0, then (y*)^{b*} ≥ b* y* + 1 − b*.

Proof. The proof of Lemma 2.3 is similar to that of Lemma 2.2 and is omitted.
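The three elementary lemmas can be sanity-checked numerically; the brief Python sketch below is our own illustration (not the authors') and simply samples random points and verifies the inequalities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Lemma 2.2: for z > 0 and 0 < p1 < 1, z**p1 <= p1*z + 1 - p1
z = rng.uniform(1e-6, 10.0, size=10_000)
p1 = 1 / 3
assert np.all(z ** p1 <= p1 * z + 1 - p1 + 1e-12)

# Lemma 2.3: for y > 0 and b > 1, y**b >= b*y + 1 - b
y = rng.uniform(1e-6, 10.0, size=10_000)
b = 3.0
assert np.all(y ** b >= b * y + 1 - b - 1e-12)

# Lemma 2.1 (first inequality): sum_w a_w^r1 <= k^(1-r1) * (sum_w a_w)^r1 for 0 < r1 <= 1
a = rng.uniform(0.0, 5.0, size=(1000, 4))   # 1000 samples with k = 4
r1 = 0.5
lhs = np.sum(a ** r1, axis=1)
rhs = a.shape[1] ** (1 - r1) * np.sum(a, axis=1) ** r1
assert np.all(lhs <= rhs + 1e-9)
print("Lemmas 2.1-2.3 hold on all sampled points")
```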
Statement 1. Under conditions (B̄2) and (B̄3), we have

$$\bar A_3>0,\quad \bar A_2<0,\quad \bar A_5<0,\quad \bar A_4<0,\quad \frac{U(t_0)+\bar A_2}{\bar A_4\,\bar L_1 p_1(\bar k)^{1-p_1}}>1.$$

Proof. From the condition (k̄)^{q1} L̄1 p1 + L̄2 q1 (k̄)^{p1} > 0 we get Ā3 > 0. From the condition Ā2 + U(t0) < −U(t0)L̄1 p1 (k̄)^{1−p1}/(ĉ − d̂) < 0 it follows that Ā2 < 0. Combining Ā2 < 0 with ĉ > d̂, we have Ā5 < 0. Since U(t0) + Ā2 < 0, Ā2 < 0, L̄1 > 0, p1 > 0, k̄ > 0 and ĉ > d̂, the definition of Ā4 gives

$$\bar A_4=\frac{(\hat c-\hat d)[U(t_0)+\bar A_2]+U(t_0)\bar L_1 p_1(\bar k)^{1-p_1}}{\bar L_1 p_1(\bar k)^{1-p_1}\big[\hat c-\hat d+\bar L_1 p_1(\bar k)^{1-p_1}\big]}<0.$$

Since Ā2 < 0, one has

$$(\hat c-\hat d)[U(t_0)+\bar A_2]+\bar L_1 p_1(\bar k)^{1-p_1}[U(t_0)+\bar A_2]<(\hat c-\hat d)[U(t_0)+\bar A_2]+\bar L_1 p_1(\bar k)^{1-p_1}U(t_0).$$

Consequently,

$$U(t_0)+\bar A_2<\frac{U(t_0)\bar L_1 p_1(\bar k)^{1-p_1}+[U(t_0)+\bar A_2](\hat c-\hat d)}{\bar L_1 p_1(\bar k)^{1-p_1}+\hat c-\hat d}=\bar A_4\,\bar L_1 p_1(\bar k)^{1-p_1},$$

from which, together with Ā4 < 0, one has

$$\frac{U(t_0)+\bar A_2}{\bar A_4\,\bar L_1 p_1(\bar k)^{1-p_1}}>1.$$

In view of (B̄1), since G(x) is an odd function, we obtain the following Statement 2.

Statement 2. For all w_h̄(t), v_h̄(t) ∈ R,

$$|G_{\bar h}(w_{\bar h}(t))+G_{\bar h}(v_{\bar h}(t))|=|G_{\bar h}(w_{\bar h}(t))-G_{\bar h}(-v_{\bar h}(t))|\le l_{\bar h}|w_{\bar h}(t)+v_{\bar h}(t)|=l_{\bar h}|e_{\bar h}(t)|.$$

Statement 3. By (B̄4) and (B̄5), we have Ā6 > 0 and d̂ > ĉ.

Proof. The proof is easy and is omitted.

3. Finite-time anti-synchronization

In this section, we derive two sufficient conditions for the finite-time anti-synchronization of the master system (1) and the slave system (3).

Theorem 3.1. Assume that (B̄1)–(B̄3) hold. Then the master system (1) and the slave system (3) achieve finite-time anti-synchronization under the controllers (6).

Proof. We construct a Lyapunov function

$$U(t)=\sum_{\bar r=1}^{\bar k}|e_{\bar r}(t)|.$$

By system (5) and the controllers (6), one has

$$\begin{aligned}
U'(t)&=\sum_{\bar r=1}^{\bar k}\operatorname{sign}[e_{\bar r}(t)]\Bigg\{-d_{\bar r}e_{\bar r}(t)+\sum_{\bar h=1}^{\bar k}a_{\bar r\bar h}\big[G_{\bar h}(w_{\bar h}(t))+G_{\bar h}(v_{\bar h}(t))\big]\\
&\qquad+\sum_{\bar h=1}^{\bar k}b_{\bar r\bar h}\big[G_{\bar h}(w_{\bar h}(t-\bar\tau_{\bar r\bar h}(t)))+G_{\bar h}(v_{\bar h}(t-\bar\tau_{\bar r\bar h}(t)))\big]+2I_{\bar r}+x_{\bar r}(t)\Bigg\}\\
&\le\sum_{\bar r=1}^{\bar k}\Bigg\{-d_{\bar r}|e_{\bar r}(t)|+\sum_{\bar h=1}^{\bar k}|a_{\bar r\bar h}|l_{\bar h}|e_{\bar h}(t)|+\sum_{\bar h=1}^{\bar k}|b_{\bar r\bar h}|l_{\bar h}|e_{\bar h}(t-\bar\tau_{\bar r\bar h}(t))|\\
&\qquad+\gamma_1+\bar L_1|e_{\bar r}(t)|^{p_1}+\bar L_2|e_{\bar r}(t)|^{q_1}-\delta_1\sum_{\bar h=1}^{\bar k}|b_{\bar r\bar h}|l_{\bar h}|e_{\bar h}(t-\bar\tau_{\bar r\bar h}(t))|+2|I_{\bar r}|\Bigg\}\\
&\le(\hat c-\hat d)U(t)+\hat I+\bar L_1\sum_{\bar r=1}^{\bar k}|e_{\bar r}(t)|^{p_1}+\bar L_2\sum_{\bar r=1}^{\bar k}|e_{\bar r}(t)|^{q_1},
\end{aligned}\tag{8}$$

where the delayed terms are dropped in the last step because δ1 > 1.
Because L̄1 > 0, L̄2 < 0, 0 < p1 ≤ 1 and q1 > 1, on the basis of Lemma 2.1 we obtain

$$\bar L_1\sum_{\bar r=1}^{\bar k}|e_{\bar r}(t)|^{p_1}\le \bar L_1(\bar k)^{1-p_1}\left(\sum_{\bar r=1}^{\bar k}|e_{\bar r}(t)|\right)^{p_1}=\bar L_1(\bar k)^{1-p_1}U^{p_1}(t)\tag{9}$$

and

$$\bar L_2\sum_{\bar r=1}^{\bar k}|e_{\bar r}(t)|^{q_1}\le \bar L_2(\bar k)^{1-q_1}\left(\sum_{\bar r=1}^{\bar k}|e_{\bar r}(t)|\right)^{q_1}=\bar L_2(\bar k)^{1-q_1}U^{q_1}(t).\tag{10}$$

Substituting (9) and (10) into (8) implies

$$U'(t)\le(\hat c-\hat d)U(t)+\hat I+\bar L_1(\bar k)^{1-p_1}U^{p_1}(t)+\bar L_2(\bar k)^{1-q_1}U^{q_1}(t).\tag{11}$$

Multiplying (11) by e^{(d̂−ĉ)t} yields

$$\frac{d\big[U(t)e^{(\hat d-\hat c)t}\big]}{dt}\le e^{(\hat d-\hat c)t}\Big[\hat I+\bar L_1(\bar k)^{1-p_1}U^{p_1}(t)+\bar L_2(\bar k)^{1-q_1}U^{q_1}(t)\Big].\tag{12}$$

Integrating (12) over [t0, t] yields

$$U(t)e^{(\hat d-\hat c)t}\le U(t_0)e^{(\hat d-\hat c)t_0}+\frac{\hat I}{\hat c-\hat d}\big[e^{(\hat d-\hat c)t_0}-e^{(\hat d-\hat c)t}\big]+\bar L_1(\bar k)^{1-p_1}\int_{t_0}^{t}e^{(\hat d-\hat c)s}U^{p_1}(s)\,ds+\bar L_2(\bar k)^{1-q_1}\int_{t_0}^{t}e^{(\hat d-\hat c)s}U^{q_1}(s)\,ds.\tag{13}$$

By Lemma 2.2, because 0 < p1 ≤ 1 and L̄1 > 0,

$$\bar L_1 U^{p_1}(s)\le \bar L_1 p_1 U(s)+\bar L_1(1-p_1).\tag{14}$$

Substituting (14) into (13) implies

$$U(t)e^{(\hat d-\hat c)t}\le U(t_0)e^{(\hat d-\hat c)t_0}+\bar A_1\big[e^{(\hat d-\hat c)t_0}-e^{(\hat d-\hat c)t}\big]+\bar L_1 p_1(\bar k)^{1-p_1}\int_{t_0}^{t}e^{(\hat d-\hat c)s}U(s)\,ds+\bar L_2(\bar k)^{1-q_1}\int_{t_0}^{t}e^{(\hat d-\hat c)s}U^{q_1}(s)\,ds.\tag{15}$$

Setting Z(t) = ∫_{t0}^{t} e^{(d̂−ĉ)s}U(s)ds, we have Z′(t) = e^{(d̂−ĉ)t}U(t). Consequently, by (15), one has

$$Z'(t)\le U(t_0)e^{(\hat d-\hat c)t_0}+\bar A_1\big[e^{(\hat d-\hat c)t_0}-e^{(\hat d-\hat c)t}\big]+\bar L_1 p_1(\bar k)^{1-p_1}Z(t)+\bar L_2(\bar k)^{1-q_1}\int_{t_0}^{t}e^{(\hat d-\hat c)s}U^{q_1}(s)\,ds.\tag{16}$$

By Lemma 2.3, we get

$$U^{q_1}(s)\ge q_1 U(s)+1-q_1.\tag{17}$$

Substituting (17) into (16) (note that L̄2 < 0) yields

$$Z'(t)\le U(t_0)e^{(\hat d-\hat c)t_0}+\bar A_2\big[e^{(\hat d-\hat c)t_0}-e^{(\hat d-\hat c)t}\big]+\bar L_1 p_1(\bar k)^{1-p_1}Z(t)+\bar L_2 q_1(\bar k)^{1-q_1}\int_{t_0}^{t}e^{(\hat d-\hat c)s}U(s)\,ds.\tag{18}$$

Multiplying (18) by e^{−L̄1p1(k̄)^{1−p1}t} and integrating over [t0, t], one has

$$\begin{aligned}
e^{-\bar L_1 p_1(\bar k)^{1-p_1}t}Z(t)&\le\frac{[U(t_0)+\bar A_2]e^{(\hat d-\hat c)t_0}}{\bar L_1 p_1(\bar k)^{1-p_1}}\Big[e^{-\bar L_1 p_1(\bar k)^{1-p_1}t_0}-e^{-\bar L_1 p_1(\bar k)^{1-p_1}t}\Big]+\bar A_5\Big[e^{(\hat d-\hat c-\bar L_1 p_1(\bar k)^{1-p_1})t}-e^{(\hat d-\hat c-\bar L_1 p_1(\bar k)^{1-p_1})t_0}\Big]\\
&\quad+\bar L_2 q_1(\bar k)^{1-q_1}\int_{t_0}^{t}e^{-\bar L_1 p_1(\bar k)^{1-p_1}s}\int_{t_0}^{s}e^{(\hat d-\hat c)\theta}U(\theta)\,d\theta\,ds.
\end{aligned}\tag{19}$$

Because, exchanging the order of integration,

$$\int_{t_0}^{t}e^{-\bar L_1 p_1(\bar k)^{1-p_1}s}\int_{t_0}^{s}e^{(\hat d-\hat c)\theta}U(\theta)\,d\theta\,ds=\frac{1}{\bar L_1 p_1(\bar k)^{1-p_1}}\int_{t_0}^{t}e^{(\hat d-\hat c)\theta}U(\theta)\Big[e^{-\bar L_1 p_1(\bar k)^{1-p_1}\theta}-e^{-\bar L_1 p_1(\bar k)^{1-p_1}t}\Big]\,d\theta,\tag{20}$$

substituting (20) into (19), and noting that L̄2 < 0 so that the term containing ∫_{t0}^{t} e^{(d̂−ĉ−L̄1p1(k̄)^{1−p1})θ}U(θ)dθ is nonpositive and can be dropped, implies

$$\bar A_3\,e^{-\bar L_1 p_1(\bar k)^{1-p_1}t}Z(t)\le\frac{[U(t_0)+\bar A_2]e^{(\hat d-\hat c)t_0}}{\bar L_1 p_1(\bar k)^{1-p_1}}\Big[e^{-\bar L_1 p_1(\bar k)^{1-p_1}t_0}-e^{-\bar L_1 p_1(\bar k)^{1-p_1}t}\Big]+\bar A_5\Big[e^{(\hat d-\hat c-\bar L_1 p_1(\bar k)^{1-p_1})t}-e^{(\hat d-\hat c-\bar L_1 p_1(\bar k)^{1-p_1})t_0}\Big].$$

Namely,

$$\bar A_3 Z(t)\le\bar A_4 e^{(\hat d-\hat c)t_0+\bar L_1 p_1(\bar k)^{1-p_1}(t-t_0)}+\bar A_5 e^{(\hat d-\hat c)t}-\frac{[U(t_0)+\bar A_2]e^{(\hat d-\hat c)t_0}}{\bar L_1 p_1(\bar k)^{1-p_1}},$$

from which, in view of Ā5 < 0, one has

$$\bar A_3\int_{t_0}^{t}e^{(\hat d-\hat c)s}U(s)\,ds\le\bar A_4 e^{(\hat d-\hat c)t_0+\bar L_1 p_1(\bar k)^{1-p_1}(t-t_0)}-\frac{[U(t_0)+\bar A_2]e^{(\hat d-\hat c)t_0}}{\bar L_1 p_1(\bar k)^{1-p_1}}.\tag{21}$$

In (21), setting

$$\bar A_4 e^{(\hat d-\hat c)t_0+\bar L_1 p_1(\bar k)^{1-p_1}(t-t_0)}-\frac{[U(t_0)+\bar A_2]e^{(\hat d-\hat c)t_0}}{\bar L_1 p_1(\bar k)^{1-p_1}}<0,\quad t\ge t_0+\bar T_1,$$

where

$$\bar T_1=\frac{1}{\bar L_1 p_1(\bar k)^{1-p_1}}\ln\frac{U(t_0)+\bar A_2}{\bar A_4\,\bar L_1 p_1(\bar k)^{1-p_1}}.$$
By Statement 1, ln{[U(t0) + Ā2]/[Ā4L̄1p1(k̄)^{1−p1}]} > 0, so T̄1 > 0. Because Ā3 > 0, by (21) it follows that

$$\lim_{t\to t_0+\bar T_1}U(t)=0;\qquad U(t)=0,\quad t\ge t_0+\bar T_1.$$

Thus, for r̄ = 1, 2, ..., k̄,

$$\lim_{t\to t_0+\bar T_1}|w_{\bar r}(t)+v_{\bar r}(t)|=0;\qquad |w_{\bar r}(t)+v_{\bar r}(t)|=0,\quad t\ge t_0+\bar T_1.$$

Consequently, the proof of Theorem 3.1 is accomplished.
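As a rough numerical illustration (ours, not the authors' code), the sketch below evaluates the settling-time expression T̄1 from (21) with the constants as defined in Section 2, using the Example 4.1 data of Section 4, and then integrates the comparison dynamics (11) with a forward Euler scheme; in this run the bound T̄1 is conservative compared with the actual zero-crossing of the comparison signal.

```python
import numpy as np

# constants of Example 4.1, used here only to exercise the formulas
c_hat, d_hat, I_hat = 3.0, 1.0, -3.9
L1, L2, p1, q1, k, U0 = 2.0, -1.0, 1/3, 3.0, 2, 0.05

a = L1 * p1 * k ** (1 - p1)                                       # L1*p1*k^(1-p1)
A1 = (I_hat + L1 * k ** (1 - p1) * (1 - p1)) / (c_hat - d_hat)
A2 = A1 + L2 * k ** (1 - q1) * (1 - q1) / (c_hat - d_hat)
A5 = A2 / (a + c_hat - d_hat)
A4 = (U0 + A2) / a - A5                                           # as defined in Section 2
T1 = np.log((U0 + A2) / (A4 * a)) / a                             # settling-time bound from (21)
print("T1 =", round(T1, 3))

# forward-Euler integration of the comparison dynamics (11)
def rhs(U):
    return (c_hat - d_hat) * U + I_hat + L1 * k ** (1 - p1) * U ** p1 + L2 * k ** (1 - q1) * U ** q1

U, dt, t = U0, 1e-5, 0.0
while U > 0.0 and t < 10.0:
    U += dt * rhs(U)
    t += dt
print("comparison signal reaches zero near t =", round(t, 3))
```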
Theorem 3.2. Suppose that τ̄_r̄h̄(t) = τ̄(t), 0 ≤ τ̄(t) ≤ τ1 and τ̄′(t) ≤ τ2 < 1. Further assume that (B̄1), (B̄4) and (B̄5) hold. Then the master system (1) and the slave system (3) achieve finite-time anti-synchronization under the controllers (7).

Proof. We construct a Lyapunov function

$$U(t)=\sum_{\bar r=1}^{\bar k}|e_{\bar r}(t)|.$$

From system (5) and the controllers (7), we get

$$\begin{aligned}
U'(t)&=\sum_{\bar r=1}^{\bar k}\operatorname{sign}[e_{\bar r}(t)]\Bigg\{-d_{\bar r}e_{\bar r}(t)+\sum_{\bar h=1}^{\bar k}a_{\bar r\bar h}\big[G_{\bar h}(w_{\bar h}(t))+G_{\bar h}(v_{\bar h}(t))\big]+\sum_{\bar h=1}^{\bar k}b_{\bar r\bar h}\big[G_{\bar h}(w_{\bar h}(t-\bar\tau_{\bar r\bar h}(t)))+G_{\bar h}(v_{\bar h}(t-\bar\tau_{\bar r\bar h}(t)))\big]+2I_{\bar r}+x_{\bar r}(t)\Bigg\}\\
&\le\sum_{\bar r=1}^{\bar k}\Bigg\{-d_{\bar r}|e_{\bar r}(t)|+\sum_{\bar h=1}^{\bar k}|a_{\bar r\bar h}|l_{\bar h}|e_{\bar h}(t)|+\sum_{\bar h=1}^{\bar k}|b_{\bar r\bar h}|l_{\bar h}|e_{\bar h}(t-\bar\tau_{\bar r\bar h}(t))|+2|I_{\bar r}|+\bar L_3\Bigg\}\\
&\le(\hat c-\hat d)U(t)+\hat I+\hat a U(t-\bar\tau(t))+\bar k\bar L_3.
\end{aligned}\tag{22}$$

Multiplying (22) by e^{(d̂−ĉ)t} implies

$$\frac{d\big[U(t)e^{(\hat d-\hat c)t}\big]}{dt}\le e^{(\hat d-\hat c)t}\Big[\hat I+\bar k\bar L_3+\hat a U(t-\bar\tau(t))\Big].\tag{23}$$

Integrating (23) over [t0, t] implies

$$\begin{aligned}
U(t)e^{(\hat d-\hat c)t}&\le U(t_0)e^{(\hat d-\hat c)t_0}+\frac{\hat I+\bar k\bar L_3}{\hat d-\hat c}\big[e^{(\hat d-\hat c)t}-e^{(\hat d-\hat c)t_0}\big]+\hat a\int_{t_0}^{t}e^{(\hat d-\hat c)s}U(s-\bar\tau(s))\,ds\\
&=U(t_0)e^{(\hat d-\hat c)t_0}+\frac{\hat I+\bar k\bar L_3}{\hat d-\hat c}\big[e^{(\hat d-\hat c)t}-e^{(\hat d-\hat c)t_0}\big]+\hat a\int_{t_0-\bar\tau(t_0)}^{t-\bar\tau(t)}e^{(\hat d-\hat c)[\eta+\bar\tau(m(\eta))]}\frac{U(\eta)}{1-\bar\tau'(m(\eta))}\,d\eta\\
&\le U(t_0)e^{(\hat d-\hat c)t_0}+\frac{\hat I+\bar k\bar L_3}{\hat d-\hat c}\big[e^{(\hat d-\hat c)t}-e^{(\hat d-\hat c)t_0}\big]+e^{|\hat d-\hat c|\tau_1}\hat a\int_{t_0-\bar\tau(t_0)}^{t-\bar\tau(t)}e^{(\hat d-\hat c)\eta}\frac{U(\eta)}{1-\tau_2}\,d\eta\\
&\le U(t_0)e^{(\hat d-\hat c)t_0}+\frac{\hat I+\bar k\bar L_3}{\hat d-\hat c}\big[e^{(\hat d-\hat c)t}-e^{(\hat d-\hat c)t_0}\big]+e^{|\hat d-\hat c|\tau_1}\hat a\left[\bar A_0+\int_{t_0}^{t}e^{(\hat d-\hat c)\eta}\frac{U(\eta)}{1-\tau_2}\,d\eta\right],
\end{aligned}\tag{24}$$

where m(·) is the inverse function of η(s) = s − τ̄(s).

Setting Z(t) = ∫_{t0}^{t} e^{(d̂−ĉ)η}U(η)dη, then Z′(t) = U(t)e^{(d̂−ĉ)t}. Consequently, based on (24), it yields

$$Z'(t)\le U(t_0)e^{(\hat d-\hat c)t_0}+\frac{\hat I+\bar k\bar L_3}{\hat d-\hat c}\big[e^{(\hat d-\hat c)t}-e^{(\hat d-\hat c)t_0}\big]+e^{|\hat d-\hat c|\tau_1}\hat a\left[\bar A_0+\frac{Z(t)}{1-\tau_2}\right].\tag{25}$$

Multiplying (25) by e^{−Ât} yields

$$\frac{d\big[Z(t)e^{-\hat At}\big]}{dt}\le e^{-\hat At}\left[U(t_0)+\frac{\hat I+\bar k\bar L_3}{\hat c-\hat d}e^{(\hat d-\hat c)t_0}+\bar A_0\hat a\,e^{|\hat d-\hat c|\tau_1}\right]+\frac{\hat I+\bar k\bar L_3}{\hat d-\hat c}e^{(\hat d-\hat c)t-\hat At}.\tag{26}$$

Integrating (26) over [t0, t] yields

$$\begin{aligned}
Z(t)e^{-\hat At}&\le\frac{\bar A_6}{\hat A}\big[e^{-\hat At_0}-e^{-\hat At}\big]+\frac{\hat I+\bar k\bar L_3}{(\hat d-\hat c)(\hat d-\hat c-\hat A)}\big[e^{(\hat d-\hat c-\hat A)t}-e^{(\hat d-\hat c-\hat A)t_0}\big]\\
&\le\frac{\bar A_6}{\hat A}e^{-\hat At_0}+\frac{\hat I+\bar k\bar L_3}{(\hat d-\hat c-\hat A)(\hat d-\hat c)}e^{(\hat d-\hat c-\hat A)t}-\frac{\hat I+\bar k\bar L_3}{(\hat d-\hat c)(\hat d-\hat c-\hat A)}e^{(\hat d-\hat c-\hat A)t_0}.
\end{aligned}\tag{27}$$

By Statement 3, one has Ā6 > 0, Â > 0, Î + k̄L̄3 < 0 and d̂ > ĉ + Â. In (27), putting

$$\frac{\bar A_6}{\hat A}e^{-\hat At_0}+\frac{\hat I+\bar k\bar L_3}{(\hat d-\hat c-\hat A)(\hat d-\hat c)}e^{(\hat d-\hat c-\hat A)t}-\frac{\hat I+\bar k\bar L_3}{(\hat d-\hat c)(\hat d-\hat c-\hat A)}e^{(\hat d-\hat c-\hat A)t_0}\le0,\quad t\ge t_0+\hat T,$$

then

$$\hat T=\frac{1}{\hat d-\hat c-\hat A}\ln\left[1-\frac{(\hat d-\hat c)(\hat d-\hat c-\hat A)\bar A_6}{\hat A(\hat I+\bar k\bar L_3)}e^{(\hat c-\hat d)t_0}\right],$$

and since

$$-\frac{(\hat d-\hat c)(\hat d-\hat c-\hat A)\bar A_6}{\hat A(\hat I+\bar k\bar L_3)}>0,$$

we have T̂ > 0. The rest of the proof is the same as the corresponding part of the proof of Theorem 3.1 and is omitted.

Statement 4. Up to now, the finite-time anti-synchronization of master–slave neural networks has been obtained mainly by utilizing linear matrix inequalities and finite-time stability theorems [10–16]. Recently, in [17], the finite-time anti-synchronization was studied by utilizing the Hölder inequality. In our paper, however, the sufficient conditions ensuring the finite-time anti-synchronization of the discussed master–slave neural networks are obtained without linear matrix inequalities and finite-time stability theorems, by applying integral inequality techniques.

Statement 5. The concrete inequality techniques used in our paper are different from those used in [10–25], and the concrete integral inequality techniques used in our paper are different from those in [1,2,8].

Statement 6. The controllers in our paper are different from those in [17], so our results on the finite-time anti-synchronization of system (1) and system (3) are different from the result in [17].
Statement 7. By designing a delayed feedback controller that offsets the delayed term in the derived differential inequality, the first-order differential inequality U′(t) ≤ (ĉ − d̂)U(t) + Î + L̄1(k̄)^{1−p1}U^{p1}(t) + L̄2(k̄)^{1−q1}U^{q1}(t) is constructed. Thus, by solving this differential inequality, the finite-time anti-synchronization result is obtained.
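The delayed comparison inequality (22) that underlies Theorem 3.2 can likewise be explored numerically; the sketch below is only an illustration (ours, not the authors'), using a forward Euler scheme with a history buffer for the delayed term and a constant delay for simplicity, with Example 4.2-like magnitudes.

```python
import numpy as np

# delayed comparison dynamics from (22): U'(t) = (c_hat - d_hat)*U(t) + I_hat + k*L3 + a_hat*U(t - tau)
c_hat, d_hat, a_hat = 2.0, 4.0, 0.3
I_hat, k, L3 = -3.9, 2, 1.5
tau, dt, T = 0.5, 1e-3, 5.0

n_delay = int(round(tau / dt))
steps = int(round(T / dt))
U = np.empty(steps + 1)
U[: n_delay + 1] = 0.05                      # constant history on [-tau, 0], with U(t0) = 0.05
for i in range(n_delay, steps):
    dU = (c_hat - d_hat) * U[i] + I_hat + k * L3 + a_hat * U[i - n_delay]
    U[i + 1] = max(U[i] + dt * dU, 0.0)      # the error norm cannot go below zero
t_zero = np.argmax(U[n_delay:] == 0.0) * dt
print(f"comparison signal reaches zero about {t_zero:.2f} time units after t0")
```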
4. Two examples

In this section, we give two examples to illustrate our results.

Example 4.1. Consider the following master–slave neural networks with time delays:
$$\begin{cases}
\dfrac{dw_1(t)}{dt}=-w_1(t)+0.5G_1(w_1(t))-0.3G_2(w_2(t))-0.2G_1(w_1(t-0.5\sin t))+G_2(w_2(t-0.5\sin t))-0.02,\\[2mm]
\dfrac{dw_2(t)}{dt}=-2w_2(t)-G_1(w_1(t))+1.3G_2(w_2(t))+0.7G_1(w_1(t-0.5\sin t))+1.2G_2(w_2(t-0.5\sin t))+0.03,
\end{cases}\tag{28}$$
Fig. 1. The master-slave system with the controller in Example 4.1.
$$\begin{cases}
\dfrac{dv_1(t)}{dt}=-v_1(t)+0.5G_1(v_1(t))-0.3G_2(v_2(t))-0.2G_1(v_1(t-0.5\sin t))+G_2(v_2(t-0.5\sin t))-0.02+x_1(t),\\[2mm]
\dfrac{dv_2(t)}{dt}=-2v_2(t)-G_1(v_1(t))+1.3G_2(v_2(t))+0.7G_1(v_1(t-0.5\sin t))+1.2G_2(v_2(t-0.5\sin t))+0.03+x_2(t),
\end{cases}\tag{29}$$
where the controllers are designed as follows:

$$x_1(t)=\operatorname{sign}(e_1(t))\big[-2+2|e_1(t)|^{\frac13}-|e_1(t)|^{3}\big]-2\operatorname{sign}(e_1(t))\big(2\times0.2\times e_1(t-0.5\sin t)-e_2(t-0.5\sin t)\big),\tag{30}$$

$$x_2(t)=\operatorname{sign}(e_2(t))\big[-2+2|e_2(t)|^{\frac13}-|e_2(t)|^{3}\big]-2\operatorname{sign}(e_2(t))\big(2\times0.7\times e_1(t-0.5\sin t)-1.4e_2(t-0.5\sin t)\big).\tag{31}$$
Fig. 2. The error with the controller in Example 4.1.
In Theorem 3.1,

$$\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}=\begin{pmatrix}0.5&-0.3\\-1&1.3\end{pmatrix},\qquad \begin{pmatrix}b_{11}&b_{12}\\b_{21}&b_{22}\end{pmatrix}=\begin{pmatrix}-0.2&1\\0.7&1.2\end{pmatrix},$$

and k̄ = 2, L̄1 = 2 > 0, L̄2 = −1 < 0, d1 = 1, d2 = 2, γ1 = −2, δ1 = 2, p1 = 1/3, q1 = 3, I1 = −0.02, I2 = 0.03. Taking the odd function G(x) = (|x + 1| − |x − 1|)/2, the activation functions G(w_r̄) satisfy condition (B̄1) with l1 = l2 = 2. The initial values are set as w1(t) = 1.01 + t, w2(t) = 3.05 + t, v1(t) = 1.03 + t, v2(t) = 3.02 + t, so that U(0) = Σ_{r̄=1}^{2}|w_r̄(0) − v_r̄(0)| = 0.05. Because the controllers (30) and (31) with q1 = 3 > 1 in Example 4.1 are different from the controller in (6) of [17] with 0 < β < 1, the finite-time anti-synchronization of (28) and (29) cannot be tested with the result in [17]. However, we can verify the effectiveness and superiority of Theorem 3.1. By calculation, we get
$$\hat I=2\sum_{\bar r=1}^{2}|I_{\bar r}|+2\gamma_1=-3.9,\qquad \hat d=\min_{1\le\bar r\le2}\{d_{\bar r}\}=1,\qquad \hat c=\max_{1\le\bar h\le2}\sum_{\bar r=1}^{2}|a_{\bar r\bar h}|l_{\bar h}=3,$$

$$\bar A_1=\frac{-3.9+2\times2^{1-\frac13}\times\big(1-\frac13\big)}{3-1}=\frac{-3.9+\frac{4\sqrt[3]{4}}{3}}{2},$$

Fig. 3. The master-slave system without the controller in Example 4.1.
$$\bar A_2=\bar A_1+\frac{-1\times2^{1-3}\times(1-3)}{3-1}=\frac{-3.4+\frac{4\sqrt[3]{4}}{3}}{2},\qquad -\frac{U(t_0)\bar L_1 p_1(\bar k)^{1-p_1}}{\hat c-\hat d}=-\frac{0.05\times2\times\frac13\times2^{1-\frac13}}{3-1}=-\frac{\sqrt[3]{4}}{60}.$$

Hence Ā2 + U(t0) = (−3.4 + 4∛4/3)/2 + 0.05 < −∛4/60 < 0, and (k̄)^{q1}L̄1p1 + L̄2q1(k̄)^{p1} = 2^3 × 2 × (1/3) − 3 × 2^{1/3} = 16/3 − 3∛2 > 0, so (B̄2) is satisfied. Moreover, ĉ = 3 > 1 = d̂, so (B̄3) is satisfied. By Theorem 3.1, the finite-time anti-synchronization of systems (28) and (29) is achieved under the controllers (30) and (31). Fig. 1 shows the curves of the master–slave neural networks with variables w1(t), w2(t), v1(t), v2(t); the error curves e1(t) and e2(t) of the master–slave system are shown in Fig. 2; the master–slave system (28)-(29) and its error without the controllers are shown in Figs. 3 and 4.
Fig. 4. The error without the controller in Example 4.1.
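For readers who want to reproduce figures like Figs. 1 and 2, the following self-contained Python sketch (ours; a crude forward Euler scheme, not the authors' simulation code) integrates the master system (28), the slave system (29) and the controllers; the delayed controller term is implemented in the general form (6), and the state-dependent delay lookup is rounded to the grid and clipped to the past, both of which are our simplifications.

```python
import numpy as np

A = np.array([[0.5, -0.3], [-1.0, 1.3]])
B = np.array([[-0.2, 1.0], [0.7, 1.2]])
d = np.array([1.0, 2.0])
I = np.array([-0.02, 0.03])
G = lambda x: (np.abs(x + 1) - np.abs(x - 1)) / 2          # activation of Example 4.1

dt, T = 1e-3, 6.0
t_grid = np.arange(-1.0, T, dt)                            # keep one unit of history for the delay
off = int(round(1.0 / dt))                                 # index of t = 0
w = np.zeros((len(t_grid), 2)); v = np.zeros_like(w)
w[: off + 1] = np.array([1.01, 3.05]) + t_grid[: off + 1, None]   # initial functions on [-1, 0]
v[: off + 1] = np.array([1.03, 3.02]) + t_grid[: off + 1, None]

def delayed(x, i, t):
    j = off + int(round((t - 0.5 * np.sin(t)) / dt))       # grid index of t - 0.5*sin(t)
    return x[min(j, i)]                                     # never look into the future

for i in range(off, len(t_grid) - 1):
    t = t_grid[i]
    wd, vd = delayed(w, i, t), delayed(v, i, t)
    e, ed = w[i] + v[i], wd + vd                            # error e = w + v as in Section 2
    x = np.sign(e) * (-2.0 + 2.0 * np.abs(e) ** (1 / 3) - np.abs(e) ** 3) \
        - 2.0 * np.sign(e) * (np.abs(B) @ (2.0 * np.abs(ed)))   # controller, delayed part as in (6)
    w[i + 1] = w[i] + dt * (-d * w[i] + A @ G(w[i]) + B @ G(wd) + I)
    v[i + 1] = v[i] + dt * (-d * v[i] + A @ G(v[i]) + B @ G(vd) + I + x)

print("w(T) + v(T) =", w[-1] + v[-1])
print("w(T) - v(T) =", w[-1] - v[-1])
```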
Example 4.2. We now consider a concrete instance of the master system (1), the slave system (3) and the controllers (7):

$$\begin{cases}
\dfrac{dw_1(t)}{dt}=-4w_1(t)+0.1G_1(w_1(t))+0.5G_2(w_2(t))+0.1G_1(w_1(t-0.5\sin t))-0.2G_2(w_2(t-0.5\sin t))-0.02,\\[2mm]
\dfrac{dw_2(t)}{dt}=-4.5w_2(t)+0.9G_1(w_1(t))-3.5G_2(w_2(t))-0.5G_1(w_1(t-0.5\sin t))+0.3G_2(w_2(t-0.5\sin t))+0.03,
\end{cases}\tag{32}$$

$$\begin{cases}
\dfrac{dv_1(t)}{dt}=-4v_1(t)+0.1G_1(v_1(t))+0.5G_2(v_2(t))+0.1G_1(v_1(t-0.5\sin t))-0.2G_2(v_2(t-0.5\sin t))-0.02+x_1(t),\\[2mm]
\dfrac{dv_2(t)}{dt}=-4.5v_2(t)+0.9G_1(v_1(t))-3.5G_2(v_2(t))-0.5G_1(v_1(t-0.5\sin t))+0.3G_2(v_2(t-0.5\sin t))+0.03+x_2(t),
\end{cases}\tag{33}$$

where the controllers are designed as follows:

$$x_1(t)=1.5\operatorname{sign}(e_1(t)),\tag{34}$$

$$x_2(t)=1.5\operatorname{sign}(e_2(t)).\tag{35}$$

Fig. 5. The master-slave system with the controller in Example 4.2.

We set k̄ = 2, L̄3 = 1.5, d1 = 4, d2 = 4.5, I1 = −0.02, I2 = 0.03, τ̄_r̄h̄(t) = τ̄(t) = 0.5(2 + sin t)/3, τ1 = τ2 = 0.5, and take the odd function G(x) = x/2, so that the activation functions G(w_r̄) satisfy condition (B̄1) with l1 = l2 = 1/2. In Theorem 3.2,
$$\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}=\begin{pmatrix}0.1&0.5\\0.9&-3.5\end{pmatrix},\qquad \begin{pmatrix}b_{11}&b_{12}\\b_{21}&b_{22}\end{pmatrix}=\begin{pmatrix}0.1&-0.2\\-0.5&0.3\end{pmatrix}.$$

Fig. 6. The error with the controller in Example 4.2.

Since the controller (7) in Example 4.2 is different from that in [17], the finite-time anti-synchronization of (32) and (33) cannot be tested with the result in [17]. The initial conditions are set as w1(t) = 2.01, w2(t) = −5.05, v1(t) = 2.03, v2(t) = −5.02, so that U(0) = 0.05. By calculation, we have Î + k̄L̄3 = −3.9 + 2 × 1.5 = −0.9 < 0, so (B̄4) is satisfied.
Fig. 7. The master-slave system without the controller in Example 4.2.

Fig. 8. The error without the controller in Example 4.2.

Because â = max_{1≤h̄≤2} Σ_{r̄=1}^{2}|b_r̄h̄|l_h̄ = max(0.3, 0.25) = 0.3, we have Â = âe^{|d̂−ĉ|τ1}/(1 − τ2) = 0.3e^{|4−2|×0.5}/(1 − 0.5) = 3e/5, and d̂ = 4 > ĉ + Â = 2 + 3e/5. Consequently, (B̄5) is satisfied. By Theorem 3.2, the master system (32) and the slave system (33) achieve finite-time anti-synchronization under the controllers (34) and (35) in Example 4.2. The curves of the variables w1(t), w2(t), v1(t) and v2(t) are shown in Fig. 5, the error curves are shown in Fig. 6, and the master–slave system (32)-(33) and its error without the controllers are shown in Figs. 7 and 8.
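The condition checks for Example 4.2 are easy to reproduce; the short Python sketch below (ours, not part of the paper) recomputes â and Â and evaluates the inequalities in (B̄4) and (B̄5) with the data above.

```python
import numpy as np

B = np.array([[0.1, -0.2], [-0.5, 0.3]])      # delayed weights of Example 4.2
l = np.array([0.5, 0.5])                       # Lipschitz constants (G(x) = x/2)
d_hat, c_hat = 4.0, 2.0
I_hat, k, L3 = -3.9, 2, 1.5                    # I_hat as used in the paper's verification
tau1, tau2 = 0.5, 0.5

a_hat = np.max(np.sum(np.abs(B), axis=0) * l)                      # max_h sum_r |b_rh|*l_h = 0.3
A_hat = a_hat * np.exp(abs(d_hat - c_hat) * tau1) / (1 - tau2)     # = 0.6*e = 3e/5
print("a_hat =", a_hat, " A_hat =", round(A_hat, 4))
print("(B4):", I_hat + k * L3 < 0)             # -0.9 < 0
print("(B5):", d_hat > c_hat + A_hat)          # 4 > 2 + 3e/5
```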
5. Conclusion

The finite-time anti-synchronization of master–slave neural networks has been discussed. Without resorting to finite-time stability theorems, two novel criteria ensuring the finite-time anti-synchronization of the above networks are presented by applying integral inequality techniques different from those in [1,2,8] and inequality techniques different from those in [10–21]. Our work on finite-time anti-synchronization is new and of definite theoretical significance.

References
[1] Z.Q. Zhang, J.D. Cao, Novel finite-time synchronization criteria for inertial neural networks with time delays via integral inequality method, IEEE Trans. Neural Netw. Learn. Syst. 30 (5) (2019) 1476–1485. [2] Z.Q. Zhang, A.L. Li, S.H. Yu, Finite-time synchronization for delayed complex– valued neural networks via integrating inequality method, Neurocomputing 318 (2018) 248–260. [3] P. Liu, Z.G. Zeng, J. Wang, Global synchronization of coupled fractional-order recurrent neural networks, IEEE Trans. Neural Netw. Learn. Syst. (2018), doi:10. 1109/TNNLS.2018.2884620. [4] Q. Xiao, T.W. Huang, Z.G. Zeng, Global exponential stability and synchronization for discrete-time inertial neural networks with time delays: a time scale approach, IEEE Trans. Neural Netw. Learn. Syst. (2018), doi:10.1109/TNNLS. 2018.2874982. [5] H. Zhang, N.R. Pal, Y. Sheng, Z.G. Zeng, Distributed adaptive tracking synchronization for coupled reaction-diffusion neural network, IEEE Trans. Neural Netw. Learn. Syst. (2018), doi:10.1109/TNNLS.2018.2869631. [6] H. Zhang, Z.G. Zeng, Q.L. Han, Synchronization of multiple reaction-diffusion neural networks with heterogeneous and unbounded time-varying delays, IEEE Trans. Cybern. 49 (8) (2019) 2980–2991. [7] J.J. Chen, B.S. Chen, Z.G. Zeng, Global asymptotic stability and adaptive ultimate Mittag-Leffler synchronization for a fractional-order complex-valued memristive neural networks with delays, IEEE Trans. Syst. Man. Cybern. Syst. (2018), doi:10.1109/TSMC.2018.2836952. [8] Z.Q. Zhang, L. Ren, New sufficient conditions on global asymptotic synchronization of inertial delayed neural networks by integrating inequality techniques, Nonlinear Dyn. 95 (2019) 905–917. [9] S. Dharania, R. Rakkiyappana, J.H. Park, Pinning sampled-data synchronization of coupled inertial neural networks with reaction-diffusion terms and time– varying delays, Neurocomputing 227 (2017) 101–107. [10] H.Q. Wu, X.W. Zhang, R.X. Li, R. Yao, Adaptive anti-synchronization and h∞ anti-synchronization for memristive neural networks with mixed time delays and reaction-diffusion terms, Neurocomputing 168 (2015) 726–740. [11] M.H. Yu, W.P. Wang, X. Luo, L.L. Liu, M.M. Yuan, Exponential anti-synchronization control of stochastic memristive neural networks with mixed time-varying delays based on novel delay dependent or delay-independent adaptive controller, Math. Probl. Eng. 16 (2017). Article ID 8314757. [12] W.P. Wang, L.X. Li, H.P. Peng, J. Kurths, J.H. Xiao, Y.X. Yang, Finite-time anti-synchronization control of memristive neural networks with stochastic perturbations, Neural Process. Lett. 43 (1) (2016) 49–63. [13] H. Zhao, Q. Zhang, Global impulsive exponential anti-synchronization of delayed chaotic neural networks, Neurocomputing 74 (2011) 563–567. [14] G. Zhang, Y. Shen, L. Wang, Global anti-synchronization of a class of chaotic memristive neural networks with time-varying delays, Neural Netw. 46 (2013) 1–8. [15] A. Wu, Z. Zeng, Anti-synchronization control of a class of memristive recurrent neural networks, Commun. Nonlinear Sci. Numer. Simul. 18 (2013) 373– 385. [16] H. Zhao, L. Li, H. Peng, J. Kurths, J. Xiao, Y. Yang, Anti-synchronization for stochastic memristior-based neural networks with non-modeled dynamics via adaptive control approach, Eur. Phys. J. B 88 (2015) 109. [17] L.L. Wang, T.P. Chen, Finite-time anti-synchronization of neural networks with time-varying delays, Neurocomputing 275 (2018) 1595–1600. [18] H.K. Khalil, J.W. Grizzle, Nonlinear Systems, Prentice Hall, Upper Saddle River, 2002. 
[19] H. Li, M. Zhu, Z.B. Chu, H.B. Du, G.H. Wen, N.D. Alotaibi, Fixed-time synchronization of a class of second-order nonlinear leader-following muiti-agent systems, Asian J. Control 20 (1) (2018) 39–48. [20] Z.B. Wang, H.Q. Wu, Global synchronization in fixed time for semi-Markovian switching complex dynamical networks with hybird couplings and time-varying delays, Nonlinear Dyn. 95 (3) (2019) 2031–2062. [21] Y.L. Huang, S.H. Qiu, S.Y. Ren, Z.W. Zheng, Fixed-time synchronization of coupled Cohen-Grossberg neural networks with and without parameter uncertainties, Neurocomputing 315 (2018) 157–168. [22] C. Aouiti, E.A. Assali, Y.E. Foutayenj, Finite-time and fixed-time synchronization of inertial Cohen-Grossberg neural networks with time varying delays, Neural Processing Letters (2019), doi:10.1007/s11063- 019- 10018- 8. [23] Y.J. Liu, J.J. Huang, T.W. Huang, X.B. Yang, Finite-time synchronization of complex-valued neural networks with multiple time-varying delays and infinite distributed delays, Neural Process. Lett. (2018), doi:10.1007/ s11063- 018- 9958- 6. [24] Q. Xu, S.X. Zhuang, S.J. Liu, J. Xiao, Decentralized adaptive coupling synchronization of fractional-order complex-variable dynamical networks, Neurocomputing 186 (2016) 119–126. [25] Q. Xu, X.H. Xu, S.X. Zhuang, J.X. Xiao, C.H. Song, C. Che, New complex-projective synchronization strategies for drive-response networks with fractional complex-variable dynamics, Appl. Math. Comput. 338 (2018) 552–566. [26] R.Z. Zhang, P.L. Liu, The problems and examples in Mathematics analysis, Jiangxi People Publishing House, 1994, p. 12.
Zhengqiu Zhang is a Professor at the College of Mathematics and Econometrics, Hunan University, China. His research field is neural network theory and applications. He is the author of more than 100 SCI papers.
Ting Zheng is a master's student at the College of Mathematics and Econometrics, Hunan University, China. Her research field is neural network theory and applications. She is the author of more than 2 SCI papers.
Shenghua Yu is a Professor of Econometrics at the School of Economics and Trade, Hunan University, China. His research fields are applied economics and neural network theory and applications. He is the author of more than 10 SCI papers.