ARTICLE IN PRESS
JID: NEUCOM
[m5G;November 6, 2019;14:4]
Neurocomputing xxx (xxxx) xxx
Contents lists available at ScienceDirect
Neurocomputing journal homepage: www.elsevier.com/locate/neucom
Global exponential dissipativity of neutral-type BAM inertial neural networks with mixed time-varying delays

Liyan Duan^a, Jigui Jian^{a,b,∗}, Baoxian Wang^{a,b}

^a College of Science, China Three Gorges University, Yichang, Hubei 443002, China
^b Three Gorges Mathematical Research Center, China Three Gorges University, Yichang, Hubei 443002, China

Article info
Article history: Received 11 March 2019; Revised 14 August 2019; Accepted 20 October 2019; Available online xxx. Communicated by Prof. Huaguang Zhang.
Keywords: BAM inertial neural network; Dissipativity; Globally exponentially attractive set; Neutral-type; Mixed time-varying delay; Inequality
Abstract
This paper considers the global exponential dissipativity of neutral-type BAM inertial neural networks with mixed time-varying delays. First, the proposed BAM inertial neural networks are transformed into a conventional first-order system. Second, by establishing a new neutral-type differential inequality and employing the Lyapunov method together with analytical techniques, some novel sufficient conditions, expressed as algebraic and linear matrix inequalities, are obtained for the global exponential dissipativity of the addressed neural networks. Moreover, the globally exponentially attractive sets and the exponential convergence rate are estimated. Finally, the effectiveness of the obtained results is illustrated by some examples with numerical simulations.
∗ Corresponding author at: College of Science, China Three Gorges University, Yichang, Hubei 443002, China.
E-mail addresses: [email protected] (L. Duan), [email protected] (J. Jian).
https://doi.org/10.1016/j.neucom.2019.10.082
0925-2312/© 2019 Elsevier B.V. All rights reserved.
Please cite this article as: L. Duan, J. Jian and B. Wang, Global exponential dissipativity of neutral-type BAM inertial neural networks with mixed time-varying delays, Neurocomputing, https://doi.org/10.1016/j.neucom.2019.10.082

1. Introduction

The bidirectional associative memory neural networks (BAMNNs), first put forward by Kosko [1,2], are distinctive and have been widely investigated for their potential in information association and memory. In past decades, the analysis of the dynamics of different types of BAMNNs has attracted great attention, and a large number of results for BAMNNs have been obtained (see, e.g., [3–13] and the references therein). In addition, time delays exist widely in various fields such as electronic and biological neural systems. There are many types of delays, such as discrete delays, proportional delays, neutral delays, and finite and infinite distributed delays. In classic delayed BAMNNs, the delays are usually discrete constant or time-varying delays acting only on the states. However, the change of the current state of a system may be related not only to the current state but also to the past state and to changes of the past state, as in partial element equivalent circuits [14]. For this reason, the neutral term is introduced as a more general kind of time delay, referring to delay in the derivative of the state function. In the recent literature, many scholars have studied the asymptotic properties of various neutral-type neural networks [15–22]. In [23], the stability conditions of neutral-type BAMNNs (NT-BAMNNs) with mixed delays and Markovian jump parameters were obtained. Zhao et al. [24] studied the globally attracting sets of NT-BAMNNs with infinite distributed and time-varying delays by developing a new integral inequality. The authors of [25] discussed the global Lagrange stability of NT-BAMNNs with multiple time-varying delays.

It can be noticed that many earlier neural networks were described by first-order differential equations, which ignore the influence of inertia, even though the inertial term is a useful tool for generating bifurcation and chaos. Later, an inertial term was introduced into neural networks, which can produce complex bifurcation behavior and chaos [26]. By now, various dynamic behaviors of inertial neural networks (INNs) [27–31] have been deeply investigated and many interesting results have been obtained. In recent years, the authors of [32–34] explored the stability of various BAM inertial neural networks (BAMINNs) with delays. Zhou et al. [35] studied the stability of neutral-type BAM inertial neural networks (NT-BAMINNs) with discrete time delay. As pointed out in [36], dissipativity is an extension of Lyapunov stability, and dissipative theory provides a productive framework for stability analysis. Moreover, the study of global dissipativity can be used to determine global attracting sets, which are invariant. Once a global attracting set is found, a probable bound on equilibrium points, chaotic attractors and periodic states can be estimated. The dissipativity analysis of systems has attracted more
and more attention from scholars [37–42]. Recently, the global dissipativity of BAMNNs with time-varying and unbounded delays [12] and with infinite distributed delays [13] has been investigated via inequality techniques. Liu and Jian [43] studied the global dissipativity of quaternion-valued BAMNNs with time delay based on the plural decomposition of quaternions and inequality techniques. In [44,45], Tu et al. analyzed the global dissipativity of memristive neutral-type and of delayed uncertain INNs, respectively. Zhang et al. [46] investigated the global exponential dissipativity (GED) of memristive INNs with distributed and discrete time-varying delays. However, as far as we know, there is almost no literature on the GED of NT-BAMINNs. These facts motivate the current study.

In this paper, we focus on the GED of NT-BAMINNs with mixed time-varying delays. The main contributions of this paper are the following: (i) this is one of the first papers to study the GED of NT-BAMINNs with mixed time-varying delays; (ii) a new neutral-type differential inequality is established; (iii) based on the Lyapunov approach and using the new differential inequality together with other inequality techniques, some novel sufficient conditions are established to ensure the GED of the considered NT-BAMINNs under two different types of activation functions. Moreover, the framework of the globally exponentially attractive set (GEAS) is given.

The structure of this paper is organized as follows. Some preliminaries are given in Section 2. The main results are obtained in Section 3. Section 4 gives two examples with numerical simulations to verify the validity of our results. Finally, Section 5 presents some conclusions.

Notations: In this paper, R^n and R^{n×n} denote the set of all n-dimensional real-valued vectors and of all n × n real-valued matrices, respectively. For H ∈ R^{n×n}, H^T denotes the transpose of H, and λ_max(H) (λ_min(H)) refers to the maximal (minimal) eigenvalue of H. G < 0 means that the matrix G is symmetric negative definite, and G < K means G − K < 0. Let I = {1, 2, . . . , n}, J = {1, 2, . . . , m}.

2. Preliminaries
Consider the following NT-BAMINNs with mixed time-varying delays, for i ∈ I, j ∈ J:

d²u_i(t)/dt² = −α_i du_i(t)/dt − a_i u_i(t) + F_i(v(t)) + Σ_{j=1}^m e_{ij} f_j(v̇_j(t − σ(t))) + I_i(t),
d²v_j(t)/dt² = −β_j dv_j(t)/dt − h_j v_j(t) + G_j(u(t)) + Σ_{i=1}^n w_{ji} g_i(u̇_i(t − η(t))) + J_j(t),   (1)

where

F_i(v(t)) = Σ_{j=1}^m [ b_{ij} f_j(v_j(t)) + c_{ij} f_j(v_j(t − τ(t))) + d_{ij} ∫_{t−h(t)}^{t} f_j(v_j(s)) ds ],
G_j(u(t)) = Σ_{i=1}^n [ k_{ji} g_i(u_i(t)) + m_{ji} g_i(u_i(t − δ(t))) + n_{ji} ∫_{t−θ(t)}^{t} g_i(u_i(s)) ds ],

u(t) = (u_1(t), u_2(t), . . . , u_n(t))^T, v(t) = (v_1(t), v_2(t), . . . , v_m(t))^T, u_i(t) and v_j(t) represent the states of the ith and jth neurons, and the second derivative is known as the inertial term of (1). The constants α_i > 0, β_j > 0; a_i > 0, h_j > 0 denote the rates with which the ith and jth neurons reset their potential to the resting state in isolation when disconnected from the networks and external inputs; the constants b_{ij}, c_{ij}, d_{ij}, e_{ij}, k_{ji}, m_{ji}, n_{ji}, w_{ji} denote the connection strengths; the external inputs I_i(t) and J_j(t) satisfy |I_i(t)| ≤ Ī_i, |J_j(t)| ≤ J̄_j; f_j(·), g_i(·) are the activation functions with f_j(0) = 0, g_i(0) = 0; τ(t), δ(t), θ(t), h(t) are the time-varying delays, and σ(t), η(t) are the neutral-type time-varying delays.

Remark 1. Evidently, system (1) is more general than the ones in [12,32,33,35]. For example, if d_{ij} = e_{ij} = 0 and w_{ji} = n_{ji} = 0, then system (1) reduces to the BAMINNs in [32,33]. If d_{ij} = e_{ij} = 0, w_{ji} = n_{ji} = 0, the inertial terms are ignored and the external inputs are all constants, then system (1) becomes the model in [12]. If b_{ij} = d_{ij} = 0, k_{ji} = n_{ji} = 0 and the neutral terms are only linear functions, then system (1) reduces to model (2) in [35].

In this paper, we shall use the following assumptions.

Assumption 1 (H1). There are constants ḡ_i > 0, f̄_j > 0 such that the functions g_i(·), f_j(·) satisfy |g_i(·)| ≤ ḡ_i, |f_j(·)| ≤ f̄_j, i ∈ I, j ∈ J.

Assumption 2 (H2). There exist constants l_i^−, l_i^+, w_j^− and w_j^+ such that f_j(·), g_i(·) satisfy, for all s_1, s_2 ∈ R with s_1 ≠ s_2 (i ∈ I, j ∈ J),

l_i^− ≤ (g_i(s_1) − g_i(s_2))/(s_1 − s_2) ≤ l_i^+,   w_j^− ≤ (f_j(s_1) − f_j(s_2))/(s_1 − s_2) ≤ w_j^+.

Assumption 3 (H3). The time-varying delays are bounded and differentiable and satisfy

0 ≤ θ(t) ≤ θ, 0 ≤ h(t) ≤ h, 0 ≤ δ(t) ≤ δ, 0 ≤ τ(t) ≤ τ, 0 ≤ η(t) ≤ η, 0 ≤ σ(t) ≤ σ,
τ̇(t) ≤ τ̃ < 1, δ̇(t) ≤ δ̃ < 1, σ̇(t) ≤ σ̃ < 1, η̇(t) ≤ η̃ < 1.

Remark 2. From Assumption (H2), one can see that this class of activation functions is broader than the Lipschitz-type, Lurie-type and sigmoid activation functions, because the constants in (H2) are allowed to be negative, zero or positive, which is more general (see, for instance, [7,12,22,27,30–33,44,46]).

Denote

W^− = diag{w_1^−, w_2^−, . . . , w_m^−}, W^+ = diag{w_1^+, w_2^+, . . . , w_m^+},
L^− = diag{l_1^−, l_2^−, . . . , l_n^−}, L^+ = diag{l_1^+, l_2^+, . . . , l_n^+},
l_i = max{|l_i^−|, |l_i^+|}, w_j = max{|w_j^−|, |w_j^+|},
Ŵ = diag{w_1, w_2, . . . , w_m}, L̂ = diag{l_1, l_2, . . . , l_n},
Ī = (Ī_1, Ī_2, . . . , Ī_n)^T, J̄ = (J̄_1, J̄_2, . . . , J̄_m)^T,

N_i = Σ_{j=1}^m (|b_{ij}| + |c_{ij}| + h|d_{ij}| + |e_{ij}|) f̄_j + Ī_i,
N̄_j = Σ_{i=1}^n (|k_{ji}| + |m_{ji}| + θ|n_{ji}| + |w_{ji}|) ḡ_i + J̄_j.

The initial conditions of system (1) are given as

u_i(r) = ϕ_i^u(r), du_i(r)/dr = ψ_i^u(r), i ∈ I,
v_j(r) = ϕ_j^v(r), dv_j(r)/dr = ψ_j^v(r), j ∈ J,  r ∈ [t_0 − ϖ, t_0],   (2)

where the functions ϕ_i^u(r), ψ_i^u(r), ϕ_j^v(r), ψ_j^v(r) are continuous on [t_0 − ϖ, t_0] and ϖ = max{η, τ, h, σ, θ, δ}. If δ(t), h(t), τ(t), θ(t), σ(t),
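Under constant delays and illustrative parameter choices, the dynamics of system (1) can be explored numerically. The following is a minimal sketch for one u-neuron and one v-neuron (n = m = 1); all constants, the tanh activations and the constant external inputs are hypothetical choices (not the data of the paper's examples), and a simple Euler scheme with history buffers stands in for a proper neutral delay-differential-equation integrator.

```python
import numpy as np

# Hypothetical 1x1 instance of system (1); every parameter below is illustrative.
alpha, a, beta, hcoef = 2.0, 1.5, 2.0, 1.5    # alpha_1, a_1, beta_1, h_1
b, c, d, e = 0.3, 0.2, 0.1, 0.1               # b_11, c_11, d_11, e_11 (u-side weights)
k, m, n_c, w = 0.3, 0.2, 0.1, 0.1             # k_11, m_11, n_11, w_11 (v-side weights)
f = g = np.tanh                               # bounded activations satisfying (H1)
tau = delta = 0.5                             # constant discrete delays
h_del = theta = 0.4                           # constant distributed-delay windows
sigma = eta = 0.3                             # constant neutral-type delays
I_ext = J_ext = 0.5                           # bounded constant external inputs

dt, T = 0.001, 10.0
N = int(T / dt)
lag = int(max(tau, delta, h_del, theta, sigma, eta) / dt)

# histories for u, v and for their derivatives (needed by the neutral terms)
u = np.zeros(N + lag); v = np.zeros(N + lag)
du = np.zeros(N + lag); dv = np.zeros(N + lag)
u[:lag + 1] = 0.8                             # constant initial functions on [t0 - max delay, t0]
v[:lag + 1] = -0.6

for t in range(lag, N + lag - 1):
    # distributed-delay integrals over [t - h(t), t], rectangle rule
    Fd = d * np.sum(f(v[t - int(h_del / dt):t + 1])) * dt
    Gd = n_c * np.sum(g(u[t - int(theta / dt):t + 1])) * dt
    # right-hand sides of the two second-order equations in (1)
    ddu = (-alpha * du[t] - a * u[t] + b * f(v[t]) + c * f(v[t - int(tau / dt)])
           + Fd + e * f(dv[t - int(sigma / dt)]) + I_ext)
    ddv = (-beta * dv[t] - hcoef * v[t] + k * g(u[t]) + m * g(u[t - int(delta / dt)])
           + Gd + w * g(du[t - int(eta / dt)]) + J_ext)
    du[t + 1] = du[t] + dt * ddu
    dv[t + 1] = dv[t] + dt * ddv
    u[t + 1] = u[t] + dt * du[t]
    v[t + 1] = v[t] + dt * dv[t]

print(f"u(T) = {u[-1]:.3f}, v(T) = {v[-1]:.3f}")
```

With the strong dampings alpha, beta and the small couplings chosen here, the trajectories remain bounded, which is the qualitative behavior that the dissipativity results below make precise.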
η(t) are unbounded, then ϖ = +∞. (u^T(t, Φ^u, Φ^v), v^T(t, Φ^u, Φ^v))^T denotes the solution of (1) through (t_0, Φ^u, Φ^v), abbreviated as (u^T(t), v^T(t))^T, where

ϕ^u = (ϕ_1^u(r), . . . , ϕ_n^u(r))^T, ψ^u = (ψ_1^u(r), . . . , ψ_n^u(r))^T, Φ^u = ((ϕ^u)^T, (ψ^u)^T)^T,
ϕ^v = (ϕ_1^v(r), . . . , ϕ_m^v(r))^T, ψ^v = (ψ_1^v(r), . . . , ψ_m^v(r))^T, Φ^v = ((ϕ^v)^T, (ψ^v)^T)^T.

Choosing appropriate scalars k_i, k̄_j and using the following variable transformation:
x_i(t) = du_i(t)/dt + k_i u_i(t), i ∈ I,
y_j(t) = dv_j(t)/dt + k̄_j v_j(t), j ∈ J,      (3)

then (1) is equivalent to the following system:

du_i(t)/dt = −k_i u_i(t) + x_i(t),
dx_i(t)/dt = −ρ_i u_i(t) − z_i x_i(t) + F_i(v(t)) + Σ_{j=1}^m e_{ij} f_j(v̇_j(t − σ(t))) + I_i(t),
dv_j(t)/dt = −k̄_j v_j(t) + y_j(t),
dy_j(t)/dt = −ρ̄_j v_j(t) − z̄_j y_j(t) + G_j(u(t)) + Σ_{i=1}^n w_{ji} g_i(u̇_i(t − η(t))) + J_j(t),     (4)

where ρ_i = k_i² + a_i − α_i k_i, z_i = α_i − k_i, ρ̄_j = k̄_j² + h_j − β_j k̄_j, z̄_j = β_j − k̄_j, and the initial conditions can be written as

u_i(r) = ϕ_i^u(r), x_i(r) = k_i ϕ_i^u(r) + ψ_i^u(r) ≜ φ_i^u(r),
v_j(r) = ϕ_j^v(r), y_j(r) = k̄_j ϕ_j^v(r) + ψ_j^v(r) ≜ φ_j^v(r),  r ∈ [t_0 − ϖ, t_0].     (5)

Remark 3. Similar to the work in [34], the free-weighting coefficients k_i, k̄_j are introduced into the variable transformation (3). If k_i = 1, k̄_j = 1, then transformation (3) degenerates into the ones in [32,33]. If m = n, then (3) reduces to that in [34].

Denote

g(·) = (g_1(·), . . . , g_n(·))^T, f(·) = (f_1(·), . . . , f_m(·))^T,
x(t) = (x_1(t), . . . , x_n(t))^T, y(t) = (y_1(t), . . . , y_m(t))^T,
A = diag{k_1, . . . , k_n}, G = diag{ρ_1, . . . , ρ_n}, Z = diag{z_1, . . . , z_n},
H = diag{k̄_1, . . . , k̄_m}, F = diag{ρ̄_1, . . . , ρ̄_m}, Z̄ = diag{z̄_1, . . . , z̄_m},
B = (b_{ij})_{n×m}, C = (c_{ij})_{n×m}, D = (d_{ij})_{n×m}, E = (e_{ij})_{n×m},
K = (k_{ji})_{m×n}, M = (m_{ji})_{m×n}, N = (n_{ji})_{m×n}, W = (w_{ji})_{m×n},
I(t) = (I_1(t), . . . , I_n(t))^T, J(t) = (J_1(t), . . . , J_m(t))^T.

Correspondingly, (1) can be written as

du(t)/dt = −Au(t) + x(t),
dx(t)/dt = −Gu(t) − Zx(t) + Bf(v(t)) + Cf(v(t − τ(t))) + D∫_{t−h(t)}^t f(v(s))ds + Ef(v̇(t − σ(t))) + I(t),
dv(t)/dt = −Hv(t) + y(t),
dy(t)/dt = −Fv(t) − Z̄y(t) + Kg(u(t)) + Mg(u(t − δ(t))) + N∫_{t−θ(t)}^t g(u(s))ds + Wg(u̇(t − η(t))) + J(t).     (6)

Definition 1. System (1) is called a globally exponentially dissipative system (GEDS) if there are constants α > 0 and β > 0 such that, for the compact set Ω = {(u^T(t), v^T(t))^T ∈ R^{n+m} : ‖(u^T(t), v^T(t))^T‖ ≤ β} and every solution (u^T(t), v^T(t))^T ∈ R^{n+m}\Ω with ((Φ^u)^T, (Φ^v)^T)^T ∈ R^{n+m}\Ω for all r ∈ [t_0 − ϖ, t_0], there exists a number ζ = ζ(Φ^u, Φ^v) > 0 such that

‖(u^T(t), v^T(t))^T‖ ≤ β + ζ e^{−α(t−t_0)}.

In this case, the constant α is called the dissipativity rate, and the set Ω is called a globally exponentially attractive set (GEAS). A set Ω is called a positive invariant set if ((Φ^u)^T, (Φ^v)^T)^T ∈ Ω for all r ∈ [t_0 − ϖ, t_0] implies (u^T(t), v^T(t))^T ∈ Ω for t ≥ t_0.

Remark 4. Definition 1 is a common definition of GED for system (1), which is new and different from Definition 2 in [36,44] and Definition 1 in [46]; it parallels the definition of Lyapunov exponential stability. In [36,44,46], GED is defined via the infimum over all the solutions outside a compact set. To our knowledge, there has been no analogous definition of GED paralleling Lyapunov exponential stability, and almost all authors define GED based on the infimum idea. Compared with the definitions in [36,44,46], Definition 1 directly provides useful information on the convergence rate and the GEAS of (1).

Lemma 1 [28]. For b_1 > 0, b_2 > 0, λ > 1, ν > 1 with 1/λ + 1/ν = 1, the inequality b_1 b_2 ≤ (1/λ)b_1^λ + (1/ν)b_2^ν holds, and equality holds only if b_1^λ = b_2^ν.

Lemma 2 [12]. For x_1, y_1 ∈ R^n and a positive definite matrix P,

2x_1^T y_1 ≤ x_1^T P^{−1} x_1 + y_1^T P y_1.

Lemma 3 [34]. For a positive definite matrix D > 0, numbers a_1 < b_1 and an integrable vector function χ(t): [a_1, b_1] → R^n,

( ∫_{a_1}^{b_1} χ(ϑ)dϑ )^T D ( ∫_{a_1}^{b_1} χ(ϑ)dϑ ) ≤ (b_1 − a_1) ∫_{a_1}^{b_1} χ^T(ϑ) D χ(ϑ) dϑ.

Lemma 4 [28]. For matrices A_1, A_2, B_1, B_2 ∈ R^{n×n} and appropriately dimensioned invertible matrices Φ, ϒ,

[A_1; A_2] Φ^{−1} [A_1; A_2]^T + [B_1; B_2] ϒ^{−1} [B_1; B_2]^T = [A_1 B_1; A_2 B_2] [Φ^{−1} 0; 0 ϒ^{−1}] [A_1 B_1; A_2 B_2]^T,

where [A_1; A_2] denotes the block column obtained by stacking A_1 over A_2 and [A_1 B_1; A_2 B_2] the corresponding 2 × 2 block matrix.

Lemma 5. Under Assumptions (H2) and (H3), let u(t) and x(t) be the solution of system (6), let P̃_k (k = 1, 2, . . . , 5) be positive definite matrices and let R = diag(r_1, r_2, . . . , r_n) > 0 be a positive diagonal matrix. Define a Lyapunov functional as
Ṽ(t) = u^T(t)P̃_1u(t) + x^T(t)P̃_2x(t) + 2Σ_{i=1}^n r_i ∫_0^{u_i(t)} (g_i(s) − l_i^− s) ds + ∫_{t−δ(t)}^{t} g^T(u(ϑ))P̃_3 g(u(ϑ)) dϑ + θ ∫_{t−θ}^{t} ∫_{u}^{t} g^T(u(ϑ))P̃_4 g(u(ϑ)) dϑ du + ∫_{t−η(t)}^{t} g^T(u̇(ϑ))P̃_5 g(u̇(ϑ)) dϑ.

If there are positive constants d̃_1, d̃_2, d̃_3, d̃_4 and γ̃ such that

dṼ(t)/dt ≤ −d̃_1 u^T(t)u(t) − d̃_2 x^T(t)x(t) − d̃_3 g^T(u(t))g(u(t)) − d̃_4 g^T(u̇(t))g(u̇(t)) − u^T(t)R(L^+ − L^−)u(t) + γ̃,  t ≥ 0,  (7)

then for every ε ∈ (0, 1] with

ελ_max(P̃_1) − d̃_1 ≤ 0,  ελ_max(P̃_2) − d̃_2 ≤ 0,
εδe^{εδ}λ_max(P̃_3) + εθ³e^{εθ}λ_max(P̃_4) − d̃_3 ≤ 0,  εηe^{εη}λ_max(P̃_5) − d̃_4 ≤ 0,

one can obtain

u^T(t)P̃_1u(t) − γ̃/ε ≤ (Ṽ(0) + S̃) e^{−εt},  t ≥ 0,

where

S̃ = εδe^{εδ}λ_max(P̃_3) ∫_{−δ}^{0} g^T(u(ϑ))g(u(ϑ)) dϑ + εθ³e^{εθ}λ_max(P̃_4) ∫_{−θ}^{0} g^T(u(ϑ))g(u(ϑ)) dϑ + εηe^{εη}λ_max(P̃_5) ∫_{−η}^{0} g^T(u̇(ϑ))g(u̇(ϑ)) dϑ.

Proof. Consider the function W̃(t) = (Ṽ(t) − γ̃/ε)e^{εt}. By (H2), 2Σ_{i=1}^n r_i ∫_0^{u_i(t)} (g_i(ϑ) − l_i^−ϑ) dϑ ≤ u^T(t)R(L^+ − L^−)u(t), so that, using (7) and ε ≤ 1,

dW̃(t)/dt = εe^{εt}Ṽ(t) − γ̃e^{εt} + e^{εt} dṼ(t)/dt
≤ e^{εt}(ελ_max(P̃_1) − d̃_1)u^T(t)u(t) + e^{εt}(ελ_max(P̃_2) − d̃_2)x^T(t)x(t) − e^{εt}d̃_3 g^T(u(t))g(u(t)) − e^{εt}d̃_4 g^T(u̇(t))g(u̇(t))
+ εe^{εt} ∫_{t−δ(t)}^t g^T(u(ϑ))P̃_3g(u(ϑ))dϑ + εe^{εt} θ ∫_{t−θ}^t ∫_u^t g^T(u(ϑ))P̃_4g(u(ϑ))dϑdu + εe^{εt} ∫_{t−η(t)}^t g^T(u̇(ϑ))P̃_5g(u̇(ϑ))dϑ.  (8)

Integrating both sides of (8) from 0 to t_1 (t_1 > 0), it follows that

e^{εt_1}Ṽ(t_1) − (γ̃/ε)e^{εt_1} ≤ Ṽ(0) − γ̃/ε − ∫_0^{t_1} e^{εt}d̃_3 g^T(u(t))g(u(t))dt − ∫_0^{t_1} e^{εt}d̃_4 g^T(u̇(t))g(u̇(t))dt
+ ε∫_0^{t_1} e^{εt} ∫_{t−δ(t)}^t g^T(u(ϑ))P̃_3g(u(ϑ))dϑdt + ε∫_0^{t_1} e^{εt} θ ∫_{t−θ}^t ∫_u^t g^T(u(ϑ))P̃_4g(u(ϑ))dϑdudt + ε∫_0^{t_1} e^{εt} ∫_{t−η(t)}^t g^T(u̇(ϑ))P̃_5g(u̇(ϑ))dϑdt.

The terms with double integrals may be estimated by exchanging the order of integration: since, for ϑ ∈ [−δ, t_1], the inner integral satisfies ∫_{max{ϑ,0}}^{min{ϑ+δ,t_1}} εe^{εt}dt ≤ εδe^{ε(ϑ+δ)} = εδe^{εδ}e^{εϑ},

ε∫_0^{t_1} e^{εt} ∫_{t−δ(t)}^t g^T(u(ϑ))P̃_3g(u(ϑ))dϑdt ≤ εδe^{εδ} ∫_{−δ}^0 e^{εϑ} g^T(u(ϑ))P̃_3g(u(ϑ))dϑ + εδe^{εδ} ∫_0^{t_1} e^{εt} g^T(u(t))P̃_3g(u(t))dt,  (9)

and, in the same way,

ε∫_0^{t_1} e^{εt} ∫_{t−η(t)}^t g^T(u̇(ϑ))P̃_5g(u̇(ϑ))dϑdt ≤ εηe^{εη} ( ∫_{−η}^0 e^{εϑ} g^T(u̇(ϑ))P̃_5g(u̇(ϑ))dϑ + ∫_0^{t_1} e^{εt} g^T(u̇(t))P̃_5g(u̇(t))dt ).  (10)

The term with the triple integral in (8) can be estimated as

ε∫_0^{t_1} e^{εt} θ ∫_{t−θ}^t ∫_u^t g^T(u(ϑ))P̃_4g(u(ϑ))dϑdudt ≤ ε∫_0^{t_1} e^{εt} θ² ∫_{t−θ}^t g^T(u(ϑ))P̃_4g(u(ϑ))dϑdt
≤ εθ³e^{εθ} ( ∫_{−θ}^0 e^{εϑ} g^T(u(ϑ))P̃_4g(u(ϑ))dϑ + ∫_0^{t_1} e^{εt} g^T(u(t))P̃_4g(u(t))dt ).  (11)

According to (8)–(11), noting that e^{εϑ} ≤ 1 for ϑ ≤ 0 and g^T(·)P̃_kg(·) ≤ λ_max(P̃_k)g^T(·)g(·), we can get

e^{εt_1}Ṽ(t_1) − (γ̃/ε)e^{εt_1} ≤ Ṽ(0) − γ̃/ε + S̃
+ ∫_0^{t_1} e^{εt}[εδe^{εδ}λ_max(P̃_3) + εθ³e^{εθ}λ_max(P̃_4) − d̃_3] g^T(u(t))g(u(t))dt
+ ∫_0^{t_1} e^{εt}[εηe^{εη}λ_max(P̃_5) − d̃_4] g^T(u̇(t))g(u̇(t))dt
≤ Ṽ(0) − γ̃/ε + S̃ ≤ Ṽ(0) + S̃.

So we have Ṽ(t_1) − γ̃/ε ≤ (Ṽ(0) + S̃)e^{−εt_1}, and, from the arbitrariness of t_1, one derives

u^T(t)P̃_1u(t) − γ̃/ε ≤ Ṽ(t) − γ̃/ε ≤ (Ṽ(0) + S̃)e^{−εt},  t ≥ 0. □
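The scalar skeleton of Lemma 5 is a comparison estimate of Halanay type: a functional whose derivative satisfies a bound of the form (7) decays exponentially toward the level γ̃/ε. This reduced claim (with the delay terms absent) can be sanity-checked numerically; all constants in the snippet below are hypothetical and the check is illustrative, not a proof.

```python
import numpy as np

# Scalar sanity check of the Lemma-5-type estimate: if dV/dt <= -d1*V + gamma
# with d1 >= eps, then V(t) - gamma/eps <= (V(0) + S)e^{-eps t} for any S >= 0.
d1, gamma, eps, S = 2.0, 1.0, 1.0, 0.3      # hypothetical constants, d1 >= eps
dt, T = 1e-4, 10.0
ts = np.arange(0.0, T, dt)

V = np.empty_like(ts)
V[0] = 5.0                                   # V(0)
for i in range(len(ts) - 1):
    # worst case: equality in the differential inequality, explicit Euler step
    V[i + 1] = V[i] + dt * (-d1 * V[i] + gamma)

bound = gamma / eps + (V[0] + S) * np.exp(-eps * ts)
print("estimate holds on the whole grid:", bool(np.all(V <= bound + 1e-9)))
```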
Remark 5. Lemma 5 develops and generalizes Theorem 1 in [22], which was used to investigate NT-BAMNNs with Lurie-type activation functions. Unfortunately, for NT-BAMINNs and NT-BAMNNs with non-Lurie-type activation functions and mixed time-varying delays, especially when the neutral terms are unbounded, Theorem 1 in [22] cannot be applied directly, whereas Lemma 5 can handle NT-BAMINNs and NT-BAMNNs with unbounded neutral terms and non-Lurie-type activation functions.

3. Main results

Theorem 1. Under Assumption (H1) and 0 ≤ h(t) ≤ h, 0 ≤ θ(t) ≤ θ, if there exist positive constants ε_i, ξ_i, ν_i, λ_j, μ_j, γ_j and μ > 1 such that
b_1 = min_{1≤i≤n} { k_i − ((μ−1)/μ)ε_i^{μ/(μ−1)} − |ρ_i|/(μξ_i^μ) } > 0,  (12)

b_2 = min_{1≤i≤n} { z_i − |ρ_i|((μ−1)/μ)ξ_i^{μ/(μ−1)} − ((μ−1)/μ)ν_i^{μ/(μ−1)} − 1/(με_i^μ) } > 0,  (13)

c_1 = min_{1≤j≤m} { k̄_j − ((μ−1)/μ)λ_j^{μ/(μ−1)} − |ρ̄_j|/(μμ_j^μ) } > 0,  (14)

c_2 = min_{1≤j≤m} { z̄_j − |ρ̄_j|((μ−1)/μ)μ_j^{μ/(μ−1)} − ((μ−1)/μ)γ_j^{μ/(μ−1)} − 1/(μλ_j^μ) } > 0,  (15)

then (1) is a GEDS. Moreover, the set

Ω_1 = { (u^T(t), v^T(t))^T ∈ R^{n+m} : Σ_{i=1}^n |u_i(t)|^μ + Σ_{j=1}^m |v_j(t)|^μ ≤ ∆/b̂ }

is a GEAS and a positive invariant set, where

∆ = Σ_{i=1}^n N_i^μ/(μν_i^μ) + Σ_{j=1}^m N̄_j^μ/(μγ_j^μ),  b̂ = min{b_1, b_2, c_1, c_2}.

Proof. Introduce the Lyapunov candidate function

V(t) = Σ_{i=1}^n |u_i(t)|^μ/μ + Σ_{i=1}^n |x_i(t)|^μ/μ + Σ_{j=1}^m |v_j(t)|^μ/μ + Σ_{j=1}^m |y_j(t)|^μ/μ.

Taking the upper right derivative of V(t) along the solutions of (4), one has

D^+V(t)|_{(4)} = Σ_{i=1}^n |u_i(t)|^{μ−1}D^+|u_i(t)| + Σ_{i=1}^n |x_i(t)|^{μ−1}D^+|x_i(t)| + Σ_{j=1}^m |v_j(t)|^{μ−1}D^+|v_j(t)| + Σ_{j=1}^m |y_j(t)|^{μ−1}D^+|y_j(t)|
≤ −Σ_{i=1}^n k_i|u_i(t)|^μ + Σ_{i=1}^n |u_i(t)|^{μ−1}|x_i(t)| + Σ_{i=1}^n |ρ_i||x_i(t)|^{μ−1}|u_i(t)| − Σ_{i=1}^n z_i|x_i(t)|^μ + Σ_{i=1}^n |x_i(t)|^{μ−1}N_i
− Σ_{j=1}^m k̄_j|v_j(t)|^μ + Σ_{j=1}^m |v_j(t)|^{μ−1}|y_j(t)| + Σ_{j=1}^m |ρ̄_j||y_j(t)|^{μ−1}|v_j(t)| − Σ_{j=1}^m z̄_j|y_j(t)|^μ + Σ_{j=1}^m |y_j(t)|^{μ−1}N̄_j.  (16)

In view of Lemma 1, one can obtain

|u_i(t)|^{μ−1}|x_i(t)| = ε_i|u_i(t)|^{μ−1} · (1/ε_i)|x_i(t)| ≤ ((μ−1)/μ)ε_i^{μ/(μ−1)}|u_i(t)|^μ + (1/(με_i^μ))|x_i(t)|^μ,  (17)

|x_i(t)|^{μ−1}|u_i(t)| = ξ_i|x_i(t)|^{μ−1} · (1/ξ_i)|u_i(t)| ≤ ((μ−1)/μ)ξ_i^{μ/(μ−1)}|x_i(t)|^μ + (1/(μξ_i^μ))|u_i(t)|^μ,  (18)

|x_i(t)|^{μ−1}N_i = ν_i|x_i(t)|^{μ−1} · (N_i/ν_i) ≤ ((μ−1)/μ)ν_i^{μ/(μ−1)}|x_i(t)|^μ + N_i^μ/(μν_i^μ),  (19)

|v_j(t)|^{μ−1}|y_j(t)| = λ_j|v_j(t)|^{μ−1} · (1/λ_j)|y_j(t)| ≤ ((μ−1)/μ)λ_j^{μ/(μ−1)}|v_j(t)|^μ + (1/(μλ_j^μ))|y_j(t)|^μ,  (20)

|y_j(t)|^{μ−1}|v_j(t)| = μ_j|y_j(t)|^{μ−1} · (1/μ_j)|v_j(t)| ≤ ((μ−1)/μ)μ_j^{μ/(μ−1)}|y_j(t)|^μ + (1/(μμ_j^μ))|v_j(t)|^μ,  (21)

|y_j(t)|^{μ−1}N̄_j = γ_j|y_j(t)|^{μ−1} · (N̄_j/γ_j) ≤ ((μ−1)/μ)γ_j^{μ/(μ−1)}|y_j(t)|^μ + N̄_j^μ/(μγ_j^μ).  (22)

According to (12)–(22), we get

D^+V(t)|_{(4)} ≤ −Σ_{i=1}^n [ k_i − ((μ−1)/μ)ε_i^{μ/(μ−1)} − |ρ_i|/(μξ_i^μ) ]|u_i(t)|^μ − Σ_{i=1}^n [ z_i − |ρ_i|((μ−1)/μ)ξ_i^{μ/(μ−1)} − ((μ−1)/μ)ν_i^{μ/(μ−1)} − 1/(με_i^μ) ]|x_i(t)|^μ
− Σ_{j=1}^m [ k̄_j − ((μ−1)/μ)λ_j^{μ/(μ−1)} − |ρ̄_j|/(μμ_j^μ) ]|v_j(t)|^μ − Σ_{j=1}^m [ z̄_j − |ρ̄_j|((μ−1)/μ)μ_j^{μ/(μ−1)} − ((μ−1)/μ)γ_j^{μ/(μ−1)} − 1/(μλ_j^μ) ]|y_j(t)|^μ + Σ_{i=1}^n N_i^μ/(μν_i^μ) + Σ_{j=1}^m N̄_j^μ/(μγ_j^μ)
≤ −Σ_{i=1}^n b_1|u_i(t)|^μ − Σ_{i=1}^n b_2|x_i(t)|^μ − Σ_{j=1}^m c_1|v_j(t)|^μ − Σ_{j=1}^m c_2|y_j(t)|^μ + ∆
≤ −b̂μV(t) + ∆ < 0  when V(t) > ∆/(b̂μ),

and we have

V(t) − ∆/(b̂μ) ≤ (V(0) − ∆/(b̂μ)) e^{−b̂μt},  t ≥ 0,  (23)

when V(t) > ∆/(b̂μ), i.e., (u^T(t), v^T(t))^T ∈ R^{n+m}\Ω_1. Accordingly, if ϕ_i^u(r), φ_i^u(r), ϕ_j^v(r), φ_j^v(r) ∈ Ω_1 for i ∈ I, j ∈ J, then (u^T(t), v^T(t))^T ∈ Ω_1 for t ≥ 0, which shows that Ω_1 is a positive invariant set of (4). Based on (23), one easily gets that system (4) is a GEDS and Ω_1 is a GEAS of (4). Since system (4) is an equivalent system of (1), system (1) is a GEDS, and Ω_1 is a GEAS and a positive invariant set of (1). In fact, we can get from (23) that

Σ_{i=1}^n |u_i(t)|^μ + Σ_{j=1}^m |v_j(t)|^μ ≤ μV(t) ≤ ∆/b̂ + Ke^{−b̂μt},  t ≥ 0,  (24)

where K = Σ_{i=1}^n (|u_i(0)|^μ + |x_i(0)|^μ) + Σ_{j=1}^m (|v_j(0)|^μ + |y_j(0)|^μ). From Definition 1, we get the conclusion that (1) is a GEDS and Ω_1 is a GEAS of (1). □
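The Young-inequality splittings (17)–(22), which produce the constants b_1, b_2, c_1, c_2, can be spot-checked numerically. The sketch below verifies the form of (17) on random samples; the values of μ and ε are hypothetical choices, not taken from the paper's examples.

```python
import numpy as np

# Spot-check of (17): |u|^(mu-1)|x| <= ((mu-1)/mu)*eps^(mu/(mu-1))*|u|^mu
#                                     + |x|^mu/(mu*eps^mu),
# which is Lemma 1 with b1 = eps*|u|^(mu-1), b2 = |x|/eps,
# lambda = mu/(mu-1), nu = mu.
rng = np.random.default_rng(0)
mu, eps = 3.0, 0.7                        # hypothetical: mu > 1, eps > 0
ok = True
for _ in range(10_000):
    u, x = rng.uniform(-10, 10, size=2)
    lhs = abs(u) ** (mu - 1) * abs(x)
    rhs = ((mu - 1) / mu) * eps ** (mu / (mu - 1)) * abs(u) ** mu \
          + abs(x) ** mu / (mu * eps ** mu)
    ok &= lhs <= rhs + 1e-9
print("inequality (17) verified on random samples:", ok)
```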
Theorem 2. Under Assumption (H1) and 0 ≤ h(t) ≤ h, 0 ≤ θ(t) ≤ θ, if the following conditions are satisfied:

d_1 = min_{1≤i≤n} {z_i − 1} > 0,  d_2 = min_{1≤j≤m} {z̄_j − 1} > 0,
d_3 = min_{1≤i≤n} {k_i − |ρ_i|} > 0,  d_4 = min_{1≤j≤m} {k̄_j − |ρ̄_j|} > 0,  (25)

then system (1) is a GEDS and the set

Ω_2 = { (u^T(t), v^T(t))^T ∈ R^{n+m} : Σ_{i=1}^n |u_i(t)| + Σ_{j=1}^m |v_j(t)| ≤ ∆̄/d }   (26)

is a GEAS and a positive invariant set, where ∆̄ = Σ_{i=1}^n N_i + Σ_{j=1}^m N̄_j and d = min{d_1, d_2, d_3, d_4}.

Proof. Consider the Lyapunov candidate function

V(t) = Σ_{i=1}^n (|x_i(t)| + |u_i(t)|) + Σ_{j=1}^m (|v_j(t)| + |y_j(t)|).

Computing the upper right-hand derivative of V(t) along the solutions of (4), we can obtain

D^+V(t)|_{(4)} ≤ Σ_{i=1}^n (−k_i|u_i(t)| + |x_i(t)|) + Σ_{i=1}^n (|ρ_i||u_i(t)| − z_i|x_i(t)| + N_i) + Σ_{j=1}^m (−k̄_j|v_j(t)| + |y_j(t)|) + Σ_{j=1}^m (|ρ̄_j||v_j(t)| − z̄_j|y_j(t)| + N̄_j)
= −Σ_{i=1}^n [ (k_i − |ρ_i|)|u_i(t)| + (z_i − 1)|x_i(t)| ] − Σ_{j=1}^m [ (k̄_j − |ρ̄_j|)|v_j(t)| + (z̄_j − 1)|y_j(t)| ] + ∆̄
≤ −dV(t) + ∆̄ < 0  when V(t) > ∆̄/d,

and one has

V(t) − ∆̄/d ≤ (V(0) − ∆̄/d) e^{−dt},  t ≥ 0.  (27)

Similar to the proof of Theorem 1, Ω_2 is a positive invariant set and a GEAS, and system (1) is globally exponentially dissipative. □

Remark 6. Since the Young inequality of Lemma 1 is constrained by μ > 1, Theorems 1 and 2 enrich and complement each other.

Theorem 3. Under Assumptions (H2) and (H3), if there exist six positive diagonal matrices S_1, S_2, S_3, S_4, Q = diag(q_1, q_2, . . . , q_m), R = diag(r_1, r_2, . . . , r_n) and twenty positive definite matrices P_k (k = 1, 2, . . . , 20) such that

[ Θ_3 + P_11,  Θ_2,    P_2B,  R,     0,    P_2C,          P_2D,  P_2E;
  Θ_2^T,       Π_1,    0,     −AR,   0,    0,             0,     0;
  B^TP_2,      0,      Π_2,   0,     0,    0,             0,     0;
  R,           −RA^T,  0,     Π_3,   0,    0,             0,     0;
  0,           0,      0,     0,     Π_4,  0,             0,     0;
  C^TP_2,      0,      0,     0,     0,    −(1−τ̃)P_5,    0,     0;
  D^TP_2,      0,      0,     0,     0,    0,             −P_6,  0;
  E^TP_2,      0,      0,     0,     0,    0,             0,     −(1−σ̃)P_7 ] < 0,   (28)

[ Θ_6 + P_15,  Θ_5,    Q,     P_4K,  0,    P_4M,          P_4N,  P_4W;
  Θ_5^T,       Π_5,    −HQ,   0,     0,    0,             0,     0;
  Q^T,         −QH,    Π_6,   0,     0,    0,             0,     0;
  K^TP_4,      0,      0,     Π_7,   0,    0,             0,     0;
  0,           0,      0,     0,     Π_8,  0,             0,     0;
  M^TP_4,      0,      0,     0,     0,    −(1−δ̃)P_8,    0,     0;
  N^TP_4,      0,      0,     0,     0,    0,             −P_9,  0;
  W^TP_4,      0,      0,     0,     0,    0,             0,     −(1−η̃)P_10 ] < 0,   (29)

where

Θ_1 = −2P_1A + 2L^−RA + AL̂S_4L̂A + 2L̂S_2L̂,  Θ_2 = P_1 − P_2G − RL^− − L̂S_4L̂A,  Θ_3 = −2P_2Z + P_2 + L̂S_4L̂,
Θ_4 = −2P_3H + 2W^−QH + HŴS_3ŴH + 2ŴS_1Ŵ,  Θ_5 = P_3 − P_4F − QW^− − ŴS_3ŴH,  Θ_6 = −2P_4Z̄ + P_4 + ŴS_3Ŵ,
Π_1 = Θ_1 + P_12 + R(L^+ − L^−),  Π_2 = P_13 + P_5 − S_1,  Π_3 = P_14 + P_8 − S_2,  Π_4 = P_19 + P_7 − S_3,
Π_5 = Θ_4 + P_16 + Q(W^+ − W^−),  Π_6 = P_17 + h²P_6 − S_1,  Π_7 = P_18 + θ²P_9 − S_2,  Π_8 = P_20 + P_10 − S_4,

then (1) is a GEDS, and the set

Ω_3 = { (u^T(t), v^T(t))^T ∈ R^{n+m} : u^T(t)P_1u(t) + v^T(t)P_3v(t) ≤ γ*/ε̃ }

is a GEAS, where γ* = Ī^TP_2Ī + J̄^TP_4J̄ and

ε̃ = sup{ ε > 0 : ελ_max(P_1) − λ_min(P_12) ≤ 0, ελ_max(P_2) − λ_min(P_11) ≤ 0, ελ_max(P_3) − λ_min(P_16) ≤ 0, ελ_max(P_4) − λ_min(P_15) ≤ 0, ετe^{ετ}λ_max(P_5) + εh³e^{εh}λ_max(P_6) − λ_min(P_13 + P_17) ≤ 0, εσe^{εσ}λ_max(P_7) − λ_min(P_19) ≤ 0, εδe^{εδ}λ_max(P_8) + εθ³e^{εθ}λ_max(P_9) − λ_min(P_14 + P_18) ≤ 0, εηe^{εη}λ_max(P_10) − λ_min(P_20) ≤ 0 }.
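In practice, a candidate solution of LMIs such as (28) and (29) is verified by assembling the symmetric block matrix for the given P's and testing that all its eigenvalues are negative. The sketch below shows only the mechanics on a small hypothetical block matrix; the blocks are illustrative stand-ins, not the paper's Θ and Π blocks.

```python
import numpy as np

def is_neg_def(M, tol=1e-10):
    """Check negative definiteness of a (nearly) symmetric matrix."""
    M = 0.5 * (M + M.T)                  # symmetrize against round-off
    return bool(np.linalg.eigvalsh(M).max() < -tol)

n = 2
Theta = -4.0 * np.eye(n)                 # stand-in for a diagonal block such as Theta_3 + P_11
Coup = 0.5 * np.eye(n)                   # stand-in for an off-diagonal block such as P_2 B
LMI = np.block([[Theta, Coup],
                [Coup.T, -3.0 * np.eye(n)]])
print("LMI block matrix negative definite:", is_neg_def(LMI))
```

For actually searching over the unknown matrices P_k, S_k, Q, R, a semidefinite-programming solver would be used instead; the eigenvalue test above only certifies a given candidate.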
Proof. Consider the following Lyapunov functional:

V(t) = Σ_{j=1}^{12} V_j(t),

where

V_1(t) = u^T(t)P_1u(t),  V_2(t) = x^T(t)P_2x(t),  V_3(t) = v^T(t)P_3v(t),  V_4(t) = y^T(t)P_4y(t),
V_5(t) = 2Σ_{j=1}^m q_j ∫_0^{v_j(t)} (f_j(ϑ) − w_j^−ϑ) dϑ,
V_6(t) = ∫_{t−τ(t)}^t f^T(v(ϑ))P_5f(v(ϑ))dϑ,
V_7(t) = h ∫_{t−h}^t ∫_u^t f^T(v(ϑ))P_6f(v(ϑ))dϑdu,
V_8(t) = ∫_{t−σ(t)}^t f^T(v̇(ϑ))P_7f(v̇(ϑ))dϑ,
V_9(t) = 2Σ_{i=1}^n r_i ∫_0^{u_i(t)} (g_i(ϑ) − l_i^−ϑ) dϑ,
V_10(t) = ∫_{t−δ(t)}^t g^T(u(ϑ))P_8g(u(ϑ))dϑ,
V_11(t) = θ ∫_{t−θ}^t ∫_u^t g^T(u(ϑ))P_9g(u(ϑ))dϑdu,
V_12(t) = ∫_{t−η(t)}^t g^T(u̇(ϑ))P_10g(u̇(ϑ))dϑ.

Computing the derivatives of V_j(t) along the solutions of system (6), it follows that

V̇_1(t) = 2u^T(t)P_1u̇(t) = 2u^T(t)P_1(−Au(t) + x(t)) = −2u^T(t)P_1Au(t) + 2u^T(t)P_1x(t),  (30)

V̇_2(t) = −2x^T(t)P_2Gu(t) − 2x^T(t)P_2Zx(t) + 2x^T(t)P_2Bf(v(t)) + 2x^T(t)P_2Cf(v(t−τ(t))) + 2x^T(t)P_2D∫_{t−h(t)}^t f(v(ϑ))dϑ + 2x^T(t)P_2Ef(v̇(t−σ(t))) + 2x^T(t)P_2I(t),  (31)

V̇_3(t) = −2v^T(t)P_3Hv(t) + 2v^T(t)P_3y(t),  (32)

V̇_4(t) = −2y^T(t)P_4Fv(t) − 2y^T(t)P_4Z̄y(t) + 2y^T(t)P_4Kg(u(t)) + 2y^T(t)P_4Mg(u(t−δ(t))) + 2y^T(t)P_4N∫_{t−θ(t)}^t g(u(ϑ))dϑ + 2y^T(t)P_4Wg(u̇(t−η(t))) + 2y^T(t)P_4J(t),  (33)

V̇_5(t) = −2v^T(t)HQf(v(t)) + 2y^T(t)Qf(v(t)) + 2v^T(t)W^−QHv(t) − 2v^T(t)W^−Qy(t),  (34)

V̇_6(t) = f^T(v(t))P_5f(v(t)) − (1−τ̇(t))f^T(v(t−τ(t)))P_5f(v(t−τ(t))),  (35)

V̇_7(t) = h²f^T(v(t))P_6f(v(t)) − h∫_{t−h}^t f^T(v(ϑ))P_6f(v(ϑ))dϑ,  (36)

V̇_8(t) = f^T(v̇(t))P_7f(v̇(t)) − (1−σ̇(t))f^T(v̇(t−σ(t)))P_7f(v̇(t−σ(t))),  (37)

V̇_9(t) = 2x^T(t)Rg(u(t)) − 2u^T(t)ARg(u(t)) + 2u^T(t)L^−RAu(t) − 2u^T(t)L^−Rx(t),  (38)

V̇_10(t) = g^T(u(t))P_8g(u(t)) − (1−δ̇(t))g^T(u(t−δ(t)))P_8g(u(t−δ(t))),  (39)

V̇_11(t) = θ²g^T(u(t))P_9g(u(t)) − θ∫_{t−θ}^t g^T(u(ϑ))P_9g(u(ϑ))dϑ,  (40)

V̇_12(t) = g^T(u̇(t))P_10g(u̇(t)) − (1−η̇(t))g^T(u̇(t−η(t)))P_10g(u̇(t−η(t))).  (41)

By Lemma 2, we have

2x^T(t)P_2Cf(v(t−τ(t))) − (1−τ̇(t))f^T(v(t−τ(t)))P_5f(v(t−τ(t))) ≤ x^T(t)P_2C[(1−τ̃)P_5]^{−1}C^TP_2x(t),  (42)

2y^T(t)P_4Mg(u(t−δ(t))) − (1−δ̇(t))g^T(u(t−δ(t)))P_8g(u(t−δ(t))) ≤ y^T(t)P_4M[(1−δ̃)P_8]^{−1}M^TP_4y(t),  (43)

2y^T(t)P_4Wg(u̇(t−η(t))) − (1−η̇(t))g^T(u̇(t−η(t)))P_10g(u̇(t−η(t))) ≤ y^T(t)P_4W[(1−η̃)P_10]^{−1}W^TP_4y(t),  (44)

2x^T(t)P_2Ef(v̇(t−σ(t))) − (1−σ̇(t))f^T(v̇(t−σ(t)))P_7f(v̇(t−σ(t))) ≤ x^T(t)P_2E[(1−σ̃)P_7]^{−1}E^TP_2x(t),  (45)

2x^T(t)P_2I(t) ≤ x^T(t)P_2x(t) + Ī^TP_2Ī,  2y^T(t)P_4J(t) ≤ y^T(t)P_4y(t) + J̄^TP_4J̄.  (46)

Combining Lemmas 3 and 2, one can obtain

2x^T(t)P_2D∫_{t−h(t)}^t f(v(ϑ))dϑ − h∫_{t−h}^t f^T(v(ϑ))P_6f(v(ϑ))dϑ
≤ 2x^T(t)P_2D∫_{t−h(t)}^t f(v(ϑ))dϑ − ( ∫_{t−h(t)}^t f(v(ϑ))dϑ )^T P_6 ( ∫_{t−h(t)}^t f(v(ϑ))dϑ )
≤ x^T(t)P_2DP_6^{−1}D^TP_2x(t),  (47)

2y^T(t)P_4N∫_{t−θ(t)}^t g(u(ϑ))dϑ − θ∫_{t−θ}^t g^T(u(ϑ))P_9g(u(ϑ))dϑ ≤ y^T(t)P_4NP_9^{−1}N^TP_4y(t).  (48)
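The bound 2x^Ty ≤ x^TP^{−1}x + y^TPy from Lemma 2, used repeatedly in (42)–(48), can be spot-checked numerically on random positive definite matrices P; the construction of P below is an arbitrary illustrative choice.

```python
import numpy as np

# Spot-check of Lemma 2: 2 x^T y <= x^T P^{-1} x + y^T P y for P > 0.
rng = np.random.default_rng(1)
ok = True
for _ in range(1000):
    n = 4
    A = rng.standard_normal((n, n))
    P = A @ A.T + n * np.eye(n)          # random positive definite matrix
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    lhs = 2.0 * (x @ y)
    rhs = x @ np.linalg.solve(P, x) + y @ P @ y
    ok &= lhs <= rhs + 1e-9
print("Lemma 2 inequality verified on random samples:", ok)
```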
Under Assumption (H2), the following inequalities hold:

f^T(v(t))S_1f(v(t)) − v^T(t)ŴS_1Ŵv(t) ≤ 0,  (49)

g^T(u(t))S_2g(u(t)) − u^T(t)L̂S_2L̂u(t) ≤ 0,  (50)

f^T(v̇(t))S_3f(v̇(t)) ≤ v̇^T(t)ŴS_3Ŵv̇(t) = v^T(t)HŴS_3ŴHv(t) − 2y^T(t)ŴS_3ŴHv(t) + y^T(t)ŴS_3Ŵy(t),  (51)

g^T(u̇(t))S_4g(u̇(t)) ≤ u̇^T(t)L̂S_4L̂u̇(t) = u^T(t)AL̂S_4L̂Au(t) − 2x^T(t)L̂S_4L̂Au(t) + x^T(t)L̂S_4L̂x(t).  (52)

Combining (30)–(52) and Lemma 4, we have

dV(t)/dt|_{(6)} ≤ u^T(t)[−2P_1A + 2L^−RA + AL̂S_4L̂A + 2L̂S_2L̂]u(t) + x^T(t)[P_2 + L̂S_4L̂ − 2P_2Z]x(t) + 2x^T(t)[P_1 − P_2G − RL^− − L̂S_4L̂A]u(t) + 2x^T(t)P_2Bf(v(t))
+ x^T(t)P_2C[(1−τ̃)P_5]^{−1}C^TP_2x(t) + x^T(t)P_2DP_6^{−1}D^TP_2x(t) + x^T(t)P_2E[(1−σ̃)P_7]^{−1}E^TP_2x(t) + Ī^TP_2Ī
+ v^T(t)[−2P_3H + 2W^−QH + HŴS_3ŴH + 2ŴS_1Ŵ]v(t) + 2y^T(t)[P_3 − P_4F − QW^− − ŴS_3ŴH]v(t) + y^T(t)[−2P_4Z̄ + P_4 + ŴS_3Ŵ]y(t)
+ 2y^T(t)P_4Kg(u(t)) + y^T(t)P_4M[(1−δ̃)P_8]^{−1}M^TP_4y(t) + y^T(t)P_4NP_9^{−1}N^TP_4y(t) + y^T(t)P_4W[(1−η̃)P_10]^{−1}W^TP_4y(t) + J̄^TP_4J̄
+ f^T(v(t))[P_5 + h²P_6 − 2S_1]f(v(t)) + g^T(u(t))[P_8 + θ²P_9 − 2S_2]g(u(t)) + f^T(v̇(t))[P_7 − S_3]f(v̇(t)) + g^T(u̇(t))[P_10 − S_4]g(u̇(t))
− 2v^T(t)HQf(v(t)) + 2y^T(t)Qf(v(t)) − 2u^T(t)ARg(u(t)) + 2x^T(t)Rg(u(t))
= ζ^T(t)(Ξ_1 + Ξ_2)ζ(t) + Υ^T(t)(Ξ_3 + Ξ_4)Υ(t) + γ*,   (53)

where

ζ(t) = (x^T(t), u^T(t), f^T(v(t)), g^T(u(t)), f^T(v̇(t)))^T,  Υ(t) = (y^T(t), v^T(t), f^T(v(t)), g^T(u(t)), g^T(u̇(t)))^T,

Ξ_1 = [ Θ_3, Θ_2, P_2B, R, 0;  Θ_2^T, Θ_1, 0, −AR, 0;  B^TP_2, 0, P_5 − S_1, 0, 0;  R, −RA^T, 0, P_8 − S_2, 0;  0, 0, 0, 0, P_7 − S_3 ],

Ξ_2 = U_1 [ (1−τ̃)P_5, 0, 0;  0, P_6, 0;  0, 0, (1−σ̃)P_7 ]^{−1} U_1^T,  U_1 = [ P_2C, P_2D, P_2E;  0, 0, 0;  0, 0, 0;  0, 0, 0;  0, 0, 0 ],

Ξ_3 = [ Θ_6, Θ_5, Q, P_4K, 0;  Θ_5^T, Θ_4, −HQ, 0, 0;  Q^T, −QH, h²P_6 − S_1, 0, 0;  K^TP_4, 0, 0, θ²P_9 − S_2, 0;  0, 0, 0, 0, P_10 − S_4 ],

Ξ_4 = U_2 [ (1−δ̃)P_8, 0, 0;  0, P_9, 0;  0, 0, (1−η̃)P_10 ]^{−1} U_2^T,  U_2 = [ P_4M, P_4N, P_4W;  0, 0, 0;  0, 0, 0;  0, 0, 0;  0, 0, 0 ].

Applying the Schur complement to (28) and (29), we have

Ξ_1 + Ξ_2 < diag{ −P_11, −P_12 − R(L^+ − L^−), −P_13, −P_14, −P_19 },  (54)

Ξ_3 + Ξ_4 < diag{ −P_15, −P_16 − Q(W^+ − W^−), −P_17, −P_18, −P_20 }.  (55)

From (53)–(55), we get

dV(t)/dt|_{(6)} ≤ −x^T(t)P_11x(t) − u^T(t)P_12u(t) − u^T(t)R(L^+ − L^−)u(t) − f^T(v(t))(P_13 + P_17)f(v(t)) − g^T(u(t))(P_14 + P_18)g(u(t)) − f^T(v̇(t))P_19f(v̇(t)) − y^T(t)P_15y(t) − v^T(t)P_16v(t) − v^T(t)Q(W^+ − W^−)v(t) − g^T(u̇(t))P_20g(u̇(t)) + γ*
≤ −λ_min(P_11)x^T(t)x(t) − λ_min(P_12)u^T(t)u(t) − λ_min(P_13 + P_17)f^T(v(t))f(v(t)) − λ_min(P_15)y^T(t)y(t) − λ_min(P_16)v^T(t)v(t) − λ_min(P_14 + P_18)g^T(u(t))g(u(t)) − λ_min(P_19)f^T(v̇(t))f(v̇(t))
≤ −λmin (P11 )xT (t )x(t ) − λmin (P12 )uT (t )u(t ) − λmin (P13 + P17 ) f T (v(t )) f (v(t )) − λmin (P15 )yT (t )y(t ) − λmin (P16 )vT (t )v(t ) −λmin (P14 + P18 )gT (u(t ))g(u(t )) − λmin (P19 ) f T (v˙ (t )) f (v˙ (t ))
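Steps (54) and (55) rest on the standard Schur complement lemma: for symmetric blocks with C ≻ 0, the block matrix [[A, B], [Bᵀ, −C]] is negative definite iff A + BC⁻¹Bᵀ is. A numerical illustration with random data (not the paper's matrices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

M = rng.standard_normal((n, n))
C = M @ M.T + np.eye(n)                          # C > 0
B = rng.standard_normal((n, n))
A = -(B @ np.linalg.solve(C, B.T)) - np.eye(n)   # forces A + B C^{-1} B^T = -I < 0

# Schur complement: [[A, B], [B^T, -C]] < 0  <=>  A + B C^{-1} B^T < 0.
block = np.block([[A, B], [B.T, -C]])
eig_block = np.linalg.eigvalsh(block)
eig_schur = np.linalg.eigvalsh(A + B @ np.linalg.solve(C, B.T))
assert eig_block.max() < 0 and eig_schur.max() < 0
print("both negative definite, as the Schur complement predicts")
```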
\[
-\lambda_{\min}(P_{20})g^T(\dot u(t))g(\dot u(t)) - u^T(t)R(L^+-L^-)u(t) - v^T(t)Q(W^+-W^-)v(t) + \gamma^*. \qquad(56)
\]
For \((u^T(t), v^T(t))^T \in R^{n+m}\setminus\Omega_3\), based on Lemma 5, one can get
\[
u^T(t)P_1u(t) + v^T(t)P_3v(t) - \frac{\gamma^*}{\tilde\varepsilon} \le V(t) - \frac{\gamma^*}{\tilde\varepsilon} \le Ke^{-\tilde\varepsilon t}, \quad t \ge 0, \qquad(57)
\]
where
\[
\begin{aligned}
K ={}& V(0) + \tilde\varepsilon\tau e^{\tilde\varepsilon\tau}\lambda_{\max}(P_5)\int_{-\tau}^{0}f^T(v(\vartheta))f(v(\vartheta))\,d\vartheta + \tilde\varepsilon h^3e^{\tilde\varepsilon h}\lambda_{\max}(P_6)\int_{-h}^{0}f^T(v(\vartheta))f(v(\vartheta))\,d\vartheta\\
&+ \tilde\varepsilon\sigma e^{\tilde\varepsilon\sigma}\lambda_{\max}(P_7)\int_{-\sigma}^{0}f^T(\dot v(\vartheta))f(\dot v(\vartheta))\,d\vartheta + \tilde\varepsilon\delta e^{\tilde\varepsilon\delta}\lambda_{\max}(P_8)\int_{-\delta}^{0}g^T(u(\vartheta))g(u(\vartheta))\,d\vartheta\\
&+ \tilde\varepsilon\theta^3e^{\tilde\varepsilon\theta}\lambda_{\max}(P_9)\int_{-\theta}^{0}g^T(u(\vartheta))g(u(\vartheta))\,d\vartheta + \tilde\varepsilon\eta e^{\tilde\varepsilon\eta}\lambda_{\max}(P_{10})\int_{-\eta}^{0}g^T(\dot u(\vartheta))g(\dot u(\vartheta))\,d\vartheta.
\end{aligned}
\]
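Estimate (57) doubles as an explicit settling-time certificate: the transient envelope Ke^(−ε̃t) drops below a tolerance tol once t ≥ ln(K/tol)/ε̃. A small helper (K and tol below are hypothetical; only the rate value echoes the 0.0708 reported later in Example 2):

```python
import math

def settling_time(K: float, eps: float, tol: float) -> float:
    """Smallest t >= 0 with K * exp(-eps * t) <= tol."""
    if K <= tol:
        return 0.0
    return math.log(K / tol) / eps

# Illustrative numbers only (K and tol are assumptions).
t = settling_time(K=100.0, eps=0.0708, tol=1.0)
print(f"transient below tol after t = {t:.1f}")  # about 65.0
```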
Remark 10. If the neutral-type terms are not considered, then system (1) can be written as
\[
\begin{cases}
\dfrac{d^2u_i(t)}{dt^2} = -\alpha_i\dfrac{du_i(t)}{dt} - a_iu_i(t) + F_i(v(t)) + I_i(t),\\[2mm]
\dfrac{d^2v_j(t)}{dt^2} = -\beta_j\dfrac{dv_j(t)}{dt} - h_jv_j(t) + G_j(u(t)) + J_j(t).
\end{cases} \qquad(58)
\]
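For intuition, the reduced model (58) is a pair of coupled damped second-order equations and can be integrated directly. A minimal sketch for an assumed scalar instance (n = m = 1; all coefficients, couplings and inputs below are illustrative choices, not taken from the paper):

```python
import numpy as np

# d^2u/dt^2 = -alpha*du/dt - a*u + F(v) + I(t)
# d^2v/dt^2 = -beta *dv/dt - h*v + G(u) + J(t)
alpha, a, beta, h = 2.0, 8.0, 2.0, 8.0           # assumed scalar coefficients
F, G = np.tanh, np.arctan                        # assumed couplings
I = lambda t: -np.sin(t)                         # assumed external inputs
J = lambda t: np.cos(t)

def rhs(t, s):
    u, du, v, dv = s
    return np.array([du,
                     -alpha * du - a * u + F(v) + I(t),
                     dv,
                     -beta * dv - h * v + G(u) + J(t)])

# Fixed-step RK4 integration from an assumed initial state.
s = np.array([0.06, 0.0, 0.08, 0.0])
dt, T = 0.001, 50.0
for k in range(int(T / dt)):
    t = k * dt
    k1 = rhs(t, s); k2 = rhs(t + dt/2, s + dt/2 * k1)
    k3 = rhs(t + dt/2, s + dt/2 * k2); k4 = rhs(t + dt, s + dt * k3)
    s = s + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

print("state at T:", s)  # stays bounded, consistent with dissipativity
```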
Corollary 1. Under Assumption (H2), with \(\dot\delta(t) \le \tilde\delta < 1\) and \(\dot\tau(t) \le \tilde\tau < 1\), if there exist four positive diagonal matrices \(S_1, S_2, Q, R\) and sixteen positive definite matrices \(P_k\) \((k = 1, 2, \ldots, 16)\) such that
\[
\begin{pmatrix}
\tilde\Xi_3 + P_{11} & \tilde\Xi_2 & P_2B & R & P_2C & P_2D\\
\tilde\Xi_2^T & \tilde\Pi_1 & 0 & -AR & 0 & 0\\
B^TP_2 & 0 & \tilde\Pi_2 & 0 & 0 & 0\\
R & -RA^T & 0 & \tilde\Pi_3 & 0 & 0\\
C^TP_2 & 0 & 0 & 0 & -(1-\tilde\tau)P_5 & 0\\
D^TP_2 & 0 & 0 & 0 & 0 & -P_6
\end{pmatrix} < 0, \qquad(59)
\]
By Definition 1, (57) means that \(\Omega_3\) is a GEAS and system (1) is GEDS.

Remark 7. The authors of [35] studied the global asymptotic stability of NT-MANINN with time-varying delays, where the activation functions for the neutral terms were linear. The activation functions for the neutral terms discussed in [15,22,25] were bounded, while the activation functions for the neutral terms discussed in Theorem 3 here are not only nonlinear but also unbounded. Compared to [22,25,32,46], the Lyapunov functionals constructed here involve not only the states of the system but also the derivatives of the states or the neutral-type activation functions.

Remark 8. In [15,22,25], the authors only discussed bounded activation functions for the neutral terms. Here, we give some sufficient conditions for the GED of (1) by analysing two different types of activation functions. Compared with Theorem 3, which allows general activation functions, the activation functions in Theorems 1 and 2 are bounded, and Theorem 3 removes the boundedness assumption on the activation functions. In addition, Theorem 3 requires that all time-varying delays be bounded, while Theorems 1 and 2 remove the boundedness of the discrete and neutral-type time-varying delays. So, Theorems 1–3 complement and enrich each other.

Remark 9. Recently, many criteria for the global dissipativity of various BAMNNs [12,13,43] and INNs [44–46] have been proposed. The assumption adopted in Theorem 3 here to discuss dissipativity is different from those adopted in [12,13,43–46], where Lipschitz-type activation functions [12,13,43,45] and Lurie-type activation functions [44,46] are considered, respectively. As a result, the dissipativity condition of Theorem 3 here admits wider application. Besides, the approach adopted in this paper to discuss exponential dissipativity differs from those adopted in [12,13,43,44,46], where the boundary and the convergence rate of the discussed system cannot be obtained simultaneously. On the basis of Definition 1, the criteria obtained here give more intuitive information on the boundary and the convergence rate of the discussed states. The dynamical characteristics of various INNs have been extensively investigated [27–35,44–46]. However, little if any literature has discussed the GED of NT-BAMINNs with mixed time-varying delays, which indicates that our results are new.
\[
\begin{pmatrix}
\tilde\Xi_6 + P_{15} & \tilde\Xi_5 & Q & P_4K & P_4M & P_4N\\
\tilde\Xi_5^T & \tilde\Pi_4 & -HQ & 0 & 0 & 0\\
Q^T & -QH & \tilde\Pi_6 & 0 & 0 & 0\\
K^TP_4 & 0 & 0 & \tilde\Pi_7 & 0 & 0\\
M^TP_4 & 0 & 0 & 0 & -(1-\tilde\delta)P_8 & 0\\
N^TP_4 & 0 & 0 & 0 & 0 & -P_9
\end{pmatrix} < 0, \qquad(60)
\]
where
\[
\begin{aligned}
&\tilde\Xi_1 = -2P_1A + 2L^-RA + 2\hat LS_2\hat L, \quad \tilde\Xi_2 = P_1 - P_2G - RL^-, \quad \tilde\Xi_3 = -2P_2Z + P_2,\\
&\tilde\Xi_5 = P_3 - P_4F - Q\bar W, \quad \tilde\Xi_6 = -2P_4\bar Z + P_4,\\
&\tilde\Pi_1 = \tilde\Xi_1 + P_{12} + R(L^+ - L^-), \quad \tilde\Pi_2 = P_{13} + P_5 - S_1, \quad \tilde\Pi_3 = P_{14} + P_8 - S_2,\\
&\tilde\Pi_4 = \Xi_4 + P_{16} + Q(W^+ - \bar W), \quad \tilde\Pi_6 = P_7 + h^2P_6 - S_1, \quad \tilde\Pi_7 = P_{10} + \theta^2P_9 - S_2.
\end{aligned}
\]
Then system (58) is a GEDS, and the set
\[
\Omega_4 = \big\{(u^T(t), v^T(t))^T \in R^{n+m} \;\big|\; u^T(t)P_1u(t) + v^T(t)P_3v(t) \le \bar\gamma/\bar\varepsilon\big\}
\]
is a GEAS of (58), where \(\bar\gamma = I^TP_2I + J^TP_4J\) and
\[
\begin{aligned}
\bar\varepsilon = \sup\big\{\varepsilon \;\big|\; & \varepsilon\lambda_{\max}(P_1) - \lambda_{\min}(P_{12}) \le 0,\; \varepsilon\lambda_{\max}(P_2) - \lambda_{\min}(P_{11}) \le 0,\\
& \varepsilon\lambda_{\max}(P_3) - \lambda_{\min}(P_{16}) \le 0,\; \varepsilon\lambda_{\max}(P_4) - \lambda_{\min}(P_{15}) \le 0,\\
& \varepsilon\tau e^{\varepsilon\tau}\lambda_{\max}(P_5) + \varepsilon h^3e^{\varepsilon h}\lambda_{\max}(P_6) - \lambda_{\min}(P_{13}+P_{17}) \le 0,\\
& \varepsilon\delta e^{\varepsilon\delta}\lambda_{\max}(P_8) + \varepsilon\theta^3e^{\varepsilon\theta}\lambda_{\max}(P_9) - \lambda_{\min}(P_{14}+P_{18}) \le 0\big\}.
\end{aligned}
\]
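In practice, the feasibility of LMIs such as (59) and (60) is verified numerically: once a solver returns candidate matrices, one assembles the symmetric block matrix and tests negative definiteness through its largest eigenvalue. A generic checker sketch (the block assembly itself, with the entries above, is omitted; the matrices used here are toy data):

```python
import numpy as np

def is_negative_definite(M: np.ndarray, tol: float = 0.0) -> bool:
    """Eigenvalue test for symmetric M < 0 (symmetrize to absorb rounding)."""
    Ms = (M + M.T) / 2.0
    return bool(np.linalg.eigvalsh(Ms).max() < -tol)

# Toy illustrations on assumed 2x2 matrices.
assert is_negative_definite(np.array([[-3.0, 1.0], [1.0, -2.0]]))
assert not is_negative_definite(np.array([[1.0, 0.0], [0.0, -2.0]]))
```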
4. Examples

Example 1. Consider NT-BAMINNs (6) with the following parameters for n = m = 2:
\[
A = H = \mathrm{diag}(2, 2),\quad G = F = \mathrm{diag}(-1, -1),\quad Z = \tilde Z = \mathrm{diag}(8, 8),
\]
\[
C = \begin{pmatrix}3 & -1\\ -1 & 2\end{pmatrix},\quad
B = \begin{pmatrix}2 & -2\\ -1 & 1\end{pmatrix},\quad
D = \begin{pmatrix}2 & -1\\ 1 & 4\end{pmatrix},\quad
E = \begin{pmatrix}0.5 & 1\\ -1 & 0.3\end{pmatrix},
\]
\[
K = \begin{pmatrix}1 & -1\\ 0 & 4\end{pmatrix},\quad
M = \begin{pmatrix}2 & -3\\ 2 & 4\end{pmatrix},\quad
N = \begin{pmatrix}1 & 2\\ 1 & 2\end{pmatrix},\quad
W = \begin{pmatrix}0.2 & -1\\ 1 & 1.5\end{pmatrix},
\]
Fig. 1. The phase trajectories of (6) in R3 in Case 1.1.
h(t) = 0.6 sin²(t) + 0.4, θ(t) = 0.7 sin²(t) + 0.3, g_i(u_i(t)) = arctan(u_i(t)), f_j(v_j(t)) = tanh(v_j(t)), I1(t) = −I2(t) = −sin(t), J1(t) = −J2(t) = cos(t). Obviously, f_j(·) and g_i(·) meet Assumption (H1) with f_j = 1, g_i = 1.5708, h = θ = 1, I1 = I2 = J1 = J2 = 1. When μ = 2, choosing k1 = k2 = k̄1 = k̄2 = ξ1 = ξ2 = μ1 = μ2 = ν1 = ν2 = γ1 = γ2 = 2 and ε1 = ε2 = λ1 = λ2 = 1/2, the corresponding values are easily obtained as b1 = b3 = 1.75, b2 = b4 = 2, b = 1.75, N1 = 12.5, N2 = 13.3, N̄1 = 18.5929, N̄2 = 25.3473, = 165.1653, d1 = d2 = 7, d3 = d4 = d = 1, = 69.7402. These values meet all the conditions of Theorems 1 and 2. According to Theorems 1 and 2, system (6) is a GEDS. The sets
\[
\Omega_1 = \big\{(u^T(t), v^T(t))^T \in R^{2+2} \;\big|\; u_1^2(t) + u_2^2(t) + v_1^2(t) + v_2^2(t) \le 47.1901\big\}
\]
and
\[
\Omega_2 = \big\{(u^T(t), v^T(t))^T \in R^{2+2} \;\big|\; |u_1(t)| + |u_2(t)| + |v_1(t)| + |v_2(t)| \le 69.7402\big\}
\]
are two GEASs of (6) with the parameters of Example 1.
Let the initial conditions be ϕ^u = (0.06, −0.01), ϕ^v = (0.08, −0.04), φ^u = (0.02, −0.03), φ^v = (0.05, −0.03).
Case 1.1. Assume that τ(t) = 0.5e^t, δ(t) = 0.5 sin²(t) + e^{2t}, σ(t) = 4 + e^{3t} − cos(t), η(t) = 2 + sin(t) + e^{0.5t}, which are unbounded. Fig. 1 displays the phase trajectories of (6) with the parameters of Example 1, with bounded activation functions and unbounded time-varying delays τ(t), δ(t), σ(t), η(t), in the phase space R³. Case 1.2. Taking δ(t) = 0.5 sin²(t), τ(t) = 0.5 sin(t), σ(t) = −cos(t) + 4, η(t) = 2 + sin(t), which are bounded. Fig. 2 displays the phase trajectories of (6) with the parameters of Example 1, with both the activation functions and the time-varying delays τ(t), δ(t), σ(t), η(t) bounded, in the phase space R³. So, Figs. 1 and 2 testify to the validity of the results of Theorems 1 and 2 and Remark 8. Example 2. We still discuss (6) with the parameters of Example 1, but the time-varying delays h(t), τ(t), δ(t), σ(t), θ(t) and η(t) are reassigned as follows:
h(t) = 0.3 sin²(t) + 0.3, τ(t) = 0.5 sin²(t), δ(t) = 0.2 sin²(t), σ(t) = 0.4 sin²(t) + 0.4, θ(t) = 2 sin²(t), η(t) = 1 + 0.6 sin²(t). Obviously, τ(t), h(t), δ(t), θ(t), σ(t) and η(t) satisfy Assumption (H3) with τ = τ̃ = 0.5, h = η̃ = 0.6, θ = 2, σ = 0.8, δ = δ̃ = 0.2, η = 1.6, σ̃ = 0.4.
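The stated bounds on these delays can be confirmed by direct computation; for instance, τ(t) = 0.5 sin²(t) gives τ̇(t) = 0.5 sin(2t), so both sup τ(t) and sup τ̇(t) equal 0.5, matching τ = τ̃ = 0.5. A dense-sampling check:

```python
import numpy as np

t = np.linspace(0.0, 20.0, 200001)
tau = 0.5 * np.sin(t) ** 2            # tau(t) from Example 2
dtau = 0.5 * np.sin(2 * t)            # d/dt [0.5 sin^2 t] = 0.5 sin 2t

assert abs(tau.max() - 0.5) < 1e-6    # tau = 0.5
assert abs(dtau.max() - 0.5) < 1e-6   # tau~ = 0.5
print("sup tau =", tau.max(), "sup dtau =", dtau.max())
```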
Fig. 2. The phase trajectories of (6) in R3 in Case 1.2.
Choosing k1 = k2 = k̃1 = k̃2 = 2, the coefficient matrices A, G, Z, H, F, Z̃, B, D, C, E, K, N, M, W of the model in Example 2 are the same as defined in Example 1.
Case 2.1. When f_j(v_j(t)) = tanh(v_j(t)) and g_i(u_i(t)) = arctan(u_i(t)), which are bounded. Obviously, f_j(·) and g_i(·) satisfy Assumption (H2) with l1⁻ = l2⁻ = w1⁻ = w2⁻ = 0 and l1⁺ = l2⁺ = w1⁺ = w2⁺ = 1. These values meet all the conditions of Theorem 3. Then, solving (28) and (29) by the Matlab LMI Control Toolbox yields the positive definite matrices P1, P2, ..., P20 and

Q = diag(1.4838, 1.4838), R = diag(1.5621, 1.6683),
S1 = diag(26.0919, 18.0390), S2 = diag(22.9932, 32.9808),
S3 = diag(16.9359, 10.6582), S4 = diag(13.9698, 18.7502).

Let the initial conditions be ϕ^u = (−0.02, 0.03), ϕ^v = (0.04, −0.05), φ^u = (−0.04, 0.06), φ^v = (0.06, −0.03). In particular,

λmax(P1) = 38.7426, λmax(P2) = 2.6759, λmax(P3) = 33.3902, λmax(P4) = 2.4055,
Fig. 3. The phase trajectories of (6) in R3 in Case 2.1.
λmax(P5) = 17.7321, λmax(P6) = 24.5753, λmax(P7) = 9.6263, λmax(P8) = 21.4751, λmax(P9) = 4.8538, λmax(P10) = 11.6627, λmin(P11) = 0.8487, λmin(P12) = 2.9010, λmin(P13 + P17) = 5.5809, λmin(P14 + P18) = 6.1919, λmin(P15) = 0.5909, λmin(P16) = 2.3635, λmin(P19) = 2.5350, λmin(P20) = 2.8368, γ* = 8.9199, ε̃ = ε* = 0.0708.

From Theorem 3, system (6) is a GEDS and the set
\[
\Omega_3 = \big\{(u^T(t), v^T(t))^T \in R^{2+2} \;\big|\; u^T(t)P_1u(t) + v^T(t)P_3v(t) \le 125.9873\big\}
\]
is a GEAS. Fig. 3 shows the phase trajectories of (6) with the parameters of Example 2, with bounded activation functions and all bounded time-varying delays, in three-dimensional phase space in Case 2.1.

Case 2.2. Choosing f_j(v_j(t)) = 0.5v_j(t) + 0.5 tanh(v_j(t)) and g_i(u_i(t)) = 0.5u_i(t) + 0.5 arctan(u_i(t)), which are unbounded. Obviously, f_j(·) and g_i(·) satisfy Assumption (H2) with l1⁻ = l2⁻ = w1⁻ = w2⁻ = 0.5 and l1⁺ = l2⁺ = w1⁺ = w2⁺ = 1. These values meet all the conditions of Theorem 3. Then, solving (28) and (29) by the Matlab LMI Control Toolbox yields the positive definite matrices P1, P2, ..., P20 and

Q = diag(1.5596, 0.6403), R = diag(1.6669, 1.8183),
S1 = diag(26.8637, 18.5770), S2 = diag(23.7970, 34.0410),
S3 = diag(17.3036, 10.9477), S4 = diag(14.2591, 19.1227).
Fig. 4. The phase trajectories of (6) in R3 in Case 2.2.
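As a spot check of the LMI solutions, the extreme eigenvalues reported for Case 2.2, e.g. λmin(P19) = 2.6085 and λmin(P20) = 2.8942, can be reproduced from the corresponding 2×2 entries:

```python
import numpy as np

# P19 and P20 as reported for Case 2.2 (P20 is symmetrized to absorb
# the last-digit rounding in the printed entries).
P19 = np.array([[3.7637, -0.2054], [-0.2054, 2.6450]])
P20 = np.array([[3.2374, -0.5434], [-0.5435, 3.7550]])
P20 = (P20 + P20.T) / 2

lmin19 = np.linalg.eigvalsh(P19).min()
lmin20 = np.linalg.eigvalsh(P20).min()
assert abs(lmin19 - 2.6085) < 1e-3   # matches lambda_min(P19)
assert abs(lmin20 - 2.8942) < 1e-3   # matches lambda_min(P20)
print(lmin19, lmin20)
```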
In particular,
\[
P_{19} = \begin{pmatrix}3.7637 & -0.2054\\ -0.2054 & 2.6450\end{pmatrix},\quad
P_{20} = \begin{pmatrix}3.2374 & -0.5434\\ -0.5435 & 3.7550\end{pmatrix},
\]
and

λmax(P1) = 40.4999, λmax(P2) = 2.7409, λmax(P3) = 34.8560, λmax(P4) = 2.4640, λmax(P5) = 18.2062, λmax(P6) = 25.0042, λmax(P7) = 9.8168, λmax(P8) = 22.0516, λmax(P9) = 4.9990, λmax(P10) = 11.8998, λmin(P11) = 0.8853, λmin(P12) = 3.0302, λmin(P13 + P17) = 5.7580, λmin(P14 + P18) = 6.4335, λmin(P15) = 0.6197, λmin(P16) = 2.4645, λmin(P19) = 2.6085, λmin(P20) = 2.8942, γ* = 9.1524, ε̃ = ε* = 0.0707.

Based on Theorem 3, system (6) is a GEDS and the set
\[
\tilde\Omega_3 = \big\{(u^T(t), v^T(t))^T \in R^{2+2} \;\big|\; u^T(t)P_1u(t) + v^T(t)P_3v(t) \le 129.4540\big\}
\]
is a GEAS. Fig. 4 shows the phase trajectories of (6) with the parameters of Example 2, with unbounded activation functions and all bounded time-varying delays, in three-dimensional phase space in Case 2.2. Hence, Figs. 3 and 4 indicate that the conclusions of Theorem 3 and Remark 8 are correct.

5. Conclusions

By employing the Lyapunov method combined with some inequality techniques, including a new differential inequality, sufficient conditions in terms of algebraic inequalities and LMIs are obtained to ensure the GED of NT-BAMINNs with mixed time-varying delays. Concrete estimates of the GEASs have been established simultaneously. Further, the method in this paper can also be applied to analyze the stability and synchronization of some other discontinuous NT-BAMINNs, such as memristive NT-BAMINNs. In future research, we will utilize the non-reduced order method [27] to study the boundedness and convergence of NT-BAMINNs and memristive NT-BAMINNs. Finally, the viability and effectiveness of the proposed results are demonstrated via some illustrative examples with simulations.

Declaration of Competing Interest

The authors declare that there is no conflict of interest.

Acknowledgment
The authors are grateful for the support of the National Natural Science Foundation of China under Grant 11601268. References [1] B. Kosko, Adaptive bi-directional associative memories, Appl. Opt. 26 (1987) 4947–4960. [2] B. Kosko, Bi-directional associative memories, IEEE Trans. Syst. Man Cybern. 18 (1988) 49–60. [3] L.S. Wang, X.H. Ding, M.Z. Li, Global asymptotic stability of a class of generalized BAM neural networks with reaction-diffusion terms and mixed time delays, Neurocomputing 321 (2018) 251–265.
Please cite this article as: L. Duan, J. Jian and B. Wang, Global exponential dissipativity of neutral-type BAM inertial neural networks with mixed time-varying delays, Neurocomputing, https://doi.org/10.1016/j.neucom.2019.10.082
JID: NEUCOM 14
ARTICLE IN PRESS
[m5G;November 6, 2019;14:4]
L. Duan, J. Jian and B. Wang / Neurocomputing xxx (xxxx) xxx
[4] M.S. Ali, K. Meenakshi, Y.H. Joo, Finite-time h∞ filtering for discrete-time Markovian jump BAM neural networks with time-varying delays, Int. J. Control Automat. Syst. 16 (2018) 1971–1980. [5] B.L. Lu, H.J. Jiang, C. Hu, A. Abdurahman, Pinning impulsive stabilization for BAM reaction-diffusion neural networks with mixed delays, J. Frankl. Inst. 355 (2018) 8802–8829. [6] Z.Y. Zhang, X.P. Liu, R.N. Guo, C. Lin, Finite-time stability for delayed complex– valued BAM neural networks, Neural Process. Lett. 48 (2018) 179–193. [7] L.L. Li, J.G. Jian, Exponential p-convergence analysis for stochastic BAM neural networks with time-varying and infinite distributed delays, Appl. Math. Comput. 266 (2015) 860–873. [8] C. Sowmiya, R. Raja, J.D. Cao, X. Li, G. Rajchakit, Discrete-time stochastic impulsive BAM neural networks with leakage and mixed time delays: an exponential stability problem, J. Frankl. Inst. 355 (2018) 4404–4435. [9] Z.Q. Zhang, A.L. Li, L. Yang, Global asymptotic periodic synchronization for delayed complex-valued BAM neural networks via vector-valued inequality techniques, Neural Process. Lett. 48 (2018) 1019–1041. [10] M. Sader, A. Abdurahman, H.J. Jiang, General decay synchronization of delayed BAM neural networks via nonlinear feedback control, Appl. Math. Comput. 337 (2018) 302–314. [11] B. Ammar, H. Brahmi, F. Chrif, On the weighted pseudo-almost periodic solution for BAM networks with delays, Neural Process. Lett. 48 (2018) 849–862. [12] Z.W. Tu, L.W. Wang, Z.W. Zha, J.G. Jian, Global dissipativity of a class of BAM neural networks with time-varying and unbound delays, Commun. Nonlinear Sci. Numer. Simulat. 18 (2013) 2562–2570. [13] L.S. Wang, L. Zhang, X.H. Ding, Global dissipativity of a class of BAM neural networks with both time-varying and continuously distributed delays, Neurocomputing 152 (2015) 250–260. [14] X.M. Zhang, Q.L. Han, A new stability criterion for a partial element equivalent circuit model of neutral type, IEEE Trans. 
Circuits Syst. 56 (2009) 798–802. [15] Q. Luo, Z.G. Zeng, X.X. Liao, Global exponential stability in lagrange sense for neutral type recurrent neural networks, Neurocomputing 74 (2011) 638–645. [16] Z.Y. Li, J. Lam, Y. Wang, Stability analysis of linear stochastic neutral-type time-delay systems with two delays, Automatica 91 (2018) 179–189. [17] T. Wu, L.L. Xiong, J.D. Cao, Z.X. Liu, H.Y. Zhang, New stability and stabilization conditions for stochastic neural networks of neutral type with markovian jumping parameters, J. Frankl. Inst. 355 (2018) 8462–8483. [18] S.H. Long, Y.L. Wu, S.M. Zhong, D. Zhang, Stability analysis for a class of neutral type singular systems with time-varying delay, Appl. Math. Comput. 339 (2018) 113–131. [19] C. Maharajan, R. Raja, J. Cao, G. Rajchakit, A. Alsaedi, Novel results on passivity and exponential passivity for multiple discrete delayed neutral-type neural networks with leakage and distributed time-delays, Chaos Solitons Fractals 115 (2018) 268–282. [20] I.V. Alexandrova, New robustness bounds for neutral type delay systems via functionals with prescribed derivative, Appl. Math. Lett. 76 (2018) 34–39. [21] S. Ma, Y.M. Kang, Exponential synchronization of delayed neutral-type neural networks with Lvy noise under non-lipschitz condition, Commun. Nonlinear Sci. Numer. Simulat. 57 (2018) 372–387. [22] J.G. Jian, B.X. Wang, Global lagrange stability for neutral-type cohen-grossberg BAM neural networks with mixed time-varying delays, Math. Comput. Simul. 116 (2015) 1–25. [23] M.S. Ali, R. Vadivel, O.M. Kwon, Decentralized event-triggered stability analysis of neutral-type BAM neural networks with Markovian jump parameters and mixed time varying delays, Int. J. Control Automat. Syst. 16 (2018) 983–993. [24] Z.H. Zhao, J.G. Jian, B.X. Wang, Global attracting sets for neutral-type BAM neural networks with time-varying and infinite distributed delays, Nonlinear Anal.HS 15 (2015) 63–73. [25] J.G. Jian, B.X. 
Wang, Stability analysis in lagrange sense for a class of BAM neural networks of neutral type with multiple time-varying delays, Neurocomputing 149 (2015) 930–939. [26] C.G. Li, G.R. Chen, X.F. Liao, J.B. Yu, Hopf bifurcation and chaos in a single inertial neuron model with time delay, Eur. Phys. J. B 41 (2004) 337–343. [27] X.Y. Li, X.T. Li, C. Hu, Some new results on stability and synchronization for delayed inertial neural networks based on non-reduced order method, Neural Netw. 96 (2017) 91–100. [28] P. Wan, J.G. Jian, Global convergence analysis of impulsive inertial neural networks with time-varying delays, Neurocomputing 245 (2017) 68–76. [29] Q. Tang, J.G. Jian, Matrix measure based exponential stabilization for complex– valued inertial neural networks with time-varying delays using impulsive control, Neurocomputing 273 (2018) 251–259. [30] J.D. Cao, Y. Wan, Matrix measure strategies for stability and synchronization of inertial BAM neural network with time delays, Neural Netw. 53 (2014) 165–172. [31] W. Zhang, C.D. Li, T.W. Huang, J. Tan, Exponential stability of inertial BAM neural networks with time-varying delay via periodically intermittent control, Neural Comput. Appl. 26 (2015) 1781–1787. [32] Y.Q. Ke, C.F. Miao, Stability and existence of periodic solutions in inertial BAM neural networks with time delay, Neural Comput. Appl. 23 (2013) 1089–1099. [33] Z.Q. Zhang, Z.Y. Quan, Global exponential stability via inequality technique for inertial BAM neural networks with time delays, Neurocomputing 151 (2015) 1316–1326.
[34] C. Maharajan, R. Raja, J.D. Cao, G. Rajchakit, Novel global robust exponential stability criterion for uncertain inertial-type BAM neural networks with discrete and distributed time-varying delays via lagrange sense, J. Frankl. Inst. 355 (2018) 4727–4754. [35] F.Y. Zhou, H.X. Yao, Stability analysis for neutral-type inertial BAM neural networks with time-varying delays, Nonlinear Dyn. 92 (2018) 1583–1598. [36] Z.Y. Guo, J. Wang, Z. Yan, Global exponential dissipativity and stabilization of memristor-based recurrent neural networks with time-varying delays, Neural Netw. 48 (2013) 158–172. [37] X.D. Li, R. Rakkiyappan, G. Velmurugan, Dissipativity analysis of memristor-based complex-valued neural networks with time-varying delays, Inf. Sci. 294 (2015) 645–665. [38] J. Wang, J.H. Park, H. Shen, J. Wang, Delay-dependent robust dissipativity conditions for delayed neural networks with random uncertainties, Appl. Math. Comput. 221 (2013) 710–719. [39] Q. Ma, D.Q. Ding, X.H. Ding, Mean-square dissipativity of several numerical methods for stochastic differential equations with jumps, Appl. Numer. Math. 82 (2014) 44–50. [40] Y.N. Pan, Q. Zhou, Q. Lu, C.W. Wu, New dissipativity condition of stochastic fuzzy neural networks with discrete and distributed time-varying delays, Neurocomputing 162 (2015) 267–272. [41] C. Zhao, S.M. Zhong, X.J. Zhang, K.B. Shi, Novel results on dissipativity analysis for generalized delayed neural networks, Neurocomputing 332 (2019) 328–338. [42] R. Samidurai, R. Sriraman, Robust dissipativity analysis for uncertain neural networks with additive time-varying delays and general activation functions, Math. Comput. Simul. 155 (2019) 201–216. [43] J. Liu, J.G. Jian, Global dissipativity of a class of quaternion-valued BAM neural networks with time delay, Neurocomputing 349 (2019) 123–132. [44] Z.W. Tu, J.D. Cao, A. Alsaedi, F. Alsaadi, Global dissipativity of memristor-based neutral type inertial neural networks, Neural Netw. 88 (2017) 125–133. 
[45] Z.W. Tu, J.D. Cao, T. Hayat, Matrix measure based dissipativity analysis for inertial delayed uncertain neural networks, Neural Netw. 75 (2016) 47–55. [46] G.D. Zhang, Z.G. Zeng, J.H. Hu, New results on global exponential dissipativity analysis of memristive inertial neural networks with distributed time-varying delays, Neural Netw. 97 (2018) 183–191. Liyan Duan received the B.S degree in College of Mathematics and Statistics, Tianshui Normal University, Tianshui, China, in 2016. She is currently a postgraduate student in College of Science, China Three Gorges University, Yichang, China. Her current research interests include stability of neural networks and dynamic behavior analysis of fuzzy systems.
Jigui Jian received the B.S. degree in Mathematics from China Southwest University, Chongqing, China, in 1987, the M.S. degree in Mathematics from Huazhong Normal University in 1994, and the Ph.D. degree from the Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan, China, in 2005. He is currently a professor and Ph.D. advisor in the Department of Mathematics, China Three Gorges University. His present research interests include stability of dynamical systems, fractional-order systems, nonlinear control systems and dynamic behavior of neural networks.
Baoxian Wang received her B.S. and M.S. degrees in Mathematics from China Three Gorges University, Yichang, China, in 2005 and 2008, respectively, and her Ph.D. degree from the Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan, China, in 2012. She is currently an associate professor at China Three Gorges University. Her current research interests include stability theory of neural networks, complex dynamical networks, impulsive and hybrid control systems, and chaos control and synchronization.
Please cite this article as: L. Duan, J. Jian and B. Wang, Global exponential dissipativity of neutral-type BAM inertial neural networks with mixed time-varying delays, Neurocomputing, https://doi.org/10.1016/j.neucom.2019.10.082