Global exponential stability analysis of discrete-time BAM neural networks with delays: a mathematical induction approach

Er-yong Cong^{a,c}, Xiao Han^{a,*}, Xian Zhang^{b,*}

a School of Mathematics, Jilin University, 2699 Qianjin Street, Changchun 130012, Jilin, People's Republic of China
b School of Mathematical Science, Heilongjiang University, Harbin 150080, People's Republic of China
c Department of Mathematics, Harbin University, Harbin 150086, Heilongjiang, People's Republic of China

* Corresponding authors. Email addresses: [email protected] (Er-yong Cong), [email protected] (Xiao Han), [email protected] (Xian Zhang)

Abstract. The problem of global exponential stability analysis for discrete-time bidirectional associative memory (BAM) neural networks with time-varying delays is investigated. By using the mathematical induction method, a novel exponential stability criterion in the form of linear matrix inequalities is first established. Then stability criteria depending only on the system parameters are derived, which can be easily checked by standard toolbox software (e.g., MATLAB). The proposed approach is directly based on the definition of global exponential stability and does not involve the construction of any Lyapunov-Krasovskii functional or auxiliary function. For a class of special cases, it is theoretically proven that the proposed approach yields less conservative stability criteria than those in the literature. Moreover, several numerical examples are provided to demonstrate the effectiveness of the proposed method.

Keywords: Discrete-time delayed BAM neural network; Global exponential stability; Mathematical induction approach; Spectral abscissa; Spectral radius.
1. Introduction

In recent years, the dynamics of neural networks have been extensively studied because of their applications in pattern recognition, artificial intelligence, optimization, etc. [1-3]. Bidirectional associative memory (BAM) neural networks were first introduced by Kosko [4, 5]; they generalize the single-layer autoassociative Hebbian correlator to a two-layer pattern-matched heteroassociative circuit, and can recall a complete and clear pattern stored in memory from an incomplete or fuzzy input pattern. BAM neural networks therefore possess better abilities of information memory and information association.

In hardware realizations of neural networks, time delays are inevitably introduced because of the finite switching speed of amplifiers and the communication time among neurons. It is known that time delays are a source of undesired dynamics such as oscillation and instability. Therefore, it is particularly important to determine sufficient conditions for the asymptotic or exponential stability of delayed BAM neural networks [6-16]. Here we briefly mention the works most closely related to this paper. Using M-matrix methods, Liang and Cao [7] studied the exponential stability of continuous-time BAM neural networks with constant delays and their discrete analogues. Liang et al. [8] obtained delay-dependent and delay-independent stability criteria for discrete-time BAM neural networks with variable time delays. However, the delay-dependent stability criteria above are based on certain constraints on the delays. By constructing an appropriate Lyapunov-Krasovskii functional and
by using the linear matrix inequality (LMI) technique, Liu et al. [10] further investigated the global exponential stability of discrete-time BAM neural networks without the constraints on delays required in [8]. Gao and Cui [14] generalized the results in [8, 10] by establishing exponential stability criteria for discrete-time BAM neural networks with interval delays. It should be noted that in [11], in order to obtain a less conservative stability criterion, a more general Lyapunov-Krasovskii functional than the traditional ones in [8, 10, 14] is constructed, and some slack matrices are introduced. Raja and Anthoni [15] dealt with the problem of global exponential stability analysis for a class of discrete-time BAM neural networks by employing the LMI approach. The problem of global exponential stability analysis for discrete-time BAM neural networks is further investigated in [16] by constructing a modified Lyapunov-Krasovskii functional and by employing reciprocally convex and free-weighting-matrix techniques; the exponential stability criteria derived there can provide a larger admissible maximal delay bound than some existing results.

As the above discussion indicates, constructing an auxiliary function or Lyapunov-Krasovskii functional is difficult and requires considerable skill, so it is worthwhile to find a new approach to analyzing the stability of delayed discrete-time BAM neural networks. The problem of global exponential stability analysis for discrete-time BAM neural networks is therefore further investigated in this paper. By using the mathematical induction method, we derive several novel global exponential stability criteria in a simple form, which can be easily verified via standard tool software (e.g., MATLAB). It will be theoretically proven that the obtained global exponential stability
criteria are less conservative than those in [6]. In addition, several numerical examples are provided to illustrate the effectiveness of the theoretical results. On the one hand, compared with the existing results in [6, Theorems 3.1 and 3.2], the proposed approach does not require constructing any Lyapunov-Krasovskii functional or auxiliary function. On the other hand, compared with the LMI-based stability criteria proposed in [8, 9, 11, 13-16], the obtained stability criteria possess a simpler form, which reduces the computational complexity.

The organization of this paper is as follows. The problem considered in this paper is formulated in Section 2. The novel global exponential stability criteria for discrete-time BAM neural networks with time-varying delays are presented in Section 3. In Section 4, we theoretically compare the global exponential stability criteria obtained in this paper with those in the literature. Finally, numerical examples are provided in Section 5 to illustrate the effectiveness of the proposed method.

Notation. Suppose Z, R and C are the sets of all integers, real numbers and complex numbers, respectively. Let Z[a, b] be the subset of Z consisting of all integers between a and b, and let Z[a, ∞) be the limit case of Z[a, b] as b → ∞. For given positive integers p and q, let R^{p×q} denote the set of all p × q matrices over R, and set R^p = R^{p×1}. A matrix M ∈ R^{n×n} is called a Metzler matrix if all of its off-diagonal elements are nonnegative. Let I_n be the identity matrix in R^{n×n} and λ(M) = {z ∈ C : det(zI_n − M) = 0} the spectrum of M. The spectral abscissa of M is defined by s(M) := max{Re λ : λ ∈ λ(M)}, and the spectral radius of M by ρ(M) := max{|λ| : λ ∈ λ(M)}.
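Both quantities are read directly off the spectrum; for instance, in NumPy (the matrix M here is an arbitrary illustration, not taken from the paper):

    import numpy as np

    M = np.array([[-0.9, 0.2],
                  [0.1, -0.8]])
    lam = np.linalg.eigvals(M)      # spectrum of M
    s_M = max(lam.real)             # spectral abscissa s(M)
    rho_M = max(abs(lam))           # spectral radius rho(M)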
For A = [a_{ij}] ∈ R^{p×q} and B = [b_{ij}] ∈ R^{p×q}, the matrix [a_{ij} b_{ij}], denoted by A ∘ B, refers to the Hadamard product of A and B, and the symbol A ⪰ B (or B ⪯ A) means that a_{ij} ≥ b_{ij} for all i ∈ Z[1, p] and j ∈ Z[1, q]; in particular, if a_{ij} > b_{ij} for all i ∈ Z[1, p] and j ∈ Z[1, q], then we write A ≻ B (or B ≺ A) instead of A ⪰ B (or B ⪯ A). Let |A| = [|a_{ij}|]; then |XY| ⪯ |X||Y| for all X ∈ R^{p×q} and Y ∈ R^{q×r}. Denote by R_+^{p×q} and R_{++}^{p×q} the sets of all p × q nonnegative and positive matrices, respectively. Similar notations are adopted for vectors.

2. Problem formulation

Consider the following discrete-time BAM neural network with time-varying delays, which can be described by [15]:

    x_i(k+1) = a_i x_i(k) + ∑_{j=1}^{n} c_{ij} f_j(y_j(k)) + ∑_{j=1}^{n} e_{ij} g_j(y_j(k − h_{ij}(k))) + J_i,   i ∈ Z[1, n], k ∈ Z[0, ∞),   (1a)

    y_j(k+1) = b_j y_j(k) + ∑_{i=1}^{n} d_{ji} f̃_i(x_i(k)) + ∑_{i=1}^{n} w_{ji} g̃_i(x_i(k − τ_{ji}(k))) + J̃_j,   j ∈ Z[1, n], k ∈ Z[0, ∞),   (1b)
where x_i(k) and y_j(k) are the states of the ith neuron in the neural field F_X and the jth neuron in the neural field F_Y at time k, respectively; a_i, b_j ∈ (−1, 1) describe the stability of the internal neuron processes on the X-layer and the Y-layer, respectively; c_{ij}, e_{ij}, d_{ji} and w_{ji} are constants denoting the synaptic connection weights; f_j(·) and g_j(·) denote the activation functions of the jth neuron in the neural field F_Y, while f̃_i(·) and g̃_i(·) denote the activation functions of the ith neuron in the neural field F_X; J_i and J̃_j denote the external constant inputs acting on the ith and jth neurons, respectively; and h_{ij}(k) and τ_{ji}(k) denote the time-varying delays.

Remark 1. [6] The network (1) can store paired patterns or memories and search for them in the forward and backward directions. In addition, when the number of neurons is sufficiently large, the neurons cannot instantaneously process inputs and transmit outputs to the receiving neurons, so it is necessary to introduce time delays into the handling and transmitting processes.

For convenience, we make the following assumptions.

Assumption 1. The activation functions f_j, g_j, f̃_i and g̃_i (i, j ∈ Z[1, n]) satisfy

    f_j(0) = 0,  0 ≤ (f_j(s_1) − f_j(s_2))/(s_1 − s_2) ≤ γ_j^{(1)};   g_j(0) = 0,  0 ≤ (g_j(s_1) − g_j(s_2))/(s_1 − s_2) ≤ γ_j^{(2)};

    f̃_i(0) = 0,  0 ≤ (f̃_i(s_1) − f̃_i(s_2))/(s_1 − s_2) ≤ γ̃_i^{(1)};   g̃_i(0) = 0,  0 ≤ (g̃_i(s_1) − g̃_i(s_2))/(s_1 − s_2) ≤ γ̃_i^{(2)},

for all s_1, s_2 ∈ R with s_1 ≠ s_2, where γ_j^{(1)} > 0, γ_j^{(2)} > 0, γ̃_i^{(1)} > 0 and γ̃_i^{(2)} > 0 are known constants.

Assumption 2. The activation functions f_j, g_j, f̃_i and g̃_i (i, j ∈ Z[1, n]) satisfy

    |f_j(s)| ≤ S_j^{(1)},  |g_j(s)| ≤ S_j^{(2)},  |f̃_i(s)| ≤ S̃_i^{(1)},  |g̃_i(s)| ≤ S̃_i^{(2)},  ∀s ∈ R,

where S_j^{(1)} > 0, S_j^{(2)} > 0, S̃_i^{(1)} > 0 and S̃_i^{(2)} > 0 are known constants.
Assumption 3. The time-varying delays h_{ij}(k) and τ_{ji}(k) (i, j ∈ Z[1, n]) satisfy

    0 ≤ h_{ij}(k) ≤ h̄_{ij}  and  0 ≤ τ_{ji}(k) ≤ τ̄_{ji},  k ∈ Z[0, ∞),

where h̄_{ij} and τ̄_{ji} are known positive integers. Set

    σ = max{ max_{1≤i,j≤n} h̄_{ij},  max_{1≤i,j≤n} τ̄_{ji} }.

A pair (x*, y*) ∈ R^n × R^n is said to be an equilibrium of the BAM neural network (1) if

    x_i* = a_i x_i* + ∑_{j=1}^{n} c_{ij} f_j(y_j*) + ∑_{j=1}^{n} e_{ij} g_j(y_j*) + J_i,   i ∈ Z[1, n],   (2a)

    y_j* = b_j y_j* + ∑_{i=1}^{n} d_{ji} f̃_i(x_i*) + ∑_{i=1}^{n} w_{ji} g̃_i(x_i*) + J̃_j,   j ∈ Z[1, n],   (2b)

where x_i* and y_i* are the ith components of x* and y*, respectively. Assumption 2 can guarantee the existence of an equilibrium. Obviously, system (2) can be written in the following compact matrix form:

    x* = Ax* + Cf(y*) + Eg(y*) + J,   (3a)

    y* = By* + Df̃(x*) + Wg̃(x*) + J̃,   (3b)

where C = [c_{ij}]_{n×n}, D = [d_{ji}]_{n×n}, E = [e_{ij}]_{n×n}, W = [w_{ji}]_{n×n}, and

    A = diag(a_1, a_2, ..., a_n),  B = diag(b_1, b_2, ..., b_n),
    J = col(J_1, J_2, ..., J_n),  J̃ = col(J̃_1, J̃_2, ..., J̃_n),
    f(y*) = col(f_1(y_1*), f_2(y_2*), ..., f_n(y_n*)),  g(y*) = col(g_1(y_1*), g_2(y_2*), ..., g_n(y_n*)),
    f̃(x*) = col(f̃_1(x_1*), f̃_2(x_2*), ..., f̃_n(x_n*)),  g̃(x*) = col(g̃_1(x_1*), g̃_2(x_2*), ..., g̃_n(x_n*)).

Set u(k) = x(k) − x* and v(k) = y(k) − y*, k ∈ Z[0, ∞). Due to (1) and (3), we have

    u_i(k+1) = a_i u_i(k) + ∑_{j=1}^{n} c_{ij} f_j*(v_j(k)) + ∑_{j=1}^{n} e_{ij} g_j*(v_j(k − h_{ij}(k))),   i ∈ Z[1, n], k ∈ Z[0, ∞),   (4a)

    v_j(k+1) = b_j v_j(k) + ∑_{i=1}^{n} d_{ji} f̃_i*(u_i(k)) + ∑_{i=1}^{n} w_{ji} g̃_i*(u_i(k − τ_{ji}(k))),   j ∈ Z[1, n], k ∈ Z[0, ∞),   (4b)

where f_j*(·) = f_j(· + y_j*) − f_j(y_j*), g_j*(·) = g_j(· + y_j*) − g_j(y_j*), f̃_i*(·) = f̃_i(· + x_i*) − f̃_i(x_i*) and g̃_i*(·) = g̃_i(· + x_i*) − g̃_i(x_i*). Clearly, (0, 0) is an equilibrium of system (4), and f_j*, g_j*, f̃_i* and g̃_i* satisfy the following inequalities:

    |f_j*(s)| ≤ γ_j^{(1)}|s|,  |g_j*(s)| ≤ γ_j^{(2)}|s|,  |f̃_i*(s)| ≤ γ̃_i^{(1)}|s|,  |g̃_i*(s)| ≤ γ̃_i^{(2)}|s|,  ∀s ∈ R, i, j ∈ Z[1, n].   (5)
Let C(Z[−σ, 0], R^n) refer to the linear space over R consisting of all functions φ: Z[−σ, 0] → R^n satisfying sup_{s∈Z[−σ,0]} ‖φ(s)‖ < ∞. Define the norm ‖·‖ on R^n × R^n by ‖(a, b)‖ = (‖a‖_2^2 + ‖b‖_2^2)^{1/2} for any a, b ∈ R^n, and the norm ‖(·,·)‖_σ on C(Z[−σ, 0], R^n) × C(Z[−σ, 0], R^n) by

    ‖(ψ, ϕ)‖_σ = max{ sup_{s∈Z[−σ,0]} ‖ψ(s)‖_2,  sup_{s∈Z[−σ,0]} ‖ϕ(s)‖_2 }

for any ψ, ϕ ∈ C(Z[−σ, 0], R^n), where ‖·‖_2 is the Euclidean norm on R^n.

Definition 1. [7] The zero equilibrium of system (4) is said to be globally exponentially stable if there exist scalars K > 0 and γ > 0 such that every solution (u(k), v(k)) of (4) starting from (ψ, ϕ) ∈ C(Z[−σ, 0], R^n) × C(Z[−σ, 0], R^n) satisfies

    ‖(u(k), v(k))‖ ≤ K e^{−γk} ‖(ψ, ϕ)‖_σ,  ∀k ∈ Z[0, ∞),

where u(k) = col(u_1(k), u_2(k), ..., u_n(k)) and v(k) = col(v_1(k), v_2(k), ..., v_n(k)).

This paper aims at proposing a mathematical induction approach to establish global exponential stability criteria for the zero equilibrium of system (4) (i.e., for the unique equilibrium of the BAM neural network (1)). It is worth emphasizing that the mathematical induction approach does not require constructing any Lyapunov-Krasovskii functional, which is different from the approaches in [6-16].
3. Global exponential stability criteria

In this section we investigate sufficient conditions under which the zero equilibrium of system (4) is globally exponentially stable, that is, under which the BAM neural network (1) has a unique equilibrium which is globally exponentially stable. To this end, we first introduce a result given in [17].

Lemma 1. [17] Let A_0 ∈ R^{n×n} be a Metzler matrix and B_0, C_0, D_0 ∈ R_+^{n×n}. Then the following statements (i)-(iii) are equivalent:

(i) ρ(D_0) < 1 and s(A_0 + B_0(I_n − D_0)^{−1}C_0) < 0.
(ii) A_0x + B_0y ≺ 0 and C_0x + D_0y ≺ y for some x, y ∈ R_{++}^n.
(iii) s(A_0) < 0 and ρ(C_0(−A_0)^{−1}B_0 + D_0) < 1.

Now we are in a position to present our main conclusion.

Theorem 1. For a given positive scalar γ, under Assumptions 1-3, the BAM neural network (1) has a unique equilibrium which is globally exponentially stable if there exist ũ, ṽ ∈ R_{++}^n such that

    A_γ ũ + B_γ ṽ ⪯ 0,   (6)

    C_γ ũ + D_γ ṽ ⪯ 0,   (7)

where

    A_γ = |A| − e^{−γ} I_n,  B_γ = e^{γh̄} ∘ (|E|Γ_2) + |C|Γ_1,  e^{γh̄} = [e^{γh̄_{ij}}],
    C_γ = e^{γτ̄} ∘ (|W|Γ̃_2) + |D|Γ̃_1,  D_γ = |B| − e^{−γ} I_n,  e^{γτ̄} = [e^{γτ̄_{ji}}],
    Γ_1 = diag(γ_1^{(1)}, γ_2^{(1)}, ..., γ_n^{(1)}),  Γ_2 = diag(γ_1^{(2)}, γ_2^{(2)}, ..., γ_n^{(2)}),
    Γ̃_1 = diag(γ̃_1^{(1)}, γ̃_2^{(1)}, ..., γ̃_n^{(1)}),  Γ̃_2 = diag(γ̃_1^{(2)}, γ̃_2^{(2)}, ..., γ̃_n^{(2)}).
Proof. Choose K_1 > 0 such that K_1ũ ⪰ col(1, 1, ..., 1) and K_1ṽ ⪰ col(1, 1, ..., 1). For any fixed initial functions ψ, ϕ ∈ C(Z[−σ, 0], R^n) of system (4), define

    û(k) = K_1 ‖(ψ, ϕ)‖_σ e^{−γk} ũ,  k ∈ Z[−σ, ∞),   (8)

    v̂(k) = K_1 ‖(ψ, ϕ)‖_σ e^{−γk} ṽ,  k ∈ Z[−σ, ∞).   (9)

Now, by using the mathematical induction method we will show that

    |u(k)| ⪯ û(k),  |v(k)| ⪯ v̂(k),  ∀k ∈ Z[−σ, ∞).   (10)

Indeed, from the definition of ‖·‖_σ and the choice of K_1, it is clear that

    |u(q)| ⪯ û(q),  |v(q)| ⪯ v̂(q),  ∀q ∈ Z[−σ, 0].   (11)

Assume that (10) holds for all k ≤ q (q ≥ 0). When k = q + 1, it follows from (4a) and (5) that, for any i ∈ Z[1, n],

    |u_i(q+1)| ≤ |a_i||u_i(q)| + ∑_{j=1}^{n} |c_{ij}||f_j*(v_j(q))| + ∑_{j=1}^{n} |e_{ij}||g_j*(v_j(q − h_{ij}(q)))|
              ≤ |a_i||u_i(q)| + ∑_{j=1}^{n} |c_{ij}|γ_j^{(1)}|v_j(q)| + ∑_{j=1}^{n} |e_{ij}|γ_j^{(2)}|v_j(q − h_{ij}(q))|.

By using (11) and the inductive hypothesis, one can easily derive that

    |u_i(q+1)| ≤ |a_i|û_i(q) + ∑_{j=1}^{n} |c_{ij}|γ_j^{(1)}v̂_j(q) + ∑_{j=1}^{n} |e_{ij}|γ_j^{(2)}v̂_j(q − h_{ij}(q)),   (12)

where û_i(q) and v̂_i(q) are the ith components of û(q) and v̂(q), respectively. The combination of (8), (9) and (12) gives

    |u_i(q+1)| ≤ |a_i| K_1‖(ψ, ϕ)‖_σ e^{−γq} ũ_i + ∑_{j=1}^{n} |c_{ij}|γ_j^{(1)} K_1‖(ψ, ϕ)‖_σ e^{−γq} ṽ_j + ∑_{j=1}^{n} |e_{ij}|γ_j^{(2)} K_1‖(ψ, ϕ)‖_σ e^{−γ(q−h_{ij}(q))} ṽ_j
              ≤ K_1‖(ψ, ϕ)‖_σ e^{−γq} [ |a_i|ũ_i + ∑_{j=1}^{n} |c_{ij}|γ_j^{(1)} ṽ_j + ∑_{j=1}^{n} |e_{ij}|γ_j^{(2)} e^{γh̄_{ij}} ṽ_j ],   (13)

where ũ_i and ṽ_i are the ith components of ũ and ṽ, respectively. Due to the arbitrariness of i ∈ Z[1, n], the inequality (13) is equivalent to

    |u(q+1)| ⪯ K_1‖(ψ, ϕ)‖_σ e^{−γq} (|A|ũ + B_γ ṽ).

Using (6) and (8), we obtain

    |u(q+1)| ⪯ K_1‖(ψ, ϕ)‖_σ e^{−γ(q+1)} ũ = û(q+1).   (14)

By a process similar to the one deriving (14), it is easy to see from (7) and (9) that

    |v(q+1)| ⪯ K_1‖(ψ, ϕ)‖_σ e^{−γ(q+1)} ṽ = v̂(q+1).

In summary, (10) holds. It follows from (8)-(10) that

    ‖(u(k), v(k))‖ = (‖u(k)‖_2^2 + ‖v(k)‖_2^2)^{1/2} ≤ (‖û(k)‖_2^2 + ‖v̂(k)‖_2^2)^{1/2} = K_1 e^{−γk} ‖(ψ, ϕ)‖_σ (‖ũ‖_2^2 + ‖ṽ‖_2^2)^{1/2},  ∀k ∈ Z[0, ∞).

Let K = K_1(‖ũ‖_2^2 + ‖ṽ‖_2^2)^{1/2}; then ‖(u(k), v(k))‖ ≤ K e^{−γk} ‖(ψ, ϕ)‖_σ for all k ∈ Z[0, ∞). This, together with the arbitrariness of ψ, ϕ ∈ C(Z[−σ, 0], R^n), implies that the zero equilibrium of system (4) is globally exponentially stable, and hence the BAM neural network (1) has a unique equilibrium which is globally exponentially stable. This completes the proof.

Combining Theorem 1 and Lemma 1, one can easily derive the following conclusion.

Theorem 2. For a given positive scalar γ, under Assumptions 1-3, the BAM neural network (1) has a unique equilibrium which is globally exponentially stable if one of the following statements (a)-(c) holds:

(a) A_γ ũ + B_γ ṽ ≺ 0 and C_γ ũ + (D_γ + I_n)ṽ ≺ ṽ for some ũ, ṽ ∈ R_{++}^n.
(b) ρ(D_γ + I_n) < 1 and s(A_γ − B_γ D_γ^{−1} C_γ) < 0.
(c) s(A_γ) < 0 and ρ(D_γ + I_n − C_γ A_γ^{−1} B_γ) < 1.
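For a fixed γ, conditions (b) and (c) reduce to one spectral-radius and one spectral-abscissa computation; the examples in Section 5 check them with MATLAB's eig. The following NumPy sketch is our illustrative re-implementation of condition (b), with hypothetical argument names (hbar and taubar are the matrices [h̄_{ij}] and [τ̄_{ji}], stored with the same index layout as E and W):

    import numpy as np

    def theorem2_b(A, B, C, D, E, W, G1, G2, G1t, G2t, hbar, taubar, gamma):
        # Build A_gamma, B_gamma, C_gamma, D_gamma of Theorem 1 and test
        # condition (b) of Theorem 2 for the given decay rate gamma.
        n = A.shape[0]
        I = np.eye(n)
        Ag = np.abs(A) - np.exp(-gamma) * I
        Bg = np.exp(gamma * hbar) * (np.abs(E) @ G2) + np.abs(C) @ G1
        Cg = np.exp(gamma * taubar) * (np.abs(W) @ G2t) + np.abs(D) @ G1t
        Dg = np.abs(B) - np.exp(-gamma) * I
        rho = max(abs(np.linalg.eigvals(Dg + I)))               # rho(D_g + I)
        s = max(np.linalg.eigvals(Ag - Bg @ np.linalg.solve(Dg, Cg)).real)
        return rho < 1 and s < 0

The largest admissible decay rate γ can then be estimated, for example, by bisection over γ.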
Set

    Ã = |A| − I_n,  B̃ = |C|Γ_1 + |E|Γ_2,  C̃ = |D|Γ̃_1 + |W|Γ̃_2.

Theorem 3. Under Assumptions 1-3, the BAM neural network (1) has a unique equilibrium which is globally exponentially stable if one of the following statements (a)-(c) holds:

(a) Ãũ + B̃ṽ ≺ 0 and C̃ũ + |B|ṽ ≺ ṽ for some ũ, ṽ ∈ R_{++}^n.
(b) ρ(|B|) < 1 and s(Ã + B̃(I_n − |B|)^{−1}C̃) < 0.
(c) s(|A|) < 1 and ρ(|B| − C̃Ã^{−1}B̃) < 1.

Proof. Clearly, lim_{γ→0+} A_γ = Ã, lim_{γ→0+} B_γ = B̃, lim_{γ→0+} C_γ = C̃ and lim_{γ→0+} D_γ = |B| − I_n. Since each of (a)-(c) consists of strict inequalities, it remains valid for A_γ, B_γ, C_γ and D_γ + I_n for all sufficiently small γ > 0. By Theorem 2, the proof is completed.

Remark 2. Theorems 2 and 3 give delay-dependent and delay-independent global exponential stability criteria, respectively, for the zero equilibrium of system (4) (that is, for the unique equilibrium of the BAM neural network (1)).

4. Theoretical comparisons

In this section, we give theoretical comparisons between the global exponential stability criteria for the equilibrium of the BAM neural network (1) presented in the previous section and those in [6, 7]. When C = D = 0, the BAM neural network (1) becomes

    x_i(k+1) = a_i x_i(k) + ∑_{j=1}^{n} e_{ij} g_j(y_j(k − h_{ij}(k))) + J_i,   i ∈ Z[1, n],   (15a)

    y_j(k+1) = b_j y_j(k) + ∑_{i=1}^{n} w_{ji} g̃_i(x_i(k − τ_{ji}(k))) + J̃_j,   j ∈ Z[1, n],   (15b)
where k ∈ Z[0, ∞). System (15) can be viewed as a discretized version of a continuous-time neural network if its parameters possess the following special form [6, 7]:

    a_i = e^{−ã_i h},  e_{ij} = ((1 − e^{−ã_i h})/ã_i) ẽ_{ij},  J_i = ((1 − e^{−ã_i h})/ã_i) J_i*,   i ∈ Z[1, n],   (16a)

    b_j = e^{−b̃_j h},  w_{ji} = ((1 − e^{−b̃_j h})/b̃_j) w̃_{ji},  J̃_j = ((1 − e^{−b̃_j h})/b̃_j) J̃_j*,   j ∈ Z[1, n],   (16b)

where ã_i, b̃_j ∈ (0, ∞), and h ∈ (0, ∞) is the discretization step size. The following proposition is a simple overview of several results in the literature.

Proposition 1. Let g_i and g̃_i (i ∈ Z[1, n]) satisfy Assumptions 1 and 3. Then the BAM neural network (15) subject to (16) has a unique equilibrium which is globally exponentially stable if one of the following statements is satisfied:

(i) [6, Theorem 3.1] ã_i > γ̃_i^{(2)} ∑_{j=1}^{n} |w̃_{ji}| and b̃_j > γ_j^{(2)} ∑_{i=1}^{n} |ẽ_{ij}| for all i, j ∈ Z[1, n].

(ii) [6, Theorem 3.2] ã_i > ∑_{j=1}^{n} |ẽ_{ij}|γ_j^{(2)} and b̃_j > ∑_{i=1}^{n} |w̃_{ji}|γ̃_i^{(2)} for all i, j ∈ Z[1, n].
Theorem 4. Assume that the parameters of the BAM neural network (15) possess the special form (16). If one of (i)-(ii) in Proposition 1 is true, then so is one of (a)-(c) in Theorem 3.

Proof. Case 1: (i) in Proposition 1 is true. From (16), we have

    ã_i > γ̃_i^{(2)} ∑_{j=1}^{n} (b̃_j/(1 − b_j)) |w_{ji}|,   b̃_j > γ_j^{(2)} ∑_{i=1}^{n} (ã_i/(1 − a_i)) |e_{ij}|,   i, j ∈ Z[1, n].

Let

    ū_i = ã_i/(1 − a_i),  v̄_j = b̃_j/(1 − b_j),   i, j ∈ Z[1, n].

Then ū_i > 0, v̄_i > 0 and

    (1 − a_i)ū_i > ∑_{j=1}^{n} |w_{ji}| v̄_j γ̃_i^{(2)},   i ∈ Z[1, n],

    (1 − b_j)v̄_j > ∑_{i=1}^{n} |e_{ij}| ū_i γ_j^{(2)},   j ∈ Z[1, n],

namely

    Ãū + C̃^T v̄ ≺ 0,   B̃^T ū + |B|v̄ ≺ v̄.

By applying Lemma 1, we obtain

    s(Ã) < 0,   ρ(B̃^T(−Ã)^{−1}C̃^T + |B|) < 1.

Noting that

    C̃(−Ã)^{−1}B̃ + |B| = (B̃^T(−Ã)^{−1}C̃^T + |B|)^T,

we obtain

    ρ(C̃(−Ã)^{−1}B̃ + |B|) < 1.

Again using Lemma 1, there exist û, v̂ ∈ R_{++}^n such that

    Ãû + B̃v̂ ≺ 0,   C̃û + |B|v̂ ≺ v̂,

i.e., (a) in Theorem 3 is true.
Case 2: (ii) in Proposition 1 is true. From (16), we obtain

    ã_i > ∑_{j=1}^{n} |e_{ij}| (ã_i/(1 − a_i)) γ_j^{(2)},   i ∈ Z[1, n],

    b̃_j > ∑_{i=1}^{n} |w_{ji}| (b̃_j/(1 − b_j)) γ̃_i^{(2)},   j ∈ Z[1, n],

and hence

    1 − a_i > ∑_{j=1}^{n} |e_{ij}|γ_j^{(2)},   i ∈ Z[1, n],

    1 − b_j > ∑_{i=1}^{n} |w_{ji}|γ̃_i^{(2)},   j ∈ Z[1, n].

Taking ũ_i = ṽ_i = 1, i ∈ Z[1, n], we have

    Ãũ + B̃ṽ ≺ 0,   C̃ũ + |B|ṽ ≺ ṽ,

i.e., (a) in Theorem 3 is true.

Remark 3. It has been proven in [7, Corollary 3] that the BAM neural network (15) subject to (16) has a unique, globally exponentially stable equilibrium if

    −ã_i λ_i + γ̃_i^{(2)} ∑_{j=1}^{n} |w̃_{ji}|λ_{n+j} < 0,   i ∈ Z[1, n],   (17a)

    −b̃_j λ_{n+j} + γ_j^{(2)} ∑_{i=1}^{n} |ẽ_{ij}|λ_i < 0,   j ∈ Z[1, n],   (17b)

for some λ_j > 0, j ∈ Z[1, 2n]. Similar to the proof of Theorem 4, one can easily show that (17) is equivalent to one of (a)-(c) in Theorem 3 with C = D = 0.
Finally, compared with the approaches proposed in [11, 14-16], it is worth emphasizing that the approach proposed here has the following advantages:

• The approach can provide both delay-independent and delay-dependent global exponential stability criteria;

• The obtained delay-dependent global exponential stability criteria can be used to give exponential decay estimates for BAM neural networks.

5. Illustrative examples

In this section we demonstrate the effectiveness of the proposed induction method by three numerical examples.

Example 1. Consider the discrete-time BAM neural network (1) with the following parameters:

    A = diag(0.05, 0.05),  B = diag(0.04, 0.03),

    C = [−0.03, −0.01; 0.02, 0.02],  D = [0.04, 0.03; 0.03, 0.01],

    E = [0.05, −0.05; 0.2, 0.01],  W = [0.03, 0.06; −0.2, 0.01],

    J = col(0.05, 0),  J̃ = col(0, 0.2),

    f_j(s) = g_j(s) = f̃_i(s) = g̃_i(s) = tanh(s),  s ∈ R,  i, j ∈ Z[1, 2].
Figure 1: State trajectories of discrete-time BAM neural network (Example 1)
Clearly, Assumption 2 is satisfied. If we choose Γ_1 = Γ_2 = Γ̃_1 = Γ̃_2 = I_2, then Assumption 1 is satisfied. Following the discussion in [18, Section 4], one can easily obtain an equilibrium (x*, y*) of the discrete-time BAM neural network under consideration as follows:

    x* = col(0.0403, 0.0070),  y* = col(0.0036, 0.1993).

By employing the function eig of MATLAB, it is easy to verify that Theorem 3(b) holds, so the discrete-time BAM neural network under consideration has a unique equilibrium which is globally exponentially stable. Since condition (b) of Theorem 3 is delay-independent, the resulting global exponential stability is independent of the choice of the delays h_{ij}(k) and τ_{ji}(k), i, j ∈ Z[1, 2]. Furthermore, when h_{ij}(k) ≡ 6 and τ_{ji}(k) ≡ 6 (i, j ∈ Z[1, 2]), the state responses of the BAM neural network under consideration are given in Figures 1-3 for different initial functions, which illustrates the theoretical results presented in Theorem 3.
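This check can be reproduced in a few lines; the sketch below (our illustrative NumPy version of the eig computation, using the parameters above) evaluates condition (b) of Theorem 3:

    import numpy as np

    A = np.diag([0.05, 0.05]); B = np.diag([0.04, 0.03])
    C = np.array([[-0.03, -0.01], [0.02, 0.02]])
    D = np.array([[0.04, 0.03], [0.03, 0.01]])
    E = np.array([[0.05, -0.05], [0.2, 0.01]])
    W = np.array([[0.03, 0.06], [-0.2, 0.01]])

    I2 = np.eye(2)
    At = np.abs(A) - I2              # A-tilde (Gamma_1 = Gamma_2 = I_2)
    Bt = np.abs(C) + np.abs(E)       # B-tilde
    Ct = np.abs(D) + np.abs(W)       # C-tilde

    rho = max(abs(np.linalg.eigvals(np.abs(B))))          # 0.04 < 1
    M = At + Bt @ np.linalg.solve(I2 - np.abs(B), Ct)
    s = max(np.linalg.eigvals(M).real)                    # about -0.915 < 0
    print(rho < 1 and s < 0)                              # True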
Figure 2: State trajectories of discrete-time BAM neural network (Example 1)
Figure 3: State trajectories of discrete-time BAM neural network (Example 1)
Example 2. Consider the discrete-time BAM neural network (1) with

    A = diag(0.6, 0.65),  B = diag(0.04, 0.03),

    C = [0.05, −0.01; 0.02, 0.03],  D = [−0.04, 0.04; 0.04, 0.02],

    E = [0.04, −0.04; 0.2, 0.01],  W = [0.06, 0.05; −0.3, 0.01],

    J = col(0.04, 0),  J̃ = col(0, 0.3),

    f(s) = col(0.2 tanh(s), tanh(s)),  g(s) = col(0.2 tanh(s), 0.9 tanh(s)),

    f̃(s) = col(0.2 tanh(s), 0.3 tanh(s)),  g̃(s) = col(0.1 tanh(s), 0.4 tanh(s)),  ∀s ∈ R,

    h_{ij}(k) = r_{ij} + s_{ij} sin(kπ/2),  τ_{ji}(k) = p_{ji} + q_{ji} cos(kπ),  i, j ∈ Z[1, 2],

where r_11 = r_21 = r_22 = 7, r_12 = 11, s_11 = s_12 = s_21 = s_22 = 1, p_11 = p_21 = 7, p_12 = 12, p_22 = 8, and q_11 = q_21 = q_12 = q_22 = 1. Clearly, Assumption 2 is satisfied. Furthermore, Assumptions 1 and 3 are satisfied by taking Γ_1 = diag(0.2, 1), Γ_2 = diag(0.2, 0.9), Γ̃_1 = diag(0.2, 0.3), Γ̃_2 = diag(0.1, 0.4), h̄_11 = h̄_21 = h̄_22 = τ̄_11 = τ̄_21 = 7, h̄_12 = 11, τ̄_12 = 12 and τ̄_22 = 8. Based on the discussion in [18, Section 4], an equilibrium (x*, y*) of the BAM neural network under consideration can be obtained as follows:

    x* = col(0.0657, 0.0335),  y* = col(0.0021, 0.3081).
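The state responses below are obtained by iterating (1) directly with a buffer of past states. A minimal simulation sketch follows (our hypothetical driver, with the matrices as reconstructed above; f, g, f̃, g̃ act componentwise on vectors, and h(k), tau(k) return the integer delay matrices):

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(A, B, C, D, E, W, J, Jt, f, g, ft, gt, h, tau, K=50, sigma=13):
        # Iterate (1a)-(1b); rows of x and y store the times -sigma, ..., K.
        n = A.shape[0]
        x = rng.uniform(-1, 1, (sigma + K + 1, n))   # random initial functions
        y = rng.uniform(-1, 1, (sigma + K + 1, n))   # on Z[-sigma, 0]
        for k in range(K):
            t = sigma + k                            # row index of time k
            hk, tk = h(k).astype(int), tau(k).astype(int)
            for i in range(n):
                x[t + 1, i] = (A[i, i] * x[t, i] + C[i] @ f(y[t]) + J[i]
                               + sum(E[i, j] * g(y[t - hk[i, j]])[j] for j in range(n)))
                y[t + 1, i] = (B[i, i] * y[t, i] + D[i] @ ft(x[t]) + Jt[i]
                               + sum(W[i, j] * gt(x[t - tk[i, j]])[j] for j in range(n)))
        return x[sigma:], y[sigma:]

For this example one would pass, e.g., f = lambda s: np.array([0.2, 1.0]) * np.tanh(s), and delay functions returning the integer matrices r + s·sin(kπ/2) and p + q·cos(kπ).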
For γ = 0.18, by using the function eig of MATLAB, it is easy to verify that Theorem 2(b) holds, so the BAM neural network under consideration has a unique equilibrium which is globally exponentially stable. Moreover, if we use p11 = 8 instead of p11 = 7 in this example, then Theorem 2(b) is not true, which implies that Theorem 2 is not available in this case. So, the global exponential stability provided in Theorem 2 is delay-dependent. When hij (k) and τji (k) (i, j ∈ Z[1, 2]) are taken as above, the state responses of the BAM neural network under consideration are given in Figures 4–6 for different initial functions. Example 3. Consider a scalar discrete-time BAM neural network (15) subject to (16), where a ˜1 = 6, ˜b1 = 5, e˜11 = 9, w˜11 = 3,
(18a)
J1 = 20, J˜1 = 30, h11 = 5, τ11 = 10.
(18b)
Assume that the nonlinear activation functions g1 (s) = g˜1 (s) = tanh(s), s ∈
˜2 = 1 [0, ∞). It is clear that Assumptions 1 and 2 are satisfied, with Γ2 = Γ 22
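For this scalar example the whole check is a one-line computation for each step size. A small sketch (ours, not the authors' code) that discretizes via (16) and evaluates Theorem 3(b):

    import numpy as np

    def example3_theorem3b(h):
        # Discretize via (16) and test Theorem 3(b) for n = 1, C = D = 0.
        a1 = np.exp(-6 * h)                 # a_1 = e^{-a~_1 h}, a~_1 = 6
        b1 = np.exp(-5 * h)                 # b_1 = e^{-b~_1 h}, b~_1 = 5
        e11 = (1 - a1) * 9 / 6              # from (16a) with e~_11 = 9
        w11 = (1 - b1) * 3 / 5              # from (16b) with w~_11 = 3
        rho = b1                            # rho(|B|) in the scalar case
        s = (a1 - 1) + e11 * w11 / (1 - b1) # s(A~ + B~(1 - |B|)^{-1}C~)
        return rho < 1 and s < 0            # s = -0.1 (1 - a1) < 0 for h > 0

    print(example3_theorem3b(0.1), example3_theorem3b(1.0))   # True True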
1.2
5
1
4
0.8 3
0.6 0.4
2
0.2 1
0 -0.2
0
-0.4 -1
-0.6 -0.8
0
5
10
15
20
25
30
35
40
45
-2
50
0
5
10
15
20
25
30
35
40
45
50
Figure 6: State trajectories of discrete-time BAM neural network (Example 2)
23
9.5
10
9
8
8.5 6
8 7.5
4
7 2
6.5 6
0
5.5 -2
5 4.5
0
5
10
15
20
25
30
35
40
45
-4
50
0
5
10
15
20
25
30
35
40
45
50
Figure 7: State trajectories of discrete-time BAM neural network (Example 3; h = 0.1 and h = 1)
By using MATLAB, it is obvious that neither (i) nor (ii) of Proposition 1 (i.e., [6, Theorems 3.1 and 3.2]) is satisfied, so Proposition 1 cannot be used to check the global exponential stability of the BAM neural network (15). However, by employing the function eig of MATLAB, it is easy to verify that Theorem 3(b) and [7, Corollary 3] hold, and hence the discrete-time BAM neural network under consideration has a unique equilibrium (x* = 4.8333, y* = 6.5999) which is globally exponentially stable. This illustrates the theoretical results presented in Theorem 4 and Remark 3. Furthermore, for h = 0.1 and h = 1, the state responses of the BAM neural network under consideration are given in Figure 7.

6. Conclusion

In this paper, the problem of global exponential stability for discrete-time BAM neural networks with time-varying delays has been addressed. First, an exponential stability criterion in the form of linear matrix inequalities was established by using the mathematical induction method. Then,
stability criteria that depend only on the system parameters were deduced, and they can be checked easily by standard tool software (such as MATLAB). The method is directly based on the definition of global exponential stability and does not involve the construction of any Lyapunov-Krasovskii functional or auxiliary function. Compared with the results in [6], it has been theoretically proven that the method proposed in this paper yields less conservative stability criteria. Finally, the effectiveness of the method has been verified by several numerical examples. It is possible to apply the method proposed in this paper to other time-delay systems, for example, Lur'e-type systems [19-21], genetic regulatory networks [22, 23], discrete-time delayed systems [24, 25], singular systems [26, 27], Takagi-Sugeno fuzzy time-delay systems [28], etc.

Acknowledgment

This work was supported in part by the Natural Science Foundation of Heilongjiang Province (No. LH2019F030). The authors would like to thank the associate editor and the anonymous reviewers for their very helpful comments and suggestions, which greatly improved the original version of the paper.

References

[1] F. Beaufays, Y. Abdel-Magid, B. Widrow, Application of neural networks to load-frequency control in power systems, Neural Networks 7 (1) (1994) 183-194.
[2] M. Galicki, H. Witte, J. Dörschel, M. Eiselt, G. Griessbach, Common optimization of adaptive preprocessing units and a neural network during the learning period. Application in EEG pattern recognition, Neural Networks 10 (6) (1997) 1153-1163.

[3] B. K. Wong, Y. Selvi, Neural network applications in finance: A review and analysis of literature (1990-1996), Information & Management 34 (3) (1998) 129-139.

[4] B. Kosko, Adaptive bidirectional associative memories, Applied Optics 26 (23) (1987) 4947-4960.

[5] B. Kosko, Bi-directional associative memories, IEEE Transactions on Systems, Man and Cybernetics 18 (1) (1988) 49-60.

[6] S. Mohamad, Global exponential stability in continuous-time and discrete-time delayed bidirectional neural networks, Physica D: Nonlinear Phenomena 159 (3-4) (2001) 233-251.

[7] J. Liang, J. Cao, Exponential stability of continuous-time and discrete-time bidirectional associative memory networks with delays, Chaos, Solitons and Fractals 22 (4) (2004) 773-785.

[8] J. Liang, J. Cao, D. W. Ho, Discrete-time bidirectional associative memory neural networks with variable delays, Physics Letters A 335 (2-3) (2005) 226-234.

[9] X.-G. Liu, M. Wu, M.-L. Tang, X.-B. Liu, Global exponential stability of discrete-time BAM neural networks with variable delays, in: 2007 IEEE
International Conference on Control and Automation, IEEE, 2007, pp. 3139-3143.

[10] X.-G. Liu, M.-L. Tang, R. Martin, X.-B. Liu, Discrete-time BAM neural networks with variable delays, Physics Letters A 367 (4-5) (2007) 322-330.

[11] Y. Chen, W. Bi, Y. Wu, Delay-dependent exponential stability for discrete-time BAM neural networks with time-varying delays, Discrete Dynamics in Nature and Society 2008 (2008) 1-14.

[12] X. Lu, Global exponential stability for discrete-time BAM neural network with variable delay, in: The Sixth International Symposium on Neural Networks (ISNN 2009), Springer, 2009, pp. 19-29.

[13] R. Zhang, Z. Wang, J. Feng, Y. Jing, Delay-dependent exponential stability of discrete-time BAM neural networks with time varying delays, in: International Symposium on Neural Networks, Springer, 2009, pp. 440-449.

[14] M. Gao, B. Cui, Global robust exponential stability of discrete-time interval BAM neural networks with time-varying delays, Applied Mathematical Modelling 33 (3) (2009) 1270-1284.

[15] R. Raja, S. M. Anthoni, Global exponential stability of BAM neural networks with time-varying delays: the discrete-time case, Communications in Nonlinear Science and Numerical Simulation 16 (2) (2011) 613-622.
[16] Y. Shu, X. Liu, F. Wang, S. Qiu, Further results on exponential stability of discrete-time BAM neural networks with time-varying delays, Mathematical Methods in the Applied Sciences 40 (11) (2017) 4014-4027.

[17] P. H. A. Ngoc, H. Trinh, Novel criteria for exponential stability of linear neutral time-varying differential systems, IEEE Transactions on Automatic Control 61 (6) (2016) 1590-1594.

[18] X. Zhang, L. Wu, J. H. Zou, Globally asymptotic stability analysis for genetic regulatory networks with mixed delays: an M-matrix-based approach, IEEE/ACM Transactions on Computational Biology and Bioinformatics 13 (1) (2016) 135-147.

[19] Y. Zhu, W. X. Zheng, D. Zhou, Quasi-synchronization of discrete-time Lur'e-type switched systems with parameter mismatches and relaxed PDT constraints, IEEE Transactions on Cybernetics (2019) DOI: 10.1109/TCYB.2019.2930945, in press.

[20] Y. T. Wang, X. H. Zhang, X. Zhang, Neutral-delay-range-dependent absolute stability criteria for neutral-type Lur'e systems with time-varying delays, Journal of the Franklin Institute 353 (18) (2016) 5025-5039.

[21] Y. T. Wang, Y. Xue, X. Zhang, Less conservative robust absolute stability criteria for uncertain neutral-type Lur'e systems with time-varying delays, Journal of the Franklin Institute 353 (2016) 816-833.

[22] Y. T. Wang, X. Zhang, Z. R. Hu, Delay-dependent robust H∞ filtering of uncertain stochastic genetic regulatory networks with mixed time-varying delays, Neurocomputing 166 (2015) 346-356.
[23] Y. T. Wang, A. H. Yu, X. Zhang, Robust stability of stochastic genetic regulatory networks with time-varying delays: a delay fractioning approach, Neural Computing and Applications 23 (5) (2013) 1217-1227.

[24] H. Li, N. Zhao, X. Wang, X. Zhang, P. Shi, Necessary and sufficient conditions of exponential stability for delayed linear discrete-time systems, IEEE Transactions on Automatic Control 64 (2) (2019) 712-719.

[25] X. Zhang, W. Shang, Y. Wang, WDOP-based summation inequality and its application to exponential stability of linear delay difference systems, Asian Journal of Control 20 (2) (2018) 746-754.

[26] Y. Zhu, Z. Zhong, M. V. Basin, D. Zhou, A descriptor system approach to stability and stabilization of discrete-time switched PWA systems, IEEE Transactions on Automatic Control 63 (10) (2018) 3456-3463.

[27] X. Wang, X. Zhang, X. Yang, Delay-dependent robust dissipative control for singular LPV systems with multiple input delays, International Journal of Control, Automation and Systems 17 (2) (2019) 327-335.

[28] X. Wang, S. Ding, X. Zhang, X. Fan, Further studies on robust H∞ control for a class of Takagi-Sugeno fuzzy time-delay systems with application to CSTC problems, Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 233 (2) (2019) 103-117.
Biographies of Authors
Er-yong Cong received the B.S. and M.S. degrees from the School of Mathematical Science, Heilongjiang University, in 2003 and 2009, respectively. Since 2003 he has been working at Harbin University, where he is currently a Lecturer with the Department of Mathematics. His research interests include neural networks and stability analysis of delayed dynamic systems.
Xiao Han received the Ph.D. degree from the School of Mathematics, Jilin University, in 2007. Since 2007 she has been working at Jilin University, where she is currently a Lecturer in the School of Mathematics. Her current research interests include computational mathematics, numerical solutions of differential equations, and actuarial mathematics.
Xian Zhang (M'10-SM'17) received the Ph.D. degree in Control Theory from Queen's University Belfast, UK, in 2004. Since 2004 he has been at Heilongjiang University, where he is currently a Professor in the School of Mathematical Science. His current research interests include neural networks, genetic regulatory networks, mathematical biology, and stability analysis of delayed dynamic systems. He received the Second Class of Science and Technology Awards of Heilongjiang Province in 2005 and the Third Class of Science and Technology Awards of Heilongjiang Province in 2015. He is a senior member of the IEEE and a Vice President of the Mathematical Society of Heilongjiang Province. Since 2006, he has served as an Editor of the Journal of Natural Science of Heilongjiang University.
Declaration of interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.