Neurocomputing xxx (xxxx) xxx
Further results on mean-square exponential input-to-state stability of time-varying delayed BAM neural networks with Markovian switching☆

Guoxiong Xu, Haibo Bao∗

School of Mathematics and Statistics, Southwest University, Chongqing 400715, PR China
Article info

Article history: Received 8 May 2019; Revised 7 August 2019; Accepted 9 September 2019; Available online xxx. Communicated by Prof. Liu Xiwei.

Keywords: BAM neural networks; Mean-square exponential input-to-state stability; Markovian jump; Lyapunov–Krasovskii functional; Weak infinitesimal operator
Abstract

This paper mainly discusses the input-to-state stability of BAM neural networks with time-varying delays and Markov jump parameters. Since the system contains Markov jump parameters, we adopt an improved criterion, namely mean-square exponential input-to-state stability. With the help of stochastic theory, we construct a Markovian switched Lyapunov–Krasovskii functional and compute its derivative by means of the weak infinitesimal operator, which is used to obtain algebraic and linear matrix inequality (LMI) conditions. These conditions ensure that the system is mean-square exponentially input-to-state stable. In particular, we design a controller to simplify the algebraic conditions. Finally, we provide two numerical examples to show the effectiveness and superiority of the obtained results.
1. Introduction

It must be emphasized that a Markov jump system [1–12] is a hybrid system driven by a Markovian chain, combining event-driven and time-evolving dynamics. According to the time index, Markovian chains fall into two categories: continuous-time Markovian chains [13] and discrete-time Markovian chains [14]. For the former, time is continuous but the state is discrete; for the latter, both time and state are discrete. In the real world, Markov jump systems are used to model systems in which abrupt changes of structure and parameters occur, and they are widely applied in various fields, such as channel identification, slip steering, population dynamics, communication systems and random failures. In the past few years, many scholars have studied Markovian switching neural networks, mainly concerning stability, synchronization, dissipativity, event-triggered systems, fault-tolerant control and state estimation. For example, the authors in [15] divided activation functions into two different types, bounded activation functions and unbounded activation functions, and
☆ This work was jointly supported by the National Natural Science Foundation of China under Grant nos. 61973258 and 61573291, the Fundamental Research Funds for the Central Universities XDJK2016B036, and the Natural Science Foundation of Chongqing under Grant no. cstc2019jcyj-msxmX0452.
∗ Corresponding author. E-mail address: [email protected] (H. Bao).
© 2019 Elsevier B.V. All rights reserved.
investigated the mean-square global exponential stability of recurrent neural networks with Markovian switching and time delays in the Lagrange sense. In [6], sampled-data synchronization for time-varying delay Markovian jump neural networks with variable samplings was discussed, and the authors gave synchronization criteria by making use of the input delay approach and the LMI technique. For fault-tolerant problems, the authors of [16] focused on Markovian stochastic jump systems and removed the influence of sensor faults and disturbances in the framework of a novel augmented sliding mode observer. Dissipativity and stability analysis problems for generalized neural networks with time-varying interval delays and Markov jumps were explored in [17]; using some novel integral inequalities and stochastic theory, the authors established criteria that are less conservative. What is more, in order to deal with the state estimation problem for discrete-time Markov jump linear systems in which time-correlated measurement noise is taken into account, the authors of [18] gave two algorithms utilizing a measurement sequence. Of course, there are many other excellent results in this field; see Refs. [19–23]. Obviously, the development of society is inseparable from networks. It is well known that networks are ubiquitous, ranging from nature and biological systems to society. For instance, networks play an important role in the development of universities. To the best of our knowledge,
https://doi.org/10.1016/j.neucom.2019.09.033 0925-2312/© 2019 Elsevier B.V. All rights reserved.
pattern recognition [24], signal processing [25] and global optimization [26] are supported by networks in the real world. In a word, many systems can be abstracted into a network composed of interacting individuals. Thus, many experts and scholars have devoted themselves to this field, and many different types of network models have been established. From the viewpoint of layers, there are single-layer networks and multilayer networks. Typical examples include Boolean networks [27], cellular neural networks [28], competitive neural networks [29], BAM neural networks, recurrent neural networks [30], and so on. For BAM neural networks, there are many results. To name just a few, stability and synchronization of inertial BAM neural networks with time delays were considered by Cao et al. [31], who gave several sufficient conditions guaranteeing the global exponential stability of the equilibrium point with the help of the matrix measure and the Halanay inequality. In [32], the authors first proved the existence and uniqueness of the equilibrium point of complex-valued BAM neural networks and its global asymptotic stability by means of the Lyapunov function method and mathematical analysis techniques; then, delay-dependent stability criteria for the complex-valued BAM neural networks were presented. In addition, the fixed-time stabilization problem for BAM neural networks was discussed in [33], where the authors established an improved theorem on fixed-time stability suitable for impulsive systems and designed two different controllers to stabilize the system. Recently, input-to-state stability [34–38], which has been adopted in many subjects such as swarm formation [39] and signal processing [40], has attracted great attention and formed a theoretical basis. It is necessary to investigate the effects of external inputs when the stability of a system is analyzed. Notably, input-to-state stability criteria were improved by Zhu et al. [41] based on stochastic theory. Although there are many results on input-to-state stability, few papers make use of LMI conditions to guarantee that systems are input-to-state stable. At the same time, general algebraic conditions for input-to-state stability are difficult to satisfy and check when the dimension of the system is large. In order to deal with these problems, we give simpler algebraic conditions by designing feedback controllers, as well as LMI conditions based on matrix techniques. In view of the above discussion, we investigate the mean-square exponential input-to-state stability of time-varying delayed BAM neural networks with Markovian switching in this paper. It should be pointed out that our conditions for input-to-state stability are easy to test, and the MATLAB LMI toolbox can be used to solve the LMIs, which distinguishes this work from other papers on input-to-state stability. The remainder of this article is organized as follows. In Section 2, we give some lemmas and definitions which are useful for this paper, and time-varying delayed BAM neural networks with Markovian switching are described. In Section 3, we give sufficient conditions for mean-square exponential input-to-state stability by making use of Lyapunov functionals. In Section 4, numerical simulation results are given to show the effectiveness of the conditions for the stability of the system.
In Section 5, the conclusion is drawn.

Notation: $n$ and $m$ are positive integers and $\mathbb{R}^n$ is the $n$-dimensional Euclidean space. $A^T$ denotes the transpose of $A$. For a matrix $A=(a_{ij})_{n\times n}$, $A>0$ ($A<0$) means that $A$ is a positive (negative) definite matrix. $C([-\tau,0],\mathbb{R}^n)$ denotes the class of continuous functions $\eta$ from $[-\tau,0]$ to $\mathbb{R}^n$ with the norm $\|\eta\|=\sup_{-\tau\le s\le 0}|\eta(s)|$, where $\tau>0$. $\mathrm{diag}\{a_1,a_2,\ldots,a_n\}$ denotes a diagonal matrix. $L^\infty$ stands for the class of essentially bounded functions $\chi$ with $\|\chi\|_\infty=\operatorname{ess\,sup}_{t\ge 0}|\chi(t)|<+\infty$. $\mathbb{E}(\cdot)$ is the expectation operator. $\mathcal{K}$ represents the family of continuous strictly increasing functions from $[0,\infty)$ to $[0,\infty)$ that equal $0$ at $0$. $\mathcal{KL}$ is the family of functions $\beta(s,t)$ such that, for each fixed $t$, $\beta(\cdot,t)$ is of class $\mathcal{K}$ and, for each fixed $s$, $\beta(s,t)$ decreases to $0$ as $t\to\infty$. $E_n$ denotes the $n\times n$ identity matrix. $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ are the maximum and minimum eigenvalues of the matrix $A$, respectively.

2. The description of the model and preliminaries

In this part, some definitions and lemmas used later are given. Consider a probability space $(\Omega,\mathcal{F},P)$, where $\Omega$ is the sample space and $\mathcal{F}$ is a $\sigma$-algebra of subsets of $\Omega$. Let $r(t)$ $(t\ge 0)$ be a right-continuous Markovian chain on $(\Omega,\mathcal{F},P)$ with natural filtration $\{\mathcal{F}_t\}_{t\ge 0}$, taking values in the finite state space $S=\{1,2,3,\ldots,N\}$ and governed by
$$\Pr\{r(t+\Delta)=j\mid r(t)=i\}=\begin{cases}\gamma_{ij}\Delta+o(\Delta),&i\neq j,\\ 1+\gamma_{ii}\Delta+o(\Delta),&i=j,\end{cases}$$
where $\Delta>0$ and $o(\Delta)$ satisfies $\lim_{\Delta\to 0}o(\Delta)/\Delta=0$; $\gamma_{ij}\ge 0$ is the transition rate from state $i$ to state $j$ for $i\neq j$, while $\gamma_{ii}=-\sum_{k=1,k\neq i}^{N}\gamma_{ik}$. Let $\Gamma=(\gamma_{ij})_{N\times N}$.
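As a concrete illustration of this switching rule (our own sketch, not part of the original analysis), the following Python snippet samples a path of $r(t)$ from a given generator $\Gamma$; the two-state generator below is a placeholder whose rows sum to zero.

```python
import numpy as np

def sample_markov_chain(Gamma, r0, T, seed=0):
    """Sample a right-continuous Markov chain r(t) on [0, T] from generator Gamma.

    The holding time in state i is exponential with rate -Gamma[i, i]; the next
    state j != i is drawn with probability Gamma[i, j] / (-Gamma[i, i]).
    Returns the jump times and the states entered at those times.
    """
    rng = np.random.default_rng(seed)
    times, states = [0.0], [r0]
    t, i = 0.0, r0
    while True:
        rate = -Gamma[i, i]
        if rate <= 0:                       # absorbing state: no more jumps
            break
        t += rng.exponential(1.0 / rate)    # exponential holding time
        if t >= T:
            break
        probs = np.clip(Gamma[i], 0.0, None) / rate
        i = int(rng.choice(len(Gamma), p=probs / probs.sum()))
        times.append(t)
        states.append(i)
    return times, states

# A two-state generator with rows summing to zero (placeholder values).
Gamma = np.array([[-4.0, 4.0],
                  [ 1.0, -1.0]])
print(sample_markov_chain(Gamma, r0=0, T=5.0))
```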
Then, we introduce the following BAM neural networks with time-varying delays and Markovian switching:
$$\begin{cases}\dot{x}_i(t)=-e_{1i}x_i(t)+\displaystyle\sum_{j=1}^{m}a_{ij}(r(t))f_j(y_j(t))+\sum_{j=1}^{m}b_{ij}(r(t))f_j(y_j(t-\tau_1(t)))+u_i(t),\\[1mm] \dot{y}_j(t)=-e_{2j}y_j(t)+\displaystyle\sum_{i=1}^{n}c_{ji}(r(t))g_i(x_i(t))+\sum_{i=1}^{n}d_{ji}(r(t))g_i(x_i(t-\tau_2(t)))+v_j(t),\\[1mm] x_i(s)=\psi_i(s),\quad -\max\{\rho_1,\rho_2\}\le s\le 0,\\ y_j(s)=\varphi_j(s),\quad -\max\{\rho_1,\rho_2\}\le s\le 0,\end{cases}\tag{1}$$
where $i=1,2,\ldots,n$, $j=1,2,\ldots,m$, and $x(t)=(x_1(t),\ldots,x_n(t))^T\in\mathbb{R}^n$ and $y(t)=(y_1(t),\ldots,y_m(t))^T\in\mathbb{R}^m$ are the state vectors of the neural networks. The positive definite diagonal matrices $\bar{E}_1=\mathrm{diag}\{e_{11},\ldots,e_{1n}\}$ and $\bar{E}_2=\mathrm{diag}\{e_{21},\ldots,e_{2m}\}$ are the self-feedback connection weight matrices. The matrices $A=(a_{ij}(r(t)))_{n\times m}$, $B=(b_{ij}(r(t)))_{n\times m}$, $C=(c_{ji}(r(t)))_{m\times n}$ and $D=(d_{ji}(r(t)))_{m\times n}$ are the connection weight matrices with Markovian switching. $f(y(t))=(f_1(y_1(t)),\ldots,f_m(y_m(t)))^T$ and $g(x(t))=(g_1(x_1(t)),\ldots,g_n(x_n(t)))^T$ are the activation functions without time delays; $f(y(t-\tau_1(t)))=(f_1(y_1(t-\tau_1(t))),\ldots,f_m(y_m(t-\tau_1(t))))^T$ and $g(x(t-\tau_2(t)))=(g_1(x_1(t-\tau_2(t))),\ldots,g_n(x_n(t-\tau_2(t))))^T$ are those with time delays. What is more, $u_i(t)$ and $v_j(t)$ belong to $L^\infty$. Let $\tau=\max\{\rho_1,\rho_2\}$. $\psi=(\psi_1,\psi_2,\ldots,\psi_n)^T$ belongs to $L^2_{\mathcal{F}_0}([-\tau,0];\mathbb{R}^n)$, the family of all $\mathcal{F}_0$-measurable, $C([-\tau,0],\mathbb{R}^n)$-valued stochastic variables $\phi=\{\phi(s):-\tau\le s\le 0\}$ satisfying $\int_{-\tau}^{0}\mathbb{E}|\phi(s)|^2\,ds<\infty$; similarly, $\varphi=(\varphi_1,\varphi_2,\ldots,\varphi_m)^T\in L^2_{\mathcal{F}_0}([-\tau,0];\mathbb{R}^m)$. The time-varying delays $\tau_1(t)$ and $\tau_2(t)$ satisfy the following conditions:
$$0\le\tau_1(t)\le\rho_1,\quad 0\le\tau_2(t)\le\rho_2,\quad \dot{\tau}_1(t)\le\tau_{11}<1,\quad \dot{\tau}_2(t)\le\tau_{22}<1.$$
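To build intuition for how (1) evolves, the sketch below integrates the network with a simple fixed-step Euler scheme, holding the Markov mode piecewise constant. It is a minimal illustration under assumed tanh activations and user-supplied weights; the function names and defaults are our own, not the authors' simulation code.

```python
import numpy as np

def simulate_bam(E1, E2, A, B, C, D, tau1, tau2, u, v, x0, y0,
                 T=25.0, dt=1e-3, switch=lambda t: 0):
    """Fixed-step Euler integration of the switched BAM system (1).

    E1, E2: diagonal self-feedback matrices; A, B, C, D: dicts mode -> matrix;
    tau1, tau2, u, v: functions of t; switch(t): current Markov mode r(t)
    (e.g. a path sampled as in the previous snippet); tanh activations assumed.
    """
    steps = int(T / dt)
    hist = max(int(0.5 / dt), 1)            # history buffer covering the delays
    x = np.tile(np.asarray(x0, float), (steps + hist, 1))
    y = np.tile(np.asarray(y0, float), (steps + hist, 1))
    for k in range(hist, steps + hist - 1):
        t = (k - hist) * dt
        p = switch(t)
        k1 = max(k - int(tau1(t) / dt), 0)  # index of t - tau1(t)
        k2 = max(k - int(tau2(t) / dt), 0)  # index of t - tau2(t)
        dx = -E1 @ x[k] + A[p] @ np.tanh(y[k]) + B[p] @ np.tanh(y[k1]) + u(t)
        dy = -E2 @ y[k] + C[p] @ np.tanh(x[k]) + D[p] @ np.tanh(x[k2]) + v(t)
        x[k + 1] = x[k] + dt * dx
        y[k + 1] = y[k] + dt * dy
    return x[hist:], y[hist:]
```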
For the activation functions, we make the following general assumptions.

Assumption 1. There exist positive constants $L_j$ and $M_i$ such that
$$|f_j(\upsilon)-f_j(\nu)|\le L_j|\upsilon-\nu|,\qquad |g_i(\upsilon)-g_i(\nu)|\le M_i|\upsilon-\nu|,$$
for all $\nu,\upsilon\in\mathbb{R}$, $i=1,2,\ldots,n$ and $j=1,2,\ldots,m$.

Assumption 2. $f_j(0)=g_i(0)=0$ for $i=1,2,\ldots,n$ and $j=1,2,\ldots,m$.

For example, the activation function $\tanh(\cdot)$ used in Section 4 satisfies both assumptions with $L_j=M_i=1$.
Remark 1. Obviously, under Assumption 1, the system (1) has a unique solution once the initial conditions are given. When $u_i(t)=v_j(t)\equiv 0$, it is easy to see that the system (1) is an autonomous system and $x=0$, $y=0$ is an equilibrium point of the system. This paper aims at discussing the stability of this trivial solution under the effects of external inputs, which accords with the definition of input-to-state stability. It should be pointed out that the activation functions of the system (1) are abstract functions, so it is impractical to prove the existence of the trivial solution directly. Therefore, it is reasonable to make Assumption 2.
In order to facilitate our research, we denote $z(t)=(x_1(t),\ldots,x_n(t),y_1(t),\ldots,y_m(t))^T$, $\bar{h}=(\psi_1,\ldots,\psi_n,\varphi_1,\ldots,\varphi_m)^T$ and $\Upsilon(t)=(u_1(t),\ldots,u_n(t),v_1(t),\ldots,v_m(t))^T$.

Definition 1. The system (1) is mean-square exponentially input-to-state stable at $z(t)=0$ if there exist positive constants $\alpha$, $\beta$ and $\delta$ such that the following inequality holds:
$$\mathbb{E}|z(t)|^2\le\alpha e^{-\beta t}\,\mathbb{E}\|\bar{h}\|^2+\delta\|\Upsilon\|_\infty^2.$$

Remark 2. Here, we adopt the definition of mean-square exponential input-to-state stability from [41]. Different from [41], we consider BAM neural networks with Markovian jumps and give some new sufficient conditions for mean-square exponential input-to-state stability in this paper.

Lemma 1 [42]. If $V(x(t),i,t)$ is a stochastic positive Lyapunov–Krasovskii functional, its weak infinitesimal operator is given by
$$\begin{aligned}\mathcal{L}V(x(t),i,t)&=\lim_{\Delta\to 0}\frac{1}{\Delta}\big[\mathbb{E}\{V(x(t+\Delta),r(t+\Delta),t+\Delta)\mid x(t),r(t)=i\}-V(x(t),i,t)\big]\\&=\frac{\partial V}{\partial t}+\dot{x}^T(t)\,\frac{\partial V}{\partial x}\bigg|_{r(t)=i}+\sum_{j\in S}\gamma_{ij}V(x(t),j,t).\end{aligned}$$

3. Main results

In this part, we first give algebraic conditions for the stability of system (1) without controllers. Then, a feedback controller is designed to simplify the algebraic conditions. Finally, we give LMI conditions, which can be handled by MATLAB, to solve the stability problem.

Theorem 1. Under Assumptions 1 and 2, the trivial solution of the system (1) is mean-square exponentially input-to-state stable if there exist positive scalars $\lambda$, $\xi_i(p)$, $\mu_j(p)$, $a_i$ and $b_j$ for $p\in S$, $i=1,2,\ldots,n$ and $j=1,2,\ldots,m$ such that the following inequalities are satisfied:
$$2\xi_i(p)e_{1i}-a_i-\lambda\xi_i(p)-\sum_{j=1}^{m}\mu_j(p)|c_{ji}(p)|M_i-\sum_{q\in S}\gamma_{pq}\xi_i(q)-\xi_i(p)\sum_{j=1}^{m}|a_{ij}(p)|L_j-\xi_i(p)\sum_{j=1}^{m}|b_{ij}(p)|L_j-\xi_i(p)\ge 0,\tag{2}$$
$$2\mu_j(p)e_{2j}-b_j-\lambda\mu_j(p)-\sum_{i=1}^{n}\xi_i(p)|a_{ij}(p)|L_j-\sum_{q\in S}\gamma_{pq}\mu_j(q)-\mu_j(p)\sum_{i=1}^{n}|c_{ji}(p)|M_i-\mu_j(p)\sum_{i=1}^{n}|d_{ji}(p)|M_i-\mu_j(p)\ge 0,\tag{3}$$
$$(1-\tau_{22})e^{-\rho_2\lambda}a_i-\sum_{j=1}^{m}\mu_j(p)|d_{ji}(p)|M_i\ge 0,\tag{4}$$
$$(1-\tau_{11})e^{-\rho_1\lambda}b_j-\sum_{i=1}^{n}\xi_i(p)|b_{ij}(p)|L_j\ge 0.\tag{5}$$

Proof. We select the following switching Lyapunov functional:
$$V(z(t),p,t)=e^{\lambda t}\sum_{i=1}^{n}\xi_i(p)x_i^2(t)+e^{\lambda t}\sum_{j=1}^{m}\mu_j(p)y_j^2(t)+\sum_{i=1}^{n}\int_{t-\tau_2(t)}^{t}e^{\lambda s}a_ix_i^2(s)\,ds+\sum_{j=1}^{m}\int_{t-\tau_1(t)}^{t}e^{\lambda s}b_jy_j^2(s)\,ds.$$
According to Lemma 1, we have
$$\begin{aligned}\mathcal{L}V=\;&\sum_{q\in S}\gamma_{pq}e^{\lambda t}\sum_{i=1}^{n}\xi_i(q)x_i^2(t)+\sum_{q\in S}\gamma_{pq}e^{\lambda t}\sum_{j=1}^{m}\mu_j(q)y_j^2(t)+\lambda e^{\lambda t}\sum_{i=1}^{n}\xi_i(p)x_i^2(t)+\lambda e^{\lambda t}\sum_{j=1}^{m}\mu_j(p)y_j^2(t)\\&+\sum_{i=1}^{n}\big[e^{\lambda t}a_ix_i^2(t)-(1-\dot{\tau}_2(t))e^{\lambda(t-\tau_2(t))}a_ix_i^2(t-\tau_2(t))\big]+\sum_{j=1}^{m}\big[e^{\lambda t}b_jy_j^2(t)-(1-\dot{\tau}_1(t))e^{\lambda(t-\tau_1(t))}b_jy_j^2(t-\tau_1(t))\big]\\&+2e^{\lambda t}\sum_{i=1}^{n}\xi_i(p)x_i(t)\Big[-e_{1i}x_i(t)+\sum_{j=1}^{m}a_{ij}(p)f_j(y_j(t))+\sum_{j=1}^{m}b_{ij}(p)f_j(y_j(t-\tau_1(t)))+u_i(t)\Big]\\&+2e^{\lambda t}\sum_{j=1}^{m}\mu_j(p)y_j(t)\Big[-e_{2j}y_j(t)+\sum_{i=1}^{n}c_{ji}(p)g_i(x_i(t))+\sum_{i=1}^{n}d_{ji}(p)g_i(x_i(t-\tau_2(t)))+v_j(t)\Big].\end{aligned}$$
Using $\dot{\tau}_1(t)\le\tau_{11}<1$, $\dot{\tau}_2(t)\le\tau_{22}<1$, $e^{\lambda(t-\tau_k(t))}\ge e^{\lambda t}e^{-\lambda\rho_k}$ $(k=1,2)$ and Assumptions 1 and 2, it follows that
$$\begin{aligned}\mathcal{L}V\le\;&\sum_{q\in S}\gamma_{pq}e^{\lambda t}\sum_{i=1}^{n}\xi_i(q)x_i^2(t)+\sum_{q\in S}\gamma_{pq}e^{\lambda t}\sum_{j=1}^{m}\mu_j(q)y_j^2(t)+\lambda e^{\lambda t}\sum_{i=1}^{n}\xi_i(p)x_i^2(t)+\lambda e^{\lambda t}\sum_{j=1}^{m}\mu_j(p)y_j^2(t)\\&+e^{\lambda t}\sum_{i=1}^{n}\big[a_ix_i^2(t)-(1-\tau_{22})e^{-\lambda\rho_2}a_ix_i^2(t-\tau_2(t))\big]+e^{\lambda t}\sum_{j=1}^{m}\big[b_jy_j^2(t)-(1-\tau_{11})e^{-\lambda\rho_1}b_jy_j^2(t-\tau_1(t))\big]\\&-2e^{\lambda t}\sum_{i=1}^{n}\xi_i(p)e_{1i}x_i^2(t)+2e^{\lambda t}\sum_{i=1}^{n}\sum_{j=1}^{m}\xi_i(p)|a_{ij}(p)||x_i(t)|L_j|y_j(t)|+2e^{\lambda t}\sum_{i=1}^{n}\sum_{j=1}^{m}\xi_i(p)|b_{ij}(p)||x_i(t)|L_j|y_j(t-\tau_1(t))|\\&+2e^{\lambda t}\sum_{i=1}^{n}\xi_i(p)|x_i(t)||u_i(t)|-2e^{\lambda t}\sum_{j=1}^{m}\mu_j(p)e_{2j}y_j^2(t)+2e^{\lambda t}\sum_{j=1}^{m}\sum_{i=1}^{n}\mu_j(p)|c_{ji}(p)||y_j(t)|M_i|x_i(t)|\\&+2e^{\lambda t}\sum_{j=1}^{m}\sum_{i=1}^{n}\mu_j(p)|d_{ji}(p)||y_j(t)|M_i|x_i(t-\tau_2(t))|+2e^{\lambda t}\sum_{j=1}^{m}\mu_j(p)|y_j(t)||v_j(t)|.\end{aligned}$$
Applying the elementary inequality $2|a||b|\le a^2+b^2$ to every cross term and collecting the coefficients of $x_i^2(t)$, $x_i^2(t-\tau_2(t))$, $y_j^2(t)$ and $y_j^2(t-\tau_1(t))$, we arrive at
$$\begin{aligned}\mathcal{L}V\le\;&-e^{\lambda t}\sum_{i=1}^{n}\Big[2\xi_i(p)e_{1i}-a_i-\lambda\xi_i(p)-\sum_{j=1}^{m}\mu_j(p)|c_{ji}(p)|M_i-\sum_{q\in S}\gamma_{pq}\xi_i(q)-\xi_i(p)\sum_{j=1}^{m}|a_{ij}(p)|L_j-\xi_i(p)\sum_{j=1}^{m}|b_{ij}(p)|L_j-\xi_i(p)\Big]x_i^2(t)\\&-e^{\lambda t}\sum_{i=1}^{n}\Big[(1-\tau_{22})e^{-\rho_2\lambda}a_i-\sum_{j=1}^{m}\mu_j(p)|d_{ji}(p)|M_i\Big]x_i^2(t-\tau_2(t))\\&-e^{\lambda t}\sum_{j=1}^{m}\Big[2\mu_j(p)e_{2j}-b_j-\lambda\mu_j(p)-\sum_{i=1}^{n}\xi_i(p)|a_{ij}(p)|L_j-\sum_{q\in S}\gamma_{pq}\mu_j(q)-\mu_j(p)\sum_{i=1}^{n}|c_{ji}(p)|M_i-\mu_j(p)\sum_{i=1}^{n}|d_{ji}(p)|M_i-\mu_j(p)\Big]y_j^2(t)\\&-e^{\lambda t}\sum_{j=1}^{m}\Big[(1-\tau_{11})e^{-\rho_1\lambda}b_j-\sum_{i=1}^{n}\xi_i(p)|b_{ij}(p)|L_j\Big]y_j^2(t-\tau_1(t))+e^{\lambda t}\sum_{i=1}^{n}\xi_i(p)u_i^2(t)+e^{\lambda t}\sum_{j=1}^{m}\mu_j(p)v_j^2(t).\end{aligned}$$
Based on the conditions (2)–(5), we can obtain
$$\mathcal{L}V\le e^{\lambda t}\sum_{i=1}^{n}\xi_i(p)u_i^2(t)+e^{\lambda t}\sum_{j=1}^{m}\mu_j(p)v_j^2(t).\tag{6}$$
Integrating both sides of inequality (6) from $0$ to $t$ and taking the expectation operator, we have
$$\mathbb{E}V(z(t),p,t)\le\mathbb{E}V(z(0),r(0),0)+\max_{1\le i\le n,\,1\le j\le m,\,p\in S}\{\xi_i(p),\mu_j(p)\}\,\|\Upsilon\|_\infty^2\int_0^te^{\lambda s}\,ds\le\max_{1\le i\le n,\,1\le j\le m,\,p\in S}\{\xi_i(p),\mu_j(p)\}\,\mathbb{E}\|\bar{h}\|^2+\frac{\max_{1\le i\le n,\,1\le j\le m,\,p\in S}\{\xi_i(p),\mu_j(p)\}\,\|\Upsilon\|_\infty^2}{\lambda}\,(e^{\lambda t}-1).\tag{7}$$
Furthermore,
$$\min_{1\le i\le n,\,1\le j\le m,\,p\in S}\{\xi_i(p),\mu_j(p)\}\,e^{\lambda t}\,\mathbb{E}\|z(t)\|^2\le\max_{1\le i\le n,\,1\le j\le m,\,p\in S}\{\xi_i(p),\mu_j(p)\}\,\mathbb{E}\|\bar{h}\|^2+\frac{\max_{1\le i\le n,\,1\le j\le m,\,p\in S}\{\xi_i(p),\mu_j(p)\}\,\|\Upsilon\|_\infty^2}{\lambda}\,(e^{\lambda t}-1),$$
and then
$$\mathbb{E}|z(t)|^2\le\frac{\max_{1\le i\le n,\,1\le j\le m,\,p\in S}\{\xi_i(p),\mu_j(p)\}}{\min_{1\le i\le n,\,1\le j\le m,\,p\in S}\{\xi_i(p),\mu_j(p)\}}\,e^{-\lambda t}\,\mathbb{E}\|\bar{h}\|^2+\frac{\max_{1\le i\le n,\,1\le j\le m,\,p\in S}\{\xi_i(p),\mu_j(p)\}}{\lambda\,\min_{1\le i\le n,\,1\le j\le m,\,p\in S}\{\xi_i(p),\mu_j(p)\}}\,\|\Upsilon\|_\infty^2.$$
This completes the proof.
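Since conditions (2)–(5) are scalar inequalities, they are straightforward to evaluate programmatically. The sketch below is our own illustration (all names and conventions are ours, not from the paper): it computes the left-hand sides of (2)–(5), and Theorem 1 applies whenever every returned entry is nonnegative.

```python
import numpy as np

def theorem1_lhs(E1, E2, A, B, C, D, Gamma, L, M, lam, xi, mu, a, b,
                 tau11, tau22, rho1, rho2):
    """Left-hand sides of conditions (2)-(5), per mode p; all must be >= 0.

    A[p], B[p]: n x m; C[p], D[p]: m x n; xi[p] (len n), mu[p] (len m);
    E1 (len n), E2 (len m): self-feedback coefficients; L (len m), M (len n):
    Lipschitz constants; a (len n), b (len m): scalars of Theorem 1;
    Gamma: N x N transition-rate matrix indexed by the mode keys.
    """
    out = {}
    for p in A:
        gx = sum(Gamma[p, q] * xi[q] for q in A)   # sum_q gamma_pq xi_i(q)
        gy = sum(Gamma[p, q] * mu[q] for q in A)   # sum_q gamma_pq mu_j(q)
        c2 = (2 * xi[p] * E1 - a - lam * xi[p] - M * (mu[p] @ np.abs(C[p]))
              - gx - xi[p] * (np.abs(A[p]) @ L) - xi[p] * (np.abs(B[p]) @ L)
              - xi[p])
        c3 = (2 * mu[p] * E2 - b - lam * mu[p] - L * (xi[p] @ np.abs(A[p]))
              - gy - mu[p] * (np.abs(C[p]) @ M) - mu[p] * (np.abs(D[p]) @ M)
              - mu[p])
        c4 = (1 - tau22) * np.exp(-rho2 * lam) * a - M * (mu[p] @ np.abs(D[p]))
        c5 = (1 - tau11) * np.exp(-rho1 * lam) * b - L * (xi[p] @ np.abs(B[p]))
        out[p] = (c2, c3, c4, c5)
    return out
```

The same helper, with the $a$- and $b$-sum terms and the trailing $\xi_i(p)$, $\mu_j(p)$ dropped from `c2` and `c3`, evaluates the simplified conditions (10)–(13) of Theorem 2 below.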
Theorem 2. Under Assumptions 1 and 2, the trivial solution of the system (1) is mean-square exponentially input-to-state stable if the feedback controllers
$$w_{1i}(t)=-k_{1i}x_i(t),\qquad k_{1i}=\max_{p\in S}\Big\{\sum_{j=1}^{m}|a_{ij}(p)|L_j+\sum_{j=1}^{m}|b_{ij}(p)|L_j+1\Big\},\quad i=1,2,\ldots,n,\tag{8}$$
$$w_{2j}(t)=-k_{2j}y_j(t),\qquad k_{2j}=\max_{p\in S}\Big\{\sum_{i=1}^{n}|c_{ji}(p)|M_i+\sum_{i=1}^{n}|d_{ji}(p)|M_i+1\Big\},\quad j=1,2,\ldots,m,\tag{9}$$
are adopted and there exist positive scalars $\lambda$, $\xi_i(p)$, $\mu_j(p)$, $a_i$ and $b_j$ for $p\in S$, $i=1,2,\ldots,n$ and $j=1,2,\ldots,m$ such that the following inequalities (10)–(13) are satisfied:
$$2\xi_i(p)e_{1i}-a_i-\lambda\xi_i(p)-\sum_{j=1}^{m}\mu_j(p)|c_{ji}(p)|M_i-\sum_{q\in S}\gamma_{pq}\xi_i(q)\ge 0,\tag{10}$$
$$2\mu_j(p)e_{2j}-b_j-\lambda\mu_j(p)-\sum_{i=1}^{n}\xi_i(p)|a_{ij}(p)|L_j-\sum_{q\in S}\gamma_{pq}\mu_j(q)\ge 0,\tag{11}$$
$$(1-\tau_{22})e^{-\rho_2\lambda}a_i-\sum_{j=1}^{m}\mu_j(p)|d_{ji}(p)|M_i\ge 0,\tag{12}$$
$$(1-\tau_{11})e^{-\rho_1\lambda}b_j-\sum_{i=1}^{n}\xi_i(p)|b_{ij}(p)|L_j\ge 0.\tag{13}$$

Remark 3. The proof of Theorem 2 is the same as that of Theorem 1 and we omit it here. Although algebraic conditions for the stability of stochastic systems were presented in [41], it is hard to check such conditions as the dimension of the system increases, and many systems do not satisfy them. From Theorem 1, it is easy to see that the algebraic conditions involve the dimension of the system and the number of states of the Markovian process, which makes them more challenging to verify. We obtain better algebraic conditions in Theorem 2 simply by employing feedback controllers. Thus, it is necessary to design controllers to solve such problems.

Corollary 1. When $\Upsilon\equiv 0$, the trivial solution of the system (1) is mean-square exponentially stable if the conditions of Theorem 1 or Theorem 2 hold.

Corollary 2. When $S=\{1\}$, the trivial solution of the system (1) is exponentially input-to-state stable if the conditions of Theorem 1 or Theorem 2 hold.
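Computing the gains in (8) and (9) is equally direct; the helper below is a small illustrative sketch of ours, reusing the dict-of-matrices convention of the previous snippet.

```python
import numpy as np

def feedback_gains(A, B, C, D, L, M):
    """Controller gains k_{1i} and k_{2j} from (8) and (9).

    A[p], B[p]: n x m weight matrices; C[p], D[p]: m x n; L (len m) and
    M (len n) are the Lipschitz constants of Assumption 1.
    """
    k1 = np.max([np.abs(A[p]) @ L + np.abs(B[p]) @ L for p in A], axis=0) + 1.0
    k2 = np.max([np.abs(C[p]) @ M + np.abs(D[p]) @ M for p in C], axis=0) + 1.0
    return k1, k2
```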
Next, for the convenience of computations, we apply LMI approaches to deal with the stability problem. The system (1) can be rewritten in the following form:
$$\begin{cases}\dot{x}(t)=-\bar{E}_1x(t)+A(r(t))f(y(t))+B(r(t))f(y(t-\tau_1(t)))+u(t),\\ \dot{y}(t)=-\bar{E}_2y(t)+C(r(t))g(x(t))+D(r(t))g(x(t-\tau_2(t)))+v(t).\end{cases}\tag{14}$$
For convenience, let $\mathrm{diag}\{L_1,L_2,\ldots,L_m\}=L$ and $\mathrm{diag}\{M_1,M_2,\ldots,M_n\}=M$.

Theorem 3. Under Assumptions 1 and 2, the trivial solution of the system (1) is mean-square exponentially input-to-state stable if there exist matrices $Q(p)>0$, $T(p)>0$, $\bar{A}>0$, $\bar{B}>0$ and diagonal matrices $\Lambda_1(p)>0$, $\Lambda_2(p)>0$, $\Sigma_1(p)>0$ and $\Sigma_2(p)>0$, where $Q(p)$, $\Lambda_1(p)$, $\Lambda_2(p)$ and $\bar{A}$ are $n\times n$ matrices and $T(p)$, $\Sigma_1(p)$, $\Sigma_2(p)$ and $\bar{B}$ are $m\times m$ matrices, such that the following linear matrix inequality holds for given $\lambda$ and every $p\in S$:
$$\Xi(p)=\begin{pmatrix}\Theta_{11}(p)&0&0&0&0&0&Q(p)A(p)&Q(p)B(p)\\ \ast&\Theta_{22}(p)&0&0&0&0&0&0\\ \ast&\ast&-\Lambda_1(p)&0&C^T(p)T(p)&0&0&0\\ \ast&\ast&\ast&-\Lambda_2(p)&D^T(p)T(p)&0&0&0\\ \ast&\ast&\ast&\ast&\Theta_{33}(p)&0&0&0\\ \ast&\ast&\ast&\ast&\ast&\Theta_{44}(p)&0&0\\ \ast&\ast&\ast&\ast&\ast&\ast&-\Sigma_1(p)&0\\ \ast&\ast&\ast&\ast&\ast&\ast&\ast&-\Sigma_2(p)\end{pmatrix}<0,$$
where
$$\Theta_{11}(p)=\sum_{q\in S}\gamma_{pq}Q(q)-2Q(p)\bar{E}_1+Q(p)+\lambda Q(p)+\bar{A}+M^2\Lambda_1(p),\qquad \Theta_{22}(p)=-(1-\tau_{22})e^{-\lambda\rho_2}\bar{A}+M^2\Lambda_2(p),$$
$$\Theta_{33}(p)=\sum_{q\in S}\gamma_{pq}T(q)-2T(p)\bar{E}_2+T(p)+\lambda T(p)+\bar{B}+L^2\Sigma_1(p),\qquad \Theta_{44}(p)=-(1-\tau_{11})e^{-\lambda\rho_1}\bar{B}+L^2\Sigma_2(p).$$

Proof. Considering Assumptions 1 and 2, one can derive that
$$-L_j^2y_j^2(t)+f_j^2(y_j(t))\le 0.$$
Then,
$$\begin{pmatrix}y_j(t)\\ f_j(y_j(t))\end{pmatrix}^T\begin{pmatrix}-L_j^2&0\\ 0&1\end{pmatrix}\begin{pmatrix}y_j(t)\\ f_j(y_j(t))\end{pmatrix}\le 0.$$
Further,
$$\begin{pmatrix}y(t)\\ f(y(t))\end{pmatrix}^T\begin{pmatrix}-L^2&0\\ 0&E_m\end{pmatrix}\begin{pmatrix}y(t)\\ f(y(t))\end{pmatrix}\le 0.$$
Thus,
$$\begin{pmatrix}y(t)\\ f(y(t))\end{pmatrix}^T\begin{pmatrix}L^2\Sigma_1(p)&0\\ 0&-\Sigma_1(p)\end{pmatrix}\begin{pmatrix}y(t)\\ f(y(t))\end{pmatrix}\ge 0.\tag{15}$$
Similarly, we can derive that
$$\begin{pmatrix}y(t-\tau_1(t))\\ f(y(t-\tau_1(t)))\end{pmatrix}^T\begin{pmatrix}L^2\Sigma_2(p)&0\\ 0&-\Sigma_2(p)\end{pmatrix}\begin{pmatrix}y(t-\tau_1(t))\\ f(y(t-\tau_1(t)))\end{pmatrix}\ge 0,\tag{16}$$
$$\begin{pmatrix}x(t)\\ g(x(t))\end{pmatrix}^T\begin{pmatrix}M^2\Lambda_1(p)&0\\ 0&-\Lambda_1(p)\end{pmatrix}\begin{pmatrix}x(t)\\ g(x(t))\end{pmatrix}\ge 0,\tag{17}$$
$$\begin{pmatrix}x(t-\tau_2(t))\\ g(x(t-\tau_2(t)))\end{pmatrix}^T\begin{pmatrix}M^2\Lambda_2(p)&0\\ 0&-\Lambda_2(p)\end{pmatrix}\begin{pmatrix}x(t-\tau_2(t))\\ g(x(t-\tau_2(t)))\end{pmatrix}\ge 0.\tag{18}$$
Choose the following switching Lyapunov functional candidate:
$$V(z(t),p,t)=e^{\lambda t}x^T(t)Q(p)x(t)+e^{\lambda t}y^T(t)T(p)y(t)+\int_{t-\tau_2(t)}^{t}e^{\lambda s}x^T(s)\bar{A}x(s)\,ds+\int_{t-\tau_1(t)}^{t}e^{\lambda s}y^T(s)\bar{B}y(s)\,ds.$$
Then,
$$\begin{aligned}\mathcal{L}V=\;&\sum_{q\in S}\gamma_{pq}e^{\lambda t}x^T(t)Q(q)x(t)+\sum_{q\in S}\gamma_{pq}e^{\lambda t}y^T(t)T(q)y(t)+\lambda e^{\lambda t}x^T(t)Q(p)x(t)+\lambda e^{\lambda t}y^T(t)T(p)y(t)\\&+e^{\lambda t}x^T(t)\bar{A}x(t)-(1-\dot{\tau}_2(t))e^{\lambda(t-\tau_2(t))}x^T(t-\tau_2(t))\bar{A}x(t-\tau_2(t))\\&+e^{\lambda t}y^T(t)\bar{B}y(t)-(1-\dot{\tau}_1(t))e^{\lambda(t-\tau_1(t))}y^T(t-\tau_1(t))\bar{B}y(t-\tau_1(t))\\&+2e^{\lambda t}x^T(t)Q(p)\big[-\bar{E}_1x(t)+A(p)f(y(t))+B(p)f(y(t-\tau_1(t)))+u(t)\big]\\&+2e^{\lambda t}y^T(t)T(p)\big[-\bar{E}_2y(t)+C(p)g(x(t))+D(p)g(x(t-\tau_2(t)))+v(t)\big].\end{aligned}$$
Using $2e^{\lambda t}x^T(t)Q(p)u(t)\le e^{\lambda t}x^T(t)Q(p)x(t)+e^{\lambda t}u^T(t)Q(p)u(t)$ and $2e^{\lambda t}y^T(t)T(p)v(t)\le e^{\lambda t}y^T(t)T(p)y(t)+e^{\lambda t}v^T(t)T(p)v(t)$, together with $\dot{\tau}_1(t)\le\tau_{11}$, $\dot{\tau}_2(t)\le\tau_{22}$, $e^{\lambda(t-\tau_k(t))}\ge e^{\lambda t}e^{-\lambda\rho_k}$ and the nonnegative left-hand sides of (15)–(18), and letting $\Pi=(x^T(t),x^T(t-\tau_2(t)),g^T(x(t)),g^T(x(t-\tau_2(t))),y^T(t),y^T(t-\tau_1(t)),f^T(y(t)),f^T(y(t-\tau_1(t))))^T$, we can obtain
$$\mathcal{L}V\le e^{\lambda t}\Pi^T\Xi(p)\Pi+e^{\lambda t}u^T(t)Q(p)u(t)+e^{\lambda t}v^T(t)T(p)v(t)\le e^{\lambda t}u^T(t)Q(p)u(t)+e^{\lambda t}v^T(t)T(p)v(t)\le\max_{p\in S}\{\lambda_{\max}(Q(p)),\lambda_{\max}(T(p))\}\,e^{\lambda t}\,\|\Upsilon\|_\infty^2.\tag{19}$$
Integrating both sides of inequality (19) from $0$ to $t$ and taking the expectation operator, we have
$$\mathbb{E}V(z(t),p,t)\le\mathbb{E}V(z(0),r(0),0)+\max_{p\in S}\{\lambda_{\max}(Q(p)),\lambda_{\max}(T(p))\}\,\|\Upsilon\|_\infty^2\int_0^te^{\lambda s}\,ds\le\max_{p\in S}\{\lambda_{\max}(Q(p)),\lambda_{\max}(T(p))\}\,\mathbb{E}\|\bar{h}\|^2+\frac{\max_{p\in S}\{\lambda_{\max}(Q(p)),\lambda_{\max}(T(p))\}}{\lambda}\,\|\Upsilon\|_\infty^2\,(e^{\lambda t}-1),$$
which generates
$$\mathbb{E}|z(t)|^2\le\frac{\max_{p\in S}\{\lambda_{\max}(Q(p)),\lambda_{\max}(T(p))\}}{\min_{p\in S}\{\lambda_{\min}(Q(p)),\lambda_{\min}(T(p))\}}\,e^{-\lambda t}\,\mathbb{E}\|\bar{h}\|^2+\frac{\max_{p\in S}\{\lambda_{\max}(Q(p)),\lambda_{\max}(T(p))\}}{\lambda\,\min_{p\in S}\{\lambda_{\min}(Q(p)),\lambda_{\min}(T(p))\}}\,\|\Upsilon\|_\infty^2.$$
Thus the proof is completed.

Corollary 3. When $\Upsilon\equiv 0$, the trivial solution of the system (1) is mean-square exponentially stable if the conditions of Theorem 3 hold.

Corollary 4. When $S=\{1\}$, the trivial solution of the system (1) is exponentially input-to-state stable if the conditions of Theorem 3 hold.
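For readers without MATLAB, the LMI of Theorem 3 can also be tested numerically with CVXPY. The sketch below is our own illustrative reconstruction, not the authors' code: the block ordering follows $\Pi$ above, the paper's $-2Q(p)\bar{E}_1$ term is symmetrized as $-(Q(p)\bar{E}_1+\bar{E}_1Q(p))$, and strict inequalities are enforced with a small margin; all of these are our modeling choices.

```python
import cvxpy as cp
import numpy as np

def theorem3_feasible(E1, E2, A, B, C, D, Gamma, Ld, Md,
                      lam, rho1, rho2, tau11, tau22, eps=1e-6):
    """Feasibility check for the LMIs Xi(p) < 0 of Theorem 3 via CVXPY/SCS.

    Block order: (x, x_d, g(x), g(x_d), y, y_d, f(y), f(y_d)).
    E1, E2: diagonal matrices; A[p], B[p]: n x m; C[p], D[p]: m x n;
    Ld (len m), Md (len n): vectors of Lipschitz constants.
    """
    n, m = E1.shape[0], E2.shape[0]
    S = list(A)
    Q = {p: cp.Variable((n, n), symmetric=True) for p in S}
    T = {p: cp.Variable((m, m), symmetric=True) for p in S}
    Ab = cp.Variable((n, n), symmetric=True)
    Bb = cp.Variable((m, m), symmetric=True)
    l1 = {p: cp.Variable(n) for p in S}     # diag of Lambda_1(p)
    l2 = {p: cp.Variable(n) for p in S}     # diag of Lambda_2(p)
    s1 = {p: cp.Variable(m) for p in S}     # diag of Sigma_1(p)
    s2 = {p: cp.Variable(m) for p in S}     # diag of Sigma_2(p)
    In, Im = np.eye(n), np.eye(m)
    Znn, Znm, Zmn, Zmm = (np.zeros((n, n)), np.zeros((n, m)),
                          np.zeros((m, n)), np.zeros((m, m)))
    cons = [Ab >> eps * In, Bb >> eps * Im]
    for p in S:
        cons += [Q[p] >> eps * In, T[p] >> eps * Im,
                 l1[p] >= eps, l2[p] >= eps, s1[p] >= eps, s2[p] >= eps]
        th11 = (sum(Gamma[p, q] * Q[q] for q in S) - Q[p] @ E1 - E1 @ Q[p]
                + (1 + lam) * Q[p] + Ab + cp.diag(cp.multiply(Md ** 2, l1[p])))
        th22 = (-(1 - tau22) * np.exp(-lam * rho2) * Ab
                + cp.diag(cp.multiply(Md ** 2, l2[p])))
        th33 = (sum(Gamma[p, q] * T[q] for q in S) - T[p] @ E2 - E2 @ T[p]
                + (1 + lam) * T[p] + Bb + cp.diag(cp.multiply(Ld ** 2, s1[p])))
        th44 = (-(1 - tau11) * np.exp(-lam * rho1) * Bb
                + cp.diag(cp.multiply(Ld ** 2, s2[p])))
        QA, QB = Q[p] @ A[p], Q[p] @ B[p]
        CT, DT = C[p].T @ T[p], D[p].T @ T[p]
        Xi = cp.bmat([
            [th11, Znn, Znn, Znn, Znm, Znm, QA, QB],
            [Znn, th22, Znn, Znn, Znm, Znm, Znm, Znm],
            [Znn, Znn, -cp.diag(l1[p]), Znn, CT, Znm, Znm, Znm],
            [Znn, Znn, Znn, -cp.diag(l2[p]), DT, Znm, Znm, Znm],
            [Zmn, Zmn, CT.T, DT.T, th33, Zmm, Zmm, Zmm],
            [Zmn, Zmn, Zmn, Zmn, Zmm, th44, Zmm, Zmm],
            [QA.T, Zmn, Zmn, Zmn, Zmm, Zmm, -cp.diag(s1[p]), Zmm],
            [QB.T, Zmn, Zmn, Zmn, Zmm, Zmm, Zmm, -cp.diag(s2[p])],
        ])
        cons.append(Xi << -eps * np.eye(4 * n + 4 * m))
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status
```

If the solver reports a feasible (optimal) status, Theorem 3 certifies mean-square exponential input-to-state stability for the chosen $\lambda$.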
Remark 4. In [43], the authors discussed the input-to-state stability of impulsive BAM neural networks with stochastic effects and mixed delays. In [44], exponential input-to-state stability for a complex-valued memristor-based system was investigated with the help of the theory of differential inclusions and set-valued maps. In [45], the authors used stochastic analysis techniques to study the mean-square exponential input-to-state stability for a class of stochastic systems with neutral terms and mixed delays. These achievements have advanced the research on input-to-state stability. However, these papers only focused on impulses, memristors and time delays, and their results do not involve linear matrix inequalities. Compared with these
papers, this paper considers Markov jump systems and LMI conditions. What is more, we also design a controller to simplify the algebraic conditions.
4. Numerical simulation

In order to verify the effectiveness and superiority of our results, we present two examples.

Example 1. Consider the system
$$\begin{cases}\dot{x}_1(t)=-e_{11}x_1(t)+a_{11}(r(t))f_1(y_1(t))+a_{12}(r(t))f_2(y_2(t))+b_{11}(r(t))f_1(y_1(t-\tau_1(t)))+b_{12}(r(t))f_2(y_2(t-\tau_1(t)))+u_1(t),\\ \dot{x}_2(t)=-e_{12}x_2(t)+a_{21}(r(t))f_1(y_1(t))+a_{22}(r(t))f_2(y_2(t))+b_{21}(r(t))f_1(y_1(t-\tau_1(t)))+b_{22}(r(t))f_2(y_2(t-\tau_1(t)))+u_2(t),\\ \dot{y}_1(t)=-e_{21}y_1(t)+c_{11}(r(t))g_1(x_1(t))+c_{12}(r(t))g_2(x_2(t))+d_{11}(r(t))g_1(x_1(t-\tau_2(t)))+d_{12}(r(t))g_2(x_2(t-\tau_2(t)))+v_1(t),\\ \dot{y}_2(t)=-e_{22}y_2(t)+c_{21}(r(t))g_1(x_1(t))+c_{22}(r(t))g_2(x_2(t))+d_{21}(r(t))g_1(x_1(t-\tau_2(t)))+d_{22}(r(t))g_2(x_2(t-\tau_2(t)))+v_2(t),\end{cases}\tag{20}$$
where $n=m=2$, $S=\{1,2\}$, $r(0)=1$, $f_j(t)=g_i(t)=\tanh(t)$, $\tau_1(t)=\tau_2(t)=0.2\sin^2(t)$, $u_1(t)=0.2\sin(t)$, $u_2(t)=0.15\cos(t)$, $v_1(t)=0.18\sin(t)$, $v_2(t)=0.2\sin(t)$, $e_{11}=4.1$, $e_{12}=4.4$, $e_{21}=4.3$, $e_{22}=4.7$, the initial condition is $(0.3,-0.36,-0.32,0.2)^T$, the mode-dependent connection weight matrices $A(p)$, $B(p)$, $C(p)$ and $D(p)$ $(p\in S)$ are given $2\times 2$ constant matrices, and the transition-rate matrix is
$$\Gamma=\begin{pmatrix}-4&4\\ 1&-1\end{pmatrix}.$$
We take $\lambda=0.2$, $a_1=a_2=b_1=b_2=3$ and $\mu_j(p)=\xi_i(p)=2$, and the controller gains of Theorem 2 are
$$k_{11}=\max_{p\in S}\Big\{\sum_{j=1}^{2}|a_{1j}(p)|L_j+\sum_{j=1}^{2}|b_{1j}(p)|L_j+1\Big\}=2.1,\qquad k_{12}=2.4,\qquad k_{21}=2.3,\qquad k_{22}=2.7.$$
It is easy to check that $\tau_{11}=\tau_{22}=\rho_1=\rho_2=0.2$ and $L_j=M_i=1$, and that the inequalities (10)–(13) hold: for $(p,i)$ or $(p,j)$ equal to $(1,1),(1,2),(2,1),(2,2)$ in turn, the left-hand sides evaluate to
$$\begin{aligned}&2\xi_i(p)e_{1i}-a_i-\lambda\xi_i(p)-\sum_{j=1}^{2}\mu_j(p)|c_{ji}(p)|M_i-\sum_{q\in S}\gamma_{pq}\xi_i(q):&&3.2,\;3,\;3.2,\;3.2,\\ &2\mu_j(p)e_{2j}-b_j-\lambda\mu_j(p)-\sum_{i=1}^{2}\xi_i(p)|a_{ij}(p)|L_j-\sum_{q\in S}\gamma_{pq}\mu_j(q):&&3.8,\;3.2,\;3.8,\;4.4,\\ &(1-\tau_{22})e^{-\rho_2\lambda}a_i-\sum_{j=1}^{2}\mu_j(p)|d_{ji}(p)|M_i:&&1.3,\;1.3,\;1.3,\;0.1,\\ &(1-\tau_{11})e^{-\rho_1\lambda}b_j-\sum_{i=1}^{2}\xi_i(p)|b_{ij}(p)|L_j:&&1.3,\;0.5,\;0.5,\;1.3,\end{aligned}$$
all of which are nonnegative. For the simulations, the trajectories of the states $x(t)$ and $y(t)$ are shown in Fig. 1 and Fig. 2, respectively, and the state of the jump $r(t)$ is shown in Fig. 3. From the figures, we can see that the solution of the system (20) fluctuates up and down near zero
Fig. 1. The state of x(t).
Fig. 2. The state of y(t).
and is bounded. This coincides with the definition of mean-square exponential input-to-state stability and reveals that our methods are feasible and effective.

Example 2. Consider the system
$$\begin{cases}\dot{x}_1(t)=-e_{11}x_1(t)+a_{11}(r(t))f_1(y_1(t))+a_{12}(r(t))f_2(y_2(t))+b_{11}(r(t))f_1(y_1(t-\tau_1(t)))+b_{12}(r(t))f_2(y_2(t-\tau_1(t)))+u_1(t),\\ \dot{x}_2(t)=-e_{12}x_2(t)+a_{21}(r(t))f_1(y_1(t))+a_{22}(r(t))f_2(y_2(t))+b_{21}(r(t))f_1(y_1(t-\tau_1(t)))+b_{22}(r(t))f_2(y_2(t-\tau_1(t)))+u_2(t),\\ \dot{y}_1(t)=-e_{21}y_1(t)+c_{11}(r(t))g_1(x_1(t))+c_{12}(r(t))g_2(x_2(t))+d_{11}(r(t))g_1(x_1(t-\tau_2(t)))+d_{12}(r(t))g_2(x_2(t-\tau_2(t)))+v_1(t),\\ \dot{y}_2(t)=-e_{22}y_2(t)+c_{21}(r(t))g_1(x_1(t))+c_{22}(r(t))g_2(x_2(t))+d_{21}(r(t))g_1(x_1(t-\tau_2(t)))+d_{22}(r(t))g_2(x_2(t-\tau_2(t)))+v_2(t),\end{cases}\tag{21}$$
where $n=m=2$, $S=\{1,2\}$, $r(0)=1$, $f_j(t)=g_i(t)=\tanh(t)$, $\tau_1(t)=\tau_2(t)=0.3\cos^2(t)$, $u_1(t)=0.19\sin(t)$, $u_2(t)=0.15\cos(t)$, $v_1(t)=0.17\sin(t)$, $v_2(t)=0.19\sin(t)$, $e_{11}=4$, $e_{12}=4.5$, $e_{21}=4$, $e_{22}=3$, the initial condition is $(0.17,-0.15,-0.2,0.1)^T$, and
the mode-dependent connection weight matrices $A(p)$, $B(p)$, $C(p)$ and $D(p)$ $(p\in S)$ are given $2\times 2$ constant matrices.
Fig. 3. The state of r(t).
Fig. 4. The state of x(t).
The transition-rate matrix is
$$\Gamma=\begin{pmatrix}-4&4\\ 2&-2\end{pmatrix}.$$
We take $\lambda=0.2$, and it is not difficult to see that $L=M=E_2$.
By the MATLAB LMI toolbox, one can obtain a feasible solution of the LMI in Theorem 3, for example
$$Q(1)=\begin{pmatrix}5.2497&0.0052\\ 0.0052&6.9897\end{pmatrix},\quad Q(2)=\begin{pmatrix}5.2861&0.1024\\ 0.1024&4.7934\end{pmatrix},\quad T(1)=\begin{pmatrix}5.2798&0.1148\\ 0.1148&4.7854\end{pmatrix},\quad T(2)=\begin{pmatrix}5.2874&0.0092\\ 0.0092&6.9979\end{pmatrix},$$
$$\bar{A}=\begin{pmatrix}16.7307&0.4162\\ 0.4162&17.4080\end{pmatrix},\qquad \bar{B}=\begin{pmatrix}16.6981&0.0194\\ 0.0194&15.6790\end{pmatrix},$$
together with positive diagonal matrices $\Lambda_1(p)$, $\Lambda_2(p)$, $\Sigma_1(p)$ and $\Sigma_2(p)$ $(p\in S)$ whose diagonal entries lie between $6.80$ and $10.04$.
Fig. 5. The state of y(t).
Fig. 6. The state of r(t).
Similarly, the state evolution of x(t), y(t) and r(t) is presented in Fig. 4, Fig. 5 and Fig. 6, respectively. The usefulness of the obtained results is visually shown in the figures, where the solution of the system oscillates around the X-axis and does not exceed a given range. Since the linear matrix inequalities have feasible solutions, we can conclude that the system (21) is mean-square exponentially input-to-state stable. In other words, we transform the input-to-state stability problem into a linear matrix inequality problem.

Remark 5. The algebraic conditions play an important role in exploring the effects of parameters on system stability, and the MATLAB LMI toolbox is a powerful tool for solving complex computational problems. Thus, we present both algebraic conditions and LMI conditions in this paper. In summary, algebraic and LMI approaches are basic methods for studying the stability of the system.
5. Conclusion

Nowadays, input-to-state stability attracts more and more attention; it remains an open problem and deserves further discussion. In this paper, we used a Lyapunov functional to derive algebraic conditions and matrix conditions by means of stochastic theory, inequality techniques and matrix approaches. In particular, we designed a feedback controller which reduces the conservativeness of the algebraic conditions. In a word, the results of this paper are more operable. In the future, we will continue to devote ourselves to input-to-state stability and Markov jump systems.
Declaration of Competing Interest None.
References

[1] L.X. Zhang, E.K. Boukas, Stability and stabilization of Markovian jump linear systems with partly unknown transition probabilities, Automatica 45 (2) (2009) 463–468.
[2] Y.G. Kao, J. Xie, C.H. Wang, Stabilization of singular Markovian jump systems with generally uncertain transition rates, IEEE Trans. Autom. Control 59 (9) (2014) 2604–2610.
[3] R. Saravanakumar, M.S. Ali, C.K. Ahn, H.R. Karimi, P. Shi, Stability of Markovian jump generalized neural networks with interval time-varying delays, IEEE Trans. Neural Netw. Learn. Syst. 28 (8) (2017) 1840–1850.
[4] H. Zhang, Y. Shi, J.M. Wang, On energy-to-peak filtering for nonuniformly sampled nonlinear systems: a Markovian jump system approach, IEEE Trans. Fuzzy Syst. 22 (1) (2013) 212–222.
[5] B.Y. Zhang, W.X. Zheng, S.Y. Xu, Filtering of Markovian jump delay systems based on a new performance index, IEEE Trans. Circuits Syst. I: Regul. Pap. 60 (5) (2013) 1250–1263.
[6] Z.G. Wu, P. Shi, H.Y. Su, J. Chu, Stochastic synchronization of Markovian jump neural networks with time-varying delay using sampled data, IEEE Trans. Cybern. 43 (6) (2013) 1796–1806.
[7] P. Shi, F.B. Li, A survey on Markovian jump systems: modeling and design, Int. J. Control Autom. Syst. 13 (1) (2015) 1–16.
[8] M. Zhang, Q.X. Zhu, New criteria of input-to-state stability for nonlinear switched stochastic delayed systems with asynchronous switching, Syst. Control Lett. 129 (2019) 42–50.
[9] B. Wang, Q.X. Zhu, Stability analysis of semi-Markov switched stochastic systems, Automatica 94 (2018) 72–80.
[10] Q.X. Zhu, Razumikhin-type theorem for stochastic functional differential equations with Lévy noise and Markov switching, Int. J. Control 90 (8) (2017) 1703–1712.
[11] B. Wang, Q.X. Zhu, Stability analysis of Markov switched stochastic differential equations with both stable and unstable subsystems, Syst. Control Lett. 105 (2017) 55–61.
[12] Q.X. Zhu, pth moment exponential stability of impulsive stochastic functional differential equations with Markovian switching, J. Frankl. Inst. 351 (7) (2014) 3965–3986.
[13] G.G. Yin, Q. Zhang, Continuous-Time Markov Chains and Applications, Springer, New York, 2013.
[14] J.P. Cerri, M.H. Terra, Control of discrete-time Markovian jump linear systems subject to partially observed chains, in: Proceedings of the 2012 American Control Conference (ACC), 2012, pp. 1609–1614.
[15] Q.X. Chen, L. Liu, A.L. Wu, Mean-square global exponential stability in Lagrange sense for delayed recurrent neural networks with Markovian switching, Neurocomputing 226 (2017) 58–65.
[16] H.Y. Li, H.J. Cao, P. Shi, X.D. Zhao, Fault-tolerant control of Markovian jump stochastic systems via the augmented sliding mode observer approach, Automatica 50 (7) (2014) 1825–1834.
[17] S.Y. Jiao, H. Shen, Y.L. Wei, X. Huang, Z. Wang, Further results on dissipativity and stability analysis of Markov jump generalized neural networks with time-varying interval delays, Appl. Math. Comput. 336 (2017) 338–350.
[18] W. Liu, State estimation for discrete-time Markov jump linear systems with time-correlated measurement noise, Automatica 76 (2017) 266–276.
[19] J.M. Wang, S.P. Ma, C.H. Zhang, M.Y. Fu, H∞ state estimation via asynchronous filtering for descriptor Markov jump systems with packet losses, Signal Process. 154 (2019) 159–167.
[20] Y.R. Liu, W.B. Liu, M.A. Obaid, L.A. Abbas, Exponential stability of Markovian jumping Cohen–Grossberg neural networks with mixed mode-dependent time-delays, Neurocomputing 177 (2016) 409–415.
[21] L.G. Wu, X.M. Yao, W.X. Zheng, Generalized H2 fault detection for two-dimensional Markovian jump systems, Automatica 48 (8) (2012) 1741–1750.
[22] L.G. Wu, X.J. Su, P. Shi, Output feedback control of Markovian jump repeated scalar nonlinear systems, IEEE Trans. Autom. Control 59 (1) (2014) 199–204.
[23] Q.X. Zhu, Q.Y. Zhang, pth moment exponential stabilisation of hybrid stochastic differential equations by feedback controls based on discrete-time state observations with a time delay, IET Control Theory Appl. 11 (12) (2017) 1992–2003.
[24] N. Kasabov, K. Dhoble, N. Nuntalid, G. Indiveri, Dynamic evolving spiking neural networks for on-line spatio- and spectro-temporal pattern recognition, Neural Netw. 41 (2013) 188–201.
[25] L. Wang, N. Guo, H.Y. Tam, C. Lu, Signal processing using artificial neural network for BOTDA sensor system, Opt. Express 24 (6) (2016) 6769–6782.
[26] W. Feng, S. Yang, Thermomechanical processing optimization for 304 austenitic stainless steel using artificial neural network and genetic algorithm, Appl. Phys. A 112 (2016) 1018–1028.
[27] R. Li, T. Chu, Complete synchronization of Boolean networks, IEEE Trans. Neural Netw. Learn. Syst. 23 (5) (2012) 840–846.
[28] H.J. Jiang, Z.D. Teng, Finite-time synchronization for fuzzy cellular neural networks with time-varying delays, Fuzzy Sets Syst. 297 (2016) 96–111.
[29] A. Meyer-Bäse, A. Moradi Amani, S. Foo, A. Standlbauer, W. Yu, Pinning observability of competitive neural networks with different time-constants, Neurocomputing 329 (2019) 97–102.
[30] G.M.T. Xavier, F.G. Castaneda, L.M.F. Nava, J.M. Cadenas, Memristive recurrent neural network, Neurocomputing 273 (2018) 281–295.
[31] J.D. Cao, Y. Wan, Matrix measure strategies for stability and synchronization of inertial BAM neural network with time delays, Neural Netw. 53 (2014) 165–172.
[32] Z.Y. Wang, L.H. Huang, Global stability analysis for delayed complex-valued BAM neural networks, Neurocomputing 173 (2016) 2083–2089.
[33] H.F. Li, C.D. Li, T.G. Huang, W.L. Zhang, Fixed-time stabilization of impulsive Cohen–Grossberg BAM neural networks, Neural Netw. 98 (2018) 203–211.
[34] P. Zhao, W. Feng, Y. Kang, Stochastic input-to-state stability of switched stochastic nonlinear systems, Automatica 48 (10) (2012) 2569–2576.
[35] J. Liu, X.Z. Liu, W.C. Xie, Input-to-state stability of impulsive and switching hybrid systems with time-delay, Automatica 47 (5) (2011) 899–908.
[36] Y.L. Liu, Y.G. Kao, H.R. Karimi, Z.R. Gao, Input-to-state stability for discrete-time nonlinear switched singular systems, Inf. Sci. 358–359 (2016) 18–28.
[37] D. Nesic, A.R. Teel, Input-to-state stability of networked control systems, Automatica 40 (12) (2004) 2121–2128.
[38] Q.X. Zhu, J.D. Cao, R. Rakkiyappan, Exponential input-to-state stability of stochastic Cohen–Grossberg neural networks with mixed delays, Nonlinear Dyn. 79 (2) (2015) 1085–1098.
[39] P. Ogren, N.E. Leonard, Obstacle avoidance in formation, in: Proceedings of the 2003 IEEE International Conference on Robotics and Automation, 2003, pp. 2492–2497.
[40] D. Angeli, A Lyapunov approach to incremental stability properties, IEEE Trans. Autom. Control 47 (3) (2002) 410–421.
[41] Q.X. Zhu, J.D. Cao, Mean-square exponential input-to-state stability of stochastic delayed neural networks, Neurocomputing 131 (2014) 157–163.
[42] S. Lakshmanan, F.A. Rihan, R. Rakkiyappan, J.H. Park, Stability analysis of the differential genetic regulatory networks model with time-varying delays and Markovian jumping parameters, Nonlinear Anal.: Hybrid Syst. 14 (2014) 1–15.
[43] J.J. Li, W.S. Zhou, Z.C. Yang, State estimation and input-to-state stability of impulsive stochastic BAM neural networks with mixed delays, Neurocomputing 27 (1) (2017) 37–452.
[44] R.N. Guo, Z.Y. Zhang, X.P. Liu, C. Lin, H.X. Wang, J. Chen, Exponential input-to-state stability for complex-valued memristor-based BAM neural networks with multiple time-varying delays, Neurocomputing 275 (2018) 2041–2054.
[45] Y.F. Song, W. Sun, F. Jiang, Mean-square exponential input-to-state stability for neutral stochastic neural networks with mixed delays, Neurocomputing 205 (2016) 195–203.

Guoxiong Xu was born in 1994. He received the B.S. degree in applied mathematics from Anyang Normal University, Anyang, Henan, China, in 2017. He is currently pursuing the M.S. degree in dynamical systems at Southwest University, Chongqing, China. His current research interests include neural networks, control theory, synchronization and stability theory.
Haibo Bao received the B.S. and M.S. degrees from Northeast Normal University, Changchun, China, and the Ph.D. degree from Southeast University, Nanjing, China, all in mathematics/applied mathematics, in 2002, 2005 and 2011, respectively. From July 2005 to April 2012, she was with the School of Science, Hohai University, Nanjing, China. In May 2012, she joined the School of Mathematics and Statistics, Southwest University, Chongqing, China. From October 2014 to September 2015, she was a Research Associate in the Department of Electrical Engineering, Yeungnam University, Korea. From December 2016 to December 2017, she was a visiting scholar in the Department of Physics, Humboldt University. Currently, she is a Professor at Southwest University. Her research interests include neural networks, complex dynamical networks, control theory, and fractional calculus theory. Dr. Bao is currently an Associate Editor of the Journal of Applied Mathematics and Computing (Springer).