Accepted Manuscript
Robust H∞ synchronization of Markov jump stochastic uncertain neural networks with decentralized event-triggered mechanism
R. Vadivel, M. Syed Ali, Faris Alzahrani
PII: S0577-9073(18)31180-8
DOI: https://doi.org/10.1016/j.cjph.2019.02.027
Reference: CJPH 788
To appear in: Chinese Journal of Physics
Received date: 25 August 2018
Revised date: 19 February 2019
Accepted date: 24 February 2019
Please cite this article as: R. Vadivel, M. Syed Ali, Faris Alzahrani, Robust H∞ synchronization of Markov jump stochastic uncertain neural networks with decentralized event-triggered mechanism, Chinese Journal of Physics (2019), doi: https://doi.org/10.1016/j.cjph.2019.02.027
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Highlights
• H∞ synchronization of Markov jump neural networks is considered.
• A decentralized event-triggered mechanism is adopted.
• The Lyapunov-Krasovskii functional technique and the LMI method are utilized.
• A controller is designed such that the stochastic neural networks are stable.
• A new stochastic synchronization law is designed to ensure synchronization.
Robust H∞ synchronization of Markov jump stochastic uncertain neural networks with decentralized event-triggered mechanism¹
R. Vadivel^a, M. Syed Ali^a, Faris Alzahrani^b
^a Department of Mathematics, Thiruvalluvar University, Serkkadu, Vellore, Tamil Nadu, India.
^b Department of Mathematics, Faculty of Science, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia.
Abstract: This study examines the problem of robust H∞ synchronization of Markov jump stochastic neural networks with mixed time-varying delays under a decentralized event-triggered scheme. We present a decentralized event-triggered scheme, which uses only locally available information, for determining the instants of communication from the sensors to the controller. The jumping parameters are modelled as a continuous-time, finite-state Markov chain. By constructing a suitable Lyapunov-Krasovskii functional (LKF) and using the Newton-Leibniz formulation together with the free-weighting-matrix method, sufficient conditions are derived under which the proposed neural network is stochastically stable. Moreover, these stability criteria are expressed in terms of linear matrix inequalities (LMIs), which can be solved efficiently via standard numerical packages. The effectiveness of the proposed results is illustrated by numerical examples.
Key Words: Decentralized event-triggered; Markovian jump parameters; Lyapunov-Krasovskii functional; Linear matrix inequality; Synchronization.
1. Introduction
Neural networks have found a large number of successful applications in numerous fields, such as pattern recognition, signal and image processing, classification, intelligent robotics and automatic control. Various neural network models, such as Hopfield neural networks, cellular neural networks and stochastic neural networks, have been extensively investigated [1]-[8]. In reality, time delays are inevitably encountered in the implementation of neural networks and usually lead to undesirable dynamic behaviors such as instability, oscillation and chaos [9]-[14]. It is thus important to incorporate time delays into the dynamical analysis of neural networks. It is also noted that stochastic disturbances are significant when dealing with the stability problem of neural networks: in real nervous systems, stochastic disturbances and parameter uncertainties are among the prime sources of performance degradation. Hence, the stability analysis of stochastic delayed neural networks, with or without parameter uncertainties, has attracted increasing interest among researchers [15]-[18].
In addition, Markovian jump systems can be described as a set of linear systems whose transitions between models are governed by a Markov chain taking values in a finite mode set. Such systems have been applied to economic systems, manufacturing systems and other practical systems, and may encounter random abrupt variations in their structures [19]-[25]. The synchronization problem for neural networks is another hot topic. Synchronization means putting into synchrony, a pair of events happening at the same time; that is, the state of the error system for the neural networks approaches zero as time goes to infinity. The function of an artificial neural network is to simulate the human brain in processing information. When an extraneous message stimulates our brain, the central nervous system and the sensory nervous systems together respond to the message, and their responses should be rational. In past years, synchronization of neural networks has been successfully applied in several fields, such as signal processing, combinatorial optimization and secure communication systems [26]-[29]. For example, the authors of [30] discussed stochastic synchronization of Markovian jump neural networks with time-varying delay using sampled data. The authors in [31] investigated adaptive synchronization for stochastic T-S fuzzy neural
¹E-mail: [email protected] (M. Syed Ali), [email protected] (R. Vadivel), [email protected] (Jinde Cao), *Corresponding author. E-mail: [email protected] (Young Hoon Joo).
networks with time-delay and Markovian jumping parameters. The adaptive observer-based projective synchronization of chaotic neural networks with mixed time delays was studied in [32]. On the other hand, the performance of a real-world neural control system is influenced by external disturbances, so it is important to design a valid control law that eliminates the effects of approximation errors and external disturbances and achieves the desired performance [33]. Hence, the control of time-delayed neural networks is of both practical and theoretical importance, and the H∞ setting offers excellent advantages such as efficient disturbance rejection and reduced sensitivity to uncertainties [34]-[37].
In recent years, event-triggered control has gained increasing attention in the analysis and implementation of real-time control systems. As an alternative to the time-triggered control scheme, the event-triggered scheme is an efficient way to reduce the burden on communication networks and improve transmission efficiency [38]-[40]. Compared with the time-triggered scheme, the advantage of the event-triggered scheme is that it facilitates efficient usage of the shared communication resources: whether the current sampled information is transmitted or not depends on pre-designed conditions, which avoids much unnecessary transmission. Especially for battery-powered wireless equipment, reducing the load of network communication has an essential effect on battery life. How to reduce the waste of limited network resources has therefore become a notable and demanding task for researchers. Experimental results in [41] show that the number of control-effort executions can be efficiently decreased under the event-triggered scheme. As a result, the event-triggered scheme, in which the control effort is executed only when a predefined event condition is violated, has become a popular research topic in the area of control. In the event-triggered control structure, the need for sampling or communication is decided by the occurrence of an event rather than by time. For economical use of the limited transmission resources (e.g. battery power and/or network bandwidth) and to avoid needless transmissions, it is essential to introduce an event-triggered transmission mechanism into a decentralized event-triggered control implementation. A challenge of such decentralized event-triggered control is that the complete measured data of the system output is not available at any of the sensor nodes. To deal with this problem, efforts have been made recently ([42]-[47]).
One of the important goals in studying stochastic neural networks is to make the systems synchronize and achieve a prescribed performance by designing a suitable control law. Moreover, synchronization requires communication between nodes, which can cause network congestion and waste network resources. To overcome the conservativeness of the synchronization scheme, the decentralized event-triggered mechanism is proposed, where the controller updates are determined only by certain events that are triggered depending on the agents' dynamic behavior. Therefore, the study of event-triggered synchronization is of great significance. The problem of synchronization of switched neural networks with communication delays via event-triggered control was studied in [48]. In [49], the problem of pinning exponential synchronization of complex networks via event-triggered communication with combinational measurements was investigated. In [50], event-triggered asynchronous intermittent communication was presented for synchronization in complex dynamical networks. Recently, synchronization of master-slave neural networks with a decentralized event-triggered communication scheme was investigated in [51]. In recent years, the event-triggered H∞ concept has also received considerable attention among researchers [52]-[54]. Unfortunately, due to some theoretical and technical difficulties, the robust H∞ synchronization of Markov jump stochastic uncertain neural networks with a decentralized event-triggered mechanism has not yet been addressed; it remains an open and challenging problem. This situation encourages our present research. Motivated by the studies mentioned above, this paper aims to develop robust H∞ synchronization of Markov jump stochastic uncertain neural networks with a decentralized event-triggered mechanism.
Firstly, sufficient criteria for H∞ synchronization with the decentralized event-triggered scheme are provided for the addressed stochastic neural networks by utilizing the Lyapunov-Krasovskii functional technique, new delay-dependent stochastic stability conditions and the LMI method. Then, a controller is designed such that the class of stochastic neural networks is stochastically stable with a prescribed H∞ disturbance attenuation performance level. Finally, numerical examples are given to illustrate the effectiveness of the obtained results. The contribution of this paper lies in the following three aspects: (i) A new class of uncertain Markovian jumping neural networks is introduced, whose H∞ synchronization, which has rarely been investigated, is considered. (ii) The decentralized event-triggered communication scheme is adopted with a state feedback controller, which reduces the usage of limited network resources. (iii) A new stochastic synchronization law is designed to ensure that each slave system synchronizes with the master system. All the main results are formulated in terms of LMIs, which can be checked numerically using the effective LMI toolbox in MATLAB.
Notation: Throughout this paper, the set of positive integers is denoted by N, R^n denotes the n-dimensional Euclidean space, and R^{n×m} is the set of n × m real matrices. For symmetric matrices Ā and C̄, [Ā B̄; ∗ C̄] represents a symmetric matrix, where the notation ∗ denotes the entries implied by symmetry. For X̄ ∈ R^{n×n}, the notation X̄ > 0 (respectively, X̄ ≥ 0) means that the matrix X̄ is real symmetric positive definite (positive semi-definite). The superscript T denotes the transpose of a matrix (or vector). Let (Ω, F, P) be a complete probability space with an increasing family (F_t)_{t>0} of σ-algebras (F_t)_{t>0} ⊂ F, where Ω denotes the sample space, F is the σ-algebra of subsets of the sample space, and P is the probability measure on F. I denotes the identity matrix of compatible dimensions, diag{...} indicates a block-diagonal matrix, and ‖·‖ is the Euclidean norm in R^n. Let {%_t, t ≥ 0} be a right-continuous Markov process on the probability space taking values in the finite set S = {1, 2, ..., N} with generator Π = (π_ij) given by
P{%_{t+∆} = j | %_t = i} = π_ij ∆ + o(∆) if i ≠ j, and 1 + π_ii ∆ + o(∆) if i = j,
where ∆ > 0, lim_{∆→0} o(∆)/∆ = 0, and π_ij ≥ 0 is the transition rate from mode i to mode j if i ≠ j, while π_ii = −Σ_{j=1, j≠i}^N π_ij for each mode i. Note that if π_ii = 0 for some i ∈ S, then the i-th mode is called a terminal mode.
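The generator description above can be made concrete with a small simulation. The sketch below, which is purely illustrative and not part of the paper's numerical examples, samples a mode trajectory {%_t} from a given generator Π: each mode i is held for an exponential time with rate −π_ii, after which the embedded chain jumps according to the off-diagonal rates.

```python
import numpy as np

def simulate_modes(Pi, t_end, mode0=0, seed=0):
    """Sample a trajectory of the jump mode %t from generator Pi.

    Pi[i, j] (i != j) is the transition rate from mode i to mode j;
    each row sums to zero, so Pi[i, i] = -(sum of off-diagonal rates).
    Returns the jump times and the mode held from each jump time on.
    """
    rng = np.random.default_rng(seed)
    times, modes = [0.0], [mode0]
    t, i = 0.0, mode0
    while True:
        rate = -Pi[i, i]                  # total exit rate of mode i
        if rate <= 0:                     # terminal mode: no more jumps
            break
        t += rng.exponential(1.0 / rate)  # exponential holding time
        if t >= t_end:
            break
        probs = Pi[i].copy()
        probs[i] = 0.0
        probs /= rate                     # embedded-chain jump distribution
        i = int(rng.choice(len(probs), p=probs))
        times.append(t)
        modes.append(i)
    return times, modes

# Two-mode example generator (rows sum to zero); values are arbitrary
Pi = np.array([[-3.0, 3.0],
               [4.0, -4.0]])
times, modes = simulate_modes(Pi, t_end=10.0)
```

Averaged over long horizons, the fraction of time spent in each mode approaches the stationary distribution of Π, which is how the jumping parameters of systems (1) and (5) would be generated in a numerical experiment.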
2. Problem formulation and some preliminaries
Consider the following uncertain Markovian jump stochastic neural network with mixed time-varying delays:
dx(t) = [−Ā₁(%_t,t)x(t) + B̄₁(%_t,t)f(x(t)) + C̄₁(%_t,t)f(x(t−τ(t))) + D̄₁(%_t,t)∫_{t−d(t)}^t f(x(s))ds + J(t)]dt + σ[x(t), x(t−τ(t)), t, %_t]dw(t),
x(s) = φ(s), s ∈ [−max(τ₂, d̄), 0],   (1)
Z_x(t) = G_f(%_t)x(t) + C_d(%_t)x(t−τ(t)) + D_g(%_t)∫_{t−d(t)}^t f(x(s))ds,
with x(t) = [x₁(t), x₂(t), ..., x_n(t)]^T ∈ R^n, where x_i(t) is the master system's state associated with the i-th neuron; f(x(t)) = [f₁(x(t)), f₂(x(t)), ..., f_n(x(t))]^T ∈ R^n denotes the neuron activation function; and J(t) = [J₁(t), J₂(t), ..., J_n(t)]^T ∈ R^n is an external input vector. The time-varying vector-valued initial function φ(t) is continuously differentiable, w(t) is a zero-mean real scalar Wiener process, and Z_x(t) ∈ R^s is the controlled output of the master network. The matrices Ā₁(%_t,t) = A₁(%_t) + ∆A₁(%_t,t), B̄₁(%_t,t) = B₁(%_t) + ∆B₁(%_t,t), C̄₁(%_t,t) = C₁(%_t) + ∆C₁(%_t,t), D̄₁(%_t,t) = D₁(%_t) + ∆D₁(%_t,t), Ē₁(%_t,t) = E₁(%_t) + ∆E₁(%_t,t), and G_f(%_t), C_d(%_t), D_g(%_t) are known real constant matrices of appropriate dimensions for all %_t ∈ S. ∆A₁(%_t,t), ∆B₁(%_t,t), ∆C₁(%_t,t), ∆D₁(%_t,t) and ∆E₁(%_t,t) are real-valued unknown matrices representing time-varying
parameter uncertainties, and are assumed to be of the form
[∆A₁(%_t,t) ∆B₁(%_t,t) ∆C₁(%_t,t) ∆D₁(%_t,t) ∆E₁(%_t,t)] = E_b(%_t)F(%_t,t)[ξ_a(%_t) ξ_b(%_t) ξ_c(%_t) ξ_d(%_t) ξ_e(%_t)],   (2)
where E_b(%_t), ξ_a(%_t), ξ_b(%_t), ξ_c(%_t), ξ_d(%_t) and ξ_e(%_t) denote known real constant matrices for all %_t ∈ S, and the uncertain time-varying matrix F(%_t,t) satisfies
F^T(%_t,t)F(%_t,t) ≤ I, ∀ %_t ∈ S, t ≥ 0.   (3)
Assumption 2.1. Each neuron activation function f_j(·) (j = 1, 2, ..., n) is continuous and bounded, and satisfies the following condition:
σ_j⁻ ≤ (f_j(u) − f_j(v))/(u − v) ≤ σ_j⁺, ∀ u, v ∈ R, u ≠ v,   (4)
where σ_j⁻, σ_j⁺ are some constants.
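As an illustrative check of the sector condition (4), one can estimate the bounds σ_j⁻, σ_j⁺ for a concrete activation function from its difference quotients. The sketch below (an assumption for illustration, not taken from the paper) does this for tanh, which satisfies (4) with σ⁻ = 0 and σ⁺ = 1.

```python
import numpy as np

def sector_bounds(f, samples):
    """Empirically estimate the sector bounds (sigma_minus, sigma_plus)
    of f from the difference quotients (f(u)-f(v))/(u-v) over all
    distinct pairs drawn from `samples`."""
    u = samples[:, None]
    v = samples[None, :]
    mask = u != v
    quotients = (f(u) - f(v))[mask] / (u - v)[mask]
    return quotients.min(), quotients.max()

pts = np.linspace(-5.0, 5.0, 401)
lo, hi = sector_bounds(np.tanh, pts)
# For tanh every difference quotient lies in (0, 1], i.e. the sector [0, 1]
```

Since σ_j⁻ may be negative or zero, this class of activation functions is broader than the usual Lipschitz or monotone sigmoid classes, as Remark 3.2 later emphasizes.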
According to the master-slave configuration, with system (1) regarded as the master system, the corresponding slave system with control input is constructed as follows:
dy(t) = [−Ā₁(%_t,t)y(t) + B̄₁(%_t,t)f(y(t)) + C̄₁(%_t,t)f(y(t−τ(t))) + D̄₁(%_t,t)∫_{t−d(t)}^t f(y(s))ds + H̄₁(%_t)ω(t) + Ē₁(%_t,t)u(t) + J(t)]dt + σ[y(t), y(t−τ(t)), t, %_t]dw(t),
y(s) = φ(s), s ∈ [−max(τ₂, h), 0],   (5)
Z_y(t) = G_f(%_t)y(t) + C_d(%_t)y(t−τ(t)) + D_g(%_t)∫_{t−d(t)}^t f(y(s))ds,
with y(t) = [y₁(t), y₂(t), ..., y_n(t)]^T ∈ R^n, where y_i(t) is the slave system's state associated with the i-th neuron; u(t) ∈ R^l is the control input; ω(t) ∈ R^n is the disturbance input, which belongs to L₂[0, ∞); and Z_y(t) is the controlled output of the slave network. Here τ(t) and d(t) denote the discrete and distributed time-varying delays, respectively, and are assumed to satisfy
0 ≤ τ₁ ≤ τ(t) ≤ τ₂, τ̇(t) ≤ μ₁, 0 ≤ d(t) ≤ d̄, ḋ(t) ≤ μ₂.
Let θ(t) = y(t) − x(t) be the error state of the master-slave systems (1) and (5). Then subtracting (1) from (5) yields the following error dynamical system:
dθ(t) = [−Ā₁(%_t,t)θ(t) + B̄₁(%_t,t)g(θ(t)) + C̄₁(%_t,t)g(θ(t−τ(t))) + D̄₁(%_t,t)∫_{t−d(t)}^t g(θ(s))ds + H̄₁(%_t)ω(t) + Ē₁(%_t,t)u(t)]dt + σ[θ(t), θ(t−τ(t)), t, %_t]dw(t),
θ(s) = φ(s), s ∈ [−max(τ₂, d̄), 0],   (6)
Z_θ(t) = G_f(%_t)θ(t) + C_d(%_t)θ(t−τ(t)) + D_g(%_t)∫_{t−d(t)}^t g(θ(s))ds,
where g(θ(t)) = f(y(t)) − f(x(t)), and it can be verified that the function g_j(·) satisfies
σ_j⁻ ≤ g_j(w)/w ≤ σ_j⁺, j = 1, 2, ..., n, w ∈ R, w ≠ 0.   (7)
In this setup, the error state θ(t) and its n entries are grouped into v nodes, and the signal of node l ∈ {1, 2, ..., v} is denoted by θ_l(t). Denoting the release instants of the l-th event generator by {t^l_{k_l}h}_{k_l=0}^∞, the next release instant t^l_{k_l+1}h of event generator l is determined by
t^l_{k_l+1}h = t^l_{k_l}h + min_{l̃∈Z⁺} { l̃h : s_l^T(t^l_{k_l}h + l̃h) Γ_l s_l(t^l_{k_l}h + l̃h) > δ_l θ_l^T(t^l_{k_l}h) Γ_l θ_l(t^l_{k_l}h) }.   (8)
Moreover, t^l_{k_l}h is the k_l-th communication instant of the l-th transmitter; Z⁺ denotes the set of positive integers; Γ_l > 0 is a flexible weighting matrix that determines the threshold of the event-triggered transmitter; and the error between the most recently transmitted sample and the current sampled vector is described as
s_l(t^l_{k_l}h + l̃h) = θ_l(t^l_{k_l}h + l̃h) − θ_l(t^l_{k_l}h).   (9)
Moreover, from the decentralized event-triggered condition (8), the set of release instants {t^l_{k_l}h} is a subset of {0, h, 2h, ...}. Notice that if δ_l = 0, then {t^l_{k_l}h} equals {0, h, 2h, ...}; that is, all sampled synchronization-error signals are transmitted to the controller, which reduces to the periodic-sampling synchronization case. The decentralized event-triggered communication scheme (8) is employed to eliminate some redundant data transmissions. The event-triggered mechanism and the sampling are executed separately. In this paper, we design the following state feedback controller:
u(t) = K_i [θ^T(t^1_{k_1}h), θ^T(t^2_{k_2}h), ..., θ^T(t^v_{k_v}h)]^T, t ∈ [t_k h, t_{k+1}h),   (10)
where K_i ∈ R^{m×n} is to be determined, and
t_k h = max_{l=1,2,...,v} {t^l_{k_l}h}, t_{k+1}h = min_{l=1,2,...,v} {t^l_{k_l+1}h}.   (11)
Let v_k = t_{k+1} − t_k. Then the interval [t_k h, t_{k+1}h) can be expressed as
[t_k h, t_{k+1}h) = ⋃_{l̃=0}^{v_k−1} φ_l̃,   (12)
where φ_l̃ = [t_k h + l̃h, t_k h + l̃h + h). Define η₂(t) = t − t_k h − l̃h for t ∈ φ_l̃. Clearly, η₂(t) is a piecewise-linear function satisfying
0 ≤ η₂(t) ≤ h, t ∈ φ_l̃, η̇₂(t) = 1, t ≠ t_k h + l̃h.   (13)
Therefore, the threshold error s_l(t_k h + l̃h) can be rewritten as
s_l(t − η₂(t)) = θ_l(t − η₂(t)) − θ_l(t^l_{k_l}h), t ∈ φ_l̃.   (14)
Denote s(t − η2 (t)) = col{s1 (t − η2 (t)), s2 (t − η2 (t)) , ..., sv (t − η2 (t))}. Then the control input u(t) can be obtained as
u(t) = K_i(θ(t − η₂(t)) − s(t − η₂(t))), t ∈ φ_l̃.   (15)
Substituting (15) into (6), we get
dθ(t) = [−Ā₁(%_t,t)θ(t) + B̄₁(%_t,t)g(θ(t)) + C̄₁(%_t,t)g(θ(t−τ(t))) + D̄₁(%_t,t)∫_{t−d(t)}^t g(θ(s))ds + H̄₁(%_t)ω(t) + Ē₁(%_t,t)(K_i(θ(t−η₂(t)) − s(t−η₂(t))))]dt + σ[θ(t), θ(t−τ(t)), t, %_t]dw(t),
θ(s) = φ(s), s ∈ [−max(τ₂, h), 0],   (16)
Z_θ(t) = G_f(%_t)θ(t) + C_d(%_t)θ(t−τ(t)) + D_g(%_t)∫_{t−d(t)}^t g(θ(s))ds.
From the decentralized event-triggered condition (8), the following condition holds for t ∈ φ_l̃:
s^T(t−η₂(t)) Γ_i s(t−η₂(t)) ≤ δ[θ(t−η₂(t)) − s(t−η₂(t))]^T Γ_i [θ(t−η₂(t)) − s(t−η₂(t))],   (17)
with Γ_i = diag{Γ_1i, Γ_2i, ..., Γ_vi} and δ = diag{δ₁, δ₂, ..., δ_v}, which will be used in the stability analysis of the closed-loop system (16) later. Now, we give the following assumption and definitions.
Assumption 2.2. Assume that σ : R^n × R^n × R⁺ × S → R^n is locally Lipschitz continuous and satisfies the linear growth condition [18]. Moreover, σ satisfies
trace[σ^T(x₁, x₂, t, i) σ(x₁, x₂, t, i)] ≤ x₁^T Σ̄_1i x₁ + x₂^T Σ̄_2i x₂,
(18)
where Σ̄_1i and Σ̄_2i are known positive constant matrices of appropriate dimensions, for all x₁, x₂ ∈ R^n and %(t) = i, i ∈ S.
Definition 2.3. [12] The Markovian jump system (16) with w(t) ≡ 0 is said to be stochastically stable if, for all finite ψ(t) ∈ R^n defined on [−max(τ₂, d̄), 0] and any initial mode r₀ ∈ S, there exists a z̄ > 0 satisfying
lim_{T→∞} E{ ∫₀^T x^T(t, ψ, r₀) x(t, ψ, r₀) dt | ψ, r₀ } ≤ x₀^T z̄ x₀.
Definition 2.4. [12] For a real number γ > 0, the Markovian jump neural network (16) is said to possess the γ-disturbance attenuation property if, for all ω(t) ∈ L₂[0, ∞), ω(t) ≠ 0, system (16) is stochastically stable and the response Z_θ : [0, ∞) → R^p under the zero initial condition, i.e. ψ = 0, satisfies
E{ ∫₀^∞ Z_θ^T(t)Z_θ(t) dt } ≤ γ² ∫₀^∞ ω^T(t)ω(t) dt.   (19)
Let
‖Z_θ‖₂ = ( E{ ∫₀^∞ Z_θ^T(t)Z_θ(t) dt } )^{1/2} and ‖ω(t)‖₂ = ( ∫₀^∞ ω^T(t)ω(t) dt )^{1/2},
and let V_{z_θω} denote the system from the exogenous input ω(t) to the controlled output Z_θ(t). Then the H∞ norm of V_{z_θω} is
‖V_{z_θω}‖_∞ = sup_{ω(t)∈L₂(0,∞)} ‖Z_θ‖₂ / ‖ω(t)‖₂.
Hence (19) implies ‖V_{z_θω}‖_∞ ≤ γ. In other words, γ-disturbance attenuation implies γ-suboptimal H∞ control.
Before deriving our main results, we state the following lemmas.
Lemma 2.5. [11] For a symmetric positive definite matrix R̃ ∈ R^{n×n}, scalars a < b, and a vector function ω : [a, b] → R^n such that the integrations concerned are well defined, the following inequality holds:
∫_a^b ω^T(s)R̃ω(s)ds ≥ (1/(b−a)) [χ₁; χ₂]^T [R̃ 0; 0 3R̃] [χ₁; χ₂],
where χ₁ = ∫_a^b ω(s)ds and χ₂ = χ₁ − (2/(b−a)) ∫_a^b ∫_a^s ω(u)du ds = −χ₁ + (2/(b−a)) ∫_a^b ∫_s^b ω(u)du ds.
Lemma 2.6. [10] For any constant matrix M̃ ∈ R^{n×n}, M̃ = M̃^T > 0, a scalar η > 0, and a vector function w̄ : [0, η] → R^n such that the integrations concerned are well defined, the following inequality holds:
( ∫₀^η w̄(s)ds )^T M̃ ( ∫₀^η w̄(s)ds ) ≤ η ∫₀^η w̄^T(s) M̃ w̄(s)ds.
Lemma 2.7. [14] Let M̃, P̃, Q̃ be given matrices such that Q̃ > 0. Then
[P̃ M̃^T; M̃ −Q̃] < 0 ⇔ P̃ + M̃^T Q̃⁻¹ M̃ < 0.
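The Schur complement equivalence of Lemma 2.7 can be sanity-checked numerically. In the sketch below (random matrices chosen so that the small block is forced negative definite; purely illustrative), both the block matrix and the Schur complement are tested for negative definiteness via their eigenvalues.

```python
import numpy as np

def is_neg_def(X):
    """True if the symmetric matrix X is negative definite."""
    return bool(np.all(np.linalg.eigvalsh(X) < 0))

rng = np.random.default_rng(2)
n = 4
B = rng.standard_normal((n, n))
Q = B @ B.T + np.eye(n)                 # Q > 0
Mt = rng.standard_normal((n, n))
# Choose P so that P + M^T Q^{-1} M = -I < 0 by construction
P = -(Mt.T @ np.linalg.solve(Q, Mt)) - np.eye(n)

big = np.block([[P, Mt.T],
                [Mt, -Q]])              # the LMI block of Lemma 2.7
small = P + Mt.T @ np.linalg.solve(Q, Mt)
# Lemma 2.7: big < 0 iff small < 0 (both hold here by construction)
```

In the proof of Theorem 3.1 this lemma is what converts the nonlinear terms like M̄X₁⁻¹M̄^T in (59) into the linear block form (21).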
Lemma 2.8. [20] Let Ȳ, H and J̄(t) be real matrices of appropriate dimensions, with J̄(t) satisfying J̄^T(t)J̄(t) ≤ I. Then, for any constant ε > 0, the following inequality holds:
Ȳ J̄(t)H + H^T J̄^T(t)Ȳ^T ≤ ε Ȳ Ȳ^T + ε⁻¹ H^T H.
Lemma 2.9. [14] For any matrices U, V, the following matrix inequality holds:
U^T V + V^T U ≤ U^T P̂⁻¹ U + V^T P̂ V,
where P̂ is a given positive definite matrix.
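Lemma 2.9 is a matrix Young-type inequality; its content is that the gap between the two sides is positive semidefinite for any P̂ > 0. The sketch below (random matrices, purely illustrative) checks this directly.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
U = rng.standard_normal((n, n))
V = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
P_hat = C @ C.T + np.eye(n)         # a given positive definite matrix

lhs = U.T @ V + V.T @ U
rhs = U.T @ np.linalg.solve(P_hat, U) + V.T @ P_hat @ V
# Lemma 2.9: gap = rhs - lhs is positive semidefinite, since it equals
# (P^{-1/2}U - P^{1/2}V)^T (P^{-1/2}U - P^{1/2}V)
gap = rhs - lhs
```

This is the bound used in (50)-(51) to separate the free-weighting matrices M̄, N̄, Ū from the stochastic integral terms.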
Lemma 2.10. [40] For matrices R > 0, X^T = X and any scalar ρ, the following inequality holds:
−X R⁻¹ X ≤ ρ²R − 2ρX.
3. Main results
Here, some LMI conditions will be developed to ensure that the master system (1) and the slave system (5) are synchronous, by employing the Lyapunov functional and a stochastic analysis approach. For presentation convenience, we denote
F₁ = diag{σ₁⁻σ₁⁺, σ₂⁻σ₂⁺, ..., σ_n⁻σ_n⁺},
F₂ = diag{(σ₁⁻+σ₁⁺)/2, (σ₂⁻+σ₂⁺)/2, ..., (σ_n⁻+σ_n⁺)/2}.
For the sake of notational simplicity, in the sequel, for each possible %_t = i (i = 1, 2, ..., N), we simply write L(%_t) as L_i(t); for instance, Ā₁(%_t,t) is denoted by Ā_1i(t), B̄₁(%_t,t) by B̄_1i(t), and so on. The system (16) can then be written as
dθ(t) = [−Ā_1i θ(t) + B̄_1i g(θ(t)) + C̄_1i g(θ(t−τ(t))) + D̄_1i ∫_{t−d(t)}^t g(θ(s))ds + H̄_1i ω(t) + Ē_1i(K_i(θ(t−η₂(t)) − s(t−η₂(t))))]dt + σ[θ(t), θ(t−τ(t)), t, i]dw(t),
θ(s) = φ(s), s ∈ [−max(τ₂, h), 0],   (20)
Z_θ(t) = G_fi θ(t) + C_di θ(t−τ(t)) + D_gi ∫_{t−d(t)}^t g(θ(s))ds.
Theorem 3.1. For a prescribed scalar γ > 0 and some given positive scalars τ₁, τ₂, μ₁, h, d̄, δ, as well as the matrices K_i (i = 1, 2, ..., N), the Markovian jumping stochastic neural network in (20) with ∆Ā_1i(t) = ∆B̄_1i(t) = ∆C̄_1i(t) = ∆D̄_1i(t) = ∆Ē_1i(t) = 0 is mean-square stochastically stable if there exist positive definite matrices P_i, L = [L₁ L₂; L₂^T L₃], R = [R₁ R₂; R₂^T R₃], Z = [Z₁ Z₂; Z₂^T Z₃], S₃, S₄, E₁, E₂, W, X₁, X₂, T₁, T₂, Q_i (i = 1, 2, 3), positive diagonal matrices L, S, appropriately dimensioned matrices M̄, N̄, Ū, and scalars λ̄_1i > 0, λ̄₂ > 0, λ̄₃ > 0, such that the following LMIs are satisfied:
Ξ = [Ψ₁ Ψ₂; ∗ Ψ₃] < 0,   (21)
P_i ≤ λ̄_1i I,   (22)
X₁ ≤ λ̄₂ I,   (23)
X₂ ≤ λ̄₃ I,   (24)
where
Ψ₁ = [Θ̃₁ P_i H_1i; ∗ −γ²I], Ψ₂ = [Γ̄₂ Γ̄₃ Γ̄₄ Γ̄₅; Γ̄₂₁ 0 0 0], Ψ₃ = diag{Υ̃₂, Υ̃₃, Υ̃₄, −I}, Θ̃₁ = (Λ_mn)₁₈ₓ₁₈,
Λ11 = −P_i Ā_1i − Ā_1i^T P_i + Σ_{j=1}^N π_ij P_j + L₁ + Q₁ + Q₂ − (π²/4)W + η_M² Q₃ − F₁L + [λ̄_1i + τ₁λ̄₂ + (τ₂−τ₁)λ̄₃]Σ̄_1i,
Λ15 = P_i B̄_1i + L₂ + F₂L, Λ16 = P_i C̄_1i, Λ1,13 = P_i D̄_1i, Λ1,16 = P_i Ē_1i K_i + (π²/4)W, Λ1,17 = −P_i Ē_1i K_i, Λ1,19 = P_i H̄_1i,
Λ22 = −(1−μ₁)L₁ − F₁S + [λ̄_1i + τ₁λ̄₂ + (τ₂−τ₁)λ̄₃]Σ̄_2i, Λ26 = −(1−μ₁)L₂ + F₂S,
Λ33 = −L₁ + R₁ + Z₁, Λ37 = Z₂ − L₂ + R₂, Λ44 = −Z₁, Λ48 = −Z₂,
Λ55 = L₃ + τ₂²S₃ + (τ₂−τ₁)²S₄ + E₁ + d̄²E₂ − L,
Λ66 = −(1−μ₁)R₃ − (1−μ₂)E₁ − S, Λ77 = −L₃ + R₃ + Z₃, Λ88 = −Z₃,
Λ99 = −4S₃, Λ9,10 = 3S₃/τ₂ + 3S₃^T/τ₂, Λ10,10 = −12S₃/τ₂² − 12S₃^T/τ₂²,
Λ11,11 = −4Q₃, Λ11,12 = 3Q₃/η_M + 3Q₃^T/η_M, Λ12,12 = −12Q₃/η_M² − 12Q₃^T/η_M²,
Λ13,13 = −4E₂, Λ13,18 = 3E₂/h + 3E₂^T/h, Λ14,14 = −S₄, Λ15,15 = −Q₁,
Λ16,16 = δΓ_i − (π²/4)W, Λ16,17 = −δΓ_i, Λ17,17 = −Γ_i + δΓ_i, Λ18,18 = −12E₂/d̄² − 12E₂^T/d̄²,
Γ̄₂ = [τ₁Ã₁^T T₁, (τ₂−τ₁)Ã₁^T T₂], Γ̄₂₁ = [τ₁H̄_1i^T T₁, (τ₂−τ₁)H̄_1i^T T₂], Υ̃₂ = diag{−τ₁T₁, −(τ₂−τ₁)T₂},
Γ̄₃ = [√τ₁ M̄, √(τ₂−τ₁) N̄, √(τ₂−τ₁) Ū], Υ̃₃ = diag{−T₁, −T₂, −T₂},
Γ̄₄ = [M̄, N̄, Ū], Υ̃₄ = diag{−X₁, −X₂, −X₂},
Ã₁ = [−Ā_1i, 0, 0, 0, B̄_1i, C̄_1i, 0, ..., 0 (6 entries), D̄_1i, 0, 0, Ē_1iK_i, −Ē_1iK_i, 0],
Γ̄₅ = col{G_fi, C_di, 0, ..., 0 (10 entries), D_gi, 0, ..., 0 (6 entries)}.
Proof. For simplicity, we denote
ê(t) = −Ā_1iθ(t) + B̄_1i g(θ(t)) + C̄_1i g(θ(t−τ(t))) + D̄_1i ∫_{t−d(t)}^t g(θ(s))ds + H̄_1iω(t) + Ē_1i(K_i(θ(t−η₂(t)) − s(t−η₂(t)))),
α̂(t) = σ̃(θ(t), θ(t−τ(t)), t, i),   (25)
then from (20), dθ(t) can be rewritten as
dθ(t) = ê(t)dt + α̂(t)dw(t).   (26)
Construct a Lyapunov-Krasovskii functional candidate for system (20) as follows:
V(θ(t), t, i) = Σ_{j=1}^7 V_j(θ(t), t, i),   (27)
where
V₁(θ(t), t, i) = θ^T(t)P_iθ(t),   (28)
V₂(θ(t), t, i) = ∫_{t−τ₁}^t φ̃^T(s)L φ̃(s)ds + ∫_{t−τ(t)}^{t−τ₁} φ̃^T(s)R φ̃(s)ds + ∫_{t−τ₂}^{t−τ₁} φ̃^T(s)Z φ̃(s)ds,   (29)
V₃(θ(t), t, i) = τ₂ ∫_{−τ₂}^0 ∫_{t+θ}^t g^T(θ(s))S₃ g(θ(s))ds dθ + (τ₂−τ₁) ∫_{−τ₂}^{−τ₁} ∫_{t+θ}^t g^T(θ(s))S₄ g(θ(s))ds dθ,   (30)
V₄(θ(t), t, i) = ∫_{t−d(t)}^t g^T(θ(s))E₁ g(θ(s))ds + d̄ ∫_{−d̄}^0 ∫_{t+θ}^t g^T(θ(s))E₂ g(θ(s))ds dθ,   (31)
V₅(θ(t), t, i) = ∫_{t−h}^t θ^T(s)Q₁ θ(s)ds + ∫_{t−η₂(t)}^t θ^T(s)Q₂ θ(s)ds + h ∫_{−h}^0 ∫_{t+θ}^t θ^T(s)Q₃ θ(s)ds dθ − (π²/4) ∫_{t−η₂(t)}^t [θ(s) − θ(t−η₂(t))]^T W [θ(s) − θ(t−η₂(t))]ds,   (32)
V₆(θ(t), t, i) = ∫_{−τ₁}^0 ∫_{t+θ}^t ê^T(s)T₁ ê(s)ds dθ + ∫_{−τ₂}^{−τ₁} ∫_{t+θ}^t ê^T(s)T₂ ê(s)ds dθ,   (33)
V₇(θ(t), t, i) = ∫_{−τ₁}^0 ∫_{t+θ}^t tr[α̂^T(s)X₁ α̂(s)]ds dθ + ∫_{−τ₂}^{−τ₁} ∫_{t+θ}^t tr[α̂^T(s)X₂ α̂(s)]ds dθ,   (34)
and φ̃(s) = [θ^T(s), g^T(θ(s))]^T. Then, by Itô's formula, for each %_t = i, i ∈ S, the mathematical expectation of the stochastic derivative of V₁(θ(t), t, i), with E[2θ^T(t)P_i α̂(t)dw(t)] = 0, satisfies
E[dV₁(θ(t), t, i)] = E[LV₁(θ(t), t, i)] ≤ 2θ^T(t)P_i ê(t) + tr(α̂^T(t)P_i α̂(t)) + θ^T(t) Σ_{j=1}^N π_ij P_j θ(t).   (35)
On the other hand, by Assumption 2.2 and condition (22), we get
trace[σ̃^T(θ(t), θ(t−τ(t)), t, i) σ̃(θ(t), θ(t−τ(t)), t, i)] ≤ λ̄_1i[θ^T(t)Σ̄_1i θ(t) + θ^T(t−τ(t))Σ̄_2i θ(t−τ(t))],   (36)
E[dV₂(θ(t), t, i)] ≤ φ̃^T(t)L φ̃(t) − φ̃^T(t−τ₁)L φ̃(t−τ₁) + φ̃^T(t−τ₁)R φ̃(t−τ₁) − (1−μ₁)φ̃^T(t−τ(t))R φ̃(t−τ(t)) + φ̃^T(t−τ₁)Z φ̃(t−τ₁) − φ̃^T(t−τ₂)Z φ̃(t−τ₂),
E[dV₃(θ(t), t, i)] ≤ τ₂² g^T(θ(t))S₃ g(θ(t)) − τ₂ ∫_{t−τ₂}^t g^T(θ(s))S₃ g(θ(s))ds + (τ₂−τ₁)² g^T(θ(t))S₄ g(θ(t)) − (τ₂−τ₁) ∫_{t−τ₂}^{t−τ₁} g^T(θ(s))S₄ g(θ(s))ds,   (37)
E[dV₄(θ(t), t, i)] ≤ g^T(θ(t))E₁ g(θ(t)) − (1−μ₂)g^T(θ(t−d(t)))E₁ g(θ(t−d(t))) + d̄² g^T(θ(t))E₂ g(θ(t)) − d̄ ∫_{t−d̄}^t g^T(θ(s))E₂ g(θ(s))ds,   (38)
E[dV₅(θ(t), t, i)] ≤ θ^T(t)Q₁θ(t) − θ^T(t−h)Q₁θ(t−h) + θ^T(t)Q₂θ(t) + h²θ^T(t)Q₃θ(t) − h ∫_{t−h}^t θ^T(s)Q₃θ(s)ds − (π²/4)[θ(t) − θ(t−η₂(t))]^T W [θ(t) − θ(t−η₂(t))],   (39)
E[dV₆(θ(t), t, i)] = ê^T(t)[τ₁T₁ + (τ₂−τ₁)T₂]ê(t) − ∫_{t−τ₁}^t ê^T(s)T₁ê(s)ds − ∫_{t−τ₂}^{t−τ₁} ê^T(s)T₂ê(s)ds
= ê^T(t)[τ₁T₁ + (τ₂−τ₁)T₂]ê(t) − ∫_{t−τ₁}^t ê^T(s)T₁ê(s)ds − ∫_{t−τ(t)}^{t−τ₁} ê^T(s)T₂ê(s)ds − ∫_{t−τ₂}^{t−τ(t)} ê^T(s)T₂ê(s)ds,   (40)
E[dV₇(θ(t), t, i)] = τ₁ tr(α̂^T(t)X₁α̂(t)) + (τ₂−τ₁) tr(α̂^T(t)X₂α̂(t)) − ∫_{t−τ₁}^t tr(α̂^T(s)X₁α̂(s))ds − ∫_{t−τ(t)}^{t−τ₁} tr(α̂^T(s)X₂α̂(s))ds − ∫_{t−τ₂}^{t−τ(t)} tr(α̂^T(s)X₂α̂(s))ds
≤ τ₁λ̄₂[θ^T(t)Σ̄_1iθ(t) + θ^T(t−τ(t))Σ̄_2iθ(t−τ(t))] + (τ₂−τ₁)λ̄₃[θ^T(t)Σ̄_1iθ(t) + θ^T(t−τ(t))Σ̄_2iθ(t−τ(t))] − ∫_{t−τ₁}^t tr(α̂^T(s)X₁α̂(s))ds − ∫_{t−τ(t)}^{t−τ₁} tr(α̂^T(s)X₂α̂(s))ds − ∫_{t−τ₂}^{t−τ(t)} tr(α̂^T(s)X₂α̂(s))ds.   (41)
According to Lemma 2.5 and Lemma 2.6, we get
−τ₂ ∫_{t−τ₂}^t g^T(θ(s))S₃ g(θ(s))ds ≤ −χ₁^T S₃ χ₁ − 3χ₂^T S₃ χ₂, where χ₁ = ∫_{t−τ₂}^t g(θ(s))ds and χ₂ = χ₁ − (2/τ₂) ∫_{t−τ₂}^t ∫_θ^t g(θ(s))ds dθ,   (42)
−h ∫_{t−h}^t θ^T(s)Q₃ θ(s)ds ≤ −χ₃^T Q₃ χ₃ − 3χ₄^T Q₃ χ₄, where χ₃ = ∫_{t−h}^t θ(s)ds and χ₄ = χ₃ − (2/h) ∫_{t−h}^t ∫_θ^t θ(s)ds dθ,   (43)
−d̄ ∫_{t−d̄}^t g^T(θ(s))E₂ g(θ(s))ds ≤ −χ₅^T E₂ χ₅ − 3χ₆^T E₂ χ₆, where χ₅ = ∫_{t−d̄}^t g(θ(s))ds and χ₆ = χ₅ − (2/d̄) ∫_{t−d̄}^t ∫_θ^t g(θ(s))ds dθ,   (44)
−(τ₂−τ₁) ∫_{t−τ₂}^{t−τ₁} g^T(θ(s))S₄ g(θ(s))ds ≤ −( ∫_{t−τ₂}^{t−τ₁} g(θ(s))ds )^T S₄ ( ∫_{t−τ₂}^{t−τ₁} g(θ(s))ds ).   (45)
Based on (7), the following inequalities are true for s = 1, 2, ..., n:
[g_s(θ_s(t)) − σ_s⁻θ_s(t)][g_s(θ_s(t)) − σ_s⁺θ_s(t)] ≤ 0,
[g_s(θ_s(t−τ(t))) − σ_s⁻θ_s(t−τ(t))][g_s(θ_s(t−τ(t))) − σ_s⁺θ_s(t−τ(t))] ≤ 0,
which can be written compactly as
[θ(t); g(θ(t))]^T [F₁ −F₂; ∗ I] [θ(t); g(θ(t))] ≤ 0,
[θ(t−τ(t)); g(θ(t−τ(t)))]^T [F₁ −F₂; ∗ I] [θ(t−τ(t)); g(θ(t−τ(t)))] ≤ 0.
Then, for any positive diagonal matrices L and S, the following inequalities hold:
[θ(t); g(θ(t))]^T [F₁L −F₂L; ∗ L] [θ(t); g(θ(t))] ≤ 0,
[θ(t−τ(t)); g(θ(t−τ(t)))]^T [F₁S −F₂S; ∗ S] [θ(t−τ(t)); g(θ(t−τ(t)))] ≤ 0.   (46)
Consider the following zero equations:
0 = 2ζ^T(t)M̄[θ(t) − θ(t−τ₁) − ∫_{t−τ₁}^t ê(s)ds − ∫_{t−τ₁}^t α̂(s)dw(s)],   (47)
0 = 2ζ^T(t)N̄[θ(t−τ₁) − θ(t−τ(t)) − ∫_{t−τ(t)}^{t−τ₁} ê(s)ds − ∫_{t−τ(t)}^{t−τ₁} α̂(s)dw(s)],   (48)
0 = 2ζ^T(t)Ū[θ(t−τ(t)) − θ(t−τ₂) − ∫_{t−τ₂}^{t−τ(t)} ê(s)ds − ∫_{t−τ₂}^{t−τ(t)} α̂(s)dw(s)],   (49)
where
ζ^T(t) = [θ^T(t), θ^T(t−τ(t)), θ^T(t−τ₁), θ^T(t−τ₂), g^T(θ(t)), g^T(θ(t−τ(t))), g^T(θ(t−τ₁)), g^T(θ(t−τ₂)), ∫_{t−τ₂}^t g^T(θ(s))ds, ∫_{t−τ₂}^t ∫_θ^t g^T(θ(s))ds dθ, ∫_{t−h}^t θ^T(s)ds, ∫_{t−h}^t ∫_θ^t θ^T(s)ds dθ, ∫_{t−d̄}^t g^T(θ(s))ds, ∫_{t−τ₂}^{t−τ₁} g^T(θ(s))ds, θ^T(t−η_M), θ^T(t−η₂(t)), s^T(t−η₂(t)), ∫_{t−d̄}^t ∫_θ^t g^T(θ(s))ds dθ, w^T(t)].
It follows from Lemma 2.9 that
−2ζ^T(t)M̄ ∫_{t−τ₁}^t α̂(s)dw(s) ≤ ζ^T(t)M̄X₁⁻¹M̄^Tζ(t) + (∫_{t−τ₁}^t α̂(s)dw(s))^T X₁ (∫_{t−τ₁}^t α̂(s)dw(s)),
−2ζ^T(t)N̄ ∫_{t−τ(t)}^{t−τ₁} α̂(s)dw(s) ≤ ζ^T(t)N̄X₂⁻¹N̄^Tζ(t) + (∫_{t−τ(t)}^{t−τ₁} α̂(s)dw(s))^T X₂ (∫_{t−τ(t)}^{t−τ₁} α̂(s)dw(s)),   (50)
−2ζ^T(t)Ū ∫_{t−τ₂}^{t−τ(t)} α̂(s)dw(s) ≤ ζ^T(t)ŪX₂⁻¹Ū^Tζ(t) + (∫_{t−τ₂}^{t−τ(t)} α̂(s)dw(s))^T X₂ (∫_{t−τ₂}^{t−τ(t)} α̂(s)dw(s)).   (51)
On the other hand, by using the Itô isometry in [18], we can obtain
E{(∫_{t−τ₁}^t α̂(s)dw(s))^T X₁ (∫_{t−τ₁}^t α̂(s)dw(s))} = E ∫_{t−τ₁}^t tr[α̂^T(s)X₁α̂(s)]ds,   (52)
ACCEPTED MANUSCRIPT
13
(" Z
E
α ˆ (s)dw(s)
t−τ (t)
(" Z
E
#T
t−τ1
#T
t−τ (t)
α ˆ (s)dw(s)
t−τ2
X2
"Z
#)
t−τ1
α ˆ (s)dw(s)
t−τ (t)
X2
"Z
#)
t−τ (t)
α ˆ (s)dw(s)
t−τ2
=E
Z
t−τ1
tr[ˆ αT (s)X2 α ˆ (s)]ds,
(53)
t−τ (t)
=E
Z
t−τ (t)
tr[ˆ αT (s)X2 α ˆ (s)]ds.
(54)
t−τ2
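The identities (52)–(54) are instances of the Itô isometry. As an illustrative sanity check (ours, with a constant matrix integrand α over a window of length τ, for which the stochastic integral is exactly Gaussian), a Monte Carlo estimate of the left-hand side can be compared against the trace expression:

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 0.5                      # integration window length (illustrative)
alpha = np.array([[0.3, 0.1],  # constant matrix integrand (illustrative)
                  [0.0, 0.2]])
X = np.array([[2.0, 0.5],
              [0.5, 1.0]])     # positive definite weight

# For constant alpha, v = int alpha dw(s) ~ N(0, tau * alpha alpha^T),
# so E[v^T X v] = tau * tr(alpha^T X alpha), matching (52).
n_samples = 200_000
w = rng.standard_normal((n_samples, 2)) * np.sqrt(tau)  # Brownian increments
v = w @ alpha.T
mc = np.mean(np.einsum('ij,jk,ik->i', v, X, v))         # Monte Carlo of E[v^T X v]
exact = tau * np.trace(alpha.T @ X @ alpha)
assert abs(mc - exact) / exact < 0.05
```

The agreement (within Monte Carlo error) is exactly what licenses replacing the quadratic forms in (50)–(51) by the integral trace terms used in the derivation of (55).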
Combining (35)–(54) with (17), we obtain

E[dV(θ(t), t, i)] ≤ ζ^T(t)[Ξ1 + τ1 M̄ T1^{−1} M̄^T + (τ2 − τ1) N̄ T2^{−1} N̄^T + (τ2 − τ1) Ū T2^{−1} Ū^T + M̄ X1^{−1} M̄^T + N̄ X2^{−1} N̄^T + Ū X2^{−1} Ū^T] ζ(t)
− ∫_{t−τ1}^{t} [ζ^T(t) M̄ + ê^T(s) T1] T1^{−1} [M̄^T ζ(t) + T1 ê(s)] ds
− ∫_{t−τ(t)}^{t−τ1} [ζ^T(t) N̄ + ê^T(s) T2] T2^{−1} [N̄^T ζ(t) + T2 ê(s)] ds
− ∫_{t−τ2}^{t−τ(t)} [ζ^T(t) Ū + ê^T(s) T2] T2^{−1} [Ū^T ζ(t) + T2 ê(s)] ds,   (55)

where

Ξ1 = [Θ̃1, Pi H1i, Γ̄2; ∗, −γ² I, Γ̄21; ∗, ∗, Υ̃2].
Since the last three integral terms in (55) are nonpositive, we get

E[dV(θ(t), t, i)] ≤ ζ^T(t)[Ξ1 + τ1 M̄ T1^{−1} M̄^T + (τ2 − τ1) N̄ T2^{−1} N̄^T + (τ2 − τ1) Ū T2^{−1} Ū^T + M̄ X1^{−1} M̄^T + N̄ X2^{−1} N̄^T + Ū X2^{−1} Ū^T] ζ(t).   (56)

Now, we set

f(t) = E{∫_0^t [Z^T(s) Z(s) − γ² w^T(s) w(s)] ds},   (57)

where t > 0. Since V(φ(t), 0) = 0 under the zero initial condition, that is, φ(t) = 0 for t ∈ [−τ̄, 0],

f(t) = E{∫_0^t [Z^T(s) Z(s) − γ² w^T(s) w(s) + E[dV(θ(s), s, i)]] ds} − E{V(θ_t, ϱ(t))}
     ≤ E{∫_0^t [Z^T(s) Z(s) − γ² w^T(s) w(s) + E[dV(θ(s), s, i)]] ds},

so that

f(t) ≤ E{∫_0^t ζ^T(s) Ω̃ ζ(s) ds},   (58)

where

Ω̃ ≤ Ψ1 + τ1 M̄ T1^{−1} M̄^T + (τ2 − τ1) N̄ T2^{−1} N̄^T + (τ2 − τ1) Ū T2^{−1} Ū^T + M̄ X1^{−1} M̄^T + N̄ X2^{−1} N̄^T + Ū X2^{−1} Ū^T + Γ̄5 Γ̄5^T.   (59)

By Lemma 2.7, Ω̃ < 0 is equivalent to (21); in this way, ‖Zθ(t)‖2 ≤ γ ‖v(t)‖2 is satisfied for every nonzero v(t) ∈ L2[0, ∞). Therefore, system (20) guarantees that the master system (1) and the slave system (5) are synchronized.
Remark 3.2. Assumption 2.1 was first proposed in Liu et al. [6] and has subsequently been used in many papers. In this paper, the generalized activation function (4) is taken into account and the required stability conditions are derived. By Assumption 2.1, σj− and σj+ are known real scalars that may be positive, negative, or zero, which means that the resulting activation functions may be nonmonotonic and are more general than the usual sigmoid functions and Lipschitz-type conditions. Such a description is very precise in quantifying the lower and upper bounds of the activation functions, and it is therefore very effective when employing the LMI method to reduce conservatism.
Remark 3.3. Theorem 3.1 provides a novel delay-dependent condition guaranteeing the stochastic stability of neural networks with time-varying delays. In order to reduce conservativeness, the integral terms −∫_{t−τ2}^{t−τ1} ê^T(s) T2 ê(s) ds and −∫_{t−τ2}^{t−τ1} tr(α̂^T(s) X2 α̂(s)) ds have each been split into two parts, namely −∫_{t−τ(t)}^{t−τ1} ê^T(s) T2 ê(s) ds − ∫_{t−τ2}^{t−τ(t)} ê^T(s) T2 ê(s) ds and −∫_{t−τ(t)}^{t−τ1} tr(α̂^T(s) X2 α̂(s)) ds − ∫_{t−τ2}^{t−τ(t)} tr(α̂^T(s) X2 α̂(s)) ds. This splitting exploits the information τ1 ≤ τ(t) ≤ τ2 and may lead to less conservative results.
Remark 3.4. The decentralized event-triggered communication scheme (8) is employed to reduce unnecessary data transmissions. Compared with the decentralized event-triggered mechanisms presented in [43] and [46], the proposed event-triggering and sampling are executed separately. Therefore, real-time detection hardware is no longer needed.

3.1. Extension to uncertain stochastic neural networks.
Theorem 3.5. For a prescribed scalar γ > 0 and some given positive scalars τ1, τ2, μ1, h, d̄, δ, as well as the matrices Ki (i = 1, 2, ..., N), the Markovian jump stochastic uncertain neural network (16) is stochastically stable if there exist positive scalars εi > 0, λ̄1i > 0, λ̄2 > 0, λ̄3 > 0, positive definite matrices Pi, L = [L1, L2; L2^T, L3], R = [R1, R2; R2^T, R3], Z = [Z1, Z2; Z2^T, Z3], S3, S4, E1, E2, W, X1, X2, T1, T2, Qi (i = 1, 2, 3), positive diagonal matrices L, S, and appropriately dimensioned matrices M̄, N̄, Ū such that the following LMIs are satisfied:

f = [Ψ1, Ψ2; ∗, Ψ3] < 0,   (60)
Pi ≤ λ̄1i I,   (61)
X1 ≤ λ̄2 I,   (62)
X2 ≤ λ̄3 I,   (63)

where

Ψ1 = [Θ̃1, Pi H1i; ∗, −γ² I],
Ψ2 = [Γ̄2, Γ̄3, Γ̄4, Γ̄5, Γ̄6; Γ̄21, 0, 0, 0, Γ̄ê],
Ψ3 = diag{Υ̃2, Υ̃3, Υ̃4, −I, Υ̃6},
Γ̄6 = [Û1, Û2], Γ̄ê = [L̂, Ĉ], Υ̃6 = diag{−I, −I},
Û1 = [E_{bi}^T Pi, 0, ..., 0 (17 zero blocks)]^T,
Û2 = [−ξ_{ai}, 0, 0, 0, ξ_{bi}, ξ_{ci}, 0, ..., 0 (6 zero blocks), ξ_{di}, 0, 0, −ξ_{ei} K, ξ_{ei} K, 0]^T,
L̂ = [0, 0, τ1 E_{bi}^T T1, τ21 E_{bi}^T T2, 0, ..., 0 (5 zero blocks)]^T, Ĉ = [0, 0, 0].
Proof. Replace A¯1i, B¯1i, C¯1i, D¯1i, E¯1i in LMI (21) by A¯1i + ΔA¯1i(t), B¯1i + ΔB¯1i(t), C¯1i + ΔC¯1i(t), D¯1i + ΔD¯1i(t), E¯1i + ΔE¯1i(t). Then (21) becomes

Ξ + Va F(t) Vb + (Va F(t) Vb)^T < 0,   (64)

where

Va = [E_{bi}^T Pi, 0, 0, 0, τ1 E_{bi}^T, τ21 E_{bi}^T, 0, ..., 0 (19 zero blocks)]^T,   (65)
Vb = [−ξ_{ai}, 0, 0, 0, ξ_{bi}, ξ_{ci}, 0, ..., 0 (6 zero blocks), ξ_{di}, 0, 0, −ξ_{ei} K, ξ_{ei} K, 0].   (66)

By Lemma 2.8, we know that (64) is equivalent to

Ξ + εi Va Va^T + εi^{−1} Vb^T Vb < 0.   (67)

By applying the Schur complement, (67) is equivalent to (60). This completes the proof.
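The Schur complement step used in passing from (67) to (60) can be illustrated numerically on a generic example (our own matrices, not the paper's): for symmetric blocks with C invertible, [A, B; B^T, C] < 0 holds if and only if C < 0 and A − B C^{−1} B^T < 0.

```python
import numpy as np

def is_neg_def(M, tol=1e-10):
    """Check negative definiteness via the largest eigenvalue."""
    return np.max(np.linalg.eigvalsh(M)) < -tol

# Generic symmetric blocks (illustrative values only).
A = np.array([[-4.0, 1.0], [1.0, -3.0]])
B = np.array([[0.5, 0.2], [0.1, 0.4]])
C = np.array([[-2.0, 0.3], [0.3, -1.5]])

full = np.block([[A, B], [B.T, C]])
schur = A - B @ np.linalg.inv(C) @ B.T

# Schur complement lemma: the two conditions on the right are jointly
# equivalent to negative definiteness of the full block matrix.
assert is_neg_def(full) == (is_neg_def(C) and is_neg_def(schur))
```

This is the same mechanism by which the nonlinear term εi^{−1} Vb^T Vb in (67) is absorbed into the extra rows and columns of the LMI (60).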
Theorem 3.6. For a prescribed scalar γ > 0 and some given positive scalars τ1, τ2, μ1, h, d̄, δ, the Markovian jump stochastic uncertain neural network (16) is stochastically stable if there exist positive scalars εi > 0, λ̄1i > 0, λ̄2 > 0, λ̄3 > 0, positive definite matrices X̂i, L̄ = [L̄1, L̄2; L̄2^T, L̄3], R̄ = [R̄1, R̄2; R̄2^T, R̄3], Z̄ = [Z̄1, Z̄2; Z̄2^T, Z̄3], S̄3, S̄4, Ē1, Ē2, W̄, X̄1, X̄2, T̄1, T̄2, Q̄i (i = 1, 2, 3), positive diagonal matrices L̄, S̄, and appropriately dimensioned matrices M̄, N̄, Ū, F̂3 such that the following LMIs are satisfied:

f̂ = [f̂1, f̂2; ∗, f̂3] < 0,   (68)
X̂i ≤ λ̄1i I,   (69)
X̄1 ≤ λ̄2 I,   (70)
X̄2 ≤ λ̄3 I,   (71)

where

f̂1 = [Λ, H1i; ∗, −γ² I], f̂2 = [Φ̄2, Φ̄3, Φ̄4, Φ̄5, Φ̄6; Θ21, 0, 0, 0, Θê], f̂3 = diag{Λ̂2, Λ̂3, Λ̂4, −I, Λ̂6},

and Λ = (Λij)18×18 has the nonzero blocks

Λ11 = −X̂i A1i − A1i^T X̂i + Σ_{j=1}^{N} πij X̂j + L̄1 + Q̄1 + Q̄2 − F1 L̄ + (π²/4) W̄ + ηM² Q̄3 + [λ1i + τ1 λ2 + (τ2 − τ1) λ3] Σ1i,
Λ15 = X̂i B1i + L̄2 + F2 L̄, Λ16 = X̂i C1i, Λ1,13 = X̂i D1i, Λ1,16 = E1i F̂3 + (π²/4) W̄, Λ1,17 = −E1i F̂3, Λ1,19 = H1i,
Λ22 = −(1 − μ1) R̄1 − F1 S̄ + [λ1i + τ1 λ2 + (τ2 − τ1) λ3] Σ2i, Λ26 = −(1 − μ1) R̄2 + F2 S̄,
Λ33 = −L̄1 + R̄1 + Z̄1, Λ37 = Z̄2 − L̄2 + R̄2, Λ44 = −Z̄1, Λ48 = −Z̄2,
Λ55 = L̄3 + τ2² S̄3 + (τ2 − τ1)² S̄4 + Ē1 + d̄² Ē2 − L̄,
Λ66 = −(1 − μ1) R̄3 − (1 − μ2) Ē1 − S̄, Λ77 = −L̄3 + R̄3 + Z̄3, Λ88 = −Z̄3,
Λ99 = −4 S̄3, Λ9,10 = 3 S̄3/τ2 + 3 S̄3^T/τ2, Λ10,10 = −12 S̄3/τ2² − 12 S̄3^T/τ2²,
Λ11,11 = −4 Q̄3, Λ11,12 = 3 Q̄3/ηM + 3 Q̄3^T/ηM, Λ12,12 = −12 Q̄3/ηM² − 12 Q̄3^T/ηM²,
Λ13,13 = −4 Ē2, Λ13,18 = 3 Ē2/d̄ + 3 Ē2^T/d̄, Λ18,18 = −12 Ē2/d̄² − 12 Ē2^T/d̄², Λ14,14 = −S̄4,
Λ15,15 = −Q̄1, Λ16,16 = δ Γ̄i − (π²/4) W̄, Λ16,17 = −δ Γ̄i, Λ17,17 = −Γ̄i + δ Γ̄i,

with

Φ̄2 = [τ1 Ã1^T, (τ2 − τ1) Ã1^T], Λ̂2 = diag{−τ1 (T̄1 − 2X̂i), −(τ2 − τ1)(T̄2 − 2X̂i)},
Θ21 = [τ1 H1i, (τ2 − τ1) H1i],
Φ̄3 = [√τ1 M̄, √(τ2 − τ1) N̄, √(τ2 − τ1) Ū], Λ̂3 = diag{−T̄1, −T̄2, −T̄2},
Φ̄4 = [M̄, N̄, Ū], Λ̂4 = diag{−X̄1, −X̄2, −X̄2},
Ã1 = [−X̂i A1i, 0, 0, 0, X̂i B1i, X̂i C1i, 0, ..., 0 (6 zero blocks), X̂i D1i, 0, 0, E1i F̂3, −E1i F̂3, 0],
Φ̄5 = col{Gf i, Cdi, 0, ..., 0 (6 zero blocks), Dgi, 0, ..., 0 (10 zero blocks)},
Φ̄6 = [Û1, Û2], Θê = [L̂, Ĉ], Λ̂6 = diag{−I, −I},
Û1 = [E_{bi}^T, 0, ..., 0 (17 zero blocks)]^T,
Û2 = [−X̂i ξ_{ai}, 0, 0, 0, X̂i ξ_{bi}, X̂i ξ_{ci}, 0, ..., 0 (6 zero blocks), X̂i ξ_{di}, 0, 0, ξ_{ei} F̂3, −ξ_{ei} F̂3, 0]^T,
L̂ = [0, 0, τ1 E_{bi}^T X̂i, τ21 E_{bi}^T X̂i, 0, ..., 0 (7 zero blocks)]^T, Ĉ = [0, 0, 0].
Furthermore, the desired H∞ controller gains are given by Ki = F̂3 X̂i^{−1}.

Proof. Letting

Pi = X̂i^{−1}, L̄l = X̂i Ll X̂i, R̄l = X̂i Rl X̂i, Z̄l = X̂i Zl X̂i (l = 1, 2, 3),
S̄l = X̂i Sl X̂i (l = 3, 4), Ēj = X̂i Ej X̂i, X̄j = X̂i Xj X̂i, T̄j = X̂i Tj X̂i (j = 1, 2),
W̄ = X̂i W X̂i, Q̄s = X̂i Qs X̂i (s = 1, 2, 3),
S̄ = X̂i S X̂i, L̄ = X̂i L X̂i, M̄ = X̂i M X̂i, N̄ = X̂i N X̂i, Ū = X̂i U X̂i,   (72)

and pre- and post-multiplying the LMI (60) by diag{X̂i, ..., X̂i (18 times), I, I, I, X̂i, ..., X̂i (6 times), I, I, I}, we obtain

f = [Ψ1, Ψ2; ∗, Ψ3] < 0,   (73)

where

Ψ1 = [Ψ_{18×18}, Pi H1i; ∗, −γ² I], Ψ2 = [Γ̄2, Γ̄3, Γ̄4, Γ̄5, Γ̄6; Γ̄21, 0, 0, 0, Γ̄ê], Ψ3 = diag{Υ̃2, Υ̃3, Υ̃4, −I, Υ̃6},
Γ̄2 = [τ1 Ã1^T, (τ2 − τ1) Ã1^T], Γ̄21 = [τ1 H1i, (τ2 − τ1) H1i], Υ̃2 = diag{−τ1 T1^{−1}, −(τ2 − τ1) T2^{−1}}.

Since the terms T1^{−1}, T2^{−1} and T̄1 = X̂i T1 X̂i, T̄2 = X̂i T2 X̂i both appear in (73), it is difficult to solve. In order to facilitate the design of the H∞ controller, we bound T1^{−1} and T2^{−1} with T̄1 and T̄2 in accordance with Lemma 2.10:

−Tl^{−1} ≤ T̄l − 2X̂i (l = 1, 2).   (74)

Substituting T̄l − 2X̂i for −Tl^{−1} in (73), we obtain (68). This completes the proof.
Remark 3.7. It is easy to see that (73) cannot be directly solved with the Matlab LMI toolbox, since it involves nonlinear terms such as Tl^{−1} together with T̄l = X̂i Tl X̂i. On the one hand, based on the cone complementary linearization (CCL) algorithm [55], a nonlinear minimization can be adopted to solve this non-convex problem; however, this method leads to high computational complexity. On the other hand, with the help of the fact that −Tl^{−1} ≤ T̄l − 2X̂i for Tl > 0, the non-convex problem can be transformed into LMIs, which reduces the computational complexity.
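The bound −Tl^{−1} ≤ T̄l − 2X̂i invoked above follows from (T^{1/2}X − T^{−1/2})^T(T^{1/2}X − T^{−1/2}) ⪰ 0, i.e. X T X − 2X + T^{−1} ⪰ 0 with T̄ = X T X. A numerical spot check on random positive definite matrices (our own sketch, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_spd(n):
    """Random symmetric positive definite matrix."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

for _ in range(100):
    T = random_spd(3)
    X = random_spd(3)
    T_bar = X @ T @ X                          # T_bar plays the role of X T X
    gap = T_bar - 2 * X + np.linalg.inv(T)     # should be positive semidefinite
    assert np.min(np.linalg.eigvalsh(gap)) > -1e-8
```

Because the bound holds for every feasible Tl and X̂i, replacing −Tl^{−1} by T̄l − 2X̂i only tightens the LMI, so feasibility of (68) still certifies (73).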
Remark 3.8. The decentralized event-triggered scheme for stochastic neural networks is based on the result of Theorem 3.5. Unfortunately, Theorem 3.5 does not give feasible LMI conditions for obtaining the control gain matrices Ki. Hence, we must look for another set of stability conditions. To this end, we take an appropriate congruence transformation to obtain feasible LMI stability conditions and a design method for a decentralized event-triggered state feedback controller.
Remark 3.9. In Theorem 3.1, the stability of the neural networks has been obtained by constructing an appropriate Lyapunov functional. It should be mentioned that some newly developed approaches, such as decentralized event-triggered approach and some inequality techniques are employed to reduce the conservatism of the stability condition.
Remark 3.10. Note that system (20) in Theorem 3.1 is obtained in the case u(t) ≠ 0. If u(t) = 0, that is, if the control input, Markovian jumping and uncertain parameters are not considered, then system (20) reduces to

dθ(t) = [−A1 θ(t) + B1 g(θ(t)) + C1 g(θ(t − τ(t))) + D1 ∫_{t−d̄(t)}^{t} g(θ(s)) ds + H1 ω(t)] dt + σ[θ(t), θ(t − τ(t)), t, ϱt] dw(t),
θ(s) = φ(s), s ∈ [−max(τ2, h), 0],   (75)
Zθ(t) = Gf θ(t) + Cd θ(t − τ(t)) + Dg ∫_{t−h(t)}^{t} g(θ(s)) ds.
Corollary 3.11. For a prescribed scalar γ > 0 and some given positive scalars τ1, τ2, μ1, d̄, system (75) is stochastically stable if there exist positive scalars λ̄1 > 0, λ̄2 > 0, λ̄3 > 0, positive definite matrices P, L = [L1, L2; L2^T, L3], R = [R1, R2; R2^T, R3], Z = [Z1, Z2; Z2^T, Z3], S3, S4, E1, E2, X1, X2, T1, T2, positive diagonal matrices L, S, and appropriately dimensioned matrices M̄, N̄, Ū such that the following LMIs are satisfied:

f̂z = [f̂z_{14×14}, f̂z_{1,15}, f̂z_{1,16}, f̂z_{1,17}, f̂z_{1,18}; ∗, f̂z_{15,15}, 0, 0, 0; ∗, ∗, f̂z_{16,16}, 0, 0; ∗, ∗, ∗, f̂z_{17,17}, 0; ∗, ∗, ∗, ∗, −I] < 0,   (76)
P ≤ λ̄1 I, X1 ≤ λ̄2 I, X2 ≤ λ̄3 I,

where

f̂z_{11} = −P A1 − A1^T P + L1 − F1 L + [λ̄1 + τ1 λ̄2 + (τ2 − τ1) λ̄3] Σ1,
f̂z_{15} = P B1 + L2 + F2 L, f̂z_{16} = P C1, f̂z_{1,11} = P D1, f̂z_{1,14} = P H1,
f̂z_{22} = −(1 − μ1) Q1 − F1 S + [λ̄1 + τ1 λ̄2 + (τ2 − τ1) λ̄3] Σ2,
f̂z_{26} = −(1 − μ1) R2 + F2 S, f̂z_{33} = −L1 + R1 + Z1, f̂z_{37} = Z2 − L2 + R2, f̂z_{44} = −Z1, f̂z_{48} = −Z2,
f̂z_{55} = L3 + τ2² S3 + (τ2 − τ1)² S4 + E1 + d̄² E2 − L,
f̂z_{66} = −(1 − μ1) R3 − (1 − μ2) E1 − S, f̂z_{77} = −L3 + R3 + Z3, f̂z_{88} = −Z3,
f̂z_{99} = −4 S3, f̂z_{9,10} = 3 S3/τ2 + 3 S3^T/τ2, f̂z_{10,10} = −12 S3/τ2² − 12 S3^T/τ2²,
f̂z_{11,11} = −4 E2, f̂z_{11,13} = 3 E2/d̄ + 3 E2^T/d̄, f̂z_{12,12} = −S4, f̂z_{13,13} = −12 E2/d̄² − 12 E2^T/d̄², f̂z_{14,14} = −γ² I,
f̂z_{1,15} = [τ1 Ã1^T T1, (τ2 − τ1) Ã1^T T2], f̂z_{15,15} = diag{−τ1 T1, −(τ2 − τ1) T2},
f̂z_{1,16} = [√τ1 M̄, √(τ2 − τ1) N̄, √(τ2 − τ1) Ū], f̂z_{16,16} = diag{−T1, −T2, −T2},
f̂z_{1,17} = [M̄, N̄, Ū], f̂z_{17,17} = diag{−X1, −X2, −X2},
f̂z_{1,18} = col[Gf, Cd, 0, ..., 0 (8 zero blocks), Dg, 0, 0, 0],
Ã1 = [−A1, 0, 0, 0, B1, C1, 0, 0, 0, 0, D1, 0, 0, H1].
Proof. Consider the same Lyapunov-Krasovskii functional defined in (27). The proof is similar to that of Theorem 3.1.
Remark 3.12. If the effects of the stochastic terms and the H∞ performance are not considered, then (75) reduces to

dθ(t) = −A1 θ(t) + B1 g(θ(t)) + C1 g(θ(t − τ(t))) + D1 ∫_{t−d̄(t)}^{t} g(θ(s)) ds,   (77)
and we obtain the following Corollary 3.13 based on Corollary 3.11.

Corollary 3.13. For given positive scalars τ1, τ2, μ1, d̄, system (77) is asymptotically stable if there exist positive scalars λ̄1 > 0, λ̄2 > 0, λ̄3 > 0, positive definite matrices P, L = [L1, L2; L2^T, L3], R = [R1, R2; R2^T, R3], Z = [Z1, Z2; Z2^T, Z3], S3, S4, E1, E2, X1, X2, T1, T2, positive diagonal matrices L, S, and appropriately dimensioned matrices M̄, N̄, Ū such that the following LMI is satisfied:

Ξ̃ = [Φ̄z_{13×13}, Φ̄z_{1,14}, Φ̄z_{1,15}, Φ̄z_{1,16}; ∗, Φ̄z_{14,14}, 0, 0; ∗, ∗, Φ̄z_{15,15}, 0; ∗, ∗, ∗, Φ̄z_{16,16}] < 0,   (78)

where

Φ̄z_{11} = −P A1 − A1^T P + L1 − F1 L + [λ̄1 + τ1 λ̄2 + (τ2 − τ1) λ̄3] Σ1,
Φ̄z_{15} = P B1 + L2 + F2 L, Φ̄z_{16} = P C1, Φ̄z_{1,11} = P D1,
Φ̄z_{22} = −(1 − μ1) Q1 − F1 S + [λ̄1 + τ1 λ̄2 + (τ2 − τ1) λ̄3] Σ2,
Φ̄z_{26} = −(1 − μ1) R2 + F2 S, Φ̄z_{33} = −L1 + R1 + Z1, Φ̄z_{37} = Z2 − L2 + R2, Φ̄z_{44} = −Z1, Φ̄z_{48} = −Z2,
Φ̄z_{55} = L3 + τ2² S3 + (τ2 − τ1)² S4 + E1 + d̄² E2 − L,
Φ̄z_{66} = −(1 − μ1) R3 − (1 − μ2) E1 − S, Φ̄z_{77} = −L3 + R3 + Z3, Φ̄z_{88} = −Z3,
Φ̄z_{99} = −4 S3, Φ̄z_{9,10} = 3 S3/τ2 + 3 S3^T/τ2, Φ̄z_{10,10} = −12 S3/τ2² − 12 S3^T/τ2²,
Φ̄z_{11,11} = −4 E2, Φ̄z_{11,13} = 3 E2/d̄ + 3 E2^T/d̄, Φ̄z_{12,12} = −S4, Φ̄z_{13,13} = −12 E2/d̄² − 12 E2^T/d̄²,
Φ̄z_{1,14} = [τ1 Ã1^T T1, (τ2 − τ1) Ã1^T T2], Φ̄z_{14,14} = diag{−τ1 T1, −(τ2 − τ1) T2},
Φ̄z_{1,15} = [√τ1 M̄, √(τ2 − τ1) N̄, √(τ2 − τ1) Ū], Φ̄z_{15,15} = diag{−T1, −T2, −T2},
Φ̄z_{1,16} = [M̄, N̄, Ū], Φ̄z_{16,16} = diag{−X1, −X2, −X2}.
Remark 3.14. In case the distributed delay does not exist in the neural network (77), we have the simplified neural network

dθ(t) = −A1 θ(t) + B1 g(θ(t)) + C1 g(θ(t − τ(t))).   (79)

Then, from Corollary 3.13, it is easy to obtain a corollary for the above-mentioned case by setting d̄(t) = 0, with the other entries the same as in Corollary 3.13.
Remark 3.15. In the theorems and corollaries we have used more decision variables in our LMIs than in Refs. [11, 12, 17, 19]. It may seem that these variables increase the computational complexity; however, obtaining a feasible solution takes only a few seconds using the MATLAB toolbox. Besides, according to Tables 2 and 3, it is easy to see that the stability criteria obtained in this study are less conservative than the existing ones. Therefore, it is worthwhile to trade a few extra seconds of computation for reduced conservatism of the stability criteria. Moreover, the result obtained in Theorem 3.6 can be modified into one with a smaller number of decision variables by applying Finsler's lemma, which will be a future direction of work.
Remark 3.16. Theorem 3.6 provides a method to co-design both the controller gains Ki and the trigger parameters Γi in terms of a set of LMIs. Moreover, the information on the transmission delay is also involved in the condition of Theorem 3.6. For given trigger parameters Γi and transmission delay, by solving the set of LMIs stated in Theorem 3.6, the controller gain and trigger matrix can be obtained, which guarantee that the error system (16) is stable and hence that the master system (1) and the slave system (5) are synchronized.

Remark 3.17. In reality, as the capacity of the communication channels is limited, it is necessary to reduce the data transmission rate in the neural networks. However, the traditional sampling method may waste the bandwidth of the neural networks with unnecessary signals. Consequently, it is important to introduce a new decentralized event-triggered mechanism in order to determine which sampled signals of the available stochastic neural networks are sent. Furthermore, some pioneering works have been done on various types of neural networks with event-triggered schemes, for example the event-triggered scheme for switched neural networks [48], the decentralized event-triggered scheme for master-slave neural networks in [51], and the event-triggered approach to state estimation for a class of complex networks in [52]. The model considered in the present study is more practical than those proposed in [48, 51, 52], since we consider Markov jump stochastic NNs with a combination of interval time-varying delay and the H∞ synchronization method. As discussed, the decentralized event-triggered scheme with the H∞ error system model in this paper is an effective way to cope with the limited capacity of data transmission.

4. Numerical Examples
In this section, numerical examples are presented to show the effectiveness of the proposed method.
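The LMI conditions above are solved in the examples with the MATLAB LMI toolbox. As a lightweight, numpy-only illustration of the kind of certificate involved (our own sketch, not the paper's procedure), the delay-free Lyapunov inequality A^T P + P A < 0 can be checked by solving the Lyapunov equation A^T P + P A = −Q via the Kronecker/vec identity and verifying P > 0:

```python
import numpy as np

def lyapunov_certificate(A, Q):
    """Solve A^T P + P A = -Q for P using the identity
    (I kron A^T + A^T kron I) vec(P) = -vec(Q)."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(K, -Q.flatten()).reshape(n, n)
    return 0.5 * (P + P.T)  # symmetrize against round-off

A = np.array([[-5.0, 0.0], [0.0, -3.0]])   # Hurwitz matrix (illustrative)
Q = np.eye(2)
P = lyapunov_certificate(A, Q)
# P > 0 certifies asymptotic stability of dx/dt = A x.
assert np.min(np.linalg.eigvalsh(P)) > 0
```

The theorems above play the same role for the full delayed stochastic system, with the single Lyapunov equation replaced by the structured LMIs (60), (68), (76), (78).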
Example 1. Consider system (16) with the following matrix parameters.

Mode 1:
A1 = [5, 0; 0, 5], B1 = [10.2, 0.1; 0, −2], C1 = [0.1, 0.1; 0.2, −0.1], D1 = [0.4, 0.1; 0.2, −0.03],
E1 = [−0.3, −1; 0.6, 0.8], H1 = [0.5, −0.2; −0.3, 0.2], Gf = [0.2, 0; −0.05, 0.3],
Cd = [−0.5, 1; 0, 0.4], Dg = [0.4, −0.1; 0.2, −0.3], Σ11 = [0.2, 0; 0, 0.5], Σ12 = [0.3, 0; 0, 0.2],
π11 = −4, π12 = 4, ξa1 = ξb1 = ξc1 = ξd1 = ξe1 = diag{0.1, 0.1}.

Mode 2:
A2 = [5, 0; 0, 3], B2 = [−0.1, 0.3; 0, −0.1], C2 = [−0.2, 0.1; −0.1, 0], D2 = [0.01, −0.2; −2, 0],
E2 = [−0.1, 0.1; 0.01, −0.1], H2 = [−0.2, 0; 0.6, 0.1], Gf2 = [−2, 0; 0, 0.4],
Cd2 = [−0.2, 0; 1, 0.1], Dg2 = [0, 0.1; 0.2, −0.3], Σ21 = [0.1, 0; 0, 0.2], Σ22 = [0.2, 0; 0, 0.2],
π21 = 5, π22 = −5, ξa2 = ξb2 = ξc2 = ξd2 = ξe2 = diag{0.1, 0.1},
F1 = 0, F2 = [0.5, 0; 0, 0.5], Eb1 = [1, 0; 0, 1], Eb2 = [0.5, 0; 0, 0.5].
Let τ̄1 = 0.1, τ̄2 = 0.4, δ = 0.1, μ1 = 0.2, and d̄ = h = 0.1. The maximum admissible upper bounds τ̄2 obtained for different values of τ̄1 are listed in Table 1. Solving the LMIs in Theorem 3.6, the feasible solutions obtained are as follows:

P1 = [6.2160, 0.0766; 0.0766, 5.6944], P2 = [28.0329, −0.2299; −0.2299, 27.9580],
L1 = [22.4461, −3.4730; −3.4730, 14.1020], L2 = [6.8400, −4.7447; −4.7447, 5.5013], L3 = [46.0623, −1.3416; −1.3416, 36.2339],
R1 = [19.5212, −1.0258; −1.0258, 12.0195], R2 = [6.4886, −4.3835; −4.3835, 5.1887], R3 = [27.2108, 13.1915; 13.1915, 22.0311],
Z1 = [1.7284, −1.4504; −1.4504, 1.2353], Z2 = [1.3069, −1.1017; −1.1017, 0.9681], Z3 = [11.0134, −8.6754; −8.6754, 8.4152],
Q1 = [1.5541, −1.3020; −1.3020, 1.1084], Q2 = [258.5716, −216.6359; −216.6359, 184.4153], Q3 = [0.5180, −0.4340; −0.4340, 0.3695],
T1 = [1.2445, −1.0458; −1.0458, 0.9096], T2 = [1.7560, −1.4701; −1.4701, 1.2853],
S3 = [381.4859, −294.8041; −294.8041, 290.3739], S4 = [916.0952, −707.9808; −707.9808, 697.2868],
M1 = [1.3606, −1.1335; −1.1335, 0.9805], N1 = [1.5371, −1.2807; −1.2807, 1.1070], U1 = [1.5371, −1.2807; −1.2807, 1.1070],
W = [7.4446, −5.4127; −5.4127, 4.4804], γ = 1.4234, λ1 = 1.2587, λ2 = 1.3221, λ3 = 1.1832.

Thus, by resorting to the Matlab LMI Control Toolbox, we find that the LMIs in Theorem 3.6 are feasible. The gain matrices of the designed controller and the decentralized event-triggered parameters are obtained as

K1 = [0.1348, −0.0607; −0.0676, 0.0310], K2 = [2.6906, −2.4356; −2.4330, 2.2087],
Γ1 = [25.4636, −16.7946; −16.7946, 14.4952], Γ2 = [96.9632, −78.4483; −78.4483, 70.3353].
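To complement the reported trajectories, a minimal simulation sketch (our own, with heavily simplified dynamics: only the leading decay terms −A_r θ and a small multiplicative noise are kept, with an Euler approximation of the Markov chain and assumed trigger parameters δ = 0.1, Γ = I) illustrates how a mode-switching sample path and the event-triggered transmission instants can be generated:

```python
import numpy as np

rng = np.random.default_rng(2)

A = {1: np.diag([5.0, 5.0]), 2: np.diag([5.0, 3.0])}  # mode matrices from Example 1
PI = np.array([[-4.0, 4.0], [5.0, -5.0]])             # Markov generator (pi_ij)
delta, Gamma = 0.1, np.eye(2)                         # assumed trigger parameters

dt, steps = 1e-3, 5000
theta = np.array([2.0, -2.0])        # initial state used in Figure 1
mode, last_sent, events = 1, theta.copy(), 0

for _ in range(steps):
    # Mode switch with probability -pi_ii * dt (Euler step of the chain).
    if rng.random() < -PI[mode - 1, mode - 1] * dt:
        mode = 2 if mode == 1 else 1
    # Euler-Maruyama step: linear decay plus small multiplicative noise.
    theta = theta + (-A[mode] @ theta) * dt \
            + 0.1 * theta * np.sqrt(dt) * rng.standard_normal(2)
    # Event-trigger test: transmit only when the error energy is too large.
    err = theta - last_sent
    if err @ Gamma @ err > delta * (theta @ Gamma @ theta):
        last_sent, events = theta.copy(), events + 1

assert np.all(np.isfinite(theta))
```

Because the decay terms dominate the noise, the simulated state contracts toward the origin while only a sparse sequence of transmission events is generated, which is the qualitative behaviour the decentralized scheme (8) is designed to produce.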
For numerical simulation, we consider the initial state [2; −2]^T. Figure 1 depicts the time responses of the state variables.

Table 1. The maximum allowable upper bound τ̄2 for different values of τ̄1 (μ1 = 0.2).

τ̄1 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.7
τ̄2 | 0.9763 | 0.8634 | 0.8575 | 0.8401 | 0.7163 | 0.6943

[Figure 1. State trajectory of the system in Example 1.]

Figure 1 confirms that the condition proposed in Theorem 3.6 leads to a stochastically stable equilibrium point for model (16).
Example 2. Consider the neural network model (75) with

A1 = [5, 0; 0, 5], B1 = [1, 1.2; 0.2, −1], C1 = [0.2, 0.2; 0.3, −0.1], D1 = [0.4, 0.1; 0, −0.3],
H1 = [0.5, −0.1; −0.4, 0.2], Gf = [0.5, 0; −0.5, 0.3], Cd = [−1, 1; 0, 0.4], Dg = [0.4, 0.1; 0.2, −0.1],
Σ1 = [0.2, 0; 0, 0.5], Σ2 = [0.3, 0; 0, 0.2], F1 = 0, F2 = [0.5, 0; 0, 0.5].
Let τ̄1 = 0.5, τ̄2 = 1, μ1 = 0.2 and d̄ = 0.3. Applying Corollary 3.11 in the MATLAB LMI control toolbox, we obtain the H∞ performance index γ and a set of feasible solutions as follows:

P = [25.3003, 0.2379; 0.2379, 25.4233], L1 = [23.6251, −4.5229; −4.5229, 30.0683], L2 = [1.9509, −1.2023; −1.2023, 4.3722],
L3 = [46.0623, −1.3416; −1.3416, 36.2339], R1 = [19.5212, −1.0258; −1.0258, 12.0195], R2 = [6.4886, −4.3835; −4.3835, 5.1887],
R3 = [27.2108, 13.1915; 13.1915, 22.0311], Z1 = [1.7284, −1.4504; −1.4504, 1.2353], Z2 = [1.3069, −1.1017; −1.1017, 0.9681],
Z3 = [11.0134, −8.6754; −8.6754, 8.4152], Q1 = [1.5541, −1.3020; −1.3020, 1.1084], Q2 = [258.5716, −216.6359; −216.6359, 184.4153],
Q3 = [0.5180, −0.4340; −0.4340, 0.3695], T1 = [1.2445, −1.0458; −1.0458, 0.9096], T2 = [1.7560, −1.4701; −1.4701, 1.2853],
S3 = [381.4859, −294.8041; −294.8041, 290.3739], S4 = [916.0952, −707.9808; −707.9808, 697.2868],
M1 = [1.3606, −1.1335; −1.1335, 0.9805], N1 = [1.5371, −1.2807; −1.2807, 1.1070], U = [1.5371, −1.2807; −1.2807, 1.1070],
W = [7.4446, −5.4127; −5.4127, 4.4804], γ = 1.4234, λ1 = 1.2587, λ2 = 1.3221, λ3 = 1.1832.

Therefore, by Corollary 3.11, the slave system (5) can completely synchronize with the master system (1), and the control gain matrix and event-triggered parameters are obtained as

K = [0.0748, −0.0014; −0.0038, 0.0523], Γ = [1.2789, 0.0176; 0.0176, 1.1474].

Figure 2 shows the state response with the initial condition [4, −5]^T. The above result shows that all the conditions stated in Corollary 3.11 are satisfied, and hence the considered system is stochastically stable under the event-triggered condition (8).

Example 3. We consider the interval time-varying delayed neural network proposed in (77) with

A1 = [1.3973, 0, 0; 0, 2.3842, 0; 0, 0, 3.1874], B1 = [0.6362, 2.1984, 3.2834; 1.3298, 2.1034, 2.3294; 3.1834, 4.2830, 0.2834],
[Figure 2. State trajectory of the system in Example 2.]

C1 = [0.2084, 3.1984, −2.9321; −2.2918, −1.2993, 1.3945; 2.9424, −1.0376, 0.9321],
F1 = 0, F2 = diag{1.8321, 1.9732, 2.2842}, D1 = 0.
Again, our purpose is to find the maximum allowable time-delay upper bound τ2 for the given lower bound τ1 = 2 and d̄ = 1 under different μ1, such that the delayed NNs are asymptotically stable. The corresponding upper bounds of τ2 for different μ1, derived by the methods proposed in [7, 16] and by Corollary 3.13 of this paper, are shown in Table 2. It is clear that our criteria provide a larger delay upper bound than those in [7, 16], which means that the criteria proposed in this paper are less conservative for this Example 3.
Table 2. The maximum allowable time delay upper bound τ̄2 for different values of μ1 with τ̄1 = 2.

Methods | μ1 = 0 | μ1 = 0.2 | μ1 = 0.6 | μ1 = 0.9
[7] | 2.82 | 2.63 | 2.46 | 2.28
[16] | 2.88 | 2.71 | 2.52 | 2.34
This paper | 2.95 | 3.12 | 2.60 | 2.65

Example 4. Consider the following neural network with time-varying delays (77), with the parameters given in [13, 20]:

A1 = [0.8, 0; 0, 5.3], B1 = [0, 0; 0, 0], C1 = [0.1, 0.3; 0.9, 0.1],

where F1 = 0 and F2 = diag{1, 1}, which is equivalent to L = diag(1, 1) in [11]. By solving Example 4 using the LMI in Remark 3.14, the computed upper bounds of τ2 for various values of μ1 are listed in Table 3. It is seen from Table 3 that the stability criterion proposed in this paper gives much less conservative results than those in the existing literature [13, 20].

Remark 4.1. In this paper, the controller u(t) = Ki(θ(t − η2(t)) − s(t − η2(t))) is designed by the decentralized event-triggered approach, which needs less information from the network outputs than the controllers in [26, 29]. Thus the above-mentioned approach can lead to a significant reduction of the information communication burden in the network and can save computing cost.
Table 3. The maximum allowable time delay upper bound τ̄2 with different values of μ1.

Methods | τ1 = 1, μ1 = 0.95 | τ1 = 1, μ1 = 0.99 | τ1 = 2, μ1 = 0.95 | τ1 = 2, μ1 = 0.99
[13] | 8.4119 | 5.4834 | 9.4119 | 6.4377
[20] | 12.5249 | 9.5445 | 14.2468 | 11.4567
This paper | 12.8452 | 10.1121 | 14.5247 | 12.1567

5. Conclusion
In this paper, the problem of robust H∞ synchronization of stochastic neural networks with mixed time-varying delays and Markovian jump parameters under a decentralized event-triggered mechanism has been studied. By employing a Lyapunov functional and the free-weighting-matrix technique, new delay-dependent criteria for the stochastic stability of the addressed neural networks have been established in terms of linear matrix inequalities. The usefulness and superiority of our results are illustrated by numerical examples. The proposed results may be utilized to study generalized neural networks, memristor-based recurrent neural networks, and complex-valued neural networks.
6. Acknowledgement

This work was partially supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2016R1A6A1A03013567), and by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) and the Ministry of Trade, Industry & Energy (MOTIE) of the Republic of Korea (No. 20174030201670). This work was supported in part by CSIR project No. 25(0274)/17/EMR-II dated 27/04/2017.
References

[1] A. Cichocki, R. Unbehauen, Wiley, Chichester, 1993.
[2] S. Haykin, Prentice-Hall, NJ, 1994.
[3] B. Wong, Y. Selvi, Inf. Manage. 34 (1998) 129.
[4] T. Li, W. X. Zheng, C. Lin, IEEE Trans. Neural Netw. 22 (2011) 2138.
[5] R. Anbuvithya, K. Mathiyalagan, R. Sakthivel, P. Prakash, Cogn. Neurodynamics 10 (2016) 339.
[6] Y. Liu, Z. Wang, X. Liu, Neural Netw. 19 (2006) 667.
[7] C. D. Zheng, H. G. Zhang, Z. S. Wang, IEEE Trans. Circuits Syst. II 56 (2009) 250.
[8] Z.-S. Lv, C.-P. Zhu, P. Nie, Z. Zhao, H.-J. Yang, Y.-J. Wang, C.-K. Hu, Front. Phys. 12 (2017) 128902.
[9] S. Arik, IEEE Trans. Circuits Syst. I: Fundam. Theory Appl. 47 (2000) 1089.
[10] M. Syed Ali, Neurocomputing 149 (2015) 1280.
[11] A. Seuret, F. Gouaisbaut, Automatica 49 (2013) 2860.
[12] S.-F. Huang, J.-Q. Zhang, M.-H. Wang, C.-K. Hu, Physica A 499 (2018) 88.
[13] Y. Zhang, D. Yue, E. Tian, Appl. Math. Comput. 208 (2009) 249.
[14] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, SIAM, Philadelphia, 1994.
[15] Q. Zhu, J. Cao, Neurocomputing 73 (2010) 2671.
[16] J. Xia, J. Yu, Y. Li, H. Zheng, Circuits Syst. Signal Process. 31 (2012) 1535.
[17] M. S. Ali, S. Saravanan, Chin. J. Phys. 55 (2017) 1953.
[18] X. Mao, Horwood, Chichester, 1997.
[19] M. Kovacic, Biol. Cybern. 64 (1991) 337.
[20] M. Syed Ali, S. Arik, R. Saravanakumar, Neurocomputing 158 (2015) 167.
[21] C. Yin, Y. Cheng, X. Huang, S. Zhong, Y. Li, K. Shi, Neurocomputing 207 (2016) 437.
[22] H. Shen, Y. Zhu, L. Zhang, Ju H. Park, IEEE Trans. Neural Netw. Learn. Syst. 28 (2016) 346.
[23] Q. Zhu, J. Cao, IEEE Trans. Neural Netw. 21 (2010) 1314.
[24] R. Saravanakumar, M. Syed Ali, C. Ahn, H. Karimi, P. Shi, IEEE Trans. Neural Netw. Learn. Syst. 28 (2017) 1840.
[25] J. Xie, Y. Kao, Neural Comput. Appl. 26 (2015) 1537.
[26] X. Yang, J. Cao, Z. Yang, SIAM J. Control Optim. 51 (2013) 3486.
[27] P. Selvaraj, R. Sakthivel, O. M. Kwon, Neural Netw. 105 (2018) 154.
[28] H. R. Karimi, H. Gao, IEEE Trans. Syst., Man, Cybern. B, Cybern. 40 (2010) 173.
[29] N. Li, J. Cao, Neural Netw. 61 (2015) 1.
[30] Z. G. Wu, P. Shi, H. Su, J. Chu, IEEE Trans. Cybern. 43 (2013) 1796.
[31] D. Tong, Q. Zhu, W. Zhou, Y. Xu, J. Fang, Neurocomputing 117 (2013) 91.
[32] S. Tourani, Z. Rahmani, B. Rezaie, Chin. J. Phys. 54 (2016) 285.
[33] C. K. Ahn, Int. J. Comput. Int. Sys. 4 (2011) 855.
[34] M. Syed Ali, R. Saravanakumar, Appl. Math. Comput. 249 (2014) 510.
[35] M. Syed Ali, R. Saravanakumar, Q. Zhu, Neurocomputing 166 (2015) 84.
[36] Y. Du, X. Liu, S. Zhong, Chaos Solitons Fractals 91 (2016) 1.
[37] R. Sakthivel, K. Mathiyalagan, S. Marshal Anthoni, IET Control Theory Appl. 6 (2012) 1220.
[38] D. Du, B. Qi, M. Fei, C. Peng, Inform. Sci. 325 (2015) 393.
[39] L. Wang, Z. Wang, T. Huang, G. Wei, IEEE Trans. Cybern. 168 (2015) 283.
[40] H. Wang, P. Shi, C. Lim, Q. Xue, Int. J. Robust Nonlin. 25 (2015) 3422.
[41] D. Lehmann, J. Lunze, Control Eng. Pract. 19 (2011) 101.
[42] H. Ma, D. Liu, D. Wang, F. Tan, C. Li, Neurocomputing 161 (2015) 267.
[43] M. Mazo, P. Tabuada, IEEE Trans. Autom. Control 56 (2011) 2456.
[44] S. Senan, M. Syed Ali, R. Vadivel, S. Arik, Neural Netw. 86 (2017) 32.
[45] D. Liu, F. Hao, Int. J. Control Autom. 11 (2013) 33.
[46] M. C. F. Donkers, W. P. M. H. Heemels, IEEE Trans. Autom. Control 57 (2012) 1362.
[47] P. Tallapragada, N. Chopra, IEEE Trans. Autom. Control 59 (2014) 3312.
[48] S. Wen, Z. Zeng, M. Z. Q. Chen, T. Huang, IEEE Trans. Neural Netw. Learn. Syst. 28 (2017) 2334.
[49] B. Zhou, X. Liao, T. Huang, G. Chen, Neurocomputing 157 (2015) 199.
[50] H. Li, X. Liao, G. Chen, D. Hill, Z. Dong, T. Huang, J. Franklin Inst. 66 (2015) 1.
[51] J. Zhang, C. Peng, Neurocomputing 173 (2016) 1824.
[52] H. Wang, Y. Ying, R. Lu, A. Xue, Inform. Sci. 329 (2016) 540.
[53] A. Xue, H. Wang, R. Lu, Neurocomputing 190 (2016) 165.
[54] X. Song, Y. Men, J. Zhou, J. Zhao, H. Shen, Appl. Math. Comput. 298 (2017) 123.
[55] L. El Ghaoui, F. Oustry, M. AitRami, IEEE Trans. Autom. Control 42 (1997) 1171.