
Almost sure stability of hybrid stochastic systems under asynchronous Markovian switching

Shixian Luo (a), Feiqi Deng (a,∗), Bo Zhang (b), Zhipei Hu (c)

(a) School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, China
(b) School of Automation, Guangdong University of Technology, Guangzhou 510006, China
(c) School of Electrical and Information Engineering, Shantou University, Shantou 515063, China

∗ Corresponding author. E-mail addresses: [email protected] (S. Luo), [email protected] (F. Deng), [email protected] (B. Zhang), [email protected] (Z. Hu).

Article info

Article history: Received 24 July 2017; Received in revised form 28 September 2018; Accepted 5 October 2019; Available online xxxx.

Keywords: Hybrid stochastic systems; Asynchronous switching control; Markovian jump; Almost sure stability

Abstract

This paper presents two novel Lyapunov methodologies for the almost sure stability of hybrid stochastic systems under asynchronous Markovian switching. One is established through a concave composite Lyapunov function together with the exponential martingale inequality. The other is derived from the strong law of large numbers and can exploit the coupling between the drift part and the diffusion part of the system, thus fully capturing the stabilizing effect of stochastic noise. Both stability conditions give a quantitative relationship between the size of the detected delay of the switching signal, the stationary distribution, and the generator of the Markov chain. As applications, easy-to-check stability and stabilization criteria are further provided for one-sided growth nonlinear systems and for linear systems. Numerical examples illustrate the proposed theoretical results.

1. Introduction

Switched systems are a class of hybrid systems that are widely applied in many fields such as failure-prone manufacturing, traffic management, power systems, and networked control systems. Stability properties and control synthesis are the crucial and fundamental problems behind these successful applications, and they have been extensively studied (see, for instance, [1-4]). It has been recognized that mode-dependent controllers are more general, flexible, and less conservative in solving control synthesis problems. However, to implement a mode-dependent controller, one must have accurate access to the real-time switching signal of the plant. In practice, it is unrealistic to obtain the switching signal instantaneously [3]. The so-called ‘‘asynchronous switching’’, which takes into account the detected delay of the switching signal when implementing the controller, is therefore more reasonable in reality [5,6]. Salient results on switched systems under asynchronous switching control can be found in [5-8].

In recent years, stochastic modeling has received widespread attention in automatic control. As a popular class of stochastic models, hybrid stochastic systems driven by continuous-time Markov chains and Wiener processes have been studied in many works, see [9-17]. However, asynchronous switching control problems for stochastic systems have received comparatively little attention.

For dwell-time-based asynchronous switching control, a general assumption (A) is made: the detected delay of the switching signal is less than the corresponding switching interval, see [18,19]. However, this assumption is difficult to meet for randomly switched systems because the switching signal is a stochastic process. For example, under Markovian switching it is well known that, for every sample path, each switching interval is almost surely greater than zero, but it is not easy to determine a positive lower bound on the switching intervals that holds for all sample paths. For asynchronous Markovian switching, some work has been done on discrete-time Markovian jump systems, see [20-23]. Razumikhin-type criteria for the mean square stability of nonlinear stochastic delay systems with asynchronous Markovian switching were established in [24,25]. Unfortunately, these criteria not only rest on assumption (A) but also depend on a sequence of stopping times generated by the Markov chain, which makes them inconvenient to apply and verify.

The stability notions for stochastic systems include moment stability and stability in probability, where stability with probability 1 is called almost sure stability. In [26], the author pointed out that almost sure stability is much closer to the stability properties of real systems, because the state observed in a running system is a sample path rather than an average or a probability. Although moment stability implies almost sure stability under certain conditions, such as a linear growth condition, moment stability is usually more restrictive, see [27]. Furthermore, it is known from [28-30] that almost sure stability allows one to characterize the stabilizing role of stochastic noise. However, to the authors' best knowledge, little work has been done on the asynchronous Markovian switching control problem for hybrid stochastic systems.


For synchronous Markovian jump systems, almost sure stability analysis is usually carried out by combining a Lyapunov function with the ergodic theorem of continuous-time Markov chains, see [31]. However, in the presence of asynchronous switching the system parameters involve a bivariate stochastic process, which is difficult to handle with the standard ergodic theorem.

This paper revisits the asynchronous Markovian switching control problem for hybrid stochastic systems by considering the relationship between the size of the detected delay of the switching signal, the generator, and the stationary distribution of the Markov chain. The main contributions are summarized as follows: (i) We establish a form of ergodic theorem for a binary continuous-time stochastic process formed by the hysteretic and the current state of an irreducible Markov chain. This helps us identify conditions on the sample Lyapunov exponent that guarantee the almost sure stability of the considered system. (ii) Two novel criteria for the almost sure stability of hybrid stochastic systems under asynchronous Markovian switching are established, which quantitatively analyze the effect of the detected delay of the switching signal on stability. Compared with results on the synchronous switching case [16,31] or the non-switching case [27-30], our second stability criterion can exploit the coupling between the drift part and the diffusion part of the system; thus, it is not only less conservative than the results of [16,27-31] in some scenarios but also provides flexibility in the design of stochastic stabilization. (iii) For hybrid stochastic systems with one-sided growth nonlinearities and for linear hybrid stochastic systems, easy-to-check stability and stabilization criteria are further provided.

2. Problem formulation

Notation. Let ‖·‖ denote the Euclidean norm of a vector. For any A ∈ R^{n×m}, define ‖A‖_F = √(trace(A^T A)). C^{1,2}(R_+ × R^n; R_+) denotes the family of all nonnegative functions V(t, x) on R_+ × R^n that are twice continuously differentiable in x and once in t. Given a complete probability space (Ω, F, P) with a natural filtration {F_t}_{t≥0} satisfying the usual conditions, let w(t) = (w_1(t), ..., w_m(t))^T ∈ R^m be an m-dimensional Wiener process defined on this probability space.

Consider the following hybrid stochastic system:

    dx(t) = f_{σ(t)}(t, x(t), u(t)) dt + g_{σ(t)}(t, x(t), u(t)) dw(t),  t > 0,
    x(0) = x_0,  σ(0) = σ_0,                                                      (1)

where x(t) ∈ R^n is the state variable and {σ(t), t ≥ 0} is a right-continuous Markov process defined on the probability space, taking values in the finite set M = {1, 2, ..., N}, with generator Γ = (γ_{ij}), i, j ∈ M, given by

    P{σ(t + Δt) = j | σ(t) = i} = γ_{ij} Δt + o(Δt),        i ≠ j,
                                  1 + γ_{ii} Δt + o(Δt),    i = j,

where Δt > 0, lim_{Δt→0} o(Δt)/Δt = 0, and γ_{ij} ≥ 0 for i ≠ j, γ_{ii} ≤ 0 with γ_{ii} = −∑_{j=1, j≠i}^N γ_{ij}. Here u(t) = h_{σ(t)}(t, x(t)) is a mode-dependent control input used to achieve stability or certain performance specifications. For each i ∈ M, f_i : R_+ × R^n × R^{n_0} → R^n, g_i : R_+ × R^n × R^{n_0} → R^{n×m}, and h_i : R_+ × R^n → R^{n_0}. Throughout this paper, we assume that the initial value (x_0, σ_0) is deterministic.

Note that in the controlled system (1) the control input is synchronized with the switching rule. However, as pointed out in [5,6], this requirement is hard to satisfy in physical systems: the control input may be subject to a time delay induced by the identification of the system mode or by the implementation of the matched controller. That is, the control input should be modified to u(t) = h_{σ(t−τ)}(t, x(t)), where τ > 0 and σ(t − τ) = σ_0 if t ≤ τ. Hence, the resulting closed-loop system is given by

    dx(t) = f_{σ(t)}(t, x(t), h_{σ(t−τ)}(t, x(t))) dt + g_{σ(t)}(t, x(t), h_{σ(t−τ)}(t, x(t))) dw(t),  t > 0,
    x(0) = x_0,  σ(0) = σ_0.                                                      (2)

Obviously, the mismatched control input may degrade the performance and even cause instability. Therefore, this paper attempts to establish general stability criteria for (2) and to analyze quantitatively the effect of the delay of the switching signal on the stability performance. For this purpose, we always assume that for each i, j ∈ M, f̃_{i,j}(t, 0) ≡ 0 and g̃_{i,j}(t, 0) ≡ 0 for all t ≥ 0, and that both f̃_{i,j}(t, x) and g̃_{i,j}(t, x) satisfy the local Lipschitz condition, where f̃_{i,j}(t, x) = f_i(t, x, h_j(t, x)) and g̃_{i,j}(t, x) = g_i(t, x, h_j(t, x)).

For V ∈ C^{1,2}(R_+ × R^n; R_+), define the operators associated with (2) by

    LV(t, x, i, j) = ∂V(t, x)/∂t + (∂V(t, x)/∂x) f̃_{i,j}(t, x) + (1/2) trace[ g̃_{i,j}^T(t, x) (∂²V(t, x)/∂x²) g̃_{i,j}(t, x) ],
    HV(t, x, i, j) = (∂V(t, x)/∂x) g̃_{i,j}(t, x).

Then, according to Itô's formula,

    dV(t, x(t)) = LV(t, x(t), σ(t), σ(t − τ)) dt + HV(t, x(t), σ(t), σ(t − τ)) dw(t).      (3)

To make this paper more readable, we recall the definition of almost sure exponential stability.

Definition 1 ([27,29]). The zero solution of system (2) is said to be almost surely exponentially stable if

    lim sup_{t→∞} (1/t) log ‖x(t)‖ < 0,  a.s.,

holds for all (x_0, σ_0) ∈ R^n × M.

3. Almost sure stability analysis

We begin with a lemma that plays an important role in establishing the almost sure stability criteria for system (2).

Lemma 1. Let {σ(t), t ≥ 0} be a finite-state, irreducible, right-continuous Markov process characterized by the generator Γ = (γ_{ij})_{N×N} with stationary probability distribution Π = [π_1 π_2 ... π_N]. For any ξ_{i,j} ∈ R, i, j ∈ M, and τ ≥ 0, it holds that

    lim_{t→+∞} (1/t) ∫_0^t ξ_{σ(s),σ(s−τ)} ds = ∑_{i,j=1}^N π_j ξ_{i,j} p_{j,i}(τ),  a.s.,

where e^{Γτ} = (p_{i,j}(τ))_{N×N}.

Proof. Case 1: τ = 0. Note that p_{i,i}(0) = 1 and p_{i,j}(0) = 0 for i ≠ j, i, j ∈ M. It then follows from the ergodic theorem for continuous-time Markov chains [32] that lim_{t→∞} (1/t) ∫_0^t ξ_{σ(s),σ(s)} ds = ∑_{i=1}^N π_i ξ_{i,i}.

Case 2: τ > 0. Since ξ_{i,j} is a constant, obviously

    lim_{t→+∞} (1/t) ∫_0^t ξ_{σ(s),σ(s−τ)} ds = lim_{t→+∞} (1/t) ∫_{−τ}^{t−τ} ξ_{σ(s+τ),σ(s)} ds = lim_{t→+∞} (1/t) ∫_0^t ξ_{σ(s+τ),σ(s)} ds.

Moreover, keeping the right-continuity of σ(t) in mind, and noting that |(1/t) ∫_0^t ξ_{σ(s+τ),σ(s)} ds| ≤ max_{i,j∈M} |ξ_{i,j}| for all t ≥ 0, we obtain

    lim_{t→+∞} (1/t) ∫_0^t ξ_{σ(s+τ),σ(s)} ds = lim_{h→0+} lim_{t→+∞} (1/t) ∫_0^t ξ_{σ(s+τ),σ(⌊s/h⌋h)} ds,

where ⌊a⌋ denotes the largest integer not greater than a ∈ R. Choose h ∈ (0, τ). For each j ∈ M, define the stopping times

    k_1^j = min{k ∈ N_0 : σ(kh) = j},   k_l^j = min{k > k_{l−1}^j : σ(kh) = j},  l > 1,

and let N̄(t) = ⌊t/h⌋ and N_j(t) = ∑_{k=1}^{N̄(t)} I_{{σ(kh)=j}}. Apparently, N̄(t) = ∑_{j=1}^N N_j(t). Under these notations,

    ∫_0^t ξ_{σ(s+τ),σ(⌊s/h⌋h)} ds = ∑_{k=0}^{N̄(t)} ∫_{kh}^{(k+1)h} ξ_{σ(s+τ),σ(kh)} ds + ∫_{(N̄(t)+1)h}^{t} ξ_{σ(s+τ),σ((N̄(t)+1)h)} ds
                                   = ∑_{i,j=1}^N ξ_{i,j} ∑_{l=0}^{N_j(t)} ∫_{k_l^j h}^{(k_l^j+1)h} I_{{σ(s+τ)=i}} ds + ∫_{(N̄(t)+1)h}^{t} ξ_{σ(s+τ),σ((N̄(t)+1)h)} ds.

Since |∫_{(N̄(t)+1)h}^{t} ξ_{σ(s+τ),σ((N̄(t)+1)h)} ds| ≤ max_{i,j∈M} |ξ_{i,j}| h for all t ≥ 0, we have lim_{t→+∞} (1/t) ∫_{(N̄(t)+1)h}^{t} ξ_{σ(s+τ),σ((N̄(t)+1)h)} ds = 0. Hence

    lim_{t→+∞} (1/t) ∫_0^t ξ_{σ(s+τ),σ(⌊s/h⌋h)} ds = lim_{t→+∞} ∑_{i,j=1}^N ξ_{i,j} (N̄(t)/t) (N_j(t)/N̄(t)) (1/N_j(t)) ∑_{l=0}^{N_j(t)} ∫_{k_l^j h}^{(k_l^j+1)h} I_{{σ(s+τ)=i}} ds.

It is easily calculated that lim_{t→+∞} N̄(t)/t = 1/h. Furthermore, the ergodic theorem for discrete-time Markov chains [32] yields lim_{t→+∞} N_j(t)/N̄(t) = π_j. We now estimate lim_{t→+∞} (1/N_j(t)) ∑_{l=0}^{N_j(t)} ∫_{k_l^j h}^{(k_l^j+1)h} I_{{σ(s+τ)=i}} ds by the strong law of large numbers. Note that for given i, j ∈ M, the probability of the set {σ(s+τ) = i, s ∈ (k_l^j h, (k_l^j + 1)h)} depends only on the length of the interval (k_l^j h, (k_l^j + 1)h) and not on l ∈ N. Therefore, for a given pair of modes i, j ∈ M, the random variables ∫_{k_l^j h}^{(k_l^j+1)h} I_{{σ(s+τ)=i}} ds, l ∈ N, are independent and identically distributed. By the strong law of large numbers and Fubini's theorem,

    lim_{t→+∞} (1/N_j(t)) ∑_{l=0}^{N_j(t)} ∫_{k_l^j h}^{(k_l^j+1)h} I_{{σ(s+τ)=i}} ds = E[ ∫_{k_l^j h}^{(k_l^j+1)h} I_{{σ(s+τ)=i}} ds ]
        = ∑_{k=0}^{+∞} E[ ∫_{kh}^{(k+1)h} I_{{σ(s+τ)=i}} ds | k_l^j = k ] P{k_l^j = k}
        = ∑_{k=0}^{+∞} ∫_{kh}^{(k+1)h} P{σ(s+τ) = i | σ(kh) = j} ds P{k_l^j = k}
        = ∑_{k=0}^{+∞} ∫_{kh}^{(k+1)h} p_{j,i}(s + τ − kh) ds P{k_l^j = k}
        = ∫_0^h p_{j,i}(s + τ) ds,  a.s.

Thus, lim_{t→+∞} (1/t) ∫_0^t ξ_{σ(s+τ),σ(⌊s/h⌋h)} ds = ∑_{i,j=1}^N ξ_{i,j} π_j (1/h) ∫_0^h p_{j,i}(s + τ) ds, a.s. Moreover, it follows from Theorem 2.5 of [32] that the transition functions p_{i,j}(t), i, j ∈ M, are continuous on R_+. Therefore,

    lim_{t→+∞} (1/t) ∫_0^t ξ_{σ(s+τ),σ(s)} ds = ∑_{i,j=1}^N ξ_{i,j} π_j lim_{h→0+} (1/h) ∫_0^h p_{j,i}(s + τ) ds = ∑_{i,j=1}^N ξ_{i,j} π_j p_{j,i}(τ),  a.s.

The proof is complete. ■
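Lemma 1 lends itself to a direct numerical check: simulate a long sample path of the Markov chain from its generator Γ, form the delayed pair (σ(s), σ(s − τ)), and compare the time average of ξ_{σ(s),σ(s−τ)} with the analytic limit ∑_{i,j} π_j ξ_{i,j} p_{j,i}(τ). The following Python sketch is only an illustration and is not part of the paper; the three-mode generator, the weights ξ_{i,j}, and the delay τ are made-up data, and numpy and scipy are assumed to be available.

```python
import numpy as np
from scipy.linalg import expm, null_space

def simulate_ctmc(Gamma, T, i0=0, rng=None):
    """Sample one path of a continuous-time Markov chain on {0, ..., N-1}.
    Returns the jump times (t_0 = 0) and the states held on [t_k, t_{k+1})."""
    rng = rng or np.random.default_rng(0)
    times, states = [0.0], [i0]
    t, i = 0.0, i0
    while t < T:
        rate = -Gamma[i, i]                       # total jump rate out of state i
        t += rng.exponential(1.0 / rate)          # exponential holding time
        p = Gamma[i].copy(); p[i] = 0.0; p /= rate
        i = int(rng.choice(len(p), p=p))          # embedded-chain transition
        times.append(t); states.append(i)
    return np.array(times), np.array(states)

def state_at(times, states, s):
    """sigma(s) for the right-continuous path; sigma(s) = sigma(0) for s < 0."""
    if s < 0:
        return states[0]
    return states[np.searchsorted(times, s, side='right') - 1]

# Made-up data for the check (not from the paper).
Gamma = np.array([[-2.0, 1.5, 0.5],
                  [ 1.0, -1.8, 0.8],
                  [ 0.6,  0.4, -1.0]])
xi = np.array([[-1.0, 0.8, 0.3],
               [ 0.5, -1.2, 0.2],
               [ 0.4,  0.6, -0.9]])
tau, T, dt = 0.3, 1000.0, 0.01

times, states = simulate_ctmc(Gamma, T)
grid = np.arange(0.0, T, dt)
cur = np.array([state_at(times, states, s) for s in grid])
lag = np.array([state_at(times, states, s - tau) for s in grid])
time_average = xi[cur, lag].mean()        # approximates (1/T) * integral of xi_{sigma(s),sigma(s-tau)}

pi = null_space(Gamma.T)[:, 0]; pi = pi / pi.sum()   # stationary distribution: pi Gamma = 0
P_tau = expm(Gamma * tau)                            # p_{j,i}(tau) = (e^{Gamma tau})_{j,i}
analytic = sum(pi[j] * xi[i, j] * P_tau[j, i] for i in range(3) for j in range(3))

print(time_average, analytic)             # the two values should agree for large T
```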


With the help of Lemma 1, we are ready to present the following theorem.

Theorem 1. If there exist a nonnegative function V ∈ C^{1,2}(R_+ × R^n; R_+) and constants c > 0, p > 0, c_{i,j} ≥ 0, and ξ_{i,j}, i, j ∈ M, such that the following conditions hold for all t ≥ 0, x ∈ R^n, and i, j ∈ M:

    c ‖x‖^p ≤ V(t, x),                                                     (4)
    LV(t, x, i, j) ≤ ξ_{i,j} V(t, x),                                       (5)
    ‖HV(t, x, i, j)‖² ≥ c_{i,j} V²(t, x),                                   (6)
    ∑_{i,j=1}^N π_j ( ξ_{i,j} − (1/2) c_{i,j} ) p_{j,i}(τ) < 0,             (7)

then system (2) is almost surely exponentially stable.

Proof. Since f̃_{i,j}(t, x) and g̃_{i,j}(t, x) satisfy the local Lipschitz condition and inequality (5) holds, it follows from the Khasminskii test that there is a unique global solution x(t) to system (2). We now prove that x(t) is almost surely exponentially stable. Clearly, if x_0 = 0, then x(t; 0, 0) ≡ 0. Therefore, we only need to show that x(t) → 0 a.s. for x_0 ≠ 0. Fix any x_0 ≠ 0. By the same technique used in the proof of [12, Lemma 2.1], one can show that x(t) ≠ 0 for all t ≥ 0 almost surely. Thus, we choose the composite Lyapunov function W(t) = log V(t, x(t)) for system (2). Applying Itô's rule to W(t), we have

    dW(t) = (1/V(t, x(t))) [ LV(t, x(t), σ(t), σ(t−τ)) dt + HV(t, x(t), σ(t), σ(t−τ)) dw(t) ]
            − (1/(2V²(t, x(t)))) ‖HV(t, x(t), σ(t), σ(t−τ))‖² dt.                                    (8)

By condition (5), we get

    W(t) ≤ W(0) + ∫_0^t ξ_{σ(s),σ(s−τ)} ds − ∫_0^t ‖HV(s, x(s), σ(s), σ(s−τ))‖² / (2V²(s, x(s))) ds + M(t),    (9)

where M(t) = ∫_0^t HV(s, x(s), σ(s), σ(s−τ)) / V(s, x(s)) dw(s) is a continuous local martingale with M(0) = 0. Its quadratic variation is ⟨M(t), M(t)⟩_t = ∫_0^t ‖HV(s, x(s), σ(s), σ(s−τ))‖² / V²(s, x(s)) ds. Given ε ∈ (0, 1) and k = 1, 2, ..., the exponential martingale inequality yields

    P{ sup_{0≤t≤k} [ M(t) − (ε/2) ∫_0^t ‖HV(s, x(s), σ(s), σ(s−τ))‖² / V²(s, x(s)) ds ] > (2/ε) log k } ≤ 1/k².


By virtue of the Borel-Cantelli lemma, for almost all ω ∈ Ω there exists an integer k_0 = k_0(ω) such that, if k ≥ k_0,

    M(t) ≤ (ε/2) ∫_0^t ‖HV(s, x(s), σ(s), σ(s−τ))‖² / V²(s, x(s)) ds + (2/ε) log k

holds for all t ∈ [0, k]. Substituting the above inequality into (9) and using (6), we obtain

    W(t) ≤ W(0) + ∫_0^t ( ξ_{σ(s),σ(s−τ)} − ((1−ε)/2) c_{σ(s),σ(s−τ)} ) ds + (2/ε) log k

for all t ∈ [0, k]. It follows that

    W(t)/t ≤ W(0)/t + (1/t) ∫_0^t ( ξ_{σ(s),σ(s−τ)} − ((1−ε)/2) c_{σ(s),σ(s−τ)} ) ds + (2/(εt)) log k

for all t ∈ [k−1, k]. Letting k → ∞, and using Lemma 1 and inequality (4), we have

    lim sup_{t→∞} (1/t) log ‖x(t)‖ ≤ (1/p) ∑_{i,j=1}^N π_j ( ξ_{i,j} − ((1−ε)/2) c_{i,j} ) p_{j,i}(τ).

Letting ε → 0+ and using condition (7), we conclude that the zero solution of (2) is almost surely exponentially stable. ■

Remark 1. It is noteworthy that the generator Γ of the Markov chain directly affects the admissible size of the detected delay. If the magnitudes of γ_{ii}, i ∈ M, are small, i.e. in the slowly switching case, then the admissible detected delay is larger; in the fast switching case (|γ_{ii}| large), the admissible delay τ is smaller. This corresponds to the dwell-time-switching-based results [6-8,18,19].

Observe that conditions (5) and (6) restrict the drift part and the diffusion part of system (2) separately. Hence, these conditions may ignore the coupling between the drift part and the diffusion part, and conditions (5)-(6) are usually conservative for almost sure stability. This can be seen in Example 1. The following result gives a compact condition relating LV(t, x, i, j) and HV(t, x, i, j). For notational brevity, we set g̃_{i,j}(t, x) = [g̃_{1,i,j}(t, x) ... g̃_{m,i,j}(t, x)] and HV(t, x, ℓ, i, j) = (∂V(t, x)/∂x) g̃_{ℓ,i,j}(t, x), where g̃_{ℓ,i,j}(t, x) ∈ R^n, ℓ = 1, ..., m, i, j ∈ M.

Theorem 2. If there exist a nonnegative function V ∈ C^{1,2}(R_+ × R^n; R_+) and constants c > 0, p > 0, κ_{ℓ,i,j}, ξ_{i,j}, i, j ∈ M, ℓ = 1, ..., m, and β ≥ 0, such that (4) and the following conditions hold for all t ≥ 0, x ∈ R^n, and i, j ∈ M:

    LV(t, x, i, j) − ∑_{ℓ=1}^m κ_{ℓ,i,j} HV(t, x, ℓ, i, j) ≤ ξ_{i,j} V(t, x),          (10)
    ∑_{i,j=1}^N π_j ( ξ_{i,j} + (1/2) ∑_{ℓ=1}^m κ_{ℓ,i,j}² ) p_{j,i}(τ) < 0,           (11)
    ‖HV(t, x, ℓ, i, j)‖ ≤ β V(t, x),  ℓ = 1, ..., m,                                   (12)

then system (2) is almost surely exponentially stable.

Proof. Since ‖HV(t, x, i, j)‖² = ∑_{ℓ=1}^m ‖HV(t, x, ℓ, i, j)‖², the stochastic differential equation (8) can be rewritten as

    dW(t) = (1/V(t, x(t))) [ LV(t, x(t), σ(t), σ(t−τ)) − ∑_{ℓ=1}^m κ_{ℓ,σ(t),σ(t−τ)} HV(t, x(t), ℓ, σ(t), σ(t−τ)) ] dt
            − (1/(2V²(t, x(t)))) ∑_{ℓ=1}^m ( HV(t, x(t), ℓ, σ(t), σ(t−τ)) − κ_{ℓ,σ(t),σ(t−τ)} V(t, x(t)) )² dt
            + (1/2) ∑_{ℓ=1}^m κ_{ℓ,σ(t),σ(t−τ)}² dt
            + (1/V(t, x(t))) ∑_{ℓ=1}^m HV(t, x(t), ℓ, σ(t), σ(t−τ)) dw_ℓ(t).

By condition (10),

    W(t) ≤ W(0) + ∫_0^t ( ξ_{σ(s),σ(s−τ)} + (1/2) ∑_{ℓ=1}^m κ_{ℓ,σ(s),σ(s−τ)}² ) ds + ∑_{ℓ=1}^m M_ℓ(t),       (13)

where M_ℓ(t) = ∫_0^t (1/V(s, x(s))) HV(s, x(s), ℓ, σ(s), σ(s−τ)) dw_ℓ(s). Moreover, it follows from (12) that

    ⟨M_ℓ(t), M_ℓ(t)⟩_t = ∫_0^t ‖HV(s, x(s), ℓ, σ(s), σ(s−τ))‖² / V²(s, x(s)) ds ≤ β² t.

Hence lim sup_{t→+∞} ⟨M_ℓ(t), M_ℓ(t)⟩_t / t ≤ β² < +∞. Applying the strong law of large numbers for martingales, lim_{t→+∞} M_ℓ(t)/t = 0, a.s. By Lemma 1, we conclude that system (2) is almost surely exponentially stable if (11) is satisfied. ■

Remark 2. For synchronous switching control, the almost sure stability of Markovian jump systems relies heavily on the stationary distribution of the Markov chain, because all subsystems are connected only through the stationary distribution, see [31]. However, Theorem 2 reveals that for asynchronous switching control, not only the stationary distribution but also the generator of the Markov chain affects the stability. In fact, Theorem 1 establishes a quantitative relationship between the stationary distribution of the Markov chain and the size of the detected delay of the switching signal. Moreover, different from the mean square stability obtained in [24,25], Theorem 1 indicates that an unstable hybrid system can be stabilized by stochastic noise.

Remark 3. Different from Theorem 1, Theorem 2 is established by using the strong law of large numbers. As a result, condition (12) is required. Note that the size of β in (12) does not affect the stability conclusion. Therefore, condition (12) can also be formulated as: for given i, j ∈ M and ℓ = 1, ..., m, ‖HV(t, x, ℓ, i, j)‖ / V(t, x) is uniformly bounded over (t, x) ∈ R_+ × R^n. In fact, this condition is automatically satisfied in the case of linear-growth diffusions (i.e., ‖g̃_{ℓ,i,j}(t, x)‖ ≤ L‖x‖, where L ≥ 0 is a constant) with a quadratic Lyapunov function.


4. Applications

Some theoretical results on asynchronous Markovian switching control for nonlinear hybrid stochastic systems were established in the previous section. We now apply these results, with quadratic Lyapunov functions, to some special classes of systems.

4.1. Hybrid nonlinear stochastic systems

By taking the special Lyapunov function V(t, x) = ‖x‖², Theorems 1 and 2 can be cast into the following result.

Corollary 1. The asynchronously switched hybrid stochastic system (2) is almost surely exponentially stable if one of the following conditions holds:

(a) There exist constants L^f_{i,j} ∈ R and L^g_{i,j}, c_{i,j} ≥ 0 such that the following conditions hold for all x ∈ R^n, t ≥ 0, and i, j ∈ M:
  (i) x^T f̃_{i,j}(t, x) ≤ L^f_{i,j} ‖x‖²,
  (ii) ‖g̃_{i,j}(t, x)‖²_F ≤ L^g_{i,j} ‖x‖²,
  (iii) ‖x^T g̃_{i,j}(t, x)‖ ≥ c_{i,j} ‖x‖²,
  (iv) ∑_{i,j=1}^N π_j ( 2L^f_{i,j} + L^g_{i,j} − 2c_{i,j} ) p_{j,i}(τ) < 0.

(b) There exist constants ξ_{i,j}, κ_{ℓ,i,j} ∈ R and L_{ℓ,i,j} ≥ 0 such that the following conditions hold for all x ∈ R^n, t ≥ 0, i, j ∈ M, and ℓ = 1, ..., m:
  (i) x^T ( f̃_{i,j}(t, x) − ∑_{ℓ=1}^m κ_{ℓ,i,j} g̃_{ℓ,i,j}(t, x) ) ≤ ξ_{i,j} ‖x‖²,
  (ii) ‖g̃_{ℓ,i,j}(t, x)‖ ≤ L_{ℓ,i,j} ‖x‖,
  (iii) ∑_{i,j=1}^N π_j ( 2ξ_{i,j} + ∑_{ℓ=1}^m ( L_{ℓ,i,j}² + (1/2) κ_{ℓ,i,j}² ) ) p_{j,i}(τ) < 0.

Proof. The proof follows from Theorems 1 and 2, and we thus omit it here. ■

4.2. Linear hybrid stochastic systems

This subsection establishes linear matrix inequality (LMI) based stability and stabilization conditions for linear stochastic systems under asynchronous Markovian switching. The linear Markovian jump stochastic system is described as follows:

    dx(t) = [ A_{σ(t)} x(t) + B_{σ(t)} u(t) ] dt + ∑_{ℓ=1}^m [ C_{ℓ,σ(t)} x(t) + D_{ℓ,σ(t)} u(t) ] dw_ℓ(t),  t > 0,
    x(0) = x_0,  σ(0) = σ_0,                                                             (14)

where A_i, C_{ℓ,i} ∈ R^{n×n} and B_i, D_{ℓ,i} ∈ R^{n×n_0} are known constant matrices. The control input is constructed by a mode-dependent state-feedback controller driven by the delayed switching signal:

    u(t) = K_{σ(t−τ)} x(t),                                                              (15)

where K_i ∈ R^{n_0×n} and τ > 0. Combining (14) with (15), we have the following closed-loop system:

    dx(t) = ( A_{σ(t)} + B_{σ(t)} K_{σ(t−τ)} ) x(t) dt + ∑_{ℓ=1}^m ( C_{ℓ,σ(t)} + D_{ℓ,σ(t)} K_{σ(t−τ)} ) x(t) dw_ℓ(t),  t > 0,
    x(0) = x_0,  σ(0) = σ_0.                                                             (16)

Choosing the Lyapunov function candidate V(x) = x^T P x for system (16) and applying Theorems 1 and 2, we obtain the following LMI-based stability criteria.

Corollary 2. The Markovian jump stochastic system (16) is almost surely exponentially stable if one of the following conditions holds:

(a) For some prescribed scalars ξ_{i,j} ∈ R and c_{ℓ,i,j} ≥ 0, i, j ∈ M, ℓ = 1, 2, ..., m, there exists a positive matrix P ∈ R^{n×n} such that (7) holds with c_{i,j} = ∑_{ℓ=1}^m c_{ℓ,i,j}, and the following LMIs hold:

    P Ā_{i,j} + Ā_{i,j}^T P + ∑_{ℓ=1}^m C̄_{ℓ,i,j}^T P C̄_{ℓ,i,j} − ξ_{i,j} P < 0,  i, j ∈ M,          (17)

and, for each ℓ = 1, 2, ..., m, either

    P C̄_{ℓ,i,j} + C̄_{ℓ,i,j}^T P − c_{ℓ,i,j} P ≥ 0   or   P C̄_{ℓ,i,j} + C̄_{ℓ,i,j}^T P + c_{ℓ,i,j} P ≤ 0.     (18)

(b) For some prescribed scalars ξ_{i,j}, κ_{ℓ,i,j} ∈ R, i, j ∈ M, ℓ = 1, 2, ..., m, there exists a positive matrix P ∈ R^{n×n} such that (11) and the following LMIs hold for all i, j ∈ M:

    P ( Ā_{i,j} − ∑_{ℓ=1}^m κ_{ℓ,i,j} C̄_{ℓ,i,j} ) + ( Ā_{i,j} − ∑_{ℓ=1}^m κ_{ℓ,i,j} C̄_{ℓ,i,j} )^T P + ∑_{ℓ=1}^m C̄_{ℓ,i,j}^T P C̄_{ℓ,i,j} − ξ_{i,j} P < 0,     (19)

where Ā_{i,j} = A_i + B_i K_j and C̄_{ℓ,i,j} = C_{ℓ,i} + D_{ℓ,i} K_j.

Remark 4. It can be seen from LMIs (17) and (18) that, when the stochastic stabilization problem is considered, C̄_{ℓ,i,j} or −C̄_{ℓ,i,j} must be a Hurwitz matrix (all eigenvalues of C̄_{ℓ,i,j} or −C̄_{ℓ,i,j} have negative real parts). This indicates that the diffusion term of the considered system is stabilizable. In fact, such LMI-based conditions for stabilization by noise were imposed in much of the existing literature, such as [29]. However, Corollary 2(b) does not impose such a restriction. Based on Corollary 2(b) and using a technique similar to that of [9], the controller can be designed as follows.

Corollary 3. Given the detected delay τ of the switching signal, consider the Markovian jump system (14). If for some prescribed scalars ξ_{i,j} and κ_{ℓ,i,j}, i, j ∈ M, ℓ = 1, 2, ..., m, there exist matrices 0 < X ∈ R^{n×n} and K̄_i ∈ R^{n_0×n} such that (11) and the following LMIs hold:

    [ Φ_{ij}   Y_{ij} ]
    [   ∗       −X̂   ]  < 0,   i, j ∈ M,

where

    Φ_{ij} = A_i X + X A_i^T + B_i K̄_j + K̄_j^T B_i^T − ξ_{i,j} X − ∑_{ℓ=1}^m κ_{ℓ,i,j} ( C_{ℓ,i} X + X C_{ℓ,i}^T + D_{ℓ,i} K̄_j + K̄_j^T D_{ℓ,i}^T ),
    X̂ = diag(X, ..., X),   Y_{ij} = [ X C_{1,i}^T + K̄_j^T D_{1,i}^T   ...   X C_{m,i}^T + K̄_j^T D_{m,i}^T ],

then the admissible controller (15) with K_i = K̄_i X^{−1} almost surely exponentially stabilizes system (14).
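As a rough illustration of how the synthesis in Corollary 3 can be set up numerically, the sketch below poses the LMIs for fixed scalars ξ_{i,j}, κ_{ℓ,i,j} and recovers K_i = K̄_i X^{−1}. It is not the authors' code: the cvxpy modeling package is assumed, the two-mode data are made up, the scalar parameters are fixed rather than searched over, and the separate check of condition (11) is omitted.

```python
import numpy as np
import cvxpy as cp

# Made-up two-mode example with one noise channel (m = 1); not the paper's data.
n, n0, m, N = 2, 1, 1, 2
A = [np.array([[0.2, 1.0], [0.0, -0.5]]), np.array([[-0.3, 0.4], [0.6, 0.1]])]
B = [np.array([[0.0], [1.0]]),            np.array([[1.0], [0.5]])]
C = [[np.array([[0.1, 0.0], [0.0, 0.2]])], [np.array([[0.2, 0.1], [0.0, 0.1]])]]  # C[i][l]
D = [[np.array([[0.0], [0.1]])],           [np.array([[0.1], [0.0]])]]            # D[i][l]
xi    = np.array([[-0.5, 0.3], [0.3, -0.5]])   # prescribed xi_{i,j}
kappa = np.zeros((m, N, N))                    # prescribed kappa_{l,i,j}

X    = cp.Variable((n, n), symmetric=True)
Kbar = [cp.Variable((n0, n)) for _ in range(N)]
eps  = 1e-6
cons = [X >> eps * np.eye(n)]
for i in range(N):
    for j in range(N):
        Phi = A[i] @ X + X @ A[i].T + B[i] @ Kbar[j] + Kbar[j].T @ B[i].T - xi[i, j] * X
        for l in range(m):
            Phi = Phi - kappa[l, i, j] * (C[i][l] @ X + X @ C[i][l].T
                                          + D[i][l] @ Kbar[j] + Kbar[j].T @ D[i][l].T)
        Y = X @ C[i][0].T + Kbar[j].T @ D[i][0].T     # with m = 1 the lower-right block is just -X
        lmi = cp.bmat([[Phi, Y], [Y.T, -X]])
        lmi = 0.5 * (lmi + lmi.T)                     # symmetric by construction; helps the parser
        cons.append(lmi << -eps * np.eye(2 * n))
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)

if prob.status in ("optimal", "optimal_inaccurate"):
    K = [Kb.value @ np.linalg.inv(X.value) for Kb in Kbar]   # K_i = Kbar_i X^{-1}
    print(K)
else:
    print("LMIs infeasible for these prescribed scalars")
```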

5. Numerical examples

Example 1. Consider the switched linear stochastic system with the following parameters:

    A_1 = [ 0.5   1 ;  0   −1.5 ],   A_2 = [ −1.5   0 ;  1   −1 ],   Γ = [ −1   1 ;  1   −1 ],
    D_{1,1} = [ 1 ;  0 ],   D_{1,2} = [ 0 ;  1 ],   K_1 = [ −2   0 ],   K_2 = [ 0   −2 ].


Fig. 1. Simulation of Example 1: (a) Sample-path trajectories of the system without diffusion term; (b) switching signals; (c) Sample-path trajectories of the system under asynchronous Markovian switching.

Fig. 2. Simulation of Example 2: (a) Sample-path trajectories of the system without feedback control; (b) switching signals; (c) Sample-path trajectories of the closed-loop system under asynchronous Markovian switching.

In addition, B_i = C_{1,i} = 0, i = 1, 2, m = 1, and the switching signal is {σ(t) ∈ M ≜ {1, 2}}_{t≥0}. The stationary probability distribution of the generator Γ is Π = [1/2 1/2]. Fig. 1(a) shows that the state trajectory of the system without the diffusion term is unstable. Note that ‖x^T g̃_{i,j}(t, x)‖ = ‖x^T D_{1,i} K_j x‖ equals 2x_1² or 2x_2², so condition (iii) of Corollary 1(a) or LMI (18) is satisfied only if c_{ℓ,i,j} = 0. Hence, the results of [16,27-31] cannot be used to test the almost sure stability of the system or of its subsystems. However, by Corollary 2(b) with the choices (ξ_{1,1}, ξ_{1,2}, ξ_{2,1}, ξ_{2,2}) = (−1.43, 1.5, 1.5, −1.43) and (κ_{1,1,1}, κ_{1,1,2}, κ_{1,2,1}, κ_{1,2,2}) = (−1.61, −0.21, −0.21, −1.61), the LMIs are found to be feasible, and the maximum allowable detected delay is τ = 0.08. Let the initial value be x_0 = [3  −2]^T, σ_0 = 1, and the detected delay τ = 0.08. Fig. 1(b)-(c) plots the switching signals and the corresponding sample-path trajectories of the system. It can be clearly seen that all the trajectories converge to the origin.
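Sample paths of this kind can be reproduced in spirit with a standard Euler-Maruyama discretization of the closed-loop dynamics, feeding the state-feedback gain the delayed mode σ(t − τ). The sketch below is ours, not the authors' simulation code: the function name and call signature are invented, the Markov chain is approximated on the same time grid, and the data lists A, D, K and the generator Gamma must be filled in with the Example 1 values (0-based mode indices).

```python
import numpy as np

def simulate_closed_loop(A, D, K, Gamma, tau, x0, sigma0, T=20.0, dt=1e-3, rng=None):
    """Euler-Maruyama path of dx = A_{s(t)} x dt + D_{1,s(t)} K_{s(t-tau)} x dw(t)
    (system (16) with B_i = C_{1,i} = 0 and m = 1) under asynchronous switching."""
    rng = rng or np.random.default_rng()
    n_steps = int(T / dt)
    x = np.array(x0, dtype=float)
    xs = np.empty((n_steps + 1, len(x))); xs[0] = x
    modes = np.empty(n_steps + 1, dtype=int)
    s = sigma0
    delay_steps = int(round(tau / dt))
    for k in range(n_steps + 1):
        modes[k] = s
        if k == n_steps:
            break
        # detected (delayed) mode sigma(t - tau), equal to sigma_0 for t <= tau
        s_del = modes[k - delay_steps] if k >= delay_steps else sigma0
        drift = A[s] @ x
        diff = (D[s] @ K[s_del]) @ x               # single scalar noise channel
        x = x + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
        xs[k + 1] = x
        # advance the Markov chain on the grid: jump to j with probability gamma_{s,j} dt
        p = Gamma[s] * dt
        p[s] = 1.0 + Gamma[s, s] * dt
        s = int(rng.choice(len(p), p=p))
    return xs, modes

# Example of a call (placeholders; plug in the Example 1 matrices and tau = 0.08):
# xs, modes = simulate_closed_loop(A=[A1, A2], D=[D11, D12], K=[K1, K2],
#                                  Gamma=Gamma, tau=0.08, x0=[3.0, -2.0], sigma0=0)
```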

Example 2. Consider a helicopter flight control system modeled as the linear Markovian jump system (14) with three different modes, see [33]. The parameters are given by

    Γ = α [ −0.0894    0.0671    0.0236
             0.0671   −0.0671    0
             0.0236    0        −0.0236 ],

    A_i = [ −0.0366    0.0271    0.0188   −0.4555
             0.0482   −1.01      0.0024   −4.0208
             0.1002    a_i      −0.707     b_i
             0          0         1          0     ],

    B_i = [ 0.4422     0.1761
            c_i       −7.5922
           −5.5200     4.4900
            0           0      ],

with the mode-dependent entries

    mode    a_i       b_i       c_i
    1       0.3681    1.4200    3.5446
    2       0.0664    0.1198    0.9775
    3       0.5047    2.5460    5.1120

and C_{ℓ,i} = D_{ℓ,i} = 0, α > 0, N = 3. It can be verified that the stationary distribution is Π = [1/3 1/3 1/3]. Numerical simulation indicates that the trajectories of the open-loop system with initial conditions x_0 = [−0.5  0.6  −0.3  0]^T and σ_0 = 1 are divergent. We apply Corollary 3 to solve the controller design problem. Let τ = 0.7 and α = 10. Solving the LMIs of Corollary 3 with the choices ξ_{i,i} = −0.1, i ∈ M, ξ_{1,2} = 0.2, ξ_{1,3} = 0.4, ξ_{2,1} = ξ_{3,1} = 0.1, ξ_{2,3} = 0.6, and ξ_{3,2} = 0.25, we obtain the following controller gains:

    K_1 = [ −0.5192    0.1673    1.3007    1.9357
            −0.2448    0.0684    0.6963    0.6158 ],

    K_2 = [ −0.4719    0.1940    1.1612    1.7500
            −0.1908    0.0720    0.5508    0.4285 ],

    K_3 = [ −0.5618    0.1350    1.4492    2.2147
            −0.3014    0.0471    0.8758    0.8847 ].

Under the same switching signal generated in Fig. 2(b), the sample-path trajectories of the closed-loop system with τ = 0.7 are shown in Fig. 2(c). It is clear that all the state variables of the closed-loop system converge to zero as time goes by, which confirms the behavior predicted by our result.


6. Conclusion

By employing concave Lyapunov functions, the strong law of large numbers, and the ergodic property of the Markov process, this paper has established almost sure stability criteria for hybrid stochastic systems with asynchronous Markovian switching. The obtained results establish a quantitative relationship between the size of the detected delay of the switching signal, the stationary distribution, and the generator of the Markov chain. It has been revealed that, for a pre-designed controller, the generator of the Markov chain is the main factor determining the admissible size of the detected delay. Different from some existing results on stabilization by noise, our results do not require the diffusion term of the stochastic system to be stabilizable. Numerical results have verified the effectiveness of the proposed results.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61573156, 61733008, 61803094, 61873099, and in part by the Scholarship from China Scholarship Council under Grant 201806150120.

References

[1] D. Liberzon, Switching in Systems and Control, Springer Science & Business Media, 2003.
[2] H. Lin, P.J. Antsaklis, Stability and stabilizability of switched linear systems: a survey of recent results, IEEE Trans. Automat. Control 54 (2) (2009) 308-322.
[3] L. Zhang, Y. Zhu, P. Shi, Q. Lu, Time-Dependent Switched Discrete-Time Linear Systems: Control and Filtering, Springer, 2016.
[4] L. Wu, Y. Gao, J. Liu, H. Li, Event-triggered sliding mode control of stochastic systems via output feedback, Automatica 82 (2017) 79-92.
[5] L. Zhang, P. Shi, Stability, l2-gain and asynchronous H∞ control of discrete-time switched systems with average dwell time, IEEE Trans. Automat. Control 54 (9) (2009) 2192-2199.
[6] L. Zhang, H. Gao, Asynchronously switched control of switched linear systems with average dwell time, Automatica 46 (5) (2010) 953-958.
[7] G. Xie, L. Wang, Stabilization of switched linear systems with time-delay in detection of switching signal, J. Math. Anal. Appl. 305 (1) (2005) 277-290.
[8] X. Zhao, P. Shi, L. Zhang, Asynchronously switched control of a class of slowly switched linear systems, Systems Control Lett. 61 (12) (2012) 1151-1156.
[9] W.-H. Chen, J.-X. Xu, Z.-H. Guan, Guaranteed cost control for uncertain Markovian jump systems with mode-dependent time-delays, IEEE Trans. Automat. Control 48 (12) (2003) 2270-2277.
[10] X. Mao, A. Matasov, A.B. Piunovskiy, Stochastic differential delay equations with Markovian switching, Bernoulli 6 (1) (2000) 73-90.


[11] W. Chen, S. Xu, Y. Zou, Stabilization of hybrid neutral stochastic differential delay equations by delay feedback control, Systems Control Lett. 88 (2016) 1-13.
[12] X. Mao, Stability of stochastic differential equations with Markovian switching, Stochastic Process. Appl. 79 (1) (1999) 45-67.
[13] X. Mao, C. Yuan, Stochastic Differential Equations with Markovian Switching, Imperial College Press, 2006.
[14] L. Huang, X. Mao, On almost sure stability of hybrid stochastic systems with mode-dependent interval delays, IEEE Trans. Automat. Control 55 (8) (2010) 1946-1952.
[15] J. Bao, X. Mao, C. Yuan, Lyapunov exponents of hybrid stochastic heat equations, Systems Control Lett. 61 (1) (2012) 165-172.
[16] X. Zong, F. Wu, G. Yin, Z. Jin, Almost sure and pth-moment stability and stabilization of regime-switching jump diffusion systems, SIAM J. Control Optim. 52 (4) (2014) 2595-2622.
[17] P. Shi, F. Li, L. Wu, C.C. Lim, Neural network-based passive filtering for delayed neutral-type semi-Markovian jump systems, IEEE Trans. Neural Netw. Learn. Syst. 28 (9) (2017) 2101-2114.
[18] Y.-E. Wang, X.-M. Sun, J. Zhao, Stabilization of a class of switched stochastic systems with time delays under asynchronous switching, Circuits Systems Signal Process. 32 (1) (2013) 347-360.
[19] W. Ren, J. Xiong, Stability and stabilization of switched stochastic systems under asynchronous switching, Systems Control Lett. 97 (2016) 184-192.
[20] J. Xiong, J. Lam, Stabilization of discrete-time Markovian jump linear systems via time-delayed controllers, Automatica 42 (5) (2006) 747-753.
[21] M. Liu, D.W. Ho, Y. Niu, Stabilization of Markovian jump linear system over networks with random communication delay, Automatica 45 (2) (2009) 416-421.
[22] Z.-G. Wu, P. Shi, Z. Shu, H. Su, R. Lu, Passivity-based asynchronous control for Markov jump systems, IEEE Trans. Automat. Control 62 (4) (2017) 2020-2025.
[23] F. Li, C. Du, C. Yang, W. Gui, Passivity-based asynchronous sliding mode control for delayed singular Markovian jump systems, IEEE Trans. Automat. Control 63 (8) (2018) 2715-2721.
[24] Y. Kang, D.-H. Zhai, G.-P. Liu, Y.-B. Zhao, P. Zhao, Stability analysis of a class of hybrid stochastic retarded systems under asynchronous switching, IEEE Trans. Automat. Control 59 (6) (2014) 1511-1523.
[25] Y. Kang, D.-H. Zhai, G.-P. Liu, Y.-B. Zhao, On input-to-state stability of switched stochastic nonlinear systems under extended asynchronous switching, IEEE Trans. Cybern. 46 (5) (2016) 1092-1105.
[26] F. Kozin, A survey of stability of stochastic systems, Automatica 5 (1) (1969) 95-112.
[27] X. Mao, Stochastic Differential Equations and Applications, Horwood Publishing, Chichester, 2007.
[28] J.A. Appleby, X. Mao, A. Rodkina, Stabilization and destabilization of nonlinear differential equations by noise, IEEE Trans. Automat. Control 53 (3) (2008) 683-691.
[29] L. Hu, X. Mao, Almost sure exponential stabilisation of stochastic systems by state-feedback control, Automatica 44 (2) (2008) 465-471.
[30] L. Huang, Stochastic stabilization and destabilization of nonlinear differential equations, Systems Control Lett. 62 (2) (2013) 163-169.
[31] F. Deng, Q. Luo, X. Mao, Stochastic stabilization of hybrid differential equations, Automatica 48 (9) (2012) 2321-2328.
[32] B. Sericola, Markov Chains: Theory, Algorithms and Applications, John Wiley & Sons, 2013.
[33] D.P. de Farias, J.C. Geromel, J.B. do Val, O.L.V. Costa, Output feedback control of Markov jump linear systems in continuous-time, IEEE Trans. Automat. Control 45 (5) (2000) 944-949.