Stabilization of positive Markov jump systems

Journal of the Franklin Institute
PII: S0016-0032(16)30212-5
DOI: http://dx.doi.org/10.1016/j.jfranklin.2016.06.026
Received 30 November 2015; Revised 1 May 2016; Accepted 29 June 2016

Stabilization of positive Markov jump systems∗ Yafeng Guo†

Abstract. This paper investigates the problems of stability and stabilization for positive Markov jump systems. A notion of mean stability is introduced and shown to be equivalent to the common notions of stochastic stability in the literature. Necessary and sufficient conditions for mean stability and stabilization are established for both continuous-time and discrete-time positive Markov jump systems. All the conditions are solvable in terms of standard linear programming. Numerical examples are given to illustrate the effectiveness and the merits of the proposed methods.

Keywords. Positive systems; Markov jump systems; Linear programming; Stabilization.

I. Introduction

In many practical systems, state variables are intrinsically nonnegative, e.g., populations of animals, absolute temperatures, liquid levels in tanks, and concentrations of substances in chemical processes. Such systems are referred to as positive in the literature. Positive systems have numerous applications in various areas, such as ecology [1], biology [2], pharmacokinetics [3], economics [4], and communication [5]. Over the last decades, a great deal of attention has been devoted to the analysis and synthesis of positive systems, including stability analysis [6, 7, 8, 9, 10], state-feedback and output-feedback controller design [11, 12, 13, 14, 15, 16, 17], observer design [18, 19, 20, 21], etc.

On the other hand, random abrupt changes are quite common in many dynamical systems; they may be caused by random failures and repairs of components, changes in the interconnections of subsystems, sudden environmental changes, etc. Most traditional system models cannot cope with such random abrupt changes. In contrast, Markov jump systems (MJSs) comprise an important class of stochastic dynamic systems which can, in many situations, model these phenomena; see, e.g., [22, 23, 24]. In [25], controllability and stabilizability problems are studied for continuous-time MJSs. In [26], the H∞ control problem is investigated for networked discrete-time Takagi-Sugeno fuzzy MJSs with time-varying delays, and the effect of an uncertain dropout rate is addressed.

Although many results on MJSs have been reported, very little research has focused on positive MJSs. In [27], stability and stabilization of continuous-time positive MJSs are investigated by using linear matrix inequalities (LMIs). As shown in [13], linear programming has a lower computational complexity than the LMI approach.
In [28], necessary and sufficient conditions are proposed for exponential mean stability of continuous-time positive MJSs, and the proposed conditions are solvable by linear programming. As for discrete-time positive MJSs, necessary and sufficient conditions for 1-moment stability are proposed in [29]. It is noted that neither [28] nor [29] addresses the stabilization problem. In [30], sufficient conditions for stabilization are proposed for a special class of positive MJSs, where the MJSs are assumed to have an additional switching control signal that affects the stochastic subsystem dynamics. For the usual positive MJSs without any additional switching control signal, stochastic 1-moment stabilization conditions are proposed in [31] for both the continuous-time and discrete-time domains. In [32], the stochastic 1-moment stabilization problem is considered for positive MJSs with distributed time delay and incompletely known transition rates. However, in both [31] and [32] the stabilization conditions restrict the solution space so that the rank of every candidate controller gain matrix is one, which leads to conservatism in controller design for general multiple-input systems. To the best of the author's knowledge, the stabilization problems for positive MJSs have not been fully investigated; they remain important and challenging.

This paper is concerned with the stabilization of positive MJSs in both the continuous-time and discrete-time domains. First, a notion of mean stability is introduced and shown to be equivalent to the standard notion of 1-moment stability as well as to that of exponential mean stability. Second, stability conditions for such systems are proposed. Then, conditions for controller design are derived. All the conditions are necessary and sufficient in the sense of mean stability and are solvable in terms of standard linear programming. Finally, numerical examples are given to illustrate the effectiveness and the merits of the proposed methods.

∗ This work was supported by Shanghai Pujiang Program (15PJ1407900), National Natural Science Foundation of China (61104115), Research Fund for the Doctoral Program of Higher Education of China (20110072120018), and the Fundamental Research Funds for the Central Universities.
† Department of Control Science and Engineering, Tongji University, Shanghai, 201804, China. Email: [email protected].

Notation: The notation in this paper is quite standard.
Throughout this paper, Rn and Rn×m denote, respectively, the n-dimensional Euclidean space and the set of all n × m real matrices. Rn+ represents the nonnegative orthant of Rn. The superscript T denotes transpose. I stands for the identity matrix of appropriate dimension. Given a set of vectors v(i) ∈ Rn, i = 1, 2, · · · , N, the symbol v = vec[v(i)] represents the vector obtained by stacking v(1), v(2), · · · , v(N) into a single nN-dimensional vector. The following descriptions of the elements of matrices are used: A = [aij] and B = [b1T · · · bnT]T, where bk denotes the kth row of B. For a matrix (or a vector) M, M ≻ 0 (resp. M ⪰ 0) means that its elements satisfy mij > 0 (resp. mij ≥ 0). A matrix M is said to be a Metzler matrix if its off-diagonal elements are all nonnegative real numbers. E{·} stands for the mathematical expectation. diag{·, · · · , ·} denotes a block-diagonal matrix. ⊗ denotes the Kronecker product. Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations.

II. Preliminaries and definitions

Given the probability space (Ω, F, P), where Ω is the sample space, F is the σ-algebra of events and P is the probability measure defined on F, consider the following continuous-time and discrete-time MJSs, respectively:

ẋ(t) = A(rt)x(t) + B(rt)u(t),    (1)

and

x(t + 1) = A(rt)x(t) + B(rt)u(t),    (2)

where x(t) ∈ Rn is the state vector and u(t) ∈ Rm is the control input. For the continuous-time case, {rt, t ≥ 0} is a Markov process, while for the discrete-time case it is a Markov sequence. {rt, t ≥ 0}, taking values in a finite set S ≜ {1, . . . , N}, governs the switching among the different system modes. For the continuous-time MJS, the mode transition probabilities of the Markov process {rt, t ≥ 0} are given by

Pr{rt+Δ = j | rt = i} = λij Δ + o(Δ)  if i ≠ j,   and   1 + λii Δ + o(Δ)  if i = j,

where Δ > 0, limΔ→0 o(Δ)/Δ = 0, λij ≥ 0 (i, j ∈ S, i ≠ j) denotes the switching rate from mode i at time t to mode j at time t + Δ, and λii = −Σ_{j≠i} λij for all i ∈ S. The transition rate matrix is denoted by Λ ≜ [λij]. For the discrete-time MJS, the mode transition probabilities of the Markov sequence {rt, t ≥ 0} are given by Pr(rt+1 = j | rt = i) = πij, where πij ≥ 0, ∀i, j ∈ S, and Σ_{j∈S} πij = 1. Likewise, the transition probability matrix is denoted by Π ≜ [πij].

The set S contains the N modes of system (1) or (2). For rt = i ∈ S, the system matrices of the ith mode are denoted by A(i), B(i), which are real and known. In what follows, we present the definitions of positivity and stability for jump systems that will be used in this paper.

Definition 1 System (1) or (2) with u(t) = 0 is said to be positive if for any initial condition x0 ∈ Rn+ and r0 ∈ S, the state satisfies x(t) ∈ Rn+ for all t ≥ 0.

Definition 2 Assume that system (1) or (2) with u(t) = 0 is positive; then it is said to be mean stable if for any initial condition x0 ∈ Rn+ and r0 ∈ S, lim_{t→∞} E{x(t)} = 0.

Definition 3 Assume that system (1) or (2) with u(t) = 0 is positive; then it is said to be 1-moment stable if for any initial condition x0 ∈ Rn+ and r0 ∈ S, lim_{t→∞} E{‖x(t)‖} = 0.

The following lemmas will also be used in this paper.

Lemma 1 ([33]) Let M ∈ Rn×n be a Metzler matrix. Then M is a Hurwitz matrix if and only if there exists a vector v ≻ 0 such that Mv ≺ 0.

Lemma 2 ([34]) Let M ∈ Rn×n with M ⪰ 0. Then M is a Schur matrix if and only if there exists a vector v ≻ 0 such that (M − I)v ≺ 0.
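As a quick numerical illustration of Lemma 1 (the matrix below is a toy example chosen here, not from the paper; SciPy is assumed to be available), the existence of such a vector v can be found by linear programming, with a small margin eps emulating the strict inequalities:

```python
import numpy as np
from scipy.optimize import linprog

# Toy Metzler matrix (illustrative, not from the paper); it is Hurwitz,
# so Lemma 1 predicts that some v > 0 with M v < 0 exists.
M = np.array([[-2.0, 1.0], [0.5, -3.0]])
eps = 1e-6

# Feasibility LP: find v >= eps with M v <= -eps (zero objective).
res = linprog(c=np.zeros(2), A_ub=M, b_ub=-eps * np.ones(2),
              bounds=[(eps, None)] * 2)
print(res.success, bool(np.linalg.eigvals(M).real.max() < 0))  # both True
```

Scaling any feasible v by a positive constant preserves feasibility, so the margins eps do not restrict the test.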

III. Stability analysis

In this section, we investigate stability criteria for positive MJSs. First, we establish the equivalence between mean stability and 1-moment stability.

Theorem 1 Assume that system (1) or (2) with u(t) = 0 is positive; then it is mean stable if and only if it is 1-moment stable.


Proof: Sufficiency: Assume that system (1) or (2) is mean stable. By Definition 2, it is straightforward to see that

lim_{t→∞} E{Σ_{i=1}^{n} xi(t)} = n × 0 = 0.

From the nonnegativity of the vector x(t), it is direct to see that (without loss of generality, the 1-norm of x(t) is adopted here)

‖x(t)‖ = Σ_{i=1}^{n} |xi(t)| = Σ_{i=1}^{n} xi(t).

Therefore,

lim_{t→∞} E{‖x(t)‖} = lim_{t→∞} E{Σ_{i=1}^{n} xi(t)} = 0,

which implies 1-moment stability.

Necessity: Assume that the system is 1-moment stable. From the nonnegativity of the vector x(t), it is easy to see that, for all i = 1, 2, . . . , n, 0 ≤ xi(t) ≤ ‖x(t)‖. Then, by Definition 3,

0 ≤ lim_{t→∞} E{xi(t)} ≤ lim_{t→∞} E{‖x(t)‖} = 0,

which implies mean stability.

Remark 1 In the literature, it has been shown (see, e.g., [28, 35, 36]) that 1-moment, stochastic 1-moment, exponential 1-moment, and exponential mean stability are actually equivalent for positive Markov jump linear systems. Thus, we know from Theorem 1 that the notion of mean stability in this paper is equivalent to all three notions of 1-moment stability as well as to exponential mean stability. However, mean stability is more convenient than 1-moment stability for investigating the stability of positive Markov jump linear systems, since in mean stability the expectation of x(t) rather than of ‖x(t)‖ is taken. On the other hand, unlike exponential mean stability, the definition of mean stability has the same form for discrete-time and continuous-time systems; hence it is more concise. These are the reasons why this paper introduces the notion of mean stability.

a. Continuous-time case

In this subsection, we give the stability result for the unforced system (1) (with u(t) = 0). The following theorem presents a set of necessary and sufficient conditions for the mean stability of the unforced system (1).

Theorem 2 Assume that system (1) with u(t) = 0 is positive (or, equivalently, that for every i = 1, · · · , N the system matrix A(i) is Metzler); then the following statements are equivalent.

(i) System (1) with u(t) = 0 is mean stable.

(ii) H is a Hurwitz matrix, where

H = diag{A(1), · · · , A(N)} + ΛT ⊗ I.    (3)

(iii) There exists a set of vectors v(i) ≻ 0, i ∈ S, such that

A(i)v(i) + Σ_{j=1}^{N} λji v(j) ≺ 0, ∀i ∈ S.    (4)

Proof: (i)⇔(ii). From Remark 1, we know that the notions of mean stability and exponential mean stability are equivalent. The conclusion then follows from Theorem 2 in [28]. (ii)⇔(iii). Since A(i), ∀i ∈ S, and Λ are Metzler matrices, it is readily seen that H is a Metzler matrix. Then, by Lemma 1, H is a Hurwitz matrix if and only if there exists a vector v ≻ 0 such that Hv ≺ 0. Letting v = vec[v(i)], the equivalence between (ii) and (iii) is established.
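Condition (iii) can be checked directly with off-the-shelf linear programming. The sketch below uses a hypothetical two-mode system (illustrative matrices, not from the paper; SciPy assumed) and encodes (4) with a small margin eps emulating the strict inequalities:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-mode continuous-time positive MJS (each A_i is Metzler).
A = [np.array([[-3.0, 1.0], [0.5, -2.0]]),
     np.array([[-1.0, 0.2], [0.3, -4.0]])]
Lam = np.array([[-1.0, 1.0], [2.0, -2.0]])  # transition rate matrix, rows sum to 0

N, n = len(A), A[0].shape[0]
eps = 1e-6

# Build G so that (G v)_i-th block = A^(i) v^(i) + sum_j lambda_{ji} v^(j);
# this is exactly the matrix H of (3), acting on v = vec[v^(i)].
G = np.zeros((N * n, N * n))
for i in range(N):
    G[i*n:(i+1)*n, i*n:(i+1)*n] = A[i]
    for j in range(N):
        G[i*n:(i+1)*n, j*n:(j+1)*n] += Lam[j, i] * np.eye(n)

# Feasibility LP for (4): v >= eps, G v <= -eps.  Any feasible point
# certifies mean stability by Theorem 2.
res = linprog(c=np.zeros(N * n),
              A_ub=G, b_ub=-eps * np.ones(N * n),
              bounds=[(eps, None)] * (N * n))
print("mean stable:", res.success)
```

Because the condition is homogeneous in v, the margin eps only fixes a normalization and does not affect feasibility.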

b. Discrete-time case

The following theorem presents necessary and sufficient conditions for the mean stability of the unforced system (2).

Theorem 3 Assume that system (2) with u(t) = 0 is positive (or, equivalently, that for every i = 1, · · · , N the system matrix A(i) ⪰ 0); then the following statements are equivalent.

(i) System (2) with u(t) = 0 is mean stable.

(ii) F is a Schur matrix, where

F = (ΠT ⊗ I) diag{A(1), · · · , A(N)}.    (5)

(iii) There exists a set of vectors v(i) ≻ 0, i ∈ S, such that

(Σ_{j=1}^{N} πji A(j) v(j)) − v(i) ≺ 0, ∀i ∈ S.    (6)

Proof: From Theorem 1, we know that the notions of mean stability and 1-moment stability are equivalent. The conclusion then follows from Theorem 1 in [29].

Theorems 2 and 3 present necessary and sufficient conditions for the mean stability of continuous-time and discrete-time positive MJSs, respectively. All of the conditions are checkable. Conditions (iii) of Theorems 2 and 3 are linear; thus, they can be solved as standard linear programming problems. As for conditions (ii) of Theorems 2 and 3, by Lemmas 1 and 2 the checks of the Hurwitz property of H and the Schur property of F turn out to be linear programming problems as well, i.e., finding vectors c ≻ 0 and d ≻ 0 satisfying Hc ≺ 0 and (F − I)d ≺ 0, respectively.

Remark 2 From Remark 1, we know that the conditions proposed in Theorems 2 and 3 are also necessary and sufficient in the sense of 1-moment, stochastic 1-moment, exponential 1-moment, and exponential mean stability.

Remark 3 Sufficient conditions for stochastic 1-moment stability of continuous-time and discrete-time positive MJSs were obtained in Theorems 1 and 2 in [31], respectively. Note that in these conditions

there exist products of a decision variable ρ and other decision variables, so that they are nonlinear. For the sake of solvability, ρ has to be tuned in advance, which easily introduces conservatism. As mentioned previously, the necessary and sufficient conditions (ii) and (iii) of Theorems 2 and 3 in this paper are linear, and they can be solved directly as standard linear programming problems without tuning any parameter.
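The discrete-time check of condition (ii), i.e., finding d ≻ 0 with (F − I)d ≺ 0, can likewise be sketched as a linear program (hypothetical two-mode data, not from the paper; SciPy assumed):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-mode discrete-time positive MJS (each A_i >= 0 elementwise).
A = [np.array([[0.3, 0.1], [0.2, 0.4]]),
     np.array([[0.5, 0.0], [0.1, 0.2]])]
Pi = np.array([[0.9, 0.1], [0.4, 0.6]])  # transition probabilities, rows sum to 1

n = A[0].shape[0]
# F = (Pi^T kron I) diag{A^(1), A^(2)}, as in (5).
F = np.kron(Pi.T, np.eye(n)) @ np.block([[A[0], np.zeros((n, n))],
                                         [np.zeros((n, n)), A[1]]])

# Lemma 2 via LP: F (>= 0) is Schur iff some d >= eps satisfies (F - I) d <= -eps.
eps = 1e-6
res = linprog(c=np.zeros(2 * n),
              A_ub=F - np.eye(2 * n), b_ub=-eps * np.ones(2 * n),
              bounds=[(eps, None)] * (2 * n))
print("mean stable:", res.success)
```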

IV. State-feedback stabilization

In this section, the stabilization problems for the continuous-time system (1) and the discrete-time system (2) are considered. State-feedback stabilizing controllers are designed such that the closed-loop system is mean stable. The mode-dependent controllers considered here have the form

u(t) = K(rt)x(t),    (7)

where K(i) (∀rt = i ∈ S) are the controller gains to be determined.

a. Continuous-time case

Using (7), the system (1) is represented as

ẋ(t) = [A(rt) + B(rt)K(rt)] x(t).    (8)

The following theorem presents necessary and sufficient conditions for the existence of a controller of the form (7) such that the closed-loop system (8) is positive and mean stable.

Theorem 4 The following statements are equivalent.

(i) There exists a controller of the form (7) such that the closed-loop system (8) is positive and mean stable.

(ii) There exist a set of vectors v(i) ≻ 0 and matrices K(i) such that for each i ∈ S, A(i) + B(i)K(i) is a Metzler matrix and

(A(i) + B(i)K(i)) v(i) + Σ_{j=1}^{N} λji v(j) ≺ 0.    (9)

(iii) There exist vectors v(i) = [v1(i) · · · vn(i)]T ∈ Rn and vectors c1(i), . . . , cn(i) ∈ Rm such that for each i ∈ S

v(i) ≻ 0,    (10)

A(i)v(i) + B(i) Σ_{k=1}^{n} ck(i) + Σ_{j=1}^{N} λji v(j) ≺ 0,    (11)

akl(i) vl(i) + bk(i) cl(i) ≥ 0, for k ≠ l,    (12)

with A(i) = [akl(i)] and B(i) = [(b1(i))T · · · (bn(i))T]T.

Moreover, if (10)-(12) have a feasible solution, an admissible controller gain is given by

K(i) = [(v1(i))−1 c1(i) · · · (vn(i))−1 cn(i)].    (13)

Proof: The equivalence between (i) and (ii) follows from the equivalence between (i) and (ii) in Theorem 2. The proof is then completed by showing the equivalence between (ii) and (iii). Suppose that condition (iii) holds. Then, with K(i) as in (13), we have B(i)K(i)v(i) = B(i) Σ_{k=1}^{n} ck(i). This, together with (11) in condition (iii), yields (9) in condition (ii). On the other hand, A(i) + B(i)K(i) is a Metzler matrix, since condition (iii) guarantees, for k ≠ l, that

akl(i) + bk(i) (vl(i))−1 cl(i) = [A(i) + B(i)K(i)]kl ≥ 0.

Following the same line of argument, the proof of (ii)⇒(iii) is easily obtained.
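To make the synthesis procedure concrete, the following sketch poses (10)-(12) as a single linear program and recovers the gains via (13). The system data are hypothetical (a two-mode, two-state system with B(i) = I, chosen here for illustration and not taken from the paper; SciPy assumed):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-mode continuous-time system; open-loop unstable and not Metzler.
A = [np.array([[0.5, -0.2], [0.1, 0.3]]),
     np.array([[0.2, 0.4], [-0.3, 0.1]])]
B = [np.eye(2), np.eye(2)]
Lam = np.array([[-1.0, 1.0], [2.0, -2.0]])

N, n, m = len(A), 2, 2
eps = 1e-6
nv, nc = N * n, N * n * m  # v-entries, then c-entries
# Variable layout: x = [v^(1), v^(2), c_1^(1), ..., c_n^(1), c_1^(2), ..., c_n^(2)]
def vslice(i): return slice(i * n, (i + 1) * n)
def cslice(i, l): return slice(nv + (i * n + l) * m, nv + (i * n + l) * m + m)

A_ub, b_ub = [], []
for i in range(N):
    # (11): A^(i) v^(i) + B^(i) sum_k c_k^(i) + sum_j lambda_{ji} v^(j) <= -eps
    block = np.zeros((n, nv + nc))
    block[:, vslice(i)] += A[i]
    for j in range(N):
        block[:, vslice(j)] += Lam[j, i] * np.eye(n)
    for k in range(n):
        block[:, cslice(i, k)] += B[i]
    A_ub.append(block); b_ub.append(-eps * np.ones(n))
    # (12): a_kl^(i) v_l^(i) + b_k^(i) c_l^(i) >= 0 for k != l (Metzler closed loop)
    for k in range(n):
        for l in range(n):
            if k == l: continue
            row = np.zeros(nv + nc)
            row[i * n + l] = -A[i][k, l]
            row[cslice(i, l)] = -B[i][k, :]
            A_ub.append(row[None, :]); b_ub.append(np.zeros(1))

bounds = [(eps, None)] * nv + [(None, None)] * nc  # (10): v > 0; c free
res = linprog(np.zeros(nv + nc), A_ub=np.vstack(A_ub),
              b_ub=np.concatenate(b_ub), bounds=bounds)
assert res.success
x = res.x
# (13): the l-th column of K^(i) is c_l^(i) / v_l^(i)
K = [np.column_stack([x[cslice(i, l)] / x[i * n + l] for l in range(n)])
     for i in range(N)]
print(K[0], K[1])
```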

b. Discrete-time case

Using (7), the system (2) is represented as

x(t + 1) = [A(rt) + B(rt)K(rt)] x(t).    (14)

The following theorem presents necessary and sufficient conditions for the existence of a controller of the form (7) such that the closed-loop system (14) is positive and mean stable.

Theorem 5 The following statements are equivalent.

(i) There exists a controller of the form (7) such that the closed-loop system (14) is positive and mean stable.

(ii) There exist a set of vectors v(i) ≻ 0 and matrices K(i) such that for each i ∈ S, A(i) + B(i)K(i) ⪰ 0 and

Σ_{j=1}^{N} πji (A(j) + B(j)K(j)) v(j) − v(i) ≺ 0.    (15)

(iii) There exist vectors v(i) = [v1(i) · · · vn(i)]T ∈ Rn and vectors c1(i), . . . , cn(i) ∈ Rm such that for each i ∈ S

v(i) ≻ 0,    (16)

Σ_{j=1}^{N} πji (A(j)v(j) + B(j) Σ_{k=1}^{n} ck(j)) − v(i) ≺ 0,    (17)

akl(i) vl(i) + bk(i) cl(i) ≥ 0,    (18)

with A(i) = [akl(i)] and B(i) = [(b1(i))T · · · (bn(i))T]T.

Moreover, if (16)-(18) have a feasible solution, an admissible controller gain is given by

K(i) = [(v1(i))−1 c1(i) · · · (vn(i))−1 cn(i)].    (19)

Proof: The proof follows the same line of argument as that of Theorem 4 and is hence omitted here.

The conditions provided in Theorems 4 and 5 are not only necessary and sufficient for controller design but also computationally effective. Indeed, like the stability conditions of Theorems 2 and 3, conditions (iii) of Theorems 4 and 5 are linear; therefore, they are solvable in terms of standard linear programming.

Remark 4 Conditions for controller design are also presented in the recent literature [31] and [32]. However, those conditions restrict the solution space so that the rank of every candidate controller gain matrix is one. Clearly, this leads to conservatism in controller design for multiple-input systems.

V. Numerical examples

In this section, two examples are provided to illustrate the effectiveness and the merits of the results developed in this paper.

Example 1 Consider the continuous-time Markov jump system (1) with three modes, whose system matrices are

A(1) = [2.0 −1.2; −2.5 1.0],  B(1) = [0.1 0; −0.2 0.3],
A(2) = [1.0 2.0; −2.5 1.5],  B(2) = [−0.2 0.1; 0 −0.3],
A(3) = [2.0 0; 0 2.0],  B(3) = [0.2 0; 0 −0.3].

The transition rate matrix Λ is

Λ = [−1 1 0; 0.6 −0.6 0; 0.8 0.2 −1].

Both Theorem 3 in [31] and Theorem 4 in this paper are used to design a controller such that the closed-loop system is positive and mean stable (or, equivalently, stochastically 1-moment stable). However, we find that the former method cannot design such a controller for this system. The reason is as follows. We know from Theorem 2 in this paper that the closed-loop system is mean stable if and only if H is a Hurwitz


matrix, where

H = diag{A(1) + B(1)K(1), · · · , A(3) + B(3)K(3)} + ΛT ⊗ I
  = [A(1) + B(1)K(1) − I, 0.6I, 0.8I;
     I, A(2) + B(2)K(2) − 0.6I, 0.2I;
     0, 0, A(3) + B(3)K(3) − I].

It is easy to see that all the eigenvalues of A(3) + B(3)K(3) − I belong to the set of eigenvalues of H. If Theorem 3 in [31] is used, only a special class of controller gain matrices K(i) can be obtained, of the form

K(i) = [k1(i), k2(i); a(i)k1(i), a(i)k2(i)],  ∀i ∈ S.

Denote

T = [1, 0; 3a(3)/2, 1].

Then, by a similarity transformation, we have

T (A(3) + B(3)K(3) − I) T−1 = [1 + 0.2k1(3) − 0.3a(3)k2(3), 0.2k2(3); 0, 1].

From the equation above, we see that one of the eigenvalues of A(3) + B(3)K(3) − I is 1. It follows immediately that H is not a Hurwitz matrix. Therefore, the method in [31] is not able to design the desired controller for this system. However, by using Theorem 4 in this paper, we get

v(1) = [1.9968; 9.3566], v(2) = [6.3175; 9.4705], v(3) = [3.2508; 7.4312],

c1(1) = [−192.7237; 4.8247], c2(1) = [161.9059; −158.1931], c1(2) = [212.4419; −126.0323],

c2(2) = [51.8447; 216.4061], c1(3) = [−218.6430; −70.7106], c2(3) = [66.7559; 222.7225].

Then, the controller gains are

K(1) = [−96.5159 17.3040; 2.4162 −16.9072], K(2) = [33.6276 5.4744; −19.9498 22.8506], K(3) = [−67.2573 8.9832; −21.7515 29.9713].

To demonstrate the effectiveness of the design more persuasively, a simulation with 1000 Monte Carlo runs is presented in this example. Fig. 1 shows the state response of the closed-loop system with initial condition x0 = [2 4]T ≻ 0. For all Monte Carlo runs, the states x(t) converge and satisfy x(t) ⪰ 0 for t ≥ 0. Therefore, the designed controller makes the closed-loop system positive and stable.

Figure 1: State response of the closed-loop system with 1000 Monte Carlo runs
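The Monte Carlo check behind Fig. 1 can be sketched as follows, using the Example 1 data and the gains obtained above. The Euler step size h, the horizon, and the reduced run count are assumptions of this sketch, not the paper's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example 1 data and designed gains (from the paper).
A = [np.array([[2.0, -1.2], [-2.5, 1.0]]),
     np.array([[1.0, 2.0], [-2.5, 1.5]]),
     np.array([[2.0, 0.0], [0.0, 2.0]])]
B = [np.array([[0.1, 0.0], [-0.2, 0.3]]),
     np.array([[-0.2, 0.1], [0.0, -0.3]]),
     np.array([[0.2, 0.0], [0.0, -0.3]])]
K = [np.array([[-96.5159, 17.3040], [2.4162, -16.9072]]),
     np.array([[33.6276, 5.4744], [-19.9498, 22.8506]]),
     np.array([[-67.2573, 8.9832], [-21.7515, 29.9713]])]
Lam = np.array([[-1.0, 1.0, 0.0], [0.6, -0.6, 0.0], [0.8, 0.2, -1.0]])
Acl = [A[i] + B[i] @ K[i] for i in range(3)]

h, T = 1e-3, 5.0          # Euler step and horizon (assumed here)
steps = int(T / h)
finals = []
for run in range(20):      # 1000 runs in the paper; reduced here for speed
    x, mode = np.array([2.0, 4.0]), 0
    for _ in range(steps):
        # Markov jump over [t, t+h): P(jump to j) ~ lambda_ij * h
        p = Lam[mode] * h
        p[mode] = 1.0 + Lam[mode, mode] * h
        mode = rng.choice(3, p=p)
        x = x + h * Acl[mode] @ x
        assert (x >= 0).all()  # trajectory stays in the positive orthant
    finals.append(x)
print("max final state:", np.max(finals))
```

Since each I + h·Acl(i) is elementwise nonnegative for this step size, the discretized trajectories remain nonnegative, mirroring the positivity seen in Fig. 1.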

Example 2 Consider the discrete-time Markov jump system (2) with three modes, whose system matrices are

A(1) = [2.5 0; 0 2.5],  B(1) = [−0.1 0; 0 0.3],
A(2) = [−2.5 −1; 2 1.5],  B(2) = [0.1 −0.4; 0 0.2],
A(3) = [−1.5 −0.2; −2.5 2],  B(3) = [0.1 0.5; −0.3 1].

The transition probability matrix Π is

Π = [0.8 0.1 0.1; 0 0.4 0.6; 0 0.7 0.3].

Both Theorem 4 in [31] and Theorem 5 in this paper are employed to design a controller such that the closed-loop system is positive and mean stable. Unfortunately, the former method fails to design such a controller for the system. The reason is similar to that in Example 1: we know from Theorem 3 in this paper


that the closed-loop system is mean stable if and only if F is a Schur matrix, where

F = (ΠT ⊗ I) diag{A(1) + B(1)K(1), · · · , A(3) + B(3)K(3)}
  = [0.8(A(1) + B(1)K(1)), 0, 0;
     0.1(A(1) + B(1)K(1)), 0.4(A(2) + B(2)K(2)), 0.7(A(3) + B(3)K(3));
     0.1(A(1) + B(1)K(1)), 0.6(A(2) + B(2)K(2)), 0.3(A(3) + B(3)K(3))].

It is easily seen that all the eigenvalues of 0.8(A(1) + B(1)K(1)) belong to the set of eigenvalues of F. If Theorem 4 in [31] is applied, only a special class of controller gain matrices K(i) can be obtained, of the form

K(i) = [k1(i), k2(i); a(i)k1(i), a(i)k2(i)],  ∀i ∈ S.

Denote

T = [1, 0; 3a(1), 1].

Then, by a similarity transformation, we have

0.8 T (A(1) + B(1)K(1)) T−1 = [2 − 0.08k1(1) + 0.24a(1)k2(1), −0.08k2(1); 0, 2].

From the equation above, we see that one of the eigenvalues of 0.8(A(1) + B(1)K(1)) is 2. It follows immediately that F is not a Schur matrix. Therefore, the method in [31] is unable to design a stabilizing controller for this system. However, by using Theorem 5 in this paper, we get

v(1) = [27.2239; 47.5007], v(2) = [42.5171; 27.1492], v(3) = [76.6347; 13.3917],

c1(1) = [467.0533; 35.6616], c2(1) = [−60.3109; −326.1235], c1(2) = [−163.6893; −396.5705],

c2(2) = [−123.3584; −175.7902], c1(3) = [230.6787; −269.4226], c2(3) = [−429.8724; −641.7662].


Then, the controller gains are

K(1) = [17.1560 −1.2697; 1.3099 −6.8657], K(2) = [−3.8500 −4.5437; −9.3273 −6.4750], K(3) = [3.0101 −32.0999; −3.5157 −47.9226].

Fig. 2 shows the state response of the closed-loop system with initial condition x0 = [4 2]T ≻ 0 for 1000 Monte Carlo runs. For all Monte Carlo runs, the states x(t) converge and satisfy x(t) ⪰ 0 for t ≥ 0. Therefore, the designed controller makes the closed-loop system positive and stable.

Figure 2: State response of the closed-loop system with 1000 Monte Carlo runs

It should be noted that the two numerical examples are artificial. The main reason for using such examples is that, besides the simulation comparison, the elaborately structured examples allow the conservatism of the existing methods to be analyzed visually, which provides further insight for readers.

VI. Conclusions

In this paper, the stability and stabilization problems were investigated for positive MJSs. Necessary and sufficient criteria were obtained for positive MJSs in both the continuous-time and discrete-time domains. The conservatism of the sufficient conditions presented in the existing literature is thoroughly removed by the proposed approach. It is well known that the stability and stabilization of systems are fundamental problems by virtue of their importance in the analysis and synthesis of systems. Therefore, it is anticipated that the techniques developed in this paper may be employed to handle other analysis and synthesis problems for the underlying systems, for instance, L1 control, L1 filtering, observer design, etc.

References

[1] Caswell, H. (2001). Matrix Population Models: Construction, Analysis and Interpretation. Sunderland: Sinauer Associates.
[2] de Jong, H. (2002). Modeling and simulation of genetic regulatory systems: a literature review. Journal of Computational Biology, 9(1), 67-103.
[3] Jacquez, J. (1985). Compartmental Analysis in Biology and Medicine. Ann Arbor: University of Michigan Press.
[4] Kaczorek, T. (2002). Positive 1D and 2D Systems. London: Springer-Verlag.
[5] Shorten, R., Wirth, F., & Leith, D. (2006). A positive systems model of TCP-like congestion control: asymptotic results. IEEE/ACM Transactions on Networking, 14(3).
[6] Haddad, W. M., & Chellaboina, V. (2004). Stability theory for nonnegative and compartmental dynamical systems with time delay. Systems and Control Letters, 51(5), 355-361.
[7] Kaczorek, T. (2007). The choice of the forms of Lyapunov functions for a positive 2D Roesser model. International Journal of Applied Mathematics and Computer Science, 17(4), 471-475.
[8] Gurvits, L., Shorten, R., & Mason, O. (2007). On the stability of switched positive linear systems. IEEE Transactions on Automatic Control, 52(6), 1099-1103.
[9] Mason, O., & Shorten, R. (2007). On linear copositive Lyapunov functions and the stability of switched positive linear systems. IEEE Transactions on Automatic Control, 52(7), 1346-1349.
[10] Liu, X. (2009). Stability analysis of switched positive systems: a switched linear copositive Lyapunov function method. IEEE Transactions on Circuits and Systems II: Express Briefs, 56(5), 414-418.
[11] De Leenheer, P., & Aeyels, D. (2001). Stabilization of positive linear systems. Systems and Control Letters, 44, 861-868.
[12] Daafouz, J., Riedinger, P., & Iung, C. (2002). Stability analysis and control synthesis for switched systems: a switched Lyapunov function approach. IEEE Transactions on Automatic Control, 47(11), 1883-1887.
[13] Rami, M. A., & Tadeo, F. (2007). Controller synthesis for positive linear systems with bounded controls. IEEE Transactions on Circuits and Systems II: Express Briefs, 54(2), 151-155.
[14] Roszak, B., & Davison, E. J. (2009). Necessary and sufficient conditions for stabilizability of positive LTI systems. Systems and Control Letters, 58, 474-481.
[15] Fornasini, E., & Valcher, M. E. (2012). Stability and stabilizability criteria for discrete-time positive switched systems. IEEE Transactions on Automatic Control, 57(5), 1208-1221.
[16] Rami, M. A. (2011). Solvability of static output-feedback stabilization for LTI positive systems. Systems and Control Letters, 60, 704-708.
[17] Shen, J., & Lam, J. (2015). On static output-feedback stabilization for multi-input multi-output positive systems. International Journal of Robust and Nonlinear Control, 25(16), 3154-3162.
[18] van den Hof, J. M. (1998). Positive linear observers for linear compartmental systems. SIAM Journal on Control and Optimization, 36(2), 590-608.
[19] Hardin, H. M., & van Schuppen, J. H. (2007). Observers for linear positive systems. Linear Algebra and its Applications, 425(2-3), 571-607.
[20] Rami, M. A., Tadeo, F., & Helmke, U. (2011). Positive observers for linear positive systems, and their implications. International Journal of Control, 84(4), 716-725.
[21] Zaidi, I., Chaabane, M., Tadeo, F., & Benzaouia, A. (2015). Static state-feedback controller and observer design for interval positive systems with time delay. IEEE Transactions on Circuits and Systems II: Express Briefs, 62(5), 506-510.
[22] de Farias, D. P., Geromel, J. C., do Val, J. B. R., & Costa, O. L. V. (2000). Output feedback control of Markov jump linear systems in continuous-time. IEEE Transactions on Automatic Control, 45(5), 944-949.
[23] Zhang, L., Zhu, Y., Shi, P., & Zhao, Y. (2015). Resilient asynchronous H∞ filtering for Markov jump neural networks with unideal measurements and multiplicative noises. IEEE Transactions on Cybernetics, 45(12).
[24] Zhang, L., Leng, Y., & Colaneri, P. (2016). Stability and stabilization of discrete-time semi-Markov jump linear systems via semi-Markov kernel approach. IEEE Transactions on Automatic Control, 61(2).
[25] Ji, Y., & Chizeck, H. J. (1990). Controllability, stabilizability, and continuous-time Markovian jump linear quadratic control. IEEE Transactions on Automatic Control, 35(7), 777-788.
[26] Zhang, L., Ning, Z., & Shi, P. (2015). Input-output approach to control for fuzzy Markov jump systems with time-varying delays and uncertain packet dropout rate. IEEE Transactions on Cybernetics, 45(11).
[27] Rami, M. A., & Shamma, J. (2009). Hybrid positive systems subject to Markovian switching. In Proceedings of the 3rd IFAC Conference on Analysis and Design of Hybrid Systems, Zaragoza, Spain, pp. 138-143.
[28] Bolzern, P., Colaneri, P., & De Nicolao, G. (2014). Stochastic stability of positive Markov jump linear systems. Automatica, 50(4), 1181-1187.
[29] Lian, J., Liu, J., & Zhuang, Y. (2015). Mean stability of positive Markov jump linear systems with homogeneous and switching transition probabilities. IEEE Transactions on Circuits and Systems II: Express Briefs, 62(8), 801-805.
[30] Bolzern, P., Colaneri, P., & De Nicolao, G. (2014). Stabilization via switching of positive Markov jump linear systems. In Proceedings of the 53rd IEEE Conference on Decision and Control, Los Angeles, USA, pp. 2359-2364.
[31] Zhang, J., Han, Z., & Zhu, F. (2014). Stochastic stability and stabilization of positive systems with Markovian jump parameters. Nonlinear Analysis: Hybrid Systems, 12(3), 147-155.
[32] Liu, J., He, J., Lian, J., & Zhuang, Y. (2015). Stochastic stability and stabilization for positive Markov jump systems with distributed time delay and incomplete known transition rates. In Proceedings of the 27th Chinese Control and Decision Conference, Qingdao, China, pp. 2389-2394.
[33] Fiedler, M., & Ptak, V. (1962). On matrices with non-positive off-diagonal elements and positive principal minors. Czechoslovak Mathematical Journal, 12(87), 382-400.
[34] Farina, L., & Rinaldi, S. (2000). Positive Linear Systems: Theory and Applications. New York: Wiley.
[35] Fang, Y., Loparo, K. A., & Feng, X. (1994). Almost sure stability and δ-moment stability of jump linear systems. International Journal of Control, 59(5), 1281-1307.
[36] Fang, Y., Loparo, K. A., & Feng, X. (1995). Stability of discrete time jump linear systems. Journal of Mathematical Systems, Estimation, and Control, 5(3), 275-321.
