Infinite Markov jump-bounded real lemma

Systems & Control Letters 57 (2008) 64 – 70 www.elsevier.com/locate/sysconle

Marcos G. Todorov, Marcelo D. Fragoso*

Department of Systems and Control, Laboratório Nacional de Computação Científica, LNCC/MCT, Av. Getúlio Vargas 333, 25651-075 Petrópolis, RJ, Brazil

Received 6 June 2006; received in revised form 29 May 2007; accepted 26 June 2007. Available online 31 July 2007.

Abstract

A bounded real lemma is established, providing an equivalent condition to stochastic stability (SS) of a continuous-time infinite Markovian jump linear system (MJLS) with a prescribed $L_2$-stochastic disturbance attenuation level $\gamma$, in terms of the existence of solutions to an infinite set of coupled linear matrix inequalities (LMIs). Besides the interest in its own right, the main result provides a fundamental tool for the development of an $H_\infty$-like theory devoted to this class of systems.

© 2007 Elsevier B.V. All rights reserved.

Keywords: Bounded real lemma; Stochastic stability; Continuous-time linear system; Markovian jump system

1. Introduction

Markov jump linear systems (MJLS) are by now a well-known class of systems with a wide potential for applicability, including applications in safety-critical and high-integrity systems (e.g., aircraft, chemical plants, nuclear power stations, robotic manipulator systems, and large-scale flexible structures for space stations such as antennas and solar arrays); typically, these are systems which may experience abrupt changes in their structure (see, for instance, [1] and references therein).

In this paper we establish a bounded real lemma for continuous-time MJLS, for the case in which the state space of the Markov process is countably infinite. When reduced to the case in which the state space of the Markov chain is finite and the state space of $x$ is real, the result here is, to some extent, the continuous-time counterpart of the main result in [6]. Furthermore, as a by-product, our main result has been successfully applied to solve a certain $H_\infty$ problem (in fact, a disturbance attenuation problem) for the same class of MJLS considered here (see [7]). It is noteworthy that the ideas in [5] have inspired our work.

A version of this paper has been accepted for presentation at the 2007 ACC.

* Corresponding author. Tel.: +55 24 2233 6013; fax: +55 24 2231 6141. E-mail addresses: [email protected] (M.G. Todorov), [email protected] (M.D. Fragoso).


Finally, we mention [2–4] as previous work dealing with the class of MJLS in the countably infinite state space case.

An outline of the content of this paper is as follows. In Section 2 we provide the bare essentials of notational conventions. The model and some preliminaries are given in Section 3. Finally, the bounded real lemma is treated in Section 4.

2. Notation

In the complex $n$-space $\mathbb{C}^n$ we denote by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$ the standard inner product and its corresponding norm, respectively. We denote by $\mathrm{Blt}(\mathbb{C}^m,\mathbb{C}^n)$ the Banach space of all matrices $M\in\mathbb{C}^{n\times m}$, equipped with the standard induced matrix norm $\|\cdot\|$, and simply write $\mathrm{Blt}(\mathbb{C}^n):=\mathrm{Blt}(\mathbb{C}^n,\mathbb{C}^n)$. We also write $\mathrm{Blt}(\mathbb{C}^{n*})=\{U\in\mathrm{Blt}(\mathbb{C}^n);\ U^*=U\}$, with $*$ denoting the conjugate transpose, and $\mathrm{Blt}(\mathbb{C}^{n+})=\{U\in\mathrm{Blt}(\mathbb{C}^{n*});\ U\geq 0\}$.

As usual in the infinite Markovian jump parameter setting, we introduce an infinite-dimensional Banach space, say $\mathcal{H}^{m,n}_{\sup}$, as the linear space composed of all sequences of matrices $M=(M_1,\ldots)$ where $M_i\in\mathrm{Blt}(\mathbb{C}^m,\mathbb{C}^n)$ for every $i\in S:=\{1,2,\ldots\}$, and such that $\|M\|_{\sup}:=\sup_{i\in S}\|M_i\|<\infty$. We further write $\mathcal{H}^{n}_{\sup}$ in place of $\mathcal{H}^{n,n}_{\sup}$ and say that such an $M$ belongs to $\mathcal{H}^{n*}_{\sup}$ (resp. $\mathcal{H}^{n+}_{\sup}$, the set of all positive matrices) if $M_i\in\mathrm{Blt}(\mathbb{C}^{n*})$ (resp. $M_i\in\mathrm{Blt}(\mathbb{C}^{n+})$) for all $i\in S$. Also, define $\tilde{\mathcal{H}}^{n+}_{\sup}\subset\mathcal{H}^{n+}_{\sup}$ as the subset of all uniformly positive $M$ (i.e., such that $M_i\geq\epsilon I_n$ for all $i\in S$ and some $\epsilon>0$ independent of $i$; here $I_n$ stands for the $n\times n$ identity matrix). Furthermore, we have that $M\in\mathcal{H}^{n-}_{\sup}$ (resp. $\tilde{\mathcal{H}}^{n-}_{\sup}$) if $-M=(-M_1,\ldots)\in\mathcal{H}^{n+}_{\sup}$ (resp. $\tilde{\mathcal{H}}^{n+}_{\sup}$).

Concerning the random objects, fix a complete stochastic basis $(\Omega,\mathcal{F},\mathcal{F}_t,\mathcal{P})$, where $\mathcal{F}_t\subset\mathcal{F}$ is a filtration with $t\in\mathbb{R}_+:=[0,\infty)$, to which all stochastic processes belong and are appropriately adapted. In addition, let $\mathrm{E}(\cdot)$ denote the usual mathematical expectation and set, for some $0<T<\infty$, the space $L^n_2(T)$ of all processes $y=\{(y(t),\mathcal{F}_t);\ t\in[0,T],\ y(\cdot)\in\mathbb{C}^n\}$ such that $\|y\|_T:=(\int_0^T \mathrm{E}[\|y(t)\|^2]\,\mathrm{d}t)^{1/2}$ is finite. We shall also write $y\in L^n_2(\mathbb{R}_+)$ whenever the limit $\|y\|_{\mathbb{R}_+}:=\lim_{T\to\infty}\|y\|_T$ is well defined. Finally, $I_\Delta(\cdot):\mathbb{R}_+\to\{0,1\}$ and $1_{\{\Upsilon\}}:(\Omega,\mathcal{F})\to\{0,1\}$ denote the indicator functions of an interval $\Delta\subset\mathbb{R}_+$ and of an event $\Upsilon\in\mathcal{F}$, respectively.

3. Model setting and some basic facts

Consider in $(\Omega,\mathcal{F},\mathcal{F}_t,\mathcal{P})$ a homogeneous Markov process $\theta=\{(\theta(t),\mathcal{F}_t),\ t\in\mathbb{R}_+\}$, with right-continuous sample paths and countably infinite state space $S=\{1,2,\ldots\}$, such that

$\mathcal{P}(\theta(t+\mathrm{d}t)=j\mid\theta(t)=i)=\lambda_{ij}\,\mathrm{d}t+o(\mathrm{d}t)$ for $i\neq j$, and $=1+\lambda_{ii}\,\mathrm{d}t+o(\mathrm{d}t)$ for $i=j$,   (1)

where $\lambda_{ij}\geq 0$ for $i\neq j$, and $0\leq\lambda_i:=-\lambda_{ii}=\sum_{j\in S\setminus\{i\}}\lambda_{ij}$ is bounded above by a finite constant, uniformly in $i\in S$. In addition, we assume that $\theta_0$ is fixed (see [8] for the random case, which is a trivial extension).

In this paper we deal with the class of systems modelled by the following stochastic differential equation on $t\in\mathbb{R}_+$:

$\dot x(t)=A_{\theta(t)}x(t)+B_{\theta(t)}v(t),\qquad x(0)=x_0,$   (2)

for some fixed $x_0\in\mathbb{C}^n$, and

$z(t)=C_{\theta(t)}x(t)+D_{\theta(t)}v(t).$   (3)

Here, $x$ denotes the state, $v\in L^{n_v}_2(\mathbb{R}_+)$ is any finite-energy stochastic disturbance acting on the system, and $z(t):(\Omega,\mathcal{F}_t)\to\mathbb{C}^{n_z}$ is the system output. We define for every $t$ the augmented state variable $(x(t),\theta(t)):(\Omega,\mathcal{F}_t)\to\mathbb{C}^n\times S$, and $A:=(A_1,\ldots)\in\mathcal{H}^{n}_{\sup}$, $B:=(B_1,\ldots)\in\mathcal{H}^{n_v,n}_{\sup}$, $C:=(C_1,\ldots)\in\mathcal{H}^{n,n_z}_{\sup}$ and $D:=(D_1,\ldots)\in\mathcal{H}^{n_v,n_z}_{\sup}$. For ease of notation, in case the initial condition $x_0$ is zero we denote the corresponding zero-state response (zs-response) by $x_{zs}(\cdot)=x(\cdot,\theta_0,0,v)$, and if the input $v$ is identically zero we write $x_{zi}(\cdot)=x(\cdot,\theta_0,x_0,0)$, the zero-input response (zi-response).

In this paper we consider the following notion of internal stability.

Definition 1. The system (2)–(3) is said to be stochastically stable (SS) if, for any initial conditions $(x_0,\theta_0)$, we have that $\|x_{zi}\|_{\mathbb{R}_+}<\infty$.

Remark 2. In [4] it has been proved that SS is equivalent to the process $x$ belonging to $L^n_2(\mathbb{R}_+)$ whenever $v\in L^{n_v}_2(\mathbb{R}_+)$. Note that in this case, since $C$, $D$ are sup-bounded operators, it follows that $z\in L^{n_z}_2(\mathbb{R}_+)$ for any such $v$. That is, SS leads to a kind of external $L_2$ input–output stability.
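For intuition about the dynamics (1)–(3), the sketch below simulates one sample path of a jump linear system on a finite truncation of the mode set $S$ (a simplification made only for numerics; the paper's setting is countably infinite). The function name, the two-mode data and the Euler discretization are illustrative assumptions, not part of the paper.

```python
import numpy as np

def simulate_mjls(A, B, Lam, v, x0, i0, T, dt, rng=None):
    """One sample path of x'(t) = A_{theta(t)} x(t) + B_{theta(t)} v(t),
    with theta generated from the transition rates Lam as in (1) and the
    state advanced by forward Euler over steps of length dt."""
    rng = np.random.default_rng() if rng is None else rng
    x, i = np.asarray(x0, dtype=float), i0
    path_x, path_i = [x.copy()], [i]
    for k in range(int(T / dt)):
        t = k * dt
        rates = Lam[i].copy()
        rates[i] = 0.0                       # lam_ij for j != i
        if rng.random() < rates.sum() * dt:  # jump with probability ~ lam_i * dt
            i = rng.choice(len(rates), p=rates / rates.sum())
        x = x + dt * (A[i] @ x + B[i] @ v(t))
        path_x.append(x.copy())
        path_i.append(i)
    return np.array(path_x), np.array(path_i)

# Hypothetical two-mode example, just to exercise the routine.
A = [np.array([[-1.0, 1.0], [0.0, -2.0]]), np.array([[0.0, 1.0], [-1.0, -0.5]])]
B = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])]
Lam = np.array([[-0.5, 0.5], [1.0, -1.0]])
xs, modes = simulate_mjls(A, B, Lam, v=lambda t: np.array([np.exp(-t)]),
                          x0=[1.0, 0.0], i0=0, T=5.0, dt=1e-3)
```

Averaging $\|x_{zi}(t)\|^2$ over many such paths gives a rough Monte Carlo check of the SS notion in Definition 1.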


The following operator will be germane to our approach:

$\mathcal{T}_i(P)=A_i^*P_i+P_iA_i+\sum_{j\in S}\lambda_{ij}P_j,\qquad i\in S.$   (4)
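For a finite truncation of $S$, the operator (4) is straightforward to evaluate numerically. The sketch below is only illustrative (the function name and the finite mode list are assumptions, not part of the paper).

```python
import numpy as np

def T_op(P, A, Lam):
    """Coupled Lyapunov operator (4): T_i(P) = A_i^* P_i + P_i A_i
    + sum_j lam_ij P_j, for lists of mode matrices P, A and a rate matrix Lam."""
    N = len(A)
    return [A[i].conj().T @ P[i] + P[i] @ A[i]
            + sum(Lam[i, j] * P[j] for j in range(N))
            for i in range(N)]
```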

In [4] it has been proved that $\mathcal{T}$ is both hermitian and bounded with respect to the norm $\|\mathcal{T}(\cdot)\|:=\sup_{i\in S}\|\mathcal{T}_i(\cdot)\|$. An important issue regarding SS is the following pair of theorems (see [2,4]), which give an equivalent condition to SS in terms of the existence of solutions to Lyapunov equations on an infinite-dimensional Banach space.

Theorem 3. If the system is SS then, for every $S\in\tilde{\mathcal{H}}^{n+}_{\sup}$, there exists a unique $G\in\tilde{\mathcal{H}}^{n+}_{\sup}$ such that

$\mathcal{T}(G)+S=0.$   (5)

Theorem 4. If there exists $G\in\tilde{\mathcal{H}}^{n+}_{\sup}$ such that (5) is satisfied for some $S\in\tilde{\mathcal{H}}^{n+}_{\sup}$, then the system is SS.
If the system is SS we may define, in the spirit of the $H_\infty$ theory, the following perturbation operator $\mathcal{L}:L^{n_v}_2(\mathbb{R}_+)\to L^{n_z}_2(\mathbb{R}_+)$:

$\mathcal{L}v(t)=C_{\theta(t)}x_{zs}(t)+D_{\theta(t)}v(t).$   (6)

It is clear that, for $x_0=0$, $z(\cdot)=\mathcal{L}v(\cdot)$. The $H_\infty$ norm of this operator, $\|\mathcal{L}\|$, is then defined as the induced norm from $L^{n_v}_2(\mathbb{R}_+)$ into $L^{n_z}_2(\mathbb{R}_+)$, i.e.,

$\|\mathcal{L}\|=\sup_{v\in L^{n_v}_2(\mathbb{R}_+),\ \|v\|_{\mathbb{R}_+}\neq 0}\frac{\|\mathcal{L}v\|_{\mathbb{R}_+}}{\|v\|_{\mathbb{R}_+}},$   (7)

or $\|\mathcal{L}\|=\sup_{v\in L^{n_v}_2(\mathbb{R}_+),\,\|v\|_{\mathbb{R}_+}\neq 0}(\|z\|_{\mathbb{R}_+}/\|v\|_{\mathbb{R}_+})$. The larger this norm is, the larger is the effect of the unknown disturbance $v$ on the output $z$; i.e., $\|\mathcal{L}\|$ measures the influence of the disturbances in the worst-case scenario. Therefore, a method which allows us to compute the size of $\|\mathcal{L}\|$ is rather important. In the control literature, a bounded real lemma refers to such a tool, providing a method to evaluate the size of $\|\mathcal{L}\|$.

4. The bounded real lemma

In this section we establish the main result of this paper, which is a jump-bounded real lemma (JBRL). Roughly, this result provides necessary and sufficient conditions for the system under consideration (2)–(3) to be stable with a prescribed bound for the perturbation operator ($\|\mathcal{L}\|<\gamma$), in terms of LMIs. Since, in view of (7), the norm of $\mathcal{L}$ is the minimal $\gamma\geq 0$ such that $\|z\|_{\mathbb{R}_+}\leq\gamma\|v\|_{\mathbb{R}_+}$ for all $v\in L^{n_v}_2(\mathbb{R}_+)$, it is very helpful, and usual in the $H_\infty$-control literature, to associate a finite-time quadratic cost functional with the problem. Therefore our approach relies on a detailed study of the following cost functional:

$J_T(x_0,\theta_0,v)=\int_0^T\mathrm{E}\big[\gamma^2\|v(t)\|^2-\|z(t)\|^2\big]\,\mathrm{d}t,$   (8)

which depends on the initial conditions $(x_0,\theta_0)$, the disturbance $v$, a terminal time $T>0$ and some prescribed disturbance attenuation level $\gamma>0$. We shall be especially concerned with the problem of minimizing the above cost functional with respect to $v$ which, whenever $x_0=0$, is closely related to the supremum problem in (7), and therefore to the size of $\|\mathcal{L}\|$.

The next result provides an alternative way of writing the functional defined in (8). This new way of expressing the cost allows us, among many other things, to solve the aforementioned minimization problem in a rather straightforward manner, with the aid of Riccati equations (see Lemma 11).

Theorem 5. For any initial condition $(x_0,\theta_0)$, all $v\in L^{n_v}_2(T)$, $T>0$, and every $P=(P_1,\ldots):[0,T]\to\mathcal{H}^{n*}_{\sup}$ continuously differentiable, the cost functional defined in (8) can be written as

$J_T(x_0,\theta_0,v)=\langle x_0,P_{\theta_0}(0)x_0\rangle-\mathrm{E}\langle x(T),P_{\theta(T)}(T)x(T)\rangle+\int_0^T\mathrm{E}\left\{\langle x(t),\dot P_{\theta(t)}(t)x(t)\rangle+\left\langle\begin{bmatrix}x(t)\\ v(t)\end{bmatrix},\mathcal{M}_{\theta(t)}(P(t))\begin{bmatrix}x(t)\\ v(t)\end{bmatrix}\right\rangle\right\}\mathrm{d}t,$   (9)

where, for $m=n+n_v$, we define $\mathcal{M}(P)=(\mathcal{M}_1(P),\ldots)$ by means of the following $(m\times m)$-matrices:

$\mathcal{M}_i(P)=\begin{bmatrix}\mathcal{T}_i(P)-C_i^*C_i & \mathcal{G}_i(P)\\ \mathcal{G}_i(P)^* & H_i^\gamma\end{bmatrix},$   (10)

with $\mathcal{G}_i(P)=P_iB_i-C_i^*D_i$ and $H_i^\gamma=\gamma^2I_{n_v}-D_i^*D_i$ for every $i\in S$.

Proof. Let us introduce the following Lyapunov function:

$F(t,x(t))=\sum_{j\in S}\mathrm{tr}(P_j(t)Q_j(t))=\sum_{j\in S}\mathrm{E}\{\langle x(t),P_j(t)x(t)\rangle 1_{\{\theta(t)=j\}}\},$   (11)

where $Q_j(t):=\mathrm{E}[x(t)x(t)^*1_{\{\theta(t)=j\}}]$, $j\in S$. Now define the $\mathcal{F}_t$-measurable functions $f_j(t,x(t))=\langle x(t),P_j(t)x(t)\rangle$ and $F_j(t,x(t))=\mathrm{E}[f_j(t,x(t))1_{\{\theta(t)=j\}}]$, whose differential can be written as

$\mathrm{d}F_j(t,x(t))=\mathrm{E}\{\mathrm{d}f_j(t,x(t))\,1_{\{\theta(t)=j\}}\}+\mathrm{E}\{f_j(t,x(t))\,\mathrm{d}1_{\{\theta(t)=j\}}\}.$   (12)

In order to calculate the first differential on the right-hand side of the expression above we proceed in the following way, where $\delta f_j(t,x(t);\mathrm{d}x(t))$ stands for the first Gâteaux variation of $f_j$ in the direction $(0,\mathrm{d}x(t))$, evaluated at $(t,x(t))$:

$\mathrm{d}f_j(t,x(t))=\frac{\partial}{\partial t}f_j(t,x(t))\,\mathrm{d}t+\delta f_j(t,x(t);\mathrm{d}x(t))=\{\langle x(t),\dot P_j(t)x(t)\rangle+\langle A_jx(t)+B_jv(t),P_j(t)x(t)\rangle+\langle P_j(t)x(t),A_jx(t)+B_jv(t)\rangle\}\,\mathrm{d}t.$   (13)

With respect to the second term on the right-hand side of (12) we have that

$\mathrm{E}\{f_j(t,x(t))\,\mathrm{d}1_{\{\theta(t)=j\}}\}=\mathrm{E}\{f_j(t,x(t))(1_{\{\theta(t+\mathrm{d}t)=j\}}-1_{\{\theta(t)=j\}})\}=\mathrm{E}\{\mathrm{E}[f_j(t,x(t))(1_{\{\theta(t+\mathrm{d}t)=j\}}-1_{\{\theta(t)=j\}})\mid\mathcal{F}_t]\}=\mathrm{E}\{f_j(t,x(t))\,\mathrm{E}(1_{\{\theta(t+\mathrm{d}t)=j\}}-1_{\{\theta(t)=j\}}\mid\theta(t))\}=\mathrm{E}\{f_j(t,x(t))[\mathcal{P}(\theta(t+\mathrm{d}t)=j\mid\theta(t))-\mathcal{P}(\theta(t)=j\mid\theta(t))]\}=\mathrm{E}\{\lambda_{\theta(t)j}f_j(t,x(t))\}\,\mathrm{d}t+o(\mathrm{d}t)$   (14)

for all $j\in S$, where the last equality follows as a direct consequence of (1), bearing in mind that $\mathcal{P}(\theta(t)=j\mid\theta(t))=0$ for $j\neq\theta(t)$, and that $\mathcal{P}(\theta(t+\mathrm{d}t)=j\mid\theta(t))-\mathcal{P}(\theta(t)=j\mid\theta(t))=1+\lambda_{\theta(t)\theta(t)}\,\mathrm{d}t+o(\mathrm{d}t)-1=\lambda_{\theta(t)\theta(t)}\,\mathrm{d}t+o(\mathrm{d}t)$ for $j=\theta(t)$. Summing up the expression (12) for all $j\in S$ and discarding the $o(\mathrm{d}t)$ terms it follows, since the sequence of partial sums $\sum_{j=1}^{M}F_j(t,x(t))$ converges uniformly to a limit on $[0,T]$, that

$\mathrm{d}F(t,x(t))=\mathrm{E}\langle x(t),\dot P_{\theta(t)}x(t)\rangle\,\mathrm{d}t+\mathrm{E}\left\langle\begin{bmatrix}x(t)\\ v(t)\end{bmatrix},\begin{bmatrix}A^*_{\theta(t)}P_{\theta(t)}+P_{\theta(t)}A_{\theta(t)}+\sum_{j\in S}\lambda_{\theta(t)j}P_j & P_{\theta(t)}B_{\theta(t)}\\ B^*_{\theta(t)}P_{\theta(t)} & 0\end{bmatrix}\begin{bmatrix}x(t)\\ v(t)\end{bmatrix}\right\rangle\,\mathrm{d}t,$   (15)

and the result follows by integrating both sides of the above expression and combining it with (8). □

The main result (JBRL) reads as follows.

Theorem 6 (jump-bounded real lemma). Given $\gamma>0$, the system is SS with $\|\mathcal{L}\|<\gamma$ if and only if there exists $P\in\tilde{\mathcal{H}}^{n-}_{\sup}$ such that $\mathcal{M}(P)\in\tilde{\mathcal{H}}^{m+}_{\sup}$.
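On a finite truncation of $S$, the condition of Theorem 6 becomes a finite set of coupled LMIs, which can be checked with an off-the-shelf SDP solver. The sketch below uses cvxpy with real data and replaces uniform definiteness by a fixed margin eps; the function name, the margin and the solver choice (any SDP-capable backend, e.g. SCS) are assumptions for illustration and not part of the paper.

```python
import numpy as np
import cvxpy as cp

def jbrl_feasible(A, B, C, D, Lam, gamma, eps=1e-6):
    """Finite-truncation feasibility test in the spirit of Theorem 6:
    search for P_i <= -eps*I such that M_i(P) >= eps*I for every retained
    mode i. Returns the P_i found, or None if the LMIs are infeasible."""
    N, n, nv = len(A), A[0].shape[0], B[0].shape[1]
    P = [cp.Variable((n, n), symmetric=True) for _ in range(N)]
    cons = []
    for i in range(N):
        Ti = A[i].T @ P[i] + P[i] @ A[i] \
            + sum(float(Lam[i, j]) * P[j] for j in range(N))
        Gi = P[i] @ B[i] - C[i].T @ D[i]
        Hi = gamma**2 * np.eye(nv) - D[i].T @ D[i]
        Mi = cp.bmat([[Ti - C[i].T @ C[i], Gi], [Gi.T, Hi]])
        cons += [P[i] << -eps * np.eye(n),
                 0.5 * (Mi + Mi.T) >> eps * np.eye(n + nv)]  # symmetrized LMI
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    ok = prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)
    return [Pi.value for Pi in P] if ok else None
```

Bisecting on $\gamma$ with this test then gives a numerical estimate of $\|\mathcal{L}\|$ for the truncated model, in line with the discussion following (7).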
The next proposition proves the above theorem in one direction, i.e., it establishes a relation between SS with $\|\mathcal{L}\|<\gamma$ and the existence of some $P\in\tilde{\mathcal{H}}^{n-}_{\sup}$ such that $\mathcal{M}(P)\in\tilde{\mathcal{H}}^{m+}_{\sup}$.

Proposition 7. If there exist $\gamma>0$ and $P\in\tilde{\mathcal{H}}^{n-}_{\sup}$ such that $\mathcal{M}(P)\in\tilde{\mathcal{H}}^{m+}_{\sup}$, then the system is SS and $\|\mathcal{L}\|<\gamma$.

Proof. From the hypothesis there exists $\epsilon>0$ such that $\mathcal{M}_i(P)\geq\epsilon^2I$ for all $i\in S$. Then, choosing any $0<\tilde\epsilon^2<\epsilon^2$ we have the strict inequalities $\mathcal{M}_i(P)-\tilde\epsilon^2I>0$, from which $[\mathcal{M}_i(P)-\tilde\epsilon^2I]_{11}=\mathcal{T}_i(P)-C_i^*C_i-\tilde\epsilon^2I_n>0$ for all $i\in S$. Thus, defining $R_i=\mathcal{T}_i(P)-C_i^*C_i-\tilde\epsilon^2I_n>0$, $i\in S$, we have that $-P\in\tilde{\mathcal{H}}^{n+}_{\sup}$ satisfies

$\mathcal{T}_i(-P)+(R_i+C_i^*C_i+\tilde\epsilon^2I_n)=0,\qquad i\in S,$   (16)

where $R_i+C_i^*C_i+\tilde\epsilon^2I_n>\tilde\epsilon^2I_n$, from which we have SS of the system in accordance with Theorem 4.

Moreover, for such a (constant) pair $(\gamma,P)$, one can write

$J_T(0,\theta_0,v)=\int_0^T\mathrm{E}[\gamma^2\|v(t)\|^2-\|(\mathcal{L}v)(t)\|^2]\,\mathrm{d}t=-\mathrm{E}\langle x(T),P_{\theta(T)}x(T)\rangle+\int_0^T\mathrm{E}\left\langle\begin{bmatrix}x(t)\\ v(t)\end{bmatrix},\mathcal{M}_{\theta(t)}(P)\begin{bmatrix}x(t)\\ v(t)\end{bmatrix}\right\rangle\mathrm{d}t\geq 0+\tilde\epsilon^2\int_0^T\mathrm{E}[\|v(t)\|^2]\,\mathrm{d}t,$   (17)

which is strictly positive whenever $\|v\|_{\mathbb{R}_+}\neq 0$. By letting $T\to\infty$ we find that $\gamma^2\|v\|^2_{\mathbb{R}_+}>\|\mathcal{L}v\|^2_{\mathbb{R}_+}$ for every such $v\in L^{n_v}_2(\mathbb{R}_+)$, from which we conclude that $\|\mathcal{L}\|<\gamma$. □

In order to prove the second part of the JBRL, Theorem 6, we proceed by parts, first establishing some intermediate results. The proposition below specializes the main result of [3] to our needs. It provides existence and uniqueness results for the Banach space differential equations we shall be dealing with in the sequel.

Proposition 8. There exist $X=(X_1,\ldots)$ and $Y=(Y_1,\ldots)$ mapping $(-\infty,T]$ into $\mathcal{H}^{n*}_{\sup}$ such that, for every $i\in S$ and all $0<T<\infty$,

$\dot X_i+\mathcal{T}_i(X)-C_i^*C_i=0,\qquad X_i(T)=0,$   (18)

and

$\dot Y_i+\mathcal{T}_i(Y)-C_i^*C_i-\mathcal{G}_i(Y)R_i^{-1}\mathcal{G}_i(Y)^*=0,\qquad Y_i(T)=0,$   (19)

where $R=(R_1,\ldots)\in\mathcal{H}^{n_v+}_{\sup}$ and $R_i>0$, $i\in S$. Moreover, $X$ and $Y$ are unique, continuous, and continuously differentiable.

Proof. It suffices to prove existence (with the indicated properties) of $Y$, since (19) reduces to (18) when $B$ and $D$ are zero. Eq. (19) may be written as

$\dot Y_i+Y_i\tilde A_i+\tilde A_i^*Y_i+\sum_{j\neq i}\lambda_{ij}Y_j+\tilde Q_i-Y_iB_iR_i^{-1}B_i^*Y_i=0,$   (20)

where $\tilde A_i=A_i+\tfrac{1}{2}\lambda_{ii}I+B_iR_i^{-1}D_i^*C_i$ and $\tilde Q_i=-C_i^*(I+D_iR_i^{-1}D_i^*)C_i$ for $i\in S$. Furthermore, we have that $\tilde A=(\tilde A_1,\ldots)\in\mathcal{H}^{n}_{\sup}$ and $\tilde Q=(\tilde Q_1,\ldots)=\tilde Q^*$. The proof of existence and uniqueness of $Y$ is completed by following the same steps as in the proof of Theorem 4.1 in [3], bearing in mind Remark 3.1. □

The following lemma shows that the cost associated with any $L^{n_v}_2(T)$ disturbance can be uniformly bounded from below.

Lemma 9. Supposing SS of the system with $\|\mathcal{L}\|<\gamma$, we have, for every initial condition $(x_0,\theta_0)$ and any $T>0$, that there exists a constant $c>0$ such that

$J_T(x_0,\theta_0,v)\geq -c\|x_0\|^2\qquad\forall v\in L^{n_v}_2(T).$   (21)

Proof. Based on Proposition 8, let us choose $X_T=(X_{T,1},\ldots):[0,T]\to\mathcal{H}^{n*}_{\sup}$ satisfying (18) over all of $[0,T]$, so that the cost representation from Theorem 5, considering $X_T(\cdot)$ in place of $P(\cdot)$, reads

$J_T(x_0,\theta_0,v)=\langle x_0,X_{T,\theta_0}(0)x_0\rangle+\int_0^T\mathrm{E}\{\langle\mathcal{G}_{\theta(t)}(X_T)^*x(t),v(t)\rangle+\langle v(t),\mathcal{G}_{\theta(t)}(X_T)^*x(t)\rangle+\langle v(t),H^\gamma_{\theta(t)}v(t)\rangle\}\,\mathrm{d}t.$   (22)

We can easily remove the last term inside the integral by writing $J_T(0,\theta_0,v)$ in the same form and remembering that $x(\cdot)=x_{zi}(\cdot)+x_{zs}(\cdot)$, so

$J_T(x_0,\theta_0,v)=J_T(0,\theta_0,v)+\langle x_0,X_{T,\theta_0}(0)x_0\rangle+\int_0^T\mathrm{E}\{\langle\mathcal{G}_{\theta(t)}(X_T)^*x_{zi}(t),v(t)\rangle+\langle v(t),\mathcal{G}_{\theta(t)}(X_T)^*x_{zi}(t)\rangle\}\,\mathrm{d}t.$   (23)

Define $v_0(\cdot)=v(\cdot)I_{[0,T]}(\cdot)$. Choosing $\beta>0$ such that $\beta^2<\gamma^2-\|\mathcal{L}\|^2$ we have

$J_T(0,\theta_0,v)\geq\gamma^2\|v_0\|^2_{\mathbb{R}_+}-\|\mathcal{L}v_0\|^2_{\mathbb{R}_+}\geq(\gamma^2-\|\mathcal{L}\|^2)\|v_0\|^2_{\mathbb{R}_+}\geq\beta^2\int_0^\infty\mathrm{E}[\|v_0(t)\|^2]\,\mathrm{d}t=\beta^2\int_0^T\mathrm{E}\langle v(t),v(t)\rangle\,\mathrm{d}t,$   (24)

and thus, by square completion, we can eliminate a quadratic term:

$J_T(x_0,\theta_0,v)\geq\langle x_0,X_{T,\theta_0}(0)x_0\rangle+\int_0^T\mathrm{E}\{\langle\beta v(t),\beta v(t)\rangle+\langle\beta^{-1}\mathcal{G}_{\theta(t)}(X_T)^*x_{zi}(t),\beta v(t)\rangle+\langle\beta v(t),\beta^{-1}\mathcal{G}_{\theta(t)}(X_T)^*x_{zi}(t)\rangle+\|\beta^{-1}\mathcal{G}_{\theta(t)}(X_T)^*x_{zi}(t)\|^2-\|\beta^{-1}\mathcal{G}_{\theta(t)}(X_T)^*x_{zi}(t)\|^2\}\,\mathrm{d}t\geq\langle x_0,X_{T,\theta_0}(0)x_0\rangle-\beta^{-2}\int_0^T\mathrm{E}[\|\mathcal{G}_{\theta(t)}(X_T)^*x_{zi}(t)\|^2]\,\mathrm{d}t.$   (25)

Now let us seek uniform bounds for $X_T(\cdot)$ over all of $[0,T]$. By time homogeneity of (18) we have that $X_T(t)=X_{T-t}(0)$ for any $t\in[0,T]$, yielding

$\langle x_0,X_{T,i}(t)x_0\rangle=\langle x_0,X_{T-t,i}(0)x_0\rangle=J_{T-t}(x_0,i,0)=-\|z\|^2_{T-t}\leq 0.$   (26)


The lower bound can be obtained as follows: first notice that, for $v$ identically zero, we have $z(s)=C_{\theta(s)}x_{zi}(s)$ for $s\in[0,T]$, and then

$\mathrm{E}[\|z(s)\|^2]=\sum_{j\in S}\mathrm{E}[\|C_{\theta(s)}x_{zi}(s)\|^21_{\{\theta(s)=j\}}]\leq\sum_{j\in S}\mathrm{E}[\|C_j\|^2\|x_{zi}(s)\|^21_{\{\theta(s)=j\}}]\leq\|C\|^2_{\sup}\sum_{j\in S}\mathrm{E}[\|x_{zi}(s)\|^21_{\{\theta(s)=j\}}]=\|C\|^2_{\sup}\,\mathrm{E}[\|x_{zi}(s)\|^2],$   (27)

and so there exists a constant $c_0>0$ such that

$\int_0^{T-t}\mathrm{E}[\|z(s)\|^2]\,\mathrm{d}s\leq\|C\|^2_{\sup}\int_0^\infty\mathrm{E}[\|x_{zi}(s)\|^2]\,\mathrm{d}s\leq c_0\|C\|^2_{\sup}\|x_0\|^2,$   (28)

straight from the SS hypothesis. Putting it all together:

$0\geq\langle x_0,X_{T,i}(t)x_0\rangle\geq -c_0\|C\|^2_{\sup}\|x_0\|^2.$   (29)

By a calculation analogous to (27), and always omitting the time dependence of $X_T(\cdot)$, it follows that $\mathrm{E}[\|\mathcal{G}_{\theta(t)}(X_T)^*x_{zi}(t)\|^2]\leq\|\mathcal{G}(X_T)\|^2_{\sup}\mathrm{E}[\|x_{zi}(t)\|^2]$. Since $\|\mathcal{G}_j(X_T)\|=\|X_{T,j}B_j-C_j^*D_j\|\leq\|X_{T,j}\|\,\|B_j\|+\|C_j\|\,\|D_j\|$, it follows from (29) that, for all $t\in[0,T]$,

$\|\mathcal{G}(X_T(t))\|_{\sup}\leq c_0\|C\|^2_{\sup}\|B\|_{\sup}+\|C\|_{\sup}\|D\|_{\sup}=\|C\|_{\sup}(c_0\|C\|_{\sup}\|B\|_{\sup}+\|D\|_{\sup})$   (30)

and thus, once more using the SS hypothesis,

$\int_0^T\mathrm{E}[\|\mathcal{G}_{\theta(t)}(X_T)^*x_{zi}(t)\|^2]\,\mathrm{d}t\leq\|C\|^2_{\sup}(c_0\|C\|_{\sup}\|B\|_{\sup}+\|D\|_{\sup})^2\|x_{zi}\|^2_{\mathbb{R}_+}\leq c_0\|C\|^2_{\sup}(c_0\|C\|_{\sup}\|B\|_{\sup}+\|D\|_{\sup})^2\|x_0\|^2.$   (31)

Substituting (29)–(31) back into (25), the result follows:

$J_T(x_0,\theta_0,v)\geq -\{c_0\|C\|^2_{\sup}+\beta^{-2}c_0\|C\|^2_{\sup}(c_0\|C\|_{\sup}\|B\|_{\sup}+\|D\|_{\sup})^2\}\|x_0\|^2\geq -c\|x_0\|^2,$   (32)

for some conveniently chosen constant $c>0$. □

The following lemma guarantees uniform positivity of the second diagonal block of the matrix $\mathcal{M}(\cdot)$ defined back in (10), which is obviously a necessary condition for the validity of the assertion made in Theorem 6.

Lemma 10. Suppose the system is SS with $\|\mathcal{L}\|<\gamma$. Then we have that $H^\gamma=(H_1^\gamma,\ldots)\in\tilde{\mathcal{H}}^{n_v+}_{\sup}$.

Proof. We need to prove that there exists $\kappa>0$ such that $H_i^\gamma\geq\kappa^2I_{n_v}$ for all $i\in S$. First, let us prove by contradiction that $H_i^\gamma\geq 0$ for every $i\in S$; that is, suppose that there exist $\nu\in\mathbb{C}^{n_v}$ with $\|\nu\|=1$ such that $\langle\nu,H_i^\gamma\nu\rangle<-\alpha$ for some $i\in S$ and $\alpha>0$. Choose again the only $X_T:[0,T]\to\mathcal{H}^{n*}_{\sup}$ that satisfies (18) for all $i\in S$. Thus, omitting its time dependence, we have for any $i\in S$ that

$J_T(0,i,v)=\int_0^T\mathrm{E}\{\langle x_{zs}(t),\mathcal{G}_{\theta(t)}(X_T)v(t)\rangle+\langle\mathcal{G}_{\theta(t)}(X_T)v(t),x_{zs}(t)\rangle+\langle v(t),H^\gamma_{\theta(t)}v(t)\rangle\}\,\mathrm{d}t.$   (33)

Now, defining $v_\rho(\cdot)=\nu I_{[0,\rho)}(\cdot)$ with $\rho\in[0,T)$, we have that, for all $t\in[0,\rho)$,

$\mathrm{E}\{\langle x_{zs}(t),\mathcal{G}_{\theta(t)}(X_T)\nu\rangle+\langle\mathcal{G}_{\theta(t)}(X_T)\nu,x_{zs}(t)\rangle\}=\sum_{j\in S}\mathrm{E}\{[\langle x_{zs}(t),\mathcal{G}_j(X_T)\nu\rangle+\langle\mathcal{G}_j(X_T)\nu,x_{zs}(t)\rangle]1_{\{\theta(t)=j\}}\}\leq\sum_{j\in S}2\,\mathrm{E}[\|x_{zs}(t)\|1_{\{\theta(t)=j\}}]\,\|\mathcal{G}_j(X_T)\|\leq 2\|\mathcal{G}(X_T)\|_{\sup}\mathrm{E}[\|x_{zs}(t)\|]\leq 2\|C\|_{\sup}(c_0\|C\|_{\sup}\|B\|_{\sup}+\|D\|_{\sup})\mathrm{E}[\|x_{zs}(t)\|],$   (34)

where we have used (30) to obtain the last inequality. Returning to (33) and using the negativity hypothesis on $H_i^\gamma$ we have that, with $\varsigma:=2\|C\|_{\sup}(c_0\|C\|_{\sup}\|B\|_{\sup}+\|D\|_{\sup})$ adequately chosen,

$J_T(0,i,v_\rho)\leq\int_0^\rho\{\varsigma\,\mathrm{E}[\|x_{zs}(t)\|]-\alpha\}\,\mathrm{d}t.$   (35)

Since the sample paths of $\theta$ are right-continuous, it follows that $\mathrm{E}[\|x_{zs}(t)\|]$ is also continuous to the right of $t=0$. Bearing in mind that $x_{zs}(0)=0$, the right-hand side of the above expression is then negative if we take $\rho>0$ sufficiently small. But this contradicts the previous lemma, from which we conclude that $H_i^\gamma\geq 0$ for any $i\in S$.

Now consider $0<\kappa<\gamma$ and $\tilde\gamma:=(\gamma^2-\kappa^2)^{1/2}$, in such a way that $\|\mathcal{L}\|<\tilde\gamma<\gamma$. Repeating the previous steps right from the start with $\tilde\gamma$ in place of $\gamma$, we conclude that $H_i^{\tilde\gamma}=(\gamma^2-\kappa^2)I_{n_v}-D_i^*D_i\geq 0$ for all $i\in S$, so

$H_i^\gamma=\gamma^2I_{n_v}-D_i^*D_i\geq\kappa^2I_{n_v}$   (36)

follows immediately. Furthermore, $\|H^\gamma\|_{\sup}=\sup_{i\in S}\|H_i^\gamma\|=\sup_{i\in S}\|\gamma^2I_{n_v}-D_i^*D_i\|\leq\gamma^2+\|D\|^2_{\sup}$, and the proof is completed. □

Next we establish the aforementioned minimization result concerning $J_T(x_0,\theta_0,\cdot)$ and a specific disturbance $v_T^*$, for later use.


Lemma 11. Suppose the system is SS with $\|\mathcal{L}\|<\gamma$ and denote by $P_T$ the solution of (19) with $H^\gamma$ in place of $R$. Then, defining $\mathcal{K}(P_T)=(\mathcal{K}_1(P_T),\ldots)$ with $\mathcal{K}_i(P_T)=-(H_i^\gamma)^{-1}\mathcal{G}_i(P_T)^*$ for all $i\in S$, we have that:

(i) The feedback disturbance $v_T^*(\cdot):=\mathcal{K}_{\theta(\cdot)}(P_T)x(\cdot)$ is such that

$v_T^*=\arg\min_{v\in\mathcal{D}}J_T(x_0,\theta_0,v),$   (37)

where $\mathcal{D}$ is the domain of $J$ with respect to its last argument.

(ii) The associated infimal cost is given by

$J_T(x_0,\theta_0,v_T^*)=\langle x_0,P_{T,\theta_0}(0)x_0\rangle;$   (38)

and, of course, $J_T(x_0,\theta_0,v_T^*)\leq J_T(x_0,\theta_0,v)$ for all $v\in L^{n_v}_2(T)$.

Proof. Based on the positivity of $(H_i^\gamma)^*=H_i^\gamma$ over all $i\in S$ (Lemma 10), we can define the inner product $\langle\cdot,\cdot\rangle_{H^\gamma_{\theta(t)}}=\langle\cdot,H^\gamma_{\theta(t)}\cdot\rangle$ and its corresponding induced norm $\|\cdot\|_{H^\gamma_{\theta(t)}}$. Then the proof follows easily once we write the cost in a more adequate form. From Theorem 5:

$J_T(x_0,\theta_0,v)=\langle x_0,P_{\theta_0}(0)x_0\rangle-\mathrm{E}\langle x(T),P_{\theta(T)}(T)x(T)\rangle+\int_0^T\mathrm{E}\{\langle x(t),[\dot P_{\theta(t)}+\mathcal{T}_{\theta(t)}(P)-C^*_{\theta(t)}C_{\theta(t)}]x(t)\rangle+\langle\mathcal{G}_{\theta(t)}(P)^*x(t),v(t)\rangle+\langle v(t),\mathcal{G}_{\theta(t)}(P)^*x(t)\rangle+\langle v(t),H^\gamma_{\theta(t)}v(t)\rangle\}\,\mathrm{d}t=\langle x_0,P_{\theta_0}(0)x_0\rangle-\mathrm{E}\langle x(T),P_{\theta(T)}(T)x(T)\rangle+\int_0^T\mathrm{E}\{\|v(t)-\mathcal{K}_{\theta(t)}(P)x(t)\|^2_{H^\gamma_{\theta(t)}}+\langle x(t),[\dot P_{\theta(t)}+\mathcal{T}_{\theta(t)}(P)-C^*_{\theta(t)}C_{\theta(t)}-\mathcal{G}_{\theta(t)}(P)(H^\gamma_{\theta(t)})^{-1}\mathcal{G}_{\theta(t)}(P)^*]x(t)\rangle\}\,\mathrm{d}t.$   (39)

Since the cost associated with a given set of arguments is independent of any particular choice of $P$ (by its very definition), we may consider the previous expression with $P_T$ (the solution of (19) with $R=H^\gamma$) in place of $P$. Thus

$J_T(x_0,\theta_0,v)=\langle x_0,P_{T,\theta_0}(0)x_0\rangle+\int_0^T\mathrm{E}\{\|v(t)-\mathcal{K}_{\theta(t)}(P_T)x(t)\|^2_{H^\gamma_{\theta(t)}}\}\,\mathrm{d}t\geq\langle x_0,P_{T,\theta_0}(0)x_0\rangle,$   (40)

and equality holds only if $v(t)=v_T^*(t)=\mathcal{K}_{\theta(t)}(P_T)x(t)$ for all $t\in[0,T]$, from which we prove both (i) and (ii). □

The following proposition guarantees both boundedness and monotonicity results for $P_T$, the solution of (19) over $[0,T]$ with $H^\gamma$ in place of $R$. It is of great importance when it comes to the infinite horizon problem $\lim_{T\to\infty}P_T$.

Proposition 12. Assume $\|\mathcal{L}\|<\gamma$ and SS. Consider $P_T:[0,T]\to\mathcal{H}^{n*}_{\sup}$, the solution of (19) with $R=H^\gamma$. Then the following hold for all $t\in[0,T]$:

(i) For every $i\in S$ there exists a constant $c>0$ such that

$-cI_n\leq P_{T,i}(t)\leq 0.$   (41)

(ii) For every $0<\tilde T<T$ and $i\in S$, we have that $P_{\tilde T,i}(t)\geq P_{T,i}(t)$.

(iii) The limit $P=\lim_{T\to\infty}P_T(t)$ exists and is such that $P\in\mathcal{H}^{n-}_{\sup}$.

Proof. In this proof we benefit from the optimality result obtained in the previous lemma. Simply pick any $i\in S$, $t\in[0,T]$, and write

$\langle x_0,P_{T,i}(t)x_0\rangle=\langle x_0,P_{T-t,i}(0)x_0\rangle=J_{T-t}(x_0,i,v^*_{T-t})\leq J_{T-t}(x_0,i,0)=-\int_0^{T-t}\mathrm{E}[\|z(s)\|^2]\,\mathrm{d}s\leq 0.$   (42)

Besides, from Lemma 9, we have that $\langle x_0,P_{T-t,i}(0)x_0\rangle=J_{T-t}(x_0,i,v^*_{T-t})\geq -c\|x_0\|^2=\langle x_0,-cI_nx_0\rangle$, from which (41) follows.

For the proof of (ii) define $\tilde v(\cdot)=v^*_{\tilde T-t}(\cdot)I_{[0,\tilde T-t]}(\cdot)\in L^{n_v}_2(\mathbb{R}_+)$. The result follows immediately from the following:

$\langle x_0,P_{T,i}(t)x_0\rangle=\langle x_0,P_{T-t,i}(0)x_0\rangle=J_{T-t}(x_0,i,v^*_{T-t})\leq J_{T-t}(x_0,i,\tilde v)=J_{\tilde T-t}(x_0,i,v^*_{\tilde T-t})-\int_{\tilde T-t}^{T-t}\mathrm{E}[\|z(s)\|^2]\,\mathrm{d}s\leq J_{\tilde T-t}(x_0,i,v^*_{\tilde T-t})=\langle x_0,P_{\tilde T,i}(t)x_0\rangle.$   (43)

Finally, we have that (iii) is simply a consequence of (i) and (ii), and of the completeness of $\mathcal{H}^{n*}_{\sup}$. In fact,

$\lim_{T\to\infty}P_T(t)=\lim_{T\to\infty}P_{T-t}(0)=\lim_{T\to\infty}P_T(0),$   (44)

which is independent of $t$. Finally, bounded negativity follows from (41). □

Note that item (iii) of the previous proposition, together with the finite horizon problem (19), implies that for all $i\in S$ the connected algebraic equations

$\mathcal{T}_i(P)-C_i^*C_i-\mathcal{G}_i(P)(\gamma^2I_{n_v}-D_i^*D_i)^{-1}\mathcal{G}_i(P)^*=0$   (45)

have a solution $P\in\mathcal{H}^{n-}_{\sup}$. These facts allow us to proceed to the proof of our main result, right after the next auxiliary result is established.
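Numerically, one convenient way to approximate the solution of (45) promised by Proposition 12 is to integrate the finite-horizon Riccati equation (19) (with $R=H^\gamma$) backwards from its terminal condition and let the horizon grow, since $P=\lim_{T\to\infty}P_T$. The sketch below does this on a finite truncation of $S$ with plain Euler steps in the time-to-go variable; the step sizes and the function name are illustrative assumptions, not part of the paper.

```python
import numpy as np

def riccati_limit(A, B, C, D, Lam, gamma, s_max=50.0, ds=1e-3):
    """Approximate P = lim_{T->inf} P_T by integrating the coupled Riccati
    ODE (19) (with R = H^gamma) in the time-to-go s = T - t, starting from
    P(T) = 0; requires gamma^2*I - D_i^* D_i > 0 for every retained mode."""
    N, n = len(A), A[0].shape[0]
    Q = [np.zeros((n, n)) for _ in range(N)]
    Hinv = [np.linalg.inv(gamma**2 * np.eye(B[i].shape[1]) - D[i].conj().T @ D[i])
            for i in range(N)]
    for _ in range(int(s_max / ds)):
        Qdot = []
        for i in range(N):
            Ti = (A[i].conj().T @ Q[i] + Q[i] @ A[i]
                  + sum(Lam[i, j] * Q[j] for j in range(N)))
            Gi = Q[i] @ B[i] - C[i].conj().T @ D[i]
            Qdot.append(Ti - C[i].conj().T @ C[i] - Gi @ Hinv[i] @ Gi.conj().T)
        Q = [Q[i] + ds * Qdot[i] for i in range(N)]
    return Q  # each Q_i approximates a (negative semidefinite) solution of (45)
```

Under the hypotheses of Proposition 12 the iterates decrease monotonically and stay bounded, so stopping once successive sweeps change little gives a usable approximation of $P$.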


Lemma 13. Define $\tilde C^\delta=(\tilde C_1^\delta,\ldots)$ and $\tilde D=(\tilde D_1,\ldots)$ with $\tilde C_i^\delta=[C_i^*\ \ \delta I_n]^*$ and $\tilde D_i=[D_i^*\ \ 0_{n_v\times n}]^*$, for any $i\in S$ and some $\delta>0$. Then the perturbation operator $\tilde{\mathcal{L}}^\delta:v\mapsto\tilde z^\delta$, corresponding to the system

$\dot x(t)=A_{\theta(t)}x(t)+B_{\theta(t)}v(t),\quad t\in\mathbb{R}_+,\qquad \tilde z^\delta(t)=\tilde C^\delta_{\theta(t)}x(t)+\tilde D_{\theta(t)}v(t),$   (46)

with $x(0)=x_0\in\mathbb{C}^n$, is such that, for some constant $c>0$,

$\|\tilde{\mathcal{L}}^\delta\|^2\leq\|\mathcal{L}\|^2+c^2\delta^2,$   (47)

where the norm of $\tilde{\mathcal{L}}^\delta$ is defined analogously to that of $\mathcal{L}$.

Proof. According to the above definition we have that, for any $v\in L^{n_v}_2(\mathbb{R}_+)$ and every $t>0$,

$\|\tilde{\mathcal{L}}^\delta v(t)\|^2=\|\mathcal{L}v(t)\|^2+\delta^2\|x_{zs}(t)\|^2.$   (48)

Hence, by integrating over $\mathbb{R}_+$ and taking the expected value it follows that

$\|\tilde{\mathcal{L}}^\delta v\|^2_{\mathbb{R}_+}=\|\mathcal{L}v\|^2_{\mathbb{R}_+}+\delta^2\|x_{zs}\|^2_{\mathbb{R}_+}\leq(\|\mathcal{L}\|^2+c^2\delta^2)\|v\|^2_{\mathbb{R}_+}$   (49)

for some constant $c>0$ (see [4, Theorem 5.2], as we have pointed out in Remark 2). The result follows immediately by taking the supremum of the above expression over all such $v$. □

Proof (Theorem 6). It remains to prove the converse of Proposition 7. In order to do so, first notice that for some sufficiently small $\delta>0$ and $\epsilon>0$ we have, from Lemma 13, that

$\|\mathcal{L}\|<\gamma\ \Rightarrow\ \|\tilde{\mathcal{L}}^{\tilde\delta}\|<\tilde\gamma,$   (50)

where $\tilde\delta=(\delta^2+\epsilon^2)^{1/2}$ and $\tilde\gamma=(\gamma^2-\epsilon^2)^{1/2}$. Thus, considering $\tilde\gamma$, $\tilde\delta$ in lieu of $\gamma$, $\delta$, the previous results state that $0<\tilde H_i:=\gamma^2I_{n_v}-\tilde D_i^*\tilde D_i-\epsilon^2I_{n_v}$ for all $i\in S$, and hence there is $\tilde P=(\tilde P_1,\ldots)\in\mathcal{H}^{n-}_{\sup}$ such that

$\mathcal{T}_i(\tilde P)-(\tilde C_i^{\tilde\delta})^*\tilde C_i^{\tilde\delta}-\tilde{\mathcal{G}}_i^{\tilde\delta}(\tilde P)(\gamma^2I_{n_v}-\tilde D_i^*\tilde D_i-\epsilon^2I_{n_v})^{-1}\tilde{\mathcal{G}}_i^{\tilde\delta}(\tilde P)^*=0,$   (51)

where $(\tilde C_i^{\tilde\delta})^*\tilde C_i^{\tilde\delta}=C_i^*C_i+\tilde\delta^2I_n$, $\tilde{\mathcal{G}}_i^{\tilde\delta}(\tilde P)=\tilde P_iB_i-(\tilde C_i^{\tilde\delta})^*\tilde D_i=\tilde P_iB_i-C_i^*D_i$, and $\tilde D_i^*\tilde D_i=D_i^*D_i$. Hence (51) may be written as (notice that $\mathcal{T}(\cdot)$ is homogeneous)

$\mathcal{T}_i(-\tilde P)+\{C_i^*C_i+\tilde\delta^2I_n+\mathcal{G}_i(\tilde P)(H_i^\gamma-\epsilon^2I_{n_v})^{-1}\mathcal{G}_i(\tilde P)^*\}=0,$   (52)

so $\tilde P$ actually belongs to $\tilde{\mathcal{H}}^{n-}_{\sup}$, from Theorem 3. Furthermore, the above expression states that

$\mathcal{T}_i(\tilde P)-C_i^*C_i-\delta^2I_n-\mathcal{G}_i(\tilde P)(H_i^\gamma-\epsilon^2I_{n_v})^{-1}\mathcal{G}_i(\tilde P)^*=\epsilon^2I_n>0,$

and since $H_i^\gamma-\epsilon^2I_{n_v}>0$ for every $i\in S$, it follows from Schur complements that $\tilde P\in\tilde{\mathcal{H}}^{n-}_{\sup}$ is such that $\mathcal{M}(\tilde P)\in\tilde{\mathcal{H}}^{m+}_{\sup}$, completing the proof. □

Acknowledgments

Research supported in part by the Research Council of the State of Rio de Janeiro (FAPERJ), under Grant no. 151.899/05, by the Brazilian National Research Council (CNPq), under Grant nos. 141363/2007-0 and 302587/2004-7, and by FAPESP, Grant no. 03/06736-7.

References

[1] O.L.V. Costa, M.D. Fragoso, R.P. Marques, Discrete-Time Markov Jump Linear Systems, Probability and its Applications, Springer, New York, 2005.
[2] M.D. Fragoso, J. Baczynski, Lyapunov coupled equations for continuous-time infinite Markov jump linear systems, J. Math. Anal. Appl. 274 (2002) 319–355.
[3] M.D. Fragoso, J. Baczynski, On an infinite dimensional perturbed Riccati differential equation arising in stochastic control, Linear Algebra Appl. 406 (2005) 165–176.
[4] M.D. Fragoso, O.L.V. Costa, A unified approach for stochastic and mean square stability of continuous-time linear systems with Markovian jumping parameters and additive disturbances, SIAM J. Control Optim. 44 (4) (2005) 1165–1191.
[5] D. Hinrichsen, A.J. Pritchard, Stochastic $H^\infty$, SIAM J. Control Optim. 36 (5) (1998) 1504–1538.
[6] P. Seiler, R. Sengupta, A bounded real lemma for jump systems, IEEE Trans. Automat. Control 48 (9) (2003) 1651–1654.
[7] M.G. Todorov, M.D. Fragoso, Output feedback $H_\infty$ control of continuous-time infinite Markovian jump linear systems via LMI methods, accepted for presentation at the European Control Conference (ECC'07).
[8] M.G. Todorov, M.D. Fragoso, Infinite Markov jump bounded real lemma, accepted for presentation at the American Control Conference (ACC'07).