Remarks on the time-varying H∞ Riccati equations


Systems & Control Letters 37 (1999) 335–345

www.elsevier.com/locate/sysconle

Akira Ichikawa a,∗, Hitoshi Katayama b

a Department of Electrical and Electronic Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
b Department of Electro-Mechanical Engineering, Osaka Electro-Communication University, Neyagawa 572-8530, Japan

Received 14 October 1998; received in revised form 11 January 1999; accepted 9 April 1999

∗ Corresponding author. Tel.: +81-53-478-1088; fax: +81-53-478-1088. E-mail addresses: [email protected] (A. Ichikawa), [email protected] (H. Katayama).

Abstract

In this paper we establish some useful properties of three Riccati equations appearing in the standard H∞-control problems for continuous and discrete-time time-varying systems. We then give necessary and sufficient conditions for the existence of a suboptimal controller by three conditions involving two independent Riccati equations with a coupling inequality. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: H∞-control; Continuous time; Discrete time; Time-varying; Riccati equations

1. Introduction

The standard H∞ control problem for a continuous-time system is to find necessary and sufficient conditions for the existence of a so-called γ-suboptimal controller and the characterization of all such controllers [1]. As is well known, necessary and sufficient conditions are given either by solutions of two coupled Riccati equations (X∞, Ytmp in [1]) or by solutions of two Riccati equations (X∞, Y∞ in [1]) with a coupling condition ρ(X∞Y∞) < γ², where ρ(M) denotes the spectral radius of M. Similar results are obtained for the discrete-time system [7,14,16]. In applications the conditions of the second type are usually more convenient to use, since the two Riccati equations can be solved independently.

The H∞ theory has been extended to time-varying systems. The H∞ problem for continuous-time systems has been considered by Ravi et al. [12], where necessary and sufficient conditions of the first type are given. The H∞ problem on the finite horizon has also been considered by Limebeer et al. [10], where conditions of the second type are given. However, necessary and sufficient conditions of the second type on the infinite horizon have not been established in the literature. The results can easily be guessed and take the same form as in the time-invariant case, but the arguments leading to them are more involved and exhibit some special features of time-varying systems. Hence we collect some useful properties of the H∞ Riccati equations and establish the conditions of the second type.

In the discrete-time case the necessary and sufficient conditions of the second type are more difficult to establish, as remarked by Green and Limebeer [3], even in the time-invariant case. The time-varying case has been considered in [2,8], where conditions of the first type are given. In this paper we shall also establish


relations between the three Riccati equations and give necessary and sufficient conditions of the second type. The calculations leading to them are far more involved, but we believe that it is important to have complete results for both continuous- and discrete-time systems. There are many systems in practice which are time-varying, such as the motion of a pendulum with moving support, parametric amplifiers and the motion of a spacecraft subject to gravity gradient and aerodynamic torques [4], and the introduction of time-varying models leads to better accuracy and quality in applications [4,9,11]. Our results can be directly applied to such systems. The H∞ theory for time-varying systems is also useful when we consider sampled-data systems using the jump system formulation [6,13,15].

2. Continuous-time Riccati equations

Consider
ẋ = A(t)x + B₁(t)w + B₂(t)u,
z = C₁(t)x + D₁₂(t)u,    (1)
y = C₂(t)x + D₂₁(t)w,
where x ∈ R^n is the state, w ∈ R^{m1} the disturbance, u ∈ R^{m2} the control input, z ∈ R^{p1} the controlled output, y ∈ R^{p2} the output to be used for control, and all matrices are piecewise continuous and uniformly bounded over [t₀, ∞). The standard H∞ control problem is concerned with necessary and sufficient conditions for the existence of a γ-suboptimal controller, i.e., an internally stabilizing controller with
‖z‖_{L²(t₀,∞;R^{p1})} ≤ d‖w‖_{L²(t₀,∞;R^{m1})}  for any w ∈ L²(t₀, ∞; R^{m1})
for some 0 < d < γ, and the following conditions (H) are assumed:
(i) D₁₂′[C₁  D₁₂] = [0  I],
(ii) D₂₁[B₁′  D₂₁′] = [0  I],
(iii) (A, B₁, C₁) is stabilizable and detectable,
(iv) (A, B₂, C₂) is stabilizable and detectable.
The solution to this problem is given by Ravi et al. [12]. To introduce their results we need a definition. Consider the Riccati equations
−Ẋ = A′(t)X + XA(t) + P(t) + XR(t)X,    (2)
Ẏ = A(t)Y + YA′(t) + Q(t) + YS(t)Y    (3)
on [t₀, ∞), where P, Q, R and S are bounded piecewise continuous symmetric matrices. Below we often omit the argument t in A(t), B₁(t) and so on.

Definition 1. (a) A bounded symmetric solution X of (2) is called a stabilizing solution if A + RX is exponentially stable, i.e.,
‖S_X(t, s)‖ ≤ M e^{−α(t−s)},  t₀ ≤ s ≤ t < ∞
for some M > 0 and α > 0, where S_X is the transition matrix of A + RX. (b) A bounded symmetric solution Y of (3) is called a stabilizing solution if A + YS is exponentially stable.

The following result is known [12,5].

Theorem 1. There exists a γ-suboptimal controller if and only if the two conditions below hold.
(i) There exists a bounded nonnegative stabilizing solution to the Riccati equation
−Ẋ = A′X + XA + X(γ⁻²B₁B₁′ − B₂B₂′)X + C₁′C₁.    (4)


(ii) For the X given in (i), there exists a bounded nonnegative stabilizing solution to the Riccati equation
Ż = (A + γ⁻²B₁B₁′X)Z + Z(A + γ⁻²B₁B₁′X)′ + Z(γ⁻²XB₂B₂′X − C₂′C₂)Z + B₁B₁′,    (5)
Z(t₀) = 0.    (6)
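To make the two conditions in Theorem 1 concrete, here is a minimal numerical sketch (not from the paper) for a scalar time-varying example with hypothetical data A(t), B₁(t), B₂(t), C₁(t), C₂(t) and γ: equation (4) is integrated backward over a finite horizon from a zero terminal condition, and (5)–(6) is then integrated forward using the stored X. This is only a finite-horizon approximation; on [t₀, ∞) one must additionally verify that the solutions are bounded, nonnegative and stabilizing.

```python
# Finite-horizon illustration of Theorem 1 (scalar case, hypothetical data -- not from the paper).
import numpy as np
from scipy.integrate import solve_ivp

gamma = 2.0
t0, T = 0.0, 10.0
A  = lambda t: -1.0 + 0.3 * np.sin(t)      # scalar, time-varying, exponentially stable
B1 = lambda t: 1.0
B2 = lambda t: 1.0 + 0.1 * np.cos(t)
C1 = lambda t: 1.0
C2 = lambda t: 1.0

ts = np.linspace(t0, T, 801)

def x_rhs(s, X):                           # equation (4) in the reversed time s = t0 + T - t
    t = t0 + T - s
    return 2*A(t)*X + (B1(t)**2/gamma**2 - B2(t)**2)*X**2 + C1(t)**2

X = solve_ivp(x_rhs, (t0, T), [0.0], t_eval=ts, rtol=1e-9).y[0][::-1]   # X(t) on the forward grid
X_of = lambda t: np.interp(t, ts, X)

def z_rhs(t, Z):                           # equation (5) with the X just computed; Z(t0) = 0 is (6)
    a = A(t) + B1(t)**2 * X_of(t) / gamma**2
    return 2*a*Z + (X_of(t)**2 * B2(t)**2 / gamma**2 - C2(t)**2)*Z**2 + B1(t)**2

Z = solve_ivp(z_rhs, (t0, T), [0.0], t_eval=ts, rtol=1e-9).y[0]
print("X >= 0 and Z >= 0 on the grid:", (X >= -1e-12).all() and (Z >= -1e-12).all())
```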

We shall establish the following equivalent necessary and sufficient conditions.

Theorem 2. There exists a γ-suboptimal controller if and only if the three conditions below hold.
(i) There exists a bounded nonnegative stabilizing solution to Riccati equation (4).
(ii) There exists a bounded nonnegative stabilizing solution to the Riccati equation
Ẏ = AY + YA′ + Y(γ⁻²C₁′C₁ − C₂′C₂)Y + B₁B₁′,    (7)
Y(t₀) = 0.    (8)

(iii) ρ(X(t)Y(t)) < d², ∀t ≥ t₀, for some 0 < d < γ, where ρ(M) denotes the spectral radius of M.

We shall show, under condition (i), that (ii) and (iii) imply condition (ii) in Theorem 1. We shall establish this theorem by a series of lemmas which give useful properties of these Riccati equations.

Lemma 1. Suppose there exists a γ-suboptimal controller. Then conditions (i), (ii) of Theorem 1 and condition (ii) of Theorem 2 hold.

Proof. Necessity of Theorem 1 is proved in [12]. Condition (ii) of Theorem 2 is the dual of (i) and follows from the adjoint system of (1).

Lemma 2. Let X, Y and Z satisfy (4), (7), (8) and (5), (6), respectively. Then Z − Y − γ⁻²ZXY = 0, ∀t ∈ [t₀, ∞).

Proof. Set Q = Z − Y − γ⁻²ZXY. Then by direct calculation we have
Q̇ = [A + γ⁻²B₁B₁′X + Z(γ⁻²XB₂B₂′X − C₂′C₂)]Q + Q[A + Y(γ⁻²C₁′C₁ − C₂′C₂)]′.
Hence Q(t) = S_Z(t, t₀)Q(t₀)S_Y′(t, t₀), where S_Z and S_Y are the transition matrices of A + γ⁻²B₁B₁′X + Z(γ⁻²XB₂B₂′X − C₂′C₂) and A + Y(γ⁻²C₁′C₁ − C₂′C₂), respectively. Since Q(t₀) = 0, it follows that Q(t) = 0, ∀t ≥ t₀.

Lemma 3. Let X, Y and Z be matrices of the same order with the property
Z − Y − γ⁻²ZXY = 0.
Then
(i) I + γ⁻²XZ and I − γ⁻²XY are nonsingular and
Z = Y(I − γ⁻²XY)⁻¹,  Y = Z(I + γ⁻²XZ)⁻¹.
(ii) λ is an eigenvalue of XZ if and only if μ = γ²λ/(γ² + λ) is an eigenvalue of XY.
(iii) If X and Z are nonnegative, then every eigenvalue of XZ is nonnegative and
ρ(XY) = max_{λ∈σ(XZ)} γ²λ/(γ² + λ) < γ²,
where σ(A) denotes the set of eigenvalues of A.
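Lemma 3 is a purely algebraic statement and is easy to check numerically. The following sketch (illustrative only; not from the paper) uses randomly generated nonnegative definite X and Z and an arbitrary γ, forms Y as in part (i), and verifies parts (ii) and (iii).

```python
# Numerical check of Lemma 3 with random nonnegative definite X and Z (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 4, 1.5
def random_psd(n):
    Mh = rng.standard_normal((n, n))
    return Mh @ Mh.T                                      # symmetric, nonnegative definite

X, Z = random_psd(n), random_psd(n)
Y = Z @ np.linalg.inv(np.eye(n) + X @ Z / gamma**2)       # part (i): Y = Z(I + XZ/gamma^2)^(-1)

# Part (i): Z - Y - ZXY/gamma^2 = 0
print(np.allclose(Z - Y - Z @ X @ Y / gamma**2, 0))

# Part (ii): eigenvalues of XY are gamma^2*lam/(gamma^2 + lam) for lam in sigma(XZ)
lam = np.sort(np.linalg.eigvals(X @ Z).real)
mu  = np.sort(np.linalg.eigvals(X @ Y).real)
print(np.allclose(mu, gamma**2 * lam / (gamma**2 + lam)))

# Part (iii): the spectral radius of XY stays below gamma^2
print(max(abs(np.linalg.eigvals(X @ Y))) < gamma**2)
```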


Proof. (i) Since
(I + γ⁻²XZ)(I − γ⁻²XY) = I + γ⁻²X(Z − Y − γ⁻²ZXY) = I,
I + γ⁻²XZ and I − γ⁻²XY, and hence I + γ⁻²ZX and I − γ⁻²YX, are nonsingular and
Y = Z(I + γ⁻²XZ)⁻¹ = (I + γ⁻²ZX)⁻¹Z,  Z = Y(I − γ⁻²XY)⁻¹ = (I − γ⁻²YX)⁻¹Y.
(ii) Since XY = XZ(I + γ⁻²XZ)⁻¹, the assertion readily follows.
(iii) The first part is well known, and by (ii) XY has only nonnegative eigenvalues. Thus the second part also follows from (ii).

Lemma 4. Suppose there exists a γ-suboptimal controller. Then condition (iii) in Theorem 2 holds.

Proof. By Lemmas 1–3 the eigenvalues of XY have the form γ²λ/(γ² + λ), λ ∈ σ(XZ). Since X and Z are nonnegative and uniformly bounded on [t₀, ∞), the λ ∈ σ(XZ) are nonnegative and uniformly bounded. Hence ρ(XY) < d² for some 0 < d < γ.

Lemma 5. (a) Suppose (4), (5) and (7) have solutions and that x satisfies
−ẋ = [A + Y(γ⁻²C₁′C₁ − C₂′C₂)]′x.    (9)
Then x̃ = (I − γ⁻²XY)x satisfies
−(d/dt)x̃ = [A + γ⁻²B₁B₁′X + Z(γ⁻²XB₂B₂′X − C₂′C₂)]′x̃.    (10)

(b) Suppose (4), (5) and (7) possess bounded solutions on [t₀, ∞) and that I − γ⁻²XY has a bounded inverse on [t₀, ∞). Then Z is a stabilizing solution of (5) if and only if Y is a stabilizing solution of (7).

Proof. (a) Differentiating x̃ and using (4) and (7) we obtain
−(d/dt)x̃ = [A + γ⁻²B₁B₁′X + Y(γ⁻²XB₂B₂′X − C₂′C₂)]′x − γ⁻²[A + γ⁻²B₁B₁′X]′XYx
 = [A + γ⁻²B₁B₁′X]′(I − γ⁻²XY)x + [Y(I − γ⁻²XY)⁻¹(γ⁻²XB₂B₂′X − C₂′C₂)]′(I − γ⁻²XY)x
 = [A + γ⁻²B₁B₁′X + Z(γ⁻²XB₂B₂′X − C₂′C₂)]′x̃.

(b) Under the stated assumptions (9) and (10) are equivalent and the last assertion follows.

Lemma 6. Suppose X and Y satisfy the conditions of Theorem 2. Then Z = Y(I − γ⁻²XY)⁻¹ is a bounded nonnegative stabilizing solution of (5) and (6).

Proof. By condition (iii), (I − γ⁻²XY)⁻¹ is bounded and so is Z, and Z = Y^{1/2}(I − γ⁻²Y^{1/2}XY^{1/2})⁻¹Y^{1/2} ≥ 0. Moreover, Z − Y − γ⁻²ZXY = 0. Then we can show that Z is differentiable and in fact
Ż(I − γ⁻²XY) = (A + γ⁻²B₁B₁′X)Y + Z(A + γ⁻²B₁B₁′X)′(I − γ⁻²XY) + B₁B₁′(I − γ⁻²XY) + Z(γ⁻²XB₂B₂′X − C₂′C₂)Y.


Hence Z satisfies (5) and (6). By Lemma 5, Z is a stabilizing solution of (5) since Y is a stabilizing solution of (7).

Now the proof of Theorem 2 follows immediately from Lemmas 1, 4 and 6. As for stabilizing solutions we have the following properties.

Lemma 7. (a) A stabilizing solution of (2), if it exists, is unique. (b) Let Y and Ȳ be two stabilizing solutions of (3). Then Y(t) − Ȳ(t) → 0 as t → ∞.

Proof. (a) Let X and X̄ be two stabilizing solutions of (2). Then
−(d/dt)(X − X̄) = (A + RX)′(X − X̄) + (X − X̄)(A + RX̄).
Hence X(t) − X̄(t) = S_X′(T, t)[X(T) − X̄(T)]S_X̄(T, t), where S_X and S_X̄ are the state transition matrices of A + RX and A + RX̄. Hence
‖X(t) − X̄(t)‖ ≤ M₁e^{−α₁(T−t)} c M₂e^{−α₂(T−t)}
for some positive constants M_i, α_i, i = 1, 2, and c. Letting T → ∞ we obtain X(t) − X̄(t) = 0, ∀t ≥ t₀.
(b) Since
(d/dt)(Y − Ȳ) = (A + YS)(Y − Ȳ) + (Y − Ȳ)(A + ȲS)′,
we have Y(t) − Ȳ(t) = S_Y(t, t₀)[Y(t₀) − Ȳ(t₀)]S_Ȳ′(t, t₀), where S_Y and S_Ȳ are the state transition matrices of A + YS and A + ȲS. Hence Y(t) − Ȳ(t) → 0 as t → ∞, since A + YS and A + ȲS are exponentially stable.

3. Discrete-time Riccati equations

Consider
x(k+1) = A(k)x(k) + B₁(k)w(k) + B₂(k)u(k),
z(k) = C₁(k)x(k) + D₁₂(k)u(k),    (11)
y(k) = C₂(k)x(k) + D₂₁(k)w(k),

where x ∈ R^n, w ∈ R^{m1}, u ∈ R^{m2}, z ∈ R^{p1}, y ∈ R^{p2} and all matrices are uniformly bounded functions of k, k ≥ k₀. The discrete-time H∞ control problem is concerned with necessary and sufficient conditions for the existence of a γ-suboptimal controller, i.e., an internally stabilizing controller with
‖z‖_{l²(k₀,∞;R^{p1})} ≤ d‖w‖_{l²(k₀,∞;R^{m1})}  for any w ∈ l²(k₀, ∞; R^{m1})
for some 0 < d < γ, and condition (H) is assumed for (11). The solution to this problem is given in [2,8]. To introduce their results, we need the following Riccati equations:
V(k) ≥ aI  for some a > 0,    (12)
X(k) = C₁′C₁ + A′X(k+1)A − (R₂′T₂⁻¹R₂)(k) + (F′VF)(k)    (13)


and
V_Y(k) ≥ aI  for some a > 0,    (14)
Y(k+1) = B₁B₁′ + AY(k)A′ − (R₂Y′T₂Y⁻¹R₂Y)(k) + (F_Y′V_YF_Y)(k),    (15)
Y(k₀) = 0,    (16)
where
T₂(k) = I + B₂′X(k+1)B₂,  T₁(k) = γ²I − B₁′X(k+1)B₁,
R₂(k) = B₂′X(k+1)A,  R₁(k) = B₁′X(k+1)A,  S(k) = B₂′X(k+1)B₁,
V(k) = (T₁ + S′T₂⁻¹S)(k),  F(k) = [V⁻¹(R₁ − S′T₂⁻¹R₂)](k),
T₂Y(k) = I + C₂Y(k)C₂′,  T₁Y(k) = γ²I − C₁Y(k)C₁′,
R₂Y(k) = C₂Y(k)A′,  R₁Y(k) = C₁Y(k)A′,  S_Y(k) = C₂Y(k)C₁′,
V_Y(k) = (T₁Y + S_Y′T₂Y⁻¹S_Y)(k),  F_Y(k) = [V_Y⁻¹(R₁Y − S_Y′T₂Y⁻¹R₂Y)](k)

and for simplicity, we have omitted k in all system matrices of (11). We also need the following Riccati equation depending on X:
V_Z(k) ≥ aI  for some a > 0,    (17)
Z(k+1) = B₁XB₁X′ + A_XZ(k)A_X′ − (R₂Z′T₂Z⁻¹R₂Z)(k) + (F_Z′V_ZF_Z)(k),    (18)
Z(k₀) = 0,    (19)

where
B₁X(k) = (B₁V^{−1/2})(k),  A_X(k) = (A + B₁F)(k),
C₁X(k) = [T₂^{−1/2}(R₂ + SF)](k),  D₁₁X(k) = (T₂^{−1/2}SV^{−1/2})(k),
D₁₂X(k) = T₂^{1/2}(k),  D₂₁X(k) = (D₂₁V^{−1/2})(k),
T₁Z(k) = γ²I − D₁₁XD₁₁X′ − C₁XZ(k)C₁X′,  T₂Z(k) = I + C₂Z(k)C₂′,
R₂Z(k) = C₂Z(k)A_X′,  R₁Z(k) = C₁XZ(k)A_X′ + D₁₁XB₁X′,  S_Z(k) = C₂Z(k)C₁X′,
V_Z(k) = (T₁Z + S_Z′T₂Z⁻¹S_Z)(k),  F_Z(k) = [V_Z⁻¹(R₁Z − S_Z′T₂Z⁻¹R₂Z)](k).
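To see how these definitions are used, the following sketch (illustrative; the system matrices, γ and the stand-in for X(k+1) are hypothetical, not from the paper) performs a single backward step of recursion (13) and checks the positivity condition (12).

```python
# One backward step of the discrete-time X-Riccati recursion (12)-(13) (illustrative sketch).
import numpy as np

def x_step(A, B1, B2, C1, Xnext, gamma):
    """Return (X(k), V(k)) from X(k+1) and the system matrices at time k, following (13)."""
    T2 = np.eye(B2.shape[1]) + B2.T @ Xnext @ B2
    T1 = gamma**2 * np.eye(B1.shape[1]) - B1.T @ Xnext @ B1
    S  = B2.T @ Xnext @ B1
    R2 = B2.T @ Xnext @ A
    R1 = B1.T @ Xnext @ A
    V  = T1 + S.T @ np.linalg.solve(T2, S)
    F  = np.linalg.solve(V, R1 - S.T @ np.linalg.solve(T2, R2))
    X  = C1.T @ C1 + A.T @ Xnext @ A - R2.T @ np.linalg.solve(T2, R2) + F.T @ V @ F
    return X, V

rng = np.random.default_rng(1)
n, m1, m2, p1, gamma = 3, 2, 2, 2, 3.0
A  = 0.4 * rng.standard_normal((n, n))            # hypothetical data at time k
B1 = rng.standard_normal((n, m1))
B2 = rng.standard_normal((n, m2))
C1 = rng.standard_normal((p1, n))
Xnext = 0.5 * np.eye(n)                           # stand-in for X(k+1)

Xk, Vk = x_step(A, B1, B2, C1, Xnext, gamma)
print("V(k) > 0:", np.all(np.linalg.eigvalsh((Vk + Vk.T) / 2) > 0))      # condition (12)
print("X(k) symmetric, nonnegative:", np.allclose(Xk, Xk.T),
      np.all(np.linalg.eigvalsh((Xk + Xk.T) / 2) >= -1e-10))
```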

We can rewrite (13), (15) and (18) as
X(k) = C₁′C₁ + A′X(k+1)A − ( [R₂′  R₁′] [T₂  S; S′  −T₁]⁻¹ [R₂; R₁] )(k),    (20)
Y(k+1) = B₁B₁′ + AY(k)A′ − ( [R₂Y′  R₁Y′] [T₂Y  S_Y; S_Y′  −T₁Y]⁻¹ [R₂Y; R₁Y] )(k),    (21)
Z(k+1) = B₁XB₁X′ + A_XZ(k)A_X′ − ( [R₂Z′  R₁Z′] [T₂Z  S_Z; S_Z′  −T₁Z]⁻¹ [R₂Z; R₁Z] )(k),    (22)
respectively.
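The block forms (20)–(22) and the forms (13), (15), (18) written with the Schur complements V, V_Y, V_Z describe the same updates, which is easy to confirm numerically. A quick cross-check of (13) against (20) (illustrative sketch; the data are hypothetical):

```python
# Consistency check (illustrative): the block form (20) and the Schur-complement form (13)
# give the same X(k) for the same X(k+1).
import numpy as np

rng = np.random.default_rng(2)
n, m1, m2, p1, gamma = 3, 2, 2, 2, 3.0
A  = 0.4 * rng.standard_normal((n, n))
B1 = rng.standard_normal((n, m1))
B2 = rng.standard_normal((n, m2))
C1 = rng.standard_normal((p1, n))
Xn = 0.5 * np.eye(n)                                   # stand-in for X(k+1)

T2 = np.eye(m2) + B2.T @ Xn @ B2
T1 = gamma**2 * np.eye(m1) - B1.T @ Xn @ B1
S  = B2.T @ Xn @ B1
R2 = B2.T @ Xn @ A
R1 = B1.T @ Xn @ A

# Schur-complement form (13)
V  = T1 + S.T @ np.linalg.solve(T2, S)
F  = np.linalg.solve(V, R1 - S.T @ np.linalg.solve(T2, R2))
X_schur = C1.T @ C1 + A.T @ Xn @ A - R2.T @ np.linalg.solve(T2, R2) + F.T @ V @ F

# Block form (20)
M = np.block([[T2, S], [S.T, -T1]])
R = np.vstack([R2, R1])
X_block = C1.T @ C1 + A.T @ Xn @ A - R.T @ np.linalg.solve(M, R)

print(np.allclose(X_schur, X_block))                   # True: (13) and (20) agree
```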


Definition 2. (a) A bounded symmetric solution X of (12) and (13) is called a stabilizing solution if
A_Xcl = A − [B₂  B₁] [T₂  S; S′  −T₁]⁻¹ [R₂; R₁]    (23)
is exponentially stable, i.e.,
‖S_X(k, s)‖ ≤ Mλ^{k−s},  k₀ ≤ s ≤ k < ∞
for some M > 0 and 0 < λ < 1, where S_X is the transition matrix of A_Xcl.
(b) A bounded symmetric solution Y of (14)–(16) is called a stabilizing solution if
A_Ycl = A − [R₂Y′  R₁Y′] [T₂Y  S_Y; S_Y′  −T₁Y]⁻¹ [C₂; C₁]    (24)
is exponentially stable.
(c) A bounded symmetric solution Z of (17)–(19) is called a stabilizing solution if
A_Zcl = A_X − [R₂Z′  R₁Z′] [T₂Z  S_Z; S_Z′  −T₁Z]⁻¹ [C₂; C₁X]
is exponentially stable.

The following result is known [2,8].

Theorem 3. There exists a γ-suboptimal controller if and only if the two conditions below hold.
(i) There exists a bounded nonnegative stabilizing solution to Riccati equations (12) and (13).
(ii) For the X given in (i), there exists a bounded nonnegative stabilizing solution to Riccati equations (17)–(18).

We shall establish the following equivalent necessary and sufficient conditions.

Theorem 4. There exists a γ-suboptimal controller if and only if the three conditions below hold.
(i) There exists a bounded nonnegative stabilizing solution to Riccati equations (12) and (13).
(ii) There exists a bounded nonnegative stabilizing solution to Riccati equations (14)–(16).
(iii) ρ(X(k)Y(k)) < d², ∀k ≥ k₀, for some 0 < d < γ.

We shall establish this theorem by a series of lemmas as in the continuous-time case. Before introducing discrete versions of Lemmas 2, 5 and 6, we first rewrite the Riccati equations in compact forms. Using the equalities
E(I + LE)⁻¹ = (I + EL)⁻¹E,  E ∈ R^{n×m}, L ∈ R^{m×n},
I − (I + G)⁻¹ = G(I + G)⁻¹ = (I + G)⁻¹G,  G ∈ R^{n×n},
we have
A_X(k) = (A + B₁F)(k) = [I + B₂B₂′X(k+1)][I + B₂B₂′X(k+1) − γ⁻²B₁B₁′X(k+1)]⁻¹A.
Let M(k) = I + B₂B₂′X(k+1) and N(k) = [M(k) − γ⁻²B₁B₁′X(k+1)]⁻¹. Then we can rewrite (13) (or (20)) as
X(k) = C₁′C₁ + A′X(k+1)N(k)A = C₁′C₁ + A′X(k+1)M⁻¹(k)A_X.    (25)


Similarly, we can rewrite (15) (or (21)) and (18) (or (22)) as follows:
Y(k+1) = B₁B₁′ + AY(k)N_Y(k)A′,    (26)
Z(k+1) = [I − γ⁻²Λ(k)X(k+1)B₂T₂⁻¹(k)B₂′X(k+1)]⁻¹Λ(k),    (27)
where N_Y(k) = [I + C₂′C₂Y(k) − γ⁻²C₁′C₁Y(k)]⁻¹, Λ(k) = (MN)(k)B₁B₁′ + Γ(k) and Γ(k) = A_XZ(k)(I + C₂′C₂Z(k))⁻¹A_X′. We also have
A_Ycl(k) = AN_Y′(k),
A_Zcl(k) = [I − γ⁻²Λ(k)X(k+1)B₂T₂⁻¹(k)B₂′X(k+1)]⁻¹A_X[I + Z(k)C₂′C₂]⁻¹.    (28)
By (27), we have
Λ(k) = Z(k+1)[I + γ⁻²X(k+1)B₂T₂⁻¹(k)B₂′X(k+1)Z(k+1)]⁻¹ = [I + γ⁻²Z(k+1)X(k+1)B₂T₂⁻¹(k)B₂′X(k+1)]⁻¹Z(k+1)
and hence we can rewrite (28) as
A_Zcl(k) = [I + γ⁻²Z(k+1)X(k+1)B₂T₂⁻¹(k)B₂′X(k+1)]⁻¹A_X[I + Z(k)C₂′C₂]⁻¹.    (29)
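The compact forms (25) and (26) can be checked against the block forms (20) and (21) in the same way. A minimal numerical sketch (hypothetical data; γ, the dimensions and the stand-ins for X(k+1) and Y(k) are arbitrary):

```python
# Check of the compact forms (25) and (26) against the block forms (20) and (21) (illustrative).
import numpy as np

rng = np.random.default_rng(3)
n, m1, m2, p1, p2, gamma = 3, 2, 2, 2, 2, 3.0
A  = 0.4 * rng.standard_normal((n, n))
B1 = rng.standard_normal((n, m1)); B2 = rng.standard_normal((n, m2))
C1 = rng.standard_normal((p1, n)); C2 = rng.standard_normal((p2, n))
Xn = 0.5 * np.eye(n)          # stand-in for X(k+1)
Yk = 0.5 * np.eye(n)          # stand-in for Y(k)
I  = np.eye(n)

# (13) via the block matrix, as in (20)
T2 = np.eye(m2) + B2.T @ Xn @ B2; T1 = gamma**2 * np.eye(m1) - B1.T @ Xn @ B1
S  = B2.T @ Xn @ B1
Mb = np.block([[T2, S], [S.T, -T1]])
Rb = np.vstack([B2.T @ Xn @ A, B1.T @ Xn @ A])
X13 = C1.T @ C1 + A.T @ Xn @ A - Rb.T @ np.linalg.solve(Mb, Rb)

# (25): X(k) = C1'C1 + A'X(k+1)N(k)A with M = I + B2B2'X(k+1), N = [M - B1B1'X(k+1)/gamma^2]^(-1)
M  = I + B2 @ B2.T @ Xn
N  = np.linalg.inv(M - B1 @ B1.T @ Xn / gamma**2)
X25 = C1.T @ C1 + A.T @ Xn @ N @ A
print(np.allclose(X13, X25))

# (15) via its block matrix, as in (21), against the compact form (26)
T2Y = np.eye(p2) + C2 @ Yk @ C2.T; T1Y = gamma**2 * np.eye(p1) - C1 @ Yk @ C1.T
SY  = C2 @ Yk @ C1.T
MbY = np.block([[T2Y, SY], [SY.T, -T1Y]])
RbY = np.vstack([C2 @ Yk @ A.T, C1 @ Yk @ A.T])
Y15 = B1 @ B1.T + A @ Yk @ A.T - RbY.T @ np.linalg.solve(MbY, RbY)

NY  = np.linalg.inv(I + C2.T @ C2 @ Yk - C1.T @ C1 @ Yk / gamma**2)
Y26 = B1 @ B1.T + A @ Yk @ NY @ A.T
print(np.allclose(Y15, Y26))
```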

Lemma 8. Suppose there exists a γ-suboptimal controller. Then conditions (i), (ii) of Theorem 3 and condition (ii) of Theorem 4 hold.

Proof. Necessity of Theorem 3 is proved in [2,8]. Condition (ii) of Theorem 4 is the dual of (i) and follows from the adjoint system of (11).

Lemma 9. Let X, Y and Z satisfy (12)–(13), (14)–(16) and (17)–(19), respectively. Then Z − Y − γ⁻²ZXY = 0, ∀k ∈ [k₀, ∞).

Proof. We shall prove the equality by induction. Set Q(k) = Z(k) − Y(k) − γ⁻²(ZXY)(k). Let k = k₀. Then Q(k) = 0 since Z(k₀) = 0 = Y(k₀). Now we assume Q(k) = 0. Then by Lemma 3,
Z(k) = Y(k)(I − γ⁻²XY)⁻¹(k),  Y(k) = Z(k)(I + γ⁻²XZ)⁻¹(k).
Since Q(k+1) = [Z(I − γ⁻²XY)](k+1) − Y(k+1), it is enough to show
Y(k+1) = [Z(I − γ⁻²XY)](k+1).    (30)
Now (writing X̂ = X(k+1))
Y(k+1) = B₁B₁′ + AYN_YA′
 = B₁B₁′ + AZ[I + C₂′C₂Z + γ⁻²(X − C₁′C₁)Z]⁻¹A′
 = B₁B₁′ + AZ[I + C₂′C₂Z + γ⁻²A′X̂M⁻¹A_XZ]⁻¹A′,


where Y(k) = Z(I + γ⁻²XZ)⁻¹(k) is substituted in the second equality and Riccati equation (25) is used in the third equality. Since
AZ[I + C₂′C₂Z + γ⁻²A′X̂M⁻¹A_XZ]⁻¹A′
 = (MN)⁻¹A_XZ(I + C₂′C₂Z)⁻¹[I + γ⁻²A′X̂M⁻¹A_XZ(I + C₂′C₂Z)⁻¹]⁻¹A′
 = (MN)⁻¹A_XZ(I + C₂′C₂Z)⁻¹A′[I + γ⁻²X̂M⁻¹A_XZ(I + C₂′C₂Z)⁻¹A′]⁻¹
 = (MN)⁻¹A_XZ(I + C₂′C₂Z)⁻¹A_X′[(MN)′ + γ⁻²X̂M⁻¹A_XZ(I + C₂′C₂Z)⁻¹A_X′]⁻¹,
we have
Y(k+1) = (MN)⁻¹[MNB₁B₁′ + Γ((MN)′ + γ⁻²X̂M⁻¹Γ)⁻¹],    (31)
where Γ(k) = A_XZ(k)(I + C₂′C₂Z(k))⁻¹A_X′ is as defined after (27) and (MN)′ = N′M′ = I + γ⁻²X̂NB₁B₁′. On the other hand, substituting (27) and (31) into Z(k+1)[I − γ⁻²X(k+1)Y(k+1)] and using the identity N′M′ = I + γ⁻²X̂NB₁B₁′, a direct calculation shows that
Z(k+1)[I − γ⁻²X(k+1)Y(k+1)] = Y(k+1),
which implies (30).

Lemma 10. Suppose there exists a γ-suboptimal controller. Then condition (iii) in Theorem 4 holds.

Proof. Similar to the proof of Lemma 4.

Lemma 11. (a) Suppose there exist solutions of (12)–(15), (17) and (18) and that x satisfies
x(k) = A_Ycl′(k)x(k+1).    (32)


Then x̃(k) = (I − γ⁻²XY)(k)x(k) satisfies
x̃(k) = A_Zcl′(k)x̃(k+1).    (33)
(b) Suppose further that X, Y and Z are bounded on [k₀, ∞) and that I − γ⁻²XY has a bounded inverse on [k₀, ∞). Then Z is a stabilizing solution of (17)–(19) if and only if Y is a stabilizing solution of (14)–(16).

Proof. We shall show (a) only. Using (29) and Z(k) = Y(I − γ⁻²XY)⁻¹(k) we have
A_Zcl′(k) = (I − γ⁻²XY)[I − γ⁻²XY + C₂′C₂Y]⁻¹A_X′(I − γ⁻²X̂Ŷ)[I − γ⁻²X̂Ŷ + γ⁻²X̂B₂T₂⁻¹B₂′X̂Ŷ]⁻¹,
where Ŷ = Y(k+1) and X̂ = X(k+1). Note that, by (25),
I − γ⁻²XY + C₂′C₂Y = N_Y⁻¹ − γ⁻²A′X̂NAY,
and by direct calculation, using X̂N = N′X̂ and (26),
(N′)⁻¹[I − γ⁻²X̂NAYN_YA′] = (N′)⁻¹ − γ⁻²X̂AYN_YA′ = M′ − γ⁻²X̂B₁B₁′ − γ⁻²X̂AYN_YA′ = M′ − γ⁻²X̂(B₁B₁′ + AYN_YA′) = M′ − γ⁻²X̂Ŷ.
Combining these relations with A_X = MNA, a direct calculation gives
A_Zcl′(k) = [I − γ⁻²XY](k)N_YA′[I − γ⁻²XY]⁻¹(k+1) = [I − γ⁻²XY](k)A_Ycl′(k)[I − γ⁻²XY]⁻¹(k+1),
and we have shown the assertion.

Lemma 12. Suppose X and Y satisfy the conditions of Theorem 4. Then Z = Y(I − γ⁻²XY)⁻¹ is a bounded nonnegative stabilizing solution of (17)–(19).

Proof. Z is bounded by assumption (iii) and Z = Y^{1/2}(I − γ⁻²Y^{1/2}XY^{1/2})⁻¹Y^{1/2} ≥ 0. Moreover, Z − Y − γ⁻²ZXY = 0. Then, as in the proof of Lemma 9, we can show directly that Z is a solution of (17)–(19). By Lemma 11, Z is a stabilizing solution of (17) and (18) since Y is a stabilizing solution of (14) and (15).

Now the proof of Theorem 4 follows from Lemmas 8, 10 and 12.
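As in the continuous-time case, the passage from Y to Z in Lemma 12 is a pointwise matrix identity, so condition (iii) of Theorem 4 and the relation of Lemma 9 can be checked numerically at a fixed k. A minimal sketch (illustrative; X(k) and Y(k) are random nonnegative matrices scaled so that the coupling condition holds, and the values of d and γ are hypothetical):

```python
# Discrete-time coupling check (illustrative): given nonnegative X(k), Y(k) with
# rho(X(k)Y(k)) < d^2 < gamma^2, form Z(k) = Y(k)(I - X(k)Y(k)/gamma^2)^(-1) as in Lemma 12
# and verify the identity of Lemma 9 and condition (iii) of Theorem 4.
import numpy as np

rng = np.random.default_rng(4)
n, gamma, d = 4, 2.0, 1.8
def random_psd(n, scale):
    Mh = rng.standard_normal((n, n))
    P = Mh @ Mh.T
    return scale * P / np.linalg.norm(P, 2)        # scaled so the coupling stays small

X = random_psd(n, 1.0)
Y = random_psd(n, 1.0)                             # then rho(XY) <= ||X|| ||Y|| = 1 < d^2

rho_xy = max(abs(np.linalg.eigvals(X @ Y)))
print("condition (iii):", rho_xy < d**2 < gamma**2)

Z = Y @ np.linalg.inv(np.eye(n) - X @ Y / gamma**2)
print("Lemma 9 identity:", np.allclose(Z - Y - Z @ X @ Y / gamma**2, 0))
print("Z nonnegative:", np.all(np.linalg.eigvalsh((Z + Z.T) / 2) >= -1e-10))
```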


As in the continuous-time case, we have the following properties of stabilizing solutions.

Lemma 13. (a) A stabilizing solution of (20) (or (13)), if it exists, is unique. (b) Let Y and Ȳ be two stabilizing solutions of (21) (or (15)). Then Y(k) − Ȳ(k) → 0 as k → ∞. (c) Let Z and Z̄ be two stabilizing solutions of (22) (or (18)). Then Z(k) − Z̄(k) → 0 as k → ∞.

Proof. (a) Let X and X̄ be two stabilizing solutions of (20). Then, using (20), we obtain
X(k) − X̄(k) = A_Xcl′(X − X̄)(k+1)Ā_Xcl,
where Ā_Xcl is defined by (23) with X replaced by X̄. Hence
X(k) − X̄(k) = S_X′(N, k)(X − X̄)(N)S_X̄(N, k),
where S_X and S_X̄ are the state transition matrices of A_Xcl and Ā_Xcl. Hence
‖X(k) − X̄(k)‖ ≤ M₁λ₁^{N−k} c M₂λ₂^{N−k}
for some constants M_i > 0, 0 < λ_i < 1, i = 1, 2, and c > 0. Letting N → ∞, we obtain X(k) − X̄(k) = 0, ∀k ≥ k₀.
(b) Since
Y(k+1) − Ȳ(k+1) = A_Ycl(Y − Ȳ)(k)Ā_Ycl′,
where Ā_Ycl is defined by (24) with Y replaced by Ȳ, we have
Y(k) − Ȳ(k) = S_Y(k, k₀)(Y − Ȳ)(k₀)S_Ȳ′(k, k₀),
where S_Y and S_Ȳ are the state transition matrices of A_Ycl and Ā_Ycl. Hence Y(k) − Ȳ(k) → 0 as k → ∞, since A_Ycl and Ā_Ycl are exponentially stable. The proof of (c) is similar to that of (b).

References

[1] J.C. Doyle, K. Glover, P.P. Khargonekar, B.A. Francis, State space solutions to standard H₂ and H∞ control problems, IEEE Trans. Automat. Control 34 (1989) 831–847.
[2] V. Dragan, A. Halanay, V. Ionescu, Infinite horizon disturbance attenuation for discrete-time systems: a Popov–Yakubovich approach, Integral Equations Oper. Theory 19 (1994) 153–215.
[3] M. Green, D.J.N. Limebeer, Linear Robust Control, Prentice-Hall, Englewood Cliffs, NJ, 1994.
[4] C.J. Harris, J.F. Miles, Stability of Linear Systems, Academic Press, New York, 1980.
[5] A. Ichikawa, H∞-control and filtering with initial uncertainty for time-varying systems, Int. J. Systems Sci. 26 (1996) 1633–1657.
[6] A. Ichikawa, H. Katayama, H₂ and H∞ control for jump systems with application to sampled-data systems, Int. J. Systems Sci. 29 (1998) 829–849.
[7] P.A. Iglesias, K. Glover, State-space approach to discrete-time H∞ control, Int. J. Control 54 (1991) 1031–1073.
[8] H. Katayama, A. Ichikawa, H∞-control with output feedback for time-varying discrete systems, Int. J. Control 63 (1996) 1167–1178.
[9] W.S. Levine, The Control Handbook, IEEE Press, New York, 1996, pp. 451–468.
[10] D.J.N. Limebeer, B.D.O. Anderson, P.P. Khargonekar, M. Green, A game theoretic approach to H∞ control for time-varying systems, SIAM J. Control 30 (1992) 262–283.
[11] J. O'Reilly, Observers for Linear Systems, Academic Press, New York, 1983.
[12] R. Ravi, K.M. Nagpal, P.P. Khargonekar, H∞ control of linear time-varying systems: a state-space approach, SIAM J. Control 29 (1991) 1394–1413.
[13] M.F. Sagfors, H.T. Toivonen, B. Lennartson, H∞ control of multirate sampled-data systems: a state space approach, Automatica 34 (1998) 415–428.
[14] A.A. Stoorvogel, The H∞ Control Problem: A State Space Approach, Prentice-Hall, New York, 1992.
[15] W. Sun, K.M. Nagpal, P.P. Khargonekar, H∞ control and filtering for sampled-data systems, IEEE Trans. Automat. Control 38 (1993) 1162–1175.
[16] D.J. Walker, Relationship between three discrete H∞ algebraic Riccati equation solutions, Int. J. Control 52 (1990) 801–809.