Systems & Control Letters 23 (1994) 389-394, North-Holland
A Riccati equation approach to a constant I/O scaled H∞ optimization problem

Y.A. Jiang and D.J. Clements
School of Electrical Engineering, University of New South Wales, PO Box 1, Kensington 2033, Australia

Received 31 May 1993; revised 10 November 1993

Abstract: This paper studies a constant I/O scaled H∞ optimization problem with a Riccati equation approach. An iterative algorithm is presented for searching for the optimal I/O scaling matrix. We show that a descent direction of the scaling matrix can be determined by algebraic Riccati equation solutions.

Keywords: State-feedback H∞; diagonal scaling; optimal control; robust control; Riccati equation
1. Introduction

One way to deal with both robustness and performance objectives for systems with nonlinear time-varying structured uncertainty is to set up an H∞ optimization problem with optimal diagonal input/output similarity scaling. The initial work for the related problem of time-invariant structured uncertainty is the D-K iteration proposed by Doyle [2], but one objection to this is that there is no guarantee of a global minimum solution. Recently, some globally convergent approaches have been developed for the constant I/O scaled H∞ optimization in the case of state feedback control [9, 5]. In [9], Packard et al. transform the problem into a linear matrix inequality problem with size equal to the sum of the order of the plant and the dimension of the scaling matrix. In [5], El Ghaoui et al. reformulate the problem as a generalized
eigenvalue minimization problem. Solutions to both of these problems involve a nondifferentiable quasi-convex optimization over the independent variables in the scaling matrix D and the solution P to an algebraic Riccati equation (ARE).

In this paper, an algorithm is proposed for the state-feedback constant I/O scaled H∞ optimization problem. The algorithm involves a D-P iteration scheme which alternately searches for a solution P to the ARE corresponding to the optimal H∞-norm for a fixed D, and for a scaling matrix D which reduces the H∞-norm. The optimization problem is quasi-convex, and we show that at any point other than the global minimum a descent direction can be computed using the proposed D-P iteration. Thus, the quasi-convex optimization problem [9, 4] is separated into two parts: a nondifferentiable quasi-convex optimization problem for the scaling matrix D, and a convex optimization problem for the optimal H∞-norm, for which a Newton-like algorithm is available in [10]. Moreover, it is shown that a descent direction for the scaling matrix can be derived in terms of the system matrices and the optimal ARE solution P. Specifically, we show that this descent direction for D is determined by P when the optimum is attained, and by a related Lyapunov equation when the optimum is not attained.

This paper is organized as follows. The problem is formulated in Section 2. In Section 3, the state-feedback constant I/O scaled H∞ optimization problem is represented as a globally convergent joint minimization problem over D and P. In Section 4, a descent direction for the scaling matrix is derived and some search approaches are discussed. A numerical example is provided in Section 5, which is followed by concluding remarks in Section 6.
2. Problem statement
Consider the linear system

    dx/dt = A x + F w + B u,
    z = H x + E u,                                                          (1)

with state x(t) ∈ R^n, disturbance w(t) ∈ R^{n_w}, output z(t) ∈ R^{n_z}, and control u(t) ∈ R^{n_u}. We assume that (1) (A, B) and (A, F) are controllable, (2) (H, A) is observable, and (3) E has full column rank. Suppose that the disturbance w is generated as

    w = Δ z,

for a causal, possibly nonlinear operator Δ mapping L_2^{n_z} into itself, and further suppose that Δ has the block-diagonal structure

    Δ = diag(δ_1 I_{k_1}, ..., δ_s I_{k_s}, Δ_1, ..., Δ_f),   s + f ≥ 1,     (2)

where the Δ_j have size m_j × m_j. We also consider associated input/output scaling matrices D with the structure

    D = diag(D_1, ..., D_s, d_1 I_{m_1}, ..., d_f I_{m_f}),                  (3)

where D_i ∈ C^{k_i × k_i}, D_i = D_i^* > 0, and d_i > 0. Let Δ denote the set of all structured uncertainties Δ, and let D denote the set of all structured I/O scaling matrices D.

For a state-feedback control law u = -K x, the closed-loop transfer function from the disturbance w to the output z is

    G_K(s) := (H + E K)(s I - A + B K)^{-1} F.

Then the minimal I/O scaled H∞-norm achieved by state-feedback control is

    γ* := inf_{K ∈ S, D ∈ D} ||D^{1/2} G_K D^{-1/2}||_∞,                     (4)

where S denotes the set of all stabilizing gains, and ||G||_∞ = sup_{ω ≥ 0} σ̄(G(jω)) is the H∞-norm. The quantity 1/γ* provides a stability margin for the optimal G_K(s) in that stability is assured for any uncertainty Δ with L_2-induced norm less than 1/γ*.

3. A Riccati equation approach

The I/O scaled H∞-norm optimization problem (4) can be represented as a quasi-convex minimization problem. Consider the Riccati operator

    R(P, D, γ) = P A + A^T P + P F D^{-1} F^T P / γ^2 + H^T D H
                 - (P B + H^T D E) N_D^{-1} (P B + H^T D E)^T,               (5)

where N_D = E^T D E, or equivalently

    R(P, D, γ) = P(A - B N_D^{-1} E^T D H) + (A - B N_D^{-1} E^T D H)^T P
                 + P(F D^{-1} F^T / γ^2 - B N_D^{-1} B^T) P + H^T S_D H,     (6)

where S_D = I - E N_D^{-1} E^T. Let P := {P : P > 0}. For D ∈ D fixed, there exists [3, 8] an internally stabilizing control gain K such that ||D^{1/2} G_K D^{-1/2}||_∞ < γ if and only if there exists P ∈ P such that R(P, D, γ) ≤ 0. It follows that

    γ* = inf{γ : R(P, D, γ) ≤ 0, P ∈ P, D ∈ D, γ > 0}                        (7)

is an alternative characterization of γ*. For each γ > 0, let

    J_γ := {(P, D) : P ∈ P, D ∈ D, R(P, D, γ) ≤ 0}.

Lemma 1. J_γ is convex for each γ > 0, and J_{γ_1} ⊂ J_{γ_2} for 0 < γ_1 < γ_2.

Proof. The convexity of J_γ is shown in [9], where an indirect argument, proceeding via an associated discrete-time problem, is presented. The convexity is also proved for the case E = 0 in [5]. The monotonicity follows directly from (5). □

Remark 1. The convexity of J_γ in Lemma 1 for E ≠ 0 can also be proved directly, somewhat along the lines of the argument in [5].

For each P ∈ P and D ∈ D, we define

    γ*(P, D) = inf{γ : R(P, D, γ) ≤ 0, γ > 0}                                (8)

and for each D ∈ D, we define

    γ*(D) := inf{γ : R(P, D, γ) ≤ 0, P > 0, γ > 0}.                          (9)

It follows from Lemma 1 that both γ*(P, D) and γ*(D) are quasi-convex functions. Further, it is clear that

    γ* = inf_{D ∈ D} γ*(D),

so that the computation of γ* can be restated in terms of γ*(D).
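For concreteness, the following minimal sketch (our own illustration, not the authors' MATLAB implementation) evaluates the Riccati operator R(P, D, γ) of (5) for given matrices and tests the semidefiniteness condition appearing in (7)-(9); the helper names and the tolerance are assumptions.

```python
import numpy as np

def riccati_operator(P, D, gamma, A, B, F, H, E):
    """R(P, D, gamma) of (5):
       PA + A'P + P F D^{-1} F' P / gamma^2 + H' D H
       - (PB + H'DE) N_D^{-1} (PB + H'DE)',  with N_D = E' D E."""
    ND = E.T @ D @ E
    L = P @ B + H.T @ D @ E
    return (P @ A + A.T @ P
            + P @ F @ np.linalg.solve(D, F.T) @ P / gamma**2
            + H.T @ D @ H
            - L @ np.linalg.solve(ND, L.T))

def riccati_nonpositive(P, D, gamma, A, B, F, H, E, tol=1e-9):
    """Numerical test of R(P, D, gamma) <= 0, the feasibility condition in (7)-(9)."""
    R = riccati_operator(P, D, gamma, A, B, F, H, E)
    return np.max(np.linalg.eigvalsh(0.5 * (R + R.T))) <= tol
```

With this test, γ*(P, D) in (8) can, for instance, be approximated by bisection on γ for fixed P and D.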
A quadratically convergent algorithm is available [10] for the computation of γ*(D). However, since the optimal value γ*(D) may only be attained with an 'infinite' P, it is convenient to consider the inverse of the ARE solution P. Let Q = P^{-1} and consider the Riccati operator

    R̂(Q, D, γ) := A Q + Q A^T + Q H^T D H Q + F D^{-1} F^T / γ^2
                  - (B + Q H^T D E) N_D^{-1} (B + Q H^T D E)^T.              (10)

The characterization of the optimum γ*(D) is given in [10].

Lemma 2. Consider system (1) and the Riccati operator (10), and let

    A_Q = (A - B N_D^{-1} E^T D H)^T + H^T S_D H Q.                          (11)

The following statements hold:
(a) If the optimal value γ*(D) is attained with P > 0, there exists Q > 0 such that R̂(Q, D, γ*(D)) = 0, and A_Q is marginally antistable.
(b) If the optimal value γ*(D) is not attained with P > 0, then there exists a singular Q ≥ 0 such that R̂(Q, D, γ*(D)) = 0 and A_Q is antistable.

Now the optimization problems (7) and (9) may be restated as

    γ* = inf{γ : R̂(Q, D, γ) ≤ 0, D ∈ D, Q ≥ 0, γ > 0}                       (12)

and

    γ*(D) = inf{γ : R̂(Q, D, γ) ≤ 0, Q ≥ 0, γ > 0},                          (13)

respectively.

For a given D ∈ D and γ*(D), we want to find a new D with a smaller γ*(D). An alternative characterization of D ∈ D is D = e^D̂, where

    D̂ := diag(D̂_1, ..., D̂_s, d̂_1 I_{m_1}, ..., d̂_f I_{m_f})                 (14)

and D̂_i ∈ C^{k_i × k_i}, D̂_i = D̂_i^*, and d̂_i ∈ R. So the search for a suitable D ∈ D can be replaced by a search for a suitable D̂ ∈ D̂, the set of all D̂.

Let Z_D = Q H^T D H Q + F D^{-1} F^T / γ^2 - (B + Q H^T D E) N_D^{-1} (B + Q H^T D E)^T, so that R̂(Q, D, γ) = A Q + Q A^T + Z_D. For a fixed D, we can always normalize the problem to D = I by absorbing D into F, H, and E. Also, denote γ*(D) by γ̄. Thus, consider D(t) := e^{D̂ t}, so that D(0) = I. With a small variation of D around I, the variation of R̂(Q, I + ΔD, γ) is determined by Z_{ΔD}. Let Z_δ(t) := Z_{D(t)}. We have Z_δ(0) = Z_I.

Lemma 3. Z_δ(t) is an analytic function of t and its derivative at t = 0 is

    Ż_δ(0) = W_D̂,                                                            (15)

where

    W_D̂ = U^T D̂ U - V^T D̂ V,    V = F^T / γ̄,    U = S_I H Q + E N_I^{-1} B^T.   (16)

4. Solutions to optimal I/O scaling

According to Lemma 2, the optimal value γ̄ may or may not be attained by a finite P > 0. We look at both situations in this section.

First, suppose that γ̄ is attained. Then A_Q will be marginally antistable with imaginary-axis eigenvalues jω_i, i = 1, ..., m. For each such eigenvalue, let X_i be the span of the corresponding eigenvectors.

Theorem 1. Suppose γ̄ is attained and let Q > 0 be the maximum solution of R̂(Q, I, γ̄) = 0. Then there exists D ∈ D such that ||D^{1/2} G_K D^{-1/2}||_∞ < γ̄ if and only if there exists D̂ ∈ D̂ such that X_i^* W_D̂ X_i > 0 for all i.

Proof. A detailed proof can be found in [7]. □

Finding D̂ such that X_i^* W_D̂ X_i > 0 for all i is a convex feasibility problem, and several existing algorithms [1] are available.

Remark 2. Let x be an eigenvector of A_Q associated with some eigenvalue jω_i. Note that Ux and Vx are a pair of singular vectors of G_K(jω_i) associated with the singular value γ̄ (see [7]). Theorem 1 shows that a descent direction D̂ only depends on the peak singular values.

Now suppose that the optimal value γ̄ is not attained. Let Q satisfy R̂(Q, I, γ̄) = 0 and let A_Q be defined by (11). Then, as shown in Lemma 2, Q is singular and A_Q is antistable. Define the Lyapunov equation

    Q̂ A_Q + A_Q^T Q̂ + W_D̂ = 0.                                              (17)

Then we have the following result.
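The condition of Theorem 1 can be checked numerically once Q and A_Q are available. The sketch below (our own illustration, not the authors' code) forms U and V as in (16), assembles W_D̂ as in (15), extracts the imaginary-axis eigenvectors of A_Q, and tests whether X_i^* W_D̂ X_i > 0 for a candidate D̂; the tolerances and helper names are assumptions.

```python
import numpy as np

def UV_matrices(Q, gamma_bar, B, F, H, E):
    """U and V of (16), for the normalized problem D = I."""
    NI = E.T @ E                                            # N_I = E'E
    SI = np.eye(E.shape[0]) - E @ np.linalg.solve(NI, E.T)  # S_I = I - E N_I^{-1} E'
    U = SI @ H @ Q + E @ np.linalg.solve(NI, B.T)
    V = F.T / gamma_bar
    return U, V

def imaginary_axis_eigvecs(AQ, tol=1e-7):
    """Columns spanning the eigenspaces of A_Q at eigenvalues j*omega_i (Theorem 1)."""
    w, X = np.linalg.eig(AQ)
    return X[:, np.abs(w.real) < tol]

def theorem1_descent_test(Dhat, U, V, Xi, tol=1e-9):
    """Check X_i^* W_Dhat X_i > 0, with W_Dhat = U' Dhat U - V' Dhat V from (15)-(16)."""
    W = U.T @ Dhat @ U - V.T @ Dhat @ V
    S = Xi.conj().T @ W @ Xi
    return np.min(np.linalg.eigvalsh(0.5 * (S + S.conj().T))) > tol
```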
Theorem 2. Suppose that γ̄ is not attained. Then γ̄ > γ* if and only if there exists D̂ ∈ D̂ such that X^T Q̂ X > 0, where X spans the null space of Q and Q̂ is the solution of the Lyapunov equation (17).

Proof. Let D(t) = e^{D̂ t} and let Q(t) be the solution to R̂(Q(t), D(t), γ̄) = 0. Since A_Q is antistable, equation (17) is well defined and Q(t) is an analytic function of t. Noting Lemma 3, it is easily derived from the equation R̂(Q, D, γ̄) = 0 that the derivative of Q(t) at t = 0 can be computed from the Lyapunov equation (17). Let the Taylor series for Q(t) in terms of t be

    Q(t) = Q + t Q̂ + o(t),                                                   (18)

where lim_{t→0} o(t)/t = 0. Let X span the q-dimensional null space of Q. Since Q(t) is analytic in t, and symmetric for each t, there exist analytic functions λ_i(t) describing the eigenvalues of Q(t). Let the λ_i(t) be ordered so that λ_i(0) = 0 for i = 1, ..., q. To first order, these q smallest eigenvalues are

    λ_i(t) = t λ_i(X^T Q̂ X).

If X^T Q̂ X > 0, then all λ_i(t) are positive for t near 0, and D = e^{D̂ t} is such that Q(t) > 0 and R̂(Q(t), D, γ̄) = 0. Clearly, A_{Q(t)} is still antistable for small enough t. Thus, from Lemma 2, γ* < γ̄.

Conversely, suppose that γ* < γ̄, so that there exists D_1 with γ*(D_1) < γ̄. Write D_t = I + t D̂, where D̂ = D_1 - I. Then for each t ∈ (0, 1], there exists an antistabilizing solution Q_t ≥ 0 such that R̂(Q_t, D_t, γ̄) = 0. Since A_Q is antistable, Q_t is an analytic function of t near t = 0. Moreover, since for fixed γ̄ the level set J_γ̄ is convex, it follows that Q_t is concave in t. Thus, Q_t ≥ Q_0 + t(Q_1 - Q_0) on [0, 1]. Then

    X^T ((Q_t - Q_0)/t) X ≥ X^T (Q_1 - Q_0) X > 0,

since X^T Q_0 X = 0 and X^T Q_1 X > 0. Finally, Q̂ is the unique solution of (17). □

In general, using (17) directly to find a suitable D̂ such that X^T Q̂ X > 0 is not an easy task. However, a simple approach can be derived when X has dimension one. Partition U and V compatibly with the structure of D̂ as

    U = [U_1^T ... U_s^T Û_1^T ... Û_f^T]^T,
    V = [V_1^T ... V_s^T V̂_1^T ... V̂_f^T]^T.                                (19)

Lemma 4. Let Q̂ be the solution of the Lyapunov equation (17) with D̂ = diag(D̂_1, ..., D̂_s, d̂_1 I_{m_1}, ..., d̂_f I_{m_f}), and let M satisfy the Lyapunov equation

    A_Q M + M A_Q^T + X X^T = 0.                                             (20)

Then

    X^T Q̂ X = Tr(W_D̂ M) = Σ_{i=1}^{s} Tr(D̂_i M̃_i) + Σ_{i=1}^{f} d̂_i Tr(M̂_i),   (21)

where M̃_i = U_i M U_i^T - V_i M V_i^T and M̂_i = Û_i M Û_i^T - V̂_i M V̂_i^T. Moreover, X^T Q̂ X ≥ 0 for D̂_i = M̃_i and d̂_i = Tr(M̂_i).

Proof. The first equality follows from the observations that Tr(W_D̂ M) = -Tr(Q̂ A_Q M + A_Q^T Q̂ M) and Tr(X^T Q̂ X) = Tr(Q̂ X X^T) = -Tr(Q̂ A_Q M + Q̂ M A_Q^T). The second equality follows simply from noting that W_D̂ = Σ_{i=1}^{s} (U_i^T D̂_i U_i - V_i^T D̂_i V_i) + Σ_{i=1}^{f} d̂_i (Û_i^T Û_i - V̂_i^T V̂_i). Finally, the inequality is obvious. □

From this lemma, in conjunction with Theorem 2, we obtain the following corollary.

Corollary 1. Suppose that γ̄ is not attained and the null space of Q, the antistabilizing solution of R̂(Q, I, γ̄) = 0, has dimension one. Let X span this null space. Then γ̄ > γ* if at least one M̃_i or M̂_i is nonzero.

In the case of block structures Δ with s = 0, a necessary and sufficient condition for searching for a descent scaling matrix D can be derived as follows. Consider the Lyapunov equations

    0 = Q̂_i A_Q + A_Q^T Q̂_i + Û_i^T Û_i - V̂_i^T V̂_i,                         (22)

with Û_i and V̂_i defined by (19).
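When the null space of Q is one-dimensional, the computation in Lemma 4 reduces to one Lyapunov solve and a few traces. A possible sketch (our own, with the block row counts k_sizes and m_sizes supplied by the caller as assumptions) is:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lemma4_descent(AQ, X, U, V, k_sizes, m_sizes):
    """Return the blocks D_hat_i = M_tilde_i and scalars d_hat_i = tr(M_hat_i) of Lemma 4.
       X is an n x 1 vector spanning the null space of Q; A_Q is assumed antistable."""
    # (20): A_Q M + M A_Q' + X X' = 0
    M = solve_continuous_lyapunov(AQ, -X @ X.T)
    Dhat_blocks, dhat, row = [], [], 0
    for k in k_sizes:                                      # full blocks of D_hat
        Ui, Vi = U[row:row + k, :], V[row:row + k, :]
        Dhat_blocks.append(Ui @ M @ Ui.T - Vi @ M @ Vi.T)  # M_tilde_i
        row += k
    for m in m_sizes:                                      # repeated-scalar blocks of D_hat
        Ui, Vi = U[row:row + m, :], V[row:row + m, :]
        dhat.append(np.trace(Ui @ M @ Ui.T - Vi @ M @ Vi.T))  # tr(M_hat_i)
        row += m
    return Dhat_blocks, dhat
```

The trace formula (21) then gives X^T Q̂ X directly, without forming Q̂ itself.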
Corollary 2. Consider the case of s = 0 in (2). Suppose that there exists a singular Q ≥ 0 such that R̂(Q, I, γ̄) = 0 and the matrix A_Q is antistable. Then there exists D ∈ D such that γ* < γ̄ if and only if there exists D̂ ∈ D̂ such that

    X^T ( Σ_i d̂_i Q̂_i ) X > 0,

where X spans the null space of Q and the Q̂_i are the solutions of the Lyapunov equations (22).

Remark 3. If D̂ is a descent direction, let t_m > 0 be the largest t such that R(P_t, e^{D̂ t}, γ̄) ≤ 0 for all 0 < t < t_m. A reasonable choice for the step length might be t_m/2, for which an estimate is (1/2) min{α_w, α_Q}, where

    α_w = max{α > 0 : A_Q + α H^T S_I H Q̂ is antistable},                    (23)
    α_Q = max{α > 0 : Q + α Q̂ > 0}.
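A crude way to estimate α_Q and α_w in (23) is a one-dimensional grid search over α, testing the definiteness or antistability condition at each grid point. The sketch below is our own heuristic, not part of the paper, and the grid and tolerances are assumptions.

```python
import numpy as np

def alpha_Q_estimate(Q, Qhat, alphas=np.logspace(-6, 3, 200), tol=1e-10):
    """Largest grid point alpha with Q + alpha*Qhat > 0 (rough estimate of alpha_Q in (23))."""
    ok = [a for a in alphas if np.min(np.linalg.eigvalsh(Q + a * Qhat)) > tol]
    return max(ok) if ok else 0.0

def alpha_w_estimate(AQ, H, SI, Qhat, alphas=np.logspace(-6, 3, 200)):
    """Largest grid point alpha keeping A_Q + alpha*H'S_I H Qhat antistable."""
    ok = [a for a in alphas
          if np.min(np.linalg.eigvals(AQ + a * H.T @ SI @ H @ Qhat).real) > 0]
    return max(ok) if ok else 0.0
```

Half the smaller of the two estimates then serves as the step length t of Remark 3.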
An algorithm for the suboptimal or optimal I/O scaled H∞ state-feedback control problem can now be described as follows.

D-P iteration 3.1.
(1) Let D := I.
(2) Let F := F D^{-1/2}, H := D^{1/2} H, and E := D^{1/2} E.
(3) Calculate γ̄. If γ̄ is attained, go to (5). Otherwise, construct X, which spans the null space of Q, and compute Q̂.
(4) If there exists D̂ ∈ D̂ such that X^T Q̂ X > 0, let D = e^{D̂ t} with t > 0 given by (23) and go to (2). Otherwise, go to (6).
(5) Construct X, which spans the imaginary-axis eigenspace of A_Q. If there exists D̂ ∈ D̂ such that X^* W_D̂ X > 0, let D = e^{D̂ t} for some t > 0 and go to (2). Otherwise, go to (6).
(6) γ̄ = γ*; stop.
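The overall flow of the D-P iteration 3.1 can be sketched as follows. The two problem-specific steps, computing (γ̄, Q, attained) for the current scaled data (for example with the quadratically convergent algorithm of [10]) and producing a descent pair (D̂, t) from Theorem 1 or Theorem 2, are passed in as callables; every name here is our own scaffolding rather than the authors' implementation.

```python
import numpy as np
from scipy.linalg import expm

def dp_iteration(A, B, F, H, E, solve_hinf, find_descent, max_iter=20, rel_tol=1e-4):
    Fs, Hs, Es = F.copy(), H.copy(), E.copy()              # step (1): start from D = I
    gamma, gamma_prev = np.inf, np.inf
    for _ in range(max_iter):
        gamma, Q, attained = solve_hinf(A, B, Fs, Hs, Es)  # step (3): gamma_bar, Q for fixed D
        if gamma_prev - gamma < rel_tol * gamma:
            break                                          # step (6): gamma_bar ~ gamma*
        gamma_prev = gamma
        descent = find_descent(Q, attained, A, B, Fs, Hs, Es, gamma)  # steps (3)-(5)
        if descent is None:
            break                                          # no descent direction: optimum reached
        Dhat, t = descent
        Dh = expm(0.5 * t * Dhat)                          # (e^{Dhat t})^{1/2}
        Fs = Fs @ np.linalg.inv(Dh)                        # step (2): F := F D^{-1/2}
        Hs, Es = Dh @ Hs, Dh @ Es                          #           H := D^{1/2} H, E := D^{1/2} E
    return gamma
```

If the accumulated scaling matrix itself is needed rather than the scaled data, it can be tracked alongside; for purely diagonal scalings the successive factors commute and simply multiply.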
Remark 4. Note that the optimal state-feedback gain is given by K = N_D^{-1}(B^T Q^{-1} + E^T D H). If γ̄ is not attained, the optimal feedback gain will tend to infinity, and thus there will be numerical difficulties in constructing the optimal controller in the standard D-K iteration. This problem is avoided with the proposed D-P iteration.
5. Example

In this section, an example considered in [9] is used to illustrate the proposed D-P iteration approach. The system has 4 states, 2 inputs, 7 disturbances, and 7 outputs. The values for A, B, E, F, and H can be found in [9], where the I/O scaling set

    D := {diag(D_1, D_2, d_3, I_2) : D_i ∈ R^{2×2}, D_i = D_i^T > 0, d_3 > 0}   (24)

is considered. Without I/O scaling, inf_K ||G_K||_∞ is 66.27. By solving the convex feasibility problem for several values of γ, [9] shows that the problem is unsolvable for γ = 21.8 and solvable for γ = 22.04. For each chosen value of γ, a few thousand iterations are required to detect the solvability of the feasibility problem, and in each iteration the computation involves solving for the largest eigenvalue of a 9 × 9 symmetric real matrix for this particular example. Using the scaling matrix computed in [9], we obtain γ = 22.1061 rather than 22.04.

We shall consider two I/O scaling sets, one with no repeated blocks and one with some repeated blocks.

Case 1 (s = 0). We choose D := {diag(d_1, d_2, d_3, d_4, d_5, I_2) : d_i > 0}, which has no repeated blocks. The D-P iteration algorithm 3.1 is implemented in MATLAB on a 286 computer. The results are displayed in Table 1. With four iterations, an approximate value of 22.5689 is obtained for the optimal I/O scaled H∞-norm. The corresponding value of the scaling matrix is D = diag(5.2729, 0.1711, 1.3839, 0.5536, 1.0172, 1, 1). In each iteration, the algorithm solves one H∞ optimization problem and some Lyapunov equations. In this example, the H∞ optimization problem is solved using the algorithm proposed by Scherer [10], and the I/O scaling matrix is determined using the method suggested in Remark 3. Starting from D = I_7 and 1/γ = 0, the four iterations require a total of 14 Riccati equations and 17 Lyapunov equations. Recall that solving a Riccati equation for a system of order n is roughly equivalent to solving an eigenvalue problem for a 2n × 2n matrix, and that the order of the system here is 4.
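As a rough independent check of a candidate scaling such as the Case 1 result D = diag(5.2729, 0.1711, 1.3839, 0.5536, 1.0172, 1, 1), one can grid the frequency axis and evaluate σ̄(D^{1/2} G_K(jω) D^{-1/2}) directly from the definition (4). The sketch below is our own verification aid, with the gain K and the frequency grid assumed given.

```python
import numpy as np

def scaled_hinf_estimate(A, B, F, H, E, K, d, omegas=np.logspace(-3, 3, 2000)):
    """Grid estimate of ||D^{1/2} G_K D^{-1/2}||_inf for a diagonal scaling D = diag(d),
       with G_K(s) = (H + EK)(sI - A + BK)^{-1} F as in (4)."""
    Acl, Ccl = A - B @ K, H + E @ K
    Dh, Dhinv = np.diag(np.sqrt(d)), np.diag(1.0 / np.sqrt(d))
    n = A.shape[0]
    peak = 0.0
    for w in omegas:
        Gk = Ccl @ np.linalg.solve(1j * w * np.eye(n) - Acl, F)
        peak = max(peak, np.linalg.svd(Dh @ Gk @ Dhinv, compute_uv=False)[0])
    return peak
```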
Table 1. No repeated blocks

Iteration   γ*(D)     Riccati   Lyapunov
1           66.2740    3         3
2           22.9891    6         8
3           22.5705   10        13
4           22.5689   14        17
5           22.5684   17        21
6           22.5683   20        25
Table 2. Repeated blocks

Iteration   γ*(D)     Riccati   Lyapunov
1           66.2740    3         3
2           22.9583    7         9
3           22.3757   12        14
4           22.2595   15        19
5           22.2496   19        27
6           22.2238   22        31
The total computation required for computing the optimal I/O scaling matrix is therefore roughly equal to the eigenvalue computation problem for 22 real 8 × 8 matrices.

Case 2 (s ≠ 0). We choose the same I/O scaling set (24) given in [9]. Now the D-P iteration algorithm 3.1 produces the results displayed in Table 2. After 6 iterations, we obtain γ = 22.2238 for the I/O scaling matrix

    D = diag( [5.1746  0.0690; 0.0690  0.1412], [1.5348  -0.1766; -0.1766  0.6983], 0.9745, I_2 ),

at which stage the total computation has risen to 22 Riccati equations and 31 Lyapunov equations. These simulation results show that the proposed algorithm quickly produces good estimates of γ*.

6. Conclusions

This paper has presented a scheme for state-feedback constant I/O scaled H∞ optimization problems. The scheme alternately searches for P, the solution of the ARE associated with the optimal H∞-norm, and for D, the I/O scaling matrix, while holding the other constant. It is shown that the D-P iteration computes a descent direction at any point away from the global minimum. The descent direction of the I/O scaling matrix is determined by the ARE solution P. By using the quadratically convergent algorithm of [10], only a few Riccati and Lyapunov equations are required in each iteration. With this algorithm, computational experience shows that a good approximation to the optimum can be obtained after several iterations. Thus, the computation required for the scaled H∞ optimization problem is comparable with that for the standard H∞ optimization problem. This may simplify robust control design for systems with structured uncertainties.

References
[1] C. Beck, Computational issues in solving LMIs, Proc. 30th IEEE Conf. on Decision and Control (1991) 1259-1260.
[2] J.C. Doyle, Structured uncertainty in control system design, Proc. 24th IEEE Conf. on Decision and Control (1985) 260-265.
[3] J.C. Doyle, K. Glover, P. Khargonekar and B. Francis, State-space solutions to standard H2 and H∞ control problems, IEEE Trans. Automat. Control 34 (1989) 831-847.
[4] J.C. Doyle, A. Packard and K. Zhou, Review of LFTs, LMIs, and μ, Proc. 30th IEEE Conf. on Decision and Control (1991) 1227-1232.
[5] L. El Ghaoui, V. Balakrishnan and S. Boyd, On maximizing a robustness measure for structured nonlinear perturbations, Proc. American Control Conf. (1992) 2923-2924.
[6] P.M. Gahinet and P. Pandey, Fast and numerically robust algorithm for computing the H∞ optimum, Proc. 30th IEEE Conf. on Decision and Control (1991) 200-205.
[7] Y.A. Jiang and D.J. Clements, A Riccati equation approach to approximate μ-norm computation, Proc. Control 92 Conf., Perth (1992) 233-238.
[8] P.P. Khargonekar, I.R. Petersen and K. Zhou, Robust stabilization of uncertain linear systems: quadratic stabilizability and H∞ control theory, IEEE Trans. Automat. Control 35 (1990) 356-361.
[9] A. Packard, K. Zhou, P. Pandey, J. Leonhardson and G. Balas, Optimal, constant I/O similarity scaling for full-information and state-feedback control problems, Systems Control Lett. 19 (1992) 271-280.
[10] C. Scherer, H∞-control by state-feedback and fast algorithms for the computation of optimal H∞-norms, IEEE Trans. Automat. Control 35 (1990) 1090-1099.
[11] J.C. Willems, Least squares stationary optimal control and the algebraic Riccati equation, IEEE Trans. Automat. Control 16 (1971) 621-634.