Automatica, Vol. 27, No. 2, pp. 317-329, 1991
Printed in Great Britain.
0005-1098/91 $3.00 + 0.00
Pergamon Press plc
© 1991 International Federation of Automatic Control

ℓ1-optimal Control of Multivariable Systems with Output Norm Constraints*

J. S. McDONALD† and J. B. PEARSON†‡

The ℓ1-optimal control problem is considered for general rational plants, possibly subject to ℓ∞-norm constraints on some outputs, and a procedure given for the construction of optimal or near-optimal rational compensators.

Key Words—Control system synthesis; linear optimal control; multivariable control systems; linear programming.
Abstract—In this paper, we consider the ℓ1-optimal control problem for general rational plants. It is shown that for plants with no poles or zeros on the unit circle an optimal compensator exists and that the resulting closed-loop transfer function is polynomial whenever there are at least as many controls as regulated outputs and at least as many measurements as exogenous inputs. Exactly or approximately optimal rational compensators can be obtained by solving a sequence of finite linear programs for the coefficients of a polynomial closed-loop transfer function. No assumptions on plant poles or zeros are required to obtain at least approximately optimal compensators. It is shown that constrained problems in which a set of outputs is regulated subject to ℓ∞-norm constraints on another set of outputs can be solved using a slight modification of the same algorithm.

* Received 11 September 1989; revised 26 April 1990; received in final form 31 May 1990. The original version of this paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor H. Kimura under the direction of Editor H. Kwakernaak.
† Department of Electrical and Computer Engineering, Rice University, Houston, TX 77251-1892, U.S.A.
‡ Author to whom all correspondence should be addressed.

Notation
Let m and n be positive integers. Then we define:

ℓ1_{m×n}  The real normed linear space of all m×n matrices Ĥ each of whose entries is a right-sided, absolutely summable real sequence Ĥ_ij = (Ĥ_ij(k))_{k=0}^∞. The norm is defined:

  ‖Ĥ‖_1 := max_{i∈{1,…,m}} Σ_{j=1}^{n} Σ_{k=0}^{∞} |Ĥ_ij(k)|.

ℓ∞_{m×n}  The real normed linear space of all m×n matrices Ĥ each of whose entries is a right-sided, magnitude-bounded real sequence Ĥ_ij = (Ĥ_ij(k))_{k=0}^∞. The norm is defined:

  ‖Ĥ‖_∞ := Σ_{i=1}^{m} max_{j∈{1,…,n}} sup_k |Ĥ_ij(k)|.

c0_{m×n}  The subspace of ℓ∞_{m×n} consisting of all elements each of whose entries converges to zero, that is:

  c0_{m×n} := {Ĥ ∈ ℓ∞_{m×n} : lim_{k→∞} Ĥ_ij(k) = 0 ∀i ∈ {1,…,m}, j ∈ {1,…,n}}.

Let m and n be as above and let z be a complex variable. Then we define:

D, D̄  The open and closed, respectively, unit disks in the complex plane.

ẑ(·)  The z-transform. Given a matrix Ĥ = (Ĥ(k))_{k=0}^∞ of right-sided real sequences:

  ẑ(Ĥ) := Σ_{k=0}^{∞} Ĥ(k) z^k.

A_{m×n}  The real normed linear space of all m×n matrices H such that H = ẑ(Ĥ) for some sequence Ĥ ∈ ℓ1_{m×n}. The norm is defined ‖H‖_A := ‖Ĥ‖_1.

ℛA_{m×n}  The subspace of A_{m×n} consisting of all elements each of whose entries is a real-rational function of z. (Entries are precisely those with all poles outside D̄.)

We will often drop the m and n in the above notations when the dimension is either unimportant or clear from the context. Because the z-transform is an invertible mapping on all the above sequence spaces, we can associate sequences and their z-transforms as pairs. For such a pair, we write a hatted variable to denote the sequence and an unhatted variable to denote the corresponding z-transform. Now let X be a real normed linear space, let S ⊂ X be a subspace of X, and let x* be a bounded linear functional on X. Then we define:

BX  The closed unit ball of X; BX := {x ∈ X : ‖x‖ ≤ 1}.

⟨x, x*⟩  The value of the bounded linear functional x* at the point x ∈ X.

X*  The space of all bounded linear functionals on X, also called the dual space of X. X* is a complete real normed linear space with the norm defined:

  ‖x*‖ := sup_{x∈BX} ⟨x, x*⟩.

S⊥  The (right) annihilator of S ⊂ X; S⊥ := {x* ∈ X* : ⟨x, x*⟩ = 0 ∀x ∈ S}. Thus S⊥ is a subspace of X*.

⊥S  The (left) annihilator of S ⊂ X*; ⊥S := {x ∈ X : ⟨x, x*⟩ = 0 ∀x* ∈ S}. Thus ⊥S is a subspace of X.
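The ℓ1- and ℓ∞-type norms defined in the Notation above can be computed directly for finitely supported matrix sequences. The following sketch is illustrative only (our own helper names, not from the paper); each entry Ĥ_ij is stored as a finite list of coefficients, with all omitted tail entries taken to be zero.

```python
# Illustrative sketch of the two matrix-sequence norms defined above.
# H[i][j] is a finite list of real coefficients H_ij(0), H_ij(1), ...

def norm_l1(H):
    # ||H||_1 := max_i sum_j sum_k |H_ij(k)|
    return max(sum(sum(abs(x) for x in Hij) for Hij in row) for row in H)

def norm_linf(H):
    # ||H||_inf := sum_i max_j sup_k |H_ij(k)|
    return sum(max(max((abs(x) for x in Hij), default=0.0) for Hij in row)
               for row in H)

H = [[[1.0, -0.5], [0.25]],       # row 1: entries H_11, H_12
     [[0.0, 2.0],  [-1.0, 0.5]]]  # row 2: entries H_21, H_22

print(norm_l1(H))    # row sums are 1.75 and 3.5, so the max is 3.5
print(norm_linf(H))  # row maxima are 1.0 and 2.0, so the sum is 3.0
```

Note the asymmetry of the pair: the ℓ1 norm takes a maximum over rows of row sums, while the ℓ∞ norm sums over rows the largest entry magnitude — exactly the combination under which the two spaces pair as predual and dual.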
1. INTRODUCTION

In this paper, we consider the problem of minimizing the maximum magnitude of a set of regulated outputs of a linear discrete-time system excited by bounded-amplitude disturbance signals, using a linear shift-invariant discrete-time compensator. This is equivalent to minimization of the A-norm of a closed-loop transfer matrix of the form, given a transfer matrix H ∈ ℛA:

  Φ = H − K

where K takes values in some feasible set 𝒮. The required compensator is constructed from K. The feasible set depends on two given transfer matrices U and V in ℛA. Taking 𝒮 = S := {K ∈ ℛA : ∃Q ∈ ℛA satisfying K = UQV} corresponds to minimizing over all stabilizing compensators with rational transfer matrices, while taking 𝒮 = S_A := {K ∈ A : ∃Q ∈ A satisfying K = UQV} corresponds to allowing stabilizing compensators with transfer matrices in the quotient field of A. From a practical standpoint, the former version of the problem is the more useful since it considers only finite-dimensional compensators, while the latter allows the compensator to be possibly infinite dimensional. However, because of the properties of A as the dual of a normed linear space, the latter problem has been the standard one studied.

In Dahleh and Pearson (1987), the problem with 𝒮 = S_A was considered under several assumptions on U and V: that U has full row rank, V has full column rank, neither U nor V have transmission zeros on the unit circle, all the zeros of U and V in D̄ are simple, and U and V have no common zeros in D̄. The rank assumption on U and V corresponds to requiring that the open-loop system have at least as many independent control inputs as outputs to be regulated and at least as many independent measured outputs as disturbance inputs, while the zeros of U and V arise from the poles and zeros of the open-loop system. It was shown that a K0 ∈ S_A exists which minimizes ‖Φ‖_A and that, in fact, any such K0 must be in S, so that the problem has a solution when 𝒮 = S. Moreover, for the minimizing K0, Φ0 = H − K0 is a polynomial which can be constructed from the solution of a sufficiently large finite linear program formulated in a dual space. This linear program is equivalent to the problem with the feasible set restricted to include only Ks such that Φ = H − K is a polynomial of fixed degree at least equal to the degree of Φ0.
In Dahleh and Pearson (1988), a particular problem in which U has dimensions 2×1 and V has dimensions 1×2 was considered (that is, the above rank assumptions are not satisfied). Similar assumptions were made on the zeros of U and V, entrywise. It was shown once again that a minimizing K exists in S_A, but without the above rank assumptions it is unclear whether a minimizer exists in S. It was shown, however, that if there exists a K ∈ S_A such that Φ = H − K is polynomial, then a sequence of increasingly large finite linear programs can be formulated with the property that the minimum norms form a non-increasing sequence which converges to the infimal norm. This gives a method of finding approximate minimizers which are arbitrarily good by solving a sufficiently large linear program. Each linear program corresponds to restricting the feasible set to Ks such that Φ = H − K is a polynomial of a fixed finite degree. Again, these linear programs are formulated in a dual space and the corresponding Φs constructed from their solutions.

In this paper, we have two main aims. First, we wish to give a method for computing implementable compensators with optimal or at least close-to-optimal performance in the most general possible setting. In this spirit, we will take 𝒮 = S in the formulation of our standard problem and drop as many of the above assumptions on U and V as possible. Second, we address the problem of minimizing the maximum magnitude of a set of regulated outputs subject to constraints on the maximum magnitude of other outputs. This corresponds to a "disk"-type constraint on the norm of Φ.

In Section 2 we discuss briefly the formulation of our standard problem and identify four cases determined by the rank (i.e. row or column) of the matrices U and V. These cases are treated separately throughout the paper and lead to significant differences in the properties of the problem. In Section 3, we use essentially the same approach as Dahleh and Pearson (1987, 1988) to address existence; that is, we consider the problem first with 𝒮 = S_A and then infer existence of a minimizer in S if possible. We will retain only the assumption that U and V have no unit circle zeros and show that a minimizer exists in S_A regardless of the ranks of U and V. In the case in which U has full row rank and V has full column rank, a minimizer exists in S and corresponds to a polynomial Φ. This is the expected generalization of the results of Dahleh and Pearson. The characterization of the feasible set S_A in this general case is the main new result in this section and provides the key not only to proving existence but to computing minimizers. Also, our proofs of existence of minimizers are somewhat more direct than those previously given.

In Section 4, we drop all assumptions on U and V and consider the computation of exact or approximate minimizers in S. First we note that the characterization found in Section 3 of the feasible set S_A extends in an obvious way to characterize S even when unit circle zeros are present in U and/or V. We show that when U has full row rank and V has full column rank, arbitrarily good approximate minimizers can always be found by solving finite linear programs, which again correspond to Φ being a polynomial of a fixed degree. We formulate the linear program in terms of the coefficients of the polynomial Φ. Of course, when U and V have no unit circle zeros, a sufficiently large such program will yield a minimizing K. When U and V do not satisfy the rank assumptions, there may be no K ∈ S which gives Φ polynomial, so that the above procedure cannot be used. We give a simple characterization of all problems which have this difficulty. Finally, in Section 5, we define a class of constrained problems as described above. We do not consider the existence of minimizers for such problems, but we show that, provided the constrained problem is feasible, arbitrarily good approximate minimizers can be obtained as easily as for the unconstrained case by a slight modification of the same algorithm.
2. PROBLEM FORMULATION

In the standard problem which we will consider, we are given a discrete-time system G which is causal, finite-dimensional linear and shift-invariant (FDLSI) and hence is described by a real-rational transfer matrix G(z). The system G has two (vector) inputs and two (vector) outputs: w is a vector of n_w exogenous disturbance inputs, z is a vector of n_z outputs which are to be regulated, y is a vector of n_y outputs which are measurable, and u is a vector of n_u control inputs. The control input u is assumed to be the output of a causal FDLSI compensator C whose input is the measured output y. If we partition the transfer matrix G(z) conformally with these inputs and outputs, we obtain the following equations to describe the closed-loop system:

  (z; y) = (G11  G12; G21  G22)(w; u),   u = Cy.    (2.1)

It is well known that if the system G is admissible (Cheng and Pearson, 1981), the set of all closed-loop transfer matrices Φ from w to z which may be achieved by choice of C, and which correspond to an internally stable closed-loop system, may be described by the following parametrization (see e.g. Francis, 1987):

  Φ = H − UQV    (2.2)

where:

  H := G11 + G12 M Ỹ G21
  U := G12 M
  V := M̃ G21

Q ∈ ℛA, and G22 = N M^{-1} = M̃^{-1} Ñ are arbitrary right and left stable coprime factorizations of G22 (i.e. coprime over ℛA). Also, the following Bezout identity is satisfied:

  (X̃  −Ỹ; −Ñ  M̃)(M  Y; N  X) = (I  0; 0  I).

The transfer matrices H, U, and V are all in ℛA and the transfer matrix Q is the free design parameter. The Φ ∈ ℛA corresponding to a particular Q may be achieved by choosing the compensator transfer matrix C as follows:

  C = (Y − MQ)(X − NQ)^{-1}    (2.3a)
    = (X̃ − QÑ)^{-1}(Ỹ − QM̃).    (2.3b)

In this setting the problem of minimization of ‖ẑ‖_∞ when ŵ ∈ Bℓ∞ is equivalent to the following minimum distance problem in ℛA:

  (OPT):  inf_{K∈S} ‖H − K‖_A =: μ_opt

where S := {K ∈ ℛA : ∃Q ∈ ℛA satisfying K = UQV}. We will sometimes say, given a problem (OPT), that (OPT) is the standard problem defined by H, U and V given above. Also, given a system G' and input and output weighting transfer matrices W_z, W_w ∈ ℛA, the problem of minimizing ‖ẑ‖_∞ when the inputs are in a weighted ball BW_w := {ŵ ∈ ℓ∞ : ∃v̂ ∈ Bℓ∞ satisfying ŵ = W_w v̂} may be put in the above form by taking H = W_z H' W_w, U = W_z U', and V = V' W_w. The only assumption which will be considered to hold generally throughout the paper is the following:

Assumption 1. The transfer matrices U and V have full normal rank, that is, full rank for almost all z.
Special cases
Given the above assumption, the dimensions of the inputs and outputs of the standard system of (2.1) determine what kind of rank (i.e. full row rank or full column rank or both) the matrices U and V have, and we can classify problems (OPT) by rank as follows. In the case n_u ≥ n_z, or at least as many control inputs as regulated outputs, U will be a "fat" matrix with full row rank. If we have n_y ≥ n_w, or at least as many measured outputs as exogenous inputs, V will be a "skinny" matrix with full column rank. Intuitively, this combination of dimensions is good from the point of view of designing compensators for the system, and the study of (OPT) turns out to be simpler in this case. For this reason we will call this the good rank case. If n_u < n_z, U will be a skinny matrix with full column rank and we will say this U has bad rank. Similarly, n_y < n_w will result in a fat V with full row rank, and we will say this V has bad rank. Problems in which U and/or V have bad rank will be called bad rank problems. The various combinations of good and bad rank in U and V thus define four cases of (OPT). Note that for the H∞ problem, it is also these combinations of ranks in U and V which define the usual four cases of that problem (the one-block problem, the two cases of two-block problems, and the four-block problem). We will consider in detail just the two extreme cases in which either both U and V have good rank (the one-block problem) or both have bad rank (the four-block problem). The intermediate cases (two-block problems) have properties essentially like the latter, and we will make comments in the sequel to indicate how they can be treated.
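The four rank cases above are determined purely by the dimension counts of the standard system. The following sketch summarizes the classification (the function name and return labels are ours, for illustration only):

```python
# Illustrative classification of the four rank cases described above,
# from the dimension counts: n_u controls, n_z regulated outputs,
# n_y measurements, n_w disturbances.

def classify(n_u, n_z, n_y, n_w):
    good_U = n_u >= n_z   # "fat" U with full row rank
    good_V = n_y >= n_w   # "skinny" V with full column rank
    if good_U and good_V:
        return "one-block (good rank)"
    if not good_U and not good_V:
        return "four-block (bad rank)"
    return "two-block (mixed rank)"

print(classify(2, 2, 2, 1))  # one-block (good rank)
print(classify(1, 2, 1, 2))  # four-block (bad rank)
print(classify(2, 2, 1, 2))  # two-block (mixed rank)
```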
3. EXISTENCE OF A MINIMIZER

In this section, we consider the question: When does there exist K0 ∈ S such that μ_opt = ‖H − K0‖_A? Our approach is to make use of the following two theorems from Luenberger (1969), which ensure existence of solutions to minimum distance problems set in the duals of normed linear spaces and identify a useful property, called alignment, of such solutions when they exist.

Definition 1. Given a real normed linear space X and its dual X*, we say that an element x* ∈ X* and an element x ∈ X are aligned if ⟨x, x*⟩ = ‖x‖ ‖x*‖.
Theorem 1. Let x be an element in a real normed linear space X and let d denote its distance from the subspace M. Then:

  d = inf_{m∈M} ‖x − m‖ = max_{x*∈BM⊥} ⟨x, x*⟩

where the maximum on the right is achieved for some x0* ∈ M⊥ with ‖x0*‖ = 1. If the infimum on the left is achieved for some m0 ∈ M, then x − m0 is aligned with x0*.
Theorem 2. Let M be a subspace in a real normed linear space X. Let x* ∈ X* and let d denote its distance from M⊥. Then:

  d = min_{m*∈M⊥} ‖x* − m*‖ = sup_{x∈BM} ⟨x, x*⟩

where the minimum on the left is achieved for some m0* ∈ M⊥. If the supremum on the right is achieved for some x0 ∈ BM, then x* − m0* is aligned with x0.

The following corollary to Theorem 2 follows easily using the fact that if a subspace M in a dual space is weak*-closed then M = [⊥M]⊥ (Rudin, 1973, Thm 4.7). In fact, it is equivalent to Theorem 2 (except for the alignment condition) using the fact that for every subspace M of a normed linear space X, M⊥ is weak*-closed in X* (Rudin, 1973, p. 91).
Corollary 1. Let X be a normed linear space. Let x* ∈ X* and let M* be a subspace of X*. If M* is weak*-closed, then there exists an element m0* ∈ M* such that:

  ‖x* − m0*‖ = inf_{m*∈M*} ‖x* − m*‖.
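The duality in Theorem 1 can be illustrated in a tiny finite-dimensional example (a toy instance of our own, not from the paper): in X = R² with the ℓ1 norm, whose dual carries the ℓ∞ norm, take M = span{(1, 1)} and x = (1, 0). The primal distance to M and the dual maximum over the unit ball of M⊥ coincide.

```python
# Toy finite-dimensional check of the duality of Theorem 1:
# X = R^2 with the l1 norm (dual norm: l_inf), M = span{(1, 1)}, x = (1, 0).

x = (1.0, 0.0)

# Primal: d = inf_t ||x - t*(1, 1)||_1, scanned over a fine grid of t.
primal = min(abs(x[0] - t) + abs(x[1] - t)
             for t in [i / 1000.0 for i in range(-2000, 2001)])

# Dual: max <x, y> over y in M-perp = {y : y1 + y2 = 0} with ||y||_inf <= 1,
# i.e. y = t*(1, -1) with |t| <= 1.
dual = max(t * x[0] - t * x[1] for t in [i / 1000.0 for i in range(-1000, 1001)])

print(primal, dual)  # both equal 1.0, achieved at m0 = (0, 0), y0 = (1, -1)
```

Here x − m0 = (1, 0) and y0 = (1, −1) are aligned: ⟨x − m0, y0⟩ = 1 = ‖x − m0‖_1 ‖y0‖_∞, exactly the alignment property used repeatedly below.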
Since the space ℛA is not complete and hence cannot be a dual space, the question of existence of a minimizer for (OPT) cannot be resolved directly using these results. Instead we consider the related minimum distance problem in ℓ1_{nz×nw}:

  (OPT1):  inf_{K̂∈S1} ‖Ĥ − K̂‖_1 =: μ1

where

  S1 := {K̂ ∈ ℓ1_{nz×nw} : ∃Q ∈ A satisfying K = UQV}.

We will use the facts (Luenberger, 1969) that (c0_{m×n})* = ℓ1_{m×n} when we define linear functional evaluation as follows, given Ĝ ∈ c0_{m×n} and Ĥ ∈ ℓ1_{m×n}:

  ⟨Ĝ, Ĥ⟩ := Σ_{i=1}^{m} Σ_{j=1}^{n} Σ_{k=0}^{∞} Ĝ_ij(k) Ĥ_ij(k)

and that (ℓ1_{m×n})* = ℓ∞_{m×n} with a similar definition of functional evaluation. (OPT1) is the problem which was considered in Dahleh and Pearson (1987, 1988). It is also clearly equivalent to (OPT) with the feasible set enlarged from just S to all of S_A, since A_{nz×nw} and ℓ1_{nz×nw} are isometrically isomorphic under the z-transform. Our approach will be to use Theorem 2 to establish first the existence of a minimizer for (OPT1) (which corresponds to a point in S_A). The alignment condition of Theorem 1 will then allow us to conclude in certain cases that the z-transform of this
minimizer lies, in fact, in S and hence is a minimizer for (OPT). We will see that the results of Dahleh and Pearson (1987, 1988) concerning existence extend to the general case in the expected way; in the good rank case, we will be able to establish existence of a minimizer for (OPT) assuming no zeros of U and V are on the unit circle, while in the bad rank case, we will need a similar assumption and will only be able to establish existence of a minimizer for (OPT1). As has been shown by counterexample in Vidyasagar (1987), we cannot hope to establish existence in general without precluding unit circle zeros.

The good rank case
In this case, U has full row rank = n_z and V has full column rank = n_w. Before proceeding, we establish some notation. For simplicity, we replace the dimensions n_z and n_w with m and n, respectively. We will need to consider Smith-McMillan form decompositions (MacFarlane and Karcanias, 1976) of U and V given by:

  U = L_U M_U R_U,   V = L_V M_V R_V    (3.1)

where L_U, R_U, L_V, and R_V are (polynomial) unimodular matrices and M_U, M_V are rational matrices which have the familiar diagonal forms:
  M_U = (diag(ε_1(z)/ψ_1(z), …, ε_m(z)/ψ_m(z))  0)
  M_V = (diag(ε'_1(z)/ψ'_1(z), …, ε'_n(z)/ψ'_n(z)); 0).    (3.2)

Let Z_UV denote the set of all z ∈ D̄ which are zeros of either U or V. Then for each z0 ∈ Z_UV we can define a non-decreasing sequence of non-negative integers Σ_U(z0) corresponding to the multiplicities with which the term (z − z0) appears on the diagonal of M_U. That is:

  Σ_U(z0) := (σ_{U_i}(z0))_{i=1}^{m}   means:   ε_i(z) = (z − z0)^{σ_{U_i}(z0)} g_i(z),   i = 1, …, m

where the g_i(z) have no poles or zeros at z = z0. We can define similarly a set of sequences Σ_V(z0) for each z0 ∈ Z_UV which correspond to the multiplicities of the z0s on the diagonal of M_V. A sequence Σ_U(z0) is sometimes referred to as the sequence of structural indices of z0 in U. We can also define m polynomial row vectors of dimension m and n polynomial column vectors of dimension n as follows:

  α_i(z) = (L_U^{-1})_i(z),   i = 1, …, m
  β^j(z) = (R_V^{-1})^j(z),   j = 1, …, n

where subscript i indicates the ith row and superscript j indicates the jth column. We can now state the assumption we will require and define a set of conditions which will be shown to characterize the feasible set S1 of (OPT1).

Assumption 2. Neither U nor V have any transmission zeros on the unit circle, that is, Z_UV ⊂ D.

Definition 2. Given U and V as above and K ∈ A_{m×n}, we say K interpolates U (from the left) and V (from the right) if the following condition is satisfied: Given any zero z0 ∈ Z_UV of U and/or V with structural indices Σ_U(z0) and Σ_V(z0) in U and V, respectively, we have for all i ∈ {1, …, m} and j ∈ {1, …, n}:

  (i) (α_i K)^{(k)}(z0) = 0,   k = 0, …, σ_{U_i} − 1

  (ii) (K β^j)^{(k)}(z0) = 0,   k = 0, …, σ_{V_j} − 1

  (iii)(a) Σ_{l=0}^{k} Σ_{r=0}^{k−l} (k choose l)(k−l choose r) [α_i^{(l)} K^{(k−l−r)} β^{j(r)}](z0) = 0,   k = σ_{U_i}, …, σ_{U_i} + σ_{V_j} − 1

  or:

  (b) Σ_{l=0}^{k} Σ_{r=0}^{k−l} (k choose l)(k−l choose r) [α_i^{(l)} K^{(k−l−r)} β^{j(r)}](z0) = 0,   k = σ_{V_j}, …, σ_{V_j} + σ_{U_i} − 1

where the argument of σ_{U_i}(·) and σ_{V_j}(·) is understood to be z0 and superscript (k) indicates the kth derivative with respect to z. Note that this condition simplifies greatly in the case of a zero z0 which is not common to U and V; if it is a zero only of U, for example, we have Σ_V(z0) = (0)_{j=1}^{n} and parts (ii) and (iii) are trivially satisfied for all i and j. The following theorem shows that this condition characterizes
S1 in terms of its image in A under the z-transform.
Theorem 3. Given Assumption 2, U and V as above, and K ∈ A, there exists Q ∈ A satisfying K = UQV if and only if K interpolates U and V.

We defer the proof of the theorem until we have established the following two lemmas.

Lemma 1. Given Assumption 2, U and V as above, and K ∈ A_{m×n}, there exists Q ∈ A satisfying K = UQV if and only if for all z0 ∈ Z_UV, i ∈ {1, …, m} and j ∈ {1, …, n} we have:

  (α_i K β^j)^{(k)}(z0) = 0,   k = 0, …, σ_{U_i}(z0) + σ_{V_j}(z0) − 1.

Proof. We can clearly factor M_U and M_V of (3.2) into the forms:

  M_U = M_{Uλ} M_{Uo},   M_V = M_{Vo} M_{Vλ}

where M_{Uλ} = diag[(λ_{U_i})_{i=1}^{m}] and M_{Vλ} = diag[(λ_{V_j})_{j=1}^{n}] are diagonal matrices containing exactly the zeros in D̄ of M_U and M_V, respectively, and M_{Uo}, M_{Vo} are in A with right and left inverses, respectively, in A. Hence:

  ∃Q ∈ A satisfying K = UQV
  ⟺ ∃Q̄ ∈ A satisfying K = L_U M_{Uλ} Q̄ M_{Vλ} R_V
  ⟺ Q̄ := M_{Uλ}^{-1} L_U^{-1} K R_V^{-1} M_{Vλ}^{-1} ∈ A.

But this last holds if and only if an arbitrary entry of Q̄ is in A, i.e. if and only if:

  Q̄_ij = α_i K β^j / (λ_{U_i} λ_{V_j}) ∈ A    (*)

for i, j arbitrary. To show that this is equivalent to the condition in the lemma, first suppose (*) holds. Then α_i K β^j = λ_{U_i} λ_{V_j} Q̄_ij, where both factors on the right are analytic in D̄. Thus the conditions in the lemma hold (Churchill and Brown, 1984, p. 152). Conversely, if the conditions in the lemma hold, it can be shown that we can write α_i K β^j = λ_{U_i} λ_{V_j} K̄_ij where K̄_ij ∈ A. Hence Q̄_ij = K̄_ij ∈ A and (*) holds. □

Lemma 2. Given non-negative integers σ_U, σ_V, and k ≤ σ_U + σ_V − 1, K ∈ A, polynomial row and column vectors α and β, respectively, and z0 ∈ D̄:

  (α K β)^{(k)}(z0) = Σ_{l=0}^{k} (k choose l)(α K)^{(l)}(z0) β^{(k−l)}(z0)
                    = Σ_{l=0}^{k} (k choose l) α^{(k−l)}(z0)(K β)^{(l)}(z0)
                    = Σ_{l=0}^{k} Σ_{r=0}^{k−l} (k choose l)(k−l choose r) [α^{(l)} K^{(k−l−r)} β^{(r)}](z0).

Proof. This follows straightforwardly, if somewhat tediously, by manipulating the expansion:

  (α K β)^{(k)} = Σ_{l=0}^{k} Σ_{r=0}^{l} (k choose l)(l choose r) α^{(r)} K^{(l−r)} β^{(k−l)}

and using the fact that α, β, and K are all in A. □

Proof of Theorem 3. Using Lemma 1, we see that to prove the theorem we can equivalently establish, for all z0 ∈ Z_UV, i ∈ {1, …, m}, and j ∈ {1, …, n}:

  (i), (ii), and (iii) of Definition 2 hold ⟺ (α_i K β^j)^{(k)}(z0) = 0, k = 0, …, σ_{U_i}(z0) + σ_{V_j}(z0) − 1.

(⇐): This follows immediately by applying Lemma 2 in the appropriate form.
(⇒): Let z0 ∈ Z_UV be arbitrary. If we establish that (i) holds for arbitrary i and (ii) holds for arbitrary j, (iii) follows by applying Lemma 2. Considering (i) first, let i be arbitrary and argue inductively on k. For k = 0 we have:

  0 = (α_i K β^j)(z0) = (α_i K)(z0) β^j(z0),   ∀j ∈ {1, …, n}
  ⟹ (α_i K)(z0) R_V^{-1}(z0) = 0 ⟹ (α_i K)(z0) = 0

since R_V^{-1} is unimodular. Now suppose (α_i K)^{(l)}(z0) = 0 for 0 ≤ l ≤ k − 1. Then for all j:

  0 = (α_i K β^j)^{(k)}(z0)
    = Σ_{l=0}^{k} (k choose l)(α_i K)^{(l)}(z0) β^{j(k−l)}(z0)
    = (α_i K)^{(k)}(z0) β^j(z0) + Σ_{l=0}^{k−1} (k choose l)(α_i K)^{(l)}(z0) β^{j(k−l)}(z0)
    = (α_i K)^{(k)}(z0) β^j(z0)

since the summation on the right is zero. Thus we have:

  (α_i K)^{(k)}(z0) R_V^{-1}(z0) = 0 ⟹ (α_i K)^{(k)}(z0) = 0.

Hence, (i) holds. The argument to establish (ii) is similar. □

We should note that while Lemma 1 provides a notationally simpler set of conditions equivalent to that of Theorem 3, the latter provides a smaller number of conditions on K. It will be clear from the next section that this will lead to fewer constraint equations in a linear program which gives an approximate minimizer for (OPT) and hence more efficient computation. In any event, it is clear that the feasible set S1 of (OPT1) can be characterized equivalently:

  S1 = {K̂ ∈ ℓ1_{m×n} : K interpolates U and V}
     = {K̂ ∈ ℓ1_{m×n} : K satisfies the conditions of Lemma 1}.

The following results exploit the form of S1 to establish existence first of a minimizer for (OPT1) and then for (OPT).

Theorem 4. Given Assumption 2, the problem (OPT1) has a minimizer K̂0.

Proof. For ease of notation we will give the proof only for the case that all z0 ∈ Z_UV are real and indicate how to generalize to the complex case in a straightforward way. We will show that there exists a subspace M of c0_{m×n} such that M⊥ = S1. Then S1 is weak*-closed and the result follows by Corollary 1. We will use the second characterization of S1 given above, which imposes a condition on K for each i, j, k, and z0. Correspondingly, we define for each condition a sequence â_{ijkz0} as follows:

  (â_{ijkz0})_{rs}(l) := Σ_{p=0}^{∞} Σ_{q=0}^{∞} (α̂_i)_r(q − p) (β̂^j)_s(p − l) [z^q]^{(k)}(z0).

It is easily verified that the above sequence lies in c0_{m×n} since every z0 ∈ D. Also, given K̂ ∈ ℓ1_{m×n}:

  ⟨â_{ijkz0}, K̂⟩ = Σ_{r=1}^{m} Σ_{s=1}^{n} Σ_{l=0}^{∞} [Σ_{p=0}^{∞} Σ_{q=0}^{∞} (α̂_i)_r(q − p) (β̂^j)_s(p − l) [z^q]^{(k)}(z0)] K̂_{rs}(l) = (α_i K β^j)^{(k)}(z0).

Let M denote the linear span of all the â_{ijkz0}s, a subspace of c0_{m×n}. Then it is immediate that M⊥ = S1. For the complex zeros case, we can treat the conjugate zeros in pairs to obtain two similar sequences. □
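In the simplest scalar instance the interpolation characterization above reduces to plain zero conditions on K. A minimal sketch (a toy instance of our own, not from the paper): with U(z) = z − a for a single simple zero a ∈ D and V = 1, a polynomial K lies in the feasible set iff K(a) = 0, and Q is then recovered by dividing out the zero.

```python
# Scalar illustration of the interpolation characterization: with
# U(z) = z - a, a in the open unit disk, and V = 1, a polynomial K is
# of the form K = U Q V with Q in A iff K(a) = 0. (Toy construction.)

def eval_poly(p, z):
    # p = [p0, p1, ...] represents p0 + p1*z + p2*z^2 + ...
    return sum(c * z**k for k, c in enumerate(p))

def divide_out_zero(p, a):
    # Synthetic division: p(z) = (z - a) * q(z) + r, with r = p(a).
    hi = list(reversed(p))          # coefficients, highest degree first
    q = [hi[0]]
    for c in hi[1:-1]:
        q.append(c + a * q[-1])
    r = hi[-1] + a * q[-1]
    return list(reversed(q)), r

a = 0.5
K = [-0.25, 0.0, 1.0]               # K(z) = z^2 - 0.25, so K(a) = 0
q, r = divide_out_zero(K, a)
print(eval_poly(K, a))              # 0.0: K interpolates U at z0 = a
print(q, r)                         # Q(z) = 0.5 + z, remainder 0.0
```

When K(a) ≠ 0 the remainder r is nonzero and Q = K/(z − a) has a pole inside D̄, i.e. Q ∉ A, which is exactly the obstruction the conditions of Lemma 1 rule out.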
Theorem 5. Given Assumption 2, the problem (OPT) has a minimizer K0. Moreover, Φ0 = H − K0 is a polynomial transfer matrix.

Proof. To establish this we will consider the problem (OPT1) to be posed in the primal space ℓ1_{m×n} and use the fact that S1⊥, which lies in general in ℓ∞_{m×n}, is in fact exactly the M given in the proof of Theorem 4 and hence lies in c0_{m×n}. We will use the alignment condition of Theorem 1 to show first that for any minimizer K̂0 of (OPT1), Φ̂0 has at least one row which is polynomial. Next we show that, given any minimizer of (OPT1) for which l rows of the corresponding Φ̂ are polynomial (where l < m), there exists another minimizer for which at least l + 1 rows of the corresponding Φ̂ are polynomial. Hence there is at least one minimizer K̂0 for which all rows of Φ̂0 are polynomial. For such a K̂0, K0 is clearly rational and hence a minimizer for (OPT).

First, then, suppose K̂0 is any minimizer for (OPT1) and Ĝ0 is any maximizer for the dual problem max_{Ĝ∈BS1⊥} ⟨Ĥ, Ĝ⟩. By Theorem 1, Ĝ0 and Φ̂0 are aligned. If Ĝ0 is the zero functional, then μ1 = 0 and K0 = H is a minimizer for (OPT) for which Φ0 is a polynomial. If Ĝ0 is non-zero, then there is a row, say the ith, such that max_j sup_k |Ĝ0_ij(k)| > 0. Since Ĝ0 ∈ c0, there exists N such that |Ĝ0_ij(k)| < max_j sup_k |Ĝ0_ij(k)| for all j when k > N. It then follows easily from the alignment of Φ̂0 and Ĝ0 that Φ̂0_ij(k) = 0 for each j when k > N, and hence the ith row of Φ̂0 is a polynomial.

Next, suppose l < m and K̂0 is any minimizer for (OPT1) such that l rows (say the first l) of Φ̂0 are polynomial. Partition after the lth row so that:

  K̂0 = (K̂01; K̂02),   Ĥ = (Ĥ1; Ĥ2),   α_i = (α_i1  α_i2)

and consider the problem:

  inf_{R̂∈S2} ‖(Ĥ2 − K̂02) − R̂‖_1

where S2 := {R̂ ∈ ℓ1_{(m−l)×n} : (K̂01; K̂02 + R̂) ∈ S1}. Clearly if R̂0 is any minimizer for this problem, [K̂01^T (K̂02 + R̂0)^T]^T is a minimizer for (OPT1). Moreover, applying Lemma 1, we see that R̂ ∈ S2 if and only if for all z0 ∈ Z_UV, i ∈ {1, …, m} and j ∈ {1, …, n} we have:

  (α_i2 R β^j)^{(k)}(z0) = 0,   k = 0, …, σ_{U_i}(z0) + σ_{V_j}(z0) − 1

where α_i2 denotes the last m − l entries of α_i. Thus it is easily shown (cf. proof of Theorem 4) that S2⊥ is in c0 and the same argument given above applies to establish the existence of a minimizer R̂0 such that Ĥ2 − K̂02 − R̂0 has at least one polynomial row. Hence [K̂01^T (K̂02 + R̂0)^T]^T is a minimizer for (OPT1) with at least l + 1 polynomial rows. □

The bad rank case
In this case, U has full column rank = n_u and V has full row rank = n_y. We will need the following assumption:

Assumption 3. There exist n_u rows of U and n_y columns of V which are linearly independent for all z on the unit circle.

Note that this is slightly stronger than requiring that U and V have no transmission zeros on the unit circle. Under this assumption, U and V can be written in the following form without loss of generality (possibly requiring the interchange of inputs and/or outputs):

  U = (Û; U2),   V = (V̂  V2)

where Û has dimensions n_u × n_u and is invertible and V̂ has dimensions n_y × n_y and is invertible. Moreover, Û and V̂ have no zeros on the unit circle. Thus K = UQV can be written:

  K = (K̂  K̂12; K̂21  K̂22)

and Û and V̂ define a good rank sub-problem of the overall problem satisfying Assumption 2. Also, we can define polynomial coprime factorizations as follows:

  U2 Û^{-1} = D_U^{-1} N_U,   V̂^{-1} V2 = N_V D_V^{-1}.    (3.3)

Using these definitions we state the following result characterizing the feasible set S1 for this case.

Theorem 6. Given U and V as above, Assumption 3, and K ∈ A, there exists Q ∈ A satisfying K = UQV if and only if:

  (i) (−N_U  D_U)(K̂  K̂12; K̂21  K̂22) = 0

  (ii) (K̂  K̂12)(−N_V; D_V) = 0

  (iii) K̂ interpolates Û and V̂.

Proof. (⇒): If K = UQV then certainly K̂ = ÛQV̂. Now, Û and V̂ have good rank and satisfy Assumption 2, so that if Q ∈ A then, by Theorem 3, (iii) holds. Also, Û and V̂ are invertible so that Q = Û^{-1} K̂ V̂^{-1} and hence:

  K̂12 = ÛQV2 ⟹ K̂12 = K̂ V̂^{-1} V2
  K̂21 = U2QV̂ ⟹ K̂21 = U2 Û^{-1} K̂.

Using the polynomial coprime factorizations (3.3), we obtain (i) and (ii).
(⇐): Since Û and V̂ are invertible, there always exists a unique Q := Û^{-1} K̂ V̂^{-1} solving K̂ = ÛQV̂. Since Û and V̂ have good rank and satisfy Assumption 2, Theorem 3 holds and (iii) ⟹ Q ∈ A. Also:

  (i) ⟹ K̂21 = U2 Û^{-1} K̂ = U2QV̂
  (ii) ⟹ K̂12 = K̂ V̂^{-1} V2 = ÛQV2    (**)
  ((i) and (**)) ⟹ K̂22 = U2 Û^{-1} K̂12 = U2QV2

so that K = UQV. □

Remark 1. If conditions (i) and (ii) of Theorem 6 are satisfied, it is straightforward to verify that also:

  (K̂21  K̂22)(−N_V; D_V) = 0.

This remark will prove useful in the next section. The following theorem uses the above characterization of S1 to establish existence of a minimizer for (OPT1).
Theorem 7. Given U and V as above and Assumption 3, the problem (OPT 0 has a minimizer/¢0. Proof. For each of the conditions (i), (ii) and (!ii) of Theorem 6 define a subspace of all K e ~ . . . . which satisfy the corresponding condition; S(0 for condition (i), and so on. Then S 1 = S(i ) (-1 S(ii) (") S(iii). W e will show that each of these subspaces and hence S~ is weak *-closed, and the result follows. First note that S(ii0 can be handled exactly as S~ was in Theorem 4, by constructing a c o sequence corresponding to each condition in Lemma 1. The only difference is that o n l y / ( will be required to interpolate 0 and 17I so that the sequences will have the following form, where the partitioning is conformal with
our usual partition of K:

[ζ  0; 0  0].

The annihilator M⊥ of the linear span M of these sequences will be equal to S(iii) and hence S(iii) is weak*-closed.

Considering now S(i), and letting T := [−Ñ_U  D̃_U] for notational convenience, define a bounded linear operator F: ℓ₁^{n_z×n_w} → ℓ₁^{(n_z−n_u)×n_w} as follows, given K ∈ ℓ₁^{n_z×n_w}: FK := T * K. Then S(i) = N(F), where N(·) denotes the null space. Now define a bounded linear operator G: c₀^{(n_z−n_u)×n_w} → c₀^{n_z×n_w} as follows, given Φ̂ ∈ c₀^{(n_z−n_u)×n_w}:

(GΦ̂)_ij(k) = Σ_{l=0}^∞ Σ_{m=1}^{n_z−n_u} T_mi(l−k) Φ̂_mj(l),
   i = 1,…,n_z;  j = 1,…,n_w;  k = 0, 1,…

(with T_mi(r) := 0 for r < 0). Then F = G*, the adjoint of G, since, for all Φ̂ ∈ c₀^{(n_z−n_u)×n_w} and K ∈ ℓ₁^{n_z×n_w}:

⟨GΦ̂, K⟩ = Σ_{i=1}^{n_z} Σ_{j=1}^{n_w} Σ_{k=0}^∞ K_ij(k) [Σ_{l=0}^∞ Σ_{m=1}^{n_z−n_u} T_mi(l−k) Φ̂_mj(l)]

         = Σ_{m=1}^{n_z−n_u} Σ_{j=1}^{n_w} Σ_{l=0}^∞ Φ̂_mj(l) [Σ_{i=1}^{n_z} Σ_{k=0}^∞ T_mi(l−k) K_ij(k)]

         = ⟨Φ̂, FK⟩.
Thus S(i) = N(G*) is weak*-closed by Rudin (1973, Thm 4.12). It is clear by a similar argument that S(ii) is also weak*-closed. □

In cases where only one of U and V has bad rank, say V, Assumption 3 is unchanged and we partition V = [V̂  V₁₂], where V̂ has dimensions n_y × n_y and is invertible, and K = [K̂  K₁₂]. Then U and V̂ define a good rank sub-problem satisfying Assumption 2, and the conditions of Theorem 6 are modified: (i) disappears, (ii) is unchanged, and (iii) becomes "K̂ interpolates U and V̂". Finally, existence of a minimizer can still only be established for (OPT₁).
4. TRUNCATED PROBLEMS

In this section, we consider the problem of computing minimizers for (OPT) when they are known to exist, or otherwise at least computing approximate minimizers which are arbitrarily good (i.e. whose distance from H can be made arbitrarily close to the infimal distance μ_opt). We will again consider the good rank and bad rank cases separately, but first we will discuss the characterization of (OPT) as an infinite linear program and define, given a problem (OPT), a class of related problems which we will call truncated problems, which are equivalent to finite linear programs and which can be used in many cases to obtain exact or approximate minimizers for (OPT). We will characterize exactly when this approach can be applied. We will also note that the assumptions required on unit circle zeros of U and V in the last section can be dropped when we consider the problem (OPT) directly.

Linear programs and truncated problems

In the previous section, we characterized the feasible set S₁ of (OPT₁) for the good rank case in Lemma 1 and Theorem 3 and for the bad rank case in Theorem 6. The following modified versions of these results characterize the feasible set S of (OPT) and do not require the Assumptions 2 or 3. The notation is as established in the last section for their respective cases and the proofs are omitted as they are easily obtained by modifying the proofs of the corresponding results.

Lemma 3 (Good rank). Given K ∈ ℓ_A, there exists Q ∈ ℓ_A satisfying K = UQV if and only if for all z₀ ∈ Z_UV, i ∈ {1,…,n_z} and j ∈ {1,…,n_w} we have:

(α_i K β_j)^(k)(z₀) = 0,   k = 0,…, σ_Ui(z₀) + σ_Vj(z₀) − 1.

Theorem 8 (Good rank). Given K ∈ ℛ_A, there exists Q ∈ ℛ_A satisfying K = UQV if and only if K interpolates U and V.

Theorem 9 (Bad rank). Given K ∈ ℛ_A, there exists Q ∈ ℛ_A satisfying K = UQV if and only if conditions (i), (ii) and (iii) of Theorem 6 are satisfied.

Recalling that the closed loop transfer function Φ = H − K (so that K = H − Φ) and using the definition of the A-norm, (OPT) can be written:

inf  max_{i∈{1,…,n_z}}  Σ_{j=1}^{n_w} Σ_{k=0}^∞ |φ_ij(k)|

subject to:  H − Φ ∈ S.

In both the good and bad rank cases, the requirement H − Φ ∈ S can be interpreted as a set of linear equality constraints on Φ as follows.
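Numerically, the objective above is just the largest absolute row sum of the closed loop impulse response coefficients. A minimal sketch of this evaluation (numpy assumed; the coefficient data are hypothetical):

```python
import numpy as np

def a_norm(phi):
    """A-norm of an impulse-response array phi with phi[i][j][k] = phi_ij(k):
    the maximum over rows i of sum_{j,k} |phi_ij(k)|."""
    phi = np.asarray(phi, dtype=float)          # shape (nz, nw, K+1)
    return np.abs(phi).sum(axis=(1, 2)).max()   # max_i sum_{j,k} |phi_ij(k)|

# one regulated output, two exogenous inputs, responses truncated at k = 2
phi = [[[1.0, -0.5, 0.25],
        [0.0, 2.0, 0.0]]]
print(a_norm(phi))  # 1 + 0.5 + 0.25 + 2 = 3.75
```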
In the good rank case, we can use Lemma 3 above and define a set of sequences ζ^{ij,z₀} corresponding to each condition as we did in the proof of Theorem 4. Then we see that H − Φ ∈ S if and only if for each such sequence:

⟨Φ, ζ^{ij,z₀}⟩ = ⟨H, ζ^{ij,z₀}⟩.

This clearly defines a finite set of linear equality constraints on Φ. The condition of Theorem 8 that K interpolate U and V can be similarly interpreted to yield a smaller but equivalent set of constraints (i.e. the conditions of Lemma 3 contain linear dependencies not present in the conditions of Definition 2 for interpolation). In the bad rank case, condition (iii) defines a similar finite set of linear equality constraints on Φ (cf. proof of Theorem 7). Condition (i) is clearly satisfied if and only if:

(T * Φ)(k) = (T * H)(k),   k = 0, 1, 2,…

where T := [−Ñ_U  D̃_U], which again defines a set of linear equality constraints on Φ but in this case an infinite number. Condition (ii) defines a similar infinite set of constraints. With these observations and the use of a standard linear programming technique for handling the objective function (see, e.g. Chvátal, 1983) we see that (OPT) is equivalent to the following linear program in the impulse response coefficients φ_ij(k) of the closed loop system and the auxiliary variables φ⁺_ij(k), φ⁻_ij(k), and λ:

(LP):
inf λ

subject to:

φ⁺_ij(k) − φ⁻_ij(k) = φ_ij(k),   φ⁺_ij(k), φ⁻_ij(k) ≥ 0,
   i = 1,…,n_z;  j = 1,…,n_w;  k = 0, 1,…

Σ_{j=1}^{n_w} Σ_{k=0}^∞ [φ⁺_ij(k) + φ⁻_ij(k)] ≤ λ,   i = 1,…,n_z
H-O~S~. For any 6, (LP~) has a finite number of variables and, in the good rank case, a finite number of constraints. In the bad rank case there remain in general an infinite number of constraints but we will see that in all cases of interest these are equivalent to a finite set so that each (LP~) that we wish to solve is a finite linear program. Of course, the problem (OPTa) always has a bounded objective function but the set Sa may be empty for a given problem (OPT) and a given 6. In this case #a is not defined. In order to address this problem, we state the following condition.
Condition 2. There exists 6" such that Ss. is non-empty or, equivalently, such that (OPT) has a feasible point for which O = H - K is a polynomial of degree 6". If (and only if) this condition is satisfied we can define a monotonically increasing integer sequence (6(i))~=0 for which, by taking 6 ( 0 ) 6 ' , the corresponding sequence of problems (OeTa(i)) will have well defined infimal norms go(0' The sequence ga(o will clearly be monotonically non-increasing and, moreover, the following theorem will establish that:
!im/At~(i ) =
k=0,1 ....
~opt'
tl w
i=1,...
E [~bff(k) + ~b~(k)l -< Z
,n~
1=1 k=O
H-OeS. Because (LP) has an infinite number of variables and (at least in the bad rank case) constraints, it cannot be solved directly using general linear programming techniques. Instead of (OPT), then, we study the following family of related finite dimensional problems which we call truncated problems, and which are indexed by the non-negative integer 6:
(OPTa):
min KeSa
IIn - gilA
=: tza
where Sa := {KE S : H - K is a polynomial of degree < b}. With this restriction on the degree of ¢ ~ = H - K , (OPTs) is equivalent to the
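For a concrete, drastically simplified instance, the sketch below solves a scalar truncated problem with scipy's linprog. A single interpolation constraint Φ(z₀) = c at a hypothetical plant zero z₀ = 0.5 stands in for the feasibility requirement H − Φ ∈ S_δ; the variable splitting φ = φ⁺ − φ⁻ is exactly the device used in (LP_δ):

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp_delta(delta, z0, c):
    """Scalar toy (LP_delta): minimize sum_k |phi(k)| over polynomials of
    degree delta subject to Phi(z0) = c, via phi = phi+ - phi-, both >= 0."""
    n = delta + 1
    powers = z0 ** np.arange(n)
    cost = np.ones(2 * n)                          # sum of phi+ and phi-
    A_eq = np.concatenate([powers, -powers])[None, :]
    return linprog(cost, A_eq=A_eq, b_eq=[c])      # bounds default to >= 0

res = solve_lp_delta(delta=4, z0=0.5, c=2.0)
print(res.fun)  # 2.0: all mass is placed at k = 0, where z0**k is largest
```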
Theorem 9. Given Condition 2, μ_δ − μ_opt can be made arbitrarily small by taking δ sufficiently large.

Proof. Given Condition 2, there exists K_δ* ∈ S_δ* and we can write for any K ∈ S:

‖H − K‖_A = ‖(H − K_δ*) + (K_δ* − K)‖_A = ‖Φ_δ* − (K − K_δ*)‖_A

where Φ_δ* is a polynomial of degree ≤ δ*. Moreover, K ∈ S if and only if (K − K_δ*) ∈ S, so that:

μ_opt := inf_{K∈S} ‖H − K‖_A = inf_{K∈S} ‖Φ_δ* − K‖_A.

Thus there is a K' = UQ'V for which ‖Φ_δ* − K'‖_A approximates μ_opt arbitrarily closely. Also, the set of polynomials is dense in ℓ_A and U and V are in A, so that K' can be approximated arbitrarily closely by approximating Q' sufficiently closely with a polynomial Q_p. Finally, since U and V can be taken to be polynomial (Dahleh and Pearson, 1987), we have that Φ_p := Φ_δ* − UQ_pV is a polynomial whose norm is arbitrarily close to μ_opt. Thus by taking δ to be the degree of Φ_p, we have the result. □

With this theorem we know that whenever Condition 2 is satisfied, a procedure for finding arbitrarily good approximate minimizers is to simply formulate and solve a sequence of problems (LP_δ) for increasingly large δ. If we choose δ too small, the problem will be infeasible and δ must be increased, but the existence of δ* ensures that a feasible problem will eventually be obtained in this way. In the following we examine conditions under which Condition 2 is satisfied and summarize the application of this solution procedure in the good rank and bad rank cases separately.
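This procedure can be sketched as a loop that raises δ until (LP_δ) becomes feasible and then until μ_δ stops improving. The two interpolation constraints below are hypothetical stand-ins for H − Φ ∈ S_δ (scipy assumed):

```python
import numpy as np
from scipy.optimize import linprog

def lp_delta(delta, zeros, values):
    """Scalar toy (LP_delta): minimize the l1 norm of a degree-delta
    polynomial Phi subject to Phi(z0) = v at each given point z0.
    Returns (feasible, optimal value)."""
    n = delta + 1
    V = np.array([[z ** k for k in range(n)] for z in zeros])
    A_eq = np.hstack([V, -V])                  # phi = phi+ - phi-
    res = linprog(np.ones(2 * n), A_eq=A_eq, b_eq=values)
    return res.status == 0, res.fun

# raise delta until feasible, then until mu_delta stops decreasing
delta, history = 0, []
while True:
    feasible, mu = lp_delta(delta, zeros=[0.5, -0.5], values=[2.0, 1.0])
    if feasible:
        history.append(mu)
        if len(history) >= 2 and abs(history[-1] - history[-2]) < 1e-9:
            break
    delta += 1
print(history)  # delta = 0 is infeasible; the mu_delta values are non-increasing
```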
The good rank case

If Assumption 2 is satisfied (no unit circle zeros of U or V), Condition 2 is clearly satisfied since by Theorem 5 we can take δ* to be the degree of H − K₀ where K₀ is a minimizer for (OPT). The following lemma establishes that even when Assumption 2 is not satisfied, Condition 2 still is.
Lemma 4. Given U and V with good rank, there exists δ* such that S_δ* is non-empty.

Proof. Recalling the Smith-McMillan form decompositions (3.1) of U and V, consider the related problem (OPT') defined by H' := L_U⁻¹ H R_V⁻¹, U' := M_U R_U, and V' := L_V M_V. Then L_U' = R_V' = I and the condition of Lemma 3 becomes, for each z₀ ∈ Z_U'V' (= Z_UV), i ∈ {1,…,n_z}, and j ∈ {1,…,n_w}:

(K'_ij)^(k)(z₀) = 0,   k = 0,…, σ_U'i(z₀) + σ_V'j(z₀) − 1.

For each i and j let Φ_ij(z) be a scalar polynomial such that K'_ij = H'_ij − Φ_ij satisfies the above for each z₀ (such a polynomial always exists). Then there exists a Q' ∈ ℓ_A such that Φ' = H' − U'Q'V' and hence K := UQ'V is a feasible point of (OPT) for which Φ = H − K = L_U Φ' R_V is a polynomial. Let δ* be the degree of Φ. Then K ∈ S_δ*. □

Recall that (LP_δ) is a finite linear program for any δ in the good rank case. Then the situation is as follows. By Lemma 4, it is always possible to formulate a sequence of feasible finite linear programs corresponding to an increasing sequence of δs. It is possible to estimate δ* in order to aid in formulating this sequence, but in practice simply increasing δ until a feasible program is obtained is usually satisfactory. By Theorem 9, the corresponding μ_δs converge to μ_opt from above. At present, we have no method which allows δ to be selected a priori such that |μ_δ − μ_opt| is less than a given ε. When no unit circle zeros are present, however, the exact minimizer corresponds to a polynomial Φ and hence can be found by solving (LP_δ) for a suitably chosen δ. This δ may be estimated using a generalization of the bound given in Dahleh and Pearson (1987) but again, in practice, simply increasing δ until the degree of the minimizing Φ stops increasing is usually satisfactory.

The bad rank case

We have seen in the good rank case that Condition 2 always holds. This is not so in the bad rank case. The following theorem characterizes for bad rank problems exactly when it is satisfied. (Here H is partitioned as K was in Theorem 6.)

Theorem 10. Given U and V with bad rank, Condition 2 is satisfied if and only if the transfer matrices T_UH and T_HV defined:

T_UH := [−Ñ_U  D̃_U] [Ĥ  H₁₂; H₂₁  H₂₂]

T_HV := [Ĥ  H₁₂; H₂₁  H₂₂] [−N_V; D_V]

are both polynomials.

Proof. (⇒): If Condition 2 is satisfied, (OPT) has a feasible point K for which Φ = H − K is a polynomial. K must satisfy conditions (i) and (ii) of Theorem 6 and hence the condition of Remark 1, so that T_UH = [−Ñ_U D̃_U]Φ and T_HV = Φ[−N_V; D_V]. Since the matrices [−Ñ_U D̃_U] and [−N_V; D_V] are polynomial, so are T_UH and T_HV.

(⇐): Recall that the matrices [−Ñ_U D̃_U] and [−N_V; D_V] are left (right) polynomial coprime factorizations of U₂Û⁻¹ (V̂⁻¹V₁₂). There exist associated right (left) polynomial coprime factorizations U₂Û⁻¹ = N_U D_U⁻¹ (V̂⁻¹V₁₂ = D̃_V⁻¹Ñ_V) and the following Bezout identities can be constructed:

[X_U  Y_U; −Ñ_U  D̃_U] [D_U  −Ỹ_U; N_U  X̃_U] = I

[D̃_V  Ñ_V; −Ỹ_V  X̃_V] [X_V  −N_V; Y_V  D_V] = I

where all the blocks are polynomial. Define:

B_U := [X_U  Y_U; −Ñ_U  D̃_U],   B_V := [X_V  −N_V; Y_V  D_V]
and note that B_U⁻¹ and B_V⁻¹ are both polynomial. Now consider a problem (OPT') defined by taking H' := B_U H B_V, U' := B_U U, and V' := V B_V. Then it is straightforward to show that U' and V' have the following forms:

U' = [Û'; 0],   V' = [V̂'  0]

where Û' and V̂' are square and full rank. Also, partitioning as usual, the following blocks of H':

[H'₂₁  H'₂₂] = T_UH B_V,   [H'₁₂; H'₂₂] = B_U T_HV

are polynomials since T_UH and T_HV are. Now H', U' and V' define a good rank problem which, by Lemma 4, has a feasible point for which Φ̂' = Ĥ' − K̂' is polynomial, and hence there exists Q ∈ ℓ_A such that Φ̂' = Ĥ' − Û'QV̂'. Then clearly:

K' := [Û'QV̂'  0; 0  0]

is a feasible point of (OPT') for which Φ' = H' − K' is given by:

Φ' = [Φ̂'  H'₁₂; H'₂₁  H'₂₂]

and is hence polynomial. Finally, then, K = B_U⁻¹ K' B_V⁻¹ = UQV is a feasible point of (OPT) for which Φ = H − UQV = B_U⁻¹ Φ' B_V⁻¹ is polynomial. □

Now suppose that (OPT) satisfies the condition of the theorem and hence Condition 2 is satisfied. Then for δ ≥ δ* the constraint equations of (LP_δ) corresponding to condition (i) of Theorem 6 are equivalent to the following finite set, where T := [−Ñ_U D̃_U] and δ_T := degree(T):

(T * Φ)(k) = (T * H)(k),   k = 0,…, δ + δ_T

and similarly for condition (ii). The set of constraint equations corresponding to condition (iii) is also finite, so that (LP_δ) is a finite linear program for each δ ≥ δ*.

The situation in the bad rank case is then as follows. Recalling the results of Section 3, we have only established the existence of a minimizing K for (OPT₁) in the bad rank case, not for (OPT). In particular, there need not exist a minimizing K for (OPT) for which the corresponding Φ is a polynomial. Hence, (OPT) cannot in general be solved exactly by the solution of (LP_δ) for any finite δ. However, when the conditions of Theorem 10 are satisfied, it is possible to formulate a sequence of feasible finite linear programs corresponding to an increasing sequence of δs. It is possible, as in the good rank case, to estimate δ* to aid in the formulation of this sequence if desired. As Theorem 9 shows, the solution of such a sequence of problems yields a corresponding sequence of μ_δs which converge from above to μ_opt. Again as in the good rank case, we have no method which allows δ to be selected a priori such that |μ_δ − μ_opt| is less than a given ε.

Finally, we remark on the case when only U or V, say V, has bad rank and recall that in this case we partition V = [V̂  V₁₂]. The conditions in Theorem 10 then reduce to requiring the matrix:
H [−N_V; D_V]
to be polynomial and, provided it is, the preceding discussion applies to the approximate solution of such problems as well.

5. CONSTRAINED PROBLEMS

Given any standard problem [say (OPT) defined by H, U, and V], two index sets I_r and I_c which partition {1,…,n_z} and which have n_r and n_c elements, respectively, where n_r ≥ 1, and a set of positive real numbers {d_i}_{i∈I_c}, we can define an associated constrained ℓ₁ optimization problem. In this section we observe that the computation of approximate minimizers for almost all such problems which have a non-empty feasible set and for which (OPT) satisfies Condition 2 can be handled using a very minor modification of the method given in the last section for solving (OPT) itself. That is, arbitrarily good approximate minimizers can be found by solving a sequence of finite linear programs.

To state the constrained problem associated with (OPT), assume without loss of generality that I_c = {1,…,n_c} and I_r = {n_c + 1,…,n_z} and define a weighting matrix W_c := diag [(d_i⁻¹)_{i∈I_c}]. Also define the following partitions:
H = [H_c; H_r],   U = [U_c; U_r],   K = [K_c; K_r]

where H_c has n_c rows, H_r has n_r rows, and so on. Then the constrained problem is:
(OPTC):   inf_{K∈S_C} ‖H_r − K_r‖_A =: μ_C

where S_C := {K ∈ ℓ_A : ∃Q ∈ ℓ_A satisfying K = UQV and ‖W_c(H_c − K_c)‖_A ≤ 1}. This problem corresponds to minimizing the maximum ℓ∞-norm of a set of regulated outputs indexed by I_r subject to the constraint that the ℓ∞-norms of the constrained outputs indexed by I_c are bounded by the corresponding d_i. Whereas the feasible set S of (OPT) is always non-empty (since always 0 ∈ S), S_C need not be
and hence μ_C may not be well defined. We can describe when S_C is non-empty in terms of the infimal norm μ_F of an associated standard problem defined by H_F := W_c H_c, U_F := W_c U_c and V_F := V. Clearly S_C ≠ ∅ if and only if μ_F ≤ 1.

In order to apply the approach of the last section to computing approximate solutions to feasible problems (OPTC), we define truncated constrained problems exactly as we did for unconstrained:

(OPTC_δ):   inf_{K∈S_C,δ} ‖H_r − K_r‖_A =: μ_C,δ

where S_C,δ := {K ∈ S_C : H − K is a polynomial of degree ≤ δ}. If (OPT) satisfies Condition 2 and also μ_F < 1, it can be shown using essentially the ideas of the proof of Theorem 9 that (OPTC_δ) has a feasible point for sufficiently large δ and moreover that the sequence of infimal norms μ_C,δ for an increasing sequence of δs has as its limit μ_C. Finally, we observe that when (OPTC_δ) has a feasible point it is equivalent to the following finite linear program:

(LPC_δ):
min λ

subject to:

φ⁺_ij(k) − φ⁻_ij(k) = φ_ij(k),   φ⁺_ij(k), φ⁻_ij(k) ≥ 0,
   i = 1,…,n_z;  j = 1,…,n_w;  k = 0,…,δ

Σ_{j=1}^{n_w} Σ_{k=0}^δ [φ⁺_ij(k) + φ⁻_ij(k)] ≤ λ,   i ∈ I_r

Σ_{j=1}^{n_w} Σ_{k=0}^δ [φ⁺_ij(k) + φ⁻_ij(k)] ≤ d_i,   i ∈ I_c

H − Φ ∈ S_δ
Here S_δ is the feasible set of (OPT_δ). It is interesting to note that while such constrained problems are known to be difficult when other norms such as the H∞-norm are considered (Ting and Poolla, 1988), they are handled extremely simply when using the A-norm. In fact, the complexity of (LPC_δ) is no greater than that of the corresponding unconstrained problem.

6. CONCLUSION
In this paper, we have extended the results of Dahleh and Pearson (1987, 1988) concerning existence of minimizers for ℓ₁-optimal control problems to a broader class of plants and provided more direct proofs. The only assumption required is to preclude open loop poles or zeros on the unit circle. A class of truncated
problems corresponding to polynomial closed loop transfer matrices and which are equivalent to finite linear programs is defined. Necessary and sufficient conditions are given for when a sequence of such problems can be solved to obtain either exact or approximate minimizers. While we have at present no method for determining in general the size of the approximation error introduced by this procedure, this problem has been addressed in a special case in (Staffans, 1990). In that paper, the scalar mixed sensitivity problem studied in Dahleh and Pearson (1988) is considered and a procedure is given for determining the approximation error as closely as desired. In addition, a minimizer for which the closed loop transfer matrix is rational (i.e. non-polynomial) is found for the same problem. We have also observed that problems incorporating norm constraints on some outputs while regulating others can be solved at least approximately using only a slight modification of the method for solving unconstrained problems. The simplicity of constrained problems is a property of the ℓ₁-norm which sets it apart from the H∞-norm and other norms frequently used for control system design.

Acknowledgement--This research was supported by NASA under Grant NAG9-208 and by the National Science Foundation under Grants ECS-8806977 and CCR-8809615.

REFERENCES

Cheng, L. and J. B. Pearson (1978). Frequency domain synthesis of multivariable linear regulators. IEEE Trans. Aut. Control, AC-23, 3-15.
Churchill, R. V. and J. W. Brown (1984). Complex Variables and Applications. McGraw-Hill, New York.
Chvátal, V. (1983). Linear Programming. Freeman, New York.
Dahleh, M. A. and J. B. Pearson (1987). ℓ1 optimal feedback controllers for MIMO discrete-time systems. IEEE Trans. Aut. Control, AC-32, 314-322.
Dahleh, M. A. and J. B. Pearson (1988). Optimal rejection of persistent disturbances, robust stability, and mixed sensitivity minimization. IEEE Trans. Aut. Control, AC-33, 722-731.
Francis, B. A. (1987). A Course in H∞ Control Theory. Springer, New York.
Luenberger, D. G. (1969). Optimization by Vector Space Methods. Wiley, Chichester, U.K.
MacFarlane, A. G. J. and N. Karcanias (1976). Poles and zeros of linear multivariable systems: a survey of the algebraic, geometric and complex variable theory. Int. J. Control, 24, 33-74.
Rudin, W. (1973). Functional Analysis. McGraw-Hill, New York.
Staffans, O. J. (1990). The mixed sensitivity minimization problem has a rational ℓ1-optimal solution. Helsinki University of Technology Institute of Mathematics Research Reports, A274.
Ting, T. and K. Poolla (1988). Upper bounds and approximate solutions for multidisk problems. IEEE Trans. Aut. Control, AC-33, 783-786.
Vidyasagar, M. (1987). Further results on the optimal rejection of persistent bounded disturbances; Part I: The discrete-time case. Preprint.