Automatica, Vol. 18, No. 2, pp. 147-154, 1982. Printed in Great Britain.
0005-1098/82/020147-08 $03.00/0. © 1982 International Federation of Automatic Control. Pergamon Press Ltd.
Computer-aided Design via Optimization: A Review*

D. Q. MAYNE,† E. POLAK‡ and A. SANGIOVANNI-VINCENTELLI§
Recent algorithms provide solutions to control-design problems, including those involving infinite dimensional constraints and parameter tuning after fabrication.

Key words--Computer-aided design; optimization; infinite dimensional constraint; outer approximations; nondifferentiable optimization; control system design.

Abstract--Many design problems, including control design problems, involve infinite dimensional constraints of the form φ(z, α) ≤ 0 for all α ∈ A, where α denotes time or frequency or a parameter vector. In other design problems, tuning or trimming of certain parameters, after manufacture of the system, is permitted; the corresponding constraint is that for each α in A there exists a value τ (of the tuning parameter) in a permissible set T such that φ(z, α, τ) ≤ 0. Recent algorithms for solving design problems having such constraints are summarized.
*Received 5 November 1979; revised 24 November 1980; revised 6 July 1981. An early version of this paper was presented in two parts at the IFAC Workshop on Control Applications of Nonlinear Programming, held in Denver, Colorado, U.S.A. during June 1979. The published proceedings of this IFAC meeting may be ordered from Pergamon Press Ltd, Headington Hill Hall, Oxford OX3 0BW, U.K. This paper was recommended for publication in revised form by associate editor D. Jacobson.
†Department of Computing and Control, Imperial College, London SW7 2BZ, U.K.
‡Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, U.S.A.
§Electronics Research Laboratory, University of California, Berkeley, CA 94720, U.S.A.

1. INTRODUCTION
Infinite dimensional constraints, of the form φ(z, α) ≤ 0 for all α ∈ A, A a subset of R^n, arise in surprisingly many design problems. Some examples follow.

(i) Design of envelope-constrained filters. The problem here is the choice of the weighting function w of a digital filter, to process a given input pulse s corrupted by noise, such that the output error is minimized subject to the constraint that the noiseless output pulse ψ = w * s satisfies an envelope constraint (ψ(t) ∈ [a(t), b(t)] for all t ∈ [0, t₁]). The problem is relevant to pulse compression in radar systems, waveform equalization, channel equalization for communication, and deconvolution of seismic and medical ultrasonic data.

(ii) Design of controllers (Zakian and Al-Naib, 1973). The parameters z of a controller are to be chosen so that, inter alia, the resultant closed-loop system satisfies certain constraints. These constraints often include hard constraints on controls and states; a typical constraint is that the response y(z, t) remain within prescribed bounds for all t in a specified interval.

(iii) Design of earthquake resistant structures. The problem here is to design structures, such as steel-framed multistorey buildings, that can resist earthquakes. The design considerations include the constraint that the displacements of structural elements, in response to a specified input, should be limited in magnitude at all times in a certain interval.

An additional range of problems occurs when the parameter values of the actual system, structure or device differ from the nominal values employed in the design. This difference may occur because of production tolerances in manufacture (e.g. in structure and circuit design) or because of lack of precise knowledge of some parameters in a system (e.g. identification error). A satisfactory design may require satisfaction of certain constraints not only by the nominal design but also by all possible realizations of the system as the appropriate system parameters range over the tolerance set. Examples include:

(iv) Optimum design of chemical plant with uncertain parameters. Certain design constraints must be satisfied for all values of the uncertain parameters in a given set.

(v) Design of robust controllers. The controller must be such that the design constraints are satisfied for all values of certain plant parameters lying in a specified set.
(vi) Circuit design. Design constraints must be met not only by the nominal design ẑ ∈ R^p but for all values of the circuit parameter z in the set ẑ + 𝒯, where 𝒯 is the tolerance set; 𝒯 is commonly a hypercube.

The latter set of constraints (max {φ(z, α) | α ∈ A} ≤ 0) may be very difficult to satisfy and may therefore require, if a 100% yield in manufacture is required, very tight tolerances (i.e. a small 𝒯), making manufacture prohibitively expensive. To avoid this difficulty the facility for altering certain parameters ('tuning' controllers, 'trimming' circuit components) after manufacture is often provided. If tuning or trimming is effected by a parameter τ ranging over a compact set T, then the constraint has the form: for each α ∈ A there exists a τ ∈ T such that φ(z, α, τ) ≤ 0, or, equivalently,

max_{α∈A} min_{τ∈T} φ(z, α, τ) ≤ 0.
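As a purely illustrative aid (not part of the original paper), the sketch below checks this max-min tuning constraint after A and T have been replaced by finite grids; the constraint function, the grids and the design point are hypothetical placeholders chosen only to make the structure of the test concrete.

```python
import numpy as np

def worst_case_tuned_value(phi, z, alpha_grid, tau_grid):
    """Brute-force surrogate for max over alpha in A of min over tau in T of
    phi(z, alpha, tau), with A and T replaced by finite grids."""
    return max(min(phi(z, a, t) for t in tau_grid) for a in alpha_grid)

# Hypothetical data, purely for illustration: a tunable first-order response
# that must stay below 1.1 for every alpha in [0, 5].
phi = lambda z, a, t: abs(z * np.exp(-t * a)) - 1.1
alpha_grid = np.linspace(0.0, 5.0, 51)    # stands in for A (e.g. time or frequency)
tau_grid = np.linspace(0.1, 2.0, 20)      # stands in for the tuning set T

print("tuned design feasible on the grids:",
      worst_case_tuned_value(phi, 1.0, alpha_grid, tau_grid) <= 0.0)
```

The grids only approximate the sets A and T; the conceptual constraint requires the exact max-min over the full sets.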
Taking into account conventional (finite dimensional) constraints, many design problems may be expressed either as
A. determine a z ∈ F; or
B. minimize {f(z) | z ∈ F},
where

F ≜ {z ∈ R^p | g(z) ≤ 0, ψ_A(z) ≤ 0}   (1)

where g: R^p → R^m, and ψ_A: R^p → R is defined by

ψ_A(z) ≜ max_{α∈A} max_{j∈r} φ^j(z, α)   (2)

where r denotes the set {1, 2, …, r}. If post-manufacture tuning or trimming is permitted, the design problems are
C. determine a z ∈ F; or
D. minimize {f(z) | z ∈ F},
where F is now defined by

F ≜ {z ∈ R^p | g(z) ≤ 0, ψ_{A,T}(z) ≤ 0}   (3)

and ψ_{A,T}: R^p → R is defined by

ψ_{A,T}(z) ≜ max_{α∈A} min_{τ∈T} max_{j∈r} φ^j(z, α, τ).   (4)
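The functionals (2) and (4) are the computational core of all four problem classes. The fragment below (illustrative only, with hypothetical component functions) shows how a grid surrogate of (2) would be evaluated; the tuned version (4) is obtained by inserting the inner minimization over T exactly as in the previous sketch.

```python
def psi_A(phis, z, alpha_grid):
    """Grid surrogate for (2): the maximum over alpha in A and over the components
    phi^1, ..., phi^r of phi^j(z, alpha); the true functional needs the global
    maximum over the whole of A."""
    return max(max(phi_j(z, a) for phi_j in phis) for a in alpha_grid)

# Illustrative use with two hypothetical constraint components.
phis = [lambda z, a: z[0] * a - 1.0,          # phi^1
        lambda z, a: z[1] - a ** 2 - 0.5]     # phi^2
alpha_grid = [0.1 * n for n in range(11)]     # crude sample of A = [0, 1]
print(psi_A(phis, [0.5, 0.2], alpha_grid) <= 0.0)   # grid feasibility test for problem A
```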
These problems are obviously very complex: merely to test feasibility requires a global solution of a maximization problem (Becker, 1979) for problems A and B and of a max-min problem (Clarke, 1975) for problems C and D. Conditions of optimality for B have been derived, and conceptual algorithms developed, by, for example, Demyanov (1966) and Oettli (1976). Within the general area of mathematical programming, problems of the type discussed above are referred to as semi-infinite programming problems (Gustafson and Kortanek, 1973; Gehner, 1974). In 1978 a workshop on semi-infinite programming was held in Bad Honnef, where a number of interesting results were presented (Hettich, 1979). Our own work is distinguished by three specific features. The first is that whenever a problem is decomposed by outer approximations, we use the methods of Gonzaga and Polak (1979), which offer considerable advantages in constraint dropping over earlier schemes, such as those due to Eaves and Zangwill (1971). The second distinguishing feature is that we go to great lengths to exploit the structure of the engineering design problems. For example, in Polak and Mayne (1976) and in Gonzaga, Polak and Trahan (1979) we obtain considerable simplifications in the algorithms by making use of the observation that both the transient responses and the frequency responses (in an appropriate framework) of a dynamical system have at most a finite number of local maxima. Finally, unlike a great deal of the literature, our algorithms are invariably implementable, and a number of them have been tested on design problems. The purpose of this paper is to present a perspective on the algorithms that we have developed since 1976 for optimization-based computer-aided design of engineering systems.
2. ALGORITHMS FOR DESIGN PROBLEMS WITH INFINITE DIMENSIONAL CONSTRAINTS
The master algorithms for this type of problem [see (i)-(vi) above] may be divided into four classes:
1. solving A using implementable 'feasible directions' algorithms;
2. solving A using implementable outer approximations algorithms;
3. solving B using implementable 'feasible directions' algorithms;
4. solving B using implementable outer approximations algorithms.
The essential features of the master algorithms are retained if we ignore the conventional constraints and restrict the number of infinite dimensional constraints to one, so that F
is defined by

F_A ≜ {z ∈ R^p | ψ_A(z) ≤ 0}   (5)

where, now, ψ_A: R^p → R is defined by

ψ_A(z) ≜ max_{α∈A} φ(z, α).   (6)

2.1. Feasible directions type algorithms for problems A and B
We assume that φ: R^p × A → R is continuously differentiable, that A is a compact subset of R^n and that, for each z, φ(z, ·) has only a finite number of local maxima in A. If z is not feasible [ψ_A(z) > 0], the 'feasible directions' algorithms for problem A determine a search direction which is a descent direction for ψ_A(z). For any z in R^p let the 'ε-most-active constraint' set A_ε(z) ⊂ A be defined by

A_ε(z) ≜ {α ∈ A | φ(z, α) ≥ ψ_A(z) − ε}.   (7)

A solution h_ε(z) of

θ_ε(z) = min_{h∈S} max {⟨∇_zφ(z, α), h⟩ | α ∈ A_ε(z)}   (8)

(where S is the unit hypercube in R^p) is a descent direction for ψ_A(z) if θ_ε(z) < 0; however, solving (8) is prohibitively difficult. For an implementable algorithm A_ε(z) must be replaced (approximated) by a finite set. One such approximation was employed in Polak and Mayne (1976); a better approximation, Â_ε(z), proposed in Gonzaga, Polak and Trahan (1979), is

Â_ε(z) ≜ {α ∈ A_ε(z) | α is a local maximizer of φ(z, ·)}.   (9)

The search direction ĥ_ε(z) is any solution of the finite-dimensional linear program

θ̂_ε(z) = min_{h∈S} max_{α∈Â_ε(z)} ⟨∇_zφ(z, α), h⟩.   (10)

To ensure the existence of a descent direction for ψ_A(z) we assume that 0 ∉ co {∇_zφ(z, α), α ∈ A_0(z)}, i.e. these gradients are positively linearly independent.

Algorithm 1 (to compute a z ∈ F)
Data: z_0 ∈ R^p, ε_0 ∈ (0, ∞), β ∈ (0, 1).
Step 0: Set i = 0, set ε = ε_0.
Step 1: Compute Â_ε(z_i), θ̂_ε(z_i), ĥ_ε(z_i).
Step 2: If θ̂_ε(z_i) > −ε, set ε = ε/2 and go to Step 1.
Step 3: If z_i ∈ F, stop. Else compute the largest λ_i ∈ {1, β, β², …} such that
ψ_A[z_i + λ_i ĥ_ε(z_i)] − ψ_A(z_i) ≤ −λ_i ε/2.
Step 4: Set z_{i+1} = z_i + λ_i ĥ_ε(z_i), set i = i + 1 and go to Step 1. □
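To make the structure of Algorithm 1 concrete, here is a minimal Python sketch under simplifying assumptions that are not in the paper: A is replaced once and for all by a fixed grid (the implementable version refines the discretization adaptively), the set (9) is approximated by the grid points within ε of the maximum, and the linear program (10) is solved in epigraph form with scipy.optimize.linprog. All function and variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def eps_active_set(phi, z, grid, eps):
    """Grid surrogate for (7)/(9): sample points of A within eps of the maximum of
    phi(z, .); the paper uses the local maximizers of phi(z, .) instead."""
    vals = np.array([phi(z, a) for a in grid])
    psi = vals.max()                                   # grid value of psi_A(z), cf. (6)
    return grid[vals >= psi - eps], psi

def direction_lp(grads):
    """Epigraph form of (10): minimise sigma subject to <grad_z phi(z, a), h> <= sigma
    for each retained a, with h confined to the unit hypercube S."""
    n, p = grads.shape
    c = np.zeros(p + 1); c[-1] = 1.0
    A_ub = np.hstack([grads, -np.ones((n, 1))])
    b_ub = np.zeros(n)
    bounds = [(-1.0, 1.0)] * p + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p], res.x[-1]                        # h_eps(z) and theta_eps(z)

def algorithm1(phi, grad_phi, z0, grid, eps0=0.5, beta=0.5, max_iter=200):
    """Sketch of Algorithm 1: drive the (grid version of) psi_A below zero."""
    z, eps = np.asarray(z0, dtype=float), eps0
    for _ in range(max_iter):
        active, psi = eps_active_set(phi, z, grid, eps)
        if psi <= 0.0:                                 # Step 3: z is feasible, stop
            return z
        h, theta = direction_lp(np.array([grad_phi(z, a) for a in active]))
        if theta > -eps:                               # Step 2: refine eps
            eps *= 0.5
            continue
        lam = 1.0                                      # Step 3: step-size rule
        for _ in range(30):
            if eps_active_set(phi, z + lam * h, grid, eps)[1] - psi <= -lam * eps / 2:
                break
            lam *= beta
        z = z + lam * h                                # Step 4
    return z
```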
It can be shown that this algorithm finds a feasible point in a finite number of iterations. The implementable version (Mayne and Polak, 1976; Gonzaga and Polak, 1979) approximates A by a discretization A_δ (e.g. if A = [0, 1], A_δ = {0, δ, 2δ, …, 1}) and refines this discretization automatically via an adaptive law which ensures convergence; thus Â_ε(z_i) is replaced by an easily determined approximation.

For problem B [min {f(z) | z ∈ F}] we make the extra assumption that f is continuously differentiable and compute the search direction ĥ_ε(z) to be the solution of

θ̂_ε(z) ≜ min_{h∈S} max {⟨∇f(z), h⟩ − γψ_A(z)_+; ⟨∇_zφ(z, α), h⟩, α ∈ Â_ε(z)}   (11)

where γ is a positive constant and ψ_A(z)_+ ≜ max {0, ψ_A(z)}. We note that θ̂_ε(z) ≤ 0 and that if θ̂_ε(z) < 0 then ⟨∇f(z), ĥ_ε(z)⟩ < γψ_A(z)_+ and ⟨∇_zφ(z, α), ĥ_ε(z)⟩ < 0 for all α ∈ Â_ε(z). Hence ĥ_ε(z) is a descent direction for φ(z, α) at all α ∈ Â_ε(z) and also for f(z) if z ∈ F; if z ∉ F, ĥ_ε(z) permits an increase in f. This permitted increase decreases as z approaches F [ψ_A(z)_+ → 0]. Algorithm 2, a 'feasible directions' algorithm for problem B, is similar to Algorithm 1, except that ĥ_ε(z) is computed from (11) and Step 3 is replaced by

Step 3′: If ψ_A(z_i) > 0, compute the largest λ_i ∈ {1, β, β², …} such that
ψ_A[z_i + λ_i ĥ_ε(z_i)] − ψ_A(z_i) ≤ −λ_i ε/2.
If ψ_A(z_i) ≤ 0, compute the largest λ_i ∈ {1, β, β², …} such that
f[z_i + λ_i ĥ_ε(z_i)] − f(z_i) ≤ −λ_i ε/2 and ψ_A[z_i + λ_i ĥ_ε(z_i)] ≤ 0. □
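For problem B the only change to the sketch above is the direction-finding subproblem. A hedged epigraph formulation of (11), again solved with scipy.optimize.linprog, might read as follows; γ, the gradients and ψ_A(z)_+ are supplied by the caller, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def direction_lp_problem_B(grad_f, grads_phi, psi_plus, gamma=1.0):
    """Epigraph form of (11): minimise sigma over h in the unit hypercube S subject to
       <grad f(z), h> - gamma * psi_A(z)_+ <= sigma   and
       <grad_z phi(z, a), h>               <= sigma   for each retained a."""
    p = grad_f.shape[0]
    rows = np.vstack([grad_f, grads_phi])
    shifts = np.concatenate([[gamma * psi_plus], np.zeros(grads_phi.shape[0])])
    c = np.zeros(p + 1); c[-1] = 1.0
    A_ub = np.hstack([rows, -np.ones((rows.shape[0], 1))])
    res = linprog(c, A_ub=A_ub, b_ub=shifts,
                  bounds=[(-1.0, 1.0)] * p + [(None, None)], method="highs")
    return res.x[:p], res.x[-1]            # search direction and the value of (11)
```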
In the implementable version of the algorithm, A is again replaced by a discretization which is automatically refined to ensure convergence (Polak and Mayne, 1976; Gonzaga, Polak and Trahan, 1979).
2.2. Outer approximation algorithms for problems A and B
We assume again that φ and f are continuously differentiable and that A is a compact subset of R^n. For any subset A′ of A let F_{A′} be defined by F_{A′} ≜ {z ∈ R^p | φ(z, α) ≤ 0 for all α ∈ A′}. If A′ ⊂ A, then (i) F_A ⊂ F_{A′} (F_{A′} is an outer approximation of F_A) and (ii) min {f(z) | z ∈ F_{A′}} ≤ min {f(z) | z ∈ F_A}. The outer approximation algorithms (Eaves and Zangwill, 1971; Blankenship and Falk, 1974) employ a sequence of outer approximations {F_{A_i}}, as in the following conceptual algorithm for problem A.

Algorithm 3 (to determine a z ∈ F)
Data: A_0 (a discrete subset of A).
Step 0: Set i = 0.
Step 1: Compute a z_i in F_{A_i}.
Step 2: Compute ψ_A(z_i) and an α_i ∈ A such that φ(z_i, α_i) = ψ_A(z_i). If ψ_A(z_i) ≤ 0, stop.
Step 3: Set A_{i+1} = A_i ∪ {α_i}. Set i = i + 1 and go to Step 1. □

It can be shown (Eaves and Zangwill, 1971; Blankenship and Falk, 1974) that any accumulation point of an infinite sequence generated by Algorithm 3 is feasible. Step 1 can be achieved in a finite number of iterations (Mayne and colleagues, 1979a, b; Polak and Mayne, 1978a), but Step 2 involves an infinite process. Moreover, the cardinality of A_i tends to infinity with i. To cope with the latter difficulty a method for dropping constraints is required. Several schemes for achieving this are proposed in Mayne and colleagues (1979a, b) and Gonzaga and Polak (1979). The next algorithm incorporates a scheme (see Step 3) which has proved successful (Gonzaga and Polak, 1979).

Algorithm 4 (to determine a z ∈ F)
Data: A_0 (a discrete subset of A), δ ∈ (0, 1), k ≥ 1.
Step 0: Set i = 0.
Step 1: Compute a z_i in F_{A_i}.
Step 2: Compute ψ_A(z_i) and an α_i ∈ A such that φ(z_i, α_i) = ψ_A(z_i). If ψ_A(z_i) ≤ 0, stop.
Step 3: Set A_{i+1} = {α_j | φ(z_i, α_j) ≥ k(δ^j − δ^i), j = 0, 1, 2, …, i}.   (12)
Set i = i + 1 and go to Step 1. □

It follows from Gonzaga and Polak (1979) that any accumulation point ẑ of an infinite sequence {z_i} generated by Algorithm 4 is feasible. Note that, for any j, k(δ^j − δ^i) = 0 when i = j and k(δ^j − δ^i) → kδ^j as i → ∞; hence the test in Step 3 increases in difficulty with i, so that α_j will probably be dropped from A_i for all i sufficiently large, thus controlling the growth of A_i.
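The following is a minimal sketch of Algorithm 4 under assumptions that go beyond the paper: the global maximization of Step 2 is done by brute force over a fixed grid, and Step 1 is delegated to a user-supplied routine find_feasible_point (for example one of the inequality-solving subalgorithms mentioned in Section 2.3 below). Every name here is illustrative.

```python
import numpy as np

def algorithm4(phi, find_feasible_point, alpha_grid, delta=0.5, k=1.0, max_iter=100):
    """Sketch of Algorithm 4: outer approximation with the constraint-dropping test (12)."""
    history = []                 # pairs (j, alpha_j) generated so far
    A_i = []                     # current discrete approximation of A
    z = None
    for i in range(max_iter):
        z = find_feasible_point(phi, A_i)                  # Step 1: a point of F_{A_i}
        vals = np.array([phi(z, a) for a in alpha_grid])
        psi, a_i = vals.max(), alpha_grid[vals.argmax()]   # Step 2, by brute force on the grid
        if psi <= 0.0:
            return z                                       # feasible for the grid version of F
        history.append((i, a_i))
        # Step 3: retain alpha_j only while phi(z_i, alpha_j) >= k (delta^j - delta^i)
        A_i = [a_j for (j, a_j) in history
               if phi(z, a_j) >= k * (delta ** j - delta ** i)]
    return z
```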
The implementable version of this algorithm utilizes an approximate, but progressively more precise, computation of α_i [recall that α_i solves max {φ(z_i, α) | α ∈ A}]. A conceptual algorithm (Algorithm 5) for solving problem B is obtained (Eaves and Zangwill, 1971; Blankenship and Falk, 1974) if Step 1 in Algorithm 4 is replaced by

Step 1′: Compute a z_i to solve P_{A_i}: min {f(z) | z ∈ F_{A_i}}.

Clearly Step 1′ must be replaced by an approximate solution of P_{A_i} in an implementable algorithm. This requires an optimality function to gauge the accuracy of the solution of P_{A_i}. A suitable function is θ_{A′}, defined by

θ_{A′}(z) ≜ min_{h∈S} max {⟨∇f(z), h⟩; φ(z, α) + ⟨∇_zφ(z, α), h⟩ − ψ_{A′}(z)_+, α ∈ A′}.   (13)

Under weak conditions (continuous differentiability of f and φ and positive linear independence of the most active constraints) it can be shown that
(a) θ_{A′}(z) ≤ 0 for all z ∈ F_{A′};
(b) θ_{A′}(z) ≤ 0 for all z ∈ R^p;   (14)
(c) θ_{A′}(z) = 0 if and only if z ∈ F_{A′} and satisfies the F. John optimality condition for the problem P_{A′}: min {f(z) | z ∈ F_{A′}}.
The conceptual Step 1′ in Algorithm 5 can now be replaced, yielding the following:
Algorithm 6 (to solve min {f(z) | z ∈ F})
Data: A_0 (a discrete subset of A), γ, δ ∈ (0, 1), k ≥ 1.
Step 0: Set i = 0.
Step 1: Solve P_{A_i} approximately to obtain a z_i
such that
θ_{A_i}(z_i) ≥ −γ.
Step 2: Compute ψ_A(z_i) and an α_i ∈ A satisfying
ψ_A(z_i) = φ(z_i, α_i).
Step 3: Set A_{i+1} = {α_j | φ(z_i, α_j) ≥ k(δ^j − δ^i), j = 0, 1, 2, …, i}.   (15)
Set i = i + 1 and go to Step 1. □

It can be shown (Gonzaga and Polak, 1979) that any accumulation point ẑ of an infinite sequence {z_i} generated by Algorithm 6 is feasible and satisfies θ_A(ẑ) = 0, a necessary condition of optimality for P_A. As before, the implementable version of this algorithm solves max {φ(z_i, α) | α ∈ A} (in Step 2) approximately but with increasing precision as i increases.

2.3. Subalgorithms for the master algorithms
The following subalgorithms are required by the master algorithms described above.
(i) Standard linear programs (e.g. in Step 1 of Algorithms 1 and 2).
(ii) Algorithms for solving a finite number of inequalities in a finite number of iterations (e.g. in Step 1 of Algorithms 3 and 4). Two new algorithms (Mayne and colleagues, 1979a; Polak and Mayne, 1978a) have been developed for this purpose. These algorithms combine the quadratic rate of convergence of Newton's method with the robustness and finite convergence of first order methods, and have proved particularly successful within the outer approximation master algorithms since they generate a point in the interior of the (current) constraint set.
(iii) Algorithms for constrained optimization (e.g. in Step 1 of Algorithms 5 and 6). Two new algorithms (Mayne and Polak, 1978; Polak and Mayne, 1978b), which are globally stabilized versions of Newton's method [using, respectively, an exact penalty function or hybridization with a phase I-phase II method of feasible directions (Polak and colleagues, 1979) to achieve stabilization], have been developed.
Algorithms 2 and 3 employ (see Step 1) a special phase I-phase II method of feasible directions (Polak and Mayne, 1976; Gonzaga and colleagues, 1978; Polak and colleagues, 1979).

3. ALGORITHMS FOR DESIGN PROBLEMS WITH INFINITE DIMENSIONAL CONSTRAINTS AND TUNING
Again, for exposition, we consider the simplest case of one infinite dimensional constraint with tuning, so that the feasible set F is defined by

F = {z ∈ R^p | ψ_{A,T}(z) ≤ 0}   (16)

where, now, ψ_{A,T}: R^p → R is defined by

ψ_{A,T}(z) ≜ max_{α∈A} min_{τ∈T} φ(z, α, τ).   (17)
It is assumed that f and φ are continuously differentiable, that A is a compact subset of R^n and T is a compact subset of R^q. We will consider only problem D: min {f(z) | z ∈ F}.

3.1. A generalized gradient algorithm
Since F is defined by (16) and (17), it is clear that any implementable algorithm will have to approximate the (infinite) set A by a sequence of discrete approximations {A_i}. The next algorithm (Algorithm 7) is a conceptual algorithm, based on outer approximations, for solving min {f(z) | z ∈ F}.

Algorithm 7 (to solve P_{A,T}: min {f(z) | ψ_{A,T}(z) ≤ 0})
Data: A_0 (a discrete subset of A), γ, δ ∈ (0, 1), k ≥ 1.
Step 0: Set i = 0.
Step 1: Solve P_{A_i,T}: min {f(z) | ψ_{A_i,T}(z) ≤ 0} for z_i.
Step 2: Compute ψ_{A,T}(z_i) and an α_i ∈ A satisfying φ(z_i, α_i, τ) = ψ_{A,T}(z_i) for some τ ∈ T.
Step 3: Set A_{i+1} = {α_j | min_{τ∈T} φ(z_i, α_j, τ) ≥ k(δ^j − δ^i), j = 0, 1, 2, …, i}.   (18)
Set i = i + 1 and go to Step 1. □

This algorithm can be seen to involve a small, and obvious, modification, in Step 2, of (the conceptual) Algorithm 5, and reduces P_{A,T} to an infinite sequence of problems {P_{A_i,T}}, where A_i is a discrete set. However, solving P_{A_i,T} is not trivial since the function ψ_i ≜ ψ_{A_i,T} is, in general, not differentiable, and may even fail to have directional derivatives. However, ψ_i is locally Lipschitz and hence has a generalized gradient (Clarke, 1975) ∂ψ_i(z) at z [∂ψ(z) is defined to be the convex hull of the set of all limits of the form lim ∇ψ(z + v_j), where v_j → 0 as j → ∞ in such a way that ∇ψ(z + v_j) is well defined; thus the generalized gradient ∂ψ of ψ ≜ max {h¹, h²}, where h¹ and h² are continuously differentiable, satisfies ∂ψ(z) = {∇h¹(z)} if h¹(z) > h²(z), ∂ψ(z) = {∇h²(z)} if h²(z) > h¹(z), and ∂ψ(z) = co {∇h¹(z), ∇h²(z)} if h¹(z) = h²(z)]. Clearly ∂ψ_i is a point-to-set map, mapping R^p into 2^{R^p}. It can be
shown (Clarke, 1975), for all z in R^p, that ∂ψ(z) is a well-defined non-empty subset of R^p and that the map ∂ψ is bounded and upper semicontinuous on any open bounded subset of R^p. A necessary condition (Clarke, 1975) for ẑ to be a solution of min {ψ(z)} is that 0 ∈ ∂ψ(ẑ). The steepest descent direction for ψ at z is −Nr {∂ψ(z)}, where

Nr {∂ψ(z)} ≜ arg min {‖h‖ | h ∈ ∂ψ(z)}.   (19)

However, this direction cannot be used in an algorithm to reduce ψ, because discontinuities in ∂ψ may cause jamming [e.g. consider an algorithm to find a z satisfying ψ(z) ≤ 0 where ψ ≜ max {h¹, h²}]. As in Algorithm 1 [see equation (8)] some local averaging is required. For this purpose a smeared generalized gradient (Goldstein, 1977; Polak and Sangiovanni-Vincentelli, 1979) ∂_εψ: R^p → 2^{R^p}, defined as follows:

∂_εψ(z) ≜ co ∪_{y ∈ B(z,ε)} ∂ψ(y)   (20)

is employed [B(z, ε) ≜ {y | ‖y − z‖ ≤ ε}]. This smeared generalized gradient can be employed to solve P_{A_i,T} in an extension of a feasible directions type algorithm. For any ε > 0 we define the set M_ε(z) by

M_ε(z) = {∇f(z)} if ψ(z) < −ε;  M_ε(z) = co ({∇f(z)} ∪ ∂_εψ(z)) if ψ(z) ≥ −ε   (21)

(recall that ψ ≜ ψ_{A_i,T}). The optimality function θ_ε: R^p → R for P_{A_i,T} is defined by

θ_ε(z) ≜ −min {‖h‖² | h ∈ M_ε(z)}   (22)

and the descent direction h_ε(z) by

h_ε(z) ≜ −Nr {M_ε(z)}.   (23)
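Both (19) and (23) call for the nearest point to the origin in a convex set. Under the assumption, made only for this illustration, that the set is already approximated by finitely many vectors g_1, …, g_N (as in the implementable version discussed below), this is a small quadratic program over the unit simplex; a sketch using scipy.optimize.minimize follows, with all names hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def nearest_point_in_hull(G):
    """Nr{.} of (19)/(23): the minimum-norm element of co{g_1, ..., g_N}, the rows of G
    being a finite approximation of the (smeared) generalized gradient or of M_eps(z).
    Posed as: minimise ||G^T lambda||^2 over the unit simplex."""
    N = G.shape[0]
    Q = G @ G.T                                            # Gram matrix of the sample vectors
    res = minimize(lambda lam: lam @ Q @ lam,
                   np.full(N, 1.0 / N),
                   jac=lambda lam: 2.0 * Q @ lam,
                   method="SLSQP",
                   bounds=[(0.0, 1.0)] * N,
                   constraints=({"type": "eq", "fun": lambda lam: lam.sum() - 1.0},))
    return res.x @ G                                       # nearest point; its negative is h_eps(z)

# e.g. when psi(z) >= -eps:  h = -nearest_point_in_hull(np.vstack([grad_f, sampled_gradients]))
```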
The algorithm for solving P_{A_i,T} is

Algorithm 8 (to solve P_{A_i,T}) (Polak and Sangiovanni-Vincentelli, 1979)
Data: z_0 ∈ F_{A_i} ≜ {z | ψ(z) ≤ 0}, ε_0 > 0, β ∈ (0, 1).
Step 0: Set j = 0.
Step 1: Set ε = ε_0.
Step 2: Compute h_ε(z_j) and θ_ε(z_j).
Step 3: If θ_ε(z_j) ≥ −ε, set ε = ε/2 and go to Step 2.
Step 4: If θ_0(z_j) = 0, stop. Else compute the largest λ_j ∈ {1, β, β², …} such that
f[z_j + λ_j h_ε(z_j)] − f(z_j) ≤ −λ_j ε/2 and ψ[z_j + λ_j h_ε(z_j)] ≤ 0.
Step 5: Set z_{j+1} = z_j + λ_j h_ε(z_j), set j = j + 1 and go to Step 1. □

This algorithm can also be used to find a z_0 ∈ F_{A_i}, or a combined phase I-phase II algorithm can be constructed, on the lines of Polak and colleagues (1979), so that z_0 may be any point in R^p. The resultant algorithm may then be employed in Algorithm 7. If Step 1 of Algorithm 7 is replaced by

Step 1′: Solve P_{A_i,T} approximately to obtain a z_i such that θ_0(z_i) ≥ −γ/i

then a more practical algorithm is obtained; this step may be implemented using a finite number of iterations of Algorithm 8. However, the algorithm is still not implementable because ∂_εψ(z_j) is an infinite set which requires an infinite process to compute. Hence the final, implementable algorithm described by Polak and Sangiovanni-Vincentelli (1979) makes use of the fact that ψ ≜ ψ_{A_i,T} is semismooth to approximate ∂ψ by a finite number of vectors; the precision of the approximation is adaptively increased in such a way that convergence is ensured; it also employs an approximate, but progressively more accurate, estimation of the α_i required in Step 2 of Algorithm 7.

3.2. A transformation approach
The algorithm described in Section 3.1 suffers two disadvantages: the first, common to most nondifferentiable optimization procedures, is that the bisection procedure used to obtain an approximation to the generalized gradient is computationally very expensive; the second is that it is limited to the case when there is only one constraint of the form max_{α∈A} min_{τ∈T} φ(z, α, τ) ≤ 0 (i.e. r = 1). The latter restriction arises from the requirement that ψ_{A_i,T} be semismooth. Very recently a method (Polak, 1979) of avoiding these difficulties has been found. As in Section 3.1 a master algorithm (e.g. Algorithm 7 with Step 1 replaced, as before, by Step 1′) is employed; the difference lies in the subalgorithm used to solve P_{A_i,T}. Instead of solving P_{A_i,T} directly, the problem is transformed into an
equivalent problem soluble by conventional optimization algorithms. Recall that P_i ≜ P_{A_i,T} is defined by

P_i: min {f(z) | ψ_{A_i,T}(z) ≤ 0}   (24)

where ψ_{A_i,T} is defined by

ψ_{A_i,T}(z) ≜ max_{α∈A_i} min_{τ∈T} φ(z, α, τ).   (25)

Suppose that A_i = {α_1, α_2, …, α_J}. Then (24) may be written as

P_i: min {f(z) | min_{τ∈T} φ(z, α_j, τ) ≤ 0, j = 1, …, J}.   (26)

Solving P_i requires the determination of a (ẑ, τ̂_1, τ̂_2, …, τ̂_J) ∈ F̃ ≜ R^p × T × T × ⋯ × T such that

φ(ẑ, α_j, τ̂_j) ≤ 0 for all j = 1, …, J   (27)

and f(ẑ) ≤ f(z̃) for any (z̃, τ_1, τ_2, …, τ_J) ∈ F̃ satisfying (27). It is therefore plausible (and easily proven) that P_i is equivalent to

P̃_i: min_{(z, τ_1, …, τ_J)} {f(z) | φ(z, α_j, τ_j) ≤ 0 for all j = 1, …, J}.   (28)
But (28) is a conventional constrained optimization problem. The final algorithm is therefore similar to the implementable version of Algorithm 7, but with Algorithm 8 (the subalgorithm required in Step 1′) replaced by a conventional algorithm for solving P̃_i. The algorithm is easily extended to deal with the case when ψ_{A,T} is defined by (4) (r > 1) and z is also subject to conventional and infinite dimensional constraints.
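A sketch of this transformation for a discrete A_i = {α_1, …, α_J}: the augmented variable (z, τ_1, …, τ_J) is formed and (28) is handed to a standard constrained optimizer. The use of scipy's SLSQP and the description of T as a box are assumptions introduced here for illustration and are not part of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def solve_transformed(f, phi, z0, alphas, tau_box, tau0):
    """Sketch of (28): minimise f(z) subject to phi(z, alpha_j, tau_j) <= 0 for each
    alpha_j in A_i, over the augmented variable x = (z, tau_1, ..., tau_J)."""
    p, J, m = len(z0), len(alphas), len(tau0)
    split = lambda x: (x[:p], x[p:].reshape(J, m))

    def constraint_vals(x):
        z, taus = split(x)
        return np.array([phi(z, a, taus[j]) for j, a in enumerate(alphas)])

    x0 = np.concatenate([np.asarray(z0, float)] + [np.asarray(tau0, float)] * J)
    bounds = [(None, None)] * p + list(tau_box) * J        # each tau_j confined to the box T
    cons = ({"type": "ineq", "fun": lambda x: -constraint_vals(x)},)   # SLSQP wants c(x) >= 0
    res = minimize(lambda x: f(split(x)[0]), x0, method="SLSQP",
                   bounds=bounds, constraints=cons)
    return split(res.x)[0]                                 # the design z; the tunings are by-products
```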
4. CONCLUSIONS
The algorithms described above may be employed in a variety of control design problems, in particular problems (ii), (iv) and (v) of Section 1. The only point requiring further attention is the calculation of derivatives such as ∇_zφ(z, α) and ∇_zφ(z, α, τ). If φ and f represent constraints in the frequency domain, only standard computations are involved. If, however, they represent constraints in the time domain, the derivatives of the state transition matrix of the closed-loop system are repeatedly required. One suggestion for this is outlined by Becker (1979). Preliminary studies of control design using frequency domain (Polak and Mayne, 1976; Gonzaga and colleagues, 1978; Voreadis, 1978) and time domain (Becker and colleagues, 1978) constraints have been encouraging.
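Where the constraints are time-domain responses, one common way to obtain the required derivatives ∇_zφ(z, α) is to integrate sensitivity equations alongside the closed-loop state equations; this is offered here only as an illustration (the paper itself points to Becker (1979)), under the added assumption of a linear closed-loop system ẋ = A(z)x, for which the sensitivity s_k = ∂x/∂z_k obeys ṡ_k = A(z)s_k + (∂A(z)/∂z_k)x.

```python
import numpy as np
from scipy.integrate import solve_ivp

def response_and_sensitivities(A_of_z, dA_dz, z, x0, t_grid):
    """Integrate x' = A(z) x together with s_k' = A(z) s_k + (dA/dz_k) x, s_k(0) = 0,
    giving the trajectory x(t) and the sensitivities dx(t)/dz_k used in grad_z phi(z, t)."""
    A = A_of_z(z)
    dA = [dA_dz(z, k) for k in range(len(z))]
    n, p = len(x0), len(z)

    def rhs(t, w):
        x, S = w[:n], w[n:].reshape(p, n)
        return np.concatenate([A @ x] + [A @ S[k] + dA[k] @ x for k in range(p)])

    w0 = np.concatenate([np.asarray(x0, float), np.zeros(p * n)])
    sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), w0, t_eval=t_grid, rtol=1e-8)
    x = sol.y[:n]                        # state trajectory, one column per time point
    S = sol.y[n:].reshape(p, n, -1)      # S[k] is the sensitivity of the state to z_k
    return x, S
```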
Acknowledgement--Research sponsored by National Science Foundation Grants PFR-79-08261 and ECS-79-13148.

REFERENCES
Becker, R. G., A. J. Heunis and D. Q. Mayne (1978). Computer aided design of control systems via optimization. Publication 78/47, Department of Computing and Control, Imperial College, London.
Becker, R. G. (1979). Linear system functions via diagonalization. Report, Department of Computing and Control, Imperial College, London.
Blankenship, J. W. and J. E. Falk (1974). Infinitely constrained optimization problems. Serial T-301, Institute of Management Science and Engineering, The George Washington University.
Clarke, F. H. (1975). Generalized gradients and applications. Trans. Am. Math. Soc., 205, 247.
Demyanov, V. F. (1966). On the solution of certain minimax problems. Kybern., 2, 47.
Eaves, B. C. and W. I. Zangwill (1971). Generalized cutting plane algorithms. SIAM J. Control, 9, 529.
Gehner, K. R. (1974). Necessary and sufficient conditions for the Fritz John problem with linear equality constraints. SIAM J. Control, 12, 140.
Goldstein, A. A. (1977). Optimization of Lipschitz continuous functions. Math. Programming, 13, 14.
Gonzaga, C., E. Polak and R. Trahan (1978). An improved algorithm for optimization problems with functional inequality constraints. Report UCB/ERL M78/56, Electronics Research Laboratory, College of Engineering, University of California, Berkeley.
Gonzaga, C. and E. Polak (1979). On constraint dropping schemes and optimality functions for a class of outer approximations algorithms. SIAM J. Control Optimiz., 17, 477.
Gustafson, S. A. and K. O. Kortanek (1973). Numerical treatment of a class of semi-infinite programming problems. Naval Res. Logistics Quart., 20, 477.
Hettich, R. (Ed.) (1979). Semi-Infinite Programming. Lecture Notes in Control and Information Sciences, Vol. 15. Springer, Berlin.
Levitin, E. S. and B. T. Polyak (1966). Constrained minimization methods. Zh. Vychisl. Mat. mat. Fiz., 6, 787.
Mayne, D. Q., E. Polak and A. J. Heunis (1979a). Solving nonlinear inequalities in a finite number of iterations. Publication 79/3, Department of Computing and Control, Imperial College, London.
Mayne, D. Q., E. Polak and R. Trahan (1979b). An outer approximations algorithm for computer aided design problems. Report UCB/ERL M77/10, Electronics Research Laboratory, College of Engineering, University of California, Berkeley; also J. Optimiz. Theory Applic., 28, 331.
Mayne, D. Q. and E. Polak (1978). A superlinearly convergent algorithm for constrained optimization problems. Publication 78/52, Department of Computing and Control, Imperial College, London.
Mifflin, R. (1977). Semismooth and semiconvex functions in constrained optimization. SIAM J. Control Optimiz., 15, 959.
Oettli, W. (1976). The method of feasible directions for continuous minimax problems. Proc. 9th Int. Math. Programming Symp., Budapest, p. 505.
Polak, E. (1979). An implementable algorithm for the optimal design centering, tolerancing and tuning problem. J. Optimiz. Theory Applic., in press.
Polak, E. and D. Q. Mayne (1976). An algorithm for optimization problems with functional inequality constraints. IEEE Trans. Aut. Control, AC-21, 184.
Polak, E. and D. Q. Mayne (1978a). On the finite solution of nonlinear inequalities. Report UCB/ERL M78/80, Electronics Research Laboratory, College of Engineering, University of California, Berkeley.
Polak, E. and D. Q. Mayne (1978b). A robust second order method for optimization problems with inequality constraints. Report UCB/ERL, Electronics Research Laboratory, College of Engineering, University of California, Berkeley.
Polak, E. and A. Sangiovanni-Vincentelli (1979). Theoretical and computational aspects of the design centering, tolerancing and tuning problem. IEEE Trans. Circuits Syst., CAS-26, 795.
Polak, E., R. Trahan and D. Q. Mayne (1979). Combined phase I-phase II methods of feasible directions. Math. Programming, 17, 61.
Voreadis, A. (1978). Computer aided design of multivariable systems using a feasibility algorithm. MSc thesis, Department of Computing and Control, Imperial College, London.
Zakian, V. and U. Al-Naib (1973). Design of dynamical and control systems by a method of inequalities. Proc. IEE, 120, 1421.