Copyright © IFAC Software for Computer Control Madrid, Spain 1982
ALGORITHMS OF OPTIMIZATION OF LINEAR AND NON-LINEAR SYSTEMS

R. Gabasov*, V. S. Glushenkov*, F. M. Kirillova**, A. V. Pokatayev**, A. A. Senko** and A. I. Tyatyushkin***

*Byelorussian State University, Minsk, USSR
**Institute of Mathematics of the BSSR Academy of Sciences, USSR
***Siberian Energetics Institute of the USSR Academy of Sciences, USSR
Abstract. The report contains algorithms of solution of linear, piecewise-linear, quadratic, geometric programming and optimal control problems. The ideas of the monograph "Linear Programming Methods" by R. Gabasov and F. M. Kirillova lie at the foundation of the suggested algorithms.

ALGORITHMS OF SOLUTION OF LINEAR PROGRAMMING PROBLEMS

Consider an interval linear programming problem (LP)

c'x → max, b_* ≤ Ax ≤ b*, d_* ≤ x ≤ d*, (1)

where c = c(J), x = x(J), d_* = d_*(J), d* = d*(J) are n-vectors; b_* = b_*(I), b* = b*(I) are m-vectors; A = A(I,J) is an m×n matrix, I = {1,2,...,m}, J = {1,2,...,n}.

A vector x on which the main (b_* ≤ Ax ≤ b*) and direct (d_* ≤ x ≤ d*) constraints are fulfilled is said to be a feasible solution of problem (1). An ε-optimal feasible solution is defined by the inequality c'x⁰ − c'x ≤ ε, where x⁰ is the optimal feasible solution. The pair K_sup = {I_sup, J_sup}, I_sup ⊂ I, J_sup ⊂ J, |I_sup| = |J_sup|, for which the matrix A_sup = A(I_sup, J_sup) is nonsingular, is said to be the support of problem (1). The pair {x, K_sup} consisting of the feasible solution x and the support K_sup is said to be the support feasible solution.

Algorithm 1. On the support feasible solution {x, K_sup} let's calculate the vector of potentials u' = u'(I_sup) = c'_sup A_sup⁻¹ and the estimate vector Δ = Δ(J) = u'A(I_sup, J) − c'(J). The principle of decreasing of the suboptimality estimate

β(x, K_sup) = Σ_{Δ_j>0} Δ_j(x_j − d_*j) + Σ_{Δ_j<0} Δ_j(x_j − d*_j) + Σ_{u_i>0} u_i(b*_i − A(i,J)x(J)) + Σ_{u_i<0} u_i(b_*i − A(i,J)x(J))

lies at the foundation of the iteration. The suboptimality estimate admits the decomposition β(x, K_sup) = β(x) + β(K_sup), where β(x) characterizes the measure of non-optimality of the feasible solution x; β(K_sup) is calculated according to the dual feasible solution and characterizes the measure of non-optimality of the
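As a plain illustration of the estimate above, the following sketch computes β(x, K_sup) from given potentials u and estimates Δ. This is not the authors' code: the function name and the dense list-of-lists data layout are our assumptions, and the sums follow the decomposition just stated (lower bounds for Δ_j > 0 and u_i < 0, upper bounds for Δ_j < 0 and u_i > 0).

```python
# Illustrative sketch (not the authors' code): the suboptimality estimate
# beta(x, K_sup) of Algorithm 1 for the interval LP
#   c'x -> max,  b_lo <= Ax <= b_hi,  d_lo <= x <= d_hi.
def suboptimality_estimate(x, u, delta, A, b_lo, b_hi, d_lo, d_hi):
    """beta(x, K_sup) built from the estimate vector delta (columns)
    and the potentials u (rows)."""
    m, n = len(A), len(x)
    beta = 0.0
    for j in range(n):                      # direct-constraint part
        if delta[j] > 0:
            beta += delta[j] * (x[j] - d_lo[j])
        elif delta[j] < 0:
            beta += delta[j] * (x[j] - d_hi[j])
    for i in range(m):                      # main-constraint part
        ax = sum(A[i][j] * x[j] for j in range(n))
        if u[i] > 0:
            beta += u[i] * (b_hi[i] - ax)
        elif u[i] < 0:
            beta += u[i] * (b_lo[i] - ax)
    return beta
```

By construction every term is nonnegative for a feasible x, so β ≥ 0 and β = 0 certifies optimality.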
support. The decrease of β(x, K_sup) is achieved by independent decrease of the estimates β(x) and β(K_sup). The decrease of the estimate β(x) is connected with the transition to a new feasible solution x̄ = x + θℓ, where the suitable direction ℓ is chosen from the condition of the maximal increment of the cost function, and θ is the maximal step along ℓ, determined according to the standard rules. The suboptimality estimate of the support feasible solution {x̄, K_sup} equals β(x̄, K_sup) = (1 − θ)β(x, K_sup). At β(x̄, K_sup) ≤ ε the feasible solution x̄ is ε-optimal. Otherwise we get down to changing the support. The choice of a new support is carried out from the condition of decreasing the estimate β(K_sup). This is accomplished by an iteration in the dual problem. The elements of the new support are chosen from the condition of maximal decrease of the dual cost function along the chosen dual direction.
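The primal step just described can be sketched as follows. This is an illustrative simplification, not the authors' code: only the direct bounds d_* ≤ x ≤ d* are enforced when clipping the step, the direction ℓ is assumed given, and θ is normalized so that θ = 1 would remove all the suboptimality carried by x (β(x̄) = (1 − θ)β(x)).

```python
# Sketch of the primal part of the iteration: move from x to
# x_new = x + theta * l, clipping theta in [0, 1] so that the direct
# bounds d_lo <= x_new <= d_hi still hold (main constraints omitted
# here for brevity).
def primal_step(x, l, d_lo, d_hi):
    theta = 1.0
    for j, lj in enumerate(l):
        if lj > 0:
            theta = min(theta, (d_hi[j] - x[j]) / lj)
        elif lj < 0:
            theta = min(theta, (d_lo[j] - x[j]) / lj)
    x_new = [xj + theta * lj for xj, lj in zip(x, l)]
    return x_new, theta
```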
Algorithm 2. The simplified rule of the change of the support is used in the given algorithm. Relaxation of the cost function of the dual problem at the change of the support proceeds until the appearance of the first break point of the rate of change of the dual cost function along the chosen direction.
Algorithm 3. The support K_sup is changed by a new support K̄_sup proceeding from the principle of decreasing of the suboptimality estimate: β(K̄_sup) < β(K_sup). Construct the sets I₀, J₀:

J₀ = {j ∈ J_sup : d_*j + μ < x_j < d*_j − μ} ⊂ J_sup,
I₀ = {i ∈ I\I_sup : b_*i + μ < A(i,J)x(J) < b*_i − μ} ⊂ I\I_sup,

where μ ≥ 0 is the parameter of the algorithm. Consider the problem

c'(J̄)x(J̄) → max, b_*(Ī) ≤ A(Ī,J̄)x(J̄) ≤ b*(Ī), d_*(J̄) ≤ x(J̄) ≤ d*(J̄), (2)

where Ī = I\I₀, J̄ = J\J₀. Let's solve problem (2) by the dual method with full relaxation, proceeding from the dual basic feasible solution corresponding to the support K_sup. The basis of the optimal basic feasible solution of problem (2) is taken as the new support K̄_sup. The efficiency of the algorithm at different values of the parameter μ has been studied. In the conducted experiments the parameter μ was chosen in the following way: μ = p·max{max_i(b*_i − b_*i), max_j(d*_j − d_*j)}/100, 0 ≤ p ≤ 100. The results of the computer experiments showed that the given algorithm is most efficient at p ≈ 3÷10 when m > n and at p ≈ 40 when m ≤ n.

Dual algorithms. To solve problem (1) four dual algorithms have been worked out which differ in the constructions of suitable directions and in the choice of the dual step along the chosen direction. One of these algorithms works with basic dual feasible solutions.

Algorithms with a floating support. Consider the LP problem

c'x → max, Ax ≤ b, d_* ≤ x ≤ d*, (3)

where c, x, d_*, d* are n-vectors, b is an m-vector, A is an m×n matrix. An adaptive algorithm with a floating support has been suggested which is a modification of Algorithm 1. The algorithm can start from an infeasible initial vector x; a start procedure constructing a feasible solution is foreseen for this case.
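The screening step of Algorithm 3 can be sketched directly from the definitions of I₀ and J₀: rows and support columns lying strictly inside their bounds by the margin μ are dropped before the auxiliary problem (2) is formed. The function name and data layout below are illustrative assumptions.

```python
# Sketch of Algorithm 3's screening step: build I0 (non-support rows
# strictly inside their interval by margin mu) and J0 (support columns
# strictly inside their bounds by margin mu); problem (2) is then posed
# over I \ I0 and J \ J0.
def screening_sets(x, A, b_lo, b_hi, d_lo, d_hi, I_sup, J_sup, mu):
    n = len(x)
    Ax = [sum(Ai[j] * x[j] for j in range(n)) for Ai in A]
    I0 = {i for i in range(len(A))
          if i not in I_sup and b_lo[i] + mu < Ax[i] < b_hi[i] - mu}
    J0 = {j for j in J_sup
          if d_lo[j] + mu < x[j] < d_hi[j] - mu}
    return I0, J0
```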
Finite algorithms. Consider the LP problem

c'x → max, Ax = b, d_* ≤ x ≤ d*, (4)

where c = c(J), x = x(J), d_* = d_*(J), d* = d*(J), b = b(I), A = A(I,J), rank A = m. Obviously I_sup = I, A_sup = A(I, J_sup).
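For concreteness, feasibility in problem (4) amounts to the two checks below. This is a minimal sketch under assumed dense data; the numerical tolerance is our addition and not part of the paper's formulation.

```python
# Sketch: feasibility test for the canonical problem (4),
#   Ax = b,  d_lo <= x <= d_hi,
# within a numerical tolerance tol (an implementation detail we assume).
def is_feasible(x, A, b, d_lo, d_hi, tol=1e-9):
    n = len(x)
    if any(not (d_lo[j] - tol <= x[j] <= d_hi[j] + tol) for j in range(n)):
        return False                       # a direct bound is violated
    return all(abs(sum(Ai[j] * x[j] for j in range(n)) - bi) <= tol
               for Ai, bi in zip(A, b))    # main equalities hold
```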
The algorithms described above guarantee obtaining the solution in a finite number of iterations only when the conditions of dual non-degeneracy are fulfilled. In terms of problem (4) they look as follows: Δ_j ≠ 0 for all j ∈ J_NS = J\J_sup. The phenomenon of dual quasi-degeneracy, or μ-degeneracy (∃ j ∈ J_NS with |Δ_j| ≤ μ), can take place in the process of solution. At small μ, quasi-degeneracy can entail a lowering of the efficiency of the method because of the small improvement of the support on each iteration. Primal and dual algorithms of solution of problem (4) have been constructed which take into account the possible quasi-degeneracy of the problem. The idea of the algorithms is the following. The components j ∈ J_Nμ = {j ∈ J_NS : |Δ_j| ≤ μ} have small influence on the maximization of the initial cost function, but they can prevent relaxation of the dual cost function. In the case of quasi-degeneracy the subproblem of correction of the direction ℓ is solved by changing ℓ(J_Nμ) and ℓ(J_sup). The aim of the correction is the withdrawal of the support components of the direction ℓ from the bounds which are violated in the first turn. The degree of degeneracy (|J_Nμ|) of the subproblem of correction is less than the degree of degeneracy of the initial problem. This gives an opportunity to improve the support of the cost function of the subproblem of correction. In case quasi-degeneracy of the subproblem of correction appears, a subproblem of correction of the second level is constructed. The finiteness of the described algorithms is proved. The information about the subproblems of correction can be held in a rather compact form and doesn't require an essential volume of computer storage.
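Detecting the set J_Nμ that triggers the correction subproblem is a one-line filter; the sketch below follows the definition above (names and data layout are illustrative assumptions).

```python
# Sketch: detecting dual mu-degeneracy in problem (4).  Non-support
# estimates with |delta_j| <= mu form the set J_Nmu that triggers the
# subproblem of correction.
def quasi_degenerate(delta, J_sup, mu):
    J_NS = [j for j in range(len(delta)) if j not in J_sup]
    return [j for j in J_NS if abs(delta[j]) <= mu]
```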
SPECIAL NON-LINEAR PROGRAMMING PROBLEMS

Piecewise-linear programming problem. Consider the problem

f(x) = min_{k∈K}(c_k'x + α_k) → max, Ax = b, d_* ≤ x ≤ d*, (5)

where x = x(J), d_* = d_*(J), d* = d*(J), c_k = c_k(J), k ∈ K, are n-vectors, b = b(I) is an m-vector, α = α(K) is a p-vector, A = A(I,J) is a constant m×n matrix, rank A = m; I = {1,2,...,m}, J = {1,2,...,n}, K = {1,2,...,p}.
Two methods are worked out (Shilkina, 1981). The algorithms make it possible to stop the solution at an ε-optimal feasible solution. The principle of the decrease of the suboptimality estimate lies at the basis of both methods. The methods differ from each other in the initial information and the way of its transformation. The information about the feasible solution x and about the support of constraints A_sup (A_sup = A(I, J_sup), det A_sup ≠ 0) is assumed to be known. To calculate the suboptimality estimate β of the support feasible solution {x, A_sup}, the first method forms a special LP problem for which a special algorithm of solution is suggested. The result of the iteration is a new support feasible solution {x̄, Ā_sup} with β̄ ≤ β. The second method (the adaptive method) uses the support of the problem P_sp = {A_sup, D_sup}, where D_sup is the support of the cost function, D_sup = {c(K_sup), D(K_sup, J_sp)}, D(K,J) = c(K,J)A_sup⁻¹A(I,J) − c(K,J), J_sp ⊂ J\J_sup, K_sup ⊂ K, det D_sup ≠ 0. The suboptimality estimate is calculated according to the support feasible solution {x, P_sp}. The result of the iteration of the adaptive method is a new support feasible solution {x̄, P̄_sp} with β̄ < β. Both algorithms use the dual support method when changing supports.
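The piecewise-linear cost of problem (5) is easy to evaluate directly, and maximizing it is equivalent to an ordinary LP via the standard epigraph trick (this reformulation is a textbook device we note for orientation, not necessarily the authors' scheme):

```python
# Sketch: the piecewise-linear cost of problem (5).  Maximizing
# min_k (c_k' x + alpha_k) is equivalent to the LP
#   y -> max,  y <= c_k' x + alpha_k (k in K),  Ax = b,  d_lo <= x <= d_hi,
# with one auxiliary scalar y.  Data below are illustrative.
def pw_cost(x, C, alpha):
    """f(x) = min over k of c_k' x + alpha_k."""
    n = len(x)
    return min(sum(ck[j] * x[j] for j in range(n)) + ak
               for ck, ak in zip(C, alpha))
```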
Algorithm of solution of quadratic problems. A canonical convex quadratic programming problem

1/2 x'Dx + c'x → min, Ax = b, d_* ≤ x ≤ d*, (6)

is considered. A primal support method of its solution is constructed (Raketsky, 1980). The notion of the support of the quadratic programming problem forms the basis of the method. The support of the quadratic programming problem consists of two components: the support of constraints and the support of the cost function. Such a form of presentation of the support allows holding the elements of the support matrices in a compact form. If during the process of solution we come across a finite number of degenerate support feasible solutions, then the number of iterations is also finite. A modification of the algorithm for the problem without the constraints Ax = b is constructed. In this case the support of the problem coincides with the support of the cost function.

Geometric programming. The posynomial programming problem in a separable form (SGP)

g₀(z) → min; g_k(z) ≤ 1, k ∈ P; z ∈ Rⁿ, (7)

is considered. Here g_k(z) = Σ_{i=m_k}^{n_k} e^{z_i}, k ∈ {0} ∪ P, m₀ = 1, m_k = n_{k−1} + 1, k ∈ P, n_p = n, p = |P|; z = A'τ + b, A is an n×m exponent matrix, rank A = m, b ∈ Rⁿ. If the set P = ∅ then problem (7) is said to be an unconstrained SGP problem (USGP). In the case of P = {1,2,...,p} problem (7) is said to be a general constrained SGP problem (CSGP). The following algorithms are constructed: 1) primal (GPUPL), dual (GPUDL) and combined (GPUCL) algorithms of linear approximation for the USGP problem. The algorithms are based on approximation of the original problem by the problem of minimization (maximization) of some minorant (majorant) of the cost function of the primal (dual) problem.
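To fix the separable-form notation of (7), the sketch below evaluates the posynomial blocks g_k and the affine change of variables z = A'τ + b. The block index lists and all data are illustrative assumptions, not the paper's test problems.

```python
import math

# Sketch: the separable posynomial blocks g_k(z) = sum_i e^{z_i} of
# problem (7), with z = A' tau + b mapping the m design variables tau
# into the n exponent variables z.
def z_from_tau(A, tau, b):
    """z = A' tau + b, A given row-wise as an n x m list of lists."""
    n, m = len(A), len(A[0])
    return [sum(A[i][r] * tau[r] for r in range(m)) + b[i] for i in range(n)]

def g_blocks(z, blocks):
    """blocks: one index list per k in {0} U P (assumed partition of J)."""
    return [sum(math.exp(z[i]) for i in idx) for idx in blocks]
```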
Algorithm GPUCL consists in successive application of the iterations of the algorithms GPUPL and GPUDL; 2) the primal algorithm GPCLA of linear approximation for the CSGP problem, based on the solution of auxiliary interval LP subproblems. Algorithms GPUPL, GPUDL, GPUCL, GPCLA are characterized by economical use of the computer on-line storage; 3) primal algorithms of polynomial approximation for the USGP (GPUPA) and CSGP (GPCPA) problems. The iterations of the algorithms are conducted with the use of curvilinear directions (polynomials). Algorithm GPUPA realizes a decomposition of the optimality criterion, and this essentially simplifies the original USGP problem.

ALGORITHMS OF SOLUTION OF OPTIMAL CONTROL PROBLEMS

Optimal terminal problem. The problem

J(u) = c'x(t₁) → max,
x(t+1) = Ax(t) + bu(t), x(0) = x₀,
g_* ≤ Hx(t₁) ≤ g*, f_*(t) ≤ u(t) ≤ f*(t), t ∈ T = {0,1,...,t₁−1}, (8)
is investigated, where x(t) is the n-vector of state. The control u(·) is said to be an admissible control if along it all the constraints of problem (8) are fulfilled. The pair {u, K_sup} consisting of the admissible control and the support is said to be a support control. For the support control {u, K_sup} the inequality J(u⁰) − J(u) ≤ β(u, K_sup),

β(u, K_sup) = Σ_{Δ(t)>0} Δ(t)(u(t) − f_*(t)) + Σ_{Δ(t)<0} Δ(t)(u(t) − f*(t)) + Σ_{ν(s)>0} ν(s)ω*(s) + Σ_{ν(s)<0} ν(s)ω_*(s),
is true. Here Δ(t) = −ψ'(t)b, t ∈ T, is the cocontrol; ψ(t) is the solution of the conjugate equation

ψ'(t−1) = ψ'(t)A, t = t₁−1,...,1; ψ'(t₁−1) = c' − ν'(I_sup)H(I_sup, J);

ν = ν(I_sup) is the vector of potentials; ω_* = g_* − Hx(t₁) and ω* = g* − Hx(t₁) are the lower and upper discrepancy vectors; u⁰ is the optimal control; T_NS = T\T_sup.
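The cocontrol Δ(t) can be computed by running the conjugate recursion backward, as sketched below. This is an illustrative simplification: the terminal value of ψ is taken as c alone, i.e. we assume no active terminal constraints (ν = 0), whereas the paper's terminal condition also subtracts ν'H.

```python
# Sketch: cocontrol Delta(t) = -psi'(t) b from the conjugate recursion
#   psi'(t-1) = psi'(t) A,  with psi(t1-1) = c (illustrative
#   simplification: the potentials nu are taken to be zero).
def cocontrol(A, b, c, t1):
    n = len(c)
    psi = list(c)                           # psi(t1-1)
    delta = {}
    for t in range(t1 - 1, -1, -1):
        delta[t] = -sum(psi[i] * b[i] for i in range(n))
        # row-vector times matrix: psi'(t-1) = psi'(t) A
        psi = [sum(psi[i] * A[i][j] for i in range(n)) for j in range(n)]
    return delta
```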
At β(u, K_sup) ≤ ε the solution of the problem stops. Otherwise we construct a new control ū = u + θΔu, where Δu = {Δu(t), t ∈ T} is a suitable direction and θ is the maximal step along Δu. Construct the sets T_M = {t : |Δ(t)| ≤ α} and T_B = {t : |Δ(t)| > α}, T_M ∪ T_B = T_NS, according to a given number α ≥ 0 (the parameter of the iteration of the algorithm) and Δ(t), t ∈ T. Variations of the controlling signals u(t), t ∈ T_B, have greater influence on the cost functional than the variations of the controlling signals u(t), t ∈ T_M. That is why in the given algorithm the ascent direction is constructed according to the set T_B from the condition of the maximal increment of the functional. The set T_M is used for providing the maximal increment of the cost functional along this direction. To construct the direction and to choose the step we solve a special interval LP problem.

The efficiency of the algorithm at different values of the parameter α has been studied. In the conducted computer experiments the parameter α was chosen in the following way: α = p·max_{t∈T}|Δ(t)|/100, p ∈ [0, 100]. The results of the computer experiments showed that the algorithm is most efficient at p ≈ 20÷50.
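The choice of α and the resulting split of the moments into T_M and T_B can be sketched in a few lines (function name and the dict layout of Δ are our assumptions):

```python
# Sketch: the iteration parameter alpha = p * max_t |Delta(t)| / 100
# splits the moments into T_M (|Delta| <= alpha) and T_B (|Delta| > alpha);
# the ascent direction is then built on T_B.
def split_moments(delta, p):
    """delta: dict t -> Delta(t); p in [0, 100]."""
    alpha = p * max(abs(v) for v in delta.values()) / 100.0
    T_M = sorted(t for t, v in delta.items() if abs(v) <= alpha)
    T_B = sorted(t for t, v in delta.items() if abs(v) > alpha)
    return alpha, T_M, T_B
```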
Finite algorithms. Finite algorithms of solution of general problems are used for constructing finite algorithms of optimization of linear discrete systems. Consider the problem

c'x(t₁) → max, x(t+1) = Ax(t) + Bu(t), x(0) = x₀,
Hx(t₂) = g, f_*(t) ≤ u(t) ≤ f*(t), t ∈ T = {0,1,...,t₁−1}, (10)
where t₂ ≤ t₁, g ∈ Rᵐ; the other parameters of problem (10) coincide with the corresponding parameters of problem (8). Primal and dual algorithms with μ-adaptive scaling are suggested. To speed up the work of the algorithms, methods of quickened integration of stationary linear discrete systems are proposed. Aggregation of subintervals of the interval T lies at the basis of these methods.

Max-min problem. The problem of the maximization of the functional

J(u) = min_{i∈K} c_i'x(t₁)

on the trajectories of the linear system

x(t+1) = Ax(t) + bu(t), x(0) = x₀, Hx(t₁) = g, f_*(t) ≤ u(t) ≤ f*(t), t ∈ T = {0,1,...,t₁−1}, (11)

is considered, where x(t) ∈ Rⁿ, t ∈ T ∪ {t₁}; g ∈ Rᵐ, x₀ ∈ Rⁿ; A is an n×n matrix, H is an m×n matrix, m ≤ n, rank H = m; c_i ∈ Rⁿ, i ∈ K; K = {1,2,...,p}. To solve the given problem an algorithm is worked out (A.V. Guroinsky, 1980) at the basis of which lie the ideas of the adaptive method for a general LP problem. The peculiarity of the algorithm consists in the introduction of the support of the cost functional which in a special way takes into consideration the nonlinearity of the performance index.

Quadratic problem. Consider the problem of minimization of the convex quadratic functional

J(u) = 1/2 Σ_{t=0}^{t₁} [x'(t)Mx(t) + 2x'(t)d u(t) + k u²(t)] → min
on the trajectories of the linear system (11). A primal exact algorithm is worked out (A.I. Lutov, 1982) which develops the ideas of the primal algorithm of quadratic programming. For non-degenerate problems the algorithm is finite. The algorithm takes into consideration specific features of the problem. The algorithm starts from some admissible control and a support of constraints. Matrices corresponding to the support of constraints and the support of the cost function are recalculated recurrently at the transition to the next iteration, and this considerably shortens the CPU time.

Conclusion. All the algorithms reported are new. They are based on a new approach to the solution of the LP problem (adaptive methods) and differ in principle from the traditional schemes based on the simplex method of LP. The algorithms allow beginning the solution from an arbitrary initial feasible solution, not necessarily a basic one. They fully take into consideration the structure of the problem, solve degenerate LP problems, and obtain a suboptimal feasible solution in a finite number of iterations. In the course of extensive computer experiments it was revealed that the number of iterations in the adaptive method of LP is 2-3 times less than in the simplex method and the CPU time is 5 times less on average. The programs realizing the suggested algorithms have undergone careful checking in computer experiments. They are included in the program package "Adaptive Optimization" (ES EVM - Soviet computers software).
REFERENCES

Gabasov, R., and F.M. Kirillova (1981). Constructive Methods of Parametric and Functional Optimization. IFAC 8th Triennial World Congress, Kyoto, Japan, Vol. IV, pp. IV-111-IV-116.

Kirillova, F.M., O.I. Kostjukova, and R. Gabasov (1979). Adaptive Method for Solving Large Problems of Linear Programming. IFAC/IFORS Symposium Optimization Methods, Applied Aspects, pp. 163-170.

Gabasov, R., F.M. Kirillova, and A.I. Tyatyushkin (1982). Algorithms of Optimization of Linear Systems. BGU Publ. House, Minsk.

Gabasov, R., and F.M. Kirillova (Eds.) (1980). Optimal Control Problems. Nauka i Technika, Minsk.

Pokatayev, A.V. (1982). The Algorithm of Solving the Unconstrained Geometric Programming Problem. Izvestiya AN SSSR, Technicheskaya Kibernetika, 1, 39-46.

Lutov, A.I. (1982). Dual Method of Solution of the Quadratic Optimal Control Problem. Preprint of the Institute of Mathematics of the Byelorussian Academy of Sciences, Minsk, N 7 (132).

Gabasov, R., and F.M. Kirillova (1980). New Linear Programming Methods and Their Application to Optimal Control. 4th Workshop on Control Applications of Nonlinear Programming, Denver, USA, 1979. Pergamon Press.