A general method for solving constrained optimization problems


Cosimo Resina

Institute of Computer Sciences, Faculty of Science, University of Bari, Italy

Abstract: In this paper a general problem of constrained minimization is studied. The minima are determined by searching for the asymptotic values of the solutions of a suitable system of ordinary differential equations. For this system, if the initial point is feasible, then any trajectory remains inside the set of constraints and tends towards a set of critical points. Each critical point that is not a relative minimum is unstable. For one-step numerical integration formulas, an estimate of the integration step is given so that the above mentioned qualitative properties of the system of ordinary differential equations are preserved.

Keywords: Nonlinear optimization, differential equations, constrained minimum problem, numerical analysis

1. Introduction

Various authors have determined the solution of minimization problems by finding the asymptotic values of the solutions of differential equations. Differential equations [1,2,3] have been constructed either to solve nonlinear equations or to determine unconstrained minima of functions, using stable numerical integration methods. This approach has also been used [5,6] in order to solve constrained problems. This paper generalizes that result, dealing with a minimization problem with inequality constraints, and it constructs a new formulation with respect to [7]. The approach followed consists in the construction of a suitable differential equation which has the property of invariance for the set of constraints K, and the stability that allows the trajectories to reach a solution of the minimization problem, whatever the initial point inside K may be. After the analysis and the study of stability of such a differential equation, a Euler-like method of numerical integration is given as an example. For this method, a study is made of the size of the integration step, and an estimate is determined so that the numerical formulas that are used keep the same properties of invariance and stability as the differential equation.

2. Formulation of the minimization problem and approach to its solution

We take the minimization problem

min_{x ∈ K} f(x),   (1)

where

K = { x ∈ ℝⁿ | gᵢ(x) ≤ 0, i = 1, 2, ..., m } ≠ ∅.

K is a compact, arcwise connected set, and

f: ℝⁿ → ℝ,  gᵢ: ℝⁿ → ℝ,  i = 1, 2, ..., m,

are continuously differentiable functions. Furthermore, F: K → ℝⁿ is a continuous function on K. In order to solve the above defined minimization problem we shall use an equation from the class of initial-value differential equations:

ẋ = F(x),  x(0) = x⁰ ∈ K.   (2)

We give the following definitions:

Received June 1982; revised March 1984

Definition 2.1. The set K is called 'invariant' with respect to the system (2) if for any x⁰ ∈ K, x(x⁰, t) belongs to K for all t ≥ 0.

North-Holland, European Journal of Operational Research 21 (1985) 93-100

0377-2217/85/$3.30 © 1985, Elsevier Science Publishers B.V. (North-Holland)

Definition 2.2. A point x̄ ∈ ℝⁿ is an Ω limit point of the solution x(x⁰, t) of (2) if there exists a sequence {tₖ}, tₖ → ∞, such that lim_{k→∞} x(x⁰, tₖ) = x̄. The set Ω(x⁰) of all Ω limit points of the solution x(x⁰, t) is called its Ω limit set.

Definition 2.3. Any solution x̄ of F(x) = 0 is called a critical point of (2).

Definition 2.4. Any function f(·) that satisfies the inequality

∇ᵀf(x)F(x) ≤ −‖F(x)‖₂²,  ∀x ∈ K,   (3)

is called a LaSalle function.

The following definitions derive from Liapunov theory.

Definition 2.5. The set Ω(x⁰) is asymptotically stable for the system (2) if in some neighbourhood N of Ω(x⁰) there is a scalar positive definite function L(x) (a Liapunov function) such that its derivative L'(x) with respect to the solutions of (2) is negative definite in that region.

Definition 2.6. The set Ω(x⁰) is completely unstable, or repulsive, if the functions L(x) and L'(x) are both negative definite in N, that is, any solution of (2) will leave the region for some t > t⁰.

Let us suppose that the system (2) satisfies the property (A) of invariance and the property (B) of the LaSalle function, with respect to Definitions 2.1-2.4. Property (A) ensures that for problem (1) all the trajectories of the system (2) originating from a feasible point are entirely made up of feasible points: if an initial point satisfies the constraints, then the trajectory departing from that point lies entirely in K. Before interpreting property (B), we must recall [8,17]:

Theorem 2.1. With reference to system (2), with the property of invariance, let

V: K → ℝ  and  W: K → ℝ⁺

be functions, continuous on K, such that for all x ∈ K:

∇ᵀV(x)F(x) ≤ −W(x).

Then

∀x⁰ ∈ K:  lim_{t→∞} V(x(x⁰, t)) = c  and  Ω(x⁰) ⊆ M,

where M is the largest invariant set contained in {x ∈ K: W(x) = 0} ∩ V⁻¹(c).

Definition 2.7. Let C(x⁰) be the set of the critical points of (2); then Ω(x⁰) is isolated if C(x⁰) \ Ω(x⁰) is closed.

Using property (B) and Theorem 2.1 with V(·) = f(·) we can deduce [9,18]:
(I) on the closed limit set Ω(x⁰) the function f(·) is constant.
In addition, for each set Ω(x⁰) of isolated limit points in C(x⁰) we have:
(II) if Ω(x⁰) is a relative minimum for f(·), then Ω(x⁰) is an asymptotically stable set, f(x) − f(p), with p ∈ Ω(x⁰), being the corresponding Liapunov function;
(III) if Ω(x⁰) is a relative maximum for f(·), then it is a repulsive set;
(IV) Ω(x⁰) is unstable in the remaining cases.

Furthermore, the following can be confirmed [9,18] by Liapunov theory:
(i) if p is a strong relative minimum point for f(·) in K, then p is asymptotically stable, and therefore a critical point for (2);
(ii) if p is a strong relative maximum point for f(·) in K, then p is repulsive, and it is a critical point for (2);
(iii) if p is a stationary point for f(·) in K, that is, neither a maximum nor a minimum, then it is an unstable point.

These properties become weaker if the set Ω(x⁰) is not isolated, or if the point p is an accumulation point of the critical points.

The class of differential equations (2) whose function F(·) satisfies the above mentioned properties (A) and (B) is therefore adequate for the search of the minimum points of problem (1). It is also suitable if a trajectory tends towards a set that does not contain relative minimum points: since such a set is unstable, it is unlikely that a method of numerical integration will single it out.
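The descent mechanism behind properties (A) and (B) can be seen in a minimal numerical sketch. This is not the paper's constrained method: it is a plain explicit-Euler integration of the unconstrained gradient flow ẋ = −∇f(x), i.e. system (2) with F = −∇f and no constraints; the quadratic f, the starting point and the step value are illustrative assumptions.

```python
import numpy as np

# Unconstrained special case of system (2): F(x) = -grad f(x).
# f is a strictly convex quadratic, so its unique critical point
# x* = (2, 1) is the minimum, and the trajectory must reach it.
def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def grad_f(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])

x = np.array([0.0, 0.0])      # initial point x0 (illustrative)
h = 0.05                      # fixed integration step (illustrative)
values = [f(x)]
for _ in range(400):
    x = x - h * grad_f(x)     # explicit Euler step along F(x)
    values.append(f(x))

# f decreases monotonically along the discrete trajectory: the
# discrete analogue of the LaSalle property (B).
assert all(a >= b for a, b in zip(values, values[1:]))
print(x)  # -> close to the critical point (2, 1)
```

The trajectory converges to the only critical point, which here is the minimum; the constrained construction of the next section is designed so that the same behaviour survives the presence of the gᵢ.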

3. The differential equation for the minimization problem

For the following we shall define:

∇ᵀf(x) = (∂f/∂x₁, ..., ∂f/∂xₙ),

eᵢᵀ = (δᵢ₁, δᵢ₂, ..., δᵢₘ),  δᵢⱼ = 0 if i ≠ j,  δᵢⱼ = 1 if i = j,  i = 1, 2, ..., m,

D(x) = diag(g₁(x), g₂(x), ..., gₘ(x)),

I = identity matrix (n × n),

0 = null matrix (m × n),

J(x) = {cᵢⱼ},  where cᵢⱼ = ∂gᵢ/∂xⱼ,  i = 1, 2, ..., m,  j = 1, 2, ..., n,

∂K is the boundary of the set K, ∂K = { x ∈ ℝⁿ: there exists at least one i with gᵢ(x) = 0 },

K̊ = { x ∈ ℝⁿ: gᵢ(x) < 0, ∀i = 1, ..., m }.

We assume that
(i) ∀x ∈ ∂K, det(J(x)Jᵀ(x) − D(x)) ≠ 0;
(ii) ∀x⁰ ∈ ∂K where gᵢ(x⁰) = 0 for i ∈ I ⊆ {1, 2, ..., m}, the gradients {∇gᵢ(x⁰)}ᵢ∈I are linearly independent.

Therefore, let A and B be matrices of dimension n × n and m × n, respectively, the solutions of the matricial system

A(x) + Jᵀ(x)B(x) = I,  J(x)A(x) + D(x)B(x) = 0.   (4)

We prove that the system of differential equations

ẋ = F(x) = −A(x)∇f(x),  x(0) = x⁰ ∈ K̊,   (5)

satisfies the properties (A) and (B).

Theorem 3.1. The set K is invariant for the system (5).

Proof. Owing to the continuity of the matrix A(x) and of the vector ∇f(x) on K, there exists a solution of (5). We shall now demonstrate that for each x⁰ ∈ ∂K we have nᵀF(x⁰) ≥ 0, where n denotes the inward normal vector to K. In fact, if x⁰ ∈ ∂K, then gᵢ(x⁰) = 0 for i ∈ I ⊆ {1, 2, ..., m}. The corresponding inward normal n at x⁰ is given by

nᵀ = −∇ᵀgᵢ(x⁰) = −(∂gᵢ/∂x₁, ..., ∂gᵢ/∂xₙ) = −eᵢᵀJ(x⁰).

We shall obtain therefore:

nᵀF(x⁰) = −eᵢᵀJ(x⁰)F(x⁰) = eᵢᵀJ(x⁰)A(x⁰)∇f(x⁰),

or, using the second equation of (4),

nᵀF(x⁰) = −eᵢᵀD(x⁰)B(x⁰)∇f(x⁰) = −gᵢ(x⁰)eᵢᵀB(x⁰)∇f(x⁰) = 0.

Observation. If at the point x⁰ there is more than one active constraint, the trajectory still does not leave K, because the normal directions at that point are linearly independent.

Theorem 3.2. f(·) is a LaSalle function for (5), i.e. it satisfies the inequality (3).

Proof. Seeing that from (4)

A(x) = Aᵀ(x)A(x) − Bᵀ(x)D(x)B(x),

we have

df(x(t))/dt = ∇ᵀf(x(t))F(x(t)) = −∇ᵀf(x(t))A(x(t))∇f(x(t))
= −‖F(x(t))‖₂² − ‖(−D(x(t)))^{1/2} B(x(t))∇f(x(t))‖₂²
≤ −‖F(x(t))‖₂².

Observation. K can also be closed and unbounded, as long as f(·) is such that

∀x⁰ ∈ K,  L_{x⁰} = { x ∈ K: f(x) ≤ f(x⁰) }

is compact. Seeing that f(·) decreases along each trajectory, the set L_{x⁰} is necessarily invariant.
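Since (4) yields B(x) = (J(x)Jᵀ(x) − D(x))⁻¹J(x) and A(x) = I − Jᵀ(x)B(x), the matrices defining the field F(x) = −A(x)∇f(x) can be computed with a single linear solve. A small sketch on assumed illustrative data (the specific J, D and gradient below are made up; any data with det(JJᵀ − D) ≠ 0 would do):

```python
import numpy as np

def field_matrices(J, D):
    """Solve the matricial system (4):
       A + J^T B = I,   J A + D B = 0,
    which yields B = (J J^T - D)^(-1) J and A = I - J^T B."""
    _, n = J.shape
    B = np.linalg.solve(J @ J.T - D, J)
    A = np.eye(n) - J.T @ B
    return A, B

# Illustrative data (assumed): n = 3 variables, m = 2 constraints.
J = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])     # Jacobian of g at some point x
D = np.diag([-0.5, -1.0])           # D = diag(g_i(x)); g_i(x) < 0 inside K
A, B = field_matrices(J, D)

# Both equations of (4) hold.
assert np.allclose(A + J.T @ B, np.eye(3))
assert np.allclose(J @ A + D @ B, np.zeros((2, 3)))

# Right-hand side of (5), F(x) = -A(x) grad f(x), for an assumed gradient:
grad_f = np.array([1.0, -2.0, 0.5])
F = -A @ grad_f
# As in the proof of Theorem 3.1: J F = D B grad_f, so the component of
# J F for an active constraint (g_i = 0) vanishes.
assert np.allclose(J @ F, D @ (B @ grad_f))
```

The last assertion is exactly the computation used to show invariance: the derivative of gᵢ along the field is proportional to gᵢ itself, so it goes to zero as the trajectory approaches the boundary.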

4. Relation with Kuhn-Tucker points

Definition 4.1. A point x̄ ∈ K is said to be a Kuhn-Tucker point if it satisfies the following properties:

∇f(x̄) + Σ_{i=1}^{m} pᵢ ∇gᵢ(x̄) = 0,   (6)

Σ_{i=1}^{m} pᵢ gᵢ(x̄) = 0,   (7)

pᵢ ≥ 0,  i = 1, 2, ..., m,   (8)

gᵢ(x̄) ≤ 0,  i = 1, 2, ..., m.   (9)

Lemma 4.1. If x̄ ∈ K is a Kuhn-Tucker point, then x̄ is a critical point for (5).

Proof. It follows from (6) that

∇f(x̄) + Jᵀ(x̄)p = 0,   (10)

and from (7), (8), (9) we obtain

D(x̄)p = 0.

Therefore

J(x̄)∇f(x̄) + J(x̄)Jᵀ(x̄)p = J(x̄)∇f(x̄) − D(x̄)p + J(x̄)Jᵀ(x̄)p = 0,

from which

p = −(J(x̄)Jᵀ(x̄) − D(x̄))⁻¹ J(x̄)∇f(x̄),

and if we substitute the latter in (10) we obtain

(I − Jᵀ(x̄)(J(x̄)Jᵀ(x̄) − D(x̄))⁻¹ J(x̄)) ∇f(x̄) = 0,

or rather, by (4),

A(x̄)∇f(x̄) = 0.

Lemma 4.2. For each critical point x̄ of (5), the properties (6), (7) and (9) are verified. Furthermore, if x̄ ∈ K̊, or x̄ is the only limit point of a trajectory x(t) of (5) that does not coincide with x̄, then x̄ is a Kuhn-Tucker point.

Proof. For a critical point x̄ of (5),

A(x̄)∇f(x̄) = 0,

so that, putting

p = −(J(x̄)Jᵀ(x̄) − D(x̄))⁻¹ J(x̄)∇f(x̄) = −B(x̄)∇f(x̄),

that is,

pᵢ = −eᵢᵀB(x̄)∇f(x̄),  i = 1, 2, ..., m,   (11)

property (6) follows immediately. From the second equation of (4) we also obtain

gᵢ(x̄)pᵢ = 0,  i = 1, 2, ..., m,   (12)

which proves (7). Let us suppose x̄ ∈ K̊; at that point we have gᵢ(x̄) ≠ 0, i = 1, 2, ..., m, and therefore from (12)

pᵢ = 0,  i = 1, 2, ..., m.

This proves property (8). Lastly, let us suppose that x̄ is the only limit point of a trajectory x(t) and x(t) ≠ x̄. For i = 1, 2, ..., m, gᵢ(x(t)) is a solution of

ġᵢ = −gᵢ pᵢ(x(t)),

where the pᵢ(x(t)) are defined by (11); gᵢ(x(t)) being bounded in the compact set K, we have

lim_{t→+∞} pᵢ(x(t)) = pᵢ(x̄) ≥ 0,  i = 1, 2, ..., m

(see [9, page 205]).
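Lemma 4.2 suggests a practical Kuhn-Tucker test: compute the multipliers from (11) and check (6)-(9). A sketch on test problem (F1) of Section 6 (the check itself is an illustration, not the paper's code):

```python
import numpy as np

# Test problem (F1): f(x) = (x1-2)^2 + (x2-1)^2,
# g1(x) = x1^2 - x2 <= 0, g2(x) = x1 + x2 - 2 <= 0, solution (1, 1).
x = np.array([1.0, 1.0])

grad_f = np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
g = np.array([x[0] ** 2 - x[1], x[0] + x[1] - 2.0])
J = np.array([[2.0 * x[0], -1.0],
              [1.0,         1.0]])     # rows are the gradients of g_i
D = np.diag(g)

# B from system (4), then multipliers p_i = -e_i^T B grad f, eq. (11).
B = np.linalg.solve(J @ J.T - D, J)
p = -B @ grad_f

# Kuhn-Tucker conditions (6)-(9) at x:
assert np.allclose(grad_f + J.T @ p, 0.0)   # (6)
assert np.allclose(p * g, 0.0)              # (7)
assert np.all(p >= 0.0)                     # (8)
assert np.all(g <= 1e-12)                   # (9)
print(p)  # -> approximately [0.6667, 0.6667]
```

Both constraints are active at (1, 1) and both multipliers equal 2/3, so the reported solution of (F1) is indeed a Kuhn-Tucker point, hence by Lemma 4.1 a critical point of (5).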

5. Numerical integration

For the numerical integration of the system of differential equations (5), devised in order to determine the minimum point of a function, high-order methods are not essential. Instead, the qualitative properties of invariance and stability deduced in the continuous case must also hold in the discrete case. Therefore, we shall examine numerical methods belonging to the class of one-step difference equations:

xₙ₊₁ = xₙ + hₙ Φ(xₙ; hₙ),  Φ: ℝⁿ → ℝⁿ,   (13)

with Φ(xₙ; 0) = F(xₙ). Given xₙ ∈ K, hₙ must be determined so that the following properties are satisfied:

(P1) hₙ must allow that xₙ₊₁ ∈ K;
(P2) hₙ must allow that f(xₙ₊₁) ≤ f(xₙ) − φ(‖F(xₙ)‖),

where φ: ℝ⁺ → ℝ⁺ and φ(u) ≥ φ(v) if u ≥ v, in analogy with the continuous case for the system of differential equations (5). For f(·) belonging to the class C²(ℝⁿ) the following is valid.

Theorem 5.1. If, for a method of the class (13), (P1) is satisfied for the choice

hₙ = δₙ ‖F(xₙ)‖₂² / M,  0 < η ≤ δₙ ≤ 1 − η,

with η and M suitable constants, then (P2) is necessarily satisfied.

Proof. Given xₙ,

f(xₙ₊₁) = f(xₙ) + hₙ ∇ᵀf(xₙ)F(xₙ) + O(hₙ²).

We can put |O(hₙ²)| < M hₙ², because of the compactness of K and the continuity of the second derivatives. Because of Theorem 3.2 it follows that

f(xₙ₊₁) ≤ f(xₙ) − δₙ(1 − δₙ) ‖F(xₙ)‖₂⁴ / M.   (14)

An estimate of hₙ will now be made in order to verify (P1). Let xₙ ∈ K̊; in order that xₙ₊₁ ∈ K̊ it must be gᵢ(xₙ₊₁) < 0 for i = 1, 2, ..., m. Putting

gᵢ(xₙ₊₁) = gᵢ(xₙ) + hₙ ∇ᵀgᵢ(xₙ)F(xₙ) + O(hₙ²),

because of (5) and Theorem 3.1 we have

∇ᵀgᵢ(xₙ)F(xₙ) = gᵢ(xₙ) eᵢᵀB(xₙ)∇f(xₙ),

and therefore

gᵢ(xₙ₊₁) = gᵢ(xₙ)(1 + hₙ eᵢᵀB(xₙ)∇f(xₙ)) + O(hₙ²);

disregarding the terms in O(hₙ²), (P1) is equivalent to

1 + hₙ eᵢᵀB(xₙ)∇f(xₙ) ≥ 0,  i = 1, 2, ..., m.   (15)

The following is an operative method for the estimate of M in Theorem 5.1. Having fixed xₙ, let h₀ > 0 be a sufficiently small step of integration. Putting

x⁰ₙ₊₁ = xₙ + h₀ F(xₙ),

the quantity

M̄ = ( f(x⁰ₙ₊₁) − f(xₙ) − h₀ ∇ᵀf(xₙ)F(xₙ) ) / h₀²

will be accepted as the estimate of M. This estimate has been obtained considering Taylor's expansion of f(xₙ₊₁) about xₙ up to the second power of h₀.

5.1. Description of the algorithm

The algorithm that solves the minimization problem formulated at the beginning of this paper is developed according to the following steps.
Step 1. Calculate Φ(xₙ; hₙ), xₙ ∈ K.
Step 2. Choose hₙ as the biggest step that satisfies (15) and that does not exceed δₙ ‖F(xₙ)‖₂² / M̄.
Step 3. Calculate xₙ₊₁ according to the formula (13).
Step 4. If the constraints are respected and f(·) decreases, accept xₙ₊₁ and go to Step 1; otherwise:
Step 5. Reduce the integration step hₙ and go to Step 3.

In this paper the improved Euler method has been used:

xₙ₊₁ = xₙ + hₙ ( F(xₙ) + F(xₙ + hₙ F(xₙ)) ) / 2.

We preferred this method to the simple Euler method,

xₙ₊₁ = xₙ + hₙ F(xₙ),

because the latter is of first order and, in the presence of nonlinear constraints, often goes out of the region of feasibility, slowing down the convergence to a minimum point. The last step of the algorithm has been obtained using techniques of halving and interpolation.
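The procedure of Section 5.1 can be sketched compactly. This is an illustrative reimplementation, not the author's code: the step is taken as δ‖F‖₂²/M̄ with the local estimate M̄, then simply halved until feasibility and descent hold, and the interpolation refinement of the last step is omitted. It is applied here to test problem (F1) of Section 6.

```python
import numpy as np

f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
grad_f = lambda x: np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
g = lambda x: np.array([x[0] ** 2 - x[1], x[0] + x[1] - 2.0])
Jac = lambda x: np.array([[2.0 * x[0], -1.0],
                          [1.0,         1.0]])

def F(x):
    """Right-hand side of (5): F(x) = -A(x) grad f(x), A and B from (4)."""
    J, D = Jac(x), np.diag(g(x))
    B = np.linalg.solve(J @ J.T - D, J)
    A = np.eye(2) - J.T @ B
    return -A @ grad_f(x)

x = np.array([0.6, 0.4])          # feasible starting point of (F1)
h0, delta, eps = 1e-3, 0.5, 1e-4
for _ in range(10000):
    Fx = F(x)
    nF2 = Fx @ Fx
    if nF2 < eps ** 2:
        break
    # Local estimate M_bar via a trial Euler step (Section 5).
    M = abs(f(x + h0 * Fx) - f(x) - h0 * grad_f(x) @ Fx) / h0 ** 2
    h = delta * nF2 / M if M > 1e-12 else 1.0
    # Improved Euler step, halving h until feasibility and descent hold.
    for _ in range(60):
        try:
            xn = x + 0.5 * h * (Fx + F(x + h * Fx))
        except np.linalg.LinAlgError:
            h *= 0.5
            continue
        if np.all(g(xn) < 0.0) and f(xn) < f(x):
            x = xn
            break
        h *= 0.5
    else:
        break                     # step exhausted near a critical point

print(x)  # close to the constrained minimum (1, 1)
```

Every accepted iterate is strictly feasible and decreases f, which is the discrete counterpart of properties (A) and (B); the iteration stops when ‖F(xₙ)‖ falls below the tolerance.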

6. Numerical results

In the following test problems, x̄ denotes the computed minimum point, f̄ the value of the objective function at x̄, and NI the number of iterations. The initial value of h₀ has been chosen as 10⁻³; in fact, various numerical experiments have demonstrated that changes of h₀ do not influence the results. The tolerance EPS used for the convergence was 10⁻³.

(F1)
f(x) = (x₁ − 2)² + (x₂ − 1)²,
g₁(x) = x₁² − x₂ ≤ 0,
g₂(x) = x₁ + x₂ − 2 ≤ 0,
x̄ = (1, 1),  f̄ = 1,  x⁰ = (0.6, 0.4),  NI = 16.

(F2)
f(x) = −e^((x₁−1)²(x₂−2)²),
g₁(x) = −x₁² + x₂ ≤ 0,
g₂(x) = e^(−x₁) − x₂ ≤ 0,
g₃(x) = 2(x₁ − 1)² − x₂ ≤ 0,
x̄ = (1.35, 0.257),  f̄ = −23.7,  x⁰ = (1.23700, 0.6100),  NI = 25.

(F3)
f(x) = −x₁,
g₁(x) = x₁³ − x₂ ≤ 0,
g₂(x) = x₂² − x₁ ≤ 0,
x̄ = (1, 1),  f̄ = −1,  x⁰ = (0.10000, 0.0050),  NI = 20.

(F4)
f(x) = 9 − 8x₁ − 6x₂ − 4x₃ + 2x₁² + 2x₂² + x₃² + 2x₁x₂ + 2x₁x₃,
g₁(x) = −x₁ ≤ 0.

(F5)
f(x) = x₁² + x₂² + 2x₃² + x₄² − 5x₁ − 5x₂ − 21x₃ + 7x₄,
g₁(x) = x₁² + x₂² + x₃² + x₄² + x₁ − x₂ + x₃ − x₄ − 8 ≤ 0,
g₂(x) = x₁² + 2x₂² + x₃² + 2x₄² − x₁ − x₄ − 10 ≤ 0,
g₃(x) = 2x₁² + x₂² + x₃² + 2x₁ − x₂ − x₄ − 5 ≤ 0,
x̄ = (0, 1, 2, −1),  f̄ = −44,  x⁰ = (0, 0, 0, 0),  NI = 69.

(F6)
f(x) = (x₁ − 10)³ + (x₂ − 20)³,
g₁(x) = −x₁ + 13 ≤ 0,
g₂(x) = −(x₁ − 5)² − (x₂ − 5)² + 100 ≤ 0,
g₃(x) = (x₁ − 6)² + (x₂ − 5)² − 82.81 ≤ 0,
g₄(x) = −x₂ ≤ 0,
x̄ = (14.095, 0.842961),  f̄ = −6961.81,  x⁰ = (14.35, 8.6),  NI = 44.

Other very interesting experiments have been carried out, studying test problems with predetermined solutions [13,14] in the general, degenerate and ill-conditioned cases.

General case

Example 1.
f(x) = 1.825x₁² + 1.785x₂² + 1.41x₃² + 1.43x₄² + 0.8x₁x₂ + 0.85x₁x₃ + 0.2x₁x₄ − 0.2x₂x₃ + 0.29x₂x₄ + 0.2x₃x₄ + 0.195x₁ − 1.881x₂ − 1.387x₃ − 2.161x₄ − 14.0329,
g₁(x) = −S₁(x) + 4.34 ≤ 0,
g₂(x) = −S₂(x) + 26.22 ≤ 0,
g₃(x) = −S₃(x) + 9.65 ≤ 0,
g₄(x) = −S₄(x) + 55 ≤ 0.
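The reported solution of (F5), the well-known Rosen-Suzuki test problem, can be verified directly; a quick sketch checking the objective value and which constraints are active:

```python
# Check of problem (F5) at the reported solution x = (0, 1, 2, -1):
# f = -44, with g1 and g3 active and g2 strictly satisfied.
def f(x):
    x1, x2, x3, x4 = x
    return x1**2 + x2**2 + 2*x3**2 + x4**2 - 5*x1 - 5*x2 - 21*x3 + 7*x4

def g(x):
    x1, x2, x3, x4 = x
    return [
        x1**2 + x2**2 + x3**2 + x4**2 + x1 - x2 + x3 - x4 - 8,   # g1
        x1**2 + 2*x2**2 + x3**2 + 2*x4**2 - x1 - x4 - 10,        # g2
        2*x1**2 + x2**2 + x3**2 + 2*x1 - x2 - x4 - 5,            # g3
    ]

x_star = (0.0, 1.0, 2.0, -1.0)
print(f(x_star), g(x_star))   # -> -44.0 [0.0, -1.0, 0.0]
```

Two of the three constraints are active at x̄, consistent with the Kuhn-Tucker characterization of Section 4.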

Example 2.
f(x) = x₁² + 2x₂² + x₃² + 2x₄² − x₁x₂ + 2x₁ − x₂ − x₃.

Example 3.
f(x) = 1.125x₁² + 2.185x₂² + 1.31x₃² + 2.43x₄² + 1.3x₁x₂ + 1.85x₁x₃ + 0.2x₁x₄ − 0.2x₂x₃ + 0.29x₂x₄ + 0.2x₃x₄ − 1.005x₁ − 1.281x₂ − 0.787x₃ − 1.661x₄ − 21.3529.

Example 4.
f(x) = 0.125x₁² + 0.185x₂² + 0.31x₃² + 0.43x₄² + 0.3x₁x₂ − 0.15x₁x₃ + 0.2x₁x₄ − 0.2x₂x₃ + 0.29x₂x₄ + 0.2x₃x₄ − 1.005x₁ − 1.281x₂ − 0.787x₃ − 2.66x₄ + 4.8671.

The predetermined solution is x̄ = (1.5, 1.7, 1.6, 1.8), with f̄ = 0.1·10⁻⁵; the starting points and iteration counts were: x⁰ = (1.7, 1.7, 1.7, 1.7), NI = 32; x⁰ = (1.9, 1.8, 1.8, 1.7), NI = 49; x⁰ = (2, 2, 2, 2), NI = 52; x⁰ = (2.4, 2.4, 2, 3), NI = 35; x⁰ = (5, 6, 5, 4), NI = 53.

Degenerate case

Here four examples of degenerate problems are studied, with the same constraints as in the general case, utilizing five starting points. It is shown (see Table 1) that the number of iterations increases when the degree of degeneracy increases.

Example 1.
f(x) = 2.2x₁² + 1.7666665x₂² + 1.6x₃² + 1.5x₄² + x₁x₂ + x₁x₃ − 1.15x₁ − 1.9166661x₂ − 2.2x₃ − 1.3x₄ − 13.118334.

Example 2.
f(x) = 2.2x₁² + 2.1x₂² + 1.6x₃² + 1.5x₄² + 0.5x₁x₂ + x₁x₃ − 0.3x₁ − 2.3x₂ − 2.2x₃ − 1.3x₄ − 13.43.

Example 3.
f(x) = 2.2x₁² + 1.7666665x₂² + 1.2x₃² + 1.5x₄² + x₁x₂ + 1.333334x₁x₃ + 0.25x₂x₃ − 1.683334x₁ − 2.3166661x₂ − 1.845x₃ − 1.3x₄ − 12.662334.

Ill-conditioned case

In this case four ill-conditioned problems with five starting points are studied, and the number of iterations increases (see Table 2) when the measure "m" of ill-conditioning increases.

Example 1.
f(x) = 3.125x₁² + 3.185x₂² + 2.31x₃² + 2.43x₄² + 1.3x₁x₂ + 1.85x₁x₃ + 0.2x₁x₄ − 0.2x₂x₃ + 0.29x₂x₄ + 0.2x₃x₄ + 0.995x₁ − 2.281x₂ − 1.787x₃ − 1.661x₄ − 31.0029.

Example 2.
f(x) = 1.145x₁² + 2.195x₂² + 1.32x₃² + 2.43x₄² + 1.3x₁x₂ + 1.85x₁x₃ + 0.2x₁x₄ − 0.2x₂x₃ + 0.29x₂x₄ + 0.2x₃x₄ − 0.985x₁ − 1.291x₂ − 0.79x₃ − 1.661x₄ − 21.4494.

Table 1
x⁰     NI Ex. 1   NI Ex. 2   NI Ex. 3   NI Ex. 4
x₀¹    34         48         76         82
x₀²    49         57         91         89
x₀³    53         59         89         98
x₀⁴    36         62         97         115
x₀⁵    53         62         68         101

Table 2
x⁰     NI Ex. 1       NI Ex. 2          NI Ex. 3          NI Ex. 4
       (m = 33)       (m = 1.1·10³)     (m = 3.6·10⁴)     (m = 1.2·10⁶)
x₀¹    15             31                57                66
x₀²    18             33                63                70
x₀³    17             28                67                72
x₀⁴    18             34                71                84
x₀⁵    22             39                83                91

Example 4.
f(x) = 2.2x₁² + 1.77x₂² + x₁x₂ + 1.34x₁x₃ + 0.25x₂x₃ + 0.3x₂x₄ + 0.15x₃x₄ − 2.144x₁ − 2.688x₂ − 2.1556x₃ − 0.7396x₄ − 12.25708.

References

[1] Davidenko, D.F., "On a new method of numerical solution of systems of nonlinear equations", Doklady Akademii Nauk SSSR 88 (1953) 601-602.
[2] Treccani, G., "On the critical points of continuously differentiable functions", in: L.C.W. Dixon and G.P. Szegö (eds.), Towards Global Optimization, North-Holland, Amsterdam, 1975.
[3] Botsaris, C.A., "Differential gradient methods", Journal of Mathematical Analysis and Applications 63 (1978) 177-198.
[4] Hestenes, M.R., Optimization Theory: The Finite Dimensional Case, John Wiley and Sons, New York, 1975.
[5] Botsaris, C.A., "A Newton type curvilinear search method for constrained optimization", Journal of Mathematical Analysis and Applications 69 (1979) 372-397.
[6] Botsaris, C.A., "An efficient curvilinear method for the minimization of nonlinear functions subject to linear inequality constraints", Journal of Mathematical Analysis and Applications 71 (1979) 482-515.
[12] "…ming problems", Omaggio a C. Ferrari, edited by G. Torte, Bologna (1974) 521-536.
[13] Schittkowski, K., Nonlinear Programming Codes: Information, Tests, Performance, Lecture Notes in Economics and Mathematical Systems 183, Springer, Berlin, 1980.
[14] Hock, W., and Schittkowski, K., Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems 187, Springer, Berlin, 1981.
[15] Polak, E., Computational Methods in Optimization: A Unified Approach, Academic Press, New York, 1971.
[16] Céa, J., Optimisation: Théorie et Algorithmes, Dunod, Paris, 1971.
[17] Halanay, A., "Stabilità", Unione Matematica Italiana, Bologna, 1978.
[18] Willems, J.L., Stability Theory of Dynamical Systems, Nelson, London, 1970.