J.-B. Hiriart-Urruty (editor) © Elsevier Science Publishers B.V. (North-Holland), 1986
A GENERAL DETERMINISTIC APPROACH TO GLOBAL OPTIMIZATION VIA D.C. PROGRAMMING
H. TUY
Institute of Mathematics, P.O. Box 631, Bo Ho, Hanoi (Vietnam)
Abstract. A d.c. function is a function which can be represented as a difference of two convex functions. A d.c. programming problem is a mathematical programming problem involving a d.c. objective function and/or d.c. constraints. We present a general approach to global optimization based on a solution method for d.c. programming problems.
Keywords. Global optimization. Deterministic approach. D.c. Programming. Canonical d.c. program. Convex maximization. Outer approximation method.
1. INTRODUCTION

The general problem we are concerned with in this paper is to find the global minimum of a function $F: \mathbb{R}^n \to \mathbb{R}$ over a constraint set given by a system of the form
$$x \in \Omega, \quad G_i(x) \ge 0 \quad (i = 1, 2, \ldots, m), \tag{1}$$
where $\Omega$ is a closed convex subset of $\mathbb{R}^n$ and $F$, $G_i$ $(i = 1, 2, \ldots, m)$ are real-valued functions on $\mathbb{R}^n$, continuous relative to an open neighbourhood of $\Omega$.

This is an inherently difficult problem. Even in the special case where $F$ is concave and the $G_i$ are affine, the problem is known to be NP-hard. In the classical approach to nonlinear optimization problems, we use information on the local behaviour of the functions $F$ and $G_i$ to decide whether a given feasible point is the best among all neighbouring feasible points. If so, we terminate; if not, the same local information indicates the way to proceed to a better solution. When the problem is convex, the optimal solution provided by this approach will surely be a global optimum. But, in the absence of convexity, this approach generally yields a local optimum only. Therefore, if a truly global optimum is desired rather than an arbitrary local minimum, the conventional methods of local optimization are almost useless.

This intrinsic difficulty accounts for the small number of papers devoted to global optimization, in comparison with the huge literature on local optimization. Yet many applications require global optimization, and the practical need for efficient and reliable methods for solving this problem has very much increased in recent times. From a deterministic point of view, the global optimization problem, in the general case when no further structure is given on the data, is essentially intractable, since it would require, for example, evaluating the functions over a prohibitively dense grid of feasible points. It seems, then, that the only
available recourse in many cases is to use stochastic methods, which guarantee the results only with some probability and very often are unable to reconcile reliability with efficiency. However, for a particular class of global optimization problems, namely problems of global minimization of concave functions (or, equivalently, global maximization of convex functions) over convex sets, significant results have been obtained in the last decade; see [36], [8], [15], [16], [27], [32], [31], [37], [38], [42], [45]. On the basis of these results, and also of the tremendous progress in computing techniques (parallel processing and the like), it has now become realistic to hope for a general deterministic approach to a very wide class of global optimization problems whose data are given in terms of convex and concave functions only.

In view of the usually very high cost of the computation of an exact global solution, it seems appropriate to slightly modify the formulation of the global optimization problem as follows: find a globally $\alpha$-optimal solution, i.e., a feasible point $\bar{x}$ such that there is no feasible point $x$ satisfying $F(x) \le F(\bar{x}) - \alpha$, where $\alpha$ is some prescribed positive number. The core of the problem consists of two basic questions:

1) Given a feasible point $x^0$, check whether or not $x^0$ is globally $\alpha$-optimal.

2) Knowing that $x^0$ is not globally $\alpha$-optimal, find a feasible point $x$ such that $F(x) \le F(x^0) - \alpha$.

If these two questions are answered successfully, the solution scheme suggests itself: start from a feasible point (preferably a local minimum) $x^0$ and check whether or not $x^0$ is globally $\alpha$-optimal; if not, find a feasible point (preferably a local minimum) $x^1$ such that $F(x^1) \le F(x^0) - \alpha$; repeat the procedure as long as needed, each new iteration starting from the point produced by the previous one.

In actual practice, the choice of $\alpha$ is guided by computational considerations. In many cases $\alpha$ can be taken fairly small, but in other cases
one must be content with a comparatively coarse value of $\alpha$, or perhaps with only one or two iterations of the procedure. Well, one cannot cry for the moon, and, for certain extremely hard problems, it would be unreasonable to try to reach the absolute minimum, simply because it is unreachable by the means currently available. Put in this framework, deterministic methods for global optimization may be combined with conventional local optimization methods to yield reasonably efficient procedures for locating a sufficiently good local minimum.

The purpose of this paper is to present a general approach to global optimization problems in the above setting, via the class of so-called d.c. programming problems. It turns out that this class includes the great majority of optimization problems potentially encountered in the applications. Every d.c. program can in turn be reduced to a canonical form, which is a convex program with just one additional reverse convex constraint (i.e., a constraint of the form $g(x) \ge 0$ with $g$ convex). Several alternative algorithms can then be proposed for this canonical d.c. program, each consisting basically of two alternating phases: a local phase, where local searches are used to find a local minimum better than the current feasible solution, and a global phase, where global methods are called for to test the current local minimum for global $\alpha$-optimality and to find a new starting point for the next local phase. This two-phase structure is also typical of most stochastic methods. Thus the difference between our deterministic approach and the stochastic one is that, in the global phase, stochastic methods evaluate the functions at a number of random sample points, while our methods choose these points on the basis of the information collected so far about the structure of the problem.

The paper consists of five sections. After the introduction, we discuss in Section 2 d.c. functions and d.c. programs. Here we introduce canonical d.c. programs and show how any d.c. program can, by simple manipulations, be reduced to this canonical form. In Section 3 we demonstrate the practical importance of d.c. programs by showing how a number of problems of interest
can be recognized as d.c. programs. Finally, Sections 4 and 5 are devoted to developing solution methods for canonical d.c. programs.
2. D.C. FUNCTIONS AND D.C. PROGRAMS

We first recall the definition and some basic properties of d.c. functions. For a comprehensive discussion of d.c. functions the reader is referred to the article of Hiriart-Urruty [14] and the recent work of R. Ellaia [7].
Definition 1. Let $\Omega$ be a convex closed subset of $\mathbb{R}^n$. A continuous function $f: \Omega \to \mathbb{R}$ is called a d.c. function if it can be represented as a difference of two convex functions on $\Omega$:
$$f(x) = f_1(x) - f_2(x),$$
with $f_1, f_2$ convex on $\Omega$.
Examples of d.c. functions (see e.g. [14] or [7]):

(1) Any convex or concave function on $\Omega$.

(2) Any continuous piecewise affine function.

(3) Any $C^2$-function and, in particular, any quadratic function.

(4) Any lower-$C^2$ function, i.e., any function $f: \mathbb{R}^n \to \mathbb{R}$ such that for each point $u \in \mathbb{R}^n$ there is an open neighbourhood $X$ of $u$ and a representation
$$f(x) = \max_{s \in S} F(x, s), \quad x \in X,$$
where $S$ is a compact topological space and $F: X \times S \to \mathbb{R}$ is a function which has partial derivatives up to order 2 with respect to $x$ and which, along with all these derivatives, is jointly continuous in $(x, s) \in X \times S$ [25].
The usefulness of d.c. functions in optimization theory stems from the following properties (see e.g. [40] for a proof).

Proposition 1. The class of d.c. functions on $\Omega$ is a linear space which is stable under the operations
$$(f_1, \ldots, f_m) \mapsto \max_i f_i, \qquad (f_1, \ldots, f_m) \mapsto \min_i f_i.$$
It follows, in particular, that the pointwise minimum of a finite family of convex functions $g_1(x), \ldots, g_m(x)$ is a d.c. function. In fact,
$$\min_{j=1,\ldots,m} g_j(x) = \sum_{i=1}^{m} g_i(x) - \max_{j=1,\ldots,m} \sum_{i \ne j} g_i(x),$$
where the second term is a convex function, since it is the pointwise maximum of the $m$ convex functions $\sum_{i \ne j} g_i$ $(j = 1, \ldots, m)$.
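Since this identity is purely algebraic, it is easy to check numerically. The following minimal sketch (plain Python; the three convex functions are hypothetical examples, not from the paper) verifies the decomposition on a few sample points:

```python
# Numerical check of the d.c. decomposition of a pointwise minimum of
# convex functions:  min_j g_j(x) = sum_i g_i(x) - max_j sum_{i != j} g_i(x).

g = [lambda x: x**2,              # convex
     lambda x: abs(x - 1.0),      # convex
     lambda x: 2.0*x + 3.0]       # affine, hence convex

def dc_parts(x):
    vals = [gi(x) for gi in g]
    first = sum(vals)                                     # convex part
    second = max(first - vals[j] for j in range(len(g)))  # max of sums, convex
    return first, second

for x in (-2.0, -0.5, 0.0, 0.7, 3.1):
    first, second = dc_parts(x)
    assert abs((first - second) - min(gi(x) for gi in g)) < 1e-12
print("d.c. identity verified on sample points")
```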
Proposition 2. If $\Omega$ is compact, the family of d.c. functions on $\Omega$ is dense in $C(\Omega)$.

In other words, any continuous function on $\Omega$ can be approximated, as closely as desired, by a d.c. function. In particular, one way to approximate a continuous function on $\Omega$ by a d.c. function is to construct a piecewise affine approximation to it over $\Omega$ (Example (2) above), using e.g. a triangulation of $\Omega$. Thus the class of d.c. functions includes a large majority of functions of practical interest.
Definition 2. A d.c. programming problem is a global optimization problem of the form
$$\text{minimize } f(x), \quad \text{s.t. } x \in \Omega, \ g_i(x) \ge 0 \ (i = 1, \ldots, m), \tag{2}$$
where $\Omega$ is a convex closed subset of $\mathbb{R}^n$ and $f, g_1, \ldots, g_m$ are d.c. functions on $\mathbb{R}^n$.

From Proposition 2 it follows that any global optimization problem as stated at the beginning of the Introduction, with $\Omega$ compact, can be approximated by a d.c. program (2) in which the d.c. functions $f, g_i$ approximate $F, G_i$ as closely as desired (¹).

(¹) It is beyond the scope of this paper to discuss the conditions under which an optimal solution of the approximate d.c. program actually gives an approximate optimal solution of the original problem.
D.c. programs form a very broad class of optimization problems including, of course, convex programs (where $f$ is convex and the $g_i$ are all concave). In the next section we shall discuss some examples of the most typical d.c. programs encountered in the applications. A striking feature of d.c. programs is that, despite their great variety, they can all be reduced to a canonical form, which we are now going to describe.
Definition 3. An inequality of the form $g(x) \ge 0$, where $g: \mathbb{R}^n \to \mathbb{R}$ is a convex function, is called a reverse convex (sometimes also complementary convex) inequality.

Obviously, such an inequality determines a nonconvex set which is the complement of a convex open set (and for this reason is called a complementary convex set). Optimization problems involving reverse convex constraints have been considered by Rosen [26], Avriel and Williams [2], Meyer [21], Ueing [44], and more recently by Hillestad and Jacobsen [12], [13], Tuy [39], [43], Bulatov [4], Bohringer and Jacobsen [3], Thuong and Tuy [34], [41], Thach [28], and others.
Definition 4. A mathematical programming problem is called a canonical d.c. program if its objective function is linear and all its constraints are convex, except exactly one which is reverse convex.

In other words, a canonical d.c. program is an optimization problem of the form
$$(P) \qquad \text{minimize } c^T x, \quad \text{s.t. } x \in \Omega, \ g(x) \ge 0, \tag{3}$$
where $\Omega$ is a convex closed set and $g: \mathbb{R}^n \to \mathbb{R}$ is a convex function. Clearly, such a problem can be regarded as the result of adjoining to the convex program
$$\text{minimize } c^T x, \quad \text{s.t. } x \in \Omega, \tag{4}$$
the additional reverse convex constraint $g(x) \ge 0$. Thus, in a canonical d.c. program, all the nonconvexity of the problem is confined to the reverse convex
constraint. It seems, then, that a canonical d.c. program is a rather special form of d.c. program. Nevertheless:

Proposition 3. Any d.c. program (2) can be converted into an equivalent canonical d.c. program.
Proof. First we note that problem (2) is equivalent to the following one:
$$\text{minimize } t, \quad \text{s.t. } x \in \Omega, \ g_i(x) \ge 0 \ (i = 1, \ldots, m), \ t - f(x) \ge 0,$$
where $t - f(x)$ is a d.c. function of $(x, t)$. Therefore, by changing the notation, we can assume that the objective function in (2) is linear. Next, on the basis of Proposition 1, a system of d.c. constraints $g_i(x) \ge 0$ $(i = 1, \ldots, m)$ can always be replaced by a single d.c. constraint
$$g(x) = \min_{i=1,\ldots,m} g_i(x) \ge 0. \tag{5}$$
Finally, a d.c. constraint
$$p(x) - q(x) \ge 0 \qquad (p, q \text{ convex})$$
is equivalent, upon introducing an additional variable $t$, to the system
$$q(x) - t \le 0, \qquad p(x) - t \ge 0, \tag{6}$$
where the first constraint is convex, while the second is reverse convex. We have thus proved that, by means of some simple manipulations, any d.c. program (2) can be transformed into an equivalent minimization problem with a linear objective function and with all constraints convex, except one reverse convex. ∎

As a corollary of Proposition 3, it follows, in particular, that any convex program with several additional reverse convex constraints,
$$\text{minimize } c^T x, \quad \text{s.t. } x \in \Omega, \ g_i(x) \ge 0 \ (i = 1, \ldots, m),$$
where all the $g_i$ are convex, can be converted into one with a single additional reverse convex constraint (i.e., a canonical d.c. program). This is achieved by
first aggregating the reverse convex constraints into the single d.c. constraint (5), and then applying the manipulations from (5) to (6).
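To make these manipulations concrete, here is a minimal sketch (Python; the pair representation and all concrete functions are illustrative assumptions, not part of the paper) of the two steps of the proof, assuming the objective is already linear: aggregating d.c. constraints by a pointwise minimum as in (5), and splitting the resulting d.c. constraint as in (6):

```python
# A d.c. function g = p - q is stored as a pair (p, q) of convex callables.

def dc_min(dc_list):
    """Step (5): pointwise minimum of d.c. functions as a d.c. pair (P, Q).
    Uses min_i (p_i - q_i) = min_i c_i - sum_k q_k with the convex functions
    c_i = p_i + sum_{k != i} q_k, and then
    min_i c_i = sum_i c_i - max_j sum_{i != j} c_i."""
    m = len(dc_list)
    qs = [q for _, q in dc_list]
    def c(i, x):
        return dc_list[i][0](x) + sum(qs[k](x) for k in range(m) if k != i)
    P = lambda x: sum(c(i, x) for i in range(m))                      # convex
    Q = lambda x: max(sum(c(i, x) for i in range(m) if i != j)
                      for j in range(m)) + sum(q(x) for q in qs)      # convex
    return P, Q

def split(P, Q):
    """Step (6): the d.c. constraint P(x) - Q(x) >= 0 becomes, in (x, t),
    Q(x) - t <= 0 (convex) together with P(x) - t >= 0 (reverse convex)."""
    return (lambda x, t: Q(x) - t), (lambda x, t: P(x) - t)

# Two hypothetical d.c. constraints g_1 = x^2 - |x| and g_2 = |x| - 1:
g1 = (lambda x: x * x, lambda x: abs(x))
g2 = (lambda x: abs(x), lambda x: 1.0)
P, Q = dc_min([g1, g2])
for x in (-2.0, -0.3, 0.0, 0.8, 1.7):
    assert abs((P(x) - Q(x)) - min(x*x - abs(x), abs(x) - 1.0)) < 1e-12
print("aggregated d.c. constraint matches min(g_1, g_2)")
```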
3. EXAMPLES OF D.C. PROGRAMS

It is well known to economists that reverse convex constraints and concave objective functions to be minimized (or convex objective functions to be maximized) arise in situations where economies of scale (or increasing returns) are present (see e.g. [48], [49] for a discussion of nonconvexity in economics). Typically, suppose that an activity program $x$ has to be selected from a certain convex set $\Omega \subset \mathbb{R}^n$ (the set of all technologically feasible programs), and the selection must be made so as to minimize the cost $f(x)$ under the conditions that $u_i(x) \ge c_i$ $(i = 1, \ldots, m)$, where $u_i(x)$ represents some kind of "effect" or "utility" resulting from the program $x$, and $c_i$ is the minimal level required for the $i$th effect. Then the problem to be solved is a problem of the form (2), with $g_i(x) = u_i(x) - c_i$, and will generally be a d.c. program, since the functions $f$ and $u_i$ can very often be assumed to be d.c. functions (for example, $f$ is concave while the $u_i$ are convex; in many cases $u_i(x) = \sum_{j=1}^{n} u_{ij}(x_j)$, where certain functions $u_{ij}$ are convex and others are concave or S-shaped, so that $u_i(x)$ is actually a d.c. function).

D.c. programs encountered in industry and other applied fields (electrical network design, water distribution network design, engineering design, mechanics, physics) have been described, e.g., in [9], [10], [11], [19], [26], [46]. Let us discuss here in more detail some examples of optimization problems whose d.c. structure is not readily apparent.
1) Design centering problem. Random variations inherent in any fabrication process may result in very low production yield. To help the designer minimize the influence of these random variations, one method consists in maximizing yield by centering the nominal value of the design parameters in the so-called region of acceptability ([46]). This problem, called the design centering problem, can be formulated as a problem of the following form.
Given in $\mathbb{R}^n$ a convex closed set $\Omega$ with $\text{int } \Omega \ne \emptyset$, $m$ complementary convex sets $D_i = \mathbb{R}^n \setminus C_i$ $(i = 1, \ldots, m$, with $C_i$ convex open$)$, and a convex compact set $B_0$ with $0 \in \text{int } B_0$, find the largest convex body $B$, homothetic to a translate of $B_0$, that is contained in
$$S = \Omega \cap D_1 \cap \ldots \cap D_m.$$
If $p$ denotes the gauge (Minkowski functional) of $B_0$, so that
$$p(y) = \inf \{ \lambda > 0 : y \in \lambda B_0 \},$$
then the problem is to find
$$\max \{ r : B_x(r) \subset S, \ x \in S \}, \quad \text{where } B_x(r) = \{ y : p(y - x) \le r \}.$$

In this formulation, the problem is a kind of two-level optimization problem: for every fixed $x \in S$, find the maximum $r(x)$ of all $r$ satisfying $B_x(r) \subset S$; then maximize $r(x)$ over all $x \in S$. A more efficient approach, however, is to formulate this problem as a d.c. program, in the following way [29].

For any closed subset $M$ of $\mathbb{R}^n$, define
$$d_M(x) = \inf \{ p(y - x) : y \notin M \}.$$
Then it can be proved (see [29]) that every $d_{D_i}$ is a finite convex function on $\mathbb{R}^n$, while $d_\Omega$ is a finite concave function on $\Omega$ which can be extended to a finite concave function $\bar{d}_\Omega$ on $\mathbb{R}^n$. If
$$f(x) = \min \{ \bar{d}_\Omega(x), d_{D_1}(x), \ldots, d_{D_m}(x) \},$$
then it follows from Proposition 1 that $f$ is a d.c. function, and it is easily seen that the design centering problem is just equivalent to the d.c. program
$$\text{maximize } f(x), \quad \text{s.t. } x \in S.$$
(If $D_i = \{ x : g_i(x) \ge 0 \}$, with $g_i$ convex, then the problem has just the form (P).) Note that problems very similar to this one are encountered in computer-aided coordinate measurement techniques (see [9]).
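As an illustration of this d.c. formulation, the following minimal sketch makes simplifying assumptions not made in the paper: the gauge $p$ is the Euclidean norm, $\Omega$ is a box, each $C_i$ is an open Euclidean ball, and the outer maximization is a crude grid search standing in for a proper d.c. algorithm. All concrete data are hypothetical.

```python
import numpy as np

lo, hi = np.array([0.0, 0.0]), np.array([4.0, 3.0])                 # box Omega
balls = [(np.array([1.0, 1.0]), 0.6), (np.array([3.0, 2.5]), 0.7)]  # C_i

def d_omega(x):      # concave on Omega: distance to the complement of the box
    return min(np.min(x - lo), np.min(hi - x))

def d_D(x, c, rho):  # convex: distance to the open ball C_i = B(c, rho)
    return max(0.0, np.linalg.norm(x - c) - rho)

def f(x):            # d.c. by Proposition 1 (min of concave and convex pieces)
    return min([d_omega(x)] + [d_D(x, c, r) for c, r in balls])

# f(x) is the radius of the largest ball centred at x contained in S, so
# maximizing f solves the problem; here by brute force over a grid.
grid = [np.array([u, v]) for u in np.linspace(0, 4, 81)
                         for v in np.linspace(0, 3, 61)]
best = max(grid, key=f)
print("center ~", best, " radius ~", f(best))
```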
2) Linear programs with an additional complementarity condition. The following problem has been considered in [10]. In offshore technology, a submarine pipeline is usually laid so that it rests freely on the sea bottom. Since the sea bed profile is usually irregularly hilly, its regularization is often carried out by trench excavations, both in order to avoid excessive bending moments in the pipe and to bury it for protection. Thus, the practical problem which arises is to minimize the total cost of the excavation, under the condition that the free contact equilibrium configuration of the pipe nowhere implies excessive bending. It has been shown in [10] that this problem can be expressed as a nonlinear program of the form:
$$\text{minimize } c^T x + d^T y, \quad \text{s.t. } Ax + By \ge p, \ x \ge 0, \ y \ge 0, \ x^T y = 0 \quad (x, y \in \mathbb{R}^n),$$
where the only nonlinear constraint is the complementarity condition $x^T y = 0$. But, for $x \ge 0$, $y \ge 0$, the latter condition is obviously equivalent to
$$\sum_{i=1}^{n} \min (x_i, y_i) \le 0,$$
which is a reverse convex inequality, since the function
$$(x, y) \mapsto \sum_{i=1}^{n} \min (x_i, y_i)$$
is concave. Therefore, the problem in point is a canonical d.c. program; more precisely, a linear program with an additional reverse convex constraint.
The usefulness of this approach from a computational point of view has been demonstrated e.g. in [33].
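The equivalence is elementary but easy to confirm numerically; a minimal sketch with hypothetical random data:

```python
# Check that, for x, y >= 0, the complementarity condition x^T y = 0
# coincides with the reverse convex inequality -sum_i min(x_i, y_i) >= 0
# (the negated function being concave).
import numpy as np

def complementary(x, y):
    return float(np.dot(x, y)) == 0.0

def reverse_convex_feasible(x, y):
    return -float(np.sum(np.minimum(x, y))) >= 0.0

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.random(4), rng.random(4)
    kill = rng.random(4) < 0.7            # enforce x_i * y_i = 0 on some i
    side = rng.random(4) < 0.5
    x = np.where(kill & side, 0.0, x)
    y = np.where(kill & ~side, 0.0, y)
    assert complementary(x, y) == reverse_convex_feasible(x, y)
print("equivalence confirmed on 1000 random nonnegative samples")
```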
3) Jointly constrained biconvex programs. The following problem, which generalizes the well-known bilinear programming problem, has been considered by several authors (see e.g. [1]):
$$\text{minimize } f(x) + x^T y + g(y), \quad \text{s.t. } (x, y) \in S \subset \mathbb{R}^n \times \mathbb{R}^n,$$
where $f$ and $g$ are convex on $S$, and $S$ is a convex closed subset of $\mathbb{R}^n \times \mathbb{R}^n$. Here the bilinear form $x^T y$ is a d.c. function, since
$$4 x^T y = |x + y|^2 - |x - y|^2.$$
Therefore the objective function itself is a d.c. function. We can also convert the problem into an equivalent concave minimization problem.
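The polarization identity behind this d.c. split can be checked in a few lines (hypothetical data):

```python
# Check of 4 x^T y = |x+y|^2 - |x-y|^2, which exhibits the bilinear form
# x^T y as a difference of two convex quadratics.
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.standard_normal(5), rng.standard_normal(5)
lhs = 4.0 * np.dot(x, y)
rhs = np.dot(x + y, x + y) - np.dot(x - y, x - y)
assert abs(lhs - rhs) < 1e-10
print("4 x^T y = |x+y|^2 - |x-y|^2 verified")
```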
4) Global minimization of a Lipschitz function. Let $\Omega$ be a convex compact set in $\mathbb{R}^n$, and $f$ a Lipschitz function on $\Omega$, with a constant $K$ such that
$$|f(x) - f(y)| \le K |x - y| \quad \text{for all } x, y \in \Omega.$$
If we set
$$g(x, y) = f(y) - K |x - y|,$$
then
$$f(x) = \max_{y \in \Omega} g(x, y) = g(x, x),$$
and hence
$$\min_{x \in \Omega} f(x) = \min_{x \in \Omega} \max_{y \in \Omega} g(x, y).$$
On the basis of this property, the following method has been proposed in [6], [23] (see also [4]), and more recently in [20], for finding the global minimum of $f$ over $\Omega$:
Initialization: Take $x^0 \in \Omega$, set $\Omega_1 = \{x^0\}$, set $i = 1$.

Step 1: Compute $x^i$, a solution of the problem
$$\min_{x \in \Omega} \max_{y \in \Omega_i} g(x, y). \tag{7}$$

Step 2: Set $\Omega_{i+1} = \Omega_i \cup \{x^i\}$, set $i \leftarrow i + 1$ and go back to Step 1.
The crucial step in this procedure is to solve the problem (7). However, strange as it might seem, neither in [6], [23] nor in [20] has it been indicated how to solve this problem. In fact, for every $y \in \Omega_i$ the function $g(\cdot, y)$ is concave, so the function $\varphi_i(x) = \max_{y \in \Omega_i} g(x, y)$ is a d.c. function (Proposition 1), and each problem (7) is the minimization of a d.c. function over the compact convex set $\Omega$. Thus, the above method involves the solution of a sequence of increasingly complicated d.c. programs.
A more efficient approach recently developed by P.T. Thach [30] requires solving only one d.c. program.
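For intuition, here is a minimal one-dimensional sketch of the scheme (7), in the spirit of the Piyavskii saw-tooth method; the inner problem is minimized by brute force on a fine grid, an assumption made purely for illustration, and $f$, $K$ and the interval are hypothetical:

```python
# 1-D sketch of scheme (7): iteratively minimize the saw-tooth
# underestimator  phi_i(x) = max_j ( f(y_j) - K |x - y_j| ).
import numpy as np

f = lambda x: np.sin(3.0 * x) + 0.5 * x     # Lipschitz on [0, 4]
K = 3.5                                     # valid constant: |f'| <= 3.5
grid = np.linspace(0.0, 4.0, 4001)

samples = [0.0]                             # Omega_1 = {x^0}
for i in range(30):
    # phi_i on the grid: max over sample points of f(y) - K|x - y|
    phi = np.max([f(y) - K * np.abs(grid - y) for y in samples], axis=0)
    x_next = grid[np.argmin(phi)]           # Step 1 (approximate)
    samples.append(x_next)                  # Step 2
best = min(samples, key=f)
print("approximate global minimizer:", best, "value:", f(best))
```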
5) Continuous programming. We have seen that, in view of the density of d.c. functions in $C(\Omega)$ (when $\Omega$ is convex compact), any problem of the form
$$\text{minimize } F(x), \quad \text{s.t. } x \in \Omega, \ G_i(x) \le 0 \ (i = 1, \ldots, m),$$
where $F, G_i$ $(i = 1, \ldots, m)$ are continuous functions, can be approximated by a d.c. program. We now go further by showing that any continuous programming problem can actually be converted into a d.c. program. Indeed, assuming, as we may without loss of generality, that the objective function is linear, the problem has the form
$$\text{minimize } c^T x, \quad \text{s.t. } x \in M,$$
where $M$ is a closed subset of $\mathbb{R}^n$. But it is known that for any nonempty closed set $M$, the function
$$x \mapsto d_M^2(x) = \inf \{ |x - y|^2 : y \in M \}$$
is d.c.; more specifically,
$$d_M^2(x) = |x|^2 - \left( |x|^2 - d_M^2(x) \right),$$
where the function $x \mapsto |x|^2 - d_M^2(x)$ is convex (see [14]). Therefore the problem is nothing else than the d.c. program
$$\text{minimize } c^T x, \quad \text{s.t. } d_M^2(x) \le 0.$$
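A quick numerical illustration for a hypothetical finite set $M$: the convex part equals $\max_{y \in M} (2 x^T y - |y|^2)$, a maximum of affine functions of $x$, hence convex:

```python
# Check the identity |x|^2 - d_M^2(x) = max_{y in M} (2 x^T y - |y|^2)
# for a hypothetical finite closed set M, exhibiting d_M^2 as d.c.
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 3))                  # six points in R^3
for _ in range(200):
    x = rng.standard_normal(3)
    d2 = min(np.dot(x - y, x - y) for y in M)    # squared distance to M
    cvx = max(2.0 * np.dot(x, y) - np.dot(y, y) for y in M)
    assert abs((np.dot(x, x) - d2) - cvx) < 1e-10
print("identity |x|^2 - d_M^2(x) = max_y (2<x,y> - |y|^2) verified")
```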
4. SOLUTION METHOD FOR D.C. PROGRAMS

As shown in Section 2, every d.c. program can be reduced to the canonical form
$$(P) \qquad \text{minimize } c^T x, \quad \text{s.t. } x \in \Omega, \ g(x) \ge 0,$$
where $\Omega$ is a convex closed set and $g: \mathbb{R}^n \to \mathbb{R}$ is a convex function. In this and the next section we shall present a method for solving this problem (P).
For the sake of simplicity we shall assume that $\Omega$ is compact, so that an optimal solution of (P) exists. Also, we shall only consider the case where
$$\min \{ c^T x : x \in \Omega \} < \min \{ c^T x : x \in \Omega, \ g(x) \ge 0 \}. \tag{8}$$
Otherwise, the reverse convex constraint $g(x) \ge 0$ would be inessential, i.e., (P) would be equivalent to the convex program
$$\text{minimize } c^T x, \quad \text{s.t. } x \in \Omega, \tag{9}$$
and could be solved by many available efficient algorithms. Thus, we shall assume that a point $w$ is available such that
$$w \in \Omega, \quad g(w) < 0, \tag{10}$$
$$c^T w < \min \{ c^T x : x \in \Omega, \ g(x) \ge 0 \}. \tag{11}$$
Let $G = \{ x : g(x) \le 0 \}$, and for any set $M \subset \mathbb{R}^n$ denote by $\partial M$ the boundary of $M$.

For any feasible point $z$, let $\pi(z)$ be the point where the line segment $[w, z]$ meets $\partial G$. Since $\pi(z) = \theta w + (1 - \theta) z$ with $0 \le \theta < 1$, we have, by virtue of (11),
$$c^T(\pi(z)) = \theta c^T w + (1 - \theta) c^T z \le c^T z,$$
and if $g(z) > 0$ (so that $\theta > 0$), then
$$c^T(\pi(z)) < c^T z.$$
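Since $g$ is convex and $g(w) < 0 \le g(z)$, the segment $[w, z]$ crosses $\partial G$ at exactly one point, which bisection on the sign of $g$ locates. A minimal sketch (the concrete $g$, $w$, $z$ are hypothetical):

```python
# Compute pi(z), the point where the segment [w, z] crosses the boundary
# of G = {x : g(x) <= 0}, by bisection along the segment.
import numpy as np

def pi_point(g, w, z, tol=1e-10):
    """Assumes g convex with g(w) < 0 <= g(z); returns the crossing point."""
    t_lo, t_hi = 0.0, 1.0                 # parameter t along w + t*(z - w)
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if g(w + mid * (z - w)) < 0.0:
            t_lo = mid                    # still in the interior of G
        else:
            t_hi = mid
    return w + t_hi * (z - w)

g = lambda x: float(np.dot(x, x)) - 1.0   # G = closed unit ball
w = np.zeros(2)                           # g(w) = -1 < 0
z = np.array([2.0, 1.0])                  # g(z) = 4 > 0
u = pi_point(g, w, z)
print(u, g(u))                            # u lies on the unit circle, g(u) ~ 0
```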
Proposition 4. Under the above stated assumptions, problem (P) always has an optimal solution lying on $\partial \Omega \cap \partial G$.

Proof. As noticed just above, if a feasible point $z$ is such that $g(z) > 0$, then $c^T(\pi(z)) < c^T z$. Therefore, problem (P) is equivalent to
$$\text{minimize } c^T x, \quad \text{s.t. } x \in \Omega, \ g(x) = 0. \tag{12}$$
Consider now any solution $\bar{x}$ of (12). Take a supporting hyperplane to the convex set $G$ at the point $\bar{x}$, and denote by $S$ its intersection with $\Omega$. If $x'$ is an extreme point of $S$ where the linear function $c^T x$ achieves its minimum over $S$, then $x' \in \partial \Omega$, and since $S \subset \Omega \setminus \text{int } G$, $x'$ is also an optimal solution of (P); hence, from the above, $g(x') = 0$, i.e., $x' \in \partial G$. Thus, in solving (P) we can restrict ourselves to points on $\partial \Omega \cap \partial G$. ∎
The two basic questions to be examined, as mentioned in the Introduction, are the following:

1) Given a feasible solution $x^0$ of (12), i.e., a point $x^0 \in \Omega \cap \partial G$, check whether or not $x^0$ is an $\alpha$-optimal solution of (P).

2) Given a point $x^0 \in \Omega \cap \partial G$ which is not an $\alpha$-optimal solution, find a point $x^1 \in \Omega \cap \partial G$ such that $c^T x^1 \le c^T x^0 - \alpha$. (Here $\alpha$ denotes a prechosen positive number.)
Proposition 5. In order that a feasible solution $x^0$ of (P) be an $\alpha$-optimum, it is necessary and sufficient that
$$0 > \max \{ g(x) : x \in \Omega, \ c^T x \le c^T x^0 - \alpha \}. \tag{13}$$

Proof. Since $\Omega$ is compact, the maximum defined in (13) is always attained. Clearly, (13) holds if and only if
$$\{ x : x \in \Omega, \ g(x) \ge 0, \ c^T x \le c^T x^0 - \alpha \} = \emptyset,$$
i.e., if and only if $x^0$ is an $\alpha$-optimum. ∎

Thus, to check whether or not an $\alpha$-optimum has been achieved at $x^0$, we have to solve the subproblem
$$(Q(x^0)) \qquad \text{maximize } g(x), \quad \text{s.t. } x \in \Omega, \ c^T x \le c^T x^0 - \alpha.$$
This is a convex maximization (i.e., concave minimization) problem and is still a difficult problem of global optimization. One should not wonder at it, though, since to decide the global optimality of $x^0$ one cannot expect to use only local information. The point here is that the problem $(Q(x^0))$ is more tractable than (P) and can be solved with reasonable efficiency by several available algorithms (see e.g. [37], [38], [15], [16]).

Proposition 5 resolves at the same time the two questions 1) and 2). If the maximal value of $g$ in $(Q(x^0))$ is $< 0$, then $x^0$ is an $\alpha$-optimal solution of (P). Otherwise, we find an optimal solution $z^0$ of $(Q(x^0))$, so that
$$z^0 \in \Omega, \quad g(z^0) \ge 0, \quad c^T z^0 \le c^T x^0 - \alpha.$$
If $g(z^0) = 0$, we set $x^1 = z^0$; if $g(z^0) > 0$, we set $x^1 = \pi(z^0)$, the point where the line segment $[w, z^0]$ meets $\partial G$. These results suggest the following method for solving (P).
ALGORITHM 1 (conceptual)

Initialization: Compute a point $x^0 \in \Omega \cap \partial G$ (preferably a local minimum of $c^T x$ over $\Omega \setminus \text{int } G$). Set $k = 0$.

Phase 1: Solve the subproblem
$$(Q(x^k)) \qquad \text{maximize } g(x), \quad \text{s.t. } x \in \Omega, \ c^T x \le c^T x^k - \alpha,$$
to obtain an optimal solution $z^k$. If $g(z^k) < 0$, terminate. Otherwise, go to Phase 2.

Phase 2: Starting from $z^k$, compute a point $x^{k+1} \in \Omega \cap \partial G$ (preferably near a local minimum) such that $c^T x^{k+1} \le c^T z^k \le c^T x^k - \alpha$. Set $k \leftarrow k + 1$ and go to Phase 1.
Proposition 6. The above algorithm terminates after finitely many iterations.

Proof. Obvious, since $c^T x^{k+1} \le c^T x^k - \alpha$ $(k = 1, 2, \ldots)$ and $c^T x$ is bounded below on $\Omega \setminus \text{int } G$. ∎
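As a concrete illustration of the conceptual loop, here is a minimal sketch on a tiny hypothetical instance; the global phase $(Q(x^k))$ is solved by brute force over a grid, a crude stand-in for the convex maximization algorithms cited above:

```python
# Illustrative sketch of Algorithm 1:
#   minimize c^T x  s.t.  x in Omega (a box),  g(x) >= 0  (reverse convex).
import numpy as np

c = np.array([1.0, 1.0])
a = np.array([-0.7, -0.7])
g = lambda x: float(np.dot(x - a, x - a)) - 0.64   # convex; G = disk around a
w = a                                              # g(w) < 0, c^T w below optimum
alpha = 0.01
grid = [np.array([u, v]) for u in np.linspace(-1, 1, 101)
                         for v in np.linspace(-1, 1, 101)]   # Omega = box

def pi_point(z):                                   # segment [w, z] meets bd G
    t_lo, t_hi = 0.0, 1.0
    while t_hi - t_lo > 1e-12:
        m = 0.5 * (t_lo + t_hi)
        t_lo, t_hi = (m, t_hi) if g(w + m * (z - w)) < 0 else (t_lo, m)
    return w + t_hi * (z - w)

x = pi_point(np.array([1.0, 1.0]))                 # initial point on bd G
while True:
    # Phase 1: maximize g over {p in Omega : c^T p <= c^T x - alpha}
    feas = [p for p in grid if np.dot(c, p) <= np.dot(c, x) - alpha]
    z = max(feas, key=g) if feas else None
    if z is None or g(z) < 0:
        break                                      # x is (approximately) alpha-optimal
    # Phase 2: return to the boundary of G, keeping the improvement
    x = pi_point(z) if g(z) > 0 else z
print("alpha-optimal point ~", x, "objective ~", float(np.dot(c, x)))
```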
Remark 1. Algorithm 1 consists of an alternating sequence of "global" and "local" searches. In Phase 1, a global search is carried out to test the current $x^k$ for $\alpha$-optimality and to find a better feasible point $z^k$, if such a point exists. In Phase 2, a local search is used to find a feasible point $x^{k+1}$ on $\partial G$ at least as good as $z^k$. The simplest way is to take $x^{k+1} = \pi(z^k)$, the intersection point of $\partial G$ with the line segment $[w, z^k]$. But in many circumstances it is advantageous to compute a point $x^{k+1}$ close to a local minimum, by using any local minimization algorithm. For example, one can proceed as follows.
Step 0: Let $u^0 = \pi(z^k)$. Set $i = 0$.

Step 1: Take a supporting hyperplane $H^i$ to $G$ at $u^i$:
$$H^i = \{ x : \langle p^i, x - u^i \rangle = 0 \}, \quad p^i \in \partial g(u^i).$$
If $u^i$ is optimal for the convex program
$$\text{minimize } c^T x, \quad \text{s.t. } x \in \Omega \cap H^i,$$
stop and set $x^{k+1} = u^i$. Otherwise, compute an optimal solution $v^i$ of this program.

Step 2: Let $u^{i+1} = \pi(v^i)$. Set $i \leftarrow i + 1$ and return to Step 1.

It can be shown that if the convex function $g$ is Gâteaux differentiable on $\partial G$ (so that there is a unique supporting hyperplane to $G$ at each point of $\partial G$), then the procedure just described converges to a "stationary" point on $\partial G$ (see [43]).
Remark 2. When $\Omega$ is polyhedral, several finite algorithms are available for solving $(Q(x^k))$ ([8], [37], [27], [31]). In this case, $z^k$ is a vertex of the polytope
$$\{ x \in \Omega : c^T x \le c^T x^k - \alpha \}.$$
Therefore, if $g(z^k) > 0$, then in Phase 2, starting from the vertex $z^k$, we can apply the simplex algorithm for minimizing $c^T x$ over this polytope. After a number of pivots we cross the surface $g(x) = 0$: the crossing point will yield $x^{k+1} \in \Omega \cap \partial G$. Algorithm 1 for this case is essentially the same as the one developed earlier in [34].

5. IMPLEMENTABLE ALGORITHM
In the general case, where certain constraints defining $\Omega$ are convex nonlinear, the convex maximization problem $(Q(x^k))$ in Phase 1 cannot be solved exactly by a finite procedure. Therefore, to make Algorithm 1 implementable, we must organize Phase 1 in such a way that either it terminates after finitely many steps or, whenever infinite, it converges to some $\alpha$-optimal solution.

Let $\Omega = \{ x : h(x) \le 0 \}$, where $h: \mathbb{R}^n \to \mathbb{R}$ is a finite convex function (note that such a function is continuous and subdifferentiable everywhere; see e.g. [24]). We shall assume $\text{int } \Omega \ne \emptyset$. This condition, along with the assumptions already made in the previous section, entails the existence of a point $w$ satisfying (10), (11) and such that
$$w \in \text{int } \Omega, \text{ i.e., } h(w) < 0.$$
ALGORITHM 2 (implementable)

Initialization: Compute a point $x^0 \in \Omega \cap \partial G$ (preferably a local minimum of $c^T x$ over $\Omega \setminus \text{int } G$). Select a polytope $S$ containing the convex compact set $\Omega$, such that the set $V$ of all vertices of $S$ is known and $|V|$ is small. Set $k = 0$.

I. Phase 1

Set $S^1 = S \cap \{ x : c^T x \le c^T x^k - \alpha \}$. Compute the set $V^1$ of all vertices of $S^1$. Set $i = 1$.

Step 1: Compute
$$v^i = \arg\max \{ g(x) : x \in V^i \}. \tag{14}$$
If $g(v^i) < 0$, terminate. Otherwise, go to Step 2.

Step 2: Compute $u^i = \pi(v^i)$, the intersection point of the line segment $[w, v^i]$ with $\partial G$.

a) If $u^i \in \Omega$, i.e., $h(u^i) \le 0$, set $z^k = u^i$. Reset $S = S^i$, $V = V^i$ and go to Phase 2.

b) If $u^i \notin \Omega$, i.e., $h(u^i) > 0$, find the point $y^i$ where the line segment $[w, u^i]$ meets $\partial \Omega$, select $p^i \in \partial h(y^i)$ and generate the new constraint
$$l_i(x) = \langle p^i, x - y^i \rangle \le 0. \tag{15}$$
Let $S^{i+1} = S^i \cap \{ x : l_i(x) \le 0 \}$. Compute the set $V^{i+1}$ of all vertices of $S^{i+1}$. Set $i \leftarrow i + 1$ and go back to Step 1.
II. Phase 2

Starting from $z^k$, compute a point $x^{k+1} \in \Omega \cap \partial G$ (preferably close to a local minimum) such that $c^T x^{k+1} \le c^T z^k$. Set $k \leftarrow k + 1$ and go to Phase 1 (with $S$, $V$ as defined in Step 2a of the previous Phase 1).
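To see the mechanics of Phase 1 at work, here is a minimal two-dimensional sketch; polygon vertices are maintained by half-plane clipping, a simple 2-D stand-in for the vertex-enumeration procedure of [31], [37], and the instance is hypothetical:

```python
# 2-D sketch of Phase 1 of Algorithm 2: maximize g over the vertices of an
# outer-approximating polygon and cut infeasible maximizers off via (15).
import numpy as np

h = lambda x: float(np.dot(x, x)) - 1.0            # Omega = unit disk, h <= 0
dh = lambda x: 2.0 * x                             # subgradient of h
b = np.array([0.5, 0.0])
g = lambda x: float(np.dot(x + b, x + b)) - 1.44   # reverse convex side: g >= 0
w = np.array([0.0, 0.0])                           # h(w) < 0 and g(w) < 0

def clip(poly, a, beta):
    """Vertices of convex polygon 'poly' intersected with {x : a.x <= beta}."""
    out = []
    for p, q in zip(poly, poly[1:] + poly[:1]):
        pin, qin = np.dot(a, p) <= beta, np.dot(a, q) <= beta
        if pin:
            out.append(p)
        if pin != qin:                             # edge crosses the cut line
            t = (beta - np.dot(a, p)) / np.dot(a, q - p)
            out.append(p + t * (q - p))
    return out

def cross(fun, z):                                 # point of [w, z] with fun = 0
    t_lo, t_hi = 0.0, 1.0
    while t_hi - t_lo > 1e-12:
        m = 0.5 * (t_lo + t_hi)
        t_lo, t_hi = (m, t_hi) if fun(w + m * (z - w)) < 0 else (t_lo, m)
    return w + t_hi * (z - w)

# S^1: a box around Omega already cut by c^T x <= c^T x^k - alpha (here x1 <= 0)
poly = [np.array(v) for v in [(-1.5, -1.5), (0.0, -1.5), (0.0, 1.5), (-1.5, 1.5)]]
for i in range(100):
    v = max(poly, key=g)                           # step (14)
    if g(v) < 0:
        print("alpha-optimality of x^k certified after", i, "cuts"); break
    u = cross(g, v)                                # u^i = pi(v^i)
    if h(u) <= 0:
        print("improved feasible point z^k =", u); break
    y = cross(h, u)                                # [w, u^i] meets bd Omega
    p = dh(y)
    poly = clip(poly, p, float(np.dot(p, y)))      # add cut (15), update vertices
```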
Remark 3. Since $S^{i+1}$ differs from $S^i$ by just one additional linear constraint, the set $V^{i+1}$ of all vertices of $S^{i+1}$ can be computed from the knowledge of the set $V^i$, using e.g. the procedure developed in [31] (see also [37]).

Lemma 1. In every iteration $k$, the constraint (15) separates $u^i$ and $v^i$ strictly from
$$\Omega_k = \{ x \in \Omega : c^T x \le c^T x^k - \alpha \}, \tag{16}$$
that is,
$$l_i(x) \le 0 \ \text{ for all } x \in \Omega_k, \qquad l_i(u^i) > 0, \quad l_i(v^i) > 0. \tag{17}$$

Proof. Suppose $k = 0$. Clearly $S^1 \supset \Omega_0$. From the definition of a subgradient,
$$l_i(x) = \langle p^i, x - y^i \rangle \le h(x) - h(y^i) = h(x) \quad (\text{since } h(y^i) = 0).$$
Hence $l_i(x) \le 0$ for every $x$ satisfying $h(x) \le 0$, i.e., for every $x \in \Omega$. Further, $l_i(y^i) = 0$ and $l_i(w) \le h(w) < 0$; hence, noting that $u^i = w + \theta(y^i - w)$ with $\theta > 1$, we deduce
$$l_i(u^i) = (1 - \theta) l_i(w) > 0.$$
Then, since $v^i = w + \lambda(u^i - w)$ with $\lambda \ge 1$, we also have
$$l_i(v^i) = (1 - \lambda) l_i(w) + \lambda l_i(u^i) > 0.$$
Thus (17) holds for $k = 0$. By induction it is easily seen that $S^i \supset \Omega_k$, and hence (17) holds, for every iteration $k$. ∎
Proposition 7. If Algorithm 2 terminates at some iteration $k$, then $x^k$ is an $\alpha$-optimal solution of (P).

Proof. Algorithm 2 terminates when $g(v^i) < 0$. But from (14) and the convexity of $g$ it follows that
$$0 > \max \{ g(x) : x \in S^i \}.$$
On the other hand, as seen in the proof of the previous lemma, $\Omega_k \subset S^i$. Hence
$$0 > \max \{ g(x) : x \in \Omega, \ c^T x \le c^T x^k - \alpha \}.$$
This implies the $\alpha$-optimality of $x^k$, by Proposition 5. ∎
Proposition 8. If for some iteration $k$ Phase 1 is infinite, then any cluster point $\bar{u}$ of the generated sequence $\{u^i\}$ satisfies
$$\bar{u} \in \partial \Omega \cap \partial G, \qquad 0 = \max \{ g(x) : x \in \Omega_k \}. \tag{18}$$

Proof. Observe that if Phase 1 is infinite, then Step 2a never occurs, and so $h(u^i) > 0$ for every $i$. Therefore $u^i$ always lies between $y^i$ and $v^i$ on the line segment $[w, v^i]$. Consider now any cluster point $\bar{u}$ of $\{u^i\}$, say $\bar{u} = \lim_{\nu \to \infty} u^{i_\nu}$. By taking a subsequence if necessary, we may assume that the sequence $\{v^{i_\nu}\}$ converges to some $\bar{v}$. But if Phase 1 is infinite, it amounts exactly to applying the outer approximation algorithm ([15], [37]) to the problem of maximizing $g(x)$ over the convex set $\Omega_k$ defined by (16). Therefore $\bar{v}$ must be an optimal solution of the latter problem, i.e.,
$$\bar{v} \in \Omega_k, \qquad g(\bar{v}) = \max \{ g(x) : x \in \Omega_k \}.$$
Since $v^i \notin \Omega$ for every $i$, this implies $\bar{v} \in \partial \Omega$, and hence $\bar{v} = \bar{u}$. On the other hand, since $u^i \in \partial G$ for every $i$, it follows that $\bar{u} \in \partial G$, i.e., $g(\bar{u}) = 0$. This completes the proof of (18). ∎
It follows from Propositions 8 and 5 that $x^k$ is then not $\alpha$-optimal. However, since $c^T \bar{u} \le c^T x^k - \alpha$, (18) implies
$$0 = \max \{ g(x) : x \in \Omega, \ c^T x \le c^T \bar{u} \}. \tag{19}$$
Observe that if, instead of this equality, we had
$$0 > \max \{ g(x) : x \in \Omega, \ c^T x < c^T \bar{u} \},$$
we would be able to conclude that $\bar{u}$ is a globally optimal solution of (P), for this would simply mean the inconsistency of the system
$$x \in \Omega, \quad g(x) \ge 0, \quad c^T x < c^T \bar{u}.$$
Since, unfortunately, we have only (19), in order to conclude on the global optimality of $\bar{u}$ we must make the following additional assumption about the problem (P).
Definition 5. Problem (P) is said to be stable if
$$\lim_{\varepsilon \downarrow 0} \min \{ c^T x : x \in \Omega, \ g(x) \ge \varepsilon \} = \min \{ c^T x : x \in \Omega, \ g(x) \ge 0 \}.$$
Proposition 9. If problem (P) is stable, then in order that a feasible solution $\bar{x}$ of (P) be a global optimum it is necessary and sufficient that
$$0 = \max \{ g(x) : x \in \Omega, \ c^T x \le c^T \bar{x} \}. \tag{20}$$

Proof. If there is a point $x \in \Omega$ such that $c^T x \le c^T \bar{x}$ and $g(x) > 0$, then $\pi(x)$ is a feasible point such that $c^T(\pi(x)) < c^T x \le c^T \bar{x}$. Therefore, (20) must hold for any optimal solution $\bar{x}$. Conversely, suppose we have (20). This means that there is no point $x \in \Omega$ satisfying $c^T x \le c^T \bar{x}$, $g(x) > 0$. Hence, for $\varepsilon > 0$, if $x^\varepsilon$ is an optimal solution of the perturbed problem
$$\min \{ c^T x : x \in \Omega, \ g(x) \ge \varepsilon \},$$
then $c^T x^\varepsilon > c^T \bar{x}$. Letting $\varepsilon \downarrow 0$, we obtain from the stability assumption $\gamma \ge c^T \bar{x}$, where $\gamma$ is the optimal value in (P). Therefore, $\bar{x}$ is optimal for (P). ∎
Corollary. If problem (P) is stable, then, in the case where Phase 1 is infinite, any cluster point $\bar{u}$ of $\{u^i\}$ yields an optimal solution of (P).
solution to (PI. Remark 4. It follows from the proof of Proposition 8 that, in the case where
Phase 1 is infinite, g ( v 2 )
+
0. If we stop this phase at some step where
< c, then according to (14) for every z E R such that cTz 5 c T z k - a, we must have g(z) 5 g ( v a ) < 6 . Hence, there is no z E R satisfying g(z) 1 6 and cTz 5 cTzk -a. This shows that zk is an approximate a-optimal solution g(v2)
in the following sense cTzk - a
< min { c T z : z E R, g(z) 2 c } .
This conclusion is independent of the stability assumption. Remark 5. A first algorithm somewhat different from the algorithm presented
above was given in 1431. There also exist solution methods to problem (P) which bypass the stability condition (see e.g. Thach [28]). Using certain ideas in the latter paper one can propose the following variant of Algorithm 2. Let h + ( z ) = max{O,h(z)}.
Then h + ( z ) is also a convex function, and
since h + ( z ) = 0 for z E R , it is easily seen, by virtue of Proposition 5 , that a feasible solution zo to (P)is a-optimal if and only if
o > max { g(z)+ h+ (z) : z E 0, c T z 5 c T z 0 - a } . Let us now modify Phase 1 in Algorithm 2 as follows.
Phase 1

Set $S^1 = S \cap \{ x : c^T x \le c^T x^k - \alpha \}$. Compute the set $V^1$ of all vertices of $S^1$. Set $i = 1$.

Step 1: Compute
$$v^i = \arg\max \{ g(x) + h^+(x) : x \in V^i \}.$$
If $g(v^i) + h^+(v^i) < 0$, terminate. Otherwise, go to Step 2.
Step 2: We have $g(v^i) + h^+(v^i) \ge 0$.

a) If $\max \{ g(v^i), h(v^i) \} = 0$, then $h^+(v^i) = 0$ and $g(v^i) = 0$, i.e., $v^i \in \Omega \cap \partial G$; so we set $z^k = v^i$, reset $S = S^i$, $V = V^i$ and go to Phase 2.

b) If $\max \{ g(v^i), h(v^i) \} > 0$, then $v^i \notin \Omega \cap G$, and we can find the intersection point $y^i$ of the line segment $[w, v^i]$ with the boundary of $\Omega \cap G$. Let $p^i$ be a subgradient of the convex function $\max \{ g, h \}$ at $y^i$. Generate the new constraint
$$l_i(x) = \langle p^i, x - y^i \rangle \le 0.$$
Let $S^{i+1} = S^i \cap \{ x : l_i(x) \le 0 \}$. Compute the set $V^{i+1}$ of all vertices of $S^{i+1}$ (from the knowledge of $V^i$). Set $i \leftarrow i + 1$ and go back to Step 1.
By an argument analogous to that used previously, we can then prove that:

1) If Phase 1 terminates at Step 1, then $x^k$ is an $\alpha$-optimal solution of (P);

2) If Phase 1 is infinite, any cluster point $\bar{v}$ of $\{v^i\}$ satisfies
$$\bar{v} \in \partial \Omega \cap \partial G, \qquad 0 = \max \{ g(x) + h^+(x) : x \in \Omega_k \}.$$
Furthermore, it turns out that:
Proposition 10. Let
$$\tilde{v}^i = \arg\min \{ c^T x : x \in V^i, \ g(x) \ge 0 \},$$
and assume that $g$ is strictly convex. If Phase 1 is infinite, then any cluster point $\tilde{v}$ of the sequence $\{\tilde{v}^i\}$ is an optimal solution of (P).

Proof. Since $g(\tilde{v}^i) \ge 0$, it follows that $g(\tilde{v}) \ge 0$. But
$$g(\tilde{v}^i) + h^+(\tilde{v}^i) \le g(v^i) + h^+(v^i)$$
by the definition of $v^i$. Therefore
$$g(\tilde{v}) + h^+(\tilde{v}) \le g(\bar{v}) + h^+(\bar{v}) = 0,$$
where $\bar{v}$ is the corresponding cluster point of $\{v^i\}$. This implies $g(\tilde{v}) = 0$, $h^+(\tilde{v}) = 0$, i.e., $\tilde{v} \in \partial \Omega \cap \partial G$.

Take now any optimal solution $\bar{x}$ of (P). Then, by Proposition 4, $\bar{x} \in \partial \Omega \cap \partial G$. Let $q \in \partial g(\bar{x})$. Since $\bar{x} \in S^i$ (see Lemma 1), there is a vertex $z^i$ of $S^i$ in the half space $\{ x : \langle q, x - \bar{x} \rangle \ge 0 \}$. Let $\tilde{z}$ be a cluster point of $\{z^i\}$. Then, since
$$g(z^i) \ge g(\bar{x}) + \langle q, z^i - \bar{x} \rangle \ge 0,$$
it follows, in a manner analogous to the case of $\tilde{v}$, that $g(\tilde{z}) = 0$, $h^+(\tilde{z}) = 0$, i.e., $\tilde{z} \in \partial \Omega \cap \partial G$. But then
$$\langle q, \tilde{z} - \bar{x} \rangle \le g(\tilde{z}) - g(\bar{x}) = 0,$$
and since $\langle q, z^i - \bar{x} \rangle \ge 0$ implies $\langle q, \tilde{z} - \bar{x} \rangle \ge 0$, it follows that $\langle q, \tilde{z} - \bar{x} \rangle = 0$. This equality, along with the relation $g(\tilde{z}) = 0$, shows that $\tilde{z}$ is an intersection point of $G$ and the supporting hyperplane $H = \{ x : \langle q, x - \bar{x} \rangle = 0 \}$. Since, however, $g$ is strictly convex, we must have $H \cap G = \{\bar{x}\}$, hence $\tilde{z} = \bar{x}$. Noting that, from the definition of $\tilde{v}^i$,
$$c^T \tilde{v}^i \le c^T z^i,$$
we then conclude $c^T \tilde{v} \le c^T \bar{x}$, and so $\tilde{v}$ is actually optimal for (P). ∎
6. CONCLUSION
We have presented a general approach to global optimization, the main points of which can be summarized as follows:

1) A large majority of the mathematical programming problems of interest actually involve d.c. functions.

2) Any d.c. programming problem can be reduced to a canonical form in which all the nonconvexity of the problem is confined to a single reverse convex constraint.

3) Canonical d.c. programming problems can be solved by algorithms of the same complexity as outer approximation methods for convex maximization problems.

4) By restricting the problem to the search for an $\alpha$-optimum, it is possible to devise a flexible solution method in which local searches alternate with global searches.

Hopefully, the above approach, properly combined with local and stochastic approaches, will help to handle a class of global optimization problems which otherwise would be very difficult to solve.
REFERENCES

[1] F.A. AL-KHAYYAL, J.E. FALK, Jointly constrained biconvex programming, Math. Oper. Res., 8 (1983), 273-286.

[2] M. AVRIEL, A.A. WILLIAMS, Complementary geometric programming, SIAM J. Appl. Math., 19 (1970), 125-141.

[3] M.C. BOHRINGER, S.E. JACOBSEN, Convergent cutting planes for linear programs with additional reverse convex constraints, Lecture Notes in Control and Information Sciences, 59, System Modelling and Optimization, Proc. 11th IFIP Conference, Copenhagen (1983), 263-272.
[4] V.P. BULATOV, Embedding methods in optimization problems, Nauka, Novosibirsk (1977) (Russian).

[5] NGUYEN DINH DAN, On the characterization and the decomposition of d.c. functions, Preprint, Institute of Mathematics, Hanoi (1985).

[6] Yu.M. DANILIN, S.A. PIYAVSKII, On an algorithm for finding the absolute minimum, in: Theory of Optimal Decisions, 2, Kiev, Institute of Cybernetics (1967) (Russian).

[7] R. ELLAIA, Contribution à l'analyse et l'optimisation de différences de fonctions convexes, Thèse de Doctorat 3ème cycle, Université Paul Sabatier, Toulouse (1984).

[8] J.E. FALK, K.R. HOFFMAN, A successive underestimation method for concave minimization problems, Math. Oper. Res., 1 (1976), 251-259.

[9] W. FORST, Algorithms for optimization problems of computer aided coordinate measurement techniques, 9th Symposium on Operations Research, Osnabrück, August (1984).

[10] F. GIANNESSI, L. JURINA, G. MAIER, Optimal excavation profile for a pipeline freely resting on the sea floor, Eng. Struct., 1 (1979), 81-91.

[11] B. HERON, M. SERMANGE, Nonconvex methods for computing free boundary equilibria of axially symmetric plasmas, Appl. Math. Optim., 8 (1982), 351-382.

[12] R.J. HILLESTAD, S.E. JACOBSEN, Reverse convex programming, Appl. Math. Optim., 6 (1980), 63-78.

[13] R.J. HILLESTAD, S.E. JACOBSEN, Linear programs with an additional reverse convex constraint, Appl. Math. Optim., 6 (1980), 257-269.

[14] J.-B. HIRIART-URRUTY, Generalized differentiability, duality and optimization for problems dealing with differences of convex functions, to appear in Lecture Notes in Mathematics, Springer-Verlag (1985).
[15] K.L. HOFFMAN, A method for globally minimizing concave functions over convex sets, Math. Programming, 20 (1981), 22-32.

[16] R. HORST, An algorithm for nonconvex programming problems, Math. Programming, 10 (1976), 312-321.

[17] L.A. ISTOMIN, A modification of Hoang Tuy's method for minimizing a concave function over a polytope, Ž. Vyčisl. Mat. i Mat. Fiz., 17 (1977), 1592-1597 (Russian).

[18] R. KLEMPOUS, J. KOTOWSKI, J. LUASIEWICZ, Algorytm wyznaczania optymalnej strategii wspoldzialania zbiornikow sieciowych z systemem wodociagowym, Zeszyty Naukowe Politechniki Slaskiej, Seria Automatyka, 69 (1983), 27-35.

[19] Z. MAHJOUB, Contribution à l'étude de l'optimisation des réseaux maillés, Thèse d'Etat, Institut National Polytechnique de Toulouse (1983).

[20] D.Q. MAYNE, E. POLAK, Outer approximation algorithm for nondifferentiable optimization problems, Journal of Optimization Theory and Applications, 42 (1984), 19-30.

[21] R. MEYER, The validity of a family of optimization methods, SIAM J. Control, 8 (1970), 41-54.

[22] B.M. MUKHAMEDIEV, Approximate methods for solving the concave programming problem, Ž. Vyčisl. Mat. i Mat. Fiz., 22 (1982), 727-731 (Russian).

[23] S.A. PIYAVSKII, Algorithms for finding the absolute minimum of a function, in: Theory of Optimal Decisions, 2, Kiev, Institute of Cybernetics (1964) (Russian).

[24] R.T. ROCKAFELLAR, Convex Analysis, Princeton Univ. Press, Princeton, New Jersey (1970).

[25] R.T. ROCKAFELLAR, Favorable classes of Lipschitz continuous functions in subgradient optimization, Working paper, IIASA (1981).
[26] J.B. ROSEN, Iterative solution of nonlinear optimal control problems, SIAM J. Control, 4 (1966), 223-244.

[27] J.B. ROSEN, Global minimization of a linearly constrained concave function by partition of feasible domain, Math. Oper. Res., 8 (1983), 215-230.

[28] P.T. THACH, Convex programs with several additional reverse convex constraints, Preprint, Institute of Mathematics, Hanoi (1984).

[29] P.T. THACH, The design centering problem as a d.c. program, Preprint, Institute of Mathematics, Hanoi (1985).

[30] P.T. THACH, H. TUY, Global optimization under Lipschitzian constraints, Preprint, Institute of Mathematics, Hanoi (1985).

[31] T.V. THIEU, B.T. TAM, V.T. BAN, An outer approximation method for globally minimizing a concave function over a compact convex set, IFIP Working Conference on Recent Advances on System Modelling and Optimization, Hanoi (1983).

[32] Ng.V. THOAI, H. TUY, Convergent algorithms for minimizing a concave function, Math. Oper. Res., 5 (1980), 556-566.

[33] Ng.V. THOAI, On convex programming problems with additional constraints of complementarity type, CORE Discussion Paper no. 8508 (1985).

[34] Ng.V. THUONG, H. TUY, A finite algorithm for solving linear programs with an additional reverse convex constraint, Proc. Conference on Nondifferentiable Optimization, IIASA (1984).

[35] J.F. TOLAND, A duality principle for nonconvex optimization and the calculus of variations, Arch. Rat. Mech. Anal., 71 (1979), 41-61.

[36] H. TUY, Concave programming under linear constraints, Dokl. Akad. Nauk SSSR, 159 (1964), 32-35; English translation in Soviet Mathematics, 5 (1964), 1437-1440.
[37] H. TUY, On outer approximation methods for solving concave minimization problems, Report no. 108 (1983), Forschungsschwerpunkt Dynamische Systeme, Univ. Bremen; Acta Math. Vietnamica, 8, 2 (1983), 3-34.

[38] H. TUY, T.V. THIEU, Ng.Q. THAI, A conical algorithm for globally minimizing a concave function over a closed convex set, Math. Oper. Res., forthcoming.

[39] H. TUY, Global minimization of a concave function subject to mixed linear and reverse convex constraints, IFIP Working Conference on Recent Advances on System Modelling and Optimization, Hanoi (1983).

[40] H. TUY, Global minimization of a difference of two convex functions, Selected Topics in Operations Research and Mathematical Economics, Lecture Notes in Economics and Mathematical Systems, Springer-Verlag, 226 (1984), 98-118.

[41] H. TUY, Ng.V. THUONG, Minimizing a convex function over the complement of a convex set, Proc. 9th Symposium on Operations Research, Osnabrück (1984).

[42] H. TUY, Concave minimization under linear constraints with special structure, Optimization, 16 (1985), 2-18.

[43] H. TUY, Convex programs with an additional reverse convex constraint, Journal of Optimization Theory and Applications, forthcoming.

[44] U. UEING, A combinatorial method to compute a global solution of certain nonconvex optimization problems, in: Numerical Methods for Nonlinear Optimization, Ed. F.A. Lootsma, Academic Press, New York (1972), 223-230.

[45] N.S. VASILIEV, Active computing method for finding the global minimum of a concave function, Ž. Vyčisl. Mat. i Mat. Fiz., 23 (1983), 152-156 (Russian).
[46] L.M. VIDIGAL, S.W. DIRECTOR, A design centering algorithm for nonconvex regions of acceptability, IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, CAD-1 (1982), 13-24.

[47] Z.Q. XIA, J.-J. STRODIOT, V.H. NGUYEN, Some optimality conditions for the CM-embedded problem with Euclidean norm, IIASA Workshop on Nondifferentiable Optimization: Motivation and Applications, Sopron (1984).

[48] A.B. ZALESSKY, Non-convexity of admittable areas and optimization of economic decisions, Ekonomika i Mat. Metody, XVI (1980), 1069-1080 (Russian).

[49] A.B. ZALESSKY, On optimal assessments under non-convex feasible solutions areas, Ekonomika i Mat. Metody, XVII (1981), 651-667 (Russian).