European Journal of Operational Research 44 (1990) 395-409 North-Holland
Theory and Methodology
A finite cutting plane method for solving linear programs with an additional reverse convex constraint

János FÜLÖP
Department of Operations Research, Computer and Automation Institute, Hungarian Academy of Sciences, H-1502 Budapest, Kende u. 13-17, Hungary
Abstract: The paper deals with linear programs with an additional reverse convex constraint. If the feasible region of this problem is nonempty and the objective function to be minimized is bounded below on it, then the problem has a finite minimum obtained on an at most one-dimensional face of the polyhedron as well. This paper presents a finite cutting plane method using convexity and disjunctive cuts for solving the considered problem. The method is based upon a procedure which, for a given nonnegative integer q, either finds an at most q-dimensional face of the original polyhedron which has a point feasible to the cuts generated previously or proves that no such face exists. Computational experience is also provided.

Keywords: Nonconvex programming, reverse convex constraint, cutting plane methods, general set covering problem

1. Introduction

In this paper we consider the following mathematical programming problem:
min  cᵀx
s.t. Ax = b,        (1.1)
     x ≥ 0,         (1.2)
     g(x) ≥ 0,      (1.3)
Received February 1988; revised December 1988
0377-2217/90/$3.50 © 1990, Elsevier Science Publishers B.V. (North-Holland)

where A is an m × n matrix, c is an n-vector, b is an m-vector, g : Rⁿ → R is a continuous quasiconvex function, and x ∈ Rⁿ. If only problem (1.1)-(1.2) were considered, we would have a linear programming problem. Let P = {x ∈ Rⁿ | Ax = b, x ≥ 0} and G = {x ∈ Rⁿ | g(x) ≥ 0}. If the constraint (1.3) were changed to g(x) ≤ 0, we would obtain a convex programming problem. Constraint (1.3) is called a reverse convex constraint. As a consequence of (1.3), the set P ∩ G of the feasible points may be nonconvex. Geometrically, P ∩ G is the intersection of a convex polyhedron and the complement of an open convex set. The polyhedron P is assumed to be nonempty.

Numerous economic and engineering problems can be formulated as (1.1)-(1.3), e.g. [1,5,6,16,21,23]. Several methods have been published for solving linear programs with an additional reverse convex constraint. For the case of differentiable and convex g, Rosen [23], Avriel and Williams [1], and Meyer [21] presented iterative linearization methods converging to Kuhn-Tucker points. Bansal and Jacobsen [5,6] considered a special network flow problem of form (1.1)-(1.3). It is well known that if P ∩ G is nonempty and bounded, then (1.1) has a finite optimum obtained
on an at most one-dimensional face of P as well [16,18]. This means that for a polyhedron P having more than one point, the optimum is obtained on an edge of P as well. This property will be slightly extended to the case when P ∩ G is nonempty and cᵀx is bounded below over P ∩ G. Several methods for solving (1.1)-(1.3) are based upon this property, see e.g. the papers of Ueing [34], Hillestad [16], Hillestad and Jacobsen [18], and Thuong and Tuy [29]. Ueing [34] reduces (1.1)-(1.3) to convex programming problems. A branch and bound type method is proposed by Hillestad [16]. Hillestad and Jacobsen [18] obtain the optimal solution by a partial enumeration of the edges of P. Thuong and Tuy [29] solve (1.1)-(1.3) via linear programming and concave minimization problems. A convergent branch and bound method is proposed by Muu [22]. Generalizations of linear programs with an additional reverse convex constraint are considered by Forgó [8], Hillestad and Jacobsen [17], Thach [28], and Tuy [32,33].

The method to be presented in this paper is also based upon the property that it is sufficient to search for an optimal solution of (1.1)-(1.3) only on the at most one-dimensional faces of P. However, the idea of Forgó [8], and Hillestad and Jacobsen [17], to apply convexity cuts to exclude points with g(x) < 0, is also used. Since a cutting plane method using only convexity cuts may be nonconvergent, the finiteness of our method is assured by a procedure which either finds an at most q-dimensional face of P which has a point feasible to the cuts generated previously or proves that no such face exists. This procedure itself is a finite cutting plane algorithm, and it is a generalization of the procedure presented in [9] for the case q = 0. In a forthcoming paper [10], the procedure mentioned above is used for solving the generalized lattice point problem [13,24,25]. In Sections 2 and 3, the procedure mentioned above is presented.
We present a finite cutting plane method for solving (1.1)-(1.3) in Section 4. Section 5 provides computational experience.
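The geometric picture described above, a polyhedron intersected with the complement of an open convex set, can be illustrated with a small numeric sketch. The function g and the test points below are invented for illustration and are not taken from the paper:

```python
import numpy as np

# With the convex (hence quasiconvex) function g(x) = x1^2 + x2^2 - 1, the
# reverse convex constraint g(x) >= 0 keeps only points outside the open
# unit disk, so intersecting it with a polyhedron P can destroy convexity.
def g(x):
    return x[0] ** 2 + x[1] ** 2 - 1.0

a = np.array([1.0, 0.0])     # on the unit circle: g(a) = 0, feasible
b = np.array([-1.0, 0.0])    # also feasible
mid = 0.5 * (a + b)          # midpoint of two feasible points

print(g(a) >= 0, g(b) >= 0)  # True True
print(g(mid) >= 0)           # False: the midpoint violates g >= 0
```

Two feasible points with an infeasible midpoint show that P ∩ G is nonconvex here.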
2. Generating a relevant or an irrelevant face

We assume that rank A = m. Let N = {1,...,n} and Z ⊂ N. The set

P_Z = {x ∈ P | x_j = 0, ∀j ∈ Z}

is a face of the polyhedron P [27]. Let the n-vectors (a¹)ᵀ,...,(aᵐ)ᵀ denote the rows of A, and let the m-vectors a_1,...,a_n denote its columns. For a face P_Z ≠ ∅, the dimension of P_Z is the dimension of the linear manifold spanned by P_Z [27]. Throughout this paper, the dimension of P_Z is denoted by dim P_Z. If dim P_Z = 0, then P_Z is a vertex of P. In case of dim P_Z = 1, P_Z is an edge of P. Since rank A = m, we have 0 ≤ dim P_Z ≤ n − m for every nonempty face P_Z of P. Let F_q denote the union of the at most q-dimensional faces of P.

Let inequalities be given in the form

Hx ≤ h,                                             (2.1)

where H is a k × n matrix and h is a k-vector. The cutting plane constraints generated during the method are stored in the form (2.1). Let Q = {x ∈ Rⁿ | Hx ≤ h}. Clearly, x ∈ Q if and only if there exists x_S such that

Hx + I x_S = h,   x_S ≥ 0,

where x_S ∈ Rᵏ, x_S = (x_{n+1},...,x_{n+k})ᵀ, S = {n+1,...,n+k}, and I is the k × k unit matrix. Fix an integer q such that 0 ≤ q ≤ n − m. In what follows, we present a procedure which either finds an at most q-dimensional face P_Z such that P_Z ∩ Q ≠ ∅ or proves that F_q ∩ Q = ∅.

Definition 2.1. The face P_Z of the polyhedron P is called relevant if P_Z ∩ Q ≠ ∅ and dim P_Z ≤ q. The face P_Z of the polyhedron P is called irrelevant if P_Z ∩ Q ≠ ∅ and P_Z ∩ Q ∩ F_q = ∅.

The purpose of the procedure mentioned above is to find a relevant face. A face P_Z may be neither relevant nor irrelevant. If we find an irrelevant face, then it is beneficial to exclude its points from the further search. This is performed by the face cut presented in the next section.

Consider the constraint system

Ax = b,          x ≥ 0,
Hx + I x_S = h,  x_S ≥ 0.                           (2.2)

Now, x ∈ P ∩ Q if and only if x and x_S = h − Hx fulfil (2.2). Let B be a feasible basis of (2.2), and let I_B ⊂ N ∪ S denote the index set of the basic
variables. Let I_R = N ∪ S \ I_B denote the index set of the nonbasic variables.

Theorem 2.1. There exists a relevant face if and only if there exists a feasible basis B of (2.2) such that |N ∩ I_B| ≤ m + q.

Proof. Let P_Z be a relevant face determined by Z ⊂ N. Let q̃ = dim P_Z; clearly q̃ ≤ q. Let Z̃ = {j ∈ N | x_j = 0, ∀x ∈ P_Z}. It is easy to see that Z ⊂ Z̃ and P_Z = P_Z̃. Then

q̃ = dim P_Z = n − rank[{a¹,...,aᵐ} ∪ {e_j | j ∈ Z̃}],

where e_j denotes the j-th unit vector [7,30]. Since rank A = m, there exists Ẑ ⊂ Z̃ such that |Ẑ| = n − m − q̃ and the vectors of {a¹,...,aᵐ} ∪ {e_j | j ∈ Ẑ} are linearly independent. From this it follows that rank{a_i | i ∈ N \ Ẑ} = m. The rank of the constraint matrix

[ A  0 ]
[ H  I ]                                            (2.3)

of (2.2) is m + k. Consider the constraint system obtained from (2.2) by deleting the variables x_i, i ∈ Ẑ, together with the columns and nonnegativity constraints belonging to them. The rank of the constraint matrix of this system is also m + k. Since P_Z̃ ∩ Q ≠ ∅, this reduced (2.2) has a basic feasible solution. Let B be a feasible basis of the reduced (2.2). This B is also a feasible basis of the original (2.2) on which this temporary reduction has been performed. Since Ẑ ⊂ I_R = N ∪ S \ I_B, we obtain |N ∩ I_B| ≤ n − |Ẑ| = m + q̃ ≤ m + q.

Conversely, assume that B is a feasible basis of (2.2) such that |N ∩ I_B| ≤ m + q. Choose an index set Z ⊂ N \ I_B such that |Z| ≥ n − m − q. We show that the face P_Z determined by this Z is relevant. The rank of the matrix obtained from (2.3) by deleting the columns indexed from Z is also m + k. From this it follows that rank{a_i | i ∈ N \ Z} = m; in addition,

rank[{a¹,...,aᵐ} ∪ {e_j | j ∈ Z}] = m + |Z|.

Since the x-part of the basic feasible solution determined by B is in the face P_Z, we get P_Z ∩ Q ≠ ∅ and dim P_Z ≤ n − m − |Z| ≤ q. This means that P_Z is a relevant face. □
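The dimension formula used in the proof can be checked numerically. The following sketch (with an invented matrix A and invented index sets, assuming rank A = m and P_Z ≠ ∅) computes dim P_Z = n − rank[{a¹,...,aᵐ} ∪ {e_j | j ∈ Z}]:

```python
import numpy as np

# Face dimension via the rank formula from the proof of Theorem 2.1.
# A and Z below are illustrative only (zero-based column indices).
def face_dimension(A, Z):
    m, n = A.shape
    unit_rows = np.eye(n)[sorted(Z)]          # rows e_j for j in Z
    stacked = np.vstack([A, unit_rows]) if Z else A
    return n - np.linalg.matrix_rank(stacked)

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])          # rank A = m = 2, n = 4
print(face_dimension(A, set()))               # 2 = n - m: the whole polyhedron
print(face_dimension(A, {2, 3}))              # 0: fixing x3 = x4 = 0 gives a vertex
```

This mirrors the statement that 0 ≤ dim P_Z ≤ n − m for every nonempty face.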
By Theorem 2.1, we obtain that searching for a relevant face is equivalent to searching for a feasible basis B of (2.2) such that |N ∩ I_B| ≤ m + q. If P ∩ Q = ∅, then we obtain directly that there exists no relevant face. Assume now that P ∩ Q ≠ ∅ and consider a feasible basis B of (2.2). If |N ∩ I_B| ≤ m + q, then by Theorem 2.1 we can generate a relevant face; e.g. with Z = N ∩ I_R, the face P_Z is relevant. If |N ∩ I_B| > m + q, then, starting from B, we try to reach a feasible basis of (2.2) for which the number of basic variables indexed from N is less than that for B. This can be done by the idea due to Majthay and Whinston [19], used also in [9]. Choose a variable x_r, r ∈ N ∩ I_B, and, starting from the feasible basis B, solve the linear program
min  x_r
s.t. Ax = b,
     Hx + I x_S = h,
     x ≥ 0,   x_S ≥ 0,                              (2.4)
with such a modification of the simplex method that only the variables x_j, j ∈ S, are eligible to enter the basis. If the optimal solution yields x_r = 0 and x_r is basic, then try to pivot it out of the basis by exchanging it with a nonbasic variable indexed from S. The modified entering rule guarantees that in the course of solving (2.4) the number of basic variables indexed from N does not increase; moreover, it may decrease. Let the simplex tabular form of (2.2) belonging to a basis B be given as follows:
x_i + Σ_{j∈I_R} d_ij x_j = d_i0,   i ∈ I_B.         (2.5)
If the optimal value of (2.4) is positive, then in the optimal feasible solution x_r is a basic variable, and in the simplex tabular form (2.5) determined by the optimal basis B of (2.4) we have d_r0 > 0 and d_rj ≤ 0, ∀j ∈ S ∩ I_R. This means that, denoting Z = N ∩ I_R, the variable x_r is positive in P_Z ∩ Q, i.e. it is nonextremal in P_Z ∩ Q. If d_rj ≤ 0 holds for all j ∈ N ∩ I_R as well, then x_r takes only positive values in P ∩ Q, i.e. it is nonextremal in P ∩ Q.

If the optimal value of (2.4) is zero and x_r cannot be pivoted out of the basis by exchanging it with a nonbasic variable indexed from S, then in the optimal simplex tabular form we have d_r0 = 0 and d_rj = 0, ∀j ∈ S ∩ I_R. Moreover, these coefficients remain zero in the simplex tabular form of an arbitrary feasible basis of (2.2) reached by simplex iterations departing from this basis and using the modified entering rule.

If for the optimal basis B of (2.4) we have |N ∩ I_B| ≤ m + q, then, denoting Z = N ∩ I_R, the face P_Z is relevant. Otherwise, a new variable x_r, r ∈ N ∩ I_B, can be chosen and a new (2.4) can be solved. It is clear that a variable which has turned out to remain in the basis under the modified entering rule is not worth choosing. The question of choosing a new x_r will be discussed in Algorithm 2.1 of this section. By working with Algorithm 2.1, after solving a finite number of problems (2.4), one of the cases below is reached:
(i) A feasible basis B of (2.2) is obtained such that |N ∩ I_B| ≤ m + q.
(ii) We prove that F_q ∩ Q = ∅, i.e. there is no relevant face.
(iii) An irrelevant face P_Z is found.

After solving a problem (2.4), let N0 and N1 denote the index sets of those variables indexed from N which have turned out to remain in the basis at zero and positive level, respectively, using the modified entering rule. Furthermore, let N2 denote the index set of those variables indexed from N which have turned out to be nonextremal in P ∩ Q. Obviously, we have N2 ⊂ N1. Let n_i denote the cardinality of the set N_i, i = 0, 1, 2.

Theorem 2.2. Let B be a feasible basis of (2.2) reached at solving a problem (2.4). Fix the index sets Z = N ∩ I_R and N0. Let a nonnegative integer k̃, a k̃ × n matrix H̃, and a k̃-vector h̃ be given arbitrarily. Consider the constraint system

Ax = b,            x ≥ 0,
H̃x + Ĩ x_S̃ = h̃,   x_S̃ ≥ 0,                        (2.6)

where Ĩ is the k̃ × k̃ unit matrix, x_S̃ = (x_{n+1},...,x_{n+k̃})ᵀ, and S̃ = {n+1,...,n+k̃}. Assume that (2.6) has a feasible solution, and let B̃ be a feasible basis of (2.6). Then |I_B̃ ∩ (Z ∪ N0)| ≥ n0; moreover, if the x-part of the basic feasible solution determined by B̃ is not in P_Z, then |I_B̃ ∩ (Z ∪ N0)| ≥ n0 + 1.
Proof. In the case n0 = 0, the statements are obvious. Consider now the case n0 > 0. In the simplex tabular form (2.5) determined by B, we have d_rj = 0 for all r ∈ N0 and j ∈ (S ∩ I_R) ∪ {0}. Consider the vector system consisting of the right-hand-side vector of (2.5) and those columns of the constraint matrix of (2.5) which are indexed from K = (I_B \ N0) ∪ (S ∩ I_R). The rank of this system is m + k − n0. The constraint matrix and the right-hand-side vector of (2.5) can be obtained from the constraint matrix (2.3) and the right-hand-side vector of (2.2) by pre-multiplying them by B⁻¹. Consequently, the rank of the vector system consisting of the columns of (2.3) indexed from K and the right-hand-side vector of (2.2) is also m + k − n0. Since S ⊂ K and the columns of (2.3) indexed from S are e_j, j = m+1,...,m+k, and in addition K \ S = (N ∩ I_B) \ N0, it is easy to see that

rank[{b} ∪ {a_i | i ∈ (N ∩ I_B) \ N0}] = m − n0.    (2.7)

Consider a feasible basis B̃ of (2.6). Let K̃ = [(N ∩ I_B) \ N0] ∪ S̃. It follows from (2.7) that the rank of the vector system consisting of the right-hand-side vector of (2.6) and those columns of the constraint matrix of (2.6) which are indexed from K̃ is m + k̃ − n0. Since the m + k̃ columns of B̃ are linearly independent and (N ∪ S̃) \ K̃ = Z ∪ N0, we get |I_B̃ ∩ (Z ∪ N0)| ≥ n0.

Assume now that the x-part of the basic feasible solution determined by B̃ is not in P_Z and that |I_B̃ ∩ (Z ∪ N0)| = n0. Consider the simplex tabular form of (2.6) determined by B̃:

x_i + Σ_{j∈I_R̃} d̃_ij x_j = d̃_i0,   i ∈ I_B̃.       (2.8)

The rank of the vector system consisting of the right-hand-side vector of (2.8) and those columns of the constraint matrix of (2.8) which are indexed from K̃ is m + k̃ − n0. Since |I_B̃ ∩ (Z ∪ N0)| = n0, we get |I_B̃ ∩ K̃| = m + k̃ − n0, and there exist m + k̃ − n0 different unit vectors among those columns of the constraint matrix of (2.8) which are indexed from K̃. The x-part of the basic feasible solution determined by B̃ is not in P_Z, hence there exists an r ∈ Z ∩ I_B̃ such that d̃_r0 > 0. The considered unit vectors contain zeroes in their components indexed by r; thus, the rank of the system consisting of the right-hand-side vector of (2.8) and these unit vectors is m + k̃ + 1 − n0, contradicting the assumption. Consequently, |I_B̃ ∩ (Z ∪ N0)| ≥ n0 + 1 after all. □
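The sign tests applied to the optimal tableau row of (2.4), described before Theorem 2.2, can be sketched as follows. The row data and index sets are invented for illustration; the function only mirrors the criteria d_r0 > 0 and d_rj ≤ 0 from the text:

```python
# Classify the variable x_r from its optimal tableau row of (2.4):
# x_r + sum_{j in I_R} d_rj x_j = d_r0.
def classify_row(d_r0, d_rj, nonbasic_in_S, nonbasic_in_N):
    if d_r0 > 0 and all(d_rj[j] <= 0 for j in nonbasic_in_S):
        if all(d_rj[j] <= 0 for j in nonbasic_in_N):
            return "nonextremal in P ∩ Q"    # x_r > 0 at every feasible point
        return "nonextremal in P_Z ∩ Q"      # x_r > 0 on the face P_Z only
    return "undecided"

row1 = {0: -1.0, 1: -0.5, 2: 0.25}           # coefficients d_rj by column j
print(classify_row(2.0, row1, {0, 1}, {2}))  # nonextremal in P_Z ∩ Q

row2 = {0: -1.0, 1: -0.5, 2: -0.25}
print(classify_row(2.0, row2, {0, 1}, {2}))  # nonextremal in P ∩ Q
```

A row failing both tests leaves x_r undecided, and a new subproblem (2.4) must be solved for another variable.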
Assume that the inequality

Σ_{j=1}^n v_j x_j ≥ v_0

holds for every x ∈ P ∩ Q, where v_0 > 0. Let J⁺ = {j ∈ N | v_j > 0}. Then, for any feasible basis B of (2.2), we get

|I_B ∩ J⁺| ≥ 1.                                     (2.9)

For every feasible basis B of (2.2), a binary n-vector y can be corresponded as follows:

y_j = 1 if j ∈ I_B, and y_j = 0 otherwise (j = 1,...,n).

Then (2.9) can be written in the form

Σ_{j∈J⁺} y_j ≥ 1.                                   (2.10)
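The correspondence between a feasible basis and the binary vector y can be sketched as follows (the index sets below are invented for illustration):

```python
# The binary vector y attached to a feasible basis records which of the
# n structural variables are basic; slack indices >= n are ignored.
n = 6
basic_index_set = {0, 2, 5, 7, 8}       # I_B, may also contain slack indices
y = [1 if j in basic_index_set else 0 for j in range(n)]
print(y)                                # [1, 0, 1, 0, 0, 1]

J_plus = {2, 3}
print(sum(y[j] for j in J_plus) >= 1)   # constraint (2.10) holds for this basis
```

Every valid inequality with positive right-hand side thus translates into a covering constraint on y, which is the bridge to the set covering problems below.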
If the optimal value of (2.4) is positive, then in the simplex tabular form (2.5) determined by the optimal basis B we have d_r0 > 0 and d_rj ≤ 0, ∀j ∈ S ∩ I_R, hence the inequality

x_r + Σ_{j∈N∩I_R} d_rj x_j ≥ d_r0                   (2.11)

is valid for the points of P ∩ Q. From (2.11), an inequality of type (2.10) can be constructed, and it is also valid for the feasible bases of the systems (2.2) obtained by adding new constraints to (2.1). If in (2.11) we get d_rj ≤ 0, ∀j ∈ N ∩ I_R, then r ∈ N2 and y_r = 1.

The statements of Theorem 2.2 can be reformulated by using the vector y. Let B be a feasible basis of (2.2) reached at solving a problem (2.4). Fix the index sets Z = N ∩ I_R and N0. Then, for every nonnegative integer k̃ and system (2.6), the vectors y corresponding to the feasible bases B̃ of (2.6) satisfy

Σ_{j∈Z∪N0} y_j ≥ n0.                                (2.12)

In addition, if the x-part of the basic feasible solution determined by a B̃ is not in P_Z, then

Σ_{j∈Z∪N0} y_j ≥ n0 + 1                             (2.13)

is also valid for this y. It is clear that if an inequality (2.12) or (2.13) holds for the feasible bases of a (2.2), then it is also valid for the feasible bases of systems (2.2) obtained by adding new constraints to (2.1).

We showed in [9] that it is worth collecting the inequalities (2.10), (2.12) and (2.13) obtained by solving problems (2.4) into the constraint systems of general set covering problems used as auxiliary problems. This idea is also used and extended here. In every case when, after solving a problem (2.4), the considered variable x_r turns out to remain basic using the modified entering rule, a new general set covering constraint is constructed. If the index r is added to N0, then the appropriate (2.12) is generated. If we add r to N1, then the constraint (2.10) obtained from (2.11) is constructed; moreover, in case of r ∈ N2 we get y_r = 1. Collect these constraints as

Σ_{j∈V_i} y_j ≥ β_i,   i = 1,...,p,
y_j = 1,   j ∈ N2,
y_j ∈ {0,1},   j ∈ N \ N2,                          (2.14)

where V_i ⊂ N and β_i is a positive integer, i = 1,...,p. If a face P_Z turns out to be irrelevant, then a constraint (2.13) obtained by Theorem 2.2 is also added to (2.14), except for the case Z = ∅. In the latter case, we get F_q ∩ Q = ∅ directly. The present (2.14) may also contain constraints obtained in such previous systems (2.2) which had fewer constraints in (2.1). Because of the construction of (2.14), we have |V_i| ≥ β_i, i = 1,...,p; thus, (2.14) has a feasible solution.

Theorem 2.3. Consider the general set covering problem

min  Σ_{j=1}^n y_j
s.t. Σ_{j∈V_i} y_j ≥ β_i,   i = 1,...,p,
     y_j = 1,   j ∈ N2,
     y_j ∈ {0,1},   j ∈ N \ N2.                     (2.15)

If F_q ∩ Q ≠ ∅, then the optimal value of (2.15) does not exceed m + q.

Proof. By Theorem 2.1, if F_q ∩ Q ≠ ∅, then there exists a feasible basis B̄ of (2.2) such that |N ∩ I_B̄| ≤ m + q. The vector ȳ corresponded to this B̄ has the objective function value |N ∩ I_B̄| in (2.15).
The constraints of types (2.10) and (2.12) in (2.15), including the constraints y_j = 1, j ∈ N2, are valid for the vectors y generated from the feasible bases of (2.2), hence for ȳ as well. Consider now the constraints of type (2.13) in (2.15). Only such constraints (2.13) were added to (2.15) for which P_Z ∩ Q ∩ F_q = ∅ had been proven. Since the x-part of the basic feasible solution determined by B̄ lies in a relevant face, it cannot be in a face determined by a set Z taking role in a constraint (2.13) of (2.15). Consequently, ȳ satisfies the constraints (2.13) in (2.15) as well. □

As we mentioned in [9] at a similar application of general set covering auxiliary problems, it is beneficial to use size reduction procedures for (2.15). For example, if V_i ⊂ V_t and β_i ≥ β_t, then the constraint indexed by t can be removed from (2.15). If k ∈ N2 and k ∈ V_t, then k can be removed from V_t and β_t can be decreased by one. If β_t becomes zero, then the constraint indexed by t can be deleted from (2.15). To avoid the time-consuming work of computing the exact optima of problems (2.15), lower and upper bound computing procedures can be used for the optimal value of (2.15), as described in [9]. For example, let y0 be a feasible solution of (2.15) with the objective function value U0. Then U0 is an upper bound for the optimum of (2.15). If U0 ≤ m + q, we cannot state F_q ∩ Q = ∅ directly. If U0 > m + q, determine a lower bound L0 for the optimum of (2.15). If L0 > m + q, then F_q ∩ Q = ∅, i.e. there exists no relevant face. If a new constraint is added to (2.15), then either the previous y0 and U0 are valid for the new (2.15) as well, or they can be directly modified to be so. The latter means that by setting some components of y0 from 0 to 1, a feasible solution y0 of the new (2.15) can be obtained. After this, it is worth reducing y0 to a prime cover. For details about size reduction, and lower and upper bound computing for general set covering problems, see [4,9,14,15].
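The prime cover reduction mentioned above can be sketched as follows. The covering data are invented for illustration; `prime_cover` simply drops variables of a feasible y0 one by one whenever feasibility is preserved, which can only tighten the upper bound U0:

```python
# Reduce a feasible cover to a prime cover: no y_j = 1 can be dropped
# without violating a constraint or a forced variable y_j = 1 (j in N2).
def is_feasible(y, covers, forced):
    return all(y[j] for j in forced) and \
           all(sum(y[j] for j in V) >= beta for V, beta in covers)

def prime_cover(y, covers, forced):
    y = list(y)
    for j in range(len(y)):
        if y[j] and j not in forced:
            y[j] = 0
            if not is_feasible(y, covers, forced):
                y[j] = 1                  # this variable is needed, restore it
    return y

covers = [({0, 1}, 1), ({1, 2}, 1), ({3, 4}, 1)]   # (V_i, beta_i) pairs
forced = {4}                                        # y_j = 1 for j in N2
y0 = [1, 1, 1, 1, 1]                                # trivial cover, U0 = 5
y0 = prime_cover(y0, covers, forced)
print(y0, sum(y0))                                  # reduced upper bound
```

Here the trivial cover of size 5 is reduced to a prime cover of size 2.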
Since the constraints y_j = 1, j ∈ N2, are included in (2.15), and for n0 > 0 we have a constraint (2.12) such that (Z ∪ N0) ∩ N2 = ∅, we can use n0 + n2 as an obvious lower bound for the optimum of (2.15).

Corollary 2.1. If n0 + n2 > m + q, then F_q ∩ Q = ∅.
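For intuition, the optimal value of a tiny instance of (2.15) can be computed by brute force (all data below are invented). By Theorem 2.3, an optimal value above m + q would certify F_q ∩ Q = ∅:

```python
from itertools import product

# Exact optimum of a small (2.15)-type set covering problem:
# minimize sum(y) s.t. sum_{j in V_i} y_j >= beta_i, with y_j = 1 forced on N2.
n = 5
covers = [({0, 1, 2}, 1), ({2, 3}, 1), ({1, 4}, 1)]   # (V_i, beta_i) pairs
N2 = {2}                                               # variables fixed to 1

best = None
for y in product((0, 1), repeat=n):
    if any(y[j] == 0 for j in N2):
        continue
    if all(sum(y[j] for j in V) >= beta for V, beta in covers):
        best = sum(y) if best is None else min(best, sum(y))
print(best)   # prints 2: y2 = 1 covers the first two constraints, one more index covers the third
```

In the method itself this exhaustive search is avoided in favour of the lower and upper bounding procedures described above.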
Theorem 2.4. After solving a problem (2.4), consider the index sets N0, N1, N2, and Z = N ∩ I_R, where B is the feasible basis reached last. Let L1 be a lower bound for the optimal value of the general set covering problem

min  Σ_{j=1}^n y_j
s.t. Σ_{j∈V_i} y_j ≥ β_i,   i = 1,...,p,
     y_j = 1,   j ∈ N1,
     y_j ∈ {0,1},   j ∈ N \ N1.                     (2.16)

If L1 > m + q, then P_Z ∩ Q ∩ F_q = ∅. In this case, let L2 be a lower bound for the general set covering problem obtained by adding the constraint

Σ_{j∈Z∪N0} y_j ≥ n0 + 1                             (2.17)

to (2.16). Obviously, L2 ≥ L1 can be assumed. Then, for every x ∈ F_q ∩ Q, there exist at least L2 − m − q indices j ∈ N1 \ N2 such that x_j = 0.

Proof. Except for the constraints y_j = 1, j ∈ N1 \ N2, the constraints of (2.16) are valid for every feasible basis of (2.2). We know that the variables x_r, r ∈ N1, take only positive values in P_Z ∩ Q. Assume that L1 > m + q but P_Z ∩ Q ∩ F_q ≠ ∅. Add the inequality

Σ_{j∈Z} x_j ≤ 0                                     (2.18)

to (2.1) temporarily, and let Q̃ denote the set of the points feasible to the new (2.1). Since P ∩ Q̃ = P_Z ∩ Q, we get P ∩ Q̃ ∩ F_q ≠ ∅. By Theorem 2.1, the new (2.2), containing m + k + 1 constraints, has a feasible basis B̃ such that |I_B̃ ∩ N| ≤ m + q. The variables x_r, r ∈ N1, take only positive values in P ∩ Q̃, hence ỹ_j = 1, ∀j ∈ N1, holds for the vector ỹ determined by B̃. This means that ỹ is feasible to (2.16) and its objective function value does not exceed m + q. However, this contradicts L1 > m + q; consequently, P_Z ∩ Q ∩ F_q = ∅ after all.

Before proving the second statement, delete (2.18) from (2.1). Assume that F_q ∩ Q ≠ ∅ and let x̂ ∈ F_q ∩ Q. Add the inequalities

x_j ≤ x̂_j,     j = 1,...,n,
−x_j ≤ −x̂_j,   j = 1,...,n,                         (2.19)
to (2.1). Let Q̂ denote the set of the points feasible
to the new (2.1). Now P ∩ Q̂ = {x̂}. Since x̂ ∈ F_q, the new (2.2), containing m + k + 2n constraints, has a feasible basis B̂ such that |N ∩ I_B̂| ≤ m + q. The first n components of the basic feasible solution determined by B̂ constitute just x̂. Since x̂ ∉ P_Z, the vector ŷ generated by B̂ fulfils (2.17). Let Ñ = {j ∈ N1 | ŷ_j = 0}; clearly Ñ ⊂ N1 \ N2. Consider the problem

min  Σ_{j=1}^n y_j
s.t. Σ_{j∈V_i} y_j ≥ β_i,   i = 1,...,p,
     Σ_{j∈Z∪N0} y_j ≥ n0 + 1,
     y_j = 1,   j ∈ N1 \ Ñ,
     y_j ∈ {0,1},   j ∈ (N \ N1) ∪ Ñ.               (2.20)
The optimal value of (2.20) does not exceed that of (2.16)-(2.17), and the difference between these two optimal values is at most |Ñ|. Since ŷ is feasible to (2.20), the value m + q is an upper bound for the optimum of (2.20). From L2 > m + q, we get |Ñ| ≥ L2 − m − q. This means that ŷ_j = 0 for at least L2 − m − q indices j ∈ N1 \ N2; thus, x̂_j = 0 for at least L2 − m − q indices j ∈ N1 \ N2. The statements are proven, and the constraints (2.19), which were used only in the proof, must be deleted from (2.1), and (2.2) must be similarly restored. □

Corollary 2.2. If L1 > m + q and L2 − m − q > n1 − n2, then F_q ∩ Q = ∅.
Similarly to (2.15), obvious lower bounds can be given for the optima of (2.16) and (2.16)-(2.17), namely n0 + n1 and n0 + n1 + 1. The size reducing, and lower and upper bound computing procedures can be used in a similar way as at (2.15).

Now we summarize the algorithm which leads us to one of the cases (i)-(iii) in a finite number of simplex iterations.

Algorithm 2.1

Step 0. Let N0 = ∅, n0 = 0. Let N2 be the index set of those variables indexed from N which have turned out to be nonextremal in P ∩ Q. Set N1 = N2, n2 = |N2|, n1 = n2. Let general set covering constraints obtained previously and valid for the feasible bases of (2.2) be collected in (2.15) and (2.16). At this step, (2.15) and (2.16) are identical. Determine a feasible solution y0 of (2.15). Set U0 = Σ_{j=1}^n y0_j, y1 = y0 and U1 = U0. Consider an arbitrary feasible basis B of (2.2). Go to Step 1.

Step 1. If |N ∩ I_B| ≤ m + q, then we have case (i): stop. Otherwise, go to Step 2.

Step 2. If U1 ≤ m + q, then go to Step 3. Otherwise, using a heuristic method, determine a feasible solution y1 of (2.16) and set U1 = Σ_{j=1}^n y1_j. If U1 ≤ m + q, go to Step 3. Otherwise, using a heuristic method, compute a lower bound L1, L1 ≥ n0 + n1, for the optimum of (2.16). If L1 ≤ m + q, go to Step 3. Otherwise, consider Z = N ∩ I_R. If Z = ∅, we have case (ii): stop. Otherwise, compute a lower bound L2, L2 ≥ max{L1, n0 + n1 + 1}, for the optimum of (2.16)-(2.17). If n1 − n2 < L2 − m − q, then we have case (ii): stop. Otherwise, go to Step 8.

Step 3. Choose an index r ∈ (N ∩ I_B) \ (N0 ∪ N1). Starting from the feasible basis reached last, solve problem (2.4) with the modified entering rule. Let B denote the optimal basis. If r ∈ I_B, go to Step 4. If r ∈ I_R and |N ∩ I_B| > m + q, then repeat Step 3. Otherwise, we have case (i): stop.

Step 4. If in the optimal basic feasible solution of (2.4) we get x_r > 0, then go to Step 5. Otherwise, try to pivot x_r out of the basis by exchanging it with a variable x_j, j ∈ S ∩ I_R. If this is possible, then execute the basis change, denote the new basis by B, and if |N ∩ I_B| > m + q, then go to Step 3, else we have case (i): stop. If x_r remains basic, then set N0 ← N0 ∪ {r} and n0 ← n0 + 1. Denoting Z = N ∩ I_R, construct (2.12) and add it to (2.15) and (2.16). Execute the possible size reductions, generated by (2.12), in (2.15) and (2.16). If (2.12) does not hold for y0, then modify y0 directly to satisfy the new (2.15) and set U0 = Σ_{j=1}^n y0_j. If (2.12) does not hold for y1, then modify y1 directly to satisfy the new (2.16), and set U1 = Σ_{j=1}^n y1_j. Go to Step 1.

Step 5. If in the optimal simplex tabular form we have d_rj ≤ 0, ∀j ∈ N ∩ I_R, then go to Step 7. Otherwise, construct a constraint (2.10) from (2.11), and add it to (2.15). If the new constraint does not hold for y0, then modify y0 directly to satisfy the new (2.15), and set U0 = Σ_{j=1}^n y0_j. Execute the possible size reductions, generated by the new constraint, in (2.15). Go to Step 6.

Step 6. Set N1 ← N1 ∪ {r}, n1 ← n1 + 1. Execute the possible size reductions, generated by y_r = 1, in (2.16). If y1_r = 0, then set y1_r = 1, and U1 ← U1 + 1. Go to Step 1.

Step 7. Set N2 ← N2 ∪ {r}, n2 ← n2 + 1. Execute the possible size reductions, generated by y_r = 1, in (2.15). If y0_r = 0, then set y0_r = 1, and U0 ← U0 + 1. Go to Step 6.

Step 8. Add (2.17) to (2.15). If (2.17) does not hold for y0, then modify y0 directly to satisfy the new (2.15), and set U0 = Σ_{j=1}^n y0_j. Execute the possible size reductions, generated by the new constraint, in (2.15). If U0 ≤ m + q, then we have case (iii): stop. Otherwise, using a heuristic method, determine a feasible solution y0 of (2.15), and set U0 = Σ_{j=1}^n y0_j. If U0 ≤ m + q, then we have case (iii): stop. Otherwise, using a heuristic method, compute a lower bound L0, L0 ≥ n0 + n2 + 1, for the optimum of (2.15). If L0 > m + q, then we have case (ii); otherwise, we have case (iii). Stop.

It is easy to see that Algorithm 2.1 is finite, since after solving a problem (2.4) the index r is added to one of the index sets I_R, N0 and N1; moreover, L1 ≥ n0 + n1. This guarantees that, getting to the choice of a new index r in Step 3, the index set we can choose from is not empty. Algorithm 2.1 is a direct extension of Algorithm 3.1 given in [9] for the case q = 0, hence the details concerning the use of lower and upper bounds and size reduction procedures are also valid here, with the obvious extension for the case of a general q. For details see [9].
3. The face cut

If after the execution of Algorithm 2.1 we have case (i) or (ii), then we can generate a relevant face or it is proven that there exists no relevant face, respectively. If we obtain case (iii), then the face P_Z determined by Z = N ∩ I_R is irrelevant, where B is the feasible basis reached last. Moreover, for every possible x ∈ F_q ∩ Q we have x_j = 0 for at least L2 − m − q indices j ∈ N1 \ N2. In order to exclude the points of P_Z from the further search, a cutting plane constraint is constructed and added to (2.1). However, this constraint must leave the possible points of F_q ∩ Q.

After having obtained case (iii) by Algorithm 2.1, consider the feasible basis B, the index sets N1 and N2, and the lower bound L2 reached last. Let Z = N ∩ I_R, J = N1 \ N2 and l = L2 − m − q. We have obviously l ≤ |J|. Every variable x_r, r ∈ J, was added to J after solving a concerning problem (2.4). Let

x_r + Σ_{j∈I_R(r)} d_j^(r) x_j = d_0^(r)            (3.1)

denote the row indexed by r in the simplex tabular form of the optimal basis of problem (2.4) belonging to the variable x_r. The constraint (3.1) was stored when adding the index r to N1 \ N2 and it is used now. Clearly, d_0^(r) > 0 and d_j^(r) ≤ 0, ∀j ∈ S ∩ I_R(r). From (3.1), we obtain that the inequality

Σ_{j∈I_R(r)} (d_j^(r) / d_0^(r)) x_j ≥ 1            (3.2)

holds for every feasible solution of (2.2) with x_r = 0. By introducing suitable zero coefficients, write (3.2) as follows:

Σ_{j∈N∪S} α_rj x_j ≥ 1.                             (3.3)

Since Z ⊇ N ∩ I_R(r), we have α_rj ≤ 0, ∀j ∈ (N ∪ S) \ Z. Drop l − 1 elements from J arbitrarily and let J* denote the set of the remaining elements of J. Then, every feasible solution belonging to a point of F_q ∩ Q fulfils (3.3) for at least one index r ∈ J*. Let

α*_j = max{α_rj | r ∈ J*},   j ∈ N ∪ S,

and consider the inequality

Σ_{j∈N∪S} α*_j x_j ≥ 1                              (3.4)

constructed by the disjunctive cut idea of Balas [2,3,25]. Since α*_j ≤ 0, ∀j ∈ (N ∪ S) \ Z, (3.4) does not hold for any feasible solution of (2.2) whose x-part lies in P_Z. However, every feasible solution of (2.2) determined by a point of F_q ∩ Q fulfils (3.4).

Before adding (3.4) to (2.1), the possible nonzero coefficients α*_j, j ∈ S, must be removed from (3.4). This can be done by subtracting the (m + j)-th constraint of (2.2) α*_{n+j} times from (3.4) for every nonzero α*_{n+j}, j = 1,...,k. The direction of the inequality suitable for (2.1) can be obtained by multiplying both sides by −1. The inequality received in this way is now added to (2.1) and k is increased by one. Let Q be again the set of the points feasible to (2.1). If P ∩ Q = ∅, then we obtain case (ii) directly. Otherwise, repeat Algorithm 2.1 now with the enlarged system (2.2). Problems (2.15) and (2.16) to be set up in Step 0 of Algorithm 2.1 are identical with the problem (2.15) obtained last at the previous execution of Algorithm 2.1. Since P has a finite number of faces, after executing Algorithm 2.1 a finite number of times, we get to case (i) or (ii), i.e. either we obtain a relevant face or prove that F_q ∩ Q = ∅.

We mention that at some expense of extra work, cut (3.4) may be made deeper by the negative edge extension idea of Glover [12,25] and the combinatorial method of Sherali and Sen [26]. Since in (3.4) we have α*_j ≤ 0, ∀j ∈ (N ∪ S) \ Z, the inequality
~Txj ~ 1
jEJ +
holds for the points of the new set P n Q, where J + = ( j ~ Z I a7 > 0}. Since J + 4=J~, a constraint (2.10) can be constructed and added to (2.15) and (2.16) before repeating Algorithm 2.1.
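For illustration, the combination of the rows into the disjunctive cut (3.4) and the index set J+ of the induced covering constraint can be sketched as follows. This is a hypothetical toy helper, not the paper's implementation; the data layout (a dict of coefficient rows) is an assumption made for the sketch:

```python
def disjunctive_cut(rows, Z):
    """Combine the valid inequalities sum_j a_rj * x_j >= 1, r in J*,
    into the single disjunctive cut sum_j alpha_j * x_j >= 1 with
    alpha_j = max_r a_rj, and collect J+ = {j in Z | alpha_j > 0},
    the support of the induced covering constraint.

    rows: dict mapping each r in J* to its coefficient list a_r.
    Since alpha_j >= a_rj and x >= 0, any x satisfying one of the
    rows satisfies the combined cut as well."""
    cols = range(len(next(iter(rows.values()))))
    alpha = [max(a[j] for a in rows.values()) for j in cols]
    j_plus = sorted(j for j in Z if alpha[j] > 0)
    return alpha, j_plus

# two valid inequalities (each with right-hand side 1), Z = {0, 1}
rows = {0: [1.0, -2.0, -1.0], 1: [-1.0, 2.0, -3.0]}
alpha, j_plus = disjunctive_cut(rows, {0, 1})
# alpha = [1.0, 2.0, -1.0]; every index of J+ lies in Z
```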
4. Solving linear programs with an additional reverse convex constraint

It is known that if P ∩ G is nonempty and bounded, then the finite optimum of (1.1)-(1.3) is obtained on an at most one-dimensional face of P as well. This property is slightly extended here.

Theorem 4.1. If P ∩ G ≠ ∅ and c^T x is bounded below on P ∩ G, then (1.1)-(1.3) has a finite optimum and there exists an optimal solution on an at most one-dimensional face of P as well.

Proof. If dim P = 0, then the statement is trivial. We assume now that dim P > 0. If P ⊆ G, then P ∩ G = P and the finite optimum of min{c^T x | x ∈ P} is obtained at a vertex of P as well. Otherwise, consider an x⁰ ∈ P ∩ G. By the theorem of separating hyperplanes, there exist t ∈ R^n and τ ∈ R such that t^T x⁰ ≥ τ and t^T x < τ for every x ∈ P\G. Let P̂ = P ∩ {x ∈ R^n | t^T x ≥ τ}. Since P̂ ⊆ P ∩ G, the finite optimum of min{c^T x | x ∈ P̂} is obtained at a vertex x¹ of P̂. Clearly, x¹ lies on an edge of P; in addition, c^T x¹ ≤ c^T x⁰. Considering any edge E of P, either E ∩ G = ∅ or c^T x has a finite minimum on E ∩ G. Since P has a finite number of edges, the statement is proven. □
In the following, for the sake of simplicity, the single point of a zero-dimensional polyhedron is also considered a degenerate edge. Then, under the above assumptions, it is sufficient to search for the optimum of (1.1)-(1.3) on the feasible points of the edges of P. Whether P ∩ G ≠ ∅ will be checked during the method presented in this section. The lower boundedness of c^T x on the possibly nonempty P ∩ G is assumed. This holds obviously if c^T x is bounded below on P or, specially, if P is bounded. During the method for solving (1.1)-(1.3), cutting plane constraints are generated. By adding convexity cuts, points with g(x) < 0 are cut off from P. Disjunctive cutting plane constraints are constructed when we search for one-dimensional relevant faces and use the face cut described in Section 3 to exclude points of irrelevant faces. These cuts exclude only such points of the edges of P which were cut off previously by another cutting plane constraint. We also use disjunctive cuts when cutting off a vertex of P. This latter cut is constructed in such a way that it does not exclude any point which lies on an edge of P, is feasible to the cuts generated previously, and has an objective function value less than the least one found till then. We also use the objective function cut
  c^T x ≤ γ   (4.1)

to exclude the points with objective function value greater than the least one found till then, where γ = c^T x* and x* is the incumbent solution, i.e. the best feasible solution obtained previously. Because of the construction of the above cuts, if there exists a point feasible to (1.1)-(1.3) and with objective function value less than γ, then we can state that there exists such a one which, in addition to the properties mentioned above, lies on an edge of P and is feasible to the cuts collected in (2.1). This means that if there exists an improving feasible solution for (1.1)-(1.3), then F_1 ∩ Q ≠ ∅. The question whether F_1 ∩ Q is empty or not can be answered by the procedure described in Sections 2 and 3, using q = 1 now. During this procedure, new constraints may be generated and added to (2.1); however, the set F_1 ∩ Q does not change meanwhile. If F_1 ∩ Q = ∅ turns out, we can stop. In this case, if we have not found points from P ∩ G, then (1.1)-(1.3) has no feasible solution; otherwise, the incumbent solution x* is optimal for (1.1)-(1.3) and γ = c^T x* is the optimal value. If the procedure detects F_1 ∩ Q ≠ ∅, then a feasible basis B of (2.2) is also obtained such that |N ∩ I_B| = m or |N ∩ I_B| = m + 1.
Consider first the case |N ∩ I_B| = m. Let x̄ ∈ R^n denote the vertex of P ∩ Q determined by B. We deal with the cases g(x̄) < 0, g(x̄) > 0 and g(x̄) = 0 separately.

If g(x̄) < 0, then the convexity cut used also by Forgó [8], and Hillestad and Jacobsen [17] is applied. Consider the simplex tabular form (2.5) determined by B and the n-vectors z_j, j ∈ I_R, where

  z_ij = −d_ij if i ∈ I_B,  z_ij = 1 if i = j,  z_ij = 0 otherwise  (i = 1,…,n).

If the vertex x̄ of P is nondegenerate, then z_j, j ∈ I_R, are the directions of the edges of P emanating from x̄. Moreover, even if x̄ is degenerate, the convex hull of the halflines of directions z_j, j ∈ I_R, emanating from x̄ contains P. Let

  λ_j = sup{λ | g(x̄ + λz_j) < 0, λ ≥ 0},  j ∈ I_R,

and

  t_j = 1/λ_j if λ_j < ∞,  t_j = 0 if λ_j = ∞,  j ∈ I_R.

Then, for every x ∈ P ∩ {x ∈ R^n | ∑_{j∈I_R} t_j x_j < 1} we have g(x) < 0. By adding the convexity cut

  ∑_{j∈I_R} (−t_j) x_j ≤ −1

to (2.1), the vertex x̄ is cut off, but only such points of P are excluded for which (1.3) does not hold. The procedure for finding an at most one-dimensional relevant face is now to be repeated in the remainder polyhedron.

Consider now the case g(x̄) > 0. Check whether x̄ is an optimal solution of the problem min{c^T x | x ∈ P}. If it is, then x̄ is an optimal solution of (1.1)-(1.3), too. Otherwise, either an unbounded edge of P can be constructed over which c^T x is unbounded below, or a vertex x^(1) of P neighbouring to x^(0) = x̄ can be found for which c^T x^(1) < c^T x^(0). If in the latter case we have g(x^(1)) > 0, then repeat the above things to be done, now for the vertex x^(1). By repeating these a finite number of times, we obtain a finite sequence x^(0), x^(1), …, x^(l) of neighbouring vertices of P such that c^T x^(i) > c^T x^(i+1), i = 0,…,l−1, and g(x^(i)) > 0, i = 0,…,l. Moreover, either x^(l) is an optimal solution of (1.1)-(1.3), or there exists a vertex x^(l+1) of P neighbouring to x^(l) such that c^T x^(l) > c^T x^(l+1) and g(x^(l+1)) ≤ 0, or an unbounded edge of P can be constructed such that it emanates from x^(l) and c^T x is unbounded below on it. In the first case, we are done: set x* = x^(l) and stop. In the second case, if g(x^(l+1)) = 0, then, substituting x̄ = x^(l+1), execute the things to be done written at the case g(x̄) = 0 below. Otherwise, determine

  λ* = max{λ | g[λx^(l+1) + (1−λ)x^(l)] ≥ 0, λ ∈ [0,1]}

and

  x* = λ*x^(l+1) + (1−λ*)x^(l).

Clearly λ* > 0. In the third case, let z ∈ R^n be the direction of the considered edge and determine

  λ* = max{λ | g(x^(l) + λz) ≥ 0, λ ≥ 0}

and

  x* = x^(l) + λ*z.

Since it is assumed that c^T x is bounded below on P ∩ G, we have 0 ≤ λ* < +∞. In both of the latter cases, let γ = c^T x* and add (4.1) to (2.1), or update (4.1) if it is included in (2.1).

Consider finally the case g(x̄) = 0. If there exists a vertex x^(i) of P neighbouring to x̄ such that c^T x^(i) < c^T x̄ and g(x^(i)) ≥ 0, then step on x^(i) and, substituting x̄ = x^(i), execute the things to be done written at the cases g(x̄) = 0 and g(x̄) > 0, respectively. Otherwise, for every direction z^(i) of an edge of P emanating from x̄ for which c^T z^(i) < 0, determine

  λ_i = max{λ | x̄ + λz^(i) ∈ P ∩ G, λ ≥ 0}.

Clearly 0 ≤ λ_i < +∞. If for an index i we get λ_i > 0, then let x* = x̄ + λ_i z^(i) and γ = c^T x*. Obviously, g(x*) = 0 and c^T x* < c^T x̄. Add (4.1) to (2.1), or update (4.1) if it is included in (2.1). Then, the procedure for finding an at most one-dimensional relevant face is repeated in the remainder polyhedron. If for every z^(i) we have either c^T z^(i) > 0, or c^T z^(i) < 0 and λ_i = 0, then on the edges of P emanating from x̄ there is no point feasible to (1.1)-(1.3) with an objective function value less than c^T x̄. Let N_3 = {j ∈ N | x̄_j > 0}. Then, for every x̃ ∈ F_1 ∩ G ∩ {x ∈ R^n | c^T x < c^T x̄}, we have N_3 ∩ {j ∈ N | x̃_j = 0} ≠ ∅. By the construction of the constraints of (2.1), x̃ ∈ Q holds as well. Since for every x ∈ P ∩ Q we have x_j > 0, ∀j ∈ N_2, there exists j ∈ N_3\N_2 for which x̃_j = 0. Consequently, if N_3\N_2 = ∅, then there exists no such an x̃, thus x̄ is an optimal solution of (1.1)-(1.3). In case of N_3\N_2 ≠ ∅, let B be a feasible basis of (1.2) determining x̄. Consider the constraints
  x_r + ∑_{j∈I_R} d_rj x_j = d_r0,  r ∈ N_3\N_2,   (4.2)

in the simplex tabular form determined by B, where I_R = N\I_B. For every x̃ ∈ F_1 ∩ G ∩ {x ∈ R^n | c^T x < c^T x̄}, we have x̃_r = 0 for at least one index r ∈ N_3\N_2. Similarly to the disjunctive cut construction of Section 3, let

  α_j* = max{d_rj/d_r0 | r ∈ N_3\N_2},  j ∈ I_R,

and consider the inequality

  ∑_{j∈I_R} (−α_j*) x_j ≤ −1.   (4.3)

By adding (4.3) to (2.1), the vertex x̄ is cut off, but every x̃ ∈ F_1 ∩ G ∩ {x ∈ R^n | c^T x < c^T x̄} is left. In addition, let γ = c^T x̄ and x* = x̄, and add (4.1) to (2.1), or update (4.1) if it is included in (2.1). After this, the procedure for finding an at most one-dimensional relevant face is repeated in the remainder polyhedron.
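As an illustration of the convexity cut used above in the case g(x̄) < 0, the following sketch computes λ_j = sup{λ ≥ 0 | g(x̄ + λz_j) < 0} by bisection and forms the cut coefficients t_j. This is a simplified illustration under stated assumptions (a finite λ_j is bracketed by the hypothetical parameter lam_hi, g is continuous and quasiconvex), not the paper's Fortran implementation:

```python
def convexity_cut(g, x_bar, dirs, lam_hi=1e6, tol=1e-9):
    """For a vertex x_bar with g(x_bar) < 0, compute for each edge
    direction z the step lam = sup{lam >= 0 | g(x_bar + lam*z) < 0}
    by bisection, and return the cut coefficients t_j = 1/lam
    (t_j = 0 when the whole ray stays in {g < 0}, i.e. lam = 'infinity').
    The convexity cut is then sum_j t_j * x_j >= 1."""
    coeffs = []
    for z in dirs:
        point = lambda lam: [xi + lam * zi for xi, zi in zip(x_bar, z)]
        if g(point(lam_hi)) < 0:          # treated as lam = infinity
            coeffs.append(0.0)
            continue
        lo, hi = 0.0, lam_hi              # g < 0 at lo, g >= 0 at hi
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if g(point(mid)) < 0:
                lo = mid
            else:
                hi = mid
        coeffs.append(1.0 / hi)
    return coeffs

# g(x) = ||x||^2 - 1, vertex at the origin (where g < 0), unit directions
g = lambda x: sum(v * v for v in x) - 1.0
t = convexity_cut(g, [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
# each lam equals 1, so the cut reads x_1 + x_2 >= 1
```

Since {g < 0} is convex and open, every point of P cut off by x_1 + x_2 ≥ 1 in this toy instance indeed violates (1.3).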
Now, we deal with the case when after the execution of Algorithm 2.1 a feasible basis B of (2.2) is found for which |N ∩ I_B| = m + 1. Then, P_Z is an at most one-dimensional relevant face, where Z = N ∩ I_R. Investigate whether P_Z ∩ Q contains a zero-dimensional relevant face. If after the execution of Algorithm 2.1 we obtain N ∩ I_B = N_0 ∪ N_1, then n_0 + n_1 = m + 1, hence by Theorem 2.4, we have P_Z ∩ Q ∩ F_0 = ∅. In case of (N ∩ I_B)\(N_0 ∪ N_1) ≠ ∅, choose a variable x_r, r ∈ (N ∩ I_B)\(N_0 ∪ N_1), and solve problem (2.4) starting from B and using the modified entering rule. Let B denote the optimal basis again. If |N ∩ I_B| = m, then execute the things to be done written above for this case. Otherwise, if in the optimal solution of (2.4) we have x_r > 0, then execute Steps 5 and 6 of Algorithm 2.1; moreover, if x_r turns out to be nonextremal in P ∩ Q, then execute Step 7 as well. If in the optimal solution of (2.4) we have x_r = 0 and x_r is basic, then execute Step 4 of Algorithm 2.1. However, we perform these steps with such a modification that instead of executing the instructions "go to Step 1 or 3", the execution terminates; moreover, the maintenance of Y_1, U_1 and (2.16) is now unnecessary. If in Step 4 the variable x_r becomes nonbasic, then for the feasible basis B reached last we get |N ∩ I_B| = m, and we proceed with the things to be done written for this case. Otherwise, for the index r considered previously, we have r ∉ (N ∩ I_B)\(N_0 ∪ N_1) now. In the case of (N ∩ I_B)\(N_0 ∪ N_1) ≠ ∅, choose a new index r ∈ (N ∩ I_B)\(N_0 ∪ N_1) and repeat the activities above. Clearly, after solving a finite number of problems (2.4), we obtain either |N ∩ I_B| = m, or (N ∩ I_B)\(N_0 ∪ N_1) = ∅ and |N ∩ I_B| = m + 1. In what follows, we deal with the second case. Let x^(1) ∈ R^n be the vector consisting of the first n components of the basic feasible solution determined by B.
The set P_Z ∩ Q is bounded if and only if the optimum of

  max  x_l
  s.t. Ax = b,
       Hx + Ix_s = h,
       x_j = 0, j ∈ Z,
       x_j ≥ 0, j ∈ (N ∪ S)\Z,   (4.4)

is finite, where the index l ∈ S ∩ I_R is unique. Solve (4.4). If (4.4) has a finite optimum, then
P_Z ∩ Q = {x ∈ R^n | x = λx^(1) + (1−λ)x^(2), 0 ≤ λ ≤ 1}, where x^(2) ∈ R^n is the vector consisting of the first n components of the optimal solution of (4.4). If (4.4) has no finite optimum, then P_Z ∩ Q = {x ∈ R^n | x = x^(1) + λz, λ ≥ 0}, where the direction z ∈ R^n can easily be generated by using the simplex method for (4.4). To improve the incumbent solution, we solve min{c^T x | x ∈ P_Z ∩ Q ∩ G} by one-dimensional search, as shown below.

Consider the case when P_Z ∩ Q is bounded. If g(x^(1)) < 0 and g(x^(2)) < 0, then P_Z ∩ Q ∩ G = ∅. Otherwise, at least one of g(x^(1)) ≥ 0 and g(x^(2)) ≥ 0 holds. Without loss of generality, we can assume that c^T x^(1) ≤ c^T x^(2). If g(x^(1)) ≥ 0, then let x* = x^(1); otherwise, let

  λ* = max{λ | g[λx^(1) + (1−λ)x^(2)] ≥ 0, λ ∈ [0,1]}

and

  x* = λ*x^(1) + (1−λ*)x^(2).

Then, update the right-hand-side value of constraint (4.1), obviously included in (2.1), to γ = c^T x*.

Now we consider the case when P_Z ∩ Q is unbounded. If g(x^(1)) < 0 and c^T z < 0, then P_Z ∩ Q ∩ G = ∅ because c^T x is bounded below on P ∩ G. If g(x^(1)) ≥ 0 and c^T z < 0, then let

  λ* = max{λ | g[x^(1) + λz] ≥ 0, λ ≥ 0}

and x* = x^(1) + λ*z. It is easy to see that λ* < ∞. Then, update the right-hand-side value of constraint (4.1), obviously included in (2.1), to γ = c^T x*. If g(x^(1)) ≥ 0 and c^T z ≥ 0, let x* = x^(1), γ = c^T x^(1), and add (4.1) to (2.1), or update (4.1) if it is included in (2.1). If g(x^(1)) < 0 and c^T z ≥ 0, then determine

  λ* = min{λ | g[x^(1) + λz] ≥ 0, λ ≥ 0}.   (4.5)

If λ* = +∞, i.e. (4.5) has no feasible solution, then P_Z ∩ Q ∩ G = ∅; otherwise, let x* = x^(1) + λ*z, γ = c^T x*, and add (4.1) to (2.1), or update (4.1) if it is included in (2.1). We mention that if g(x^(1)) < 0, c^T z = 0 and (4.5) has a feasible solution, then instead of solving (4.5) exactly we can choose any feasible solution of (4.5) for λ*.
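The one-dimensional searches above, e.g. λ* = max{λ | g[λx^(1) + (1−λ)x^(2)] ≥ 0, λ ∈ [0,1]}, only need bisection: since g is quasiconvex, the sublevel set {λ | g < 0} on the segment is an interval, so the feasible λ values form the interval [0, λ*]. A minimal sketch (assuming g(x^(2)) ≥ 0 > g(x^(1)), so the sign change is bracketed; not the paper's implementation):

```python
def lambda_star(g, x1, x2, tol=1e-9):
    """max{lam in [0,1] | g(lam*x1 + (1-lam)*x2) >= 0} by bisection.
    Assumes g(x2) >= 0 > g(x1) and g continuous quasiconvex, so that
    the feasible lambdas form an interval [0, lam*]."""
    seg = lambda lam: [lam * a + (1.0 - lam) * b for a, b in zip(x1, x2)]
    lo, hi = 0.0, 1.0                     # g >= 0 at lo, g < 0 at hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(seg(mid)) >= 0:
            lo = mid
        else:
            hi = mid
    return lo

# toy data: g(x) = x_1 - 0.25, x1 = (0,), x2 = (1,);
# seg(lam) = (1 - lam,), so g >= 0 holds exactly for lam <= 0.75
g = lambda x: x[0] - 0.25
lam = lambda_star(g, [0.0], [1.0])
# lam is approximately 0.75
```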
It is easy to see that even if we added (4.1) to (2.1), or updated (4.1), the objective function cut (4.1) does not cut off every point of P_Z ∩ Q. After having solved min{c^T x | x ∈ P_Z ∩ Q ∩ G} to improve the incumbent solution, we want to exclude the points of P_Z ∩ Q from the further search. It is clear that for every x̃ ∈ (F_1\P_Z) ∩ Q there exists j ∈ N_1\N_2 for which x̃_j = 0. If N_1\N_2 = ∅, i.e. N_1 = N_2, then there exists no such an x̃, hence the incumbent solution x* is the optimal solution of (1.1)-(1.3). Otherwise, we construct a face cut. Every index r ∈ N_1\N_2 was added to N_1\N_2 after having solved a problem (2.4), either in Algorithm 2.1 or in the procedure described in this section, when we tried to reach a zero-dimensional relevant face in P_Z ∩ Q. For every index r ∈ N_1\N_2, consider the constraint (3.1) from the optimal simplex tabular form of the problem (2.4) belonging to index r. The disjunctive face cut presented in Section 3 is applied now by setting J = N_1\N_2 and l = 1. Since d_r0^(r) > 0 and d_rj^(r) ≤ 0, ∀r ∈ N_1\N_2, ∀j ∈ (N ∪ S)\Z, the disjunctive face cut (3.4) constructed in this way excludes the feasible solutions of (2.2) determined by the points of P_Z ∩ Q, but leaves the feasible solutions of (2.2) belonging to the points of (F_1\P_Z) ∩ Q. As in Section 3, transform (3.4) to the form prescribed by (2.1), and add it to (2.1). We mention that if it turns out that P_Z ∩ Q ∩ G = ∅ and the vertex x^(1) of P ∩ Q is nondegenerate, then the convexity cut generated at x^(1) cuts off the points of P_Z ∩ Q but leaves every point of P ∩ Q ∩ G. Consequently, we can use the convexity cut instead of the disjunctive face cut in this case. Then, after adding a disjunctive face cut or a convexity cut to (2.1), the procedure for finding an at most one-dimensional relevant face is repeated in the remainder polyhedron.

Now we discuss the finiteness of the cutting plane method outlined above for solving (1.1)-(1.3).
Every main cycle of the method begins with the execution of Algorithm 2.1 with q = 1. In Section 2, it was shown that Algorithm 2.1 is finite. If it turns out that F_1 ∩ Q = ∅ (e.g. P ∩ Q = ∅ as a special case), then the method terminates. Otherwise, an at most one-dimensional relevant face is obtained, and after this, the things to be done described in this section are executed. If the incumbent solution x* does not turn out to be optimal meanwhile, then the main cycle is repeated. It is obvious that the remainder points of
at least one face of P are cut off in every main cycle of the method. Since the polyhedron P has a finite number of faces, the method is finite. After the termination of the method, the incumbent solution x* is a global optimal solution of (1.1)-(1.3), unless no incumbent solution was found, in which case (1.1)-(1.3) has no feasible solution.

Corollary 4.1. The cutting plane method proposed above for determining the global optimum of (1.1)-(1.3) is finite.

It can be shown that the complexity of the method is exponential. Consider the following linear program with an additional reverse convex constraint:

  min  ∑_{i=1}^m x_i
  s.t. x_i + x_{m+i} = 1,  i = 1,…,m,
       x_i ≥ 0,  i = 1,…,2m,
       ∑_{i=1}^m x_i² ≥ ⌈m/2⌉.

It is easy to see that the global optimal solutions of this problem of size m × 2m are those feasible solutions which have m − ⌈m/2⌉ zeroes and ⌈m/2⌉ ones in the first m components. Working by the proposed method, each of these global optimal solutions must be found. The number of them is the binomial coefficient (m over ⌈m/2⌉), which depends exponentially on m.
5. Computational experience

We tested three variations of the method presented in this paper for solving linear programs with an additional reverse convex constraint. These methods differ only in the realization of Algorithm 2.1, similarly to Methods 2, 3 and 4 tested in [9] for the case q = 0.

Method A does not use the general set covering auxiliary problems. This means that we do not handle problems (2.15), (2.16) and (2.16)-(2.17); only the index sets N_0, N_1 and N_2 are maintained. The activities relating to upper bound computations are skipped in Algorithm 2.1, and only the obvious lower bounds are used, i.e. n_0 + n_2 + 1 for L_0, n_0 + n_1 for L_1, and n_0 + n_1 + 1 for L_2.

Method B uses the general set covering problems. Heuristic procedures are applied to obtain the lower and upper bounds, namely, Algorithm 6.1 of [9] for upper bound computation and Algorithm 6.2 of [9] for lower bound computation.

Method C uses the general set covering problems. A heuristic procedure is applied to obtain upper bounds, namely, Algorithm 6.1 of [9]. The lower bounds are obtained by solving the linear programming relaxations of the general set covering problems by the simplex method.

We mention that the computational realizations of Algorithm 2.1 in Methods A, B, and C are direct extensions of those of Algorithm 3.1 in Methods 2, 3, and 4 of [9], respectively. The realizations of the activities described in Section 4 are the same in Methods A, B and C.

The first three test problems were chosen from the literature. The first is from Muu [22], the second is from Thuong and Tuy [29], and the third is from Hillestad [16]. The other test problems were generated randomly in the following way. We generated a nonempty and bounded polyhedron P = {x ∈ R^n | Ax = b, x ≥ 0}, where matrix A was of size m × 2m, i.e. n = 2m. The elements a_ij of A were generated as uniform random variables on [0, 2] for j = i and j = m + i, and as uniform random variables on [−1, 1] in all other cases. In addition, we had b_i = 1, i = 1,…,m, c_i = 1, i = 1,…,m, and c_i = 0, i = m+1,…,2m. It is easy to see that by substituting the expected values of the random variables for the elements of A, we could obtain just the difficult problems discussed at the end of Section 4. If the polyhedron P generated in this way turned out to be empty or unbounded, then it was rejected and a new one was generated.
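The random generation just described can be sketched as follows. This is a hypothetical reimplementation using Python's standard library rather than the original Fortran code; the rejection test for empty or unbounded P and the LP solves are omitted, and the function name and seeding are assumptions of the sketch:

```python
import random

def generate_instance(m, seed=0):
    """Generate A (m x 2m), b and c as described in the text:
    a_ij uniform on [0, 2] for j = i and j = m + i (0-based here),
    uniform on [-1, 1] otherwise; b_i = 1; c_i = 1 for the first m
    components and c_i = 0 for the last m components."""
    rng = random.Random(seed)
    n = 2 * m
    A = [[rng.uniform(0.0, 2.0) if j in (i, m + i) else rng.uniform(-1.0, 1.0)
          for j in range(n)]
         for i in range(m)]
    b = [1.0] * m
    c = [1.0] * m + [0.0] * m
    return A, b, c

A, b, c = generate_instance(4)
# A is 4 x 8; the two 'diagonal' entries per row are nonnegative
```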
Then, we determined the optimal solutions x^max and x^min of the linear programs max{c^T x | x ∈ P} and min{c^T x | x ∈ P}, respectively. After this, the convex function

  g(x) = ∑_{j=1}^n (x_j − x_j^min)² − ∑_{j=1}^n (x_j^max − x_j^min)²
was used for (1.3).

The methods were coded in Fortran IV and were implemented on an IBM 3031 computer using the double precision option of the Fortran H compiler. The computational results are given in Table 1, where

a = serial number of the test problem,
b = size of matrix A,
c = method,
d = number of simplex iterations,
e = number of cuts generated during the method,
f = CPU time used for maintenance and bound computation of the general set covering auxiliary problems (in seconds),
g = total CPU time (in seconds).

Table 1

a    b        c    d      e     f       g
1.   3 × 5    A    7      3     -       0.02
              B    7      3     0.01    0.03
              C    7      3     0.02    0.04
2.   4 × 6    A    9      3     -       0.03
              B    9      3     0.01    0.04
              C    9      3     0.02    0.05
3.   3 × 7    A    37     7     -       0.13
              B    37     7     0.07    0.20
              C    37     7     0.08    0.21
4.   5 × 10   A    190    25    -       1.24
              B    135    20    0.17    1.03
              C    119    18    0.26    1.06
5.   6 × 12   A    414    36    -       3.12
              B    252    30    0.39    2.44
              C    249    30    0.41    2.39
6.   7 × 14   A    635    42    -       5.40
              B    429    39    0.63    4.39
              C    382    33    1.06    4.33
7.   8 × 16   A    1517   83    -       21.19
              B    1060   81    2.02    18.22
              C    983    79    5.54    20.63
8.   9 × 18   A    1591   83    -       25.17
              B    954    69    1.74    15.82
              C    1028   71    2.93    18.43
9.   10 × 20  A    2841   128   -       63.36
              B    2212   120   3.28    48.89
              C    1587   113   15.12   52.05

The computational results of Table 1 show that for problems of small size, the numbers of simplex iterations and cuts are the same; moreover, the activities relating to the general set covering problems result in a slight extra CPU time consumption for Methods B and C. However, for the test problems of greater size, the total CPU times used for Methods B and C are significantly less than those for Method A. This suggests that, in general, Methods B and C can be considered more effective than Method A.
References

[1] Avriel, M., and Williams, A.C., "Complementary geometric programming", SIAM Journal on Applied Mathematics 19 (1970) 125-141.
[2] Balas, E., "Disjunctive programming: Cutting planes from logical conditions", in: Mangasarian, O.L., Meyer, R.R., and Robinson, S.M. (eds.), Nonlinear Programming 2, Academic Press, New York, 1975, 297-312.
[3] Balas, E., "Disjunctive programming", Annals of Discrete Mathematics 5 (1979) 3-51.
[4] Balas, E., and Ho, A., "Set covering algorithms using cutting planes, heuristics, and subgradient optimization: A computational study", Mathematical Programming 12 (1980) 37-60.
[5] Bansal, P.P., and Jacobsen, S.E., "Characterization of basic solutions for a class of nonconvex programs", Journal of Optimization Theory and Applications 15 (1975) 549-564.
[6] Bansal, P.P., and Jacobsen, S.E., "An algorithm for optimizing network flow capacity under economies-of-scale", Journal of Optimization Theory and Applications 15 (1975) 565-586.
[7] Eckhardt, U., "Theorems on the dimensions of convex sets", Linear Algebra and its Applications 12 (1975) 63-76.
[8] Forgó, F., Nonconvex and Discrete Programming, Közgazdasági és Jogi Könyvkiadó, Budapest, 1978 (in Hungarian).
[9] Fülöp, J., "A finite procedure to generate feasible points for the extreme point mathematical programming problem", European Journal of Operational Research 35 (1988) 228-241.
[10] Fülöp, J., "A finite cutting plane method for the generalized lattice point problem", to appear in European Journal of Operational Research.
[11] Glover, F., "Convexity cuts and cut search", Operations Research 21 (1973) 123-134.
[12] Glover, F., "Polyhedral convexity cuts and negative edge extensions", Zeitschrift für Operations Research 18 (1974) 181-186.
[13] Glover, F., and Klingman, D., "The generalized lattice point problem", Operations Research 21 (1973) 141-155.
[14] Hall, N.G., and Hochbaum, D.S., "The multicovering problem: The use of heuristics, cutting planes, and subgradient optimization for a class of integer programs", University of California at Berkeley, 1986.
[15] Hall, N.G., and Hochbaum, D.S., "A fast approximation for the multicovering problem", Discrete Applied Mathematics 15 (1986) 35-40.
[16] Hillestad, R.J., "Optimization problems subject to a budget constraint with economies of scale", Operations Research 23 (1975) 1091-1098.
[17] Hillestad, R.J., and Jacobsen, S.E., "Reverse convex programming", Applied Mathematics and Optimization 6 (1980) 63-78.
[18] Hillestad, R.J., and Jacobsen, S.E., "Linear programs with an additional reverse convex constraint", Applied Mathematics and Optimization 6 (1980) 257-269.
[19] Majthay, A., and Whinston, A., "Quasi-concave minimization subject to linear constraints", Discrete Mathematics 9 (1974) 35-59.
[20] Mattheis, T.H., and Rubin, D.S., "A survey and comparison of methods for finding all vertices of convex polyhedral sets", Mathematics of Operations Research 5 (1980) 167-185.
[21] Meyer, R., "The validity of a family of optimization methods", SIAM Journal on Control 8 (1970) 41-54.
[22] Muu, L.D., "A convergent algorithm for solving linear programs with an additional reverse convex constraint", Kybernetika 21 (1985) 428-435.
[23] Rosen, J.B., "Iterative solution of nonlinear optimal control problems", SIAM Journal on Control 4 (1966) 223-244.
[24] Sen, S., and Sherali, H.D., "On the convergence of cutting plane algorithms for a class of nonconvex mathematical programs", Mathematical Programming 31 (1985) 42-56.
[25] Sherali, H.D., and Shetty, C.M., Optimization with Disjunctive Constraints, Springer, New York, 1980.
[26] Sherali, H.D., and Sen, S., "On generating cutting planes from combinatorial disjunctions", Operations Research 33 (1985) 928-933.
[27] Stoer, J., and Witzgall, C., Convexity and Optimization in Finite Dimensions, Springer, Berlin, 1970.
[28] Thach, P.T., "Convex programs with several additional reverse convex constraints", Acta Mathematica Vietnamica 10 (1985) 35-57.
[29] Thuong, N.V., and Tuy, H., "A finite algorithm for solving linear programs with an additional reverse convex constraint", in: Demyanov, V.F., and Pallaschke, D. (eds.), Nondifferentiable Optimization: Motivations and Applications (Sopron, 1984), Springer, New York, 1985, 292-301.
[30] Tschernikow, S.N., Lineare Ungleichungen, VEB Deutscher Verlag der Wissenschaften, Berlin, 1971.
[31] Tuy, H., "Concave programming under linear constraints", Soviet Mathematics (1964) 1937-1940.
[32] Tuy, H., "A general deterministic approach to global optimization via d.c. programming", in: Hiriart-Urruty, J.-B. (ed.), FERMAT Days 85: Mathematics for Optimization, North-Holland, Amsterdam, 1986, 273-303.
[33] Tuy, H., "Convex programs with an additional reverse convex constraint", Journal of Optimization Theory and Applications 52 (1987) 463-486.
[34] Ueing, U., "A combinatorial method to compute a global solution of certain nonconvex problems", in: Lootsma, F.A. (ed.), Numerical Methods for Nonlinear Optimization, Academic Press, London, 1972, 223-230.