Discrete Applied Mathematics 42 (1993) 139-145
North-Holland
Note on combinatorial optimization with max-linear objective functions

Sung-Jin Chung
Department of Industrial Engineering, Seoul National University, Seoul, South Korea; and Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI, USA

Horst W. Hamacher*
Fachbereich Mathematik, Universität Kaiserslautern, Kaiserslautern, Germany

Francesco Maffioli*
Dipartimento di Elettronica, Politecnico di Milano, Milano, Italy

Katta G. Murty
Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI, USA

Received 15 August 1990
Revised 1 March 1991
Abstract

Chung, S.-J., H.W. Hamacher, F. Maffioli and K.G. Murty, Note on combinatorial optimization with max-linear objective functions, Discrete Applied Mathematics 42 (1993) 139-145.

We consider combinatorial optimization problems with a feasible solution set S ⊆ {0,1}^n specified by a system of linear constraints in 0-1 variables. Additionally, several cost functions c^1, ..., c^p are given. The max-linear objective function is defined by f(x) := max{c^1x, ..., c^px}, where c^q = (c^q_1, ..., c^q_n) is an integer row vector in R^n for q = 1, ..., p. The problem of minimizing f(x) over S is called the max-linear combinatorial optimization (MLCO) problem. We will show that MLCO is NP-hard even for the simplest case of S = {0,1}^n and p = 2, and strongly NP-hard for general p. We discuss the relation to multi-criteria optimization and develop some bounds for MLCO.
Correspondence to: Professor S.-J. Chung, Department of Industrial Engineering, Seoul National University, Seoul, South Korea.
* Supported by NATO Grant RG85/0240.

0166-218X/93/$06.00 © 1993 - Elsevier Science Publishers B.V. All rights reserved
1. Introduction

We consider combinatorial optimization problems with a feasible solution set S ⊆ {0,1}^n specified by a system of linear constraints in 0-1 variables. Additionally, several cost functions c^1, ..., c^p are given. The max-linear objective function is defined by

f(x) := max{c^1x, ..., c^px},   (1.1)

where c^q := (c^q_1, ..., c^q_n) is an integer row vector in R^n, q = 1, ..., p. The problem of minimizing f(x) over S is called the max-linear combinatorial optimization (MLCO) problem.

MLCO can always be modeled as an integer program by standard techniques (Nemhauser and Wolsey [15]). In the problems which we study in this paper the set S always has a special structure so that a single linear objective function can be optimized over it efficiently, i.e., in polynomial time. The focus of our investigation will be MLCO problems with p ≥ 2 over such sets S.

MLCO plays a significant role in the assembly of printed circuit boards (see Drezner and Nof [6]). There, S is the set of all incidence vectors of maximum cardinality matchings in a bipartite graph. Other applications include partition problems (Garey and Johnson [8]), multi-processor scheduling problems and certain stochastic optimization problems (Granot and Zang [10]).

A special case of this problem is the ML matroid problem. The NP-completeness of this problem has been proved by Warburton [17], who also analyzes worst-case performances of some greedy heuristics. Granot [9] introduces Lagrangean duals for the problem.

In Section 2 of this paper we show that even the case S = {0,1}^n (the unconstrained MLCO) with p = 2 is NP-hard, and strongly NP-hard for general p. Section 3 deals with the relation of MLCO and discrete multi-criteria optimization problems. Section 4 contains some remarks on branch and bound strategies for MLCO.
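As a minimal illustration of the definitions above (this sketch is ours, not part of the paper; the function names are invented, and the enumeration is a reference implementation only), the max-linear objective (1.1) and its minimization over an explicitly enumerable S can be written as:

```python
from itertools import product

def f(x, costs):
    """Max-linear objective f(x) = max_q c^q x of (1.1)."""
    return max(sum(c * xi for c, xi in zip(cq, x)) for cq in costs)

def mlco_brute_force(n, costs, feasible=lambda x: True):
    """Minimize f over S = {x in {0,1}^n : feasible(x)} by enumeration.

    Exponential in n; the paper's point is precisely that no efficient
    method for this minimization exists unless P = NP.
    """
    best_x, best_val = None, None
    for x in product((0, 1), repeat=n):
        if feasible(x):
            val = f(x, costs)
            if best_val is None or val < best_val:
                best_x, best_val = x, val
    return best_x, best_val

# Two cost vectors (p = 2) over the unconstrained set S = {0,1}^3:
x, val = mlco_brute_force(3, [(1, -2, 3), (-1, 2, -3)])
```

Here `feasible` stands in for the system of linear constraints defining S; passing the default accepts all of {0,1}^n, i.e., the unconstrained MLCO.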
2. Complexity results

Elegant methods are available for minimizing convex functions over convex sets (see Fletcher [7], Luenberger [12]). However, this problem becomes hard even for simple discrete sets, as the following example taken from Murty and Kabadi [14] shows. Let d_0, d_1, ..., d_n be given positive integers. The subset-sum problem is that of checking whether there exists a Boolean vector x ∈ {0,1}^n such that d_1x_1 + ... + d_nx_n = d_0. This problem is well known to be NP-complete (Garey and Johnson [8]). If we define the convex quadratic function f(x) := (d_1x_1 + ... + d_nx_n - d_0)^2, then the subset-sum problem is equivalent to checking whether the optimal value in min{f(x): x ∈ {0,1}^n} is 0 or strictly greater than 0.

In this section we show that even the unconstrained MLCO problem defined with the simplest convex functions, namely max-linear functions defined by two linear functions c^1 and c^2, leads to an NP-hard optimization problem.

Theorem 2.1. The unconstrained MLCO with respect to two linear functions is NP-hard.

Proof. Consider an instance of the interval subset-sum (ISS) problem: let a_1, ..., a_m be positive integral weights, and let u and d be positive integers with u ≤ d. The ISS problem asks for a subset N of {1, ..., m} such that the sum of the weights indexed by the elements of N is contained in the interval [u, d]. Since the subset-sum problem is a special case of the ISS problem (with u = d), ISS is NP-complete.

We reduce ISS to the unconstrained MLCO such that an instance of ISS has a feasible solution if and only if the optimum objective value in the corresponding instance of MLCO is strictly less than 0. Set n := m + 1,

c^1_i := a_i,  c^2_i := -a_i,  i = 1, ..., m,  c^1_{m+1} := -d - 1  and  c^2_{m+1} := u - 1.

Given any x ∈ {0,1}^{m+1} the following two cases can occur.

Case 1: x_{m+1} = 0. Then the objective value of x in the unconstrained MLCO is greater than or equal to 0, since a_i > 0, i = 1, ..., m.

Case 2: x_{m+1} = 1. The objective value of x in the unconstrained MLCO is less than 0 if and only if

Σ_{i∈N} a_i < d + 1  and  Σ_{i∈N} (-a_i) < -(u - 1),

where N = {i: 1 ≤ i ≤ m and x_i = 1}. In this case, N is a feasible solution to the given instance of the interval subset-sum problem. □
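The reduction in the proof of Theorem 2.1 is easy to exercise numerically. The sketch below is ours (brute force is used only to make the equivalence visible on small instances): it builds c^1 and c^2 from an ISS instance and checks that ISS feasibility coincides with a strictly negative MLCO optimum.

```python
from itertools import product

def iss_to_mlco(a, u, d):
    """Build the two cost vectors of Theorem 2.1 from an ISS instance."""
    c1 = list(a) + [-d - 1]            # c^1_i = a_i,  c^1_{m+1} = -d - 1
    c2 = [-ai for ai in a] + [u - 1]   # c^2_i = -a_i, c^2_{m+1} = u - 1
    return c1, c2

def mlco_optimum(c1, c2):
    """min over x in {0,1}^n of max(c^1 x, c^2 x), by enumeration."""
    n = len(c1)
    return min(
        max(sum(c * xi for c, xi in zip(c1, x)),
            sum(c * xi for c, xi in zip(c2, x)))
        for x in product((0, 1), repeat=n)
    )

def iss_feasible(a, u, d):
    """Does some subset of the weights a sum into the interval [u, d]?"""
    sums = {0}
    for ai in a:
        sums |= {s + ai for s in sums}
    return any(u <= s <= d for s in sums)

a, u, d = [3, 5, 9], 7, 8
c1, c2 = iss_to_mlco(a, u, d)
# ISS has a solution iff the MLCO optimum is strictly negative:
assert iss_feasible(a, u, d) == (mlco_optimum(c1, c2) < 0)
```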
Theorem 2.2. The unconstrained MLCO problem with p cost functions c^1, ..., c^p is strongly NP-hard.

Proof. Let A be a (0,1)-matrix with m rows and n columns and let e be the vector with each of its m components equal to 1. The set-partitioning problem (SPP) is the problem of finding some x ∈ {0,1}^n such that Ax = e. This problem is strongly NP-hard (see Garey and Johnson [8]).

In order to reduce SPP to MLCO we denote by A_i the ith row of matrix A, define p = 2m, and introduce n + 1 variables x_1, ..., x_n, x_{n+1} and p cost vectors c^q, each with n + 1 components, defined by

c^q := (-A_q, 0)      for q = 1, ..., m,
c^q := (A_{q-m}, -2)  for q = m + 1, ..., 2m.

Then it can easily be verified that SPP has a feasible solution if and only if the optimum objective value in this MLCO is strictly less than 0. □

3. Relation to multi-criteria problems
In multi-criteria optimization (MCO) we also consider several cost functions c^1, ..., c^p. The goal in MCO is to find efficient solutions, i.e., solutions x ∈ S with the following property: if y ∈ S and c^q y ≤ c^q x for all q = 1, ..., p, and at least one of these inequalities is strict, then x is said to be dominated by y; an efficient solution is one which is not dominated by any y ∈ S.
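The dominance relation just defined can be stated directly in code. This small sketch is our illustration (the names are invented); it checks whether one 0-1 vector dominates another with respect to cost vectors c^1, ..., c^p:

```python
def dominates(y, x, costs):
    """True if y dominates x: c^q y <= c^q x for all q, strict for some q."""
    vals_y = [sum(c * yi for c, yi in zip(cq, y)) for cq in costs]
    vals_x = [sum(c * xi for c, xi in zip(cq, x)) for cq in costs]
    return all(vy <= vx for vy, vx in zip(vals_y, vals_x)) and vals_y != vals_x

def is_efficient(x, S, costs):
    """x in S is efficient iff no y in S dominates it."""
    return not any(dominates(y, x, costs) for y in S)

costs = [(2, 1), (1, 2)]
S = [(0, 0), (0, 1), (1, 0), (1, 1)]
assert is_efficient((0, 0), S, costs)      # (0,0) minimizes both criteria
assert not is_efficient((1, 1), S, costs)  # (1,1) is dominated by (0,0)
```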
Theorem 3.1. For any instance of MLCO there is an optimum solution which is efficient with respect to the cost functions c^1, ..., c^p.

Proof. Suppose x is an optimum solution to a given MLCO, and let x be dominated by y ∈ S. Then c^q y ≤ c^q x for all q = 1, ..., p implies

max{c^1y, ..., c^py} ≤ max{c^1x, ..., c^px}.

Hence y is also optimal for MLCO. □

As a consequence of Theorem 3.1 we can solve MLCO by only considering the efficient solutions of the corresponding multi-criteria combinatorial optimization problem. Therefore for p = 2 a solution of the following problem with parameters α ∈ [min_{x∈S} c^2x, max_{x∈S} c^2x] will solve MLCO:

minimize c^1x
subject to x ∈ S and c^2x ≤ α.
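Assuming the constrained problem min{c^1x : x ∈ S, c^2x ≤ α} can be solved for each threshold α (here by plain enumeration; the names and the driver loop are our sketch, not the paper's algorithm), MLCO with p = 2 reduces to a scan over the attainable values of c^2x:

```python
from itertools import product

def solve_p2_by_thresholds(c1, c2, S):
    """Solve min over S of max(c^1 x, c^2 x) by scanning thresholds alpha."""
    def dot(c, x):
        return sum(ci * xi for ci, xi in zip(c, x))
    best = None
    for alpha in sorted({dot(c2, x) for x in S}):
        # constrained problem: minimize c^1 x subject to c^2 x <= alpha
        cands = [x for x in S if dot(c2, x) <= alpha]
        x = min(cands, key=lambda x: dot(c1, x))
        val = max(dot(c1, x), dot(c2, x))
        if best is None or val < best:
            best = val
    return best

S = list(product((0, 1), repeat=3))
c1, c2 = (4, -1, 2), (-3, 2, 1)
brute = min(max(sum(a * b for a, b in zip(c1, x)),
                sum(a * b for a, b in zip(c2, x))) for x in S)
assert solve_p2_by_thresholds(c1, c2, S) == brute
```

Correctness follows from Theorem 3.1: at the threshold α = c^2x* of an efficient optimum x*, the constrained problem returns a solution at least as good in both criteria.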
If S is the set of bases of a matroid, then the latter problem is a matroidal knapsack problem discussed in Camerini and Vercellis [3] and Camerini et al. [2]. These papers, applied to this particular MLCO problem, give an alternative approach to the ones taken by Granot [9] or Warburton [17].
4. Branch and bound approach

We first discuss some general bounding strategies. Since the combinatorial optimization problem under consideration can be solved in polynomial time for a single objective, we can efficiently compute

α^q := min{c^qx: x ∈ S},  q = 1, ..., p.   (4.1)

Let x^q be the solution in which α^q is attained. Then

L(S) := max{α^q: q = 1, ..., p}   (4.2)

is a lower bound for the MLCO problem. An upper bound is obtained by setting

U(S) := min{f(x^q): q = 1, ..., p}.   (4.3)
Let y be one of the solutions x^q such that α^q is equal to L(S) and let z be one of the solutions x^r such that f(x^r) = U(S). Let T be the set of all variables which are equal to 1 in y or in z or in both. One of the variables in T will be selected as branching variable: for each t ∈ T let S(t) := {x ∈ S: x_t = 0}. Compute L(S(t)) and U(S(t)). Then take the t with the smallest U(S(t)) - L(S(t)) and x_t as the branching variable.

The lower bound (4.2) can be improved by using Lagrangean relaxation. An LP formulation of MLCO is
minimize  z
subject to  z - c^1x ≥ 0,
            z - c^2x ≥ 0,
            ...
            z - c^px ≥ 0,
            x ∈ S, z unrestricted.   (4.4)

Let π_1, ..., π_p be nonnegative Lagrange multipliers associated with the constraints z - c^qx ≥ 0, q = 1, ..., p in (4.4). Then a lower bound of MLCO is obtained by maximizing over all π_1, ..., π_p ≥ 0 the function

min{z - π_1(z - c^1x) - ... - π_p(z - c^px): z unrestricted, x ∈ S}.   (4.5)

Since (4.5) can be written as

min{(1 - π_1 - ... - π_p)z + (π_1c^1 + ... + π_pc^p)x: z unrestricted, x ∈ S},   (4.6)
we can restrict ourselves to π_1, ..., π_p satisfying π_1 + ... + π_p = 1. Thus we get the following result.

Theorem 4.1.

L_1(S) := max_{π_1+...+π_p=1} min_{x∈S} (π_1c^1 + ... + π_pc^p)x

is a lower bound for the MLCO. Moreover, L_1 improves the bound of (4.2), i.e., L(S) ≤ L_1(S).

Proof. L_1(S) is a lower bound since it is the optimal objective value of the Lagrangean dual of LP (4.4). Since π with π_q = 1 for exactly one q ∈ {1, ..., p} is feasible, the result follows. □

If the set S is specified by a unimodular system of linear constraints in 0-1 variables, the inner minimization can be solved through LP techniques. In this case (4.6) is a piecewise linear concave function over the set of all nonnegative π_q, q = 1, ..., p, and the bound can be computed efficiently by using techniques of nondifferentiable concave programming (see, for instance, Shapiro [16]). For the case p = 2 one can use algorithms for solving parametric combinatorial optimization problems with respect to a single parameter (see Carstensen [4,5], Hamacher and Foulds [11], etc.) or use efficient approximation techniques for its solution (see Burkard et al. [1]).

If a linear description S = {x: Ax = b, x_j = 0 or x_j = 1, j = 1, ..., n} of S is given, it is well known (see, for instance, Murty [13]) that the bound L_1(S) can be further improved by replacing in Theorem 4.1 the set S by S_lin := {x: Ax = b, 0 ≤ x_j ≤ 1, j = 1, ..., n}.
The resulting bound

L_2(S) := max_{π_1+...+π_p=1} min_{x∈S_lin} (π_1c^1 + ... + π_pc^p)x

is a lower bound such that the optimal solution x* of MLCO satisfies

L(S) ≤ L_1(S) ≤ L_2(S) ≤ f(x*) ≤ U(S).   (4.7)
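For p = 2 the maximin bound of Theorem 4.1 depends on a single parameter, π_1 = π and π_2 = 1 - π, so L_1(S) = max_{0≤π≤1} min_{x∈S} (πc^1 + (1-π)c^2)x. The following rough sketch is ours: it approximates the maximum over a grid of π values instead of the parametric or nondifferentiable-optimization techniques cited above, and uses enumeration for the inner single-objective minimization.

```python
from itertools import product

def lagrangean_bound_p2(c1, c2, S, steps=100):
    """Approximate L_1(S) = max_{0<=pi<=1} min_{x in S} (pi c^1 + (1-pi) c^2) x."""
    def dot(c, x):
        return sum(ci * xi for ci, xi in zip(c, x))
    best = None
    for k in range(steps + 1):
        pi = k / steps
        # inner minimization: a single linear objective over S
        inner = min(pi * dot(c1, x) + (1 - pi) * dot(c2, x) for x in S)
        if best is None or inner > best:
            best = inner
    return best

S = list(product((0, 1), repeat=3))
c1, c2 = (4, -1, 2), (-3, 2, 1)
L1 = lagrangean_bound_p2(c1, c2, S)
f_star = min(max(sum(a * b for a, b in zip(c1, x)),
                 sum(a * b for a, b in zip(c2, x))) for x in S)
assert L1 <= f_star + 1e-9  # L_1(S) is a lower bound on the optimum
```

The grid search only lower-approximates the concave maximization, so the value it returns is still a valid lower bound.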
References

[1] R.E. Burkard, H.W. Hamacher and G. Rote, Approximation of convex functions and application to mathematical programming, Report 89-1987, Institute of Mathematics, Technical University Graz, Graz (1987).
[2] P. Camerini, F. Maffioli and C. Vercellis, Multi-constrained matroidal knapsack problems, Math. Programming 45 (1989) 211-231.
[3] P. Camerini and C. Vercellis, The matroidal knapsack: a class of (often) well-solvable problems, Oper. Res. Lett. 3 (1984) 157-162.
[4] P.J. Carstensen, The complexity of some problems in parametric linear and combinatorial programming, Ph.D. dissertation, Department of Mathematics, University of Michigan, Ann Arbor, MI (1983).
[5] P.J. Carstensen, Complexity of some parametric integer and network programming problems, Math. Programming 26 (1983) 64-75.
[6] Z. Drezner and S.Y. Nof, On optimizing bin picking and insertion plans for assembly robots, IIE Trans. 16 (1984) 262-270.
[7] R. Fletcher, Practical Methods of Optimization (Wiley, New York, 1987).
[8] M.R. Garey and D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness (Freeman, San Francisco, CA, 1979).
[9] D. Granot, A new exchange property for matroids and its application to max-min problems, Z. Oper. Res. 28 (1984) 41-45.
[10] D. Granot and I. Zang, On the min-max solution of some combinatorial optimization problems, Working paper, Faculty of Commerce, University of British Columbia, Vancouver, B.C. (1982).
[11] H.W. Hamacher and L.R. Foulds, Algorithms for flows with parametric capacities, Z. Oper. Res. 33 (1989) 21-37.
[12] D.G. Luenberger, Linear and Nonlinear Programming (Addison-Wesley, Reading, MA, 1984).
[13] K.G. Murty, Linear and Combinatorial Programming (Wiley, New York, 1976).
[14] K.G. Murty and S.N. Kabadi, Some NP-complete problems in quadratic and nonlinear programming, Math. Programming 39 (1987) 117-129.
[15] G.L. Nemhauser and L.A. Wolsey, Integer and Combinatorial Optimization, Wiley Interscience Series in Discrete Mathematics and Optimization (Wiley, New York, 1988).
[16] J. Shapiro, Mathematical Programming: Structures and Algorithms (Wiley, New York, 1979).
[17] A. Warburton, Worst case analysis of greedy and related heuristics for some min-max combinatorial optimization problems, Math. Programming 33 (1985) 234-241.