European Journal of Operational Research 81 (1995) 634-643
Theory and Methodology
Minmax combinatorial optimization

Abraham P. Punnen, Y.P. Aneja *

Faculty of Business Administration, University of Windsor, Windsor, Ont., Canada N9B 3P4

* Corresponding author.
Received February 1993; revised April 1993
Abstract
Abstract

Let $E$ be a finite set, and let $F$ be a family of subsets of $E$. For each element $e \in E$, $p$ weights $c_e^i$, $i = 1, 2, \ldots, p$, are prescribed. Then the minmax combinatorial optimization problem is to find $\min_{S \in F} \max_{1 \le i \le p} \sum_{e \in S} c_e^i$. Several special cases of this general problem, viz. 3-partition, the makespan minimization problem, bandwidth minimization in matrices and graphs, the categorized assignment problem, etc., have been studied in the literature. In this paper we propose exact and heuristic methods to solve the general problem. Our algorithms also give some new insights into the application of subgradient optimization in solving the Lagrangean dual of combinatorial problems when the Lagrange multipliers are required to satisfy additional constraints. Encouraging computational results are also reported.

Keywords: Minmax optimization; Lagrangean relaxation; Subgradient algorithms; Combinatorial optimization

1. Introduction
Let $E$ be a finite set and $F = \{S_1, S_2, \ldots, S_q\}$ be a family of subsets of $E$. For each $e \in E$, $p$ weights $c_e^1, c_e^2, \ldots, c_e^p$ are prescribed. Then the minmax combinatorial optimization problem is:

(MMCOP) $\qquad \min_{S \in F} \max_{1 \le k \le p} \sum_{e \in S} c_e^k.$
For $p = 1$, MMCOP reduces to the well known class of minsum problems. Problems of the type MMCOP arise in a variety of situations. In multi-criteria optimization, when generating non-dominated solutions using Tchebycheff procedures [7-9,29,34], one needs to solve problems of the type MMCOP. Other applications include categorized assignment scheduling [1,24,26,28], bin packing [14,20], minmax paths [31], the minmax travelling salesman problem [17,23], bandwidth minimization in matrices and graphs [12], spanning tree and bipartite matching problems under categorization [26], minmax matroid base problems [32], and multiprocessor scheduling [11,16].

The solvability of MMCOP depends on the structure of the vectors $C^k = (c_1^k, \ldots, c_{|E|}^k)$, $1 \le k \le p$, as well as on the structure of $F$. For example, if $p = |E|$ and $C^k$ is a scalar multiple of the $k$-th
unit vector, then MMCOP reduces to the bottleneck problem [10]. In this case, it is well known that MMCOP can be solved by solving $O(\log |E|)$ feasibility problems (i.e. checking whether a given $S \subseteq E$ belongs to $F$, where $F$ is given in a compact representation). Thus if this feasibility problem is solvable in polynomial time, then the bottleneck problem is also solvable in polynomial time. However, the polynomial solvability of the feasibility problem does not imply the polynomial solvability of MMCOP when the vectors $C^k$ are arbitrary. In fact, it can be shown that for general cost vectors $C^k$, $1 \le k \le p$, even the unconstrained version of MMCOP (i.e. any $S \subseteq E$ is a member of $F$) is NP-complete; the proof follows from a reduction from the PARTITION problem. Thus, to solve MMCOP in practice, one must depend on enumerative schemes or settle for approximate solutions produced by heuristic algorithms.

The efficiency of most enumerative schemes depends on the ability to generate good upper and lower bounds on the optimal objective function value. In many hard combinatorial optimization problems, Lagrangean relaxation has been used successfully to establish good quality lower bounds [11,18,22]. The Lagrangean approach can easily be adapted to obtain good lower bounds for MMCOP as well. Further, in this approach, the special structure of MMCOP can be exploited to obtain a heuristic solution without much additional computational effort.

In most applications of Lagrangean relaxation studied in the literature, the Lagrange multipliers are either unconstrained or are required to satisfy only non-negativity restrictions. Subgradient optimization based methods are widely used to solve such relaxed problems. The subgradient optimization method starts with a feasible vector of Lagrange multipliers, identifies a subgradient direction, and moves along this direction by a 'small' distance to get the new multipliers. The procedure terminates when 'convergence' is observed or the iteration limit is exceeded. In the Lagrangean relaxation of MMCOP, in addition to the non-negativity restrictions, the Lagrange multipliers are required to satisfy an additional constraint. Held et al. [19] suggested a modification of the unconstrained subgradient algorithm to solve such constrained Lagrangean problems. In this approach, the Lagrange multipliers at each iteration are generated in the same way as in the unconstrained case, and the resulting multipliers are projected onto the constrained space to obtain a feasible vector of multipliers.

In this paper, we consider two alternative approaches to solve constrained Lagrangean problems. In the first approach we use a projected subgradient direction as the direction of the move. In the second approach we use affine scaling at each iteration. The gradient projection method for optimizing differentiable functions was proposed by Rosen [27]. Recently [15], both gradient projection and affine scaling ideas have been applied successfully to solving linear programming problems. In addition to producing a good lower bound on the optimal objective function value, our algorithms can be used to obtain a heuristic solution to MMCOP. Extensive computational results are presented which show that our approach to solving the Lagrangean relaxation has a faster convergence rate than the method suggested in [19].
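To make the bottleneck reduction above concrete, the following is a minimal Python sketch of the $O(\log |E|)$ scheme. The feasibility oracle is_feasible is an assumed black box (its implementation depends on how $F$ is represented) and is not part of the original paper.

```python
from typing import Callable, Sequence

def bottleneck_min(weights: Sequence[float],
                   is_feasible: Callable[[float], bool]) -> float:
    """Solve the bottleneck problem with O(log|E|) feasibility checks.

    is_feasible(t) is assumed to report whether some S in F uses only
    elements of weight <= t; it must be monotone in t, and F is assumed
    non-empty so the search is well defined.
    """
    values = sorted(set(weights))
    lo, hi = 0, len(values) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_feasible(values[mid]):
            hi = mid                 # some feasible S survives this threshold
        else:
            lo = mid + 1             # every member of F needs a heavier element
    return values[lo]
```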
The paper is organized as follows. In Section 2, we discuss a simple lower bounding technique and formulate the Lagrangean problem. In Section 3, we consider MMCOP with $p = 2$. Section 4 deals with subgradient and projected subgradient algorithms. Computational results with the algorithms of Section 4 are presented in Section 5.
2. Lower bounds
Let

$$Z(k) = \min_{S \in F} \sum_{j \in S} c_j^k, \qquad Z(\cdot) = \max_{1 \le k \le p} Z(k).$$

Let $Z^*$ be the optimal objective function value of MMCOP.
Lemma 1. $Z(\cdot) \le Z^*$.

The lower bound $Z(\cdot)$ is attractive because of its computational simplicity. Such a bound was used by Seshan [28] for a particular case of MMCOP called the categorized assignment problem. However, the lower bound $Z(\cdot)$ can be arbitrarily bad, as shown by the following example. Let

$$E = \{(i, j) : 1 \le i, j \le n\},$$
$$F = \{S(\pi) = \{(i, \pi(i)) : 1 \le i \le n\} : \pi \in \Pi\},$$

where $\Pi$ is the set of all permutations of $\{1, 2, \ldots, n\}$, and

$$c_{ij}^1 = \begin{cases} -m & \text{if } (i, j) = (1, 1), \\ 0 & \text{otherwise,} \end{cases} \qquad c_{ij}^2 = \begin{cases} -m & \text{if } (i, j) = (1, 2), \\ 0 & \text{otherwise.} \end{cases}$$

It is easy to see that $Z(\cdot) = -m$ but $Z^* = 0$. We now present a stronger lower bound using Lagrangean relaxation. MMCOP is equivalent to the following problem:

Min $Z$
s.t. $S \in F$, $\qquad$ (1)
$Z - \sum_{j \in S} c_j^k \ge 0, \quad 1 \le k \le p.$ $\qquad$ (2)
Let $\lambda_k$ be the Lagrange multiplier associated with the $k$-th constraint in (2). Then, for any $\lambda = (\lambda_1, \ldots, \lambda_p) \ge 0$,

$$L'(\lambda) = \min_{S \in F} \left\{ Z \left( 1 - \sum_{k=1}^p \lambda_k \right) + \sum_{j \in S} \left( \sum_{k=1}^p \lambda_k c_j^k \right) \right\}$$

is a lower bound for the optimal objective function value of MMCOP. Thus the best lower bound is obtained by solving

(D') $\qquad \max_{\lambda \ge 0} L'(\lambda).$
Let $\lambda^* = (\lambda_1^*, \ldots, \lambda_p^*)$ be an optimal solution to D'. Then it can be verified that $\sum_{k=1}^p \lambda_k^* = 1$, since for a vector $\lambda$ such that $\sum_{k=1}^p \lambda_k \ne 1$, $L'(\lambda)$ is unbounded. Thus D' can be written as:

(D) Max $L(\lambda)$
s.t. $\sum_{k=1}^p \lambda_k = 1,$ $\qquad$ (3)
$\lambda_k \ge 0, \quad 1 \le k \le p,$ $\qquad$ (4)
where

$$L(\lambda) = \min_{S \in F} \sum_{e \in S} c'_e, \qquad c'_e = \sum_{k=1}^p \lambda_k c_e^k.$$

Let $D(\cdot)$ be the optimal objective function value of D.

Lemma 2. $Z(\cdot) \le D(\cdot) \le Z^*$.
Proof. $D(\cdot) \le Z^*$ follows from the theory of Lagrangean relaxation. To establish the first inequality, observe that if $Z(\cdot) = Z(k)$, then $Z(\cdot) = L(\bar\lambda)$, where $\bar\lambda$ is the $k$-th unit vector. Since the $k$-th unit vector is a feasible solution to D, the result follows.
Although, as noted in Lemma 2, the Lagrangean lower bound is superior to the lower bound $Z(\cdot)$, we still need an efficient way to compute it.
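As an illustration of the two bounds on the minmax assignment problem used later in Section 5, the following is a minimal sketch. It assumes SciPy's linear_sum_assignment as the minsum oracle over $F$ (the set of all assignments); the random cost generation at the end is purely illustrative and not the authors' test setup.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def minsum_value(cost: np.ndarray) -> float:
    """min over S in F of the total cost, when F is the set of all assignments."""
    rows, cols = linear_sum_assignment(cost)
    return float(cost[rows, cols].sum())

def z_dot(costs: np.ndarray) -> float:
    """Z(.) = max_k Z(k): the simple bound of Lemma 1."""
    return max(minsum_value(costs[k]) for k in range(costs.shape[0]))

def lagrangean_bound(costs: np.ndarray, lam: np.ndarray) -> float:
    """L(lambda): aggregate c'_e = sum_k lam_k c^k_e, then one minsum solve."""
    aggregated = np.tensordot(lam, costs, axes=1)   # (p, n, n) -> (n, n)
    return minsum_value(aggregated)

# costs[k, i, j] = c^k_{ij}; any lambda on the unit simplex yields a bound
costs = np.random.randint(5, 51, size=(3, 8, 8)).astype(float)
lam = np.full(3, 1.0 / 3.0)
print(z_dot(costs), lagrangean_bound(costs, lam))
```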
3. The case p = 2
When $p = 2$, we can reduce D to a one-dimensional search problem. From (3) we have $\lambda_2 = 1 - \lambda_1$; writing $\lambda$ for $\lambda_1$, D reduces to the following:

(D$^0$) $\qquad \max_{0 \le \lambda \le 1} L^0(\lambda),$

where

$$L^0(\lambda) = \min_{S \in F} \sum_{e \in S} \left( \lambda c_e^1 + (1 - \lambda) c_e^2 \right) = \min \left\{ \sum_{e \in S_i} c_e^2 + \lambda \sum_{e \in S_i} (c_e^1 - c_e^2) : i = 1, 2, \ldots, q \right\}.$$
Since $L^0(\lambda)$, being a minimum of $q$ ($= |F|$) linear functions, is concave, D$^0$ can be solved in polynomial time provided we have a polynomial algorithm to compute $L^0(\lambda)$ for a given $\lambda$ [4]. We present here, however, an adaptation of the bicriteria algorithm due to Aneja and Nair [5] to solve D$^0$. This algorithm has worked extremely well in practice and has recently been shown to be strongly polynomial [25]. Informally, the algorithm can be described as follows. At any given iteration, $\lambda^*$ is known to lie in some interval $[l, u]$. $L^0(\lambda)$ is then solved for both $\lambda = l$ and $\lambda = u$, identifying corresponding optimal solutions $S(l)$ and $S(u)$, respectively. These two solutions provide two linear functions of $\lambda$, one corresponding to $S(l)$ and the other to $S(u)$. The point $\bar\lambda$ where these two functions intersect is the next chosen value for $\lambda$. Depending upon the slope of the linear function corresponding to $S(\bar\lambda)$, the interval $[l, u]$ is updated to $[l, \bar\lambda]$, $[\bar\lambda, u]$, or $[\bar\lambda, \bar\lambda]$. In addition to a lower bound, this algorithm also provides a heuristic solution to MMCOP. A sketch of this search is given below.
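The following is a minimal sketch of this interval-shrinking search, assuming a user-supplied oracle solve_weighted(lam) that returns a minimizing $S$ together with its two criterion sums. The oracle name, the tuple convention, and the tolerance are our own illustrative choices, not the authors' code.

```python
from typing import Callable, Tuple

Oracle = Callable[[float], Tuple[object, float, float]]

def aneja_nair_search(solve_weighted: Oracle, tol: float = 1e-9):
    """One-dimensional search for D0 in the spirit of Aneja and Nair [5].

    solve_weighted(lam) returns (S, w1, w2), where S attains
    L0(lam) = min_S lam*w1(S) + (1-lam)*w2(S).  Each solution supplies a
    supporting line h(x) = x*w1 + (1-x)*w2 with h >= L0 and h(lam) = L0(lam).
    """
    S_l, a1, a2 = solve_weighted(0.0)
    S_u, b1, b2 = solve_weighted(1.0)
    if a1 - a2 <= 0:                   # envelope non-increasing at 0
        return 0.0, a2, S_l
    if b1 - b2 >= 0:                   # envelope non-decreasing at 1
        return 1.0, b1, S_u
    f, g = (a1, a2), (b1, b2)          # rising line, falling line
    while True:
        # crossing of f and g: the best value min(f, g) can attain anywhere
        lam = (g[1] - f[1]) / ((f[0] - f[1]) - (g[0] - g[1]))
        cap = lam * f[0] + (1.0 - lam) * f[1]
        S, c1, c2 = solve_weighted(lam)
        val = lam * c1 + (1.0 - lam) * c2          # = L0(lam)
        if val >= cap - tol or abs(c1 - c2) < tol:
            return lam, val, S                     # lam maximizes L0
        if c1 - c2 > 0:
            f = (c1, c2)                           # maximum lies right of lam
        else:
            g = (c1, c2)                           # maximum lies left of lam
```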
4. Subgradient heuristics
We now discuss the problem D for arbitrary $p$. Several approaches have been suggested by researchers to solve the Lagrangean dual of a combinatorial optimization problem. These include subgradient optimization [19], dual ascent methods [33], and column generation [3], among others. Other
recent approaches include the Aneja-Kabadi algorithm [4] and the Bertsimas-Orlin algorithm [6]. The approach of Aneja and Kabadi, when adapted to solve D, results in an algorithm whose worst case complexity grows exponentially with $p$, and hence it is not suitable for large values of $p$. The Bertsimas-Orlin algorithm can also be adapted to solve D, but it uses Vaidya's algorithm [30] as a subroutine, and the practical significance of this approach is not well established. Subgradient optimization, however, is known to be a powerful tool for solving the Lagrangean dual of several hard combinatorial optimization problems [19,22]. Starting with $\lambda^0 = (1/p, 1/p, \ldots, 1/p)$, the subgradient algorithm generates a sequence $\{\lambda^t\}$ of solutions of D according to the following recursion:

$$\bar\lambda^{t+1} = \lambda^t + \theta_t d^t \qquad (1)$$

and

$$\lambda^{t+1} = \mathrm{Proj}_\Delta\{\bar\lambda^{t+1}\}, \qquad (2)$$
where $\mathrm{Proj}_\Delta\{\bar\lambda^{t+1}\}$ is the projection of $\bar\lambda^{t+1}$ onto the unit simplex $\Delta$ and $d^t$ is a subgradient of $L(\lambda)$ at the point $\lambda^t$. Under suitable assumptions on the step length $\theta_t$ taken at each iteration, it can be shown that $\{\lambda^t\}$ converges to an optimal solution $\lambda^*$ of D. Interestingly, each iteration of the subgradient algorithm also generates a feasible solution of MMCOP, and the best such solution produced serves as a heuristic solution to MMCOP.

The bottleneck operations at each iteration are the identification of $S$, the optimal solution that yields $L(\lambda)$ for a given $\lambda$, and the projection operation. The complexity of identifying $S$ depends on the special structure of $F$. The projection operation can be done in $O(p^2)$ time using the algorithm given in [19]. This worst case complexity can be reduced to $O(p \log p)$ by using a sorting algorithm, or to $O(p)$ by using a repeated median finding scheme [21]. Although the worst case complexity of the median based algorithm is linear, its average performance can be even worse than that of the $O(p^2)$ algorithm.

Rather than moving out of the feasible region and then finding the closest feasible point at each iteration, a more natural approach seems to be to stay in the feasible set throughout by moving along directions which retain feasibility. It turns out that this direct approach is computationally more attractive. Keeping these observations in mind, we now present a projected subgradient algorithm to solve D. The algorithm starts with an interior feasible solution $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_p)$, $\lambda_i > 0$ for all $i$, and identifies a subgradient direction $d$. This vector $d$ is then projected onto the null space of the constraint $e^T\lambda = 1$. The projection matrix is $P = I - e(e^Te)^{-1}e^T = I - ee^T/p$. Thus, if $\bar d$ is the projection of $d$ onto the null space of $e^T$, then

$$\bar d = Pd = d - e(e^Td)/p.$$
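The two operations just discussed can be sketched as follows: a standard sorting-based $O(p \log p)$ projection onto the unit simplex (in the spirit of the schemes cited above from [19] and [21], not a transcription of them) and the $O(p)$ projected direction $\bar d = d - e(e^Td)/p$.

```python
import numpy as np

def project_onto_simplex(v: np.ndarray) -> np.ndarray:
    """Euclidean projection of v onto {x : e^T x = 1, x >= 0}.

    A standard sorting-based O(p log p) construction; [21] gives an O(p)
    median-based alternative.
    """
    u = np.sort(v)[::-1]                              # sort descending
    css = np.cumsum(u)
    k = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / k > 0)[0][-1]  # last admissible index
    tau = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - tau, 0.0)

def projected_direction(d: np.ndarray) -> np.ndarray:
    """d_bar = P d = d - e (e^T d) / p: 2p additions and one division."""
    return d - d.sum() / len(d)
```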
Hence $\bar d$ can be computed from $d$ in $2p$ additions and one division. Finding the projected direction is, therefore, much faster than projecting an infeasible point onto the feasible set. The projected subgradient algorithm generates the solutions $\{\lambda^t\}$ according to the recursion
$$\lambda^{t+1} = \lambda^t + \theta_t \bar d^t / \|\bar d^t\|, \qquad (3)$$

where $d^t$ is a subgradient of $L(\lambda)$ at $\lambda^t$ and $\bar d^t = P d^t$. The step length $\theta_t$ is chosen such that $0 < \theta_t \le \min\{\alpha_t, \beta_t\}$, where $\alpha_t = \min\{\alpha_t^i : i = 1, \ldots, p\}$, with $\alpha_t^i = -(\lambda_i^t \|\bar d^t\|)/\bar d_i^t$ if $\bar d_i^t < 0$ and $\alpha_t^i = \infty$ otherwise, and $\beta_t = 2(L(\lambda^*) - L(\lambda^t))/\|\bar d^t\|^2$. Thus the step length $\theta_t$ is chosen so that $\lambda^{t+1} > 0$ at each iteration, which enables us to stay in the interior of the feasible set throughout. If $\alpha_t$ is zero or very close to zero, then recursion (3) may lead to premature convergence; however, we never encountered such a situation in our computational experiments. In order to establish theoretical convergence we need to control this situation. Let $\mu_t$ be a pre-specified tolerance. If $\alpha_t \le \mu_t$, we perform a usual subgradient iteration, (1)-(2), in place of (3).
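One iteration of this scheme can be sketched as follows, reusing project_onto_simplex from the earlier sketch for the fallback iteration (1)-(2). The oracle subgrad and the target value standing in for $L(\lambda^*)$ (unknown in practice and usually replaced by the best upper bound found so far) are assumptions of this sketch.

```python
import numpy as np

def projected_subgradient_step(lam, subgrad, target, mu=1e-6, theta_cap=0.25):
    """One iteration of recursion (3), with the (1)-(2) fallback.

    subgrad(lam) -> (L_value, d, S); target stands in for L(lambda*)
    and is assumed to satisfy target > L_value.
    """
    L_val, d, S = subgrad(lam)
    d_bar = d - d.sum() / len(d)                  # projected direction P d
    norm = np.linalg.norm(d_bar)
    if norm == 0.0:
        return lam, L_val, S                      # lam is already optimal
    with np.errstate(divide="ignore"):
        alpha = np.min(np.where(d_bar < 0, -(lam * norm) / d_bar, np.inf))
    if alpha <= mu:                               # guard against premature
        step = lam + theta_cap * d                # convergence: usual move (1)
        return project_onto_simplex(step), L_val, S   # then projection (2)
    beta = 2.0 * (target - L_val) / norm ** 2
    theta = min(theta_cap, 0.5 * alpha, beta)     # 0 < theta <= min{alpha, beta}
    return lam + theta * d_bar / norm, L_val, S
```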
We establish in the Appendix that, under a suitable choice of $\theta_t$, the projected subgradient algorithm converges to an optimal solution of D. The major difference between the subgradient algorithm and the projected subgradient algorithm is the fashion in which the Lagrange multipliers are updated. The effort involved per iteration is less in the projected subgradient algorithm than in the subgradient algorithm. Detailed computational results are presented in Section 5.

Let us now consider another variant of our projected subgradient algorithm. This algorithm uses affine scaling at each iteration to make the updated Lagrange multipliers equidistant from the coordinate axes. We now discuss the effect of using affine scaling in solving D. Note that we can write problem D as

$$\max \left\{ \min_{S \in F} \sum_{j \in S} (\lambda^T C)_j : e^T\lambda = 1,\ \lambda \ge 0 \right\},$$

where $C = [C^1\ C^2\ \cdots\ C^p]^T$ is a $p \times n$ matrix. The algorithm works in a manner similar to that of the projected subgradient algorithm. We start with $\lambda^0 = (1/p, 1/p, \ldots, 1/p)$, a point that satisfies the constraints $\sum \lambda_i = 1$, $\lambda_i \ge 0$, and is equidistant from all $p$ coordinate axes. Assume that a movement along the projected subgradient direction takes us to $\lambda^1$, with $\sum \lambda_i^1 = 1$ and $\lambda_i^1 > 0$ for all $i$. Consider now the following affine transformation:

$$\bar\lambda = A^{-1}\lambda,$$

where $A = \mathrm{diag}(p\lambda_1^1, p\lambda_2^1, \ldots, p\lambda_p^1)$ is a $p \times p$ diagonal matrix. Under this transformation, $\lambda^1$ is mapped to $\bar\lambda^1 = (1/p, 1/p, \ldots, 1/p)$, again a point equidistant from all $p$ axes. Using this transformation, problem D changes to

$$(\bar{\mathrm{D}}) \qquad \max \{ L(\lambda) : e^T\lambda = 1,\ \lambda \ge 0 \} = \max \left\{ \min_{S \in F} \sum_{j \in S} (\bar\lambda^T A C)_j : e^T A \bar\lambda = 1,\ \bar\lambda \ge 0 \right\},$$

which is similar to our earlier problem with $C$ changed to $\bar C = AC$ and $e$ changed to $\bar e = Ae$, and the same procedure continues. Since each iteration of the subgradient, projected subgradient and affine scaling algorithms generates a feasible solution of MMCOP, comparing the objective function values of the feasible solutions generated and retaining the best yields a heuristic solution for MMCOP as well.

As mentioned earlier, since even the unconstrained version of MMCOP is NP-complete, one must depend on enumerative schemes to solve the problem exactly. The upper and lower bounding schemes discussed above can be incorporated in a tree search procedure to obtain an exact algorithm for MMCOP. The efficient generation of the search tree depends on the structure of $F$, and hence we do not discuss it here.
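Under the reconstruction above ($A = \mathrm{diag}(p\lambda_i)$), one affine rescaling step can be sketched as follows; the function name and interface are illustrative, not the authors' implementation.

```python
import numpy as np

def affine_rescale(lam: np.ndarray, C: np.ndarray):
    """One affine scaling step: move the current multipliers to the centre.

    With A = diag(p * lam), returns lam_bar = A^{-1} lam = (1/p, ..., 1/p),
    the rescaled costs C_bar = A C and the rescaled vector e_bar = A e.
    """
    a = len(lam) * lam                 # diagonal of A
    lam_bar = lam / a                  # equals (1/p, ..., 1/p)
    C_bar = a[:, None] * C             # row i of C scaled by a_i
    e_bar = a.copy()                   # A e, with e = (1, ..., 1)
    return lam_bar, C_bar, e_bar
```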
5. Computational results
We now present the results of extensive computational experiments conducted with the algorithms discussed in Section 4. All three algorithms were coded in FORTRAN 77 and tested on an IBM 4381 computer system. The ground set $E$ is chosen as $E = \{(i, j) : 1 \le i \le n,\ 1 \le j \le n\}$.
Table 1
Comparative performance of the algorithms (averages over five problems; time in CPU seconds)

          Projected subgradient     Affine scaling            Subgradient
 n   p    iter   time    Ratio      iter   time    Ratio      iter   time    Ratio
 30  5     22    2.26    0.05        22    3.09    0.05        32    2.76    0.12
 30  4     10    1.22    0.03        23    2.64    0.03        33    2.60    0.04
 30  3     14    1.36    0.05        14    1.79    0.04        34    2.46    0.08
 50  5     23    6.38    0.03        24    9.27    0.03        53   11.61    0.08
 50  4     29    6.77    0.03        34   10.49    0.04        36    8.29    0.04
 50  3     33    7.06    0.01        26    7.53    0.01        53   10.02    0.05
 70  5     14    9.47    0.02        32   22.18    0.02        71   29.13    0.07
 70  4     12    8.49    0.02        24   17.12    0.01        67   28.45    0.06
 70  3     52   19.11    0.01        39   19.22    0.01        50   18.50    0.02
100  5     56   58.21    0.02        55   75.14    0.02        77   74.43    0.03
100  4     35   33.93    0.01        64   71.19    0.01        82   68.62    0.04
100  3     36   33.73    0.01        20   26.52    0.01        86   68.06    0.03
Let $\Pi$ be the set of all permutations of $\{1, 2, \ldots, n\}$. Then for each $\pi \in \Pi$ we have a member $S(\pi) \in F$, where $S(\pi) = \{(i, \pi(i)) : 1 \le i \le n\}$. The resulting MMCOP is called the minmax assignment problem. The data $c_{ij}^k$, $1 \le k \le p$, $(i, j) \in E$, generated for the test problems are uniformly distributed random integers in the interval [5, 50]. The maximum number of iterations is set equal to $n + 5$. However, we observed that the best bound was obtained in far fewer iterations. We use the following termination criterion: "If the lower bound does not improve in 10 consecutive iterations, then terminate." This is a common approach used in many subgradient based algorithms in practice [22].

In all the algorithms the step length parameter $\theta_t$ was set to a constant and did not vary with the number of iterations. For the subgradient algorithm it was set to 0.25; for the projected subgradient algorithm it was set to $\min\{0.1,\ |\min\{-\lambda_i/\bar d_i : \bar d_i < 0\} - 0.0001|\}$; and for the affine scaling algorithm it was set to $\min\{0.07,\ |\min\{-(1/p)/\bar d_i : \bar d_i < 0\} - 0.0001|\}$. These 'optimal' step length parameters were identified by careful fine tuning of the algorithms through extensive experimentation. Having fixed the step length parameters, five different problems were solved for each problem size (i.e. each pair $n$, $p$). For each problem, the number of iterations taken for convergence, the computation time (in CPU seconds), and

$$\mathrm{Ratio} = \frac{\text{upper bound} - \text{lower bound}}{\text{lower bound}}$$

are noted. The averages of these values over the five problems are summarized in Table 1.

From our experiments we observed that the projected subgradient algorithm and the affine scaling algorithm have faster convergence rates than the subgradient algorithm. Further, the values of Ratio indicate that, for the test problems, the heuristic solutions obtained are near optimal. From our experiments we could not draw a conclusion about which of the three algorithms produces the best quality heuristic solutions to MMCOP, although from the lower bounding point of view the projected subgradient and affine scaling algorithms are better. Thus, if the purpose is to obtain a heuristic solution for MMCOP, one could run all three algorithms and pick the best solution obtained. Such a heuristic solution can possibly be improved using solution improvement heuristics such as tabu search [13,14,24].
6. Conclusions
In this paper we have studied the minmax combinatorial optimization problem and proposed general algorithms to find upper and lower bounds on the optimal objective function value. These algorithms also produce good heuristic solutions. The upper and lower bounding schemes can easily be incorporated in a tree search procedure (which exploits the special structure of $F$) to obtain exact solutions to MMCOP. Another major contribution of the paper is the introduction of the projected subgradient algorithm and its variant, the affine scaling algorithm, in the context of subgradient optimization. These algorithms provide two approaches to solving Lagrangean relaxations by subgradient optimization when the Lagrange multipliers are required to satisfy additional constraints. In MMCOP, these additional constraints are specially structured, and the simplicity and good performance of our algorithms depend on this fact. For general constraints, although the philosophy of these algorithms remains valid, their superiority over traditional subgradient algorithms is yet to be investigated and is left as a topic for future research.
Appendix
Theorem 1. If (i) $\theta_t \to 0$ as $t \to \infty$ and (ii) $\sum_{t=1}^\infty \theta_t$ diverges, then the projected subgradient algorithm converges to an optimal solution of D.
Proof. Let $L^*$ be the optimal objective function value of D. Since $L$ is continuous and concave and the unit simplex $\Delta$ is convex and bounded, there exists $\lambda^* \in \Delta$ such that $L(\lambda^*) = L^*$. We show by contradiction that for every $\epsilon > 0$ there exists an $i$ such that

$$L(\lambda^i) \ge L^* - \epsilon. \qquad (4)$$

From the continuity of $L$, for every $\epsilon > 0$ there exists a $\delta > 0$ such that

$$\|\lambda - \lambda^*\| \le \delta \ \Longrightarrow\ L(\lambda) > L^* - \epsilon. \qquad (5)$$

If $d^t = 0$ at any iteration, then $\lambda^t$ is an optimal solution and the algorithm terminates; so we assume that $d^t \ne 0$. Consider $\hat\lambda^t = \lambda^* - \delta \bar d^t / \|\bar d^t\|$. Now

$$\|\hat\lambda^t - \lambda^*\| = \delta. \qquad (6)$$

Thus

$$L(\hat\lambda^t) \ge L^* - \epsilon. \qquad (7)$$

From (7) and the assumption (for contradiction) that $L(\lambda^t) < L^* - \epsilon$ for all $t$, we have

$$L(\hat\lambda^t) > L(\lambda^t). \qquad (8)$$

Using (8) and the concavity of $L$, we have

$$(d^t)^T(\lambda^t - \hat\lambda^t) < 0. \qquad (9)$$
We consider two cases.

Case 1: $\alpha_t > \mu_t$. In this case we use (3) to compute $\lambda^{t+1}$. Then

$$\|\lambda^{t+1} - \lambda^*\|^2 \le \|\lambda^t - \lambda^*\|^2 + 2\theta_t (P d^t)^T(\lambda^t - \lambda^*)/\|\bar d^t\| + \theta_t^2. \qquad (10)$$

Since $P$ is a projection matrix and $\lambda^t - \lambda^*$ lies in the null space of $e^T$, we have $P(\lambda^t - \lambda^*) = \lambda^t - \lambda^*$ and $\|P d^t\| = \|\bar d^t\|$. Thus

$$\|\lambda^{t+1} - \lambda^*\|^2 \le \|\lambda^t - \lambda^*\|^2 + 2\theta_t (d^t)^T(\lambda^t - \lambda^*)/\|\bar d^t\| + \theta_t^2. \qquad (11)$$

Case 2: $\alpha_t \le \mu_t$. In this case we make a usual subgradient iteration, and from standard arguments it can be seen that

$$\|\lambda^{t+1} - \lambda^*\|^2 \le \|\lambda^t - \lambda^*\|^2 + 2\theta_t (d^t)^T(\lambda^t - \lambda^*)/\|d^t\| + \theta_t^2. \qquad (12)$$

Now

$$(d^t)^T(\lambda^t - \lambda^*) = (d^t)^T\big(\lambda^t - (\hat\lambda^t + \delta \bar d^t/\|\bar d^t\|)\big) = (d^t)^T(\lambda^t - \hat\lambda^t) - \delta\|\bar d^t\|. \qquad (13)$$

From (9), (11), (12) and (13) we have

$$\|\lambda^{t+1} - \lambda^*\|^2 \le \|\lambda^t - \lambda^*\|^2 + \theta_t(\theta_t - 2\delta).$$

Choosing $\mu_t$ such that $\mu_t \to 0$ as $t \to \infty$ and $\sum \mu_t$ diverges (for example, $\mu_t = 1/t$), there always exists a choice of $\theta_t$ such that $\theta_t \to 0$ as $t \to \infty$ and $\sum \theta_t$ diverges. Now, since $\theta_t \to 0$, there exists a constant $K$ such that $t \ge K \Rightarrow \theta_t < \delta$. Thus for every $t \ge K$,

$$\|\lambda^{t+1} - \lambda^*\|^2 \le \|\lambda^t - \lambda^*\|^2 - \theta_t \delta. \qquad (14)$$

Adding (14) for $t = K, K+1, K+2, \ldots, K+q$, we have

$$\delta \sum_{t=K}^{K+q} \theta_t \le \|\lambda^K - \lambda^*\|^2 - \|\lambda^{K+q+1} - \lambda^*\|^2 \le \|\lambda^K - \lambda^*\|^2.$$

Choosing $q$ sufficiently large, we obtain a contradiction to the fact that $\sum \theta_t$ diverges, and hence the theorem follows.
References

[1] Aggarwal, V., Tikekar, V.G., and Hsu, L.-F., "Bottleneck assignment problem under categorization", Computers & Operations Research 13 (1986) 11-26.
[2] Ahmadi, R.H., and Tang, C.S., "An operation partitioning problem for automated assembly system design", Operations Research 39 (1991) 824-835.
[3] Aneja, Y.P., "An integer programming approach to Steiner problem on graphs", Networks 10 (1980) 167-178.
[4] Aneja, Y.P., and Kabadi, S.N., "Polynomial algorithms for Lagrangean relaxations in combinatorial problems", Working Paper, Faculty of Business Administration, University of Windsor, Canada.
[5] Aneja, Y.P., and Nair, K.P.K., "Bicriteria transportation problem", Management Science 25 (1979) 73-78.
[6] Bertsimas, D., and Orlin, J.B., "A technique for speeding up the solution of the Lagrangean dual", WP #3278-91-MSA, Sloan School of Management, Massachusetts Institute of Technology.
[7] Bowman, V.J., "On the relationship of the Tchebycheff norm and the efficient frontier of multicriteria objectives", in: Lecture Notes in Economics and Mathematical Systems 135, Springer-Verlag, Berlin, 1980, 76-85.
[8] Choo, E.U., and Atkins, D.R., "An interactive algorithm for multi-criteria programming", Computers & Operations Research 7 (1980) 81-87.
[9] Ecker, J.G., and Shoemaker, N.E., "Selecting subsets of the set of nondominated vectors in multiple objective linear programming", SIAM Journal on Control and Optimization 19 (1981) 505-515.
[10] Edmonds, J., and Fulkerson, D.R., "Bottleneck extrema", Journal of Combinatorial Theory 8 (1970) 299-306.
[11] Freville, A., and Guignard, M., "Relaxations for minmax problems", Research Report, Department of Decision Sciences, The Wharton School, University of Pennsylvania, 1991.
[12] Ghattas, O.N., "The minimum bandwidth problem as an assignment problem with side constraints", Engineering Optimization 15 (1990) 163-169.
[13] Glover, F., "Tabu search - A tutorial", Technical Report, Graduate School of Business Administration, University of Colorado at Boulder, 1990.
[14] Glover, F., and Hubscher, R., "Bin packing with tabu search", Technical Report, Center for Applied Artificial Intelligence, University of Colorado at Boulder, 1991.
[15] Goldfarb, D., and Todd, M.J., "Linear programming", in: G. Nemhauser et al. (eds.), Optimization, North-Holland, Amsterdam, 1989, 73-170.
[16] Guignard, M., "Solving makespan minimization problems with Lagrangean decomposition", Discrete Applied Mathematics 42 (1993) 17-29.
[17] Gupta, A., and Warburton, A., "Approximation methods for multicriteria traveling salesman problem", Working Paper 86-24, Faculty of Administration, University of Ottawa, Canada.
[18] Held, M., and Karp, R.M., "The travelling salesman problem and minimum spanning trees, Part II", Mathematical Programming 1 (1971) 6-25.
[19] Held, M., Wolfe, P., and Crowder, H.D., "Validation of subgradient optimization", Mathematical Programming 6 (1974) 62-88.
[20] Kellerer, H., and Woeginger, G., "A tight LPT bound for 3-partition", Report No. 170, Institut für Mathematik, Technische Universität Graz, Austria, 1990.
[21] Maculan, N., and de Paula, Jr., G.G., "A linear-time median-finding algorithm for projecting a vector on the simplex of R^n", Operations Research Letters 8 (1989) 219-222.
[22] Mazzola, J.B., and Neebe, A.W., "Resource constrained assignment scheduling", Operations Research 34 (1986) 560-572.
[23] Punnen, A.P., "Travelling salesman problem under categorization", Operations Research Letters 12 (1992) 89-95.
[24] Punnen, A.P., and Aneja, Y.P., "Categorized assignment scheduling: A tabu search approach", to appear in Journal of the Operational Research Society.
[25] Radzik, T., "Algorithms for some linear and fractional combinatorial optimization problems", Ph.D. Thesis, Stanford University, 1992.
[26] Richey, M.B., and Punnen, A.P., "Minimum perfect bipartite matching and spanning trees under categorization", Discrete Applied Mathematics 39 (1992) 147-153.
[27] Rosen, J.B., "The gradient projection method for nonlinear programming, Part I: Linear constraints", Journal of the Society for Industrial and Applied Mathematics 8 (1960) 181-217.
[28] Seshan, C.R., "Some generalizations of the time minimizing assignment problem", Journal of the Operational Research Society 32 (1981) 489-494.
[29] Steuer, R.E., and Choo, E.U., "An interactive weighted Tchebycheff procedure for multiple objective programming", Mathematical Programming 26 (1983) 326-344.
[30] Vaidya, P., "A new algorithm for minimizing convex functions over convex sets", AT&T Bell Laboratories, 1990.
[31] Warburton, A., "Approximation of Pareto-optima in multi-objective shortest path problems", Operations Research 35 (1987) 70-79.
[32] Warburton, A., "Worst case analysis of greedy and related heuristics for some min-max combinatorial optimization problems", Mathematical Programming 33 (1985) 234-241.
[33] Wong, R.T., "A dual ascent approach for Steiner tree problems on a directed graph", Mathematical Programming 28 (1984) 271-287.
[34] Zeleny, M., "Compromise programming", in: Multicriteria Decision Making, University of South Carolina Press, Columbia, SC, 1973, 262-301.