European Journal of Operational Research 129 (2001) 471–480
www.elsevier.com/locate/dsw

Theory and Methodology

Budgeting with bounded multiple-choice constraints

David Pisinger *

Department of Computer Science, University of Copenhagen, Universitetsparken 1, DK-2100 Copenhagen, Denmark

Received 14 October 1998; accepted 26 October 1999
Abstract

We consider a budgeting problem where a specified number of projects from some disjoint classes has to be selected such that the overall gain is largest possible, and such that the costs of the chosen projects do not exceed a fixed upper limit. The problem has several applications in government budgeting and planning, and arises as a relaxation of other combinatorial problems. It is demonstrated that the problem can be transformed to an equivalent multiple-choice knapsack problem through dynamic programming. A naive transformation however leads to a drastic increase in the number of variables, thus we propose an algorithm for the continuous problem based on Dantzig–Wolfe decomposition. A master problem solves a continuous multiple-choice knapsack problem knowing only some extreme points in each of the transformed classes. The individual subproblems find extreme points for each given direction, using a median search algorithm. An integer optimal solution is then derived by using the dynamic programming transformation to a multiple-choice knapsack problem for an expanding core. The individual classes are considered in an order given by their gradients, and the transformation to a multiple-choice knapsack problem is performed only when needed. In this way, only about a dozen classes need to be transformed for standard instances from the literature. Computational experiments are presented, showing that the developed algorithm is orders of magnitude faster than a general LP/MIP algorithm. © 2001 Elsevier Science B.V. All rights reserved.

Keywords: Integer programming; Dantzig–Wolfe decomposition; Dynamic programming; Bounded multiple-choice knapsack problem
1. Introduction

Consider a budgeting problem with k classes N_1, ..., N_k of projects. It is demanded that at most (resp. at least or exactly) a_i projects should be selected from class N_i. Each project j ∈ N_i has an associated gain (profit) p_ij and cost (weight) w_ij, and the problem is to choose the appropriate number of projects from each class such that the largest gain is obtained without exceeding a limit c on the cost. Thus the budgeting problem with bounded multiple-choice constraints (BBMC) may be formulated as

* Tel.: +45-35-32-13-54; fax: +45-35-32-14-01. E-mail address: [email protected] (D. Pisinger).
0377-2217/01/$ – see front matter © 2001 Elsevier Science B.V. All rights reserved. PII: S0377-2217(99)00451-8
maximize    z = \sum_{i=1}^{k} \sum_{j \in N_i} p_{ij} x_{ij}

subject to  \sum_{i=1}^{k} \sum_{j \in N_i} w_{ij} x_{ij} \le c,                    (1)

            \sum_{j \in N_i} x_{ij} \le a_i,   i \in L,

            \sum_{j \in N_i} x_{ij} \ge a_i,   i \in G,

            \sum_{j \in N_i} x_{ij} = a_i,     i \in E,

            x_{ij} \in \{0, 1\},   i = 1, ..., k,  j \in N_i.
The sets L, G, E must be disjoint and satisfy L ∪ G ∪ E = K, where K = {1, ..., k}. All coefficients p_ij, w_ij, a_i and c are positive integers, and the classes N_1, ..., N_k are mutually disjoint, class N_i having size n_i. The total number of items is n = \sum_{i=1}^{k} n_i. If we relax the integrality constraint x_ij ∈ {0, 1} in (1) to 0 ≤ x_ij ≤ 1, we obtain the continuous budgeting problem with bounded multiple-choice constraints (CBBMC). To avoid unsolvable or trivial situations we assume that

    \sum_{i \in E \cup G} \underline{v}_i \le c < \sum_{i \in E \cup L} \overline{v}_i + \sum_{i \in G} \sum_{j \in N_i} w_{ij},     (2)

where \underline{v}_i is the weight sum of the a_i lightest items in class N_i and \overline{v}_i is the weight sum of the a_i heaviest items in the class.

The budgeting problem with bounded multiple-choice constraints is a generalization of the multiple-choice knapsack problem (MCKP), in which E = K and a_i = 1 for i = 1, ..., k. If each class has exactly two items, where one of these has (p_{i1}, w_{i1}) = (0, 0) for i = 1, ..., k, and additionally a_i = 1 for each class in K = E, the problem (1) corresponds to the 0–1 knapsack problem (KP). The continuous relaxation of KP will be denoted by CKP.

The BBMC has a wide variety of applications in budgeting, where a number of projects from each class N_i has to be selected such that the overall gain is largest possible, and such that the costs demanded for the chosen projects do not exceed a fixed upper limit. It may be applied in sequencing and scheduling problems as well as in strategic production planning. BBMC has several applications in the public sector, as it may be used for hospital planning with production constraints in each department, or government budgeting with demands in different sectors. Moreover, BBMC contains several classical knapsack problems as special cases, among which we should mention the 0–1 knapsack problem, the subset-sum problem, the bounded knapsack problem and the multiple-choice knapsack problem. So all applications for these problems (see e.g. [6,7]) are immediately adaptable to this general model. BBMC is NP-hard as it contains KP as a special case.

In this paper we will show that BBMC may be transformed to an ordinary MCKP through dynamic programming, which then is solved by use of a minimal-core dynamic programming algorithm from [7]. Section 2 will present the transformation of BBMC to MCKP, and show several fundamental properties of MCKP. Section 3 shows how Dantzig–Wolfe decomposition may be used to solve the continuous BBMC without transforming classes to MCKP. Finally, Section 4 describes how gradients in each class may be used to define a core of the problem, and how the same gradients may be used for reduction of states in a dynamic programming algorithm. Some computational experiments are presented in Section 5. A first version of this paper was presented at AIRO'96 [8].
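To make the model (1) concrete, the following sketch solves a tiny invented instance of BBMC by brute-force enumeration. The data, function name and class encoding are ours, chosen purely for illustration; the paper's own implementation is in C and far more sophisticated.

```python
from itertools import product

def solve_bbmc_brute_force(classes, bounds, kinds, c):
    """Enumerate all 0-1 choices for a small BBMC instance (1).

    classes: one list of (profit, weight) pairs per class N_i.
    bounds:  the cardinalities a_i.
    kinds:   '<=', '>=' or '==', i.e. whether class i belongs to L, G or E.
    c:       the capacity.  Returns the optimal gain, or None if infeasible.
    """
    best = None
    for combo in product(*(product((0, 1), repeat=len(cl)) for cl in classes)):
        profit = weight = 0
        feasible = True
        for x, cl, a, kind in zip(combo, classes, bounds, kinds):
            cnt = sum(x)
            if (kind == '<=' and cnt > a) or \
               (kind == '>=' and cnt < a) or \
               (kind == '==' and cnt != a):
                feasible = False
                break
            profit += sum(p for xj, (p, _) in zip(x, cl) if xj)
            weight += sum(w for xj, (_, w) in zip(x, cl) if xj)
        if feasible and weight <= c and (best is None or profit > best):
            best = profit
    return best

# Two classes: exactly 1 project from the first, at most 2 from the second.
classes = [[(6, 3), (8, 5)], [(4, 2), (5, 4), (7, 6)]]
print(solve_bbmc_brute_force(classes, [1, 2], ['==', '<='], 10))  # 15
```

Choosing (6, 3) from the first class leaves room for (4, 2) and (5, 4) from the second, giving gain 15 within capacity 10. The enumeration is exponential and only meant to pin down the semantics of (1).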
2. Transformation to MCKP

Every BBMC can be put in a form with equalities in all the cardinality constraints:

maximize    z = \sum_{i=1}^{k} \sum_{j \in N_i} p_{ij} x_{ij}

subject to  \sum_{i=1}^{k} \sum_{j \in N_i} w_{ij} x_{ij} \le c,                    (3)

            \sum_{j \in N_i} x_{ij} = a_i,   i = 1, ..., k,

            x_{ij} \in \{0, 1\},   i = 1, ..., k,  j \in N_i.

The transformation is a stepwise modification of the constraints and item profits/weights:
Step 1: A class N_i with a constraint \sum_{j \in N_i} x_{ij} \ge a_i is modified to a constraint of the form

    \sum_{j \in N_i} x_{ij} \le n_i - a_i     (4)

by adding \sum_{j \in N_i} p_{ij} to the objective function and subtracting \sum_{j \in N_i} w_{ij} from the capacity c. In addition, all items j ∈ N_i change sign to (−p_ij, −w_ij). In this way we have changed the problem of choosing at least a_i items into the problem of choosing no more than n_i − a_i items.

Step 2: A class N_i with a constraint \sum_{j \in N_i} x_{ij} \le a_i is easily modified to the equality constraint

    \sum_{j \in N_i} x_{ij} = a_i     (5)

by adding a_i new items to the class which all have profit and weight equal to zero.

Step 3: Negative profits and weights are handled by adding a sufficiently large constant to all items in a class N_i. Let p_i^m = min_{j \in N_i} p_ij and w_i^m = min_{j \in N_i} w_ij. Replace all item profits and weights by (p_ij − p_i^m, w_ij − w_i^m). This does not affect the optimal solution, since a specific number of items is chosen in each class. We must however add the constant a_i p_i^m to the objective function, and subtract a_i w_i^m from the capacity for each class N_i.

In the rest of the paper we will assume that BBMC is in the standard form (3).
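A direct rendering of Steps 1–3 can be sketched as follows. The representation of classes as (profit, weight) lists and the returned objective offset are assumptions of this illustration, not part of the paper.

```python
def to_standard_form(classes, bounds, kinds, c):
    """Apply Steps 1-3 to bring a BBMC instance into the equality form (3).

    Returns (classes', bounds', c', offset): the transformed classes,
    cardinalities and capacity, plus the constant that must be added to the
    objective value of the transformed instance to recover the original one.
    """
    out_classes, out_bounds, offset = [], [], 0
    for items, a, kind in zip(classes, bounds, kinds):
        items = list(items)
        if kind == '>=':
            # Step 1: "select >= a_i" becomes "deselect <= n_i - a_i"
            # with all profits/weights negated.
            offset += sum(p for p, _ in items)
            c -= sum(w for _, w in items)
            items = [(-p, -w) for p, w in items]
            a = len(items) - a
            kind = '<='
        if kind == '<=':
            # Step 2: pad with a_i zero-profit, zero-weight items so that
            # exactly a_i items can always be chosen.
            items += [(0, 0)] * a
        # Step 3: shift profits/weights so they are non-negative again.
        pm = min(p for p, _ in items)
        wm = min(w for _, w in items)
        items = [(p - pm, w - wm) for p, w in items]
        offset += a * pm
        c -= a * wm
        out_classes.append(items)
        out_bounds.append(a)
    return out_classes, out_bounds, c, offset
```

For a single class {(5, 4), (3, 2)} with the constraint "at least 1" and c = 6, the transformed class becomes {(0, 0), (2, 2), (5, 4)} with a_i = 1, c = 4 and offset 3; choosing the item (5, 4) gives 5 + 3 = 8, matching the original optimum of selecting both projects.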
A BBMC in the standard form may be transformed to an equivalent MCKP as follows: In each class N_i we construct all undominated selections of a items through dynamic programming for a = 1, ..., a_i. Thus let f^i_{h,a}(\tilde c), for h = 1, ..., n_i; a = 1, ..., a_i; \tilde c = 0, ..., c, be the optimal solution value to the following cardinality-constrained knapsack problem, defined on the first h items of class N_i and with exactly a chosen items:

    f^i_{h,a}(\tilde c) = max { \sum_{j=1}^{h} p_{ij} x_{ij} :  \sum_{j=1}^{h} w_{ij} x_{ij} \le \tilde c,  \sum_{j=1}^{h} x_{ij} = a,  x_{ij} \in \{0, 1\}, j = 1, ..., h }.     (6)

Initially we set f^i_{0,0}(\tilde c) = 0 for all \tilde c = 0, ..., c (and f^i_{0,a}(\tilde c) = −∞ for a > 0), while subsequent values of f^i_{h,a} are found by using the recursion:

    f^i_{h,a}(\tilde c) = f^i_{h-1,a}(\tilde c)                                                  if a = 0 or \tilde c − w_{ih} < 0,
    f^i_{h,a}(\tilde c) = max{ f^i_{h-1,a}(\tilde c), f^i_{h-1,a-1}(\tilde c − w_{ih}) + p_{ih} }   if a > 0 and \tilde c − w_{ih} \ge 0.     (7)

We may define an equivalent MCKP by, for each class N_i and for each weight j = 0, ..., c, setting (see Fig. 1)

    \bar w_{ij} = j,   \bar p_{ij} = f^i_{n_i, a_i}(j).     (8)
Fig. 1. Transformation of N_i to \bar N_i. In the present example, N_i consists of items with (w, p) = {(6, 203), (117, 169), (137, 197), (186, 257), (169, 226), (246, 269), (249, 260), (111, 100), (83, 17), (189, 23)} and a_i = 4. The transformed class \bar N_i consists of items with (w, p) = {(317, 489), (337, 517), (343, 586), (371, 669), (403, 697), (420, 729), (429, 794), (446, 826), (477, 854), (497, 883), (554, 897), (574, 926), (606, 954), (669, 957), (686, 989), (849, 1011)}, where each item in \bar N_i represents the sum of a_i items from N_i.
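The recursion (7) can be sketched as below. The code keeps every feasible weight rather than only the undominated ones, and the "weight at most c̃" semantics of (6) follows from the base case f_{0,0}(c̃) = 0 for all c̃. Function and variable names are ours.

```python
def transform_class(items, a_i, c):
    """Compute f^i_{n_i, a_i}(j) of (6)-(7) for j = 0..c.

    items: (profit, weight) pairs of class N_i.
    Returns the transformed class as pairs (j, f(j)), restricted to the
    weights j for which an a_i-subset of weight at most j exists.
    """
    NEG = float('-inf')
    # f[a][ct]: best profit of an a-subset of the items seen so far
    # whose weight sum is at most ct.
    f = [[0] * (c + 1)] + [[NEG] * (c + 1) for _ in range(a_i)]
    for p, w in items:
        for a in range(a_i, 0, -1):          # descending: each item used once
            for ct in range(c, w - 1, -1):
                cand = f[a - 1][ct - w] + p
                if cand > f[a][ct]:
                    f[a][ct] = cand
    return [(j, f[a_i][j]) for j in range(c + 1) if f[a_i][j] > NEG]

# Check against the class of Fig. 1 (a_i = 4, items as (profit, weight)):
ni = [(203, 6), (169, 117), (197, 137), (257, 186), (226, 169),
      (269, 246), (260, 249), (100, 111), (17, 83), (23, 189)]
bar = dict(transform_class(ni, 4, 900))
print(bar[317], bar[337])  # 489 517, the two lightest items of Fig. 1
```

The running time is O(n_i a_i c) per class, as stated below for the overall transformation.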
In this way we obtain an MCKP with classes \bar N_i, where the problem is to maximize the profit sum by choosing exactly one item from each class without exceeding the capacity constraint [6]:

maximize    z = \sum_{i=1}^{k} \sum_{j \in \bar N_i} \bar p_{ij} x_{ij}

subject to  \sum_{i=1}^{k} \sum_{j \in \bar N_i} \bar w_{ij} x_{ij} \le c,                    (9)

            \sum_{j \in \bar N_i} x_{ij} = 1,   i = 1, ..., k,

            x_{ij} \in \{0, 1\},   i = 1, ..., k,  j \in \bar N_i.

Class \bar N_i has size \bar n_i = c, and the transformation runs in O(n_i a_i c) time for each class of BBMC, i.e. O(c \sum_{i=1}^{k} n_i a_i) time in total. There are c items in each class of (9), so the MCKP can be solved in O(kc^2) time through dynamic programming [7]. This gives a total solution time for BBMC of O(c \sum_{i=1}^{k} n_i a_i + kc^2).
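The O(kc^2) step over the transformed classes can be sketched with a plain textbook MCKP recursion (this is not the minimal-core algorithm of [7]; names are ours):

```python
def solve_mckp(classes, c):
    """Solve the MCKP (9): pick exactly one (profit, weight) item per class
    with total weight at most c.  Returns the optimal profit or None."""
    NEG = float('-inf')
    g = [0] * (c + 1)            # best profit over the processed classes
    for cl in classes:           #   using weight at most the index
        ng = [NEG] * (c + 1)
        for p, w in cl:
            for budget in range(w, c + 1):
                if g[budget - w] != NEG and g[budget - w] + p > ng[budget]:
                    ng[budget] = g[budget - w] + p
        g = ng
    return None if g[c] == NEG else g[c]
```

For example, two classes [(6, 3), (8, 5)] and [(4, 2), (7, 6)] with c = 9 give 13, obtained by choosing (6, 3) and (7, 6).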
2.1. Properties of MCKP and BBMC

Having transformed the BBMC to an equivalent MCKP, well-known properties of the latter problem can be used to further restrict the solution space. We have:

Definition 1. If two items r and s in the same class \bar N_i satisfy

    \bar w_{ir} \le \bar w_{is}  and  \bar p_{ir} \ge \bar p_{is},     (10)

then we say that item r dominates item s. Similarly, if some items r, s, t ∈ \bar N_i with \bar w_{ir} \le \bar w_{is} \le \bar w_{it} and \bar p_{ir} \le \bar p_{is} \le \bar p_{it} satisfy

    (\bar p_{it} − \bar p_{ir}) / (\bar w_{it} − \bar w_{ir}) \ge (\bar p_{is} − \bar p_{ir}) / (\bar w_{is} − \bar w_{ir}),     (11)

then we say that item s is continuously dominated by items r and t.

Proposition 1 [9]. Given two items r, s ∈ \bar N_i. If item r dominates item s, then an optimal solution to MCKP with x_{is} = 0 exists. If two items r, t ∈ \bar N_i continuously dominate an item s ∈ \bar N_i, then an optimal solution to CMCKP with x_{is} = 0 exists.

Proposition 2 [9]. An optimal solution to CMCKP satisfies the following:
(a) The solution has at most two fractional variables x_{fb} and x_{fb'}.
(b) If it has two fractional variables, they must be in the same class \bar N_f.

To characterize solutions to CMCKP further, let us consider some results from 0–1 knapsack problems. Balas and Zemel [1] showed that the CKP may be solved in O(n) time through a partitioning technique which seeks a parameter λ such that, when we partition the items {1, ..., n} into S = {j : p_j/w_j > λ}, T = {j : p_j/w_j = λ} and U = {j : p_j/w_j < λ}, then \sum_{j \in S} w_j < c \le \sum_{j \in S \cup T} w_j. The objective value of the continuous solution is then \sum_{j \in S} p_j + λ(c − \sum_{j \in S} w_j).

Now for the MCKP assume that all continuously dominated items have been removed according to Proposition 1, leaving back the undominated items R_i (Fig. 2). Then, due to the convexity of each class, one can see the continuous problem as a search for a λ such that

    \sum_{i=1}^{k} \bar w_{i φ_i} < c \le \sum_{i=1}^{k} \bar w_{i ψ_i},     (12)

where

    φ_i = arg min_{j \in M_i} \bar w_{ij},   ψ_i = arg max_{j \in M_i} \bar w_{ij},   for i = 1, ..., k,

    M_i = { j ∈ R_i : \bar p_{ij} − λ \bar w_{ij} = max_{ℓ \in R_i} (\bar p_{iℓ} − λ \bar w_{iℓ}) },   for i = 1, ..., k.     (13)
Fig. 2. A class \bar N_i, where the white items are continuously dominated by the black items. The undominated items R_i (black) form the upper convex boundary of \bar N_i.
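The reductions of Proposition 1 amount to keeping, in each class, only the upper convex boundary shown in Fig. 2. A sweep in order of increasing weight does this; the code below is our own sketch, with items given as (weight, profit) pairs.

```python
def undominated(items):
    """Return R_i: the items of a class that are neither dominated (10)
    nor continuously dominated (11), sorted by increasing weight."""
    pts = sorted(items, key=lambda wp: (wp[0], -wp[1]))
    hull = []
    for w, p in pts:
        if hull and p <= hull[-1][1]:
            continue                  # dominated by a lighter, richer item
        while len(hull) >= 2:
            (w1, p1), (w2, p2) = hull[-2], hull[-1]
            # hull[-1] is continuously dominated if the slope from (w1, p1)
            # to (w2, p2) does not exceed the slope onward to (w, p).
            if (p2 - p1) * (w - w2) <= (p - p2) * (w2 - w1):
                hull.pop()
            else:
                break
        hull.append((w, p))
    return hull

print(undominated([(1, 1), (2, 5), (3, 6), (4, 9), (5, 9)]))
# [(1, 1), (2, 5), (4, 9)]: (3, 6) lies under the segment (2,5)-(4,9)
```

The cross-product test avoids divisions, so the sweep works unchanged for integer data.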
Fig. 3. Projection of (\bar w_{ij}, \bar p_{ij}) ∈ R_i at (−λ, 1). In this case M_i = {φ_i, ψ_i}; φ_i is the lightest item in M_i, and ψ_i is the heaviest item in M_i.

Thus M_i is the set of items in R_i which have the largest projection of (\bar w_{ij}, \bar p_{ij}) at (−λ, 1), and φ_i resp. ψ_i are the lightest/heaviest among the items in M_i (Fig. 3). If we can choose a λ such that the weight sum of the items φ_i is less than c but the weight sum of the items ψ_i is not smaller than c, then we have found an optimal solution to CMCKP. The objective value is then given by

    z_CMCKP = \sum_{i=1}^{k} \bar p_{i φ_i} + λ ( c − \sum_{i=1}^{k} \bar w_{i φ_i} ).     (14)

See [9] for a greedy algorithm which constructs a solution with the above properties.

We have previously seen that the transformation of BBMC to MCKP is expensive, thus the formulation (12) and (13) cannot directly be used for solving BBMC. For a given value of λ it is however easy for each class N_i to find the corresponding extreme point in \bar N_i which has the largest projection at (−λ, 1). The items in M_i are those attaining the maximum in the following expression:

    max_{ℓ \in R_i} (\bar p_{iℓ} − λ \bar w_{iℓ}) = max_{B \subseteq N_i, |B| = a_i} ( \sum_{j \in B} p_{ij} − λ \sum_{j \in B} w_{ij} ) = max_{B \subseteq N_i, |B| = a_i} \sum_{j \in B} (p_{ij} − λ w_{ij}),     (15)

i.e. an extreme point (\bar p_{iℓ}, \bar w_{iℓ}) ∈ \bar N_i in direction (−λ, 1) is the sum of the a_i items in N_i which have the largest projection at (−λ, 1). To find the a_i items in N_i having the largest projection, it is sufficient to order the items in N_i according to

    P_{ij}(λ) = p_{ij} − λ w_{ij},     (16)

choosing the a_i first entries in this list. To find the extreme points with smallest/largest weight sums, i.e. (\bar p_{i φ_i}, \bar w_{i φ_i}) resp. (\bar p_{i ψ_i}, \bar w_{i ψ_i}), we simply use the same sorting key (16), breaking ties such that the smallest resp. largest weights are chosen first. Using a median search algorithm to find the a_i first items with the given ordering, the extreme points in \bar N_i can be found in O(n_i) time.

Let us end this section with a characterization of solutions to CBBMC:

Proposition 3. There exists an LP-optimal solution which satisfies the following:
(a) There are zero or two fractional variables.
(b) If there are two fractional variables, then they belong to the same class \bar N_f.

Proof. Transform the BBMC to an equivalent MCKP through dynamic programming. Continuously dominated states will never lead to an LP-optimal solution, thus they may be removed. Solve the continuous MCKP. Due to Proposition 2, zero or two decision variables in CMCKP will be fractional. If there are zero fractional variables, then there will not be any fractional variables in the LP-optimal solution to BBMC either. Thus assume that there are two fractional variables in the LP-optimal solution to MCKP. The variables will be found in the same set \bar N_f. Let x_{fr} and x_{fs} be those two variables, and let X_r = {x_{f,r_1}, ..., x_{f,r_{a_f}}} resp. X_s = {x_{f,s_1}, ..., x_{f,s_{a_f}}} be the variables in BBMC which correspond to the two chosen states in MCKP. Obviously |X_r| = |X_s| = a_f and X_r ⊆ N_f, X_s ⊆ N_f. Assume that the items are ordered according to their projection (16), and let γ be the projection of the last item x_{f,r_{a_f}}. Those items x_{r_k} which have a larger projection than γ will be present in both sets, and thus x_{r_k} = 1 in an optimal solution to CBBMC. The remaining items will all have the same projection γ. Then an LP-optimal solution may be constructed by repeatedly exchanging a lighter item x_{r_k} with a heavier item x_{s_ℓ}. When two
items are met where an exchange would lead to an overfilled knapsack, these are the fractional variables, and the continuous solution is a convex combination of x_{r_k} and x_{s_ℓ} such that x_{r_k} + x_{s_ℓ} = 1 and such that the total capacity is used. □
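The extreme points used throughout this section can thus be generated without ever building \bar N_i. The sketch below substitutes heapq.nlargest for the paper's linear-time median search (same result, O(n_i log a_i) instead of O(n_i)); the names and the tie-breaking flag are ours.

```python
import heapq

def extreme_point(items, a_i, lam, prefer_heavy=False):
    """Extreme point of \bar N_i in direction (-lam, 1): the sum of the a_i
    items of N_i with the largest projection (16), p - lam*w.

    items: (profit, weight) pairs.  Ties on the key are broken toward the
    lightest items (phi_i) or, with prefer_heavy, the heaviest (psi_i).
    Returns the pair (P, W)."""
    sign = -1 if prefer_heavy else 1
    chosen = heapq.nlargest(
        a_i, items, key=lambda pw: (pw[0] - lam * pw[1], -sign * pw[1]))
    return (sum(p for p, _ in chosen), sum(w for _, w in chosen))

# Both items have projection 3 at lam = 1; the tie-break picks phi or psi.
print(extreme_point([(6, 3), (4, 1)], 1, 1.0))        # (4, 1)
print(extreme_point([(6, 3), (4, 1)], 1, 1.0, True))  # (6, 3)
```

A true quickselect-based implementation would match the O(n_i) bound claimed above; the heap version is only chosen here for brevity.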
3. Dantzig–Wolfe decomposition for the CBBMC

The continuous BBMC can be solved through a dynamic programming transformation to MCKP. The continuous MCKP can then be solved in linear time (measured in the number of items in the transformed classes) by applying the algorithms of Dyer [4] or Zemel [10]. Since the number of items introduced by the dynamic programming transformation is \sum_{i=1}^{k} \bar n_i = kc, the CMCKP will be solved in O(kc) time, and thus the continuous BBMC is solved in O(c \sum_{i=1}^{k} n_i a_i + kc) time when we take into account the time used for the transformation.

Better solution times can be obtained through Dantzig–Wolfe decomposition [3]. A master problem solves a continuous MCKP, while the individual subproblems find extreme points for each class \bar N_i. The algorithm is a kind of binary search for the parameter λ which satisfies (12) and (13). As seen in the proof of Proposition 3, we know that λ will correspond to a quotient

    λ = a/b = (p_{ir} − p_{is}) / (w_{ir} − w_{is}),     (17)
where r, s are two items in the same class N_i. If P, W are the largest profit resp. weight of an item, then we know that λ = a/b should be found among the values 0 ≤ a ≤ P and 1 ≤ b ≤ W, thus we may use binary search to determine the optimal value of λ. Let initially [λ_1, λ_2] = [0, P] be the interval in which λ should be found, and consider the following algorithm for the master problem:

Step 1: Let λ = a/b be a rational number in the interval [λ_1, λ_2]. Such a value may be derived efficiently using the function small_rational_between from [5]. Derive φ_i, ψ_i and M_i as given by (13) in each class by solving the subproblem (15). If the constraint (12) is satisfied, we may stop, since λ defines the optimal solution.
Step 2: Let λ = (λ_1 + λ_2)/2. For each class N_i derive φ_i, ψ_i and M_i as given by (13) for this value of λ. If constraint (12) is satisfied, then we may terminate with an optimal value of λ. Otherwise, if \sum_{i=1}^{k} \bar w_{i φ_i} > c, we know that λ is too small, thus we set λ_1 = λ. In a similar way, if \sum_{i=1}^{k} \bar w_{i ψ_i} < c, we set λ_2 = λ. Go to Step 1.

In order to analyze the time complexity of the algorithm, we have previously noticed that there are O(PW) possible values of λ = a/b. A binary search among these values will demand O(log(PW)) iterations of the master problem. Solving the subproblems takes \sum_{i=1}^{k} O(n_i) = O(n) time for each value of λ, thus the whole algorithm runs in O(n log P + n log W), which is polynomial in the input size.

The decomposition shows that we are able to solve the continuous BBMC without knowing the items in each class \bar N_i, and in particular we do not need to transform N_i to \bar N_i. Indeed, we do not even need to know all continuously undominated items (extreme points), as these are generated on the fly in O(n_i) time. The solution times for the continuous BBMC are sketched in Fig. 4. Problems with n_i = 100 items in each of the k classes can be solved in a couple of seconds, and the solution times seem to grow linearly with the size of the problem.

4. Solving BBMC to integer optimality

In Section 3 we saw that Dantzig–Wolfe decomposition may be used to solve the continuous BBMC efficiently. Obtaining an integer solution is however more difficult, since this step demands the transformation (8) of classes N_i to \bar N_i. If a core is used for solving the transformed BBMC, only the classes in the core need to be transformed according to (8), and hopefully the size of the core is much smaller than k. We will apply the core algorithm presented in [7]. The algorithm is based on dynamic programming, where the search is focused on a small number of classes where there is a large probability of finding an optimal solution. Since it is difficult to say in advance how large a core should be chosen, the
Fig. 4. Solution times (in seconds) for solving the continuous problem through Dantzig–Wolfe decomposition. The graph depicts log time as a function of log k for uncorrelated instances with n_i = 100 items. Profits and weights are randomly distributed in [1, 1000], and a_i is randomly distributed in [1, 20]. Times are average values of 100 instances. The solution times seem to grow linearly with k, as a = 1.0 and b = −3.4, thus the solution times can be approximated by t(k) = 10^b k^a = 0.000398k.
algorithm initially starts with one class \bar N_f in the core, and gradually expands the core according to some greedy rules. These greedy rules are based on the concept of forward and backward gradients λ_i^+, λ_i^−, defined as follows (Fig. 5):

    λ_i^+ = max_{j \in \bar N_i, \bar w_{ij} > \bar w_{ib_i}} (\bar p_{ij} − \bar p_{ib_i}) / (\bar w_{ij} − \bar w_{ib_i}),   i ≠ f,     (18)

    λ_i^− = min_{j \in \bar N_i, \bar w_{ij} < \bar w_{ib_i}} (\bar p_{ib_i} − \bar p_{ij}) / (\bar w_{ib_i} − \bar w_{ij}),   i ≠ f,     (19)
where item b_i ∈ \bar N_i is the continuously optimal choice in the set, and class \bar N_f is the fractional class with up to two fractional variables.

Fig. 5. Gradients λ_i^+, λ_i^− in class \bar N_i.

The forward gradient λ_i^+ is a measure of the largest possible gain per weight unit by choosing a heavier item in class \bar N_i instead of the continuously optimal item b_i. Similarly, the backward gradient λ_i^− is a measure of the smallest possible loss per weight unit by choosing a lighter item in class \bar N_i. The gradients can be derived efficiently without transforming classes N_i to \bar N_i. Again we use the technique of deriving extreme points from the previous section, using an iterative algorithm to derive λ_i^+:

    Set λ = 0.
    repeat
        Set λ_i^+ = λ.
        Find an extreme point (P, W) ∈ \bar N_i in direction λ = λ_i^+ according to (15).
        Set λ = (P − \bar p_{ib_i}) / (W − \bar w_{ib_i}).
    until λ_i^+ = λ

A similar approach may be used for λ_i^−, where we iterate from the initial value λ_i^− = ∞.

Now the main algorithm for the exact solution of BBMC runs as follows: First, we derive the gradients for all classes, and define the core as C = {\bar N_f}, where class f is the fractional class. In each iteration we choose the class with largest λ_i^+ resp. smallest λ_i^− and add it to the core. The chosen class N_i must first be transformed to the corresponding set \bar N_i, and then all combinations are enumerated through dynamic programming, using the recursion from [7].

The gradients are also used for fathoming states in the dynamic programming. Let \bar N_s be the class with smallest λ_i^− among the classes not yet enumerated, and let \bar N_t be the class with largest value λ_i^+. By this assumption we get the following upper bound on a state in the dynamic programming with profit sum p and weight sum μ:
    u(p, μ) = p + (c − μ) λ_t^+   if μ ≤ c,
    u(p, μ) = p + (c − μ) λ_s^−   if μ > c.     (20)

The state is fathomed if ⌊u(p, μ)⌋ ≤ z, where z is the so far best solution found in the dynamic programming recursion. For convenience we set λ_t^+ = 0 and λ_s^− = ∞ when all classes have been enumerated, in order to fathom states which cannot be improved further.
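Applied to a pool of DP states, the test (20) reads as in this sketch (the state representation as (profit, weight) pairs and all names are ours):

```python
import math

def fathom_states(states, c, lam_plus_t, lam_minus_s, z):
    """Keep only the states (p, mu) whose bound (20) exceeds the incumbent z:
    u(p, mu) = p + (c - mu) * lambda_t^+  for mu <= c (room left to fill),
    u(p, mu) = p + (c - mu) * lambda_s^-  for mu > c  (weight to remove)."""
    keep = []
    for p, mu in states:
        lam = lam_plus_t if mu <= c else lam_minus_s
        if math.floor(p + (c - mu) * lam) > z:
            keep.append((p, mu))
    return keep

# c = 10, lambda_t^+ = 2, lambda_s^- = 3, incumbent z = 14:
print(fathom_states([(10, 8), (13, 9), (20, 12), (20, 11)], 10, 2, 3, 14))
# [(13, 9), (20, 11)]
```

Note that overfilled states (μ > c) are kept alive as long as removing weight at the backward rate could still beat the incumbent, exactly as in the two-sided bound above.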
5. Computational results

To test the actual performance of the presented BBMC algorithm, it has been implemented in C. The algorithm assumes that BBMC is in the standard form (3). We will consider four different instance types taken from the literature [7,9]. Each instance is tested with coefficients in the range R = 100 or R = 1000 and different numbers of classes k and sizes n_i:

· Uncorrelated instances. In each class we generate n_i items by choosing w_ij and p_ij randomly in [1, R]. Such instances are generally quite easy since a majority of the items will be dominated.

· Weakly correlated instances. In each class, w_ij is randomly distributed in [1, R] and p_ij is randomly distributed in [w_ij − 10, w_ij + 10], such that p_ij ≥ 1. These instances should reflect a natural correlation between profit and weight.

· Subset-sum instances. w_ij is randomly distributed in [1, R] and p_ij = w_ij. Subset-sum instances may be very difficult to solve since all upper bounds from linear relaxation return the value c. If a solution with objective value c is found, the process may however be terminated immediately.

· Zig-zag instances. Sinha and Zoltners [9] constructed some special instances which have very few dominated items. For each class i, n_i items (w'_j, p'_j) are constructed as in the uncorrelated case, and the profits and weights are ordered in increasing order. Finally we set w_ij = w'_j and p_ij = p'_j for j = 1, ..., n_i. These instances should be the most difficult to solve since there are no dominated items.

The cardinalities a_i in (1) are randomly chosen in [1, A], where A = 5 when n_i = 10, and A = 20 when n_i = 100. The capacity c is set to c = \sum_{i=1}^{k} (\underline{v}_i + \overline{v}_i)/2, where \underline{v}_i and \overline{v}_i are defined as in (2). All tests were run on a HP Visualize C200 (200 MHz), and the entries in the following tables are all average values of 100 instances. Small classes with n_i = 10 are considered in the upper part of the tables, while large classes with n_i = 100 are considered in the lower part.

First, Table 1 gives the average time for solving the continuous problem. Notice that the solution times grow linearly with the problem size, and that even problems with 1 000 000 variables are solved in less than 5 seconds. Thus the Dantzig–Wolfe decomposition for the continuous problem is very successful.
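The four instance families can be generated roughly as follows (our sketch; the author's exact generator, random number stream and rounding of the capacity formula may differ):

```python
import random

def generate_instance(k, ni, R, kind, A, seed=0):
    """Random BBMC instance in the style of Section 5.

    kind: 'uncorrelated', 'weakly', 'subset' or 'zigzag'.
    Returns (classes, bounds, capacity) with (profit, weight) items."""
    rng = random.Random(seed)
    classes, bounds = [], []
    for _ in range(k):
        ws = [rng.randint(1, R) for _ in range(ni)]
        if kind == 'uncorrelated':
            ps = [rng.randint(1, R) for _ in range(ni)]
        elif kind == 'weakly':
            ps = [max(1, w + rng.randint(-10, 10)) for w in ws]
        elif kind == 'subset':
            ps = list(ws)
        else:  # zig-zag: sort profits and weights independently
            ps = sorted(rng.randint(1, R) for _ in range(ni))
            ws = sorted(ws)
        classes.append(list(zip(ps, ws)))
        bounds.append(rng.randint(1, A))
    # capacity halfway between the lightest and heaviest a_i-selections
    cap = sum((sum(sorted(w for _, w in cl)[:a])
               + sum(sorted((w for _, w in cl), reverse=True)[:a])) // 2
              for cl, a in zip(classes, bounds))
    return classes, bounds, cap
```

All constraint types are taken as equalities here, matching the standard form (3) assumed by the tested implementation.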
Table 1
Time used for solving the continuous problem (seconds), where entries are rounded to two decimal digits. Average of 100 instances

k       n_i   Uncorrelated     Weakly corr.     Subset-sum       Zig-zag
              R=100   R=1000   R=100   R=1000   R=100   R=1000   R=100   R=1000
10      10    0.00    0.00     0.00    0.00     0.00    0.00     0.00    0.00
100     10    0.00    0.00     0.00    0.00     0.00    0.00     0.00    0.00
1000    10    0.03    0.03     0.04    0.04     0.02    0.02     0.04    0.04
10 000  10    0.38    0.37     0.47    0.45     0.24    0.24     0.47    0.48
10      100   0.00    0.00     0.00    0.00     0.00    0.00     0.00    0.00
100     100   0.03    0.03     0.04    0.03     0.01    0.01     0.03    0.04
1000    100   0.32    0.33     0.38    0.38     0.14    0.15     0.34    0.39
10 000  100   3.53    3.61     4.08    4.02     1.56    1.56     3.89    4.32
Next, Table 2 shows the average core size for each instance type. The core size is the number of classes which were transformed N_i → \bar N_i and enumerated in the dynamic programming algorithm for the MCKP. Most problems are solved with only about a dozen classes enumerated, meaning that a very limited number of the expensive transformations N_i → \bar N_i need to be performed. This demonstrates that the gradients are well suited for defining a core.

Finally, Table 3 gives the total computational times for solving the problems to integer
Table 2
Number of sets which have been enumerated (core size). Average of 100 instances

k       n_i   Uncorrelated     Weakly corr.     Subset-sum       Zig-zag
              R=100   R=1000   R=100   R=1000   R=100   R=1000   R=100   R=1000
10      10    3       3        4       7        1       1        4       5
100     10    6       12       6       11       1       2        6       16
1000    10    6       17       5       9        1       2        6       14
10 000  10    11      10       5       9        1       2        10      12
10      100   3       5        2       3        0       0        1       6
100     100   3       12       2       3        0       0        1       8
1000    100   4       8        2       3        0       0        2       6
10 000  100   6       10       2       4        0       0        2       8
Table 3
Total time used (seconds). Average of 100 instances

k       n_i   Uncorrelated     Weakly corr.     Subset-sum       Zig-zag
              R=100   R=1000   R=100   R=1000   R=100   R=1000   R=100   R=1000
10      10    0.00    0.00     0.00    0.01     0.00    0.01     0.00    0.00
100     10    0.01    0.02     0.01    0.02     0.00    0.01     0.01    0.02
1000    10    0.05    0.09     0.06    0.07     0.03    0.03     0.06    0.08
10 000  10    0.52    0.52     0.69    0.68     0.34    0.35     0.67    0.69
10      100   0.05    0.09     0.07    1.10     0.07    1.09     0.09    1.15
100     100   0.08    0.68     0.08    0.37     0.08    0.95     0.12    2.55
1000    100   0.49    0.62     0.57    0.86     0.25    1.24     0.58    2.38
10 000  100   4.81    5.05     5.76    6.18     2.11    3.07     5.69    7.59
Table 4
Time used by CPLEX for solving uncorrelated instances with n_i = 100, R = 1000^a

k      LP-primal   LP-dual     LP-barrier   Integer opt   Branch-and-bound
       (seconds)   (seconds)   (seconds)    (seconds)     (nodes)
10     0.03        0.04        0.09         2.96          1302
100    0.88        2.14        1.38         587.83        19 654^b
1000   72.21       316.70      27.26        8761.26       20 523^c

^a The first three columns give the solution time for solving the LP-relaxation by using primal simplex, dual simplex or interior point methods. The next two columns give the solution times and the number of branch-and-bound nodes generated for solving the problem to integer optimality.
^b Terminated with solution 0.00962% from optimum.
^c Terminated with solution 0.02185% from optimum.
optimality. Even very large instances are solved within 8 seconds, and the solution times are very stable for all sizes and types of instances.

A few experiments with CPLEX 5.0 [2] were run to compare the performance of the developed algorithm with that of a good LP/MIP solver. The first instance in each class of the uncorrelated instances with n_i = 100 and R = 1000 was run for different values of k. The obtained results are reported in Table 4, demonstrating that the specialized algorithm for BBMC is able to solve the continuous problem two orders of magnitude faster than the best of the LP algorithms. The integer problems seem to be very degenerate, since CPLEX is not able to find the optimal solution in reasonable time for large instances.

6. Conclusion

The BBMC is a very general model which has concrete applications in budgeting, manufacturing and packing/loading. In particular, it contains all "classic" knapsack problems as special cases. It has been shown that the BBMC is solvable in pseudopolynomial time O(c \sum_{i=1}^{k} n_i a_i + kc^2). For the continuous BBMC, Dantzig–Wolfe decomposition leads to an efficient division of the problem. A master problem solves an MCKP, while subproblems find extreme points of the transformed sets \bar N_i. This results in a polynomial algorithm running in O(n log P + n log W). Computational experiments have demonstrated that the continuous as well as the integer BBMC can be solved orders of magnitude faster than by using a general LP/MIP solver.

References

[1] E. Balas, E. Zemel, An algorithm for large zero–one knapsack problems, Operations Research 28 (1980) 1130–1154.
[2] CPLEX base system with barrier and mixed integer solver options, ILOG Inc., NV, USA, 1997.
[3] G.B. Dantzig, P. Wolfe, Decomposition principle for linear programs, Operations Research 8 (1960) 101–111.
[4] M.E. Dyer, An O(n) algorithm for the multiple-choice knapsack linear program, Mathematical Programming 29 (1984) 57–63.
[5] LEDA – Library of Efficient Datatypes and Algorithms, LEDA Software GmbH, Saarbrücken, Germany, 1997.
[6] S. Martello, P. Toth, Knapsack Problems: Algorithms and Computer Implementations, Wiley, Chichester, England, 1990.
[7] D. Pisinger, A minimal algorithm for the multiple-choice knapsack problem, European Journal of Operational Research 83 (1995) 394–410.
[8] D. Pisinger, Budgeting with bounded multiple-choice constraints, in: Proceedings of AIRO'96, Perugia, 17–20 September 1996.
[9] A. Sinha, A.A. Zoltners, The multiple-choice knapsack problem, Operations Research 27 (1979) 503–515.
[10] E. Zemel, An O(n) algorithm for the linear multiple choice knapsack problem and related problems, Information Processing Letters 18 (1984) 123–128.