
Information Processing Letters 73 (2000) 111–118

A PTAS for the Multiple Subset Sum Problem with different knapsack capacities

Alberto Caprara a,∗, Hans Kellerer b,1, Ulrich Pferschy b,2

a DEIS, University of Bologna, viale Risorgimento 2, I-40136 Bologna, Italy
b University of Graz, Department of Statistics and Operations Research, Universitätsstr. 15, A-8010 Graz, Austria

Received 2 July 1999; received in revised form 14 December 1999
Communicated by T. Lengauer

Abstract

We present a PTAS for the Multiple Subset Sum Problem (MSSP) with different knapsack capacities. This is the selection of items from a given ground set and their assignment to a given number of knapsacks such that the sum of the item weights in every knapsack does not exceed its capacity and the total sum of the weights of the packed items is as large as possible. Our result generalizes the PTAS for the special case in which all knapsack capacities are identical [1]. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Multiple Subset Sum Problem; PTAS; Algorithms

1. Introduction

This paper deals with the Multiple Subset Sum Problem (MSSP), which is a generalization of the classical subset sum problem and at the same time a relevant special case of the multiple knapsack problem, namely the case where the profit and weight of each item coincide. MSSP is formally defined as follows. We are given a set N := {1, . . . , n} of items, each item i having a positive integer weight wi, and a set M := {1, . . . , m} of knapsacks, each knapsack j having a positive integer capacity cj. The objective is to select a subset of items of maximum total weight that can be packed

in the knapsacks. The problem can be formulated as the following integer linear program:

maximize   ∑_{j∈M} ∑_{i∈N} wi xij                        (1)

subject to

∑_{i∈N} wi xij ≤ cj,   j ∈ M,                            (2)
∑_{j∈M} xij ≤ 1,   i ∈ N,                                (3)
xij ∈ {0, 1},   i ∈ N, j ∈ M.                            (4)
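To make the model concrete, here is a small brute-force reference solver for (1)–(4); the function name and the enumeration approach are ours (an illustration only, exponential in n), not an algorithm from this paper.

```python
# Brute-force check of the integer program (1)-(4): every item is assigned
# to one of the m knapsacks or left out, and capacity-feasible assignments
# are scored by their total packed weight.
from itertools import product

def mssp_opt(weights, capacities):
    """Return the optimal MSSP value of a (tiny) instance by enumeration."""
    m = len(capacities)
    best = 0
    # assignment[i] = j < m packs item i into knapsack j; j = m leaves it out
    for assignment in product(range(m + 1), repeat=len(weights)):
        loads = [0] * m
        for w, j in zip(weights, assignment):
            if j < m:
                loads[j] += w
        if all(load <= c for load, c in zip(loads, capacities)):
            best = max(best, sum(loads))
    return best
```

For example, with weights {4, 3, 2} and capacities {5, 4} all items can be packed (3 + 2 into the first knapsack, 4 into the second), for a value of 9.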

∗ Corresponding author. Email: [email protected].
1 Email: [email protected].
2 Email: [email protected].

0020-0190/00/$ – see front matter © 2000 Elsevier Science B.V. All rights reserved. PII: S0020-0190(00)00010-7

Let cmax and cmin denote the maximum and minimum knapsack capacities, respectively. Without loss of generality we assume

max_{i∈N} wi ≤ cmax,

otherwise the largest item can be removed from the problem, and

cmin ≥ min_{i∈N} wi,

otherwise the smallest knapsack can be removed. The load of a knapsack j, denoted by λj, is the overall weight of the items packed in j. MSSP is well known to be strongly NP-hard and has several applications; see, e.g., [1,2].

Generally, we will denote by z∗ the optimal solution value and by zH the value of the solution returned by some heuristic algorithm H. Consider a value ε ∈ (0, 1). We say that H is a (1 − ε)-approximation algorithm for a maximization problem if zH ≥ (1 − ε)z∗ for all instances. A Polynomial-Time Approximation Scheme (PTAS) for a maximization problem is an algorithm taking as input a problem instance and a value ε ∈ (0, 1), delivering a solution of value zH ≥ (1 − ε)z∗, and running in time polynomial in the size of the encoded instance. In this paper we derive a PTAS for MSSP, thus generalizing the PTAS for the case in which all knapsack capacities are equal, presented in [1]. This is the best approximation one could hope for, as the problem is NP-hard in the strong sense by reduction from the 3-partitioning problem. A PTAS for the Multiple Knapsack Problem in which all capacities are equal but item weights and profits are distinct is presented in [4].

Given a set S ⊆ N of items we let w(S) := ∑_{i∈S} wi, and given a set T ⊆ M of knapsacks we let c(T) := ∑_{j∈T} cj. In the following section we will need a lower bound on z∗ as a function of c(M). Therefore, we now present a simple greedy algorithm called greedy reduction that either finds a solution of value at least c(M)/2, proving z∗ ≥ c(M)/2, or reduces the size of the relevant instance by packing some knapsacks in a provably optimal way. The application of this procedure will allow us to assume z∗ ≥ c(M)/2.

Greedy reduction considers the items sorted in nonincreasing order of weights w1 ≥ w2 ≥ · · · ≥ wn and the knapsacks sorted in nonincreasing order of capacities c1 ≥ c2 ≥ · · · ≥ cm. The items are simply assigned to the knapsacks by a next-fit policy. In the beginning only the largest knapsack is open; all other knapsacks are closed. For every item (in the given order) we try to pack it into the open knapsack. If this is impossible, the open knapsack is closed and the next knapsack is opened. If possible, the item is put into this new open knapsack, otherwise it remains unpacked.

Denote by λj the load of knapsack j at the end of the procedure. If λj ≥ cj/2 holds for all j, one has clearly shown that z∗ ≥ c(M)/2. Otherwise, since items are packed in decreasing order of weight, the fact that the load of a knapsack j is less than half its capacity implies that one ran out of items while knapsacks j + 1, . . . , m were not used at all. For this situation we have to consider two cases. In the first case, all items are packed in knapsacks 1, . . . , j and we clearly have an optimal solution. In the second case, some items were left unpacked. Let i denote the unpacked item with minimum weight and l the knapsack with largest capacity such that cl < wi, noting that l ≤ j. In this case, all items i + 1, . . . , n are packed into knapsacks l, . . . , j, and none of the items 1, . . . , i fits in these knapsacks. This means that the packing of these knapsacks is optimal and that knapsacks j + 1, . . . , m are useless. Hence, we can remove items i + 1, . . . , n along with knapsacks l, . . . , j, j + 1, . . . , m, obtaining a smaller instance for which the optimal solution is at least half the sum of the knapsack capacities. Summarizing, we have shown the following.

Lemma 1. For a given MSSP instance, greedy reduction either shows that z∗ ≥ c(M)/2, or optimally packs the smallest items into the smallest knapsacks and returns a reduced instance (possibly empty) for which z∗ ≥ c(M)/2 holds.

The paper, and also our PTAS, is organized as follows. In Section 2 we derive a PTAS for a special case of MSSP, namely the capacity dense MSSP, where the ratio between the maximum and minimum capacity of a knapsack is bounded by a constant. Using this algorithm, we construct in Section 3 a polynomial-time (1 − ε)-approximation algorithm for the more general capacity grouped MSSP, which is an MSSP where the knapsack capacities can be partitioned such that a bounded number of capacity dense subproblems arises. Finally, in Section 4 the PTAS for the general MSSP is derived by artificially generating a number of capacity grouped instances and taking the best solution obtained by applying the (1 − ε)-approximation algorithm to each of these instances.
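The next-fit pass at the heart of greedy reduction can be sketched as follows. This is our illustrative Python reconstruction (not the authors' code); it assumes at least one knapsack and that weights and capacities are already sorted in nonincreasing order.

```python
def next_fit(weights, capacities):
    """Next-fit packing: returns (loads, unpacked weights).

    Only one knapsack is open at a time; an item that does not fit closes
    the open knapsack and is then tried against the next one.
    """
    loads = [0] * len(capacities)
    unpacked = []
    j = 0  # index of the currently open knapsack
    for w in weights:
        if loads[j] + w > capacities[j] and j + 1 < len(capacities):
            j += 1                    # close the open knapsack, open the next
        if loads[j] + w <= capacities[j]:
            loads[j] += w             # the item fits into the open knapsack
        else:
            unpacked.append(w)        # the item remains unpacked
    return loads, unpacked
```

If every returned load is at least half its capacity, the packed solution itself certifies z∗ ≥ c(M)/2.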


2. A PTAS for the capacity dense MSSP

In this section a PTAS is given for the capacity dense MSSP, which is the special case of MSSP in which the ratio between the maximum and minimum capacity of a knapsack is bounded by a constant. This first PTAS will be referred to as PTASCD and uses the main ideas of the PTAS for the case with equal capacities described in [1].

Let ε be the required accuracy. We assume ε < 1/2, as a worst-case ratio of 1/2 is achieved by the greedy reduction algorithm described in the Introduction. As we are looking for a PTAS, we will consider ε to be a constant value. In order to have a consistent notation throughout the paper, let ℓ be the smallest integer such that cmin > ε^ℓ cmax. Note that the capacity dense requirement ensures that ℓ is bounded by a constant. Define ε̃ := ε/5, and partition the item set N into the set L of large items,

L := {i ∈ N: wi > ε̃^{ℓ+1} cmax},

and the set S of small items,

S := {i ∈ N: wi ≤ ε̃^{ℓ+1} cmax}.

The next lemma shows that every polynomial-time (1 − ε̃)-approximation algorithm for instances with large items only can be extended to a polynomial-time (1 − ε̃)-approximation algorithm for general instances by adding the small items with a simple greedy procedure.

Lemma 2. Any polynomial-time (1 − ε̃)-approximation algorithm for MSSP instances with large items only yields a polynomial-time (1 − ε̃)-approximation algorithm for general MSSP instances.

Proof. Denote by H the (1 − ε̃)-approximation algorithm for instances with large items only and apply H to the set L of large items in a general instance. Let zL∗ denote the value of the optimal solution for the instance defined by L, and zL^H the value of the solution computed by H. By assumption, zL^H ≥ (1 − ε̃)zL∗. After applying H, simply assign the small items to the knapsacks in a greedy way, i.e., each small item is packed, if possible, into any knapsack in which it fits. If there are items of S which were not packed by this procedure, each knapsack j ∈ M has a load of at least cj − ε̃^{ℓ+1} cmax ≥ (1 − ε̃)cj and we are done.


Hence, assume that all small items are packed into the knapsacks and let zH denote the value of the heuristic solution obtained. The optimal solution value z∗ satisfies z∗ ≤ zL∗ + w(S), and therefore

zH = zL^H + w(S) ≥ (1 − ε̃)zL∗ + w(S) ≥ (1 − ε̃)z∗. □
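The greedy completion step used in this proof can be sketched as follows (our illustrative code with hypothetical names, not the paper's; `loads` is the packing computed for the large items and is updated in place).

```python
def pack_small_items(small_weights, capacities, loads):
    """Put each small item into any knapsack with enough residual capacity.

    Returns the weights of the small items that fit nowhere; if this list is
    nonempty, every knapsack j is filled to within one small-item weight of
    c_j, which is the case settled first in the proof of Lemma 2.
    """
    leftover = []
    for w in small_weights:
        for j, c in enumerate(capacities):
            if loads[j] + w <= c:
                loads[j] += w   # first knapsack where the item fits
                break
        else:
            leftover.append(w)  # the item fits into no knapsack
    return leftover
```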

Lemma 2 allows one to initially get rid of the small items, which are reconsidered only at the end of the algorithm. At the same time, one can consider only the large items when the performance of an algorithm is analyzed.

After having (temporarily) removed the small items, we perform the following preprocessing of the large items. We partition the item set L into subsets Ij, j = 1, . . . , ⌈1/ε̃^{ℓ+1}⌉ − 1, each containing the items with weight in (j ε̃^{ℓ+1} cmax, (j + 1) ε̃^{ℓ+1} cmax]. Let

σj := ⌈1/(j ε̃^{ℓ+1})⌉ − 1.

Now the reduced item set R is generated in the following way. If |Ij| ≤ 2mσj, we select all items in Ij; otherwise we select only the mσj largest and the mσj smallest and put them into R. Note that

r := |R| ≤ ∑_{j=1}^{⌈1/ε̃^{ℓ+1}⌉−1} 2mσj = 2m ∑_{j=1}^{⌈1/ε̃^{ℓ+1}⌉−1} (⌈1/(j ε̃^{ℓ+1})⌉ − 1) ≤ 2m (1/ε̃^{ℓ+1}) (ln(1/ε̃^{ℓ+1}) + 1),   (5)

and hence r is in O(m). The following lemma allows us to consider only the items in R in our PTAS. Let zL∗ and zR∗ denote the optimal solution values of MSSP for the instances corresponding to L and R, respectively.

Lemma 3. zR∗ ≥ (1 − ε̃)zL∗.

Proof. For j = 1, . . . , ⌈1/ε̃^{ℓ+1}⌉ − 1, we have (σj + 1) j ε̃^{ℓ+1} cmax ≥ cmax, i.e., each knapsack can contain no more than σj items from Ij, as the weight of each item in Ij is strictly larger than j ε̃^{ℓ+1} cmax. Consider an optimal solution for instance L with a knapsack l with load λl containing some item i ∈ Ij \ R. Then there exists at least one item is among the mσj smallest and one item ib among the mσj largest in Ij which are both unpacked in this solution. If exchanging i with ib yields a feasible solution, perform this exchange, which does not decrease the load of knapsack l. Otherwise, exchange i with is: as λl − wi + wib > cl, the new load will be

λl − wi + wis = λl − wi + wib − wib + wis > cl − ε̃^{ℓ+1} cmax ≥ (1 − ε̃)cl,

as wib − wis < ε̃^{ℓ+1} cmax. Applying these exchanges as long as items in L \ R are packed clearly yields a solution for instance R whose value is at least (1 − ε̃)zL∗, since, after each exchange, the load of the knapsack l involved is either not decreased or at least equal to (1 − ε̃)cl. □

We now show how to find a near-optimal solution for the instance defined by R. First of all, we apply the greedy reduction procedure of Section 1. This either finds an optimal solution and we are done, or shows that the optimal solution value is at least c(M)/2, possibly after reducing the instance. In the latter case, any approximation guarantee for the reduced instance applies to the original instance as well. For notational convenience, even after the possible reduction we let R and m denote the set of items and the number of knapsacks, respectively.

We distinguish between two cases, depending on the relative values of m and ε̃. If m ≤ 3/ε̃^{ℓ+1}, then by (5) the number r of items as well as the number of knapsacks are bounded by a constant, and an optimal solution can be found in constant time by complete enumeration. In the remainder, we consider the more difficult case m > 3/ε̃^{ℓ+1}, where we group the items in R as follows. Define

k := ⌊m ε̃^{ℓ+1}⌋

and let p and q be such that r = pk + q and 1 ≤ q ≤ k. Note that k ≥ 3. It is easy to check that

p ≤ (3/ε̃^{2ℓ+2}) (ln(1/ε̃^{ℓ+1}) + 1)   (6)

and hence p is bounded by a constant. If (6) were not true, we would have from (5)

m ε̃^{ℓ+1} − 1 ≤ k ≤ r/p < [2m (1/ε̃^{ℓ+1}) (ln(1/ε̃^{ℓ+1}) + 1)] / [(3/ε̃^{2ℓ+2}) (ln(1/ε̃^{ℓ+1}) + 1)] = (2/3) m ε̃^{ℓ+1},

which is a contradiction for m ε̃^{ℓ+1} > 3.

Denote by 1, . . . , r the items in R and assume that they are sorted in nondecreasing order of weights, i.e., w1 ≤ w2 ≤ · · · ≤ wr. Partition R into the p + 1 subsets

Ri := {ik + 1, . . . , (i + 1)k},   i = 0, . . . , p − 1,

and Rp := {pk + 1, . . . , r}. Define the MSSP instance I with k items of weight vi := w(i+1)k for i = 0, . . . , p − 1, and q items of weight vp := wr. Because the number of distinct item weights is constant, this instance can be solved to optimality in polynomial time as follows.

We call a feasible knapsack filling a vector t with p + 1 nonnegative integer entries t0, . . . , tp, such that ti ≤ k for i = 0, . . . , p − 1, tp ≤ q, and ∑_{i=0}^{p} ti vi ≤ cmax. Let t(1), t(2), . . . , t(f) be the complete list of all feasible knapsack fillings, where f denotes the cardinality of this list, and let t(j) = (t0(j), t1(j), . . . , tp(j)) denote the jth vector in this list. Moreover, let λj := ∑_{i=0}^{p} ti(j) vi denote the sum of the item weights corresponding to the feasible knapsack filling t(j). By a rough estimate, f can be bounded by ⌊1/ε̃^{ℓ+1}⌋^{p+1} and hence by a constant, because in any knapsack there can be at most ⌊1/ε̃^{ℓ+1}⌋ items with weight vi, i = 0, . . . , p.

Consider the following integer linear program, which can be used to solve the MSSP instance defined above. Assume the knapsack fillings are sorted by nonincreasing λj values, and, for j = 1, . . . , f, let mj be the number of knapsacks l such that cl ≥ λj. The integer variable xj expresses the number of knapsacks packed with the items in knapsack filling j.

maximize   ∑_{j=1}^{f} λj xj                               (7)


subject to

∑_{j=1}^{f} ti(j) xj ≤ k,   i = 0, . . . , p − 1,          (8)
∑_{j=1}^{f} tp(j) xj ≤ q,                                  (9)
∑_{l=1}^{j} xl ≤ mj,   j = 1, . . . , f,                   (10)
xj ≥ 0 integer,   j = 1, . . . , f.                        (11)

Constraints (8) and (9) take into account the fact that at most k items are available for each weight (in fact, q items for the largest weight). Constraints (10) ensure that the number of knapsacks packed with load at least λj does not exceed mj. As the number f of variables is constant, this integer linear program can be solved in polynomial time by applying Lenstra's algorithm [5].

The solution obtained for instance I is converted into a solution for the item set R (and hence for the item set L) by replacing the si, say, items of weight vi in the solution, i = 0, . . . , p, by the items (i + 1)k, (i + 1)k − 1, . . . , (i + 1)k − si + 1. Denoting the value of this solution by zL^H, the next lemma evaluates its quality.

Lemma 4. zL^H ≥ (1 − 4ε̃)zR∗.

Proof. Let t1 denote the value of an optimal solution of the MSSP instance I with a constant number of distinct item weights. Let s be the cardinality of this solution and let y1, . . . , ys be the weights of the items packed, in nondecreasing order. Analogously, let x1, . . . , xs be the weights of the items packed by the heuristic solution for the item set R, again in nondecreasing order. For notational convenience, let v−1 := 0 throughout the proof.

Observe that x_{j+k} ≥ yj for j = 1, . . . , s − k. This ensures that t1 − zL^H is not larger than k times the largest weight in the instance. Hence,

t1 − zL^H ≤ k vp ≤ k cmax ≤ m ε̃^{ℓ+1} cmax ≤ ε̃ c(M) ≤ 2ε̃ zR∗,

as zR∗ ≥ c(M)/2 after the application of greedy reduction to R.

Now consider the optimal solution for instance R, and let ri∗ be the number of items in Ri packed by this solution for i = 0, . . . , p. Furthermore, let t2 be the value of the optimal solution of the instance in which there are exactly ri∗ items of weight vi−1 for i = 0, . . . , p. Observing that the weight of each item in Ri is not smaller than vi−1 for i = 0, . . . , p, it follows that in this latter solution all the items are packed. Moreover, by definition, t1 ≥ t2, as t1 is the value of the solution of an instance defined by a wider item set.

Let r̄ := ∑_{i=0}^{p} ri∗, and, as above, let y1, . . . , yr̄ and x1, . . . , xr̄ denote the weights (in nonincreasing order) of the items packed by the solutions of value t2 and zR∗, respectively. One has y_{j+k} ≥ xj for j = 1, . . . , r̄ − k, as ri∗ ≤ k for i = 0, . . . , p. Hence, by the same considerations as above, the relation

zR∗ − t2 ≤ 2ε̃ zR∗

holds. Therefore we have

zL^H ≥ t1 − 2ε̃ zR∗ ≥ t2 − 2ε̃ zR∗ ≥ (1 − 2ε̃)zR∗ − 2ε̃ zR∗ = (1 − 4ε̃)zR∗. □
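The enumeration of feasible knapsack fillings used above is small because p and the per-class bounds are constants. A hedged sketch (function name ours), assuming the class weights v are given in nondecreasing order with v[p] the weight of the q-item class:

```python
# Enumerate all vectors t = (t_0, ..., t_p) with t_i <= k (t_p <= q) and
# total weight sum(t_i * v_i) at most c_max: the feasible knapsack fillings.
from itertools import product

def feasible_fillings(v, k, q, c_max):
    bounds = [min(k, c_max // w) for w in v[:-1]] + [min(q, c_max // v[-1])]
    return [t for t in product(*(range(b + 1) for b in bounds))
            if sum(ti * vi for ti, vi in zip(t, v)) <= c_max]
```

For example, with weights v = (3, 5), k = 2, q = 1 and cmax = 6, the feasible fillings are (0, 0), (1, 0), (2, 0) and (0, 1).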

Finally, the solution obtained for the item set L is completed by adding the small items in a greedy way as indicated by Lemma 2. We summarize the various steps of Algorithm PTASCD in Fig. 1.

Let zL^H denote the value of the heuristic solution obtained at the end of Phase 2 (before packing the small items), and let zL∗ be the value of the optimal solution for the instance defined by the large items.

Lemma 5. zL^H ≥ (1 − ε)zL∗.

Proof. From Lemmas 3 and 4,

zL^H ≥ (1 − 4ε̃)zR∗ ≥ (1 − 4ε̃)(1 − ε̃)zL∗.

Hence, zL^H ≥ (1 − ε)zL∗ as long as

(1 − 4ε̃)(1 − ε̃) ≥ (1 − ε),

which is satisfied by the choice ε̃ := ε/5. □

Theorem 6. Algorithm PTASCD is a PTAS for the capacity dense MSSP.

Proof. The approximation guarantee follows from Lemmas 2 and 5. The running time is clearly polynomial by the discussion above. □


Algorithm PTASCD

Initialization: Given the required accuracy ε, compute ε̃ and partition the item set into the sets S and L.

Phase 1: Apply the item preprocessing to set L, obtaining set R. Apply greedy reduction to set R: if an optimal solution is found then go to Phase 3; otherwise let R denote the reduced item set and m the reduced number of knapsacks.

Phase 2: If m ≤ 3/ε̃^{ℓ+1} then compute the optimal solution for set R by complete enumeration. Else group the items in set R, solve to optimality the resulting MSSP instance (with a constant number of distinct item weights), and construct a feasible solution for the item set R from the optimal solution obtained.

Phase 3: Pack the items in S in a greedy way.

Fig. 1. Outline of Algorithm PTASCD.

3. A (1 − ε)-approximation algorithm for the capacity grouped MSSP

In this section we present an approximation algorithm for a generalization of the capacity dense MSSP, which will be used in the next section to derive a PTAS for the general MSSP. Given the required accuracy ε, partition the set M of knapsacks according to capacity intervals into sets M1, M2, . . . , where

Mℓ := {j ∈ M: cj ∈ (ε^ℓ cmax, ε^{ℓ−1} cmax]}.

Let k be the maximum number of consecutive nonempty sets in the partition, i.e.,

k = max{d: ∃ℓ such that Mℓ−1 = ∅, Mℓ ≠ ∅, Mℓ+1 ≠ ∅, . . . , Mℓ+d−1 ≠ ∅, Mℓ+d = ∅},

where we define M0 := ∅. We say that the MSSP instance is capacity grouped (at ε) if k is bounded by a constant. Note that the property of being capacity grouped clearly depends on ε; hence the (1 − ε)-approximation algorithm that we present below leads to a PTAS only for instances which are capacity grouped at ε for every ε > 0. For instance, this gives a PTAS for the case in which the number of different knapsack capacities is bounded by a constant.

Given a capacity grouped instance, we partition the knapsacks as follows. Remove from M1, M2, . . . all the empty sets, and aggregate the remaining sets into groups so that sets with consecutive indices are in the same group. Formally, the sets Mi and Mj, i < j, are in the same group if and only if Mk ≠ ∅ for k = i, . . . , j. Note that, by definition of capacity grouped, each group is formed by the union of at most k sets. Hence, the ratio between the minimum and maximum capacity in each group is at least ε^k, and we can define large and small items for each group as in Section 2. Let the groups, arranged in decreasing order of capacities, be G1, G2, . . . , Gt, noting that t is in O(m).

The reason for partitioning the set of knapsacks into groups is that the large items for group Gp do not fit into the knapsacks in groups Gp+1, . . . , Gt and are small items for the knapsacks in groups Gp−1, . . . , G1. This allows a separate handling of the large items in each group, as illustrated below.

Clearly, the groups can be generated efficiently by finding, for each knapsack j ∈ M, the value i(j) such that j ∈ Mi(j). Then, one can consider the knapsacks in decreasing order of capacities and, for each knapsack j ∈ M, put it into the same group as knapsack j − 1, say Gp, if i(j) − i(j − 1) ≤ 1, and into a new group Gp+1 otherwise. This simple observation


handles the fact that the number of sets M1, M2, . . . is in principle infinite.

We now show how PTASCD, illustrated in the previous section, yields a polynomial-time (1 − ε)-approximation algorithm for the capacity grouped case, referred to as H^ε_CG. We take PTASCD almost as a black box, even if we use the fact that it first computes a near-optimal solution for the large items and then packs the small items greedily.

H^ε_CG starts from Gt, the group with the smallest capacities, and applies PTASCD to it, considering all items that fit into the largest knapsack in the group. Then, PTASCD is applied to the knapsacks in group Gt−1, considering all the items that fit into the largest knapsack and were not packed before in some knapsack of group Gt. The procedure is iterated, applying PTASCD to the knapsacks in Gt−2, . . . , G1.

Theorem 7. Algorithm H^ε_CG is a polynomial-time (1 − ε)-approximation algorithm for the capacity grouped MSSP.

Proof. H^ε_CG calls PTASCD t, i.e., O(m), times, hence its running time is polynomial. We will show by induction that the produced solution is within ε of the optimal one. In fact, we prove that zp^H ≥ (1 − ε)zp∗ for p = t, t − 1, . . . , 1, where zp^H is the value of the heuristic solution produced for the knapsacks in Gt ∪ Gt−1 ∪ · · · ∪ Gp, and zp∗ is the optimal solution value of the instance with these knapsacks only. The basis of the induction, namely zt^H ≥ (1 − ε)zt∗, follows immediately from Theorem 6.

Now suppose z_{p+1}^H ≥ (1 − ε)z_{p+1}∗. Consider the packing of the small items in PTASCD applied to group Gp. If all small items are packed, then

zp^H = zLp^H + w(Sp),

where Sp denotes the set of small items for group Gp and zLp^H is the contribution of the large items for group Gp. For the optimal solution, we can separate the contributions of the large and small items for group Gp, writing zp∗ = zLp∗ + zSp∗. The fact that the large items can be packed only in the knapsacks in Gp, together with Lemma 5, implies that zLp^H ≥ (1 − ε)zLp∗, whereas clearly w(Sp) ≥ zSp∗. Hence, zp^H ≥ (1 − ε)zp∗.


In the other case, in which not all small items are packed, we have

zp^H ≥ (1 − ε)c(Gp) + z_{p+1}^H.

Separating the contributions to the optimal solution value of the knapsacks in Gp and in G>p := Gt ∪ · · · ∪ Gp+1, we can write zp∗ = z∗(Gp) + z∗(G>p). Clearly, (1 − ε)c(Gp) ≥ (1 − ε)z∗(Gp) and

z_{p+1}^H ≥ (1 − ε)z_{p+1}∗ ≥ (1 − ε)z∗(G>p),

where the first inequality is the inductive hypothesis above. Hence, zp^H ≥ (1 − ε)zp∗ and the proof is complete. □
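The grouping of knapsacks into capacity dense groups used throughout this section can be sketched as follows. This is an illustrative reconstruction (function and helper names are ours); the helper computes the index ℓ with cj ∈ (ε^ℓ cmax, ε^{ℓ−1} cmax], and capacities lying exactly on an interval boundary may be sensitive to floating-point rounding.

```python
import math

def group_capacities(capacities, eps):
    """Partition the capacities (given in any order) into groups G_1, G_2, ..."""
    c_max = max(capacities)

    def set_index(c):
        # l with c in (eps^l * c_max, eps^(l-1) * c_max]; l = 1 for c = c_max
        return math.floor(math.log(c / c_max, eps)) + 1

    caps = sorted(capacities, reverse=True)
    groups = [[caps[0]]]
    for prev, c in zip(caps, caps[1:]):
        if set_index(c) - set_index(prev) <= 1:
            groups[-1].append(c)   # same or adjacent set M_l: same group
        else:
            groups.append([c])     # an empty set in between: new group
    return groups
```

With capacities {100, 90, 50, 4} and ε = 1/2, the sets M1 = {100, 90}, M2 = {50} and M5 = {4} yield the two groups {100, 90, 50} and {4}.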

4. A PTAS for the general MSSP

In this section we show how to handle general instances of MSSP. The main idea is to force an instance to be capacity grouped for the required accuracy ε by removing some knapsacks, so that the optimal solution values before and after the removal of the knapsacks are close to each other. This yields the PTAS for the general MSSP, hereafter called PTASGEN, that uses H^ε_CG as a black box.

PTASGEN considers the partitioning of the knapsacks into sets M1, M2, . . . defined in Section 3. Given the required accuracy ε, let ε̄ := ε/2. For each value q = 1, 2, . . . , ⌈1/ε̄⌉, PTASGEN removes all knapsacks in

M(q) := Mq ∪ Mq+⌈1/ε̄⌉ ∪ Mq+2⌈1/ε̄⌉ ∪ · · · = ∪_{i=0}^{∞} Mq+i⌈1/ε̄⌉.

The instance obtained in this way is capacity grouped at ε̄ with k ≤ ⌈1/ε̄⌉, and H^ε̄_CG is applied to it. Among all solutions computed for the different values of q, the best one is given as output.

Theorem 8. Algorithm PTASGEN is a PTAS for the general MSSP.


Proof. The running time of PTASGEN is polynomial, as it calls H^ε̄_CG ⌈1/ε̄⌉ times. In order to show that the solution is within ε of the optimum, we show that there exists a value q ∈ {1, 2, . . . , ⌈1/ε̄⌉} such that z∗(q) ≥ (1 − ε̄)z∗, where z∗(q) is the value of the optimal solution after the knapsacks in M(q) have been removed. Then the theorem follows from the fact that

zH(q) ≥ (1 − ε̄)z∗(q) ≥ (1 − ε̄)² z∗ ≥ (1 − ε)z∗,

where zH(q) is the value of the solution returned by H^ε̄_CG.

Let γq be the contribution of the knapsacks in M(q) to the optimal solution value for q = 1, . . . , ⌈1/ε̄⌉. Hence, we have z∗ = γ1 + γ2 + · · · + γ⌈1/ε̄⌉. Let qmin be such that

γqmin = min_{q=1,...,⌈1/ε̄⌉} γq.

Clearly, γqmin ≤ ε̄ z∗. Moreover,

z∗(qmin) ≥ (∑_{q=1}^{⌈1/ε̄⌉} γq) − γqmin,

as a feasible solution for the instance where the knapsacks in M(qmin) have been removed is given by the packing of the remaining knapsacks in the optimal solution. Overall, we complete the proof by observing

z∗(qmin) ≥ z∗ − γqmin ≥ (1 − ε̄)z∗. □
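The construction of the ⌈1/ε̄⌉ candidate instances can be sketched as follows (an illustrative fragment with names of our choosing; `set_indices[j]` is the index ℓ with knapsack j in Mℓ).

```python
import math

def candidate_instances(set_indices, eps_bar):
    """For q = 1, ..., ceil(1/eps_bar), yield (q, surviving knapsacks), where
    the knapsacks in M(q) = M_q ∪ M_{q+d} ∪ M_{q+2d} ∪ ... are removed,
    with d = ceil(1/eps_bar)."""
    d = math.ceil(1.0 / eps_bar)
    for q in range(1, d + 1):
        yield q, [j for j, l in enumerate(set_indices) if (l - q) % d != 0]
```

Each surviving instance contains no run of more than d − 1 consecutive nonempty sets, so it is capacity grouped at ε̄.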

Acknowledgement

The first author was partially supported by CNR and MURST, Italy.

References

[1] A. Caprara, H. Kellerer, U. Pferschy, The Multiple Subset Sum Problem, Technical Report, Faculty of Economics, University of Graz, December 1998.
[2] M. Dawande, J. Kalagnanam, P. Keskinocak, R. Ravi, F.S. Salman, Approximation algorithms for the multiple knapsack problem with assignment restrictions, IBM Research Report RC 21331, IBM Research Division, T.J. Watson Research Center, Yorktown Heights, NY, 1998.
[3] W. Fernandez de la Vega, G.S. Lueker, Bin packing can be solved within 1 + ε in linear time, Combinatorica 1 (1981) 349–355.
[4] H. Kellerer, A polynomial approximation scheme for the multiple knapsack problem, in: Proc. APPROX 99, Lecture Notes in Computer Science, Springer, Berlin, 1999.
[5] H.W. Lenstra, Integer programming with a fixed number of variables, Math. Oper. Res. 8 (1983) 538–548.