An AFPTAS for variable sized bin packing with general activation costs

Leah Epstein, Department of Mathematics, University of Haifa, Haifa, Israel
Asaf Levin, Faculty of Industrial Engineering and Management, The Technion, Haifa, Israel

Article history: Received 11 August 2014; Received in revised form 25 November 2015; Accepted 23 July 2016.
Keywords: Bin packing; Asymptotic approximation schemes; Scheduling

Abstract. Motivated by issues of minimizing energy consumption, we study variable sized bin packing with general costs. In this problem we are given an unlimited supply of bins of different types. A bin of each type has a size and a cost. A set of items is given, where each item has a size in (0, 1]. The goal is to assign the items to bins, so that the total cost is minimized. It is allowed to use multiple instances of any of the bin types, but this counts towards the total cost. The problem is general as the cost of a bin has no relation to its size. We design an AFPTAS for the problem by introducing new reduction methods and separation techniques, thus providing new insights into this interesting optimization problem. Furthermore, we use this AFPTAS to provide an AFPTAS for the bin packing with bin utilization cost problem.

1. Introduction

We consider the following optimization problem, which arises in scheduling of jobs with a common due-date in data centers, where we would like to minimize the total activation cost of the used machines (see [22] for a discussion regarding this emerging field of optimization goals). In this scenario, we are given an unlimited supply of machines of r different types. A machine of type i has a speed b_i ≤ 1 associated with it and its cost is c_i. The cost of a machine can be regarded as its activation cost, or the total amount of energy that is used to turn it on. A set of n jobs {1, 2, ..., n} is given, where job j has a processing time requirement (also called size) 0 < s_j ≤ 1. All jobs have a common due-date of 1, and no late jobs are allowed (i.e., all jobs must be completed by the due-date). The goal is to assign the jobs (non-preemptively) to machines, so that the total cost of used (activated) machines is minimized.

Problem definition. In what follows, we use the terminology of bin packing, as previous studies of the problem and related problems use this terminology. We consider the following variant of one dimensional bin packing [12], which is a natural generalization of both classical bin packing and variable sized bin packing. The input consists of a set of bin types and a set of items. We are given an infinite supply of bins of r types whose sizes are denoted by b_r < ··· < b_1 = 1, and we let B = {b_1, ..., b_r}. Input items have rational sizes in (0, 1] and are to be partitioned into subsets. The set of input items is denoted by {1, 2, ..., n}. The size of item j is denoted by s_j. Each subset S in the partition has to be assigned (packed) to some bin type i, such that the set of items fits into an instance of this bin type, i.e., ∑_{j∈S} s_j ≤ b_i. A bin type i has a cost c_i associated with it. We assume c_1 = 1, and use the conventions b_{r+1} = c_{r+1} = 0.

The cost of a solution is the sum of the costs of the used bins (taking multiple subsets with the same bin type into account). That is, if the subsets are S_1, ..., S_k, and the subset of index ℓ is packed into a bin of type i_ℓ (for 1 ≤ ℓ ≤ k), we get the cost ∑_{ℓ=1}^{k} c_{i_ℓ}. The goal is to find a feasible solution whose cost is minimized. We call this problem Generalized cost variable sized bin packing, and denote it by GCVS. This bin packing problem indeed formulates our scheduling problem, where a machine of type i and speed b_i corresponds to a bin of size b_i, and the jobs correspond to items that need to be packed in the bins. Note that the cost of a bin type is equal to the activation cost of the corresponding machine type.

Performance measures. Let A be an algorithm, and let A denote its cost as well. The cost of an optimal (offline) algorithm is denoted by opt. The asymptotic approximation ratio for an algorithm A is defined to be the infimum R ≥ 1 such that for any input, A ≤ R · opt + c, where c is independent of the input. If we enforce c = 0, then R is called the absolute approximation ratio. An asymptotic polynomial time approximation scheme is a family of approximation algorithms, such that for every ε > 0, the family contains a polynomial time algorithm with an asymptotic approximation ratio of 1 + ε. We abbreviate asymptotic polynomial time approximation scheme by APTAS (also called an asymptotic PTAS). An asymptotic fully polynomial time approximation scheme (AFPTAS) is an APTAS whose time complexity is polynomial not only in the input size but also in 1/ε. A PTAS and an FPTAS are defined similarly (to an APTAS and an AFPTAS, respectively) but they have the required properties with respect to the absolute approximation ratio.

Previous work. In the basic bin packing problem, there is a single type of bins. The size and the cost of a bin are equal to 1. Its study started in the early 1970s [29,18–20]. Since then, the problem has been studied in many articles (see [4] for a survey). In the early 1980s, an APTAS was designed by Fernandez de la Vega and Lueker [8], and shortly after that, an AFPTAS was designed by Karmarkar and Karp [21]. In these approximation schemes, items are partitioned into small and large items. An approximate solution is found for the large items while small items are added greedily. Many generalizations and variants of bin packing have been studied (see [6,5] for surveys). Construction of asymptotic approximation schemes for variants of bin packing can be harder than for the basic variant, as was demonstrated for example by Caprara, Kellerer, and Pferschy [3], who designed an APTAS for bin packing with cardinality constraints (where, in addition to the standard restriction on the total size of items in a bin, a parameter k is given, such that the number of items packed into a bin cannot exceed k). They showed that a greedy packing of relatively small items, performed after the packing of large items has been fixed (as in [8], see above), cannot be used in an asymptotic approximation scheme for this variant. Instead, they considered multiple packings of large items. For each such packing, the small items are combined into the packing using a linear program that allows packing small items fractionally. A more complex approach for the packing of small items was introduced in [13].
The packing of small items is integrated into the linear programs, where a single packing of the large items is created, and the properties of the packing of small items (but not their exact packing) are determined by the linear program. Using this approach, an AFPTAS for bin packing with cardinality constraints was designed [13]. Asymptotic approximation schemes for other variants of bin packing were given for example in [27,30,10,9,2,13,11,7]. The most relevant special case of our problem is the variable sized bin packing problem, in which b_i = c_i for all i. This variant was studied by Murgolo [27], who designed an AFPTAS for it. An APTAS for GCVS was designed in [12].

Next, we elaborate on this former APTAS for GCVS. This scheme is based on showing that there is a near optimal solution satisfying the following property (called nice in [12]). For every item j, it is packed either as a small item for its bin, or in a bin whose cost is relatively close to the minimum cost of a bin that can accommodate j. Then, for every size interval between two consecutive sizes of bins, a linear grouping of the items with size in the interval is performed, and the resulting rounded instance is solved via dynamic programming that searches for an approximate nice solution for the rounded instance. It is explained in [12] that this method cannot be extended to obtain an AFPTAS, as the rounded instance cannot be solved using an integer program in fixed dimension (the natural integer programming formulation has a linear dimension and not a constant dimension). Thus, to obtain our main result, one needs to apply new techniques and find a new way to attack GCVS. A scheduling problem that is dual to our problem was recently studied in [22,25]. We refer to these papers for more information regarding the applications of our problem in energy aware scheduling.

Our results. In this work we design an AFPTAS for GCVS. In order to do that, we take a completely different approach from that of [12]. We apply a number of reductions and modifications of the set of bins. While some simple reductions are as in [12], we develop more complicated modifications. These modifications will allow us to partition the input into almost independent sub-problems. In each sub-problem there is a constant number of bin sizes (depending on ε), and a constant number of what we call density classes, where the density of a bin is its cost per size unit. Nevertheless, the sub-problems are related in the sense that not all items of the original input that are sufficiently small to be packed into the bins of this sub-input are packed in the solution to the sub-problem, and some of them may need to be packed into larger bins. An approximate total size of these items is a parameter of the sub-problem, and an additional parameter of it is the total size of tiny items (that are much smaller than the bins) that needs to be packed into these or further bins. Each such sub-problem is approximated by using the methods of [21], and the solutions to the different sub-problems are glued together using a relatively simple dynamic programming.

In addition to our study of GCVS, we consider a different variant of bin packing. The input and the set of feasible solutions are identical to those of basic bin packing, that is, items of rational sizes in (0, 1] are to be packed into bins of size 1. However, in this problem, the cost of a bin is a monotonically non-decreasing non-negative function of the total size of items packed into it. More accurately, in the Bin packing problem with bin utilization cost (denoted by BPUC), we are given a monotonically non-decreasing non-negative cost function f, whose domain contains the interval [0, 1], and a set of items {1, 2, ..., n}, where item j has a non-negative size s_j. The goal is to partition the items into subsets S_1, ..., S_m such that the total size of items in each subset is at most 1 and the cost, which is defined as ∑_{i=1}^{m} f(∑_{j∈S_i} s_j), is minimized (that is, the cost of a bin packed with the item set S_i is the value of f on the total size of items in S_i). A special case of this problem, where the function must be concave, was studied in [24,23]. We show that the general problem for an arbitrary monotonically non-decreasing non-negative function f can be reduced to GCVS, and therefore admits an AFPTAS. One technical difficulty is the representation of f. In [24,23] the authors do not discuss this issue. However, even for concave functions such as f(x) = √x or f(x) = log_2(x + 1), the value of f cannot be assumed to be given as a part of the input (since even for rational values of x, f(x) can be irrational and its representation may require an infinite number of bits). Moreover, the relevant set of values of f has an exponential cardinality which can be as large as 2^n (as each value corresponds to a subset of indices). We follow the traditional methodology for handling such functions f, that is, in our algorithm, the function f is given by oracles that enable us to query values of f. We discuss the properties of the oracles, which we assume to be given as input, in Section 5, and show how to reduce BPUC to GCVS, thus obtaining an AFPTAS for BPUC. We also show how to implement the required oracles for the case where f is a computable function.

The approach. The difficulties in solving GCVS arise in several aspects. Since the number of bin types is non-constant, several tricky reductions are required in order to reduce the number of bin types. Still, the remaining bin types can be very different in terms of their density. While the sequence of costs can be assumed to be monotone, this is not the case with the sequence of densities. It is not difficult to see that even if the density of the bins of type 1 is large, some items still cannot be packed into smaller bins due to their sizes. Once instances of this bin type are used, an algorithm is faced with the decision of which smaller items can use the resulting empty space. Moreover, in some cases a smaller bin can be preferred to a slightly larger one, even if it has a larger density, since its cost is still smaller. One natural attempt to solve the problem would be to apply a dynamic programming algorithm on the different bin types, starting with the smaller ones, as in [17,12]. The resulting dynamic programming formulation has a non-constant time horizon, that is, a non-constant number of levels. Therefore, the formulation of the problem as an integer program has a non-constant number of variables. On the other hand, the asymptotic approximation schemes of [8,21] strongly rely on the fact that the cost of a bin is uniform, and in [27], the methods rely on the fact that the cost of a bin is proportional to its size.
Specifically, the previous AFPTAS of Murgolo [27] for variable sized bin packing is crucially based on reducing the number of bin types to a constant number (a function of 1/ε). His reductions state that it is possible to consider only bin types for which b_i ≥ ε, and the costs of consecutive bin types differ by a multiplicative factor of (at least) 1 + ε. Not only do these reductions not hold for GCVS; we show in Appendix A that it is impossible to design an algorithm with a constant asymptotic approximation ratio that first reduces the number of bin types to a constant, and then solves the resulting problem. This is true even if the resulting input is solved optimally, possibly in exponential time. Thus, our problem is significantly more difficult than the variable sized bin packing problem of [27]. Note that it is fairly straightforward to extend [27] to the case that the cost of a bin is a concave function of its size.

We develop a novel set of reductions and techniques. These reductions are the main combinatorial insights into the structure of GCVS that we gain from our work. In addition, we combine a number of methods, including those mentioned above, rounding [16,17], shifting [15,14], and the usage of medium sets of elements [28,1], which is applied in our case not to items or bin sizes, but to densities. In our reductions, we partition the bin types into subsequences, and deal with each subsequence almost independently. Input items are partitioned into subsets too, but it is unfortunately impossible to partition the input into completely independent parts, since it is necessary to allow the assignment of small items to very large bins, as well as their assignment to much smaller bins. We develop an AFPTAS for an auxiliary problem on (long but of constant size) subsequences of bin types. The solutions for the partial inputs are combined via simple dynamic programming, taking into account the remaining smaller items that still need to be packed. This dynamic programming differs from that of [17] as here we are required to determine the number of machines (bins) of each type.

Definitions and notation. The original input is denoted by I, and consists of a set of input items and a set of bin types. Let ε be a fixed positive constant such that ε < 1/10 and 1/ε is an integer. We use O(ε) to denote a term which can be bounded from above by cε for some constant value c > 0 and ε < 1. We use the notation α_i for the (cost) density of a bin of size b_i, i.e., α_i = c_i/b_i. In the proofs, we make use of the following inequality: (1 + ε)^{τ−1} ≥ 1 + (τ − 1)ε > ετ for a positive integer τ, and therefore, log_{1+ε}(ετ) + 1 < τ.

Next Fit [19] is a bin packing heuristic that is applied to pack a set of items into bins of a common size b. It keeps a single active bin and packs items into it as long as this is possible. When an item cannot be packed into the active bin, a new active bin is created. Every pair of consecutive bins contains a total size of more than b.

2. Some reductions on the bin set and assumptions on optimal solutions

In this section we show a series of modifications on B and restrictions on the optimal solution, which can be applied efficiently. We do not apply any modifications on the input items at this time.

In the following sections we will compute a solution that uses only the modified set of bins, and approximates an optimal solution among possible solutions under the specified restrictions. Some of these reductions, or similar ones, were used in previous work on the subject [12,13,11,7,16,17], while many of the reductions are new. The reductions below may change the set of optimal solutions and increase the cost of such solutions, but we show that the modifications may increase the cost of an optimal solution by at most a multiplicative factor of 1 + O(ε) plus a constant additive factor. Specifically, each one of the following reductions increases the cost of an optimal solution by a factor of at most 1 + 8ε plus a constant additive factor (possibly depending on ε). In our analysis, we compare the cost of our approximation algorithm to the cost of an optimal solution for the instance resulting from all the reductions, and prove an asymptotic approximation ratio of 1 + O(ε). We get that even though the "real" asymptotic approximation ratio (i.e., the asymptotic approximation ratio with respect to an optimal solution of the original problem) may be slightly larger, it is still at most 1 + O(ε) times the asymptotic approximation ratio that we prove plus an additive constant factor, and thus our analysis results in an asymptotic approximation ratio of 1 + O(ε). Therefore, since we are interested in designing an AFPTAS for the problem, the reductions are harmless from our point of view.

For simplicity, each reduction receives a set of bins B, where |B| = r, and returns a set B′ ⊆ B, where |B′| = r′, which is in turn renamed again to be a set B. We use r to denote the cardinality of the set B, even if it has been modified. Similarly, if the values c_i and α_i are modified, the new values are denoted by c′_i and α′_i, but in what follows the notations c_i and α_i are used for the modified values.

2.1. Standard reductions

Our first set of reductions follows existing ones in the literature and is described in this section.

Lemma 1. Without loss of generality, we assume that the cost densities α_i are integer powers of 1 + ε. This increases the optimal cost by a factor of at most 1 + ε.

Proof. To achieve this, we modify the values c_i of the bins in B as follows. For a bin type i, we let p_i = ⌈log_{1+ε} α_i⌉ (that is, p_i is an integer such that (1 + ε)^{p_i−1} < α_i ≤ (1 + ε)^{p_i}). Note that p_i may be negative. We let b′_i = b_i, c′_i = (1 + ε)^{p_i} · b′_i, and therefore α′_i = (1 + ε)^{p_i}. Since α_i ≤ α′_i < (1 + ε)α_i, we have c_i ≤ c′_i ≤ (1 + ε)c_i. Consider a specific packing of a given input. The cost of this packing with respect to the modified costs may increase by at most a factor of 1 + ε. Thus the optimal cost increases by at most this factor. Note that the cost of a packing using the original densities is no larger than the cost using the modified densities. □

The following two simple lemmas appear in [12]. We present their proofs for completeness.

Lemma 2. [12] Without loss of generality, we assume that the values c_i are monotonically decreasing (i.e., for i < j, c_i > c_j). This does not change the optimal cost.

Proof. To achieve this, we show that given a set of bin types, we can omit some bins from B and change any solution for any given input into a solution for the same input that does not use bins removed from B, while its cost is not larger than the cost of the original solution. This is achieved by the following process on B.
While there exist i, j such that i < j (and thus b_i > b_j) but c_i ≤ c_j, remove the bin type j from B. Note that since b_i > b_j, it is possible to move the contents of every bin of size b_j into a bin of size b_i without increasing the cost of the given solution. This is applied on the bin set until no such pair i, j exists, and thus results in a set B′ where the sequence of values c_i is monotonically decreasing. Note that this reduction does not modify properties of bins, but it removes some bin types from B. We find that for every input, there is an optimal solution that uses only bins of B′, and thus the reduction keeps (for every instance) at least one optimal solution unaffected. □

We assume in the remainder of the paper that the values c_i of the bins are strictly monotonically decreasing. In the next lemma, we restrict the costs of bins further.

Lemma 3. [12] Without loss of generality, for all i, c_i/c_{i+1} ≥ 1 + ε holds. This modification increases the optimal cost by a factor of at most 1 + ε.

Proof. If the claim does not already hold for B, then we apply the following process on the bin types of the input. Traverse the list of bin types by increasing order of their indices from the largest bin type (of size b_1 = 1) to the smallest bin type (of size b_r). During the traversal keep only a subset of the types, and remove the other types from B. Let j = 1. We keep the first bin type, and inductively assume that the last bin type that is kept has index j. Then, given the value j, for i = j + 1, j + 2, ..., as long as c_j/c_i < 1 + ε, the i-th type is removed from the list of bin types. Each time we keep the bin type with smallest index i such that c_j/c_i ≥ 1 + ε, and i becomes the new value of j. If there is no such value of i, we remove all bin types j + 1, j + 2, ..., r from B.

Consider a feasible solution that packs a subset of items, S, into a bin of type i that is removed during this process. By the construction, the set of resulting bin types contains a bin type j < i such that c_j/c_i < 1 + ε. We modify the solution by using a bin of type j to pack the items in S. This is possible since b_j > b_i. Applying this procedure on all bins of types that were removed from B (and all subsets packed into such bins) results in a feasible solution to the new instance whose total cost is at most 1 + ε times the cost of the original solution (as each subset is moved at most once). Therefore, the cost of an optimal solution for the new instance is at most 1 + ε times the cost of an optimal solution for the original instance. □

Corollary 4. We have c_i ≤ 1/(1 + ε)^{i−1} for any 2 ≤ i ≤ r, and ∑_{i=1}^{r} c_i ≤ 1 + 1/ε.

Proof. The first property can be proved using induction and Lemma 3. The second claim follows from the first part since ∑_{i=1}^{r} c_i ≤ ∑_{t=0}^{∞} 1/(1 + ε)^t = 1 + 1/ε. □
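The three standard reductions above are simple filtering passes over the list of bin types. The following is a minimal illustrative sketch (it is not taken from the paper and uses floating-point arithmetic); it assumes the bin types are given as a non-empty list of (size, cost) pairs sorted by decreasing size, with bins[0] = (1.0, 1.0).

```python
import math

def standard_reductions(bins, eps):
    """Illustrative sketch of Lemmas 1-3 applied in order."""
    # Lemma 1: round each density up to the next integer power of 1 + eps.
    rounded = []
    for b, c in bins:
        p = math.ceil(math.log(c / b, 1.0 + eps))     # p_i = ceil(log_{1+eps} alpha_i)
        rounded.append((b, (1.0 + eps) ** p * b))     # c'_i = (1 + eps)^{p_i} * b_i
    # Lemma 2: drop a type whenever a larger type is at least as cheap.
    decreasing = []
    for b, c in rounded:
        if decreasing and c >= decreasing[-1][1]:
            continue
        decreasing.append((b, c))
    # Lemma 3: keep a type only if it is cheaper than the last kept type by a factor 1 + eps.
    kept = [decreasing[0]]
    for b, c in decreasing[1:]:
        if kept[-1][1] / c >= 1.0 + eps:
            kept.append((b, c))
    return kept
```

Corollary 4 then bounds the total cost of one bin of each kept type by 1 + 1/ε, since the kept costs decrease geometrically.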

After the above modifications were applied, the values c_i are unique (the values b_i are unique even before the modifications, and are never modified). However, multiple bin types may have the same density. We say that a bin belongs to a density class k if its density is (1 + ε)^k (where k may be negative but it is always an integer). Next, we bound the number of bin types in one density class. The proof of the next lemma is similar to the method used by Murgolo [27].

Lemma 5. The following can be assumed without loss of generality. Let i, j be two bin types of a density class k, such that b_j > b_i. Then b_i ≥ εb_j holds.

Proof. To achieve this, we show that given a set of bin types, we can omit some bins from B and change any solution of any given input into a solution for the same input that does not use bins removed from B. We calculate the increase in the cost of the solution later. If the claim does not already hold for the input bin types, consider a density class k and assume that there exist two bin types i, j in it such that b_i < εb_j. We can assume that j is the largest bin type in class k (i.e., of the minimum index). Next, we show how to omit all such bin types i from B. Let X be the set of bin types i in the density class k such that b_i < εb_j. We repack all items packed into bins of types in X into bins of type j using Next Fit. Let ∇_j be the total size of items that were repacked into bins of type j, let t^j be the number of new bins of type j, and let t^i be the number of bins of type i in the previous solution. Each repacked item has a size of at most εb_j. Thus, if Next Fit uses a new active bin, then the previous active bin contains items of total size that exceeds b_j − εb_j = (1 − ε)b_j, so ∇_j > (t^j − 1)(1 − ε)b_j, or equivalently, t^j < ∇_j/(b_j(1 − ε)) + 1.

The previous cost of the packing of the repacked items was ∑_{i∈X} t^i c_i. Recall that the density of class k is (1 + ε)^k, so for any i ∈ X, c_i = b_i · (1 + ε)^k. Thus, the previous cost was exactly ∑_{i∈X} t^i b_i (1 + ε)^k. Since ∇_j ≤ ∑_{i∈X} t^i b_i, we get that the previous cost was at least ∇_j · (1 + ε)^k. The new cost of the bins for the repacked items is at most

t^j · c_j = t^j b_j · (1 + ε)^k < (∇_j/(b_j(1 − ε)) + 1) · b_j · (1 + ε)^k = (∇_j/(1 − ε) + b_j) · (1 + ε)^k = (∇_j/(1 − ε)) · (1 + ε)^k + c_j.

Thus, since ε < 1/2, the new cost is at most 1/(1 − ε) ≤ 1 + 2ε times the previous cost, plus c_j. Since this is performed at most once for each density class and hence also at most once for each bin type, the total additive increase in the cost is at most 1 + 1/ε by Corollary 4, while the multiplicative increase in the cost is by a factor of at most 1 + 2ε. □

Corollary 6. For every density class, there are at most 1/ε² bin types belonging to this class.

Proof. Consider the set of bin types of one density class k, and let π_1 < ··· < π_q be their indices (their sizes are b_{π_1} > ··· > b_{π_q}). By Lemma 5, b_{π_q} ≥ εb_{π_1}. Since all densities are equal to (1 + ε)^k, we get c_{π_q}/c_{π_1} = b_{π_q}/b_{π_1} ≥ ε, so we have c_{π_q} ≥ ε c_{π_1}. By Lemma 3, c_{π_i} ≥ (1 + ε)c_{π_{i+1}} for 1 ≤ i ≤ q − 1, which implies c_{π_1} ≥ (1 + ε)^{q−1} c_{π_q} ≥ ε(1 + ε)^{q−1} c_{π_1}. Thus, q ≤ log_{1+ε}(1/ε) + 1 ≤ 1/ε², which holds since ε > 0 and 1/ε is an integer. □
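Several of the repacking arguments in this section and the scheme of Section 3 invoke Next Fit, defined in Section 1. The following is a minimal illustrative sketch (not taken from the paper), assuming the items are given as a list of sizes and all target bins have a common size b.

```python
def next_fit(sizes, b):
    """Pack the given item sizes (each assumed to be at most b) into bins of common size b
    using Next Fit; every pair of consecutive bins then holds a total size of more than b."""
    bins, load = [], b          # load = b forces a new bin for the first item
    for s in sizes:
        if load + s > b:        # the active bin cannot accommodate the item
            bins.append([])     # open a new active bin
            load = 0.0
        bins[-1].append(s)
        load += s
    return bins
```

For instance, the bound on t^j in the proof of Lemma 5 combines the pairwise property above with the fact that every repacked item there has size at most εb_j, so a closed bin carries a load of more than (1 − ε)b_j.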

2.2. New reductions

After applying the reductions in the previous subsection, we deviate from the previous approaches to GCVS and related problems.

The next lemma reduces the bin set as follows. Given two bin types, where one bin type has a much larger density than the other one, the bin type with the much larger density must be larger. Intuitively, larger bins may have large density and would still be used since there may be items which are too large to be packed into smaller bins. However, small bins of large density are less useful for smaller items.

Lemma 7. Without loss of generality, for all i, j, if α_i ≥ 2 · α_j, then b_i > b_j. Thus, for a pair of bin types x > y (such that b_x < b_y), α_x < 2 · α_y.

Proof. To prove the claim, we show that given a set of bin types, we can omit some bins from B and change any solution to any given input into a solution for the same input that does not use bins removed from B. We calculate the increase in the cost of the solution later. If the claim does not already hold for the input bin types, consider a bin type j such that there exists a bin type i, where α_i ≥ 2 · α_j and b_i < b_j. Let X be the set of bin types i that satisfy these last two properties with respect to j. The remaining bins, B \ X, are the bin j and bins k that satisfy b_k > b_j or α_k < 2 · α_j (or both). Next, we remove the bin types of X from B. Items that were packed into bins of size b_i, where i ∈ X, are repacked into bins of size b_j using Next Fit. Every pair of consecutive bins contains a total size of more than b_j. Since any item that was packed into a bin of size b_i with i ∈ X has size no larger than b_i < b_j, this packing is feasible. Let ∇_j denote the total size of items that were repacked, and let t^j denote the number of new bins of size b_j resulting from the repacking. We can find ⌊t^j/2⌋ ≥ (t^j − 1)/2 disjoint pairs of consecutive bins, such that each pair is packed with a total size exceeding b_j. Thus, ∇_j > b_j · (t^j − 1)/2, or equivalently, t^j < 2∇_j/b_j + 1.

For every i ∈ X, let Δ_i be the total size of items coming from bins of size b_i that were repacked, and let t^i be the number of bins of size b_i in the previous solution. Since the items that are repacked using bins of size b_j are those coming from bins whose types are in X, we have ∇_j = ∑_{i∈X} Δ_i. Using Δ_i ≤ t^i · b_i and t^j ≤ 2∇_j/b_j + 1 = 2(∑_{i∈X} Δ_i)/b_j + 1, we have t^j ≤ ∑_{i∈X} 2t^i b_i/b_j + 1. The difference in the cost of the bins is therefore at most ∑_{i∈X} 2t^i b_i c_j/b_j + c_j − ∑_{i∈X} t^i c_i. Next, we use the assumption α_i ≥ 2 · α_j for all i ∈ X, or equivalently, 2b_i c_j/b_j ≤ c_i, to get 2t^i b_i c_j/b_j ≤ t^i c_i, so the increase in cost is at most c_j. This is applied on the bin set until no such j exists (for which the corresponding set X is non-empty), and thus it results in a set B′ with the required property. Each time that the process is applied for a given value of j, the cost of a solution increases by at most c_j. The process is applied at most once for each j, and therefore the total increase is at most ∑_{j∈B} c_j ≤ 1 + 1/ε, by Corollary 4. □
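The reduction of Lemma 7 can again be phrased as a single pass over the bin types. The sketch below is illustrative only; it assumes the list is sorted by decreasing size, and it keeps a type only when no larger type has at most half of its density, so that the contents of every dropped type can be repacked with Next Fit as in the proof above.

```python
def remove_dense_small_types(bins):
    """bins: non-empty list of (size, cost) pairs sorted by decreasing size.
    Keep a type only if no larger kept type has at most half of its density (Lemma 7)."""
    kept, cheapest_larger = [], float("inf")   # smallest density among larger kept types
    for size, cost in bins:
        density = cost / size
        if density >= 2.0 * cheapest_larger:
            continue                           # removable: repack into the sparser larger type
        kept.append((size, cost))
        cheapest_larger = min(cheapest_larger, density)
    return kept
```

Comparing only against kept larger types is enough: a removed larger type always points, through a chain of removals, to a kept type of even smaller density.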

Next, we define a partition of the input items and bin types as follows. We define a bin type i, for 2 ≤ i ≤ r, to be a breakpoint if b_i ≥ εb_{i−1} and c_i < ε²c_{i−1}. The intuition of this definition is that if i is a breakpoint, then all bin types of index at least i are much more efficient than bins of index smaller than i, and we prefer to use (many) bins of index at least i instead of a (smaller number of) bins of type smaller than i whenever possible. Let d denote the number of breakpoints and let β_i denote the bin type of the i-th breakpoint, so that 1 < β_1 < ··· < β_d ≤ r. We let β_{d+1} = r + 1 and β_0 = 1.

Lemma 8. Without loss of generality, all items of size in (b_{β_{j+1}}, b_{β_j}], for 0 ≤ j ≤ d, are packed into bins of types β_{j+1} − 1, ..., β_j + 1, β_j.

Proof. Let ᾱ_i = min_{1 ≤ j ≤ β_i − 1} α_j, for 1 ≤ i ≤ d. The sequence ᾱ_i is monotonically non-increasing in i. It is clear that all items of size in (b_{β_{j+1}}, b_{β_j}] are packed into bins of types β_{j+1} − 1, ..., 1, since they are larger than any bin of a different type. Thus, the claim holds for j = 0. For 1 ≤ j ≤ d we will show a transformation in which such items that are packed into bins of types β_j − 1, ..., 1 will be repacked into bins of type β_j. For 1 ≤ j ≤ d, let ∇_j denote the total size of items of size in (b_{β_{j+1}}, b_{β_j}] that are packed into bins of types β_j − 1, ..., 1 in a given optimal solution to the original input.

Claim 9. We have opt ≥ ∑_{j=1}^{d} ᾱ_j · ∇_j.

Proof. For 1 ≤ i ≤ r, let t^i denote the number of bins of type i used by a given optimal packing. We have opt = ∑_{i=1}^{r} t^i c_i = ∑_{i=1}^{r} t^i b_i α_i. Let Δ_i denote the total size of items packed into bins of type i in this solution. We have Δ_i ≤ t^i b_i, so opt ≥ ∑_{i=1}^{r} Δ_i α_i. Let Λ_j (for 0 ≤ j ≤ d) denote the total size of items packed into bins of types β_{j+1} − 1, ..., β_j, i.e., Λ_j = ∑_{i=β_j}^{β_{j+1}−1} Δ_i. Since for any β_j ≤ i ≤ β_{j+1} − 1, α_i ≥ ᾱ_{j+1}, we have opt ≥ ∑_{j=0}^{d} ∑_{i=β_j}^{β_{j+1}−1} Δ_i α_i ≥ ∑_{j=0}^{d−1} ∑_{i=β_j}^{β_{j+1}−1} Δ_i ᾱ_{j+1} = ∑_{j=0}^{d−1} Λ_j ᾱ_{j+1}. For j, k such that 0 ≤ j < k ≤ d, let λ_{k,j} denote the total size of items of size in (b_{β_{k+1}}, b_{β_k}] that is packed in bins of types β_{j+1} − 1, ..., β_j in the given optimal solution. We have ∇_k = ∑_{j=0}^{k−1} λ_{k,j} and Λ_j ≥ ∑_{k=j+1}^{d} λ_{k,j}. Thus, ∑_{j=0}^{d−1} Λ_j ᾱ_{j+1} ≥ ∑_{j=0}^{d−1} ∑_{k=j+1}^{d} λ_{k,j} ᾱ_{j+1} = ∑_{k=1}^{d} ∑_{j=0}^{k−1} λ_{k,j} ᾱ_{j+1} ≥ ∑_{k=1}^{d} ∑_{j=0}^{k−1} λ_{k,j} ᾱ_k = ∑_{k=1}^{d} ∇_k ᾱ_k, using ᾱ_k ≤ ᾱ_{j+1} since k ≥ j + 1 in the summation. □

From the definition of breakpoints, we get that α_{β_i} = c_{β_i}/b_{β_i} < ε²c_{β_i−1}/(εb_{β_i−1}) = εα_{β_i−1} holds for 1 ≤ i ≤ d. By Lemma 7, any bin type 1 ≤ j < β_i − 1 satisfies α_j > α_{β_i−1}/2. Therefore,

ᾱ_i > α_{β_i−1}/2 > α_{β_i}/(2ε).   (1)

Next, we create bins of the type β_j for 1 ≤ j ≤ d, called shadow bins. All items of size in (b_{β_{j+1}}, b_{β_j}] that are packed into bins of types 1, ..., β_j − 1 are removed from their original bins and repacked using Next Fit into at most 2∇_j/b_{β_j} + 1 bins of type β_j. The total cost of the shadow bins of type β_j is at most (2∇_j/b_{β_j} + 1) · c_{β_j} ≤ 2∇_j α_{β_j} + c_{β_j} ≤ 4ε ᾱ_j ∇_j + c_{β_j}, where the last inequality holds by (1). The total cost of the shadow bins is therefore at most ∑_{j=1}^{d} (4ε ᾱ_j ∇_j + c_{β_j}) ≤ 4ε · opt + 1 + 1/ε, by Claim 9 and Corollary 4.

Thus, the cost of an optimal solution increased by at most a factor of 1 + 4ε and an additive factor of at most 1 + 1/ε. This completes the proof of the lemma. □
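The breakpoint partition that Lemma 8 justifies can be computed in one linear scan. The following is a minimal sketch and not the paper's algorithm; it assumes the reduced bin types are given as (size, cost) pairs sorted by decreasing size (with b_1 = 1) and the items as a list of sizes in (0, 1], and the helper name is illustrative.

```python
def split_at_breakpoints(bins, items, eps):
    """Split the instance into independent subproblems at breakpoints (Lemma 8).
    A type i >= 2 is a breakpoint if b_i >= eps * b_{i-1} and c_i < eps**2 * c_{i-1}."""
    breakpoints = [0]                                      # beta_0 = the first (largest) type
    for i in range(1, len(bins)):
        (b_prev, c_prev), (b_cur, c_cur) = bins[i - 1], bins[i]
        if b_cur >= eps * b_prev and c_cur < eps ** 2 * c_prev:
            breakpoints.append(i)
    boundaries = breakpoints + [len(bins)]                 # beta_{d+1} corresponds to r + 1
    subproblems = []
    for j in range(len(breakpoints)):
        lo, hi = boundaries[j], boundaries[j + 1]          # bin types beta_j .. beta_{j+1} - 1
        b_low = bins[hi][0] if hi < len(bins) else 0.0     # b_{beta_{j+1}} (0 beyond the last type)
        b_high = bins[lo][0]                               # b_{beta_j}
        sub_items = [s for s in items if b_low < s <= b_high]
        subproblems.append((bins[lo:hi], sub_items))
    return subproblems
```

Each returned pair (the bin types of one block together with its items) is then scaled and treated separately, as described in the paragraph that follows.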

(bβ j+1 , bβ j ], for some 0 ≤ j ≤ d, and the bin types β j +1 − 1, . . . , β j + 1, β j . Each problem will be solved independently of the

other ones, and in what follows we will assume without loss of generality that we are dealing with just one such problem. Given one such problem, we scale the bin sizes and item sizes so that the maximum bin size is 1. We also scale the bin costs so that the maximum cost of a bin type (in the given problem we consider) is 1. We will derive in what follows a solution for each such problem whose cost is at most 1 + ε times the optimal cost for that problem plus a constant. Observe that summing all these constants for the different problems still results r in a constant as when we undo the scaling of the costs of bin types we get that the total additive term is bounded by i =1 c i times the additive constant that we will get for one such problem. Observe also that a resulting problem may still contain a non-constant number of bin types, and thus in what follows we will partition the input further into a set of dependent problems. Corollary 10. If 1 ≤ i ≤ r − 1 is a bin type such that b i +1 ≥ εb i , then c i +1 ≥ ε 2 c i . Lemma 11. Consider two bin types i, j such that i < j, b j ≥ εb i , and c j ≥ ε 2 c i . The number of density classes of the bin types in α i , i + 1, . . . , j is at most ε23 , and additionally α i < ε12 and j

max

α

min

α

i ≤≤ j i ≤≤ j

<

2

ε2

must hold. Proof. For every bin type , where i + 1 ≤  ≤ j, b < b i and c  ≥ c j ≥ ε 2 c i hold. Thus the density of the bin type , satisfies

α =

c b

2 > εbici = ε 2 αi , that is, ααi > ε 2 . By Lemma 7, ααi < 2.

Since

ε < 1, ε2 <

max

Thus

i ≤≤ j

min

i ≤≤ j

α

α αi < 2 holds for i =  as well. We have proved the following two properties:

max

i ≤≤ j

αi

α

min

< 2,

i ≤≤ j

αi

α

> ε2 .

α 2 2 2 α < ε2 . By Lemma 1, the ratio αi is an integer power of 1 + ε , and therefore, there are at most log1+ε ε2 + 1 ≤ ε3

distinct density classes, including the density class of type i.

2

2.3. A key reduction via shifting

In this section we define a class of inputs which need to be tested. In each input, a certain subset of bin types is removed in order to create some multiplicative gaps in the list of sizes of bin types. This is done by using a method related to the shifting method of Hochbaum and Maass [15] (see also [14]) and the method of selection of medium elements [28,1]. The algorithms of the next sections will be applied for each input created in this step, and a solution of minimal cost will be given as an output. We later prove that there exists a solution for at least one of the inputs whose cost is sufficiently low.

We say that the density α is removable if there is a non-empty density class of density α′ for which α′/α ≤ 2/ε² and the maximum size of a bin with density α′ is larger than the maximum size of a bin with density α. We say that a bin type j is removable if it belongs to a removable density class, and otherwise it is a non-removable bin type. Let g(ε) be an integral function of ε. We will use g(ε) = 1/ε⁵. We define layers of density classes and of bin types as follows. A bin type j with density α_j belongs to layer x (for an integer 1 ≤ x ≤ g(ε)) if there exists an integer y such that

α_j ∈ (1/ε^{2(y·g(ε)+x)}, 1/ε^{2(y·g(ε)+x+1)}].

Input x is defined as the set of all items, together with the set of bin types, excluding the removable bin types of layer x (which are removed). Let C_x denote the cost of the bins of removable bin types of layer x that are used by a given optimal packing. Let C_other denote the cost of bin types that are not removable out of the cost opt. We have opt = C_other + ∑_{x=1}^{g(ε)} C_x.

Let a chain of bin types be a subsequence of bin types z[1] < ··· < z[w] such that for any 1 ≤ j ≤ w − 1, we have α_{z[j+1]} ≥ ε²α_{z[j]}.

Claim 12. Assume layer x is removed from the bin type set. Then a chain z[1], z[2], ..., z[w] of Input x satisfies α_{z[w]} ≥ (ε^{2·g(ε)}/2) · α_{z[1]}, and therefore max_{1 ≤ ℓ ≤ w} α_{z[ℓ]} / min_{1 ≤ ℓ ≤ w} α_{z[ℓ]} ≤ 8/ε^{2·g(ε)}.

Proof. By Lemma 7, max_{1 ≤ ℓ ≤ w} α_{z[ℓ]} ≤ 2α_{z[1]} and min_{1 ≤ ℓ ≤ w} α_{z[ℓ]} ≥ α_{z[w]}/2. Thus, the second claim will follow from the first one. Let z = z[1].

First, consider the case where all bin types in the chain are either removable, or do not belong to layer x, except for possibly z. Thus, except for possibly z, no bin type in the chain belongs to layer x. Let y be the minimum integer such that α_z ≤ 1/ε^{2(y·g(ε)+x+1)} and let z̃ = z[ℓ̃] be the maximum index of a bin type in the chain such that α_{z̃} > 1/ε^{2((y−1)·g(ε)+x+1)}. Since, by the choice of y, α_z > 1/ε^{2((y−1)·g(ε)+x+1)}, z̃ is well defined. Next, we prove z̃ = z[w] (i.e., ℓ̃ = w). Assume by contradiction z̃ < z[w]. By the definition of a chain, we have α_{z[ℓ̃+1]} ≥ ε²α_{z̃}. On the other hand, α_{z[ℓ̃+1]} ≤ 1/ε^{2((y−1)·g(ε)+x+1)}. Since z[ℓ̃+1] does not belong to layer x, in fact we have α_{z[ℓ̃+1]} ≤ 1/ε^{2((y−1)·g(ε)+x)}, which implies α_{z[ℓ̃+1]}/α_{z̃} < ε^{2((y−1)·g(ε)+x+1)}/ε^{2((y−1)·g(ε)+x)} = ε², contradicting α_{z[ℓ̃+1]}/α_{z̃} ≥ ε². This implies α_z/α_{z[w]} ≤ ε^{2((y−1)·g(ε)+x+1)}/ε^{2(y·g(ε)+x+1)} = 1/ε^{2·g(ε)} < 2/ε^{2·g(ε)}.

Next, consider the case where there is at least one non-removable bin type of layer x in the chain that is not z. Denote the maximum index of a non-removable bin type of layer x in the chain by z′ > z, where z′ = z[ℓ′]. Next, we consider the non-removable density class α_{z′}. Let j be the minimum index of any bin type of this density class. If j > z, then let j′ be such that z[j′ − 1] < j ≤ z[j′] (which exists since z = z[1] < j ≤ z[ℓ′] = z′). We have ε²α_{z[j′−1]} ≤ α_{z[j′]} by the definition of a chain, and α_{z[j′]} < 2α_j by Lemma 7. Thus, α_{z[j′−1]}/α_j ≤ 2/ε². By the choice of j, all bin types of density class α_{z′} = α_j are of size at most b_j < b_{z[j′−1]}, so they are removable due to bin type z[j′ − 1], by the last observation and the definition of removable densities. This is a contradiction, and thus we can assume that j ≤ z. By Lemma 7, if j < z, then α_z ≤ 2α_j = 2α_{z′}, so α_z ≤ 2α_{z′} holds for both cases j = z and j < z. If z′ = z[w], then we are done. Otherwise, we can use the proof above for the chain z′, ..., z[w], since in the chain z[ℓ′], z[ℓ′ + 1], ..., z[w] only z′ = z[ℓ′] is a non-removable bin type of layer x. We have α_{z′}/α_{z[w]} ≤ 1/ε^{2·g(ε)}, and so α_z/α_{z[w]} ≤ 2/ε^{2·g(ε)}. □
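The layer index of a density can be computed directly from the definition above. The sketch below is illustrative only (it is not part of the paper); it uses floating-point logarithms, whereas an exact implementation would work with the rational representation of the densities.

```python
import math

def density_layer(alpha, eps, g):
    """Return (x, y) with 1 <= x <= g such that
    alpha lies in ( eps**(-2*(y*g + x)), eps**(-2*(y*g + x + 1)) ]."""
    # k is the unique integer with eps**(-2*k) < alpha <= eps**(-2*(k+1)),
    # i.e. k < log(alpha) / (-2 * log(eps)) <= k + 1.
    k = math.ceil(math.log(alpha) / (-2.0 * math.log(eps))) - 1
    x = (k - 1) % g + 1
    y = (k - x) // g
    return x, y
```

Removability of a density class is then decided by comparing it with every class of larger maximum bin size whose density is at most 2/ε² times as large, and Input x is obtained by discarding the removable types whose layer is x.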

Let x̃ denote an index of a layer that minimizes C_x. The following claim is obvious.

Claim 13. We have C_{x̃} ≤ opt/g(ε).

For every removable bin type j of layer x̃, we let i_j denote the maximum index of a bin such that b_{i_j} > b_j, and i_j either does not belong to layer x̃ or it is a non-removable bin type. That is, i_j is the index of the smallest bin type that is larger than j, and was not removed in the removal of the (removable) bin types of layer x̃. Note that bin type 1 is a non-removable bin type, therefore i_j is well defined for any 1 < j ≤ r such that j is removable.

Claim 14. For every removable bin type j of layer x̃, α_{i_j} ≤ 4α_j/ε⁴.

Proof. Assume that α_j ∈ (1/ε^{2(ỹ·g(ε)+x̃)}, 1/ε^{2(ỹ·g(ε)+x̃+1)}] for an integer ỹ, and let j_0 = j and ℓ = 0. We define a sequence of bin types j_0, j_1, ... and repeat the following process. If j_ℓ is not a removable bin type or it does not belong to layer x̃, then we stop. Otherwise, j_ℓ is a removable bin type which belongs to layer x̃, and we act as follows (note that if ℓ = 0, then this property holds, so the next process must be performed in this case). The bin type j_ℓ is removable due to the existence of a non-empty density class (it is non-empty before the removal of the removable bin types of layer x̃), for which the largest bin type of this density class, which we denote by j_{ℓ+1}, satisfies α_{j_{ℓ+1}}/α_{j_ℓ} ≤ 2/ε², and b_{j_{ℓ+1}} is larger than the maximum size of any bin with density α_{j_ℓ}. We let ℓ = ℓ + 1 and repeat. Note that the sequence j_ℓ is decreasing, and therefore the process is stopped after a finite number of times. If at some time j_ℓ = 1, then the process stops since the bin type 1 is non-removable. We let k_j be the bin type j_{ℓ*}, where ℓ* is the last value of ℓ (thus, ℓ* ≥ 1).

The bin types j_ℓ ∈ {j_0, j_1, ..., j_{ℓ*−1}} satisfy that the density class α_{j_ℓ} is removable and belongs to layer x̃. It must be the case that α_{j_ℓ} ∈ (1/ε^{2(ỹ·g(ε)+x̃)}, 1/ε^{2(ỹ·g(ε)+x̃+1)}]. We prove this by induction. If ℓ = 0, this holds by definition. Assume that α_{j_{ℓ−1}} ∈ (1/ε^{2(ỹ·g(ε)+x̃)}, 1/ε^{2(ỹ·g(ε)+x̃+1)}] for some 1 ≤ ℓ ≤ ℓ* − 1. Recall that both bin types j_ℓ and j_{ℓ−1} belong to layer x̃. We have α_{j_ℓ}/α_{j_{ℓ−1}} ≤ 2/ε², and since j_ℓ < j_{ℓ−1}, by Lemma 7, α_{j_{ℓ−1}} ≤ 2α_{j_ℓ}. If α_{j_ℓ} ∉ (1/ε^{2(ỹ·g(ε)+x̃)}, 1/ε^{2(ỹ·g(ε)+x̃+1)}], then it is either the case that α_{j_ℓ} > 1/ε^{2((ỹ+1)·g(ε)+x̃)} or α_{j_ℓ} ≤ 1/ε^{2((ỹ−1)·g(ε)+x̃+1)}. In the first case, 2/ε² ≥ α_{j_ℓ}/α_{j_{ℓ−1}} > ε^{2(ỹ·g(ε)+x̃+1)}/ε^{2((ỹ+1)·g(ε)+x̃)} = 1/ε^{2g(ε)−2}, and in the second case 1/2 ≤ α_{j_ℓ}/α_{j_{ℓ−1}} < ε^{2(ỹ·g(ε)+x̃)}/ε^{2((ỹ−1)·g(ε)+x̃+1)} = ε^{2g(ε)−2}, and both cases lead to a contradiction. Thus α_{j_{ℓ*−1}}/α_{j_0} ≤ 1/ε². By the definition of k_j = j_{ℓ*} (recall that this bin type is either non-removable or does not belong to layer x̃ or both), α_{k_j}/α_{j_{ℓ*−1}} ≤ 2/ε². Therefore, α_{k_j}/α_j ≤ 2/ε⁴. By definition of i_j, i_j ≥ k_j, and by Lemma 7, α_{i_j} ≤ 2α_{k_j}, and the claim follows. □

Lemma 15. The optimal cost of packing the input x̃ is at most (1 + 8ε)opt + 1 + 1/ε.

Proof. For every bin type i that was not removed, let A_i denote the set of bin types j such that i_j = i. The set of items that are packed into bins of A_i in a given optimal solution are repacked together using bins of type i. Let Δ_j denote the total size of repacked items which were removed from bins of type j (for the values of j considered here), and let t^j denote the number of bins of type j that are used in the original optimal solution. We have Δ_j ≤ t^j b_j = t^j c_j/α_j ≤ t^j c_j · 4/(ε⁴α_i), where the last inequality holds by Claim 14. The cost of these bins in the packing of opt is t^j c_j. The repacking of all items packed into bins of A_i is performed using Next Fit on bins of type i. Thus, the resulting number of bins is at most 2∑_{j∈A_i} Δ_j/b_i + 1, and their cost is at most (2∑_{j∈A_i} Δ_j/b_i + 1) · c_i = 2∑_{j∈A_i} Δ_j · α_i + c_i ≤ 2∑_{j∈A_i} t^j c_j · (4/(ε⁴α_i)) · α_i + c_i = (8/ε⁴) · ∑_{j∈A_i} t^j c_j + c_i. Thus the new cost is at most 8/ε⁴ times the original cost of the bins of types in A_i, plus c_i.

Taking the sum over all i, we get that the total additional cost is at most 8C_{x̃}/ε⁴ + ∑_{i=1}^{r} c_i ≤ 8opt/(ε⁴g(ε)) + 1 + 1/ε, by Corollary 4 and Claim 13. By the choice of g(ε), we get an upper bound on the additional cost of 8ε · opt + 1 + 1/ε. Thus, the cost of the new solution, which does not use bins of removable bin types of layer x̃, is at most (1 + 8ε)opt + 1 + 1/ε. □

2.4. Further reductions

Consider the list of bin types (which results from the last reduction via the shifting technique for a fixed value of x). We apply the reduction in Lemma 8 once again. Denote the resulting bin sizes by 1 = b_1 > b_2 > ··· > b_r > b_{r+1} = 0. Without loss of generality, we can assume (using Corollary 10) that if b_i ≥ εb_{i−1}, then c_i ≥ ε²c_{i−1}. That is, if b_i ≥ εb_{i−1}, then the density of bin type i is relatively close to the density of bin type i − 1 (see Lemma 11). Our next goal is to partition the input list of bin types into subsequences that satisfy the following assumptions.

Lemma 16. It is possible to partition the list of bin types into subsequences of consecutive bin types, such that the following properties hold.

1. The number of density classes in each subsequence is constant (a polynomial function of 1/ε). Moreover, the ratio between the maximum density of a bin type and the minimum density of a bin type in a common subsequence is at most 8/ε^{2g(ε)}.
2. The number of bin types in each subsequence is a constant (a polynomial function of 1/ε).
3. If bin types i and i − 1 belong to a common subsequence, then b_i ≥ εb_{i−1}.
4. If bin types i and i − 1 belong to different subsequences, then b_i < εb_{i−1}.

Proof. We create the partition into subsequences by traversing the list of bins from the largest to the smallest (that is, in increasing order of bin types), and each time that there is a pair of consecutive bin types such that b_i < εb_{i−1}, we start a new subsequence with bin type i. Next, we analyze the resulting partition into subsequences. Note that properties 3 and 4 clearly hold. Moreover, property 2 holds by property 1 using Corollary 6. Thus, it suffices to establish property 1.

Consider a pair of consecutive bin types, i − 1 and i. We now prove that if i − 1 and i are in the same subsequence, then α_i ≥ ε²α_{i−1}. Since b_i ≥ εb_{i−1}, then c_i ≥ ε²c_{i−1} holds (by Corollary 10), and therefore by Lemma 11, α_i ≥ ε²α_{i−1}. Therefore, each subsequence is a chain. By Claim 12, the second part of property 1 holds. We conclude that the first part of property 1 holds due to Lemma 1 and since log_{1+ε}(8 · (1/ε)^{2g(ε)}) + 1 ≤ log_{1+ε}((1/ε)^{2g(ε)+1}) + 1 = (2g(ε) + 1) · log_{1+ε}(1/ε) + 1 ≤ (2/ε⁵ + 1) · (1/ε² − 1) + 1 < 2/ε⁷, using g(ε) = 1/ε⁵ and ε ≤ 1/10. We found that the number of different densities is polynomial in 1/ε. □
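The partition of Lemma 16 is exactly the greedy scan from its proof. A minimal illustrative sketch, assuming the (already reduced) bin types are given as a non-empty list of (size, cost) pairs sorted by decreasing size:

```python
def partition_into_subsequences(bins, eps):
    """Split the bin types into maximal runs of consecutive types in which each size is at
    least eps times the previous one (Lemma 16): a new subsequence starts when b_i < eps * b_{i-1}."""
    subsequences = []
    current = [bins[0]]
    for prev, cur in zip(bins, bins[1:]):
        if cur[0] < eps * prev[0]:
            subsequences.append(current)
            current = []
        current.append(cur)
    subsequences.append(current)
    return subsequences
```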

Corollary 17. The ratio between the largest bin size in a subsequence and the smallest bin size in the same subsequence is upper bounded by a constant (an exponential function of 1/ε).

Proof. The claim holds by properties 2 and 3 of Lemma 16. □

Let β̄_i be the type of the largest bin of the i-th subsequence of bins, where 1 = b_{β̄_1} > b_{β̄_2} > ··· > b_{β̄_p} > b_{β̄_{p+1}} = 0, and where the number of subsequences of bin types is denoted by p. We partition the input items into disjoint subsets in the following way. The items of size in the interval (b_{β̄_{i+1}}, b_{β̄_i}] belong to the i-th subset of input items.

An item of a subset i is called delayed if it is packed in a bin of subsequence i′ for i′ < i. An item j of a subset i is called tiny if its size is at most ε · b_{β̄_{i+1}−1}, and otherwise it is non-tiny. An item j that is both tiny and delayed is called a tiny delayed item, and an item j that is both non-tiny and delayed is called a non-tiny delayed item. We will compare the cost of the solution to a (super-)optimal solution that allows items that are packed as tiny items to be packed fractionally, but an item must be either packed completely (and integrally) as a non-tiny item, or all its parts are packed as tiny. We fix such a super-optimal solution sol. For i = 2, 3, ..., p, denote by ξ_i an upper bound on the total size of items that belong to the subsets of indices i, i + 1, ..., p of the input and are packed in sol in bins of subsequences 1, 2, ..., i − 1 (as tiny items). Given the values of ξ_i for all i, we will be able to decompose the problem into (almost) independent problems (one problem for each subsequence of bins), where the relation between the different problems is characterized only by the values of ξ_i (for 2 ≤ i ≤ p). We will enumerate (approximately) all possibilities for choosing the best values of ξ_i (for 2 ≤ i ≤ p), and thus the following lemma is essential in restricting our search space.

Lemma 18. For 2 ≤ i ≤ p, without loss of generality, we can assume that the value of ξ_i is divisible by b_{β̄_i−1}, while we do not decrease the total size allocated for packing tiny items of subset j of the input (for 2 ≤ j ≤ p).

Proof. Assume that ξ_i is not divisible by b_{β̄_i−1}. We round the value of ξ_i up to the next number that is divisible by b_{β̄_i−1} (by adding dummy items). We define a near-optimal solution that uses an additional bin of size b_{β̄_i−1} to pack the dummy items that are introduced due to this rounding for the given value of i. Applying this transformation increases the cost of the super-optimal solution by at most ∑_{i=2}^{p} c_{β̄_i−1} ≤ ∑_{i=1}^{r} c_i, which is a constant (by Corollary 4). Therefore, without loss of generality, we can restrict ourselves to solutions which for every i use a value of ξ_i that is divisible by b_{β̄_i−1}. □

Next, we bound the total size of non-tiny delayed items of a given subset.

Lemma 19. For 2 ≤ i ≤ p, without loss of generality, we can assume that the total size of non-tiny delayed items of subset i is at most 1 − ε times the total size of non-tiny items of subset i.

Proof. Fix an optimal solution; we increase its cost by a factor of 1 + 8ε (and some constant additive term to be bounded later), and using the additional cost of 8ε · opt, we assign an extra cost to each bin and distribute this cost between the items packed into this bin proportionally to their size. The extra cost assigned to a bin whose cost is c would be 8εc.

That is, if an item j of size s_j is assigned to a bin of cost c, size b, and density α = c/b, whose total size of assigned items is s ≤ b, then the extra cost that we give to item j is 8εc · s_j/s ≥ 8εα · s_j. Clearly the total extra cost given to all items is 8ε times the original cost of the optimal solution. Next, for every subset i whose total size of non-tiny delayed items is larger than 1 − ε times the total size of non-tiny items of subset i, we remove some non-tiny delayed items from their bins and repack them into bins of size b_{β̄_i} using Next Fit. We remove and repack items one by one, until the first time in which the total size of (remaining) non-tiny delayed items is at most 1 − ε times the total size of non-tiny items of subset i. As a result, the requirement on the maximum fraction of non-tiny delayed items of each subset will hold.

To compute the cost of the new bins, for each subset i, we neglect the last bin of size b_{β̄_i} that contains the last moved item (and possibly other items), and one additional bin, if the remaining number of new bins of size b_{β̄_i} is odd.

The last (at most) two bins of all types β̄_i have a total cost of at most 2 + 2/ε by Corollary 4. We neglect this additive term of the cost, and consider the remaining new bins, called new bins that were not neglected. To prove the claim, it suffices to show that the total cost of the new bins that were not neglected is at most the total extra cost of the items. For a fixed value of i, the total size of the new bins that were not neglected is at most twice the total size of the repacked items excluding the last repacked item (since at least one new bin was neglected, there is an even number of non-neglected bins, and every pair of non-neglected bins has contents exceeding the size of one bin). Let Δ_i denote the total size of non-tiny items of subset i, and let Δ′_i denote the original total size of non-tiny delayed items of subset i. According to the assumption on i, we have Δ′_i > (1 − ε)Δ_i. The total size of the items that we repack into new bins that were not neglected is at most ε times the total size of non-tiny items of subset i (since the last repacked item is in a new bin that was neglected), that is, at most εΔ_i < εΔ′_i/(1 − ε) < 2εΔ′_i, as ε < 1/2. Since every non-tiny delayed item is originally packed into a bin of density at least α_{β̄_i}/2 (by Lemma 7), the total extra cost (defined above) of the non-tiny delayed items of subset i is at least 4εα_{β̄_i}Δ′_i. The total size of the new bins of type β̄_i that were not neglected is at most 2εΔ_i = 2ε(Δ_i/b_{β̄_i}) · b_{β̄_i}, so their cost is at most 2ε(Δ_i/b_{β̄_i}) · c_{β̄_i} = 2εΔ_i · α_{β̄_i} < 4εΔ′_i · α_{β̄_i}. Thus the cost of the new bins that were not neglected does not exceed the extra cost which was distributed among the items. □



polynomial function of 1ε . Since for γ + 1 ≤ i ≤ γ +ψ(ε ) − 1 we have b i ≥ εb i −1 , we conclude that b ≤ ( ε1 )ψ(ε) = φ(ε ). γ +ψ(ε )−1 In addition, we assume bγ +ψ(ε) < εbγ +ψ(ε)−1 and bγ < εbγ −1 , if γ > 0. The input defines a subset of items U ⊆ I . The set of items U consists of all items of size in (εbγ +ψ(ε)−1 , bγ ]. In addition, the input has a parameter G in ≥ 0 encoding the total size of “sand” that is a collection of arbitrary divisible items that can be packed fractionally, as well as a budget G out ≥ 0. In the sand we do not keep the identity of the items and some of them may be packed fractionally or partially (but all these items are tiny). Informally, the goal is to pack all items as well as a total size of G in of sand except for a total size of at most G out of items (and sand) into bins of minimum total cost. To enable us to define the goal more precisely, we recall that item j ∈ I is tiny if s j ≤ εbγ +ψ(ε)−1 , and otherwise it is regular (or non-tiny). We assume that there is a set of tiny items of total size G in that we are required to pack together with the set of items U into bins of types γ , γ + 1, . . . , γ + ψ(ε ) − 1 and allow a total size of at most G out to remain unpacked so that the total cost of the bins that we use will be minimized. As a first step, we will allow a fractional packing of the tiny items (while the regular items that we choose to pack must be packed integrally). Let  ⊂ U be the set of items regular  that we decide to pack (into bins of types γ , γ + 1, . . . , γ + ψ(ε ) − 1), then a solution is strong if it satisfies sj ≥ ε s j, j ∈

j ∈U

i.e., at least an ε fraction of the items of U (in terms of their total size) is selected into . There are no restrictions on the packing of the sand. Let opt B denote an optimal solution among strong solutions to this problem as well as its cost. We are interested in an algorithm whose running time is polynomial in n and ε1 , such that it finds a feasible (but not necessarily strong) solution of cost at most (1 + O (ε ))opt B + f (ε ) · c γ , where f is a function of ε (which does not depend on any other parameter). The algorithm in which we are interested should pack all packed items integrally (that is, both the items in U and the tiny items that participate in the sand). As a first step, we construct an algorithm that only packs items of U integrally, and later we show how to modify the packing so that the packed tiny items are packed integrally as well, without exceeding the budget G out . The cost would still be of the form (1 + O (ε ))opt B + f (ε ) · c γ . Our next step is to apply the so-called linear grouping technique for ψ(ε ) disjoint subsets of U that we define (which form a partition of U ). Each such subset is the intersection of U with the set of items whose sizes are in an interval (bi +1 , bi ] for i = γ , γ + 1, . . . , γ + ψ(ε ) − 1.


Linear grouping. We adapt the linear grouping method, proposed in [8], to our purposes. For $i=\gamma,\gamma+1,\ldots,\gamma+\psi(\varepsilon)-1$ we will partition the set of items $I_i=\{j\in U: b_{i+1}<s_j\le b_i\}$ into $\eta(\varepsilon)=\frac{\chi(\varepsilon)}{\varepsilon^3}$ subsets denoted as $I_i^1,\ldots,I_i^{\eta(\varepsilon)}$ in the following way. If $|I_i|<\eta(\varepsilon)$, then each of the sets $I_i^j$ for $j\ge2$ will contain at most one element and $I_i^1=\emptyset$. In this case, for every $j\in I_i$, we let the rounded-up size of $j$ be $s'_j=s_j$ (that is, no rounding is applied for such a set). Otherwise, we partition $I_i$ into $\eta(\varepsilon)$ subsets such that the following conditions hold: $|I_i^1|\ge|I_i^2|\ge\cdots\ge|I_i^{\eta(\varepsilon)}|\ge|I_i^1|-1$ (note that this condition defines the cardinality of each subset $I_i^j$ in a unique way), and for every $j=1,2,\ldots,\eta(\varepsilon)$, $I_i^j$ contains the largest $|I_i^j|$ items of $I_i\setminus\bigcup_{k=1}^{j-1}I_i^k$ (breaking ties arbitrarily). For every $j=2,3,\ldots,\eta(\varepsilon)$ and $k\in I_i^j$, we let $s'_k=\max_{p\in I_i^j}s_p$ be the rounded-up size of item $k$. Note that in both cases ($|I_i|<\eta(\varepsilon)$ and $|I_i|\ge\eta(\varepsilon)$), we have $|I_i^1|\le\frac{2}{\eta(\varepsilon)}\cdot|I_i|$, since in the second case $|I_i^1|=\left\lceil\frac{|I_i|}{\eta(\varepsilon)}\right\rceil\le\frac{|I_i|}{\eta(\varepsilon)}+1\le2\frac{|I_i|}{\eta(\varepsilon)}$ as $|I_i|\ge\eta(\varepsilon)$. The set of elements $\bigcup_{i=\gamma}^{\gamma+\psi(\varepsilon)-1}I_i^1$ is packed together into dedicated bins of size $b_\gamma$ using Next Fit.
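To make the grouping and the Next Fit packing above concrete, the following sketch (a minimal illustration, not the paper's implementation; the function names and the toy data are our own assumptions) partitions one class $I_i$ into $\eta(\varepsilon)$ groups of almost equal cardinality by non-increasing size, rounds the sizes in groups $2,\ldots,\eta(\varepsilon)$ up to the largest size of their group, and packs the first group into dedicated bins with Next Fit.

```python
def linear_grouping(sizes, eta):
    """Partition one class I_i into eta groups of (almost) equal cardinality,
    largest items first, and round sizes up within groups 2..eta.
    Returns (items_of_group_1, rounded_sizes_of_remaining_items)."""
    items = sorted(sizes, reverse=True)          # largest first
    if len(items) < eta:
        # degenerate case of the paper: I_i^1 is empty and no rounding is applied
        return [], items
    q, r = divmod(len(items), eta)               # r groups of size q+1, then groups of size q
    groups, start = [], 0
    for j in range(eta):
        size = q + 1 if j < r else q
        groups.append(items[start:start + size])
        start += size
    rounded = []
    for g in groups[1:]:
        rounded.extend([g[0]] * len(g))          # round every item up to its group's maximum
    return groups[0], rounded

def next_fit(sizes, bin_size):
    """Pack the given sizes into bins of capacity bin_size with Next Fit;
    returns the number of bins opened."""
    bins, load = 0, bin_size                     # forces opening a bin on the first item
    for s in sizes:
        if load + s > bin_size:
            bins += 1
            load = 0.0
        load += s
    return bins

# Toy illustration: one class with 10 items, eta = 4 groups,
# and the first group packed into dedicated bins of size b_gamma = 1.0.
group1, rounded_rest = linear_grouping([0.30, 0.29, 0.28, 0.27, 0.26,
                                        0.25, 0.24, 0.23, 0.22, 0.21], eta=4)
print(group1, rounded_rest, next_fit(group1, bin_size=1.0))
```

Running the toy example prints the first group, the rounded sizes of the remaining items, and the single dedicated bin that Next Fit opens for the first group.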

Lemma 20. The cost of packing the items of $\bigcup_{i=\gamma}^{\gamma+\psi(\varepsilon)-1}I_i^1$ into dedicated bins of size $b_\gamma$ is at most $4\varepsilon\cdot opt_B+c_\gamma$.

Proof. Let $\nabla$ denote the total size of the items in $U$ before linear grouping was applied. Since for every $i$, the ratio between the largest item in $I_i$ and the smallest item in $I_i$ is at most $\frac1\varepsilon$ (this holds with or without the rounding, due to the size ratio for $I_i$), and since $|I_i^1|\le\frac{2}{\eta(\varepsilon)}\cdot|I_i|$, we conclude that the total size of the items in $\bigcup_{i=\gamma}^{\gamma+\psi(\varepsilon)-1}I_i^1$ is at most $\frac{2}{\varepsilon\eta(\varepsilon)}\cdot\nabla$. These items are packed together into bins of density $\alpha_\gamma$ using Next Fit. Therefore, the total cost of these bins is at most $c_\gamma+\frac{4}{\varepsilon\eta(\varepsilon)}\cdot\nabla\cdot\alpha_\gamma$. Recall that $opt_B$ is an optimal solution among strong solutions, so the total size of the subset of $U$ of packed items is at least $\varepsilon$ times the total size of items of $U$ (which is $\nabla$), so the total size of such bins is at least $\varepsilon\nabla$. Moreover, by definition, every bin that can be used has density of at least $\frac{\alpha_\gamma}{\chi(\varepsilon)}$. Therefore, $opt_B\ge LB$, where $LB=\varepsilon\nabla\cdot\frac{\alpha_\gamma}{\chi(\varepsilon)}$. The claim follows since $\frac{4}{\varepsilon\eta(\varepsilon)}\cdot\nabla\cdot\alpha_\gamma=\frac{4\varepsilon^3}{\varepsilon\chi(\varepsilon)}\cdot\nabla\cdot\alpha_\gamma=4\varepsilon\cdot LB$. $\Box$

Let $U'$ be the set of non-sand items with their rounded sizes, excluding the set $\bigcup_{i=\gamma}^{\gamma+\psi(\varepsilon)-1}I_i^1$. The input where the non-sand items are the items of $U'$ is called the rounded input. We use the same values of $G_{in}$ and $G_{out}$ as before, that is, the values of $G_{in}$, $G_{out}$ remain unchanged in the rounded input which we consider. Specifically, $U'$ consists of the items in $U\setminus\bigcup_{i=\gamma}^{\gamma+\psi(\varepsilon)-1}I_i^1$, where the size of an item $j$ is $s'_j$. The following claim holds by standard arguments of linear grouping.

Claim 21. Consider instances of BGCP in which the total size of sand is $G_{in}$ and the budget is $G_{out}$. A feasible solution for the instance with the set $U$ of items can be modified to become a solution of at most the same cost for the input with the set of items $U'$. Moreover, every feasible solution for the instance with the set $U'$ of items implies a feasible solution to the input where the set of items is $U\setminus\bigcup_{i=\gamma}^{\gamma+\psi(\varepsilon)-1}I_i^1$ of at most the same cost.

Proof. In each part we will prove that, given a solution to an input where the set of items is $X$, the solution can be converted into a solution for an input where the set of items is $X'$. To do that we define an injection, so that every item of $X'$ of size $x'$ is mapped to an item of $X$ of size $x$, where $x\ge x'$. Given a solution for the first input, the packing of sand is unchanged; that is, the size of sand that is not packed remains as it was, and a bin with sand assigned to it keeps exactly the same amount of sand. For every item of $X$, if no item of $X'$ is mapped to it, then it is removed. Otherwise, if the item of $X$ is unpacked, then its corresponding item of $X'$ is unpacked. Thus the budget $G_{out}$ is not exceeded. If this item of $X$ is packed, then the corresponding item of $X'$ takes its place. The solution remains feasible due to the condition on the sizes. The first part of the claim follows by mapping items of a set $I_i^\ell$ to items of the set $I_i^{\ell-1}$, for $2\le\ell\le\eta(\varepsilon)$ and every $i$ such that $I_i^1\ne\emptyset$ (items of classes for which no rounding was applied are mapped to themselves). It is possible to define such an injection since $|I_i^{\ell-1}|\ge|I_i^\ell|$ for all $2\le\ell\le\eta(\varepsilon)$ and $\gamma\le i\le\gamma+\psi(\varepsilon)-1$ such that $I_i^1\ne\emptyset$, and since every rounded size of an item of $I_i^\ell$ is at most the original size of any item of $I_i^{\ell-1}$. The second part of the claim follows from mapping each item of $U\setminus\bigcup_{i=\gamma}^{\gamma+\psi(\varepsilon)-1}I_i^1$ with its original size to the same item with the rounded size. $\Box$

Thus, in what follows, we assume that our instance satisfies the following properties. There is a constant number (an exponential function of $\frac1\varepsilon$) of item sizes in each interval $(b_i,b_{i-1}]$ for all $i=\gamma+1,\ldots,\gamma+\psi(\varepsilon)$, and each of these items has size of at least $\varepsilon b_{\gamma+\psi(\varepsilon)-1}\ge\frac{\varepsilon}{\phi(\varepsilon)}b_\gamma$. We denote the set of sizes in the set $U'$ by $H$, and note that $|H|\le\psi(\varepsilon)\cdot\eta(\varepsilon)$, which is a constant depending on $\frac1\varepsilon$ (it is an exponential function of $\frac1\varepsilon$), and additionally $|H|\le n$. We let $\sigma_1>\sigma_2>\cdots>\sigma_{|H|}$ be the distinct values in $H$.

Bin configuration. A configuration of a bin represents a possible packing of a subset of items of U  into a bin. It is a (| H | + 1)-tuple, where the i-th component, for 1 ≤ i ≤ | H |, states the number of items of size σi that are packed into this


bin, and the $(|H|+1)$-th component is the index of the bin type. For a configuration $C$, we denote the size of a bin that is specified by the configuration by $b(C)$, and we let $n_i(C)$ be the number of items of size $\sigma_i$ that are packed into a bin that is packed according to configuration $C$. We denote the cost of one bin of size $b(C)$ by $cost(C)$ (that is, $b(C)=b_i$ for some $i$, and in this case $cost(C)=c_i$). A bin that is packed according to configuration $C$ leaves an empty space for packing sand, and we let $e(C)=b(C)-\sum_{i=1}^{|H|}n_i(C)\cdot\sigma_i$ be the size of this empty space. A configuration $C$ is feasible if $\sum_{i=1}^{|H|}n_i(C)\cdot\sigma_i\le b(C)$ (or equivalently $e(C)\ge0$). We denote the set of all feasible configurations by $\mathcal{C}$. Note that the standard upper bound on $|\mathcal{C}|$ is (at least) an exponential function of $\frac1\varepsilon$, which makes it impossible to enumerate $\mathcal{C}$ in polynomial time, and thus we will not use any bound on $|\mathcal{C}|$.

Constructing the linear program. For $i=1,2,\ldots,|H|$ we denote the number of items of size $\sigma_i$ in $U'$ by $n_i$. We will have a decision variable $x_C$ for every configuration $C$, corresponding to the number of bins that are packed according to configuration $C$, a variable $y_i$ for every $1\le i\le|H|$, determining the number of items of size $\sigma_i$ that are unpacked in our solution, and a variable $v$ corresponding to the total size of sand that is unpacked in the solution. Recall that $G_{in}$ is the total size of sand. We consider the following linear program (though the algorithm does not construct it explicitly).



$$\min \; \sum_{C\in\mathcal{C}} cost(C)\,x_C$$
$$\text{s.t.}\quad \sum_{C\in\mathcal{C}} n_i(C)\,x_C + y_i \ge n_i \qquad \forall\, i=1,2,\ldots,|H| \qquad (2)$$
$$\sum_{C\in\mathcal{C}} e(C)\,x_C + v \ge G_{in} \qquad (3)$$
$$\sum_{i=1}^{|H|} \sigma_i y_i + v \le G_{out} \qquad (4)$$
$$x_C\ge0 \quad \forall\, C\in\mathcal{C}, \qquad y_i\ge0 \quad \forall\, i=1,2,\ldots,|H|, \qquad v\ge0.$$

In this linear program, the constraints (2) ensure that the value of $y_i$ is the number of items of size $\sigma_i$ that are unpacked, constraint (3) ensures that the value of $v$ is the total size of sand that is unpacked, and constraint (4) ensures that the total size of unpacked items and unpacked sand is at most $G_{out}$. Note that the number of constraints in this linear program is $|H|+2$ (besides the non-negativity constraints), which is a polynomial function of $n$ (using $|H|\le n$). This linear program has an exponential number of variables (exponential as a function of $\frac1\varepsilon$), and hence it is not constructed explicitly by the algorithm. However, we will be able to solve it approximately within a factor of $1+\varepsilon$. Denote an approximate (within a factor of $1+\varepsilon$) basic solution to this linear program by $(x^*,y^*,v^*)$, and let $\tilde{x}_C=\lceil x^*_C\rceil$ for all $C$. Our scheme returns a solution that packs $\tilde{x}_C$ bins with configuration $C$. Each item of $U'$ is later replaced by the corresponding item of $U$.

The column generation technique. To solve the above linear program approximately, we invoke the column generation technique of Karmarkar and Karp [21]. Next, we elaborate on this technique. The linear program may have an exponential number of variables (or an even larger number of variables), but it has a polynomial number of constraints (neglecting the non-negativity constraints), since $|H|\le n$. Instead of solving the linear program, we solve its dual program (that has a polynomial number of variables but possibly an exponential number of constraints). Thus, we consider the following dual linear program (this is the dual program of the linear program that results from multiplying the constraint (4) by $-1$).

$$\max \; \sum_{i=1}^{|H|} n_i\lambda_i + G_{in}\pi_1 - G_{out}\pi_2$$
$$\text{s.t.}\quad \sum_{i=1}^{|H|} n_i(C)\lambda_i + e(C)\pi_1 \le cost(C) \qquad \forall\, C\in\mathcal{C} \qquad (5)$$
$$\lambda_i - \sigma_i\pi_2 \le 0 \qquad \forall\, i=1,2,\ldots,|H| \qquad (6)$$
$$\pi_1 - \pi_2 \le 0 \qquad (7)$$
$$\lambda_i\ge0 \quad \forall\, i=1,2,\ldots,|H|, \qquad \pi_1,\pi_2\ge0.$$

The dual variable $\lambda_i$ corresponds to the $i$-th constraint of (2), and $\pi_1,\pi_2$ correspond to constraints (3) and (4), respectively. The dual constraints (5) correspond to the primal variables $x_C$, the dual constraints (6) correspond to the primal variables $y_i$, and the dual constraint (7) corresponds to the primal variable $v$.
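To illustrate what a single column of the primal program carries, the following brute-force sketch enumerates all feasible configurations of a toy rounded instance and reports, for each one, the vector $(n_1(C),\ldots,n_{|H|}(C))$, the bin type, the empty space $e(C)$ and the cost $cost(C)$. The exhaustive enumeration is only for tiny illustrative inputs; as noted above, the scheme never enumerates $\mathcal{C}$ explicitly, and all names and the toy data here are our own assumptions.

```python
from itertools import product

def feasible_configurations(sigma, n_max, bin_types):
    """Enumerate all feasible configurations of a tiny rounded instance.
    sigma: distinct rounded sizes sigma_1 > ... > sigma_|H|
    n_max: n_i, the number of available items of each size (caps the counts)
    bin_types: list of (size, cost) pairs
    Yields (counts, bin_index, empty_space, cost), i.e. the data
    (n_i(C), bin type, e(C), cost(C)) that one LP column carries."""
    for t, (b, c) in enumerate(bin_types):
        ranges = [range(min(n, int(b // s)) + 1) for s, n in zip(sigma, n_max)]
        for counts in product(*ranges):
            used = sum(k * s for k, s in zip(counts, sigma))
            if used <= b + 1e-12:                 # feasibility: the items fit in the bin
                yield counts, t, b - used, c

# Toy instance: two rounded sizes, three items of each, two bin types.
sigma = [0.6, 0.3]
n_max = [3, 3]
bins = [(1.0, 1.0), (0.5, 0.4)]
for counts, t, empty, cost in feasible_configurations(sigma, n_max, bins):
    print(counts, "bin type", t, "empty space %.2f" % empty, "cost", cost)
```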


Therefore, to be able to apply the ellipsoid algorithm in order to solve the above dual problem within a factor of $1+\varepsilon$, it suffices to show that there exists a polynomial time algorithm (polynomial in $n$ and $\frac1\varepsilon$) that, for a given solution $a^*=(\lambda^*,\pi^*)$, which is a vector of length $|H|+2$, decides whether $a^*$ is close enough to a feasible dual solution. First, note that the dual constraints (6) and (7), as well as the non-negativity constraints, can be checked easily in polynomial time. Thus, it suffices to decide if $a^*$ satisfies all the constraints (5) approximately. More precisely, it should either provide a configuration $C\in\mathcal{C}$ such that $\sum_{i=1}^{|H|}n_i(C)\lambda_i^*+e(C)\pi_1^*>cost(C)$, or output that such an approximate infeasibility evidence does not exist, that is, for all configurations $C\in\mathcal{C}$, $\sum_{i=1}^{|H|}n_i(C)\lambda_i^*+e(C)\pi_1^*\le(1+\varepsilon)cost(C)$ holds. In such a case, $\frac{a^*}{1+\varepsilon}$ satisfies all the constraints (5) in the dual program and it also satisfies the other dual constraints. Such a polynomial time algorithm is called an approximated separation oracle.

Approximated separation oracle for the dual linear program. We rewrite the dual constraint and consider the following one: $\sum_{i=1}^{|H|}n_i(C)\lambda_i^*+b(C)\pi_1^*-u(C)\pi_1^*\le cost(C)$, where $u(C)=\sum_{i=1}^{|H|}\sigma_i\cdot n_i(C)$ is the total space of configuration $C$ that is occupied by items. We further change the constraint into the following form: $\sum_{i=1}^{|H|}n_i(C)\lambda_i^*-u(C)\pi_1^*\le cost(C)-b(C)\pi_1^*$. Our approximated separation oracle will either result in a configuration $C$ such that $\sum_{i=1}^{|H|}n_i(C)\lambda_i^*-u(C)\pi_1^*>cost(C)-b(C)\pi_1^*$, or will establish that for every configuration $C$ the following holds: $\sum_{i=1}^{|H|}n_i(C)\lambda_i^*-u(C)\pi_1^*\le(1+\varepsilon)(cost(C)-b(C)\pi_1^*)$. Note that in the last case we obtain that $\sum_{i=1}^{|H|}n_i(C)\lambda_i^*+e(C)\pi_1^*\le(1+\varepsilon)cost(C)$ as well, and hence such an algorithm is indeed an approximated separation oracle.

We define a separation oracle. The separation oracle has a fixed value of $a^*$, and it applies the following procedure for each possibility of $b(C)$ (i.e., for every bin type $b_\ell$ for $\gamma\le\ell\le\gamma+\psi(\varepsilon)-1$). For a given choice of a bin type $\ell$, the value $cost(C)-b(C)\pi_1^*=c_\ell-b_\ell\pi_1^*$ is constant. The number of possibilities for this value is polynomial in $\frac1\varepsilon$ (it is equal to the number of bin types). Given a value $b_\ell$ (which is a bin size), the goal is to find a multiset of items (a subset of $U'$) of total size at most $b_\ell$ so as to maximize the total value, where an item whose size is $\sigma_i$ has value $\lambda_i^*-\sigma_i\cdot\pi_1^*$. The number of items of size $\sigma_i$ is at most $n_i$. This problem is an instance of the knapsack problem, which has an FPTAS [26]. We apply an FPTAS for this problem. As an output of this FPTAS we obtain a configuration $C$ whose multiset of items has total size at most $b_\ell$. Let the value of $C$ be $val(C)$. If $val(C)\le c_\ell-b_\ell\pi_1^*$, then an optimal solution to the instance of the knapsack problem has total value at most $(1+\varepsilon)val(C)\le(1+\varepsilon)(c_\ell-b_\ell\pi_1^*)$. This means that for every configuration $C'$ corresponding to the bin size $b_\ell$, its value is at most $(1+\varepsilon)(cost(C')-b(C')\pi_1^*)$, so constraint (5) is satisfied by the solution $\frac{a^*}{1+\varepsilon}$ (for configurations corresponding to this bin type). Otherwise, we found a configuration $C$ in which the total value of the items, $\sum_{i=1}^{|H|}n_i(C)\lambda_i^*-u(C)\pi_1^*$, is larger than $c_\ell-b_\ell\pi_1^*$, so we got a configuration whose dual constraint is not satisfied. If such a configuration was not found for any bin size $b_\ell$, then all possible configurations satisfy constraint (5) approximately. The time complexity of the algorithm is $\psi(\varepsilon)$ times the time complexity of the FPTAS for the knapsack problem. Note that the input for the knapsack problem consists of at most $n$ items, and thus the FPTAS for the knapsack problem runs in polynomial time, so our algorithm is indeed a polynomial time approximated separation oracle.

Bounding the cost of $(x^*,y^*,v^*)$. Consider the optimal strong solution $opt_B$ of the input consisting of the item set $U$ together with the values $G_{in}$ and $G_{out}$. By Claim 21, there exists a solution of the same cost for the items $U'$ (with the same values of $G_{in}$ and $G_{out}$). This last solution for the set $U'$ of items induces a feasible solution to the primal linear program of the same cost. Therefore, since $(x^*,y^*,v^*)$ is a $(1+\varepsilon)$-approximated solution to the primal linear program, we conclude that its cost is at most $1+\varepsilon$ times $opt_B$ (since $opt_B$ cannot be smaller than the optimal cost without the restrictions under which $opt_B$ is defined).

Rounding the primal solution. Given the $(1+\varepsilon)$-approximated solution to the primal linear program, we find a basic feasible solution to this linear program that is not worse than the approximated solution we obtained (in terms of objective function value). Hence, without loss of generality, we assume that $(x^*,y^*,v^*)$ is a basic feasible solution that is a $(1+\varepsilon)$-approximated solution. We note that in the primal linear program there are at most $|H|+2$ linearly independent inequality constraints (in addition to the non-negativity constraints), and therefore in a basic solution such as $(x^*,y^*,v^*)$ there are at most $|H|+2$ basic variables. Since all non-basic variables are set to zero, we conclude that the number of fractional components in $(x^*,y^*,v^*)$ is at most $|H|+2$. Note that by rounding up all the components of $x^*$, the cost of the solution increases by at most $(|H|+2)\cdot c_\gamma$. Therefore, since we define $\tilde{x}_C=\lceil x_C^*\rceil$ for all $C$, we get $\sum_{C\in\mathcal{C}}cost(C)\tilde{x}_C\le\sum_{C\in\mathcal{C}}cost(C)x_C^*+(|H|+2)c_\gamma\le(1+\varepsilon)opt_B+(|H|+2)c_\gamma$. Recall that $|H|$ is bounded by a function of $\varepsilon$. To get an actual solution for the input consisting of the item set $U'$ with the values $G_{in}$ and $G_{out}$, we pack the items of $U'$ according to the solution $\tilde{x}$ (note that $\tilde{x}$ defines some of these items as unpacked, and these items will not be packed in our solution). The sand is packed fractionally into the empty space in the packed bins. Note also that the "empty pattern" where no items are packed is a feasible pattern, in which case the entire bin is ready to receive sand. The remaining sand is not packed. By Claim 21, this solution can be converted into a solution for $U\setminus\bigcup_{i=\gamma}^{\gamma+\psi(\varepsilon)-1}I_i^1$ of the same cost, without changing the total size of packed sand, or the budget. The cost for packing the items of $\bigcup_{i=\gamma}^{\gamma+\psi(\varepsilon)-1}I_i^1$ is at most $4\varepsilon\,opt_B+c_\gamma$. Therefore, we get a packing of cost that does not exceed $(1+5\varepsilon)opt_B+(|H|+3)c_\gamma$. Next, we show an additional step in which we modify this solution so that all items (including the tiny items forming the sand) are packed integrally.
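Returning to the separation step used within the column generation above, the following sketch illustrates it: for each bin type it searches for a configuration whose value $\sum_i n_i(C)\lambda_i^*-u(C)\pi_1^*$ exceeds $c_\ell-b_\ell\pi_1^*$. For brevity the bounded knapsack is solved by exhaustive search, which stands in for the knapsack FPTAS of [26] (so the $(1+\varepsilon)$ slack of the approximated oracle does not appear here); all names and the toy data are illustrative assumptions, not the paper's code.

```python
from itertools import product

def find_violated_configuration(sigma, n_max, bin_types, lam, pi1):
    """Separation for constraints (5): for each bin type, look for a configuration
    whose item value sum(n_i * (lam_i - sigma_i * pi1)) exceeds cost - size * pi1.
    Exhaustive search stands in for the knapsack FPTAS; all names are illustrative."""
    values = [l - s * pi1 for l, s in zip(lam, sigma)]   # value of one item of each size
    for t, (b, c) in enumerate(bin_types):
        threshold = c - b * pi1
        best, best_counts = 0.0, tuple(0 for _ in sigma)
        ranges = [range(min(n, int(b // s)) + 1) for s, n in zip(sigma, n_max)]
        for counts in product(*ranges):                  # brute-force bounded knapsack
            if sum(k * s for k, s in zip(counts, sigma)) > b + 1e-12:
                continue
            val = sum(k * v for k, v in zip(counts, values))
            if val > best:
                best, best_counts = val, counts
        if best > threshold:                             # violated dual constraint found
            return best_counts, t
    return None                                          # all constraints (5) are satisfied

# Usage with the tiny instance above; lam and pi1 are dual values from the ellipsoid method.
print(find_violated_configuration([0.6, 0.3], [3, 3], [(1.0, 1.0), (0.5, 0.4)],
                                  lam=[0.9, 0.4], pi1=0.1))
```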


Constructing an output without fractional packing of tiny items. We assume that we are given a set $M$ of tiny items of total size $G_{in}$, where the size of each such item is at most $\varepsilon b_{\gamma+\psi(\varepsilon)-1}$, instead of the sand of size $G_{in}$. Recall that $v^*$ is the total size of sand that was not packed, and $G_{in}-v^*$ is the total size of packed sand. Remove all packed sand. We start packing the items of $M$ integrally into the spaces that the sand previously occupied, called gaps. The first step of the packing is done using Next Fit on the gaps in the existing bins. If all items of $M$ are packed into the gaps in the packed bins, then we are done. Otherwise, Next Fit packs items of $M$ into a bin packed by a configuration $C$ until packing an item would result in exceeding the size $b(C)$. Thus, a bin with configuration $C$ is packed with items of total size at least $b(C)-\varepsilon b_{\gamma+\psi(\varepsilon)-1}$, while previously the total size of items and sand in it was at most $b(C)$. Therefore, the total size of remaining tiny items at the end of the first step is at most $v^*+\varepsilon b_{\gamma+\psi(\varepsilon)-1}\cdot\sum_{C\in\mathcal{C}}\tilde{x}_C$. In the second step of the packing, we pack additional items of $M$ using Next Fit into new bins of size $b_{\gamma+\psi(\varepsilon)-1}$, until the total size of the unpacked tiny items of $M$ is at most $v^*$ for the first time (the total size packed in this step may exceed the amount needed by at most $\varepsilon b_{\gamma+\psi(\varepsilon)-1}$). The total size of items which are packed in the second step is at most $\varepsilon b_{\gamma+\psi(\varepsilon)-1}\cdot\sum_{C\in\mathcal{C}}\tilde{x}_C+\varepsilon b_{\gamma+\psi(\varepsilon)-1}$ (where the last term bounds the possible excess), and the total size of the unpacked items of $M$ is in $(v^*-\varepsilon b_{\gamma+\psi(\varepsilon)-1},v^*]$. The number of new bins is at most $2\varepsilon\sum_{C\in\mathcal{C}}\tilde{x}_C+2\varepsilon+1\le2\varepsilon\sum_{C\in\mathcal{C}}\tilde{x}_C+2$, and the cost of each such bin is the minimum cost of any bin that can be used. Thus the cost of the solution increases by at most a multiplicative factor of $1+2\varepsilon$ and an additional additive cost of $2c_{\gamma+\psi(\varepsilon)-1}$. The resulting solution is a feasible integral solution for $U\cup M$ with the bound of $G_{out}$ on the total size of unpacked items (out of $U\cup M$). We get an integral solution of cost at most $(1+2\varepsilon)((1+5\varepsilon)opt_B+(|H|+3)c_\gamma)+2c_{\gamma+\psi(\varepsilon)-1}$. Using $\varepsilon<\frac{1}{10}$ and $c_{\gamma+\psi(\varepsilon)-1}\le c_\gamma$, this cost does not exceed $(1+8\varepsilon)opt_B+(2|H|+6)c_\gamma$. Therefore, we have established the following theorem.

Theorem 22. There is a polynomial time algorithm that returns a feasible solution for BGCP (with integral packing of all the packed items) of cost at most $(1+8\varepsilon)opt_B+(2|H|+6)c_\gamma$.

4. Approximating GCVS – combining the pieces

Recall that it suffices to approximate an instance of GCVS that satisfies the conditions in Lemma 16, for which the values of $\xi_i$ are divisible by $b_{\bar{\beta}_i-1}$. Since every item in the subsets of indices $i,i+1,\ldots,p$ has size of at most $b_{\bar{\beta}_i}<\varepsilon b_{\bar{\beta}_i-1}$, the value of $\xi_i$ is at most $n\varepsilon b_{\bar{\beta}_i-1}$, and thus there are at most $\lfloor n\varepsilon\rfloor+1\le n\varepsilon+1$ possible values for each $\xi_i$.

We note that given the values of $\xi_i$ and $\xi_{i+1}$, we obtain an instance of BGCP for packing the (non-delayed) items of subset $i$ using $G_{in}=\xi_{i+1}+\sum_{j:\,b_{\bar{\beta}_{i+1}}<s_j\le\varepsilon b_{\bar{\beta}_{i+1}-1}}s_j$ and $G_{out}=\xi_i$. For this sub-problem we invoke the asymptotic scheme of the previous section, and we combine the solutions (that is, we choose the values of $\xi_i$ for $i=1,2,\ldots,p$) so that the total cost of the resulting solution will be minimized. This is achieved using dynamic programming that we formulate as a shortest path computation in a layered graph as follows.

We define a directed layered graph $G=(V_1\cup\ldots\cup V_{p+1},E)$ whose first layer $V_1$ consists of a single vertex $t$, and whose $i$-th layer $V_i$ (for $i\ge2$) consists of $N(\varepsilon)=\lfloor n\varepsilon\rfloor+1$ vertices. The vertices of $V_i$ are denoted as $v_i^{(0)},v_i^{(1)},\ldots,v_i^{(N(\varepsilon)-1)}$, and with each such vertex $v_i^{(j)}$ we associate a value $a_i^{(j)}$ for $\xi_i$. As there are at most $N(\varepsilon)$ possible values of $\xi_i$, we ensure that for each such value $\xi$ of $\xi_i$ there will be at least one index $j$ such that $a_i^{(j)}=\xi$. Recall that $\xi_{p+1}$ has only one possible value, which is zero, and therefore $V_{p+1}$ will consist of a single vertex $s=v_{p+1}^{(0)}$ corresponding to this value of $\xi_{p+1}$ (that is, $a_{p+1}^{(0)}=0$). Moreover, $\xi_1$ is zero as well, and therefore $V_1$ consists of a single vertex $t=v_1^{(0)}$ corresponding to this value of $\xi_1$ (that is, $a_1^{(0)}=0$). For each $i,j,k$ such that $v_i^{(j)}$ and $v_{i+1}^{(k)}$ are vertices in $G$, we have an arc from $v_{i+1}^{(k)}$ to $v_i^{(j)}$ whose cost is the cost of the solution returned by the scheme for BGCP which we presented in the previous section, when the input consists of the non-tiny items of subset $i$, and the values of $\xi_i$ and $\xi_{i+1}$ are fixed to $a_i^{(j)}$ and $a_{i+1}^{(k)}$, respectively, thus determining the values of $G_{in}$ and $G_{out}$ as described above. Note that we apply the scheme of the previous section at most $p\cdot(N(\varepsilon))^2\le rn^2$ times. Therefore, the running time of the algorithm for computing the graph $G$ with the costs of its edges is polynomial.

Next, we compute a minimum (total) cost path from $s$ to $t$ in $G$. Again, this path is computed in polynomial time since the number of vertices in the graph is at most $pN(\varepsilon)\le rn$. Given this path $\pi$, we compute the packing of the items accordingly. That is, we pack the items of the $i$-th subset (which are packed as regular items) according to the solution of the scheme for BGCP as computed for the arc along $\pi$ which connects a vertex of $V_{i+1}$ to a vertex of $V_i$. Moreover, consider a path $\pi$ and the corresponding subproblems. Assume that $\pi$ traverses the vertex $v_i^{(j)}$. Given the value $a_i^{(j)}$, the total size of items of subsets $i,i+1,\ldots,p$ that are unpacked by the solution of these subproblems is at most $a_i^{(j)}$. Since the path ends at the vertex $t$, where we have $\xi_1=0$, the solution packs all the items, and its cost is equal to the cost of the path.
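The following sketch shows this combination step as a dynamic program over the layered graph, under the assumption that a routine bgcp_cost(i, xi_i, xi_next), returning the cost computed by the BGCP scheme for subset i with the given budget values, is available. This routine, all names, and the toy cost function in the usage line are hypothetical stand-ins for illustration only.

```python
def combine_subproblems(xi_values, bgcp_cost):
    """Minimum-cost s-t path in the layered graph, computed by dynamic programming.
    xi_values[i-1] is the list of candidate values of xi_i for subsets 1..p
    (xi_1 and xi_{p+1} are 0); bgcp_cost(i, xi_i, xi_next) is a hypothetical
    stand-in for the cost returned by the BGCP scheme for subset i.
    Returns the minimum total cost and the chosen xi values."""
    p = len(xi_values)
    # best[(i, j)] = cheapest way to handle subsets i..p when xi_i = xi_values[i-1][j]
    best = {(p + 1, 0): (0.0, [])}
    for i in range(p, 0, -1):
        for j, xi in enumerate(xi_values[i - 1]):
            options = []
            nxt = [0.0] if i == p else xi_values[i]      # candidates for xi_{i+1}
            for k, xi_next in enumerate(nxt):
                tail_cost, tail = best[(i + 1, k)]
                options.append((bgcp_cost(i, xi, xi_next) + tail_cost, [xi] + tail))
            best[(i, j)] = min(options, key=lambda o: o[0])
    return best[(1, 0)]                                  # xi_1 has the single candidate 0

# Toy usage: 3 subsets and a fabricated cost oracle (for illustration only).
cand = [[0.0], [0.0, 0.1, 0.2], [0.0, 0.1]]
print(combine_subproblems(cand, lambda i, a, b: 1.0 + a - 0.5 * b))
```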

Lemma 23. Given an optimal solution to the input (after all reductions of Section 2), it is possible to partition its bins into the subsequences of bin types, such that for any $i$, the bins of subsequence $i$ form a strong solution.


Proof. First, we split the bins of the solution into subsequences such that a bin of size $b_j$ is allocated to the subsequence that has $j$ as a possible bin type. Obviously, each bin of the solution is allocated exactly once. Next, by Lemma 19, at least an $\varepsilon$ fraction of the total size of non-tiny items of subset $i$ is packed into bins of the solution that were allocated to the subsequence $i$. That is, the bins of subsequence $i$ form a strong solution for subset $i$. $\Box$

Now, consider an optimal solution that does not use the layer x bins. By Lemma 23, the partial solution that results from the optimal solution when it is restricted to the $i$th subproblem is strong. We have developed an algorithm for the subproblem, and applying it with the correct values of $\xi_i$ (for all $i$) will result in an AFPTAS, due to the following. For each $i$, the value of $\xi_i$ appears as one of the values associated with a vertex in $V_i$, using Lemma 18. Therefore, we can find a path corresponding to the values of $\xi_i$ (for all $i$) of the optimal strong solution. Note that the cost of the optimal strong solution is partitioned among the subproblems such that a bin of cost $c$ that is used by an optimal strong solution is associated with the subproblem $i$ that has this bin type (of cost $c$). By Theorem 22, we conclude that the cost of the resulting solution for the path associated with the optimal strong solution is at most $(1+8\varepsilon)opt_B+(2|H|+6)\sum_{\gamma=1}^{r}c_\gamma\le(1+8\varepsilon)opt_B+(2\psi(\varepsilon)\eta(\varepsilon)+6)(1+\frac1\varepsilon)$. Since the cost of a minimum cost path does not exceed the cost of the path associated with the optimal strong solution that we consider, we conclude that the cost of the resulting solution is at most $(1+8\varepsilon)opt_B+(2\psi(\varepsilon)\eta(\varepsilon)+6)(1+\frac1\varepsilon)$. Therefore, we have established the following main result.

Theorem 24. Problem GCVS has an AFPTAS.

5. Reducing BPUC to GCVS and obtaining an AFPTAS for BPUC

In this section, we consider BPUC. Given an instance of BPUC, we will show how to define a suitable set of bin types, such that an AFPTAS for BPUC can be obtained as follows. First, the list of bin types is constructed (based on the function $f$), and then items are assigned to bins by the AFPTAS for GCVS. The resulting subsets that were packed into "bins" are actually packed into unit bins (no matter which "bins" they are assigned to by the AFPTAS for GCVS), and this is the output for BPUC. Before we proceed, we will discuss our assumptions on $f$. We make use of two approximate oracles for the function $f$.

Definition 25. An oracle for approximate evaluation of $f$ is defined as follows. Given a value $b\in[0,1]$, we can evaluate a value $\hat{f}(b)$, the approximated value of $f(b)$, such that $f(b)\le\hat{f}(b)\le(1+\varepsilon)\cdot f(b)+\frac{1}{4n}$, and the representation of $\hat{f}(b)$ is polynomial in the input size (for example, $\hat{f}(b)=\frac{\lceil 4nf(b)\rceil}{4n}$ is a feasible function $\hat{f}$). This evaluation can be done in time $O(1)$.

Note that such an approximate evaluation of $f$ is essential to make any queries on the value of $f(b)$ for some $b$, as (for example) we can have values of $b$ for which $f(b)$ is irrational, and thus the output of an exact evaluation would have infinite length. In the case where $f$ is given by a computable function, this computable function clearly satisfies the conditions of the oracle. Moreover, in a model for which the oracle computes the exact value of $f$ or an approximated one in time $O(1)$, but the algorithm must read the bits of its output one by one (and thus its time complexity depends on the number of bits that it reads), we can emulate $\hat{f}$ by reading the $O(\log n)$ most significant bits and rounding up the value of this prefix to the next integer multiple of $\frac{1}{4n}$. In order to define our second oracle, we assume that there is an integer $k$ such that every item $j$ in the input has a size that is divisible by $\frac1k$ (that is, $s_j\cdot k$ is an integer). Note that picking the minimum value of $k$ satisfying this requirement will guarantee that the input (in binary representation) has at least $\log_2 k$ bits. Such a minimal integer $k$ must exist since the product of the denominators of the rational numbers $s_1,\ldots,s_n$ satisfies this property.

Definition 26. An oracle for approximate evaluation of $f^{-1}$ is defined as follows. Given a value $c\ge0$, we can find a maximum value $b\le1$ that is divisible by $\frac1k$ such that $f(b)\le c$. We denote this value of $b$ by $f^{-1}(c)$. This evaluation can be done in time $O(1)$.

Note that if $f$ is given by a computable function, we can emulate the last oracle via binary search. We make the following two assumptions regarding $f$. The first assumption is that $f(x)=0$ if and only if $x=0$. Since the value $f(0)$ is irrelevant (as any empty bin can be removed from any packing), we show that we can assume that $f(x)>0$ for any $x$ that is the total size of a non-empty set of items. On the other hand, if $f(x)=0$, then $\hat{f}(x)\le\frac{1}{4n}$. Given a set of items $J$, let $J'$ be $J$ excluding any item $j$ such that $\hat{f}(s_j)\le\frac1n$. Given a solution for $J'$, a solution for $J$ can be obtained by packing every such item (that is, every item of $J\setminus J'$) into a dedicated bin. The cost of the modified solution is (additively) larger by at most 1 compared to the original solution. An algorithm that receives $J$ as an input starts with testing for every item $j$ whether $\hat{f}(s_j)\le\frac1n$, and if so, $j$ is packed into a dedicated bin. This can be performed in polynomial time, and the same process can be applied to any solution. Thus, the cost of an optimal solution for $J$ is at least the cost of an optimal solution for $J'$ and at most the cost of an optimal solution for $J'$ plus 1. Therefore, an AFPTAS for adapted inputs gives an AFPTAS for the original inputs. In what follows we assume that no such items exist, which is equivalent to $\hat{f}(s_j)>\frac1n$ for all $j$, and thus $f(s_j)>\frac{1}{4n}$ for all $j$ (since $\varepsilon<1$), and using the monotonicity of $f$, we have $f(x)>0$ for any $x\ge\min_{j\in J}s_j$.

Therefore, this assumption does not affect the correctness of our scheme below. In addition, we assume $\hat{f}(1)=1$. Since any function can be scaled (by $\hat{f}(1)$) to satisfy this property, this assumption is not restrictive. We therefore assume that $\hat{f}(1)=1$, and for the domain $(0,1]$, $f$ is a non-decreasing function with the range $(0,1]$. In [24,23], it is implicitly assumed that an oracle that computes values of $f$ exactly is given. As mentioned above, this case is a special case of our model.

Given the input items $1,2,\ldots,n$ of sizes $s_1,s_2,\ldots,s_n$ (where without loss of generality, $s_1=\min_i s_i>0$), we first compute $\hat{f}(s_1)$. Note that the smallest cost of any bin that can accommodate a non-empty set of items is $f(s_1)$. We create a list of bin types as follows: for every integer value $1\le i\le\log_{1+\varepsilon}\frac{1}{\hat{f}(s_1)}$ we compute $f^{-1}((1+\varepsilon)^i\hat{f}(s_1))$ and denote this value by $\beta_i$. We let $\gamma_i=\hat{f}(\beta_i)$. Then, our list of bin types $B$ has a bin of size $\beta_i$ and cost $\gamma_i$ for each value of $i\ge0$ such that $(1+\varepsilon)^i\hat{f}(s_1)<1$, and in addition, there is a bin type of size 1 and cost 1. Note that the smallest bin type may be larger than $s_1$ but its cost is at most $\hat{f}(s_1)$. Moreover, it can be the case that $\beta_i=\beta_{i-1}$ (and $\gamma_i=\gamma_{i-1}$) for some values of $i\ge1$. The list of bin types has at most $\log_{1+\varepsilon}\frac{1}{\hat{f}(s_1)}+2$ elements. As (due to preprocessing) $\hat{f}(s_1)>\frac1n$, the number of types does not exceed $\frac{n}{\varepsilon}$. Therefore, we can apply the AFPTAS for GCVS on this input of items and bin types, and obtain a feasible solution in polynomial time. It remains to bound the cost of the solution (as a solution to BPUC). We define a step function $f_1$ that approximates the value of $f$ as follows. We let $f_1(b)=\hat{f}(b')$ for the minimum value $b'\in B$ such that $b'\ge b$. Note that $f_1(b)=\hat{f}(b')\ge f(b')\ge f(b)$, where the first inequality holds because $\hat{f}\ge f$, and the second inequality holds because $f$ is monotonically non-decreasing. Note that the cost of a solution for the instance of GCVS is evaluated according to the cost function $f_1$ of bin utilization. That is, this is an over-estimate of the cost of a solution to the instance of BPUC.
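A minimal sketch of the bin-type list construction just described, assuming two callables f_hat and f_inv that stand in for the oracles of Definitions 25 and 26 (the concrete cost function, n and k in the usage example are our own toy choices, not taken from the paper):

```python
import math

def build_bin_types(f_hat, f_inv, s1, eps):
    """Construct the GCVS bin-type list for a BPUC instance, following the
    construction above.  f_hat and f_inv are hypothetical callables standing in
    for the oracles of Definitions 25 and 26."""
    base = f_hat(s1)                       # approximate cost of the cheapest useful bin
    types, i = [], 0
    while (1 + eps) ** i * base < 1:       # one bin type per cost level (1+eps)^i * f_hat(s1)
        beta = f_inv((1 + eps) ** i * base)
        types.append((beta, f_hat(beta)))  # size beta_i, cost gamma_i = f_hat(beta_i)
        i += 1
    types.append((1.0, 1.0))               # the unit bin of cost f_hat(1) = 1
    return types

# Toy usage: concave cost f(b) = sqrt(b), sizes multiples of 1/k with k = 100, n = 50.
k = 100
f = lambda b: math.sqrt(b)
f_hat = lambda b: math.ceil(4 * 50 * f(b)) / (4 * 50)       # the example oracle of Definition 25
f_inv = lambda c: max((j / k for j in range(k + 1) if f(j / k) <= c), default=0.0)
print(build_bin_types(f_hat, f_inv, s1=0.01, eps=0.5))
```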

Lemma 27. For all $b\in[s_1,1]$ such that $b$ is divisible by $\frac1k$, we have $f_1(b)\le(1+\varepsilon)^2 f(b)+\frac1n$.

Proof. Denote by $b'$ the minimum value in $B$ such that $b'\ge b$; then $f_1(b)=\hat{f}(b')$. If $b'$ is the bin type $\beta_0$, then $f(b')\le\hat{f}(s_1)$, and thus $f_1(b)=\hat{f}(b')\le(1+\varepsilon)f(b')+\frac{1}{4n}\le(1+\varepsilon)\hat{f}(s_1)+\frac{1}{4n}\le(1+\varepsilon)^2 f(s_1)+\frac{2+\varepsilon}{4n}\le(1+\varepsilon)^2 f(b)+\frac{2+\varepsilon}{4n}$, using the relation between $f$ and $\hat{f}$ and the monotonicity of $f$. Otherwise, if $b'=\beta_i$ for some $i>0$, we assume that $i$ is the minimum integer such that $f^{-1}((1+\varepsilon)^i\hat{f}(s_1))=b'$, and thus $\beta_i\ne\beta_{i-1}$ and by definition $b'>\beta_{i-1}$. Therefore, $(1+\varepsilon)^{i-1}\hat{f}(s_1)<f(b')\le(1+\varepsilon)^i\hat{f}(s_1)$. Next, we claim that $f(b)>(1+\varepsilon)^{i-1}\hat{f}(s_1)$. If $f(b)\le(1+\varepsilon)^{i-1}\hat{f}(s_1)$, then by definition we get $\beta_{i-1}\ge b$, so there is a bin size in $B$ which belongs to $[b,b')$, contradicting $\beta_{i-1}<b\le b'=\beta_i$. Therefore, $f(b')\le(1+\varepsilon)f(b)$. Thus, $f_1(b)=\hat{f}(b')\le(1+\varepsilon)\cdot f(b')+\frac{1}{4n}\le(1+\varepsilon)^2 f(b)+\frac1n$ and the claim follows. $\Box$

Theorem 28. Problem BPUC has an AFPTAS.

Proof. Consider a feasible solution to BPUC of cost $C$. Next, we show that the instance of GCVS which we created has a feasible solution of cost at most $(1+\varepsilon)^2\cdot C+1$. To show this claim we keep the same subsets of items, and we pack each subset of the solution into the minimum sized bin (in $B$) which can accommodate it. Since the total size $b$ of each such subset is a multiple of $\frac1k$, the size $b'\in B$ of the bin which we use satisfies $\hat{f}(b')=f_1(b)\le(1+\varepsilon)^2 f(b)+\frac1n$. That is, in order to pack this subset of items, we increase the cost of the bin by a multiplicative factor of $(1+\varepsilon)^2$ and another additive term of $\frac1n$. The claim follows by applying this transformation to all the bins and noting that there are at most $n$ bins in a feasible solution. On the other hand, consider a feasible solution to GCVS of cost $C$; we can keep the same solution and use the inequality $f_1(b)\ge f(b)$ shown above to conclude that its cost as a solution to BPUC is at most $C$. Therefore, by approximating the minimum cost solution to GCVS within an asymptotic approximation ratio of $1+\varepsilon$, we obtain an asymptotic approximation ratio of $(1+\varepsilon)^3$ for BPUC, and thus we obtain an AFPTAS using Theorem 24. $\Box$

The combined problem where there are $r$ distinct types of bins, where each bin of index $i$ has a maximum capacity $b_i$ and its own bin utilization cost function $f_i$, can be formulated easily as an instance of BPUC as follows. We define the bin utilization cost function $f$ as $f(x)=\min_{i=1,2,\ldots,r:\,x\le b_i}f_i(x)$. Note that the existence of oracles as described above for each function $f_i$ results in the existence of such oracles for $f$ as well.
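A small sketch of this reduction, with illustrative capacities and cost functions that are our own toy choices (not taken from the paper):

```python
def combined_cost_function(bin_caps, cost_funcs):
    """Reduce r bin types, each with capacity b_i and its own utilization cost f_i,
    to a single BPUC cost function f(x) = min over i with x <= b_i of f_i(x).
    bin_caps and cost_funcs are illustrative stand-ins for the r given types."""
    def f(x):
        feasible = [g(x) for b, g in zip(bin_caps, cost_funcs) if x <= b]
        return min(feasible)      # assumes x fits in at least one bin type
    return f

# Toy usage: two bin types with capacities 1.0 and 0.5.
f = combined_cost_function([1.0, 0.5], [lambda x: 0.8 + 0.2 * x, lambda x: 0.3 + x])
print(f(0.4), f(0.7))             # a load of 0.7 can only use the first type
```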

Appendix A. It is impossible to reduce the number of bin types for GCVS while maintaining a finite asymptotic approximation ratio

In this appendix, we show that if an algorithm has a constant integer parameter $M$, such that it first reduces the number of bin types to be at most $M$, and then solves the resulting instance of GCVS optimally (in exponential time), then its asymptotic approximation ratio is at least $M$. This is in contrast to the AFPTAS of Murgolo [27], where one of the key ideas was to reduce the number of bin types.


Proposition 29. Assume that algorithm A has a finite asymptotic approximation ratio; then for every integer $M$, there is an instance of GCVS for which A uses more than $M$ distinct bin types.

Proof. Assume by contradiction that A always uses at most $M$ distinct bin types. Let $N>M$. Then, consider the following instance of GCVS. We have $N$ item classes, where the $i$-th class of items (for $i=1,2,\ldots,N$) consists of $N^{3i}$ items, each of size $\frac{1}{N^i}$. The set of bin types has $N$ bin types, where bin type $i$ (for $i=1,2,\ldots,N$) is defined by $b_i=\frac{1}{N^i}$ and $c_i=\frac{1}{N^{3i}}$. Therefore $\alpha_i=\frac{1}{N^{2i}}$.

Note that a feasible solution can pack the $N^{3i}$ items of the $i$-th class into $N^{3i}$ bins of type $i$. Thus, the total cost for packing all the items of the $i$-th class is $N^{3i}\cdot\frac{1}{N^{3i}}=1$, and therefore the cost of this solution is $N$. By assumption, algorithm A cannot use all bin types, and therefore there exists a value of $i$ such that the items of class $i$ are not packed into bins of type $i$. Consider the cost that A pays for the bins that are used for packing the items of class $i$, possibly together with additional items (and ignore the cost of the other bins, which may only decrease the cost of A). Observe that these bins are of type at most $i-1$, as the other bins are too small. The minimum density of a bin that is used for packing items of class $i$ is at least $\min_{j\le i-1}\frac{c_j}{b_j}=\frac{c_{i-1}}{b_{i-1}}=\frac{1}{N^{2(i-1)}}$. Therefore, the total cost of the bins that are used for packing the items of class $i$ is at least their total size times the minimum density of any such bin, that is, at least $N^{3i}\cdot\frac{1}{N^i}\cdot\frac{1}{N^{2(i-1)}}=N^2$. Therefore, the asymptotic approximation ratio of A is at least $\frac{N^2}{N}=N$, and since $opt=N$, this implies that the asymptotic approximation ratio of A is unbounded, which is a contradiction. $\Box$

References

[1] N. Bansal, J.R. Correa, C. Kenyon, M. Sviridenko, Bin packing in multiple dimensions: inapproximability results and approximation schemes, Math. Oper. Res. 31 (2006) 31–49.
[2] W.W. Bein, J.R. Correa, X. Han, A fast asymptotic approximation scheme for bin packing with rejection, Theor. Comput. Sci. 393 (1–3) (2008) 14–22.
[3] A. Caprara, H. Kellerer, U. Pferschy, Approximation schemes for ordered vector packing problems, Nav. Res. Logist. 92 (2003) 58–69.
[4] E.G. Coffman Jr., J. Csirik, Performance guarantees for one-dimensional bin packing, in: T.F. Gonzalez (Ed.), Handbook of Approximation Algorithms and Metaheuristics, Chapman & Hall/CRC, 2007, chapter 32, 18 pp.
[5] J. Csirik, J.Y.-T. Leung, Variable-sized bin packing and bin covering, in: T.F. Gonzalez (Ed.), Handbook of Approximation Algorithms and Metaheuristics, Chapman & Hall/CRC, 2007, chapter 34, 11 pp.
[6] J. Csirik, J.Y.-T. Leung, Variants of classical one-dimensional bin packing, in: T.F. Gonzalez (Ed.), Handbook of Approximation Algorithms and Metaheuristics, Chapman & Hall/CRC, 2007, chapter 33, 13 pp.
[7] A. Das, C. Mathieu, S. Mozes, The train delivery problem – vehicle routing meets bin packing, in: Proc. of the 9th International Workshop in Approximation and Online Algorithms (WAOA'10), 2010.
[8] W. Fernandez de la Vega, G.S. Lueker, Bin packing can be solved within 1 + ε in linear time, Combinatorica 1 (1981) 349–355.
[9] L. Epstein, Bin packing with rejection revisited, Algorithmica 56 (4) (2010) 505–528.
[10] L. Epstein, Cs. Imreh, A. Levin, Class constrained bin packing revisited, Theor. Comput. Sci. 411 (34–36) (2010) 3073–3089.
[11] L. Epstein, A. Levin, Bin packing with general cost structures, Math. Program. 132 (1–2) (2012) 355–391.
[12] L. Epstein, A. Levin, An APTAS for generalized cost variable-sized bin packing, SIAM J. Comput. 38 (1) (2008) 411–428.
[13] L. Epstein, A. Levin, AFPTAS results for common variants of bin packing: a new method for handling the small items, SIAM J. Optim. 20 (6) (2010) 3121–3145.
[14] D.S. Hochbaum, Various notions of approximations: good, better, best and more, in: D.S. Hochbaum (Ed.), Approximation Algorithms, PWS Publishing Company, 1997.
[15] D.S. Hochbaum, W. Maass, Approximation schemes for covering and packing problems in image processing and VLSI, J. ACM 32 (1) (1985) 130–136.
[16] D.S. Hochbaum, D.B. Shmoys, Using dual approximation algorithms for scheduling problems: theoretical and practical results, J. ACM 34 (1) (1987) 144–162.
[17] D.S. Hochbaum, D.B. Shmoys, A polynomial approximation scheme for scheduling on uniform processors: using the dual approximation approach, SIAM J. Comput. 17 (1988) 539–551.
[18] D.S. Johnson, Near-optimal bin packing algorithms, PhD thesis, MIT, Cambridge, MA, 1973.
[19] D.S. Johnson, Fast algorithms for bin packing, J. Comput. Syst. Sci. 8 (1974) 272–314.
[20] D.S. Johnson, A. Demers, J.D. Ullman, M.R. Garey, R.L. Graham, Worst-case performance bounds for simple one-dimensional packing algorithms, SIAM J. Comput. 3 (1974) 256–278.
[21] N. Karmarkar, R.M. Karp, An efficient approximation scheme for the one-dimensional bin-packing problem, in: Proceedings of the 23rd Annual Symposium on Foundations of Computer Science (FOCS'82), 1982, pp. 312–320.
[22] S. Khuller, J. Li, B. Saha, Energy efficient scheduling via partial shutdown, in: Proc. of 21st Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'10), 2010, pp. 1360–1372.
[23] J.Y.-T. Leung, C.-L. Li, An asymptotic approximation scheme for the concave cost bin packing problem, Eur. J. Oper. Res. 191 (2) (2008) 582–586.
[24] C.-L. Li, Z.-L. Chen, Bin-packing problem with concave costs of bin utilization, Nav. Res. Logist. 53 (4) (2006) 298–308.
[25] J. Li, S. Khuller, Generalized machine activation problems, in: Proc. of 22nd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA'11), 2011, pp. 80–94.
[26] S. Martello, P. Toth, Knapsack Problems: Algorithms and Computer Implementations, John Wiley and Sons, 1990.
[27] F.D. Murgolo, An efficient approximation scheme for variable-sized bin packing, SIAM J. Comput. 16 (1) (1987) 149–161.
[28] H. Shachnai, T. Tamir, G.J. Woeginger, Minimizing makespan and preemption costs on a system of uniform machines, Algorithmica 42 (3–4) (2005) 309–334.
[29] J.D. Ullman, The performance of a memory allocation algorithm, Technical report 100, Princeton University, Princeton, NJ, 1971.
[30] E.C. Xavier, F.K. Miyazawa, The class constrained bin packing problem with applications to video-on-demand, Theor. Comput. Sci. 393 (1–3) (2008) 240–259.