The multi-band robust knapsack problem—A dynamic programming approach

Discrete Optimization 18 (2015) 123–149

Contents lists available at ScienceDirect

Discrete Optimization

www.elsevier.com/locate/disopt

Grit Claßen a,∗, Arie M.C.A. Koster b, Anke Schmeink c

a INFORM GmbH, Risk & Fraud Division, Pascalstraße 35, 52076 Aachen, Germany
b Lehrstuhl II für Mathematik, RWTH Aachen University, 52056 Aachen, Germany
c Institute for Theoretical Information Technology, RWTH Aachen University, 52056 Aachen, Germany

article info

Article history:
Received 25 February 2014
Received in revised form 19 March 2015
Accepted 23 September 2015
Available online 30 October 2015

Keywords:
Multi-band robustness
Knapsack problem
Dynamic program

abstract

In this paper, we consider the multi-band robust knapsack problem which generalizes the Γ-robust knapsack problem by subdividing the single deviation band into several smaller bands. We state a compact ILP formulation and develop two dynamic programming algorithms based on the presented model where the first has a complexity linear in the number of items and the second has a complexity linear in the knapsack capacity. As a side effect, we generalize a result of Bertsimas and Sim on combinatorial optimization problems with uncertain objective. A computational study demonstrates that the second dynamic program is significantly faster than the first algorithm, especially after application of further algorithmic ideas. The improved algorithm clearly outperforms cplex solving the compact ILP formulation.

© 2015 Elsevier B.V. All rights reserved.

1. Introduction

The classical knapsack problem (KP), one of the most fundamental problems in discrete optimization, asks to select a subset of items $i \in N = \{1, \ldots, n\}$ having a (positive) weight $a_i$ and a (positive) profit $p_i$ such that a given capacity $B$ is not exceeded and the total profit is maximized. This problem is NP-hard but can be solved in pseudo-polynomial time via a dynamic programming algorithm (DP) with complexity $O(nB)$, see, e.g., [1,2]. Further algorithms usually first order the items according to non-increasing profit–weight ratios and then derive upper and lower bounds which can be used to fix variables before applying branch-and-bound techniques, cf. Pisinger [3] and the references therein.

A generalization of the KP is the Γ-robust knapsack problem (Γ-RKP) which takes data uncertainty, especially in the weights, into account. Γ-robust optimization was first introduced by Bertsimas and Sim [4]

∗ Corresponding author.
E-mail addresses: [email protected] (G. Claßen), [email protected] (A.M.C.A. Koster), [email protected] (A. Schmeink).

http://dx.doi.org/10.1016/j.disopt.2015.09.007 1572-5286/© 2015 Elsevier B.V. All rights reserved.


G. Claßen et al. / Discrete Optimization 18 (2015) 123–149

and is an example of a tractable robust optimization model. The aim of robust optimization is to find an optimal robust feasible solution, i.e., a solution that is protected against the worst coefficient deviations described by a deterministic uncertainty set. We refer to [5,6] for a comprehensive introduction to the theory and application of robust optimization. The widely applied Γ-robustness approach assumes each uncertain coefficient $u$ to be a symmetric and bounded random variable which takes values in an interval $[\bar u - \hat u, \bar u + \hat u]$, with $\bar u$ denoting the nominal value and $\hat u$ its highest deviation. The uncertainty set is then defined by a parameter Γ which limits the maximum number of simultaneous deviations of the coefficients from their nominal values. The key feature of this approach is that the robust counterpart of a linear program (LP) is linear and its number of variables and constraints is polynomial in the size of the input of the deterministic problem. Additionally, Fischetti and Monaci [7] have developed a dynamic cut generation scheme to solve the exponential robust counterpart of an LP efficiently, where each realization of the uncertain coefficients lying in the uncertainty set is considered in a separate constraint. Applying the robustness approach by Bertsimas and Sim [4] to the KP, the weights of at most $0 \le \Gamma \le n$ items $i \in N$ can deviate from their nominal weights $\bar a_i$ in an interval $[\bar a_i - \hat a_i, \bar a_i + \hat a_i]$, leading to the Γ-RKP. A solution is feasible if the knapsack capacity is not exceeded for any realization of the weights. Since the KP is a Γ-RKP with Γ = 0, the Γ-RKP is also NP-hard. However, again there exist DPs which solve fairly large instances within reasonable time. Klopfenstein and Nace [8] present a DP which is a modification of the DP for the KP and has complexity $O(nB^2)$, while Monaci et al. [9] develop an algorithm explicitly for the Γ-RKP, improving the complexity to $O(n\Gamma B)$.
The crucial part of the latter algorithm is the ordering of items according to non-increasing deviation values $\hat a_i$. In this paper, we investigate a further generalization of the Γ-RKP, the multi-band RKP. A first attempt to use multiple deviation bands was made by Bienstock [10] by means of the so-called histogram model for portfolio optimization problems in finance. The first theoretical framework of the multi-band robustness concept, in which the deviation range is partitioned into a number of deviation intervals, so-called bands, was presented by Büsing and D'Andreagiovanni [11,12]. These works comprise fundamental investigations of the properties of this concept in case of a lower as well as an upper bound on the number of realizations per band, multi-band robust counterparts, an efficient method to separate robustness cuts, and first studies on probabilistic bounds for feasibility guarantees. Multi-band robustness has recently been applied to a multi-period capacitated network design problem in [13]. A good overview of the history of the multi-band concept with lower and upper bounds can be found at the website given in [14]. Shortly after the works [11,12], Mattia [15] published a multiple interval concept in a technical report. This concept is quite similar to multi-band robustness but more restricted, as the probability distribution of the random variables is assumed to be symmetric and no lower bounds on the number of realizations per interval are assumed. Moreover, Kutschka [16] shows that the multi-band RKP polytope is full-dimensional if and only if the highest possible weight of every item does not exceed the knapsack capacity. Additionally, the author states trivial facets of the polytope and derives valid (extended) cover inequalities.
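For the one-band case, the ordering by non-increasing deviations yields a direct robust-feasibility test: in the worst case, the Γ selected items with the largest deviations deviate simultaneously (cf. Lemma 4.1 in Section 4). A minimal sketch of this check (our own illustration, not the authors' code; names and data are invented):

```python
def gamma_rkp_feasible(selected, nominal, dev, Gamma, B):
    """Robust feasibility test for the Gamma-RKP: the worst case
    realizes the Gamma largest deviations among the selected items."""
    devs = sorted((dev[i] for i in selected), reverse=True)
    worst = sum(nominal[i] for i in selected) + sum(devs[:Gamma])
    return worst <= B
```

Sorting within the selected subset is equivalent to pre-sorting all items, since only the Γ largest deviations of the chosen items matter.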
Bertsimas and Sim [17] also consider the special case of a 0–1 optimization problem in which the objective coefficients are subject to uncertainty, i.e., $\min\{c^T x \mid x \in X \cap \{0,1\}^n\}$ with $c_j$ taking values in $[\bar c_j - \hat c_j, \bar c_j + \hat c_j]$. The authors show that the Γ-robust counterpart can be solved by solving $n+1$ nominal (non-robust) problems, a result improved to $\lceil (n-\Gamma)/2 \rceil + 1$ nominal problems by Lee and Kwon [18]. These results imply an $O(n^2 B)$ or $O(n(n-\Gamma)B)$ algorithm, respectively, for both robust KP variants, with uncertain cost or uncertain weights. Monaci et al. [9] improve this result for the Γ-RKP to $O(n\Gamma B)$. For the multi-band robust version of a 0–1 optimization problem, Büsing and D'Andreagiovanni [12] present an algorithm which constitutes a corrected version of an algorithm given in [15] and requires the solving of $O(n^{K^2})$ nominal problems. This algorithm yields an $O(n^{K^2} B)$ algorithm for the multi-band RKP. In [19], the authors present a family of valid inequalities for the robust counterpart of a 0–1 optimization problem where the uncertainty


of the objective coefficients is modeled by multi-band robustness. This work uses a similar proof strategy to that in the paper by Atamtürk [20], who introduces alternative formulations for a Γ-robust 0–1 optimization problem with strong LP relaxation bounds.

Contribution and outline. In this paper, we study the complexity of the KP with uncertain weights modeled via a multi-band uncertainty set, i.e., the multi-band RKP. Due to the multiple deviation values, we cannot specify an ordering of the items by means of the weights and, hence, it is not possible to extend the $O(n\Gamma B)$ complexity result by Monaci et al. [9] for the Γ-RKP. Instead, we develop two DPs for the compact reformulation of the multi-band RKP given in [11]. The first algorithm has a complexity linear in the number of items but not in the knapsack capacity. As the capacity can be significantly higher than the number of items, the second DP has a complexity of $O(K!\, n^{K+1} B)$, which generalizes the result in Monaci et al. [9] to the multi-band RKP (Theorem 4.5). Additionally, we propose improvements to speed up the solving process in practice. The performance of the different algorithms is analyzed in a computational study. As a side effect, we improve the result for 0–1 optimization problems with uncertain objective coefficients modeled by a multi-band robust uncertainty set, which is stated in [12] and represents a generalization of the result given by Bertsimas and Sim [17]. That is, we develop an algorithm with complexity $O(K!\, n^K)$ for combinatorial optimization problems with uncertain objective (Corollary 4.6).

The rest of the paper is organized as follows. In Section 2, we briefly present a compact integer linear programming formulation of the multi-band RKP. Based on this formulation, we derive in Section 3 a first DP which is an adaptation of the DP for the KP and has complexity $O(nB^{K+1})$.
The main part of this paper is Section 4 in which we develop a DP with a complexity linear in the capacity of the knapsack, including a version for the case of uncertain objective coefficients in Section 4.2. By some improvements presented in Section 5, we can speed up the performance of this DP in practice. A computational study in Section 6 demonstrates the gained speed-ups. We conclude the paper in Section 7.

2. Compact formulation of the multi-band RKP

In this section, we briefly state the used notation and then present a compact formulation of the multi-band RKP when only upper bounds on the number of realizations per band are assumed. For more details on the compact reformulation, especially when also lower bounds on the number of realizations are assumed, we refer to [11,12].

Every item $i \in N$ contributes a profit $p_i \in \mathbb{Z}_{\ge 0}$ to the objective and has an uncertain weight $a_i$. These uncertain weights are modeled as independent random variables with unknown distribution. Furthermore, we assume that there exist $K$ deviation bands and a realization $\tilde a_i$ of the random variable $a_i$ has a deviation value lying in band $k \in \{1, \ldots, K\}$ if and only if $\tilde a_i \in (\bar a_i + \hat a_i^{k-1}, \bar a_i + \hat a_i^k]$, where $\bar a_i \in \mathbb{Z}_{\ge 0}$ denotes the nominal weight and $\hat a_i^k \in \mathbb{Z}_{\ge 0}$ the deviation value in band $k$. We assume an increasing ordering among the deviation values for each item $i \in N$, i.e., $\hat a_i^k < \hat a_i^{k+1}$ for every $k \in \{1, \ldots, K-1\}$. The number of possible deviation values lying in band $k$ is limited over all items by a parameter $\Gamma_k > 0$. The capacity $B$ of the knapsack is also assumed to be integer. We can now formulate the $K$-band RKP as the following integer program with binary decision variables $x_i$ indicating whether item $i$ is included in the knapsack:
\[
\max \sum_{i \in N} p_i x_i \tag{1a}
\]
\[
\text{s.t.} \quad \sum_{i \in N} \bar a_i x_i + \mathrm{DEV}_\Gamma(x, \hat a) \le B \tag{1b}
\]
\[
x_i \in \{0,1\} \quad \forall i \in N. \tag{1c}
\]
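The band membership defined above can be made concrete: a realization deviating by at most $\hat a_i^1$ lies in band 1, by more than $\hat a_i^{k-1}$ and at most $\hat a_i^k$ in band $k$. A small hedged sketch (function name and the band-0 convention for non-deviating realizations are our own):

```python
def deviation_band(nominal, band_devs, realization):
    """Return the 1-based index of the deviation band containing
    `realization`, or 0 if the realization does not deviate upwards.
    band_devs must be strictly increasing (hat-a^0 = 0 implicitly)."""
    dev = realization - nominal
    if dev <= 0:
        return 0
    for k, d in enumerate(band_devs, start=1):
        if dev <= d:
            return k
    raise ValueError("realization exceeds the largest deviation band")
```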


We maximize the profit in (1a) while the knapsack capacity must not be exceeded, which is guaranteed by constraint (1b). The deviation term DEV is computed via the following ILP for a given allocation of the decision variables $x$. The binary decision variables $y_i^k$ define the selection of the items in a band: $y_i^k = 1$ if the deviation of the weight of item $i$ lies in band $k$ and $y_i^k = 0$ otherwise.
\[
\mathrm{DEV}_\Gamma(x, \hat a) = \max \sum_{i \in N} \sum_{k=1}^{K} \hat a_i^k x_i y_i^k \tag{2a}
\]
\[
\text{s.t.} \quad \sum_{i \in N} y_i^k \le \Gamma_k \quad \forall k \in \{1, \ldots, K\} \tag{2b}
\]
\[
\sum_{k=1}^{K} y_i^k \le 1 \quad \forall i \in N \tag{2c}
\]
\[
y_i^k \in \{0,1\} \quad \forall i \in N,\ \forall k \in \{1, \ldots, K\}. \tag{2d}
\]
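For tiny instances, the maximization (2) can be evaluated by brute force over all band assignments, which is handy for testing the algorithms discussed later (an illustrative sketch with invented data; exponential in $n$, so only for verification):

```python
from itertools import product

def dev_gamma(x, dev, Gamma):
    """Brute-force evaluation of DEV_Gamma(x, a-hat) per ILP (2):
    assign each item to at most one band (2c), at most Gamma[k]
    items per band (2b), maximizing the summed deviations.
    dev[i][k-1] holds hat-a_i^k."""
    n, K = len(x), len(Gamma)
    best = 0
    # y[i] in {0..K}: 0 = no deviation, k >= 1 = band k
    for y in product(range(K + 1), repeat=n):
        if any(sum(1 for yi in y if yi == k + 1) > Gamma[k]
               for k in range(K)):
            continue  # violates (2b)
        best = max(best, sum(dev[i][y[i] - 1] * x[i]
                             for i in range(n) if y[i] > 0))
    return best
```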

This formulation maximizes the sum of the deviation values on condition that at most $\Gamma_k$ many deviations can lie in band $k$, constraints (2b), and the weight of each item can deviate in at most one band, constraints (2c). Since the coefficient matrix of (2) can be proved to be totally unimodular (see [11]), the polytope is integral and, by strong duality, we can replace $\mathrm{DEV}_\Gamma(x, \hat a)$ in (1) by its dual problem
\[
\min \sum_{k=1}^{K} \Gamma_k \pi_k + \sum_{i \in N} \rho_i \tag{3a}
\]
\[
\text{s.t.} \quad \pi_k + \rho_i \ge \hat a_i^k x_i \quad \forall k \in \{1, \ldots, K\},\ \forall i \in N \tag{3b}
\]
\[
\pi_k, \rho_i \ge 0 \quad \forall i \in N,\ \forall k \in \{1, \ldots, K\}, \tag{3c}
\]

where $\pi_k$ are the dual variables corresponding to constraints (2b) and $\rho_i$ correspond to constraints (2c). For integer deviation values, optimal dual variables $\pi_k$ and $\rho_i$ are also integer and, obviously, $\pi_k \le \max_{i \in N} \hat a_i^k$ for all bands $k$ in an optimal solution. Including (3) in (1), we get the compact reformulation of the $K$-band RKP as follows:
\[
\max \sum_{i \in N} p_i x_i \tag{4a}
\]
\[
\text{s.t.} \quad \sum_{i \in N} \bar a_i x_i + \sum_{k=1}^{K} \Gamma_k \pi_k + \sum_{i \in N} \rho_i \le B \tag{4b}
\]
\[
\pi_k + \rho_i \ge \hat a_i^k x_i \quad \forall i \in N,\ k \in \{1, \ldots, K\} \tag{4c}
\]
\[
x_i \in \{0,1\},\ \pi_k, \rho_i \ge 0 \quad \forall i \in N,\ k \in \{1, \ldots, K\}. \tag{4d}
\]
Note that, for fixed values $\bar x$ and $\bar \pi$, the optimal value of $\rho_i$ in (4) can be computed according to constraints (4c)–(4d) as
\[
\rho_i = \max\Big\{ \max_{k \in \{1,\ldots,K\}} \big\{ \hat a_i^k \bar x_i - \bar\pi_k \big\},\, 0 \Big\}
       = \max\Big\{ \max_{k \in \{1,\ldots,K\}} \big\{ \hat a_i^k - \bar\pi_k \big\},\, 0 \Big\} \bar x_i \tag{5}
\]
since $\bar x_i \in \{0,1\}$. Hence, the optimal value of $\rho_i$ is the maximum difference between a deviation value and the dual variable of the corresponding band.

An alternative method to solving the compact formulation (4) is the separation of so-called robustness cuts; see [11]. The idea of this approach is to solve the nominal problem and then, in case the optimal solution is not robust, to generate cuts which enforce robustness and are added to the problem. This step,


where the feasibility check corresponds to solving a min-cost flow problem, is iterated via a cutting plane method.

3. A dynamic programming algorithm for the K-band RKP

In this section, we present an exact DP for the $K$-band RKP which constitutes a straightforward generalization of the DP for the knapsack problem [1,2] and of its generalization for the Γ-RKP [8]. We denote by $\pi$ the vector of dual variables $\pi_k$ for all bands $k \in \{1, \ldots, K\}$. Applying (5), let $f(j, b, \pi)$ denote the highest profit of a feasible solution of (4) for a given vector $\pi$, with weight $b$, in which only items $\{1, \ldots, j\} \subseteq N$ with $j \in N$ are considered. Hence, $f(j, b, \pi)$ can be formulated as a classical equality KP with total weight $b$:
\[
f(j, b, \pi) = \max \Big\{ \sum_{i=1}^{j} p_i x_i \;\Big|\; \sum_{i=1}^{j} \big( \bar a_i + \mathrm{DEV}_\pi(i) \big) x_i = b,\ x_i \in \{0,1\},\ i \le j \Big\}
\]
with
\[
\mathrm{DEV}_\pi(i) := \max\Big\{ \max_{k \in \{1,\ldots,K\}} \big\{ \hat a_i^k - \pi_k \big\},\, 0 \Big\}.
\]
To fulfill constraint (4b), $b$ has to lie in $\big\{0, 1, \ldots, B - \sum_{k=1}^{K} \Gamma_k \pi_k\big\}$. Furthermore, for the dual variables $\pi_k$ it holds that
\[
\pi_k \in \Pi_k := \Big\{ 0, 1, \ldots, \max_{i \in N} \hat a_i^k \Big\} \quad \forall k \in \{1, \ldots, K\}. \tag{6}
\]
As described in [1,2], the DP then consists of the computation of all values of $f$ by the recursive equation
\[
f(j, b, \pi) = \max \big\{ f(j-1, b, \pi),\ f(j-1, b - (\bar a_j + \mathrm{DEV}_\pi(j)), \pi) + p_j \big\} \tag{7}
\]
with initial values
\[
f(1, b, \pi) = \begin{cases} 0, & \text{if } b = 0 \\ p_1, & \text{if } b = \bar a_1 + \mathrm{DEV}_\pi(1) \\ -\infty, & \text{otherwise} \end{cases}
\]
and $\pi \in \Pi_1 \times \cdots \times \Pi_K$, $b \in \big\{0, 1, \ldots, B - \sum_{k=1}^{K} \Gamma_k \pi_k\big\}$. Formally, the optimal solution of (1) is determined by
\[
\max \Big\{ f(n, b, \pi) \;\Big|\; \pi \in \Pi_1 \times \cdots \times \Pi_K,\ b \in \Big\{ 0, 1, \ldots, B - \sum_{k=1}^{K} \Gamma_k \pi_k \Big\} \Big\}. \tag{8}
\]

Lemma 3.1. The complexity of the dynamic programming approach (7)–(8) using the sets $\Pi_k$ for $k \in \{1, \ldots, K\}$ given in (6) is $O(nB^{K+1})$.

Proof. The computation of
\[
\max \Big\{ f(n, b, \bar\pi) \;\Big|\; b \in \Big\{ 0, 1, \ldots, B - \sum_{k=1}^{K} \Gamma_k \bar\pi_k \Big\} \Big\}
\]
for a fixed vector $\bar\pi$ corresponds to solving a (non-robust) KP with a capacity of $B - \sum_{k=1}^{K} \Gamma_k \bar\pi_k$. By (7), this can be done in $O(nB)$. Moreover, $|\Pi_k| \le B + 1$ since $\hat a_i^k \le B$ for all $i \in N$ and $k \in \{1, \ldots, K\}$. Hence, the complexity of the DP (7)–(8) is $O(nB^{K+1})$. □
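The recursion (7)–(8) translates directly into code. The following unoptimized sketch (our own transcription, with invented test data) enumerates the full grid $\Pi_1 \times \cdots \times \Pi_K$ from (6) and solves one knapsack per dual vector; it uses the usual capacity-bounded DP, which is equivalent to maximizing the equality version over all $b$:

```python
from itertools import product

def dev_pi(dev_i, pi):
    """DEV_pi(i) = max(max_k(hat-a_i^k - pi_k), 0), cf. (5)."""
    return max(max(d - p for d, p in zip(dev_i, pi)), 0)

def kband_rkp_dp(profits, nominal, dev, Gamma, B):
    """For every dual vector pi, solve a classical KP with inflated
    weights nominal_i + DEV_pi(i) and reduced capacity
    B - sum_k Gamma_k pi_k; return the best profit over all pi."""
    n, K = len(profits), len(Gamma)
    Pi = [range(max(dev[i][k] for i in range(n)) + 1) for k in range(K)]
    best = 0
    for pi in product(*Pi):
        cap = B - sum(g * p for g, p in zip(Gamma, pi))
        if cap < 0:
            continue
        f = [0] * (cap + 1)
        for i in range(n):
            w = nominal[i] + dev_pi(dev[i], pi)
            # iterate capacities downwards so each item is used once
            for b in range(cap, w - 1, -1):
                f[b] = max(f[b], f[b - w] + profits[i])
        best = max(best, f[cap])
    return best
```

Maximizing over $\pi$ is correct because $x$ is robust-feasible as soon as one dual-feasible certificate $(\pi, \rho)$ satisfies (4b).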


The complexity of the presented DP is linear in the number of items. In many applications, however, the knapsack capacity $B$ is larger than the number of items $n$. Hence, an algorithm with complexity linear in the capacity is desirable. For the one-band RKP, Monaci et al. [9] derive such a DP. Its crucial assumption is an ordering of the items according to non-increasing deviation values. As no comparable ordering of items with more than one deviation value exists, an extension of this DP to the problem studied here would result in an algorithm with complexity $O(n\Gamma_\ell B^K)$ for one $\ell \in \{1, \ldots, K\}$. As $B \gg n$, it would be beneficial to move the exponent from $B$ to $n$. In [12], Büsing and D'Andreagiovanni have developed, in parallel with this work, an algorithm with running time $O(n^{K^2} B)$, thus increasing the exponent significantly. The transfer of the algorithm stated in Mattia [15] for a combinatorial optimization problem with uncertain objective coefficients to the multi-band RKP would lead to an algorithm with complexity $O((n+1)^{K+1} B)$, which is however not correct (see Appendix A). In the following section, we present a DP with complexity linear in the capacity and polynomial in the number of items, which uses smaller sets $\Pi_k$ compared to (6).

4. An improved dynamic programming algorithm

4.1. Uncertainty in the coefficient matrix

By (5), we can write optimization problem (3) as minimizing the function $\chi$ of $\pi$:
\[
\chi : \mathbb{R}^K \to \mathbb{R} : \chi(\pi) = \sum_{k=1}^{K} \Gamma_k \pi_k + \sum_{i \in N} \mathrm{DEV}_\pi(i). \tag{9}
\]
Note that we neglect $x_i$ here for simplicity; its inclusion would just require using a new set $N' := \{i \in N \mid x_i = 1\}$ instead of $N$. The function $\chi$ is piecewise linear as every term in the maximum is linear. Additionally, this function is convex since linear functions are convex, the maximum of convex functions is convex, and the sum of convex functions is also convex. Hence, any local minimum of the function $\chi$ is also a global minimum.

To derive properties of a global minimum of the function $\chi$ for $K > 1$, we use a well-known result for the Γ-RKP, the one-band RKP. Assuming that items are sorted by non-increasing weight deviations $\hat a_i^1$, the following lemma holds.

Lemma 4.1 (Monaci et al. [9]). A subset $I \subseteq N$ is feasible for the Γ-RKP if and only if
\[
\sum_{i \in I : i \le i_\Gamma} (\bar a_i + \hat a_i^1) + \sum_{i \in I : i > i_\Gamma} \bar a_i \le B,
\]
where $i_\Gamma$ denotes the Γth item in $I$ if $|I| \ge \Gamma$; otherwise $i_\Gamma$ is the index of the last item in $I$.

Applying a minor modification of Lemma 4.1, it is easy to see that $\chi$ also takes a global minimum at $\pi_1 = \hat a^1_{\Gamma_1 + 1}$, where $\hat a^1_{\Gamma_1+1}$ denotes the $(\Gamma_1 + 1)$-largest deviation value. Now, we assume $K > 1$ and consider the case that all entries of the vector $\pi$ but one are fixed, and limit the domain for the remaining $\pi_\ell$ in such a way that $\chi$ is as low as possible regarding the fixed entries of $\pi$. For simplicity, we write $k \ne \ell$ instead of $k \in \{1, \ldots, K\} \setminus \{\ell\}$ for $\ell \in \{1, \ldots, K\}$ and use the notation $(\alpha)^+$ for $\max\{\alpha, 0\}$ henceforth.

Lemma 4.2. If $\bar\pi_k$ is fixed for all $k \ne \ell$ with $\ell \in \{1, \ldots, K\}$, then the optimal $\pi_\ell$ minimizing (9) lies in
\[
\Big\{ \Big( \hat a_i^\ell - \max_{k \ne \ell} \big\{ (\hat a_i^k - \bar\pi_k)^+ \big\} \Big)^+ \;\Big|\; i \in N \Big\}.
\]


In fact, the minimum is taken at the $(\Gamma_\ell + 1)$-largest of these values. The proof is rather straightforward, reformulating the function $\chi$ so that only a robust KP remains to be solved (see Appendix B).

Since every local minimum of the piecewise linear function $\chi$ as defined in (9) is a global minimum, we have to find a point with a gradient equal to zero. If a linear segment with a gradient equaling zero exists, the corresponding corners have the same property and still constitute global minima of the function $\chi$. At such a corner, $\chi$ is not differentiable in one direction, i.e., the directional derivatives differ. For an index $\ell \in \{1, \ldots, K\}$ (an index set $\mathcal{K} \subseteq \{1, \ldots, K\}$), we denote by $e_\ell$ ($e_{\mathcal{K}}$) the unit vector with all entries set to zero apart from the position(s) $\ell$ ($\mathcal{K}$) which is (are) set to 1.

Theorem 4.3. If $\pi \in \mathbb{R}^K_{\ge 0}$ is a point of non-differentiability of the function $\chi$ with $\nabla_{e_\ell} \chi(\pi) \ne -\nabla_{-e_\ell} \chi(\pi)$ for an $\ell \in \{1, \ldots, K\}$, then there exists at least one $i \in N$ with $\pi_\ell = \hat a_i^\ell$ or $\hat a_i^\ell - \pi_\ell = \hat a_i^k - \pi_k$ for one $k \ne \ell$.

Proof. First, recall the definitions of the directional derivative in the direction $e_\ell$ and of the negative directional derivative in the direction $-e_\ell$ which will be used in this proof:
\[
\nabla_{e_\ell} \chi(\pi) = \lim_{h \searrow 0} \frac{\chi(\pi + h e_\ell) - \chi(\pi)}{h} = \lim_{h \searrow 0} \frac{\chi(\pi_1, \ldots, \pi_{\ell-1}, \pi_\ell + h, \pi_{\ell+1}, \ldots, \pi_K) - \chi(\pi)}{h},
\]
\[
-\nabla_{-e_\ell} \chi(\pi) = \lim_{h \searrow 0} \frac{\chi(\pi - h e_\ell) - \chi(\pi)}{-h} = \lim_{h \searrow 0} \frac{\chi(\pi_1, \ldots, \pi_{\ell-1}, \pi_\ell - h, \pi_{\ell+1}, \ldots, \pi_K) - \chi(\pi)}{-h}.
\]
To show the claim, we first partition the item set $N$ into subsets defined by the band which specifies the max-term in $\mathrm{DEV}_\pi(i)$. We define such a partition with respect to $\pi$, $\pi + h e_\ell$ as well as $\pi - h e_\ell$. Based on these partitions and inherent subset relations (among the subsets of different partitions), we compute a closed-form expression of the numerators in the directional derivatives stated above. By considering the limit value, we obtain the theorem.

As indicated above, for $h \to 0$, a difference between $\chi(\pi + h e_\ell)$ and $\chi(\pi)$ can only occur due to differences in the max-term in the definition of $\chi$ in (9). We now define partitions of the set of items $N$ based on the difference values $\hat a_i^k - \pi_k$ which determine this maximum:
\[
I_0 := \big\{ i \in N \mid \hat a_i^k - \pi_k < 0 \ \forall k \in \{1, \ldots, K\} \big\},
\]
\[
I_k := \big\{ i \in N \mid \hat a_i^k - \pi_k > \hat a_i^{k'} - \pi_{k'} \ \forall k' \le k-1,\ \hat a_i^k - \pi_k \ge \hat a_i^{k'} - \pi_{k'} \ \forall k' \ge k+1,\ \text{and } \hat a_i^k - \pi_k \ge 0 \big\}.
\]
The set $I_0$ is the set of items for which all difference values (deviation minus dual) are negative. For band $k$, the set $I_k$ is the set of items for which $k$ is the smallest index such that the maximum in (5) is defined by the corresponding difference $\hat a_i^k - \pi_k$. Obviously, the sets $I_k$ for all bands $k \in \{0, 1, \ldots, K\}$ are pairwise disjoint, defining a partition $N = I_0 \cup \bigcup_{k=1}^{K} I_k$ at the point $\pi$. Analogously, for the point $\pi + h e_\ell$, we define sets $I_0^h$ and $I_k^h$ by replacing $\hat a_i^\ell - \pi_\ell$ by $\hat a_i^\ell - (\pi_\ell + h)$, and for the point $\pi - h e_\ell$, sets $I_0^{-h}$ and $I_k^{-h}$ by replacing $\hat a_i^\ell - \pi_\ell$


by $\hat a_i^\ell - (\pi_\ell - h)$. These sets are also pairwise disjoint and thus
\[
N = I_0 \cup \bigcup_{k=1}^{K} I_k = I_0^h \cup \bigcup_{k=1}^{K} I_k^h = I_0^{-h} \cup \bigcup_{k=1}^{K} I_k^{-h}. \tag{10}
\]
The transition from the point $\pi + h e_\ell$ to the point $\pi$ and then to $\pi - h e_\ell$ leads to the following subset relations:

(a) $I_0^{-h} \subseteq I_0 \subseteq I_0^h$,
(b) $I_\ell^h \subseteq I_\ell \subseteq I_\ell^{-h}$,
(c) $I_k^{-h} \subseteq I_k \subseteq I_k^h$ for all $k \ne \ell$.

This means, the changes in the sets by transition, e.g., from $\pi + h e_\ell$ to $\pi$, can only be caused by items moving from $I_\ell^h$ to $I_0 \cup \bigcup_{k \ne \ell} I_k$ since $\hat a_i^\ell - (\pi_\ell + h) \le \hat a_i^\ell - \pi_\ell$ and all other values remain the same. Hence, the following subset relations are also immediate:

(d) $I_\ell \setminus I_\ell^h \subseteq (I_0^h \setminus I_0) \cup \bigcup_{k \ne \ell} (I_k^h \setminus I_k)$,
(e) $I_k^h \setminus I_k \subseteq I_\ell \setminus I_\ell^h$ for all $k \ne \ell$; in particular $\bigcup_{k \ne \ell} (I_k^h \setminus I_k) \subseteq I_\ell \setminus I_\ell^h$,
(f) $I_\ell^{-h} \setminus I_\ell \subseteq (I_0 \setminus I_0^{-h}) \cup \bigcup_{k \ne \ell} (I_k \setminus I_k^{-h})$, and
(g) $\bigcup_{k \ne \ell} (I_k \setminus I_k^{-h}) \subseteq I_\ell^{-h} \setminus I_\ell$.

For $i \in I_\ell \setminus I_\ell^h$, by (d) either

(i) $i \in I_0^h \setminus I_0$, i.e., $\hat a_i^k - \pi_k < 0$ for all $k \ne \ell$ but $\hat a_i^\ell - \pi_\ell \ge 0$ and $\hat a_i^\ell - (\pi_\ell + h) < 0$; hence, for $h$ small, $\pi_\ell = \hat a_i^\ell$, or
(ii) $i \in \bigcup_{k \ne \ell} (I_k^h \setminus I_k)$, i.e., $\hat a_i^k - \pi_k \le \hat a_i^\ell - \pi_\ell$ and $\hat a_i^k - \pi_k > \hat a_i^\ell - (\pi_\ell + h)$ for at least one $k \ne \ell$; hence, for $h$ small, $\hat a_i^k - \pi_k = \hat a_i^\ell - \pi_\ell$.

Based on these relations, we can now rewrite the numerators of the directional derivatives as follows:
\[
\chi(\pi + h e_\ell) - \chi(\pi) = \sum_{k \ne \ell} \Gamma_k \pi_k + \Gamma_\ell (\pi_\ell + h) + \sum_{i \in \bigcup_{k \ne \ell} I_k^h} \max_{k \ne \ell}\{\hat a_i^k - \pi_k\} + \sum_{i \in I_\ell^h} \big(\hat a_i^\ell - (\pi_\ell + h)\big)
\]
\[
\qquad - \sum_{k=1}^{K} \Gamma_k \pi_k - \sum_{i \in \bigcup_{k \ne \ell} I_k} \max_{k \ne \ell}\{\hat a_i^k - \pi_k\} - \sum_{i \in I_\ell} (\hat a_i^\ell - \pi_\ell)
\]
\[
\overset{(b),(c)}{=} \Gamma_\ell h + \sum_{i \in \bigcup_{k \ne \ell} (I_k^h \setminus I_k)} \max_{k \ne \ell}\{\hat a_i^k - \pi_k\} - |I_\ell^h| h - \sum_{i \in I_\ell \setminus I_\ell^h} (\hat a_i^\ell - \pi_\ell)
\]
\[
\overset{(e),(i),(ii)}{=} (\Gamma_\ell - |I_\ell^h|) h.
\]
By means of an analogous argumentation applying (b), (c), (f), and (g), we get
\[
\chi(\pi - h e_\ell) - \chi(\pi) = (|I_\ell| - \Gamma_\ell) h.
\]
Hence,
\[
\nabla_{e_\ell} \chi(\pi) \ne -\nabla_{-e_\ell} \chi(\pi)
\;\Leftrightarrow\; \lim_{h \searrow 0} \frac{\chi(\pi + h e_\ell) - \chi(\pi)}{h} \ne \lim_{h \searrow 0} \frac{\chi(\pi - h e_\ell) - \chi(\pi)}{-h}
\]
\[
\;\Leftrightarrow\; \lim_{h \searrow 0} \big(\Gamma_\ell - |I_\ell^h|\big) \ne \lim_{h \searrow 0} \big(\Gamma_\ell - |I_\ell|\big)
\;\Leftrightarrow\; \lim_{h \searrow 0} |I_\ell^h| \ne |I_\ell|
\;\Leftrightarrow\; \exists\, i \in I_\ell \setminus I_\ell^h \ \text{for } h \searrow 0,
\]
which, by (i) and (ii), means $\pi_\ell = \hat a_i^\ell$ or $\hat a_i^\ell - \pi_\ell = \hat a_i^k - \pi_k$ for at least one $i \in N$ and $k \ne \ell$. This concludes the proof. □
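The function $\chi$ from (9) is cheap to evaluate, so the structure asserted by Theorem 4.3 (together with Lemma 4.4 below) can be probed numerically on small integer instances via a grid search. An illustrative sketch (instance data invented for the test):

```python
from itertools import product

def chi(pi, dev, Gamma):
    """chi(pi) from (9): sum_k Gamma_k pi_k + sum_i DEV_pi(i),
    with all items selected (x_i = 1)."""
    dev_part = sum(max(max(d - p for d, p in zip(dev_i, pi)), 0)
                   for dev_i in dev)
    return sum(g * p for g, p in zip(Gamma, pi)) + dev_part

# invented 2-band instance; dev[i] = (hat-a_i^1, hat-a_i^2)
dev = [[2, 5], [3, 4], [1, 6]]
Gamma = [1, 1]
best_pi = min(product(range(7), range(7)),
              key=lambda pi: chi(pi, dev, Gamma))
# at the minimizer, some pi_l coincides with a band-l deviation value (or 0)
```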

We have now shown that, in an optimal solution of (3), either a dual variable $\pi_\ell$ is equal to a deviation value $\hat a_i^\ell$ or two difference values of the form $\hat a_i^k - \pi_k$ for two different bands coincide. The following lemma excludes the latter case from the space of optimal solutions, but $\pi_k = 0$ remains possible.

Lemma 4.4. For a point $\pi^\star \in \mathbb{R}^K_{\ge 0}$ with $\pi^\star_k \notin \{\hat a_i^k \mid i \in N\} \cup \{0\}$ for all $k \in \{1, \ldots, K\}$ and $\hat a_j^\ell - \pi^\star_\ell = \hat a_j^\kappa - \pi^\star_\kappa > 0$ for one $j \in N$ and $\ell, \kappa \in \{1, \ldots, K\}$, $\ell \ne \kappa$, there exists a point $\pi \in \mathbb{R}^K_{\ge 0}$ with $\pi_k \in \{\hat a_i^k \mid i \in N\} \cup \{0\}$ for at least one $k \in \{1, \ldots, K\}$ and $\chi(\pi) \le \chi(\pi^\star)$.

Proof. Similarly to the proof of Theorem 4.3, we partition the item set into two subsets. The first contains all items for which the band specifying the max-term in $\mathrm{DEV}_\pi(i)$ is one of the bands with $\hat a_j^\ell - \pi^\star_\ell = \hat a_j^\kappa - \pi^\star_\kappa$. The remaining items form the second subset. Again by means of these subsets, we state a closed-form expression of the directional derivatives. We distinguish two cases: the directional derivative is negative, or it is strictly positive. In the first case, the dual vector $\pi^\star$ is increased along the corresponding direction until a point with the desired property is reached. Similarly, in the second case, $\pi^\star$ is decreased. The final step of the proof is to show that the value of the function $\chi$ at the newly created point is not larger than at $\pi^\star$.

First, we define $\alpha := \hat a_j^\ell - \pi^\star_\ell$ and $\mathcal{K} := \{k \in \{1, \ldots, K\} \mid \hat a_j^k - \pi^\star_k = \alpha\}$. We write $k \notin \mathcal{K}$ instead of $k \in \{1, \ldots, K\} \setminus \mathcal{K}$ for simplicity. Similar to the proof of Theorem 4.3, we partition the item set $N$ into two sets, one consisting of all items for which the maximum in $\mathrm{DEV}_{\pi^\star}(i)$ is attained in $\mathcal{K}$ and the other one containing the remaining items:
\[
I^\star_{\mathcal{K}} := \Big\{ i \in N \;\Big|\; \max\Big\{ \max_{k \in \mathcal{K}}\{\hat a_i^k - \pi^\star_k\},\ \max_{k \notin \mathcal{K}}\{\hat a_i^k - \pi^\star_k\},\ 0 \Big\} = \max_{k \in \mathcal{K}}\{\hat a_i^k - \pi^\star_k\} \Big\},
\qquad I^\star := N \setminus I^\star_{\mathcal{K}}.
\]
Then, for the directional derivatives in the direction $e_{\mathcal{K}}$ at $\pi^\star$, it holds that
\[
\nabla_{e_{\mathcal{K}}} \chi(\pi^\star) = -\nabla_{-e_{\mathcal{K}}} \chi(\pi^\star) = \sum_{k \in \mathcal{K}} \Gamma_k - |I^\star_{\mathcal{K}}|, \tag{11}
\]
see Lemma B.1 for details. If the derivative is negative, we increase all $\pi^\star_k$, $k \in \mathcal{K}$, simultaneously until we reach a point $\pi \in \mathbb{R}^K_{\ge 0}$ with $\pi_k \in \{\hat a_i^k \mid i \in N\} \cup \{0\}$ for (at least) one $k \in \{1, \ldots, K\}$. On the other hand, if the derivative is strictly positive, we decrease all $\pi^\star_k$, $k \in \mathcal{K}$, simultaneously until we reach such a point.

In case of a negative derivative, i.e., $\sum_{k \in \mathcal{K}} \Gamma_k - |I^\star_{\mathcal{K}}| \le 0$, we define the minimum value that can be added to the $\pi^\star_k$ without changing the partition of the item set as
\[
\varepsilon^+ := \min\Bigg\{
\min_{\substack{k \in \mathcal{K},\ i \in N:\\ \hat a_i^k - \pi^\star_k > 0}} \big\{\hat a_i^k - \pi^\star_k\big\},\;
\min_{\substack{k \in \mathcal{K},\ k' \notin \mathcal{K},\ i \in N:\\ \hat a_i^k - \pi^\star_k > \hat a_i^{k'} - \pi^\star_{k'}}} \big\{\hat a_i^k - \pi^\star_k - \hat a_i^{k'} + \pi^\star_{k'}\big\}
\Bigg\},
\]
i.e., the minimum difference between a dual variable $\pi^\star_k$ and a deviation value, or between two difference values. Note that this minimum exists since $\hat a_j^k - \pi^\star_k = \alpha > 0$. Now, we define
\[
\pi_k := \begin{cases} \pi^\star_k + \varepsilon^+, & k \in \mathcal{K} \\ \pi^\star_k, & k \notin \mathcal{K}. \end{cases} \tag{12}
\]

(12)

If ε+ is defined by the first minimum, the point π has the required property that πk ∈ {ˆ aki | i ∈ N } ∪ {0}. + k ⋆ k′ ⋆ ′ ′ Otherwise, ε = a ˆ i − πk − a ˆi + πk′ for one i ∈ N, k ∈ K, k ̸∈ K and we define α := a ˆki − πk and a new   ′ k ′ set K := k ∈ K | a ˆi − πk = α . Then we start again from the beginning repeating the same steps. We continue this procedure until one πk has the desired property. What remains to show is that the objective is not increased when increasing some πk⋆ by ε+ . For that ⋆ ⋆ purpose, we first show that the set IK remains the same. So, we define a set IK analogously to IK by replacing ⋆ ⋆ ⋆ every πk by πk . By the definition of these two sets, it holds IK ⊆ IK . We will now show IK ⊆ IK , i.e., these ⋆ and show i ∈ IK . We have sets are equal. To this end, we consider an item i ∈ IK ⋆ , k ∈ K and k ′ ̸∈ K: ˆki + πk⋆′ for all i ∈ IK • ε+ ≤ a ˆki − πk⋆ − a ′

max{ˆ aki − πk } = max{ˆ aki − πk⋆ − ε+ } k∈K

k∈K



≥ max{ˆ aki − πk⋆ − a ˆki + πk⋆ + a ˆki − πk⋆′ } k∈K ′

=a ˆki − πk⋆′

∀k ′ ̸∈ K

∀k ′ ̸∈ K.



Hence, maxk∈K {ˆ aki − πk } ≥ maxk′ ̸∈K {ˆ aki − πk′ }. ⋆ and k ∈ K: • ε+ ≤ a ˆki − πk⋆ for all i ∈ IK max{ˆ aki − πk } = max{ˆ aki − πk⋆ − ε+ } k∈K

k∈K

≥ max{ˆ aki − πk⋆ − a ˆki + πk⋆ } k∈K

= 0.

⋆ Hence, i ∈ IK and thus, IK ⊆ IK .

Then, for the objective function holds     χ(π) = Γk (πk⋆ + ε+ ) + Γk πk⋆ + max{ˆ aki − πk⋆ − ε+ } + max{(ˆ aki − πk⋆ )+ } k̸∈K

k∈K

=

K 

Γk πk⋆ + ε+

k=1

= χ(π ⋆ ) + ε+

 k∈K

Γk +

i∈IK

 ⋆ i∈IK



k∈K

⋆ max{ˆ aki − πk⋆ } − ε+ |IK |+ k∈K

i∈N \IK

 ⋆ i∈N \IK

k̸∈K

max{(ˆ aki − πk⋆ )+ } k̸∈K

 

⋆ Γk − |IK |

k∈K

≤ χ(π ⋆ ). The second case, when the directional derivative is strictly positive, can be handled quite analogously to the previous case, see Appendix B.  Theorem 4.5. There exists a DP which solves (4) in O(K!n(K+1) B), i.e., with a complexity linear in the knapsack capacity B.
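The enumeration underlying the proof of Theorem 4.5 can be sketched for $K = 2$: fix $\pi_\ell$ among the band-$\ell$ deviation values (or 0), restrict the other component via Lemma 4.2, and compare against a full grid search to confirm that the candidate set contains a minimizer of $\chi$. An illustrative sketch with invented instance data:

```python
from itertools import product

def pos(v):
    return max(v, 0)

def chi(pi, dev, Gamma):
    # chi(pi) from (9), all items selected
    return (sum(g * p for g, p in zip(Gamma, pi))
            + sum(pos(max(d - p for d, p in zip(dev_i, pi)))
                  for dev_i in dev))

def candidate_duals_K2(dev):
    """Candidate dual vectors per the proof of Theorem 4.5 (K = 2):
    O(n^2) pairs instead of a full integer grid."""
    cands = set()
    for l in (0, 1):
        k = 1 - l
        for pl in {d[l] for d in dev} | {0}:
            # Lemma 4.2: remaining component restricted by the fixed one
            for pk in {pos(d[k] - pos(d[l] - pl)) for d in dev} | {0}:
                pi = [0, 0]
                pi[l], pi[k] = pl, pk
                cands.add(tuple(pi))
    return cands

dev = [[2, 5], [3, 4], [1, 6]]
Gamma = [1, 1]
grid_min = min(chi(pi, dev, Gamma)
               for pi in product(range(7), range(7)))
cand_min = min(chi(pi, dev, Gamma) for pi in candidate_duals_K2(dev))
```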


Proof. By means of Theorem 4.3 and Lemma 4.4, we know that for an optimal π holds πℓ ∈ {ˆ aℓi | i ∈ N }∪{0} for at least one ℓ ∈ {1, . . . , K}. We will prove the claim by mathematical induction. For K = 2, we can either first choose π1 ∈ {ˆ a1i | i ∈ N } ∪ {0} or π2 ∈ {ˆ a2i | i ∈ N } ∪ {0} having n + 1 possibilities each and 2 possibilities to choose the starting ℓ ∈ {1, 2}. For a fixed π ¯ℓ with ℓ ∈ {1, 2}, the  k + remaining πk , k ̸= ℓ, lies in { a ˆi − (ˆ aℓi − π ¯ ℓ )+ | i ∈ N } ∪ {0} as shown in Lemma 4.2. Again, we have n + 1 possibilities for the optimal πk . For a fixed π ¯ , the optimal ρ is fixed, cf. Lemma 4.2, and the problem reduces to a knapsack problem which can be solved in O(nB) by the DP (7)–(8). In total, we have a runtime of O(2n3 B) for K = 2. Now, we assume that the lemma holds for K − 1, i.e., the (K − 1)-band RKP can be solved in O((K − 1)!nK B). For the K-band RKP, we choose a ℓ ∈ {1, . . . , K} and one πℓ from {ˆ aℓi | i ∈ N } ∪ {0}, i.e., we have K(n + 1) many possibilities to fix the first πℓ . After the fixation, the problem reduces to a (K − 1)-band RKP. Hence, in total we have a runtime of O(K!n(K+1) B).  Note, the set Π1 × · · · × ΠK is implicitly described in this proof but a closed form is too large to state for K > 2, see Section 6 for K = 2. 4.2. Uncertain objective coefficients Based on the results presented in this section, we give a polynomial algorithm to solve an optimization problem with uncertain objective coefficients, which improves the result in [12] and generalizes the corresponding Γ -robust result given in [17]. The problem with uncertain objective coefficients reads  min ci xi (13a) i∈N

s.t. x ∈ X,

(13b)

where the c_i are subject to uncertainty (c̄_i nominal value and ĉ_i^k deviation in band k) and the feasible region is X ⊆ {0,1}^n. The robust counterpart of (13) can then be formalized as

    min Σ_{i∈N} c̄_i x_i + max_{π ∈ Π1×···×ΠK} DEV(x, π)    (14a)

    s.t. x ∈ X,    (14b)

with

    DEV(x, π) := Σ_{k∈{1,…,K}} Γ_k π_k + Σ_{i∈N} max{ 0, max_{k∈{1,…,K}} {ĉ_i^k − π_k} } x_i.    (15)

Rewriting (14) by using (15), we get

    min { Σ_{k∈{1,…,K}} Γ_k π_k + min_{x∈X} Σ_{i∈N} ( c̄_i + max{ 0, max_{k∈{1,…,K}} {ĉ_i^k − π_k} } ) x_i  |  π ∈ Π1 × ··· × ΠK }.    (16)

The set Π1 × ··· × ΠK is defined as explained before by replacing â by ĉ.

Corollary 4.6. The problem (13) with uncertain objective coefficients can be solved by solving O(K! n^K) problems with certain objective coefficients.

Proof. Solve the equivalent formulation (16) and use the same argumentation as in the proof of Theorem 4.5. □
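The enumeration behind Theorem 4.5 can be illustrated for K = 2. The following is a minimal Python sketch, not the authors' implementation: it assumes integer data, takes the dualized item weight as ā_i + max{0, â_i^1 − π1, â_i^2 − π2} and the reduced capacity as B − Γ1·π1 − Γ2·π2 (following the KP stated in Section 5.2), and solves a classical O(nB) knapsack DP for every candidate pair (π1, π2).

```python
def knapsack(profits, weights, capacity):
    """Classical O(n*B) dynamic program for the 0/1 knapsack problem."""
    dp = [0] * (capacity + 1)
    for p, w in zip(profits, weights):
        for b in range(capacity, w - 1, -1):   # iterate backwards: 0/1 choice
            dp[b] = max(dp[b], dp[b - w] + p)
    return dp[capacity]


def two_band_rkp(profits, a_bar, a1, a2, gamma1, gamma2, capacity):
    """2-band robust KP via enumeration of candidate (pi1, pi2) pairs.

    Candidate sets follow Theorem 4.5: fix pi_l in {a^l_i | i} u {0} for one
    band l; the other band's candidates are {(a^k_i - (a^l_i - pi_l)^+)^+} u {0}.
    """
    n = len(profits)
    pos = lambda v: max(v, 0)
    best = 0
    for first, other in ((a1, a2), (a2, a1)):
        for pf in set(first) | {0}:
            cands = {pos(other[i] - pos(first[i] - pf)) for i in range(n)} | {0}
            for po in cands:
                pi1, pi2 = (pf, po) if first is a1 else (po, pf)
                cap = capacity - gamma1 * pi1 - gamma2 * pi2
                if cap < 0:
                    continue
                # dualized weights for this fixed pi (cf. (15) for weights)
                w = [a_bar[i] + max(0, a1[i] - pi1, a2[i] - pi2)
                     for i in range(n)]
                best = max(best, knapsack(profits, w, cap))
    return best
```

By weak duality, every fixed π yields a robust-feasible solution, so the maximum over all candidate pairs is exact once the candidate sets are complete, which is exactly what Theorem 4.5 establishes.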


5. Practical improvements

By Theorem 4.3 and Lemma 4.4, we can now define alternatives to the sets Πk as defined in (6). However, a straightforward application of the former results leads to sets which are still quite large. Hence, in this section, we present some improvements to reduce the size of the sets and to speed up the solving process in general.

5.1. Reducing the size of Πk

For a reduction of the number of π-vectors to be considered in the DP, we define a mapping per band which sorts the corresponding deviation values non-increasingly:

    h_k : N → N  with  â^k_{h_k(i)} ≥ â^k_{h_k(i+1)}  ∀i ∈ {1, …, n−1}, k ∈ {1, …, K}.

Lemma 5.1. There exists a point π defining a minimum of the function χ with π_k ∈ {â_i^k | i ∈ N} ∪ {0} for at least one k ∈ {1, …, K} (cf. Lemma 4.4) and

    π_k ≤ â^k_{h_k(Γ_k+1)}  ∀k ∈ {1, …, K}.

The proof can be found in Appendix B. For a further reduction of the necessary values of π_k, recall that â_i^k < â_i^(k+1) for all bands k ∈ {1, …, K−1} and all items i ∈ N.

Lemma 5.2. There exists a point π defining a minimum of the function χ satisfying the properties of Lemma 5.1, i.e.,

(a) π_k ∈ {â_i^k | i ∈ N} ∪ {0} for at least one k ∈ {1, …, K},
(b) π_k ≤ â^k_{h_k(Γ_k+1)} for all k ∈ {1, …, K}, and
(c) π_(k+1) ≥ π_k ≥ 0 for all k ∈ {1, …, K−1}.

Again, the proof can be found in Appendix B. By means of this lemma, we only have to consider vectors π in the DP with 0 ≤ π_k ≤ π_(k+1) for all k ∈ {1, …, K−1} and π_k ≤ â^k_{h_k(Γ_k+1)} for all k ∈ {1, …, K}.

Corollary 5.3. For K = 2, there exists a point π defining a minimum of the function χ satisfying the properties of Lemma 5.2, where property (c) is replaced by: π2 > π1 ≥ 0 or π2 = π1 = 0.

5.2. Speeding up the solving process

The decrease of the size of the set Π := Π1 × ··· × ΠK achieved in the previous section is only of practical relevance, as the worst-case size remains the same. However, even though the size can be reduced in practice, solving a classical KP for every possible π ∈ Π still consumes too much time. Hence, in this subsection we order the elements in Π so that the most relevant elements are considered first and the least relevant are most likely not taken into account at all.
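The band-wise threshold of Lemma 5.1 is simply the (Γ_k + 1)-largest deviation value of band k. A small illustrative sketch (function name is ours, not the paper's):

```python
def band_threshold(a_k, gamma_k):
    """Return a^k_{h_k(gamma_k + 1)}: the (gamma_k + 1)-largest deviation in
    band k. By Lemma 5.1, an optimal pi_k never needs to exceed this value."""
    # h_k: indices of band-k deviations, sorted non-increasingly by value
    h = sorted(range(len(a_k)), key=lambda i: a_k[i], reverse=True)
    return a_k[h[gamma_k]]   # 0-based index gamma_k = (gamma_k + 1)-th largest
```

For example, with band deviations [5, 9, 2, 7] and Γ_k = 1, the candidate values for π_k can be restricted to those not exceeding 7.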


For a fixed π̄ ∈ Π, the following KP has to be solved:

    max Σ_{i∈N} p_i x_i
    s.t. Σ_{i∈N} (ā_i + DEV_π̄(i)) x_i ≤ B − Σ_{k=1}^K Γ_k π̄_k
         x_i ∈ {0,1}  ∀i ∈ N.

An upper bound for this problem is an optimal solution of the LP relaxation, which can be computed by a greedy algorithm. This algorithm sorts the items non-increasingly by their ratio p_i / (ā_i + DEV_π̄(i)) and includes the items with the largest ratio in the knapsack as long as the capacity is not exceeded. To use the capacity completely, a fraction of the last element, the so-called critical item, may be put into the knapsack. This bound can be improved as described in [1] by interchanging items closely before or after the critical item.

We save this upper bound for every π ∈ Π and sort the elements non-increasingly with respect to the bound: a vector π with a large upper bound has a higher potential to increase the objective. Furthermore, if the upper bound of π is less than or equal to the current best known solution, an optimal solution of the corresponding KP cannot improve the best known solution and thus the KP does not have to be solved to optimality. Additionally, the whole solving process then terminates, as the upper bound of every subsequent element is not higher than the current best solution.
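The greedy bound just described (Dantzig's LP-relaxation bound) can be sketched as follows; this is a minimal illustration without the item-interchange improvement of [1], and it assumes strictly positive weights:

```python
def lp_upper_bound(profits, weights, capacity):
    """LP-relaxation bound: sort items by profit/weight ratio non-increasingly,
    pack greedily, then add the critical item fractionally."""
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    bound, cap = 0.0, capacity
    for i in order:
        if weights[i] <= cap:                 # item fits completely
            bound += profits[i]
            cap -= weights[i]
        else:                                 # critical item: take a fraction
            bound += profits[i] * cap / weights[i]
            break
    return bound
```

In the setting above, `weights[i]` would be ā_i + DEV_π̄(i) and `capacity` would be B − Σ_k Γ_k π̄_k for the fixed π̄ under consideration.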

6. Computational study

In this section, we present an extensive computational study to evaluate the performance of the derived algorithms in practice. As multi-band RKPs are quite complex, we focus on two bands for the main part of this study. In Section 6.6, we briefly present additional results to provide an impression of the case of more than two bands. The considered algorithms are summarized in Table 1. The DP presented in Section 3 utilizing the sets Πk as defined in (6) is the basic algorithm and is denoted by simple. When replacing the Cartesian product of the sets Πk by Πreduced, which is based on the results from Section 5.1 and will be defined in (17), we obtain the algorithm reduced. If we order the (π1, π2)-pairs as described in Section 5.2 in the algorithm reduced, we call the resulting algorithm improved. Finally, to evaluate the performance of the best DP, we additionally solve the ILP (4) for all instances and call this algorithm ilp. In all three DPs, we solve the underlying KPs for fixed variables π̄ via the algorithm MinKnap from Pisinger [3].

In the following, we derive a closed-form expression of the set Πreduced which is used in the algorithm reduced. Assuming that π1 ∈ {â_1^1, …, â_n^1} in an optimal solution, we first define

    Π1 := {0} ∪ { â^1_{h_1(i)} | i ≤ Γ1 + 1 and â^1_{h_1(i)} ≤ B/Γ1 },

and then, for every π1 ∈ Π1, we define

    Π2(π1) := {0} ∪ { (â_i^2 − (â_i^1 − π1)^+)^+ | i ∈ N and π1 < (â_i^2 − (â_i^1 − π1)^+)^+ ≤ min{ â^2_{h_2(Γ2+1)}, B/Γ2 } }.


Table 1
Summary of algorithms considered in the computational study.

  simple    The DP using the simple sets Πk as defined in (6), with complexity O(nB^3).
  reduced   The DP using the set Πreduced as defined in (17), with complexity O(2! n^3 B).
  improved  As algorithm reduced, but the elements in Πreduced are ordered non-increasingly regarding
            the upper bounds as described in Section 5.2, and the solving process stops as soon as the
            current upper bound is less than or equal to the best solution.
  ilp       The ILP (4) is solved with cplex.

Furthermore, when assuming π2 ∈ {â_1^2, …, â_n^2} in an optimal solution, we define

    Π2′ := {0} ∪ { â^2_{h_2(i)} | i ≤ Γ2 + 1 and â^2_{h_2(i)} ≤ B/Γ2 },

and, for every π2 ∈ Π2′,

    Π1′(π2) := {0} ∪ { (â_i^1 − (â_i^2 − π2)^+)^+ | i ∈ N and (â_i^1 − (â_i^2 − π2)^+)^+ ≤ min{ â^1_{h_1(Γ1+1)}, B/Γ1, π2 − 1 } }.

Combining these sets, we get the set of all (π1, π2)-pairs which have to be considered in the DP as

    Πreduced := ( ⋃_{π1∈Π1} {π1} × Π2(π1) ) ∪ ( ⋃_{π2∈Π2′} Π1′(π2) × {π2} ).    (17)

All computations are performed on a Linux machine with a 3.40 GHz Intel Core i7-3770 processor and a general CPU time limit of 2 h.

6.1. Generation of test instances

Based on Klopfenstein and Nace [21] and Pisinger [22], we randomly generate instances to study the performance of the presented DP including the different derived settings. In this subsection, we explain the setting for the generation of the instances. The nominal weights are uniformly distributed in a given interval [1, R], where R = 100 or R = 1000 defines the range, and the profits are proportional to the nominal weights plus a fixed charge, i.e., p_i = ā_i + 10. Hence, the instances are strongly correlated, see [22]. Furthermore, the knapsack capacity B is randomly chosen from [ (1/3) Σ_{i∈N} ā_i, (2/3) Σ_{i∈N} ā_i ] ∩ Z. To create two-band robust instances, we additionally have to compute two deviation values â_i^1 and â_i^2 for each i ∈ N with â_i^1 < â_i^2. For that purpose, we define a maximum deviation δ ∈ {0.2, 0.5, 1.0} relative to the nominal weight, i.e., a maximum deviation of at most 20%, 50% or 100% of the nominal weight. Hence, â_i^2 = δ ā_i. Assuming an (almost) equal bandwidth in each band, we set â_i^1 = â_i^2 / 2. Finally, we consider three different numbers of items n ∈ {200, 500, 1000}. For each setting, we generate a series of ten instances.

The number of possible values for the robustness parameters Γ1 and Γ2 is quite large. To focus on the most meaningful values, we assume a normal distribution of the deviation values (see Fig. 1). 95% of the deviation values lie between −1.96σ and 1.96σ, where σ denotes the standard deviation (cf. the cumulative distribution function of the standard normal distribution). In particular, 97.5% of the values are less than 1.96σ. This point is reflected by Γ2. As we are interested only in positive deviations, we consider 0.98σ = 1.96σ/2 for Γ1. At this point, we have a probability of 83.65%. Furthermore, we have a probability of 33.65% = 83.65% − 50%


Fig. 1. Sketch to explain the computation of the robustness parameters Γ1 and Γ2 .

between 0σ and 0.98σ and a probability of 13.85% = 97.5% − 83.65% between 0.98σ and 1.96σ. This means we have the following ratio:

    Γ1 / Γ2 = 33.65% / 13.85% = 2.43.

Thus, we assume Γ2 ∈ {⌈0.01n⌉, ⌈0.02n⌉, ⌈0.03n⌉, ⌈0.04n⌉, ⌈0.05n⌉}, where Γ2 = ⌈0.05n⌉ means that at most 5% of all weights deviate, and Γ1 = ⌈2.43 · Γ2⌉. Note, we round 2.43 · Γ2 to obtain only integer values for Γ1, since the DP (8) requires that the values of all parameters are integer.

6.2. Comparison of different sets Π

First, we compare the performance of the different Π-sets, i.e., algorithm reduced to simple. For that purpose, we examine the solving times of the two algorithms and compute a speed-up factor for reduced. For example, a solving time of 100 s for simple and a solving time of 10 s for reduced gives a speed-up factor of 10. For every setting, i.e., range R, number of items n, and maximum deviation δ, we take the mean over the ten instances. Furthermore, since differences in the solving time for different values of Γ2 are marginal, we also average over Γ2. A graphical display of the results can be found in Fig. 2; for more detailed results, see Tables C.5 and C.6 in Appendix C.

The average speed-up factor we can achieve ranges from 1.31 to 12.20. In general, for higher range R and higher δ, we can also achieve higher speed-up factors. In simple, the size of the sets Πk, which is defined by the actual deviation values, has the strongest impact on the computing time, whereas the number of items plays a minor role. In contrast, the number of items influences the computing time of reduced most considerably, while the deviation values are less important. Hence, the high speed-up factors achieved by reduced for n = 200 cannot be retained for higher numbers of items. The individual minimum speed-up factor, i.e., without averaging, achieved by reduced ranges from 1 to 14.09.

6.3. Evaluation of implementational improvements

We now compare improved to reduced by means of speed-up factors.
Again, we average over instances and Γ2-values. A graphical display of the results is depicted in Fig. 3; for more detailed results, see Tables C.6


Fig. 2. Speed-up factors of algorithm reduced normalized to simple, averaged over the ten instances and Γ2-values, for the different settings and sizes.

Fig. 3. Speed-up factors of algorithm improved normalized to reduced, averaged over the ten instances and Γ2-values, for the different settings and sizes.

and C.7 in Appendix C. For R = 100, δ = 0.2 and n = 200, no speed-up factor could be computed, since the solving times of improved are strictly below 10^(−2) s. For better readability, we nevertheless draw a bar of height 1. For the remaining settings, the average speed-up factor achieved by improved ranges from 4.67 to 86.05. For an increasing number of items and R = 1000, the speed-up factor also increases, because the stop criterion (the upper bound of a (π1, π2)-pair falling below the current best known solution) gains more significance for a higher number of items. For example, the stop criterion catches after 157 (π1, π2)-pairs have been considered for an instance with n = R = 1000, δ = 1 and Γ2 = ⌈0.01n⌉, while 240 382 pairs are considered with reduced. On the other hand, for one of the smallest instances, i.e., n = 200, R = 1000, δ = 0.2 and Γ2 = ⌈0.01n⌉, the number of considered (π1, π2)-pairs is only reduced from 155 to 8. Hence, the reduction for the highest number of items is the largest. For R = 100, these effects cannot be seen as clearly, since the problems are quite small and can be solved in less than 1.5 s with both algorithms.


Table 2
Absolute solving times in seconds of ilp and improved for different settings, averaged over the ten instances and Γ2. The number of instances (out of 50) not exceeding the time limit with ilp is given in parentheses.

                   δ = 0.2                δ = 0.5                δ = 1.0
  n          R = 100    R = 1000    R = 100    R = 1000    R = 100    R = 1000
ilp
  200        671 (44)   540 (44)    578 (37)   1388 (37)   626 (29)   885 (25)
  500       1666 (36)   490 (13)   1501 (23)    571 (12)  1718 (25)   882 (14)
  1000      1966 (36)    83 (4)    1161 (17)    103 (6)    362 (7)    105 (4)
improved
  200         0.000      0.213       0.007      0.360       0.025      0.383
  500         0.002      0.613       0.028      2.075       0.103      4.030
  1000        0.012      1.073       0.059      5.705       0.219     16.116

If we do not average, the minimum speed-up factor that can be achieved by improved is 2 and the maximum is 500.

6.4. Comparison to the ILP formulation

To solve the ILP (4), we use cplex 12.4 [23]. The absolute solving times averaged over the ten instances and Γ2 for this algorithm and improved are displayed in Table 2; see Tables C.7 and C.8 in Appendix C for more detailed results. Note, some of the 50 problems per average value could not be solved within the time limit by ilp. These instances are not considered in the average for ilp. However, we give the number of the remaining instances in parentheses (see Table C.9 for the corresponding numbers without the averaging over Γ2). The lowest average time consumption for ilp is 105 s (R = 1000, δ = 1.0, n = 1000), but only 4 out of 50 instances are considered in this value, i.e., could be solved within the time limit. In contrast, improved needs at most 16 s on average for these 50 largest instances, even when we leave the unsolved instances of ilp in. Hence, for the 2-band RKP, the presented DP improved clearly outperforms ilp. As mentioned before, the runtime of algorithm improved strongly depends on the number of items: if the number of items is doubled, the runtime also increases by a factor of roughly 2. Such a factor cannot be computed for algorithm ilp, as more quantities than the number of items influence its running time.

6.5. Larger instances

The solving times of the DP improved are quite low for all instances (at most 18.01 s). Hence, in this subsection we briefly study the performance of improved for larger instances. Therefore, we generate instances with R ∈ {1000, 10 000}, n ∈ {500, 1000, 2000, 5000, 10 000}, and δ as before. Again, we generate ten instances for each setting. Note, we generate only instances for combinations of R and n which have not been considered before. The absolute solving times are displayed in Fig. 4; for more details, see Appendix C.
For R = 10 000, n = 10 000 and δ = 0.5, only two instances could be solved within the time limit, whereas no instance for δ = 1.0 could be solved. For all other settings, the instances could be solved within a reasonable time of at most 5811.76 s on average.

6.6. More bands

To give an impression of the performance of the algorithms reduced, improved and ilp for a multi-band RKP with more than two bands, we additionally generate ten instances for each K ∈ {3, 4, 5} using the simplest setting R = 100, δ = 0.2 and n = 200. Similar to the two-band case, we test ΓK ∈ {⌈0.01n⌉, ⌈0.02n⌉, ⌈0.03n⌉, ⌈0.04n⌉, ⌈0.05n⌉} and compute the remaining Γk-values according to the cumulative distribution function of the standard normal distribution with an equidistant partitioning of the interval [0σ, 1.96σ]; cf. Fig. 1.


Fig. 4. Absolute solving times in seconds of algorithm improved, averaged over the ten instances and Γ2-values, for the different settings and sizes.

In Table 3, we state the absolute solving times of the algorithms reduced, improved and ilp averaged over the ten instances and ΓK-values for K ∈ {2, 3, 4, 5}. Again, we leave out instances that cannot be solved to optimality by ilp and give the number of the remaining instances in brackets. For K ≤ 4, reduced and improved solve all 50 instances and are significantly faster than ilp. As expected, improved gives the best results. However, for more than four bands, the number of π-vectors that have to be considered in the dynamic programming algorithms explodes, and these algorithms cannot deliver any (meaningful) result. In contrast, the five-band RKP is solved by ilp without new difficulties (the number of solved instances is more or less constant). The reason for this is, on the one hand, that the complexity of the compact ILP formulation (4) is hardly increased by the addition of a further band: one additional variable and n additional constraints. On the other hand, ilp uses up to eight threads of the machine, while the dynamic programming algorithms are not parallelized and can thus use only one thread. In summary, for up to four bands, the algorithm improved performs significantly better than ilp, while ilp is the only algorithm able to solve the instances for K = 5. However, we would like to point out that the considered test instances are fairly small and, based on the results for K = 2, it is most unlikely that ilp would perform just as well for larger instances.

7. Conclusions

In this paper, we considered a generalization of the robust knapsack problem, the multi-band RKP, where the weights of the items can have several deviation values lying in different bands. Based on the compact formulation, we developed a DP with a complexity linear in the number of items n. However, the complexity also depends on the knapsack capacity, which is raised to the power of the number of bands K plus one.
Since the knapsack capacity is usually higher than the number of items, we additionally developed a DP with a complexity that is linear in the capacity, i.e., O(K! n^(K+1) B). From this, it can be concluded that a combinatorial optimization problem with uncertain objective can be solved by solving O(K! n^K) similar problems with certain objective. We improved the developed DP algorithm in practice and compared the performance of the resulting algorithm to the former two DPs by means of solving times in an extensive computational study with randomly generated instances of various sizes in the case of two deviation bands. On


Table 3
Absolute solving times in seconds of algorithms reduced, improved and ilp, averaged over the ten instances and ΓK-values for K ∈ {2, 3, 4, 5}.

  K      reduced    improved    ilp
  2        0.03        0.01     382.10 (45)
  3        0.68        0.44     133.95 (35)
  4      340.71      337.68     528.03 (39)
  5         –           –       606.29 (37)

the one hand, the results demonstrate a clear benefit of the DP with a complexity linear in the capacity compared to the first DP. On the other hand, the results show the effectiveness of the presented improvements in practice. A comparison of the improved DP and cplex solving the compact ILP illustrates that the improved DP clearly outperforms the ILP. Additionally, we tested the performance of the improved DP for large instances with up to 10 000 items, whereupon most instances could be solved within a time limit of 2 h. Finally, to get an impression of the performance of the presented algorithms in the case of more than two bands, we evaluated the absolute solving times. For up to four bands, the improved DP clearly outperformed the ILP, while only the ILP was able to solve instances with five bands, due to the only insignificant increase in the complexity of the compact formulation.

As future work, we intend to extend the computational study to instances generated with different settings and to study the performance of the improved DP for applications containing a multi-band RKP, such as (wireless) network planning problems. In this context, an evaluation of the price of robustness of the multi-band approach as a refinement of the Γ-robustness is important. Furthermore, theoretical bounds on the constraint violation based on probabilistic analysis, as stated in [15,12], require further research to (dis-)prove their tightness.

Acknowledgments

This work was supported by the DFG research grants KO2311/3-1 and SCHM2643/5-1. We would like to thank Christina Büsing for the fruitful discussions yielding meaningful test instances.

Appendix A. Counterexample

As mentioned in Section 3, when conveying the algorithm stated in Mattia [15] for a combinatorial optimization problem with uncertain objective coefficients to the multi-band RKP, the sets Πk would be defined as

    Πk = Π̃k := {â_i^k | i ∈ N} ∪ {0}  ∀k ∈ {1, …, K},    (A.1)

leading to an algorithm with complexity O((n+1)^(K+1) B). However, this algorithm is not correct. By means of the following counterexample of a two-band RKP with three items, we show that an optimal solution of (3) need not satisfy π1 ∈ Π̃1 and π2 ∈ Π̃2; hence, these sets cannot be used to compute an optimal solution of the two-band RKP. The considered deviation values are given in Table A.4. Further, we set Γ1 = 2 and Γ2 = 1. Then, the lowest objective value for the dual problem (3) we can compute by using the sets in (A.1) is 13, with corresponding solution π1 = 0, π2 = 3, ρ1 = 5, ρ2 = 4, ρ3 = 1. However, the optimal solution value is 12 with, e.g., π1 = 0, π2 = 4, ρ1 = 4, ρ2 = 3, ρ3 = 1, and thus π2 ∉ Π̃2.
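The counterexample can be checked numerically by evaluating the dual objective χ(π) = Σ_k Γ_k π_k + Σ_i max{0, max_k{â_i^k − π_k}} (with all three items packed). A small sketch:

```python
from itertools import product


def chi(pi, devs, gammas):
    """Dual objective (3) with all items selected:
    sum_k Gamma_k * pi_k + sum_i max(0, max_k (a^k_i - pi_k))."""
    val = sum(g * p for g, p in zip(gammas, pi))
    for i in range(len(devs[0])):
        val += max(0, max(d[i] - p for d, p in zip(devs, pi)))
    return val


a1, a2 = [4, 3, 1], [8, 7, 3]     # deviation values from Table A.4
gammas = (2, 1)                    # Gamma_1 = 2, Gamma_2 = 1

# Restricted to the sets in (A.1), the best achievable value is 13 ...
tilde = [set(a1) | {0}, set(a2) | {0}]
best_tilde = min(chi(pi, (a1, a2), gammas) for pi in product(*tilde))

# ... while an exhaustive integer grid finds the optimum 12 at pi = (0, 4).
best = min(chi(pi, (a1, a2), gammas)
           for pi in product(range(9), range(9)))

print(best_tilde, best)  # 13 12
```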


Table A.4
Deviation values for the counterexample.

  Item    â_i^1    â_i^2
  1         4        8
  2         3        7
  3         1        3

Appendix B. Omitted proofs

Proof of Lemma 4.2. If π̄_k is fixed for all k ≠ ℓ, then

    χ(π̄, π_ℓ) = Σ_{k≠ℓ} Γ_k π̄_k + Γ_ℓ π_ℓ + Σ_{i∈N} max{ â_i^ℓ − π_ℓ, max_{k≠ℓ} {â_i^k − π̄_k}, 0 }
              = Σ_{k≠ℓ} Γ_k π̄_k + Σ_{i∈N} max_{k≠ℓ} {(â_i^k − π̄_k)^+} + χ̃(π_ℓ)

with

    χ̃(π_ℓ) := Γ_ℓ π_ℓ + Σ_{i∈N} ( (â_i^ℓ − π_ℓ)^+ − max_{k≠ℓ} {(â_i^k − π̄_k)^+} )^+
             = Γ_ℓ π_ℓ + Σ_{i∈N} ( â_i^ℓ − max_{k≠ℓ} {(â_i^k − π̄_k)^+} − π_ℓ )^+.

Defining d̂_i^ℓ := ( â_i^ℓ − max_{k≠ℓ} {(â_i^k − π̄_k)^+} )^+, the function χ̃(π_ℓ) is equal to Γ_ℓ π_ℓ + Σ_{i∈N} (d̂_i^ℓ − π_ℓ)^+. This function is the objective of the dual problem (3) of a Γ_ℓ-RKP with deviation values d̂_i^ℓ. Applying Lemma 4.1, we know that an optimal solution is given by setting π_ℓ to the (Γ_ℓ + 1)-largest deviation value. □

Omitted parts of the proof of Lemma 4.4.

Lemma B.1. Let π⋆ ∈ R^K_{≥0} be a point with π⋆_k ∉ {â_i^k | i ∈ N} ∪ {0} for all k ∈ {1, …, K} and â_j^ℓ − π⋆_ℓ = â_j^κ − π⋆_κ > 0 for one j ∈ N and ℓ, κ ∈ {1, …, K}, ℓ ≠ κ. Furthermore, we need the following notation. Let α := â_j^ℓ − π⋆_ℓ and 𝒦 := {k ∈ {1, …, K} | â_j^k − π⋆_k = α}. We write k ∉ 𝒦 instead of k ∈ {1, …, K} \ 𝒦 for simplicity. Similar to the proof of Theorem 4.3, we define a partition of the item set N into two sets, one consisting of all items for which the argmax of â_i^k − π⋆_k lies in 𝒦 and the other one containing the remaining items:

    I⋆_𝒦 := { i ∈ N | max{ max_{k∈𝒦} {â_i^k − π⋆_k}, max_{k∉𝒦} {â_i^k − π⋆_k}, 0 } = max_{k∈𝒦} {â_i^k − π⋆_k} },
    I⋆ := N \ I⋆_𝒦.

Then, for the directional derivatives in the direction e_𝒦 at π⋆ holds

    ∇_{e_𝒦} χ(π⋆) = −∇_{−e_𝒦} χ(π⋆) = Σ_{k∈𝒦} Γ_k − |I⋆_𝒦|.


Proof. In order to define the numerators of the directional derivatives, and analogous to N = I⋆_𝒦 ∪ I⋆, we define a partition of the set of items at the points π⋆ + h e_𝒦 and π⋆ − h e_𝒦 as

    I^h_𝒦 := { i ∈ N | max{ max_{k∈𝒦} {â_i^k − π⋆_k − h}, max_{k∉𝒦} {â_i^k − π⋆_k}, 0 } = max_{k∈𝒦} {â_i^k − π⋆_k − h} },
    I^h := N \ I^h_𝒦,

and analogously I^{−h}_𝒦 and I^{−h} := N \ I^{−h}_𝒦. If h is small enough, I^h_𝒦 = I^{−h}_𝒦 = I⋆_𝒦 and I⋆ = I^h = I^{−h}. Then, the numerators can be computed as follows:

    χ(π⋆ + h e_𝒦) − χ(π⋆)
      = Σ_{k∈𝒦} Γ_k (π⋆_k + h) + Σ_{k∉𝒦} Γ_k π⋆_k + Σ_{i∈I^h_𝒦} max_{k∈𝒦} {â_i^k − π⋆_k − h} + Σ_{i∈I^h} max_{k∉𝒦} {(â_i^k − π⋆_k)^+}
        − Σ_{k=1}^K Γ_k π⋆_k − Σ_{i∈I⋆_𝒦} max_{k∈𝒦} {â_i^k − π⋆_k} − Σ_{i∈I⋆} max_{k∉𝒦} {(â_i^k − π⋆_k)^+}
      = h Σ_{k∈𝒦} Γ_k + Σ_{i∈I^h_𝒦} ( max_{k∈𝒦} {â_i^k − π⋆_k} − h ) − Σ_{i∈I⋆_𝒦} max_{k∈𝒦} {â_i^k − π⋆_k}
      = h Σ_{k∈𝒦} Γ_k − |I^h_𝒦| h,

and

    χ(π⋆ − h e_𝒦) − χ(π⋆) = |I⋆_𝒦| h − h Σ_{k∈𝒦} Γ_k.

Hence, for the directional derivatives holds

    lim_{h↘0} ( χ(π⋆ + h e_𝒦) − χ(π⋆) ) / h = lim_{h↘0} ( Σ_{k∈𝒦} Γ_k − |I^h_𝒦| ) = Σ_{k∈𝒦} Γ_k − |I⋆_𝒦|
      = −lim_{h↘0} ( χ(π⋆ − h e_𝒦) − χ(π⋆) ) / h. □

Second case of Lemma 4.4, strictly positive derivative. In this appendix, we show the case of a strictly positive directional derivative for Lemma 4.4, i.e., ∇_{e_𝒦} χ(π⋆) = Σ_{k∈𝒦} Γ_k − |I⋆_𝒦| > 0. We handle this case analogously to the case where the derivative is negative. Hence, we define the minimum value which can be subtracted from the π⋆_k without any change of the partition as follows:

    ε^− := min{ min { π⋆_k − â_i^k | k ∈ 𝒦, i ∈ N, π⋆_k − â_i^k > 0 },
                min { (â_i^{k′} − π⋆_{k′}) − (â_i^k − π⋆_k) | k ∈ 𝒦, k′ ∉ 𝒦, i ∈ N, â_i^{k′} − π⋆_{k′} > â_i^k − π⋆_k } }.

Now, we define

    π_k := π⋆_k − ε^−  for k ∈ 𝒦,    π_k := π⋆_k  for k ∉ 𝒦.    (B.1)

Again, if ε^− is defined by the first minimum, we stop the decrease of π⋆. Otherwise, we compute the new value of α, define a new set 𝒦′ and start again from the beginning as described in the first case. Once more, we have to show that the objective value does not increase when decreasing π⋆ as described. To this end, let I_𝒦 be the set of items obtained by replacing π⋆ by π in I⋆_𝒦. Obviously, I⋆_𝒦 ⊆ I_𝒦. To prove also I_𝒦 ⊆ I⋆_𝒦, we show N \ I⋆_𝒦 ⊆ N \ I_𝒦. For that purpose, we consider an item i ∈ N \ I⋆_𝒦, i.e.,

    max{ max_{k∈𝒦} {â_i^k − π⋆_k}, max_{k′∉𝒦} {â_i^{k′} − π⋆_{k′}}, 0 } = (a) 0  or  (b) max_{k′∉𝒦} {â_i^{k′} − π⋆_{k′}}.

(a) ⇔ max_{k∈𝒦} {â_i^k − π⋆_k} < 0 ⇔ â_i^k − π⋆_k < 0 for all k ∈ 𝒦. By the definition of ε^−, â_i^k − π⋆_k + ε^− = â_i^k − π_k ≤ 0 for all k ∈ 𝒦. Thus, i ∉ I_𝒦.
(b) ⇔ max_{k∈𝒦} {â_i^k − π⋆_k} < max_{k′∉𝒦} {â_i^{k′} − π⋆_{k′}} ⇔ â_i^k − π⋆_k < max_{k′∉𝒦} {â_i^{k′} − π⋆_{k′}} for all k ∈ 𝒦. Again, by the definition of ε^−, â_i^k − π⋆_k + ε^− = â_i^k − π_k < max_{k′∉𝒦} {â_i^{k′} − π⋆_{k′}} for all k ∈ 𝒦, and i ∉ I_𝒦.

Altogether, we have I_𝒦 = I⋆_𝒦. For the objective function then holds

    χ(π) = Σ_{k∈𝒦} Γ_k (π⋆_k − ε^−) + Σ_{k∉𝒦} Γ_k π⋆_k + Σ_{i∈I_𝒦} max_{k∈𝒦} {â_i^k − π⋆_k + ε^−} + Σ_{i∈N\I_𝒦} max_{k∉𝒦} {â_i^k − π⋆_k}
         = Σ_{k=1}^K Γ_k π⋆_k − ε^− Σ_{k∈𝒦} Γ_k + Σ_{i∈I⋆_𝒦} max_{k∈𝒦} {â_i^k − π⋆_k} + ε^− |I⋆_𝒦| + Σ_{i∈N\I⋆_𝒦} max_{k∉𝒦} {â_i^k − π⋆_k}
         = χ(π⋆) − ε^− ( Σ_{k∈𝒦} Γ_k − |I⋆_𝒦| )
         < χ(π⋆).

Proof of Lemma 5.1. Let π⋆ be a point defining a minimum of χ with π⋆_ℓ > â^ℓ_{h_ℓ(Γ_ℓ+1)} =: α for one band ℓ ∈ {1, …, K}. Construct a point π with χ(π) ≤ χ(π⋆) as follows:

    π_ℓ := α,    π_k := π⋆_k  ∀k ≠ ℓ.

Note that π still satisfies the property of Lemma 4.4. To show χ(π) ≤ χ(π⋆), let z := π⋆_ℓ − π_ℓ > 0. We define a partition of the set of items as N = I_≤ ∪ I_> with I_≤ := {i ∈ N | h_ℓ^(−1)(i) ≤ Γ_ℓ} and I_> := {i ∈ N | h_ℓ^(−1)(i) > Γ_ℓ}. For i ∈ I_≤ holds

    max_{k∈{1,…,K}} {â_i^k − π_k} = max{ max_{k≠ℓ} {â_i^k − π⋆_k}, â_i^ℓ − π⋆_ℓ + z } ≤ max_{k∈{1,…,K}} {â_i^k − π⋆_k} + z

and

    max{ max_{k∈{1,…,K}} {â_i^k − π_k}, 0 } ≤ max{ max_{k∈{1,…,K}} {â_i^k − π⋆_k}, 0 } + z.

On the other hand, for i ∈ I_> holds in particular â_i^ℓ ≤ α and thus

    max{ max_{k∈{1,…,K}} {â_i^k − π_k}, 0 } = max{ max_{k≠ℓ} {â_i^k − π⋆_k}, â_i^ℓ − α, 0 } ≤ max{ max_{k∈{1,…,K}} {â_i^k − π⋆_k}, 0 }.

Hence, regarding the objective value χ(π) we have

    χ(π) = Σ_{k=1}^K Γ_k π_k + Σ_{i∈N} max{ max_{k∈{1,…,K}} {â_i^k − π_k}, 0 }
         ≤ Σ_{k=1}^K Γ_k π⋆_k − Γ_ℓ z + Σ_{i∈I_≤} ( max{ max_{k∈{1,…,K}} {â_i^k − π⋆_k}, 0 } + z ) + Σ_{i∈I_>} max{ max_{k∈{1,…,K}} {â_i^k − π⋆_k}, 0 }
         = χ(π⋆),

since |I_≤| = Γ_ℓ; a contradiction. □



Proof of Lemma 5.2. We assume that π⋆ defines a minimum of χ with π⋆_ℓ > π⋆_(ℓ+1) ≥ 0 for one ℓ ∈ {1, …, K−1}, but that properties (a) and (b) are fulfilled. We construct a point π with χ(π) ≤ χ(π⋆) that has the properties (a) and (b) and satisfies π_ℓ ≤ π_(ℓ+1). We distinguish the following two cases.

(I) There exists at least one band k′ ≠ ℓ with π⋆_{k′} ∈ {â_i^{k′} | i ∈ N} ∪ {0}.
(II) Band ℓ is the only band satisfying π⋆_ℓ ∈ {â_i^ℓ | i ∈ N} ∪ {0}.

Case (I): Define z := π⋆_ℓ − π⋆_(ℓ+1) > 0 and the new point

    π_ℓ := π⋆_ℓ − z = π⋆_(ℓ+1),    π_k := π⋆_k  ∀k ≠ ℓ,

which still satisfies properties (a) and (b). It follows for all items i ∈ N

    â_i^ℓ − π_ℓ = â_i^ℓ − π⋆_(ℓ+1) < â_i^(ℓ+1) − π⋆_(ℓ+1)

and hence

    max{ max_{k∈{1,…,K}} {â_i^k − π_k}, 0 } = max{ max_{k≠ℓ} {â_i^k − π⋆_k}, â_i^ℓ − π_ℓ, 0 }
                                            ≤ max{ max_{k≠ℓ} {â_i^k − π⋆_k}, â_i^(ℓ+1) − π⋆_(ℓ+1), 0 }
                                            ≤ max{ max_{k∈{1,…,K}} {â_i^k − π⋆_k}, 0 }.

For the objective function χ we have

    χ(π) = Σ_{k=1}^K Γ_k π_k + Σ_{i∈N} max{ max_{k∈{1,…,K}} {â_i^k − π_k}, 0 }
         ≤ Σ_{k≠ℓ} Γ_k π⋆_k + Γ_ℓ (π⋆_ℓ − z) + Σ_{i∈N} max{ max_{k∈{1,…,K}} {â_i^k − π⋆_k}, 0 }
         < χ(π⋆).

Case (II): If we just used the same argumentation as in case (I), we would obtain a point π which does not necessarily satisfy property (a) anymore. Hence, we use a modified approach in the following. Due to the assumption that band ℓ is the only band satisfying π⋆_k ∈ {â_i^k | i ∈ N} ∪ {0}, and by Lemma 4.2, we know that π⋆_(ℓ+1) ∈ { (â_i^(ℓ+1) − max_{k≠ℓ+1} {(â_i^k − π⋆_k)^+})^+ | i ∈ N }. Again, we distinguish two cases:

(i) π⋆_(ℓ+1) = 0,
(ii) π⋆_(ℓ+1) = â_j^(ℓ+1) − â_j^k + π⋆_k for at least one k ≠ ℓ + 1 and j ∈ N.

For case (i), we construct a new point π with χ(π) < χ(π⋆) as follows:

    π_ℓ := 0,    π_k := π⋆_k  ∀k ≠ ℓ.

Note that π still fulfills the properties (a) and (b). It holds

    â_i^ℓ − π_ℓ = â_i^ℓ < â_i^(ℓ+1) = â_i^(ℓ+1) − π⋆_(ℓ+1)

and, with the same argumentation as in case (I),

    max{ max_{k∈{1,…,K}} {â_i^k − π_k}, 0 } ≤ max{ max_{k∈{1,…,K}} {â_i^k − π⋆_k}, 0 }.

Therefore, for the objective function χ we have

    χ(π) = Σ_{k=1}^K Γ_k π_k + Σ_{i∈N} max{ max_{k∈{1,…,K}} {â_i^k − π_k}, 0 }
         ≤ Σ_{k≠ℓ} Γ_k π⋆_k + Γ_ℓ · 0 + Σ_{i∈N} max{ max_{k∈{1,…,K}} {â_i^k − π⋆_k}, 0 }

< χ(π ⋆ ). For case (ii), we construct a new point π in such a way that we will be able to use the same argumentation as in the proof of Lemma 4.4 to construct a further point π ˜ which then satisfies all required properties. ⋆ Therefore, we define α := πℓ+1 −a ˆℓ+1 and K := {k ∈ {1, . . . , K} | a ˆkj − πk⋆ = α} analogously to the proof of j Lemma 4.4. Note that ℓ + 1 ∈ K but ℓ ̸∈ K. Moreover, we set ε :=

min {ˆ aki − πk⋆ } k ∈ K, i ∈ N : a ˆki − πk⋆ > 0

⋆ and z := πℓ⋆ − πℓ+1 > 0. Then the new point π is constructed as ⋆ πℓ := πℓ⋆ − z − ε = πℓ+1 −ε

πk := πk⋆

∀k ̸= ℓ,

147

G. Claßen et al. / Discrete Optimization 18 (2015) 123–149

Table C.5
Absolute solving times of algorithm simple for all considered settings averaged over the 10 instances.

δ     n     R       Γ2=0.01     0.02     0.03     0.04     0.05
0.2   200   100        0.02     0.02     0.02     0.02     0.02
0.2   200   1000      18.09    17.46    16.55    16.08    15.49
0.2   500   100        0.04     0.04     0.04     0.04     0.04
0.2   500   1000      56.41    53.92    52.24    50.32    48.01
0.2   1000  100        0.07     0.07     0.07     0.07     0.06
0.2   1000  1000     134.28   129.14   124.03   117.84   113.66
0.5   200   100        0.12     0.11     0.10     0.10     0.10
0.5   200   1000     100.18    95.44    91.06    87.19    82.94
0.5   500   100        0.28     0.26     0.25     0.23     0.22
0.5   500   1000     310.83   297.10   280.89   268.53   255.01
0.5   1000  100        0.53     0.50     0.48     0.44     0.42
0.5   1000  1000     694.90   666.11   632.00   599.73   569.37
1     200   100        0.41     0.38     0.36     0.34     0.31
1     200   1000     297.53   277.25   253.78   232.52   213.10
1     500   100        0.99     0.91     0.85     0.78     0.73
1     500   1000    1021.52   961.34   896.32   833.81   778.34
1     1000  100        2.16     2.02     1.87     1.73     1.60
1     1000  1000    2429.64  2281.26  2132.96  1996.59  1852.72

Table C.6
Absolute solving times of algorithm reduced for all considered settings averaged over the 10 instances.

δ     n     R       Γ2=0.01     0.02     0.03     0.04     0.05
0.2   200   100        0.02     0.02     0.02     0.01     0.02
0.2   200   1000       8.91     8.57     8.30     8.24     8.03
0.2   500   100        0.03     0.03     0.03     0.03     0.02
0.2   500   1000      39.18    37.47    36.61    35.48    34.05
0.2   1000  100        0.06     0.06     0.05     0.05     0.05
0.2   1000  1000      99.15    95.63    92.37    88.30    85.52
0.5   200   100        0.09     0.08     0.07     0.07     0.07
0.5   200   1000      20.02    19.34    18.80    18.15    17.41
0.5   500   100        0.20     0.19     0.19     0.17     0.17
0.5   500   1000     152.66   147.49   141.01   136.36   130.45
0.5   1000  100        0.39     0.37     0.35     0.33     0.31
0.5   1000  1000     463.11   447.03   426.50   406.66   388.47
1     200   100        0.26     0.25     0.24     0.23     0.21
1     200   1000      23.17    22.18    20.85    19.55    18.41
1     500   100        0.72     0.66     0.62     0.58     0.53
1     500   1000     279.28   268.31   254.68   241.72   229.20
1     1000  100        1.57     1.47     1.36     1.26     1.19
1     1000  1000    1205.41  1145.39  1086.34  1025.87   963.85

which still satisfies property (b). It follows for all items $i \in N$ that
\[
\hat a_i^\ell - \pi_\ell = \hat a_i^\ell - \pi_{\ell+1}^\star + \varepsilon \le \hat a_i^\ell - \pi_{\ell+1}^\star + \pi_{\ell+1}^\star - \hat a_i^{\ell+1} = \hat a_i^\ell - \hat a_i^{\ell+1} < 0
\]
and hence
\begin{align*}
\max\Big\{ \max_{k \in \{1,\dots,K\}} \{\hat a_i^k - \pi_k\},\, 0 \Big\}
&= \max\Big\{ \max_{k \neq \ell} \{\hat a_i^k - \pi_k^\star\},\, \hat a_i^\ell - \pi_\ell,\, 0 \Big\} \\
&\le \max\Big\{ \max_{k \neq \ell} \{\hat a_i^k - \pi_k^\star\},\, 0 \Big\} \\
&\le \max\Big\{ \max_{k \in \{1,\dots,K\}} \{\hat a_i^k - \pi_k^\star\},\, 0 \Big\}.
\end{align*}

For the objective function $\chi$ we have
\[
\chi(\pi) = \sum_{k=1}^{K} \Gamma_k \pi_k + \sum_{i \in N} \max\Big\{ \max_{k \in \{1,\dots,K\}} \{\hat a_i^k - \pi_k\},\, 0 \Big\}
\]


Table C.7
Absolute solving times of algorithm improved for all considered settings averaged over the 10 instances.

δ     n     R       Γ2=0.01     0.02     0.03     0.04     0.05
0.2   200   100        0.00     0.00     0.00     0.00     0.00
0.2   200   1000       0.18     0.22     0.25     0.26     0.16
0.2   500   100        0.00     0.00     0.00     0.00     0.00
0.2   500   1000       0.64     0.64     0.64     0.57     0.57
0.2   1000  100        0.01     0.02     0.01     0.01     0.01
0.2   1000  1000       1.10     1.07     1.00     1.09     1.11
0.5   200   100        0.01     0.01     0.01     0.01     0.01
0.5   200   1000       0.50     0.44     0.32     0.27     0.27
0.5   500   100        0.03     0.03     0.03     0.03     0.03
0.5   500   1000       2.13     1.88     2.16     2.08     2.14
0.5   1000  100        0.06     0.06     0.06     0.06     0.06
0.5   1000  1000       6.10     5.75     5.83     5.52     5.32
1     200   100        0.02     0.02     0.03     0.03     0.02
1     200   1000       0.48     0.37     0.31     0.42     0.35
1     500   100        0.11     0.10     0.10     0.10     0.10
1     500   1000       4.44     4.02     4.15     3.63     3.91
1     1000  100        0.23     0.22     0.22     0.21     0.21
1     1000  1000      17.04    16.51    16.09    15.60    15.34

Table C.8
Absolute solving times of algorithm ilp for all considered settings averaged over all of the 10 instances which stopped before the time limit was reached. "–" denotes the cases in which all instances exceeded the time limit; see Table C.9 for the number of instances per setting not exceeding the time limit.

δ     n     R       Γ2=0.01     0.02     0.03     0.04     0.05
0.2   200   100      424.34   142.99   613.59   763.92  1408.39
0.2   200   1000      10.78   190.05  1127.03   977.55   393.60
0.2   500   100       48.74  1513.46  1869.82  2361.80  2535.40
0.2   500   1000     230.05   635.91   269.65        –  1314.26
0.2   1000  100      663.10  2230.30  2625.76  2886.45  1426.26
0.2   1000  1000          –   147.63   170.02        –    99.74
0.5   200   100      115.20   404.09   524.14  1129.77   717.52
0.5   200   1000     388.72  1080.66  2673.58  1083.02  1714.86
0.5   500   100     1406.10  2171.22  1674.14  1015.42  1236.50
0.5   500   1000          –        –   365.52  1111.64  1380.22
0.5   1000  100     1161.05   563.50   631.20  1927.82  1523.52
0.5   1000  1000          –    99.83   238.44   176.49        –
1     200   100       72.73   758.78   735.70  1112.14   451.81
1     200   1000     741.73  1705.23   709.75  1269.62        –
1     500   100      311.09  1270.36  2510.91  2521.21  1977.53
1     500   1000          –        –   955.01  1508.89  1945.30
1     1000  100           –   220.49   223.75   263.32  1102.87
1     1000  1000          –   100.43   316.12        –   110.92

\[
\le \sum_{k \neq \ell} \Gamma_k \pi_k^\star + \Gamma_\ell (\pi_\ell^\star - z - \varepsilon) + \sum_{i \in N} \max\Big\{ \max_{k \in \{1,\dots,K\}} \{\hat a_i^k - \pi_k^\star\},\, 0 \Big\} < \chi(\pi^\star).
\]
Although $\pi$ does not satisfy property (a), we have $\hat a_j^{\ell+1} - \pi_{\ell+1}^\star = \hat a_j^k - \pi_k^\star$ for at least one $j \in N$ and $k \neq \ell + 1$, so we can apply the same argumentation as in the proof of Lemma 4.4 to construct a new point $\tilde\pi$ satisfying property (a). By the definition of $\pi_\ell$, it is not possible that $\pi_{\ell+1}$ becomes smaller than $\pi_\ell$ again. If the considered derivative is negative, $\pi_{\ell+1}$ is increased by $\varepsilon^+$ while $\pi_\ell$ is not; note that property (b) is preserved by the definition of $\varepsilon^+$. If the considered derivative is positive, $\pi_{\ell+1}$ is decreased by $\varepsilon^- \le \varepsilon$. □
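A small sketch can make the role of $\chi$ concrete. The following Python snippet (illustrative data and variable names, not from the paper) evaluates $\chi(\pi) = \sum_k \Gamma_k \pi_k + \sum_{i \in N} \max\{\max_k\{\hat a_i^k - \pi_k\}, 0\}$ and checks, on one toy instance, that the case (i) construction $\pi_\ell := 0$, $\pi_k := \pi_k^\star$ for $k \neq \ell$ strictly decreases $\chi$:

```python
# Sketch with illustrative data (not from the paper): the function chi used
# throughout the proof,
#   chi(pi) = sum_k Gamma[k]*pi[k] + sum_i max(max_k(a_hat[i][k] - pi[k]), 0).

def chi(pi, a_hat, Gamma):
    """Evaluate chi at pi for deviation values a_hat[i][k] and weights Gamma."""
    linear = sum(G * p for G, p in zip(Gamma, pi))
    penalty = sum(max(max(a_i[k] - pi[k] for k in range(len(pi))), 0.0)
                  for a_i in a_hat)
    return linear + penalty

# Toy instance: 3 items, K = 3 bands with deviations increasing in the band
# index, mirroring a_hat_i^1 < a_hat_i^2 < a_hat_i^3 in the proof.
a_hat = [[1.0, 2.0, 4.0],
         [1.5, 3.0, 5.0],
         [0.5, 1.0, 2.5]]
Gamma = [2.0, 1.0, 1.0]

# Case (i): pi_star[l+1] = 0 with pi_star[l] > 0 (here l = 0).  The proof's
# construction sets pi[l] := 0 and keeps all other components.
pi_star = [0.8, 0.0, 0.0]
pi_new  = [0.0, 0.0, 0.0]

# The penalty term is unchanged (band l is dominated by band l+1), while the
# linear term drops by Gamma[l] * pi_star[l], so chi strictly decreases.
assert chi(pi_new, a_hat, Gamma) < chi(pi_star, a_hat, Gamma)
```

On this instance the penalty term equals 11.5 at both points, so the decrease is exactly the linear saving $\Gamma_\ell \pi_\ell^\star$, as in the proof's inequality chain.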

Appendix C

See Tables C.5–C.9.


Table C.9
Number of instances out of 10 which do not exceed the time limit when solved with ilp.

δ     n     R      Γ2=0.01  0.02  0.03  0.04  0.05
0.2   200   100          9     8     9    10     8
0.2   200   1000        10     9    10     7     8
0.2   500   100          2     8     8    10     8
0.2   500   1000         4     2     1     0     6
0.2   1000  100          6    10     8     7     5
0.2   1000  1000         0     1     2     0     1
0.5   200   100         10     9     8     6     4
0.5   200   1000        10     9     9     6     3
0.5   500   100          5     5     5     2     6
0.5   500   1000         0     0     1     5     6
0.5   1000  100          3     1     2     5     6
0.5   1000  1000         0     1     3     2     0
1     200   100         10    10     4     4     1
1     200   1000        10     9     3     3     0
1     500   100          1     4     7     7     6
1     500   1000         0     0     4     6     4
1     1000  100          0     1     1     1     4
1     1000  1000         0     1     2     0     1

References

[1] S. Martello, P. Toth, Knapsack Problems: Algorithms and Computer Implementations, John Wiley & Sons, 1990.
[2] H. Kellerer, U. Pferschy, D. Pisinger, Knapsack Problems, Springer, 2004.
[3] D. Pisinger, A minimal algorithm for the 0–1 knapsack problem, Oper. Res. 45 (1997) 758–767.
[4] D. Bertsimas, M. Sim, The price of robustness, Oper. Res. 52 (2004) 35–53.
[5] A. Ben-Tal, L.E. Ghaoui, A. Nemirovski, Robust Optimization, Princeton University Press, 2009.
[6] D. Bertsimas, D. Brown, C. Caramanis, Theory and applications of robust optimization, SIAM Rev. 53 (2011) 464–501.
[7] M. Fischetti, M. Monaci, Cutting plane versus compact formulations for uncertain (integer) linear programs, Math. Program. Comput. 4 (2012) 239–273.
[8] O. Klopfenstein, D. Nace, A robust approach to the chance-constrained knapsack problem, Oper. Res. Lett. 36 (2008) 628–632. http://dx.doi.org/10.1016/j.orl.2008.03.006.
[9] M. Monaci, U. Pferschy, P. Serafini, Exact solution of the robust knapsack problem, Comput. Oper. Res. 40 (2013) 2625–2631. http://dx.doi.org/10.1016/j.cor.2013.05.005.
[10] D. Bienstock, Histogram models for robust portfolio optimization, J. Comput. Finance 11 (2007) 1–64.
[11] C. Büsing, F. D'Andreagiovanni, New results about multi-band uncertainty in robust optimization, in: R. Klasing (Ed.), Experimental Analysis—SEA 2012, Springer, 2012. Revised version at: http://arxiv.org/abs/1208.6322.
[12] C. Büsing, F. D'Andreagiovanni, Robust optimization under multi-band uncertainty—Part I: Theory, 2013. URL: http://arxiv.org/abs/1301.2734.
[13] F. D'Andreagiovanni, J. Krolikowski, J. Pulaj, A fast hybrid primal heuristic for multiband robust capacitated network design with multiple time periods, Appl. Soft Comput. 26 (2015) 497–507. http://dx.doi.org/10.1016/j.asoc.2014.10.016.
[14] F. D'Andreagiovanni, Multiband Robust Optimization, 2014. URL: http://www.dis.uniroma1.it/~fdag/multiband.html.
[15] S. Mattia, Robust Optimization with Multiple Intervals, Technical Report R. 7, 2012, Istituto di Analisi dei Sistemi ed Informatica (IASI), Consiglio Nazionale delle Ricerche (CNR), viale Manzoni 30, 00185 Rome, Italy, 2012.
[16] M. Kutschka, Robustness concepts for knapsack and network design problems under data uncertainty (Ph.D. thesis), RWTH Aachen University, 2013.
[17] D. Bertsimas, M. Sim, Robust discrete optimization and network flows, Math. Program. B 98 (2003) 49–71.
[18] T. Lee, C. Kwon, A short note on the robust combinatorial optimization problems with cardinality constrained uncertainty, 4OR 12 (2014) 373–378. http://dx.doi.org/10.1007/s10288-014-0270-7.
[19] C. Büsing, F. D'Andreagiovanni, A. Raymond, 0–1 multiband robust optimization, in: D. Huisman, I. Louwerse, A.P. Wagelmans (Eds.), Operations Research Proceedings 2013, Springer International Publishing, 2014, pp. 89–95. http://dx.doi.org/10.1007/978-3-319-07001-8_13.
[20] A. Atamtürk, Strong formulations of robust mixed 0–1 programming, Math. Program. 108 (2006) 235–250. http://dx.doi.org/10.1007/s10107-006-0709-5.
[21] O. Klopfenstein, D. Nace, A note on polyhedral aspects of a robust knapsack problem, 2007. Optimization Online. URL: http://www.optimization-online.org/DB_FILE/2006/04/1369.pdf.
[22] D. Pisinger, Where are the hard knapsack problems? Comput. Oper. Res. 32 (2005) 2271–2284. http://dx.doi.org/10.1016/j.cor.2004.03.002.
[23] IBM ILOG, CPLEX Optimization Studio 12.4, 2011. URL: http://www.ilog.com/products/cplex.