The strength of multi-row aggregation cuts for sign-pattern integer programs

Operations Research Letters 46 (2018) 611–615


Santanu S. Dey^a,∗, Andres Iroume^b, Guanyi Wang^a

^a Industrial and Systems Engineering, Georgia Institute of Technology, United States
^b Revenue Analytics, Inc., United States

Article history: Received 19 November 2017; Received in revised form 31 October 2018; Accepted 1 November 2018; Available online 10 November 2018.

Keywords: Cutting-plane; Multi-row cuts; Aggregation

Abstract. Sign-pattern IPs are a generalization of packing IPs, where for a given column all coefficients are either non-negative or non-positive. We show that the aggregation closure for such sign-pattern IPs can be 2-approximated by the original 1-row closure. This generalizes a result for packing IPs. On the other hand, unlike in the case of packing IPs, we show that the multi-row aggregation closure cannot be well approximated by the original multi-row closure. © 2018 Elsevier B.V. All rights reserved.

∗ Corresponding author. E-mail addresses: [email protected] (S.S. Dey), [email protected] (A. Iroume), [email protected] (G. Wang). https://doi.org/10.1016/j.orl.2018.11.001

1. Introduction

In a recent paper [11], Bodur et al. studied the strength of aggregation cuts. An aggregation cut is obtained as follows: (i) by suitably scaling and adding the constraints of a given integer programming (IP) formulation, one obtains a relaxation defined by a single constraint together with variable bounds; (ii) all valid inequalities for the integer hull of this knapsack-like set are called aggregation cuts. The set obtained by adding all such aggregation cuts (over all possible aggregations) is called the aggregation closure [19]. Such cuts have been studied computationally [20] and in theory [2,16,22]. A special subclass of aggregation cuts, known as formulation cuts, is obtained by starting with a scaling factor of zero for all constraints except one, i.e., formulation cuts are cuts implied by a single constraint of the original formulation together with bound information. The weaker closure obtained from such formulation cuts is called the original 1-row closure [11,14]. The paper [11] shows that for packing and covering IPs, the aggregation closure can be 2-approximated by the original 1-row closure. In contrast, they show that for general IPs, the aggregation closure can be arbitrarily stronger than the original 1-row closure.

Aggregation cuts are obtained based on our ability to generate valid inequalities for feasible sets described by one non-trivial constraint and variable bounds. Recently there has been a large body of work on multi-row cuts, for example [1,3,8,9,12,13,16–18,21]. Also see the review articles [6,7,24] and analyses of the strength of

these cuts [4,5,10]. Therefore, it is natural to consider the notion of multi-row aggregation cuts. Essentially, by using k different sets of scaling factors on the constraints of the problem, one can produce a relaxation that involves k constraints together with variable bounds. We call the valid inequalities for the integer hull of such relaxations k-row or multi-row aggregation cuts. Analogously to the case of aggregation cuts, we can also define the notions of the k-row aggregation closure and the original k-row closure [11,15] (i.e., generating cuts from all relaxations described by k constraints of the IP formulation). The results in [11] can be used to show that for packing and covering IPs the k-row aggregation closure can be approximated by the original k-row closure within a multiplicative factor that depends only on k. (We obtain sharper bounds for the case k = 2 in this paper.)

For packing and covering IPs, all the coefficients of all the variables in all the constraints have the same sign. Therefore, when we aggregate constraints we are not able to "cancel" variables, i.e., the support of an aggregated constraint is exactly the union of the supports of the original constraints used for the aggregation. A natural explanation for the fact that the aggregation closure (resp. multi-row aggregation closure) is well approximated by the original 1-row closure (resp. original multi-row closure) for packing and covering problems is precisely that such cancellations do not occur for these problems. Indeed, one of the key ideas used to obtain good candidate aggregations in the procedure described in [22] is to use aggregations that maximize the chances of cancellation. Also see [2], which discusses the strength of split cuts obtained from aggregations that create cancellations. In order to study the effect of cancellations, we study the strength of aggregation closures vis-à-vis original row closures for sign-pattern IPs.
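The cancellation phenomenon just described can be made concrete with a small numeric sketch. Both two-row systems below are invented for exposition (they are not instances from this paper): in the first, a column has opposite signs in the two rows, so a non-negative aggregation can zero out its coefficient; in the second (a sign-pattern system), the support of every aggregated row is the union of the original supports.

```python
# Illustrative sketch (instances invented for exposition): aggregating rows of
# Ax <= b into the single constraint (lam^T A) x <= lam^T b with lam >= 0.

def aggregate(rows, rhs, lam):
    """Return the aggregated constraint: (sum_i lam_i * row_i, sum_i lam_i * b_i)."""
    n = len(rows[0])
    coeffs = [sum(l * r[j] for l, r in zip(lam, rows)) for j in range(n)]
    return coeffs, sum(l * b for l, b in zip(lam, rhs))

# General IP: x1 appears with opposite signs in the two rows, so it cancels.
general_rows, general_rhs = [[1, 2], [-1, 3]], [4, 5]
print(aggregate(general_rows, general_rhs, [1, 1]))   # -> ([0, 5], 9)

# Sign-pattern system (J+ = {1}, J- = {2}): each column keeps one sign,
# so no aggregation with lam >= 0 can shrink the support.
sign_rows, sign_rhs = [[1, -2], [3, -1]], [4, 5]
print(aggregate(sign_rows, sign_rhs, [1, 1]))         # -> ([4, -3], 9)
```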
A sign-pattern IP is a problem of the form {x ∈ Z^n_+ | Ax ≤ b} where a given variable has exactly the same sign in every constraint, i.e., for a given column j, Aij is either non-negative for all rows i or non-positive for all rows i. Thus aggregations do not create cancellations. Our study reveals interesting results. On the one hand, we are able to show that the aggregation closure for such sign-pattern IPs is 2-approximated by the original 1-row closure, supporting the conjecture that non-cancellation implies that aggregation is less effective. On the other hand, unlike packing and covering IPs, the multi-row aggregation closure cannot be well approximated by the original multi-row closure. So for this class of problems, multi-row cuts may do significantly better than single-row cuts, especially those obtained by aggregation.

The structure of the rest of the paper is as follows. In Section 2 we provide formal definitions and statements of our results. In Section 3 we present the proofs of our results.

2. Definitions and statement of results

2.1. Definitions

For an integer n, we use the notation [n] to describe the set {1, …, n}, and for k ≤ n a non-negative integer, we use the notation ([n] choose k) to describe the collection of all subsets of [n] of size k. For i ∈ [n], we denote by e_i the ith vector of the standard basis of R^n. The convex hull of a set S is denoted conv(S). For a set S ⊆ R^n and a positive scalar α we define αS := {αu | u ∈ S}. We use P^I to denote the convex hull of the integer feasible solutions of P. For a given linear objective function, we let z^LP, z^IP denote the optimal objective function values over P and P^I respectively.

2.1.1. Sign-pattern IPs

Definition 1. Let n be an integer, and let J+, J− ⊆ [n] be such that J+ ∩ J− = ∅ and J+ ∪ J− = [n]. We call a polyhedron P a (J+, J−) sign-pattern polyhedron if it is of the form

    P = { x ∈ R^n_+ | Σ_{j∈J+} Aij xj − Σ_{j∈J−} Aij xj ≤ bi  ∀i ∈ [m] },

where Aij, bi ≥ 0 for all i ∈ [m], j ∈ [n]. Additionally, we require Aij ≤ bi for all j ∈ J+ and i ∈ [m].

Throughout this paper, whenever we refer to a packing polyhedron or a covering polyhedron, we make a similar assumption on the coefficients defining the polyhedron, i.e., if P := {x ∈ R^n_+ | Ax ≤ b} is a packing (resp. P := {x ∈ R^n_+ | Ax ≥ b} is a covering) polyhedron, we assume Aij ≤ bi for all i ∈ [m], j ∈ [n]. Also, we assume that all data is rational.

Definition 2. Given two polyhedra P and Q contained in R^n_+ such that P ⊇ Q ⊇ {0}, and a scalar α ≥ 1, we say that P is an α-approximation of Q if P ⊆ αQ.

Remark 1. Let P ⊇ Q ⊇ {0}. Suppose we are maximizing a linear objective over P and Q, and let the optimal objective function values over P and Q be z^P and z^Q respectively. Then P being an α-approximation of Q implies that either z^P = z^Q = ∞ or z^P ≥ z^Q ≥ (1/α)·z^P. Therefore, in order to show that P is not an α-approximation of Q, all we need to do is establish that there is a linear objective such that z^P, z^Q < ∞ and z^Q < (1/α)·z^P.

Remark 2. Definition 2 does not apply to covering polyhedra. For the α-approximation results for covering polyhedra that we quote for comparison, the following definition is used: given two covering polyhedra P and Q such that P ⊇ Q, and a scalar α ≥ 1, we say that P is an α-approximation of Q if P ⊆ (1/α)Q. This is equivalent to saying that if we minimize a non-negative linear function over P and Q (with optimal objective function values z^P and z^Q respectively), then z^P ≤ z^Q ≤ α·z^P.

2.1.2. Closures

Given a polyhedron P, we are interested in cuts for the pure integer set P ∩ Z^n.

Definition 3. For P = {x ≥ 0 | Ax ≤ b}, where A ∈ R^{m×n}, k ≥ 1 an integer, and λ^1, …, λ^k ∈ R^m_+, let

    P(λ^1, …, λ^k) := { x ≥ 0 | λ^1 Ax ≤ λ^1 b, …, λ^k Ax ≤ λ^k b },
    P^I(λ^1, …, λ^k) := conv( { x ∈ Z^n_+ | λ^1 Ax ≤ λ^1 b, …, λ^k Ax ≤ λ^k b } ).

Definition 4 (Closures). Given a polyhedron P = {x ∈ R^n_+ | Ax ≤ b}, where A ∈ R^{m×n}, we define its aggregation closure A(P) as

    A(P) := ∩_{λ ∈ R^m_+} P^I(λ).

We can generalize the aggregation closure to consider k aggregations simultaneously, where k ∈ Z and k ≥ 1. More precisely, for a polyhedron P the k-aggregation closure is defined as

    A_k(P) := ∩_{λ^1,…,λ^k ∈ R^m_+} P^I(λ^1, …, λ^k).

Similarly, the original 1-row closure 1-A(P) is defined as

    1-A(P) := ∩_{i ∈ [m]} P^I(e_i).

We can generalize the original 1-row closure to the original k-row closure k-A(P). More precisely, for a polyhedron P the original k-row closure is defined as

    k-A(P) := ∩_{{i_1,…,i_k} ∈ ([m] choose k)} P^I(e_{i_1}, …, e_{i_k}).

Given a linear objective function, we let z^A, z^A_k, z^1-A(P), and z^k-A(P) denote the optimal objective function values over A(P), A_k(P), 1-A(P), and k-A(P) respectively.
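For small bounded instances, the objects in Definitions 3 and 4 can be explored by brute force. The sketch below (the toy instance, objective, and enumeration box are our choices for illustration) optimizes over the integer points of a single aggregated relaxation, i.e., over P^I(λ) for one multiplier vector λ:

```python
from itertools import product

def opt_over_PI_lambda(A, b, lam, c, box):
    """max c^T x over the integer points of {x >= 0 | (lam^T A) x <= lam^T b},
    enumerated over the box [0, box]^n (valid when all maximizers lie in the box)."""
    n = len(A[0])
    # Coefficients and right-hand side of the aggregated row lam^T A x <= lam^T b.
    a = [sum(l * A[i][j] for i, l in enumerate(lam)) for j in range(n)]
    rhs = sum(l * bi for l, bi in zip(lam, b))
    best = None
    for x in product(range(box + 1), repeat=n):
        if sum(aj * xj for aj, xj in zip(a, x)) <= rhs:
            val = sum(cj * xj for cj, xj in zip(c, x))
            best = val if best is None else max(best, val)
    return best

# Toy sign-pattern system (J+ = {1}, J- = {2}): x1 - 2*x2 <= 1 and x1 <= 3.
A, b = [[1, -2], [1, 0]], [1, 3]
# The aggregation lam = (1, 1) gives 2*x1 - 2*x2 <= 4; maximizing x1 - x2 over
# its integer points yields 2 (e.g., at x = (2, 0)).
print(opt_over_PI_lambda(A, b, [1, 1], [1, -1], box=6))   # -> 2
```

Intersecting such relaxations over many λ (and over tuples of λ's for A_k) is what the closures above formalize; this enumeration is of course only a didactic device, not a cutting-plane procedure.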

2.2. Statement of results

The first result compares the aggregation closure with the LP relaxation of a (J+, J−) sign-pattern polyhedron.

Theorem 1. For a (J+, J−) sign-pattern polyhedron P, we have that A(P) can be 2-approximated by P, and thus by 1-A(P).

This result generalizes the result obtained in [11] for the case of packing IPs, as setting J− = ∅ yields a packing IP. Since the ratio of 2 is already known to be tight for the case of packing instances [11], the result of Theorem 1 is tight. Next we show that, in general, for (J+, J−) sign-pattern IPs, the aggregation closure does not do a good job of approximating the 2-aggregation closure.

Theorem 2. There is a family of (J+, J−) sign-pattern polyhedra for which A is an arbitrarily bad approximation to A2, i.e., for each α > 1, there is a (J+, J−) sign-pattern polyhedron P such that A(P) is not an α-approximation of A2(P). In contrast, if P is a packing (resp. covering) polyhedron, then A(P) ⊆ 3(A2(P)) (resp. A(P) ⊆ (1/2.5)(A2(P))).

The previous result shows that for non-trivial (J+, J−) sign-pattern polyhedra, using multiple-row cuts can have significant benefits over one-row cuts. This is different from the case of packing/covering problems (where the improvement is bounded). The next result shows that the aggregation closure considering simultaneously 2 aggregations (A2) can be arbitrarily stronger than the original 2-row closure (2-A).

Theorem 3. There is a family of (J+, J−) sign-pattern polyhedra with 4 constraints for which 2-A is an arbitrarily bad approximation to A2, i.e., for each α > 1, there is a (J+, J−) sign-pattern polyhedron P such that 2-A(P) is not an α-approximation of A2(P). In contrast, if P is a packing (resp. covering) polyhedron, then 2-A(P) ⊆ 3(A2(P)) (resp. 2-A(P) ⊆ (1/2.5)(A2(P))).

The previous results establish the comparison between the sets 1-A(P) and A(P), A(P) and A2(P), and 2-A(P) and A2(P). These results are presented in Table 1.

3. Proofs

3.1. Proof of Theorem 1

First, we need some general properties of (J+, J−) sign-pattern LPs.

Proposition 1. Consider a (J+, J−) sign-pattern polyhedron defined by one non-trivial constraint, P = {x ≥ 0 | Σ_{j∈J+} aj xj − Σ_{j∈J−} aj xj ≤ b}, and let c be a vector with the same sign-pattern, i.e., cj ≥ 0 for all j ∈ J+ and cj ≤ 0 for all j ∈ J−. Then:

(1) z^LP = max_{x∈P} c⊤x is bounded if and only if max_{j∈J+} cj/aj ≤ min_{j∈J−} −cj/aj.
(2) If z^LP is bounded, then there exists an optimal solution x^LP such that x^LP_j = b/aj for j ∈ argmax_{j∈J+} cj/aj and x^LP_k = 0 for k ∈ [n] \ {j}.
(3) If z^LP is bounded, then z^LP ≤ 2 z^IP, where z^IP = max_{x∈P^I} c⊤x.

Proof. Clearly 0 ∈ P, thus the (J+, J−) sign-pattern LP cannot be infeasible. Consider its dual

    min{ by | aj y ≥ cj ∀j ∈ J+, aj y ≤ −cj ∀j ∈ J−, y ≥ 0 },

which is feasible if and only if max_{j∈J+} cj/aj ≤ min_{j∈J−} −cj/aj.

If z^LP is bounded, then there exists an optimal solution that is an extreme point. Since the problem is defined by a single non-trivial constraint, each extreme point can have at most one non-zero component; thus a maximizer x^LP over the set of extreme points must be of the form x^LP_j∗ = b/aj∗ ≥ 1 for some j∗ ∈ J+ and x^LP_j = 0 for all j ∈ [n] \ {j∗}. Clearly ⌊x^LP⌋ ∈ P^I, thus z^LP/z^IP ≤ (b/aj∗)/⌊b/aj∗⌋. Finally, since P is a (J+, J−) sign-pattern polyhedron, b/aj∗ ≥ 1 and thus z^LP/z^IP ≤ 2. □

Proposition 2. Consider a (J+, J−) sign-pattern polyhedron P; then P^I is also a (J+, J−) sign-pattern polyhedron.

Proof. First, since P is a (J+, J−) sign-pattern polyhedron, 0, e_1, …, e_n ∈ P, so P^I is a non-empty full-dimensional polyhedron. We show that for every non-trivial facet gx ≤ h (that is, every facet other than the xj ≥ 0), we must have gj ≥ 0 for j ∈ J+ and gj ≤ 0 for j ∈ J−.

For j ∈ J−: note that the recession cone of P^I is the same as the recession cone of P (a consequence of Meyer's Theorem [23]). Then for every facet gx ≤ h we have gj ≤ 0 for j ∈ J− (otherwise ej would not be in the recession cone of P^I).

For j ∈ J+: assume that there exists a facet gx ≤ h with gj < 0. Let S be a set of n affinely independent points in P ∩ Z^n that are tight at gx = h (they exist as P^I is full-dimensional). As gx ≤ h is not the trivial inequality xj ≥ 0, there must exist a point x̄ in S such that g x̄ = h and x̄j ≥ 1. But x̄ − x̄j ej ∈ P ∩ Z^n and violates gx ≤ h, a contradiction. □

Proposition 3. Let P be a (J+, J−) sign-pattern polyhedron defined by one constraint; then P ⊆ 2P^I.

Proof. Assume by contradiction that there exists x′ ∈ (1/2)P such that x′ ∉ P^I. Since P^I is a (J+, J−) sign-pattern polyhedron, each non-trivial facet-defining inequality gx ≤ h satisfies gj ≥ 0 for all j ∈ J+ and gj ≤ 0 for all j ∈ J−. Since x′ ∉ P^I, for one of these facets we have gx′ > h. Now consider g as an objective: max_{x∈P^I} gx ≤ h, and thus this defines a bounded problem. Since the IP is feasible and bounded and P is defined by rational data, the LP is also bounded (a consequence of Meyer's Theorem [23]). Hence, by Proposition 1, gx′ ≤ (1/2)z^LP ≤ z^IP ≤ h, a contradiction. □

Observation 1. Let {S_i}_{i∈I} be a collection of subsets of R^n and let α ∈ R_+. Then α(∩_{i∈I} S_i) = ∩_{i∈I} αS_i.

Now, we prove Theorem 1.

Proof. By definition, we have that P ⊆ P(λ) for all λ ∈ R^m_+. Since P(λ) is a (J+, J−) sign-pattern polyhedron defined by one constraint, Proposition 3 gives P(λ) ⊆ 2P^I(λ). Taking the intersection over all λ ∈ R^m_+ and using Observation 1, we have

    P ⊆ ∩_{λ∈R^m_+} P(λ) ⊆ ∩_{λ∈R^m_+} 2P^I(λ) = 2 ∩_{λ∈R^m_+} P^I(λ) = 2A(P).

Since 1-A(P) is contained in P, we have that 1-A(P) is a 2-approximation of A(P). □

3.2. Proof of Theorem 2

In order to prove the first part of Theorem 2, consider the following family of (J+, J−) sign-pattern polyhedra, for M ≥ 2 integer:

    max  x1 − (M − 1)x2
    s.t. x1 − M(M − 1)x2 ≤ 1                    (1)
         x1 ≤ M + 1
         x1, x2 ≥ 0.

Note that the only integral solutions are (0, 0), (1, 0) and points (x̄1, x̄2) with x̄1, x̄2 ∈ Z_+, x̄1 ≤ M + 1 and x̄2 ≥ 1. Therefore z^IP = 2, attained at (M + 1, 1). Additionally, z^LP = M, since the feasible point (M + 1, 1/(M − 1)) achieves that objective value, and by multiplying the first constraint by 1/M, multiplying the second constraint by (M − 1)/M and adding them, we obtain the valid inequality x1 − (M − 1)x2 ≤ M. Thus, in this case we have z^LP/z^IP = M/2. Trivially, since we have only two constraints, A2(P) = P^I.

Now, we present our proof of the first part of Theorem 2.

Proof. It follows from Theorem 1 that for any (J+, J−) sign-pattern polyhedron, z^LP/z^A ∈ [1, 2]. For the family of (J+, J−) sign-pattern polyhedra (1) we have z^IP = z^A2, and therefore

    z^A/z^A2 = (z^LP/z^IP) · (z^A/z^LP) ≥ (M/2) · (1/2) = M/4.

Since M can be arbitrarily large, A(P) cannot be an α-approximation of A2(P) for any finite value of α. □
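The gap z^LP/z^IP = M/2 for family (1) can be confirmed numerically. The brute-force sketch below is ours (the enumeration box and the sampled values of M are our choices); the LP value is evaluated at the optimal vertex (M + 1, 1/(M − 1)) given above:

```python
from fractions import Fraction
from itertools import product

def family1_values(M):
    """Return (z_LP, z_IP) for family (1): max x1 - (M-1)*x2
    s.t. x1 - M(M-1)*x2 <= 1, x1 <= M+1, x >= 0, with M >= 2 integer."""
    # LP value: the objective at the optimal LP vertex (M+1, 1/(M-1)).
    z_lp = (M + 1) - (M - 1) * Fraction(1, M - 1)
    # IP value by brute force; maximizers have x1 <= M+1 and small x2,
    # so the box [0, M+1]^2 suffices.
    z_ip = max(
        x1 - (M - 1) * x2
        for x1, x2 in product(range(M + 2), repeat=2)
        if x1 - M * (M - 1) * x2 <= 1 and x1 <= M + 1
    )
    return z_lp, z_ip

for M in (2, 5, 10):
    z_lp, z_ip = family1_values(M)
    assert z_lp == M and z_ip == 2   # gap z_LP / z_IP = M / 2
```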

Table 1
Upper bound (lower bound for the covering case) on α for various containment relations; m is the number of constraints.

                      Packing               Covering                Sign-pattern
  1-A(P) ⊆ α A(P)     α ≤ 2 ([11])          α ≥ 1/2 ([11])          α ≤ 2
  A(P) ⊆ α A2(P)      α ≤ 3 if m ≥ 2        α ≥ 1/2.5 if m ≥ 2      α ≤ ∞
  2-A(P) ⊆ α A2(P)    α ≤ 1 if m = 2;       α ≥ 1 if m = 2;         α ≤ 1 if m = 2; ? if m = 3;
                      α ≤ 3 if m ≥ 3        α ≥ 1/2.5 if m ≥ 3      α ≤ ∞ if m ≥ 4

In order to prove the second part of Theorem 2 we need the following result regarding packing and covering integer programs.

Proposition 4. Let P := {x ∈ R^n_+ | Ax ≤ b} be a packing (resp. P := {x ∈ R^n_+ | Ax ≥ b} be a covering) polyhedron defined by two non-trivial constraints, i.e., A ∈ R^{2×n}, satisfying the property Aij ≤ bi for all i ∈ [2], j ∈ [n]. Then P ⊆ 3P^I (resp. P ⊆ (1/2.5)P^I).

A proof of Proposition 4 is presented in the Appendix. Now we present a proof of the second part of Theorem 2.

Proof. Let P be a packing polyhedron. We have that

    A(P) = ∩_{λ1∈R^m_+} P^I(λ1) ⊆ ∩_{λ1∈R^m_+} P(λ1) ⊆ ∩_{λ1,λ2∈R^m_+} P(λ1, λ2) ⊆ 3 ∩_{λ1,λ2∈R^m_+} P^I(λ1, λ2) = 3A2(P),

where the third containment follows from Proposition 4. Using Proposition 4, the proof of the covering case is similar to the packing case. □

3.3. Proof of Theorem 3

In order to prove the first part of Theorem 3, we introduce the following family of instances. For M ≥ 2 an even integer:

    max  x1 − (M/2)x2 − (M/2)x3 − (M/2)x4        (2)
    s.t. x1 − Mx2 − Mx3 ≤ 1                      (3)
         x1 − Mx2 − Mx4 ≤ 1                      (4)
         x1 − Mx3 − Mx4 ≤ 1                      (5)
         x1 ≤ M + 1                              (6)
         x1, x2, x3, x4 ≥ 0.
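For small even M, the key values of this family can be checked by brute force. The sketch below (box size and the tested value of M are our choices) enumerates integer points and also evaluates the fractional point (M + 1, 1/2, 1/2, 1/2):

```python
from fractions import Fraction
from itertools import product

def family_check(M):
    """Brute-force z_IP for instance (2)-(6), plus the objective value of the
    fractional point (M+1, 1/2, 1/2, 1/2). Assumes M >= 2 even."""
    def feasible(x1, x2, x3, x4):
        return (x1 - M * x2 - M * x3 <= 1 and
                x1 - M * x2 - M * x4 <= 1 and
                x1 - M * x3 - M * x4 <= 1 and
                x1 <= M + 1)

    def obj(x1, x2, x3, x4):
        return x1 - Fraction(M, 2) * (x2 + x3 + x4)

    # Increasing x2, x3, x4 only hurts the objective, so [0, M+1]^4 suffices.
    z_ip = max(obj(*x) for x in product(range(M + 2), repeat=4) if feasible(*x))
    frac = (M + 1, Fraction(1, 2), Fraction(1, 2), Fraction(1, 2))
    assert feasible(*frac)
    return z_ip, obj(*frac)

print(family_check(4))   # z_IP = 1, fractional objective = M/4 + 1 = 2
```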

It is not difficult to verify that z^IP = 1: the point (1, 0, 0, 0) has objective function value 1, and for any feasible solution with x1 ≥ 2 we must have x2 + x3 + x4 ≥ 2 (from constraints (3)–(5)), so the objective function value in this case is at most 1. Additionally, z^LP = M/4 + 1, since the feasible point (M + 1, 1/2, 1/2, 1/2) achieves that objective value, and by aggregating constraints (3)–(6) and dividing by 4 we obtain the valid inequality x1 − (M/2)x2 − (M/2)x3 − (M/2)x4 ≤ M/4 + 1.

Now, the proof of the first part of Theorem 3.

Proof. We show that for the family of instances (2)–(6), z^2-A/z^A2 = M/4 + 1, and thus 2-A can be an arbitrarily bad approximation of A2. First, we show that z^2-A = M/4 + 1 by showing that the optimal point for the LP relaxation is also feasible for 2-A. To conclude the proof, we show that z^A2 = z^IP by providing an upper bound on z^A2 coming from a particular selection of multipliers.

In the case of 2-A, we verify that (x1, x2, x3, x4) = (M + 1, 1/2, 1/2, 1/2) is in P^I(e_i1, e_i2), where K := {e_i1, e_i2} corresponds to an arbitrary selection of two constraints, i.e., any K ∈ ([4] choose 2). Let S′_K be the set of variables with negative coefficients that are present in the

inequalities in K. If constraint (6) is in K, let l denote the smallest index in S′_K. Otherwise, let l denote the index of the variable (out of {x2, x3, x4}) that is present in both constraints of K (note that there must always be one such index). Then it can be verified that the points (M + 1, 0, 0, 0) + e_l and (M + 1, 1, 1, 1) − e_l are in P^I(e_i1, e_i2), and so is their midpoint (M + 1, 1/2, 1/2, 1/2). The objective function value of the latter point is M + 1 − (M/2)(3/2) = M/4 + 1; thus, in terms of objective function value, 2-A does not provide any extra improvement when compared to the LP relaxation.

Now, we show that z^A2 = 1. Since P^I ⊆ A2(P), we have z^A2 ≥ 1, and by the definition of A2, we have z^A2 ≤ z^{P^I(λ,μ)} for all λ, μ ∈ R^4_+. Consider λ̄ = (1, 1, 0, 0), μ̄ = (0, 0, 1, 1) and c = (1, −M/2, −M/2, −M/2); then the problem max{c⊤x : x ∈ P^I(λ̄, μ̄)} corresponds to

    max  x1 − (M/2)x2 − (M/2)x3 − (M/2)x4
    s.t. 2x1 − 2Mx2 − Mx3 − Mx4 ≤ 2
         2x1 − Mx3 − Mx4 ≤ M + 2
         x1, x2, x3, x4 ∈ Z_+.

Note that x3 and x4 have the same coefficients in the objective and in every constraint; thus by dropping x4 and rearranging the constraints we obtain a problem with the same optimal objective function value:

    max  x1 − (M/2)x2 − (M/2)x3
    s.t. x1 ≤ 1 + Mx2 + (M/2)x3
         x1 ≤ 1 + M/2 + (M/2)x3
         x1, x2, x3 ∈ Z_+.

Now, a simple case analysis (below) for each value of x1 shows that z^{P^I(λ̄,μ̄)} = 1. Let z denote the best objective function value in each case:

• Case x1 = 0: it is easy to see that z ≤ 0.
• Case x1 = 1: it is easy to see that z ≤ 1 (in fact, z = 1 when x2 = x3 = 0).
• Case 2 ≤ x1 ≤ M/2 + 1: note that in this case the first constraint forces either x2 or x3 to be at least one. Thus z ≤ M/2 + 1 − M/2 = 1.
• Case k(M/2) + 2 ≤ x1 ≤ (k + 1)(M/2) + 1, for k ≥ 1 integer: similar to the previous case. Now, since x1 ≥ k(M/2) + 2, the second constraint forces x3 ≥ k; this together with the first constraint forces x2 + x3 ≥ k + 1. Then z ≤ (k + 1)(M/2) + 1 − (M/2) − k(M/2) = 1. □

The proof of the second part of Theorem 3 is very similar to the proof of the second part of Theorem 2, and therefore we do not present it here.

Acknowledgments

Santanu S. Dey would like to acknowledge the support of NSF, USA, grant CMMI #1562578. We would like to thank the reviewer for their comments, which helped significantly improve the presentation of the paper.


Appendix. Proof of Proposition 4

Proof. The result for the case of a packing polyhedron is shown in [11]. We prove the result for the case of a covering polyhedron. Consider the LP relaxation (the covering problem is a minimization):

    min  Σ_{j=1}^n cj xj
    s.t. Σ_{j=1}^n a1j xj ≥ b1
         Σ_{j=1}^n a2j xj ≥ b2
         xj ≥ 0  ∀j ∈ [n].

In the LP optimal solution x∗, at most 2 variables are non-zero. If only one variable is positive, then it is easy to verify the result by rounding up this solution to produce an IP-feasible solution. Therefore, without loss of generality, let x∗1 > 0 and x∗2 > 0. Let y∗ be an optimal dual solution. By strong duality and complementary slackness, we have:

    b1 y∗1 + b2 y∗2 = c1 x∗1 + c2 x∗2    (A.1)
    A11 y∗1 + A21 y∗2 = c1               (A.2)
    A12 y∗1 + A22 y∗2 = c2.              (A.3)

We consider six cases:

1. x∗1 ≥ 1, x∗2 ≥ 1: Clearly (⌈x∗1⌉, ⌈x∗2⌉, 0) is IP-feasible, and it is clear that (c1⌈x∗1⌉ + c2⌈x∗2⌉)/(c1 x∗1 + c2 x∗2) ≤ 2.

2. 0.4 ≤ x∗1 < 1, x∗2 ≥ 1: Again, (⌈x∗1⌉, ⌈x∗2⌉, 0) is IP-feasible, and

    (c1⌈x∗1⌉ + c2⌈x∗2⌉)/(c1 x∗1 + c2 x∗2) ≤ max{ c1/(c1 x∗1), c2⌈x∗2⌉/(c2 x∗2) } ≤ max{2.5, 2} = 2.5.

3. x∗1 ≥ 1, 0.4 ≤ x∗2 < 1: Same as above.

4. x∗1 < 0.4, x∗2 ≥ 1: Since Ai1 ≤ bi for i ∈ [2], we have Ai2 x∗2 ≥ bi(1 − x∗1) for i ∈ [2], and therefore (0, ⌈x∗2/(1 − x∗1)⌉, 0) is IP-feasible. Now note that, since 1/(1 − x∗1) < 5/3,

    c2⌈x∗2/(1 − x∗1)⌉/(c2 x∗2) ≤ ⌈(5/3)x∗2⌉/x∗2 ≤ { 2 if 1 ≤ x∗2 ≤ 6/5; 2.5 if 6/5 < x∗2 < 9/5; 5/3 + 1/x∗2 ≤ 20/9 if x∗2 ≥ 9/5 } ≤ 2.5.

5. x∗1 ≥ 1, x∗2 < 0.4: Same as above.

6. x∗1 < 1, x∗2 < 1: In this case,

    (c1⌈x∗1⌉ + c2⌈x∗2⌉)/(c1 x∗1 + c2 x∗2) = (c1 + c2)/(c1 x∗1 + c2 x∗2) = (A11 y∗1 + A21 y∗2)/(b1 y∗1 + b2 y∗2) + (A12 y∗1 + A22 y∗2)/(b1 y∗1 + b2 y∗2) ≤ 2,

where the second-to-last equality follows from (A.1)–(A.3), and the last inequality follows from the fact that Aij ≤ bi for i ∈ [2], j ∈ [2]. □
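The estimate used in case 4 above, namely that ⌈(5/3)x⌉/x never exceeds 2.5 for x ≥ 1, can be spot-checked numerically. The grid below is our choice; the supremum 2.5 is approached (but not attained) as x decreases toward 6/5:

```python
from fractions import Fraction
from math import ceil

# Grid check (grid is our choice) of the case-4 estimate: for x2 >= 1,
# ceil((5/3) * x2) / x2 stays at or below 2.5.
ratios = [
    Fraction(ceil(Fraction(5, 3) * x)) / x
    for x in (Fraction(k, 100) for k in range(100, 1001))   # x2 in [1, 10]
]
print(max(ratios) <= Fraction(5, 2))   # True
```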

References

[1] K. Andersen, Q. Louveaux, R. Weismantel, L.A. Wolsey, Inequalities from two rows of a simplex tableau, in: M. Fischetti, D.P. Williamson (Eds.), Integer Programming and Combinatorial Optimization, 12th International IPCO Conference, Ithaca, NY, USA, June 25–27, 2007, Proceedings, Lecture Notes in Computer Science, vol. 4513, Springer, 2007, pp. 1–15.
[2] K. Andersen, R. Weismantel, Zero-coefficient cuts, in: Integer Programming and Combinatorial Optimization, Springer, 2010, pp. 57–70.
[3] G. Averkov, A. Basu, Lifting properties of maximal lattice-free polyhedra, Math. Program. 154 (1–2) (2015) 81–111.
[4] G. Averkov, A. Basu, J. Paat, Approximation of corner polyhedra with families of intersection cuts, in: F. Eisenbrand, J. Könemann (Eds.), Integer Programming and Combinatorial Optimization, 19th International Conference, IPCO 2017, Waterloo, ON, Canada, June 26–28, 2017, Proceedings, Lecture Notes in Computer Science, vol. 10328, Springer, 2017, pp. 51–62.
[5] A. Basu, P. Bonami, G. Cornuéjols, F. Margot, On the relative strength of split, triangle and quadrilateral cuts, in: C. Mathieu (Ed.), Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2009, New York, NY, USA, January 4–6, 2009, SIAM, 2009, pp. 1220–1229.
[6] A. Basu, R. Hildebrand, M. Köppe, Light on the infinite group relaxation I: foundations and taxonomy, 4OR 14 (1) (2016) 1–40.
[7] A. Basu, R. Hildebrand, M. Köppe, Light on the infinite group relaxation II: sufficient conditions for extremality, sequences, and algorithms, 4OR 14 (2) (2016) 107–131.
[8] A. Basu, R. Hildebrand, M. Köppe, Equivariant perturbation in Gomory and Johnson's infinite group problem III: foundations for the k-dimensional case with applications to k = 2, Math. Program. 163 (1–2) (2017) 301–358.
[9] A. Basu, R. Hildebrand, M. Köppe, M. Molinaro, A (k+1)-slope theorem for the k-dimensional infinite group relaxation, SIAM J. Optim. 23 (2) (2013) 1021–1040.
[10] M. Bodur, A. Del Pia, S.S. Dey, M. Molinaro, Lower bounds on the lattice-free rank for packing and covering integer programs, 2017, arXiv preprint arXiv:1710.00031.
[11] M. Bodur, A. Del Pia, S.S. Dey, M. Molinaro, S. Pokutta, Aggregation-based cutting-planes for packing and covering integer programs, Math. Program. (2017), https://doi.org/10.1007/s10107-017-1192-x.
[12] V. Borozan, G. Cornuéjols, Minimal valid inequalities for integer constraints, Math. Oper. Res. 34 (3) (2009) 538–546.
[13] G. Cornuéjols, F. Margot, On the facets of mixed integer programs with two integer variables and two constraints, Math. Program. 120 (2) (2009) 429–456.
[14] H. Crowder, E.L. Johnson, M. Padberg, Solving large-scale zero-one linear programming problems, Oper. Res. 31 (5) (1983) 803–834.
[15] S. Dash, O. Günlük, M. Molinaro, On the relative strength of different generalizations of split cuts, Discrete Optim. 16 (2015) 36–50.
[16] S.S. Dey, J.P. Richard, Facets of two-dimensional infinite group problems, Math. Oper. Res. 33 (1) (2008) 140–166.
[17] S.S. Dey, J.P. Richard, Relations between facets of low- and high-dimensional group problems, Math. Program. 123 (2) (2010) 285–313.
[18] S.S. Dey, L.A. Wolsey, Constrained infinite group relaxations of MIPs, SIAM J. Optim. 20 (6) (2010) 2890–2912.
[19] M. Fischetti, A. Lodi, On the knapsack closure of 0-1 integer linear programs, Electron. Notes Discrete Math. 36 (2010) 799–804.
[20] R. Fukasawa, M. Goycoolea, On the exact separation of mixed integer knapsack cuts, Math. Program. 128 (1–2) (2011) 19–41.
[21] Q. Louveaux, R. Weismantel, Polyhedral properties for the intersection of two knapsacks, Math. Program. 113 (1) (2008) 15–37.
[22] H. Marchand, L.A. Wolsey, Aggregation and mixed integer rounding to solve MIPs, Oper. Res. 49 (3) (2001) 363–371.
[23] R.R. Meyer, On the existence of optimal solutions to integer and mixed-integer programming problems, Math. Program. 7 (1) (1974) 223–235.
[24] J.P. Richard, S.S. Dey, The group-theoretic approach in mixed integer programming, in: M. Jünger, T.M. Liebling, D. Naddef, G.L. Nemhauser, W.R. Pulleyblank, G. Reinelt, G. Rinaldi, L.A. Wolsey (Eds.), 50 Years of Integer Programming 1958–2008: From the Early Years to the State-of-the-Art, Springer, 2010, pp. 727–801.