Computers and Operations Research 113 (2020) 104770
Contents lists available at ScienceDirect
Computers and Operations Research journal homepage: www.elsevier.com/locate/cor
Improved flow-based formulations for the skiving stock problem J. Martinovic a,∗, M. Delorme b, M. Iori c, G. Scheithauer a, N. Strasdat a a
Institute of Numerical Mathematics, Technische Universität Dresden, Germany School of Mathematics, The University of Edinburgh, United Kingdom c DISMI, Università di Modena e Reggio Emilia, Italy b
a r t i c l e
i n f o
Article history: Received 6 February 2019 Revised 4 July 2019 Accepted 15 August 2019 Available online 19 August 2019 Keywords: Cutting and packing Skiving stock problem Arcflow model Integer linear programs
a b s t r a c t Thanks to the rapidly advancing development of (commercial) MILP software and hardware components, pseudo-polynomial formulations have been established as a powerful tool for solving cutting and packing problems in recent years. In this paper, we focus on the one-dimensional skiving stock problem (SSP), where a given inventory of small items has to be recomposed to obtain a maximum number of larger objects, each satisfying a minimum threshold length. In the literature, different modeling approaches for the SSP have been proposed, and the standard flow-based formulation has turned out to lead to the best trade-off between efficiency and solution time. However, especially for instances of practically meaningful sizes, the resulting models involve very large numbers of variables and constraints, so that appropriate reduction techniques are required to decrease the numerical efforts. For that reason, this paper introduces two improved flow-based formulations for the skiving stock problem that are able to cope with much larger problem sizes. By means of extensive experiments, these new models are shown to possess significantly fewer variables as well as an average better computational performance compared to the standard arcflow formulation. © 2019 Elsevier Ltd. All rights reserved.
1. Introduction Modeling frameworks based on graph theory have been investigated and successfully applied in different fields of cutting and packing (Valério de Carvalho, 1998; Clautiaux et al., 2018; Delorme et al., 2016; Macedo et al., 2010; Martinovic and Scheithauer, 2016a; 2018). Probably, one of the earliest references is given by Wolsey (1977) where theoretical foundations on the relationship between certain integer linear programs (ILPs) and corresponding flow formulations have been proposed. However, at that time, computers were much slower than they are today, and addressing the exact solution of discrete optimization problems was not an easy task. More precisely, the computational times required to enumerate all the variables and constraints and to solve the complete models exactly were too large. Hence, for a long time, research on these alternative formulations was rather theoretically motivated, so that, for instance, the numerical behavior and benefits of flow-based approaches could not be properly discussed in Wolsey (1977); nevertheless, some properties suggesting com-
∗
Corresponding author. E-mail addresses:
[email protected] (J. Martinovic),
[email protected] (M. Delorme),
[email protected] (M. Iori),
[email protected] (G. Scheithauer),
[email protected] (N. Strasdat). https://doi.org/10.1016/j.cor.2019.104770 0305-0548/© 2019 Elsevier Ltd. All rights reserved.
putational advantages of such formulations are already presented therein. Mainly after the publication of the famous book by Nemhauser and Wolsey (1988), computational aspects (e.g., the strength of the considered models) became a central concern in ILP modeling. In recent years, the rapidly advancing development of (commercial) Mixed ILP (MILP) software and the steady progress in terms of powerful hardware components have successively contributed to the scientific importance of pseudo-polynomial formulations for the solution of cutting and packing problems (Valério de Carvalho, 2002; Delorme et al., 2016; Martinovic et al., 2018). In particular, a significant body of today’s research is dealing with reduction techniques, as well as theoretical and algorithmic improvements (Brandão and Pedroso, 2016; Côté and Iori, 2018; Delorme and Iori, 2019) for the existing approaches. In this paper, we want to promote this general idea by presenting two new and improved flow-based formulations for the one-dimensional skiving stock problem (or SSP for short). In its classical formulation, a threshold length L ∈ N and m ∈ N (different) item types that are specified by their length li and frequency (of occurence) bi , i ∈ I := {1, . . . , m}, are considered. The SSP requires to recompose the given items in order to obtain a maximum number of objects, each having a length at least L, see also Fig. 1. In Assmann et al. (1984), this problem was introduced as
2
J. Martinovic, M. Delorme and M. Iori et al. / Computers and Operations Research 113 (2020) 104770
l1 = 5
3×
0
L = 10
0
L = 10
l2 = 3
4×
l3 = 2
4×
0 given items
L = 10 some possible item combinations
Fig. 1. A simple schema of the skiving stock problem for the instance E0 = (m, l, L, b) = (3, (5, 3, 2 ), 10, (3, 4, 4 )).
the dual bin packing problem (DBPP) whenever bi = 1 holds for all i ∈ I. The term skiving stock problem (SSP) has been proposed in the literature (Zak, 2003) for a particular practical application in the field of paper recycling (Johnson et al., 1997). Nowadays, both names are usually separated according to the typology presented in Wäscher et al. (2007), meaning that •
•
the DBPP refers to highly heterogeneous instances with bi , i ∈ I, close to (or equal to) one, see Labbé et al. (1995) and Peeters and Degraeve (2006), and the SSP describes instances where i ∈ I bi is large compared to the parameter m, see Martinovic and Scheithauer (2016a) and Zak (2003).
The skiving stock problem plays an important role whenever an efficient and sustainable use of limited resources is intended. By way of example, such computations can be found in industrial production processes (Johnson et al., 1997; Zak, 2003), manufacturing and inventory plannings (Arbib et al., 2012; Arbib and Marinelli, 2005), multiprocessor scheduling problems Alvim et al. (2004), or wireless communications (Martinovic et al., 2017; Tragos et al., 2013). For instance, in the latter case, the natural radio spectrum is considered. Since the utilization of the licensed spectrum parts may be very low, a major challenge is the efficient utilization of the resulting spectrum holes (i.e., the vacant areas within the spectrum) for secondary users. To this end, among other objectives, the unoccupied intervals (of length li , i ∈ I) have to be combined such that the bandwidth requirement (that is L) of as many secondary users as possible can be satisfied. Further scientific relevance of the SSP is given by the fact that it may appear in holistic cuttingand-skiving scenarios (Chen et al., 2019; Johnson et al., 1997). Observe that, although its connection to the extensively studied cutting stock problem1 (Valério de Carvalho, 2002; Delorme et al., 2016; Gilmore and Gomory, 1961; Kantorovich, 1939) is obvious, both problems are not dual formulations in the sense of mathematical optimization. Originally, the skiving stock problem was modeled by a patternbased approach introduced in Zak (2003). Although this model is known to possess a very tight LP relaxation (Kartak and Ripatti, 2019; Martinovic and Scheithauer, 2016b; 2017), similiar to the CSP context (Caprara et al., 2015; Kartak et al., 2015), the number of variables is exponential with respect to the number of items. As an alternative, pseudo-polynomial models for the SSP (most notably the arcflow model and the one-stick model) have been established in the literature and were shown to always exhibit an 1
The cutting stock problem (CSP) consists in determining the minimum number of given large stock rolls that has to be cut to satisfy the demands of certain smaller item lengths. In other words, the CSP is the reformulation of the BPP in which all items having the same widths have been grouped together into item types, each type having a given demand.
equally good relaxation, as well as a reasonable computational performance for most of the considered instances (Martinovic and Scheithauer, 2016a). However, especially for instances of practically meaningful sizes, the resulting models involve (very) large numbers of variables and constraints, so that appropriate reduction techniques are required to decrease the numerical efforts for solving the corresponding ILPs. For that reason, we introduce two improved graph-theoretic ILP formulations for the SSP that are based on the ideas of reversed loss arcs and reflected arcs, see Delorme and Iori (2019), respectively. Both of these new approaches are tailored for addressing specific drawbacks of the existing flow-based modeling framework. More precisely, the first idea mainly eliminates superfluous sinks (and possibly also further redundant vertices) of the graph, whereas the main novelty of the second formulation is to only consider half of the bin capacity (that is, the threshold L) so that significantly reduced sets of vertices and arcs are obtained. As a consequence of these reductions, both approaches will be shown to (on average) lead to a better numerical behavior (in terms of solution times and instances solved to optimality within a reasonable time limit) compared to the arcflow model from Martinovic and Scheithauer (2016a). Moreover, dominance relations between the LP bounds obtained by the various models will be discussed. Altogether, it should be emphasized that our computational study also serves as a first systematic approach to collect and describe benchmarking instances (and their performance) for the skiving stock problem, which can form an experimental basis for future research articles. A preliminary version of this research, containing just one new model and very limited computational results, was presented at the International Conference on Operations Research 2018 (OR 2018, Brussels) as Martinovic et al. (2019). The paper is organized as follows: in the next section, we review the standard flow-based formulation for the SSP and discuss its most important reduction techniques. Afterwards, in Section 3, two improved modeling frameworks and related theoretical properties are presented. Then, the computational performance of the three models is compared in Section 4. Finally, we summarize the main ideas of this paper and give an outlook on future research. 2. The arcflow model Throughout this paper we assume the integer item lengths to be ordered decreasingly, i.e.,
L > l1 > l2 > . . . > lm > 0.
(1)
If required, we can easily obtain this order by a sorting algorithm, such as merge sort, with O (m · log m ) operations. Let E = (m, l, L, b) be an instance of the SSP. Any combination of items whose lengths sum up to at least L is called a (packing) pattern of E. More formally, a pattern can be referred to as an
J. Martinovic, M. Delorme and M. Iori et al. / Computers and Operations Research 113 (2020) 104770 •
0
1
2
3
4
5
6
7
8
9
10
11
12
Fig. 2. The vertex set and some arcs of G for the instance E0 = (3, (5, 3, 2 ), 10, (3, 4, 4 )). The colors of the item arcs are chosen according to Fig. 1. Altogether, G contains 13 vertices and 28 arcs.
integer vector a ∈ Zm + where the ith component states how many items of type i ∈ I are involved. Then, the pattern set is given by P ( E ) = a ∈ Zm + | l a ≥ L . As this set (at least theoretically) contains an infinite number of elements, normally the restriction to minimal patterns (collected in the set P (E)) is sufficient. Here, the term minimal means that none of the items (appearing in the pattern) can be removed without bringing the total length below the threshold L. Based on these preliminaries, we can define the quantity
v¯ := max l a | a ∈ P (E ) ,
which represents the largest total length of a minimal pattern. Remark 1. It is not necessary to know all the (exponentially many) minimal patterns to calculate v¯ . Instead, see Algorithm 1, this value is “automatically” found while generating the arcflow graph. Algorithm 1 Create_arcflow_skiv. 1: 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: 16: 17: 18: 19: 20: 21:
Input: L: Pattern threshold, m: Number of item types, l: Item lengths, b: Item availabilities E ← ∅ ; V ← ∅ M[0, . . . , L − 1] ← 0 an array that keeps track of the possible tails M[0] ← 1 node 0 is a possible tail for i = 1 to m do for k = L − 1 down to 0 do if M[k] = 1 then if k is a possible tail for j = 1 to bi do if k + ( j − 1 ) × li ≥ L then break break if tail ≥ L end if E ← E ∪ {(k + ( j − 1 ) × li , k + j × li )} V ← V ∪ {(k + j × li )} if k + j × li < L then M[k + j × li ] ← 1 end if end for end if end for end for return V , E
•
•
There are paths not corresponding to minimal patterns, such as the sequence 0 → 5 → 7 → 9 → 12. There are redundant vertices that cannot be contained in paths starting at v0 = 0 (e.g., v = 1). There are paths forming the same minimal pattern. For instance, the sequences 0 → 5 → 8 → 10 and 0 → 5 → 7 → 10 are both referring to the minimal pattern a = (1, 1, 1 ) .
Consequently, reduction techniques are required to tackle these problems: 1. Obviously, only nonnegative integer linear combinations of the given item lengths have to be included in the set of vertices, since all other nodes cannot occur in a path s = (v0 , v1 , . . . , vk ) from v0 = 0 to some vk ≥ L. Thus, the set V can be replaced by a reduced set (whose elements are mostly called raster points3 )
V :=
E := {( p, q ) ∈ V × V | p < L, q − p ∈ {l1 , . . . , lm }}. In this setting, an arc ( p, q ) ∈ E represents the positioning2 of an item of length q − p = l j ∈ {l1 , . . . , lm } with its left point at vertex p ∈ V, see Fig. 2 for an example. Then, any path s = (v0 , v1 , . . . , vk ) in G from v0 = 0 to some vertex vk ≥ L corresponds to a pattern a ∈ P(E). However, this basic approach still has some major drawbacks: 2 Hence, the considered approach is sometimes referred to as a position-indexed formulation, see Martinovic and Scheithauer (2016a).
v ∈ V | ∃ a ∈ Zm + : l a = v .
This technique originates from the consideration of so-called normal patterns in early publications on two-dimensional cutting problems (Christofides and Whitlock, 1977; Herz, 1972), and refers to the observation (Valério de Carvalho, 1999, Criterion 3). Even nowadays the theory of potential allocation points is a quite active field of research in cutting and packing (Côté and Iori, 2018; Fischer and Scheithauer, 2015). 2. In many cases, the previous reduction is not sufficient for vertices v ≥ L, because the minimality of the corresponding item combinations is not ensured. By way of example, the instance E = (2, (7, 2 ), 10, (7, 7 )) certainly leads to 12 ∈ V , but there is no minimal pattern a ∈ P (E) with l a = 12. For that reason, let
L := l a | a ∈ P (E )
be the set of all possible total lengths of minimal patterns. Then, we can replace V by
V := V ∩ ({0, 1, . . . , L − 1} ∪ L ). These observations also lead to a reduced set of arcs
E :=
( p, q ) ∈ V × V | p < L, q − p ∈ {l1 , . . . , lm } .
If we want to assign a path s in G = V , E to a minimal pattern a ∈ P (E), this identification is not necessarily unique, because a ∈ Zm + does not carry any information about the particular item order. For that reason, the consideration of monotonically decreasing paths (i.e., paths whose corresponding item lengths are sorted in non-increasing order) is proposed in the literature, see Scheithauer (2018, Section 4.8.4.) or Valério de Carvalho (1998) for an exemplary application to the related CSP. Formally, this can be done by defining an index μ(p) ∈ I ∪ {0} for all p ∈ V \ L by
μ ( p) :=
As a starting point, we consider the directed graph G = (V, E ) given by the set of vertices V = {0, 1, . . . , v¯ } and the set of arcs
3
0, min {i ∈ I | p − li ∈ V , i ≥ μ( p − li )},
if p = 0, if p ∈ V \ (L ∪ {0} ). (2)
Remark 2. Practically, this additional constraint can be incorporated by processing the item types in the order specified by assumption (1) within the graph generation, see also Algorithm 1. 3 Note that, mainly in the literature on cutting problems or pallet loading (Scheithauer and Terno, 1996), alternative methods to obtain (reduced) sets of raster points have been presented. However, as these approaches actively make use of the fixed bin capacity (or the fixed pallet size), a possible adaptation to the SSP context (where patterns can go beyond the vertex L) is not straightforward. However, to a certain extent, the basic idea of performing dynamic programming in forward and backward direction (which is usually done to obtain reduced sets of raster points) is contained in our reflect formulation, see Section 3.
4
J. Martinovic, M. Delorme and M. Iori et al. / Computers and Operations Research 113 (2020) 104770
0
2
3
4
5
6
7
8
9
10
11
12
Fig. 3. Arcflow graph G for the instance E0 = (3, (5, 3, 2 ), 10, (3, 4, 4 )). It contains 12 vertices and 17 arcs.
This leads to a new set of arcs
E := ( p, q ) ∈ V × V | p < L, q − p = li ∈ {l1 , . . . , lm }, i ≥ μ( p)
and a reduced graph G = V , E
s.t.
, see Fig. 3 for an example.
Remark 3. The introduction of μ may not solve the problem of equivalent paths within G entirely. Consider, by way of example, the instance E0 where G (illustrated in Fig. 3) contains the two paths
0 → 3 → 5 → 10 and 0 → 5 → 8 → 10 which both refer to the pattern a = Moreover, the first path does not respect the monotonicity requirement. However, this problem cannot be avoided if, for instance, a (large) item length can be obtained by summing up two (or more) smaller item lengths. Remark 4. Obviously, no pattern a ∈ P (E) with ai > bi for some i ∈ I can be part of an integer feasible solution. Hence, in a few articles (Kartak and Ripatti, 2019; Martinovic and Scheithauer, 2016c), the constraint a ≤ b is added to the pattern set leading to the so-called proper relaxation. However, to the best of our knowledge, there is no obvious way to fully include this criterion to the graph construction. This is mainly caused by the fact that arcs are generated “locally”, whereas the additional constraint is a “global” property of entire paths. Taking into account a ≤ b is only possible up to a certain extent by limiting the number of successive arcs (starting at a given vertex) of the same item type i ∈ I to bi in the graph generation, see also Algorithm 1. Nevertheless, by way of example, note that the arc (8, 10) in Fig. 3 can still belong to proper patterns (e.g., a = (1, 1, 1 ) ) as well as non-proper patterns (e.g., a = (0, 0, 5 ) ), so that it has to remain within the graph. Similar to Côté and Iori (2018, Algorithm 1), where a possible construction of the vertex set for the CSP scenario is described, the reduced arcflow graph can be generated in O (mL ) by the procedure given in Algorithm 1. By way of example, in Fig. 3, the red (blue, green) arcs, and vertices attached to them, are generated by Algorithm 1 for i = 1 (i = 2, i = 3). In order to formulate the arcflow model in a preferably convenient way, we introduce the auxiliary index sets
A (q ) :=
p ∈ V | ( p, q ) ∈ E ,
for every q ∈ V and the set
E (i ) := ( p, q ) ∈ E | q − p = li
A (q ) := +
q∈A+ ( 0 )
x pq =
p∈A− (q )
r ∈ V | (q, r ) ∈ E ,
for every i ∈ I. Remark 5. Note that A− (q ) and A+ (q ) model the predecessors and successors of vertex q ∈ V , respectively, whereas E(i) collects all arcs referring to an item of length li . Moreover, the cases A− (q ) = ∅ or A+ (q ) = ∅ can occur for some q ∈ V . Let x pq ∈ Z+ denote the flow along arc ( p, q ) ∈ E . Then we can state the
xqr ,
q ∈ V \ (L ∪ {0} ),
(3)
i ∈ I,
(4)
( p, q ) ∈ E .
(5)
r∈A+ (q )
x pq ≤ bi ,
( p,q )∈E (i )
x pq ∈ Z+ ,
( 1, 1, 1 ) .
−
Arcflow Model of the Skiving Stock Problem zA = x0q → max
Constraints (3) can be interpreted as a flow conservation: at every interior vertex, which is not a possible endpoint, the units of incoming flow (i.e., the number of items “terminating” at this position) have to equal the units of emanating flow (that is the number of items “starting” at this position). Thus, the objective function maximizes the total flow within G , that is, the total number of objects obtained. Conditions (4) ensure that the given item supply is not exceeded. In this arcflow model, the numbers of variables and constraints are O (mL ) and O (m + L ), respectively. Remark 6. In our implementation, a slight modification of Algorithm 1 is applied to generate an equivalent arcflow graph. More precisely, note that any arc (p, q) with q > L can be shortened to (p, L) without essentially changing the properties of the graph. In particular, this alternative representation does neither influence the numbers of variables and constraints in the ILP model presented above (i.e., the numbers of arcs and interior vertices remain the same) nor the general structure (e.g., the order of the elements) of the sets V and E . Basically, also the resulting ILP formulation will be identical to the one introduced before. Only a small adaptation in the definition of the sets E(i), i ∈ I, is required since the difference q − p does not directly lead to the item type for a shortened arc. Although this model has turned out to be competitive in solving randomly generated instances (Martinovic and Scheithauer, 2016a), there are still some drawbacks connected to different aspects of this formulation: (a) Whenever v¯ > L holds, the graph G (at least implicitly) contains mutiple sinks leading to a considerable amount of duplicated arcs, even if the technique described in Remark 6 is applied. By way of example, the arcs (8, 11) and (9, 12) (or (8, 10) and (9, 10) in the shortened representation) actually have the same meaning in Fig. 3, as we will see in the next section. (b) As indicated by Remark 3, in some cases, the graph still contains some symmetries or non-monotonous paths. (c) The reduction caused by raster points is particularly successful at the beginning of the graph (i.e., in the first half of V) whereas it tends to be nearly irrelevant in the second part. Whereas the second problem may not be addressed in the general context (see Remark 3), we aim at tackling the remaining unfavorable properties by two new flow-based approaches in the following section.
J. Martinovic, M. Delorme and M. Iori et al. / Computers and Operations Research 113 (2020) 104770
0
2
3
4
5
6
7
8
9
5
10
Fig. 4. Arcflow graph GL for the instance E0 = (3, (5, 3, 2 ), 10, (3, 4, 4 )). It contains 10 vertices and 17 arcs.
(0, 3, s)
3. Two improved flow-based formulations 3.1. An arcflow model with reversed loss arcs As a starting point, we want to deal with problem (a) mentioned above. To this end, we assume that L ∈ V holds without loss of generality. If this is not the case, then the given instance E = (m, l, L, b) can be replaced by an equivalent instance E = (m, l, L , b) with L ≥ L + 1. Now, instead of (implicitly) allowing multiple endpoints within the graph, we force any path to end at vertex v = L, while preserving the property that the item length is identical to the length of the corresponding arc. This can be obtained by the following three steps: (1) Any arc ( p, q ) ∈ E with q = L + r for some r ≥ 1 is shifted back to ( p − r, L ). After having completely executed this step, we let v denote the lowest-indexed vertex that is the tail of a shifted arc. (2) Add reversed loss arcs (e, d) with v ≤ d < e < L and e = succV (d ), where succX (x ) describes the successor of element x ∈ X within the canonically ordered set X ⊆ N. (3) Now, any vertex that is not part anymore of at least one arc is deleted. Let us refer to the resulting graph as GL := (VL , EL ). Then principally the same optimization model as in Section 2 can be applied (now, of course, based on the updated sets VL and EL ), if the artificial loss arcs are also respected in the flow conservation conditions. We will briefly refer to this model as the loss arcflow model (or simply larcflow), and define by zL its objective function. Example 1. The modified graph for the instance E0 = (3, (5, 3, 2 ), 10, (3, 4, 4 )) is depicted in Fig. 4. Here, the formerly existing arcs (9, 12 ), (8, 11 ), (9, 11 ) ∈ E have been shifted back to the arcs (7, 10 ), (7, 10 ), (8, 10 ) ∈ EL , and two additional reversed loss arcs have been introduced, namely (8, 7) and (9, 8). In this new setting, the vertices 11 and 12 do not appear anymore; and, by way of example, the path 0 → 3 → 6 → 9 → 12 is now loss
loss
represented by 0 → 3 → 6 → 9 → 8 → 7 → 10. Besides having eliminated the additional sinks of the graph (so that any path now terminates at v = L), there are some further advantageous, neutral and disadvantageous properties (marked by (+ ), ( ± ) and (− )) that we want to mention here as a concluding summary of this subsection:
(+ ) In our example, see Fig. 4, the new arcflow graph GL contains the same number of arcs as before since the number of additional loss arcs is equal to the savings that are obtained by shifting arcs back (onto some already existing arcs). However, for practically meaningful problem sizes, this effect is not likely to happen so that, normally, a smaller number of variables can be expected (see also Section 4). ( ± ) Typically, the loss arcflow model has (roughly) the same number of constraints as the standard arcflow formulation because the number of interior vertices is not likely to change considerably. However, in some (rather exceptional) cases, it may happen that interior vertices of V can be
0
(0, 3, s)
(3, 6, s) 3
(3, 6, s)
6
(6, 6, r)
Fig. 5. A preliminary schema to represent paths on only one half of the vertex set. The third index (s or r) of an arc is used as a label to distinguish between standard arcs (s) and reflected arcs (r).
deleted after having shifted the arcs according to Step (1) from above. On the other hand, for instances with sparse vertex set, it is also possible that the shifting of arcs can activate additional interior vertices. As we will see in our computational part (see Section 4), both of these phenomena do not play a very important role for larger instances, in general. (− ) Possibly, the shifting of some arcs can produce additional symmetries or can destroy some monotonicity properties of the graph. For instance, the graph GL depicted in Fig. 4 now contains a further possibility to represent the pattern a = (1, 1, 1 ) . Moreover, at least from a theoretical point of view, arbitrarily long paths (patterns) can be obtained in GL because the graph does not have to be acyclic anymore, see also Fig. 4. 3.2. The reflect arcflow model In this subsection, the problems (a) and (c) from the list presented in Section 2 shall be dealt with collectively. For that reason, we consider an approach basically introduced in Delorme and Iori (2019) (for the CSP scenario), whose key idea is to reduce the set of vertices by decomposing any path into two subpaths (providing roughly half of the desired total length) that are linked by a reflected arc. Example 2. Let us again consider the instance E0 = (3, (5, 3, 2 ), 10, (3, 4, 4 )) and the corresponding graph G depicted in Fig. 3. By way of example, the description of the path 0 → 3 → 6 → 9 → 12 actively uses five vertices and four different arcs. In a new setting, that only takes the vertices of the first half (of this path) into account, we could also describe this path as illustrated in Fig. 5. Here, the two (identical) subpaths (0, 3, s) → (3, 6, s) (each referring to two items of length l2 = 3) are connected by a reflected arc (6, 6, r) (where the reflection point R = 6 is given by half of the previous total path length of l a = 12). Hence, we obtain an equivalent description of the pattern a = (0, 4, 0 ) by only using three vertices and three different arcs. stock patterns can exhibit several lengths Lt ∈ L := As skiving l a | a ∈ P (E ) (which are identical to the sinks of G ), we usu-
ally would have to cope with multiple reflection points Rt = Lt /2 if the idea presented in Delorme and Iori (2019) (and also in the previous example) is simply overtaken. Moreover, the elements of L are not known in advance and, hence, cannot be used to efficiently implement a reflect graph. Due to these difficulties, we propose another strategy to represent a pattern in the SSP context. In order to avoid multiple reflection points, we again try to force any path (or pattern) to have exactly the length L by introducing appropriate loss arcs (but now in forward direction). Then,
6
J. Martinovic, M. Delorme and M. Iori et al. / Computers and Operations Research 113 (2020) 104770
Algorithm 2 can be applied to build the reflect graph GR := (U, A ) (for the SSP) with only one reflection point4 R := L/2.
reflect
that simply combining the two identical subpaths 0 → 3 → 4 by means of the reflected arc (4, 4, r) would correspond to the pattern a = (0, 4, 1 ) , so that (at the moment) also nonminimal patterns can be discovered in the graph. However, this possibility will later be excluded by adapted flow conservation constraints. As an additional example, we consider the pattern a = (0, 1, 4 ) which actually requires to merge two different sub-
Algorithm 2 Create_reflect_skiv. 1: 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: 16: 17: 18: 19: 20: 21: 22: 23: 24: 25: 26: 27: 28:
Input: L: Pattern threshold, m: Number of item types, l: Item lengths, b: Item availabilities A←∅ ; U ←∅ M[0, . . . , L/2] ← 0 an array that keeps track of the possible tails M[0] ← 1 node 0 is a possible tail v = L/2 lowest-indexed vertex that is the head of a reflected arc for i = 1 to m do H[0 . . . L/2] ← 0 an array that keeps track of the tails already processed for j = 1 to bi do for k = L/2 − 1 down to 0 do if H[k] = 0 and M[k] = 1 then if k is unprocessed and a possible tail H[k] = 1 k is now processed if k + li ≤ L/2 then if the arc is standard A ← A ∪ {(k, k + li , s )} U ← U ∪ { ( k + li )} M[k + li ] ← 1 else if the arc is reflected A ← A ∪ {(k, L − (k + li ), r )} U ← U ∪ {(L − (k + li ))} v ← min{v, L − (k + li ))} end if end if end for end for end for U ← U ∪ {L/2} for i ∈ U : i ≥ v do A ← A ∪ {(i, j, l ): j = min(k ∈ U : k > i )} loss arcs A ← A ∪ {(L/2, L/2, r )} allow paths to collide in L/2 return U, A
More precisely, we have that any arc (u, v ) ∈ E (appearing in Section 2) • •
•
with u < v ≤ R is maintained as a standard arc (u, v, s ) ∈ A, with u < R < v is transformed into a reflected arc (u, L − v, r ) ∈ A, with R ≤ u < v is deleted.
Finally, if it is not already the case, we add R to U. After having executed these steps, let v denote the lowest-indexed vertex that is the head of a reflected arc. Then, we add the loss arcs (d, e, l) with d ∈ U, v ≤ d < R and e = succU (d ) acting as the direct successor of d. Finally, we have to introduce a special reflected arc (not referring to any item length) (L/2, L/2, r) to enable the combination of two subpaths providing exactly half of the bin capacity. Example 3. The reflect arcflow graph GR for the instance E0 = (3, (5, 3, 2 ), 10, (3, 4, 4 )) is depicted in Fig. 6. For the sake of a better comprehension, the colors of the arcs are now referring to the different arc types (standard, reflected and loss). •
Here, the path 0 → 3 → 6 → 9 → 12 from the previous example is obtained by linking two (identical) reflected subpaths reflect
loss
0 → 3 → 4 → 5 by means of the reflected arc (5, 5, r). Note 4
In order to avoid fractional coordinates it should be assumed that L is even. If required, this can be obtained by scaling all lengths of the instance by a factor 2. Note that this operation does not change the number of arcs.
•
reflect
loss
paths. More precisely, the subpaths 0 → 2 → 4 → 4 → 5 and 0 → 3 → 5 have to be linked by the reflect arc (5, 5, r). Observe that, even for this quite small example, the numbers of vertices and arcs could be reduced by roughly 50 percent compared to the standard arcflow model (from Fig. 3) or the model with reversed loss arcs (from Fig. 4). As mentioned in Section 2, these significant savings are obtained because the effect of normal patterns (or raster points) is noticed almost only at the beginning of the graph, whereas it tends to be irrelevant for the second half of the vertices (which is not modeled explicitly anymore). Let us define U := U \ {0} and A := A \ {(L/2, L/2, r )}, and let ξ deκ denote the flow along the generic arc (d, e, κ ) ∈ A (where any label κ ∈ {s, r, l} is possible); then the reflect arcflow model is given by
Reflect Arcflow Model of the Skiving Stock Problem zR = ξder → max (d,e,r )∈A
s.t.
(d,e,s )∈A: e−d=li
ξdes +
ξdel +
(d,e,l )∈A
(d,e,r )∈A
(6)
ξder ,
(7)
(d,e,r )∈A
ξdel +
(d,e,l )∈A
i ∈ I,
ξe f l = . . .
ξder +
ξ0er = 2
(e, f,l )∈A
(d,e,r )∈A
ξder ≤ bi ,
(d,e,r )∈A: e=L−d−li
(0,e,r )∈A
(d,e,s )∈A
...
ξ0es +
(0,e,s )∈A
ξdes +
ξder ≥
κ ∈{r,s} (e, f,κ )∈A
ξe f l ,
ξe f κ , e ∈ U ,
(8)
e ∈ U ,
(9)
(e, f,l )∈A
ξdeκ ∈ Z+ ,
(d, e, κ ) ∈ A , (10)
ξL/2,L/2,r ∈ Z.
(11)
The objective function maximizes the total flow on the reflected arcs. As (on balance) any unit of flow on a reflected arc is responsible for linking two subpaths to one single pattern, this quantity actually displays the number of obtained objects (with length ≥ L). For this reason, we further have to demand that the total flow in the system is twice the objective value, see constraint (7). Moreover, conditions (6) require each item to be used at most bi times, whereas constraints (8) model the (adapted) flow conservation at each interior vertex: note that (in a pattern sense) each flow entering a node through a standard arc has to be continued by a flow leaving the node through a standard or a reflected arc (classical flow conservation), or by a flow entering the node through a reflected arc (connection between two subpaths), including the additional flow brought and removed by the loss arcs. Example 4. The conditions explained so far would entail the following properties: 1. Fortunately, the non-minimal pattern a = (0, 4, 1 ) from the previous example cannot be represented by a feasible path reflect
(consisting of two copies of 0 → 3 → 4 linked by (4, 4, r ) ∈
J. Martinovic, M. Delorme and M. Iori et al. / Computers and Operations Research 113 (2020) 104770
(0, 3, s)
0
(0, 5, s)
2
(3, 5, s) 3
(0, 2, s)
7
(3, 4, r)
4
(5, 5, r) (4, 4, r)
5
(4, 5, l)
(2, 4, s)
Fig. 6. The reflect arcflow graph GR for the instance E0 = (3, (5, 3, 2 ), 10, (3, 4, 4 )). It contains 5 vertices and 9 arcs.
(0, 3, s) (3, 4, r) 0
2
3
4
(4, 4, r)
5
(3, 4, r) (0, 3, s) Fig. 7. The former representation of the non-minimal pattern a = (0, 4, 1 ) .
(0, 3, s) (3, 4, r) 0
2
3
(4, 5, l) 4
(3, 4, r)
5
(5, 5, r)
(4, 5, l)
(0, 3, s) Fig. 8. The pattern a = (0, 4, 0 ) is built by joining two subpaths with the help of the artificial reflected arc (L/2, L/2, r ) ∈ A.
A) in GR anymore. To this end, let us consider the flow propagation for the arcs and vertices depicted in Fig. 7. For the first interior node e = 3, we have two units of flow on the incoming standard arc (0, 3, s) and two units of flow on the leaving reflected arc (3, 4, r), so that condition (8) is satisfied. However, when looking at e = 4 there is no flow corresponding to the left-hand side of (8), whereas on the right-hand side we have two units of flow for the incoming reflected arc (3, 4, r) and an additional nonnegative flow on the reflected arc (4, 4, r). Hence, the flow conservation does not hold for the vertex e = 4, so that the non-minimal pattern cannot be part of a feasible flow (in GR ) anymore. 2. Contrary to the first point, at the moment, it would still be loss
possible to link the two identical subpaths 0 → 2 → 4 → 5 by means of the reflected arc (5, 5, r) without violating constraints (8) for any of the involved interior vertices. As this would refer to a vector a = (0, 0, 4 ) (which does not represent a pattern) some further conditions are needed to limit the use of loss arcs. Based on the second observation given in the previous example, loss arcs may only be used for special purposes. More precisely,
they either have to directly follow another loss arc or a reflected arc, see (9). Then, by way of example, the vector a = (0, 0, 4 ) cannot be obtained anymore as a feasible connection of two subpaths. One particularity of this model is that we have to allow negative flows on the (artificial) reflected arc (L/2, L/2, r), see (11), to also enable two reflected paths to be merged together without violating the flow conservation and without miscounting the number of obtained patterns in the objective function. Example 5. As long as a nonnegative flow on the artificial reflected arc (L/2, L/2, r ) ∈ A is demanded, the flow conservation conditions for e = 5 in the scenario depicted in Fig. 8 would lead to the same contradiction as in the previous example. Moreover, as already two units of flow are required on the reflected arc (3, 4, r ) ∈ A, this pattern would not be counted correctly (as one single pattern) in the objective function. To solve these two problems, the flow variable corresponding to (L/2, L/2, r ) ∈ A is allowed to possess negative values. Then, with ξ(0,3,s ) = ξ(3,4,r ) = ξ(4,5,l ) = 2 and ξ(5,5,r ) = −1 all constraints are satisfied and the total flow on all the involved reflected arcs is equal to one, so that exactly one pattern is added to the objective value.
(0, 8, s) 0
(0, 2, r)
2
(2, 4, l)
(0, 4, r)
4
(4, 8, l) (8, 4, r)
8
(8, 10, l)
10
(10, 10, r)
Fig. 9. The reflect graph GR for the instance E1 = (3, (18, 16, 8 ), 20, (10, 10, 10 )).
8
J. Martinovic, M. Delorme and M. Iori et al. / Computers and Operations Research 113 (2020) 104770
Example 6. Another typical case where linking two reflected paths can sometimes be meaningful occurs if very large items may appear together in a single pattern, see Fig. 9 for an illustration of the exemplary instance E1 = (3, (18, 16, 8 ), 20, (10, 10, 10 )). Observe that, also in this setting, ξ (L/2,L/2,r) will become negative if, for reflect
loss
loss
loss
instance, the two identical paths 0 → 2 → 4 → 8 → 10 shall be linked to obtain the pattern a = (2, 0, 0 ) . For the sake of completeness, let us mention that an optimal solution (with zR = 15) is given by ξ0,2,r = ξ0,4,r = ξ0,8,s = ξ2,4,l = ξ8,10,l = 10, ξ4,8,l = 20, and ξ10,10,r = −5. We also note that the reduction criterion proposed in Delorme and Iori (2019, Proposition 5) stating that any feasible pattern can be represented by a pair of colliding paths whose reflected arc (d, e, r) has d < e cannot be used for the SSP. Indeed, let us consider the pattern a = (0, 0, 3 ) from the previous example: reflect
loss
it can only be obtained by the two colliding paths 0 → 8 → 4 → 8 and 0 → 8, in which the reflected arc (8, 4, r) has to be used. In summary, modeling and generating the reflect arcflow formulation for the SSP is more challenging than in the cutting scenario (Delorme and Iori, 2019) and requires (partly) independent techniques as well as considerable modifications of the corresponding ILP. Altogether, this model possesses much fewer variables than the previous flow-based formulations (in Section 2 and Subsect 3.1). As regards the number of constraints, no clear prediction can be made. On the one hand, our new approach allows for using much fewer flow conservation constraints (in most cases only roughly 50% of the original number) of type (8), whereas, on the other hand, an additional constraint (see (9)) has to be included for any vertex. Depending on the particular instance, these two effects may lead to slight differences (in both directions) if compared to the number of constraints of the standard arcflow model, see also Section 4. Although the previous examples and explanations credibly suggest the validity of the new reflect formulation, we would like to add some more formal arguments to clearly support this observation also from a theoretical point of view. To this end, we show that any minimal pattern that can appear in an integer solution of the SSP (i.e., any proper pattern, see Remark 4) can be found in the reflect graph. Theorem 7. Let E = (m, l, L, b) be an instance of the SSP. Every valid minimal SSP pattern can be represented by the reflect graph. Proof. Consider a generic solution of the SSP (belonging to E) using the (proper) patterns a1 , . . . , at ∈ P (E ), t ∈ N, at least once. For an arbitrary pattern a := ak , k ∈ {1, . . . , t }, we now show that it can be transformed (in the reflect formulation) into a pair of paths that start in 0 and collide at another vertex. To this aim, we sort the items of pattern a ∈ Zm + by non-increasing lengths (hereinafter referred to as w1 ≥ w2 ≥ . . .) and divide them into two subsets, I1 and I2 . We start by filling I1 with the items in the computed order until their total length is either greater than or equal to L/2. We insert the rest of the items in I2 . For simplicity, suppose that the items are numbered from 1 to |I1 | in I1 and from |I1 | + 1 to |I1 | + |I2 | in I2 . Now, we can distinguish three cases: (1) i∈I1 wi = L/2 and i∈I2 wi = L/2. The pattern can be created by a pair of paths in which: (i) path p1 is formed by standard arcs associated with the items in I1 , so ending in L/2, and a last reflected arc (L/2, L/2, r) (carrying a positive flow); and (ii) path p2 is formed by standard arcs associated with the items in I2 . The two paths collide in L/2. (2) can be reprei∈I1 wi > L/2 and i∈I2 wi ≤ L/2. The pattern sented by a pair of paths colliding in w i∈I2 i : (i) path p1 is formed by standard arcs associated with the first |I1 | − 1
items, a reflected arc (d¯, e¯, r ), where d¯ = i=1,...,|I1 |−1 wi and e¯ = L − d¯ − w|I1 | , and possibly some loss arcs from e¯ to i∈I2 wi . Note that, by definition, we have i∈I2 wi ≥ L − i∈I1 wi , so i∈I2 wi ≥ e¯, i.e., the required loss arcs actually exist; and (ii) path p2 is formed by standard arcs associated with the items in I2 . The pattern is then represented by the pair of paths p1 and p2 , which collide in i∈I2 wi . (3) i∈I1 wi > L/2 and i∈I2 wi > L/2. The pattern can be represented by a pair of paths colliding in L/2: (i) path p1 is formed by standard arcs associated with the first |I1 | − 1 items, a reflected arc (d¯, e¯, r ), and some loss arcs from e¯ to L/2; and (ii) path p2 is formed by standard arcs associated with the first |I2 | − 1 items, a reflected arc (d¯ , e¯ , r ), some loss arcs from e¯ to L/2, and a last reflected arc (L/2, L/2, r) (carrying a negative flow). Once more, the two paths collide in L/2. We further note that, by using similar arguments to those presented in the proof of Theorem 7, it would also be possible to show that any feasible solution of the (integer) reflect model can be recomposed into valid patterns of the SSP. Each pair of colliding paths in reflect can be represented as a path in arcflow, which in turn represents a feasible SSP pattern. We skip a formal proof as this would require additional technical arguments. 3.3. Relations between the LP bounds Whenever there are different modeling approaches for one and the same optimization problem, an important question deals with the strength of the bounds obtained by the respective LP relaxations. It is well known that the tightness of a relaxation is a crucial factor in the size of branch-and-bound trees, because it significantly influences the efforts to solve the ILP. To this end, we will investigate the relationships between the optimal objective values of the three LP relaxations zALP := zALP (E ), zLLP := zLLP (E ), and zRLP := zRLP (E ) and discuss possible dominance relations. A first important observation is given by: Theorem 8. For E = (m, l, L, b) = (4, (8, 6, 4, 2 ), 12, (1, 1, 1, 1 )) we have zRLP < zALP < zLLP . Proof. For the given instance, we have zRLP = 1.5, zALP = 1.6, and zLLP = 5/3, obtained by the patterns (paths): 1 × ( 0, 2, 0, 0 ) , 2 2 × ( 1, 0, 1, 0 ) , × ( 1, 1, 0, 0 ) , 5 1 × ( 1, 0, 1, 0 ) , × ( 1, 0, 0, 2 ) , 3
R : 1 × ( 1, 0, 1, 0 ) , 3 5 2 L: 3
A:
2 × ( 0, 1, 1, 1 ) , 5 1 × ( 0, 2, 0, 0 ) , 3
1 × ( 0, 1, 0, 3 ) , 5 1 × ( 0, 1, 1, 1 ) . 3
The main reason for the occuring differences is given by the fact that some patterns (paths) can be feasible in one graph, while they cannot be composed in another graph. In the example presented in the previous theorem, the patterns a = (1, 0, 0, 2 ) and a = (0, 1, 0, 3 ) cannot appear in the reflect graph because it is impossible to start a subpath with more than one copy of the item length l4 = 2. On the other hand, the pattern a = (0, 2, 0, 0 ) can for instance appear in the reflect framework, whereas it cannot be found in the standard arcflow formulation. Given the information of Theorem 8, in the following result we clarify that there is no dominance relation between the reflect model on the one hand and either the standard arcflow model or the loss arcflow model on the other hand. Theorem 9. For E = (m, l, L, b) = (2, (5, 2 ), 10, (1, 5 )) we have zALP = zLLP < zRLP .
J. Martinovic, M. Delorme and M. Iori et al. / Computers and Operations Research 113 (2020) 104770
Proof. Indeed, for the given instance we have zRLP = 1.5 and zALP = zLLP = 1.4, obtained by the patterns (paths):
R : 1 × ( 0, 5 ) ,
1 × ( 2, 0 ) , 2
A & L : 1 × ( 1, 3 ) ,
2 × ( 0, 5 ) . 5
•
Again, the reason for the differing optimal values is based on different sets of feasible patterns (paths). For instance, the pattern a = (2, 0 ) used in the solution of the reflect model does not appear in the other two graphs due to the construction presented in Algorithm 1, where the for-loop for i = 1 (with 1 ≤ j ≤ b1 implying j = 1) only generates one single arc (in the whole graph) corresponding to this item. Of course, also the reflect graph does only contain this arc once, but here two identical copies of this subpath (0 → 5) can be linked to obtain a feasible pattern (that can then be used 0.5 times). Based on the two instances given in the previous theorems, only the relationship between the optimal values zALP and zLLP is not fully discussed yet. To answer this open question, we present the following concluding result. Note that, for the proof, arcs going beyond L are not shortened in the sense presented in Remark 6. Theorem 10. For any instance E = (m, l, L, b) we have zALP ≤ zLLP . Proof. Consider a given instance E = (m, l, L, b) and a corresponding feasible solution x = (x pq )( p,q )∈E of the LP relaxation of the standard arcflow model. Since the directed graph G of that instance is acyclic, any flow (i.e., any feasible solution) decomposes into feasible paths γ1 , . . . , γs from v0 = 0 to some v ≥ L with, in general, fractional flow values f1 , . . . , fs ≥ 0 (being constant along the arcs of a fixed path). Now note that, by construction, any path γ in arcflow has a corresponding path γ in loss arcflow, where all the arcs except the last one(s) are the same: •
•
If the last arc (p, q) of γ ends after L, then loss arcs have to be added from the penultimate node p of γ to the node L − (q − p) in loss arcflow. These loss arcs exist since loss arcs are created (between all active nodes) from L − 1 to L − max{li : i ∈ I}. The is then completed by an arc (L − (q − p), L ) which exists path γ as any arc ending after L in arcflow for item i ∈ I is transposed to (L − li , L ) in loss arcflow. are exactly If the last arc of γ ends in L, then the paths γ and γ the same. It is straightforward to show that these transformed paths
γ 1 , . . . , γ s (with the same flow values f1 , . . . , fs ≥ 0 as before) also satisfy the constraints of loss arcflow: •
•
•
•
•
As we did not touch arcs starting from v0 = 0, the objective function (and also the objective value) remains the same.
1 , . . . , γ s satisfies the flow conAny of the transformed paths γ servation constraints by construction; so the sum of all these paths will also satisfy it. The flow on the possibly new arcs of type (L − li , L ) in loss arcflow equals the flow on the arcs (d, d + li ) with d + li ≥ L in arcflow. Consequently, also the item limitations are obeyed. All flow variables are still nonnegative.
Altogether, we obtain a feasible solution x of the LP relaxation of loss arcflow with the same objective value. Remark 11. Note that, for all three relaxations, the instances presented in the previous proofs would lead to the same (roundeddown) upper bounds within branch-and-bound based exact approaches. However, for the sake of completeness, note that there are also exemplary instances of moderate size showing that different values can still be obtained if we focus on the integer-valued bounds provided by the relaxations.
9
For E = (m, l, L, b) = (5, (11, 10, 6, 4, 1 ), 12, (1, 1, 1, 2, 2 )) we obtain z = 2 and zALP = 2.83¯ , zLLP = 3, and zRLP = 3 showing that zALP < zLLP or zALP < zRLP can happen.
For E = (m, l, L, b) = (6, (12, 9, 7, 5, 4, 1 ), 14, (1, 1, 3, 2, 1, 1 )) we obtain z = 3 and zALP = 4, zLLP = 4, and zRLP = 3.83¯ showing that zRLP < zALP or zRLP < zLLP can happen.
Moreover, note that zLLP < zALP is not possible due to the previous theorem, whereas for the last remaining case (zLLP < zRLP ) we could not find an easy representative instance so far. Summarizing the main observations of this subsection, we can finally point out that arcflow dominates loss arcflow (in terms of the LP bound), whereas there are no further dominance relations among the three optimal objective values zALP , zLLP , and zRLP . Moreover, even strict inequalities for the related integer-valued upper bounds are possible. 4. Computational experiments In addition to the theoretical presentation that we provided for the new mathematical models of the SSP, their computational performance and average efficiency shall be discussed based on numerical experiments. For that purpose, three general categories of instances are considered: (A) datasets designed for the SSP (or dual bin packing problem) from the literature, (B) randomly generated instances, (C) benchmarking instances originally introduced for the CSP (or for the bin packing problem)5 Based on the fact that the SSP is a relatively young and rather unknown optimization problem, only a few families of instances of the first type can be found in the relevant literature, whereas much more test sets are available for the CSP. As both problems share the same input data (but with a partly different meaning), we decided to also include these CSP benchmarks into our experimental study in order to provide a wider variety of datasets. It is not guaranteed that these typically challenging instances for the CSP will be difficult to solve in the SSP context as well, but at least some of them (as, for instance, the AI/ANI sets introduced in (Delorme et al., 2016, Section 7.1.3)) will definitly preserve their hardness so that the consideration of CSP benchmarks is justified also from this point of view. Altogether, besides purely providing information on the practical behavior of our different models, this section also serves as a first systematic approach to collect and describe benchmarking instances for the SSP that can form an experimental basis for future research articles. 4.1. Datasets In what follows we aim at describing in more detail the three instance categories mentioned above. (A) As regards datasets particularly designed for the SSP, the literature only offers a very few of them. Moreover, given the year (and the computational limits at that time) when they were published, some of these instances (e.g., those presented in Zak (2003, Table 3) with L = 100 and m ≤ 10) nowadays have to be considered as too easy to notice any substantial differences between the various approaches. To the best of our knowledge, the only considerable systematic description of (larger) 5 In fact, almost all of these instances have been proposed in the context of bin packing. However, since any bin packing problem can be equivalently formulated as a CSP, these benchmark sets will always be referred to as CSP instances in the following paragraphs.
10
J. Martinovic, M. Delorme and M. Iori et al. / Computers and Operations Research 113 (2020) 104770
SSP benchmarking instances is contained in Peeters and Degraeve (2006), where also some test sets previously appearing in Labbé et al. (1995) are proposed. Contrary to our definitions, these constructions mostly fix the total number n = i∈I bi (instead of the number m of different item types) as an input parameter of a test class. Theoretically, given the fact that the numbers of variables and constraints (and, hence, the size of the ILP model) rather depend on m and L, this could lead to quite heterogeneous instances within one and the same test class. However, in our experiments we always have bi , i ∈ I, equal to (or at least very close to) one so that n ≈ m holds, and this leads to instance sets possessing comparably complex ILP models for fixed input parameters n and L. Unfortunately, the original instances from Peeters and Degraeve (2006) are not available anymore, so we had to build our own instances based on the constructions described in that paper. More precisely, this first category consists of two test sets with the following input parameters: (A1) These rather small instances were proposed in Labbé et al. (1995) and are described by lmax = 99 and the following integers:
n ∈ {10, 20, 40, 60, 80, 100}, L ∈ {100, 120, 150, 200, 300, 400, 500}, lmin ∈ {1, 20, 50}. For any of these 126 combinations, we randomly generated 10 instances, each with uniformly distributed item lengths li ∈ [lmin , lmax ], i ∈ I. Note that the values of bi do not have to be specified because any of the n items is drawn individually. (A2) A second test set (that contains some larger instances) appearing in Peeters and Degraeve (2006) is given by lmax = 999 and the following integers:
n ∈ {60, 80, 100, 250, 500}, L ∈ {1000, 1200, 1500, 2000, 3000, 4000, 5000}, lmin ∈ {1, 200, 500}. For any of these 105 combinations, we randomly generated 10 instances, each with uniformly distributed item lengths li ∈ [lmin, lmax], i ∈ I.
(B) Randomly generated instances for the SSP have previously been addressed in Martinovic and Scheithauer (2016a). As all these problems were solved in reasonably short computation times, we intend to consider similar, but much larger instances in this second category. More precisely, any class is parametrized by a pair (m, L) in the following way: m ∈ {200, 300, 400, 500}, L ∈ {10000, 20000, 30000, 50000}. In accordance with Martinovic and Scheithauer (2016a), for any of the 16 combinations we randomly generated 10 instances, each with uniformly distributed item lengths li ∈ [L/10, 3/4 · L] and availabilities bi ∈ [1, 100].
(C) The third category of test sets consists of 1895 challenging instances that were originally designed for the CSP (or for the bin packing problem) over the past years. All these instances are well-known benchmarks whose specifications are described in the relevant literature. More precisely, as similarly pinpointed in Delorme and Iori (2019, Section 7.1), this category contains the following three subclasses:
(C1) Classical: A set of 1615 instances presented in various articles over the last decades and having highly different characteristics. A complete description of these datasets can be found in Delorme et al. (2016).
(C2) GI: A total number of 80 instances (divided into four classes of 20 instances each) proposed in Gschwind and Irnich (2016), partly involving extremely large threshold lengths L ≤ 1,500,000.
(C3) AI/ANI: Two sets containing 100 challenging instances each that have been proposed in Delorme et al. (2016). Here, the word “challenging” either means that already finding reasonable candidates for an optimal solution may be difficult (as in the AI case), or that (as regards the ANI instances) proving the optimality of a given solution becomes really hard, since the additive integrality gap (measured with respect to the LP bound) is at least one.
Instances in set C have been gathered together in the BPPLIB (Delorme et al., 2018) and can be found on the Internet at http://or.dei.unibo.it/library/bpplib. Instances from sets A and B are available from the authors upon request.
4.2. Methodology
All our experiments were executed on an AMD A10-5800K CPU with 16 GB RAM, using Gurobi 8.0.1 (restricted to run on a single thread) as MILP solver with a time limit of one hour. The model generation itself was performed in Python 3.6.4 with the Gurobi Python module. For any instance and for any formulation, we collected the following data:
• opt: an indicator stating whether the instance could be tackled successfully (opt = 1) or not (opt = 0) within the given time limit,
• z, zc: optimal value of the ILP and the LP relaxation, respectively (if solvable),
• t, tLP: total time (in seconds) needed to solve the ILP and the LP relaxation, respectively,
• nvar, ncon: number of variables and constraints appearing in the respective ILP,
• nnz: number of nonzero entries in the constraint matrix of the respective ILP,
• LB, UB: best lower (upper) bound obtained for the optimal objective value.
Note that, given the large number of instances attempted, we will only refer to aggregate (in terms of opt) or averaged values for each considered class of instances instead of providing the individual results for any single example. In terms of handling unsolved instances, there are two common philosophies appearing in the relevant literature:
• It is possible to completely exclude the numerical results (especially the computation times) of unsolved instances from averaging. In this case, leaving an instance unsolved would not be penalized in the computation times, and the tables would only report on the results obtained for a reduced set of instances. However, observe that these reduced sets of instances are likely to be different for the various formulations we are testing.
• Another possible way is to also include the numerical results of unsolved instances. In particular, for these instances we can use the time limit (one hour) as (a lower bound of) the computation time t. In this scenario, the tables always contain the same set of instances for any of the considered formulations.
In this article, we decided to stick to the second strategy, meaning that, whenever an instance could not be solved to optimality, the time limit (one hour) is used as the computation time t. In addition to the comparison of the three (pure) ILP formulations, we also measure the effects of passing an initial feasible solution xheu to the Gurobi solver.
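To make this setup concrete, the following minimal Python sketch illustrates how a random B-type instance could be drawn according to the scheme described in (B) and how the reported quantities (t, nvar, ncon, nnz, LB, UB) can be read from a Gurobi model after optimization. It is only an illustration under the stated assumptions; in particular, build_model is a hypothetical placeholder for any of the three flow-based formulations and is not part of the original study.

```python
import random
import gurobipy as gp
from gurobipy import GRB

def generate_b_instance(m, L, seed=0):
    """Draw a B-type instance: item lengths uniformly in [L/10, 3L/4], availabilities in [1, 100]."""
    rng = random.Random(seed)
    lengths = [rng.randint(L // 10, 3 * L // 4) for _ in range(m)]
    avail = [rng.randint(1, 100) for _ in range(m)]
    return lengths, avail

def solve_and_collect(model: gp.Model):
    """Solve an already built ILP with the settings described above and collect the statistics."""
    model.Params.TimeLimit = 3600      # one hour
    model.Params.Threads = 1           # single thread
    model.optimize()
    return {
        "opt": 1 if model.Status == GRB.OPTIMAL else 0,
        "t": model.Runtime,
        "nvar": model.NumVars,
        "ncon": model.NumConstrs,
        "nnz": model.NumNZs,
        # the SSP is a maximization problem, so the incumbent value is a lower bound
        "LB": model.ObjVal if model.SolCount > 0 else float("-inf"),
        "UB": model.ObjBound,
    }

# Hypothetical usage (build_model would return one of the three ILP formulations):
# lengths, avail = generate_b_instance(m=200, L=10000)
# stats = solve_and_collect(build_model(lengths, avail, L=10000))
```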
Table 1
Computational results for the 1260 A1-instances: opt presents an aggregated number whereas all other data are average values (tLP, zc, nvar, ncon and nnz do not depend on the heuristic and are therefore reported once per formulation).

Formulation               opt     t     tLP   zc          nvar     ncon    nnz
Arcflow   (with heu)      1260    0.2   0.1   15.714466   4135.3   234.8   11431.8
Arcflow   (without heu)   1260    0.5
Larcflow  (with heu)      1260    0.3   0.1   15.718284   3306.7   238.8   9774.7
Larcflow  (without heu)   1260    0.5
Reflect   (with heu)      1260    0.1   0.0   15.720586   1493.5   193.8   5601.7
Reflect   (without heu)   1260    0.2
Table 2
Computational results for the 1050 A2-instances: opt presents an aggregated number whereas all other data are average values (tLP, zc, nvar, ncon and nnz do not depend on the heuristic and are therefore reported once per formulation).

Formulation               opt     t       tLP    zc          nvar       ncon     nnz
Arcflow   (with heu)      976     435.1   20.3   60.756419   197279.5   2206.3   543984.2
Arcflow   (without heu)   946     626.3
Larcflow  (with heu)      982     415.6   19.4   60.758035   150606.8   2229.3   450638.8
Larcflow  (without heu)   967     561.6
Reflect   (with heu)      1011    250.9   6.0    60.757427   61976.4    1820.9   235929.0
Reflect   (without heu)   983     391.8
Table 3
Averaged computation times and total numbers (in parentheses) of solved A2-instances for different values of lmin.

         Arcflow                       Larcflow                      Reflect
lmin     With heu      Without heu     With heu      Without heu     With heu      Without heu
1        458.3 (322)   920.8 (295)     448.2 (324)   800.4 (310)     259.3 (338)   527.1 (316)
200      499.0 (318)   596.3 (316)     490.5 (324)   593.9 (319)     329.2 (331)   430.9 (328)
500      348.0 (336)   361.7 (335)     308.1 (334)   290.6 (338)     164.2 (342)   217.4 (339)
This feasible solution was always obtained by a heuristic proposed in Peeters and Degraeve (2006), whose iterations can be described as follows: choose the largest item type (say k) that is still available (i.e., with residual bk > 0) and allocate akj = min{⌊(L − 1)/lk⌋, bk} items of type k to the current bin j.
(I) If akj < bk holds, then we choose the smallest possible item type i ∈ I with bi > 0 such that increasing aij by one unit leads to a feasible pattern. (This step can always be performed by choosing i = k.)
(II) If akj = bk holds, then we continue with item type k + 1 and try to assign as many items of that type to bin j as possible without reaching the threshold L. Depending on the situation, we then apply step (I) or (II) with respect to item type k + 1.
This general procedure is repeated (while successively updating the values bi, i ∈ I) until all items have been assigned or no feasible pattern can be found anymore.
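For illustration purposes only, the iterations just described can be summarized by the following Python sketch; this is our own rendering of the procedure from Peeters and Degraeve (2006) (with item types assumed to be sorted by non-increasing length), not code taken from that reference. Patterns obtained in this way can then be handed to Gurobi as the warm start xheu, e.g., by setting the Start attribute of the corresponding variables.

```python
def skiving_heuristic(lengths, avail, L):
    """Greedy heuristic sketch: fill one object at a time strictly below L,
    then close it with the smallest item that pushes its length to at least L."""
    # item types are assumed to be indexed by non-increasing length l_1 >= l_2 >= ...
    m = len(lengths)
    b = list(avail)                                  # residual availabilities
    patterns = []
    while sum(b) > 0:
        pattern, total, closed = [0] * m, 0, False
        for k in range(m):                           # largest remaining type first
            if b[k] == 0:
                continue
            take = min((L - 1 - total) // lengths[k], b[k])   # stay strictly below L
            pattern[k] += take
            b[k] -= take
            total += take * lengths[k]
            if b[k] > 0:
                # step (I): type k is not exhausted, so close the pattern with the
                # smallest item that yields a feasible object (i = k always works)
                i = min((j for j in range(m) if b[j] > 0 and total + lengths[j] >= L),
                        key=lambda j: lengths[j])
                pattern[i] += 1
                b[i] -= 1
                patterns.append(pattern)
                closed = True
                break
            # step (II): type k is exhausted below L, continue with type k + 1
        if not closed:
            break   # the remaining items cannot form another feasible object
    return patterns  # one multiplicity vector per produced object
```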
Based on the collected data, we will evaluate and compare the computational behavior of six solution strategies (three ILP formulations, each with or without the additional application of the heuristic) in this article. Note that, in Martinovic and Scheithauer (2016a), the original arcflow model turned out to provide the best computational performance for practically meaningful problem sizes (among the tested formulations), although it possesses slightly more variables and constraints than the onestick formulation, see Martinovic and Scheithauer (2016a, Table 6). Consequently, the conventional arcflow formulation (introduced in Section 2) can be considered as the state-of-the-art solution strategy for skiving stock problems. Hence, our comparison also involves the most promising existing ILP formulation, so that a reasonable investigation of the various solution approaches is conducted.
4.3. Results and discussion
In this subsection, we present some of the results obtained by the computations described above. Note that, for the sake of simplicity, the arcflow model of Section 2, the loss arcflow model of Section 3.1, and the reflect model of Section 3.2 will mostly be referred to by their respective abbreviations “arcflow”, “larcflow”, and “reflect”. Moreover, in the tables, we always use the short form “heu” instead of heuristic.
At first, we consider the instances of category (A) and state that the subset (A1) could be solved to optimality by any of the six variants within very short times, see Table 1. Hence, these instances are too small to reveal significant time differences among the considered formulations. However, looking especially at the numbers of variables and nonzeros, we notice a considerably less complex structure of the ILP model for reflect (having roughly a third of the variables and half of the nonzero matrix elements compared to arcflow) and larcflow. Moreover, slight differences between the LP relaxation values could be observed, underlining the (theoretical) fact that the considered formulations are not equivalent if continuous variables are considered.
To obtain a more precise picture of the six solution variants, we move on to the harder instances contained in category (A2). At first, we summarize the averaged (or aggregated, in terms of opt) data in the same way as for the subset (A1), see Table 2. Here, we can see that both alternative formulations can solve more instances (out of a total number of 1050) than arcflow while also being faster on average (despite having a slightly worse LP bound).
Table 4
Averaged computation times and total numbers (in parentheses) of solved A2-instances for different values of L.

        Arcflow                        Larcflow                       Reflect
L       With heu       Without heu     With heu       Without heu     With heu       Without heu
1000    1.4 (150)      1.5 (150)       0.7 (150)      0.7 (150)       0.3 (150)      0.3 (150)
1200    3.8 (150)      3.8 (150)       6.6 (150)      6.5 (150)       8.0 (150)      5.4 (150)
1500    55.4 (150)     54.9 (150)      54.5 (150)     77.0 (150)      30.3 (150)     29.0 (150)
2000    303.7 (145)    303.5 (145)     302.3 (146)    259.6 (148)     107.5 (149)    148.8 (148)
3000    604.8 (138)    735.6 (136)     585.4 (139)    699.8 (135)     296.5 (143)    434.2 (141)
4000    903.6 (127)    1265.6 (121)    840.2 (130)    1140.4 (131)    584.7 (134)    904.1 (126)
5000    1173.1 (116)   2019.0 (94)     1119.6 (117)   1747.4 (103)    729.1 (135)    1220.6 (118)
Table 5
Averaged computation times and total numbers (in parentheses) of solved A2-instances for different values of n.

       Arcflow                        Larcflow                       Reflect
n      With heu       Without heu     With heu       Without heu     With heu       Without heu
60     31.7 (210)     71.0 (210)      31.5 (210)     78.4 (210)      18.0 (210)     25.6 (210)
80     88.2 (209)     212.3 (209)     62.9 (209)     145.2 (209)     24.9 (210)     67.5 (210)
100    115.1 (209)    407.3 (199)     131.4 (209)    279.7 (207)     38.0 (210)     102.1 (209)
250    706.5 (189)    963.5 (183)     698.6 (189)    909.8 (185)     323.5 (202)    589.5 (192)
500    1234.1 (159)   1477.2 (145)    1153.7 (165)   1395.1 (156)    850.2 (179)    1174.1 (162)
Again, this behavior is mainly due to significant savings related to the ILP model's complexity. Moreover, a very interesting observation is given by the usefulness of the heuristic for any of the three different approaches: both the number of instances solved to optimality and the average time needed to solve them improve if a starting point is passed to the ILP solver.
In Table 2 we are averaging over a wide variety of very heterogeneous instances. To highlight possible computational insights, we decided to perform more detailed analyses. To this end, Tables 3–5 report on the values opt and t for one input parameter being fixed. Note that, in these (and some of the upcoming) tables, we use boldface to highlight the best formulation. By best, we mean that this specific approach could solve the largest number of instances to optimality, where ties are broken by choosing the smallest average computation time. Obviously, in almost all of the experiments and for almost any modeling approach, the application of the heuristic leads to better results with respect to the considered data. We also observe that the difficulty of the problem increases as: (i) the size of the smallest item decreases, (ii) the threshold length increases, and (iii) the number of items increases. At this point, we refer the interested reader to the appendix (Tables A.17–A.19) for an even more detailed presentation of the results for the A2-instances solved by any of the six formulations.
In any of the experiments related to the subset (A2), the heuristic could be performed in much less than one second. Consequently, in addition to the advantageous effects observed earlier, the heuristic should always be applied to obtain a feasible starting point for the ILP solver. By way of example, Fig. 10 compares the solution times for the arcflow model with and without applying the heuristic. It can clearly be seen that there are many instances possessing roughly the same solution time in both cases, but there are also many instances whose solution time decreases drastically (partly from the time limit, indicating that no solution was found previously) if the heuristic is applied prior to the optimization. Moreover, note that the opposite effect (i.e., a significant increase of the computation time) is also possible, but happens very rarely.
As a summary, we would like to further underline the positive effects of the heuristic by the data collected in Tables 6–7. For any combination (F1, F2) of the six solution variants, we count the number of A2-instances:
• that are solved faster by formulation F1 than by F2, see Table 6 (here, the word “faster” means that a time difference of at least 0.5 seconds could be observed);
• that are solved by formulation F1, but not by F2, see Table 7.
In particular, if considering a fixed model type, the entry at position (F1, F2) is always better than that at (F2, F1) if F1 applies the heuristic and F2 does not. Moreover, we can again state that the reflect formulation is superior to the other two approaches. This observation can also be verified by the data contained in Table 8. There, for any of the six formulations, we counted the number of A2-instances where the respective approach possessed the fastest solution time. Again, being faster than another formulation is connected with a time difference of at least 0.5 seconds. Altogether, given the reasons presented (based on extensive computations) so far, for the remaining two categories of instances, (B) and (C), we will later only consider the three different models with the application of the heuristic.
One last thing we would like to report for the A2-instances deals with the strength of the respective LP relaxations. As can clearly be seen in Table 9, apart from the theoretically proved statement that arcflow dominates the loss arcflow model, any relation is possible among the other pairs of optimal values. This also emphasizes the fact that differences between the LP relaxation values do not only appear for the very simple instances particularly constructed for the proofs within the previous section, but also for larger instances of practically meaningful complexity.
Remark 12. Contrary to that, referring to Remark 11, we have to state that no differences between the rounded-down LP values could be observed for the considered set of instances. However, as it is known that instances not having the integer round-down property (IRDP) usually require specific input data and constructions, this observation is not surprising given the fact that we are dealing with randomly generated instances.
Now, let us consider the instances from subset (B), see Table 10. Note that, although they are also randomly generated, they are completely different from the previous set (A) because the values bi are usually much larger than one, so that typical SSP instances are built.
Fig. 10. Comparison of the ILP solution times for the A2-instances and the arcflow model: for each instance, a point is drawn at position (x, y), where the x- and y-values contain the times with and without heuristic, respectively. The red line displays the function f(x) = x. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Table 6
Comparison between the solution times for the A2-instances: an entry x at position (F1, F2) gives the number of instances where formulation F1 was faster than formulation F2 (i.e., where a time difference of at least 0.5 seconds can be observed).

                         Arcflow                  Larcflow                 Reflect
F1                       With heu   Without heu   With heu   Without heu   With heu   Without heu
Arcflow (with heu)       –          300           274        318           62         229
Arcflow (without heu)    154        –             159        201           48         74
Larcflow (with heu)      411        559           –          363           69         255
Larcflow (without heu)   380        510           208        –             70         109
Reflect (with heu)       738        766           660        660           –          410
Reflect (without heu)    549        713           458        595           157        –
Table 7
Comparison between the A2-instances solved to optimality: an entry x at position (F1, F2) states the number of instances which were solved by formulation F1 but not by F2.

                         Arcflow                  Larcflow                 Reflect
F1                       With heu   Without heu   With heu   Without heu   With heu   Without heu
Arcflow (with heu)       –          34            18         38            6          22
Arcflow (without heu)    4          –             17         20            7          9
Larcflow (with heu)      24         53            –          45            7          27
Larcflow (without heu)   29         41            30         –             13         21
Reflect (with heu)       41         72            36         57            –          41
Reflect (without heu)    29         46            28         37            13         –
Given the extensive discussion for the set (A) and the fact that randomly generated instances have already been partly addressed in the relevant literature (Martinovic and Scheithauer, 2016a), we only consider the average solution times and the numbers of solved instances per feasible combination of input data.
Contrary to the A-instances, the model with loss arcs is usually not better than the standard arcflow model: in many cases, the two possess a comparable performance, while in a few scenarios arcflow even performs strictly better.
Table 8
Number of A2-instances where the respective model was the fastest of the six solution approaches.

Formulation              #inst
Arcflow (with heu)       16
Arcflow (without heu)    3
Larcflow (with heu)      21
Larcflow (without heu)   28
Reflect (with heu)       378
Reflect (without heu)    117
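The entries of Tables 6–8 above are plain pairwise counts over the per-instance solution times collected for the six solution variants. The following fragment only illustrates one plausible way of deriving such counts (it is not code from the original study); times is assumed to be a dictionary mapping each variant to its list of per-instance times, with unsolved instances entering at the time limit.

```python
EPS = 0.5   # minimum time difference (in seconds) to call one variant "faster"

def faster_counts(times):
    """Entry (F1, F2) of Table 6: number of instances where F1 is faster than F2 by at least EPS."""
    return {(f1, f2): sum(1 for a, b in zip(t1, t2) if a <= b - EPS)
            for f1, t1 in times.items() for f2, t2 in times.items() if f1 != f2}

def fastest_counts(times):
    """Table 8 (one plausible reading): instances where a variant beats every other one by at least EPS."""
    variants, n = list(times), len(next(iter(times.values())))
    return {f: sum(1 for i in range(n)
                   if all(times[f][i] <= times[g][i] - EPS for g in variants if g != f))
            for f in variants}
```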
Table 9
Comparison of the LP relaxation values of the three formulations: an entry x at position (F1, F2) gives the number of A2-instances with zc(F1) ≤ zc(F2) (measured with a tolerance of ε = 10⁻⁸).

Model      Arcflow   Larcflow   Reflect
Arcflow    1050      1050       1026
Larcflow   951       1050       1004
Reflect    963       1047       1050
Table 10
Results for the 160 B-instances (average solution times for the ILP and numbers of solved instances), always with application of the heuristic.

m      L         Arcflow         Larcflow        Reflect
200    10,000    78.4 (10)       70.1 (10)       10.5 (10)
200    20,000    328.1 (10)      346.9 (10)      42.2 (10)
200    30,000    784.4 (10)      851.9 (10)      54.8 (10)
200    50,000    2812.5 (8)      2371.0 (9)      112.1 (10)
300    10,000    156.8 (10)      148.7 (10)      27.7 (10)
300    20,000    477.7 (10)      630.0 (10)      117.9 (10)
300    30,000    1538.1 (10)     1696.8 (10)     137.0 (10)
300    50,000    3598.4 (1)      3581.0 (1)      319.0 (10)
400    10,000    563.6 (9)       608.4 (9)       426.1 (10)
400    20,000    775.6 (10)      1008.3 (10)     540.8 (9)
400    30,000    2289.9 (9)      2464.1 (10)     910.9 (9)
400    50,000    3600.0 (0)      3600.0 (0)      1256.0 (8)
500    10,000    1752.3 (8)      2989.0 (2)      1789.0 (6)
500    20,000    2854.9 (4)      2829.5 (5)      2921.9 (3)
500    30,000    3126.5 (4)      3600.0 (0)      3265.5 (1)
500    50,000    3600.0 (0)      3600.0 (0)      3600.0 (0)
avg/total        1771.1 (113)    1899.7 (106)    970.7 (126)
A first explanation for this observation is given by the high degree of randomization within the Gurobi solution procedure.6 Hence, even if the same solver is called in both situations, different subroutines or heuristics can appear in different orders, and this may influence the total solution effort. On the other hand, even though the loss arcflow formulation has fewer variables (and roughly the same number of constraints), this does not, from our point of view, counterbalance the fact that its feasible integer solutions have a more complex structure that may be harder for a general ILP solver to deal with than the standard flow propagation of arcflow. Moreover, note that the B-instances are typically more complex and harder (on average) than those from subset (A), and this especially leads to larger branching trees with many more nodes to visit. This is also due to the fact that the starting point provided by the heuristic leads to a lower bound whose absolute difference to the true optimal value7 is much larger than in the previous settings.
6 A more detailed numerical study on the influence of the preprocessing applied by Gurobi can be found in Remark 13 at the end of this section.
7 Note that the optimal values for the B-instances are much higher than those from the A-part since bi ∈ [1, 100], i ∈ I, is allowed. In a very rough calculation, taking expected values for the uniformly distributed input data (i.e., li ≈ 0.425 · L and bi ≈ 50, i ∈ I), we end up with (on average) three items per pattern, leading to z ≈ 50/3 · m for the B instances.
Altogether, under these conditions, finding better integer solutions becomes more and more important, so that the arcflow model can show advantages compared to the loss arcflow formulation. Similar arguments can be applied to further explain the very few cases where the arcflow model beats the reflect formulation: also here, a good integer solution found quickly can lead to significant improvements in dealing with the branch-and-bound tree. However, it should be stated that also for this kind of instances the reflect formulation is (on average) the best one, with significantly lower computation times and a larger total number of instances solved to optimality.
For the sake of completeness, as a last experiment we also consider benchmark instances that have been presented for the related bin packing problem (or for the CSP) over the last couple of decades. Note that an instance which is hard in the bin packing case is not guaranteed to preserve its difficulty if interpreted as a dual bin packing instance, and vice versa. However, at least for some of the hard instances from the C-part, it is clear that they will remain hard in the SSP scenario. By way of example, let us particularly mention the ANI-instances, whose integrality gaps will also be at least one if interpreted as SSP instances. The precise results can be found in Table 11. Note that, for some of the very large instances proposed by Gschwind and Irnich (2016), we observed memory issues (i.e., the branching trees became too large), so that no data could be obtained. These situations are indicated by “–” in the table.
Also here, the impression we got from the previous instance sets is again confirmed. For any subcategory (C1), (C2), and (C3), the reflect formulation is able to cope with the most instances and possesses the lowest solution time. Again, the behavior between arcflow and loss arcflow changes depending on the particular problem set: for instances where the main difficulty is the model size (e.g., Scholl 3, GI AA125 and GI BA125), loss arcflow seems better, while for instances where the main difficulty is to find a good integer solution (e.g., most of the Wäscher and AI 201 and 402) or to improve the upper bound (e.g., a few of the Wäscher and ANI 201 and 402), loss arcflow seems worse.
As a summary of all instance types, we report on the average relations between the numbers of variables, constraints and nonzeros for the three models in Table 12. Both alternative formulations always possess much fewer variables and nonzeros compared to the standard arcflow model.8 As regards the number of constraints, they are always roughly similar, but we also observe that tiny differences (in both directions) compared to the standard arcflow model are possible.
Remark 13. In order to obtain more insight into how the three formulations actually benefit from the intrinsic reduction procedures applied by Gurobi, we exemplarily consider the preprocessing step in more detail. To this end, we focus on the numbers of variables nvar, constraints ncon and nonzeros nnz (of the ILP models passed to the Gurobi solver) with and without preprocessing. More precisely, for every single instance (that did not lead to memory errors while building the ILP model), we collected the respective data before and after applying the (unknown) internal Gurobi strategies.
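One way to collect such before/after counts with the Gurobi Python interface is sketched below; note that Model.presolve() applies Gurobi's (default) presolve to a copy of the model, which approximates, but need not coincide exactly with, the internal reductions performed during optimize(). The sketch is ours and was not necessarily the mechanism used to produce Tables 13 and 14.

```python
import gurobipy as gp

def presolve_statistics(model: gp.Model):
    """Report the percentage of variables, constraints and nonzeros remaining after Gurobi's presolve."""
    before = (model.NumVars, model.NumConstrs, model.NumNZs)
    presolved = model.presolve()        # returns a presolved copy of the model
    after = (presolved.NumVars, presolved.NumConstrs, presolved.NumNZs)
    return {name: 100.0 * a / b if b > 0 else 100.0
            for name, b, a in zip(("nvar", "ncon", "nnz"), before, after)}
```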
In Tables 13 and 14, we then report on the percentage of variables (constraints, nonzeros) that remained in the ILP model after these reductions have been performed. As we can see in Table 13, in most cases, Gurobi is able to significantly reduce the number of constraints, while the decrease in terms of the numbers of variables and nonzeros is usually less pronounced or even absent, e.g., for the arcflow formulation applied to the B instances.
8 We would like to stress again that these B-instances were the only pure SSP instances, whereas the other ones possess very small values of bi , so that they should rather be termed as DBPP instances.
Table 11
Results for 1855 (out of 1895) C-instances (average solution times for the ILP and numbers of solved instances).

Set               # inst   Arcflow          Larcflow         Reflect
Wäscher           17       83.0 (17)        117.6 (17)       257.3 (17)
Hard28            28       6.3 (28)         7.4 (28)         3.0 (28)
Falkenauer U      80       0.2 (80)         0.2 (80)         0.1 (80)
Falkenauer T      80       4.9 (80)         1.1 (80)         0.9 (80)
Schwerin 1        100      0.5 (100)        0.4 (100)        0.1 (100)
Schwerin 2        100      2.6 (100)        1.3 (100)        0.4 (100)
Scholl 1          720      0.1 (720)        0.1 (720)        0.0 (720)
Scholl 2          480      40.2 (480)       47.4 (480)       20.8 (480)
Scholl 3          10       3600.0 (0)       2306.6 (9)       65.1 (10)
avg/total (1)     1615     35.7 (1605)      30.0 (1614)      9.4 (1615)
GI AA125          20       3376.4 (3)       918.4 (20)       4.1 (20)
GI AB125          20       – (–)            – (–)            – (–)
GI BA125          20       2964.6 (4)       1190.0 (19)      4.3 (20)
GI BB125          20       – (–)            – (–)            – (–)
avg/total (2)     40       3170.5 (7)       1054.2 (39)      4.2 (40)
AI 201            50       99.2 (50)        117.5 (50)       26.3 (50)
AI 402            50       3219.5 (15)      3298.2 (10)      2016.9 (31)
ANI 201           50       1036.7 (43)      2843.5 (18)      238.5 (50)
ANI 402           50       3600.0 (0)       3600.0 (0)       3600.0 (0)
avg/total (3)     200      1988.9 (108)     2464.8 (78)      1470.4 (131)
avg/total (1–3)   1855     313.9 (1720)     314.6 (1731)     166.9 (1786)
Table 12
Relative numbers of variables, constraints and nonzeros averaged over all instances of the respective category (where the value of the arcflow model is normalized to 1.00).

              A1                     A2                     B                      C
Formulation   nvar   ncon   nnz      nvar   ncon   nnz      nvar   ncon   nnz      nvar   ncon   nnz
Arcflow       1.00   1.00   1.00     1.00   1.00   1.00     1.00   1.00   1.00     1.00   1.00   1.00
Larcflow      0.73   1.06   0.78     0.60   1.05   0.66     0.52   1.00   0.62     0.65   1.02   0.71
Reflect       0.34   0.96   0.51     0.23   1.00   0.34     0.11   1.02   0.20     0.27   0.95   0.42
Table 13
Relative average numbers of variables, constraints and nonzeros (where the value without preprocessing is set to 100) for the sets A1 (left), A2 (middle), and B (right).

              A1                     A2                     B
Formulation   nvar   ncon   nnz      nvar   ncon   nnz      nvar    ncon   nnz
Arcflow       89.5   75.2   89.3     99.5   89.8   99.5     100.0   98.2   100.0
Larcflow      86.8   71.6   87.9     94.3   77.5   95.1     99.6    85.2   99.7
Reflect       85.3   68.1   81.5     88.9   69.6   85.6     97.6    60.5   95.3
Table 14
Relative average numbers of variables, constraints and nonzeros (where the value without preprocessing is set to 100) for the set C.

               nvar                            ncon                            nnz
Set            Arcflow   Larcflow   Reflect    Arcflow   Larcflow   Reflect    Arcflow   Larcflow   Reflect
Wäscher        99.5      99.4       99.1       94.6      94.0       95.0       99.5      99.4       99.0
Hard28         99.9      99.9       99.9       97.4      96.8       91.6       99.9      99.9       99.7
Falkenauer U   99.6      98.8       99.1       90.6      82.6       74.4       99.6      99.0       97.1
Falkenauer T   99.7      96.9       92.5       84.2      51.0       37.7       99.7      97.8       80.9
Schwerin 1     99.6      98.1       98.6       91.9      72.1       77.4       99.6      98.6       93.8
Schwerin 2     99.6      98.3       98.8       91.8      72.6       78.1       99.6      98.7       94.0
Scholl 1       99.2      96.6       94.3       88.5      83.0       63.4       99.1      97.3       89.2
Scholl 2       99.8      99.0       98.9       93.7      82.8       88.0       99.8      99.3       97.1
Scholl 3       100.0     98.6       86.5       98.4      58.1       20.1       100.0     99.1       72.5
GI AA125       98.9      86.7       59.7       88.9      48.3       14.5       98.8      90.2       51.9
GI BA125       97.8      76.9       58.8       82.9      34.9       14.1       97.4      82.5       51.2
AI 201         100.0     100.0      100.0      99.2      99.1       96.0       100.0     100.0      99.9
AI 402         100.0     100.0      100.0      99.4      99.2       97.5       100.0     100.0      100.0
ANI 201        100.0     100.0      100.0      99.1      99.0       96.0       100.0     100.0      99.9
ANI 402        100.0     100.0      100.0      99.4      99.2       97.4       100.0     100.0      100.0
Table 15
Some further results for very hard CSP benchmark instances, also showing the limits of the reflect formulation.

Set        #inst   Arcflow   Larcflow    Reflect
GI AA250   20      – (–)     – (–)       103.9 (20)
GI AA500   20      – (–)     – (–)       3600.0 (0)
GI BA250   20      – (–)     – (–)       112.2 (20)
GI BA500   20      – (–)     – (–)       3536.7 (2)
AI 600     50      – (–)     3600.0 (0)  3588.2 (1)
However, in any category, the reflect model possesses the largest percentage of reduction, meaning that Gurobi's preprocessing tends to additionally support the least complex formulation (i.e., the reflect model). Considering the C instances from the BPP literature, a similar behavior can be noticed: in the vast majority of the cases, the reductions caused by Gurobi helped the reflect formulation most. Moreover, we would like to highlight the following observations:
• Especially for the sets Scholl, GI AA125, and GI BA125, the reflect formulation performed surprisingly well according to Table 11. By way of example, we can see that Gurobi considerably reduces the respective reflect instances, down to a level of only roughly 60% of the variables and 14% of the constraints for AA125 and BA125.
• Contrary to that, the reductions related to the sets AI 201 and AI 402 are almost negligible for all three formulations. However, the reflect formulation still performs much better than the other ones according to Table 11.
As we have seen in both tables, the preprocessing either has similar effects on all three formulations, or the reflect model tends to benefit most from the procedures performed by the solver. Hence, besides already being the least complex formulation (in terms of variables), the reflect model receives an additional boost from these intrinsic reductions, meaning that its structure also seems to be particularly favorable for the preprocessing of today's ILP solvers.
Altogether, note that in each of the three categories we observed instances that cannot be solved (in one hour and with the given computational environment) even by the most promising of the considered formulations. Consequently, our experiments do not only cover a significant set of instances, but also clearly point out the limits of what can currently be solved within a reasonable amount of time. We highlight this fact in Table 15, where we show the behavior of the reflect formulation for the very hard instances of the categories (C2) and (C3). Finally, we refer the interested reader to Table A.16 in our appendix, where the quality of the heuristic is also evaluated for the sets (B) and (C).
5. Conclusions
In this paper, we investigated an existing and two new arcflow formulations for the skiving stock problem. The two new approaches are an arcflow model with reversed loss arcs (called larcflow for short) and a reflect arcflow model (briefly referred to as reflect). The main idea behind larcflow is to avoid duplicated arcs (formed by, at least implicitly contained, superfluous nodes that are larger than the capacity L), whereas the idea behind reflect is to consider only the first half of the vertex set and to model a pattern as the sum of two subpaths that are connected by a reflected arc. We tested the models on a large number of instances, including benchmarks specifically designed for the SSP (datasets A1 and A2),
randomly generated instances (dataset B), and benchmarks originally introduced for the cutting stock problem (dataset C). In this way, we obtained a number of interesting numerical results. First of all, it is relevant to provide all models with a starting heuristic solution: this indeed helps in decreasing the average computing time and increasing the number of optimal solutions found. Considering the difficult A2 set, reflect obtained the best results on average, solving to proven optimality all instances with up to 100 items, as well as the majority of those with 250 and 500 items. It also proved to be the fastest model in most of the cases. Similar results were also observed for the sets B and C. Among the B instances, reflect could optimally solve all instances with up to 300 item types, whereas arcflow and larcflow failed for a few instances having 200 item types. Among the C instances, we noticed a contrasting behaviour between arcflow, which obtained good results on the very challenging AI and ANI instances, and larcflow, which performed better on the other classical cutting stock instances. Reflect was again the best-performing model and, notably, could quickly solve all Scholl 3 instances, which are notoriously difficult. The reason for the better behaviour of reflect can mostly be attributed to the reduced number of variables, which in some cases is just 10–20% of the number of variables required by arcflow. By contrast, we could not notice a relevant difference in terms of upper bound values. It is worth noting that, for the three formulations, the quality of the continuous relaxation has also been investigated from a theoretical point of view. On the basis of the extensive numerical tests with a wide variety of differently characterized instances, we can thus conclude that the new reflect model possesses significantly fewer variables and much better solution times (thus also resulting in a larger number of instances solved to optimality) compared to the original formulation. Consequently, the reflect arcflow model may be seen as a powerful tool for solving (large) instances of the SSP in reasonably short time. As future research, we envisage the application of these enhanced arcflow models to other relevant cutting and packing problems.
Acknowledgements This work has been partially supported by MIUR-Italy (Grant PRIN 2015, Nonlinear and Combinatorial Aspects of Complex Networks).
Appendix A. Additional Computational Results
Table A.16
Relative gap r = (z − zheu)/z between heuristic value and optimal value for all instance categories. If optimality was not proved, z ≈ UB with the upper bound UB (from the reflect model) found by Gurobi was taken instead.

Interval            #inst (A1)   #inst (A2)   #inst (B)   #inst (C)
r = 0               905          398          0           628
r ∈ (0, 0.01]       0            18           0           45
r ∈ (0.01, 0.02]    0            41           0           112
r ∈ (0.02, 0.05]    99           180          2           453
r ∈ (0.05, 0.10]    173          289          48          428
r ∈ (0.10, 0.20]    62           124          110         182
r > 0.20            21           0            0           7
total               1260         1050         160         1855
Table A.17 Arcflow results for A2-instances (average solution times for the ILP and number of solved instances). With heuristic L
lmin /n 60
1000 1 200 500 1200 1 200 500 1500 1 200 500 2000 1 200 500 3000 1 200 500 4000 1 200 500 5000 1 200 500
Without heuristic
80
0.5 (10) 0.6 0.1 (10) 0.1 0.0 (10) 0.0 0.9 (10) 1.1 0.2 (10) 0.4 0.0 (10) 0.1 5.7 (10) 7.7 (10) 1.1 0.6 0.1 (10) 0.2 17.9 (10) 41.3 5.7 (10) 9.9 0.5 (10) 0.9 39.2 (10) 55.5 11.9 (10) 84.3 7.4 (10) 19.5 46.1 (10) 22.1 81.1 (10) 175.0 66.7 (10) 144.1 54.9 (10) 677.6 163.0 (10) 259.3 162.3 (10) 351.8
100 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (9) (10) (10)
1.0 0.2 0.0 1.9 0.5 0.1 15.7 2.1 0.3 72.9 23.7 2.3 206.9 240.2 30.8 166.5 281.7 228.1 60.8 509.9 571.7
250 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (9) (10)
3.6 0.8 0.2 10.4 3.3 0.3 203.3 7.9 1.4 510.4 181.4 17.1 1471.1 821.6 106.1 1382.0 2053.1 1413.6 1841.4 1559.2 3247.5
500 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (9) (10) (10) (7) (8) (9) (5) (7) (4)
10.5 2.1 0.8 26.7 9.9 1.0 534.1 46.8 3.9 2975.2 670.9 24.7 2580.7 3056.0 341.3 1836.2 3600.0 2058.2 1158.7 3600.0 3377.6
60 (10) (10) (10) (10) (10) (10) (10) (10) (10) (5) (10) (10) (5) (4) (10) (5) (0) (8) (7) (0) (5)
0.5 0.1 0.0 0.8 0.2 0.0 5.0 0.6 0.1 24.2 5.9 0.7 112.2 31.0 7.3 231.2 130.7 72.0 349.6 306.9 211.6
80 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10)
0.7 0.2 0.0 1.1 0.4 0.1 5.9 1.0 0.2 47.1 9.8 2.4 177.1 83.7 19.3 478.0 233.0 144.8 1883.9 827.1 541.6
100 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (9) (10) (10)
1.1 0.2 0.1 1.9 0.6 0.1 15.6 2.1 0.3 58.6 23.6 3.5 342.1 205.9 30.7 1453.2 342.7 201.2 3361.0 1695.5 814.2
250 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (9) (10) (10) (3) (7) (10)
4.0 1.1 0.3 10.2 3.3 0.5 199.2 7.8 1.3 463.1 181.7 25.2 2225.1 828.5 105.8 3185.5 1863.9 1415.7 3388.0 3107.3 3216.6
500 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (4) (10) (9) (2) (5) (3)
10.4 2.9 1.1 26.3 10.3 1.4 533.7 46.5 3.8 2970.0 671.7 64.9 3489.0 3043.8 332.5 3572.4 3600.0 2059.7 3600.0 3600.0 3381.6
(10) (10) (10) (10) (10) (10) (10) (10) (10) (5) (10) (10) (2) (4) (10) (1) (0) (8) (0) (0) (5)
Table A.18 Loss-Arcflow results for A2-instances (average solution times for the ILP and number of solved instances). With heuristic L
lmin /n 60
1000 1 200 500 1200 1 200 500 1500 1 200 500 2000 1 200 500 3000 1 200 500 4000 1 200 500 5000 1 200 500
0.5 0.1 0.0 1.0 0.1 0.0 2.1 0.5 0.1 9.8 2.4 0.2 40.1 16.5 3.7 56.2 101.3 64.1 65.0 129.7 167.9
Without heuristic
80 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10)
0.7 0.1 0.0 1.8 0.3 0.0 19.0 1.0 0.1 31.0 7.9 0.2 43.0 77.8 8.1 26.4 204.9 106.0 459.0 276.9 57.0
100 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (9) (10) (10)
0.9 0.1 0.0 2.2 0.4 0.0 9.8 1.3 0.1 89.1 20.7 0.4 518.6 178.0 12.0 144.1 369.2 96.4 77.3 521.9 717.6
250 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (9) (10) (10) (10) (10) (10) (10) (10) (10)
2.4 0.5 0.0 8.7 1.3 0.1 213.5 5.9 0.4 445.9 156.9 1.7 1352.0 1206.2 160.6 1257.7 2199.5 1366.7 1845.4 1561.8 2883.3
500 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (9) (10) (10) (8) (6) (9) (5) (9) (3)
4.7 1.0 0.0 79.6 2.7 0.1 545.1 17.1 1.2 2876.3 887.8 3.9 2512.1 2418.5 234.2 1786.0 3195.7 1628.9 1161.8 3600.0 3270.0
60 (10) (10) (10) (10) (10) (10) (10) (10) (10) (6) (10) (10) (5) (6) (10) (6) (3) (8) (7) (0) (4)
0.6 0.1 0.0 1.0 0.2 0.0 2.1 0.4 0.0 30.3 2.3 0.2 115.5 67.3 3.8 224.2 195.7 58.4 386.9 378.5 179.6
80 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10)
0.7 0.1 0.0 1.7 0.3 0.0 9.1 1.0 0.1 80.4 12.8 0.3 184.2 147.4 7.9 405.7 267.8 99.0 1036.3 570.9 223.5
100 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (9) (10) (10)
0.9 0.1 0.0 2.2 0.4 0.0 12.4 1.4 0.1 60.5 17.7 0.5 335.5 197.5 12.0 781.8 601.1 124.6 2108.0 1088.4 527.9
250 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (7) (10) (10)
2.2 0.5 0.0 7.7 1.2 0.1 189.3 8.2 0.4 657.2 143.0 1.9 1727.2 1271.8 69.7 2699.0 2231.5 950.6 3600.0 3091.1 2452.3
500 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (9) (9) (10) (8) (7) (10) (0) (6) (6)
4.2 1.0 0.0 80.2 2.6 0.2 895.1 34.9 1.0 2241.3 640.7 4.6 3235.0 2890.3 232.6 3297.1 3373.8 1795.5 3600.0 3545.2 3422.7
(10) (10) (10) (10) (10) (10) (10) (10) (10) (8) (10) (10) (3) (4) (10) (6) (2) (8) (0) (1) (4)
Table A.19 Reflect results for A2-instances (average solution times for the ILP and number of solved instances). With heuristic L
lmin /n
60
1000
1 200 500 1 200 500 1 200 500 1 200 500 1 200 500 1 200 500 1 200 500
0.1 0.0 0.0 0.4 0.0 0.0 0.6 0.2 0.0 1.7 1.1 0.1 3.6 2.5 0.7 65.4 32.0 12.7 207.5 12.6 36.1
1200
1500
2000
3000
4000
5000
Without heuristic
80 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10)
0.2 0.0 0.0 0.4 0.1 0.0 1.4 0.3 0.0 4.2 4.3 0.1 3.6 12.6 1.6 13.4 22.5 21.9 248.4 162.7 24.1
100 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10)
0.3 0.0 0.0 1.0 0.1 0.0 2.8 0.5 0.1 12.8 11.2 0.2 25.5 71.2 2.5 97.0 21.0 37.8 356.8 77.3 80.2
250 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10)
1.0 0.1 0.0 8.1 0.4 0.0 63.6 6.6 0.2 111.6 95.7 1.1 72.8 719.2 27.1 963.8 1093.3 574.4 1426.8 1006.2 621.3
500 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (9) (10) (8) (9) (10) (7) (9) (10)
1.9 0.4 0.0 108.8 1.1 0.1 338.5 38.3 0.6 1114.0 249.7 4.2 1792.8 1651.2 60.8 1037.9 3079.3 1698.1 985.8 3149.7 2540.7
References
Alvim, A.C.F., Ribeiro, C.C., Glover, F., Aloise, D.J., 2004. A hybrid improvement heuristic for the one-dimensional bin packing problem. J. Heurist. 10 (2), 205–229.
Arbib, C., Di Iorio, C.F., Marinelli, F., Rossi, F., 2012. Cutting and reuse: an application from automobile component manufacturing. Oper. Res. 50 (6), 923–934.
Arbib, C., Marinelli, F., 2005. Integrating process optimization and inventory planning in cutting-stock with skiving option: an optimization model and its application. Eur. J. Oper. Res. 163 (3), 617–630.
Assmann, S.F., Johnson, D.S., Kleitman, D.J., Leung, J.Y.T., 1984. On a dual version of the one-dimensional bin packing problem. J. Algorithms 5, 502–525.
Brandão, F., Pedroso, J.P., 2016. Bin packing and related problems: general arc-flow formulation with graph compression. Comput. Oper. Res. 69, 56–67.
Caprara, A., Dell'Amico, M., Díaz Díaz, J.C., Iori, M., Rizzi, R., 2015. Friendly bin packing instances without integer round-up property. Math. Program. B 150, 5–17.
Chen, Y., Song, X., Ouelhadj, D., Cui, Y., 2019. A heuristic for the skiving and cutting stock problem in paper and plastic film industries. Int. Trans. Oper. Res. 26 (1), 157–179.
Christofides, N., Whitlock, C., 1977. An algorithm for two-dimensional cutting problems. Oper. Res. 25 (1), 30–44.
Clautiaux, F., Sadykov, R., Vanderbeck, F., Viaud, Q., 2018. Combining dynamic programming with filtering to solve a four-stage two-dimensional guillotine-cut bounded knapsack problem. Discret. Optim. 29, 18–44.
Côté, J.-F., Iori, M., 2018. The meet-in-the-middle principle for cutting and packing problems. INFORMS J. Comput. 30 (4), 625–786.
Delorme, M., Iori, M., 2019. Enhanced pseudo-polynomial formulations for bin packing and cutting stock problems. To appear in INFORMS J. Comput. Currently available online as a technical report: http://www.optimization-online.org/DB_HTML/2017/10/6270.html
Delorme, M., Iori, M., Martello, S., 2016. Bin packing and cutting stock problems: mathematical models and exact algorithms. Eur. J. Oper. Res. 255, 1–20.
Delorme, M., Iori, M., Martello, S., 2018. BPPLIB: A library for bin packing and cutting stock problems. Optim. Lett. 12 (2), 235–250.
Fischer, A., Scheithauer, G., 2015. Cutting and packing problems with placement constraints. In: Fasano, G., Pinter, J.D. (Eds.), Optimized Packings with Applications. Springer, pp. 119–156.
Gilmore, P.C., Gomory, R.E., 1961. A linear programming approach to the cutting-stock problem (part I). Oper. Res. 9, 849–859.
Gschwind, T., Irnich, S., 2016. Dual inequalities for stabilized column generation revisited. INFORMS J. Comput. 28, 175–194.
Herz, J.C., 1972. Recursive computational procedure for two-dimensional stock cutting. IBM J. Res. Dev. 16 (5), 462–469.
Johnson, M.P., Rennick, C., Zak, E.J., 1997. Skiving addition to the cutting stock problem in the paper industry. SIAM Rev. 39 (3), 472–483.
Kantorovich, L.V., 1939. Mathematical methods of organising and planning production. Manage. Sci. 6, 366–422 (in Russian 1939; English translation 1960).
Kartak, V., Ripatti, A., 2019. Large proper gaps in bin packing and dual bin packing problems. J. Global Optim. 74 (3), 467–476.
60 (10) (10) (10) (10) (10) (10) (10) (10) (10) (9) (10) (10) (7) (7) (10) (8) (2) (7) (9) (5) (5)
0.2 0.0 0.0 0.3 0.0 0.0 0.8 0.2 0.0 8.0 2.2 0.1 27.7 16.5 0.7 64.9 51.7 33.6 78.4 140.4 112.1
80 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10)
0.2 0.0 0.0 0.4 0.1 0.0 5.8 0.2 0.0 30.8 10.4 0.2 29.1 23.8 1.8 123.2 86.5 59.3 499.1 500.4 46.9
100 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10)
0.3 0.0 0.0 1.0 0.1 0.0 4.2 0.6 0.1 22.6 10.5 0.2 210.9 72.8 2.2 331.3 185.7 138.3 480.0 563.4 120.4
250 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (9) (10)
1.1 0.2 0.0 10.1 0.5 0.0 56.5 5.3 0.2 113.7 174.8 1.3 707.5 585.2 26.5 2000.2 1658.0 408.4 2686.0 1962.7 1981.3
500 (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (10) (9) (10) (7) (8) (10) (4) (7) (7)
1.8 0.4 0.0 67.1 1.4 0.1 268.2 92.4 0.6 780.7 1070.9 5.1 2858.9 1853.9 96.3 3600.0 3103.8 1716.4 3375.9 2905.2 2857.0
(10) (10) (10) (10) (10) (10) (10) (10) (10) (9) (9) (10) (4) (8) (10) (0) (3) (8) (2) (5) (4)
Kartak, V., Ripatti, A., Scheithauer, G., Kurz, S., 2015. Minimal proper non-IRUP instances of the one-dimensional cutting stock problem. Discret. Appl. Math. 187, 120–129.
Labbé, M., Laporte, G., Martello, S., 1995. An exact algorithm for the dual bin packing problem. Oper. Res. Lett. 17, 9–18.
Macedo, R., Alves, C., Valério de Carvalho, J.M., 2010. Arc-flow model for the two-dimensional guillotine cutting stock problem. Comput. Oper. Res. 37, 991–1001.
Martinovic, J., Delorme, M., Iori, M., Scheithauer, G., 2019. An improved arcflow model for the skiving stock problem. Oper. Res. Proc. 2018.
Martinovic, J., Jorswieck, E., Scheithauer, G., Fischer, A., 2017. Integer linear programming formulations for cognitive radio resource allocation. IEEE Wirel. Commun. Lett. 6 (4), 494–497.
Martinovic, J., Scheithauer, G., 2016a. Integer linear programming models for the skiving stock problem. Eur. J. Oper. Res. 251 (2), 356–368.
Martinovic, J., Scheithauer, G., 2016b. Integer rounding and modified integer rounding for the skiving stock problem. Discret. Optim. 21, 118–130.
Martinovic, J., Scheithauer, G., 2016c. The proper relaxation and the proper gap of the skiving stock problem. Math. Method. Oper. Res. 84 (3), 527–548.
Martinovic, J., Scheithauer, G., 2017. An upper bound of Δ(E) < 3/2 for skiving stock instances of the divisible case. Discret. Appl. Math. 229, 161–167.
Martinovic, J., Scheithauer, G., 2018. The skiving stock problem and its relation to hypergraph matchings. Discret. Optim. 29, 77–102.
Martinovic, J., Scheithauer, G., Valério de Carvalho, J.M., 2018. A comparative study of the arcflow model and the one-cut model for one-dimensional cutting stock problems. Eur. J. Oper. Res. 266 (2), 458–471.
Nemhauser, G., Wolsey, L., 1988. Integer and Combinatorial Optimization. Wiley, New York.
Peeters, M., Degraeve, Z., 2006. Branch-and-price algorithms for the dual bin packing and maximum cardinality bin packing problem. Eur. J. Oper. Res. 170 (2), 416–439.
Scheithauer, G., 2018. Introduction to Cutting and Packing Optimization – Problems, Modeling Approaches, Solution Methods. Int. Ser. Oper. Res. Manag. Sci., vol. 263. Springer, 1st edition.
Scheithauer, G., Terno, J., 1996. The G4-heuristic for the pallet loading problem. J. Oper. Res. Soc. 47 (4), 511–522.
Tragos, E.Z., Zeadally, S., Fragkiadakis, A.G., Siris, V.A., 2013. Spectrum assignment in cognitive radio networks: a comprehensive survey. IEEE Commun. Surv. Tutor. 15 (3), 1108–1135.
Valério de Carvalho, J.M., 1998. Exact solution of cutting stock problems using column generation and branch-and-bound. Int. Trans. Oper. Res. 5, 35–44.
Valério de Carvalho, J.M., 1999. Exact solution of bin-packing problems using column generation and branch-and-bound. Annal. Oper. Res. 86, 629–659.
Valério de Carvalho, J.M., 2002. LP models for bin packing and cutting stock problems. Eur. J. Oper. Res. 141 (2), 253–273.
Wäscher, G., Haußner, H., Schumann, H., 2007. An improved typology of cutting and packing problems. Eur. J. Oper. Res. 183, 1109–1130.
Wolsey, L.A., 1977. Valid inequalities, covering problems and discrete dynamic programs. Annal. Discrete Math. 1, 527–538.
Zak, E.J., 2003. The skiving stock problem as a counterpart of the cutting stock problem. Int. Trans. Oper. Res. 10, 637–650.