Upper and lower bounding procedures for the multiple knapsack assignment problem

European Journal of Operational Research xxx (2014) xxx–xxx

Discrete Optimization
Seiji Kataoka *, Takeo Yamada
Department of Computer Science, National Defense Academy, Yokosuka, Kanagawa 239-8686, Japan


Article history: Received 19 November 2012 Accepted 4 February 2014 Available online xxxx

We formulate the multiple knapsack assignment problem (MKAP) as an extension of the multiple knapsack problem (MKP), as well as of the assignment problem. Except for small instances, MKAP is hard to solve to optimality. We present a heuristic algorithm that solves this problem approximately but very quickly. We first discuss three approaches to evaluating its upper bound, and prove that these methods compute an identical upper bound. In this process, reference capacities are derived, which enable us to decompose the problem into mutually independent MKPs. These MKPs are solved heuristically, and together they give an approximate solution to MKAP. Through numerical experiments, we evaluate the performance of our algorithm. Although the algorithm is weak for small instances, we find it promising for large instances. Indeed, for instances with more than a few thousand items we usually obtain solutions with relative errors of less than 0.1% within one CPU second. © 2014 Published by Elsevier B.V.

Keywords: Combinatorial optimization; Heuristics; Multiple knapsack problem; Assignment problem; Lagrangian relaxation

1. Introduction


This article is concerned with the multiple knapsack assignment problem (MKAP), an extension of the multiple knapsack problem (MKP; Kellerer, Pferschy, & Pisinger, 2004; Martello & Toth, 1990; Pisinger, 1999), as well as of the assignment problem (Burkard, Dell'Amico, & Martello, 2009; Kuhn, 2005; Pentico, 2007), where we are given a set of $n$ items $N = \{1, 2, \ldots, n\}$ to be packed into $m$ possible knapsacks $M = \{1, 2, \ldots, m\}$. As in the ordinary MKP, $w_j$ and $p_j$ denote the weight and profit of item $j \in N$, respectively, and $c_i$ is the capacity of knapsack $i \in M$. However, the items are divided into $K$ mutually disjoint subsets $N_k$ ($k = 1, \ldots, K$); thus $N = \cup_{k=1}^{K} N_k$, and with $n_k := |N_k|$ we have $n = \sum_{k=1}^{K} n_k$. The problem is to assign knapsacks to the subsets, and to fill each knapsack with items from its subset, so as to maximize the total profit of the accepted items. To formulate this mathematically, we introduce binary decision variables $x_{ij}$ and $y_{ik}$ such that $x_{ij} = 1$ if item $j$ is included in knapsack $i$, and $x_{ij} = 0$ otherwise; similarly, $y_{ik} = 1$ if knapsack $i$ is assigned to subset $N_k$, and $y_{ik} = 0$ otherwise. Then, we have the following.

* Corresponding author.
E-mail addresses: [email protected] (S. Kataoka), [email protected] (T. Yamada).

MKAP:

maximize $z(x, y) := \sum_{i=1}^{m} \sum_{k=1}^{K} \sum_{j \in N_k} p_j x_{ij},$  (1)

subject to $\sum_{j \in N_k} w_j x_{ij} \le c_i y_{ik}, \quad i = 1, \ldots, m, \; k = 1, \ldots, K,$  (2)

$\sum_{i=1}^{m} x_{ij} \le 1, \quad j = 1, \ldots, n,$  (3)

$\sum_{k=1}^{K} y_{ik} \le 1, \quad i = 1, \ldots, m,$  (4)

$x_{ij}, y_{ik} \in \{0, 1\}, \quad \forall i, j, k.$  (5)

Here, (1) gives the total profit of the accepted items, and (2) and (3) represent the same conditions as in MKP with respect to each $N_k$ and the set of knapsacks assigned to this subset of items. Constraint (4) means that each knapsack can be assigned to at most one subset. Such a problem may be encountered by a marine shipping company in drawing up a cargo plan. Here, items are to be shipped to their respective destinations, and we have $m$ ships for this transportation. Let $N_k$ represent the set of items destined for the $k$th destination, and let $c_i$ be the capacity of ship $i$. Cargo planning is to allocate ships to destinations and, for each $k$, to load the items in $N_k$ onto the allocated ships.
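To make the formulation concrete, the following minimal brute-force sketch enumerates every assignment and packing of a tiny hypothetical instance and checks constraints (2)–(5) directly. The data and the function name are ours, purely for illustration; real instances are far too large for enumeration.

```python
from itertools import product

# Tiny illustrative MKAP instance (hypothetical data, not from the paper):
p = {1: 6, 2: 5, 3: 8}           # profits p_j
w = {1: 4, 2: 3, 3: 5}           # weights w_j
c = {1: 7, 2: 5}                 # knapsack capacities c_i
subsets = {1: [1, 2], 2: [3]}    # item subsets N_1, N_2

def solve_mkap_bruteforce(p, w, c, subsets):
    """Enumerate every assignment y and packing x; return the best profit
    over all solutions satisfying constraints (2)-(5)."""
    M = list(c)
    items = [j for js in subsets.values() for j in js]
    subset_of = {j: k for k, js in subsets.items() for j in js}
    best = 0
    # each knapsack serves at most one subset (constraint (4))
    for assign in product([None] + list(subsets), repeat=len(M)):
        # each item goes into at most one knapsack (constraint (3))
        for packing in product([None] + M, repeat=len(items)):
            load = {i: 0 for i in M}
            profit = 0
            ok = True
            for j, i in zip(items, packing):
                if i is None:        # item j is rejected
                    continue
                if assign[M.index(i)] != subset_of[j]:
                    ok = False       # knapsack i does not serve j's subset
                    break
                load[i] += w[j]
                profit += p[j]
            # every knapsack capacity is respected (constraint (2))
            if ok and all(load[i] <= c[i] for i in M):
                best = max(best, profit)
    return best

print(solve_mkap_bruteforce(p, w, c, subsets))  # → 19 (all three items fit)
```

In this toy instance, assigning knapsack 1 to $N_1$ and knapsack 2 to $N_2$ packs all items, which is clearly optimal.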

http://dx.doi.org/10.1016/j.ejor.2014.02.014
0377-2217/© 2014 Published by Elsevier B.V.

Please cite this article in press as: Kataoka, S., & Yamada, T. Upper and lower bounding procedures for the multiple knapsack assignment problem. European Journal of Operational Research (2014), http://dx.doi.org/10.1016/j.ejor.2014.02.014


MKAP is NP-hard, since the special case of $K = 1$ is simply an MKP, which is already NP-hard. For recent work on MKP, readers are referred to Chekuri and Khanna (2006), Dawande, Kalagnanam, Keskinocak, Ravi, and Salman (2000), and Lalami, Elkihel, Baz, and Boyer (2012). Since MKAP as described above is a linear 0–1 programming problem, small instances may be solved using free or commercial MIP (mixed integer programming) solvers such as Gurobi (2012). However, as we shall see later, such solvers can handle only small instances within a reasonable CPU time. Instead of solving MKAP exactly, we present an approach that solves larger instances approximately, but very quickly. More specifically, we first apply Lagrangian relaxation to (2) and obtain an upper bound quickly. Here, we show that a single multiplier suffices to eliminate these $mK$ inequalities, and the resulting upper bound is shown to be identical to the upper bound derived from the continuous (LP) relaxation of MKAP. In addition, we present an efficient way to solve this LP problem by decomposing it into $K$ independent continuous knapsack problems. We exploit the result of this computation to derive a heuristic solution, which gives a lower bound for MKAP. Through numerical experiments on a series of randomly generated instances, we evaluate the quality (CPU time and relative error) of the obtained solutions.

2. Upper bound

To discuss upper bounds, without much loss of generality, we assume the following.

A1: The problem data $p_j, w_j$ ($j = 1, 2, \ldots, n$) and $c_i$ ($i = 1, 2, \ldots, m$) are all positive integers.

A2: Within each subset, items are arranged in non-increasing order of profit per weight, i.e., for all $k = 1, \ldots, K$ and all $j, j' \in N_k$ the following is satisfied:

$j < j' \Rightarrow p_j / w_j \ge p_{j'} / w_{j'}.$

2.1. Lagrangian relaxation

With non-negative multipliers $\lambda_{ik}$ associated with (2), the Lagrangian relaxation (Fisher, 1981) of MKAP is as follows.

LMKAP($\lambda$):

maximize $L(\lambda; x, y) := \sum_{i} \sum_{k} \sum_{j \in N_k} (p_j - \lambda_{ik} w_j) x_{ij} + \sum_{i} \sum_{k} c_i \lambda_{ik} y_{ik},$

subject to (3)–(5).

With $\lambda \ge 0$ fixed, this problem is easily solved, and the optimal objective value is

$z(\lambda) = \sum_{k} \sum_{j \in N_k} \max_i \{ (p_j - \lambda_{ik} w_j)^+ \} + \sum_{i} \max_k \{ \lambda_{ik} \} c_i,$  (6)

where $(\cdot)^+ := \max\{\cdot, 0\}$. Then, $z(\lambda)$ is a piecewise-linear and convex function of $\lambda$. Moreover, if we consider the Lagrangian

DUAL: minimize $z(\lambda)$ subject to $\lambda \ge 0$,

we have the following.

Theorem 1. There exists an optimal solution $\lambda^\dagger = (\lambda^\dagger_{ik})$ to the Lagrangian DUAL such that $\lambda^\dagger_{ik}$ is constant over $i$ and $k$, i.e., $\lambda^\dagger_{ik} \equiv \lambda^\dagger$.

Proof. Let $\lambda = (\lambda_{ik})$ be a feasible solution to the above problem, and put $k^\dagger(i) := \arg\max_k \{\lambda_{ik}\}$. Then, for all $k$ we have $\lambda_{ik} \le \lambda_{i k^\dagger(i)}$. Since $(p_j - \lambda_{ik} w_j)^+$ is a non-increasing function of $\lambda_{ik}$, this is minimized at $\lambda_{ik} = \lambda_{i k^\dagger(i)}$ for all $k$. Thus, in the Lagrangian dual we can assume that $\lambda_{ik} \equiv \lambda_i$, i.e., constant over $k$. Next, let $\lambda^\dagger := \min_i \{\lambda_i\}$. Then, we have $\max_i \{ (p_j - \lambda_i w_j)^+ \} = (p_j - \lambda^\dagger w_j)^+$, and thus

$z(\lambda) = \sum_{k} \sum_{j \in N_k} (p_j - \lambda^\dagger w_j)^+ + \sum_{i} \lambda_i c_i,$

which is minimized at $\lambda_i \equiv \lambda^\dagger$. □

Remark 1. Due to the fact that the coefficients of (2) are identical for all $i$, this is obtained as an extension of the known result for MKP (i.e., $K = 1$; Martello & Toth, 1990, pp. 164–165). See also Yamada and Takeoka (2009).

From this theorem, to obtain $\lambda^\dagger$ it suffices to minimize the one-dimensional function

$z(\lambda) = \sum_{k} \sum_{j \in N_k} (p_j - \lambda w_j)^+ + \lambda C$  (7)

over $\lambda \ge 0$, where $C$ is the total knapsack capacity, i.e.,

$C := \sum_{i} c_i.$  (8)

At differentiable $\lambda \ge 0$, we have

$dz(\lambda)/d\lambda = C - \sum_{k} \sum_{j \in N_k(\lambda)} w_j,$  (9)

with $N_k(\lambda) := \{ j \in N_k \mid p_j - \lambda w_j > 0 \}$. Thus, $z(\lambda)$ is a piecewise-linear, convex function of $\lambda$, and the optimal solution $\lambda^\dagger$ to the Lagrangian dual is characterized by

$\lambda \gtrless \lambda^\dagger \;\Rightarrow\; C - \sum_{k=1}^{K} \sum_{j \in N_k(\lambda)} w_j \gtrless 0.$  (10)

Such a $\lambda^\dagger$ can be found by the standard binary search method, and we obtain the corresponding Lagrangian upper bound $z^L := z(\lambda^\dagger)$.
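The binary search just described can be sketched as follows. This is our illustrative reading of (7)–(10), not the authors' ANSI C code; the function name, tolerance, and example data are assumptions. It searches for the point where the derivative (9) changes sign and returns $z^L = z(\lambda^\dagger)$.

```python
def lagrangian_upper_bound(p, w, C, tol=1e-9):
    """Minimize z(lam) = sum_j (p_j - lam*w_j)^+ + lam*C over lam >= 0,
    by binary search on the sign of the derivative (9)."""
    def excess(lam):
        # C minus the total weight of items with p_j - lam*w_j > 0
        return C - sum(wj for pj, wj in zip(p, w) if pj - lam * wj > 0)

    lo, hi = 0.0, max(pj / wj for pj, wj in zip(p, w))
    if excess(lo) >= 0:
        lam = 0.0                 # capacity covers every profitable item
    else:
        while hi - lo > tol:      # invariant: excess(lo) < 0 <= excess(hi)
            mid = (lo + hi) / 2
            if excess(mid) < 0:
                lo = mid
            else:
                hi = mid
        lam = hi
    return sum(max(pj - lam * wj, 0.0) for pj, wj in zip(p, w)) + lam * C

# Example: three items pooled over all subsets, total capacity C = 5
print(lagrangian_upper_bound([6, 5, 4], [4, 3, 4], 5))  # ≈ 8.0
```

Since $z(\lambda)$ is convex and piecewise linear, evaluating it at the converged multiplier yields the bound $z^L$ to within the search tolerance.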

2.2. Continuous relaxation

By replacing the 0–1 condition (5) with non-negativity requirements, we obtain the continuous relaxation of MKAP as follows.

CMKAP:

maximize (1),

subject to (2)–(4) and

$x_{ij} \ge 0$, $y_{ik} \ge 0$, $\forall i, j, k$.

Let $z_C$ be the optimal objective value of this problem. This gives the upper bound derived from the continuous relaxation of MKAP. Then, the following states the relation between the Lagrangian and continuous relaxations.

Theorem 2. The upper bounds derived from the Lagrangian and continuous relaxations are identical, i.e., $z^L = z_C$.

We note that the coefficient matrix of constraints (3) and (4) in LMKAP($\lambda$) is totally unimodular. Then, this theorem follows immediately from Theorem 10.3 (p. 172) of Wolsey (1998).

2.3. Continuous relaxation: an alternative approach

Instead of applying LP algorithms such as the simplex method directly, CMKAP may be solved efficiently as follows. Let $u_k := \sum_{i=1}^{m} c_i y_{ik}$ and $x_j := \sum_{i=1}^{m} x_{ij}$. Here, $u_k$ is the knapsack capacity allocated (from the total knapsack capacity $C$) to subset $N_k$. Then, adding (2) over $i = 1, \ldots, m$, the problem is decomposed into $K$ independent subproblems, one for each subset, as follows.


CKP$_k(u_k)$:

maximize $\sum_{j \in N_k} p_j x_j$,

subject to $\sum_{j \in N_k} w_j x_j \le u_k$,

$0 \le x_j \le 1$, $j \in N_k$.

This is a continuous knapsack problem, and it is well known that its optimal objective value $z_k(u_k)$ is a piecewise-linear, monotonically non-decreasing, and (under assumption A2) concave function of $u_k$ (Martello & Toth, 1990). The total profit obtained from the capacity allocation $u = (u_k)$ is

$z(u) = \sum_{k=1}^{K} z_k(u_k),$  (11)

and we consider the following total problem of the resource allocation type (Ibaraki & Katoh, 1988).

TP:

maximize $z(u)$,

subject to $\sum_{k=1}^{K} u_k \le C$,

$u_k \ge 0$.

By $u^\natural = (u^\natural_k)$ we denote an optimal solution to TP, and $z^\natural_C$ is the corresponding objective value, i.e., $z^\natural_C = z(u^\natural)$. Then, we have the following.

Theorem 3. $z^\natural_C = z_C$. Also, an optimal solution to TP is obtained from any optimal solution to CMKAP, and vice versa.

Proof. (i) Proof of $z_C \le z^\natural_C$. Let $(x^\circ, y^\circ)$ denote an optimal solution to CMKAP. The upper bound $z_C$ is given as

$z_C = z(x^\circ, y^\circ) = \sum_{i=1}^{m} \sum_{k=1}^{K} \sum_{j \in N_k} p_j x^\circ_{ij},$  (12)

where $x^\circ = (x^\circ_{ij})$ and $y^\circ = (y^\circ_{ik})$. Then, with

$x^\circ_j := \sum_{i=1}^{m} x^\circ_{ij}, \qquad u^\circ_k := \sum_{i=1}^{m} c_i y^\circ_{ik},$

we have

$\sum_{k=1}^{K} u^\circ_k = \sum_{i=1}^{m} c_i \sum_{k=1}^{K} y^\circ_{ik} \le C,$

and

$\sum_{j \in N_k} w_j x^\circ_j = \sum_{i=1}^{m} \sum_{j \in N_k} w_j x^\circ_{ij} \le \sum_{i=1}^{m} c_i y^\circ_{ik} = u^\circ_k.$

These imply that $u^\circ = (u^\circ_k)$ is feasible to TP, and $(x^\circ_j \mid j \in N_k)$ is feasible to CKP$_k(u^\circ_k)$. Thus, we have $z^\natural_C \ge \sum_{k=1}^{K} z_k(u^\circ_k) \ge \sum_{k=1}^{K} \sum_{j \in N_k} p_j x^\circ_j = \sum_{i=1}^{m} \sum_{k=1}^{K} \sum_{j \in N_k} p_j x^\circ_{ij} = z_C$.

(ii) Proof of $z_C \ge z^\natural_C$. Let $x^\natural_k = (x^\natural_j \mid j \in N_k)$ denote an optimal solution to CKP$_k(u^\natural_k)$, and put

$x^\natural_{ij} := (c_i / C) x^\natural_j, \qquad y^\natural_{ik} := u^\natural_k / C, \qquad \forall i.$

Then, we obtain

$\sum_{j \in N_k} w_j x^\natural_{ij} = (c_i / C) \sum_{j \in N_k} w_j x^\natural_j \le (c_i / C) u^\natural_k = c_i y^\natural_{ik},$

$\sum_{i=1}^{m} x^\natural_{ij} = x^\natural_j \le 1,$

$\sum_{k=1}^{K} y^\natural_{ik} = \sum_{k=1}^{K} u^\natural_k / C \le 1,$

implying that $(x^\natural, y^\natural)$ is feasible to CMKAP. Thus, we have $z_C \ge z(x^\natural, y^\natural) = \sum_{i=1}^{m} \sum_{k=1}^{K} \sum_{j \in N_k} p_j x^\natural_{ij} = \sum_{k=1}^{K} \sum_{j \in N_k} p_j x^\natural_j = z^\natural_C$. □

In TP, since $dz_k(u_k)/du_k$ is a monotonically non-increasing, right-continuous step function (Martello & Toth, 1990), we see that there exists $\lambda^\natural \ge 0$ such that $\lambda \gtrless \lambda^\natural$ implies $u^\natural_k \gtrless \sum_{j \in N_k(\lambda)} w_j$. Adding these for $k = 1, \ldots, K$, we have

$\lambda \gtrless \lambda^\natural \;\Rightarrow\; C \gtrless \sum_{k=1}^{K} \sum_{j \in N_k(\lambda)} w_j.$  (13)

Comparing this with (10), we obtain the following.

Theorem 4. The $\lambda^\natural$ obtained by solving TP is identical to the optimal Lagrangian multiplier characterized by (10), i.e., $\lambda^\natural = \lambda^\dagger$.
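Under assumption A2, TP amounts to a single fractional knapsack over the pooled items: capacity is granted in order of non-increasing efficiency $p_j/w_j$ until $C$ is exhausted, and the weight handed to subset $k$'s items becomes its reference capacity. The sketch below is our illustration of this greedy reading; names and data are hypothetical.

```python
def reference_capacities(items, C):
    """Split the total capacity C among the subsets by solving TP greedily:
    a single fractional knapsack over the pooled items, granting capacity
    in order of non-increasing efficiency p_j / w_j (valid because each
    z_k is concave under assumption A2)."""
    pool = sorted(
        ((pj / wj, wj, k) for k, lst in items.items() for pj, wj in lst),
        reverse=True,
    )
    u = {k: 0.0 for k in items}   # reference capacities, one per subset
    cap = C
    for eff, wj, k in pool:
        take = min(wj, cap)       # the last item taken may be fractional
        u[k] += take
        cap -= take
        if cap <= 0:
            break
    return u

# Example: subset 1 holds items (p, w) = (6, 4), (5, 3); subset 2 holds (8, 5)
print(reference_capacities({1: [(6, 4), (5, 3)], 2: [(8, 5)]}, 8))
# → {1: 3.0, 2: 5.0}
```

Here the marginal efficiency at which the capacity runs out plays the role of $\lambda^\natural$ in (13).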

3. Lower bounds

In preliminary numerical experiments, we found that many variables in CMKAP take non-integer values at optimality, and thus it is not practical to expect a good approximate solution to MKAP by rounding those variables to 0 or 1. As an alternative approach, we propose to allocate (from the total knapsack capacity $C$) $u^\natural_k$ as the capacity for subset $N_k$, and to decompose MKAP into $K$ mutually independent MKPs, one for each $N_k$. Here, $u^\natural = (u^\natural_k)$ is obtained in Section 2.3 as a solution to TP, and is hereafter referred to as the reference capacity. Next, we try to assign knapsacks, i.e., determine $y$, so that $u^\natural$ is most closely approximated. This may be accomplished heuristically, as we shall discuss in Section 3.1. Finally, once knapsacks are thus assigned to subsets, we obtain an approximate solution, and correspondingly a lower bound for MKAP, by solving (exactly or approximately) the resulting $K$ independent MKPs.

3.1. Assignment of knapsacks: heuristic approach

For an assignment vector $y$ satisfying (4),

$u_k(y) := \sum_{i=1}^{m} c_i y_{ik}$  (14)

denotes the capacity allocated to $N_k$ by this assignment of knapsacks. Let the difference between $u(y)$ and $u^\natural$ be

$d(y) := \sum_{k=1}^{K} | u_k(y) - u^\natural_k |.$

Then, the problem is:

minimize $d(y)$ subject to (4).

This may be converted into a linear 0–1 integer program and solved exactly using MIP solvers. However, to obtain a solution more quickly, we present a heuristic approach as follows.


First, we obtain a feasible $y$ by the following greedy algorithm.

Algorithm 1. GREEDY
Step 1. Let $u_k := 0$, $y_{ik} := 0$ for all $i, k$, and $i := 1$;
Step 2. Find $k^* := \arg\max_k \{ u^\natural_k - u_k \}$, and assign knapsack $i$ to $N_{k^*}$, i.e., $u_{k^*} := u_{k^*} + c_i$, $y_{ik^*} := 1$;
Step 3. Stop if $i = m$. Otherwise, let $i := i + 1$ and go to Step 2.

Next, we apply local search to obtain improved solutions. This is accomplished by exchanging parts of the assignment. Let $y = (y_{ik})$ be a current solution, and assume that $y_{ik} = y_{i'k'} = 1$ in this solution. By $y(i, i')$ we denote the solution obtained from $y$ by exchanging the assignments at rows $i$ and $i'$, i.e., $y_{ik'}(i, i') = y_{i'k}(i, i') = 1$. For a feasible $y$, $U(y)$ denotes the neighborhood of $y$, i.e., the set of solutions obtained from $y$ in this way. Then, the algorithm is:

Algorithm 2. LOCAL_SEARCH
Step 1. Let $y$ be the assignment vector obtained from GREEDY described above;
Step 2. Find $y' \in U(y)$ such that $d(y') < d(y)$;
Step 3. If such a $y'$ is found, put $y := y'$ and go to Step 2;
Step 4. Otherwise, output $\bar{y} := y$ and stop.

The output from this algorithm is denoted as $\bar{y}$.

3.2. Computing a lower bound

Once $\bar{y} = (\bar{y}_{ik})$ is obtained, MKAP is decomposed into MKPs for each subset. Let the set of knapsacks assigned to $N_k$ be

$I_k(\bar{y}) := \{ i \mid \bar{y}_{ik} = 1 \}.$  (15)

Then, for subset $N_k$ the problem is

MKP$_k(\bar{y})$:

maximize $\sum_{i \in I_k(\bar{y})} \sum_{j \in N_k} p_j x_{ij}$,

subject to $\sum_{j \in N_k} w_j x_{ij} \le c_i$, $\forall i \in I_k(\bar{y})$,

$\sum_{i \in I_k(\bar{y})} x_{ij} \le 1$, $\forall j \in N_k$,

$x_{ij} \in \{0, 1\}$.

This problem may be solved exactly using the MULKNAP code (Pisinger, 1999), which is a specialized solver for MKPs. However, we note that in solving the original MKAP, since $u^\natural_k$ is only an approximation of the capacity for $N_k$, exact solutions of MKP$_k(\bar{y})$ may neither be required nor useful. Furthermore, solving $K$ MKPs exactly can be quite time-consuming. Thus, we prefer to solve the MKPs only approximately, but quickly. This is accomplished by truncating MULKNAP as soon as an approximate (and feasible) solution is obtained. Let $\bar{x}^k$ be an approximate solution to MKP$_k(\bar{y})$ thus obtained, with corresponding objective value $z_k$, and let $\bar{x} := (\bar{x}^1, \ldots, \bar{x}^K)$. Then $(\bar{x}, \bar{y})$ is a feasible solution to MKAP, and

$\underline{z} := \sum_{k=1}^{K} z_k$  (16)

gives a lower bound to MKAP.
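Algorithms 1 and 2 can be sketched as follows. This is our Python rendering with hypothetical names and data; the paper's own implementation is in ANSI C.

```python
def greedy_assignment(c, u_ref):
    """Algorithm 1 (GREEDY): give each knapsack to the subset whose
    reference capacity is currently least covered."""
    K = len(u_ref)
    y = [[0] * K for _ in c]                       # y[i][k] in {0, 1}
    u = [0.0] * K                                  # capacity allocated so far
    for i, ci in enumerate(c):
        k = max(range(K), key=lambda kk: u_ref[kk] - u[kk])
        u[k] += ci
        y[i][k] = 1
    return y

def delta(y, c, u_ref):
    """d(y): total deviation of the allocated capacities from u_ref."""
    K = len(u_ref)
    u = [sum(ci for ci, row in zip(c, y) if row[k]) for k in range(K)]
    return sum(abs(u[k] - u_ref[k]) for k in range(K))

def local_search(y, c, u_ref):
    """Algorithm 2 (LOCAL_SEARCH): exchange the assignments of two
    knapsacks whenever that decreases d(y); stop at a local optimum."""
    improved = True
    while improved:
        improved = False
        best = delta(y, c, u_ref)
        for i in range(len(c)):
            for i2 in range(i + 1, len(c)):
                y[i], y[i2] = y[i2], y[i]          # tentative exchange
                d = delta(y, c, u_ref)
                if d < best:
                    best, improved = d, True       # keep the exchange
                else:
                    y[i], y[i2] = y[i2], y[i]      # undo it
    return y

# Example: capacities 4, 3, 5 against reference capacities (7, 5)
y = local_search(greedy_assignment([4, 3, 5], [7, 5]), [4, 3, 5], [7, 5])
print(delta(y, [4, 3, 5], [7, 5]))  # → 0 (reference matched exactly)
```

In this small example the greedy assignment deviates from the reference by 4, and a single exchange found by the local search removes the deviation entirely.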

4. Numerical experiments

We evaluate the performance of the MKAP algorithm developed in the previous sections through a series of numerical experiments. We implemented the algorithm in ANSI C and conducted the computation on a DELL Precision T7400 computer (CPU: Xeon X5482 Quad-Core 3.20 GHz × 2, RAM: 64 GB) running the Red Hat Enterprise Linux 5 operating system.

4.1. Design of experiments

The sizes of the instances tested are SMALL and LARGE, as follows:

– SMALL: n = 20, 40, 60; K = 2, 5; and m = 10, 20,
– LARGE: n = 4000, 8000; K = 50, 100; and m = 200, 400, 800,

and we set $n_k = n/K$ ($k = 1, \ldots, K$). These sizes come from the following considerations: in SMALL we intend to compare the upper and lower bounds against the optimal objective values, and to solve instances exactly the problem size has to be limited. On the other hand, LARGE explores the behavior of the heuristic algorithm for large (or huge) instances.

Let $R$ denote the range of $w_j$; i.e., the weight $w_j$ is distributed uniformly at random over the integer interval $[1, R]$, and the profit $p_j$ is related to the weight in the following ways:

– UNCOR (uncorrelated): $p_j$ is uniformly random over $[1, R]$, independent of $w_j$,
– WEAK (weakly correlated): $p_j := 0.6 w_j + \theta_j$, where $\theta_j$ is uniformly random over $[1, 0.4R]$,
– STRONG (strongly correlated): $p_j := w_j + 0.2R$,
– BINARY: $p_j$ is independent of $w_j$, and takes the value 1 or 100, each with probability 0.5.

Here, BINARY aims at exploring the effect of a smaller number of possible objective values of MKAP, as we shall examine in Section 4.6. In addition, throughout the experiments $R$ is fixed at $10^3$, except in Section 4.5, where we conduct a sensitivity analysis on this parameter. Knapsack capacity is determined by

$c_i = \left\lfloor \rho \, \xi_i \sum_{j=1}^{n} w_j \right\rfloor,$  (17)

where $(\xi_i)$ is uniformly distributed over $\{ (\xi_1, \ldots, \xi_m) \mid \sum_{i=1}^{m} \xi_i = 1, \ \xi_i \ge 0 \}$ and $\rho$ is another experimental parameter, which takes a value of 0.25, 0.50, or 0.75.
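An instance generator in the style of this subsection might be sketched as follows. The names are ours, integrality of the WEAK/STRONG profits is not enforced, and normalizing i.i.d. uniforms is used only as a simple stand-in for drawing $(\xi_i)$ uniformly from the simplex, so this is an assumption rather than the authors' exact procedure.

```python
import random

def generate_instance(n, m, K, R=1000, rho=0.5, corr="UNCOR", seed=0):
    """Random MKAP instance in the style of Section 4.1 (a sketch)."""
    rng = random.Random(seed)
    w = [rng.randint(1, R) for _ in range(n)]
    if corr == "UNCOR":
        p = [rng.randint(1, R) for _ in range(n)]
    elif corr == "WEAK":
        p = [0.6 * wj + rng.randint(1, int(0.4 * R)) for wj in w]
    elif corr == "STRONG":
        p = [wj + 0.2 * R for wj in w]
    else:  # BINARY
        p = [rng.choice([1, 100]) for _ in range(n)]
    # capacities via (17); normalized i.i.d. uniforms stand in for a
    # simplex-uniform (xi_i) -- an approximation, not the exact scheme
    xi = [rng.random() for _ in range(m)]
    total = sum(xi)
    c = [int(rho * (x / total) * sum(w)) for x in xi]
    # items fall into K equal-size subsets, n_k = n / K, as in the paper
    step = n // K
    subsets = [list(range(k * step, (k + 1) * step)) for k in range(K)]
    return p, w, c, subsets

p, w, c, subsets = generate_instance(100, 10, 5)
print(len(c), sum(len(s) for s in subsets))  # → 10 100
```

By construction, the total capacity is at most $\rho \sum_j w_j$, so $\rho$ controls how much of the total item weight the knapsacks can absorb.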

4.2. Experiments for SMALL

For each combination of correlation type and the values of $K$, $m$, and $n$ shown in SMALL, we prepared 10 randomly generated instances and computed their optimal objective values as well as upper and lower bounds. Table 1 summarizes the results of this computation, with each row showing the average of the respective values over the 10 instances. Here the column 'Exact' gives the optimal objective value ($z^H$) and the CPU time in seconds obtained using the MIP solver Gurobi Version 5.0.1 (2012), with computation truncated at 1200 CPU seconds. We show the number of instances solved to optimality within this time limit as #sol; if truncated, $z^H$ gives the best incumbent objective value obtained at that time. The columns 'Upper bound' and 'Lower bound' investigate the performance of the heuristic algorithm on the same instances. We show the upper and lower bounds ($\bar{z}$ and $\underline{z}$) and their relative errors in percent, i.e., ErrU $= 100 (\bar{z} - z^H)/z^H$ and


Table 1
Summary of experiments for SMALL (ρ = 0.5). Columns: Exact ($z^H$, CPU seconds, #sol); Upper bound ($\bar{z}$, ErrU(%)); Lower bound ($\underline{z}$, ErrL(%)). A dash means no instance was solved within the 1200 s limit.

Type    K  m   n      z^H      CPU    #sol     z̄     ErrU(%)     z̲     ErrL(%)
UNCOR   2  10  20    7438.8     0.0   10    8075.6     8.9    6991.6     6.2
UNCOR   2  10  40   16291.4    15.4   10   16488.4     1.2   15893.7     2.5
UNCOR   2  10  60   24716.7   401.6    6   24804.7     0.4   24568.8     0.6
UNCOR   2  20  20    4685.8     0.0   10    8071.1    79.7    4436.5     6.2
UNCOR   2  20  40   15405.9     0.4   10   16484.3     6.9   14798.2     4.3
UNCOR   2  20  60   24597.3   161.0    6   24800.7     0.9   23911.9     2.8
UNCOR   5  10  20    7314.7     0.0   10    8075.6    10.7    5847.6    20.7
UNCOR   5  10  40   16392.1     3.5   10   16488.4     2.8   15089.0     6.0
UNCOR   5  10  60   24489.8    26.5   10   24804.7     1.3   23846.1     2.6
UNCOR   5  20  20    4685.8     0.0   10    8071.1    79.7    3830.2    17.8
UNCOR   5  20  40   15379.7     0.2   10   16484.3     7.4   13356.5    13.3
UNCOR   5  20  60   24445.7   227.4   10   24800.7     1.5   22944.2     6.2
WEAK    2  10  20    5559.0     0.0   10    6162.7    11.1    5061.2     8.8
WEAK    2  10  40   12319.9    26.4   10   12459.1     1.1   11933.6     3.1
WEAK    2  10  60   18724.8   571.4    2   18793.3     0.4   18659.0     0.4
WEAK    2  20  20    2957.0     0.0   10    6158.3   110.1    2835.5     4.3
WEAK    2  20  40   11612.7     0.3   10   12454.6     7.5   10911.7     6.1
WEAK    2  20  60   18597.6   257.5    6   18788.4     1.0   18032.3     3.1
WEAK    5  10  20    5430.4     0.0   10    6162.7    13.9    4385.9    19.0
WEAK    5  10  40   12061.4     2.7   10   12459.1     3.3   11135.5     7.7
WEAK    5  10  60   18540.4   156.3   10   18793.3     1.4   18090.3     2.5
WEAK    5  20  20    2957.0     0.0   10    6158.3   110.1    2483.6    15.7
WEAK    5  20  40   11907.5     0.2   10   12454.6     8.5    9659.3    16.2
WEAK    5  20  60   18447.8    39.3   10   18788.4     1.9   16859.6     8.6
STRONG  2  10  20    7134.4     0.0   10    7960.9    11.8    6596.9     7.5
STRONG  2  10  40   15378.8   279.1    9   15564.3     1.2   15062.8     2.1
STRONG  2  10  60   23163.3      –     0   23303.6     0.6   23018.8     0.6
STRONG  2  20  20    4150.3     0.0   10    7954.3    99.9    3799.6     7.5
STRONG  2  20  40   14891.9     0.8   10   15557.7     4.5   14252.2     4.3
STRONG  2  20  60   23098.0   345.1    3   23296.3     0.9   22478.1     2.7
STRONG  5  10  20    6978.5     0.0   10    7960.9    14.2    5766.4    17.3
STRONG  5  10  40   15146.6     8.1   10   15564.3     2.8   14273.6     5.8
STRONG  5  10  60   23006.1   363.7    1   23303.6     1.3   22483.5     2.3
STRONG  5  20  20    4150.3     0.0   10    7954.3    99.9    3579.8    12.9
STRONG  5  20  40   14781.3     0.5   10   15557.7     5.3   12677.4    14.2
STRONG  5  20  60   22932.8   151.2    4   23296.3     1.6   21369.3     6.8

ErrL $= 100 (z^H - \underline{z})/z^H$. The CPU time for computing these bounds was far less than one second for these instances, and thus negligible.

From this table, we see that MKAP can be solved easily to optimality by MIP solvers if instances are as small as $n \le 40$. For larger values of $n$ we often encounter difficulty, irrespective of the correlation type and the values of $K$ and $m$. It is clear that the quality of the heuristic solution is unsatisfactory for these SMALL instances, with relative errors sometimes higher than 100%. In particular, the upper bound is far from the optimal value in instances with $n = 20$, although errors decrease rapidly as $n$ increases. The relative error of the lower bound remains below a few dozen percent for all values of $n$, and decreases as $n$ increases.

4.3. Granularity effect

To investigate the behavior of the heuristic algorithm from small to large $n$, Table 2 gives the gap between the bounds (absolute error $= \bar{z} - \underline{z}$) and their relative error (Err $= 100 (\bar{z} - \underline{z})/\bar{z}$), again as the average over 10 randomly generated instances for some pairs of $K$ and $m$, as $n$ increases from 20 to 1000 with $\rho = 0.5$ fixed. We observe that the gap ($\bar{z} - \underline{z}$) decreases rapidly with the increase of $n$, and this decrease is even more pronounced in the relative errors. This strength of the approximation algorithm can be further ascertained in Tables 3 and 4, where we give the results for larger instances.

We may attribute this strength for larger instances to a sort of 'granularity effect,' as explained below. We note that in our


Table 2
Granularity effect (ρ = 0.5). Each entry gives the gap $\bar{z} - \underline{z}$ and the relative error Err(%).

K  m   n      UNCOR: z̄−z̲  Err(%)    WEAK: z̄−z̲  Err(%)    STRONG: z̄−z̲  Err(%)
2  10    20      1083.95   16.28      1101.01   22.41       1363.99   20.85
2  10    40       594.71    3.77       525.62    4.43        501.48    3.33
2  10    60       235.94    0.97       134.28    0.72        284.83    1.23
2  10    80       131.02    0.39        96.93    0.39        233.73    0.76
2  10   100       136.01    0.34        74.48    0.24        252.03    0.65
2  10   200        91.37    0.11        53.63    0.09        209.38    0.27
2  10   400        64.87    0.04        37.74    0.03        160.49    0.10
2  10   600        49.96    0.02        22.32    0.01        197.10    0.08
2  10   800        39.94    0.01        20.62    0.01        169.46    0.05
2  10  1000        40.40    0.01        19.52    0.01        200.70    0.05
5  20    20      4240.88  121.99      3673.68  153.40       4374.54  129.38
5  20    40      3127.84   24.04      2831.35   30.30       2880.33   22.77
5  20    60      1856.53    8.18      1928.88   11.48       1926.98    9.05
5  20    80      1217.71    3.88      1284.00    5.43       1503.61    5.09
5  20   100      1066.21    2.70       696.34    2.30        800.57    2.09
5  20   200       356.02    0.44       239.69    0.39        562.18    0.73
5  20   400       310.38    0.19       174.70    0.14        549.72    0.35
5  20   600       252.75    0.10       121.99    0.07        515.94    0.22
5  20   800       199.91    0.06       102.85    0.04        509.39    0.16
5  20  1000       161.29    0.04        92.77    0.03        539.82    0.14

experiments the knapsack capacity was determined by (17), which means $c_i = O(n)$ for all $i \in M$. Since $w_j$ and $p_j$ do not increase commensurately with $n$, for large $n$ we have an MKAP with

Table 3
Heuristics for LARGE (UNCOR).

ρ     K    m    n     z̄ (10^6)   z̄−z̲     Err(%)  CPU
0.25  50   200  4000   1.1530    2312.0    0.20   0.09
0.25  50   200  8000   2.3087    1615.2    0.07   0.10
0.25  50   400  4000   1.1522    2999.0    0.26   0.19
0.25  50   400  8000   2.3090    1184.0    0.05   0.21
0.25  50   800  4000   1.1355   19493.2    1.72   0.43
0.25  50   800  8000   2.3088    1180.8    0.05   0.50
0.25  100  200  4000   1.1474    7989.2    0.70   0.24
0.25  100  200  8000   2.3040    6305.5    0.27   0.20
0.25  100  400  4000   1.1428   12402.4    1.09   0.56
0.25  100  400  8000   2.3061    4100.2    0.18   0.60
0.25  100  800  4000   1.1158   39160.1    3.51   1.15
0.25  100  800  8000   2.3044    5559.3    0.24   1.46
0.50  50   200  4000   1.6238    2120.2    0.13   0.10
0.50  50   200  8000   3.2507    1376.3    0.04   0.10
0.50  50   400  4000   1.6240    1810.3    0.11   0.21
0.50  50   400  8000   3.2509    1040.0    0.03   0.24
0.50  50   800  4000   1.6188    6863.2    0.42   0.52
0.50  50   800  8000   3.2509     878.7    0.03   0.53
0.50  100  200  4000   1.6187    7129.8    0.44   0.20
0.50  100  200  8000   3.2424    9670.3    0.30   0.21
0.50  100  400  4000   1.6196    6193.9    0.38   0.62
0.50  100  400  8000   3.2483    3626.4    0.11   0.50
0.50  100  800  4000   1.6057   19934.2    1.24   1.52
0.50  100  800  8000   3.2485    3302.0    0.10   1.45
0.75  50   200  4000   1.9038    1569.0    0.08   0.09
0.75  50   200  8000   3.8122    1212.6    0.03   0.10
0.75  50   400  4000   1.9040    1319.9    0.07   0.23
0.75  50   400  8000   3.8124     902.6    0.02   0.20
0.75  50   800  4000   1.9035    1742.6    0.09   0.52
0.75  50   800  8000   3.8124     848.2    0.02   0.57
0.75  100  200  4000   1.8945   10828.8    0.57   0.23
0.75  100  200  8000   3.7953   18067.7    0.48   0.18
0.75  100  400  4000   1.9010    4310.0    0.23   0.52
0.75  100  400  8000   3.8104    2995.7    0.08   0.52
0.75  100  800  4000   1.8977    7549.8    0.40   1.45
0.75  100  800  8000   3.8107    2622.0    0.07   1.53

relatively 'small' items. In such a circumstance, each knapsack will be packed with many small objects, and it is natural to conjecture that objects of small weight (relative to the knapsack capacities) can be packed to near capacity in many different ways, whereas when object weights are large relative to capacity, heuristic packings may end up far from capacity. This fact is obvious for the ordinary 0–1 knapsack problem, and we observe a similar phenomenon for MKAP.

4.4. Heuristics for LARGE

Tables 3 and 4 summarize the results of the computation for LARGE instances, again as averages over 10 randomly generated instances. Table 3 is for the UNCOR case with various values of ρ, while Table 4 is for ρ = 0.5 with the correlation type varied. The observations from Table 3 are:

– The heuristic algorithm described in Section 3 gives quite accurate approximate solutions for LARGE instances in small CPU time, irrespective of the values of ρ, K, and m within the range of experiments tested. Indeed, except for a few instances, relative errors are far less than one percent, and the CPU time is less than one second.
– The accuracy of the solutions and the CPU time are relatively insensitive to the parameter ρ.

From Table 4, we observe the following.

Table 4
Heuristics for LARGE (ρ = 0.5).

Type    K    m    n     z̄ (10^6)   z̄−z̲     Err(%)  CPU
UNCOR   50   200  4000   1.6238    2120.2    0.13   0.09
UNCOR   50   200  8000   3.2507    1376.3    0.04   0.10
UNCOR   50   400  4000   1.6240    1810.3    0.11   0.22
UNCOR   50   400  8000   3.2509    1040.0    0.03   0.24
UNCOR   50   800  4000   1.6188    6863.2    0.42   0.53
UNCOR   50   800  8000   3.2509     878.7    0.03   0.53
UNCOR   100  200  4000   1.6187    7129.8    0.44   0.20
UNCOR   100  200  8000   3.2424    9670.3    0.30   0.22
UNCOR   100  400  4000   1.6196    6193.9    0.38   0.63
UNCOR   100  400  8000   3.2483    3626.4    0.11   0.50
UNCOR   100  800  4000   1.6057   19934.2    1.24   1.54
UNCOR   100  800  8000   3.2485    3302.0    0.10   1.48
WEAK    50   200  4000   1.2482    1202.7    0.10   0.10
WEAK    50   200  8000   2.4981     920.9    0.04   0.09
WEAK    50   400  4000   1.2483    1026.3    0.08   0.21
WEAK    50   400  8000   2.4983     671.6    0.03   0.21
WEAK    50   800  4000   1.2442    4921.5    0.40   0.52
WEAK    50   800  8000   2.4983     528.5    0.02   0.57
WEAK    100  200  4000   1.2449    4519.5    0.36   0.19
WEAK    100  200  8000   2.4943    4787.0    0.19   0.22
WEAK    100  400  4000   1.2456    3729.6    0.30   0.58
WEAK    100  400  8000   2.4966    2353.6    0.09   0.48
WEAK    100  800  4000   1.2347   14504.4    1.17   1.32
WEAK    100  800  8000   2.4968    2019.7    0.08   1.68
STRONG  50   200  4000   1.5610    5014.5    0.32   0.12
STRONG  50   200  8000   3.1278    5012.8    0.16   0.16
STRONG  50   400  4000   1.5611    4740.7    0.30   0.22
STRONG  50   400  8000   3.1282    4564.2    0.15   0.29
STRONG  50   800  4000   1.5582    7420.0    0.48   0.52
STRONG  50   800  8000   3.1281    4376.4    0.14   0.61
STRONG  100  200  4000   1.5562    9816.5    0.63   0.21
STRONG  100  200  8000   3.1219   10981.3    0.35   0.25
STRONG  100  400  4000   1.5558   10078.1    0.65   0.52
STRONG  100  400  8000   3.1227    9998.4    0.32   0.55
STRONG  100  800  4000   1.5466   19015.2    1.23   1.39
STRONG  100  800  8000   3.1225    9967.1    0.32   1.87

– For the values of K, m, and n tested, the algorithm remains efficient, irrespective of the correlation type of the problem.

4.5. Sensitivity analysis

So far, in the experiments we have assumed that $w_j$ (and $p_j$) is distributed over $[1, R]$. Table 5 gives the result of a sensitivity analysis with respect to this range; i.e., we compare the cases of $R = 10^2$, $10^3$, and $10^4$. Although the absolute error increases commensurately with $R$, the algorithm stably produces approximate solutions with relative errors of at most a few percent, irrespective of the correlation type of the instance.

4.6. BINARY instances

Finally, Table 6 compares the results of BINARY against those of UNCOR. In both of these types, absolute errors decrease monotonically with the increase of ρ. This may be explained as follows. As ρ increases, we have knapsacks with capacities sufficiently large to accept almost all items of higher relative efficiency ($p_j / w_j$), irrespective of the specific assignment of knapsacks to subsets. The remaining capacities will be filled with items of lower efficiency, but this does not make a big difference in the objective values. This is especially significant in BINARY with ρ = 0.75, where the knapsacks will include all items with $p_j = 100$ (and, in addition, some with $p_j = 1$). Absolute errors are also smaller for larger $n$, as we observed in Table 2.

Please cite this article in press as: Kataoka, S., & Yamada, T. Upper and lower bounding procedures for the multiple knapsack assignment problem. European Journal of Operational Research (2014), http://dx.doi.org/10.1016/j.ejor.2014.02.014

Table 5
Sensitivity analysis on range R (ρ = 0.5). Each entry gives the gap $\bar{z} - \underline{z}$ and the relative error Err(%).

Type    K    m    n     R=10^2: z̄−z̲  Err(%)   R=10^3: z̄−z̲  Err(%)   R=10^4: z̄−z̲   Err(%)
UNCOR   50   200  4000      193.63  0.12      2120.18  0.13      20428.25  0.13
UNCOR   50   200  8000      157.08  0.05      1376.29  0.04      14052.61  0.04
UNCOR   50   400  4000      153.95  0.10      1810.29  0.11      21916.58  0.14
UNCOR   50   400  8000      125.92  0.04      1039.96  0.03      11074.60  0.03
UNCOR   50   800  4000      147.30  0.09      6863.24  0.42     108477.65  0.67
UNCOR   50   800  8000      100.52  0.03       878.68  0.03      12284.92  0.04
UNCOR   100  200  4000      646.93  0.40      7129.78  0.44      74577.15  0.46
UNCOR   100  200  8000      951.58  0.29      9670.29  0.30      96740.71  0.30
UNCOR   100  400  4000      518.75  0.32      6193.89  0.38      68344.58  0.42
UNCOR   100  400  8000      387.52  0.12      3626.36  0.11      39187.90  0.12
UNCOR   100  800  4000      764.70  0.48     19934.24  1.24     216015.95  1.35
UNCOR   100  800  8000      317.52  0.10      3301.98  0.10      40195.22  0.12
WEAK    50   200  4000      122.53  0.10      1202.68  0.10      12582.79  0.10
WEAK    50   200  8000       85.84  0.03       920.87  0.04       9418.33  0.04
WEAK    50   400  4000       96.19  0.08      1026.30  0.08      12790.90  0.10
WEAK    50   400  8000       78.26  0.03       671.61  0.03       6272.78  0.03
WEAK    50   800  4000       80.82  0.07      4921.50  0.40      76057.93  0.61
WEAK    50   800  8000       61.67  0.03       528.46  0.02       6282.83  0.03
WEAK    100  200  4000      433.93  0.35      4519.48  0.36      45155.59  0.36
WEAK    100  200  8000      422.24  0.17      4786.97  0.19      48277.63  0.19
WEAK    100  400  4000      320.19  0.26      3729.60  0.30      41759.80  0.33
WEAK    100  400  8000      257.06  0.10      2353.61  0.09      24149.58  0.10
WEAK    100  800  4000      477.62  0.39     14504.40  1.17     165116.33  1.34
WEAK    100  800  8000      188.27  0.08      2019.66  0.08      22607.33  0.09
STRONG  50   200  4000     4478.89  0.68      5014.47  0.32       6629.07  0.06
STRONG  50   200  8000     4904.48  0.37      5012.75  0.16       5822.42  0.03
STRONG  50   400  4000     3987.56  0.60      4740.71  0.30       6750.11  0.06
STRONG  50   400  8000     4514.68  0.34      4564.22  0.15       5778.31  0.03
STRONG  50   800  4000     3233.27  0.49      7420.03  0.48      16087.23  0.15
STRONG  50   800  8000     4120.23  0.31      4376.42  0.14       6845.98  0.03
STRONG  100  200  4000     9196.19  1.40      9816.47  0.63      13675.57  0.13
STRONG  100  200  8000    10747.48  0.82     10981.25  0.35      11588.82  0.05
STRONG  100  400  4000     8176.16  1.25     10078.11  0.65      16197.41  0.15
STRONG  100  400  8000     8739.48  0.66      9998.42  0.32      13651.21  0.06
STRONG  100  800  4000     8476.77  1.30     19015.23  1.23      41291.43  0.39
STRONG  100  800  8000     7865.53  0.60      9967.12  0.32      13618.48  0.06

Table 6
BINARY results compared against UNCOR. Each entry gives the gap $\bar{z} - \underline{z}$ and the relative error Err(%).

Type    K    m    n     ρ=0.25: z̄−z̲  Err(%)   ρ=0.50: z̄−z̲  Err(%)   ρ=0.75: z̄−z̲  Err(%)
BINARY  50   200  4000     2244.94  1.617      1167.04  0.588        24.26  0.012
BINARY  50   200  8000     1865.35  0.664      1150.11  0.288        24.83  0.006
BINARY  50   400  4000     3654.10  2.658      1315.79  0.664        22.02  0.011
BINARY  50   400  8000     1586.21  0.564       729.88  0.182        23.59  0.006
BINARY  50   800  4000     8626.13  6.520      4926.44  2.532        27.93  0.014
BINARY  50   800  8000     2669.00  0.954       789.86  0.198        22.11  0.006
BINARY  100  200  4000     4630.54  3.394      3188.64  1.625        71.86  0.036
BINARY  100  200  8000     4100.25  1.472      4553.01  1.156        77.23  0.019
BINARY  100  400  4000     8178.70  6.156      3749.39  1.919        50.12  0.025
BINARY  100  400  8000     3970.91  1.425      2181.98  0.549        47.99  0.012
BINARY  100  800  4000    12208.23  9.482      9018.14  4.738        70.73  0.035
BINARY  100  800  8000     7362.20  2.676      2745.76  0.692        43.01  0.011
UNCOR   50   200  4000     2311.96  0.201      2120.18  0.131      1568.87  0.082
UNCOR   50   200  8000     1615.15  0.070      1376.29  0.042      1212.60  0.032
UNCOR   50   400  4000     2998.98  0.260      1810.29  0.112      1319.92  0.069
UNCOR   50   400  8000     1183.98  0.051      1039.96  0.032       902.58  0.024
UNCOR   50   800  4000    19493.15  1.717      6863.24  0.424      1742.57  0.092
UNCOR   50   800  8000     1180.80  0.051       878.68  0.027       848.23  0.022
UNCOR   100  200  4000     7989.16  0.697      7129.78  0.441     10828.77  0.572
UNCOR   100  200  8000     6305.45  0.274      9670.29  0.298     18067.70  0.477
UNCOR   100  400  4000    12402.38  1.086      6193.89  0.383      4310.02  0.227
UNCOR   100  400  8000     4100.18  0.178      3626.36  0.112      2995.68  0.079
UNCOR   100  800  4000    39160.05  3.510     19934.24  1.241      7549.77  0.398
UNCOR   100  800  8000     5559.30  0.241      3301.98  0.102      2622.03  0.069

Please cite this article in press as: Kataoka, S., & Yamada, T. Upper and lower bounding procedures for the multiple knapsack assignment problem. European Journal of Operational Research (2014), http://dx.doi.org/10.1016/j.ejor.2014.02.014


Except for the case of q = 0.75, relative errors are larger for BINARY than for UNCOR, but throughout the experiments they were always below a few percent. Indeed, no significant differences, in either solution accuracy or computing time, were observed between BINARY and the other instance types.

5. Concluding remarks

We have formulated MKAP, and developed a heuristic algorithm that solves this problem approximately, but very quickly. For small instances the quality of the solutions produced remains poor; such instances may be better handled by MIP solvers, although exact solutions are hard to obtain by that approach as well. Through numerical experiments we found our algorithm promising for larger instances, and we discussed this strength of the heuristic algorithm in relation to the granularity effect of the knapsack problem.

Finally, we mention the cost of assignments, which has been ignored in our formulation of the problem. It is natural to consider that different knapsack-subset pairs would incur different assignment costs. Thus, instead of (1), the objective function may be modified as

z(x, y) := \sum_{i=1}^{m} \sum_{k=1}^{K} \sum_{j \in N_k} p_j x_{ij} - \sum_{i=1}^{m} \sum_{k=1}^{K} d_{ik} y_{ik},

where d_{ik} is the cost of allocating knapsack i to subset N_k. In addition, we may introduce some constraints on (y_{ik}). Unfortunately, the theorems given in this paper are no longer valid in this extended framework, and the heuristic algorithm based on these theorems is inapplicable. We need different approaches to explore such an important issue related to MKAP, and we leave this as a future research direction.

Acknowledgments

The authors are grateful to the editor and anonymous referees for their constructive comments, which helped us improve the content and presentation of this work.
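As a concrete illustration of the extended objective with assignment costs, the sketch below evaluates z(x, y) from given decision variables. The function name and all numerical data are hypothetical, chosen only to make the formula's two terms explicit; this is not part of the paper's algorithm.

```python
def extended_objective(p, d, x, y):
    """Evaluate z(x, y) = sum_{i,j} p_j * x_ij - sum_{i,k} d_ik * y_ik.

    p : list of item profits (length n)
    d : m-by-K matrix, d[i][k] = cost of allocating knapsack i to subset N_k
    x : m-by-n 0/1 matrix, x[i][j] = 1 if item j is packed into knapsack i
    y : m-by-K 0/1 matrix, y[i][k] = 1 if knapsack i is allocated to N_k
    """
    m, n, K = len(x), len(p), len(d[0])
    # Total profit of packed items (the triple sum over j in N_k collapses
    # to a sum over all items, since the subsets N_k partition the item set).
    profit = sum(p[j] * x[i][j] for i in range(m) for j in range(n))
    # Total assignment cost of knapsack-subset allocations.
    cost = sum(d[i][k] * y[i][k] for i in range(m) for k in range(K))
    return profit - cost

# Tiny hypothetical example: 2 knapsacks, 2 subsets, 3 items.
z = extended_objective(
    p=[10, 7, 5],
    d=[[3, 4], [2, 6]],
    x=[[1, 0, 0], [0, 1, 1]],
    y=[[1, 0], [0, 1]],
)  # profit 22, cost 9  ->  z = 13
```

With (d_ik) = 0 this reduces to the original objective (1), which is why the bounding theorems of the paper address only the profit term.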

References

Burkard, R. E., Dell'Amico, M., & Martello, S. (2009). Assignment problems. Philadelphia: SIAM.
Chekuri, C., & Khanna, S. (2006). A polynomial time approximation scheme for the multiple knapsack problem. SIAM Journal on Computing, 35, 713–728.
Dawande, M., Kalagnanam, J., Keskinocak, P., Ravi, R., & Salman, F. S. (2000). Approximation algorithms for the multiple knapsack problem with assignment restrictions. Journal of Combinatorial Optimization, 4, 171–186.
Fisher, M. (1981). The Lagrangian relaxation method for solving integer programming problems. Management Science, 27, 1–18.
Gurobi Optimizer 5.0 (2012). (2012.10).
Ibaraki, T., & Katoh, N. (1988). Resource allocation problems: Algorithmic approaches. Cambridge, MA: MIT Press.
Kellerer, H., Pferschy, U., & Pisinger, D. (2004). Knapsack problems. Berlin: Springer.
Kuhn, H. W. (2005). The Hungarian method for the assignment problem. Naval Research Logistics, 52, 7–27.
Lalami, M. E., Elkihel, M., Baz, D. E., & Boyer, V. (2012). A procedure-based heuristic for 0–1 multiple knapsack problems. International Journal of Mathematics in Operational Research, 4, 214–224.
Martello, S., & Toth, P. (1990). Knapsack problems: Algorithms and computer implementations. Chichester: John Wiley & Sons.
Pentico, D. W. (2007). Assignment problems: A golden anniversary survey. European Journal of Operational Research, 176, 774–793.
Pisinger, D. (1999). An exact algorithm for large multiple knapsack problems. European Journal of Operational Research, 114, 528–541.
Wolsey, L. (1998). Integer programming. New York: John Wiley & Sons.
Yamada, T., & Takeoka, T. (2009). An exact algorithm for the fixed-charge multiple knapsack problem. European Journal of Operational Research, 192, 700–705.
