A branch-and-bound algorithm for the singly constrained assignment problem


European Journal of Operational Research 176 (2007) 151–164 www.elsevier.com/locate/ejor

Discrete Optimization

P.M.D. Lieshout¹, A. Volgenant*

Operations Research Group, Faculty of Economics and Econometrics, University of Amsterdam, Roetersstraat 11, 1018 WB Amsterdam, The Netherlands

Received 25 October 2004; accepted 19 May 2005; available online 7 October 2005

Abstract

The singly constrained assignment problem (SCAP) is a linear assignment problem (LAP) with one extra side constraint, e.g., due to a time restriction. The SCAP is, in contrast to the LAP, difficult to solve. A branch-and-bound algorithm is presented to solve the SCAP to optimality. Lower bounds are obtained by Lagrangean relaxation. Computational results show that the algorithm is able to solve different types of SCAP instances up to size n = 1000 within short running times on a standard personal computer.
© 2005 Elsevier B.V. All rights reserved.

Keywords: Linear assignment; Lagrangean relaxation; Branch-and-bound; Side constraint

1. Introduction

The linear assignment problem (LAP for short) is a well-studied problem in the optimization literature. Many applications are known in real life, such as facility location, personnel scheduling and task assignment. Besides these practical applications, the LAP also arises as a sub-problem of difficult combinatorial problems, such as the traveling salesman problem (TSP) and the quadratic assignment problem (QAP). Efficient polynomial algorithms to solve the LAP can be found in the literature, see, e.g., Dell'Amico and Toth (2000). Many real-life problems, however, consist of a LAP complicated by an extra side constraint. For example, in job scheduling under a minimal cost criterion, there can also be an amount of time associated with each assignment, while the time to complete the schedule is limited.

* Corresponding author. Tel.: +31 20 525 4219; fax: +31 20 525 4349. E-mail address: [email protected] (A. Volgenant).
¹ Now at CWI, Centre for Mathematics and Computer Science, Amsterdam, The Netherlands.

0377-2217/$ - see front matter © 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.ejor.2005.05.028


The singly constrained assignment problem (SCAP) is formulated as:

$$\min \sum_{i \in N} \sum_{j \in N} c_{ij} x_{ij} \tag{1}$$

subject to

$$\sum_{j \in N} x_{ij} = 1, \quad i \in N, \tag{2}$$

$$\sum_{i \in N} x_{ij} = 1, \quad j \in N, \tag{3}$$

$$x_{ij} \in \{0, 1\}, \quad i, j \in N, \tag{4}$$

$$\sum_{i \in N} \sum_{j \in N} r_{ij} x_{ij} \le b, \tag{5}$$

where N = {1, 2, ..., n}. Here the interpretation can be used that xij = 1 implies that worker i is assigned to job j at a cost of cij, that rij is the resource usage when worker i is assigned to job j, and that b is the available capacity of the resource. We denote Eqs. (2)–(4) as the LAP constraints and Eq. (5) as the resource constraint. Without loss of generality, we assume the coefficients cij, rij and b to be non-negative integers. Rational values can be handled by multiplying them by a suitable factor to transform them into integer values. If the resource constraint must satisfy ≥ b, then it can easily be verified that the problem transforms to the SCAP by choosing rij := rmax − rij and b := n·rmax − b, with rmax the largest rij value. The SCAP can be regarded as a knapsack problem (KP) complicated by Eqs. (2) and (3) (Garey and Johnson, 1979). As the KP is NP-complete, the SCAP is at least as hard as NP-complete. It is thus very unlikely that a fully polynomial algorithm exists to solve the SCAP. In special cases, though, the SCAP is polynomially solvable, e.g.,

(1) The solution (x11 = 1, x22 = 1, ..., xnn = 1) is optimal, if the costs matrix c = ((cij)) and the resource matrix r = ((rij)) are (or can be permuted to) distribution matrices (Van der Veen et al., 1991), i.e.,

$$c_{ij} + c_{i+1,j+1} \le c_{i,j+1} + c_{i+1,j} \quad \text{and} \quad r_{ij} + r_{i+1,j+1} \le r_{i,j+1} + r_{i+1,j}, \quad \text{for all } i, j \in N \setminus \{n\}. \tag{6}$$

(2) All (n!) assignments are optimal, if either the cij or the rij values (or both) can be written as

$$c_{ij} = u_i + v_j \quad \text{or} \quad r_{ij} = \xi_i + \rho_j, \quad \text{for all } i, j \in N. \tag{7}$$

The recognition problem of case (1) is solvable in O(n²) and that of case (2) in O(n³). Gupta and Sharma (1981), Aggarwal (1985) and Mazzola and Neebe (1986) propose branch-and-bound algorithms to solve the SCAP to optimality. The latter authors report the best computational results on a variety of problem instances up to size n = 100. Their algorithm employs a depth first branching strategy, and at each node of the search tree lower bounds are obtained by sub-gradient optimization. They also solved problem instances with more than one side constraint. Rosenwein (1991) utilizes the algorithm of Mazzola and Neebe, but with Lagrangean decomposition (LD) instead of sub-gradient optimization. LD decomposes a 0–1 integer program into two (or more) sub-problems, each containing at least one of the original sets of constraints. One (or more) identical copies of the vector of decision variables are created, so that each sub-problem contains a unique vector of decision variables. Equality constraints are introduced to enforce these copies to be equal to the original variables; these equality constraints, which link the sub-problems, are dualized into the objective function. For the SCAP, LD consists of a LAP and a single 0–1 KP, whereas Lagrangean relaxation (LR) only consists of a LAP.
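For illustration, the transformation of a ≥-type resource constraint into the ≤-form of Eq. (5) can be sketched as follows (a sketch, not the authors' code; plain integer matrices are assumed):

```python
def to_leq_form(r, b):
    """Rewrite sum(r_ij * x_ij) >= b as an equivalent <= constraint.

    Any assignment selects exactly n cells, so with r'_ij = rmax - r_ij
    we get sum(r'_ij * x_ij) = n * rmax - sum(r_ij * x_ij), and the
    constraint becomes sum(r'_ij * x_ij) <= n * rmax - b.
    """
    n = len(r)
    rmax = max(max(row) for row in r)
    r_new = [[rmax - v for v in row] for row in r]
    return r_new, n * rmax - b
```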


Kennington and Mohammadi (1994) propose a heuristic algorithm, which contains no branch-and-bound, but is based on bisection search, solving a series of LAP instances. We describe in this paper a faster branch-and-bound algorithm for solving the SCAP to optimality. To this end, we show how lower bounds can be obtained for the SCAP in Section 2. The algorithm is described in Section 3 and improvements to it are discussed in Section 4. In Section 5, an illustrative example is solved, and computational results are given in Section 6. These results are about 5–10 times faster than the best-known results. Finally, we give conclusions.

2. Lagrangean relaxation

The Lagrangean relaxation (LR) of the SCAP is defined as a LAP:

$$z_{LR}(\lambda) = \min \sum_{i \in N} \sum_{j \in N} (c_{ij} + \lambda r_{ij}) x_{ij} - \lambda b \tag{8}$$

subject to the LAP constraints, where λ is a non-negative scalar. It is well known that zLR(λ) is a lower bound on the optimal value z* of the SCAP and that it is a piecewise linear concave function (Fisher, 1981). The bound zLR(λ) depends on the value of the multiplier, and therefore the best lower bound on z* is obtained by solving the Lagrangean dual:

$$z_{LB} = \max_{\lambda \ge 0} z_{LR}(\lambda). \tag{9}$$

Assume for the moment that Eq. (9) has a unique optimal solution, and that λ* is the optimal multiplier value. Let x(λ) denote the assignment corresponding to multiplier value λ, and let g = Σ_{i∈N} Σ_{j∈N} rij xij(λ) − b; then for all λ < λ* (λ > λ*) we have g > 0 (g < 0). Only if g > 0 is the assignment x(λ) infeasible. If zLR(λ) has two or more optimal solutions, then there exists a closed interval of λ values for which g = 0. We propose an LR method (to solve the Lagrangean dual) that is a mix of the method of Volgenant and Jonker (1982) and bisection search. Let λLB (λUB) denote a lower (upper) bound on λ*. Start with t = 0 and a multiplier value λ0 obeying 0 ≤ λLB ≤ λ0 ≤ λUB, and iterate to obtain new multiplier values λt+1 as follows:

LR:
If gt < 0 then set λUB := λt else if gt > 0 then set λLB := λt;
Set λt+1 := max{0, λt + α πt gt + β πt gt−1};
If λt+1 > λUB then set λt+1 := (λLB + 3λUB)/4 else if λt+1 < λLB then set λt+1 := (3λLB + λUB)/4,

where α and β are scalars, and πt is a decreasing positive scalar according to a geometric series. Thus, λt is initially updated according to the method of Volgenant and Jonker; if λt+1 < λLB (> λUB), then it is unattractive to continue this way and we switch to a bisection-type step. Our update scheme terminates if gt+1 = 0, as then λt+1 = λ* and thus x(λt+1) is optimal, or when a maximum number of multiplier updates has been performed. The choice of the values for α, β, πt, λ0 and the maximum number of updates depends on the distribution of the underlying data. These parameter values were obtained after computational experiments (see Section 6).
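One iteration of this update scheme can be sketched as follows (a sketch, not the authors' Pascal implementation; the caller is assumed to compute the resource surplus g from the LAP solution at the current multiplier):

```python
def update_multiplier(lam, lam_lb, lam_ub, g_t, g_prev, pi_t,
                      alpha=0.6, beta=0.4):
    """One step of the mixed Volgenant-Jonker / bisection scheme.

    g_t < 0 means the multiplier is too large (it becomes a new upper
    bound), g_t > 0 too small (a new lower bound). The weighted-gradient
    step is replaced by a bisection-like step whenever it leaves the
    interval [lam_lb, lam_ub]."""
    if g_t < 0:
        lam_ub = lam
    elif g_t > 0:
        lam_lb = lam
    nxt = max(0.0, lam + alpha * pi_t * g_t + beta * pi_t * g_prev)
    if nxt > lam_ub:
        nxt = (lam_lb + 3.0 * lam_ub) / 4.0
    elif nxt < lam_lb:
        nxt = (3.0 * lam_lb + lam_ub) / 4.0
    return nxt, lam_lb, lam_ub
```

The default values 0.6 and 0.4 follow the parameter choice reported in Section 6.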


The assignment corresponding to the lower bound found in the above-mentioned procedure will probably not be optimal. Therefore, LR is not sufficient to solve the SCAP, but has to be fitted into a branch-and-bound algorithm.

3. Branch-and-bound scheme

Mazzola and Neebe (1986) employ an n + 1 level search tree based on a depth first strategy, with a permanent index i(l) ∈ N for each level l ∈ N of the tree. That is, all branches emanating directly from level l involve only the variables xi(l)j for j ∈ N. We propose a similar scheme, but without permanent indices. At each node of the search tree, lower bounds are obtained by utilizing the LR procedure of Section 2 instead of sub-gradient optimization. Let us first give some notation, before the algorithm is described:

List: array, to be filled with restrictions on certain variables (xij);
Index: number of restrictions contained in List;
m: number of variables fixed at 1;
z*: optimal objective function value of the SCAP;
zLB (zUB): lower (upper) bound on z*;
⌈x⌉: smallest integer that exceeds or equals x.

Initial step
Check whether a feasible solution (xUB) exists for the SCAP; if so, calculate zUB := z(xUB), otherwise stop;
Calculate a lower bound zLB := z(xLB) on z*;
If xLB is feasible then set xUB := xLB, zUB := zLB and go to the final step;
Set List := ∅, Index := 0.

Bounding step
Use LR to obtain zLB := ⌈zLR(λ*)⌉ for the sub-problem with some variables fixed at 0 or 1, i.e., the restrictions contained in List; let xLB denote the corresponding assignment;
If xLB is feasible and z(xLB) < zUB, then set zUB := z(xLB) and xUB := xLB;
If zLB ≥ zUB then go to the backtracking step.

Forward move
Sort the n variables of xLB in increasing order of cij and break ties in increasing order of rij; let xi(1)j(1), xi(2)j(2), ..., xi(n)j(n) denote the sorted variables;
For each h = 1, 2, ..., n, add the restriction xi(h)j(h) := 1 to List, if the restriction is not already contained in List;
Set Index := Index + (n − m).

Backtracking step
If Index = 1 go to the final step;
Set List[Index] := ∅ and Index := Index − 1;
If List[Index] contains a restriction with a variable fixed at 1, then change this restriction to fixed at 0, otherwise repeat the backtracking step;
If all the variables in a row or a column are fixed at 0, then repeat the backtracking step, otherwise return to the bounding step.


Final step
Solution xUB is optimal with objective function value zUB.

In the initial step, we can identify whether the SCAP has any feasible solutions by determining an assignment that minimizes the resource usage. If this minimum exceeds the capacity, then no feasible solution exists. Solving the LAP Eqs. (1)–(4) gives a solution that is also optimal to the SCAP if it satisfies the resource constraint.

Maintaining List corresponds to a search tree in which each problem branches into two sub-problems; see Fig. 1. If zLB < zUB after LR for some problem, then a better solution may exist. We try to find this solution by branching the problem into sub-problems. We employ a depth first strategy, that is, all elements of xLB are fixed at 1 if they have not been fixed. From here, we try to work our way up (backtracking) the search tree. Backtracking occurs if zLB ≥ zUB after LR, because then no better solution exists for a sub-problem. In this way, all assignments are systematically evaluated and an optimal solution will be found.

A variable with small costs and low resource usage has more potential to be in the optimal solution than a variable with high costs and high resource usage. However, if the capacity of the resource (b) is large, high resource usage is of less importance. The following rules are proposed to determine the potential of the variables contained in xLB, by sorting according to an:

• increasing order of cij, with ties sorted in increasing order of rij;
• increasing order of cij;
• increasing order of rij;
• increasing order of cij rij.
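The first rule can be sketched as a sort key on (cost, resource) pairs (a sketch, not the authors' implementation; the data are the reduced values of the 4 × 4 example in Section 5):

```python
# Branching rule 1: increasing cost, ties broken by increasing resource
# usage. (cij, rij, variable) triples from the example of Section 5.
candidates = [
    (1, 0, "x12"),
    (0, 1, "x21"),
    (1, 0, "x34"),
    (0, 2, "x43"),
]
ordered = [name for _, _, name in
           sorted(candidates, key=lambda v: (v[0], v[1]))]
# ordered == ["x21", "x43", "x12", "x34"], the order in which the
# variables are added to List in the example of Section 5
```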

Computer experiments (with identical problem instances) showed that the first rule performed best on average. We also did computational experiments with other branching rules, such as using the results of the relaxation; surprisingly, this did not lead to smaller computation times.

We conclude with a remark on the implementation. If a variable xij is fixed at 0 for some i, j ∈ N, then simply set cij = rij = M, where M is a large number. If xij is fixed at 1, then swap row i and column j with row and column (n − m) in both the c and the r matrix. In this way, the first (n − m) rows and columns are unassigned, and the last m rows and columns are assigned. Now all fixed elements are xqq with (n − m) < q ≤ n. Clearly, it is sufficient to perform LR on the first (n − m) rows and columns of the c and the r matrix. Of course, one has to keep track of the original positions of the columns and rows. The costs and the resource usage of the fixed variables also have to be maintained.

Fig. 1. The search tree corresponding to List = {x12 = 1, x31 = 1, x23 = 0}.
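The swapping of a newly fixed variable to the assigned part of the matrices can be sketched as follows (a sketch with hypothetical names, not the authors' code; 0-based indices, with perm_rows/perm_cols tracking original positions):

```python
def fix_at_one(c, r, perm_rows, perm_cols, i, j, m):
    """Fix x_ij at 1 by swapping row i and column j with row and column
    n - m (0-based index n - m - 1) in both matrices, so that the first
    n - m rows and columns remain the unassigned part."""
    k = len(c) - m - 1                           # target position
    for mat in (c, r):
        mat[i], mat[k] = mat[k], mat[i]          # swap rows i and k
        for row in mat:
            row[j], row[k] = row[k], row[j]      # swap columns j and k
    perm_rows[i], perm_rows[k] = perm_rows[k], perm_rows[i]
    perm_cols[j], perm_cols[k] = perm_cols[k], perm_cols[j]
    return m + 1                                 # one more variable fixed
```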

4. Improvements to the algorithm

We describe improvements that speed up the algorithm: (1) reduction of parameters; (2) shortening the forward move; (3) 2-opt check; (4) fathoming tests; (5) improving LR; (6) outpricing; (7) partial enumeration and (8) saving the initial lower bound.

(1) Reduction of parameters: If the original costs (resource) matrix contains a row i with high costs (resource usage) for each of the elements, then none of the jobs to which worker i could be assigned is attractive, but one of these jobs must be performed anyway. An optimal solution to the SCAP remains optimal if a constant is subtracted from any row or column of the resource matrix. After it is checked whether a feasible solution exists (by solving a LAP), the rij values are reduced by their dual row (ξi) and column (ρj) variables, i.e., rij := rij − ξi − ρj. The values of ξi and ρj are obtained together with the solution of the LAP. Subsequently, b is reduced to the value b := b − Σ_{i=1}^{n} (ξi + ρi). The same operation is applied to the costs matrix: the cij values are reduced by their dual row (ui) and column (vj) variables, which are produced when a lower bound is calculated in the initial step, so cij := cij − ui − vj. Computer experiments showed that using reduced costs as well as reduced resources accelerates the algorithm.

(2) Shortening the forward move: If zLB < zUB, then this does not directly imply that z(xLB) < zUB and that xLB is feasible, i.e., it is often sufficient to add fewer than (n − m) new restrictions to List during the forward move. It suffices to add restrictions until the resource usage of the variables fixed at 1 exceeds b or until the costs of these variables are at least as large as zUB.

(3) 2-opt check: Whenever a variable is fixed at 1, we verify whether this variable is 2-opt with all other variables already fixed at 1. A variable xi′j′ is 2-opt with xij if

$$c_{ij} + c_{i'j'} \le c_{i'j} + c_{ij'} \quad \text{and} \quad r_{ij} + r_{i'j'} < r_{i'j} + r_{ij'}, \tag{10}$$

or

$$c_{ij} + c_{i'j'} < c_{i'j} + c_{ij'} \quad \text{and} \quad r_{ij} + r_{i'j'} \le r_{i'j} + r_{ij'}. \tag{11}$$

If this is not the case, then the variable xi′j′ = 1 cannot be combined in an optimal solution together with xij = 1, and thus xi′j′ is fixed at 0 until the algorithm backtracks to a higher level.

(4) Fathoming tests: At any node in the search tree except the initial node, two fathoming tests are applied before starting LR. The first one determines the assignment with the lowest costs (solving a LAP), say d. If d ≥ zUB then the node is fathomed; otherwise the second test is executed, which determines an assignment with the lowest resource usage (solving a LAP), say c. If c > b then the node is fathomed; otherwise LR is applied.

(5) Improving LR: If within an iteration of the LR a feasible solution to the SCAP is found with an objective function value lower than zUB, then xUB and zUB are updated. Likewise, if within an iteration a lower bound is found that is at least as large as zUB, then the node can be fathomed, so LR terminates and the algorithm proceeds to the backtracking step.

(6) Outpricing: Solving the Lagrangean dual consists of solving LAP instances. The dual solution of the LAP corresponding to zLB = ⌈zLR(λ*)⌉ contains valuable information for the bounding of the set of feasible solutions and its sub-problems. The reduced cost cij + λ*rij − ui − vj is a lower bound on the increase of zLB under the additional constraint xij = 1. There is no better feasible solution in the following sub-problems if a variable xij with ⌈zLR(λ*) + cij + λ*rij − ui − vj⌉ ≥ zUB has value 1 in any assignment. Therefore, all variables with this property can be fixed at 0 in the following sub-problems. Note that variables fixed at 0 in the first sub-set remain fixed during the entire solution process.

(7) Partial enumeration: If (n − m) ≤ 4, then rather than using the bounding procedure, all remaining feasible combinations (at most (n − m)!) are evaluated, and xUB and zUB are updated if a better solution is found. If (n − m) > 4, then it requires too much computation time to evaluate all solutions, and it is better to branch the problem into sub-problems.

(8) Saving the initial lower bound: If during the branch-and-bound an upper bound is found that is equal to the initial lower bound, then an optimal solution has been found and the algorithm terminates. This phenomenon occurs often in the considered problem instances.

The given improvements result in a more effective algorithm, as will be shown in Section 6.
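The 2-opt check of improvement (3) can be sketched as follows (a sketch, not the authors' code; 0-based indices are assumed):

```python
def is_two_opt(c, r, i, j, ii, jj):
    """True if x[ii][jj] = 1 is 2-opt with x[i][j] = 1, i.e. one of
    conditions (10) or (11) holds: swapping the two assignments to
    (i, jj) and (ii, j) would not improve the pair in both cost and
    resource usage."""
    dc = c[i][j] + c[ii][jj] - (c[ii][j] + c[i][jj])
    dr = r[i][j] + r[ii][jj] - (r[ii][j] + r[i][jj])
    return (dc <= 0 and dr < 0) or (dc < 0 and dr <= 0)
```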

5. Example

We illustrate the described algorithm on a 4 × 4 example. The costs and resource matrices are given by (the optimal assignment indicated by starred elements in c and r):

$$c = \begin{pmatrix} 4 & 0^* & 3 & 9 \\ 3 & 9 & 8 & 5^* \\ 5^* & 1 & 6 & 8 \\ 5 & 2 & 4^* & 6 \end{pmatrix}, \quad r = \begin{pmatrix} 7 & 0^* & 5 & 3 \\ 6 & 6 & 0 & 8^* \\ 6^* & 6 & 9 & 2 \\ 3 & 0 & 0^* & 3 \end{pmatrix}, \quad b = 16.$$

Initial step: Check whether a feasible solution exists by determining the assignment with the smallest resource usage: xUB = (x12 = 1, x23 = 1, x34 = 1, x41 = 1); this solution has b(xUB) = 5 < b, thus it is feasible. The initial upper bound zUB = z(xUB) = 21. Determine the initial zLB value by solving Eqs. (1)–(4); zLB = 13 for xLB = (x13 = 1, x21 = 1, x32 = 1, x44 = 1) and b(xLB) = 20 > b. Now reduce c and r with their dual solutions, resulting in:

$$c = \begin{pmatrix} 4 & 0 & 7 & 3 \\ 0 & 10 & 5 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 2 & 0 & 0 \end{pmatrix}, \quad r = \begin{pmatrix} 1 & 1 & 0 & 4 \\ 1 & 4 & 0 & 6 \\ 1 & 4 & 9 & 0 \\ 0 & 0 & 2 & 3 \end{pmatrix}, \quad b = 11.$$

Adjusting values gives b = 16 − 5 = 11, zLB = 0 and zUB = 8. List is still empty, so Index := 0.

Bounding step: LR results in zLB = 1 (14 multiplier updates, zLR(λ*) = 0.46 for λ* = 0.19); the corresponding assignment xLB = (x12 = 1, x21 = 1, x34 = 1, x43 = 1) is feasible (3 < b) and z(xLB) = 2 < zUB, hence xUB = xLB and zUB = 2. The dual values hij = cij + λ*rij − ui − vj are:

$$h = \begin{pmatrix} 0.20 & 3.24 & 1.24 & 0 \\ 0 & 10.11 & 4.19 & 0.16 \\ 0 & 0.11 & 1.93 & 0 \\ 1.23 & 1.76 & 0 & 0 \end{pmatrix}.$$

All variables with hij > zUB − zLR(λ*) − 1 = 0.54 can be fixed at 0 in all sub-problems, so it is unnecessary to add these restrictions to List. Because zLB < zUB, make a forward move.


Forward move: The variables are sorted in increasing order of the cij values and ties are broken in increasing order of the rij values: c21 = c43 = 0 < c12 = c34 = 1 and r12 = r34 = 0 < r21 = 1 < r43 = 2. Hence, the variables are added to List in the order x21, x43, x12, x34. Each time that a new variable is added, swap rows and columns as explained in Section 3. Currently, Index := 4 and List := {x21 = 1, x43 = 1, x12 = 1, x34 = 1}.

Backtracking step: Delete the last restriction from List and set Index := 4 − 1 = 3. Since List[3] contains the restriction x12 = 1, change it to x12 = 0; List := {x21 = 1, x43 = 1, x12 = 0}. The c and the r matrix are now:


where # denotes a variable fixed at 0. The numbers next to and on top of the matrices give the original indices of the rows and columns, respectively. The unassigned rows and columns lie above and to the left of the dotted lines, respectively. These transformations of the c and the r matrix are for implementation purposes only, to reduce the computation times. There is no feasible assignment, since x12 and x14 are fixed at 0, so repeat the backtracking step.

Backtracking step: Set Index := 3 − 1 = 2 and List := {x21 = 1, x43 = 0}. The matrices are:


There is no unassigned row with all the elements of the unassigned columns restricted to 0, so make a bounding step.

Bounding step: It is easy to verify, without performing LR, that there is just one feasible solution at this node of the search tree: xLB = (x13, x21, x32, x44). Although z(xLB) = 0 < zUB, xUB is not updated, because b(xLB) = 15 > b. A forward move is unnecessary, since there is just one feasible assignment for the current problem; as a consequence the sub-problems can have no solutions. Therefore, go to the backtracking step.

Backtracking step: Set Index := 2 − 1 = 1 and List := {x21 = 0}. Currently, no rows and columns are assigned.


There are no rows or columns with all elements restricted to 0, so make a bounding step.

Bounding step: Once again, there is just one feasible solution at this node: xLB = (x12 = 1, x24 = 1, x31 = 1, x43 = 1). As z(xLB) = 1 < zUB, set xUB = xLB and zUB = 1. Proceed to the backtracking step.

Backtracking step: Index := 1 − 1 = 0 and List := ∅, i.e., an optimal solution is found.


Final step: The optimal assignment is (x12 = 1, x24 = 1, x31 = 1, x43 = 1), with z* = 1 and b* = 9. The corresponding original costs and resource usage are both 14. In problem instances of size n = 4 one could better evaluate all possible assignments and choose a best one, but this has not been done here, for illustration purposes.

6. Computational results

For problem instances up to size n = 1000 we give computational results and discuss the performance of the algorithm. It was implemented in Delphi 5.0 (Pascal) and tested on a personal computer with an Intel Pentium IV 2.6 GHz processor. The LAP instances were solved using the algorithm LAPJV (Jonker and Volgenant, 1987), one of the best LAP algorithms according to Dell'Amico and Toth (2000). We generated two classes of problem instances, i.e., random and negatively correlated problem instances, analogous to Mazzola and Neebe (1986). For the random problem instances, the cij and the rij values are randomly drawn from a uniform distribution on the interval [50, 150]. The right-hand side of the resource constraint is set equal to

$$b = \left\lfloor q \sum_{i \in N} \sum_{j \in N} r_{ij} / n \right\rfloor, \tag{12}$$

where ⌊x⌋ is the greatest integer less than or equal to x and q is a tightness scalar. Clearly, if q becomes smaller, the set of feasible solutions becomes smaller, and for sufficiently small q values the problem has no feasible solutions at all. For the negatively correlated problem instances, the cij values are still randomly drawn from a uniform distribution on the interval [50, 150], but a strongly negative correlation between the cij and the rij values is incorporated by calculating the rij values as follows:

$$r_{ij} = \min\{50 + c_{\max} - c_{ij} + \mathrm{uniform}[0, 10],\ 150\}, \tag{13}$$

where cmax denotes the largest cij value. Note that the cij and the rij values would be perfectly negatively correlated if the random part were not incorporated, since cor(cij, rij) = cor(cij, a − cij) = −1, where a is a scalar. Computational experiments showed that perfectly negatively correlated problem instances are easy to solve. After some computational experiments, the following parameter settings were chosen. At the initial node λ0 is set equal to 1, and at subsequent nodes of the search tree λ0 is set equal to the best multiplier λ* of the previous node. The idea behind this choice is that the problems at subsequent nodes do not differ substantially. At each node, the maximum number of iterations is 4 + ⌊10(n − m)/n⌋. In this manner, the larger the value of m, the smaller the maximum number of iterations. The best results based on computational experiments were obtained with πt = 0.0005 · (7/8)^t in the LR procedure (Section 2), so we have used

$$\lambda_{t+1} = \max\{0,\ \lambda_t + 0.0005 \cdot (7/8)^t (0.6 g_t + 0.4 g_{t-1})\}.$$

The values for α (0.6) and β (0.4) are the same as in Volgenant and Jonker (1982). Further, we initially set λLB = 0 and λUB = M, as in this way we always satisfy λLB ≤ λ0 ≤ λUB. The computational results given in Tables 1–6 are averages over 20 problem instances of the same type. In each table the first column presents the value of q. For q ≤ 0.5 the considered instances are infeasible, whereas for q ≥ 1.1 it appears that an assignment that minimizes the costs is also optimal for the SCAP. In both these cases, branch-and-bound is superfluous. The second column gives the problem size (n) and the third column presents the average number of nodes that have been evaluated for each value of n (#nodes). The average number of nodes that are fathomed before applying LR are listed in the fourth
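The generation of both instance classes, Eqs. (12) and (13), can be sketched as follows (a sketch, not the authors' generator; integer uniform draws and the seed parameter are assumptions):

```python
import random

def scap_instance(n, q, negatively_correlated=False, seed=0):
    """Generate a SCAP instance as in Section 6.

    c_ij (and, for random instances, r_ij) are uniform on [50, 150];
    negatively correlated instances compute r_ij from Eq. (13), and the
    capacity b follows Eq. (12) with tightness scalar q."""
    rng = random.Random(seed)
    c = [[rng.randint(50, 150) for _ in range(n)] for _ in range(n)]
    if negatively_correlated:
        cmax = max(max(row) for row in c)
        r = [[min(50 + cmax - cij + rng.randint(0, 10), 150)
              for cij in row] for row in c]
    else:
        r = [[rng.randint(50, 150) for _ in range(n)] for _ in range(n)]
    b = int(q * sum(map(sum, r)) / n)   # Eq. (12), floor of a positive value
    return c, r, b
```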


Table 1
Computational results for random problem instances with range 100 (times in seconds)

q     n     #nodes  #fath.b  #fath.a  #branch  #lap  Tavg   Tworst  Lgap  Ugap
0.6   200   3279    3097     151      31       1464  1.07   2.78    0.02  0.12
0.6   400   3471    3213     239      19       1633  4.39   11.25   0     0.04
0.6   600   3251    2955     285      11       1564  11.19  37.55   0     0.01
0.6   800   3566    3152     402      12       2002  22.38  63.50   0     0.01
0.6   1000  2094    1806     283      5        1302  27.83  97.17   0     0
0.7   200   1484    1372     99       13       757   0.79   2.28    0     0.04
0.7   400   1199    1048     143      8        765   2.49   7.03    0     0.01
0.7   600   593     472      119      2        511   3.87   22.91   0     0
0.7   800   157     103      54       0        231   5.20   49.33   0     0
0.7   1000  2       1        0        1        10    3.58   4.08    0.03  0
0.8   200   445     387      53       5        306   0.29   0.84    0     0.02
0.8   400   166     146      19       1        118   0.83   5.50    0     0
0.8   600   2       1        0        1        10    0.93   1.00    0.02  0
0.8   800   2       1        0        1        11    2.43   2.58    0.08  0
0.8   1000  2       1        0        1        11    4.15   4.39    0.11  0
0.9   200   186     176      9        1        101   0.15   0.69    0     0
0.9   400   2       1        0        1        9     0.27   0.33    0.01  0
0.9   600   2       1        0        1        11    1.11   1.16    0.10  0
0.9   800   2       1        0        1        11    2.44   2.94    0.15  0
0.9   1000  2       1        0        1        12    5.05   5.37    0.19  0
1.0   200   19      18       1        0        17    0.03   0.27    0     0
1.0   400   1       1        0        0        7     0.19   0.38    0.05  0
1.0   600   1       1        0        0        7     0.68   1.17    0.15  0
1.0   800   1       1        0        0        7     1.47   3.09    0.20  0
1.0   1000  1       1        0        0        7     2.34   5.42    0.26  0

Table 2
Computational results for random problem instances with range 200 (times in seconds)

q     n     #nodes  #fath.b  #fath.a  #branch  #lap   Tavg  Tworst  Lgap  Ugap
0.6   400   3574    3337     220      17       1533   5.95  19.06   0     0.03
0.7   400   2278    2093     174      11       1106   4.85  14.86   0     0.02
0.8   400   745     657      85       3        476    2.78  11.56   0     0.01
0.9   400   232     216      16       0        150    1.20  5.88    0     0
1.0   400   41      40       1        0        21     0.24  1.44    0     0

column (#fath.b). Fathomed nodes are nodes with all elements in a row or column fixed at 0, or nodes that do not pass the fathoming tests (see Section 4). The average number of nodes fathomed after LR (but not before) is listed in the fifth column (#fath.a). Column 6 gives the average number of branching steps (#branch). Note that the number of nodes is equal to the number of fathomed nodes (before and after LR) plus the number of nodes with branching. The average number of LAPJV calls (#lap) needed to solve the SCAP is presented in column 7. The next two columns give the average computation times (Tavg) and the worst case running times (Tworst), both in seconds. Column 10 (11) presents the relative lower (upper) gap in %, defined as

$$L_{gap} = \frac{z^* - z_{LB1}}{z^*} \cdot 100\%, \qquad U_{gap} = \frac{z_{UB1} - z^*}{z^*} \cdot 100\%, \tag{14}$$


Table 3
Computational results for negatively correlated problem instances with range 100 (times in seconds)

q     n     #nodes  #fath.b  #fath.a  #branch  #lap   Tavg   Tworst  Lgap  Ugap
0.6   200   2868    2427     77       364      3524   0.37   1.03    0     1.56
0.6   400   8326    7229     482      615      7583   1.63   2.06    0     4.07
0.6   600   10256   9047     527      682      8800   4.67   7.91    0     3.33
0.6   800   11254   9840     625      789      9842   11.45  14.77   0     2.31
0.6   1000  11492   10100    663      729      9702   21.33  25.76   0     2.38
0.7   200   4086    3529     141      416      4371   0.41   0.89    0     2.61
0.7   400   10268   9032     551      685      8744   2.19   2.94    0     4.42
0.7   600   13773   12152    801      820      11339  6.54   8.17    0     3.91
0.7   800   17367   15637    1012     718      12550  16.34  21.31   0     5.38
0.7   1000  20598   18612    1148     838      14729  31.06  41.80   0     4.53
0.8   200   4780    4087     187      506      5278   0.47   0.91    0     3.47
0.8   400   9112    8000     488      624      7901   2.07   2.89    0     5.20
0.8   600   15251   13610    440      1201     11768  6.76   7.86    0     5.72
0.8   800   25289   23288    1222     779      16890  19.35  38.69   0     8.66
0.8   1000  41095   38701    1453     941      24856  40.33  120.54  0     8.87
0.9   200   5947    5142     242      563      6115   0.52   0.88    0     5.35
0.9   400   6783    5865     325      593      6700   1.81   3.00    0     2.82
0.9   600   19665   17654    1181     830      14512  7.69   9.34    0     10.37
0.9   800   14188   12905    729      554      11187  16.75  22.81   0     8.57
0.9   1000  12256   10825    701      730      10437  28.46  41.95   0     4.18
1.0   200   3119    2683     93       343      3426   0.36   0.88    0     2.65
1.0   400   3216    2797     90       329      3276   1.36   2.39    0     1.20
1.0   600   5316    4597     227      492      5393   4.22   7.42    0     1.11
1.0   800   6613    5727     354      532      6340   11.23  18.76   0     2.00
1.0   1000  6430    5556     309      565      6464   13.51  40.65   0     1.29

Table 4
Computational results for negatively correlated problem instances with range 200 (times in seconds)

q     n     #nodes  #fath.b  #fath.a  #branch  #lap   Tavg  Tworst  Lgap  Ugap
0.6   400   25269   22559    1128     1582     20074  3.54  5.53    0     5.85
0.7   400   29093   26063    1372     1658     22307  4.19  8.03    0     9.68
0.8   400   16226   14363    682      1181     14155  2.88  6.98    0     5.31
0.9   400   21603   19337    926      1340     17257  3.49  8.76    0     9.15
1.0   400   11619   10022    394      1203     12475  2.55  6.03    0     3.82

Table 5
Computational results for random problem instances with range 100 and e = 0.01 (times in seconds)

q     n     #nodes  #fath.b  #fath.a  #branch  #lap  Tavg  Tworst  Lgap
0.6   1000  130     40       89       1        323   3.88  5.53    0.01
0.7   1000  2       1        0        1        10    3.07  8.03    0.03
0.8   1000  2       1        0        1        11    2.88  6.98    0.11
0.9   1000  2       1        0        1        12    3.49  8.76    0.19
1.0   1000  1       1        0        0        7     2.55  6.03    0.26


Table 6
Computational results for negatively correlated problem instances with range 100 and e = 0.01 (times in seconds)

q     n     #nodes  #fath.b  #fath.a  #branch  #lap  Tavg   Tworst  Lgap
0.6   1000  29      25       3        1        57    14.57  16.61   0.31
0.7   1000  70      61       7        2        121   19.92  25.64   0.33
0.8   1000  136     121      12       3        220   21.06  23.86   0.29
0.9   1000  59      53       5        1        101   21.10  23.06   0.29
1.0   1000  20      16       3        1        52    9.90   31.12   0.27


where z* is the optimal objective function value and zLB1 (zUB1) is the best lower (upper) bound value in the first sub-set.

Table 1 shows the results for solving dense random problem instances. As expected, the computational effort increases with the value of n, except for q = 0.7 and n = 600 and 800, where two difficult instances caused average times larger than for n = 1000. The results also indicate that, as q gets larger, the running times decrease; see Fig. 2 for an illustration. Only for n = 1000 are the results somewhat different: for q = 0.6 and 0.7 the problem difficulty decreases rapidly, between q = 0.7 and 0.9 it increases slowly, whereas it decreases from q = 0.9 to 1. We have no explanation for this phenomenon. It is also remarkable that for q ≥ 0.7 the number of nodes in the branch-and-bound tree decreases with increasing values of n. Our method of LR performs quite well, as for large random problem instances an optimal solution is often found in the first sub-set (Ugap = 0).

Table 3 considers dense negatively correlated problem instances. For these instances, the influence of q on the running times differs from that for the random problem instances: from q = 0.6 to 0.8 the problem difficulty increases, whereas it decreases from q = 0.8 to 1; see Fig. 2. As for the random problem instances, the computational effort increases with n. The number of nodes in the branch-and-bound tree now increases with increasing values of n in most instances. The results indicate, as expected, that negatively correlated problem instances are more difficult than random problem instances. Note that for all the negatively correlated problem instances, the optimal criterion value found was equal to the initial lower bound (Lgap = 0), showing the good performance of our method of LR.
If the data were drawn from a uniform distribution on the interval [0, 100] instead of [50, 150], then z*, zLB1 and zUB1 would be lower and therefore the lower and upper gaps would be higher. We use the interval [50, 150] to be able to compare our results with those of Mazzola and Neebe (1986). They created problem instances up to size n = 100 with q = 1 and their computer experiments were performed on an IBM 3081 Model K computer using a FORTRAN VS compiler, which is about 278 times slower compared to our computer (Dongarra, 2004). We solved 100 · 100 random problem instances with q = 1 in 0.01 seconds on average, i.e., about 10 times faster than Mazzola and Neebe did. On average, we solved negatively

20

n = 800 n = 600

20

n = 400 n = 200

10

10 0 0.6

n = 1000

0.7

0.8 rho

0.9

1

0 0.6

0.7

0.8 rho

0.9

1

Fig. 2. Plot of time (s) versus q for random (left) and negatively correlated (right) problem instances.

P.M.D. Lieshout, A. Volgenant / European Journal of Operational Research 176 (2007) 151–164

163

correlated problem instances of size 100 and q = 1 in 0.15 seconds, i.e., about 5 times faster. Whereas FORTRAN leads to faster computer codes than the Pascal programming language as we used, we think that these ratios of 10 and 5 although not very precise, yet are underestimates of the gain in speed we have obtained with the developed algorithm. Punnen and Aneja (1995) argue that the SCAP is sensitive for the range of the cij and rij values. To this end, this range is doubled from 100 to 200 (interval [50, 250]). The results for the 400 · 400 random and negatively correlated problem instances are presented in the Tables 2 and 4, respectively. The results are in line with the results of Punnen and Aneja, since the SCAP appears to be sensitive for the range: as the range doubles, the running time doubles for almost all the considered instances. The ratio between the worst case and the average computation time is for almost all the random and negatively correlated problem instances smaller than 4, which is quite small. In our opinion, it indicates that larger problem sizes and larger ranges can still be solved in reasonable running times. Note that the computational results are given for dense problem instances. Computer experiments showed that 20% sparse problem instances require about 5 times more computational effort compared to dense problem instances of the same type. The described algorithm can easily be transformed into a heuristic algorithm. Therefore we fathom a node if (1 + e)zLB P zUB, where e is the termination tolerance. For e = 0.01 this produces a solution that is within 1% of the optimum or better. In Tables 5 and 6 the results are depicted for 1000 · 1000 random and negatively correlated problem instances respectively. The running times of the heuristic algorithm are about 40% faster than the original (optimal) algorithm (compare with Tables 1 and 2). For all problem instances we obtained a solution that is within 0.35% of an optimum. 
Note that the upper gap can only be obtained by determining the related optimal criterion value.
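The fathoming rule with termination tolerance is a one-line test; a minimal sketch (the function name and signature are our own, not from the paper):

```python
def fathom(z_lb: float, z_ub: float, eps: float = 0.0) -> bool:
    """Fathoming test: prune a branch-and-bound node when (1 + eps) * z_lb >= z_ub.

    eps = 0 gives the exact algorithm; eps = 0.01 accepts solutions within
    1% of the optimum, as in the heuristic variant described above.
    """
    return (1.0 + eps) * z_lb >= z_ub
```

With a 1% tolerance, a node whose lower bound is within 1% of the incumbent is cut even though the exact algorithm would still explore it, which is where the roughly 40% saving in running time comes from.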

7. Summary, remarks and conclusions

The SCAP consists of a LAP that is complicated by the addition of a single side constraint. Unlike the LAP, the SCAP is NP-hard. A new branch-and-bound algorithm is proposed to solve the SCAP to optimality. The algorithm is based on a depth-first strategy; at each node of the branch-and-bound tree, lower bounds are obtained by Lagrangean relaxation. We have proposed a Lagrangean relaxation method that is a mixture of the method of Volgenant and Jonker (1982) and a bisection search. A number of improvements speed up the algorithm, and minor modifications transform it into a heuristic one.

Computer experiments demonstrate that our algorithm is capable of dealing with problem instances up to size n = 1000 (one million variables). On the basis of the results, the SCAP difficulty appears to be sensitive to the nature of the underlying data, the number of variables and the available resource capacity. Compared with the fastest optimal algorithm for solving the SCAP, our algorithm is at least 5–10 times faster. We shortly discuss some reasons that may explain this experimental fact:

1. The linear assignment procedure LAPJV is faster than the code in, say, Mazzola and Neebe (1986).
2. The dimensions of the LAPs that are solved during the branch-and-bound procedure are most of the time (much) smaller than n.
3. The good performance of our method of Lagrangean relaxation, which often gives very good lower bounds as well as good upper bounds, making it superfluous to use a separate heuristic to generate upper bounds.
4. The fathoming tests (see Section 4), which speed up the algorithm by 16% (58%) for random (negatively correlated) problem instances.
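To illustrate the bounding principle, here is a minimal, self-contained sketch of a Lagrangean lower bound obtained by bisection on the multiplier. It is not the paper's implementation: the brute-force LAP (the paper uses the far faster LAPJV procedure), the multiplier range and the iteration count are illustrative assumptions, and it only works for tiny instances.

```python
import itertools


def lap_brute(cost):
    """Solve a tiny LAP by full enumeration (illustration only)."""
    n = len(cost)
    return min(
        (sum(cost[i][p[i]] for i in range(n)), p)
        for p in itertools.permutations(range(n))
    )


def lagrangean_bound(c, r, b, iters=40, lam_max=1000.0):
    """Lower bound for the SCAP by bisection on the multiplier lam >= 0.

    For every lam >= 0, min over assignments of (c + lam*r).x - lam*b is a
    valid lower bound on the SCAP optimum.  The bisection moves lam toward
    the point where the resource usage of the LAP solution crosses the
    capacity b; feasible LAP solutions met on the way give upper bounds.
    """
    n = len(c)
    lo, hi = 0.0, lam_max
    best_lb, incumbent = -float("inf"), None
    for _ in range(iters):
        lam = (lo + hi) / 2.0
        mod = [[c[i][j] + lam * r[i][j] for j in range(n)] for i in range(n)]
        z, p = lap_brute(mod)
        best_lb = max(best_lb, z - lam * b)
        if sum(r[i][p[i]] for i in range(n)) > b:
            lo = lam  # over capacity: raise the penalty on resource usage
        else:
            hi = lam  # feasible: record it as an upper bound, try a smaller penalty
            cost_p = sum(c[i][p[i]] for i in range(n))
            if incumbent is None or cost_p < incumbent[0]:
                incumbent = (cost_p, p)
    return best_lb, incumbent
```

For example, with c = [[1, 10], [10, 1]], r = [[5, 1], [1, 5]] and b = 4, the bound converges toward 15.5 while the only feasible assignment, which the bisection also finds, costs 20; the gap between the two is what the branching then has to close.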

We compared the performance of our algorithm against that of a general purpose MIP solver (MPL with the CPLEX solver); see MPL (2005). The results indicate that such a solver can only solve small problem instances in a reasonable time: for n = 50 its running time was on average already 40 times longer than that of our algorithm on identical test instances. We tried to make minor modifications to the described algorithm to solve the RCAP with an equality side constraint, but we think this type of problem is much more difficult to solve. It remains open for further research, as does the idea of one of the referees to use a Lagrangean relaxation based on the knapsack problem, obtained by relaxing the assignment constraints instead of the single constraint.

Remark: The program code is available on request.
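For reference, the model handed to such a general purpose solver is the standard SCAP formulation; the notation below (cij for costs, rij for resource coefficients, b for the capacity) is assumed to match the definitions given earlier in the paper:

```latex
\min \sum_{i=1}^{n}\sum_{j=1}^{n} c_{ij} x_{ij}
\quad \text{subject to} \quad
\sum_{j=1}^{n} x_{ij} = 1 \;\; (i = 1,\dots,n), \qquad
\sum_{i=1}^{n} x_{ij} = 1 \;\; (j = 1,\dots,n),
```
```latex
\sum_{i=1}^{n}\sum_{j=1}^{n} r_{ij} x_{ij} \le b, \qquad
x_{ij} \in \{0,1\} \;\; (i, j = 1,\dots,n).
```

Without the single resource constraint the constraint matrix is totally unimodular and the LP relaxation solves the problem; adding it destroys that structure, which is why the general purpose solver must branch extensively.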

Acknowledgements

We are grateful to two anonymous referees who gave helpful comments that have led to improvements of this paper.

References

Aggarwal, V., 1985. A Lagrangean-relaxation method for the constrained assignment problem. Computers and Operations Research 12, 97–106.
Dell'Amico, M., Toth, P., 2000. Algorithms and codes for dense assignment problems: The state of the art. Discrete Applied Mathematics 100, 17–48.
Dongarra, J.J., 2004. Performance of various computers using standard linear equations software. University of Tennessee Computer Science Technical Report. Available from: .
Fisher, M.L., 1981. The Lagrangian relaxation method for solving integer programming problems. Management Science 27, 1–18.
Garey, M.R., Johnson, D.S., 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman, San Francisco.
Gupta, A., Sharma, J., 1981. Tree search method for optimal core management of pressurised water reactors. Computers and Operations Research 8, 263–266.
Kennington, J.L., Mohammadi, F., 1994. The singly constrained assignment problem: A Lagrangian relaxation heuristic algorithm. Computational Optimization and Applications 3, 7–26.
Mazzola, J.B., Neebe, A.W., 1986. Resource-constrained assignment scheduling. Operations Research 34, 560–572.
MPL, 2005. See http://www.maximal-usa.com/.
Punnen, A.P., Aneja, Y.P., 1995. A tabu search algorithm for the resource-constrained assignment problem. Journal of the Operational Research Society 46, 214–220.
Rosenwein, M.B., 1991. An improved bounding procedure for the constrained assignment problem. Computers and Operations Research 18, 531–535.
Van der Veen, J.A.A., Sierksma, G., Van Dal, R., 1991. Pyramidal tours and the traveling salesman problem. European Journal of Operational Research 52, 90–102.
Volgenant, A., Jonker, R., 1982. A branch and bound algorithm for the symmetric traveling salesman problem based on the 1-tree relaxation. European Journal of Operational Research 9, 83–89.
Volgenant, A., Jonker, R., 1987. A shortest augmenting path algorithm for dense and sparse linear assignment problems. Computing 38, 325–340.