Upper and lower bounds for the single source capacitated location problem


European Journal of Operational Research 151 (2003) 333–351 www.elsevier.com/locate/dsw

Maria João Cortinhal a,*, Maria Eugénia Captivo b

a Instituto Superior de Ciências do Trabalho e da Empresa, ISCTE-CIO, Av. Forças Armadas, Lisboa 1649-026, Portugal
b Faculdade de Ciências da Universidade de Lisboa, DEIO-CIO, Edificio C2-Campo Grande, Lisboa 1749-016, Portugal

Abstract

The single source capacitated location problem is considered. Given a set of potential locations and the plant capacities, it must be decided where and how many plants should be opened and which clients should be assigned to each open plant. A Lagrangean relaxation is used to obtain lower bounds for this problem. Upper bounds are given by Lagrangean heuristics followed by search methods and by a tabu search metaheuristic. Computational experiments on different sets of problems are presented. © 2003 Published by Elsevier B.V.

Keywords: Capacitated facility location; Lagrangean heuristics; Tabu search

1. Introduction

In the single source capacitated location problem (SSCLP) a set I = {1, …, n} of possible plant locations, each with a maximum capacity a_i, i ∈ I, and a set J = {1, …, m} of customers, each with a demand b_j, j ∈ J, are given. It is required to choose a subset of plants and to fully assign each customer to one of the chosen plants, in such a way that the total cost is minimized and the plant capacities are not exceeded. The total cost is composed of fixed costs f_i, i ∈ I, incurred whenever plant i is open, and of the costs c_ij, i ∈ I, j ∈ J, of assigning each customer to one plant. Considering

x_ij = 1 if customer j is assigned to plant i, and 0 otherwise,

y_i = 1 if plant i is open, and 0 otherwise,

the SSCLP can be formulated as follows:

min  Σ_{i=1}^{n} Σ_{j=1}^{m} c_ij x_ij + Σ_{i=1}^{n} f_i y_i    (1)

subject to

Σ_{i=1}^{n} x_ij = 1,  j = 1, …, m,    (2)

Σ_{j=1}^{m} b_j x_ij ≤ a_i y_i,  i = 1, …, n,    (3)

x_ij ∈ {0, 1},  i = 1, …, n, j = 1, …, m,    (4)

y_i ∈ {0, 1},  i = 1, …, n.    (5)

* Corresponding author. E-mail addresses: [email protected] (M.J. Cortinhal), [email protected] (M.E. Captivo).
0377-2217/$ - see front matter © 2003 Published by Elsevier B.V. doi:10.1016/S0377-2217(02)00829-9
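As a concrete illustration of formulation (1)–(5), the following brute-force enumeration (a Python sketch for toy instances only; the function name and data layout are our own, not from the paper) checks every single-source assignment against the capacity constraints and returns the cheapest feasible one:

```python
from itertools import product

def sscpl_brute_force(c, f, a, b):
    """Enumerate every single-source assignment of customers to plants and
    return the cheapest one respecting the capacity constraints (3).
    c[i][j]: assignment costs, f[i]: fixed costs, a[i]: capacities, b[j]: demands."""
    n, m = len(f), len(b)
    best_cost, best_assign = float("inf"), None
    # Constraints (2) and (4): each customer goes to exactly one plant.
    for assign in product(range(n), repeat=m):
        load = [0] * n
        for j, i in enumerate(assign):
            load[i] += b[j]
        # Constraints (3): a used plant must be open and must not be overloaded.
        if any(load[i] > a[i] for i in range(n)):
            continue
        opened = set(assign)
        cost = sum(f[i] for i in opened) + sum(c[i][j] for j, i in enumerate(assign))
        if cost < best_cost:
            best_cost, best_assign = cost, assign
    return best_cost, best_assign
```

On the tiny instance c = [[2, 3], [4, 1]], f = [5, 7], a = [10, 10], b = [6, 6], both plants must open (total demand 12 exceeds any single capacity), giving optimal cost 15 with customer 0 on plant 0 and customer 1 on plant 1.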


Constraints (2) stipulate that each customer must be assigned to a single plant. Constraints (3) are the plant capacity constraints, and also ensure that a customer can only be assigned to an open plant. Constraints (4) and (5) are the integrality constraints.

The SSCLP is a combinatorial optimization problem that belongs to the class of NP-hard problems. Several authors have studied it. In Neebe and Rao [25] the problem is formulated as a set partitioning problem and solved by a column generation branch and bound procedure. Bounds were obtained using linear relaxation. They observe that the search tree is not very large, due to the high probability of the linear program having integer solutions. Computational results are given for a set of problems supplied by DeMaio and Roveda [10], and for two randomly generated sets with a maximum of 25 facilities and 40 customers. Barceló and Casanovas [2] propose a Lagrangean heuristic in which the customer assignment constraints are relaxed. Their heuristic for finding feasible solutions consists of two phases: one for plant selection and the other for the clients' assignment. Klincewicz and Luss [22] describe a Lagrangean heuristic for the SSCLP in which they dualize the capacity constraints. Feasible solutions for the relaxed problems, which are uncapacitated location problems, were obtained by the dual ascent method of Erlenkotter [12], but without using branch and bound. The initial feasible solution was obtained with an add heuristic and, in an attempt to improve the best feasible solution, a final adjustment heuristic that tries to improve customer assignments was used. Computational results are given for a set of 24 test problems, taken from Kuehn and Hamburger [23], with n = 25 or 26, m = 50, and where demands greater than 3700 were set to 3700. In Darby-Dowman and Lewis [8] the same Lagrangean relaxation is used to establish some relationships between fixed and assignment costs.
Using these relationships, they identify problems for which the optimal solution of the relaxed problem is guaranteed not to be feasible for the original problem, and illustrate this with an example.

Beasley [5] presents a framework for developing Lagrangean heuristics for location problems, in which capacity constraints and customer assignment constraints are dualized. For the SSCLP, an allocation cost procedure was used to calculate the cost of a given set of open facilities. Computational results are given for the set of 24 problems taken from Klincewicz and Luss [22]. The average deviation is compared with those obtained by the Lagrangean heuristics of Klincewicz and Luss [22] and Pirkul [27]. Sridharan [28] describes a Lagrangean heuristic for the problem, an improvement and extension of the one presented by Nauss [26] for the capacitated plant location problem, in which customer assignment constraints are dualized. Computational experiments were made over a set of randomly generated problems and a set of problems given in Guignard-Spielberg and Kim [17]. In Agar and Salhi [1] a general framework for solving location problems, including the one we are studying, is described. This framework is similar to the one presented by Beasley [5]; the main differences can be found in the generation of feasible solutions. They report computational results obtained with some instances taken from Beasley's OR Library [4] and other test problems, varying in size from 100 to 600 customers and with m = n, which were constructed based on the standard data sets given in Christofides et al. [6]. Hindi and Pienkosz [20] describe a heuristic that combines Lagrangean relaxation with restricted neighbourhood search for the SSCLP. They dualize customer assignment constraints and use subgradient optimization. For finding feasible solutions a three-phase heuristic procedure is used, in which the second phase is a steepest descent method and the last is a restricted neighbourhood search. They observe that, in large instances, the heuristic procedure was carried out only at subgradient iterations that led to a lower bound improvement.
Computational results are given for a set of 36 problems taken from Beasley's OR Library [4], and a set of 33 problems used in Barceló et al. [3] and in Delmaire et al. [9]. Holmberg et al. [21] present a Lagrangean heuristic and a branch and bound for the problem.


In both, they dualize customer assignment constraints and use subgradient optimization. Upper bounds were obtained by a primal heuristic that solves a sequence of related matching problems, using the Lagrangean solutions as starting points. Computational results are given for four sets of randomly generated test problems, with n ranging from 10 to 30 and m ranging from 50 to 200. Comparisons are made between the results obtained with branch and bound, the Lagrangean heuristic, and Cplex [7].

The objective of our work is to develop solution procedures, based on Lagrangean heuristics and search methods, capable of providing good solutions for the SSCLP. In Section 2 a Lagrangean relaxation of the problem is presented. Section 3 describes the methods developed for finding upper bounds on the problem's optimal value. Computational experiments are presented in Section 4, and Section 5 contains the conclusions.

2. Lagrangean relaxation

Adding the constraint

Σ_{i=1}^{n} a_i y_i ≥ Σ_{j=1}^{m} b_j    (6)

to the formulation (1)–(5) we obtain

min  Σ_{i=1}^{n} Σ_{j=1}^{m} c_ij x_ij + Σ_{i=1}^{n} f_i y_i    (1)

subject to

Σ_{i=1}^{n} x_ij = 1,  j = 1, …, m,    (2)

Σ_{j=1}^{m} b_j x_ij ≤ a_i y_i,  i = 1, …, n,    (3)

Σ_{i=1}^{n} a_i y_i ≥ Σ_{j=1}^{m} b_j,    (6)

x_ij ∈ {0, 1},  i = 1, …, n, j = 1, …, m,    (4)

y_i ∈ {0, 1},  i = 1, …, n.    (5)

Constraint (6) is redundant for the problem, since it is implied jointly by (2) and (3). However, it has been shown (as mentioned in [28]) that adding such a constraint to the capacitated plant location problem (CPLP) leads to higher lower bounds. Dualizing constraints (2) with Lagrangean multipliers λ_j, j = 1, …, m, we obtain

Z(λ) = Σ_{j=1}^{m} λ_j + min Σ_{i=1}^{n} Σ_{j=1}^{m} (c_ij − λ_j) x_ij + Σ_{i=1}^{n} f_i y_i    (7)

subject to (3)–(6). Given a set of multipliers λ_j, j = 1, …, m, and since the variables y_i can only take the values zero or one, the optimal solution can be obtained by solving n + 1 knapsack problems [28]. In our work, the knapsack problems were solved by the Martello and Toth [24] algorithm (code MT2), and the maximization of Z(λ) with respect to the multipliers λ is carried out by subgradient optimization [18,19]. As is well known, subgradient optimization is very sensitive to the initial values used, as well as to the rules for updating the step length. In this case, the initial values were set to

λ_j = min_{i} c_ij,  j ∈ J,

and the step length is calculated as

T_k = ρ_k (Z^u − Z(λ)) / ‖g‖,

where Z^u is the best upper bound and ‖g‖ is the norm of the current subgradient. ρ_k is set to 2.0 initially and halved after five consecutive iterations without improvement on the lower bound. Every 50 iterations ρ_k is restarted to 2.0. The subgradient optimization has three stopping criteria:

1. the difference between upper and lower bound is less than 1;
2. a maximum number of iterations is reached;
3. the norm of the current subgradient is zero.
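The decomposition of the relaxed problem into knapsacks can be sketched as follows (a hedged Python illustration, not the authors' Fortran code: a textbook dynamic-programming knapsack stands in for the MT2 code, and the plant-selection knapsack under constraint (6) is replaced by plain subset enumeration; all names are ours):

```python
from itertools import combinations

def knapsack_max(profits, weights, capacity):
    """0/1 knapsack by dynamic programming (stand-in for the MT2 code)."""
    dp = [0] * (capacity + 1)
    for p, w in zip(profits, weights):
        for cap in range(capacity, w - 1, -1):
            dp[cap] = max(dp[cap], dp[cap - w] + p)
    return dp[capacity]

def lagrangean_bound(c, f, a, b, lam):
    """Evaluate Z(lambda) of (7) subject to (3)-(6).  For a fixed open plant i,
    the best customers to take form a knapsack with profits max(0, lam_j - c_ij)
    and weights b_j; the plant-opening decision under constraint (6) is then
    solved here by enumeration instead of the (n+1)-th knapsack."""
    n, m = len(f), len(b)
    total_demand = sum(b)
    # v[i]: largest total profit plant i can collect within its capacity.
    v = []
    for i in range(n):
        items = [(lam[j] - c[i][j], b[j]) for j in range(m) if lam[j] - c[i][j] > 0]
        v.append(knapsack_max([p for p, _ in items], [w for _, w in items], a[i]))
    # Open plants minimising sum_i (f_i - v_i) y_i subject to sum_i a_i y_i >= sum_j b_j.
    best = float("inf")
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(a[i] for i in subset) >= total_demand:
                best = min(best, sum(f[i] - v[i] for i in subset))
    return sum(lam) + best
```

On the toy instance c = [[2, 3], [4, 1]], f = [5, 7], a = [10, 10], b = [6, 6], the multipliers λ_j = min_i c_ij give the bound 15, which here coincides with the optimal value; λ = 0 gives the weaker bound 12.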

3. Lagrangean heuristics

At the end of each subgradient iteration, a solution to the relaxed problem is obtained. In this

solution, a set of plants A ⊆ I is open, with total capacity greater than or equal to the total demand, but the set J of customers is partitioned into three subsets:

JAF = { j ∈ J : Σ_{i=1}^{n} x_ij = 1 },

JPAF = { j ∈ J : Σ_{i=1}^{n} x_ij > 1 },

JWAF = { j ∈ J : Σ_{i=1}^{n} x_ij = 0 },

such that JAF ∩ JPAF ∩ JWAF = ∅ and JAF ∪ JPAF ∪ JWAF = J.

The objective of this heuristic is to find feasible solutions to the problem, and it consists of two phases. In the first phase, customers belonging to the sets JPAF and JWAF are reassigned in such a way that, at the end, JAF = J and JPAF = JWAF = ∅. However, these new solutions can violate the capacity constraints. So, denoting by J_i the set of clients assigned to plant i and setting

s_i = 0 if Σ_{j ∈ J_i} b_j ≤ a_i, and s_i = | a_i − Σ_{j ∈ J_i} b_j | otherwise,

we will say, from now on, that a solution is feasible if and only if s_i = 0 for all i = 1, …, n. Consequently, the degree of infeasibility of a solution is given by Σ_{i=1}^{n} s_i. In the second phase, we try to improve the solution obtained in the first phase using a search procedure.

3.1. Phase I

This phase consists of the following steps:

1. for each customer j, j ∈ JPAF, do
   (a) assign customer j to the plant i1 such that c_{i1 j} = min_{i ∈ I_j} c_ij, where I_j = { i ∈ A : x_ij = 1 };
2. for each plant i, i ∈ A, define J_i = { j ∈ J : x_ij = 1 } and D_i = a_i − Σ_{j ∈ J_i} b_j;
3. order the customers j, j ∈ JWAF, by descending order of their demands;
4. in this order, for all customers j ∈ JWAF, do
   (a) assign j to the open plant i1 such that c_{i1 j} = min_{i ∈ A : D_i − b_j ≥ 0} c_ij; if no such plant exists, assign j to the open plant i1 such that c_{i1 j} = min_{i ∈ A} c_ij;
   (b) update J_{i1} and D_{i1}.

3.2. Phase II

This phase consists of a search procedure, with moves defined as shift or swap. A shift transfers the assignment of one customer to another open plant. A swap exchanges the assignments of two customers. Two different search procedures have been implemented: tabu search and local search.

3.2.1. Tabu search procedure

Our heuristic, Tabu Clients, is based on the tabu search methodology [15,16] and incorporates additional features such as strategic oscillation and compound moves. Tabu Clients changes the clients' assignments, and for this purpose only two kinds of moves are allowed: MCI, which swaps the assignments of two clients assigned to different open plants, and MCII, which switches the assignment of one client to another open plant. Move MCI is a compound of two moves MCII, but it is not the same as performing two moves MCII one after the other. As these moves are performed, tabu restrictions are employed to prevent moving back to previous solutions. Once a move MCI is performed, the previous assignment of each client is added to a list, TabuListI. A move MCI is tabu if it assigns at least one client to a plant that is forbidden to it, i.e. is on TabuListI. For moves MCII the same list is used, and a move MCII is tabu if it changes the assignment of the client to a plant that is forbidden to it. The tabu status is not permanent: a time interval, measured in number of iterations, must elapse before an assignment is removed from the tabu list. In our heuristic, this time interval is given by

c + a × (number / all_number),

where c is a given parameter, a is equal to n,(1) number is the number of times this assignment has been made, and all_number is the maximum value of number over all assignments already made.

The aspiration criterion used is by objective function value: a move can override its tabu status if it leads the search to the best feasible solution found during the search. In this problem, since there are restrictions on the capacities of the plants, it is sometimes necessary to cross the boundary of feasibility as a way of obtaining better solutions, so strategic oscillation was used.(2) This strategy allows the search to cross that boundary whenever the best solution does not improve during m consecutive iterations. Once inside the infeasible region, the search is guided back to feasibility. For this reason, it was necessary to create best-move definitions that depend on the search state. In Tabu Clients we did it in the following way.

(i) If the current solution is infeasible, or is feasible and the best solution was improved during the last m iterations, then the best move is the one that leads to a feasible solution that minimizes the objective function value.

(ii) If the current solution is feasible and the best solution was not improved during the last m iterations, then only moves that lead to improving solutions are allowed, even if this implies crossing the boundary of feasibility. The best neighbour is the one that minimizes the infeasibility, with ties broken by the objective function value.

For each new current solution, the entire neighbourhood is explored. Considering the worst case, which occurs when each client is assigned to a different plant, m(n − 1) + m(m − 1)/2 solutions are explored. This number seems enormous, namely when large problems are considered. However, since the solutions are not costly to examine, our preliminary experiments showed that exploring the neighbourhood completely can lead to better solutions. The stopping criteria are a maximum number of iterations and a number of iterations without improvement on the best solution.

1 With a equal to zero, worse results were obtained.
2 Some experiments were made without strategic oscillation; on average, better results were obtained when it is used.

3.2.2. Local search procedure

This local search is a descent method, with the additional requirement that the entire neighbourhood must be explored at each iteration. So the best move is the one, over all possible moves, that most improves the current solution. In this case, and since the solution provided by the first phase can be infeasible, improving the current solution can be achieved by moves that lead to:

(i) feasible solutions with smaller costs, if the current solution is feasible;
(ii) solutions with a smaller degree of infeasibility, if the current solution is infeasible.

3.3. Algorithm

The pseudo-code for this algorithm can be described as follows.

1. Parameter initialization: number of iterations, initial values for λ_j and ρ_k, Z^l = 0, Z^u = +∞.
2. While the stopping criteria are not met do
   (a) For the current set of multipliers:
       i. calculate Z(λ) by solving problem (7) subject to (3)–(6);
       ii. update the best lower bound Z^l;
       iii. try to find a feasible solution to problem (1)–(5) (Phase I);
       iv. improve the solution obtained in iii with one of the search procedures described in Phase II (tabu search or local search);
       v. update the best upper bound Z^u.
   (b) If Z^u − Z^l < 1 then Stop.
   (c) Update the Lagrangean multipliers:
       i. calculate the subgradient vector g_j = 1 − Σ_{i=1}^{n} x_ij, j = 1, …, m, and its norm ‖g‖;
       ii. if ‖g‖ = 0 then Stop;
       iii. update ρ_k;
       iv. set λ_j = λ_j + ρ_k (Z^u − Z(λ)) / ‖g‖ × (1 − Σ_{i=1}^{n} x_ij), for j = 1, …, m.
   EndWhile
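Steps (a)iii and (c) of the algorithm can be sketched as follows (a Python illustration under a data layout of our own choosing, not a transcription of the authors' Fortran code: `phase_one` condenses the Phase I steps, and `update_multipliers` implements the subgradient update):

```python
import math

def phase_one(c, a, b, open_plants, x):
    """Phase I repair of a relaxed solution.  x[j] lists the open plants that
    customer j is assigned to in the relaxed solution (possibly several, or none).
    Returns a single-source assignment {customer: plant}, feasible when possible."""
    assign = {}
    residual = {i: a[i] for i in open_plants}
    # Step 1: each multiply-assigned customer keeps the cheapest of its plants
    # (a singly-assigned customer trivially keeps its own plant).
    for j, plants in enumerate(x):
        if plants:
            i1 = min(plants, key=lambda i: c[i][j])
            assign[j] = i1
            residual[i1] -= b[j]
    # Steps 3-4: unassigned customers, by descending demand, go to the cheapest
    # open plant with enough residual capacity, else to the cheapest open plant.
    unassigned = sorted((j for j, plants in enumerate(x) if not plants),
                        key=lambda j: -b[j])
    for j in unassigned:
        fitting = [i for i in open_plants if residual[i] >= b[j]]
        i1 = min(fitting or open_plants, key=lambda i: c[i][j])
        assign[j] = i1
        residual[i1] -= b[j]
    return assign

def update_multipliers(lam, x_bin, rho, z_upper, z_lam):
    """Step (c): subgradient g_j = 1 - sum_i x_ij and the update
    lambda_j := lambda_j + rho * (Z_u - Z(lambda)) / ||g|| * g_j."""
    m = len(lam)
    g = [1 - sum(row[j] for row in x_bin) for j in range(m)]
    norm = math.sqrt(sum(gj * gj for gj in g))
    if norm == 0:
        return lam, g, norm
    step = rho * (z_upper - z_lam) / norm
    return [lam[j] + step * g[j] for j in range(m)], g, norm
```

For instance, with relaxed assignments x = [[0, 1], []] (customer 0 taken by both plants, customer 1 by neither), `phase_one` keeps customer 0 at its cheaper plant and places customer 1 at the cheapest plant with spare capacity.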

4. Computational results

The algorithm was written in Fortran and run on a PC Pentium II, 233 MHz. Computational results were obtained using available test problems and problems randomly generated according to different methodologies.

First, we present the computational results obtained with a set of problems randomly generated according to our own methodology. Our objective was to see how the results can be influenced by the data structure and, at the same time, to compare the quality of the results using tabu search or local search. Instances were generated with different parameters in the following way. The client demands were drawn from a uniform distribution in the range [5, 35]. The plant capacities were drawn in the same way, but in the range

[ ⌊5mq/n⌋, ⌊35mq/n⌋ ],

where ⌊x⌋ represents the greatest integer less than or equal to x, m is the number of clients, n is the number of potential plant locations, and q is the ratio between total supply and total demand. Fixed costs are defined as

f_i = U[0, 90] + U[100, 110] × √a_i,

where U[x, y] represents a uniform distribution in the range [x, y]. For the assignment costs, a network of density d with (m + n) vertices was used. The cost of each edge was drawn from a uniform distribution in the range [0, 100]. This matrix was submitted to the Floyd algorithm [14] to obtain the shortest distances between all points. Finally, we worked with a sub-matrix with n rows and m columns. A total of 72 classes of problems was tested, with parameters n × m: 8 × 25, 25 × 25, 33 × 50, 50 × 50; q: 1.2, 1.3, 1.4, 1.5, 3.0, 5.0; d: 25%, 50%, 75%.
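The generation scheme just described can be sketched as follows (a Python approximation; the seed handling is ours, and if the random network happens to be disconnected some assignment costs come out infinite, a case the paper does not discuss):

```python
import math
import random

def generate_instance(n, m, q, d, seed=0):
    """Random SSCLP instance following the generation scheme described above.
    q: supply/demand ratio, d: network density in (0, 1]."""
    rng = random.Random(seed)
    b = [rng.randint(5, 35) for _ in range(m)]                 # demands ~ U[5, 35]
    lo, hi = math.floor(5 * m * q / n), math.floor(35 * m * q / n)
    a = [rng.randint(lo, hi) for _ in range(n)]                # capacities
    f = [rng.uniform(0, 90) + rng.uniform(100, 110) * math.sqrt(a[i])
         for i in range(n)]                                    # fixed costs
    # Assignment costs: shortest paths in a random network on (m + n) vertices.
    V = m + n
    INF = float("inf")
    dist = [[0.0 if u == v else INF for v in range(V)] for u in range(V)]
    for u in range(V):
        for v in range(u + 1, V):
            if rng.random() < d:
                dist[u][v] = dist[v][u] = rng.uniform(0, 100)
    for k in range(V):                                         # Floyd-Warshall
        for u in range(V):
            for v in range(V):
                if dist[u][k] + dist[k][v] < dist[u][v]:
                    dist[u][v] = dist[u][k] + dist[k][v]
    # Keep an n x m sub-matrix: here the plants are the last n vertices.
    c = [[dist[m + i][j] for j in range(m)] for i in range(n)]
    return c, f, a, b
```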

For each class, five instances were generated and tested. Tables 1–4 show the average results obtained under different scenarios.

1. Scenario 1: 1000 subgradient iterations, upper bound calculated at every iteration.
2. Scenario 2: 1000 subgradient iterations, upper bound calculated only when the lower bound improved.
3. Scenario 3: 500 subgradient iterations, upper bound calculated at every iteration.
4. Scenario 4: 500 subgradient iterations, upper bound calculated only when the lower bound improved.

With these tables we compare the performance of each of the methods tested: the Lagrangean heuristic alone and the Lagrangean heuristic combined with local search or tabu search. For each table we give:

1. Dev (%), the mean percentage deviation between the best upper (UB) and lower (LB) bounds, calculated as 100 × (UB − LB)/LB:
   (a) for the Lagrangean heuristic with tabu search(3) in Phase II;
   (b) for the Lagrangean heuristic with local search in Phase II;
   (c) for the Lagrangean heuristic without search (Phase I only);
2. Time (s), the mean CPU time in seconds.

Analysing the results, we can conclude that:

1. Decreasing the number of subgradient iterations, or calculating the upper bound only when the lower bound improves, does not seem to have any significant influence on the quality of the mean deviation.
2. Tabu search leads to better results than local search, since the mean deviation is always better

3 For tabu search, the stopping criterion was set to 1000 iterations or 200 iterations without improvement, and for the tabu time interval c = 7 and a = n were considered (after different experiments, the best results were obtained with these values).
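A sketch of TabuListI with the dynamic tenure of Section 3.2.1 (the formula c + a × number/all_number is our reading of the garbled original, so treat it as an assumption; the class and method names are ours):

```python
class TabuList:
    """TabuListI: forbids re-assigning a client to a plant for a dynamic
    number of iterations, longer for assignments made more often."""

    def __init__(self, c, a):
        self.c, self.a = c, a    # c: fixed parameter, a: instance-dependent (a = n)
        self.expires = {}        # (client, plant) -> iteration when the ban ends
        self.counts = {}         # (client, plant) -> number of times this assignment was made

    def forbid(self, client, plant, iteration):
        key = (client, plant)
        self.counts[key] = self.counts.get(key, 0) + 1
        all_number = max(self.counts.values())
        # Assumed tenure formula: c + a * (number / all_number).
        tenure = self.c + self.a * self.counts[key] / all_number
        self.expires[key] = iteration + tenure

    def is_tabu(self, client, plant, iteration):
        return self.expires.get((client, plant), -1) > iteration
```

With c = 7 and a = 5, a first-time assignment gets tenure 7 + 5 = 12 iterations, after which it is released.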

M.J. Cortinhal, M.E. Captivo / European Journal of Operational Research 151 (2003) 333–351

339

Table 1
Average results for Scenario 1 (Lagrangean heuristics)

             Tabu search          Local search         Without search
Problems     Dev (%)   Time (s)   Dev (%)   Time (s)   Dev (%)   Time (s)
8_25         0.13      19.01      0.21      3.82       0.25      0.38
25_25        0.88      55.42      1.40      22.09      1.77      1.70
33_50        0.51      188.40     3.26      125.52     3.92      4.36
50_50        1.21      249.66     10.60     214.16     11.69     7.46

Table 2
Average results for Scenario 2 (Lagrangean heuristics)

             Tabu search          Local search         Without search
Problems     Dev (%)   Time (s)   Dev (%)   Time (s)   Dev (%)   Time (s)
8_25         0.12      10.66      0.17      2.49       0.17      0.34
25_25        0.92      33.96      1.56      16.20      1.96      1.72
33_50        0.52      118.45     4.13      101.28     4.04      4.36
50_50        1.20      167.20     11.95     183.81     12.61     7.37

Table 3
Average results for Scenario 3 (Lagrangean heuristics)

             Tabu search          Local search         Without search
Problems     Dev (%)   Time (s)   Dev (%)   Time (s)   Dev (%)   Time (s)
8_25         0.13      9.85       0.23      2.13       0.28      0.18
25_25        1.27      28.22      2.32      11.61      2.70      0.90
33_50        0.63      96.60      4.15      64.02      4.92      2.11
50_50        1.35      126.85     11.36     107.98     12.88     3.60

Table 4
Average results for Scenario 4 (Lagrangean heuristics)

             Tabu search          Local search         Without search
Problems     Dev (%)   Time (s)   Dev (%)   Time (s)   Dev (%)   Time (s)
8_25         0.13      5.45       0.20      1.37       0.27      0.21
25_25        1.28      18.23      2.46      8.75       3.14      0.86
33_50        0.78      60.07      4.85      52.67      5.38      2.17
50_50        1.62      87.16      12.80     94.70      14.22     3.60


Table 5
Average computational results for each class of problems (Lagrangean heuristic with tabu search)

           8_25                 25_25                33_50                50_50
Problem    Dev (%)   Time (s)   Dev (%)   Time (s)   Dev (%)   Time (s)   Dev (%)   Time (s)
A_1.2      0.12      11.80      2.42      18.40      2.48      88.00      4.67      136.60
A_1.3      0.29      13.20      0.92      21.60      1.02      69.60      3.27      108.00
A_1.4      0.07      7.60       0.65      19.80      1.14      69.80      2.80      93.60
A_1.5      0.15      4.80       1.35      24.00      1.04      80.40      0.49      87.60
A_3.0      0.01      0.20       0.38      13.80      0.08      41.20      0.12      61.00
A_5.0      0.36      16.27      0.36      37.29      0.16      56.63      0.14      66.51
B_1.2      0.13      10.20      3.76      27.60      1.53      84.00      4.03      115.80
B_1.3      0.25      8.40       4.12      25.80      3.44      74.20      4.50      123.20
B_1.4      0.11      8.40       1.69      20.00      0.73      68.40      1.36      91.40
B_1.5      0.15      9.00       1.50      26.00      0.19      69.60      0.22      94.80
B_3.0      0.03      1.40       0.11      13.00      0.03      41.60      0.08      61.40
B_5.0      0.01      0.20       0.05      3.00       0.02      25.80      0.03      43.60
C_1.2      0.07      5.60       2.02      21.40      0.79      73.00      4.13      109.40
C_1.3      0.08      5.00       1.24      21.00      0.65      65.20      1.63      100.20
C_1.4      0.11      5.60       0.96      20.20      0.70      68.20      1.45      92.00
C_1.5      0.05      4.60       1.74      27.00      0.09      71.20      0.14      91.20
C_3.0      0.04      1.60       0.09      13.80      0.03      54.00      0.08      61.80
C_5.0      0.00      0.00       0.02      4.80       0.02      11.40      0.03      60.20

and the mean CPU times, as the dimension of the problem increases, become similar.
3. When the search phase is not used, the mean CPU time is very small but the mean deviation suffers a significant increase, even if we compare the best mean deviations obtained without search (0.17, 1.77, 3.92, 11.69) with the worst mean deviations obtained with tabu search (0.13, 1.28, 0.78, 1.62).
4. The mean deviation obtained with local search or without search increases when the dimension increases. The same conclusion is not valid for tabu search: the worst results were obtained when the number of customers is equal to the number of potential plants.

Table 5 gives more detailed information about the results shown in Table 4 for the Lagrangean heuristic with tabu search. In that table, results are given for each pair of parameters tested (d, q) and for each dimension (n × m). The letters A, B and C represent d = 25%, 50% and 75%, and the numbers 1.2, 1.3, 1.4, 1.5, 3.0 and 5.0 represent the values of q.

We can see in Table 5 that the quality of the results is related to the ratio q and the density d. In general, the mean deviation and the mean time decrease as the ratio q increases, although this is not always true for smaller dimensions. For ratios q = 1.5, 3.0 and 5.0 we can observe that, in general, the mean deviation decreases as the density increases. For the other ratios, in general, density d = 50% leads to worse mean deviations than d = 25% or d = 75%.

Since the Lagrangean heuristic with tabu search seems to be a promising method, we have tried it on other instances, namely with larger dimensions. A new set of instances was generated, with parameter q set to 3.0 and d equal to 50%. The results are shown in Table 6.(4) In this table, we give the results for two stopping criteria in tabu search:

4 The maximum number of subgradient iterations was set to 500, and the upper bound was calculated only when the lower bound improved.


Table 6
Average results for higher-dimension problems (Lagrangean heuristic with tabu search)

Problem    Dev1 (%)   Time1 (s)   Dev2 (%)   Time2 (s)
25_50      0.09       41.60       0.26       29.60
25_100     0.08       132.80      0.15       45.40
25_250     0.02       754.40      0.02       465.60
25_500     0.09       6323.60     0.09       2188.60
50_100     0.11       186.80      0.23       76.00
50_250     0.07       1074.20     0.14       106.00
50_500     0.02       2875.25     0.13       1936.40

1. 1000 iterations or 200 iterations without improvement on the best solution (Dev1 and Time1);
2. 500 iterations or 150 iterations without improvement on the best solution (Dev2 and Time2).

Comparing Dev1 with Dev2, we can say that the mean deviation increases as the number of iterations decreases. Nevertheless, in a much smaller amount of time, tabu search still performs well. With the exception of dimension 25_500, we can conclude that, for a given n, the mean deviation decreases as the number of customers m increases. For a given m we can see that, except for Dev1 with m = 500, the mean deviation increases as the number of potential plant locations n increases.

In our instances, the fixed cost component of the objective function is greater than the assignment cost component. To test the effect of varying the relation between fixed and assignment costs, a new set of instances was considered, obtained by multiplying the assignment costs by the demand of each client (c_ij = c_ij b_j, for all i and j = 1, …, m). The results are given in Tables 7–10, considering the same four scenarios as in Tables 1–4. Observing these tables, we can draw the same conclusions as for Tables 1–4: the Lagrangean heuristic with tabu search yields the best deviations. However, comparing these results with the results obtained for the first set of instances, we can also conclude that:

1. the mean deviation has decreased, except for dimension 8_25;
2. the mean CPU times are quite similar.

Table 11 shows the results obtained with tabu search(5) for each dimension (n × m) and for each pair of values of the parameters (d, q). For this set of instances it is not possible to establish any relation between the mean deviation and the parameters d and q, since they are highly dependent on the problem dimension. However, comparing with Table 5 we can observe that, for ratios 3.0 and 5.0, the mean deviations have increased, while for the other ratios they have, in general, decreased.

Table 12 shows results for the higher-dimension T instances, obtained with different stopping criteria in tabu search (the same as explained for Table 6). We can observe that the mean CPU time increases as the dimension increases. Concerning the mean deviation, the same conclusion is not valid when n = 25. We can also observe that when the number of iterations decreases, the mean CPU time decreases without affecting the quality of the mean deviation very much.

To conclude, we can say that the proposed Lagrangean heuristic with tabu search seems to be a method capable of providing good bounds in a reasonable amount of time, even if only 500 iterations or 150 iterations without improvement are considered. For this reason, from now on, the computational results that we present were obtained with the Lagrangean heuristic followed by tabu search, with 500 iterations or 150 iterations without improvement.

Next, we apply our approach to test problems already used by other authors. A set of problems randomly generated according to the methodology of Pirkul [27] was considered. Data points representing customers and plant locations were drawn from a uniform distribution over a rectangle with sides 50 × 100. The Euclidean distance between i and j, d_ij, was used to define the cost coefficients c_ij as c_ij = d_ij × W2,

5 500 subgradient iterations were considered, and the upper bound was calculated each time the lower bound improved.


Table 7
Average results for instances T with Scenario 1 (Lagrangean heuristics)

             Tabu search          Local search         Without search
Problems     Dev (%)   Time (s)   Dev (%)   Time (s)   Dev (%)   Time (s)
T8_25        0.27      18.30      0.29      6.87       0.38      0.27
T25_25       0.62      46.30      0.75      21.03      0.89      1.14
T33_50       0.44      205.11     0.52      156.60     0.82      3.17
T50_50       0.48      249.33     0.95      212.32     1.23      4.60

Table 8
Average results for instances T with Scenario 2 (Lagrangean heuristics)

             Tabu search          Local search         Without search
Problems     Dev (%)   Time (s)   Dev (%)   Time (s)   Dev (%)   Time (s)
T8_25        0.27      11.70      0.31      3.95       0.35      0.27
T25_25       0.65      28.39      0.74      12.71      0.87      1.11
T33_50       0.45      133.52     0.53      98.61      0.87      3.19
T50_50       0.49      162.66     0.91      133.29     1.31      4.51

Table 9
Average results for instances T with Scenario 3 (Lagrangean heuristics)

             Tabu search          Local search         Without search
Problems     Dev (%)   Time (s)   Dev (%)   Time (s)   Dev (%)   Time (s)
T8_25        0.47      9.63       0.48      3.63       0.56      0.11
T25_25       1.00      24.47      1.20      11.22      1.48      0.59
T33_50       0.56      107.12     0.68      80.60      1.20      1.55
T50_50       0.59      129.99     1.32      110.51     1.81      2.25

Table 10
Average results for instances T with Scenario 4 (Lagrangean heuristics)

             Tabu search          Local search         Without search
Problems     Dev (%)   Time (s)   Dev (%)   Time (s)   Dev (%)   Time (s)
T8_25        0.42      6.12       0.48      2.19       0.53      0.14
T25_25       1.08      15.18      1.22      6.83       1.50      0.53
T33_50       0.59      69.05      0.79      51.05      1.24      1.56
T50_50       0.61      85.96      1.29      70.28      2.04      2.21

where W2 is a positive constant. The fixed costs were defined as f_i = W1 + W1 × 0.25 × R, where R is a number drawn from a uniform distribution between 0 and 1 and W1 is a positive number. The


Table 11
Average computational results for each class of problems (Lagrangean heuristic with tabu search)

           T8_25                T25_25               T33_50               T50_50
Problem    Dev (%)   Time (s)   Dev (%)   Time (s)   Dev (%)   Time (s)   Dev (%)   Time (s)
A_1.2      0.12      12.20      0.03      14.60      0.51      83.60      1.40      106.20
A_1.3      0.10      6.00       0.19      13.80      0.21      87.60      1.04      110.20
A_1.4      0.26      5.80       0.33      16.20      0.17      74.80      0.24      102.00
A_1.5      1.04      10.80      0.48      17.00      0.74      71.40      0.31      83.20
A_3.0      0.11      1.40       0.68      12.80      0.82      58.00      0.78      64.80
A_5.0      0.59      17.78      0.72      42.48      0.75      65.91      0.75      74.00
B_1.2      3.30      8.60       3.06      24.80      1.70      80.40      0.14      84.40
B_1.3      0.27      7.80       4.52      18.20      0.32      81.40      0.82      96.40
B_1.4      0.10      6.20       0.15      12.60      0.33      91.80      0.06      82.60
B_1.5      0.23      11.20      0.37      23.80      2.23      75.40      0.56      104.00
B_3.0      0.21      2.20       1.49      14.00      0.45      52.00      0.69      64.60
B_5.0      0.18      0.20       0.58      6.40       0.39      43.20      0.73      57.40
C_1.2      0.15      12.60      4.53      12.00      0.16      88.00      0.17      98.80
C_1.3      0.42      9.00       0.28      19.80      0.22      70.20      0.18      99.60
C_1.4      0.29      6.00       0.19      17.60      0.23      64.80      0.22      101.00
C_1.5      0.13      7.40       0.88      22.20      0.62      72.80      0.70      116.20
C_3.0      0.59      2.60       0.44      14.40      0.93      60.20      0.85      69.60
C_5.0      0.00      0.00       0.60      7.60       0.16      43.80      0.83      50.40

Table 12
Average results for higher dimension T instances (Lagrangean heuristic with tabu search)

Problem   Dev1 (%)   Time1 (s)   Dev2 (%)   Time2 (s)
T25_50    0.80       98.00       0.49       36.20
T25_100   0.92       273.60      0.92       137.40
T25_250   0.69       1742.00     0.74       544.25
T25_500   0.54       7842.00     0.53       3338.33
T50_100   1.84       255.40      3.49       11.00
T50_250   3.13       1316.60     3.59       110.00
T50_500   3.41       7899.00     3.88       1108.25

The capacity of each plant, ai, was set to 500, and the demand of each customer was determined as bj = 50 * R, where R is defined as above. A total of 320 problems, arranged in 32 groups of 10 problems each, was generated. Computational results, as well as the parameters used for each group of problems, are shown in Table 13. In that table we give:
1. n_m––the number of potential plant locations and the number of customers;
2. the W1 and W2 values;

3. the maximum (Max), the average (Aver) and the minimum (Min) deviation, in percentage;
4. the maximum (Max), the average (Aver) and the minimum (Min) CPU time.

Analysing Table 13, we can conclude that our algorithm performed well. The deviations obtained are very small (always less than 0.6%) and do not seem to be influenced by the parameter values. Concerning CPU times, we can observe that some groups show very large differences between the maximum and minimum values. This is confirmed by the average and the standard deviation: respectively, 9.38 and 12.76 s. The average time seems to be influenced by m (it increases as m increases), but not by n, W1 or W2. In Pirkul [27], deviations are reported for this set of problems. However, it is not possible to make direct comparisons, since they are calculated in a different way, namely the deviation between the lower bound and the optimal solution value as a percentage of the lower bound, and the deviation between the optimal and heuristic values as a percentage


Table 13
Parameters and computational results for Pirkul problems

                                  Dev (%)              Time (s)
Problem       n_m     W1   W2     Max    Aver   Min    Max      Aver    Min
Pir10402002   10_40   200  2      0.02   0.00   0.00   5.00     0.60    0.00
Pir10402004   10_40   200  4      0.00   0.00   0.00   1.00     0.40    0.00
Pir10404002   10_40   400  2      0.01   0.00   0.00   1.00     0.60    0.00
Pir10404004   10_40   400  4      0.04   0.00   0.00   5.00     0.80    0.00
Pir10602002   10_60   200  2      0.02   0.00   0.00   31.00    4.80    0.00
Pir10602004   10_60   200  4      0.53   0.05   0.00   16.00    3.60    1.00
Pir10604002   10_60   400  2      0.01   0.00   0.00   7.00     2.20    0.00
Pir10604004   10_60   400  4      0.04   0.01   0.00   17.00    5.10    0.00
Pir10802002   10_80   200  2      0.00   0.00   0.00   7.00     4.30    2.00
Pir10802004   10_80   200  4      0.14   0.02   0.00   68.00    20.90   1.00
Pir10804002   10_80   400  2      0.19   0.03   0.00   55.00    15.20   0.00
Pir10804004   10_80   400  4      0.04   0.00   0.00   36.00    9.10    1.00
Pir101002002  10_100  200  2      0.01   0.00   0.00   162.00   40.50   2.00
Pir101002004  10_100  200  4      0.02   0.00   0.00   121.00   47.20   3.00
Pir101004002  10_100  400  2      0.01   0.00   0.00   26.00    9.50    4.00
Pir101004004  10_100  400  4      0.04   0.00   0.00   5.00     0.80    0.00
Pir20402002   20_40   200  2      0.16   0.02   0.00   7.00     1.20    0.00
Pir20402004   20_40   200  4      0.13   0.01   0.00   6.00     1.20    0.00
Pir20404002   20_40   400  2      0.06   0.01   0.00   2.00     0.80    0.00
Pir20404004   20_40   400  4      0.04   0.01   0.00   7.00     1.80    0.00
Pir20602002   20_60   200  2      0.01   0.00   0.00   4.00     2.20    1.00
Pir20602004   20_60   200  4      0.01   0.00   0.00   4.00     2.40    1.00
Pir20604002   20_60   400  2      0.02   0.00   0.00   5.00     1.80    1.00
Pir20604004   20_60   400  4      0.00   0.00   0.00   4.00     2.30    1.00
Pir20802002   20_80   200  2      0.01   0.00   0.00   14.00    7.10    2.00
Pir20802004   20_80   200  4      0.00   0.00   0.00   22.00    6.50    2.00
Pir20804002   20_80   400  2      0.01   0.00   0.00   14.00    7.00    2.00
Pir20804004   20_80   400  4      0.01   0.00   0.00   16.00    5.30    3.00
Pir201002002  20_100  200  2      0.05   0.01   0.00   195.00   44.30   4.00
Pir201002004  20_100  200  4      0.06   0.01   0.00   127.00   21.50   3.00
Pir201004002  20_100  400  2      0.01   0.00   0.00   26.00    11.40   4.00
Pir201004004  20_100  400  4      0.01   0.00   0.00   47.00    17.90   3.00
Average                                  0.01                   9.38
S.D.                                     0.01                   12.76

of the optimal value. Nevertheless, we can observe that our mean average deviation (0.01) is smaller than both of the mean average deviations obtained by him. The comparison of CPU times is not possible, since in [27] only the CPU times needed to obtain the lower bounds are reported. For the Pirkul problems, we can conclude that our algorithm provides solutions of very high quality in a reasonable amount of time.

In Filho and Galvão [13] a very similar methodology was employed to obtain a set of randomly generated problems. Their methodology is the same as the one of Pirkul [27], but with the demand of each customer determined as bj = 50 * (R + 0.2) instead of bj = 50 * R. Computational results obtained for the Filho and Galvão problems, as well as the parameters used for each one of the 32 groups of problems, are shown in Table 14. As expected, since the Filho and Galvão problems and the Pirkul problems are randomly generated according to almost the same methodology, similar results were obtained: the same mean average deviation and a slight decrease in the mean average CPU time. Comparing the mean average deviation and the standard deviation,


Table 14
Parameters and computational results for Filho and Galvão problems

                                  Dev (%)              Time (s)
Problem       n_m     W1   W2     Max    Aver   Min    Max      Aver    Min
FG10402002    10_40   200  2      0.00   0.00   0.00   1.00     0.40    0.00
FG10402004    10_40   200  4      0.62   0.06   0.00   5.00     0.80    0.00
FG10404002    10_40   400  2      0.01   0.00   0.00   2.00     0.50    0.00
FG10404004    10_40   400  4      0.00   0.00   0.00   2.00     0.30    0.00
FG10602002    10_60   200  2      0.01   0.00   0.00   29.00    4.20    0.00
FG10602004    10_60   200  4      0.08   0.01   0.00   43.00    5.30    1.00
FG10604002    10_60   400  2      0.00   0.00   0.00   12.00    3.70    0.00
FG10604004    10_60   400  4      0.00   0.00   0.00   5.00     1.80    0.00
FG10802002    10_80   200  2      0.01   0.00   0.00   75.00    13.10   2.00
FG10802004    10_80   200  4      0.09   0.02   0.00   67.00    19.90   1.00
FG10804002    10_80   400  2      0.01   0.00   0.00   80.00    20.80   2.00
FG10804004    10_80   400  4      0.41   0.04   0.00   36.00    7.90    1.00
FG101002002   10_100  200  2      0.14   0.01   0.00   105.00   28.80   3.00
FG101002004   10_100  200  4      0.08   0.01   0.00   120.00   46.90   3.00
FG101004002   10_100  400  2      0.12   0.02   0.00   137.00   39.50   4.00
FG101004004   10_100  400  4      0.02   0.00   0.00   106.00   25.70   3.00
FG20402002    20_40   200  2      0.31   0.03   0.00   3.00     0.80    0.00
FG20402004    20_40   200  4      0.03   0.00   0.00   7.00     1.90    0.00
FG20404002    20_40   400  2      0.00   0.00   0.00   2.00     0.60    0.00
FG20404004    20_40   400  4      0.00   0.00   0.00   1.00     0.40    0.00
FG20602002    20_60   200  2      0.01   0.00   0.00   20.00    3.90    1.00
FG20602004    20_60   200  4      0.00   0.00   0.00   4.00     2.00    0.00
FG20604002    20_60   400  2      0.00   0.00   0.00   4.00     2.50    1.00
FG20604004    20_60   400  4      0.00   0.00   0.00   5.00     2.30    1.00
FG20802002    20_80   200  2      0.01   0.00   0.00   24.00    9.90    4.00
FG20802004    20_80   200  4      0.00   0.00   0.00   77.00    14.30   2.00
FG20804002    20_80   400  2      0.01   0.00   0.00   68.00    12.50   3.00
FG20804004    20_80   400  4      0.00   0.00   0.00   19.00    5.50    2.00
FG201002002   20_100  200  2      0.01   0.00   0.00   29.00    12.80   4.00
FG201002004   20_100  200  4      0.01   0.00   0.00   109.00   25.40   4.00
FG201004002   20_100  400  2      0.02   0.00   0.00   13.00    9.60    7.00
FG201004004   20_100  400  4      0.00   0.00   0.00   34.00    10.60   4.00
Average                                  0.01                   10.46
S.D.                                     0.01                   11.93

presented by Filho and Galvão––0.43 and 1.12––with ours––0.01 and 0.01––we can say that our approach performed very well. Concerning CPU times, direct comparisons are not possible, since different computers were used. However, we can observe that our standard deviation––11.93––is higher than the one reported by them––9.07.

A different methodology to obtain randomly generated problems was considered by Sridharan [28]. The demands were generated from a uniform distribution in the range between 5 and 35. The capacities of the plants were generated from a uniform distribution in the range between 10 and

160. The fixed cost for plant i was generated using the relation fi = ai + bi * sqrt(ai), where the ai's were generated from a uniform distribution in the range 0-90 and the bi's were generated from a uniform distribution in the range 100-110. Further, the capacities were scaled in such a way that the sum of the capacities is a predetermined multiple (taken as 10) of the total demand. This scaling is done before the fixed costs are computed. For generating the transportation costs cij, the following methodology was used: points in a unit square were generated, and the Euclidean distance between them was computed. This Euclidean distance


Table 15
Computational results for Sridharan problems

                    Dev (%)              Time (s)
Problem    n_m      Max    Aver   Min    Max     Aver    Min
Srid8_25   8_25     0.02   0.00   0.00   0.00    0.00    0.00
Srid16_25  16_25    0.05   0.02   0.01   5.00    1.00    0.00
Srid25_25  25_25    0.15   0.05   0.02   10.00   2.20    0.00
Srid16_50  16_50    0.02   0.01   0.01   1.00    0.40    0.00
Srid33_50  33_50    0.17   0.04   0.01   38.00   11.40   0.00
Srid50_50  50_50    0.06   0.03   0.01   64.00   36.20   0.00
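The Sridharan [28] generation scheme described in the surrounding text can be sketched in Python. This is a hedged sketch: the function name is ours, and we assume the square root in the fixed-cost relation is taken over the already-scaled plant capacity, since the text states that the scaling is done before the fixed costs are computed.

```python
import math
import random

def generate_sridharan_instance(n, m, seed=None):
    """Hypothetical sketch of the Sridharan [28] generation scheme:
    demands ~ U(5, 35), capacities ~ U(10, 160) scaled so that total
    capacity is 10 times total demand, fixed costs f_i = a_i + b_i * sqrt(cap_i)
    with a_i ~ U(0, 90) and b_i ~ U(100, 110), and transportation costs
    equal to 10 times the Euclidean distance between random points in
    the unit square."""
    rng = random.Random(seed)
    b = [rng.uniform(5, 35) for _ in range(m)]    # demands
    cap = [rng.uniform(10, 160) for _ in range(n)]  # raw capacities
    # Scale capacities so that their sum is 10 times the total demand.
    scale = 10 * sum(b) / sum(cap)
    cap = [c * scale for c in cap]
    # Fixed costs (assumption: sqrt of the scaled capacity).
    f = [rng.uniform(0, 90) + rng.uniform(100, 110) * math.sqrt(c) for c in cap]
    # Transportation costs: 10 times the Euclidean distance.
    plants = [(rng.random(), rng.random()) for _ in range(n)]
    clients = [(rng.random(), rng.random()) for _ in range(m)]
    cost = [[10 * math.dist(p, q) for q in clients] for p in plants]
    return cap, b, f, cost
```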

between points is multiplied by 10 to give the cij matrix. Tests were performed over six sets of problems, each one containing five problems. The computational results are given in Table 15. Analysing the results, we can conclude that our heuristic performed well. Comparing them with the results presented in Sridharan [28], we can observe that:
• except for the dimension 25_25, better average deviations were obtained by us;
• even if it is not possible to compare CPU times, since different computers were used, the increase in CPU time with problem dimension obtained here is smaller than in [28].

Beasley's OR Library [4] contains a set of instances for the capacitated facility location problem, varying from small to large sizes. Some of these instances are infeasible for the single source version of the problem, since there are customers whose demands are greater than the capacities of the plants. To overcome this difficulty, as proposed by Klincewicz and Luss [22], customer demands that exceed 3700 are set to 3700. In Tables 16 and 17, the parameters and computational results obtained for the small-medium and large size problems are given (the upper bound was calculated only when the lower bound improved). In Table 16 we can observe that for 61% of the instances a deviation of 0% was obtained. For the others, the deviations are always less than 0.71%. CPU times are very instance dependent and do not seem to have any kind of relation with the parameters. For the large size problems (Table 17) the average deviation is worse, but only for two of them was a deviation greater than 2% obtained.

Other researchers have reported results for these instances. For the Cap8*-6 and Cap8*-7 instances, the average deviations reported by Klincewicz and Luss [22], Pirkul [27], Deng and Simchi-Levi [11] (since that paper is unpublished and it was not possible to obtain it, we have taken the mean average from Filho and Galvão [13]) and Filho and Galvão [13], together with the one obtained by us, are:

Klincewicz and Luss   Pirkul   Deng and Simchi-Levi   Filho and Galvão   Our algorithm
3.46                  0.28     0.03                   0.29               0.18

Our algorithm, when compared with the others, seems to be capable of providing good mean deviations for this set of instances. Comparisons between CPU times are not possible, since different computers were used. For the whole set of instances, the average deviations reported by Filho and Galvão [13] and obtained by us are:

Filho and Galvão   Our algorithm
0.2                0.08

With our algorithm, a better average deviation was obtained. Concerning CPU times, and since


Table 16
Parameters and computational results for small-medium size problems

Problem    n_m     a        f        Dev (%)   Time (s)
Cap41-5    16_50   5000     7500     0.00      21.00
Cap41-6            6000     7500     0.00      73.00
Cap41-7            7000     7500     0.00      17.00
Cap42-5            5000     10000    0.00      24.00
Cap42-6            6000     10000    0.03      109.00
Cap42-7            7000     10000    0.00      13.00
Cap43-5            5000     17500    0.00      21.00
Cap43-6            6000     17500    0.06      105.00
Cap43-7            7000     17500    0.00      24.00
Cap44-5            5000     25000    0.12      98.00
Cap44-6            6000     25000    0.26      79.00
Cap44-7            7000     25000    0.72      103.00
Cap51              10000    17500    0.17      59.00
Cap61              15000    7500     0.00      3.00
Cap62              15000    12500    0.00      3.00
Cap63              15000    17500    0.01      68.00
Cap64              15000    25000    0.45      72.00
Cap71              58268    7500     0.00      2.00
Cap72              58268    12500    0.00      2.00
Cap73              58268    17500    0.00      1.00
Cap74              58268    25000    0.00      1.00
Cap81-5    25_50   5000     7500     0.02      29.00
Cap81-6            6000     7500     0.00      16.00
Cap81-7            7000     7500     0.00      9.00
Cap82-5            5000     12500    0.00      21.00
Cap82-6            6000     12500    0.12      44.00
Cap82-7            7000     12500    0.00      22.00
Cap83-5            5000     17500    0.00      31.00
Cap83-6            6000     17500    0.32      86.00
Cap83-7            7000     17500    0.17      99.00
Cap84-5            5000     25000    0.45      93.00
Cap84-6            6000     25000    0.71      106.00
Cap84-7            7000     25000    0.10      86.00
Cap91              15000    7500     0.00      2.00
Cap92              15000    12500    0.00      4.00
Cap93              15000    17500    0.22      133.00
Cap94              15000    25000    0.19      83.00
Cap101             58268    7500     0.00      3.00
Cap102             58268    12500    0.00      3.00
Cap103             58268    17500    0.00      2.00
Cap104             58268    25000    0.00      3.00
Cap111-5   50_50   5000     7500     0.02      47.00
Cap111-6           6000     7500     0.00      26.00
Cap111-7           7000     7500     0.00      13.00
Cap112-5           5000     12500    0.00      49.00
Cap112-6           6000     12500    0.12      69.00
Cap112-7           7000     12500    0.00      22.00
Cap113-5           5000     17500    0.00      33.00
Cap113-6           6000     17500    0.10      98.00
Cap113-7           7000     17500    0.00      31.00
Cap114-5           5000     25000    0.19      98.00


Table 16 (continued)

Problem    n_m     a        f        Dev (%)   Time (s)
Cap114-6   50_50   6000     25000    0.16      110.00
Cap114-7           7000     25000    0.08      83.00
Cap121             15000    7500     0.00      4.00
Cap122             15000    12500    0.00      8.00
Cap123             15000    17500    0.02      58.00
Cap124             15000    25000    0.19      81.00
Cap131             58268    7500     0.00      3.00
Cap132             58268    12500    0.00      7.00
Cap133             58268    17500    0.00      5.00
Cap134             58268    25000    0.00      1.00
Average                              0.08      42.93
S.D.                                 0.16      39.41

different computers were used, we can only observe that a higher standard deviation was obtained with our algorithm. Comparison is also possible for the Cap6*, Cap7*, Cap9*, Cap10*, Cap12* and Cap13* problem instances used by Hindi and Pienkosz [20]:

Hindi and Pienkosz   Our algorithm
0.03                 0.05

and also for the Capa*, Capb* and Capc* instances:

Hindi and Pienkosz   Our algorithm
0.92                 0.93

For these instances, our algorithm gives higher mean averages. However, comparing the deviations, we can say that:
• for small-medium size problems, and exception made for Cap64, slightly better or equal deviations were obtained by us;
• for large size problems, only for the Capb2 and Capc* instances were better deviations obtained by us.

In Hindi and Pienkosz [20], the solution procedure has been coded in Pascal, and the computational tests were carried out on a 233 MHz PC, which belongs to the same class as the one we used.

Table 17
Computational results for large size problems

Problem   n_m        Dev (%)   Time (s)
Capa1     100_1000   0.11      11860.00
Capa2     100_1000   1.27      5594.00
Capa3     100_1000   3.49      3832.00
Capa4     100_1000   0.00      4122.00
Capb1     100_1000   0.00      11166.00
Capb2     100_1000   1.30      9705.00
Capb3     100_1000   2.64      10666.00
Capb4     100_1000   0.24      10752.00
Capc1     100_1000   0.67      9381.00
Capc2     100_1000   1.07      9819.00
Capc3     100_1000   0.31      11113.00
Capc4     100_1000   0.08      10925.00
Average              0.93      9077.92
S.D.                 1.12      2864.05

Consequently, it is possible to make comparisons between CPU times. For small-medium size problems, we obtained a smaller mean (23 s) than Hindi and Pienkosz (39.48 s), but a higher standard deviation––37.22 against 16.13. For large size problems, the times obtained by them are significantly better than ours: the maximum value obtained by them was 38 min. The next set of test problems was also used by Hindi and Pienkosz [20], Delmaire et al. [9] and Barceló et al. [3], and is available at www.upc/es/catala/webs/webs.html. Computational results are shown in Table 18, where n_m is the problem dimension, UBound and LBound are, respectively, the best upper and lower


Table 18
Computational results for Barceló et al. problems (Lagrangean heuristic with tabu search; the instances have dimensions n_m of 10_20, 15_30, 20_40 and 20_50)

Problem   UBound   LBound     Time (s)   Dev (%)
p1        2024     1975.87    6.00       2.44
p2        4251     4203.15    6.00       1.14
p3        6051     6044.64    5.00       0.11
p4        7168     7105.33    6.00       0.88
p5        4551     4515.63    5.00       0.78
p6        2269     2259.31    8.00       0.43
p7        4374     4285.23    12.00      2.07
p8        7926     7896.47    11.00      0.37
p9        2480     2448.86    16.00      1.27
p10       23112    23069.63   14.00      0.18
p11       3447     3437.71    13.00      0.27
p12       3721     3682.25    12.00      1.05
p13       3760     3722.36    12.00      1.01
p14       5984     5886.74    12.00      1.65
p15       7816     7787.59    15.00      0.36
p16       11543    11514.59   14.00      0.25
p17       9884     9847.47    12.00      0.37
p18       15615    15592.27   26.00      0.15
p19       18683    18670.89   25.00      0.06
p20       26583    26519.74   26.00      0.24
p21       7333     7250.37    27.00      1.14
p22       3271     3245.03    27.00      0.80
p23       6036     6025.10    21.00      0.18
p24       6327     6315.09    22.00      0.19
p25       8947     8946.52    1.00       0.01
p26       4448     4422.20    36.00      0.58
p27       10921    10909.51   31.00      0.11
p28       11121    11102.94   37.00      0.16
p29       9835     9817.10    33.00      0.18
p30       10973    10737.69   39.00      2.19
p31       4466     4450.11    35.00      0.36
p32       9881     9872.34    31.00      0.09
p33       39617    39371.03   39.00      0.62
Mean                          19.24      0.66
S.D.                          11.44      0.65

bound obtained, Time (s) is the CPU time in seconds, and Dev (%) is the deviation in percentage. Computational results for this set of problems were reported by Hindi and Pienkosz [20]. Comparing the mean percentage deviation obtained by them (0.56) with the one obtained by us (0.66), we can conclude that our method performed worse. However, it can be noted that it was possible to improve the upper bound for 13 instances, and only for four instances (p1, p18, p28 and p29) were worse results obtained. Concerning lower bounds, our algorithm gives worse results.
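The Dev (%) column is consistent with the relative gap between the upper and lower bound; a minimal sketch (the formula is inferred from the table values, not stated explicitly in this excerpt):

```python
def deviation_pct(ubound, lbound):
    # Percentage deviation of the upper bound from the lower bound:
    # 100 * (UB - LB) / LB.
    return 100.0 * (ubound - lbound) / lbound

# Reproducing the first two Dev (%) entries of Table 18:
print(round(deviation_pct(2024, 1975.87), 2))   # 2.44 (problem p1)
print(round(deviation_pct(4251, 4203.15), 2))   # 1.14 (problem p2)
```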

Comparing CPU times (as mentioned before, similar computers were used), we can observe that our average and standard deviation (19.24 and 11.44) are smaller than the ones obtained by them (45.13 and 24.57). In Hindi and Pienkosz [20], the best upper bound found for each instance by the search algorithms used by Delmaire et al. [9], which improve the solution values found by Barceló et al. [3], is given. Comparing these values with our upper


Table 19
Computational results for problems available at www.upc/es/catala/webs/webs.html (Lagrangean heuristic with tabu search)

Problem   n_m     UBound   LBound     Time (s)   Dev (%)
p34       30_60   4701     4666.77    46         0.73
p35               5457     5454.90    42         0.04
p36               16794    16747.34   54         0.28
p37               14672    14607.55   49         0.44
p38               47249    47239.85   52         0.02
p39               41022    40996.40   61         0.06
p40               61654    61598.11   62         0.09
p41               17246    17245.47   1          0.00
p42       30_75   7887     7885.96    67         0.01
p43               5126     5106.30    63         0.39
p44               36046    35975.54   95         0.20
p45               17676    17675.14   41         0.00
p46               48838    48831.25   77         0.01
p47               66241    66202.57   94         0.06
p48               58965    58951.44   65         0.02
p49       30_90   79643    79584.77   88         0.07
p50               5947     5904.07    101        0.73
p51               9060     9039.30    112        0.23
p52               34659    34641.90   119        0.05
p53               30042    30031.01   114        0.04
p54               43853    43852.68   12         0.00
p55               69693    69585.22   118        0.15
p56               64474    64473.55   9          0.00
p57               49791    49790.42   7          0.00
Mean                                  64.54      0.15
S.D.                                  35.81      0.22

bounds, we can say that it was possible to improve the best solution for eight instances (p8, p9, p11, p17, p18, p20, p26, p33), while for nine of them (p1, p7, p12, p14, p21, p28, p29, p30, p33) worse values were obtained.

The final set tested is available at www.upc/es/catala/webs/webs.html, and contains instances with 30 potential plant locations and four different values for the number of customers. Computational results are given in Table 19, where the same notation as in Table 18 is used. For these instances, the solutions obtained are very good: only for four of them is the deviation greater than 0.3%.

5. Conclusions

In this paper, we have studied a Lagrangean heuristic combined with search methods, namely with local and tabu search. The computational results show that tabu search performed better than local search. The Lagrangean heuristic combined with tabu search performed quite well, even for some large instances. The results obtained show that the quality of the mean deviation is highly dependent on the instance itself, namely when different data structures were tested. Extensions of this work will be done, namely trying to find a better tabu search algorithm. Other combinations of Lagrangean heuristics and search methods––in particular with other metaheuristics––will be part of our future research.

References

[1] M.C. Agar, S. Salhi, Lagrangean heuristics applied to a variety of large scale capacitated plant location problems, Journal of the Operational Research Society 49 (1998) 1072–1084.
[2] J. Barceló, J. Casanovas, A heuristic Lagrangean algorithm for the capacitated plant location problem, European Journal of Operational Research 15 (1984) 212–226.
[3] J. Barceló, E. Fernández, K. Jörnsten, Computational results from a new relaxation algorithm for the capacitated plant location problem, European Journal of Operational Research 53 (1991) 38–45.
[4] J.E. Beasley, OR-Library: Distributing test problems by electronic mail, Journal of the Operational Research Society 41 (1990) 1069–1072.
[5] J.E. Beasley, Lagrangean heuristics for location problems, European Journal of Operational Research 65 (1993) 383–399.
[6] N. Christofides, A. Mingozzi, P. Toth, Combinatorial Optimization, Wiley, Chichester, 1979.
[7] CPLEX Optimization, Inc., Using the CPLEX Callable Library, version 3.0, 1994.
[8] K. Darby-Dowman, H.S. Lewis, Lagrangean relaxation and the single source capacitated facility location problem, Journal of the Operational Research Society 39 (11) (1988) 1035–1040.
[9] H. Delmaire, J.A. Díaz, E. Fernández, M. Ortega, Comparing new heuristics for the pure integer capacitated plant location problem, Research report DR97/10, Dpt E10, Universidad Politécnica de Catalunya, Spain.
[10] O. DeMaio, C.A. Roveda, An all zero-one algorithm for a certain class of transportation problems, Operations Research 19 (1971) 1406–1418.
[11] Q. Deng, D. Simchi-Levi, Valid inequalities, facets and computational results for the capacitated concentrator location problem, Unpublished paper, 1992.
[12] D. Erlenkotter, A dual-based procedure for the uncapacitated facility location, Operations Research 26 (1978) 992–1009.
[13] V.J.M.F. Filho, R.D. Galvão, A tabu search heuristic for the concentrator location problem, Location Science 6 (1998) 189–209.
[14] R.W. Floyd, Algorithm 97: Shortest Path, Communications of the ACM 5 (6) (1962) 345.
[15] F. Glover, Future paths for integer programming and links to artificial intelligence, Computers and Operations Research 13 (1986) 533–549.


[16] F. Glover, M. Laguna, Tabu Search, Kluwer Academic Publishers, Dordrecht, 1997.
[17] M. Guignard-Spielberg, S. Kim, A strong Lagrangean relaxation for the capacitated plant location problems, Working paper no. 56, Department of Statistics, The Wharton School, University of Pennsylvania, 1983.
[18] M. Held, R.M. Karp, The travelling salesman problem and minimum spanning trees, Operations Research 18 (1970) 1138–1162.
[19] M. Held, R.M. Karp, The travelling salesman problem and minimum spanning trees: Part II, Mathematical Programming 1 (1971) 6–25.
[20] K.S. Hindi, K. Pienkosz, Efficient solution of large scale, single source, capacitated plant location problem, Journal of the Operational Research Society 50 (1999) 268–274.
[21] K. Holmberg, M. Rönnqvist, D. Yuan, An exact algorithm for the capacitated facility location problems with single sourcing, European Journal of Operational Research 113 (1999) 544–559.
[22] J.G. Klincewicz, H. Luss, A Lagrangean relaxation heuristic for the capacitated facility location with single-source constraints, Journal of the Operational Research Society 37 (5) (1986) 495–500.
[23] A. Kuehn, M.J. Hamburger, A heuristic program for locating warehouses, Management Science 9 (1963) 643–666.
[24] S. Martello, P. Toth, Knapsack Problems: Algorithms and Computer Implementations, John Wiley & Sons, New York, 1990.
[25] A.E. Neebe, M.R. Rao, An algorithm for the fixed-charge assigning users to sources problem, Journal of the Operational Research Society 34 (11) (1983) 1107–1113.
[26] R.M. Nauss, An improved algorithm for the capacitated facility location problem, Journal of the Operational Research Society 29 (1978) 1195–1201.
[27] H. Pirkul, Efficient algorithms for the capacitated concentrator problem, Computers and Operations Research 14 (1987) 197–208.
[28] R. Sridharan, A Lagrangean heuristic for the capacitated plant location problem with single source constraints, European Journal of Operational Research 66 (1993) 305–312.
Sridharan, A Lagrangean heuristic for the capacitated plant problem with single source constraints, European Journal of Operational Research 66 (1993) 305– 312.