Probabilistic local search algorithms for concave cost transportation network problems


European Journal of Operational Research 117 (1999) 511–521

www.elsevier.com/locate/orms

Theory and Methodology

Shangyao Yan *, So-Chang Luo

Department of Civil Engineering, National Central University, Chungli 32054, Taiwan, ROC

Received 27 January 1997; accepted 16 July 1998

Abstract

In practice, concave cost transportation problems are characterized as NP-hard; therefore cost functions are usually simplified as linear in order to facilitate problem solving. However, linear cost functions may not reflect actual operations, which generally results in decreased operational performance. This research employs the techniques of simulated annealing and threshold accepting to develop several heuristics that efficiently solve these concave cost transportation network problems. A network generator has also been designed to generate many instances on an HP workstation to test the heuristics. The preliminary results show that these heuristics are potentially useful. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Concave cost; Transportation problem; Simulated annealing; Threshold accepting

1. Introduction

As shown in Fig. 1, the general transportation problem is formulated as a bipartite network, G = (N, A, M), where N is the set of all supply nodes, M the set of all demand nodes, and A the set of all arcs. Arc (i, j) denotes the arc from supply node i to demand node j. Let S_i be the supply for node i in N, let D_j be the demand for node j in M, and let f(x_ij) be the cost function for transporting x_ij units of goods along arc (i, j). Also let u_ij and l_ij be the flow's upper bound and

* Corresponding author. Tel.: +886 3 422 7151 x4141; fax: +886 3 425 2960; e-mail: [email protected]

lower bound, respectively, for arc (i, j). The transshipment problem can be formulated as follows:

Minimize f(x) = Σ_{i∈N} Σ_{j∈M} f(x_ij)    (1a)

subject to

Σ_{j∈M} x_ij = S_i    ∀ i ∈ N,    (1b)

Σ_{i∈N} x_ij = D_j    ∀ j ∈ M,    (1c)

u_ij ≥ x_ij ≥ l_ij    ∀ (i, j) ∈ A.    (1d)

The objective, (1a), minimizes the total transshipment cost. Constraints (1b) and (1c) are the flow conservation constraints at every supply node and at every demand node, respectively. Constraint (1d) ensures that every arc flow is within its upper

0377-2217/99/$ – see front matter © 1999 Elsevier Science B.V. All rights reserved. PII: S0377-2217(98)00270-7


Fig. 1. Transportation network.

and lower bound. There are |N||M| arcs in the network. Assume that the sum of total supply equals that of total demand. To facilitate problem solving, f(x_ij) is usually simplified as linear (for example, f(x_ij) = c_ij x_ij), for which there are numerous efficient optimization algorithms, for example, the transportation network simplex method (Ahuja et al., 1993). However, such linear cost functions may not reflect actual operations. In practice, the unit cost for transporting goods usually decreases as the amount of goods increases. For example, the fare structure for freight transportation is generally concave (Gallo and Sodini, 1979), as shown in Fig. 2. Because concave cost transportation problems are characterized as NP-hard (Larsson et al., 1994), it is time-consuming to solve large-scale problems optimally. Many solution algorithms have been developed for handling such problems. These algorithms have employed traditional mathematical programming techniques such as linear approximation, Lagrangian relaxation, branch-and-bound or dynamic programming to

Fig. 2. Concave cost function.

assist in their solutions (for examples, see Blumenfeld et al., 1985; Erickson et al., 1986; Florian and Klein, 1971; Gallo et al., 1980; Gallo and Sodini, 1979; Guisewite and Pardalos, 1993; Hall, 1983; Jordan, 1986; Larsson et al., 1994; Rech and Barton, 1970; Thach, 1992; Zangwill, 1968). Note that Zangwill (1968) pointed out that the major difficulty in solving concave cost network flow problems results from the enormous number of local optima in the optimization. Thus, the traditional approaches may be inefficient for enumerating all local optima to find the global optimum. In our research we observed that modern metaheuristics, such as simulated annealing (SA) (Kirkpatrick et al., 1983) and threshold accepting (TA) (Dueck and Scheuer, 1990), may be useful for resolving concave cost transportation problems, for example, the aforementioned transshipment problem (1). In particular, any feasible spanning tree in problem (1) corresponds to an extreme point of the constraint set (Gallo and Sodini, 1979). Since every arc cost function is concave, the objective function (1a) is also, obviously, concave. Because of the concavity of the cost function, at least one optimal solution is located at an extreme point (Larsson et al., 1994). Furthermore, every extreme point is an integer solution because of the integrality property of problem (1), assuming that all the parameters in problem (1) are integers. As a result, the optimal solution for problem (1) is also an integer. From this, we might guess that a traditional local search is useful for finding the optimal solution for problem (1). That is, if we improve the solution from an extreme point to its adjacent extreme points, then, after a certain number of improvements, we might find a near-optimal solution. Unfortunately, since there could be an enormous number of local optima in the solution set, such a local search could easily fall into a local optimum whose objective is far from the optimal one.
Recently, new combinatorial optimization approaches, such as SA and TA, have been developed for efficiently "jumping" out of local optima. Therefore, SA or TA could be applicable to the resolution of concave cost network flow problems.
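To make the cost structure concrete, the concave objective used later in the paper, f_ij(x_ij) = c_ij √x_ij, can be evaluated for a candidate flow as in the sketch below. This is our illustration, not the authors' code; the 2×2 instance data are invented for the example. It also shows why concentrated ("all-or-nothing") flows tend to be cheaper under a concave cost, a property exploited by the initial-solution heuristic later in the paper.

```python
import math

def total_cost(flow, c):
    """Concave transshipment cost: sum of c_ij * sqrt(x_ij) over all arcs."""
    return sum(c[i][j] * math.sqrt(flow[i][j])
               for i in range(len(flow)) for j in range(len(flow[0])))

# A tiny 2-supply x 2-demand instance (hypothetical data).
c = [[4.0, 6.0], [5.0, 3.0]]

# Two feasible flows for supplies (10, 10) and demands (10, 10).
split = [[5, 5], [5, 5]]             # flow spread over all four arcs
all_or_nothing = [[10, 0], [0, 10]]  # flow concentrated on two arcs

# Concavity rewards concentration: the all-or-nothing flow is cheaper.
print(total_cost(split, c))           # 18 * sqrt(5)  ~ 40.25
print(total_cost(all_or_nothing, c))  # 7 * sqrt(10)  ~ 22.14
```

Both flows satisfy the same supplies and demands; only the concave objective distinguishes them.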


SA is a stochastic optimization algorithm. The main difference between SA and a local search is that, when finding a solution to a given problem, the latter technique searches for solutions in a downhill direction, i.e. the current solution is changed only if the change results in an improvement in the objective function value. As a result, if the local search technique is at a local optimal solution, it has no way of jumping out of this solution. This drawback is overcome in SA by allowing the technique to occasionally back out of a possibly inferior region. In other words, the SA technique does not always search for solutions in a downhill direction. For every new solution, SA determines the difference between the objective value of the previous solution and that of the new solution. If the difference is favorable, the previous solution is discarded and the new solution takes its place, which is the same as what is done in a local search technique. If the difference is not favorable, the new solution is accepted with a certain probability, which depends on the value of the difference. After first being introduced in Kirkpatrick et al. (1983) to solve VLSI layout and graph partitioning problems, SA has been applied to many NP-complete problems. For example, Golden and Skiscim (1986) used SA to solve routing and location problems. Robuste et al. (1990) applied SA to TSP and VRP problems. Alfa et al. (1991) used SA and 3-opt to solve the VRP. Heragu and Alfa (1992) applied SA to the layout problem. Many other applications and findings for SA are found in Heragu and Alfa (1992).

TA, a method similar to SA, was first introduced by Dueck and Scheuer (1990). The essential difference between SA and TA lies in their acceptance rules. For every new solution, TA determines the difference between the objective value of the previous solution and that of the new solution.
If the difference is favorable, the previous solution is discarded and the new solution takes its place, which is the same as in SA. If the difference is not favorable, the new solution is accepted when the difference falls within a given threshold. Intuitively, TA accepts every new solution which is "not much worse" than the previous one, while SA accepts worse solutions only with rather small


probabilities. TA is simpler than SA because with TA it is not necessary to compute probabilities or to make random decisions. Dueck and Scheuer (1990) found that TA performed better than SA in their studies.

Our purpose is to use SA and TA to develop heuristics that efficiently solve concave cost network flow problems. To reduce the complexity of this research, we focus on the concave cost transportation network problems shown in Fig. 1. Referring to Larsson et al. (1994) and LeBlanc (1976), the cost function for arc (i, j) is assumed to be f_ij(x_ij) = c_ij √x_ij, where x_ij denotes the flow on arc (i, j) and c_ij is an associated constant. Note that this cost function is similar to many transportation cost functions in practice. Moreover, since constraint (1d) can be transformed into a nonnegativity constraint without upper bound constraints, for simplicity we use nonnegativity constraints to stand for constraint (1d). We note that the same approach used in this research can be applied to general concave cost network flow problems. We also note that tabu search (Glover, 1989, 1990) and genetic algorithms (Goldberg, 1989) have been shown to be good at solving certain combinatorial optimization problems and could be applicable to the resolution of concave cost network flow problems as well. This is suggested for future research. The remainder of the paper is organized as follows. First, we investigate local improvement issues, then develop several heuristics and, finally, perform a numerical test on these heuristics.

2. Local improvement issues

In this section we discuss the acceptance rules for the traditional local search approach, SA and TA from a new point of view. We also discuss neighborhood searching for concave cost transportation network problems.

2.1. Acceptance rule

A traditional local search differs from SA or TA mainly in its acceptance rule. We


Fig. 3. The acceptance probability function for a local search.

developed a general framework to interpret the acceptance rules, as shown in Fig. 3, which depicts the acceptance rule of the local search. The vertical axis represents the acceptance probability (denoted as p) and the horizontal axis represents the difference in the objective (denoted as Δ) between the new solution and the previous one in a local move. The line shows that when the difference is less than zero, the acceptance probability is 100%, meaning the new solution will be accepted if its objective value is improved. On the other hand, when the difference is equal to or greater than zero, the acceptance probability becomes zero, meaning the new solution is not accepted if its objective value is not improved. Using the same framework, the acceptance rules for SA and TA are shown in Figs. 4 and 5, respectively. In Fig. 4, when the difference is less than zero, the acceptance probability is 100%, similar to that of the local search, meaning that the new solution will be accepted if its objective value is improved. When the difference is equal to or greater than zero, the acceptance probability decreases from 100% to a small probability, meaning that a new solution with a worse objective value is accepted with a probability that decreases as the

Fig. 4. The acceptance probability function for simulated annealing.

Fig. 5. The acceptance probability function for threshold accepting.

difference increases. The decreasing probability is usually negatively exponential (Heragu and Alfa, 1992). Note that when the annealing temperature decreases in SA (as described in Section 3.2), the acceptance curve moves toward the origin. Similarly, in Fig. 5, when the difference is below a given threshold that is greater than zero, the acceptance probability is 100%, meaning that the new solution is accepted if its objective value is not too bad (i.e. the difference is not greater than the threshold). When the difference is greater than the threshold, the acceptance probability becomes zero, meaning that the new solution is too bad to be accepted. Note that when the threshold decreases in TA, the acceptance line moves toward the left.

Within this framework, we found that the differences among these approaches inherently result from the acceptance probability functions. For example, SA is associated with a negative exponential probability function instead of the simple flat line associated with the traditional local search. TA is associated with a truncated uniform probability function and has been shown to perform better than SA in certain applications (Dueck and Scheuer, 1990). As mentioned above, TA improves on SA by avoiding the acceptance of "too poor" solutions and by simplifying the calculation of the SA probability function. However, unlike SA, TA does not differentiate the acceptance probability among accepted solutions with poorer objectives. Since such differentiation was shown to work well in many SA applications, we might guess that incorporating a decreasing probability similar to SA into the calculation of the acceptance probability would give a greater chance of finding better solutions in a search. Thus TA might be improved. We note that introducing such a probability function into TA should not require too many calculations, in order to maintain efficiency, which is indeed one of the ways in which TA improves on SA. We try to incorporate a linearly decreasing probability into TA (let us call it linear threshold accepting (LTA)) to test whether TA can be improved. Other acceptance probability functions can be similarly tested later. The acceptance rule for LTA is shown in Fig. 6. Note that a minimum acceptance probability can be set. As in TA, when the threshold decreases in LTA, the acceptance line moves toward the left.

Fig. 6. The acceptance rule for linear threshold accepting.

Fig. 7. A local move for a spanning tree.
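The four acceptance rules of Figs. 3-6 can be summarized as acceptance-probability functions of the objective difference Δ. The following sketch is our illustration (the parameter values in the loop are hypothetical); the LTA function follows Eq. (2) of Section 3.4, with the threshold expressed as an absolute amount rather than a percentage of z(S).

```python
import math

def p_local_search(delta):
    """Fig. 3: accept only strict improvements."""
    return 1.0 if delta < 0 else 0.0

def p_sa(delta, temperature):
    """Fig. 4: accept worse solutions with negative-exponential probability."""
    return 1.0 if delta <= 0 else math.exp(-delta / temperature)

def p_ta(delta, threshold):
    """Fig. 5: accept any solution not worse than the threshold."""
    return 1.0 if delta <= threshold else 0.0

def p_lta(delta, threshold, p0=0.25):
    """Fig. 6 / Eq. (2): probability falls linearly from 1 at delta = 0
    down to the lower bound p0 at delta = threshold."""
    if delta <= 0:
        return 1.0
    if delta > threshold:
        return 0.0
    return 1.0 + (p0 - 1.0) * delta / threshold

# Compare the four rules for an improving, a mildly worse and a bad move.
for delta in (-1.0, 0.5, 2.0):
    print(delta, p_local_search(delta), p_sa(delta, 1.0),
          p_ta(delta, 1.0), p_lta(delta, 1.0))
```

At Δ equal to the threshold, the LTA probability is exactly p0; with p0 = 1 the rule collapses to TA, as noted in Section 3.4.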

2.2. Neighborhood searching

We refer to the pivoting rules of the network simplex method (Ahuja et al., 1993) to search for adjacent extreme points. In particular, given an initial spanning tree (an extreme point), an "entering arc" added to the spanning tree forms a unique circuit. A "leaving arc" can then be determined and deleted to form another spanning tree, which is an extreme point adjacent to the previous one (Ahuja et al., 1993). Thus, a pair of entering and leaving arcs indicates the direction of a move. As shown in Fig. 7, arc (3, 1) is the entering arc and arc (3, 3) is the leaving arc. Note that in the network simplex method the entering arc is chosen based on a negative reduced cost (Ahuja et al., 1993).
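Once the circuit created by an entering arc has been identified, the leaving arc and the flow shift follow the classic stepping-stone update on the transportation tableau. The sketch below is our illustration on the simplest possible circuit, a 2×2 cycle with hypothetical data; it is not the authors' implementation.

```python
def stepping_stone_move(flow, cycle):
    """Shift flow around a circuit: cells at even positions in `cycle`
    gain flow, cells at odd positions lose it. theta is the largest
    feasible shift, and the leaving arc is the losing cell whose flow
    hits zero first."""
    losing = cycle[1::2]
    theta = min(flow[i][j] for i, j in losing)
    leaving = min(losing, key=lambda ij: flow[ij[0]][ij[1]])
    for k, (i, j) in enumerate(cycle):
        flow[i][j] += theta if k % 2 == 0 else -theta
    return theta, leaving

# Entering arc (1, 0) forms the circuit (1,0) -> (1,1) -> (0,1) -> (0,0).
flow = [[7, 3],
        [0, 5]]
theta, leaving = stepping_stone_move(flow, [(1, 0), (1, 1), (0, 1), (0, 0)])
print(theta, leaving)  # theta = 5, leaving arc (1, 1)
print(flow)            # [[2, 8], [5, 0]] -- row/column sums unchanged
```

Note that the row and column sums of the flow matrix are preserved, so the move stays on the feasible region while replacing one spanning-tree arc with another.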

Given a spanning tree, there are many arcs that can be chosen as entering arcs (a total of |N||M| − |N| − |M| + 1 arcs). Two neighborhood searches have commonly been used: the steepest descent search and the random search. To perform the former, we have to calculate the flow difference for every adjacent point in order to choose the best one. Thus, we have to identify all associated circuits and the changes in the associated arc flows. This will be time-consuming, especially when the calculations involve a complex concave cost function. Although the random search may provide a faster search, the solution quality of the moves may be poor. To trade off efficiency and effectiveness, we suggest examining a limited number (denoted as EPOCH) of arcs in every move. For simplicity, EPOCH is suggested to be a multiple of the total number of nodes, where the multiple is subject to testing. We randomly choose EPOCH arcs from all feasible arcs and then determine the entering one that results in the maximum improvement. The following pseudocode describes the searching process.

Procedure SEARCH
Begin
  k = 1; Δ = 9,999,999,999;
  Do while (k ≤ EPOCH)
    Randomly choose a new arc which has not been chosen;
    Identify the associated circuit and find the associated leaving arc from the circuit;


    Modify the arc flows and calculate the associated difference of the objective value, Δ_k;
    If Δ_k < Δ, then Δ = Δ_k and save the new solution as an incumbent;
    k = k + 1;
  End do
End_SEARCH.

3. Algorithm development

In this section, we first develop an approach to generate an initial solution; we then use SA, TA and LTA to develop three heuristics for improving the initial solution. Finally, to preliminarily evaluate the three heuristics, we also develop a heuristic using a linear approximation approach.

3.1. Initial solution (INIT)

Since a local improvement method must start from a feasible solution, we develop a heuristic to generate good initial solutions. Based on the characteristics of concave cost network flow problems (Kuhn and Baumol, 1962), if flows are assigned to arcs whose adjacent nodes have more supply or demand than other nodes, then the cost might be reduced. Besides, the total cost tends to be reduced if every arc flow tends to be either zero or the maximum amount that can be transshipped. In other words, an all-or-nothing assignment rule might help reduce the transshipment cost. The heuristic based on these observations is developed as follows.

Step 1. Sort all arcs;
  1.1. Determine the maximum amount of goods, x_ij, that can be transported from supply node i to demand node j: x_ij = min(s_i, d_j), where s_i is the remaining supply for node i and d_j is the remaining demand for node j.
  1.2. Calculate the unit cost for transporting x_ij, which is equal to c_ij/√x_ij.
  1.3. Sort all arcs in increasing order of c_ij/√x_ij.
Step 2. Assign arc flows;
  2.1. Assign arc flows sequentially using the remaining node supply and demand;

  2.2. Calculate the remaining supply or demand for all nodes;
Step 3. If all node supply and demand have been assigned, then go to Step 4; else go to Step 1;
Step 4. If the number of arcs with positive flows is less than (|M| + |N| − 1), then add arcs with zero flow to form a spanning tree; stop the algorithm.

Note that in each iteration, the supply or demand of at least half of the nodes with remaining supply or demand becomes zero. Let n be |M| + |N| and m be |M||N|. Thus, INIT finishes in at most log_2 n iterations. Since the complexity of each iteration is O(m log m) when using a heap sort in Step 1, the complexity of INIT is O(m log n log m).

3.2. Simulated annealing (SA)

Let T0 be an initial temperature, Rt a cooling rate, Tf a frozen temperature, S_inc the incumbent, and MAX the maximum number of moves for which the incumbent has not been improved. Referring to Alfa et al. (1991), Heragu and Alfa (1992), Golden and Skiscim (1986), and Robuste et al. (1990), we develop an SA algorithm as follows.

Step 0. Use INIT to find an initial solution, S, and its objective value, z(S); S_inc = S; set an initial temperature T0; Tm = T0; count = 0;
Step 1. Apply SEARCH to find a good adjacent point, S′; calculate its objective value, z(S′);
Step 2. Determine whether the new solution is accepted;
  2.1. Δ = z(S′) − z(S);
  2.2. If Δ ≤ 0, then S = S′; else set S = S′ with probability exp(−Δ/Tm);
  2.3. If z(S′) < z(S_inc), then S_inc = S′ and count = 0; else count = count + 1;
Step 3. If count > MAX, then Tm = Rt·Tm and count = 0; else go to Step 1;
Step 4. If Tm < Tf, then stop; else go to Step 1;

We note that SA is finite. First, INIT is finite. Second, every incumbent is obviously an extreme point, and an incumbent is changed (improved) within at most MAX iterations. Since the number of


extreme points of problem (1) is finite, the number of incumbent solutions is finite. Besides, the algorithm stops within at most ((T0 − Tf)/Rt) iterations after finding the optimal solution. As a result, the algorithm finishes in a finite number of iterations.

3.3. Threshold accepting (TA)

Referring to Dueck and Scheuer (1990), we develop a TA algorithm as follows, where T0 denotes an initial threshold, NUM the number of thresholds, S_inc the incumbent, and MAX the maximum number of moves for which the incumbent has not been improved. Note that a threshold used in this research is expressed as a percentage of the objective value. Also note that the reader can prove that TA is finite using an analysis similar to that for SA.

Step 0. Use INIT to find an initial solution, S, and its objective value, z(S); S_inc = S; set an initial threshold T0; Th = T0 and count = 0;
Step 1. Apply SEARCH to find a good adjacent point, S′; calculate its objective value, z(S′);
Step 2. Determine whether the new solution is accepted;
  2.1. Δ = z(S′) − z(S);
  2.2. If Δ ≤ Th·z(S), then S = S′;
  2.3. If z(S′) < z(S_inc), then S_inc = S′ and count = 0; else count = count + 1;
Step 3. If count > MAX, then lower Th and count = 0; else go to Step 1;
Step 4. If all thresholds have been tried, then stop; else go to Step 1;

3.4. Linear threshold accepting (LTA)

The acceptance probability function for a difference in objective values within the threshold is suggested below:

p = 1 + ((p0 − 1)/(Th·z(S))) · Δ,    (2)

where Δ ≤ Th·z(S) and p0 is a lower bound for the acceptance probability (0 ≤ p0 ≤ 1). Note that if p0 equals 1, then p equals 1, which is indeed TA, meaning that any new solution is accepted if its


objective value is not worse than the objective value of the current solution plus the threshold. Hence, TA is a special case of LTA. The steps for LTA are the same as those for TA except that Step 2.2 in TA is modified as follows.

  2.2. If Δ ≤ 0, then S = S′; else if 0 < Δ ≤ Th·z(S), then set S = S′ with probability 1 + ((p0 − 1)/(Th·z(S))) · Δ;

3.5. Linear approximation heuristic (LAH)

In order to preliminarily evaluate the aforementioned heuristics, and referring to Jordan (1986), we develop a linear-cost approximation heuristic. Note that the heuristic was shown to be good in Jordan (1986) and is particularly easy to code for solving our problems. The comparison of other algorithms with ours is left for future work. The heuristic starts from an initial solution, which can be obtained using INIT. In each iteration the heuristic uses the network simplex method to optimally solve a linear cost transportation problem. The cost function of the transportation problem is estimated using the arc flows of the previous solution. For example, if the flow on arc (i, j) in the previous solution is x0_ij, then the estimated cost function for this arc is

f(x_ij) = [f(x0_ij)/x0_ij] · x_ij = (c_ij/√x0_ij) · x_ij,    (3)

where x_ij is the arc flow in this iteration. The process is repeated until the incumbent has not been improved for 1000 iterations. The steps for LAH are listed below. The reader can similarly prove that LAH is finite, as was done for SA.

Step 0. Use INIT to find an initial solution, S, and its objective value, z(S); S_inc = S; count = 0;
Step 1. Approximate a linear cost function using S;
Step 2. Solve the modified linear cost transportation problem and obtain a new solution S′;
Step 3. If z(S′) < z(S_inc), then S_inc = S′ and count = 0; else count = count + 1;
Step 4. If count > 1000, then stop; else S = S′ and go to Step 1;
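The linearization in Eq. (3) replaces each concave arc cost by its secant through the origin and the previous flow, i.e. a linear cost with unit coefficient c_ij/√x0_ij. A small sketch of this step (our illustration, with hypothetical numbers):

```python
import math

def concave_cost(c, x):
    """Arc cost f(x) = c * sqrt(x)."""
    return c * math.sqrt(x)

def linearized_unit_cost(c, x_prev):
    """Eq. (3): f(x) is approximated by [f(x_prev)/x_prev] * x,
    i.e. a linear cost with unit coefficient c / sqrt(x_prev)."""
    return concave_cost(c, x_prev) / x_prev

c, x_prev = 3.0, 16.0
unit = linearized_unit_cost(c, x_prev)        # 3 / sqrt(16) = 0.75

# The approximation is exact at the previous flow ...
print(unit * x_prev, concave_cost(c, x_prev))  # both 12.0
# ... and overestimates the (concave) true cost for larger flows.
print(unit * 25.0, concave_cost(c, 25.0))      # 18.75 vs 15.0
```

Because the secant matches f at x0_ij and lies above it beyond x0_ij, each network simplex iteration of LAH solves a linear problem that agrees with the concave objective at the previous solution.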


4. Numerical experiment

In this section, we design a network generator, then determine good parameters for the algorithms, and finally evaluate the computational results.

4.1. Network generator

To test the algorithms, we design a network generator to generate concave cost transportation network problems. Since the performance of the metaheuristics may be influenced by problem scales and parameters, we use random numbers to generate randomized networks with various network scales and parameters (Demeulemeester et al., 1993; Klingman et al., 1974). We first randomly set the number of supply nodes and demand nodes, then randomly set the supply and demand for all nodes. To ensure that the flows are balanced at all nodes, the sum of all node supply is set equal to that of all node demand. Thereafter, we build arcs between every supply node and every demand node. The arc cost parameters are randomly set to be positive. We note that there is at least one feasible solution for any randomized network. Besides, since the arc cost functions are positive, the objective function for each network is bounded below by zero. As a result, there is an optimal solution for every network.

4.2. Algorithm parameters

Since SA, TA and LTA are metaheuristics, their performance should be related to their parameters. Before evaluating these algorithms, we used the test networks in Section 4.3 as input to search for good parameters. After numerous tests, the parameters for SA are suggested to be as follows: T0 = 10, Rt = 0.9, Tf = 1, EPOCH = |M| + |N|, MAX = 50. Note that in Step 3 of SA, we also tried a fixed number of iterations for each temperature, as was done in Alfa et al. (1991); however, we found that it was not as good as the rule proposed in Step 3. The parameters of TA are suggested to be as follows:

T0 = 0.39, NUM = 60, EPOCH = 2|M| + 2|N|, MAX = 50. Note that two shapes for lowering the thresholds were tested. One is a nonlinear decreasing shape, the same as that used by Dueck and Scheuer (1990); the other is a linear decreasing shape (decreasing uniformly from T0 to zero). After testing, we found that the former was slightly better than the latter. Therefore we recommend the one used by Dueck and Scheuer (1990). The parameters for LTA are suggested to be as follows: T0 = 0.26, NUM = 60, EPOCH = 2|M| + 2|N|, MAX = 50, p0 = 0.25. The shape of the lowering thresholds in LTA is suggested to be the same as that in TA.

4.3. Computational results

Based on the aforementioned parameters, we evaluated INIT, SA, TA, LTA and LAH on an HP735 workstation. We first generated five small-scale networks for evaluation. These small-scale problems can be manually optimized. As shown in Table 1, the average error for INIT was 27.52%. SA and TA both performed well, optimizing the initial solutions in all problems. LTA also performed well: four networks were optimally solved and one converged to a 1.53% error, resulting in a 0.31% average error. We see that although LAH improves on the initial solution, the average error for LAH (16.81%) is still significant, meaning that LAH is not as effective as SA, TA or LTA for solving these problem instances. As for computation times, INIT, LAH and LTA were all efficient; all of them finished in an average of 0.05 s or less. SA and TA were relatively slow, the former being slightly faster (an average of 1.69 s) than the latter (an average of 2.01 s). In order to further understand the performance of these algorithms, we randomly generated another 14 networks with larger scales. The results are summarized in Table 2, the error percentage of the objective value being calculated relative to the best objective obtained among all algorithms. The average error in the objective values for INIT was 84.2%.
Among the 14 networks, LTA outperformed the other algorithms in 11, while SA and TA were best for the other three.

Table 1
Computational results of small-scale networks

Network | INIT obj/%obj/Time | LAH obj/%obj/Time | SA obj/%obj/Time | TA obj/%obj/Time | LTA obj/%obj/Time | Optimal obj
4*4     | 1416.12/56.67/0.02 | 1155.30/27.81/0.02 | 903.89/0.00/1.46  | 903.89/0.00/1.76  | 903.89/0.00/0.04  | 903.89
4*5     | 1369.62/16.54/0.04 | 1236.15/5.18/0.04  | 1175.26/0.00/1.48 | 1175.26/0.00/1.68 | 1175.26/0.00/0.04 | 1175.26
5*6     | 2140.01/37.44/0.00 | 1987.56/27.65/0.04 | 1557.04/0.00/1.96 | 1557.04/0.00/2.36 | 1557.04/0.00/0.02 | 1557.04
6*5     | 1702.50/25.36/0.02 | 1654.31/21.81/0.16 | 1358.09/0.00/1.84 | 1358.09/0.00/2.20 | 1358.09/0.00/0.02 | 1358.09
7*4     | 1471.91/1.58/0.00  | 1471.91/1.58/0.00  | 1448.97/0.00/1.72 | 1448.97/0.00/2.06 | 1471.08/1.53/0.02 | 1448.97
Average | 1620.03/27.52/0.02 | 1501.05/16.81/0.05 | 1288.65/0.00/1.69 | 1288.65/0.00/2.01 | 1293.07/0.31/0.03 | 1288.65

Note: %obj = 100(obj − optimal obj)/optimal obj; times in seconds.

Table 2
Computational results of larger-scale networks

Network | INIT obj/%obj/Time  | LAH obj/%obj/Time   | SA obj/%obj/Time  | TA obj/%obj/Time   | LTA obj/%obj/Time | Best obj | Method
10*10   | 1844/55.22/0.04     | 1742/46.63/0.54     | 1188/0/3.78       | 1188/0/5.00        | 1195/0.59/0.14    | 1188   | SA, TA
12*12   | 2331/22.17/0.04     | 2222/16.46/0.50     | 1996/4.61/4.50    | 1917/0.47/5.12     | 1908/0/2.74       | 1908   | LTA
20*20   | 2253/52.33/0.12     | 2018/36.44/1.34     | 1527/3.25/7.84    | 1543/4.33/10.94    | 1479/0/9.56       | 1479   | LTA
25*25   | 64797/56.45/0.22    | 59862/44.53/2.20    | 41418/0/10.86     | 42006/1.42/15.04   | 46710/12.78/13.76 | 41418  | SA
30*30   | 3498/97.18/0.40     | 3029/70.74/4.38     | 1919/8.17/13.14   | 1947/9.75/18.48    | 1774/0/14.22      | 1774   | LTA
40*40   | 4260/84.50/0.94     | 4015/73.88/3.84     | 2313/0.17/20.06   | 2349/1.73/28.94    | 2309/0/21.68      | 2309   | LTA
50*50   | 3466/72.10/1.86     | 3279/62.81/5.62     | 2066/2.58/30.38   | 2020/0.30/43.24    | 2014/0/23.60      | 2014   | LTA
80*80   | 5069/86.15/8.80     | 4987/83.14/10.72    | 2747/0.88/56.92   | 2761/1.40/96.50    | 2723/0/45.86      | 2723   | LTA
80*120  | 5713/95.32/20.62    | 5533/89.16/30.88    | 3243/10.87/77.50  | 2925/0/136.30      | 3033/3.69/50.78   | 2925   | TA
100*100 | 133366/116.86/20.04 | 126067/104.99/25.02 | 64353/4.64/83.62  | 64670/5.16/113.66  | 61498/0/84.12     | 61498  | LTA
150*150 | 165802/123.31/38.50 | 158769/113.84/48.92 | 76004/2.37/109.98 | 77637/4.57/179.16  | 74246/0/191.84    | 74246  | LTA
110*220 | 183335/92.65/129.48 | 176221/85.17/140.86 | 98028/3.01/183.08 | 105104/10.44/271.40| 95165/0/306.56    | 95165  | LTA
180*160 | 175061/115.84/130.56| 160113/97.41/151.44 | 84252/3.88/192.28 | 81852/0.92/363.38  | 81108/0/367.40    | 81108  | LTA
200*200 | 173453/108.76/244.46| 152396/83.41/198.28 | 90268/8.64/239.22 | 90715/9.18/389.08  | 83089/0/512.56    | 83089  | LTA
Average | 66018/84.2/42.58    | 61447/72.04/44.61   | 33666/3.79/73.8   | 34188/3.55/119.73  | 32732/1.22/117.49 | 32346  |

Note: %obj = 100(obj − best obj)/best obj; times in seconds.


LTA averaged an error of 1.22%, which was the best among all the algorithms. TA was second to LTA, with an average of 3.55%. SA was slightly worse than TA, with an average of 3.79%. LAH was apparently worse than LTA, TA and SA, with an average of 72.04%. As for computation times, they were positively correlated with problem size for every algorithm. INIT finished in an average of 42.58 s. Among SA, TA, LTA and LAH, TA and LTA took longer than SA and LAH. LTA finished in an average of 117.49 s. TA, with an average of 119.73 s, was slightly longer than LTA. SA, with an average of 73.8 s, was longer than LAH. LAH was the fastest of the four, with an average of 44.61 s.

5. Summary and discussions

This research used the techniques of SA and TA to develop an SA-based and a TA-based heuristic for efficiently solving concave cost transportation network problems. Exploiting the advantages of both SA and TA, an LTA algorithm was developed based on TA with a decreasing acceptance probability function. In order to preliminarily evaluate these three heuristics, an LAH was developed. A heuristic for generating initial solutions (INIT) was developed based on the problem's characteristics. A network generator was also designed to generate many instances on an HP735 workstation to test the heuristics. Five small-scale networks and fourteen larger-scale networks were tested. The computation times for all test networks differ by less than a minute under the demonstrated computation capability. Setting computation times aside, the results based on the average objective errors show that LTA is the best of all. TA and SA are second to LTA, with TA slightly better than SA. LAH is the worst. In summary, the preliminary results show that the three metaheuristics are apparently better than LAH and are thus potentially useful for solving concave cost transportation network problems. We also find that LTA is better than TA, meaning that with a suitable decreasing probability function, TA can be improved. From this, other probability functions may be explored in the future to develop efficient algorithms for solving concave cost transportation problems. It should be mentioned that the above evaluation is based on the parameters proposed in Section 4.2. Although many tests were done to find good parameters (see Section 4.2), there may be other good parameters. For this, more tests may be performed in the future, probably including more problem instances. Finally, the evaluation of other concave cost functions (for example, concave piecewise linear cost functions) for transportation network problems or other network problems, and the development of a tabu search algorithm or a genetic algorithm for solving concave cost network flow problems, could also be directions for future research.

Acknowledgements

This research was partially supported by a grant (NSC-86-2621-E-008-001) from the National Science Council of Taiwan. We thank the anonymous referees for their valuable comments and suggestions.
