European Journal of Operational Research 124 (1999) 15–27
www.elsevier.com/locate/dsw
Theory and Methodology
A computational study of smoothing heuristics for the traveling salesman problem

Steven P. Coy a, Bruce L. Golden b,*, Edward A. Wasil c

a Continental Airlines, 1600 Smith St., Suite 900, Houston, TX 77002, USA
b Robert H. Smith School of Business, University of Maryland, College Park, MD 20742, USA
c Kogod College of Business Administration, American University, Washington, DC 20016, USA
Received 1 August 1998; accepted 1 January 1999
Abstract

Over the last five years or so, data smoothing has been used to improve the performance of heuristics that solve combinatorial optimization problems. Data smoothing allows a local search heuristic to escape from a poor local optimum. In practice, data smoothing has worked well when applied to the traveling salesman problem. In this paper, we conduct an extensive computational study to test the performance of eight smoothing heuristics for the traveling salesman problem. In particular, we apply eight smoothing heuristics and standard versions of two-opt and three-opt to 40 randomly generated Euclidean problems and 30 problems taken from a well-known library of test problems. We compare the solutions generated by the heuristics and provide insight into the behavior of the smoothing heuristics. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Traveling salesman problem; Data smoothing; Smoothing heuristics
* Corresponding author. Tel.: +1 301 405 2232; fax: +1 301 314 8787. E-mail address: [email protected] (B.L. Golden).

1. Introduction

Over the last five years or so, data smoothing has been used to improve the performance of local search heuristics for the traveling salesman problem (TSP). Data smoothing (also called search space smoothing and fine-tuned learning in the literature) works in the following way. Start with intercity distances that have been smoothed by a specified function. Apply a local search heuristic (for example, two-opt) and generate a locally optimal tour (called the current tour). Smooth the distances again (to a lesser extent), apply the local search heuristic using the current tour as the starting tour, and generate a new current tour. Continue until the local search heuristic is applied to the TSP with the original (unsmoothed) distances.

In practice, data smoothing has worked well when applied to the TSP. In general, data smoothing allows the local search heuristic to escape from a poor locally optimal solution. For
example, the search space smoothing heuristic of Gu and Huang (1994) avoids becoming trapped in a poor local minimum by making some moves that increase the length of the tour and not making some moves that decrease the length of the tour. This behavior is in contrast to many well-known TSP heuristics (including two-opt and three-opt) that never make moves that increase the length of the tour and always make moves that decrease the length of the tour.

In this paper, we conduct an extensive computational experiment that tests the performance of eight smoothing heuristics for the TSP. We apply the eight smoothing heuristics to 40 randomly generated Euclidean problems and 30 problems taken from a well-known library of test problems. We compare the solutions and provide insight into the behavior of the heuristics.

2. Background

2.1. Search space smoothing

One of the first TSP heuristics to use data smoothing was developed by Gu and Huang (1994). Their search space smoothing algorithm (denoted by GH smoothing) is given in Table 1. Gu and Huang smoothed the distances between the cities with the function given in Step 3. They started with a large value of the smoothing parameter so that all of the smoothed distances were nearly identical (see the graph for α = 5 in Fig. 1). As the value of α was decreased, the smoothed distances looked more like the original (unsmoothed) distances; when α = 1, the original distances were used.

GH smoothing performed well in several computational experiments conducted by Gu and Huang. To illustrate, in one set of experiments, they used two-opt as the local search heuristic in Step 4 and applied GH smoothing to 10 randomly generated, non-Euclidean TSPs with 50 cities in each problem. They also applied a standard version of two-opt to the 10 problems. Gu and Huang found that GH smoothing generated solutions that were 7% to 31% better on average than the solutions generated by the standard version of two-opt.

Schneider et al. (1997) extended the work of Gu and Huang by examining the performance of four smoothing functions (exponential, hyperbolic, sigmoidal, and logarithmic) in heuristics they designed to solve the TSP. We show these functions and their graphs in the first four rows of Table 2. Each graph is generated using four different values of the smoothing factor α. In the case of the exponential, sigmoidal, and logarithmic smoothing functions, as α approaches 0 (when α = 0, the value of d_ij is undefined), each function returns the original distances (this is the line at 45° in each graph). In the case of the hyperbolic smoothing function, when α is equal to 1, the function returns the original distances.
Table 1
Search space smoothing algorithm of Gu and Huang

Step 1. Let d_ij = the distance from city i to city j. Normalize all distances so that 0 ≤ d_ij ≤ 1. Specify the schedule for the smoothing factor as α = 5 (begin), 4, 3, 2, 1 (end).
Step 2. Generate a random starting tour.
Step 3. Set α equal to the next value in the smoothing schedule and then smooth the distances according to the following function:

\[
d_{ij}(\alpha) =
\begin{cases}
\bar{d} + (d_{ij} - \bar{d})^{\alpha}, & d_{ij} \ge \bar{d},\\
\bar{d} - (\bar{d} - d_{ij})^{\alpha}, & d_{ij} < \bar{d},
\end{cases}
\]

where \(\bar{d}\) is the average intercity distance.
Step 4. Apply a local search heuristic to the TSP with the smoothed distances to produce the current tour.
Step 5. If α = 1, stop. The current tour is the final tour. Otherwise, using the current tour, go to Step 3.
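To make Step 3 concrete, the following is a minimal Python sketch of the Gu–Huang transformation. It is our illustration, not the authors' code; the function name is ours, and the distances are assumed to be already normalized to [0, 1] as required by Step 1.

```python
import numpy as np

def gh_smooth(d: np.ndarray, alpha: float) -> np.ndarray:
    """Gu-Huang search space smoothing (Table 1, Step 3).

    d     -- matrix of normalized intercity distances (0 <= d_ij <= 1)
    alpha -- smoothing factor; alpha = 1 returns the original distances
    """
    n = d.shape[0]
    d_bar = d.sum() / (n * (n - 1))      # average over off-diagonal (intercity) pairs
    above = d >= d_bar
    smoothed = np.empty_like(d)
    # Distances above the mean are pulled down toward it and distances below
    # are pushed up toward it; a large alpha flattens everything to about d_bar.
    smoothed[above] = d_bar + (d[above] - d_bar) ** alpha
    smoothed[~above] = d_bar - (d_bar - d[~above]) ** alpha
    np.fill_diagonal(smoothed, 0.0)      # keep zero self-distances
    return smoothed
```

The outer loop of Table 1 then simply applies the local search (two-opt in Gu and Huang's experiments) to gh_smooth(d, alpha) for α = 5, 4, 3, 2, 1, reusing the previous tour as the starting tour.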
Fig. 1. Smoothing function of Gu and Huang.
In a limited computational experiment, Schneider et al. applied five smoothing functions (GH, exponential, hyperbolic, sigmoidal, and logarithmic) with a descent-based, greedy local search procedure to two problems taken from TSPLIB (PCB442, a printed circuit board problem with 442 drilling holes, and ATT532, a TSP through 532 cities of the United States). They solved each problem 50 times and averaged the tour lengths. On both problems, hyperbolic smoothing and logarithmic smoothing generated shorter tours, on average, than GH smoothing, exponential smoothing, and sigmoidal smoothing. In a second limited computational experiment, Schneider et al. applied the same five smoothing functions with the great deluge method as the search procedure (see Dueck, 1993) to PCB442 and ATT532. GH smoothing and logarithmic smoothing generated the shortest tours, on average, on PCB442, while hyperbolic smoothing and logarithmic smoothing generated the shortest tours, on average, on ATT532.

Coy et al. (1998) conducted an extensive computational study of GH smoothing on 40 Euclidean TSPs and 40 random TSPs that had 100, 316, and 1000 cities. They generated 25 solutions from a random starting tour and 25 solutions from a greedy starting tour and used two-opt as the local search heuristic. Coy et al. found that the smoothing schedule α = 6, 3, 2, 1 worked well. In all of their experiments, GH smoothing outperformed a standard version of two-opt. To illustrate, on 100-city Euclidean problems with a random start, the average tour length of the solutions produced by the standard version of two-opt was 3.34% above the average tour length of the solutions produced by GH smoothing.

Coy et al. conducted an analysis of GH smoothing that revealed why it worked well. They examined the behavior of GH smoothing in the concave, transitional, and convex regions of the smoothing function (for example, see the shape of the function for α = 2 in Fig. 1). They observed that GH smoothing escaped from poor local minima by occasionally moving uphill (the tour length increases) and by occasionally not taking a downhill move (the tour length decreases). Most of the uphill moves and missed downhill moves occurred in the transitional region of the GH smoothing function, with some missed downhill moves occurring in the concave region and some uphill moves occurring in the convex region. In addition, as the smoothing parameter decreased in value, the number of uphill moves and missed downhill moves decreased.

2.2. Concave and convex smoothing

Based on our analysis of where missed downhill moves and uphill moves are made with the GH smoothing function, we propose two new smoothing functions. The concave and convex smoothing functions and their graphs are shown in the final two rows of Table 2. Each graph is generated using four different values of the smoothing factor α. When α is equal to 1, each function returns the original distances. The concave and convex smoothing functions would be used in place of the Gu and Huang smoothing function given in Table 1.
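Since the graphs in the last two rows of Table 2 are not reproduced here, it helps to state the two proposed functions explicitly; they reappear in Steps 4 and 7 of Table 3 (the numerical example following the display is ours):

\[
d_{ij}(\alpha) = d_{ij}^{\alpha} \quad \text{(convex)},
\qquad
d_{ij}(\alpha) = d_{ij}^{1/\alpha} \quad \text{(concave)},
\qquad 0 \le d_{ij} \le 1, \;\; \alpha \ge 1 .
\]

For example, with α = 2 the convex map sends distances 0.1 and 0.9 to 0.01 and 0.81, stretching the relative gap between short and long edges, while the concave map sends them to about 0.32 and 0.95, compressing it; both maps reduce to the identity at α = 1.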
Table 2
Some other search space smoothing functions (graphs of each function for four values of α; not reproduced here)
2.3. Sequential smoothing

We now propose applying the convex smoothing function and then the concave smoothing function in an algorithm that we call sequential smoothing (see Table 3).

Table 3
Sequential smoothing algorithm

Step 1. Let d_ij = the distance from city i to city j. Normalize all distances so that 0 ≤ d_ij ≤ 1. Specify the schedule for the smoothing factor as α = 6 (begin), 3, 2, 1 (end).
Step 2. Generate a starting tour using the greedy heuristic.
Step 3. Set α equal to the next value in the smoothing schedule.
Step 4. Smooth the distances according to the convex function \(d_{ij}(\alpha) = d_{ij}^{\alpha}\).
Step 5. Apply the local search heuristic to the TSP with the convex smoothed distances to produce the current tour.
Step 6. If α = 1, stop. The current tour is the final tour. Otherwise continue.
Step 7. Smooth the distances with the concave function \(d_{ij}(\alpha) = d_{ij}^{1/\alpha}\).
Step 8. Apply the local search heuristic to the TSP with the concave smoothed distances to produce the current tour.
Step 9. Using the current tour, go to Step 3.
Sequential smoothing performs two smoothing operations for each value of α. First, the distances between cities are smoothed with a convex function and the local search heuristic is applied. Second, the distances are smoothed with a concave function and the local search heuristic is applied. Sequential smoothing ends with an application of the local search heuristic using the original distances. We use a smoothing schedule that is similar to the one that worked well for Coy et al. (1998) and a randomized greedy heuristic to generate a starting tour (see Coy et al. (1998) for additional details).
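The following Python sketch shows the loop structure of Table 3 under our assumptions; `greedy_tour` and `local_search` stand in for the randomized greedy construction and the two-opt procedure described above and are not defined here.

```python
def sequential_smoothing(d, greedy_tour, local_search, schedule=(6, 3, 2, 1)):
    """Sequential smoothing (Table 3): for each alpha, run the local search
    on convex-smoothed distances, then on concave-smoothed distances,
    finishing on the original distances (alpha = 1).

    d            -- numpy array of normalized distances (0 <= d_ij <= 1)
    greedy_tour  -- callable d -> starting tour (Step 2)
    local_search -- callable (distances, tour) -> improved tour, e.g. two-opt
    """
    tour = greedy_tour(d)                              # Step 2
    for alpha in schedule:                             # Step 3
        tour = local_search(d ** alpha, tour)          # Steps 4-5: convex pass
        if alpha == 1:                                 # Step 6: d**1 == d, so the
            break                                      # last pass used original distances
        tour = local_search(d ** (1.0 / alpha), tour)  # Steps 7-8: concave pass
    return tour                                        # Step 9 loops via the for statement
```

Note that in the experiments of Section 3 the schedule actually used for sequential smoothing is α = 7, 5, 3, 1 (see Table 4); the default above follows Step 1 of Table 3.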
3. Computational study

In this section, we study the performance of eight smoothing heuristics and the standard two-opt and three-opt heuristics on a variety of TSPs. The eight smoothing heuristics are denoted by GH, concave, convex, sequential, exponential, hyperbolic, sigmoidal, and logarithmic.

3.1. Test problems

In our computational tests, we use two sets of problems. In the first problem set, we generate three different sizes of problems (100, 316, and 1000 cities). Each problem is randomly generated (see Coy (1998) for more details). There are 25 problems with 100 cities, 10 problems with 316 cities, and 5 problems with 1000 cities. In each of the 40 problems, we use Euclidean distances (all of the Euclidean distances are real-valued). In the second problem set, there are 30 problems taken from the TSPLIB library (see Jünger et al., 1995). The problems range in size from 105 to 2392 cities and the optimal solution for each problem is known. We use the approximate Euclidean distance metric specified in TSPLIB.

3.2. Implementation details

Each heuristic is written in C for a 32-bit operating environment and is run on a 200 MHz Pentium personal computer. The two-opt heuristic is based on the tour-order two-opt heuristic described by Bentley et al. (1994). The three-opt heuristic is based on a hybrid recommended by Bentley et al. (1994). The hybrid three-opt heuristic checks for a two-opt move before it checks for a three-opt move. The final tour generated by the hybrid three-opt heuristic is two-optimal and three-optimal, and its average tour length is slightly less than that of the standard three-opt heuristic. According to Bentley et al., the hybrid three-opt heuristic reduces processing time. In our experiments, we only allow two-opt and three-opt to make a move if the move reduces the objective function value and does not create a subtour.

Each heuristic uses a neighbor list (see Johnson and McGeoch (1997) for more details). For the random Euclidean problems, the neighbor lists have 40 members. For the TSPLIB problems, the neighbor lists have 100 members. For the TSPLIB experiments that call for post-processing with
three-opt, only the first 40 members of the neighbor list are used during post-processing (more about this later).

In order to reduce processing time, we incorporate a computational device known as the don't-look bit into each heuristic. The don't-look bit allows the heuristic to consider only those cities that have recently been involved in improving moves. While using two-opt or three-opt with the don't-look bit does not always result in a locally optimal solution, Bentley et al. (1994) report that the increase in tour length is "barely measurable" while the reduction in processing time is substantial.
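As an illustration of how the neighbor lists and don't-look bits interact, here is a simplified Python sketch of a tour-order two-opt pass. It is our own reconstruction for exposition, not the C implementation used in the experiments; it scans successor edges only, and all names are ours.

```python
def two_opt(d, tour, neighbors):
    """Tour-order two-opt with sorted neighbor lists and don't-look bits.

    d         -- n x n distance matrix (possibly smoothed)
    tour      -- list of city indices, a permutation of 0..n-1
    neighbors -- neighbors[c]: cities near c, sorted by distance from c
    """
    n = len(tour)
    pos = [0] * n                              # pos[c] = index of city c in tour
    for i, c in enumerate(tour):
        pos[c] = i
    dont_look = [False] * n
    active = list(tour)                        # cities whose neighborhood may still improve
    while active:
        a = active.pop()
        if dont_look[a]:
            continue
        dont_look[a] = True                    # ignore a until a nearby edge changes
        i = pos[a]
        a_next = tour[(i + 1) % n]
        for b in neighbors[a]:
            if d[a][b] >= d[a][a_next]:
                break                          # sorted list: no improving move left for a
            j = pos[b]
            b_next = tour[(j + 1) % n]
            gain = d[a][a_next] + d[b][b_next] - d[a][b] - d[a_next][b_next]
            if gain > 1e-12:                   # improving exchange found
                _reverse(tour, pos, (i + 1) % n, j)   # reverse segment a_next..b
                for c in (a, a_next, b, b_next):      # wake the four endpoints
                    dont_look[c] = False
                    active.append(c)
                break
    return tour

def _reverse(tour, pos, i, j):
    """Reverse the cyclic segment tour[i..j] in place and update pos."""
    n = len(tour)
    for k in range(((j - i) % n + 1) // 2):
        u, v = (i + k) % n, (j - k) % n
        tour[u], tour[v] = tour[v], tour[u]
        pos[tour[u]], pos[tour[v]] = u, v
```

Replacing edges (a, a_next) and (b, b_next) by (a, b) and (a_next, b_next) requires reversing the intervening segment, which is why an improving move near one city can "wake" the don't-look bits of the cities at the four affected endpoints.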
3.3. Design of computational experiments

In order to run the eight smoothing heuristics, we must specify the schedule of values for the smoothing parameter α. We set the schedule of values by examining the plots of each smoothing function. The schedule of values for each heuristic is given in Table 4. The exponential, sigmoidal, and logarithmic smoothing functions are undefined at α = 0; at that point in the schedule (the last iteration), the local search procedure simply uses the original distances.

For each Euclidean problem, we generate 25 randomized greedy starting tours and then run
Table 4
Schedule of values for α

                Value of α at each iteration
Heuristic       1      2      3      4      5      6
GH              6      5      4      3      2      1
Concave         6      5      4      3      2      1
Convex          6      5      4      3      2      1
Sequential      7      5      3      1
Exponential     2      1      0.75   0.5    0.25   0
Hyperbolic      16     8      4      2      1.5    1
Sigmoidal       16     8      6      4      2      0
Logarithmic     50     20     10     5      1      0

The last value in each schedule corresponds to the original distances.
Table 5
Procedure for each experiment with the TSPLIB problems

2Opt: Local search using two-opt. Number of neighbors in the neighbor list is 100. One trial per problem, started from a greedy tour.

2Opt–3Opt (1): Local search using two-opt. After two-opt converges, post-process with three-opt. Smoothing heuristics: local search using two-opt when α > 1 and three-opt with the original distances. Size of the neighbor list is 100 for two-opt and 40 for three-opt. One trial per problem, started from a greedy tour.

2Opt–3Opt (10): Local search using two-opt. After two-opt converges, post-process with three-opt. Smoothing heuristics: local search using two-opt when α > 1 and three-opt with the original distances. Size of the neighbor list is 100 for two-opt and 40 for three-opt. Each problem is started from a randomized greedy tour; the best result from 10 randomized greedy tours is saved.

3Opt: Local search using three-opt. Size of the neighbor list is 100. One trial per problem, started from a greedy tour.
each heuristic. This produces 25 solutions to each problem. In addition, we perform four experiments on the TSPLIB problems. These are summarized in Table 5.

3.4. Computational results

In this section, we summarize the results of our experiments using the Euclidean problems and the problems from the TSPLIB library.

3.4.1. Results on the Euclidean problems

In Table 6, we report the average tour length, the average minimum tour length, and the average
processing time for nine heuristics (the standard two-opt heuristic and the eight smoothing heuristics) on the 40 Euclidean problems. The average tour length in each row of Table 6 is computed by averaging over 25 × 25 = 625 tours for the 100-city problems, 10 × 25 = 250 tours for the 316-city problems, and 5 × 25 = 125 tours for the 1000-city problems. The heuristics are listed in ascending order of average tour length (that is, the heuristic at the top of the list has the smallest average tour length). In each row, we average the 25, 10, or 5 minimum-length tours, depending on whether the problem has 100, 316, or 1000 cities, to produce the average minimum tour length. For each
Table 6
Computational results for nine heuristics on 40 Euclidean problems

100-city problems
Heuristic       Average tour length   Average min. tour length   Average time (s)
Sequential      7.9227                7.7865                     0.1929
Concave         7.9629                7.8090                     0.0951
Exponential     7.9850                7.8499                     0.1983
GH              7.9918                7.8480                     0.0932
Convex          8.0763                7.8917                     0.2211
Two-opt         8.0930                7.8922                     0.0039
Sigmoidal       8.0960                7.8831                     0.2419
Logarithmic     8.1131                7.9082                     0.1478
Hyperbolic      8.1260                7.9746                     0.0523
Average preprocessing time: 0.0332 s

316-city problems
Heuristic       Average tour length   Average min. tour length   Average time (s)
Sequential      13.6988               13.5184                    0.5960
GH              13.8284               13.5794                    0.2898
Concave         13.8311               13.5722                    0.7190
Exponential     13.8477               13.5813                    0.4740
Hyperbolic      13.9967               13.7746                    0.1767
Two-opt         13.9999               13.7746                    0.0252
Sigmoidal       14.0118               13.7254                    0.5641
Convex          14.0148               13.7162                    0.2326
Logarithmic     14.0361               13.7596                    0.4028
Average preprocessing time: 0.3680 s

1000-city problems
Heuristic       Average tour length   Average min. tour length   Average time (s)
Sequential      23.9903               23.8084                    1.9294
Concave         24.1601               23.9034                    2.3138
GH              24.2009               23.9403                    0.9387
Exponential     24.2133               23.9703                    1.5015
Hyperbolic      24.4410               24.1779                    0.5781
Two-opt         24.4456               24.1779                    0.0867
Sigmoidal       24.4572               24.2362                    1.8092
Logarithmic     24.4858               24.1946                    1.3073
Convex          24.4946               24.2843                    0.7815
Average preprocessing time: 3.6700 s
tour, we store the processing time (CPU time) and then, in each row, average over all tours to produce the average time (in seconds). The average total processing time can be computed by taking the sum of the average time and the average preprocessing time.

Using the results displayed in Table 6, we make the following observations:
· On all three problem sizes, sequential smoothing produces the smallest average tour length and the smallest average minimum tour length.
· Concave smoothing is the second-best heuristic based on average tour length for the 100-city and 1000-city problems. It is the second-best heuristic for all three problem sizes based on average minimum tour length.
· Smoothing heuristics that transform the smaller distances with a concave transformation (GH, concave, sequential, and exponential) perform substantially better than two-opt when measured by average tour length and average minimum tour length.
· Based on average tour length, logarithmic smoothing and sigmoidal smoothing are worse than two-opt in all three experiments.
· Hyperbolic smoothing has the shortest processing time of the smoothing heuristics, while two-opt is much faster than all eight smoothing heuristics.

3.4.2. Results on the TSPLIB problems

We summarize the results of the TSPLIB experiments in Tables 7 and 8. In Table 7, we give the average percent above the optimal solution for each heuristic on the 30 problems taken from TSPLIB. To find the percent above the optimal solution, we calculate 100 × (heuristic solution − optimal solution)/optimal solution. To find the average percent above the optimal solution for the 30 problems, we take the sum of the percent above the optimal solution for each problem and divide by 30. In Table 8, we present the total processing time for each heuristic in seconds.

Using the results displayed in Tables 7 and 8, we make the following observations:
· Smoothing heuristics that transform the smaller distances with a concave function (GH, concave, sequential, and exponential) generate shorter tours, on average, than the standard local search heuristics (two-opt and three-opt).
· Convex smoothing, hyperbolic smoothing, sigmoidal smoothing, and logarithmic smoothing generate tours that are roughly the same quality as the tours produced by two-opt and three-opt.
· In each experiment, sequential smoothing has the shortest average tour length.
· In the experiment denoted by 2Opt–3Opt (10), every heuristic (except sequential smoothing) produces shorter tours, on average, than the tours produced in the experiment denoted by 3Opt, while requiring from 5 to 9 times less processing time to solve the 30 problems.
· Hyperbolic smoothing has the shortest processing time and sigmoidal smoothing has the longest processing time of the eight smoothing heuristics. Two-opt is much faster than all eight smoothing heuristics.

In Table 9, we present the results of computational experiments on the same 30 TSPLIB problems reported by Jünger et al. (1995). We show the average percent above the optimal tour and the total processing time for three Lin–Kernighan heuristics and the sequential smoothing heuristic (2Opt–3Opt (10) version). The iterated Lin–Kernighan heuristic (denoted by ILK) produces the shortest tours, but it has the longest processing time. On average, sequential smoothing produces tours that are shorter than those of LK1 and almost as good as the tours produced by LK2. Sequential smoothing is slower than LK1 and LK2. The processing time comparisons are somewhat rough because the computing platforms are different.

4. Insight into the behavior of the smoothing heuristics

In this section, we conduct a computational experiment designed to provide insight into why some smoothing functions work well and why others do not work well.
Table 7
Summary of the performance of each heuristic in four experiments on 30 problems from TSPLIB. Results are the average percent above the optimal tour

Experiment       Two-opt   Three-opt   GH      Concave   Convex
2Opt             5.98      na          5.01    4.72      6.01
2Opt–3Opt (1)    3.25      na          2.83    2.82      3.29
2Opt–3Opt (10)   2.20      na          1.93    1.85      2.21
3Opt             na        3.12        2.46    2.28      3.17

Experiment       Sequential   Exponential   Hyperbolic   Sigmoidal   Logarithmic
2Opt             3.44         4.60          5.99         6.07        5.93
2Opt–3Opt (1)    2.42         2.72          3.26         3.03        3.49
2Opt–3Opt (10)   1.68         1.92          2.20         2.17        2.15
3Opt             1.62         2.38          3.02         3.14        3.10

Bold indicates the best result for each experiment (here, sequential smoothing in all four). na: not applicable.

Table 8
Total processing time (in seconds) for each heuristic in four experiments on 30 problems from TSPLIB. Total preprocessing time for each experiment is between 72 and 76 seconds

Experiment       Two-opt   Three-opt   GH      Concave   Convex
2Opt             3         na          42      95        36
2Opt–3Opt (1)    60        na          94      148       88
2Opt–3Opt (10)   550       na          910     1446      853
3Opt             na        402         5441    12981     5467

Experiment       Sequential   Exponential   Hyperbolic   Sigmoidal   Logarithmic
2Opt             83           82            23           102         62
2Opt–3Opt (1)    133          133           74           152         113
2Opt–3Opt (10)   1300         1309          721          1493        1102
3Opt             11277        10980         3345         12993       8579

na: not applicable.

Table 9
Results reported by Jünger et al. (1995) on 30 TSPLIB problems

                                   LK1    LK2    ILK     Sequential
Percent above optimal tour         1.93   1.50   0.60    1.68
Total processing time (seconds)    275    723    48874   1372

LK1 and LK2 are two different implementations of the Lin–Kernighan heuristic and ILK is iterated Lin–Kernighan. The total times for LK1, LK2, and ILK are given in seconds on a Sun SPARC 10/20 workstation. The total time for sequential smoothing is given in seconds on a 200 MHz Pentium PC.
In a recent paper (see Coy et al., 1998), we found that transforming the smaller distances with a concave function worked very well, while a convex function did not work very well. Our computational experiment is designed to provide insight into the behavior of these two functions.

In our experiment, we solve a problem taken from TSPLIB (gil262) with two-opt and the eight smoothing heuristics. We perform 10 trials started from 10 randomized greedy tours and use two-opt as the local search procedure in the eight smoothing heuristics. We count the number of moves made, the number of uphill moves made, and the number of downhill moves that the heuristic avoids on each iteration (the number of avoided downhill moves is somewhat inflated since a move can be avoided more than once). We present the computational results in Table 10.
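To make the tallies in Table 10 precise: during a smoothed iteration, each candidate two-opt move is accepted or rejected according to the smoothed distances, and we record what it would do to the tour length under the original distances. A minimal Python sketch of this bookkeeping (the function name and tolerance are ours, for illustration only):

```python
def classify_move(delta_smoothed, delta_true, eps=1e-12):
    """Classify one candidate move during a smoothed iteration.

    delta_smoothed -- change in tour length under the smoothed distances
    delta_true     -- change in tour length under the original distances
    The local search accepts a move iff it shortens the smoothed tour.
    """
    if delta_smoothed < -eps:                       # move is made
        return "uphill" if delta_true > eps else "downhill"
    # move is rejected by the smoothed objective
    return "avoided downhill" if delta_true < -eps else "avoided uphill"
```

An "uphill" move lengthens the true tour even though it shortens the smoothed one; an "avoided downhill" move would have shortened the true tour but is rejected because it lengthens the smoothed one. The U and D columns of Table 10 count these two cases.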
From Table 10, we see that five smoothing heuristics make the most uphill moves and avoid the most downhill moves in the first iteration, when the value of the smoothing parameter α is largest (the exceptions are convex smoothing, sequential smoothing, and hyperbolic smoothing). As the value of α decreases (and the iteration count increases), for the most part, seven smoothing heuristics make fewer uphill moves and avoid fewer downhill moves (the exception is hyperbolic smoothing).

We expected hyperbolic smoothing (a linear transformation) to make downhill moves only on the first iteration, but observed a small number of downhill moves on the sixth iteration. We attribute this behavior to the use of the neighbor list and the don't-look bit, which do not guarantee local optimality. A small number of downhill moves that were "missed" in the first iteration were "found" in later iterations.

Recall that GH, concave, and exponential smoothing transform the smaller distances with a concave function and that sequential smoothing uses concave smoothing in each iteration. Our results indicate that these four heuristics avoid far more downhill moves than the two heuristics that transform the smaller distances with a convex function and the larger distances with a concave function (that is, logarithmic smoothing and sigmoidal smoothing). On gil262, GH,
Table 10
Average number of moves on problem gil262

                   Iteration
Heuristic            1      2      3      4      5      6      Total
Two-opt       T     37.6   0.0    0.0    0.0    0.0    0.0    37.6
              U     0.0    0.0    0.0    0.0    0.0    0.0    0.0
              D     0.0    0.0    0.0    0.0    0.0    0.0    0.0
GH            T     15.2   7.4    1.5    0.3    7.1    7.7    39.2
              U     1.1    0.7    0.0    0.0    0.7    0.0    2.5
              D     80.9   59.9   42.3   35.3   25.0   0.0    243.4
Concave       T     22.7   0.4    0.0    0.6    0.2    10.4   34.3
              U     0.3    0.0    0.0    0.0    0.0    0.0    0.3
              D     34.8   16.6   14.2   12.4   11.2   0.0    89.2
Convex        T     50.6   29.3   32.6   12.4   8.6    11.5   145.0
              U     20.8   11.2   8.7    1.1    0.0    0.0    41.8
              D     35.6   29.4   37.7   25.8   14.6   0.0    143.1
Sequential    T     77.2   77.4   66.2   5.0    --     --     225.8
              U     23.6   31.6   23.1   0.0    --     --     78.3
              D     68.0   49.6   39.8   0.0    --     --     157.4
Exponential   T     15.2   7.9    2.2    4.1    1.5    4.8    35.7
              U     1.1    0.0    0.7    0.0    0.0    0.0    1.8
              D     80.9   46.8   45.5   25.6   15.7   0.0    214.5
Hyperbolic    T     32.4   0.0    0.0    0.0    0.0    3.9    36.3
              U     0.0    0.0    0.0    0.0    0.0    0.0    0.0
              D     0.0    0.0    0.0    0.0    0.0    0.0    0.0
Sigmoidal     T     80.5   15.8   2.9    2.4    2.3    4.3    108.2
              U     24.8   0.4    1.0    0.0    0.0    0.0    26.2
              D     27.0   8.4    5.6    3.9    0.0    0.0    44.9
Logarithmic   T     65.0   0.2    0.0    0.0    0.0    3.5    68.7
              U     7.5    0.0    0.0    0.0    0.0    0.0    7.5
              D     6.2    0.0    0.0    0.0    0.0    0.0    6.2

Each entry is the average of 10 trials. Each trial is started from a randomized greedy tour and two-opt is the local search procedure. T = number of moves; U = number of uphill moves; D = number of avoided downhill moves. Sequential smoothing uses a four-value schedule (Table 4), so its last two iteration columns are empty.
concave, sequential, and exponential smoothing avoid 89.2 to 243.4 downhill moves, on average, while logarithmic and sigmoidal smoothing avoid 6.2 and 44.9 downhill moves, on average.
Concave smoothing makes very few uphill moves and avoids many downhill moves. Convex smoothing makes many uphill moves and avoids more downhill moves than concave smoothing.
Fig. 2. Tour-length trajectories for TSPLIB problem gil262. The arrows identify large downhill moves (see text for details).
Sequential smoothing, which combines these two heuristics, makes more uphill moves than any other smoothing heuristic and avoids more downhill moves than convex smoothing.

Recall that the plots of the GH and exponential smoothing functions have very similar shapes. The results on the 40 Euclidean problems and 30 TSPLIB problems indicate that GH and exponential smoothing produce roughly the same length tours on average (see Tables 6 and 7). When we examine the move counts in Table 10, we see that both heuristics make roughly the same number of total moves, uphill moves, and downhill moves.

To gain additional insight, we plot the tour-length trajectories produced by the eight smoothing heuristics for problem gil262. The trajectories are shown in Fig. 2. In Fig. 2, we see that GH smoothing and exponential smoothing have similar trajectories and that five heuristics (GH smoothing, concave smoothing, sequential smoothing, exponential smoothing, and sigmoidal smoothing) make very large downhill moves (identified by arrows in Fig. 2). These moves may be the beginning of a descent into a valley in the search space. After the large downhill move, sequential smoothing makes a series of uphill moves followed by another steep descent. Sequential smoothing is the only one of the five heuristics that appears to make enough uphill moves to leave the valley. This may be why sequential smoothing is the only heuristic to produce a final tour that is less than 3% above the optimal solution (the tour produced by sequential smoothing is 2.82% above the optimal solution and the second shortest tour, produced by concave smoothing, is 3.49% above the optimal solution).

The sequential smoothing trajectories alternate between periods when the tour length increases and periods when the tour length decreases. To gain additional insight into how sequential smoothing works, we enlarge its trajectory for problem gil262 and show it in Fig. 3. Convex smoothing occurs on the odd-numbered iterations 1, 3, and 5 (with α = 7, 5, and 3) and concave smoothing occurs on the even-numbered iterations 2, 4, and 6 (with α = 7, 5, and 3). On iteration 7, two-opt is used with the original distances. In Fig. 3, we see that sequential smoothing makes most of its uphill moves during the odd-numbered iterations (convex smoothing) and most of its downhill moves during the even-numbered iterations (concave smoothing). Sequential smoothing avoids getting trapped in poor local minima by alternating between convex smoothing, when it expands its search, and concave smoothing, when it concentrates its search. (This is somewhat similar to diversification and intensification from tabu search.)
Fig. 3. Sequential smoothing trajectory on problem gil262. The beginning of each iteration is indicated by a vertical line. Iterations 1, 3, and 5 are convex smoothing. Iterations 2, 4, and 6 are concave smoothing. Iteration 7 is two-opt.
To illustrate using Fig. 3: in iteration 1, sequential smoothing expands its search by entering and leaving several valleys in the search space. In iteration 2, it concentrates its search by making several large downhill moves. In iteration 3, sequential smoothing climbs out of the valley in the search space and then, in iteration 4, it descends towards another local minimum.

5. Conclusions

In this paper, we conducted an extensive computational study of eight smoothing heuristics for the traveling salesman problem. We applied the eight heuristics to 40 Euclidean problems with 100, 316, and 1000 cities and 30 problems from the TSPLIB library with 105 to 2392 cities.

Overall, we found that the sequential smoothing heuristic generated the least-cost tours. On all three sizes of Euclidean problems, sequential smoothing produced the smallest average tour length, clearly outperforming the standard version of two-opt. On the TSPLIB problems, the tours generated by sequential smoothing were competitive with the tours generated by two versions of the well-known Lin–Kernighan heuristic and were better than the tours generated by standard versions of the two-opt and three-opt heuristics.

Based on our analysis of the types of moves made by the smoothing heuristics, we observed that heuristics that transformed the smaller distances with a concave function worked very well in practice. Furthermore, we observed that sequential smoothing avoided getting trapped in poor local minima by alternating between convex smoothing (it made many uphill moves) and concave smoothing (it made many downhill moves).

References

Bentley, J., Johnson, D., McGeoch, L., Rothberg, E., 1994. Near optimal solutions to very large traveling salesman problems (unpublished manuscript).

Coy, S., 1998. Fine-tuned learning: A new approach to improving the performance of local search heuristics. Ph.D. dissertation, Robert H. Smith School of Business, University of Maryland, College Park, MD.

Coy, S., Golden, B., Runger, G., Wasil, E., 1998. See the forest before the trees: Fine-tuned learning and its application to the traveling salesman problem. IEEE Transactions on Systems, Man, and Cybernetics 28 (4), 454–464.

Dueck, G., 1993. New optimization heuristics: The great deluge algorithm and the record-to-record travel. Journal of Computational Physics 104, 86–92.

Gu, J., Huang, X., 1994. Efficient local search with search space smoothing: A case study of the traveling salesman problem. IEEE Transactions on Systems, Man, and Cybernetics 24 (5), 728–735.

Johnson, D., McGeoch, L., 1997. The traveling salesman problem: A case study in optimization. In: Aarts, E., Lenstra, J.K. (Eds.), Local Search in Combinatorial Optimization. Wiley, New York, pp. 215–310.

Jünger, M., Reinelt, G., Rinaldi, G., 1995. The traveling salesman problem. In: Ball, M., Magnanti, T., Monma, C., Nemhauser, G. (Eds.), Network Models. North-Holland, Amsterdam, pp. 225–330.

Schneider, J., Dankesreiter, M., Fettes, W., Morgenstern, I., Schmid, M., Singer, J., 1997. Search-space smoothing for combinatorial optimization problems. Physica A 243 (1/2), 77–112.