Optimization of the measuring path on a coordinate measuring machine using genetic algorithms


Measurement 23 (1998) 159–170

Liangsheng Qu *, Guanhua Xu, Guohua Wang
Research Institute of Diagnostics and Cybernetics, Xian Jiaotong University, Xian 710049, People's Republic of China

Abstract

Stamping dies, car bodies, turbine blades and other machine parts with curved surfaces are usually verified on coordinate measuring machines (CMMs). This is a time-consuming process with low productivity. This paper suggests using genetic algorithms to optimize the measuring path on CMMs. The measuring head of a CMM can be considered as a traveling salesperson, so finding the shortest measuring path is equivalent to solving a traveling salesman problem by means of genetic algorithms. This paper discusses parameter selection, the operating behavior of the algorithm and the premature convergence problem. In comparison with the original measuring operations on a CMM, an example shows that the optimized operation can save nearly one-third of the total measuring time. © 1998 Elsevier Science Ltd. All rights reserved.

Keywords: Genetic algorithms; Coordinate measuring machine; Optimization; Traveling salesman problem; Manufacturing engineering

1. Introduction

Nowadays, genetic algorithms have become a widespread technique for solving different engineering optimization problems. Since their inception, pioneered by Holland [1], extensive research activities concerning the mechanism and applications of this technique have been reported worldwide [2–4]. There is also a treatise [5] that systematically describes their basic principles, methodology and fields of possible application in detail. It is no exaggeration to say that genetic algorithms have already become a mature technique in engineering applications of artificial intelligence. As is well known, the basic notion of a genetic algorithm is to solve optimization problems based on a Darwinian ‘‘survival of the fittest’’

* Corresponding author.

approach. It is a probabilistic search algorithm with an iterative procedure that simulates the evolution of a population of structures. The algorithm usually first encodes a fixed number of randomly produced solutions to the assigned problem as strings called ‘‘chromosomes’’. In each generation these chromosomes compete with each other to reproduce the next generation under the ‘‘struggle for existence’’, or natural selection. The latter is judged in genetic algorithms by means of a so-called fitness criterion, which evaluates the degree of fitness of each encoded chromosome. The chromosomes with a higher degree of fitness possess a higher probability of reproduction into the next generation. In order to increase the average quality of the population from generation to generation, genetic operators, such as selection, mutation, crossover, inversion, etc., are applied to certain percentages of the parents in expectation of their

0263-2241/98/$ – see front matter © 1998 Elsevier Science Ltd. All rights reserved.
PII: S0263-2241(98)00023-2


offspring with higher degrees of fitness. Since the total number of chromosomes in every generation is fixed in advance, those with a lesser degree of fitness will be eliminated through competition. Fig. 1 shows the whole optimization process. From Fig. 1, the basic parameters in a genetic algorithm are as follows: the population size, or number of chromosomes S in each generation; the percentage of chromosomes p_i subjected to a certain genetic operator i; the assigned maximum number of generations N in the algorithm; and the fitness criterion F. Engineering areas as diverse as function optimization [6], the traveling salesman problem [7], scheduling [8], artificial neural network design [9,10] and machine learning [11] can all apply genetic algorithms. Goldberg [5] systematically reviewed these applications in detail. In manufacturing engineering, stamping dies, car bodies, turbine blades and other machine parts

Fig. 1. The basic principle of a genetic algorithm.
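The loop of Fig. 1 can be sketched in Python. This is only an illustrative skeleton, not the authors' implementation: the chromosome encoding, the fitness criterion F and the genetic operators are supplied by the caller, and fitness values are assumed to be positive.

```python
import random

def genetic_algorithm(random_chromosome, fitness, operators, S=50, N=500):
    """Skeleton of the loop in Fig. 1.

    random_chromosome: () -> a randomly encoded solution ("chromosome")
    fitness:           chromosome -> positive float; higher means fitter (criterion F)
    operators:         list of (p_i, op) pairs; op maps a list of two
                       parents to one offspring chromosome
    S: population size; N: assigned maximum number of generations
    """
    population = [random_chromosome() for _ in range(S)]
    for _ in range(N):
        # "Natural selection": fitter chromosomes reproduce with
        # higher probability (fitness-proportional selection).
        weights = [fitness(c) for c in population]
        parents = random.choices(population, weights=weights, k=S)
        # Apply each genetic operator to its assigned share p_i of the parents.
        offspring = []
        for p_i, op in operators:
            for _ in range(int(p_i * S)):
                offspring.append(op(random.sample(parents, 2)))
        # Fixed population size: the least fit are eliminated through competition.
        population = sorted(population + offspring, key=fitness, reverse=True)[:S]
    return max(population, key=fitness)
```

In a real run the termination test of Fig. 1 would also stop the loop early once the fitness criterion is satisfied, rather than always iterating N times.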

with curved surfaces are usually verified on coordinate measuring machines (CMMs). This is a time-consuming process with low productivity. This paper suggests using genetic algorithms to optimize the measuring path on CMMs. Basically, the measuring head of a CMM can be regarded as a traveling salesperson, and finding the shortest measuring path is equivalent to solving a traveling salesman problem (TSP). This paper discusses how to select the parameters of the genetic operators, their operating behavior and the premature convergence problem. Practice showed satisfactory optimization results: in comparison with the original measuring operations on CMMs, the optimized operation can save nearly one-third of the total measuring time.

2. The TSP on CMMs

The concept of the TSP is as follows. There is a given number of cities with known distances between each other. A traveling salesperson has to visit each city once and only once. The objective is to obtain the optimal sequence of cities under the prerequisite that the total distance of the tour is the shortest. Let us consider a simple example. There are 36 cities distributed on a rectangular plan as shown in Fig. 2. The total traveling distance is 6a+b for the route shown in Fig. 2a, and 6b+a for the route shown in Fig. 2b. If the parameters a and b are fixed in advance, there must exist a shortest path with minimum traveling distance between 6a+b and 6b+a. The objective of a TSP is to find the shortest tour route of the traveling salesperson.

Let us first analyze the measuring process on a CMM. The measuring head of a CMM measures each sampling point distributed on the curved surface of a workpiece sequentially, and evaluates whether these points lie inside the predetermined tolerance limit. Fig. 3 shows the path described by the measuring head between two adjacent sampling points A and B. As shown in Fig. 3, through points A and B we draw two outward normals N_a and N_b. Let A_1 and B_1 be the points located on these normals at a distance D_1 from A and B. This distance D_1 is called the measuring distance. Let


Fig. 2. A typical TSP.

A_2 and B_2 be another two points located on these normals at a distance D_2 from points A and B. This distance D_2 is called the return distance. The elementary sequence of movement of the measuring head is A_2–A–A_1–B_2–B–B_1–…

Fig. 3. Path between two measuring points.

The coordinates of each measuring point on the curved surface are all known parameters before measurement on a CMM. We can determine the coordinates of the points A_1 and B_2 easily, i.e.

x_A1 = x_A + l_A d_1;  x_B2 = x_B + l_B d_2
y_A1 = y_A + h_A d_1;  y_B2 = y_B + h_B d_2
z_A1 = z_A + k_A d_1;  z_B2 = z_B + k_B d_2

where (x_A, y_A, z_A), (x_B, y_B, z_B), (x_A1, y_A1, z_A1) and (x_B2, y_B2, z_B2) are the coordinates of points A, B, A_1 and B_2, and (l_A, h_A, k_A) and (l_B, h_B, k_B) are the vectors of the direction normals at points A and B, i.e. N_a and N_b. The total moving distance of the measuring head from point A to B is equal to

D_A2B2 = d_1 + d_2 + d_3

where

d_3 = sqrt[(x_A1 − x_B2)² + (y_A1 − y_B2)² + (z_A1 − z_B2)²]

According to the above formula we can easily calculate the total moving distance of the measuring head:

D_total = Σ_{i=1}^{n−1} D_{A_i A_{i+1}}

where D_{A_i A_{i+1}} is the moving distance of the measuring head from A_i to A_{i+1}.

Since the coordinates of the measuring points A, B, … and the corresponding normals at these points are all known parameters, we can calculate the traveling distance from A_2 to B_2. This distance is equal to d_1 + d_2 + d_3. The distances d_1 and d_2 are constant for every measuring point. According to the statistics, the movement of the measuring head along d_3 between two successive measuring points occupies the majority of the operating time. By contrast, the contacts of the measured surface with the measuring head, as well as the data sampling process, are nearly instantaneous. Therefore, the optimization of the measuring process on a CMM is approximately equivalent to finding the shortest sum of the d_3 distances and can be regarded as a typical TSP. Here we neglect the time spent on rotation of the measuring head during the measuring process; this motion is aimed at preventing contact with the measured workpiece when the head travels relative to the workpiece in the x or y directions, as shown in Fig. 4.

As an example, Fig. 5 shows schematically a dam-type measured surface. On the curved surface, 16 measuring points are distributed uniformly in each of five cross-sections. Table 1 lists their coordinates and the corresponding direction normals. The distances between succeeding sections along the y axis remain constant and equal to 50 mm. The total number of measuring points is 16×5=80. Fig. 6a shows the conventional measuring path, which is not optimal. Fig. 6b shows the path optimized according to the results from a genetic algorithm. When D_1 = 5 mm and D_2 = 10 mm, the total traveling paths of the measuring head before and after optimization are equal to 7996 mm and 5707 mm respectively. Nearly 30% of the measuring time can be saved.
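For a given visiting order, the total path length defined above can be computed directly from the point coordinates and direction normals. The sketch below is illustrative, not the authors' code; points are assumed to be given as (x, y, z, l, h, k) tuples with unit normals, and the defaults D_1 = 5 mm and D_2 = 10 mm match the example.

```python
from math import sqrt

def path_length(points, order, d1=5.0, d2=10.0):
    """Total moving distance of the measuring head for a visiting order.

    points: list of (x, y, z, l, h, k) tuples, where (l, h, k) is the
            unit direction normal at the measuring point.
    order:  permutation of point indices (the 'tour' of the TSP).
    For each pair of successive points A, B the head moves
    d1 (A to A1) + d3 (A1 to B2) + d2 (B2 to B), where A1 and B2 lie
    on the outward normals at distances d1 and d2.
    """
    def offset(p, d):
        x, y, z, l, h, k = p
        return (x + l * d, y + h * d, z + k * d)

    total = 0.0
    for a, b in zip(order, order[1:]):
        a1 = offset(points[a], d1)          # retract point A1
        b2 = offset(points[b], d2)          # return point B2
        d3 = sqrt(sum((u - v) ** 2 for u, v in zip(a1, b2)))
        total += d1 + d3 + d2               # D = d1 + d2 + d3
    return total
```

A genetic algorithm then searches over `order` to minimize this quantity; since d_1 and d_2 are the same for every point, only the d_3 terms actually vary with the order.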

Fig. 4. Possible motions of the measuring head.

3. The statistical properties of the optimization process on a CMM

In order to find the statistical properties of the genetic algorithms in measuring-path optimization on a CMM, we performed a series of computer experiments. Each experiment consisted of 500 trials. The measured object is assumed to be a cylindrical surface with 40 measuring points uniformly distributed around a circle. Therefore, the global optimum measuring path is the total side length of the inscribed 40-sided polygon of this circle. Fig. 7 shows the evolution of the path from generation to generation, finally reaching the global optimum path, i.e. the inscribed 40-sided polygon (not shown in the figure). The reason for selecting such a measuring path is that the global optimum in this problem is known in advance. Let the traveling distance between two successive measuring points be equal to 1 unit; then the global optimum path length should equal 45 units.¹ Thus we can easily check whether the algorithm has found the global optimum or whether the optimization process stays in a state of premature convergence, i.e. at a local optimum. The latter is discussed in detail in Section 5.

Fig. 8 shows some typical distributions of the number of generations necessary to find the global optimum. As stated above, we performed a total of 500 trials in each experiment. The rate of crossover, the rate of mutation and the number of chromosomes in each generation were varied from experiment to experiment. In these empirical distributions we neglected those trials in which the number of iterations reached the predetermined number N (Fig. 1) but the algorithm still could not find the global optimum. All the empirical distributions shown in Fig. 8 coincide very well with an exponential distribution:

f(x) = (1/m) exp(−x/m)

where x is the normalized number of generations necessary to find the global optimum and m is the expectation of the exponential distribution. According to Shannon (see Ref. [12]), the entropy H of a probability density function f(x) is

H = −∫ f(x) log(f(x)) dx
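The entropy of the fitted exponential density can be verified numerically. The sketch below (illustrative only; natural logarithm assumed) integrates −f ln f by the midpoint rule and reproduces the closed form H = 1 + ln(m) for the exponential distribution.

```python
from math import exp, log

def exponential_entropy(m, upper=200.0, n=200_000):
    """Numerically integrate H = -integral of f(x) ln f(x) dx
    for f(x) = (1/m) exp(-x/m), truncating the tail at `upper`."""
    dx = upper / n
    h = 0.0
    for i in range(n):
        x = (i + 0.5) * dx               # midpoint rule
        f = exp(-x / m) / m
        h -= f * log(f) * dx
    return h
```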

Fig. 5. A dam-shaped surface measured on a CMM.

¹ We take this side length as the unit length of the traveling distance in the following figures.

Table 1
The coordinates of the measured points and the direction normals in one of five sections of the measured surface

No. i   x_i        y_i      z_i        l_i      h_i    k_i
1       40.994     50.000   −241.780   −0.428   0.000  0.904
2       122.983    50.000   241.780    −0.428   0.000  0.904
3       241.724    50.000   −177.552   −0.835   0.000  0.551
4       261.650    50.000   −92.041    −0.922   0.000  0.386
5       286.406    50.000   −52.039    −0.757   0.000  0.653
6       322.806    50.000   −21.388    −0.515   0.000  0.857
7       365.233    50.000   −4.085     −0.232   0.000  0.973
8       544.198    50.000   0.000      0.000    0.000  1.000
9       632.789    50.000   −5.529     0.328    0.000  0.945
10      672.154    50.000   −30.763    0.722    0.000  0.692
11      695.679    50.000   −70.921    0.957    0.000  0.291
12      725.873    50.000   −198.385   0.871    0.000  0.492
13      790.542    50.000   −267.389   0.547    0.000  0.837
14      875.295    50.000   −298.468   0.124    0.000  0.992
15      964.932    50.000   −300.000   0.000    0.000  1.000
16      1054.644   50.000   −300.000   0.000    0.000  1.000

Fig. 6. The curved surface of a workpiece with 64 measuring points: (a) ordinary measuring path; (b) optimized measuring path via a genetic algorithm.

In the case of an exponential distribution,

H = 1 + ln(m)

Therefore, the population mean m is the sole parameter that determines the properties of the distribution. This parameter also characterizes the degree of uncertainty, or entropy, in the optimization process using genetic algorithms. The population mean changes with the rate of crossover, the population size and the number of measuring points to be optimized. The smaller the population mean, the faster the optimization process reaches the global optimum on average. At the same time, no matter how one selects the above parameters, in each experiment there still remains a very small percentage of trials in which the algorithm cannot find the global optimum. The evolution process no longer continues owing to the high homogeneity among the chromosomes in each generation. In other words, the chromosomes in each generation become highly ordered at the end of the optimization process. Therefore, we can explain the optimization process using genetic algorithms from the viewpoint of system entropy. In the initial stages, the population in each generation has a higher disorder and the optimization process is relatively vigorous. After that, the chromosomes in the population are gradually assimilated from generation to generation, and the system entropy rapidly decreases. In this period, the optimization process also slows down, until the system fully loses its optimization ability. For a dissipative closed system, this process can be described as [13]

L_{k+1} = r L_k + e_k
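This decay model, with L_k the mean total path length and e_k Gaussian noise, can be simulated directly. The parameter values below (r = 0.95, sigma = 10, L0 = 8000) are purely illustrative, not fitted to the experiments.

```python
import random

def simulate_decay(L0=8000.0, r=0.95, sigma=10.0, steps=120, seed=0):
    """Iterate L_{k+1} = r * L_k + e_k with e_k ~ N(0, sigma^2),
    giving a geometric decay of the mean path length toward a noise floor."""
    rng = random.Random(seed)
    L, series = L0, [L0]
    for _ in range(steps):
        L = r * L + rng.gauss(0.0, sigma)
        series.append(L)
    return series
```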


where L_k is the mean total path length, k is the number of iterations in an optimization process and e_k is a random component, e_k = NID(0, σ_k²).

Fig. 9a and b show the variations of the maximum, minimum and mean lengths of the total measuring path from iteration to iteration. In the initial stages these three curves decrease very rapidly. Then they slow down; finally, they either stop when the process finds the global minimum (Fig. 9a), or level off and converge toward each other, with the process stopping only when the number of iterations reaches the predetermined value (Fig. 9b). In the latter case, the process is halted at a local minimum without any further improvement.

Fig. 10a shows the linear relationship between successive minimum traveling distances, which verifies the above-stated formula very well. Fig. 10b shows the linear relationship between the population mean value and the standard deviation in each generation. This result is contrary to the past proposition that genetic algorithms obey a random-walk model with constant standard deviation in successive generations. From Fig. 10a and b we can also directly see that the range between the maximum and minimum path lengths decreases continually from generation to generation. This phenomenon also verifies the above conclusion.

Fig. 7. The process of optimization of the measuring path on a CMM using a genetic algorithm. Total measuring points: 40; rate of crossover: 60%; rate of mutation: 10%; population size: 40 chromosomes.

4. The genetic operator

Selecting the proper genetic operator is the key problem in genetic algorithms. It seriously influences the efficiency with which a genetic algorithm reaches the global optimum. For a large number of unevenly distributed measuring points this optimization problem becomes difficult using ordinary genetic operators. The TSP is a famous combinatorial optimization problem and has been proved to be NP-hard (i.e. hard to solve with an effective polynomial algorithm). Some impressive heuristic and exact solution procedures have been proposed to solve this problem [14], but the performance of these algorithms for large-scale TSPs is not satisfactory. We use the greedy selection crossover operator, or GSX [15], with some modification and simplification to solve this problem. The GSX operator fully utilizes the prior information about the distances between the measuring points, and possesses a strong ability to optimize large-scale TSPs. The operating process is as follows. Let P1 and P2 be two randomly selected chromosomes from the parent generation. Assume they are composed of nine randomly arranged measuring points:

P1 = 1 2 3 4 5 6 7 8 9
P2 = 4 1 2 8 7 6 9 3 5

where the numbers 1 to 9 represent the order of measuring points on the detected surface. We first arbitrarily select a point, say no. 5, as the starting point of their son-chromosome, then displace all the measuring points in the parents as shown in Fig. 11. The next point in the son-chromosome is determined by comparison of the distances from


Fig. 8. The empirical and theoretical distributions of the necessary number of generations to find the global optimum: curves from upper-left to lower-right correspond to rates of crossover from 0.4 to 0.9. Rate of mutation: 0.1; population size: 50 chromosomes.

point 5 to 6, i.e. D_56, with the distance from 5 to 4, i.e. D_54. If the distance D_54 is shorter, we select point 4 as the second point in the son-chromosome. The process continues until a new son-chromosome is formed, as shown in Fig. 11. According to our experience, the GSX operator is the most effective one, with the highest optimizing ability, in comparison with the partially mapped crossover operator, the ordered crossover operator, the cycle crossover operator, the inversion operator, etc. [16]. In addition to the greedy selection operator, we also employ the ordinary mutation operator; however, its effectiveness is far behind that of the GSX. Another question is how to determine the proper percentage of chromosomes subjected to crossover or mutation in each iteration or generation. Strictly speaking, the larger the percentage of chromosomes subjected to genetic operation, the smaller the number of generations needed to approach the global optimum. In compensation, each generation takes more time. Therefore, there exists an optimum percentage value that achieves the maximum efficiency of a genetic algorithm.
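The modified GSX step described above can be sketched as follows. This is a simplified reconstruction, not the authors' exact operator: from the current city, the nearer of the two parents' successor cities is chosen, falling back to the nearest unvisited city when both successors have already been used.

```python
from math import dist  # math.dist requires Python 3.8+

def gsx(p1, p2, coords, start):
    """Greedy selection crossover (simplified sketch).

    p1, p2: parent tours (lists of city indices); coords: city -> (x, y).
    Builds one son-chromosome starting from `start`.
    """
    # Successor of each city in each parent tour (tours treated as cyclic).
    succ1 = {c: p1[(i + 1) % len(p1)] for i, c in enumerate(p1)}
    succ2 = {c: p2[(i + 1) % len(p2)] for i, c in enumerate(p2)}
    son, used = [start], {start}
    current = start
    while len(son) < len(p1):
        # Candidate next cities: the current city's successor in each parent.
        candidates = [s for s in (succ1[current], succ2[current]) if s not in used]
        if not candidates:  # both successors used: take nearest unvisited city
            candidates = [c for c in p1 if c not in used]
        current = min(candidates, key=lambda c: dist(coords[current], coords[c]))
        son.append(current)
        used.add(current)
    return son
```

Applied repeatedly to random parent pairs, this produces the son-chromosomes of the next generation.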

The diversity of the total number of generations needed in each trial is shown in Tables 2–4. From these tables it is obvious that the minimum numbers of generations needed to achieve the global optima in the experiments are all less than 20, but the maxima are more than 900 generations. They depend solely on the type of genetic operator and the numerical values selected. The population size in each generation also seriously influences the optimization efficiency (Table 4). Fig. 12 shows the relationship between the average number of generations necessary to find the optimum in 900 trials and the rate of chromosomes subjected to crossover in each generation. The rate of mutation and the population size in each generation were fixed at 10% and 50 chromosomes respectively. Fig. 13 shows the influence of mutation on the average number of generations needed to find the global optimum in 900 trials. Here, the rate of crossover and the population size in each generation were fixed at 60% and 50 chromosomes respectively. From Tables 2–4 and Figs. 12 and 13 we can draw the following conclusions.

166

L. Qu et al. / Measurement 23 (1998) 159–170

Fig. 9. The optimization process of the TSP shown in Fig. 6. The curves show the variations of the maximum, minimum and average path lengths accordingly. The global optimum path length is 45 units: (a) the process stops when the global optimum is found; (b) the process stops when the number of iterations reaches the predetermined value N.

(1) The maximum number of generations in each trial decreases monotonically as the rate of crossover in each generation increases.
(2) The average number of generations decreases more slowly once the rate of crossover exceeds about 0.7.
(3) The influence of the mutation operator is much smaller than that of the crossover operator.

Fig. 10. (a) The relationship between successive traveling distances is a straight line, which verifies the stated formula; (b) the relationship between mean traveling distance and the standard deviation in each generation.

5. The premature convergence problem and doping process From the above, there is a large discrepancy between the maximum and minimum numbers of generations in each experiment. This fact suggests we need to find a suitable measure to maintain a


Fig. 11. The principle of a GSX operator with simplification.

Table 2
The influence of the crossover on the number of generations in each trial

Rate of crossover (%)   Max. no. of generations   Min. no. of generations   Average no. of generations
40                      >900                      18                        235.8
50                      717                       17                        135.4
60                      600                       14                        88.0
70                      525                       14                        63.1
80                      387                       12                        42.3
90                      413                       12                        35.8

Rate of mutation: 10%; population size: 50 chromosomes.

relatively consistent optimization process for solving the measuring-path optimization problem on a CMM. This is absolutely necessary in a production environment, even though genetic algorithms are probabilistic search methods. Secondly, we have to solve the problem of the algorithm stopping at a local optimum, i.e. the instability of the optimization quality. As stated above, the degree

of vigor of the optimization process depends solely on the degree of disorder, or the entropy, possessed by the population in each generation. Therefore, we can regulate the entropy content of each generation by adding a certain percentage of newly generated random chromosomes, with the prerequisite that the better chromosomes are kept for the next generation. This measure is


Table 3
The influence of the mutation on the number of generations in each trial

Rate of mutation (%)   Max. no. of generations   Min. no. of generations   Average no. of generations
6                      >900                      14                        115.2
10                     600                       14                        88.0
14                     680                       14                        81.8
18                     584                       15                        80.3
22                     601                       13                        83.0
26                     796                       14                        88.3

Rate of crossover: 60%; population size: 50 chromosomes.

Table 4
The influence of the population size on the number of generations in each trial

Population size   Max. no. of generations   Min. no. of generations   Average no. of generations
40                >900                      15                        177.3
50                600                       14                        88.0
60                534                       13                        47.0
70                367                       13                        30.5
80                131                       13                        20.0
90                105                       12                        17.9

Rate of crossover: 60%; rate of mutation: 10%.

Fig. 12. (a) Average number of generations required using GSX; (b) the standard deviation of the number of generations. Total trial number is 500; rates of crossover are 0.4–0.9; rate of mutation is 0.1; the population size is limited to 50 in each generation.

just like the doping process in the semiconductor industry. Since the newly added chromosomes are highly randomized, the entropy content in the whole population can be raised to some extent and the optimization process can be maintained until the global optimum is found. Fig. 14 shows the doping process we applied in the genetic algorithms. In this figure, the minimum path length in each generation detected at points A and B on the flow chart should be the same. However, the maximum and mean path lengths in

each generation detected at point A are completely different from those detected at point B. The mean length of path detected at point B decreases much more slowly. As stated above, these mean path lengths just reflect the change of the population entropy content in successive generations, as shown in Fig. 15. In this figure, nearly 20% of the chromosomes in the population are substituted by the same amount of newly randomly produced chromosomes in every generation. Experiments have verified the effectiveness of the doping pro-


Fig. 13. (a) Average number of generations required to find the global optimum; (b) the standard deviation of the number of generations. Total trial number is 500; rates of mutation are 0.06–0.26; rate of crossover is 0.6; the population size is limited to 50 in each generation.

Fig. 14. Doping process in genetic algorithms (the remaining part of the flow chart is the same as in Fig. 1).

cess. We can definitely achieve the global optimum in each trial and fully eliminate the phenomenon of premature convergence. Satisfactory results were obtained.
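The doping step of Fig. 14 amounts to replacing a fraction of each generation with fresh random chromosomes while keeping the better ones. A minimal sketch follows; the function and parameter names are our own, with dope_rate = 0.2 mirroring the roughly 20% substitution mentioned above.

```python
def dope(population, fitness, random_chromosome, dope_rate=0.2):
    """Replace the worst `dope_rate` fraction of the population with newly
    generated random chromosomes, keeping the better chromosomes intact.
    Raising the population entropy this way counteracts premature convergence.
    """
    population = sorted(population, key=fitness, reverse=True)
    n_doped = int(dope_rate * len(population))
    if n_doped == 0:
        return population
    kept = population[:-n_doped]
    return kept + [random_chromosome() for _ in range(n_doped)]
```

Called once per generation (between points A and B of the flow chart), this leaves the minimum path length unchanged while raising the maximum and mean, exactly the behavior described above.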

Fig. 15. (a) The maximum, mean and minimum path lengths in each generation detected at point A (Fig. 14); (b) as in (a), but detected at point B. The global optimum is 45 units; rate of crossover is 0.6; rate of mutation is 0.1.

6. Concluding remarks

The optimization of the measuring path on a CMM is quite meaningful for increasing the pro-


ductivity of manufacturing processes. As a typical TSP, it can be solved using different methods: for example, artificial neural networks [17,18], a combinatorial algorithm with an annealing schedule [19], etc. This paper selects genetic algorithms as the optimization tool. Analysis and examples show that this approach is simple and successful. The computerized design of the optimum measuring path using GA can also be realized in conjunction with CAD/CAM of the measured objects. In this paper, the selection of the genetic operators and measures to prevent premature convergence are also discussed. The doping process introduced is effective and worthy of further theoretical research. As an applicable technique in practice, a fully automatic and intelligent toolbox-equipped modern CMM is envisaged.

Acknowledgements

This project is supported by the National Natural Science Foundation of China.

References

[1] J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, 1975.
[2] Proceedings of the 1st Int. Conf. on Genetic Algorithms and Their Applications, Hillsdale, NJ, Lawrence Erlbaum, 1985.
[3] Proceedings of the 2nd Int. Conf. on Genetic Algorithms and Their Applications, Hillsdale, NJ, Lawrence Erlbaum, 1987.
[4] Proceedings of the 3rd Int. Conf. on Genetic Algorithms and Their Applications, Hillsdale, NJ, Lawrence Erlbaum, 1989.

[5] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, 1989.
[6] K.A. DeJong, An analysis of the behavior of a class of genetic adaptive systems, Ph.D. Thesis, University of Michigan, Ann Arbor, MI, 1975.
[7] D.E. Goldberg, R. Lingle Jr., Alleles, loci and the traveling salesman problem, Proc. 1st Int. Conf. on Genetic Algorithms and Their Applications, Hillsdale, NJ, Lawrence Erlbaum, 1985, pp. 154–159.
[8] G.A. Cleveland, S.F. Smith, Using genetic algorithms to schedule flow shop releases, Proc. 3rd Int. Conf. on Genetic Algorithms, San Mateo, CA, Morgan Kaufmann, 1989, pp. 160–169.
[9] V. Maniezzo, Genetic evolution of the topology and weight distribution of neural networks, IEEE Trans. Neural Networks 5 (1) (1994) 39–53.
[10] Z.Y. Han, Application of GA and ANN in the field of condition monitoring of equipment, Doctoral Thesis, Xian Jiaotong University, 1997.
[11] M. Dorigo, U. Schnepf, Genetics-based machine learning and behavior-based robotics: a new synthesis, IEEE Trans. SMC 23 (1) (1993) 141–154.
[12] J.N. Kapur, H.K. Kesavan, Entropy Optimization Principles with Applications, Academic Press, 1992.
[13] R.K. Mishra (Ed.), On Self-Organization: An Interdisciplinary Search for a Unifying Principle, Springer, Berlin, 1994, pp. 158–159.
[14] P.P.C. Yip, Y.-H. Pao, Combinatorial optimization with use of guided evolutionary simulated annealing, IEEE Trans. Neural Networks 6 (2) (1995) 290–295.
[15] R. Cheng, M. Gen, Crossover on intensive search and traveling salesman problem, Comput. Ind. Eng. 27 (1–4) (1994) 485–488.
[16] G.H. Wang, A study of GA with applications in intelligent manufacturing, Master's Thesis, Xian Jiaotong University, 1997.
[17] E. Wacholder, J. Han, R.C. Mann, A neural network algorithm for the multiple traveling salesman problem, J. Biol. Cybernetics 61 (1989) 11–19.
[18] R. Cuykendall, R. Reese, Scaling the neural TSP algorithm, J. Biol. Cybernetics 60 (1989) 365–371.
[19] E. Bonomi, J.L. Lutton, The N-city traveling salesman problem: statistical mechanics and the Metropolis algorithm, SIAM Rev. 26 (1984).