Applied Mathematics and Computation 199 (2008) 590–598 www.elsevier.com/locate/amc
Comparing efficiencies of genetic crossover operators for one machine total weighted tardiness problem

Talip Kellegöz a,*, Bilal Toklu b, John Wilson c

a Kirikkale University, Engineering Faculty, Industrial Engineering Department, Kirikkale, Turkey
b Gazi University, Engineering and Architecture Faculty, Industrial Engineering Department, Ankara, Turkey
c Loughborough University, Business School, United Kingdom
Abstract

In this study, the well-known one machine problem with the performance criterion of minimizing total weighted tardiness is considered. This problem is known to be NP-hard, and consists of one machine and n independent jobs. Each of these jobs has a distinct integer processing time, a distinct integer weighting factor, and a distinct integer due date. The purpose of this problem is to find a sequence of these jobs minimizing the sum of the weighted tardiness. Using benchmarking problems, this study compares performances of eleven genetic crossover operators which have been widely used to solve other types of hard scheduling problems.

© 2007 Elsevier Inc. All rights reserved.

Keywords: Genetic algorithms; One-machine scheduling; Total weighted tardiness; Crossover operators
1. Introduction

The scheduling function in a company frequently uses mathematical techniques or heuristic methods to allocate limited resources to the processing of tasks. A proper allocation of resources enables the company to optimize its objectives and achieve its goals [1]. It is interesting to note that single machine scheduling problems arise in practice more often than one might expect. First, there are obvious ones involving a single machine, e.g. the processing of jobs through a small non-time-sharing computer. Then, there are less obvious ones where a large complex plant acts as if it were one machine, e.g. in paint manufacture the whole plant may have to be devoted to making one colour of paint at a time. Finally, there are job-shops with more than one machine, but in which one machine acts as a 'bottle-neck'. It makes sense to tackle m-machine problems with single bottle-necks as single machine problems, if only to get a first approximation to their solution [2]. One of the well-known single machine problems is the one machine total weighted tardiness problem, classified as $1\|\sum w_iT_i$ [1].
As $1\|\sum w_iT_i$ is strongly NP-hard [3], complete enumeration to get an exact solution has computational requirements that frequently grow as an exponential or high-degree polynomial in the problem size n [4]. Hence, research on this problem has focused on meta-heuristic approaches, such as tabu search, simulated annealing, genetic algorithms and so on.

The problem $1\|\sum w_iT_i$ can be stated as follows [5]: There is a set of n independent jobs ready at time zero to be scheduled on a single machine which is continuously available. Associated with a job i (i = 1, 2, ..., n) there is a processing time ($p_i$), a due date ($d_i$), and a weighting factor ($w_i$). Let $\sigma = ([1], [2], \ldots, [n])$ be a sequence of the jobs, where [i] is the ith job. Given a sequence $\sigma$, let

$$C_{[i]} = \sum_{j=1}^{i} p_{[j]}, \qquad T_{[i]} = \max\{0,\ C_{[i]} - d_{[i]}\}$$

be the completion time and tardiness of the ith job. The objective is to find a sequence $\sigma$ that minimizes

$$Z(\sigma) = \sum_{i=1}^{n} w_{[i]} T_{[i]}.$$
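As a concrete illustration of this objective, the following short routine evaluates $Z(\sigma)$ for a given job sequence. It is only a minimal sketch; the job data shown are hypothetical and are not taken from the benchmark instances used later in the paper.

def total_weighted_tardiness(sequence, p, w, d):
    # sequence: job indices in processing order; p, w, d: processing times,
    # weights and due dates, indexed by job.
    completion, z = 0, 0
    for job in sequence:
        completion += p[job]                       # C_[i]
        z += w[job] * max(0, completion - d[job])  # w_[i] * T_[i]
    return z

# Hypothetical four-job instance (illustrative values only).
p = [4, 2, 6, 3]
w = [3, 1, 2, 5]
d = [5, 6, 9, 7]
print(total_weighted_tardiness([0, 1, 2, 3], p, w, d))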
Many researchers have worked on the $1\|\sum w_iT_i$ problem and experimented with many different approaches. The approaches range from very sophisticated computer-intensive techniques to fairly crude heuristics designed primarily for implementation purposes [1]. Lawler [3] showed that the $1\|\sum w_iT_i$ problem is strongly NP-hard. Emmons [6] derived dominance properties which establish precedence relations among jobs in an optimal schedule of the one machine total tardiness problem. Emmons' theorems give the necessary conditions for a shorter job to precede a longer one or for a longer job to precede a shorter one in an optimal schedule. Rinnooy Kan et al. [7] extended these results to the weighted version. Proposed approaches for solving the problem $1\|\sum w_iT_i$ can be grouped into three categories: (1) exact solution approaches, (2) constructive heuristics, and (3) improvement heuristics. Although there are some exact solution approaches, including dynamic programming algorithms [8,9] and branch and bound algorithms [10,11], they require considerable CPU time and memory. Constructive heuristics build a feasible schedule from scratch; Cheng et al. [12] and Fisher [9] proposed this type of heuristic. Improvement heuristics start from a previously generated initial schedule or schedules and try to improve this schedule. Proposed improvement heuristics can be categorized as genetic algorithms [13,14], simulated annealing [15], tabu search [13], ant colony algorithms [16], and dynasearch algorithms [17,18]. In a more recent work, Bozejko et al. [19] proposed a new fast local search procedure based on a tabu search approach with a specific neighborhood which employs blocks of jobs, and they showed that this algorithm provides better results than the heuristics proposed by Crauwels et al. [13] and Grosso et al. [18] on benchmarking instances.

In conventional genetic algorithms, the crossover operator is used as the principal operator to improve solutions, and the performance of a genetic system is heavily dependent on it [20]. In this study, using benchmarking problems, the effectiveness of eleven genetic crossover operators which have been proposed for the permutation encoding scheme is compared for the problem $1\|\sum w_iT_i$, and the results presented. The rest of the article is organized as follows. In the next section we introduce briefly the genetic algorithm, and in Section 3 we explain the procedures of the genetic crossover operators used in the comparisons. Section 4 presents the comparison and its results. The conclusions are discussed in Section 5.

2. Genetic algorithms

Genetic algorithms (GA) are search algorithms based on the mechanics of natural selection and natural genetics [21]. Genetic algorithms emphasize genetic encoding of potential solutions into chromosomes and apply genetic operators to these chromosomes [22]. That is, the genetic algorithm maintains a population of individuals. Each individual represents a potential solution to the problem at hand and is evaluated to give some measure of its fitness. Some individuals undergo stochastic transformations by means of genetic operations to form new individuals. There are two types of transformation: 'mutation', which creates new individuals by making changes in a single individual, and 'crossover', which creates new individuals by combining parts of two individuals. New individuals, called 'offspring', are then evaluated. A new population is formed by selecting the more fit individuals from the parent population and the offspring population. After several generations, the algorithm converges to the best individual, which hopefully represents an optimal or suboptimal solution to the problem [20]. A general structure of a canonical genetic algorithm (also sometimes called the simple genetic algorithm) is given in Fig. 1 [22].
Step 1. Generate the initial population P(0) at random and set i = 0.
Step 2. REPEAT
  (a) Evaluate the fitness of each individual in P(i).
  (b) Select parents from P(i) based on their fitness as follows: given the fitness values of the n individuals as $f_1, f_2, \ldots, f_n$, select individual i with probability
      $$p_i = \frac{f_i}{\sum_{j=1}^{n} f_j}.$$
      This is often called roulette wheel proportional fitness selection.
  (c) Apply crossover to selected parents.
  (d) Apply mutation to crossed-over new individuals.
  (e) Replace parents by the offspring to produce generation P(i + 1) and set i = i + 1.
Step 3. UNTIL the halting criterion is satisfied.
Fig. 1. Canonical genetic algorithm.
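The pseudo-code of Fig. 1 translates almost directly into a short program. The sketch below is a minimal illustration of such a canonical genetic algorithm for the permutation encoding used in this paper; the fitness, crossover and mutation routines are assumed to be supplied by the caller, and the default parameter values shown are placeholders rather than the exact configuration of Section 4.

import random

def canonical_ga(fitness, n_jobs, crossover, mutate, pop_size=40, generations=300):
    # Step 1: random initial population of permutations.
    population = [random.sample(range(n_jobs), n_jobs) for _ in range(pop_size)]
    for _ in range(generations):                         # Steps 2-3
        fits = [fitness(ind) for ind in population]      # Step 2(a)
        total = sum(fits)

        def roulette():                                  # Step 2(b)
            r, acc = random.uniform(0, total), 0.0
            for ind, f in zip(population, fits):
                acc += f
                if acc >= r:
                    return ind
            return population[-1]

        offspring = []
        while len(offspring) < pop_size:
            parent1, parent2 = roulette(), roulette()
            child = crossover(parent1, parent2)          # Step 2(c)
            offspring.append(mutate(child))              # Step 2(d)
        population = offspring                           # Step 2(e)
    return max(population, key=fitness)

In this sketch the crossover routine returns a single child; most of the operators described in Section 3 produce a second offspring simply by exchanging the roles of the two parents.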
3. Genetic crossover operators for permutation encoding

In conventional genetic algorithms, the crossover operator is used as the principal operator and the performance of a genetic system is heavily dependent on it [20]. In this section, we introduce various genetic crossover operators for the permutation encoding structure before comparing their effectiveness for solving the problem $1\|\sum w_iT_i$ in the following section.

3.1. Position based crossover operator (PBX)

A set of positions is selected at random from one parent, each position being independently marked with probability 0.5. By copying the symbols in these positions into the corresponding positions, a protochild is produced. The symbols already selected are deleted from the second parent, and the symbols in the resulting sequence are placed into the unfixed positions of the protochild from left to right to produce one offspring [20]. After changing the roles of the parents, the same procedure is applied to produce the second offspring. This crossover is illustrated by Fig. 2.
Fig. 2. Position based crossover operator (PBX).
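A possible implementation of PBX is sketched below, under the assumption that chromosomes are Python lists holding a permutation of job indices (the function and variable names are illustrative):

import random

def pbx(parent1, parent2):
    # Mark each position of parent1 independently with probability 0.5.
    n = len(parent1)
    marked = [random.random() < 0.5 for _ in range(n)]
    # Marked symbols keep their positions; the rest become holes.
    child = [parent1[i] if marked[i] else None for i in range(n)]
    fixed = {parent1[i] for i in range(n) if marked[i]}
    # Fill the holes, left to right, with parent2's remaining symbols.
    filler = (g for g in parent2 if g not in fixed)
    return [g if g is not None else next(filler) for g in child]

# The second offspring is obtained by swapping the parents: pbx(parent2, parent1).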
Fig. 3. Order based crossover operator (OBX).
3.2. Order based crossover operator (OBX)

This crossover operator was proposed by Syswerda [23], and is a slight variation of the PBX operator. In this operator the order of the symbols in the positions selected in one parent is imposed on the corresponding positions in the other parent [20]. First, each position of the first parent is considered in turn and independently marked with probability 0.5, and the symbols in unmarked positions are placed in the positions they occupy in the second parent. Lastly, the marked symbols of the first parent are copied into the empty positions of the offspring, preserving their absolute order in the first parent. An illustration of the OBX crossover operator is given in Fig. 3.

3.3. One point crossover (1PX)

In this operator, one point is randomly selected for dividing one parent. If n is the number of symbols (genes) in a chromosome, there are (n − 1) crossover points: the first crossover point is the point between the first and the second genes, the second is the point between the second and third genes, and so on. One of these points is selected with equal probability. The jobs on one side (each side is chosen with the same probability) are inherited by the offspring from the parent, and the other jobs are placed in the order they appear in the other parent [24]. After changing the roles of the parents, the same procedure is applied to produce the second offspring. This crossover is illustrated by Fig. 4.

3.4. Cycle crossover operator (CX)

Originally developed by Oliver et al. [25], this operator starts by finding the cycle that is defined by the corresponding positions of symbols between the parents. In this crossover operator, the cycle always starts from the first position. The symbols in the cycle of one parent are copied to the same positions of the offspring, and these symbols are deleted from the other parent. The remaining symbols of the other parent are then copied into the offspring. After changing the roles of the parents, the same procedure is applied to produce the second offspring. An illustration of the CX crossover operator is given in Fig. 5.

3.5. Order crossover (OX)

This operator was developed by Davis [26]. First, two crossover points are selected at random, and the symbols between these points are copied to the same positions of the offspring. The copied symbols are deleted from the other parent, and the remaining symbols are inherited, beginning with the first position following the second crossover point. After changing the roles of the parents, the same procedure is applied to produce the second offspring. An illustration of the OX crossover operator is given in Fig. 6.
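Returning to the operator of Section 3.2, a possible OBX implementation is sketched below (again assuming list-encoded permutations; names are illustrative):

import random

def obx(parent1, parent2):
    n = len(parent1)
    marked = [random.random() < 0.5 for _ in range(n)]
    marked_syms = [parent1[i] for i in range(n) if marked[i]]
    unmarked_syms = {parent1[i] for i in range(n) if not marked[i]}
    child = [None] * n
    # Unmarked symbols of parent1 take the positions they occupy in parent2.
    for pos, g in enumerate(parent2):
        if g in unmarked_syms:
            child[pos] = g
    # Marked symbols fill the remaining holes in their parent1 order.
    it = iter(marked_syms)
    for pos in range(n):
        if child[pos] is None:
            child[pos] = next(it)
    return child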
Fig. 4. One point crossover operator (1PX).
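A sketch of the 1PX operator of Section 3.3 under the same encoding assumptions (the left/right choice follows the description above):

import random

def one_point_crossover(parent1, parent2):
    n = len(parent1)
    cut = random.randint(1, n - 1)        # one of the (n - 1) crossover points
    if random.random() < 0.5:             # inherit the left side from parent1
        head = parent1[:cut]
        head_set = set(head)
        return head + [g for g in parent2 if g not in head_set]
    else:                                 # inherit the right side from parent1
        tail = parent1[cut:]
        tail_set = set(tail)
        return [g for g in parent2 if g not in tail_set] + tail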
Fig. 5. Cycle crossover operator (CX).
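A sketch of the CX operator of Section 3.4, with the cycle started from the first position as described (illustrative names; the second offspring is produced by swapping the parents):

def cycle_crossover(parent1, parent2):
    n = len(parent1)
    position_in_p1 = {gene: i for i, gene in enumerate(parent1)}
    cycle, i = set(), 0
    while i not in cycle:                 # follow the cycle of positions
        cycle.add(i)
        i = position_in_p1[parent2[i]]
    # Cycle positions come from parent1, the rest from parent2.
    return [parent1[i] if i in cycle else parent2[i] for i in range(n)]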
Fig. 6. Order crossover operator (OX).
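A sketch of the OX operator of Section 3.5; the way the two cut points are drawn here (any two of the n + 1 gaps) is an assumption of this sketch rather than something fixed by the paper:

import random

def order_crossover(parent1, parent2):
    n = len(parent1)
    a, b = sorted(random.sample(range(n + 1), 2))
    child = [None] * n
    child[a:b] = parent1[a:b]             # keep parent1's segment
    kept = set(parent1[a:b])
    # Parent2's remaining genes, read circularly from the second cut point.
    filler = [parent2[(b + k) % n] for k in range(n)]
    filler = [g for g in filler if g not in kept]
    for k in range(n - (b - a)):          # fill positions after the segment,
        child[(b + k) % n] = filler[k]    # wrapping around to the start
    return child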
3.6. Linear order crossover operator (LOX)

LOX is a modified version of the OX operator in which the chromosome is considered linear instead of circular. First, two crossover points are selected at random. The symbols of the second parent between the two crossover points are deleted from the first parent, and the holes created are slid from the extremities toward the centre until they reach the cross section. By copying the symbols of the second parent between the two crossover points into the same positions of the first parent, the first offspring is produced [20]. After changing the roles of the parents, the same procedure is applied to produce the second offspring. This crossover is illustrated by Fig. 7.

3.7. Partially mapped crossover operator (PMX)

This operator was proposed by Goldberg and Lingle [27]. Two crossover points are selected at random. The substrings defined by the two crossover points are called mapping sections. The two substrings are exchanged to produce protochildren. The mapping relationship between the two mapping sections is then determined and used to legalize the offspring [20]. An illustrative example of this operator is given in Fig. 8.
Fig. 7. Linear order crossover operator (LOX).
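A sketch of LOX; because the surviving genes of the first parent simply keep their relative order around the implanted segment, the "sliding holes" of the description reduce to a list filter (cut-point convention as in the OX sketch, illustrative names):

import random

def lox(parent1, parent2):
    n = len(parent1)
    a, b = sorted(random.sample(range(n + 1), 2))
    segment = parent2[a:b]
    removed = set(segment)
    survivors = [g for g in parent1 if g not in removed]
    return survivors[:a] + segment + survivors[a:]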
Fig. 8. Partially mapped crossover operator (PMX).
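A sketch of PMX using the usual repair through the mapping defined by the two sections (cut-point convention as before):

import random

def pmx(parent1, parent2):
    n = len(parent1)
    a, b = sorted(random.sample(range(n + 1), 2))
    child = parent1[:]
    child[a:b] = parent2[a:b]                       # exchanged mapping section
    mapping = {parent2[i]: parent1[i] for i in range(a, b)}
    for i in list(range(a)) + list(range(b, n)):    # legalize the outside genes
        g = child[i]
        while g in mapping:                         # follow the mapping chain
            g = mapping[g]
        child[i] = g
    return child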
Fig. 9. Edge recombination crossover operator (ER).
3.8. Edge recombination crossover operator (ER)

Developed by Whitley et al. [28], this operator emphasizes adjacency information. First, an 'edge table' is built by determining the symbols adjacent to each symbol in both parents. A starting symbol is selected at random. Until the offspring is complete, the next element is chosen at random from among the symbols that have links to the previous symbol. If there is no symbol with a link to the previous one, any of the remaining symbols is chosen at random. By using this procedure, any number of offspring can be produced. This crossover is illustrated by Fig. 9.

3.9. Two point crossover operator (Version 1) (2PX_V1)

This operator was proposed by Murata and Ishibuchi [24]. First, two crossover points are randomly selected. The symbols outside the two selected points are inherited by the offspring from one parent. The other symbols are placed in the order they appear in the other parent. After changing the roles of the parents, the same procedure is applied to produce the second offspring. This crossover is illustrated by Fig. 10.

3.10. Two point crossover operator (Version 2) (2PX_V2)

This crossover was also proposed by Murata and Ishibuchi [24], and is basically the same as the above two point crossover operator (Version 1) except for the symbols inherited from the first parent. In this crossover, the symbols between the randomly selected points are inherited by the offspring from one parent. An illustration of this crossover operator is given in Fig. 11.
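A sketch of the ER operator of Section 3.8. Note that the next symbol is chosen uniformly among the linked candidates, following the description above (the classical operator of Whitley et al. prefers the candidate with the fewest remaining edges), and adjacency is treated as non-circular here, which is an assumption of this sketch:

import random

def edge_recombination(parent1, parent2):
    n = len(parent1)
    edges = {g: set() for g in parent1}             # edge table from both parents
    for parent in (parent1, parent2):
        for i, g in enumerate(parent):
            if i > 0:
                edges[g].add(parent[i - 1])
            if i < n - 1:
                edges[g].add(parent[i + 1])
    remaining = set(parent1)
    current = random.choice(parent1)                # random starting symbol
    child = [current]
    remaining.remove(current)
    while remaining:
        linked = [g for g in edges[current] if g in remaining]
        current = random.choice(linked) if linked else random.choice(list(remaining))
        child.append(current)
        remaining.remove(current)
    return child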
Fig. 10. Two point crossover operator – version 1 (2PX_V1).
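A sketch of 2PX_V1 (cut-point convention as before); 2PX_V2 is obtained by keeping the middle section from the first parent instead, and 2PX_V3 simply picks one of the two variants with probability 0.5 for each pair of parents:

import random

def two_point_v1(parent1, parent2):
    n = len(parent1)
    a, b = sorted(random.sample(range(n + 1), 2))
    outside = set(parent1[:a] + parent1[b:])            # inherited from parent1
    middle = [g for g in parent2 if g not in outside]   # parent2's order
    return parent1[:a] + middle + parent1[b:]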
Fig. 11. Two point crossover operator – version 2 (2PX_V2).
3.11. Two point crossover operator (Version 3) (2PX_V3)

Proposed by Murata and Ishibuchi [24], this crossover is a mixture of the above two-point crossovers (Versions 1 and 2). These two crossovers are applied to each pair of selected parents with the same probability (i.e., 0.5 for each crossover).

4. Comparison between crossover operators

In this section, computational experiments on the benchmark instances from OR-Library (http://people.brunel.ac.uk/~mastjjb/jeb/info.html) are presented. These benchmark instances include 40-, 50- and 100-job problems, and in each group there are 125 instances. We use the first 5 problems in each group to compare the performances of the crossover operators in solving the problem $1\|\sum w_iT_i$. Solutions are encoded according to the permutation encoding scheme, and the 'Roulette Wheel Selection' mechanism is used to determine parent individuals. The search process is terminated after 300 generations. The fitness value of the ith individual is determined as

$$f_i = \frac{1}{O_i},$$

where $f_i$ is the fitness value of the ith individual, and $O_i$ is the objective function value (total weighted tardiness) of the ith individual. Selection probabilities are determined according to the ranking mechanism. Engin [29] analyzed many other genetic algorithm components for the flowshop scheduling problem with the objective of minimizing makespan. Using his comparison results, the other components of the genetic algorithm were determined; this combination of components is presented in Table 1.

Using the presented parameter values, five independent runs were conducted for each test problem with each crossover operator introduced in Section 3, and for each run the genetic algorithms with the different crossover operators were started from the same randomly generated initial populations. Average values of the five independent runs were calculated, and the results are given in Table 2. The mutation probability was not varied, to retain consistency in the testing, but it was felt that changing it would not have led to different rankings of the crossover operators.

It can be seen from the comparison results in Table 2 that, for solving the problem $1\|\sum w_iT_i$, the OBX and PBX crossover operators have a performance superior to the other genetic crossover operators. It can also be said that one of the properties which determines the solution quality of the problem $1\|\sum w_iT_i$ is the order or position of the jobs. The other issue is that the CX crossover operator has the worst performance for solving this type of scheduling problem (with consistently the worst solution values). Although the PMX crossover operator performs well on traveling salesman problems [27], its performance in solving the problem $1\|\sum w_iT_i$ is inferior to the OBX and PBX crossover operators.

Table 1
Determined parameter values for the genetic algorithm used in the comparison

Parameter               Value
Population size         40
Crossover probability   0.95
Mutation probability    0.50
Mutation operator       Three-job exchange (see Fig. 12)
Fig. 12. Three-job exchange mutation operator.
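The paper specifies the mutation operator only through its name and Fig. 12. One plausible reading of "three-job exchange", shown purely as an assumption of this sketch, is to pick three distinct positions at random and rotate the jobs among them:

import random

def three_job_exchange(sequence):
    # Assumed interpretation: cyclically exchange the jobs at three
    # randomly chosen positions (see Fig. 12).
    child = sequence[:]
    i, j, k = random.sample(range(len(sequence)), 3)
    child[i], child[j], child[k] = sequence[k], sequence[i], sequence[j]
    return child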
Table 2
Comparison results (average total weighted tardiness over five runs)

Problem     OBX      PBX      1PX      2PX_V3   2PX_V1   PMX      CX       2PX_V2   LOX      OX       ER
n = 40
1           962.0    947.4    958.2    967.8    951.0    1002.6   1032.2   994.4    982.8    1985.6   2874.2
2           1320.8   1281.2   1376.4   1334.6   1295.8   1271.6   1329.8   1412.0   1308.0   1869.6   2117.0
3           565.8    573.0    575.4    578.4    582.6    601.6    641.8    573.0    641.0    1551.4   1949.6
4           2097.6   2100.6   2157.0   2125.8   2136.0   2167.8   2211.6   2199.0   2173.6   3037.8   3101.0
5           990.0    990.0    990.0    990.0    1035.6   1010.4   1075.6   1050.0   1048.4   1598.0   2066.8
n = 50
1           2186.2   2218.6   2144.0   2318.4   2193.6   2197.8   2170.0   2264.0   2266.2   3182.4   3498.8
2           2005.8   2011.0   2027.0   2017.6   2075.4   2036.6   2070.0   2062.0   2070.8   2844.8   3031.2
3           2597.4   2602.8   2619.0   2632.0   2623.4   2740.0   2644.8   2624.8   2675.2   3627.6   4294.0
4           2695.4   2691.0   2724.0   2789.0   2783.8   2843.8   2986.4   2851.0   2695.4   4997.0   5190.8
5           1590.2   1586.8   1644.0   1595.2   1644.4   1651.6   1703.8   1684.0   1643.2   3422.4   4850.6
n = 100
1           6113.2   6175.0   6615.8   6532.2   6613.0   6845.8   6936.0   6951.2   6861.2   20769.2  26511.4
2           6270.0   6251.0   6721.6   6844.0   6644.4   6876.0   7231.4   7168.2   7115.8   19921.2  24536.8
3           4400.8   4432.4   4442.0   4520.8   4766.2   5058.2   4900.8   4885.0   5149.2   15965.0  20893.6
4           5137.6   5123.2   5552.2   5581.2   5711.6   5538.4   5882.0   6243.4   6049.6   15185.4  21025.0
5           5362.6   5395.6   5763.6   5674.4   5995.0   5951.2   6170.6   6188.0   6393.2   16667.0  21348.4
Number of best results
            8        6        2        1        0        1        0        0        0        0        0
Table 3
Computational times (CPU seconds)

Problem       LOX    PMX    OX     ER     PBX    OBX    2PX_V1  2PX_V2  2PX_V3  1PX    CX     Prob. mean
n = 40
1             11.1   10.9   10.9   12.1   11.2   11.2   11.3    11.2    11.1    11.4   11.0   11.2
2             10.8   11.2   10.9   12.1   11.3   11.2   11.2    11.6    11.2    11.0   11.2   11.2
3             10.9   10.9   11.2   12.1   11.2   11.3   11.2    12.1    11.0    11.4   11.1   11.3
4             11.2   11.0   11.2   12.2   11.3   11.0   11.3    11.6    11.3    11.4   11.2   11.3
5             10.7   10.8   11.3   12.0   11.4   11.0   11.3    11.1    11.3    11.3   11.5   11.3
Mean 40       11.0   11.0   11.1   12.1   11.3   11.1   11.3    11.5    11.2    11.3   11.2   11.3
n = 50
1             15.4   15.7   15.6   17.1   15.7   15.9   15.7    15.5    15.9    16.0   15.8   15.8
2             15.5   15.7   15.8   17.0   15.7   15.8   15.4    15.9    15.7    16.1   17.1   16.0
3             15.7   15.5   15.3   17.5   16.4   16.5   15.8    16.1    16.1    15.8   15.8   16.0
4             15.3   15.8   15.4   17.0   15.7   15.7   15.9    16.0    16.6    16.1   16.1   15.9
5             15.4   15.6   15.7   16.9   15.9   15.7   15.9    15.8    16.1    16.1   15.9   15.9
Mean 50       15.5   15.7   15.6   17.1   15.9   15.9   15.7    15.9    16.1    16.0   16.1   15.9
n = 100
1             54.5   55.4   56.0   59.4   55.5   55.1   55.7    55.5    57.1    57.0   56.2   56.1
2             54.5   55.0   54.7   60.0   55.6   55.0   56.1    55.4    56.9    56.3   56.6   56.0
3             54.9   55.9   55.6   60.3   56.0   54.9   57.4    56.5    56.6    55.8   56.4   56.4
4             54.9   55.5   55.5   59.7   56.1   54.9   56.3    55.6    56.4    56.4   56.7   56.2
5             54.4   56.1   54.5   59.4   56.2   55.0   55.6    57.0    56.3    56.7   56.2   56.1
Mean 100      54.6   55.6   55.3   59.8   55.9   55.0   56.2    56.0    56.7    56.4   56.4   56.2
Overall mean  27.0   27.4   27.3   29.7   27.7   27.3   27.7    27.8    28.0    27.9   27.9   27.8
Reassuringly, it was found that the computation times for all crossover operators were nearly constant for each problem size. Table 3 gives details of these times in CPU seconds on a PC. The uniformity of the times meant that no extra time was spent by using the superior crossover operators.
5. Conclusions

In this study, the one machine n-job problem with the objective of minimizing total weighted tardiness is considered. The effectiveness of eleven crossover operators for solving this type of scheduling problem is compared on a set of benchmark instances. Experimental results demonstrate the effectiveness of the OBX and PBX crossover operators. It is concluded that one of the properties which determine the solution quality of the problem $1\|\sum w_iT_i$ is the order or position of the jobs.

References

[1] M. Pinedo, Scheduling: Theory, Algorithms, and Systems, Prentice-Hall, New Jersey, 1995.
[2] S. French, Sequencing and Scheduling: An Introduction to the Mathematics of the Job Shop, Ellis Horwood, Chichester, 1982.
[3] E.L. Lawler, A pseudopolynomial algorithm for sequencing jobs to minimize total tardiness, Annals of Discrete Mathematics 1 (1977) 331–342.
[4] J.R. Coffman, Computer and Job-Shop Scheduling Theory, Wiley, New York, 1976.
[5] B. Alidaee, K.R. Ramakrishnan, A computational experiment of covert-AU class of rules for single machine tardiness scheduling problems, Computers and Industrial Engineering 30 (2) (1996) 201–209.
[6] H. Emmons, One-machine sequencing to minimize certain functions of job tardiness, Operations Research 17 (1969) 701–715.
[7] A.H.G. Rinnooy Kan, B.J. Lageweg, J.K. Lenstra, Minimizing total costs in one-machine scheduling, Operations Research 25 (1975) 908–927.
[8] E.L. Lawler, Efficient implementation of dynamic programming algorithms for sequencing problems, Report BW 106, Mathematisch Centrum, Amsterdam, 1979.
[9] M.L. Fisher, A dual algorithm for the one machine scheduling problem, Mathematical Programming 11 (1976) 229–252.
[10] C.N. Potts, L.N. Van Wassenhove, A branch and bound algorithm for the total weighted tardiness problem, Operations Research 33 (1985) 177–181.
[11] M.S. Akturk, M.B. Yildirim, A new lower bounding scheme for the total weighted tardiness problem, Computers and Operations Research 25 (4) (1998) 265–278.
[12] T.C.E. Cheng, C.T. Ng, J.J. Yuan, Z.H. Liu, Single machine scheduling to minimize total weighted tardiness, European Journal of Operational Research 165 (2005) 423–443.
[13] H.A.J. Crauwels, C.N. Potts, L.N. Van Wassenhove, Local search heuristics for the single machine total weighted tardiness scheduling problem, INFORMS Journal on Computing 10 (3) (1998) 341–350.
[14] S. Avci, M.S. Akturk, R.H. Storer, A problem space algorithm for single machine weighted tardiness problems, IIE Transactions 35 (2003) 479–486.
[15] C.N. Potts, L.N. Van Wassenhove, Single machine tardiness sequencing heuristics, IIE Transactions 23 (1991) 346–354.
[16] O. Holthaus, C. Rajendran, A fast ant-colony algorithm for single-machine scheduling to minimize the sum of weighted tardiness of jobs, Journal of the Operational Research Society 56 (2005) 947–953.
[17] R.K. Congram, C.N. Potts, S.L. Van de Velde, An iterated dynasearch algorithm for the single-machine total weighted tardiness scheduling problem, INFORMS Journal on Computing 14 (1) (2002) 52–67.
[18] G. Grosso, F. Della Croce, R. Tadei, An enhanced dynasearch neighborhood for the single-machine total weighted tardiness scheduling problem, Operations Research Letters 32 (2004) 68–72.
[19] W. Bozejko, J. Grabowski, M. Wodecki, Block approach – tabu search algorithm for single machine total weighted tardiness problem, Computers & Industrial Engineering 50 (1–2) (2006) 1–14.
[20] M. Gen, R. Cheng, Genetic Algorithms and Engineering Optimization, John Wiley & Sons Inc., USA, 2000.
[21] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Massachusetts, USA, 1989.
[22] R. Sarker, M. Mohammadian, X. Yao, Evolutionary Optimization, Kluwer Academic Publishers, New York, 2002.
[23] G. Syswerda, Schedule optimization using genetic algorithms, in: L. Davis (Ed.), Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, 1990.
[24] T. Murata, H. Ishibuchi, Performance evaluation of genetic algorithms for flowshop scheduling problems, in: International Conference on Evolutionary Computation, 1994, pp. 812–817.
[25] I. Oliver, D. Smith, J. Holland, A study of permutation crossover operators on the traveling salesman problem, in: Proceedings of the Second International Conference on Genetic Algorithms and their Applications, Hillsdale, 1987, pp. 224–230.
[26] L. Davis, Applying adaptive algorithms to epistatic domains, in: Proceedings of the International Joint Conference on Artificial Intelligence, 1985, pp. 162–164.
[27] D.E. Goldberg, R. Lingle, Alleles, loci, and the traveling salesman problem, in: J.J. Grefenstette (Ed.), Proceedings of the First International Conference on Genetic Algorithms and their Applications, Lawrence Erlbaum, Hillsdale, 1985, pp. 154–159.
[28] D. Whitley, T. Starkweather, D.A. Fuquay, Scheduling problems and traveling salesmen: the genetic edge recombination operator, in: Proceedings of the Third International Conference on Genetic Algorithms, Los Altos, 1989, pp. 133–140.
[29] O. Engin, To increase the performance of flow-shop scheduling problems solving with genetic algorithms: a parameter optimization, Ph.D. Thesis, Istanbul Technical University, Turkey, 2001.