Information Sciences 173 (2005) 155–167 www.elsevier.com/locate/ins
A hybrid search algorithm with heuristics for resource allocation problem

Zne-Jung Lee a,*, Chou-Yuan Lee b

a Department of Information Management, Kang-Ning Junior College of Medical Care and Management, 137 Lane 75, Sec. 3, Kang-Ning Road, Nei-Hu, Taipei, Taiwan, ROC
b Department of Information Management, Lan-Yang Institute of Technology, Taiwan, ROC

Received 12 January 2004; received in revised form 9 July 2004; accepted 13 July 2004
Abstract

The resource allocation problem is to allocate resources to activities so that the resulting cost is as close to optimal as possible. In this paper, a hybrid search algorithm with heuristics for a resource allocation problem encountered in practice is proposed. The proposed algorithm combines the advantages of the genetic algorithm (GA) and ant colony optimization (ACO), so that it can both explore the search space and exploit the best solutions found. In our implementation, both the GA and the ACO parts are tailored to the resource allocation problem. Furthermore, heuristics are used to improve the search performance for the resource allocation problem. Simulation results are reported, and the proposed algorithm indeed shows admirable performance on the tested problems.
© 2004 Elsevier Inc. All rights reserved.

Keywords: Resource allocation problem; Hybrid system; Genetic algorithm; Ant colony optimization
This work was supported in part by the National Science Council of Taiwan, ROC, under grant 92-2213-E-345-001.
* Corresponding author. E-mail address: [email protected] (Z.-J. Lee).
doi:10.1016/j.ins.2004.07.010
1. Introduction

The resource allocation problem is an optimization problem that arises in many fields such as load distribution, computer scheduling, the weapon–target assignment problem, production planning and so on [1–3]. Traditional methods for solving this class of problems have been reported in the literature [1,4], but these problems are difficult to solve directly and lead to exponential computational complexity [5–8]. Recently, genetic algorithms (GAs) and ant colony optimization (ACO), known as population-based algorithms, have been widely used as search algorithms in various applications and have demonstrated satisfactory performance [9–12]. Both perform a multi-directional search by maintaining a set of solutions, referred to as a population, which provides the capability of finding global optima. In our previous work [7,8,13], we employed GA and ACO to solve a specific resource allocation problem. Even though those approaches could find the best solution in the simulated cases, their search efficiency was not good enough. In this paper, we propose a novel hybrid search algorithm that combines a genetic algorithm and ant colony optimization with heuristics to improve the search performance for the resource allocation problem.

Genetic algorithms (GAs), or more generally evolutionary algorithms [10], have been touted as a class of general-purpose search strategies for optimization problems. GAs use a population of solutions from which, using recombination and selection strategies, better and better solutions can be produced. When applied to optimization problems, GAs have the advantage of performing a global search and may be hybridized with domain-dependent heuristics for specific problems [14–16,22]. As a consequence, GAs appear to be a suitable approach for solving the resource allocation problem. However, GAs may suffer a certain degeneracy in search performance if their operators are not carefully designed [16,17].

Ant colony optimization (ACO) algorithms are a class of algorithms inspired by the observation of real ants. Real ants are capable of finding the shortest route between their nest and a food source through their biological mechanisms. Much research has been devoted to artificial ants, which are agents with the capability of mimicking the behavior of real ants [18]. Agents explore and exploit the pheromone information that has been left on the ground where they traversed. While building solutions, each agent collects pheromone information on the problem characteristics and uses this information to modify the representation of the problem as seen by the other agents. With such concepts, a multi-agent (population-based) algorithm called ant colony optimization has been widely used as a cooperative search algorithm for optimization problems [19,20].

In this paper, a hybrid search algorithm with heuristics for the resource allocation problem is proposed. In the proposed algorithm, the GA is used to perform global exploration among the population, and heuristics are applied to each newly generated offspring to escape from local optima.
Then ACO uses the best solution obtained from the GA and the heuristics at each generation to perform local exploitation.

This paper is organized as follows. In Section 2, the resource allocation problem encountered in practice is formulated. In Section 3, the basics of genetic algorithms and ant colony optimization are introduced. The proposed algorithm is presented and discussed in Section 4. In Section 5, the results of employing the proposed algorithm to solve the resource allocation problem are presented. Finally, Section 6 concludes the paper.
2. The formulation of the resource allocation problem

The resource allocation problem addressed in this paper is known to be NP-complete [1,4,13]. It is to allocate resources to activities so that the allocation leads to the best solution. Two assumptions are made for this resource allocation problem. The first is that there are R resources and A activities, and all resources must be allocated to activities. The second is that the individual allocation probability P_ij, i.e., the probability that the ith activity is fulfilled when the jth resource is allocated to it, is known for all i and j. In our research, the resource allocation problem is to minimize the objective function

Z = \sum_{i=1}^{A} \mathrm{COST}(i) \prod_{j=1}^{R} \left( 1 - P_{ij} \right)^{X_{ij}},    (1)

subject to the assumption that all resources must be allocated to activities; that is,

\sum_{i=1}^{A} X_{ij} = 1, \quad j = 1, 2, \ldots, R,    (2)

where X_ij is a Boolean variable for the allocation of the jth resource to the ith activity, X_ij = 1 indicates that the jth resource is allocated to the ith activity, and COST(i) is the penalty for the ith activity not being fulfilled by its assigned resources. It is noted that Eq. (1) is a specific form of the resource allocation problem and can be equivalently rewritten in other forms [8,13].
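To make Eqs. (1) and (2) concrete, the following Python sketch (ours, not part of the paper) evaluates Z for an allocation encoded, as in Section 4, as a vector whose jth entry is the activity receiving resource j; the data in the small example are hypothetical.

    def evaluate_cost(allocation, cost, prob):
        """Objective Z of Eq. (1).

        allocation[j] = index of the activity that receives resource j
                        (so the constraint of Eq. (2) holds by construction).
        cost[i]       = COST(i), penalty if activity i is not fulfilled.
        prob[i][j]    = P_ij, probability that resource j fulfils activity i.
        """
        A = len(cost)
        R = len(allocation)
        total = 0.0
        for i in range(A):
            survive = 1.0                       # product over resources assigned to activity i
            for j in range(R):
                if allocation[j] == i:          # X_ij = 1
                    survive *= (1.0 - prob[i][j])
            total += cost[i] * survive
        return total

    # Small hypothetical example: 2 activities, 3 resources.
    cost = [10.0, 5.0]
    prob = [[0.6, 0.2, 0.5],
            [0.3, 0.7, 0.4]]
    print(evaluate_cost([0, 1, 0], cost, prob))  # 10*0.4*0.5 + 5*0.3 = 3.5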
3. The basics of genetic algorithms and ant colony optimization

Genetic algorithms and ant colony optimization are population-based search algorithms: they maintain a population of structures as the key element in the design and implementation of the problem-solving algorithm. Each individual in the population is evaluated according to its fitness value.
Both algorithms are sufficiently complex to provide powerful adaptive search approaches, and they can usually be embedded with heuristics to speed up their search performance on optimization problems. The basics of genetic algorithms and ant colony optimization are described in this section.

3.1. Genetic algorithms

Genetic algorithms start with a set of randomly selected chromosomes as the initial population, which encodes a set of candidate solutions. In genetic algorithms, the variables of a problem are represented as genes in a chromosome, and the chromosomes are evaluated according to their fitness values using some measure of profit or utility that we want to optimize. Recombination typically involves two genetic operators: crossover and mutation. The genetic operators alter the composition of genes to create new chromosomes called offspring. The crossover operator generates offspring that inherit genes from both parents with a crossover probability Pc. The mutation operator inverts randomly chosen genes of a chromosome with a mutation probability Pm. The selection operator is an artificial version of natural selection, a Darwinian survival of the fittest among populations, used to create the population from generation to generation; chromosomes with better fitness values have higher probabilities of being selected for the next generation. After several generations, genetic algorithms can converge to the best solution. Let P(t) and C(t) be the parents and offspring at generation t. A simple genetic algorithm is shown in the following [9]:

Procedure: Simple genetic algorithm
Begin
  t ← 0;
  Initialize P(t);
  Evaluate P(t);
  While (the termination conditions are not met) do
    Recombine P(t) to yield C(t);
    Evaluate C(t);
    Select P(t + 1) from P(t) and C(t);
    t ← t + 1;
  End;
End;

Basically, the one-cut-point crossover, inversion mutation and (μ + λ)-ES (evolution strategy) survival [9,16], where μ is the population size and λ is the number of offspring created, can be employed as the operators for the resource allocation problem addressed in this paper. However, if these operators are not carefully designed for the resource allocation problem, the search in the solution space may become inefficient or even random.
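For illustration, the following Python sketch (ours, not from the paper) shows one-cut-point crossover and inversion mutation on the allocation encoding used later in Section 4, where the jth gene holds the activity assigned to resource j; with this encoding both operators always yield allocations satisfying Eq. (2).

    import random

    def one_cut_point_crossover(parent1, parent2):
        """Swap the tails of two parent chromosomes at a random cut point."""
        cut = random.randint(1, len(parent1) - 1)
        child1 = parent1[:cut] + parent2[cut:]
        child2 = parent2[:cut] + parent1[cut:]
        return child1, child2

    def inversion_mutation(chromosome):
        """Reverse the gene order between two randomly chosen positions."""
        a, b = sorted(random.sample(range(len(chromosome)), 2))
        return chromosome[:a] + chromosome[a:b + 1][::-1] + chromosome[b + 1:]

    # Hypothetical parents for R = 6 resources and A = 3 activities.
    p1, p2 = [0, 1, 2, 0, 1, 2], [2, 2, 1, 1, 0, 0]
    c1, c2 = one_cut_point_crossover(p1, p2)
    c1 = inversion_mutation(c1)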
Recently, genetic algorithms with local search have also been considered as good alternatives for solving optimization problems. The procedure of a GA with local search is shown as follows [15,17]:

Procedure: GA with local search
Begin
  t ← 0;
  Initialize P(t);
  Evaluate P(t);
  While (the termination conditions are not met) do
    Apply crossover on P(t) to generate c1(t);
    Apply local search on c1(t) to yield c2(t);
    Apply mutation on c2(t) to yield c3(t);
    Apply local search on c3(t) to yield c4(t);
    Evaluate C(t) = {c1(t), c2(t), c3(t), c4(t)};
    Select P(t + 1) from P(t) and C(t);
    t ← t + 1;
  End;
End;

Here, local search explores the neighborhood of a solution in an attempt to improve its fitness value in a local manner. It starts from the current solution and repeatedly tries to improve it by local changes. If a better solution is found, it replaces the current solution and the algorithm searches again from the new solution. These steps are repeated until a stopping criterion is satisfied. It is noted that the local search procedure can be designed around problem-specific heuristics to improve its performance. In this paper, we embed heuristics into the genetic algorithm as the local search procedure.
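A minimal sketch of such a hill-climbing local search is given below; it is our own Python illustration, and the single-gene reassignment neighborhood is an assumption chosen for simplicity, not the heuristic neighborhood the paper later embeds.

    def local_search(allocation, objective, num_activities, max_passes=10):
        """Hill climbing: keep applying improving single-gene reassignments.

        allocation     : list, allocation[j] = activity assigned to resource j
        objective      : callable returning the cost of an allocation (lower is better)
        num_activities : number of activities A
        """
        best = list(allocation)
        best_cost = objective(best)
        for _ in range(max_passes):
            improved = False
            for j in range(len(best)):            # try moving resource j
                for i in range(num_activities):   # ...to every other activity i
                    if i == best[j]:
                        continue
                    trial = list(best)
                    trial[j] = i
                    trial_cost = objective(trial)
                    if trial_cost < best_cost:    # keep the improvement
                        best, best_cost = trial, trial_cost
                        improved = True
            if not improved:                      # local optimum reached
                break
        return best, best_cost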
3.2. Ant colony optimization

Ant colony optimization (ACO) is a constructive population-based search technique that solves optimization problems by exploiting pheromone information. It is an evolutionary approach in which several generations of artificial agents search for good solutions in a cooperative way. Agents are initially generated on random nodes and stochastically move from a start node to feasible neighbor nodes. During the process of finding feasible solutions, agents collect and store information in pheromone trails. Agents can release pheromone online while building solutions. In addition, pheromone is evaporated during the search process to avoid local convergence and to explore more search areas. Thereafter, additional pheromone is deposited offline to update the pheromone trails so as to bias the search process in favor of the currently best path. The pseudo-code of ant colony optimization is as follows [18]:

Procedure: Ant colony optimization (ACO)
Begin
  While (ACO has not been stopped) do
    Agents_generation_and_activity();
    Pheromone_evaporation();
    Daemon_actions();
  End;
End;

In ACO, agents find solutions by starting from a start node and moving to feasible neighbor nodes in the process Agents_generation_and_activity. During this process, the information collected by the agents is stored in the so-called pheromone trails. Agents can release pheromone while building a solution (online step-by-step) or after the solution has been built (online delayed). An agent-decision rule, made up of pheromone and heuristic information, governs the agents' stochastic moves toward neighbor nodes. The kth ant at time t positioned on node r moves to the next node s according to the rule

s = \begin{cases} \arg\max_{u \in \mathrm{allowed}_k(t)} \left\{ [\tau_{ru}(t)]^{\alpha} [\eta_{ru}]^{\beta} \right\} & \text{if } q \le q_0, \\ S & \text{otherwise,} \end{cases}    (3)

where \tau_{ru}(t) is the pheromone trail at time t, \eta_{ru} is the problem-specific heuristic information, \alpha is a parameter representing the importance of pheromone information, \beta is a parameter representing the importance of heuristic information, q is a random number uniformly distributed in [0, 1], q_0 is a pre-specified parameter (0 \le q_0 \le 1), \mathrm{allowed}_k(t) is the set of feasible nodes currently not assigned by ant k at time t, and S is a node selected from \mathrm{allowed}_k(t) according to the probability distribution

P_{rs}^{k}(t) = \begin{cases} \dfrac{[\tau_{rs}(t)]^{\alpha} [\eta_{rs}]^{\beta}}{\sum_{u \in \mathrm{allowed}_k(t)} [\tau_{ru}(t)]^{\alpha} [\eta_{ru}]^{\beta}} & \text{if } s \in \mathrm{allowed}_k(t), \\ 0 & \text{otherwise.} \end{cases}    (4)

Pheromone_evaporation is the process of decreasing the intensities of the pheromone trails over time. It is used to avoid local convergence and to explore more of the search space. Daemon_actions are optional for ant colony optimization; they are often used to collect useful global information by depositing additional pheromone.
In the original algorithm of [18], a scheduling process coordinates these three processes; it leaves open how the three processes should interact in ant colony optimization and related approaches.
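The pseudo-random-proportional choice of Eqs. (3) and (4) can be sketched in Python as follows; this is our own illustration, with assumed data structures (tau and eta indexable as tau[r][u] and eta[r][u]) rather than code from the paper.

    import random

    def choose_next_node(r, allowed, tau, eta, alpha=1.0, beta=2.0, q0=0.9):
        """Pseudo-random-proportional rule of Eqs. (3) and (4).

        tau[r][u] : pheromone on edge (r, u); eta[r][u] : heuristic value.
        """
        scores = {u: (tau[r][u] ** alpha) * (eta[r][u] ** beta) for u in allowed}
        if random.random() <= q0:
            # Exploitation: take the best-scoring feasible node (Eq. (3)).
            return max(scores, key=scores.get)
        # Exploration: sample a node proportionally to its score (Eq. (4)).
        total = sum(scores.values())
        pick = random.random() * total
        running = 0.0
        for u, s in scores.items():
            running += s
            if running >= pick:
                return u
        return u  # numerical fallback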
4. The proposed algorithm

In this paper, we propose to hybridize a genetic algorithm and ant colony optimization with heuristics to solve the resource allocation problem. The proposed algorithm has both the advantages of the genetic algorithm, namely the ability to find feasible solutions and to avoid premature convergence, and those of ant colony optimization, namely the ability to conduct fine-tuning in the search space and to find better solutions. Meanwhile, heuristics are embedded into the genetic algorithm as local search to improve the search ability. The outline of the proposed algorithm is as follows:

Procedure: The proposed algorithm
Begin
  t ← 0;
  Initialize P(t);
  Evaluate P(t);
  While (the termination conditions are not met) do
    Apply crossover on P(t) to generate c1(t);
    Apply heuristics on c1(t) to yield c2(t);
    Apply mutation on c2(t) to yield c3(t);
    Apply heuristics on c3(t) to yield c4(t);
    Evaluate C(t) = {c1(t), c2(t), c3(t), c4(t)};
    If (a new best solution is found in C(t)) then
      Apply the offline pheromone updating rule using the new best solution;
      While (the stop criterion for all agents is not met) do
        Apply Agents_generation_and_activity and generate c5(t);
        Evaluate C(t) = {c1(t), c2(t), c3(t), c4(t), c5(t)};
        Apply the online pheromone updating rule;
      End;
    End;
    Select P(t + 1) from P(t) and C(t);
    t ← t + 1;
  End;
End;
In the proposed algorithm, the chromosome for the resource allocation problem is encoded as a vector of activities: the value i of the jth gene indicates that the jth resource is allocated to the ith activity. The fitness value is defined as the objective function given in Eq. (1). First, the initial population is randomly generated and then evaluated. In our implementation, the GA provides the main portion of diversity in the search, but it is not well suited for fine-tuning structures that are very close to optimal solutions [9,21]. Two approaches are therefore used to enhance the search ability for the resource allocation problem in the proposed algorithm.

The first approach is to use an elite-preserving crossover (GEX) to keep, at each generation, those genes that are supposed to be good [25]. The concept of GEX is to construct offspring from the possibly good genes of the parents; in the GEX operator, the information of those supposedly good genes contained in both parents is preserved. Here, a gene is good if it assigns the jth resource to the activity i* with the highest value of COST(i)·P_ij among all i. The second approach applies heuristics after the basic loop of genetic operators. By incorporating heuristics that work after the crossover and mutation operators, a quick optimization can be performed to improve the offspring. The flow chart of the proposed heuristics is shown in Fig. 1. In the first phase, for the activities existing in the current chromosome, the values of COST(i)·P_ij are sorted in descending order; the resource j with the highest COST(i)·P_ij is selected and allocated to the ith activity in the new chromosome. This continues until all currently existing activities have been allocated in the chromosome. There may still be unallocated resources in the new chromosome. In the second phase, each unallocated resource then greedily selects the activity with the highest COST(i)·P_ij, where the candidate activities also include all activities that are not in the current chromosome. It is noted that the first phase attempts to maintain the allocation properties of the original chromosome, whereas the second phase becomes a greedy allocation of resources to activities [13].

After the genetic operators, ACO is performed to exploit the newly obtained best solution. In the ACO part, the three main processes are scheduled in the sequence Daemon_actions, Agents_generation_and_activity, and Pheromone_evaporation. In our implementation, the best solution is used to update the pheromone trail only for the resource–activity pairs (j, i) allocated in that solution:

\tau_{ij}(t + 1) = (1 - \rho)\tau_{ij}(t) + \rho \, \Delta\tau(t),    (5)

where 0 < \rho \le 1 is a parameter governing the pheromone decay process, \Delta\tau(t) = 1/C_{\mathrm{best}}, and C_{\mathrm{best}} is the total cost of the best solution obtained from Eq. (1). Then each agent starts from a random resource and successively allocates activities to resources until all resources have been allocated. In the process Agents_generation_and_activity, the agent-decision rule for ant k assigning activity i to resource j is governed by

p(j) = \begin{cases} \arg\max_{i \in \mathrm{allowed}_k(t)} \left\{ [\tau_{ij}(t)]^{\alpha} [\eta_{ij}]^{\beta} \right\} & \text{if } q \le q_0, \\ S & \text{otherwise,} \end{cases}    (6)

where \eta_{ij} is the heuristic information and is set as the highest value of COST(i)·P_ij.
[Fig. 1 is a flow chart of the heuristics: sort the values of COST(i)·P_ij; select the unallocated resource with the highest COST(i)·P_ij and generate the corresponding resource–activity pair in the new chromosome, repeating until the allocations of all activities existing in the chromosome have been modified; then, while unallocated resources remain, randomly choose one and allocate it to the activity with the highest COST(i)·P_ij, until the new chromosome is completely generated.]

Fig. 1. The flow chart of heuristics for the resource allocation problem.
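The two phases of Fig. 1 can be summarized by the following Python sketch; it reflects our reading of the flow chart, and details such as the tie-breaking and the random order in which leftover resources are handled are assumptions, so it should be taken as an illustration rather than the authors' exact procedure (prob[i][j] denotes P_ij and cost[i] denotes COST(i)).

    import random

    def heuristic_repair(chromosome, cost, prob):
        """Two-phase repair of an allocation vector, following Fig. 1."""
        R, A = len(chromosome), len(cost)
        existing = set(chromosome)                     # activities already present
        new_chrom = [None] * R
        used = set()
        # Phase 1: for activities in the current chromosome, assign each one the
        # best still-unused resource, scanning pairs by descending COST(i)*P_ij.
        pairs = sorted(((cost[i] * prob[i][j], i, j)
                        for i in existing for j in range(R)), reverse=True)
        allocated = set()
        for score, i, j in pairs:
            if i not in allocated and j not in used:
                new_chrom[j] = i
                used.add(j)
                allocated.add(i)
        # Phase 2: each remaining resource greedily picks the activity with the
        # highest COST(i)*P_ij, considering activities missing from the chromosome too.
        remaining = [j for j in range(R) if new_chrom[j] is None]
        random.shuffle(remaining)                      # processing order assumed random
        for j in remaining:
            new_chrom[j] = max(range(A), key=lambda i: cost[i] * prob[i][j])
        return new_chrom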
In finding feasible solutions, the agents perform online step-by-step pheromone updates as

\tau_{ij}(t + 1) \leftarrow (1 - w)\tau_{ij}(t) + w \, \Delta\tau_k,    (7)

where 0 < w \le 1 is a constant and \Delta\tau_k is the reciprocal of the cost found by the kth agent. The pheromone update rule in Eq. (7) has the effect of making visited paths less and less attractive if agents do not visit them again. The above steps are repeated until all agents have found feasible solutions. Thereafter, the selection operator is performed to create the population from generation to generation according to the fitness values. Finally, the proposed algorithm is repeated until the stop criterion is satisfied.
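A compact sketch of the two pheromone update rules, Eq. (5) offline and Eq. (7) online, is given below; this is our own illustration, and the dictionary keyed by (activity, resource) pairs as well as the names rho, w and agent_cost are assumptions rather than the authors' code.

    def offline_update(tau, best_allocation, best_cost, rho=0.1):
        """Eq. (5): reinforce only the (activity, resource) pairs used by the best solution."""
        delta = 1.0 / best_cost                      # delta_tau = 1 / C_best
        for j, i in enumerate(best_allocation):      # resource j is allocated to activity i
            tau[(i, j)] = (1.0 - rho) * tau[(i, j)] + rho * delta

    def online_update(tau, i, j, agent_cost, w=0.1):
        """Eq. (7): step-by-step update applied while the kth agent builds its solution."""
        tau[(i, j)] = (1.0 - w) * tau[(i, j)] + w * (1.0 / agent_cost)

    # tau would typically be initialized to a small constant for every pair, e.g.
    # tau = {(i, j): 0.1 for i in range(A) for j in range(R)} for A activities and R resources.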
5. Simulation results

In the simulations, a set of parameters had to be identified. We performed simulations of five scenarios with various values of these parameters; the size of the initial population for the GA and the number of ants were set to 50, with \rho = 0.1 and w = 0.1 for ACO. The tested values were: crossover probability Pc = 0.1, 0.3, 0.6, 0.8, 0.9; mutation probability Pm = 0.05, 0.1, 0.4, 0.9; \alpha = 1, 2, 5; and \beta = 1, 2, 5 for ACO. The results were all similar. We keep the following values as defaults, because the results under these values are fair for all algorithms: the size of the initial population and the number of ants are 50, Pc = 0.6, Pm = 0.05, \alpha = 1, \beta = 2, \rho = 0.1, and w = 0.1.

To evaluate the performance of the proposed algorithm, we compared it with other existing algorithms, including simulated annealing (SA) [13,23,24], GA, ACO, and GA with ACO as local search [25]. SA is based on an analogy to the annealing process of solids, i.e., the process of heating a solid to a high temperature and then cooling it down gradually. With the capability of escaping from local optima by incorporating a probability function for accepting or rejecting new solutions, SA is also widely used for solving combinatorial optimization problems. For the SA approach, an annealing factor of 0.95 is used [13,22,14]. A general local search method is also adopted in the SA, GA, and ACO algorithms [13]. It is noted that, in the approach of GA with ACO as local search [25], ACO is embedded into the genetic algorithm only as a local search process to improve the search efficiency.

First, sets of simulations were conducted on randomly generated resource allocation problems. Experiments were conducted for 10 trials. All simulations used the same randomly generated initial population. The maximum number of generations was set to 8000. The simulation results are listed in Table 1. In Table 1, the best solution is the best obtained over all of the tested algorithms, and the averaged solution is the average of the solutions across the 10 conducted trials.
Table 1
Simulation results for various search algorithms when the maximum number of generations is set as 8000

Resource allocation problem   The best solution   Offset from the best solution (%)
                                                  SA      ACO     GA      GA with ACO       The proposed
                                                                          as local search   algorithm
R = 120, A = 120              169.4978            3.86    2.91    3.04    1.53              1.08
R = 130, A = 140              291.5022            4.83    3.02    3.95    1.78              1.12
R = 150, A = 120              111.3617            5.19    4.18    4.75    2.34              1.85
R = 160, A = 160              248.8451            6.51    4.85    5.26    2.72              2.04
Average                                           5.10    3.74    4.25    2.09              1.52

The bold-faced text indicates the best one among these algorithms.
Table 2
Simulation results for various search algorithms when experiments were stopped after two hours of running

Resource allocation problem   The best solution   Offset from the best solution (%)
                                                  SA      ACO     GA      GA with ACO       The proposed
                                                                          as local search   algorithm
R = 120, A = 120              167.9431            3.59    2.85    2.94    1.47              0.94
R = 130, A = 140              291.5022            4.65    3.16    4.07    1.69              1.03
R = 140, A = 120              102.2743            5.13    4.02    4.54    2.01              1.73
R = 150, A = 120              110.6014            5.17    4.13    4.82    2.19              1.82
R = 120, A = 160              775.1474            5.67    4.37    5.17    2.54              1.91
R = 160, A = 160              248.8451            6.23    4.64    5.23    2.37              1.96
R = 200, A = 220              435.6341            7.59    5.21    6.43    3.46              2.95
R = 230, A = 200              173.9982            8.13    5.31    6.52    3.87              3.18
R = 250, A = 250              317.6258            10.12   6.94    8.07    4.36              3.97
Average                                           6.25    4.51    5.31    2.66              2.17

The bold-faced text indicates the best one among these algorithms.
The percentage offset from the best solution is calculated as: % offset from the best solution = [(averaged solution − best solution)/best solution] × 100. Table 1 illustrates that the results obtained by the proposed algorithm are much better than those of SA, ACO, GA, and GA with ACO as local search: the percentage offset from the best solution for the proposed algorithm is lower than for the other tested algorithms.

Note that the computational time required for one generation differs among the algorithms. Next, we therefore simply stopped the algorithms after a fixed running time, to see which algorithm can find better solutions within a fixed period. If the running time is too short, the results may not be significant; running the algorithms for two hours is simply a convenient choice. Experiments were thus stopped after two hours of running, and the results are listed in Table 2. From Table 2, it is easy to see that the proposed algorithm can still find better solutions, and the percentage offset from the best solution for the proposed algorithm is the lowest among the tested algorithms for all tested problems.
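As a quick check of this formula, the short sketch below (ours) recovers the averaged solution implied by a reported offset, using the first row of Table 1.

    def percent_offset(averaged, best):
        """Percentage offset from the best solution, as defined above."""
        return (averaged - best) / best * 100.0

    # First row of Table 1: best = 169.4978 and the proposed algorithm's offset
    # is 1.08%, which implies an averaged solution of about 169.4978 * 1.0108 ≈ 171.33.
    print(percent_offset(171.33, 169.4978))   # ≈ 1.08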
6. Conclusions

In this paper, a hybrid search algorithm with heuristics for the resource allocation problem is proposed. In the hybrid approach, a genetic algorithm with heuristics is used to perform global exploration among the population, while ant colony optimization is used to perform further exploitation around the best solution at each generation. Because of the complementary properties of genetic algorithms and ant colony optimization, the hybrid approach outperforms the other existing algorithms considered.

References

[1] T. Ibaraki, N. Katoh, Resource Allocation Problems, The MIT Press, Cambridge, MA, 1988.
[2] A.M.H. Bjorndal, A. Caprara, P.I. Cowling, D. Croce, H. Lourenco, F. Malucelli, A.J. Orman, D. Pisinger, C. Rego, J.J. Salazar, Some thoughts on combinatorial optimization, European Journal of Operational Research (1995) 253–270.
[3] P.L. Hammer, P. Hansen, B. Simeone, Roof duality complementation and persistency in quadratic 0–1 optimization, Mathematical Programming 28 (1984) 121–155.
[4] S.P. Lloyd, H.S. Witsenhausen, Weapon allocation is NP-complete, in: IEEE Summer Simulation Conference, Reno, NV, 1986.
[5] D.L. Pepyne et al., A decision aid for theater missile defense, in: Proceedings of the 1997 IEEE International Conference on Evolutionary Computation (ICEC '97), 1997.
[6] William, Meter, F.L. Preston, A Suite of Weapon Assignment Algorithms for a SDI Mid-Course Battle Manager, AT&T Bell Laboratories, 1990.
[7] Z.-J. Lee, S.-F. Su, C.-Y. Lee, A genetic algorithm with domain knowledge for weapon–target assignment problems, Journal of the Chinese Institute of Engineers 25 (3) (2002) 287–295.
[8] Z.-J. Lee, S.-F. Su, C.-Y. Lee, An immunity based ant colony optimization algorithm for solving weapon–target assignment problem, Applied Soft Computing 2 (2002) 39–47.
[9] M. Gen, R. Cheng, Genetic Algorithms and Engineering Design, John Wiley, 1997.
[10] T. Bäck, U. Hammel, H.P. Schwefel, Evolutionary computation: comments on the history and current state, IEEE Transactions on Evolutionary Computation 1 (1) (1997) 1–17.
[11] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag, Berlin, 1994.
[12] W.K. Lai, G.G. Coghill, Channel assignment through evolutionary optimization, IEEE Transactions on Vehicular Technology 45 (1) (1998) 91–96.
[13] Z.-J. Lee, S.-F. Su, C.-Y. Lee, Efficiently solving general weapon–target assignment problem by genetic algorithms with greedy eugenics, IEEE Transactions on Systems, Man and Cybernetics, Part B (2003) 113–121.
[14] C.R. Reeves, Modern Heuristic Techniques for Combinatorial Problems, Blackwell Scientific Publications, Oxford, 1993.
[15] A. Kolen, E. Pesch, Genetic local search in combinatorial optimization, Discrete Applied Mathematics and Combinatorial Operations Research and Computer Science 48 (1994) 273–284.
[16] P. Merz, B. Freisleben, Fitness landscape analysis and memetic algorithms for the quadratic assignment problem, IEEE Transactions on Evolutionary Computation 4 (4) (2000) 337–352.
[17] J. Miller, W. Potter, R. Gandham, C. Lapena, An evaluation of local improvement operators for genetic algorithms, IEEE Transactions on Systems, Man and Cybernetics 23 (5) (1993) 1340–1341.
[18] M. Dorigo, G.D. Caro, Ant colony optimization: a new meta-heuristic, in: Proceedings of the 1999 Congress on Evolutionary Computation, vol. 2, 1999, pp. 1470–1477.
[19] V. Maniezzo, A. Colorni, The ant system applied to the quadratic assignment problem, IEEE Transactions on Knowledge and Data Engineering 11 (1999) 769–778.
[20] T. Stützle, H. Hoos, MAX–MIN ant system and local search for the traveling salesman problem, in: IEEE International Conference on Evolutionary Computation, 1997, pp. 299–314.
[21] E.H.L. Aarts, J.K. Lenstra, Local Search in Combinatorial Optimization, John Wiley & Sons, 1997.
[22] U.K. Chakraborty, D. Laha, M. Chakraborty, A heuristic genetic algorithm for flowshop scheduling, in: Proceedings of the 23rd IEEE International Conference on Information Technology Interfaces, 2001, pp. 313–318.
[23] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, Equation of state calculations by fast computing machines, Journal of Chemical Physics 21 (1953) 1087–1092.
[24] P.J.M. Van Laarhoven, E.H.L. Aarts, Simulated Annealing: Theory and Applications, Kluwer Academic Publishers, 1992.
[25] Z.-J. Lee, W.-L. Lee, A hybrid search algorithm of ant colony optimization and genetic algorithm applied to weapon–target assignment problems, Lecture Notes in Computer Science 2690 (2003) 278–285.