
A Hybrid Backtracking Search Algorithm for Permutation Flow-shop Scheduling Problem

Qun Lin, Liang Gao, Xinyu Li, and Chunjiang Zhang

State Key Lab of Digital Manufacturing Equipment & Technology, Huazhong University of Science and Technology, Luoyu Road 1037#, Wuhan, Hubei, 430074, China

Correspondence should be addressed to Xinyu Li; [email protected]

Abstract

The Permutation Flow-shop Scheduling Problem (PFSP) is an NP-complete problem that arises widely in industrial manufacturing systems, such as the motor, semiconductor and appliance industries. Obtaining optimal schedules for PFSP is therefore very important for these manufacturing systems. Researchers and engineers have paid much attention to this problem, yet the development of more effective and efficient scheduling technologies and methods never ends. In this paper, based on a new evolutionary algorithm, the Backtracking Search Algorithm (BSA), a hybrid BSA (HBSA) is proposed for PFSP with the objective of minimizing the makespan. To make the original BSA suitable for discrete problems, several improvements and supporting techniques are presented, including discrete crossover and mutation strategies, a simulated annealing (SA) mechanism used to avoid premature convergence, and a random insertion local search. Twenty-nine well-known benchmark problems are used to evaluate the performance of the proposed HBSA, and several comparisons between HBSA and other classical algorithms are conducted. The results show the effectiveness of the proposed HBSA.

Keywords: hybrid backtracking search algorithm (HBSA), permutation flow-shop scheduling problem (PFSP), simulated annealing (SA) mechanism, random insertion local search


1. Introduction

Scheduling plays a significant role in production planning and manufacturing systems for maintaining a competitive position [1]. Even a small improvement may bring many benefits, such as saving processing time, enhancing production efficiency and increasing profits, especially for large manufacturing enterprises. PFSP, a typical scheduling problem, exists widely in modern industry and has been extensively discussed both in flow-shop scheduling optimization theory and in engineering applications. Lenstra et al. [2] proved that PFSP with makespan minimization is an NP-complete problem. This NP-complete character, together with its wide range of engineering applications, keeps it a hot spot in both theoretical and engineering research. Moreover, it is still an important benchmark for illustrating the efficiency and effectiveness of newly proposed optimization algorithms, although it has already been widely studied [3]. On the one hand, the global optimal solution should be found; on the other hand, the computing time should be as small as possible. Solving PFSP is therefore a hard and complicated task. This paper focuses on the effectiveness of the proposed algorithm and on finding the global optimal solution.

To solve PFSP, researchers have carried out a great deal of valuable work, which can be classified into three categories [4]. The first category is the exact methods, such as enumeration, branch and bound, cutting plane, and dynamic programming. These exact methods have such high computational complexity and memory usage that they are only suitable for very small-scale PFSPs. The second category is the constructive methods, which include the Gupta method [5], Johnson method [6], Palmer method [7], Campbell-Dudek-Smith (CDS) method [8], Rajendran (RA) method [9], Nawaz-Enscore-Ham (NEH) method [10], and so on. Among them, the NEH and RA methods perform better than the others. These methods design scheduling rules from local information and finally construct a complete solution of PFSP. Even though they can find solutions quickly, their final solutions are almost always local optima rather than global optima. The third category is the metaheuristic algorithms, which are designed according to the laws of nature and accumulated human experience, for example the genetic algorithm (GA) [11-13], simulated annealing (SA) algorithms [14-15] and modified simulated annealing (MSA) [16], the ant colony optimization (ACO) algorithm [17-18], the artificial bee colony (ABC) algorithm [19], the modified evolutionary programming (MEP) algorithm [20], the tabu search (TS) algorithm [21] and particle swarm optimization (PSO) algorithms [22-24]. Through a large number of iterations, operation control and parameter control, this category can obtain better solutions than the other two categories, and such improved heuristic algorithms are currently the most popular methods for PFSP.

However, because each category has its own advantages, a popular approach nowadays is to combine several algorithms into a more powerful algorithm for PFSP. This combination exploits the advantages of each algorithm and has proven to be more effective and efficient in most cases. Liu et al. [25] incorporated a modified NEH method into the random initialization of PSO to generate an initial population with a certain quality and diversity. Rajendran et al. [26] applied the NEH method to search for better solutions near the current local solutions when using the ant colony optimization (ACO) algorithm to solve PFSPs. Wang et al. [27] presented a hybrid GA in which the mutation process was replaced by SA and multiple crossover operators were applied to the offspring. Nearchou et al. [28] integrated a hybrid SA with features borrowed from GA and local search, working from population initialization and new population generation to the final annealing process. Wang et al. [29] added local search operators to the estimation of distribution algorithm (EDA) to enhance its local exploitation. There is no sharp boundary between these combined algorithms and the algorithms of the third category, because the combined algorithms are developed from that category. Such combinations have proven more effective and have been a trend in the last few decades.

Although many efforts have been made to solve PFSP, no algorithm has solved it completely (i.e., obtained the global optimal solution for every PFSP benchmark). This means that it is still necessary to find better ways to solve this problem. In this paper, by hybridizing the population-based evolutionary search ability of BSA with the local search abilities of several methods, we propose a HBSA to solve this problem effectively. BSA is a newly born algorithm that was first proposed by Civicioglu [30] in 2013 for continuous numerical optimization problems. Civicioglu showed its good performance on boundary-constrained benchmark problems compared with particle swarm optimization (PSO), the covariance matrix adaptation evolution strategy (CMAES), artificial bee colony (ABC), adaptive differential evolution (JDE), comprehensive learning particle swarm optimization (CLPSO) and self-adaptive differential evolution (SADE). He also solved three constrained real-world benchmark problems (Antenna, Radar and FM) [31] to illustrate BSA's ability to solve real-world problems. BSA is developed from the Differential Evolution (DE) algorithm [32] and is a kind of evolutionary algorithm (EA). Different from traditional EAs, BSA adds a selection operation before the crossover and mutation operations, and sets up a memory storing a previous generation, from which BSA gains experience when generating the new trial population and the search-direction matrix. The added selection operation gives BSA a faster convergence speed, and the search-direction matrix together with the unique crossover and mutation operations ensures its powerful search ability. Moreover, BSA has a very simple structure and only one control parameter. However, BSA has a big shortcoming: its exploitation ability is very poor, especially for high-dimension problems. As far as we know, there is no other research on BSA in the discrete optimization field in the literature. Considering its novelty, effectiveness, simple structure and good use of experience from former iterations, we try to overcome its shortcoming and modify it to solve PFSP effectively in this paper. HBSA not only inherits the good features of EAs, but also gains experience from former iteration information, and can thus obtain more diversity than other EAs. Moreover, in the proposed hybrid BSA (HBSA), a random insertion local search method is applied to remedy the poor local search ability of BSA. It is therefore believed that HBSA can achieve satisfactory improvements for PFSPs.

The rest of this paper is organized as follows: Section 2 introduces the mathematical model of PFSP and the original BSA. Section 3 presents some improvements to BSA and proposes HBSA for PFSP. Section 4 shows the experimental results and comparisons between HBSA and several other algorithms. Section 5 summarizes the conclusions of this paper.

2. PFSP formulation and original BSA

2.1 PFSP formulation

The PFSP is a typical combinatorial optimization problem which determines the processing sequence of jobs on machines so as to minimize the total makespan, minimize the total flowtime, or satisfy other objectives. The pre-conditions include [33]:

(1) The processing sequence of all jobs on each machine is the same, but is not known in advance;
(2) The processing sequence of each job on all machines is the same, and is known;
(3) Each machine can process at most one job at any time;
(4) Each job can be processed on at most one machine at any time;
(5) Each job must be completed without preemption, and the processing time of each job on each machine is known.

The mathematical model of this problem is as follows [33]:

Decision variables: π = {π1, π2, ..., πn}, where π denotes the processing sequence of the n jobs.

Subject to:

    C(1, 1) = T(1, 1)
    C(j, 1) = C(j-1, 1) + T(j, 1),                        j = 2, 3, ..., n
    C(1, k) = C(1, k-1) + T(1, k),                        k = 2, 3, ..., m
    C(j, k) = max(C(j-1, k), C(j, k-1)) + T(j, k),        j = 2, 3, ..., n; k = 2, 3, ..., m      (1)

where T(j, k) denotes the processing time of job j on machine k, n denotes the number of jobs, m denotes the number of machines, and C(j, k) denotes the completion time of the jth job on the kth machine. Two common objectives are as follows:

(1) Minimizing makespan: Min Cmax = C(n, m); represented by n/m/P/Cmax.
(2) Minimizing total flowtime: Min TFT = Σ_{j=1}^{n} C(j, m); represented by n/m/P/Fmax.

In this paper, the makespan objective (n/m/P/Cmax) is chosen for further discussion. To make this easy to understand, Figure 1 illustrates the details of PFSP schedules and points out the differences between different job sequences. The processing-time matrix T is shown in Eq. (2); the makespan of the first job sequence π1 = {3, 1, 2, 4} is 240, while the makespan of the second job sequence π2 = {2, 3, 1, 4} is 220.

50 25 T 30 15

25 45 25 35

20 30 35 25

20 25 25 30

(2)

Figure 1 Gantt chart of different job sequences
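To make the recursion in Eq. (1) concrete, the following minimal Python sketch (an illustration, not the authors' MATLAB implementation) computes the makespan of a job sequence and reproduces the two values quoted above for the matrix in Eq. (2).

```python
# Minimal sketch (not the authors' MATLAB code): makespan of a job sequence via Eq. (1).
def makespan(seq, T):
    """seq: 0-based job indices in processing order; T[j][k]: time of job j on machine k."""
    m = len(T[0])
    prev = [0] * m                                     # completion times of the previous job
    for job in seq:
        curr = [0] * m
        for k in range(m):
            ready = curr[k - 1] if k > 0 else 0        # C(j, k-1)
            curr[k] = max(ready, prev[k]) + T[job][k]  # Eq. (1)
        prev = curr
    return prev[-1]                                    # C(n, m)

# Processing-time matrix of Eq. (2): rows are jobs 1..4, columns are machines 1..4.
T = [[50, 25, 20, 20],
     [25, 45, 30, 25],
     [30, 25, 35, 25],
     [15, 35, 25, 30]]

print(makespan([2, 0, 1, 3], T))   # job sequence {3, 1, 2, 4} -> 240
print(makespan([1, 2, 0, 3], T))   # job sequence {2, 3, 1, 4} -> 220
```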

2.2 Original BSA

BSA [30] is a new EA consisting of five processes: initialization, selection-I, mutation, crossover and selection-II. The general structure of BSA is shown in Figure 2.

Figure 2 General structure of BSA

1) Initialization

A uniform distribution is used to initialize the population P, as shown in Eq. (3), with i = 1, 2, ..., N and j = 1, 2, ..., D, where N and D are the population size and the problem dimension respectively. low_j and up_j are the lower and upper boundaries of the jth dimension, set according to the range of the variables to be solved.

    P_{i,j} ~ U(low_j, up_j)                                                                      (3)

2) Selection-I

The first selection determines the historical population oldP, which is used to calculate the search direction. oldP is initialized by Eq. (4) before the iterations, and is redefined at the beginning of each iteration by the "if-then" rule in Eq. (5) and the random shuffling function in Eq. (6). oldP remembers the population from a randomly chosen previous generation for use in generating the search-direction matrix, thus taking partial advantage of previous experience when generating the new trial population. The use of oldP is a main difference between BSA and other EAs.

    oldP_{i,j} ~ U(low_j, up_j)                                                                   (4)

    if a < b then oldP := P | a, b ~ U(0, 1)                                                      (5)

    oldP := permuting(oldP)                                                                       (6)

3) Mutation

BSA's mutation process generates the trial population by Eq. (7), in which the F value controls the amplitude of the search-direction matrix (oldP - P), just as in the DE algorithm. For simplicity, F is defined by Eq. (8) in this paper.

    Mutant = P + F * (oldP - P)                                                                   (7)

    F = 5 * randn, where randn ~ N(0, 1)                                                          (8)

4) Crossover

BSA's unique crossover process consists of two main steps. The first step is to define a binary integer-valued matrix (map) of size N*D. One of two different ways is selected to define the map, as shown in Eq. (9) and Eq. (10), in which the mix-rate parameter (mixrate) controls the number of elements that will mutate. The second step is to update the trial population according to the defined map, as shown in Eq. (11). This crossover process is quite different from that of traditional DE, where the crossover is controlled by the crossover rate parameter (Cr), and it makes the crossover easy to realize.

    map_{i, u(1 : mixrate*rand*D)} = 1,  u = permuting(1, 2, 3, ..., D)                           (9)

    map_{i, randi(D)} = 1                                                                         (10)

    Crossover: T = P + (map .* F) .* (oldP - P)                                                   (11)

    T_{i,j} = rand * (up_j - low_j) + low_j,   if T_{i,j} is beyond the boundary                  (12)

Moreover, to avoid individuals of the trial population exceeding the search space, a boundary control mechanism is introduced, as shown in Eq. (12).

5) Selection-II

BSA's second selection updates and records the better solutions and the global optimal solution. If an individual in the trial population T has a better fitness value than the corresponding individual in the original population P, the individual in P is replaced by the individual in T. Moreover, if an individual in the current population P (Pbest) has a better fitness value than the global optimal value (global_minimum), the global optimal value and the global optimal solution are replaced by Pbest's fitness value and Pbest, respectively.

From the above description, it can be seen that BSA is very similar to traditional EAs, especially GA. The most distinctive part is the use of oldP, which exploits the experience of a former iteration to generate the new trial population; this is why the method is called backtracking search. The results show that this idea is useful and effective. Compared with other EAs, BSA has the advantages of a simple structure, fewer control parameters and faster convergence, but it retains some disadvantages, such as poor exploitation ability and a limited application field. To overcome these shortcomings, we make some improvements to BSA itself and introduce other techniques to enhance its abilities and expand its range of application to PFSP in Section 3.
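For readers unfamiliar with the continuous BSA, the following compact Python/NumPy sketch is one possible reading of Eqs. (3)-(12) together with Selection-II; the test function, the mixrate value and the random tie-breaking choices are our own assumptions, not part of [30] or of HBSA.

```python
import numpy as np

def bsa_minimize(f, low, up, N=30, epochs=200, mixrate=1.0, rng=np.random.default_rng(0)):
    """Sketch of the continuous BSA, Eqs. (3)-(12); f maps an (N, D) array to N fitness values."""
    low, up = np.asarray(low, float), np.asarray(up, float)
    D = low.size
    P = rng.uniform(low, up, (N, D))            # Eq. (3): initial population
    oldP = rng.uniform(low, up, (N, D))         # Eq. (4): historical population
    fit = f(P)
    for _ in range(epochs):
        if rng.random() < rng.random():         # Eq. (5): "if a < b then oldP := P"
            oldP = P.copy()
        oldP = oldP[rng.permutation(N)]         # Eq. (6): shuffle oldP
        F = 5 * rng.standard_normal()           # Eq. (8)
        mapM = np.zeros((N, D))                 # crossover map, Eq. (9) or Eq. (10)
        if rng.random() < rng.random():
            for i in range(N):
                u = rng.permutation(D)[: max(1, int(np.ceil(mixrate * rng.random() * D)))]
                mapM[i, u] = 1
        else:
            mapM[np.arange(N), rng.integers(0, D, N)] = 1
        T = P + (mapM * F) * (oldP - P)         # Eqs. (7) and (11): trial population
        out = (T < low) | (T > up)              # Eq. (12): boundary control
        T = np.where(out, rng.uniform(low, up, (N, D)), T)
        fitT = f(T)                             # Selection-II: keep the better individuals
        better = fitT < fit
        P[better], fit[better] = T[better], fitT[better]
    best = int(np.argmin(fit))
    return P[best], fit[best]

# Usage: minimize the sphere function on [-5, 5]^10 (illustrative test problem only).
x_best, f_best = bsa_minimize(lambda X: np.sum(X**2, axis=1), [-5]*10, [5]*10)
```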

3. HBSA for PFSP

In this section, the improvements and supporting techniques that adapt BSA to PFSP are presented. First, by combining the original BSA with these techniques, the flowchart of HBSA is given in Section 3.1. Then the improvements to the original BSA itself, which make BSA suitable for solving PFSP, are introduced in Section 3.2. Finally, to enhance the exploitation ability of the original BSA, the random insertion local search method is explained in detail in Section 3.3.

3.1 Flowchart of the HBSA

HBSA not only takes advantage of BSA's use of historical population information, but also takes advantage of EA-style crossover and mutation strategies to generate the new trial population. To avoid falling into local optima, the SA mechanism of accepting a second-best solution with a certain probability is added to BSA. To make up for the poor exploitation ability of BSA and enrich the local search behavior, a random insertion local search method is adopted to find better solutions. The flowchart of HBSA is illustrated in Figure 3.

Figure 3 Flowchart of HBSA

3.2 Improvements to the original BSA

To improve the quality of the initial population and speed up convergence, a constructive method, NEH [10], is utilized to initialize the population before the iterations. This approach has already been adopted and proven effective in most improved heuristic or hybrid algorithms. The steps of NEH are as follows (a sketch is given after the list):

Step 1) Calculate the total processing time of each job over all machines, and rank the jobs in descending order to generate a sequence π0 containing all n jobs.

Step 2) Select the first two jobs of sequence π0 and evaluate the two possible schedules of these two jobs. Select the sequence with the smaller makespan as the current sequence π1.

Step 3) Take the next (kth) job from sequence π0, k = 3, 4, ..., n. Find the best schedule by inserting it into all k possible positions of sequence π1, and take the best schedule as the new current sequence π1.

Step 4) Repeat Step 3) until sequence π1 includes all n jobs.

Step 5) Replace one individual of the initial population by sequence π1.
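The sketch below is a minimal Python rendering of NEH Steps 1-5; the makespan helper repeats the recursion of Eq. (1), and the tie-breaking (first best position found) is our own assumption.

```python
# Minimal NEH sketch (Steps 1-5); makespan re-implements the recursion of Eq. (1).
def makespan(seq, T):
    m = len(T[0]); c = [0] * m
    for j in seq:
        for k in range(m):
            c[k] = max(c[k], c[k - 1] if k else 0) + T[j][k]
    return c[-1]

def neh(T):
    n = len(T)
    # Step 1: rank jobs by descending total processing time
    order = sorted(range(n), key=lambda j: -sum(T[j]))
    # Step 2: best ordering of the first two jobs
    seq = min([order[:2], order[1::-1]], key=lambda s: makespan(s, T))
    # Steps 3-4: insert each remaining job at its best position
    for job in order[2:]:
        candidates = [seq[:p] + [job] + seq[p:] for p in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(s, T))
    return seq   # Step 5: this sequence replaces one individual of the random initial population
```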

Different from the original BSA, which updates oldP at every iteration by Eq. (5), HBSA updates and records oldP every 20 iterations in this paper, which embodies the core idea of backtracking search. From the mutation and crossover operators of the original BSA in Section 2, it is evident that the original BSA is not suitable for solving discrete problems such as PFSP. To overcome this shortcoming, we change the crossover and mutation operators to traditional permutation-based operators, whose details are shown in Figure 4 and Figure 5 (a sketch is given after the figure captions).

Three kinds of crossover strategies [34], namely one-segment, two-segment and three-segment crossover, are adopted to generate the trial population. We take the two-segment crossover as an example to explain the rule. Parent A and Parent B are the sequences to be crossed. Four positions are randomly generated and ranked: 2, 6, 9, 11. Two segments, the subsequence between the 2nd and 6th positions [2, 10, 7, 4, 3] and the subsequence between the 9th and 11th positions [12, 9, 5], are transmitted to the offspring Child A, as shown in Figure 4. The remaining jobs [8, 11, 6, 13, 1] are re-ordered according to their order in Parent B, becoming [1, 6, 13, 11, 8], and this sequence fills the blank positions of Child A. Child A is therefore [1, 2, 10, 7, 4, 3, 6, 13, 12, 9, 5, 11, 8]. Child B is generated in a similar way. For the one-segment and three-segment crossovers, the only difference is the number of segments transmitted to the offspring; the rule is the same. Parent A comes from the current population, while Parent B comes from the old population (oldP) recorded in a former iteration. Two children (Child A and Child B) are generated, and the one with better fitness is selected as an individual of the trial population.

Three kinds of mutation strategies, named SWAP, INSERT and INVERSE, are adopted to maintain the diversity of the trial population. SWAP swaps the genes at two different positions of the same chromosome. INSERT inserts the gene at one position before the gene at another position. INVERSE reverses all genes between two different positions of a chromosome, as shown in Figure 5.

Figure 4 Crossover operators for PFSP

Figure 5 Mutate operators for PFSP
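The following Python sketch illustrates the segment crossover (written here for a configurable number of segments) and the SWAP, INSERT and INVERSE moves; the cut-point convention (half-open slices) and the random choices are our own reading of Figures 4 and 5, not the authors' exact implementation.

```python
import random

def segment_crossover(A, B, segments=2):
    """Copy `segments` random slices of parent A; fill the remaining positions in parent B's order."""
    n = len(A)
    cuts = sorted(random.sample(range(n + 1), 2 * segments))     # e.g. cut positions 2, 6, 9, 11
    keep = [False] * n
    for s in range(segments):
        for p in range(cuts[2 * s], cuts[2 * s + 1]):
            keep[p] = True
    kept = {A[p] for p in range(n) if keep[p]}
    fillers = iter(job for job in B if job not in kept)          # remaining jobs, in B's order
    return [A[p] if keep[p] else next(fillers) for p in range(n)]

def swap(s):
    i, j = random.sample(range(len(s)), 2)
    s = list(s); s[i], s[j] = s[j], s[i]
    return s

def insert(s):
    i, j = random.sample(range(len(s)), 2)
    s = list(s); gene = s.pop(i)
    s.insert(j if j < i else j - 1, gene)        # place the gene just before the job originally at j
    return s

def inverse(s):
    i, j = sorted(random.sample(range(len(s)), 2))
    return s[:i] + s[i:j + 1][::-1] + s[j + 1:]
```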

Another improvement to BSA is to update individuals of the original population P according to a probability, as in simulated annealing (SA) [35]. A new individual is accepted into the population according to the test in Eq. (13), where T is a control parameter (temperature) that is continuously decreased by Eq. (14), and Δ is the fitness difference between the original individual and the new individual. This mechanism lets BSA probabilistically escape from local optima, and the search process can be controlled by the cooling schedule.

    min{1, exp(-Δ / T)} ≥ random[0, 1]                                                            (13)

    T_{k+1} = λ * T_k,   λ ∈ (0, 1]                                                               (14)
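A small Python sketch of the acceptance test and cooling schedule of Eqs. (13)-(14), assuming a minimization objective with Δ = f(new) - f(old); the surrounding loop is only a placeholder.

```python
import math, random

def sa_accept(delta, temp):
    """Accept with probability min{1, exp(-delta/temp)} -- Eq. (13); delta = f(new) - f(old)."""
    if delta <= 0:
        return True                      # improving moves are always accepted
    return math.exp(-delta / temp) >= random.random()

# Cooling schedule of Eq. (14): T_{k+1} = lambda * T_k, lambda in (0, 1].
temp, lam = 30.0, 0.95                   # T0 and annealing rate used in Section 4
for _ in range(300):
    # ... build a trial individual, compute delta = new_makespan - old_makespan,
    # and replace the original individual if sa_accept(delta, temp) ...
    temp *= lam
```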

3.3 Random insertion local search

To overcome BSA's very poor exploitation ability, a local search method is a good choice. For permutation-based combinatorial optimization problems, random insert, random swap and random inverse are the three commonly used neighborhood structures in local search. Schiavinotto et al. [36] have shown that the distances among solutions explored by random insert and random interchange are much smaller than those of random swap, which means that random insert and random interchange have higher efficiency and stronger search ability. Chu [37] and Kouvelis [38] have shown that the results of random insert are better than those of random interchange under the same conditions. Thus the following random insert local search is designed in this paper to find better solutions near the current local optimal solution (a sketch is given after the list).

Step 1) Given a sequence π0 that includes n jobs.

Step 2) Randomly select two positions u and v, where u ≠ v and u, v = 1, 2, ..., n. Generate a new sequence π1 by inserting the job at position u before the job at position v in sequence π0. Calculate the fitness of sequence π1.

Step 3) Iteration begins: set x = 1 : n-1 and y = x+1 : n, where x and y denote different positions in sequence π1.

Step 4) Generate a new sequence π2 by inserting the job at position y before the job at position x in sequence π1. Generate another new sequence π3 by inserting the job at position x before the job at position y in sequence π1.

Step 5) Replace sequence π2 with sequence π3 if the fitness of sequence π3 is less than that of sequence π2. Replace sequence π1 with sequence π2 if the fitness of sequence π2 is less than that of sequence π1.

Step 6) Iteration ends.

Step 7) Compare the fitness of sequence π1 with that of sequence π0, and replace sequence π0 with sequence π1 if the fitness of sequence π1 is less than that of sequence π0.
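The following Python sketch is our reading of Steps 1-7; fitness is the makespan to be minimized, and the makespan helper repeats Eq. (1).

```python
import random

def makespan(seq, T):
    m = len(T[0]); c = [0] * m
    for j in seq:
        for k in range(m):
            c[k] = max(c[k], c[k - 1] if k else 0) + T[j][k]
    return c[-1]

def move(seq, src, dst):
    """Insert the job at position src before the job at position dst."""
    s = list(seq); job = s.pop(src)
    s.insert(dst if dst < src else dst - 1, job)
    return s

def random_insertion_local_search(pi0, T):
    n = len(pi0)
    u, v = random.sample(range(n), 2)
    pi1 = move(pi0, u, v)                             # Step 2: accepted regardless of fitness
    for x in range(n - 1):                            # Steps 3-6: scan all position pairs
        for y in range(x + 1, n):
            pi2 = move(pi1, y, x)                     # Step 4
            pi3 = move(pi1, x, y)
            if makespan(pi3, T) < makespan(pi2, T):   # Step 5
                pi2 = pi3
            if makespan(pi2, T) < makespan(pi1, T):
                pi1 = pi2
    # Step 7: keep the improved sequence only if it beats the original one
    return pi1 if makespan(pi1, T) < makespan(pi0, T) else list(pi0)
```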

The main advantages of the random insert local search are: (1) The search is performed in a relatively compact space, which helps improve search efficiency. (2) The insertion in Step 2) is always accepted regardless of its fitness, which makes it possible to avoid falling into a local minimum. (3) All possible insertion positions of a given sequence are generated by positions x and y, but an insertion is accepted only when its fitness is better than that of the original sequence. This not only lets the search cover a wide range, but also ensures the accuracy and completeness of the search. The same sequence may yield different optimized sequences through the random insert local search, which keeps the diversity of the local search results.

4. Numerical experiments

To test the performance of the proposed HBSA, computational experiments are carried out on 29 well-known PFSP benchmark problems from the OR-Library [39]. The first 8 problems were proposed by Carlier and are denoted car1 to car8. The other 21 problems were proposed by Reeves and are denoted rec01, rec03, rec05, ..., rec41. These benchmark problems have been widely used to illustrate the abilities of different algorithms for PFSP. HBSA is coded in Matlab 7.0, and the simulation is executed on a PC with a Pentium(R) Dual-Core 3.0 GHz processor and 4.0 GB RAM. The parameters of HBSA are set as follows: population size P = 40, initial temperature T0 = 30, crossover probability CR = 0.9, mutation probability Mu = 0.2, annealing rate λ = 0.95, maximum iteration epoch = 300, maximum consecutive steps L = 60. Each instance is run 20 times independently for the comparisons.

To compare the merits of the algorithms, several indicators are chosen as reference standards, as shown in Eqs. (15)-(17). C* is the optimal makespan or the best lower bound known so far. BRE is the best relative error to C*, ARE is the average relative error to C*, and WRE is the worst relative error to C*. T denotes the average computing time for each problem in seconds, denoted T(s).

    BRE = (min(solutions) - C*) / C* × 100%                                                       (15)

    ARE = (average(solutions) - C*) / C* × 100%                                                   (16)

    WRE = (max(solutions) - C*) / C* × 100%                                                       (17)
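For concreteness, a short Python sketch of Eqs. (15)-(17); the sample run values in the usage lines are made up for illustration only.

```python
def relative_errors(solutions, c_star):
    """BRE, ARE and WRE of Eqs. (15)-(17), in percent, relative to the best known value C*."""
    bre = (min(solutions) - c_star) / c_star * 100
    are = (sum(solutions) / len(solutions) - c_star) / c_star * 100
    wre = (max(solutions) - c_star) / c_star * 100
    return bre, are, wre

# Illustrative (made-up) makespans of 20 runs for an instance with C* = 1247.
runs = [1247, 1249, 1251, 1247, 1250] * 4
print(relative_errors(runs, 1247))
```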

The remainder of this section is organized as follows. The comparison of HBSA without any local search method (denoted HBSA_NOLS) and HBSA with the random insertion local search (denoted HBSA) is presented first. Then, comparisons of HBSA with improved GAs for PFSP are presented. Next, HBSA is compared with some other classical optimization algorithms. Finally, the effects of the main parameters are investigated.

4.1 Comparison of HBSA_NOLS and HBSA

The statistical performance over 20 independent runs is listed in Table 1. HBSA_NOLS denotes HBSA without any local search method, while HBSA denotes HBSA with the random insertion local search. From Table 1, it can be seen that HBSA_NOLS can obtain the global optimal solution for the low-dimension PFSPs, which shows that the improvements to the original BSA are successful. However, it can hardly obtain the global optimal solution for the high-dimension PFSPs, such as Rec01, Rec03, ..., Rec41, which shows the exploitation limitation of BSA. After adding the random insertion local search, the results of HBSA are obviously improved. Even for some high-dimension PFSPs, such as Rec01, Rec07, Rec17, Rec33 and Rec35, HBSA obtains the global optimal solution while HBSA_NOLS cannot, and the gaps between the results for the remaining PFSPs and the global optimal solutions are narrowed considerably. So the random insertion local search plays an important role in improving the solutions.

It can also be seen from Table 1 that the computing time of HBSA with local search increases slowly when the problem dimension is not too high, and increases rapidly for high-dimension problems such as Rec37, Rec39 and Rec41. The computing time of HBSA is larger than that of HBSA_NOLS, which is due to the local search method. In other words, some computing time is sacrificed to gain better solutions, and it is worthwhile.

Table 1 Comparisons of HBSA_NOLS and HBSA

4.2 Comparisons of HBSA and improved GAs for PFSP

Since the crossover and mutation operators in HBSA are the basic operators of GA, HBSA is compared with several improved GAs in this part to illustrate the differences and the effectiveness of HBSA. The data for the simple genetic algorithm (SGA) and the improved genetic algorithm (IGA) come from Shop Scheduling with Genetic Algorithms by Wang [40]. The hybrid genetic algorithm (HGA) was proposed by Zheng et al. [27], and the hybrid quantum-inspired genetic algorithm (HQGA) was proposed by Wang et al. [41].

From Table 2, it can be seen that the results of HBSA are clearly better than those of SGA. Except for Rec05, the BRE values of HBSA are better than or equal to those of IGA, HGA and HQGA. Except for Car3, the ARE values of HBSA are better than or equal to those of IGA. Except for Car3, Rec03, Rec23 and Rec31, the ARE values of HBSA are better than or equal to those of HGA. Except for Car3, Rec23 and Rec29, the ARE values of HBSA are better than or equal to those of HQGA. Therefore, from the comparisons, it can be concluded that the results of HBSA are better than those of SGA, IGA, HGA and HQGA for most PFSP cases. The computing times shown in Table 1 can be used as a reference for readers; because of different platforms and computer configurations, it is hard to compare the computing time of HBSA with that of the other algorithms accurately. The computing time of HBSA for each problem is similar to that of the other algorithms, so no further discussion is given here. HBSA not only utilizes the good operators of GA, but also utilizes the experience from former iterations, which is the essence of BSA. The results and comparisons in this section strongly support this.

Table 2 Comparison of HBSA and improved GAs

4.3 Comparisons of HBSA and some other classical optimization algorithms for PFSP

To further show the effectiveness of HBSA, we carried out comparisons of HBSA with several other classical algorithms: Osman's simulated annealing (OSA) [14], particle swarm optimization with variable neighborhood search (PSOVNS) [23], the PSO-based memetic algorithm (PSOMA) [42] and the hybrid differential evolution method (HDE) [39]. OSA, first proposed by Osman, is a simulated annealing algorithm for obtaining approximate solutions of permutation flow-shop scheduling. PSOVNS was first proposed by Tasgetiren et al., who embedded a very efficient local search method, variable neighborhood search (VNS), into the PSO algorithm to solve PFSP with the objectives of minimizing makespan and total flow time. In PSOMA, both PSO-based search operators and some special local search operators were designed to balance exploration and exploitation. HDE applies the parallel evolution mechanism of DE to perform effective exploration (global search) and adopts a problem-dependent local search method to adequately perform exploitation (local search).

From Table 3, it can be seen that the results of HBSA are obviously better than those of OSA and PSOVNS. Compared to PSOMA, the BRE values of HBSA are better except for Rec37, and the ARE values of HBSA are better than or equal to those of PSOMA except for Car3, Rec15 and Rec33. Compared to HDE, HBSA obtains better results for some PFSP cases and worse results for others; overall, HDE is slightly better than the proposed HBSA. From the comparisons with these other classical optimization algorithms, it is obvious that hybrid algorithms can obtain better solutions than single algorithms, which is why hybrid algorithms receive more attention and have become the trend. The proposed HBSA hybridizes several techniques to make up for the shortcomings of the original BSA, and the results show that HBSA is successful, effective and competitive.

Table 3 Comparisons of HBSA and some other classical optimization algorithms

4.4 Effects of parameters

To investigate the effects of the population size P, the crossover probability CR and the mutation probability Mu, a series of experiments is conducted. We take problem Rec25 as an example to test the effects of these parameters; the results of 20 independent runs for each setting are recorded and analyzed. From Table 4, it can be seen that the computing time increases with the population size. Population sizes from 10 to 50 perform much better than the others, so a small-to-medium population size of 40 is recommended in this paper. From Table 5, it can be seen that the crossover probability CR has almost no effect on the computing time, but it strongly affects the BRE values. That is to say, CR really ensures the diversity of the population and makes a great contribution to finding better solutions. CR values from 0.7 to 1.0 are better according to Table 5, and 0.9 is recommended in this paper. From Table 6, it can be seen that, with the increase of the mutation probability, the computing time increases, while the BRE and WRE values generally become worse. Taking computing time, BRE and WRE into consideration, Mu from 0.1 to 0.3 is better, so Mu = 0.2 is recommended in this paper.

Table 4 Effect of population size P

Table 5 Effect of Crossover operator CR

Table 6 Effect of Mutate Operator Mu

5. Conclusions and future work

In this paper, by combining the population-based evolutionary search ability of BSA with local search to balance exploration and exploitation, a HBSA is proposed for solving PFSPs with the objective of minimizing the makespan. The improvements include discrete crossover and mutation strategies, an SA-based mechanism to avoid falling into local optima, and fast local search. Thus BSA, which had previously been used only for continuous problems, is successfully applied to a discrete problem. The results of HBSA on 29 benchmark problems show the feasibility of the improvements and the effectiveness of HBSA for PFSP. From the numerical experiments, it can be seen that HBSA obtains the global optimal solution for some PFSPs and narrows the gap to the optimal solutions for the others. These results provide useful suggestions for solving discrete problems with BSA. However, HBSA can still be improved to find the global optimal solutions, and some other topics could be considered in future work: (1) More effective local search methods. Other local search methods, such as the NEH-based local search [43], the SA-based local search combining an adaptive meta-Lamarckian learning strategy with a pairwise-based local search [42], and the referenced local search (RLS) [44], were tried by us, and the local search methods used in this paper proved to be more effective and suitable for BSA; nevertheless, other local search methods may lead to better performance. (2) Applications of BSA. This paper proposes a HBSA for PFSPs, which are discrete problems. Given the good performance of BSA reported by Civicioglu [30], BSA has potential for other engineering applications and problems, for instance constrained problems. (3) Other promising algorithms should also be tried on permutation flow-shop problems.

References [1] Li Xiangtao, Yin Minghao. A hybrid cuckoo search via lévy flights for the permutation flow shop scheduling problem. International Journal of Production Research 2013; 51(16): 4732-4754. [2] Lenstra Jan Karel, Rinnooy Kan AHG, Brucker Peter. Complexity of machine scheduling problems. Annals of discrete mathematics 1977; 1: 343-362. [3] Rajkumar R, Shahabudeen P. An improved genetic algorithm for the flowshop scheduling problem. International Journal of Production Research 2009; 47(1): 233-249. [4] Qiu Chang Hua, Wang Can. An Immune Particle Swarm Optimization Algorithm for Solving Permutation Flowshop Problem. Key Engineering Materials 2010; 419: 133-136. [5] Gupta Jatinder ND, Shanthikumar J George, Szwarc Wlodzimierz. Generating improved dominance conditions for the flowshop problem. Computers & operations research 1987; 14(1): 41-45. [6] Garey Michael R, Johnson David S, Sethi Ravi. The complexity of flowshop and jobshop scheduling. Mathematics of operations research 1976; 1(2): 117-129. [7] Palmer DS. Sequencing jobs through a multi-stage process in the minimum total time--a quick method of obtaining a near optimum. OR 1965: 101-107. [8] Campbell Herbert G, Dudek Richard A, Smith Milton L. A heuristic algorithm for the n job, m machine sequencing problem. Management science 1970; 16(10): B-630-B-637. [9] Gangadharan Rajesh, Rajendran Chandrasekharan. Heuristic algorithms for scheduling in the no-wait flowshop. International Journal of Production Economics 1993; 32(3): 285-290. [10] Nawaz Muhammad, Enscore Jr E Emory, Ham Inyong. A heuristic algorithm for the m machine and n job flow-shop sequencing problem. Omega 1983; 11(1): 91-95. [11] Iyer Srikanth K, Saxena Barkha. Improved genetic algorithm for the permutation flowshop scheduling problem. Computers & Operations Research 2004; 31(4): 593-606. [12] Reeves Colin R. A genetic algorithm for flowshop sequencing. Computers & operations research 1995; 22(1): 5-13. [13] Zhang Yi, Li Xiaoping, Wang Qian. Hybrid genetic algorithm for permutation flowshop scheduling problems with total flowtime minimization. European Journal of Operational Research 2009; 196(3): 869-876. [14] Osman IH, Potts CN. Simulated annealing for permutation flow-shop scheduling. Omega 1989; 17(6): 551-557. [15] Tian Peng, Ma Jian, Zhang Dong-Mo. Application of the simulated annealing algorithm to the combinatorial optimisation problem with permutation property: An investigation of generation mechanism. European Journal of Operational Research 1999; 118(1): 81-94. [16] Seyed-Alagheband SA, Davoudpour H, Doulabi SH Hashemi, Khatibi M. Using a modified simulated annealing algorithm to minimize makespan in a permutation flow-shop scheduling problem with job deterioration. Proceedings of the world congress on engineering and computer science; 2009;

2009. p. 20-22. [17] Ying Kuo-Ching, Liao Ching-Jong. An ant colony system for permutation flow-shop sequencing. Computers & Operations Research 2004; 31(5): 791-801. [18] Stützle Thomas. An ant approach to the flow shop problem. Proceedings of the 6th European Congress on Intelligent Techniques & Soft Computing (EUFIT’98); 1998; 1998. p. 1560-1564. [19] Tasgetiren M Fatih, Pan Quan-Ke, Suganthan Ponnuthurai N, Chen Angela HL. A discrete artificial bee colony algorithm for the total flowtime minimization in permutation flow shops. Information Sciences 2011; 181(16): 3459-3475. [20] Wang Ling, Zheng Da-Zhong. A modified evolutionary programming for flow shop scheduling. The International Journal of Advanced Manufacturing Technology 2003; 22(7-8): 522-527. [21] Grabowski Józef, Wodecki Mieczyslaw. A very fast tabu search algorithm for the permutation flow shop problem with makespan criterion. Computers & Operations Research 2004; 31(11): 1891-1909. [22] Tasgetiren M Fatih, Sevkli Mehmet, Liang Yun-Chia, Gencyilmaz Gunes. Particle swarm optimization algorithm for permutation flowshop sequencing problem.

Ant Colony Optimization and Swarm Intelligence: Springer; 2004: 382-389. [23] Tasgetiren M Fatih, Liang Yun-Chia, Sevkli Mehmet, Gencyilmaz Gunes. A particle swarm optimization algorithm for makespan and total flowtime minimization in the permutation flowshop sequencing problem. European Journal of Operational Research 2007; 177(3): 1930-1947. [24] Pan Quan-Ke, Fatih Tasgetiren M, Liang Yun-Chia. A discrete particle swarm optimization algorithm for the no-wait flowshop scheduling problem. Computers & Operations Research 2008; 35(9): 2807-2839. [25] Liu Bo, Wang Ling, Jin Yi-Hui. An effective hybrid PSO-based algorithm for flow shop scheduling with limited buffers. Computers & Operations Research 2008; 35(9): 2791-2806. [26] Rajendran Chandrasekharan, Ziegler Hans. Ant-colony algorithms for permutation flowshop scheduling to minimize makespan/total flowtime of jobs. European Journal of Operational Research 2004; 155(2): 426-438. [27] Zheng D-Z, Wang L. An effective hybrid heuristic for flow shop scheduling. The International Journal of Advanced Manufacturing Technology 2003; 21(1): 38-44. [28] Nearchou Andreas C. A novel metaheuristic approach for the flow shop scheduling problem. Engineering Applications of Artificial Intelligence 2004; 17(3): 289-300. [29] Wang Sheng-yao, Wang Ling, Liu Min, Xu Ye. An effective estimation of distribution algorithm for solving the distributed permutation flow-shop scheduling problem. International Journal of Production Economics 2013; 145(1): 387-396. [30] Civicioglu Pinar. Backtracking search optimization algorithm for numerical optimization problems. Applied Mathematics and Computation 2013; 219(15): 8121-8144. [31] Das Swagatam, Suganthan PN. Problem definitions and evaluation criteria for CEC 2011

competition on testing evolutionary algorithms on real world optimization problems. Jadavpur Univ, Nanyang Technol Univ, Kolkata, India 2010. [32] Storn Rainer, Price Kenneth. Differential evolution-a simple and efficient adaptive scheme for global optimization over continuous spaces: ICSI Berkeley; 1995. [33] Liu Yan-Feng, Liu San-Yang. A hybrid discrete artificial bee colony algorithm for permutation flowshop scheduling problem. Applied Soft Computing 2013; 13(3): 1459-1463. [34] Lian Zhigang, Gu Xingsheng, Jiao Bin. A novel particle swarm optimization algorithm for permutation flow-shop scheduling to minimize makespan. Chaos, Solitons & Fractals 2008; 35(5): 851-861. [35] Brooks SP, Morgan BJT. Optimization using simulated annealing. The Statistician 1995: 241-257. [36] Chandra Ramesh. On n/1/F dynamic deterministic problems. Naval Research Logistics Quarterly 1979; 26(3): 537-544. [37] Chu Chengbin. Efficient heuristics to minimize total flow time with release dates. Operations Research Letters 1992; 12(5): 321-330. [38] Kouvelis Panos, Daniels Richard L, Vairaktarakis George. Robust scheduling of a two-machine flow shop with uncertain processing times. Iie Transactions 2000; 32(5): 421-432. [39] Qian Bin, Wang Ling, Hu Rong, Wang Wan-Liang, Huang De-Xian, Wang Xiong. A hybrid differential evolution method for permutation flow-shop scheduling. The International Journal of Advanced Manufacturing Technology 2008; 38(7-8): 757-777. [40] Wang Ling. Shop scheduling with genetic algorithms. Tsinghua University & Springer Press, Beijing 2003. [41] Wang Ling, Wu Hao, Tang Fang, Zheng Da-Zhong. A hybrid quantum-inspired genetic algorithm for flow shop scheduling. Advances in Intelligent Computing: Springer; 2005: 636-644. [42] Liu Bo, Wang Ling, Jin Yi-Hui. An effective PSO-based memetic algorithm for flow shop scheduling. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on 2007; 37(1): 18-27. [43] Aldowaisan Tariq, Allahverdi Ali. New heuristics for no-wait flowshops to minimize makespan. Computers & Operations Research 2003; 30(8): 1219-1231. [44] Pan Quan-Ke, Tasgetiren Mehmet Fatih, Liang Yun-Chia. A discrete differential evolution algorithm for the permutation flowshop scheduling problem. Computers & Industrial Engineering 2008; 55(4): 795-816.

Acknowledgements

The authors would like to thank the editor and the anonymous referees whose comments helped greatly in improving this paper. This research work is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 51375004 and the Independent Innovation Research Fund of Huazhong University of Science and Technology under Grant No. 2014XJGH010. This research is also carried out as part of the GREENet project, supported by a Marie Curie International Research Staff Exchange Scheme Fellowship within the 7th European Community Framework Programme under Grant No. 269122, and the relevant program supported by the Department of International Cooperation, Ministry of Science and Technology of the People's Republic of China under Grant No. 1208. The paper reflects only the authors' views; the European Union is not liable for any use that may be made of the information contained therein.

Table 1  Comparisons of HBSA_NOLS and HBSA
(BRE, ARE and WRE in %; T(s) is the average computing time in seconds)

                          HBSA_NOLS                        HBSA
Pro.    n×m     C*        BRE     ARE     WRE     T(s)     BRE     ARE     WRE     T(s)
Car1    11×5    7038      0       0       0       0.89     0       0       0       1.26
Car2    13×4    7166      0       0       0       0.94     0       0       0       1.42
Car3    12×5    7312      0       0.48    1.20    1.27     0       0.06    1.19    1.53
Car4    14×4    8003      0       0       0       1.43     0       0       0       1.55
Car5    10×6    7720      0       0.06    0.61    1.02     0       0       0       1.11
Car6    8×9     8505      0       0       0       0.81     0       0       0       0.95
Car7    7×7     6590      0       0       0       0.75     0       0       0       0.77
Car8    8×8     8366      0       0       0       0.81     0       0       0       0.85
Rec01   20×5    1247      0.16    0.16    0.16    1.88     0       0.14    0.16    3.30
Rec03   20×5    1109      0       0.16    0.18    1.77     0       0.08    0.18    3.66
Rec05   20×5    1242      0.24    0.26    0.40    1.66     0.24    0.24    0.24    3.23
Rec07   20×10   1566      0.38    1.22    1.92    2.90     0       0.46    1.15    3.47
Rec09   20×10   1537      0       0.57    2.41    2.22     0       0.07    0.65    4.10
Rec11   20×10   1431      0       0.10    0.49    2.60     0       0       0       3.77
Rec13   20×15   1930      0.26    1.08    1.35    2.20     0.10    0.53    1.14    4.65
Rec15   20×15   1950      0.67    1.05    1.28    2.24     0.05    0.64    1.18    4.96
Rec17   20×15   1902      1.05    1.76    3.10    2.14     0       1.00    2.16    4.72
Rec19   30×10   2093      1.19    1.86    2.48    4.14     0.29    0.81    1.29    12.41
Rec21   30×10   2017      1.64    1.94    3.32    3.63     0.69    1.50    1.83    9.41
Rec23   30×10   2011      0.94    1.95    3.08    3.24     0.45    1.28    3.08    11.59
Rec25   30×15   2513      1.47    2.54    3.78    3.86     0.40    1.29    2.43    14.99
Rec27   30×15   2373      1.01    1.96    2.91    4.18     0.25    1.27    2.57    14.67
Rec29   30×15   2287      1.66    2.84    4.90    4.36     0.57    1.42    2.97    13.22
Rec31   50×10   3045      2.59    3.43    5.22    8.80     0.43    1.91    2.66    42.29
Rec33   50×10   3114      0.83    1.56    3.40    4.13     0       0.59    1.28    36.60
Rec35   50×10   3277      0.09    0.34    1.01    4.95     0       0       0       32.52
Rec37   75×20   4951      4.99    6.04    7.41    11.90    1.92    2.93    4.20    223.2
Rec39   75×20   5087      4.19    4.96    6.29    11.25    0.90    1.88    3.38    224.6
Rec41   75×20   4960      4.74    6.08    7.22    12.11    1.69    2.72    3.55    202.6

Table 2  Comparison of HBSA and improved GAs
(BRE and ARE in %)

        SGA             IGA             HGA             HQGA            HBSA
Pro.    BRE     ARE     BRE     ARE     BRE     ARE     BRE     ARE     BRE     ARE
Car1    0       0.27    0       0       0       0       0       0       0       0
Car2    0       4.07    0       0       0       0       0       0       0       0
Car3    1.19    2.95    0       0       0       0       0       0       0       0.06
Car4    0       2.36    0       0       0       0       0       0       0       0
Car5    0       1.46    0       0       0       0       0       0       0       0
Car6    0       1.86    0       0.08    0       0.04    0       0       0       0
Car7    0       1.57    0       0.24    0       0       0       0       0       0
Car8    0       2.59    0       0       0       0       0       0       0       0
Rec01   2.81    6.96    0       0.14    0       0.14    0       0.14    0       0.14
Rec03   1.89    4.45    0       0.14    0       0.09    0       0.17    0       0.08
Rec05   1.93    3.82    0       0.31    0       0.29    0.24    0.34    0.24    0.24
Rec07   1.15    5.31    0       0.59    0       0.69    0       1.02    0       0.46
Rec09   3.12    4.73    0       0.79    0       0.64    0       0.64    0       0.07
Rec11   3.91    7.39    0       1.48    0       1.10    0       0.67    0       0
Rec13   3.68    5.97    0.62    1.52    0.36    1.68    0.16    1.07    0.10    0.53
Rec15   2.21    4.29    0.46    1.28    0.56    1.12    0.05    0.97    0.05    0.64
Rec17   3.15    6.08    1.73    2.69    0.95    2.32    0.63    1.68    0       1.00
Rec19   4.01    6.07    1.09    1.58    0.62    1.32    0.29    1.43    0.29    0.81
Rec21   3.42    6.07    1.44    1.52    1.44    1.57    1.44    1.63    0.69    1.50
Rec23   3.83    7.46    0.45    0.99    0.40    0.87    0.50    1.20    0.45    1.28
Rec25   4.42    7.20    1.63    2.74    1.27    2.54    0.77    1.87    0.40    1.29
Rec27   4.93    6.85    0.80    2.11    1.10    1.83    0.97    1.83    0.25    1.27
Rec29   6.21    8.48    1.53    2.59    1.40    2.70    0.35    1.97    0.57    1.42
Rec31   6.17    8.02    0.49    1.62    0.43    1.34    1.05    2.50    0.43    1.91
Rec33   3.08    5.12    0.13    0.75    0       0.78    0.83    0.91    0       0.59
Rec35   1.46    3.30    0       0       0       0       0       0.15    0       0
Rec37   7.89    10.07   2.26    3.49    3.75    4.90    2.52    4.33    1.92    2.93
Rec39   7.32    8.51    1.14    1.93    2.20    2.79    1.63    2.71    0.90    1.88
Rec41   8.51    10.03   3.27    3.78    3.64    4.92    3.13    4.15    1.69    2.72

Table 3  Comparisons of HBSA and some other classical optimization algorithms
(BRE and ARE in %)

        OSA             PSOVNS          PSOMA           HDE             HBSA
Pro.    BRE     ARE     BRE     ARE     BRE     ARE     BRE     ARE     BRE     ARE
Car1    0       0       0       0       0       0       0       0       0       0
Car2    0       0       0       0       0       0       0       0       0       0
Car3    0       0.63    0       0.42    0       0       0       0       0       0.06
Car4    0       0       0       0       0       0       0       0       0       0
Car5    0       0.80    0       0.04    0       0.02    0       0       0       0
Car6    0       2.09    0       0.08    0       0.11    0       0       0       0
Car7    0       1.48    0       0       0       0       0       0       0       0
Car8    0       2.30    0       0       0       0       0       0       0       0
Rec01   0.16    0.16    0.16    0.17    0       0.14    0       0.14    0       0.14
Rec03   0       0.19    0       0.16    0       0.19    0       0       0       0.08
Rec05   0.24    0.59    0.24    0.25    0.24    0.25    0.24    0.24    0.24    0.24
Rec07   0       0.43    0.70    1.10    0       0.99    0       0.23    0       0.46
Rec09   0       0.69    0       0.65    0       0.62    0       0       0       0.07
Rec11   0       2.22    0.07    1.15    0       0.13    0       0       0       0
Rec13   0.31    1.79    1.04    1.79    0.26    0.89    0.10    0.30    0.10    0.53
Rec15   0.72    1.57    0.77    1.49    0.05    0.63    0       0.31    0.05    0.64
Rec17   1.84    3.80    1.00    2.45    0       1.33    0       1.18    0       1.00
Rec19   0.29    0.80    1.53    2.10    0.43    1.31    0.29    0.56    0.29    0.81
Rec21   1.44    1.48    1.49    1.67    1.44    1.60    0.20    1.41    0.69    1.50
Rec23   0.50    0.85    1.34    2.11    0.60    1.31    0.45    0.48    0.45    1.28
Rec25   1.19    1.94    2.39    3.17    0.84    2.09    0.48    1.49    0.40    1.29
Rec27   0.84    1.85    1.73    2.46    1.35    1.61    0.84    1.29    0.25    1.27
Rec29   0.61    2.88    1.97    3.11    1.44    1.89    0.31    0.79    0.57    1.42
Rec31   0.30    1.33    2.59    3.23    1.51    2.25    0.30    0.82    0.43    1.91
Rec33   0.13    0.73    0.84    1.01    0       0.65    0       0.43    0       0.59
Rec35   0       0       0       0.04    0       0       0       0       0       0
Rec37   2.00    2.75    4.38    4.95    2.10    3.54    1.82    2.73    1.92    2.93
Rec39   0.77    1.24    2.85    3.37    1.55    2.43    0.98    1.54    0.90    1.88
Rec41   1.73    2.73    4.17    4.87    2.64    3.68    1.67    2.65    1.69    2.72

Table 4  Effect of population size P (HBSA on problem Rec25)

P       BRE(%)  ARE(%)  WRE(%)  T(s)
10      0.80    1.72    2.63    15.15
20      0.60    1.56    2.51    17.92
30      0.56    1.38    2.27    19.29
40      0.40    1.03    1.67    15.31
50      0.76    1.47    1.83    15.62
60      0.56    1.19    2.27    25.75
70      0.92    1.93    2.75    28.19
80      0.96    1.66    2.11    30.82
90      0.76    2.25    4.18    34.70
100     0.68    1.30    2.59    39.52

Table 5  Effect of crossover probability CR (HBSA on problem Rec25)

CR      BRE(%)  ARE(%)  WRE(%)  T(s)
0       1.11    1.56    2.67    15.53
0.1     0.84    1.37    2.87    15.35
0.2     1.11    1.72    2.67    19.08
0.3     0.88    1.52    2.47    19.49
0.4     0.68    1.82    2.38    16.21
0.5     0.84    1.54    2.51    15.73
0.6     0.84    1.58    2.63    20.79
0.7     0.68    1.38    1.87    19.96
0.8     0.52    1.48    2.07    16.69
0.9     0.68    1.42    1.85    16.18
1.0     0.52    1.38    1.87    16.06

Table 6  Effect of mutation probability Mu (HBSA on problem Rec25)

Mu      BRE(%)  ARE(%)  WRE(%)  T(s)
0       1.27    2.04    3.10    14.67
0.1     0.52    1.20    1.59    15.08
0.2     0.40    1.26    2.03    15.35
0.3     0.68    1.32    1.91    20.17
0.4     0.44    1.55    2.23    27.56
0.5     0.40    1.15    2.07    31.16
0.6     0.68    1.63    2.55    25.24
0.7     0.68    1.27    2.11    25.35
0.8     0.44    1.50    1.95    31.81
0.9     0.96    1.38    2.63    25.11
1.0     0.96    1.38    2.63    26.97

Highlights

- It is the first time that BSA is applied to solve discrete problems.
- The proposed HBSA combines methods in BSA and methods in the SA algorithm.
- The random insertion local search method adopted by HBSA is effective.
- The effectiveness of the proposed HBSA has been proven from statistical analysis.