Applied Soft Computing 12 (2012) 1755–1764

An approach using particle swarm optimization and bottleneck heuristic to solve hybrid flow shop scheduling problem

Ching-Jong Liao, Evi Tjandradjaja, Tsui-Ping Chung

Department of Industrial Management, National Taiwan University of Science and Technology, Taipei, Taiwan

Article history: Received 30 January 2011; Received in revised form 7 December 2011; Accepted 9 January 2012; Available online 8 February 2012

Keywords: Hybrid flow shop; Particle swarm optimization; Bottleneck heuristic; Simulated annealing; Scheduling

Abstract

Hybrid flow shops (HFS) are common manufacturing environments in many industries, such as the glass, steel, paper and textile industries. In this paper, we present a particle swarm optimization (PSO) algorithm for the HFS scheduling problem with the minimum makespan objective. The main contribution of this paper is to develop a new approach hybridizing PSO with a bottleneck heuristic to fully exploit the bottleneck stage, and with simulated annealing to help escape from local optima. The proposed PSO algorithm is tested on the benchmark problems provided by Carlier and Néron. Experimental results show that the proposed algorithm outperforms all the compared algorithms in solving the HFS problem.

1. Introduction

Production scheduling is a decision-making process that plays an important role in most manufacturing and production systems. Effective scheduling can lead to improvements in throughput, customer satisfaction, inventory costs, utilization of bottleneck resources, and other performance measures. In this paper, we consider a hybrid flow shop (HFS) scheduling problem with the minimum makespan objective. HFS systems are more commonly seen than traditional flow shops in real industries, e.g., in the glass, steel, paper and textile industries. In such systems, machines are arranged into several stages in series, each of which has one or more identical machines in parallel. A job has to pass through all stages and must be processed by exactly one machine at every stage.

Many different approaches have been proposed to solve the HFS problem, such as exact solution methods, heuristics and metaheuristics. One of the exact solution methods most often used for the HFS problem is the branch and bound approach. Brah and Hunsucker [1] proposed such a branch and bound algorithm, and Portmann et al. [2] presented an improvement based on the introduction of a genetic algorithm (GA) into the algorithm. Néron et al. [3] proposed a procedure based on energetic reasoning and global operations that enhances the efficiency of branch and bound procedures. Riane


et al. [4] presented another branch and bound approach for a three-stage HFS.

Several heuristics have been developed for the HFS problem. Gupta [5] developed a heuristic for finding a minimum makespan schedule in a special case for which there is only one machine at the second stage. Kahraman et al. [6] presented an effective parallel greedy heuristic to solve a HFS with multiprocessor tasks. Lin and Liao [7] presented a case study of a two-stage HFS in a label manufacturing company to minimize the total weighted maximum tardiness. Gupta and Tunç [8] developed four heuristics for a two-stage HFS with separable setup and removal times to minimize the makespan. Also, Heydari and Fakhrzad [9] proposed a heuristic for a HFS to minimize the sum of the earliness and tardiness costs.

In recent years, metaheuristics have become a popular approach to solve the HFS scheduling problem. Janiak et al. [10] applied simulated annealing (SA) and tabu search (TS) to solve the HFS with a cost-related objective. Abiri et al. [11] proposed a TS algorithm to solve the HFS with sequence-dependent setup times. Genetic algorithms were applied by many authors for solving the HFS with the objective of minimizing the makespan [12–16]. The benchmark problems generated by Carlier and Néron [17] were used as test problems by many metaheuristics, such as the artificial immune system (AIS) [18,19], genetic algorithm [16] and ant colony optimization (ACO) [20,21].

Particle swarm optimization (PSO), inspired by the flocking behavior of birds, is an emerging population-based metaheuristic. In the past few years, PSO has been successfully applied to a large number of combinatorial optimization problems, such as the traveling salesman problem [22] and the placement problem


in field programmable gate arrays (FPGA) [23]. In particular, PSO has been applied successfully to scheduling problems such as flow shop [24–27] and open shop [28] scheduling. In this paper, we will present a PSO algorithm combined with a bottleneck heuristic and a SA algorithm for the HFS with minimum makespan objective. The developed PSO algorithm will be tested on the benchmark problems of Carlier and Néron [17]. The rest of the paper is organized as follows. Section 2 introduces the HFS scheduling problem, PSO and SA. The proposed PSO, as well as the mathematical formulation of HFS, bottleneck heuristic and local search, is presented in Section 3. The experimental results tested on the benchmark problems are provided in Section 4. Finally, conclusions and future research are given in Section 5.

2. Problem formulation of HFS

2.1. Hybrid flow shop

Hybrid flow shops (HFS) are common manufacturing environments in many industries, such as the glass, steel, paper and textile industries. In a HFS, machines are arranged into k stages in series. At each stage s (s = 1, ..., k), there are ms identical parallel machines, where ms ≥ 2 in at least one stage. Job j (j = 1, ..., n) has to be processed on any one machine at each stage, and a machine can process only one job at a time. Preemption is not allowed and there is unlimited intermediate storage between two successive stages. The scheduling problem is to choose a machine at each stage for each job and determine the sequence of jobs on each machine so as to minimize the makespan, or maximum completion time. The HFS problem has been proved to be NP-hard by Gupta [5] for the minimum makespan objective in a two-stage problem, even in the case for which there is only one machine at a stage.

2.2. Mathematical formulation

This subsection will introduce the mathematical formulation of the HFS problem, which requires the following notation:

j: job index
s: stage index
i: machine index
Sjs: starting time of job j at stage s
Pjs: processing time of job j at stage s
Fjs: finishing time of job j at stage s
ms: number of machines at stage s
L: a large constant
Xjis: a binary variable equal to 1 if job j is assigned to machine i at stage s; 0 otherwise
Yjis: a binary variable equal to 1 if job j precedes job i at stage s; 0 otherwise

Using the above notation, Néron et al. [3] formulated the problem as an integer program as follows:

Minimize Cmax

subject to

Cmax ≥ Fjs, for s = 1, ..., k, j = 1, ..., n   (1)

Fjs = Sjs + Pjs, for s = 1, ..., k, j = 1, ..., n   (2)

Σ_{i=1}^{ms} Xjis = 1, for s = 1, ..., k, j = 1, ..., n   (3)

Fjs ≤ Sj(s+1), for s = 1, ..., k − 1, j = 1, ..., n   (4)

Sxs ≥ Fys − L·Yxys, for all pairs of jobs (x, y), s = 1, ..., k   (5)

Xjis ∈ {0, 1}, Yjis ∈ {0, 1}, for all j = 1, ..., n, i = 1, ..., ms and s = 1, ..., k   (6)

The objective is to minimize the makespan, or maximum completion time (Cmax). Constraints (1) and (2) are used to define the makespan. Constraint (3) ensures that each job is processed by exactly one machine at each stage. Constraint (4) ensures that each job can be started at the current stage only after it has been completed at the preceding stage. Constraints (4) and (5) together require that each machine process only one job at a time, so that the starting time of the current job is no earlier than the finishing time of the preceding job on the same machine. Constraint (6) defines the binary variables. All the decision variables are integers, so the model is an integer programming formulation.
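As a concrete reading of the model, the short sketch below (our own illustration; the array conventions S[s][j], F[s][j], m[s][j] and p[s][j] are not from the paper) checks a candidate schedule against constraints (2), (4) and (5):

```java
/** A minimal sketch that checks a candidate schedule against constraints
 *  (2), (4) and (5) of the integer program. S[s][j], F[s][j]: start/finish
 *  of job j at stage s; m[s][j]: its assigned machine; p[s][j]: its
 *  processing time. These conventions are ours, not the paper's. */
class ScheduleChecker {
    static boolean feasible(int[][] S, int[][] F, int[][] m, int[][] p) {
        int k = S.length, n = S[0].length;
        for (int s = 0; s < k; s++) {
            for (int j = 0; j < n; j++) {
                if (F[s][j] != S[s][j] + p[s][j]) return false;       // constraint (2)
                if (s + 1 < k && F[s][j] > S[s + 1][j]) return false; // constraint (4)
                for (int i = 0; i < n; i++) {                         // constraint (5):
                    if (i != j && m[s][i] == m[s][j]                  // no overlap of two
                        && S[s][i] < F[s][j] && S[s][j] < F[s][i])    // jobs on one machine
                        return false;
                }
            }
        }
        return true;
    }
}
```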

3. Proposed algorithms

This section will first present each component of the proposed PSO algorithm, including the pure PSO, the bottleneck heuristic, the local search, the restart procedure and SA. The procedure of the proposed algorithm will then be provided. There are several assumptions that are commonly made regarding this problem:

• Jobs are independent and available to be processed at time zero.
• The travel time between consecutive stages and the machine setup time are included in the processing time.
• The processing time of jobs at each stage is known and fixed.
• No interruption is allowed when processing a job, i.e., preemption is not allowed.
• There is unlimited intermediate storage between two successive stages.
• Machines are continuously available for processing.

3.1. Particle swarm optimization

This subsection will introduce the pure PSO algorithm, developed by Kennedy and Eberhart [29] for the optimization of continuous non-linear functions. PSO was inspired by the motion of a flock of birds searching for food. During the search, each bird, called a particle, adjusts its searching direction according to two factors: its own best previous experience (pbest) and the best experience of all other members (gbest).

Mathematically, assume that the search space is D-dimensional. Let Xi^t = (xi1^t, xi2^t, xi3^t, ..., xiD^t) be the i-th particle, a D-dimensional vector treated as a potential solution that explores the search space through a rate of position change called the velocity, denoted Vi^t = (vi1^t, vi2^t, vi3^t, ..., viD^t). Let Pi^t = (pi1^t, pi2^t, pi3^t, ..., piD^t) be the best position of particle i obtained up to iteration t, and Pg^t = (pg1^t, pg2^t, pg3^t, ..., pgD^t) be the global best in the population at iteration t. The basic procedure for implementing PSO is described as follows:

1. Create a population of particles with random positions and velocities in the search space.
2. For each particle, evaluate the desired optimization fitness function and compare the evaluated fitness with its pbest. If the current particle is better than pbest, then set pbest to the current particle.
3. Update particle velocities according to the following equation:

vid^t = w·vid^(t−1) + c1·rand(·)·(pid^t − xid^t) + c2·rand(·)·(pgd^t − xid^t)   (7)

where c1 is the cognition learning factor, c2 is the social learning factor, rand(·) are random numbers uniformly distributed in U(0, 1), and w is the inertia weight.


4. Particles are moved to their new positions according to the following equation:

xid^(t+1) = xid^t + vid^t   (8)

5. Stop the algorithm if the stopping criterion is satisfied; return to Step 2 otherwise.

The velocity update can improve the diversification of the search. To ensure that the velocity does not lead the particles to move beyond the boundaries, a maximum velocity (Vmax) is set to limit the velocity range; any velocity tending to exceed it is brought back to it [30]. An inertia weight w is used to balance between global and local searches when updating the velocity in Eq. (7). Swarm population sizes ranging from 10 to 30 are the most common, and it has been observed that PSO requires a smaller population than is generally used in genetic algorithms to reach high quality solutions [31].
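The basic continuous update of Eqs. (7) and (8) can be sketched as follows; this is a minimal illustration with our own naming, not the authors' implementation:

```java
import java.util.Random;

/** A minimal sketch of the continuous PSO update in Eqs. (7) and (8);
 *  class and parameter names are illustrative, not from the paper. */
class ContinuousPsoUpdate {
    static final Random RNG = new Random();

    /** One velocity-and-position step for a single particle. */
    static void step(double[] x, double[] v, double[] pbest, double[] gbest,
                     double w, double c1, double c2, double vMax) {
        for (int d = 0; d < x.length; d++) {
            // Eq. (7): inertia + cognition + social terms
            v[d] = w * v[d]
                 + c1 * RNG.nextDouble() * (pbest[d] - x[d])
                 + c2 * RNG.nextDouble() * (gbest[d] - x[d]);
            // clamp to the maximum velocity Vmax mentioned in the text
            v[d] = Math.max(-vMax, Math.min(vMax, v[d]));
            // Eq. (8): move the particle
            x[d] += v[d];
        }
    }
}
```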

3.1.1. Solution representation

A solution is simply represented by a string of numbers consisting of a permutation of the n jobs, denoted by (1, 2, ..., n) [18]. To decode the solution for a specific problem, the jobs are assigned by priority to the first available machine. To illustrate the decoding, consider a simple example of a HFS problem with 5 jobs and 2 stages, where both stages consist of 2 identical machines in parallel. As depicted in Fig. 1, the solution (2, 3, 1, 4, 5) gives the schedule for the two machines in stage 1, where jobs are assigned in sequence to the first available machine. The schedule in stage 2 is arranged as soon as jobs are completed at the preceding stage.

Fig. 1. Decoding the solution (2,3,1,4,5).
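A possible implementation of this decoding, assuming a processing-time array p[s][j] and our own naming throughout, is sketched below; it returns the makespan of the decoded schedule:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** A sketch of the decoding of Section 3.1.1 (names are ours, not the
 *  paper's): at stage 1 jobs follow the particle's permutation; at each
 *  later stage they are served as soon as the preceding stage finishes,
 *  always on the first available machine. p[s][j] = processing time. */
class PermutationDecoder {
    static int makespan(int[] sequence, int[][] p, int[] machines) {
        int stages = p.length;
        int[] ready = new int[sequence.length];   // release time of each job
        List<Integer> order = new ArrayList<>();
        for (int j : sequence) order.add(j);
        int cmax = 0;
        for (int s = 0; s < stages; s++) {
            int[] free = new int[machines[s]];    // next free time per machine
            for (int job : order) {
                int m = 0;                        // pick the first available machine
                for (int i = 1; i < free.length; i++) if (free[i] < free[m]) m = i;
                int start = Math.max(free[m], ready[job]);
                int finish = start + p[s][job];
                free[m] = finish;
                ready[job] = finish;              // release time for the next stage
                cmax = Math.max(cmax, finish);
            }
            // at the next stage, serve jobs in order of completion at this stage
            order.sort(Comparator.comparingInt(j -> ready[j]));
        }
        return cmax;
    }
}
```

The re-sorting after each stage mirrors the rule in the text that later stages are arranged as soon as jobs are completed at the preceding stage.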

3.1.2. Update of particle

The position of a particle refers to a possible solution of the function to be optimized. The position of the particle sequence is updated according to the following equation [27]:

Xi^t = c2 ⊗ F3(c1 ⊗ F2(w ⊗ F1(Xi^(t−1)), Pi^(t−1)), G^(t−1))   (9)

Instead of using the fixed coefficients w, c1 and c2 as in Eq. (7), it is possible to improve the solution quality by involving operations such as crossover and mutation. The update equation consists of three components. The first component is Ai^t = w ⊗ F1(Xi^(t−1)), which represents the velocity of the particle; F1 represents the mutation operator (see Section 3.1.4), applied with probability w. The second component is Bi^t = c1 ⊗ F2(Ai^t, Pi^(t−1)), which represents the "cognition" part of the particle; F2 represents the crossover operator (see Section 3.1.3), applied with probability c1, where Ai^t and Pi^(t−1) are the first and second parents for the crossover, respectively. The third component is Xi^t = c2 ⊗ F3(Bi^t, G^(t−1)), which represents the "social" part of the particle; F3 represents the crossover operator (see Section 3.1.3), applied with probability c2, where Bi^t and G^(t−1) are the first and second parents for the crossover, respectively.

For example, let Xi^(t−1) be the particle at iteration t − 1 with the sequence (5, 6, 2, 3, 4, 1); let r1 = 0.08, r2 = 0.75 and r3 = 0.5 be random numbers; let w = 0.1 be the mutation rate and c1 = c2 = 0.5 the crossover rates; and let Pi^(t−1) = (1, 4, 3, 2, 6, 5) be pbest and G^(t−1) = (5, 6, 2, 3, 4, 1) be gbest at iteration t − 1. The position of particle Xi^(t−1) is updated using Eq. (9) as follows:

• For the first component, the mutation operation is applied because r1 ≤ w: Xi^(t−1) = (5, 6, 2, 3, 4, 1) → swap mutation of jobs 3 and 1 → Ai^t = (5, 6, 2, 1, 4, 3).
• For the second component, the crossover operation is not applied because r2 > c1: Ai^t = (5, 6, 2, 1, 4, 3), Pi^(t−1) = (1, 4, 3, 2, 6, 5) → Bi^t = (5, 6, 2, 1, 4, 3).
• For the third component, the crossover operation is applied because r3 ≤ c2: Bi^t = (5, 6, 2, 1, 4, 3), G^(t−1) = (5, 6, 2, 3, 4, 1) → Xi^t = (2, 5, 6, 3, 1, 4).

The final particle after updating the position is Xi^t = (2, 5, 6, 3, 1, 4).

3.1.3. Crossover operator

Pan et al. [27] proposed a new crossover operator, named the PTL crossover. The PTL crossover is used here because it has the advantage of generating a different offspring even from two identical parents. The PTL crossover works as follows (see Fig. 2):

Step 1: Choose two different points randomly from the first parent.
Step 2: Copy the block of jobs determined by the two cut points from the first parent. This block is moved to either the right or the left corner of the offspring.
Step 3: Fill the empty positions of the offspring with the remaining jobs, in the order they appear in the second parent.

Fig. 2. Illustration of PTL crossover.

3.1.4. Mutation operators

In the proposed algorithm, two types of mutation procedures are used to sequence jobs, and each is chosen with a 50% probability. A sketch of these operators, together with the Eq. (9) composition, is given after their descriptions below.

Insert mutation. Given a sequence q, let x and y be two positions in the sequence. A neighbor of q is obtained by inserting the job at position x into position y. The insert mutation has a wider search space, so it can help construct a good sequence in the early steps, when the solution is still far from a good one.

Swap mutation. Given a sequence q, let x and y be two positions in the sequence. A neighbor of q is obtained by interchanging the jobs at positions x and y. The swap mutation is useful when there is already a good solution and only small changes are required to obtain a better one.
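The following sketch (our naming, written under the operator definitions above and not taken from the paper) implements the PTL crossover, the two mutations and the Eq. (9) composition:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** A sketch of the update operators of Sections 3.1.2-3.1.4. */
class DiscreteUpdate {
    static final Random RNG = new Random();

    /** PTL crossover: a random block of the first parent is pushed to one
     *  corner of the offspring; the gaps are filled in second-parent order. */
    static int[] ptl(int[] p1, int[] p2) {
        int n = p1.length;
        int a = RNG.nextInt(n), b = RNG.nextInt(n);
        while (b == a) b = RNG.nextInt(n);               // two different cut points
        int lo = Math.min(a, b), hi = Math.max(a, b);
        List<Integer> block = new ArrayList<>();
        for (int k = lo; k <= hi; k++) block.add(p1[k]);
        List<Integer> rest = new ArrayList<>();
        for (int g : p2) if (!block.contains(g)) rest.add(g);
        List<Integer> child = new ArrayList<>();
        if (RNG.nextBoolean()) { child.addAll(block); child.addAll(rest); } // left corner
        else                   { child.addAll(rest);  child.addAll(block); } // right corner
        return child.stream().mapToInt(Integer::intValue).toArray();
    }

    /** One of the two mutations of Section 3.1.4, each chosen with prob. 0.5. */
    static int[] mutate(int[] q) {
        int[] r = q.clone();
        int x = RNG.nextInt(r.length), y = RNG.nextInt(r.length);
        if (RNG.nextBoolean()) {                          // swap mutation
            int t = r[x]; r[x] = r[y]; r[y] = t;
        } else {                                          // insert mutation: job at x -> y
            int job = r[x];
            List<Integer> l = new ArrayList<>();
            for (int g : r) l.add(g);
            l.remove(x);
            l.add(Math.min(y, l.size()), job);
            for (int k = 0; k < r.length; k++) r[k] = l.get(k);
        }
        return r;
    }

    /** Eq. (9): X = c2 (x) F3(c1 (x) F2(w (x) F1(X), pbest), gbest). */
    static int[] update(int[] x, int[] pbest, int[] gbest,
                        double w, double c1, double c2) {
        int[] a = RNG.nextDouble() <= w  ? mutate(x)     : x;  // velocity part
        int[] b = RNG.nextDouble() <= c1 ? ptl(a, pbest) : a;  // cognition part
        return   RNG.nextDouble() <= c2 ? ptl(b, gbest)  : b;  // social part
    }
}
```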

3.2. Simulated annealing

SA is a simple local search method introduced by Kirkpatrick et al. [32]. The SA algorithm starts with an initial solution and moves around its neighbors to generate a new solution. The initial temperature t0 must be high enough to allow free moves. A move is accepted if the objective value is better, but a poor solution is also accepted with some probability controlled by a temperature parameter T. The following is pseudocode for the SA algorithm (for a minimization problem):

  Select an initial solution s0; initial temperature t0
  Select a temperature reduction function (cooling schedule) T(k) = t0 × α^k
  Set temperature t = t0
  Outer: While temperature > final temperature
    Inner: While iteration limit niter is not reached
      Select s as a neighbor of s0
      δ = f(s) − f(s0)
      If δ < 0
        s0 = s
      Else
        Generate r ~ U(0, 1)
        If r < exp(−δ/t) then s0 = s
    End Inner
    t = T(k)
  End Outer
  Return s0

In the annealing process, the temperature gradually decreases according to T(k) = t0 × α^k, which is called the cooling schedule. Here α is a constant smaller than (but close to) 1. The cooling schedule is applied repeatedly until the final temperature is reached. The temperature reduction of the cooling schedule is important: if it is too fast, the search will most likely end in a local optimum. At every temperature, an inner loop allows enough iterations for the solution to stabilize at that temperature.
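A runnable version of the above pseudocode, written as a generic routine with our own naming, might look as follows:

```java
import java.util.Random;
import java.util.function.Function;
import java.util.function.UnaryOperator;

/** A runnable sketch of the SA pseudocode above; the generic solution
 *  type and all names are ours, not the paper's. */
class SimulatedAnnealing<S> {
    private final Random rng = new Random();

    S minimize(S s0, Function<S, Double> f, UnaryOperator<S> neighbor,
               double t0, double tFinal, double alpha, int nIter) {
        S current = s0;
        double t = t0;
        while (t > tFinal) {                          // outer loop: cooling
            for (int i = 0; i < nIter; i++) {         // inner loop at temperature t
                S candidate = neighbor.apply(current);
                double delta = f.apply(candidate) - f.apply(current);
                // accept improvements, and worse moves with prob. exp(-delta/t)
                if (delta < 0 || rng.nextDouble() < Math.exp(-delta / t)) {
                    current = candidate;
                }
            }
            t *= alpha;                               // cooling schedule T(k) = t0 * alpha^k
        }
        return current;
    }
}
```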



3.3. Bottleneck heuristic

The initial swarm population is constructed from a given number of particles, called the swarm size. These particles are randomly generated except one, which is obtained by the proposed bottleneck heuristic. A bottleneck is a phenomenon by which the performance or capacity of an entire system is severely limited by a single component. The bottleneck heuristic procedure is given as follows [33]:

Step 1: Identify the bottleneck stage
• Calculate the flow ratio FRs = (Σ_{j=1}^{n} Pjs)/ms between the total processing time and the number of available machines at each stage s.
• Select the stage with the maximum flow ratio.
• Calculate the total processing time of each job over the stages before the bottleneck stage and denote it by Rj.
• Calculate the difference between the sum of the flow ratios over all stages and the job's total processing time at the stages after the bottleneck stage, and denote it by Dj.

Step 2: Sequence the bottleneck stage
• Rank jobs in increasing order of Rj. If there is more than one job with the same Rj, then rank them in increasing order of Dj. Break remaining ties by selecting the job with the smaller processing time.
• Schedule jobs on the machines according to the preceding ranking.

Step 3: Sequence the non-bottleneck stages
• Stages before the bottleneck stage: jobs are scheduled according to Dj, Rj and the processing time at that stage (as detailed in Step 2).
• Stages after the bottleneck stage: jobs are scheduled as soon as they are completed at the preceding stage.

A sketch of the bottleneck identification in Step 1 is given after the worked example below.

To illustrate the bottleneck heuristic, consider a simple example of a HFS problem with 4 jobs and 3 stages, where the first and the third stages consist of 1 machine and the second stage consists of 2 identical machines in parallel. The processing times are given in Table 1.

Table 1
Processing times for an example.

Job | Stage 1 (M1) | Stage 2 (M1) | Stage 2 (M2) | Stage 3 (M1)
1 | 5 | 10 | 10 | 6
2 | 2 | 13 | 13 | 3
3 | 3 | 14 | 14 | 2
4 | 6 | 15 | 15 | 5
FR | 16 | 26 | 26 | 16

In the first step, we apply the procedure to calculate the flow ratio of each stage (see the last row of Table 1). It can be observed that the second stage is the bottleneck stage. After that, we calculate the release time Rj and the delivery time Dj at the bottleneck stage: Rj = (5, 2, 3, 6) and, since the flow ratios sum to 16 + 26 + 16 = 58, Dj = 58 − Pj3 = (52, 55, 56, 53). Now, we schedule the jobs according to Rj, Dj and the processing time at the bottleneck stage and obtain the results shown in Table 2. In the next step, the jobs at the first stage are scheduled in increasing order of Dj, Rj and the processing time at that stage; the resulting sequence is given in Table 3. Finally, we obtain the job sequences at the first and second stages and calculate the completion time of each job up to the bottleneck stage. The jobs at the third stage are scheduled as soon as they are completed at the second stage. The final schedule is given in Table 4.
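The bottleneck identification in Step 1 can be sketched as follows (our naming; p[s][j] is the processing time of job j at stage s):

```java
/** A sketch of Step 1 of the bottleneck heuristic: compute the flow ratio
 *  FRs = (sum over j of Pjs) / ms of every stage and pick the largest. */
class BottleneckFinder {
    static int bottleneckStage(int[][] p, int[] machines) {
        int best = 0;
        double bestRatio = -1;
        for (int s = 0; s < p.length; s++) {
            int total = 0;
            for (int t : p[s]) total += t;            // total processing time at stage s
            double fr = (double) total / machines[s]; // flow ratio FRs
            if (fr > bestRatio) { bestRatio = fr; best = s; }
        }
        return best;                                  // stage with maximum flow ratio
    }
}
```

For the data of Table 1 this gives flow ratios (16, 26, 16), so the second stage is returned as the bottleneck.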

Table 2
Schedule for the bottleneck stage (stage 2).

Job | Rank | Rj | Dj | Pj2 | M1 [s, c] | M2 [s, c] | T
1 | [3] | 5 | 52 | 10 | [15, 25] | – | 15
2 | [1] | 2 | 55 | 13 | [2, 15] | – | 2
3 | [2] | 3 | 56 | 14 | – | [3, 17] | 3
4 | [4] | 6 | 53 | 15 | – | [17, 32] | 17

Table 3
Schedule for stage 1.

Job | Rank | Rj | Dj | Pj1 | M1 [s, c]
1 | [3] | 0 | 15 | 5 | [5, 10]
2 | [1] | 0 | 2 | 2 | [0, 2]
3 | [2] | 0 | 3 | 3 | [2, 5]
4 | [4] | 0 | 17 | 6 | [10, 16]

Table 4
Schedule for stage 3.

Job | Rank | Rj | Dj | Pj3 | M1 [s, c]
1 | [3] | 25 | 33 | 6 | [25, 31]
2 | [1] | 15 | 43 | 3 | [15, 18]
3 | [2] | 19 | 39 | 2 | [19, 21]
4 | [4] | 34 | 24 | 5 | [34, 39]

Final sequence:
Stage 1 – M1 → [2] [3] [1] [4].
Stage 2 – M1 → [2] [1]; M2 → [3] [4].
Stage 3 – M1 → [2] [3] [1] [4].

3.4. Local search

A local search is incorporated into the proposed PSO algorithm to improve its performance. The basic idea is as follows. The stages from stage 1 up to the bottleneck stage mostly have the same sequence, because jobs were ranked in increasing order of Dj, Rj and processing time. This may generate more machine idle time, because each job at every stage has a different processing time but was ranked first by Dj and Rj. Thus, we can randomly change (mutate) the particle position at the preceding stages for several iterations to obtain a better particle position (job sequence). The best particle is selected as the solution particle at that stage. Two types of neighborhoods are employed in our local search:

• Swap mutation: swap any two positions.
• Shift mutation: swap the position with its left or right neighbor.

As an illustration, consider the bottleneck stage (i.e., stage 3) in Fig. 3. According to the bottleneck heuristic, we sequence the particle position at stage 2 by ranking Dj, Rj and the processing times at stage 3, and obtain the sequence (7, 6, 5, 3, 2, 1, 4) with Cmax = 63 in Fig. 3. With several mutations, we obtain a better sequence (7, 6, 5, 3, 2, 4, 1) with Cmax = 62 in Fig. 4. The procedure is repeated down to stage 1.

Fig. 3. Gantt chart for the illustrated example (without local search).

Fig. 4. Gantt chart for the illustrated example (with local search).

3.5. Restart procedure

This is an additional procedure to avoid premature convergence, in which the SA algorithm is incorporated to avoid being trapped in local optima. At each iteration, we store the minimum makespan and increment a counter value if the minimum makespan has not changed in the next iteration. When a preset restart value is reached, we apply the following restart procedure:

Step 1: Sort the particles in the population in increasing order of fitness value (Cmax).
Step 2: Choose the first 10% of the particles and optimize them by using SA in the following way:
• The solution is moved around the neighborhood by insert or shift mutation.
• Calculate the fitness value of the new particle.
• Acceptance–rejection: a solution is accepted if the fitness value is better, but a poor solution is allowed to be accepted with the following probability:

exp(−(fitness value after movement − fitness value before movement)/current temperature)   (10)

• Perform the cooling schedule T(k) = t0 × α^k.
• Repeat the procedure until the final temperature is reached.
Step 3: Randomly choose new particles to replace the 90% worst particles in the population.
Step 4: Calculate the fitness values to obtain pbest and gbest, and set the counter value to 0.

PSO has the advantage of obtaining good results on most problems, but it has the disadvantage of being trapped in local optima [34]. SA, in contrast, has a strong ability to avoid the problem of local minima [34,35]. Therefore, we apply SA to optimize the 10% best particles in the above procedure to avoid being trapped in local minima.

3.6. Proposed PSO algorithm

After discussing each component of the proposed PSO algorithm, we can now present the proposed PSO algorithm in the following steps (a simplified driver wiring the steps together is sketched after the list):

Step 1: (Initialization) Randomly generate a set of feasible particles, except one that is obtained by the proposed bottleneck heuristic.
Step 2: (Fitness) Evaluate the fitness of each particle solution in the population.
Step 3: (Optimize) Optimize the particles using SA.
Step 4: (Find) Find pbest and gbest from the solutions.
Step 5: (Update) Calculate the particle update position in PSO by Eq. (9) for the particle at the bottleneck stage.
Step 6: (Arrange) Find the particle position at the preceding and succeeding stages using the bottleneck heuristic (Section 3.3).
Step 7: (Search) Perform local search to find a better particle position (Section 3.4).
Step 8: (Arrange) Find the particle position at the succeeding stages using the bottleneck heuristic, scheduling each job as soon as it is completed at the preceding stage (Section 3.3).
Step 9: (Fitness) Evaluate the fitness of each particle solution in the population.
Step 10: (Find) Find pbest and gbest from the solutions.
Step 11: (Restart) If the minimum makespan has not changed by the time the restart value is reached, apply the restart procedure (Section 3.5).
Step 12: (Fitness) Evaluate the fitness of each solution particle in the population.
Step 13: (Find) Find pbest and gbest from the solutions.
Step 14: (Termination) Stop the algorithm if the stopping criterion is satisfied; return to Step 5 otherwise.
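To show how the steps fit together, here is a much-simplified driver (our sketch, reusing the PermutationDecoder and DiscreteUpdate sketches above; the SA optimization, bottleneck-heuristic seeding, stage-wise local search and restart procedure of Steps 1, 3, 6-8 and 11 are omitted for brevity):

```java
import java.util.Random;

/** A much-simplified driver for the main loop of the proposed algorithm. */
class ProposedPsoDriver {
    static int[] bestSequence(int[][] p, int[] machines, int swarm, int iters,
                              double w, double c1, double c2) {
        Random rng = new Random();
        int n = p[0].length;
        int[][] X = new int[swarm][];
        for (int i = 0; i < swarm; i++) X[i] = randomPermutation(n, rng); // Step 1 (random part)
        int[][] pbest = X.clone();
        int[] gbest = X[0];
        for (int it = 0; it < iters; it++) {                              // loop to Step 14
            for (int i = 0; i < swarm; i++) {
                X[i] = DiscreteUpdate.update(X[i], pbest[i], gbest, w, c1, c2); // Step 5
                int f = PermutationDecoder.makespan(X[i], p, machines);   // Step 9
                if (f < PermutationDecoder.makespan(pbest[i], p, machines))
                    pbest[i] = X[i];                                      // Step 10
                if (f < PermutationDecoder.makespan(gbest, p, machines))
                    gbest = X[i];
            }
        }
        return gbest;
    }

    /** Fisher-Yates shuffle of (0, ..., n-1). */
    static int[] randomPermutation(int n, Random rng) {
        int[] perm = new int[n];
        for (int i = 0; i < n; i++) perm[i] = i;
        for (int i = n - 1; i > 0; i--) {
            int j = rng.nextInt(i + 1);
            int t = perm[i]; perm[i] = perm[j]; perm[j] = t;
        }
        return perm;
    }
}
```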


The proposed PSO algorithm is a combination of PSO, the bottleneck heuristic, local search and simulated annealing, where the advantage of each component is exploited to find good solutions. In particular, the algorithm focuses on the identification and exploitation of the bottleneck. Local search is then applied to the sequences of the stages preceding the bottleneck to find a better particle position. Moreover, a restart procedure, in which the SA algorithm is employed, is utilized to avoid premature convergence.

4. Experiment results

4.1. Design of experiment

There are 77 problems in Carlier and Néron's benchmark set [17]. The three characteristics that define a problem are the number of jobs, the number of stages and the number of identical machines at each stage. For example, the notation j15c10b1 means a 15-job, 10-stage problem. The letters j and c are abbreviations for job and stage, respectively, and the letter b defines the structure of the machine layout at the stages. The last number, 1, is the problem index for a specific type. The meaning of the letters for the machine layouts is given below [20]:

a. There is one machine at the middle stage (bottleneck) and three machines at the other stages.
b. There is one machine at the first stage (bottleneck) and three machines at the other stages.
c. There are two machines at the middle stage (bottleneck) and three machines at the other stages.
d. There are three machines at each stage (no bottleneck).

The parameter setting in PSO is very important: the PSO search process is controlled by multiple parameters that have to be tuned in order to achieve good solution quality. The parameters were determined by a full factorial experimental design, in which all the parameters in the algorithm were tested (see Table 5).

Nine parameters need to be considered, giving a total of 2187 (= 3^7 × 1^2) different combinations. The 77 problems consist of 53 easy problems and 24 hard problems. The machine configuration has an important effect on the difficulty of a problem: problems with the a and b machine layouts are easier to solve, while problems with the c and d layouts are relatively harder, so they are mostly grouped as hard problems. The 77 problems can be classified into 13 groups according to their characteristics. Two problems from each group were tested to choose the best parameter setting, based on the mean values of CPU time and Cmax, among all the possible parameter values. The best parameter setting obtained is summarized in Table 6 and used for each group of the 77 problems.

4.2. Preliminary experiment

The proposed PSO algorithm was tested using the benchmark problems provided by Carlier and Néron [17]. First, we examine each method in the algorithm, including the pure PSO, SA and the bottleneck heuristic, to see how each method added to the algorithm helps to solve the HFS scheduling problem. In the preliminary experiment, the run time was limited to 0.1 s or until the lower bound (LB) was reached. We chose 13 hard problems from Carlier and Néron's benchmark problems. The same parameter setting was used to perform the PSO algorithm with each added method; the initial setting of the parameters is given in Table 7. Each method was run five times and its performance, including the best Cmax value, the average CPU time and the number of times the LB solution was reached, was recorded. As shown in Table 8, the pure PSO algorithm usually cannot solve the hard problems. By adding SA (i.e., PSO–SA), several solutions are improved, but not significantly.

Table 5
Parameters of the PSO algorithm.

Parameter factor | Levels
Swarm size (A) | 3 levels: 5, 15, 30
Restart generation (B) | 3 levels: 5, 10, 25
Inner SA iteration (C) | 3 levels: 1, 10, 25
Initial temperature of SA (D) | 3 levels: 25, 50, 100
Final temperature of SA (E) | 1 level: 0.01
Alpha SA (F) | 1 level: 0.99
Mutation rate (G) | 3 levels: 0.05, 0.08, 0.1
Crossover rate (H) | 3 levels: 0.3, 0.5, 0.8
Inner iteration local search (I) | 3 levels: 1, 5, 10


Table 6
Parameter settings.

Problem | Swarm size (A) | Restart generation (B) | Inner SA iteration (C) | Initial temperature (D) | Final temperature (E) | Alpha SA (F) | Mutation rate (G) | Crossover rate (H) | Inner LS (I)
j10c5a2 | 5 | 5 | 1 | 25 | 0.01 | 0.99 | 0.05 | 0.3 | 1
j10c5a5 | 15 | 25 | 1 | 25 | 0.01 | 0.99 | 0.1 | 0.8 | 1
j10c5b1 | 5 | 5 | 1 | 25 | 0.01 | 0.99 | 0.05 | 0.3 | 1
j10c5b3 | 15 | 25 | 1 | 25 | 0.01 | 0.99 | 0.1 | 0.8 | 1
j10c5c1 | 5 | 5 | 1 | 25 | 0.01 | 0.99 | 0.1 | 0.8 | 1
j10c5c5 | 5 | 10 | 1 | 50 | 0.01 | 0.99 | 0.05 | 0.3 | 1
j10c5d1 | 5 | 5 | 1 | 25 | 0.01 | 0.99 | 0.05 | 0.8 | 1
j10c5d4 | 5 | 25 | 1 | 25 | 0.01 | 0.99 | 0.1 | 0.5 | 1
j10c10a1 | 5 | 10 | 1 | 25 | 0.01 | 0.99 | 0.05 | 0.8 | 1
j10c10a2 | 5 | 5 | 1 | 25 | 0.01 | 0.99 | 0.05 | 0.5 | 1
j10c10b1 | 5 | 10 | 1 | 100 | 0.01 | 0.99 | 0.05 | 0.3 | 1
j10c10b2 | 5 | 5 | 1 | 25 | 0.01 | 0.99 | 0.05 | 0.8 | 1
j10c10c1 | 5 | 5 | 1 | 25 | 0.01 | 0.99 | 0.05 | 0.3 | 1
j10c10c2 | 30 | 25 | 25 | 100 | 0.01 | 0.99 | 0.1 | 0.8 | 5
j15c5a1 | 15 | 10 | 1 | 100 | 0.01 | 0.99 | 0.05 | 0.8 | 1
j15c5a2 | 5 | 5 | 1 | 25 | 0.01 | 0.99 | 0.05 | 0.3 | 1
j15c5b1 | 5 | 5 | 1 | 25 | 0.01 | 0.99 | 0.05 | 0.3 | 1
j15c5b3 | 5 | 5 | 1 | 25 | 0.01 | 0.99 | 0.05 | 0.8 | 1
j15c5c1 | 5 | 5 | 1 | 25 | 0.01 | 0.99 | 0.05 | 0.5 | 5
j15c5c6 | 5 | 5 | 1 | 100 | 0.01 | 0.99 | 0.05 | 0.5 | 1
j15c5d1 | 5 | 5 | 1 | 25 | 0.01 | 0.99 | 0.05 | 0.3 | 1
j15c5d2 | 30 | 25 | 25 | 100 | 0.01 | 0.99 | 0.1 | 0.8 | 5
j15c10a1 | 15 | 5 | 1 | 25 | 0.01 | 0.99 | 0.05 | 0.3 | 1
j15c10a4 | 5 | 5 | 10 | 100 | 0.01 | 0.99 | 0.05 | 0.8 | 1
j15c10b1 | 15 | 5 | 1 | 25 | 0.01 | 0.99 | 0.05 | 0.3 | 1
j15c10b3 | 5 | 5 | 1 | 50 | 0.01 | 0.99 | 0.05 | 0.3 | 1

Table 7
Parameter settings in the preliminary experiments.

Parameter | PSO | PSO–SA | PSO–SA–BH
Swarm size | 5 | 5 | 5
Restart generation | 5 | 5 | 5
Inner SA iteration | – | 1 | 1
Initial temperature of SA | – | 25 | 25
Final temperature of SA | – | 0.01 | 0.01
Alpha SA | – | 0.9 | 0.9
Mutation rate | 0.1 | 0.1 | 0.1
Crossover rate | 0.8 | 0.8 | 0.8
Inner iteration local search | – | – | 1

By further including the bottleneck heuristic (i.e., PSO–SA–BH), four of the hard problems can be solved.

4.3. Experiment results

The proposed PSO cannot be compared with the other algorithms on the basis of CPU time, because the computational environments of the algorithms are different. Thus the comparison is made based on the solution quality, measured by the percentage deviation between the solution and the lower bound as follows:

% deviation = (best Cmax − LB)/LB   (11)

For example, for problem j15c5d5 in Table 8, the pure PSO obtains Cmax = 82 against LB = 67, a deviation of (82 − 67)/67 = 22.38%.

The proposed PSO algorithm was programmed in Java and run on a PC with an Intel Core 2 Duo CPU P7350 2.0 GHz and 2 GB RAM. The run time was limited to 1600 s or until the LB was reached. If the LB was not found within the time limit, the search was stopped and the best solution found was accepted as the solution. Carlier and Néron's benchmark problems [17] have been solved by several researchers using different methods, such as branch and bound (B&B) by Carlier and Néron [17] and Néron et al. [3], artificial immune system (AIS) by Engin and Döyen [18], quantum-inspired immune algorithm (QIA) by Niu et al. [19], genetic algorithm (GA) by Besbes et al. [16], and ant colony optimization (ACO) by Alaykyran et al. [20] and Khalouli et al. [21]. The computational results are summarized in Table 9, in which the "% deviation" columns show the performance comparison among the different algorithms. The proposed PSO algorithm was run twenty times; the best Cmax over the replications and the average CPU time were recorded.

Table 8
Performance of PSO with different versions.

Problem | PSO Best Cmax | PSO N(a) | PSO CPU time | PSO–SA Best Cmax | PSO–SA N | PSO–SA CPU time | PSO–SA–BH Best Cmax | PSO–SA–BH N | PSO–SA–BH CPU time | LB | % dev. PSO | % dev. PSO–SA | % dev. PSO–SA–BH
j10c5c1 | 69 | 0 | 0.1 | 69 | 0 | 0.1 | 68 | 1 | 0.098 | 68 | 1.47 | 1.47 | 0
j10c5c3 | 72 | 0 | 0.1 | 72 | 0 | 0.1 | 71 | 1 | 0.1 | 71 | 1.41 | 1.41 | 0
j10c5c6 | 69 | 1 | 0.084 | 69 | 3 | 0.062 | 69 | 3 | 0.07 | 69 | 0 | 0 | 0
j10c5d2 | 74 | 0 | 0.1 | 74 | 0 | 0.1 | 74 | 0 | 0.1 | 73 | 1.37 | 1.37 | 1.37
j10c5d5 | 67 | 0 | 0.1 | 66 | 1 | 0.094 | 66 | 1 | 0.084 | 66 | 1.51 | 0 | 0
j10c10c2 | 120 | 0 | 0.1 | 120 | 0 | 0.1 | 119 | 0 | 0.1 | 116 | 3.45 | 3.45 | 2.59
j10c10c5 | 127 | 0 | 0.1 | 125 | 0 | 0.1 | 125 | 0 | 0.1 | 121 | 4.96 | 3.31 | 3.31
j15c5c1 | 87 | 0 | 0.1 | 86 | 0 | 0.1 | 86 | 0 | 0.1 | 85 | 2.35 | 1.18 | 1.18
j15c5c2 | 93 | 0 | 0.1 | 92 | 0 | 0.1 | 92 | 0 | 0.1 | 90 | 3.33 | 2.22 | 2.22
j15c5c5 | 78 | 0 | 0.1 | 78 | 0 | 0.1 | 77 | 0 | 0.1 | 73 | 6.85 | 6.85 | 5.48
j15c5d3 | 85 | 0 | 0.1 | 84 | 0 | 0.1 | 84 | 0 | 0.1 | 77 | 10.39 | 9.09 | 9.09
j15c5d5 | 82 | 0 | 0.1 | 81 | 0 | 0.1 | 81 | 0 | 0.1 | 67 | 22.38 | 20.9 | 20.9
j15c5d6 | 84 | 0 | 0.1 | 83 | 0 | 0.1 | 83 | 0 | 0.1 | 79 | 6.33 | 5.06 | 5.06

(a) N: number of times (out of 5) the LB was reached.


Table 9
Comparison results on benchmark problems (problems shown in bold are hard problems).

Columns: Problem; Cmax and CPU for each of PSO, QIA, ACO, AIS and B&B; LB; % deviation for each of PSO, QIA, ACO, AIS and B&B.

j10c5a2 j10c5a3 j10c5a4 j10c5a5 j10c5a6

88 117 121 122 110

0.002 0.002 0.003 0.013 0.174

88 117 121 122 110

– – – – –

j10c5b1 j10c5b2 j10c5b3 j10c5b4 j10c5b5 j10c5b6

130 107 109 122 153 115

0.003 0.003 0.012 0.025 0.001 0.001

130 107 109 122 153 115

j10c5c1 j10c5c2 j10c5c3 j10c5c4 j10c5c5 j10c5c6

68 74 71 66 78 69

0.332 0.535 36.997 0.215 0.122 0.405

j10c5d1 j10c5d2 j10c5d3 j10c5d4 j10c5d5 j10c5d6

66 73 64 70 66 62

j10c10a1 j10c10a2 j10c10a3 j10c10a4 j10c10a5 j10c10a6

Cmax

AIS

B&B

LB

% Deviation

CPU

Cmax

CPU

Cmax

CPU

PSO

QIA

ACO

AIS

B&B

88 117 121 124 110

– – – – –

88 117 121 122 110

1 1 1 1 4

88 117 121 122 110

13 7 6 11 6

88 117 121 122 110

0 0 0 0 0

0 0 0 0 0

0 0 0 1.64 0

0 0 0 0 0

0 0 0 0 0

– – – – – –

131 107 109 124 153 115

– – – – – –

130 107 109 122 153 115

1 1 1 2 1 1

130 107 109 122 153 115

13 6 9 6 6 11

130 107 109 122 153 115

0 0 0 0 0 0

0 0 0 0 0 0

0.77 0 0 1.64 0 0

0 0 0 0 0 0

0 0 0 0 0 0

69 76 74 75 79 72

– – – – – –

68 76 72 66 78 69

– – – – – –

68 74 72 66 78 69

32 4 a 3 14 12

68 74 71 66 78 69

28 19 240 1017 42 4865(b)

68 74 71 66 78 69

0 0 0 0 0 0

1.47 2.7 4.23 13.6 1.28 4.35

0 2.7 1.41 0 0 0

0 0 1.41 0 0 0

0 0 0 0 0 0

0.185 1.158 0.098 0.337 0.515 0.383

69 76 68 75 71 64

– – – – – –

– – – – – –

– – – – – –

66 73 64 70 66 62

5 31 15 5 1446 8

66 73 64 70 66 62

6490(b) 2617(b) 481 393 1627(b) 6861(b)

66 73 64 70 66 62

0 0 0 0 0 0

4.55 4.11 6.25 7.14 7.58 3.23

– – – – – –

0 0 0 0 0 0

0 0 0 0 0 0

139 158 148 149 148 146

0.055 0.87 0.017 0.085 0.102 0.239

139 158 148 149 148 146

– – – – – –

– – – – – –

– – – – – –

139 158 148 149 148 146

1 18 1 2 1 4

139 158 148 149 148 146

41 21 58 21 36 20

139 158 148 149 148 146

0 0 0 0 0 0

0 0 0 0 0 0

– – – – – –

0 0 0 0 0 0

0 0 0 0 0 0

j10c10b1 j10c10b2 j10c10b3 j10c10b4 j10c10b5 j10c10b6

163 157 169 159 165 165

0.013 0.221 0.014 0.021 0.037 0.056

– – – – – –

– – – – – –

163 157 169 159 165 165

– – – – – –

163 157 169 159 165 165

1 1 1 1 1 1

163 157 169 159 165 165

36 66 19 20 33 34

163 157 169 159 165 165

0 0 0 0 0 0

– – – – – –

0 0 0 0 0 0

0 0 0 0 0 0

0 0 0 0 0 0

j10c10c1 j10c10c2 j10c10c3 j10c10c4 j10c10c5 j10c10c6

115 117 116 120 125 106

a a a a a a

– – – – – –

– – – – – –

118 117 108 112 126 102

– – – – – –

115 119 116 120 126 106

a a a a a a

127 116 133 135 145 112

c 1100 c c c c

113 116 98 103 121 97

j15c5a1 j15c5a2 j15c5a3 j15c5a4 j15c5a5 j15c5a6

178 165 130 156 164 178

0.06 0.005 0.006 0.013 0.004 0.006

– – – – – –

– – – – – –

178 165 132 156 166 178

– – – – – –

178 165 130 156 164 178

1 1 1 2 1 1

178 165 130 156 164 178

18 35 34 21 34 38

178 165 130 156 164 178

j15c5b1 j15c5b2 j15c5b3 j15c5b4 j15c5b5 j15c5b6

170 152 157 147 166 175

0.003 0.005 0.03 0 0.086 0.016

– – – – – –

– – – – – –

170 152 157 149 166 176

– – – – – –

170 152 157 147 166 175

1 1 1 1 2 1

170 152 157 147 166 175

16 25 15 37 20 23

170 152 157 147 166 175

j15c5c1 j15c5c2 j15c5c3 j15c5c4 j15c5c5 j15c5c6

85 90 87 89 74 91

4.205 1198 2.398 2.208 a 0.191

– – – – – –

– – – – – –

85 90 87 89 73 91

– – – – – –

85 91 87 89 74 91

j15c5d1 j15c5d2 j15c5d3 j15c5d4 j15c5d5 j15c5d6

167 84 82 84 79 81

0 a a a a a

– – – – – –

– – – – – –

167 86 83 84 80 79

– – – – – –

167 84 83 84 80 82

1 a a a a a

167 85 96 101 97 87

j15c10a1 j15c10a2 j15c10a3

236 200 198

0.018 0.214 0.171

236 200 198

– – –

236 200 198

– – –

236 200 198

1 30 4

236 200 198

774 a 16 317 a 19

85 90 87 90 84 91

– – – – – –

4.42 0.86 10.2 8.74 4.13 5.15

0 0 0 0 0 0

– – – – – –

0 0 1.54 0 1.22 0

0 0 0 0 0 0

0 0 0 0 0 0

0 0 0 0 0 0

– – – – – –

0 0 0 1.36 0 0.57

0 0 0 0 0 0

0 0 0 0 0 0

85 90 87 89 73 91

0 0 0 0 1.37 0

– – – – – –

0 0 0 0 0 0

0 1.11 0 0 1.37 0

0 0 0 1.12 15.07 0

24 c c c c c

167 82 77 61 67 79

0 2.44 6.49 37.7 17.91 2.53

– – – – – –

0 2.44 7.79 37.7 19.4 3.8

0 3.66 24.68 65.57 44.78 10.13

40 154 45

236 200 198

2131(b) 184 202 c c 57

1.77 0.86 18.37 16.5 3.31 9.28

0 0 0

0 0 0

0 4.88 7.79 37.7 19.4 0 0 0 0

1.77 2.59 18.37 16.5 4.13 9.28

0 0 0

12.39 0 35.71 31.07 19.83 15.46

0 0 0


Table 9 (Continued)

j15c10a4 j15c10a5 j15c10a6

225 182 200

0.072 0.509 0.468

225 182 200

– – –

228 182 200

– – –

225 182 200

12 2 2

225 183 200

78 c 44

225 182 200

0 0 0

0 0 0

1.33 0 0

0 0 0

0 0.55 0

j15c10b1 j15c10b2 j15c10b3 j15c10b4 j15c10b5 j15c10b6

222 187 222 221 200 219

0.017 0.012 0.007 0.007 0.135 0.006

222 187 222 221 200 219

– – – – – –

222 188 224 221 – –

– – – – – –

222 187 222 221 200 219

3 1 1 1 1 1

222 187 222 221 200 219

70 80 80 84 84 67

222 187 222 221 200 219

0 0 0 0 0 0

0 0 0 0 0 0

0 0.53 0.9 0 – –

0 0 0 0 0 0

0 0 0 0 0 0

Cmax

B&B

LB

% Deviation

a: PSO and AIS could not reach the LB within 1600 s. b: B&B reached the LB in more than 1600 s. c: B&B could not reach the LB.

For the other algorithms, we obtained the computational results from their original papers. Note that the AIS [18], ACO [20] and B&B [3] algorithms also limited their run time to 1600 s and accepted the best Cmax as the solution, while QIA [19] was run for a fixed number of iterations and the best Cmax from 10 replications was recorded. With regard to the computational environment, AIS was coded in Excel macros (Visual Basic for Applications) and run on an Intel P4 1.7 GHz with 256 MB RAM, ACO [20] was coded in Visual Basic and run on an Intel Celeron 1500 microprocessor with 256 MB RAM, QIA was coded in MATLAB and run on a Pentium Dual 1.60 GHz PC, and B&B [3] was coded in C and run on a SPARC Enterprise 4000.

Table 10
Performance summary of different algorithms.

Alg. | Easy problems: % solved | % dev. | # of prob. | Hard problems: % solved | % dev. | # of prob.
PSO | 88.7 | 0.95 | 53 | 75.0 | 2.85 | 24
QIA | 100.0 | 0.00 | 29 | 0.0 | 5.04 | 12
ACO | 64.4 | 0.92 | 45 | 66.7 | 3.88 | 18
AIS | 88.7 | 0.99 | 53 | 66.7 | 3.13 | 24
B&B | 88.7 | 2.17 | 53 | 70.8 | 6.88 | 24

The performance of all the compared algorithms is summarized in Table 10. For the hard problems, PSO can solve 18 of the 24 problems (75%) with the smallest average percentage deviation among all the algorithms. For the easy problems, the PSO, AIS and B&B algorithms can each solve 47 of the 53 problems (88.7%), but PSO obtains the smallest average deviation value (0.95%) among these three best algorithms. Note that ACO has a smaller deviation value (0.92%) but it can solve only 45 of the 53 problems, and QIA has a zero deviation value but it can solve only 29 of the 53 problems. The results show the competitiveness of the proposed PSO algorithm compared with the other algorithms. The AIS results are comparable with those of the proposed PSO, but AIS still cannot solve as many problems as the proposed PSO. QIA speeds up convergence to

overcome the limitation of the immune algorithm, but because of the time limitation it can solve only part of the problems. Although ACO has the smallest deviation value for the easy problems, it also cannot solve as many problems as the proposed PSO. Thus, we can conclude that the proposed PSO algorithm outperforms all the compared algorithms in solving the existing HFS benchmark problems. A further investigation of PSO is conducted in the next subsection.

4.4. Hypothesis test of PSO against AIS

As a formal comparison, a hypothesis test is performed to compare PSO with AIS. AIS is selected because it is the best metaheuristic approach for the HFS benchmark problems in the literature. Although branch and bound (B&B) has good performance, it is an enumeration method that requires a huge computation time as the problem size increases. The vast majority of the results in Table 9 indicate no statistically significant difference between the proposed PSO and AIS on the benchmark problems, although PSO can solve more problems than AIS. This is because the existing benchmark problems were established by solving them with B&B, so the problem size is limited, i.e., they are not hard enough. To generate harder problems, we change the number of jobs from 15 to 30. In each problem, machines are arranged into 5 stages in series. At each stage, the number of machines is increased from 2 or 3 to a random number determined by U(3, 5). The range of the processing time distribution is expanded from U(3, 20) to U(1, 100). The ten harder problems generated can be downloaded from http://web.ntust.edu.tw/∼i.e./index.html.

This study uses the Minitab software to conduct a Mann–Whitney test [36], a nonparametric test, in which the null hypothesis H0 assumes that μPSO ≥ μAIS and the alternative hypothesis H1 assumes that μPSO < μAIS, with a 0.05 significance level. Each problem was run twenty times, each run being limited to 200 s. The computational results are summarized in Table 11.
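For readers who want to reproduce the test outside Minitab, the following is an illustrative sketch of a one-sided Mann–Whitney test with the normal approximation (our own code, not the authors'; Minitab's tie handling and continuity correction may differ slightly):

```java
import java.util.Arrays;

/** An illustrative one-sided Mann-Whitney test (normal approximation). */
class MannWhitney {
    /** Returns the p-value for H1: values in a tend to be smaller than in b. */
    static double pValueLess(double[] a, double[] b) {
        int n1 = a.length, n2 = b.length;
        double[] all = new double[n1 + n2];
        System.arraycopy(a, 0, all, 0, n1);
        System.arraycopy(b, 0, all, n1, n2);
        Arrays.sort(all);
        double r1 = 0;                          // rank sum of sample a (ties averaged)
        for (double v : a) r1 += averageRank(all, v);
        double u = r1 - n1 * (n1 + 1) / 2.0;    // U statistic of sample a
        double mean = n1 * n2 / 2.0;
        double sd = Math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0);
        return phi((u - mean) / sd);            // small U -> small p under H1
    }

    static double averageRank(double[] sorted, double v) {
        int lo = 0, count = 0;
        for (int i = 0; i < sorted.length; i++) {
            if (sorted[i] == v) { if (count == 0) lo = i; count++; }
        }
        return lo + 1 + (count - 1) / 2.0;      // average of the tied ranks
    }

    /** Standard normal CDF. */
    static double phi(double z) { return 0.5 * (1 + erf(z / Math.sqrt(2))); }

    /** Abramowitz-Stegun 7.1.26 approximation of erf. */
    static double erf(double x) {
        double t = 1 / (1 + 0.3275911 * Math.abs(x));
        double y = 1 - (((((1.061405429 * t - 1.453152027) * t) + 1.421413741) * t
                - 0.284496736) * t + 0.254829592) * t * Math.exp(-x * x);
        return x >= 0 ? y : -y;
    }
}
```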

Table 11
Mann–Whitney test of PSO with AIS for 30-job problems.

Item | PSO Ave. | PSO Min. | PSO Max. | PSO Std. | AIS Ave. | AIS Min. | AIS Max. | AIS Std. | p-Value | Significant? (p < 0.05) | CPU (s) PSO | CPU (s) AIS
j30c5e1 | 474.70 | 471 | 478 | 1.42 | 485.35 | 479 | 488 | 2.58 | 0.00 | Y | 96.16 | 99.44
j30c5e2 | 616.25 | 616 | 617 | 0.44 | 620.70 | 619 | 625 | 1.63 | 0.00 | Y | 55.28 | 80.24
j30c5e3 | 610.25 | 602 | 619 | 4.70 | 625.70 | 614 | 635 | 4.81 | 0.00 | Y | 64.56 | 116.70
j30c5e4 | 577.10 | 575 | 580 | 1.52 | 588.55 | 582 | 594 | 3.38 | 0.00 | Y | 86.98 | 108.63
j30c5e5 | 606.80 | 605 | 609 | 1.11 | 618.75 | 610 | 624 | 3.42 | 0.00 | Y | 79.84 | 101.19
j30c5e6 | 612.50 | 605 | 619 | 3.49 | 625.75 | 620 | 631 | 3.01 | 0.00 | Y | 67.99 | 100.47
j30c5e7 | 630.60 | 629 | 632 | 0.75 | 641.30 | 635 | 659 | 4.67 | 0.00 | Y | 87.18 | 93.56
j30c5e8 | 684.20 | 678 | 688 | 2.50 | 697.50 | 686 | 704 | 5.14 | 0.00 | Y | 97.67 | 100.68
j30c5e9 | 654.65 | 651 | 658 | 1.87 | 670.20 | 662 | 676 | 3.85 | 0.00 | Y | 83.80 | 100.75
j30c5e10 | 599.75 | 594 | 617 | 5.28 | 613.45 | 604 | 619 | 5.33 | 0.00 | Y | 77.46 | 89.29
Ave. | 606.68 | 602.6 | 611.7 | 2.31 | 618.73 | 611.1 | 625.5 | 3.78 | | | 79.69 | 99.09


It can be observed from the table that the proposed PSO approach is statistically significantly better than the compared AIS approach on all ten harder problems. PSO also has smaller average values of the minimum, maximum and standard deviation than AIS. With respect to the computation time, the "CPU" column in the table gives the average time at which the solution converges to the final solution of the respective approach; it shows that, on average, PSO takes less time to reach a better solution than AIS.

5. Conclusions

In this paper, we have considered the hybrid flow shop (HFS) problem with the objective of minimizing the makespan. The main contribution of the paper is to develop a new approach hybridizing PSO with a bottleneck heuristic to fully exploit the bottleneck stage, and with SA to help escape from local optima. In addition, we add a local search to further improve the solution at the stages preceding the bottleneck. To evaluate the performance of the PSO algorithm, it was tested on the well-known benchmark problems of Carlier and Néron [17]. Computational results show that the proposed PSO algorithm outperforms the PSO–SA algorithm, which in turn performs better than the pure PSO algorithm. Compared with other existing algorithms for the HFS, the proposed PSO algorithm found better solutions for seven instances relative to the AIS results and found equally good solutions for the remaining instances.

The PSO algorithm has been applied successfully in this study to the HFS problem. Future research may be conducted to further investigate the algorithm for other scheduling problems. It is also worthwhile to develop new ways of updating the velocity and of determining the parameter setting optimally in a dynamic environment while the algorithm is running.

Acknowledgements

The authors would like to thank Jacques Carlier and Emmanuel Néron for their benchmark problems and other help. We are also grateful to Orhan Engin and Alper Döyen for sharing their previous work.

References

[1] S.A. Brah, J.L. Hunsucker, Branch and bound algorithm for the flow shop with multiple processors, European Journal of Operational Research 51 (1991) 88–99.
[2] M.C. Portmann, A. Vignier, D. Dardilhac, D. Dezalay, Branch and bound crossed with GA to solve hybrid flowshops, European Journal of Operational Research 107 (1998) 389–400.
[3] E. Néron, P. Baptiste, J.N.D. Gupta, Solving hybrid flow shop problem using energetic reasoning and global operations, Omega 29 (2001) 501–511.
[4] F. Riane, A. Artiba, S.E. Elmaghraby, A hybrid three stage flow shop problem: efficient heuristics to minimize makespan, European Journal of Operational Research 109 (1998) 321–329.
[5] J.N.D. Gupta, Two-stage, hybrid flowshop scheduling problem, Journal of the Operational Research Society 39 (1988) 359–364.
[6] C. Kahraman, O. Engin, I. Kaya, R.E. Öztürk, Multiprocessor task scheduling in multistage hybrid flow-shops: a parallel greedy algorithm approach, Applied Soft Computing 10 (2010) 1293–1300.
[7] H.T. Lin, C.J. Liao, A case study in a two-stage hybrid flow shop with setup time and dedicated machines, International Journal of Production Economics 86 (2003) 133–143.
[8] J.N.D. Gupta, E.A. Tunç, Scheduling a two-stage hybrid flowshop with separable setup and removal times, European Journal of Operational Research 77 (1994) 415–428.
[9] M. Heydari, M.B. Fakhrzad, A heuristic algorithm for hybrid flow shop production scheduling to minimize the sum of the earliness and tardiness costs, Journal of the Chinese Institute of Industrial Engineers 25 (2008) 105–115.
[10] A. Janiak, E. Kozan, M. Lichtenstein, C. Oğuz, Metaheuristic approaches to the hybrid flow shop scheduling problem with a cost-related criterion, International Journal of Production Economics 105 (2007) 407–424.
[11] M.B. Abiri, M. Zandieh, A.A. Tabriz, A tabu search approach to hybrid flow shop scheduling with sequence-dependent setup time, Journal of Applied Sciences 9 (2009) 1740–1745.
[12] O. Engin, G. Ceran, M.K. Yilmaz, An efficient genetic algorithm for hybrid flow shop scheduling with multiprocessor task problems, Applied Soft Computing (2011), doi:10.1016/j.asoc.2010.12.006.
[13] R. Ruiz, C. Maroto, A genetic algorithm for hybrid flowshops with sequence dependent setup times and machine eligibility, European Journal of Operational Research 169 (2006) 781–800.
[14] C. Oğuz, M.F. Ercan, A genetic algorithm for hybrid flow-shop scheduling with multiprocessor tasks, Journal of Scheduling 8 (2005) 323–351.
[15] M.R.A. Naseri, M.A.B. Nia, Hybrid flow shop scheduling with parallel batching, International Journal of Production Economics 117 (2009) 185–196.
[16] W. Besbes, T. Loukil, J. Teghem, Using genetic algorithm in the multiprocessor flow shop to minimize the makespan, in: Proceedings of the International Conference on Service Systems and Service Management, France, 2006, pp. 1228–1233.
[17] J. Carlier, E. Néron, An exact method for solving the multiprocessor flowshop, RAIRO: Operations Research 34 (2000) 1–25.
[18] O. Engin, A. Döyen, A new approach to solve hybrid flow shop scheduling problems by artificial immune system, Future Generation Computer Systems 20 (2004) 1083–1095.
[19] Q. Niu, T. Zhou, S. Ma, A quantum-inspired immune algorithm for hybrid flow shop with makespan criterion, Journal of Universal Computer Science 15 (2009) 765–785.
[20] K. Alaykyran, O. Engin, A. Döyen, Using ant colony optimization to solve hybrid flow shop scheduling problems, International Journal of Advanced Manufacturing Technology 35 (2007) 541–550.
[21] S. Khalouli, F. Ghedjati, A. Hamzaoui, An integrated ant colony optimization algorithm for the hybrid flow shop scheduling problem, in: Proceedings of the International Conference on Computers & Industrial Engineering, France, 2009, pp. 554–559.
[22] Y. Marinakis, M. Marinaki, A hybrid multi-swarm particle swarm optimization algorithm for the probabilistic traveling salesman problem, Computers and Operations Research 37 (2010) 432–442.
[23] M. El-Abd, H. Hassan, M. Anis, M.S. Kamel, M. Elmasry, Discrete cooperative particle swarm optimization for FPGA placement, Applied Soft Computing 10 (2010) 284–295.
[24] C.J. Liao, C.T. Tseng, P. Luarn, A discrete version of particle swarm optimization for flowshop scheduling problems, Computers and Operations Research 34 (2007) 3099–3111.
[25] B. Liu, L. Wang, Y. Jin, An effective hybrid PSO-based algorithm for flow shop scheduling with limited buffers, Computers and Operations Research 35 (2008) 2791–2806.
[26] Q.K. Pan, L. Wang, B. Qian, A novel differential evolution algorithm for bi-criteria no-wait flow shop scheduling problems, Computers and Operations Research 36 (2009) 2498–2511.
[27] Q.K. Pan, M.F. Tasgetiren, Y.C. Liang, A discrete particle swarm optimization algorithm for the no-wait flowshop scheduling problem, Computers and Operations Research 35 (2008) 2807–2839.
[28] D.Y. Sha, C.Y. Hsu, A new particle swarm optimization for the open shop scheduling problem, Computers and Operations Research 35 (2008) 3243–3261.
[29] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, 1995, pp. 1942–1948.
[30] M. Clerc, Particle Swarm Optimization, ISTE, London, 2006.
[31] A.P. Engelbrecht, Fundamentals of Computational Swarm Intelligence, John Wiley & Sons, 2005.
[32] S. Kirkpatrick, C.D. Gelatt Jr., M.P. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671–680.
[33] C.D. Paternina-Arboleda, J.R. Montoya-Torres, M.J. Acero-Dominguez, M.C. Herrera-Hernandez, Scheduling jobs on a k-stage flexible flow-shop, Annals of Operations Research 164 (2008) 29–40.
[34] X. Wang, J. Xiao, PSO-based model predictive control for nonlinear processes, Lecture Notes in Computer Science 3611 (2005) 196–203.
[35] E. Triki, Y. Collette, P. Siarry, A theoretical study on the behavior of simulated annealing leading to a new cooling schedule, European Journal of Operational Research 166 (2005) 77–92.
[36] R.E. Kirk, Statistics: An Introduction, 3rd ed., Holt, Rinehart & Winston, Fort Worth, TX, 1990.
Hsu, A new particle swarm optimization for the open shop scheduling problem, Computers and Operations Research 35 (2008) 3243–3261. [29] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of IEEE International Conference on Neural Network, Piscataway, NY, 1995, pp. 1942–1948. [30] M. Clerc, Particle Swarm Optimization, ISTE, London, 2006. [31] A.P. Engelbrecht, Fundamentals of Computational Swarm Intelligent, John Wiley & Sons, 2005. [32] S. Kirkpatrick, C.D. Gelatt Jr., M.P. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671–680. [33] C.D. Paternina-Arboleda, J.R. Montoya-Torres, M.J. Acero-Dominguez, M.C. Herrera-Hernandez, Scheduling jobs on k-stage flexible flow-shop, Annals of Operations Research 164 (2008) 29–40. [34] X. Wang, J. Xiao, PSO-based model predictive control for nonlinear processes, Lecture Notes in Computer Science 3611 (2005) 196–203. [35] E. Triki, Y. Collette, P. Siarry, A theoretical study on the behavior of simulated annealing leading to a new cooling schedule, European Journal of Operational Research 166 (2005) 77–92. [36] R.E. Kirk, Statistics: An Introduction, 3rd ed., Holt, Rinehart & Winston, Fort Worth, Texas, 1990.