A combinatorial particle swarm optimisation for solving permutation flowshop problems


Computers & Industrial Engineering 54 (2008) 526–538

Bassem Jarboui a, Saber Ibrahim a, Patrick Siarry b,*, Abdelwaheb Rebai c

a FSEGS, route de l'aéroport km 4, Sfax 3018, Tunisie
b LiSSi, Université de Paris 12, 61 avenue du Général de Gaulle, 94010 Créteil, France
c ISAAS, route de l'aéroport km 4, B.P.N 101, Sfax 3018, Tunisie

Received 12 April 2007; received in revised form 6 September 2007; accepted 7 September 2007. Available online 19 September 2007.

Abstract

The m-machine permutation flowshop problem (PFSP) with the objectives of minimizing the makespan and the total flowtime is a common scheduling problem, which is known to be NP-complete in the strong sense when m ≥ 3. This work proposes a new algorithm for solving the permutation FSP, namely combinatorial Particle Swarm Optimization (CPSO). Furthermore, we incorporate in this heuristic an improvement procedure based on the simulated annealing approach. The proposed algorithm was applied to well-known benchmark problems and compared with several competing metaheuristics.
© 2007 Elsevier Ltd. All rights reserved.

Keywords: Particle swarm optimization; Combinatorial particle swarm optimization; Permutation flowshop problem; Makespan; Flowtime

1. Introduction

The permutation flowshop scheduling problem (PFSP) consists in scheduling a set of n jobs on m machines in the same technological order, such that each job is processed on machine 1 first, on machine 2 second, ..., and on machine m last; the processing time of job i on machine j is denoted pij. The objective is to find a sequence (schedule) in which these n jobs should be processed on each of the m machines such that a given criterion is optimized. The most common criteria are makespan minimization and total flowtime minimization. The PFSP has been extensively investigated by the research community. It has been proved to be NP-complete in the strong sense when m ≥ 3 (Garey, Johnson, & Sethi, 1976). Few exact algorithms have been developed so far for solving the PFSP. These methods include branch-and-bound algorithms (Lomnicki, 1965; Brown & Lomnicki, 1966; McMahon & Burton, 1967) for the makespan minimization criterion and (Ignall & Schrage, 1965; Bansal, 1977; Van de Velde, 1990; Chung, Flynn, & Kirca, 2002) for total flowtime minimization.

* Corresponding author. E-mail addresses: [email protected] (B. Jarboui), [email protected] (S. Ibrahim), [email protected] (P. Siarry), [email protected] (A. Rebai).
doi:10.1016/j.cie.2007.09.006


A major constraint of the above techniques is that there is a limit on the size of the problems that can be solved in a reasonable time. Therefore, the only feasible way to solve large PFSP instances is to apply heuristic algorithms, which may find high-quality solutions in short computation time. Several such algorithms have been proposed to minimize the makespan and total flowtime criteria; they can be classified into the following categories: (i) construction methods (Johnson, 1954; Palmer, 1965; Campbell, Dudek, & Smith, 1970; Gupta, 1972; Nawaz, Enscore, & Ham, 1983; Liu & Reeves, 2001); (ii) improvement heuristics (Dannenbring, 1977; Ho & Chang, 1991; Suliman, 2000; Widmer & Hertz, 1989; Rajendran, 1993; Ho, 1995; Wang, Chu, & Proth, 1997; Woo & Yim, 1998); (iii) metaheuristics, which include algorithms like simulated annealing (Osman & Potts, 1989; Ogbu & Smith, 1990; Ishibuchi, Misaki, & Tanaka, 1995), tabu search (Taillard, 1990; Nowicki & Smutnicki, 1996; Ben-Daya & Al-Fawzan, 1998), genetic algorithms (Reeves, 1995; Murata, Ishibuchi, & Tanaka, 1996; Ruiz, Maroto, & Alcaraz, 2006), ant colony algorithms (Rajendran & Ziegler, 2004, 2005) and particle swarm optimization (Tasgetiren, Liang, Sevkli, & Gencyilmaz, 2007; Liao, Tseng, & Luarn, 2007).

Particle swarm optimization (PSO), first proposed by Kennedy and Eberhart (1995), is one of the most recent and promising evolutionary metaheuristics; it is inspired by the adaptation of a natural system, based on the metaphor of social communication and interaction. Originally, PSO focused on solving nonlinear programming and nonlinear constrained optimization problems involving continuous variables. The challenge has been to apply the algorithm to combinatorial problems; thus, new PSO variants have been developed for solving combinatorial optimization problems, such as the single machine total weighted tardiness problem (Tasgetiren et al., 2004), the PFSP (Tasgetiren et al., 2007; Liao et al., 2007) and the task assignment problem (Salman, Ahmed, & Almadani, 2002).

In this paper we propose a novel PSO approach for solving combinatorial optimization problems. The permutation flowshop scheduling problem with the objective of minimizing the makespan and total flowtime criteria is investigated. A set of benchmark flowshop scheduling problems taken from Taillard (1993) is used for the experimental evaluation.

The remainder of this paper is organized as follows. Section 2 introduces the PFSP. Section 3 presents the details of the PSO algorithm, and Section 4 introduces our CPSO algorithm. In Section 5, we adapt the CPSO algorithm to solve the PFSP. Next, in Section 6, we propose our hybrid CPSO algorithm (H-CPSO). Section 7 provides the computational evaluation of the solutions obtained by the two algorithms, in comparison with the continuous version of PSO proposed by Tasgetiren et al. (2007), the composite heuristic of Liu and Reeves (2001) and the two ant colony algorithms of Rajendran and Ziegler (2004). Finally, concluding remarks and future issues are given in Section 8.

2. Formulation of the permutation flowshop problem

In a typical static flowshop, a set of n jobs is simultaneously available for processing on a set of m machines. Without loss of generality, we assume that all jobs are available for processing at time zero. Each job j, j ∈ J = {1, 2, ..., n}, passes through the machines 1, 2, ..., m in that order and requires an uninterrupted processing time pjk on machine k, k = 1, 2, ..., m. Each machine may process the n jobs in any order.
If all machines process the n jobs in exactly the same order, the schedule is called a permutation schedule. The permutation flowshop scheduling problem (PFSP) treated in this paper is often designated by the symbols n/m/P/Obj, where n jobs have to be processed on m machines in the same order. P indicates that only permutation schedules are considered, i.e. the order in which the jobs are processed is identical for all machines; hence a schedule is uniquely represented by a permutation of jobs. Obj describes the performance measure by which the schedule is to be evaluated, i.e. the objective function. For example, n/m/P/Cmax is the problem of minimizing the makespan Cmax, and n/m/P/F is the problem of minimizing the total flowtime F. The scheduling objective is to minimize the makespan and the flowtime criteria. It is worth noticing that, in this case, a job sequence uniquely determines a permutation schedule, since unforced machine idleness is undesirable. It is assumed that a job may be processed by at most one machine and a machine may process at most one job at any point in time. Let [i] denote the index of the ith job in a schedule. Then, job [i] cannot start on machine k before it is completed on machine k − 1, nor before job [i−1] is completed on machine k. Let Cjk denote the completion time of job j on machine k. Then C[i],k is the completion time of the job scheduled in the ith position on machine k. C[i],k is computed as:


$$C_{[i],k} = p_{[i],k} + \max\left(C_{[i],k-1},\, C_{[i-1],k}\right), \quad i = 2, 3, \ldots, n, \; k = 2, 3, \ldots, m.$$

So the makespan is equal to Cmax = C_{[n],m}. The total flowtime is then computed by the following formula:

$$F = \sum_{i=1}^{n} C_{[i],m}.$$
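As an illustration of this recurrence, the following C++ sketch computes the completion times of a given permutation and derives the makespan and the total flowtime from them (the 0-based indexing convention and the function names are ours, not the paper's):

```cpp
#include <algorithm>
#include <vector>

// p[j][k]  : processing time of job j on machine k (0-based indices).
// perm[i]  : job scheduled in position i.
// Returns C, where C[i][k] is the completion time of the job in position i on machine k.
std::vector<std::vector<double>> completionTimes(const std::vector<std::vector<double>>& p,
                                                 const std::vector<int>& perm) {
    const std::size_t n = perm.size();
    const std::size_t m = p[0].size();
    std::vector<std::vector<double>> C(n, std::vector<double>(m, 0.0));
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t k = 0; k < m; ++k) {
            double ready = 0.0;
            if (i > 0) ready = std::max(ready, C[i - 1][k]); // previous job on machine k
            if (k > 0) ready = std::max(ready, C[i][k - 1]); // same job on machine k - 1
            C[i][k] = ready + p[perm[i]][k];
        }
    }
    return C;
}

// Makespan Cmax: completion time of the last job on the last machine.
double makespan(const std::vector<std::vector<double>>& C) { return C.back().back(); }

// Total flowtime F: sum over positions of the completion time on the last machine.
double flowtime(const std::vector<std::vector<double>>& C) {
    double F = 0.0;
    for (const auto& row : C) F += row.back();
    return F;
}
```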

3. The particle swarm optimization algorithm

PSO is one of the nature-inspired metaheuristics. The first PSO model was introduced by Kennedy and Eberhart (1995). The main idea in PSO is to simulate social systems such as fish schooling and bird flocking. Similarly to evolutionary computation techniques, PSO uses a set of particles representing potential solutions to the problem under consideration. The swarm consists of m particles; each particle has a position Xi = {xi1, xi2, ..., xin} and a velocity Vi = {vi1, vi2, ..., vin}, where i = 1, 2, ..., m, and moves through an n-dimensional search space. According to the global variant of the PSO algorithm, each particle moves towards its best previous position and towards the best particle g in the swarm. Let us denote the best previously visited position of the ith particle, the one that gives its best fitness value, as Pi = {pi1, pi2, ..., pin}, and the best previously visited position of the swarm as G = {G1, G2, ..., Gn}. The change of position of each particle from one iteration to another can be computed according to the distance between the current position and its previous best position and the distance between the current position and the best position of the swarm. The velocity and position of each particle are then updated using the two following equations:

$$v_{ij}^{t} = v_{ij}^{t-1} + c_1 r_1 \left(p_{ij}^{t-1} - x_{ij}^{t-1}\right) + c_2 r_2 \left(G_{j}^{t-1} - x_{ij}^{t-1}\right) \tag{1}$$

$$x_{ij}^{t} = x_{ij}^{t-1} + v_{ij}^{t} \tag{2}$$

where t = 1, 2, ..., Tmax denotes the iteration number, c1 is the cognition learning factor, c2 is the social learning factor, and r1 and r2 are random numbers uniformly distributed in [0, 1]. Thus, the particle flies through potential solutions towards P_i^t and G^t in a guided way, while still exploring new areas through the stochastic mechanism in order to escape from local optima. Since there was no actual mechanism for controlling the velocity of a particle, it was necessary to impose a maximum value Vmax on it. If the velocity exceeds this threshold, it is set equal to Vmax; this controls the maximum travel distance at each iteration and avoids the particle flying past good solutions. The PSO algorithm is terminated after a maximal number of generations, or when the best particle position of the entire swarm cannot be improved further after a sufficiently large number of generations. The aforementioned problem was also addressed by incorporating a weight parameter for the previous velocity of the particle. Thus, in the latest versions of PSO, Eqs. (1) and (2) are changed into the following ones:

$$v_{ij}^{t} = \chi\left(\omega\, v_{ij}^{t-1} + c_1 r_1 \left(p_{ij}^{t-1} - x_{ij}^{t-1}\right) + c_2 r_2 \left(G_{j}^{t-1} - x_{ij}^{t-1}\right)\right) \tag{3}$$

$$x_{ij}^{t} = x_{ij}^{t-1} + v_{ij}^{t} \tag{4}$$

where ω is called the inertia weight and is employed to control the impact of the previous history of velocities on the current one. Accordingly, the parameter ω regulates the trade-off between the global and local exploration abilities of the swarm. A large inertia weight facilitates global exploration, while a small one tends to facilitate local exploration. A suitable value for the inertia weight ω usually provides a balance between global and local exploration abilities and consequently reduces the number of iterations required to locate the optimum solution. χ is a constriction factor used to limit the velocity. The PSO algorithm has shown its robustness and efficacy in solving function value optimization problems in real number spaces; only a few studies have been conducted on extending PSO to combinatorial optimization problems under a binary form.
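For reference, a minimal C++ sketch of the update of Eqs. (3) and (4) for one particle follows (the identifiers and the explicit velocity clamping to [−Vmax, Vmax] are ours; setting chi = 1 recovers the plain inertia-weight variant):

```cpp
#include <algorithm>
#include <random>
#include <vector>

// One velocity/position update of a single particle, following Eqs. (3) and (4).
void updateParticle(std::vector<double>& x, std::vector<double>& v,
                    const std::vector<double>& pBest,   // particle's best position P_i
                    const std::vector<double>& gBest,   // swarm's best position G
                    double omega, double chi, double c1, double c2,
                    double vMax, std::mt19937& rng) {
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    for (std::size_t j = 0; j < x.size(); ++j) {
        const double r1 = unif(rng), r2 = unif(rng);
        v[j] = chi * (omega * v[j] + c1 * r1 * (pBest[j] - x[j])
                                   + c2 * r2 * (gBest[j] - x[j]));   // Eq. (3)
        v[j] = std::min(std::max(v[j], -vMax), vMax);                // clamp to [-Vmax, Vmax]
        x[j] += v[j];                                                // Eq. (4)
    }
}
```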


4. Proposed combinatorial PSO (CPSO)

In this paper, we propose an extension of the PSO algorithm to solve combinatorial optimization problems with integer values. Combinatorial PSO essentially differs from the original (or continuous) PSO in some characteristics.

4.1. Definition of a particle

Let us denote by $Y_i^t = (y_{i1}^t, y_{i2}^t, \ldots, y_{in}^t)$ the n-dimensional vector associated to the solution $X_i^t = (x_{i1}^t, x_{i2}^t, \ldots, x_{in}^t)$; its components take values in {−1, 0, 1} according to the state of the solution of the ith particle at iteration t. $Y_i^t$ is a dummy variable used to permit the transition from the combinatorial state to the continuous state and vice versa:

$$y_{ij}^{t} = \begin{cases} 1 & \text{if } x_{ij}^{t} = G_{j}^{t} \\ -1 & \text{if } x_{ij}^{t} = p_{ij}^{t} \\ 1 \text{ or } -1 \text{ randomly} & \text{if } x_{ij}^{t} = G_{j}^{t} = p_{ij}^{t} \\ 0 & \text{otherwise} \end{cases} \tag{5}$$

4.2. Velocity

Let $d_1 = -1 - y_{ij}^{t-1}$ be the distance between $x_{ij}^{t-1}$ and the best solution obtained by the ith particle. Let $d_2 = 1 - y_{ij}^{t-1}$ be the distance between the current solution $x_{ij}^{t-1}$ and the best solution obtained in the swarm. The update equations for the velocity term used in CPSO are then:

$$v_{ij}^{t} = w\, v_{ij}^{t-1} + r_1 c_1 d_1 + r_2 c_2 d_2 \tag{6}$$

$$v_{ij}^{t} = w\, v_{ij}^{t-1} + r_1 c_1 \left(-1 - y_{ij}^{t-1}\right) + r_2 c_2 \left(1 - y_{ij}^{t-1}\right) \tag{7}$$

With this function, the variation of the velocity $v_{ij}^t$ depends on the value of $y_{ij}^{t-1}$:

- If $x_{ij}^{t-1} = G_j^{t-1}$, then $y_{ij}^{t-1} = 1$. Thereafter, d2 turns to "0" and d1 receives "−2", which forces the velocity to move in the negative sense.
- If $x_{ij}^{t-1} = p_{ij}^{t-1}$, then $y_{ij}^{t-1} = -1$. Thereafter, d2 turns to "2" and d1 receives "0", thus forcing the velocity to move in the positive sense.
- In the case $x_{ij}^{t-1} \neq G_j^{t-1}$ and $x_{ij}^{t-1} \neq p_{ij}^{t-1}$, $y_{ij}^{t-1}$ turns to "0", d2 is equal to "1" and d1 is equal to "−1"; the parameters r1, r2, c1 and c2 then determine the sense of the variation of the velocity.
- In the case $x_{ij}^{t-1} = p_{ij}^{t-1}$ and $x_{ij}^{t-1} = G_j^{t-1}$, $y_{ij}^{t-1}$ takes a value in {−1, 1}, thus forcing the velocity to move in the sense opposite to the sign of $y_{ij}^{t-1}$.

4.3. Construction of a particle solution

The update of the solution is computed through $y_{ij}^t$:

$$\lambda_{ij}^{t} = y_{ij}^{t-1} + v_{ij}^{t} \tag{8}$$

The value of $y_{ij}^t$ is adjusted with the following function:

$$y_{ij}^{t} = \begin{cases} 1 & \text{if } \lambda_{ij}^{t} > \alpha \\ -1 & \text{if } \lambda_{ij}^{t} < -\alpha \\ 0 & \text{otherwise} \end{cases} \tag{9}$$

The new solution is then:


$$x_{ij}^{t} = \begin{cases} G_{j}^{t-1} & \text{if } y_{ij}^{t} = 1 \\ p_{ij}^{t-1} & \text{if } y_{ij}^{t} = -1 \\ \text{a random value} & \text{otherwise} \end{cases} \tag{10}$$
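Putting Eqs. (5)–(10) together, the per-dimension update of one CPSO particle with integer-valued components can be sketched in C++ as follows (an illustrative reading of the equations; the range of the random replacement value and all identifiers are our own choices):

```cpp
#include <random>
#include <vector>

// One CPSO update (Eqs. (5)-(10)) for a particle with integer components.
// x: current solution, p: particle's best, g: swarm's best, v: velocity.
void cpsoUpdate(std::vector<int>& x, std::vector<double>& v,
                const std::vector<int>& p, const std::vector<int>& g,
                double w, double c1, double c2, double alpha,
                int minValue, int maxValue, std::mt19937& rng) {
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    std::uniform_int_distribution<int> randomValue(minValue, maxValue);
    std::uniform_int_distribution<int> coin(0, 1);
    for (std::size_t j = 0; j < x.size(); ++j) {
        // Eq. (5): dummy variable y in {-1, 0, 1}.
        int y = 0;
        if (x[j] == g[j] && x[j] == p[j]) y = coin(rng) ? 1 : -1;
        else if (x[j] == g[j])            y = 1;
        else if (x[j] == p[j])            y = -1;
        // Eqs. (6)-(7): velocity update with d1 = -1 - y and d2 = 1 - y.
        const double r1 = unif(rng), r2 = unif(rng);
        v[j] = w * v[j] + r1 * c1 * (-1.0 - y) + r2 * c2 * (1.0 - y);
        // Eq. (8): combine previous state and new velocity.
        const double lambda = y + v[j];
        // Eq. (9): threshold the result with the parameter alpha.
        int yNew = 0;
        if (lambda > alpha)       yNew = 1;
        else if (lambda < -alpha) yNew = -1;
        // Eq. (10): build the new component of the solution.
        if (yNew == 1)       x[j] = g[j];
        else if (yNew == -1) x[j] = p[j];
        else                 x[j] = randomValue(rng);  // diversification
    }
}
```

For the PFSP, Section 5.3 replaces the last branch (and the direct copying of G or p) by a swap mechanism on the permutation, so that the new solution remains a feasible job sequence.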

The choice made earlier, of assigning a random value in {−1, 1} to $y_{ij}^{t-1}$ in the case of equality between $x_{ij}^{t-1}$, $p_{ij}^{t-1}$ and $G_j^{t-1}$, may ensure that the variable $y_{ij}^t$ takes the value 0 and thus permit the value of the variable $x_{ij}^t$ to change.

We define α as a parameter governing intensification and diversification. For a small value of α, $x_{ij}^t$ takes one of the two values $p_{ij}^{t-1}$ or $G_j^{t-1}$ (intensification). In the opposite case, we force the algorithm to assign a null value to $y_{ij}^t$, which induces choosing a value different from $p_{ij}^{t-1}$ and $G_j^{t-1}$ (diversification). The parameters c1 and c2 reflect the relative importance of the solutions $p_{ij}^{t-1}$ and $G_j^{t-1}$ for the generation of the new solution $X_i^t$; they also play a role in the intensification of the search.

5. CPSO for solving the PFSP

5.1. Solution representation

A natural representation is the permutation of n jobs, where the jth number in the permutation denotes the job located at position j. In our implementation, we have used this representation scheme. Each particle i is represented by an n-dimensional vector $X_i^t = (x_{i1}^t, x_{i2}^t, \ldots, x_{in}^t)$, where each dimension corresponds to one position; $x_{ij}^t$ denotes the job assigned to position j in particle i at iteration t. Let $X_i^t = \{5, 6, 2, 4, 1, 3\}$ be the solution associated to the ith particle at the tth iteration. $x_{i3}^t = 2$ means that job "2" is scheduled at position "3" in the ith particle at the tth iteration.

5.2. Initial solution

For the flowtime criterion, we randomly generated the initial population. For the makespan criterion, we used the NEH heuristic proposed by Nawaz et al. (1983). Since this heuristic provides a single initial solution, we developed a new variant of NEH, called PNEH, which is based on a probabilistic process. We illustrate in what follows the NEH approach and the modification which leads to PNEH. The NEH algorithm is based on the idea that jobs with high processing times on all the machines should be scheduled as early as possible. Thus, jobs are sorted in non-increasing order of their total processing time requirements. The final sequence is built in a constructive way, adding a new job at each step and finding the best partial solution. NEH is divided into four simple steps:

1. Compute the total processing time of each job j:

$$p_j = \sum_{k=1}^{m} p_{j,k}, \quad j = 1, \ldots, n$$

2. Sort the n jobs in non-increasing order of their total processing times pj.
3. Take the first two jobs and evaluate the two possible schedules containing them. The sequence with the better objective function value is kept for further consideration.
4. Take every remaining job in the list given in Step 2 and find the best schedule by placing it at all possible positions in the sequence of jobs that are already scheduled.

To turn this procedure into a probabilistic algorithm, we introduce a probability into Step 2, written as follows:

$$Pr_j = \frac{(p_j)^k}{\sum_{j'=1}^{n} (p_{j'})^k}$$

So, in Step 2, we rearrange the n jobs according to their probability Prj.
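A possible C++ sketch of PNEH follows; it reuses the completionTimes/makespan helpers sketched in Section 2, and it interprets the probabilistic Step 2 as drawing the jobs one by one, without replacement, with probability proportional to (p_j)^k (this sampling scheme is our reading of the description above):

```cpp
#include <cmath>
#include <limits>
#include <numeric>
#include <random>
#include <vector>

// Probabilistic NEH (PNEH) sketch. p[j][k]: processing times; kExp: the exponent k.
std::vector<int> pneh(const std::vector<std::vector<double>>& p, double kExp,
                      std::mt19937& rng) {
    const int n = static_cast<int>(p.size());
    // Step 1: total processing time of each job.
    std::vector<double> total(n, 0.0);
    for (int j = 0; j < n; ++j)
        total[j] = std::accumulate(p[j].begin(), p[j].end(), 0.0);
    // Step 2 (PNEH): draw the job order with probability Pr_j proportional to (p_j)^k,
    // restricted at each draw to the jobs not yet placed.
    std::vector<int> order, remaining(n);
    std::iota(remaining.begin(), remaining.end(), 0);
    while (!remaining.empty()) {
        std::vector<double> weights;
        for (int j : remaining) weights.push_back(std::pow(total[j], kExp));
        std::discrete_distribution<int> pick(weights.begin(), weights.end());
        const int idx = pick(rng);
        order.push_back(remaining[idx]);
        remaining.erase(remaining.begin() + idx);
    }
    // Steps 3-4: insert each job, in that order, at the position of the partial
    // sequence that yields the smallest makespan.
    std::vector<int> seq;
    for (int job : order) {
        std::vector<int> best;
        double bestCmax = std::numeric_limits<double>::max();
        for (int pos = 0; pos <= static_cast<int>(seq.size()); ++pos) {
            std::vector<int> trial = seq;
            trial.insert(trial.begin() + pos, job);
            const double cmax = makespan(completionTimes(p, trial));
            if (cmax < bestCmax) { bestCmax = cmax; best = trial; }
        }
        seq = best;
    }
    return seq;
}
```

Larger exponents k concentrate the ordering on jobs with large total processing times, so the draw stays close to the deterministic NEH ordering while still producing diverse initial solutions.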


Fig. 1. New solution generation.
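The original figure is not reproduced here. As a stand-in, the following C++ sketch illustrates the construction that Section 5.3 below describes: for each position j, the job required by y_ij is looked up through the position vector ps and swapped into place. All identifiers are ours, and the "contradictory" case is resolved here by simply keeping the current job rather than by the random choice mentioned in the text:

```cpp
#include <utility>
#include <vector>

// Sketch of the new-solution construction (Section 5.3).
// x[j]    : job currently in position j;  pos[job]: position of that job in x (the ps vector);
// y[j]    : value in {-1, 0, 1} from Eq. (9);
// g[j]    : job placed at position j in the swarm's best solution G;
// pbest[j]: job placed at position j in the particle's best solution.
void buildNewSolution(std::vector<int>& x, std::vector<int>& pos,
                      const std::vector<int>& y,
                      const std::vector<int>& g, const std::vector<int>& pbest) {
    const int n = static_cast<int>(x.size());
    auto swapPositions = [&](int a, int b) {
        std::swap(x[a], x[b]);
        pos[x[a]] = a;
        pos[x[b]] = b;
    };
    for (int j = 0; j < n; ++j) {
        if (y[j] != 0) {
            const int wanted = (y[j] == 1) ? g[j] : pbest[j]; // job to place at position j
            const int r = pos[wanted];                        // looked up in O(1) via ps
            if (r >= j) swapPositions(r, j);
            // else: the wanted job is already fixed at an earlier position
            // (the contradictory case); we keep the current job here.
        } else {
            // y = 0: bring in the first job at a position r >= j that differs from both
            // g[j] and pbest[j]; if none exists (possible near the end of the sequence),
            // the position is left unchanged.
            for (int r = j; r < n; ++r) {
                if (x[r] != g[j] && x[r] != pbest[j]) { swapPositions(r, j); break; }
            }
        }
    }
}
```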

5.3. Creation of a new solution

After the determination of the new vector $y_{ij}^t$, we have to obtain a feasible solution indicating a job sequence. Fig. 1 shows the algorithm structure used to accomplish this task. $X_i^t$ designates the solution associated to the ith particle. According to the value of $y_{ij}^t$, we try to find the position r, in the solution $X_i^t$, of the job corresponding to $G_j^{t-1}$ or $p_{ij}^{t-1}$, respectively, when $y_{ij}^t = 1$ or $y_{ij}^t = -1$. If r ≥ j, we perform a permutation between the two jobs scheduled in positions r and j; otherwise, we face a contradictory case that would require the presence of the same job in two different positions, and so we choose randomly one of them. If $y_{ij}^t = 0$, we try to find the first job in a position r larger than or equal to j which is different from both $G_j^{t-1}$ and $p_{ij}^{t-1}$ (it is clear that this operation does not require examining more than 3 jobs). When j = n − 1, we can face the case where no job satisfies the previous condition and, in that case, r receives ∅. To preserve the linearity of the algorithm, we define the position vector $ps_i^t$ that gives the position of each job in the solution $X_i^t$. Given a solution $X_i^t = \{3, 1, 4, 2, 6, 5\}$, $ps_i^t = \{2, 4, 1, 3, 6, 5\}$. So, if $y_{ij}^t = 1$, it suffices to look up the position of job $G_j^{t-1}$ in the sequence through the position vector, i.e. $ps_i^t(G_j^{t-1})$. As an example, if $y_{i2}^t = 1$ and $G_2 = 3$, the position of job 3 in the sequence is equal to $ps_i^t(3) = 1$.

6. Hybrid algorithm for solving the PFSP (H-CPSO)

In this section, we present a hybrid algorithm for solving the PFSP. The basic idea is to add to the CPSO algorithm an improvement phase, performed by a simulated annealing (SA) algorithm, in order to obtain good quality solutions. Tasgetiren et al. (2007) proposed to apply a local search procedure based on the VNS method to the global best solution $G^t$ at each iteration. The drawback is that the same starting solution is preserved during several iterations, which can limit the exploration capacity of the algorithm. Our proposal is to apply the SA to the set of solutions that can be considered as good. This is achieved through an acceptance threshold which is a function of the fitness of the global best solution: given a fitness function f, we accept only solutions whose function value is less than (1 + δ) · f(G^t).

6.1. Simulated annealing algorithm

The SA algorithm is a good tool for solving hard optimization problems. It was first proposed by Kirkpatrick, Gelatt, and Vecchi (1983), based on the physical annealing process of solids. SA can be viewed as a process which, given a neighbourhood structure, attempts to move from the current solution xcurrent to one of its neighbours x′. The new solution is accepted as the current one if its objective function value f(x′) is less than that of the current solution f(xcurrent) (for minimization problems). Otherwise, the neighbour x′ is accepted with a probability P = e^(−Δ/T), where Δ = f(x′) − f(xcurrent) and T is the current temperature. Generally, SA starts with a high temperature, which is further decreased at each iteration.


Fig. 2. Simulated annealing procedure.
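The figure itself is not reproduced here; in its place, the following C++ sketch outlines such a simulated annealing procedure with a constant temperature and the two neighbourhood moves (swap and insertion) described in the next paragraph. The stopping rule and the evaluation function are passed in as parameters, and which particles actually undergo this phase is decided beforehand by the threshold (1 + δ)·f(G^t) of Section 6; all identifiers are ours.

```cpp
#include <cmath>
#include <random>
#include <utility>
#include <vector>

// Simulated annealing improvement phase (sketch): constant temperature T,
// swap/insertion neighbourhoods, stop after maxNoImprove iterations without improvement.
// Eval is any objective (makespan or total flowtime) mapping a permutation to a double.
template <class Eval>
std::vector<int> improveSA(std::vector<int> current, Eval evaluate,
                           double T, long maxNoImprove, std::mt19937& rng) {
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    std::uniform_int_distribution<int> pickPos(0, static_cast<int>(current.size()) - 1);
    std::vector<int> best = current;
    double fCurrent = evaluate(current), fBest = fCurrent;
    long sinceImprove = 0;
    while (sinceImprove < maxNoImprove) {
        // Build a neighbour with one of the two moves, chosen at random.
        std::vector<int> neighbour = current;
        const int i = pickPos(rng), j = pickPos(rng);
        if (unif(rng) < 0.5) {
            std::swap(neighbour[i], neighbour[j]);       // permute(x; i, j)
        } else {
            const int job = neighbour[i];
            neighbour.erase(neighbour.begin() + i);      // insert(x; i, j)
            neighbour.insert(neighbour.begin() + j, job);
        }
        const double fNeighbour = evaluate(neighbour);
        const double delta = fNeighbour - fCurrent;
        // Accept improving moves always, worsening moves with probability exp(-delta / T).
        if (delta < 0.0 || unif(rng) < std::exp(-delta / T)) {
            current = std::move(neighbour);
            fCurrent = fNeighbour;
        }
        if (fCurrent < fBest) { fBest = fCurrent; best = current; sinceImprove = 0; }
        else ++sinceImprove;
    }
    return best;
}
```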

To reduce the computation time, we choose a small constant temperature in order to focus the search process on the intensification phase. Two neighbourhood structures are used in the proposed SA algorithm. On the one hand, the permutation move, denoted by permute(xcurrent; i; j), swaps the two jobs scheduled in positions i and j of the current solution. On the other hand, the insertion move puts the job scheduled in position i into position j; it is denoted by insert(xcurrent; i; j). Fig. 2 describes the implemented simulated annealing algorithm.

7. Implementation and experimental results

The algorithms were coded in the C++ programming language. All experiments with CPSO were run under Windows XP on a desktop PC with an Intel Pentium IV 3.2 GHz processor. Results are obtained after five replications for each instance. In what follows, we illustrate the different results obtained by our algorithms when minimizing the makespan and flowtime criteria, and compare them with some competing algorithms.

7.1. Tests evaluation with respect to the makespan criterion

The experiments were conducted on the benchmark problems proposed by Taillard (1993), with m = 5, 10, 20 and n = 20, 50, 100, 200. There were 10 instances for each problem size and 110 problem instances in all. The problem instances and their best upper bounds can be downloaded from http://ina.eivd.ch/collaborateurs/etd/. The solution quality was measured with the average relative percentage deviation in makespan, Davg, with respect to the best known solutions provided by Taillard:

$$D_{avg} = \frac{1}{R}\sum_{i=1}^{R} \frac{Heu_i - Best_{sol}}{Best_{sol}} \times 100,$$

where Heu_i is the solution given by any of the R replications of the considered algorithm and Best_sol is the best known solution.

7.1.1. Performance of the CPSO algorithm

We used the same implementation conditions as Tasgetiren et al. (2007), i.e. the same population size (n × 2); we also used a random procedure to generate the initial solutions in our first CPSO algorithm, and we considered 500 generations as the stopping criterion.
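As a small illustration, the deviation measure reported in the following tables can be computed as (helper name and types are ours):

```cpp
#include <vector>

// Average relative percentage deviation over R replications (Davg).
double averageDeviation(const std::vector<double>& heu, double bestKnown) {
    double sum = 0.0;
    for (double h : heu) sum += (h - bestKnown) / bestKnown * 100.0;
    return sum / static_cast<double>(heu.size());
}
```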


Table 1
Performance comparison of PSOspv and CPSO on Taillard's benchmarks with respect to the makespan criterion

Problems    PSOspv Davg    CPSO Davg    CPSO tavg
20 × 5         1.75           1.05          0.05
20 × 10        3.25           2.42          0.12
20 × 20        2.82           1.99          0.19
50 × 5         1.14           0.90          0.26
50 × 10        5.29           4.85          0.74
50 × 20        7.21           6.40          1.15
100 × 5        0.63           0.74          0.68
100 × 10       3.27           2.94          2.13
100 × 20       8.25           7.11          4.39
200 × 10       2.47           2.17          7.50
200 × 20       8.05           6.89         16.42
Average        4.01           3.40          3.06

tavg: time in seconds per run.

We fixed the parameters c1 = 0.6, c2 = 0.4 and w = 0.95. We also fixed the parameter α at the value q/n to limit the diversification, especially for the large instances. Based on the best results obtained, the parameter q was fixed at 7. Table 1 summarizes the results obtained by Tasgetiren et al. (2007) with their PSO algorithm based on the smallest position value rule (PSOspv) and by our CPSO algorithm. The performance of CPSO is better than that of PSOspv on all instance classes except 100 × 5. The CPSO algorithm obtained an overall average deviation of 3.40, better than the 4.01 obtained by the PSOspv algorithm.

7.1.2. Performance of the CPSO-PNEH algorithm

We now evaluate the proposed CPSO initialized by PNEH, denoted CPSO-PNEH. An experiment was conducted to compare it with the genetic algorithm (GA) of Reeves (1995). Both algorithms were coded in the same programming language (C++) and tested on the same computer (Intel Pentium IV, 3.2 GHz). The parameters used in CPSO-PNEH are as follows: c1 = 0.2, c2 = 0.8, w = 0.7, α = 7/n, and the population size is fixed at 200. The parameter k used in the PNEH algorithm is fixed at 3. In order to have a fair comparison, we used the same number of evaluations, fixed at 5000 × n, and each problem was tested over 10 trials for both algorithms. A summary of the results is displayed in Table 2. We see from this table that the CPSO-PNEH algorithm dominates the GA for all problems. Regarding the CPU time requirement, the average time to reach the best solution with CPSO-PNEH was shorter than that of the GA for the instance classes from 20 × 5 to 100 × 5, except for 50 × 20. However, CPSO-PNEH was computationally more expensive than the GA for the large instance classes (200 × 20, 200 × 10, 100 × 20, 100 × 10).

7.1.3. Performance of the H-CPSO algorithm

The performance of H-CPSO was compared with that of PSOvns from Tasgetiren et al. (2007). The parameters used in our implementation were c1 = 0.8, c2 = 0.2, w = 0.75 and α = 0.3. Computational time is another evaluation criterion; its maximal value was fixed at 250 seconds for the 20 × 5, 20 × 10, 20 × 20, 50 × 5, 50 × 10 and 50 × 20 instances and at 500 seconds for the remaining instance sizes. We set the value of the parameter δ related to the improvement phase at 0.02 and the parameter k used in the PNEH algorithm at 3. The stopping criterion of the SA algorithm is defined as a maximal number of iterations without improvement, fixed at 10 × n². We fixed the temperature so that every solution having the minimal deviation has a probability of 0.5 of being accepted; assuming that the minimal deviation is equal to 1, this gives T = −1/log(0.5).


Table 2
Performance comparison of the GA of Reeves and CPSO-PNEH on Taillard's benchmarks with respect to the makespan criterion

Problems    GA Davg    GA tavg    CPSO-PNEH Davg    CPSO-PNEH tavg
20 × 5        0.65       0.18          0.35              0.04
20 × 10       1.87       0.33          1.20              0.14
20 × 20       1.34       0.41          0.96              0.22
50 × 5        0.28       0.70          0.09              0.06
50 × 10       2.76       1.84          2.55              1.75
50 × 20       4.04       2.50          3.84              4.73
100 × 5       0.19       2.72          0.04              0.67
100 × 10      1.19       5.41          1.02              9.16
100 × 20      3.34      11.08          3.07             29.03
200 × 10      0.61      36.72          0.59             48.47
200 × 20      3.39      58.84          3.15            183.13
Average       1.79      10.98          0.59             48.47

tavg: time in seconds per run.

According to Table 3, the performances of H-CPSO and PSOvns are almost identical in terms of the average values. We can see that the results obtained by our algorithm are better than those yielded by PSOvns on the instance classes up to 50 × 20 and on 100 × 20, but they are less effective on the remaining instance classes.

7.2. Tests evaluation with respect to the total flowtime criterion

In this section, we evaluate the effectiveness of the H-CPSO algorithm for solving the PFSP with minimization of the flowtime criterion. The performance of H-CPSO has been compared with that of other metaheuristics: PSOvns from Tasgetiren et al. (2007), the composite heuristic (BES(LR)) from Liu and Reeves (2001) and the two ant colony algorithms (PACO and M-MMAS) from Rajendran and Ziegler (2005). The experiments were conducted on the benchmark problems given by Taillard (1993), with m = 5, 10, 20 and n = 20, 50, 100. There were 10 instances for each problem size and 90 problem instances in all. The parameters of H-CPSO are fixed in the same way as in the makespan context, except δ, which was fixed at 0.05, and the maximal computational time, which was fixed at n × m × 0.4 seconds. In Tables 4-6, we give a comparison between our H-CPSO algorithm and the best results obtained by BES(LR), PACO, M-MMAS and PSOvns with respect to the total flowtime criterion after 5 replications.

Table 3
Performance comparison of PSOvns and H-CPSO on Taillard's benchmarks with respect to the makespan criterion

Problems    PSOvns Davg    H-CPSO Davg    H-CPSO tavg
20 × 5         0.03           0.00            0.85
20 × 10        0.02           0.01            7.72
20 × 20        0.05           0.02           18.77
50 × 5         0.00           0.00           31.86
50 × 10        0.57           0.49           77.98
50 × 20        1.36           0.96          145.60
100 × 5        0.00           0.02           22.53
100 × 10       0.18           0.26          229.66
100 × 20       1.45           1.28          372.00
200 × 10       0.18           0.40          315.06
200 × 20       1.35           1.55          480.28
Average        0.47           0.45          162.49

tavg: time in seconds per run.


Values indicated in boldface correspond to the best reached values, and the symbol * indicates that H-CPSO provided much better results than the other four approaches. We can observe that H-CPSO performs better than BES(LR) and M-MMAS in all experiments. In addition, we find that the results yielded by our algorithm outperform those obtained by the PACO algorithm, except in one problem instance. On the other hand, in comparison with PSOvns, H-CPSO obtained somewhat worse results on the 100 × 5 and 100 × 10 classes. On 20 × 5 and 20 × 10, H-CPSO obtained almost the same results as PSOvns. However, on 20 × 20, 50 × 5, 50 × 10, 50 × 20 and 100 × 20, it can be said that the performance of H-CPSO is better than that of PSOvns. Overall, 48 out of the 90 best known solutions generated by the BES(LR), M-MMAS, PACO and PSOvns algorithms were improved by the H-CPSO algorithm.

Table 4
Numerical results for instances with n = 20 with respect to the total flowtime criterion

(Each row gives the values for the 10 instances of the class.)

20 × 5
  BES(LR)          14226 15446 13676 15750 13633 13265 13774 13968 14456 13036
  M-MMAS           14056 15151 13416 15486 13529 13139 13559 13968 14317 12968
  PACO             14056 15214 13403 15505 13529 13123 13674 14042 14383 13021
  PSOvns           14033 15151 13301 15447 13529 13123 13548 13948 14295 12943
  H-CPSO Best      14033 15151 13301 15447 13529 13123 13548 13948 14295 12943
  H-CPSO Average   14033 15151 13301 15447 13529 13123 13553.4 13948 14295 12943
  H-CPSO Worst     14033 15151 13301 15447 13529 13123 13557 13948 14295 12943
  H-CPSO tavg      4.95 2.51 1.63 2.04 0.30 0.25 2.98 0.79 2.48 3.34

20 × 10
  BES(LR)          21207 22927 20072 18857 18939 19608 18723 20504 20561 21506
  M-MMAS           20980 22440 19833 18724 18644 19245 18376 20241 20330 21320
  PACO             20958 22591 19968 18769 18749 19245 18377 20377 20330 21323
  PSOvns           20911 22440 19833 18710 18641 19249 18363 20241 20330 21320
  H-CPSO Best      20911 22440 19833 18710 18641 19245 18363 20241 20330 21320
  H-CPSO Average   20911 22440 19833 18710 18641 19245.8 18363 20241 20330 21320
  H-CPSO Worst     20911 22440 19833 18710 18641 19249 18363 20241 20330 21320
  H-CPSO tavg      2.30 1.21 1.43 10.15 8.70 44.04 2.71 6.36 2.59 12.60

20 × 20
  BES(LR)          34119 31918 34552 32159 34990 32734 33449 32611 34084 32537
  M-MMAS           33623 31604 33920 31698 34593 32637 33038 32444 33625 32317
  PACO             33623 31597 34130 31753 34642 32594 32922 32533 33623 32317
  PSOvns           34975 32659 34594 32716 35455 33530 33733 33008 34446 33281
  H-CPSO Best      33623 31587* 33920 31661* 34557* 32564* 32922 32412* 33600* 32262*
  H-CPSO Average   33623 31587 33920 31661 34557 32564 32922 32412 33600 32262
  H-CPSO Worst     33623 31587 33920 31661 34557 32564 32922 32412 33600 32262
  H-CPSO tavg      2.62 10.79 1.94 19.61 10.10 2.74 5.54 19.14 8.40 17.92

tavg: the average time in seconds per run. Best: the best solution over 5 runs. Worst: the worst solution over 5 runs. Average: the average solution over 5 runs.


Table 5
Numerical results for instances with n = 50 with respect to the total flowtime criterion
(Each row gives the values for the 10 instances of the class.)

50 × 5
  BES(LR)          65663 68664 64378 69795 70841 68084 67186 65582 63968 70273
  M-MMAS           65768 68828 64166 69113 70331 67563 67014 64863 63735 70256
  PACO             65546 68485 64149 69359 70154 67664 66600 65123 63483 69831
  PSOvns           65058 68298 63577 68571 69698 67138 66338 64638 63227 69195
  H-CPSO Best      64838* 68223* 63436* 68590 69584* 67062* 66375 64531* 63157* 69121*
  H-CPSO Average   64911.4 68323.6 63577 68710.2 69687.6 67154 66456.6 64688.2 63246.2 69268.8
  H-CPSO Worst     65000 68419 63676 68821 69759 67284 66621 64810 63331 69378
  H-CPSO tavg      61.85 49.02 47.60 52.90 49.74 51.34 69.92 64.88 74.43 26.33

50 × 10
  BES(LR)          88770 85600 82456 89356 88482 89602 91422 89549 88230 90787
  M-MMAS           89599 83612 81655 87924 88826 88394 90686 88595 86975 89470
  PACO             88942 84549 81338 88014 87801 88269 89984 88281 86995 89238
  PSOvns           88031 83624 80609 87053 87263 87255 89259 87192 86102 88631
  H-CPSO Best      87672* 83199* 80311* 87037* 86819* 86735* 89014* 87336 85964* 88149*
  H-CPSO Average   88098.6 83802.8 80542.6 87141.8 87185 87178.8 89407.2 87511.2 86147.2 88819.4
  H-CPSO Worst     88713 84307 80974 87371 87765 87465 89764 87795 86530 88897
  H-CPSO tavg      130.88 160.73 145.62 104.71 162.86 150.89 104.84 94.09 127.09 139.49

50 × 20
  BES(LR)          129095 122094 121379 124083 122158 124061 126363 126317 125318 127823
  M-MMAS           127348 121208 118051 123061 119920 122369 125609 124543 124059 126582
  PACO             126962 121098 117524 122807 119221 122262 125351 124374 123646 125767
  PSOvns           128622 122173 118719 123028 121202 123217 125586 125714 124932 126311
  H-CPSO Best      126126* 119936* 117210* 121540* 118783* 120914* 123756* 122900* 122281* 124529*
  H-CPSO Average   126715 120151.4 117351.8 121727.4 119219.4 121298 124025.4 123466.8 122777.4 124820.4
  H-CPSO Worst     127117 120514 117561 122248 119882 121610 124324 124079 123361 125093
  H-CPSO tavg      332.29 310.57 225.63 303.23 287.21 248.38 287.63 208.38 325.02 251.43

tavg: time in seconds per run. Best: the best solution over 5 runs. Worst: the worst solution over 5 runs. Average: the average solution over 5 runs.

8. Conclusion and future research

The permutation flowshop scheduling problem with respect to the makespan and flowtime criteria, which is known to be NP-complete, was considered. In this paper, a new particle swarm algorithm for solving combinatorial problems was introduced. The proposed algorithm was compared with the continuous PSO algorithm of Tasgetiren et al. (2007). Computational experiments conducted with the makespan criterion showed the superiority of CPSO over continuous PSO. Next, we incorporated an improvement phase based on a simulated annealing procedure, yielding H-CPSO. In the case of the makespan criterion, the PNEH heuristic was used to generate an efficient initial solution, and the performance of this algorithm was compared with that of PSOvns, proposed by Tasgetiren et al. (2007). The experimental results showed that the two algorithms are almost identical in terms of the average values. For the total flowtime criterion, we conducted a comparison of H-CPSO against four algorithms: PSOvns by Tasgetiren et al. (2007), BES(LR) by Liu and Reeves (2001), and the two ant colony algorithms (PACO, M-MMAS) by Rajendran and Ziegler (2005). The experimental results showed that our hybrid algorithm is clearly superior to the heuristics BES(LR), M-MMAS and PACO. Compared with PSOvns, the H-CPSO algorithm obtains better or equal results, except for the instance classes 100 × 5 and 100 × 10. For future research, we suggest using the CPSO technique to solve other combinatorial optimization problems.


Table 6
Numerical results for instances with n = 100 with respect to the total flowtime criterion
(Each row gives the values for the 10 instances of the class.)

100 × 5
  BES(LR)          256789 245609 241013 231365 244016 235793 243741 235171 251291 247491
  M-MMAS           257025 246612 240537 230480 243013 236225 243935 234813 252384 246261
  PACO             257886 246326 241271 230376 243457 236409 243854 234579 253325 246750
  PSOvns           254762 245315 239777 228872 242245 234082 242122 232755 249959 244275
  H-CPSO Best      255520 244511* 239843 229481 242229* 234394 242779 232889 250294 244903
  H-CPSO Average   256079.6 245234.4 240383.8 229890.2 242671.2 234829.2 243091.2 233161.6 250515 245347.6
  H-CPSO Worst     256528 245618 240785 230087 242960 235027 243460 233639 250689 245735
  H-CPSO tavg      116.42 167.64 103.76 106.58 129.58 119.86 78.92 105.82 164.77 122.94

100 × 10
  BES(LR)          306375 280928 296927 309607 291731 276751 288199 296130 312175 298901
  M-MMAS           305004 279094 297177 306994 290493 276449 286545 297454 309664 296869
  PACO             305376 278921 294239 306739 289676 275932 284846 297400 307043 297182
  PSOvns           303142 277109 292465 304676 288242 272790 282440 293572 305605 295173
  H-CPSO Best      302971* 277408 291669* 305663 287761* 274152 282602 295430 305819 296425
  H-CPSO Average   303633.8 278759.4 292363.8 306619 288702.6 274636.8 283685.8 296061 306728.8 297125.8
  H-CPSO Worst     304086 279449 293542 307609 289189 275247 284482 296565 307885 298469
  H-CPSO tavg      268.82 321.10 258.33 286.35 342.51 239.77 215.18 266.31 289.16 259.36

100 × 20
  BES(LR)          383865 383976 383779 384854 383802 387962 384839 397264 387831 394861
  M-MMAS           373756 383614 380112 380201 377268 381510 381963 393617 385478 387948
  PACO             372630 381124 379135 380765 379064 380464 382015 393075 380359 388060
  PSOvns           374351 379792 378174 380899 376187 379248 380912 392315 382212 386013
  H-CPSO Best      372480* 376476* 375733* 379273* 374416* 378380* 379395* 390587* 380762 384734*
  H-CPSO Average   373019.4 379299.4 376605.8 380117 376126.6 378847.8 380178.6 391381.4 381256.2 385311.6
  H-CPSO Worst     373535 381694 378063 381257 377853 379324 381074 392367 382147 386390
  H-CPSO tavg      462.18 670.85 444.47 520.77 269.52 391.54 601.52 448.90 462.12 416.13

tavg: time in seconds per run. Best: the best solution over 5 runs. Worst: the worst solution over 5 runs. Average: the average solution over 5 runs.

References

Bansal, S. P. (1977). Minimizing the sum of completion times of n jobs over m machines in a flowshop: a branch and bound approach. AIIE Transactions, 9, 306–311.
Ben-Daya, M., & Al-Fawzan, M. (1998). A tabu search approach for the flow shop scheduling problem. European Journal of Operational Research, 109, 88–95.
Brown, A. P. G., & Lomnicki, Z. A. (1966). Some applications of the branch and bound algorithm to the machine scheduling problem. Operational Research Quarterly, 17, 173–186.
Campbell, H. G., Dudek, R. A., & Smith, M. L. (1970). A heuristic algorithm for the n job, m machine sequencing problem. Management Science, 16(10), B630–B637.
Chung, C. S., Flynn, J., & Kirca, O. (2002). A branch and bound algorithm to minimize the total flow time for m-machine permutation flowshop problems. International Journal of Production Economics, 79, 185–196.
Dannenbring, D. G. (1977). An evaluation of flow shop sequencing heuristics. Management Science, 23(11), 1174–1182.
Garey, M. R., Johnson, D. S., & Sethi, R. (1976). The complexity of flowshop and jobshop scheduling. Mathematics of Operations Research, 1(2), 117–129.
Gupta, J. N. D. (1972). Optimal scheduling in a multi-stage flowshop. AIIE Transactions, 4, 238–243.
Ho, J. C. (1995). Flowshop sequencing with mean flow time objective. European Journal of Operational Research, 81, 571–578.


Ho, J. C., & Chang, Y.-L. (1991). A new heuristic for the n-job, m-machine flow-shop problem. European Journal of Operational Research, 52, 194–202.
Ignall, E., & Schrage, L. (1965). Application of the branch and bound technique to some flow-shop scheduling problems. Operations Research, 13, 400–412.
Ishibuchi, H., Misaki, S., & Tanaka, H. (1995). Modified simulated annealing algorithms for the flow shop sequencing problem. European Journal of Operational Research, 81, 388–398.
Johnson, S. M. (1954). Optimal two- and three-stage production schedules with setup times included. Naval Research Logistics Quarterly, 1, 61–68.
Kennedy, J., & Eberhart, R. C. (1995). Particle swarm optimization. In Proceedings of IEEE international conference on neural networks (pp. 1942–1948). Piscataway, NJ.
Kirkpatrick, S., Gelatt, C. D., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598), 671–680.
Liao, Ch. J., Tseng, Ch. T., & Luarn, P. (2007). A discrete version of particle swarm optimization for flowshop scheduling problems. Computers & Operations Research, 34(10), 3099–3111.
Liu, J., & Reeves, C. R. (2001). Constructive and composite heuristic solutions to the P//ΣCi scheduling problem. European Journal of Operational Research, 132, 439–452.
Lomnicki, Z. A. (1965). A branch and bound algorithm for the exact solution of the three machine scheduling problem. Operational Research Quarterly, 16, 89–100.
McMahon, G. B., & Burton, P. G. (1967). Flowshop scheduling with the branch and bound method. Operations Research, 15, 473–481.
Murata, T., Ishibuchi, H., & Tanaka, H. (1996). Genetic algorithms for flow shop scheduling problems. Computers and Industrial Engineering, 30, 1061–1071.
Nawaz, M., Enscore, E. E., Jr., & Ham, I. (1983). A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. OMEGA, The International Journal of Management Science, 11(1), 91–95.
Nowicki, E., & Smutnicki, C. (1996). A fast tabu search algorithm for the permutation flowshop problem. European Journal of Operational Research, 91, 160–175.
Ogbu, F., & Smith, D. (1990). The application of the simulated annealing algorithm to the solution of the n/m/Cmax flowshop problem. Computers and Operations Research, 17(3), 243–253.
Osman, I. H., & Potts, C. N. (1989). Simulated annealing for permutation flow-shop scheduling. OMEGA, The International Journal of Management Science, 17(6), 551–557.
Palmer, D. S. (1965). Sequencing jobs through a multi-stage process in the minimum total time: a quick method of obtaining a near optimum. Operational Research Quarterly, 16(1), 101–107.
Rajendran, C. (1993). Heuristic algorithm for scheduling in a flowshop to minimize total flowtime. International Journal of Production Economics, 29, 65–73.
Rajendran, C., & Ziegler, H. (2004). Ant-colony algorithms for permutation flowshop scheduling to minimize makespan/total flowtime of jobs. European Journal of Operational Research, 155, 426–438.
Rajendran, C., & Ziegler, H. (2005). Two ant-colony algorithms for minimizing total flowtime in permutation flowshops. Computers & Industrial Engineering, 48(4), 789–797.
Reeves, C. R. (1995). A genetic algorithm for flowshop sequencing. Computers and Operations Research, 22(1), 5–13.
Ruiz, R., Maroto, C., & Alcaraz, J. (2006). Two new robust genetic algorithms for the flowshop scheduling problem. OMEGA, 34, 461–476.
Salman, A., Ahmed, I., & Almadani, S. (2002). Particle swarm optimization for task assignment problem. Microprocessors and Microsystems, 26(8), 363–371.
Suliman, S. M. A. (2000). A two-phase heuristic approach to the permutation flow-shop scheduling problem. International Journal of Production Economics, 64, 143–152.
Taillard, E. (1990). Some efficient heuristic methods for the flowshop sequencing problems. European Journal of Operational Research, 47, 65–74.
Taillard, E. (1993). Benchmarks for basic scheduling problems. European Journal of Operational Research, 64, 278–285.
Tasgetiren, M. F., Liang, Y. C., Sevkli, M., & Gencyilmaz, G. (2004). Particle swarm optimization algorithm for makespan and maximum lateness minimization in permutation flowshop sequencing problem. In Proceedings of the fourth international symposium on intelligent manufacturing systems (pp. 431–441). Sakarya, Turkey.
Tasgetiren, M. F., Liang, Y. C., Sevkli, M., & Gencyilmaz, G. (2007). A particle swarm optimization algorithm for makespan and total flowtime minimization in the permutation flowshop sequencing problem. European Journal of Operational Research, 177(3), 1930–1947.
Van de Velde, S. L. (1990). Minimizing the sum of the job completion times in the two-machine flow shop by Lagrangian relaxation. Annals of Operations Research, 26, 257–268.
Wang, C., Chu, C., & Proth, J.-M. (1997). Heuristic approaches for n/m/F/ΣCi scheduling problems. European Journal of Operational Research, 96, 636–644.
Widmer, M., & Hertz, A. (1989). A new heuristic method for the flow shop sequencing problem. European Journal of Operational Research, 41, 186–193.
Woo, H. S., & Yim, D. S. (1998). A heuristic algorithm for mean flowtime objective in flowshop scheduling. Computers and Operations Research, 25, 175–182.