Two stage reentrant hybrid flow shop with setup times and the criterion of minimizing makespan




Applied Soft Computing 11 (2011) 4530–4539


M. Hekmatfar, S.M.T. Fatemi Ghomi ∗, B. Karimi

Department of Industrial Engineering, Amirkabir University of Technology, 424 Hafez Avenue, 1591634311 Tehran, Iran

Article history: Received 24 December 2008; received in revised form 18 May 2011; accepted 14 August 2011; available online 22 August 2011.

Keywords: Reentrant hybrid flow shop; Hybrid genetic algorithm; Setup times; Heuristics

Abstract: This paper discusses a two-stage reentrant hybrid flow shop scheduling problem. The problem studied has two main stages: stage one contains g stations, with m1k identical parallel machines in station k, and stage two is a single station with m2 identical parallel machines. The first stage is a reentrant shop in which all jobs have the same routing over the machines of the shop, and the same sequence is traversed several times to complete the different levels of each job. Such scheduling problems occur in practical applications such as semiconductor, electronics, airplane engine, and petrochemical production. The criterion is to minimize the makespan of the system. Setup times are applied between jobs in both stages and between levels in stage one. To our knowledge, no previous study addresses this practical setting of two separate stages with different setup times in each. Because of the intractability of the reentrant hybrid flow shop model, some heuristic algorithms and a random key genetic algorithm (RKGA) are proposed and compared with a new hybrid genetic algorithm (HGA). Computational experiments illustrate that the proposed HGA provides the best solutions among the compared algorithms. © 2011 Elsevier B.V. All rights reserved.

1. Introduction

In many manufacturing and assembly facilities, a number of operations have to be performed on every job. Often these operations must be done on all jobs in the same order, so the jobs follow the same route. The machines on which the operations are done are assumed to be set up in series; this environment is referred to as a flow shop. The assumption of classical flow-shop scheduling problems that each job visits each machine only once (Baker [1]) is sometimes violated in practice. A new type of manufacturing shop, the reentrant shop, has recently attracted attention. The basic characteristic of a reentrant shop is that a job visits certain machines more than once. In a reentrant flow shop (RFS), n jobs are processed on m machines, and every job must be processed on the machines in the order M1, M2, ..., Mm for l times (l is the number of repetitions of each job over the machine sequence). In a reentrant hybrid flow shop (RHFS), all jobs have the same routing over the machines of the shop and the same sequence is traversed several times to complete the jobs (reentrant), and there is more than one machine in at least one station (hybrid).

∗ Corresponding author. Tel.: +98 21 66545381; fax: +98 21 66954569. E-mail address: [email protected] (S.M.T. Fatemi Ghomi). doi:10.1016/j.asoc.2011.08.013

Some shops are flexible and differ from hybrid shops. The difference between hybrid flow shops and flexible flow shops is that in a flexible shop at least one job is not processed at some station; jobs may therefore skip stations, whereas in a hybrid shop all jobs must be processed at all stations. In this paper, the considered environment consists of two main stages: the first stage is a reentrant stage with several stations, and the second stage is a simple stage. This problem occurs in real-world settings such as semiconductor manufacturing, where wafers have to be processed on some machines more than once and then tested in a separate stage. A real example of such a system arises in two stages of fabricating a chip: wafer fabrication and wafer probe. In the wafer fabrication stage, wafers of raw silicon are processed to build up the layers and patterns of metal and wafer material that produce the required circuitry. In the wafer probe stage, the individual circuits are electrically tested by means of thin probes, and defective circuits are discarded. The criterion of our problem is to minimize the makespan. Finding an optimal schedule to minimize the makespan in an RHFS is always a difficult task. In fact, flow-shop scheduling, the sequencing problem in which n jobs have to be processed on m machines, is known to be NP-hard (Kubiak et al. [16]; Pinedo [27]). Because of this intractability, this paper presents a hybrid genetic algorithm (HGA) to solve the RHFS scheduling problem. The original GA has been widely used to solve classical flow-shop


problems and has performed well, so we have extended the original GA into a new variant called HGA. The HGA is based on the random key genetic algorithm (RKGA), first proposed by Bean [2]. In an RKGA, random numbers work as sort keys to decode the solution. The parameters of our HGA, namely the population size, number of generations, copy probability, crossover probability and mutation probability, are explained in Section 4.2. The population size and number of generations are based on those in Kurz and Askin [17]. The initial populations are generated by four heuristic methods, MSPT, MNEH, MLPT and MJSN, which are explained in Section 4.1.

This article is organized as follows. Section 2 reviews related work, and Section 3 presents the model developed for the problem. In Section 4, some relevant heuristic algorithms are modified and a new hybrid genetic algorithm (HGA) is proposed to solve our model. In Section 5, a design of experiments compares the modified heuristics and the proposed HGA, and the results are analyzed. Finally, Section 6 is devoted to conclusions and future work.

2. Literature review

One of the most prominent areas for reentrant scheduling problems is semiconductor manufacturing, which has been investigated by many authors (Kim et al. [14,15]; Sourirajan and Uzsoy [29]). Processes in this area are additionally characterized by a wide and changing range of product types (Monch and Driebel [20]). A mix of different process types (batch processes and single-wafer processes), sequence-dependent setup times, very expensive equipment and reentrant flows are typical of this type of manufacturing. In semiconductor manufacturing, each wafer re-visits the same machines for multiple processing steps (Vargas-Villamil and Rivera [33]); the wafer traverses the flow lines several times to produce the different layers of each circuit (Bispo and Tayur [3]).

The flow-shop scheduling problem is one of the best-known problems in the area of scheduling. It is a production planning problem in which n jobs have to be processed in the same sequence on m machines. Most of these problems concern the criterion of minimizing makespan: the time between the start of the first job on the first machine and the completion of the last job on the last machine. Minimizing the makespan is equivalent to maximizing the utilization of the machines. The purpose of this paper is to minimize the maximum completion time of jobs over all stations.

In the area of flow shop problems, Johnson [13] first proposed a simple algorithm for the two-machine flow-shop problem with makespan as the criterion. Since then, several researchers have focused on solving m-machine (m > 2) flow-shop problems with the same criterion. However, these problems are NP-hard (Garey et al. [10]; Rinnooy Kan [30]), so complete enumeration techniques must be used to solve them exactly. As the problem size increases, this approach is not computationally practical.
For this reason, researchers have continually focused on developing heuristics for this hard problem. Low et al. [18] proposed four job sequencing methods, combined with four dispatching rules, to minimize makespan in a two-stage HFS problem with unrelated alternative machines. Their problem had m unrelated alternative machines at the first machine center and a common processing machine at the second machine center.

In an RHFS problem, processes cannot be treated as a simple flow shop. The repeated use of the same machines by the same job means that there may be conflicts among jobs at some machines at different levels in the process. Later operations


to be done on a particular job by some machine may interfere with earlier operations to be done at the same machine on a job that started later. This reentrant, or returning, characteristic makes the process look more like a job shop on first examination: jobs arrive at a machine from several different sources or predecessor facilities and may go to several successor machines. Unlike the original flow shop problem, Pan and Chen [25] showed that the reentrant flow shop, even in the two-machine case, is NP-hard.

Uzsoy et al. [31,32] published two review papers on reentrant scheduling problems and explained that the semiconductor industry is a proper example of these problems. Yura [35] proposed a revised cyclic production method for reentrant manufacturing systems with the criteria of maximizing the production rate and minimizing the throughput time. In the area of reentrant scheduling, Pearn et al. [26] presented the integrated circuit final testing scheduling problem with reentry, a variation of the parallel machine scheduling problem and also a particular generalization of the complex flow shop with batch processing. They also used sequence-dependent setup times between two consecutive jobs of different product types. Miragliotta and Perona [19] developed a new heuristic algorithm based on a decentralized approach to job shop reentrant scheduling, using real data from two different factories to improve the results. Choi et al. [8] proposed some heuristic algorithms to minimize the total tardiness of a given set of orders in an RHFS problem where reentrant flows (lots of certain orders) had to visit the stages twice; priorities of lots were determined at each stage using a certain procedure, and lots were then selected and assigned to machines according to these priorities when they became available.
Chen [6] proposed a branch and bound algorithm to solve the reentrant permutation flow shop (RPFS) scheduling problem, where no passing is allowed and all jobs have the same route. Chen et al. [4] also applied a hybrid tabu search (HTS) to minimize the makespan of jobs in an RPFS. Their method was compared with the optimal solutions generated by integer programming and with the near-optimal solutions generated by pure TS and the NEH heuristic, and HTS was shown to perform best among the methods tested. Choi and Kim [7] proposed heuristic methods for an RPFS problem in which each pass of each job was considered a sub-job and the sequences of sub-jobs on the m machines were the same, i.e. each sub-job was treated as an independent job.

There does not seem to be any published work addressing sequence-dependent setup times for the RHFS problem. Hwang and Sun [11,12] presented a two-machine flow-shop problem with reentrant flows and sequence-dependent setup times to minimize makespan, and Yang et al. [34] proposed independent setup times for a two-machine reentrant flow shop. Demirkol and Uzsoy [9] proposed some algorithms to schedule the wafer fabrication stage of semiconductor manufacturing; they then developed an enhanced decomposition method to minimize maximum lateness for the RFS with sequence-dependent setup times.

3. General description and formulation of the problem

This section first describes the main ideas and assumptions made for this problem. Then the mathematical model is formulated and solved by an exact method.

3.1. General description

Assume that there are n jobs, J1, J2, ..., Jn, and two main stages. In stage one, there are g stations, St1, St2, ..., Stg, and in station k,



there are m1k identical parallel machines. This stage is a reentrant shop, and every job must be processed on the stations in the order St1, St2, ..., Stg several times. In stage two, there is a single station with m2 identical parallel machines. Every job can be decomposed into several levels such that each level starts on St1 and finishes on Stg. Every job visits the stations l times; hence each job is processed l.g times in stage one. The processing of a job on a station is called an operation and requires a duration called the processing time. The criterion is to minimize the makespan; a minimum makespan usually implies a high utilization of the machines.

3.2. Formulation of the model

To describe the problem more clearly, a mixed integer programming model is presented. The model is adapted from that of Kurz and Askin [17] to cover our proposed problem. The assumptions and notation used are summarized as follows.

3.2.1. Model assumptions
The following assumptions are made in the RHFS scheduling problem studied in this paper.

1. All data in this problem are deterministic and fixed when scheduling starts.
2. Machines are available at all times, with no breakdowns or maintenance delays.
3. Jobs are always processed without error.
4. Job processing cannot be interrupted (no preemption is allowed), and jobs have no priority values.
5. Unlimited buffers exist between stations, before the first station in stage one, and after the last station in stage two.
6. There is no travel time between stations; jobs are available for processing at a station immediately after completing processing at the previous station.
7. The ready time for each job is the completion time of that job at the previous station.
8. Parallel machines are identical in capability and processing rate.
9. Two kinds of sequence-dependent setup times exist between jobs at each station. After completing the processing of one job and before beginning the processing of the next job at a station, some kind of setup must be performed. When a job revisits a station at a different level in stage one, it also requires a different setup time; this setup occurs after the last job processed on the station in the previous level. The length of time required for the setup depends on both the previous and the current job, i.e. the setup times are sequence-dependent.
10. In each station, the number of jobs must be greater than or equal to the number of machines visited.
11. The processing and setup times of jobs 0 and n + 1 are equal to 0; these nominal jobs are inserted into the formulation only to represent the first and the last job scheduled at each station, respectively.
12. Setup times cannot interrupt processing times, and at each station a job's processing must finish before the setup of another job can begin.
13. In stage one, the processing time of job i at station t is not the same for different levels.

3.2.2. Model inputs
n: number of jobs.
g: number of stations at stage one.
l: number of levels in reentrant mode.
m_t: number of identical machines at station t.

P_it: processing time of job i at station t.
s_ijt: setup time from job i to job j at station t.
sr_ijt: setup time from job i to job j at station t of stage one, where t > g (because of reentrance).
z: the objective function.

3.2.3. Model outputs (decision variables)
c_im^t: completion time of job i on machine m at station t.
x_ijm^t: 1 if job i is scheduled immediately before job j on machine m at station t, and 0 otherwise.

3.2.4. Objective function and constraints

min z    (1)

subject to:

Σ_{m=1}^{m_t} Σ_{j=1}^{n} x_0jm^t = m_t,    t = 1, ..., g.l + 1    (2)

Σ_{m=1}^{m_t} Σ_{j=1}^{n+1} x_ijm^t = 1,    i = 1, ..., n, t = 1, ..., g.l + 1    (3)

Σ_{m=1}^{m_t} Σ_{i=0}^{n} x_ijm^t = 1,    j = 1, ..., n, t = 1, ..., g.l + 1    (4)

Σ_{i=0}^{n} x_ijm^t − Σ_{k=1}^{n+1} x_jkm^t = 0,    j = 1, ..., n, m = 1, ..., m_t, t = 1, ..., g.l + 1    (5)

c_im^t + s_ijt + P_jt − M^t (1 − Σ_{m=1}^{m_t} x_ijm^t) ≤ c_jm^t,    i = 0, ..., n, j = 1, ..., n, m = 1, ..., m_t, t = 1, ..., g.l + 1    (6)

c_jm^{t−1} + s_ijt + P_jt − M_j^t (1 − Σ_{m=1}^{m_t} x_ijm^t) ≤ c_jm^t,    i = 0, ..., n, j = 1, ..., n, m = 1, ..., m_t, t = 2, ..., g.l + 1    (7)

c_im^{t−g} + x_i(n+1)m^{t−g} (P_jt + sr_ij^{t−g}) ≤ c_jm^t,    i = 0, ..., n, j = 1, ..., n, m = 1, ..., m_t, t = g + 1, ..., g.l    (8)

x_ijm^t ≤ P_jt,    i = 0, ..., n + 1, j = 1, ..., n, m = 1, ..., m_t, t = 1, ..., g.l + 1    (9)

x_jim^t ≤ P_jt,    i = 1, ..., n + 1, j = 0, ..., n + 1, m = 1, ..., m_t, t = 1, ..., g.l + 1    (10)

z ≥ c_jm^{g.l+1},    j = 1, ..., n, m = 1, ..., m_{g.l+1}    (11)

x_ijm^t ∈ {0, 1},    i, j ∈ {0, 1, ..., n, n + 1}, t = 1, ..., g.l + 1    (12)

x_ijm^t = 0,    i = j, t = 1, ..., g.l + 1    (13)

c_jm^t ≥ 0,    j = 0, ..., n, m = 1, ..., m_t, t = 1, ..., g.l + 1    (14)

Eq. (1) denotes minimizing the makespan. Eq. (2) ensures that m_t machines are scheduled at each station t. Eqs. (3) and (4) ensure that



Table 1
LINGO results for small problems.

Problem | Jobs (n) | Stations in stage one (g) | Levels (l) | Machines in stage one | Machines in stage two | Deviation from HGA (δ)
1       | 6        | 2                         | 2          | 2                     | 1                     | 0
2       | 6        | 2                         | 2          | 2                     | 2                     | 0
3       | 6        | 2                         | 3          | 2                     | 2                     | 0.112
4       | 6        | 2                         | 3          | 3,4                   | 2                     | 0.043
5       | 6        | 2                         | 5          | 2                     | 1                     | 0.111
6       | 6        | 4                         | 2          | 2                     | 1                     | 0
7       | 6        | 4                         | 2          | 2,3,1,2               | 2                     | 0.106
8       | 6        | 4                         | 3          | 2                     | 1                     | 0.140
9       | 6        | 4                         | 3          | 2                     | 2                     | NA
10      | 6        | 6                         | 2          | 2                     | 1                     | 0.228
11      | 6        | 6                         | 3          | 2                     | 1                     | NA
12      | 6        | 6                         | 2          | 2                     | 2                     | 0.249
13      | 30       | 2                         | 2          | 2                     | 1                     | NA
14      | 30       | 2                         | 2          | 2                     | 2                     | NA
15      | 30       | 2                         | 3          | 2                     | 2                     | NA
16      | 30       | 2                         | 3          | 2                     | 1                     | NA
17      | 30       | 2                         | 2          | 1,3                   | 2                     | NA
18      | 30       | 2                         | 5          | 2                     | 2                     | NA

each job is processed once at each station. Note that x_0jm^t = 1 if job j is the first job to be processed on machine m at station t. Similarly, x_i(n+1)m^t = 1 if job i is the last job to be processed on machine m at station t. Eq. (5) ensures that each job has a predecessor and a successor on the machine where it is processed: if job j is processed directly after job i on machine m at station t, some job k must be processed directly after job j on machine m at station t. The job completion times at each machine are represented by Eqs. (6) and (7). Eq. (6) implies that, if job j is processed directly after job i on machine m at station t, the sum of the completion time of job i on machine m, the setup time from job i to job j, and the processing time of job j at station t is less than or equal to the completion time of job j on machine m at station t. The value M^t is an upper bound on the time at which station t completes processing, as given in Kurz and Askin [17]:

M^1 = Σ_{i=1}^{n} (P_i1 + max{s_ji1 : j ∈ {0, ..., n}})

M^t = M^{t−1} + Σ_{i=1}^{n} (P_it + max{s_jit : j ∈ {0, ..., n}})

Eq. (7) implies that the sum of the completion time of job j on machine m at station t − 1, the setup time from job i to job j, and the processing time of job j at station t is less than or equal to the completion time of job j on machine m at station t. The value M_j^t is as follows:

M_j^t = max_i (s_jit) + P_jt

Eq. (8) ensures that if job i is the last job on machine m at station t − g and job j is processed directly after job i on machine m at station t, the completion time of job j is greater than or equal to the completion time of job i plus the processing time of job j plus the setup time between the two levels (job i at station t − g and job j at station t) on machine m. Eqs. (9) and (10) ensure that nominal jobs (for example j = n + 1) that do not visit a station are not assigned to that station. Eq. (11) implies that the objective function is greater than or equal to the completion time of every job at the station of stage two. Finally, Eqs. (12)–(14) state the conditions on the decision variables.

Eighteen small problems were solved in LINGO 8.0 on a PC with a 2500 MHz processor and 1 GB of RAM to evaluate the feasibility of solving this MIP; the run time for each was limited to 7200 s. The problems had integer processing times drawn from a uniform distribution on [20, 100], integer setup times in stages one and two drawn from uniform distributions on [12, 24] and [24, 40], respectively, and integer setup times between different levels at reentrant stations drawn from a uniform distribution on [24, 36]. Table 1 shows the results. Twelve problems had 6 jobs at each station and six problems had 30 jobs; no solution was found within 2 h for any problem with 30 jobs. Three problems were solved to optimality, all with just 6 jobs at each station: one had four stations in stage one with two reentries and one machine in stage two; the other two had two stations in stage one and two reentries, with one and two machines in their second stage, respectively. All three had two machines at each station of stage one. The other fifteen problems were stopped after 2 h without an optimal integer solution, and no feasible solution had been found for eight of them. According to these results, only small problems can be solved by the MIP, so heuristic methods are inevitable for this model.

4. Development of some heuristic algorithms and a metaheuristic algorithm

In this section, four modified heuristic algorithms are first proposed. Then a hybrid genetic algorithm, based on the random key genetic algorithm (RKGA) first proposed by Bean [2], is developed.

4.1. Heuristic algorithms

In this part, we introduce four heuristic algorithms that are well known and efficient in the scheduling area. The first three were presented in Kurz and Askin [17]; we have modified them for our two-stage RHFS model. The fourth is an extension of the NEH algorithm, which is based on the concept that all jobs must pass through all the machines in the same order. No restriction is placed on the form of the resulting schedules. In the following, [i] indicates the ith job in an ordered sequence. All of the following heuristics except the modified LPT (MLPT) algorithm use a modified processing time, denoted P̃_it for job i at station t and defined as P̃_it = P_it + min_j s_jit. This quantity represents the minimum time at station t that must elapse before job i could be completed. S^t is the set of jobs that visit station t. m_t



is the number of machines at station t in stage one, and m_2 is the number of machines in stage two.

4.1.1. The modified SPT algorithm (MSPT)
This heuristic assigns jobs to machines with little regard to setup times: only the setup times between similar stations in the reentrant schedule are calculated, and the real setup times between jobs at each station are ignored when ordering. In the SPT heuristic, the jobs are ordered at station 1 of stage one in increasing order of the modified processing times P̃_i1. At subsequent stations, jobs are assigned in earliest-ready-time order. At every station, each job is assigned to the machine that allows it to be completed at the earliest time, measured in a greedy fashion.

1. Create the modified processing times P̃_i1.
2. Order the jobs in non-decreasing order (SPT) of P̃_i1.
3. At each station t = 1, ..., g.l in stage one and at the station of stage two, assign job 0 to each machine.
4. For station 1:
   4.1. Let bestmc = 1.
   4.2. For [i] = 1 to n, i ∈ S^1:
      4.2.1. For mc = 1 to m_1:
         4.2.1.1. Place job [i] last on machine mc.
         4.2.1.2. Find the completion time of job [i]. If this time is less on mc than on bestmc, let bestmc = mc.
      4.2.2. Assign job [i] to the last position on machine bestmc.
5. For each station t = 2, ..., g.l in stage one:
   5.1. Update the ready times at station t to be the completion times at station t − 1.
   5.2. Arrange jobs in increasing order of ready times.
   5.3. Let bestmc = 1.
   5.4. For [i] = 1 to n, i ∈ S^t:
      5.4.1. For mc = 1 to m_t:
         5.4.1.1. Place job [i] last on machine mc. If job [i] is the first job to be scheduled on machine mc and t > g, calculate the real setup time between this job and the last job on machine mc at station t − g, and add this time to P̃_it.
         5.4.1.2. Find the completion time of job [i]. If this time is less on mc than on bestmc, let bestmc = mc.
      5.4.2. Assign job [i] to the last position on machine bestmc.
6. For stage two:
   6.1. Update the ready times at this stage to be the completion times at station g.l of stage one.
   6.2. Arrange jobs in increasing order of ready times.
   6.3. Let bestmc = 1.
   6.4. For [i] = 1 to n, i ∈ S^{g.l+1}:
      6.4.1. For mc = 1 to m_2:
         6.4.1.1. Place job [i] last on machine mc.
         6.4.1.2. Find the completion time of job [i]. If this time is less on mc than on bestmc, let bestmc = mc.
      6.4.2. Assign job [i] to the last position on machine bestmc.
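The core loop of the MSPT steps above can be sketched as follows. This is an illustrative sketch for a single station, not the authors' code: jobs are taken in non-decreasing modified-processing-time order, and each is appended to the machine that completes it earliest. The names `proc`, `setup`, `ready` and `n_machines` are assumptions introduced for this sketch.

```python
def mspt_station(jobs, proc, setup, ready, n_machines):
    """Greedy SPT-style assignment at one station.

    jobs: job ids; proc[j]: processing time; setup[(i, j)]: setup from i to j
    (job 0 is the nominal first job); ready[j]: ready time of job j.
    Returns completion time per job and the per-machine sequences.
    """
    # Modified processing time P~_j = P_j + min_i s_ij.
    mod = {j: proc[j] + min(setup[(i, j)] for i in [0] + jobs if i != j)
           for j in jobs}
    order = sorted(jobs, key=lambda j: mod[j])   # SPT order

    free_at = [0] * n_machines       # when each machine becomes idle
    last_job = [0] * n_machines      # last job on each machine (0 = none yet)
    seq = [[] for _ in range(n_machines)]
    completion = {}
    for j in order:
        # Try every machine; keep the one giving the earliest completion.
        best_mc, best_c = None, None
        for mc in range(n_machines):
            start = max(free_at[mc] + setup[(last_job[mc], j)], ready[j])
            c = start + proc[j]
            if best_c is None or c < best_c:
                best_mc, best_c = mc, c
        free_at[best_mc] = best_c
        last_job[best_mc] = j
        seq[best_mc].append(j)
        completion[j] = best_c
    return completion, seq
```

For the full heuristic this loop would be repeated station by station, with the ready times updated to the previous station's completion times as in steps 5 and 6.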

4.1.2. The modified g.l/2 Johnson's rule (MJSN)
Johnson (1954) proposed an algorithm for the two-machine flow shop scheduling problem with the makespan criterion, F2//Cmax. This heuristic is an extension of Johnson's rule that takes the setup times into account. The aggregated first half of the stations in stage one and the aggregated last half of the stations in that stage are used to create the order for assignment at station 1 of stage one. The value P̃_i1 is the sum of the modified processing times over stations 1 to [g.l/2] of stage one, and P̃_i^sg is the sum over stations [g.l/2] + 1 to g.l. The steps of the algorithm are as follows:

1. Create the modified processing times P̃_i1 and P̃_i^sg.
2. Let U = {j | P̃_j1 < P̃_j^sg} and V = {j | P̃_j1 ≥ P̃_j^sg}.
3. Arrange the jobs in U in non-decreasing order of P̃_i1 and the jobs in V in non-increasing order of P̃_i^sg. Append the ordered list V to the end of U.
4. At each station t = 1, ..., g.l in stage one and at the station of stage two, assign job 0 to each machine.
5. For [i] = 1 to n, i ∈ S^1:
   5.1. For mc = 1 to m_1:
      5.1.1. Place job [i] last on machine mc. If this placement results in the lowest completion time for job [i], let m = mc.
   5.2. Place job [i] last on machine m.
6. For each station t = 2, ..., g.l in stage one:
   6.1. Update the ready times at station t to be the completion times at station t − 1.
   6.2. Arrange jobs in increasing order of ready times.
   6.3. For [i] = 1 to n, i ∈ S^t:
      6.3.1. For mc = 1 to m_t:
         6.3.1.1. Place job [i] last on machine mc. If job [i] is the first job to be scheduled on machine mc and t > g, calculate the real setup time between this job and the last job on machine mc at station t − g, and add this time to P̃_it.
         6.3.1.2. If this placement results in the lowest completion time for job [i], let m = mc.
      6.3.2. Place job [i] last on machine m.
7. For stage two:
   7.1. Update the ready times at this station to be the completion times at station g.l of stage one.
   7.2. Arrange jobs in increasing order of ready times.
   7.3. For [i] = 1 to n, i ∈ S^{g.l+1}:
      7.3.1. For mc = 1 to m_2:
         7.3.1.1. Place job [i] last on machine mc.
         7.3.1.2. If this placement results in the lowest completion time for job [i], let m = mc.
      7.3.2. Assign job [i] to the last position on machine m.
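The ordering step of MJSN (steps 1–3) is the classical Johnson partition applied to the two aggregated pseudo machines. A minimal sketch, not the paper's code, where `p1[j]` and `p2[j]` stand for the aggregated modified processing times P̃_j1 and P̃_j^sg:

```python
def johnson_order(jobs, p1, p2):
    """Johnson's rule: U = jobs with p1 < p2 sorted by increasing p1,
    V = the remaining jobs sorted by decreasing p2; sequence is U then V."""
    U = sorted((j for j in jobs if p1[j] < p2[j]), key=lambda j: p1[j])
    V = sorted((j for j in jobs if p1[j] >= p2[j]), key=lambda j: -p2[j])
    return U + V
```

The resulting sequence is then used only to fix the assignment order at station 1; machine selection at each station still follows the greedy earliest-completion rule of steps 5–7.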

4.1.3. The modified LPT algorithm (MLPT)
The LPT algorithm is a heuristic that minimizes the sum of flow times (completion minus ready times) at each station. It is a multiple-machine, multiple-stage adaptation of the insertion heuristic for the TSP. Setup times between jobs at each station are accounted for by integrating their values into the processing times via P̃_it. The insertion heuristic is then performed using these modified processing times at each station; once jobs have been assigned to machines, the true processing and setup times can be used. The MLPT has the following steps:

1. For station 1 of the first stage:
   1.1. Create the modified processing times P̃_i1.
   1.2. Order the jobs in non-increasing order (LPT) of P̃_i1.
   1.3. For [i] = 1 to n, i ∈ S^1:
      1.3.1. Insert job [i] into every position on each machine.
      1.3.2. Calculate the true sum of flow times using the actual setup times.
      1.3.3. Place job [i] in the position on the machine with the lowest resulting sum of flow times.
2. For each station t = 2, ..., g.l of the first stage:
   2.1. Update the ready times at station t to be the completion times at station t − 1.
   2.2. Order the jobs in increasing order of ready times.
   2.3. For [i] = 1 to n, i ∈ S^t:
      2.3.1. For mc = 1 to m_t:
         2.3.1.1. Insert job [i] last on machine mc. If job [i] is the first job to be scheduled on machine mc and t > g, calculate the real setup time between this job and the last job on machine mc at station t − g, and add this time to P̃_it; otherwise, calculate the true sum of flow times using the actual setup time between job [i] and the previous job on machine mc.
         2.3.1.2. If this placement results in the lowest completion time for job [i], let m = mc.
      2.3.2. Assign job [i] to the last position on machine m.
3. For stage two:
   3.1. Update the ready times at this station to be the completion times at station g.l of stage one.
   3.2. Order jobs in increasing order of ready times.
   3.3. For [i] = 1 to n, i ∈ S^{g.l+1}:
      3.3.1. For mc = 1 to m_2:
         3.3.1.1. Place job [i] last on machine mc.
         3.3.1.2. If this placement results in the lowest completion time for job [i], let m = mc.
      3.3.2. Assign job [i] to the last position on machine m.
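The insertion step of MLPT at the first station (step 1) can be sketched as follows. This is an illustrative single-machine sketch with assumed names, not the authors' code: each job, taken in LPT order, is tried in every position of the current sequence and placed where the resulting sum of flow times is smallest.

```python
def sum_flow_times(seq, proc, setup):
    """Sum of completion times of a sequence on one machine, with
    sequence-dependent setups; job 0 is the nominal first job."""
    t, prev, total = 0, 0, 0
    for j in seq:
        t += setup[(prev, j)] + proc[j]
        total += t
        prev = j
    return total

def mlpt_insert(jobs, proc, setup):
    order = sorted(jobs, key=lambda j: -proc[j])   # LPT order
    seq = []
    for j in order:
        # Try job j in every position; keep the cheapest insertion.
        seq = min((seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)),
                  key=lambda s: sum_flow_times(s, proc, setup))
    return seq
```

With multiple machines, the same insertion would additionally range over machines, as in step 1.3.1.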


4.1.4. The modified NEH algorithm (MNEH)
The NEH algorithm was first proposed by Nawaz et al. [23] for permutation flow shop scheduling problems, in which all jobs must pass through all machines in the same order. The algorithm was later extended by many authors; Pan and Chen [25] showed that the NEH heuristic is the best-performing algorithm for reentrant permutation flow shop (RPFS) problems. The NEH algorithm is based on the assumption that a job with a larger total processing time over all machines should be given higher priority than a job with a smaller total processing time. The number of enumerations in the algorithm is (n(n + 1)/2) − 1, of which n are complete sequences and the rest are partial sequences. The MNEH has the following steps:
1. For each job [i], calculate the sum of modified processing times, Ti, over all stations of stage one and the station of stage two.
2. Arrange the jobs in descending order of Ti.
3. Pick the jobs in the first and second positions of the list of Step 2, and find the best sequence for these two jobs by calculating the completion times of the two possible sequences and selecting the one with the minimum. The relative order of these two jobs is fixed for the remaining steps of the algorithm.
4. For i = 3, . . ., n:
   4.1. Pick the job in the ith position of the list generated in Step 2 and find the best sequence by placing it at all i possible positions in the partial sequence found in the previous step, without changing the relative order of the already assigned jobs.
5. For each station t = 2, . . ., g·l in stage one:
   5.1. Update the ready times in station t to be the completion times in station t − 1.
   5.2. Arrange the jobs in increasing order of ready times.
   5.3. For [i] = 1 to n, i ∈ St:
      5.3.1. For mc = 1 to mt:
         5.3.1.1. Place job [i] last on machine mc.
            5.3.1.1.1. If job [i] is the first job to be scheduled on machine mc and t > g: calculate the real setup time between this job and the last job on machine mc in station t − g, and add this time to P̃it.
         5.3.1.2. If this placement results in the lowest completion time for job [i], let m = mc.
      5.3.2. Place job [i] last on machine m.
6. For stage two:
   6.1. Update the ready times in this station to be the completion times in station g·l of stage one.
   6.2. Arrange the jobs in increasing order of ready times.
   6.3. For [i] = 1 to n, i ∈ Sg·l+1:
      6.3.1. For mc = 1 to m2:
         6.3.1.1. Place job [i] last on machine mc.
         6.3.1.2. If this placement results in the lowest completion time for job [i], let m = mc.
      6.3.2. Assign job [i] to the last position on machine m.
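The insertion idea underlying Steps 2–4 can be sketched in Python. This is a simplified illustration on a plain permutation flow shop (one machine per station, no setups, no reentrance) with hypothetical data, not the authors' MNEH implementation:

```python
# Sketch of the NEH insertion heuristic on a simplified permutation flow shop.
# p[j][m] is the (hypothetical) processing time of job j on machine m.

def makespan(seq, p):
    """Completion time of the last job on the last machine for sequence seq."""
    n_mach = len(p[0])
    comp = [0.0] * n_mach                      # running completion time per machine
    for j in seq:
        for m in range(n_mach):
            ready = comp[m - 1] if m > 0 else 0.0   # job j just finished machine m-1
            comp[m] = max(comp[m], ready) + p[j][m]
    return comp[-1]

def neh(p):
    """Order jobs by descending total work, then insert each at its best position."""
    n = len(p)
    order = sorted(range(n), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    for j in order[1:]:
        # try job j at every position of the current partial sequence
        seq = min(
            (seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
            key=lambda s: makespan(s, p),
        )
    return seq, makespan(seq, p)

p = [[3, 4, 2], [5, 1, 3], [2, 6, 4], [4, 2, 5]]   # 4 jobs, 3 machines
seq, cmax = neh(p)
```

The MNEH of this paper extends this skeleton with parallel machines per station, ready times, and the reentrant setup-time rule of Step 5.3.1.1.1.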

4.2. The hybrid genetic algorithm
The genetic algorithm is a metaheuristic search technique based on the mechanisms of natural selection and genetics. It originates from Darwin's theory of survival of the fittest: good parents tend to produce better offspring. A genetic algorithm starts with a set of feasible solutions (the population) and iteratively replaces the current population with a new one. It requires a suitable encoding for the problem and a fitness function that measures the quality of each encoded solution (chromosome or individual). The reproduction (selection) mechanism selects two parents and recombines them with a crossover operator to generate offspring; a mutation operator serves as a neighborhood local search. The HGA is based on the random key genetic algorithm (RKGA), which differs from the pure genetic algorithm in how solutions are decoded. The RKGA was first proposed by Bean [2] and has been used by many authors to solve practical problems in a wide range of domains, for example the vehicle routing problem (VRP), resource allocation problems, the quadratic assignment problem (QAP), job shop makespan minimization (Bean [2]), multiple machine tardiness scheduling (Norman and Bean [24]) and the generalized traveling salesman problem (TSP) (Snyder and Daskin [28]). In the RKGA, random numbers act as sort keys to decode the solution. The decoded solution is evaluated with a fitness function suitable for the model, introduced later. The proposed HGA has the following structure.

4.2.1. Parameter setting
The parameters of our HGA are the population size, the number of generations, the copy probability, the crossover probability and the mutation probability. The population size and number of generations follow Kurz and Askin [17]: each generation has a population of 100 chromosomes and the algorithm runs for 100 generations. Fifteen test problems (5 problems for each number of jobs: 30, 70, 100) were used to evaluate and compare three different levels of the copy, crossover and mutation probabilities. The next section gives the results.

4.2.2. Encoding (representation of chromosomes)
In some genetic algorithms, each solution is encoded as a binary string; however, this is not suitable for scheduling problems. Over the years, many encoding methods have been proposed for scheduling problems, such as job-based, machine-based and operation-based encodings (Chen et al. [5]). Norman and Bean [24] proposed a decoding method for an identical multiple machine problem, which has since been used for flexible and hybrid flow shop problems (Kurz and Askin [17], Zandieh et al. [36]); we use this method to encode chromosomes. Each job is assigned a real number whose integer part is the number of the machine to which the job is assigned and whose fractional part is used to sort the jobs assigned to that machine. After the job assignments and the order on each machine are obtained through decoding, a schedule can be built incorporating other factors such as ready times and sequence-dependent setup times.
The desired performance measure can then be computed from the schedule. For example, with six jobs and three machines in station A, a chromosome is randomly generated to arrange the jobs in this station. Fig. 1 shows that the second and fourth jobs are assigned to machine one; the first, third and sixth jobs to machine two; and the remaining job, the fifth, to machine three. Note that this kind of arrangement is only used to sort the jobs in the first station of stage one; in the other stations, the sorting of jobs depends on the ready time of each job, obtained from the previous station.

Fig. 1. A randomly generated chromosome in a problem with six jobs and three machines.

4.2.3. Initial population
Generating the initial population is an important part of a genetic algorithm, and a good initial population directly helps the search reach better solutions. The initial populations were generated with the four heuristic methods MSPT, MNEH, MLPT and MJSN. To build an initial population from these heuristics in the first generation, we needed to compare the heuristics and give greater weight to the better ones, so the better algorithms contribute more chromosomes to the initial population. We designed three different combinations. In the first design, 96 percent of the chromosomes in the initial population were generated randomly, and one percent of the population was assigned to each heuristic algorithm. In the second design, 50 percent of the population was generated randomly and the remaining chromosomes were generated as follows. The SPT algorithm is a suitable and well adapted heuristic for the RFS scheduling problem, and the NEH algorithm is the best heuristic for the RPFS scheduling problem (Chen et al. [5]); the RFS problem in which no passing is allowed and all jobs follow the same route through all stations is called the RPFS problem. We therefore assigned more chromosomes to these two heuristics. Twenty-four percent of the chromosomes in the first population were generated by this procedure: the first schedule of this part was generated by the MSPT algorithm, and each subsequent schedule was generated by randomly selecting two points (jobs) in the previous schedule and exchanging their positions. For example, given the arrangement 2, 4, 3, 6, 1, 5 for the jobs of a station, exchanging the second and fifth positions yields the new ordering 2, 1, 3, 6, 4, 5. Another 24 percent of the chromosomes were generated in the same way, starting from the MNEH schedule. Finally, one percent of the chromosomes was generated by the MLPT heuristic and another percent of the initial population by the MJSN algorithm. In the third design, 40 percent of the population was generated randomly; the remaining 60 percent was divided into four parts, and each 15-percent section was generated from one of the four heuristics discussed in this paper, MSPT, MNEH, MLPT and MJSN. In each part, the first schedule was generated by the heuristic and the rest were generated by the same two-point exchange procedure. Fifteen test problems were generated to evaluate and compare these three designs of the initial population; the next section gives the results.

4.2.4. Copy operator
Copy is an operator that duplicates the best chromosomes, those with the lowest makespan values, from the previous generation into the next one.
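The crossover, mutation and selection-probability computations defined in Sections 4.2.5–4.2.7 below can be illustrated with a minimal Python sketch. The names and numeric values are ours, not the authors' implementation; the crossover operates on a job ordering as in the Fig. 2 example, while the mutation regenerates one random key:

```python
import random

def two_point_crossover(p1, p2, cuts=None):
    """Child keeps the jobs between the two cut points from parent p1 and
    fills the other positions with the remaining jobs in p2's order."""
    n = len(p1)
    a, b = sorted(cuts if cuts else random.sample(range(n), 2))
    segment = set(p1[a:b + 1])
    filler = iter(j for j in p2 if j not in segment)
    return [p1[i] if a <= i <= b else next(filler) for i in range(n)]

def mutate(chromosome, n_machines):
    """Replace one randomly chosen gene with a fresh random key."""
    child = chromosome[:]
    i = random.randrange(len(child))
    child[i] = random.uniform(1, n_machines + 1)
    return child

def selection_probabilities(makespans):
    """F_i = (1/Cmax(i)) / sum_k (1/Cmax(k)): a lower makespan
    yields a higher selection probability."""
    inv = [1.0 / c for c in makespans]
    total = sum(inv)
    return [x / total for x in inv]

child = two_point_crossover([2, 4, 3, 6, 1, 5], [5, 1, 6, 2, 4, 3], cuts=(2, 4))
probs = selection_probabilities([200.0, 100.0])   # second chromosome twice as likely
```

These are sketches under our assumptions about the representation; the paper combines them with the random-key decoding of Section 4.2.2.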

4.2.5. Crossover operator
Crossover is an operation that generates a new string (child or offspring) from two parent strings, and it is the main operator of a GA. Various crossover operators have been proposed over the years; Murata et al. [22] showed that the two-point crossover is effective for flow shop problems, so the two-point crossover method is used in this paper. Fig. 2 illustrates a two-point crossover on an example with six jobs, three machines and a single station. Two points are generated at random, 3 and 5; the set of jobs between these two points is inherited by the child from one parent, and the other jobs are placed in the order of their appearance in the other parent.

Fig. 2. A two-point crossover.

4.2.6. Mutation operator
Mutation is a regularly useful operator of a GA. It plays the role of a transition from the current solution to a neighboring solution, as in a local search algorithm, and is used to prevent premature convergence to a local optimum. In this paper, a chromosome is first selected randomly from the previous iteration, then one gene (job) in this chromosome is nominated at random and a new number is assigned to that position to make a new chromosome for the next generation. In Fig. 3a, a chromosome with six genes (jobs) and three machines in a station is selected and the second job is randomly nominated to make the new chromosome of Fig. 3b.

Fig. 3. The mutation operator. (a) The chromosome before mutation, with the second gene selected to change. (b) The chromosome after mutation, with a random number generated in its second gene.

4.2.7. Fitness function
To measure the effectiveness of each chromosome, we need a fitness function. In a GA the objective function is maximized, but in our problem the objective is to minimize the makespan, Cmax; we therefore use the reciprocal of the makespan to adapt the scheduling objective to the GA. The fitness of chromosome i is 1/Cmax(i) and its selection probability is

Fi = (1/Cmax(i)) / Σ_{i=1}^{pop} (1/Cmax(i)),

where pop is the number of chromosomes in each generation. A lower makespan thus yields a higher probability and higher desirability.

4.2.8. Termination criterion
The above procedures are repeated by the HGA until a good population in the search space is achieved. Several stopping criteria can be set by the user to terminate the search; the commonly used criteria are (1) the number of executed generations, (2) reaching a particular objective value, and (3) the homogeneity of the population. This paper uses the first criterion: a fixed number of generations, here 100, serves as the termination condition.

5. Design of experiments

This section describes the method of generating the data sets, the runs of the proposed algorithms on these data sets, and the analysis of the results.

5.1. Generating data sets
To compare our results, test problems were generated and solved with the proposed algorithms. The data required for a problem consist of the number of jobs (n), the range of processing times, the number of stations in stage one (g), the number of machines at each station of stage one (m), the number of machines in stage two (m2), the number of levels of the reentrant jobs (l), the sequence-dependent setup times, the processing times and the ready times. The ready times for station 1 of stage one are set to 0 for all jobs; the ready times at station t + 1 are the completion times at station t. The processing times in stages one and two are uniformly distributed over [20, 100] (mean 60) and [40, 120] (mean 80), respectively. Following Kurz and Askin [17], the setup times between jobs in each station of stage one are uniformly distributed from 12 to 24, i.e. 20–40% of the mean of the stage-one processing times. The setup times between jobs in the reentrant stations of stage one are uniformly distributed from 24 to 36 (40–60% of the mean of the stage-one processing times). Finally, the setup times between jobs in stage two are uniformly distributed from 24 to 40, i.e. 30–50% of the mean of the stage-two processing times. Table 2 shows the levels of these factors. According to these levels, there are 243 different combinations of the five factors. To compare random test data, five data sets are generated for each combination, giving 1215 data sets in total.
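An instance generator following these ranges can be sketched as below. The data structure and parameter names are our assumption (the paper gives no code), and the sketch fixes a constant machine count per stage-one station rather than the Unif(1,6) variant:

```python
# Sketch of a test-instance generator with the uniform ranges quoted above.
import random

def generate_instance(n_jobs, g, levels, m1, m2, seed=None):
    """Build one random instance: g stations visited `levels` times in stage one,
    one station in stage two; all setups are sequence dependent (job-pair matrices)."""
    rng = random.Random(seed)
    pair = lambda lo, hi: [[rng.uniform(lo, hi) for _ in range(n_jobs)]
                           for _ in range(n_jobs)]
    return {
        # processing times: stage one in [20, 100], stage two in [40, 120]
        "p_stage1": [[rng.uniform(20, 100) for _ in range(g * levels)]
                     for _ in range(n_jobs)],
        "p_stage2": [rng.uniform(40, 120) for _ in range(n_jobs)],
        "setup_stage1": pair(12, 24),      # 20-40% of the stage-one mean (60)
        "setup_reentrant": pair(24, 36),   # 40-60% of the stage-one mean
        "setup_stage2": pair(24, 40),      # 30-50% of the stage-two mean (80)
        "machines_stage1": [m1] * g,
        "machines_stage2": m2,
    }

inst = generate_instance(n_jobs=30, g=2, levels=2, m1=2, m2=1, seed=0)
```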

Table 2
Factor levels.

Factor                                                   Levels
Number of stations in stage 1 (g)                        2, 4, 6
Number of levels in stage 1 (number of reentrants) (l)   2, 3, 5
Number of machines in each station of stage 1 (m)        Constant: 2; Constant: 6; Variable: Unif(1,6)
Number of machines in stage 2 (m2)                       1, 2, 4
Number of jobs (n)                                       30, 70, 100

Table 5
Comparing the average of δ in the proposed methods.

n,g,l      MSPT     MJSN     MLPT      MNEH     RKGA     HGA
30,2,2     7.576    7.819    15.011    7.782    0.192    0.118
30,2,3     4.977    5.288    11.705    5.604    0.111    0.075
30,2,5     4.053    4.223    9.257     4.340    0.087    0.031
30,4,2     5.797    6.076    13.132    6.282    0.161    0.079
30,4,3     5.898    5.870    11.934    5.915    0.100    0.067
30,4,5     3.802    3.949    9.111     4.861    0.083    0.059
30,6,2     9.334    9.303    16.251    9.958    0.130    0.061
30,6,3     3.863    3.711    9.244     3.908    0.115    0.050
30,6,5     3.913    3.522    7.808     4.267    0.057    0.030
70,2,2     7.356    8.310    17.135    7.455    0.048    0.060
70,2,3     1.415    1.266    9.499     1.287    0.053    0.029
70,2,5     1.420    1.496    8.948     1.072    0.030    0.019
70,4,2     5.623    6.555    14.804    5.853    0.094    0.037
70,4,3     2.077    2.097    10.626    1.882    0.033    0.029
70,4,5     0.977    1.409    8.776     0.947    0.033    0.021
70,6,2     4.830    5.043    13.995    4.654    0.060    0.025
70,6,3     1.409    1.503    9.575     1.363    0.036    0.028
70,6,5     1.129    1.255    8.857     1.093    0.027    0.010
100,2,2    5.418    5.729    15.555    5.449    0.065    0.022
100,2,3    0.851    0.861    9.481     0.763    0.033    0.020
100,2,5    0.647    0.648    8.588     0.539    0.024    0.012
100,4,2    4.521    5.007    14.550    4.654    0.088    0.040
100,4,3    0.994    1.130    9.938     0.940    0.040    0.013
100,4,5    0.733    0.952    8.986     0.764    0.023    0.012
100,6,2    4.434    4.823    14.132    4.159    0.040    0.014
100,6,3    1.119    1.315    10.025    1.152    0.027    0.009
100,6,5    1.033    1.055    9.253     0.879    0.031    0.011
Average    3.526    3.712    11.340    3.623    0.067    0.036

Table 3
Comparing the different combinations of HGA parameters.

          State (operator division)   Design 1   Design 2   Design 3
n = 30    1                           0.161      0.223      0.161
          2                           0.285      0.252      0.099
          3                           0.120      0.215      0.010
n = 70    1                           0.124      0.176      0.131
          2                           0.121      0.123      0.123
          3                           0.119      0.096      0.015
n = 100   1                           0.122      0.128      0.098
          2                           0.134      0.155      0.092
          3                           0.116      0.093      0.009

5.2. Analyzing the experimental results
This section discusses the effectiveness of the proposed heuristics and the HGA. The algorithms were implemented in MATLAB 7.5.0 and run on a PC with a 2500 MHz processor and 1 GB of RAM. Each combination of data sets was run with each algorithm, and the running time was measured with the clock() function.

5.2.1. Parameter setting
To select the best parameters of the HGA, we compared three different percentage splits of the genetic operators (crossover, mutation and copy) and the three designs for building the initial population from the heuristics. We randomly selected five data sets for each number of jobs, n = 30, n = 70 and n = 100, and ran the nine different combinations on each of the 15 data sets. The average of δ was used to represent the differences between combinations:

δ = (Cmax − bestCmax) / bestCmax × 100%,

where Cmax is the makespan found by the algorithm and bestCmax is the best solution among the nine combinations. Table 3 shows the results; each cell contains the average of δ over the five tests of a given combination. The designs are those presented in Section 4.2.3. State 1 divides the genetic operators among copy, crossover and mutation with proportions 0.3, 0.65 and 0.05, respectively; these proportions are 0.25, 0.7 and 0.05 for state 2, and 0.25, 0.65 and 0.1 for state 3. A two-factor ANOVA was performed on the data. Its validity was checked with Levene's test for the two responses, which indicates that the variances are homogeneous. The normality of the data was checked by verifying how well the residuals fit a theoretical normal distribution, following Montgomery [21]. The results are shown in Table 4.

5.3. Comparing the proposed algorithms
To evaluate the quality of the heuristic and metaheuristic solutions, we categorize all data sets into levels specified by the number of jobs (n), the number of stations in stage one (g) and the number of reentrants (l), and for each level we compute the percentage deviation of each algorithm's makespan from the best solution found among all six algorithms:

δ = (Cmax − bestCmax) / bestCmax × 100%,

where Cmax is the makespan found by the algorithm and bestCmax is the best solution among all six algorithms at that level. To compare the results across levels, we introduced 27 different levels at which each algorithm was compared. The results are shown in Table 5; each cell covers 45 instances, in which nine different combinations of machines in the two stages are considered.
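The deviation measure is a direct computation; a small sketch, with illustrative makespan values that are not taken from the paper's tables:

```python
def deviation(cmax, best_cmax):
    """Percentage deviation of an algorithm's makespan from the best one found."""
    return (cmax - best_cmax) / best_cmax * 100.0

# Hypothetical makespans found by the six algorithms on one instance:
makespans = {"MSPT": 1055, "MJSN": 1060, "MLPT": 1150,
             "MNEH": 1048, "RKGA": 1012, "HGA": 1010}
best = min(makespans.values())
deltas = {name: deviation(c, best) for name, c in makespans.items()}
```

Averaging these deltas over all instances of a level gives one cell of Table 5.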

Table 4
Results of the best selection of HGA parameters.

GA operators in each population (%)       Generation of initial population (%)
Copy    Crossover    Mutation             Random    MSPT    MNEH    MJSN    MLPT
25      65           10                   50        24      24      1       1


Fig. 4. Means plot and LSD intervals at the 95% confidence level.

Fig. 6. Means plot and LSD intervals at the 95% confidence level for the heuristic algorithms and different numbers of jobs.

Table 6
Comparing the situations in which MLPT works better than the other heuristics.

        MLPT vs. MSPT         MLPT vs. MNEH         MLPT vs. MJSN
Jobs    Frequency   Percent   Frequency   Percent   Frequency   Percent
30      20          4.938     23          5.679     14          3.457
70      1           0.247     1           0.247     2           0.494
100     0           0         1           0.247     1           0.247

From Table 5 it can be seen that RKGA and HGA work better than the heuristic algorithms and that MLPT performs worse than the other heuristics. A single-factor ANOVA was performed separately for the heuristic algorithms (except MLPT) and for the metaheuristic algorithms. The validity of these observations was verified by a design of experiments and a single-factor ANOVA using the algorithm as the factor and the average of δ as the response variable. The results show that the metaheuristic algorithms perform significantly better than the heuristics. If an algorithm with a short run time (less than one second) is required, the heuristic algorithms must be used. Fig. 4 shows the means plot and LSD intervals for all algorithms at the 95% confidence level. A comparison of the CPU times of RKGA and HGA indicates that in 11 percent of the problems of size n = 30, RKGA is on average 1.67% faster (in the remaining 89% of problems both algorithms take equal CPU time), and in the problems with n = 70 and n = 100, RKGA is on average 12.61% and 23.31% faster, respectively. Table 6 shows the frequencies with which MLPT reaches the minimum makespan against the other heuristics; each cell compares 405 data sets. Table 6 indicates that as the number of jobs increases, MLPT ceases to be a good heuristic for this problem.

It is reasonable to compare separately the heuristics that perform similarly, namely MSPT, MJSN and MNEH; the two metaheuristics were likewise compared with each other. To analyze the data, we tested the effect of the number of jobs: a two-way ANOVA was applied with the type of algorithm and the number of jobs as factors and the average of δ as the response. The results are shown in Figs. 5 and 6. Fig. 5 indicates that the HGA works better than RKGA at all problem sizes. Fig. 6 shows that MSPT works better than MJSN at all three numbers of jobs, 30, 70 and 100; MSPT and MJSN also work better than MNEH in problems with n = 30, but MNEH is better than the others in large-scale problems (n = 70, n = 100).

6. Conclusions and future works
This paper studies the two stage reentrant hybrid flow shop scheduling problem with setup times. The model of the paper, which comprises two different stages with reentrance in stage one, applies to many real cases, especially in the semiconductor industry. Six algorithms, four heuristics and two metaheuristics, were examined to find the minimum makespan. In the proposed HGA, an efficient method was used to generate the initial population. The computational results show that the HGA works better than the other algorithms and that the MNEH algorithm obtains the lowest makespans among the heuristics at large scales (n = 70 and n = 100). Future research could test other scheduling criteria, such as total completion time and maximum lateness, or multi-objective methods. It would also be interesting to investigate the proposed model in other areas of scheduling, such as permutation reentrant flow shops or job shops, or to replace the identical parallel machines of the hybrid setting with unrelated or uniform machines. Finally, other algorithms can be applied to solve such problems.

References

Fig. 5. Means plot and LSD intervals at the 95% confidence level for the metaheuristic algorithms and different numbers of jobs.

[1] K.R. Baker, Introduction to Sequencing and Scheduling, John Wiley & Sons, New York, 1974.
[2] J.C. Bean, Genetic algorithms and random keys for sequencing and optimization, ORSA Journal on Computing 6 (1994) 154–160.
[3] C.F. Bispo, S. Tayur, Managing simple re-entrant flow lines: theoretical foundation and experimental result, IIE Transactions 33 (8) (2001) 609–623.
[4] J.S. Chen, J.C.H. Pan, C.K. Wu, Hybrid tabu search for re-entrant permutation flow shop scheduling problem, Expert Systems with Applications 34 (3) (2008) 1924–1930.
[5] J.S. Chen, J.C.H. Pan, C.M. Lin, A hybrid genetic algorithm for the re-entrant flow-shop scheduling problem, Expert Systems with Applications 34 (2008) 570–577.
[6] J.S. Chen, A branch and bound procedure for the reentrant permutation flow-shop scheduling problem, International Journal of Advanced Manufacturing Technology 29 (2006) 1186–1193.
[7] S.W. Choi, Y.D. Kim, Minimizing makespan on an m-machine re-entrant flowshop, Computers & Operations Research 35 (5) (2008) 1684–1696.
[8] S.W. Choi, Y.D. Kim, G.C. Lee, Minimizing total tardiness of orders with reentrant lots in a hybrid flowshop, International Journal of Production Research 43 (11) (2005) 2149–2167.
[9] E. Demirkol, R. Uzsoy, Decomposition methods for re-entrant flow shops with sequence-dependent setup times, Journal of Scheduling 3 (3) (2000) 155–177.
[10] M.R. Garey, D.S. Johnson, R. Sethi, The complexity of flowshop and job shop scheduling, Mathematics of Operations Research 1 (2) (1976) 117–129.
[11] H. Hwang, J.U. Sun, Production sequencing problem with reentrant work flows and sequence dependent setup times, Computers & Industrial Engineering 33 (1997) 773–776.
[12] H. Hwang, J.U. Sun, Production sequencing problem with reentrant work flows and sequence dependent setup times, International Journal of Production Research 36 (9) (1998) 2435–2450.
[13] S.M. Johnson, Optimal two- and three-stage production schedules with setup times included, Naval Research Logistics Quarterly 1 (1) (1954) 61–68.
[14] Y.D. Kim, J.U. Kim, S.K. Lim, H.B. Jun, Due date based scheduling and control policies in a multi-product semiconductor wafer fabrication facility, IEEE Transactions on Semiconductor Manufacturing 11 (1) (1998) 155–164.
[15] Y.D. Kim, D. Lee, J.U. Kim, A simulation study on lot release control, mask scheduling and batch scheduling in semiconductor wafer fabrication facility, Journal of Manufacturing Systems 17 (2) (1998) 107–117.
[16] W. Kubiak, S.X.C. Lou, Y. Wang, Mean flow time minimization in re-entrant job-shops with a hub, Operations Research 44 (5) (1996) 764–776.
[17] M.E. Kurz, R.G. Askin, Scheduling flexible flow lines with sequence dependent setup times, European Journal of Operational Research 159 (2004) 66–82.
[18] C. Low, C.J. Hsu, C.T. Su, A two-stage hybrid flowshop scheduling problem with a function constraint and unrelated alternative machines, Computers & Operations Research 35 (2008) 845–853.
[19] G. Miragliotta, M. Perona, Decentralised, multi-objective driven scheduling for reentrant shops: a conceptual development and a test case, European Journal of Operational Research 167 (2005) 644–662.
[20] L. Monch, R. Driebel, A distributed shifting bottleneck heuristic for complex job shops, Computers & Industrial Engineering 49 (2005) 363–380.
[21] D.C. Montgomery, Design and Analysis of Experiments, John Wiley & Sons, New York, 2000.
[22] T. Murata, H. Ishibuchi, H. Tanaka, Genetic algorithms for flowshop scheduling problems, Computers & Industrial Engineering 30 (4) (1996) 1061–1071.
[23] M. Nawaz Jr., E.E. Enscore, I. Ham, A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem, Omega, International Journal of Management Science 11 (1) (1983) 91–95.
[24] B.A. Norman, J.C. Bean, A genetic algorithm methodology for complex scheduling problems, Naval Research Logistics 46 (1999) 199–211.
[25] J.C. Pan, J.S. Chen, Minimizing makespan in re-entrant permutation flow-shops, Journal of the Operational Research Society 54 (2003) 642–653.
[26] W.L. Pearn, S.H. Chung, A.Y. Chen, M.H. Yang, A case study on the multistage IC final testing scheduling problem with reentry, International Journal of Production Economics 88 (2004) 257–267.
[27] M. Pinedo, Scheduling: Theory, Algorithms, and Systems, Prentice-Hall, New Jersey, 2002.
[28] L.V. Snyder, M.S. Daskin, A random-key genetic algorithm for the generalized traveling salesman problem, Working Paper, Department of Industrial Engineering and Management Sciences, Northwestern University, 2001.
[29] K. Sourirajan, R. Uzsoy, Hybrid decomposition heuristics for solving large-scale scheduling problems in semiconductor wafer fabrication, Journal of Scheduling 10 (2007) 41–65.
[30] A.H.G. Rinnooy Kan, Machine Scheduling Problems: Classification, Complexity and Computations, Martinus Nijhoff, The Hague, Holland, 1976.
[31] R. Uzsoy, C.Y. Lee, L.A. Martin-Vega, A review of production planning and scheduling models in the semiconductor industry (Part I: System characteristics, performance evaluation and production planning), IIE Transactions 24 (4) (1992) 47–60.
[32] R. Uzsoy, C.Y. Lee, L.A. Martin-Vega, A review of production planning and scheduling models in the semiconductor industry (Part II: Shop-floor control), IIE Transactions 26 (5) (1994) 44–55.
[33] F.D. Vargas-Villamil, D.E. Rivera, A model predictive control approach for real-time optimization of re-entrant manufacturing lines, Computers in Industry 45 (1) (2001) 45–57.
[34] D.L. Yang, W.H. Kuo, M.S. Chern, Multi-family scheduling in a two-machine reentrant flow shop with setups, European Journal of Operational Research 187 (3) (2008) 1160–1170.
[35] K. Yura, Cyclic scheduling for re-entrant manufacturing systems, International Journal of Production Economics 60–61 (1999) 523–528.
[36] M. Zandieh, S.M.T. Fatemi Ghomi, S.M. Moattar Husseini, An immune algorithm approach to hybrid flow shops scheduling with sequence-dependent setup times, Applied Mathematics and Computation 180 (2006) 111–127.