Event driven strategy based complete rescheduling approaches for dynamic m identical parallel machines scheduling problem with a common server




Computers & Industrial Engineering 91 (2016) 66–84



Alper Hamzadayi (a,*), Gokalp Yildiz (b)

(a) Department of Industrial Engineering, Yuzuncu Yil University, 65080 Van, Turkey; (b) Department of Industrial Engineering, Dokuz Eylül University, 35397 Izmir, Turkey

Article info

Article history: Received 24 December 2014; Received in revised form 4 November 2015; Accepted 9 November 2015; Available online 14 November 2015.

Keywords: Dynamic scheduling; m identical parallel machines scheduling with a common server; Simulation; Simulated annealing; Dispatching rules; Sequence dependent setup times.

Abstract

This paper addresses the dynamic m identical parallel machine scheduling problem in which the sequence dependent setup operations between the jobs are performed by a single server. An event driven rescheduling strategy based simulation optimization model, inspired by the limited order release procedure (Bergamaschi, Cigolini, Perona, & Portioli, 1997), is proposed for tackling the changing environment of the system. The proposed event driven rescheduling strategy is based on the logic of controlling the level of the physical work-in-process on the shop floor. Simulated annealing and dispatching rule based complete rescheduling approaches are proposed as simulation based optimization tools and adapted to the developed simulation model for generating new schedules according to the proposed event driven rescheduling strategy. The objective of this study is to minimize the length of the schedule (makespan). The performances of the approaches are compared on a hypothetical simulation case. The results of the extensive simulation study indicate that the simulated annealing based complete rescheduling approach produces better scheduling performance. © 2015 Elsevier Ltd. All rights reserved.

1. Introduction

Scheduling is the allocation of resources (e.g. machines) to tasks (e.g. jobs) in order to ensure the completion of these tasks in a reasonable amount of time, and scheduling strategies are usually classified into two categories (Ouelhadj & Petrovic, 2009): static and dynamic. In static scheduling, all jobs are available and ready for processing. Once the schedule is prepared, the processing sequence of jobs is determined and is not changed during processing (Fang & Xi, 1997). The static parallel machine scheduling problem is one of the most difficult classes of scheduling problems, and a considerable number of studies have been conducted in various commercial, industrial, and academic fields. Parallel machine scheduling problems can be roughly classified into three categories (Cheng & Sin, 1990): (1) identical parallel machines, (2) uniform parallel machines, and (3) unrelated parallel machines. On the other hand, there are two common types of setup time in classical scheduling problems: sequence independent or sequence dependent (Allahverdi, Aldowaisan, & Gupta, 1999).

In the first case, the setup time is usually added to the job processing time, whereas in the second case the setup time depends not only on the job currently being scheduled but also on the last scheduled job. Studies of parallel machine scheduling problems with sequence-dependent setup times to minimize makespan have attracted special attention in recent years. Comprehensive literature reviews on solution methods for different types of parallel machine scheduling problems can be found in Allahverdi, Ng, Cheng, and Kovalyov (2008), Allahverdi et al. (1999), and Pfund, Fowler, and Gupta (2004). The parallel machine scheduling problem is NP-hard (Ho & Chang, 1995). Due to the complexity of the problem, it is common practice to search for an appropriate heuristic rather than an optimal solution for the parallel-machine scheduling problem (Bilge, Kirac, Kurtulan, & Pekgun, 2004; Kim, Kim, Jang, & Chen, 2002; Mendes, Muller, França, & Moscato, 2002; Ventura & Kim, 2003). In dynamic scheduling, jobs arrive dynamically over time and are processed on machines continuously. Jobs whose operations are finished are moved out of the system continuously. There may be random and unpredictable events in the system, such as machine breakdowns and repairs, and the due dates of jobs may change during processing. Therefore, a previously feasible static schedule may turn infeasible when it is released to the shop floor (Fang & Xi, 1997).


Most scheduling problems in real-world production systems occur in such a dynamic environment. Rescheduling is the process of updating an existing production schedule in response to unpredictable real-time events so as to minimize their impact on system performance, and it needs to address two issues: how and when to react to real-time events. Regarding the first issue, the literature provides two main rescheduling strategies (Cowling & Johansson, 2002; Sabuncuoglu & Bayiz, 2000; Vieira, Herrmann, & Lin, 2003), namely schedule repair and complete rescheduling. Schedule repair refers to some local adjustments of the current schedule, whereas complete rescheduling refers to regenerating a new schedule from scratch (Ouelhadj & Petrovic, 2009). Regarding the second issue, when to reschedule, three policies have been proposed in the literature (Sabuncuoglu & Bayiz, 2000; Vieira et al., 2003): periodic, event-driven, and hybrid. Under the periodic strategy, rescheduling occurs regularly with a constant time interval (the rescheduling period) between consecutive rescheduling events, and no other events trigger rescheduling. Under the event-driven strategy, rescheduling is triggered in response to an unexpected event that alters the current system status. Under the hybrid strategy, rescheduling occurs not only periodically but also whenever an unexpected event that alters the current system status occurs.

The dynamic identical parallel machine scheduling problem with sequence-dependent setup times has been studied for many years and many solution techniques have been proposed. Ovacik and Uzsoy (1995) presented a family of rolling horizon procedures (RHPs) to solve the dynamic m identical parallel machines scheduling problem for minimizing the maximum lateness; their results show that, both on average and in the worst case, RHPs consistently outperformed dispatching rules (DRs) combined with local search methods. Kim and Shin (2003) proposed a restricted Tabu search (RTS) approach for the same problem. Experimental results indicated that, in general, the RTS outperforms heuristic algorithms such as the RHP, the basic Tabu search, and simulated annealing (SA); one drawback, however, is the requirement of a certain level of tuning and customization. Lee, Lin, and Ying (2010) proposed a restricted SA (RSA) for solving the same problem and reported that the RSA algorithm is highly effective when compared to the basic SA. Ying and Cheng (2010) and Lin, Lee, Ying, and Lu (2011) proposed iterated greedy heuristics for the problem presented by Ovacik and Uzsoy (1995); their computational results revealed that the iterated greedy heuristic is highly effective when compared to state-of-the-art algorithms on the same benchmark data set. Yang (2009) proposed a genetic algorithm based simulation optimization approach to solve the m identical parallel machines scheduling problem under dynamic job arrival and static machine availability constraints for minimizing makespan. More recently, Yang and Shen (2012) proposed a job-driven scheduling heuristic and a machine-driven scheduling heuristic to solve a practical-size m identical parallel machines scheduling problem with stochastic failures. The reader can refer to Vieira et al. (2003) and Ouelhadj and Petrovic (2009) for detailed surveys on dynamic scheduling in manufacturing systems.
In modern manufacturing systems, some resources, such as a piece of equipment, a team of setup workers, or a single operator, may be required throughout the setup process. Each of these situations defines a scheduling problem with a single server, and the type of server may vary according to the production environment. Examples of this problem arise frequently in production environments such as the manufacture of automobile components and the printing industry (Huang, Cai, & Zhang, 2010). Numerous research efforts have been devoted to the static parallel machine scheduling problem with a single server over the years. Examples from the literature indicating the importance of the problem in industry can be found in Kravchenko and Werner (1997), Hall, Potts, and Sriskandarajah (2000), Glass, Shafransky, and Strusevich (2000), Guirchoun, Souhkal, and Martineau (2005), Huang et al. (2010), and Kim and Lee (2012). Since the two identical parallel machines scheduling problem with a common server in which setup times are sequence dependent is NP-hard (Abdekhodaee & Wirth, 2002; Abdekhodaee, Wirth, & Gan, 2004; Koulamas, 1996; Kravchenko & Werner, 1997), the m identical parallel machines version of the same problem is also NP-hard.

When a dynamic and stochastic manufacturing environment is encountered in which static scheduling may be impractical, the use of real time scheduling approaches is required (Xiang & Lee, 2008). Simulation is a powerful tool to analyze complex, dynamic and stochastic systems, and a simulation-based real time scheduling system usually consists of four main components (Yoon & Shen, 2006): "a monitoring system to collect data from the physical shop floor; a simulator to generate simulation models, run the models, and analyze their results; a decision-making system to generate decisions such as schedules and priority rules; and an execution system to control the shop floor". The reader can refer to Negahban and Jeffrey (2014) for a recently published survey on simulation approaches for manufacturing systems.

One common problem of production systems is that continuously arriving customer orders cannot be released to the shop floor as soon as the planning system releases the job order, due to the physical work-in-process (PWIP) constraint. The PWIP requires storage space, and most companies strive to keep the actual amount of PWIP as low and constant as possible, so as to reduce the amount of capital tied up in the production or manufacturing process and to reduce the risk of obsolescence, especially in fast-moving sectors such as technology and consumer electronics. Therefore, the PWIP is an important constraint that must be taken into consideration in production systems.

Before giving a concise literature review, the notations used for convenience and readability are summarized in Table 1. The three-field notation a/b/c of Graham, Lawler, Lenstra, and Rinnooy Kan (1979) is used to describe a scheduling problem in the literature. The a field denotes the shop (machine) environment. The b field describes the setup information, other shop conditions, and details of the processing characteristics, and may consist of one or more entries. Finally, the c field contains the objective to be minimized. According to the standard three-field notation, Pm,S/STsd/Cmax denotes the m identical parallel machines scheduling problem with a common server and sequence dependent setup times under the objective of minimizing the makespan. Koulamas (1996) showed that the problem P2,S/STsi, with the objective of minimizing the machine idle time resulting from the unavailability of the server, is NP-hard in the strong sense.

Table 1
The notations used for convenience and readability.

Notation | Description
PDm,S | m dedicated parallel machines with a common server
Pm,S | m identical parallel machines with a common server
STsd | Sequence dependent setup time
STsi | Sequence independent setup time
Cmax | Makespan
Lmax | Maximum lateness
ΣCj | Total completion time
ΣwjCj | Total weighted completion time
ΣTj | Total tardiness
ΣwjTj | Total weighted tardiness
ΣUj | Number of tardy jobs; Uj = 1 if Tj > 0, 0 otherwise
ΣwjUj | Weighted number of tardy jobs

Table 2
An overview of the approaches in the literature on the parallel machines scheduling problem with a server.

Publications | Criterion (Comments) | Approach/Result | Setup type | Environment
Koulamas (1996) | Minimizing machine idle time resulting from unavailability of the server | Complexity results, a beam search heuristic | STsi | P2,S
Kravchenko and Werner (1997) | Cmax, minimizing the amount of time when some machine is idle due to the unavailability of the server | Complexity results, polynomially solvable cases, heuristics | STsi = 1 | P2,S & Pm,S
Kravchenko and Werner (1998) | Cmax | A pseudo-polynomial algorithm | STsi | Pm, m − 1 servers
Glass et al. (2000) | Cmax | Complexity results, polynomially solvable cases, algorithms | STsi | PDm,S
Hall et al. (2000) | Cmax, Lmax, ΣCj, ΣwjCj, ΣTj, ΣwjTj, ΣUj, ΣwjUj | Complexity results, polynomial or pseudo polynomial time algorithms | STsi | P2,S
Kravchenko and Werner (2001) | ΣCj | Heuristic | STsi = 1 | Pm,S
Wang and Cheng (2001) | ΣwjCj | An approximation algorithm | STsi | Pm,S
Abdekhodaee and Wirth (2002) | Cmax | Complexity results, mathematical model, heuristics | STsi | P2,S
Brucker et al. (2002) | Cmax, Lmax, ΣCj, ΣwjCj, ΣTj, ΣwjTj, ΣUj, ΣwjUj | New complexity results for special cases | STsi | Pm,S
Abdekhodaee et al. (2004) | Cmax | Complexity results, lower bound, heuristics | STsi | P2,S
Guirchoun et al. (2005) | Cmax, Lmax, ΣCj, ΣwjCj, ΣTj, ΣwjTj, ΣUj, ΣwjUj | Complexity results | STsi | Pm,S
Abdekhodaee et al. (2006) | Cmax | Greedy heuristic, genetic algorithm | STsi | P2,S
Zhang and Wirth (2009) | Cmax | LS (list scheduling) algorithm | STsi | Dynamic job arrivals & P2,S
Werner and Kravchenko (2010) | Cmax, Lmax, ΣwjCj | Complexity results, LS algorithm | STsi | Pm, multiple servers
Ou et al. (2010) | ΣCj | Heuristic algorithms | STsi | Pm, multiple servers
Huang et al. (2010) | Cmax | Mathematical model, hybrid genetic algorithm | STsd | PDm,S
Kim and Lee (2012) | Cmax | Mathematical models, hybrid heuristic algorithm | STsi | Pm,S
Gan et al. (2012) | Cmax | Mathematical models, variants of a branch-and-price scheme | STsi | P2,S
Jiang et al. (2013) | Cmax | An algorithm with a tight worst case ratio | STsi | Preemptive scheduling & P2,S
Su (2013) | Cmax | LPT algorithm | STsi | Dynamic job arrivals & P2,S
Hasani et al. (2014a) | Cmax | Mathematical models based on blocks and setups | STsi | P2,S
Hasani et al. (2014b) | Cmax | Simulated annealing, genetic algorithm | STsi | P2,S
Jiang et al. (2015) | Cmax | LS and LPT algorithms | STsi = 1 | Preemptive and non-preemptive scheduling & P2,S

[Fig. 1. Position of the order release mechanism within the shop floor scheduling and control system. Customer enquiries generate job orders that wait in the pre-shop pool; released jobs form the physical WIP (PWIP) on the shop floor and leave the system as completed jobs. Whenever the PWIP level of the shop floor drops below lower_level, a new schedule is generated with the jobs waiting in front of the server (if available) and the job order(s) waiting in the pre-shop pool (if available), and then at most upper_level − PWIP job order(s) are released to the shop floor according to the generated schedule.]
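To make the release rule of Fig. 1 concrete, the following minimal Python sketch mimics a PWIP-controlled pre-shop pool. The class and method names (PreShopPool, on_job_order, on_job_completed) are illustrative only, the scheduler is abstracted as a callable that simply orders the candidate jobs, and the jobs already waiting in front of the server are ignored for brevity; it is a sketch of the idea, not the Arena implementation.

from collections import deque

class PreShopPool:
    """Hypothetical sketch of the PWIP-controlled order release rule of Fig. 1."""

    def __init__(self, lower_level, upper_level, scheduler):
        self.lower_level = lower_level      # PWIP level that triggers rescheduling
        self.upper_level = upper_level      # maximum physical WIP allowed on the shop floor
        self.scheduler = scheduler          # callable: list of jobs -> ordered list of jobs
        self.pool = deque()                 # job orders waiting in the pre-shop pool (OE)
        self.pwip = 0                       # current physical WIP on the shop floor

    def on_job_order(self, job):
        """A new customer order arrives and waits in the pre-shop pool."""
        self.pool.append(job)
        self._maybe_release()

    def on_job_completed(self):
        """A job leaves the shop floor; PWIP drops and may trigger a release."""
        self.pwip -= 1
        self._maybe_release()

    def _maybe_release(self):
        # Rescheduling is triggered only when PWIP falls below the lower bound.
        if self.pwip >= self.lower_level or not self.pool:
            return
        # Schedule the waiting orders and release at most upper_level - PWIP of them.
        quota = self.upper_level - self.pwip
        ordered = self.scheduler(list(self.pool))
        released = ordered[:quota]
        for job in released:
            self.pool.remove(job)
        self.pwip += len(released)
        print(f"released {released}, PWIP now {self.pwip}")

# Example: FIFO "scheduler", workload bounds 3 and 6.
pool = PreShopPool(lower_level=3, upper_level=6, scheduler=lambda jobs: jobs)
for j in range(8):
    pool.on_job_order(f"J{j}")
pool.on_job_completed()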

Table 3
Job attributes used in the developed simulation model.

Job attributes | Definition
jserial | The serial number of job j (job serial numbers are special numbers assigned to each job at the planning phase)
jAT | The arrival time of job j to the system
jtype | The job type of job j
jpr_time | The processing time of job j
jpreemtion | 1 if job j has been preempted, 0 otherwise
jschedule | 1 if job j has been previously scheduled, 0 otherwise
jmachine | The planned machine on which job j will be processed
jsetup_time | The needed (assigned) setup time before the production operation of job j
jserver_pr | The sequence in which the setup operation of job j will be performed by the common server

Table 4
Terms and variables used in the developed simulation model.

Terms & variables | Definition
OE | The waiting area of the customer orders (pre-shop pool)
SQ1, SQ2 | The waiting areas of the scheduled jobs in front of the common server; SQ2 depends on SQ1, and whenever there is a job waiting in SQ1, the one having the highest server priority (max(jserver_pr)) is transferred to SQ2
TPJ | Total number of jobs being processed on the machines or being set up by the common server at any moment
PWIP | Physical work-in-process (the sum of TPJ and the number of jobs in SQ1 and SQ2)
ARtime | Inter arrival times of the jobs
PNJ | Planned number of jobs to be produced
lower_level | Lower level workload bound
upper_level | Upper level workload bound
NJT | Number of job types that can be produced on the identical parallel machines production system
schedule_state | Controls the logic of the model (0 – model is running, 1 – model is paused to generate a new schedule)
event1_control | Controls whether the task of event number 1 is finished (0 – continuing, 1 – finished)
NPM | Number of identical parallel machines used in the production system
PT(.) | Processing time matrix for the jobs
STM(.,.) | Sequence dependent setup time matrix for the jobs
MTBF(.) | Mean time between failures matrix for the machines
MTTR(.) | Mean time to repair matrix for the machines
DMTBF(.) | Used for controlling whether the cumulative processing time of machine m exceeds the MTBF or not
TQ | The number of new jobs (waiting jobs in OE) that can be released to the shop floor after rescheduling; this is the difference between upper_level and PWIP
wtbf(.) | Whenever the failure/repair logic is activated for machine m, this variable shows for how much time machine m will continue processing just before the failure
event_number | The event from which the new schedule will be generated
Cserver | The time at which the common server waits
Ccurrent_makespan(.) | The time at which each machine waits
JTNcurrent(.) | Types of the latest assigned jobs to each machine
JSNcurrent(.) | Serial numbers of the latest assigned jobs to each machine
Mstatus(.) | Active/failed status of each machine; 1 if machine m is currently active, 0 otherwise

Kravchenko and Werner (1997) studied the problem P2,S/STsi = s (unit setup)/Cmax, showed that it is strongly NP-hard, and suggested a pseudo-polynomial time algorithm for the problem Pm,S/STsi = 1/Cmax. Kravchenko and Werner (1998) considered the problem with m − 1 servers and m machines and presented a pseudo-polynomial time algorithm for minimizing Cmax. Hall et al. (2000) showed that the problem P2,S/STsi/Cmax is NP-hard and presented several heuristic algorithms for Pm,S/STsi/Cmax. Kravchenko and Werner (2001) also considered the problem Pm,S/STsi = 1/ΣCj; they presented a heuristic algorithm and proved that the heuristic has an absolute error bounded by the product of the number of short jobs (jobs with processing times less than m − 1) and m − 2. Glass et al. (2000) suggested an approximation algorithm for the problem PDm,S/STsi/Cmax and proved that the problem with two machines is NP-hard in the strong sense even if all setup times or all processing times are equal. Wang and Cheng (2001) addressed the problem Pm,S/STsi/ΣwjCj and provided an approximation algorithm to tackle it. Brucker, Dhaenens-Flipo, Knust, Kravchenko, and Werner (2002) obtained a number of new complexity results for the problems whose study began with Kravchenko and Werner (1997) and Hall et al. (2000). Abdekhodaee and Wirth (2002) addressed the problem P2,S/STsi/Cmax for the situation of alternating job processing; they proved that the problem is strongly NP-hard and presented an integer programming formulation.



[Fig. 2. Transferring of an arriving job to the pre-shop pool or the server queue (Simulation Procedure 1). The procedure is activated whenever a new replication begins: the state variables are initialized (state_control ← 0, serial ← 1, an MTBF and an MTTR are generated for each machine), the arrival of PNJ jobs is scheduled according to the inter arrival time ARtime, and each arriving job j receives jserial ← serial, jAT ← simulation clock, a job type drawn from u.d.(1, NJT), jpr_time ← PT(jtype) and jpreemtion ← 0. While event1_control = 0 and the PWIP is below upper_level, the arriving job is inserted into SQ1 (jschedule ← 1, schedule_state ← 1, event_number ← 1) and Simulation Procedure 7 is signalled; once the PWIP reaches upper_level, event1_control ← 1 and subsequent arrivals are inserted into OE.]
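As a rough illustration of Simulation Procedure 1, the sketch below routes an arriving job either to the server queue SQ1 (while the system is still being filled, i.e. event1_control = 0 and PWIP is below upper_level) or to the pre-shop pool OE. It is a simplification in plain Python; the attribute and variable names mirror Tables 3 and 4, and the rescheduling call is only stubbed by setting event_number and schedule_state.

import random

def handle_arrival(state, job_type, clock):
    """Simplified sketch of Simulation Procedure 1 (Fig. 2)."""
    job = {
        "serial": state["serial"],
        "AT": clock,                         # arrival time
        "type": job_type,
        "pr_time": state["PT"][job_type],    # deterministic processing time per type
        "preemption": 0,
        "schedule": 0,
    }
    state["serial"] += 1

    if state["event1_control"] == 0 and state["PWIP"] < state["upper_level"]:
        # System is still being filled: release the job directly to SQ1
        # and trigger a rescheduling (event_number 1, Simulation Procedure 7).
        job["schedule"] = 1
        state["SQ1"].append(job)
        state["PWIP"] += 1
        state["event_number"] = 1
        state["schedule_state"] = 1          # the model would be paused here to reschedule
        if state["PWIP"] >= state["upper_level"]:
            state["event1_control"] = 1      # filling phase is over
    else:
        # Otherwise the order waits in the pre-shop pool.
        state["OE"].append(job)
    return job

# Tiny usage example with two job types.
state = {"serial": 1, "PT": {1: 300, 2: 250}, "event1_control": 0,
         "PWIP": 0, "upper_level": 2, "SQ1": [], "OE": [],
         "event_number": 0, "schedule_state": 0}
for t in range(4):
    handle_arrival(state, job_type=random.choice([1, 2]), clock=t * 52.6)
print(len(state["SQ1"]), "jobs in SQ1,", len(state["OE"]), "orders in OE")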

They also developed polynomial algorithms for several special cases. Abdekhodaee et al. (2004) investigated special cases of the problem P2,S/STsi = s/Cmax and showed that an optimal sequence exists in the cases of equal processing times and equal setup times. The complexities of server models are discussed in Guirchoun et al. (2005) and Werner and Kravchenko (2010). Abdekhodaee, Wirth, and Gan (2006) proposed greedy heuristics and a genetic algorithm for the problem P2,S/STsi = s/Cmax. Zhang and Wirth (2009) considered an online (dynamic job arrivals) scheduling problem P2,S/STsi/Cmax. Huang et al. (2010) proposed a mathematical model and a hybrid genetic algorithm for the problem PDm,S/STsd/Cmax. Almost simultaneously, Ou, Qi, and Lee (2010) addressed the identical parallel machines scheduling problem with multiple unloading servers. Kim and Lee (2012) addressed the problem Pm,S/STsi/Cmax and proposed a hybrid heuristic algorithm to solve it. Gan, Wirth, and Abdekhodaee (2012) extended the analysis started by Abdekhodaee et al. (2006); they proposed two mixed integer linear programming formulations and two variants of a branch-and-price scheme. Their computational experiments showed that

for small instances one of the mixed integer linear programming formulations was the best, whereas for the larger instances the branch-and-price scheme worked better. Jiang, Dong, and Ji (2013) considered the preemptive version of scheduling on two identical machines with a common server and presented an algorithm that produces optimal schedules for two special cases: equal processing times (P2,S/pmtn,pj = p/Cmax) and equal setup times (P2,S/pmtn,sj = s/Cmax). Jiang, Zhang, Hu, Dong, and Ji (2015) studied the preemptive and non-preemptive two identical parallel machines scheduling problem to minimize the makespan, where each job has to be loaded and unloaded by a common server before and after being processed on one of the two machines; they assumed that both loading and unloading times are one unit (P2,S/sj = tj = 1/Cmax) and applied the List Scheduling (LS) and Longest Processing Time (LPT) heuristics to tackle the NP-hard non-preemptive variant of the problem. Su (2013) studied the online LPT algorithm on two machines with a common server, where jobs arrive over time and the objective is to minimize Cmax. More recently, Hasani, Kravchenko, and Werner (2014a) suggested several mixed integer programming formulations, based on blocks and setups, for the problem P2,S/pi,si/Cmax, which turned out to be superior to the algorithms presented in Abdekhodaee et al. (2006) and Gan et al. (2012).


[Fig. 3. Finding the job having the highest server priority (Simulation Procedure 2). The procedure is activated at the start of each replication; whenever schedule_state = 0, at least one job is waiting in SQ1 and no job is waiting in SQ2, the job with the maximum server priority (max(jserver_pr)) is pulled from SQ1 and inserted into SQ2.]

In their next study, Hasani, Kravchenko, and Werner (2014b) proposed a simulated annealing and a genetic algorithm for the problem P2,S/pi,si/Cmax. The performances of these algorithms were evaluated on instances with up to 1000 jobs and compared with the existing results in the literature; both algorithms showed an excellent behavior, and the objective function values obtained were very close to a lower bound. According to this literature review, studies on the parallel machines scheduling problem with a common server can be roughly classified into three categories: (1) studies related to the two parallel machines scheduling problem with a common server and sequence independent setup times, (2) studies on the m parallel machines scheduling problem with a common server and sequence independent setup times, and (3) studies discussing the complexity of the parallel machines scheduling problem with a common server. The published papers are also summarized, taking into account the solution approaches, setup types, and production environments, in Table 2. As can be seen in Table 2, only two papers (Su, 2013; Zhang & Wirth, 2009) dealt with the dynamic parallel machines scheduling problem with a common server, and they focus on two identical parallel machines with sequence independent setup times. However, two identical parallel machines are not able to respond to higher product volumes.

[Fig. 4. Finding the machine and the server status in order to catch the time at which they are simultaneously idle (Simulation Procedure 3). The procedure scans SQ2; when a job is waiting there, its planned machine m is identified and, as soon as machine m and the server are both idle while schedule_state = 0, a signal code is sent to activate the server (Simulation Procedure 4).]

This review also reveals that no paper has handled the dynamic m identical parallel machines scheduling problem with a common server and sequence dependent setup times; this paper bridges that gap. In this paper, for the first time, the dynamic m identical parallel machines scheduling problem with a common server and sequence dependent setup times is tackled. The studied problem is briefly summarized as follows: jobs enter the production system one-by-one depending on stochastic inter arrival times, and the type of an arriving job is also stochastic. Each job requires a single operation and may be processed on any of the m parallel machines. The setup times for the machines are significant and sequence dependent, and there is only one server serving all machines for the setup process. It is assumed that the processing and setup times of an arriving job are known beforehand (deterministic) depending on the job type. Stochastic breakdowns may occur on the machines and job preemption is allowed. Detailed information about the problem being studied is given in the next section. For modelling and solving this problem, simulated annealing (SA) and dispatching rule (DR) based complete rescheduling approaches (CRAs) are proposed as simulation optimization tools. An event-driven rescheduling strategy is proposed, inspired by the limited order release procedure (Bergamaschi, Cigolini, Perona, & Portioli, 1997); it is based on the logic of controlling the level of PWIP on the shop floor. The objective of this study is to minimize the total length of the schedule when all the jobs are finished (makespan). The performances of the CRAs are compared on a hypothetical simulation case.

The rest of this paper is organized as follows. Section 2 gives a detailed explanation of the problem and the proposed solution methodology. Development of the simulation model is described in Section 3.


The proposed simulated annealing based complete rescheduling approach (SA/CRA) and dispatching rule based complete rescheduling approaches (DR/CRAs) are presented in Section 4. Subsequently, computational experimental results are reported in Section 5. Finally, conclusions are given in the last section.

2. Problem description and proposed solution methodology

In this section, a detailed explanation of the problem and the proposed solution methodology is given.

2.1. Problem description

Production orders, which may be generated from a requirements planning system or originate directly from customers' orders, enter the m identical parallel machines system one-by-one depending on stochastic inter arrival times. It has been observed in the literature that the job arrival process closely follows a Poisson distribution (Rangsaritratsamee, Ferrel, & Kurtz, 2004); hence, the time between arrivals of jobs is exponentially distributed. The type of an arriving job is also stochastic and is selected from a discrete uniform distribution. Each job requires a single operation and may be processed on any of the m parallel machines. The setup times for the machines are significant and sequence dependent, i.e., the setup time for one job is affected by its preceding job on the same machine. There is only one server serving all machines for the setup process. Setup times are classified as minor, medium, and major setups.

• Minor Setup Time is needed if a machine starts to process any of the jobs for the first time at the beginning of the first schedule. If there is a preempted job after a machine breakdown, a Minor Setup Time is also required when the remaining part of the preempted job is processed on the same machine immediately after it has been repaired.
• Medium Setup Time is required when the type of the previous job produced on a machine is the same as the type of the subsequent job to be produced on the same machine. The subsequent job may be any preempted job.


[Fig. 5. Server related activities (Simulation Procedure 4). On receiving the activation signal, the job j is pulled out of SQ2 and the server is seized for its setup on machine m; Cserver is advanced by jsetup_time and, depending on whether DMTBF(m) + jpr_time reaches MTBF(m), Ccurrent_makespan(m) is set to Cserver + jpr_time or to Cserver + wtbf(m), together with JTNcurrent(m) ← jtype and JSNcurrent(m) ← jserial. After waiting jsetup_time time units for the setup operation, the server is released and a signal code is sent to activate machine m (Simulation Procedure 5).]


• Major Setup Time (sequence dependent setup time) is needed when the type of the previous job produced on a machine is different from the type of the subsequent job to be produced on the same machine. The subsequent job may be any preempted job.

Stochastic breakdowns may occur on the machines and job preemption is allowed. The unreliability of a machine is expressed in terms of two parameters (Law & Kelton, 1991), namely the mean time between failures (MTBF) and the mean time to repair (MTTR). MTBF and MTTR times for the machines are assumed to follow exponential distributions.

2.2. Proposed solution methodology

Scheduling jobs is a challenging task since new information arrives continuously; as a result, complete schedules cannot be generated in advance, and existing production schedules quickly deliver poor system performance as the system evolves dynamically.

It is frequently reported in the dynamic scheduling literature that the complete rescheduling strategy is good at reaching optimal solutions but requires more computational time (Ouelhadj & Petrovic, 2009). One of the reasons is that the number of jobs to be evaluated in each new schedule may be excessive, and this number directly increases the computation time. In this context, an event driven rescheduling strategy based on controlling the level of PWIP on the shop floor is developed, inspired by the limited order review/release procedure (Bergamaschi et al., 1997). The reason for developing this strategy is to create new schedules when the amount of PWIP in the production system drops below a certain amount (the lower level workload bound) and to keep the number of jobs on the shop floor at a level suitable for the actual workload capacity (the upper level workload bound) of the system. In doing so, the aim is to retain the advantage of the complete rescheduling strategy in reaching optimal solutions while eliminating its disadvantage of requiring more computational time, which originates from the number of jobs evaluated in a new schedule; the number of jobs evaluated in a new schedule is thus governed entirely by the workload bounds of the system. The proposed event driven rescheduling strategy also considers machine breakdowns and machine repairs when generating a new schedule. A new schedule is generated whenever any of the four events explained below is realized in the production system (a minimal sketch of the resulting dispatch logic is given at the end of this subsection).

• event_number 1: Whenever a new job arrives to the system, this job triggers the CRA for optimization. The optimization is carried out for each incoming job together with the available jobs waiting in front of the common server, and the scheduled jobs are then released to the shop floor to execute the generated schedule. This event is used to fill the system up to a certain level of jobs, and its task ends whenever the level of PWIP in the production system reaches the upper level workload bound.

• event_number 2: Whenever a machine breaks down, the jobs waiting in front of the common server (jobs released to the shop floor) and the job taken from the broken-down machine (so that the remaining part of the job can be processed) are rescheduled on the machines that are available at that moment. The newly generated schedule is then executed on the shop floor.

• event_number 3: Whenever a broken-down machine gets repaired, the jobs waiting on the shop floor (jobs released to the shop floor but not yet processed on any machine) are rescheduled over all machines available at that moment. The newly generated schedule is then executed on the shop floor.

• event_number 4: After the task of event_number 1 ends, this event becomes active. Whenever the level of PWIP on the shop floor falls below the lower level workload bound, rescheduling is triggered and guarantees the existence of PWIP up to at most the upper level workload bound on the shop floor. The optimization is carried out with the incoming job orders and the available jobs waiting in front of the common server, and the scheduled jobs are then released to the shop floor to execute the generated schedule. Due to the upper level workload bound of the production system, some of the incoming jobs, which would result in poor optimization performance, are not included in the new schedule.

This proposed event driven rescheduling strategy is similar in one respect to the queue size (EDQS) strategy developed by Vieira, Herrmann, and Lin (2000). In the EDQS strategy, rescheduling occurs when the number of arrived jobs reaches a specific threshold, so the PWIP on the shop floor is not controlled. In the proposed strategy, by contrast, rescheduling is performed when the level of PWIP on the shop floor falls below the lower level workload bound, and the existence of PWIP up to at most the upper level workload bound on the shop floor is guaranteed. The reader can refer to Bergamaschi et al. (1997) for a detailed survey on order review and release strategies in a job-shop environment.

In all of the event numbers, the new schedule is generated by considering the machines available at that moment. If a new schedule is to be generated due to a machine breakdown (event_number 3) and there is no machine available, no schedule is generated; it is awaited until some machine becomes available (until the realization of event_number 4). Similarly, if event_number 1 or event_number 2 occurs and there is no machine available, it is again awaited until some machine becomes available (until the realization of event_number 4) and no new schedule is generated. The position of the proposed order release mechanism within the shop floor scheduling and control system is shown in Fig. 1.
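As a rough illustration of how the four events map to different candidate job sets, the following Python sketch (with hypothetical names; the optimizer is abstracted as a callable) mirrors the dispatch logic described above.

def dispatch_reschedule(event_number, sq1, oe, tpj, upper_level, optimize):
    """Return the jobs released to the shop floor for a triggered event (sketch only)."""
    if event_number in (1, 2, 3):
        # Events 1-3: only jobs already released to the shop floor
        # (waiting in front of the common server) are rescheduled.
        candidates = list(sq1)
        quota = len(candidates)              # all of them stay on the shop floor
    elif event_number == 4:
        # Event 4: jobs in front of the server plus orders in the pre-shop pool,
        # but at most upper_level - TPJ jobs may end up on the shop floor.
        candidates = list(sq1) + list(oe)
        quota = upper_level - tpj
    else:
        raise ValueError("unknown event number")
    schedule = optimize(candidates)
    return schedule[:quota]

# Usage with a trivial "optimizer" that keeps the given order.
released = dispatch_reschedule(event_number=4,
                               sq1=["J1", "J2"], oe=["O1", "O2", "O3"],
                               tpj=4, upper_level=7,
                               optimize=lambda jobs: jobs)
print(released)   # -> ['J1', 'J2', 'O1'] : 3 = upper_level - TPJ jobs released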

[Fig. 6. Machine related activities (Simulation Procedure 5). On receiving the activation signal, the job j is placed on machine m; if DMTBF(m) + jpr_time reaches MTBF(m), wtbf(m) is computed and the failure/repair procedure for machine m is signalled (Simulation Procedure 6; control of the related machine passes to that procedure while the job is still on the machine), otherwise the machine processes the job for jpr_time time units and is then released. When a finished job causes the PWIP to fall below lower_level, schedule_state ← 1, event_number ← 4 and TQ ← upper_level − PWIP are set and Simulation Procedure 7 is signalled; in either case the finished job is placed among the completed jobs.]


[Fig. 7. Failure/repair procedure (Simulation Procedure 6). After waiting wtbf(m) time units of busy time, machine m is deactivated (Mstatus(m) ← 0), the job j is decoupled from the machine, its remaining processing time is assigned as jpr_time, jpreemtion ← 1 and the job is inserted into SQ1; a new MTBF is generated, schedule_state ← 1 and event_number ← 2 are set and Simulation Procedure 7 is signalled. The machine is then repaired for MTTR(m) time units, Ccurrent_makespan(m) is increased by MTTR(m), DMTBF(m) is reset and a new MTTR is generated; finally the machine is reactivated (Mstatus(m) ← 1), schedule_state ← 1 and event_number ← 3 are set and Simulation Procedure 7 is signalled again.]
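A rough Python rendering of the busy-time failure/repair logic behind Fig. 7 is given below. random.expovariate supplies the exponential MTBF and MTTR draws; the preempted job is represented only by its remaining processing time, and the minor setup after repair is ignored. This is a sketch under these simplifying assumptions, not the Arena procedure.

import random

def run_job_with_breakdowns(job_time, mtbf_mean, mttr_mean, rng=random):
    """Process one job on a machine that fails after an exponential amount of
    *busy* time (busy-time approach); returns total elapsed time and the number
    of breakdowns suffered while the job was on the machine (sketch)."""
    remaining = job_time
    elapsed = 0.0
    breakdowns = 0
    time_to_failure = rng.expovariate(1.0 / mtbf_mean)   # busy time until next failure
    while remaining > 0:
        if remaining <= time_to_failure:
            # Job finishes before the next failure.
            elapsed += remaining
            time_to_failure -= remaining
            remaining = 0.0
        else:
            # Machine breaks down: the job is preempted with its remaining time,
            # the machine is repaired (MTTR), then the job resumes.
            elapsed += time_to_failure
            remaining -= time_to_failure
            elapsed += rng.expovariate(1.0 / mttr_mean)   # repair time
            time_to_failure = rng.expovariate(1.0 / mtbf_mean)
            breakdowns += 1
    return elapsed, breakdowns

random.seed(0)
t, n = run_job_with_breakdowns(job_time=300, mtbf_mean=12000, mttr_mean=3000)
print(f"job finished after {t:.1f} time units with {n} breakdown(s)")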


3. Simulation based optimization model

In this section, a detailed explanation of the developed simulation based optimization model is given.

3.1. Development of the simulation model

The dynamic m identical parallel machines scheduling system with a common server is modelled using Arena 10.0 with Visual Basic 6.5 interactions. The job attributes and the terms and variables used in the developed model are given in Tables 3 and 4, respectively, and the Arena model logic of the system is depicted in Figs. 2–8. The aim of the developed model is to minimize the makespan of the system until a certain amount of work has been completed on a certain number of identical parallel machines. Jobs enter the production system one-by-one depending on stochastic inter arrival times; the time between arrivals is exponentially distributed, and the type of an arriving job is selected from a discrete uniform distribution between 1 and NJT. Machine breakdowns are modelled using the busy time approach given in Law and Kelton (1991). When dispatching rules (e.g., SPT) are used, even if processing times of the jobs are stochastic, the processing time information is known before the jobs start processing on the machines, so the situation is no different from the deterministic case.


[Fig. 8. Generating a new schedule and releasing it to the shop floor (Simulation Procedure 7). On receiving the event-activation signal, the procedure checks event_number. For events 1, 2 and 3, any job waiting in SQ2 is transferred back to SQ1, a new schedule is generated using only the jobs in SQ1 (gathering information from the shop floor), the new schedule is released in front of SQ1 and schedule_state ← 0. For event 4, any job waiting in SQ2 is transferred back to SQ1, a new schedule is generated using the jobs in OE and SQ1, at most TQ (if TQ > 0) previously unscheduled jobs having the maximum server priority (max(jserver_pr)) are transferred from OE to SQ1 as part of the released schedule, and schedule_state ← 0. The optimization methods performed and how the related attributes of the jobs are assigned are explained in Section 5.]

Table 5
Information required for the optimization.

Terms & variables | Definition
event_number | The event from which the new schedule will be generated
Cserver | The time at which the common server waits
Ccurrent_makespan(.) | The time at which each machine waits
JTNcurrent(.) | Types of the latest assigned jobs to each machine
JSNcurrent(.) | Serial numbers of the latest assigned jobs to each machine
Mstatus(.) | Active/failed status of each machine; 1 if machine m is currently active, 0 otherwise

[Fig. 9. Collected job information from the simulation model depending on the realized event.]

On the other hand, this situation shows in fact that the information related to processing times is at the highest level. Similarly, it is assumed that the processing and setup times of the jobs are known beforehand (deterministic) depending on the job types; the developed model can easily be transformed into a form containing stochastic processing and setup times in this sense. Common random numbers (Law & Kelton, 1991) are used to produce the inter arrival times, job type assignments, MTBF times, and MTTR times of each machine so that the proposed approaches can be compared more soundly. The developed simulation based optimization model consists of seven simulation procedures. Simulation procedures 1, 2 and 3 (see Figs. 2–4) are activated as soon as the simulation replication begins to run, and simulation procedures 4, 5, 6 and 7 (see Figs. 5–8) are triggered by other procedures by sending a signal code. Simulation procedure 4 is triggered by procedure 3 for performing the setup operation of the next job by the common server. After the setup operation of the next job is finished, simulation procedure 5 is triggered by procedure 4 for processing the job on the related machine.
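Common random numbers can be realized by giving each stochastic input its own dedicated random stream, so that every rescheduling approach sees exactly the same arrival times, job types and failure pattern. The snippet below is a minimal sketch of this idea with independent random.Random instances; the stream names and seed offsets are illustrative.

import random

def make_streams(seed):
    """One dedicated generator per stochastic input (common random numbers)."""
    return {
        "arrivals": random.Random(seed + 1),
        "job_types": random.Random(seed + 2),
        "mtbf": random.Random(seed + 3),
        "mttr": random.Random(seed + 4),
    }

def sample_job_stream(streams, n_jobs, mean_interarrival, n_job_types):
    clock = 0.0
    jobs = []
    for _ in range(n_jobs):
        clock += streams["arrivals"].expovariate(1.0 / mean_interarrival)
        jobs.append((round(clock, 1), streams["job_types"].randint(1, n_job_types)))
    return jobs

# The same seed reproduces the identical job stream for every approach compared.
for approach in ("SA/CRA", "SPT/CRA"):
    streams = make_streams(seed=2015)
    print(approach, sample_job_stream(streams, 3, 52.6, 50))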


[Fig. 10. Neighborhood structure. The swap operator exchanges the contents of two randomly selected positions; e.g., the job sequence j1 j2 j3 j4 j5 becomes j4 j2 j3 j1 j5 after swapping positions 1 and 4.]

Whenever a machine breakdown occurs, simulation procedure 5 triggers procedure 6 to activate the failure/repair procedure for the related machine. Simulation procedure 7 is related to the simulation based optimization and is triggered by procedures 1, 5 and 6 depending on the realized event. Whenever simulation procedure 7 is triggered within a simulation replication, the simulation model is paused to generate a new schedule, and the run of the simulation model is then continued by assigning the related attributes to the jobs according to the best solution vector found.


[Fig. 11. (a) A feasible and (b) an infeasible solution for an example in which 11 jobs are to be released; J denotes previously scheduled jobs waiting in front of the server and O denotes job orders waiting in OE.]

3.2. Verification and validation of the simulation model

Table 6
Some new terms used in the proposed algorithm.

Terms & variables | Definition
BestSolution(.) | The best solution found
CurrentSolution(.) | Current solution whose neighborhood is going to be searched
NeighborSolution(.) | The solution generated by a small perturbation from the CurrentSolution
CmaxBest | Function value (makespan) of BestSolution
CmaxCurrent | Function value (makespan) of CurrentSolution
CmaxNeighbor | Function value (makespan) of NeighborSolution
Pacc | Probability of acceptance
U | Uniformly distributed random number between 0 and 1
T0, Tcry, Tc | Initial, final, and current temperature, respectively
q | Cooling rate
IT | Number of iterations at each temperature level

Since the present study involves a conceptual model of the dynamic m identical parallel machine scheduling problem with sequence-dependent setup times and a single server for the physical work-in-process constrained system, a multi-level verification exercise is performed to ensure a correct programming and implementation of the conceptual model, using the following steps:

(a) Debugging the program.
(b) Checking the internal logic of the model; in this sense, the model is traced step-by-step and the animation of the model is set up and monitored carefully.
(c) Running the model under different settings of the input parameters and checking whether the model behaves in a plausible manner.

4. Proposed simulated annealing and dispatching rules based complete rescheduling approaches

In this section, the SA/CRA and DR/CRAs are explained in detail.

4.1. Proposed simulated annealing based complete rescheduling approach

The SA algorithm is chosen, among other meta-heuristics, mainly because neighborhood structures can be easily defined.

4.1.1. Overview of simulated annealing

The SA is inspired by an analogy to annealing in solids. This idea was first propounded by Metropolis, Rosenbluth, Teller, and Teller (1953), and Kirkpatrick, Gelatt, and Vecchi (1983) applied it to combinatorial and other optimization problems. Often the solution space of an optimization problem has many local optima. A simple local search algorithm proceeds by choosing a random initial solution and generating a neighbor from that solution; the neighboring solution is accepted if it is a cost decreasing transition. Such a simple algorithm has the drawback of frequently converging to a local optimum. Though the SA is a local search algorithm itself, it avoids getting trapped in a local optimum by also accepting cost increasing neighbors with some probability. The performance of the SA is influenced by factors such as the initial temperature (T0), the final temperature (Tcry), the cooling rate (q), and the number of iterations at each temperature (IT). First, an initial solution is randomly generated and a neighbor is found around the initial solution; this new neighbor solution is accepted if it improves the value of the cost function. The SA uses an intensification scheme in this manner to search the neighborhood of initial solutions efficiently.


However, in order not to get stuck in a local optimum, the SA also has to use a diversification scheme and search the whole solution space. As a diversification scheme, when the cost function value of the new neighbor solution is worse, the SA accepts it if p < e^(−Δ/Tc), where p is generated from a uniform distribution between 0 and 1, Δ is the difference between the cost function values of the new neighbor solution and the current solution, and Tc is the current temperature. Thus, in the case of a slow temperature reduction, the algorithm converges to the global optimum solution.

The proposed SA/CRA algorithm for solving the addressed problem consists of four phases. Whenever a new schedule is to be generated, new information related to the latest status of the system is collected from the simulation model, and the initial solution is generated in line with this information using the encoding suitable for the SA/CRA algorithm. Subsequently, the SA/CRA algorithm starts working with suitable parameters and new solutions are generated according to the logic of the algorithm. After solutions are generated through the encoding algorithm, the decoding algorithm is applied to evaluate their fitness values. The simulation model then resumes over the best solution obtained by the SA algorithm.

4.1.2. Information gathered from the shop floor and encoding the initial solution

When an event which causes a new schedule occurs, the control system of the simulation model:

• stops running the model;
• collects the related information about the system state;
• uses this information to optimize the schedule; and
• then starts running the model again by fixing this optimized schedule until the next event that will cause a new schedule.
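This stop, collect, optimize and resume cycle can be pictured as a small control loop. The sketch below assumes a simulation object exposing hypothetical pause(), system_state() and resume(schedule) hooks (together with a tiny stand-in class so the example runs); it only illustrates the interaction between the simulation model and the optimizer and is not the actual Arena/Visual Basic mechanism.

class _DemoSimulation:
    """Minimal stand-in for the paused/resumed simulation model (illustrative)."""
    def __init__(self, events):
        self.clock = 0.0
        self._events = iter(events)            # (time, waiting_jobs) pairs
        self._waiting = []

    def run_until_next_rescheduling_event(self):
        try:
            self.clock, self._waiting = next(self._events)
            return self.clock
        except StopIteration:
            return None

    def pause(self):
        pass                                    # a real model would freeze its clock here

    def system_state(self):
        return list(self._waiting)              # e.g. jobs waiting in front of the server

    def resume(self, schedule):
        print(f"t={self.clock:>5}: resuming with schedule {schedule}")


def rescheduling_controller(simulation, optimizer, horizon):
    """Sketch of the stop-collect-optimize-resume cycle of Section 4.1.2."""
    while True:
        event_time = simulation.run_until_next_rescheduling_event()
        if event_time is None or event_time > horizon:
            break
        simulation.pause()                      # stop running the model
        state = simulation.system_state()       # collect the related system information
        schedule = optimizer(state)             # generate a new schedule (SA/CRA or DR/CRA)
        simulation.resume(schedule)             # fix the schedule and continue the run


sim = _DemoSimulation([(10.0, ["J3", "J1", "J2"]), (25.0, ["J5", "J4"])])
rescheduling_controller(sim, optimizer=sorted, horizon=100.0)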


1.  Initialization; specify SA parameters (T0, Tcry, q, IT)
2.  Run procedure 1 (see Figure 13)
3.  If event_number = 4 Then
4.      Run procedure 2 (see Figure 14)
5.  End if
6.  CurrentSolution(.) ← NJS(.)
7.  CmaxCurrent ← Calculate the current fitness value using the decoding algorithm
8.  BestSolution(.) ← CurrentSolution(.), CmaxBest ← CmaxCurrent, Tc ← T0
9.  While Tcry < Tc do
10.     For i = 1 to IT do
11.         JS(.) ← Generate a new JS(.) from the old JS(.) by using the swap operator
12.         If event_number = 4 Then
13.             Run procedure 2 (see Figure 14)
14.         End if
15.         NeighborSolution(.) ← NJS(.)
16.         CmaxNeighbor ← Calculate the fitness value of the neighbor solution
17.         Δ ← CmaxNeighbor − CmaxCurrent
18.         If Δ <= 0 Then
19.             CurrentSolution(.) ← NeighborSolution(.)
20.             CmaxCurrent ← CmaxNeighbor
21.             If CmaxNeighbor < CmaxBest Then
22.                 BestSolution(.) ← NeighborSolution(.)
23.                 CmaxBest ← CmaxNeighbor
24.             End if
25.         Else
26.             Pacc ← e^(−Δ/Tc), U ← Create a random number ∈ [0,1]
27.             If U < Pacc Then
28.                 CurrentSolution(.) ← NeighborSolution(.)
29.                 CmaxCurrent ← CmaxNeighbor
30.             End if
31.         End if
32.     End for
33.     Tc ← Tc × q
34. End while
35. If event_number = 4 Then
36.     If NJ > upper_level − TPJ Then
37.         For i = 1 to upper_level − TPJ do
38.             j ← BestSolution(i)
39.             jserver_pr ← i
40.             If jschedule = 0 Then
41.                 jschedule ← 1
42.             End if
43.         End for
44.         Reading from the leftmost job to the right, release the first (upper_level − TPJ) jobs to SQ1 as a new schedule
45.     Else
46.         For i = 1 to NJ do
47.             j ← BestSolution(i)
48.             jserver_pr ← i
49.             If jschedule = 0 Then
50.                 jschedule ← 1
51.             End if
52.         End for
53.         Release the new schedule to SQ1
54.     End if
55. Else
56.     For i = 1 to NJ do
57.         j ← BestSolution(i)
58.         jserver_pr ← i
59.         If jschedule = 0 Then
60.             jschedule ← 1
61.         End if
62.     End for
63.     Release the new schedule to SQ1
64. End if

Fig. 12. Steps of the proposed SA.

The information collected from the simulation model and the related attributes of the jobs differ depending on the triggered event number (see Section 2.2), as indicated in Fig. 9. The information required for the optimization is indicated by means of the terms and variables shown in Table 5. The initial solution is a scheduling decision: it is a vector made up of a job sequence (JS(.)). The order of the jobs from the leftmost to the rightmost position in a job sequence gives the order in which their setup operations are performed by the common server; server priorities of the jobs decrease from left to right in the job sequence.

4.1.3. Neighborhood solutions

The swap operator, in which two positions are selected randomly and their contents are swapped, is used as illustrated in Fig. 10.
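The swap move of Fig. 10 amounts to exchanging the contents of two randomly selected positions of the job sequence; a minimal Python sketch is shown below (random.sample picks two distinct positions).

import random

def swap_neighbor(job_sequence, rng=random):
    """Return a neighbor of the job sequence by swapping two random positions."""
    neighbor = list(job_sequence)
    i, j = rng.sample(range(len(neighbor)), 2)   # two distinct positions
    neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
    return neighbor

random.seed(4)
print(swap_neighbor(["j1", "j2", "j3", "j4", "j5"]))   # e.g. ['j4', 'j2', 'j3', 'j1', 'j5']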


1.  If event_number = 4 Then
2.      If NJ > upper_level − TPJ Then
3.          previously_scheduled_jobs ← 0
4.          For i = 1 to NJ do
5.              j ← JS(i)
6.              If jschedule = 1 Then
7.                  previously_scheduled_jobs ← previously_scheduled_jobs + 1
8.              End if
9.          End for
10.         For i = 1 to upper_level − TPJ do
11.             NJS(i) ← JS(i)
12.         End for
13.     Else
14.         NJS(.) ← JS(.)
15.     End if
16. Else
17.     NJS(.) ← JS(.)
18. End if

Fig. 13. Procedure 1 for controlling PWIP.

1.  If NJ > upper_level − TPJ Then
2.      counter ← 0
3.      For i = 1 to upper_level − TPJ do
4.          j ← NJS(i)
5.          If jschedule = 1 Then
6.              counter ← counter + 1
7.          End if
8.      End for
9.      If previously_scheduled_jobs != counter Then   /* Infeasible solution */
10.         JS(.) ← Generate a new JS(.) from the old JS(.) by using the swap operator
11.         For i = 1 to upper_level − TPJ do
12.             NJS(i) ← JS(i)
13.         End for
14.         Turn back to line 2 for controlling the feasibility status of NJS(.)
15.     End if
16. End if

Fig. 14. Procedure 2 for controlling PWIP.
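In plainer terms, the feasibility rule enforced by Procedures 1 and 2 requires every previously scheduled job (jschedule = 1) to appear among the first upper_level − TPJ positions of the candidate sequence. The following Python sketch of this check uses illustrative job labels; it is a simplified reading of the procedures, not a line-by-line translation.

def is_feasible(sequence, scheduled, release_quota):
    """True if all previously scheduled jobs fall within the released prefix.

    sequence       -- candidate job sequence (most urgent setup first)
    scheduled      -- set of jobs that were already released to the shop floor
    release_quota  -- upper_level - TPJ, the number of jobs the new schedule may release
    """
    prefix = set(sequence[:release_quota])
    return scheduled <= prefix              # every scheduled job must be in the prefix

# Example matching Section 4.1.4: 16 candidate jobs, 4 of them previously scheduled,
# and a release quota of 11, so 7 unscheduled jobs may enter the new schedule.
scheduled = {"S1", "S2", "S3", "S4"}
candidates = ["S1", "O1", "S2", "O2", "O3", "S3", "O4", "O5", "S4", "O6", "O7",
              "O8", "O9", "O10", "O11", "O12"]
print(is_feasible(candidates, scheduled, release_quota=11))        # True
print(is_feasible(candidates[::-1], scheduled, release_quota=11))  # False: S1/S2 pushed out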

4.1.4. Steps of the proposed simulated annealing based complete rescheduling approach (SA/CRA)

As stated before, the schedule differs depending on the realized event number. If event number 4 is triggered at any moment, a new schedule is generated and the scheduled jobs are released to the shop floor from among the jobs waiting (if available) in front of the common server and the jobs which are not yet scheduled (waiting jobs in OE). For example, suppose that the lower and upper level workload bounds are 9 and 15, respectively. As soon as the level of PWIP in the production system falls to 8, event number 4 is triggered. Suppose that 4 of the PWIP jobs are being processed on the machines or the common server (TPJ) at that moment and 4 of them are jobs that are not being processed and are waiting in front of the server (SQ1 ∪ SQ2). Again suppose that the number of unscheduled jobs (waiting jobs in OE) is 12. In this case, the number of jobs to be used while generating a new schedule is 16 (SQ1 ∪ SQ2 ∪ OE), and 11 of these jobs (upper_level − TPJ) will be released to the shop floor as a result of the new schedule. Since 4 of these 11 jobs are previously scheduled jobs, the new schedule will contain 7 unscheduled jobs. The best permutation of the first 11 jobs, reading from the leftmost to the rightmost position of the job sequence, is found by the SA among job sequences of total length 16. The existence of the 4 previously scheduled jobs among these first 11 jobs is guaranteed; otherwise, the solution becomes infeasible. Also, the order of these 11 jobs from the leftmost to the rightmost position represents the order in which their setup operations are performed by the common server. This can be summarized by means of terms and notations as follows:

• PWIP = number of jobs being processed on the machines and the server (TPJ) + number of unprocessed jobs waiting in front of the server (cardinality of SQ1 ∪ SQ2).
• Number of jobs to be assessed for a new schedule = number of unprocessed jobs in front of the server (PWIP − TPJ) + number of job orders in the pre-shop pool (cardinality of OE).
• Number of new jobs to be released to the shop floor after the reschedule = upper_level − PWIP.

Example feasible and infeasible solutions are shown in Fig. 11. The decisions made for determining on which machines the jobs will be processed are explained in Section 4.1.5. Some new terms used in the proposed algorithm are defined in Table 6, and the steps of the proposed SA are given in Fig. 12. As stated before, the schedule differs depending on the realized event number; if event number 4 is triggered at any moment, the existence of PWIP up to at most the upper level workload bound on the shop floor is controlled by using Procedures 1 and 2 (see Figs. 13 and 14) within the steps of the proposed SA.

4.1.5. Decoding

After solutions are generated through the encoding algorithm, the decoding algorithm is applied to evaluate their fitness values. The steps of the proposed decoding algorithm are provided in Fig. 15. The earliest completion time of the next job in the job sequence is computed for all of the active machines, and the job is assigned to the machine where its completion time is minimum (see Fig. 16).

4.2. Proposed dispatching rules based complete rescheduling approaches (DR/CRAs)

CRAs are also developed by using the longest processing time first (LPT), the shortest processing time first (SPT), and the first in first out (FIFO) rules. The order of the jobs from the leftmost to the rightmost position, obtained by means of the related DR, gives the order in which their setup operations are performed by the common server. As before, the decoding algorithm is used for determining on which machines the operations of the jobs will be performed according to the realization order of the setup operations (see Section 4.1.5). The steps of the proposed DR/CRAs are shown in Fig. 17. For example, suppose that the SPT/CRA is used and the lower level workload bound is 11 jobs. Again suppose that there are 6 jobs waiting in front of the server, as shown in Fig. 18, when the number of PWIP jobs falls below the lower level workload bound; the processing times of the jobs are given in brackets as time units in Fig. 18. In this case, the total number of jobs being processed on the machines and the server is 4. Suppose that the upper level workload bound is 15 and the number of job orders arriving to the system is 8, as shown in Fig. 19. In this case, the 5 (15 − 10) newly arriving jobs having the shortest processing times will be released to the shop floor; these jobs are O1, O2, O4, O6 and O8. The formed job sequence in front of the server is given in Fig. 20; the job at the leftmost position has the maximum server priority, and server priorities of the jobs decrease from left to right in the job sequence. At this point, the earliest completion time of the next job in the job sequence is computed for all of the machines (considering the setup times) and the job is scheduled to the machine where its completion time is minimum.

5. Experimentation

In this section, the performance of the SA/CRA and DR/CRAs is compared in a hypothetical simulation case.


1. Initialize Gantt chart structure. Set JTN1current(.)←JTNcurrent(.), JSN1current(.)←JSNcurrent(.), C1server←Cserver and C1current_makespan(.)←Ccurrent_makespan(.)
2. For i=1 to Length of NJS(.) do /* read NJS(.) from left to right */
3.   j←NJS(i) and dummy←a very high value /* identify the job */
4.   Decide on which machine the next job will be produced (see Fig. 16)
5.   If C1current_makespan(machine)=0 Then
6.     If C1server < jAT Then
7.       C1server←jAT
8.     End if
9.     If C1server < C1current_makespan(machine) Then
10.      C1server←C1current_makespan(machine)
11.    End if
12.    setup_time←MiS /* minor setup time */
13.  Else
14.    If C1server < jAT Then
15.      C1server←jAT
16.    End if
17.    If C1server < C1current_makespan(machine) Then
18.      C1server←C1current_makespan(machine)
19.    End if
20.    If jpreemption=1 Then
21.      If jserial=JSN1current(machine) Then
22.        setup_time←MiS /* minor setup time */
23.      Else
24.        If JTN1current(machine)=jtype Then
25.          setup_time←MdS /* medium setup time */
26.        Else
27.          setup_time←STM(JTN1current(machine), jtype) /* major setup time (MaS) */
28.        End if
29.      End if
30.    Else
31.      If JTN1current(machine)=jtype Then
32.        setup_time←MdS /* medium setup time */
33.      Else
34.        setup_time←STM(JTN1current(machine), jtype) /* major setup time (MaS) */
35.      End if
36.    End if
37.    ET←setup_time+C1server+jpr_time
38.    jmachine←machine
39.    jsetup_time←setup_time
40.    JTN1current(machine)←jtype
41.    JSN1current(machine)←jserial
42.    C1current_makespan(machine)←ET
43.    C1server←C1server+setup_time
44.  End if
45. End for
46. Return makespan←max(m1, m2, . . ., mNPM)

Fig. 15. Decoding algorithm.
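For readers who prefer an executable form, a minimal Python sketch of the decoding pass in Fig. 15 is given below; the data structures are simplifying assumptions, and the machine choice in the first line of the loop is only a stand-in for the fuller rule of Fig. 16.

```python
# Minimal sketch of the decoding idea in Fig. 15 (single pass over the job
# sequence); names are illustrative, not the authors' code.
MiS, MdS = 5, 10  # minor / medium setup times of the hypothetical system

def decode(sequence, machines, stm):
    """sequence: jobs as dicts with 'type', 'serial', 'arrival', 'p_time',
    'preempted'; machines: dicts with 'makespan', 'last_type', 'last_serial';
    stm: major setup times indexed by (from_type, to_type)."""
    c_server = 0.0
    for job in sequence:
        m = min(machines, key=lambda mc: mc["makespan"])  # stand-in for Fig. 16
        # the server starts the setup only after the job arrives and the machine is free
        c_server = max(c_server, job["arrival"], m["makespan"])
        if m["last_type"] is None:
            setup = MiS                                   # first job on an empty machine
        elif job["preempted"] and job["serial"] == m["last_serial"]:
            setup = MiS                                   # resuming the same job: minor setup
        elif job["type"] == m["last_type"]:
            setup = MdS                                   # same job type: medium setup
        else:
            setup = stm[(m["last_type"], job["type"])]    # different type: major setup
        m["makespan"] = c_server + setup + job["p_time"]  # job completion time
        m["last_type"], m["last_serial"] = job["type"], job["serial"]
        c_server += setup                                 # server is busy during the setup
    return max(mc["makespan"] for mc in machines)         # schedule length (makespan)
```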

5. Experimentation

In this section, the performance of the SA/CRA and the DR/CRAs is compared in a hypothetical simulation case.

5.1. Experimental conditions

The hypothetical simulation case has the following characteristics. The production system consists of 10 identical parallel machines. The number of job types to be processed is assumed to be 50, so the job type of a job arriving to the system is selected from a discrete uniform distribution between 1 and 50. Processing times are drawn from a uniform distribution between 100 and 500, with a mean of 300. The major setup times are uniformly distributed between 20 and 100, which is approximately 7–34% of the mean processing time; they are generated as indicated by Rios-Mercado and Bard (1998) and satisfy the triangle inequality (Balakrishnan, Kanet, & Sridharan, 1999), i.e., STM(i, j) ≤ STM(i, k) + STM(k, j) for all i ≠ j ≠ k = 1, . . ., NJT. The medium setup time is chosen as 10 time units, which is 50% of the minimum level of the major setup time, and the minor setup time is chosen as 5 time units, which is 50% of the medium setup time. MTBF and MTTR times for the machines are drawn from exponential distributions with means of 12,000 and 3000 time units, respectively; these values are chosen such that a machine's availability is 80% on average. The specifications used for designing the hypothetical dynamic m identical parallel machines scheduling problem with a common server are summarized in Table 7.
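One simple way to obtain a major setup time matrix in U[20, 100] that satisfies the triangle inequality is to repair a random matrix with a shortest-path style pass. The sketch below illustrates this idea only; it is not necessarily the exact generation procedure of Rios-Mercado and Bard (1998).

```python
import random

# Hypothetical generator: draw major setup times in U[20, 100], then enforce
# STM[i][j] <= STM[i][k] + STM[k][j] with a Floyd-Warshall style repair pass.
def setup_matrix(n_types=50, low=20, high=100, seed=0):
    rng = random.Random(seed)
    stm = [[0 if i == j else rng.uniform(low, high) for j in range(n_types)]
           for i in range(n_types)]
    for k in range(n_types):
        for i in range(n_types):
            for j in range(n_types):
                if i != j:
                    stm[i][j] = min(stm[i][j], stm[i][k] + stm[k][j])
    return stm

stm = setup_matrix()
# spot check: the triangle inequality holds for every triple of job types
assert all(stm[i][j] <= stm[i][k] + stm[k][j] + 1e-9
           for i in range(50) for j in range(50) for k in range(50) if i != j)
```

Because every off-diagonal draw is at least 20, any sum STM[i][k] + STM[k][j] with k distinct from i and j is at least 40, so the repair pass keeps all values within the intended [20, 100] range.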

5.2. Steady-state condition of the developed model

First of all, the simulation model is run and observed several times in order to determine a common inter arrival time distribution under which every CRA can reach the steady state. The number of jobs waiting in front of the server is recorded whenever a new job arrives to the production system. The simulation model is run under the same conditions for all CRAs until the first 10,000 jobs are completed, that is, for a long period of time. In these runs the upper level workload of the system is not limited under any circumstances, a new schedule is generated after each incoming job, and event number 4 is never activated. The SA/CRA parameters are determined through preliminary experiments and are given in Table 8. Whenever a new schedule is to be generated in a simulation replication, the SA/CRA is repeated 10 times with the initial parameter settings, and the simulation continues with the best solution found. The tests show that the steady state is reached regardless of the method used when the inter arrival time is 52.6 time units (see Figs. 21–24). Determining a common inter arrival time is essentially the converse of deciding on the most suitable number of machines that will balance the system when the inter arrival time is known; in this sense, the developed simulation model allows us to make such analyses.
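The control loop of the SA/CRA can be pictured with the generic simulated annealing skeleton below, instantiated with the Table 8 parameters; the neighbour and makespan callables are placeholders for the problem-specific encoding and decoding, so this is only a sketch of the cooling schedule and the 10-repeat scheme, not the authors' exact procedure.

```python
import math
import random

# Generic SA skeleton with the Table 8 control parameters:
# T0 = 500, IT = 20 moves per temperature level, cooling rate 0.99, Tcry = 0.1.
def sa_run(initial, neighbour, makespan, t0=500.0, it=20, rho=0.99, t_cry=0.1, seed=None):
    rng = random.Random(seed)
    current = best = initial
    t = t0
    while t > t_cry:
        for _ in range(it):                           # IT moves per temperature level
            candidate = neighbour(current, rng)
            delta = makespan(candidate) - makespan(current)
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                current = candidate                   # accept improving or lucky moves
            if makespan(current) < makespan(best):
                best = current
        t *= rho                                      # geometric cooling
    return best

def best_of_repeats(initial, neighbour, makespan, repeats=10):
    # the simulation keeps the best schedule found over 10 independent SA repeats
    return min((sa_run(initial, neighbour, makespan, seed=r) for r in range(repeats)),
               key=makespan)
```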


1. For m=1 to NPM do
2.   If Mstatus(m)=1 Then
3.     If C1current_makespan(m)=0 Then
4.       If C1server > jAT Then
5.         dummy_makespan←C1server
6.       Else
7.         dummy_makespan←jAT
8.       End if
9.       If dummy_makespan < C1current_makespan(m) Then
10.        dummy_makespan←C1current_makespan(m)
11.      End if
12.      setup_time←MiS /* minor setup time */
13.      ET←setup_time+jpr_time+dummy_makespan
14.      If dummy > ET Then
15.        dummy←ET
16.        machine←m
17.      End if
18.    Else
19.      If jpreemption=1 Then
20.        If jserial=JSN1current(m) Then
21.          setup_time←MiS /* minor setup time */
22.        Else
23.          If JTN1current(m)=jtype Then
24.            setup_time←MdS /* medium setup time */
25.          Else
26.            setup_time←STM(JTN1current(m), jtype) /* major setup time (MaS) */
27.          End if
28.        End if
29.      Else
30.        If JTN1current(m)=jtype Then
31.          setup_time←MdS /* medium setup time */
32.        Else
33.          setup_time←STM(JTN1current(m), jtype) /* major setup time (MaS) */
34.        End if
35.      End if
36.      If C1server > jAT Then
37.        dummy_makespan←C1server
38.      Else
39.        dummy_makespan←jAT
40.      End if
41.      If dummy_makespan < C1current_makespan(m) Then
42.        dummy_makespan←C1current_makespan(m)
43.      End if
44.      ET←setup_time+jpr_time+dummy_makespan
45.      If dummy > ET Then
46.        dummy←ET
47.        machine←m
48.      End if
49.    End if
50.  End if
51. End for

Fig. 16. Deciding on which machine the next job will be produced.
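In executable form, the machine selection rule of Fig. 16 amounts to evaluating the earliest possible completion time of the job on every active machine, setup included, and taking the minimum. The sketch below uses the same illustrative data structures as the decode() sketch given earlier and is a simplified reading of the figure, not the authors' code.

```python
# Sketch of the machine-selection rule of Fig. 16: pick the active machine on
# which the next job would finish earliest, taking the required setup into account.
def select_machine(job, machines, stm, c_server, MiS=5, MdS=10):
    best_m, best_et = None, float("inf")
    for m in machines:
        if not m.get("up", True):                     # skip machines that are down
            continue
        if m["last_type"] is None:
            setup = MiS                               # first job on an empty machine
        elif job["preempted"] and job["serial"] == m["last_serial"]:
            setup = MiS                               # resuming the same job
        elif job["type"] == m["last_type"]:
            setup = MdS                               # same job type already set up
        else:
            setup = stm[(m["last_type"], job["type"])]  # sequence dependent setup
        start = max(c_server, job["arrival"], m["makespan"])
        et = start + setup + job["p_time"]            # earliest completion on machine m
        if et < best_et:
            best_m, best_et = m, et
    return best_m
```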

5.3. Comparison of methods

As observed in Fig. 24, the number of jobs waiting in front of the server is smallest for the SA/CRA. Because there are 10 identical parallel machines in the hypothetical system, at most 10 jobs can be in process on the machines and the common server, so the TPJ can be at most 10 jobs. Approximately 60 jobs (50 jobs waiting in front of the server plus 10 jobs in progress on the machines and the server) can be present in the production system at any time when the inter arrival time is 52.6 time units for the SA/CRA. It can also be observed from Fig. 24 that the number of jobs waiting in front of the common server frequently rises to a level of 10 jobs. As a strategy, the upper level workload bound is set to 20 jobs by adding the number of jobs waiting in front of the common server (which frequently rises to about 10 jobs) and

the jobs in progress on the machines and the common server (the TPJ can be at most 10 jobs). On the other hand, the lower level workload bound must be at least as large as the TPJ in order to prevent the machines and the server from being idle. Five scenarios are developed in which the lower level workload bound is 12, 14, 16, 18 and 20. The case where the lower and upper level workload bounds are equal (20–20) corresponds to generating a new schedule whenever a new job arrives to the production system. The simulation model is run for all CRAs with the 5 scenarios until the first 5000 jobs are produced, and ten independent replications are made. Again, the parameters in Table 8 are used for the SA/CRA, 10 repeats are made whenever a new schedule is to be generated in a simulation replication, and the simulation continues with the best solution found. As observed in Table 9, every SA/CRA scenario gives better makespan values than all of the DR/CRA scenarios, while no significant difference is observed among the DR/CRAs.


Fig. 17. The steps of proposed DR/CRAs.

Fig. 18. Unprocessed jobs waiting in front of the server: J1 (7), J2 (11), J3 (20), J4 (30), J5 (45), J6 (55).

Fig. 19. Job orders waiting in the pre-shop pool: O1 (12), O2 (8), O3 (45), O4 (10), O5 (120), O6 (25), O7 (35), O8 (18).

Fig. 20. Obtained job sequence in front of the server for the example: J1 (7), O2 (8), O4 (10), J2 (11), O1 (12), O8 (18), J3 (20), O6 (25), J4 (30), J5 (45), J6 (55).

Solution quality decreases as the lower level workload bound is reduced, both for the SA/CRA and for the DR/CRAs. Although the scenario giving the worst result for the SA/CRA is (12–20), it still gives a better solution than the reschedule-on-every-arrival condition (20–20), which is the best case for the DR/CRAs.

Table 7
Design specifications of the hypothetical system.
Number of parallel machines: 10
Number of job types: 50
Processing time: generated from U[100, 500]
Major setup time (sequence dependent setup time between two different job types): generated from U[20, 100]
Medium setup time (setup time between the same job types): 10
Minor setup time (setup time to continue the same job on the same machine after a breakdown): 5
Machine breakdown: MTBF = Exponential(12,000), MTTR = Exponential(3000)

Table 8
Selected control parameters.
Initial temperature (T0): 500
Length of each temperature level (IT): 20
Cooling rate (ρ): 0.99
Final temperature (Tcry): 0.1

A difference of approximately 5% in solution quality occurs between (20–20), the best scenario, and (12–20), the worst scenario, for the SA/CRA.



Fig. 21. Number of waiting jobs in front of the server when using LPT/CRA.


Fig. 22. Number of waiting jobs in front of the server when using SPT/CRA.


Fig. 24. Number of waiting jobs in front of the server when using SA/CRA.

In fact, since the common inter arrival time is chosen so that every CRA reaches the steady state, the arrival time of the last job prevents a further increase in the solution quality difference between the methods. When the average computational times of the simulation replications in Table 9 are examined, it is seen that the computational time for the SA/CRA decreases considerably as the lower level workload bound is reduced. Deciding which scenario is better therefore depends on the trade-off between acceptable computational time and solution quality. Server utilizations for the scenarios giving the best and the worst makespan are given in Table 10. As can be observed, the SA/CRA uses the common server more efficiently than the DR/CRAs, achieving the smaller makespan with a server utilization approximately 10% lower than that of the DR/CRAs in both the (20–20) and (12–20) scenarios. Box plots of the mean flow times for the best and worst scenarios are given in Fig. 25. The SA/CRA provides smoother and shorter flow times than the DR/CRAs, and the mean flow time increases for every CRA from (20–20) to (12–20). When the inter arrival time is reduced to 48 time units, the number of jobs waiting in front of the common server still reaches the steady state with the SA/CRA, whereas it grows in an uncontrolled manner with the DR/CRAs.


Fig. 23. Number of waiting jobs in front of the server when using FIFO/CRA.

6. Conclusions

In this paper, simulated annealing and dispatching rule based complete rescheduling approaches are proposed, as simulation optimization tools, for the dynamic m identical parallel machines scheduling problem with a common server. Inspired by the load limited order release mechanism, an event driven rescheduling strategy is proposed for keeping the physical work-in-process (PWIP) on the shop floor under control. In the proposed event driven rescheduling strategy, rescheduling is performed when the level of PWIP on the shop floor falls below the lower level workload bound, and the existence of PWIP up to the maximum upper level workload bound on the shop floor is guaranteed after rescheduling. The performance of the proposed simulated annealing based (SA/CRA) and three dispatching rule based (DR/CRA) complete rescheduling approaches is compared in a hypothetical simulation case. The results of the extensive simulation study indicate that the SA/CRA produces better scheduling performance than the DR/CRAs.

Table 9
Makespan values obtained by the proposed CRAs.

Scheduling method    Minimum      Mean         Maximum      Standard deviation   Mean CPU time for replications (s)
SA/CRA (20–20)       256482.78    262743.92    266553.09    3578.03              2377.01
SPT/CRA (20–20)      285781.76    289710.20    294961.35    3443.82              281.16
LPT/CRA (20–20)      286360.69    289213.13    294106.48    2977.30              261.03
FIFO/CRA (20–20)     285759.67    288604.16    293530.31    2434.93              276.12
SA/CRA (18–20)       260666.81    265816.32    271324.00    3466.86              1512.78
SPT/CRA (18–20)      287597.80    291577.20    296561.35    3415.63              120.03
LPT/CRA (18–20)      288121.00    293062.85    296489.07    2962.88              131.91
FIFO/CRA (18–20)     288707.08    290309.47    295443.09    2071.45              125.67
SA/CRA (16–20)       265195.78    269202.89    272363.40    2337.39              1012.05
SPT/CRA (16–20)      288306.81    292830.72    297752.97    3204.91              87.72
LPT/CRA (16–20)      289424.31    292981.85    297181.49    3421.56              86.88
FIFO/CRA (16–20)     289834.70    294528.64    298729.75    2895.53              91.12
SA/CRA (14–20)       269857.03    274517.29    278611.37    3199.63              634.67
SPT/CRA (14–20)      290139.38    293056.65    298856.57    2435.26              45.12
LPT/CRA (14–20)      290474.09    295621.27    298847.47    2969.52              41.67
FIFO/CRA (14–20)     291862.15    294998.10    298861.31    2310.64              38.45
SA/CRA (12–20)       271004.70    277452.66    279900.20    2600.19              338.45
SPT/CRA (12–20)      292272.00    295150.10    298940.66    2140.43              12.56
LPT/CRA (12–20)      292397.85    296322.04    299733.52    2114.86              10.89
FIFO/CRA (12–20)     292077.94    296343.23    299640.40    2492.43              15.02

Table 10
Server utilizations obtained by the best and worst scenarios.

Scheduling method    Minimum   Mean   Maximum   Standard deviation
SA/CRA (20–20)       0.74      0.76   0.79      0.01
SPT/CRA (20–20)      0.83      0.86   0.89      0.02
LPT/CRA (20–20)      0.83      0.86   0.89      0.02
FIFO/CRA (20–20)     0.83      0.86   0.89      0.02
SA/CRA (12–20)       0.71      0.73   0.75      0.01
SPT/CRA (12–20)      0.80      0.83   0.85      0.01
LPT/CRA (12–20)      0.80      0.83   0.85      0.01
FIFO/CRA (12–20)     0.80      0.83   0.85      0.01

Fig. 25. Mean flow times obtained by the best and worst scenarios.

Acknowledgements

The authors gratefully extend their appreciation to the anonymous referees whose valuable suggestions led to an improved organization of this paper.

References

Abdekhodaee, A. H., & Wirth, A. (2002). Scheduling parallel machines with a single server: Some solvable cases and heuristics. Computers & Operations Research, 29, 295–315.

Abdekhodaee, A. H., Wirth, A., & Gan, H. S. (2004). Equal processing and equal setup time cases of scheduling parallel machines with a single server. Computers & Operations Research, 31, 1867–1889. Abdekhodaee, A. H., Wirth, A., & Gan, H. S. (2006). Scheduling two parallel machines with a single server: The general case. Computers & Operations Research, 33, 994–1009. Allahverdi, A., Aldowaisan, T., & Gupta, J. N. D. (1999). A review of scheduling research involving setup considerations. Omega, 27(2), 219–239. Allahverdi, A., Ng, C., Cheng, T., & Kovalyov, M. (2008). A survey of scheduling problems with setup times or costs. European Journal of Operational Research, 187, 985–1032. Balakrishnan, N., Kanet, J. J., & Sridharan, S. V. (1999). Early/tardy scheduling with sequence dependent setups on uniform parallel machines. Computers & Operations Research, 26, 127–141. Bergamaschi, D., Cigolini, R., Perona, M., & Portioli, A. (1997). Order review and release strategies in a job shop environment: A review and a classification. International Journal of Production Research, 35(2), 399–420. Bilge, U., Kirac, F., Kurtulan, M., & Pekgun, P. A. (2004). Tabu search algorithm for parallel machine total tardiness problem. Computers & Operations Research, 31, 397–414. Brucker, P., Dhaenens-Flipo, C., Knust, S., Kravchenko, S. A., & Werner, F. (2002). Complexity results for parallel machine problems with a single server. Journal of Scheduling, 5, 429–457. Cheng, T. C. E., & Sin, C. C. S. (1990). A state-of-the-art review of parallel-machine scheduling research. European Journal of Operational Research, 47, 271–290. Cowling, P. I., & Johansson, M. (2002). Using real-time information for effective dynamic scheduling. European Journal of Operational Research, 139(2), 230–244. Fang, J., & Xi, Y. (1997). A rolling horizon job shop rescheduling strategy in the dynamic environment. The International Journal of Advanced Manufacturing Technology, 13, 227–232. Gan, H. S., Wirth, A., & Abdekhodaee, A. (2012). A branch-and-price algorithm for the general case of scheduling parallel machines with a single server. Computers & Operations Research, 39, 2242–2247. Glass, C. A., Shafransky, Y. M., & Strusevich, V. A. (2000). Scheduling for parallel dedicated machines with a single server. Naval Research Logistics, 47, 304–328. Graham, R. L., Lawler, E. L., Lenstra, J. K., & Rinnooy Kan, A. H. G. (1979). Optimization and approximation in deterministic sequencing and scheduling: A survey. Annals of Discrete Mathematics, 5, 287–326. Guirchoun, S., Souhkal, A., & Martineau, P. (2005). Complexity results for parallel machine scheduling problems with a server in computer systems. In Proceedings of the 2nd multidisciplinary international conference on scheduling: Theory and applications (MISTA) (pp. 232–236). Hall, N. G., Potts, C. N., & Sriskandarajah, C. (2000). Parallel machine scheduling with a common server. Discrete Applied Mathematics, 102, 223–243. Hasani, K., Kravchenko, S. A., & Werner, F. (2014a). Block models for scheduling jobs on two parallel machines with a single server. Computers & Operations Research, 41, 94–97. Hasani, K., Kravchenko, S. A., & Werner, F. (2014b). Simulated annealing and genetic algorithms for the two-machine scheduling problem with a single server. International Journal of Production Research, 52(13), 3778–3792. Ho, J. C., & Chang, Y. L. (1995). Minimizing the number of tardy jobs for m parallel machines. European Journal of Operational Research, 84(2), 343–355. Huang, S. 
H., Cai, L., & Zhang, X. (2010). Parallel dedicated machine scheduling problem with sequence-dependent setups and a single server. Computers & Industrial Engineering, 58, 165–174.


Jiang, Y., Dong, J., & Ji, M. (2013). Preemptive scheduling on two parallel machines with a single server. Computers & Industrial Engineering, 66, 514–518. Jiang, Y., Zhang, Q., Hu, J., Dong, J., & Ji, M. (2015). Single-server parallel-machine scheduling with loading and unloading times. Journal of Combinatorial Optimization, 30, 201–213. Kim, D. W., Kim, K. H., Jang, W., & Chen, F. F. (2002). Unrelated parallel machine scheduling with setup times using simulated annealing. Robotics and ComputerIntegrated Manufacturing, 18, 223–231. Kim, M. Y., & Lee, Y. H. (2012). MIP models and hybrid algorithm for minimizing the makespan of parallel machines scheduling problem with a single server. Computers & Operations Research, 39(11), 2457–2468. Kim, C. O., & Shin, H. J. (2003). Scheduling jobs on parallel machines: A restricted tabu search approach. International Journal of Advanced Manufacturing Technology, 22, 278–287. Kirkpatrick, S., Gelatt, J. C. D., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220, 671–680. Koulamas, C. P. (1996). Scheduling two parallel semiautomatic machines to minimize machine interference. Computers & Operations Research, 23(10), 945–956. Kravchenko, S. A., & Werner, F. (1997). Parallel machine scheduling problems with a single server. Mathematical and Computer Modelling, 26(12), 1–11. Kravchenko, S. A., & Werner, F. (1998). Scheduling on parallel machines with a single and multiple servers. Otto-von-Guericke-Universitat Magdeburg, 1–18. Kravchenko, S. A., & Werner, F. (2001). A heuristic algorithm for minimizing mean flow time with unit setups. Information Processing Letters, 79, 291–296. Law, A. M., & Kelton, W. D. (1991). Simulation modeling and analysis (3rd ed.). New York: McGraw-Hill. Lee, Z. J., Lin, S. W., & Ying, K. C. (2010). Scheduling jobs on dynamic parallel machines with sequence-dependent setup times. International Journal of Advanced Manufacturing Technology, 47, 773–781. Lin, S. W., Lee, Z. J., Ying, K. C., & Lu, C. C. (2011). Minimization of maximum lateness on parallel machines with sequence-dependent setup times and job release dates. Computers & Operations Research, 38, 809–815. Mendes, A. S., Muller, F. M., França, P. M., & Moscato, P. (2002). Comparing metaheuristic approaches for parallel machine scheduling problems. Production Planning and Control, 13, 143–154. Metropolis, N., Rosenbluth, A. W., Teller, A. H., & Teller, E. (1953). Equation of state calculation by east computing machines. Journal of Chemical Physics, 21, 1087–1091. Negahban, A., & Jeffrey, S. S. (2014). Simulation for manufacturing system design and operation: Literature review and analysis. Journal of Manufacturing Systems, 33, 241–261. Ou, J., Qi, X., & Lee, C. Y. (2010). Parallel machine scheduling with multiple unloading servers. Journal of Scheduling, 13, 213–226. Ouelhadj, D., & Petrovic, S. (2009). A survey of dynamic scheduling in manufacturing systems. Journal of Scheduling, 12(4), 417–431. Ovacik, I. M., & Uzsoy, R. (1995). Rolling horizon procedures for dynamic parallel machine scheduling with sequence-dependent setup times. International Journal of Production Research, 33, 3173–3192.

Pfund, M., Fowler, J. W., & Gupta, J. N. D. (2004). A survey of algorithms for single and multi-objective unrelated parallel-machine deterministic scheduling problems. Journal of the Chinese Institute of Industrial Engineers, 21, 230–241. Rangsaritratsamee, R., Ferrel, W. G., Jr., & Kurtz, M. B. (2004). Dynamic rescheduling that simultaneously considers efficiency and stability. Computers & Industrial Engineering, 46, 1–15. Rios-Mercado, R. Z., & Bard, J. F. (1998). Computational experience with a branchand-cut algorithm for flowshop scheduling with setups. Computers & Operations Research, 25(5), 351–366. Sabuncuoglu, I., & Bayiz, M. (2000). Analysis of reactive scheduling problems in a job shop environment. European Journal of Operational Research, 126(3), 567–586. Su, C. (2013). Online LPT algorithms for parallel machines scheduling with a single server. Journal of Combinatorial Optimization, 26, 480–488. Ventura, J. A., & Kim, D. (2003). Parallel machine scheduling with earliness– tardiness penalties and additional resource constraints. Computers & Operations Research, 30(13), 1945–1958. Vieira, G. E., Herrmann, J. W., & Lin, E. (2000). Predicting the performance of rescheduling strategies for parallel machine systems. Journal of Manufacturing Systems, 19(4), 256–266. Vieira, G. E., Herrmann, J. W., & Lin, E. (2003). Rescheduling manufacturing systems: A framework of strategies. Policies and methods. Journal of Scheduling, 6(1), 36–92. Wang, G., & Cheng, T. C. E. (2001). An approximation algorithm for parallel machine scheduling with a common server. Journal of the Operational Research Society, 52, 234–237. Werner, F., & Kravchenko, S. A. (2010). Scheduling with multiple servers. Automation and Remote Control, 71(10), 2109–2121. Xiang, W., & Lee, H. P. (2008). Ant colony intelligence in multi-agent dynamic manufacturing scheduling. Engineering Applications of Artificial Intelligence, 21 (1), 73–85. Yang, T. (2009). An evolutionary simulation–optimization approach in solving parallel-machine scheduling problems – A case study. Computers & Industrial Engineering, 56, 1126–1136. Yang, T., & Shen, Y. A. (2012). Heuristic algorithms for a practical-size dynamic parallel-machine scheduling problem: Integrated circuit wire bonding. Production Planning and Control: The Management of Operations, 23(1), 67–78. Ying, K. C., & Cheng, H. M. (2010). Dynamic parallel machine scheduling with sequence-dependent setup times using an iterated greedy heuristic. Expert Systems with Applications, 37, 2848–2852. Yoon, H. J., & Shen, W. (2006). Simulation-based real-time decision making for manufacturing automation systems: A review. International Journal Manufacturing Technology and Management, 8, 188–202. Zhang, L., & Wirth, A. (2009). On-line scheduling of two parallel machines with a single server. Computers & Operations Research, 36, 1529–1553.