Computers & Operations Research 40 (2013) 2983–2990
Scheduling unrelated parallel batch processing machines with non-identical job sizes

XiaoLin Li^a, YanLi Huang^a,*, Qi Tan^b, HuaPing Chen^c
^a School of Mines, China University of Mining and Technology, China
^b School of Electrical Engineering and Automation, HeFei University of Technology, China
^c School of Management, University of Science and Technology of China, China
Article info: Available online 17 July 2013

Abstract
Scheduling unrelated parallel batch processing machines to minimize makespan is studied in this paper. Jobs with non-identical sizes are scheduled on batch processing machines that can process several jobs as a batch as long as the machine capacity is not violated. Several heuristics based on best fit longest processing time (BFLPT), organized in two groups, are proposed to solve the problem. A lower bound is also proved to evaluate the quality of the heuristics. Computational experiments were undertaken. These showed that J_SC-BFLPT, which considers both the load balance of machines and the job processing times, was robust and outperformed the other heuristics for most of the problem categories. © 2013 Elsevier Ltd. All rights reserved.
Keywords: Scheduling; Unrelated parallel batch processing machines; Makespan; Heuristics
* Corresponding author. Tel.: +86 13905207498. E-mail addresses: [email protected] (X. Li), [email protected] (Y. Huang), [email protected] (Q. Tan), [email protected] (H. Chen).
0305-0548/$ - see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.cor.2013.06.016

1. Introduction

Scheduling batch processing machines is common in manufacturing industries such as metals [1,2] and semiconductors [3–5]. Unlike a discrete machine, a batch processing machine can process a number of jobs simultaneously as a batch. The processing time of a batch is determined by the largest processing time of the jobs in the batch. All the jobs in one batch have the same start and completion times, and processing cannot be interrupted until all the jobs in the batch are finished.

The problem under study is mainly motivated by burn-in operations in semiconductor manufacturing. The purpose of a burn-in operation is to identify "fragile" chips by subjecting them to thermal stress for an extended period of time. Since chips may be subjected to thermal stress for a period longer than their minimum required burn-in time, it is possible to process several different chips in the oven at the same time [6]. In this study, chips are referred to as jobs and burn-in ovens as batch processing machines. As a burn-in operation is usually longer than other operations, chips should be scheduled to burn-in ovens effectively in order to reduce makespan and minimize production costs.

In recent decades, an increasing amount of research has been done on scheduling batch processing machines since the problem was first proposed by Ikura and Gimple [7]. The studies have focused mainly on a single batch processing machine [6]. In multiple-machine environments, research has also been done for identical batch processing machines operating in parallel [8–11]. However, new and old machines often run together in real production environments, which results in different processing times when jobs are processed on different machines. When the speeds of the machines are not independent of the jobs, the machine environment is called unrelated machines in parallel [12].

In this work, scheduling unrelated parallel batch processing machines is studied. Job sizes are non-identical and the speeds of the machines are not independent of the jobs. The scheduling objective is to minimize the makespan, i.e., the completion time of the last job. The problem can be represented by the triplet Rm|batch, p_kj, s_j|Cmax [12], where p_kj denotes the (arbitrary) processing time of job j on machine k, s_j denotes the non-identical size of job j, and Rm denotes m unrelated parallel batch processing machines. The problem is NP-hard in the strong sense, as the simplified problem of scheduling a single batch processing machine is already strongly NP-hard [3]. The problem also generalizes scheduling identical parallel processing machines, and Pm||Cmax is NP-hard in the ordinary sense as it is equivalent to the PARTITION problem [12]. Hence, no polynomial-time algorithm exists for solving the problem unless P = NP. Several heuristics are therefore proposed to solve the problem under study, and a lower bound is developed to evaluate the performance of the proposed heuristics.

The remainder of this paper is organized as follows. In Section 2, a brief overview of scheduling batch processing machines is given. The detailed description of the problem studied in this paper is presented in Section 3. In Section 4, a lower bound is
proved in order to evaluate the performance of the proposed heuristics. Two groups of heuristics are proposed in Section 5. Computational experiments are conducted in Section 6. Finally, Section 7 presents the conclusions.
2. Literature review

Considerable research has been done on scheduling batch processing machines involving different machine environments, problem characteristics and objectives. Scheduling a batch processing machine was first introduced by Ikura and Gimple [7], who considered efficient algorithms to schedule a single batch processing machine under the assumption that job release times and due dates are agreeable. Lee et al. [3] presented efficient dynamic programming-based algorithms for minimizing different objectives, including maximum tardiness and the number of tardy jobs, on a single batch processing machine. Early studies, including those cited above, focus on scheduling with identical job sizes. Uzsoy [13] considered the problems of minimizing makespan and total completion time with non-identical job sizes. These problems were proven to be NP-hard, and heuristics like first fit longest processing time (FFLPT) were developed to obtain near-optimal solutions. The heuristics proposed by Uzsoy [13] were analyzed by Zhang et al. [14], who studied the general case and the special case in which job processing times satisfy the proportional assumption that large jobs have processing times no less than those of small jobs. Dupont et al. [15] proposed two heuristics, successive knapsack problem (SKP) and best fit longest processing time (BFLPT), the latter achieving better results than the FFLPT of Uzsoy [13]. Dupont et al. [16] provided a branch-and-bound method for minimizing the maximum completion time on a single batch processing machine; enumeration schemes were proposed for the general problem and for problems with regular criteria (criteria non-decreasing in the job completion times), and the branch-and-bound algorithm was developed for minimizing the makespan. Various meta-heuristics have been developed for scheduling batch processing machines as well.
A simulated annealing (SA) algorithm was proposed by Melouk et al. [17], and random instances were generated to test its effectiveness. Damodaran et al. [18] solved the same problem using a genetic algorithm (GA); the experimental results, compared with SA and CPLEX, showed that the GA was able to reach better makespans with shorter run times. Two different GAs were proposed by Kashan et al. [19]: a sequence-based GA (SGA) and a batch-based hybrid GA (BHGA). Numerical experiments showed that BHGA outperformed SGA and SA, especially for large-scale problems. Chen et al. [20] approached the problem from a clustering perspective: a clustering algorithm, constrained agglomerative clustering of batches (CACB), was proposed, and computational experiments comparing CACB with BFLPT and GA showed that CACB outperformed both, especially for large-scale problems. Other meta-heuristics, such as the ant colony optimization algorithm (ACO) [21] and the particle swarm optimization algorithm (PSO) [22], have also been applied. Apart from makespan, other objectives have also been studied, such as the weighted number of late jobs [23], total weighted completion time [5], and mean flow time [24]. Research on parallel batch processing machines is also available. GAs [10,11] were designed for scheduling parallel batch processing machines; both studies reported better results than SA. Kashan et al. [10] also developed a lower bound for the problem Pm|batch|Cmax. Damodaran et al. [25] proposed a lower bound considering the
ready times of the jobs, and a greedy randomized adaptive search procedure (GRASP) was proposed to minimize the makespan. An SA approach [26] was later compared to GRASP and found comparable with respect to solution quality while less computationally costly. Unlike the research reviewed above, this paper considers scheduling when the speeds of the machines are not independent of the jobs, i.e., Rm|batch, p_kj, s_j|Cmax. This machine environment is a generalization of identical parallel processing machines. Heuristics are proposed to schedule the batches formed using BFLPT [15], which shows better performance than other approximation algorithms, and a computational experiment is conducted to evaluate the effectiveness of the proposed algorithms.
3. Problem description

There are n jobs, J, to be processed on m different batch processing machines, M, in parallel. Machine k processes job j at speed v_kj, so the processing time is p_kj = p_j / v_kj when job j is processed on machine k. Each job therefore has a different processing time on different machines. Thus, the problem can be stated as follows: each job j ∈ J is associated with a size s_j and m different processing times p_kj, k ∈ M. The machines are batch processing machines that can process several jobs simultaneously as a batch b, as long as the total size of the jobs in the batch does not exceed the machine capacity C. All jobs in a batch have the same start and completion times. Once the processing of a batch has started, it cannot be interrupted, and no jobs can be added to or removed from the batch. The processing time of a batch b on a machine k is P_bk = max{p_kj | j ∈ b}; that is, the processing time of a batch is determined by the job with the longest processing time, which in turn depends on the machine on which the batch is processed.
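To make the two batch rules concrete, the following Python sketch (illustrative helper names, not from the paper) checks batch feasibility against the capacity C and computes a batch's processing time on a machine as the longest processing time among its jobs; the numbers are taken from jobs 7 and 9 of the Table 2 example instance:

```python
# Illustrative sketch of the batching rules from the problem description.
# sizes[j] is the size of job j; p[k][j] is the processing time of job j on machine k.

def batch_is_feasible(batch, sizes, capacity):
    """A batch is feasible if the total size of its jobs does not exceed C."""
    return sum(sizes[j] for j in batch) <= capacity

def batch_processing_time(batch, p, k):
    """P_bk = max{p_kj | j in batch}: the longest job determines the batch time."""
    return max(p[k][j] for j in batch)

# Jobs 7 and 9 of the Table 2 instance (sizes 3 and 4, capacity 10):
sizes = {7: 3, 9: 4}
p = {0: {7: 1, 9: 3}, 1: {7: 4, 9: 7}}   # machine -> job -> processing time
print(batch_is_feasible([7, 9], sizes, 10))   # True: 3 + 4 <= 10
print(batch_processing_time([7, 9], p, 0))    # 3 on the first machine
print(batch_processing_time([7, 9], p, 1))    # 7 on the second machine
```

The same two jobs thus yield different batch processing times on the two machines, which is exactly what distinguishes the unrelated-machine setting from the identical-machine one.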
4. Lower bound

In this section, a lower bound is presented to evaluate the performance of the proposed heuristics. As 1|B|Cmax can be solved optimally by full batch longest processing time (FBLPT) [27], a lower bound can be obtained by relaxing the original problem to one in which jobs may be split and processed in different batches. The lower bound LB is computed as follows:

Algorithm LB
Step 1. Let J′ = {j | C − s_j < min{s_1, …, s_n}}. That is, each job in J′ must occupy a batch on its own, so a batch set B_s can be generated directly from J′.
Step 2. Split each job j in J − J′ into s_j unit-size jobs, each with processing time min{p_kj | k ∈ M}.
Step 3. Apply the algorithm FBLPT to the unit-size job list generated in Step 2 and let B_u denote the resulting batch set. Then the lower bound is

LB = max{ max_{j∈J} min{p_kj | k ∈ M}, ⌈ (Σ_{b∈B_s∪B_u} min{p_bk | k ∈ M}) / |M| ⌉ }    (1)
Proposition 1. LB is a valid lower bound.

Proof. Let C*_max denote the optimal makespan of a given problem.
On the one hand, since the processing time of a batch is determined by the job with the longest processing time in that batch, and the makespan equals the completion time of the last batch processed, we have C*_max ≥ max_{j∈J} min{p_kj | k ∈ M}.
On the other hand, B_s and B_u are optimal for the relaxed problem 1|B|Cmax. So, for any feasible solution H, we have Σ_{b∈B_s∪B_u} min{p_bk | k ∈ M} ≤ Σ_{b∈H} min{p_bk | k ∈ M}. Hence, when the batches are processed on parallel machines, the optimal makespan cannot be less than ⌈Σ_{b∈B_s∪B_u} min{p_bk | k ∈ M} / |M|⌉. Thus, LB is a valid lower bound. □
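Algorithm LB can be sketched in Python as follows. This is a minimal rendering under the assumption that p is given as a machine-by-job matrix; on unit-size jobs, FBLPT reduces to sorting by LPT and cutting the list into full batches of C jobs, with the first (longest) job of each batch setting the batch time:

```python
import math

def lower_bound(sizes, p, capacity):
    """Sketch of Algorithm LB. sizes[j]: size of job j; p[k][j]: time of job j on machine k."""
    m, n = len(p), len(sizes)
    pmin = [min(p[k][j] for k in range(m)) for j in range(n)]  # fastest-machine time per job
    smin = min(sizes)
    # Step 1: jobs whose residual capacity cannot hold even the smallest job
    # occupy a batch on their own (batch set Bs).
    singles = {j for j in range(n) if capacity - sizes[j] < smin}
    batch_times = [pmin[j] for j in singles]
    # Step 2: split each remaining job j into sizes[j] unit-size jobs with time pmin[j].
    units = []
    for j in range(n):
        if j not in singles:
            units.extend([pmin[j]] * sizes[j])
    # Step 3: FBLPT on the unit-size jobs (batch set Bu): LPT order, full batches of C jobs.
    units.sort(reverse=True)
    batch_times.extend(units[i] for i in range(0, len(units), capacity))
    # Equation (1): LB = max( max_j min_k p_kj , ceil( sum of batch times / |M| ) )
    return max(max(pmin), math.ceil(sum(batch_times) / m))
```

Applied to the 10-job, two-machine instance of Table 2 with capacity 10, the routine evaluates the relaxation of equation (1) directly from the sizes and the per-job fastest processing times.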
5. Heuristics

Given a set of jobs, J, to be processed on a set of batch processing machines, M, the proposed heuristics can be classified into two groups. Group 1 divides the solution into two stages: stage 1 groups the jobs in J into feasible batches, and stage 2 schedules the batches so formed on the machines in M to minimize the makespan. The first stage is equivalent to a bin-packing problem when the job processing times are ignored. As mentioned in Section 2, the heuristic BFLPT [15], derived from the prominent best fit (BF) heuristic for bin packing, is usually used to group jobs into batches. Group 2 treats the two stages differently: stage 1 determines which jobs are allocated to which machines, and stage 2 determines how to schedule the jobs on each single batch processing machine. Based on these two groups, the framework of the heuristics proposed in this paper is described in Table 1.

5.1. Heuristics in Group 1

BFLPT cannot be used directly for scheduling unrelated parallel batch processing machines, as each job now has m processing times. Thus, three modified BFLPT heuristics are presented, ordering jobs by different processing times:

Heuristics BFLPT_Max, BFLPT_Min, BFLPT_Avg
Step 1. Arrange the jobs in decreasing order of max{p_kj | k ∈ M} in BFLPT_Max, min{p_kj | k ∈ M} in BFLPT_Min, and Σ_{k∈M} p_kj / |M| in BFLPT_Avg.
Step 2. Select the job at the head of the list and place it in the feasible batch with the smallest residual capacity. If the job fits in no existing batch, place it in a new batch. Repeat Step 2 until all jobs have been assigned to a batch; this creates the batch list B.
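The best-fit step shared by the three variants can be sketched as follows (illustrative Python; the ordering key is passed in, and ties are broken by list order, so tie handling may differ from the paper's implementation):

```python
def bflpt(jobs, sizes, key, capacity=10):
    """Group `jobs` into batches: sort by decreasing `key`, then place each job in
    the feasible batch with the smallest residual capacity (best fit), opening a
    new batch when none fits. Returns the batch list B as lists of job ids."""
    batches = []  # each entry: [residual_capacity, job_ids]
    for j in sorted(jobs, key=key, reverse=True):
        fits = [b for b in batches if b[0] >= sizes[j]]
        if fits:
            b = min(fits, key=lambda b: b[0])   # tightest feasible batch
        else:
            b = [capacity, []]
            batches.append(b)
        b[0] -= sizes[j]
        b[1].append(j)
    return [b[1] for b in batches]

# BFLPT_Max on the Table 2 instance: order jobs by decreasing Max_j = max{p_1j, p_2j}.
sizes = [3, 4, 2, 2, 2, 3, 2, 3, 4, 4]
p1 = [8, 7, 9, 2, 8, 9, 8, 1, 5, 3]
p2 = [1, 4, 4, 7, 9, 7, 2, 4, 4, 7]
B = bflpt(range(10), sizes, key=lambda j: max(p1[j], p2[j]))
```

Passing `min(p1[j], p2[j])` or `(p1[j] + p2[j]) / 2` as the key yields BFLPT_Min and BFLPT_Avg instead.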
Once the jobs have been scheduled into batches, heuristics batch_earliest idle time (B_EI), batch_shortest completion time (B_SC) and batch_shortest processing time (B_SP) are used in stage 2 to allocate the batches to the batch processing machines. The longest processing time (LPT) rule usually used to allocate batches to parallel machines [12] cannot be used here either, because each batch now has multiple processing times.

Heuristic batch_earliest idle time (B_EI)
Step 1. Select the batch b at the head of the batch list B created in stage 1.
Step 2. Allocate batch b to the machine with the earliest idle time. Repeat Step 2 until all batches have been scheduled.

Batches are usually allocated to the machine with the earliest idle time when scheduling identical machines in parallel [8,9]. However, when machines process jobs at different speeds, this rule cannot balance the load across machines.

Heuristic batch_shortest completion time (B_SC)
Step 1. Select the batch b at the head of the batch list B created in stage 1.
Step 2. Allocate batch b to the machine on which it would have the shortest completion time. Repeat Step 2 until all batches have been scheduled.

B_SC is designed to balance the load across machines by computing each machine's completion time after every batch is scheduled.

Heuristic batch_shortest processing time (B_SP)
Step 1. Select the batch b at the head of the batch list B created in stage 1.
Step 2. Allocate batch b to the machine k on which it has the shortest processing time, i.e., k = arg min{p_bk | k ∈ M}. Repeat Step 2 until all batches have been scheduled.

B_SP attempts to allocate batches to the fastest machines; the load balance of the machines is not considered.

A 10-job problem instance with two machines is chosen to illustrate the heuristics in Group 1. The machine capacity is assumed to be 10. The problem instance is shown in Table 2, where Max_j = max{p_1j, p_2j}, Average_j = average{p_1j, p_2j} and Min_j = min{p_1j, p_2j}. Three job lists, BLPT_Max = {4,5,2,6,0,1,9,3,8,7}, BLPT_Avg = {4,5,2,1,6,9,0,3,8,7} and BLPT_Min = {4,5,2,1,8,9,6,3,0,7}, are generated by arranging the jobs in decreasing order of Max_j, Average_j and Min_j respectively. Each job list is grouped into batches as in Step 2 of BFLPT_Max, BFLPT_Min and BFLPT_Avg, and the batches are then allocated to batch processing machines by B_EI, B_SC and B_SP. The results are illustrated in Fig. 1; for simplicity, only the results for job list BLPT_Max are presented.

5.2. Heuristics in Group 2

Unlike the heuristics above, heuristics J_EI-BFLPT, J_SC-BFLPT and J_SP-BFLPT allocate the jobs to machines according to different rules. The processing time of each job is determined once it is allocated to a machine, and then
Table 1
Framework of heuristics proposed in this paper.

          Stage 1                                     Stage 2
Group 1   BFLPT_Max, BFLPT_Min, BFLPT_Avg             B_EI, B_SC, B_SP
          (group jobs into batches)                   (allocate batches to machines)
Group 2   J_EI, J_SC, J_SP                            BFLPT
          (allocate jobs to machines)                 (group jobs into batches on each machine)

Table 2
A 10 job problem instance with two machines.

Jobs        0     1     2     3     4     5     6     7     8     9
s_j         3     4     2     2     2     3     2     3     4     4
p_1j        8     7     9     2     8     9     8     1     5     3
p_2j        1     4     4     7     9     7     2     4     4     7
Max_j       8     7     9     7     9     9     8     4     5     7
Average_j   4.5   5.5   6.5   4.5   8.5   8.0   5.0   2.5   4.5   5.0
Min_j       1     4     4     2     8     7     2     1     4     3
Fig. 1. Solutions generated by using heuristics in Group 1 for job list BLPT_Max.
Fig. 2. Solutions generated by using heuristics in Group 2 for job list BLPT_Min.
heuristic BFLPT is used to schedule the jobs on each machine into batches before they are processed.
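The Group 2 template (allocate each job to a machine first, then batch each machine's job list with BFLPT) can be sketched as follows, using the simplest allocation rule, J_SP, which sends each job to its fastest machine; the helper structure is illustrative, not the paper's implementation:

```python
def jsp_bflpt(sizes, p, capacity=10):
    """Group 2 sketch with the J_SP rule: send each job to its fastest machine,
    then run BFLPT on each machine's job list using that machine's times.
    sizes[j]: size of job j; p[k][j]: processing time of job j on machine k."""
    m = len(p)
    per_machine = {k: [] for k in range(m)}
    for j in range(len(sizes)):
        k = min(range(m), key=lambda i: p[i][j])  # k = argmin_k p_kj
        per_machine[k].append(j)
    schedule = {}
    for k, jobs in per_machine.items():
        batches = []  # [residual_capacity, job_ids], best fit as in BFLPT
        for j in sorted(jobs, key=lambda j: p[k][j], reverse=True):  # LPT on machine k
            fits = [b for b in batches if b[0] >= sizes[j]]
            if fits:
                b = min(fits, key=lambda b: b[0])
            else:
                b = [capacity, []]
                batches.append(b)
            b[0] -= sizes[j]
            b[1].append(j)
        schedule[k] = [b[1] for b in batches]
    # Makespan: each machine's completion time is the sum of its batch times.
    makespan = max(sum(max(p[k][j] for j in b) for b in bs)
                   for k, bs in schedule.items())
    return schedule, makespan
```

On the Table 2 instance, for example, J_SP sends jobs 3, 4, 7 and 9 to the first machine and the remaining jobs to the second; swapping the allocation rule in the first loop gives the J_EI and J_SC variants.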
Heuristic job_earliest idle time-BFLPT (J_EI-BFLPT)
Step 1. Arrange the jobs in decreasing order of min{p_kj | k ∈ M}.
Step 2. In this order, allocate each job j to the machine with the earliest idle time, where the idle time of machine k is calculated by applying BFLPT to the jobs already on that machine. A job list L_k is thus generated for each machine k.
Step 3. For each k ∈ M, group L_k into batches by applying BFLPT and sequence the batches on k in arbitrary order.

Heuristic job_shortest completion time-BFLPT (J_SC-BFLPT)
Step 1. Arrange the jobs in decreasing order of min{p_kj | k ∈ M}.
Step 2. In this order, allocate each job j to the machine that would have the earliest completion time should job j be processed on it, where the completion time of machine k is calculated by applying BFLPT to the jobs on that machine. A job list L_k is thus generated for each machine k.
Step 3. For each k ∈ M, group L_k into batches by applying BFLPT and sequence the batches on k in arbitrary order.

J_SC-BFLPT considers both the load balance of machines and the job processing times so as to minimize the imbalance of the machines' utilization.

Heuristic job_shortest processing time-BFLPT (J_SP-BFLPT)
Step 1. Allocate each job j to the machine k = arg min{p_kj | k ∈ M}; a job list L_k is thus generated for each machine k.
Step 2. Group each L_k into batches by applying BFLPT to the job list allocated to machine k and sequence the batches on k in arbitrary order.

Job processing times are considered only in J_SP-BFLPT, which may result in all the jobs being allocated to exactly one machine when the processing times of all jobs on one machine are smaller than those on the other machines.

The heuristics in Group 2 are used to solve the problem instance shown in Table 2 and the results are illustrated in Fig. 2; for simplicity, only the results for job list BLPT_Min are presented. In the following section, all 12 heuristics proposed above, viz. BFLPT_Max-B_EI, BFLPT_Max-B_SC, BFLPT_Max-B_SP, BFLPT_Min-B_EI, BFLPT_Min-B_SC, BFLPT_Min-B_SP, BFLPT_Avg-B_EI, BFLPT_Avg-B_SC, BFLPT_Avg-B_SP, J_EI-BFLPT, J_SC-BFLPT and J_SP-BFLPT, are compared with each other through an experimental study. The results are also compared with the lower bound, LB, to evaluate the performance of these heuristics.

6. Computational experiments

6.1. Experimental design

Random instances are generated to evaluate the quality of the solutions obtained by the above heuristics. The instances account for variations in five factors: the number of machines, the number of jobs, the job sizes, and the job processing times P and Q on the machines. The factors and levels are shown in Table 3. Different machine environments are considered as follows. In Section 6.2, numerical experiments with two machines (level M1) are conducted: on one machine, job processing times are discrete uniformly distributed within the interval P = U[1,50]; on the other machine, three levels of job processing times (Q1, Q2 and Q3) are considered. In Section 6.3, the machine environments of levels M2 and M3 are studied and, for simplicity, all job processing times are discrete uniformly distributed within the interval P = U[1,50]. The machine capacity is assumed to be 10 for all problem instances.

6.2. Results for level M1

45 (= 1 × 5 × 3 × 1 × 3) problem categories are considered and 100 random instances are generated for each category. As stated
above, the number of processing times of each job equals the number of machines in each problem category. MmJjSsPQq is used to denote the problem categories. For example, M1J3S2PQ2 represents the category with two machines, 50 jobs, and the three factors job size, job processing time on machine 1 and job processing time on machine 2 generated uniformly within the intervals [4,8], [1,50] and [51,100] respectively. Table 4 shows the results (average Cmax over the 100 instances of each problem category) for the M1Q1 problem categories obtained by the heuristics proposed in this study. Column 1 in Table 4 gives the run code. Columns 2–4 present the three heuristics using BFLPT_Max in stage 1 and B_EI, B_SC and B_SP in stage 2; columns 5–7 and 8–10 are analogous for BFLPT_Min and BFLPT_Avg. That is, columns 2–10 show the average Cmax reported by the nine heuristics in Group 1. Columns 11–13 give the average Cmax obtained by using J_EI, J_SC and J_SP in stage 1 with BFLPT in stage 2, i.e., the three heuristics in Group 2. The average lower bound LB for each problem category is given in column 14. The best average Cmax values for each problem category are given in bold. As Table 4 shows, J_SC-BFLPT is better than the other heuristics for most of the problem categories. Among the nine heuristics in Group 1, the ones using B_SC in stage 2 outperform the others; for example, BFLPT_Max-B_SC is better than BFLPT_Max-B_EI and BFLPT_Max-B_SP for all problem categories, and BFLPT_Min-B_SC generally performs best in Group 1. Results from J_SC-BFLPT and J_SP-BFLPT are close to each other, with J_SP-BFLPT gradually performing better as the number of jobs grows, especially for instances with 200 jobs (i.e., M1J5S2Q1 and M1J5S3Q1).
There are two plausible reasons for this: (1) in the M1 problem categories, the two processing times of each job are generated from a discrete uniform distribution over [1,50], which
means that the probability that a job's processing time on machine 1 is less than or equal to its processing time on machine 2 is about 0.5. So, with high probability, J_SP allocates roughly half of the jobs to each machine, which yields a very good schedule, and this advantage grows with the number of jobs; (2) BFLPT is an effective heuristic for scheduling a batch processing machine and it performs better as the number of jobs becomes larger. So J_SP-BFLPT reports good results for the J5 problem categories.

Results for the M1Q2 problem categories are reported in Table 5. The M1Q2 categories consider the extreme condition in which job processing times on machine 2 are entirely different from (i.e., larger than) those on machine 1. J_SC-BFLPT performs better than the other heuristics for all M1Q2 problem categories. The performance of BFLPT_Min-B_SC is close to that of J_SC-BFLPT, and these two heuristics are the best in their respective groups. Heuristic J_SP-BFLPT does not perform as well as in the M1Q1 categories because the J_SP rule tends to allocate all jobs to machine 1 when every job's processing time on machine 1 is smaller than on machine 2; the J_SP rule therefore performs poorly for instances like those in the M1Q2 categories.

Mixed job processing times on the two machines are considered in the M1Q3 problem categories, as shown in Table 6. The M1Q3 categories are closer to real conditions from the perspective of job processing times on different machines. J_SC-BFLPT gives better results than all other heuristics for all M1Q3 categories, and its advantage grows as the number of jobs increases. Heuristics using the BFLPT_Min rule in stage 1 and the B_SC rule in stage 2 still show better performance than the other heuristics in Group 1. As the job processing times on the two machines differ greatly, the J_SP rule cannot distribute jobs between the machines properly.
Table 3
Factors and levels.

Factor               Levels
Number of machines   M1 = 2, M2 = 3, M3 = 6
Number of jobs       J1 = 10, J2 = 20, J3 = 50, J4 = 100, J5 = 200
Size                 S1 = U[2,4], S2 = U[4,8], S3 = U[1,10]
Processing time P    P = U[1,50]
Processing time Q    Q1 = U[1,50], Q2 = U[51,100], Q3 = U[1,100]

6.3. Results for levels M2 and M3
The results for three and six machines (levels M2 and M3) are given in Tables 7 and 8. The average Cmax declines when the jobs are processed on more machines. J_SC-BFLPT still reports the best solutions for all problem categories. Heuristics using B_SC (i.e., BFLPT_Max-B_SC, BFLPT_Min-B_SC and BFLPT_Avg-B_SC) in Group 1 show better performance than the other heuristics, whose performances are broadly similar. As all the job processing times are generated
Table 4
Average Cmax for M1Q1 problem categories.

Run    BFLPT_Max                BFLPT_Min                BFLPT_Avg                BFLPT                    LB
code   B_EI    B_SC    B_SP    B_EI    B_SC    B_SP    B_EI    B_SC    B_SP    J_EI    J_SC    J_SP
J1S1   62.3    59.9    74.6    69.4    58.0    68.2    63.8    57.6    69.2    64.4    46.1    52.8    37.2
J2S1   113.4   108.7   136.7   121.1   104.1   116.7   115.0   106.9   129.4   103.0   75.6    87.4    58.6
J3S1   262.8   256.0   343.2   286.1   246.3   269.4   265.4   250.3   278.6   227.9   168.1   198.2   138.5
J4S1   518.0   509.3   737.3   557.0   484.4   500.7   532.3   501.3   545.4   426.2   326.3   358.4   263.5
J5S1   1018.9  1005.7  1588.8  1095.8  962.6   966.3   1049.8  991.8   1033.2  823.6   677.3   680.2   517.6
J1S2   109.8   92.2    102.0   115.8   87.9    99.6    113.1   88.2    101.5   112.6   83.8    92.9    67.3
J2S2   208.8   168.8   186.6   217.3   160.4   175.5   206.1   163.1   179.7   211.0   150.8   166.8   125.5
J3S2   495.7   418.8   440.4   506.3   384.0   397.0   497.6   401.2   421.7   481.1   354.1   380.5   300.9
J4S2   984.4   831.6   867.1   1003.6  769.3   774.5   990.1   806.5   809.3   947.6   706.0   719.5   600.6
J5S2   1975.7  1669.7  1791.3  1989.6  1540.9  1506.2  1974.4  1598.7  1580.7  1874.8  1444.6  1409.6  1193.4
J1S3   95.5    82.1    96.5    96.4    76.6    91.5    95.3    80.2    95.6    100.6   72.6    85.0    55.9
J2S3   177.9   157.5   174.8   188.3   148.9   163.6   176.9   151.5   169.9   177.8   131.2   148.0   104.5
J3S3   426.5   374.5   411.5   433.1   343.7   356.8   428.3   357.4   378.0   405.1   296.6   319.3   239.3
J4S3   856.7   750.3   819.1   876.6   688.8   691.6   865.3   723.8   731.1   796.6   594.8   616.7   475.4
J5S3   1682.1  1484.5  1743.6  1709.2  1352.2  1305.8  1697.4  1432.0  1422.5  1549.9  1220.0  1166.7  930.7
Table 5
Average Cmax for M1Q2 problem categories.

Run    BFLPT_Max                BFLPT_Min                BFLPT_Avg                BFLPT                    LB
code   B_EI    B_SC    B_SP    B_EI    B_SC    B_SP    B_EI    B_SC    B_SP    J_EI    J_SC    J_SP
J1S1   98.8    91.8    128.1   94.1    84.1    95.9    94.5    85.5    112.6   94.3    78.6    102.2   48.3
J2S1   180.6   170.3   247.8   173.2   140.2   176.7   174.7   159.2   216.4   160.6   135.8   201.5   84.0
J3S1   414.2   399.1   596.7   337.0   309.9   410.6   396.9   374.5   524.7   336.0   293.8   483.4   194.8
J4S1   803.9   777.5   1183.0  621.6   594.0   802.3   763.1   735.1   1048.2  631.2   564.5   934.2   381.9
J5S1   1570.9  1534.2  2348.5  1193.5  1158.0  1575.1  1494.1  1452.1  2081.2  1222.8  1103.7  1804.9  753.8
J1S2   170.0   152.0   210.4   167.0   147.7   199.2   174.1   148.0   205.9   177.5   147.6   202.0   96.7
J2S2   326.0   297.7   425.0   309.2   283.7   394.3   322.5   290.6   413.2   320.5   282.7   405.2   188.5
J3S2   760.0   706.3   1025.9  692.5   657.9   928.1   745.6   684.5   992.5   733.8   655.2   965.2   447.1
J4S2   1518.0  1422.7  2087.0  1368.1  1323.6  1890.4  1484.6  1365.8  2011.9  1453.5  1307.8  1962.1  902.9
J5S2   2978.4  2784.4  4087.8  2638.3  2564.2  3668.7  2893.4  2682.1  3939.2  2801.5  2538.0  3813.9  1761.7
J1S3   152.9   139.1   190.0   152.6   130.4   171.6   152.8   133.9   181.6   151.5   129.3   175.9   79.3
J2S3   278.8   255.4   365.2   255.9   233.2   318.3   276.2   243.1   343.9   269.6   231.0   331.0   145.6
J3S3   668.8   623.4   911.1   589.2   558.4   780.5   646.2   593.7   859.3   620.3   550.2   837.3   356.1
J4S3   1308.2  1227.7  1817.0  1135.4  1100.6  1552.1  1264.1  1175.4  1715.2  1203.6  1081.3  1669.4  708.4
J5S3   2567.3  2418.8  3591.8  2194.2  2130.3  3024.2  2480.3  2315.5  3384.5  2320.7  2090.8  3236.3  1384.6
Table 6
Average Cmax for M1Q3 problem categories.

Run    BFLPT_Max                BFLPT_Min                BFLPT_Avg                BFLPT                    LB
code   B_EI    B_SC    B_SP    B_EI    B_SC    B_SP    B_EI    B_SC    B_SP    J_EI    J_SC    J_SP
J1S1   80.1    74.5    98.1    89.4    78.2    98.2    84.0    75.9    99.5    86.8    62.5    78.4    45.3
J2S1   148.3   140.7   193.0   167.9   142.6   187.8   154.6   143.8   191.5   142.5   107.2   148.9   74.6
J3S1   343.3   329.5   466.3   375.1   330.2   459.1   362.0   339.5   478.9   306.3   231.0   340.7   170.5
J4S1   661.7   642.8   935.5   710.5   623.7   891.5   700.1   663.9   938.3   568.1   438.1   654.7   327.9
J5S1   1296.7  1265.3  1901.4  1391.1  1244.1  1792.7  1375.3  1312.3  1830.6  1095.5  875.7   1254.2  639.7
J1S2   141.4   119.6   151.2   159.3   116.2   147.8   145.4   119.1   154.6   156.5   111.7   141.5   82.7
J2S2   261.3   227.5   297.5   282.6   217.8   290.1   266.7   224.1   298.6   275.5   202.9   269.4   151.9
J3S2   637.8   564.0   732.6   663.2   521.1   705.2   650.5   541.0   723.8   654.3   484.5   665.1   374.9
J4S2   1254.5  1123.9  1460.9  1315.2  1027.7  1407.4  1295.3  1089.7  1445.8  1283.0  947.0   1303.8  745.6
J5S2   2478.3  2213.6  2935.8  2576.8  2036.8  2823.5  2565.6  2159.1  2923.1  2480.5  1884.8  2599.2  1476.6
J1S3   121.8   105.1   132.5   136.5   103.2   131.6   124.9   103.3   131.6   135.4   96.8    120.8   67.7
J2S3   223.9   201.2   261.4   248.0   191.5   258.4   233.0   196.8   261.2   234.2   168.6   226.1   122.1
J3S3   556.2   502.2   658.3   579.2   467.2   642.8   573.8   489.2   664.7   556.6   411.7   572.6   303.0
J4S3   1075.1  977.0   1298.7  1129.6  910.8   1262.5  1103.2  960.6   1292.4  1056.7  786.6   1108.0  582.9
J5S3   2121.9  1939.7  2618.8  2208.8  1788.4  2488.8  2187.1  1902.0  2566.5  2040.2  1588.9  2160.5  1154.6
uniformly within the interval P = U[1,50], J_SP-BFLPT performs quite well.
6.4. Comparison with the lower bound

For each instance, the gap between the makespan C^H_max obtained by applying heuristic H and the lower bound LB is calculated as

gap = (C^H_max − LB) / LB × 100%    (2)

The gap for each instance is calculated, and the average over all 100 instances is used to show the quality of the proposed heuristics. As the previous analysis shows, J_SC-BFLPT performs better than the other heuristics in all problem categories except the instances of M1J5S2Q1 and M1J5S3Q1, in which J_SP-BFLPT is marginally better; however, J_SP-BFLPT is strongly affected by the levels of job processing time. On average, across both groups, the heuristics using the SC rule are better than the others. The four heuristics using the SC rule (i.e., BFLPT_Max-B_SC, BFLPT_Min-B_SC, BFLPT_Avg-B_SC and J_SC-BFLPT) are taken as representative examples, and the gaps for these "good" heuristics are presented as dot plots in Fig. 3. J_SC-BFLPT is much closer to LB than the other heuristics, and its gaps are generally smaller than 50% for all problem categories. It can also be seen from Fig. 3 that the gaps for the M1 problem categories are generally smaller than for the M3 categories; that is, the gaps grow as the number of machines increases. The gaps for BFLPT_Avg-B_SC, BFLPT_Min-B_SC and BFLPT_Max-B_SC are relatively larger and spread over a wider range.
7. Conclusions

Scheduling unrelated parallel batch processing machines is studied in this paper. The problem is NP-hard for the objective of minimizing makespan. Several heuristics, classified into two groups according to their solving mechanisms, are proposed. Heuristics in Group 1 first schedule the jobs into batches and then allocate the batches to machines; heuristics in Group 2, in contrast, first allocate the jobs to machines and then schedule the jobs on each machine into batches. A lower bound is also proposed to evaluate the quality of the heuristics. The experiments show that the B_SC and J_SC rules perform better than the other rules, and the results are better when the job list
Table 7
Average Cmax for M2P problem categories.

Run code  BFLPT_Max               BFLPT_Min               BFLPT_Avg               BFLPT                   LB
          B_EI    B_SC    B_SP    B_EI    B_SC    B_SP    B_EI    B_SC    B_SP    J_EI    J_SC    J_SP

J1S1        47.4    43.4    60.9    51.0    41.3    54.7    49.1    42.0    55.8    55.5    31.7    37.6    30.2
J2S1        82.0    75.7   106.4    92.2    73.8    93.6    83.8    72.4    96.0    80.7    46.7    60.7    34.4
J3S1       191.3   180.1   244.2   205.0   170.0   201.1   192.5   168.1   204.5   160.3    94.4   120.4    69.2
J4S1       369.0   348.6   445.8   394.3   329.7   372.7   375.7   325.8   370.0   294.1   179.4   216.4   134.0
J5S1       730.2   690.4   928.6   771.4   648.8   679.8   733.7   648.4   689.6   561.0   369.6   407.6   262.9

J1S2        76.6    55.2    73.2    81.4    54.7    69.2    75.3    53.3    70.1    83.6    48.4    60.7    34.5
J2S2       143.6    99.9   116.7   150.3    95.4   110.6   143.8    97.0   113.1   146.9    84.0    99.9    63.6
J3S2       340.6   239.7   269.9   351.4   225.8   246.1   340.9   230.9   260.1   334.4   192.4   220.4   151.1
J4S2       669.4   481.4   491.9   677.0   444.6   455.9   669.2   457.5   469.2   639.0   376.4   395.9   299.9
J5S2      1347.9   963.4   924.8  1346.4   889.9   871.1  1331.3   919.4   913.5  1261.6   756.9   758.7   599.7

J1S3        71.5    52.1    64.4    76.9    50.3    62.4    72.1    51.8    63.8    77.4    43.7    53.2    32.6
J2S3       127.0    95.2   112.2   136.0    90.8   104.9   127.1    91.1   108.8   127.0    72.7    84.8    51.7
J3S3       299.3   228.2   248.6   309.3   214.4   228.1   298.5   216.4   240.9   287.1   164.6   191.7   123.4
J4S3       588.1   453.8   467.7   601.5   416.1   429.4   581.5   429.3   451.5   543.1   318.1   344.7   239.1
J5S3      1156.9   896.6   910.3  1169.2   822.2   805.1  1149.7   843.2   841.5  1045.0   650.7   658.7   469.7
Table 8
Average Cmax for M3P problem categories.

Run code  BFLPT_Max               BFLPT_Min               BFLPT_Avg               BFLPT                   LB
          B_EI    B_SC    B_SP    B_EI    B_SC    B_SP    B_EI    B_SC    B_SP    J_EI    J_SC    J_SP

J1S1        45.2    33.4    41.6    45.6    33.0    40.6    44.8    33.3    39.8    45.7    18.8    19.8    18.7
J2S1        52.3    40.3    69.6    54.8    41.1    61.4    50.4    40.4    61.1    60.3    22.7    28.0    22.3
J3S1       111.0    89.7   129.9   117.5    87.5   118.9   112.1    85.4   123.2   100.0    35.1    54.3    26.0
J4S1       204.4   165.7   215.9   214.8   162.1   200.1   205.4   156.4   198.5   165.5    61.6    92.0    40.5
J5S1       392.7   319.2   365.2   406.3   308.7   351.6   392.4   300.6   348.5   297.9   115.2   153.8    77.2

J1S2        48.8    28.0    34.8    51.5    27.2    35.4    48.7    27.8    36.8    55.9    22.1    28.4    19.0
J2S2        81.8    43.2    55.2    87.1    42.3    53.9    83.8    40.9    55.6    88.9    31.3    41.9    22.2
J3S2       181.7    90.6   112.4   188.8    91.5   108.9   185.2    89.0   115.6   184.3    64.9    85.2    44.4
J4S2       348.7   177.8   196.2   357.7   169.8   189.9   347.6   176.1   199.1   336.2   120.3   148.7    88.2
J5S2       678.6   353.8   365.8   696.9   337.3   350.1   684.2   344.4   369.9   647.9   237.0   265.9   175.4

J1S3        46.8    30.1    37.9    47.6    30.4    38.2    47.1    30.5    38.8    54.2    21.7    27.2    19.0
J2S3        73.8    43.1    60.3    78.9    43.5    56.0    77.1    42.0    61.0    83.5    29.0    40.2    22.2
J3S3       161.6    94.2   117.3   166.1    91.1   110.1   162.4    90.2   117.3   162.0    56.7    75.7    36.1
J4S3       304.2   179.4   210.4   312.2   174.2   197.9   304.9   173.8   206.6   286.0   103.8   131.5    70.1
J5S3       601.1   350.1   367.0   611.8   331.8   349.1   604.3   340.0   365.1   551.1   201.4   236.6   137.9
Well-designed heuristics and meta-heuristics are a promising research direction for this problem, and problems involving other objectives and constraints can also be studied.
Acknowledgments

This work is supported by the National Natural Science Foundation of China (No. 71171184), the Funds for Creative Research Group of China (No. 70821001), and the Fundamental Research Funds for the Central Universities (No. 2013QNA27).

References
Fig. 3. Gaps between heuristics and LB grouped by number of machines.
or batch list is arranged in decreasing order of min{p_kj | k ∈ M}. J_SP-BFLPT performs well when the job processing times on each machine are generated uniformly from the same data range. J_SC-BFLPT is much more robust and functions well for most of the problem categories.
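The BFLPT batching step that the proposed heuristics build on can be sketched as follows. This is a generic reconstruction, not the paper's exact pseudocode: jobs (represented here as illustrative `(processing_time, size)` tuples) are sorted by non-increasing processing time, and each job is placed into the open batch with the least residual capacity that can still hold it (best fit); a new batch is opened when no open batch fits.

```python
def bflpt(jobs, capacity):
    """Best Fit Longest Processing Time (BFLPT) batching sketch.

    jobs: list of (processing_time, size) tuples; capacity: machine capacity.
    Returns a list of batches, each a list of jobs; a batch's total job size
    never exceeds the machine capacity.
    """
    # Longest processing time first
    ordered = sorted(jobs, key=lambda j: j[0], reverse=True)
    batches = []  # each entry: [residual_capacity, [jobs...]]
    for p, s in ordered:
        # Best fit: among open batches that can hold the job,
        # pick the one with the least residual capacity
        candidates = [b for b in batches if b[0] >= s]
        if candidates:
            best = min(candidates, key=lambda b: b[0])
            best[0] -= s
            best[1].append((p, s))
        else:
            batches.append([capacity - s, [(p, s)]])
    return [b[1] for b in batches]

def batch_processing_time(batch):
    # A batch is processed for the longest processing time among its jobs
    return max(p for p, _ in batch)
```

For example, with capacity 10 and jobs [(5, 4), (3, 3), (4, 6), (2, 2)], the jobs (5, 4) and (4, 6) share one full batch with processing time 5, and the remaining two jobs form a second batch with processing time 3.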
[1] Mathirajan M, Sivakumar AI, Chandru V. Scheduling algorithms for heterogeneous batch processors with incompatible job-families. Journal of Intelligent Manufacturing 2004;15(6):787–803.
[2] Ram B, Patel G. Modelling furnace operations using simulation and heuristics. In: Medeiros DJ, et al., editors. Proceedings of the 1998 Winter Simulation Conference, vols. 1–2; 1998. p. 957–63.
[3] Lee CY, Uzsoy R, Martin-Vega LA. Efficient algorithms for scheduling semiconductor burn-in operations. Operations Research 1992;40(4):764–75.
[4] Chandru V, Lee CY, Uzsoy R. Minimizing total completion-time on a batch processing machine with job families. Operations Research Letters 1993;13(2):61–5.
[5] Uzsoy R, Yaoyu Y. Minimizing total weighted completion time on a single batch processing machine. Production and Operations Management 1997;6(1):57–73.
[6] Mathirajan M, Sivakumar AI. A literature review, classification and simple meta-analysis on scheduling of batch processors in semiconductor. International Journal of Advanced Manufacturing Technology 2006;29(9–10):990–1001.
[7] Ikura Y, Gimple M. Efficient scheduling algorithms for a single batch processing machine. Operations Research Letters 1986;5(2):61–5.
[8] Chang PY, Damodaran P, Melouk S. Minimizing makespan on parallel batch processing machines. International Journal of Production Research 2004;42(19):4211–20.
[9] Damodaran P, Chang PY. Heuristics to minimize makespan of parallel batch processing machines. International Journal of Advanced Manufacturing Technology 2008;37(9–10):1005–13.
[10] Kashan AH, Karimi B, Jenabi M. A hybrid genetic heuristic for scheduling parallel batch processing machines with arbitrary job sizes. Computers and Operations Research 2008;35(4):1084–98.
[11] Damodaran P, Hirani NS, Velez-Gallego MC. Scheduling identical parallel batch processing machines to minimise makespan using genetic algorithms. European Journal of Industrial Engineering 2009;3(2):187–206.
[12] Pinedo ML. Scheduling: theory, algorithms, and systems. 3rd ed. New York: Springer; 2008.
[13] Uzsoy R. Scheduling a single batch processing machine with non-identical job sizes. International Journal of Production Research 1994;32(7):1615–35.
[14] Zhang GC, Cai XQ, Lee CY, et al. Minimizing makespan on a single batch processing machine with nonidentical job sizes. Naval Research Logistics 2001;48(3):226–40.
[15] Dupont L, Ghazvini FJ. Minimizing makespan on a single batch processing machine with non-identical job sizes. APII-JESA Journal Europeen des Systemes Automatises 1998;32(4):431–40.
[16] Dupont L, Dhaenens-Flipo C. Minimizing the makespan on a batch machine with non-identical job sizes: an exact procedure. Computers and Operations Research 2002;29(7):807–19.
[17] Melouk S, Damodaran P, Chang PY. Minimizing makespan for single machine batch processing with non-identical job sizes using simulated annealing. International Journal of Production Economics 2004;87(2):141–7.
[18] Damodaran P, Manjeshwar PK, Srihari K. Minimizing makespan on a batch-processing machine with non-identical job sizes using genetic algorithms. International Journal of Production Economics 2006;103(2):882–91.
[19] Kashan AH, Karimi B, Jolai F. Minimizing makespan on a single batch processing machine with non-identical job sizes: a hybrid genetic approach. Evolutionary Computation in Combinatorial Optimization, Proceedings 2006;3906:135–46.
[20] Chen H, Du B, Huang GQ. Scheduling a batch processing machine with non-identical job sizes: a clustering perspective. International Journal of Production Research 2011;49(19):5755–78.
[21] Xu R, Chen HP, Li XP. Makespan minimization on single batch-processing machine via ant colony optimization. Computers and Operations Research 2012;39(3):582–93.
[22] Damodaran P, Diyadawagamage DA, Ghrayeb O, et al. A particle swarm optimization algorithm for minimizing makespan of nonidentical parallel batch processing machines. International Journal of Advanced Manufacturing Technology 2012;58(9–12):1131–40.
[23] Brucker P, Kovalyov MY. Single machine batch scheduling to minimize the weighted number of late jobs. Mathematical Methods of Operations Research (ZOR) 1996;43(1):1–88.
[24] Ghazvini FJ, Dupont L. Minimizing mean flow times criteria on a single batch processing machine with non-identical job sizes. International Journal of Production Economics 1998;55(3):273–80.
[25] Damodaran P, Velez-Gallego MC, Maya JA. GRASP approach for makespan minimization on parallel batch processing machines. Journal of Intelligent Manufacturing 2011;22(5):767–77.
[26] Damodaran P, Velez-Gallego MC. A simulated annealing algorithm to minimize makespan of parallel batch processing machines with unequal job ready times. Expert Systems with Applications 2012;39(1):1451–8.
[27] Lee CY, Uzsoy R. Minimizing makespan on a single batch processing machine with dynamic job arrivals. International Journal of Production Research 1999;37(1):219–36.