Computers & Operations Research 31 (2004) 1655–1666
Makespan minimization subject to flowtime optimality on identical parallel machines

Chien-Hung Lin a,b, Ching-Jong Liao a,∗
a Department of Industrial Management, National Taiwan University of Science and Technology, 43 Keelung Road, Section 4, Taipei 106, Taiwan
b Jin Wen Institute of Technology, Taipei County, Taiwan
Abstract

We consider the identical parallel machine problem with makespan minimization subject to minimum total flowtime. First, we develop an optimal algorithm for the identical parallel machine problem with the objective of minimizing makespan. To improve the computational efficiency, two implementation techniques, the lower bound calculation and the job replacement rule, are applied. Based on this algorithm, an optimal algorithm for the considered problem, using new lower bounds, is developed. The result of this study can also be used to solve the bicriteria problem of minimizing the weighted sum of makespan and mean flowtime. Computational experiments are conducted with up to six machines and 1000 jobs. Although the proposed algorithm has an exponential time complexity, the computational results show that it is efficient in finding the optimal solution. © 2003 Elsevier Ltd. All rights reserved.

Keywords: Identical parallel machines; Bicriteria; Makespan minimization; Total flowtime minimization
1. Introduction

We consider a scheduling problem that assigns n available and independent jobs to m identical parallel machines. Each job can be processed on any of the identical parallel machines, and a machine can process only one job at a time. The processing times of the jobs, pi, are assumed to be integers. Jobs cannot be split and preemption is not allowed. Also, the ready times of all jobs are zero and no job has priority over any other. The objective is to minimize the makespan subject to the minimum total flowtime; that is, any feasible schedule of the problem must have the minimum total flowtime.
∗ Corresponding author. Tel.: +886-2-2737-6437; fax: +886-2-2737-6360.
E-mail address: [email protected] (C.-J. Liao).
0305-0548/$ - see front matter © 2003 Elsevier Ltd. All rights reserved.
doi:10.1016/S0305-0548(03)00113-8
The importance of multiple criteria in multiple-machine scheduling problems is widely appreciated. General surveys can be found in Nagar et al. [1] and T'kindt and Billaut [2]. Three primary approaches to multiple criteria problems are the efficient solution method, the weighting method, and the priority method. In the efficient solution method, the complete set of efficient solutions, i.e., schedules that are not dominated by any other schedule, is generated. The weighting method expresses all the objectives in a single criterion by the use of weighting factors. In the priority method, the criteria are ranked according to strict priorities. When two criteria are considered, we can optimize one of them while holding the other fixed at its best value. The problem considered in this paper is one of the priority problems. Following the three-field notation for scheduling problems, our problem is denoted as P//(Cmax | ΣCi), where P represents the identical parallel machines, Cmax designates the makespan (the maximum completion time), ΣCi denotes the total flowtime, and Cmax | ΣCi represents makespan minimization subject to the minimum total flowtime. The problem has been shown to be NP-hard [3]. Coffman and Sethi [4], Eck and Pinedo [5], and Gupta and Ruiz-Torres [6] have all proposed heuristics for this problem. Based on the minimum flowtime constraint, Eck and Pinedo [5] introduced the concepts of rank restriction and reduced processing time. Gupta and Ruiz-Torres [6] applied the reduced processing time to propose an effective two-phase heuristic (the G&R heuristic); the first phase generates an upper bound solution, and the second phase improves it toward the optimum.

We now review the literature on optimal solution methods for the identical parallel machine problem. Conway et al. [7] showed that the P//ΣCi problem is optimally solved by assigning jobs in the SPT order. Ho and Wong [8] proposed the two-machine optimal scheduling (TMO) algorithm, based on lexicographic search, to solve the two identical machine problem with makespan minimization optimally. Applying the TMO algorithm, Gupta and Ho [9] solved the two identical parallel machine problem with makespan minimization subject to minimum total flowtime, denoted as P2//(Cmax | ΣCi). Based on the optimal solutions for P2//Cmax and P2//(Cmax | ΣCi), Gupta et al. [10] further proposed an optimal algorithm for the P2//wCmax + (1 − w)F̄ problem. In summary, they solved several important bicriteria problems in the two-machine environment by applying lexicographic search. In this paper, we extend these results to m machines.

The rest of the paper is organized as follows. The following section introduces, based on TMO and lexicographic search, an m-identical machine optimal (m-IMO) algorithm for the P//Cmax problem. In Section 3, we combine m-IMO with the reduced processing time to propose an m-machine hierarchical optimization (m-MHO) algorithm for the P//(Cmax | ΣCi) problem. We also establish two new lower bounds on the makespan for this problem. The simulation experiment and computational results are reported in Section 4. Finally, we conclude the paper with a summary in Section 5.
2. Minimizing makespan on m identical parallel machines

The identical parallel machine problem with makespan minimization, P//Cmax, is known to be NP-hard [11]. For the two-machine case, Ho and Wong [8] proposed the TMO algorithm, based on lexicographic search, to solve the problem optimally. Since the TMO algorithm is the core of the algorithm developed in this paper, we describe its steps as follows.
In the following, C_max denotes the smallest possible makespan, Ĉmax the makespan of the incumbent schedule, and C*max the optimal makespan.

Step 0: Renumber the jobs (J1, J2, ..., Jn) in the LPT order, i.e., p1 ≥ p2 ≥ · · · ≥ pn. Set Psum = Σ_{i=1}^{n} p_i, C_max = max{⌈Psum/2⌉, p1}, j = 1, Ĉmax = ∞, k = 1, π(1) = 1, π_k = (π(1), π(2), ..., π(k)), and P_k = Σ_{i=1}^{k} p_{π(i)}.
Step 1: If max{P_k, Psum − P_k} = C_max, then set Ĉmax = C_max, S* = π_k, and go to Step 6.
Step 2: If max{P_k, Psum − P_k} < Ĉmax, then set Ĉmax = max{P_k, Psum − P_k} and S* = π_k.
Step 3: If P_k < Ĉmax, then k = k + 1.
Step 4: If j < n, then j = j + 1, π(k) = j, and return to Step 1.
Step 5: If k > 1, then set k = k − 1, j = π(k), and return to Step 4.
Step 6: Set C*max = Ĉmax. Assign the jobs in S* to one machine and the remaining jobs to the other; the job order on each machine can be arbitrary.

The complexity of TMO is O(2^n). We will extend TMO to develop an optimal algorithm for the multiple-machine case. Theorem 1 is essential for this algorithm.

Theorem 1. For the P//Cmax problem, there is at least one machine whose job completion time is smaller than or equal to Σ_{i=1}^{n} p_i / m.

Proof. If the job completion times of all m machines were greater than Σ_{i=1}^{n} p_i / m, then the total workload would be greater than Σ_{i=1}^{n} p_i. This is a contradiction.
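To make the lexicographic search concrete, the following minimal Python sketch mirrors the logic of Steps 0–6 for the two-machine case. It is our own illustration rather than the authors' implementation; the function name and the recursive formulation are ours, but the pruning corresponds to Steps 1–3 (stop when the smallest possible makespan is reached, and never extend a subset whose load already reaches the incumbent).

from math import ceil

def tmo_two_machine(p):
    """Lexicographic search over the job subsets assigned to machine 1
    (every subset contains job 1), keeping the best two-machine split.
    Returns (optimal makespan, indices into the LPT-sorted list)."""
    p = sorted(p, reverse=True)                 # Step 0: LPT order
    n, psum = len(p), sum(p)
    target = max(ceil(psum / 2), p[0])          # smallest possible makespan
    best_val, best_set = float("inf"), None

    def search(last, load, chosen):
        nonlocal best_val, best_set
        val = max(load, psum - load)
        if val < best_val:                      # Step 2: update the incumbent
            best_val, best_set = val, list(chosen)
        if best_val == target or load >= best_val:
            return                              # Steps 1 and 3: prune
        for j in range(last + 1, n):            # Steps 4-5: extend lexicographically
            chosen.append(j)
            search(j, load + p[j], chosen)
            chosen.pop()
            if best_val == target:
                return

    search(0, p[0], [0])                        # job 1 is always on machine 1
    return best_val, best_set

# Two-machine split of the eight jobs used later in Table 1:
print(tmo_two_machine([10, 9, 8, 8, 5, 5, 5, 3]))   # (27, [0, 1, 2]); 27 = ceil(53/2)

On this small instance the optimum 27 equals the trivial lower bound ⌈53/2⌉, so the search stops as soon as the subset {10, 9, 8} has been examined.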
To develop an optimal algorithm by applying TMO, we first consider the three-machine case. The smallest possible makespan is ⌈Psum/3⌉, where Psum is the total job processing time and ⌈x⌉ denotes the smallest integer greater than or equal to x. Assume that machine M3 has the smallest job completion time among the three machines. By Theorem 1, the job completion time of M3, denoted by CM3, is not greater than Psum/3. Since the job processing times are integers, the value of CM3 ranges from 0 to ⌊Psum/3⌋, where ⌊x⌋ is the greatest integer smaller than or equal to x. Hence, one of the other two machines attains the makespan of the P3//Cmax problem.

Based on the above discussion, we can develop the algorithm as follows. We first assign jobs (arranged in the LPT order) into the interval (0, ⌊Psum/3⌋) on M3 in turn, until either the workload equals ⌊Psum/3⌋ or job n has been tried. Then we assign the remaining jobs to the other two machines by applying TMO and obtain the makespan. If the makespan is greater than the smallest possible makespan, we change the job assignment on M3 according to the lexicographic search and perform TMO again; that is, the last assigned job, say job k, is replaced with job k + 1, and if the last assigned job is job n, it is removed. The procedure continues until either the smallest possible makespan is obtained or all job combinations have been tested. For convenience, this algorithm is called the three-identical machine optimal (3-IMO) algorithm. Similarly, we can develop an algorithm for the four-machine case, and so on; we call these the 4-IMO algorithm and, for m machines, the m-IMO algorithm.

To improve the efficiency of the algorithm, we investigate two implementation issues: the lower bound calculation and the job replacement rule. Assume that machine Mh has the smallest job completion time, CMh, among the m machines. We study the lower bound of CMh.
When the assigned workload on Mh is smaller than this lower bound, we can directly change the job assignment on Mh without performing (m − 1)-IMO, which saves a substantial amount of unnecessary computation time. The lower bound is established in the following theorem.

Theorem 2. Let C(S1) be the makespan under a feasible schedule S1. Then CMh ≥ Psum − (m − 1)(C(S1) − 1), where Psum = Σ_{i=1}^{n} p_i.

Proof. Assume that C(S2) < C(S1), where C(S2) is the makespan under a feasible schedule S2. Since the processing times of the jobs are integers, we have C(S2) ≤ C(S1) − 1. Note that the total workload on the m machines equals the total processing time, i.e.,

CM1(S2) + CM2(S2) + · · · + CMh(S2) + · · · + CMm(S2) = Psum,      (1)

where CMi(S2) is the job completion time on Mi under schedule S2. Since C(S2) is the makespan under schedule S2, we have C(S2) ≥ CMi(S2) for 1 ≤ i ≤ m. Thus,

C(S2) + C(S2) + · · · + CMh(S2) + · · · + C(S2) ≥ Psum      (2)

or

CMh(S2) ≥ Psum − (m − 1)C(S2) ≥ Psum − (m − 1)(C(S1) − 1).      (3)

This completes the proof.
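For instance, in the three-machine example given later in this section, Psum = 53 and the first feasible schedule found has makespan C(S1) = 19; Theorem 2 then gives CMh ≥ 53 − 2(19 − 1) = 17, which is exactly the workload lower bound s_min = 17 used in that example.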
Another implementation issue is the rule for replacing jobs in the interval. Suppose that the last job assigned to the interval is job k. If the smallest possible makespan is not obtained, we replace job k with job k + 1. If p_{k+1} = p_k, the replacement is unnecessary: all job combinations of (k + 1, k + 2, ..., n) have already been tried in the remaining interval when the last assigned job was job k, and replacing job k with job k + 1 would only retry the combinations of (k + 2, k + 3, ..., n) in the same remaining interval. Since (k + 2, k + 3, ..., n) is a subset of (k + 1, k + 2, ..., n), the replacement is unnecessary and we can directly test job k + 2.

We now introduce the m-IMO algorithm. The following notation is used. Let π be the set of jobs assigned into the interval (0, s), where s = ⌊Psum/m⌋; let k be the number of assigned jobs and let j denote the current job. Let P(k) be the total processing time of the jobs in π, Ĉmax the makespan of the incumbent schedule, and s_min the lower bound on the workload of the interval given by Theorem 2 (initially 0). The steps of the algorithm are as follows.

The m-IMO algorithm

Step 0: Renumber the jobs (J1, J2, ..., Jn) in the LPT order, i.e., p_i ≥ p_{i+1} for 1 ≤ i ≤ n − 1. Set Psum = Σ_{i=1}^{n} p_i, s = ⌊Psum/m⌋, C_max = max{⌈Psum/m⌉, p1}, j = 1, Ĉmax = ∞, s_min = 0, k = 1, π = (π(1), ..., π(k)), π(1) = 1, and P(k) = Σ_{i=1}^{k} p_{π(i)}.
Step 1: If p1 = C_max, then schedule the remaining jobs by applying (m − 1)-IMO to obtain C′max and the job sets S′_{m−1}, S′_{m−2}, ..., S′_1. Set Ŝ_m = π, Ŝ_{m−1} = S′_{m−1}, ..., Ŝ_1 = S′_1, Ĉmax = max(p1, C′max), and go to Step 8.
Step 2: Set j = j + 1. If j > n, then go to Step 5; else π(k + 1) = j.
Step 3: If P(k+1) = s, then k = k + 1 and go to Step 5.
Step 4: If P(k+1) < s, then k = k + 1. Return to Step 2.
Step 5: If P(k) < s_min, then go to Step 7; else schedule the remaining jobs, not in π, by applying (m − 1)-IMO to obtain the makespan C′max and S′_{m−1}, S′_{m−2}, ..., S′_1.
Table 1
Processing times for the m-IMO example

i      1    2    3    4    5    6    7    8
p_i   10    9    8    8    5    5    5    3
Step 6: If C′max = C_max, then Ĉmax = C′max, Ŝ_m = π, Ŝ_{m−1} = S′_{m−1}, ..., Ŝ_1 = S′_1, and go to Step 8. If C′max < Ĉmax, then Ĉmax = C′max, Ŝ_m = π, Ŝ_{m−1} = S′_{m−1}, ..., Ŝ_1 = S′_1, and s_min = Psum − (m − 1)(Ĉmax − 1).
Step 7: If p_{π(k)} = p_{π(k)+1}, then π(k) = π(k) + 1 and restart Step 7; else set j = π(k) and k = k − 1. If k > 0, then return to Step 2.
Step 8: The makespan is Ĉmax. Jobs in Ŝ_m, Ŝ_{m−1}, ..., Ŝ_1 are assigned to machines M1, M2, ..., Mm, respectively. The job order on each machine can be arbitrary.

To analyze the time complexity of the algorithm, we first study the three-machine case. In the worst case, TMO must try all possible combinations of the jobs not assigned to M3. Since TMO has a time complexity of O(2^n) for an n-job problem [8], the complexity of the 3-IMO algorithm is O(Σ_{k=0}^{n} C(n, k) 2^{n−k}) = O(3^n). Similarly, we can derive that the complexity of the m-IMO algorithm is O(m^n).

We illustrate the m-IMO algorithm with an example. Consider a three-machine problem with eight jobs whose processing times are given in Table 1. The total processing time is Psum = 53, the interval is (0, ⌊53/3⌋) = (0, 17), and the smallest possible makespan is C_max = 18. First, we test the largest job. Since p1 = 10 < 17, we continue to assign the remaining jobs into the interval. After job 8 has been tried, we have k = 2, π = (1, 5), and P(k) = 15. We then schedule the remaining jobs by applying TMO and obtain C′max = 19, S′_1 = (2, 6, 7), and S′_2 = (3, 4, 8). From this result we obtain the lower bound on the workload of the interval, s_min = 17. Since the smallest possible makespan has not been achieved, we change the job assignment in the interval. Because p5 = p6 = p7, we replace the last assigned job, job 5, with job 8. Since the workload p1 + p8 = 13 < s_min and the last assigned job is job 8, we remove job 8 and replace job 1 with job 2 in the interval. The remaining jobs (3 ≤ i ≤ 8) continue to be assigned, and we obtain k = 2, π = (2, 3), and P(k) = 17. Scheduling the jobs not in π by TMO yields C′max = 18, S′_1 = (1, 4), and S′_2 = (5, 6, 7, 8). Finally, we obtain the optimal makespan Ĉmax = 18, and the job sets (2, 3), (1, 4), and (5, 6, 7, 8) are assigned to machines M1, M2, and M3, respectively. The job order on each machine can be arbitrary.
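The recursion performed by m-IMO can be sketched compactly. The following Python function is a simplified illustration of the idea (our own code, not the authors' step-by-step implementation): using Theorem 1, it enumerates the job subsets that fit into the interval (0, ⌊Psum/m⌋) for one machine, solves the remaining (m − 1)-machine problem recursively, and stops as soon as the smallest possible makespan is reached. It omits the job replacement rule and is intended only for small instances.

from itertools import combinations
from math import ceil

def m_imo_makespan(p, m):
    """Exact makespan for P//Cmax in the spirit of m-IMO: the least-loaded
    machine carries at most floor(sum(p)/m) (Theorem 1), so enumerate its
    job set and recurse on the remaining m - 1 machines."""
    p = sorted(p, reverse=True)                  # LPT order (Step 0)
    total = sum(p)
    if m == 1:
        return total
    target = max(ceil(total / m), p[0])          # smallest possible makespan
    cap = total // m                             # interval limit from Theorem 1
    best = float("inf")
    n = len(p)
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            load = sum(p[i] for i in subset)
            if load > cap:
                continue
            rest = [p[i] for i in range(n) if i not in subset]
            best = min(best, max(load, m_imo_makespan(rest, m - 1)))
            if best == target:                   # lower bound reached: optimal
                return best
    return best

# The Table 1 instance: eight jobs on three machines
print(m_imo_makespan([10, 9, 8, 8, 5, 5, 5, 3], 3))   # prints 18, as in the example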
3. Minimizing makespan subject to minimum flowtime

In this section, we discuss the m identical parallel machine problem with makespan minimization subject to minimum total flowtime. For the two-machine case, Gupta and Ho [9] proposed an optimal algorithm that combines TMO with the reduced processing time: they first renumber all jobs in the LPT order and obtain an alternative P2//Cmax problem with reduced processing times, and then apply the TMO algorithm to obtain the optimal solution. We utilize both the m-IMO algorithm and the reduced processing time to develop an optimal algorithm for the multiple-machine case. For convenience, the developed algorithm is called the m-MHO algorithm. Notice that the 2-MHO algorithm is the same as the algorithm of Gupta and Ho [9].
Consider a system with m machines and a set of jobs (J1, J2, ..., Jn) with processing times in the LPT order, i.e., p1 ≥ p2 ≥ · · · ≥ pn. Without loss of generality, we assume that n is an integer multiple of m, i.e., t = n/m is an integer. If not, we can add (⌈n/m⌉ × m − n) dummy jobs with zero processing times. Under the minimum total flowtime requirement, all jobs are partitioned into t (= n/m) sets. The first set (with rank 1) consists of the m largest jobs, the second set (with rank 2) consists of the next m largest jobs, and so on. Jobs with the same rank cannot be assigned to the same machine, which is called the rank restriction [5]. The reduced processing time of a job is obtained by subtracting the smallest processing time within the same rank from its processing time. Mathematically, the reduced processing times of the jobs with rank k (1 ≤ k ≤ t) are defined as follows:

d_{km−i} = p_{km−i} − p_{km},   for i = 0, 1, 2, ..., m − 1,      (4)

where p_{km} is the smallest processing time within rank k.
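As a quick check of Eq. (4) and the rank restriction, the short Python sketch below (the helper name is ours) pads the job list with dummy jobs, assigns ranks, and computes the reduced processing times; run on the 11-job data used in Table 2 below, it reproduces the d_i and R_i rows of that table.

def ranks_and_reduced_times(p, m):
    """Pad to a multiple of m with zero-length dummy jobs, then compute
    rank R_j = ceil(j/m) and reduced time d_j = p_j - (smallest p in its rank)."""
    p = sorted(p, reverse=True)                     # LPT order
    while len(p) % m:
        p.append(0)                                 # dummy jobs
    ranks = [idx // m + 1 for idx in range(len(p))]
    # the smallest processing time of a rank is that of its last job in LPT order
    reduced = [p[idx] - p[(idx // m + 1) * m - 1] for idx in range(len(p))]
    return p, ranks, reduced

p, R, d = ranks_and_reduced_times([25, 23, 22, 19, 17, 15, 13, 12, 11, 9, 8], 3)
print(d)   # [3, 1, 0, 4, 2, 0, 2, 1, 0, 9, 8, 0]
print(R)   # [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4]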
Clearly, the difference between the makespan of the problem with reduced processing times and that of the original problem is the sum of the smallest processing times over all sets, i.e., Σ_{i=1}^{t} p_{im}. To apply m-IMO, we reorder all jobs to obtain another job sequence, σ = (σ(1), σ(2), ..., σ(n)), in which the reduced processing times are in LPT order, i.e., d_{σ(i)} ≥ d_{σ(i+1)} for i = 1, 2, ..., n − 1. Although the job order changes after sorting, each job keeps its original rank; that is, if σ(i) = j, then R_{σ(i)} = R_j = ⌈j/m⌉, where R_x denotes the rank of job x.

In m-MHO, the job assignment is similar to m-IMO: compute the interval on the machine with the smallest completion time according to the reduced processing times, and then assign jobs one by one. However, the rank restriction is imposed in the considered problem. When job σ(i) is assigned into the interval, we check whether another job with the same rank has already been assigned. If not, we assign the job and set r(R_{σ(i)}) = 1. Recall that there are t jobs with zero reduced processing time, because their original processing times are the smallest within their ranks. Since jobs with zero reduced processing time cannot affect the workload in the interval, the actual number of jobs to be assigned is n − t = n′. When job σ(n′) has been tried and the assigned workload in the interval is greater than or equal to the lower bound (on workload), we assign, from the last t jobs (σ(n′ + 1), σ(n′ + 2), ..., σ(n)) with zero reduced processing time, exactly those jobs whose ranks are still missing, so that the machine receives t jobs with different ranks. The remaining jobs continue to be assigned to the other (m − 1) machines by applying (m − 1)-MHO, and so on. The procedure continues until only two machines remain, which can be solved by applying the algorithm of Gupta and Ho [9]. Recall that the sum of the smallest processing times over all sets must be added back when computing the original makespan.

3.1. The lower bound of the makespan

The lower bound of the makespan on identical parallel machines is well known [12], i.e.,

LB0 = max{ ⌈Σ_{i=1}^{n} p_i / m⌉, p_1 }.      (5)

The ceiling notation is used because the processing times are assumed to be integers. Under the minimum flowtime constraint, the lower bound can be modified. Gupta and Ruiz-Torres [6] proposed an improved lower bound given by the sum of the largest original processing time and the smallest original processing times of the other ranks. That is,

LB1 = max{ p_1 + Σ_{i=2}^{t} p_{im}, ⌈Σ_{i=1}^{n} p_i / m⌉ }.      (6)
We can further improve this lower bound. After transforming to the reduced processing times, the job with the largest original processing time may no longer have the largest reduced processing time. Thus, we can modify the lower bound as follows:

LB2 = max{ d_{σ(1)} + Σ_{i=1}^{t} p_{im}, ⌈Σ_{i=1}^{n} p_i / m⌉ }.      (7)
Since d_{σ(1)} ≥ p_1 − p_m, we have LB2 ≥ LB1. Moreover, the above lower bound can be further improved by exploiting the rank restriction. Assume that R_{σ(1)} = k. If the largest two jobs with rank k both have large enough reduced processing times, then the workloads of the two machines that receive these two jobs are together at least the sum of the processing times of both jobs plus the smallest two jobs of every other rank. If this sum is greater than twice the current lower bound, the lower bound can be improved. Similarly, we can consider the largest three jobs with rank k, and so on. Thus, an improved lower bound can be formulated as follows:

LB3 = max_{1 ≤ g ≤ m} ⌈ ( Σ_{i=1}^{g} p_{(k−1)m+i} + Σ_{j=1, j≠k}^{t} Σ_{i=0}^{g−1} p_{jm−i} ) / g ⌉,      (8)

where k = R_{σ(1)}. Clearly, LB3 ≥ LB2, since the g = 1 and g = m cases of LB3 yield the two terms of LB2.

The steps of the m-MHO algorithm are stated as follows.

The m-MHO algorithm

Step 0: Renumber the jobs (J1, J2, ..., Jn) in the LPT order, i.e., p_i ≥ p_{i+1} for 1 ≤ i ≤ n − 1. Set R_j = ⌈j/m⌉ for 1 ≤ j ≤ n and φ = Σ_{i=1}^{t} p_{im}. Compute the reduced processing times by using (4). Let σ = (σ(1), σ(2), ..., σ(n)), where d_{σ(j)} ≥ d_{σ(j+1)} for 1 ≤ j ≤ n − 1. Set P(n) = Σ_{i=1}^{n} d_{σ(i)}, s = ⌊P(n)/m⌋, C_max = LB3, j = 0, Ĉmax = ∞, n′ = n − t, s_min = 0, k = 0, π = (π(1), ..., π(k)) = ∅, and r(R_{σ(j)}) = 0 for 1 ≤ j ≤ n. (Here P(k) denotes the total reduced processing time of the k jobs currently assigned, P(k) = Σ_{i=1}^{k} d_{σ(π(i))}.)
Step 1: Set j = j + 1. If j > n′, then go to Step 5.
Step 2: If r(R_{σ(j)}) = 1, then return to Step 1; else π(k + 1) = j.
Step 3: If P(k+1) = s, then set k = k + 1 and r(R_{σ(j)}) = 1, and go to Step 5.
Step 4: If P(k+1) < s, then set k = k + 1 and r(R_{σ(j)}) = 1. Return to Step 1.
Step 5: If P(k) < s_min, then go to Step 8. Otherwise, set k′ = k and check j′ = n′ + 1, ..., n in turn: if r(R_{σ(j′)}) = 0, then set k′ = k′ + 1 and π(k′) = j′.
Step 6: Schedule the remaining jobs (not in π) by applying (m − 1)-MHO to obtain the makespan C′max and the job sets S′_{m−1}, S′_{m−2}, ..., S′_1.
Step 7: If C′max = C_max, then Ĉmax = C′max, Ŝ_m = π, Ŝ_{m−1} = S′_{m−1}, ..., Ŝ_1 = S′_1, and go to Step 9. If C′max < Ĉmax, then Ĉmax = C′max, Ŝ_m = π, Ŝ_{m−1} = S′_{m−1}, ..., Ŝ_1 = S′_1, and s_min = P(n) − (m − 1)(Ĉmax − φ − 1).
Step 8: Set j = π(k), r(R_{σ(j)}) = 0, and k = k − 1. If k > 0, then return to Step 1.
Step 9: The makespan is Ĉmax. Jobs in Ŝ_m, Ŝ_{m−1}, ..., Ŝ_1 are assigned to machines M1, M2, ..., Mm, respectively. The jobs on each machine are sequenced in the SPT order of their original processing times.
Table 2
Processing times for the m-MHO example

i      1    2    3    4    5    6    7    8    9   10   11   12
p_i   25   23   22   19   17   15   13   12   11    9    8    0
d_i    3    1    0    4    2    0    2    1    0    9    8    0
R_i    1    1    1    2    2    2    3    3    3    4    4    4
Table 3
The new job sequence after sorting by reduced processing time

i           1    2    3    4    5    6    7    8    9   10   11   12
d_σ(i)      9    8    4    3    2    2    1    1    0    0    0    0
R_σ(i)      4    4    2    1    2    3    1    3    1    2    3    4
To analyze the time complexity of the algorithm, we consider the three-machine case. Since there are three jobs in each set, there are 3^t job combinations on M3, where t = n/3. Note that, in the worst case, 2-MHO (the algorithm of Gupta and Ho [9]), which has a time complexity of O(2^{n/2}), must be applied to the 2t jobs remaining after the assignment of M3 for every such combination. This implies that the complexity of 3-MHO is O(6^{n/3}). By the same analysis, the complexity of 4-MHO is O(24^{n/4}), and we can conclude that the complexity of the m-MHO algorithm is O((m!)^{n/m}).

We illustrate the m-MHO algorithm with an example. Consider a three-machine problem with 11 jobs whose processing times are given in Table 2. Since t = n/m is not an integer, we add a dummy job with zero processing time. The reduced processing times and the ranks of the jobs are calculated and given in Table 2, and the new job sequence after sorting is shown in Table 3. The total reduced processing time is P(n) = 30, the sum of the smallest processing times over all sets is φ = 48, and the interval is (0, 10). The smallest possible makespan (i.e., the lower bound) is calculated as follows. The job with the largest reduced processing time is job 10, with rank k = 4. For g = 1, the value is p10 + p3 + p6 + p9 = 57; for g = 2, the value is ⌈(p10 + p11 + p3 + p2 + p6 + p5 + p9 + p8)/2⌉ = 59; for g = 3, the value is ⌈Σ_{i=1}^{12} p_i / 3⌉ = 58. Hence C_max = LB3 = 59. We start to assign jobs and obtain π = (σ(1), σ(7), σ(10), σ(11)) = (10, 2, 6, 9) and P(k) = 10. The remaining jobs are solved by 2-MHO, which yields C′max = 59, S′_1 = (σ(2), σ(5), σ(8), σ(9)) = (11, 5, 8, 3), and S′_2 = (σ(3), σ(4), σ(6), σ(12)) = (4, 1, 7, 12). Since C′max = C_max, we obtain the optimal makespan Ĉmax = 59, and the job sets (10, 9, 6, 2), (11, 8, 5, 3), and (12, 7, 4, 1) are assigned to M1, M2, and M3, respectively, with the jobs on each machine in the SPT order of their original processing times.
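The lower bound used in this example is easy to verify mechanically. The following Python sketch (our own illustration; the function name is ours) evaluates Eq. (8) for the Table 2 data and returns LB3 = 59, the maximum of the g = 1, 2, 3 values 57, 59, and 58 computed above.

from math import ceil

def lb3(p, m):
    """Lower bound LB3 of Eq. (8). The list p must be in LPT order and its
    length a multiple of m (append zero-length dummy jobs beforehand)."""
    t = len(p) // m
    ranks = [p[(r - 1) * m:r * m] for r in range(1, t + 1)]   # jobs of each rank
    reduced = [job - rank[-1] for rank in ranks for job in rank]
    k = reduced.index(max(reduced)) // m + 1                  # rank of the largest reduced time
    best = 0
    for g in range(1, m + 1):
        load = sum(ranks[k - 1][:g])                          # g largest jobs of rank k
        load += sum(sum(rank[-g:])                            # g smallest jobs of every other rank
                    for j, rank in enumerate(ranks, 1) if j != k)
        best = max(best, ceil(load / g))
    return best

# Table 2 instance (dummy job 12 appended); prints 59, matching the example
print(lb3([25, 23, 22, 19, 17, 15, 13, 12, 11, 9, 8, 0], 3))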
As an extension, the results for the two problems P//Cmax and P//(Cmax | ΣCi) can be used to derive a solution algorithm for the P//wCmax + (1 − w)F̄ problem. For the two-machine case, Gupta et al. [10] proposed an optimal algorithm for the P2//wCmax + (1 − w)F̄ problem: they applied the optimal solutions of P2//Cmax and P2//(Cmax | ΣCi) to develop optimality properties and the associated bounds, and when the optimality properties are not met, they apply lexicographic search to find the best feasible solution between the upper and lower bounds. Similarly, the same procedure can be used to develop an optimal algorithm for the m-machine case.
4. Computational results

A simulation experiment is conducted to evaluate the proposed algorithm, m-MHO, and the G&R heuristic (Gupta and Ruiz-Torres [6]). Three factors are considered in the experiment: m, n, and p_i. The number of machines m is 3, 4, 5, and 6. The number of jobs n is 10, 15, 20, 25, 30, 40, 50, 100, 500, and 1000. The processing times p_i are randomly generated from a uniform distribution over [1, b], with b = 25, 50, 100 (see Gupta and Ruiz-Torres [6]). The combination of the three factors gives a total of 120 problem sets, and 100 instances are generated for each set; hence, we report the results of 12,000 instances in total. Each instance is solved by m-MHO and by the G&R heuristic. All the algorithms are implemented in Visual Basic on a PC with a Celeron 366 CPU. As a complement to m-MHO, and because it is of interest in itself, we also provide the computational results of the m-IMO algorithm.

Table 4 reports the mean percentage deviation (MPD) from the optimum for the G&R heuristic. In summary, the MPD tends to increase as m and b increase and to decrease as n increases. It is observed that the G&R heuristic obtains the optimal solution for problems with a large number of jobs. Table 5 gives the number of optimal solutions obtained by G&R in each set of 100 instances. Table 6 provides the average CPU time over 100 instances for the proposed algorithm m-MHO. In general, the results show that the CPU time tends to increase as m and b increase. In the region of m = 5, 6 and 15 ≤ n ≤ 40, the average CPU time is much larger because some instances must try all the job combinations. In general, m-MHO is very efficient, especially for problems with a large number of jobs.
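For completeness, the sketch below shows how one problem set can be generated in the manner just described. The mean percentage deviation is computed with the usual relative-deviation formula; this exact definition is our assumption, since the paper does not spell it out.

import random

def generate_instances(n, b, count=100, seed=0):
    """One problem set: `count` instances of n jobs whose processing times
    are drawn uniformly from the integers 1..b."""
    rng = random.Random(seed)
    return [[rng.randint(1, b) for _ in range(n)] for _ in range(count)]

def mean_percentage_deviation(heuristic_values, optimal_values):
    """Assumed definition of MPD: the average of 100*(heuristic - optimal)/optimal."""
    deviations = [100.0 * (h - o) / o for h, o in zip(heuristic_values, optimal_values)]
    return sum(deviations) / len(deviations)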
Table 4
Mean percentage deviation of the G&R heuristic

         m = 3                         m = 4                         m = 5                         m = 6
n        (1-25)  (1-50)  (1-100)       (1-25)  (1-50)  (1-100)       (1-25)  (1-50)  (1-100)       (1-25)  (1-50)  (1-100)
10       0.0674  0.0678  0.0403        0.3231  0.1924  0.1123        0.4380  0.1742  0.0770        0       0       0
15       0.0155  0.1027  0.2178        0.0410  0.4234  0.4185        0.4463  0.5267  0.3365        0.1779  0.2225  0.2498
20       0.0228  0.0472  0.0893        0.1065  0.2359  0.3091        0.1544  0.4222  0.5386        0.2729  0.4542  0.5502
25       0       0.0188  0.0186        0.0122  0.0312  0.1355        0.1380  0.2975  0.3147        0.1269  0.3745  0.6051
30       0       0.0039  0.0157        0       0.0260  0.0452        0.0766  0.0971  0.1653        0.1218  0.2345  0.3569
40       0       0       0.0015        0       0.0078  0.0215        0.0095  0.0247  0.0466        0.0115  0.0415  0.0801
50       0       0       0.0012        0       0       0.0047        0       0.0039  0.0158        0.0091  0.0047  0.0262
100      0       0       0             0       0       0             0       0       0             0       0       0
500      0       0       0             0       0       0             0       0       0             0       0       0
1000     0       0       0             0       0       0             0       0       0             0       0       0
Table 5
Percentage of optimal solutions obtained by the G&R heuristic

         m = 3                    m = 4                    m = 5                    m = 6
n        (1-25)  (1-50)  (1-100)  (1-25)  (1-50)  (1-100)  (1-25)  (1-50)  (1-100)  (1-25)  (1-50)  (1-100)
10        97      95      96       91      90      89       87      90      91      100     100     100
15        99      88      64       98      74      55       85      67      70       95      89      85
20        98      92      73       93      75      47       92      65      39       88      68      58
25       100      96      92       99      95      62       91      65      37       93      66      30
30       100      99      92      100      95      84       94      86      59       92      76      35
40       100     100      99      100      98      89       99      95      81       99      93      76
50       100     100      99      100     100      97      100      99      92       99      99      89
100      100     100     100      100     100     100      100     100     100      100     100     100
500      100     100     100      100     100     100      100     100     100      100     100     100
1000     100     100     100      100     100     100      100     100     100      100     100     100
Table 6
Average CPU time of 100 instances by the proposed algorithm m-MHO

         m = 3                    m = 4                    m = 5                             m = 6
n        (1-25)  (1-50)  (1-100)  (1-25)  (1-50)  (1-100)  (1-25)    (1-50)    (1-100)       (1-25)      (1-50)       (1-100)
10       0.002   0.004   0.004    0.012   0.026   0.034    0.026     0.039     0.062         0.663       0.971        0.866
15       0.002   0.003   0.008    0.043   0.055   0.155    0.474     1.593     2.242         83.405(2)   220.862(4)   366.662(6)
20       0.001   0.001   0.001    0.047   0.096   0.289    4.034     6.265     16.680        31.548(13)  28.082(25)   52.618(57)
25       0.003   0.003   0.001    0.005   0.003   0.123    0.402(1)  0.151(1)  5.369(1)      4.217(5)    7.773(12)    48.391(17)
30       0.001   0.001   0.001    0.002   0.017   0.004    0.257     0.944(1)  0.027(1)      0.470(4)    14.205(4)    8.066(4)
40       0.001   0.002   0.013    0.004   0.002   0.001    0.007(1)  2.447(2)  8.159         6.336       0.063        0.088(1)
50       0.002   0.004   0.002    0.018   0.007   0.000    0.003     0.108     0.040         0.004       0.030        0.005(1)
100      0.002   0.002   0.004    0.006   0.006   0.010    0.005     0.007     0.011         0.008       0.012        0.013
500      0.010   0.012   0.017    0.016   0.023   0.036    0.022     0.036     0.069         0.035       0.061        0.123
1000     0.016   0.015   0.026    0.035   0.038   0.052    0.046     0.062     0.104         0.068       0.091        0.187

The number in parentheses represents the number of instances that took more than 20 min; the average CPU times are computed excluding these instances.
To compare the efficiency of the proposed algorithm and the G&R heuristic, we record the average CPU time of G&R for the large-sized problems (n = 100, 500, 1000), for which it obtains the optimal solution. The results are presented in Table 7. Examining Tables 6 and 7, it can be concluded that m-MHO is much more efficient than G&R in finding the optimal solution for large-sized problems; in addition, the solution of m-MHO is guaranteed to be optimal. Note that the computational results for m-MHO are reported only up to six machines (m = 6; see Table 6), because already for m = 6 many instances took more than 20 min. The computational results of the m-IMO algorithm are given in Table 8, which shows a behavior similar to that of m-MHO.
Table 7
Average CPU time of 100 instances for large-sized problems by the G&R heuristic

         m = 3                    m = 4                    m = 5                    m = 6
n        (1-25)  (1-50)  (1-100)  (1-25)  (1-50)  (1-100)  (1-25)  (1-50)  (1-100)  (1-25)  (1-50)  (1-100)
100      0.0656  0.1356  0.1799   0.1038  0.1758  0.3348   0.1561  0.3877  0.4331   0.2408  0.4181  0.4801
500      0.3423  0.7040  1.3869   0.5086  1.0427  2.1109   0.7328  1.5017  3.0754   1.3395  2.5081  4.5838
1000     0.7344  1.4144  2.7920   1.0847  2.1732  4.3455   1.4736  2.9855  6.3112   3.0461  5.2632  9.6070
Table 8
Average CPU time of 100 instances by the proposed algorithm m-IMO

         m = 3                    m = 4                    m = 5                              m = 6
n        (1-25)  (1-50)  (1-100)  (1-25)  (1-50)  (1-100)  (1-25)   (1-50)     (1-100)        (1-25)       (1-50)       (1-100)
10       0.008   0.024   0.041    0.142   0.217   0.429    2.038    3.738      5.604          7.473        17.897       20.582
15       0.001   0.003   0.024    0.026   0.146   1.473    13.525   79.818(1)  260.411(7)     138.569(32)  364.010(80)  230.325(95)
20       0.003   0.005   0.008    0.005   0.013   0.028    0.005    0.597      11.853(1)      0.107(4)     7.706(10)    67.114(37)
25       0.003   0.005   0.008    0.006   0.014   0.019    0.007    0.036      0.385          0.017        0.113        13.306
30       0.002   0.004   0.007    0.006   0.007   0.021    0.006    0.016      0.047          0.046        0.045        0.360
40       0.005   0.006   0.007    0.005   0.010   0.013    0.009    0.016      0.030          0.011        0.021        0.104
50       0.004   0.007   0.008    0.009   0.011   0.021    0.011    0.015      0.028          0.015        0.019        0.050
100      0.013   0.009   0.010    0.019   0.015   0.021    0.023    0.034      0.030          0.026        0.031        0.039
500      0.080   0.079   0.082    0.104   0.102   0.101    0.121    0.123      0.122          0.136        0.154        0.150
1000     0.243   0.236   0.242    0.267   0.257   0.253    0.293    0.285      0.299          0.324        0.335        0.333

The number in parentheses represents the number of instances that took more than 20 min; the average CPU times are computed excluding these instances.
5. Conclusions

This paper has considered the identical parallel machine problem with makespan minimization subject to minimum flowtime, denoted as P//(Cmax | ΣCi). We first propose an optimal algorithm (m-IMO), based on lexicographic search, for the P//Cmax problem. To save computation time, we apply two implementation techniques, the lower bound calculation and the job replacement rule. We then combine m-IMO with the reduced processing time to develop an optimal algorithm, using new lower bounds, for the considered problem P//(Cmax | ΣCi). As an extension, the results have been used to derive a solution algorithm for the P//wCmax + (1 − w)F̄ problem. Computational experiments have been conducted with up to six machines and 1000 jobs. The results show that the proposed algorithm is quite efficient; in addition, they demonstrate the good performance of the G&R heuristic.

Several further issues can be investigated. Solutions of the identical parallel machine problem with other bicriteria, e.g., flowtime minimization subject to minimum makespan or tardiness minimization subject to minimum makespan, are worth studying. Also, future research
can be done to extend the results to more complex shop environments, e.g., hybrid flow shop and job shop problems.

References

[1] Nagar A, Haddock J, Heragu S. Multiple and bicriteria scheduling: a literature survey. European Journal of Operational Research 1995;81:88–104.
[2] T'kindt V, Billaut JC. Multicriteria scheduling problems: a survey. RAIRO Operations Research 2001;35:143–63.
[3] Bruno JL, Coffman EG, Sethi R. Algorithms for minimizing mean flow time. In: Proceedings of the International Federation for Information Processing Congress 74. Amsterdam: North-Holland, 1974. p. 504–10.
[4] Coffman EG, Sethi R. Algorithms minimizing mean flow time: schedule length properties. Acta Informatica 1976;6:1–14.
[5] Eck BT, Pinedo M. On the minimization of the makespan subject to flowtime optimality. Operations Research 1993;41:797–801.
[6] Gupta JND, Ruiz-Torres AJ. Minimizing makespan subject to minimum total flow-time on identical parallel machines. European Journal of Operational Research 2000;125:370–80.
[7] Conway RW, Maxwell WL, Miller LW. Theory of scheduling. Reading, MA: Addison-Wesley, 1967.
[8] Ho JC, Wong JS. Makespan minimization for m parallel identical processors. Naval Research Logistics 1995;42:935–48.
[9] Gupta JND, Ho JC. Minimizing makespan subject to minimum flowtime on two identical parallel machines. Computers and Operations Research 2001;28:705–17.
[10] Gupta JND, Ho JC, Webster S. Bicriteria optimisation of the makespan and mean flowtime on two identical parallel machines. Journal of the Operational Research Society 2000;51:1330–9.
[11] Garey MR, Johnson DS. Computers and intractability: a guide to the theory of NP-completeness. San Francisco, CA: Freeman, 1979.
[12] Baker KR. Introduction to sequencing and scheduling. New York: Wiley, 1974.