Minimizing makespan on a single batching machine with release times and non-identical job sizes

Operations Research Letters 33 (2005) 157 – 164


Minimizing makespan on a single batching machine with release times and non-identical job sizes

Shuguang Li (a,*), Guojun Li (b,c,1), Xiaoli Wang (b), Qiming Liu (d)

a Department of Mathematics and Information Science, Yantai University, Yantai 264005, People's Republic of China
b School of Mathematics and System Science, Shandong University, Jinan 250100, People's Republic of China
c Institute of Software, Chinese Academy of Sciences, Beijing 100080, People's Republic of China
d School of Computer Science and Technology, Yantai Normal University, Yantai 264025, People's Republic of China

Received 2 June 2003; accepted 22 April 2004

Abstract

We consider the problem of scheduling jobs with release times and non-identical job sizes on a single batching machine; our objective is to minimize makespan. We present an approximation algorithm with worst-case ratio 2 + ε, where ε > 0 can be made arbitrarily small.
© 2004 Elsevier B.V. All rights reserved.

Keywords: Approximation algorithms; Scheduling; Batch processing; Makespan; Release times

1. Introduction

In this paper, we consider the problem of minimizing makespan on a batching machine. More precisely, we are given a set of n jobs and a single batching machine with capacity 1. Each job j is characterized by a triple of real numbers (rj, pj, sj), where rj is the release time before which job j cannot be scheduled, pj is the processing time which specifies the minimum time needed to process job j without interruption by the machine, and sj ∈ (0, 1] is the size of job j. The batching machine can process a number of jobs simultaneously as a batch as long as the total size of the jobs in the batch does not exceed the machine capacity. The processing time of a batch is the longest processing time of any job in the batch. Jobs processed in the same batch have the same completion time (the completion time of the batch in which they are contained), i.e., their common start time (the start time of that batch) plus the processing time of the batch. Our goal is to find a schedule for the jobs so that the makespan, defined as the completion time of the last job, is minimized.

* Corresponding author. E-mail address: [email protected] (S. Li).
1 Supported by the fund from NSFC under grant nos. 10271065 and 60373025.

Job scheduling on a batching machine is an important issue in the manufacturing industry. As a motivating example, the manufacturing of very large scale integrated circuits requires a burn-in operation in which the integrated circuits are put into an oven in batches and heated for a prolonged period to bring out any latent defects. (See Lee [8] for the detailed process.) Since different circuits require different

0167-6377/$ - see front matter © 2004 Elsevier B.V. All rights reserved. doi:10.1016/j.orl.2004.04.009


minimum baking time, the batching and scheduling of the integrated circuits is highly non-trivial and can greatly affect the production rate.

Note that if all jobs are released at the same time and have identical processing times, then the problem is just the classical one-dimensional bin packing problem [3]. The bin packing problem is strongly NP-hard [6], and there is no polynomial algorithm that can achieve a better worst-case ratio than 3/2 unless P = NP [9]. On the other hand, when all jobs have identical sizes, i.e., sj = 1/b (b < n) for all jobs j, the problem is equivalent to the batching problem studied by Lee and Uzsoy [7]. A number of heuristics were proposed in [7]. The batching problem was shown to be strongly NP-hard by Brucker et al. [2]. Deng et al. [5] obtained a PTAS for it which computes a (1 + ε)-approximation solution in O(4^{2/ε} n^{8/ε+1} b^{2/ε} / ε^{4/ε−2}) time for any ε > 0.

There has been a lot of research on similar or related scheduling problems. However, most of it has concentrated on the model of identical job sizes. For example, Lee et al. [8] provided efficient algorithms for minimizing the number of tardy jobs and maximum tardiness under a number of assumptions. They also presented a heuristic for the problem of minimizing maximum lateness on identical parallel batching machines and a worst-case ratio on its performance. Brucker et al. [2] addressed the problem of scheduling n jobs on a batching machine to minimize regular scheduling criteria that are non-decreasing in the job completion times. Deng et al. [4] presented a PTAS for the more vexing problem of minimizing total completion time on a single batching machine with release times, from which we draw several ideas to solve our problem.

Only a few works discuss the problem where jobs have different sizes. To the best of our knowledge, the previous results on the problem stated above are restricted to the model of equal release times [10,11]. Uzsoy [10] presented a number of heuristics. Zhang et al. [11] proposed a non-trivial approximation algorithm with worst-case ratio 7/4 and analyzed the heuristics in [10]. They showed that the best heuristic in [10] has a worst-case ratio no greater than 2.

In this paper we discuss the general problem with arbitrary release times and job sizes. As our main result, we present an approximation algorithm with worst-case ratio 2 + ε, where ε > 0 can

be made arbitrarily small. The overall running time of our algorithm is O(n log n) + f(1/ε), where the multiplicative constant hidden in O(n log n) is reasonably small and independent of ε. When all jobs have identical sizes, our algorithm actually becomes an approximation scheme, a result that we feel to be of interest in its own right, since it is more efficient than the approximation scheme introduced in [5].

This paper is organized as follows. In Section 2, we introduce notation and the main idea. In Section 3, we concentrate on the case where all jobs may be split in size. Namely, each original job j = (rj, pj, sj) may be split into two jobs j1 = (rj, pj, sj1) and j2 = (rj, pj, sj2), where sj = sj1 + sj2 and j1 and j2 are independent (i.e., may be assigned to different batches). In Section 4, we present a (2 + ε)-approximation algorithm for the general problem.

2. Preliminaries

An algorithm A is a ρ-approximation algorithm for a minimization problem if it produces a solution which is at most ρ times the optimal one, in time that is polynomial in the input size. We also say that ρ is the worst-case ratio of algorithm A. The worst-case ratio is the usual measure of the quality of an approximation algorithm for a minimization problem: the smaller the ratio, the better the approximation algorithm. A family of algorithms {Aε} is called a polynomial time approximation scheme (PTAS) if, for each ε > 0, the algorithm Aε is a (1 + ε)-approximation algorithm running in time that is polynomial in the input size. Note that ε is a fixed number, and thus the running time may depend on 1/ε in any manner.

We use BPP (Batch Processing Problem) to denote our problem and SBPP to denote the problem which is the same as BPP except that all jobs can be split in size. A job is called a split job if it is split in size. It should be stressed that the term "split job" refers to an original job that is subjected to splitting, not to the two parts obtained by splitting an original job. We call a job available if it has been released but not yet assigned to a batch. We call an available job suitable for a given batch if it can be added to that batch. We call a batch available if all the jobs in it have been released and it has not been scheduled.


The outline of our main idea is as follows: we first get a PTAS for SBPP, and then use it to get a (2 + ε)-approximation algorithm for BPP.

3. A PTAS for problem SBPP

In this section, we present a polynomial time approximation scheme for problem SBPP, in which all jobs can be split in size. We use opt to denote the optimal makespan of problem SBPP. Throughout this section, if a job has been split in size and some part of it has been scheduled, the remaining part of it will be treated as a single job.

We first describe the full-batch-longest-processing-time (FBLPT) rule, which will be used throughout this paper.

FBLPT Rule
Step 1: Sort the jobs in non-increasing order of their processing times.
Step 2: Open a batch with the longest remaining job processing time as its processing time, and then fill it from the head of the job list. If the batch does not have enough room for some job, place part of the job into the batch so that the batch is completely full, and put the remaining part of the job at the head of the remaining job list.
Step 3: Repeat Step 2 until the job list is empty.

The special case of problem SBPP in which all rj = 0 can be solved optimally by the above FBLPT rule: we first use the FBLPT rule for all the jobs and get a series of batches, and then schedule these batches successively in any arbitrary order. The correctness of the algorithm can easily be proved by a job-interchange argument.

For the general version of the problem with release times, we will perform several transformations on the given input to form an approximate problem instance that has a simpler structure. Each transformation potentially increases the objective function value by O(ε) · opt, so we can perform a constant number of them while still staying within a 1 + O(ε) factor of the original optimum. When we describe such a transformation, as in [1], we shall say that it produces 1 + O(ε) loss. To simplify notation, we will assume throughout the paper that 1/ε is integral.
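As an illustration, the FBLPT rule can be sketched in code as follows (a minimal sketch with a job representation of our own choosing; the paper itself gives no implementation). With all rj = 0, running the resulting batches back to back in any order is optimal for SBPP.

```python
from dataclasses import dataclass

@dataclass
class Job:
    r: float  # release time
    p: float  # processing time
    s: float  # size, in (0, 1]

def fblpt(jobs, capacity=1.0):
    """Full-Batch-Longest-Processing-Time rule with job splitting (SBPP).

    Returns a list of batches; each batch is a list of (job, assigned size)
    pairs, and its processing time is the longest p among its members.
    """
    # Step 1: sort the jobs in non-increasing order of processing time.
    remaining = sorted(jobs, key=lambda j: -j.p)
    batches = []
    while remaining:
        # Step 2: open a batch and fill it from the head of the job list.
        batch, room = [], capacity
        while remaining and room > 1e-12:
            j = remaining[0]
            if j.s <= room:            # the job fits entirely
                batch.append((j, j.s))
                room -= j.s
                remaining.pop(0)
            else:                      # split the job so the batch is full
                batch.append((j, room))
                remaining[0] = Job(j.r, j.p, j.s - room)
                room = 0.0
        batches.append(batch)
    return batches
```

Note that each batch splits at most one job, a fact used again in the analysis of Section 4.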


In the remainder of this section, we first simplify the problem by applying a rounding method. We proceed to define short and long jobs and then present a PTAS for the case where all jobs are short. Finally, we get a PTAS for problem SBPP.

3.1. Simplifying the input

We use the FBLPT rule for all the jobs and get a series of batches. Denote by d the total processing time of these batches. Let rmax = max_{1≤j≤n} rj. Then we directly get the following bounds for the optimal makespan of problem SBPP:

max{rmax, d} ≤ opt ≤ rmax + d.  (1)

Let M = ε · max{rmax, d}. We will show that we can restrict attention to the case where there are only a constant number of distinct release times. Consider the following algorithm for the unrestricted version. Round each release time down to the nearest multiple of M. Get a schedule for the rounded problem, and then add M to each job's start time in the output to obtain a feasible schedule for the original problem. As M ≤ ε · opt, we get the following lemma.

Lemma 1. With 1 + ε loss, we can assume that there are at most 1/ε + 1 distinct release times in the original problem.

As a result of Lemma 1, we partition the time interval [0, +∞) into 1/ε + 1 disjoint intervals: we denote by I_i = [(i − 1)M, iM) the ith interval, i = 1, 2, …, 1/ε, and set I_{1/ε+1} = [(1/ε)M, +∞). We assume without loss of generality that the release times take on the 1/ε + 1 values ρ_1, ρ_2, …, ρ_{1/ε+1}, where ρ_i = (i − 1)M, i = 1, 2, …, 1/ε + 1. We use Ji to denote the set of jobs with ρ_i as their common release time.

We now discuss short and long jobs. We say that a job (or a batch) is short if its processing time is less than εM, and long otherwise. Short jobs are easy to deal with, as we will see in Section 3.2. On the other hand, we can assume that there are only a constant number of distinct processing times of long jobs, as the following lemma states.

Lemma 2. With 1 + 2ε loss, the number of distinct processing times of long jobs, k, can be bounded above by 1/ε³ − 1/ε + 1.
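The rounding behind Lemma 1 is simple enough to state in code (a sketch; we represent a job as a plain (r, p, s) triple for brevity):

```python
import math

def round_down_releases(jobs, M):
    """Round each release time down to the nearest multiple of M.

    Shifting every start time of a schedule for the rounded instance
    right by M restores feasibility for the original instance, at an
    extra cost of M <= eps * opt (Lemma 1).
    """
    return [(M * math.floor(r / M), p, s) for (r, p, s) in jobs]
```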


Proof. Recall that M = ε · max{rmax, d}. By inequality (1), we see that for each long job j, εM ≤ pj ≤ (1/ε)M. We round every long job's processing time down to the nearest integral multiple of ε²M. This creates a rounded instance in which there are at most [(1/ε)M]/(ε²M) − [(εM)/(ε²M) − 1] = 1/ε³ − 1/ε + 1 distinct processing times of long jobs. Hence we get k ≤ 1/ε³ − 1/ε + 1.

Consider the optimal value of the rounded instance. Clearly, this value cannot be greater than opt, the optimal makespan of problem SBPP. As there are at most 2/ε² long batches in any optimal schedule of the rounded instance, replacing the rounded values with the original ones may increase the solution value by at most (2/ε²)(ε²M) = 2M ≤ 2ε · opt.
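The rounding in this proof admits a similarly short sketch (jobs again as (r, p, s) triples; eps and M as in the text):

```python
import math

def round_long_processing_times(jobs, eps, M):
    """Round every long job's processing time (at least eps*M) down to an
    integral multiple of eps^2 * M; short jobs are left untouched. The
    rounded long jobs take at most 1/eps^3 - 1/eps + 1 distinct values.
    """
    grid = eps * eps * M
    return [(r, grid * math.floor(p / grid) if p >= eps * M else p, s)
            for (r, p, s) in jobs]
```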

For simplicity, we assume without loss of generality that the original problem SBPP has the properties described in Lemmas 1 and 2.

3.2. Short jobs

In this subsection, we assume that all jobs under consideration are short. We will use the idea of [4] to get a PTAS settling problem SBPP in this case. To do so, we use the FBLPT rule for all the jobs in Ji (i = 1, 2, …, 1/ε + 1) and get a series of batches B_{i,1}, B_{i,2}, …, B_{i,ki}, such that B_{i,1}, B_{i,2}, …, B_{i,ki−1} are full batches and

q(B_{i,j}) ≥ p(B_{i,j+1}),  (2)

where p(B_{i,j}) and q(B_{i,j}) denote the processing times of the longest and shortest jobs in B_{i,j}, respectively. Inequality (2) implies the following bound:

Σ_{j=1}^{ki−1} (p(B_{i,j}) − q(B_{i,j})) + p(B_{i,ki}) < p(B_{i,1}) < εM.  (3)

We define a new set of batches B′ = {B′_{i,j}} by letting all processing times in B′_{i,j} be equal to q(B_{i,j}) for i = 1, 2, …, 1/ε + 1, j = 1, 2, …, ki − 1, and all processing times in B′_{i,ki} be equal to zero. Each original job is now modified into one new job if it has not been split, or two new jobs if it has been split. (Any original short job will be split at most once.) We call the new jobs modified jobs. Then we define two accessory problems:

SBPP1: sequence the batches in B′ to minimize makespan.
SBPP2: schedule the modified jobs to minimize makespan.

Both problems deal with the modified jobs. But while SBPP1 requires keeping the grouping of the jobs into batches as dictated by the FBLPT rule on the original jobs, SBPP2 allows re-opening the batches and regrouping the jobs. Hence, SBPP2 might obtain a better makespan. However, Lemma 3 below shows that it does not. We use opt1 to denote the optimal value of SBPP1, and opt2 the optimal value of SBPP2. Then we observe the following fact.

Lemma 3. opt1 = opt2 ≤ opt.

Proof. Any optimal solution to SBPP1 is a feasible solution to SBPP2; therefore opt1 ≥ opt2. On the other hand, any optimal solution to SBPP2 can be transformed into a feasible solution to SBPP1 without increasing the objective value, which implies that opt1 ≤ opt2. To show this, let us fix an optimal solution σ to SBPP2. Suppose that B is the batch which starts earliest among the batches in σ with the longest processing time, and that B̄ is the batch which becomes available earliest among the batches in B′ with the longest processing time. We exchange the modified jobs which are in B but not in B̄ with the modified jobs which are in B̄ but not in B, without increasing the completion time of any batch in σ. Consequently, B̄ appears in the modified σ. Repeat this procedure until all the batches in B′, except those with processing time zero, appear in the modified σ. The modified jobs with processing time zero are fully negligible and thus can be batched in such a way that the batches in B′ with processing time zero appear in the modified σ. We eventually obtain a feasible solution to SBPP1 whose makespan is not greater than that of σ. It follows that opt1 ≤ opt2, and therefore opt1 = opt2. It is obvious that opt2 ≤ opt. Hence opt1 = opt2 ≤ opt.

Note that SBPP1 can be solved easily: any ordering of the batches that respects the release times and the rule "never keep an available batch waiting if the machine is idle" will yield an optimal result.
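The greedy rule that solves SBPP1 can be sketched as follows (illustrative only; here a batch is a (release time, processing time) pair, a representation of our own):

```python
def sequence_batches(batches):
    """Never keep an available batch waiting while the machine is idle.

    Any such ordering is optimal for SBPP1; here we process the batches
    in an arbitrary (stack) order among the available ones. Returns the
    makespan of the resulting schedule.
    """
    pending = sorted(batches)            # sorted by release time
    t, i, n = 0.0, 0, len(pending)
    available = []
    while i < n or available:
        if not available and t < pending[i][0]:
            t = pending[i][0]            # machine idles until next release
        while i < n and pending[i][0] <= t:
            available.append(pending[i][1])
            i += 1
        t += available.pop()             # run any available batch
    return t
```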


Algorithm ScheduleShort
Step 1: Use the FBLPT rule for all the jobs in J1, J2, …, J_{1/ε+1}, respectively.
Step 2: Change the job processing times as described above and get a new set of batches B′.
Step 3: Schedule the available batches in B′ on the batching machine successively in any arbitrary order and return a solution S1.
Step 4: Output a solution S to problem SBPP which is obtained from S1 by replacing each B′_{i,j} with B_{i,j}.

Theorem 4. If all the jobs are short, then Algorithm ScheduleShort is a PTAS for problem SBPP with at most 1 + ε + ε² loss.

Proof. Step 3 returns an optimal solution to problem SBPP1. By inequality (3), we can stretch each interval I_i (i = 1, 2, …, 1/ε + 1) to make an extra space of εM that covers the time (p(B_{i,1}) − p(B′_{i,1})) + ··· + (p(B_{i,ki−1}) − p(B′_{i,ki−1})) + (p(B_{i,ki}) − 0). Since there are 1/ε + 1 intervals, Step 4 yields a feasible schedule for SBPP whose length is at most opt1 + (ε + ε²)opt. Combining this with Lemma 3 completes the proof.

It is worth pointing out that we need not actually change the job processing times during the run of Algorithm ScheduleShort. In fact, once the batch structure is determined, we can immediately schedule the available batches successively in any arbitrary order and get a (1 + ε + ε²)-approximation schedule.

3.3. General case

We are now ready to establish a PTAS for the general SBPP problem. By an interchange argument, we get the following lemma, which plays an important role in the design and analysis of our algorithm.

Lemma 5. There exists an optimal schedule with the following properties: (1) the batches in the same interval (started but not necessarily finished there) are arranged in non-increasing order of processing times; (2) each batch contains as many as possible of the longest suitable jobs; and


(3) any job can be split in size whenever necessary; therefore all the batches in the same interval are full batches, except possibly the shortest one.

Lemma 6. With 1 + ε + ε² loss, we can assume that no short job is included in a long batch.

Proof. By Lemma 5, there exists an optimal schedule in which only the last long batch in each interval may contain short jobs. Therefore, we can stretch those intervals to make extra spaces of length εM for the short jobs that are included in long batches. Since there are 1/ε + 1 intervals, we may increase the solution value by at most (1 + ε)M, which is no more than (ε + ε²)opt.

Combining Lemma 6 and Theorem 4, we can determine the batch structure of the short jobs at the beginning of the algorithm as follows: use the FBLPT rule for all the short jobs in Ji (i = 1, 2, …, 1/ε + 1) and get a series of short batches. We observe that although we must both assign the jobs to batches and sequence the batches to get a schedule, it is easy to determine the sequence of the batches, since it is always optimal to simply schedule the available batches successively in any arbitrary order. Therefore we get the main idea of our algorithm: whenever a new interval begins, we first schedule all the available short batches. Only when there are no available short batches do we begin to schedule the long batches. Since an FBLPT schedule is optimal when all the jobs are released at the same time, whenever the completion time of the current batch is greater than or equal to (1/ε)M we schedule the remaining long jobs according to the FBLPT rule.

We will generate a series of states in our algorithm. Each state consists of a current batch B, a time t at which the current batch is completed, and a set R of all remaining available jobs. We denote such a state by (B, t, R). We also say that t is the completion time of state (B, t, R). A state (B, t, R) will be referred to as type I if it crosses the boundary of the time interval in which the batch B started, and type II otherwise. If the state is of type I, then at time t we first process all short jobs that may have been released when the new interval started, even though there may still be unprocessed long jobs from the previous intervals. If the state is of type II, that


means that all short jobs have been processed, and we then continue with processing long batches or wait for the beginning of the next interval.

Our algorithm is composed of two phases. The first phase involves "growing" a tree of states, T. Initially, T consists of the single state (∅, 0, J1), which is of type I. In each outer loop of the first phase, we select a leaf of T which has completion time less than (1/ε)M, and then generate one or more new states that are added to the tree as the sons of the state that "created" them. The loop goes on until all leaves in the tree have completion time no less than (1/ε)M. Then we proceed to the second phase. In the second phase, we use each leaf of T to generate a feasible schedule and then take the one with the minimum makespan from among these schedules.

At the beginning of our algorithm, we use the FBLPT rule for all the short jobs in Ji and get a series of short batches, and we group together the long jobs in Ji with the same processing time (i = 1, 2, …, 1/ε + 1). Thereby we can compute each state in O(k) time. Recall that k denotes the number of distinct processing times of long jobs and can be bounded above by 1/ε³ − 1/ε + 1 (see Lemma 2).

Suppose that we have selected a leaf (B, t, R) of T in the process of growing T. We denote by Q1 the set of the short batches consisting of the short jobs in R, and by Q2 the set of the long jobs in R. There are two cases.

Case 1: The selected state (B, t, R) is of type I. We use Q1 to generate only one new state, i.e., a new leaf of T. To do so, we arrange the short batches in Q1 one by one such that the start time of the first one is exactly the current time t, and the start time of each of the others is exactly the completion time of its predecessor. Thereby we get a new state. (The current batch of the new state is in fact a sequence of short batches.) If the last short batch of the new state ends after the next interval has started, we again label the new state type I; otherwise, type II. We then add the new state to T as the son of state (B, t, R). Clearly, if Q1 = ∅, we need only relabel state (B, t, R) as type II.

Case 2: The selected state (B, t, R) is of type II. It is easy to see that Q1 is empty in this case. Suppose that the jobs in Q2 take on l distinct processing times P1, P2, …, Pl. We generate l new states as follows: start a new batch at time t with processing

time Pi (i = 1, 2, …, l), which contains as many as possible of the longest suitable jobs (by Lemma 5). For each of these states, if the new batch ends after the next interval has started, we label the new state type I; otherwise, type II. We also generate a new state (∅, t′, R) which is of type I, where t′ is the left endpoint of the next interval. Since l ≤ k ≤ 1/ε³ − 1/ε + 1 (by Lemma 2), we get at most k + 1 new states. We then add the new states to T as the sons of state (B, t, R).

We are now ready to describe our algorithm.

Algorithm ScheduleSplit
Step 1: Use the FBLPT rule for all the short jobs in Ji and get a series of short batches. Group together the long jobs in Ji with the same processing time (i = 1, 2, …, 1/ε + 1).
Step 2: Set T = {S}, where S = (∅, 0, J1) is of type I.
Step 3: Select a leaf of T which has completion time less than (1/ε)M. If it is of type I, proceed as described in Case 1; otherwise, as described in Case 2. Repeat this step until all leaves in the tree have completion time no less than (1/ε)M.
Step 4: Use each leaf (B, t, R) of T to generate a feasible schedule: from time t onwards, first process the remaining short batches, and then process the remaining long jobs according to the FBLPT rule.
Step 5: From among the schedules generated in Step 4, return the one with the minimum makespan.

Theorem 7. Algorithm ScheduleSplit is a PTAS for the general SBPP problem.

Proof. By Lemmas 1 and 2, manipulating the release times and processing times may increase the objective value by 3ε · opt. By Lemma 6 and Theorem 4, batching the short jobs via the FBLPT rule may increase the objective value by (2ε + 2ε²) · opt. Therefore Step 1 ends with at most 1 + 5ε + 2ε² loss. We then enumerate a set of feasible schedules, among which there exists a (1 + 5ε + 2ε²)-approximation schedule (by the arguments above). By selecting the schedule with the minimum makespan, Step 5 returns a (1 + 5ε + 2ε²)-approximation schedule.
We now discuss the time complexity of Algorithm ScheduleSplit. Denote by N_i the number of states whose current batches start in interval I_i (i = 1, 2, …, 1/ε). Note that there can be at most


1/ε long batches in each of the first 1/ε intervals. Moreover, during the process of growing T, a state of type I creates only one new state, while a state of type II creates at most k + 1 new states. Thus we get:

N_1 ≤ 1 + [1 + (k + 1) + (k + 1)² + ··· + (k + 1)^{1/ε}],
N_i ≤ (k + 1)^{(i−1)/ε} + (k + 1)^{(i−1)/ε+1} + ··· + (k + 1)^{i/ε}, i = 2, 3, …, 1/ε.

Therefore the total number of generated states is at most:

N_1 + N_2 + ··· + N_{1/ε}
≤ 1 + [1 + (k + 1) + (k + 1)² + ··· + (k + 1)^{1/ε}]
+ [(k + 1)^{1/ε} + (k + 1)^{1/ε+1} + ··· + (k + 1)^{2/ε}]
+ ··· + [(k + 1)^{(1−ε)/ε²} + (k + 1)^{(1−ε)/ε²+1} + ··· + (k + 1)^{1/ε²}]
< 2[1 + (k + 1) + (k + 1)² + ··· + (k + 1)^{1/ε²}]
= O((k + 1)^{1/ε²}).  (4)

Each state can be computed in O(k) time. Hence, Step 3 can be executed in O((k + 1)^{1/ε²+1}) time. The final tree created in Step 3 has O((k + 1)^{1/ε²}) leaves. In Step 4, each leaf of the final tree is used to generate a feasible schedule in O(1/ε²) time, because the remaining long jobs form at most 1/ε² long batches (by the definition of long jobs). Step 1 can be executed in O(n log n) time. Hence, the time complexity of the algorithm can be bounded above by O(n log n + (k + 1)^{1/ε²+1}). (We assume without loss of generality that k ≥ 1/ε².)

4. A (2 + ε)-approximation algorithm for problem BPP

Now we construct an approximation algorithm for BPP.

Algorithm ScheduleWhole
Step 1: Get a (1 + ε/2)-approximation schedule π1 for SBPP by Algorithm ScheduleSplit.
Step 2: Move all split jobs out of π1 and open a new batch for each of them.
Step 3: Process the new batches one by one at the end of π1*, where π1* is the schedule obtained from π1 by removing all split jobs from it.
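The batch manipulation in Steps 2 and 3 can be sketched as follows (a sketch with a representation of our own: a schedule is an ordered list of batches, each batch a list of (job id, part size, processing time) triples, and a job id appearing in more than one batch marks a split job):

```python
from collections import Counter, defaultdict

def schedule_whole(split_schedule):
    """Turn a schedule for SBPP into a feasible schedule for BPP:
    remove every split job and append one new singleton batch per
    split job at the end (Steps 2 and 3 of ScheduleWhole)."""
    counts = Counter(j for batch in split_schedule for (j, _, _) in batch)
    split_ids = {j for j, c in counts.items() if c > 1}
    proc = {}
    size = defaultdict(float)
    for batch in split_schedule:
        for (j, s, p) in batch:
            proc[j] = p
            size[j] += s
    # Keep the non-split jobs in their original batches ...
    whole = [[t for t in batch if t[0] not in split_ids]
             for batch in split_schedule]
    whole = [b for b in whole if b]
    # ... and give each split job its own batch at the end.
    whole += [[(j, size[j], proc[j])] for j in sorted(split_ids)]
    return whole
```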


Theorem 8. Algorithm ScheduleWhole is a (2 + ε)-approximation algorithm for problem BPP, where ε > 0 can be made arbitrarily small.

Proof. Denote by π2 the schedule given by Algorithm ScheduleWhole. Let Cmax and C*max be the makespans of π2 and of an optimal schedule for BPP, respectively. Recall that opt denotes the optimal makespan of problem SBPP. Obviously opt ≤ C*max, and π2 is a feasible schedule for BPP. Note that π2 consists of two parts: π1* and the new batches opened for the split jobs. The completion time C1 of the former part is no more than (1 + ε/2)opt. Let us consider the total processing time C2 of the latter part. We say that a batch splits a job if it contains some part, but not the last part, of the job. In Algorithm ScheduleSplit, each batch splits at most one job, and there are at most n batches in π1. Since the processing time of a split job cannot be greater than that of any batch which splits the job, it follows that C2 ≤ C1. Thus we get Cmax = C1 + C2 ≤ (2 + ε)opt ≤ (2 + ε)C*max.

In Algorithm ScheduleWhole, Step 1 can be executed in O(n log n) + f(1/ε) time, while Steps 2 and 3 can be executed in O(n) time; therefore the algorithm can be implemented in O(n log n) + f(1/ε) time.

Note that in the algorithm the treatment of the split jobs is very simple (each one in its own batch). Is it possible to improve this and get a better worst-case ratio? To explain why more involved techniques for scheduling the split jobs do not seem to yield a better worst-case ratio, we present the following example. There are 2/ε − 1 jobs, all with processing time 1, where r1 = r_{1/ε+1} = 0, r2 = r_{1/ε+2} = 1, …, r_{1/ε−1} = r_{2/ε−1} = 1/ε − 2, r_{1/ε} = 1/ε − 1; s1 = s2 = ··· = s_{1/ε} = ε and s_{1/ε+1} = s_{1/ε+2} = ··· = s_{2/ε−1} = 1. An optimal schedule for BPP, with makespan C*max = 1/ε, is as follows: from time 0 onwards, first process the jobs with size 1 one by one in increasing order of their release times, and then process all the jobs with size ε as one batch. On the other hand, an optimal schedule π1 for SBPP, as constructed by Algorithm ScheduleSplit, can be obtained as follows: sort all the jobs in increasing order of their release times (ties broken in favor of the smallest job size), then batch the sorted jobs via the FBLPT rule, and finally process the available batches successively. A schedule π2 for BPP, with makespan Cmax = 2/ε − 1, obtained from π1


by Algorithm ScheduleWhole, is as follows: first process the jobs with size ε one by one in increasing order of their release times, and then process the jobs with size 1 one by one in any arbitrary order. Thus we have Cmax/C*max = 2 − ε → 2 (ε → 0). Since each split job in π1 has size 1, they have to be processed in this trivial way anyway, i.e., each one in its own batch.

References

[1] F. Afrati, E. Bampis, C. Chekuri, D. Karger, C. Kenyon, S. Khanna, I. Milis, M. Queyranne, M. Skutella, C. Stein, M. Sviridenko, Approximation schemes for minimizing average weighted completion time with release dates, in: Proceedings of the 40th Annual IEEE Symposium on Foundations of Computer Science, New York, October 1999, pp. 32–43.
[2] P. Brucker, A. Gladky, H. Hoogeveen, M.Y. Kovalyov, C.N. Potts, T. Tautenhahn, S.L. van de Velde, Scheduling a batching machine, J. Scheduling 1 (1998) 31–54.
[3] E.G. Coffman, M.R. Garey, D.S. Johnson, Approximation algorithms for bin packing: a survey, in: Approximation Algorithms for NP-hard Problems, PWS, Boston, 1996, pp. 46–93.

[4] X. Deng, H.D. Feng, G.J. Li, A PTAS for semiconductor burn-in scheduling, J. Combin. Optim., to appear.
[5] X. Deng, C.K. Poon, Y. Zhang, Approximation algorithms in batch processing, in: The Eighth Annual International Symposium on Algorithms and Computation, Lecture Notes in Computer Science, Vol. 1741, Springer, Chennai, India, December 1999, pp. 153–162.
[6] M.R. Garey, D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, San Francisco, 1979.
[7] C.Y. Lee, R. Uzsoy, Minimizing makespan on a single batch processing machine with dynamic job arrivals, Technical Report, Department of Industrial and Systems Engineering, University of Florida, January 1996.
[8] C.Y. Lee, R. Uzsoy, L.A. Martin-Vega, Efficient algorithms for scheduling semiconductor burn-in operations, Oper. Res. 40 (1992) 764–775.
[9] J.K. Lenstra, D.B. Shmoys, Computing near-optimal schedules, in: Scheduling Theory and its Applications, Wiley, New York, 1995.
[10] R. Uzsoy, A single batch processing machine with non-identical job sizes, Int. J. Prod. Res. 32 (1994) 1615–1635.
[11] G. Zhang, X. Cai, C.Y. Lee, C.K. Wong, Minimizing makespan on a single batch processing machine with non-identical job sizes, Naval Res. Logist. 48 (2001) 226–240.