Total Completion Time Minimization on Multiple Machines Subject to Machine Availability and Makespan Constraints
Yumei Huo, Hairong Zhao
To appear in: European Journal of Operational Research
DOI: 10.1016/j.ejor.2014.12.012 (Reference: EOR 12671, PII: S0377-2217(14)01013-3)
Received 13 September 2013; revised 4 December 2014; accepted 9 December 2014
Total Completion Time Minimization on Multiple Machines Subject to Machine Availability and Makespan Constraints

Yumei Huo
Department of Computer Science, College of Staten Island, City University of New York, Staten Island, NY 10314, U.S.A.

Hairong Zhao
Department of Mathematics, Computer Science & Statistics, Purdue University Calumet, Hammond, IN 46323, U.S.A.
Abstract
This paper studies preemptive bi-criteria scheduling on m parallel machines with machine unavailable intervals. The goal is to minimize the total completion time subject to the constraint that the makespan is at most a given constant T. We study the unavailability model in which the number of available machines cannot decrease by two within any period of length pmax, where pmax is the maximum processing time among all jobs. We show that this problem can be solved optimally in polynomial time.
Keywords: scheduling, parallel machine, bi-criteria, limited machine availability, polynomial time algorithm.
1. Introduction
Multi-criteria scheduling and scheduling subject to machine availability constraints have been two very active areas of manufacturing and operations management research over the last couple of decades. Many applications of multi-criteria scheduling have been addressed in the books ([20], [19]), surveys ([3], [1], [6]) and the references therein. On the other hand, applications of scheduling subject to machine availability have been addressed in many surveys ([2], [9], [14], [18], [17]) and the references therein.
The conference version of this paper, "Bi-criteria scheduling on multiple machines subject to machine availability constraints", appeared in the proceedings of the Seventh International Frontiers of Algorithmics Workshop and the Ninth International Conference on Algorithmic Aspects of Information and Management (FAW-AAIM 2013), Lecture Notes in Computer Science, Vol. 7924, pp. 325-338, 2013.
Corresponding author: Yumei Huo. Email addresses: [email protected] (Yumei Huo), [email protected] (Hairong Zhao).
Work supported in part by the PSC-CUNY research fund.
In real life, the two models may co-exist: while manufacturers aim at optimizing several criteria simultaneously, resources may not always be available, due to breakdowns, preventive maintenance, or the processing of unfinished jobs from a previous planning horizon. However, most research in these two areas has been conducted independently. The only work that addresses bi-criteria scheduling and scheduling with limited machine availability simultaneously is [7], where the authors consider preemptively scheduling jobs on two machines to minimize the total completion time and the makespan at the same time, with one as the primary criterion and the other as the secondary criterion.

In this paper, we continue our research on this model and solve a more general tractable problem. We consider m (m ≥ 2) parallel machines with an unavailability constraint. We study the unavailability model in which the number of available machines cannot decrease by two within any period of length pmax, where pmax is the maximum processing time among all jobs. In this model, the number of available machines can decrease or increase at any time t, but there is a restriction on how it decreases: the number of available machines can go down by at most one machine unit at time t, provided that it has not already gone down by one machine unit during the interval [t − pmax, t]. This machine unavailability model is, so far, the most general model for which the total completion time minimization problem is known to be solvable. It includes two special models that have been studied in the literature: (1) the number of available machines can only increase; in this case, each machine has at most one unavailable period, which starts at time zero, and we say that each machine has a release time; (2) each machine may have multiple unavailable periods, but at most one machine is unavailable at any time. In practice, the first model may arise from unfinished jobs of the previous scheduling horizon, and the second model arises in many scenarios, since preventive maintenance or periodic repair is usually done on a rotating basis rather than by maintaining or repairing several machines simultaneously.

We focus on preemptive schedules: a job can be preempted by another job or interrupted by a machine unavailable interval and resumed later on any available machine. Our goal is to minimize the total completion time subject to the condition that the makespan is at most a constant T. The makespan and the total completion time are two objectives of considerable interest: minimizing the makespan ensures a good balance of the load among the machines, and minimizing the total completion time reduces inventory holding costs. It is quite common that manufacturers wish to minimize both objectives. The motivations for bi-criteria scheduling concerned with the makespan and the total completion time have been addressed by Gupta et al. ([5]), by Leung and Young in [11], and in the survey papers about multi-criteria scheduling mentioned above.

Formally, there is a set J = {J1, J2, ..., Jn} of n jobs that need to be scheduled on m machines. Each job Jj has a processing time pj. Let pmax = max(p1, p2, ..., pn). Without loss of generality, the processing times are assumed to be integers. There are m machines in total, and each may have unavailable intervals.
Thus the number of available machines changes over time, and we use m(t) to represent the number of available machines at time t. We assume that for all times t ≥ 0 and all ∆ with 0 ≤ ∆ < pmax, we have m(t + ∆) ≥ m(t) − 1. Let S be a feasible schedule of these n jobs on the m machines; the completion time of job Jj in schedule S is denoted by Cj(S). If S is clear from the context, we will use Cj for short. The makespan of S is Cmax(S) = max{Cj(S)}, and the total completion time of S is ΣCj(S). We will use C*max to denote the minimum makespan among all feasible schedules. The goal is to schedule the set of jobs on the m parallel machines so as to minimize ΣCj subject to the machine unavailability constraint and the condition that Cmax ≤ T, where T ≥ C*max. To denote our problem, we extend the 3-field notation introduced by Graham et al. [4]: Pm(t) | m(t + ∆) ≥ m(t) − 1, r-a, prmt | ΣCj / Cmax ≤ T, where 0 ≤ ∆ < pmax.
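For readers who want a concrete check of the availability condition, the following minimal sketch (not part of the paper) tests m(t + ∆) ≥ m(t) − 1 directly on a given instance. It assumes that unavailable periods have integer endpoints and are given as half-open intervals [s, e) per machine, so that checking integer time points suffices under the integrality assumption above; the function names are ours.

    def num_available(t, unavail):
        # Number of machines available at integer time t;
        # unavail[i] lists the half-open unavailable intervals (s, e) of machine i.
        busy = sum(any(s <= t < e for (s, e) in ivs) for ivs in unavail)
        return len(unavail) - busy

    def satisfies_model(unavail, p_max, horizon):
        # Check m(t + d) >= m(t) - 1 for every integer t in [0, horizon) and 0 <= d < p_max.
        for t in range(horizon):
            m_t = num_available(t, unavail)
            for d in range(p_max):
                if t + d < horizon and num_available(t + d, unavail) < m_t - 1:
                    return False
        return True

    # Four machines, one of them always available; every drop of m(t) is by one unit.
    unavail = [[(0, 3)], [], [(4, 7)], [(8, 12)]]
    print(satisfies_model(unavail, p_max=10, horizon=12))   # True

This quadratic check is only meant for intuition; it is not used by the algorithm of Section 3.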
Literature Review. So far, [7] is the only research paper that considers multi-criteria scheduling with a limited machine availability constraint. In that paper, Huo and Zhao give optimal polynomial time algorithms for three problems: (1) P1,1 | r-a, prmt | ΣCj / Cmax, where P1,1 means that one machine is always available and the other may have unavailable intervals; (2) P1,1 | r-a, prmt | Cmax / ΣCj; and (3) P2 | r-a, prmt | Cmax / ΣCj, in which both machines are unavailable during an interval [t, t + x) and at most one machine is unavailable at any other time. In the following, we review the relevant results in the area of bi-criteria scheduling and in the area of scheduling with limited machine availability, respectively, restricting attention to results concerning the makespan and the total completion time.

Research on multi-criteria scheduling problems on parallel machines has not been dealt with adequately in the literature. Gupta et al. ([5]) propose an exponential algorithm to solve optimally the bi-criteria problem of minimizing the weighted sum of the makespan and the mean flow time on two identical parallel machines. When preemption is allowed, P | prmt | Cmax / ΣCj and P | prmt | ΣCj / Cmax are polynomially solved by Leung and Young in [11] and by Leung and Pinedo in [12], respectively.

With limited machine availability and multiple machines, if preemption is not allowed, the makespan minimization and total completion time minimization problems are both NP-hard ([10], [18], [8]). When preemption is allowed and the machines have limited availability, the makespan minimization problem is solvable in polynomial time, as shown by Liu and Sanlaville ([15]); when each machine has a release time, that is, when m(t) is non-decreasing, it is easy to show that the SPT (shortest processing time first) rule minimizes ΣCj, and one can modify McNaughton's rule ([16]) to minimize the makespan. Finally, for our unavailability model, i.e. m(t + ∆) ≥ m(t) − 1 for all t ≥ 0 and ∆ < pmax, Leung and Pinedo ([13]) showed that the PSPT rule (preemptive SPT: at any time when a machine becomes available for processing jobs, the job with the minimum remaining processing time gets scheduled) gives an optimal schedule for the total completion time.
New Contributions. In this paper, we study the problem Pm(t) | m(t + ∆) ≥ m(t) − 1, r-a, prmt | ΣCj / Cmax ≤ T for ∆ < pmax. We show that this problem can be solved in polynomial time by developing an optimal algorithm.

Organization. Our paper is organized as follows. In Section 2, we give some preliminary results. In Section 3, we present the main results: we develop a polynomial time algorithm and prove that it solves our problem optimally. In Section 4, we draw our conclusions.
2. Preliminaries
As mentioned before, the SPT rule gives an optimal schedule for the total completion time when each machine has only a single unavailable interval starting at time 0. In general, however, SPT is not optimal when machines have arbitrary unavailable intervals. Still, it has been shown by Leung and Pinedo in [13] that the PSPT rule gives an optimal schedule for the total completion time, and for some other objectives, under our unavailability model; see Lemma 1 below.
Lemma 1. (Theorem 3 in [13]) For m machines, m ≥ 2, if m(t + ∆) ≥ m(t) − 1 for all t ≥ 0 and ∆ < pmax, then PSPT minimizes Σ_{j=1}^{n} f(Cj), provided that f is increasing concave.
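As a rough illustration of the PSPT rule in Lemma 1, here is a minimal unit-time simulation sketch (ours, not from [13]). It assumes integer processing times and that m(t) changes only at integer times, so at every unit step the unfinished jobs with the least remaining work occupy the available machines.

    def pspt(proc_times, m_of_t, horizon):
        # Unit-time realization of PSPT: during [t, t+1) the m(t) unfinished jobs
        # with the smallest remaining work are processed.
        remaining = list(proc_times)
        completion = [None] * len(proc_times)
        for t in range(horizon):
            unfinished = [j for j, r in enumerate(remaining) if r > 0]
            running = sorted(unfinished, key=lambda j: remaining[j])[:m_of_t(t)]
            for j in running:
                remaining[j] -= 1
                if remaining[j] == 0:
                    completion[j] = t + 1
        return completion

    # Two machines that are always available; jobs of length 2, 3 and 4.
    print(pspt([2, 3, 4], m_of_t=lambda t: 2, horizon=10))   # [2, 3, 6]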
When the machines have arbitrary unavailable intervals, for the bi-criteria objective ΣCj / Cmax ≤ T, following the same argument as Lemma 1 in [7] one can show that the jobs complete in SPT order; see the following lemma.

Lemma 2. For m (m ≥ 2) machines with arbitrary unavailability constraints, there is a preemptive optimal schedule for the objective ΣCj / Cmax ≤ T such that if pi < pj, then Ci ≤ Cj.
Proof. Suppose that in an optimal schedule for ΣCj / Cmax ≤ T there are two jobs Ji and Jj such that pi < pj and Ci > Cj. We can divide all the intervals in which Ji is scheduled into two parts, such that the intervals in the first part are all before Cj and the intervals in the second part are all after Cj. We exchange the second part of Ji with a part of Jj of the same length, making sure that this part of Jj does not overlap with the first part of Ji; we can always do so since pj ≥ pi. In this way, the completion time of job Ji is decreased by at least Ci − Cj, the completion time of job Jj is increased by Ci − Cj, and all other jobs have the same completion times as before, so the total completion time does not increase. Repeating this exchange yields an optimal schedule in which the jobs complete in SPT order.

Throughout this paper, we assume that jobs J1, J2, ..., Jn are indexed in non-decreasing order of their processing times, i.e., p1 ≤ p2 ≤ ... ≤ pn.

3. Pm(t) | m(t + ∆) ≥ m(t) − 1, r-a, prmt | ΣCj / Cmax ≤ T
In this section, we develop an optimal algorithm for our problem. Here we consider C*max ≤ T ≤ C′max, where C′max is the makespan of the schedule produced by the PSPT rule and C*max is the minimum makespan of the problem Pm(t) | m(t + ∆) ≥ m(t) − 1, r-a, prmt | Cmax, which can be obtained by the algorithm of Liu and Sanlaville ([15]). Apparently, if T < C*max, no feasible schedule exists, and if T > C′max, we can simply apply the PSPT rule to the problem.

The basic idea of our algorithm is to iteratively schedule the jobs, in SPT order, as early as possible subject to the condition that all the remaining jobs can finish by T. We can show that, in order to check whether the remaining jobs can finish by T, it is sufficient to check whether the last m − 1 jobs (or all the remaining jobs, if fewer than m − 1 jobs are left) can finish by T.

Let n′ be the number of jobs we are going to schedule as early as possible subject to the condition that the last m − 1 jobs can finish by T. Initially, n′ = n, and n′ may be updated during the procedure.
For each job Jj (1 ≤ j ≤ n′), when we add it to S, we try to schedule it on the machine with the earliest idle time. For convenience, we use fi to denote the completion time of the last job scheduled on machine Mi at the beginning of each iteration of our algorithm, and we reindex the machines so that f1 ≤ f2 ≤ ... ≤ fm. So job Jj will be scheduled on machine M1. We also assume that, at the beginning of each iteration, the unavailable intervals are rearranged to the machines with indices as large as possible; thus, at any time instant after fi+1, if machine Mi+1 is available, then Mi must also be available. For each machine Mi, we use ai to denote the total length of the idle intervals between fi and T on Mi. We use Li (1 ≤ i ≤ m) and |Li| to represent the set of unavailable time intervals on machine Mi during [fi, T] and the total length of these intervals, respectively. Apparently, if u ≤ v, the set of unavailable time intervals after time fv in Lu must be a subset of Lv, and Lv \ Lu is the set of time intervals after fv during which machine Mu is available but Mv is not. Note that the above assumption is just for ease of algorithm description and analysis: if job Jj is scheduled by our algorithm at a certain time on a machine that is actually unavailable in S, we can always find a machine on which to schedule the job at the same time without rearranging the real unavailable intervals, since preemption is allowed.

After job Jj is scheduled, we need to make sure that the last m − 1 jobs (or all the remaining jobs if n − j < m − 1) can finish by T. We first check whether Jn can complete on M2 by T, then whether Jn−1 can complete on M3 by T, and so on, in this order. During this process, we decide whether a job Jk, n − m + 2 ≤ k ≤ n, needs to preempt Jj, and if so, at what time and by how much.

Specifically, for each job Jk, n − m + 2 ≤ k ≤ n (or j + 1 ≤ k ≤ n if n − j < m − 1), we first check whether pk ≤ ai, where i = n − k + 2. If so, machine Mi has enough idle time before T to schedule Jk, and we say that this machine has a surplus σi = ai − pk; otherwise, only a part of the job with length ai can be scheduled on machine Mi, and we say that the job has a deficit δk = pk − ai. When a job Jk has a deficit, we check whether its deficit can be distributed to the machines with surplus. We check the machines in the order Mi−1, Mi−2, ..., M2. If a machine Ml (i − 1 ≥ l ≥ 2) has surplus σl and the surplus is at least the deficit, i.e. σl ≥ δk, we update σl = σl − δk, δk = 0, and we continue to check the next job Jk−1 on machine Mi+1; otherwise, the surplus is less than the deficit, so we update δk = δk − σl and σl = 0 and continue to check the next machine with surplus. If, after we have considered all the machines Ml (i − 1 ≥ l ≥ 2), we still have δk > 0, then this is the amount of job Jk that has to be scheduled on M1. We try to schedule this part of Jk on M1 as late as possible: we first schedule Jk backward, starting from T, in the idle intervals of M1 where Jk is not scheduled on Mi. If there is still a part of Jk not scheduled, we schedule the remaining part of Jk in those intervals where Jj is scheduled but Jk is not, reschedule the replaced part of job Jj to the idle time intervals on M1 as early as possible, and set n′ = k − 1. We repeat this procedure until all of the last m − 1 jobs (or all the remaining jobs if n − j < m − 1) have been checked.
We then fix the schedule of Jj and of the jobs scheduled before Jj on machine M1, update the processing times of the last m − 1 jobs by excluding the parts scheduled before the completion time of job Jj on M1, and remove the jobs that are scheduled after Jj on M1. If Jj finishes at time T, we set n′ = j. Otherwise, we continue to schedule the next job Jj+1 in a similar way, until all n′ jobs have been scheduled.
If n′ < n, we schedule the remaining jobs Jn, ..., Jn′+1 one by one, in this order, on machines M2, ..., Mn−n′+1, respectively. We schedule job Jk, n ≥ k ≥ n′ + 1, backwards from time T on machine Mi, i = n − k + 2. If it cannot be completely scheduled on machine Mi, we schedule the unfinished part of job Jk backward during the idle time intervals on machines Mi−1, Mi−2, ..., M2, in this order. If a part of Jk on Ml (i − 1 ≥ l ≥ 2) overlaps with a part of Jk on another machine, we swap the overlapping part of Jk on Ml with the job Jn−l+2 on Ml. The formal algorithm is presented below.
Algorithm1

1. Let S be an empty schedule.
2. Let fi = 0 for all machines Mi, i = 1, ..., m.
3. n′ = n
4. j = 1
5. While j ≤ n′ do
   (a) Renumber the machines that still have idle time so that f1 ≤ f2 ≤ ... ≤ fm.
   (b) Rearrange the unavailable intervals after fi, 1 ≤ i ≤ m, to the machines with indices as large as possible, so that the set of unavailable time intervals after time fi+1 in Li is a subset of Li+1 for all 1 ≤ i ≤ m − 1.
   (c) Update ai to be the total available time between fi and T on machine Mi, i = 1, ..., m.
   (d) Schedule Jj on machine M1; update f1, a1.
   (e) Let k = n and i = 2.
   (f) While k ≥ j + 1 and i ≤ m do
          if i = m do
              rearrange the unavailable intervals after f1, where M1 is available but Mm is not, from Mm to M1;
              update a1 and am
          end (if);
          if ai ≥ pk do
              σi = ai − pk, δk = 0
          end (if);
          else do
              σi = 0, δk = pk − ai, l = i − 1
              while δk > 0 and l ≥ 2 do
                  if σl > δk do σl = σl − δk, δk = 0 end (if);
                  else do δk = δk − σl, σl = 0 end (else);
                  l = l − 1
              end (while);
              if δk > 0 do
                  starting from T and going backward, schedule the remaining part of Jk in the idle intervals [s, t) (if any) on M1 such that either (1) s > fi and [s, t) is an unavailable interval on machine Mi, or (2) t < fi;
                  update δk
              end (if);
              if δk > 0 do
                  continue to schedule the remaining part of Jk backward, starting from fi, in the time intervals on M1 where Jj is scheduled, and reschedule this part of Jj to the idle time on M1 as early as possible;
                  update f1, a1;
                  n′ = k − 1
              end (if);
          end (else);
          i = i + 1, k = k − 1
       end (While);
   (g) Update the processing times of jobs Jn, ..., Jn−m+2 (or Jn, ..., Jj+1 if n − j < m − 1) by excluding the parts of these jobs scheduled before the completion time of job Jj on machine M1.
   (h) Remove the jobs scheduled after Jj on M1.
   (i) If the completion time of job Jj is T, set n′ = j.
   (j) j = j + 1
   end (While);
6. k = n and i = 2
7. While k ≥ j do
       schedule Jk backward on Mi from T as much as possible;
       if ai ≥ pk do σi = ai − pk, δk = 0 end (if);
       else do σi = 0, δk = pk − ai end (else);
       l = i − 1
       while δk > 0 and l ≥ 2 do
           if σl > 0 do
               schedule Jk backward on Ml, starting from the last idle time, for a period of length min(σl, δk); if some part of Jk on Ml overlaps with a part of Jk already scheduled on another machine, swap this part of Jk on Ml with a part of the same length of the job Jn−l+2 on Ml that is scheduled in intervals where Jk is not scheduled on other machines;
               if σl > δk do σl = σl − δk, δk = 0 end (if);
               else do δk = δk − σl, σl = 0 end (else);
           end (if);
           l = l − 1
       end (while);
       i = i + 1, k = k − 1
   end (While);
8. Return S
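To make the surplus/deficit bookkeeping of Step 5(f) concrete, the following simplified sketch (ours) isolates just that bookkeeping: it takes the available times a2, ..., am between the corresponding fi and T as given, ignores the actual interval structure and the rescheduling of Jj, and only reports how much of each checked job would be left for M1.

    def leftover_deficits(avail, tail_jobs):
        # avail[r]: available time on machine M_{r+2} between f_{r+2} and T.
        # tail_jobs[r]: processing time of the job checked on M_{r+2} (p_n, p_{n-1}, ...).
        # Returns, for each job, the deficit that Step 5(f) would have to place on M1.
        surplus = []            # surpluses of M2, ..., current machine
        leftovers = []
        for cap, p in zip(avail, tail_jobs):
            if p <= cap:
                surplus.append(cap - p)
                leftovers.append(0)
                continue
            deficit = p - cap
            surplus.append(0)
            # consume the surpluses of M_{i-1}, M_{i-2}, ..., M2, in this order
            for l in range(len(surplus) - 2, -1, -1):
                take = min(surplus[l], deficit)
                surplus[l] -= take
                deficit -= take
                if deficit == 0:
                    break
            leftovers.append(deficit)
        return leftovers

A positive entry corresponds to the part of Jk that Step 5(f) schedules on M1, possibly preempting Jj and causing n′ to be reset to k − 1.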
We use an example to explain Algorithm1. There are 4 machines: M1 is unavailable during [0, 3), M3 is unavailable during [4, 7), and M4 is unavailable during [8, 12); see Figure 1 (0). We need to schedule 5 jobs by time T = 12. The processing times of J1, ..., J5 are 5, 6, 7, 10, 10, respectively. We first rearrange the unavailable intervals and renumber the machines so that all unavailable intervals are on the last machine, M4. Then we schedule J1 on M1 during [0, 5) and check whether J5, J4, and J3 can finish by 12. Since p5 = 10 < a2 = 12, we have σ2 = 2 and δ5 = 0. Similarly, σ3 = 2 and δ4 = 0; see Figure 1 (1). For J3, we first move the unavailable intervals after time 5 from M4 to M1; then we get σ4 = 1 and δ3 = 0, see Figure 1 (2). So we fix the schedule of J1 and reorder the machines, see Figure 2 (3). Then we schedule J2 on the new M1 during the interval [0, 6) and do the check in a similar way. This time, however, J4 has to use the surplus on M2 (see Figure 2 (4)), and J3 has to preempt J2, so J2 is delayed. By the algorithm, we update the processing time of J3, set n′ = 2, and stop Step 5(f). Then we schedule jobs J5, J4, J3, in this order, according to Step 7. The final schedule is given in Figure 2 (5).
Figure 1: Illustration of Algorithm1
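On the first iteration of this example, the capacities implied by Figure 1 (1)-(2) are a2 = a3 = 12 and a4 = 8 (since σ4 = 1 and p3 = 7), and the jobs checked are J5, J4, J3. Feeding these numbers to the hypothetical leftover_deficits sketch given after Algorithm1 reproduces the surpluses reported above:

    print(leftover_deficits([12, 12, 8], [10, 10, 7]))   # [0, 0, 0]: nothing spills onto M1
    # surpluses accumulated along the way: sigma_2 = 2, sigma_3 = 2, sigma_4 = 1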
Lemma 3. The order pn ≥ pn−1 ≥ ... ≥ pn−m+2 ≥ pn−m+1 ≥ ... ≥ p1 is always maintained, even if pk is updated for some n − m + 2 ≤ k ≤ n in some iterations of Step 5 of the algorithm.

Proof. We show that the lemma holds after each time we schedule a job Jj in Step 5. We first show that the relative order of the lengths of the last m − 1 jobs is maintained.
Figure 2: Illustration of Algorithm1
Let Jk+1 and Jk be two of the last m − 1 jobs. According to the algorithm, after Jj is scheduled on M1, we check whether job Jk+1 can complete by T on machine Mi−1 and whether job Jk can complete by T on machine Mi, where i = n − k + 2. If pk+1 is unchanged in Step 5(g), we must have pk+1 ≥ pk. Otherwise, a part of Jk+1 is scheduled before the completion time of job Jj on machine M1. Let p′k+1 be the new length of job Jk+1 after it is updated. We know p′k+1 ≥ ai−1; furthermore, there is no surplus on any of the machines Mi−1, ..., M2. If job Jk can be fully scheduled on machine Mi, we have pk ≤ ai ≤ ai−1 ≤ p′k+1. Otherwise, a part of job Jk will be scheduled on machine M1 backward from time T. Let p′k be the length of job Jk after Step 5(g). Since a part of Jk+1 is scheduled before Cj on M1, there must be no idle time before fi−1 on M1, and we must have Cj ≥ fi−1. So if there is a part of job Jk before fi−1 on M1, it must be excluded from p′k. And since the unavailable intervals after fi in Li−1 are a subset of those in Li, p′k is at most ai−1 at the end of Step 5(g). Thus p′k ≤ ai−1 ≤ p′k+1, and the order is maintained.

Now we show that pn−m+1 ≤ pn−m+2. Note that when we check Jn−m+2, am and a1 will be updated at the beginning of Step 5(f), and we have am ≥ a1. If pn−m+2 is not updated in Step 5, then pn−m+2 ≥ pn−m+1. Otherwise, let p′n−m+2 be the new length of job Jn−m+2. We must have p′n−m+2 ≥ am, there is no idle time before fm on M1 after Jj is preempted and scheduled after the part of Jn−m+2 on M1, and Cj ≥ am. Furthermore, σl = 0 for all l, 2 ≤ l ≤ m. In this case, all the remaining jobs have to be, and can be, scheduled on the first machine, which means that we must have am ≥ a1 ≥ pn−m+1. Thus p′n−m+2 ≥ am ≥ a1 ≥ pn−m+1. Since the processing times p1, p2, ..., pn−m+1 are unchanged, the relative order of the (remaining) processing times is maintained.
Lemma 4. Algorithm1 generates a feasible schedule S of the n jobs in which the first n′ jobs are scheduled as early as possible subject to the condition that the last m − 1 jobs can finish by T.
Proof. By assumption, T ≥ C*max, so there is a feasible schedule whose makespan is at most T. We show that S is feasible in two phases: in Phase 1, we prove by induction that the partial schedule S after Step 5 is feasible; in Phase 2, we show that the schedule S after Step 7 is still feasible.
Phase 1. We show that the partial schedule S after Step 5 is feasible. Specifically, we show by induction that S, the partial schedule produced after the first n′ jobs have been scheduled as early as possible subject to the condition that the last m − 1 jobs can finish by T (we call this the earliness property), is feasible, and that given S there is a way to schedule the remaining n − n′ jobs by T. This is certainly true when no jobs are scheduled. Assume that the hypothesis is true after the first j − 1 jobs have been scheduled; we need to show that after Jj is scheduled in Step 5, the remaining jobs can all be scheduled by T. We show this in two steps.

Step (1). We first show that the partial schedule S at the end of Step 5 is feasible and that, given S, the largest m − 1 jobs Jn, Jn−1, ..., Jn−m+2 (or all the remaining jobs Jn, Jn−1, ..., Jj+1 if n − j < m − 1) can be feasibly scheduled.

By induction, job Jj can be feasibly scheduled on machine M1 in Step 5(d). Apparently, this is the earliest time at which Jj can be scheduled, so the earliness property is maintained. Now we show that the largest m − 1 jobs Jn, Jn−1, ..., Jn−m+2 (or all the remaining jobs Jn, Jn−1, ..., Jj+1 if n − j < m − 1) can be feasibly scheduled by Step 5(f). As we can see in Step 5(f), these jobs are considered in non-increasing order of their processing times. For each job Jk, n − m + 2 ≤ k ≤ n, we schedule it backwards from time T on machine Mi, i = n − k + 2. If a job Jk has a deficit, we schedule the remaining part of Jk in the idle times (surpluses) on the machines Mi−1, Mi−2, ..., M2, M1. Since the deficit δk and the surplus σl (i + 1 ≥ l ≥ 2) may be reduced after each machine Ml (i + 1 ≥ l ≥ 2) is considered, we use δk^(0) to denote the initial deficit after Jk is scheduled on machine Mi, and σl^(0) (i + 1 ≥ l ≥ 2) to denote the initial surplus after job Jn−l+2 is scheduled on machine Ml. Without loss of generality, we assume that S does not contain any partial job Jk from before job Jj was scheduled. We have two cases, depending on whether the total surplus on machines Mi−1, Mi−2, ..., M2 covers the deficit of job Jk.

Case 1: The total surplus on machines Mi−1, Mi−2, ..., M2 is at least the deficit of job Jk, that is, Σ_{q=2}^{i−1} σq ≥ δk^(0).

Let p be the largest index such that 2 ≤ p < i and machine Mp has surplus. Job Jn−p+2 must have been fully scheduled on machine Mp, so pn−p+2 = T − fp − |Lp| − σp^(0) ≥ pk. And since pk = T − fi − |Li| + δk^(0), we have δk^(0) ≤ fi − fp + |Li| − |Lp| − σp^(0). We can always schedule a part of Jk of length min{δk, σp} on machine Mp, backward from T, in time intervals during which job Jk is not scheduled and (1) machine Mp is idle, or (2) Jn−p+2 is scheduled, in which case we reschedule the replaced part of Jn−p+2 to the idle time intervals on machine Mp.
Note that even if another job Jx replaced Jn−p+2 in a previous iteration and is scheduled during such time intervals on machine Mp, then, since δx^(0) ≤ fn−x+2 − fp + |Ln−x+2| − |Lp| − σp^(0), there always exists a set of time intervals of total length σp^(0) such that Mp is available during these intervals but Mn−x+2 is not (and Mi is not available either), and thus we can always use these intervals to schedule Jk. After that, if δk is still larger than 0, we find the new p, the largest index such that p < i and machine Mp has surplus, and repeat the same procedure.

Case 2: The total surplus on machines Mi−1, Mi−2, ..., M2 is smaller than the deficit of job Jk, that is, Σ_{q=2}^{i−1} σq < δk^(0). Based on whether there exists a machine Mp (2 ≤ p ≤ i − 1) that has a surplus, we have the following two subcases.

Case (a): There does not exist a machine Mp (2 ≤ p ≤ i − 1) that has a surplus. In this case, we have Σ_{q=2}^{i−1} σq = 0. Let f1 be the earliest idle time on M1 before job Jj is scheduled. By the induction hypothesis, all jobs Jj, ..., Jn can be feasibly scheduled by T, so we must have pk ≤ T − f1 − |L1|, and thus we can always schedule the deficit of job Jk during the time intervals where machine M1 is available but machine Mi is not, which include the available intervals in [f1, fi] on machine M1 and the time intervals in Li \ L1. We schedule the part of job Jk of length δk^(0) backward from T on machine M1 during the time intervals where machine Mi is unavailable but machine M1 is idle or has job Jj scheduled. If a part of Jj needs to be replaced, we reschedule this part of job Jj to the idle time after Cj and update Cj. Note that by scheduling job Jk in this way, we make sure that Jj is scheduled as early as possible subject to the condition that Jk can finish at T. On the other hand, if there is a job other than Jj, say Jx, that is scheduled on M1 during an interval [t − ε, t], we skip it and schedule the unfinished part of job Jk before this part of Jx, backwards from t − ε, and so on. Since x > k, by scheduling job Jk in this way we make sure that job Jk is scheduled as early as possible subject to the condition that Jn, ..., Jk+1 can finish by T.

Case (b): There exists a machine Mp (2 ≤ p ≤ i − 1) that has a surplus; let p be the largest index of all such machines. In this case, since ap ≥ ai and job Jn−p+2, which has a larger processing time than Jk, has been fully scheduled on Mp, we can always schedule the deficit of job Jk during the time intervals where machine Mi is unavailable but machine Mp is available, which include the available intervals in [fp, fi] on machine Mp and the time intervals in Li \ Lp. Note that we must have that no partial jobs Jk+1, ..., Jn−p+2 are scheduled on machine M1 and that δk^(0) ≤ ap − ai. Furthermore, no partial jobs Jn−p+3, ..., Jn are scheduled on M1 after fp during the time intervals where machine Mp is available (note that during these intervals, Mp−1, ..., M2 are all available). Thus, we can always schedule the part of job Jk of length δk^(0) during the time intervals where Mi is unavailable but Mp is available, on machine M1 and on machines Mp, Mp−1, ..., M2 if they have surplus. Specifically, we schedule a part of job Jk of length δk^(0) − Σ_{q=2}^{i−1} σq backward from T on machine M1 during the time intervals where Jk is not scheduled but M1 is idle or has Jj scheduled.
If a part of Jj needs to be replaced, we reschedule this part of job Jj to the idle time after Cj and update Cj. Then we distribute the unscheduled part of Jk, of length Σ_{q=2}^{i−1} σq, to the machines with surplus
during the time intervals in [fp, fi] where Mp is available and the time intervals of Li \ Lp, as we did in Case 1, making sure that this part of Jk does not overlap with the part that is scheduled on M1.

At the end of Step 5(f), let αk be the part of job Jk scheduled before Cj on machine M1. By the algorithm, this part of job Jk will be fixed, and the processing time of job Jk will be updated in Step 5(g). Clearly, the new partial schedule S after this iteration is feasible and contains the partial job Jk during the time interval [fp, Cj]. We now show that in the later iterations, when we schedule Jj+1, Jj+2, ..., Jn′, we can always schedule job Jk feasibly. Consider the iteration in which we schedule job Jj+1. The machines will be renumbered; we use M′1, ..., M′m−1, M′m to denote the newly ordered machines, and f′1, f′2, ..., f′m and L′1, L′2, ..., L′m to denote the earliest available times and the sets of unavailable time intervals on machines M′1, ..., M′m−1, M′m, respectively. Apparently, we have f′x ≥ fx for all 1 ≤ x ≤ m. When we try to schedule job Jk on M′i, job Jk will have a new deficit δ′k^(0) = δk^(0) − αk + f′i − fi + |L′i| − |Li|. With the partial job Jk of length αk scheduled during the interval [fp, Cj], we can still feasibly schedule the new deficit δ′k^(0) of job Jk after fp, during the time intervals where M′i is unavailable but Mp is available, on machine M′1 and on machines M′y (2 ≤ y ≤ p) if they have surplus. By the same argument, job Jk can always be feasibly scheduled in all the iterations until job Jk−1 is scheduled.

Step (2). Next, we show that if there are still jobs Jy (j < y < n − m + 2) not scheduled after Step (1) above, we can find a feasible schedule for these jobs based on the schedule produced above. We first schedule Jn−m+1: if there are still idle intervals on M1 before T, we schedule job Jn−m+1 backward from T on M1. Either it can be totally scheduled on M1 and we are done, or it has a deficit δn−m+1 > 0. In the latter case, we try to schedule the remaining part of Jn−m+1 on the machines that have surplus, just as we did in Step (1). By the induction hypothesis, the total length of the idle intervals is sufficient (it cannot be less than the total length of the remaining jobs), so the deficit of this job can be distributed to the other machines, starting from Mm down to M2.

Then we schedule the remaining jobs. After Jn−m+1 has been completely scheduled as above, or has not been scheduled at all because there is no idle time on M1, we schedule the remaining jobs backward, in decreasing order of their processing times, using the following procedure. Let M_{i_1}, M_{i_2}, ..., M_{i_x} be the machines that still have surplus. Let t_u (1 ≤ u ≤ x) be the end time of the last idle interval on M_{i_u}, and assume t_1 ≤ t_2 ≤ ... ≤ t_x. We know that each of these machines has one of the longest m − 1 (or m, if the first machine still has surplus after Jn−m+1 is scheduled) jobs that are completely scheduled; we denote these jobs by J_{i_1}, ..., J_{i_x}. Pick the next longest job Jw, j < w < n − m + 1 (or j < w ≤ n − m + 1 if Jn−m+1 is not scheduled because there is no idle time on M1). We schedule Jw backward on machine M_{i_x}, starting from the end of the last idle interval t_x. If Jw can be completely scheduled on the machine, we are done and continue with the next job. Otherwise, it has a deficit δw.
The fact that J_{i_x} is one of the longest m − 1 (or m, if M1 has surplus) jobs guarantees that δw is at most the length of the part of job J_{i_x} that is scheduled after t_x.
Also, by the way we have arranged the machines, we know that, for all 1 ≤ z ≤ x, δw cannot be longer than the length of the part of job J_{i_z} scheduled after t_x. To schedule the remaining part of Jw, we pick machine M_{i_{x−1}}. We schedule a part of Jw of length min(σ_{i_{x−1}}, δw) in the idle intervals on M_{i_{x−1}}. If a part of job Jw overlaps with a part of Jw scheduled on M_{i_x}, we exchange this overlapping part of job Jw with a part of J_{i_{x−1}} of the same length scheduled after t_x on M_{i_{x−1}}. If δw = 0, we continue with the next job; otherwise, we continue to schedule the remaining part on machine M_{i_{x−2}}, and so on. Note that during the time interval [t_x, T], the total length of the intervals where J_{i_z} (1 ≤ z ≤ x) is scheduled but Jw is not is always larger than δw. Since T ≥ C*max, we must have δw = 0 at some point. Following the same procedure, we can feasibly schedule all the remaining jobs.

We have now shown that, given the partial schedule in which the first j − 1 jobs are scheduled as early as possible subject to the condition that all the remaining jobs can be finished by T, the algorithm schedules job Jj as early as possible subject to the condition that all the remaining jobs can be finished by T. By induction, the partial schedule S is feasible after the first n′ jobs have been scheduled as early as possible, and given S there is a way to schedule the remaining jobs by T.
Phase 2. Now we show that after Steps 6 and 7, the schedule S is still feasible. If n′ is never updated, then n′ = n; that is, we have feasibly scheduled all n jobs after Step 5. Otherwise, n′ < n. From the algorithm, we reset n′ in two cases.

The first case is when, after we schedule job Jj in Step 5(d), a partial job Jk of length αk > 0 has to be scheduled before Cj on machine M1. In this case we set n′ = k − 1. Since αk > 0, the completion times of jobs Jk, ..., Jn must be T in our final schedule; otherwise, the earliness property of job Jj would be violated. In this case, the parts of jobs Jk, ..., Jn scheduled before Cj are fixed, and the remaining parts of jobs Jk, ..., Jn can be feasibly scheduled on machines M2, ..., Mi in Step 7, following the same analysis as Case 2 of Step (1) in the proof above.

The second case is when the current job Jj finishes at time T after Step 5(f), and we set n′ = j. Here we have two subcases. (1) Job Jj is not preempted. Since before Jj is scheduled we have a1 ≥ a2 ≥ ... ≥ am and all the remaining jobs have processing times at least pj, we must have a1 = a2 = ... = am, pn′ = ... = pn = pj, exactly n − n′ < m jobs are left, and the completion times of jobs Jn′+1, ..., Jn must be T in our final schedule. In this subcase, it is clear that jobs Jj+1, ..., Jn can be feasibly scheduled in Step 7. (2) Job Jj is preempted and finishes at T. Let Jk be the job with the smallest index that preempts job Jj. Apparently, Jj must be scheduled on M1 during all those intervals where Mi (i = n − k + 2) is available; thus pj ≥ ai, and none of the machines Ml (i ≥ l ≥ 2) has surplus. For all jobs Jx, j < x < k, we have px ≥ pj ≥ ai. However, for all machines Mz, i + 1 ≤ z ≤ m, we have az ≤ ai. So the processing time of each job Jx, j < x < k, is exactly ai, az = ai for all i + 1 ≤ z ≤ m, and all such jobs Jx (j < x < k) will finish at time T. For jobs Jk, ..., Jn, following the same analysis as Case 2 of Step (1) in the proof above, they can be feasibly scheduled on machines M2, ..., Mi in Step 7 of Algorithm1.

Lemma 5. Algorithm1 can be implemented in O(nm^2 + nmB + B log B) time, where B is the number of unavailable intervals.
Proof. Apparently, the running time of the algorithm is dominated by Step 5. In this step, we schedule n′ ≤ n jobs, and for each job Jj (1 ≤ j ≤ n′) we need to check whether the last m − 1 jobs can finish by T. To implement this step, we maintain a list Bi of the unavailable intervals on each machine Mi, 1 ≤ i ≤ m, and we use |Bi| to denote the number of intervals in Bi. To build the initial lists, we first sort all the input unavailable intervals in non-decreasing order of their beginning times, breaking ties, if necessary, by decreasing order of the lengths of the intervals; we then scan the sorted unavailable intervals and rearrange them one by one onto machines Mm, Mm−1, ..., M1, in this order, such that Bi is a subset of Bi+1. Note that when we rearrange each interval, we only need to look at the last unavailable interval in Bi for all m ≥ i ≥ 1. Thus, the time to create the initial sets Bi, 1 ≤ i ≤ m, is O(B log B + Bm), and the total number of unavailable intervals in these initial sets, |B1| + |B2| + ... + |Bm|, is at most B.

When we check each of the last m − 1 jobs, Jk, on machine Mi (i = n − k + 2), we first calculate Jk's deficit δk and check whether δk can be decreased by using the surplus σl of machine Ml for all 2 ≤ l ≤ i − 1. It is easy to see that this can be done in O(m) time. Next, if δk is still greater than 0, we need to schedule this part of Jk on machine M1, backward from T, in the intervals where Jk is not scheduled on Mi but M1 is idle or has Jj scheduled. To implement this, after Jj is scheduled in Step 5(d), we build for M1 another list, A, of the intervals that are either idle or have job Jj scheduled. This can easily be obtained from the list B1. Let |A| be the number of intervals in A; then |A| ≤ |B1| + 2. Thus, when we schedule the deficit δk of Jk on M1 in the intervals where Jk is not scheduled on Mi but M1 is idle or has Jj scheduled, we only need to traverse the intervals in Bi (i = n − k + 2) and A backwards simultaneously, and the running time is O(|A| + |Bi|). Afterwards, the number of intervals in A may increase, but by at most |Bi|. Thus, when we check all the jobs Jk, n ≥ k ≥ n − m + 2, the number of intervals in A is always bounded by O(B), and the total time to check all these jobs is O(mB + B). Note that in Algorithm1, when we check job Jn−m+2 on machine Mm, we need to do some additional work, namely moving the unavailable intervals after the updated f1, where M1 is available but Mm is not, from Bm to B1. For this step, the updates of Bm, B1 and A can be done in time O(B). So the total time to check the last m − 1 jobs is O(m(m + B)) in each iteration of Step 5.

After that, in Step 5(h), we remove all the jobs after Cj on machine M1 and remove all the unavailable intervals before Cj from B1, so the total number of unavailable intervals in B1, B2, ..., Bm is still at most B. At the beginning of the next iteration, before job Jj+1 is scheduled, we need to rearrange the unavailable intervals in Bi for all 1 ≤ i ≤ m such that the unavailable intervals after Cj, where Mi+1 is available but Mi is not, are moved from Bi to Bi+1. This can be done in O(B). In conclusion, the total running time of Algorithm1 is O(nm^2 + nmB + B log B).
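The backward filling of M1 in the proof above amounts to intersecting two interval lists: the intervals where M1 is idle (or runs Jj) and the intervals where Mi is unavailable, restricted to times before T. The following sketch (ours) computes these slots with a simple quadratic loop; the proof obtains them in linear time by a single backward merge of A and Bi.

    def backward_slots(free_on_m1, busy_on_mi, T):
        # Half-open intervals before T during which M1 is free while Mi is unavailable,
        # returned latest-first, the order in which the deficit of Jk is placed.
        slots = []
        for (s1, e1) in free_on_m1:
            for (s2, e2) in busy_on_mi:
                s, e = max(s1, s2), min(e1, e2, T)
                if s < e:
                    slots.append((s, e))
        return sorted(slots, reverse=True)

    # M1 is free on [5, 8) and [9, 12); Mi is unavailable on [4, 7) and [10, 12); T = 12.
    print(backward_slots([(5, 8), (9, 12)], [(4, 7), (10, 12)], 12))   # [(10, 12), (5, 7)]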
Theorem 1. Algorithm1 returns an optimal schedule S for Pm(t) | m(t + ∆) ≥ m(t) − 1, r-a, prmt | ΣCj / Cmax ≤ T for ∆ < pmax in polynomial time.

Proof. By Lemma 4, S schedules the first n′ jobs as soon as possible as long as the last m − 1 jobs can finish by T. We now prove that S is optimal by first showing that there exists an optimal schedule that schedules the first n′ jobs as early as possible subject to the condition that the last m − 1 jobs can finish by T.
Assume there is an optimal schedule S∗ that does not schedule the first n′ jobs as early as possible. Let Ji, i ≤ n′, be the first job in S∗ that is not scheduled as early as possible, and let t1 be the first time at which Ji is scheduled in S but not in S∗. That is, all the jobs with index less than i are scheduled in S∗ exactly as in S, and job Ji is scheduled in S∗ exactly as in S before time t1. Consider all the jobs Jj (j > i) scheduled at t1 in S∗. By Lemma 2, we have Cj(S∗) ≥ Ci(S∗). If there exist a time t′ and a job Jj such that Jj is scheduled at t1 but not at t′ and Ji is scheduled at t′, we can exchange Jj at t1 with Ji at t′. The completion times of Ji and Jj are not increased and the completion times of the other jobs are unchanged, so the total completion time is not increased; see Figure 3. So in the following we assume that such a t′ does not exist in S∗; i.e., in S∗, at any time after t1, if Ji is scheduled, then all the jobs Jj, j > i, that are scheduled at t1 in S∗ must also be scheduled.
Figure 3: Illustration of Theorem 1.
For convenience, we say that (J_{i_1}(t_{i_1}), J_{i_2}(t_{i_2}), ..., J_{i_p}(t_{i_p})) is a "sequence" of a schedule if we can reschedule J_{i_k} from t_{i_k} to t_{i_{k+1}}, for 1 ≤ k ≤ p − 1, without increasing the total completion time of the schedule. A sequence (J_{i_1}(t_{i_1}), J_{i_2}(t_{i_2}), ..., J_{i_p}(t_{i_p})) is an "exchange sequence" of a schedule if we can reschedule J_{i_k} from t_{i_k} to t_{i_{k+1}} (or to t_{i_1} if k = p) without increasing the total completion time of the schedule. It is easy to see that for this to hold, J_{i_k}, 1 ≤ k ≤ p, must be scheduled at t_{i_k} but not at t_{i_{k+1}} (or t_{i_1} if k = p). Depending on the completion times of the jobs scheduled at t1 in S∗, we have the following two cases. We will show that in each case there is an exchange sequence in S∗ that allows us to reschedule Ji at t1 in S∗.
Case 1: There exists a job Jj at t1, j > i, with Cj ≠ T.

Let τ, Ci ≤ τ ≤ Cj, be the first time after Ci at which Jj is not scheduled.
Then Jj is continuously scheduled during the period [Ci − 1, τ) and is preempted at τ, or finishes at τ if τ = Cj. We must have τ − (Ci − 1) < pj ≤ pmax, and thus m(τ) ≥ m(Ci − 1) − 1. Since neither Ji nor Jj is scheduled at τ in S∗, but both Ji and Jj are scheduled at Ci − 1 in S∗, there must exist a job Jl (l > i) (or idle time/a dummy job) that is scheduled at τ but not at Ci − 1. We can convert S∗ based on the exchange sequence (Jj(t1), Jl(τ), Ji(Ci − 1)); see Figure 4. In this way, Ci is decreased by at least 1, Cj is increased by at most 1, Cl is not increased, and all other jobs have the same completion times as before, so the total completion time is not increased.
Figure 4: Illustration of Theorem 1, Case 1; here τ = Cj in the figure.
Case 2: For all jobs Jj at t1 with j > i, we have Cj = T.

Since Ji is scheduled at t1 in S but not in S∗, and the number of available machines at t1 is fixed, there must exist a job Jj scheduled at t1 in S∗ but not at t1 in S. Since all the jobs with index less than i have exactly the same schedule in S and S∗, we must have j > i. Thus there must exist a time t2 such that Jj is not scheduled at t2 in S∗ but is scheduled at t2 in S. In turn, there must exist a job Jk that is scheduled at t2 in S∗ but not in S. Note that Jk cannot be a dummy job representing an idle machine at t2; otherwise, we could follow the sequence (Jj(t1), Jk(t2), Ji(Ci − 1)) to move job Jj out of t1 and move Ji from Ci − 1 to t1, and the resulting schedule would have a smaller total completion time, a contradiction. Also, k cannot be smaller than i, because the schedules of those jobs are exactly the same in S and S∗. So we must have k ≥ i. We repeat this procedure until we find a job Jp such that either Cp < T or p = i. Since Ji is scheduled at t1 in S but not in S∗, there must exist a time at which Ji is scheduled in S∗ but not in S; and since the number of jobs and the makespan of the schedule are finite, we must be able to stop at some point. In this way, we obtain a sequence (Jj(t1), Jk(t2), ..., Jp(tp)). Note that all the jobs except Jp in this sequence must have completion time T in S∗.
If p = i, then we can follow the exchange sequence (Jj(t1), Jk(t2), ..., Ji(ti)) to convert S∗ so that Ji is scheduled at t1 in S∗. Otherwise, Cp < T. If there exists a time ti at which Ji is scheduled but Jp is not, then we can use the exchange sequence (Jj(t1), Jk(t2), ..., Jp(tp), Ji(ti)) to convert S∗. If no such time ti exists, then at time Ci − 1 both Ji and Jp are scheduled (since Jp is scheduled whenever Ji is); let τ be the first time after Ci − 1 at which Jp is not scheduled. Then τ − (Ci − 1) < pp ≤ pmax, and thus m(τ) ≥ m(Ci − 1) − 1. Since at time τ neither Ji nor Jp is scheduled, there must exist a job Jl (l > i) that is scheduled at τ but not at Ci − 1. We can follow the exchange sequence (Jj(t1), Jk(t2), ..., Jp(tp), Jl(τ), Ji(Ci − 1)) to convert S∗ so that Ji is scheduled at t1 in S∗; see Figure 5. In summary, we can always reschedule S∗ so that Ji is scheduled at t1 without increasing the total completion time.
Figure 5: Illustration of Theorem 1, Case 2; here τ = Cp in the figure.
By repeating the above process, we can always convert S∗ so that the schedule of the first n′ jobs in S∗ is exactly the same as that in S. Note that the last n − n′ jobs complete exactly at T in S, and we have shown in the proof of Lemma 4 that, given that the first n′ jobs are scheduled exactly as in S, the remaining jobs have to complete at T in any schedule. This means that S must be optimal.
4. Conclusion

In this paper, we study the bi-criteria scheduling problem subject to the machine unavailability constraint Pm(t) | m(t + ∆) ≥ m(t) − 1, r-a, prmt | ΣCj / Cmax ≤ T for ∆ < pmax. The number of available machines can increase or decrease, with the constraint that it cannot decrease by two within any period of length pmax. This general unavailability model includes the special case in which each machine has a release time after which it is always available; it also includes the special case in which at any time there is
at most one machine unavailable. We show that the problem is solvable in polynomial time by developing an optimal algorithm. Both the algorithm and the proofs are quite involved and subtle.
References

[1] C.L. Chen, R.L. Bulfin. Complexity of single machine, multicriteria scheduling problems, European Journal of Operational Research, 70, pp. 115-125, 1993.
[2] F. Diedrich, K. Jansen, U.M. Schwarz, D. Trystram. A Survey on Approximation Algorithms for Scheduling with Machine Unavailability, Algorithmics of Large and Complex Networks, Lecture Notes in Computer Science, Vol. 5515, pp. 50-64, 2009.
[3] P. Dileepan, T. Sen. Bicriteria static scheduling research for a single machine, OMEGA, 16, pp. 53-59, 1988.
[4] R.L. Graham, E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan. Optimization and approximation in deterministic sequencing and scheduling, a survey, Annals of Discrete Mathematics, 5, pp. 287-326, 1979.
[5] J.N.D. Gupta, J.C. Ho, S. Webster. Bicriteria optimisation of the makespan and mean flowtime on two identical parallel machines, Journal of the Operational Research Society, 51(11), pp. 1330-1339, 2000.

[6] J.A. Hoogeveen. Multicriteria scheduling, European Journal of Operational Research, 167(3), pp. 592-623, 2005.
[7] Y. Huo and H. Zhao. Bicriteria Scheduling Concerned with Makespan and Total Completion Time Subject to Machine Availability Constraints, Theoretical Computer Science, 412, pp. 1081-1091, 2011.
[8] C.-Y. Lee and S.D. Liman. Capacitated two-parallel machine scheduling to minimize sum of job completion times, Discrete Applied Mathematics, 41, pp. 211-222, 1993.
[9] C.-Y. Lee, L. Lei and M.L. Pinedo. Current trends in deterministic scheduling, Annals of Operations Research, 70, pp. 1-41, 1997.

[10] J.K. Lenstra, A.H.G. Rinnooy Kan and P. Brucker. Complexity of machine scheduling problems, Annals of Discrete Mathematics, 1, pp. 343-362, 1977.
[11] J. Y-T. Leung, G.H. Young. Minimizing schedule length subject to minimum flow time, SIAM Journal on Computing, 18(2), pp. 314-326, 1989.
[12] J. Y-T. Leung, M.L. Pinedo. Minimizing total completion time on parallel machines with deadline constraints, SIAM Journal on Computing, 32, pp. 1370-1388, 2003.

[13] J. Y-T. Leung, M.L. Pinedo. A note on the scheduling of parallel machines subject to breakdown and repair, Naval Research Logistics, 51, pp. 60-72, 2004.
[14] Y. Ma, C. Chu & C. Zuo. A survey of scheduling with deterministic machine availability constraints, Computers & Industrial Engineering, 58(2), pp. 199-211, 2010.
[15] Z. Liu, E. Sanlaville. Preemptive scheduling with variable profile, precedence constraints and due dates, Discrete Applied Mathematics, 58, pp. 253-280, 1995.
[16] R. McNaughton. Scheduling with Deadlines and Loss Functions, Management Science, 6(1), pp. 1-12, 1959.

[17] H. Saidy, M. Taghvi-Fard. Study of Scheduling Problems with Machine Availability Constraint, Journal of Industrial and Systems Engineering, 1(4), pp. 360-383, 2008.
[18] G. Schmidt. Scheduling with limited machine availability, European Journal of Operational Research, 121(1), pp. 1-15, 2000.

[19] R.E. Steuer. Multiple Criteria Optimization: Theory, Computation and Application, John Wiley, New York, 1986.
[20] V. T’kindt, J.C. Billaut. Multicriteria Scheduling: Theory, Models and Algorithms, Springer Verlag, Heidelberg, 2002.