Online deadline scheduling on faster machines




Information Processing Letters 85 (2003) 31–37 www.elsevier.com/locate/ipl

Online deadline scheduling on faster machines Jae-Hoon Kim, Kyung-Yong Chwa ∗ Department of Electrical Engineering & Computer Science, Korea Advanced Institute of Science and Technology, Taejon 305-701, Republic of Korea Received 5 February 2002; received in revised form 10 May 2002 Communicated by K. Iwama

Abstract

Online deadline scheduling determines which jobs are accepted and which are rejected, where each job has a deadline by which its processing must finish and jobs arrive in an online fashion. The slack of a job is the gap between its arrival time and the last time at which it can start and still meet its deadline. The job instance is such that the slack of each job is at least κ times its processing time, where κ is called the patience. In this paper, online algorithms are given faster machines than the adversary. We investigate the machine speeds at which online algorithms can achieve optimality and parametrize them by the patience. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: Scheduling; Online algorithms

1. Introduction

In this paper, we consider online deadline scheduling problems in a single-machine or multiple-machine setting. Jobs arrive over time with no prior knowledge; each has a processing time and a deadline. The goal is to maximize the utilization of the machines, that is, the total processing time of the jobs that meet their deadlines. We distinguish preemptive and nonpreemptive models according to whether preemption is allowed. In particular, in the preemptive model considered here, the algorithm must decide on admission when a job arrives, and an admitted job must be run to completion, that is, it cannot be abandoned. (This model is called preemptive

with commitment.¹) In both models, no online algorithm can guarantee a constant competitive ratio. It has recently been shown that this barrier can be broken [5,9,7] when the online algorithm is allowed to use faster machines than the offline adversary. This approach of granting the algorithm extra resources gives new insight into problems of this type. For both preemptive and nonpreemptive scheduling, we parametrize an instance by a value called the patience. The slack of a job is the gap between its arrival time and the last time by which it can start processing and still meet its deadline. The job instance is assumed to be such that each of its jobs has slack at least the patience times its processing time. There is some work [2,4,6] examining the performance of online algorithms in terms of this parameter when the algorithms have no extra resources. In this paper, allowing the online algorithm to use faster machines, we investigate the effect of the slackness assumption on the required machine speed. Intuitively, this assumption reduces the required speed, because an online algorithm can wait for better jobs to arrive during the slack of jobs that have already arrived.

* Corresponding author.
E-mail addresses: [email protected] (J.-H. Kim), [email protected] (K.-Y. Chwa).
¹ In the general model, called preemptive with no commitment, too frequent preemptions can burden the system.
0020-0190/02/$ – see front matter © 2002 Elsevier Science B.V. All rights reserved. PII: S0020-0190(02)00332-0

In online deadline scheduling, we evaluate an online algorithm by competitive analysis [10], comparing its performance with that of an optimal offline algorithm having complete knowledge of the jobs. The competitive ratio of an online algorithm A is defined as

max_I OPT(I)/A(I),

taken over all job instances I, where OPT(I) denotes the gain of the optimal schedule on I and A(I) the gain of the schedule of A. In our analysis, the online algorithm is given faster machines than the adversary. An algorithm A is said to be an s-speed c-approximation algorithm if

max_I OPT(I)/A_s(I) ≤ c,

where A_s(I) denotes the gain of the schedule of A on speed-s machines over I and OPT(I) the gain of the optimal schedule on speed-1 machines.

Notations. Let J be the set of all jobs. Each job J_i in J arrives at its release time r_i with processing time p_i, the amount of work required on a speed-1 machine, and deadline d_i by which its processing must finish. The expiration time x_i of J_i is the last time at which it can start and still meet its deadline, i.e., x_i = d_i − p_i, and the slack λ_i of J_i is defined to be x_i − r_i. The slackness assumption is that λ_i ≥ κp_i for each job J_i, and the value κ ≥ 0 is called the patience. Here x_i and λ_i are defined with respect to speed-1 machines; when every machine has speed s(κ), a function of the patience κ, the processing time p_i, the expiration time x_i, and the slack λ_i are redefined correspondingly for speed-s machines.
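These definitions can be made concrete with a small sketch. The `Job` class below is our own illustration (it is not part of the paper); it computes the expiration time, the slack, and the slackness condition on a speed-s machine, where a job's work takes p/s time at speed s.

```python
from dataclasses import dataclass

@dataclass
class Job:
    r: float  # release time
    p: float  # processing time (work, measured on a speed-1 machine)
    d: float  # deadline

    def expiration(self, s: float = 1.0) -> float:
        # last time the job can start on a speed-s machine and still finish by d
        return self.d - self.p / s

    def slack(self, s: float = 1.0) -> float:
        # gap between release and the last feasible start time
        return self.expiration(s) - self.r

    def has_patience(self, kappa: float, s: float = 1.0) -> bool:
        # slackness assumption: slack >= kappa times the processing time at speed s
        return self.slack(s) >= kappa * (self.p / s)
```

For example, a job with r = 0, p = 2, d = 6 has expiration time 4 and slack 4 on a speed-1 machine, so it satisfies the slackness assumption exactly for patience κ = 2.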

1.1. Previous work

Online deadline scheduling has been extensively investigated in both the preemptive and the nonpreemptive model [2,8,4,3,1]. In particular, Garay et al. [2] and Goldwasser [4] show matching upper and lower bounds of 1 + 1/κ and 2 + 1/κ for the preemptive and the nonpreemptive model, respectively, on a single machine, where κ > 0 is the patience of the given job instance. Recently, there has been various work studying deadline scheduling when the online scheduler is provided with faster machines than the adversary. In [7], Lam and To show that for a single machine, Earliest Deadline First (EDF) is a preemptive online algorithm that is optimal on a speed-2 machine, and they also show that for multiple machines, EDF is optimal using speed-3 machines. For nonpreemptive scheduling, to our knowledge, no previous work applies this approach of allowing more resources.

1.2. Our results

For preemptive scheduling, we show that EDF achieves optimality on a single machine if the speed s is 1 + 1/(κ + 1). We also prove that this speed is tight: if s < 1 + 1/(κ + 1), then no deterministic preemptive online algorithm can achieve optimality. On multiple machines, it follows straightforwardly that EDF is optimal if s is 1 + 2/(κ + 1). When κ = 0, these results coincide with those of [7]. For nonpreemptive scheduling, we consider the Earliest eXpiration time First (EXF) algorithm on a single machine (and on multiple machines). When the lengths of jobs are arbitrary, we show that EXF is an s(κ)-speed 2-approximation algorithm, where 1 + 1/(κ + 1) < s(κ) < 1 + 2/(κ + 1), with the exact value specified later. We also give a lower bound on the speed at which an online algorithm can have a competitive ratio of two. For equal-length jobs, we prove that when κ = 0, no online algorithm can achieve optimality using a machine of speed less than two. We also show that EXF is optimal on a speed-2 machine and that, if κ > 1, the required speed can be reduced below two.

2. Preemptive scheduling

First, we assume a single machine model and describe the preemptive EDF algorithm: when a job arrives, it is admitted if it can be completed before its deadline together with the jobs already admitted, which is checked by (virtually) performing EDF preemptively. Whenever a new job is admitted or a job is completed, EDF schedules the admitted job with the earliest deadline.

When EDF uses a speed-s machine with s = 1 + 1/(κ + 1), we wish to prove that OPT(J) ≤ EDF_s(J) for any job instance J. For a contradiction, assume that OPT(J) > EDF_s(J) for some job instance J, and let J0 be such an instance with the fewest jobs. Then the jobs in J0 are scheduled by EDF in one continuous time interval, because if they were scheduled in two or more continuous intervals, there would be a proper subset J0′ of J0 such that OPT(J0′) > EDF_s(J0′). So assume the jobs in J0 are processed by EDF in one continuous interval of length ℓ, say [0, ℓ], and that EDF uses a machine of speed s = 1 + 1/α, where α is specified later. We call the jobs with deadlines > sℓ the late jobs and the remaining jobs the early jobs.

We first prove that all late jobs are scheduled by EDF. Let J be any late job. We claim that J has expiration time x′ > ℓ on the speed-s machine. Otherwise, its slack λ′ satisfies

λ′ = λ + (p − p′) ≥ κp + (p − p′) = (κ + 1)(1 + 1/α)p′ − p′,

where λ and p are the slack and the processing time of J on a speed-1 machine and p′ = p/s is its processing time on the speed-s machine. Also, p′ > ℓ/α, since J has deadline > sℓ but x′ ≤ ℓ. Thus we get

λ′ > ((κ + 1)(1 + 1/α) − 1)(ℓ/α),

and we set α such that

((κ + 1)(1 + 1/α) − 1)(1/α) = 1.

Then the above inequality gives λ′ > ℓ, contradicting x′ ≤ ℓ; that is, J cannot have expiration time x′ ≤ ℓ. Consequently, every late job has expiration time x′ > ℓ and is therefore scheduled by EDF, since EDF can complete any feasible job set on a single machine [2]. Solving the equation above sets α = κ + 1.

Let E0 be the set of all early jobs in J0 and L0 the set of all late jobs. Now, we will prove that


EDF_s(E0) = EDF_s(J0) − L0 (where L0 also denotes the total processing time of the late jobs). For a contradiction, assume there is an early job J that belongs to EDF_s(E0) but does not belong to EDF_s(J0). Then there is a late job J′ in EDF_s(J0) such that, had J been admitted by EDF, J′ would not meet its deadline. To cause J′ to miss its deadline, which is after sℓ, the job J must have processing time p′ > ℓ/α on the speed-s machine, because otherwise J and all jobs already admitted when it arrives would be completed before sℓ. Also, J has expiration time x′ < ℓ. Therefore the processing time p of J on a speed-1 machine satisfies

p > (1 + 1/α)(ℓ/α)   and   p < ℓ / (κ + 1 − 1/(1 + 1/α)).

For α = κ + 1 the two right-hand sides are equal, a contradiction. Consequently,

EDF_s(E0) = EDF_s(J0) − L0 < OPT(J0) − L0 ≤ OPT(E0).

Since OPT(J0) > EDF_s(J0) = sℓ, OPT must execute jobs of J0 for more than sℓ time, so there is a late job in J0. Thus E0 is a smaller instance satisfying EDF_s(E0) < OPT(E0), which contradicts the minimality of J0.

Theorem 1. If s = 1 + 1/(κ + 1), then EDF_s(J) ≥ OPT(J).

Next we prove that if the speed s is less than s0 = 1 + 1/(κ + 1), then no online algorithm A can achieve optimality. Let s = s0 − δ for some δ > 0. At time 0, very small jobs with deadline x = κ + 1 arrive whose processing times sum to x. Then A admits all the small jobs, since it must achieve optimality. The schedule of this block of small jobs spans the interval [0, x/s]. Slightly after time 0, at time ε > 0, a job J arrives with processing time 1 and deadline x + ε. To admit J as well, the schedule of all the small jobs and J would have to span an interval of length

x/s + 1/s = (κ + 2)/(s0 − δ) = κ + 1 + δ′,

for some δ′ > 0.



Fig. 1. The division of time into periods.

Here, without loss of generality, we may assume δ′ > ε (otherwise take a smaller ε). Thus A cannot admit J. But the optimal algorithm can schedule small jobs filling [0, κ + ε) and the job J at time κ + ε, so that J completes at κ + 1 + ε = x + ε. Thus the gain of A is x, while that of the optimal algorithm is x + ε.

Theorem 2. If s < 1 + 1/(κ + 1), then no online algorithm can achieve optimality.

For multiple machines, EDF can no longer perform the feasibility test successfully. In [7] it is proved that, for speed s = 3, if a job J is given and X is the set of all jobs admitted before J, then under the EDF schedule all jobs in X ∪ {J} can be completed if and only if all early jobs in X ∪ {J} can be completed. It is easy to prove that this property also holds for speed s = 1 + 2/(κ + 1), and it is then straightforward to prove that EDF is as good as OPT by following the proof of [7].
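The admission test that EDF performs when a job arrives can be sketched as follows. This is a minimal illustration in Python under our own conventions (the function name and the `(release, work, deadline)` job encoding are ours, not the paper's): it simulates preemptive EDF at speed s and reports whether every job meets its deadline, with work measured on a speed-1 machine so that a job takes work/s time at speed s.

```python
import heapq

def edf_feasible(jobs, s):
    """Simulate preemptive EDF on a single speed-s machine.
    jobs: list of (release, work, deadline) triples.
    Returns True iff every job finishes by its deadline."""
    jobs = sorted(jobs)                       # process arrivals in release order
    heap, t, i = [], 0.0, 0                   # heap holds (deadline, remaining time)
    while i < len(jobs) or heap:
        if not heap:                          # machine idle: jump to next release
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:
            r, w, d = jobs[i]
            heapq.heappush(heap, (d, w / s))  # remaining running time at speed s
            i += 1
        d, rem = heapq.heappop(heap)          # earliest-deadline job runs first
        # run it until it completes or the next job arrives (possible preemption)
        horizon = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(rem, horizon - t)
        t += run
        if rem - run > 1e-12:
            heapq.heappush(heap, (d, rem - run))
        elif t > d + 1e-12:
            return False                      # the job completed after its deadline
    return True
```

An arriving job would then be admitted exactly when `edf_feasible(admitted + [job], s)` holds, after which the admitted jobs are run by EDF as described above.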

3. Nonpreemptive scheduling

In this section we consider nonpreemptive scheduling, where a job, once scheduled, can be neither preempted nor rejected. The analysis applies to a single machine and to multiple (identical) machines with no difference in performance; that is, all of the following results hold in both settings. We therefore assume a single machine. Let J be the set of all jobs. As in [6], we divide time into periods according to the behavior of EXF and classify the jobs in J accordingly.

First, observe the behavior of EXF (on a speed-s machine). When a new job arrives, it enters the queue of EXF and remains alive until its expiration time. Whenever the machine becomes idle, EXF selects the job with the earliest expiration time in the queue and schedules it. If a job has not been selected by its expiration time, EXF rejects it at that time and removes it from the queue.

Suppose some job is rejected by EXF. We define time intervals T_i inductively. Suppose T_j = [t_j^b, t_{j−1}^b), j = 0, …, i − 1, have already been defined (where t_{−1}^b = ∞). Let I_i denote the job last rejected before t_{i−1}^b and t_i the time at which I_i is rejected, called the critical time. Then no job with expiration time > t_i is scheduled in [r_i, t_i], where r_i is the release time of I_i, because I_i has priority over such a job. If there is a job with expiration time > t_i that is last scheduled before r_i, say at time t̂_i, then t_i^b is defined as the maximum of the time at which a new job first arrives after t̂_i and the time t̃_i at which an idle interval last appears before r_i; otherwise, t_i^b is defined as t̃_i. We call t_i^b the barrier time. See Fig. 1. We define T_i as the half-open interval [t_i^b, t_{i−1}^b), called a principal period. We continue to divide time into principal periods T_i, i = 0, …, k + 1, until either no job is rejected before t_k^b or, although there is a job I_{k+1} last rejected before t_k^b, there is neither an idle interval nor a job with expiration time greater than t_{k+1}, for some k; in these cases, T_{k+1} is [0, t_k^b).

Lemma 1. In each principal period T_i, there is no idle time in [t_i^b, t_i] in EXF's schedule, and each job scheduled by EXF in [t_i^b, t_i] has expiration time ≤ t_i, where t_i^b is the barrier time and t_i the critical time.

Let J_i be the set of all jobs that arrive in T_i. In each J_i, the jobs having expiration times > t_i on the speed-s machine are called patient jobs.

Lemma 2. All patient jobs of each J_i are scheduled by EXF.

Proof. Let J be a patient job in J_i, with expiration time x′ > t_i. By Lemma 1, it cannot be scheduled in [t_i^b, t_i], so it must be scheduled after t_i. If t_i < x′ < t_{i−1}^b, then J is scheduled in (t_i, t_{i−1}^b), because EXF never rejects any job in the queue within that interval. Otherwise, x′ ∈ T_j for some 0 ≤ j ≤ i − 1. If the barrier time t_j^b is determined by an idle interval, then J is scheduled before t_j^b, because it could be scheduled in that idle interval. So suppose t_j^b is determined by a job J̃ with expiration time > t_j and that J is not scheduled before t_j^b. If t_j^b ≤ x′ ≤ t_j, we have a contradiction, because J has priority over J̃. So t_j < x′ < t_{j−1}^b. By Lemma 1, J is not scheduled in [t_j^b, t_j] and is still in the queue at t_j. Since EXF never rejects any job in the queue within (t_j, t_{j−1}^b), the job J is scheduled. ✷

From Lemmas 1 and 2, it is straightforward to obtain the following lower bound on the performance of EXF.

Lemma 3.

EXF_s(J) ≥ max{ s Σ_i (t_i − t_i^b), Σ_i Σ_{j∈P_i} p_j },

where P_i is the set of all patient jobs of J_i.

For each job class J_i, we now consider the performance of OPT. Recall that OPT schedules jobs on the speed-1 machine. Let J be a job in J_i with expiration time x′ on the speed-s machine. Then J has expiration time x = x′ − (1 − 1/s)p on a speed-1 machine, since x + p = d = x′ + p/s for the processing time p and deadline d of J. If J is not a patient job of J_i, then x′ ≤ t_i. So, since x′ ≤ t_i and the release time r of J satisfies r ≥ t_i^b,

κp ≤ λ = x − r = x′ − r − (1 − 1/s)p ≤ (t_i − t_i^b) − (1 − 1/s)p,

and the processing time p of J is bounded above by (t_i − t_i^b)/(κ + (1 − 1/s)). Thus we obtain the following upper bound on the performance of OPT.

Lemma 4. For each job class J_i,

OPT(J_i) ≤ (1 + 1/(κ + (1 − 1/s)))(t_i − t_i^b) + Σ_{j∈P_i} p_j,

where P_i is the set of all patient jobs of J_i.

Proof. Let J be a job of J_i scheduled by OPT that is not a patient job. Then it is scheduled either to complete before t_i or to cross t_i. Thus, by the bound above, the gain of OPT from the non-patient jobs of J_i is at most (1 + 1/(κ + (1 − 1/s)))(t_i − t_i^b). ✷

3.1. Arbitrary processing time jobs

We will show that, for any fixed s and κ, no online algorithm A can have a competitive ratio better than 1 + 1/(s − 1/(κ + 1)). At time 0, a job J1 with processing time s and a very large deadline arrives, and A must schedule it at some time t in order to be competitive. At time t + ε, for sufficiently small ε > 0, a batch of jobs of very small size arrives together with a job J2 whose expiration time on the speed-s machine is t + 1 − ε. The small jobs have deadlines less than t + 1, the sum of their processing times equals the slack λ2 of J2 on a speed-1 machine, and the processing time p2 of J2 satisfies λ2 = κp2. Then we see that

1 − 2ε = λ2′ = λ2 + (1 − 1/s)p2 = (κ + 1 − 1/s)p2.

In particular, we can take the processing time p2 and the deadline of J2 such that p2 = 1/(κ + 1 − 1/s) − δ, for some sufficiently small δ(ε) > 0. Then A cannot schedule the small jobs and J2 (it is busy with J1 until t + 1), but the optimal algorithm can schedule all the small jobs in [t + ε, t + λ2 + ε), J2 at t + λ2 + ε, and J1 afterward. So the gain of A is s, while that of the optimal algorithm is s + λ2 + p2 = s + (κ + 1)p2. Thus

OPT/A ≥ 1 + (κ + 1)/(s(κ + 1) − 1) − δ′ = 1 + 1/(s − 1/(κ + 1)) − δ′,

for some sufficiently small δ′(ε) > 0. Letting ε → 0 gives the result.
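As a sanity check on this construction, the short sketch below (our own helper functions, assuming the instance described above) evaluates the limiting ratio of the adversary instance as ε → 0 and compares it with the closed form 1 + 1/(s − 1/(κ + 1)).

```python
def adversary_ratio(s, kappa):
    """Limiting ratio OPT/A for the arbitrary-length construction:
    A gains only J1 (work s); OPT additionally gains the small jobs
    (total work lambda2) and J2 (work p2)."""
    p2 = 1.0 / (kappa + 1 - 1.0 / s)   # processing time of J2 as eps -> 0
    lam2 = kappa * p2                   # slack of J2 on a speed-1 machine
    return (s + lam2 + p2) / s

def closed_form(s, kappa):
    return 1.0 + 1.0 / (s - 1.0 / (kappa + 1))
```

The two expressions agree, and the ratio is at least 2 whenever s ≤ 1 + 1/(κ + 1); for example, with s = 1.5 and κ = 1 both evaluate to exactly 2.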


Theorem 3. For fixed s > 1 and κ ≥ 0, no online algorithm can have a competitive ratio better than 1 + 1/(s − 1/(κ + 1)).

In particular, if s < 1 + 1/(κ + 1), then no online algorithm can have a competitive ratio less than two. We now look for a small machine speed s, close to the lower bound of Theorem 3, at which EXF achieves a competitive ratio of two.

Theorem 4. For s = s(κ), 2 EXF_s(J) ≥ OPT(J), where

s(κ) = 1/2 + 1/(κ + 1) + √(1/4 + 1/(κ + 1)²).

Proof. From Lemmas 3 and 4, we get

OPT(J) ≤ (1 + 1/(κ + (1 − 1/s)))(1/s) EXF_s(J) + EXF_s(J).

Choose s such that

(1 + 1/(κ + (1 − 1/s)))(1/s) = 1.

Then OPT(J) ≤ 2 EXF_s(J); solving this equation for s gives the stated s(κ). ✷

3.2. Equal processing time jobs

Here we consider the case in which the processing times of all jobs are identical; specifically, we assume that all processing times are one. The performance of an online algorithm A then equals the number of jobs accepted by A.

First, we prove that when κ = 0, if the speed s is less than 2, no online algorithm A can achieve optimality. Let s < 2, i.e., s = 2/(1 + 2δ) for some δ > 0. At time 0, a job with a large deadline arrives, and A must schedule it at some time t. Note that the processing of this job finishes at t + 1/2 + δ. At time t + δ/2, another job with deadline t + 1 + δ/2 arrives. Its expiration time in A's schedule is t + 1/2 − δ/2, which falls while A is still busy, so it is rejected by A. But the optimal algorithm can schedule both jobs.

We now show that EXF achieves optimality for the speed s = 2. Here ⌈A⌉_s (for A ≥ 0) denotes the integer k such that A = k/s + ε/s with 0 ≤ ε < 1, P_i is the set of all patient jobs of J_i, and |P_i| is the number of jobs in P_i.

Lemma 5. For each job class J_i,

EXF_s(J_i) ≥ ⌈t_i − t_i^b⌉_s + |P_i|.

Proof. Fix a principal period T_i. By Lemma 1, there is no idle time in [t_i^b, t_i] in the schedule of EXF. Let f_i = t_i − ⌈t_i − t_i^b⌉_s / s. Then, although some job in J_{i+1} may be scheduled to cross the time f_i, EXF can schedule at least ⌈t_i − t_i^b⌉_s jobs of J_i in [t_i^b, t_i]. With Lemma 2, we obtain the inequality. ✷

Lemma 6. For each job class J_i,

OPT(J_i) ≤ (t_i − t_i^b) − (1 − 1/s) + 1 + |P_i|.

Proof. If a job J in J_i is not patient, it has expiration time ≤ t_i − (1 − 1/s) on the speed-1 machine. OPT can therefore allocate at most (t_i − t_i^b) − (1 − 1/s) + 1 non-patient jobs in [t_i^b, t_i − (1 − 1/s)]. ✷

For each J_i, we have thus shown a lower bound on EXF_s(J_i) and an upper bound on OPT(J_i). For simplicity, define L_i = ⌈t_i − t_i^b⌉_s and U_i = (t_i − t_i^b) − (1 − 1/s) + 1. Before giving the analysis, we distinguish two cases:

Case (1): ⌊t_i − t_i^b⌋ + 1/s ≤ t_i − t_i^b < ⌊t_i − t_i^b⌋ + 1,

Case (2): ⌊t_i − t_i^b⌋ ≤ t_i − t_i^b < ⌊t_i − t_i^b⌋ + 1/s.

Let A_i = ⌊t_i − t_i^b⌋. Then we can easily see that

L_i ≥ sA_i + 1 in Case (1)   and   L_i = sA_i in Case (2).

Also it follows that

A_i + 2/s ≤ U_i ≤ A_i + 1 + 1/s in Case (1)   and   A_i + 1/s ≤ U_i < A_i + 2/s in Case (2).

Theorem 5. For s = 2, EXF_s(J) ≥ OPT(J).

Proof. In Case (1), it is satisfied that

L_i ≥ 2A_i + 1 ≥ A_i + 1 ≥ U_i.

In Case (2), L_i = 2A_i ≥ A_i ≥ U_i. ✷

When κ > 1, we can show an improvement of the machine speed.


Theorem 6. For s = s(κ) and κ > 1, EXF_s(J) ≥ OPT(J), where

s(κ) = (1/2)(1 + 3/(κ + 1) + √((κ² + 8)/(κ + 1)²)).

Proof. In Case (1), it is satisfied that

L_i ≥ sA_i + 1 ≥ A_i + 1 + 1/s ≥ U_i,

if s = 1 + ε, for sufficiently small ε > 0. In Case (2), we choose the speed s such that sA_i ≥ A_i + 1, that is, s ≥ 1 + 1/A_i. From the fact that A_i > (t_i − t_i^b) − 1/s ≥ κ + (1 − 2/s), it suffices to choose s satisfying s = 1 + 1/(κ + (1 − 2/s)); solving this equation for s gives the stated s(κ). ✷
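The speeds appearing in Theorems 4 and 6 can be checked numerically. The sketch below (our own helper functions, not from the paper) verifies that each closed form solves the corresponding equation chosen in its proof, that Theorem 4's s(κ) lies strictly between 1 + 1/(κ + 1) and 1 + 2/(κ + 1) as stated in Section 1.2, and that Theorem 6's s(κ) is below two for κ > 1.

```python
import math

def s_theorem4(kappa):
    # closed form of Theorem 4; equivalently the positive root of
    # (kappa+1) s^2 - (kappa+3) s + 1 = 0
    return 0.5 + 1 / (kappa + 1) + math.sqrt(0.25 + 1 / (kappa + 1) ** 2)

def s_theorem6(kappa):
    # closed form of Theorem 6 (intended for kappa > 1); equivalently the
    # positive root of (kappa+1) s^2 - (kappa+4) s + 2 = 0
    return 0.5 * (1 + 3 / (kappa + 1) + math.sqrt((kappa ** 2 + 8) / (kappa + 1) ** 2))

for k in [0.0, 0.5, 1.0, 2.0, 5.0]:
    s = s_theorem4(k)
    # s solves (1 + 1/(kappa + 1 - 1/s)) * (1/s) = 1, the choice in Theorem 4's proof
    assert abs((1 + 1 / (k + 1 - 1 / s)) / s - 1) < 1e-9
    # and lies strictly between 1 + 1/(kappa+1) and 1 + 2/(kappa+1)
    assert 1 + 1 / (k + 1) < s < 1 + 2 / (k + 1)

for k in [1.5, 2.0, 5.0, 10.0]:
    s = s_theorem6(k)
    # s solves s = 1 + 1/(kappa + (1 - 2/s)), the choice in Theorem 6's proof
    assert abs(s - (1 + 1 / (k + 1 - 2 / s))) < 1e-9
    assert s < 2  # the speed is indeed below two for kappa > 1
```

For instance, κ = 0.5 gives s(κ) = 2 exactly in Theorem 4, and κ = 2 gives roughly 1.577 in Theorem 6.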

References

[1] P. Berman, B. DasGupta, Improvements in throughput maximization for real-time scheduling, in: Proc. 32nd Annual ACM Symposium on Theory of Computing, 2000, pp. 680–687.
[2] J. Garay, J. Naor, B. Yener, P. Zhao, On-line admission control and packet scheduling with interleaving, Manuscript, 2001.
[3] S. Goldman, J. Parwatikar, S. Suri, On-line scheduling with hard deadlines, in: Proc. Workshop on Algorithms and Data Structures (WADS), Lecture Notes in Computer Science, Vol. 1272, Springer, Berlin, 1997, pp. 258–271.
[4] M.H. Goldwasser, Patience is a virtue: The effect of slack on competitiveness for admission control, in: Proc. 10th Annual ACM–SIAM Symposium on Discrete Algorithms, 1999, pp. 396–405.
[5] B. Kalyanasundaram, K. Pruhs, Speed is as powerful as clairvoyance, J. ACM 47 (4) (2000) 617–643.
[6] J.-H. Kim, K.-Y. Chwa, On-line deadline scheduling on multiple resources, in: Proc. 7th Annual International Computing and Combinatorics Conference, Lecture Notes in Computer Science, Vol. 2108, Springer, Berlin, 2001, pp. 443–452.
[7] T.W. Lam, K.K. To, Performance guarantee for online deadline scheduling in the presence of overload, in: Proc. 12th Annual ACM–SIAM Symposium on Discrete Algorithms, 2001, pp. 755–764.
[8] R. Lipton, A. Tomkins, Online interval scheduling, in: Proc. 5th Annual ACM–SIAM Symposium on Discrete Algorithms, 1994, pp. 302–311.
[9] C. Phillips, C. Stein, E. Torng, J. Wein, Optimal time-critical scheduling via resource augmentation, in: Proc. 29th Annual ACM Symposium on Theory of Computing, 1997, pp. 140–149.
[10] D. Sleator, R. Tarjan, Amortized efficiency of list update and paging rules, Comm. ACM 28 (1985) 202–208.