A note on minimizing the sum of quadratic completion times on two identical parallel machines

Information Processing Letters 112 (2012) 738–742
F. Della Croce ∗ (DAI – Politecnico di Torino, Italy) and C. Koulamas (Florida International University, United States)

∗ Corresponding author. E-mail addresses: [email protected] (F. Della Croce), koulamas@fiu.edu (C. Koulamas).

Article history: Received 8 November 2011; accepted 19 June 2012; available online 26 June 2012. Communicated by Jinhui Xu.

Keywords: parallel machines; scheduling; quadratic completion times

Abstract. We consider the problem of minimizing the sum of quadratic completion times on two parallel machines and we discuss the approximation ratio of the generalized shortest processing time (GSPT) priority rule, according to which the jobs are sorted in non-decreasing processing time order and the next job on the list is assigned to the earliest available machine. We show that the approximation ratio of the GSPT rule is bounded above by $\frac{\sqrt{5}+2}{\sqrt{5}+1} \approx 1.309$ and below by $\frac{\sqrt{6}+2}{\sqrt{6}+1} \approx 1.290$. Extensions to the parallel $m$-machine problem are also discussed.

1. Introduction

One of the earliest observations in single-machine scheduling is that a scheduling problem is easy when the optimal solution can be obtained by implementing a simple index priority rule. In that case, a single priority index is computed for each job using its own characteristics and an optimal sequence is obtained by ranking the jobs according to the values of their indices. In the case of non-preemptive deterministic single-machine scheduling problems with simultaneous job arrivals, the earliest application of an index priority rule dates back to Smith [8], who showed that the shortest processing time (SPT) priority rule can be used to minimize the summation of the job completion times $TC = \sum_{j=1}^{n} C_j$, where $C_j$ denotes the completion time of job $j$, $j = 1, \ldots, n$. It should be pointed out that Smith's [8] result was actually derived for the more general total weighted completion time objective function $\sum_{j=1}^{n} w_j C_j$, where $w_j$ denotes the weight of job $j$. Smith's [8] results motivated the consideration of other scheduling functions that can be optimized by the use of a simple index priority rule. The issue of investigating the existence of additional scheduling objective functions that can be optimized by an index priority rule was settled by Rothkopf and Smith [7], who showed that the $\sum_{j=1}^{n} w_j C_j$ and the $\sum_{j=1}^{n} w_j (1 - e^{-r C_j})$ functions can be optimized by an index priority rule. Similarly, Townsend [9] showed that the quadratic function $\sum_{j=1}^{n} C_j^2$ is also minimized by the SPT index priority rule, but this result does not extend to the weighted case.

Additional research focused on extending the above results to an identical parallel machine setting. It was observed that the problem of minimizing the summation of the job completion times on $m$ identical parallel machines (the $Pm\|\sum_{j=1}^{n} C_j$ problem) can be solved by the generalized SPT rule (GSPT) of Conway et al. [3], according to which the jobs are listed in SPT order and the next job on the list is assigned to the next available machine (with ties broken in favor of the lowest numbered machine). The proof of this result was based on the application of the weight matching approach of Hardy et al. [5] to a parallel machine environment. It was also observed that this approach does not extend to the weighted case, since the $P2\|\sum_{j=1}^{n} w_j C_j$ problem was shown to be NP-hard by Bruno et al. [1].

Kawaguchi and Kyan [6] implemented the WSPT heuristic for the $Pm\|\sum_{j=1}^{n} w_j C_j$ problem (in which the jobs are sequenced in non-decreasing order of their $p_j/w_j$ ratios, with the next job on the list being assigned to the earliest available machine) and obtained the $\frac{\sqrt{2}+1}{2}$ ratio bound for it. With respect to the other two objective functions mentioned above, it can be shown that the GSPT rule is optimal for the unweighted $Pm\|\sum_{j=1}^{n} (1 - e^{-r C_j})$ problem. In fact a stronger result holds: GSPT minimizes $\sum_{j=1}^{n} f(C_j)$ for any concave increasing function $f$. Furthermore, a similar result is true also for stochastic processing times: the shortest expected processing time first policy is optimal if the processing times are stochastically comparable. This has been shown in [10]. On the other hand, the complexity status of the $Pm\|\sum_{j=1}^{n} C_j^2$ problem remained open until recently, when the problem was proved to be strongly NP-hard by Cheng and Liu [2].

Here, we consider the special case problem of minimizing the sum of quadratic completion times on two parallel machines. In [2], it is also shown by probabilistic analysis that the GSPT rule asymptotically solves the problem. Here, we show that, on two machines, the approximation ratio of the GSPT rule is bounded above by $\frac{\sqrt{5}+2}{\sqrt{5}+1} \approx 1.309$ and below by $\frac{\sqrt{6}+2}{\sqrt{6}+1} \approx 1.290$. Also, we show that for the more general $m$-machine case such ratio is not less than $\frac{\sqrt{m+4}+2}{\sqrt{m+4}+1}$. A preliminary version of this work was presented in [4].

2. Main result

The $P2\|\sum_{j=1}^{n} C_j^2$ problem can be stated as follows. A set of $n$ jobs must be processed on a set of two identical parallel machines. Each job $i$ has a processing time $p_i$ which is identical on both machines. Each machine can process at most one job at a time. Preemption is not allowed. The objective is to minimize the sum of quadratic completion times. The problem is usually referred to as $P2\|\sum C_j^2$. W.l.o.g. we can assume that the jobs are indexed in shortest processing time order, that is, $p_1 \leq p_2 \leq \cdots \leq p_n$, with ties broken arbitrarily. Given any schedule $S$, we denote by $C_j(S)$ the completion time of job $j$ and by $C_{[j]}(S)$ the $j$-th completion time in the schedule $S$. Let $S_{OPT}$ and $S_{GSPT}$ be the optimal schedule and the schedule obtained by applying the GSPT rule, respectively.
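For concreteness, the following short Python sketch illustrates the GSPT rule and the objective used throughout this note: jobs are taken in SPT order, each is assigned to the machine that becomes available first, and the sum of squared completion times is accumulated. This is only an illustration of the rule recalled in the introduction; the function and variable names are ours and not part of the original paper.

```python
def gspt_sum_squared_completions(p, m=2):
    """Schedule the processing times p on m identical machines with the
    GSPT rule and return the sum of squared completion times."""
    p = sorted(p)                      # GSPT lists the jobs in SPT order
    loads = [0.0] * m                  # current completion time of each machine
    total = 0.0
    for pj in p:
        k = loads.index(min(loads))    # earliest available (lowest numbered) machine
        loads[k] += pj                 # the job completes at the new machine load
        total += loads[k] ** 2
    return total

# Small usage example: completions are 1, 2, 4, 6, so the value is 1+4+16+36 = 57
print(gspt_sum_squared_completions([1, 2, 3, 4]))
```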

The following lemma holds.

Lemma 1. For any sequence $S$, we have $C_{[j]}(S) \geq \max\{p_j, \frac{1}{2}\sum_{i=1}^{j} p_i\}$.

Proof. $C_{[j]}(S) \geq p_j$ holds since the first $j$ completion times of $S$ involve $j$ distinct jobs, at least one of which has index at least $j$ in the SPT order and hence processing time at least $p_j$. Besides, $C_{[j]}(S) \geq \frac{1}{2}\sum_{i=1}^{j} p_i$ holds as $\sum_{i=1}^{j} p_i$ is a lower bound on the sum of the processing times of the first $j$ jobs completed in $S$, and the best we can do is to equally partition these processing times on the two machines. □

The following lemma also holds.

Lemma 2. For any sequence $S$, we have $C_{[j]}(S) + C_{[j-1]}(S) \geq \sum_{i=1}^{j} p_i$.

Proof. If $[j]$ and $[j-1]$ are sequenced on different machines,
$$C_{[j]}(S) + C_{[j-1]}(S) = \sum_{i=1}^{j} p_{[i]}(S) \geq \sum_{i=1}^{j} p_i.$$
Alternatively, $[j]$ and $[j-1]$ are sequenced on the same machine. Let $[h]$ be the largest completion time job on the other machine, so that $C_{[h]}(S) \leq C_{[j-1]}(S)$. Then,
$$C_{[j]}(S) + C_{[h]}(S) = \sum_{i=1}^{j} p_{[i]}(S) \geq \sum_{i=1}^{j} p_i$$
and, correspondingly,
$$C_{[j]}(S) + C_{[j-1]}(S) \geq C_{[j]}(S) + C_{[h]}(S) \geq \sum_{i=1}^{j} p_i. \qquad \square$$

We focus now on the completion times of the $(j-1)$-th job and the $j$-th job in the GSPT schedule. The following lemma holds.

Lemma 3. In the GSPT schedule, the following conditions hold:

(a) $C_{[j]}(S_{GSPT}) + C_{[j-1]}(S_{GSPT}) = \sum_{i=1}^{j} p_i$,
(b) $C_{[j]}(S_{GSPT}) \leq p_j + C_{[j-1]}(S_{GSPT})$,
(c) $C_{[j]}(S_{GSPT}) \leq p_j + \frac{1}{2}\sum_{i=1}^{j-1} p_i$.

Proof. Given the GSPT rule, the first condition follows from the first case in the proof of Lemma 2: under GSPT, jobs $[j]$ and $[j-1]$ are assigned to different machines and the first $j$ completed jobs are exactly jobs $1, \ldots, j$, so the inequality holds with equality. For the second condition, given that $C_{[j]}(S_{GSPT}) + C_{[j-1]}(S_{GSPT}) = \sum_{i=1}^{j} p_i$ and therefore $C_{[j-1]}(S_{GSPT}) + C_{[j-2]}(S_{GSPT}) = \sum_{i=1}^{j-1} p_i$, we derive $C_{[j]}(S_{GSPT}) = p_j + C_{[j-2]}(S_{GSPT})$ with $C_{[j-2]}(S_{GSPT}) \leq C_{[j-1]}(S_{GSPT})$. Hence, $C_{[j]}(S_{GSPT}) \leq p_j + C_{[j-1]}(S_{GSPT})$. For the third condition, as
$$C_{[j-1]}(S_{GSPT}) + C_{[j-2]}(S_{GSPT}) = \sum_{i=1}^{j-1} p_i \quad \text{and} \quad C_{[j-2]}(S_{GSPT}) \leq C_{[j-1]}(S_{GSPT}),$$
we have $C_{[j-2]}(S_{GSPT}) \leq \frac{1}{2}\sum_{i=1}^{j-1} p_i$. Hence,
$$C_{[j]}(S_{GSPT}) = p_j + C_{[j-2]}(S_{GSPT}) \leq p_j + \frac{1}{2}\sum_{i=1}^{j-1} p_i. \qquad \square$$

The following properties hold.

Property 1. If $\sum_{i=1}^{j-1} p_i \geq p_j$, then $C_{[j]}^2(S_{OPT}) + C_{[j-1]}^2(S_{OPT}) \geq 2\big(\frac{1}{2}\sum_{i=1}^{j} p_i\big)^2$; else, if $\sum_{i=1}^{j-1} p_i < p_j$, then $C_{[j]}^2(S_{OPT}) + C_{[j-1]}^2(S_{OPT}) \geq p_j^2 + \big(\sum_{i=1}^{j-1} p_i\big)^2$.

Proof. Assume the first condition ($\sum_{i=1}^{j-1} p_i \geq p_j$) holds. From Lemma 1, we have $C_{[j]}(S_{OPT}) \geq \frac{1}{2}\sum_{i=1}^{j} p_i$. Let $\alpha \geq 0$ be such that $\alpha = C_{[j]}(S_{OPT}) - \frac{1}{2}\sum_{i=1}^{j} p_i$. Therefore, $C_{[j]}(S_{OPT}) = \frac{1}{2}\sum_{i=1}^{j} p_i + \alpha$. From Lemma 2, we get $C_{[j-1]}(S_{OPT}) \geq \frac{1}{2}\sum_{i=1}^{j} p_i - \alpha$. Summing up,
$$C_{[j]}^2(S_{OPT}) + C_{[j-1]}^2(S_{OPT}) \geq \Big(\frac{\sum_{i=1}^{j} p_i}{2} + \alpha\Big)^2 + \Big(\frac{\sum_{i=1}^{j} p_i}{2} - \alpha\Big)^2 = 2\Big(\frac{\sum_{i=1}^{j} p_i}{2}\Big)^2 + 2\alpha^2 \geq 2\Big(\frac{\sum_{i=1}^{j} p_i}{2}\Big)^2,$$
as $\alpha \geq 0$.

Now, assume the second condition ($\sum_{i=1}^{j-1} p_i < p_j$) holds. From Lemma 1, $C_{[j]}(S_{OPT}) \geq p_j$ and from Lemma 2, $C_{[j]}(S_{OPT}) + C_{[j-1]}(S_{OPT}) \geq p_j + \sum_{i=1}^{j-1} p_i$. Let $\beta \geq 0$ be such that $\beta = C_{[j]}(S_{OPT}) - p_j$. Therefore, $C_{[j]}(S_{OPT}) = p_j + \beta$ and $C_{[j-1]}(S_{OPT}) \geq \sum_{i=1}^{j-1} p_i - \beta$. Summing up,
$$C_{[j]}^2(S_{OPT}) + C_{[j-1]}^2(S_{OPT}) \geq (p_j + \beta)^2 + \Big(\sum_{i=1}^{j-1} p_i - \beta\Big)^2 = p_j^2 + \Big(\sum_{i=1}^{j-1} p_i\Big)^2 + 2\beta\Big(\beta + p_j - \sum_{i=1}^{j-1} p_i\Big) \geq p_j^2 + \Big(\sum_{i=1}^{j-1} p_i\Big)^2,$$
as $p_j > \sum_{i=1}^{j-1} p_i$ and $\beta \geq 0$. □

Property 2. If $\sum_{i=1}^{j-1} p_i \geq p_j$, then $C_{[j]}^2(S_{GSPT}) + C_{[j-1]}^2(S_{GSPT}) \leq \frac{5}{8}\big(\sum_{i=1}^{j} p_i\big)^2$.

Proof. From Lemma 3(a), $C_{[j]}(S_{GSPT}) + C_{[j-1]}(S_{GSPT}) = \sum_{i=1}^{j} p_i$ with $C_{[j]}(S_{GSPT}) \geq C_{[j-1]}(S_{GSPT})$. Hence, $C_{[j]}(S_{GSPT}) \geq \frac{1}{2}\sum_{i=1}^{j} p_i$ and $C_{[j-1]}(S_{GSPT}) \leq \frac{1}{2}\sum_{i=1}^{j} p_i$, that is, $C_{[j]}(S_{GSPT}) = \frac{1}{2}\sum_{i=1}^{j} p_i + \gamma$ and $C_{[j-1]}(S_{GSPT}) = \frac{1}{2}\sum_{i=1}^{j} p_i - \gamma$ with $\gamma \geq 0$. Hence, $C_{[j]}(S_{GSPT}) - C_{[j-1]}(S_{GSPT}) = 2\gamma$. Also, from Lemma 3(b), $2\gamma = C_{[j]}(S_{GSPT}) - C_{[j-1]}(S_{GSPT}) \leq p_j$, namely $\gamma \leq \frac{p_j}{2}$. Hence,
$$C_{[j]}^2(S_{GSPT}) + C_{[j-1]}^2(S_{GSPT}) = 2\Big(\frac{\sum_{i=1}^{j} p_i}{2}\Big)^2 + 2\gamma^2 \leq 2\Big(\frac{\sum_{i=1}^{j} p_i}{2}\Big)^2 + 2\Big(\frac{p_j}{2}\Big)^2.$$
As $\sum_{i=1}^{j-1} p_i \geq p_j$, we have $\sum_{i=1}^{j} p_i = \sum_{i=1}^{j-1} p_i + p_j \geq 2 p_j$, that is, $p_j \leq \frac{1}{2}\sum_{i=1}^{j} p_i$. Hence,
$$C_{[j]}^2(S_{GSPT}) + C_{[j-1]}^2(S_{GSPT}) \leq 2\Big(\frac{\sum_{i=1}^{j} p_i}{2}\Big)^2 + 2\Big(\frac{\sum_{i=1}^{j} p_i}{4}\Big)^2 = \frac{5}{8}\Big(\sum_{i=1}^{j} p_i\Big)^2. \qquad \square$$

Property 3. If $\sum_{i=1}^{j-1} p_i < p_j$, then $C_{[j]}^2(S_{GSPT}) + C_{[j-1]}^2(S_{GSPT}) \leq \big(p_j + \frac{1}{2}\sum_{i=1}^{j-1} p_i\big)^2 + \big(\frac{1}{2}\sum_{i=1}^{j-1} p_i\big)^2$.

Proof. From Lemma 1, we know that $C_{[j]}(S_{GSPT}) \geq p_j$, namely $C_{[j]}(S_{GSPT}) = p_j + \beta$ with $\beta \geq 0$. Therefore, from Lemma 3(a), $C_{[j-1]}(S_{GSPT}) = \sum_{i=1}^{j} p_i - C_{[j]}(S_{GSPT}) = \sum_{i=1}^{j-1} p_i - (C_{[j]}(S_{GSPT}) - p_j) = \sum_{i=1}^{j-1} p_i - \beta$, where $\beta := C_{[j]}(S_{GSPT}) - p_j \leq \frac{1}{2}\sum_{i=1}^{j-1} p_i$ by Lemma 3(c). Then
$$z = C_{[j]}^2(S_{GSPT}) + C_{[j-1]}^2(S_{GSPT}) = (p_j + \beta)^2 + \Big(\sum_{i=1}^{j-1} p_i - \beta\Big)^2.$$
Since $p_j \geq \sum_{i=1}^{j-1} p_i$, $z$ is a function increasing in $\beta$ and reaches its maximum at the largest feasible value of $\beta$, that is, $\beta = \frac{1}{2}\sum_{i=1}^{j-1} p_i$. Hence, we have
$$C_{[j]}^2(S_{GSPT}) + C_{[j-1]}^2(S_{GSPT}) \leq \Big(p_j + \frac{\sum_{i=1}^{j-1} p_i}{2}\Big)^2 + \Big(\frac{\sum_{i=1}^{j-1} p_i}{2}\Big)^2. \qquad \square$$
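As a quick sanity check of the two GSPT-side bounds, the following small Python script (our own illustration, not part of the paper) applies the GSPT rule to random SPT-sorted instances and verifies that every pair $C_{[j]}^2 + C_{[j-1]}^2$ respects the bound of Property 2 or Property 3, whichever case applies; the bound of Property 1 concerns the optimal schedule and is not checked here.

```python
import random

def gspt_completions(p):
    """Sorted completion times of SPT-sorted jobs p under GSPT on two machines."""
    loads = [0.0, 0.0]
    comps = []
    for pj in p:
        k = loads.index(min(loads))   # earliest available machine
        loads[k] += pj
        comps.append(loads[k])
    return sorted(comps)              # comps[j-1] = C_[j]

random.seed(1)
for _ in range(10_000):
    n = random.randint(2, 8)
    p = sorted(round(random.uniform(0.1, 10.0), 3) for _ in range(n))
    C = gspt_completions(p)
    for j in range(2, n + 1):
        pair = C[j - 1] ** 2 + C[j - 2] ** 2
        head = sum(p[:j - 1])                        # sum_{i=1}^{j-1} p_i
        if head >= p[j - 1]:                         # Property 2
            bound = 5 / 8 * (head + p[j - 1]) ** 2
        else:                                        # Property 3
            bound = (p[j - 1] + head / 2) ** 2 + (head / 2) ** 2
        assert pair <= bound + 1e-9, (p, j)
print("Properties 2 and 3 verified on all sampled instances")
```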

From the above properties, the approximation ratio of the GSPT rule for the $P2\|\sum C_j^2$ problem can be established as follows.

Proposition 1. The approximation ratio of the GSPT rule is bounded above by $\frac{\sqrt{5}+2}{\sqrt{5}+1} \approx 1.309$.

Proof. Let us assume w.l.o.g. that the number of jobs is even (in case of an odd number of jobs, it is sufficient to add a dummy job with processing time equal to 0). From Properties 1 and 2, we have that, if $\sum_{i=1}^{j-1} p_i \geq p_j$, then

$$C_{[j]}^2(S_{OPT}) + C_{[j-1]}^2(S_{OPT}) \geq 2\Big(\frac{\sum_{i=1}^{j} p_i}{2}\Big)^2 = \frac{1}{2}\Big(\sum_{i=1}^{j} p_i\Big)^2$$
and
$$C_{[j]}^2(S_{GSPT}) + C_{[j-1]}^2(S_{GSPT}) \leq \frac{5}{8}\Big(\sum_{i=1}^{j} p_i\Big)^2.$$
Hence,
$$\frac{C_{[j]}^2(S_{GSPT}) + C_{[j-1]}^2(S_{GSPT})}{C_{[j]}^2(S_{OPT}) + C_{[j-1]}^2(S_{OPT})} \leq \frac{\frac{5}{8}\big(\sum_{i=1}^{j} p_i\big)^2}{\frac{1}{2}\big(\sum_{i=1}^{j} p_i\big)^2} = \frac{5}{4}.$$

On the other hand, from Properties 1 and 3, we have that, if $\sum_{i=1}^{j-1} p_i < p_j$, then
$$C_{[j]}^2(S_{OPT}) + C_{[j-1]}^2(S_{OPT}) \geq p_j^2 + \Big(\sum_{i=1}^{j-1} p_i\Big)^2$$
and
$$C_{[j]}^2(S_{GSPT}) + C_{[j-1]}^2(S_{GSPT}) \leq \Big(p_j + \frac{\sum_{i=1}^{j-1} p_i}{2}\Big)^2 + \Big(\frac{\sum_{i=1}^{j-1} p_i}{2}\Big)^2.$$
Hence,
$$\frac{C_{[j]}^2(S_{GSPT}) + C_{[j-1]}^2(S_{GSPT})}{C_{[j]}^2(S_{OPT}) + C_{[j-1]}^2(S_{OPT})} \leq \frac{\big(p_j + \frac{1}{2}\sum_{i=1}^{j-1} p_i\big)^2 + \big(\frac{1}{2}\sum_{i=1}^{j-1} p_i\big)^2}{p_j^2 + \big(\sum_{i=1}^{j-1} p_i\big)^2}.$$
Define $y = \frac{1}{2}\sum_{i=1}^{j-1} p_i$ and thus
$$\tau = \frac{(p_j + y)^2 + y^2}{p_j^2 + 4y^2}.$$
To compute a maximum of $\tau$, we require that the derivative of $\tau$ with respect to $p_j$ is equal to zero, which reduces to $p_j^2 - 4y^2 - 2 p_j y = 0$. The only positive root of this equation is $p_j = y(1 + \sqrt{5})$. Therefore, function $\tau$ has a maximum at $p_j = (\sqrt{5} + 1)\frac{1}{2}\sum_{i=1}^{j-1} p_i$, inducing
$$\frac{C_{[j]}^2(S_{GSPT}) + C_{[j-1]}^2(S_{GSPT})}{C_{[j]}^2(S_{OPT}) + C_{[j-1]}^2(S_{OPT})} \leq \frac{\sqrt{5}+2}{\sqrt{5}+1} \approx 1.309.$$

Overall, we have
$$\frac{C_{[j]}^2(S_{GSPT}) + C_{[j-1]}^2(S_{GSPT})}{C_{[j]}^2(S_{OPT}) + C_{[j-1]}^2(S_{OPT})} \leq \max\Big\{\frac{\sqrt{5}+2}{\sqrt{5}+1}, \frac{5}{4}\Big\} = \frac{\sqrt{5}+2}{\sqrt{5}+1} \approx 1.309.$$
By summing up the above result on all pairs $(j, j-1)$ with $j = 2k$ for $k = 1, \ldots, n/2$, we obtain the claimed result. □
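For completeness, the value of $\tau$ at the stationary point can be checked directly (a short worked verification added here for the reader; it is not part of the original proof):
$$p_j = y(1+\sqrt{5}) \;\Longrightarrow\; \tau = \frac{\big(y(2+\sqrt{5})\big)^2 + y^2}{\big(y(1+\sqrt{5})\big)^2 + 4y^2} = \frac{(9+4\sqrt{5}) + 1}{(6+2\sqrt{5}) + 4} = \frac{10+4\sqrt{5}}{10+2\sqrt{5}} = \frac{\sqrt{5}+2}{\sqrt{5}+1} \approx 1.309.$$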

3. Lower bounds and extension to m machines

We will first show that the bound of Proposition 1 is almost tight. Let us consider the following 3-job example with $p_1 = p_2 = 1$ and $p_3 = 1 + \sqrt{6}$. An optimal schedule assigns jobs 1 and 2 to machine $M_1$ and job 3 to machine $M_2$, with cost function value $12 + 2\sqrt{6}$, whilst the GSPT rule assigns job 1 to machine $M_1$, job 2 to machine $M_2$ and job 3 arbitrarily to machine $M_1$ or $M_2$, with cost function value $12 + 4\sqrt{6}$. Therefore, the approximation ratio is $\frac{12 + 4\sqrt{6}}{12 + 2\sqrt{6}} = \frac{\sqrt{6}+2}{\sqrt{6}+1} \approx 1.290$. However, it will be hard to close the remaining gap between the upper and the lower bound.
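The two cost values above are easy to reproduce numerically; the following lines (an illustrative check of ours, not part of the paper) evaluate both schedules of the 3-job instance:

```python
import math

p3 = 1 + math.sqrt(6)
f_opt  = 1**2 + 2**2 + p3**2            # jobs 1, 2 on M1; job 3 alone on M2
f_gspt = 1**2 + 1**2 + (1 + p3)**2      # GSPT: job 3 follows one of the unit jobs
print(f_opt, f_gspt, f_gspt / f_opt)    # 16.899..., 21.798..., 1.2899...
```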

Our approach cannot be straightforwardly extended to $m$ machines. Indeed, the natural extension of Property 3 to $m$ machines when $p_j$ is greater than $\sum_{i=1}^{j-1} p_i$ does not work. Such an extension should have been formulated as follows: if $\sum_{i=1}^{j-1} p_i < p_j$, then
$$C_{[j]}^2(S_{GSPT}) + C_{[j-1]}^2(S_{GSPT}) + \cdots + C_{[j-m+1]}^2(S_{GSPT}) \leq \Big(p_j + \frac{\sum_{i=1}^{j-1} p_i}{m}\Big)^2 + (m-1)\Big(\frac{\sum_{i=1}^{j-1} p_i}{m}\Big)^2.$$
However, consider the following example with 12 jobs and 11 machines where $p_1 = p_2 = \cdots = p_{10} = 1$, $p_{11} = 12$ and $p_{12} = 24$. In this example, $C_{[1]}(S_{GSPT}) = C_{[2]}(S_{GSPT}) = \cdots = C_{[10]}(S_{GSPT}) = 1$, $C_{[11]}(S_{GSPT}) = 12$ and $C_{[12]}(S_{GSPT}) = 25$. Therefore, for $j = 12$ we have
$$C_{[j]}^2(S_{GSPT}) + C_{[j-1]}^2(S_{GSPT}) + \cdots + C_{[j-m+1]}^2(S_{GSPT}) = C_{[12]}^2(S_{GSPT}) + C_{[11]}^2(S_{GSPT}) + \cdots + C_{[2]}^2(S_{GSPT}) = 25^2 + 12^2 + 9 \cdot 1^2 = 778.$$
On the other hand,
$$\Big(p_j + \frac{\sum_{i=1}^{j-1} p_i}{m}\Big)^2 + (m-1)\Big(\frac{\sum_{i=1}^{j-1} p_i}{m}\Big)^2 = \Big(24 + \frac{22}{11}\Big)^2 + 10 \cdot \Big(\frac{22}{11}\Big)^2 = 26^2 + 10 \cdot 2^2 = 716.$$
As $716 < 778$, the natural extension of Property 3 to $m$ machines does not hold; that is, the worst case for the GSPT rule is not to have all jobs $j-m+1, \ldots, j-1$ completing at the same time $T = \frac{1}{m}\sum_{i=1}^{j-1} p_i$ and then job $j$ completing at time $T + p_j$.
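The arithmetic of this counterexample can also be checked mechanically; the snippet below (ours, mirroring the GSPT simulation sketched in Section 2) compares the last eleven squared completion times against the proposed bound:

```python
# 12 jobs on m = 11 machines: ten unit jobs, then 12 and 24
p, m = [1] * 10 + [12, 24], 11
loads, comps = [0] * m, []
for pj in sorted(p):
    k = loads.index(min(loads))
    loads[k] += pj
    comps.append(loads[k])
lhs = sum(c**2 for c in sorted(comps)[-m:])       # 25^2 + 12^2 + 9*1^2 = 778
head = sum(sorted(p)[:-1])                        # sum of the first 11 jobs = 22
rhs = (24 + head / m)**2 + (m - 1) * (head / m)**2  # 26^2 + 10*2^2 = 716
print(lhs, rhs, lhs <= rhs)                       # 778 716.0 False
```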

On the other hand, a lower bound on the approximation ratio of the GSPT rule for the $Pm\|\sum C_j^2$ problem can be obtained by generalizing to $m$ machines the 2-machine example considered previously. On $m$ machines, we consider $m+1$ jobs with $p_1 = \cdots = p_m = 1$ and $p_{m+1} = 1 + \sqrt{m+4}$. An optimal schedule assigns job $i$ to machine $M_i$ for $i = 1, \ldots, m-1$, job $m$ arbitrarily to machine $M_1$ and job $m+1$ to machine $M_m$. Then,
$$f(OPT) = p_{m+1}^2 + (m-1) \cdot 1^2 + 2^2 = (1 + \sqrt{m+4})^2 + m + 3 = 2m + 8 + 2\sqrt{m+4}.$$
On the other hand, the GSPT rule assigns job $i$ to machine $M_i$ for $i = 1, \ldots, m$ and job $m+1$ arbitrarily to machine $M_1$. Therefore,
$$f(GSPT) = m \cdot 1^2 + (1 + p_{m+1})^2 = m + (2 + \sqrt{m+4})^2 = 2m + 8 + 4\sqrt{m+4}.$$
Hence, the approximation ratio is
$$\rho = \frac{2m + 8 + 4\sqrt{m+4}}{2m + 8 + 2\sqrt{m+4}} = \frac{\sqrt{m+4}+2}{\sqrt{m+4}+1}.$$
In Table 1, we compute $\rho$ for different values of $m$. We notice that, while $\rho$ obviously decreases as $m$ increases, the gap $\rho - 1$ remains significant also for large values of $m$ (more than 3% for $m = 1000$).


Table 1
Lower bounds on the approximation ratio of the GSPT rule for different values of m.

m     2      3      4      5      6      7      8      9      10     100    1000
ρ     1.290  1.274  1.261  1.250  1.240  1.232  1.224  1.217  1.211  1.089  1.031
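The entries of Table 1 follow directly from the closed form of $\rho$; a short check (ours, for convenience) reproduces them:

```python
import math
for m in [2, 3, 4, 5, 6, 7, 8, 9, 10, 100, 1000]:
    r = (math.sqrt(m + 4) + 2) / (math.sqrt(m + 4) + 1)
    print(m, round(r, 3))   # 1.29, 1.274, 1.261, ..., 1.089, 1.031
```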

4. Conclusions

We considered the $P2\|\sum_{j=1}^{n} C_j^2$ problem and showed that the GSPT priority rule supplies the almost tight worst-case ratio bound of 1.309. We also showed that, in the case of the general $Pm\|\sum_{j=1}^{n} C_j^2$ problem, there is an optimality gap for the GSPT rule which is at least 3% even for quite large problems with as many as 1001 jobs sequenced on 1000 machines.

References

[1] J.L. Bruno, E.G. Coffman Jr., R. Sethi, Scheduling independent tasks to reduce mean finishing time, Communications of the ACM 17 (1974) 382–387.
[2] T.C. Cheng, Z. Liu, Parallel machine scheduling to minimize the sum of quadratic completion times, IIE Transactions 36 (2004) 11–17.
[3] R.W. Conway, W.L. Maxwell, L.W. Miller, Theory of Scheduling, Addison–Wesley, Reading, 1967.
[4] F. Della Croce, C. Koulamas, Minimizing the sum of quadratic completion times on two identical parallel machines, in: MISTA 2011 Conference Proceedings, Phoenix, USA, August 2011, pp. 240–244.
[5] G.H. Hardy, J.E. Littlewood, G. Polya, Inequalities, Cambridge University Press, London, 1967.
[6] T. Kawaguchi, S. Kyan, Worst case bound of an LRF schedule for the mean weighted flow-time problem, SIAM Journal on Computing 15 (1986) 1119–1129.
[7] M.H. Rothkopf, S.A. Smith, There are no undiscovered priority index sequencing rules for minimizing total delay costs, Operations Research 32 (1984) 451–456.
[8] W.E. Smith, Various optimizers for single-stage production, Naval Research Logistics Quarterly 3 (1956) 59–66.
[9] W. Townsend, The single machine problem with quadratic penalty function of completion times: A branch-and-bound solution, Management Science 24 (1978) 530–534.
[10] R.R. Weber, P. Varaiya, J. Walrand, Scheduling jobs with stochastically ordered processing times on parallel machines to minimize expected flowtime, Journal of Applied Probability 23 (1986) 841–847.