Minimizing total weighted completion time when scheduling orders in a flexible environment with uniform machines


Information Processing Letters 103 (2007) 119–129 www.elsevier.com/locate/ipl

Joseph Y.-T. Leung a,∗, Haibing Li a, Michael Pinedo b, Jiawei Zhang b
a Department of Computer Science, New Jersey Institute of Technology, Newark, NJ 07102, USA
b Stern School of Business, New York University, New York, NY 10012, USA

Received 31 August 2005; received in revised form 29 September 2006
Available online 12 March 2007
Communicated by A.A. Bertossi

Abstract

We consider the scheduling of orders in an environment with m uniform machines in parallel. Each order requests certain amounts of k different product types. Each product type can be produced by any one of the m machines. No setup is required if a machine switches over from one product type to another. Different product types intended for the same order can be produced at the same time (concurrently) on different machines. Each order is released at time zero and has a positive weight. Preemptions are allowed. The completion time of an order is the finish time of the product type that is completed last for that order. The objective is to minimize the total weighted completion time. We propose heuristics for the non-preemptive as well as the preemptive case and obtain worst-case bounds that are a function of the number of machines as well as the differences in the speeds of the machines. Even though the worst-case bounds we show for the two heuristics are not very tight, our experimental results show that they yield solutions that are very close to optimal.
© 2007 Elsevier B.V. All rights reserved.

Keywords: Analysis of algorithms; Order scheduling; Flexible machines; Uniform machines; Heuristics

E-mail address: [email protected] (J.Y.-T. Leung). doi:10.1016/j.ipl.2007.03.002

1. Introduction

Consider a facility with m uniform machines in parallel that operate at different speeds. The speed of machine i is denoted by vi; thus, machine i can carry out vi units of processing in one time unit. Without loss of generality, we assume that v1 ≥ v2 ≥ · · · ≥ vm. For convenience, we define ρ = v1/vm and σ = v1/v2. There are k different product types, where k can be a fixed or an arbitrary number (interchangeably, we also refer to a product type as a job), and each product type can be produced on any one of the m machines. When a machine switches over from one product type to another, no setup is required (i.e., no time is lost setting the machine up for a different product type).

Assume there are n orders from n different clients. Order j, j = 1, 2, . . . , n, demands a given quantity of each product type l = 1, 2, . . . , k, and the units of processing required are plj ≥ 0. The items of type l can be produced on any one of the m machines. We may assume without loss of generality that p1j ≥ p2j ≥ · · · ≥ pkj (this can be accomplished by renaming product types if necessary). If the items of type l are produced on machine i, then the processing time on machine i is plj/vi. The k different jobs of order j may


be processed concurrently. Preemptions are allowed. We assume that each order j, j = 1, 2, . . . , n, is released at time zero and has a weight wj ≥ 0. The completion time of order j, denoted by Cj, is the time at which all k jobs of order j have been completed. If Clj denotes the completion time of the items of type l for order j, it follows that Cj = max(C1j, C2j, . . . , Ckj). With this definition of the completion time of an order, we are interested in the objective of minimizing the total weighted completion time of the n orders, namely Σ_{j=1}^n wj Cj.

We denote our problem with the well-known α|β|γ notation. However, we introduce F after Q in the α field to denote that the machines are flexible enough to produce all types of products, in addition to having different speeds. In the β field, we let Π denote constraints on the number of product types: if Π is followed by a k, then the number of product types that can be produced by the machines is fixed; otherwise, this number is arbitrary. The γ field denotes the objective being optimized. Using this notation, our problem is denoted QF|Π|Σ wj Cj; see Leung et al. [5] for the classification scheme.

The problem described above has many applications in various contexts, such as manufacturing systems, equipment maintenance, and parallel computing systems. The first example arises in the form of a car repair shop. Suppose each car has several broken parts that need to be fixed. Each broken part can be fixed by any of the mechanics in the shop. However, due to their different levels of expertise and experience, the mechanics spend different amounts of time fixing a specific part of a car. Several mechanics can work on different parts of the same car at the same time. The car leaves the shop when every broken part is fixed. A second example arises in manufacturing systems that consist of two stages.
In such systems, different types of components (or subassemblies) are produced first in a pre-assembly stage, and then assembled into final products at an assembly stage. The pre-assembly stage consists of parallel machines (called feeding machines), each of which produces different types of components at different speeds. Each assembly operation cannot start its processing until all the necessary components have been fed in.

When there is only one order, i.e., n = 1, the problem QF|Π|Σ Cj becomes the classical scheduling problem Q||Cmax, which is strongly NP-hard (see Garey and Johnson [2]). It follows that QF|Π|Σ wj Cj is also strongly NP-hard. Thus, it would be of interest to develop effective heuristics for this problem. For the case in which all the machines are identical

(i.e., v1 = v2 = · · · = vm), Blocher and Chhajed [1] developed six heuristics and conducted an experimental study of their performance; however, they did not focus on worst-case performance bounds. One of the heuristics was also analyzed by Yang [10] and Yang and Posner [11], who obtained a worst-case bound of 7/6 for m = 2. For the same problem with the additional feature that each order has a weight, Leung et al. [6] analyzed four (2 − 1/m)-approximation algorithms and five m-approximation algorithms.

For the preemptive case, one can define two forms of preemption for this problem. According to the first form, multiple machines may work simultaneously on the same product for a given order; a problem that allows this form of preemption is polynomially solvable. We first sequence the orders in ascending order of their total weighted processing times, that is, in ascending order of Σ_{l=1}^k plj / wj. Then, for each product type l of order j, we assign plj vi / Σ_{η=1}^m v_η units of processing to machine i. Applying this procedure yields an optimal schedule. The key point for the optimality of this procedure is that this form of preemption allows the finish times of an order on all machines to be identical. This property gives an optimal structure. Indeed, if in an optimal schedule there exists an order j whose finish times on some machines are not aligned, we can show that Cj decreases after suitable realigning switches between the jobs of order j and those of the orders that follow it. The contradiction implies that in an optimal schedule the finish times of each order are aligned on all machines. By the optimality of the Weighted Shortest Processing Time (WSPT) rule for the classical problem 1||Σ wj Cj (see Smith [9]), our algorithm is therefore optimal as well. Note that the WSPT rule sequences the jobs in non-decreasing order of pj/wj for 1||Σ wj Cj.
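The optimal procedure for this first form of preemption can be sketched as follows (a minimal Python sketch; the data layout and names are my own, not the authors'):

```python
# Sketch (my own naming) of the optimal procedure for the first form of
# preemption: orders are sequenced by WSTP and every order is split over
# all machines in proportion to their speeds, so all machines finish
# each order at the same instant.

def wstp_shared_schedule(speeds, orders):
    """orders: list of (w_j, [p_1j, ..., p_kj]).

    Returns the list of (order index, completion time) in schedule order
    and the total weighted completion time. Order j completes at
    (p_1 + ... + p_j) / v, where p_h is the total processing of the h-th
    sequenced order and v is the sum of the machine speeds.
    """
    v = sum(speeds)
    # WSTP: ascending total processing requirement over weight.
    seq = sorted(range(len(orders)),
                 key=lambda j: sum(orders[j][1]) / orders[j][0])
    t, completions, objective = 0.0, [], 0.0
    for j in seq:
        w_j, p = orders[j]
        t += sum(p) / v          # every machine stays aligned at time t
        completions.append((j, t))
        objective += w_j * t
    return completions, objective

speeds = [3.0, 2.0, 1.0]                          # v1 >= v2 >= v3
orders = [(2.0, [4.0, 2.0]), (1.0, [6.0, 3.0])]   # (w_j, [p_lj, ...])
completions, obj = wstp_shared_schedule(speeds, orders)
```

Because the proportional split keeps all machines aligned, only the total processing of each order matters, which is why the single-machine WSPT argument applies.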
A second form of preemption does not allow multiple machines to work simultaneously on the same product for an order; however, it does allow the processing of a product for a given order to be interrupted and resumed at a later point in time, possibly on a different machine. Assuming this form of preemption, the problem is NP-hard even with m = 2, k = 2, and all wj = 1 (see Leung et al. [4]). In what follows, we consider this second form of preemption, and refer to the problem as QF|Π, pmtn|Σ wj Cj.

In this paper, we propose two approximation algorithms, one for the nonpreemptive version and one for the preemptive version of this problem; the algorithm for the nonpreemptive version is denoted by HNP and the algorithm for the preemptive version is denoted by HP. We show that HNP obeys a worst-case bound of 1 + ρ(m − 1)/(ρ + m − 1) and that HP obeys a worst-case bound of min{m, 1 + (m − 1)/σ}. We also perform an empirical analysis of the two heuristics.

We adopt the following notation. Let Cj(H) denote the completion time of order j under heuristic H, where H is either HNP or HP. Let Cj(OPT) denote the completion time of order j under the optimal schedule, and let [j] denote the j-th order completed in the optimal schedule. We also assume, without loss of generality, that the orders are labeled such that

p1/w1 ≤ p2/w2 ≤ · · · ≤ pn/wn,  (1)

where pj = Σ_{l=1}^k plj, j = 1, 2, . . . , n. Furthermore, we denote the sum of the speeds of the m uniform machines by v = Σ_{i=1}^m vi.

2. A heuristic for QF|Π|Σ wj Cj

It seems that no algorithm has yet been designed for QF|Π|Σ wj Cj. We propose for this problem a heuristic that consists of two phases. The first phase determines the sequence of the orders, while the second phase assigns the individual jobs within each order to the specific machines.

In the first phase, the following rule is used to sequence the orders: the Weighted Shortest Total Processing time first (WSTP) rule sequences the orders in ascending order of (Σ_{h=1}^k phj)/wj. Ties are broken arbitrarily. After the sequence of orders has been determined, the individual jobs of each order are assigned to the specific machines according to the following assignment rule: the Longest Processing Time first (LPT) rule selects in each iteration, from the remaining jobs, a job with the longest processing time and assigns it to the machine that completes the job the earliest.

The n orders have at most nk jobs. Thus, using a sorting procedure, the WSTP rule takes O(kn + n log n) time. On the other hand, the assignment of a job by the LPT rule takes O(m) time, so the LPT rule takes O(mnk) time in total. It follows that the entire algorithm takes O(mnk + n log n) time. We refer to the two-phase algorithm as HNP.

Now consider the worst-case performance ratio of the algorithm. The following result from Horvath et al. [3] is made use of:

Theorem 1. For a set of independent jobs with processing requirements q1 ≥ q2 ≥ · · · ≥ qn, the level algorithm due to Horvath et al. [3] constructs a minimal length preemptive schedule on m uniform machines with speeds v1 ≥ v2 ≥ · · · ≥ vm. The makespan is given by

C∗max = max{ Σ_{h=1}^n qh / Σ_{i=1}^m vi, max_{1≤i<m} Σ_{h=1}^i qh / Σ_{h=1}^i vh }.

Theorem 1 yields a lower bound on completion times in any schedule: the j-th completed order cannot finish before the minimal preemptive makespan of all jobs of the first j completed orders (Lemma 1).

Fig. 1. The profile after l∗ is assigned.

Theorem 2. For the problem QF|Π|Σ wj Cj, the algorithm HNP has a worst-case bound of 1 + ρ(m − 1)/(ρ + m − 1).
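For concreteness, the two-phase heuristic HNP can be sketched as follows (a rough Python sketch under my own naming; ties in both phases are broken by lowest index):

```python
# A rough sketch (my own names) of the two-phase heuristic H_NP:
# WSTP sequences the orders, then LPT assigns each order's jobs to the
# machine on which they would finish earliest.

def h_np(speeds, orders):
    """speeds: machine speeds; orders: list of (w_j, [p_1j, ..., p_kj]).
    Returns the total weighted completion time of the H_NP schedule."""
    m = len(speeds)
    load = [0.0] * m              # processing units queued on each machine
    # Phase 1 (WSTP): ascending total processing over weight.
    seq = sorted(range(len(orders)),
                 key=lambda j: sum(orders[j][1]) / orders[j][0])
    objective = 0.0
    for j in seq:
        w_j, jobs = orders[j]
        c_j = 0.0
        # Phase 2 (LPT): longest job first, to the earliest-finishing machine.
        for p in sorted(jobs, reverse=True):
            i = min(range(m), key=lambda i: (load[i] + p) / speeds[i])
            load[i] += p
            # jobs run in queue order, so this job finishes exactly when
            # machine i has processed everything queued so far
            c_j = max(c_j, load[i] / speeds[i])
        objective += w_j * c_j
    return objective
```

When all speeds are equal this reduces to the WSTP-LPT rule of Leung et al. [6], in line with the remark after Theorem 2.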
Proof. Consider any order j, j = 1, 2, . . . , n. Let l∗ denote a job of order j that finishes last under HNP, let i∗ be the machine to which l∗ is assigned, and let Pi denote the total amount of processing assigned to machine i just before l∗ is assigned (see Fig. 1). Then

Cj(HNP) = (Pi∗ + pl∗j) / vi∗.  (2)

In addition, according to the LPT rule, l∗ is assigned to the machine that completes the job the earliest (ties may be broken arbitrarily). Thus, if we were to assign l∗ to any machine i ≠ i∗, then the finish time of machine i would be at least (Pi∗ + pl∗j)/vi∗. Therefore, we have

Cj(HNP) ≤ (Pi + pl∗j) / vi,  for each i ≠ i∗.  (3)

From (2) and (3), we have

(vi∗ + Σ_{i≠i∗} vi) Cj(HNP)
≤ Σ_{i=1}^m Pi + m pl∗j
= Σ_{h=1}^j ph − Σ_{l=l∗}^k plj + m pl∗j
≤ Σ_{h=1}^j ph + (m − 1) pl∗j
≤ Σ_{h=1}^j ph + (m − 1) p1j.  (4)

Or, equivalently,

Cj(HNP) ≤ Σ_{h=1}^j ph / v + (m − 1) p1j / v
≤ Σ_{h=1}^j ph / v + (m − 1) p1j / ((1 + (m − 1)/ρ) v1),  (5)

where the last inequality holds because v ≥ v1 + (m − 1)vm = (1 + (m − 1)/ρ)v1. On the other hand, let p̄1 ≥ p̄2 ≥ · · · ≥ p̄m−1 denote the processing requirements of the longest m − 1 jobs among the first j orders in the optimal schedule. According to Lemma 1, it is clear that

C[j](OPT) ≥ max{ Σ_{h=1}^j p[h] / v, max_{1≤i<m} Σ_{l=1}^i p̄l / Σ_{h=1}^i vh } ≥ max{ Σ_{h=1}^j p[h] / v, p1[j] / v1 },  (6)

since p̄l ≥ pl[j] for each l = 1, 2, . . . , m − 1. Therefore, from (5) and (6) it follows that

Σ_{j=1}^n wj Cj(HNP) / Σ_{j=1}^n wj Cj(OPT)
≤ [ Σ_{j=1}^n wj ( Σ_{h=1}^j ph/v + (m − 1)p1j/((1 + (m−1)/ρ)v1) ) ] / [ Σ_{j=1}^n w[j] max{ Σ_{h=1}^j p[h]/v, p1[j]/v1 } ]
≤ [ Σ_{j=1}^n wj Σ_{h=1}^j ph/v ] / [ Σ_{j=1}^n w[j] Σ_{h=1}^j p[h]/v ] + [ Σ_{j=1}^n wj (m − 1)p1j/((1 + (m−1)/ρ)v1) ] / [ Σ_{j=1}^n w[j] p1[j]/v1 ]
≤ 1 + (m − 1)/(1 + (m − 1)/ρ) = 1 + ρ(m − 1)/(ρ + m − 1).  (7)

Note that in (7), the first component of the last inequality is due to the WSPT rule (Weighted Shortest Processing Time first, that is, schedule the jobs in non-decreasing order of the ratios of their processing times divided by their weights) and the ordering in (1) (this implies that Σ_{j=1}^n wj Σ_{h=1}^j ph ≤ Σ_{j=1}^n w[j] Σ_{h=1}^j p[h]), and its second component is due to Σ_{j=1}^n w[j] p1[j] = Σ_{j=1}^n wj p1j. ∎

When ρ = 1, i.e., all the machines are identical, the above result implies that HNP has a worst-case bound of 2 − 1/m. Indeed, for the identical machine case, HNP is exactly the WSTP-LPT rule introduced by Leung et al. [6], who also showed that WSTP-LPT has a worst-case bound of 2 − 1/m. This consistency reflects the fact that HNP is a generalization of WSTP-LPT.

To see that HNP can perform badly, consider the following example. Let wj = 1, j = 1, 2, . . . , n. There are m = ρ² + 1 machines with speeds v1 = ρ and v2 = v3 = · · · = v_{ρ²+1} = 1. Let k = m. The processing times of the n orders are

p11 = ρ², p21 = p31 = · · · = pk1 = 0;
p1j = ρ + 2, p2j = p3j = · · · = pkj = 1, j = 2, 3, . . . , n.

It is clear that the schedule produced by HNP is the one shown in Fig. 2. Thus,

Fig. 2. The schedule produced by HNP .

Σ_{j=1}^n Cj(HNP) = ρ + Σ_{j=1}^{n−1} (ρ + (ρ + 2)j/ρ) = nρ + n(n − 1)(ρ + 2)/(2ρ).

On the other hand, in the schedule shown in Fig. 2, if we put orders 2, 3, . . . , n before order 1, then we obtain an optimal schedule with an objective value

Σ_{j=1}^n Cj(OPT) = Σ_{j=1}^{n−1} (ρ + 2)j/ρ + ((ρ + 2)(n − 1)/ρ + ρ) = ρ + (n + 2)(n − 1)(ρ + 2)/(2ρ).

Thus, when ρ → ∞ we have

lim_{ρ→∞} Σ_{j=1}^n Cj(HNP) / Σ_{j=1}^n Cj(OPT) = (2nρ + (n − 1)n) / (2ρ + (n + 2)(n − 1)).  (8)

Let n = √(2ρ). Assuming that n and m are large enough, it is easy to see that

(2nρ + (n − 1)n) / (2ρ + (n + 2)(n − 1)) ≈ n/2 = (√2/2)(m − 1)^{1/4} ≈ (√2/2) m^{1/4},

due to ρ = √(m − 1). Note that, according to the bound shown earlier, the theoretical bound for this instance would be

1 + ρ(m − 1)/(ρ + m − 1) = 1 + √(m − 1)(m − 1)/(√(m − 1) + m − 1) ≈ 1 + √(m − 1),

where m is sufficiently large.

3. A heuristic for QF|Π, pmtn|Σ wj Cj

A heuristic for QF|Π, pmtn|Σ wj Cj also consists of two phases. In the first phase, we use the WSTP rule to sequence the orders according to their total weighted processing times. In the second phase, we assign the jobs of each order using the following "Shared-Schedule" procedure. This procedure is based on the work by Horvath et al. [3], in which the Level Algorithm (see Muntz and Coffman [7,8]) is generalized to solve the problem of preemptive scheduling on uniform machines to minimize the makespan Cmax, that is, Q|pmtn|Cmax. In Muntz and Coffman [7,8], the level of a job is defined as the minimal amount of time needed to execute this job and all jobs that follow it. The level algorithm executes jobs level by level, starting with the jobs at the highest level.

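For independent jobs, the makespan of the preemptive schedule built by the level algorithm is given in closed form by Theorem 1; a small sketch (my own function name) evaluates that formula:

```python
# Sketch (my own function name) of the closed-form optimal preemptive
# makespan of Theorem 1: the total work over the total speed, or the i
# largest jobs over the i fastest machines, whichever is larger.

def preemptive_makespan(speeds, reqs):
    """speeds: machine speeds; reqs: processing requirements."""
    v = sorted(speeds, reverse=True)      # v1 >= ... >= vm
    q = sorted(reqs, reverse=True)        # q1 >= ... >= qn
    m, n = len(v), len(q)
    best = sum(q) / sum(v)                # all the work over all the capacity
    for i in range(1, min(m, n + 1)):     # 1 <= i < m (and i <= n)
        best = max(best, sum(q[:i]) / sum(v[:i]))
    return best
```

For example, two jobs of size 3 on machines of speeds 2 and 1 give max{6/3, 3/2} = 2.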

Given an order j, let Lt(l, j) denote the level of job l of order j at time t. Since the jobs are independent, we set the initial level of job l of order j at Lt(l, j) = plj before the jobs of order j are assigned. The following procedure is used to construct a shared schedule in which the machines are shared equally among jobs. The shared schedule is then used to construct a preemptive schedule in a later procedure.

Procedure Shared-Schedule
Input: The set of jobs in order j.
Output: A shared schedule for the set of jobs in order j.
Step 1: Group the machines according to their finish times, so that the machines in each group have the same finish time. Let the groups of machines be M1, M2, . . . , MG, so that the finish time of the machines in M1 is less than that of the machines in M2, and so on.
Step 2: Let s be the finish time of group M1. For each job l in order j, let Ls(l, j) = plj. Let κ = 1.
Step 3: Reorder the machines in group Mκ in descending order of their speeds.
Step 4: {Assign the jobs to the machines in Mκ.} Set mκ to be the number of unassigned machines in group Mκ at time s, and let nh be the number of jobs at the highest level at time s. If nh < mκ, assign the nh jobs to the fastest nh machines, which execute the jobs at the same rate. Otherwise, assign the nh jobs to the mκ machines, which execute the jobs at the same rate. If there are any unassigned machines, consider the jobs at the next highest level, and so on. Repeat such an assignment until a time t at which one of the following events occurs:
Event 1: Some jobs are completed.
Event 2: There are two jobs l1 and l2 of order j such that Ls(l1, j) > Ls(l2, j) but Lt(l1, j) = Lt(l2, j).
Event 3: The finish time of some machines in Mκ (the other machines may have become idle due to an earlier occurrence of Event 1) becomes equal to that of the machines in Mκ+1.
Set s = t and go to Step 5.
Step 5: {Handle the events.} If Event 1 occurs, keep executing the assignment on the other machines (note that these machines are faster than those on which Event 1 occurs). If all jobs are finished, go to Step 6. If Event 2 occurs, go to Step 4.


If Event 3 occurs, delete the machines on which the event occurs from Mκ, include them in Mκ+1, let κ = κ + 1, and go to Step 3.
Step 6: Stop the procedure.

It should be noted that the above procedure works in the same way as the Level Algorithm due to Horvath et al. [3] when their algorithm is applied to a set of independent jobs. Their level algorithm is more general, however, in that it also applies to sets of jobs with precedence constraints.

After the above procedure determines a shared schedule for the jobs of order j, the following procedure is used to construct a preemptive schedule from each portion of the shared schedule:

Procedure Interval-Preemptive-Schedule
Input: A portion of the shared schedule.
Output: The corresponding portion of a preemptive schedule.
Step 1: Divide the interval of the shared schedule into n̄ equal subintervals along the m̄ shared machines, where n̄ is the number of jobs that share the machines in this interval.
Step 2: Assign the n̄ jobs so that each job occurs in exactly one non-overlapping subinterval on each machine.

An example of the assignment in Step 2 is shown in Fig. 3 for n̄ = 4 and m̄ = 3. With the above two procedures, we can now present the entire algorithm for scheduling the n orders.

Algorithm Preemptive-Schedule
Input: The complete set of orders to be scheduled.
Output: A complete preemptive schedule.
Step 1: Relabel the orders such that p1/w1 ≤ p2/w2 ≤ · · · ≤ pn/wn, where pj = Σ_{l=1}^k plj, j = 1, 2, . . . , n.
Step 2: For each order j, j = 1, 2, . . . , n, run procedure Shared-Schedule on the set of jobs of order j to produce a shared schedule. For each portion of the shared schedule, run procedure Interval-Preemptive-Schedule to construct the corresponding portion of the preemptive schedule.

Now let us consider the time complexity of the algorithm. Step 1 takes O(kn + n log n) time. In Step 2, for the set of jobs of each order, there is one run of procedure Shared-Schedule, incorporating a series of runs of procedure Interval-Preemptive-Schedule. For each order, Events 1 and 2 can occur at most k times each, and Event 3 can occur at most m times. In a run of Interval-Preemptive-Schedule, the number of sharing jobs is at most k and the number of machines is at most m; therefore, writing down the schedule takes O(km) time. Thus, each order takes at most O(km(k + m)) time, and for all orders the algorithm takes O(nkm(k + m)) time to construct the preemptive schedule. Adding the time taken by Step 1, the overall running time of the whole algorithm is O(nkm(k + m) + n log n).

We now analyze the worst-case performance ratio of the algorithm. For convenience, we refer to the algorithm Preemptive-Schedule as HP. We have the following result.

Theorem 3. For the problem QF|Π, pmtn|Σ wj Cj, the algorithm HP has a worst-case bound of min{m, 1 + (m − 1)/σ}.

Fig. 3. Illustrating the assignment in Step 2 of procedure Interval-Preemptive-Schedule.
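The Step 2 assignment of procedure Interval-Preemptive-Schedule admits a simple cyclic pattern of the kind illustrated in Fig. 3; a sketch (my own names, assuming at least as many sharing jobs as machines, which a shared schedule guarantees):

```python
# Sketch (my own names) of the cyclic assignment in Step 2 of procedure
# Interval-Preemptive-Schedule: the interval is cut into n_bar equal
# subintervals, and machine s runs job (t - s) mod n_bar in subinterval t.
# Valid whenever n_bar >= m, which holds for a shared schedule.

def cyclic_assignment(n_bar, m):
    """Return table[s][t] = job run by machine s in subinterval t."""
    return [[(t - s) % n_bar for t in range(n_bar)] for s in range(m)]

table = cyclic_assignment(4, 3)
# Each machine runs every job exactly once per interval, and in every
# subinterval the machines run pairwise distinct jobs, so no job is
# ever processed by two machines at the same time.
```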

Proof. According to the algorithm, the orders are assumed to be in their index order; otherwise, we can always relabel them. We prove the bound of m separately from the bound of 1 + (m − 1)/σ.

First, we prove the bound of m. Consider the finish time of any order j = 1, 2, . . . , n. In the worst case, all the jobs of the first j orders are scheduled on machine 1 (the fastest machine), so that order j completes at time Σ_{h=1}^j ph / v1. Clearly, because of the preemptive behavior of the algorithm, if any portion of a job belonging to the first j orders were assigned to any other machine, then the completion time of order j would only be less. Thus, an upper bound on Cj(HP) is

Cj(HP) ≤ Σ_{h=1}^j ph / v1.  (9)

From the fact that v1 ≥ v2 ≥ · · · ≥ vm and v = Σ_{i=1}^m vi, we have v1 ≥ v/m. It follows that Cj(HP) ≤ m · Σ_{h=1}^j ph / v. On the other hand, a lower bound on C[j](OPT) is

C[j](OPT) ≥ Σ_{h=1}^j p[h] / v.  (10)

We therefore have

Σ_{j=1}^n wj Cj(HP) / Σ_{j=1}^n wj Cj(OPT) ≤ [ Σ_{j=1}^n wj · m Σ_{h=1}^j ph / v ] / [ Σ_{j=1}^n w[j] Σ_{h=1}^j p[h] / v ] ≤ m.  (11)

The last inequality in (11) is due to the WSPT rule and the ordering in (1).

We now prove the bound of 1 + (m − 1)/σ. For each i = 2, 3, . . . , m, since v1/vi ≥ v1/v2 = σ, we have vi ≤ v1/σ. Therefore,

v = Σ_{i=1}^m vi ≤ v1 + (m − 1) v1/σ.

It follows that

v1 ≥ σv / (m + σ − 1).  (12)

Inequality (9) still holds; from (9) and (12), we have

Cj(HP) ≤ Σ_{h=1}^j ph / v1 ≤ ((m + σ − 1)/σ) · Σ_{h=1}^j ph / v.  (13)

From (10) and (13), it follows that

Σ_{j=1}^n wj Cj(HP) / Σ_{j=1}^n wj Cj(OPT) ≤ 1 + (m − 1)/σ,  (14)

which completes the proof. ∎

The above result reveals that, if one machine is much faster than all the other machines, i.e., σ is very large while m is fixed, then the algorithm tends to be optimal. The reason is that all the jobs are then scheduled on the fastest machine, and the algorithm behaves like the WSPT rule on a single machine.

To see that HP can perform badly, consider the following example. Let wj = 1, j = 1, 2, . . . , n. There are m = ρ² + 1 machines with speeds v1 = ρ and v2 = v3 = · · · = v_{ρ²+1} = 1. Here, σ = ρ. Let k = m − 1. The processing times of the n orders are

p11 = ρ², p21 = p31 = · · · = pk1 = 0;
p1j = ρ + (j − 1), p2j = p3j = · · · = p_{(m−j),j} = 1, j = 2, 3, . . . , n.

It is clear that the schedule produced by HP is the one shown in Fig. 4. Thus,

Σ_{j=1}^n Cj(HP) ≈ nρ + O(n²/ρ).

On the other hand, if we schedule orders 2, 3, . . . , n before order 1, then we obtain an optimal schedule with an objective value

Σ_{j=1}^n Cj(OPT) ≈ O(n²) + ρ.

Fig. 4. The schedule produced by HP.

Thus, we have

Σ_{j=1}^n Cj(HP) / Σ_{j=1}^n Cj(OPT) ≈ (nρ + O(n²/ρ)) / (ρ + O(n²)).

Let n = √ρ. Then we have

Σ_{j=1}^n Cj(HP) / Σ_{j=1}^n Cj(OPT) ≈ O(n) = O(√ρ).

Note that, according to the bound shown earlier, the theoretical bound for this instance would be

min{m, 1 + (m − 1)/σ} = 1 + ρ.

Comparing Theorems 2 and 3, it is interesting to note that HNP performs better in an identical machine environment, whereas HP performs better in an environment in which the fastest machine is much faster than all other machines.

4. A special case

We now go back to the nonpreemptive case and analyze HNP when there are only two different speeds, i.e., there exists q, 1 ≤ q ≤ m, such that vi = ρ for 1 ≤ i ≤ q, and vi = 1 for q + 1 ≤ i ≤ m. We refer to this problem as the Two-Speed problem. Clearly, if m = 2, then ρ = σ. By modifying the proofs of Theorems 2 and 3, we obtain the following result.

Theorem 4. For the Two-Speed problem, the worst-case bound of HNP is less than 1 + √m. In particular, when m = 2, the worst-case bound of HNP is equal to 1 + ν ≈ 1.618, where ν = 2/(1 + √5).

Proof. For the Two-Speed problem,

v = qv1 + (m − q)v1/ρ,

or, equivalently,

v1 = ρv/(ρq + m − q).  (15)

Now, we follow the proof of Theorem 2, except that we replace (5) by

Cj(HNP) ≤ Σ_{h=1}^j ph / v + (m − 1)p1j / v ≤ Σ_{h=1}^j ph / v + (m − 1)p1j / ((q + (m − q)/ρ) v1).

Then inequality (7) becomes

Σ_{j=1}^n wj Cj(HNP) / Σ_{j=1}^n wj Cj(OPT)
≤ [ Σ_{j=1}^n wj ( Σ_{h=1}^j ph/v + (m − 1)p1j/((q + (m−q)/ρ)v1) ) ] / [ Σ_{j=1}^n w[j] max{ Σ_{h=1}^j p[h]/v, p1[j]/v1 } ]
≤ 1 + (m − 1)/(q + (m − q)/ρ) = 1 + ρ(m − 1)/(ρq + m − q).  (16)

Similarly, since the proof of Theorem 3 is also valid for the non-preemptive problem, replacing (12) with (15) in inequality (13) yields

Cj(HNP) ≤ Σ_{h=1}^j ph / v1 = ((ρq + m − q)/ρ) · Σ_{h=1}^j ph / v.

Then we obtain

Σ_{j=1}^n wj Cj(HNP) / Σ_{j=1}^n wj Cj(OPT) ≤ (ρq + m − q)/ρ.  (17)

Thus, from (16) and (17) we obtain the following combined bound:

Σ_{j=1}^n wj Cj(HNP) / Σ_{j=1}^n wj Cj(OPT) ≤ min{ 1 + ρ(m − 1)/(ρq + m − q), (ρq + m − q)/ρ }.  (18)

Under some special conditions, the combined bound in (18) can be simplified. We first consider the case in which

(ρq + m − q)/ρ ≤ √m;

then from (17) we have

Σ_{j=1}^n wj Cj(HNP) / Σ_{j=1}^n wj Cj(OPT) ≤ √m.

On the other hand, if

(ρq + m − q)/ρ ≥ √m,

then from (16),

Σ_{j=1}^n wj Cj(HNP) / Σ_{j=1}^n wj Cj(OPT) ≤ 1 + (m − 1)/√m < 1 + √m.

Thus, in either case, the bound is less than 1 + √m.

In the second case, we consider m = 2, which implies q = 1. Then from (18), the bound of HNP becomes

min{ 1 + ρ/(ρ + 1), (ρ + 1)/ρ }.

It reaches its maximum when

1 + ρ/(ρ + 1) = (ρ + 1)/ρ,

that is, when ρ = (1 + √5)/2. Then

min{ 1 + ρ/(ρ + 1), (ρ + 1)/ρ } = 1 + 2/(1 + √5),

which is maximal. ∎

5. Empirical analysis

To evaluate the two heuristics empirically, we generate problem instances with various problem sizes determined by n, m, and k, where n ∈ {20, 50, 100, 200}, m ∈ {2, 5, 10, 20}, and k ∈ {2, 5, 10, 20, 50, 100}. For each combination of n, m, and k, 10 problem instances are randomly generated. These 10 problem instances have a similar structure and are treated as a group. To produce an instance for a combination of n, m, and k, first the speeds of the m machines are generated, each drawn from the uniform distribution [1, 50]. Then, n orders are generated. For each order j, its number of product types, denoted kj, is drawn from the uniform distribution [1, k]. Then, for each product type l = 1, 2, . . . , kj, an integer processing requirement plj is drawn from the uniform distribution [1, 200]. In addition, a weight for order j is drawn from the uniform distribution [1, 20]. In total, 4 × 4 × 6 × 10 = 960 instances are generated.

The algorithms are implemented in C++. The running environment is based on the Windows 2000 operating system; the PC used was a notebook computer (Pentium III 900 MHz with 384 MB RAM). Since either heuristic solves any problem instance in only a few milliseconds, we do not consider running times, but focus on an analysis of the quality of the solutions. In particular, since it is unlikely that the optimal solution can be obtained by an exact algorithm quickly, we compare the heuristic results with a lower bound on the optimal solution that can be computed easily. From the analyses in the previous sections, it is easy to see that a lower bound for both the cost of the optimal non-preemptive schedule and that of the optimal preemptive schedule is

LB = Σ_{j=1}^n wj Σ_{h=1}^j ph / v.

Thus, the ratio

r(H) = Σ wj Cj(H) / LB ≥ Σ wj Cj(H) / Σ wj Cj(OPT) ≥ 1,

where H ∈ {HNP , HP }. If r(H ) is close to 1, then it means that the heuristic yields a result that is close to the lower bound. Hence, it would be even closer to the optimal cost. Therefore, to some extent, the ratio r(H ) indicates how good the heuristics are when they are applied to solve the problem instances. For each instance generated, we run both HNP and HP on it to produce a non-preemptive schedule and a preemptive schedule, respectively. In addition, the value of LB is computed. With these, we compute both r(HNP ) and r(HP ) for this instance. As we mentioned before, for each combination of n, m, and k, the 10 problem instances that are randomly generated have a similar structure so that they can be treated as a group. It would be more reasonable to study the average ratio of the two heuristics for each group of the instances. Table 1 shows the average ratio for each group of 10 instances for each combination of n, m, and k. In the table, r(HNP ) and r(HP ) denote the average ratio for HNP and HP , respectively. From Table 1, we have the following observations: • The largest r(HNP ) and r(HP ) are 2.113 and 1.866, respectively. This indicates that for all problem instances generated, the worst-case heuristic results are about twice the lower bound of the optimal costs. This worst case occurs for n = 20, m = 20, and k = 2. However, this does not mean that the heuristics perform badly for small n, k and large m. Instead, we believe that the large ratios are caused by the gap between LB and the actual optimal cost. We notice that when k = 2, each order has at most two product types. Thus, LB would be very small if we compute pj /v. In contrast, in the optimal schedules, the assignment of the jobs of each order may involve only a few machines and its finish time would have a closer relationship to max{plj } than pj /v. Therefore, computing LB may underestimate the optimal cost too much, which results in large values of r(HNP ) and r(HP ). 
• For each combination of n and m, r(H ) drops close to 1.0 when k grows from 2 to 100. This indicates that the schedules produced by HNP and HP

Table 1
The average ratio of HNP and HP over the instances generated for each combination of n, m, and k

 n    k     m=2              m=5              m=10             m=20
            r(HNP)  r(HP)    r(HNP)  r(HP)    r(HNP)  r(HP)    r(HNP)  r(HP)
 20   2     1.052   1.029    1.227   1.163    1.541   1.417    2.113   1.866
 20   5     1.024   1.010    1.101   1.053    1.303   1.208    1.726   1.563
 20   10    1.009   1.003    1.043   1.013    1.141   1.079    1.413   1.290
 20   20    1.003   1.000    1.019   1.006    1.057   1.024    1.182   1.116
 20   50    1.001   1.000    1.004   1.001    1.014   1.005    1.038   1.018
 20   100   1.000   1.000    1.002   1.000    1.004   1.001    1.015   1.006
 50   2     1.024   1.011    1.095   1.062    1.228   1.174    1.491   1.364
 50   5     1.009   1.003    1.052   1.030    1.123   1.081    1.304   1.219
 50   10    1.004   1.001    1.020   1.006    1.062   1.036    1.162   1.114
 50   20    1.001   1.000    1.009   1.002    1.022   1.008    1.076   1.050
 50   50    1.000   1.000    1.002   1.000    1.005   1.001    1.016   1.007
 50   100   1.000   1.000    1.000   1.000    1.002   1.000    1.006   1.002
 100  2     1.011   1.005    1.047   1.029    1.114   1.080    1.244   1.178
 100  5     1.004   1.001    1.026   1.016    1.060   1.041    1.146   1.104
 100  10    1.002   1.000    1.010   1.004    1.030   1.018    1.081   1.058
 100  20    1.001   1.000    1.004   1.001    1.013   1.005    1.035   1.021
 100  50    1.000   1.000    1.001   1.000    1.003   1.001    1.009   1.004
 100  100   1.000   1.000    1.000   1.000    1.001   1.000    1.003   1.001
 200  2     1.005   1.002    1.025   1.016    1.056   1.040    1.120   1.086
 200  5     1.002   1.001    1.013   1.008    1.031   1.021    1.069   1.048
 200  10    1.001   1.000    1.006   1.002    1.016   1.009    1.040   1.026
 200  20    1.000   1.000    1.002   1.001    1.006   1.003    1.018   1.010
 200  50    1.000   1.000    1.000   1.000    1.002   1.000    1.005   1.002
 200  100   1.000   1.000    1.000   1.000    1.000   1.000    1.001   1.000

are very close to the lower bound of the optimal schedules. Hence, they are even closer to the optimal schedules. The reason may lie in the fact that, when k is large, each order may have a lot of different product types to produce so that the data of the orders are more regular for the heuristics to obtain good schedules. • For each combination of n and k, r(H ) grows when m becomes large. This indicates that the heuristics perform better when the number of machines is smaller. The observation is consistent with the worst-case bounds that we have shown for the two heuristics in previous sections. • For each combination of m and k, r(H ) drops when n becomes large. However, we do not conclude that the heuristics may perform better for a large number of orders than for a small number. Again, we believe that this may be caused by the gap between the lower bound and the optimal cost. From the above observations, we are quite confident that the two heuristics can produce very near-optimal solutions for the randomly generated instances with large k and small m.

m = 10

m = 20

6. Concluding remarks

In this paper, we proposed two heuristics for order scheduling on uniform machines, one for non-preemptive scheduling (HNP) and the other for preemptive scheduling (HP). We showed that the worst-case bound for HNP is 1 + ρ(m − 1)/(ρ + m − 1) and the worst-case bound for HP is min{m, 1 + (m − 1)/σ}. Neither bound is tight, although we were able to construct examples whose ratios grow on the order of the square root of the theoretical bounds. For future research, it would be interesting to tighten these bounds. Since the counterexamples show that the two heuristics are not bounded by any constant, it would also be of interest to investigate whether they are bounded by a sub-linear function of m.

We also implemented the two heuristics in order to carry out an empirical analysis. Even though the worst-case bounds we showed for the two heuristics are not very strong, the experimental results reveal that in practice the heuristics produce solutions that are very close to optimal.
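The two closed-form worst-case bounds stated above are easy to evaluate numerically. In the sketch below, ρ and σ are the speed-related parameters defined earlier in the paper (not restated in this excerpt); the function names are ours, introduced only for illustration.

```python
def bound_hnp(m, rho):
    """Worst-case bound 1 + rho*(m-1)/(rho + m - 1) for the
    non-preemptive heuristic HNP, as stated in the conclusion."""
    return 1 + rho * (m - 1) / (rho + m - 1)

def bound_hp(m, sigma):
    """Worst-case bound min{m, 1 + (m-1)/sigma} for the
    preemptive heuristic HP, as stated in the conclusion."""
    return min(m, 1 + (m - 1) / sigma)
```

Note that as m grows with ρ fixed, bound_hnp approaches 1 + ρ, while for fixed σ bound_hp grows linearly in m until capped by the min with m, which is consistent with the remark that the heuristics are not bounded by any constant in general.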


Acknowledgements

The authors gratefully acknowledge the support by the National Science Foundation through grants DMI-0300156 and DMI-0245603.

References

[1] J. Blocher, D. Chhajed, The customer order lead-time problem on parallel machines, Naval Research Logistics 43 (1996) 629–654.
[2] M.R. Garey, D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, New York, 1979.
[3] E.C. Horvath, S. Lam, R. Sethi, A level algorithm for preemptive scheduling, Journal of the Association for Computing Machinery 24 (1977) 32–43.
[4] J.Y.-T. Leung, C.Y. Lee, C.W. Ng, G.H. Young, Minimizing total weighted completion time on flexible machines for order scheduling, submitted for publication.


[5] J.Y.-T. Leung, H. Li, M.L. Pinedo, Order scheduling models: An overview, in: G. Kendall, E.K. Burke, S. Petrovic, M. Gendreau (Eds.), Multidisciplinary Scheduling: Theory and Applications, Springer, 2005, pp. 37–53.
[6] J.Y.-T. Leung, H. Li, M.L. Pinedo, Approximation algorithms for minimizing total weighted completion time of orders on identical machines in parallel, Naval Research Logistics 53 (2006) 243–260.
[7] R.R. Muntz, E.G. Coffman Jr., Optimal preemptive scheduling on two-processor systems, IEEE Transactions on Computers 11 (1969) 1014–1020.
[8] R.R. Muntz, E.G. Coffman Jr., Preemptive scheduling of real time tasks on multiprocessor systems, Journal of the Association for Computing Machinery 17 (1970) 324–338.
[9] W.E. Smith, Various optimizers for single stage production, Naval Research Logistics Quarterly 3 (1956) 59–66.
[10] J. Yang, Scheduling with batch objectives, PhD thesis, Industrial and Systems Engineering Graduate Program, The Ohio State University, Columbus, OH, 1998.
[11] J. Yang, M.E. Posner, Scheduling parallel machines for the customer order problem, Journal of Scheduling 8 (2005) 49–74.