Scheduling about a common due date with earliness and tardiness penalties


0305-0548/90 $3.00 + 0.00
Copyright © 1990 Pergamon Press plc

Computers Opns Res. Vol. 17, No. 2, pp. 231-241, 1990
Printed in Great Britain. All rights reserved

SCHEDULING ABOUT A COMMON DUE DATE WITH EARLINESS AND TARDINESS PENALTIES

PRABUDDHA DE,* JAY B. GHOSH† and CHARLES E. WELLS‡

Department of MIS and Decision Sciences, University of Dayton, 300 College Park, Dayton, OH 45469-0001, U.S.A.

(Received November 1988; revised April 1989)

*Prabuddha De is Standard Register-Sherman Distinguished Professor of MIS at the University of Dayton. He was previously a Professor of Accounting and MIS at Ohio State University. He received his PhD from Carnegie-Mellon University, and has published in many journals including Operations Research, Computers and Operations Research, Management Science and Decision Sciences.
†Jay B. Ghosh is an Assistant Professor of MIS and Decision Sciences at the University of Dayton. His PhD is from the University of Arkansas. The journals he has published in include Management Science, Simulation and Mathematical Modelling.
‡Charles E. Wells is a Professor of MIS and Decision Sciences at the University of Dayton. He received his PhD from the University of Cincinnati. He has published in Management Science, Networks, Decision Sciences, IIE Transactions and various other journals.

Scope and Purpose - The growing awareness of the just-in-time production philosophy has led to the consideration of penalties for jobs completed before they are due in addition to penalties for jobs which are tardy. This paper examines the problem of scheduling a set of jobs in a single-machine environment where the effects of both types of penalties are addressed. The objective is to develop a computer-based methodology for solving such scheduling problems within a reasonable time. Both exact and approximate solution procedures are explored. Extensive computational results are reported to show the effectiveness of the methodology.

Abstract - This paper describes solution techniques for scheduling a set of independent jobs on a single machine where the objective is to minimize the mean squared deviation of the completion times about a common due date. A branch and bound algorithm is proposed and implemented using a recursive scheme. The algorithm is tested on job sets drawn from six different distributions of processing times. Computational experiments which compare this approach with the existing methodology indicate that the former is significantly faster and can solve much larger problems. A heuristic based upon the branch and bound algorithm is also presented which provides fast solutions with constant performance guarantees.

1. INTRODUCTION

The research literature in scheduling has generally treated tardiness as the only penalty associated with completing a job at a time other than its due date. Until recently there have been few instances [1, 2] in which earliness has also been considered as a penalty. However, with the present emphasis on the just-in-time production philosophy, the consequences of earliness are beginning to receive an increased level of attention. We give below a brief review of past research involving both earliness and tardiness penalties. For a more comprehensive coverage, refer to the excellent survey by Baker and Scudder [3]. Beyond the early work of Sidney [1] and Lakshminarayan et al. [2], examples of scheduling about distinct due dates (i.e. each job having a distinct due date) with penalties for both earliness and tardiness are relatively sparse and exist only for the single-machine case. Of the few that we are aware of, Gupta and Sen [4] use the minimization of the sum of squared lateness as their objective. More recently, Fry et al. [5] attempt to minimize the sum of absolute lateness weighted with unique earliness and tardiness penalties for each job. Several researchers have, however, addressed the problem of scheduling about a common due date in the presence of both earliness and tardiness penalties. Kanet [6], Sundararaghavan and Ahmed [7], Hall [8], and Bagchi et al. [9] focus on minimizing either the sum or, equivalently, the average of absolute lateness for a single-machine problem. Sundararaghavan and Ahmed [7] extend their work on the same objective to the multiple-machine situation as well. While the above objective, generically termed MAD for mean absolute deviation, assumes equal penalties for earliness and tardiness, Panwalker et al. [10] and Bagchi et al. [11] discuss a variation where the penalties are allowed to be different for the single-machine case. An alternate objective, called mean squared deviation and abbreviated as MSD, is to minimize the average of squared lateness.



Bagchi et al. [12] investigate the single-machine version of the problem. In a related paper [11], they also consider the variation where earliness and tardiness penalties are different. In this paper, we concentrate on the MSD problem as addressed by Bagchi et al. in [12], referred to henceforth as BSC. Being a quadratic function, the MSD objective penalizes larger instances of earliness and tardiness at a higher rate. In situations where this is desired, the MSD serves as a more appropriate criterion than some of the others discussed above. Another compelling reason for working with the MSD problem is that it subsumes the so-called completion (waiting) time variance problem, introduced in Section 2.1, as a special case. This latter problem has received wide attention in the literature on its own merits [13-16]. We show elsewhere [17] that the solution strategy for this problem as reported in BSC may lead to nonoptimal schedules. Here, we explore the existence of an effective algorithm for the solution of the MSD problem, an open problem for which no polynomially efficient algorithm has been identified to date. In particular, we propose a branch and bound (B&B) scheme that normally finds the solution to a 30-job problem within a few seconds and a 40-job problem within a few minutes on a VAX 8600. The procedures of BSC, in contrast, cannot realistically solve problems involving more than 25 jobs, particularly for large due dates (50% or more of the makespan). For such due dates, the BSC approach has failed to generate optimal solutions to the larger test problems used in our study when run on the same machine with an assigned limit of 6 hr of CPU time. Based on our B&B scheme, we also propose a fast heuristic that produces solutions with constant performance guarantees.

The organization of this paper is as follows: in Section 2, we introduce the MSD problem, characterize its solution in general terms, define the various subproblems whose solutions are critical to the solution of the MSD problem, describe the properties associated with the solutions to these subproblems, and outline the B&B approach to solving the problem. Section 3 details the development of bounds required for the B&B solution. In Section 4, the B&B scheme and its implementation are discussed. Section 5 presents the results of computational experience with the B&B implementation, while Section 6 provides some concluding remarks.

2. THE PROBLEM AND ITS SOLUTION

Consider a single machine with n jobs available for immediate processing. Associated with each job j is its required processing time $p_j$. We assume, without any loss of generality, that the jobs have been numbered such that $p_1 \geq p_2 \geq \cdots \geq p_n$.

Let us define

$d$ = common due date (exogenously determined) for all n jobs,

$$MS = \sum_{j=1}^{n} p_j,$$

and for any schedule S,

$R_j$ = start time of job j,
$C_j$ = completion time of job j,
$[i]$ = index of the ith job in the sequence,

$$\bar{C} = (1/n) \sum_{j=1}^{n} C_j.$$

The MSD for schedule S is then given by

$$MSD(S) = (1/n) \sum_{j=1}^{n} (C_j - d)^2.$$
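As a concrete illustration of the objective (a minimal sketch of our own, not code from the paper), the following Python fragment evaluates MSD(S) for a job sequence processed contiguously from a given start time:

```python
from itertools import accumulate

def msd(sequence, d, start=0):
    """Mean squared deviation of completion times about a common due date d.
    `sequence` holds processing times in processing order; `start` is the
    start time of the first job (>= 0), which permits translation."""
    completions = [start + c for c in accumulate(sequence)]
    return sum((c - d) ** 2 for c in completions) / len(sequence)

# Example: p = (4, 2, 1) with d = 6 gives completions 4, 6, 7, so
# MSD = ((4 - 6)**2 + 0**2 + 1**2) / 3 = 5/3, and Z(S) = n * MSD(S) = 5.
print(msd((4, 2, 1), 6))
```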

For convenience, we will work with the objective $Z(S) = n \cdot MSD(S)$. Note that we are interested only in nonpreemptive scheduling, and we assume continuous processing, i.e. we force $R_{[i]} = C_{[i-1]}$ for $i = 2, \ldots, n$. However, we let $R_{[1]} \geq 0$. This allows us to translate the schedule (i.e. to delay its start). The MSD problem can now be stated as:

(P) Find a schedule S* which minimizes Z(S).

We consider preemption uninteresting, since it can be shown that with preemption the problem is resolved trivially. Insertion of idle time between job completions is also considered unnecessary, as it always leads to a suboptimal solution.

2.1. Solution concepts for the MSD problem

[Fig. 1. Optimal MSD profile.]

Figure 1 shows the graph of MSD as a function of d for three different schedules generated from a set of jobs. The solid lower line indicates the optimal MSD profile. Note that a given schedule assumes its minimum MSD value at $d = \bar{C}$, the mean completion time for that schedule. In the figure, $d^*$ is the value of d where MSD reaches its global minimum and $d^{**}$ is the smallest value of d at which a local minimum is realized. Observe that the optimal value of MSD at $d^*$ is not achieved from Schedule 3, but rather from Schedule 2, translated such that its minimum coincides with $d^*$. It is clear, however, that for $d < d^{**}$, the optimum MSD cannot be realized from the translation of a schedule, i.e. we must have $R_{[1]} = 0$. We say that the problem is tightly constrained in this region. The solution to a tightly constrained MSD problem is obtained by solving the following subproblem:

(P1) Find a schedule S* which minimizes Z(S) subject to $R_{[1]} = 0$.

When $d^{**} \leq d < d^*$, the problem still remains constrained but the optimal MSD may be realized either with or without translation. Consider the following subproblem:

(P2) Find a schedule S* which minimizes $Z_0(S)$ subject to $\bar{C} \leq d$, where

$$Z_0(S) = \sum_{j=1}^{n} (C_j - \bar{C})^2.$$

Note that $Z_0(S)$ is invariant under translation, and thus P2 identifies the schedule that provides the best local minimum to the left of d. The solution to the MSD problem for $d^{**} \leq d < d^*$ is obtained by comparing the optimal value of Z(S) in P1 with the optimal value of $Z_0(S)$ in P2 and choosing the better solution.
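The translation invariance of $Z_0$ is immediate from the definitions above: shifting the whole schedule by $t$ shifts every completion time and the mean by the same amount, so

$$C_j' = C_j + t, \quad \bar{C}' = \bar{C} + t \;\Longrightarrow\; Z_0(S') = \sum_{j=1}^{n} (C_j' - \bar{C}')^2 = \sum_{j=1}^{n} (C_j - \bar{C})^2 = Z_0(S).$$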


Finally, if $d \geq d^*$, the problem is said to be unconstrained and the optimal MSD is realized by translating the schedule that produces the global minimum. The solution to the MSD problem is now obtained by solving the following subproblem:

(P3) Find a schedule S* which minimizes $Z_0(S)$.

This, incidentally, is the completion time variance (CTV) problem addressed by Merten and Muller [13], Schrage [14], and more recently by Vani and Raghavachari [16]. It is also closely related to the waiting time variance (WTV) problem of Eilon and Chowdhury [15].

2.2. Dominance properties

A detailed discussion of the various properties that have been established for the solution to the MSD problem appears in [12]. We summarize here only those properties which are most important for our discussion.

PROPERTY 1 (V-shape): The solution to problem P must be a schedule that is V-about-d.

This property subsumes Theorem 7 and Proposition 2 in BSC. It suggests that in the optimal schedule the jobs preceding and succeeding the shortest job must be in LPT and SPT orders, respectively. In addition, jobs with $C_j < d$ and $R_j > d$ must respectively be in LPT and SPT orders as well. The implication is that we should only consider V-about-d schedules in solving P1, and V-about-$\bar{C}$ schedules in solving P2 and P3. This reduces the number of schedules to be evaluated from $O(n!)$ to $O(2^{n-1})$.

PROPERTY 2 (duality): For a given schedule, the value of $Z_0(S)$ does not change if the order of the last $(n-1)$ jobs is reversed.

This property corresponds to Theorem 4 of BSC, and points to the existence of alternate solutions to P3. It helps us develop tighter bounds on the optimal value of $Z_0(S)$.

PROPERTY 3 (longest jobs): The solution to problem P3 must be a schedule that has the longest job in the first position and the second longest job either in the second or the last position.

This property combines Theorems 5 and 6 of BSC, and reduces the complexity of our search for the solution to P3 to $O(2^{n-3})$.

PROPERTY 4 (completion): Given a partial schedule S: (A, ..., B) of the k longest jobs, where A and B are LPT and SPT subsequences, let $C_A = \sum_{j \in A} p_j$ and $R_B = MS - \sum_{j \in B} p_j$. If $C_A + \frac{1}{2}(p_n + p_{n-1}) > d$, then the remaining $(n-k)$ jobs scheduled in SPT order will provide the best value of Z(S) for this partial schedule. Similarly, if $R_B - \frac{1}{2}(p_n + p_{n-1}) < d$, then the remaining jobs scheduled in LPT order provide the best value of Z(S).

This is an extension of Proposition 3 in BSC. It allows us to complete a partial schedule early on and thereby reduce our search.
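To make the $O(2^{n-1})$ figure concrete, here is a small brute-force sketch (ours, for illustration only; the paper's B&B avoids full enumeration). It generates every V-shaped sequence by sending each job, taken in LPT order, either to the front (LPT) block or to the back (SPT) block:

```python
from itertools import accumulate, product

def v_shaped_schedules(p_desc):
    """Yield every V-shaped sequence for processing times p_desc listed in
    nonincreasing (LPT) order; there are 2**(n-1) of them, since the
    shortest job always sits at the bottom of the V."""
    for mask in product((0, 1), repeat=len(p_desc) - 1):
        A, B = [], []
        for pj, side in zip(p_desc, mask):       # all jobs but the shortest
            (A if side == 0 else B).append(pj)
        # A is LPT as collected; B, collected longest-first, is reversed to
        # give the SPT tail, with the shortest job placed at the bottom.
        yield A + [p_desc[-1]] + B[::-1]

def z(seq, d):
    """Z(S) for a schedule started at time zero."""
    return sum((c - d) ** 2 for c in accumulate(seq))

# Exhaustively solving a tiny instance of P1:
p = [7, 5, 3, 2]                                 # already in LPT order
print(min(v_shaped_schedules(p), key=lambda s: z(s, 8)))
```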

2.3. A branch and bound solution approach

Without the knowledge of $d^*$ and $d^{**}$, it may be possible a priori to characterize a given MSD problem as tightly constrained, constrained, or unconstrained through the use of bounds on $d^*$ and $d^{**}$. We report the development of such bounds in [17]. Specifically, we can compute bounds $d_L^{**}$, $d_L^*$, and $d_U^*$ such that $d_L^{**} \leq d^{**}$ and $d_L^* \leq d^* \leq d_U^*$. Once an initial characterization is obtained on the basis of the above bounds, the MSD problem may be solved by a branch and bound (B&B) algorithm. This algorithm has a worst case complexity of $O(2^{n-1})$, like the enumerative procedure of BSC. But our computational results show that unlike the BSC procedure, it can be quite effective in solving relatively large problems. Depending on the characterization of the problem, the B&B approach entails solving some or all of problems P1, P2, and P3 in a specific order. Due to the recursive implementation scheme discussed in Section 4, the computations involved are not as repetitive as they may appear to be. At any node in its enumeration tree, say the one corresponding to the partial schedule S: (A, ..., B)


with the longest k jobs already scheduled, the B&B algorithm applies two tests regardless of whether it is solving P1, P2, or P3: a completion test based on Property 4 and a fathoming test based on the lower and upper bounds of Section 3. In the event both tests fail, two new nodes are created, one with the (k+1)-st longest job scheduled at the back of A and the other with that job in front of B. The search proceeds on a depth-first basis to the node with the smaller lower bound. If either one of the tests passes, backtracking is initiated. The process continues until all viable nodes are considered. How and in what order problems P1-P3 are solved depends on d. We delineate the possibilities now (a sketch of the node expansion in code follows the list):

BB1. $d < d_L^{**}$. Solve P1. Start B&B with the empty schedule S: (...) and a heuristic upper bound on Z(S), such as the one given in Section 3.3. The solution to P1 solves P.

BB2. $d_L^{**} \leq d < d_L^*$. Solve P1 first, as in BB1; then solve P2, starting B&B with the solution to P1 as the upper bound on $Z_0(S)$. The better of the two solutions solves P.

BB3. $d \geq d_L^*$. Solve P3 first. Start B&B with S: (1, ..., 2) and a heuristic upper bound on $Z_0(S)$. If $d \geq d^*$, where $d^*$ is equal to the minimum $\bar{C}$ for any schedule that solves P3, the solution to P3 solves P. Otherwise, do as in BB2.
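The node expansion just described can be rendered as the following Python skeleton (our own sketch; `lower_bound` and `try_complete` are stubs standing in for the bounds of Section 3 and the Property 4 completion test, and all names are ours, not the paper's):

```python
def evaluate(seq, d):
    """Z(S) for a complete schedule processed from time zero."""
    t = total = 0
    for pj in seq:
        t += pj
        total += (t - d) ** 2
    return total

def branch_and_bound(p, d, lower_bound, try_complete, incumbent):
    """Depth-first B&B for P1 over partial schedules S: (A, ..., B).
    p must be in LPT order, so at depth k the k longest jobs are placed."""
    best_val, best_seq = incumbent, None

    def expand(A, B):
        nonlocal best_val, best_seq
        k = len(A) + len(B)
        if k == len(p):                          # leaf: a complete schedule
            val = evaluate(A + B[::-1], d)
            if val < best_val:
                best_val, best_seq = val, A + B[::-1]
            return
        done = try_complete(A, B)                # completion test (Property 4)
        if done is not None:
            val = evaluate(done, d)
            if val < best_val:
                best_val, best_seq = val, done
            return
        job = p[k]                               # the (k+1)-st longest job
        kids = sorted([(A + [job], B), (A, B + [job])],
                      key=lambda ab: lower_bound(*ab))
        for A2, B2 in kids:                      # smaller bound first; fathom
            if lower_bound(A2, B2) < best_val:   # nodes that cannot improve
                expand(A2, B2)

    expand([], [])
    return best_val, best_seq

# With lower_bound = lambda A, B: 0 and try_complete = lambda A, B: None,
# this degenerates to exhaustive search over the 2**(n-1) V-shaped schedules.
```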

3. BOUNDS

The efficacy of a B&B algorithm is typically determined by the tightness of the bounds used. While a number of approaches may be pursued for the development of bounds, here we only emphasize the ones we have considered in this study.

3.1. Bounds based on partitioning

Given a partial schedule S: (A, ..., B) with the k longest jobs already scheduled, the idea is to generate lower and upper bounds on the optimal Z(S) and the optimal $Z_0(S)$ realizable by completing S. We denote any possible sequencing of the (n-k) shortest jobs, as yet unscheduled but needed to complete S, by T. We also let $Z'(S)$ and $Z_0'(S)$ designate the already determined portions of Z(S) and $Z_0(S)$, respectively, corresponding to the partial schedule S. Two results can now be derived that lead to the desired bounds.

RESULT 1 (for P1). Let $\bar{C}^T$ and $Z_0(T)$ be the mean completion time and the completion time variability, respectively, of the jobs in T, i.e. the shortest (n-k) jobs. Then

$$Z(S) = Z'(S) + Z_0(T) + (n-k)(d - \bar{C}^T)^2.$$

Derivation.

$$Z(S) = \sum_{j \in S} (C_j - d)^2 + \sum_{j \in T} (C_j - d)^2 = Z'(S) + \sum_{j \in T} (C_j - \bar{C}^T)^2 + (n-k)(d - \bar{C}^T)^2 = Z'(S) + Z_0(T) + (n-k)(d - \bar{C}^T)^2,$$

where the middle step uses the fact that the cross term vanishes because $\bar{C}^T$ is the mean of the $C_j$, $j \in T$.

Note that while $Z_0(T)$ is independent of the partial schedule (A, ..., B), $\bar{C}^T$ depends on the partial schedule. Specifically, $\bar{C}^T = C_A + \bar{C}(T)$, where $\bar{C}(T)$ is the mean completion time of the jobs in T determined independently of (A, ..., B). If we define $\bar{C}_L^T = C_A + \bar{C}_L(T)$, where $\bar{C}_L(T)$ is a lower bound on $\bar{C}(T)$ (easily derived from an SPT ordering); $\bar{C}_U^T = C_A + \bar{C}_U(T)$, where $\bar{C}_U(T)$ is an upper bound on $\bar{C}(T)$ (easily derived from an LPT ordering); $Z_0^*(T)$ to be the optimal value of $Z_0(T)$; and $\bar{C}_O^T = C_A + \bar{C}_O(T)$, where $\bar{C}_O(T)$ is the mean completion time of the schedule that gives $Z_0^*(T)$, then a lower bound $\underline{Z}(S)$ and an upper bound $\bar{Z}(S)$ on Z(S) can be developed as follows.

BOUND 1A (for P1): Let $\bar{C}^T = \bar{C}_L^T$ if $d < \bar{C}_L^T$, $\bar{C}^T = \bar{C}_U^T$ if $d > \bar{C}_U^T$, and $\bar{C}^T = d$ otherwise. Then

$$\underline{Z}(S) = Z'(S) + Z_0^*(T) + (n-k)(d - \bar{C}^T)^2.$$

BOUND 1B (for P1): Let $\bar{C}^T = \bar{C}_O^T$. Then

$$\bar{Z}(S) = Z'(S) + Z_0^*(T) + (n-k)(d - \bar{C}^T)^2.$$
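All the quantities in Bound 1A are cheap to obtain once $Z_0^*(T)$ is known from the recursion of Section 4. A sketch in our own notation (the function name and signature are ours) computes $\bar{C}_L^T$ and $\bar{C}_U^T$ from the SPT and LPT orderings and clamps $\bar{C}^T$ accordingly:

```python
def bound_1a(z_prime, C_A, p_T, d, z0_star):
    """Lower bound on Z(S) for P1 (Bound 1A).
    z_prime: Z'(S), the already determined portion of Z(S);
    C_A: completion time of the front block A;
    p_T: processing times of the unscheduled (shortest) jobs T;
    z0_star: Z0*(T), the optimal completion-time variability of T."""
    m = len(p_T)

    def mean_completion(order):              # mean completion time of T
        t, s = C_A, 0                        # when processed after C_A
        for pj in order:
            t += pj
            s += t
        return s / m

    c_lo = mean_completion(sorted(p_T))                  # SPT gives C_L^T
    c_hi = mean_completion(sorted(p_T, reverse=True))    # LPT gives C_U^T
    c_T = min(max(d, c_lo), c_hi)            # clamp d into [C_L^T, C_U^T]
    return z_prime + z0_star + m * (d - c_T) ** 2
```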


RESULT 2 (for P2 & P3). Let $\bar{C}^P$ be the mean completion time of S: (A, ..., B). Then

$$Z_0(S) = Z_0'(S) + Z_0(T) + (k/n)(n-k)(\bar{C}^P - \bar{C}^T)^2.$$

Derivation. Let $\bar{C}$ be the unknown mean completion time of the schedule completed from S. Then

$$Z_0(S) = \sum_{j \in S} (C_j - \bar{C})^2 + \sum_{j \in T} (C_j - \bar{C})^2 = \sum_{j \in S} (C_j - \bar{C}^P)^2 + \sum_{j \in T} (C_j - \bar{C}^T)^2 + k(\bar{C} - \bar{C}^P)^2 + (n-k)(\bar{C} - \bar{C}^T)^2.$$

Noting that $\bar{C} = (1/n)[k\bar{C}^P + (n-k)\bar{C}^T]$, we have

$$Z_0(S) = Z_0'(S) + Z_0(T) + (k/n)(n-k)(\bar{C}^P - \bar{C}^T)^2.$$

A similar partitioning concept has also been used in [16], although in an entirely different context. A lower bound $\underline{Z}_0(S)$ and an upper bound $\bar{Z}_0(S)$ on $Z_0(S)$ can be developed from Result 2 as follows.

BOUND 2A (for P2 & P3): Let $\bar{C}^T = \bar{C}_L^T$ if $\bar{C}^P < \bar{C}_L^T$, $\bar{C}^T = \bar{C}_U^T$ if $\bar{C}^P > \bar{C}_U^T$, and $\bar{C}^T = \bar{C}^P$ otherwise. Then

$$\underline{Z}_0(S) = Z_0'(S) + Z_0^*(T) + (k/n)(n-k)(\bar{C}^P - \bar{C}^T)^2.$$

BOUND 2B (for P2 & P3): Let $\bar{C}^T = \bar{C}_O^T$. Then

$$\bar{Z}_0(S) = Z_0'(S) + Z_0^*(T) + (k/n)(n-k)(\bar{C}^P - \bar{C}^T)^2.$$

The bounds described above can be further strengthened by comparing them with similar bounds developed from the dual schedule of (A, ..., B), and selecting the tighter of these bounds as the values for $\underline{Z}_0(S)$ and $\bar{Z}_0(S)$.

3.2. Other approaches to bounding

If the application of the dominance properties discussed in Section 2.2 allows us to put tight limits on the $p_{[i]}$'s, it may be possible to develop tight bounds on Z(S) and $Z_0(S)$ using other methods. In one approach, we treat the problem as an $(n-k) \times (n-k)$ linear assignment problem (along the lines of [18]). The cost of an assignment is a lower bound on the true cost, accomplished by setting the $p_{[i]}$'s in the cost expression to their true values for jobs that are already scheduled and to their lower limits for jobs that are yet to be scheduled. The optimal objective function value of the assignment problem then provides the desired lower bound, while the true cost associated with the generated schedule provides an upper bound. In a second approach, we treat the problem as a quadratic program in which the $p_{[i]}$'s are the decision variables, and the objective is to optimize Z(S) or $Z_0(S)$ subject to the limits on the $p_{[i]}$'s derived from the dominance properties. Our computational experience indicates that the dominance properties do not usually lead to tight limits on the $p_{[i]}$'s, particularly for the inner positions of a schedule. Consequently, neither of the above formulations seems to produce bounds which are as good as the ones generated by the partitioning scheme.

3.3. Initial upper bound from a heuristic

Even though an upper bound is derived at each node of the B&B enumeration tree, we have, in general, found it computationally more efficient to start the enumeration with a good global upper bound. This global upper bound is generated by a simple heuristic, similar to the one proposed by Sundararaghavan and Ahmed [7] for the MAD problem. It quickly produces near-optimal solutions for both P1 and P3 (recall that P2 is initiated with the solution to P1 as its upper bound). Let $D = d$ for P1 and $D = d_U^*$ for P3. The heuristic can now be described as follows:

UB1. Set T = {1, ..., n}, A = B = ∅, $C_A = 0$, and $R_B = MS$.
UB2. Do until T = ∅: remove the first job (say job j) from T; if $D - C_A \leq R_B - D$, then insert j in front of B and let $R_B = R_B - p_j$; else insert j at the back of A and let $C_A = C_A + p_j$.
UB3. Deliver S: (A, B) as the heuristic schedule.
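A direct transcription of UB1-UB3 in Python (function and variable names are our own choices):

```python
def initial_schedule(p, D):
    """Heuristic of Section 3.3: take jobs longest first and put each one
    on whichever side of the target D currently has more room.
    Returns (A, B); the delivered schedule is A followed by B."""
    T = sorted(p, reverse=True)        # UB1: T = {1, ..., n} in LPT order,
    A, B = [], []                      #      A = B = empty,
    C_A, R_B = 0, sum(p)               #      C_A = 0, R_B = MS
    for pj in T:                       # UB2: until T is empty
        if D - C_A <= R_B - D:
            B.insert(0, pj)            # insert j in front of B
            R_B -= pj
        else:
            A.append(pj)               # insert j at the back of A
            C_A += pj
    return A, B                        # UB3: deliver S: (A, B)
```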


4. IMPLEMENTATION

Our B&B implementation is an extension of the partitioning approach we have adopted for the development of our bounds.

4.1. The recursive scheme

Recall that for a partial schedule S: (A, ..., B) with the k longest jobs scheduled, the computation of bounds on both Z(S) and $Z_0(S)$ requires the knowledge of $Z_0^*(T)$, which represents the optimal variability for any schedule formed with the shortest (n-k) jobs. It is only natural that we derive $Z_0^*(T)$ by solving P3 for these (n-k) jobs. This implies that in order to solve a problem with n jobs, we recursively solve P3 with 1, ..., (n-1) jobs. One positive aspect of this approach is that even though solving P may involve solving some combination of P1, P2, and P3, we need to perform the recursion only once. If the recursion becomes excessively time consuming, one might consider substituting $Z_0^*(T)$ with a lower bound that can be obtained relatively quickly. We describe below a B&B-based heuristic which provides a solution that is guaranteed to be within ε% above the optimal. A lower bound can be produced if we multiply this solution by 1/(1 + 0.01ε). This approach could thus be used to generate the lower bounds on $Z_0^*(T)$ for the first (n-1) stages in the recursion. Our experience, however, indicates that the use of this approach to obtain an optimal solution does not consistently yield reductions in the overall computation time. Nevertheless, the heuristic itself remains useful if we are satisfied with solutions that are within a specified ε% of the optimal.

4.2. A heuristic with constant performance guarantee

The heuristic we propose is essentially the recursive implementation of the B&B algorithm with a modified fathoming criterion. It is motivated by the recognition that the problem we are trying to solve is characterized by a multiplicity of near-optimal solutions, and that most of the time spent by the B&B algorithm is consumed not in reaching an optimal solution, but in establishing the optimality of the solution. In the heuristic, we fathom a node if the lower bound, $\underline{Z}(S)$ or $\underline{Z}_0(S)$, at that node is greater than or equal to 1/(1 + 0.01ε) times the global upper bound, $\bar{Z}(S)$ or $\bar{Z}_0(S)$, where ε is the maximum allowable error specified as a percentage. This results in rapid fathoming, and at the end of the enumeration produces a solution that is guaranteed to be within ε% of the optimal. Even for small values of ε chosen, the heuristic produces quick solutions to P1, P2, or P3, and consequently to P with the same ε% guarantee. In solving P, essentially the same steps as described in Section 2.3 are followed; only this time the heuristic is used instead of the exhaustive B&B. In addition, a minor modification is needed for the case when $d \geq d_L^*$ (Step BB3). Since the heuristic cannot identify the exact value of $d^*$, for $d_L^* \leq d < d_U^*$ the computations of Step BB2 are repeated to obtain a solution to P. However, for $d \geq d_U^*$ only P3 needs to be solved, and the solution to P3 provides the solution to P. Furthermore, it should be noted that in trying to obtain a heuristic solution to P, the recursion need not be carried out optimally. As described in the preceding section, the heuristic itself can be applied recursively (viz., with a sliding ε, e.g. ε increasing as a linear function of n from zero to its maximum specified value) to yield the lower bounds on $Z_0^*(T)$.
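In code, the heuristic differs from the exact algorithm only in this single fathoming comparison (a sketch in our own notation; setting eps to zero recovers the exact test):

```python
def fathomed(node_lower_bound, global_upper_bound, eps=0.0):
    """Return True if the node can be discarded. With eps > 0 (a percentage),
    any node whose bound is within eps% of the incumbent is fathomed, so the
    final solution is guaranteed to be within eps% of the optimum; dividing
    the incumbent by (1 + 0.01 * eps) then yields a valid lower bound."""
    return node_lower_bound >= global_upper_bound / (1.0 + 0.01 * eps)
```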

5. COMPUTATIONAL EXPERIENCE

To compare the performance of our B&B implementation with that of the enumerative scheme of BSC, we report the CPU times to solve problems P1 and P3 using the two approaches. As for problem P2, we report the B&B CPU times; no comparison, however, is made with BSC since the latter did not address this problem. All our computer runs have been made on a VAX 8600 single-CPU machine. We have tried our utmost to maintain the same coding standard for both approaches. For the same class of processing time distribution (viz., discrete uniform over 1-99), our coding of the BSC procedure, in fact, consistently produces better CPU times on the VAX than those on the dual-CPU CDC Cyber 170 machine reported in BSC. Also note that the B&B times reported for P1, P2, and P3 include the recursion times needed to develop the $Z_0^*(T)$'s. These recursion times are given in Table 1. When solving P, the B&B scheme of Section 2.3, depending on the value of d, may entail solving {P1}, {P1 and P2}, {P3}, or {P3, P1 and P2}; however, the overhead due to recursion is incurred only once.


Table 1. B&B recursion times for P1, P2 and P3*

Distribution type    n = 20      n = 25      n = 30      n = 40
1                    00:00.10    00:00.31    00:00.42    00:02.64
2                    00:00.10    00:00.20    00:00.77    00:02.20
3                    00:00.12    00:00.23    00:00.94    00:29.00
4                    00:00.10    00:00.25    00:00.52    00:04.52
5                    00:00.10    00:00.20    00:00.51    00:05.31
6                    00:00.11    00:00.32    00:00.77    00:22.72
MIN                  00:00.05    00:00.17    00:00.35    00:01.62
MAX                  00:00.20    00:00.66    00:01.62    01:11.08

All times are reported as m:s.s, where m and s indicate minutes and seconds, respectively. *Median CPU times for individual distributions; minimum and maximum times over all distributions.

5.1. Problem generation

We have tested the algorithm for n = 10-40 in steps of 5, though for the sake of brevity, results are reported only for n = 20, 25, 30, and 40. For a given n, the $p_j$'s are generated from 6 different distributions, each distribution contributing a random sample of 5 job sets. The $p_j$'s are chosen to be integers between 1 and 99. Finally, for P1 and P2, we have selected the same levels for d as used in BSC, viz., d = 0.2d*, 0.5d*, and 0.8d*.

The different processing time distributions are represented by mixtures of discrete uniform distributions. The probability function of any of the distributions used can be expressed as $h(x) = a_1 h_1(x) + a_2 h_2(x) + a_3 h_3(x)$, where

$h_1(x) = 1/(x_1 - x_0)$ for $x_0 + 1 \leq x \leq x_1$; 0 elsewhere,
$h_2(x) = 1/(x_2 - x_1)$ for $x_1 + 1 \leq x \leq x_2$; 0 elsewhere, and
$h_3(x) = 1/(x_3 - x_2)$ for $x_2 + 1 \leq x \leq x_3$; 0 elsewhere.

For all types of distributions, we have chosen $x_0 = 0$, $x_1 = 33$, $x_2 = 66$, and $x_3 = 99$. The other parameters are assigned values as follows:

Type 1. $a_1 = 0.40$, $a_2 = 0.20$, and $a_3 = 0.40$ (Bimodal-symmetric).
Type 2. $a_1 = 0.60$, $a_2 = 0.20$, and $a_3 = 0.20$ (Skewed-right).
Type 3. $a_1 = 0.20$, $a_2 = 0.60$, and $a_3 = 0.20$ (Unimodal-symmetric).
Type 4. $a_1 = 0.33$, $a_2 = 0.33$, and $a_3 = 0.33$ (Uniform, high $\sigma^2$).
Type 5. $a_1 = 1.00$, $a_2 = 0.00$, and $a_3 = 0.00$ (Uniform, low $\sigma^2$).
Type 6. $a_1 = 0.20$, $a_2 = 0.20$, and $a_3 = 0.60$ (Skewed-left).
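For concreteness, the mixture generator just described can be sampled as follows (a sketch of our own; the paper does not give code):

```python
import random

BREAKS = (0, 33, 66, 99)                     # x0, x1, x2, x3
WEIGHTS = {                                   # (a1, a2, a3) for each type
    1: (0.40, 0.20, 0.40),                    # bimodal-symmetric
    2: (0.60, 0.20, 0.20),                    # skewed-right
    3: (0.20, 0.60, 0.20),                    # unimodal-symmetric
    4: (0.33, 0.33, 0.33),                    # uniform, high variance
    5: (1.00, 0.00, 0.00),                    # uniform, low variance
    6: (0.20, 0.20, 0.60),                    # skewed-left
}

def sample_processing_times(dist_type, n, rng=random):
    """Draw n integer processing times from the mixture of discrete
    uniforms over (x0, x1], (x1, x2] and (x2, x3]."""
    weights = WEIGHTS[dist_type]
    times = []
    for _ in range(n):
        i = rng.choices((0, 1, 2), weights=weights)[0]   # pick a component
        times.append(rng.randint(BREAKS[i] + 1, BREAKS[i + 1]))
    return times

# Example: one job set of 25 jobs from the skewed-left distribution.
print(sample_processing_times(6, 25))
```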

5.2. Results and discussion

In Tables 2, 3-5, and 6 we report computational results for problems P3, P1, and P2, respectively. In each of these tables, the median CPU times for individual distributions, as well as the minimum, maximum, and average CPU times over all distributions, are reported. These tables clearly demonstrate that our B&B algorithm is significantly faster than the BSC algorithm. For problem P3 (Table 2), the BSC approach has failed to produce optimal solutions within 6 CPU hr for n ≥ 30, while the B&B approach has never required more than 3 CPU sec and 2 CPU min for n = 30 and n = 40, respectively. Because the BSC algorithm is an enumerative procedure whose complexity increases exponentially with n, one would strongly suspect that none of the job sets with n = 40 would execute in less than 6 CPU hr using this algorithm. As a result, we have only run a small sample of job sets using the BSC algorithm for n = 40, and observed that each run has indeed required more than 6 CPU hr.

Table 2. Results for P3

Distribution      n = 20                n = 25                n = 30            n = 40
type           BSC       B&B        BSC       B&B        BSC    B&B        BSC*   B&B
1            00:18.22  00:00.17   11:40.81  00:00.34    >>    00:00.43    >>    00:03.16
2            00:18.85  00:00.09   10:31.48  00:00.22    >>    00:00.81    >>    00:02.16
3            00:20.62  00:00.13   12:17.78  00:00.25    >>    00:00.98    >>    00:34.55
4            00:18.63  00:00.08   11:20.74  00:00.23    >>    00:00.61    >>    00:06.17
5            00:19.73  00:00.10   11:20.50  00:00.19    >>    00:00.89    >>    00:05.98
6            00:20.59  00:00.08   11:36.41  00:00.38    >>    00:01.39    >>    00:59.81
MIN          00:16.63  00:00.06   10:00.12  00:00.14    >>    00:00.29    >>    00:01.45
MAX          00:22.20  00:00.20   12:38.73  00:00.80    >>    00:02.74    >>    01:48.77
AVG          00:19.32  00:00.11   11:24.85  00:00.32    >>    00:00.98    >>    00:23.09

All times are reported as m:s.s, where m and s indicate minutes and seconds, respectively. >> More than 6 hr of CPU time. *Based upon a sample of 5 job sets.

Table 3. Results for P1 (d = 0.2d*)

Distribution      n = 20                n = 25                n = 30                n = 40
type           BSC       B&B        BSC       B&B        BSC       B&B        BSC        B&B
1            00:00.23  00:00.17   00:00.92  00:00.40   00:14.76  00:00.54   12:16.44   00:03.05
2            00:00.33  00:00.17   00:03.64  00:00.30   00:24.38  00:00.96   48:17.58   00:02.65
3            00:00.11  00:00.18   00:00.58  00:00.33   00:05.06  00:01.11   03:04.73   00:29.79
4            00:00.16  00:00.15   00:01.35  00:00.35   00:07.18  00:00.66   07:37.87   00:04.99
5            00:00.18  00:00.16   00:00.97  00:00.26   00:06.88  00:00.95   0?:05.42   00:05.80
6            00:00.11  00:00.17   00:00.77  00:00.43   00:04.40  00:00.96   02:12.49   00:21.55
MIN          00:00.09  00:00.14   00:00.45  00:00.25   00:02.50  00:00.46   01:34.81   00:01.95
MAX          00:00.66  00:00.27   00:05.67  00:00.77   00:58.06  00:01.81   >          01:12.34
AVG          00:00.21  00:00.18   00:01.38  00:00.39   00:11.79  00:00.92   12:4?      00:16.52

All times are reported as m:s.s, where m and s indicate minutes and seconds, respectively. > More than 1 hr of CPU time.

Table 4. Results for P1 (d = 0.5d*)

Distribution      n = 20                n = 25                n = 30              n = 40
type           BSC       B&B        BSC       B&B        BSC       B&B        BSC*   B&B
1            00:08.75  00:00.21   02:17.97  00:00.59   41:??.62  00:01.06    >>    00:09.83
2            00:12.14  00:00.21   04:22.87  00:00.38   49:39.29  00:01.41    >>    00:06.80
3            00:07.14  00:00.26   02:04.89  00:00.55   46:39.33  00:02.67    >>    00:22.18
4            00:09.70  00:00.22   02:51.90  00:00.43   38:32.19  00:01.27    >>    00:15.65
5            00:08.39  00:00.20   02:19.18  00:00.44   ?         00:01.80    >>    00:16.52
6            00:06.41  00:00.25   02:00.16  00:01.01   56:??.54  00:02.39    >>    02:23.51
MIN          00:06.11  00:00.16   01:36.58  00:00.29   30:00.10  00:00.63    >>    00:03.11
MAX          00:19.67  00:00.35   05:26.96  00:01.91   >         00:04.68    >>    03:07.10
AVG          00:08.68  00:00.23   02:40.37  00:00.67   ?         00:01.86    >>    00:41.89

All times are reported as m:s.s, where m and s indicate minutes and seconds, respectively. > More than 1 hr of CPU time. >> More than 6 hr of CPU time. *Based upon a sample of 5 job sets.

While solving P1 for small values of d (Table 3), both algorithms perform well, although the times required by the BSC algorithm begin to deteriorate at n = 40. This deterioration magnifies as d increases. In Table 4, we see that the minimum time required by the BSC approach has been over 30 min for n = 30, while the maximum time spent by the B&B algorithm has been about 3 min even for n = 40. When d is slightly smaller than d* (Table 5), the BSC algorithm requires about 30 min to obtain optimal schedules for n = 25 and no less than 6 hr for n = 30. On the other hand, the B&B algorithm does very well until n = 40. Even at n = 40, the algorithm continues to perform well on distribution types 1, 2, 4, and 5, but begins to show some deterioration with types 3 and 6. A closer examination of the distributions used to generate the $p_j$'s for types 3 and 6 reveals that these job sets contain a large proportion of large and moderate sized jobs; on the average, these job sets have 80% of their $p_j$'s exceeding 33. Consequently, fathoming does not take place early in the B&B enumeration, resulting in longer CPU times.


Table 5. Results for P1 (d = 0.8d*)

Distribution      n = 20                n = 25                n = 30            n = 40
type           BSC       B&B        BSC       B&B        BSC    B&B        BSC*   B&B
1            00:59.00  00:00.25   29:52.41  00:01.15    >>    00:02.07    >>    00:31.97
2            01:09.44  00:00.22   35:46.15  00:00.50    >>    00:01.36    >>    00:19.48
3            01:00.05  00:00.34   30:53.48  00:01.14    >>    00:09.24    >>    13:13.12
4            00:59.38  00:00.29   31:15.39  00:00.99    >>    00:04.09    >>    01:13.19
5            00:59.66  00:00.26   29:14.87  00:00.93    >>    00:03.12    >>    02:31.24
6            00:56.07  00:00.37   28:34.17  00:02.85    >>    00:07.89    >>    36:44.50
MIN          00:55.58  00:00.16   28:12.17  00:00.36    >>    00:00.99    >>    00:04.55
MAX          01:19.17  00:00.57   37:35.02  00:09.21    >>    00:16.89    >>    46:44.66
AVG          01:00.57  00:00.31   30:54.44  00:01.70    >>    00:05.55    >>    08:55.68

All times are reported as m:s.s, where m and s indicate minutes and seconds, respectively. >> More than 6 hr of CPU time. *Based upon a sample of 5 job sets.

Table 6. Results for P2

Distribution           n = 30                                n = 40
type           d = 0.2d*  d = 0.5d*  d = 0.8d*    d = 0.2d*  d = 0.5d*  d = 0.8d*
1              00:00.48   00:00.49   00:00.54     00:02.73   00:02.73   00:03.83
2              00:00.84   00:00.83   00:00.84     00:02.30   00:02.29   00:02.33
3              00:01.00   00:01.01   00:01.02     00:29.09   00:29.09   00:29.12
4              00:00.58   00:00.56   00:00.61     00:04.63   00:04.61   00:05.23
5              00:00.87   00:00.88   00:00.92     00:05.42   00:05.38   00:05.77
6              00:00.83   00:00.83   00:00.88     00:34.82   00:34.46   00:34.46
MIN            00:00.41   00:00.41   00:00.44     00:01.70   00:01.70   00:01.88
MAX            00:01.69   00:01.70   00:01.71     01:11.18   01:11.17   01:11.22
AVG            00:00.83   00:00.84   00:00.85     00:16.01   00:16.01   00:16.36

All times are reported as m:s.s, where m and s indicate minutes and seconds, respectively.

Table 7. Heuristic results

Procedure               P3          P1 (d = 0.8d*)
B&B                     01:48.77    46:44.66
Heuristic (ε = 1%)      00:15.58    03:32.99
Heuristic (ε = 5%)      00:00.50    00:12.53

All times are reported as m:s.s, where m and s indicate minutes and seconds, respectively.

Note that the value of d* is approx. 50% of the makespan. Consequently, in Table 5, where d is close to d*, d is also near the middle of the makespan, thereby leading to a fairly symmetric scheduling problem. It becomes less clear for this case whether the large jobs should be scheduled near the beginning or near the end. The absence of fathoming mentioned above is typically not a problem for smaller values of d because for such d's, as in Tables 3 and 4, the larger jobs tend to be placed at the end of the schedule. Table 6 demonstrates that problem P2 can be solved very quickly once problem P1 is solved. Recall that P2 is solved in a similar way to P3 except that the additional fathoming rule $\bar{C}_L > d$ is checked at each node, where $\bar{C}_L$ denotes the smallest mean completion time achievable from the node. In Section 4.2, we have described a B&B-based heuristic that has a constant performance guarantee; in other words, we can generate a schedule that has a value Z(S) which is guaranteed to be within ε% of the optimal Z*(S). Table 7 shows the CPU times required by the heuristic when applied to the job sets with the worst B&B solution times for problems P3 and P1. It is evident from this table that near-optimal solutions can be generated and confirmed very quickly. We may also point out that the true quality of the heuristic is typically much better than what the guarantee implies. In fact, the heuristic solution for any problem has never exceeded the optimal solution by more than 0.015%, regardless of the value of ε chosen (1 or 5%).


6. CONCLUSIONS

In this paper, we have discussed solution concepts for the MSD problem. A highly effective bounding scheme and a branch and bound algorithm based on that scheme have been proposed. The algorithm has been implemented using a recursive approach. Computational results for the various subproblems of the overall MSD problem have been presented for six different distributions of processing times. These results show that this approach is significantly better than the existing methodology. In addition to the branch and bound algorithm, the paper also presents a very fast heuristic that provides solutions that are guaranteed to be within a specified percentage of the optimal.

REFERENCES

1. J. B. Sidney, Optimal single-machine scheduling with earliness and tardiness penalties. Opns Res. 25, 62-69 (1977).
2. S. Lakshminarayan, R. Lakshmanan, R. L. Papineau and R. Rochette, Optimal single-machine scheduling with earliness and tardiness penalties. Opns Res. 26, 1079-1082 (1978).
3. K. R. Baker and G. D. Scudder, Sequencing with Earliness and Tardiness Penalties: A Review. Working Paper No. 226, The Amos Tuck School of Business Administration, Dartmouth College, Hanover, N.H. (1988).
4. S. K. Gupta and T. Sen, Minimizing a quadratic function of job lateness on a single machine. Engng Costs Product. Econ. 7, 187-194 (1983).
5. T. D. Fry, R. D. Armstrong and J. H. Blackstone, Minimizing weighted absolute deviation in single machine scheduling. IIE Trans. 19, 445-450 (1987).
6. J. J. Kanet, Minimizing the average deviation of job completion times about a common due date. Naval Res. Log. Q. 28, 643-651 (1981).
7. P. S. Sundararaghavan and M. U. Ahmed, Minimizing the sum of absolute lateness in single-machine and multimachine scheduling. Naval Res. Log. Q. 31, 325-333 (1984).
8. N. G. Hall, Single- and multiple-processor models for minimizing completion time variance. Naval Res. Log. Q. 33, 49-54 (1986).
9. U. Bagchi, R. S. Sullivan and Y. L. Chang, Minimizing mean absolute deviation of completion times about a common due date. Naval Res. Log. Q. 33, 227-240 (1986).
10. S. S. Panwalker, M. L. Smith and A. Seidmann, Common due date assignment to minimize total penalty for the one machine scheduling problem. Opns Res. 30, 391-399 (1982).
11. U. Bagchi, Y. L. Chang and R. S. Sullivan, Minimizing absolute and squared deviations of completion times with different earliness and tardiness penalties and a common due date. Naval Res. Log. Q. 34, 739-751 (1987).
12. U. Bagchi, R. S. Sullivan and Y. L. Chang, Minimizing mean squared deviation of completion times about a common due date. Mgmt Sci. 33, 894-906 (1987).
13. A. G. Merten and M. E. Muller, Variance minimization in single machine sequencing problems. Mgmt Sci. 18, 518-528 (1972).
14. L. Schrage, Minimizing the time-in-system variance for a finite jobset. Mgmt Sci. 21, 540-543 (1975).
15. S. Eilon and I. G. Chowdhury, Minimizing waiting time variance in the single machine problem. Mgmt Sci. 23, 567-575 (1977).
16. V. Vani and M. Raghavachari, Deterministic and random single machine sequencing with variance minimization. Opns Res. 35, 111-120 (1987).
17. P. De, J. B. Ghosh and C. E. Wells, A note on the minimization of mean squared deviation of completion times about a common due date. Mgmt Sci. 35, 1143-1147 (1989).
18. A. H. G. Rinnooy Kan, B. J. Lageweg and J. K. Lenstra, Minimizing total costs in one-machine scheduling. Opns Res. 23, 908-927 (1975).