Computers & Operations Research 28 (2001) 193–207
Decomposition methods for large job shops

Marcos Singer
Escuela de Administración, Pontificia Universidad Católica de Chile, Casilla 76, Correo 17, Santiago, Chile

Received 1 September 1998; received in revised form 1 March 1999

This research has been partially supported by FONDECYT project number 197/1021.
Abstract

A rolling horizon heuristic is presented for large job shops in which the total weighted tardiness must be minimized. The method divides a given instance into a number of subproblems, each corresponding to a time window of the overall schedule, which are solved using a shifting bottleneck heuristic. A number of rules for defining each time window are derived. The method is tested on instances with up to 10 machines and 100 operations per machine, outperforming a shifting bottleneck heuristic that has been shown to generate close to optimal results.

Scope and purpose

There has been a significant amount of research focused on the scheduling of a job shop, either minimizing the makespan or the tardiness. Although the results for small-size problems are satisfactory, no approach has yet dealt with middle- and large-size problems. This paper presents a heuristic that decomposes the problem on a time-window basis, solving each subproblem using a shifting bottleneck heuristic. Its results for a due-date-related objective function are promising. © 2000 Elsevier Science Ltd. All rights reserved.

Keywords: Scheduling; Job shop; Rolling horizon
1. Introduction

The job shop problem can be described as follows: a plant with m machines must process n jobs. Each job j has a release date $r_j$ and a predetermined routing that it has to follow through the different machines. The processing of job j on machine i is referred to as operation (i, j), with processing time $p_{ij}$.
Let $C_j$ denote the time at which job j completes its last operation, and define the makespan as $C_{\max} = \max_j C_j$. A significant amount of research has been devoted to the problem of finding the schedule that minimizes $C_{\max}$, denoted by $Jm\,|\,r_j\,|\,C_{\max}$. As noticed by Wein and Chevalier [1], such an objective function is equivalent to minimizing the cycle time, which by Little's law implies maximizing the throughput and therefore minimizing costs. In very general terms, two types of techniques have been developed: exact methods and heuristics. Among the first is the branch-and-bound method developed by Carlier and Pinson [2], which was the first to solve the well-known MT10 instance posted by Fisher and Thompson [3], an instance that remained unsolved for 25 years despite various attempts. Brucker et al. [4] and Martin and Shmoys [5], who use different problem formulations, branching schemes and lower-bound methods, provide other interesting results. Among the heuristics, one of the most successful approaches is the Shifting Bottleneck procedure proposed by Adams et al. [6], which is explained in Section 3. Dauzere-Peres and Lasserre [7], Applegate and Cook [8] and Balas et al. [9] later enhanced this method. There have also been other developments on local search methods, such as the ones presented by van Laarhoven et al. [10], Taillard [11], Barnes and Chambers [12] and Nowicki and Smutnicki [13].

A different line of research has focused on objective functions related to due-date performance, an objective related to service quality rather than throughput. Such research studies the effectiveness rather than the efficiency of the job shop in meeting the clients' requests, an objective that is gaining relevance as companies compete not only on price but also on punctuality. Assume that each job j has a due date $d_j$ and a weight $w_j$, and define $L_j = C_j - d_j$ as the lateness of job j and $T_j = \max(L_j, 0)$ as the tardiness of job j. Ovacik and Uzsoy [14] have modified the Shifting Bottleneck heuristic for finding the schedule that minimizes the maximum lateness, denoted by $Jm\,|\,r_j, s_{ij}\,|\,L_{\max}$, where $s_{ij}$ denotes the presence of sequence-dependent setup times.

This work considers the problem of minimizing the total weighted tardiness in a job shop, denoted by $Jm\,|\,r_j\,|\,\sum w_j T_j$. It is strongly NP-hard since it is a generalization of the single-machine problem $1\,||\,\sum w_j T_j$, which Lenstra et al. [15] proved to be strongly NP-hard. Table 1 shows the input data for a total weighted tardiness job shop instance with 3 machines and 3 jobs (3 × 3) that is interpreted as follows. Step 1 of job 1 is processed on machine A with a processing time equal to 5. Step 2 of job 1 is processed on machine B with a processing time equal to 10, and so on. The weight of job 1 is equal to 1, its release date is 5, and its due date is 24. Singer and Pinedo [16] describe a branch-and-bound method capable of solving 10 × 10 instances, including various total weighted tardiness versions of the MT10. Among the heuristic methods, Vepsalainen and Morton [17] study a number of dispatching rules, which assign priorities to the operations not yet processed and then schedule them in decreasing order of priority. They test a number of rules and conclude that the Apparent Tardiness Cost (ATC) rule achieves the best results. Wu et al. [18] use a graph-decomposition approach in order to generate robust schedules, while Pinedo and Singer [19] develop a Shifting Bottleneck heuristic for minimizing the total weighted tardiness (SB-TWT), showing that it clearly outperforms an enhanced version of the ATC rule.

Most of the methods mentioned above are incapable of solving instances larger than 10 × 30.
Branch-and-bound methods have exponential complexity; dispatching rules have very low effectiveness on large instances; shifting bottleneck heuristics are very sensitive to the number of operations per machine; and local search algorithms require good initial solutions or otherwise
take a very long time to reach close to optimal results. Given that an instance with 30 jobs is a rather small problem in a realistic industrial context, it seems necessary to have an algorithm capable of solving larger instances. This paper presents a decomposition method that minimizes the total weighted tardiness of job shops with 100 jobs or more, using a time-decomposition technique called the rolling horizon heuristic, in which the SB-TWT heuristic is used as a subroutine.
2. The rolling horizon heuristic

Rolling horizon heuristics divide the problem into time windows that are optimized independently, as shown by Ovacik and Uzsoy [20] for a single-machine scheduling problem with sequence-dependent setup times. In order to prevent the schedule for one time window from interfering with the next one, the time windows have a degree of overlapping that is defined as follows. Let u denote the size of the time window subproblem to be optimized, and let $v \le u$ denote the number of operations that will be 'frozen', i.e., scheduled according to the solution for the subproblem. Fig. 1 shows an example where u = 4 and v = 3. In the first stage, a subproblem of four operations is optimized, yielding the sequence A → B → C → D. In the second stage, the partial sequence A → B → C is frozen and operations D, E, F, and G are optimized, and so on.

The application to the job shop problem is as follows: instead of solving an entire 10 × 50 instance, an initial 10 × 10 instance is defined corresponding to the first time window. After this window is optimized, only 80% of its operations are frozen, while the rest are included in a second time window. This process is repeated until the entire job shop has been scheduled, as described by Fig. 2. In the above heuristic one must decide how to assign operations to each time window, and how to optimize each time window. The assignment of operations includes deciding how many windows to use and how much overlapping is acceptable, a matter that can be better illustrated after explaining the method for optimizing each time window.
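Stated as code, the loop is short. The following Python sketch is only a minimal rendering under assumed interfaces: operations is an ordered collection, and solve_window stands in for whatever optimizer is applied to each window (Section 3); none of these names come from the paper. With window_size = 4 and freeze_count = 3 it reproduces the behaviour of Fig. 1.

    # Sketch of the rolling horizon loop: optimize a window of u operations,
    # freeze the first v of them, and roll the remaining u - v into the next
    # window together with fresh operations.
    def rolling_horizon(operations, window_size, freeze_count, solve_window):
        frozen = []                                    # operations fixed so far
        remaining = list(operations)                   # operations not yet frozen
        while remaining:
            window = remaining[:window_size]
            sequence = solve_window(frozen, window)    # optimized order of the window
            if len(remaining) <= window_size:
                frozen.extend(sequence)                # last window: freeze everything
                break
            frozen.extend(sequence[:freeze_count])     # freeze the first v operations
            remaining = sequence[freeze_count:] + remaining[window_size:]
        return frozen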
Fig. 1. The rolling horizon heuristic on a single-machine problem.
Fig. 2. Rolling horizon heuristic in a job shop.
3. Optimization of each time window

The SB-TWT heuristic follows a divide-and-conquer approach that consists of three main steps: subproblem formulation, subproblem optimization and bottleneck selection. The subproblem formulation creates a subproblem for each machine that has not yet been scheduled. In order to do this, it defines a disjunctive graph G(N, A, B), adapted from the one proposed by Balas [21] and depicted in Fig. 3, which shows a feasible solution for the instance in Table 1. The set of nodes N contains one element for each operation (i, j), one source node U and n sink nodes $V_j$. The set of conjunctive arcs $A = \{(i, j) \to (k, j)\}$ contains the arcs that connect the nodes representing each pair of consecutive operations (i, j) and (k, j) of job j. Each arc $(i, j) \to (k, j)$ has a length $|(i, j) \to (k, j)| = p_{ij}$ and represents the constraint that operation (k, j) may be started no less than $p_{ij}$ time units after operation (i, j) has started. The node that represents the final operation of job j, say (h, j), has an arc of length $p_{hj}$ incident to $V_j$. The source node U has n outgoing arcs, each one incident to the first operation of job j, j = 1, …, n, with a length equal to $r_j$.

Let $N_i$ denote the set of nodes corresponding to the operations processed on machine i. The set of disjunctive arcs $B = \{(i, j) \leftrightarrow (i, k)\}$ has, for every pair of nodes (i, j) and (i, k) in $N_i$, two arcs going in opposite directions. The arc $(i, j) \to (i, k)$ has a length $|(i, j) \to (i, k)| = p_{ij}$ and the arc $(i, k) \to (i, j)$ has a length $|(i, k) \to (i, j)| = p_{ik}$. Each pair of arcs represents the fact that two operations cannot be processed simultaneously on the same machine. Fixing an arc, in one direction or the other, corresponds to the decision of which operation will come first. For instance, fixing arc $(i, j) \to (i, k)$ implies that operation (i, k) is processed after operation (i, j). Let $\sigma(B)$ denote a selection of disjunctive arcs from B. Any solution for the job shop problem is equivalent to some $\sigma(B)$ that has exactly one arc from every pair $(i, j) \leftrightarrow (i, k)$, such that the resulting graph $G(N, A, \sigma(B))$ is acyclic. Conversely, any selection $\sigma(B)$ satisfying the above properties corresponds to a feasible schedule. In Fig. 3 the selection of the disjunctive arcs $(A, 2) \to (A, 3)$ and $(A, 3) \to (A, 1)$ means that machine A first processes job 2, then job 3 and finally job 1. Let $L(v, v')$ denote the length of the critical (longest) path from node v to node v' in graph $G(N, A, \sigma(B))$ (if there is no path, then $L(v, v')$ is not defined). The completion time $C_j$ of job j is equal to $L(U, V_j)$. The values $L(U, v)$, v being any node, are computed using the algorithm by Bellman [22], which has complexity O(|N|) since each node has at most two incoming arcs.

The disjunctive graph representation allows the calculation of release dates, due dates, and the so-called delayed precedence constraints for each operation on a given machine i, where such data depend on the schedule of the other machines. A solution for the optimization of subproblem i corresponds to a sequence of operations for machine i, and the value of this solution provides an estimate of the increase in the job shop's total weighted tardiness caused by this machine's schedule.
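Because $G(N, A, \sigma(B))$ is acyclic, the values $L(U, v)$ can be computed by relaxing arcs in topological order, which is one way to realize the O(|N|) computation cited above. The Python sketch below assumes an adjacency-list representation, node → list of (successor, arc length); the representation and names are illustrative, since the paper specifies only the use of Bellman's algorithm. The completion times then follow as $C_j = L(U, V_j)$, and the objective as $\sum_j w_j \max(C_j - d_j, 0)$.

    # Longest paths from `source` in a DAG given as {node: [(successor, length)]}.
    # Nodes with no path from `source` are absent from the result, matching the
    # convention that L(v, v') is undefined when no path exists.
    from collections import deque

    def longest_paths(arcs, source):
        indegree = {}
        for v, out in arcs.items():
            indegree.setdefault(v, 0)
            for w, _ in out:
                indegree[w] = indegree.get(w, 0) + 1
        L = {source: 0}
        queue = deque(v for v, d in indegree.items() if d == 0)
        while queue:                       # nodes leave the queue in topological order
            v = queue.popleft()
            for w, length in arcs.get(v, []):
                if v in L:                 # relax: keep the longest path length
                    L[w] = max(L.get(w, float("-inf")), L[v] + length)
                indegree[w] -= 1
                if indegree[w] == 0:
                    queue.append(w)
        return L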
Fig. 3. Selection of disjunctive arcs and corresponding schedule.
Table 1
Input data for a total weighted tardiness job shop problem

Job   Machine and processing time   w_j   r_j   d_j
1     A 5,  B 10,  C 4              1     5     24
2     C 4,  A 5,   B 6              2     0     18
3     C 5,  B 3,   A 7              2     0     16
The subproblem optimization step finds a solution for each subproblem that, once mapped onto the job shop, minimizes such an estimate. The optimization is performed using a partial enumeration heuristic in which each node of the enumeration tree corresponds to a partial schedule on the machine. The complexity of this heuristic grows exponentially with the number of operations, a fact that explains why this method cannot be used on very large instances. Finally, the bottleneck selection step inserts into the job shop the schedule of the machine having the maximum estimate of the increase in the job shop's total weighted tardiness. This insertion is done by adding onto graph G the arcs that represent such a machine schedule.

The three steps above can be performed in a one-pass manner that first schedules the machine with the highest optimal value, next the machine with the second highest optimal value, and so on until all the machines are scheduled. Reoptimization can enhance this mechanism by reoptimizing, each time a new machine is scheduled, those machines that were scheduled in previous iterations. One can also use backtracking, which consists of trying different orders in which the machines are
scheduled. The backtracking with reoptimization technique used in the SB-II heuristic by Adams et al. [6] is a combination of the schemes presented above.
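The control flow of the one-pass variant is easy to isolate from the single-machine machinery. In the Python skeleton below, solve_subproblem and insert_schedule are assumed names that hide the subproblem formulation/optimization and the graph update; the sketch shows only the formulate-optimize-select loop, and reoptimization or backtracking would be wrapped around it.

    # One-pass shifting bottleneck skeleton: repeatedly optimize a subproblem per
    # unscheduled machine, then fix the machine with the largest estimated
    # increase in total weighted tardiness.
    def shifting_bottleneck(machines, graph, solve_subproblem, insert_schedule):
        unscheduled = set(machines)
        while unscheduled:
            # (estimate, sequence) for every machine not yet scheduled
            results = {m: solve_subproblem(m, graph) for m in unscheduled}
            bottleneck = max(unscheduled, key=lambda m: results[m][0])
            insert_schedule(graph, bottleneck, results[bottleneck][1])
            unscheduled.remove(bottleneck)
        return graph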
4. Assignment of operations to time windows

The assumption behind divide-and-conquer heuristics, such as the one presented in this paper, is that any given problem instance can be divided into a number of subproblems whose solutions can be combined into a solution to the original problem. Successful combination of the solutions is affected by how uncoupled the subproblems are. If the subproblems depend on one another, the optimization of one cannot be performed without considering the others. In the Shifting Bottleneck heuristic, such isolation of the subproblems is achieved by using the disjunctive graph. This graph translates most of the external data into the release dates, due dates, and delayed precedence constraints that are necessary to optimize a subproblem.

An alternative to the subproblems being fully uncoupled is that some of them are only 'one-way' coupled. For instance, if subproblem b depends on subproblem a, but not the other way around, it is convenient to optimize first subproblem a and then subproblem b. Such a situation arises in the Shifting Bottleneck heuristic with the subproblems related to bottleneck machines. A heavily loaded machine will define almost by itself the schedule for the entire job shop, so it can be optimized independently. The schedule of a not-so-loaded machine should be obtained as a function of the critical machine, and so it must be optimized later. The rationale of the method is to find which machine is the critical one and to schedule it before the rest.

In a rolling horizon heuristic the subproblems correspond to time windows, where each window is strongly coupled with the previous one. In the case of two contiguous windows, the first one defines the machine availability for the operations in the second window, so it must be optimized first. The main issue is to define which operations belong to each time window. Fig. 4 shows a first decomposition scheme that assigns the first operations of each job to the first time window. Its rationale is that it divides a given instance into evenly sized smaller instances, keeping their size under control so that the time needed by the SB-TWT heuristic does not grow too much. A second scheme for assigning operations to the windows is to balance the number of operations per machine. Its justification is that the complexity of the SB-TWT heuristic is very sensitive to the
Fig. 4. Time window with a fixed number of operations per job.
Fig. 5. Time-shifted windows with loose interaction among operations.
Fig. 6. Time window with a balanced workload for each job.
machine having the largest number of operations, since the subproblem complexity grows exponentially with the number of operations per machine.

Computational tests showed that both schemes perform unsatisfactorily, due to an effect that is illustrated in Fig. 5. Assume for simplicity that there is no overlapping, although this analysis remains true even if there is. Machine C has a larger duration of operations than the average, and so, as the scheduling horizon moves toward the right-hand side, the time windows include operations that are time shifted from each other. This makes them lose interaction, as seen in the second time window, which includes operation C6 instead of operation C2. If C2 belonged to the second time window it could be swapped with operation C4, which would produce a better completion time for jobs 4 and 7.

This concept is formalized as follows. Define $r_{jw}$ as the release time of the first operation of job j assigned to window w, and define $workload_{jw}$ as the sum of the processing times of the operations of job j that are assigned to window w. Let $\rho_w$ denote the coefficient of variation of $(r_{jw} + workload_{jw})$ over all the jobs j that are assigned to window w. A time window is balanced if $\rho_w$ is below some given parameter, which is obtained through computational experiments. Consider the example depicted in Fig. 6, where the first window has a release time plus a workload of $r_{1w} + p_{A1} = 5 + 5 = 10$ for job 1, $r_{2w} + p_{C2} + p_{A2} = 0 + 4 + 5 = 9$ for job 2 and $r_{3w} + p_{C3} + p_{B3} = 0 + 5 + 3 = 8$ for job 3. The variance of the set {10, 9, 8} is 1 and its average is 9, so $\rho_w = 0.11$. If the window does not include operation B3, then the variance of the set {10, 9, 5} is 7 and its average is 8, so $\rho_w = 0.33$.
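Both values can be verified in a couple of lines; note that the figures above imply the sample variance:

    # Check of the rho_w examples: {10, 9, 8} and {10, 9, 5}
    import statistics

    for vals in ([10, 9, 8], [10, 9, 5]):
        rho = statistics.stdev(vals) / statistics.mean(vals)
        print(statistics.variance(vals), statistics.mean(vals), round(rho, 2))
    # prints: 1 9 0.11, then 7 8 0.33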
The overlapping to be used is a second issue to define. Consider a single-machine problem with five operations whose optimal schedule is A → B → C → D → E. Suppose that v = u = 3, so that there is no overlapping, and that operations A, B and C are assigned to the first window, while operations D and E are assigned to the second window. This assignment implies that {A, B, C} → {D, E}, i.e., operations D and E must be processed after operations A, B and C. In this case the optimization of both windows in an independent manner may reach the optimal solution. However, if the assignment is such that {A, B, D} → {C, E}, the method will not reach the optimal sequence. Suppose that this last decomposition is kept, but v = 2 and u = 3, so that there is an overlapping of one operation. This means that the precedence relationship is relaxed: now only two out of the three operations A, B and D must be processed before operations C and E, so the optimal solution can be reached. Therefore, larger overlapping implies that the decomposition process, i.e., the assignment of operations to each time window, becomes less crucial. However, larger overlapping demands additional computing time since the windows become bigger. The proper trade-off between these issues is considered in Section 6 on computational results.

The overlapping is defined as a percentage of the largest job workload of the window. Suppose that job j* has the largest $(r_{jw} + workload_{jw})$ among all the jobs j that are assigned to the current window w. Suppose that some workload of job k from the previous window w′ is included in w, so that w overlaps w′ by such workload. Define the overlapping factor as the maximum proportion between the workload from w′ that is overlapped by w and $(r_{j^*w} + workload_{j^*w})$. For instance, suppose that the second window in Fig. 6 includes operations B1, C1, B2 and A3. The largest workload corresponds to job 1 and is equal to $p_{B1} + p_{C1} = 10 + 4 = 14$. If the overlapping factor is defined as 30%, then the maximum workload from the first window that will be considered to be overlapping, and therefore included in the second window, is 4.2. In this case only operation B3 meets that requirement, so the second window is extended to include operations B1, C1, B2, B3 and A3. If there were three operations B3′, B3″ and B3‴ instead of B3, with processing times 2, 1 and 3, respectively, only operations B3′ and B3″ would be overlapped by the second window.

The heuristic that assigns operations to time windows is detailed in the following pseudo-code:

    REPEAT for each job j
        Include in current window w the first not-yet-scheduled operations such that
        (r_jw + workload_jw) ≤ (r_j + workload_j)/(number of windows)              (1)
    END REPEAT
    IF ρ_w > maximum ρ THEN
        Fix ρ_w by including or removing an operation for each job j               (2)
    END IF
    Find j* such that (r_j*w + workload_j*w) is maximum in w
    REPEAT for each job j
        Include in w the last already-scheduled operations from previous window w′ such that
        (r_jw′ + workload_jw′) ≤ (r_j*w + workload_j*w) × (overlapping factor)      (3)
    END REPEAT
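The pseudo-code translates almost line by line into Python. The sketch below assumes simple data structures (a job is a release date plus its list of processing times; next_op marks each job's first not-yet-scheduled operation) and simplifies the balance-repair step (2) to one include-or-remove attempt per job, in line with the explanation that follows; none of these structures are prescribed by the paper.

    import statistics

    def rho(totals):
        # coefficient of variation of (r_jw + workload_jw); two or more jobs assumed
        return statistics.stdev(totals) / statistics.mean(totals)

    def build_window(jobs, next_op, n_windows, max_rho, overlap_factor):
        """One pass of the assignment heuristic, steps (1)-(3) above.

        jobs    -- dict: job -> (release date r_j, list of processing times)
        next_op -- dict: job -> index of the first not-yet-scheduled operation
        Returns dict: job -> list of operation indices assigned to the window.
        """
        window, total = {}, {}
        for j, (r, p) in jobs.items():                     # step (1)
            target = (r + sum(p)) / n_windows
            k, load, ops = next_op[j], 0, []
            while k < len(p) and r + load + p[k] <= target:
                load += p[k]; ops.append(k); k += 1
            window[j], total[j] = ops, r + load
        if rho(list(total.values())) > max_rho:            # step (2)
            for j, (r, p) in jobs.items():
                k = next_op[j] + len(window[j])
                if k < len(p):                             # try including one more operation
                    window[j].append(k); total[j] += p[k]
                    if rho(list(total.values())) <= max_rho:
                        break
                    window[j].pop(); total[j] -= p[k]      # revert
                if window[j]:                              # otherwise try removing one
                    k = window[j].pop(); total[j] -= p[k]
                    if rho(list(total.values())) <= max_rho:
                        break
                    window[j].append(k); total[j] += p[k]  # revert
        j_star = max(total, key=total.get)                 # step (3): overlapping
        budget = total[j_star] * overlap_factor
        for j, (r, p) in jobs.items():
            k, load = next_op[j] - 1, 0
            while k >= 0 and load + p[k] <= budget:        # last already-scheduled operations
                load += p[k]; window[j].insert(0, k); k -= 1
        return window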
In (1), the approximate workload of each job assigned to the window is obtained by dividing the largest $(r_j + workload_j)$ over all the jobs j of the instance by the number of windows to be used. In case the coefficient of variation $\rho_w$ of $(r_{jw} + workload_{jw})$ over all the jobs j assigned to window w is too high, (2) runs the following procedure for each job j: include another operation; if the new $\rho_w$ is acceptable then finish; otherwise remove an operation and check whether $\rho_w$ is acceptable. The instruction in (3), valid for all but the first window, includes new operations overlapped from the previous window, such that their workload is within the range defined by the overlapping factor.
5. Benchmark instances

Although a given algorithm may be of value because of the insight it provides for future research, it is clear that the ultimate test for an optimization technique is its computational performance. Algorithms developed within an applied-research framework will use the data from the specific real-world problems that they are attempting to solve. For the type of research presented in this paper, however, the generation of proper instances remains an open question in the scientific literature. There has been recent interest in this issue, as seen in the work by Barr et al. [23], Hooker [24] and McGeoch [25]. In general terms, there are two opinions on the kind of instances to be used: random generation and standard libraries. Random generation consists of algorithms that randomly generate data fulfilling a number of properties that may be relevant, such as feasibility (there exists a solution), realism (the data has some similarity with the real world), complexity (the degree of difficulty is known), etc. Standard libraries have sets of well-known instances that have been posted (often by groundbreaking papers) and are easily available for researchers testing their work. If necessary, these instances may be adapted.

This paper opts for the standard-library alternative, although both sources have their pros and cons. The effectiveness of optimization methods, particularly branch-and-bound techniques, is very dependent on the instances with which they are tested. A randomly generated set of instances may show a significantly worse performance than a second set simply because the first one contains a single 'impossible' instance, as the MT10 was for years. A researcher may be tempted to disregard such a set, failing to consider that the impossible instance is often the most interesting one, for it may provide the insight needed to solve these difficult problems. The use of standard libraries does not allow disregarding inconvenient instances, making the comparison among algorithms more objective.

For the computational tests, the same 22 10 × 10 instances used by Pinedo and Singer [19] are considered: instances ABZ5 and ABZ6 are from Adams et al. [6] and instances LA16 to LA24 are from Lawrence [26]. The MT10 instance is from Fisher and Thompson [3] and instances ORB1 to ORB10 are from Applegate and Cook [8]. Note that instances LA21 to LA24 are 10 × 15, so their last 5 jobs were eliminated. Since the above instances were formulated for makespan minimization, a weight and a due date have to be added for each job. In practice, it is often observed that 20% of the customers are very important, 60% of them are average and the remaining 20% are not very relevant. For that reason we defined $w_1 = w_2 = 4$, $w_j = 2$ for j = 3, 4, …, 8 and $w_9 = w_{10} = 1$. Release dates are set to 0. The due date of job j is set equal to its release date plus the floor of the sum of the processing times of its operations multiplied by a due-date tightness factor f (see Eilon and Chowdhury [27]), i.e., $d_j = r_j + \lfloor f \times \sum_i p_{ij} \rfloor$.
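The additional data are mechanical enough to generate in a few lines. The sketch below implements the 20/60/20 weight split and the due-date formula above, with Δr (used later in this section for the larger instances) spacing the release dates of consecutive jobs; the function and data layout are illustrative only. For MT10 with f = 1.5 and delta_r = 0 it reproduces the due dates of Table 2, e.g. d_1 = 592.

    import math

    def add_tardiness_data(routings, f, delta_r=0):
        """Attach (w_j, r_j, d_j) to each job of a makespan instance.

        routings -- list of jobs, each a list of (machine, processing time) pairs
        f        -- due-date tightness factor
        delta_r  -- difference between release dates of consecutive jobs
        """
        n = len(routings)
        data = []
        for j, routing in enumerate(routings):
            if j < 0.2 * n:
                w = 4                  # 20% very important customers
            elif j < 0.8 * n:
                w = 2                  # 60% average customers
            else:
                w = 1                  # 20% less relevant customers
            r = j * delta_r            # r_1 = 0, r_2 = delta_r, r_3 = 2*delta_r, ...
            d = r + math.floor(f * sum(p for _, p in routing))  # d_j = r_j + floor(f * sum_i p_ij)
            data.append((w, r, d))
        return data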
Table 2
MT10, standard and additional data

Job   Standard data: operation routing   w_j   r_j   d_j
1     A 29, B 78, …                      4     0     592
2     A 43, C 90, …                      4     0     765
⋮     ⋮                                  ⋮     ⋮     ⋮
10    B 85, A 13, …                      1     0     910
Table 3
Addition of two MT10 instances, standard and additional data

Job   Standard data: operation routing   w_j   r_j   d_j
1     A 29, B 78, …                      4     0     592
2     A 43, C 90, …                      4     10    775
⋮     ⋮                                  ⋮     ⋮     ⋮
10    B 85, A 13, …                      1     90    1000
11    A 29, B 78, …                      4     100   695
12    A 43, C 90, …                      4     110   875
⋮     ⋮                                  ⋮     ⋮     ⋮
20    B 85, A 13, …                      1     190   1100
Table 2 partially shows the MT10 instance, for which f = 1.5, so $d_1 = 0 + \lfloor 1.5 \times \sum_i p_{i1} \rfloor = 592$. Since there are few large instances in the standard libraries, a 10 × 20 instance is created by adding two 10 × 10 instances, as suggested by Table 3. Release dates are set in a more realistic manner. Define Δr as the difference between the release dates of two consecutive jobs. If Δr = 10 then $r_1 = 0$, $r_2 = 10$, $r_3 = 20$, and so on.

6. Computational results

The computational tests are performed on a Pentium 166 MHz, measuring the time in real seconds rather than CPU seconds. Table 4 shows the results of total weighted tardiness, percentage error from the best solution found and computing time for the 10 × 10 MT10 instance with different numbers of windows and overlapping percentages.
Table 4
Performance for the MT10 instance

Windows   Overlapping %   TWT    Time
1         0               394    12
2         0               2746   5
2         20              564    14
2         50              548    13
2         80              443    20
3         0               3056   6
3         20              1874   7
3         50              762    14
3         80              548    31
4         0               4141   4
4         20              3338   6
4         50              972    5
4         80              548    19
5         0               2589   6
5         20              1968   5
5         50              1953   2
5         80              1032   9
The same tests were performed for each of the instances mentioned in Section 5, yielding similar results, so the conclusions derived from this table are rather general. From the table it is clear that, in terms of quality, the solution for one window is better than the solution for two, which is better than the solution for three, and so on. This holds because the decomposition process will always make some mistakes in assigning operations to each time window. It is also observed that greater overlapping percentages provide better results since, as explained in Section 4, they overcome the mistakes of the decomposition process. From a computing-time point of view, the one-window version is still competitive with the other versions, but it is not for instances with more than 30 operations per machine, as seen later on. Although it may not be clear from this table, the time required by an overlapping of 80% is much greater than that of 50%. Since the difference in the objective function between these two possibilities is not so relevant, the 50% overlapping is adopted as the standard from now on.

Table 5 shows the results for the 22 10 × 10 instances with an overlapping of 50%, for different numbers of windows. The performance of the two-window version is competitive with that of the one-window version: even though it has a larger average total weighted tardiness, in seven out of 22 instances it provided a better result. With three and four windows, the method performs worse.

Tables 6–10 present the results of total weighted tardiness, percentage error from the best solution found and computing time for instances with 15, 20, 30, 50 and 100 jobs. Different numbers of windows (where one window is equivalent to the SB-TWT heuristic) with different values of Δr are tested. Each entry in the table is the average result over the 22 instances under study.
Table 5
Performance for 10 × 10 instances

                          Number of windows
                          1            2            3            4
Instance   Optimal TWT    TWT   Time   TWT   Time   TWT   Time   TWT   Time
ABZ5       69             109   12     155   16     787   16     1010  2
ABZ6       0              0     0      0     1      191   25     342   4
LA16       166            180   16     180   16     280   25     612   12
LA17       260            260   6      271   2      271   2      724   11
LA18       34             83    10     60    12     209   14     514   4
LA19       21             76    12     68    20     436   9      896   6
LA20       0              0     19     3     17     167   4      218   1
LA21       0              16    11     655   21     630   10     1162  4
LA22       196            196   15     268   3      579   19     1274  6
LA23       2              2     12     86    4      319   1      390   2
LA24       82             150   10     140   18     384   2      815   1
MT10       394            394   12     548   14     762   23     972   16
ORB1       1098           1548  17     1773  17     2023  15     2452  14
ORB2       292            408   13     442   11     810   12     2129  7
ORB3       918            1162  20     1187  38     2185  24     1944  24
ORB4       358            358   25     506   48     776   16     1454  28
ORB5       405            524   28     482   30     1014  7      1197  30
ORB6       426            647   26     532   23     1740  16     1525  5
ORB7       50             128   16     141   20     472   28     388   27
ORB8       1023           1298  23     1193  16     1433  5      1788  27
ORB9       297            342   97     302   34     583   62     1321  196
ORB10      346            535   16     547   23     613   31     1708  20
Average    293            383   19     434   18     757   17     1129  20
Table 6
Performance for 10 × 15 instances

          Δr = 20                      Δr = 30                      Δr = 40
Windows   TWT    Error (%)  Time (s)   TWT    Error (%)  Time (s)   TWT    Error (%)  Time (s)
1         1211   0          22         563    13         2          170    52         1
2         1246   3          59         497    0          8          112    0          3
3         1695   40         51         672    35         22         264    136        13
The due-date tightness factor f is defined for each value of Δr in order to keep the number of tardy jobs greater than zero. The results show that while Δr is small, it is not convenient to use a large number of windows. However, as the difference among the release times becomes considerable, the method can discriminate where it is convenient to assign the operations, and therefore a larger number of windows is preferred. The optimal number of windows also grows with the size of the problem. Small instances present the best results with one or two windows, while larger instances are better solved with between four and seven windows.
Table 7
Performance for 10 × 20 instances

          Δr = 30                       Δr = 40                       Δr = 50
Windows   TWT     Error (%)  Time (s)   TWT     Error (%)  Time (s)   TWT     Error (%)  Time (s)
1         7154    0          151        7806    0          179        9675    1          148
2         7130    0          183        7817    0          252        9727    1          145
3         7682    8          162        8140    4          169        9605    0          182
4         9092    28         299        8811    13         613        10658   11         464
5         10145   42         656        10601   36         1096       12687   32         1095
Table 8
Performance for 10 × 30 instances

          Δr = 40                       Δr = 60                      Δr = 80                      Δr = 100
Windows   TWT     Error (%)  Time (s)   TWT    Error (%)  Time (s)   TWT    Error (%)  Time (s)   TWT     Error (%)  Time (s)
1         10202   33         71         4177   34         93         3317   12         195        7865    4          230
2         8830    15         99         3436   10         139        3200   8          236        7560    0          323
3         7679    0          908        3119   0          403        2966   0          414        7777    3          515
4         9123    19         134        7077   127        133        6450   117        198        10062   33         394
5         12702   65         710        7922   154        381        5765   94         681        9858    30         1176
Table 9
Performance for 10 × 50 instances

          Δr = 80                      Δr = 100                     Δr = 120                     Δr = 140
Windows   TWT    Error (%)  Time (s)   TWT    Error (%)  Time (s)   TWT    Error (%)  Time (s)   TWT     Error (%)  Time (s)
1         2728   332        35         2559   416        28         3802   880        24         12396   56         25
2         668    6          166        693    40         95         826    113        146        8167    3          543
3         632    0          121        496    0          47         413    6          51         7923    0          539
4         2315   266        62         591    19         44         388    0          107        7935    0          713
5         4425   600        13         1449   192        69         582    50         132        8255    4          1027
6         5721   805        21         2701   445        57         1018   162        197        9406    19         1590
Table 10
Performance for 10 × 100 instances

          Δr = 130                     Δr = 150                    Δr = 170
Windows   TWT    Error (%)  Time (s)   TWT   Error (%)  Time (s)   TWT    Error (%)  Time (s)
1         3132   744        286        573   473        206        2059   263        313
2         2534   583        905        788   688        198        1925   240        889
3         1789   382        746        436   336        311        1556   174        915
4         942    154        593        197   97         250        987    74         933
5         371    0          254        142   42         157        771    36         1234
6         379    2          133        100   0          99         567    0          1017
7         433    17         128        107   7          118        834    47         1744
8         758    104        363        209   109        288        1053   86         1854
7. Conclusions

The above tables show that the rolling horizon heuristic is competitive with the original SB-TWT heuristic for 10 × 10, 10 × 12 and 10 × 15 instances. However, on larger instances the new method becomes more effective. Such a performance is due to a scheme in which there is a double decomposition: a first one done horizontally in the Gantt chart using time windows, and a vertical one using the machine decomposition of the SB-TWT heuristic. The different time windows are partially decoupled by using overlapping, while a disjunctive graph decouples the machine subproblems within each window. It should be noted that the horizontal decomposition performs better as the criterion for assigning each operation to its corresponding time window becomes more accurate.

Although the makespan objective function is not analyzed in this work, it is likely that applying this technique to it would not be as efficient as for the tardiness case. The total weighted tardiness function can be decomposed in an easier way than the makespan, since the former is associated with each operation, while the latter depends on a combination of all the operations simultaneously. However, this conjecture should be tested carefully in future research.
References

[1] Wein LM, Chevalier PB. A broader view of the job-shop scheduling problem. Management Science 1992;38:1018–33.
[2] Carlier J, Pinson E. An algorithm for solving the job-shop problem. Management Science 1989;35:164–76.
[3] Fisher H, Thompson GL. Probabilistic learning combinations of local job-shop scheduling rules. In: Muth J, Thompson GL, editors. Industrial scheduling. Englewood Cliffs, NJ: Prentice Hall, 1963. pp. 225–51.
[4] Brucker P, Jurisch B, Sievers B. A branch and bound algorithm for the job-shop scheduling problem. Discrete Applied Mathematics 1994;49:107–27.
[5] Martin P, Shmoys DB. A new approach for computing optimal schedules for the job-shop scheduling problem. School of Operations Research and Industrial Engineering, Cornell University, USA, 1996.
[6] Adams J, Balas E, Zawack D. The shifting bottleneck procedure for job shop scheduling. Management Science 1988;34:391–401.
[7] Dauzere-Peres S, Lasserre J. A modified shifting bottleneck procedure for job-shop scheduling. International Journal of Production Research 1993;31:923–32.
[8] Applegate D, Cook W. A computational study of the job shop scheduling problem. ORSA Journal on Computing 1991;3:149–56.
[9] Balas E, Lenstra J, Vazacopoulos A. The one-machine problem with delayed precedence constraints and its use in job shop scheduling. Management Science 1995;41:94–109.
[10] Van Laarhoven P, Aarts E, Lenstra J. Job shop scheduling by simulated annealing. Operations Research 1992;40:113–25.
[11] Taillard E. Parallel taboo search techniques for the job shop scheduling problem. ORSA Journal on Computing 1994;6:108–17.
[12] Barnes J, Chambers J. Solving the job shop scheduling problem with tabu search. IIE Transactions 1995;27:257–63.
[13] Nowicki E, Smutnicki C. A fast taboo search algorithm for the job shop problem. Management Science 1996;42:797–813.
[14] Ovacik I, Uzsoy R. A shifting bottleneck algorithm for scheduling semiconductor testing operations. Journal of Electronic Manufacturing 1992;2:119–34.
[15] Lenstra J, Rinnooy Kan A, Brucker P. Complexity of machine scheduling problems. Annals of Discrete Mathematics 1977;1:343–62.
[16] Singer M, Pinedo M. Computational study of branch and bound techniques for minimizing the total weighted tardiness in job shops. IIE Transactions 1998;29:109–19.
[17] Vepsalainen A, Morton T. Priority rules for job shops with weighted tardiness cost. Management Science 1987;33:1035–47.
[18] Wu S, Byeon E, Storer R. A graph-theoretic decomposition of job shop scheduling problems to achieve scheduling robustness. Department of Industrial Engineering, Lehigh University, USA, 1993.
[19] Pinedo M, Singer M. A shifting bottleneck heuristic for minimizing the total weighted tardiness in a job shop. Naval Research Logistics 1999;46:1–17.
[20] Ovacik I, Uzsoy R. Rolling horizon procedures for the single-machine dynamic scheduling problem with sequence-dependent setup times. International Journal of Production Research 1994;32:1243–63.
[21] Balas E. Machine sequencing via disjunctive graphs: an implicit enumeration algorithm. Operations Research 1969;17:941–57.
[22] Bellman RE. On a routing problem. Quarterly of Applied Mathematics 1958;16:87–90.
[23] Barr RS, Golden BL, Kelly JP, Resende MGC, Stewart WR Jr. Designing and reporting on computational experiments with heuristic methods. Journal of Heuristics 1995;1:9–32.
[24] Hooker JN. Needed: an empirical science of algorithms. Operations Research 1994;42:201–12.
[25] McGeoch CC. Toward an experimental method for algorithm simulation. INFORMS Journal on Computing 1996;8:1–15.
[26] Lawrence S. Resource constrained project scheduling: an experimental investigation of heuristic scheduling techniques. GSIA, Carnegie Mellon University, USA, 1984.
[27] Eilon S, Chowdhury IG. Due dates in job shop scheduling. International Journal of Production Research 1976;14:223–37.
Marcos Singer is a Professor at the Business School, teaching Operations Management courses. He holds a B.S. and an M.Sc. in Computer Science from Pontificia Universidad Católica de Chile, and a Ph.D. in Operations Research from Columbia University, USA. His current research interests are production optimization, logistics and distribution. He has published in journals such as Naval Research Logistics, IIE Transactions, Journal of the Operational Research Society, and Management and Information Systems, and in a chapter of a book by Springer-Verlag.