The total completion time open shop scheduling problem with a given sequence of jobs on one machine


Computers & Operations Research 29 (2002) 1251-1266

Ching-Fang Liaw, Chun-Yuan Cheng, Mingchih Chen

Department of Industrial Engineering and Management, Chaoyang University of Technology, 168 Gifeng East Road, Wufeng, Taichung, Taiwan

Received 1 May 2000; received in revised form 1 November 2000

Abstract

This paper addresses the open shop scheduling problem to minimize the total completion time, provided that one of the machines has to process the jobs in a given sequence. The problem is NP-hard in the strong sense even for the two-machine case. A lower bound is derived based on the optimal solution of a relaxed problem in which the operations on every machine may overlap, except for the machine with a given sequence of jobs. This relaxed problem is NP-hard in the ordinary sense; however, it can be solved quickly via a decomposition into subset-sum problems. Both a heuristic and a branch-and-bound algorithm are proposed. Experimental results show that the heuristic is efficient for solving large-scale problems, and that the branch-and-bound algorithm performs well on small-scale problems.

Scope and purpose

Shop scheduling problems, widely used in the modeling of industrial production processes, are receiving an increasing amount of attention from researchers. To model practical production processes more closely, additional processing restrictions can be introduced, e.g., resource constraints, the no-wait in process requirement, precedence constraints, etc. This paper considers the total completion time open shop scheduling problem with a given sequence of jobs on one machine. This model belongs to a new class of shop scheduling problems under machine-dependent precedence constraints. The problem is NP-hard in the strong sense. A heuristic is proposed to efficiently solve large-scale problems, and a branch-and-bound algorithm is presented to optimally solve small-scale problems. Computational experience is also reported. © 2002 Elsevier Science Ltd. All rights reserved.

Keywords: Production scheduling; Open shop; Heuristic; Branch-and-bound



1. Introduction

In shop scheduling problems, a set of jobs must be processed on a number of machines. A job consists of several operations, each to be performed on a different machine for a given amount of time. At any time, at most one operation can be processed on each machine, and at most one operation of each job can be processed. Shop scheduling problems are further classified according to the ordering restrictions on the operations of a job. In a flow shop, each job has exactly one operation on each machine, and the order in which each job is processed by the machines is the same for all jobs. In a job shop, the operations of each job must be processed in a given order specific to that job. In an open shop, the operations of each job can be processed in any order.

Shop scheduling problems, widely used in the modeling of industrial production processes, are receiving an increasing amount of attention from researchers. Many heuristics and exact solution methods have been proposed for solving shop scheduling problems. However, most studies in this field focus on job shop and flow shop scheduling problems. This paper addresses the problem of scheduling open shops. To model practical production processes more closely, additional processing restrictions can be introduced, e.g., resource constraints, the no-wait in process requirement, precedence constraints, etc. In this paper, we consider the open shop scheduling problem with the following simple precedence constraint: one of the machines processes the jobs according to a given sequence.

The problem under consideration may arise in modeling a production process in which, due to technological or managerial decisions, the jobs should be processed on one of the machines according to a predetermined order. For example, the various setup requirements of jobs on this machine may lead to a decision that the jobs should be processed on this machine in a specific order. This model is also applicable when an existing schedule has to be adjusted, provided that the sequence of jobs on one machine is retained. Furthermore, this model may also arise as a subproblem in determining a lower bound for a general open shop problem (see Gueret and Prins [1]).

Formally, the open shop scheduling problem studied in this paper can be stated as follows. We are given a set of n jobs J_i, 1 <= i <= n, and a set of m machines M_j, 1 <= j <= m. Each job J_i consists of m operations O_{i1}, O_{i2}, ..., O_{im}, such that operation O_{ij} of job J_i has to be processed on machine M_j for a given amount of time p_{ij}. The operations of each job can be processed in any order, but only one at a time, and preemptions are not allowed. We assume, without loss of generality, that the jobs are required to be processed on machine M_1 in the sequence J_1, J_2, ..., J_n. The objective is to find a feasible schedule that minimizes the sum of job completion times. Using the notation of Sharfransky and Strusevich [2], this problem is denoted by O_m|GS(1)|Sum C_i, where GS(1) stands for a given sequence of jobs on one machine. The O_m|GS(1)|Sum C_i problem is NP-hard in the strong sense even for the case m = 2, since the proof of Achugbue and Chin [3], which shows that the problem O2||Sum C_i is NP-hard in the strong sense, is also applicable to the problem with a given sequence of jobs on machine M_1.

Current research on open shop scheduling problems has concentrated on minimizing the makespan (C_max), i.e., the maximal job completion time [4-7].

Some progress has been made on the solution of makespan open shop scheduling problems with precedence constraints. If all machines are required to process the jobs in the same sequence, i.e., the same sequence of jobs is given on all machines, the two-machine problem is solvable in O(n log n) time if preemption is forbidden, and in O(n) time if preemption is allowed [8].


However, the three-machine problems in both the preemptive and nonpreemptive cases are NP-hard [9].

The O_m|GS(1)|C_max problem was studied by Sharfransky and Strusevich [2]. They showed that in the preemptive case this problem is polynomially solvable; if preemption is not allowed, the problem is NP-hard. For the latter case, they developed an efficient heuristic for the case m = 2.

In this paper, we develop both a heuristic and a branch-and-bound algorithm for solving O_m|GS(1)|Sum C_i, the total completion time open shop scheduling problem with a given sequence of jobs on one machine. The total completion time criterion is important, since many important cost measures, such as finished goods turn-over, depend on total completion time. To the best of our knowledge, neither a heuristic nor an exact algorithm has previously been proposed for the total completion time open shop scheduling problem. The rest of this paper is organized as follows. Section 2 presents a heuristic for O_m|GS(1)|Sum C_i. A lower bound for this problem is derived in Section 3. In Section 4, we describe a branch-and-bound algorithm for solving O_m|GS(1)|Sum C_i. Computational experiments are given in Section 5, followed by the conclusions in Section 6.

2. A heuristic

The heuristic developed in this paper is an iterative dispatching procedure. Each iteration of the procedure is a one-pass heuristic which generates a complete schedule using some dispatching index. At the end of each iteration, the parameter used in the dispatching index is adjusted to improve upon the current schedule. This iterative dispatching procedure therefore consists of two major components: a one-pass heuristic to generate a complete schedule at each iteration, and an adjustment strategy to adjust the parameter used at each iteration.

2.1. A one-pass heuristic

The one-pass heuristic constructs a complete schedule using a dispatching index. We define a dispatching index DI_{ij} for each unscheduled operation O_{ij} as follows:

    DI_{ij} = 1 / [(lambda_i * I_{ij} + 1) * RJ_i + p_{ij}]    if j != 1,
    DI_{ij} = 1 / i                                            if j = 1,



where lambda_i is the parameter associated with job J_i, I_{ij} is the idleness created between O_{ij} and its immediate predecessor on machine M_j if O_{ij} is processed next, and RJ_i is the remaining processing time (excluding p_{ij}) of job J_i.

Whenever a machine, say M_j, becomes available, the operation with the highest dispatching index is chosen to be processed next on machine M_j. The process repeats until a complete schedule is generated. It can easily be seen that with this dispatching index the required sequence of jobs on machine M_1 is guaranteed. The dispatching index DI_{ij} combines the philosophies of the well-known minimum idleness rule and the shortest remaining processing time rule, and it prioritizes each operation O_{ij} in accordance with the total completion time objective.
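For illustration, the index can be computed as in the following C sketch (the zero-based data layout, array names and function name are our assumptions; the paper gives only the formula above):

    /* Dispatching index DI_{ij} for the unscheduled operation of job J_{i+1} on
     * machine M_{j+1} (0-based indices).  lambda[i] is the job parameter
     * lambda_i, idle is the idleness I_{ij} that scheduling the operation next
     * would create on the machine, rj is the remaining processing time RJ_i of
     * the job excluding p_ij.  Machine index j == 0 is M_1, where the given
     * job sequence must be respected. */
    double dispatch_index(int i, int j, double idle, double rj,
                          const double *lambda, double p_ij)
    {
        if (j == 0)                        /* machine M_1: rank by sequence position */
            return 1.0 / (double)(i + 1);  /* lower job index => higher priority     */
        return 1.0 / ((lambda[i] * idle + 1.0) * rj + p_ij);
    }

On machine M_1 the value 1/i lets the lowest-indexed unscheduled job win every comparison on that machine, which is what guarantees the given sequence; on the other machines, small idleness and little remaining work both push the index up.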


2.2. An adjustment strategy

Given a complete schedule, we first compute the actual completion time AC_i and the ratio AC_i/TP_i of job J_i, where TP_i = Sum_{j=1..m} p_{ij} is the sum of processing times of job J_i, for i = 1, 2, ..., n. The ratio AC_i/TP_i represents, in some sense, the extent to which job J_i is 'delayed'. We can identify a set Q of most delayed jobs such that the ratio of each job in Q is greater than the ratio of each job not in Q. We then adjust the parameters lambda_i associated with the jobs in Q so that they are assigned a higher dispatching index in the following iteration (see the sketch at the end of this subsection). This tends to decrease the actual completion times of the jobs in Q, and thus hopefully generates a better schedule in the next iteration. The structure of the iterative dispatching procedure is given below.

Step 1: Set the iteration count k = 0 and set Max_iter to the maximum number of iterations. Determine the initial parameter values to be used in the dispatching index.
Step 2: Generate a complete schedule using the one-pass heuristic with the current dispatching index.
Step 3: Update the parameter used in the dispatching index based on the adjustment strategy.
Step 4: Set k = k + 1. If k < Max_iter, go to Step 2; else STOP.

Based on preliminary experiments, we set Max_iter equal to 5000 and the subset size |Q| equal to [beta * n], where beta is a scalar between 0 and 1 and [x] denotes the largest integer less than or equal to x. Also, the parameter lambda_i of each job J_i in Q is updated by lambda_i = lambda_i * gamma, where gamma is a scalar between 0 and 1. In order to determine the most appropriate parameter values (the initial value of lambda_i, beta and gamma) to be used in the heuristic, a factorial experiment is conducted to investigate the effect of each parameter on the performance of the heuristic. The parameters are set to the following possible values:

Initial value of lambda_i (factor A): 0.1, 0.5, 1.0, 5.0, 10.0, 50.0, 100.0;
Parameter beta (factor B): 0.25, 0.50, 0.75;
Parameter gamma (factor C): 0.50, 0.70, 0.90, 0.99, 0.999.

Five replications for each combination of these parameters are run with n = m = 20, and the analysis of variance is summarized in Table 1. As Table 1 shows, with a 95% confidence coefficient, both the initial value of lambda_i and the parameter gamma have significant effects on the performance of the heuristic, whereas the effect of parameter beta is insignificant. Furthermore, there is no indication of interaction between these factors. We then apply the Tukey multiple comparison method to compare the individual effects of the different values of lambda_i and of gamma. The results are summarized in Table 2. The effects of values within each group are not significantly different, whereas the effect of each value in group 1 is significantly better than that of each value in group 2, which in turn is significantly better than that of each value in group 3. Based on the above results, we set the initial value of lambda_i to 1.0, the parameter beta to 0.75, and the parameter gamma to 0.90.
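A possible C sketch of this adjustment step is given below (the data layout, the sorting-based selection of Q and the function names are our assumptions, not the authors' code):

    #include <stdlib.h>

    typedef struct { int job; double ratio; } Delay;

    static int by_ratio_desc(const void *a, const void *b)
    {
        double d = ((const Delay *)b)->ratio - ((const Delay *)a)->ratio;
        return (d > 0) - (d < 0);
    }

    /* Multiply lambda_i by gamma for the floor(beta*n) jobs with the largest
     * AC_i/TP_i ratio (the set Q of most delayed jobs), so that they receive
     * a higher dispatching index in the next iteration. */
    void adjust_lambda(int n, const double *AC, const double *TP,
                       double *lambda, double beta, double gamma)
    {
        Delay *d = malloc((size_t)n * sizeof *d);
        int q = (int)(beta * n);               /* |Q| = [beta * n]              */
        for (int i = 0; i < n; i++) {
            d[i].job = i;
            d[i].ratio = AC[i] / TP[i];        /* how "delayed" job J_{i+1} is  */
        }
        qsort(d, (size_t)n, sizeof *d, by_ratio_desc);
        for (int k = 0; k < q; k++)
            lambda[d[k].job] *= gamma;         /* e.g. beta = 0.75, gamma = 0.90 */
        free(d);
    }

Since gamma < 1, shrinking lambda_i enlarges the index DI_{ij} of every operation of a delayed job on machines M_2, ..., M_m.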


Table 1
Results of the analysis of variance (alpha = 0.05)

Source    Sum of squares    d.f.    Mean square    F         P
C         225949.619        4       56487.405      11.323    0.000
B         3.189             2       1.594          0.000     1.000
A         33067281.531      6       5511213.589    1104.755  0.000
C*B       648.735           8       81.092         0.016     1.000
C*A       50524.221         24      2105.176       0.422     0.993
B*A       9675.291          12      806.274        0.162     0.999
C*B*A     19168.385         48      399.341        0.080     1.000
Error     2095223.600       420     4988.628
Total     3389113446.000    525

Table 2
Results of the Tukey multiple comparison method (alpha = 0.05)

                           Group 1                        Group 2        Group 3
Initial value of lambda_i  1.0, 5.0, 10.0, 50.0, 100.0    0.5            0.1
Parameter gamma            0.90, 0.99, 0.999              0.999, 0.70    0.70, 0.50

3. A lower bound

In this section we derive a lower bound for O_m|GS(1)|Sum C_i. The lower bound JLB (job-based lower bound) is equal to the optimal Sum_{i=1..n} C_i value of a relaxed problem, denoted O_m|GS(1), R|Sum C_i, in which the operations on every machine may overlap except for machine M_1. The problem O_m|GS(1), R|Sum C_i is still NP-hard; however, it can be solved pseudo-polynomially via a decomposition into subset-sum problems [10].

Proposition 1. The problem O_m|GS(1), R|Sum C_i is NP-hard in the ordinary sense even for three jobs.

Proof. See Appendix A.

In what follows, we first describe briefly the principle of our algorithm for solving O_m|GS(1), R|Sum C_i, and then specify in detail a pseudo-polynomial implementation of the proposed algorithm. We remark that our algorithm for solving O_m|GS(1), R|Sum C_i can be considered as dual with respect to the algorithm developed by Gueret and Prins [1].

The algorithm consists of n steps. In step k, k = 1, 2, ..., n, we compute the completion time of job J_k by considering a subproblem SP_k consisting of the k jobs J_1, J_2, ..., J_k. A set of schedules has been generated for SP_{k-1} during the previous step. For each of these schedules, two schedules for SP_k are constructed in step k: in the first one, S', operation O_{k1} starts just after O_{k-1,1} is completed; in the second one, S'', an idle time is allowed between these two operations. In this setting, an optimal schedule can be found among the schedules constructed in the last step.

Let Sum C_i(SP_{k-1}) denote the sum of completion times of the jobs considered in subproblem SP_{k-1}, and let c_{k1} denote the completion time of operation O_{k1} on machine M_1.


In schedule S', we schedule before O_{k1} a subset of operations of job J_k whose total processing time is maximal but not greater than the completion time t of O_{k-1,1}. Let B(k, t) denote the total processing time of the operations of this subset. Then the completion time of operation O_{k1} is c_{k1} = t + p_{k1}, the completion time of job J_k is C_k = TP_k + t - B(k, t), and Sum C_i(SP_k) = Sum C_i(SP_{k-1}) + C_k. In schedule S'', we schedule before O_{k1} a subset of operations of job J_k whose total processing time is minimal but greater than the completion time t of O_{k-1,1}. Let A(k, t) denote the total processing time of the operations of this subset if it exists; otherwise, A(k, t) is set equal to t. In this case, the completion time of operation O_{k1} is c_{k1} = A(k, t) + p_{k1}, the completion time of job J_k is C_k = max(TP_k, c_{k1}), and Sum C_i(SP_k) = Sum C_i(SP_{k-1}) + C_k. The values of B(k, t) and A(k, t) can be computed efficiently in O(TP_k) time by solving a subset-sum problem using the algorithm proposed by Gueret and Prins [1].
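As an illustration, the required quantities can be obtained from a standard subset-sum table, as in the following C sketch (function names and the query interface are ours; the paper refers to the routine of Gueret and Prins [1]):

    #include <string.h>

    /* Mark every total processing time that some subset of the operations of
     * one job (excluding its machine-1 operation) can attain.  q[0..cnt-1] are
     * those processing times, total is their sum, reach must hold total+1 bytes. */
    void subset_sums(const int *q, int cnt, int total, char *reach)
    {
        memset(reach, 0, (size_t)total + 1);
        reach[0] = 1;
        for (int j = 0; j < cnt; j++)              /* classical 0/1 subset-sum DP */
            for (int s = total; s >= q[j]; s--)
                if (reach[s - q[j]]) reach[s] = 1;
    }

    /* B(k,t): largest attainable sum not exceeding t. */
    int B_kt(const char *reach, int total, int t)
    {
        int s = (t < total) ? t : total;
        while (!reach[s]) s--;                     /* reach[0] is always set */
        return s;
    }

    /* A(k,t): smallest attainable sum exceeding t, or t itself if none exists. */
    int A_kt(const char *reach, int total, int t)
    {
        for (int s = t + 1; s <= total; s++)
            if (reach[s]) return s;
        return t;
    }

For instance, for job J_2 of the example below, the operation times on machines M_2 and M_3 are 4 and 7, so the attainable sums are {0, 4, 7, 11}; with t = 8 this gives B(2, 8) = 7 and A(2, 8) = 11.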

We demonstrate the algorithm for solving O_m|GS(1), R|Sum C_i via a small example. Consider the following processing times p_{ij} for a problem instance with n = m = 3:

          J_1    J_2    J_3
    M_1    8      5      9
    M_2    3      4      6
    M_3    2      7      8

In the first step, we solve the subproblem SP_1 consisting of job J_1 only. The optimal schedule of this subproblem is obvious (Fig. 1(a)): operation O_11 starts at time 0 and the remaining operations of job J_1 follow contiguously, in any order, after O_11. This gives a schedule S_1 with Sum C_i(SP_1) = 13.

In step 2, we solve the subproblem SP_2 consisting of jobs J_1 and J_2. Based on the schedule S_1 of step 1, we consider two cases (Figs. 1(b) and (c)).

Case 1: O_21 starts just after O_11, at time c_11 = 8. In this case, to minimize the completion time of job J_2, we must schedule O_22 before O_21 and O_23 after O_21. This results in an optimal schedule S'_2 with C_2 = 17 and Sum C_i(SP_2) = 13 + 17 = 30.

Case 2: O_21 starts strictly after c_11. In this case, we must schedule both O_22 and O_23 before O_21. This gives an optimal schedule S''_2 with C_2 = 16 and Sum C_i(SP_2) = 13 + 16 = 29.

In step 3, we consider the subproblem SP_3 consisting of jobs J_1, J_2 and J_3. We first construct two schedules based on the schedule S'_2 obtained in step 2 (Figs. 1(d) and (e)).

Case 1: O_31 starts just after O_21. To minimize the completion time of job J_3, we must schedule O_33 before O_31 and O_32 after O_31. This results in an optimal schedule with C_3 = 28 and Sum C_i(SP_3) = 30 + 28 = 58.

Case 2: O_31 starts strictly after O_21. In this case, we schedule both O_32 and O_33 before O_31. This produces an optimal schedule with C_3 = 23 and Sum C_i(SP_3) = 30 + 23 = 53.

Similarly, based on the schedule S''_2, we obtain two other schedules, both with Sum C_i(SP_3) = 29 + 25 = 54 (Figs. 1(f) and (g)). In fact, the second of these is exactly the same as the first and can be discarded, since we can schedule both O_32 and O_33 before the completion time of O_21. The optimal Sum C_i value is equal to min{58, 53, 54} = 53.

We now present a pseudo-polynomial algorithm for solving O_m|GS(1), R|Sum C_i. At step k, the algorithm generates the schedules for jobs J_1, J_2, ..., J_k. For the first step, there is only one schedule: O_11 starts at time 0 and the remaining operations of job J_1 follow contiguously, in any order, after O_11. At the end of step k, the algorithm only keeps schedules with different c_{k1} values.


Fig. 1. Illustration of the computation of algorithm RELAX.


If two schedules have the same value of c_{k1}, it keeps the one with the smallest total completion time of jobs J_1, J_2, ..., J_k. The reason is that the completion times of the remaining jobs J_{k+1}, J_{k+2}, ..., J_n depend only on c_{k1}. Also, the only information stored for each schedule is the pair (c_{k1}, Sum_{i=1..k} C_i), since we only need to compute the optimal Sum_{i=1..n} C_i value of O_m|GS(1), R|Sum C_i, not the exact composition of an optimal schedule.

Let UB be an upper bound, which can be obtained by the heuristic presented in Section 2, and let LB0 = Sum_{i=1..n} TP_i be a simple lower bound for O_m|GS(1), R|Sum C_i. Also, let t_max = Sum_{i=1..n} p_{i1} + Sum_{i=1..n} max{p_{ij} : 2 <= j <= m} denote the largest possible completion time of operation O_{n1}. The algorithm stores the schedules available at the beginning of step k in an array Z of t_max + 1 elements indexed from 0. If a schedule with c_{k-1,1} = t was generated at the previous step, then Z[t] = Sum_{i=1..k-1} C_i; otherwise Z[t] is set to a big number (BigM). More precisely, the schedules generated in step k are first stored in an array Z_temp, which then overwrites array Z for the next step. The algorithm RELAX for solving O_m|GS(1), R|Sum C_i is given below.

    Solve the subset-sum problem corresponding to each job and store the results.
    If UB = LB0, stop.
    Initialize array Z to BigM
    Z[p_11] = TP_1
    For (k = 2; k <= n; k++)
        Initialize array Z_temp to BigM
        For (t = 0; t <= t_max; t++)
            If (Z[t] != BigM)
                // Here we have a schedule with c_{k-1,1} = t and Sum_{i=1..k-1} C_i = Z[t]
                Determine A(k, t) and B(k, t) for job J_k
                // First schedule S'
                c_{k1} = t + p_{k1}
                Temp = Z[t] + TP_k + t - B(k, t)
                If (Temp < UB && Temp < Z_temp[c_{k1}]) Z_temp[c_{k1}] = Temp
                // Second schedule S''
                c_{k1} = A(k, t) + p_{k1}
                Temp = Z[t] + max(TP_k, c_{k1})
                If (Temp < UB && Temp < Z_temp[c_{k1}]) Z_temp[c_{k1}] = Temp
            End of If (Z[t] != BigM)
        End of For (t = 0; t <= t_max; t++)
        Z = Z_temp
    End of For (k = 2; k <= n; k++)
    Optimal Sum_{i=1..n} C_i value = minimum value in Z

Algorithm RELAX has a pseudo-polynomial complexity of O(Sum_{k=1..n} TP_k + n * t_max). The following proposition shows that algorithm RELAX finds the optimal Sum_{i=1..n} C_i value for O_m|GS(1), R|Sum C_i.

Proposition 2. Algorithm RELAX finds the optimal Sum_{i=1..n} C_i value for O_m|GS(1), R|Sum C_i.

Proof. See Appendix B.
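For concreteness, the following C sketch implements the same recursion (it reuses the subset_sums, B_kt and A_kt helpers sketched in Section 3; the data layout is ours and the UB cut of the pseudo-code is omitted, so this is an illustration rather than the authors' implementation):

    #include <limits.h>
    #include <stdlib.h>
    #include <string.h>

    /* Returns the optimal sum of completion times of the relaxed problem
     * O_m|GS(1),R|Sum C_i.  p[i][j] is the processing time of job J_{i+1} on
     * machine M_{j+1} (0-based arrays); BIG plays the role of BigM. */
    long relax(int n, int m, int **p)
    {
        const long BIG = LONG_MAX / 4;
        int t_max = 0;                       /* largest possible value of c_{n1} */
        for (int i = 0; i < n; i++) {
            int mx = 0;
            for (int j = 1; j < m; j++) if (p[i][j] > mx) mx = p[i][j];
            t_max += p[i][0] + mx;
        }

        long *Z  = malloc(((size_t)t_max + 1) * sizeof *Z);
        long *Zt = malloc(((size_t)t_max + 1) * sizeof *Zt);
        for (int t = 0; t <= t_max; t++) Z[t] = BIG;

        long TP1 = 0;                        /* step 1: only job J_1 */
        for (int j = 0; j < m; j++) TP1 += p[0][j];
        Z[p[0][0]] = TP1;

        for (int k = 1; k < n; k++) {        /* steps 2..n: add job J_{k+1} */
            long TPk = 0;
            int total = 0;
            for (int j = 0; j < m; j++) TPk += p[k][j];
            for (int j = 1; j < m; j++) total += p[k][j];
            char *reach = malloc((size_t)total + 1);
            subset_sums(&p[k][1], m - 1, total, reach);

            for (int t = 0; t <= t_max; t++) Zt[t] = BIG;
            for (int t = 0; t <= t_max; t++) {
                if (Z[t] == BIG) continue;   /* no schedule with c_{k-1,1} = t */
                /* schedule S': O_{k1} starts right after O_{k-1,1} */
                int  c1 = t + p[k][0];
                long v1 = Z[t] + TPk + t - B_kt(reach, total, t);
                if (c1 <= t_max && v1 < Zt[c1]) Zt[c1] = v1;
                /* schedule S'': idle time allowed before O_{k1} */
                int  c2 = A_kt(reach, total, t) + p[k][0];
                long v2 = Z[t] + (TPk > c2 ? TPk : c2);
                if (c2 <= t_max && v2 < Zt[c2]) Zt[c2] = v2;
            }
            memcpy(Z, Zt, ((size_t)t_max + 1) * sizeof *Z);
            free(reach);
        }

        long best = BIG;
        for (int t = 0; t <= t_max; t++) if (Z[t] < best) best = Z[t];
        free(Z); free(Zt);
        return best;
    }

Applied to the three-job instance above, this sketch returns 53, in agreement with the hand computation.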


4. A branch-and-bound algorithm

In this section, we describe the implementation of our branch-and-bound algorithm.

4.1. Branching strategy

Each node in the search tree corresponds to a partial schedule. A node at level r represents a partial schedule PS_r with r operations scheduled. Given PS_r, a set of new partial schedules, each consisting of PS_r plus exactly one unscheduled operation, is generated. Giffler and Thompson [11] proposed an active schedule generation scheme for job shop scheduling problems; this scheme can easily be adapted to open shop scheduling problems. Our branching strategy is based on the modified active schedule generation scheme. Given a partial schedule PS_r, let US(PS_r) be the set of unscheduled operations corresponding to PS_r. The branching procedure is described as follows.

Step 1: Calculate the earliest completion time of all operations in US(PS_r). Let O_{i*j*} be an operation for which the minimum earliest completion time t is achieved.

Step 2: Let C(O_{i*j*}), a subset of US(PS_r), denote the conflict set of operations associated with O_{i*j*}. If j* = 1, C(O_{i*j*}) is the set of operations in US(PS_r) that belong to job J_{i*} and whose earliest starting time is less than t. Otherwise, C(O_{i*j*}) is the set of operations in US(PS_r) that take place on machine M_{j*} or belong to job J_{i*}, and whose earliest starting time is less than t.

Step 3: For each operation O_{ij} in C(O_{i*j*}), generate a child node corresponding to the partial schedule PS_{r+1} in which operation O_{ij} is added to PS_r and started at its earliest starting time.

It can easily be seen that this branching procedure generates all possible active schedules for the problem O_m|GS(1)|Sum C_i.

Fig. 2. Illustration of duplicated schedule elimination.


In fact, the procedure can be further improved by eliminating duplicated schedules generated during the branching process. The duplicated schedule elimination criterion is based on the following idea. Suppose that the conflict set C(O_{i*j*}), i.e., the set of operations to be processed on machine M_{j*} or belonging to job J_{i*}, contains two operations O_{i*k} and O_{lj*} with i* != l and k != j*. We call such a pair of operations unrelated, since their earliest completion times remain unchanged if we reverse the order in which they are scheduled. If such a situation is detected during the branching process, it suffices to generate only one of the two schedules, since the other one is redundant.

Consider the example with n = m = 3 given in Fig. 2. Suppose that we are branching at the root node; the conflict set of the selected operation contains five operations, among which four pairs of operations are unrelated. For each unrelated pair of operations, only one sequence of the two operations needs to be considered. As shown in Fig. 2, the last four branches can be discarded during the branching process.

4.2. Bounding strategy

The algorithm RELAX presented in Section 3 can easily be modified to take into account a given partial schedule. Furthermore, it is modified to produce the job completion times C_1, C_2, ..., C_n corresponding to the lower bound value JLB. The lower bound JLB can be further improved as follows. Given a partial schedule, let D_j be the earliest completion time and U_j the set of unscheduled operations of machine M_j, j = 1, 2, ..., m. The unscheduled operations of a machine may have different ready times. Therefore, determining D_j, for each j = 1, 2, ..., m, is equivalent to solving a 1|r_i|C_max problem, for which it is known that the earliest ready time (ERT) rule produces an optimal schedule. Let D_{j*} = max{D_j : 1 <= j <= m, U_j nonempty}. If j* != 1, any one of the operations in the set U_{j*} may be processed last on machine M_{j*} and thus completed at time D_{j*}. Otherwise, according to the given sequence of jobs on machine M_1, operation O_{n1} is the only candidate to be processed last on machine M_1 and completed at time D_{j*}. An improved lower bound LB1 can be defined as follows:

    LB1 = JLB + Delta_1,

where

    Delta_1 = (D_{j*} - C_n)^+                                                        if j* = 1,
    Delta_1 = min{D_{j*} - C_i : D_{j*} > C_i, O_{ij*} in U_{j*}, 1 <= i <= n}        if j* != 1,

and (x)^+ denotes max{0, x}. The lower bound LB1 considers only the machine with the largest earliest completion time in improving the job-based lower bound JLB. If Delta_1 > 0, LB1 can be further tightened, in a similar way, by considering the machine with the second largest earliest completion time. Suppose that the value of Delta_1 is achieved by job J_{i*}, i.e., Delta_1 = D_{j*} - C_{i*} > 0.


Let D_{j**} = max{D_j : j != j*, U_j nonempty}. That is, machine M_{j**} has the second largest earliest completion time D_{j**}. Define

    D_{j**} - C_{i**} = D_{j**} - C_n                                                          if j** = 1,
    D_{j**} - C_{i**} = min{D_{j**} - C_i : D_{j**} > C_i, O_{ij**} in U_{j**}, 1 <= i <= n}   if j** != 1.

That is, J_{i**} is the job selected to be processed last on machine M_{j**} and completed at time D_{j**}. An improved lower bound LB2 can thus be defined as follows:

    LB2 = LB1 + Delta_2,

where

    Delta_2 = 0                           if i** = i*,
    Delta_2 = (D_{j**} - C_{i**})^+       if i** != i*.

The lower bound LB2 is used in the branch-and-bound algorithm developed in this paper.

We illustrate the computation of LB1 and LB2 via a simple example. Consider the problem instance with n = m = 4 and the following processing times p_{ij}:

          J_1    J_2    J_3    J_4
    M_1    1      9      4     10
    M_2    6      4      6      9
    M_3    9      7      2      3
    M_4    3      7     10      3
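The two strengthening terms can be computed as in the following C sketch (array layout, function names, and the convention that an empty minimization set contributes 0 are our assumptions; the paper defines LB1 and LB2 only through the formulas above):

    /* D[j]: earliest completion time of machine M_{j+1} (ERT rule);
     * C[i]: completion time of job J_{i+1} from the modified RELAX;
     * in_U[j][i] != 0 iff O_{i+1,j+1} is still unscheduled on machine M_{j+1}. */
    static long positive_part(long x) { return x > 0 ? x : 0; }

    /* Increment contributed by machine jsel; *isel receives the job chosen to
     * finish last on that machine (J_n when jsel is machine M_1). */
    static long machine_delta(int jsel, int n, const long *D, const long *C,
                              char **in_U, int *isel)
    {
        if (jsel == 0) { *isel = n - 1; return D[0] - C[n - 1]; }
        long best = 0; *isel = -1;
        for (int i = 0; i < n; i++)
            if (in_U[jsel][i] && D[jsel] > C[i] &&
                (*isel < 0 || D[jsel] - C[i] < best)) {
                best = D[jsel] - C[i];
                *isel = i;
            }
        return best;                      /* 0 when the set is empty */
    }

    long lower_bound_LB2(int n, int m, long JLB, const long *D, const long *C,
                         char **in_U)
    {
        int jstar = -1, j2 = -1, istar, i2;
        for (int j = 0; j < m; j++) {     /* consider only machines with work left */
            int nonempty = 0;
            for (int i = 0; i < n; i++) nonempty |= in_U[j][i];
            if (!nonempty) continue;
            if (jstar < 0 || D[j] > D[jstar]) { j2 = jstar; jstar = j; }
            else if (j2 < 0 || D[j] > D[j2]) j2 = j;
        }
        if (jstar < 0) return JLB;
        long d1 = positive_part(machine_delta(jstar, n, D, C, in_U, &istar));
        long LB1 = JLB + d1;
        if (d1 == 0 || j2 < 0) return LB1;
        long d2 = machine_delta(j2, n, D, C, in_U, &i2);
        if (i2 == istar) d2 = 0;          /* the same job cannot be counted twice */
        return LB1 + positive_part(d2);
    }

For the partial schedule discussed below (JLB = 96, D = (24, 26, 23, 23), C = (20, 28, 22, 26)), this sketch gives Delta_1 = 4 and Delta_2 = 0, i.e., LB2 = 100, matching the computation in the text.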

Suppose that we are given the partial schedule shown in Fig. 3. The lower bound JLB generated by algorithm RELAX is 96, with C_1 = 20, C_2 = 28, C_3 = 22, and C_4 = 26. In this case, U_1 = {O_41}, U_2 = {O_12, O_32}, and U_3 and U_4 each contain two unscheduled operations (see Fig. 3). Hence, the earliest machine completion times are D_1 = 24, D_2 = 26, D_3 = 23, and D_4 = 23. This gives D_{j*} = max{24, 26, 23, 23} = 26 with j* = 2. By definition, we have Delta_1 = min{D_2 - C_1, D_2 - C_3} = min{26 - 20, 26 - 22} = 4. Thus, LB1 = JLB + Delta_1 = 96 + 4 = 100. Note that Delta_1 = 4 is achieved by job J_3, i.e., i* = 3. The machine with the second largest earliest completion time is M_1, that is, D_{j**} = D_1 = 24. Since j** = 1, we have D_{j**} - C_{i**} = D_1 - C_4 = 24 - 26 = -2 and Delta_2 = (-2)^+ = 0. In this case, LB2 = LB1. We remark that in other cases the value of Delta_2 may be strictly positive, and hence the resulting lower bound LB2 may be strictly greater than LB1.

4.3. Search strategy

Initially, the heuristic presented in Section 2 is used to calculate an upper bound for the problem. The upper bound is updated whenever a complete schedule that improves it is generated during the search process. The following mixed best-first and depth-first strategy is used in the search process.


Fig. 3. A given partial schedule.

Table 3
Performance of the branch-and-bound algorithm

n x m    LB2/OPT    CPU (s)     NODES          UNSOL
4 x 4    0.95       0.02        1 628          0
5 x 5    0.96       1.76        88 395         0
6 x 6    0.97       925.43      29 187 479     0
7 x 7    0.98       53114.72    797 556 143    7

The best-first search is performed first, until the number of nodes generated reaches 1000; thereafter, the depth-first search is applied. We have found that this mixed strategy performs better than either the best-first or the depth-first search used alone. For each node generated during the search process, a lower bound for the node is calculated. If the lower bound plus the cost of the associated partial schedule is greater than or equal to the current upper bound, the node is discarded.
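A minimal sketch of the node test and the strategy switch (hypothetical names; the paper does not give code):

    /* A node is discarded when its lower bound plus the cost of the associated
     * partial schedule cannot improve on the incumbent upper bound; best-first
     * exploration is used until 1000 nodes have been generated, then depth-first. */
    enum { BEST_FIRST_NODE_LIMIT = 1000 };

    int discard_node(long lower_bound, long partial_cost, long upper_bound)
    {
        return lower_bound + partial_cost >= upper_bound;
    }

    int use_depth_first(long nodes_generated)
    {
        return nodes_generated >= BEST_FIRST_NODE_LIMIT;
    }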

5. Computational experiments

We tested the proposed branch-and-bound algorithm and heuristic on randomly generated problems, on a Pentium III-600 personal computer, using the programming language C. The branch-and-bound algorithm was tested on problems with up to n = m = 7, whereas the heuristic was tested on problems with up to n = m = 30. We generated only square problems (n = m), since such problems are harder to solve than the others [12]. For each operation, an integer processing time is generated from the uniform distribution U[1, 10]. For each problem size, 25 instances are generated.

We present the results of our computational runs for the branch-and-bound algorithm in Table 3. We terminate the execution of the branch-and-bound algorithm after 50 h of CPU time.


Table 4
Performance of the lower bound LB2

n x m    LB2/LB0    CPU(LB2)/CPU(LB0)    NODES(LB2)/NODES(LB0)
4 x 4    1.03       0.912                0.610
5 x 5    1.02       0.987                0.774
6 x 6    1.06       0.305                0.206

Table 5
Performance of the heuristic

n x m      HEU/OPT    HEU/LB2    CPU (s)
4 x 4      1.025                 0.114
5 x 5      1.041                 0.280
6 x 6      1.047                 0.461
7 x 7      1.043                 0.705
10 x 10               1.077      1.688
15 x 15               1.083      5.417
20 x 20               1.089      12.441
30 x 30               1.076      41.279
Average    1.039      1.081

For each problem size, the average ratio of the lower bound LB2 at the root node to the optimal value (LB2/OPT), the average CPU time in seconds (CPU), the average number of nodes generated (NODES), and the number of unsolved instances (UNSOL) are reported. Unsolved instances are excluded from the calculation of the above performance measures. All the instances with n = m <= 7 are solved to optimality, except for 3 instances of n = m = 7. As can be observed from Table 3, both the average CPU time and the average number of nodes generated increase rapidly as the problem size increases. Our lower bound LB2 performs quite satisfactorily: the average deviation of LB2 from the optimal value is about 4%. To test the effectiveness of our lower bound LB2, we compare in Table 4 the results of the branch-and-bound algorithm based on LB2 with the results of the branch-and-bound algorithm based on the simple lower bound LB0. It can be seen from Table 4 that LB2 significantly outperforms LB0, and the improvement of LB2 over LB0 seems to increase with the problem size.

In Table 5 we report on the performance of the heuristic presented in Section 2. For small problems (n = m <= 7), the solution obtained by the heuristic (HEU) is measured against the optimal value obtained by our branch-and-bound algorithm; the average optimality gap is about 4%. For large problems (n = m > 7), the solution found by the heuristic is compared to the lower bound value LB2; the average gap is about 8%. Table 5 also gives the average CPU time of the heuristic for each problem size. As shown in Table 5, the heuristic is able to find good solutions within a short time.


Table 6
Results for different processing time distributions of operations on machine M_1

Distribution         n x m    HEU/OPT    LB2/OPT    CPU (s)    NODES
p_i1 in U[1, 10]     4 x 4    1.025      0.951      0.024      1 628
                     5 x 5    1.041      0.964      1.763      88 395
                     6 x 6    1.047      0.973      925.431    29 187 479
p_i1 in U[6, 15]     4 x 4    1.022      0.996      0.013      509
                     5 x 5    1.020      0.997      0.496      11 961
                     6 x 6    1.038      0.998      653.616    9 304 251
p_i1 in U[11, 20]    4 x 4    1.013      1.000      0.005      62
                     5 x 5    1.027      1.000      0.102      2 147
                     6 x 6    1.015      1.000      26.962     316 900

Finally, Table 6 presents the results of the proposed algorithms for different processing time distributions of the operations on machine M_1. Since the operations on machine M_1 are processed in a given order, the relative length of the operations on machine M_1 compared to the other operations has a significant effect on the problem hardness. We compare the results for p_{i1} in U[1, 10], p_{i1} in U[6, 15] and p_{i1} in U[11, 20] for all i, with p_{ij} in U[1, 10] for all i and j != 1. As Table 6 shows, the performance of both the heuristic and the branch-and-bound algorithm improves as the processing times p_{i1} increase; hence, the problem becomes easier to solve as the processing times p_{i1} increase.

6. Conclusions

In this paper we have examined the problem of scheduling open shops to minimize total completion time with a given sequence of jobs on one machine. We have derived a new lower bound based on the optimal solution of a relaxed problem in which the operations on every machine may overlap except for the machine with a given sequence of jobs. Our experimental results show that the new lower bound outperforms the classical simple lower bound defined as the sum of all processing times. We have also developed a heuristic, based on iterative dispatching, for efficiently solving large-scale problems, and a branch-and-bound algorithm for optimally solving small-scale problems.

In view of our experimental results, the problem under consideration is extremely difficult. Our results indicate that even sharper upper and lower bounds are needed to cut down the size of the search space. A more efficient branching strategy is also necessary to further improve the performance of the branch-and-bound algorithm. The approach considered in this paper may lead to the development of branch-and-bound algorithms for more general performance criteria than total completion time, such as total tardiness. The ideas presented here may also be extended to more general open shop scheduling problems.


Fig. 4. A schedule with Sum C_i = 9X.

Appendix A. Proof of Proposition 1

Proposition 1. The problem O_m|GS(1), R|Sum C_i is NP-hard in the ordinary sense even for three jobs.

Proof. Consider the PARTITION problem, which is known to be NP-complete [13]: given a set S = {a_1, a_2, ..., a_r} of r integers with Sum_{j=1..r} a_j = 2X, does there exist a partition of S into two subsets Y and Z such that Sum_{j in Y} a_j = Sum_{j in Z} a_j = X?

Given an instance of PARTITION, we define in polynomial time the following instance of O_m|GS(1), R|Sum C_i with n = 3 jobs and m = r + 1 machines. The processing times are

    p_{i1} = X,          i = 1, 2, 3,
    p_{i,j+1} = a_j,     i = 1, 2, 3,  j = 1, 2, ..., r.

Machine M_1 has to process the jobs in the sequence J_1, J_2, J_3. We show that in the constructed problem a schedule S_0 with Sum C_i(S_0) <= 9X exists if and only if PARTITION has a solution.

Suppose that PARTITION has a solution, and Y and Z are the required subsets of S. Let pi(Y) and pi(Z) denote arbitrary permutations of the machines among M_2, M_3, ..., M_{r+1} that correspond to Y and Z, respectively. Then the desired schedule S_0 exists (Fig. 4). In S_0, all jobs are processed continuously from time 0 to time 3X. Job J_1 is processed on the machines in the sequence M_1, pi(Y), pi(Z). Job J_2 is processed on the machines in the sequence pi(Y), M_1, pi(Z), and job J_3 is processed on the machines in the sequence pi(Y), pi(Z), M_1. Clearly, we have Sum C_i(S_0) = 9X.

Suppose now that there exists a schedule S_0 such that Sum C_i(S_0) = 9X. Since each job lasts 3X, each job is processed continuously in S_0. So there exists a set of machines on which the sum of processing times of job J_2 is equal to X. Such a situation is possible only if PARTITION has a solution. This proves the proposition.

Appendix B. Proof of Proposition 2

Proposition 2. Algorithm RELAX finds the optimal Sum_{i=1..n} C_i value for O_m|GS(1), R|Sum C_i.


Proof. Let S* be an optimal schedule for O_m|GS(1), R|Sum C_i, and let W be the set of schedules generated by algorithm RELAX. We show that S* can be transformed into a schedule belonging to W without increasing the total job completion time. To accomplish this, we execute algorithm RELAX and check at each step k whether algorithm RELAX finds the same completion time as in S* for job J_k. Let t = c_{k-1,1}. A discrepancy implies that the subset of operations of job J_k processed before operation O_{k1} in S* has a total duration different from B(k, t) and A(k, t). This can be adjusted easily, since any operation O_{kj}, j != 1, processed before (after) operation O_{k1} can be moved to be processed after (before) operation O_{k1} without increasing the completion time of job J_k. The final schedule obtained by this process belongs to W. Hence, algorithm RELAX finds the optimal Sum_{i=1..n} C_i value for O_m|GS(1), R|Sum C_i.

References

[1] Gueret C, Prins C. A new lower bound for the open-shop problem. Annals of Operations Research 1999;92:165-83.
[2] Sharfransky YM, Strusevich VA. The open shop scheduling problem with a given sequence of jobs on one machine. Naval Research Logistics 1998;45:705-31.
[3] Achugbue JO, Chin FY. Scheduling the open shop to minimize mean flow time. SIAM Journal on Computing 1982;11:709-20.
[4] Brucker P, Hurink J, Jurisch B, Wostmann B. A branch & bound algorithm for the open-shop problem. Discrete Applied Mathematics 1997;76:43-59.
[5] Gonzalez T, Sahni S. Open shop scheduling to minimize finish time. Journal of the ACM 1976;23(4):665-79.
[6] Gueret C, Prins C. Classical and new heuristics for the open-shop problem. European Journal of Operational Research 1998;107(2):306-14.
[7] Brasel H, Tautenhahn T, Werner F. Constructive heuristic algorithms for the open shop problem. Computing 1993;51:95-110.
[8] Tanaev VS, Sotskov YN, Strusevich VA. Scheduling theory. Multi-stage systems. Dordrecht: Kluwer Academic, 1994.
[9] Strusevich VA. Shop scheduling problems under precedence constraints. Annals of Operations Research 1997;69:351-77.
[10] Martello S, Toth P. Knapsack problems. New York: Wiley, 1990.
[11] Giffler B, Thompson GL. Algorithms for solving production scheduling problems. Operations Research 1960;8:487-503.
[12] Taillard E. Benchmarks for basic scheduling problems. European Journal of Operational Research 1993;64:278-85.
[13] Garey MR, Johnson DS. Computers and intractability: a guide to the theory of NP-completeness. San Francisco: Freeman, 1979.

Ching-Fang Liaw received B.B.A. and M.S. degrees in Industrial Management from National Cheng Kung University, Taiwan, and a Ph.D. degree in Industrial and Operations Engineering from the University of Michigan, MI. He is currently a Professor at the Chaoyang University of Technology, Taiwan. His research interests include combinatorial optimization and heuristic search methods with applications to production scheduling, and vehicle routing and scheduling.

Chun-Yuan Cheng received a B.S. degree in Industrial Engineering from Chung-Yuan University, Taiwan, and M.S. and Ph.D. degrees in Industrial Engineering from Auburn University, AL. She is currently an Associate Professor at Chaoyang University of Technology, Taiwan. Her research interests include input analysis in simulation, maintenance and reliability, and production scheduling.

Mingchih Chen received a B.S. degree in Industrial Engineering from Chung Yuan Christian University, Taiwan, and M.S. and Ph.D. degrees in Industrial Engineering from Texas A&M University, Texas. She is currently an Associate Professor at the Chaoyang University of Technology, Taiwan. Her research interests include reliability, maintainability and production scheduling.