Algorithms for two-machine flow-shop sequencing with precedence constraints

A.M.A. HARIRI
King Abdul-Aziz University, Jeddah, Saudi Arabia

C.N. POTTS
Department of Mathematics, University of Keele, United Kingdom

Received October 1982
Revised March 1983

This paper considers the problem of minimizing the maximum completion time in a two-machine flow-shop for which precedence constraints on the jobs are specified. If one job has precedence over another, then the second of these jobs cannot be processed on a machine until the first of them is completed on that machine. A powerful new lower bounding rule which uses Lagrangean relaxation is developed. Then several dominance theorems are presented which are used to eliminate some jobs from the problem. The lower bound is used in three branch and bound algorithms, two of which employ well-known branching rules while the third uses a new branching rule. The algorithms are compared and tested on problems with up to 80 jobs.

1. Introduction

The two-machine flow-shop problem with precedence constraints that is considered in this paper may be stated as follows. Each of n jobs (numbered 1, ..., n) must be processed without interruption first on machine A and then on machine B. Job i (i = 1, ..., n) becomes available for processing at time zero and requires a positive processing time of a_i and b_i on machines A and B respectively. At any time each machine can handle only one job and each job can be processed on only one machine. Precedence constraints are represented by a directed acyclic graph G whose vertices represent the jobs. If a directed path from vertex i to vertex j exists (i ≠ j), then job i is a predecessor of job j and job j is a successor of job i, which means that job i must be processed before job j on each machine. The objective is to find a feasible schedule which minimizes the maximum completion time. It is well known [2,18] that it is not necessary to consider schedules in which the processing orders on the two machines are not identical. Consequently, we shall restrict ourselves to permutation schedules, which have identical processing orders on the machines. It is also well known [18] that if the processing times a_i and b_i are interchanged (i = 1, ..., n) and the directions of all arcs of G are reversed, then an equivalent inverse problem results. In some situations, as we shall see in Section 3.2, it is preferable to solve the inverse problem rather than the original problem. When there are no precedence constraints, the problem can be solved in O(n log n) steps using Johnson's algorithm [6], in which jobs i with a_i < b_i are sequenced in non-decreasing order of a_i, followed by the remaining jobs i (with a_i ≥ b_i) sequenced in non-increasing order of b_i.
Mitten [13] generalizes Johnson's algorithm to the problem in which the final l_i units of processing on machine A and the initial l_i units of processing on machine B for job i (i = 1, ..., n) may be performed simultaneously, where l_i ≤ min{a_i, b_i}. Assuming the same processing orders on each machine, this generalized problem is solved

North-Holland European Journal of Operational Research 17 (1984) 238-248
0377-2217/84/$3.00 © 1984, Elsevier Science Publishers B.V. (North-Holland)


by applying Johnson's algorithm to the processing times a_i − l_i and b_i − l_i for job i (i = 1, ..., n) on the two machines respectively. Kurisu [7] presents an algorithm for the case of parallel-chain precedence constraints and Sidney [19] generalizes this algorithm to solve problems with series-parallel precedence constraints. Both of these algorithms utilize the property that a group of jobs that are known to be sequenced in adjacent positions may be replaced by a single composite job which can be handled using Mitten's model. Monma [14] gives an implementation of Sidney's algorithm which requires O(n log n) steps after the precedence graph has been decomposed. For the case of general precedence constraints, Monma [15] shows that the problem is NP-hard, which indicates that the existence of a polynomially bounded algorithm to solve the problem is unlikely and an enumerative approach is required. Kurisu [8] presents a tree search algorithm which employs some powerful dominance theorems to restrict the search, and Hariri [4] shows that the performance of the algorithm can be enhanced by introducing a lower bounding procedure. There has also been some research [18] into an alternative version of our problem, in which the precedence constraints are interpreted differently: if a directed path from vertex i to vertex j exists in G, then job i must be completed on machine B before job j can start on machine A. This interpretation of precedence constraints appears to yield harder problems than those resulting from the interpretation adopted in this paper. For example, Lenstra et al. [10] show that the problem with precedence constraints in the form of a tree is NP-hard, whereas the corresponding problem with our interpretation of precedence constraints is polynomially solvable using Sidney's algorithm [19].
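The reduction just described, Johnson's rule applied to the reduced processing times a_i − l_i and b_i − l_i, can be sketched as follows (a hypothetical helper in Python, assuming l_i ≤ min{a_i, b_i}):

```python
def mitten_sequence(a, b, l):
    """Johnson's rule applied to the reduced times a[i]-l[i] and b[i]-l[i],
    which solves Mitten's model in which the last l[i] units on machine A
    may overlap the first l[i] units on machine B."""
    jobs = range(len(a))
    ra = [a[i] - l[i] for i in jobs]   # reduced A-times
    rb = [b[i] - l[i] for i in jobs]   # reduced B-times
    first = sorted((i for i in jobs if ra[i] < rb[i]), key=lambda i: ra[i])
    last = sorted((i for i in jobs if ra[i] >= rb[i]), key=lambda i: -rb[i])
    return first + last
```

With l_i = 0 for all i this reduces to Johnson's rule on the original processing times.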
Although, for our problem, the processing of job i on machine A cannot overlap with its processing on machine B, it is convenient to state our results in terms of Mitten's model, in which the final l_i units of processing on machine A and the initial l_i units of processing on machine B may be performed concurrently (i = 1, ..., n). The results are then easily understood both when the problem is in its original form with l_i = 0 (i = 1, ..., n) and when each group of jobs sequenced in adjacent positions is replaced by a single composite job for which concurrent processing is allowed. In this paper, branch and bound algorithms based on three different branching rules are presented and compared. Section 2 contains the derivation of a new lower bounding procedure based on Lagrangean relaxation and a description of a heuristic method. Several dominance rules are given in Section 3, together with a description of three branching rules. Section 4 reports on computational experience with the algorithms, which is followed by some concluding remarks in Section 5.
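When a group of jobs is merged into a composite job, its parameters for Mitten's model follow from the two-machine completion-time recursion. A sketch (the function name is ours):

```python
def composite_job(group, a, b):
    """Parameters of the composite job formed by the jobs in `group`
    processed consecutively in the given order: total A-time a_pi, total
    B-time b_pi, and overlap l_pi = a_pi + b_pi - C_B(pi), where C_B(pi)
    is the time needed to process the whole group."""
    ca = cb = 0
    for i in group:
        ca += a[i]                # completion on machine A
        cb = max(cb, ca) + b[i]   # completion on machine B
    a_pi = sum(a[i] for i in group)
    b_pi = sum(b[i] for i in group)
    return a_pi, b_pi, a_pi + b_pi - cb
```

Processing the composite job alone then takes a_pi + b_pi − l_pi time, the same as processing the group itself.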

2. Bounding procedures

2.1. The lower bound

The lower bounds derived in this section may be regarded as generalizations of ones used in branch and bound algorithms for the permutation flow-shop problem. Our first lower bound generalizes the job-based bound proposed by Brown and Lomnicki [1] and McMahon and Burton [12]. It is based on a Lagrangean relaxation technique in which the multipliers are determined by the cost reduction method. In this method appropriate constraints are successively introduced into a Lagrangean function, with the multipliers chosen so that the coefficients of the variables remain non-negative. Subject to this restriction, multipliers are chosen as large as possible to provide the maximum increment to the lower bound. Consider the problem formulated in terms of the zero-one variables x_ij (i, j = 1, ..., n), where

  x_ij = 1 if job i is sequenced before job j, and x_ij = 0 otherwise,

and the variable C representing the maximum completion time. It is convenient to define e_ij = 1 whenever the precedence constraints specify that job i is a predecessor of job j and to define e_ij = 0 otherwise


(i, j = 1, ..., n). This implies that e_jj = 0 (j = 1, ..., n). The problem can now be written as

  minimize   C,

  subject to a_j − l_j + b_j + Σ_{i≠j} a_i x_ij + Σ_{i≠j} b_i x_ji ≤ C,  j = 1, ..., n,          (1)
             x_ij ≥ e_ij,              i, j = 1, ..., n,                                         (2)
             x_ij + x_ji = 1,          i, j = 1, ..., n, i ≠ j,                                  (3)
             x_ij + x_jk + x_ki ≥ 1,   i, j, k = 1, ..., n, i ≠ j, j ≠ k, k ≠ i,                 (4)
             x_ij ∈ {0, 1},            i, j = 1, ..., n.                                        (5)

Inequality (1) is well known in two-machine flow-shop theory: the maximum completion time can be expressed as the sum of the total processing time of some job j, the total processing time on machine A of jobs that are sequenced before job j, and the total processing time on machine B of jobs that are sequenced after job j. Inequality (2) ensures that the given precedence constraints are respected, while equation (3) states that job i is sequenced either before or after job j. Inequality (4) ensures that any ordering of a pair of jobs that is implied by transitivity is respected. Clearly, x_jj = 0 (j = 1, ..., n) in any solution of our problem. A lower bound is obtained by relaxing n − 1 of the constraints (1). Thus, for any job j, a relaxed problem can be written as

  minimize { a_j − l_j + b_j + Σ_{i≠j} a_i x_ij + Σ_{i≠j} b_i x_ji },

subject to (2), (3), (4) and (5). We proceed by performing a Lagrangean relaxation of the constraints

  x_ij + x_ji = 1,   i = 1, ..., n, i ≠ j.

If λ_ij are the corresponding multipliers, the Lagrangean function becomes

  L_j^0 = a_j − l_j + b_j + Σ_{i≠j} a_i x_ij + Σ_{i≠j} b_i x_ji + Σ_{i≠j} λ_ij (1 − x_ij − x_ji),

which may be written as

  L_j^0 = a_j − l_j + b_j + Σ_{i≠j} (a_i − λ_ij) x_ij + Σ_{i≠j} (b_i − λ_ij) x_ji + Σ_{i≠j} λ_ij.

The Lagrangean problem is to minimize L_j^0 subject to (2), (4) and (5). To maximize the contribution to the lower bound, λ_ij (i = 1, ..., n; i ≠ j) is chosen as large as possible subject to the restriction that the coefficients of x_ij and x_ji remain non-negative. Thus we have λ_ij = min{a_i, b_i} (i = 1, ..., n; i ≠ j). If constraints (4) are relaxed, the Lagrangean problem is solved by setting x_ij = e_ij and x_ji = e_ji (i = 1, ..., n), yielding the lower bound

  LB_j^0 = a_j − l_j + b_j + Σ_{i≠j} a_i e_ij + Σ_{i≠j} b_i e_ji + Σ_{i≠j} min{a_i, b_i} (1 − e_ij − e_ji),

which consists of the total processing time of job j, the processing time on machine A of the predecessors of job j, the processing time on machine B of the successors of job j, and the smaller of the two processing times of each of the other jobs. It is a simple modification of the job-based bound to take into account precedence constraints between job j and the other jobs. However, precedence constraints between pairs of jobs other than job j are not used. If a_i > b_i and a_k < b_k for some jobs i and k (i ≠ j, k ≠ j, i ≠ k) which are neither predecessors nor successors of job j, but where job i is a predecessor of job k, then, by implication from the bounding procedure, job k will be sequenced before job j and job i will be sequenced after job j, contradicting the precedence constraints. We proceed by suggesting a method to improve the lower bound in such situations.
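A sketch of LB_j^0 in Python (our own formulation of the displayed bound; e is the transitive predecessor matrix, with e[i][j] == 1 iff job i is a predecessor of job j):

```python
def lb0(j, a, b, l, e):
    """Job-based bound LB0_j: total processing of job j, plus the A-times
    of its predecessors, the B-times of its successors, and min(a_i, b_i)
    for every other job."""
    n = len(a)
    lb = a[j] - l[j] + b[j]
    for i in range(n):
        if i == j:
            continue
        if e[i][j]:               # predecessor: contributes its A-time
            lb += a[i]
        elif e[j][i]:             # successor: contributes its B-time
            lb += b[i]
        else:                     # unrelated job: its smaller time
            lb += min(a[i], b[i])
    return lb
```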


Returning to the problem of minimizing L_j^0 subject to (2), (4) and (5), we first define ā_i = a_i − λ_ij and b̄_i = b_i − λ_ij to be reduced processing times. It may be possible to increase the lower bound by performing a Lagrangean relaxation of certain of the constraints (4) as follows. Suppose that jobs i and k are chosen (i ≠ j, k ≠ j, i ≠ k) so that job i is a predecessor of job k but is not a predecessor of job j, so that job k is not a successor of job j, and so that ā_i and b̄_k are strictly positive. Since e_ik = 1, (2) implies that x_ik = 1 and (3) implies that x_ki = 0. Thus inequality (4) reduces to x_ij + x_jk ≥ 1. Performing a Lagrangean relaxation of this constraint, we obtain the Lagrangean function

  L_j^1 = L_j^0 + μ_ijk (1 − x_ij − x_jk),

where μ_ijk is the non-negative multiplier for this constraint. Again, to preserve the non-negativity of the coefficients of the variables and to maximize the increment in the lower bound, we choose μ_ijk = min{ā_i, b̄_k}. The reduced processing times are reset using ā_i = ā_i − μ_ijk and b̄_k = b̄_k − μ_ijk. Then the procedure is repeated until no further constraint (4) can be introduced. At this stage the Lagrangean problem becomes

  minimize { a_j − l_j + b_j + Σ_{i≠j} ā_i x_ij + Σ_{i≠j} b̄_i x_ji + Σ_{i≠j} λ_ij + Σ_{i,k} μ_ijk },

subject to (2) and (5), where the final summation is over all values of i and k for which a constraint of type (4) is introduced into the Lagrangean function. It is solved by setting x_ij = e_ij and x_ji = e_ji (i = 1, ..., n) to yield the generalized job-based bound

  LB_j^1 = a_j − l_j + b_j + Σ_{i≠j} ā_i e_ij + Σ_{i≠j} b̄_i e_ji + Σ_{i≠j} λ_ij + Σ_{i,k} μ_ijk.

Having applied the procedure for all jobs j, the resulting bound is LB^1 = max{LB_1^1, ..., LB_n^1}. The order in which the constraints (4) are introduced into the Lagrangean function affects the multipliers which determine the lower bound. Admittedly, our procedure uses an arbitrary ordering which could be improved upon. However, a substantial contribution to the difference between the lower bound and the optimum arises from another source: for different LB_j^1 the variables may take different values owing to the relaxation of n − 1 of the constraints (1). Adding the constraints (4) to the Lagrangean function with different multipliers is therefore unlikely to improve the lower bound significantly.

As a by-product of the generalized job-based bound, it may be possible to derive additional precedence constraints. Let U be any upper bound on the maximum completion time, possibly found by applying a heuristic. If, having computed LB_j^1 for some job j, LB_j^1 + ā_i ≥ U for any job i (i ≠ j) with e_ij = e_ji = 0, then any solution whose value is less than U has the property that x_ij = 0 and x_ji = 1. Thus, we set e_ji = 1. Similarly, if LB_j^1 + b̄_i ≥ U for any job i (i ≠ j) with e_ij = e_ji = 0, we set e_ij = 1. Additional precedence constraints found in this way may, through transitivity, induce others. They may improve lower bounds that are computed subsequently.

Because the variables are permitted to take different values for different j, for some problems the generalized job-based bounds may be rather weak. For this reason we also compute machine-based bounds that are generalizations of those originally proposed by Ignall and Schrage [5] and Lomnicki [11]. If I denotes the set of jobs with no predecessors and I' denotes the set of jobs with no successors, these bounds may be written as

  L_A = Σ_i a_i + min_{i ∈ I'} {b_i − l_i}

and

  L_B = Σ_i b_i + min_{i ∈ I} {a_i − l_i}.
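A sketch of the machine-based bounds in our notation (e is again the transitive predecessor matrix):

```python
def machine_based_bounds(a, b, l, e):
    """Machine-based bounds: L_A adds to the total A-time the smallest
    b_i - l_i over jobs with no successors; L_B adds to the total B-time
    the smallest a_i - l_i over jobs with no predecessors."""
    n = len(a)
    no_succ = [i for i in range(n) if not any(e[i][j] for j in range(n))]
    no_pred = [i for i in range(n) if not any(e[j][i] for j in range(n))]
    LA = sum(a) + min(b[i] - l[i] for i in no_succ)
    LB = sum(b) + min(a[i] - l[i] for i in no_pred)
    return LA, LB
```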

Thus our overall lower bound is max{LB^1, L_A, L_B}.

We now discuss the implementation of the lower bound. Each job j is considered in turn and LB_j^0 is computed in O(n) steps. If LB_j^0 ≥ U, where U is an upper bound obtained from the maximum completion time of any feasible sequence that has been generated, then the lower bounding computations terminate. Otherwise LB_j^1 is computed. This requires O(n^2) steps, since there are O(n^2) arcs (i, k) of the precedence graph which can contribute a multiplier μ_ijk to LB_j^1. Unless LB_j^1 ≥ U, these bounds are computed for the next job j until LB^1 is found. Thus, LB^1 requires O(n^3) steps. When LB^1 < U, L_A and L_B are computed in O(n) steps. Thus, the computational complexity of the lower bounding calculations is dominated by LB^1, which requires O(n^3) steps. When max{LB^1, L_A, L_B} < U, the search for additional precedence constraints is performed. The direct tests based on comparing LB_j^1 + ā_i and LB_j^1 + b̄_i with U for all jobs i and j require O(n^2) steps. However, O(n^3) steps are required to search for any further constraints that are implied by transitivity. (Floyd's O(n^3) algorithm [3] for finding the shortest path between each pair of vertices of the precedence graph can be used.)

2.2. The upper bound

It is seen above that the use of a method to obtain a sequence that generates an upper bound U can reduce computations in a branch and bound algorithm. Additionally, the heuristic method described below is applied at each node of the search tree in our third enumeration scheme to indicate which branches are added to the search tree. In the heuristic, the set of jobs having no predecessors is found and a job is selected from this set which, according to Johnson's rule, should be sequenced before all other jobs in this set. This job is sequenced in the first unfilled position in the sequence and is deleted from the precedence graph. The procedure is repeated until all jobs are sequenced. This procedure requires O(n^2) steps.

3. Branching rules and dominance

3.1. Reduction of problem size

In this section we give two results which may schedule some jobs in the initial positions of an optimal sequence, in which case the problem may be reduced in size by discarding these jobs. As with other results in Section 3, these theorems are also applied to the inverse problem to yield corresponding corollaries. Before stating our dominance theorems some notation is required. Let σ be an initial partial sequence of jobs, the last of which is completed on machine A at time C_A(σ) and on machine B at time C_B(σ). Let σ' be a final partial sequence, with C'_A(σ') denoting the minimum time between the start of processing jobs of σ' on machine A and the completion of processing jobs of σ' on machine B, and with C'_B(σ') denoting the minimum time between the start and the completion of processing jobs of σ' on machine B. Clearly, C'_A(π) ≥ C'_B(π) for any partial sequence π. Let S denote the set of unsequenced jobs and let I and I' denote the subsets of S which have no predecessors in S and no successors in S respectively.

Theorem 1 (Kurisu [7]). If there exists a job i ∈ I with a_i ≤ b_i and with a_i − l_i ≤ a_j − l_j for all j ∈ I, then there exists an optimal continuation of σ in which job i is sequenced first amongst the jobs in S.

Corollary 1 (Kurisu [7]). If there exists a job i ∈ I' with b_i ≤ a_i and with b_i − l_i ≤ b_j − l_j for all j ∈ I', then there exists an optimal continuation of σ' in which job i is sequenced last amongst the jobs in S.

In the following theorem and its corollary, L denotes any lower bound on the maximum completion time of any sequence having σ as an initial partial sequence and σ' as a final partial sequence. In particular, L is chosen to be the best of the lower bounds described in the previous section, each of which can be modified to incorporate σ and σ'.

Theorem 2 (Potts [14]). If there exists a job i ∈ I with a_i ≤ b_i and with C_A(σ) + a_i − l_i + Σ_{j∈S} b_j + C'_B(σ') ≤ L, then there exists an optimal continuation of σ in which job i is sequenced first amongst the jobs in S.

Corollary 2 (Potts [14]). If there exists a job i ∈ I' with b_i ≤ a_i and with C'_B(σ') + b_i − l_i + Σ_{j∈S} a_j + C_A(σ) ≤ L, then there exists an optimal continuation of σ' in which job i is sequenced last amongst the jobs in S.


Theorem 1, Theorem 2 and their corollaries cannot be used to sequence a job when a_i > b_i for all i ∈ I and a_i < b_i for all i ∈ I'. However, if I or I' contains only one job, then that job is sequenced, enabling the conditions of the theorems and corollaries to be retested once I and I' have been updated. Prior to applying this reduction procedure at the root node of the search tree, an upper bound U is computed using the heuristic method of Section 2.2, the lower bound L = max{LB^1, L_A, L_B} is computed for use in Theorem 2 and Corollary 2, and U is used to generate precedence constraints as described in Section 2.1. If L < U, branching is necessary to solve the problem. We proceed by giving three branching rules and stating any dominance theorems which are useful for each particular branching rule.

3.2. Forward branching rule

In a forward branching rule each node of the search tree corresponds to an initial partial sequence of jobs. Each new branch that is added represents the addition of another job to the corresponding initial partial sequence. This type of enumeration scheme is widely used in algorithms for solving the m-machine permutation flow-shop without precedence constraints. If a forward branching rule is applied to the inverse problem then, with respect to the original problem, each node corresponds to a final partial sequence and the enumeration scheme can be regarded as one of backward branching. It is sometimes preferable to use such a backward branching rule if the important decisions that affect the maximum completion time occur towards the end of the sequence. A scheme similar to that proposed by Potts [17], in which each node corresponds to both an initial and a final partial sequence, is also possible. In our implementation a forward branching rule is applied to the original problem if |{i : a_i < b_i}| ≥ |{i : a_i > b_i}| and is applied to the inverse problem otherwise. The advantage of a forward branching rule is that the theorems of Section 3.1 can be repeatedly tested, since the set I will be different at each node. (Our method of choosing between the original and the inverse problem is aimed at finding the maximum number of candidate jobs i for Theorem 1 and Theorem 2.) Additionally, a special case of the dominance theorem of dynamic programming is applied. An initial partial sequence σ and the initial partial sequence σ'' obtained by interchanging the last two jobs of σ are compared: if C_B(σ) ≥ C_B(σ''), then σ is dominated and the corresponding node of the search tree is discarded. However, if C_B(σ) = C_B(σ''), great care must be taken that σ and σ'' are not both discarded. (In many such cases σ'' will already have been discarded using one of the results of Section 3.1.)
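The adjacent-interchange test can be sketched as follows (illustrative Python; a strict inequality is used here so that on ties one of the two orders always survives):

```python
def extend(ca, cb, job, a, b):
    """Extend an initial partial sequence by one job, updating the
    machine completion times (C_A, C_B) of the partial schedule."""
    ca2 = ca + a[job]
    cb2 = max(cb, ca2) + b[job]
    return ca2, cb2

def dominated_by_interchange(prefix, a, b):
    """Forward-branching dominance test: the node for `prefix` may be
    discarded if swapping its last two jobs gives a partial schedule
    that finishes strictly earlier on machine B."""
    if len(prefix) < 2:
        return False
    ca = cb = 0
    for job in prefix[:-2]:
        ca, cb = extend(ca, cb, job, a, b)
    i, j = prefix[-2], prefix[-1]
    _, cb_orig = extend(*extend(ca, cb, i, a, b), j, a, b)
    _, cb_swap = extend(*extend(ca, cb, j, a, b), i, a, b)
    return cb_orig > cb_swap
```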
The disadvantage of a forward (or backward) branching rule is that if the major decisions relate to jobs that are sequenced in the middle positions of an optimal sequence, large search trees will be constructed before this important information becomes available.

3.3. Kurisu's branching rule [8]

Any two jobs j and k, such that job j is a predecessor of job k, may only be sequenced in adjacent positions if there is no other job which is both a successor of job j and a predecessor of job k. When jobs j and k are permitted to be sequenced in adjacent positions, job j is called a direct predecessor of job k and job k is called a direct successor of job j. Kurisu's branching rule is based on the following results.

Theorem 3 (Kurisu [8]). If there exists a job i ∈ S having at least one predecessor and with a_i ≤ b_i and with a_i − l_i ≤ a_j − l_j for all jobs j ∈ S, then there exists an optimal sequence in which job i is sequenced immediately after one of its direct predecessors.


Corollary 3 (Kurisu [8]). If there exists a job i ∈ S having at least one successor and with b_i ≤ a_i and with b_i − l_i ≤ b_j − l_j for all jobs j ∈ S, then there exists an optimal sequence in which job i is sequenced immediately before one of its direct successors.

Whenever Theorem 1, Corollary 1, Theorem 2 or Corollary 2 cannot be used to sequence a job, the number of direct predecessors n_1 of the job i of Theorem 3 and the number of direct successors n_2 of the job i of Corollary 3 are found. If n_1 ≤ n_2, then n_1 branches are added to the search tree according to Theorem 3; otherwise n_2 branches are added to the search tree according to Corollary 3. In either case, at any node let π be the resulting partial sequence of jobs that are known to be sequenced in adjacent positions and let C_B(π) (= C'_A(π)) be the time necessary to process all jobs of π. Then π may be regarded as a single composite job having processing time a_π = Σ_{i∈π} a_i on machine A, having processing time b_π = Σ_{i∈π} b_i on machine B, and where l_π = a_π + b_π − C_B(π) represents the time during which the composite job may be processed simultaneously on both machines. Furthermore, the precedence graph is condensed by replacing the vertices corresponding to the jobs of π by a single vertex which represents the composite job. Any job which is a predecessor (successor) of a job of π in the original graph is a predecessor (successor) of the composite job in the condensed graph. Thus, each new branch that is added to the search tree reduces the size of the resulting problem by one job. For each node of the search tree, except those created using the results of Section 3.1, the lower bounding procedure of Section 2.1 is applied. This branching rule has the advantage that Theorem 3 and Corollary 3 are powerful results which enable narrow search trees to be constructed. Unfortunately, it has the same disadvantage as the forward branching rule: there is no guarantee that the important decisions are made at the top of the search tree. The branching rule that is given next attempts to overcome this disadvantage.

3.4. The proposed branching rule

In this new branching rule we attempt to make the important decisions which affect the maximum completion time first, as follows. At each node an appropriate set Y of jobs is found and branches are formed in which one job of Y is either sequenced before all other jobs in Y or is sequenced after all other jobs in Y. If Y consists of all unsequenced jobs, then this branching rule is equivalent to forward or backward branching. Unfortunately, for other choices of Y the dominance rules of Section 3.1 are unlikely to be useful except at the root node of the search tree. We describe next how the set Y is obtained from the sequence generated by the heuristic method of Section 2.2. Suppose that ρ = (ρ(1), ..., ρ(n)) is the sequence obtained, with

  C_B(ρ) = Σ_{i=1}^{k} a_ρ(i) + Σ_{i=k}^{n} b_ρ(i)
for some k ∈ {1, ..., n}. We may assume without loss of generality that a_ρ(k) ≤ b_ρ(k), since if a_ρ(k) > b_ρ(k) the inverse problem is considered. Job ρ(k) is placed in the set Y first. Then jobs ρ(k+1), ..., ρ(n) respectively are considered in turn as candidates to be added to the set Y. Job ρ(i) (i = k+1, ..., n) is added to Y if a_ρ(i) < b_ρ(i) and if ρ(i) has no predecessors that are already in Y. When the complete set Y is found, if Y contains more than one job, then |Y| branches are constructed in the search tree, each of which specifies that one job of Y is a predecessor of the other jobs of Y. On the other hand, if Y contains only one job ρ(k), we proceed as follows. Jobs ρ(k−1), ..., ρ(1) respectively are considered in turn as candidates to be added to the set Y. Each job ρ(i) (i = k−1, ..., 1) is added to Y if a_ρ(i) > b_ρ(i) and if job ρ(i) has no successors that are already in Y. If Y now contains more than one job, then |Y| branches are added to the search tree, each of which specifies that one job of Y is a successor of the other jobs of Y. If Y still contains only job ρ(k), then it is clear from Section 2 that the lower bound is equal to C_B(ρ), so branching is not required.
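The construction of Y can be sketched as follows (illustrative Python with 0-based positions; rho is the heuristic sequence of Section 2.2, k the position of the critical job, and e the transitive predecessor matrix — the names and the return convention are ours):

```python
def branching_set(rho, k, a, b, e):
    """Build the set Y for the proposed branching rule.  Assumes
    a[rho[k]] <= b[rho[k]] (otherwise the inverse problem would be used).
    Returns Y and whether each branch makes one job of Y a predecessor
    or a successor of the others."""
    Y = [rho[k]]
    for i in range(k + 1, len(rho)):     # scan forwards from the critical job
        job = rho[i]
        if a[job] < b[job] and not any(e[p][job] for p in Y):
            Y.append(job)
    if len(Y) > 1:
        return Y, 'predecessor'
    for i in range(k - 1, -1, -1):       # otherwise scan backwards
        job = rho[i]
        if a[job] > b[job] and not any(e[job][s] for s in Y):
            Y.append(job)
    if len(Y) > 1:
        return Y, 'successor'
    return Y, None                       # lower bound equals C_B(rho)
```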


When there are no precedence constraints, ρ is the sequence obtained from Johnson's rule [6]. If k' is defined so that a_ρ(i) ≤ b_ρ(i) for i = 1, ..., k' and a_ρ(i) > b_ρ(i) for i = k'+1, ..., n, then Y = {ρ(i) : k ≤ i ≤ k'}. When job ρ(k'') ∈ Y is constrained to be sequenced before all other jobs of Y, we have

  LB_ρ(k'')^0 = a_ρ(k'') + Σ_{i=1}^{k−1} a_ρ(i) + Σ_{i=k}^{n} b_ρ(i).

Since a_ρ(k'') ≥ a_ρ(k) by Johnson's rule, LB_ρ(k'')^0 ≥ C_B(ρ). Thus, the lower bound is exact after the first branching. It is hoped that, when there are precedence constraints, this branching scheme will also make decisions which increase the lower bound at each descendant node towards the value of the upper bound given by the heuristic at its parent. Its disadvantages are that it does not employ any dominance theorems to help reduce the size of the search tree and that the subproblems at each node involve the same number of jobs as at the root node. The latter disadvantage is serious if the branch and bound algorithm does not terminate reasonably quickly. Thus, it might be expected that this branching rule is effective only when the lower bounds are close to the optimum.

4. Computational experience

In addition to the number of jobs, the factors that are likely to affect the efficiency of a branch and bound algorithm are any correlation between the two processing times of each job and the number of arcs in the precedence graph G. Consequently, problems with 30, 40, 50, 60, 70 and 80 jobs were generated as follows. For each value of n, random problems with integer processing times a_i and b_i (i = 1, ..., n) were generated from the uniform distribution [1, 100], and correlated problems with integer processing times a_i and b_i (i = 1, ..., n) were generated from the uniform distribution [1 + 20c_i, 20 + 20c_i], where c_i is randomly selected from {1, 2, 3, 4, 5}. This method of processing time generation follows that given in [9]. In the precedence graph each arc (i, j) with i < j was included with probability p. For each value of n, ten random problems and ten correlated problems were generated for each of the p values 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.4 and 0.5. This yields a total of 160 problems for each value of n.

Three algorithms, each using the lower bound of Section 2.1, the upper bound of Section 2.2, the reduction process of Section 3.1 and one of the three branching rules, were coded in FORTRAN IV and run on a CDC 7600 computer. Whenever a problem was not solved within the time limit of 60 seconds, computation was abandoned for that problem. Computational results for the random and correlated problems are given in Tables 1 and 2 respectively, while Table 3 gives the numbers of unsolved problems for different values of p. For each algorithm, Tables 1 and 2 give the average computation time (or a lower bound on the average computation time when there are unsolved problems), list the numbers of unsolved problems, and show the numbers of solved problems that require not more than 100 nodes, that require over 100 and not more than 500 nodes, and that require over 500 nodes.
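The instance generation of this section can be sketched as follows (a Python sketch; the seed handling and the name of the per-job correlation factor are ours):

```python
import random

def generate_instance(n, p, correlated=False, seed=0):
    """Generate a test instance in the style of Section 4: processing times
    uniform on [1, 100], or correlated via a per-job factor in {1,...,5};
    each arc (i, j) with i < j enters the precedence graph with
    probability p."""
    rng = random.Random(seed)
    if correlated:
        f = [rng.randint(1, 5) for _ in range(n)]   # per-job factor
        a = [rng.randint(1 + 20 * f[i], 20 + 20 * f[i]) for i in range(n)]
        b = [rng.randint(1 + 20 * f[i], 20 + 20 * f[i]) for i in range(n)]
    else:
        a = [rng.randint(1, 100) for _ in range(n)]
        b = [rng.randint(1, 100) for _ in range(n)]
    arcs = [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]
    return a, b, arcs
```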
The results shown in Table 1 for problems with random processing times show that our proposed branching rule is remarkably effective, with 477 of the 480 problems solved with a search tree of 500 or fewer nodes. The branching rule of Kurisu [8] is the least effective and produces 10 unsolved problems for n = 80. Each of the 480 problems was solved by at least one of the three algorithms. The results shown in Table 2 for problems with correlated processing times are very much in contrast to those of Table 1. These correlated problems are clearly very much harder than the random problems, as is often the case in flow-shop scheduling. For small values of n (n ≤ 50) Kurisu's branching rule appears to be the best, whereas for large values of n (n ≥ 60) it gives the worst performance. For these problems with large values of n, the forward branching rule is superior to the other rules. For n ≤ 60, all problems were solved by at least one algorithm; for n = 70 one problem could not be solved by any of the algorithms; and for n = 80 a total of seven problems could not be solved by any of the algorithms.


Table 1
Computational results for problems with random processing times

        Forward                     Kurisu                      Proposed
n       ACT   NS1  NS2  NS3  NU     ACT   NS1  NS2  NS3  NU     ACT   NS1  NS2  NS3  NU
30      0.2    78    2    0   0     0.2    80    0    0   0     0.2    80    0    0   0
40      3.0    75    1    1   3     1.0    72    5    3   0     0.8    78    1    1   0
50      1.4    71    8    1   0     2.0    77    2    1   0     1.2    80    0    0   0
60      3.6    61   16    2   1     5.0    66    9    3   2     3.3    78    1    0   1
70      5.0    67   12    1   0     5.7    71    8    0   1     4.6    80    0    0   0
80      9.9    59   16    3   2    15.4    59   10    1  10     8.7    79    0    0   1

ACT: average computation time, or a lower bound on the average computation time with unsolved problems contributing 60 seconds.
NS1: number of problems solved that require not more than 100 nodes.
NS2: number of problems solved that require over 100 nodes and not more than 500 nodes.
NS3: number of problems solved that require over 500 nodes.
NU: number of unsolved problems.

Table 2
Computational results for problems with correlated processing times

        Forward branching rule        Kurisu branching rule         Proposed branching rule
 n      ACT   NS1  NS2  NS3  NU       ACT   NS1  NS2  NS3  NU       ACT   NS1  NS2  NS3  NU
 30     0.5    72    4    4   0       0.3    78    1    1   0       0.4    75    3    2   0
 40     0.8    63   15    2   0       1.3    70    3    7   0       1.5    75    2    2   1
 50     5.3    57   16    2   5       3.9    62    8   10   1       5.1    67    4    5   4
 60     6.1    48   24    5   3      12.2    53   12    5  10       8.0    66    6    2   6
 70    10.3    44   23    9   4      21.1    43   13    2  22       9.2    69    5    2   4
 80    16.0    40   21   12   8      25.8    44    8    4  24      18.7    60    4    2  14

ACT: average computation time, or a lower bound on average computation time with unsolved problems contributing 60 seconds.
NS1: number of problems solved that require not more than 100 nodes.
NS2: number of problems solved that require over 100 nodes and not more than 500 nodes.
NS3: number of problems solved that require over 500 nodes.
NU:  number of unsolved problems.

Table 3
Numbers of unsolved problems for all n: the influence of p

        Random problems                   Correlated problems
 p      Forward  Kurisu  Proposed         Forward  Kurisu  Proposed
 0.05      0        0       0               12       19       17
 0.1       2        0       1                6       15        9
 0.15      1        3       1                2       13        3
 0.2       2        3       0                0        5        0
 0.25      1        4       0                0        4        0
 0.3       0        2       0                0        1        0
 0.4       0        1       0                0        0        0
 0.5       0        0       0                0        0        0


The numbers of unsolved random problems listed in Table 3 indicate that those with large and small p are relatively easy, whereas those with p = 0.15, 0.2 and 0.25 are the hardest. In contrast, the correlated problems with p = 0.05 are the hardest, with problems becoming easier as p increases. For our proposed branching rule there are no unsolved problems when p ≥ 0.2.

The results produced by Hariri [4] for Kurisu's algorithm [8] are discussed now. They show that Kurisu's algorithm, which does not employ a lower bound, cannot effectively solve 40-job problems: for n = 40, 16 out of a total of 100 test problems cannot be solved within the time limit, and for n = 50, 31 out of a total of 100 test problems cannot be solved within the time limit. When the lower bound LB° is incorporated into Kurisu's algorithm, Hariri's results show that for n = 60, 33 out of a total of 100 test problems cannot be solved within the time limit. Our algorithms are clearly far superior to Kurisu's algorithm.

The performance at the top of the search tree of the lower bound of Section 2.1, the upper bound of Section 2.2 and the reduction procedure of Section 3.1 is of interest. Our discussion concerns the 60-job problems, but similar remarks are likely to apply for other values of n. Of the 80 random problems, the lower bound is exact for 64 of them and is within 0.5% of the optimum for 74 of them. The maximum deviation of the lower bound from the optimum is 1.2%, occurring when p = 0.25. For the p values 0.05, 0.1 and 0.15 the machine-based bound is usually better than LB~ and in most cases it is exact. As p increases, LB~ becomes closer to the optimum while the deviation of the machine-based bound from the optimum increases. For these random problems the upper bound is exact for 47 problems, is within 1% of the optimum for 60 problems and is within 2% of the optimum for 68 problems. The maximum deviation of the upper bound from the optimum is 6.1%, occurring when p = 0.15.
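For readers unfamiliar with the machine-based bound referred to above, a common form of it for the two-machine flow shop (cf. Ignall and Schrage [5]) can be sketched as follows; this is an illustrative simplification, not the exact bound of Section 2.1, and it ignores the precedence constraints. Machine A must process every job before the last job can run on B, and B cannot start until some job finishes on A:

```python
def machine_based_bound(jobs):
    """A simple machine-based lower bound on the makespan of a
    two-machine flow shop, where jobs is a list of (a_i, b_i) pairs.

    Machine A argument: all of A's work must finish, and the job
    sequenced last still needs at least min(b_i) time on B.
    Machine B argument: B cannot start before min(a_i), and must
    then process all of B's work.
    """
    total_a = sum(a for a, b in jobs)
    total_b = sum(b for a, b in jobs)
    return max(total_a + min(b for a, b in jobs),
               total_b + min(a for a, b in jobs))
```

For example, for jobs (3, 2), (1, 4) and (5, 1) the bound is max(9 + 1, 7 + 1) = 10, which happens to equal the optimal makespan for this instance.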
We now turn our attention to the 80 correlated problems with 60 jobs. For 47 of them the lower bound is exact and for 74 of them the lower bound is within 0.5% of the optimum. The maximum deviation of the lower bound from the optimum is 1.2%, which occurs when p = 0.05. For most correlated problems with p ≥ 0.1 our generalized job-based bound is better than the machine-based bound. The overall bound L is weakest when p = 0.05 but becomes closer to the optimum as p increases. This weakness of the lower bound explains the large numbers of unsolved correlated problems which occur when p is small. For the correlated problems the upper bound is exact for 28 of them and is within 1% of the optimum for 76 of them. The maximum deviation of the upper bound from the optimum is 1.3%, occurring when p = 0.1.

We now discuss the effect of the reduction procedure on those 60-job problems for which the lower and upper bounds at the top of the search tree differ. The average number of jobs eliminated from the problem using the procedure of Section 3.1 is 8.6. For p = 0.05 and p = 0.1 the average numbers of eliminated jobs are 17.4 and 10.1 respectively. As p increases the average number of eliminated jobs first decreases and then increases again as p becomes larger. These results are generally in accordance with expectations based on the nature of the dominance theorems.

5. Concluding remarks

The three branch and bound algorithms each solve problems with up to 50 jobs satisfactorily. Their success is largely attributed to the powerful new lower bounding rule that was developed in Section 2.1. The upper bound of Section 2.2 and the dominance rules of Section 3.1 also help in reducing computation. For problems with more than 50 jobs, Kurisu's branching rule generates large search trees and is inferior to the forward branching rule and our proposed branching rule. The proposed branching rule appears to be the most effective except for those problems in which the initial lower bound is not very close to the optimum, i.e. for correlated problems having a sparse precedence graph. For these hard problems the forward branching rule is the best. The unexpectedly good performance of the forward branching rule is explained by the use of the dominance rules of Section 3.1 at each node of the search tree.

There is some scope for further research into an algorithm which will be able to solve the correlated problems with a sparse precedence graph. One possible approach to improving the lower bounds is based on the following observation. Due to the nature of such graphs, it should be possible to obtain a

series-parallel subgraph by deleting only a few arcs: the resulting relaxed problem can be solved using the algorithm of Sidney [19], thereby yielding a lower bound. For denser precedence graphs, it appears likely that too many arcs would have to be deleted to yield a satisfactory lower bound. Lower bounds for the m-machine permutation flow-shop problem with precedence constraints can be obtained by relaxing the capacity constraints on all except two machines. The lower bound for the resulting two-machine subproblems is obtained using the procedure of Section 2.1 and provides a valid lower bound for the m-machine problem. We are currently testing the effectiveness of this and other lower bounds in a branch and bound algorithm.
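The two-machine subproblems that arise in these relaxations are, in the absence of precedence constraints, solvable by Johnson's rule [6] as recalled in Section 1. A minimal sketch of that rule and of makespan evaluation for a permutation schedule, written in Python for illustration only (it ignores precedence constraints and is not the procedure of Section 2.1), is:

```python
def johnson_sequence(jobs):
    """Johnson's rule for the two-machine flow shop without precedence
    constraints: jobs i with a_i <= b_i in non-decreasing order of a_i,
    followed by jobs with a_i > b_i in non-increasing order of b_i.
    jobs is a list of (a_i, b_i) processing-time pairs."""
    first = sorted((j for j in jobs if j[0] <= j[1]), key=lambda j: j[0])
    second = sorted((j for j in jobs if j[0] > j[1]), key=lambda j: -j[1])
    return first + second

def makespan(seq):
    """Maximum completion time of a permutation schedule: machine B
    processes each job after both machine A finishes it and B finishes
    the previous job."""
    c_a = c_b = 0
    for a, b in seq:
        c_a += a                   # completion of this job on machine A
        c_b = max(c_b, c_a) + b    # B waits for A and for its own queue
    return c_b
```

For instance, jobs (3, 2), (1, 4), (5, 1) are sequenced as (1, 4), (3, 2), (5, 1) with makespan 10, which is optimal for this small instance.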

References

[1] A.P.G. Brown and Z.A. Lomnicki, Some applications of the 'branch-and-bound' algorithm to the machine scheduling problem, Operational Res. Quart. 17 (1966) 173-186.
[2] R.W. Conway, W.L. Maxwell and L.W. Miller, Theory of Scheduling (Addison-Wesley, Reading, MA, 1967).
[3] R.W. Floyd, Algorithm 97: shortest path, Comm. ACM 5 (1962) 345.
[4] A.M.A. Hariri, Scheduling, using branch and bound techniques, Ph.D. Thesis, University of Keele (1981).
[5] E. Ignall and L. Schrage, Application of the branch and bound technique to some flow-shop scheduling problems, Operations Res. 13 (1965) 400-412.
[6] S.M. Johnson, Optimal two- and three-stage production schedules with setup times included, Naval Res. Logist. Quart. 1 (1954) 61-68.
[7] T. Kurisu, Two-machine scheduling under required precedence among jobs, J. Operations Res. Soc. Japan 19 (1976) 1-13.
[8] T. Kurisu, Two-machine scheduling under arbitrary precedence constraints, J. Operations Res. Soc. Japan 20 (1977) 113-131.
[9] B.J. Lageweg, J.K. Lenstra and A.H.G. Rinnooy Kan, A general bounding scheme for the permutation flow-shop problem, Operations Res. 26 (1978) 53-67.
[10] J.K. Lenstra, A.H.G. Rinnooy Kan and P. Brucker, Complexity of machine scheduling problems, Ann. Discrete Math. 1 (1977) 343-362.
[11] Z.A. Lomnicki, A 'branch-and-bound' algorithm for the exact solution of the three-machine scheduling problem, Operational Res. Quart. 16 (1965) 89-100.
[12] G.B. McMahon and P.G. Burton, Flow-shop scheduling with the branch-and-bound method, Operations Res. 15 (1967) 475-482.
[13] L.G. Mitten, Sequencing n jobs on two machines with arbitrary time lags, Management Sci. 5 (1958) 293-298.
[14] C.L. Monma, The two-machine maximum flow time problem with series-parallel precedence constraints: an algorithm and extensions, Operations Res. 27 (1979) 792-798.
[15] C.L. Monma, Sequencing to minimize the maximum job cost, Operations Res. 28 (1980) 942-951.
[16] C.N. Potts, The job-machine scheduling problem, Ph.D. Thesis, University of Birmingham (1974).
[17] C.N. Potts, An adaptive branching rule for the permutation flow-shop problem, European J. Operational Res. 5 (1980) 19-25.
[18] A.H.G. Rinnooy Kan, Machine Scheduling Problems: Classification, Complexity and Computations (Nijhoff, The Hague, 1976).
[19] J.B. Sidney, The two-machine maximum flow time problem with series-parallel precedence relations, Operations Res. 27 (1979) 782-791.