European Journal of Operational Research 107 (1998) 414-430
A bicriteria two-machine permutation flowshop problem

Funda Sivrikaya-Şerifoğlu, Gündüz Ulusoy *

Department of Industrial Engineering, Boğaziçi University, Bebek, 80815 Istanbul, Turkey

Received 1 May 1996; received in revised form 1 January 1997
Abstract

The problem attacked in this paper is the scheduling of n jobs on two machines which are assumed to be continuously available. All jobs are available at the beginning of the scheduling period. Setup times are included in the processing times. No preemption of jobs is allowed. The objective is the minimization of a weighted combination of the average job flowtime and the schedule makespan. Three branch and bound (B&B) approaches are compared which differ mainly in their branching strategies. One of them is the conventional B&B approach, called the forward approach, and the other two are referred to as the backward and the double-sided approaches, respectively. To obtain an upper bound, a flowshop heuristic (FSH1) is employed and is subjected to 2-opt and 3-opt. Lower bounds for each approach are developed. In each case, the lower bound is taken as the weighted sum of the lower bounds for the average flowtime and for the makespan. A computational study is performed to evaluate the three different B&B approaches on a test bed consisting of 30 problems each for n = 10, n = 14, and n = 18; each one of these problems is solved for five different levels of the weighting factor. An analysis of the resulting efficient points is also included. An improved version of FSH1 (called FSH2) is developed and both heuristics are subjected to extensive numerical experimentation. © 1998 Elsevier Science B.V. All rights reserved.

Keywords: Permutation flowshops; Bicriteria scheduling; Branch and bound approaches
1. Introduction

In this paper, three different branch and bound (B&B) and two heuristic approaches to a bicriteria two-machine permutation flowshop problem are presented. In a flowshop, the jobs go through the operations following the same machine schedule. If the jobs are not allowed to overtake each other,
* Corresponding author. Fax: +90 212 265 18 00; e-mail: [email protected].
then the schedule is said to be a permutation schedule. Thus, permutation schedules constitute only a subset of all feasible schedules. Permutation schedules are of considerable interest because of their extensive use in practice. There are various performance criteria reported in the literature and used in practice in order to compare two different permutation schedules and then to choose among them. The decision can be based on a single criterion or on multiple criteria. Most of the existing literature on flowshop scheduling problems involves a simple objective criterion like
the minimization of the makespan, the total flowtime or the total tardiness. The bicriteria approach is quite recent in general, and the bicriteria approach in multi-machine problems is even more recent. For the bicriteria static scheduling of a single machine, Dileepan and Sen (1988) provide a survey. They confine their survey to single-machine problems only since, to that date, the scheduling literature did not provide any bicriteria research involving multiple machines. In the problem definition of Dileepan and Sen (1988), the processing times of jobs can be crashed according to crashing cost functions defined for each job. The researchers classify the problems into two classes. In class 1 problems, one of the criteria acts as the objective function and the other as a constraint, while in class 2 problems both criteria are included in the objective function. In this second class, they find that the most frequently used criteria are flowtime, tardiness/lateness, crashing cost and setup cost. The researchers note that the enumeration technique most often considered is the B&B technique. A further survey on the multiple objective single-machine scheduling problem is provided by Fry et al. (1989). Complexity issues associated with multiple objective single-machine scheduling problems are treated by Chen and Bulfin (1993). In their survey, Nagar et al. (1995a) classify the problems of multiple and bicriteria scheduling according to five characteristics: problem nature (deterministic/stochastic); shop configuration (single machine/multiple machine); solution methods (traditional/integer programming/recent methods); performance measure involved (regular/nonregular); and criteria (bicriteria/multiple criteria). They also have applications as a sixth characteristic, but they report only a few application cases. They discuss the cited literature based on the number of machines and criteria. Concerning the implicit enumeration techniques, they conclude that successful implementation of these techniques largely depends on identifying special problem structure, if any, and using this information to devise fathoming criteria.

The problem attacked in this paper is the scheduling of n jobs on two machines which are assumed
to be continuously available. It is assumed that all jobs are available at the beginning of the scheduling period. Setup times are included in the processing times. No preemption of jobs is allowed. In this study, the objective is the minimization of a weighted combination of the average job flowtime, F̄, and the schedule makespan, Cmax; i.e., w F̄ + (1 − w) Cmax, where 0 ≤ w ≤ 1 (see Fig. 1 for the notation used throughout the paper). Note that the minimization of the makespan is equivalent to the minimization of the maximum flowtime, Fmax, in this case since the ready times are zero. The minimization of Fmax in a two-machine flowshop problem is solved to optimality by Johnson's algorithm (Johnson, 1954). Ignall and Schrage (1965) develop B&B approaches for two- and three-machine permutation flowshop problems with the objective of minimizing the total flowtime. Various improvements over the Ignall and Schrage (1965) lower bounds are discussed and evaluated by Della Croce et al. (1996). Note that another way of defining the problem treated here is to formulate it as a special case of the sum of the weighted completion times problem. An exact solution procedure for the general case of m machines is provided by Rebai and Carlier (1996). Both of the above-mentioned criteria, average flowtime and makespan, affect scheduling costs. The minimization of the makespan reduces the total turn-around time, while the minimization of the average job flowtime leads to rapid turn-over of jobs, reduced in-process inventory and even utilization of resources. In a comprehensive study on performance criteria for flowshop problems, Gupta and Dudek (1971) recommend that a combination of makespan and flowtime be used in order to reduce scheduling costs significantly. Rajendran (1992) addresses this problem by trying to minimize the total flowtime such that the optimal makespan as obtained from Johnson's algorithm is met. He develops a B&B approach which solves rather small-sized problems involving up to 10 jobs to optimality. For larger problems, Rajendran (1992, 1995) provides heuristic algorithms. Sayin and Karabati (1996) suggest a method for finding the efficient points. They propose a
Fig. 1. Terminology.
B&B procedure that iteratively solves restricted single-objective scheduling problems until the set of efficient solutions is completely enumerated. They observe that the number of efficient solutions is significantly small irrespective of problem size. They find that the changes of the makespan and the sum of completion times over the efficient points take place over a relatively small range. On the other hand, an approximately 20% improvement in the sum of completion times is observed to be possible over the sum of completion times of the schedule obtained by Johnson's algorithm. Another B&B approach is developed by Nagar et al. (1995b). The study identifies two parameters that affect the performance of the B&B algorithm: the so-called average dominance Da and the objective weights of the individual criteria. Da is defined as the difference between the sums of the processing times of jobs on machines 1 and 2. It is a measure of the total workload imbalance between the machines. Their B&B algorithm is able to efficiently solve problems where, for each job, the processing time on machine 1 is smaller than that on machine 2. Note that this is a special case of Da < 0. Randomly generated 10- and 14-job problems, one with Da = 1 and the others with negative Da values, are used to experiment with various objective weights. The researchers conjecture that problem difficulty is maximum when the two objective weights are equal or near-equal.
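As background for what follows, Johnson's rule and the weighted objective w F̄ + (1 − w) Cmax can be sketched in a few lines of Python. This is an illustration under the paper's assumptions (zero ready times, setups folded into processing times); the function names and the example instance are ours, not from the paper.

```python
def johnson_sequence(jobs):
    """Johnson's rule for the two-machine flowshop (minimizes makespan).

    jobs: dict mapping job id -> (a, b), the processing times on
    machines 1 and 2. Jobs with a <= b come first in nondecreasing a;
    the remaining jobs follow in nonincreasing b.
    """
    first = sorted((j for j, (a, b) in jobs.items() if a <= b),
                   key=lambda j: jobs[j][0])
    last = sorted((j for j, (a, b) in jobs.items() if a > b),
                  key=lambda j: -jobs[j][1])
    return first + last


def evaluate(seq, jobs, w):
    """Weighted objective w * Fbar + (1 - w) * Cmax of a permutation schedule."""
    t1 = t2 = 0.0              # current completion times on machines 1 and 2
    flow = []
    for j in seq:
        a, b = jobs[j]
        t1 += a                # completion on machine 1
        t2 = max(t2, t1) + b   # completion on machine 2
        flow.append(t2)        # ready times are zero, so flowtime = completion
    fbar = sum(flow) / len(flow)
    cmax = flow[-1]
    return w * fbar + (1 - w) * cmax, fbar, cmax
```

For instance, on the three-job instance {1: (3, 2), 2: (1, 4), 3: (2, 5)}, Johnson's rule yields the sequence [2, 3, 1], whose makespan no other permutation improves.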
This study extends that of Rajendran (1992) and Nagar et al. (1995b), whose B&B approaches can solve up to 10 and 14 jobs, respectively. Here, this bound is increased to 18 jobs. The test bed is also enlarged so that a wide range of Da values is covered. The behaviors of the three B&B approaches over both positive and negative Da values are investigated, including the special cases of aj > bj and aj < bj for all j. Rajendran and Chaudhuri (1991) provide a general flowshop heuristic for n jobs and m machines. An improved version of that flowshop heuristic for m = 2 is developed and extensively tested. The improved heuristic proves to be very effective.

2. Three branch and bound approaches

Three B&B approaches are developed for the bicriteria two-machine permutation flowshop problem defined above. One of them is the conventional B&B approach. The approaches differ mainly in their branching strategies. In the well-known, conventional B&B approach, a node at the kth level of the tree corresponds to the assignment of a job from the set of unassigned (free) jobs to the kth position of the processing sequence. The jobs are fixed starting from the beginning of the sequence; this approach will therefore be called the forward B&B approach. In the backward B&B approach, jobs are fixed starting from the end of the processing sequence. That is, a node at level k corresponds to the assignment of a free job to the (n − k + 1)th position of the sequence. In the double-sided B&B, jobs are fixed at either end of the sequence depending on the level. A node at an odd-numbered level k corresponds to the assignment of a free job to the ((k + 1)/2)th position, and a node at an even-numbered level k corresponds to the assignment of a free job to the (n − k/2 + 1)th position of the job sequence. An example of this approach is given by Carlier and Rebai (1995) for the permutation flowshop problem.

The initial upper bound is obtained by using a heuristic procedure. The heuristic procedure employed here is a special case, for m = 2, of the general flowshop heuristic for n jobs and m machines provided by Rajendran and Chaudhuri (1991). This heuristic, denoted by FSH1, parallels the heuristic of Rajendran and Chaudhuri (1991) up to the tie-break rules. The initial upper bound is obtained by applying 2-opt and 3-opt (Lin, 1965; Lin and Kernighan, 1973) to the solution of FSH1.

2.1. Lower bounds

Here, three different lower bounds will be introduced, one for each type of B&B approach. In each case, the lower bound is taken as the weighted sum of the lower bounds for the average flowtime, LB_F̄, and for the makespan, LB_Cmax; i.e., LB = w LB_F̄ + (1 − w) LB_Cmax. Two types of lower bounds for the average flowtime, F̄, are developed, both of which are modified versions of the lower bounds proposed by Ignall and Schrage (1965). The first one is used for problems with a positive Da value and is denoted by LB_F+. The second one is used for problems with a negative Da value and is denoted by LB_F−. These two modified versions have been preferred as lower bounds due to the ease of their representation in all three B&B approaches. The lower bound for the makespan, LB_Cmax, on the other hand, is computed by applying Johnson's algorithm to the set of free jobs and is valid for all three B&B approaches.

2.1.1. Lower bound for forward B&B
LB_F+(σ) is computed by ordering the free jobs in nondecreasing processing time (SPT) order on machine 1. The completion times of jobs on machine 1 yield an arrival pattern to the queue of machine 2. The arrival time of the jth fixed job, A_σ(j), is given by the following formula:

    A_σ(j) = Σ_{k=1..j} a_σ(k).

The completion time of the jth fixed job, B_σ(j), is then computed as follows:

    B_σ(j) = max( A_σ(j), B_σ(j−1) ) + b_σ(j),   where B_σ(0) = 0.

The arrival time of the jth free job is denoted by r_a(j), and it gives the earliest possible start time of the jth free job on machine 2. As an improvement over the lower bound suggested by Ignall and Schrage (1965), any arrivals prior to the completion of the last fixed job on machine 2 are transferred to that completion time. In general, LB_F+(σ) can be formulated as follows:

    LB_F+(σ) = (1/n) [ Σ_{j∈S} B_j + Σ_{j=1..u} ( max( B_σ(s), r_a(j) ) + b_a(j) ) ],

where

    r_a(j) = A_σ(s) + Σ_{k=1..j} a_a(k),   j = 1, …, u.

The improvement over the lower bound of Ignall and Schrage (1965) is given by the following inequality:

    Σ_{j=1..u} max( B_σ(s), r_a(j) ) ≥ Σ_{j=1..u} r_a(j).

To compute LB_F−(σ), the free jobs are first ordered on machine 2 in SPT order without any inserted idle time. Idle time occurs, though, preceding the first operation and might occur preceding the last operation on machine 2. LB_F− is obtained by scheduling the free job with the largest processing time on machine 2 as the last operation on machine 2. As an improvement over the lower bound suggested by Ignall and Schrage (1965), the possibility of an idle time preceding this last operation on machine 2 is accounted for. At the root node of the B&B tree, the idle time preceding the first job on machine 2 is taken as the minimum processing time on machine 1; in general, the free jobs cannot start on machine 2 before

    B′_σ(s) = max( B_σ(s), A_σ(s) + min_{j∈U} a_j ).

LB_F−(σ) can be formulated as follows:

    LB_F−(σ) = (1/n) [ Σ_{j∈S} B_j + Σ_{j=1..u−1} ( B′_σ(s) + Σ_{k=1..j} b_b(k) ) + max( B′_σ(s) + b(U), a(N) + b_b(u) ) ],

where b(U) = Σ_{k∈U} b_k and a(N) = Σ_{j∈N} a_j. The improvement over the lower bound of Ignall and Schrage (1965) is given by the following inequality:

    max( B′_σ(s) + b(U), a(N) + b_b(u) ) ≥ B′_σ(s) + b(U).

An example of the computation of LB_F̄ in the forward B&B is given in Fig. 2 for σ = (I, J, ·, …, ·). In that figure, and in the following two figures, the free job with the smallest (largest) processing time on the associated machine i is denoted by S_i (L_i).

Fig. 2. Gantt charts for the computation of the lower bounds for F̄ associated with the node (I, J, ·, …, ·) in the forward B&B approach.

2.1.2. Lower bound for backward B&B

To compute LB_F+(σ), the free jobs are ordered on machine 1 in SPT order without any inserted idle time. The arrival times of the fixed jobs to the queue of machine 2 are now given by the following formula:

    A_σ(j) = Σ_{k∈U} a_k + Σ_{k=1..j} a_σ(k),   j = 1, …, s.

The arrival times of the free jobs are given by

    r_a(j) = Σ_{k=1..j} a_a(k),   j = 1, …, u.

The first fixed job starts on machine 2 as soon as it arrives there; the improvement over the lower bound of Ignall and Schrage (1965) comes from these arrival times accounting for the processing of all free jobs on machine 1. In general, LB_F+(σ) is formulated as follows:

    LB_F+(σ) = (1/n) [ Σ_{j=1..u} ( r_a(j) + b_a(j) ) + Σ_{j∈S} B_j ],

where the B_σ(j) are defined as in Section 2.1.1.

To compute LB_F−(σ), the free jobs are ordered on machine 2 in SPT order without any inserted idle time. Any idle time preceding the fixed jobs on machine 2 is taken into account. Another idle time accounted for is the one preceding the first job on machine 2; it is taken to be equal to the minimum processing time on machine 1 among the free jobs. LB_F−(σ) is formulated as follows:

    LB_F−(σ) = (1/n) [ Σ_{j=1..u} ( min_{k∈U} a_k + Σ_{k=1..j} b_b(k) ) + Σ_{j∈S} B_j ],

where the B_σ(j) are defined as in Section 2.1.1 with B_σ(0) = B_b(u). An example of the computation of LB_F̄ in the backward B&B is given in Fig. 3 for the partial schedule σ = (·, …, ·, I, J).

2.1.3. Lower bound for double-sided B&B

In the double-sided approach, the set of fixed jobs is partitioned into S and T, which denote the sets of fixed jobs at the beginning and at the end of the job sequence, respectively.
An example of the computation of LB_F̄ in the double-sided B&B is given in Fig. 4 for the partial schedule σ = (J, ·, …, ·, I, K). In that example, S is represented by {J} and T by {I, K}. LB_F̄ in the double-sided approach is a composite of the LB_F̄ formulations of the previous two approaches.

To compute LB_F+(σ) in the double-sided approach, the free jobs are ordered in SPT order on machine 1 without any inserted idle time, and the arrival times r_a(j) are obtained. Any arrivals prior to the completion of the last fixed job in S on machine 2 are transferred to that completion time. The processing of the first fixed job in T starts as soon as it arrives on machine 2. The general formula is as follows:

    LB_F+(σ) = (1/n) [ Σ_{j∈S} B_j + Σ_{j=1..u} ( max( B_σ(s), r_a(j) ) + b_a(j) ) + Σ_{j∈T} B_j ],

where the B_σ(j) and B_τ(j) are defined as in Section 2.1.1 with B_σ(0) = 0.

Fig. 3. Gantt chart for the computation of the lower bounds for F̄ associated with the node (·, …, ·, I, J) in the backward B&B approach.

To compute LB_F−(σ), the free jobs are ordered on machine 2 in SPT order without any inserted idle time. The arrival times of the fixed jobs at the end of the sequence to the queue of machine 2 are given by

    A_τ(j) = Σ_{k∈S} a_k + Σ_{k∈U} a_k + Σ_{k=1..j} a_τ(k),   j = 1, …, t.

The fixed jobs in T are scheduled on machine 2 with appropriate idle times preceding them. LB_F−(σ) is formulated as follows:

    LB_F−(σ) = (1/n) [ Σ_{j∈S} B_j + Σ_{j=1..u−1} ( B′_σ(s) + Σ_{k=1..j} b_b(k) ) + max( B′_σ(s) + b(U), a(S) + a(U) + b_b(u) ) + Σ_{j∈T} B_j ],

where a(S) = Σ_{k∈S} a_k and a(U) = Σ_{k∈U} a_k, the B_σ(j) are defined as above with B_σ(0) = 0, and the B_τ(j) are defined as above with B_τ(0) = B_b(u).

Fig. 4. Gantt charts for the computation of the lower bounds for F̄ associated with the node (J, ·, …, ·, I, K) in the double-sided B&B approach.
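Under our reading of Section 2.1, the makespan component LB_Cmax and the weighted combination LB = w LB_F̄ + (1 − w) LB_Cmax can be sketched as follows for the forward approach: the fixed jobs occupy the front of the schedule, and Johnson's rule orders the free jobs. This is a hedged illustration; the function names are ours.

```python
def lb_makespan(fixed, free, jobs):
    """LB_Cmax for the forward approach: Johnson's rule on the free jobs.

    fixed: sequence of fixed job ids (front of the schedule);
    free: collection of free job ids; jobs: id -> (a, b).
    Returns the makespan of the Johnson completion of the partial schedule.
    """
    A = B = 0.0
    for j in fixed:                    # completion times of the fixed block
        a, b = jobs[j]
        A += a
        B = max(A, B) + b
    # Johnson's order over the free jobs
    first = sorted((j for j in free if jobs[j][0] <= jobs[j][1]),
                   key=lambda j: jobs[j][0])
    last = sorted((j for j in free if jobs[j][0] > jobs[j][1]),
                  key=lambda j: -jobs[j][1])
    for j in first + last:
        a, b = jobs[j]
        A += a
        B = max(A, B) + b
    return B


def combined_lb(w, lb_fbar, lb_cmax):
    """LB = w * LB_Fbar + (1 - w) * LB_Cmax (Section 2.1)."""
    return w * lb_fbar + (1 - w) * lb_cmax
```

On the small instance {1: (3, 2), 2: (1, 4), 3: (2, 5)} with job 2 fixed first, Johnson's completion of the free jobs {1, 3} gives a makespan bound of 12.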
2.2. Search strategy

The search strategy employed is a combination of the well-known strategies ``branch-from-lowest-lower-bound'' (frontier search) and ``branch-from-newest-active-node'' (depth-first search). In general, frontier search reaches the optimal solution by exploring a smaller number of nodes than depth-first search, which, in turn, requires less storage space. The combined strategy starts from level 0 by assigning the root node as the current branching node. At each level, all offspring nodes of the current branching node are temporarily generated and are called temporary nodes. The lower bounds of all temporary nodes are computed and stored. Each temporary node is subjected to the bound test: those nodes which have a lower bound greater than or equal to the current upper bound are fathomed, i.e., such lower bounds are set equal to infinity. Lower bounds of nodes which pass the bound test are stored as they are. The temporary node with the minimum lower bound is appended to the B&B tree, and it becomes the current branching node. None of the other temporary nodes is appended to the B&B tree. The only information kept about these nodes is their lower bounds; this information is used when backtracking. If no temporary node can pass the bound test, so that the minimum lower bound is infinity, the algorithm backtracks to the previous level and finds the minimum lower bound of that level. The node with that minimum lower bound is generated and appended to the tree, and it becomes the current branching node. If the minimum lower bound of that level is also infinity, the algorithm backtracks again. The whole process is repeated until the tree is exhausted. Each time the current upper bound is updated, all the lower bounds stored beforehand are subjected to the bound test again, and those which are greater than or equal to the new current upper bound are set equal to infinity.

This search strategy combines the advantages of its two component strategies, depth-first search and frontier search. The number of active nodes is always less than or equal to n − 1. The lower bounds are stored in an upper-diagonal, two-dimensional array of dimensions n × n; thus the memory requirements are low. Also, since branches with minimum lower bounds are followed, it is expected that an optimal terminal node
is reached faster than when the pure depth-first strategy is applied.

3. Heuristic approaches

3.1. FSH1

FSH1 tries to minimize the sum of idle times on machine 2 and, as a secondary objective, tries to schedule the jobs so as to be as close as possible to the SPT order on both machine 1 and machine 2. Initially, the set of scheduled jobs is empty and therefore A = B = 0. Starting with the first job, the next job to be scheduled, job j, is selected from the set of remaining jobs R such that |B − A − aj| is minimum. If there is a tie, then the job with the smaller aj is chosen. If there is still a tie, the job with the smaller bj is chosen. This step is repeated until all jobs are scheduled. Although FSH1 is a heuristic solution procedure, it produces the optimal solution to a special case of this problem, where aj = bj for all j. In this case, SPT is known to provide the optimal solution (Nagar et al., 1995b).

Claim 1. FSH1 solves the special case of aj = bj, for all j, for all w to optimality.

Proof. FSH1 schedules the job j* with the minimum processing time as the first job. Thus A becomes aj(1), B becomes 2aj(1), and B − A = aj(1), where j(1) = j*. In general, at any iteration k, B − A = aj(k−1). At each step, a job j ∈ U is selected to be scheduled next such that |B − A − aj| is minimum. Since the processing times of all remaining jobs are greater than or equal to the processing time of the last fixed job, to keep |B − A − aj| as small as possible, the job j ∈ U with the minimum processing time is chosen. This selection scheme produces the SPT sequence and hence the optimal solution in this special case.

Another special case solved to optimality by FSH1 is the minimization of average flowtime with aj > bj for all j.

Claim 2. FSH1 solves the special case of aj > bj, for all j, and w = 1 to optimality.
Proof. The first job chosen by FSH1 is j* such that aj* = min_j aj. At this point, the following relations hold: aj ≥ aj(1) > bj(1), for all j, and B − A = bj(1), where j(1) = j*. It follows that aj > B − A for all j ∈ U, and among the free jobs, the one with the minimum processing time on machine 1 will be chosen to keep |B − A − aj| as small as possible. In general, at each iteration k, aj ≥ aj(k−1) > bj(k−1), for all j ∈ U, and B − A = bj(k−1). Therefore, at each iteration, the free job with the smallest processing time on machine 1 will be chosen by FSH1, which produces the SPT sequence on machine 1. Since the optimal solution in this special case is given by ordering the jobs in SPT order on machine 1, as shown by Hoogeveen and Kawaguchi (1996), FSH1 produces the optimal solution.

3.2. FSH2

An improved version of FSH1 is developed and denoted by FSH2. The algorithm for FSH2 is as follows: The first job to be scheduled is the one for which the processing time on machine 1 is minimum. In case of a tie, the job with the smaller processing time on machine 2 is chosen. To choose the next job to be scheduled, a candidate set (CS) is formed from the set of remaining jobs such that job j, j ∈ U, is included in CS if aj < (B − A). If there is more than one job in CS, the one with the minimum processing time on machine 2 is chosen as the next job to be scheduled. If CS is empty, then among the free jobs the one with the minimum processing time on machine 1 is chosen. This step is repeated until all jobs are scheduled. FSH2 provides optimal solutions to the special cases of aj = bj for all j, and of aj > bj for all j with w = 1.00. The proofs are similar to those provided for FSH1 and are not repeated here.

4. Computational results and conclusion

4.1. Comparison of the B&B approaches

An important data characteristic affecting the performance of the B&B algorithms is Da, namely
the total workload imbalance between the machines. When generating problems, care is taken so that both cases Da < 0 and Da > 0 are equally represented. In this study, three levels of n have been employed, namely n = 10, 14, 18. The processing times are drawn randomly from a uniform distribution U[1,10] with two-digit accuracy. For each level of n, 10 problems have been generated with Da < 0 and 10 problems with Da > 0. Each set of problems is run for the weight levels w = 0.00, 0.25, 0.50, 0.75, 1.00. This totals up to 300 problems. Note that the cases of w = 0.00, which is solved to optimality by Johnson's algorithm (Johnson, 1954), and w = 1.00 have been included in the computations for the purpose of covering the whole range of w. Also, two special cases, one of Da < 0 and one of Da > 0, are investigated. For each level of n, five problems have been generated with aj < bj for all j, and five problems with aj > bj for all j. With these problems included, the test bed consists of 450 problems.

The three B&B approaches are compared on the basis of the average number of temporary nodes generated (and thus of lower bounds computed) for each problem. The average number of nodes actually appended to the tree is much less than the average number of temporary nodes. The accuracy employed in the computation of the lower and upper bounds is kept to five digits. As the number of jobs increases, the size of the tree explodes and computation times become prohibitively long. Therefore, an upper bound on the number of temporary nodes generated is applied as follows. If an algorithm has not terminated before 10 million temporary nodes are generated, the bound test is modified so that a temporary node with partial schedule σ is fathomed if UB − LB(σ) ≤ ε. In this study, ε is taken to be 0.001. After applying this new bound test to all B&B tree nodes currently live, the algorithm is allowed to run for at most another 5 million temporary nodes.

Table 1 below gives the average, minimum, and maximum Da values of the problems for all levels of n. For the cases with Da < 0 and Da > 0, the average is taken over 10 problems, whereas for the two special cases it is taken over five problems. The table indicates that the problems in the test bed represent a wide range of the spectrum. The experiments are carried out on an HP computer with a 100 MHz Pentium microprocessor. The average run time per problem is 1-2 s for n = 10, while it quickly climbs to 4-96 s for n = 14 and 240-1320 s for n = 18.

The results obtained by the three B&B algorithms for n = 10, 14, and 18 are presented in Table 2. For w = 0.00, all three approaches reach the optimum at the root node, which is an expected result due to the way their lower bounds are calculated. For 0 < w < 1, in the three cases of Da < 0, Da > 0, and aj < bj for all j, the forward approach is superior compared to the other two approaches. In the special case of aj > bj for all j and with 0 < w < 1, the backward approach performs extremely well. Note that in this special case, in general, the performance of the algorithms improves with increasing w. Although FSH1 with 2-opt and 3-opt provides solutions very close to the optimal solutions, and hence a very good upper bound, as will be discussed later, this improvement is mainly due to LB_F+, which employs SPT on machine 1 to sequence the free
Table 1
Average, minimum, and maximum Da values of problems for each level of n

             n = 10                    n = 14                    n = 18
           Avg     Min     Max       Avg     Min     Max       Avg     Min     Max
Da < 0    -8.84  -27.96   -0.65    -14.56  -26.34   -5.90    -12.17  -27.04   -0.44
aj < bj  -30.90  -40.37  -22.80    -43.82  -51.64  -30.31    -41.40  -50.41  -29.96
Da > 0    10.38    0.58   24.87      9.77    2.30   17.11      7.38    1.07   22.84
aj > bj   27.45   20.13   35.47     40.03   28.88   58.13     50.54   30.60   64.09

Table 2
Average number of temporary nodes generated (rounded to the nearest integer)

n = 10     w = 0.25             w = 0.50             w = 0.75             w = 1.00
          FWD   BWD    DS      FWD   BWD   DS       FWD   BWD   DS      FWD   BWD     DS
Da < 0    347   4021   1071    1444  4885  869      1872  8069  2941    1763  14,274  5626
aj < bj   31    6445   23      15    4754  23       15    1980  23      15    1495    23
Da > 0    1038  7001   3567    1947  6765  894      1187  9738  1179    272   17,417  1713
aj > bj   56    13     6029    6027  22    58       5539  11    45      1     1       1

n = 14     w = 0.25                       w = 0.50                       w = 0.75                     w = 1.00
          FWD        BWD        DS       FWD      BWD        DS         FWD     BWD        DS        FWD     BWD        DS
Da < 0    21,303     961,247    60,281   26,304   1,026,896  109,004    33,734  1,296,362  311,086   56,904  2,326,532  638,600
aj < bj   12         229,323    12       15       230,061    24         15      214,159    27        17      189,208    34
Da > 0    119,804    3,690,836  162,239  27,712   4,199,080  189,391    39,322  5,436,272  322,137   11,023  8,969,209  792,104
aj > bj   2,042,352  59         197      393,456  43         136        24,446  41         98        1       1          1

n = 18     w = 0.25                           w = 0.50                              w = 0.75                              w = 1.00
          FWD         BWD         DS          FWD        BWD         DS             FWD        BWD         DS             FWD        BWD         DS
Da < 0    7,654,647   13,786,693  5,970,308   7,685,170  14,028,552  9,818,757      7,743,119  14,399,953  10,611,785     7,429,220  15,000,000  11,974,853
aj < bj   25          1,914,564   72          25         339,255     63             25         259,945     63             25         262,870     63
Da > 0    9,229,272   13,556,842  13,502,810  6,915,525  13,543,165  12,421,302     5,228,184  13,530,170  11,511,356     4,193,029  13,589,090  13,509,464
aj > bj   12,033,921  87          493         7,870,575  49          217            977,825    56          137            1          1           1
jobs. For w = 1.00 in this special case, the optimal solution is reached at the root node. The reason is that, in that case, sequencing the jobs in SPT order on machine 1 provides the optimal solution, and both FSH1 (see Claim 2 in Section 3.1 above) and LB_F+ employ this sequencing scheme when aj > bj. In the symmetrical special case of aj < bj for all j, the forward approach has very good performance, closely followed by the double-sided approach. In this case, the improvement is due to LB_F−, which employs SPT on machine 2 to sequence the free jobs. The double-sided approach is the second-best performer in all cases. Note that it nicely combines the advantages of the two other approaches in the cases of positive and negative Da values and exhibits a more ``robust'' performance as compared to the other two approaches.

Nagar et al. (1995b) conjecture that problem difficulty is maximum when the two objective weights are equal or near-equal. The computational results presented in Table 2 indicate, on the other hand, an increase in problem difficulty with increasing w in problems with Da < 0 and with decreasing w in problems with Da > 0.
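For concreteness, the selection rules of the two heuristics evaluated in this study can be sketched as follows. This is a minimal reading of Section 3, with A and B tracking the current completion times on machines 1 and 2; all names are ours.

```python
def fsh1(jobs):
    """FSH1: greedily keep machine 2 busy (sketch of the Section 3.1 rule).

    jobs: dict mapping job id -> (a, b). At each step, pick the remaining
    job minimizing |B - A - a_j|; break ties by smaller a_j, then smaller b_j.
    """
    A = B = 0.0
    remaining = set(jobs)
    seq = []
    while remaining:
        j = min(remaining, key=lambda j: (abs(B - A - jobs[j][0]),
                                          jobs[j][0], jobs[j][1]))
        a, b = jobs[j]
        A += a
        B = max(A, B) + b
        seq.append(j)
        remaining.remove(j)
    return seq


def fsh2(jobs):
    """FSH2: like FSH1 but selects from a candidate set CS (Section 3.2)."""
    order = sorted(jobs, key=lambda j: (jobs[j][0], jobs[j][1]))
    seq = [order[0]]                       # job with minimum a (tie: min b)
    A = jobs[order[0]][0]
    B = A + jobs[order[0]][1]
    remaining = set(jobs) - {order[0]}
    while remaining:
        cs = [j for j in remaining if jobs[j][0] < B - A]
        pool = cs if cs else remaining
        # min b within CS; otherwise min a among all free jobs
        key = (lambda j: jobs[j][1]) if cs else (lambda j: jobs[j][0])
        j = min(pool, key=key)
        a, b = jobs[j]
        A += a
        B = max(A, B) + b
        seq.append(j)
        remaining.remove(j)
    return seq
```

On an instance with aj = bj for all j, both functions return the SPT sequence, in line with Claim 1.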
4.2. Comparison of LB_F̄ with the Ignall-Schrage bounds

LB_F̄ is compared to the lower bounds of Ignall and Schrage (1965) for each level of n = 10, 14, and 18. Ten instances are randomly generated at each one of the levels from 1 to (n − 2) of the B&B tree for each problem. This amounts to a total of 1600, 2400, and 3600 instances for n = 10, n = 14, and n = 18, respectively. For each n, these instances are equally divided between the cases Da > 0 and Da < 0. The improvements for Da > 0 are summarized in Table 3. The improvements for Da < 0 have been marginal and are therefore not included in
Table 3
Results of the comparison of LB_F+ with the lower bound of Ignall and Schrage (1965)

  n     Avg % impr    Max % impr    Avg score (%)
  10    1.77          5.00          88.63
  14    1.31          3.32          90.50
  18    1.39          3.58          96.25
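The comparison summarized above can be reproduced in miniature. The sketch below computes, at a partial schedule of the forward approach, both the improved flowtime bound (free-job arrivals lifted to B_σ(s), per the inequality in Section 2.1.1) and the plain arrival-based bound it dominates; the function name and the test instance are ours.

```python
def flow_bounds(fixed, free, jobs, n):
    """Improved LB_F+ vs. the plain arrival-based bound it dominates.

    fixed: sequence of fixed job ids (front of the schedule);
    free: collection of free job ids; jobs: id -> (a, b); n: total jobs.
    Returns (improved, plain) average-flowtime lower bounds.
    """
    A = B = 0.0
    base = 0.0
    for j in fixed:                 # completion times of the fixed jobs
        a, b = jobs[j]
        A += a
        B = max(A, B) + b
        base += B
    improved = plain = base
    r = A
    for j in sorted(free, key=lambda j: jobs[j][0]):   # SPT on machine 1
        a, b = jobs[j]
        r += a                      # arrival r_a(j) at machine 2
        improved += max(B, r) + b   # arrival lifted to B_sigma(s)
        plain += r + b              # unlifted arrival
    return improved / n, plain / n
```

By construction, the improved bound is never smaller than the plain one, which is the content of the inequality in Section 2.1.1.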
Fig. 5. Histograms of the number of efficient solutions for each level of n: (a) the distribution of the number of efficient solutions for problems with n = 10; (b) the distribution for problems with n = 14; (c) the distribution for problems with n = 18.

Table 4
Ranges of Cmax and F̄ as a percentage of their best values

           Cmax                     F̄
  n     Min     Avg     Max      Min     Avg     Max
  10    0.00    4.72    16.76    0.20    17.54   34.05
  14    0.00    2.78    10.51    4.13    19.92   35.01
  18    0.00    2.33    7.52     6.01    21.10   41.04
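For small instances, the efficient (Cmax, F̄) pairs whose ranges Table 4 summarizes can be enumerated by brute force over all permutation schedules. The sketch below is for illustration only; the function name and the instance in the test are ours.

```python
from itertools import permutations

def efficient_points(jobs):
    """Enumerate the efficient (Cmax, Fbar) pairs of a small instance
    by brute force over all permutation schedules."""
    pts = set()
    for seq in permutations(jobs):
        t1 = t2 = 0.0
        total = 0.0
        for j in seq:
            a, b = jobs[j]
            t1 += a                  # completion on machine 1
            t2 = max(t2, t1) + b     # completion on machine 2
            total += t2
        pts.add((t2, total / len(jobs)))   # (Cmax, Fbar)
    # keep the non-dominated pairs only
    return sorted(p for p in pts
                  if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                             for q in pts))
```

Consistent with the observation of Sayin and Karabati (1996) quoted in the introduction, the number of efficient points tends to be small; on a three-job toy instance a single point may dominate all others.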
Table 5
FSH1 vs. LB at the root, Johnson's algorithm and GH

                 LB                  Johnson's            GH, Da > 0           GH, Da < 0
  w      n     Avg % dv  t-test    Avg % dv  t-test     Avg % dv  t-test     Avg % dv  t-test
  0.00   10    6.95      7.80      3.69      6.14       5.45      9.02       -5.82     8.67
         20    3.55      7.89      2.53      7.86       3.10      10.42      -6.32     9.81
         50    1.78      6.25      1.86      14.60      1.83      12.18      -7.02     17.41
         100   1.05      8.12      1.39      12.22      1.21      13.44      -6.79     21.07
         200   0.54      6.34      0.89      13.51      0.69      11.47      -6.56     32.47
         500   0.39      6.65      0.81      13.50      0.58      11.24      -6.23     44.83
  0.25   10    17.48     22.99     16.96     32.84      3.53      7.86       -4.40     6.86
         20    16.24     42.62     16.76     44.47      1.03      3.99       -4.73     7.97
         50    15.42     45.12     16.40     83.48      -0.41     2.88       -5.02     12.47
         100   15.13     108.08    15.76     111.93     -1.19     12.45      -4.76     15.56
         200   14.84     131.80    15.48     130.98     -1.70     27.07      -4.39     24.69
         500   14.53     173.45    14.98     177.88     -2.13     40.01      -4.19     35.07
  0.50   10    24.74     35.17     29.03     39.32      1.18      2.44       -2.49     3.86
         20    26.70     66.81     30.64     50.17      -1.59     4.90       -2.52     4.42
         50    28.06     52.74     30.81     82.82      -3.24     15.67      -2.21     5.09
         100   28.69     138.72    30.01     121.54     -4.24     27.75      -1.88     6.18
         200   28.78     128.26    30.26     125.08     -4.74     48.48      -1.31     7.24
         500   28.40     181.45    29.05     216.65     -5.57     74.29      -1.29     11.50
  0.75   10    24.10     31.86     35.29     32.53      -1.75     2.54       0.24      0.07
         20    29.17     38.33     39.40     40.55      -5.00     9.25       0.73      0.80
         50    34.23     42.63     40.03     68.43      -6.94     20.26      2.02      3.70
         100   36.33     88.14     39.08     88.22      -8.23     32.73      2.50      6.86
         200   36.88     85.41     40.31     98.79      -8.74     54.40      3.41      12.98
         500   36.69     121.01    37.94     195.76     -10.08    86.80      3.18      21.19
  1.00   10    11.10     9.19      29.36     19.09      -5.52     4.64       4.45      3.28
         20    16.40     10.67     35.18     20.73      -9.64     10.95      6.00      5.42
         50    25.35     22.46     35.16     40.29      -11.98    21.87      9.13      11.42
         100   28.90     32.37     33.88     42.40      -13.67    34.54      9.96      18.63
         200   29.55     36.36     36.32     56.36      -14.22    56.03      11.54     24.40
         500   29.84     50.70     32.26     114.52     -16.25    90.84      10.93     39.72
that table. It follows from the table that, as n increases, the average percentage of cases where LBF is better than the Ignall–Schrage bound also increases, although the average percent improvement slightly decreases for n = 14 and 18.

4.3. An analysis into efficient points

Recall that the test bed consists of 30 problems each for n = 10, n = 14, and n = 18, and each one of these problems is solved for five different w levels. For each problem, the (Cmax, F) pair obtained at
each of the five different w levels is plotted. Among the pairs corresponding to w = 0.25, 0.50, and 0.75, at least one is an efficient point. Whether these are the only efficient points is not known. Only in cases where two or three pairs are equal does this constitute the only efficient point for the corresponding w-range. For example, when all three pairs are equal, one can say that for 0.25 ≤ w ≤ 0.75 this is the only efficient point. For w = 0.00 and w = 1.00, one cannot guarantee that these points correspond to efficient points unless their (Cmax, F) pair is equal to one in the range 0.25 ≤ w ≤ 0.75. In order to overcome
this problem, further experimentation is performed with w = 0.001 and w = 0.999. The (Cmax, F) pairs and optimal values of the 30 problems for each of the three levels of n and for seven different w levels are given in Sivrikaya-Şerifoğlu and Ulusoy (1996). It is observed that only a relatively small percentage of the (Cmax, F) pairs correspond to efficient points. For each level of n, a histogram depicting the distribution of the number of efficient points is provided in Fig. 5 below. It is interesting to note that Sayin and Karabati (1996) also report a relatively small number of efficient points. For all problems, the Rajendran (1992) solution is obtained as the efficient point corresponding to w = 0.001. Note that Rajendran (1992) has solved problems with at most 10 jobs. Besides being able to solve larger problems, the procedure proposed in this study provides the decision maker with more information. An interesting observation is that, for a given problem, the Cmax values do not change much over the different w levels, i.e., the range is relatively small. These ranges are calculated and the results are summarized in Table 4, which verifies this observation. In the same table, the range analysis for F is also reported. Note that the range for F is much larger. Similar results are reported by Sayin and Karabati (1996).

4.4. Evaluation of the flowshop heuristics FSH1 and FSH2

FSH1 and FSH2 are compared to the lower bound (LB) at the root node, Johnson's algorithm and the greedy heuristic (GH) of Nagar et al. (1995b) over 50 problems for each level of n and the weight w, where n ∈ {10, 20, 50, 100, 200, 500} and w ∈ {0.00, 0.25, 0.50, 0.75, 1.00}. The results are reported as the average percent deviation of FSH1 (FSH2) from LB, Johnson's algorithm and GH, along with the corresponding t-test values, in Table 5 (Table 6). In all these experiments, processing times are again drawn from a uniform distribution U[1,10].
FSH1 and FSH2 are also compared to the optimal (best) solutions obtained by the B&B approaches. The results are given in Tables 7 and 8, respectively.
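Johnson's algorithm, used as a comparator throughout this section, is optimal for the two-machine makespan problem. A standard sketch of the rule (not the paper's code; a[j] and b[j] are the machine-1 and machine-2 processing times):

```python
def johnson_sequence(a, b):
    """Johnson's rule for the two-machine flowshop makespan problem:
    jobs with a[j] <= b[j] come first in nondecreasing order of a[j];
    the remaining jobs follow in nonincreasing order of b[j]."""
    n = len(a)
    first = sorted((j for j in range(n) if a[j] <= b[j]), key=lambda j: a[j])
    last = sorted((j for j in range(n) if a[j] > b[j]), key=lambda j: -b[j])
    return first + last
```

With a = [3, 5, 1, 7] and b = [4, 2, 6, 5], the rule yields the sequence [2, 0, 3, 1].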
4.4.1. Comparison with respect to lower bound at the root node

There are various conclusions one can reach by investigating Table 5. The comparison of FSH1 with LB for a given level of w ≤ 0.25 results in a decreasing deviation with increasing n. The deviation for w ≥ 0.50 increases with increasing n for problems with Δa > 0. For a given n, the deviation increases as w increases from 0.00 to 0.75. When w becomes 1.00, the deviation shows a decrease. The amount of decrease is larger for problems with Δa > 0. The comparison of FSH2 with LB (Table 6) leads to observations similar to those for the comparison of FSH1 with LB (Table 5), except that the deviations are significantly smaller. Table 9 provides t-tests performed on the deviations of FSH1 and FSH2 from the LBs for problems with both Δa < 0 and Δa > 0. The results suggest that an improvement of 10–20% in the deviations from the LB is possible for problems with Δa > 0 when FSH2 is used. The improvement is 30–45% for problems with Δa < 0.

4.4.2. Comparison with respect to Johnson's algorithm

The comparison of FSH1 with Johnson's algorithm shows that for w = 0.00 the deviation decreases as n increases. At w = 0.25, Johnson's algorithm is a better estimator for the optimal value at n = 10 and n = 20, but FSH1 becomes a better estimator at larger values of n. For larger values of w, FSH1 is a better estimator than Johnson's algorithm, with the estimation improving with increasing n (Table 5). The comparison of FSH2 with Johnson's algorithm again leads to observations similar to those for FSH1, but with significantly smaller deviations (Table 6).

4.4.3. Comparison with respect to greedy heuristic

The comparison of FSH1 with the GH indicates that FSH1 is a better estimator for the optimal value in a large number of cases. The exceptions are found for w = 0.75 and w = 1.00 (Table 5). When compared to GH, FSH2 outperforms GH over all n and w levels in both of the experi-
Table 6
FSH2 vs. LB at the root, Johnson's algorithm and GH

                LB                  Johnson's           GH (Δa > 0)         GH (Δa < 0)
w      n        Avg % dv  t-test    Avg % dv  t-test    Avg % dv  t-test    Avg % dv  t-test
0.00   10       6.19      8.07      2.40      3.66      4.45      7.67      -6.71     10.58
       20       4.30      10.85     1.25      3.91      2.96      8.69      -6.45     9.62
       50       2.25      16.21     0.57      4.63      1.38      9.06      -7.43     19.85
       100      1.24      19.97     0.19      2.27      0.76      8.39      -7.21     21.54
       200      0.56      16.68     0.15      3.07      0.38      9.34      -6.85     28.97
       500      0.27      17.28     0.05      2.05      0.18      8.35      -6.60     41.08
0.25   10       15.80     19.87     13.44     22.20     1.30      2.47      -6.49     11.19
       20       15.49     44.85     12.72     35.33     -0.87     2.60      -6.51     10.16
       50       13.73     66.81     12.11     102.51    -3.02     19.34     -7.51     20.63
       100      12.90     99.15     11.72     101.38    -3.82     33.68     -7.28     22.77
       200      12.42     131.47    11.62     173.68    -4.31     57.58     -6.92     31.06
       500      12.04     190.52    11.61     260.19    -4.59     112.12    -6.59     43.84
0.50   10       21.74     27.14     22.20     36.28     -2.58     4.86      -6.18     11.99
       20       23.55     76.61     22.34     49.77     -5.71     14.52     -6.60     10.93
       50       22.88     75.13     21.80     115.81    -8.59     44.13     -7.63     21.63
       100      22.52     113.60    21.46     119.04    -9.62     55.96     -7.39     24.58
       200      22.36     154.73    21.37     197.25    -10.26    77.55     -7.03     34.34
       500      24.47     275.42    21.47     282.68    -10.63    142.01    -6.58     48.27
0.75   10       19.44     30.99     24.04     34.83     -7.51     11.84     -5.73     12.73
       20       22.57     92.90     25.05     47.28     -12.00    22.89     -6.74     12.02
       50       23.89     71.11     24.17     77.80     -15.88    58.49     -7.80     22.78
       100      24.26     126.97    23.90     77.47     -17.21    65.47     -7.55     27.39
       200      24.45     181.26    23.87     127.33    -18.08    85.29     -7.19     40.00
       500      24.47     275.42    24.05     198.57    -18.56    152.71    -6.55     56.19
1.00   10       4.77      8.04      13.21     10.07     -14.00    16.05     -5.03     11.56
       20       5.75      11.14     13.58     11.87     -20.58    27.43     -6.95     13.33
       50       8.69      24.20     10.97     16.73     -25.82    65.40     -8.08     23.39
       100      9.46      26.89     10.54     16.66     -27.60    69.91     -7.82     31.35
       200      9.59      31.62     10.33     23.95     -28.80    89.18     -7.47     49.88
       500      10.23     55.82     10.44     42.77     -29.40    157.90    -6.51     70.22
ments. An interesting observation is that the percent improvement of FSH2 over GH does not change, for a given n, over the levels of w (Table 6).

4.4.4. Comparison with respect to optimal values

In Table 7, the average percent deviation results of FSH1 from the optimal values are displayed. In the special case of aj > bj for all j, the performance of FSH1 for a given n improves as w increases. For w = 1, FSH1 provides the optimal solution in this special case, as has been proven above. In all other problems, the performance of FSH1 for a given n deteriorates as w increases. This is expected, since FSH1 is more geared towards the minimization
of makespan rather than of average flowtime, since its primary aim is to minimize idle times. In the same table, the average percent deviation results of FSH1 with 2-opt and 3-opt applied from the optimal values are also displayed. The results indicate that FSH1 with 2-opt and 3-opt applied produces results which are very close to the optimal solution. Note that all values are less than 0.30%. The results of the comparison of FSH2 to the optimal solutions (to the best solutions when optimal solutions are not available) are displayed in Table 8. The most important conclusion is that FSH2 provides a very good estimator: the maxi-
Table 7
Average percent deviation of FSH1 and FSH1 with 2- and 3-opt applied from the optimal values

              FSH1                                      FSH1 + 2-opt + 3-opt
              Δa < 0    aj < bj   Δa > 0    aj > bj     Δa < 0    aj < bj   Δa > 0    aj > bj
n = 10
  w = 0.00    4.07      0.04      6.71      4.71        0.00      0.00      0.00      0.00
  w ∈ (0,1)   6.89      6.74      4.96      2.25        0.05      0.01      0.18      0.01
  w = 1.00    14.15     19.42     4.60      0.00        0.13      0.00      0.24      0.00
n = 14
  w = 0.00    1.95      0.52      5.28      4.71        0.00      0.00      0.00      0.00
  w ∈ (0,1)   7.81      6.61      5.73      2.39        0.15      0.00      0.14      0.01
  w = 1.00    19.19     18.15     7.61      0.00        0.29      0.00      0.20      0.00
n = 18
  w = 0.00    3.72 a    0.23      4.12 a    3.66        0.00 a    0.00      0.00 a    0.00
  w ∈ (0,1)   7.62 a    9.43      6.87 a    1.92        0.06 a    0.01      0.16 a    0.00
  w = 1.00    15.57 a   28.02     13.14 a   0.00        0.25 a    0.00      0.29 a    0.00

a Deviations from the best values.
Table 8
Average percent deviation of FSH2 and FSH2 with 2- and 3-opt applied from the optimal values

              FSH2                                      FSH2 + 2-opt + 3-opt
              Δa < 0    aj < bj   Δa > 0    aj > bj     Δa < 0    aj < bj   Δa > 0    aj > bj
n = 10
  w = 0.00    3.67      0.00      7.08      4.71        0.00      0.00      0.00      0.00
  w ∈ (0,1)   1.93      0.93      3.71      2.25        0.04      0.01      0.08      0.01
  w = 1.00    0.85      3.07      0.41      0.00        0.21      0.01      0.02      0.00
n = 14
  w = 0.00    0.25      0.00      6.08      4.71        0.00      0.00      0.10      0.00
  w ∈ (0,1)   0.24      0.00      3.92      2.39        0.03      0.00      0.10      0.01
  w = 1.00    0.48      0.02      1.12      0.00        0.09      0.00      0.24      0.00
n = 18
  w = 0.00    1.29 a    0.00      5.46 a    3.66        0.00 a    0.00      0.00 a    0.00
  w ∈ (0,1)   1.08 a    0.63      3.74 a    1.92        0.20 a    0.00      0.12 a    0.01
  w = 1.00    1.16 a    2.03      1.51 a    0.00        0.40 a    0.00      0.39 a    0.00

a Deviations from the best values.
Table 9
Results of t-tests comparing the performance of FSH2 with that of FSH1. LB(Δa+) and LB(Δa−) denote the LB at the root node for problems with Δa > 0 and Δa < 0, respectively. ZBB refers to the optimal (best) solutions

Devn1                  Devn2                  Experiment    % Deviatn    t-test
FSH2 vs. LB(Δa+)       FSH1 vs. LB(Δa+)       [1,10]        -23.83       5.51
FSH2 vs. LB(Δa−)       FSH1 vs. LB(Δa−)       [1,10]        -44.79       6.90
FSH2 vs. ZBB           FSH1 vs. ZBB           Δa < 0        -79.48       5.23
                                              aj < bj       -94.61       4.21
                                              Δa > 0        -33.68       2.86
FSH2 a vs. ZBB b       FSH1 a vs. ZBB b       Δa < 0        -0.02        0.77
                                              aj < bj       0.00         1.47
                                              Δa > 0        -0.01        0.44

a Indicates that the corresponding algorithm is 2- and 3-opted.
b Indicates that the corresponding numbers are averages.
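The t-tests reported here compare matched samples of percent deviations. The paper does not spell out the exact test variant used; one plausible reading, a paired t-statistic over matched samples, can be sketched as follows (illustrative only):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(dev1, dev2):
    """Paired t-statistic for matched samples of percent deviations:
    t = mean(d) / (s_d / sqrt(n)) over the pairwise differences d.
    Illustrative; the paper's exact test variant is not specified."""
    d = [x - y for x, y in zip(dev1, dev2)]
    return mean(d) / (stdev(d) / sqrt(len(d)))
```

For example, paired_t([5, 6, 7, 8], [1, 3, 2, 5]) computes the statistic over the differences [4, 3, 5, 3], giving roughly 7.83.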
mum deviation from the optimal solutions is less than 8%, and the majority are well below this figure (61% of the deviations are less than 2%). Except for the special case of aj < bj for all j, the performance of FSH2 improves with increasing n and w. This is in contrast to the performance of
Table 10
Average percent deviation (av % dv) of the lower bound at the root from the optimal solutions for problems with Δa > 0 and for problems with Δa < 0 for each level of w

        Problems with Δa > 0                                    Problems with Δa < 0
        n = 10            n = 14            n = 18 a            n = 10            n = 14            n = 18 a
w       Av % dv  t-test   Av % dv  t-test   Av % dv  t-test     Av % dv  t-test   Av % dv  t-test   Av % dv  t-test
0.25    -0.43    6.72     -0.36    6.17     -0.41    7.31       -0.72    4.60     -0.42    5.00     -0.52    3.88
0.50    -0.83    7.39     -0.67    7.15     -0.80    6.57       -1.61    4.47     -0.97    4.95     -1.18    3.84
0.75    -1.22    6.39     -1.03    6.30     -1.27    5.25       -2.71    4.37     -1.75    4.93     -2.09    3.81
1.00    -1.09    3.41     -1.30    4.33     -1.83    4.25       -3.98    4.31     -2.92    4.91     -3.47    3.79

a Deviations from best solutions.
FSH1 with increasing n and w (Table 7). For the special case of aj > bj for all j, FSH1 and FSH2 provide the same solutions, since they both apply SPT on the processing times on machine 1 to obtain the sequence in this special case. t-tests performed on the deviations of FSH2 and FSH1 from the optimal (best) solutions for the rest of the problems indicate that FSH2 significantly outperforms FSH1, the improvement in deviation being 33–95%. When 2- and 3-opt are applied to FSH2, the deviations become even smaller (Table 8). But comparing Tables 7 and 8, it can be concluded that FSH2 with 2- and 3-opt applied does not perform differently from FSH1 with 2- and 3-opt applied. Indeed, t-tests confirm this observation (Table 9). Therefore, it can be stated that within the realm of this study, where n ≤ 18, there is no difference between using FSH1 and FSH2 as the heuristic before applying 2- and 3-opt for computing the upper bound. FSH2 can replace FSH1 with 2- and 3-opt applied, especially when larger levels of n are attempted. This can bring important computation time savings, since it takes quite long to apply 2- and 3-opt to problems involving a large number of jobs. Indeed, the limited experience gathered from further experiments with problems involving up to 100 jobs supports this conjecture.

5. Direction for further work

As Table 7 also clearly reflects, the upper bounding scheme performs very well. The LB is still weak. The LB at the root is compared with the optimal (best) solutions and the results are given in Table 10. These results indicate that the LB values deteriorate with increasing values of w, which can be interpreted as LBF being the weaker component of LB. In order to improve the performance of LB, and thus of the B&B procedures proposed, further work needs to be done on LBF. Further experimentation with FSH2 without employing 2- and 3-opt on problems involving larger numbers of jobs appears to be worthwhile, since it might lead to an efficient heuristic procedure.

References

Carlier, J., Rebai, I., 1995. Two branch and bound algorithms for the permutation flow shop problem. European J. Oper. Res., accepted for publication.
Chen, C.-L., Bulfin, R.L., 1993. Complexity of single machine, multi-criteria scheduling problems. European J. Oper. Res. 70, 115–125.
Della Croce, F., Narayan, V., Tadei, R., 1996. The two-machine total completion time flow shop problem. European J. Oper. Res. 90, 227–237.
Dileepan, P., Sen, T., 1988. Bicriterion static scheduling research for a single machine. Omega 16, 53–59.
Fry, T.D., Armstrong, R.D., Lewis, H., 1989. A framework for single machine multiple objective sequencing research. Omega 17, 595–607.
Gupta, J.N.D., Dudek, R.A., 1971. An optimality criterion for flowshop schedules. AIIE Trans. 3, 199–205.
Hoogeveen, J.A., Kawaguchi, T., 1996. Minimizing total completion time in a two-machine flowshop: Analysis of special cases. Proceedings of the Fifth International Workshop on Project Management and Scheduling, pp. 114–117.
Ignall, E., Schrage, L.E., 1965. Application of the branch-and-bound technique to some flowshop problems. Oper. Res. 13, 400–412.
Johnson, S.M., 1954. Optimal two- and three-stage production schedules with set-up times included. Nav. Res. Logist. Q. 1, 61–68.
Lin, S., 1965. Computer solution of the traveling salesman problem. Bell System Tech. J. 44, 2245–2269.
Lin, S., Kernighan, B.W., 1973. An effective heuristic algorithm for the traveling salesman problem. Oper. Res. 21, 498–516.
Nagar, A., Heragu, S.S., Haddock, J., 1995a. Multiple and bicriteria scheduling: A literature survey. European J. Oper. Res. 81, 88–104.
Nagar, A., Heragu, S.S., Haddock, J., 1995b. A branch and bound approach for a 2-machine flowshop scheduling problem. J. Opl. Res. Soc. 46, 721–734.
Rajendran, C., Chaudhuri, D., 1991. An efficient heuristic approach to the scheduling of jobs in a flowshop. European J. Oper. Res. 61, 318–325.
Rajendran, C., 1992. Two-stage flowshop scheduling problem with bicriteria. J. Opl. Res. Soc. 43, 871–884.
Rajendran, C., 1995. Heuristics for scheduling in flowshop with multiple objectives. European J. Oper. Res. 82, 540–555.
Rebai, I., Carlier, J., 1996. Minimizing different measures of performance in an n job m machine permutation flow shop. The Fifth International Workshop on Project Management and Scheduling PMS'96 Abstracts, pp. 199–202.
Sayin, S., Karabati, S., 1996. Bicriteria approach to the 2-machine flow shop scheduling problem. Working Paper, Bilkent University, Ankara.
Sivrikaya-Şerifoğlu, F., Ulusoy, G., 1996. Branch and bound approaches to a bicriteria 2-machine permutation flowshop problem. Working Paper FBE-IE-5/96-4, Boğaziçi University, Istanbul.