European Journal of Operational Research 151 (2003) 307–332 www.elsevier.com/locate/dsw
Job-shop scheduling with processing alternatives

Tamás Kis *

Computer and Automation Research Institute, Hungarian Academy of Sciences, P.O. Box 63, H-1518 Budapest, Hungary
Abstract

In this paper we study an extension of the job-shop scheduling problem where the job routings are directed acyclic graphs that can model partial orders of operations and that contain sets of alternative subgraphs consisting of several operations each. We develop two heuristic algorithms for our problem: a tabu search and a genetic algorithm. The two heuristics are based on two common subroutines: one to insert a set of operations into a partial schedule and another to improve a schedule with fixed routing alternatives. The first subroutine relies on an efficient operation insertion technique and the second one is a generalisation of standard methods for classical job-shop scheduling. We compare our heuristics on various test problems, including the special case MPM job-shop scheduling. Moreover, we report on the success of the two subroutines on open-shop instances.
© 2003 Elsevier B.V. All rights reserved.

Keywords: Scheduling; Job-shop; Open-shop; Insertion techniques; Tabu search; Genetic algorithms
1. Introduction

Job-shop scheduling is one of the most basic models of scheduling theory (cf. [8]). A job shop consists of a set of $m$ different machines $M = \{M_1, \ldots, M_m\}$ that perform operations of jobs [2]. There is a set of $n$ jobs $J = \{J_1, \ldots, J_n\}$, where each job $J_i$ has a specified processing order through the machines. That is, each $J_i$ is composed of an ordered list of operations $(O_{i,1}, \ldots, O_{i,n_i})$, where $O_{i,j}$ is determined by the machine required, denoted by $\mu_{i,j} \in M$, and the duration of the operation, $d_{i,j}$. The rest of the assumptions are as follows:
* Tel.: +36-1-2796156; fax: +36-1-4667503. E-mail address: [email protected] (T. Kis).
(a) no machine can process more than one job (operation) at a time,
(b) the processing of the operations cannot be interrupted,
(c) all jobs and all machines are available from time 0 on.

In the meantime the classical model has been extended by considering machine alternatives for individual operations and by allowing jobs to have partial orders of operations. The former extension has evolved from job-shop scheduling with multi-purpose machines (MPM-JSP) [5,16], or flexible job-shop scheduling (FLEX-JSP) [3,11,20], to multi-mode job-shop scheduling (MMJSP) [6] and multi-resource shop scheduling with resource flexibility (MULFLEX-SP) [12]. Recall that in MPM-JSP with each operation $O_{i,j}$ there is associated a subset of machines
0377-2217/$ - see front matter © 2003 Elsevier B.V. All rights reserved. doi:10.1016/S0377-2217(02)00828-7
$\mu_{i,j} \subseteq M$ from which exactly one must be chosen to process the operation. In FLEX-JSP the operations may have different processing times on the different machines. In MMJSP each operation $O_{i,j}$ possesses a set of processing modes $A_{i,j} = \{A^1_{i,j}, \ldots, A^{m_{i,j}}_{i,j}\}$, where $A^k_{i,j} \subseteq M$, $1 \le k \le m_{i,j}$, and a set of processing times $\{d^1_{i,j}, \ldots, d^{m_{i,j}}_{i,j}\}$, i.e. one for each processing mode. Exactly one processing mode must be selected for each operation, and if $A^k_{i,j}$ is the mode selected for $O_{i,j}$, then the operation must be processed on all machines in $A^k_{i,j}$ in parallel for $d^k_{i,j}$ time units. Finally, in MULFLEX-SP each operation has to be executed on a pre-specified number of distinct machines in parallel, where each machine has to be chosen from a specified set of machines. Moreover, the routings of the jobs are directed acyclic graphs, a feature which belongs to the second direction of extending the classical problem.

Concerning the routings of the jobs, in the mixed-shop model (proposed by Masuda et al. [21]) the set of jobs consists of flow-shop type jobs and open-shop type jobs (a flow-shop is a special job-shop where jobs follow the same routing and in an open-shop the operations of the same job can be processed in arbitrary order). For a recent survey of the classification and the complexity status of these kinds of problems we refer the reader to [22].

In this paper we will study a further extension motivated partly by the capabilities of advanced process planning systems for discrete part manufacturing (see e.g. [10]) or by potential applications in the chemical process industry. In our model the routings of the jobs are directed acyclic graphs with special structure. We define such a graph recursively as follows: let $G_1, \ldots, G_k$ be subgraphs already constructed with mutually disjoint node sets. Note that single operations constitute subgraphs consisting of one node each. We assume that each $G_i$ has a unique source node $s(G_i)$ with in-degree 0 and a unique sink node $t(G_i)$ with out-degree 0. This property trivially holds for single operations and the three combination methods defined next will always yield a subgraph satisfying this property:
• Sequence of $G_1, \ldots, G_k$: introduce arcs $(t(G_i), s(G_{i+1}))$, $1 \le i \le k-1$.
• And-subgraph with branches $G_1, \ldots, G_k$: introduce new dummy source $a^+$ and sink $a^-$ nodes and arcs $(a^+, s(G_i))$, $(t(G_i), a^-)$, $1 \le i \le k$.
• Or-subgraph with branches $G_1, \ldots, G_k$: introduce new dummy source $o^+$ and sink $o^-$ nodes and arcs $(o^+, s(G_i))$, $(t(G_i), o^-)$, $1 \le i \le k$.

These constructions are illustrated in Fig. 1. An example job routing is depicted in Fig. 2. By each operation the machine required and its processing time are indicated, too. The routing is an and-subgraph with three branches: the first branch is a sequence of two operations, the second branch is an or-subgraph with two branches, whereas the third branch is a sequence of three operations.

Let $D_i = (V_i, A_i)$ denote the directed graph associated with job $J_i$. Each operation of $J_i$ must occur exactly once in $D_i$. There is no upper bound on the length of a sequence or on the number of branches of an and/or-subgraph. However, or-subgraphs cannot be embedded into one another, i.e. there can be no or-subgraph on any branch of an or-subgraph. This is not a limitation, since a job routing with embedded or-subgraphs can be modelled by a job routing without embedded or-subgraphs, although the size of the latter routing may be exponential in the size of the former one. Note that job routings with these features are generated by some advanced process planning systems for discrete part manufacturing (see e.g. [10]).

The branches of an or-subgraph constitute a set of alternative subroutes: exactly one of them must be chosen during scheduling. This is a generalisation of the resource alternatives of individual operations. For ease of notation we assign a unique label $l \in L = \{1, 2, \ldots, \omega\}$ to each or-subgraph of each job, where $\omega$ is the total number of or-subgraphs in all $D_i$. Furthermore, let $b_l$ denote the number of branches of the or-subgraph labelled with $l$. A selection is a function $X : L \to \mathbb{N}$ selecting a branch in each or-subgraph, i.e. $X(l) \in \{1, \ldots, b_l\}$, for all $l \in L$. Let $V_i[X]$ and $A_i[X]$ denote the set of all nodes and the set of all arcs, respectively, after
Fig. 1. The three basic constructions: sequence of subgraphs, and-subgraph, or-subgraph.

Fig. 2. A job routing. (Operations $o_1$–$o_{10}$ with the required machine and processing time: $o_1$: M1, 100; $o_2$: M2, 150; $o_3$: M3, 175; $o_4$: M4, 200; $o_5$: M1, 50; $o_6$: M3, 100; $o_7$: M5, 150; $o_8$: M4, 300; $o_9$: M5, 200; $o_{10}$: M6, 90.)
deleting all nodes from $D_i$ not selected by $X$, $1 \le i \le n$.
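To make the routing model concrete, the following is a minimal Python sketch of how job routings with or-subgraphs and selections could be represented. The class and function names (Operation, OrSubgraph, JobRouting, selected_nodes) are illustrative assumptions, not notation from the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Operation:
    name: str
    machine: str
    duration: int

@dataclass
class OrSubgraph:
    label: int                          # unique label l in L
    branches: List[List[Operation]]     # the b_l alternative branches

@dataclass
class JobRouting:
    fixed_ops: List[Operation]          # operations outside every or-subgraph
    or_subgraphs: List[OrSubgraph]
    arcs: List[Tuple[str, str]] = field(default_factory=list)  # precedence arcs of D_i

def selected_nodes(routing: JobRouting, X: Dict[int, int]) -> List[Operation]:
    """Return V_i[X]: the fixed operations plus the operations on the
    branch X(l) chosen in each or-subgraph l (branches indexed from 1)."""
    ops = list(routing.fixed_ops)
    for g in routing.or_subgraphs:
        ops.extend(g.branches[X[g.label] - 1])
    return ops

# Example: one or-subgraph with two alternative branches.
o4 = Operation("o4", "M4", 200)
o5 = Operation("o5", "M1", 50)
g1 = OrSubgraph(label=1, branches=[[o4], [o5]])
job = JobRouting(fixed_ops=[Operation("o1", "M1", 100)], or_subgraphs=[g1])
print([op.name for op in selected_nodes(job, {1: 2})])   # -> ['o1', 'o5']
```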
And-subgraphs model partial orders of operations: it is not hard to see that a partial order can be represented by and-subgraphs if and only if it is series-parallel. In a given selection $X$ an operation $O_{i,j}$ can start only if all its predecessors in $V_i[X]$ are completed. Finally, in addition to (a)–(c) we assume that

(d) no two operations of the same job may be processed in parallel.

We may ensure (d) by introducing disjunctive edges between the operations that belong to the same and-subgraph. However, for computational reasons that become apparent later we model this constraint by introducing a new machine $M_{m+i}$ for
each $J_i \in J$ and requiring that the operations of $J_i$ be sequenced on $M_{m+i}$. Namely, let $\mu_{i,j}$ contain the machine specified for $O_{i,j}$ and also the new machine $M_{m+i}$, for all $1 \le j \le n_i$. $O_{i,j}$ can only start if no machine in $\mu_{i,j}$ processes some other operation. Moreover, if it starts at time $t$, then it occupies both machines in $\mu_{i,j}$ until time $t + d_{i,j}$. Since all operations of $J_i$ must be processed on $M_{m+i}$, no two operations of the job can be processed in parallel.

A schedule $(X, \sigma)$ selects exactly one branch in each or-subgraph and assigns a starting time $\sigma_{i,j}$ to each operation in $\bigcup_{i=1}^{n} V_i[X]$, such that

• $\sigma_{i,j} + d_{i,j} \le \sigma_{i,k}$, whenever $O_{i,j}$ and $O_{i,k}$ are operations of the same job and there exists a directed path from $O_{i,j}$ to $O_{i,k}$ in $D_i$,
• either $\sigma_{i,j} + d_{i,j} \le \sigma_{x,y}$ or $\sigma_{x,y} + d_{x,y} \le \sigma_{i,j}$ for all pairs of operations $O_{i,j}$ and $O_{x,y}$ with $\mu_{i,j} \cap \mu_{x,y} \ne \emptyset$.

The completion time $C_i$ of job $J_i$ in some schedule $(X, \sigma)$ is $C_i := \max\{\sigma_{i,j} + d_{i,j} \mid O_{i,j} \in V_i[X]\}$. The makespan $C_{\max}$ of a schedule is the maximum of job completion times. The makespan minimisation problem, as usual, consists in finding a schedule of minimum makespan. We call this problem job-shop scheduling with processing alternatives, shortly AJSP. Since AJSP contains the NP-hard job-shop scheduling problem [14,19] as a special case, it is NP-hard too.

We will describe two heuristic algorithms to find good solutions:

• TABU is based on tabu search,
• GA is a genetic algorithm.

The two heuristics rely on the efficient solution of two subproblems:

• insert a set of operations into a partial schedule,
• improve a schedule by changing the order of operations only.

In order to solve the first subproblem we have developed an efficient operation insertion technique (Section 3): on the one hand, we give a new characterisation of feasible insertions which allows us to provide simpler proofs of known results and also to
present new results in a concise way. We describe a fast implementation of the operation insertion algorithm of Brucker and Neyer [6] that will be further refined to reduce the computation time. We use part of our findings to solve the second subproblem (Section 4). The presented tabu search algorithm is used as a subroutine when solving AJSP both in TABU and GA. Moreover, the same algorithm can be used to solve problems in which jobs have partial orders of operations. The algorithm TABU is a simple extension of known techniques for the classical job-shop scheduling problem and we will confine our description to that of the neighbourhood function and neighbour evaluation (Section 5). The algorithm GA is presented in Section 6. We propose a simple lower bound for AJSP to evaluate the strength of our method (Section 7). We have conducted several computational experiments (Section 8). We have tested our algorithms on randomly generated AJSP instances and evaluated our results by comparing the upper and lower bounds. Moreover, we have tested our insertion techniques on open-shop benchmark problems: we have found the optimal solution of three open problems and improved on the best upper bound in some other cases. Finally, we have evaluated our algorithms on the MPM problem: we have obtained very good results, although in a longer computation time than the currently best heuristic for MPM.

2. Notation

First we describe the disjunctive graph representation of AJSP. To this end, let $D = (V, A \cup E)$ be a disjunctive graph, where the set of nodes $V$ is equal to $(\bigcup_{i=1}^{n} V_i) \cup \{0, *\}$, and the set of arcs and disjunctive edges is the disjoint union of arcs
$$A := \Big(\bigcup_{i=1}^{n} A_i\Big) \cup \{(0, x) \mid x \text{ has no predecessors in any } D_i\} \cup \{(y, *) \mid y \text{ has no successors in any } D_i\}$$
and disjunctive edges
$$E := \{\{x, y\} \mid \mu_x \cap \mu_y \ne \emptyset\}.$$
Let $D[X] = (V[X], A[X] \cup E[X])$ denote the disjunctive graph obtained from $D$ by deleting all branches in all or-subgraphs which are not selected by $X$. A partial schedule $(X, F)$ consists of a selection $X$ and a set of arcs $F$, such that

• $F$ is an orientation of $E[X]$.
• $D_{(X,F)} := (V[X], A[X] \cup F)$ is acyclic.
• There exists a partitioning $V_S[X] \cup V_U[X]$ of $V[X]$ with the following properties. For any two distinct operations $x$, $y$ in $V_S[X]$ with $\mu_x \cap \mu_y \ne \emptyset$, there is a directed path in $D_{(X,F)}$ either from $x$ to $y$ or from $y$ to $x$. Furthermore, for any operation $v \in V_U[X]$ it holds that there is no arc adjacent to it but those in $A$.

Note that we consider only partial schedules in which a branch is selected in each or-subgraph. In the above definition $V_S[X]$ is called the set of scheduled operations, while operations in $V_U[X]$ are unscheduled. Note that scheduled operations are totally ordered on the machines, due to the above definition. Operations in $V[X]$ are called selected, while all other operations in $V - V[X]$ are unselected. A partial schedule $(X, F)$ with $V_U[X] = \emptyset$ is called a complete schedule or simply a schedule.

Suppose $x$ and $y$ are operations such that there is a directed path from $x$ to $y$ in $D_{(X,F)}$. We denote this fact by $x \to_{(X,F)} y$, and we call $\to_{(X,F)}$ the precedence relation induced by $(X, F)$.

Let $(X, F)$ be a partial schedule. The arcs of $D_{(X,F)}$ are weighted with the duration of operations, that is, all arcs emanating from some operation $v$ have weight $d_v$. All other arcs have weight 0. We define the head $h_v$ of $v$ as the length of the longest path in $D_{(X,F)}$ from 0 to $v$. Similarly, the tail $t_v$ of $v$ is the length of a longest path from $v$ to $*$ minus $d_v$. The makespan $C_{\max}(X, F)$ of a partial schedule $(X, F)$ is the length of a longest path from 0 to $*$.
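Since the heads and tails drive everything that follows, here is a minimal Python sketch of how they can be computed by longest-path dynamic programming over a topological order of the acyclic digraph $D_{(X,F)}$. The graph encoding and the function name are illustrative assumptions, not taken from the paper.

```python
from collections import defaultdict, deque

def heads_and_tails(nodes, arcs, dur):
    """Heads h_v (longest path from 0), tails t_v (longest path to '*' minus d_v)
    and the makespan of an acyclic digraph given by (u, v) arcs.
    Every arc leaving node u has weight dur[u]; dur[0] = dur['*'] = 0."""
    succ, indeg = defaultdict(list), defaultdict(int)
    for u, v in arcs:
        succ[u].append(v)
        indeg[v] += 1
    topo, queue = [], deque(n for n in nodes if indeg[n] == 0)
    while queue:                                   # Kahn's algorithm
        u = queue.popleft()
        topo.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    h = {n: 0 for n in nodes}
    for u in topo:                                 # forward pass: heads
        for v in succ[u]:
            h[v] = max(h[v], h[u] + dur[u])
    q = {n: 0 for n in nodes}                      # longest path to '*'
    for u in reversed(topo):                       # backward pass
        for v in succ[u]:
            q[u] = max(q[u], q[v] + dur[u])
    t = {n: q[n] - dur[n] for n in nodes}          # tail = path length minus d_v
    return h, t, q[0]                              # q[0] = longest 0 -> '*' path
```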
3. Inserting a set of operations into a partial schedule

In job-shop scheduling several heuristics are based on a subroutine that inserts or reinserts an operation (e.g. [6,11,13,16,18,20]) or a set of operations (e.g. [1,24]) into a schedule. Our objective is to insert a set $U$ of unscheduled operations that constitute a branch of an or-subgraph into a (partial) schedule $(X, F)$ while minimising the makespan. It can be shown that this problem is NP-hard. Since we have to solve this subproblem in a very short time, we use a modified version of the heuristic of Werner and Winkler [24] which was designed to construct a schedule for the classical job-shop problem. Namely, the operations of $U$ are inserted into $(X, F)$ one by one in non-increasing processing time order. However, for inserting a single operation $v \in U$ we cannot use their method, for $v$ has to be inserted both into the sequence of operations on the required machine and also into that of its job.

More concretely, we study the following problem: Let $(X, F)$ be a partial schedule and $v$ be some unscheduled operation. Insert $v$ into the schedule in an "optimal" way. This problem has been studied in a slightly more general form by Brucker and Neyer [6] who have devised a method to insert an operation into several sequences simultaneously. Moreover, they give sufficient conditions for feasible insertions. We will give a new necessary and sufficient condition for feasible insertion that will simplify several proofs to a great extent. We will develop a theory that will allow us to provide a fast implementation of the operation insertion technique of Brucker and Neyer [6]. Moreover, we will further refine our algorithm to reduce the computation time. Since we do not make any assumption about the cardinality of $\mu_{i,j}$, our results can be used not only for AJSP, but for any problem where the processing of an operation needs several machines simultaneously.

3.1. Optimal feasible insertions

Let $Q_l$ denote the sequence of operations on $M_l \in \mu_v$ and let
$$Q := \bigcup_{M_l \in \mu_v} Q_l.$$
When inserting $v$ into the sequences $Q_l$, then $Q$ is split into two subsets: the set of operations
preceding and succeeding $v$, respectively. Consequently, an insertion of $v$ into the schedule can be specified by a subset $H$ of $Q$: all operations in $H$ precede and all operations in $Q - H$ succeed $v$ after the insertion. It is natural to associate a digraph $D^H_{(X,F)}$ with $H$: this graph is obtained by adding the arcs $(x, v)$ for each $x \in H$ and the arcs $(v, y)$ for each $y \in Q - H$ to $D_{(X,F)}$.

First we study feasible insertions. An insertion $H$ is feasible if and only if $D^H_{(X,F)}$ is acyclic. Let $P_v$ and $S_v$ denote the subsets of $Q$ preceding and succeeding $v$ in $D_{(X,F)}$, respectively. Since $D_{(X,F)}$ is acyclic, we have $P_v \cap S_v = \emptyset$. Clearly, if $H$ is feasible, then $P_v \subseteq H$ and $H \cap S_v = \emptyset$ must hold, but unlike in classical job-shop scheduling, this condition is not sufficient. We say that a subset $H$ of $Q$ is a prefix if and only if

(a) $P_v \subseteq H$, and
(b) $H \cap S_v = \emptyset$, and
(c) $x \in H$ implies that $y \in H$ for all $y \in Q$ with $y \to_{(X,F)} x$.

The connection between feasible insertions and prefixes is the following:

Lemma 1. A subset $H \subseteq Q$ is a feasible insertion if and only if $H$ is a prefix.

Proof. Direction "only if": Let $H$ be an arbitrary feasible insertion. We have already noted that $P_v \subseteq H$ and $H \cap S_v = \emptyset$ must hold. It remains to prove that $x \in H$ implies that $y \in H$ for all $y \in Q$ with $y \to_{(X,F)} x$. Suppose not, and let $x$ and $y$ be a pair of operations violating this condition. This case is illustrated in Fig. 3. Then there exists an arc in $D^H_{(X,F)}$ from $v$ to $y$, for $y \in Q - H$. Since $y \to_{(X,F)} x$ and $x \in H$, it follows that $D^H_{(X,F)}$ contains a directed cycle, a contradiction.

Direction "if": Let $H$ be a prefix. If $H$ is a feasible insertion, then there is nothing to prove. Suppose not. Then there exists a directed cycle $C$ in $D^H_{(X,F)}$. Since $D_{(X,F)}$ is acyclic, it follows that $v$ is on $C$.
Fig. 3. An infeasible insertion.
We claim that $C$ contains two operations $x$ and $y$ such that $x \in H$ and $y \in Q - H$. Suppose not, i.e. assume $C$ contains either no operation from $H$ or no operation from $Q - H$.

• $C \cap H = \emptyset$. Then it must be the case that $C = (\ldots, v, \ldots, y, \ldots)$, for some $y \in C \cap (Q - H)$, for $D_{(X,F)}$ is acyclic. We also know that there is no new arc directed from $y$ to $v$ in $D^H_{(X,F)}$. Hence $y \to_{(X,F)} v$ must hold. Consequently, $y \in P_v$ and then $P_v \not\subseteq H$, a contradiction.
• $C \cap (Q - H) = \emptyset$. Similar to the previous case.

Since $x \in H \cap C$ and $y \in (Q - H) \cap C$, there must exist an arc from $x$ to $v$ and an arc from $v$ to $y$ in $D^H_{(X,F)}$. Moreover, $x \in H$ and $H \cap S_v = \emptyset$ imply $x \notin S_v$, and $y \in (Q - H)$ and $P_v \subseteq H$ imply $y \notin P_v$. Consequently, $y \to_{(X,F)} x$ must hold, for $C$ is a directed cycle of $D^H_{(X,F)}$. Hence $H$ is not a prefix, a contradiction.

Our goal is to insert $v$ into the schedule in an optimal way. Let $H$ be a prefix (or equivalently a feasible insertion). We define $h(H)$ as
$$h(H) := \max\{h_x + d_x \mid x \in H\},$$
whereas $t(H)$ is defined as
$$t(H) := \max\{t_x + d_x \mid x \in Q - H\}.$$
The value $l_v(H)$ of the insertion defined by $H$ is the length of the longest path from 0 to $*$ through $v$ in $D^H_{(X,F)}$, that is,
$$l_v(H) := \max\{h(H), h_v\} + d_v + \max\{t(H), t_v\}. \qquad (1)$$
We consider a feasible insertion optimal if it minimises (1).
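As a small illustration of (1), the following Python sketch evaluates the value of a candidate insertion $H$ from precomputed heads and tails. The function name, the dictionary-based encoding and the sample data are assumptions made only for this example.

```python
def insertion_value(H, Q, v, h, t, d):
    """Value l_v(H) of inserting v with predecessor set H (a prefix of Q):
    the longest 0 -> * path through v after the insertion, cf. Eq. (1)."""
    hH = max((h[x] + d[x] for x in H), default=0)          # h(H)
    tH = max((t[x] + d[x] for x in Q - H), default=0)      # t(H)
    return max(hH, h[v]) + d[v] + max(tH, t[v])

# Usage with illustrative data: two already scheduled operations a and b.
h = {"a": 0, "b": 120, "v": 0}
t = {"a": 150, "b": 0, "v": 0}
d = {"a": 100, "b": 150, "v": 80}
print(insertion_value({"a"}, {"a", "b"}, "v", h, t, d))    # max(100,0)+80+max(150,0) = 330
```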
Note that this criterion has been chosen for several job-shop scheduling problems [6,11,20].

We may find an optimal insertion by enumerating all feasible insertions between $P_v$ and $Q - S_v$. Unfortunately, this method is impractical, for checking whether some $H \subseteq Q$ satisfies the conditions of Lemma 1 is a non-trivial issue, hence we look for a simpler characterisation of feasible insertions.

3.2. Maximal prefixes

In [6] an optimal insertion of $v$ into the sequences of operations on machines that constitute a processing mode of $v$ is found by enumerating all partitionings $IP_h \cup IS_h$ of $Q$ such that

• $P_v \subseteq IP_h \subseteq Q - S_v$ and
• $IP_h$ contains all operations $x \in Q - (P_v \cup S_v)$ that satisfy $h_x + d_x \le h$.

Here $Q$ denotes the set of all operations processed by the machines of the given processing mode of $v$ and $h$ is a value from the set $\{h(P_v)\} \cup \{h_x + d_x \mid x \in Q - (P_v \cup S_v)\}$. In [6] an optimal partitioning is found by sorting the operations $x \in Q - (P_v \cup S_v)$ according to their $h_x + d_x$ values and evaluating (1) on each of these partitionings. It is also proved that any partitioning $IP_h \cup IS_h$ determines a feasible insertion of $v$. In the following we give an equivalent definition of partitionings and then give a new proof of the fact that partitionings determine feasible insertions. Moreover, we show that the optimal insertion can be found without sorting the operations.

Some $H \subseteq Q$ is called a maximal prefix if and only if

• $P_v \subseteq H$, and
• $H \cap S_v = \emptyset$, and
• $H$ contains all operations $x \in Q - S_v$ with $h_x + d_x \le h(H)$.

Lemma 2. If $H \subseteq Q$ is a maximal prefix, then $H$ is a feasible insertion.

Proof. Since feasible insertions and prefixes are the same thing by Lemma 1, it suffices to prove that $H$ is a prefix. The definition of maximal prefixes and
prefixes agree on the first two requirements. Let $x$ be an arbitrary operation in $H$, and let $y$ be any operation in $Q$ with $y \to_{(X,F)} x$. We claim that $y \in H$. Since $y \to_{(X,F)} x$, it follows that $h_y + d_y \le h_x$ holds. Consequently, $h_y + d_y \le h(H)$, and then $y \in H$, as claimed.

It is an elementary observation that for any prefix $H$ there exists a maximal prefix $H'$ with the same $h(\cdot)$ value. In fact, $H'$ contains all operations in $H$ and possibly other operations $x \in Q - H$ with $h_x + d_x \le h(H)$.

Lemma 3. Let $H$ be a feasible insertion and let $H'$ be a maximal prefix with $h(H) = h(H')$. Then $l_v(H') \le l_v(H)$ holds.

Proof. Since $h(H) = h(H')$ and $H'$ is a maximal prefix, it follows that $H \subseteq H'$. Hence $Q - H' \subseteq Q - H$. Consequently, $t(H') \le t(H)$. Finally, we have
$$l_v(H) = \max\{h(H), h_v\} + d_v + \max\{t(H), t_v\} \ge \max\{h(H'), h_v\} + d_v + \max\{t(H'), t_v\} = l_v(H'),$$
proving the statement.

Corollary 1. There exists a maximal prefix that is an optimal feasible insertion.

Consequently, to find an optimal feasible insertion it suffices to enumerate all maximal prefixes. Our algorithm is based on the following:

Lemma 4. Let $H$ and $H'$ be maximal prefixes. The following properties hold:
(i) $H \subseteq H'$ if and only if $h(H) \le h(H')$.
(ii) $H = H'$ if and only if $h(H) = h(H')$.

Proof
(i) First suppose $H \subseteq H'$. Then $h(H) \le h(H')$ follows by the definition of maximal prefixes. Conversely, suppose $h(H) \le h(H')$. Let $x$ be any operation in $H$. We clearly have $h_x + d_x \le h(H) \le h(H')$. Consequently, $x \in H'$ holds.
(ii) Applying the first part to $h(H') \le h(H)$ and to $h(H) \le h(H')$, the statement follows.

Corollary 2. Maximal prefixes are nested, i.e. whenever $H \ne H'$ are two distinct maximal prefixes, either $H \subset H'$ or $H' \subset H$ holds.

The main result of this section is the following:

Lemma 5. An optimal maximal prefix between $P_v$ and $Q - S_v$ can be found in $O(|Q - P_v - S_v| \log_2 |\mu_v|)$ time.

Proof. Consider Algorithm 1. The arguments are operation $v$ and two prefixes $P$ and $\hat H$ satisfying $P \subseteq \hat H \subseteq Q$. First we show that the output of the algorithm is an optimal maximal prefix. Since maximal prefixes are nested, by Corollary 2, the algorithm enumerates all maximal prefixes between $P$ and $\hat H$. Consequently, when invoking the algorithm with $P = P_v$ and $\hat H = Q - S_v$, the output is an optimal maximal prefix.

Concerning the running time, the efficiency of the algorithm relies on the fact that the heads of successive operations in the same sequence are (strictly) increasing. Namely, to find value $k$ in the third step of the algorithm, we maintain a priority queue that contains the first operation not yet included in a maximal prefix in each sequence $Q_l$, for all machines $M_l \in \mu_v$. The key of some operation $x$ in the queue is the $h_x + d_x$ value. Initially, the queue contains the last operation $x$ in each sequence satisfying $h_x + d_x \le h(P)$ and $x \in \hat H$. $k$ is the smallest key in the queue. When some operation is put into a maximal prefix, then that operation is removed from the queue and the next operation of the same sequence is inserted into the queue. Since the queue contains at most $|\mu_v|$ elements, such manipulations can be effected in $O(\log_2 |\mu_v|)$ time using appropriate data structures [9]. Since there are $|\hat H - P|$ operations to be processed, the statement follows.

Algorithm 1. Simple Insertion($v$, $P$, $\hat H$)
1: Set $H_1 := \{x \in \hat H \mid h_x + d_x \le h(P)\}$, $e := 1$, $H := H_1$.
2: while $H_e \ne \hat H$ do
3:   Let $k$ be the smallest integer such that $\exists x \in \hat H - H_e$ with $h_x + d_x = k$.
4:   Set $H_{e+1} := H_e \cup \{x \in \hat H - H_e \mid h_x + d_x = k\}$, $e := e + 1$.
5:   if $l_v(H_e) < l_v(H)$ then
6:     Set $H := H_e$.
7:   end if
8: end while
9: Output $H$.

The drawback of this method (and the method in [6]) is that sets $P_v$ and $S_v$ must be known. However, these sets can be determined in linear time in the size of $D_{(X,F)}$. The next section is devoted to describing how to avoid this time consuming computation.

3.3. A problem reduction technique

In [20] an efficient operation insertion technique is proposed to insert an operation into a sequence optimally. First we sketch the main ideas of that paper and then we generalise them to maximal prefixes. Moreover, we show that the problem size can be reduced in some cases.

Suppose $v$ is to be inserted into the sequence of operations $Q_l$ on $M_l$. To avoid the determination of the subsets of $Q_l$ preceding and succeeding $v$ respectively, Mastrolilli and Gambardella [20] define two sets:
$$L_l := \{x \in Q_l \mid t_x + d_x > t_v\}, \qquad R_l := \{x \in Q_l \mid h_x + d_x > h_v\}.$$
Clearly, $L_l$ is a prefix and $R_l$ is a suffix of $Q_l$. One of the main results in [20] states that inserting $v$ after all operations in $L_l - R_l$ and before any operation in $R_l - L_l$ is always feasible. Moreover, there exists an optimal insertion of $v$ in this interval of $Q_l$.

Our first goal is to generalise these ideas to our more general problem. There are some technical difficulties to cope with, due to the fact that in our case $v$ is to be inserted into two (or more) sequences simultaneously. First we define sets $L_l$ and $R_l$ for all machines in $\mu_v$ as above. We will frequently exploit the following property of these sets:
Lemma 6. Suppose $M_{l_1}, M_{l_2} \in \mu_v$ and assume that $Q_{l_1} \cap Q_{l_2} \ne \emptyset$. Let $x$ be an arbitrary operation in $Q_{l_1} \cap Q_{l_2}$. Then we have
(a) $x \in L_{l_1}$ if and only if $x \in L_{l_2}$.
(b) $x \in R_{l_1}$ if and only if $x \in R_{l_2}$.

Proof. We prove only part (a), the proof of part (b) being similar. Suppose $x \in L_{l_1}$. We show that $x \in L_{l_2}$, the other direction is symmetric. $x \in L_{l_1}$ implies that $t_x + d_x > t_v$. Hence $x \in L_{l_2}$ by definition.

Let $\mu^N_v \cup \mu^I_v$ be a partitioning of $\mu_v$, such that $M_l \in \mu^N_v$ if and only if $L_l \cap R_l = \emptyset$. We define sets $\hat P_v$ and $\hat S_v$ as
$$\hat P_v := \bigcup_{M_l \in \mu_v} (Q_l - R_l), \qquad \hat S_v := \Big(\bigcup_{M_l \in \mu^N_v} R_l\Big) \cup \Big(\bigcup_{M_l \in \mu^I_v} (Q_l - L_l)\Big).$$

Our first observation concerns $\hat S_v$:

Lemma 7. $\hat S_v$ has the property that
$$\hat S_v \subseteq \bigcup_{M_l \in \mu_v} R_l. \qquad (2)$$

Proof. If $\mu^I_v = \emptyset$, then there is nothing to prove. Assume this is not the case and let $M_l$ be an arbitrary machine in $\mu^I_v$. Notice that $R_l \cap L_l \ne \emptyset$, since $M_l \in \mu^I_v$. Since $L_l$ is a prefix and $R_l$ is a suffix of $Q_l$, we have $Q_l \subseteq L_l \cup R_l$. The statement follows.

The following "separation" lemma is a useful tool in the next proofs and it is the ground for reducing the problem size.

Lemma 8. Suppose $\mu^N_v \ne \emptyset$ and $\mu^I_v \ne \emptyset$. Then
$$\Big(\bigcup_{M_l \in \mu_v} L_l\Big) \cap \Big(\bigcup_{M_l \in \mu^N_v} R_l\Big) = \emptyset.$$

Proof. Assume the contrary and let $x$ be an arbitrary operation in $(\bigcup_{M_l \in \mu_v} L_l) \cap (\bigcup_{M_l \in \mu^N_v} R_l) \ne \emptyset$. We distinguish between two cases:

• $x \in \bigcup_{M_l \in \mu^N_v} L_l$. Since $x \in \bigcup_{M_l \in \mu^N_v} R_l$ as well, it follows that there exists a machine $M_l \in \mu^N_v$ with $R_l \cap L_l \ne \emptyset$, by Lemma 6, which contradicts the definition of $\mu^N_v$.
• $x \in \bigcup_{M_l \in \mu^I_v} L_l$. This condition implies that there exists $M_{l_1} \in \mu^I_v$ such that $x \in L_{l_1} \subseteq Q_{l_1}$. Since $x \in \bigcup_{M_l \in \mu^N_v} R_l$ as well, there exists $M_{l_2} \in \mu^N_v$ such that $x \in R_{l_2} \subseteq Q_{l_2}$. Consequently, $x \in Q_{l_1} \cap Q_{l_2}$ holds. Hence Lemma 6 applies and $x \in L_{l_2}$ follows. We deduce that $x \in L_{l_2} \cap R_{l_2}$, which is a contradiction, since $L_{l_2} \cap R_{l_2} = \emptyset$ holds by the choice of $M_{l_2} \in \mu^N_v$ (see Fig. 4 and the next example for an illustration).

No more cases are left, the statement is proved.

We illustrate this lemma by Fig. 4. The horizontal bold dashed line separates sequences of operations on machines in $\mu^N_v$ and $\mu^I_v$, respectively. Regions $L$ and $R$ identify operations in $\bigcup_{M_l \in \mu_v} L_l$ and $\bigcup_{M_l \in \mu_v} R_l$, respectively. Operations coloured grey are those in the intersection of $L$ and $R$. Lemma 8 states that $x$ in the intersection of sets $L$, $R$ and the set of operations on machines in $\mu^N_v$ cannot exist.

Fig. 4. The separation property of $\hat P_v$ and $\hat S_v$. Operation $x$ in the intersection of $L$, $R$ and the set of operations on machines in $\mu^N_v$ cannot exist.

Lemma 9. $\hat P_v$ and $\hat S_v$ have the following properties:
(a) $P_v \subseteq \hat P_v \subseteq Q - \hat S_v \subseteq Q - S_v$ holds.
(b) $\hat P_v$ is a maximal prefix and $h(\hat P_v) \le h_v$.
(c) $Q - \hat S_v$ is a prefix and $t(Q - \hat S_v) \le t_v$.

Proof
(a) First we prove that $P_v \subseteq \hat P_v$. Let $x$ be some operation in $P_v$. We claim that $x \in \hat P_v$. Since $x \in P_v$, there exists a directed path from $x$ to $v$ in $D_{(X,F)}$. Consequently, $h_x + d_x \le h_v$ must hold. It follows that $x \notin R_l$ for any $M_l \in \mu_v$. Hence $x \in \hat P_v$ and our claim is verified.

Next we show that $\hat P_v \subseteq Q - \hat S_v$. It suffices to prove that $\hat P_v \cap \hat S_v = \emptyset$. We claim that $\hat P_v \cap (\bigcup_{M_l \in \mu_v} R_l) = \emptyset$; from this $\hat P_v \cap \hat S_v = \emptyset$ follows by (2). Suppose $\hat P_v \cap (\bigcup_{M_l \in \mu_v} R_l) \ne \emptyset$ and let $x \in \hat P_v \cap (\bigcup_{M_l \in \mu_v} R_l)$ be any operation. Then $x \in (Q_{l_1} - R_{l_1}) \cap R_{l_2}$ holds for some $M_{l_1}, M_{l_2} \in \mu_x \cap \mu_v$. This contradicts Lemma 6.

Finally we claim that $S_v \subseteq \hat S_v$. The proof is similar to that of $P_v \subseteq \hat P_v$.
(b) We already know that $P_v \subseteq \hat P_v \subseteq Q - S_v$, by part (a). Since $\hat P_v$ contains all operations $x \in Q$ with $h_x + d_x \le h_v$, it follows that $\hat P_v$ is a maximal prefix with $h(\hat P_v) \le h_v$, as claimed.

(c) First we show that $Q - \hat S_v$ is a prefix. Let $x$ be an arbitrary operation in $Q - \hat S_v$. Let $y \in Q$ be any operation with $y \to_{(X,F)} x$. We claim that $y \in Q - \hat S_v$. Suppose not, i.e. assume that $y \in \hat S_v$ holds. Since $\hat S_v \subseteq \bigcup_{M_l \in \mu_v} R_l$, it follows that $y \in \bigcup_{M_l \in \mu_v} R_l$. Since $h_y + d_y \le h_x$, for $y \to_{(X,F)} x$, it follows that $x \in \bigcup_{M_l \in \mu_v} R_l$, too. But $x \notin \hat S_v$, hence for all machines $M_l$ such that $x \in Q_l$, $M_l \in \mu^I_v$ and $x \in L_l$ hold. Consequently, $t_x + d_x > t_v$. Moreover, $t_y \ge t_x + d_x$, for $y \to_{(X,F)} x$, and then $t_y + d_y > t_v$. It follows that $y \in \bigcup_{M_l \in \mu_v} L_l$ and then $y \in (\bigcup_{M_l \in \mu_v} L_l) \cap \hat S_v$. That is,
$$y \in \left(\Big(\bigcup_{M_l \in \mu_v} L_l\Big) \cap \Big(\bigcup_{M_l \in \mu^N_v} R_l\Big)\right) \cup \left(\Big(\bigcup_{M_l \in \mu_v} L_l\Big) \cap \Big(\bigcup_{M_l \in \mu^I_v} (Q_l - L_l)\Big)\right)$$
holds. The first intersection is empty, by Lemma 8. The second one is empty, too, by Lemma 6. Hence we have encountered a contradiction. Finally, $t(Q - \hat S_v) \le t_v$ holds by the definition of $\hat S_v$.

We would like to find an optimal insertion between $\hat P_v$ and $Q - \hat S_v$. However, since $Q - \hat S_v$ is not necessarily a maximal prefix, there may emerge the following difficulty. Let $\tilde H$ be an optimal maximal prefix between $P_v$ and $Q - S_v$. There are three possible cases:

• $\tilde H \subseteq (Q - \hat S_v)$, or
• $(Q - \hat S_v) \subseteq \tilde H$, or
• $\tilde H \not\subseteq (Q - \hat S_v)$ and $(Q - \hat S_v) \not\subseteq \tilde H$.

The first two cases do not pose any difficulties, whereas in the third case we will show that $\tilde H$ can be truncated into a prefix between $\hat P_v$ and $Q - \hat S_v$ with the same value. First we define maximal prefixes restricted to sets between $\hat P_v$ and $(Q - \hat S_v)$ by replacing $P_v$ and $S_v$ by $\hat P_v$ and $\hat S_v$, respectively, in the definition. It is not obvious whether a maximal prefix between $\hat P_v$ and $(Q - \hat S_v)$ defines a feasible insertion. Fortunately, we have the following:

Lemma 10. If $H$ is a maximal prefix between $\hat P_v$ and $(Q - \hat S_v)$, then $H$ is a prefix.

Proof. Since $\hat P_v \subseteq H \subseteq (Q - \hat S_v)$ holds by the choice of $H$, and $P_v \subseteq \hat P_v$ and $(Q - \hat S_v) \subseteq (Q - S_v)$ by part (a) of Lemma 9, we can deduce that $P_v \subseteq H$ and $H \cap S_v = \emptyset$ hold. Let $x$, $y$ be a pair of operations such that $x \in H$ and $y \to_{(X,F)} x$ hold. Since $x \in H \subseteq Q - \hat S_v$, it follows that $y \in Q - \hat S_v$, by part (c) of Lemma 9. On the other hand, $h_y + d_y \le h(H)$, for $h_y + d_y \le h_x < h_x + d_x \le h(H)$. Since $H$ contains all operations $z$ in $(Q - \hat S_v)$ with $h_z + d_z \le h(H)$, it follows that $y \in H$.
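The sets introduced above are straightforward to compute once heads and tails are known. The following Python sketch does so for a single unscheduled operation $v$; the data layout (one operation sequence per machine, dictionaries of heads, tails and durations) and the function name are assumptions made for illustration only.

```python
def reduction_sets(mu_v, sequences, h, t, d, v):
    """Compute L_l and R_l for every machine in mu_v, the partition into
    mu_N (L_l and R_l disjoint) and mu_I, and the sets P_hat and S_hat
    used to shrink the insertion problem for operation v."""
    L, R = {}, {}
    for m in mu_v:
        Q_l = sequences[m]                       # operations scheduled on machine m
        L[m] = {x for x in Q_l if t[x] + d[x] > t[v]}
        R[m] = {x for x in Q_l if h[x] + d[x] > h[v]}
    mu_N = {m for m in mu_v if not (L[m] & R[m])}
    mu_I = set(mu_v) - mu_N
    P_hat = set().union(*(set(sequences[m]) - R[m] for m in mu_v))
    S_hat = set().union(*(R[m] for m in mu_N),
                        *(set(sequences[m]) - L[m] for m in mu_I))
    return L, R, mu_N, mu_I, P_hat, S_hat
```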
Lemma 11. There exists a maximal prefix between $\hat P_v$ and $(Q - \hat S_v)$ that minimises (1).

Proof. Let $\tilde H$ be an optimal feasible insertion, i.e. one minimising (1). We may assume that $\tilde H$ is a maximal prefix, by Corollary 1. It suffices to show that there exists a maximal prefix $H$ between $\hat P_v$ and $Q - \hat S_v$ such that $l_v(H) \le l_v(\tilde H)$ holds. Since maximal prefixes are nested, by Corollary 2, and $\hat P_v$ is a maximal prefix, by part (b) of Lemma 9, we distinguish between two cases:

• $\tilde H \subseteq \hat P_v$. Since $h(\hat P_v) \le h_v$, it follows that $h(\tilde H) < h_v$, by Lemma 4. Moreover, $t(\tilde H) \ge t(\hat P_v)$, by the definition of $t(\cdot)$. Hence we have
$$l_v(\hat P_v) = \max\{h(\hat P_v), h_v\} + d_v + \max\{t(\hat P_v), t_v\} = h_v + d_v + \max\{t(\hat P_v), t_v\} \le \max\{h(\tilde H), h_v\} + d_v + \max\{t(\tilde H), t_v\} = l_v(\tilde H).$$
Since $\hat P_v$ is a maximal prefix, by part (b) of Lemma 9, the statement is verified by the maximal prefix $H = \hat P_v$ between $\hat P_v$ and $Q - \hat S_v$.
• $\hat P_v \subseteq \tilde H$. Since $Q - \hat S_v$ is not necessarily a maximal prefix, we distinguish between three cases:
  – $\tilde H \subseteq (Q - \hat S_v)$. In this case there is nothing to prove, since $\hat P_v \subseteq \tilde H \subseteq (Q - \hat S_v)$.
  – $(Q - \hat S_v) \subseteq \tilde H$. Then $t(Q - \hat S_v) \ge t(\tilde H)$ and $h(Q - \hat S_v) \le h(\tilde H)$ follow. Since $t_v \ge t(Q - \hat S_v)$, by part (c) of Lemma 9, we deduce that $t_v \ge t(\tilde H)$. Consequently, we have
$$l_v(Q - \hat S_v) = \max\{h(Q - \hat S_v), h_v\} + d_v + \max\{t(Q - \hat S_v), t_v\} = \max\{h(Q - \hat S_v), h_v\} + d_v + t_v \le \max\{h(\tilde H), h_v\} + d_v + \max\{t(\tilde H), t_v\} = l_v(\tilde H),$$
proving that $Q - \hat S_v$ is a solution not worse than $\tilde H$.
  – $\tilde H \not\subseteq (Q - \hat S_v)$ and $(Q - \hat S_v) \not\subseteq \tilde H$. We show that there exists a maximal prefix $H'$ between $\hat P_v$ and $(Q - \hat S_v)$ such that $l_v(H') = l_v(\tilde H)$. Let $H' := (Q - \hat S_v) \cap \tilde H$. Notice that $\hat P_v \subseteq H' \subseteq (Q - \hat S_v)$ holds. Moreover, $H'$ is a maximal prefix between $\hat P_v$ and $(Q - \hat S_v)$. We still have to show that $l_v(H') = l_v(\tilde H)$ holds. To this end we prove two claims:

Claim 1. $t_v < t(\tilde H)$.
Since $(Q - \hat S_v) \not\subseteq \tilde H$, it follows that there exists $x \in (Q - \hat S_v - \tilde H)$ with $t_x + d_x > t_v$. Since $(Q - \hat S_v - \tilde H) \subseteq (Q - \tilde H)$, it follows that $x \in (Q - \tilde H)$. Hence $t_v < t(\tilde H)$ by the definition of $t(\tilde H)$.

Claim 2. $t(H') = t(\tilde H)$.
$H' \subseteq \tilde H$ implies $t(H') \ge t(\tilde H)$. Suppose $t(H') > t(\tilde H)$ holds. Then there must exist $x \in (Q - H') \cap \tilde H$ such that $t_x + d_x > t(\tilde H)$. Since $t_v < t(\tilde H)$, by Claim 1, it follows that $t_v < t_x + d_x$. Hence $x \in (Q - \hat S_v)$ holds, which implies $x \in (Q - \hat S_v) \cap \tilde H$ by the choice of $x$. The latter set is precisely $H'$, thus $x \in H'$, a contradiction.

Using the last claim and the fact that $h(H') \le h(\tilde H)$ it is a straightforward matter to verify that $l_v(H') \le l_v(\tilde H)$.

No more cases are left, the statement has been proved.

Owing to this lemma, Algorithm 1 will find an optimal insertion between $\hat P_v$ and $Q - \hat S_v$. Nevertheless, instead of stopping here, we show that the problem size can be reduced. Consider Algorithm 2: it reduces the computation by distinguishing between three special cases of the insertion problem:

• $\mu^I_v = \emptyset$. In this case we will prove that $\hat P_v$ is an optimal insertion.
• $\mu^N_v = \emptyset$. In this case we will show that there exists a maximal prefix $H$ between $\hat P_v$ and $Q - \hat S_v$, such that $H$ is an optimal insertion. Consequently, we can find such an insertion by applying the Simple Insertion algorithm between $\hat P_v$ and $Q - \hat S_v$.
• $\mu^N_v \ne \emptyset$ and $\mu^I_v \ne \emptyset$. After finding an optimal insertion $H'$ of $v$ into the sequences of operations on machines in $\mu^I_v$, we will extend $H'$ to an insertion on all machines in $\mu_v$. We will prove that the resulting set is indeed an optimal insertion.
Algorithm 2. Modified Insertion($v$)
1: Calculate $\mu^N_v$, $\mu^I_v$, $\hat P_v$ and $\hat S_v$.
2: if $\mu^I_v = \emptyset$ then
3:   return $\hat P_v$
4: else if $\mu^N_v = \emptyset$ then
5:   return Simple Insertion($v$, $\hat P_v$, $Q - \hat S_v$).
6: else
7:   $Q' := \bigcup_{M_l \in \mu^I_v} Q_l$, $P'_v := \bigcup_{M_l \in \mu^I_v} (Q_l - R_l)$, $S'_v := \bigcup_{M_l \in \mu^I_v} (Q_l - L_l)$.
8:   $H' :=$ Simple Insertion($v$, $P'_v$, $Q' - S'_v$).
9:   $H := H' \cup (\bigcup_{M_l \in \mu^N_v} (Q_l - R_l))$.
10:  return $H$.
11: end if

Lemma 12. The Modified Insertion algorithm outputs an optimal feasible insertion.

Proof. We distinguish between three cases:

• $\mu^I_v = \emptyset$. In this case Lemma 6 implies that $\hat P_v = Q - \hat S_v$ holds. The statement follows from Lemma 11.
• $\mu^N_v = \emptyset$. The statement follows directly from Lemma 11.
• $\mu^N_v \ne \emptyset$ and $\mu^I_v \ne \emptyset$. On the one hand, it is easy to see that the output $H$ of the algorithm is a maximal prefix between $\hat P_v$ and $Q - \hat S_v$. On the other hand, we claim that $H$ is an optimal insertion. Suppose not. That is, assume there exists a prefix $\tilde H$ between $P_v$ and $Q - S_v$ such that $l_v(\tilde H) < l_v(H)$. W.l.o.g. $\tilde H$ is a maximal prefix, by Corollary 1. Let $t'(H)$ denote the tail of some prefix $H$ of $Q'$, i.e.
$$t'(H) := \max\{t_x + d_x \mid x \in Q' - H\}, \quad \text{where } H \subseteq Q' \text{ is a prefix}.$$
Notice that there is no need to define $h'(H)$. Let $l'_v(H) := \max\{h_v, h(H)\} + d_v + \max\{t_v, t'(H)\}$. Now we have a contradiction:
$$l'_v(H') \le l'_v(Q' \cap \tilde H) \quad (H' \text{ is optimal on } Q'),$$
$$\le l_v(\tilde H) \quad \text{(from the definitions)},$$
$$< l_v(H) \quad \text{(our assumption)},$$
$$= l'_v(H') \quad \text{(justified below)}.$$

It remains to prove that $l_v(H) = l'_v(H')$. In fact, it is enough to show that $\max\{h(H), h_v\} = \max\{h(H'), h_v\}$ and that $\max\{t(H), t_v\} = \max\{t'(H'), t_v\}$. We have the following:
$$\max\{h(H), h_v\} = \max\Big\{h(H'),\ \max\Big\{h_x + d_x \,\Big|\, x \in \bigcup_{M_l \in \mu^N_v} (Q_l - R_l)\Big\},\ h_v\Big\} = \max\{h(H'), h_v\},$$
where the first equality follows from the definition, and the second one is due to the fact that $\bigcup_{M_l \in \mu^N_v} (Q_l - R_l) \subseteq \hat P_v$ and $h(\hat P_v) \le h_v$, by part (b) of Lemma 9. One similarly shows that $\max\{t(H), t_v\} = \max\{t'(H'), t_v\}$ holds as well.

No more cases are left, the lemma is proved.

To insert a set of operations $U$ into a partial schedule $(X, F)$, we use a modified version of the original method of Werner and Winkler [24] that has been proposed for classical job-shop scheduling. The method inserts the operations into a partial schedule one by one in non-increasing processing time order. We follow the same strategy, but for inserting a single operation into the partial schedule we use the Modified Insertion algorithm. We will call this algorithm InsertSet($(X, F)$, $U$). Its time complexity is in $O(|U|(e + |U|))$, where $e$ is the number of arcs in $D_{(X,F)}$, due to the necessary update of heads and tails after inserting an operation. Note though that after inserting some operation $v$ it is enough to recompute the heads of those operations that follow it and the tails of those operations that precede it after the insertion.
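To make the insertion machinery concrete, here is a minimal Python sketch in the spirit of Algorithm 1 (Simple Insertion): it enumerates the candidate maximal prefixes between the prefixes P and Ĥ in increasing $h_x + d_x$ order and keeps the one with the smallest value (1). It works on plain Python sets and precomputed heads/tails, and a plain sort replaces the per-machine priority queues of the paper, so it is an illustrative rendering of the idea rather than the paper's implementation.

```python
def simple_insertion(v, P, H_hat, Q, h, t, d):
    """Sketch of Algorithm 1: enumerate maximal prefixes between the prefixes
    P and H_hat (both sets) and return one minimising the insertion value (1)."""
    def value(H):
        hH = max((h[x] + d[x] for x in H), default=0)
        tH = max((t[x] + d[x] for x in Q - H), default=0)
        return max(hH, h[v]) + d[v] + max(tH, t[v])

    hP = max((h[x] + d[x] for x in P), default=0)
    H_e = {x for x in H_hat if h[x] + d[x] <= hP}              # H_1
    best, best_val = set(H_e), value(H_e)
    remaining = sorted(H_hat - H_e, key=lambda x: h[x] + d[x])
    i = 0
    while i < len(remaining):
        k = h[remaining[i]] + d[remaining[i]]                  # smallest remaining key k
        while i < len(remaining) and h[remaining[i]] + d[remaining[i]] == k:
            H_e.add(remaining[i])                              # add every operation with key k
            i += 1
        val = value(H_e)
        if val < best_val:
            best, best_val = set(H_e), val
    return best, best_val
```

InsertSet can then be pictured as sorting the operations of the branch by non-increasing duration and inserting them one by one with this routine (or with the reduced variant of Algorithm 2), updating heads and tails after each insertion.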
4. A heuristic for scheduling jobs with partial order of operations

In this section we describe a tabu search algorithm to solve the problem with fixed routing alternatives, i.e., when the routing alternative in each
or-subgraph is fixed in advance and cannot be changed. We confine our discussion to the neighbourhood function and neighbour evaluation, since other aspects of the algorithm are simple adaptations of known tabu search algorithms for the classical job-shop problem (see e.g. [13]).

Throughout this section we always refer to a fixed selection $X$. Consider the disjunctive graph $D[X]$ obtained from $D$ by eliminating all not-selected branches in all or-subgraphs. Let $F$ be any orientation of the disjunctive edges $E[X]$ such that $(X, F)$ is a complete schedule. Let $P$ be a critical path of $D_{(X,F)}$. A job-block of $P$ is a maximal subsequence $B$ of $P$ such that all operations in $B$ belong to the same job. Symmetrically, a machine-block of $P$ is a maximal subsequence $B$ of $P$ such that all operations in $B$ require the same machine. Clearly, a critical path can uniquely be decomposed into a sequence of successive job-blocks or machine-blocks. Let $f(B)$ and $l(B)$ denote the first and the last operation, respectively, of some block $B$ of a critical path. The neighbourhood used in the tabu search algorithm is derived from the following:

Theorem 1. Let $(X, F_1)$ and $(X, F_2)$ be two complete schedules with $C_{\max}(X, F_1) > C_{\max}(X, F_2)$. Let $P$ be a critical path of $D_{(X,F_1)}$ and $P = (B_1, \ldots, B_k)$ be a decomposition of $P$ into successive job or machine blocks. Then at least one of the following conditions is satisfied:

• There exists a block $B_t$ and an operation $v \in B_t - \{f(B_t)\}$ such that $v$ is processed in $(X, F_2)$ before all operations in $B_t - \{v\}$.
• There exists a block $B_t$ and an operation $v \in B_t - \{l(B_t)\}$ such that $v$ is processed in $(X, F_2)$ after all operations in $B_t - \{v\}$.
• There exist two successive blocks $B_t$ and $B_{t+1}$ such that $f(B_{t+1})$ is processed before $l(B_t)$ in $(X, F_2)$.

The first two conditions of the theorem have already been proposed by Dell'Amico and Trubian for the job-shop scheduling problem [13], but the third one is new. The reason is that unlike in job-shop scheduling, there may be successive blocks $B_t$ and $B_{t+1}$ of $P$ such that either
• $B_t$ and $B_{t+1}$ are machine-blocks, and $l(B_t)$ and $f(B_{t+1})$ are operations of the same job that are on different branches of some and-subgraph, or
• $B_t$ and $B_{t+1}$ are job-blocks, and $l(B_t)$ and $f(B_{t+1})$ are operations of distinct jobs that require the same machine.

We define the neighbourhood by means of transformations of schedules:

• move-before($B$, $v$): move $v \in B - \{f(B)\}$ before $f(B)$,
• move-after($B$, $v$): move $v \in B - \{l(B)\}$ after $l(B)$.

Observe that there is no transformation corresponding to the third condition of the theorem. Consider e.g. two successive machine blocks $B_t$ and $B_{t+1}$. Then $l(B_t)$ and $f(B_{t+1})$ belong to the same job. Hence there exists a job-block containing both of them. When processing this job-block, then reversing the order of the two operations is indeed considered.

Move-before checks first whether $v$ can be moved before the first operation $f(B)$ of $B$. Remove all arcs from $F$ adjacent to $v$ in $D_{(X,F)}$, and let $D_{(X,F-v)}$ be the resulting digraph. If there is no directed path from $f(B)$ to $v$ in $D_{(X,F-v)}$ then $v$ can be moved before $f(B)$. But checking this condition is time consuming, so move-before performs the following test instead: Let $h^-_v$ be the head of $v$ in $D_{(X,F-v)}$. Notice that $h^-_v$ can be computed without recomputing the heads of the operations, for the head of the operation before $v$ on any path from 0 to $v$ is the same in $D_{(X,F)}$ and in $D_{(X,F-v)}$. If
$$h^-_v < h_{f(B)} + d_{f(B)} \qquad (3)$$
then no path exists from $f(B)$ to $v$ in $D_{(X,F-v)}$. Assume for a moment that (3) is not satisfied, i.e. $h^-_v \ge h_{f(B)} + d_{f(B)}$ holds. Then it is still possible that there is no directed path from $f(B)$ to $v$. Suppose that $v$ is not the last operation of $B$, either. Then $t^-_{f(B)} \ge t_{f(B)} - d_v$ holds. Consequently, moving $v$ before $f(B)$ will not improve the schedule, for the makespan of the resulting schedule will be at least $h^-_v + d_v + d_{f(B)} + t^-_{f(B)}$, and we have
$$h^-_v + d_v + d_{f(B)} + t^-_{f(B)} \ge h^-_v + d_{f(B)} + t_{f(B)} \ge h_{f(B)} + d_{f(B)} + d_{f(B)} + t_{f(B)} > h_{f(B)} + d_{f(B)} + t_{f(B)} = C_{\max}(X, F).$$

Hence we have proven the following:

Lemma 13. Let $P$ be a critical path of $D_{(X,F)}$ and let $B$ be a block of $P$. Suppose that $B$ consists of at least three operations and let $v \in B - \{f(B), l(B)\}$ be any operation with $h^-_v \ge h_{f(B)} + d_{f(B)}$. Then moving $v$ before $f(B)$ cannot improve $(X, F)$.

However, when $v = l(B)$ and condition (3) fails, then we do not know whether move-before would improve the schedule or not. Since we want to avoid the time-consuming computation for checking feasibility, move-before is not defined in this case. Move-before is defined if and only if $v$ verifies (3). When defined, it returns a prefix denoted by $H^b_v$:
$$H^b_v := \{x \in Q^- \mid h_x + d_x \le \max\{h_{f(B)}, h^-_v\}\},$$
where $Q^-$ is the set of operations on machines in $\mu_v$ but $v$.

Lemma 14. $H^b_v$, when defined, induces a feasible insertion.

Proof. We claim that $H^b_v$ is a maximal prefix between $P^-_v$ and $Q^- - S^-_v$, where $P^-_v$ and $S^-_v$ are the sets of operations preceding and succeeding $v$ in $D_{(X,F-v)}$.

First we claim that $P^-_v \subseteq H^b_v$. Suppose not and let $x$ be any operation in $P^-_v$ such that $x \notin H^b_v$. Then $h_x + d_x > \max\{h_{f(B)}, h^-_v\}$ must hold, by the definition of $H^b_v$. But this is impossible, for $x \in P^-_v$ implies $h_x + d_x \le h^-_v$.

Second we claim that $S^-_v \cap H^b_v = \emptyset$. Suppose not and let $y$ be an arbitrary operation in $H^b_v \cap S^-_v$. Then $y \in H^b_v \cap S_v$ holds as well. Hence we have
$$h_y \ge h_v + d_v \quad (\text{for } y \in S_v)$$
$$> h_v \ge h_{f(B)} + d_{f(B)} \quad (v \text{ is a successor of } f(B) \text{ in } D_{(X,F)})$$
$$> \max\{h_{f(B)}, h^-_v\} \quad (h^-_v \text{ satisfies } (3)),$$
which contradicts the fact that $y \in H^b_v$.

Finally, it is a trivial matter to verify that $H^b_v$ contains all operations $x \in Q^- - S^-_v$ with $h_x + d_x \le h(H^b_v)$.

The definition of move-after is symmetric to that of move-before. That is, move-after($B$, $v$) is defined if and only if $v \ne l(B)$ and $t^-_v < t_{l(B)} + d_{l(B)}$ hold. When defined, it returns a prefix $H^a_v$ of $v$ defined as
$$H^a_v := \{x \in Q^- \mid t_x + d_x > \max\{t_{l(B)}, t^-_v\}\}.$$

The value of a move, ideally, is the length of the longest path from 0 to $*$ through $v$ after moving the operation into the new position. However, the heads and the tails of the operations as defined in $D_{(X,F)}$ are not adequate to determine this value precisely, for if $D_{(X,F-v)}$ is the digraph obtained by removing $v$ from the schedule, then $h^-_x \le h_x$ and $t^-_x \le t_x$ for all operations $x$, and equality is guaranteed only if $x$ precedes (succeeds) $v$ in $D_{(X,F)}$. In order to avoid the time consuming re-computation of the heads and the tails of the operations in $D_{(X,F-v)}$, we make an estimation. Namely, when the move is move-before, then we can determine exactly the head of $v$ in the new position, and we can estimate its tail by locally recomputing the tails of operations between $v$ and its original position using a procedure similar to the lpath procedure of Dell'Amico and Trubian [13]. Symmetrically, when the move is move-after, then we know the tail of $v$, and we estimate the head of the operation by recomputing locally the heads of the operations between the original position of $v$ and the new position.

We apply to some initial schedule $S = (X, F)$ the tabu search depicted in Algorithm 3. For the tabu status of the moves and for the management of the tabu list, we have adopted the method of Dell'Amico and Trubian [13] and for details we refer the reader to that paper. Note that the size of the tabu list varies between the extremities tabumin and tabumax, and that the search restarts from the best solution ever found if no improvement occurs in the last $\Delta$ iterations.
Algorithm 3. TabuAnd($(X, F)$, maxit)
1: Set $F_1 := F$, $F_{opt} := F$, $T := \emptyset$, $optit := 1$.
2: for $it := 1$ to maxit do
3:   Determine a critical path $P$ in $D_{(X,F_{it})}$.
4:   Generate all possible moves with respect to all job and machine-blocks of $P$.
5:   Let $N$ be the set of all non-tabu moves and let $A$ be the set of tabu moves with value smaller than $C_{\max}(X, F_{opt})$.
6:   if $N \cup A \ne \emptyset$ then
7:     Let $mv \in N \cup A$ be the move with smallest value.
8:     Apply $mv$ to $D_{(X,F_{it})}$, let $F_{it+1}$ be the resulting orientation of $E[X]$.
9:     Update $T$ with $mv$.
10:    if $C_{\max}(X, F_{it+1}) < C_{\max}(X, F_{opt})$ then
11:      Set $F_{opt} := F_{it+1}$, $optit := it$.
12:    else
13:      if $it - optit > \Delta$ then
14:        Restart the search.
15:      end if
16:    end if
17:  else
18:    Restart the search.
19:  end if
20: end for
21: Output $(X, F_{opt})$.
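The control flow of Algorithm 3 can be mirrored in a few lines of Python. The sketch below delegates all graph-level work to injected helper functions (evaluate, critical_path, generate_moves, apply_move, restart_from), which are placeholders rather than the paper's code, and it uses a fixed-length tabu list instead of the variable-length list used in the paper.

```python
from collections import deque

def tabu_and(schedule, maxit, delta, tabu_size, evaluate,
             critical_path, generate_moves, apply_move, restart_from):
    """Skeleton of Algorithm 3 (TabuAnd): only the search control is shown.
    generate_moves must return (value, key, move) triples, where value is the
    estimated makespan after the move and key identifies it in the tabu list."""
    best, best_val = schedule, evaluate(schedule)
    tabu = deque(maxlen=tabu_size)          # simplified, fixed-length tabu list
    opt_it = 0
    for it in range(1, maxit + 1):
        moves = generate_moves(schedule, critical_path(schedule))
        # non-tabu moves, plus tabu moves whose value beats the incumbent
        candidates = [m for m in moves if m[1] not in tabu or m[0] < best_val]
        if not candidates:
            schedule = restart_from(best)
            continue
        value, key, move = min(candidates, key=lambda m: m[0])
        schedule = apply_move(schedule, move)
        tabu.append(key)
        cur = evaluate(schedule)
        if cur < best_val:
            best, best_val, opt_it = schedule, cur, it
        elif it - opt_it > delta:
            schedule = restart_from(best)
    return best
```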
5. A tabu search algorithm for AJSP

In this section we devise a simple tabu search algorithm for AJSP by combining our methods with known techniques for the job-shop problem. The crucial decision when designing this algorithm was how much effort should be invested in neighbourhood evaluation. As a prerequisite recall that Brandimarte [3] has proposed a hierarchical approach for FLEX-JSP, which is a special case of our problem allowing processing alternatives for single operations only. Meanwhile, much stronger algorithms have been proposed for that problem based on a one-level tabu search using a well-designed neighbourhood (cf. [11,20]). The common property of the neighbourhoods proposed in [11,20] is that no distinction is made between assigning a new machine to an operation and re-inserting the operation on
the same machine. This idea works very well in practice for FLEX-JSP for two reasons:

• it is possible to estimate well the effects of the transformation performed on a schedule,
• the transformation only slightly modifies the actual schedule.

In our more general problem it does not seem so obvious to estimate the effects of changing the routing alternative selected in some or-subgraph. Hence, we return to the search-control proposed by Brandimarte with some modifications and solutions for some anomalies.

Our local search algorithm is based on tabu search: it maintains an actual schedule $S = (X, F)$, and in each iteration it generates a number of neighbours of $S$. Let $L_\chi$ be the set of all or-subgraphs containing at least one critical operation on the branch selected by $X$ (Brandimarte considers only the operations on a critical path). The set of moves with respect to $L_\chi$ consists of the ordered pairs $(l, r)$, where $l \in L_\chi$, $r \in \{1, \ldots, b_l\} - \{X(l)\}$, and it means that the selection in or-subgraph $l$ is to be changed to $r$. The value of a move $(l, r)$ is the makespan of the schedule obtained from $S$ by removing the operations on branch $X(l)$ of $l$ and inserting the new operations on branch $r$ into the partial schedule by InsertSet. A move $(l, r)$ is tabu if and only if $r$ was the selected branch of $l$ in a schedule visited during the last tabusize iterations. This way short cycles due to removing and then reinserting the same branch of an or-subgraph are prevented. Only non-tabu moves are considered when generating the neighbours of $S$: we have observed that incorporating the standard aspiration condition does not improve the quality of the final solution but considerably increases the computation time (Brandimarte includes an aspiration condition in his method). After generating all non-tabu neighbours of $S$, the one with smallest makespan is selected and it becomes the actual solution of the next iteration. If the best neighbour is not unique, then one is chosen uniformly at random. If $l$ is the or-subgraph in which the selection is changed, then $(l, X(l))$ becomes tabu for the next tabusize iterations.
The new solution is improved by TabuAnd (Algorithm 3), and the algorithm passes to the next iteration. However, there are some anomalies to cope with: it may be the case that $L_\chi = \emptyset$, yet $S$ is not an optimal schedule. In this case the search is restarted from the best solution ever found. On the other hand, there may exist only tabu moves: then the least recently used move is applied to $S$. We will call this algorithm TABU.
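A compact way to picture one TABU iteration is the following Python sketch, which generates the or-subgraph moves $(l, r)$, skips tabu ones, and evaluates each candidate by removing the currently selected branch and re-inserting the alternative branch. The helpers remove_branch, insert_set and makespan are placeholders for the operations of Section 3, and the ageing of tabu entries is omitted; all names are assumptions for illustration only.

```python
import random

def tabu_step(schedule, X, L_crit, branches, tabu,
              remove_branch, insert_set, makespan):
    """One iteration of TABU: try every non-tabu or-subgraph move (l, r) and
    keep the neighbour with the smallest makespan (ties broken at random).
    X maps or-subgraph labels to selected branches; branches[l] = b_l."""
    neighbours = []
    for l in L_crit:                                   # or-subgraphs with critical ops
        for r in range(1, branches[l] + 1):
            if r == X[l] or (l, r) in tabu:
                continue
            partial = remove_branch(schedule, l, X[l])       # drop branch X(l) of l
            candidate = insert_set(partial, l, r)            # insert branch r (InsertSet)
            neighbours.append((makespan(candidate), l, r, candidate))
    if not neighbours:
        return None                                    # caller handles this anomaly
    best_val = min(n[0] for n in neighbours)
    val, l, r, candidate = random.choice([n for n in neighbours if n[0] == best_val])
    tabu.add((l, X[l]))                                # forbid switching back to X(l)
    X[l] = r
    return candidate
```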
6. A genetic algorithm for AJSP

In this section we describe a genetic algorithm based heuristic for AJSP. For a general description of genetic algorithms we refer the reader to [15]. In our adaptation chromosomes are schedules. We denote the population (set of schedules) by $\mathcal{P}$. Furthermore, for ease of notation we assume that chromosomes are indexed in non-decreasing makespan order, i.e. $C_{\max}(X_1, F_1) \le C_{\max}(X_2, F_2) \le \cdots \le C_{\max}(X_{|\mathcal{P}|}, F_{|\mathcal{P}|})$.

There is only one genetic operator, the crossover. The operator is applied to a pair of chromosomes, master and apprentice. The idea of our crossover is to use a subset of selections in the master to possibly improve upon the apprentice. First, the master $(X_{mst}, F_{mst}) \in \mathcal{P}$ is selected at random, where the probability of choosing $(X_i, F_i) \in \mathcal{P}$ for the role of master is $(2(|\mathcal{P}| - i) + 1)/|\mathcal{P}|^2$, $1 \le i \le |\mathcal{P}|$. The apprentice $(X_{app}, F_{app})$ is selected from the set $\mathcal{P} - \{(X_{mst}, F_{mst})\}$ uniformly at random. The master and apprentice play asymmetric roles in producing the offspring $(X_f, F_f)$.

Let $\chi$ be the set of critical operations in $(X_{app}, F_{app})$, i.e. $\chi = \{x \in V[X_{app}] \mid h_x + d_x + t_x = C_{\max}(X_{app}, F_{app})\}$. Let $L_\chi \subseteq L$ be the set of or-subgraphs that contain at least one operation in $\chi$. Let $L'$ be a subset of $L_\chi$, such that $|L'| = \alpha |L_\chi|$, where $\alpha$ is an adjustable parameter of the algorithm. The contents of $L'$ is chosen uniformly at random from $L_\chi$. The two selections $X_{mst}$ and $X_{app}$ are combined into a new intermediate selection $X'_f$:
$$X'_f(l) = \begin{cases} X_{mst}(l) & \text{if } l \in L', \\ X_{app}(l) & \text{otherwise.} \end{cases}$$

$X_f$ is obtained from $X'_f$ by choosing a new branch uniformly at random in a $\beta$ fraction of or-subgraphs. The new branches are selected uniformly at random. In the terminology of genetic algorithms, $X_f$ is obtained from $X'_f$ by mutation. Finally, an orientation $F_f$ of $D[X_f]$ is calculated after changing the selection in $(X_{app}, F_{app})$ to $X_f$ by Algorithm 4. Notice that the algorithm reuses the orientation of the common edges of $D[X_{app}]$ and $D[X_f]$.

Algorithm 4. SwapBranches($(X_{app}, F_{app})$, $X_f$)
1: Delete all operations of $V[X_{app}] - V[X_f]$ from $D_{(X_{app}, F_{app})}$, let $D_{(X_{app}, F'_{app})}$ be the resulting digraph.
2: Change the selection from $X_{app}$ to $X_f$.
3: Set $(X_f, F_f) :=$ InsertSet($(X_f, F'_{app})$, $V[X_f] - V[X_{app}]$).
4: Improve $(X_f, F_f)$ by TabuAnd.
5: Output $(X_f, F_f)$.

The offspring is inserted into the population after deleting the parent with higher makespan. The initial population is obtained by generating $|\mathcal{P}|$ selections at random, then for each partial schedule $(X, \emptyset) \in \mathcal{P}$ the operations in $V[X]$ are inserted into the schedule by Modified Insertion, and finally the schedule is improved by TabuAnd. The genetic algorithm consists of applying crossover to $\mathcal{P}$ gennum times. Upon termination, the best schedule encountered during the search is $(X_1, F_1)$, which is the first (best) schedule in the last population. We call this algorithm GA.

Finally, we remark that the design of crossover is by no means arbitrary. We have conducted experiments with various parent selection rules and inheritance schemes, and the best combination seems to be the one presented here. In fact, a simpler crossover would just copy some selections from one parent and the rest of the selections from the other parent, but that crossover performed very poorly.
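The selection-level crossover and mutation can be sketched as follows in Python; population handling, InsertSet and TabuAnd are abstracted away, and all names are illustrative assumptions. The master index is drawn with the probability $(2(|\mathcal{P}| - i) + 1)/|\mathcal{P}|^2$ given above.

```python
import random

def pick_master_index(pop_size):
    """Index i (1-based, population sorted by makespan) drawn with
    probability (2*(pop_size - i) + 1) / pop_size**2."""
    weights = [2 * (pop_size - i) + 1 for i in range(1, pop_size + 1)]
    return random.choices(range(1, pop_size + 1), weights=weights, k=1)[0]

def crossover_selection(X_mst, X_app, L_crit, branches, alpha, beta):
    """Combine master and apprentice selections (dicts label -> branch),
    then mutate a beta fraction of the or-subgraphs at random."""
    L_prime = set(random.sample(sorted(L_crit), int(alpha * len(L_crit))))
    X_f = {l: (X_mst[l] if l in L_prime else X_app[l]) for l in X_app}
    for l in random.sample(sorted(X_f), int(beta * len(X_f))):
        X_f[l] = random.randint(1, branches[l])   # may re-pick the current branch
    return X_f
```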
7. Lower bound computation

For evaluating TABU and GA we compute a lower bound on the optimal makespan. We obtain the lower bound by relaxing the precedences between the operations of the jobs. Hence the relaxed problem is similar to an open-shop with the extension that there are sets of alternative sets of operations in the specification of the jobs. The lower bound is a straightforward extension of the well-known lower bound for the open-shop problem with $n$ jobs and $m$ machines, which is as follows:
$$\max\left(\left\{\sum_{i=1}^{n} d_{i,j} \,\middle|\, j = 1, \ldots, m\right\} \cup \left\{\sum_{j=1}^{m} d_{i,j} \,\middle|\, i = 1, \ldots, n\right\}\right), \qquad (4)$$
where $d_{i,j}$ is the processing time of $O_{i,j}$. Notice that the first set consists of the total processing demand on each machine, while the second set consists of the total processing time of each job.

We generalise (4) by taking into consideration the necessary choice from each set of alternatives in the routings of the jobs. More concretely, the lower bound is the optimum value of the following integer linear program:
$$\min\ lb$$
$$\sum_{l \in L} \sum_{k=1}^{b_l} x_{l,k}\, a(l,k) - lb \cdot \mathbf{1} \le -f^0, \qquad (5)$$
$$\sum_{l \in L_i} \sum_{k=1}^{b_l} (\mathbf{1}^T a(l,k))\, x_{l,k} - lb \le -f^i, \quad \forall\, 1 \le i \le n, \qquad (6)$$
$$\sum_{k=1}^{b_l} x_{l,k} = 1, \quad \forall\, l \in L, \qquad (7)$$
$$x_{l,k} \in \{0, 1\},$$
where

• $f^0$ denotes the $m$-vector whose $j$th component is the total processing time of those operations that require $M_j$ and do not belong to any or-subgraph,
• $f^i$ is the total processing time of those operations of job $J_i$ that do not belong to any or-subgraph,
• $L_i$ is the set of or-subgraphs in the routing of $J_i$,
• $a(l,k)$ denotes the $m$-vector whose $j$th component is the total processing time of those operations that require $M_j$ in the $k$th branch of the or-subgraph labelled with $l$.
Notice that (5) ensures that $lb$ is not smaller than the total processing time of operations on any machine, whereas constraints (6) imply that $lb$ is not smaller than the total processing time of any job, with respect to a selection of branches of or-subgraphs.
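Because the optimum of (5)-(7) equals the minimum, over all selections, of the larger of the maximal machine load and the maximal job load, the bound can be illustrated by brute-force enumeration when the number of or-subgraphs is small. The following Python sketch computes exactly that; the data layout and names are assumptions made for the example, not the paper's implementation.

```python
from itertools import product

def lower_bound(f0, f_job, a, job_of, branches):
    """Optimum of the ILP (5)-(7) by enumerating all selections.
    f0[j]        : fixed load of machine j (operations outside or-subgraphs)
    f_job[i]     : fixed processing time of job i
    a[(l, k)][j] : load that branch k of or-subgraph l puts on machine j
    job_of[l]    : index of the job whose routing contains or-subgraph l
    branches[l]  : number of branches b_l of or-subgraph l"""
    labels = sorted(branches)
    best = float("inf")
    for choice in product(*(range(1, branches[l] + 1) for l in labels)):
        load = list(f0)                       # machine loads under this selection
        jobs = list(f_job)                    # job loads under this selection
        for l, k in zip(labels, choice):
            vec = a[(l, k)]
            for j in range(len(load)):
                load[j] += vec[j]
            jobs[job_of[l]] += sum(vec)
        best = min(best, max(max(load), max(jobs)))
    return best
```

With no or-subgraphs at all, the loop degenerates to a single iteration and the result coincides with the classical open-shop bound (4).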
8. Computational results

We tested our algorithms on several data sets. Since we are not aware of benchmark problems for AJSP, we used randomly generated problem instances. AJSP contains MPM job-shop scheduling as a special case, hence we also evaluated our algorithms on benchmarks from the literature. Inserting a set of operations into a partial schedule is an important subproblem when solving AJSP; to see how efficiently we are able to solve this problem, we tested our insertion heuristic, and also TabuAnd, separately on open-shop instances. We start with this latter topic.

8.1. Results on open-shop instances

We have implemented InsertSet and TabuAnd in the programming language C++. The tests of this section were executed on a personal computer with a Pentium II 400 MHz processor. The parametrisation of TabuAnd was as follows: maxit = 10000, tabumin = 1, tabumax = 10, D = 500. In our experiments we used the open-shop instances of Taillard [23]; the results are summarised in Table 1. The table contains problem instances of different sizes:

• tai5-1–tai5-10: 5 job, 5 machine instances,
• tai7-1–tai7-10: 7 job, 7 machine instances,
• tai10-1–tai10-10: 10 job, 10 machine instances,
• tai20-1–tai20-10: 20 job, 20 machine instances.

For each instance it is given:
• Opt/(LB): the optimal makespan taken from Brucker et al. [7] or, if it is not known, the trivial lower bound (LB) computed by formula (4),
• UB_Insert: the makespan of the solution obtained by InsertSet,
• UB_TabuAnd: the makespan of the schedule computed by TabuAnd, where the starting solution was the output of InsertSet,
• UB_Taillard: the best makespan obtained by Taillard [23],
• UB_Bräsel: the best makespan obtained by Bräsel et al. [4],
• CPU: the computation time in CPU seconds needed for the combined effort of InsertSet and TabuAnd.

Notice that Taillard obtained the upper bounds by tabu search and Bräsel et al. used various constructive methods.

Table 1
Computational results on the open-shop instances from Taillard [23]

Instance    Opt/(LB)   UB_Insert   UB_TabuAnd   UB_Taillard   UB_Bräsel   CPU
tai5-1      300        319         301          300           310         0.88
tai5-2      262        281         262          262           265         0.89
tai5-3      323        354         331          328           339         0.89
tai5-4      310        322         311          310           325         0.89
tai5-5      326        345         330          329           343         0.88
tai5-6      312        341         312          312           325         0.87
tai5-7      303        332         305          305           310         0.9
tai5-8      300        311         300          300           307         0.88
tai5-9      353        378         353          353           364         0.89
tai5-10     326        345         326          326           341         0.89
tai7-1      435        442         438          438           442         1.5
tai7-2      443        461         446          449           461         1.5
tai7-3      468        478         474          479           482         1.5
tai7-4      463        481         469          467           473         1.5
tai7-5      416        423         416          419           426         1.5
tai7-6      451        475         470          460           469         1.5
tai7-7      422        455         429          435           440         1.5
tai7-8      424        440         427          426           431         1.5
tai7-9      458        470         459          460           461         1.5
tai7-10     398        426         402          400           410         1.5
tai10-1     (637)      663         648          652           645         2.6
tai10-2     588        595         588          596           588         2.6
tai10-3     (598)      602         602          617           611         2.6
tai10-4     577        578         577          581           577         2.6
tai10-5     640        660         643          657           641         2.6
tai10-6     538        541         538          545           538         2.6
tai10-7     (616)      619         618          623           625         2.6
tai10-8     595        600         595          606           596         2.6
tai10-9     595        609         596          605           595         2.6
tai10-10    (596)      598         597          604           602         2.6
tai20-1     1155       1155        1155         1215          1155        9.84
tai20-2     (1241)     1243        1241         1332          1244        9.83
tai20-3     1257       1257        1257         1294          1257        9.83
tai20-4     1248       1248        1248         1310          1248        9.85
tai20-5     1256       1256        1256         1301          1256        9.82
tai20-6     (1204)     1204        1204         1252          1209        9.81
tai20-7     1294       1294        1294         1352          1294        9.84
tai20-8     (1169)     1169        1169         1269          1173        9.83
tai20-9     1289       1289        1289         1322          1289        9.83
tai20-10    1241       1241        1241         1284          1241        9.82
We compared our two non-exact methods to the other two heuristics. We can observe that on the smaller instances (tai5 and tai7) our methods are competitive with that of Taillard, whereas on the larger instances there is a tight competition between our two heuristics and that of Bräsel et al., while Taillard obtained significantly worse results. Concerning the relative performance of our method, we have found the optimal solution in 20 cases out of the 40 instances. The average and the maximum relative error between our upper bound and the optimum makespan, where known, were 1.005 and 1.036, respectively. Finally, notice that we have found the optimal makespan of three open problems, tai20-2, tai20-6 and tai20-8, and also improved upon the best upper bound for several other problems.
Table 2
Summary of results on AJSP instances

Instance   (m,n)     # or   GA      TABU    RND
a01–a03    (5,5)     1      1.024   1.000   1.028
a04–a06    (5,10)    1      1.004   1.001   1.025
a07–a09    (5,15)    1      1.025   1.002   1.079
a10–a12    (5,20)    1      1.024   1.002   1.084
a13–a15    (10,10)   1      1.003   1.006   1.050
a16–a18    (10,15)   1      1.029   1.001   1.116
a19–a21    (10,20)   1      1.052   1.000   1.129
a22–a24    (10,10)   2      1.056   1.037   1.127
a25–a27    (10,15)   2      1.108   1.048   1.187
a28–a30    (10,20)   2      1.106   1.017   1.183
a31–a33    (10,10)   3      1.147   1.080   1.234
a34–a36    (10,15)   3      1.137   1.054   1.244
a37–a39    (10,20)   3      1.138   1.030   1.220
8.2. Results on AJSP instances

8.2.1. Problem instances
Since we were not aware of any data set for AJSP, we tested our algorithms on randomly generated problem instances. We start with the description of the jobs. Each job was a sequence of 1, 2 or 3 or-subgraphs, and each or-subgraph had two or three branches, where a branch consisted of either one and-subgraph of five operations, or a sequence of two and-subgraphs, one comprising two and the other three operations. The processing times and the machine requirements of the operations were generated at random with the restriction that the five operations on the same branch of an or-subgraph always required five different machines. The processing times of the operations varied between 2 and 99. There were problem instances with 5 and 10 machines, respectively. The five-machine instances consisted of 5, 10, 15 or 20 jobs, and the jobs had one or-subgraph each. The 10-machine instances consisted of 10, 15 or 20 jobs, and all jobs in the same instance had the same number of or-subgraphs, which was 1, 2 or 3, depending on the data set. We summarise the characteristics of the instances in Table 2. Note that "# or" denotes the number of or-subgraphs in a job.
8.2.2. The algorithms tested
GA was run on all problem instances with the following parameters:

• population size = 30,
• number of generations = 6mn,
• a = 12, b = 18.

TabuAnd was invoked with maxit = 200, tabumin = 1, and tabumax = 10 in all cases. TABU was run on all problem instances with the following parameters:

• number of iterations = 4mn,
• tabu size = 0.

In order to put our results into perspective we also tested a third algorithm, called RND, which generates a number of selections at random and, for each selection, obtains an initial schedule by InsertSet, which is then improved by TabuAnd. Notice that there is no interaction between any two schedules generated. The output of RND is the best schedule generated so far. On each problem instance RND generated 6mn schedules.
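A minimal C++ sketch of this RND baseline, under the same placeholder assumptions as before (the types and the stub signatures of RandomSelection, InsertSet and TabuAnd are ours):

// Sketch of the RND baseline (illustration only; types and subroutines are stubs).
#include <limits>
#include <random>

struct Selection {};
struct Schedule  { int makespan = 0; };

Selection RandomSelection(std::mt19937&)              { return {}; }
Schedule  InsertSet(const Selection&)                 { return {}; }
Schedule  TabuAnd(const Selection&, const Schedule&)  { return {}; }

Schedule RunRND(int trials, std::mt19937& rng) {
    Schedule best;
    best.makespan = std::numeric_limits<int>::max();   // sentinel: no schedule yet
    for (int t = 0; t < trials; ++t) {                  // trials = 6mn in the experiments
        Selection x = RandomSelection(rng);             // independent trial, no interaction
        Schedule  s = TabuAnd(x, InsertSet(x));         // initial schedule, then improvement
        if (s.makespan < best.makespan) best = s;       // keep the best schedule generated
    }
    return best;
}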
Notice that the parametrisations of the algorithms were established on the basis of preliminary tests. Moreover, we obtained a lower bound for each problem instance by solving the integer linear program of Section 7. We have implemented the three algorithms in C++. The lower bound was computed by CPLEX. The computational experiments were executed on a personal computer with a Pentium II 400 MHz processor.

8.2.3. Evaluation of results
The detailed computational results on AJSP instances are included in Table 3. Since all of our algorithms contain non-deterministic elements, each algorithm was run on each problem instance five times. For each problem instance it is given:

• LB: the lower bound computed by CPLEX,
• for each of the algorithms GA, TABU and RND: UB(avg), the average makespan out of five runs; UB(best), the best makespan out of five runs; and CPU, the average computation time in CPU seconds.

A lower bound is proved optimal whenever there is a matching upper bound in the same line. To give an overview of the performance of the algorithms, Table 2 depicts the mean relative error between the best upper bound and the lower bound for each group of problem instances and for each of the three algorithms. From the computational experiments we have derived the following conclusions:

• TABU delivers superior results in a shorter time than GA,
• finding good selections at random is rather unlikely,
• if m is fixed and n increases, then the ratio between the upper bound and the lower bound decreases.
8.3. Comparison on MPM instances

Recall that MPM job-shop scheduling is a special case of our problem. We have also tested our algorithms on the MPM instances from [16] and compared our results to known upper bounds. That paper proposes three data sets, edata, rdata and vdata, containing 43 instances each. Operations in the edata instances have the fewest machine alternatives, and the number of machine alternatives increases in rdata and vdata. The characteristics of these instances are summarised in Table 4. Notice that |M_{i,j}|avg and |M_{i,j}|max denote the average and the maximum number of machines available for processing an operation, respectively.
On each instance of each data set we ran GA, TABU and RND and compared our results to those of Hurink et al. [16], Brucker and Neyer [6], and Mastrolilli and Gambardella [20]. We have included the computational results in Tables 5-7, respectively. Since all of our algorithms take some decisions at random, each algorithm was run five times on each problem instance. The parametrisations of the algorithms were as follows:

• GA: the number of generations and the size of the population were fixed at 6mn and 30, respectively, and a = 12, b = 18.
• TABU: the number of iterations was set to 4mn, tabumin = 1, tabumax = 10.
• RND: the number of trials was set to 6mn.

Moreover, TabuAnd was invoked with maxit = 200, tabumin = 1 and tabumax = 10 in all cases. For each instance it is given:

• UB_MG: the upper bound obtained by Mastrolilli and Gambardella [20],
• UB_HB: the better of the upper bounds from Hurink et al. [16] and Brucker and Neyer [6],
• for each of the algorithms GA, TABU and RND: UB(avg), the average makespan out of five runs; UB(best), the best makespan out of five runs; and CPU, the average computation time measured in CPU seconds on an SGI Origin with an R12000, 270 MHz processor.
Table 3
Computational results on AJSP instances

               GA                                TABU                              RND
i      LB      UB(avg)   UB(best)  CPU          UB(avg)   UB(best)  CPU          UB(avg)   UB(best)  CPU
a01    265     266.80    266       3.65         266.80    265       2.03         271.00    269       3.17
a02    313     313.00    313       3.74         313.00    313       2.07         313.00    313       3.25
a03    221     239.80    236       4.05         236.00    236       2.91         240.80    236       3.48
a04    454     465.00    458       17.21        457.20    454       11.63        473.00    464       16.92
a05    425     442.00    426       17.16        432.20    426       11.60        447.80    426       16.89
a06    407     414.40    407       16.76        411.40    407       10.91        443.40    428       16.53
a07    690     711.40    706       40.28        695.20    691       30.32        734.80    730       42.85
a08    640     673.00    660       40.91        653.60    641       31.78        716.00    692       43.40
a09    661     690.40    674       40.37        665.20    662       30.19        730.40    726       42.71
a10    931     952.40    943       77.74        937.20    942       65.09        1015.40   1008      86.05
a11    872     923.00    893       78.78        876.80    872       67.81        958.00    952       87.17
a12    923     964.40    956       79.49        926.00    923       70.13        1006.40   994       87.89
a13    280     280.00    280       27.49        280.00    280       17.12        303.00    280       29.76
a14    275     277.40    276       26.84        277.40    275       17.70        294.00    287       29.19
a15    295     309.80    297       28.31        304.60    300       19.06        336.80    326       30.80
a16    425     450.00    446       68.05        428.20    426       50.02        471.80    460       78.42
a17    378     397.00    392       69.17        381.00    378       51.90        429.40    422       78.89
a18    354     368.40    354       65.49        362.60    354       45.88        411.80    407       74.93
a19    451     470.80    466       125.14       455.60    451       97.13        520.00    514       152.75
a20    499     533.00    524       124.99       506.40    499       97.19        563.40    559       152.19
a21    488     532.00    523       124.51       498.20    488       97.08        568.40    550       154.40
a22    587     587.00    587       59.56        587.00    587       37.83        607.40    594       71.89
a23    535     560.40    553       59.79        547.80    541       44.38        613.00    613       71.78
a24    529     622.40    615       61.57        585.40    582       47.76        654.40    647       72.40
a25    715     800.00    792       149.84       756.80    754       135.22       841.80    822       194.17
a26    695     773.80    762       146.45       732.40    723       126.90       820.40    814       190.38
a27    706     801.80    790       147.12       750.00    741       132.17       882.80    875       190.39
a28    966     1075.00   1069      270.78       979.80    969       265.01       1149.40   1114      381.97
a29    917     1029.80   1012      276.90       943.80    938       280.59       1109.80   1063      388.60
a30    855     969.80    948       275.13       882.80    876       276.76       1060.40   1056      388.25
a31    702     859.80    850       102.54       810.40    801       85.65        927.00    909       132.84
a32    688     814.80    809       100.48       761.60    756       82.63        891.00    877       130.07
a33    847     902.00    894       100.72       851.80    847       80.01        978.00    959       129.56
a34    1033    1180.80   1156      243.94       1097.80   1093      248.37       1304.80   1293      349.66
a35    1024    1201.40   1181      244.83       1079.20   1072      255.30       1292.80   1278      348.57
a36    1057    1222.40   1202      242.01       1124.40   1118      257.83       1329.00   1303      344.98
a37    1339    1563.80   1550      458.91       1384.40   1376      330.73       1637.00   1609      709.27
a38    1427    1611.00   1590      460.20       1480.60   1472      329.72       1730.50   1717      715.78
a39    1296    1498.80   1481      454.03       1343.20   1337      322.60       1639.40   1628      703.43
According to Mastrolilli and Gambardella [20], the edata instances are more difficult to solve than the instances in the other two data sets. However, on these instances our algorithms are quite competitive with those of Mastrolilli and Gambardella [20].
Table 4
Characteristics of the edata, rdata and vdata MPM instances

                      edata                   rdata                   vdata
Instance   (m,n)      |M_ij|avg  |M_ij|max    |M_ij|avg  |M_ij|max    |M_ij|avg  |M_ij|max
mt06       (6,6)      1.15       2            2          3            m/2        4m/5
mt10       (10,10)    1.15       3            2          3            m/2        4m/5
mt20       (5,20)     1.15       2            2          3            m/2        4m/5
l01–l05    (5,10)     1.15       2            2          3            m/2        4m/5
l06–l10    (5,15)     1.15       2            2          3            m/2        4m/5
l11–l15    (5,20)     1.15       2            2          3            m/2        4m/5
l16–l20    (10,10)    1.15       3            2          3            m/2        4m/5
l21–l25    (10,15)    1.15       3            2          3            m/2        4m/5
l26–l30    (10,20)    1.15       3            2          3            m/2        4m/5
l31–l35    (10,30)    1.15       3            2          3            m/2        4m/5
l36–l40    (15,15)    1.15       3            2          3            m/2        4m/5
The results of Hurink et al. [16] and Brucker and Neyer [6] are never better than ours; indeed, they are significantly outperformed by GA and TABU, and even by RND in several cases. Concerning the computation time, the fastest method is that of Mastrolilli and Gambardella [20]: it solves all problems in this data set in less than 10 seconds on a PC with a Pentium 266 MHz processor. However, our methods are designed to solve a much more general problem than MPM job-shop scheduling, which explains the big difference in computation time. We have also computed a lower bound by integer programming for the edata instances. Our lower bound tended to be close to that of Jurisch [17], especially on rectangular instances, i.e. when n was much bigger than m. In one case we have even improved on the known lower bound: for l11 our lower bound is 1103, while the known lower bound is 1087. This proves that the known upper bound 1103 is indeed optimal. On the rdata and vdata instances the upper bound UB_MG is slightly better than UB_HB. The solution quality of TABU is usually between the two, except on some vdata instances on which it performs slightly worse than the former algorithms. However, GA and RND deliver definitely poorer solutions. We explain the lack of success of the latter two methods on rdata and vdata instances by their strong dependence on random choices.
9. Conclusions

In this paper we have studied an extension of job-shop scheduling where job routings are directed graphs with a special structure: besides operation nodes there are additional dummy nodes that identify and-subgraphs and or-subgraphs. And-subgraphs can represent partial orders of operations, while the branches of an or-subgraph correspond to alternative subroutes from which exactly one must be chosen. This problem is called AJSP. We have addressed AJSP by tabu search and by a genetic algorithm. It has turned out that tabu search is superior both in terms of solution quality and computation time. We have tested our methods on special cases of our problem, such as MPM job-shop scheduling and open-shop scheduling. We have obtained very good results, and for the open-shop we have found the optimal solution of three open problems and improved upon known upper bounds in some other cases.
Table 5
Computational results on MPM-edata instances (instances in columns)

               mt06     mt10     mt20
UB_MG          55       871      1088
UB_HB          55       917      1109
GA   UB(avg)   55.00    878.40   1089.00
GA   UB(best)  55       875      1088
GA   CPU       34.12    95.41    126.52
TABU UB(avg)   55.00    909.40   1089.80
TABU UB(best)  55       890      1088
TABU CPU       3.13     20.46    23.12
RND  UB(avg)   55.00    898.80   1092.80
RND  UB(best)  55       888      1088
RND  CPU       55.63    156.22   162.14

               l01      l02      l03      l04      l05
UB_MG          609      655      550      568      503
UB_HB          609      655      573      578      503
GA   UB(avg)   609.00   655.00   550.00   568.00   503.00
GA   UB(best)  609      655      550      568      503
GA   CPU       29.96    26.35    74.10    66.40    45.55
TABU UB(avg)   609.00   655.00   558.80   586.20   503.00
TABU UB(best)  609      655      554      568      503
TABU CPU       5.79     6.21     6.05     5.64     5.78
RND  UB(avg)   609.00   655.00   565.40   568.80   503.00
RND  UB(best)  609      655      563      568      503
RND  CPU       77.92    82.68    78.71    77.81    77.18

               l06      l07      l08      l09      l10
UB_MG          833      762      845      878      866
UB_HB          833      765      845      878      866
GA   UB(avg)   833.00   762.40   845.00   878.00   866.00
GA   UB(best)  833      762      845      878      866
GA   CPU       50.68    102.45   103.72   99.74    27.36
TABU UB(avg)   833.00   763.20   845.00   878.00   866.00
TABU UB(best)  833      762      845      878      866
TABU CPU       12.54    12.60    12.39    12.16    12.93
RND  UB(avg)   833.00   765.40   845.00   878.00   866.00
RND  UB(best)  833      765      845      878      866
RND  CPU       113.45   115.68   117.96   112.18   118.95

               l11      l12      l13      l14      l15
UB_MG          1103     960      1053     1123     1111
UB_HB          1106     960      1053     1123     1111
GA   UB(avg)   1103.00  960.00   1053.00  1123.00  1111.00
GA   UB(best)  1103     960      1053     1123     1111
GA   CPU       135.18   134.76   63.58    133.52   71.29
TABU UB(avg)   1103.20  960.00   1053.00  1123.00  1111.00
TABU UB(best)  1103     960      1053     1123     1111
TABU CPU       21.73    21.21    22.60    21.57    23.00
RND  UB(avg)   1106.40  962.00   1053.00  1126.00  1111.00
RND  UB(best)  1106     960      1053     1123     1111
RND  CPU       163.83   167.38   160.19   158.54   165.49

               l16      l17      l18      l19      l20
UB_MG          892      707      842      796      857
UB_HB          924      749      864      844      909
GA   UB(avg)   899.80   707.00   843.00   798.00   857.00
GA   UB(best)  892      707      843      796      857
GA   CPU       45.23    120.83   96.38    124.65   109.13
TABU UB(avg)   913.00   714.00   844.60   820.40   876.40
TABU UB(best)  899      707      843      805      857
TABU CPU       20.55    19.97    19.71    20.34    20.22
RND  UB(avg)   916.00   711.40   843.00   805.00   865.40
RND  UB(best)  915      707      843      805      857
RND  CPU       151.80   146.74   147.92   150.64   149.40

               l21      l22      l23      l24      l25
UB_MG          1017     882      950      909      941
UB_HB          1066     919      977      951      970
GA   UB(avg)   1024.80  885.80   950.80   915.40   952.40
GA   UB(best)  1023     883      950      909      947
GA   CPU       185.76   161.25   180.06   179.79   177.88
TABU UB(avg)   1044.60  892.60   972.20   929.40   957.40
TABU UB(best)  1028     887      962      917      951
TABU CPU       43.74    44.57    43.01    45.09    44.41
RND  UB(avg)   1064.40  907.80   968.80   946.00   966.00
RND  UB(best)  1064     897      960      937      964
RND  CPU       251.18   246.50   241.17   246.40   251.49

               l26      l27      l28      l29      l30
UB_MG          1125     1186     1149     1118     1204
UB_HB          1160     1327     1204     1210     1253
GA   UB(avg)   1132.80  1203.80  1151.40  1136.40  1228.00
GA   UB(best)  1126     1199     1151     1122     1216
GA   CPU       248.02   224.15   234.22   248.21   246.65
TABU UB(avg)   1135.40  1215.20  1162.60  1137.40  1272.80
TABU UB(best)  1132     1207     1157     1121     1219
TABU CPU       82.02    82.66    80.86    83.57    79.92
RND  UB(avg)   1176.00  1252.80  1208.00  1203.80  1271.00
RND  UB(best)  1167     1240     1202     1202     1266
RND  CPU       361.03   366.57   360.73   375.58   364.36

               l31      l32      l33      l34      l35
UB_MG          1539     1698     1547     1599     1736
UB_HB          1566     1722     1575     1627     1736
GA   UB(avg)   1550.80  1698.00  1547.00  1618.00  1736.00
GA   UB(best)  1541     1698     1547     1610     1736
GA   CPU       396.48   397.52   364.24   396.58   156.74
TABU UB(avg)   1541.80  1698.00  1547.00  1599.20  1736.00
TABU UB(best)  1541     1698     1547     1599     1736
TABU CPU       195.03   197.35   188.97   186.81   186.25
RND  UB(avg)   1597.40  1746.00  1603.80  1666.40  1760.80
RND  UB(best)  1582     1738     1594     1664     1751
RND  CPU       657.20   662.61   649.44   652.76   650.70

               l36      l37      l38      l39      l40
UB_MG          1162     1397     1144     1184     1150
UB_HB          1215     1453     1185     1226     1214
GA   UB(avg)   1172.00  1421.80  1164.80  1198.40  1170.40
GA   UB(best)  1164     1415     1156     1195     1162
GA   CPU       287.97   276.75   281.87   264.17   286.45
TABU UB(avg)   1189.80  1452.00  1179.00  1232.80  1194.00
TABU UB(best)  1166     1438     1159     1214     1168
TABU CPU       103.06   99.97    103.07   103.22   102.53
RND  UB(avg)   1222.00  1443.00  1186.00  1237.40  1212.40
RND  UB(best)  1217     1441     1180     1235     1211
RND  CPU       428.56   437.63   426.92   431.45   431.94
Table 6
Computational results on MPM-rdata instances (instances in columns)

               mt06     mt10     mt20
UB_MG          47       686      1022
UB_HB          47       701      1025
GA   UB(avg)   48.00    746.80   1042.80
GA   UB(best)  47       728      1035
GA   CPU       7.80     55.01    59.49
TABU UB(avg)   47.00    705.00   1023.80
TABU UB(best)  47       696      1023
TABU CPU       4.75     38.35    44.24
RND  UB(avg)   49.40    803.00   1074.40
RND  UB(best)  49       794      1063
RND  CPU       7.50     69.73    76.00

               la01     la02     la03     la04     la05
UB_MG          571      530      478      502      457
UB_HB          574      534      481      506      460
GA   UB(avg)   596.00   556.40   496.80   534.80   480.00
GA   UB(best)  589      550      489      530      475
GA   CPU       14.01    14.26    14.60    14.86    14.80
TABU UB(avg)   577.20   535.60   481.00   509.00   461.20
TABU UB(best)  574      534      479      507      458
TABU CPU       9.11     9.11     9.45     9.72     9.75
RND  UB(avg)   614.60   576.60   529.00   551.40   497.40
RND  UB(best)  596      569      493      536      492
RND  CPU       14.69    14.84    15.30    15.66    15.47

               la06     la07     la08     la09     la10
UB_MG          799      750      765      853      804
UB_HB          801      752      766      855      806
GA   UB(avg)   822.20   768.20   787.20   873.00   820.60
GA   UB(best)  819      764      782      869      813
GA   CPU       31.71    32.38    32.00    31.73    33.13
TABU UB(avg)   800.20   751.20   766.40   855.40   805.40
TABU UB(best)  799      750      766      855      805
TABU CPU       21.80    22.68    22.02    21.78    22.83
RND  UB(avg)   833.80   784.20   800.20   883.80   860.00
RND  UB(best)  825      769      788      879      836
RND  CPU       36.63    36.93    37.12    36.76    38.88

               la11     la12     la13     la14     la15
UB_MG          1071     936      1038     1070     1090
UB_HB          1072     937      1039     1071     1092
GA   UB(avg)   1087.40  948.40   1053.00  1088.80  1115.80
GA   UB(best)  1082     946      1043     1080     1109
GA   CPU       61.71    60.06    60.13    59.26    59.84
TABU UB(avg)   1071.80  936.00   1038.80  1070.80  1091.60
TABU UB(best)  1071     936      1038     1070     1091
TABU CPU       46.10    45.48    44.88    44.18    44.34
RND  UB(avg)   1112.20  962.40   1074.60  1111.80  1128.20
RND  UB(best)  1100     944      1060     1103     1119
RND  CPU       77.54    74.58    74.57    73.97    74.34

               la16     la17     la18     la19     la20
UB_MG          717      646      666      700      756
UB_HB          717      646      685      707      756
GA   UB(avg)   773.80   663.60   711.80   752.00   787.60
GA   UB(best)  762      654      705      743      767
GA   CPU       55.39    54.16    55.57    54.38    54.33
TABU UB(avg)   730.00   646.00   675.00   707.00   756.60
TABU UB(best)  717      646      669      703      756
TABU CPU       38.08    36.49    37.57    37.61    36.11
RND  UB(avg)   832.20   695.00   770.80   808.00   841.80
RND  UB(best)  826      690      759      803      823
RND  CPU       70.65    68.66    70.46    69.18    68.84

               la21     la22     la23     la24     la25
UB_MG          835      760      842      808      791
UB_HB          861      790      863      825      821
GA   UB(avg)   926.20   855.80   924.80   889.40   870.80
GA   UB(best)  913      845      912      882      865
GA   CPU       131.80   132.35   132.82   129.23   131.35
TABU UB(avg)   857.20   785.80   868.60   828.40   816.80
TABU UB(best)  852      777      859      824      806
TABU CPU       98.29    98.85    100.32   94.06    96.85
RND  UB(avg)   999.00   920.60   1006.00  963.20   952.60
RND  UB(best)  974      913      992      953      944
RND  CPU       192.79   192.86   193.08   187.54   190.36

               la26     la27     la28     la29     la30
UB_MG          1061     1091     1080     998      1078
UB_HB          1085     1107     1092     1007     1101
GA   UB(avg)   1122.80  1161.80  1141.00  1063.40  1158.80
GA   UB(best)  1117     1151     1125     1055     1145
GA   CPU       236.70   236.05   238.61   238.28   235.08
TABU UB(avg)   1072.00  1102.40  1090.20  1004.40  1092.20
TABU UB(best)  1069     1099     1087     1003     1088
TABU CPU       188.47   187.89   196.05   193.57   184.56
RND  UB(avg)   1195.80  1246.00  1237.00  1143.00  1259.00
RND  UB(best)  1175     1242     1233     1107     1237
RND  CPU       384.07   386.22   387.39   389.94   383.78

               la31     la32     la33     la34     la35
UB_MG          1521     1659     1499     1536     1550
UB_HB          1528     1665     1500     1538     1553
GA   UB(avg)   1558.20  1709.00  1544.20  1575.80  1592.00
GA   UB(best)  1550     1700     1536     1569     1582
GA   CPU       552.77   554.74   554.68   558.34   572.58
TABU UB(avg)   1523.00  1661.20  1500.20  1536.60  1551.00
TABU UB(best)  1522     1660     1499     1536     1551
TABU CPU       489.78   496.36   510.27   502.76   531.29
RND  UB(avg)   1663.20  1786.80  1643.00  1666.80  1665.80
RND  UB(best)  1644     1765     1622     1649     1655
RND  CPU       1073.77  1078.78  1080.14  1083.88  1084.47

               la36     la37     la38     la39     la40
UB_MG          1030     1077     962      1024     970
UB_HB          1052     1122     988      1041     1009
GA   UB(avg)   1121.20  1196.80  1070.40  1119.60  1075.60
GA   UB(best)  1082     1180     1061     1102     1054
GA   CPU       288.74   292.11   294.82   288.23   293.29
TABU UB(avg)   1052.20  1112.80  991.80   1047.40  1002.00
TABU UB(best)  1044     1087     976      1034     982
TABU CPU       212.22   217.89   224.03   214.72   224.23
RND  UB(avg)   1228.40  1286.20  1168.20  1209.00  1162.60
RND  UB(best)  1216     1269     1150     1202     1140
RND  CPU       505.30   505.73   511.75   503.14   507.72
Table 7
Computational results on MPM-vdata instances (instances in columns)

               mt06     mt10     mt20
UB_MG          47       655      1022
UB_HB          47       655      1023
GA   UB(avg)   47.00    705.80   1045.00
GA   UB(best)  47       694      1040
GA   CPU       8.76     80.28    73.89
TABU UB(avg)   47.00    655.80   1023.00
TABU UB(best)  47       655      1023
TABU CPU       5.72     70.43    60.92
RND  UB(avg)   48.40    762.00   1053.20
RND  UB(best)  48       737      1042
RND  CPU       8.47     101.38   91.22

               la01     la02     la03     la04     la05
UB_MG          570      529      477      502      457
UB_HB          572      531      480      504      464
GA   UB(avg)   602.60   560.80   512.20   527.80   488.00
GA   UB(best)  582      552      508      521      484
GA   CPU       16.68    16.36    16.56    15.97    15.16
TABU UB(avg)   574.60   532.40   481.80   503.80   462.40
TABU UB(best)  573      531      479      503      460
TABU CPU       11.78    11.28    11.38    10.86    10.22
RND  UB(avg)   617.80   576.00   522.00   537.60   498.80
RND  UB(best)  611      567      515      522      488
RND  CPU       17.41    17.21    17.33    16.60    15.92

               la06     la07     la08     la09     la10
UB_MG          799      749      765      853      804
UB_HB          801      751      766      854      805
GA   UB(avg)   823.60   777.00   790.60   878.00   823.40
GA   UB(best)  815      759      788      875      815
GA   CPU       37.72    37.30    39.38    38.37    38.84
TABU UB(avg)   800.40   751.80   766.20   854.60   805.00
TABU UB(best)  799      751      766      854      805
TABU CPU       27.84    28.06    30.16    28.77    29.69
RND  UB(avg)   839.40   788.60   803.80   885.60   837.00
RND  UB(best)  833      778      790      869      821
RND  CPU       43.53    42.78    45.74    44.11    44.61

               la11     la12     la13     la14     la15
UB_MG          1071     936      1038     1070     1089
UB_HB          1073     937      1039     1071     1091
GA   UB(avg)   1093.40  955.80   1064.00  1093.20  1111.00
GA   UB(best)  1087     949      1058     1088     1101
GA   CPU       72.41    70.38    68.90    70.08    72.10
TABU UB(avg)   1071.80  936.60   1038.60  1070.60  1090.00
TABU UB(best)  1071     936      1038     1070     1090
TABU CPU       59.08    58.33    55.78    56.02    59.93
RND  UB(avg)   1120.00  968.00   1071.20  1110.20  1126.60
RND  UB(best)  1102     953      1059     1097     1111
RND  CPU       90.44    86.77    85.74    87.64    89.14

               la16     la17     la18     la19     la20
UB_MG          717      646      663      617      756
UB_HB          717      646      663      617      756
GA   UB(avg)   751.00   651.80   710.20   729.20   766.20
GA   UB(best)  742      646      702      720      756
GA   CPU       82.11    82.67    82.76    82.73    83.64
TABU UB(avg)   722.80   650.60   663.00   656.20   760.00
TABU UB(best)  717      646      663      617      756
TABU CPU       67.74    61.13    72.85    68.81    72.83
RND  UB(avg)   808.20   706.00   755.00   797.40   819.60
RND  UB(best)  800      696      744      781      811
RND  CPU       103.50   104.38   103.85   104.27   104.82

               la21     la22     la23     la24     la25
UB_MG          806      739      815      777      756
UB_HB          820      743      826      787      770
GA   UB(avg)   912.80   844.80   924.00   889.60   870.40
GA   UB(best)  889      822      909      877      869
GA   CPU       198.91   193.01   192.01   200.60   200.23
TABU UB(avg)   825.80   754.80   834.80   800.20   785.60
TABU UB(best)  819      748      830      796      781
TABU CPU       203.85   200.85   198.42   207.10   196.55
RND  UB(avg)   994.20   909.00   1006.20  960.60   939.60
RND  UB(best)  989      879      989      934      914
RND  CPU       281.93   273.92   273.39   282.72   282.73

               la26     la27     la28     la29     la30
UB_MG          1054     1085     1070     994      1069
UB_HB          1058     1088     1072     995      1071
GA   UB(avg)   1126.00  1168.40  1143.60  1075.80  1150.40
GA   UB(best)  1110     1160     1133     1066     1132
GA   CPU       385.05   391.98   384.30   379.75   388.27
TABU UB(avg)   1057.80  1089.00  1073.60  999.80   1074.40
TABU UB(best)  1056     1087     1071     997      1072
TABU CPU       446.27   448.12   444.50   441.55   453.58
RND  UB(avg)   1212.20  1243.40  1236.00  1148.60  1251.20
RND  UB(best)  1198     1239     1210     1122     1234
RND  CPU       588.46   593.32   585.17   581.86   594.95

               la31     la32     la33     la34     la35
UB_MG          1520     1658     1497     1535     1549
UB_HB          1521     1658     1498     1536     1553
GA   UB(avg)   1579.40  1715.60  1557.20  1596.40  1597.40
GA   UB(best)  1569     1704     1549     1591     1592
GA   CPU       1634.15  1637.25  1639.11  1655.34  1675.29
TABU UB(avg)   1520.80  1658.00  1498.20  1535.20  1549.40
TABU UB(best)  1520     1658     1498     1535     1549
TABU CPU       1433.17  1324.13  1328.61  1376.55  1395.43
RND  UB(avg)   1653.20  1818.00  1632.40  1663.40  1703.60
RND  UB(best)  1646     1805     1610     1640     1686
RND  CPU       1685.97  1675.14  1666.27  1687.54  1699.26

               la36     la37     la38     la39     la40
UB_MG          948      986      943      922      955
UB_HB          948      986      943      922      955
GA   UB(avg)   1092.20  1166.00  1045.00  1082.40  1067.80
GA   UB(best)  1077     1154     1017     1069     1056
GA   CPU       608.46   608.62   601.46   602.96   600.32
TABU UB(avg)   970.40   1034.00  965.60   965.40   999.00
TABU UB(best)  948      993      958      930      955
TABU CPU       676.99   691.43   616.08   684.81   630.91
RND  UB(avg)   1198.00  1277.20  1156.60  1195.00  1172.80
RND  UB(best)  1174     1270     1130     1186     1155
RND  CPU       712.05   710.34   699.05   701.12   700.14
Acknowledgements

The author is grateful to the two anonymous referees for their constructive comments.

References

[1] J. Adams, E. Balas, D. Zawack, The shifting bottleneck procedure for job-shop scheduling, Management Science 34 (1988) 391–401.
[2] J. Blazewicz, K.H. Ecker, E. Pesch, G. Schmidt, J. Weglarz, Scheduling Computer and Manufacturing Processes, second ed., Springer-Verlag, Berlin, 2001.
[3] P. Brandimarte, Routing and scheduling in a flexible job-shop by tabu search, Annals of Operations Research 41 (1993) 157–183.
[4] H. Bräsel, T. Tautenhahn, F. Werner, Constructive heuristic algorithms for the open-shop problem, Computing 51 (1993) 95–110.
[5] P. Brucker, R. Schlie, Job-shop scheduling with multi-purpose machines, Computing 45 (1990) 369–375.
[6] P. Brucker, J. Neyer, Tabu search for the multi-mode job-shop problem, OR-Spektrum 20 (1998) 21–28.
[7] P. Brucker, J. Hurink, B. Jurisch, B. Wöstmann, A branch and bound algorithm for the open-shop problem, Discrete Applied Mathematics 76 (1997) 43–59.
[8] E.G. Coffman Jr. (Ed.), Computer and Job-Shop Scheduling Theory, John Wiley & Sons, 1976.
[9] T.H. Cormen, C.E. Leiserson, R.L. Rivest, Introduction to Algorithms, MIT Press, 1998.
[10] J. Detand, A Computer Aided Process Planning System Generating Non-linear Process Plans, PhD Thesis, KU Leuven, Belgium, 1993.
[11] S. Dauzère-Pérès, J. Paulli, An integrated approach for modeling and solving the general multiprocessor job-shop scheduling problem using tabu search, Annals of Operations Research 70 (1997) 281–306.
[12] S. Dauzère-Pérès, W. Roux, J.B. Lasserre, Multi-resource shop scheduling with resource flexibility, European Journal of Operational Research 107 (1998) 289–305.
[13] M. Dell'Amico, M. Trubian, Applying tabu search to the job-shop scheduling problem, Annals of Operations Research 41 (1993) 231–252.
[14] M.R. Garey, D.S. Johnson, Computers and Intractability, W.H. Freeman, 1979.
[15] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, MA, 1989.
[16] J. Hurink, B. Jurisch, M. Thole, Tabu search for the job shop scheduling problem with multi-purpose machines, OR-Spektrum 15 (1994) 205–215.
[17] B. Jurisch, Scheduling jobs in shops with multi-purpose machines, Dissertation, Fachbereich Mathematik/Informatik, Universität Osnabrück, 1992.
[18] P.J.M. Van Laarhoven, E.H.L. Aarts, J.K. Lenstra, Job-shop scheduling by simulated annealing, Operations Research 40 (1) (1992) 113–125.
[19] J.K. Lenstra, A.H.G. Rinnooy Kan, P. Brucker, Complexity of machine scheduling problems, Annals of Discrete Mathematics 4 (1977) 121–140.
[20] M. Mastrolilli, L.M. Gambardella, Effective neighbourhood for the flexible job-shop problem, Journal of Scheduling 3 (1) (2000) 3–20.
[21] T. Masuda, H. Ishii, T. Nishida, The mixed shop scheduling problem, Discrete Applied Mathematics 11 (2) (1985) 175–186.
[22] N.V. Shakhlevich, Yu.N. Sotskov, F. Werner, Complexity of mixed shop scheduling problems: A survey, European Journal of Operational Research 120 (2000) 343–351.
[23] E. Taillard, Benchmarks for basic scheduling problems, European Journal of Operational Research 64 (2) (1993) 278–285.
[24] F. Werner, A. Winkler, Insertion techniques for the heuristic solution of the job-shop problem, Discrete Applied Mathematics 58 (1995) 191–211.