A divide and merge heuristic for the multiprocessor scheduling problem with sequence dependent setup times

European Journal of Operational Research 133 (2001) 183–189

www.elsevier.com/locate/dsw

Theory and Methodology

Michel Gendreau a,b, Gilbert Laporte a,c,*, Eduardo Morais Guimarães a,b

a Centre de recherche sur les transports, Université de Montréal, C.P. 6128, succursale Centre-ville, Montréal, Canada H3C 3J7
b Département d'informatique et de recherche opérationnelle, Université de Montréal, C.P. 6128, succursale Centre-ville, Montréal, Canada H3C 3J7
c École des Hautes Études Commerciales, 3000 chemin de la Côte-Sainte-Catherine, Montréal, Canada H3T 2A7

Received 23 November 1999; accepted 20 July 2000

Abstract

This article proposes lower bounds, as well as a divide and merge heuristic, for the multiprocessor scheduling problem with sequence dependent setup times (MSPS). The heuristic is tested on randomly generated instances and compared with a previously published tabu search algorithm. Results show that the proposed heuristic is much faster than tabu search while providing solutions of similar quality. © 2001 Elsevier Science B.V. All rights reserved.

Keywords: Machine scheduling; Lower bounds; Heuristics

1. Introduction

The multiprocessor scheduling problem with sequence dependent setup times (MSPS) arises in manufacturing contexts where n non-preemptive jobs must be executed on m parallel identical machines and sequence dependent setup costs are present. The aim is the minimization of the overall processing time, or makespan. More precisely, denote by J = {1, ..., n} the set of jobs and by M = {1, ..., m} the set of machines; let p_i be the processing time of job i and c_ij the changeover time required if job j is the immediate successor of job i on a machine, and define t_ij = p_i + c_ij. The value of c_ii is zero for all i. Assume that all p_i and c_ij are integer and that m ≤ n − 1 in order to avoid trivial cases. The MSPS consists of simultaneously determining an assignment of jobs to machines and a cyclic sequence of jobs on each machine, i.e., a permutation of the jobs assigned to a machine that repeats itself once all jobs assigned to the machine have been processed. The objective is to minimize C_max = max_{k∈M} C_k, where C_k is the completion time of all jobs assigned to machine k, including the changeover time from the last job to the first job.

* Corresponding author. Tel.: +1-514-343-6143; fax: +1-514-343-7121. E-mail addresses: [email protected] (M. Gendreau), [email protected] (G. Laporte), [email protected] (E.M. Guimarães).

0377-2217/01/$ - see front matter © 2001 Elsevier Science B.V. All rights reserved. PII: S0377-2217(00)00197-1

According to the standard machine scheduling

classification, the MSPS is referred to as P|sds|C_max (Graham et al., 1979) and is NP-hard since it reduces to the traveling salesman problem (TSP) if m = 1. It is also equivalent to the m-TSP with minmax objective (França et al., 1995) if production on all machines starts and ends with a common dummy job 0 representing the initial and final states.

Applications of the MSPS are encountered in several contexts. One arises in the textile industry, where orders corresponding to a specific fabric type must be assigned to looms equipped with warp chains. Whenever a new fabric type is processed on a machine, the warp chain must be replaced, and the time this takes depends on the two fabrics involved in the changeover. Another case is batch chemical production, where the time needed to clean a reactor depends on which product was just processed and which product comes next (França et al., 1996). Yet another application of the MSPS is the design of vehicle routes through several locations where the aim is to reduce the length of the longest route travelled (Giust, 1992).

Few algorithms are known for the MSPS. Frederickson et al. (1978) worked in the context of the minmax m-TSP. They showed that if m job sequences are constructed simultaneously, using a least cost insertion or a nearest neighbour method, and the matrix (t_ij) satisfies the triangle inequality, then the heuristic has a worst-case performance ratio of 2 for the first criterion and of m + (m/2) log n for the second. Another heuristic suggested by Frederickson et al. first sequences all jobs on a single machine, using a TSP algorithm, and then splits this sequence into m more or less equal sequences. The worst-case performance ratio of this heuristic is 5/2 − 1/m if the Christofides (1976) heuristic is used to construct the TSP solution. No worst-case results are known for any of these heuristics if (t_ij) does not satisfy the triangle inequality.
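The tour-splitting idea can be illustrated with a short sketch. The code below is not the implementation of Frederickson et al.: a plain nearest-neighbour rule stands in for the Christofides heuristic (so the 5/2 − 1/m guarantee does not apply), the tour is cut into m segments of roughly equal job count rather than equal duration, and all names and the toy instance are ours.

```python
# Illustrative sketch of a tour-splitting heuristic in the spirit of
# Frederickson et al.: sequence all jobs in one tour, cut it into m
# contiguous segments, and evaluate the resulting makespan.

def nearest_neighbour_tour(t):
    """Greedy tour on cost matrix t, starting from job 0 (lowest index breaks ties)."""
    n = len(t)
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        nxt = min(sorted(unvisited), key=lambda j: t[tour[-1]][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def split_tour(tour, m):
    """Cut the tour into m contiguous segments of near-equal job count."""
    q, r = divmod(len(tour), m)
    segments, start = [], 0
    for k in range(m):
        size = q + (1 if k < r else 0)
        segments.append(tour[start:start + size])
        start += size
    return segments

def makespan(cycles, p, c):
    """C_max over all machines, counting the cyclic last-to-first changeover."""
    def cycle_time(cy):
        k = len(cy)
        return sum(p[i] for i in cy) + sum(c[cy[r]][cy[(r + 1) % k]] for r in range(k))
    return max(cycle_time(cy) for cy in cycles)

# Toy instance: 4 jobs, 2 machines, c[i][i] = 0 as in the problem definition.
p = [3, 5, 2, 4]
c = [[0, 1, 2, 1], [2, 0, 1, 3], [1, 2, 0, 2], [3, 1, 2, 0]]
t = [[p[i] + c[i][j] for j in range(4)] for i in range(4)]
segments = split_tour(nearest_neighbour_tour(t), 2)
print(segments, makespan(segments, p, c))  # [[0, 1], [2, 3]] 11
```

The makespan function above follows the paper's convention that each machine sequence is cyclic, so the changeover from the last job back to the first is counted.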
The MSPS can be solved to optimality by embedding an exact algorithm for the time constrained m-TSP within a dichotomic search scheme (França et al., 1995), but this is rather time consuming. The main interest of this approach is to provide solution values usable for benchmarking purposes. More recently, França et al. (1996) have

applied two versions of tabu search (fast and long) to the MSPS and have implemented the nearest neighbour heuristic (FHK) of Frederickson et al. They have shown that on medium size instances (20 ≤ n ≤ 60 and 2 ≤ m ≤ 5), FHK is very quick, but yields solutions lying on the average within 26% of the optimum. The fast tabu search implementation is slower than FHK by five orders of magnitude, but the solutions it produces are on the average within 4.68% of the optimum.

The purpose of this article is to provide easy-to-compute lower bounds for the MSPS, as well as a very efficient divide and merge heuristic for this problem. These are described in Sections 2 and 3, respectively. The quality of the proposed lower and upper bounds is assessed through a series of computational experiments presented in Section 4. The conclusion follows in Section 5.

2. Lower bounds

The first lower bound, called LB1, is easily computed by solving the assignment problem (AP) associated with the MSPS. For this, define binary variables x_ij equal to 1 if and only if job j is the immediate successor of job i in a cyclic sequence on a machine. The AP is then

(AP)  minimize  Σ_{i,j∈J} t_ij x_ij                      (1)

subject to

      Σ_{i∈J} x_ij = 1   (j ∈ J),                        (2)
      Σ_{j∈J} x_ij = 1   (i ∈ J),                        (3)
      x_ij = 0 or 1      (i, j ∈ J).                     (4)

Low order polynomial algorithms are available for the AP (see for example Dell'Amico and Martello, 1997). Our first MSPS lower bound is then LB1 = ⌈z/m⌉, where z is the optimal AP solution value. Since c_ii = 0 for all i, the AP solution is always x_ii = 1 (i = 1, ..., n) and therefore z = Σ_{i=1}^n p_i.
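As a small self-contained check of this observation (the instance and all names are ours, and a brute-force enumeration replaces a polynomial AP algorithm, so it is usable only at toy sizes):

```python
# Toy check of LB1: the optimal AP value equals sum(p) because c[i][i] = 0,
# so the identity permutation (every job its own successor) is optimal.
import math
from itertools import permutations

def assignment_value(t):
    """Optimal AP value: min over permutations sigma of sum_i t[i][sigma(i)]."""
    n = len(t)
    return min(sum(t[i][s[i]] for i in range(n)) for s in permutations(range(n)))

p = [3, 5, 2, 4]
c = [[0, 1, 2, 1], [2, 0, 1, 3], [1, 2, 0, 2], [3, 1, 2, 0]]
t = [[p[i] + c[i][j] for j in range(len(p))] for i in range(len(p))]
m = 2

z = assignment_value(t)   # equals sum(p) = 14, since every c[i][i] = 0
lb1 = math.ceil(z / m)
print(z, lb1)             # prints: 14 7
```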


A stronger bound, called LB2, is obtained by observing that since m < n, at most m − 1 machines can be assigned a single job, and therefore the constraint

      Σ_{i∈J} x_ii ≤ m − 1                               (5)

is valid. The second MSPS lower bound LB2 is equal to ⌈z′/m⌉, where z′ is the optimal value of the program (AP′) defined by (1)–(5). Since this problem is no longer polynomially solvable, a weaker lower bound LB3 is defined as ⌈z″/m⌉, where z″ is the optimal value of the linear relaxation of (AP′). Alternatively, one can incorporate (5) in a Lagrangian fashion in the objective function of (AP) to yield the modified objective

      maximize_{λ≥0}  min { Σ_{i,j∈J} t_ij x_ij + λ ( Σ_{i∈J} x_ii − m + 1 ) },   (1′)

where λ is a non-negative Lagrangian multiplier. A valid lower bound ẑ on the optimal value of (1′) subject to (2)–(4) can be computed by using a subgradient technique (see, e.g., Ahuja et al., 1993; Beasley, 1993). The fourth lower bound for the MSPS is then LB4 = ⌈ẑ/m⌉. It can readily be established that the relationship between these four lower bounds is

      LB1 ≤ LB4 ≤ LB3 ≤ LB2.
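A minimal sketch of this subgradient scheme follows. It is our own toy code, not the authors' implementation: dualizing (5) simply adds λ to every diagonal cost t_ii minus a constant λ(m − 1), the AP is solved by brute force, and a fixed step size replaces a proper step-size rule, so it is illustrative only.

```python
# Hedged sketch of LB4 via projected subgradient ascent on the multiplier lam.
import math
from itertools import permutations

def solve_ap(t):
    """Return an optimal AP permutation and its value (toy sizes only)."""
    n = len(t)
    best = min(permutations(range(n)),
               key=lambda s: sum(t[i][s[i]] for i in range(n)))
    return best, sum(t[i][best[i]] for i in range(n))

def lb4(p, c, m, iters=50, step=1.0):
    n = len(p)
    t = [[p[i] + c[i][j] for j in range(n)] for i in range(n)]
    lam, best = 0.0, -math.inf
    for _ in range(iters):
        mod = [[t[i][j] + (lam if i == j else 0.0) for j in range(n)]
               for i in range(n)]
        perm, val = solve_ap(mod)
        best = max(best, val - lam * (m - 1))       # Lagrangian bound at lam
        g = sum(1 for i in range(n) if perm[i] == i) - (m - 1)  # subgradient of (5)
        lam = max(0.0, lam + step * g)              # projected subgradient step
    return math.ceil(best / m)

p = [3, 5, 2, 4]
c = [[0, 1, 2, 1], [2, 0, 1, 3], [1, 2, 0, 2], [3, 1, 2, 0]]
print(lb4(p, c, 2))
```

On this toy instance the dual bound closes the gap between LB1 and LB2, consistent with the ordering LB1 ≤ LB4 ≤ LB2 stated above.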

3. A divide and merge heuristic

We have developed a number of construction and improvement heuristics for the MSPS. They all work on a family of cycles which are dynamically divided and merged. This is done by applying the following procedures to selected cycles.

3.1. Procedure DIVIDE

Given a cycle (i_1, ..., i_k, i_1) where k ≥ 4, determine the two indices r and s yielding

      M = min { t_{i_r i_{s+1}} + t_{i_s i_{r+1}} − t_{i_r i_{r+1}} − t_{i_s i_{s+1}} },

where 1 ≤ r < s ≤ k and i_{s+1} ≡ i_1 if s = k. Create the two cycles (i_1, ..., i_r, i_{s+1}, ..., i_k, i_1) and (i_{r+1}, ..., i_s, i_{r+1}).
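The procedure can be sketched in a few lines. This is our own 0-indexed illustration (with our tie-breaking and a hypothetical cost matrix), not the authors' code; positions r and s below play the roles of the paper's i_r and i_s.

```python
# Sketch of DIVIDE: split one cycle into two at the pair (r, s) of least
# extra cost M, where M is the cost change of swapping the two closing arcs.

def divide(cycle, t):
    k = len(cycle)
    best = None
    for r in range(k - 1):
        for s in range(r + 1, k):
            a, b = cycle[r], cycle[r + 1]          # arc (i_r, i_{r+1})
            u, v = cycle[s], cycle[(s + 1) % k]    # arc (i_s, i_{s+1}), wrapping at k
            delta = t[a][v] + t[u][b] - t[a][b] - t[u][v]
            if best is None or delta < best[0]:
                best = (delta, r, s)
    _, r, s = best
    return cycle[:r + 1] + cycle[s + 1:], cycle[r + 1:s + 1]

# Cheap arcs inside {0, 1} and {2, 3}; DIVIDE recovers the two natural cycles.
t = [[0, 1, 10, 10], [1, 0, 10, 10], [10, 10, 0, 1], [10, 10, 1, 0]]
print(divide([0, 1, 2, 3], t))  # ([0, 1], [2, 3])
```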

3.2. Procedure MERGE

Given two cycles (i_1, ..., i_ℓ) and (i_{ℓ+1}, ..., i_k), determine two indices r and s such that 1 ≤ r ≤ ℓ and ℓ + 1 ≤ s ≤ k, yielding

      M = min { t_{i_r i_{s+1}} + t_{i_s i_{r+1}} − t_{i_r i_{r+1}} − t_{i_s i_{s+1}} },

where i_{r+1} ≡ i_1 if r = ℓ and i_{s+1} ≡ i_{ℓ+1} if s = k. Combine the two cycles into a single cycle (i_1, ..., i_r, i_{s+1}, ..., i_k, i_{ℓ+1}, ..., i_s, i_{r+1}, ..., i_ℓ, i_1).

The computational complexity of each of the two procedures is O(n²). Given a set of cycles, each procedure can be applied in several ways to yield DIVIDE(b) (b ∈ {1, 2}) and MERGE(c) (c ∈ {1, 2, 3, 4}). In the following description of these variants, the length of a cycle is the number of jobs it contains:

DIVIDE(1): divide the longest cycle;
DIVIDE(2): tentatively divide each cycle and implement the division yielding the least cost M;
MERGE(1): merge the two longest cycles;
MERGE(2): merge the longest cycle with an arbitrarily selected cycle;
MERGE(3): merge the longest and the shortest cycles;
MERGE(4): tentatively merge each pair of cycles and implement the merge yielding the least cost M.

We can now describe a constructive heuristic for the MSPS. As for the asymmetric TSP (Karp, 1979), a natural algorithm would be to first solve the AP with cost matrix (t_ij) and then apply DIVIDE or MERGE until m cycles are obtained. The difficulty with this approach is that the AP solution always contains n cycles having exactly one job each, since c_ii = 0. To counter this, it is preferable to initially replace each t_ii with t′_ii = ∞, solve the AP, and then reset t_ii = p_i. Note that because of this transformation, a solution containing m cycles may not be optimal for the MSPS. The constructive algorithm works as follows.


3.3. Algorithm CONSTRUCT

Step 1: (AP solution). Solve the AP with the modified (t_ij) matrix and reset each t_ii to its initial value p_i in the solution. If the solution contains m cycles, then stop.
Step 2: (MERGE). If the solution contains less than m cycles, then go to Step 3. Otherwise, apply MERGE(1) to the solution until it contains m cycles and stop.
Step 3: (DIVIDE). Apply DIVIDE(1) to the solution until it contains m cycles and stop.
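The AP solution of Step 1 is a permutation, and its cycles are the machine sequences the algorithm then merges or divides. Reading the cycles off the successor permutation can be sketched as follows (our own names; succ[i] is the job that immediately follows job i):

```python
# Decompose the AP successor permutation into its cycles, each of which is
# one cyclic machine sequence for CONSTRUCT to work with.

def extract_cycles(succ):
    seen, cycles = set(), []
    for start in range(len(succ)):
        if start in seen:
            continue
        cyc, j = [], start
        while j not in seen:
            seen.add(j)
            cyc.append(j)
            j = succ[j]
        cycles.append(cyc)
    return cycles

print(extract_cycles([1, 0, 3, 4, 2]))  # [[0, 1], [2, 3, 4]]
```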

3.4. Algorithm POSTOPT(a, b, c)

Several postoptimization algorithms can be applied to a feasible solution by combining procedures DIVIDE and MERGE in different ways. The first, called POSTOPT(a, b, c), works with three parameters. The first of these takes the value 1 or 2. If a = 1, then DIVIDE(b) is applied, followed by MERGE(c). If a = 2, then MERGE(c) is first applied, followed by DIVIDE(b). Since b ∈ {1, 2} and c ∈ {1, 2, 3, 4}, this yields 16 different algorithms. Using POSTOPT(a, b, c), we have also developed the following postoptimization algorithms.

3.4.1. Algorithm POSTOPT1

Successively apply the 16 versions of POSTOPT(a, b, c) to a feasible solution S. Whenever a better solution S′ is identified, set S := S′ and repeat, starting from the first version.

An alternative version of this algorithm was also developed and tested. It consists of exhaustively applying the 16 versions of POSTOPT(a, b, c) to S. If this has yielded a better solution than S, the procedure is reapplied to the best of these solutions. This proved to be less powerful than POSTOPT1.

A second postoptimization algorithm, called POSTOPT2, was developed and turned out to be quite effective. Instead of applying the 16 versions of POSTOPT(a, b, c), it selects one randomly. It then merges all cycles into one or divides them into n cycles. Finally, it regains feasibility by applying

CONSTRUCT. The description of this algorithm is as follows.

3.4.2. Algorithm POSTOPT2

Step 1: (POSTOPT(a, b, c)). Randomly select a and b in {1, 2} and c in {1, 2, 3, 4}. Apply POSTOPT(a, b, c) to a solution S.
Step 2: (DIVIDE or MERGE). Randomly select d in {1, 2}. If d = 1, repeatedly apply DIVIDE(1) to S until n cycles are obtained and go to Step 3. If d = 2, repeatedly apply MERGE(1) to S until only one cycle is obtained.
Step 3: (CONSTRUCT). Apply CONSTRUCT to the set of cycles obtained at the end of Step 2.

Our overall divide and merge heuristic for the MSPS can now be described.

3.5. Algorithm DIVIDE_AND_MERGE

Step 1: (CONSTRUCT). Generate a feasible solution S by means of CONSTRUCT.
Step 2: (POSTOPT1). Apply POSTOPT1 to S and insert in a list L all improving solutions S′ identified in the process.
Step 3: (POSTOPT2). Apply POSTOPT2 h times to each solution of L and identify the best solution, where h is a user-controlled parameter.

4. Computational results

The lower bounding procedures and heuristics just described were coded in C and run on a Sun Ultra 60 workstation. Instances involving m machines and n jobs were obtained by generating the c_ij (i ≠ j) and the p_i coefficients according to discrete uniform distributions on [1, 10] and [1, 30], respectively. Values of m and n were selected in {2, ..., 25} and {4, ..., 100}, respectively. In the first series of tests, we compared on a series of test problems the lower bound values and computing times, rounded to the nearest integer. All linear and integer linear programs were solved using CPLEX (1995). Comparative results over 29 instances are presented in Table 1. These results

Table 1
Lower bound comparisons

              LB1            LB2            LB3            LB4
  n    m   Value  Sec     Value  Sec     Value  Sec     Value  Sec
  4    2     41     0       46     0       45     0       44     0
  6    2     32     0       37     0       37     0       35     0
  8    2     70     0       77     0       77     0       75     0
  8    4     33     0       35     0       35     0       35     0
 10    2     76     0       83     0       83     0       79     0
 10    3     49     0       52     0       52     0       51     0
 10    4     46     0       49     0       49     0       48     0
 10    5     28     0       30     0       30     0       30     0
 20    2    144     0      156     0      156     0      155     0
 20    4     77     0       83     0       83     0       82     1
 20    5     60     0       63     0       63     0       63     0
 20   10     34     0       35     0       35     0       35     0
 40    4    148     0      157     0      157     0      157     1
 40    5    134     0      142     0      142     0      141     2
 40    8     70     0       74     0       74     0       74     2
 40   10     63     0       66     0       66     0       66     1
 60    5    179     0      190     0      190     0      190     3
 60    8    136     0      146     1      146     0      145     3
 60   10     90     0       95     1       95     0       95     2
 60   16     65     0       68     0       68     0       68     5
 80    5    205     0      220     1      220     1      220     4
 80    8    163     0      172     1      172     1      172     5
 80   10    111     0      118     1      118     1      118     4
 80   16     78     0       82     3       82     1       82     7
100    5    278     0      298     2      298     2      297     6
100   10    152     0      161     2      161     2      161     7
100   15    104     0      110     2      110     2      110     6
100   20     86     0       90     7       90     2       90     8
100   25     57     0       60     4       60     2       60     8

confirm the relative superiority of LB2, and also the high quality of LB3. For the selected values of m and n, computing times are relatively low for all four lower bounds, but for larger instances LB3 may prove an interesting alternative to LB2 since it is almost as good, yet faster to compute. The lower bound LB1 provided by the AP solution is relatively weak, and the bound LB4 obtained by Lagrangian relaxation is of uneven quality and relatively expensive to compute. In the following tests, we will use LB2 to help assess the quality of the heuristics.

We then ran and compared two heuristics for the MSPS. The first, called TABU, is the fast tabu search heuristic developed by França et al. (1996). The second is the DIVIDE_AND_MERGE heuristic described in Section 3,

run with h = 5. We solved 10 instances for each of several combinations of m and n. We present in Table 2 the average and maximum values of the ratio z/z_L over the 10 instances, where z is the heuristic solution value and z_L is the value of the lower bound LB2, as well as the average running times in seconds, rounded to the nearest integer. These results indicate that, with very few exceptions, TABU and DIVIDE_AND_MERGE yield solutions of similar quality, but the latter heuristic is much faster to run. Its running time is insignificant as long as n ≤ 100. The average and maximum z/z_L ratios do not seem to depend on n, but tend to grow with the ratio m/n. Another observation is that for the DIVIDE_AND_MERGE heuristic, the maximum z/z_L ratio tends to be closer to the average


Table 2
Upper bound comparisons

                    TABU                       DIVIDE_AND_MERGE
  n    m   AVG z/z_L  MAX z/z_L  Sec    AVG z/z_L  MAX z/z_L  Sec
 20    2      1.03       1.04      1       1.02       1.02      0
 20    3      1.05       1.06      1       1.04       1.04      0
 20    4      1.06       1.08      1       1.04       1.04      0
 20    5      1.13       1.14      0       1.09       1.09      0
 20   10      1.18       1.18      0       1.18       1.18      0
 40    5      1.06       1.06      3       1.05       1.05      0
 40    8      1.08       1.09      1       1.08       1.08      0
 40   10      1.10       1.11      1       1.08       1.08      0
 40   15      1.08       1.20      1       1.16       1.16      0
 40   20      1.26       1.28      0       1.25       1.25      0
 60    5      1.04       1.05      9       1.05       1.05      0
 60   10      1.07       1.08      4       1.06       1.06      0
 60   15      1.14       1.16      3       1.12       1.12      0
 60   20      1.17       1.20      2       1.14       1.14      0
 60   30      1.24       1.27      1       1.20       1.20      0
 80    8      1.05       1.05      9       1.05       1.05      0
 80   15      1.08       1.10      5       1.07       1.07      0
 80   25      1.17       1.18      3       1.16       1.16      0
 80   30      1.19       1.20      2       1.15       1.15      0
100    5      1.03       1.04     33       1.29       1.29      0
100   10      1.05       1.05     13       1.07       1.07      0
100   15      1.07       1.08      9       1.07       1.07      0
100   30      1.18       1.20      4       1.14       1.14      0
200    5      1.02       1.03    736       1.17       1.25      2
200   20      1.05       1.05     30       1.12       1.12      1
200   25      1.06       1.07     24       1.06       1.06      2
200   30      1.09       1.09     20       1.07       1.07      2
200   40      1.11       1.11     16       1.11       1.11      2
300   15      1.03       1.03    123       1.03       1.03      7
300   25      1.05       1.05     65       1.05       1.05      6
300   30      1.06       1.06     54       1.06       1.06      9
300   35      1.06       1.06     53       1.18       1.18      6
300   40      1.06       1.06     40       1.06       1.06      6

value than in the case of the TABU heuristic, thus indicating better stability. Since the optimal MSPS value z* is unknown for all these test problems, there is no way to tell from Table 2 whether a large average or maximum z/z_L ratio is explained by the size of z*/z_L or by that of z/z* (note that z/z_L is the product of these two factors). However, this question can be answered indirectly: on medium size instances, TABU yields solutions within 4.68% of the optimum on the average (França et al., 1996), and the present study has shown that on medium and large size instances, DIVIDE_AND_MERGE is almost always just as good as TABU, and sometimes better.

5. Conclusion

We have developed lower bounds and heuristics for the MSPS, a hard combinatorial problem with applications in a number of production scheduling settings. The difficulty of the MSPS comes mostly from the nature of its minmax objective, which is often harder to deal with than a minsum objective (one can compare, for example, the minmax m-TSP (França et al., 1995) with the minsum m-TSP (Laporte and Nobert, 1980)). The DIVIDE_AND_MERGE heuristic we have proposed is both easy to implement and short in running time. On randomly generated test instances, it compares favourably with a previously published tabu search algorithm and consistently produces high quality solutions.

Acknowledgements

This work was partially supported by the Canadian Natural Sciences and Engineering Research Council under grants OGP0038816 and OGP0039682, and by the Canadian International Development Agency. This support is gratefully acknowledged. Thanks are also due to three anonymous referees for their valuable comments.

References

Ahuja, R.K., Magnanti, T.L., Orlin, J.B., 1993. Network Flows: Theory, Algorithms, and Applications. Prentice-Hall, Englewood Cliffs, NJ.
Beasley, J.E., 1993. Subgradient optimization. In: Reeves, C.R. (Ed.), Modern Heuristic Techniques for Combinatorial Optimization. Blackwell, Oxford.
Christofides, N., 1976. Worst-case analysis of a new heuristic for the traveling salesman problem. Report 388, Graduate School of Industrial Administration, Carnegie Mellon University, Pittsburgh, PA.
CPLEX Optimization Inc., 1995. Using the CPLEX Callable Library and CPLEX Mixed Integer Library. Incline Village, Nevada.
Dell'Amico, M., Martello, S., 1997. Linear assignment. In: Dell'Amico, M., Maffioli, F., Martello, S. (Eds.), Annotated Bibliographies in Combinatorial Optimization. Wiley, Chichester.
França, P.M., Gendreau, M., Laporte, G., Müller, F.M., 1995. The m-traveling salesman problem with minmax objective. Transportation Science 29, 267–275.
França, P.M., Gendreau, M., Laporte, G., Müller, F.M., 1996. A tabu search heuristic for the multiprocessor scheduling problem with sequence dependent setup times. International Journal of Production Economics 43, 79–89.
Frederickson, G.N., Hecht, M.S., Kim, C.E., 1978. Approximation algorithms for some routing problems. SIAM Journal on Computing 7, 178–193.
Giust, E., 1992. Optimisation de tournées de véhicules. Applications à la distribution de gaz. M.Sc. Dissertation, Facultés Universitaires Notre-Dame de la Paix, Namur, Belgium.
Graham, R.L., Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.H.G., 1979. Optimization and approximation in deterministic sequencing and scheduling: A survey. Annals of Discrete Mathematics 5, 287–326.
Karp, R.M., 1979. A patching algorithm for the nonsymmetric traveling-salesman problem. SIAM Journal on Computing 8, 561–573.
Laporte, G., Nobert, Y., 1980. A cutting planes algorithm for the m-salesmen problem. Journal of the Operational Research Society 31, 1017–1023.