Optimal and heuristic solution methods for a multiprocessor machine scheduling problem




Computers & Operations Research 36 (2009) 2822 -- 2828



Fayez F. Boctor, Jacques Renaud, Angel Ruiz, Simon Tremblay

Interuniversity Research Center on Enterprise Networks, Logistics and Transportation (CIRRELT), Canada
Faculté des Sciences de l'administration, Université Laval, Québec, Canada G1K 7P4

Available online 6 January 2009
Keywords: Scheduling; Mathematical formulation; Decomposition approach; Heuristic

Abstract

This paper deals with the scheduling of operations on a multiprocessor machine in the context of shoe manufacturing. Multiprocessor machines are composed of several parallel processors. Unlike parallel machines, the entire machine needs to be stopped whenever a single processor needs a setup. Our industrial partner, one of the major winter-shoe manufacturers in Canada, offers a relatively large variety of models. For each of these models, all the sizes proposed by this manufacturer must be produced in various quantities. Our objective is to schedule the production of the required sizes on the machine's different processors in order to minimize the global makespan, which includes both the production time and the setup time. We first present a mathematical formulation of the problem. Then, we introduce a decomposition procedure based on the mathematical model, and we demonstrate that this procedure is very efficient for small- and medium-size instances. We also propose two construction heuristics and two improvement heuristics for larger problems, and we report the results of our extensive computational experiments to demonstrate the quality of the proposed heuristics in terms of reducing production time and computational effort. © 2009 Elsevier Ltd. All rights reserved.

1. Introduction

Over the last decade, scheduling problems have been studied intensively in the literature (see, for example, [1]). Many applications in the fields of production [2], telecommunications [3] and parallel computing [4] have been addressed. In several cases, theoretical results were reported, and efficient solution procedures, either exact or heuristic, were proposed. However, despite the theoretical challenges involved [5], industrial applications remain a great source of inspiration in scheduling research. In fact, the study of the multiprocessor machine scheduling problem (MPMSP) presented in this article was inspired by an industrial application brought to our attention by one of our industrial partners. To the best of our knowledge, this problem has never been addressed before. As stated above, the MPMSP presented in this article was inspired by an application of our industrial partner, a major shoe manufacturer in Canada. However, as we discovered, MPMSPs arise in many other manufacturing contexts involving plastic molding or rubber injection processes.


Our industrial partner manufactures winter shoes and boots. Our research focuses on the manufacturing of the shoe outsole and shell, which involves an injection molding process on a multiprocessor machine. A multiprocessor machine consists of P (typically four, five or eight) identical processors working in parallel. However, unlike the well-known parallel machine scheduling problem (PMSP), the processors in a multiprocessor machine are not totally independent, meaning that (1) all the processors share the same raw material feeders and (2) whenever a single processor requires a setup, the entire machine must be stopped. Setups cannot be done simultaneously, most notably because the machine is often operated by a single employee, but also because, for security purposes, the machine is configured so that only one processor at a time can be accessed to perform a setup. Thus, whenever two or more setups are required, they must be done one by one. These specific features mean that production on a multiprocessor machine must be planned at two levels (i.e., for color and size), as described in the following paragraphs.

Color: Our partner offers a large variety of shoe models that have the same kind of rubber outsole, but depending on the specific model, the composition of the rubber may differ. Each of the J available rubber types (or recipes) can be thought of as J different rubber colors. Since all the processors inject the same rubber, a single outsole color is produced on all the processors.

[Fig. 1. Two solutions for a 2-processor problem. Upper Gantt chart (optimal solution): makespan color A = 120, color B = 130, color C = 40, overall makespan = 290. Lower Gantt chart: makespan color A = 120, color B = 120, color C = 70, overall makespan = 310.]

Thus, the production managers group the total shoe demand for all the available shoe models and divide it into production blocks based on the outsole color j, j ∈ {1, ..., J}. These blocks are then used to decide the color production sequence. Each time the rubber color changes, the injectors need to be cleaned and the rubber tank refilled, incurring a setup time t1. This "color setup" time is constant and does not depend on the sequence of colors to be processed. In this article, we consider a restricted version of the MPMSP, called hereafter the MPMSP with fixed sequence (MPMSP-FS), where the color sequence is fixed in advance. In our case, the color sequence is predetermined by the company.

Size: In addition to its color, each outsole must be manufactured in all the commercial sizes proposed (i.e., sizes 7–14 for the USA and Canada and/or sizes 39–48 for Europe). The sizes are obtained by injecting rubber into a mold installed in one of the machine processors. Each size has its own specific mold, which can be used to produce any of the colors offered by the company. Given that the cost of such molds is very high, only one copy of each mold is available, which means that, in practice, the molds have to be exchanged often since the number of sizes exceeds the number of available processors. This "mold setup" time, t2, is constant and does not vary with the molds that are exchanged. Therefore, inside each production block based on a given outsole color, the sizes to be produced need to be assigned and sequenced among the available processors. The processing time tij for the requested number of units of a given size i, i ∈ {1, ..., I}, of color j is determined by dividing the required quantity qij by the corresponding production rate. The overall processing time needed to produce the quantity qij remains tij, even if the processor used is stopped one or more times.

In the scheduling process presented in this paper, color setups are not considered explicitly since they must be performed a fixed number of times, J, because the color sequence is given. The MPMSP-FS therefore consists of determining the sequence of sizes produced by each processor, for a given sequence of colors, in order to minimize the overall makespan. The production of color j is considered finished when all the units of all the requested sizes are finished. For this reason, the makespan for color j, denoted Tj, is equal to the longest makespan among the processors involved in the production of this color, including both production times and setup times. As the sum of color setup times is fixed (the color sequence is given), our objective is to minimize $T = \sum_{j=1}^{J} T_j$.
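To make these definitions concrete, the following minimal sketch (our illustration, not code from the paper) evaluates the overall makespan T of a given schedule. It encodes the rule that every mold exchange stops the whole machine, so each exchange adds the setup time to the makespan of the color in which it occurs. The data and the initial molds (size 1 on P1, size 3 on P2) correspond to the two-processor example of Fig. 1 discussed below; the initial molds are an assumption inferred from that figure.

```python
# Minimal sketch (ours, not the authors' code): evaluating the overall makespan T
# of a given schedule. Every mold exchange stops the whole machine, so each
# exchange adds the setup time to the makespan of the color in which it occurs.
def color_makespan(prev_molds, assignment, proc_times, t_setup):
    """assignment: processor -> ordered list of sizes produced for one color.
    Returns (makespan of the color, molds in place afterwards)."""
    exchanges = 0
    longest_production = 0
    new_molds = dict(prev_molds)
    for proc, seq in assignment.items():
        production = sum(proc_times.get(s, 0) for s in seq)  # zero-demand sizes cost nothing
        longest_production = max(longest_production, production)
        for k, s in enumerate(seq):
            current = prev_molds[proc] if k == 0 else seq[k - 1]
            if s != current:
                exchanges += 1                               # this exchange stops every processor
        if seq:
            new_molds[proc] = seq[-1]
    return longest_production + t_setup * exchanges, new_molds

def overall_makespan(initial_molds, schedule, times, t_setup):
    molds, total = dict(initial_molds), 0
    for color, assignment in schedule:
        span, molds = color_makespan(molds, assignment, times[color], t_setup)
        total += span
    return total

# Data of the Fig. 1 example below; initial molds are assumed from the figure.
times = {"A": {1: 60, 3: 90, 4: 20}, "B": {2: 10, 4: 90}, "C": {2: 40, 3: 30}}
upper = [("A", {"P1": [1, 4], "P2": [3]}),
         ("B", {"P1": [4, 2], "P2": []}),    # P2 kept idle, preserving mold 3 for color C
         ("C", {"P1": [2], "P2": [3]})]
lower = [("A", {"P1": [1, 4], "P2": [3]}),
         ("B", {"P1": [4], "P2": [2]}),      # size 2 produced on P2 instead
         ("C", {"P1": [3], "P2": [2]})]
print(overall_makespan({"P1": 1, "P2": 3}, upper, times, 30))  # 290
print(overall_makespan({"P1": 1, "P2": 3}, lower, times, 30))  # 310
```

The two printed values reproduce the overall makespans of 290 and 310 discussed for Fig. 1 below.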

The MPMSP-FS is not a trivial problem. In order to illustrate some of its particularities, Fig. 1 presents two solutions for a problem involving three colors (A, B and C) that need to be produced in up to four sizes (1–4) on a two-processor (P1 and P2) machine. The colors are scheduled to be produced in the sequence A–B–C, and the processing times in minutes for the requested sizes of each color are: t1A = 60, t3A = 90, t4A = 20, t2B = 10, t4B = 90, t2C = 40 and t3C = 30. A mold exchange or setup time of t2 = 30 minutes is assumed. The upper part of Fig. 1 shows the optimal solution, with a makespan of 290 minutes. Each production block corresponds to a specific color and its processing time is shown. When producing color A, sizes 1 and 4 are done on the same processor P1, thus requiring a mold exchange before starting the production of size 4. Processor P2 stops and waits while P1 is being set up. When producing color B, the optimal solution requires that processor P2 remain idle in order to keep its mold, which will be used to produce size 3 within color C. This makes it possible to avoid one setup. Producing size 2 on processor P2 instead of keeping this processor idle would have reduced the makespan of B to 120 rather than 130, as shown in the lower part of Fig. 1. However, this would lead to an additional setup when producing color C, increasing its makespan from 40 to 70 and the overall makespan from 290 to 310.

The contribution of this article is twofold. First, we formulate the MPMSP-FS as a mixed integer linear model and present a decomposition solution procedure that uses this model. Second, since the proposed model involves a very large number of integer variables, we propose two construction heuristics, the sequential heuristic and the look-ahead heuristic (LAH), and two improvement heuristics, the 2-exchange and the 3-exchange, that allow us to solve this problem within a short computational time. Extensive computational experiments were conducted to evaluate the performance of our model and the proposed solution methods.

The remainder of this article is organized as follows. Section 2 presents a short literature review. Section 3 proposes our mathematical formulation of the MPMSP-FS. Section 4 describes a solution approach based on a decomposition procedure, and Section 5 presents our construction and improvement heuristics. Section 6 reports the numerical results of the experiments conducted to evaluate the performance of the proposed heuristics, and Section 7 presents our conclusions.

2. Literature review

To the best of our knowledge, no previous research dealing with this specific problem has ever been published. However, the MPMSP-FS is to some extent related to the classical PMSP, in which several independent jobs have to be processed by a number of identical machines. Each job has to be carried out on one of the machines during a fixed processing time without preemption. Minimizing the makespan for the PMSP has been shown to be NP-hard for two or more machines if preemption is not allowed (see [6,7]). The MPMSP-FS



with at least two processors is also NP-hard, as it reduces to a PMSP if there is only one color and if mold setups are nil (t2 = 0). Surveys of the different contributions to the PMSP have been published by Cheng and Sin [8] and by Mokotoff [9]. The best exact algorithms for the PMSP are the column generation methods of Chen and Powell [10] and of Van Den Akker et al. [11], the cutting plane algorithm of Mokotoff [12], which is able to solve instances with up to 100 machines and 1000 jobs, and the branch-and-price algorithm of Dell'Amico et al. [13]. Most known heuristics are list-scheduling methods [14], such as the longest processing time (LPT) rule, where the job with the longest processing time is assigned to the first available machine. Coffman et al. [15] proposed the classical multifit heuristic, which is an iterative method based on the LPT rule. More efficient iterative local search heuristics have been proposed by Frangioni et al. [16] and Tang and Luo [17]. Nonetheless, stopping all the processors to perform any setup, having only one mold for each size and performing the setups one by one make the MPMSP-FS quite different from the PMSP. Readers interested in research involving different setup considerations are referred to Allahverdi et al. [18], who reviewed the literature from the mid-1960s to 1999. Allahverdi et al. [19] provide an extensive review of the scheduling literature on models with setup times or costs from 1999 onward, covering more than 300 papers. Finally, Allahverdi and Soroush [20] present a classification of various applications involving setup times or costs and emphasize the importance and benefits of explicitly considering setups in scheduling research.
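For reference, the LPT list-scheduling rule mentioned above can be sketched in a few lines. This is a generic illustration for the classical PMSP (without setups), not an implementation taken from any of the cited works.

```python
import heapq

def lpt_schedule(processing_times, n_machines):
    """Longest processing time (LPT) list scheduling for the classical PMSP:
    jobs are sorted by decreasing processing time and each job is assigned
    to the machine that becomes available first."""
    loads = [(0.0, m) for m in range(n_machines)]   # (current load, machine id)
    heapq.heapify(loads)
    assignment = {m: [] for m in range(n_machines)}
    for job, p in sorted(enumerate(processing_times), key=lambda jp: -jp[1]):
        load, m = heapq.heappop(loads)              # earliest available machine
        assignment[m].append(job)
        heapq.heappush(loads, (load + p, m))
    makespan = max(load for load, _ in loads)
    return assignment, makespan

# Example: 7 jobs on 3 machines.
print(lpt_schedule([5, 7, 3, 9, 2, 4, 6], 3))
```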

3. Mathematical formulation

This section presents a mixed integer model for the MPMSP-FS. First, let us introduce the notation used.

Indices:
p: processor index, p = 1, ..., P;
j: color index, j = 1, ..., J;
i: size index, i = 1, ..., I.

Parameters:
mp: mold initially in place on processor p;
tij: processing time (not including setup time) for the requested quantity of size i of color j;
Sj: set of requested sizes of color j (those having tij > 0);
t: setup time associated with a mold exchange.

Binary decision variables:
xijp: equals 1 if size i of color j is assigned to processor p, and 0 otherwise;
yijp: equals 1 if i is the first size of color j to be manufactured on processor p, and 0 otherwise;
zijp: equals 1 if i is the last size of color j to be manufactured on processor p, and 0 otherwise;
wijp: equals 1 if i is both the last size of color j−1 and the first size of color j to be manufactured on processor p, and 0 otherwise;
ujp: equals 1 if more than one size of color j is assigned to processor p, and 0 otherwise;
vjp: an auxiliary variable used to compute the value of the corresponding ujp.

Continuous, positive variables:
Tjp: time needed to process (makespan of) all the sizes of color j assigned to processor p;
Tj: time needed to process (makespan of) all the sizes of color j, i.e., Tj = Max_p(Tjp).

The model is:

Minimize $\sum_{j=1}^{J} T_j$   (1)

Subject to:

$T_j \ge T_{jp}$   for all j, p;   (2)

$T_{1p} = \sum_{i \in S_1} t_{i1} x_{i1p} + t \sum_{i=1}^{I} \sum_{q=1}^{P} x_{i1q} - t \sum_{q=1}^{P} y_{m_q,1,q}$   for all p;   (3)

$T_{jp} = \sum_{i \in S_j} t_{ij} x_{ijp} + t \sum_{i=1}^{I} \sum_{q=1}^{P} x_{ijq} - t \sum_{i=1}^{I} \sum_{q=1}^{P} w_{ijq}$   for all p, j = 2, ..., J;   (4)

$\sum_{p=1}^{P} x_{ijp} = 1$   for all $i \in S_j$, j;   (5)

$y_{ijp} \le x_{ijp}$   for all i, j, p;   (6)

$z_{ijp} \le x_{ijp}$   for all i, j, p;   (7)

$\sum_{i=1}^{I} y_{ijp} = 1$   for all j, p;   (8)

$\sum_{i=1}^{I} z_{ijp} = 1$   for all j, p;   (9)

$w_{ijp} \le z_{i,j-1,p}$   for all i, p, j = 2, ..., J;   (10)

$w_{ijp} \le y_{ijp}$   for all i, p, j = 2, ..., J;   (11)

$\sum_{i=1}^{I} x_{ijp} \ge 1$   for all j, p;   (12)

$\sum_{i=1}^{I} x_{ijp} - v_{jp} = 1$   for all j, p;   (13)

$v_{jp} \le I\, u_{jp}$   for all j, p;   (14)

$y_{ijp} + z_{ijp} \le 2 - u_{jp}$   for all i, j, p;   (15)

$x_{ijp}, y_{ijp}, z_{ijp}, w_{ijp} \in \{0, 1\}$   for all i, j, p;   (16)

$u_{jp} \in \{0, 1\}$   for all j, p.   (17)
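The sketch below is our illustration (the authors solved the model with Cplex, see Section 6) of how this formulation can be written with the open-source PuLP modeler, instantiated on the two-processor example of Fig. 1; the constraints are explained in detail right after it. The processing-time data and the initial molds mp = {P1: 1, P2: 3} are assumptions taken from, or inferred from, that example. If the reconstruction of constraints (3) and (4) above is faithful, the optimal objective value should match the makespan of 290 reported for Fig. 1.

```python
# Illustrative PuLP sketch of formulation (1)-(17); data and initial molds come
# from the Fig. 1 example (initial molds are an assumption inferred from the
# figure). This is not the authors' implementation (they used Cplex).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

procs, colors, all_sizes = [1, 2], [1, 2, 3], [1, 2, 3, 4]    # p, j, i
t = 30                                                         # mold-exchange setup time
tij = {(1, 1): 60, (3, 1): 90, (4, 1): 20,                     # color 1 = A
       (2, 2): 10, (4, 2): 90,                                 # color 2 = B
       (2, 3): 40, (3, 3): 30}                                 # color 3 = C
S = {j: [i for i in all_sizes if (i, j) in tij] for j in colors}
m = {1: 1, 2: 3}                                               # initial mold on each processor (assumed)

ijp = [(i, j, p) for i in all_sizes for j in colors for p in procs]
jp = [(j, p) for j in colors for p in procs]
x = LpVariable.dicts("x", ijp, cat=LpBinary)
y = LpVariable.dicts("y", ijp, cat=LpBinary)
z = LpVariable.dicts("z", ijp, cat=LpBinary)
w = LpVariable.dicts("w", ijp, cat=LpBinary)
u = LpVariable.dicts("u", jp, cat=LpBinary)
v = LpVariable.dicts("v", jp, lowBound=0)
Tjp = LpVariable.dicts("Tjp", jp, lowBound=0)
Tj = LpVariable.dicts("Tj", colors, lowBound=0)

prob = LpProblem("MPMSP_FS", LpMinimize)
prob += lpSum(Tj[j] for j in colors)                                           # (1)
for j in colors:
    for i in S[j]:
        prob += lpSum(x[i, j, p] for p in procs) == 1                          # (5)
    for p in procs:
        prob += Tj[j] >= Tjp[j, p]                                             # (2)
        production = lpSum(tij[i, j] * x[i, j, p] for i in S[j])
        setups = t * lpSum(x[i, j, q] for i in all_sizes for q in procs)
        if j == 1:                                                             # (3)
            saved = t * lpSum(y[m[q], 1, q] for q in procs)
        else:                                                                  # (4)
            saved = t * lpSum(w[i, j, q] for i in all_sizes for q in procs)
        prob += Tjp[j, p] == production + setups - saved
        prob += lpSum(y[i, j, p] for i in all_sizes) == 1                      # (8)
        prob += lpSum(z[i, j, p] for i in all_sizes) == 1                      # (9)
        prob += lpSum(x[i, j, p] for i in all_sizes) >= 1                      # (12)
        prob += lpSum(x[i, j, p] for i in all_sizes) - v[j, p] == 1            # (13)
        prob += v[j, p] <= len(all_sizes) * u[j, p]                            # (14)
        for i in all_sizes:
            prob += y[i, j, p] <= x[i, j, p]                                   # (6)
            prob += z[i, j, p] <= x[i, j, p]                                   # (7)
            prob += y[i, j, p] + z[i, j, p] <= 2 - u[j, p]                     # (15)
            if j >= 2:
                prob += w[i, j, p] <= z[i, j - 1, p]                           # (10)
                prob += w[i, j, p] <= y[i, j, p]                               # (11)

prob.solve()
print("overall makespan:", sum(Tj[j].value() for j in colors))  # the paper reports 290 for this example
```

The double sums over q in constraints (3) and (4) count every mold exchange on the machine, which is what makes each processor wait while another one is being set up.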

The objective function (1) is to minimize the overall makespan, which is expressed as the sum of all the colors' makespans. Constraint (2) states that Tj, the makespan of color j, must be larger than or equal to Tjp, the makespan of each processor p used to produce j. Constraint (3) determines the value of T1p, the makespan of processor p when producing the first color. T1p is the sum of the processing times of all the sizes of the first color assigned to processor p, plus a setup time t for each size of the first color assigned to any processor (every mold exchange stops the whole machine), minus a setup time t for each processor that was already set up to produce its first assigned size. Constraint (4) similarly defines Tjp as the sum of the processing times of all the sizes of color j assigned to processor p, plus a setup time t for each size of color j assigned to any processor, minus a setup time t for each processor that begins color j with the same size that was the last one it produced in the previous color. Constraint (5) ensures that every size of every color is assigned to one and only one processor. Constraint (6) prevents size i from being assigned to the first position on processor p if this size is not assigned to this processor. Constraint (7) is similar to the previous


one, but with respect to the assignment to the last position on processor p. Constraint (8) ensures that only one size is assigned to the first position on processor p. Constraint (9) does the same for the last position. Constraints (10) and (11) assign the value 1 to wijp if and only if size i is the last of color j−1 and the first of color j produced on processor p. Constraint (12) forces each processor to produce at least one size, though this size can be a zero-quantity one if it is better to keep the processor idle, thus maintaining its setting unchanged until the following color. Constraints (13) and (14) set ujp to 0 if at most one size of color j is assigned to processor p, and to 1 otherwise. Constraint (15) states that if processor p produces more than one size, the first size must be different from the last one.

This mathematical model is clearly a large mixed-integer linear model, and the number of variables increases very fast as the number of processors (P) and the number of colors (J) increase. As will be shown in Section 6, the MPMSP-FS can be solved optimally only for small instances. The next section shows how this model can be used within a decomposition procedure to produce high-quality solutions for the MPMSP-FS.

4. A decomposition procedure

Since solving the model presented above for all J colors could be prohibitively time consuming, an approximate solution can be obtained with the following decomposition procedure. First, the model is solved for the first β colors (β ≤ J), and the solution (the schedule) obtained for the first α colors (α ≤ β) is retained. The processor setup after color α becomes the initial configuration of the system for the next sub-problem, which covers the next β colors. This procedure is repeated until all the colors have been scheduled. At the end, at most ⌈J/α⌉ sub-problems will have been solved, where ⌈x⌉ stands for the smallest integer greater than or equal to x. The detailed steps of this decomposition procedure are given below (a compact skeleton is also sketched after this list).

1. Initialization: Let M = {m1, ..., mP} be the initial configuration of the machine (set of molds initially in place), and set n = 1, where n is the index of the sub-problem to be solved.
2. Iteration: Use the model given in Section 3 to solve sub-problem n for the colors numbered from [(n−1)α+1] to Min[(n−1)α+β, J]. Note that all sub-problems involve β colors except possibly the last one, which may involve fewer.
3. Stopping rule: If Min[(n−1)α+β, J] = J, stop. Otherwise, retain the schedule for colors [(n−1)α+1] through [(n−1)α+α], set M to be the set of molds in place after color [(n−1)α+α], set n = n+1, and return to step 2.

In the computational section, the effect of the α and β values on this procedure's performance, in terms of solution quality and computing time, will be examined in detail. Unfortunately, the computation time required by this decomposition procedure can be relatively high. To mitigate this problem, we developed several rapid construction and improvement heuristics for solving the MPMSP-FS. These heuristics are presented in the next section.
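The skeleton below is an illustration only; `solve_colors` is a hypothetical placeholder standing for an exact solution of the Section 3 model over a window of colors, assumed to return the per-color schedules and the molds in place after each color of the window.

```python
def decompose(colors, initial_molds, alpha, beta, solve_colors):
    """alpha/beta decomposition: solve a window of `beta` colors exactly,
    commit the first `alpha` colors, then roll the window forward."""
    assert 1 <= alpha <= beta
    J = len(colors)
    molds, schedule, n = initial_molds, [], 1
    while True:
        first = (n - 1) * alpha            # 0-based index of the window's first color
        last = min(first + beta, J)        # window covers colors first .. last-1
        window, molds_after = solve_colors(colors[first:last], molds)
        if last == J:                      # window reaches the last color: keep it all
            schedule.extend(window)
            return schedule
        schedule.extend(window[:alpha])    # commit only the first alpha colors
        molds = molds_after[alpha - 1]     # molds in place after the last committed color
        n += 1
```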


5. Heuristics

This section proposes two construction heuristics, the sequential heuristic and the look-ahead heuristic, and two improvement heuristics, the 2-exchange and the 3-exchange, for solving the MPMSP-FS.

5.1. The sequential heuristic

The sequential heuristic constructs a solution by sequentially solving J problems, each corresponding to a color. When solving for each color, the heuristic performs four steps. Step 1 allocates the sizes matching the molds already installed on the processors, in order to save as much setup time as possible. Step 2 identifies the sizes to be produced in both the current color and the subsequent color and puts these sizes aside for a later assignment. Step 3 assigns the remaining sizes of the current color to the processors in order to minimize the makespan of the current color. Finally, Step 4 assigns the sizes that were put aside in Step 2. A detailed description is presented below.

Step 0. Initialization: Let M0 = {m1, m2, ..., mP} be the initial configuration of the machine (set of molds initially in place) and set j = 1. Let Sj be the set of sizes to be produced in color j.

Step 1. Size matching: Identify in Sj the subset of sizes matching the molds currently in place, Mj−1, and assign these sizes to the corresponding processors. Let Rj be the set of sizes remaining to be produced.

Step 2. Size postponement: From the remaining sizes to be produced, Rj, identify those that must also be produced in color j+1 and postpone their assignment until Step 4 (we call these sizes postponed sizes). If the number of these sizes exceeds P, the number of available processors, retain only the P sizes with the shortest production times. Let Rj = Rj \ {postponed sizes}.

Step 3. Size assignment: Assign each size in Rj to the available processors as follows. Find the size i* having the longest processing time. Assign this size to either the processor px with the earliest finishing time or the processor py with the second earliest finishing time, according to the following decision rule:
• If the mold currently on px does not match any of the sizes to be produced in the subsequent color, then assign i* to px.
• If the mold on px matches one of the sizes of the subsequent color, but assigning i* to py would increase the current makespan of color j by more than the time required to perform one mold setup, assign i* to px. Otherwise, assign i* to py.

Step 4. Postponed size assignment: Assign the postponed sizes to the available processors using the rule described in Step 3.

Step 5. Iteration: If j = J, stop: a complete solution has been obtained. Otherwise, let Mj be the set of molds in place on the processors, set j = j+1, and go to Step 1.

Although the "LPT first" rule performs fairly well, our computational results clearly show that the enhanced size assignment rule of Step 3 is much more efficient.

5.2. The look-ahead heuristic

The look-ahead heuristic (LAH) is an extension of the sequential heuristic, with the difference that the decisions made in Steps 2 and 3 are more astute. When solving for a color j, some size assignment decisions are validated by solving the sub-problem SPj, which takes into account the unassigned sizes of color j and the subsequent colors j+1 to J. The initial processor configuration for this sub-problem takes into consideration not only the molds in place at the end of color j−1, but also the mold exchanges that have already been performed when processing color j. The sequential heuristic is then applied to SPj in order to compute and compare the solutions that would be obtained depending on whether a particular size is selected or not. The new versions of Steps 2 and 3 are presented below.

Step 2. Size postponement (look-ahead version): From the remaining sizes to be produced, Rj, identify those that will also be produced in color j+1 and postpone their assignment temporarily.
Let Qj = Rj ∩ Sj+1 be the set containing the postponed sizes (sizes that will be assigned later in Step 4). If the number of postponed sizes does not exceed the



number of processors (|Qj| ≤ P), then go to Step 3. Otherwise, since there are more candidate sizes in Qj than processors, we must decide which sizes should remain in Qj to be assigned in Step 4. Take each size i ∈ Qj and decide whether or not it remains in Qj by comparing m1 and m2, the makespans obtained by solving SPj with the sequential heuristic for the cases i ∈ Qj and i ∉ Qj, respectively. If m1 < m2, size i remains in Qj; otherwise, remove size i from Qj. When all the sizes in Qj have been examined, check whether the number of sizes in Qj exceeds P and, if so, retain only the P sizes having the shortest production times. Let Rj = Rj \ Qj.

Step 3. Size assignment (look-ahead version): The remaining sizes Rj are assigned to the P available processors after considering all the possible size/processor assignments. Assign each size i temporarily to each processor p and use the sequential heuristic to solve the sub-problem SPj. Implement the combination of size i* and processor p* that leads to the lowest global makespan, and remove i* from Rj. Repeat this step until Rj is empty.
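The following sketch illustrates one possible reading of the Step 3 decision rule of the sequential heuristic (Section 5.1), on which the look-ahead version is built. The data structures (`finish`, `mold`, `times_j`, `next_sizes`) are hypothetical, and the finishing times are updated with each processor's own setups only, which is a simplification of the machine-wide stops described in the introduction.

```python
# Sketch (not the authors' code) of the Step 3 assignment rule of Section 5.1.
def assign_remaining(remaining, times_j, next_sizes, finish, mold, t_setup):
    """remaining: list of sizes of color j still to assign; times_j[i]: processing time;
    next_sizes: sizes requested in color j+1; finish[p]: current finishing time of
    processor p; mold[p]: mold currently on processor p (at least two processors)."""
    while remaining:
        i_star = max(remaining, key=lambda i: times_j[i])   # longest processing time first
        px, py = sorted(finish, key=finish.get)[:2]         # earliest and second earliest
        makespan_now = max(finish.values())
        extra = t_setup if mold[py] != i_star else 0
        if mold[px] not in next_sizes:
            chosen = px                                      # px's mold is not needed next color
        elif finish[py] + extra + times_j[i_star] > makespan_now + t_setup:
            chosen = px                                      # sparing px would cost more than one setup
        else:
            chosen = py                                      # preserve px's mold for color j+1
        finish[chosen] += (t_setup if mold[chosen] != i_star else 0) + times_j[i_star]
        mold[chosen] = i_star
        remaining.remove(i_star)
    return finish, mold
```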

Table 1
Results for the small instances (3 processors, 8 sizes and 8 colors).

Heuristics             Average deviation (%)   Maximum deviation (%)   Number of optimum results   Average time (seconds)
DP(1/1)                5.53                     10.11                    1                          <1
DP(1/2)                1.03                      4.23                   14                           2
DP(2/2)                2.50                      7.69                    7                           1
DP(1/3)                0.30                      2.14                   23                           6
DP(2/3)                0.35                      2.14                   22                           3
DP(3/3)                1.30                      4.29                   12                           2
DP(1/4)                0.22                      2.80                   26                          17
DP(2/4)                0.35                      3.37                   23                          10
DP(3/4)                0.39                      3.37                   22                           8
DP(4/4)                0.70                      3.37                   17                          10
Sequential heuristic   2.88                     10.26                    5                          <1
LAH                    0.86                      5.00                   15                          <1
LAH+2-exchange         0.63                      3.73                   16                          <1
LAH+3-exchange         0.42                      2.16                   20                          <1

5.3. The improvement heuristics

We also developed two improvement heuristics designed to improve an initial solution by modifying the assignment of the sizes to the processors. Since the requested sizes can only be rearranged within a same color, the improvement heuristic, either the 2-exchange or the 3-exchange, is applied one color at a time, from color 1 to color J. When the order of certain sizes is changed within the processing of color j, the mold setups between colors j−1 and j, the makespan of color j (including mold setups within the color) and the mold setups between colors j and j+1 are updated. The heuristic is repeated until no improvement can be made for a given color, and then it is applied to the next color until there are no colors left. The whole process is repeated as long as at least one improvement is obtained for any color.

The exchange heuristic starts with a feasible solution and proceeds as follows. Starting with the first size in the production sequence of the first processor, try to exchange this size with all the other sizes of color j. If there is no improvement, repeat with the next size in the sequence. Once all the sizes assigned to processor 1 have been processed, repeat with the sizes of the next processor. Whenever a size exchange leads to an improvement, it is implemented and the heuristic restarts with the first processor and the first size. The heuristic also introduces a dummy size requiring a null production time, so that it is possible to leave a processor idle if this helps to reduce the makespan. Clearly, not all potential exchanges improve the solution makespan. For example, exchanging two sizes that are assigned neither to the first nor to the last position on the same processor modifies neither the color's makespan nor the total solution makespan.

The first heuristic, the 2-exchange, considers all pair exchanges within a given color and computes the resulting processing time, including setup times and production times. The 3-exchange procedure is an extension of the 2-exchange procedure that also considers all triplet combinations. If n is the number of sizes to be produced for a given color, the complexity of one iteration of the 2-exchange for that color is O(n²). However, as we do not know in advance how many iterations of the 2-exchange will be needed, the total complexity of the procedure for one color is unknown. For the 3-exchange, the complexity of one iteration for one color is O(n³).
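A minimal sketch of the 2-exchange move for a single color is given below (our illustration; `evaluate` is a hypothetical function returning the cost of an assignment, i.e., the color's makespan plus the mold setups it induces before and after the color). One pass over all pairs is O(n²), as noted above.

```python
import copy

def two_exchange(assignment, evaluate):
    """assignment: dict processor -> ordered list of sizes for one color (a dummy
    size with zero processing time may be used to keep a processor idle).
    Applies the first improving pair exchange found, restarts, and stops when
    no exchange improves the cost returned by `evaluate`."""
    best_cost = evaluate(assignment)
    improved = True
    while improved:
        improved = False
        positions = [(p, k) for p, seq in assignment.items() for k in range(len(seq))]
        for a in range(len(positions)):
            for b in range(a + 1, len(positions)):
                (p1, k1), (p2, k2) = positions[a], positions[b]
                candidate = copy.deepcopy(assignment)
                candidate[p1][k1], candidate[p2][k2] = candidate[p2][k2], candidate[p1][k1]
                cost = evaluate(candidate)
                if cost < best_cost:                 # first improvement: accept and restart
                    assignment, best_cost, improved = candidate, cost, True
                    break
            if improved:
                break
    return assignment, best_cost
```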

6. Computational results

This section presents our extensive computational experiments designed to evaluate the efficiency and robustness of the proposed solution approaches. The section is divided into two main parts: the first one introduces the test-problem generator and the second part presents the results obtained.

6.1. Test-problem generator

Three sets of 30 instances were generated. The first set was composed of small instances involving 3 processors (P = 3), 8 sizes (I = 8) and 8 colors (J = 8). The number of requested sizes for each color was drawn from a discrete uniform distribution between 2 and 5. The two other sets involve, respectively, 5 and 8 processors and 13 and 20 sizes; both involve 8 colors. These parameters are frequently observed in practice. For the 5-processor instances, the number of requested sizes for each color was drawn from a uniform distribution between 2 and 8, while the number of sizes for the 8-processor instances was drawn from a uniform distribution between 4 and 16. For all the sets, the demand for each size was sampled from an empirical distribution inferred from the historical data provided by our industrial partner. The initial molds were determined randomly, all the molds having the same probability of being selected. The problem generator and the test instances are available from the authors upon request.

6.2. Numerical results

All methods were coded in Delphi and all computations were conducted on a personal computer equipped with a Pentium III processor running at 1266 MHz with 1750 MB of RAM. The standard branch-and-bound algorithm of Cplex 9.0 was used to solve the mathematical model. Cplex 9.0 was used to solve the 30 small instances to optimality, but it was unable to solve the larger instances. Computing times for these small instances range from a minute up to 63 minutes, with an average of 9.9 minutes per instance.

Table 1 presents the average and the maximum percentage deviation above the optimum obtained by the proposed heuristics and procedures for the 30 small instances. It also reports the number of times that each heuristic reached an optimal solution and the average computation time in seconds. Results are first presented for 10 combinations of the α/β parameters of the decomposition procedure, referred to as DP(α/β), where each sub-problem was solved optimally. The last four lines of Table 1 report the results for the sequential heuristic, the LAH, and the LAH followed by either the 2-exchange or the 3-exchange procedure.

Table 1 shows that the α/β decomposition procedure produced very good results. Depending on the parameters selected, the average deviation above the optimum ranges from 0.22% to 5.53%. For a fixed value of α, the average percentage deviation decreases as β increases. For a fixed value of β, the average percentage deviation decreases as α decreases. These results mean that better results can be obtained by looking forward several periods, which is consistent with the idea of employing a rolling horizon.



Table 2
Results for the medium-size instances (5 processors, 13 sizes and 8 colors).

Heuristics             Average deviation (%)   Maximum deviation (%)   Number of best results   Average time (seconds)
DP(1/1)                10.27                    19.44                    0                          2
DP(1/2)                 1.66                     6.47                    9                         33
DP(2/2)                 4.69                    10.79                    1                         17
DP(1/3)                 0.12                     1.56                   26                       1054
DP(2/3)                 0.81                     4.69                   21                        582
DP(3/3)                 2.76                     8.18                    3                        625
Sequential heuristic    5.56                    13.51                    0                         <1
LAH                     1.78                     7.29                    5                         <1
LAH+2-exchange          1.35                     6.47                    9                         <1
LAH+3-exchange          1.23                     6.47                   10                         <1

Table 3
Results for the larger instances (8 processors, 20 sizes and 8 colors).

Heuristics             Average deviation (%)   Maximum deviation (%)   Number of best results   Average time (seconds)
DP(1/1)                13.29                    19.81                    0                         18
DP(1/2)                 0.37                     1.70                   20                       4179
DP(2/2)                 3.62                     6.76                    1                       3449
Sequential heuristic    6.42                    14.35                    0                         <1
LAH                     2.37                     5.80                    5                         <1
LAH+2-exchange          1.64                     5.43                   10                         <1
LAH+3-exchange          1.49                     5.43                   11                         <1

The best results were obtained by DP(1/4), which found the optimal solution 26 times out of 30 in an average of 17 seconds of computational time. The proposed sequential heuristic produced an average deviation of 2.88% within a fraction of a second. The LAH produced much more interesting results, with an average deviation of 0.86% above the optimal solution. Followed by the 3-exchange procedure, the LAH produced an average deviation of only 0.42% in a fraction of a second.

Table 2 summarizes the average results for the 30 medium-sized instances. Since we were not able to obtain any optimal solutions for these instances, we calculated the deviations in relation to the best solution produced by all the proposed heuristics. Table 2 reports only the results of the decomposition procedure for β ≤ 3, because Cplex was not able to find optimal solutions to any of the sub-problems involving four colors (β = 4) within the sub-problem computing time limit of 3600 seconds. Table 2 confirms the behavior of the decomposition procedure with respect to the α/β parameters. The best results were obtained by DP(1/3), which produced an average deviation of 0.12%, reached the best solution for 26 of the 30 instances, and obtained the smallest maximum percentage deviation (1.56%). The drawback of this decomposition procedure is that its computational time increases rapidly with the number of colors involved in the sub-problems. For these medium-sized instances, the sequential heuristic did not perform very well, producing an average deviation of 5.56%. However, the LAH offers an interesting compromise. In a fraction of a second, the LAH, followed by the 3-exchange procedure, obtained an average deviation of 1.23%. Only DP(1/3) and DP(2/3) produce better results, at the cost of an average computational time of 4953 and 7112 seconds, respectively. DP(3/3) is clearly dominated by LAH, LAH+2-exchange and LAH+3-exchange, given that, with a time limit of 3600 seconds, some sub-problems cannot always be solved optimally.

For the larger 8-processor instances, given the time limit of 3600 seconds for each sub-problem, Cplex was unable to produce optimal solutions to any of the sub-problems of the decomposition procedure with β ≥ 3. In other words, for the larger problems, the decomposition procedure was unable to solve any instance with 8 processors, 20 sizes and 3 colors. For this reason, Table 3 presents only the results obtained for β = 1 and 2. The deviations shown in this table were computed in relation to the best solutions found. Here again, better results were obtained with the higher values of β. The DP(1/2) decomposition procedure produced an average deviation of 0.37% and found the best solution for 20 out of 30 instances, with an average computational time of 4179 seconds. The second best result, an average deviation of 1.49%, was obtained within a second by the LAH plus the 3-exchange improvement heuristic. (Detailed solutions for all instances are available upon request from the authors.) According to these results, the choice of the best solving strategy is closely linked to the size of the problem, particularly the number of processors involved.

7. Conclusions

This article presented a new scheduling problem, called the multiprocessor machine scheduling problem (MPMSP), inspired by a specific industrial context. We studied a restricted version of this problem, the MPMSP with fixed sequence (MPMSP-FS), where the color sequence is fixed. We first introduced a mathematical formulation of the MPMSP-FS but, since solving this mathematical model was only possible for small-sized instances, several heuristic solution methods were suggested. The first one, called the α/β decomposition procedure, decomposes the problem into several sub-problems and uses the proposed model to solve each of them. Two construction heuristics (the sequential and the look-ahead), as well as two local improvement heuristics (the 2-exchange and the 3-exchange), were developed in order to reduce the required computational time. Extensive computational experiments were conducted to evaluate the performance of the proposed heuristics and procedures for the MPMSP-FS. Our results show that the decomposition procedure is able to produce very good results for small problems. However, as problems become larger, the decomposition procedure becomes less attractive, and the look-ahead heuristic becomes a better choice, producing high-quality solutions in a fraction of a second.

Acknowledgments

This work was partially supported by grants OPG 0036509, OPG 0172633 and OPG 0293307 from the Canadian Natural Sciences and Engineering Research Council (NSERC). This support is gratefully acknowledged. The authors are grateful to two anonymous referees for their useful and constructive comments and suggestions.

References

[1] Pinedo ML. Scheduling: theory, algorithms, and systems. NJ: Prentice-Hall; 2002.
[2] Chrétienne P, Coffman EG, Lenstra JK, Liu Z. Scheduling theory and its applications. New York: Wiley; 1995.
[3] Roberts JW. A survey on statistical bandwidth sharing. Computer Networks 2004;45:319–32.
[4] Sarkar V. Partitioning and scheduling parallel programs for execution on multiprocessors. Cambridge, MA: MIT Press; 1989.
[5] Smith SF. Is scheduling a solved problem? In: Proceedings of the 1st multidisciplinary international conference on scheduling: theory and applications. Nottingham, UK; 2003.
[6] Lenstra JK, Rinnooy Kan AHG. Complexity of scheduling under precedence constraints. Operations Research 1978;26(1):22–35.
[7] Garey MR, Johnson DS. Strong NP-completeness results: motivation, examples and implications. Journal of the Association for Computing Machinery 1978;25:499–508.
[8] Cheng TCE, Sin CCS. A state-of-the-art review of parallel-machine scheduling research. European Journal of Operational Research 1990;47:271–92.



[9] Mokotoff E. Parallel machine scheduling problems: a survey. Asia-Pacific Journal of Operational Research 2001;18:193–242.
[10] Chen Z-L, Powell WB. Solving parallel machine scheduling problems by column generation. INFORMS Journal on Computing 1999;11:78–94.
[11] Van Den Akker JM, Hoogeveen JA, Van De Velde SL. Parallel machine scheduling with column generation. Operations Research 1999;47:862–72.
[12] Mokotoff E. An exact algorithm for the identical parallel machine scheduling problem. European Journal of Operational Research 2004;152:758–69.
[13] Dell'Amico M, Iori M, Martello S, Monaci M. Heuristic and exact algorithms for the identical parallel machine scheduling problem. INFORMS Journal on Computing 2008;20:333–44.
[14] Graham RL. Bounds for certain multiprocessing anomalies. Bell System Technical Journal 1966;45:1563–81.
[15] Coffman Jr EG, Garey MR, Johnson DS. An application of bin-packing to multiprocessor scheduling. SIAM Journal on Computing 1978;7:1–17.

[16] Frangioni A, Necciari E, Scutellà MG. A multi-exchange neighborhood for minimum makespan machine scheduling problem. Journal of Combinatorial Optimization 2004;8:195–220.
[17] Tang L, Luo J. A new ILS algorithm for parallel machine scheduling problems. Journal of Intelligent Manufacturing 2006;17:609–19.
[18] Allahverdi A, Gupta JND, Aldowaisan T. A review of scheduling research involving setup considerations. Omega 1999;27:219–39.
[19] Allahverdi A, Ng CT, Cheng TCE, Kovalyov MY. A survey of scheduling problems with setup times or costs. European Journal of Operational Research 2008;187:985–1032.
[20] Allahverdi A, Soroush HM. The significance of reducing setup times/setup costs. European Journal of Operational Research 2008;187:978–84.