Applied Soft Computing 10 (2010) 703–710
A high performing metaheuristic for job shop scheduling with sequence-dependent setup times

B. Naderi, S.M.T. Fatemi Ghomi *, M. Aminnayeri

Department of Industrial Engineering, Amirkabir University of Technology, 424 Hafez Avenue, Tehran, Iran

* Corresponding author. Tel.: +98 21 66413034; fax: +98 21 66413025. E-mail address: [email protected] (S.M.T. Fatemi Ghomi).
ARTICLE INFO

Article history: Received 27 May 2008; Received in revised form 7 April 2009; Accepted 28 August 2009; Available online 6 September 2009.

Keywords: Scheduling; Job shop; Sequence-dependent setup times; Simulated annealing; Taguchi method

ABSTRACT

This paper investigates job shop scheduling with sequence-dependent setup times under minimization of makespan. We develop an effective metaheuristic, a simulated annealing with novel operators, to solve the problem. Simulated annealing is a well-recognized algorithm, historically classified as a local-search-based metaheuristic. Its performance critically depends on its operators and parameters, in particular its neighborhood search structure. In this paper, we propose an effective neighborhood search structure based on insertion neighborhoods, and we analyze the behavior of simulated annealing with different types of operators and parameters by means of the Taguchi method. An experiment based on the Taillard benchmark is conducted to evaluate the proposed algorithm against several effective algorithms from the literature. The results show that the proposed algorithm outperforms the other algorithms.

© 2009 Elsevier B.V. All rights reserved.
1. Introduction

Job shop scheduling (JSS) is one of the most complicated combinatorial optimization problems. A JSS problem can be described as follows: a set of n jobs must be processed on a set of m machines [1]. Each job has its own processing route; that is, jobs visit the machines in different orders. A job may need to be processed on only a subset of the m machines, not all of them. The following assumptions additionally characterize the problem. Each job can be processed by at most one machine at a time, and each machine can process at most one job at a time. Once the processing of an operation starts, it cannot be interrupted before completion; that is, the jobs are non-preemptive. There is no transportation time between machines; in other words, when an operation of a job finishes, its operation on the subsequent machine can begin immediately. The jobs are independent; that is, there are no precedence constraints among the jobs, and they can be processed in any sequence. All jobs are available for processing at time 0. There is an unlimited buffer between machines for semi-finished jobs, meaning that a job requiring an occupied machine waits until that machine becomes available. There are no machine breakdowns (i.e., machines are continuously available). The objective when solving a JSS problem is to determine the processing order of all jobs on each machine that minimizes the makespan.
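To make the problem data concrete, the following minimal Python sketch shows one natural way to store such an instance. It is purely illustrative (the paper prescribes no data format); all names are hypothetical.

```python
# A minimal, illustrative encoding of a job shop instance:
# each job is a list of (machine, processing_time) pairs in technological order.
# Routes may differ between jobs, and a job need not visit every machine.
jobs = [
    [(0, 5), (1, 3)],   # job 0: machine 0 for 5 time units, then machine 1 for 3
    [(1, 4), (0, 2)],   # job 1: machine 1 first, then machine 0
    [(0, 6), (1, 1)],   # job 2
]
n, m = len(jobs), 2     # n jobs, m machines
```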
Numerous savings obtained by considering setup times in scheduling decisions have prompted researchers to incorporate this assumption into their studies [2]. Setup times are typically sequence-dependent (SDST); that is, the magnitude of the setup strongly depends on both the job just completed and the job about to be processed on a given machine. For example, this may occur in a painting operation, where different initial paint colors require different levels of cleaning before other paint colors can follow. We also assume that setups are non-anticipatory, meaning that a setup can begin only once the job and the machine are both available. The sequence-dependent setup time job shop scheduling problem (SDST JSS) is denoted J/STsd/Cmax in the three-field notation of Graham et al. [41]. The JSS is known to be an NP-hard optimization problem [3]. Therefore, effective metaheuristics are necessary to find optimal or near-optimal solutions for the JSS in a reasonable amount of time. This paper proposes such an algorithm, in the form of simulated annealing (SA), for the problem under consideration. Many researchers in the field of scheduling have concluded that SAs show inferior performance in comparison with other metaheuristics [4]; however, SAs have recently proved their efficiency and effectiveness in a wide variety of optimization problems [4–6]. It is known that the performance of an SA strongly depends on the choice of its operators and parameters. Hence, besides presenting our operators, we explore the impact of different operators and parameters on the performance of SA by means of the Taguchi method. The Taguchi method is an optimization technique that brings robustness into experimental designs and is a cost-effective and labor-saving approach [7,8]. It can simultaneously investigate several factors and
define quickly those having significant effects on the response variables by conducting a minimal number of experiments [9]. There are many successful applications of this approach at the parameter design stage in many fields [10].

The remainder of the paper is organized as follows. Section 2 reviews the literature on SDST JSS. Section 3 describes the proposed simulated annealing. The calibration of the proposed algorithm is presented in Section 4. In Section 5, the experimental design and a comparison of the proposed algorithm with existing methods are reported. Section 6 concludes the paper and provides some directions for future studies.

2. Literature review

The consideration of sequence-dependent setup times in scheduling problems has developed into a very active field of research in recent years. Many review papers list the research conducted on SDST scheduling problems; among them, we can point to the surveys [2,11–13]. In [11–13], papers published before 2000 are cited, while [2] covers papers after 2000. The importance of this consideration is comprehensively explored in [14]. Coleman [15] proposes an integer programming model for minimizing earliness/tardiness on a single machine with sequence-dependent setups. He also shows that the SDST single machine problem is strongly NP-hard. Most research on production scheduling with SDST is restricted to the flowshop and its extensions, such as hybrid flowshop scheduling [16,17].

With regard to SDST JSS, Kim and Bobrowski [18] group and evaluate scheduling rules in dynamic scheduling environments defined by due date tightness, setup times and cost structure. They use a simulation model of a nine-machine job shop for the experiment. Choi and Korkmaz [19] explore JSS with anticipatory SDST, i.e., where only the machine needs to be idle for the setup to begin. They formulate the problem as a mixed integer program and propose a heuristic based on consecutively recognizing the pair of operations that gives a minimum lower bound on the makespan of the associated two-job/m-machine problem with release times. They also show that the proposed heuristic is more effective than the one proposed in [20]. Regarding enumeration algorithms, three different branch and bound methods are presented in [21–23]. Schutten [24] addresses the job shop with some practical aspects, such as release and due dates, setup times and transportation times. He then proposes an extension of the shifting bottleneck procedure for the problem. Sun and Noble [25] consider JSS with release dates, due dates, and sequence-dependent setup times to minimize the weighted sum of squared tardiness. They decompose the problem into a series of single-machine scheduling problems within a shifting bottleneck framework. The single-machine scheduling problem is solved using a Lagrangian relaxation-based approach. They compare the proposed algorithm against some dispatching rules, including "Earliest Due Date", "Apparent Tardiness Cost" and "Similar Setup Times". Cheung and Zhou [26] develop a hybrid algorithm based on a genetic algorithm and a well-known dispatching rule for SDST JSS where the setup times are anticipatory. The first operation on each of the m machines is obtained by the genetic algorithm, while the subsequent operations on each machine are scheduled according to the shortest processing time (SPT) rule. Choi and Choi [27] study JSS with alternative operations and SDST.
They provide a mixed integer program as well as a local-search scheme which incorporates a speed-up feature. Artigues and Roubellat [28] provide a polynomial insertion algorithm for multi-resource SDST JSS under minimization of maximum lateness. They first describe the algorithm for pure JSS, and then introduce multi-resource requirements for the operations. Finally, SDSTs are integrated in the multi-resource context. Using some dominance properties,
they show that the proposed insertion algorithm outperforms the alternative enumeration algorithms. Sun and Yee [29] address JSS with reentrant work flows and SDST to minimize makespan. They use the disjunctive graph representation to study interactions between machines, and propose four two-phase heuristics for this representation. They also introduce a genetic algorithm employing an efficient local improvement procedure. Artigues et al. [30] study SDST JSS with a concentration on the formal definition of schedule generation schemes (SGSs) based on the semi-active, active, and non-delay schedule categories. They also review some priority rules and present a comparative computational analysis of the different SGSs on sets of instances taken from the literature. Zhou et al. [31] address SDST JSS and propose an immune algorithm which certifies the diversity of the antibody. Manikas and Chang [32] consider SDST JSS and present a scatter search combining the mechanisms of diversification and intensification. To evaluate the proposed algorithm, they compare it with a simple tabu search, a simulated annealing, and a genetic algorithm. Naderi et al. [33] propose a hybrid algorithm that shows high performance in comparison with other algorithms in the SDST JSS literature. This hybrid algorithm is a genetic algorithm incorporating some additional features, namely a restart phase and local search. Many genetic operators and parameters are evaluated in that paper. Vinod and Sridharan [34] address dynamic SDST JSS and develop a discrete event simulation model of the job shop. Two types of scheduling rules (ordinary and setup-oriented rules) are applied in the simulation model. Their experimental results demonstrate that setup-oriented rules outperform ordinary ones, and this difference grows with increasing shop load and setup time ratio. Roshanaei et al. [35] employ a variable neighborhood search to solve SDST JSS. Their metaheuristic employs three different neighborhood search structures centered on the insertion operator concept.

3. Simulated annealing

Simulated annealing (SA) is a local-search-based metaheuristic which has exhibited considerable promise when applied to NP-hard problems [4–6]. A typical SA starts from an initial solution and proceeds sequentially and slowly toward regions that may be far from the search area of the initial solution. The SA accepts moves to inferior neighboring solutions under the control of a randomized scheme, which reduces the probability of getting trapped in local optima. The performance of SA is strongly determined by the precise calibration of its operators and parameters. In the following subsections we describe all parameters and operators used in the proposed SA.

3.1. Encoding scheme and initialization

Encoding schemes are used to make a candidate solution recognizable to an algorithm, and a proper encoding scheme plays a key role in maintaining the search effectiveness of any algorithm. One of the most extensively used encoding schemes in the literature is the operation-based representation [36], which determines the relative order of the operations of the jobs on the machines on which they are processed. Since there are precedence constraints among the operations of each job, not all permutations of the operations give feasible solutions. With respect to the above, Gen et al. [37] propose an alternative scheme, which is as follows. Each job i has a set of ni operations.
In the representation, each job number i therefore occurs as many times as that job has operations (ni times). Scanning the permutation from left to right, the kth occurrence of a job number refers to the kth operation in the technological sequence of that job.
Fig. 1. Representation of a candidate solution.
A permutation with repetition of job numbers merely expresses the order in which the operations of the jobs are processed. As long as each job number is repeated as many times as that job has operations, the solution is always feasible. For example, consider a problem with 3 jobs and 2 machines. Each job consists of 2 operations, and its job number is therefore repeated twice, so that for this problem we have 6 operations: {1 1 2 2 3 3}. For each operation, we generate a random number from a uniform distribution on (0, 1). These random keys (RKs) are then sorted to find the relative order of the operations (Fig. 1). For example, in Fig. 1, after sorting the RKs, the third block implies the second operation of job 3, because the number 3 has by then been repeated twice. One advantage of representing solutions by these RKs is their high adaptability to any operator, in particular to the types of neighborhood search structures we propose in the following subsection.

SA is a local-search-based metaheuristic and starts its process from one initial solution (IS). The significant role of the IS in the quality of the final results of a search procedure has been recognized by many researchers in recent years [6]. We have two alternatives: starting from a randomly generated IS or from the solution of a constructive heuristic. Clearly, a better choice of IS tends to yield a better final solution of the SA. To evaluate the degree of sensitivity (robustness) of our SA to its IS, we bring the choice of IS into the calibration section and compare the performance of two SAs, one starting from a randomly generated IS and the other from the shortest processing time (SPT) heuristic proposed by Sule [1].
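As an illustration of this encoding, the following Python sketch (ours, not the authors' MATLAB implementation) decodes an RK vector into an operation sequence and builds the corresponding schedule, including non-anticipatory sequence-dependent setups as defined in Section 1. The instance data and all names are hypothetical.

```python
import random

def decode(rks, jobs, setup):
    """Sort the random keys to obtain an operation sequence (the k-th occurrence
    of job j denotes job j's k-th operation), then build the schedule under
    non-anticipatory setups and return its makespan."""
    # fixed operation list: job 0 repeated once per operation, then job 1, ...
    ops = [j for j, route in enumerate(jobs) for _ in route]
    order = [op for _, op in sorted(zip(rks, ops))]
    n_machines = 1 + max(mach for route in jobs for mach, _ in route)
    job_ready = [0] * len(jobs)      # completion time of each job's last operation
    mach_ready = [0] * n_machines    # time at which each machine falls idle
    last_job = [None] * n_machines   # job most recently processed on each machine
    next_op = [0] * len(jobs)        # index of the next unscheduled operation per job
    for j in order:
        mach, p = jobs[j][next_op[j]]
        next_op[j] += 1
        s = 0 if last_job[mach] is None else setup[mach][last_job[mach]][j]
        # non-anticipatory setup: it starts only once job and machine are both free
        start = max(job_ready[j], mach_ready[mach]) + s
        job_ready[j] = mach_ready[mach] = start + p
        last_job[mach] = j
    return max(job_ready)

# toy 3-job, 2-machine instance (hypothetical data, not from the benchmark)
jobs = [[(0, 5), (1, 3)], [(1, 4), (0, 2)], [(0, 6), (1, 1)]]
setup = [[[0, 2, 3], [2, 0, 2], [3, 2, 0]],  # setup[machine][previous job][next job]
         [[0, 1, 2], [2, 0, 1], [1, 2, 0]]]
rks = [random.random() for _ in range(6)]    # one random key per operation
print(decode(rks, jobs, setup))
```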
3.2. Neighborhood search structure (NSS)

A neighborhood search structure generates a new solution from the current candidate solution by making a slight change to it. Many different NSSs have been applied to scheduling problems; these NSSs must work in a way that avoids generating infeasible solutions. It is often overlooked that the performance of an SA depends critically on the precise calibration of its neighborhood search structure. In this paper, four different NSSs are considered. The first one is a SWAP operator, in which the RKs of two randomly selected operations are swapped. The second one is a SHIFT operator, in which the RK of one randomly picked operation is randomly regenerated. The third one is an INVERSION operator, in which the RKs between two randomly selected cut points are reversed.

In the fourth NSS, we aim at the precise determination of the neighborhood size, which strongly influences the success or failure of an SA. If the neighborhood size is too small, the resulting process cannot move around the search space quickly enough to reach the optimum within a reasonable amount of time. On the other hand, if the neighborhood size is too large, the process essentially performs a random search, with the next possible state being chosen practically uniformly over the search space. Intuitively, a neighborhood search structure that strikes a compromise between these extremes seems appropriate. Motivated by these findings, and in order to diversify the search and avoid getting trapped in local optima, we propose a novel NSS to assure the effectiveness of the SA. In developing it, we tested several combinations of small and large neighborhood sizes; the following NSS provided the most convincing results. In this NSS, new neighbors at each temperature are generated with the SHIFT operator (i.e., small neighbors). During each temperature i, if the best makespan visited so far (xbest) is not improved, a counter increases by one unit; if xbest is improved while the counter shows a number less than 20, the counter restarts from zero. If the counter becomes greater than 20, the algorithm is presumed to be stuck in a local optimum or a loop, since after 20 temperatures it has been given enough time to extricate itself from such a situation. Hence, we need a specific type of operator that enables the SA to leave the current search region and move to a new, relatively good region, maintaining the probability of finding a better solution. We therefore need to generate farther neighbors than those obtained by changing the position of a single operation. This is done through a procedure called the Migration Mechanism (MM), as follows (a code sketch of all four moves follows this subsection):

1) 50 new farther neighbors are generated from the current solution by relocating two randomly selected operations to two new randomly selected positions (i.e., the RKs of two randomly selected operations are randomly regenerated).
2) The best generated farther neighbor is accepted as the next move, whether or not it has a better objective function value than the current solution.

The mechanism just defined can be considered a novel acceptance mechanism that complements the classical acceptance mechanism of SA. The commonly used SA acceptance mechanism is applied when producing small neighbors, whereas our proposed mechanism is used when producing larger neighbors through the MM. We expect the premature convergence of SA to a local minimum to be postponed when adopting a combination of both mechanisms.
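The four moves can be sketched directly from the definitions above, assuming the RK encoding of Section 3.1. Here `evaluate` stands for any function mapping an RK vector to its makespan (e.g., the decoder sketched earlier); this is our reading of the operators, not the authors' code.

```python
import random

def swap(rks):
    # SWAP: exchange the RKs of two randomly selected operations
    a, b = random.sample(range(len(rks)), 2)
    out = rks[:]
    out[a], out[b] = out[b], out[a]
    return out

def shift(rks):
    # SHIFT: randomly regenerate the RK of one randomly picked operation
    out = rks[:]
    out[random.randrange(len(out))] = random.random()
    return out

def inversion(rks):
    # INVERSION: reverse the RKs between two randomly selected cut points
    a, b = sorted(random.sample(range(len(rks) + 1), 2))
    return rks[:a] + rks[a:b][::-1] + rks[b:]

def migration(rks, evaluate, pool=50):
    # Migration Mechanism: draw 50 farther neighbors by regenerating the RKs of
    # two randomly selected operations each, and return the best of them; it is
    # accepted whether or not it improves on the current solution
    def farther(r):
        out = r[:]
        for i in random.sample(range(len(out)), 2):
            out[i] = random.random()
        return out
    return min((farther(rks) for _ in range(pool)), key=evaluate)
```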
3.3. Cooling schedule

The neighborhood search structure is not the only aspect of the SA that can be chosen to improve the performance of the algorithm; the form of the energy function (the cooling schedule) may also influence its behavior. As explained previously, to avoid local minima, solutions with worse objective values are accepted with a probability that depends on the temperature. As the procedure proceeds, the temperature is gradually lowered under a mechanism called the cooling schedule. Generally, three types of cooling schedule appear in the literature (see [38] for more details):

1) Linear cooling rate: Ti = T0 − i(T0 − Tf)/N, for i = 1, 2, ..., N.
2) Exponential cooling rate: Ti = A/(i + 1) + B, where A = (T0 − Tf)(N + 1)/N and B = T0 − A, for i = 1, 2, ..., N.
3) Hyperbolic cooling rate: Ti = (1/2)(T0 − Tf)(1 − tanh(10i/N − 5)) + Tf, for i = 1, 2, ..., N.

where T0, Tf and N are the initial temperature, the final (stopping) temperature and the desired number of temperature levels between T0 and Tf, respectively. An appropriate initial temperature should be high enough to give all states of the search space an equal opportunity to be visited, yet not so high that a great deal of unnecessary search is performed at high temperatures. Initial experiments showed that initial temperatures ranging from 10 to 100 are suitable for our problem, and the stopping temperature is fixed at 1. Fig. 2 summarizes the general outline of the proposed SA.
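Under the stated definitions, the three schedules translate directly into code; a minimal sketch ("tgh" in the original notation is the hyperbolic tangent):

```python
import math

def linear(i, T0, Tf, N):
    # Ti = T0 - i(T0 - Tf)/N; reaches Tf exactly at i = N
    return T0 - i * (T0 - Tf) / N

def exponential(i, T0, Tf, N):
    # Ti = A/(i + 1) + B with A = (T0 - Tf)(N + 1)/N and B = T0 - A
    A = (T0 - Tf) * (N + 1) / N
    return A / (i + 1) + (T0 - A)

def hyperbolic(i, T0, Tf, N):
    # Ti = (1/2)(T0 - Tf)(1 - tanh(10i/N - 5)) + Tf
    return 0.5 * (T0 - Tf) * (1 - math.tanh(10 * i / N - 5)) + Tf
```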
Fig. 2. General outline of the proposed simulated annealing.
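Since only the caption of Fig. 2 survives here, the following sketch gives our reading of the outline, combining the pieces described above with the calibrated settings reported later in Section 4 (exponential cooling, SHIFT neighbors, the 20-temperature counter and the MM). The Metropolis rule exp(−Δ/T) is the textbook acceptance criterion and is an assumption on our part, since the paper does not print the formula.

```python
import math
import random

def simulated_annealing(initial, evaluate, shift, migration,
                        T0=40.0, Tf=1.0, N=200, neighbors_per_temp=200,
                        stall_limit=20):
    """Sketch of the proposed SA outline (our reconstruction of Fig. 2)."""
    x, fx = initial, evaluate(initial)
    best, fbest = x, fx
    stall = 0                                   # temperatures without improving xbest
    for i in range(1, N + 1):
        A = (T0 - Tf) * (N + 1) / N             # exponential cooling, Section 3.3
        T = A / (i + 1) + (T0 - A)
        improved = False
        for _ in range(neighbors_per_temp):     # small neighbors via SHIFT
            y = shift(x)
            fy = evaluate(y)
            # assumed Metropolis rule: accept improvements always, and worse
            # moves with probability exp(-delta / T)
            if fy <= fx or random.random() < math.exp((fx - fy) / T):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
                    improved = True
        stall = 0 if improved else stall + 1
        if stall > stall_limit:                 # presumed stuck: jump via the MM,
            x = migration(x, evaluate)          # accepted even if it is worse
            fx = evaluate(x)
            stall = 0
    return best, fbest
```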
4. Parameter tuning

In this section, we analyze the behavior of the proposed algorithm with respect to the operators and parameters described above. Various approaches exist for statistically designing such an experimental investigation, each effective depending on the experimental situation. Although the most widely used approach is the full factorial design, it is not always efficient, because the investigation becomes increasingly difficult to perform when the number of factors is high. To reduce the number of required tests, fractional factorial experiments (FFEs) were developed [39]. FFEs use only a portion of the total possible combinations to estimate the main effects of the factors and some of their interactions. Taguchi [8] developed a family of FFE matrices that further reduce the number of experiments while still providing sufficient information. In the Taguchi method, orthogonal arrays are used to study a large number of decision variables with a small number of experiments. Taguchi separates the factors into two main groups: controllable and noise factors. Noise factors are those over which we have no direct control. Since elimination of the noise factors is impractical and often impossible, the Taguchi method seeks to minimize the effect of noise and to determine the optimal levels of the important controllable factors based on the concept of robustness [7]. Besides determining the optimal levels, the Taguchi method identifies the relative significance of individual factors in terms of their main effects on the objective function. Taguchi also created a transformation of the repetition data into a value that measures variation: the signal-to-noise (S/N) ratio, which explains why this type of parameter design is called robust design [7,9]. Here, the term "signal" denotes the desirable value (the mean of the response variable) and "noise" denotes the undesirable value (the standard deviation). The S/N ratio thus indicates the amount of variation present in the response variable; the aim is to maximize it.
Taguchi classifies objective functions into three categories: the smaller-the-better type, the larger-the-better type, and the nominal-is-best type. Since almost all objective functions in scheduling belong to the smaller-the-better type, the corresponding S/N ratio [7] is:

S/N ratio = −10 log10 (objective function)²
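A small sketch of this transformation and of the per-level averaging plotted later in Fig. 3. We use the standard smaller-the-better form over repeated observations, which reduces to the expression above for a single observation; the data layout is our assumption.

```python
import math
from collections import defaultdict

def sn_ratio_smaller_is_better(values):
    # -10 log10 of the mean squared response; for one observation this is
    # exactly -10 log10(objective function)^2 as in the text
    return -10 * math.log10(sum(v * v for v in values) / len(values))

def mean_sn_per_level(trials, responses):
    """Average S/N ratio at each factor level.
    trials: one dict per trial mapping factor -> level, e.g. {'E': 'E(2)', ...};
    responses: for each trial, the list of observed RPDs over the instances."""
    buckets = defaultdict(list)
    for levels, rpds in zip(trials, responses):
        sn = sn_ratio_smaller_is_better(rpds)
        for factor, level in levels.items():
            buckets[(factor, level)].append(sn)
    return {key: sum(v) / len(v) for key, v in buckets.items()}
```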
As mentioned earlier, in this study the SA factors are: initial solution (A); the combination of the number of desired temperatures and the number of neighborhood searches at each temperature (B); initial temperature (C); cooling schedule (D); and neighborhood search structure (E). The different levels of these factors are shown in Table 1. The associated degrees of freedom for these five factors total 12, so the selected orthogonal array should have a minimum of 13 rows and five columns to assess the five factors.

Table 1
Factors and their levels.

Factor                                              Symbol  Levels  Type
Initial solution                                    A       2       A(1)—SPT; A(2)—Random
Combination of number of temperatures between
  T0 and Tf and number of neighbors visited
  at each temperature                               B       4       B(1)—(800, 50); B(2)—(400, 100);
                                                                    B(3)—(300, 150); B(4)—(200, 200)
Initial temperature                                 C       4       C(1)—10; C(2)—40; C(3)—70; C(4)—100
Cooling schedule type                               D       3       D(1)—Linear; D(2)—Exponential; D(3)—Hyperbolic
Neighborhood search structure (NSS)                 E       4       E(1)—SWAP; E(2)—MM; E(3)—INVERSION; E(4)—SHIFT
Table 2
The modified orthogonal array L16.

Trial   A     B     C     D     E
1       A(1)  B(1)  C(1)  D(1)  E(1)
2       A(1)  B(2)  C(2)  D(2)  E(2)
3       A(1)  B(3)  C(3)  D(3)  E(3)
4       A(1)  B(4)  C(4)  D(3)  E(4)
5       A(1)  B(1)  C(2)  D(3)  E(4)
6       A(1)  B(2)  C(1)  D(3)  E(3)
7       A(1)  B(3)  C(4)  D(1)  E(2)
8       A(1)  B(4)  C(3)  D(2)  E(1)
9       A(2)  B(1)  C(3)  D(3)  E(2)
10      A(2)  B(2)  C(4)  D(3)  E(1)
11      A(2)  B(3)  C(1)  D(2)  E(4)
12      A(2)  B(4)  C(2)  D(1)  E(3)
13      A(2)  B(1)  C(4)  D(2)  E(3)
14      A(2)  B(2)  C(3)  D(1)  E(4)
15      A(2)  B(3)  C(2)  D(3)  E(1)
16      A(2)  B(4)  C(1)  D(3)  E(2)
From the standard table of orthogonal arrays, L16 is selected as the most suitable orthogonal array design that fulfills all our minimum requirements. This orthogonal array still requires some modification to adapt it to our experimental design. The adjustment is made as follows. Our problem consists of one 2-level factor, one 3-level factor and three 4-level factors, while L16 is composed of five 4-level factors, so our problem has to be transformed structurally to fit the standard shape of L16. For each factor that lacks some levels in comparison with the standard L16, we compensate by assigning the extra levels of the standard L16 to one of the existing levels of the associated factor. For example, consider factor A, which has two levels and thus lacks two levels with respect to the standard L16; the two extra levels are offset by repeating each of the two levels of factor A. Factor D undergoes the same procedure: its extra level in the standard L16 is assigned to level 3 of factor D. Table 2 presents the modified orthogonal array L16 (see [7] for more details on such modifications).

To conduct the experiment, we generate a set of instances as follows: 6 instances for each of the 8 combinations of (n × m), namely ({20, 30, 50} × {15, 20}), (15, 15) and (100, 15), resulting in 48 instances. The processing times and setup times are randomly generated from uniform distributions on (1, 99) and (1, 50), respectively. We implement the algorithms in MATLAB 7.0 and run them on a PC with a 2.0 GHz Intel Core 2 Duo and 2 GB of RAM. We use the relative percentage deviation (RPD) as a common performance measure to compare the methods. For each instance, Minsol is the best solution obtained by any of the algorithms. RPD is obtained from the following formula:

RPD = (Algsol − Minsol) / Minsol × 100    (1)
where Algsol is the objective function value obtained for a given algorithm and instance. Clearly, lower values of RPD are preferable. The stopping criterion used to test the algorithms in the current and next sections is a CPU time limit fixed at n² × m milliseconds. This stopping criterion not only permits more time as the number of jobs or machines increases, but also is more sensitive to a rise in the number of jobs than in the number of machines. The above stopping criterion applies only to the metaheuristics other than our SA; for the proposed SA, the stopping criterion is determined by the stopping temperature, which has been set equal to 1.

After obtaining the results of the Taguchi experiment, the RPDs are transformed into S/N ratios. Fig. 3 shows the average S/N ratio obtained at each level of the different factors. As indicated in Fig. 3, the optimal levels of factors A, B, C, D and E are A(1), B(4), C(2), D(2) and E(2), respectively. To explore the relative significance of the individual factors in terms of their main effects on the objective function, an analysis of variance (ANOVA) is conducted; the results are presented in Table 3. The neighborhood search structure (NSS) has the greatest effect on the quality of the algorithm, with a relative importance of 69.3%. The cooling schedule, with a relative importance of 13.5%, ranks second. Factors A and B have the least effect on the performance of our SA; interestingly, this shows that the SA is almost independent of the choice of the initial solution. In sum, the chosen levels are as follows: initial solution: SPT; combination of number of desired temperatures and number of neighborhood searches at each temperature: (200, 200); initial temperature: 40; cooling schedule: exponential; and neighborhood search structure: MM.

5. Experimental evaluation

This section compares the proposed SA with other existing algorithms: the genetic algorithm proposed by Cheung and Zhou [26] (GA), the immune algorithm of Zhou et al. [31] (IA), the hybrid genetic algorithm of Naderi et al. [33] (HGA), the variable neighborhood search of Roshanaei et al. [35] (VNS) and the SPT heuristic of Sule [1]. The stopping criterion for GA, HGA, IA and VNS is n² × m milliseconds of computational time. Moreover, the RPD measure (Eq. (1)) is used to compare the algorithms. We use a benchmark similar to [33], generated based on Taillard's instances [40] as follows. The data required for a problem consist of the number of jobs (n), the number of machines (m), the range of processing times (p) and the range of the sequence-dependent setup times (SDST). The benchmark contains different combinations of the number of jobs n and the number of machines m. The (n × m) combinations are ({20, 30, 50} × {15, 20}), (15, 15) and (100, 20), summing to 8 combinations. The processing times in Taillard's instances [40] are generated from a uniform distribution on (1, 99). We use four levels for the SDST: 25%, 50%, 100% and 125% of the maximum possible processing time. SDSTs are therefore generated from four uniform distributions, U(1, 25), U(1, 50), U(1, 100) and U(1, 125), one for each subset.
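A sketch of how such a benchmark instance and the RPD measure might be computed follows. The ranges come from the text above; generating machine routes as random permutations mirrors Taillard's generator and is an assumption here, as are all names.

```python
import random

def generate_instance(n, m, sdst_max, rng=random):
    """One instance in the spirit of this section: processing times from U(1, 99)
    as in Taillard's instances, setups from U(1, sdst_max) with sdst_max in
    {25, 50, 100, 125}."""
    jobs = [[(mach, rng.randint(1, 99)) for mach in rng.sample(range(m), m)]
            for _ in range(n)]
    setup = [[[rng.randint(1, sdst_max) for _ in range(n)] for _ in range(n)]
             for _ in range(m)]
    return jobs, setup

def rpd(alg_sol, min_sol):
    # relative percentage deviation, Eq. (1)
    return (alg_sol - min_sol) / min_sol * 100

def time_budget_ms(n, m):
    # CPU-time stopping criterion used for the competing metaheuristics
    return n * n * m
```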
Fig. 3. The mean S/N ratio plot for each level of the factors.
Table 3
ANOVA table for the S/N ratios.

Factor  df  SS       MS       F       Relative importance (%)  Cumulative (%)  P-value
A       1   3.619    3.619    1.154   0.1                      0.1             >0.10
B       3   20.656   6.885    2.196   1.8                      1.9             0.08 (>0.05)
C       3   58.771   19.590   6.248   7.8                      9.7             0.021
D       2   91.217   45.608   14.546  13.5                     23.2            0.031
E       4   445.409  148.470  47.352  69.3                     92.5            0.008
Error   3   9.406    3.135
Total   15  629.078
Table 4
Average RPD (%) for the algorithms, grouped by n and m.

n        m   GA     HGA    SA    IA     VNS   SPT
15       15  14.49  5.28   0.34  18.71  5.37  24.85
20       15  15.77  11.43  0.01  22.25  7.99  29.46
20       20  11.42  7.49   0.07  17.61  5.60  21.51
30       15  10.78  5.38   0.63  16.05  3.71  22.76
30       20  6.93   3.81   1.49  11.54  2.24  18.17
50       15  14.74  8.40   0.03  17.82  8.01  24.17
50       20  9.57   6.26   0.05  12.65  5.61  17.30
100      20  4.79   4.16   0.25  9.83   4.15  9.17
Average      11.06  6.53   0.36  15.81  5.34  20.92
Fig. 5. Means plot for the interaction between the type of algorithm and number of jobs.
The different levels of the factors result in 32 different scenarios. We produce 10 instances for each scenario, similar to Taillard's benchmark; therefore we have 320 instances. Table 4 shows the results of the experiments, averaged for each combination of (n × m) (40 data points per average). The proposed SA provides the best results among the algorithms tested, with an RPD of 0.36%. VNS takes second rank with an RPD of 5.34%. HGA outperforms GA and IA. The worst performing algorithm is SPT, with an RPD of 20.92%. To statistically analyze the results shown in Table 4, we conduct an analysis of variance (ANOVA) in which the different algorithms are considered as a factor and RPD as the response variable. The results indicate that there are significant differences among the algorithms, with a p-value very close to zero. Fig. 4 shows the means plot and Tukey intervals for the algorithms. The figure demonstrates that SA statistically
outperforms all the other algorithms, whereas VNS and HGA both work better than GA and IA. To analyze the interaction between the quality of the algorithms and the different levels of the number of jobs and the magnitude of SDST, the average RPDs obtained by each algorithm at these levels are shown in Figs. 5 and 6, respectively. Due to the remarkable difference between SPT and the metaheuristics, SPT is excluded from this further experiment. Comparatively speaking, the proposed SA keeps its robustness in a variety of situations and significantly outperforms the other algorithms in all the different cases. Regarding the number of jobs, when n = 15, VNS and HGA perform almost the same, and when n = 100, VNS, HGA and GA provide similar results. As Fig. 6 illustrates, there is a clear trend that increasing the magnitude of SDST results in relatively better performance of HGA and VNS against GA and IA. This section has illustrated that the proposed algorithm is effective for the problem: the SA enjoys clear superiority over the other algorithms while maintaining its robustness in a variety of situations. Last but not least, the CPU times of the algorithms, grouped by (n × m), are summarized in Table 5. As mentioned before, the CPU time of GA, HGA, IA and VNS is set at n² × m milliseconds.
Fig. 4. Means plot and Tukey intervals (at the 95% confidence level) for the type of algorithm factor.
Fig. 6. Means plot for the interaction between the type of algorithm and magnitude of SDST.
Table 5
Average CPU times of the algorithms, measured in seconds.

n    m   SPT    SA      Metaheuristics (GA, HGA, IA, VNS)
15   15  <0.01  8.10    3.37
20   15  <0.01  15.58   6.00
20   20  <0.01  16.89   8.00
30   15  <0.01  17.47   13.50
30   20  <0.01  19.63   18.00
50   15  <0.01  22.42   37.50
50   20  <0.01  26.68   50.00
100  20  0.01   139.01  200.00
Fig. 7. CPU times of the algorithms with respect to increasing problem size.
The stopping criterion of the SA is its stopping temperature. Fig. 7 plots the CPU times of the algorithms with respect to the increase in the number of jobs. As Fig. 7 shows, the stopping criterion of the other metaheuristics is designed in such a way that their CPU time stays very close to that of the SA.

6. Conclusions and future studies

This paper studied job shop scheduling where the setup times are sequence-dependent, with makespan as the optimization criterion. Scheduling problems with setup times are of interest among researchers because nowadays companies produce multiple goods on common resources, which creates the need for setup activities. Setup activities usually play a costly role in production processes; therefore, planners should take them into account in their schedules. To tackle the problem, we applied a high performing metaheuristic, in the framework of simulated annealing, incorporating a novel neighborhood search structure. We investigated the impact of many operators and parameters (initial solution, cooling schedule, initial temperature, neighborhood search structure, etc.) on the performance of the SA. This extensive calibration was conducted with an effective statistical tool, the Taguchi method, in order to exploit its advantages. The experimental results illustrated the effectiveness of our new neighborhood search structure against the existing ones. To evaluate the performance of the proposed SA, we carried out a numerical experiment comparing the SA with other existing algorithms (two different genetic algorithms, an immune algorithm and a variable neighborhood
search) that have shown their effectiveness in the papers where they were applied. The numerical experiment used a set of instances generated based on Taillard's benchmark. The results showed that the proposed SA outperformed the other metaheuristics taken from the literature. As a direction for future studies, it would be interesting to work on multi-objective cases of the problem and to develop effective metaheuristics incorporating advanced features. Since realistic cases are of interest, the consideration of other practical assumptions, such as machine availability constraints or transportation times, would also be a promising research direction.

References

[1] D.R. Sule, Industrial Scheduling, PWS Publishing Company, USA, 1997.
[2] A. Allahverdi, C.T. Ng, T.C.E. Cheng, M.Y. Kovalyov, A survey of scheduling problems with setup times or costs, European Journal of Operational Research 187 (3) (2008) 985–1032.
[3] B. Ombuki, M. Ventresca, Local search genetic algorithms for the job shop scheduling problem, Applied Intelligence 21 (2004) 99–109.
[4] T.H. Wu, C.C. Chang, S.H. Chung, A simulated annealing algorithm for manufacturing cell formation problems, Expert Systems with Applications 34 (3) (2008) 1609–1617.
[5] E. Rodriguez-Tello, J.K. Hao, J. Torres-Jimenez, An effective two-stage simulated annealing algorithm for the minimum linear arrangement problem, Computers and Operations Research 35 (10) (2008) 3331–3346.
[6] B. Naderi, M. Zandieh, A. Khaleghi Ghoshe Balagh, V. Roshanaei, An improved simulated annealing for hybrid flowshops with sequence-dependent setup and transportation times to minimize total completion time and total tardiness, Expert Systems with Applications 36 (2009) 9625–9633.
[7] M.S. Phadke, Quality Engineering Using Robust Design, Prentice-Hall, USA, 1989.
[8] R.J. Ross, Taguchi Techniques for Quality Engineering, McGraw-Hill, USA, 1989.
[9] R. Al-Aomar, Incorporating robustness into genetic algorithm search of stochastic simulation outputs, Simulation Modeling Practice and Theory 14 (2006) 201–223.
[10] F. Luo, H. Sun, T. Geng, N. Qi, Application of Taguchi's method in the optimization of bridging efficiency between confluent and fresh microcarriers in bead-to-bead transfer of vero cells, Biotechnology Letters 30 (4) (2008) 645–649.
[11] A. Allahverdi, J.N.D. Gupta, T. Aldowaisan, A review of scheduling research involving setup considerations, OMEGA The International Journal of Management Science 27 (1999) 219–239.
[12] W.H. Yang, C.J. Liao, Survey of scheduling research involving setup times, International Journal of Systems Science 30 (1999) 143–155.
[13] C.N. Potts, M.Y. Kovalyov, Scheduling with batching: a review, European Journal of Operational Research 120 (2000) 228–349.
[14] A. Allahverdi, H.M. Soroush, The significance of reducing setup times/setup costs, European Journal of Operational Research 187 (3) (2008) 978–984.
[15] B.J. Coleman, Technical note: a simple model for optimizing the single machine early/tardy problem with sequence-dependent setups, Production and Operations Management 1 (1992) 225–228.
[16] M.E. Kurz, R.G. Askin, Scheduling flexible flow lines with sequence dependent setup times, European Journal of Operational Research 159 (2004) 66–82.
[17] M. Zandieh, S.M.T. Fatemi Ghomi, S.M. Moattar Husseini, An immune algorithm approach to hybrid flow shops scheduling with sequence dependent setup times, Journal of Applied Mathematics and Computation 180 (2006) 111–127.
[18] S.C. Kim, P.M. Bobrowski, Impact of sequence-dependent setup time on job shop scheduling performance, International Journal of Production Research 32 (7) (1994) 1503–1520.
[19] I. Choi, O. Korkmaz, Job shop scheduling with separable sequence dependent setups, Annals of Operations Research 70 (1997) 155–170.
[20] C. Zhou, P.G. Egbelu, Scheduling in a manufacturing shop with sequence dependent setups, Robotics and Computer Integrated Manufacturing 51 (1989) 73–81.
[21] S.K. Gupta, N jobs and m machines job shop problems with sequence-dependent setup times, International Journal of Production Research 20 (5) (1986) 643–656.
[22] P. Brucker, O. Thiele, A branch-and-bound method for the general shop problem with sequence dependent setup times, OR Spectrum 18 (1996) 145–161.
[23] C. Artigues, D. Feillet, A branch and bound method for the job-shop problem with sequence-dependent setup times, Annals of Operations Research 159 (2008) 135–159.
[24] J.M.J. Schutten, Practical job shop scheduling, Annals of Operations Research 83 (1998) 161–178.
[25] X. Sun, J.S. Noble, An approach to job shop scheduling with sequence-dependent setups, Journal of Manufacturing Systems 18 (1999) 416–430.
[26] W. Cheung, H. Zhou, Using genetic algorithms and heuristics for job shop scheduling with sequence dependent setup times, Annals of Operations Research 17 (2001) 65–81.
[27] I.C. Choi, D.S. Choi, A local search algorithm for job shop scheduling problems with alternative operations and sequence-dependent setups, Computers and Industrial Engineering 42 (2002) 43–58.
[28] C. Artigues, F. Roubellat, An efficient algorithm for operation insertion in a multi-resource job-shop scheduling with sequence-dependent setup times, Production Planning and Control 13 (2002) 175–186.
[29] J.U. Sun, S.R. Yee, Job shop scheduling with sequence dependent setup times to minimize makespan, International Journal of Industrial Engineering: Theory Applications and Practice 10 (2003) 455–461.
[30] C. Artigues, P. Lopez, P.D. Ayache, Schedule generation schemes for the job shop problem with sequence dependent setup times: dominance properties and computational analysis, Annals of Operations Research 138 (2005) 21–52.
[31] Y. Zhou, L. Beizhi, J. Yang, Study on job shop scheduling with sequence-dependent setup times using biological immune algorithm, International Journal of Advanced Manufacturing Technology 30 (2006) 105–111.
[32] A. Manikas, Y.L. Chang, A scatter search approach to sequence-dependent setup times job shop scheduling, International Journal of Production Research (May) (2008).
[33] B. Naderi, M. Zandieh, S.M.T. Fatemi Ghomi, Scheduling job shops with sequence dependent setup times, International Journal of Production Research 47 (2009) 5959–5976, doi:10.1080/00207540802165817.
[34] V. Vinod, R. Sridharan, Scheduling a dynamic job shop production system with sequence-dependent setups: an experimental study, Robotics and Computer-Integrated Manufacturing 24 (3) (2008) 435–449.
[35] V. Roshanaei, B. Naderi, F. Jolai, M. Khalili, A variable neighborhood search for job shop scheduling with setup times to minimize makespan, Future Generation Computer Systems (2009), doi:10.1016/j.future.2009.01.004.
[36] R. Cheng, M. Gen, Y. Tsujimura, A tutorial survey of job-shop scheduling problems using genetic algorithms, part II: hybrid genetic search strategies, Computers and Industrial Engineering 36 (1999) 343–364.
[37] M. Gen, Y. Tsujimura, E. Kubota, Solving job shop scheduling problem using genetic algorithms, in: Proceedings of the 16th International Conference on Computers and Industrial Engineering, Ashikaga, Japan, (1994), pp. 576–579.
[38] M. Lundy, A. Mees, Convergence of an annealing algorithm, Mathematical Programming 34 (1986) 111–124.
[39] W.G. Cochran, G.M. Cox, Experimental Designs, 2nd ed., Wiley, USA, 1992.
[40] E. Taillard, Benchmarks for basic scheduling problems, European Journal of Operational Research 64 (1993) 278–285.
[41] R.L. Graham, E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, Optimization and approximation in deterministic sequencing and scheduling: a survey, Annals of Discrete Mathematics 5 (1979) 287–326.