Computers & Operations Research 31 (2004) 2151 – 2164
www.elsevier.com/locate/dsw
A neuro-tabu search heuristic for the flow shop scheduling problem

M. Solimanpur^a, P. Vrat^b, R. Shankar^c,*

^a Mechanical Engineering Department, Indian Institute of Technology Delhi, Hauz Khas, New Delhi-110 016, India
^b Director, Indian Institute of Technology Roorkee, Roorkee-247 667, Uttaranchal, India
^c Department of Management Studies, Indian Institute of Technology Delhi, Hauz Khas, New Delhi-110 016, India
Abstract

Flow shop scheduling deals with the sequencing of different jobs that visit a set of machines in the same order. A neural networks-based tabu search method, namely EXTS, is proposed for flow shop scheduling. Unlike other tabu search-based methods, the proposed approach diminishes the tabu effect exponentially rather than in the sudden manner most commonly used. On the basis of the conducted tests, some rules are evolved to set the values of the different parameters. The effectiveness of the proposed method is tested on 23 problems selected from the literature. The computational results indicate that the proposed approach is effective in terms of reduced makespan for the attempted problems. © 2003 Elsevier Ltd. All rights reserved.

Keywords: Scheduling; Flow shop scheduling; Tabu search; Exponential tabu search; Neural networks
1. Introduction

Flow shop scheduling is a well-known problem among researchers and practitioners. It deals with sequencing N jobs that have to be processed on M machines so that the manufacturing system performs better in terms of performance measures such as minimum makespan, minimum tardiness, minimum work-in-process, etc. In flow shop scheduling, the processing routes are the same for all jobs. The emergence of advanced manufacturing systems such as CAD/CAM, FMS, CIM, etc. has increased the importance of flow shop scheduling. In these systems, similar parts are grouped in manufacturing cells. Since the manufacturing processes of parts within a cell are almost similar, the parts need to visit machines in the same order. The research attempted in this paper bridges
* Corresponding author. Tel.: +91-11-26596421; fax: +91-11-26862620.
E-mail addresses: ms [email protected] (M. Solimanpur), [email protected] (P. Vrat), [email protected], [email protected] (R. Shankar).
0305-0548/$ - see front matter © 2003 Elsevier Ltd. All rights reserved. doi:10.1016/S0305-0548(03)00169-2
the flow shop scheduling and neural networks, which has not been explored so far. The following assumptions are made for the flow shop scheduling problem in this paper:
• all jobs are available for processing at time zero,
• each job can be processed only once on each machine (if a job does not need processing on a particular machine, the related processing time is set to zero),
• once an operation is started on a machine, it has to be completed without interruption (non-preemption),
• all machines are available at time zero,
• every machine can process only one job at a time,
• each job can be processed on only one machine at a time,
• the setup times are independent of the sequence and can therefore be incorporated in the processing times.
2. Literature review

Sequencing methods in the literature can be broadly categorized into two types of approaches, namely optimization and heuristic. Optimization approaches guarantee to obtain the optimum sequence, whereas heuristic approaches mostly obtain near-optimal sequences. Among the optimization approaches, the algorithm developed by Johnson [1] is the most widely cited research dealing with sequencing N jobs on two machines. Lomnicki [2] proposed a branch and bound technique to find the optimum permutation of jobs. Since the flow shop scheduling problem has been recognized to be NP-hard, the branch and bound method cannot be applied to large-size problems. This limitation has encouraged researchers to develop efficient heuristics. Heuristics are of two kinds: constructive and improvement. In the constructive category, the methods developed by Palmer [3], Campbell et al. [4], Gupta [5], Dannenbring [6], Röck and Schmidt [7] and Nawaz et al. [8] can be listed. Mostly, these methods are developed on the basis of Johnson's algorithm. Turner and Booth [9] and Taillard [10] have verified that the method proposed by Nawaz et al. [8], namely NEH, performs best among the constructive methods tested. On the other hand, Osman and Potts [11], Widmer and Hertz [12], Ho and Chang [13], Ogbu and Smith [14], Taillard [10], Nowicki and Smutnicki [15] and Ben-Daya and Al-Fawzan [16] have developed improvement heuristics for the problem of interest. Tabu search (TS) is a meta-heuristic approach initially proposed by Glover [17] that has been successfully applied to solve different combinatorial optimization problems. TS works by choosing an initial sequence as the current solution and then applying a move mechanism to search the neighborhood of the current solution and select the most appropriate one. It makes the move that led to the selected solution forbidden (tabu) for a certain number of iterations and then changes the current solution into the selected one.
Despite the simplicity of this approach, it still remains an art to define the neighborhood, search among neighbors, define forbidden moves, set the tabu list length, etc. Different implementations of these elements result in different tabu search-based algorithms. Basic ideas and advanced topics of TS have been widely discussed in Glover and Laguna [18]. In TS-based methods, the selected moves are stored in a data structure called the tabu list. This list contains s elements at a time; once a new move is entered, the oldest one is removed. In other words, the selected move is put in the tabu list, remains there for the next s iterations and is then suddenly removed. The parameter s is called the tabu list length or tabu list size. Currently,
there are three different ways to determine this parameter: setting a predetermined constant value, choosing randomly from a given range, and dynamically changing it through certain adjustments [19]. Salhi [19] proposed a TS method with two key features: (1) use of a dynamic tabu list in which the tabu list size of an attribute is linked to its contribution to the quality of the solution, and (2) use of a softer aspiration criterion. Some advanced ideas for defining flexible tabu list strategies have been discussed in Glover and Laguna [18]. Several TS-based approaches have been proposed for the flow shop scheduling problem [10,15,16]. Nowicki and Smutnicki [15] used block properties to explore different sequences in flow shop scheduling. These properties ensure that a considerable number of moves cannot be effective and therefore should be ignored. This saves computation time as well as enhances the quality of solutions. In this paper, these properties are adopted to construct the neighborhood to be searched. Hasegawa et al. [20] constructed a neural network with a TS architecture to solve the quadratic assignment problem. Unlike classical tabu list mechanisms, where a move is either tabu or not at a time, in their approach the tabu effect declines over time. They showed that this mechanism handles tabu moves more effectively than the classical tabu list. A relatively similar neural network architecture employing the TS approach has also been reported in [21]. Hasegawa et al. [22] developed a neural network architecture for the traveling salesman problem (TSP), which uses the 2-opt algorithm. In this paper, we propose a neuro-dynamical TS for the flow shop scheduling problem. This paper is organized as follows: Section 3 includes the formulation of the flow shop scheduling problem. In Section 4, the elements of the proposed TS method are discussed. Section 5 presents the stepwise procedure of the proposed method.
The computational results obtained by applying the proposed method to the 23 problems selected from the literature are discussed in Section 6. Section 7 includes conclusions and discussion.

3. Formulation of problem

The flow shop scheduling problem concerns a set of N jobs J = {1, 2, …, N} that are to be processed on a set of M machines I = {1, 2, …, M}. Without loss of generality, it is assumed that jobs are processed in the order of the indices of the machines. Each job consists of M operations and each operation is processed on one machine. Let Π denote the set of all permutations of the jobs belonging to J. A permutation of all jobs belonging to J is represented by π, where π ∈ Π. The jth element of permutation π is represented by π(j). The objective is to find an optimal sequence of all jobs, i.e. π*, that minimizes the makespan, defined as the completion time of the last job on the last machine. It is assumed that job j takes t_ij units of time to be processed on machine i, i = 1, 2, …, M; j = 1, 2, …, N. Let Cmax(π) denote the makespan associated with permutation π. Fig. 1 shows a grid graph illustration of a typical flow shop scheduling problem with six machines and nine jobs. In this figure, the makespan is the length of the longest path from node (1, 1) to node (M, N). Every path is composed of horizontal and vertical sub-paths. Each horizontal sub-path is called a block. Let B be the number of blocks in the critical path (longest path) of permutation π. Let u_{l−1} and u_l denote the beginning and end nodes of block l associated with permutation π, l = 1, 2, …, B. Thus, the vector U_π = (u_0, u_1, …, u_B), where u_0 = 1 and u_B = N, fully describes the blocks in the critical path associated with permutation π. For example, the bold path in Fig. 1 is the critical path associated with permutation π = (5, 8, 1, 4, 2, 7, 3, 9, 6). As seen in this figure, there are four blocks, thus B = 4.
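The makespan Cmax(π) can be computed by the standard completion-time recursion C(i, j) = max(C(i − 1, j), C(i, j − 1)) + t_{i,π(j)}, which is exactly the longest-path computation on the grid graph described above. A minimal sketch in Python (0-indexed; the function name is ours, not the paper's):

```python
def makespan(t, perm):
    """Cmax of permutation perm; t[i][j] is the time of job j on machine i."""
    M, N = len(t), len(perm)
    # C[i][j]: completion time of the j-th scheduled job on machine i
    C = [[0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            ready_machine = C[i - 1][j] if i > 0 else 0  # job finished upstream
            ready_job = C[i][j - 1] if j > 0 else 0      # machine freed by previous job
            C[i][j] = max(ready_machine, ready_job) + t[i][perm[j]]
    return C[M - 1][N - 1]
```

Each entry of C is the longest path from node (1, 1) to that node, so the bottom-right entry is the makespan.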
Fig. 1. Grid graph of a flow shop scheduling problem with six machines and nine jobs.
These blocks are formed on machines 1, 2, 4 and 5, respectively. The vector U_π is constructed as U_π = (1, 2, 4, 8, 9).

4. Elements of tabu search

The TS algorithm proposed in this paper contains the following elements:
• initial solution,
• move mechanism,
• neighborhood,
• TS mechanism,
• stopping condition.
There are some other optional elements, such as an aspiration function, that some TS-based methods exploit. For a more detailed description of these elements, the reader may refer to Glover [23,24]. In the following sub-sections, we discuss how the elements mentioned above have been implemented in this paper.

4.1. Initial solution

To find an initial sequence, a constructive algorithm is to be adopted. Taillard [10] has experimentally concluded that the NEH algorithm performs better than other constructive methods. We have adopted this algorithm, with the modifications proposed by Taillard [10], to construct the initial sequence.

4.2. Moves and block properties

A move is a tool to form a sequence in the neighborhood of the current sequence. Previously, three types of moves have been used:
(i) Exchange any two adjacent jobs placed at the ath and (a + 1)th positions. Each move is identified by one parameter, i.e. a.
(ii) Exchange any two jobs placed at the ath and bth positions. Each move is identified by two parameters, i.e. a and b.
(iii) Remove the job placed at position a and insert it at position b. Each move is identified by two parameters, i.e. a and b.

Taillard [10] has experimentally shown that the last type is more efficient and effective than the first two. Therefore, we consider this type only, and the move v = (a, b) indicates removing the job placed at position a and inserting it at position b. Let π_v denote the permutation obtained by applying move v to permutation π. Then

π_v = (π(1), …, π(a − 1), π(a + 1), …, π(b), π(a), π(b + 1), …, π(N)) if a < b,
π_v = (π(1), …, π(b − 1), π(a), π(b), …, π(a − 1), π(a + 1), …, π(N)) if a > b.

It is notable that v = (a, b) and v = (b, a) create the same permutation if |a − b| = 1; such moves are called equivalent moves. Let the set V represent all the moves defined above. Note that only one out of each pair of equivalent moves is included in V. The total number of such moves is (N − 1)², i.e. |V| = (N − 1)². For example, in Fig. 1 the total number of moves is (9 − 1)² = 64.

4.3. Neighborhood

The neighborhood of a current permutation π is the set of permutations that are actually explored by the TS method. Each move defined above represents one solution around the current solution. As seen in Section 4.2, the number of moves to be evaluated in each iteration is high. To reduce the number of moves, the more desirable moves are to be identified. The reduction policy used in this paper is based on block properties and experimental observations. Consider block l and let V_l represent the set of moves that can be made by interchanging positions inside block l. If block l belongs to machine 1 or machine M, this set is enlarged by including positions 1 and N, respectively.
Mathematically, the set V_l is defined as below:

V_l = {(a, b): a, b ∈ {1, …, u_1 − 1}}            if l = 1 and block l belongs to machine 1,
V_l = {(a, b): a, b ∈ {u_{B−1} + 1, …, u_B}}      if l = B and block l belongs to machine M,    (1)
V_l = {(a, b): a, b ∈ {u_{l−1} + 1, …, u_l − 1}}  otherwise.

An elementary property implies that moves belonging to V_l cannot yield permutations with a makespan better than the current one, i.e. Cmax(π) ≤ Cmax(π_v) ∀v ∈ V_l, l = 1, 2, …, B. For example, in Fig. 1, V_1 = V_2 = V_4 = ∅ and V_3 = {(5, 6), (5, 7), (6, 7), (7, 5)}. Therefore, none of the moves belonging to V_3 can lead to a permutation with a makespan smaller than the current one. Although the number of moves to be evaluated is reduced in this way, it is still high. For further reduction of moves, we consider the experimental findings of Nowicki and Smutnicki [15]. Their observations show that for jobs placed inside a block, a few insertions in the adjacent blocks can be more promising than the other positions. More precisely, for job π(j) placed in block l, i.e. u_{l−1} < j < u_l, a few positions of the sequence u_l, u_l + 1, …, u_{l+1} on the right side and
similarly a few positions of the sequence u_{l−2}, u_{l−2} + 1, …, u_{l−1} on the left side are good enough. For cases where N/M ≥ 5, we observed that the single positions u_l on the right side and u_{l−1} on the left side are sufficient. Also, for job π(j) placed at j = u_{l−1}, the position u_{l−1} + 1 and a few positions of the sequence u_l, u_l + 1, …, u_{l+1} on the right side, and also the position u_{l−1} − 1 and a few positions of the sequence u_{l−2}, u_{l−2} − 1, …, u_{l−3} on the left side are sufficient. Based on these observations, the moves incorporated in the proposed method are defined in Table 1. As seen in Table 1, the incorporated moves strongly depend on the distribution of blocks. For permutation π, the set of moves in Table 1 is denoted by Ω(π). For example, the set of moves to be performed on the permutation π in Fig. 1 is shown in Table 2.

Table 1
Moves incorporated in the proposed method

N/M        Position of the jth job       Positions to be inserted
           in current permutation        Left                                Right
N/M ≥ 5    u_{l−1} < j < u_l             u_{l−1}                             u_l
           j = u_{l−1}                   u_{l−2}                             u_l
N/M < 5    u_{l−1} < j < u_l             u_{l−2}, u_{l−2} + 1, …, u_{l−1}    u_l, u_l + 1, …, u_{l+1}
           j = u_{l−1}                   u_{l−3}, u_{l−3} + 1, …, u_{l−2}    u_l, u_l + 1, …, u_{l+1}

Table 2
Set of moves to be performed on the permutation π shown in Fig. 1

j    Place of j          Positions at which the job π(j) is to be inserted
1    u_0                 {2, 3, 4}
2    u_1                 {4, 5, 6, 7, 8}
3    u_1 < j < u_2       {1, 2, 4, 5, 6, 7, 8}
4    u_2                 {1, 2, 8, 9}
5    u_2 < j < u_3       {2, 3, 4, 8, 9}
6    u_2 < j < u_3       {2, 3, 4, 8, 9}
7    u_2 < j < u_3       {2, 3, 4, 8, 9}
8    u_3                 {2, 3, 4, 9}
9    u_4                 {4, 5, 6, 7}

4.4. Tabu search mechanism

In TS-based methods, a certain move is selected in each iteration. As mentioned in Section 2, the selected move is stored in a data structure called the tabu list. This move will be forbidden for a certain number of iterations, called the tabu list size and denoted by s. Therefore, the currently selected move remains forbidden for the next s iterations. After s iterations, it is removed from the tabu list and can be selected again. This means that the tabu effect is eliminated suddenly. This situation causes the final solution to be sensitive to the size of the tabu list. As reported in the literature, it is difficult to find a proper value of the tabu list size, and mostly a tedious trial-and-error process is needed.
Neuro-dynamical TS is a recent TS mechanism in which each move is represented by a neuron. This architecture makes it possible to reduce the tabu effect of a move exponentially. A successful implementation of this approach has been reported for the quadratic assignment problem [20]. We now develop a neuro-dynamical TS for the flow shop scheduling problem. First, we try to represent the traditional TS process using neural networks. To do so, each move is represented by a neuron. Therefore, the proposed neural network contains (N − 1)² neurons. Let neuron (i, j) represent the move of inserting job π(i) at position j. In our proposed neural network architecture, the history of each neuron is memorized by its internal state, called the tabu effect. If a certain neuron fires, its output in the related iteration is set to 1 and the outputs of all other neurons are set to 0. The fired neuron is to be prevented from firing for the next s iterations. Consider the following neuro-dynamical equations:

γ_ij(t + 1) = β · η_ij(t),    (2)

η_ij(t) = Cmax(π_{v(t)}) / C*_max,    (3)

ζ_ij(t + 1) = Σ_{d=0}^{s−1} k^d · x_ij(t − d),    (4)

where x_ij(t) is the output of neuron (i, j) at iteration t. The term Cmax(π_{v(t)}) stands for the makespan if the move v = (i, j) is performed on the permutation at iteration t, i.e. π^(t). The term π^(0) stands for the initial permutation, which is constructed using the NEH algorithm. The term η_ij(t) represents the normalized value of the current makespan and is calculated by Eq. (3). The term C*_max in Eq. (3) represents the minimum makespan obtained so far. The parameter β is a scaling factor. The term γ_ij(t + 1) in Eq. (2) is called the gain effect of neuron (i, j) and indicates the quality of move (i, j). In Eq. (4), the term ζ_ij(t + 1) is called the tabu effect of neuron (i, j), which stores the history of this neuron over the last s iterations. The parameter k is a scaling factor. A neuron is selected to fire if it has a lower tabu effect and provides a better reduction in the makespan. Specifically, the neuron (i, j) fires if it has the minimum value of {γ_ij(t + 1) + ζ_ij(t + 1)} among all the neurons identified in Section 4.3. We call this the firing rule of the proposed EXTS algorithm. Once the neuron (i, j) fires, the job placed at position i in the current permutation is removed and inserted at position j. The resulting permutation becomes the next solution. Also, the output of neuron (i, j), i.e. x_ij(t + 1), is set to 1 and the outputs of the other neurons are set to 0. It may be noted that the firing rule explained above does not necessarily choose a move that results in a solution better than the best one obtained so far. In many existing TS methods, an aspiration function is considered, due to which the tabu status of such moves is ignored. Specifically, in the proposed EXTS algorithm this function could be implemented by ignoring the tabu effect of those moves (i, j) for which η_ij < 1. However, we experimentally observed that inclusion of this function in the EXTS algorithm deteriorates the quality of solutions. Therefore, this function has not been included in the EXTS algorithm.
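The firing rule amounts to an argmin over the candidate moves. A small illustrative sketch (the dict-based containers and names are ours, not part of the paper):

```python
def fire(candidates, gain, tabu):
    """Firing rule: select the neuron (move) with the minimum sum of
    gain effect (Eq. (2)) and tabu effect (Eq. (4))."""
    return min(candidates, key=lambda v: gain[v] + tabu[v])

# A move with a slightly worse gain but no tabu effect can win over a
# recently fired (tabu) move with a better gain:
moves = [(1, 2), (2, 3)]
gain = {(1, 2): 1.00, (2, 3): 0.98}  # normalized makespans, Eqs. (2)-(3)
tabu = {(1, 2): 0.00, (2, 3): 0.35}  # (2, 3) fired one iteration ago, k = 0.35
```

Here `fire(moves, gain, tabu)` selects (1, 2), since 1.00 + 0.00 < 0.98 + 0.35, even though (2, 3) has the better gain effect.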
By setting k → +∞, the neural network discussed above can perfectly simulate the behavior of the conventional TS method with a tabu list size equal to s. As seen in Eq. (4), when a particular neuron fires, its internal state ζ_ij would be +∞ for the next s iterations. Therefore, throughout the next s iterations, this neuron cannot fire again. In fact, in this architecture, a move is either tabu or not. However, it is also possible to reduce the tabu effect gradually. To do so, in Eq. (4), we
set 0 < k < 1 and s = t, because we want to reduce the tabu effect over time. Substituting these conditions in Eq. (4) yields the following equation:

ζ_ij(t + 1) = k · ζ_ij(t) + x_ij(t),  where ζ_ij(0) = 0, x_ij(0) = 0 ∀i, j.    (5)

As seen in Eq. (5), the tabu effect of each neuron decreases exponentially. For this reason, the proposed TS method is called exponential TS (EXTS). Unlike traditional TS methods, where a move is either tabu or not, the proposed neural network-based TS makes it possible to define moves with different degrees of tabu. For example, setting x_ij(t) = 1 makes neuron (i, j) fully tabu, while x_ij(t) = 0.5 makes it semi-tabu. We now describe the rules employed in the current research to identify tabu moves. Suppose move v* = (i*, j*) has been selected to perform on the current permutation, i.e. π^(t). The first rule makes the move v* and its reverse move, (j*, i*), prohibited; this is done by Eq. (6). Since the selected move v* implies that placing job π(i*) at position j* is better than other positions, this job is retained at position j* for the next few iterations. These rules are realized through the following equations:

x_{i*j*}(t + 1) = 1,   x_{j*i*}(t + 1) = 1,    (6)

x_{j*i}(t + 1) = 0.5,   i = 1, 2, …, N, i ≠ i*,    (7)

x_{i*j}(t + 1) = 0.5,   j = 1, 2, …, N, j ≠ j*, if |i* − j*| = 1.    (8)

In the proposed EXTS algorithm, these rules are used to set the outputs of the neurons.
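The exponential decay of Eq. (5) can be seen by simulating the recursion directly. A minimal sketch (the helper name is ours; k = 0.35 is one of the values used for this parameter in the paper's experiments):

```python
def tabu_effect_series(k, x_series):
    """Iterate Eq. (5): zeta(t+1) = k * zeta(t) + x(t), with zeta(0) = 0.

    x_series holds a neuron's outputs x(0), x(1), ...; the returned list
    holds the resulting tabu effects zeta(1), zeta(2), ...
    """
    zeta, out = 0.0, []
    for x in x_series:
        zeta = k * zeta + x
        out.append(zeta)
    return out

# A neuron fired at t = 0 (x = 1): its tabu effect fades as 1, k, k^2, ...
# instead of dropping abruptly from "tabu" to "free" after s iterations.
full = tabu_effect_series(0.35, [1, 0, 0, 0])
# A semi-tabu firing (x = 0.5, as in Eqs. (7)-(8)) starts at half the effect.
semi = tabu_effect_series(0.35, [0.5, 0, 0, 0])
```

This contrasts with a classical tabu list, where the effect would be a constant for s iterations and then zero.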
5. The proposed method

In the proposed method, a variable called bad counter is used to count the number of iterations during which no improvement in the makespan is obtained. Once bad counter exceeds a predefined control parameter, say max bad, the algorithm jumps back and the best permutation obtained so far is taken as the current permutation. This action is called a diversification scheme in TS terminology. The number of jumps back is counted by a variable called back counter. The proposed method stops when back counter exceeds a predefined control parameter called max back. The stepwise procedure is as follows:

Step 1: Set initial values for parameters max back, max bad, k and β. Set bad counter = 0, back counter = 0, t = 0. Set ζ_ij(0) = 0, x_ij(0) = 0 ∀i, j = 1, 2, …, N.
Step 2: Find the initial permutation, i.e. π^(0). Find the set of moves to be performed on the initial permutation, i.e. Ω(π^(0)).
Step 3: Set π* = π^(0), Ω* = Ω(π^(0)), ζ*_ij = ζ_ij(0), x*_ij = x_ij(0) ∀i, j = 1, 2, …, N and C* = Cmax(π^(0)).
Step 4: Use Eq. (5) to calculate ζ_ij(t + 1) ∀i, j = 1, 2, …, N.
Step 5: Calculate Cmax(π_v), substitute this value in Eqs. (2) and (3), and then calculate γ_ij(t + 1) ∀v = (i, j) ∈ Ω(π^(t)).
Step 6: Find move v* = (i*, j*) such that γ_{i*j*}(t + 1) + ζ_{i*j*}(t + 1) ≤ γ_ij(t + 1) + ζ_ij(t + 1) ∀v = (i, j) ∈ Ω(π^(t)).
Step 7: If bad counter = 0 and back counter = 0, set ζ*_ij = ζ_ij(t + 1) and Ω* = Ω(π^(t)).
Step 8: If bad counter = 0, then remove v* from Ω*.
Step 9: Set bad counter = bad counter + 1.
Step 10: If Cmax(π^(t)_{v*}) < C*, perform the move v* on π^(t) and call the new permutation π*. Set C* = Cmax(π*), back counter = 0, bad counter = 0.
Step 11: If bad counter ≤ max bad, perform the move v* on π^(t) and call the new permutation π^(t+1). Find the set of moves on the basis of π^(t+1), i.e. Ω(π^(t+1)). Set the outputs of the neurons, i.e. x_ij(t + 1), using Eqs. (6)–(8). Set t = t + 1 and go to Step 4.
Step 12: If bad counter > max bad, set bad counter = 0, back counter = back counter + 1. Set π^(t+1) = π*, Ω(π^(t+1)) = Ω* and ζ_ij(t + 2) = ζ*_ij ∀i, j = 1, 2, …, N.
Step 13: Set t = t + 1. If back counter ≤ max back, go to Step 5. Otherwise stop.

The proposed method starts with an initial permutation obtained by the NEH algorithm (Step 2). In Step 3, the initial solution is set as the best permutation obtained so far. In Step 4, the tabu effect is computed for all neurons (moves). The gain effects of the identified moves are calculated in Step 5. Step 6 finds the move for which the sum of the tabu effect and the gain effect is minimum among all the identified moves. Once the method leads to a better permutation, the history of the neurons' activation, i.e. ζ_ij, as well as the identified moves, are stored in ζ* and Ω*, respectively (Step 7). The move selected to be performed on the best permutation is removed from Ω* in Step 8. This operation prevents cycling and also forces the algorithm to go to regions not yet visited. If the selected move can improve the makespan obtained so far, in Step 10 the selected move is performed and the resulting permutation is considered the best permutation. In Step 11, the selected move is performed and the resulting permutation is considered the current permutation; the set of moves based on the newly obtained current permutation is identified and stored in Ω(π^(t+1)). For the selected move, the outputs of the neurons are updated using Eqs. (6)–(8). In Step 12, where no better permutation has been obtained during the last max bad iterations, the algorithm jumps back and the best permutation obtained so far is considered the current permutation. The states of neurons are
reloaded accordingly, and the network restarts from the state in which the best permutation was obtained. The elimination performed in Step 8 ensures that the same sequence will not be repeated.
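The overall search can be sketched in heavily simplified form. The sketch below uses the full insertion neighborhood instead of the block-restricted set Ω(π), applies the exponential tabu decay of Eq. (5) and the firing rule, and omits the jump-back (diversification) scheme; all names are illustrative and this is not the authors' implementation:

```python
import itertools

def makespan(t, perm):
    """Makespan of permutation perm; t[i][j] = time of job j on machine i."""
    C = [[0] * len(perm) for _ in t]
    for i in range(len(t)):
        for j in range(len(perm)):
            up = C[i - 1][j] if i else 0
            left = C[i][j - 1] if j else 0
            C[i][j] = max(up, left) + t[i][perm[j]]
    return C[-1][-1]

def exts_sketch(t, start, k=0.35, beta=10.0, max_iter=50):
    """Simplified EXTS-style loop: exponential tabu decay, greedy firing."""
    perm = list(start)
    best, best_c = list(start), makespan(t, start)
    N = len(start)
    zeta, fired = {}, {}  # tabu effects zeta and last outputs x(t)
    for _ in range(max_iter):
        # Eq. (5): zeta(t+1) = k * zeta(t) + x(t)
        for w in set(zeta) | set(fired):
            zeta[w] = k * zeta.get(w, 0.0) + fired.get(w, 0.0)
        # Evaluate insertion moves (a, b), skipping one of each
        # equivalent pair (|a - b| = 1 gives the same permutation).
        cands = []
        for a, b in itertools.permutations(range(1, N + 1), 2):
            if b == a - 1:
                continue
            p = list(perm)
            p.insert(b - 1, p.pop(a - 1))
            c = makespan(t, p)
            gain = beta * c / best_c  # gain effect, Eqs. (2)-(3)
            cands.append((gain + zeta.get((a, b), 0.0), c, (a, b), p))
        # Firing rule: minimum gain effect + tabu effect.
        _, c, v, perm = min(cands, key=lambda x: x[:3])
        # Eq. (6): the selected move and its reverse become tabu.
        fired = {v: 1.0, (v[1], v[0]): 1.0}
        if c < best_c:
            best, best_c = list(perm), c
    return best, best_c
```

On a small instance, the first firing already moves toward the optimum because the gain effect is proportional to the candidate makespan.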
6. Computational results

Osman and Potts [11] and Ogbu and Smith [14] have developed simulated annealing-based algorithms for solving flow shop scheduling problems. Ogbu and Smith [25] reported that the performance of these algorithms is comparable in terms of the quality of solutions and the efficiency of computation. Also, Reeves [26] used a genetic algorithm to solve the benchmark problems of Taillard [27] and reported that the simulated annealing algorithm proposed by Osman and Potts [11] performs slightly better than the genetic algorithm. On the other hand, Ben-Daya and Al-Fawzan [16] have proposed a TS-based method, called BF-TS, to solve the flow shop scheduling problem. They used the test problems proposed by Taillard [27] and reported that the BF-TS method performs better than the method proposed by Ogbu and Smith [14] in terms of the quality of solutions. Consequently, the BF-TS algorithm is more effective than all the methods mentioned above and, therefore, we compare our results with those obtained through the BF-TS algorithm. The results related to the computational time of the BF-TS approach were obtained on a 486 DX2 PC [16]. To remain comparable with the BF-TS method, the EXTS algorithm has been coded in Borland C++ and run on a 486 DX2 PC. We have selected all 23 test problems attempted by Ben-Daya and Al-Fawzan [16] for comparing our results with the BF-TS algorithm. The proposed method contains four parameters, viz. β, k, max bad and max back. These parameters affect the performance of the EXTS algorithm. Based on experimental observations, we have tuned the values of these parameters and derived the rules given in Table 3. These rules are used to set the values of the parameters in different problems. The scale parameter β is set to 10 in all the problems. Table 4 shows the makespan and the computation time of the EXTS and BF-TS methods for the tested problems.
As seen in Table 4, in five out of the 23 problems (problems 17, 20, 21, 22 and 23), the proposed EXTS method obtains a smaller makespan than the BF-TS method. The opposite is true in two problems (problems 13 and 18). The last column of Table 4 shows the percentage reduction in computation time (PRCT%) due to the EXTS algorithm as compared to the BF-TS method. As seen in this column, the EXTS algorithm has significantly reduced the computation time in all but one problem (problem 18). In
Table 3
Rules to set the values of parameters

Size                 max bad     max back     k
N/M < 5, N < 50      4 × N       3 × N        0.35
N/M < 5, N ≥ 50      3 × N       2 × N        0.35
5 ≤ N/M < 10         2 × N       0.75 × N     0.35
N/M ≥ 10             0.75 × N    0.5 × N      0.4
Table 4
Makespan and computation time of BF-TS and EXTS methods for problems adopted from Taillard [27]

No.   Problem size   BF-TS               EXTS                PRCT (%)
      (N × M)        Cmax    Time^a (s)  Cmax    Time^a (s)
1     20 × 5         1278    504         1278    18.1        96.4
2     20 × 5         1359    504         1359    18.3        96.3
3     20 × 5         1081    504         1081    31.7        93.7
4     20 × 5         1293    504         1293    57.4        88.6
5     20 × 5         1235    504         1235    22.6        95.5
6     20 × 10        1582    534         1582    64.1        88
7     20 × 10        1659    534         1659    100.1       81.2
8     20 × 10        1496    534         1496    71.7        86.5
9     20 × 10        1377    534         1377    73.1        86.3
10    20 × 10        1419    534         1419    76.5        85.6
11    20 × 20        2297    564         2297    259.6       53.9
12    20 × 20        2099    564         2099    194.6       65.5
13    20 × 20        2326    564         2329    288.1       48.9
14    20 × 20        2223    564         2223    262.6       53.4
15    20 × 20        2291    564         2291    380.4       32.5
16    50 × 5         2724    2940        2724    11.7        99.6
17    50 × 10        3037    3780        3035    84          97.7
18    50 × 20        3886    4860        3893    5046        −3.8
19    100 × 5        5493    12300       5493    70          99.4
20    100 × 10       5776    15660       5770    260         98.3
21    100 × 20       6345    18720       6306    5715        69.4
22    200 × 10       10894   35100       10869   2328        93.3
23    200 × 20       11398   44400       11327   8330        81.2

Average reduction in computation time: 77.7

^a Computation time is in seconds, obtained on a 486 DX2 PC so as to keep it comparable with the results of the BF-TS method [16].
problem 18, the EXTS algorithm has increased the computation time by 3.8%. However, this increment in computation time is negligible compared to the large decrement in all the other problems. The last row of Table 4 indicates that the EXTS algorithm has reduced the computation time by 77.7% on average. Another interesting observation in these experiments is the possibility of higher effectiveness and computational efficiency of the proposed EXTS algorithm over the BF-TS method on large problems. For example, in problem 23, with 200 jobs and 20 machines, the makespan obtained by the EXTS method is 11 327 against 11 398 for BF-TS. It is seen in Table 4 that the EXTS algorithm has obtained such a good-quality solution in only 18.8% of the time needed by the BF-TS method.
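The size-based rules of Table 3 can be read as a simple function of N and M (a sketch; the function name and return convention are ours, not the paper's):

```python
def exts_parameters(N, M):
    """Return (max_bad, max_back, k) per the size-based rules of Table 3.

    The scale parameter beta is fixed at 10 in these experiments.
    """
    ratio = N / M
    if ratio < 5:
        if N < 50:
            return 4 * N, 3 * N, 0.35  # e.g. the 20 x 5 instances
        return 3 * N, 2 * N, 0.35      # e.g. the 50 x 20 instances
    if ratio < 10:
        return 2 * N, 0.75 * N, 0.35   # 5 <= N/M < 10
    return 0.75 * N, 0.5 * N, 0.4      # N/M >= 10
```

Note how the rules shrink max bad and max back (and raise k) as the job-to-machine ratio grows, limiting the search effort on the larger, flatter instances.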
As stated, the value of β has been fixed at 10 for all the problems in the experimentation mentioned above. However, this does not mean that the EXTS algorithm is not sensitive to this parameter. In general, with a small value of β, the EXTS algorithm is likely to search a wide range of the solution space, though this may increase the computation time. On the contrary, a large value of this parameter will result in quick convergence to a local optimum. Table 5 in the appendix shows the makespan and the computation time of each problem for β = 8 and β = 12. The values of the other parameters are set according to the rules given in Table 3. Here the EXTS algorithm has been run on a 550 MHz Pentium III PC. As seen in Table 5, the EXTS algorithm obtains better solutions in eight problems (problems 7, 9, 13, 15, 18, 20, 21 and 23) when β = 8, whereas it does so in only three problems (problems 8, 12 and 17) for β = 12.
Table 5
Makespan and computation time of the EXTS algorithm for β = 8 and β = 12 for the problems adopted from Taillard [27]

No.   Problem size   β = 8               β = 12
      (N × M)        Cmax    Time^a (s)  Cmax    Time^a (s)
1     20 × 5         1278    1.3         1278    1.1
2     20 × 5         1359    1.5         1359    1.4
3     20 × 5         1081    2.2         1081    2.2
4     20 × 5         1293    3           1293    2.2
5     20 × 5         1235    1.5         1235    1.6
6     20 × 10        1582    5.2         1582    5
7     20 × 10        1659    11.1        1664    5.5
8     20 × 10        1499    5.2         1496    5.2
9     20 × 10        1378    8.5         1381    5.5
10    20 × 10        1419    5.4         1419    6.2
11    20 × 20        2297    22.4        2297    24.9
12    20 × 20        2101    31.5        2099    14.6
13    20 × 20        2330    19.5        2336    13.7
14    20 × 20        2229    17.8        2229    19.1
15    20 × 20        2291    25.2        2298    17.3
16    50 × 5         2724    0.82        2724    0.7
17    50 × 10        3034    4.6         3025    8.4
18    50 × 20        3893    575.4       3896    350
19    100 × 5        5493    4.9         5493    4.9
20    100 × 10       5771    19.3        5777    15.5
21    100 × 20       6326    242.8       6333    668.7
22    200 × 10       10872   154.4       10872   152.8
23    200 × 20       11326   421.5       11339   548.8

^a Computation time is in seconds, obtained on a 550 MHz Pentium III PC.
7. Discussion and conclusion

In this paper, a neural network-based TS method, termed EXTS, is developed to solve the flow shop scheduling problem. The EXTS algorithm is an improvement method: it takes an initial permutation from a constructive method such as the NEH algorithm and exploits a neuro-dynamical structure to improve this permutation iteratively. The proposed EXTS algorithm differs from other TS-based methods in that it reduces the tabu effect exponentially. Based on experimental tests, some rules are presented to set the values of the control parameters. The EXTS algorithm is tested on 23 problems adopted from Taillard [27] and compared with the BF-TS method proposed by Ben-Daya and Al-Fawzan [16]. The experimental observations verified the effectiveness and efficiency of the EXTS algorithm over the BF-TS method. The proposed EXTS algorithm offers the following advantages:
1. The EXTS integrates the flow shop scheduling problem and neural networks, thus making it possible to exploit the advantages of neural networks in the flow shop scheduling area.
2. Rules are provided to set the values of the parameters of the EXTS algorithm for different problems.
3. The EXTS introduces a new mechanism in the application of the TS method that has the potential to reduce the computation time and improve the makespan.
The EXTS method can be applied in other areas of combinatorial optimization with appropriate modifications.
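The conclusion notes that EXTS obtains its initial permutation from a constructive method such as the NEH algorithm of Nawaz, Enscore and Ham [8]. As background, the following is a minimal sketch of the standard NEH heuristic for the permutation flow shop; the function names and the small three-job instance are illustrative, not taken from the paper.

```python
def makespan(perm, p):
    """Completion time of the last job on the last machine for a
    permutation flow shop, where p[job][machine] is the processing time."""
    m = len(p[0])
    c = [0.0] * m  # c[k]: completion time of the latest job on machine k
    for job in perm:
        c[0] += p[job][0]
        for k in range(1, m):
            # A job starts on machine k only after it finishes on k-1
            # and machine k is free.
            c[k] = max(c[k], c[k - 1]) + p[job][k]
    return c[-1]

def neh(p):
    """NEH constructive heuristic: order jobs by decreasing total
    processing time, then insert each job into the position of the
    partial sequence that yields the smallest makespan."""
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [order[0]]
    for job in order[1:]:
        seq = min((seq[:i] + [job] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq

# Toy instance: 3 jobs on 2 machines (illustrative data).
p = [[5, 4], [3, 6], [4, 2]]
print(neh(p), makespan(neh(p), p))  # -> [1, 0, 2] 15
```

An improvement method such as EXTS would then take the sequence returned by `neh` as its starting point and iteratively perturb it.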
Acknowledgements The authors would like to put on record their appreciation to the anonymous referees for their valuable suggestions, which have enhanced the quality of the paper over its earlier versions.
Appendix. The results obtained by the EXTS algorithm with parameter values of 8 and 12 are shown in Table 5.
References
[1] Johnson SM. Optimal two- and three-stage production schedules with setup times included. Naval Research Logistics Quarterly 1954;1(1):61–8.
[2] Lomnicki ZA. A branch and bound algorithm for the exact solution of the three-machine scheduling problem. Operational Research Quarterly 1965;16(1):89–100.
[3] Palmer DS. Sequencing jobs through a multi-stage process in the minimum total time: a quick method of obtaining a near optimum. Operational Research Quarterly 1965;16(1):101–7.
[4] Campbell HG, Dudek RA, Smith ML. A heuristic algorithm for the n-job, m-machine sequencing problem. Management Science 1970;16(B):630–7.
[5] Gupta JND. A functional heuristic algorithm for the flow shop scheduling problem. Operational Research Quarterly 1971;22(1):39–47.
[6] Dannenbring DG. An evaluation of flow shop sequencing heuristics. Management Science 1977;23(11):1174–82.
[7] Röck H, Schmidt G. Machine aggregation heuristics in shop scheduling. Methods of Operations Research 1982;45:303–14.
[8] Nawaz M, Enscore Jr EE, Ham I. A heuristic algorithm for the m-machine, n-job flow shop sequencing problem. OMEGA: International Journal of Management Science 1983;11(1):91–5.
[9] Turner S, Booth D. Comparison of heuristics for flow shop sequencing. OMEGA: International Journal of Management Science 1987;15:75–8.
[10] Taillard E. Some efficient heuristic methods for the flow shop sequencing problem. European Journal of Operational Research 1990;47:65–74.
[11] Osman IH, Potts CN. Simulated annealing for permutation flow shop scheduling. OMEGA: International Journal of Management Science 1989;17:551–7.
[12] Widmer M, Hertz A. A new heuristic for the flow shop sequencing problem. European Journal of Operational Research 1989;41:186–93.
[13] Ho JC, Chang YL. A new heuristic for the n-job, m-machine flow shop problem. European Journal of Operational Research 1990;52:194–206.
[14] Ogbu FA, Smith DK. The application of the simulated annealing algorithm to the solution of the n/m/Cmax flow shop problem. Computers and Operations Research 1990;17:243–53.
[15] Nowicki E, Smutnicki C. A fast tabu search algorithm for the permutation flow shop problem. European Journal of Operational Research 1996;91:160–75.
[16] Ben-Daya M, Al-Fawzan M. A tabu search approach for the flow shop scheduling problem. European Journal of Operational Research 1998;109:88–95.
[17] Glover F. Future paths for integer programming and links to artificial intelligence. Computers and Operations Research 1986;13:533–49.
[18] Glover F, Laguna M. Tabu search. Boston: Kluwer Academic Publishers; 1997.
[19] Salhi S. Defining tabu list size and aspiration criterion within tabu search methods. Computers and Operations Research 2002;29:67–86.
[20] Hasegawa M, Ikeguchi T, Aihara K. Exponential and chaotic neuro-dynamical tabu searches for quadratic assignment problems. Control and Cybernetics 2000;29(3):773–88.
[21] Hasegawa M, Ikeguchi T, Aihara K, Itoh K. A novel chaotic search for quadratic assignment problems. European Journal of Operational Research 2002;139:543–56.
[22] Hasegawa M, Ikeguchi T, Aihara K. Combination of chaotic neurodynamics with the 2-opt algorithm to solve traveling salesman problem. Physical Review Letters 1997;79:2344–7.
[23] Glover F. Tabu search, part I. ORSA Journal on Computing 1989;1:190–206.
[24] Glover F. Tabu search, part II. ORSA Journal on Computing 1990;2:4–32.
[25] Ogbu FA, Smith DK. Simulated annealing for the permutation flow-shop problem. OMEGA: International Journal of Management Science 1991;19:64–7.
[26] Reeves CR. A genetic algorithm for flow shop sequencing. Computers and Operations Research 1995;22(1):5–13.
[27] Taillard E. Benchmarks for basic scheduling problems. European Journal of Operational Research 1993;64:278–85.