0263–8762/01/$10.00+0.00 © Institution of Chemical Engineers
Trans IChemE, Vol 79, Part A, September 2001
A COMPARATIVE STUDY OF ANNEALING METHODS FOR BATCH SCHEDULING PROBLEMS

B. DATTA¹, V. K. JAYARAMAN and B. D. KULKARNI
Chemical Engineering Division, National Chemical Laboratory, Pune, India.
Batch scheduling is an important problem, relevant to a large sector of the processing industries. Methods like simulated annealing have traditionally been used to deal with scheduling problems that are combinatorially complex. In this paper the performance of four different annealing strategies, Simulated Annealing (SA), Threshold Acceptance (TA), and Multicanonical Jump Walk Annealing (MJWA) with and without window factor scheduling, has been compared for small- and large-size problems. Criteria such as robustness of the method and mean deviation from the optimal solution reveal that MJWA with window factor scheduling is far superior to SA, TA and MJWA without window factor scheduling.

Keywords: batch; computation; optimization; product processing; multicanonical ensembles; combinatorial minimization.
INTRODUCTION

Batch scheduling is an important operation in the processing industries. Scheduling is necessitated whenever a processing system is used to produce multiple products by sharing the available production time among them. Batch processes can accommodate varied operating conditions and usually have complex time-varying dynamics, non-idealities and non-linearities, which make their optimization a difficult task. Several deterministic programming techniques have been successfully used by researchers for different important classes of practically relevant scheduling applications in chemical engineering1–10. Most batch scheduling problems belong to the NP-complete class11, and methods such as random and heuristic strategies, the RAES (Rapid Access with Extensive Search) algorithm3 and Simulated Annealing (SA) have been successful in obtaining near-optimal solutions for small-scale scheduling problems11. Several attempts have been made in the past to modify the basic simulated annealing algorithm with a view to improving its performance4,12. Another algorithm, Threshold Acceptance (TA)13, has been successfully used as an alternative to SA in computer science applications; it is faster and simpler to implement. Recently a new algorithm, Multicanonical Jump Walk Annealing (MJWA)14, has been introduced for energy minimization in molecular modeling studies. This new method has been found to be superior to SA for the geometric optimization of complex atom clusters like the Lennard-Jones and Morse clusters. In this work the MJWA has been modified by incorporating a window scheduling algorithm, and the efficacy of the modified MJWA has been evaluated for batch scheduling problems with different operating policies against the backdrop of the two other annealing algorithms, SA and TA. A brief introduction to batch scheduling and the policies used here is given before the annealing algorithms that have been compared are discussed.
MULTIPRODUCT BATCH PROCESSES Problem Description The general multiproduct scheduling problem is characterized by1,4:
· A set of N products or batches to be produced.
· A set of M processing units available for this purpose.
· A performance criterion with respect to which the schedule is optimized.
· A matrix of processing times (tij) associated with each product i and unit j.
· A set of rules which govern the production processes, involving:
  · the order in which the operations must be performed for each product;
  · regulation of intermediate storage between processing units;
  · allowances made for subdividing product runs;
  · constraints on the production order for some products, and other factors such as the time required to set up a unit for production or the time to transfer a product between units for processing.
¹Present address: Department of Chemical Engineering, Indian Institute of Technology, Kharagpur, India.
The batch scheduling problem is essentially a combination of two interlinked sub-problems, sequencing and makespan optimization: finding the optimal feed sequence of products into the units so as to use the minimum possible processing time (makespan). On the basis of the intermediate storage available to the products between units, several different operating policies can be framed, for example Unlimited Intermediate Storage (UIS), Finite Intermediate Storage (FIS), No Intermediate Storage (NIS), Zero Wait (ZW) and Mixed Intermediate Storage (MIS). This paper deals specifically with the UIS, FIS and NIS policies in evaluating the performance of the different annealing algorithms1,2.

Unlimited Intermediate Storage (UIS)

Intermediate products are removed from units as soon as processing is completed, and unlimited storage is available between each processing stage to store partially processed products. A batch process plant operating under the UIS policy has M − 1 storage locations, one between every pair of consecutive units. This implies that a product i which is finished at unit j need not occupy that unit until unit (j + 1) can process i: if unit (j + 1) is busy, product i can be transferred from unit j to an available storage at any time. In the simplest case, when only the processing times are considered, the completion time C_ij for product i on unit j is given by the recurrence relation

C_ij = max[C_(i-1)j, C_i(j-1)] + t_(k_i)j    (1)

where t_(k_i)j is the processing time in unit j of product k_i, the ith product in the sequence k1-k2-...-kN.

Finite Intermediate Storage (FIS)

For the FIS policy, a finite number of storage units are available between the stages of production. The interstage storage capacity is measured in terms of the number of units rather than the physical size of storage, since it is usually assumed that each storage unit can temporarily hold any intermediate product. The N-product batch plant is assumed to have M units in series.
Between consecutive units there are z_j storage units, where z_j ≥ 0 and 1 ≤ j ≤ M − 1. The completion time C_ij for this policy is given by:

C_ij = max[C_(i-1)j, C_i(j-1), C_(i-z_j-1)(j+1) - t_(k_i)j] + t_(k_i)j    (2)
No Intermediate Storage (NIS)

In the NIS policy, no intermediate storage is available to hold semi-processed products (batches). This implies that if the next unit (j + 1) for a product i in unit j is busy, the product must be held in unit j. Without consideration of transfer or setup times, the recurrence relation that yields the completion time C_ij for the process is:

C_ij = max[C_i(j-1), C_(i-1)(j+1) - t_(k_i)j] + t_(k_i)j    (3)
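As a concrete illustration, the UIS recurrence, equation (1), can be sketched in Python. The function and variable names are ours, not the paper's:

```python
# A minimal sketch of the UIS completion-time recurrence, eq. (1):
#   C_ij = max[C_(i-1)j, C_i(j-1)] + t_(k_i)j
# `t[k][j]` is the processing time of product k on unit j and `seq` is the
# product sequence k1-k2-...-kN (0-based indices here).

def uis_makespan(t, seq):
    """Makespan of `seq` under unlimited intermediate storage."""
    M = len(t[0])
    C = [0.0] * (M + 1)            # C[j] holds C_(i-1)j while row i is built
    for k in seq:                  # products in sequence order
        for j in range(1, M + 1):  # C[j-1] is already C_i(j-1)
            C[j] = max(C[j], C[j - 1]) + t[k][j - 1]
    return C[M]
```

Equations (2) and (3) extend the same dynamic programme with the storage and blocking terms for the FIS and NIS policies.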
Below, a few of the assumptions inherent in all three types of multiproduct scheduling problems are summarized4:
· The operation is non-preemptive: a unit cannot process more than one product at a time, nor can a product be processed by more than one unit at a time.
· All products are produced in the same order on each processing unit (permutation schedules).
· Storage is always available for the last product on the last unit, irrespective of the policy being implemented.
· Transfer and setup times are negligible.
· The above formulations represent simple recipes with no recycled material, shared intermediates, reuse of equipment or due-date constraints for products.

THE ANNEALING ALGORITHMS

Simulated Annealing

Simulated Annealing (SA) is a stochastic computational technique for finding near-global optimum solutions to combinatorial optimization problems. The method draws inspiration from the thermodynamic process of cooling (annealing) molten metals to attain the lowest free energy state. The root idea of the method is to allow occasional wrong-way moves that can help locate the true (global) minimum. The probability associated with such wrong moves is given by the Boltzmann probability:

P = exp(-ΔE/T)    (4)
where ΔE is the change in the value of the function from one point to the next and T is the temperature (control parameter). All the terms are exact equivalents of physical parameters like energy and temperature: for function optimization, energy refers to the value of the function, and temperature is a control parameter that regulates the process of annealing. The consideration of such a probability distribution leads to the generation of a Markov chain of points. The acceptance criterion given by equation (4) is referred to as the Metropolis criterion for SA. In the course of the simulations, the temperature is systematically annealed using a schedule like Kirkpatrick's exponential schedule,

T_(i+1) = xT_i    (5)

where x is the cooling factor, with a value generally between 0.9 and 1.0. A flowchart for SA based on the Metropolis criterion is given in Figure 1. Simulated annealing was applied to the physical design of computers, wiring and placement on electronic circuitry by Kirkpatrick et al.15 and subsequently used to solve NP-complete combinatorial optimization problems like the Traveling Salesman Problem (TSP). Ku and Karimi11 evaluated the performance of SA for batch scheduling problems and found it to perform satisfactorily for small-size problems. SA has since found use in several real-life engineering problems like reactor network synthesis16 and process optimization17. Though SA is superior to several random and heuristic strategies, it suffers from some major drawbacks:
· There is a risk of premature convergence, as SA can get trapped in local minima for several multimodal functions. There is also no way to know whether the true minimum has been reached.
Figure 1. Simulated Annealing (SA) algorithm.
· For a poor initial guess an acceptable solution may not be obtained.
· SA is overkill for smooth energy landscapes.
· SA derives very little information during the search and hence is often unable to overcome the large barriers that are frequent in complex function landscapes.
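The SA loop of Figure 1 can be sketched as follows. This is a hedged illustration, not the authors' code: `cost` is any sequence objective (here, a makespan), the neighbourhood is a simple adjacent swap, and the parameter defaults are illustrative only.

```python
import math
import random

def simulated_annealing(cost, seq, t_start=400.0, t_err=0.01,
                        x=0.9, iters=100, seed=0):
    """Minimize cost(sequence) by SA with Metropolis acceptance, eq. (4)."""
    rng = random.Random(seed)
    cur = list(seq)
    best = list(seq)
    T = t_start
    while T > t_err:
        for _ in range(iters):
            cand = cur[:]
            i = rng.randrange(len(cand) - 1)       # swap two adjacent elements
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            dE = cost(cand) - cost(cur)
            # Metropolis criterion, eq. (4): downhill moves are always taken,
            # uphill ("wrong-way") moves are taken with probability exp(-dE/T)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                cur = cand
                if cost(cur) < cost(best):
                    best = cur[:]
        T *= x                                     # exponential cooling, eq. (5)
    return best, cost(best)
```

Threshold Acceptance, described next, keeps the same loop but replaces the Metropolis test with a deterministic threshold on dE.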
Threshold Acceptance

Dueck and Scheuer13 introduced the method of Threshold Acceptance (TA) for the solution of combinatorial optimization problems. The method is even simpler than Simulated Annealing (SA); the power of the algorithm lies in its easy implementation and, hence, the shorter computational time needed for
locating the optimum. TA is very similar to SA in basic structure; a flowchart for its implementation is supplied in Figure 2. The essential difference between the two algorithms lies in their acceptance rules. TA does not consider any probability distribution like equation (4) of SA. However, like the temperature, there is a parameter called the threshold which regulates the algorithm. This parameter is initially set to a high value and annealed, like the temperature in SA, by a schedule as in equation (5). At each value of the threshold a specific number of sample
Figure 2. Threshold Acceptance (TA) algorithm.
configurations are considered and the iterations carried out. The acceptance criterion for moves in TA is given by:

ΔE < Th    (6)
where ΔE represents the change in the quality of the function being optimized and Th represents the value of the current threshold. The method accepts every new configuration that is not much worse than the old one; hence this may also include configurations that do not improve the value of the function but satisfy equation (6). The algorithm terminates when Th falls below an 'error' value. Computationally much simpler than SA, TA has been found to yield better results than SA for some difficult optimization problems like the TSP and the problem of error-correcting codes. However, it suffers from drawbacks similar to those of SA, the basic methodology of the two being similar.

Multicanonical Jump Walk Annealing

Multicanonical Jump Walk Annealing (MJWA) is a recent development in the field of stochastic optimization techniques. Xu and Berne14 introduced it for the geometric optimization of complex atom clusters like the Lennard-Jones and Morse clusters and the BLN protein model. Basically, MJWA combines the method of SA with the method of multicanonical ensembles18 and assimilates their best features. SA can cross barriers in the function landscape at high temperatures, but once the temperature anneals to low values there is not sufficient excitation available to do so. As a consequence the search is prone to get stuck in local minima, in particular for the complex landscapes that are common for multimodal functions. This drawback of SA is overcome in MJWA by attaching the method of multicanonical ensembles to the high-energy end of the landscape. The 'information theoretic' methodology of the multicanonical approach is to sample the configurations with a probability (also referred to as the weight factor) r_mu(E) inversely proportional to the density of energy states O(E) for the corresponding sampling, i.e.

r_mu(E) ∝ 1/O(E)    (7)
The method relies strongly on a 'feedback thermodynamic approach'. The density of states is not known beforehand19, and its calculation is the main driving force of the algorithm. An energy histogram H(E) of the sampling is obtained, and from it the microcanonical entropy S(E) is found. The entropy is used in the next iteration to calculate the weight factor r_mu(E). Knowing the weight factor is equivalent to estimating the density of states, as they are related by equation (7). This effectively solves the problem at hand. The multicanonical ensemble is defined by the condition:

P(E) ∝ O(E) r_mu(E) ∝ O(E) · (1/O(E)) = 1    (8)
The relation in equation (8) implies that the distribution of the configurations is essentially 'flat'. The flatness implies that the simulation can move equitably between
regions of high and low probabilities. The transition probability W of the moves is chosen as:

W(x → x') / W(x' → x) = exp{-[S(E(x')) - S(E(x))]}    (9)

where the entropy S(E) is defined as:

S(E) = ln(O(E))    (10)
Further, the energy histogram H(E), which gives the frequency of occurrence of each energy range in the sampling, should satisfy (due to equation (7)):

H(E) ∝ O(E) r_mu(E)    (11)

If the highest and lowest energies sampled in the simulation are E_max and E_min respectively, then the density of energy states is estimated, from equations (10) and (11), as:

S(E) = ln O(E) = ln[H(E)/r_mu(E)]
S(E) = ln(H(E)) - ln(r_mu(E))    (12)
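A hedged sketch of this histogram-based weight update, assuming the energies are discretized into bins (all names here are illustrative, not from the paper):

```python
import math

def update_weights(hist, weights):
    """One multicanonical weight update from a sampling run.

    hist[e]    -- H(E): visit count of energy bin e in the last run
    weights[e] -- r_mu(E) used to generate that run
    Returns the (unnormalized) weights for the next run.
    """
    new = {}
    for e, H in hist.items():
        if H == 0:
            new[e] = weights[e]                     # unsampled bin: keep old weight
        else:
            S = math.log(H) - math.log(weights[e])  # eq. (12): S = ln H - ln r_mu
            new[e] = math.exp(-S)                   # eq. (7): r_mu(E) ∝ 1/O(E)
    return new
```

In practice the returned weights would be normalized; a flat histogram in the next run indicates that r_mu(E) approximates 1/O(E) well.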
These relations hold for E_min ≤ E ≤ E_max and are used to update the estimate of O(E), which in turn gives r_mu(E) for the next iteration. In each iteration the range of the sampled energies increases in width and extends towards the low-energy end. The multicanonical method can thus intelligently pick the direction of moves to escape local minima. This is possible because the effective temperature becomes lower as the energy decreases, which helps a system stuck in a high-energy metastable state to generate larger thermal excitations than one in a low-energy state, enabling the system to escape local minima. The method has been successfully used to solve the benchmark combinatorial optimization problem, the Traveling Salesman Problem20. In spite of being a significant improvement over SA, the multicanonical method has considerable difficulty in exploring low-energy regions: since the weight factor is derived from the previous iteration, no information is available about the unexplored low-lying regions of the function landscape. MJWA alleviates the shortcomings of both the multicanonical method and SA by coupling them together. By shuttling back and forth between the canonical and multicanonical ensembles (hence the name 'jump walk') it can search the high- and low-energy ends of a function landscape with equal probability and thereby locate the true minimum. The basic structure of the algorithm is outlined in Figure 3.

Window Factor Scheduling

MJWA further involves placing a hard-wall potential, in terms of the window factor E_window, at the upper end of the annealing direction. All samples beyond the window are rejected. This helps regulate the size of the sampling interval and thus enables the method to move out of local minima within the energy interval (if their barrier heights do not exceed the upper bound of the energy range).
The MJWA method for optimizing functions has been further augmented by introducing a novel schedule for the window factor that helps enhance the performance of the algorithm. Instead of keeping the window factor constant (that has to be
Figure 3. Multicanonical Jump Walk Annealing (MJWA) algorithm.
arbitrarily chosen over a substantial range), a schedule is introduced (shown by the dotted box in Figure 3):

E_window = (E_max - E_min) · f    (13)
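A hedged sketch of the hard-wall test built on equation (13). We interpret E_window as a width above the lowest sampled energy E_min, an assumption consistent with 0 < f < 1; any sample whose energy lies beyond the wall is rejected:

```python
def window_reject(E, E_min, E_max, f):
    """True if a sample of energy E falls beyond the hard wall."""
    E_window = (E_max - E_min) * f        # eq. (13)
    return E > E_min + E_window
```

As f shrinks, the wall moves down and the sampling interval tightens around the low-energy end.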
where 0 < f < 1. The dynamic scheduling of the window helps the method automatically adjust the energy interval of the samplings from time to time and hence choose the best course of annealing. It is this fact that makes MJWA with an optimized window schedule more effective than the constant-window case. In MJWA the temperature is simultaneously annealed using a schedule as in equation (5).

RESULTS FOR BATCH SCHEDULING

The performance of the Multicanonical Jump Walk Annealing (MJWA) algorithm, with and without window scheduling, for batch scheduling problems (UIS, FIS and NIS policies) is evaluated against Simulated Annealing (SA) and Threshold Acceptance (TA). The comparative study was carried out keeping the number of makespan evaluations the same for all the algorithms. All computations were performed on a Pentium-II, 350 MHz machine. Test examples with 6, 8, 10, 20 and 30 products and between 3 and 10 units have been considered for the comparative analysis.
For all the problems, ten test examples were randomly generated (for each N × M configuration), with the elements of each matrix drawn from a uniform distribution over the range 1–100.

Move Schedule

The initial sequence for each of the algorithms is generated randomly in each run. New sequences are generated in subsequent samplings by choosing an element of the sequence at random and swapping its adjacent neighbours: from the sequence (2-3-1-4), if position 2 is chosen, the new sequence is (1-3-2-4). Moves are accepted as per the respective acceptance criterion of each algorithm given in the previous section.

Cooling Schedule

The same cooling schedule is implemented for the control parameters, the temperature for SA and MJWA and the threshold for TA. The parameters used for the different sizes of processing matrices in the performance evaluation are listed in Table 1. The results for the UIS policy are tabulated in Tables 2–4, for the FIS policy in Tables 5–7 and for the NIS policy in Tables 8–10.
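The move schedule described above can be sketched as follows; the function name is ours, and positions are 0-based so only interior elements (those with two neighbours) are eligible:

```python
import random

def neighbour_swap(seq, rng=random):
    """Return a new sequence with the two neighbours of a random element swapped."""
    new = list(seq)
    p = rng.randrange(1, len(new) - 1)       # random interior position
    new[p - 1], new[p + 1] = new[p + 1], new[p - 1]
    return new
```

For the paper's example, choosing the element at (1-based) position 2 of (2-3-1-4) swaps its neighbours 2 and 1, giving (1-3-2-4).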
The tabulations have been carried out on the following three bases:
% Mean Deviation from the Optimum Makespan

For smaller problems the optimal makespan can be found by brute-force enumeration. For larger problems (10 × 10 and higher), the optimum makespan refers to the best solution obtained by any of the algorithms. For each sample matrix the deviation of the best result of a given method from the best among all methods is calculated; the % mean deviation for a specific matrix size is the mean of these deviations expressed as a percentage. Tables 2, 5 and 8 compare this performance index for the various methods under the different storage policies.
% Optimal Proportion for the Algorithms For a particular matrix size the performance of the algorithms differs from case to case. The % Optimal Proportion gives an idea about which algorithm has been the most successful in obtaining the best results for the case at hand. 100% Optimal Proportion refers to 100% success rate of an algorithm in locating the best makespan. Tables 3, 6 and 9 give the details for the cases analyzed here.
Table 1. Number of function evaluations used in tabulating results.

Sl no  Problem size  Tstart  Terr  x     No of iterations  No of function evaluations
1      6 × 4         400     0.01  0.90  100               20200
2      8 × 3         400     0.01  0.90  100               20200
3      10 × 10       400     0.01  0.95  300               62100
4      20 × 10       400     0.01  0.95  400               82800
5      30 × 10       800     0.01  0.95  600               132600
Percentage of Solutions within 'k%' of the Optimum Makespan

This comparison refers to the deviation of the results of all trial runs of all the algorithms from the best makespan obtained in any one run of the successful algorithm. The sample values of 'k' considered are:

k = 0: the percentage of runs of an algorithm that strike exactly the best-obtained result.
k < 1: the percentage of runs that give results within 1% of the best-obtained solution.
k > 1: the percentage of runs that give results more than 1% above the best-obtained solution (these could be termed 'inaccurate').
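The classification can be sketched as (names are ours):

```python
def classify_runs(makespans, best):
    """Bin each run's makespan by its % deviation from the best-obtained one."""
    counts = {"k=0": 0, "k<1": 0, "k>1": 0}
    for m in makespans:
        dev = 100.0 * (m - best) / best
        if dev == 0:
            counts["k=0"] += 1
        elif dev < 1:
            counts["k<1"] += 1
        else:
            counts["k>1"] += 1
    n = len(makespans)
    return {k: 100.0 * c / n for k, c in counts.items()}
```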
Table 2. % Mean Deviation from the optimum makespan (UIS Policy).

                 MJWA
Sl no  Case      Constant window  Dynamic window  SA    TA
1      6 × 4     0.0              0.0             0.0   0.0
2      8 × 3     0.0              0.0             0.0   0.0
3      10 × 10   0.0              0.0             0.21  0.27
4      20 × 10   0.37             0.0             1.63  1.29
5      30 × 10   0.91             0.0             1.52  1.78

Table 3. % Optimal Proportion for the algorithms (UIS Policy).

                 MJWA
Sl no  Case      Constant window  Dynamic window  SA   TA
1      6 × 4     100              100             100  100
2      8 × 3     100              100             100  100
3      10 × 10   100              100             70   60
4      20 × 10   20               100             0    0
5      30 × 10   0                100             0    0

Table 4. Percentage of solutions within 'k%' of optimum makespan (UIS Policy).

                 MJWA, constant     MJWA, dynamic      SA                 TA
Sl no  Case      k=0  k<1  k>1      k=0  k<1  k>1      k=0  k<1  k>1      k=0  k<1  k>1
1      6 × 4     100    0    0      100    0    0      100    0    0      100    0    0
2      8 × 3     100    0    0      100    0    0       88   12    0      100    0    0
3      10 × 10    53   30   17       70   25    5       35   38   27       50   30   20
4      20 × 10     5   25   70       20   60   20        0    3   97        0    8   92
5      30 × 10     0   25   75       10   63   27        0    0  100        0    0  100

Table 5. % Mean Deviation from the optimum makespan (FIS Policy).

                 MJWA
Sl no  Case      Constant window  Dynamic window  SA    TA
1      6 × 4     0.0              0.0             0.0   0.0
2      8 × 3     0.0              0.0             0.0   0.0
3      10 × 10   0.15             0.0             0.63  0.86
4      20 × 10   0.73             0.0             1.31  1.51
5      30 × 10   1.49             0.0             2.04  1.94

Table 6. % Optimal Proportion for the algorithms (FIS Policy).

                 MJWA
Sl no  Case      Constant window  Dynamic window  SA   TA
1      6 × 4     100              100             100  100
2      8 × 3     100              100             100  100
3      10 × 10   80               100             40   40
4      20 × 10   0                100             0    0
5      30 × 10   0                100             0    0
Table 7. Percentage of solutions within 'k%' of optimum makespan (FIS Policy).

                 MJWA, constant     MJWA, dynamic      SA                 TA
Sl no  Case      k=0  k<1  k>1      k=0  k<1  k>1      k=0  k<1  k>1      k=0  k<1  k>1
1      6 × 4     100    0    0      100    0    0      100    0    0      100    0    0
2      8 × 3     100    0    0      100    0    0      100    0    0      100    0    0
3      10 × 10    28   34   28       42   38   20       26   42   32       10   34   56
4      20 × 10     0   26   74       10   40   50        0    6   94        0    6   94
5      30 × 10     0   16   84       10   35   65        0    4   96        0    3   97
Table 8. % Mean Deviation from the optimum makespan (NIS Policy).

                 MJWA
Sl no  Case      Constant window  Dynamic window  SA    TA
1      6 × 4     0.0              0.0             0.0   0.0
2      8 × 3     0.0              0.0             0.0   0.07
3      10 × 10   0.05             0.0             0.07  0.05
4      20 × 10   1.09             0.0             1.95  1.83
5      30 × 10   0.43             0.0             1.06  1.25

Table 9. % Optimal Proportion for the algorithms (NIS Policy).

                 MJWA
Sl no  Case      Constant window  Dynamic window  SA   TA
1      6 × 4     100              100             100  100
2      8 × 3     100              100             100  80
3      10 × 10   80               100             80   80
4      20 × 10   0                100             0    0
5      30 × 10   0                100             0    0
Table 10. Percentage of solutions within 'k%' of optimum makespan (NIS Policy).

                 MJWA, constant     MJWA, dynamic      SA                 TA
Sl no  Case      k=0  k<1  k>1      k=0  k<1  k>1      k=0  k<1  k>1      k=0  k<1  k>1
1      6 × 4     100    0    0      100    0    0      100    0    0      100    0    0
2      8 × 3     100    0    0      100    0    0       88   12    0      100    0    0
3      10 × 10    58   34    8       68   28    4       35   38   27       50   30   20
4      20 × 10     0   26   74       12   54   34        0    2   98        0    4   96
5      30 × 10     0   30   70       10   52   38        0    3   97        0    3   97
Table 11. Comparison of number of function evaluations and CPU times for SA and TA to approach the best makespan obtained by MJWA (dynamic window) by 0.5%.

                       MJWA (dynamic window)   SA                 TA
Policy  Problem size   F.E.     t              F.E.     t         F.E.     t
UIS     10 × 10        62100    7.445          316500   11.338    316500   15.418
UIS     20 × 10        82800    13.781         449600   31.568    449600   36.723
UIS     30 × 10        132600   30.134         674400   62.419    674400   69.810
FIS     10 × 10        62100    8.934          316500   13.132    316500   18.077
FIS     20 × 10        82800    14.957         449600   34.725    449600   39.066
FIS     30 × 10        132600   35.789         674400   71.538    674400   74.341
NIS     10 × 10        62100    7.426          316500   10.823    316500   14.696
NIS     20 × 10        82800    12.986         449600   30.769    449600   36.111
NIS     30 × 10        132600   29.331         674400   59.876    674400   69.388

F.E. – number of function evaluations. t – CPU time in seconds (Pentium-II, 350 MHz, 64 MB RAM).
The results of the simulation runs based on the above discussion are tabulated in Tables 4, 7 and 10. It is also instructive to compare relative performance in terms of the number of function evaluations and the CPU time required to reach the best makespan; the results of such a comparison are shown in Table 11.

DISCUSSION

The processing time matrices considered in this study for the UIS, FIS and NIS policies vary from small to large size, and the results are discussed in that order.

Small-Size Problems

There are two small-scale problems under consideration in this paper, the 6 × 4 and 8 × 3 processing time matrices. The results are similar for all the algorithms on every comparison criterion. The results for SA compare well with Ku and Karimi's results11, and the optimum makespan found has been verified using a brute-force algorithm. An important feature to note, however, is that most batch scheduling problems have multiple global minima, in the sense that the same makespan may be obtained for more than one product sequence. The MJWA algorithm (with and without window scheduling) significantly outperforms SA and TA in this regard. The following 6 × 4 matrix example illustrates this. For the matrix:

32 25 75 79
 1 89 65 76
79  2 13 91
71 10 74 91
90  5 67 78
28 91 60  4
the optimal makespan is found to be 529 by all the algorithms. Both SA and TA give the optimum for only one sequence, (1-3-2-5-4-6), in all ten sample runs, whereas MJWA finds nine optimal sequences, (1-3-2-4-5-6), (1-5-4-3-2-6), (1-3-4-5-2-6), (1-2-5-3-4-6), (1-3-2-5-4-6), (1-4-3-2-5-6), (1-2-3-4-6-5), (1-5-3-2-4-6) and (1-4-3-5-2-6), in the ten successful runs. This result is significant: in a processing industry there may be constraints that do not permit the use of certain sequences, which may even include the optimal sequence, so it is useful to have several alternative sequences at hand, all providing the same optimal makespan. This feature is found only in the MJWA algorithm, which underlines its importance even for small-size scheduling problems, where SA and TA have comparable success rates.
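A brute-force check of this kind can be sketched as follows; the helper reimplements the UIS recurrence (1), and all names are ours:

```python
from itertools import permutations

def uis_makespan(t, seq):
    """Makespan of `seq` under unlimited intermediate storage, eq. (1)."""
    M = len(t[0])
    C = [0.0] * (M + 1)
    for k in seq:
        for j in range(1, M + 1):
            C[j] = max(C[j], C[j - 1]) + t[k][j - 1]
    return C[M]

def all_optimal_sequences(t):
    """Enumerate every permutation; return the best makespan and every
    sequence that achieves it (the multiple-global-minima feature)."""
    best, opts = float("inf"), []
    for seq in permutations(range(len(t))):
        ms = uis_makespan(t, seq)
        if ms < best:
            best, opts = ms, [seq]
        elif ms == best:
            opts.append(seq)
    return best, opts
```

Exhaustive enumeration is only feasible for small N (here 6! = 720 sequences), which is precisely why the annealing methods are needed for the larger cases below.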
the 30 × 10 case. Even MJWA with a constant window factor does not perform as well as the window-optimized MJWA. For the largest problem under consideration, the 30 × 10 case, although only 10% of the runs of the window-optimized MJWA hit the best optimum, a large fraction (60–70%) of the results are within 1% of the optimal makespan (Table 4). These results are still near-optimal and can be considered for practical implementation if the optimal sequence cannot be used. In this case, most of the results for SA and TA are inaccurate, falling more than 1% above the optimum. In industry even a 1% deviation from the optimum is undesirable, as the optimum makespan for such large problems takes high values, in the range 2500–3000 for the test matrices considered. In terms of the mean deviation from the best-obtained solution, the best results of SA and TA are found to deviate widely (by almost 4%) from the optimum obtained by the window-optimized MJWA, whereas the constant-window MJWA deviates by less than 1% for the large problems. Although the above detailed discussion was mainly of the UIS policy, a look at Tables 5–10 reveals that the performance of MJWA with a dynamic window is likewise superior to the other algorithms. It can also be seen from Table 11 that, in order to arrive at the best possible makespan, SA and TA take considerably more function evaluations than the MJWA algorithm with window factor scheduling. These results again testify to the superiority of the MJWA algorithm.

CONCLUSION

Batch scheduling is a real-life problem that is multimodal in nature. The results obtained in our simulations for the batch scheduling problems suggest that the Multicanonical Jump Walk Annealing algorithm with an optimized window schedule is superior to both the Simulated Annealing and Threshold Acceptance techniques.
This method, with appropriate modifications, can be used for solving a variety of formulations of batch scheduling problems, including the state task network and resource task network methodologies. For example, the MJWA algorithm can be used in two different ways to solve the mixed integer linear programming formulation of the state task network methodology9 for scheduling multipurpose batch plants:
· The MJWA algorithm can be used for solving the integer part while the LP part is solved by conventional mathematical programming techniques.
· The MJWA algorithm can be used for solving both parts.

The algorithm can also be used to solve a large variety of problems such as reactor optimization and the synthesis of reactor, distillation sequence and mass/heat exchanger network problems.

REFERENCES

1. Reklaitis, G. V., 1982, Review of scheduling of process operations, AIChE Symp Ser No 214, (78): 119.
2. Kim, M., Jung, J. H. and Lee, I. B., 1996, Optimal scheduling of multiproduct batch processes for various intermediate storage policies, Ind Eng Chem Res, 35: 4058.
3. Ku, H. M., Rajagopalan, D. and Karimi, I., 1987, Scheduling in batch processes, Chem Eng Prog, Aug issue: 35.
4. Ku, H. M. and Karimi, I., 1988, Scheduling in serial multiproduct batch processes with finite intermediate storage: A mixed integer linear program formulation, Ind Eng Chem Res, 27: 1840.
5. Vaselenak, J. A., Grossmann, I. E. and Westerberg, A. W., 1987, An embedding formulation for the optimal scheduling and design of multipurpose batch plants, Ind Eng Chem Res, 26: 139.
6. Birewar, D. B. and Grossmann, I. E., 1990, Simultaneous sizing and scheduling of multiproduct batch plants, Ind Eng Chem Res, 29: 2242.
7. Papageourgaki, S. and Reklaitis, G. V., 1990, Optimal design of multipurpose batch plants I. Problem formulation, Ind Eng Chem Res, 29: 2054.
8. Ierapetritou, M. J. and Floudas, C. A., 1998, Effective continuous time formulation for short term scheduling 1: Multipurpose batch processes, Ind Eng Chem Res, 37: 4341.
9. Kondili, E., Pantelides, C. C. and Sargent, R. W. H., 1993, A general algorithm for short term scheduling of batch operations I. MILP formulation, Comp Chem Eng, 17: 211.
10. Rippin, D. W. T., 1993, Batch process systems engineering: a retrospective and prospective review, Comp Chem Eng, 17: S1.
11. Ku, H. M. and Karimi, I., 1991, An evaluation of simulated annealing for batch process scheduling, Ind Eng Chem Res, 30: 163.
12. Patel, A. N., Mah, R. S. H. and Karimi, I. A., 1991, Preliminary design of multiproduct non-continuous plants using simulated annealing, Comp Chem Eng, 15(7): 451.
13. Dueck, G. and Scheuer, T., 1990, Threshold accepting: A general purpose optimization algorithm appearing superior to simulated annealing, J Comp Phys, 90: 161.
14. Xu, H. and Berne, B. J., 2000, Multicanonical jump walk annealing: An efficient method for geometric optimization, J Chem Phys, 112: 2701.
15. Kirkpatrick, S., Gelatt, Jr., C. D. and Vecchi, M. P., 1983, Optimization by simulated annealing, Science, 220: 671.
16. Cordero, J. C., Davis, A., Floquet, P., Pibouleau, L. and Domenech, S., 1997, Synthesis of optimal reactor networks using mathematical programming and simulated annealing, Comp Chem Eng, 21: S47.
17. Dolan, W. B., Cummings, P. T. and LeVan, M. D., 1989, Process optimization via simulated annealing: Application to network design, AIChE J, 35: 725.
18. Berg, B. A. and Neuhaus, T., 1992, Multicanonical ensemble: A new approach to simulate first order phase transitions, Phys Rev Lett, 68(1): 9.
19. Hansmann, U. H. E., 1997, Effective way for determination of multicanonical weights, Phys Rev E, 56: 6200.
20. Lee, J. and Choi, M. Y., 1994, Optimization by multicanonical annealing and the traveling salesman problem, Phys Rev E, 50(2): R651.
ACKNOWLEDGEMENTS

We are grateful to Unilever Research, Port Sunlight, UK, for extending financial assistance to the project.
ADDRESS

Correspondence concerning this paper should be addressed to Dr B. D. Kulkarni, Chemical Engineering Division, National Chemical Laboratory, Pune, India. E-mail:
[email protected] The manuscript was communicated via our International Editor for India, Professor Kandukuri Gandhi. It was received 8 December 2000 and accepted for publication after revision 9 April 2001.