Computers & Industrial Engineering 42 (2002) 189–198

www.elsevier.com/locate/dsw

A genetic algorithm for solving economic lot size scheduling problem

Ruhul Sarker*, Charles Newton

Operations Research/Management Science Group, School of Computer Science, The University of New South Wales, ADFA Campus, Northcott Drive, Canberra, ACT 2600, Australia

Abstract

The purpose of this research is to determine an optimal batch size for a product and a purchasing policy for the associated raw materials. Like most other practical situations, the manufacturing firm considered here has a limited storage space and a transportation fleet of known capacity. The mathematical formulation of the problem indicates that the model is a constrained nonlinear integer program. Considering the complexity of solving such a model, we investigate the use of genetic algorithms (GAs) for solving this model. We develop GA code with three different penalty functions usually used for constrained optimization. The model is also solved using an existing commercial optimization package to compare the solutions. The detailed computational results are presented. © 2002 Elsevier Science Ltd. All rights reserved.

Keywords: Batch sizing problem; Nonlinear integer program; Genetic algorithm

1. Introduction

Consider a batch manufacturing environment that procures raw materials from outside suppliers and processes them into finished products for retailers. The manufacturing batch size depends on the retailer's sales volume (market demand), unit product cost, set-up cost, and inventory holding cost. The raw material purchasing lot size depends upon the raw material requirement in the manufacturing system, unit raw material cost, ordering cost and inventory holding cost. Therefore, the optimal raw material purchasing quantity may not be equal to the raw material requirement for an optimal manufacturing batch size. To operate the manufacturing system optimally, it is necessary to optimize the activities of both raw material purchasing and production batch sizing simultaneously, taking all operating parameters into consideration.

* Corresponding author. Tel.: +61-2-6268-8051; fax: +61-2-6268-8581. E-mail address: [email protected] (R. Sarker).
0360-8352/02/$ - see front matter © 2002 Elsevier Science Ltd. All rights reserved. PII: S0360-8352(02)00027-X

Unfortunately, most of the analytical studies do not take all the costs of both sub-systems into consideration (Ansari & Hechel, 1987; Ansari & Modarress, 1987; O'Neal, 1989). Therefore, the overall optimality of the system cannot be achieved. A larger manufacturing batch size reduces the set-up cost component of the overall unit product cost. The products produced in one batch (one manufacturing cycle) may be delivered to the retailer in m small lots (i.e. m retailer cycles per manufacturing cycle) at fixed time intervals. This delivery system is very common in batch manufacturing environments (Sarker & Parija, 1994; Sarker & Parija, 1996). In the pharmaceutical and chemical industries, critical products are not delivered to the retailers until the whole lot is finished and the quality certification is ready (Sarker & Khan, 1997; Sarker & Newton, 2000). The inventory level of such products increases linearly at the production rate during the production up-time and forms a staircase pattern during the production down-time in each manufacturing cycle.

The manufacturer procures raw materials from outside suppliers in lots of fixed quantity at fixed intervals of time. The manufacturer has a number of options on how much to receive in each lot as compared to the manufacturing batch size. For example, the manufacturer may receive:

• a larger lot of raw material that will be consumed in n manufacturing cycles;
• a smaller lot of raw material, where k lots are required for one manufacturing cycle;
• a medium lot containing the raw material required for exactly one manufacturing cycle.

We assume that m and n are integers. The above options are dependent on the cost parameters. In this paper we consider the first option, which is the typical case of a higher ordering cost relative to the inventory holding cost. The raw materials are used at a given rate during the production up-time only. It is assumed that the production rate is greater than the demand rate, so the products accumulated during the production up-time of a manufacturing cycle are used for making deliveries during the production down-time of that cycle until the inventory is exhausted.

We have developed a total cost equation for the multiple-product case with respect to the production quantity, the finished-product delivery frequency for a manufacturing cycle and the number of production runs for a raw material purchasing cycle. The production quantity can be assumed to be a continuous variable. However, the finished-product delivery frequency per manufacturing cycle and the number of production runs per purchasing cycle are integer decision variables, so the total cost function is nondifferentiable. As such, a closed-form solution for the production quantity is not possible. There is always a natural limitation on storage space for raw materials and finished products. In addition to the storage limitation, it is logical to consider a minimum truck-load for each delivery instead of a unit transportation cost. So the resulting batch scheduling model is a constrained nonlinear integer program.

Genetic algorithms (GAs) have been employed to solve optimization problems across all disciplines and interests.
Applications of GAs in production/operations management include assembly line balancing (Leu, Matheson, & Rees, 1994), buffer size determination in assembly systems (Bulgak, Diwan, & Inozu, 1995), production scheduling (Biegel & Davern, 1990; Chen, Vempati, & Aljaber, 1995; Khouja, Michalewicz, & Wilmot, 1998; Reeves, 1995; Sridhar & Rajendran, 1994; Yip-hoi & Dutta, 1996), and manufacturing cell design (Joines, Culbreth, & King, 1996). Considering the complexity of solving such a model, we use GAs to solve our model. We use static (Homaifar, Qi, & Lai, 1994), dynamic (Joines & Houck, 1994) and adaptive (Bean & Hadj-Alouane, 1992) penalty functions to handle the constraints within the GA approach. We use binary coding to generate the population of candidate solutions in the GA, with different crossover and mutation operators. The results of the three penalty function methods are analyzed and discussed.

The organization of the paper is as follows. Following this introduction, Section 2 presents a brief introduction to GAs. In Section 3, the mathematical formulation is provided. Different penalty functions and the implementation of the GA are discussed in Sections 4 and 5. Computational experience and conclusions are presented in Sections 6 and 7.

2. Introduction to GA

GAs are heuristic solution techniques that use the principles of evolution and heredity to arrive at near-optimum solutions to difficult problems. The idea behind GAs is to do what nature does. Let us take rabbits as an example (Khouja et al., 1998). At any given time there is a population of rabbits. Some of them are faster and smarter than other rabbits. These faster, smarter rabbits are less likely to be eaten by foxes, and therefore more of them survive to do what rabbits do best: make more rabbits. Of course, some of the slower, dumber rabbits will survive just because they are lucky. This surviving population of rabbits starts breeding. The breeding results in a good mixture of rabbit genetic material. Some slow rabbits breed with fast rabbits, some fast with fast, some smart rabbits with dumb rabbits, and so on. And on top of that, nature throws in a 'wild hare' every once in a while by mutating some of the rabbit genetic material. The resulting baby rabbits will (on average) be faster and smarter than those in the original population because more faster, smarter parents survived the foxes. It is a good thing that the foxes are undergoing a similar process; otherwise the rabbits might become too fast and smart for the foxes to catch any of them.

GAs follow a step-by-step procedure that closely matches the story of the rabbits discussed earlier; they mimic the process of natural evolution following the principles of natural selection and 'survival of the fittest'. In the algorithm, a population of individuals (potential solutions) undergoes a sequence of unary (mutation type) and higher order (crossover type) transformations. These individuals strive for survival. A selection scheme biased towards fitter individuals selects the next generation. This new generation contains a higher proportion of the characteristics possessed by the 'good' members of the previous generation; in this way good characteristics (e.g. being fast) are spread over the population (e.g. of rabbits) and mixed with other good characteristics (e.g. being smart). After some number of generations, the program either converges or is terminated, and the best individual is taken as the solution. The GA procedure is shown below:

begin
  t ← 0
  initialize Population(t)
  evaluate Population(t)
  while (not termination-condition) do
  begin
    t ← t + 1
    select Population(t) from Population(t - 1)
    alter (crossover and mutate) Population(t)
    evaluate Population(t)
  end
end
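The outline above can be made concrete in a few lines of code. The following is a minimal, generic sketch of the loop (written in Python purely for illustration; the authors' implementation was in MATLAB). The helper names random_individual, fitness, crossover and mutate are placeholders for the problem-specific pieces discussed in Section 5, and the selection scheme shown (keep the better half, mate at random) is only one of many ways to bias survival towards fitter individuals.

    import random

    def genetic_algorithm(random_individual, fitness, crossover, mutate,
                          pop_size=50, generations=200):
        """Generic GA loop following the pseudocode above (minimisation)."""
        population = [random_individual() for _ in range(pop_size)]
        best = min(population, key=fitness)
        for _ in range(generations):
            # Selection biased towards fitter (lower-cost) individuals.
            ranked = sorted(population, key=fitness)
            parents = ranked[:pop_size // 2]
            # Create the next generation by crossover and mutation.
            children = []
            while len(children) < pop_size:
                p1, p2 = random.sample(parents, 2)
                c1, c2 = crossover(p1, p2)
                children.extend([mutate(c1), mutate(c2)])
            population = children[:pop_size]
            # Keep track of the best individual seen so far.
            best = min([best] + population, key=fitness)
        return best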

3. Mathematical model

The unconstrained model for manufacturing batch sizing without transportation cost was developed by Sarker and Khan (1997), and with transportation cost by Sarker and Newton (2000). Our problem is very similar to that of Sarker and Khan (1997), except for the constraints imposed to make the situation more realistic. The notation used in the model is given below.

Dp   demand rate of a product p, units per year
Pp   production rate, units per year (here, Pp > Dp)
Qp   production lot size
Hp   annual inventory holding cost, $/unit/year
Ap   setup cost for a product p ($/setup)
r    quantity of raw material required in producing one unit of a product
Di   demand of raw material for the product p in a year, Di = rDp
Qi   ordering quantity of raw material
Ai   ordering cost of a raw material
Hi   annual inventory holding cost for raw material
PRi  price of raw material
Qi*  optimum ordering quantity of raw material
x    shipment quantity to customer at a regular interval (units/shipment)
L    time between successive shipments = x/Dp
T    cycle time measured in years = Qp/Dp
m    number of full shipments during the cycle time = T/L
n    number of manufacturing cycles that would consume one lot of raw material
s    space required by one unit of raw material i

The mathematical model is as follows:

Minimize  TC(x, m, n) = (Dp/(mx)) (Ap + Ai/n)
                        + (mx/2) [(1 + Dp/Pp) Hp + r Hi (n - 1 + Dp/Pp)] - (x/2) Hp

Subject to:

n × r × m × x × s ≤ Raw_S_Cap
n × r × m × x × s ≥ Min_Truck_Load
m × x × s ≤ Fin_S_Cap
m and n ≥ 0 and integer.

Constraint 1 is for the raw material storage capacity limitation, constraint 2 is a lower limit on the truck load and the last constraint is for the finished product storage capacity. This is clearly a nonlinear integer program where two out of three constraints are nonlinear.
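To make the model concrete, the cost function and constraints can be evaluated directly for candidate integer pairs (m, n). The sketch below is an illustrative Python transcription of the expressions above, not the authors' MATLAB code; the parameter names simply mirror the notation list, and the positivity check on m and n is added only because TC is undefined for m = 0 or n = 0.

    def total_cost(m, n, x, Dp, Pp, Ap, Ai, Hp, Hi, r):
        """Annual total cost TC(x, m, n) as written above."""
        setup_and_ordering = (Dp / (m * x)) * (Ap + Ai / n)
        holding = (m * x / 2.0) * ((1 + Dp / Pp) * Hp
                                   + r * Hi * (n - 1 + Dp / Pp)) - (x / 2.0) * Hp
        return setup_and_ordering + holding

    def feasible(m, n, x, r, s, raw_s_cap, min_truck_load, fin_s_cap):
        """Check the three constraints of the model (plus positivity of m and n)."""
        raw_space = n * r * m * x * s
        return (raw_space <= raw_s_cap and
                raw_space >= min_truck_load and
                m * x * s <= fin_s_cap and
                m >= 1 and n >= 1)

These helpers are reused in Section 6 to cross-check the reported solutions.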

4. Penalty function

The penalty function method has been well known for solving constrained optimization problems for decades. The approach converts the problem into an equivalent unconstrained problem, which is then solved using a suitable search algorithm. Two basic types of penalty function exist: exterior penalty functions, which penalize infeasible solutions, and interior penalty functions, which penalize feasible solutions. We use only exterior penalty functions in this research, as the implementation of interior penalty functions is considerably more complex for the multiple-constraint case (Smith & Coit, 1997). The transformation from a constrained to an unconstrained model using a penalty function within a GA is similar. The key issue in the penalty function approach is the choice of the penalty coefficient in each iteration. There are three different approaches in GAs to setting the penalty coefficient: static, where the coefficient is a constant; dynamic, where the coefficient is a predetermined monotonically nondecreasing sequence; and adaptive, which relies on population information to adjust the coefficient during the optimization process. These three methods are briefly discussed below.

4.1. Static penalties

The method of static penalties (Homaifar et al., 1994) assumes that for every constraint we establish a family of intervals that determine the appropriate penalty coefficient.

• For each constraint, create several (l) levels of violation.
• For each level of violation and for each constraint, create a penalty coefficient μij (i = 1, 2, …, l; j = 1, 2, …, q); higher levels of violation require larger values of this coefficient.
• Start with a random population of individuals (feasible or infeasible).
• Evolve the population; evaluate individuals.

It is clear that the results are parameter dependent. It is quite likely that for a given problem instance there exists one optimal set of parameters for which the system returns a feasible near-optimum solution; however, it might be hard to find. A limited set of experiments reported by Michalewicz and Schoenauer (1996) indicates that the method can provide good results if the violation levels and penalty coefficients are tuned to the problem. We use three levels of fixed penalty coefficients for each constraint: 0.5, 0.5 × 10^3 and 0.5 × 10^6, as used by other researchers (Michalewicz & Schoenauer, 1996).

4.2. Dynamic penalties

Joines and Houck (1994) proposed dynamic penalties. The authors assumed that the penalty coefficient μk = (Ck)^α, where C and α are constants. A reasonable choice for these parameters is C = 0.5, α = 2. This method requires a much smaller number of parameters (independent of the number of constraints) than the first method. Also, instead of defining several levels of violation, the pressure on infeasible solutions is increased through the (Ck)^α component of the penalty term: towards the end of the process (for high values of the generation number k), this component assumes large values.
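For illustration, the static and dynamic schemes can be sketched as follows. This is a hedged Python sketch, not the authors' code: the violation thresholds are invented for the example, the quadratic form of the violation term is the usual choice in these methods but is an assumption here, and constraint_violations is assumed to return one non-negative violation value per constraint.

    def static_penalty(violations, coeffs=(0.5, 0.5e3, 0.5e6),
                       thresholds=(1.0, 100.0)):
        """Static penalties (Homaifar et al., 1994): the coefficient depends on
        which violation level each constraint falls into. The thresholds here
        are illustrative only; the three coefficients are the ones quoted above."""
        penalty = 0.0
        for v in violations:
            if v <= 0:
                continue
            level = sum(v > t for t in thresholds)   # violation level: 0, 1 or 2
            penalty += coeffs[level] * v ** 2
        return penalty

    def dynamic_penalty(violations, generation, C=0.5, alpha=2.0):
        """Dynamic penalties (Joines & Houck, 1994): the coefficient (C*k)^alpha
        grows with the generation number k, so infeasibility costs more and more."""
        return (C * generation) ** alpha * sum(v ** 2 for v in violations if v > 0)

In either case the penalised objective fp is the total cost plus the penalty term, and it is this fp that the GA minimises.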

4.3. Adaptive penalties

Bean and Hadj-Alouane proposed an adaptive penalty method in 1992 which uses feedback from the search process (see Michalewicz & Schoenauer (1996)). This method allows either an increase or a decrease of the imposed penalty during evolution, as shown below. It involves the selection of two constants, β1 and β2 (β1 > β2 > 1), to adaptively update the penalty function multiplier, and the evaluation of the feasibility of the best solution over successive intervals of Nf generations. As the search progresses, the penalty function multiplier is updated every Nf generations based on whether or not the best solution was feasible during that interval. Specifically, the penalty multiplier is updated as follows:

μ(k+1) = μk β1     if the previous Nf generations have only infeasible best solutions
μ(k+1) = μk / β2   if the previous Nf generations have only feasible best solutions
μ(k+1) = μk        otherwise

The parameters were chosen to be similar to the above methods, with Nf = 3, β1 = 5, β2 = 10 and μ0 = 1. The μ values were found to be always decreasing by a factor of 10, indicating only feasible best solutions.
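A minimal sketch of this update rule is given below (illustrative Python, not the authors' code). It assumes a list recording, for each past generation, whether that generation's best solution was feasible; the default parameter values are the ones quoted above.

    def update_adaptive_coeff(mu, feasible_history, Nf=3, beta1=5.0, beta2=10.0):
        """Bean and Hadj-Alouane style update: raise mu after Nf consecutive
        infeasible best solutions, lower it after Nf consecutive feasible ones,
        and otherwise leave it unchanged."""
        window = feasible_history[-Nf:]
        if len(window) < Nf:
            return mu
        if not any(window):          # only infeasible best solutions
            return mu * beta1
        if all(window):              # only feasible best solutions
            return mu / beta2
        return mu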

5. Implementing GA

It is generally accepted that any GA to solve a problem must have five basic components:

• problem representation;
• a way to create an initial population of solutions;
• an evaluation function rating solutions in terms of their 'fitness';
• genetic operators that alter the genetic composition of parents during reproduction; and
• values for the parameters (population size, probabilities of applying genetic operators, etc.).

5.1. Problem representation

The first hurdle to overcome in using GAs is problem representation: we must represent our problem in a way that is suitable for handling by a GA, which works with strings of symbols that in structure resemble chromosomes. The representation often relies on binary coding. GAs work with a population of competing strings, each of which represents a potential solution to the problem under investigation. The individual strings within the population are gradually transformed using biologically based operations. For example, two strings might 'mate' and produce an offspring that combines the best features of its parents. In accordance with the law of the survival of the fittest, the best-performing individual in the population will eventually dominate the population.

5.2. Initialize the population

There are no strict rules for determining the population size. Larger populations ensure greater diversity but require more computer resources. We use a population size of 50. Once the population size is chosen, the initial population must be randomly generated. The random number generator generates random bits (0 or 1) to represent the decision variables.
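For the two integer decision variables m and n, a binary-string representation and a random initial population could look like the following sketch (illustrative Python; the number of bits per variable is an assumption for the example, not a value stated in the paper).

    import random

    BITS = 6                       # assumed bits per variable (covers 0..63)

    def random_individual():
        """A chromosome is a 0/1 string encoding m followed by n."""
        return [random.randint(0, 1) for _ in range(2 * BITS)]

    def decode(chromosome):
        """Decode the binary string into its decimal equivalents (m, n)."""
        m = int("".join(map(str, chromosome[:BITS])), 2)
        n = int("".join(map(str, chromosome[BITS:])), 2)
        return m, n

    population = [random_individual() for _ in range(50)]   # population size 50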

5.3. Calculate fitness

Now that we have a population of potential problem solutions, we need to see how good they are. Therefore, we calculate a fitness, or performance, value for each string. The fitness function is the penalized objective function (fp) in our case (for penalty functions see Michalewicz and Schoenauer (1996)). Each string is decoded into its decimal equivalent. This gives us a candidate value for the solution, which is used to calculate the fitness value.

5.4. Selection and genetic operators

GAs work with a population of competing problem solutions that gradually evolve over successive generations. Survival of the fittest means that only the best-performing members of the population will survive in the long run. In the short run we merely tip the odds in favor of the better performers; we do not eliminate all the poor performers. This can be done in a variety of ways. We generate a ranking of the competing strings based upon their fitness, and better solutions are selected with higher probability. The chromosomes which survive the selection step undergo the genetic operations of crossover and mutation.

Crossover is the step that really powers the GA. It allows the search to fan out in diverse directions looking for attractive solutions and permits two strings to mate. This may result in offspring that are more fit than the parents. Crossover is accomplished in four small steps:

• two potential parents are randomly chosen from the population;
• the crossover probability is used to answer 'yes, perform crossover' or 'no';
• if the answer is 'yes', a cutting point is randomly chosen; and
• each string is cut at the cutting point and two offspring are created by gluing together parts of the parents.

Fig. 1 represents the classical 1-point crossover.

Fig. 1. 1-point crossover of two individuals.

Sometimes more than one cutting point is used. For example, in the case of a 2-point crossover, the first segment of the first chromosome is followed by the second segment of the second chromosome (the other offspring is built up from the remaining segments, as shown in Fig. 2).

Fig. 2. 2-point crossover of two individuals.

Mutation introduces random variations into the population. Mutation zaps a 0 to a 1 and vice versa. Mutation is usually performed with low probability, otherwise it would defeat the order building generated through selection and crossover. Mutation attempts to bump the population gently onto a slightly better course. An example of mutation is shown in Fig. 3.

Fig. 3. A bit-wise mutation of an individual.
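The ranking-based selection, two-point crossover and bit-wise mutation described above might be sketched as follows (illustrative Python, not the authors' code; chromosomes are the 0/1 lists assumed in Section 5.1, and the default probabilities are the ones listed in Section 5.5).

    import random

    def two_point_crossover(parent1, parent2, p_crossover=1.0):
        """Two-point crossover: swap the middle segment between two parents."""
        if random.random() > p_crossover:
            return parent1[:], parent2[:]
        i, j = sorted(random.sample(range(1, len(parent1)), 2))
        child1 = parent1[:i] + parent2[i:j] + parent1[j:]
        child2 = parent2[:i] + parent1[i:j] + parent2[j:]
        return child1, child2

    def bitwise_mutation(chromosome, p_mutation=None):
        """Flip each bit with a small probability (default 1/string length)."""
        p = p_mutation if p_mutation is not None else 1.0 / len(chromosome)
        return [bit ^ 1 if random.random() < p else bit for bit in chromosome]

    def rank_select(population, fitness, k):
        """Keep the k best individuals (lower penalised cost is better)."""
        return sorted(population, key=fitness)[:k]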

5.5. Parameters of GA

All GA runs used in our study have the following standard characteristics:

• Probability of crossover: 1.0
• Probability of mutation: 1/(string length of the chromosome)
• Two-point crossover and bit-wise mutation
• Population size: 50
• Number of generations in each run: 200
• Number of independent runs: 300
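Collected in one place, these settings might be expressed as a small configuration block (a sketch; only the values listed above come from the paper).

    # GA settings used in this study (values from the list above).
    GA_PARAMS = {
        "p_crossover": 1.0,          # crossover always applied
        "crossover": "two-point",
        "mutation": "bit-wise",
        "pop_size": 50,
        "generations_per_run": 200,
        "independent_runs": 300,
    }

    def mutation_probability(string_length):
        """Bit-wise mutation probability: 1 / (string length of the chromosome)."""
        return 1.0 / string_length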

6. Computational experience

The mathematical model presented in Section 3 is solved using the three penalty-function-based GAs described earlier. The data used for the model are as follows: Dp = 4 × 10^6 units, Pp = 5 × 10^6 units, Ap = $50.00 per setup, Ai = $3000 per order, Hi = $1.00 per unit per year, Hp = $1.20 per unit per year, r = 1.00 and x = 1000 units.

Table 1
Comparing three different penalty function-based methods (total cost values in units of 10^5)

Method             Minimum   Mean     Standard deviation   Median   Maximum   Optimal %
Dynamic penalty    1.8483    1.8484   0.0014               1.8483   1.8651    99
Static penalty     1.8483    1.8484   0.0010               1.8483   1.8651    100
Adaptive penalty   1.8483    1.8484   0.0014               1.8483   1.8651    99
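As a quick cross-check, the cost function of Section 3 can be evaluated at the two solutions discussed in the next paragraph (the GA optimum and the LINGO solution), using the data above. The following minimal sketch reuses the illustrative total_cost helper given after the model in Section 3.

    data = dict(x=1000, Dp=4e6, Pp=5e6, Ap=50.0, Ai=3000.0, Hp=1.20, Hi=1.00, r=1.0)
    # GA optimum reported below (m = 13, n = 10): roughly 1.8483e5
    print(total_cost(m=13, n=10, **data))
    # LINGO solution reported below (m = 17, n = 12): roughly 1.8865e5
    print(total_cost(m=17, n=12, **data))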

The RHS values are: Raw_S_Cap = 400,000, Min_Truck_Load = 200,000 and Fin_S_Cap = 20,000. The best solution found is m = 13 and n = 10 with TC = 1.8483 × 10^5. The comparison of the three methods is presented in Table 1. As we can see, the static penalty method provides optimal solutions in all test runs within the set 200 generations (see the GA parameters in Section 5.5). However, the performances of the other two methods are similar and close to that of the static penalty method. All three methods have the same minimum and maximum fitness function values.

This problem can also be solved using a commercially available optimization package such as LINGO. However, there is a possibility of obtaining a local optimum. For example, the best solution we found with the nonlinear programming module of LINGO5 is m = 17 and n = 12 with TC = 1.8865 × 10^5. This is a minimization problem; the LINGO objective is 2.07% higher than the GA solutions and even 1.15% higher than the maximum GA objective function value recorded. The GA code was developed in MATLAB on a UNIX machine. It takes a few minutes to obtain the solutions, whereas LINGO takes less than 30 s on a PC.

7. Conclusions

We introduced a manufacturing batch sizing problem in which the manufacturer procures raw materials from outside suppliers and processes them into finished products for retailers. The purpose of this research was to determine an optimal batch size for the product and a purchasing policy for the associated raw materials. As in most practical situations, constraints such as limited storage space and transportation fleet capacity are imposed. The mathematical formulation of the problem forms a constrained nonlinear integer program. Considering the complexity of the model, the well-known GAs were applied. The GA codes were developed with three different penalty functions usually used for constrained optimization in evolutionary computation. The model was also solved using an existing commercial optimization package. The results of the three penalty functions are compared; their performances are almost identical in terms of the number of times the optimal value is reached. Compared to the solution of the commercial package, the GA solution is superior.

Acknowledgements

This work is supported by a UC special research grant, ADFA, University of New South Wales, Australia, awarded to Dr Ruhul Sarker. We would like to thank Prof. Xin Yao for many useful comments, and Mr Thomas Runarsson for developing the GA code and for testing and experimentation.

References

Ansari, A., & Hechel, J. (1987). JIT purchasing: Impact of freight and inventory costs. Journal of Purchasing and Material Management, 23(2), 24–28.
Ansari, A., & Modarress, B. (1987). The potential benefits of just-in-time purchasing for US manufacturing. Production and Inventory Management, 28(2), 32–35.
Bean, J. C., & Hadj-Alouane, A. B. (1992). A dual genetic algorithm for bounded integer programs. Technical Report TR 92-53. Ann Arbor, MI: Department of I&OE, University of Michigan.
Biegel, J. E., & Davern, J. J. (1990). Genetic algorithm and job shop scheduling. Computers and Industrial Engineering, 19, 81–91.

Bulgak, A. A., Diwan, P. D., & Inozu, B. (1995). Buffer size optimization in asynchronous assembly systems using genetic algorithms. Computers and Industrial Engineering, 28, 309–322.
Chen, C., Vempati, V. S., & Aljaber, N. (1995). An application of genetic algorithms for flow shop problems. European Journal of Operational Research, 80, 389–396.
Homaifar, A., Qi, C. X., & Lai, S. H. (1994). Constrained optimization via genetic algorithms. Simulation, April, 242–253.
Joines, J. A., Culbreth, C. T., & King, R. E. (1996). Manufacturing cell design: An integer programming approach employing genetic algorithms. IIE Transactions, 28, 69–85.
Joines, J. A., & Houck, C. R. (1994). On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GAs. In Z. Michalewicz, J. D. Schaffer, H.-P. Schwefel, D. B. Fogel, & H. Kitano (Eds.), Proceedings of the IEEE ICEC (pp. 579–584). IEEE Press.
Khouja, M., Michalewicz, Z., & Wilmot, M. (1998). The use of genetic algorithms to solve the economic lot size scheduling problem. European Journal of Operational Research, 110, 509–524.
Leu, Y., Matheson, L. A., & Rees, L. P. (1994). Assembly line balancing using genetic algorithms with heuristic-generated initial populations and multiple evaluation criteria. Decision Sciences, 25, 581–606.
Michalewicz, Z., & Schoenauer, M. (1996). Evolutionary algorithms for constrained parameter optimization problems. Evolutionary Computation, 4(1), 1–32.
O'Neal, C. R. (1989). The buyer-seller linkage in a just-in-time environment. Journal of Purchasing and Material Management, 25(1), 34–40.
Reeves, C. R. (1995). A genetic algorithm for flowshop sequencing. Computers and Operations Research, 22, 5–15.
Sarker, R., & Khan, L. (1997). An optimal batch size for a manufacturing system operating under a periodic delivery policy. Conference of the Asia-Pacific Operational Research Society (APORS), Melbourne, Australia, November–December.
Sarker, R., & Newton, C. (2000). Determination of optimal batch size for a manufacturing system. In X. Yang, A. Mees, M. Fisher, & L. Jennings (Eds.), Progress in optimization: Contributions from Australasia (pp. 315–327). The Netherlands: Kluwer Academic Publishers.
Sarker, B., & Parija, G. (1994). An optimal batch size for a production system operating under a fixed-quantity, periodic delivery policy. Journal of the Operational Research Society, 45(8), 891–900.
Sarker, B., & Parija, G. (1996). Optimal batch size and raw material ordering policy for a production system with a fixed-interval, lumpy demand delivery system. European Journal of Operational Research, 89, 593–608.
Smith, A. E., & Coit, D. W. (1997). Constraint-handling techniques: penalty functions. In Handbook of evolutionary computation (Release 97/1, C5.2:1–C5.2:6). IOP Publishing Ltd and Oxford University Press.
Sridhar, J., & Rajendran, C. (1994). A genetic algorithm for family and job scheduling in a flow-line-based manufacturing cell. Computers and Industrial Engineering, 27, 469–472.
Yip-hoi, D., & Dutta, D. (1996). A genetic algorithm application for sequencing of operations in process planning for parallel machines. IIE Transactions, 28, 55–68.