Electronic Notes in Discrete Mathematics 36 (2010) 471–478 www.elsevier.com/locate/endm
Variable Parameters Lengths Genetic Algorithm for Minimizing Earliness-Tardiness Penalties of Single Machine Scheduling With a Common Due Date

Hemmak Allaoua
Department of Informatics, University of M'sila, M'sila, Algeria

Ibrahim Osmane
Olayan School of Business, American University of Beirut, Beirut, Lebanon
Abstract

Modern manufacturing philosophy of just-in-time emphasizes that a job should be completed as close as possible to its due date to avoid inventory cost and loss of customer goodwill. In this paper, the single machine scheduling problem with a common due date, where the objective is to minimize the total earliness and tardiness penalties in the schedule of jobs, is considered. A new genetic algorithm inspired by the philosophy of dynamic programming, where the chromosome and the population lengths are varied from one iteration to another, is proposed.

Keywords: Common due date, single machine scheduling, dynamic programming, genetic algorithm, earliness-tardiness penalties.
1571-0653/$ – see front matter Crown Copyright © 2010 Published by Elsevier B.V. All rights reserved. doi:10.1016/j.endm.2010.05.060
1 Introduction
In recent years, new management concepts based on the just-in-time (JIT) philosophy have been introduced, according to which a job/product should be completed as close as possible to its due date. A job completed earlier than the due date would incur inventory carrying, storage and insurance costs, whereas a job completed after the due date would result in penalties measured in terms of loss of customer goodwill and damaged reputation. Moreover, since on a single machine at most one job can be completed exactly on the due date, some jobs will be finished before the due date (earliness) and others will be completed after the common due date (tardiness).

The single-machine scheduling problem with a common due date can be defined as follows. There are n jobs available to be processed at time zero. Each job j has a processing time p_j and a common due date d. No job preemption is permitted. The objective is to find a sequence of jobs S that minimizes the total penalties:

f(S) = Σ_{j=1}^{n} (α_j · E_j + β_j · T_j)

where E_j and T_j are the earliness and tardiness of job j, defined as follows. Let L_j = C_j − d be the lateness of job j, where C_j is the completion time of job j and d its due date; then its earliness is E_j = max(0, −L_j) and its tardiness is T_j = max(0, L_j). The constants α_j and β_j represent the earliness and tardiness weights/penalties of job j. Hall et al. (1991) and Cheng and Gupta (1989) proved that this problem is NP-hard. A review of early 1990s work on scheduling models with single machine E/T penalties is found in Baker and Scudder (1990), whereas more recent work is reviewed in Gordon et al. (2002). Due to this NP-hardness, a number of approximate approaches have also been proposed.

In the remaining part of this paper, we present our new variable parameters' lengths genetic algorithm (VLGA). It differs from previous genetic algorithms in the literature, where a fixed chromosome length and a fixed population size are used. Our genetic algorithm uses a variable-length chromosome to represent the partially constructed sequence of jobs, and as the partial solution grows from one iteration to another, the population is also proportionally increased. This variable-growth concept is inspired by the famous principle of dynamic programming which states that "every optimal policy is composed of optimal sub-policies", Bellman (1958). This principle is applied to solve a sequencing problem by breaking it down into simpler sub-problems.
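To make the objective concrete, the following small Python sketch (illustrative helper names of our own, not the authors' code) evaluates f(S) for a given job sequence processed consecutively from a given starting time:

```python
def total_penalty(sequence, p, alpha, beta, d, start_time=0):
    """Total weighted earliness-tardiness f(S) of a job sequence.

    The jobs are processed consecutively from start_time with no inserted
    idle time; p, alpha and beta map job labels to processing times and
    earliness/tardiness weights, and d is the common due date.
    """
    total, t = 0.0, start_time
    for j in sequence:
        t += p[j]                       # completion time C_j
        earliness = max(0, d - t)       # E_j = max(0, -(C_j - d))
        tardiness = max(0, t - d)       # T_j = max(0, C_j - d)
        total += alpha[j] * earliness + beta[j] * tardiness
    return total

# Tiny example: three jobs with common due date 10, schedule starting at time 0.
p     = {1: 4, 2: 3, 3: 5}
alpha = {1: 1, 2: 2, 3: 1}
beta  = {1: 3, 2: 1, 3: 2}
print(total_penalty([2, 1, 3], p, alpha, beta, d=10))   # 2*7 + 1*3 + 2*2 = 21
```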
Each sub-problem is then optimized using genetic algorithm principles before it is augmented sequentially to include other unselected jobs at the next iteration. The process continues until all jobs are included and a feasible solution to the original problem is obtained. The new approach can be seen as a meta-heuristic guiding the construction of a solution, following the work in Osman and Ahmadi (2007). The properties used by our approach are given in Section 2, and the new approach is detailed in Section 3. The computational results using the benchmark instances of Biskup and Feldmann (2001) are presented in Section 4. Finally, we conclude with directions for further research in Section 5.
2 Properties of the optimal sequence
Before we explain the algorithm, we present some well-known properties of the optimal sequence of jobs that will be used to construct and evaluate the generated solutions in our proposed approach.

(P1) No idle times are inserted between consecutive jobs, Cheng and Kahlbacher (1991).

(P2) The schedule is V-shaped: jobs completed at or before the due date are sequenced in non-increasing order of the ratio p_j/α_j, while jobs whose processing starts after the due date are sequenced in non-decreasing order of the ratio p_j/β_j (see the sketch below). Note that a straddling job, whose processing starts before and finishes after the due date, may exist, Biskup and Feldmann (2001).

(P3) There is an optimal schedule in which either the first job starts processing at time zero or one job is completed exactly at the due date, Biskup and Feldmann (2001).
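A minimal sketch of how property (P2) can be applied, assuming the sets of early and late jobs are already known (the straddling job and the split itself are not handled here; function and variable names are ours):

```python
def v_shaped_order(early_jobs, late_jobs, p, alpha, beta):
    """Order jobs according to property (P2).

    early_jobs: jobs completed at or before the due date, sequenced in
                non-increasing order of p_j / alpha_j.
    late_jobs:  jobs whose processing starts after the due date, sequenced
                in non-decreasing order of p_j / beta_j.
    """
    early = sorted(early_jobs, key=lambda j: p[j] / alpha[j], reverse=True)
    late = sorted(late_jobs, key=lambda j: p[j] / beta[j])
    return early + late

p     = {1: 4, 2: 3, 3: 5, 4: 2}
alpha = {1: 1, 2: 2, 3: 1, 4: 1}
beta  = {1: 3, 2: 1, 3: 2, 4: 1}
print(v_shaped_order([1, 2], [3, 4], p, alpha, beta))   # [1, 2, 4, 3]
```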
3 The VLGA detailed implementation
We shall describe the various components used in our VLGA: the encoding of jobs, the fitness function, the different phases of the VLGA (initialization, evaluation, selection, crossover, mutation) and the algorithm parameters (initial generation size, crossover probability, mutation probability, selection method, termination criteria).

3.1 Encoding
In a traditional genetic algorithm, an individual solution is represented using a binary encoding: strings of 0s and 1s. Such an abstract representation is called a chromosome (or genotype in genetic science) of an individual. However, in
our implementation, a chromosome consists of a linear array (vector) of (n_k + 1) integer numbers, where the first element in the array indicates the starting time of the sequencing schedule and the remaining n_k elements represent the sequence of jobs at iteration k. In our representation, both the starting time and the job labels are integer numbers.

Figure 1: Individual encoding, e.g. [4 | 2, 6, 1, 4, 5, 3] (starting time | job sequence).

Population. At iteration k, the population PoP_k, consisting of m_k individuals, is represented by a matrix of dimensions m_k × (n_k + 1), where m_k is the number of individuals of the generation and n_k is the number of jobs included in the partial solution at iteration k. As the iteration number increases, the population (or matrix) grows in both dimensions: its horizontal width shows the increase in the partial solution size, whereas its vertical height shows the increase in the population/generation size. The population size must be proportional to the size of the problem, because it is not reasonable to use the same population size for different problem sizes. These two variations represent the main contribution of this paper.

Initially, many individuals are randomly generated to form the initial population. Traditionally, a uniform probability law must be used to cover the whole search space of solutions and prevent the algorithm from prematurely converging to a local optimum. At any generation k, the partial sequence size must be less than that of generation k+1. The algorithm starts with an initial partial sequence size n_0; the partial sequence of size n_k then grows at iteration k according to:

n_k = n_0 + ⌈Current_iter × n / Nbr_iterations⌉

where:
n_k : size of the partial sequence at the current iteration k;
n_0 : initial partial sequence size;
n : size of the original problem;
Current_iter : current generation number k;
Nbr_iterations : total number of generations after which the algorithm terminates;
⌈x⌉ : upper integer part (ceiling) of x.
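A minimal sketch of this growth schedule, under our reading of the formula above (the cap at n is an added safeguard of ours, not stated in the text):

```python
import math

def partial_sequence_size(n0, n, current_iter, nbr_iterations):
    """Size n_k of the partial job sequence at iteration current_iter.

    Assumed reading: n_k = n_0 + ceil(current_iter * n / nbr_iterations),
    capped at the full problem size n.
    """
    nk = n0 + math.ceil(current_iter * n / nbr_iterations)
    return min(nk, n)

# Example: start with 2 jobs out of 10 and grow over 500 iterations.
print([partial_sequence_size(2, 10, k, 500) for k in (0, 100, 250, 500)])  # [2, 4, 7, 10]
```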
3.2 Fitness Function
The fitness function measures the quality of the individuals in the population. An ideal fitness function correlates closely with the algorithm's goal and can be computed easily and quickly. However, it should be defined in such a way that solutions leading to early convergence to a local minimum are not favored over solutions with good fitness values. The following linearization-and-exponentiation fitness function is used:

Fitness(x) = h · (1 − obj(x)/sum_obj)^0.1 + 1

where h is the common due date rate, x is a chromosome, obj(x) is the value of its objective function and sum_obj is the sum of the objective values in the current population. Our experimental results show that this rule produces good results. The expression (1 − obj(x)/sum_obj) shows that the fitness decreases as the objective value grows relative to the sum of objectives in the current population. The power 0.1 is used to ensure the diversity of the population and the linear coefficient h is used to favor the weak-fitness individuals.

The evaluation step computes the objective value and the fitness of each individual. It also saves the best objective value and the corresponding sequence of jobs in order to report it as the minimum-cost solution; at the last iteration, this step therefore yields the best individual of the population, which is the near-optimal solution found. Note that property (P2) is used to choose the best job to be added to the generation, while properties (P1) and (P3) are applied indirectly in the computation of the objective values.
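Assuming the reconstructed formula above is the intended one, the fitness computation is a one-liner; the names below are illustrative, not the authors' code:

```python
def fitness(obj_x, sum_obj, h):
    """Fitness(x) = h * (1 - obj(x) / sum_obj)**0.1 + 1 (reconstructed reading).

    obj_x   : objective value of chromosome x
    sum_obj : sum of objective values over the current population
    h       : common due date (restrictive) rate
    """
    return h * (1.0 - obj_x / sum_obj) ** 0.1 + 1.0

# Hypothetical values, purely for illustration.
print(round(fitness(obj_x=120.0, sum_obj=1000.0, h=0.4), 4))
```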
3.3 Selection
The selection step chooses the individuals that participate in the production of the next generation. Candidate solutions are selected through a fitness-based process, where fitter solutions (as measured by their fitness values) are more likely to be selected. A roulette-wheel ("fortune wheel") method is used to bias the selection towards individuals with good fitness values.
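A standard roulette-wheel selection sketch, shown here as one plausible reading of the "fortune wheel" method (the paper does not give implementation details):

```python
import random

def roulette_select(population, fitnesses):
    """Fitness-proportionate ('fortune wheel') selection of one individual."""
    total = sum(fitnesses)
    r = random.uniform(0.0, total)
    cumulative = 0.0
    for individual, fit in zip(population, fitnesses):
        cumulative += fit
        if cumulative >= r:
            return individual
    return population[-1]   # guard against floating-point round-off
```

In Python, random.choices(population, weights=fitnesses) would achieve the same effect in a single call.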
3.4 Crossover
Each selected individual (named "parent 1") has a probability p_c = 0.8 of undergoing crossover with another randomly chosen individual in the population ("parent 2"). The new individual inherits chromosome parts from each parent and a starting time equal to that of parent 1. Similar to the crossover procedures in Hino et al. (2005) and Kozan and Preston (1999), a 2-point crossover is used. It is illustrated as follows. Let the two random crossover points be at positions 4
and 7, and the individuals are:

Parent 1: [12][1, 7, 3, 4, 5, 6, 2, 8, 9, 10]
Parent 2: [14][1, 2, 3, 6, 8, 5, 7, 4, 9, 10]

The crossover procedure would generate a sequence in which some jobs, such as jobs 7 and 8, appear twice, hence producing an infeasible offspring:

Infeasible offspring: [12][1, 7, 3, 6, 8, 5, 7, 8, 9, 10]

Therefore, it is necessary to replace the duplicated jobs by other jobs. The repair procedure checks the first and third parts and looks for jobs that also appear in the second part; in this example, jobs 7 and 8. These jobs are replaced by the jobs that do not yet appear (jobs 2 and 4), following the same order as in parent 1. The modified offspring is then feasible:

Feasible offspring: [12][1, 4, 3, 6, 8, 5, 7, 2, 9, 10]

It should be noted that the first crossover point is chosen before the common due date d, while the second crossover point is chosen after the due date.
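The following sketch reproduces the worked example above; the function name and the exact treatment of cut positions are our own reading of the procedure, not the authors' code:

```python
def two_point_crossover(parent1, parent2, cut1, cut2):
    """Two-point crossover with repair (our reading of the procedure above).

    parent1, parent2 : (start_time, sequence) pairs over the same set of jobs.
    cut1, cut2       : 1-based cut positions; positions cut1..cut2 are copied
                       from parent 2, the rest and the start time from parent 1.
    Duplicated jobs in the outer parts are replaced by the missing jobs,
    taken in the order in which they appear in parent 1.
    """
    start1, seq1 = parent1
    _, seq2 = parent2
    child = seq1[:cut1 - 1] + seq2[cut1 - 1:cut2] + seq1[cut2:]
    middle = set(seq2[cut1 - 1:cut2])
    missing = iter(j for j in seq1 if j not in child)   # lost jobs, in parent-1 order
    repaired = []
    for pos, job in enumerate(child):
        outer = pos < cut1 - 1 or pos >= cut2
        repaired.append(next(missing) if outer and job in middle else job)
    return (start1, repaired)

p1 = (12, [1, 7, 3, 4, 5, 6, 2, 8, 9, 10])
p2 = (14, [1, 2, 3, 6, 8, 5, 7, 4, 9, 10])
print(two_point_crossover(p1, p2, 4, 7))   # (12, [1, 4, 3, 6, 8, 5, 7, 2, 9, 10])
```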
3.5 Mutation
The mutation operator applies just a small modification to the offspring obtained by the crossover procedure. Its purpose is to introduce into the population a new character that does not exist in the parents, in order to maintain diversity in the population and hence prevent early convergence. With a mutation probability p_m = 0.01, two jobs, for example jobs 3 and 2 in the previous offspring, switch positions, giving: [12][1, 4, 2, 6, 8, 5, 7, 3, 9, 10].
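A minimal swap-mutation sketch consistent with the example above (illustrative names, not the authors' code; the swapped positions are chosen at random rather than fixed):

```python
import random

def swap_mutation(chromosome, pm=0.01):
    """With probability pm, swap two randomly chosen jobs in the sequence part."""
    start_time, sequence = chromosome
    sequence = list(sequence)
    if random.random() < pm:
        i, j = random.sample(range(len(sequence)), 2)
        sequence[i], sequence[j] = sequence[j], sequence[i]
    return (start_time, sequence)

# Forcing the swap (pm = 1.0) on the feasible offspring from Section 3.4:
print(swap_mutation((12, [1, 4, 3, 6, 8, 5, 7, 2, 9, 10]), pm=1.0))
```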
4 Computational Analysis
The set of test problems was selected from Biskup and Feldmann (2001, 2003), with seven different instance sizes, n = 10, 20, 50, 100, 200, 500, 1000 jobs, and four restrictive factors h = 0.2, 0.4, 0.6 and 0.8. These factors are used to determine the common due date d by multiplying the total sum of all processing times:

d = h · T, where T = Σ_{i=1}^{n} p_i.

There are 10 instances for each problem size. For our computational results, we have chosen the most restrictive set of 140 instances, i.e. those with h = 0.2 and h = 0.4. In order to measure the effectiveness of the results obtained, we computed the relative percentage deviation of the average of our solution values (H) over every 10 instances with respect to the average of the corresponding benchmark values (BF) provided by Biskup and Feldmann:

RPD = ((H − BF)/BF) · 100.

Our results were obtained with the following numbers of iterations: 500 iterations for n = 10, 20, 50 jobs; 1000 iterations for n = 100, 200 jobs; and 10000 iterations for
n = 500 and 1000 jobs. The algorithm was coded in Visual Basic 6 (VB6) and run on a PC with an Intel dual-core 2.16 GHz processor and 2 GB of RAM.

h = 0.2   n=10     n=20     n=50      n=100      n=200      n=500       n=1000
BFA       1674.4   6429.2   37583.7   141143.3   543591.2   3348405.6   13293514.6
HA        1674.4   6204.7   35505.1   133021.7   509500.7   3147715.8   9208717.2
RPD%      0        -3.62    -5.85     -6.11      -6.69      -6.38       -44.36

h = 0.4   n=10     n=20     n=50      n=100      n=200      n=500       n=1000
BFA       973.1    3703.7   21419.5   82120.2    315312.9   1917425.8   7651046.5
HA        973.1    3651.1   20519     79051.7    307061.2   1787201.8   7320456.9
RPD%      0        -1.44    -4.39     -3.88      -2.69      -7.29       -4.52

Table 1: Relative percentage deviation of the average solutions.

The computational results reported in Table 1 show that our algorithm matches or improves the benchmark averages for all instance sizes. The overall average improvement is around -10.43% for h = 0.2 and -6.94% for h = 0.4. The CPU times in seconds for the results above are reported in Table 2.

n                      10    20    50    100   200    500
Time (s) for h=0.2     6     6     14    61    216    3325
Time (s) for h=0.4     6     6     14    59    212    3287

Table 2: The CPU time in seconds.
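For completeness, a tiny sketch of the two measures defined at the start of this section, using hypothetical numbers rather than values from the tables:

```python
def common_due_date(processing_times, h):
    """Common due date d = h * T, where T is the total processing time."""
    return h * sum(processing_times)

def relative_percentage_deviation(h_avg, bf_avg):
    """RPD = (H - BF) / BF * 100; negative values indicate an improvement."""
    return (h_avg - bf_avg) / bf_avg * 100.0

print(common_due_date([4, 3, 5, 6, 2], h=0.4))        # 8.0
print(relative_percentage_deviation(950.0, 1000.0))   # -5.0
```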
5 Conclusion
A new genetic algorithm was developed. It is based on the concept of dynamic programming, where the population size and the chromosome length of the solution are increased as the iteration number increases. The computational results are very encouraging. The authors are currently attempting to solve the 140 remaining instances with less restrictive factors, as well as comparing with the most recently developed meta-heuristics found in the literature.
References

[1] K. R. Baker, G. D. Scudder, Scheduling with earliness and tardiness penalties: a review, European Journal of Operational Research, 160 (2005), 190–201.

[2] C. Duron, M. A. Ould Louly, J.-M. Proth, The one machine scheduling problem: Insertion of a job under the real-time constraint, European Journal of Operational Research, 199 (3) (2008), 695–701.

[3] M. Feldmann, D. Biskup, Single-machine scheduling for minimizing earliness and tardiness penalties by meta-heuristic approaches, Computers & Industrial Engineering, 44 (2003), 307–323.

[4] M. Feldmann, D. Biskup, Benchmarks for scheduling on a single machine against restrictive and unrestrictive common due dates, Computers & Operations Research, 28 (2001), 787–801.

[5] J. F. Gonçalves, J. M. S. Valente, A genetic algorithm approach for the single machine scheduling problem with linear earliness and quadratic tardiness penalties, Computers & Operations Research, 36 (10) (2008), 2707–2715.

[6] V. S. Gordon, A. A. Tarasevich, A note: Common due date assignment for a single machine scheduling with the rate-modifying activity, Computers & Operations Research, 36 (2) (2007), 325–328.

[7] K.-C. Ying, Minimizing earliness-tardiness penalties for common due date single-machine scheduling problems by a recovering beam search algorithm, Computers & Industrial Engineering, 55 (2) (2008), 494–502.

[8] S.-W. Lin, S.-Y. Chou, K.-C. Ying, A sequential exchange approach for minimizing earliness-tardiness penalties of single-machine scheduling with a common due date, European Journal of Operational Research, 177 (2) (2006), 1294–1301.

[9] W. Weng, S. Fujimura, Self evolution algorithm to minimize earliness and tardiness penalties with a common due date on a single machine, Special Issue on Electronics, Information and Systems, 3 (6) (2008), 604–611.

[10] Y. Yu, Z. Lu, L.-M. He, J.-D. Hu, Scheduling problems on tardiness penalty and earliness award with simply linear processing time, Journal of Shanghai University, 13 (2) (2009), 123–128.