Computers & Industrial Engineering 63 (2012) 813–818
Uniform parallel-machine scheduling to minimize makespan with position-based learning curves

Wen-Chiung Lee a, Mei-Chi Chuang c,*, Wei-Chang Yeh b,c

a Department of Statistics, Feng Chia University, Taichung, Taiwan
b Advanced Analytics Institute, Faculty of Engineering and Information Technology, University of Technology Sydney, NSW 2007, Australia
c Integration and Collaboration Laboratory, Department of Industrial Engineering and Engineering Management, National Tsing Hua University, P.O. Box 24-60, Hsinchu 300, Taiwan, ROC

* Corresponding author. E-mail address: [email protected] (M.-C. Chuang).
Article info

Article history: Received 19 December 2011; Received in revised form 10 May 2012; Accepted 12 May 2012; Available online 18 May 2012.

Keywords: Scheduling; Learning curve; Uniform parallel-machine; Makespan
Abstract

Scheduling with learning effects has become a popular topic in the past decade; however, most of the research focuses on single-machine problems. In many situations, there are machines in parallel and the skills of workers might be different due to their individual experience. In this paper, we study a uniform parallel-machine problem in which the objective is to jointly find an optimal assignment of operators to machines and an optimal schedule to minimize the makespan. Two heuristic algorithms are proposed and computational experiments are conducted to evaluate their performance.

© 2012 Elsevier Ltd. All rights reserved.
1. Introduction

In classical scheduling, the processing times of jobs are assumed to be fixed and known. In many real situations, however, employees do the same job repeatedly. As a consequence, they learn and are able to perform similar jobs in a more efficient way. That is, the actual processing time of a job is shorter if it is scheduled later. This phenomenon is known as the "learning effect" and has become a popular topic in the past decade. Lee and Wu (2004) considered a two-machine flowshop with learning effects to minimize total completion time. Wang and Xia (2005) studied flow-shop scheduling with learning effects. Wu and Lee (2008) considered a number of single-machine scheduling problems with learning effects. Cheng, Lee, and Wu (2008) considered single-machine and flowshop permutation problems with deteriorating jobs and learning effects, and provided optimal solutions for several scheduling problems. Wang, Ng, Cheng, and Liu (2008) studied single-machine scheduling with a time-dependent learning effect. Biskup (2008) provided a comprehensive review of scheduling models with learning considerations. Janiak and Rudek (2009) considered a learning effect model in which the learning curve is S-shaped. They provided NP-hardness proofs for two cases of the makespan minimization problem.
Cheng, Lai, Wu, and Lee (2009) developed a learning effect model in which the job processing time is a logarithmic function of the normal processing time of jobs already processed. They provided optimal solutions for several single-machine problems. Cheng, Lee, and Wu (2010) considered a new scheduling model with deteriorating jobs, learning effects, and setup times. They obtained polynomial-time optimal solutions for single-machine problems. Janiak and Rudek (2010) suggested a new approach called multi-ability learning that generalizes existing ones and more precisely models real-life settings. They focused on the makespan problem and provided optimal polynomial-time algorithms for special cases. Ji and Cheng (2010) considered a scheduling problem with job-dependent learning effects and multiple rate-modifying activities. They showed that the problem of minimizing the total completion time is polynomially solvable. Lee, Wu, and Hsu (2010) investigated a single-machine problem with the learning effect and release times to minimize the makespan. Wang, Wang, and Wang (2010) considered resource allocation scheduling with learning effects where the processing time of a job is a function of the job's position in a sequence and its resource allocation. They provided a polynomial algorithm to find the optimal job sequence and resource allocation. Zhang and Yan (2010) provided a general learning effect model and derived optimal solutions for single-machine and flowshop problems. Wang, Sun, and Sun (2010) and Wang and Wang (2011) provided optimal solutions for a number of single-machine problems with an exponential sum-of-actual-processing-time-based learning effect. Lai and Lee (2011) proposed a learning effect model in
which the processing time of a job is a general function of the normal processing times of the jobs already processed and its own scheduled position. Rudek (2011) provided the computational complexity and solution algorithms for flowshop scheduling problems with the learning effect. Yang and Yang (2011) considered single-machine group scheduling in which both setup and job processing times are subject to the learning effect, with the objective of minimizing the makespan. They proved that the problem is polynomially solvable. Li, Hsu, Wu, and Cheng (2011) proposed a truncated position-based learning model to minimize the total completion time. They developed a branch-and-bound algorithm and three simulated annealing algorithms to solve the two-machine flowshop problem. Zhu, Sun, Chu, and Liu (2011) investigated single-machine group scheduling problems with resource allocation and learning effects. Lee (2011) proposed a general position-based learning curve model in which the plateau and S-shaped curves are its special cases. He provided optimal solutions for single-objective and multiple-objective problems on a single machine. Wu, Huang, and Lee (2011) studied a two-agent scheduling problem on a single machine with learning considerations. The objective was to minimize the total tardiness of the jobs for the first agent, given that no tardy jobs are allowed for the second agent. Cheng, Cheng, Wu, Hsu, and Wu (2011) studied a two-agent single-machine problem with truncated sum-of-processing-times-based learning effects. They utilized a branch-and-bound algorithm and three simulated annealing algorithms to obtain optimal and near-optimal solutions. Cheng, Wu, Cheng, and Wu (2011) studied a two-agent problem on a single machine in which deteriorating jobs and learning effects were considered concurrently. Cheng, Wu, Chen, Wu, and Cheng (2012) considered two-machine flowshop scheduling with a truncated learning function to minimize the makespan. They proposed a branch-and-bound algorithm and a genetic algorithm to find the optimal and approximate solutions. Yang, Cheng, Yang, and Hsu (2012) considered unrelated parallel-machine scheduling with aging effects and multi-maintenance activities to minimize the total machine load. They provided an efficient algorithm to solve the problem.

Pinedo (2008) pointed out that a bank of machines in parallel is a setting that is important from both a theoretical and a practical point of view. From a theoretical viewpoint, it is a generalization of the single machine and a special case of the flexible flowshop. From a practical point of view, it is important because the occurrence of resources in parallel is common in the real world. However, research on scheduling with learning effects on parallel machines is relatively unexplored. Eren (2009) considered a bicriteria parallel-machine scheduling problem with a learning effect of setup times and removal times. He provided a mathematical programming model for the problem of minimizing the weighted sum of total completion time and total tardiness. Toksari and Guner (2009) studied a parallel machine earliness/tardiness (ET) scheduling problem with different penalties under the effects of learning and deterioration. Okołowski and Gawiejnowicz (2010) considered the parallel-machine makespan problem in which the learning effect on job processing times is modeled by DeJong's general learning curve. They proposed two exact algorithms, a sequential branch-and-bound algorithm and a parallel branch-and-bound algorithm, for this NP-hard problem.
Hsu, Kuo, and Yang (2011) studied an unrelated parallel machine problem with setup time and learning effects in which the setup time of each job is past-sequence-dependent. They derived a polynomial time solution for the total completion time problem. Using the same model, Kuo, Hsu, and Yang (2011) considered the problem of minimizing the total absolute deviation of job completion times as well as the total load on all machines. They showed that the proposed problem is polynomially solvable.
In many situations, there are machines operating in parallel and the skills of the workers might be different due to their individual experience. In this paper, we study a uniform parallel-machine scheduling problem. The objective is to jointly find a near-optimal schedule and an assignment of operators to machines to minimize the makespan. The remainder of this paper is organized as follows. The problem formulation is presented in the next section. In Section 3, a lower bound and two heuristic algorithms are proposed for this problem. The computational experiments are conducted in Section 4. The conclusion is given in the last section.

2. Problem formulation

The proposed learning effect model for the uniform parallel-machine setting is formulated as follows. There are n jobs to be scheduled on m uniform parallel machines. All the jobs are available for processing at time 0. Job j has a normal processing time p_j and can be processed on any machine i. Without loss of generality, we assume that p_1 ≥ p_2 ≥ ... ≥ p_n. Once a job starts to be processed, it must be completed without interruption. Let s_i denote the speed of machine i. Without loss of generality, we assume that s_1 ≥ s_2 ≥ ... ≥ s_m. Each machine can process one job at a time, and there is no precedence relation between jobs. In addition, there are m operators, each of whom is assigned to exactly one machine. In this paper, we consider the learning effect model proposed by Lee (2011) together with the assignment of operators to machines. The actual processing time of job j is
p_{ikj[r]} = p_j \prod_{l=0}^{r-1} a_{l,k} / s_i                                                    (1)

if it is processed by operator k on machine i and scheduled in the rth position of a sequence, where a_{0,k} = 1 and 0 < a_{l,k} ≤ 1 for l = 1, ..., n and k = 1, ..., m. Here a_{l,k} denotes the learning rate of operator k when processing the job in the lth position, and \prod_{l=0}^{r} a_{l,k} denotes the cumulative level of the learning effect after operator k has processed r jobs. Let S = (S_1, ..., S_m) denote a schedule, where S_i is the schedule of the subset of jobs assigned to machine i under S. Let \pi = (\pi(1), ..., \pi(m)) denote an assignment of operator \pi(i) to machine i for i = 1, ..., m. Thus, the objective of this paper is to jointly find a near-optimal schedule S^* and a near-optimal operator assignment \pi^* such that

\max\{C_{\pi^*(1)}(S^*), C_{\pi^*(2)}(S^*), ..., C_{\pi^*(m)}(S^*)\} ≤ \max\{C_{\pi(1)}(S), C_{\pi(2)}(S), ..., C_{\pi(m)}(S)\}

for any schedule S and operator assignment \pi, where C_{\pi(i)}(S) is the completion time of the last job on machine i when operator \pi(i) runs machine i under S.
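To make the model concrete, the following minimal Python sketch (our own illustration, not the authors' code; names such as actual_time and makespan are hypothetical) evaluates the actual processing times of Eq. (1) and the resulting makespan for a given assignment of jobs and operators to machines and a given job order on each machine.

```python
from math import prod

def actual_time(p_j, speed, rates, r):
    """Actual processing time of a job with normal time p_j scheduled in
    position r (1-based) on a machine with the given speed, when the
    operator's learning rates are rates = [a_1, a_2, ...] (a_0 = 1)."""
    learning = prod(rates[:r - 1])          # a_1 * ... * a_{r-1}; empty product = 1
    return p_j * learning / speed

def makespan(schedules, operators, p, s, a):
    """schedules[i] = ordered list of job indices on machine i,
       operators[i] = operator assigned to machine i,
       p[j] = normal processing time of job j, s[i] = speed of machine i,
       a[k] = learning rates [a_{1,k}, a_{2,k}, ...] of operator k."""
    loads = []
    for i, jobs in enumerate(schedules):
        k = operators[i]
        load = sum(actual_time(p[j], s[i], a[k], r)
                   for r, j in enumerate(jobs, start=1))
        loads.append(load)
    return max(loads) if loads else 0.0

# Toy instance: 4 jobs, 2 machines/operators (illustrative numbers only).
p = [9, 7, 5, 3]
s = [2.0, 1.0]
a = [[0.95, 0.95, 0.95, 1.0],   # operator 0
     [0.90, 0.92, 0.94, 1.0]]   # operator 1
print(makespan([[3, 1], [2, 0]], [1, 0], p, s, a))
```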
Using the conventional notation, our problem is denoted as Qm|LE|Cmax.

3. Heuristic algorithms

The problem under study is NP-hard even without consideration of the learning effects (Lenstra, Rinnooy Kan, & Brucker, 1977). Thus, developing efficient heuristic algorithms would be a good approach. In this paper, we utilize the genetic algorithm and the particle swarm optimization method. Before presenting the algorithms, we first provide a lower bound adapted from Pinedo (2008) to evaluate the performance of the heuristics.

Property 1 (Pinedo, 2008). Let A = \min_{1 ≤ k ≤ m} \prod_{l=0}^{n} a_{l,k}. Then

LB = A \cdot \max\{ p_1/s_1, (p_1 + p_2)/(s_1 + s_2), ..., \sum_{j=1}^{m-1} p_j / \sum_{i=1}^{m-1} s_i, \sum_{j=1}^{n} p_j / \sum_{i=1}^{m} s_i \}

is a lower bound of the makespan.
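For illustration, a small Python sketch of this bound (our own helper, not the authors' code; it assumes p and s are already sorted in non-increasing order as in Section 2 and that a[k] lists the rates a_{1,k}, ..., a_{n,k} of operator k, with a_{0,k} = 1 implicit):

```python
from math import prod

def lower_bound(p, s, a):
    """Property 1: LB = A * max over the partial-sum ratios."""
    n, m = len(p), len(s)
    A = min(prod(rates[:n]) for rates in a)                    # min_k prod_{l=0}^{n} a_{l,k}
    ratios = [sum(p[:k]) / sum(s[:k]) for k in range(1, m)]    # k = 1, ..., m-1 largest jobs
    ratios.append(sum(p) / sum(s))                             # all jobs over all machines
    return A * max(ratios)
```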
Lee (2011) showed that the shortest processing time (SPT) order provides the optimal sequence for the single-machine makespan problem under the proposed model, as stated in Property 2. We will use it to facilitate the search process of both algorithms. With the help of Property 2, we only need to determine the assignment of operators and jobs to machines; the job sequence on each machine then follows the SPT rule to achieve local optimality.

Property 2 (Lee, 2011). For the 1|LE|Cmax problem, the optimal schedule is obtained by sequencing jobs in the shortest processing time (SPT) order.

3.1. Genetic algorithm (GA)

The genetic algorithm (GA) has received considerable attention regarding its potential as an optimization technique for many complex problems. It has been successfully applied in the area of industrial engineering, including scheduling and sequencing, reliability design, vehicle routing, facility layout and location, transportation, and other areas. The usual form of the GA has been described by Goldberg (1989). The GA starts with an initial population of random solutions. Each individual, or chromosome, in the population represents a solution to the problem. Chromosomes evolve through successive generations. In each generation, chromosomes are evaluated by a measure of fitness. Chromosomes with smaller makespan values, and hence larger fitness values, have higher probabilities of being selected to produce offspring through the crossover and mutation operations. The crossover operator merges two chromosomes to create offspring that inherit features from their parents. The mutation operator produces a spontaneous random change in genes to prevent premature convergence. The GA implemented for our problem proceeds as follows.

3.1.1. Chromosome representation
For a problem with n jobs and m machines, each chromosome has n + m genes. We randomly generate the (n + m)-dimensional position of a chromosome with n + m uniform random numbers between 0 and m. The integer part of each of the first n genes, plus 1, indicates the machine to which the corresponding job is assigned. The ordering (rank) of the last m genes represents the assignment of operators to the machines. For example, in a problem with 5 jobs and 3 machines, the chromosome (2.32, 0.12, 1.56, 2.79, 1.43, 2.34, 1.45, 0.28) represents the schedule with job 2 assigned to machine 1, jobs 3 and 5 assigned to machine 2, jobs 1 and 4 assigned to machine 3, operator 3 assigned to machine 1, operator 2 assigned to machine 2, and operator 1 assigned to machine 3. (A decoding sketch in code is given after Section 3.2.1.)

3.1.2. Initialization
Generate an initial population of N chromosomes and calculate their objective values.

3.1.3. Evaluation
Calculate the fitness value of each chromosome as follows:

fitness_i = \beta - \log(C_{\max,i})                                                                (2)

where C_{\max,i} is the makespan of chromosome i and \beta is chosen to be a sufficiently large number such that the fitness value of each chromosome remains positive.

3.1.4. Selection
We use the roulette wheel method to select the parent chromosomes, where the selection probability is given as

p_k = fitness_k / \sum_{k=1}^{N} fitness_k                                                          (3)

where p_k and fitness_k are the selection probability and the fitness value of chromosome k.

3.1.5. Crossover
In this study, we use the flat crossover operator, which takes the value of each gene from one of the two parents with equal probability to form the offspring. As shown in Fig. 1, the resulting offspring is (1.54, 0.25, 1.45, 0.74, 1.77, 2.89, 1.74, 1.54) if the 1st, 4th, 5th and 8th genes are taken from parent 1 and the remaining genes from parent 2.

Fig. 1. Crossover operator.
Parent 1:  1.54 0.27 2.35 0.74 1.77 2.13 0.41 1.54
Parent 2:  2.51 0.25 1.45 1.22 2.47 2.89 1.74 2.65
Offspring: 1.54 0.25 1.45 0.74 1.77 2.89 1.74 1.54

3.1.6. Mutation
The mutation mechanism randomly selects a gene and alters its value. As shown in Fig. 2, the fourth gene is randomly selected and its value is altered to 0.66.

Fig. 2. Mutation procedure.
Parent:    0.54 1.47 2.48 1.89 0.45 1.25 0.47 2.77
Offspring: 0.54 1.47 2.48 0.66 0.45 1.25 0.47 2.77

3.2. Particle swarm optimization method (PSO)

Particle swarm optimization (PSO) is an evolutionary computational method proposed by Kennedy and Eberhart (1995). Its main concept is inspired by the collective behavior of animals: bees or other animals exchange their experiences of searching for food, which lets the swarm move toward the food along shorter paths or in a more efficient way. In PSO, each particle carries the relevant information about the decision variables together with a fitness value that indicates the performance of the particle. Basically, the trajectory of each particle is updated according to its own flying experience as well as that of the best particle in the swarm. The elements of the PSO implemented for our problem are described below.

3.2.1. Initialization
The position of each particle is a real-number coding of a solution, as described in Section 3.1.1.
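The following minimal Python sketch (our own code, not the authors'; the ranking rule for the last m genes is inferred from the worked example in Section 3.1.1) shows how a real-coded chromosome, or equivalently a PSO particle position, can be decoded and evaluated: jobs are assigned to machines by the integer parts of the first n genes, operators by the ascending ranks of the last m genes, each machine sequences its jobs by SPT (Property 2), and the makespan follows Eq. (1).

```python
from math import prod

def decode(chromosome, n, m, p, s, a):
    """Decode an (n+m)-gene chromosome (genes in [0, m)) and return its makespan.
    p[j]: normal processing times, s[i]: machine speeds,
    a[k]: learning rates [a_{1,k}, a_{2,k}, ...] of operator k."""
    job_genes, op_genes = chromosome[:n], chromosome[n:]

    # First n genes: integer part gives the (0-based) machine of each job.
    machines = [[] for _ in range(m)]
    for j, g in enumerate(job_genes):
        machines[int(g)].append(j)

    # Last m genes: the ascending rank of gene i gives the operator of machine i
    # (rule inferred from the example in Section 3.1.1).
    ranks = sorted(range(m), key=lambda i: op_genes[i])
    operator = [0] * m
    for rank, i in enumerate(ranks):
        operator[i] = rank                     # operator `rank` works on machine i

    # Property 2: sequence the jobs on each machine in SPT order, then apply Eq. (1).
    cmax = 0.0
    for i, jobs in enumerate(machines):
        jobs.sort(key=lambda j: p[j])
        rates = a[operator[i]]
        load = sum(p[j] * prod(rates[:r]) / s[i]     # product of a_1 ... a_{r}; position r+1
                   for r, j in enumerate(jobs))
        cmax = max(cmax, load)
    return cmax

# Example with the chromosome from Section 3.1.1 (5 jobs, 3 machines; p, s, a illustrative).
chrom = [2.32, 0.12, 1.56, 2.79, 1.43, 2.34, 1.45, 0.28]
p = [10, 8, 6, 4, 2]
s = [3.0, 2.0, 1.0]
a = [[0.95] * 5, [0.93] * 5, [0.97] * 5]
print(decode(chrom, 5, 3, p, s, a))
```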
Table 1. The actual values of learning rates for large job-sized problems.

Pattern      Learning rates
No LE        a_{l,k} = 1 for l = 1, ..., 9 and k = 1, ..., m
Constant     a_{1,k} ~ U(0.95, 0.99) for k = 1, ..., m;  a_{l,k} = a_{1,k} for l = 2, ..., 4;  a_{l,k} = 1 for l ≥ 5
Increasing   a_{1,k} ~ U(0.93, 0.95) for k = 1, ..., m;  a_{l,k} = a_{l-1,k} + 0.002 for l = 2, ..., 4;  a_{l,k} = 1 for l ≥ 5
Decreasing   a_{1,k} ~ U(0.98, 1) for k = 1, ..., m;  a_{l,k} = a_{l-1,k} - 0.002 for l = 2, ..., 4;  a_{l,k} = 1 for l ≥ 5

Table 4. Parameter settings of GA and PSO.

GA:   iteration number 500; population size 300; crossover rate 0.9; mutation rate 0.03; \beta set to 100.
PSO:  iteration number 500; particle number 300; maximum velocity 30; weight 0.8; cognition learning factors c1 = 2, c2 = 1.49; \beta set to 100.
Table 2. Different levels of each parameter for GA and PSO.

GA
Crossover rate:  0.7 (Pc1), 0.8 (Pc2), 0.9 (Pc3)
Mutation rate:   0.01 (Pm1), 0.02 (Pm2), 0.03 (Pm3)

PSO
Maximum velocity:            28 (MV1), 29 (MV2), 30 (MV3)
Weight:                      0.7 (W1), 0.8 (W2), 0.9 (W3)
Cognition learning factors:  c1 = 2, c2 = 2 (CLF1); c1 = 2, c2 = 1.49 (CLF2); c1 = 1.49, c2 = 1.49 (CLF3)

3.2.2. Calculate the fitness value
The fitness value of each particle is calculated according to Eq. (2), and the value of the parameter \beta is the same as that in Section 3.1.3.

3.2.3. Update the particle local best
For each particle, compare its current fitness value with the best fitness value it has found so far. Replace its best fitness value and best position if the current fitness value is better (i.e., larger, corresponding to a smaller makespan).

3.2.4. Update the global best
For each particle, compare its best fitness value with the global best fitness value. Replace the global best fitness value and the global best position if the particle's best fitness value is better.

3.2.5. Update the velocity and position of particles
At iteration t, update the velocity and position of each particle as follows:

v_{ij}^{t} = w v_{ij}^{t-1} + c_1 rand() (p_{ij}^{t-1} - x_{ij}^{t-1}) + c_2 rand() (g_{j}^{t-1} - x_{ij}^{t-1})        (4)

x_{ij}^{t} = x_{ij}^{t-1} + v_{ij}^{t}                                                                                  (5)

where w is the inertia weight, whose value decreases as the iteration number increases; v_{ij}^{t-1} is the velocity of particle i in the jth dimension at iteration t - 1; c_1 and c_2 are the cognition learning factors; rand() is a random number between 0 and 1; p_{ij}^{t-1} is the best solution found by particle i in the jth dimension up to iteration t - 1; g_{j}^{t-1} is the best solution among all the particles in the jth dimension up to iteration t - 1; and x_{ij}^{t-1} is the position of particle i in the jth dimension at iteration t - 1. In addition, we impose upper and lower limits on the velocity of the particles:

If v_{ij}^{t} > v_{max}, then v_{ij}^{t} = v_{max}; if v_{ij}^{t} < -v_{max}, then v_{ij}^{t} = -v_{max}.               (6)
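A minimal Python sketch of update rules (4)-(6) for a single particle (our own illustration, not the authors' Visual Basic implementation; the default parameter values follow Table 4 only as an example):

```python
import random

def pso_step(x, v, p_best, g_best, w=0.8, c1=2.0, c2=1.49, v_max=30.0):
    """One velocity/position update for a single particle.
    x, v, p_best, g_best are lists of equal length (one entry per dimension)."""
    new_x, new_v = [], []
    for j in range(len(x)):
        vj = (w * v[j]
              + c1 * random.random() * (p_best[j] - x[j])    # cognition term, Eq. (4)
              + c2 * random.random() * (g_best[j] - x[j]))   # social term, Eq. (4)
        vj = max(-v_max, min(v_max, vj))                     # velocity clamp, Eq. (6)
        new_v.append(vj)
        new_x.append(x[j] + vj)                              # position update, Eq. (5)
    return new_x, new_v
```

When combined with the coding of Section 3.1.1, the updated positions would also have to be kept inside [0, m) before decoding; the paper does not spell out this step, so any wrapping or clamping rule used here would be an assumption.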
4. Computational experiments

To evaluate the performance of the proposed algorithms, a computational experiment is conducted in this section. All the algorithms are coded in Visual Basic 6.0 and run on a personal computer with an Intel Core i3 2.94 GHz CPU and 1.9 GB of RAM under Windows XP. The proposed heuristic algorithms are tested with two different numbers of jobs, n = 100 and 200. The number of machines (m) is set at three levels, namely 10, 20 and 30. The processing times (p_j) are generated from a discrete uniform distribution U(1, 100). The speeds of the machines (s_i) are generated from a continuous uniform distribution U(1, 10). Four patterns of learning effects are studied: no learning effect (No LE), constant learning rate (Const), increasing learning rate (Inc), and decreasing learning rate (Dec). The framework for generating the actual learning-rate values is presented in Table 1. A set of 100 instances is randomly generated for each situation. As a result, 24 experimental cases are conducted and a total of 2400 instances are tested. In this paper, we utilize the Taguchi method to determine the values of the parameters in GA and PSO. The Taguchi method is a statistical method for robust parameter design (Cheng & Chang, 2007; Kuo, Syu, Chen, & Tien, 2012; Yildiz, 2009). After several pretests, we set each of these parameters at three levels, as shown in Table 2. An orthogonal array represents a reduced set of experiments. In PSO, the total number of possible parameter combinations is 3^3 = 27.
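For concreteness, a small Python sketch of the instance generation just described (our own reading of Table 1; the function name generate_instance is hypothetical):

```python
import random

def generate_instance(n, m, pattern):
    """Generate processing times, machine speeds and learning rates as in Table 1."""
    p = [random.randint(1, 100) for _ in range(n)]        # p_j ~ discrete U(1, 100)
    s = [random.uniform(1, 10) for _ in range(m)]         # s_i ~ continuous U(1, 10)
    a = []
    for _ in range(m):                                    # one rate sequence per operator
        if pattern == "No LE":
            rates = [1.0] * n
        else:
            if pattern == "Const":
                a1, step = random.uniform(0.95, 0.99), 0.0
            elif pattern == "Inc":
                a1, step = random.uniform(0.93, 0.95), 0.002
            else:                                         # "Dec"
                a1, step = random.uniform(0.98, 1.0), -0.002
            rates = [a1 + step * l for l in range(4)]     # a_1, ..., a_4
            rates += [1.0] * (n - 4)                      # a_l = 1 for l >= 5
        a.append(rates)
    return p, s, a

p, s, a = generate_instance(100, 10, "Const")
```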
Table 3. L9 orthogonal table and results for GA and PSO.

No.   GA: Crossover rate   GA: Mutation rate   GA: S/N ratio   PSO: Max. velocity   PSO: Weight   PSO: Cognition learning factor   PSO: S/N ratio
1     Pc1                  Pm1                 36.4569         MV1                  W1            CLF1                             34.6270
2     Pc1                  Pm2                 36.6233         MV1                  W2            CLF2                             32.9278
3     Pc1                  Pm3                 35.8838         MV1                  W3            CLF3                             32.6328
4     Pc2                  Pm1                 36.579          MV2                  W1            CLF2                             32.2835
5     Pc2                  Pm2                 36.9935         MV2                  W2            CLF3                             33.4760
6     Pc2                  Pm3                 36.2314         MV2                  W3            CLF1                             34.7037
7     Pc3                  Pm1                 36.4448         MV3                  W1            CLF3                             33.1549
8     Pc3                  Pm2                 35.8875         MV3                  W2            CLF1                             33.4526
9     Pc3                  Pm3                 35.7491         MV3                  W3            CLF2                             32.9062
Table 5. The computational results of GA and PSO for large job-sized problems.

m    n    Pattern   GA/LB (Mean)   GA/LB (Max)   PSO/LB (Mean)   PSO/LB (Max)   Mean CPU time (GA)   Mean CPU time (PSO)
10   100  No LE     1.3559         2.0576        1.0453          1.1826         68.9081              74.5113
10   100  Const     1.4377         2.5324        1.0931          1.3011         66.0978              71.9083
10   100  Inc       1.4095         2.1110        1.0950          1.2221         63.1877              71.3081
10   100  Dec       1.4642         2.4890        1.1048          1.3240         63.1443              71.1983
10   200  No LE     1.6092         2.7562        1.1020          1.5175         170.2711             200.6799
10   200  Const     1.5899         2.8091        1.1384          1.3778         171.9185             195.0400
10   200  Inc       1.6987         3.1736        1.1407          1.4440         171.0761             198.3597
10   200  Dec       1.5855         2.3808        1.1368          1.4762         171.2615             195.8638
20   100  No LE     1.7188         2.4287        1.2234          1.4147         65.5876              79.0461
20   100  Const     1.8630         2.7147        1.3652          1.5619         65.7636              78.3473
20   100  Inc       1.8791         2.3866        1.3537          1.5493         66.3714              79.4691
20   100  Dec       1.8773         2.5087        1.3560          1.6084         65.7289              79.5114
20   200  No LE     1.8310         2.5588        1.2528          1.4790         140.9204             172.0282
20   200  Const     1.9945         2.9792        1.3873          1.6155         141.3012             173.5059
20   200  Inc       1.9680         2.7165        1.3849          1.6936         141.3661             173.1888
20   200  Dec       1.9567         2.6474        1.3873          2.0076         144.6989             173.6605
30   100  No LE     1.9963         2.5716        1.3995          1.6705         82.5751              98.6717
30   100  Const     2.4733         3.3445        1.6510          1.8995         82.3577              102.7313
30   100  Inc       2.4035         3.2332        1.6603          2.0911         82.2678              99.7031
30   100  Dec       2.4226         3.3710        1.6614          1.9864         82.0006              102.9380
30   200  No LE     2.0876         3.1295        1.4032          1.6415         154.4243             186.1148
30   200  Const     2.4073         3.4658        1.6147          1.9137         149.0319             187.7024
30   200  Inc       2.3985         3.2602        1.6078          2.1447         148.9144             192.0494
30   200  Dec       2.4235         3.3066        1.5925          2.0043         155.3245             187.8692

The L9 orthogonal array, however, needs a set of only 9 experiments. With the help of these 9 experiments, we can find a suitable level for each factor; the results are illustrated in Table 3. Each experiment is implemented three times. The best values of the parameters in GA and PSO are shown in Table 4. The mean and maximum ratios of GA/LB and PSO/LB are given in Table 5, where GA denotes the solution obtained by the genetic algorithm, PSO denotes the solution obtained by the particle swarm optimization method, and LB denotes the lower bound from Property 1. In addition, the average execution times of the two algorithms are given in Table 5. The results indicate that the performance of PSO is consistently better than that of GA when the number of jobs is large, although it consumes more time. The ratio with no learning effect is always the smallest one among the four learning curves, which implies that the lower bound is tighter, or that the heuristics yield better solutions, in this case. It is worth mentioning that the operator with the best learning ability is not always assigned to the highest-speed machine in the optimal solution, although this cannot be seen from Table 5. Moreover, it is clear that the distribution of the speeds of the machines has no influence on the performance of the PSO and the GA. It is observed that the ratios of GA/LB and PSO/LB remain the same when the number of jobs increases. Thus, PSO is recommended for the proposed problem.

5. Conclusion

In this paper, we utilized GA and PSO algorithms to solve a uniform parallel-machine problem with learning effects. The objective was to jointly find an optimal assignment of operators to machines and an assignment of jobs to machines such that the makespan is minimized. Computational experiments were conducted to evaluate the performance of the heuristics under several different scenarios. The results show that PSO outperforms GA, and it is therefore recommended.

Acknowledgements

The authors are grateful to the area editor and the referees, whose constructive comments have led to a substantial improvement in the presentation of the paper.

References
Biskup, D. (2008). A state-of-the-art review on scheduling with learning effect. European Journal of Operational Research, 188(2), 315–329.
Cheng, B. W., & Chang, C. L. (2007). A study on flowshop scheduling problem combining Taguchi experimental design and genetic algorithm. Expert Systems with Applications, 32(2), 415–421.
Cheng, T. C. E., Cheng, S. R., Wu, W. H., Hsu, P. H., & Wu, C. C. (2011). A two-agent single-machine scheduling problem with truncated sum-of-processing-times-based learning considerations. Computers and Industrial Engineering, 60(4), 534–541.
Cheng, T. C. E., Lai, P. J., Wu, C. C., & Lee, W. C. (2009). Single-machine scheduling with sum-of-logarithm-processing-times-based learning considerations. Information Sciences, 179(18), 3127–3135.
Cheng, T. C. E., Lee, W. C., & Wu, C. C. (2008). Some scheduling problems with deteriorating jobs and learning effects. Computers and Industrial Engineering, 54(4), 972–982.
Cheng, T. C. E., Lee, W. C., & Wu, C. C. (2010). Scheduling problems with deteriorating jobs and learning effects including proportional setup times. Computers and Industrial Engineering, 58(2), 326–331.
Cheng, T. C. E., Wu, C. C., Chen, J. C., Wu, W. H., & Cheng, S. R. (2012). Two-machine flowshop scheduling with a truncated learning function to minimize the makespan. International Journal of Production Economics. doi:10.1016/j.ijpe.2012.03.027.
Cheng, T. C. E., Wu, W. H., Cheng, S. R., & Wu, C. C. (2011). Two-agent scheduling with position-based deteriorating jobs and learning effects. Applied Mathematics and Computation, 217(21), 8804–8824.
Eren, T. (2009). A bicriteria parallel machine scheduling with a learning effect of setup and removal times. Applied Mathematical Modelling, 33(2), 1141–1150.
Goldberg, D. (1989). Genetic algorithms in search, optimization and machine learning. Reading, MA: Addison-Wesley.
Hsu, C. J., Kuo, W. H., & Yang, D. L. (2011). Unrelated parallel machine scheduling with past-sequence-dependent setup time and learning effects. Applied Mathematical Modelling, 35(3), 1492–1496.
Janiak, A., & Rudek, R. (2009). Experience based approach to scheduling problems with the learning effect. IEEE Transactions on Systems, Man, and Cybernetics – Part A, 39(2), 344–357.
Janiak, A., & Rudek, R. (2010). A note on a makespan minimization problem with a multi-abilities learning effect. Omega, The International Journal of Management Science, 38(3–4), 213–217.
Ji, M., & Cheng, T. C. E. (2010). Scheduling with job-dependent learning effects and multiple rate-modifying activities. Information Processing Letters, 110(11), 460–463.
Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. Proceedings of IEEE International Conference on Neural Networks, 4, 1942–1948.
Kuo, W. H., Hsu, C. J., & Yang, D. L. (2011). Some unrelated parallel machine scheduling problems with past-sequence-dependent setup time and learning effects. Computers and Industrial Engineering, 61(1), 179–183.
Kuo, R. J., Syu, Y. J., Chen, Z. Y., & Tien, F. C. (2012). Integration of particle swarm optimization and genetic algorithm for dynamic clustering. Information Sciences, 195, 124–140.
Lai, P. J., & Lee, W. C. (2011). Single-machine scheduling with general sum-of-processing-time-based and position-based learning effect. Omega, The International Journal of Management Science, 39(5), 467–471.
Lee, W. C. (2011). Scheduling with general position-based learning curves. Information Sciences, 181(24), 5515–5522.
Lee, W. C., & Wu, C. C. (2004). Minimizing total completion time in a two-machine flowshop with a learning effect. International Journal of Production Economics, 88(1), 85–93.
Lee, W. C., Wu, C. C., & Hsu, P. H. (2010). A single-machine learning effect scheduling problem with release times. Omega, The International Journal of Management Science, 38, 3–11.
Lenstra, J. K., Rinnooy Kan, A. H. G., & Brucker, P. (1977). Complexity of machine scheduling problems. Annals of Discrete Mathematics, 1, 343–362.
Li, D. C., Hsu, P. H., Wu, C. C., & Cheng, T. C. E. (2011). Two-machine flowshop scheduling with truncated learning to minimize the total completion time. Computers and Industrial Engineering, 61(3), 656–662.
Okołowski, D., & Gawiejnowicz, S. (2010). Exact and heuristic algorithms for parallel-machine scheduling with DeJong's learning effect. Computers and Industrial Engineering, 59(2), 272–279.
Pinedo, M. (2008). Scheduling: Theory, algorithms, and systems (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
Rudek, R. (2011). Computational complexity and solution algorithms for flowshop scheduling problems with the learning effect. Computers and Industrial Engineering, 61(1), 20–31.
Toksari, M. D., & Guner, E. (2009). Parallel machine earliness/tardiness scheduling problem under the effects of position based learning and linear/nonlinear deterioration. Computers and Operations Research, 36(8), 2394–2417.
Wang, J. B., Ng, C. T., Cheng, T. C. E., & Liu, L. L. (2008). Single-machine scheduling with a time-dependent learning effect. International Journal of Production Economics, 111(2), 802–811.
Wang, J. B., Sun, L. H., & Sun, L. Y. (2010). Scheduling jobs with an exponential sum-of-actual-processing-time-based learning effect. Computers and Mathematics with Applications, 60(9), 2673–2678.
Wang, J. B., & Wang, J. J. (2011). Single-machine scheduling jobs with exponential learning functions. Computers and Industrial Engineering, 60(4), 755–759.
Wang, D., Wang, M. Z., & Wang, J. B. (2010). Single-machine scheduling with learning effect and resource-dependent processing times. Computers and Industrial Engineering, 59(3), 458–462.
Wang, J. B., & Xia, Z. Q. (2005). Flow-shop scheduling with learning effect. Journal of the Operational Research Society, 56(11), 1325–1330.
Wu, C. C., Huang, S. K., & Lee, W. C. (2011). Two-agent scheduling with learning consideration. Computers and Industrial Engineering, 61(4), 1324–1335.
Wu, C. C., & Lee, W. C. (2008). Single-machine scheduling problems with a learning effect. Applied Mathematical Modelling, 32, 1191–1197.
Yang, D. L., Cheng, T. C. E., Yang, S. J., & Hsu, C. J. (2012). Unrelated parallel-machine scheduling with aging effects and multi-maintenance activities. Computers and Operations Research, 39(7), 1458–1464.
Yang, S. J., & Yang, D. L. (2011). Single-machine scheduling simultaneous with position-based and sum-of-processing-times-based learning considerations under group technology assumption. Applied Mathematical Modelling, 35(5), 2068–2074.
Yildiz, A. R. (2009). A new design optimization framework based on immune algorithm and Taguchi's method. Computers in Industry, 60(8), 613–620.
Zhang, X. G., & Yan, G. L. (2010). Machine scheduling problems with a general learning effect. Mathematical and Computer Modelling, 51(1–2), 84–90.
Zhu, Z., Sun, L., Chu, F., & Liu, M. (2011). Single-machine group scheduling with resource allocation and learning effect. Computers and Industrial Engineering, 60(1), 148–157.