Neurocomputing 148 (2015) 248–259
An improved discrete artificial bee colony algorithm to minimize the makespan on hybrid flow shop problems

Zhe Cui, Xingsheng Gu*

Key Laboratory of Advanced Control and Optimization for Chemical Process, East China University of Science and Technology, Ministry of Education, Shanghai 200237, China
Article history: Received 3 January 2013. Received in revised form 8 July 2013. Accepted 19 July 2013. Available online 25 June 2014.

Abstract
As a typical NP-hard combinatorial optimization problem, the hybrid flow shop (HFS) problem is widespread in manufacturing systems. In this article, the HFS problem is modeled by a vector representation, and an improved discrete artificial bee colony (IDABC) algorithm is proposed for this problem to minimize the makespan. The proposed IDABC algorithm combines a novel differential evolution and a modified variable neighborhood search to generate new solutions for the employed and onlooker bees, and destruction and construction procedures are used to obtain solutions for the scout bees. Moreover, an orthogonal test is applied to configure the system parameters efficiently, after a small number of training trials. The simulation results demonstrate that the proposed IDABC algorithm is effective and efficient compared with several state-of-the-art algorithms on the same benchmark instances. © 2014 Elsevier B.V. All rights reserved.
Keywords: Hybrid flow shop problem; Scheduling; Mathematical model; Artificial bee colony; Orthogonal test
1. Introduction

Production scheduling is a decision-making process that plays a crucial role in manufacturing and service industries. It concerns how to allocate available production resources to tasks over given time periods, aiming at optimizing one or more objectives [1]. As industries face increasingly competitive situations, the classical flow shop model is not applicable to some practical industrial processes. As a result, the hybrid flow shop (HFS) problem, in which a combination of flow shop and parallel machines operate together, arises. The HFS problem, also called the multi-processor or flexible flow shop, widely exists in real manufacturing environments, such as the chemical, oil, food, tobacco, textile, paper, and pharmaceutical industries. In the HFS problem, it is assumed that the jobs have to pass through all stages in the same order and that there is at least one stage with multiple machines. Additionally, a machine can process only one job at a time, and a single job can be processed by only one machine at a time. Preemption of processing is not allowed. The problem consists of assigning the jobs to machines at each stage and ordering the jobs assigned to the same machine in order to minimize some criteria. As to computational complexity, since even the two-stage HFS problem is strongly NP-hard [2,3] when minimizing the maximum
* Corresponding author. Tel.: +86 2164253463; fax: +86 2164253121. E-mail address: [email protected] (X. Gu).
http://dx.doi.org/10.1016/j.neucom.2013.07.056
completion time (makespan), the multi-stage HFS problem is at least as difficult as the two-stage one. Despite its intractability, the HFS problem has great significance in both engineering and theoretical fields. Therefore, it is meaningful to develop an effective and efficient approach for this problem. Among the large number of algorithms for this problem, the authors opt for a simple classification into three classes [4,5]: exact algorithms, heuristics, and meta-heuristics. The branch and bound (B&B) method is the preferred exact algorithm for solving the HFS problem. Arthanary and Ramaswamy [6] developed a B&B method for a simple HFS problem with only two stages in the 1970s. Santos et al. [7] presented a global lower bound for makespan minimization that has been used to analyze the performance of other algorithms. Neron et al. [8] used satisfiability tests and time-bound adjustments based on energetic reasoning and global operations to enhance the efficiency of another kind of B&B method proposed by Carlier and Neron [9]. Several heuristics have also been extensively studied. An efficient heuristic algorithm was developed by Gupta [2] to find a minimum-makespan schedule for the two-stage HFS problem with only one machine at stage two. A similar problem was later studied with the objective of minimizing the total number of tardy jobs in [10]. Brah and Loo [11] investigated several flow shop heuristics for the HFS problem with the makespan and mean flow time criteria. They also examined the effects of the problem characteristics on the performance of the heuristics using regression analysis. With advanced
statistical tools, Ruiz et al. [12] tested several heuristics on the realistic HFS problem and suggested that the modified NEH heuristic [13] outperformed other dispatching rules. Ying and Lin [14] proposed an effective and efficient heuristic with a simple conception to solve the multistage hybrid flow shop with multiprocessor tasks. During the past decades, meta-heuristics, which can generate approximate solutions close to the optimum in considerably less computational time, have become a new and effective approach to the HFS problem. The genetic algorithm (GA) was applied by several researchers to solve the HFS problem with the makespan criterion [15,16]. On the basis of the vertebrate immune system, Engin and Doyen [17] proposed an artificial immune system (AIS) technique that incorporated the clonal selection principle and affinity maturation mechanism. Inspired by the natural mechanism of the ant colony, Alaykyran et al. [18] introduced an improved ant colony optimization (ACO) algorithm, employing the same formula as the classical ant system algorithm except for a different starting-solution procedure. Niu et al. [19] presented a quantum-inspired immune algorithm (QIA) for the HFS problem to minimize the makespan. Liao et al. [20] developed a particle swarm optimization (PSO) algorithm that hybridized PSO with a bottleneck heuristic to fully exploit the bottleneck stage, and further introduced simulated annealing to help escape from local optima; a local search was also embedded to further improve its performance. The artificial bee colony (ABC) algorithm, simulating the intelligent foraging behaviors of honey bee colonies, is one of the latest population-based evolutionary meta-heuristics [21]. Applying the ABC algorithm to numerical optimization problems, Basturk and Karaboga suggested that it performs better than other population-based algorithms [22–24].
Due to the ABC algorithm's remarkable advantages [25], including its simple structure, easy implementation, quick convergence, and few control parameters, it has found increasingly wide application in a variety of fields [26–29]. Nevertheless, on account of its continuous nature, studies on the ABC algorithm for production scheduling problems are still in their infancy. In recent studies, researchers have successfully applied ABC-based algorithms to a series of scheduling problems, such as the permutation flow shop scheduling problem [30], the blocking flow shop scheduling problem [31,32], the lot-streaming flow shop scheduling problem [33,34], the multiobjective flexible job shop scheduling problem [35], and the steelmaking process scheduling problem [36]. Although the ABC algorithm has solved some combinatorial optimization problems, there is no published work using an ABC-based algorithm for the HFS problem. In this article, the model of the HFS problem is established by employing the vector representation, and then an improved discrete artificial bee colony (IDABC) algorithm is proposed for the problem to minimize the makespan. In the IDABC algorithm, an efficient population initialization based on the NEH heuristic is incorporated into the random initialization to generate an initial population with certain quality and diversity. Meanwhile, a novel differential evolution operation, which includes mutation, crossover, and selection, is applied to generate new neighboring food sources in the employed bee phase. Moreover, a modified variable neighborhood search is developed to further improve the performance in the onlooker bee phase. Furthermore, inspired by the iterated greedy algorithm, the destruction and construction procedures are employed in the scout bee phase to enrich the population and to avoid premature convergence. Besides, an orthogonal experimental design is used to provide a recipe for tuning the adjustable parameters of the IDABC algorithm.
Finally, the simulation results on benchmarks demonstrate the effectiveness and efficiency of the proposed IDABC algorithm.
The rest of the article is organized as follows. In Section 2, the model of the HFS problem is formulated. Section 3 presents the details of the proposed IDABC algorithm. The tests on parameter selection and the simulation results are provided in Section 4. Finally, conclusions are drawn in Section 5.
2. Problem statement

2.1. Description of the problem

The HFS problem can be described as follows. There are n jobs J = {1, 2, …, i, …, n−1, n} that have to be performed in s stages S = {1, 2, …, j, …, s−1, s}, and each stage j has m_j identical machines. These identical machines are continuously available from time zero and have the same effect. At least one stage j must have more than one machine. Every job has to visit all of the stages in the same order, from stage 1 through stage s, and is processed by exactly one machine at every stage. A machine can process at most one job at a time, and a job can be processed by at most one machine at a time. The processing time p_{i,j} is given for each job at each stage. The setup and release times of all jobs are negligible. Preemption is not allowed, and intermediate buffer capacities between two successive stages are unlimited. The scheduling problem is to choose a machine at each stage for each job and to determine the sequence of jobs on each machine so as to minimize the makespan.

2.2. Mathematical model

In research on the HFS problem, there are two formats for representing a solution, namely the matrix representation and the vector representation [37]. In the matrix representation, a solution is represented by an s × n matrix whose elements are all real numbers [38]. Each job is assigned a real number whose integer part is the machine number on which the job is to be processed and whose fractional part is used to sort the jobs on the same machine. The advantages of this representation are that it avoids infeasible solutions and that it covers the entire solution space. However, it has two disadvantages. The first is that a large number of matrices are associated with a single solution of the problem, so the correspondence is not one-to-one. The other is that it is cumbersome to work with.
In this article, the authors employ the vector representation, which considers the sequence of jobs only at stage one. This subset sequence should contain the collection of all potentially good solutions for the problem and should be a one-to-one correspondence. Most importantly, it is very convenient to design and operate with this format. A subset sequence is decoded into a complete schedule by a generalization of the List Scheduling (LS) algorithm that incorporates the jobs at the other stages [39,40]. For scheduling jobs at each stage, the LS algorithm is based on the first-come-first-served rule, in which the jobs with the shortest completion time from the previous stage are scheduled as early as possible. This may result in a non-permutation schedule, that is, the sequence of jobs at each stage may differ. In terms of this representation, the model of the HFS problem can be formulated as follows:

Minimize

  C_max(π_1) = max_{i=1,2,…,n} {C_{π_s(i),s}}    (1)

Subject to

  C_{π_1(i),1} = p_{π_1(i),1},  IM_{i,1} = C_{π_1(i),1},    i = 1, 2, …, m_1    (2)
  C_{π_1(i),1} = min_{k=1,…,m_1} {IM_{k,1}} + p_{π_1(i),1}
  NM_1 = arg min_{k=1,…,m_1} {IM_{k,1}}        i = m_1+1, m_1+2, …, n    (3)
  IM_{NM_1,1} = C_{π_1(i),1}

  π_j(i) = g(C_{π_{j−1}(i),j−1}),    i = 1, 2, …, n;  j = 2, 3, …, s    (4)

  C_{π_j(i),j} = C_{π_j(i),j−1} + p_{π_j(i),j}
  IM_{i,j} = C_{π_j(i),j}        i = 1, 2, …, m_j;  j = 2, 3, …, s    (5)

  C_{π_j(i),j} = max {C_{π_j(i),j−1}, min_{k=1,…,m_j} {IM_{k,j}}} + p_{π_j(i),j}
  NM_j = arg min_{k=1,…,m_j} {IM_{k,j}}        i = m_j+1, m_j+2, …, n;  j = 2, 3, …, s    (6)
  IM_{NM_j,j} = C_{π_j(i),j}

where π_j is the job permutation at stage j; π_k(i) is the ith job in the permutation π_k; C_{π_k(i),j} is the completion time of job π_k(i) at stage j; IM_{i,j} represents the idle moment (earliest available time) of machine i at stage j; NM_j denotes the serial number of the machine that is available earliest at stage j; the function π_j(i) = g(C_{π_{j−1}(i),j−1}) means that π_j is the permutation of the jobs at stage j obtained by sorting them in ascending order of their completion times at stage j−1; and arg min_k {IM_k} stands for the argument of the minimum, i.e. the value of k for which IM_k attains its minimum. Eq. (1) defines the objective function, which is to minimize the makespan C_max. Through the recursive equations (2)–(6), the completion times of the jobs are first calculated at stage one, then at stage two, and so on until the last stage.

To illustrate the model and the issues described above, the authors consider a simple example of the HFS problem with 4 jobs and 3 stages. The number of machines at each stage is 2 and the processing times p_{i,j} are

  p_{i,j} = [ 2  4  1
              5  4  2
              2  1  5
              2  2  3 ]

Suppose the job sequence at stage one, π_1 = (2, 1, 3, 4), is given. The makespan C_max is then calculated as follows (the Gantt chart is shown in Fig. 1):

Stage 1:
  C_{π_1(1),1} = IM_{1,1} = p_{π_1(1),1} = 5
  C_{π_1(2),1} = IM_{2,1} = p_{π_1(2),1} = 2
  C_{π_1(3),1} = min {IM_{1,1}, IM_{2,1}} + p_{π_1(3),1} = 2 + 2 = 4
  NM_1 = arg min {IM_{1,1}, IM_{2,1}} = 2,  IM_{2,1} = C_{π_1(3),1} = 4
  C_{π_1(4),1} = min {IM_{1,1}, IM_{2,1}} + p_{π_1(4),1} = 4 + 2 = 6

  π_2 = (1, 3, 2, 4)

Stage 2:
  C_{π_2(1),2} = IM_{1,2} = C_{π_2(1),1} + p_{π_2(1),2} = 2 + 4 = 6
  C_{π_2(2),2} = IM_{2,2} = C_{π_2(2),1} + p_{π_2(2),2} = 4 + 1 = 5
  C_{π_2(3),2} = max {C_{π_2(3),1}, min {IM_{1,2}, IM_{2,2}}} + p_{π_2(3),2} = 5 + 4 = 9
  NM_2 = arg min {IM_{1,2}, IM_{2,2}} = 2,  IM_{2,2} = C_{π_2(3),2} = 9
  C_{π_2(4),2} = max {C_{π_2(4),1}, min {IM_{1,2}, IM_{2,2}}} + p_{π_2(4),2} = 6 + 2 = 8

  π_3 = (3, 1, 4, 2)

Stage 3:
  C_{π_3(1),3} = IM_{1,3} = C_{π_3(1),2} + p_{π_3(1),3} = 5 + 5 = 10
  C_{π_3(2),3} = IM_{2,3} = C_{π_3(2),2} + p_{π_3(2),3} = 6 + 1 = 7
  C_{π_3(3),3} = max {C_{π_3(3),2}, min {IM_{1,3}, IM_{2,3}}} + p_{π_3(3),3} = 8 + 3 = 11
  NM_3 = arg min {IM_{1,3}, IM_{2,3}} = 2,  IM_{2,3} = C_{π_3(3),3} = 11
  C_{π_3(4),3} = max {C_{π_3(4),2}, min {IM_{1,3}, IM_{2,3}}} + p_{π_3(4),3} = 10 + 2 = 12

Thus the makespan is

  C_max(π_1) = max {C_{π_3(1),3}, C_{π_3(2),3}, C_{π_3(3),3}, C_{π_3(4),3}} = C_{π_3(4),3} = 12

Fig. 1. An example of the HFS problem with 4 jobs and 3 stages.
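The recursive model (1)–(6) maps directly to code. Below is a minimal sketch (Python; the function and variable names are illustrative, not the authors' implementation, and job/stage indices are 0-based internally) of the list-scheduling decoding described above. Run on the 4-job, 3-stage example it reproduces the makespan of 12 and, internally, the stage orders π_2 and π_3:

```python
# Hedged sketch of the list-scheduling (LS) decoding of a stage-one permutation.
# p[i][j] = processing time of job i at stage j; m[j] = number of machines at stage j.

def decode_makespan(pi1, p, m):
    n, s = len(pi1), len(p[0])
    order = list(pi1)              # job order at the current stage
    finish = [0] * n               # completion time of each job at the current stage
    prev = [0] * n                 # completion times at the previous stage
    for j in range(s):
        if j > 0:
            # LS rule (Eq. 4): sort jobs by ascending completion time at stage j-1
            order = sorted(order, key=lambda job: prev[job])
        machines = [0] * m[j]      # idle moments IM of the m[j] machines
        for job in order:
            k = min(range(m[j]), key=lambda x: machines[x])  # earliest machine (NM)
            start = max(prev[job], machines[k])               # Eqs. (5)-(6)
            machines[k] = finish[job] = start + p[job][j]
        prev = finish[:]
    return max(finish)             # Eq. (1)

# Example from the text: pi_1 = (2,1,3,4) in 1-based labels -> [1,0,2,3] 0-based.
p = [[2, 4, 1], [5, 4, 2], [2, 1, 5], [2, 2, 3]]
print(decode_makespan([1, 0, 2, 3], p, [2, 2, 2]))   # -> 12
```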
3. The improved discrete artificial bee colony (IDABC) algorithm for the HFS problem

The ABC algorithm, inspired by the foraging behaviors of real honey bees, was originally designed for optimization problems of a continuous nature. In a real bee colony, some tasks are performed by specialized individuals. Bees try to maximize the nectar amount unloaded to the food stores in the hive through division of labor and self-organization, which are essential components of swarm intelligence [41]. The colony of artificial bees in the ABC algorithm contains three groups of bees: the employed bees, the onlooker bees, and the scout bees. Each plays a different role in the process: the employed bees fly onto the sources they are exploiting; the onlooker bees, waiting in the hive, decide whether a food source is promising by watching the dances performed by the employed bees; and the scout bees choose sources randomly by means of internal motivations or possible external clues. The onlooker bees and the scout bees together are also called unemployed bees. A food source in the search space corresponds to a solution of the optimization problem, and the nectar amount of a food source corresponds to the fitness of the solution. The procedure of the ABC algorithm is shown in pseudo-code in Fig. 2. Following the above procedure for continuous function optimization, in this article the authors propose an improved discrete version of the ABC algorithm for the HFS problem, described in detail below.

3.1. Individual representation and initialization

Owing to the continuous nature of the basic ABC algorithm, researchers have typically converted the real domain into the discrete domain when applying it to discrete optimization, an overhead that complicates the algorithms. The model of the HFS problem was formulated using the vector representation in the previous section. As a result, the authors adopt this representation
Fig. 2. The procedure of basic ABC algorithm.
in the proposed IDABC algorithm. The individual in the IDABC algorithm is represented by a permutation of jobs at stage one, π = {π(1), π(2), …, π(n)}. The number of individuals (food sources) is determined by the parameter NP, which also denotes the number of employed bees or onlooker bees. To guarantee an initial population with a certain quality and diversity, the population is constructed randomly, except for one individual that is established by the aforementioned NEH heuristic [13]. According to [11,12], the NEH heuristic, a typical constructive method for the permutation flow shop scheduling problem, is also very robust and well-performing for the HFS problem.

3.2. Employed bee phase

In the original ABC algorithm, the employed bees exploit the given food sources in their neighborhood. Here the authors propose a novel differential evolution scheme for the employed bees to generate neighboring food sources. The scheme consists of three steps: mutation, crossover, and selection. In the mutation part, two parameters are introduced: the mutation rate (MR) and the insert times (IT). For each incumbent individual, a uniformly distributed random number in [0,1] is generated. If it is less than MR, the mutant individual is obtained by applying the insert operation IT times to the best individual π_best in the population; otherwise, the mutant individual is obtained by applying the insert operation IT times to a randomly selected individual π_r. That is, the mutant individual is obtained as

  V_i^t = { insert(π_best^{t−1}) IT times,  if rand(0,1) < MR
          { insert(π_r^{t−1}) IT times,     otherwise          (7)

where V_i^t, π_best^{t−1}, and π_r^{t−1} denote the mutant individual at generation t, the best individual at generation t−1, and a randomly selected individual at generation t−1, respectively; rand(0,1) is a random function returning a uniformly distributed number between 0 and 1; and insert(·) denotes a random insert move.
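The mutation step of Eq. (7) can be sketched as follows (Python; `insert_move`, `mutate`, and the parameter names are illustrative, not the authors' code):

```python
import random

def insert_move(perm):
    """One random insert move: remove a job and reinsert it at a new position."""
    pi = list(perm)
    job = pi.pop(random.randrange(len(pi)))
    pi.insert(random.randrange(len(pi) + 1), job)
    return pi

def mutate(pi_best, pi_r, MR, IT):
    """Eq. (7): perturb the best individual with probability MR,
    otherwise a randomly selected one, by IT random insert moves."""
    base = pi_best if random.random() < MR else pi_r
    for _ in range(IT):
        base = insert_move(base)
    return base
```

Whatever branch is taken, the result remains a permutation of the n jobs, so no repair step is needed.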
Next, the partially mapped crossover (PMX) [42], a widely used crossover operator for permutation-based representations, is used in the crossover part. The incumbent individual and the mutant individual undergo the PMX operation with a crossover rate (CR) to obtain the two crossed individuals. Otherwise, with probability (1 − CR), the two crossed individuals are simply copies of the mutant individual. That is, a crossed individual is given by

  U_i^t = { PMX(V_i^t, π_i^{t−1}),  if rand(0,1) < CR
          { V_i^t,                  otherwise          (8)

where U_i^t represents the crossed individual at generation t and PMX(·) denotes the PMX operator. Following the crossover operation, the selection is conducted: the individual with the lowest objective value among the two crossed individuals and the incumbent individual is accepted. In other words, if either of the two crossed individuals yields a better makespan than the incumbent individual, the better individual replaces the incumbent one and becomes a new member of the population; otherwise, the old individual is retained.
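A standard PMX implementation is sketched below (Python; a hedged illustration rather than the authors' exact operator — a single child is returned here, whereas the text produces two crossed individuals by exchanging the parents' roles):

```python
import random

def pmx(parent1, parent2, rng=random):
    """Partially mapped crossover: the child inherits the segment [a, b)
    from parent1 and fills the remaining positions from parent2,
    resolving conflicts through the segment's position-wise mapping."""
    n = len(parent1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = parent1[a:b]
    segment = set(parent1[a:b])
    # gene of parent1 at a segment slot -> gene of parent2 at the same slot
    mapping = {parent1[k]: parent2[k] for k in range(a, b)}
    for k in list(range(a)) + list(range(b, n)):
        gene = parent2[k]
        while gene in segment:      # conflict: follow the mapping chain
            gene = mapping[gene]
        child[k] = gene
    return child
```

The mapping chain always terminates, so the child is guaranteed to be a valid permutation without repair.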
3.3. Onlooker bee phase

The onlooker bees are placed on the food sources through a probability-based selection process, as in the basic ABC algorithm. As the nectar amount of a food source increases, so does the probability with which it is preferred by onlookers. In this article, the authors also apply this probability selection. After the wheel selection, a modified variable neighborhood search (VNS) [43] is incorporated into the algorithm as a hybrid strategy to further improve the performance. Two neighborhood structures are utilized, referred to as the insert local search and the swap local search. Their procedures are given in Fig. 3, where u and v are two different positive integers chosen randomly in the range [1, n]. The operation insert(π, u, v) removes the job at its original position u and inserts it at another position v, and the operation swap(π, u, v) swaps the uth job of the solution π with the vth job. The local search combining the insert and swap local searches is as follows:

Step 1. Perform the insert local search. If the individual is improved, go to Step 2; otherwise end the procedure.
Step 2. Perform the swap local search. If the individual is improved, go back to Step 1; otherwise end the procedure.

If the new food source obtained is better than or equal to the incumbent one, the new food source is memorized in the population. The onlooker bee phase of the IDABC algorithm intensifies the local search on the relatively promising solutions.

Fig. 3. The procedure of the modified variable neighborhood search: (a) the insert local search and (b) the swap local search.

3.4. Scout bee phase

As stated for the basic ABC algorithm, the scout bees search randomly in the predefined space. This procedure increases the population diversity and avoids getting trapped in local optima. The new food source of a scout bee is produced as follows. First, a tournament selection of size two is applied, owing to its simplicity and efficiency. That is, a scout bee selects two individuals π_a and π_b randomly from the population and compares them: if the makespan of π_a is smaller than that of π_b, π_a wins the tournament and π_b loses. Then the scout bee generates a new solution π_new by employing the destruction and construction procedures of the iterated greedy (IG) algorithm [44]. These procedures, which have one parameter, the destruction size (d), are performed on the better individual π_a from the tournament selection. After that, the new solution π_new becomes a new member of the population and the worse individual π_b is discarded, regardless of whether π_new is better than π_b or not. In this phase, the number of scout bees is ten percent of the number of food sources.

3.5. Computational procedure

Based on the above designs, the procedure of the IDABC algorithm for the HFS problem is summarized as follows:
Step 1. Set the algorithm parameters NP, MR, CR, IT, and d, and initialize the population.
Step 2. Employed bee phase: apply the differential evolution operation, which includes mutation, crossover, and selection, to each individual in the population.
Step 3. Onlooker bee phase: employ the modified variable neighborhood search to further improve the performance.
Step 4. Scout bee phase: produce new solutions by using the destruction and construction procedures.
Step 5. If the given termination criterion is satisfied, end the procedure and return the best solution; otherwise go back to Step 2.

It is worth noting that several DABC algorithms have been proposed in recent years. To make a comparison among them, the authors compare the IDABC algorithm with DABCS, proposed by Sang et al. [34], and HDABC, proposed by Liu and Liu [30]. The IDABC algorithm differs considerably from the other two. In terms of initialization, DABCS employs an extended NEH heuristic and an insert operator, and HDABC employs a greedy randomized adaptive search procedure (GRASP) based on the NEH heuristic, while IDABC employs the simple NEH heuristic, which also guarantees an initial population with a certain quality and diversity. In the employed bee phase, DABCS performs an insert or swap operator and HDABC performs an approach based on path relinking, while IDABC performs the differential evolution operation. In the onlooker bee phase, DABCS and HDABC utilize the swap or insert operator randomly, while IDABC utilizes a modified variable neighborhood search based on both insert and swap neighborhoods. In the scout bee phase, DABCS applies one insert operator and one swap operator to the best solution to generate a solution, and HDABC generates a solution by applying the GRASP based on the NEH heuristic with a fixed α value, while IDABC generates new solutions by using the destruction and construction procedures.
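The VNS of Step 3 (Section 3.3) can be sketched as follows (Python; `cost` stands in for an assumed makespan evaluator such as the LS decoding of Section 2, and the fixed number of random trial moves per pass is a hedged reading of Fig. 3):

```python
import random

def insert_move(pi):
    pi = list(pi)
    job = pi.pop(random.randrange(len(pi)))
    pi.insert(random.randrange(len(pi) + 1), job)
    return pi

def swap_move(pi):
    pi = list(pi)
    u, v = random.sample(range(len(pi)), 2)
    pi[u], pi[v] = pi[v], pi[u]
    return pi

def local_search(pi, cost, move, tries):
    """Apply `tries` random moves, keeping any strict improvement."""
    best, best_c, improved = list(pi), cost(pi), False
    for _ in range(tries):
        cand = move(best)
        c = cost(cand)
        if c < best_c:
            best, best_c, improved = cand, c, True
    return best, improved

def vns(pi, cost, tries=50):
    """Alternate the insert and swap local searches (Steps 1-2 of Sec. 3.3)
    until one of them fails to improve the solution."""
    while True:
        pi, improved = local_search(pi, cost, insert_move, tries)
        if not improved:
            return pi
        pi, improved = local_search(pi, cost, swap_move, tries)
        if not improved:
            return pi
```

Because both moves preserve the permutation property, the returned solution never needs repair and its cost is never worse than the input's.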
The IDABC algorithm uses the employed bees and the onlooker bees for exploitation, and the scout bees for exploration. Since both exploitation and exploration are improved and well balanced, the algorithm is expected to generate satisfactory results for the HFS problem with the makespan criterion. In the next section, the performance of the IDABC algorithm is investigated through simulation results and comparisons.
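The destruction and construction procedures used by the scout bees (Section 3.4) follow the iterated greedy scheme of [44]; a hedged sketch, again with an assumed `cost` evaluator in place of the makespan computation:

```python
import random

def destruction_construction(pi, d, cost):
    """Remove d randomly chosen jobs (destruction), then greedily
    reinsert each one at its best position (construction)."""
    pi = list(pi)
    removed = [pi.pop(random.randrange(len(pi))) for _ in range(d)]
    for job in removed:
        # try every insertion position and keep the cheapest schedule
        candidates = [pi[:pos] + [job] + pi[pos:] for pos in range(len(pi) + 1)]
        pi = min(candidates, key=cost)
    return pi
```

The construction step is the NEH-style greedy reinsertion that makes IG perturbations far less disruptive than purely random restarts.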
4. Simulation results and comparisons

4.1. Experimental setup

To fully examine the performance of the IDABC algorithm, a parameter discussion and an extensive experimental comparison with other powerful methods are provided. The IDABC algorithm was coded in Visual C++ and run on an Intel Pentium 3.06 GHz PC with 2 GB RAM under the Windows 7 operating system. The computational experiments are conducted on the 98 benchmark instances presented in [9] and the 10 benchmark instances generated more recently by Liao et al. [20]. The sizes of the 98 instances vary from 10 jobs and 5 stages to 15 jobs and 10 stages, and their processing times are uniformly distributed between 3 and 20. Three characteristics define a problem: the number of jobs, the number of stages, and the number of identical machines at each stage. The authors therefore use notation such as j10c5b1, which denotes a 10-job, 5-stage problem. The letters j and c indicate the job and stage, respectively. The letter b defines
the structure of the machine layout at the stages, and the last number, 1, is the problem index for a specific type. The meanings of the letters for machine layouts are given below [18]:

a. There is one machine at the middle stage (bottleneck) and three machines at the other stages;
b. There is one machine at the first stage (bottleneck) and three machines at the other stages;
c. There are two machines at the middle stage (bottleneck) and three machines at the other stages;
d. There are three machines at each stage (no bottleneck);
e. There are two machines at the middle stage (bottleneck), one machine at the last stage (bottleneck) and three machines at the other stages.

The 98 benchmark problems taken from Carlier and Neron [9] are relatively simple. Thus, another 10 benchmark problems [20] are also used in this section. In each instance of Liao's benchmark problems, there are 30 jobs and 5 stages. At each stage, the machine number is uniformly distributed in the range [3,5]. The processing times in these problems are within [1,100].

4.2. Parameter discussion

Tuning parameters properly is critical for an evolutionary algorithm to achieve good performance. In the proposed IDABC algorithm, there are five main parameters: NP, MR, CR, IT, and d. Common approaches to parameter design lead either to a long time span for trying out all combinations, or to a premature termination of the design process with results far from optimal in most cases. In this part, the Factorial Design (FD) [45] approach, which can reduce the number of experiments while still achieving satisfactory solutions, is applied to provide a recipe for tuning the adjustable parameters of the IDABC algorithm. According to [46], the ABC algorithm does not need fine tuning of NP to obtain satisfactory results; hence NP is set equal to the number of jobs n. Based on extensive simulations [44,47], the range of d is set to [3,5].
Meanwhile, MR, CR, and IT are set in the ranges [0.2,1], [0.2,1], and [3,5], respectively. All four parameters are regarded as factors at three different levels, as illustrated in Table 1. Considering statistical theory and cost, a full factorial design, which requires 3^4 = 81 experiments, is neither necessary nor economical in this case. Therefore, the authors carry out an orthogonal experimental design, which needs just 9 experiments. Parameter experiments are conducted on the 10 harder benchmark problems proposed by Liao et al. [20]. Taking the test on instance j30c5e1 as an example, Table 2 shows the orthogonal parameter table L9(3^4), including the 9 groups of parameter test samples. Each group of parameters was tested twenty times, and the termination criterion for all the parameter tests was a computation time of 100 s. In Table 2, in order to evaluate the effects of the factors after implementing each group of parameters listed in the table, the mean values of the makespan over the twenty runs are
Table 1. Factors and levels for orthogonal experiment.

Factor  | Level 1 | Level 2 | Level 3
MR (1)  | 0.2     | 0.5     | 0.8
CR (2)  | 0.2     | 0.5     | 0.8
IT (3)  | 3       | 4       | 5
d (4)   | 3       | 4       | 5
obtained and listed in the last column of the table. k_i in column j is the mean value of the experiment results of the 3 groups of factor j at level i (i = 1,2,3; j = 1,2,3,4). The standard deviations (STD) in
Table 2. Orthogonal parameter table L9(3^4) and results of j30c5e1.

Test | MR      | CR      | IT    | d     | Mean value
1    | 0.2 (1) | 0.2 (1) | 3 (1) | 3 (1) | 465.3
2    | 0.2 (1) | 0.5 (2) | 4 (2) | 4 (2) | 465.15
3    | 0.2 (1) | 0.8 (3) | 5 (3) | 5 (3) | 465.2
4    | 0.5 (2) | 0.2 (1) | 4 (2) | 5 (3) | 465.25
5    | 0.5 (2) | 0.5 (2) | 5 (3) | 3 (1) | 465.35
6    | 0.5 (2) | 0.8 (3) | 3 (1) | 4 (2) | 465.3
7    | 0.8 (3) | 0.2 (1) | 5 (3) | 4 (2) | 464.8
8    | 0.8 (3) | 0.5 (2) | 3 (1) | 5 (3) | 464.9
9    | 0.8 (3) | 0.8 (3) | 4 (2) | 3 (1) | 464.8
k1   | 465.22  | 465.12  | 465.17 | 465.15 |
k2   | 465.30  | 465.13  | 465.07 | 465.08 |
k3   | 464.83  | 465.10  | 465.12 | 465.12 |
STD  | 0.251   | 0.015   | 0.050  | 0.035  |
column j is the standard deviation of k1–k3 for factor j. The unabridged result tables, similar to Table 2, for all 10 experiments are too large, so they are omitted here. Tables 3–6 show the values of k_i at the three levels of the four parameters in the 10 experiments, respectively. In Tables 3–6, the smallest value of k_i in each column identifies the preferred level, so the satisfactory values of these parameters can be identified. Tables 3 and 4 indicate that the best values of MR and CR are both 0.8. Table 5 shows that the results with 4 insertions are best. It can be observed from Table 6 that the best value of the destruction size d is 4. Thus, the authors finally set the parameters MR = 0.8, CR = 0.8, IT = 4, and d = 4 in the following experiments.

4.3. Preliminary experiment

Compared with the pure discrete artificial bee colony algorithm, four improvements are made in the processes of initialization, evolution operation, neighborhood search, and scout bee phase. In this experiment, the authors consider the proposed IDABC algorithm and its four variants. The first variant is the pure discrete artificial bee colony algorithm and is denoted as DABC;
Table 3
Parameter experiment results of IDABC with different MR.

Problem    k1 (MR = 0.2)  k2 (MR = 0.5)  k3 (MR = 0.8)
j30c5e1    465.22         465.30         464.83
j30c5e2    616.00         616.00         616.00
j30c5e3    596.77         596.90         596.73
j30c5e4    566.45         566.78         566.67
j30c5e5    602.75         602.67         602.42
j30c5e6    603.43         603.72         603.92
j30c5e7    626.13         626.23         626.12
j30c5e8    674.78         675.02         674.62
j30c5e9    643.93         643.80         643.90
j30c5e10   577.78         577.67         577.47
Table 4
Parameter experiment results of IDABC with different CR.

Problem    k1 (CR = 0.2)  k2 (CR = 0.5)  k3 (CR = 0.8)
j30c5e1    465.12         465.13         465.10
j30c5e2    616.00         616.00         616.00
j30c5e3    596.88         596.67         596.85
j30c5e4    566.65         566.35         566.90
j30c5e5    602.62         602.72         602.50
j30c5e6    603.63         604.18         603.25
j30c5e7    626.12         626.30         626.07
j30c5e8    674.73         674.92         674.77
j30c5e9    643.78         643.82         644.03
j30c5e10   577.73         577.65         577.53
Table 5
Parameter experiment results of IDABC with different IT.

Problem    k1 (IT = 3)  k2 (IT = 4)  k3 (IT = 5)
j30c5e1    465.17       465.07       465.12
j30c5e2    616.00       616.00       616.00
j30c5e3    596.78       597.00       596.62
j30c5e4    566.40       566.78       566.72
j30c5e5    602.63       602.68       602.52
j30c5e6    603.60       603.40       604.07
j30c5e7    626.20       626.17       626.12
j30c5e8    674.82       674.75       674.85
j30c5e9    643.87       643.85       643.92
j30c5e10   577.85       577.42       577.65
Table 6
Parameter experiment results of IDABC with different d.

Problem    k1 (d = 3)  k2 (d = 4)  k3 (d = 5)
j30c5e1    465.15      465.08      465.12
j30c5e2    616.00      616.00      616.00
j30c5e3    597.02      596.42      596.97
j30c5e4    566.50      566.47      566.93
j30c5e5    602.58      602.53      602.72
j30c5e6    604.05      603.30      603.72
j30c5e7    626.13      626.18      626.17
j30c5e8    674.83      674.72      674.87
j30c5e9    643.80      643.83      644.00
j30c5e10   577.20      577.52      578.20
the second variant introduces the heuristic into the initialization of DABC and is denoted as DABC-heu; the third variant adds a differential evolution scheme to DABC-heu and is denoted as DABC-heu-DE; the fourth variant adds a modified variable neighborhood search to the neighborhood search phase and is denoted as DABC-heu-DE-VNS. To validate the effect of each element of the algorithm on solving the HFS problem, five algorithms, DABC, DABC-heu, DABC-heu-DE, DABC-heu-DE-VNS, and IDABC, are compared on Liao's benchmarks. The execution time for each instance is limited to 200 s. Each method was run twenty times, and its performance, including the average and minimum values, is recorded in Table 7, where AVE and MIN denote the average and minimum values respectively. The result of DABC-heu is only slightly better than that of DABC because the heuristic in the initialization improves the quality of the initial population. The dominance of DABC-heu-DE over DABC-heu validates the effectiveness of the differential evolution operations, and the dominance of DABC-heu-DE-VNS over DABC-heu-DE validates the effectiveness of the modified variable neighborhood search, which intensifies the local search. Finally, the overall mean values of AVE and MIN yielded by the IDABC algorithm are 596.9 and 595 respectively, making IDABC the best of the five variants on Liao's benchmark problems under the same computational time. This shows that the combined use of the heuristic, the differential evolution scheme, the modified variable neighborhood search, and the scout bee scheme is the key to balancing exploration and exploitation and thereby improving the performance of the DABC algorithm.
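The ablation above can be visualized with a schematic sketch of the bee-phase structure on a toy permutation problem. This is not the authors' implementation: the cost function, the data, and the single `insert_move` operator (standing in for the differential evolution and variable neighborhood search moves) are all illustrative; only the overall structure, greedy acceptance for employed/onlooker bees and destruction-construction rebuilds for scout bees, follows the description in the text:

```python
import random

random.seed(1)
N_JOBS, POP, LIMIT, ITERS = 8, 10, 5, 200
proc = [random.randint(1, 9) for _ in range(N_JOBS)]   # toy processing times

def cost(perm):
    # Toy surrogate for makespan: weighted completion-style objective
    return sum((i + 1) * proc[j] for i, j in enumerate(perm))

def insert_move(perm):
    # Placeholder neighborhood operator (stands in for DE/VNS moves)
    p = perm[:]
    j = p.pop(random.randrange(len(p)))
    p.insert(random.randrange(len(p) + 1), j)
    return p

def destruct_construct(perm, d=3):
    # Scout-bee rebuild: remove d jobs, greedily reinsert each at its best slot
    p = perm[:]
    removed = [p.pop(random.randrange(len(p))) for _ in range(d)]
    for j in removed:
        p = min((p[:k] + [j] + p[k:] for k in range(len(p) + 1)), key=cost)
    return p

pop = [random.sample(range(N_JOBS), N_JOBS) for _ in range(POP)]
trials = [0] * POP
for _ in range(ITERS):
    for i in range(POP):
        # Employed and onlooker phases, merged here for brevity
        cand = insert_move(pop[i])
        if cost(cand) < cost(pop[i]):
            pop[i], trials[i] = cand, 0
        else:
            trials[i] += 1
        if trials[i] > LIMIT:          # scout phase
            pop[i], trials[i] = destruct_construct(pop[i]), 0

best = min(pop, key=cost)
print("best toy cost:", cost(best))
```

Replacing `insert_move` with a richer operator set (as the variants DABC-heu-DE and DABC-heu-DE-VNS do) changes only the inner move, not this skeleton.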
4.4. Computational results

4.4.1. Comparison on Carlier and Neron's benchmarks
Several algorithms have been applied to Carlier and Neron's benchmark problems. To evaluate the performance of the proposed IDABC algorithm in solving the HFS problem with the makespan criterion, the IDABC algorithm is compared with the B&B method of Neron et al. [8], the AIS of Engin and Doyen [17], the ACO of Alaykyran et al. [18], the GA of Kahraman et al. [16], the QIA of Niu et al. [19], and the PSO algorithm recently developed by Liao et al. [20]. The lower bound (LB) of these problems for makespan minimization [7,8] was calculated to analyze the performance of the algorithms. Following extensive simulations [8,16–18,20], the maximum run time of each algorithm was set to 1600 s or until the LB was reached. If the LB was not found within this time limit, the search was stopped and the best solution found was accepted as the final solution. To establish accurate and objective comparisons, the computational results of the compared algorithms were taken from their original papers. Note that the B&B, AIS, ACO, GA, and PSO algorithms also limited their run time to 1600 s, while QIA was run for a fixed number of iterations. For each test problem, the proposed IDABC algorithm was run independently twenty times. The solutions of all the algorithms to Carlier and Neron's benchmark problems are given in Appendix A, and the statistical results are summarized in Table 8, where Solved is the number of problems the algorithm can solve and Deviation is the average relative percentage error to the LB. There are 98 problems, consisting of 55 easy problems and 43 hard problems. The problems with the a and b machine layouts are easy; those with the c, d, and e layouts are relatively harder to solve and are mostly grouped as hard problems. As Table 8 shows, the machine layout has an important effect on problem complexity and hence on solution quality. Of the 55 easy problems and 43 hard problems, B&B, AIS, GA, and PSO can solve 53 easy and 24 hard problems, ACO can solve only 45 easy and 18 hard problems, and QIA can solve only 29 easy and 12 hard problems, while the proposed IDABC algorithm can solve all 98 problems. The average percentage deviations of IDABC on the easy and hard problems are 0.94% and 2.82% respectively. For the 55 easy problems, QIA has a zero deviation value, but it can solve only 29 of them. The performance of PSO is comparable with that of the proposed IDABC, but it still cannot solve as many problems as IDABC. Table A.1 in Appendix A shows that the IDABC algorithm can obtain the best
Table 8
Comparison results on Carlier and Neron's benchmark problems.

             Easy problems             Hard problems
Algorithm    Solved  Deviation (%)     Solved  Deviation (%)
B&B          53      2.17              24      6.88
AIS          53      0.99              24      3.13
ACO          45      0.92              18      3.88
GA           53      0.95              24      3.05
QIA          29      0                 12      5.04
PSO          53      0.95              24      2.85
IDABC        55      0.94              43      2.82
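The Deviation column is simply the relative percentage error of each best makespan against the instance's lower bound, averaged over the solved instances. A minimal sketch (the numbers below are illustrative, not taken from the paper's tables):

```python
def avg_deviation(cmax, lb):
    """Average relative percentage error of makespans against lower bounds."""
    return sum(100.0 * (c - b) / b for c, b in zip(cmax, lb)) / len(cmax)

# Illustrative values: two instances at their LB, one slightly above it
print(round(avg_deviation([88, 121, 74], [88, 121, 73]), 2))  # -> 0.46
```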
Table 7
Performance of IDABC with different versions.

           DABC           DABC-heu       DABC-heu-DE    DABC-heu-DE-VNS  IDABC
Problem    AVE     MIN    AVE     MIN    AVE     MIN    AVE     MIN      AVE     MIN
j30c5e1    471.4   470    470.9   467    470.5   466    466.5   464      465.2   463
j30c5e2    616.0   616    616.0   616    616.0   616    616.0   616      616.0   616
j30c5e3    601.6   599    601.2   598    600.5   598    597.8   596      596.4   593
j30c5e4    574.1   571    574.0   571    571.6   570    567.9   565      566.2   565
j30c5e5    604.4   603    604.6   603    604.4   603    603.6   601      602.0   600
j30c5e6    607.8   604    607.8   605    606.7   603    605.0   602      603.1   601
j30c5e7    628.4   626    628.4   626    627.5   626    626.6   626      626.0   626
j30c5e8    680.2   678    680.3   678    678.8   676    675.6   674      674.7   674
j30c5e9    649.9   647    649.4   647    647.6   645    645.0   643      643.7   642
j30c5e10   586.9   584    586.3   584    583.6   582    579.9   577      576.3   573
Average    602.1   599.8  601.9   599.5  600.7   598.5  598.4   596.4    596.9   595
solutions in almost all the problems, and the average computation time of the IDABC algorithm is much shorter than that of the other six algorithms. Furthermore, the IDABC algorithm can solve some complex problems which are not tackled by other algorithms. Therefore, it is concluded that the IDABC algorithm is more effective and efficient in comparison with other algorithms for Carlier and Neron's benchmark problems.
4.4.2. Comparison on Liao's benchmarks
For Liao's benchmark problems, two meta-heuristics, AIS and PSO, have been applied in [20], and the optimal solutions for these problems are unknown. The proposed IDABC algorithm is compared with these two algorithms and with another powerful algorithm, the DABC proposed by Tasgetiren et al. [48]. The computational results are shown in Table 9, where the experimental data of AIS and PSO were obtained from the corresponding literature, and the DABC and IDABC algorithms were run for twenty independent replications on each problem. The execution time for each problem is limited to 200 s. In Table 9, STD indicates the standard deviation, and T denotes the average computation time (in seconds) for the search to converge to the final solution; the smallest values of AVE, MIN, STD, and T in each row are shown in bold. The overall mean values of AVE, MIN, and STD yielded by the IDABC algorithm are 596.9, 595, and 1.1 respectively, which are much better than those generated by AIS, PSO, and DABC. Moreover, the average computation time T of IDABC (48 s) is much shorter than those of AIS, PSO, and DABC. These observations show that the IDABC algorithm obtains better solutions than the other algorithms in a clearly shorter computation time, i.e., it converges to good solutions faster than AIS, PSO, and DABC, and it is also more stable than the other algorithms on Liao's benchmark problems. Furthermore, an analysis of variance (ANOVA) is carried out to confirm whether the observed differences are statistically significant. To this end, ARE is analyzed by a multiple-comparison method using the least significant difference (LSD) procedure, where ARE denotes the average relative error to the best solution found by any of the compared algorithms.
Obviously, the smaller the ARE value is, the better the result the algorithm yields. The means plot of ARE for all the compared algorithms, with LSD intervals at a 95% confidence level, is shown in Fig. 4. The authors assume the null hypothesis H0: μIDABC ≥ μAIS and the alternative hypothesis H1: μIDABC < μAIS at the 0.05 significance level. Similarly, the hypotheses H0′: μIDABC ≥ μPSO, H1′: μIDABC < μPSO, H0″: μIDABC ≥ μDABC, and H1″: μIDABC < μDABC can be stated. The hypothesis tests of IDABC against the other algorithms are summarized in Table 10. Fig. 4 and Table 10 show that the proposed IDABC algorithm is statistically better than AIS, PSO, and DABC. Meanwhile, the figure also demonstrates the stability of the IDABC algorithm. Instances j30c5e4 and j30c5e10 are selected to draw the convergence curves of all the algorithms in Fig. 5 and Fig. 6, respectively. These figures show that, with the same time used as the stopping criterion, the proposed IDABC algorithm outperforms the other three algorithms on the HFS problem with the makespan criterion.
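A minimal sketch of a one-sided comparison in the spirit of the tests above, H0: μIDABC ≥ μOTHER versus H1: μIDABC < μOTHER, using a pooled two-sample t statistic. The ARE samples here are illustrative, not the paper's data, and 1.686 is the one-sided t critical value at α = 0.05 for 38 degrees of freedom (two groups of 20 runs):

```python
import statistics

def t_statistic(x, y):
    """Pooled two-sample t statistic; equal sample sizes assumed."""
    n = len(x)
    sp2 = (statistics.variance(x) + statistics.variance(y)) / 2
    return (statistics.mean(x) - statistics.mean(y)) / (2 * sp2 / n) ** 0.5

# Illustrative ARE samples for 20 runs of each algorithm
idabc = [0.30, 0.32, 0.28, 0.31, 0.29] * 4
dabc = [0.55, 0.60, 0.52, 0.58, 0.56] * 4

t = t_statistic(idabc, dabc)
# One-sided test at alpha = 0.05: reject H0 when t < -t_crit
print("reject H0:", t < -1.686)
```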
Fig. 4. Means plot of ARE with 95% LSD intervals for different algorithms.
Table 10
The hypothesis tests of IDABC with other algorithms.

                  p-Value   Significant? (p < 0.05)
IDABC and AIS     0.000     Yes
IDABC and PSO     0.001     Yes
IDABC and DABC    0.007     Yes
Table 9
Comparison results on Liao's benchmark problems.

           AIS                       DABC                      PSO                       IDABC
Problem    AVE    MIN  STD  T(s)    AVE    MIN  STD  T(s)    AVE    MIN  STD  T(s)    AVE    MIN  STD  T(s)
j30c5e1    485.4  479  2.6  99      474.7  471  1.4  96      469.2  468  1.4  60      465.2  463  1.5  57
j30c5e2    620.7  619  1.6  80      616.3  616  0.4  55      616.0  616  0.0  3       616.0  616  0.0  2
j30c5e3    625.7  614  4.8  117     610.3  602  4.7  65      598.9  598  0.9  79      596.4  593  1.7  49
j30c5e4    588.6  582  3.4  109     577.1  575  1.5  87      571.0  566  1.8  100     566.2  565  1.2  39
j30c5e5    618.8  610  3.4  101     606.8  605  1.1  80      603.5  603  1.0  53      602.0  600  1.6  58
j30c5e6    625.8  620  3.0  100     612.5  605  3.5  68      605.6  604  1.2  93      603.1  601  1.5  55
j30c5e7    641.3  635  4.7  94      630.6  629  0.8  87      627.2  626  0.5  64      626.0  626  0.0  19
j30c5e8    697.5  686  5.1  101     684.2  678  2.5  98      677.9  675  1.7  67      674.7  674  0.9  55
j30c5e9    670.2  662  3.9  101     654.7  651  1.9  84      647.9  646  1.9  79      643.7  642  1.0  67
j30c5e10   613.5  604  5.3  89      599.8  594  5.3  77      583.5  581  1.6  86      576.3  573  1.5  76
Average    618.7  611  3.8  99      606.7  603  2.3  80      600.0  598  1.2  69      596.9  595  1.1  48
Fig. 5. The convergence curves of instance j30c5e4.
Fig. 6. The convergence curves of instance j30c5e10.
5. Conclusions

This article models the HFS problem by vector representation and presents an improved discrete artificial bee colony (IDABC) algorithm for makespan minimization. In the IDABC algorithm, an initialization scheme based on the NEH heuristic is first designed to construct an initial population with both quality and diversity. Discrete operators and procedures, such as differential evolution, insert, swap, and destruction and construction, are then applied to generate new solutions for the employed, onlooker, and scout bees. In addition, an orthogonal experiment design is carried out to provide a guideline for tuning the four designed parameters. To evaluate its performance, the IDABC algorithm is compared with other algorithms on the same benchmark instances, and the simulation results indicate that it is more effective and efficient than the others. Our future work includes studying the interaction effects among individual parameters to further improve the algorithm performance, as well as extending the IDABC algorithm to other kinds of scheduling problems, such as stochastic scheduling and multi-objective scheduling.

Acknowledgments

The authors are grateful to Carlier, Neron, and Liao for making the benchmark sets available and to the anonymous reviewers for their helpful suggestions. This work is supported by the National Natural Science Foundation of China (Grant nos. 61174040 and 61104178) and the Fundamental Research Funds for the Central Universities.
Appendix A. Solutions to Carlier and Neron's benchmark problems

The solutions of all the algorithms to Carlier and Neron's benchmark problems are shown in Table A.1, where Cmax is the best makespan over the replications. The symbol – in a Cmax entry means that the instance was not tackled by the algorithm, and in a T(s) entry it means that the computation time was not provided; the letter a in a T(s) entry means the algorithm could not reach the LB within 1600 s; the letter b means B&B reached the LB in more than 1600 s; and the letter c means B&B could not reach the LB.
Table A.1
Solutions to Carlier and Neron's benchmark problems. Each algorithm entry is Cmax/T(s).

Problem    LB    B&B         AIS       ACO      GA         QIA     PSO        IDABC
j10c5a2    88    88/13       88/1      88/–     88/0.000   88/–    88/0.002   88/0.002
j10c5a3    117   117/7       117/1     117/–    117/0.000  117/–   117/0.002  117/0.003
j10c5a4    121   121/6       121/1     121/–    121/0.015  121/–   121/0.003  121/0.003
j10c5a5    122   122/11      122/1     124/–    122/0.000  122/–   122/0.013  122/0.005
j10c5a6    110   110/6       110/4     110/–    110/0.015  110/–   110/0.174  110/0.009
j10c5b1    130   130/13      130/1     131/–    130/0.000  130/–   130/0.003  130/0.003
j10c5b2    107   107/6       107/1     107/–    107/0.000  107/–   107/0.003  107/0.004
j10c5b3    109   109/9       109/1     109/–    109/0.000  109/–   109/0.012  109/0.003
j10c5b4    122   122/6       122/2     124/–    122/0.000  122/–   122/0.025  122/0.003
j10c5b5    153   153/6       153/1     153/–    153/0.000  153/–   153/0.001  153/0.003
j10c5b6    115   115/11      115/1     115/–    115/0.000  115/–   115/0.001  115/0.004
j10c5b7    121   –/–         –/–       –/–      –/–        –/–     –/–        121/0.002
j10c5c1    68    68/28       68/32     68/–     68/0.031   69/–    68/0.332   68/0.017
j10c5c2    74    74/19       74/4      76/–     74/0.016   76/–    74/0.535   74/0.055
j10c5c3    71    71/240      72/a      72/–     71/0.016   74/–    71/36.997  72/a
j10c5c4    66    66/1017     66/3      66/–     66/0.031   75/–    66/0.215   66/0.005
j10c5c5    78    78/42       78/14     78/–     78/0.094   79/–    78/0.122   78/0.004
j10c5c6    69    69/4865(b)  69/12     69/–     69/0.000   72/–    69/0.405   69/0.004
j10c5d1    66    66/6490(b)  66/5      –/–      66/0.046   69/–    66/0.185   66/0.005
j10c5d2    73    73/2617(b)  73/31     –/–      73/0.110   76/–    73/1.158   73/0.069
j10c5d3    64    64/481      64/15     –/–      64/0.015   68/–    64/0.098   64/0.007
j10c5d4    70    70/393      70/5      –/–      70/0.000   75/–    70/0.337   70/0.003
j10c5d5    66    66/1627(b)  66/1446   –/–      66/0.031   71/–    66/0.515   66/0.009
j10c5d6    62    62/6861(b)  62/8      –/–      62/0.062   64/–    62/0.383   62/0.009
j10c5d7    58    –/–         –/–       –/–      –/–        –/–     –/–        63/a
j10c10a1   139   139/41      139/1     –/–      139/0.015  139/–   139/0.055  139/0.009
j10c10a2   158   158/21      158/18    –/–      158/0.125  158/–   158/0.87   158/0.009
j10c10a3   148   148/58      148/1     –/–      148/0.047  148/–   148/0.017  148/0.008
j10c10a4   149   149/21      149/2     –/–      149/0.141  149/–   149/0.085  149/0.007
j10c10a5   148   148/36      148/1     –/–      148/0.000  148/–   148/0.102  148/0.006
j10c10a6   146   146/20      146/4     –/–      146/0.156  146/–   146/0.239  146/0.008
j10c10a7   144   –/–         –/–       –/–      –/–        –/–     –/–        151/a
j10c10b1   163   163/36      163/1     163/–    163/0.000  –/–     163/0.013  163/0.007
j10c10b2   157   157/66      157/1     157/–    157/0.131  –/–     157/0.221  157/0.009
j10c10b3   169   169/19      169/1     169/–    169/0.000  –/–     169/0.014  169/0.007
j10c10b4   159   159/20      159/1     159/–    159/0.015  –/–     159/0.021  159/0.005
j10c10b5   165   165/33      165/1     165/–    165/0.016  –/–     165/0.037  165/0.007
j10c10b6   165   165/34      165/1     165/–    165/0.016  –/–     165/0.056  165/0.006
j10c10c1   113   127/c       115/a     118/–    115/a      –/–     115/a      115/a
j10c10c2   116   116/1100    119/a     117/–    117/a      –/–     117/a      119/a
j10c10c3   98    133/c       116/a     108/–    116/a      –/–     116/a      116/a
j10c10c4   103   135/c       120/a     112/–    120/a      –/–     120/a      120/a
j10c10c5   121   145/c       126/a     126/–    125/a      –/–     125/a      125/a
j10c10c6   97    112/c       106/a     102/–    106/a      –/–     106/a      106/a
j10c10d1   103   –/–         –/–       –/–      –/–        –/–     –/–        112/a
j10c10d2   116   –/–         –/–       –/–      –/–        –/–     –/–        119/a
j10c10d3   116   –/–         –/–       –/–      –/–        –/–     –/–        116/0.008
j10c10d4   105   –/–         –/–       –/–      –/–        –/–     –/–        115/a
j10c10d5   106   –/–         –/–       –/–      –/–        –/–     –/–        107/a
j10c10d6   97    –/–         –/–       –/–      –/–        –/–     –/–        105/a
j15c5a1    178   178/18      178/1     178/–    178/0.031  –/–     178/0.06   178/0.016
j15c5a2    165   165/35      165/1     165/–    165/0.015  –/–     165/0.005  165/0.018
j15c5a3    130   130/34      130/1     132/–    130/0.015  –/–     130/0.006  130/0.017
Table A.1 (continued). Each algorithm entry is Cmax/T(s).

Problem    LB    B&B         AIS      ACO      GA         QIA     PSO       IDABC
j15c5a4    156   156/21      156/2    156/–    156/0.015  –/–     156/0.013  156/0.019
j15c5a5    164   164/34      164/1    166/–    164/0.046  –/–     164/0.004  164/0.016
j15c5a6    178   178/38      178/1    178/–    178/0.032  –/–     178/0.006  178/0.016
j15c5b1    170   170/16      170/1    170/–    170/0.015  –/–     170/0.003  170/0.012
j15c5b2    152   152/25      152/1    152/–    152/0.015  –/–     152/0.005  152/0.015
j15c5b3    157   157/15      157/1    157/–    157/0.015  –/–     157/0.03   157/0.014
j15c5b4    147   147/37      147/1    149/–    147/0.015  –/–     147/0      147/0.017
j15c5b5    166   166/20      166/2    166/–    166/0.016  –/–     166/0.086  166/0.020
j15c5b6    175   175/23      175/1    176/–    175/0.015  –/–     175/0.016  175/0.018
j15c5c1    85    85/2131(b)  85/774   85/–     85/0.031   –/–     85/4.205   85/0.127
j15c5c2    90    90/184      91/a     90/–     91/a       –/–     90/1198    90/27
j15c5c3    87    87/202      87/16    87/–     87/0.109   –/–     87/2.398   87/0.048
j15c5c4    89    90/c        89/317   89/–     89/0.000   –/–     89/2.208   89/0.038
j15c5c5    73    84/c        74/a     73/–     75/a       –/–     74/a       74/a
j15c5c6    91    91/57       91/19    91/–     91/0.047   –/–     91/0.191   91/0.027
j15c5d1    167   167/24      167/1    167/–    167/0.015  –/–     167/0      167/0.020
j15c5d2    82    85/c        84/a     86/–     84/a       –/–     84/a       84/a
j15c5d3    77    96/c        83/a     83/–     83/a       –/–     82/a       82/a
j15c5d4    61    101/c       84/a     84/–     84/a       –/–     84/a       84/a
j15c5d5    67    97/c        80/a     80/–     80/a       –/–     79/a       79/a
j15c5d6    79    87/c        82/a     79/–     82/a       –/–     81/a       81/a
j15c5e1    167   –/–         –/–      –/–      –/–        –/–     –/–        167/0.017
j15c5e2    154   –/–         –/–      –/–      –/–        –/–     –/–        154/0.018
j15c5e3    176   –/–         –/–      –/–      –/–        –/–     –/–        176/0.015
j15c5e4    164   –/–         –/–      –/–      –/–        –/–     –/–        164/0.016
j15c5e5    146   –/–         –/–      –/–      –/–        –/–     –/–        146/0.014
j15c5e6    159   –/–         –/–      –/–      –/–        –/–     –/–        159/0.015
j15c10a1   236   236/40      236/1    236/–    236/0.015  236/–   236/0.018  236/0.033
j15c10a2   200   200/154     200/30   200/–    200/0.015  200/–   200/0.214  200/0.048
j15c10a3   198   198/45      198/4    198/–    198/0.063  198/–   198/0.171  198/0.032
j15c10a4   225   225/78      225/12   228/–    225/0.031  225/–   225/0.072  225/0.034
j15c10a5   182   183/c       182/2    182/–    182/0.016  182/–   182/0.509  182/0.036
j15c10a6   200   200/44      200/2    200/–    200/0.031  200/–   200/0.468  200/0.033
j15c10b1   222   222/70      222/3    222/–    222/0.031  222/–   222/0.017  222/0.048
j15c10b2   187   187/80      187/1    188/–    187/0.047  187/–   187/0.012  187/0.044
j15c10b3   222   222/80      222/1    224/–    222/0.015  222/–   222/0.007  222/0.039
j15c10b4   221   221/84      221/1    221/–    221/0.016  221/–   221/0.007  221/0.037
j15c10b5   200   200/84      200/1    –/–      200/0.094  200/–   200/0.135  200/0.048
j15c10b6   219   219/67      219/1    –/–      219/0.031  219/–   219/0.006  219/0.037
j15c10c1   130   –/–         –/–      –/–      –/–        –/–     –/–        134/a
j15c10c2   123   –/–         –/–      –/–      –/–        –/–     –/–        132/a
j15c10c3   141   –/–         –/–      –/–      –/–        –/–     –/–        145/a
j15c10c4   124   –/–         –/–      –/–      –/–        –/–     –/–        129/a
j15c10c5   141   –/–         –/–      –/–      –/–        –/–     –/–        141/2.037
j15c10c6   131   –/–         –/–      –/–      –/–        –/–     –/–        132/a
References

[1] M. Pinedo, Scheduling: Theory, Algorithms, and Systems, Prentice-Hall, Englewood Cliffs, NJ, 2002.
[2] J.N.D. Gupta, Two-stage hybrid flowshop scheduling problem, J. Oper. Res. Soc. 39 (1988) 359–364.
[3] J.A. Hoogeveen, J.K. Lenstra, B. Veltman, Minimizing the makespan in a multiprocessor flow shop is strongly NP-hard, Eur. J. Oper. Res. 89 (1996) 172–175.
[4] R. Ruiz, J.A.V. Rodriguez, The hybrid flow shop scheduling problem, Eur. J. Oper. Res. 205 (2010) 1–18.
[5] I. Ribas, R. Leisten, J.M. Framinan, Review and classification of hybrid flow shop scheduling problems from a production system and a solutions procedure perspective, Comput. Oper. Res. 37 (2010) 1439–1454.
[6] T.S. Arthanary, K.G. Ramaswamy, An extension of two machine sequencing problems, Oper. Res. 8 (1971) 10–22.
[7] D.L. Santos, J.L. Hunsucker, D.E. Deal, Global lower bounds for flow shops with multiple processors, Eur. J. Oper. Res. 80 (1995) 112–120.
[8] E. Neron, P. Baptiste, J.N.D. Gupta, Solving hybrid flow shop problem using energetic reasoning and global operations, Omega—Int. J. Manage. Sci. 29 (2001) 501–511.
[9] J. Carlier, E. Neron, An exact method for solving the multi-processor flowshop, RAIRO-Oper. Res. 34 (2000) 1–25.
[10] J.N.D. Gupta, E.A. Tunc, Minimizing tardy jobs in a two-stage hybrid flow shop, Int. J. Prod. Res. 36 (1998) 2397–2417.
[11] S.A. Brah, L.L. Loo, Heuristics for scheduling in a flow shop with multiple processors, Eur. J. Oper. Res. 113 (1999) 113–122.
[12] R. Ruiz, F.S. Serifoglu, T. Urlings, Modeling realistic hybrid flexible flowshop scheduling problems, Comput. Oper. Res. 35 (2008) 1151–1175.
[13] M. Nawaz, E. Enscore, I. Ham, A heuristic algorithm for the m-machine, n-job flow shop sequencing problem, Omega—Int. J. Manage. Sci. 11 (1983) 91–95.
[14] K. Ying, S. Lin, Scheduling multistage hybrid flowshops with multiprocessor tasks by an effective heuristic, Int. J. Prod. Res. 13 (2009) 3525–3538.
[15] R. Ruiz, C. Maroto, A genetic algorithm for hybrid flowshops with sequence dependent setup times and machine eligibility, Eur. J. Oper. Res. 169 (2006) 781–800.
[16] C. Kahraman, O. Engin, I. Kaya, M.K. Yilmaz, An application of effective genetic algorithms for solving hybrid flow shop scheduling problems, Int. J. Comput. Int. Syst. 1 (2008) 134–147.
[17] O. Engin, A. Doyen, A new approach to solve hybrid flow shop scheduling problems by artificial immune system, Future Gen. Comput. Syst. 20 (2004) 1083–1095.
[18] K. Alaykyran, O. Engin, A. Doyen, Using ant colony optimization to solve hybrid flow shop scheduling problems, Int. J. Adv. Manuf. Technol. 35 (2007) 541–550.
[19] Q. Niu, T. Zhou, S. Ma, A quantum-inspired immune algorithm for hybrid flow shop with makespan criterion, J. Univ. Comput. Sci. 15 (2009) 765–785.
[20] C.J. Liao, E. Tjandradjaja, T.P. Chung, An approach using particle swarm optimization and bottleneck heuristic to solve hybrid flow shop scheduling problem, Appl. Soft Comput. 12 (2012) 1755–1764.
[21] D. Karaboga, An Idea Based on Honey Bee Swarm for Numerical Optimization, Technical Report, Computer Engineering Department, Engineering Faculty, Erciyes University, 2005.
[22] B. Basturk, D. Karaboga, An artificial bee colony (ABC) algorithm for numeric function optimization, in: Proceedings of the IEEE Swarm Intelligence Symposium, Indianapolis, IN, USA, 2006.
[23] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, J. Glob. Optim. 39 (2007) 459–471.
[24] D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Appl. Soft Comput. 8 (2008) 687–697.
[25] D. Karaboga, B. Gorkemli, C. Ozturk, N. Karaboga, A comprehensive survey: artificial bee colony (ABC) algorithm and applications, Artif. Intell. Rev. (2012) 1–37.
[26] D. Karaboga, C. Ozturk, Neural networks training by artificial bee colony algorithm on pattern classification, Neural Netw. World 19 (2009) 279–292.
[27] D. Karaboga, C. Ozturk, A novel clustering approach: artificial bee colony (ABC) algorithm, Appl. Soft Comput. 11 (2011) 652–657.
[28] W.C. Yeh, T.J. Hsieh, Solving reliability redundancy allocation problems using an artificial bee colony algorithm, Comput. Oper. Res. 38 (2011) 1465–1473.
[29] B. Akay, D. Karaboga, Artificial bee colony algorithm for large-scale problems and engineering design optimization, J. Intell. Manuf. 23 (2012) 1001–1014.
[30] Y.F. Liu, S.Y. Liu, A hybrid discrete artificial bee colony algorithm for permutation flowshop scheduling problem, Appl. Soft Comput. 13 (2013) 1459–1463.
[31] G.L. Deng, Z. Cui, X.S. Gu, A discrete artificial bee colony algorithm for the blocking flow shop scheduling problem, in: Proceedings of the Ninth World Congress on Intelligent Control and Automation, Beijing, 2012, pp. 518–522.
[32] Y.Y. Han, Q.K. Pan, J.Q. Li, H.Y. Sang, An improved artificial bee colony algorithm for the blocking flowshop scheduling problem, Int. J. Adv. Manuf. Technol. 60 (2012) 1149–1159.
[33] Q.K. Pan, M.F. Tasgetiren, P.N. Suganthan, T.J. Chua, A discrete artificial bee colony algorithm for the lot-streaming flow shop scheduling problem, Inform. Sci. 181 (2011) 2455–2468.
[34] H.Y. Sang, L. Gao, Q.K. Pan, Discrete artificial bee colony algorithm for lot-streaming flowshop with total flowtime minimization, Chin. J. Mech. Eng.-En. 25 (2012) 990–1000.
[35] L. Wang, G. Zhou, Y. Xu, M. Liu, An enhanced pareto-based artificial bee colony algorithm for the multi-objective flexible job-shop scheduling, Int. J. Adv. Manuf. Technol. 60 (2012) 1111–1123.
[36] Q.K. Pan, L. Wang, K. Mao, J.H. Zhao, M. Zhang, An effective artificial bee colony algorithm for a real-world hybrid flowshop problem in steelmaking process, IEEE Trans. Autom. Sci. Eng. 10 (2013) 307–322.
[37] B. Wardono, Y. Fathi, A tabu search algorithm for the multi-stage parallel machine problem with limited buffer capacities, Eur. J. Oper. Res. 155 (2004) 380–401.
[38] B.A. Norman, J.C. Bean, A genetic algorithm methodology for complex scheduling problems, Nav. Res. Log. 46 (1999) 199–211.
[39] C. Oguz, Y. Zinder, V.H. Do, A. Janiak, M. Lichtenstein, Hybrid flow-shop scheduling problems with multiprocessor task systems, Eur. J. Oper. Res. 152 (2004) 115–131.
[40] C. Oguz, M.F. Ercan, A genetic algorithm for hybrid flow shop scheduling with multiprocessor tasks, J. Sched. 8 (2005) 323–351.
[41] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Appl. Math. Comput. 214 (2009) 108–132.
[42] D.E. Goldberg, R.J. Lingle, Alleles, loci and the traveling salesman problem, in: Proceedings of the First International Conference on Genetic Algorithms and Their Applications, Lawrence Erlbaum, 1985, pp. 154–159.
[43] N. Mladenovic, P. Hansen, Variable neighborhood search, Comput. Oper. Res. 24 (1997) 1097–1100.
[44] R. Ruiz, T. Stutzle, A simple and effective iterated greedy algorithm for the permutation flowshop scheduling problem, Eur. J. Oper. Res. 177 (2007) 2033–2049.
[45] D.C. Montgomery, Design and Analysis of Experiments, 7th ed., John Wiley and Sons, New York, 2008.
[46] B. Akay, D. Karaboga, Parameter tuning for the artificial bee colony algorithm, in: Proceedings of the First International Conference on Computational Collective Intelligence, Springer-Verlag, Berlin Heidelberg, 2009, pp. 608–619.
[47] I. Ribas, R. Companys, X.T. Martorell, An iterated greedy algorithm for the flowshop scheduling problem with blocking, Omega—Int. J. Manage. Sci. 39 (2011) 293–301.
[48] M.F. Tasgetiren, Q.K. Pan, P.N. Suganthan, A.H.L. Chen, A discrete artificial bee colony algorithm for the total flow time minimization in permutation flow shops, Inform. Sci. 181 (2011) 3459–3475.
Zhe Cui received the B.S. degree from the Department of Automation, School of Information Science and Engineering, East China University of Science and Technology, in 2009. He is now pursuing the Ph.D. degree in Control Science and Engineering at East China University of Science and Technology. His research interests include scheduling problems and meta-heuristics.

Xingsheng Gu received the B.S. degree from Nanjing Institute of Chemical Technology in 1982, and the M.S. and Ph.D. degrees from East China University of Chemical Technology in 1988 and 1993, respectively. He is currently a professor at the School of Information Science and Engineering, East China University of Science and Technology. His research interests include planning and scheduling for the process industry; modeling, control, and optimization of industrial processes; intelligent optimization; and fault detection and diagnosis.