Applied Soft Computing 34 (2015) 721–727
Two-phase neighborhood search algorithm for two-agent hybrid flow shop scheduling problem

Deming Lei∗

School of Automation, Wuhan University of Technology, 122 Luoshi Road, Wuhan, Hubei Province, People's Republic of China

∗ Tel.: +86 2786534910. E-mail address: [email protected]
http://dx.doi.org/10.1016/j.asoc.2015.05.027
Article history: Received 11 January 2014; Received in revised form 12 March 2015; Accepted 14 May 2015; Available online 14 June 2015

Keywords: Hybrid flow shop scheduling; Feasibility model; Two-phase neighborhood search
Abstract: In this paper the hybrid flow shop scheduling problem with two agents is studied and its feasibility model is considered. A two-phase neighborhood search (TNS) algorithm is proposed to minimize the objectives of the two agents simultaneously under given upper bounds. TNS is constructed by combining multiple variable neighborhood mechanisms with a new perturbation strategy for producing the new current solution. A new replacement principle is also applied to decide whether the current solution can be updated. TNS is tested on a number of instances and compared with existing methods. The computational results show the promising advantage of TNS on the considered problem. © 2015 Elsevier B.V. All rights reserved.
1. Introduction

In a scheduling problem with multiple agents, all agents compete for the use of common processing resources and each of them wishes to minimize an objective function that depends on the completion times of his/her own set of jobs. The goal of the problem is either to find a schedule that minimizes a combination of the agents' objective functions or to find a schedule that satisfies each agent's requirement on his/her own objective function.

Scheduling problems with multiple agents have attracted much attention in the past decade since the pioneering works of Agnetis et al. [1] and Baker and Smith [2], and a large body of literature has discussed the problems in single machine, parallel machine and flow shop environments. Cheng et al. [3] study the NP-completeness of multi-agent single-machine scheduling in which each agent's objective is to minimize the total weighted number of tardy jobs. Cheng et al. [4] discuss the feasibility model and the minimality model of multi-agent scheduling on a single machine. Lee et al. [5] propose three genetic algorithms (GA) for two-agent single-machine scheduling with release times, where the objective is to minimize the total tardiness of the jobs from the first agent given that the maximum tardiness of the jobs from the second agent does not exceed an upper bound. Yin et al. [6] consider several two-agent single-machine scheduling problems with assignable due dates and provide polynomial-time algorithms. Liu et al. [7] discuss
the optimal properties of two-agent single-machine scheduling with sum-of-processing-times-based deterioration and present some polynomial-time algorithms for the problem. Wu et al. [8] propose a branch-and-bound (BB) algorithm and a tabu search (TS) for two-agent single-machine scheduling with deteriorating jobs.

With respect to multi-agent parallel-machine scheduling, Li and Yuan [9] consider the constrained optimization problem with unbounded parallel-batch machines and two agents and provide polynomial-time and pseudo-polynomial-time algorithms. Elvikis et al. [10] discuss the problem with two conflicting objectives and provide polynomial-time algorithms for solving it. Fan et al. [11] study the NP-hardness and polynomial solvability of bounded parallel-batching scheduling with two competing agents and different objectives.

Some studies have considered multi-agent scheduling in flow shop environments. Lee et al. [12] study a two-machine flow shop problem with two agents where the objective is to minimize the total completion time of the first agent with no tardy jobs for the second agent. Lee et al. [13] consider a two-agent two-machine flow shop scheduling problem and develop a BB algorithm and simulated annealing (SA) to minimize the total tardiness of the first agent with no tardy jobs for the second agent. Luo et al. [14] investigate the weighted-sum optimization model and the constrained optimization model and study approximation schemes for two-machine flow shop scheduling with two agents.

The previous studies on multi-agent production scheduling problems have the following features:
(1) Most papers deal with multi-agent scheduling on a single machine. Multi-agent scheduling has not been considered fully in multi-machine environments; for example, scheduling in hybrid flow shops has attracted much attention [15–17], yet, to the best of our knowledge, the multi-agent scheduling problem has not been considered in the hybrid flow shop.
(2) Polynomial-time algorithms and mathematical programming methods are the main approaches to the problem. The applications of meta-heuristics such as GA, TS and SA have not been extensively investigated.
(3) In most of the literature, multiple objectives are involved and the goal is to optimize an objective of the jobs from the first agent given that the objective of the jobs from the second agent does not exceed an upper bound. Generally, all agents should have the same chance to compete for the common resources, and their objectives should be treated equally and optimized simultaneously; it is not fair to optimize the objective of only one agent.
Table 1
An illustrative example (processing times on machines M1–M6; Stage 1: M1, M2; Stage 2: M3, M4; Stage 3: M5, M6).

| Job | Operation | M1 | M2 | M3 | M4 | M5 | M6 | AGl | dil |
| J1  | o11 | 4  | 5  | 2  | 8  | 7  | 8  | AG2 | 130 |
|     | o12 | 6  | 3  | 5  | 12 | 11 | 7  |     |     |
|     | o13 | 4  | 6  | 5  | 9  | 4  | 1  |     |     |
| J2  | o21 | 11 | 14 | 11 | 8  | 3  | 6  | AG1 | –   |
|     | o22 | 8  | 7  | 4  | 5  | 10 | 9  |     |     |
|     | o23 | 12 | 23 | 22 | 56 | 40 | 19 |     |     |
| J3  | o31 | 15 | 9  | 6  | 6  | 4  | 11 | AG2 | 124 |
|     | o32 | 14 | 34 | 31 | 2  | 3  | 1  |     |     |
|     | o33 | 23 | 31 | 28 | 7  | 8  | 9  |     |     |
| J4  | o41 | 10 | 8  | 9  | 15 | 12 | 6  | AG1 | –   |
|     | o42 | 7  | 8  | 7  | 22 | 18 | 10 |     |     |
|     | o43 | 12 | 15 | 6  | 1  | 1  | 2  |     |     |
Iterated local search (ILS) is a simple stochastic local search method in which perturbations of the current solution are used to overcome local optimality. ILS has been applied to many combinatorial optimization problems [18–21]. Variable neighborhood search (VNS) is also a local-search-based meta-heuristic, which has been extensively applied to a number of production scheduling problems [22–25]. Its characteristic variable neighborhood mechanism can effectively prevent the search from falling into local optima. ILS and VNS have proved to be competitive when applied to scheduling problems; however, some difficulties still exist in their application. For ILS, the perturbation can produce new starting points, but an inappropriate perturbation may lead to frequent deterioration of the quality of the current solution; as a result, the solution of ILS may not improve after many iterations and the search may stagnate. For VNS, using more than one variable neighborhood mechanism can increase its exploration ability. When multiple variable neighborhood mechanisms are combined with an appropriate perturbation, the advantages of ILS and VNS can be exploited fully.

In this study the hybrid flow shop scheduling problem (HFSP) with two agents is considered, which is composed of an operation sequence sub-problem and a machine assignment sub-problem. The feasibility model is considered, and the goal is to minimize the objectives of the two agents simultaneously under given upper bounds. Obviously, the problem is a multi-objective one with two sub-problems. The optimization difficulties of the problem and the advantages of combining ILS and VNS motivate us to propose TNS for the problem.

The remainder of the paper is organized as follows. The problem under study is described in Section 2. The proposed algorithm is presented in Section 3. Numerical experiments on TNS are reported in Section 4. Conclusions and some topics for future research are given in the final section.
2. Problem description and discussions on objectives

2.1. Problem description

HFSP generally consists of optimizing the processing of a set of n jobs in a series of m stages with respect to certain objectives, where at least two machines exist on at least one stage. HFSP is quite common in practice, especially in the process industry. In general, it is assumed that all jobs of HFSP come from the same agent; in practice, however, more than one agent may provide processing tasks for the same manufacturer.

Notations used in this section are listed below:
n: the number of jobs
m: the number of stages
Mk: the set of all parallel machines on stage k, k = 1, 2, . . ., m
Sl: the set of jobs of agent AGl, l = 1, 2
pihk: the processing time of job Ji on machine h of stage k, 1 ≤ h ≤ |Mk|
Cil: the completion time of job Ji from AGl, l = 1, 2
dil: the due date of job Ji from AGl, l = 1, 2
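For concreteness, the notation above can be gathered in a small container. This sketch, and the later ones, are illustrative Python (the experiments in Section 4 were implemented in Visual C++); all names and data layouts are ours rather than part of the formulation.

```python
from dataclasses import dataclass
from typing import Dict, List, Set

@dataclass
class TwoAgentHFSP:
    """Container mirroring the notation: n jobs, m stages, machine sets M_k,
    agent job sets S_l, processing times p and due dates d."""
    n: int                            # number of jobs
    m: int                            # number of stages
    machines: List[List[int]]         # machines[k] = machine ids of stage k (the set M_k)
    agent_jobs: Dict[int, Set[int]]   # agent_jobs[l] = S_l, the jobs of agent AG_l, l = 1, 2
    p: Dict[tuple, int]               # p[(i, h, k)] = processing time of job i on machine h of stage k
    d: Dict[int, int]                 # d[i] = due date of job i (only AG2 jobs need one here)
```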
In this study, HFSP with two agents consists of n jobs and m stages. The n jobs J1, J2, . . ., Jn are processed according to the same production flow: stage 1, stage 2, . . ., stage m. There are |Mk| ≥ 1 machines in parallel on stage k, and |Mk| > 1 holds for at least one stage. Each job belongs to either the first agent AG1 or the second agent AG2. The processing constraints of HFSP apply directly, for example, each machine can process only one job at a time. The goal is to obtain an appropriate processing sequence and machine assignment of all jobs so as to minimize the following two objectives simultaneously:
$$f_1 = \max_{i \in S_1} C_{i1} \qquad (1)$$

$$f_2 = \sum_{i \in S_2} \max\{C_{i2} - d_{i2},\, 0\} \qquad (2)$$
where the first objective f1 is the maximum completion time of the jobs of the first agent and the second objective f2 is the total tardiness of the jobs of the second agent. Obviously, a schedule of the problem is composed of the jobs of both agents and these jobs affect each other. We cannot schedule the jobs of each agent independently and should treat every job equally; as a result, the considered problem can be handled like the traditional HFSP when it is solved with meta-heuristics. Moreover, the makespan of the first agent and the total tardiness of the second agent inevitably conflict with each other because of the competition between the agents for processing resources. These characteristics show that the problem is a multi-objective one in essence. Table 1 shows an illustrative example of the problem, where "–" indicates that no due date is considered for the jobs of AG1.
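Given the completion times of a decoded schedule, Eqs. (1) and (2) reduce to a makespan over S1 and a total tardiness over S2. A minimal sketch in illustrative Python (function and variable names are ours; the completion times in the usage example are made up, the due dates are the AG2 due dates of Table 1):

```python
from typing import Dict, Set

def evaluate(C: Dict[int, float], d: Dict[int, float],
             S1: Set[int], S2: Set[int]) -> tuple:
    """Return (f1, f2): f1 = max completion time of AG1's jobs (Eq. 1),
    f2 = total tardiness of AG2's jobs (Eq. 2)."""
    f1 = max(C[i] for i in S1)                    # makespan of agent AG1
    f2 = sum(max(C[i] - d[i], 0.0) for i in S2)   # total tardiness of agent AG2
    return f1, f2

# Toy usage: jobs 2 and 4 belong to AG1, jobs 1 and 3 to AG2 with due dates 130 and 124
C = {1: 120.0, 2: 95.0, 3: 150.0, 4: 88.0}
d = {1: 130.0, 3: 124.0}
print(evaluate(C, d, S1={2, 4}, S2={1, 3}))       # -> (95.0, 26.0)
```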
2.2. Discussions on objectives
Cheng et al. [3,4] considered the following two models in single machine environments.

(1) Feasibility model: the objectives should satisfy fi ≤ Qi, i = 1, 2, . . ., T.
(2) Minimality model: the sum of all objectives $\sum_{i=1}^{T} f_i$ should be minimized.

Here Qi is an upper bound on fi and T is the number of agents; in this study, T = 2.
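The two models can be written as simple predicates over an objective vector (f1, . . ., fT); a minimal sketch in illustrative Python, assuming the upper bounds Q are given:

```python
from typing import Sequence

def is_feasible(f: Sequence[float], Q: Sequence[float]) -> bool:
    """Feasibility model: every agent's objective stays within its upper bound."""
    return all(fi <= qi for fi, qi in zip(f, Q))

def minimality_value(f: Sequence[float]) -> float:
    """Minimality model: the quantity to be minimized is the sum of all objectives."""
    return sum(f)

print(is_feasible((95.0, 26.0), Q=(100.0, 30.0)))   # True
print(minimality_value((95.0, 26.0)))               # 121.0
```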
In most of the literature on two-agent scheduling, the task is to find a schedule that minimizes the objective of the first agent given that the objective of the jobs from the second agent does not exceed an upper bound. This handling has some defects: the objective of only one agent is minimized, and the optimization results may be beneficial to that agent alone. When the two agents are to be treated fairly, the feasibility model should be applied, because many feasible solutions meeting fi ≤ Qi can be obtained, from which a win–win schedule for the two agents can be chosen.

For the problem of minimizing f1 and f2, the optimal result is not a single solution but a set of solutions; moreover, this optimal set cannot be obtained without comparing all solutions. When the solutions in a set are compared with each other, take x and y as an example, if fi(x) ≤ fi(y) for all i ∈ {1, 2} and fi(x) < fi(y) for at least one i ∈ {1, 2}, then x dominates y; if a solution x is not dominated by any other solution in the same set, x is a non-dominated solution with respect to that set. If a solution is not dominated by any other solution in the search space, the solution is Pareto optimal, and the Pareto front is the set of all Pareto optimal solutions. When all objectives of the problem are optimized simultaneously, the goal is to obtain a set of non-dominated solutions that lie on the Pareto front and cover the whole front. To achieve this goal, steps such as sorting solutions based on Pareto dominance must be added to the multi-objective optimization method; as a result, the construction of a multi-objective method is often more difficult than that of a single-objective one.

3. TNS for HFSP with two agents

The main steps of ILS are local search and perturbation. The general framework of ILS is as follows: for an initial solution x0, a local search is applied to x0 and a new solution x is obtained; the following steps are then repeated until the termination condition is met: (1) perturbation is performed on x and a solution x′ is generated; (2) local search is applied to x′ and a solution x′′ is obtained; (3) an acceptance criterion is used to decide whether x can be replaced with x′′.

Contrary to other local-search-based meta-heuristics such as TS and SA, VNS does not follow a trajectory but explores increasingly distant neighborhoods of the current incumbent solution and jumps from this solution to a new one if and only if an improvement has been made. In this way the favorable features of the incumbent solution are kept and used to produce promising neighborhood solutions.

As shown above, ILS and VNS can effectively deal with production scheduling problems; however, they are seldom applied to multi-agent scheduling problems. In this paper TNS, which combines VNS with ILS, is proposed for HFSP with two agents.

3.1. Initial solution

Obviously, the jobs of the two agents cannot be scheduled independently and should be scheduled together because of the shared resources, so the representation of the traditional HFSP can be adopted. For the problem with n jobs, a job permutation {π1, π2, . . ., πn} of all jobs and a string of assigned machines (θ11, θ12, . . ., θ1m, . . ., θn1, θn2, . . ., θnm) are used to represent a solution, where πi ∈ {1, 2, . . ., n} and θik ∈ Mk is the machine assigned to job Ji at stage k.
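For example, such a two-string solution can be held as a permutation list plus a per-job machine table; a sketch in illustrative Python, where the (perm, assign) layout is our own assumption and is reused in the later sketches:

```python
import random
from typing import List

def random_solution(n: int, m: int, machines: List[List[int]]):
    """Build a random solution: a job permutation (pi_1, ..., pi_n) and a
    machine string assign[i][k] = machine chosen for job i at stage k."""
    perm = list(range(n))
    random.shuffle(perm)                                      # operation sequence string
    assign = [[random.choice(machines[k]) for k in range(m)]  # machine assignment string
              for _ in range(n)]
    return perm, assign

# Example: 4 jobs, 3 stages, e.g. two machines per stage
machines = [[0, 1], [2, 3], [4, 5]]
perm, assign = random_solution(n=4, m=3, machines=machines)
print(perm, assign[0])
```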
In the second string, the first m machines are allocated to the m operations of job J1, the second m machines are assigned to the operations of J2, and so on. When the job permutation and the machine assignment string are decoded, job Jπ1 is first processed on machine θπ1,1 at stage 1, then on machine θπ1,2 at stage 2, and so on; after Jπ1 has been processed at all stages, the processing of Jπ2 begins on machine θπ2,1, then continues on machine θπ2,2, and so on; after Jπ2 has been processed at all stages, the remaining jobs are processed sequentially according to the job permutation and the string of assigned machines.

3.2. Neighborhood structures and replacement principle

A neighborhood structure is the technique of moving from a solution to one of its neighbors. Many neighborhood structures have been applied to scheduling problems; a neighborhood structure should not produce infeasible solutions. In this section, four neighborhood structures are used to produce new solutions in TNS. For the job permutation (π1, π2, . . ., πn) of the current solution x, swap is described as follows: a pair of jobs πi and πj, i ≠ j, are randomly chosen and exchanged. Neighborhood structure insert is defined as follows: a job πi is stochastically selected, a position j is randomly chosen, and then job πi is inserted at position j. Neighborhood structure insert1 is described below: three jobs πi1, πi2 and πi3 are stochastically chosen and deleted from the permutation; then πi3 is inserted at position i1, πi1 at position i2 and πi2 at position i3. Neighborhood structure change(v) is used to change the machine assignment of some chosen operations. The detailed steps of change(v) are as follows; the following steps are repeated v times: (1) the set Θ = {oij : |Mj| > 1, i = 1, 2, . . ., n, j = 1, 2, . . ., m} is determined; (2) an operation oij is randomly chosen from the set Θ; (3) a machine is randomly selected from Mj and allocated to oij. We let N1 indicate swap, N2 denote insert, N3 represent insert1, N4 indicate change(1) and N5 denote change(2) in this paper.

When a neighborhood structure Ni, i = 1, 2, 3, 4, 5, is applied to the current solution x and a new solution y ∈ Ni(x) is obtained, a new replacement principle is applied: if x ∈ Ω and y is not dominated by any solution in Ω, or if x ∉ Ω and y is not dominated by x, then y becomes the new current solution, where Ni(x) is the set of neighborhood solutions of x produced with Ni and Ω is the set of non-dominated solutions generated by TNS. Different conditions based on solution quality are thus used to decide whether x can be replaced with the new solution.

When multiple objectives are optimized simultaneously, the set Ω should be updated with the new solutions of TNS. After a neighborhood structure Ni is performed on the current solution x, if x is replaced with a new solution y ∈ Ni(x), then the set Ω is updated in the following way: x is included in the set Ω, all solutions in Ω are compared in terms of Pareto dominance, and the dominated ones are deleted from Ω.

Two variable neighborhood mechanisms are used. The first one, with N1, N2 and N3, is applied to the operation sequence sub-problem, and the second one, with N4 and N5, is constructed for the machine assignment sub-problem.
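The four neighborhood structures and the replacement principle can be sketched as follows (illustrative Python). The move functions act on copies of the permutation or of the machine-assignment table from the earlier representation sketch; `dominates` implements the Pareto test of Section 2.2; the archive-membership test in `accept` compares objective vectors, which is our simplification and not prescribed by the paper.

```python
import random
from typing import List, Tuple

Solution = Tuple[List[int], List[List[int]]]   # (job permutation, machine assignment)

def swap(perm):                    # N1: exchange two randomly chosen positions
    p = perm[:]
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

def insert(perm):                  # N2: remove one job and reinsert it at a random position
    p = perm[:]
    job = p.pop(random.randrange(len(p)))
    p.insert(random.randrange(len(p) + 1), job)
    return p

def insert1(perm):                 # N3: cyclically reinsert three randomly chosen jobs
    p = perm[:]
    i1, i2, i3 = random.sample(range(len(p)), 3)
    p[i1], p[i2], p[i3] = p[i3], p[i1], p[i2]
    return p

def change(assign, machines, v=1): # N4 (v=1) / N5 (v=2): reassign v flexible operations
    a = [row[:] for row in assign]
    flexible = [(i, k) for i in range(len(a)) for k in range(len(machines))
                if len(machines[k]) > 1]                 # the set Theta
    for _ in range(v):
        i, k = random.choice(flexible)
        a[i][k] = random.choice(machines[k])
    return a

def dominates(f, g):               # Pareto dominance on objective vectors
    return all(x <= y for x, y in zip(f, g)) and any(x < y for x, y in zip(f, g))

def accept(f_new, f_cur, archive_objs):
    """Replacement principle: accept y if it is not dominated by any archive member
    (when x is in the archive) or not dominated by x (when x is not)."""
    in_archive = any(f_cur == f for f in archive_objs)   # simplified membership test
    if in_archive:
        return not any(dominates(f, f_new) for f in archive_objs)
    return not dominates(f_cur, f_new)
```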
3.3. Perturbation

In ILS, perturbation is used to modify the current solution in order to avoid falling into local optima. We perturb the current solution in the following way: when a predetermined condition is met (in TNS, perturbation is done after m_it1 iterations, where m_it1 is an integer), a solution y is randomly chosen from the non-dominated set Ω and neighborhood structure swap is performed on y; the obtained solution y′ directly substitutes for the current solution x.
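A minimal sketch of this perturbation in illustrative Python; `archive` stands for the set Ω and solutions follow the (permutation, assignment) layout used earlier.

```python
import random
from typing import List, Tuple

Solution = Tuple[List[int], List[List[int]]]

def perturb(archive: List[Solution]) -> Solution:
    """Return a new current solution obtained by applying swap to the job
    permutation of a randomly chosen non-dominated solution."""
    perm, assign = random.choice(archive)
    p = perm[:]
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p, [row[:] for row in assign]
```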
Unlike the existing ILS, the ILS part of TNS has the following features: (1) the perturbed solution does not come from the neighborhood of the current solution but is obtained by perturbing a member chosen from the set Ω; (2) perturbation is not performed in the early search stage of TNS.

3.4. Algorithm description

The detailed procedure of TNS is as follows, where m_it is the maximum number of iterations:

(1) Start with a randomly produced initial solution x and construct the initial non-dominated set Ω; iter = 1.
(2) While iter ≤ m_it do
    (2.1) k1 ← 1, k2 ← 4.
    (2.2) Randomly generate a new solution y ∈ Nk1(x); if the current solution x and y meet the conditions of the replacement principle, then replace x with y and continue the search with N1 (k1 ← 1); otherwise, k1 ← k1 + 1.
    (2.3) If k1 > 3, then k1 ← 1.
    (2.4) Randomly generate a new solution y ∈ Nk2(x); if the current solution x and the new one meet the conditions of the replacement principle, replace x with y and continue the search with N4 (k2 ← 4); otherwise, k2 ← k2 + 1.
    (2.5) If k2 > 5, then k2 ← 4.
    (2.6) Update the set Ω using the current solution if possible; iter ← iter + 2.
    (2.7) If iter > m_it1, then perturb the solution x, update the set Ω and iter ← iter + 1.
(3) End while.
(4) Output all members of the set Ω that meet fi ≤ Qi, i = 1, 2.
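Putting the pieces together, the procedure above can be paraphrased as the skeleton below (illustrative Python). The neighborhood moves, `evaluate`, `accept`, `update_archive` and `perturb` are assumed to be supplied as in the earlier sketches and to act on complete solutions; we also keep the indices k1 and k2 across iterations, which is our reading of steps (2.2)–(2.5) rather than a literal transcription.

```python
def tns(x, seq_moves, asg_moves, evaluate, accept, update_archive, perturb,
        m_it, m_it1):
    """Skeleton of TNS: seq_moves = [N1, N2, N3] act on the job permutation,
    asg_moves = [N4, N5] act on the machine assignment; accept() implements the
    replacement principle and update_archive() maintains the set Omega."""
    archive = [x]
    k1, k2 = 0, 0                                   # indices into seq_moves / asg_moves
    it = 1
    while it <= m_it:
        # Phase 1: variable neighborhood mechanism on the operation sequence
        y = seq_moves[k1](x)
        if accept(evaluate(y), evaluate(x), [evaluate(s) for s in archive]):
            x, k1 = y, 0                            # success: restart from N1
        else:
            k1 = (k1 + 1) % len(seq_moves)          # failure: switch to the next structure
        # Phase 2: variable neighborhood mechanism on the machine assignment
        y = asg_moves[k2](x)
        if accept(evaluate(y), evaluate(x), [evaluate(s) for s in archive]):
            x, k2 = y, 0                            # success: restart from N4
        else:
            k2 = (k2 + 1) % len(asg_moves)
        archive = update_archive(archive, x)
        it += 2
        if it > m_it1:                              # perturbation only in the late stage
            x = perturb(archive)
            archive = update_archive(archive, x)
            it += 1
    return archive                                  # members meeting f_i <= Q_i are reported
```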
Generally, the objectives cannot exceed the predetermined upper bounds in the feasibility model; however, these upper bounds are not considered during the search process of TNS, in order to simplify the handling of the objective constraints and let TNS explore new solutions freely. If the upper bounds were imposed from the beginning of the search, the set Ω would be empty for many iterations and the first condition of the replacement principle would not work. On the other hand, stagnation can be effectively avoided when perturbation is performed after m_it1 iterations and the deterioration of the new starting point is controlled. The exploration ability of TNS is also improved by the use of two variable neighborhood mechanisms. These features make TNS reasonable and effective for the considered problem.

4. Computational experiments

Extensive experiments are conducted on a set of problems to test the performance of TNS on the considered problem. All experiments are implemented in Microsoft Visual C++ 7.0 and run on a Pentium PC with a 4.0 GHz CPU and 2 GB RAM.

4.1. Data generation and performance metrics

As stated in Section 1, the multi-agent scheduling problem has not been considered in the hybrid flow shop, so no benchmark instances exist for the considered problem. We construct six problems with m = 5, 10 and n = 40, 50, 60, 80, 100, 200. For each problem, several instances are stochastically generated according to the value of |S1|, so that 66 problem instances are obtained in total. For each instance, pihk is an integer randomly produced on [30, 100]; for job Ji with i ≤ 15, di ∈ [320, 670], and for Ji with i > 15, di = di−15 + δ, δ ∈ [70, 290]. Qi is an upper bound on fi and would normally be provided with benchmark instances; since no such instances exist, we decide Qi according to the initial schedule and the convergence curve of TNS without step (4). Table 2 shows the machine sets Mk. TNS is randomly run 20 times on each instance, so a total of $\sum_{i=1}^{66}\sum_{j=1}^{20} \rho_{ij}$ non-dominated solutions are generated, where ρij ≥ 1 is the number of non-dominated solutions produced by the jth run of TNS on problem instance i.
Table 2
Information on the machine sets Mk.

M1 = {M1, M2, M3}           M6 = {M18, M19, M20}
M2 = {M4, M5, M6, M7, M8}   M7 = {M21, M22, M23}
M3 = {M9, M10, M11, M12}    M8 = {M24, M25}
M4 = {M13}                  M9 = {M26}
M5 = {M14, M15, M16, M17}   M10 = {M27, M28, M29, M30}
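The data generation above might be reproduced roughly as follows (illustrative Python). The indexing of due dates beyond the fifteenth job follows our reading of the text, and due dates are generated for every job although only those of the AG2 jobs are used.

```python
import random

def generate_instance(n, m, machines, seed=0):
    """Random instance in the spirit of Section 4.1: integer processing times in
    [30, 100]; due dates in [320, 670] for the first 15 jobs and
    d_i = d_{i-15} + delta with delta in [70, 290] afterwards."""
    rng = random.Random(seed)
    p = {(i, h, k): rng.randint(30, 100)
         for i in range(n) for k in range(m) for h in machines[k]}
    d = []
    for i in range(n):
        if i < 15:
            d.append(rng.randint(320, 670))
        else:
            d.append(d[i - 15] + rng.randint(70, 290))
    return p, d
```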
Many performance metrics have been applied to compare the Pareto optimization results of different algorithms. The following three metrics are used in this study. Metric ρi indicates the number of elements in the set {x ∈ Ωi : x ∈ Ω*}. The distance metric DIR is used to measure the performance of a non-dominated solution set Ωj relative to a reference set Ω*:
$$DIR(\Omega_j) = \frac{1}{|\Omega^*|} \sum_{y \in \Omega^*} \min\{\, d_{xy} : x \in \Omega_j \,\} \qquad (3)$$
where dxy is the distance between a solution x and a reference solution y in the normalized objective space, $d_{xy} = \sqrt{(f_1^*(x) - f_1^*(y))^2 + \cdots + (f_D^*(x) - f_D^*(y))^2}$, D is the number of objectives, and fi* is the ith objective normalized with respect to the reference solution set Ω*; the details of the normalization can be found in Ishibuchi et al. [26]. The reference set Ω* consists of the non-dominated solutions of $\bigcup_{j=1}^{A} \Omega_j$, where A is the total number of algorithms. The smaller the value of DIR(Ωj), the better the solutions of Ωj. We also define a metric SP that measures the extent of Ωi using the maximum extent in each dimension, based on the metric of Zitzler et al. [27]:
$$SP(\Omega_i) = \sqrt{\sum_{k=1}^{D} \max\{\, (f_k^*(x) - f_k^*(y))^2 : x, y \in \Omega_i \,\}} \qquad (4)$$
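Both metrics operate on normalized objective vectors; a direct transcription of Eqs. (3) and (4) in illustrative Python, with the normalization of Ishibuchi et al. [26] assumed to have been applied already:

```python
import math
from typing import List, Sequence

Vec = Sequence[float]   # a normalized objective vector (f_1*, ..., f_D*)

def dist(x: Vec, y: Vec) -> float:
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def DIR(omega_j: List[Vec], omega_star: List[Vec]) -> float:
    """Eq. (3): average, over the reference set, of the distance to the closest
    member of omega_j; smaller is better."""
    return sum(min(dist(x, y) for x in omega_j) for y in omega_star) / len(omega_star)

def SP(omega_i: List[Vec]) -> float:
    """Eq. (4): spread of omega_i, built from the maximum extent in each dimension."""
    D = len(omega_i[0])
    return math.sqrt(sum(max((x[k] - y[k]) ** 2 for x in omega_i for y in omega_i)
                         for k in range(D)))
```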
4.2. Description of the comparison algorithms

Two algorithms are chosen for comparison with TNS and are described in this section. The first is the non-dominated sorting genetic algorithm II (NSGA2) [28], a well-known evolutionary algorithm with very competitive performance on multi-objective problems. In NSGA2, a non-dominated sorting approach is used to assign each individual a Pareto rank, and a crowding distance assignment method is applied for density estimation. In fitness assignment, between two individuals, NSGA2 prefers the one with the lower rank, or, if both belong to the same front, the one located in a region with fewer points. By combining a fast non-dominated sorting approach, an elitism scheme and a parameter-less sharing method, NSGA2 is claimed to produce a better spread of solutions on some test problems. Its strong search ability on multi-objective problems motivated us to choose NSGA2 as a comparison algorithm.

The second algorithm, L-NSGA [15], is based on NSGA2 coupled with Lorenz dominance instead of Pareto dominance. The Lorenz dominance relationship restricts the Lorenz search space to a subset of the Pareto search space, so Lorenz dominance can increase the speed of the algorithm by focusing on the promising area; however, more computation time is needed to sort the Lorenz non-dominated solutions.
Table 3
Results of three algorithms for the problem with n = 40 (n × m = 40 × 10; subscript 1: TNS, 2: NSGA2, 3: L-NSGA).

| |S1| | DIR(Ω1) | ρ1 | SP1 | DIR(Ω2) | ρ2 | SP2 | DIR(Ω3) | ρ3 | SP3 |
| 15 | 0.2195 | 53 | 134.2 | 4.375 | 5 | 109.1 | 16.16 | 5 | 56.53 |
| 16 | 0.506 | 43 | 140.4 | 12.52 | 13 | 89.50 | 26.25 | 0 | 34.16 |
| 17 | 0.355 | 51 | 139.9 | 3.515 | 15 | 121.0 | 25.46 | 1 | 39.44 |
| 18 | 1.043 | 38 | 131.9 | 4.244 | 21 | 104.3 | 23.90 | 11 | 58.91 |
| 19 | 0.671 | 63 | 116.8 | 33.59 | 0 | 87.22 | 15.82 | 4 | 77.57 |
| 20 | 0.570 | 87 | 120.5 | 28.18 | 0 | 22.49 | 22.46 | 8 | 69.31 |
| 21 | 0.104 | 93 | 136.7 | 25.99 | 0 | 46.56 | 18.30 | 5 | 75.42 |
| 22 | 12.60 | 39 | 108.6 | 17.64 | 3 | 64.57 | 11.57 | 26 | 92.47 |
| 23 | 1.067 | 58 | 119.2 | 30.88 | 0 | 40.27 | 18.38 | 9 | 83.05 |
| 24 | 1.833 | 63 | 100.8 | 21.24 | 6 | 26.08 | 6.531 | 15 | 97.70 |
| 25 | 1.618 | 36 | 117.9 | 7.521 | 4 | 89.39 | 9.502 | 21 | 92.38 |
Table 4
Results of three algorithms for the problem with n = 50 (n × m = 50 × 10; subscript 1: TNS, 2: NSGA2, 3: L-NSGA).

| |S1| | DIR(Ω1) | ρ1 | SP1 | DIR(Ω2) | ρ2 | SP2 | DIR(Ω3) | ρ3 | SP3 |
| 20 | 0.142 | 59 | 141.3 | 7.371 | 4 | 105.1 | 33.38 | 0 | 33.16 |
| 21 | 0.015 | 52 | 141.2 | 4.056 | 1 | 125.8 | 31.72 | 0 | 40.25 |
| 22 | 2.422 | 18 | 135.3 | 5.689 | 9 | 132.2 | 54.75 | 4 | 24.87 |
| 23 | 0.000 | 40 | 139.3 | 11.67 | 0 | 79.88 | 46.93 | 0 | 21.47 |
| 24 | 5.227 | 2 | 137.5 | 0.098 | 36 | 134.3 | 34.16 | 0 | 38.09 |
| 25 | 0.486 | 47 | 137.6 | 8.015 | 8 | 106.7 | 23.18 | 6 | 59.42 |
| 26 | 0.692 | 44 | 139.7 | 6.005 | 20 | 102.3 | 17.34 | 7 | 69.44 |
| 27 | 0.150 | 61 | 141.4 | 5.815 | 2 | 127.1 | 18.57 | 6 | 55.28 |
| 28 | 0.088 | 79 | 139.4 | 4.975 | 1 | 123.0 | 15.81 | 8 | 68.26 |
| 29 | 0.092 | 56 | 137.5 | 20.12 | 0 | 98.45 | 19.33 | 5 | 63.65 |
| 30 | 0.098 | 79 | 141.4 | 5.159 | 5 | 103.2 | 13.28 | 1 | 86.23 |
Lorenz dominance is also called equitable dominance, defined by Kostreva and Ogryczak [29] and extended by Kostreva et al. [30]. Dugardin et al. [15] proposed L-NSGA for reentrant hybrid flow shop scheduling and compared it with NSGA2; their computational results show that L-NSGA can approximate the optimal solutions obtained by full enumeration, so we choose L-NSGA as the second comparison algorithm.

In [15], the same crossover and mutation are used in L-NSGA and NSGA2. In this paper, this crossover and mutation are also adopted in the two GAs and applied to the job permutation part of a solution of the problem. Two-point crossover and assignment mutation are applied to the string of assigned machines. Assignment mutation is described as follows: an operation is randomly chosen and reassigned a machine.
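For the machine-assignment string, the two-point crossover and the assignment mutation just described amount to the following; a sketch in illustrative Python assuming a flattened, job-major machine string, with all names ours. The crossover and mutation used for the job-permutation part (taken from [15]) are not reproduced here.

```python
import random
from typing import List

def two_point_crossover(a: List[int], b: List[int]):
    """Exchange the segment between two cut points of the flattened machine strings."""
    i, j = sorted(random.sample(range(len(a) + 1), 2))
    return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

def assignment_mutation(assign: List[int], machines: List[List[int]], m: int):
    """Pick one operation at random and reassign it to a random machine of its stage."""
    child = assign[:]
    pos = random.randrange(len(child))
    stage = pos % m                       # flattened layout: job-major, stage-minor
    child[pos] = random.choice(machines[stage])
    return child
```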
4.3. Results and discussion

The parameters of the three algorithms were determined through a number of experiments and are as follows. TNS: m_it = 100,000 and m_it1 = 70,000 for the problem instances with n = 40, and m_it = 150,000 and m_it1 = 100,000 for the other instances. NSGA2: the population size is 100, the crossover probability is 0.8, the mutation probability is 0.05 and the maximum number of generations is m_it/100. L-NSGA has the same parameters as NSGA2 because the only difference between them is Lorenz dominance. The total number of objective function evaluations is set to m_it for each algorithm. All algorithms are randomly run 20 times on each instance.

In this paper, Ω* is the set of the non-dominated solutions in $\bigcup_{j=1}^{3} \Omega_j$. Ω1, SP1 and ρ1 denote the non-dominated set and metrics of TNS; Ω2, SP2 and ρ2 those of NSGA2; and Ω3, SP3 and ρ3 those of L-NSGA. The computational results and running times of the three algorithms are shown in Tables 3–9, in which the symbol "*" indicates that the algorithm cannot produce any feasible solution meeting fi ≤ Qi, and the average running time is the average over all running times on all instances of each problem; for example, for problem 40 × 10 and TNS, the average over all running times of the 11 instances is 70.1 s.
Table 5
Results of three algorithms for the problem with n = 60 (n × m = 60 × 10; subscript 1: TNS, 2: NSGA2, 3: L-NSGA).

| |S1| | DIR(Ω1) | ρ1 | SP1 | DIR(Ω2) | ρ2 | SP2 | DIR(Ω3) | ρ3 | SP3 |
| 25 | 0.495 | 50 | 140.8 | 5.729 | 17 | 104.5 | 60.54 | 0 | 18.51 |
| 26 | 0.984 | 40 | 135.6 | 1.427 | 43 | 138.9 | 45.48 | 1 | 25.97 |
| 27 | 2.825 | 7 | 137.5 | 0.899 | 51 | 135.4 | 39.66 | 2 | 25.23 |
| 28 | 0.171 | 78 | 136.4 | 3.430 | 11 | 130.5 | 40.05 | 1 | 31.58 |
| 29 | 0.167 | 97 | 140.4 | 8.190 | 15 | 101.9 | 41.94 | 1 | 34.90 |
| 30 | 0.124 | 76 | 140.4 | 8.839 | 8 | 102.5 | 42.15 | 1 | 39.42 |
| 31 | 0.000 | 77 | 141.2 | 23.42 | 0 | 38.73 | 35.61 | 0 | 40.99 |
| 32 | 1.446 | 58 | 141.4 | 8.731 | 50 | 96.93 | 40.64 | 2 | 34.64 |
| 33 | 0.417 | 64 | 141.1 | 6.384 | 24 | 90.47 | 27.92 | 17 | 47.01 |
| 34 | 0.001 | 100 | 141.4 | 11.91 | 1 | 70.65 | 27.39 | 0 | 45.74 |
| 35 | 0.701 | 46 | 141.2 | 2.900 | 43 | 119.1 | 34.07 | 0 | 42.18 |
Table 6
Results of three algorithms for the problem with n = 80 (n × m = 80 × 10; subscript 1: TNS, 2: NSGA2, 3: L-NSGA).

| |S1| | DIR(Ω1) | ρ1 | SP1 | DIR(Ω2) | ρ2 | SP2 | DIR(Ω3) | ρ3 | SP3 |
| 35 | 0.015 | 106 | 139.5 | 7.071 | 2 | 109.1 | 66.42 | 0 | 26.50 |
| 36 | 0.159 | 114 | 141.3 | 7.786 | 20 | 95.20 | 54.27 | 0 | 32.92 |
| 37 | 0.273 | 87 | 137.4 | 5.399 | 16 | 103.3 | 49.64 | 0 | 31.34 |
| 38 | 0.000 | 113 | 141.1 | 24.76 | 0 | 79.35 | 43.69 | 0 | 33.12 |
| 39 | 0.022 | 109 | 141.4 | 23.93 | 1 | 56.03 | 54.96 | 2 | 27.83 |
| 40 | 1.353 | 43 | 139.8 | 6.066 | 19 | 113.4 | 58.10 | 4 | 34.17 |
| 41 | 0.000 | 96 | 138.6 | 11.87 | 6 | 102.6 | 46.52 | 0 | 28.84 |
| 42 | 0.023 | 130 | 138.8 | 11.98 | 0 | 75.56 | 30.68 | 6 | 32.13 |
| 43 | 0.501 | 75 | 136.5 | 16.01 | 34 | 66.92 | 37.77 | 0 | 34.95 |
| 44 | 1.280 | 100 | 140.3 | 7.296 | 72 | 99.11 | 43.06 | 0 | 35.23 |
| 45 | 0.125 | 112 | 139.9 | 21.93 | 10 | 58.81 | 35.64 | 0 | 30.99 |
Table 7
Results of three algorithms for the problem with n = 100 (n × m = 100 × 10; subscript 1: TNS, 2: NSGA2, 3: L-NSGA; "*": no feasible solution meeting fi ≤ Qi was obtained).

| |S1| | DIR(Ω1) | ρ1 | SP1 | DIR(Ω2) | ρ2 | SP2 | DIR(Ω3) | ρ3 | SP3 |
| 40 | 4.140 | 46 | 134.4 | 0.667 | 62 | 135.2 | * | * | * |
| 42 | 6.551 | 35 | 136.4 | 1.453 | 49 | 129.3 | * | * | * |
| 44 | 1.400 | 58 | 138.2 | 0.808 | 72 | 135.0 | * | * | * |
| 46 | 1.051 | 69 | 141.1 | 3.806 | 44 | 111.2 | * | * | * |
| 48 | 0.010 | 130 | 139.4 | 3.550 | 3 | 126.4 | * | * | * |
| 50 | 0.414 | 62 | 141.2 | 31.58 | 7 | 65.26 | * | * | * |
| 52 | 0.095 | 124 | 141.3 | 21.50 | 9 | 53.37 | * | * | * |
| 54 | 0.203 | 96 | 141.4 | 24.38 | 23 | 69.62 | 51.70 | 7 | 21.59 |
| 56 | 0.354 | 119 | 141.3 | 24.74 | 41 | 52.34 | 53.93 | 8 | 21.34 |
| 58 | 0.159 | 81 | 140.2 | 25.25 | 5 | 50.15 | 49.73 | 4 | 26.80 |
| 60 | 0.000 | 97 | 141.1 | 32.66 | 0 | 45.89 | 46.79 | 0 | 25.57 |
Table 8
Results of three algorithms for the problem with n = 200 (n × m = 200 × 5; subscript 1: TNS, 2: NSGA2, 3: L-NSGA; "*": no feasible solution meeting fi ≤ Qi was obtained).

| |S1| | DIR(Ω1) | ρ1 | SP1 | DIR(Ω2) | ρ2 | SP2 | DIR(Ω3) | ρ3 | SP3 |
| 80 | 1.357 | 55 | 137.8 | 14.51 | 10 | 127.9 | * | * | * |
| 84 | 0.952 | 50 | 138.5 | 12.29 | 9 | 91.88 | * | * | * |
| 88 | 0.459 | 41 | 137.4 | 38.35 | 7 | 38.72 | * | * | * |
| 92 | 0.000 | 38 | 138.2 | 28.37 | 2 | 67.13 | * | * | * |
| 96 | 0.181 | 74 | 141.1 | 19.43 | 9 | 49.51 | * | * | * |
| 100 | 0.047 | 111 | 140.0 | 30.36 | 5 | 48.89 | * | * | * |
| 104 | 0.089 | 87 | 141.3 | 21.56 | 8 | 54.78 | * | * | * |
| 108 | 0.055 | 96 | 139.6 | 20.88 | 3 | 42.04 | 62.95 | 0 | 48.28 |
| 112 | 0.011 | 111 | 140.8 | 32.61 | 2 | 35.78 | 62.93 | 0 | 54.28 |
| 116 | 0.022 | 82 | 141.0 | 42.85 | 0 | 61.36 | 44.43 | 0 | 39.23 |
| 120 | 0.047 | 81 | 140.6 | 11.80 | 1 | 97.94 | 36.80 | 0 | 47.66 |
We can see from Tables 3–8 that TNS outperforms NSGA2 in terms of solution quality on 60 of the 66 problem instances. NSGA2 provides fewer solutions than TNS to the reference set Ω* on 60 instances; in particular, ρ2 ≤ 5 on 28 instances. Because most of the solutions in Ω* are provided by TNS, the value of DIR(Ω2) is notably greater than that of DIR(Ω1) on 60 of the 66 instances; NSGA2 produces a smaller DIR(Ω2) than TNS on only six instances. SP2 is less than SP1 on 64 instances and SP1 is close to the optimal value of 141.4 on 39 instances; this indicates that NSGA2 approximates only a part of the Pareto front.

We can also conclude from Tables 3–8 that the performance of TNS is better than that of L-NSGA. L-NSGA cannot produce any feasible solutions meeting the objective constraints on 14 instances. TNS obtains a smaller DIR(Ω1) than L-NSGA on 65 of the 66 instances; moreover, DIR(Ω3) exceeds DIR(Ω1) by at least 10 on most instances. L-NSGA also cannot approximate solutions covering the whole Pareto front in most cases, and TNS contributes more members to Ω* than L-NSGA.
Table 9
Average running times (s) of the three algorithms.

| n × m | TNS | NSGA2 | L-NSGA |
| 40 × 10 | 70.1 | 55.7 | 33.6 |
| 50 × 10 | 116 | 80.8 | 56.9 |
| 60 × 10 | 145 | 88.7 | 62.6 |
| 80 × 10 | 201 | 98.7 | 92.8 |
| 100 × 10 | 251 | 119 | 111 |
| 200 × 5 | 291 | 251 | 229 |
TNS produces more non-dominated solutions than NSGA2 and L-NSGA; as a result, it has to update the non-dominated set Ω more often than NSGA2 and L-NSGA and requires more computational time than the other two algorithms, as shown in Table 9.

For HFSP with two agents, each agent has its own objective, yet the jobs of both agents must be listed in the same permutation; as a result, neighborhood searches may be more effective than global searches because of the gradual and local character of the former. The comparison of results shows that this conclusion is reasonable. The poor adaptability of the crossovers to the problem results in the low solution quality of the two GAs. The good performance of TNS mainly results from its neighborhood search character and the hybridization of multiple neighborhood mechanisms with perturbation. Thus, TNS is more effective and competitive than the GAs for HFSP with two agents.

5. Conclusions

Multi-agent scheduling has attracted much attention in recent years; however, multi-agent hybrid flow shop scheduling has seldom been considered. In this paper, HFSP with two agents is investigated and its feasibility model is considered, in which the objective is to minimize the two agents' conflicting objectives simultaneously under the corresponding upper bounds. A new algorithm named TNS has been presented by combining multiple variable neighborhood mechanisms of VNS with the perturbation step of ILS. A new replacement principle is also applied to decide whether the current solution can be updated. TNS is tested on a number of problem instances and compared with NSGA2 and L-NSGA. Computational results show that TNS provides notably better solutions than NSGA2 and L-NSGA on most instances. For TNS, its variable neighborhood mechanisms and perturbation prevent the search from falling into local optima, and the new replacement principle guarantees the continuous improvement of solution quality.

In the near future, we will continue to focus on multi-agent flow shop scheduling problems. We will discuss problems with no-wait, blocking and deteriorating jobs and try to apply some meta-heuristics to solve them. We will also pay attention to the application of other meta-heuristics to multi-agent scheduling problems in more complex shops.

Acknowledgements

This paper is supported by the Natural Science Foundation of China (61374151). We also express our thanks to the reviewers for their help.

References

[1] A. Agnetis, P.B. Mirchandani, D. Pacciarelli, A. Pacifici, Scheduling problems with two competing agents, Oper. Res. 52 (2004) 229–242.
[2] K.R. Baker, J.C. Smith, A multiple-criterion model for machine scheduling, J. Sched. 6 (2003) 7–16.
[3] T.C.E. Cheng, C.T. Ng, J.J. Yuan, Multi-agent scheduling on a single machine to minimize total weighted number of tardy jobs, Theor. Comput. Sci. 362 (2006) 273–281.
[4] T.C.E. Cheng, C.T. Ng, J.J. Yuan, Multi-agent scheduling on a single machine with max-form criteria, Eur. J. Oper. Res. 188 (2008) 603–609.
[5] W.C. Lee, Y.H. Chung, M.C. Hu, Genetic algorithms for a two-agent single-machine problem with release time, Appl. Soft Comput. 12 (2012) 3580–3589.
[6] Y.Q. Yin, S.R. Cheng, T.C.E. Cheng, C.C. Wu, W.H. Wu, Two-agent single-machine scheduling with assignable due dates, Appl. Math. Comput. 219 (2012) 1674–1685.
[7] P. Liu, N. Yi, X. Zhou, G. Hua, Scheduling two agents with sum-of-processing-times-based deterioration on a single machine, Appl. Math. Comput. 219 (2013) 8848–8855.
[8] W.H. Wu, J. Xu, W.H. Wu, Y.Q. Yin, I.F. Cheng, C.C. Wu, A tabu method for a two-agent single-machine scheduling with deterioration jobs, Comput. Oper. Res. 40 (2013) 2116–2127.
[9] S.S. Li, J.J. Yuan, Unbounded parallel-batching scheduling with two agents, J. Sched. 15 (2012) 629–640.
[10] D. Elvikis, H.W. Hamacher, V. T'kindt, Scheduling two agents on uniform parallel machines with makespan and cost functions, J. Sched. 14 (2011) 471–481.
[11] B.Q. Fan, T.C.E. Cheng, S.S. Li, Q. Feng, Bounded parallel-batching scheduling with two competing agents, J. Sched. 16 (2013) 261–271.
[12] W.C. Lee, S.K. Chen, C.W. Chen, C.C. Wu, A two-machine flowshop problem with two agents, Comput. Oper. Res. 38 (2011) 98–104.
[13] W.C. Lee, S.K. Chen, C.C. Wu, Branch-and-bound and simulated annealing algorithms for a two-agent scheduling problem, Expert Syst. Appl. 37 (2010) 6594–6601.
[14] W.C. Luo, L. Chen, G.C. Zhang, Approximation schemes for two-machine flow shop scheduling with two agents, J. Comb. Optim. 24 (2012) 229–239.
[15] F. Dugardin, F. Yalaoui, L. Amodeo, New multi-objective method to solve reentrant hybrid flow shop scheduling problem, Eur. J. Oper. Res. 203 (2010) 22–31.
[16] S.J. Wang, M. Liu, A heuristic method for two-stage hybrid flow shop with dedicated machines, Comput. Oper. Res. 40 (2013) 438–450.
[17] J. Yang, A new complexity proof for the two-stage hybrid flow shop scheduling with dedicated machines, Int. J. Prod. Res. 48 (2010) 1531–1538.
[18] T. Stützle, Iterated local search for the quadratic assignment problem, Eur. J. Oper. Res. 174 (2006) 1519–1539.
[19] B. Laurent, J.K. Hao, Iterated local search for the multiple depot vehicle scheduling problem, Comput. Ind. Eng. 57 (2009) 277–286.
[20] M.J. Geiger, Decision support for multi-objective flow shop scheduling by the Pareto iterated local search methodology, Comput. Ind. Eng. 61 (2011) 805–812.
[21] H. Derbel, B. Jarboui, S. Hanafi, H. Chabchoub, An iterated local search for solving a location-routing problem, Electron. Notes Discret. Math. 36 (2010) 875–882.
[22] P. Hansen, N. Mladenović, Variable neighborhood search: principles and applications, Eur. J. Oper. Res. 130 (2001) 449–467.
[23] M.A. Adibi, M. Zandieh, M. Amiri, Multi-objective scheduling of dynamic job shop using variable neighborhood search, Expert Syst. Appl. 37 (2010) 282–287.
[24] D.M. Lei, X.P. Guo, Variable neighborhood search for minimizing total tardiness on flow shop with batch processing machines, Int. J. Prod. Res. 49 (2011) 519–529.
[25] D.M. Lei, X.P. Guo, Variable neighborhood search for dual-resource constrained flexible job shop scheduling, Int. J. Prod. Res. 52 (2014) 2519–2529.
[26] H. Ishibuchi, T. Yoshida, T. Murata, Balance between genetic search and local search in memetic algorithms for multi-objective permutation flowshop scheduling, IEEE Trans. Evol. Comput. 7 (2003) 204–223.
[27] E. Zitzler, K. Deb, L. Thiele, Comparison of multiobjective evolutionary algorithms: empirical results, Evol. Comput. 8 (2000) 173–195.
[28] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Trans. Evol. Comput. 6 (2002) 182–197.
[29] M.M. Kostreva, W. Ogryczak, Linear optimization with multiple equitable criteria, RAIRO Oper. Res. 33 (1999) 275–297.
[30] M.M. Kostreva, W. Ogryczak, A. Wierzbicki, Equitable aggregation and multiple criteria analysis, Eur. J. Oper. Res. 158 (2004) 362–377.