
An Iterated Local Search for the Multi-objective Permutation Flowshop Scheduling Problem with Sequence-dependent Setup Times

Jianyou Xu a,*, Chin-Chia Wu b, Yunqiang Yin c, and Win-Chin Lin b

a College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
b Department of Statistics, Feng Chia University, Taichung, Taiwan
c Faculty of Science, Kunming University of Science and Technology, Kunming 650093, China

* Corresponding author. E-mail address: [email protected]


Graphical Abstract

Due to its simplicity and powerful search ability, iterated local search (ILS) has been widely used to tackle a variety of single-objective combinatorial optimization problems, whereas applications of ILS to multi-objective combinatorial optimization problems remain scarce. In this paper we design a multi-objective ILS (MOILS) to solve the multi-objective permutation flowshop scheduling problem with sequence-dependent setup times. In the MOILS, we design a Pareto-based variable depth search for the multi-objective local search phase. The search depth is dynamically adjusted during the search process of the MOILS to strike a balance between exploration and exploitation. We incorporate an external archive into the MOILS to store the non-dominated solutions and to provide initial search points that allow the MOILS to escape from local optima. We compare the MOILS with several multi-objective evolutionary algorithms (MOEAs) shown to be effective for the multi-objective permutation flowshop scheduling problem in the literature. The computational results show that the proposed MOILS outperforms the MOEAs.


Highlights of the paper

1. Multiple objectives and sequence-dependent setup times are considered in the permutation flowshop scheduling problem.

2. The conventional single-objective iterated local search (ILS) is extended to solve a multi-objective combinatorial optimization problem.

3. A Pareto-based variable depth search is designed to act as the multi-objective local search phase in the multi-objective ILS.

4. Experimental results on benchmark problems show that the proposed multi-objective ILS outperforms several powerful multi-objective evolutionary algorithms from the literature.


Abstract

Due to its simplicity and powerful search ability, iterated local search (ILS) has been widely used to tackle a variety of single-objective combinatorial optimization problems, whereas applications of ILS to multi-objective combinatorial optimization problems remain scarce. In this paper we design a multi-objective ILS (MOILS) to solve the multi-objective permutation flowshop scheduling problem with sequence-dependent setup times, with the goal of minimizing the makespan and the total weighted tardiness of all jobs. In the MOILS, we design a Pareto-based variable depth search for the multi-objective local search phase. The search depth is dynamically adjusted during the search process of the MOILS to strike a balance between exploration and exploitation. We incorporate an external archive into the MOILS to store the non-dominated solutions and to provide initial search points that allow the MOILS to escape from local optima. We compare the MOILS with several multi-objective evolutionary algorithms (MOEAs) shown to be effective for the multi-objective permutation flowshop scheduling problem in the literature. The computational results show that the proposed MOILS outperforms the MOEAs.

Keywords: Iterated local search; multi-objective optimization; permutation flowshop scheduling with sequence-dependent setup times.


1. Introduction

The permutation flowshop scheduling problem (PFSP) is a classical production scheduling problem. Many production processes in the iron and steel, petrochemical, and mechanical industries can be modelled as the PFSP. The scheduling objectives most commonly studied in the literature include the makespan (Cmax), total flowtime (TFT), and total weighted tardiness (TWT). Although the PFSP with a single objective has been widely studied [1-6], its theoretical results cannot be applied directly to real-world scheduling. One major reason is that in practice one generally needs to optimize multiple objectives instead of a single one. In addition, a machine in a real-world shop often undergoes operations between the processing of two consecutive jobs, such as cleaning, replacement of processing tools, and transportation of jobs from one machine to the next. The duration of such operations depends not only on the current job being processed but also on the next job waiting to be processed; that is, a sequence-dependent setup time exists between any two jobs. Motivated by these observations, we deal with the PFSP with sequence-dependent setup times to minimize multiple objectives (MOPFSP-SDST) in this paper. Specifically, we seek to minimize two scheduling objectives, Cmax and TWT, simultaneously.
There are a few studies on the single-objective PFSP with sequence-dependent setup times (SDST) and on the multi-objective PFSP (MOPFSP) in the literature. However, to the best of our knowledge, there is little research focusing on the MOPFSP-SDST considered in this paper. For the single-objective PFSP with SDST, Mirabi [7] developed an ant colony optimization algorithm that adopts a new approach for computing the initial pheromone values and a local search. Ríos-Mercado and Bard [8] presented a branch-and-cut algorithm and later investigated the polyhedral structure of two mixed-integer programs for the PFSP with SDST so as to improve the efficiency of the branch-and-cut algorithm in [9]. Recently, Ying et al. [10] took the sequence-dependent family setup times into consideration in the no-wait flowshop manufacturing cell scheduling problem and proposed three metaheuristic algorithms. As reviewed by Sun et al. [11], the literature on the multi-objective PFSP (MOPFSP) is

still scarce compared with that on the single-objective PFSP. Nevertheless, algorithms for the MOPFSP continue to attract significant research interest from both theoretical and practical perspectives. Many methods proposed for the MOPFSP are based on the genetic algorithm. Ishibuchi et al. [12] proposed a memetic algorithm for the MOPFSP and evaluated the positive effect of incorporating local search into the genetic algorithm. Another kind of multi-objective genetic algorithm with local search (MOGALS) was developed by Arroyo and Armentano [13]. Yanda and Tamura [14] presented a variant of the NSGAII of Deb et al. [15], namely hMGA, for the MOPFSP. Kollat and Reed [16] extended the NSGAII by incorporating epsilon dominance and presented an epsilon-based NSGAII (ε-NSGAII). Chen et al. [17] investigated a bi-criteria two-machine flowshop scheduling problem with a learning effect and developed a branch-and-bound algorithm and two heuristics for this problem. A multi-objective simulated annealing algorithm (MOSA) was presented by Varadharajan and Rajendran [18] for the minimization of the makespan and total flowtime criteria. Framinan and Leisten [19] constructed a multi-objective iterated greedy search (MOIGS) for this problem and obtained promising results. Sha and Lin [20] presented a multi-objective particle swarm optimization method and compared it with some simple heuristic methods. Arroyo and Armentano [21] developed a partial enumeration heuristic based on the insertion technique of Nawaz et al. [22] (known as NEH) for the MOPFSP. Tajbakhsh et al. [23] studied a kind of MOPFSP in which a three-stage manufacturing system comprising machining, assembly and batch processing stages is considered, and developed a hybrid algorithm based on the genetic algorithm and particle swarm optimization to solve this problem. Minella et al. [24] conducted a comprehensive review of 23 algorithms proposed for the MOPFSP with the objectives of makespan, total flowtime, and total weighted tardiness, and the computational results showed that the MOGALS [13] and the MOSA [18] are among the most powerful algorithms. Chiang et al. [25] incorporated a NEH-based local search into the genetic algorithm and proposed a hybrid memetic algorithm. A multi-objective ant-colony algorithm (MOACA) was developed by Rajendran and Ziegler [26] to minimize the makespan and total flowtime. For more recent papers on the multi-objective permutation flowshop scheduling problem, the reader may refer to Tiwari et al. [27], Li and Ma [28], Rifai et al. [29], Han et al. [30], and a survey by Yenisey and Yagmahan [31].

Due to its simplicity and powerful search ability, iterated local search (ILS) has been widely used to tackle single-objective combinatorial optimization problems. Applications of ILS in the scheduling domain have reached the state of the art for such problems as the single-machine total weighted tardiness problem and permutation flowshop scheduling [32-33]. However, research on applying ILS to multi-objective combinatorial optimization problems remains scarce. Besides the basic considerations in the design of a single-objective ILS, the design of a multi-objective ILS (MOILS) must consider additional factors, such as the simultaneous optimization of conflicting objectives and the spread of the obtained solutions in the objective space. In addition, the implementation of local search in an MOILS is a major difficulty because the MOILS maintains many Pareto solutions, so one needs to select a solution for improvement by local search and decide how the local search is performed on the selected solution. Therefore, the design of an MOILS is rather different from that of a single-objective ILS, and the structure and performance of MOILS for scheduling problems are worth studying.
In this paper we design a multi-objective ILS (MOILS) to solve the MOPFSP-SDST. To the best of our knowledge, there is no previous research that uses ILS to solve the MOPFSP-SDST. The main contributions of this paper are as follows: (1) in the MOILS, we design a Pareto-based variable depth search for the multi-objective local search phase, where the search depth is dynamically adjusted during the search process to strike a balance between exploration and exploitation; (2) we incorporate an external archive into the MOILS to store the non-dominated solutions and provide initial search points that allow the MOILS to escape from local optima, thereby achieving good search diversity; and (3) we compare the MOILS with several multi-objective evolutionary algorithms (MOEAs) shown to be effective for treating the MOPFSP in the literature, and the computational results show that the proposed MOILS is comparable or superior to these MOEAs.
Although many algorithms have been proposed for the MOPFSP in the literature [12-22], they are mainly genetic algorithms with a local search. The genetic algorithm is a population-based evolutionary algorithm, so the whole population needs to be updated in each iteration by the crossover and mutation operators, and each new solution needs to be evaluated. On the one hand, since a new solution is randomly generated, its quality cannot be guaranteed to be better than that of its parents. On the other hand, the generation and evaluation of a solution for the MOPFSP is very time-consuming. The need for population updating may therefore render genetic algorithms inefficient for solving the MOPFSP because much computational effort may be devoted to generating and evaluating inferior solutions. Different from the genetic algorithm, the proposed MOILS selects a single solution from the external archive and performs a local search on it in each iteration, so its search is focused only on promising solutions. Although the MOILS also generates an initial population (see Section 3.3), the aim is only to obtain a good initial external archive, and this population is neither used nor updated in the subsequent iterations. Therefore, our MOILS is much more efficient than genetic algorithms for the MOPFSP-SDST (see the experimental results in Section 4.5).
This paper is organized as follows: In Section 2 we introduce and define the MOPFSP-SDST. In Section 3 we present the proposed MOILS. In Section 4 we analyze and discuss the experimental results of applying the MOILS and other MOEAs to benchmark problems. We conclude the paper and suggest topics for future research in Section 5.

2. Description of the MOPFSP-SDST

In this problem there is a set of jobs J = {1, 2, …, n} to be processed on a given set of machines M = {1, 2, …, m}. Each job must be processed on all m machines in the same order, i.e., starting from machine 1, then machine 2, and so on until machine m. Each job i ∈ J has a fixed, nonnegative processing time p_{k,i} on each machine k ∈ M, known in advance. It is assumed that all the jobs and machines are available at time zero. Once a job starts processing, its processing cannot be interrupted. It is also required that a job cannot be processed by more than one machine simultaneously and that a machine can process at most one job at a time. The processing of a job i on machine k can start only if two requirements are met: (i) job i has finished its processing on machine k−1 and (ii) machine k is idle. In addition, a setup is needed between any two consecutive jobs, and the setup time depends not only on the job just processed but also on the next job waiting for processing; that is, the setup time is sequence dependent. The objective of the problem is to minimize simultaneously the makespan (the completion time of the last job in the permutation, denoted by Cmax; a small makespan means that the system assigns its limited resources to the tasks efficiently) and the total weighted tardiness of all jobs (a job finished after its due date is recorded as tardy and charged a penalty equal to its weight times the amount by which its completion time exceeds its due date; the sum of the weighted tardiness of all jobs in a sequence is the total weighted tardiness, denoted by TWT).

Let S_{k,i,j} denote the setup time on machine k (k ∈ M) incurred when job j is processed immediately after job i (i, j ∈ J, i ≠ j). Then, for a given job permutation π = (π(1), …, π(l), …, π(n)), where π(l) denotes the job assigned to position l, the completion time of job π(l) on machine k can be obtained by the recursion

C_{k,π(l)} = max{C_{k−1,π(l)}, C_{k,π(l−1)} + S_{k,π(l−1),π(l)}} + p_{k,π(l)},

where C_{k,π(0)} = 0, C_{0,π(l)} = 0, and S_{k,π(0),π(l)} = 0. We then obtain

Cmax = max_{l=1,…,n} C_{m,π(l)}  and  TWT = Σ_{l=1}^{n} max{0, w_{π(l)} (C_{m,π(l)} − d_{π(l)})},

where w_{π(l)} and d_{π(l)} are the weight and due date of job π(l), respectively.
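To make the recursion concrete, the following C++ sketch evaluates a permutation under the assumption that p[k][i] stores the processing time of job i on machine k and S[k][i][j] stores the sequence-dependent setup time on machine k when job j follows job i; these array layouts and the use of index 0 for a dummy predecessor are illustrative conventions of ours, not part of the paper.

    #include <algorithm>
    #include <vector>

    // Objective values of one schedule: makespan and total weighted tardiness.
    struct Objectives { double cmax; double twt; };

    // Evaluate a permutation pi (job indices 1..n) for an m-machine flowshop
    // with sequence-dependent setup times, following the recursion above.
    // p[k][i]    : processing time of job i on machine k (k = 1..m, i = 1..n)
    // S[k][i][j] : setup time on machine k when job j directly follows job i;
    //              S[k][0][j] = 0 models the "no predecessor" case.
    Objectives evaluate(const std::vector<int>& pi,
                        const std::vector<std::vector<double>>& p,
                        const std::vector<std::vector<std::vector<double>>>& S,
                        const std::vector<double>& w,   // job weights
                        const std::vector<double>& d)   // job due dates
    {
        const int m = static_cast<int>(p.size()) - 1;   // machines 1..m
        const int n = static_cast<int>(pi.size());
        // C[k] holds the completion time on machine k of the job scheduled last;
        // C[k-1] already holds the current job's completion on machine k-1.
        std::vector<double> C(m + 1, 0.0);
        Objectives obj{0.0, 0.0};
        int prev = 0;                                   // dummy predecessor
        for (int l = 0; l < n; ++l) {
            const int job = pi[l];
            for (int k = 1; k <= m; ++k) {
                // C_{k,pi(l)} = max{C_{k-1,pi(l)}, C_{k,pi(l-1)} + S_{k,pi(l-1),pi(l)}} + p_{k,pi(l)}
                C[k] = std::max(C[k - 1], C[k] + S[k][prev][job]) + p[k][job];
            }
            obj.cmax = std::max(obj.cmax, C[m]);
            obj.twt += std::max(0.0, w[job] * (C[m] - d[job]));
            prev = job;
        }
        return obj;
    }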

3. Proposed MOILS

In this section we first give a brief introduction to multi-objective optimization and the conventional ILS, and then present the detailed procedure of our MOILS for the MOPFSP-SDST.

3.1 Multi-objective optimization

Instead of a single objective, multi-objective optimization needs to optimize multiple objectives simultaneously, where in general the objectives conflict with one another. Let X = [x_1, x_2, …, x_n]^T and Y = [y_1, y_2, …, y_n]^T be two n-dimensional decision variable vectors, and let f_i(X) and f_i(Y) be their ith objective values, respectively. Then X is said to dominate Y (denoted X ≺ Y) if and only if f_i(X) ≤ f_i(Y) for each objective i and there exists at least one objective j such that f_j(X) < f_j(Y). A decision variable vector X* is referred to as Pareto-optimal if and only if X* is not dominated by any other decision variable vector in the decision space. All the Pareto-optimal vectors in the decision space make up the Pareto set (P*), and the corresponding set of objective vectors of P* in the objective space is called the Pareto front (PF*). The task of multi-objective optimization is to find P* and the corresponding PF*.
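For our two minimized objectives (Cmax and TWT), this dominance test and the filtering of a solution set down to its non-dominated subset can be sketched as follows in C++; the Objectives struct and the brute-force O(n^2) filter are illustrative choices of ours rather than the paper's implementation.

    #include <vector>

    struct Objectives { double cmax; double twt; };

    // Returns true if a dominates b under minimization of both objectives:
    // a is no worse in every objective and strictly better in at least one.
    bool dominates(const Objectives& a, const Objectives& b) {
        const bool noWorse = (a.cmax <= b.cmax) && (a.twt <= b.twt);
        const bool better  = (a.cmax <  b.cmax) || (a.twt <  b.twt);
        return noWorse && better;
    }

    // Keep only the non-dominated members of a set of objective vectors.
    std::vector<Objectives> nonDominated(const std::vector<Objectives>& set) {
        std::vector<Objectives> front;
        for (std::size_t i = 0; i < set.size(); ++i) {
            bool dominated = false;
            for (std::size_t j = 0; j < set.size() && !dominated; ++j) {
                if (j != i && dominates(set[j], set[i])) dominated = true;
            }
            if (!dominated) front.push_back(set[i]);
        }
        return front;
    }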

3.2 Conventional iterated local search

Among metaheuristics for treating NP-hard combinatorial optimization problems, ILS is well known for its effectiveness and simplicity in practice. When a search is trapped in a locally optimal solution, ILS allows the search to escape the trap without losing many of the good properties of the current solution. Let πbest be the best solution found in the history of the ILS; the search procedure of ILS is given in Algorithm 1.

Algorithm 1. Iterated local search
  Generate an initial solution π.
  π* := LocalSearch(π)
  πbest := π*
  while the termination criterion is not reached do
    π' := Perturbation(π*)
    π'' := LocalSearch(π')
    π* := AcceptanceCriterion(π*, π'', πbest)
  end while

For a given initial solution π, LocalSearch(π), which is usually a neighbourhood search on π, is performed to improve it, and the locally optimal solution π* in the neighbourhood is obtained. The current best solution πbest is then set to π*. The iteration cycle of ILS mainly consists of the following three steps. First, ILS uses a Perturbation(π*) function, i.e., a mutation or change of the current local minimum π* (usually performed by applying a random neighbourhood move to the input solution π*), to allow the search to escape the local minimum and generate an intermediate solution π'. For example, for a given locally optimal solution π* = (π(1), π(2), …, π(n)), the perturbation with a random insertion move is performed as follows: the job π(a) at a randomly selected position a is first removed from its current position and then inserted at another position b (a ≠ b). Through this random perturbation, the locally optimal solution π* is moved to another place in the solution space. Second, after the perturbation procedure, LocalSearch(π') based on a neighbourhood search is again applied to the perturbed solution π' to obtain a new locally optimal solution π'' in the neighbourhood. Because the local search has a significant impact on the performance of ILS, its design is very important and challenging. Third, for the new locally optimal solution π'', AcceptanceCriterion(π*, π'', πbest) is used to check whether the current best solution π* can be updated. That is, if π'' is better than π*, then π* is replaced with π'' and we further check whether πbest can be updated; otherwise, π* remains unchanged. The iteration continues until the termination criterion is reached. In the implementation of ILS, if the current best solution πbest has not been improved for a given number of consecutive iterations (which means that the search from π* cannot reach more promising solutions), then π' is generated from πbest instead of π* in the Perturbation operation, which is called backtracking.

3.3 Proposed MOILS

Since the task of the MOPFSP-SDST is to obtain a set of non-dominated solutions while the conventional ILS generally deals with single-objective optimization, we introduce an external archive A to store these solutions. Let S denote the set of non-dominated solutions obtained from a solution π through ParetoLocalSearch. The overall framework of the proposed MOILS is given in Algorithm 2, in which modifications are made to adapt the conventional ILS to multi-objective optimization.

Algorithm 2. Multi-objective iterated local search
  P := CreateInitialPopulation()            //create an initial population
  for each solution π in P do
    S := ParetoLocalSearch(π, d0)           //local search with initial depth d0
    A := UpdateExternalArchive(S, A)
  end for
  while the maximum runtime is not reached do
    π* := Selection(A)                      //select a solution from A
    Set updated := true
    while updated do
      π' := Perturbation(π*)
      d := UpdateSearchDepth()              //determine current search depth
      S := ParetoLocalSearch(π', d)         //local search with depth d
      A := UpdateExternalArchive(S, A)
      if A is not updated then
        Set updated := false
      else
        π* := π'                            //accept π' as the perturbation seed
      end if
    end while
  end while

3.3.1 Creating the initial population

The purpose of the initial population (whose size is set to 100 in our algorithm) is to generate an initial external archive with good quality and diversity. We use a generalization of the NEH heuristic to generate the first solution because NEH is considered the best among simple constructive heuristics for the PFSP (Taillard [34]). In addition, we adopt a random generation heuristic that iteratively assigns jobs to random positions to generate the other initial solutions and guarantee good diversity of the initial population. After initializing the population, we perform a Pareto-based local search with variable search depth on each solution to obtain a set of non-dominated solutions, with which we initialize and update the external archive A.

3.3.2 Update of the external archive

For a given solution π, we first check whether it is dominated by any solution in A. If so, the solution is discarded immediately. Otherwise, we delete all the solutions in A that are dominated by π and add π to A. If the number of solutions in A then exceeds the maximum archive size, the most crowded solution in A is repeatedly deleted until the maximum size is reached.

3.3.3 Selecting an initial solution from the external archive

To improve the search efficiency and diversity of the MOILS, in each iteration the initial solution π is selected from A based on the crowding distance of each non-dominated solution. The selection procedure is as follows: first, the crowding distance of each non-dominated solution in A is calculated according to the method proposed by Deb et al. [15]; second, two solutions are selected at random; third, tournament selection is used to keep the one with the better (larger) crowding distance.
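The archive update of Section 3.3.2 and the crowding-distance tournament of Section 3.3.3 can be sketched together as follows in C++; the Member/Archive types, the default maximum size of 100 (taken from Section 4.2), and the brute-force loops are illustrative assumptions of ours, with the crowding distance computed as in Deb et al. [15].

    #include <algorithm>
    #include <limits>
    #include <random>
    #include <vector>

    struct Member { double cmax; double twt; double crowding; };
    using Archive = std::vector<Member>;

    // Dominance test for the two minimized objectives (see Section 3.1).
    static bool dominates(const Member& a, const Member& b) {
        return (a.cmax <= b.cmax && a.twt <= b.twt) && (a.cmax < b.cmax || a.twt < b.twt);
    }

    // Crowding distance: boundary solutions get an infinite distance; interior
    // ones accumulate the normalized gap between their neighbours per objective.
    static void computeCrowding(Archive& A) {
        const std::size_t n = A.size();
        for (auto& s : A) s.crowding = 0.0;
        if (n <= 2) { for (auto& s : A) s.crowding = std::numeric_limits<double>::infinity(); return; }
        auto sweep = [&](auto key) {
            std::vector<std::size_t> idx(n);
            for (std::size_t i = 0; i < n; ++i) idx[i] = i;
            std::sort(idx.begin(), idx.end(),
                      [&](std::size_t a, std::size_t b) { return key(A[a]) < key(A[b]); });
            A[idx.front()].crowding = A[idx.back()].crowding = std::numeric_limits<double>::infinity();
            const double range = key(A[idx.back()]) - key(A[idx.front()]);
            if (range <= 0.0) return;
            for (std::size_t i = 1; i + 1 < n; ++i)
                A[idx[i]].crowding += (key(A[idx[i + 1]]) - key(A[idx[i - 1]])) / range;
        };
        sweep([](const Member& s) { return s.cmax; });
        sweep([](const Member& s) { return s.twt; });
    }

    // Archive update (Section 3.3.2): reject a dominated candidate, otherwise
    // remove the members it dominates, insert it, and trim the most crowded
    // (smallest-distance) member while the archive exceeds its maximum size.
    static bool updateArchive(Archive& A, const Member& cand, std::size_t maxSize = 100) {
        for (const Member& s : A)
            if (dominates(s, cand)) return false;          // candidate rejected
        A.erase(std::remove_if(A.begin(), A.end(),
                               [&](const Member& s) { return dominates(cand, s); }),
                A.end());
        A.push_back(cand);
        while (A.size() > maxSize) {
            computeCrowding(A);
            auto worst = std::min_element(A.begin(), A.end(),
                [](const Member& a, const Member& b) { return a.crowding < b.crowding; });
            A.erase(worst);
        }
        return true;                                       // archive changed
    }

    // Seed selection (Section 3.3.3): binary tournament on crowding distance,
    // where the larger (less crowded) distance wins.
    static std::size_t selectSeed(Archive& A, std::mt19937& rng) {
        computeCrowding(A);
        std::uniform_int_distribution<std::size_t> pick(0, A.size() - 1);
        const std::size_t a = pick(rng), b = pick(rng);
        return (A[a].crowding >= A[b].crowding) ? a : b;
    }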

3.3.4 Perturbation of a solution

For the input solution π = (π(1), π(2), …, π(n)), the perturbation is performed by applying a random insertion move that removes the job at a randomly selected position a and then inserts it at another position b (a ≠ b). By doing so, a new solution is obtained, and this solution may have escaped from the current local optimum region.

3.3.5 Pareto-based local search with variable search depth

The Pareto-based local search (ParetoLocalSearch) has two major characteristics: (1) the returned search result is a set of non-dominated solutions instead of a single solution; and (2) the search depth (or neighbourhood size) is variable during the search process of the MOILS. For the PFSP, the traditional neighbourhoods used in local search are the insertion neighbourhood based on the insertion move and the swap neighbourhood based on the swap move that exchanges two jobs. In this paper we use another kind of neighbourhood based on the compound insertion move. This move deletes a block of consecutive jobs from a solution and then inserts each of them into the best position according to their original sequence. Figure 1 illustrates the job deletion of this move. Given a solution π = (π(1), π(2), …, π(n)), we first choose a position r in π and, starting from position r, delete d consecutive jobs from π according to the following condition: if n − r + 1 ≥ d, then delete the d consecutive jobs π(r), π(r+1), …, π(r+d−1); otherwise, delete the jobs π(r), …, π(n) and then π(1), …, π(d−(n−r+1)). For simplicity, the deleted jobs are denoted as π(r1), π(r2), …, π(rd) and the resulting partial solution as π'.

Figure 1. Illustration of the deletion of d consecutive jobs (d = 4) for the two cases n − r + 1 ≥ d and n − r + 1 < d

Algorithm 3. Pareto-based job insertion
  Input: partial solution π' with n−d positions and the d removed jobs π(r1), π(r2), …, π(rd)
  Set the partial non-dominated solution set Sn := {π'}
  for i := 1 to d do
    Set the new solution set Pnew := ∅
    for each non-dominated partial solution in Sn do
      for j := 1 to n−d+i do
        Insert job π(ri) into position j of the non-dominated partial solution
        Store the resulting new solution in Pnew
      end for
    end for
    Update Sn with the non-dominated solutions in Pnew
  end for
  Return Sn

The job insertion process is implemented as Algorithm 3. For the resulting partial solution π' with n−d positions, we first insert the first job π(r1) into all the positions of π', obtain n−d+1 new partial solutions, and keep only the non-dominated partial solutions. For the next job π(r2), we insert it into all the positions of each non-dominated partial solution and again keep only the non-dominated partial solutions among the newly generated ones. This process iterates until all the removed jobs have been inserted, and finally we obtain a set of non-dominated solutions.

As shown in Algorithm 2, the search depth is determined by d, which varies during the iterations of the MOILS. When d is large, the neighbourhood size (search depth) is large, and vice versa. To endow the local search with good exploration ability in the earlier stages and good exploitation ability in the later stages, we use a dynamic strategy for the value of d: the value of d is dynamically adjusted from large to small (i.e., down to 1) based on the runtime already used by the MOILS (the maximum available runtime is divided into d equal parts).
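The two parts of this subsection that are easiest to get wrong in code are the wrap-around deletion of the d-job block and the runtime-based depth schedule. The following C++ sketch shows one way to implement both; the function names and the linear schedule (splitting the time budget into equal slices, one per depth value from the maximum down to 1) are our reading of the description above, not the authors' exact code.

    #include <cstddef>
    #include <vector>

    // Remove d consecutive jobs from permutation pi starting at position r
    // (0-based), wrapping around to the front when r + d exceeds the end,
    // exactly as in the two cases of Figure 1. Returns the removed jobs in
    // their original order; pi is reduced to the partial solution π'.
    std::vector<int> deleteBlock(std::vector<int>& pi, std::size_t r, std::size_t d) {
        const std::size_t n = pi.size();
        std::vector<int> removed;
        std::vector<bool> erase(n, false);
        for (std::size_t k = 0; k < d && k < n; ++k) {
            const std::size_t pos = (r + k) % n;   // wrap-around deletion
            removed.push_back(pi[pos]);
            erase[pos] = true;
        }
        std::vector<int> partial;
        for (std::size_t pos = 0; pos < n; ++pos)
            if (!erase[pos]) partial.push_back(pi[pos]);
        pi.swap(partial);
        return removed;
    }

    // Depth schedule: the maximum runtime is split into dMax equal parts and
    // the depth decreases from dMax to 1 as the elapsed time passes through them.
    int currentDepth(double elapsedMs, double maxRuntimeMs, int dMax) {
        if (elapsedMs >= maxRuntimeMs) return 1;
        const double slice = maxRuntimeMs / dMax;
        const int used = static_cast<int>(elapsedMs / slice);   // completed slices
        const int depth = dMax - used;
        return depth < 1 ? 1 : depth;
    }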

4. Computational experiments

4.1 Test problems

To test the performance of the proposed MOILS, we performed computational experiments on problem instances derived from the well-known benchmark set of Taillard [34]. The benchmark set comprises 11 problem sizes, with ten instances for each size. The combinations of n × m are {20, 50, 100} × {5, 10, 20} and {200} × {10, 20}, so 110 instances in total are used in the experiments. The setup times and job weights are generated from uniform distributions over [1, 50] and [1, 10], respectively. The due date of a job j is generated by dj = (Pj + Sj) × (1 + random × 3), where Pj is the total processing time of job j on all machines, Sj is the sum over all machines of the average setup time from job j to all possible following jobs, and random is a random number uniformly distributed in [0, 1].
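As an illustration of this instance-generation scheme, the sketch below draws setup times, weights, and due dates with the stated distributions; the Instance layout, the use of std::mt19937, and the assumption of integer uniform draws for setup times and weights are our own choices rather than the authors' generator.

    #include <random>
    #include <vector>

    struct Instance {
        int n, m;
        std::vector<std::vector<double>> p;               // p[k][j], k = 1..m, j = 1..n
        std::vector<std::vector<std::vector<double>>> S;  // S[k][i][j], i = 0..n (0 = no predecessor)
        std::vector<double> w, d;                          // weights and due dates, index 1..n
    };

    // Fill in setup times ~ U[1,50], weights ~ U[1,10], and due dates
    // d_j = (P_j + S_j) * (1 + 3 * random); assumes ins.p already holds the
    // Taillard processing times for machines 1..m and jobs 1..n.
    void generateExtraData(Instance& ins, unsigned seed) {
        std::mt19937 rng(seed);
        std::uniform_int_distribution<int> setup(1, 50), weight(1, 10);
        std::uniform_real_distribution<double> u01(0.0, 1.0);

        ins.S.assign(ins.m + 1, std::vector<std::vector<double>>(
                         ins.n + 1, std::vector<double>(ins.n + 1, 0.0)));
        for (int k = 1; k <= ins.m; ++k)
            for (int i = 1; i <= ins.n; ++i)
                for (int j = 1; j <= ins.n; ++j)
                    if (i != j) ins.S[k][i][j] = setup(rng);

        ins.w.assign(ins.n + 1, 0.0);
        ins.d.assign(ins.n + 1, 0.0);
        for (int j = 1; j <= ins.n; ++j) {
            ins.w[j] = weight(rng);
            double Pj = 0.0, Sj = 0.0;
            for (int k = 1; k <= ins.m; ++k) {
                Pj += ins.p[k][j];
                double sum = 0.0;
                for (int next = 1; next <= ins.n; ++next)
                    if (next != j) sum += ins.S[k][j][next];
                Sj += sum / (ins.n - 1);                   // average over possible followers
            }
            ins.d[j] = (Pj + Sj) * (1.0 + 3.0 * u01(rng));
        }
    }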

4.2 Parameter settings

We coded the proposed MOILS in C++ and ran it on a PC with an Intel Core 2 2.83 GHz CPU and 4 GB of memory; only one core of the CPU was used in the experiments. We determined the parameters of the algorithms by experiments. The size of the initial population is set to 100, the maximum size of the external archive is set to 100, and the maximum number of consecutive jobs to be removed in the local search is set to 5. In addition, the number of consecutive non-improving iterations after which backtracking is invoked is set to 5. The stopping criterion is a maximum run time of n × (m/2) × 150 milliseconds of CPU time. For each of the 110 instances, we ran the tested algorithms for ten independent runs and collected the average performance metrics.

4.3 Performance evaluation metrics

We measure solution quality by two metrics that are commonly used in the literature for the MOPFSP, namely the hypervolume metric and the unary epsilon metric [35]. The hypervolume metric estimates how far the obtained Pareto front is from the true Pareto front and how well the solutions are distributed. It is calculated as the cumulative area covered by a set of non-dominated solutions with respect to a given reference point R. An illustration of the hypervolume for a minimization problem is shown in Figure 2. Following reference [24], we normalize the objective values of the non-dominated solutions obtained by the different tested algorithms into [0, 1] and select (1.2, 1.2) as the reference point.

Figure 2. Calculation of the hypervolume (objective space with axes Cmax and TWT, showing the non-dominated solutions and the reference point)
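For two minimized objectives the hypervolume reduces to a sum of rectangle areas, as sketched below in C++; the normalization of both objectives into [0, 1] and the reference point (1.2, 1.2) follow the text, while the sorting-based sweep is one standard way to compute the 2-D case and is only an illustrative implementation.

    #include <algorithm>
    #include <vector>

    struct Point { double f1; double f2; };   // normalized (Cmax, TWT), both in [0, 1]

    // Two-objective hypervolume with respect to the reference point (refX, refY):
    // sort the non-dominated points by the first objective and accumulate the
    // rectangles they dominate up to the reference point.
    double hypervolume(std::vector<Point> front, double refX = 1.2, double refY = 1.2) {
        std::sort(front.begin(), front.end(),
                  [](const Point& a, const Point& b) { return a.f1 < b.f1; });
        double volume = 0.0;
        double prevY = refY;                  // upper edge of the next rectangle
        for (const Point& s : front) {
            if (s.f1 >= refX || s.f2 >= prevY) continue;   // contributes nothing
            volume += (refX - s.f1) * (prevY - s.f2);
            prevY = s.f2;
        }
        return volume;
    }

This sweep runs in O(n log n) time, which is negligible compared with the cost of a single schedule evaluation.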

The unary epsilon metric shows how far the obtained Pareto front is from the reference Pareto front. Given a Pareto front P and the reference Pareto front R, the unary epsilon is defined as the minimum factor ε such that every solution in R is ε-dominated by at least one solution in P. For the unary epsilon we normalize the objective values into [1, 2]. Note that a larger value of the hypervolume indicates better performance, whereas a smaller value of the unary epsilon indicates better performance.
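A direct C++ transcription of this definition (the multiplicative unary epsilon indicator of Zitzler et al. [35]) is given below for the two-objective case, assuming both objectives have already been normalized into [1, 2]; the max-min-max loop structure mirrors the definition rather than any particular library implementation.

    #include <algorithm>
    #include <limits>
    #include <vector>

    struct Point { double f1; double f2; };   // normalized objectives, both in [1, 2]

    // Unary (multiplicative) epsilon indicator of front P with respect to the
    // reference front R: the smallest factor eps such that every point of R is
    // eps-dominated by some point of P.  Smaller values are better.
    double unaryEpsilon(const std::vector<Point>& P, const std::vector<Point>& R) {
        double eps = 0.0;
        for (const Point& r : R) {
            double best = std::numeric_limits<double>::max();
            for (const Point& p : P) {
                // factor needed for p to epsilon-dominate r in both objectives
                const double factor = std::max(p.f1 / r.f1, p.f2 / r.f2);
                best = std::min(best, factor);
            }
            eps = std::max(eps, best);        // the worst reference point decides the indicator
        }
        return eps;
    }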

4.4 Contribution of the variable search depth

In this section we analyze the contribution of the variable search depth. To show the efficiency of this strategy, in which d is dynamically adjusted from 5 to 1, we compare it with the MOILS with a static search depth d = 5 (denoted MOILSstatic). The comparison results are given in Table 1, in which the better results are shown in bold type. The "+" and "–" entries indicate whether the corresponding performance difference is significant at the 95% confidence level.

Table 1. Performance metrics for each instance group

Problem    Hypervolume                          Unary Epsilon
           MOILSstatic   MOILS                  MOILSstatic   MOILS
20×5       1.1779        1.1817        +        1.0845        1.0817        –
20×10      1.2487        1.2621        +        1.0543        1.0507        +
20×20      1.2643        1.2656        –        1.0551        1.0522        –
50×5       1.1698        1.1677        –        1.1412        1.1417        –
50×10      1.1877        1.1881        –        1.1138        1.1107        +
50×20      1.1841        1.1825        +        1.1054        1.1098        +
100×5      1.1413        1.1934        +        1.1634        1.1417        +
100×10     1.1210        1.1876        +        1.1565        1.1301        +
100×20     1.1204        1.1898        +        1.1424        1.1156        +
200×10     1.0125        1.1512        +        1.1721        1.1064        +
200×20     1.0113        1.1571        +        1.1934        1.1195        +
Average    1.1512        1.1927        +        1.1255        1.1058        +

Based on these results, we conclude that the variable search depth strategy performs significantly better than the static search depth strategy for most instances, especially for the large-size groups. The reason can be explained as follows: for the large-size groups, when d is fixed the search neighbourhood is always large and needs a great deal of CPU time in each iteration of the MOILS, so the number of iterations executed by the MOILS within the given maximum CPU time is small. In contrast, when d is dynamically adjusted from 5 to 1, the search neighbourhood shrinks accordingly, so the number of iterations actually performed by the MOILS is much larger, yielding better performance.

4.5 Sensitivity analysis of the fixed search depth

The previous section illustrated the efficiency of the variable search depth. In this section we further test the sensitivity of the performance of the MOILS to d. We selected the search depth d from {5, 4, 3} and fixed it during the search process (i.e., we used the static strategy) to gain a better understanding of the performance sensitivity.

Table 2. Results for different values of d

Problem    Hypervolume                                        Unary Epsilon
           MOILS     d=5       d=4       d=3                  MOILS     d=5       d=4       d=3
20×5       1.2001    1.1981    1.1954    1.1887    +          1.0809    1.0864    1.0951    1.0843    –
20×10      1.2635    1.2627    1.2589    1.2521    +          1.0517    1.0545    1.0553    1.0671    –
20×20      1.2621    1.2589    1.2576    1.2503    –          1.0534    1.0551    1.0573    1.0643    –
50×5       1.1662    1.1627    1.1590    1.1507    +          1.1347    1.1435    1.1450    1.1557    +
50×10      1.1783    1.1887    1.1871    1.1679    +          1.1255    1.1207    1.1241    1.1308    +
50×20      1.1978    1.1976    1.1997    1.1931    –          1.1096    1.1135    1.1075    1.1171    –
100×5      1.1757    1.1168    1.1509    1.1735    +          1.1539    1.1771    1.1598    1.1576    +
100×10     1.1985    1.1309    1.1410    1.1757    +          1.1280    1.1523    1.1507    1.1346    +
100×20     1.2061    1.1364    1.1590    1.1834    +          1.1097    1.1387    1.1275    1.1152    +
200×10     1.1703    1.0419    1.0556    1.0809    +          1.1024    1.1703    1.1575    1.1387    +
200×20     1.1556    1.0277    1.0389    1.0775    +          1.1175    1.1872    1.1803    1.1408    +
Average    1.1964    1.1566    1.1641    1.1767    +          1.1067    1.1271    1.1252    1.1225    +

The comparison results are given in Table 2, in which the better results are shown in bold type. Based on these results, we see that our MOILS with a variable search depth still obtains the best results for most of the instance groups in terms of both the hypervolume and the unary epsilon metrics. For large-size instances, the performance of the MOILS is significantly better than that of the others. In addition, the performance changes with decreasing search depth are as follows (Figures 3 and 4). (1) For small-size instances with n ≤ 50, the performance tends to decrease as the search depth d decreases. There are two reasons for this: first, a larger value of d provides a deeper search (a larger neighbourhood); second, the difference in search efficiency (the number of iterations actually performed by the MOILS) between large and small values of d is not significant because the number of insertion positions for these instances is small. (2) For large-size instances with n ≥ 100, the performance improves as the search depth d decreases. The reason is that a large value of d results in a large neighbourhood that needs a lot of CPU time to search, so the number of iterations actually performed by the MOILS becomes much smaller than with a small value of d.

Figure 3. Performance of the hypervolume for different values of d


Figure 4. Performance of the unary epsilon for different values of d

4.6 Sensitivity analysis of the variable search depth

In the previous section the sensitivity of the fixed search depth was analyzed and the results show that the variable search depth is superior. In this section, we further test the algorithm with different variable search depths, i.e., the maximum value of d is selected from {3, 4, 5, 7} and the search depth is dynamically adjusted according to the method described in Section 3.3.5. The computational results are given in Table 3, from which it can be seen that as d increases the performance of the MOILS first improves but then deteriorates quickly. The value d = 5 achieves the best average performance for both the hypervolume and the unary epsilon metrics. In addition, the MOILS with d = 5 also obtains the best result for most of the instance groups. When d reaches 7, the performance of the MOILS becomes worse because the computational time required by the local search is much greater than that required with d = 5; that is, the MOILS with d = 7 performs far fewer search iterations than the MOILS with d = 5.

Table 3. Results for different values of d under the variable search depth strategy

Problem    Hypervolume                              Unary Epsilon
           d=3       d=4       d=5       d=7        d=3       d=4       d=5       d=7
20×5       1.1754    1.1854    1.1875    1.1842     1.0954    1.0892    1.0842    1.0923
20×10      1.2506    1.2543    1.2619    1.2602     1.0672    1.0651    1.0570    1.0584
20×20      1.2583    1.2608    1.2677    1.2604     1.0619    1.0537    1.0531    1.0563
50×5       1.1497    1.1631    1.1789    1.1692     1.1608    1.1458    1.1341    1.1451
50×10      1.1655    1.1708    1.1853    1.1681     1.1320    1.1281    1.1196    1.1343
50×20      1.2016    1.2035    1.2046    1.1983     1.1021    1.1075    1.1060    1.1124
100×5      1.1593    1.1904    1.1882    1.1109     1.1609    1.1481    1.1521    1.1840
100×10     1.1662    1.1648    1.1792    1.1154     1.1396    1.1342    1.1306    1.1665
100×20     1.1709    1.1813    1.1833    1.1271     1.1215    1.1211    1.1226    1.1538
200×10     1.1092    1.1369    1.1641    1.0705     1.1427    1.1354    1.1169    1.2146
200×20     1.1047    1.1352    1.1557    1.1036     1.1390    1.1265    1.1257    1.2181
Average    1.1738    1.1860    1.1960    1.1607     1.1203    1.1141    1.1093    1.1396

4.7 Comparison with other metaheuristics

As mentioned in the Introduction, different kinds of metaheuristics have been proposed for the MOPFSP, so in this section we compare our MOILS algorithm with them. We selected four powerful metaheuristics from the literature that can be extended to solve this problem, namely the MOSA_Varad of Varadharajan and Rajendran [18], the MOSA_Varad_M of Minella et al. [24], which is a modified version of MOSA_Varad, the MOIGS of Framinan and Leisten [19], and the MOGALS of Arroyo and Armentano [13]. The computational results of these algorithms can be obtained from http://soa.iti.es/problem-instances. The comparison results are given in Tables 4 and 5, in which the better results are shown in bold type. In the last column, the notations "+" and "–" denote whether the performance difference between our MOILS and the best of the other four algorithms is significant at the 95% confidence level.

Table 4. Comparison results for different metaheuristics in terms of hypervolume

Problem    MOILS     MOSA_Varad_M   MOSA_Varad   MOIGS     MOGALS
20×5       1.2472    1.1721         1.1170       1.1591    1.0637    +
20×10      1.2988    1.2920         1.2440       1.2754    1.2072    –
20×20      1.2929    1.2813         1.2292       1.2808    1.2113    +
50×5       1.2009    1.1857         1.1023       1.1401    1.0597    +
50×10      1.1878    1.1697         1.0904       1.1351    1.0955    +
50×20      1.1910    1.1883         1.1304       1.1578    1.1355    +
100×5      1.1505    1.1523         1.0887       0.8968    1.0461    –
100×10     1.1629    1.1613         1.1059       0.9296    1.0496    +
100×20     1.2028    1.1757         1.1229       0.9365    1.0362    +
200×10     1.1053    1.0411         1.0411       1.0411    1.0411    +
200×20     1.1360    1.1338         1.1369       0.7577    1.0645    –
Average    1.1978    1.1776         1.1281       1.0645    1.0919

Table 5. Comparison results for different metaheuristics in terms of unary epsilon

Problem    MOILS     MOSA_Varad_M   MOSA_Varad   MOIGS     MOGALS
20×5       1.0734    1.1511         1.1921       1.1686    1.2339    +
20×10      1.0468    1.0520         1.0889       1.0708    1.1242    +
20×20      1.0511    1.0526         1.0921       1.0595    1.1225    +
50×5       1.1385    1.1667         1.2114       1.1947    1.2281    +
50×10      1.1570    1.1593         1.2012       1.1807    1.1798    +
50×20      1.1564    1.1434         1.1775       1.1635    1.1634    +
100×5      1.2342    1.2041         1.2422       1.3672    1.2331    +
100×10     1.1865    1.1718         1.1993       1.3144    1.1987    +
100×20     1.1600    1.1694         1.1947       1.3160    1.1939    +
200×10     1.1322    1.3901         1.3901       1.3901    1.3901    +
200×20     1.1165    1.1163         1.1122       1.3946    1.1699    +
Average    1.1322    1.1615         1.1911       1.2382    1.2034

From the comparison results, we see that the MOILS algorithm is the best overall: it obtains the best results for 9 out of the 11 instance groups for the hypervolume metric and for 8 out of the 11 instance groups for the unary epsilon metric. In addition, the performance differences between our MOILS algorithm and the best of the other four algorithms are significant for most of the instances. To give a graphical comparison among these algorithms, the Pareto fronts obtained by each algorithm for the 20×20, 50×20, 100×20 and 200×20 problems are illustrated in Figures 5-8, respectively. Since each algorithm was run for ten independent replications on each problem instance, we use the union Pareto front, i.e., the non-dominated solutions selected from all the solutions obtained by each algorithm. From these figures, it can be seen that the distribution and quality of the union Pareto fronts obtained by our MOILS are better than those obtained by the other algorithms. More specifically, the difference among them is not significant for the small-size problem 20×20. However, as the problem size increases (50×20, 100×20 and 200×20), the difference between the Pareto front obtained by our MOILS and the other Pareto fronts becomes significant.


Figure 5. Union Pareto fronts obtained by each algorithm for 20×20

Figure 6. Union Pareto fronts obtained by each algorithm for 50×20


Figure 7. Union Pareto fronts obtained by each algorithm for 100×20

Figure 8. Union Pareto fronts obtained by each algorithm for 200×20

5. Conclusions

In this paper a multi-objective iterated local search algorithm was developed to solve the multi-objective permutation flowshop scheduling problem with sequence-dependent setup times, which is rarely studied in the literature. The main feature of the proposed MOILS algorithm is that a multi-objective local search with a variable search depth is incorporated to improve the search efficiency of the MOILS. This kind of local search makes the MOILS more stable in solving problem instances of different sizes. The results of computational experiments on benchmark problems show that the proposed MOILS algorithm is among the most powerful metaheuristics for the MOPFSP-SDST. Future research may consider applying the proposed MOILS to practical production scheduling problems.

Acknowledgements

We are grateful to the Editor, an Associate Editor, and two anonymous referees for their many helpful comments on earlier versions of our paper. This paper was supported in part by the National Natural Science Foundation of China under grant numbers 11561036 and 71301022, and in part by the Ministry of Science and Technology (MOST) of Taiwan under grant numbers MOST 105-2221-E-035-053-MY3 and MOST 103-2410-H-035-022-MY2.

References

[1] L.Y. Tseng, Y.T. Lin. A genetic local search algorithm for minimizing total flowtime in the permutation flowshop scheduling problem. International Journal of Production Economics, 127 (2010), pp. 121–128.
[2] Y.H. Chung, L.I. Tong. Makespan minimization for m-machine permutation flowshop scheduling problem with learning considerations. International Journal of Advanced Manufacturing Technology, 56 (2011), pp. 355–367.
[3] Y.M. Chen, M.C. Chen, P.C. Chang, S.H. Chen. Extended artificial chromosomes genetic algorithm for permutation flowshop scheduling problems. Computers & Industrial Engineering, 62 (2012), pp. 536–545.
[4] Y.H. Chung, L.I. Tong. Bi-criteria minimization for the permutation flowshop scheduling problem with machine-based learning effects. Computers & Industrial Engineering, 63 (2012), pp. 302–312.
[5] X.P. Wang, L.X. Tang. A discrete particle swarm optimization algorithm with self-adaptive diversity control for the permutation flowshop problem with blocking. Applied Soft Computing, 12(2) (2012), pp. 652–662.
[6] P.C. Chang, W.H. Huang, J.L. Wu, T.C.E. Cheng. A block mining and re-combination enhanced genetic algorithm for the permutation flowshop scheduling problem. International Journal of Production Economics, 141 (2013), pp. 45–55.
[7] M. Mirabi. Ant colony optimization technique for the sequence-dependent flowshop scheduling problem. International Journal of Advanced Manufacturing Technology, 55 (2011), pp. 317–326.
[8] R.Z. Ríos-Mercado, J.F. Bard. Computational experience with a branch-and-cut algorithm for flowshop scheduling with setups. Computers & Operations Research, 25(5) (1998), pp. 351–366.
[9] R.Z. Ríos-Mercado, J.F. Bard. The flow shop scheduling polyhedron with setup times. Journal of Combinatorial Optimization, 7(3) (2003), pp. 291–318.
[10] K.C. Ying, Z.J. Lee, C.C. Lu, S.W. Lin. Metaheuristics for scheduling a no-wait flowshop manufacturing cell with sequence-dependent family setups. International Journal of Advanced Manufacturing Technology, 58 (2012), pp. 671–682.
[11] Y. Sun, C.Y. Zhang, L. Gao, X.J. Wang. Multi-objective optimization algorithms for flow shop scheduling problem: a review and prospects. International Journal of Advanced Manufacturing Technology, 55 (2011), pp. 723–739.
[12] H. Ishibuchi, T. Yoshida, T. Murata. Balance between genetic search and local search in memetic algorithms for multiobjective permutation flowshop scheduling. IEEE Transactions on Evolutionary Computation, 7(2) (2003), pp. 204–223.
[13] J.E.C. Arroyo, V.A. Armentano. Genetic local search for multi-objective flowshop scheduling problems. European Journal of Operational Research, 167(3) (2005), pp. 717–738.
[14] Yanda, H. Tamura. A new multiobjective genetic algorithm with heterogeneous population for solving flowshop scheduling problems. International Journal of Computer Integrated Manufacturing, 20(5) (2007), pp. 465–477.
[15] K. Deb, S. Agrawal, A. Pratap, T. Meyarivan. A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2) (2002), pp. 182–197.
[16] J.B. Kollat, P.M. Reed. The value of online adaptive search: A performance comparison of NSGAII, ε-NSGAII and εMOEA. In: Proceedings of the Third International Conference on Evolutionary Multi-Criterion Optimization, EMO 2005, Guanajuato, Mexico, 2005, pp. 386–398.
[17] P. Chen, C.C. Wu, W.C. Lee. A bi-criteria two-machine flow shop scheduling problem with a learning effect. Journal of the Operational Research Society, 57 (2006), pp. 1113–1125.
[18] T.K. Varadharajan, C. Rajendran. A multiobjective simulated-annealing algorithm for scheduling in flowshops to minimize the makespan and total flowtime of jobs. European Journal of Operational Research, 167(3) (2005), pp. 772–795.
[19] J.M. Framinan, R. Leisten. A multi-objective iterated greedy search for flowshop scheduling with makespan and flowtime criteria. OR Spectrum, 30(4) (2008), pp. 787–804.
[20] D.Y. Sha, H.H. Lin. A particle swarm optimization for multi-objective flowshop scheduling. International Journal of Advanced Manufacturing Technology, 45 (2009), pp. 749–758.
[21] J.E.C. Arroyo, V.A. Armentano. A partial enumeration heuristic for multi-objective flowshop scheduling problems. Journal of the Operational Research Society, 55 (2004), pp. 1000–1007.
[22] M. Nawaz, E.E. Enscore, I. Ham. A heuristic algorithm for the m-machine, n-job sequencing problem. Omega, 11 (1983), pp. 91–95.
[23] Z. Tajbakhsh, P. Fattahi, J. Behnamian. Multi-objective assembly permutation flow shop scheduling problem: a mathematical model and a meta-heuristic algorithm. Journal of the Operational Research Society, 65(4) (2014), pp. 1580–1592.
[24] G. Minella, R. Ruiz, M. Ciavotta. A review and evaluation of multiobjective algorithms for the flowshop scheduling problem. INFORMS Journal on Computing, 20(3) (2008), pp. 451–471.
[25] T.C. Chiang, H.C. Cheng, L.C. Fu. NNMA: An effective memetic algorithm for solving multiobjective permutation flow shop scheduling problems. Expert Systems with Applications, 38(8) (2011), pp. 5986–5999.
[26] C. Rajendran, H. Ziegler. A multi-objective ant-colony algorithm for permutation flowshop scheduling to minimize the makespan and total flowtime of jobs. In: Computational Intelligence in Flow Shop and Job Shop Scheduling (Ed. U.K. Chakraborty), Springer-Verlag, Berlin.
[27] A. Tiwari, P.C. Chang, M.K. Tiwari, N.J. Kollanoor. A Pareto block-based estimation and distribution algorithm for multi-objective permutation flow shop scheduling problem. International Journal of Production Research, 53(3) (2015), pp. 793–834.
[28] X. Li, S. Ma. Multi-objective memetic search algorithm for multi-objective permutation flow shop scheduling problem. IEEE Access, 4 (2016), pp. 2154–2165.
[29] A.P. Rifai, H.T. Nguyen, S.Z.M. Dawal. Multi-objective adaptive large neighborhood search for distributed reentrant permutation flow shop scheduling. Applied Soft Computing, 40 (2016), pp. 42–57.
[30] Y. Han, D. Gong, Y. Jin, Q.K. Pan. Evolutionary multi-objective blocking lot-streaming flow shop scheduling with interval processing time. Applied Soft Computing, 42 (2016), pp. 229–245.
[31] M.M. Yenisey, B. Yagmahan. Multi-objective permutation flow shop scheduling problem: Literature review, classification and current trends. Omega, 45 (2014), pp. 119–135.
[32] R.K. Congram, C.N. Potts, S.L. Van de Velde. An iterated dynasearch algorithm for the single-machine total weighted tardiness scheduling problem. INFORMS Journal on Computing, 14(1) (2002), pp. 52–67.
[33] X.Y. Dong, H.K. Huang, P. Chen. An iterated local search algorithm for the permutation flowshop problem with total flowtime criterion. Computers & Operations Research, 36 (2009), pp. 1664–1669.
[34] E. Taillard. Benchmarks for basic scheduling problems. European Journal of Operational Research, 64 (1993), pp. 278–285.
[35] E. Zitzler, L. Thiele, M. Laumanns, C.M. Fonseca, V.G. Fonseca. Performance assessment of multiobjective optimizers: an analysis and review. IEEE Transactions on Evolutionary Computation, 7(2) (2003), pp. 117–132.