A backtracking search hyper-heuristic for the distributed assembly flow-shop scheduling problem


Jian Lin a,⁎, Zhou-Jing Wang a, Xiaodong Li b

a School of Information, Zhejiang University of Finance & Economics, Hangzhou 310018, China
b School of Science (Computer Science and IT), RMIT University, Melbourne, VIC 3001, Australia
⁎ Corresponding author. E-mail address: [email protected] (J. Lin).
http://dx.doi.org/10.1016/j.swevo.2017.04.007. Received 8 January 2017; received in revised form 15 April 2017; accepted 26 April 2017.

Keywords: Hyper-heuristic; Backtracking search algorithm; Distributed assembly; Flow-shop scheduling

Abstract: The distributed assembly permutation flow-shop scheduling problem (DAPFSP) is recognized as an important class of problems in modern supply chains and manufacturing systems. In this paper, a backtracking search hyper-heuristic (BS-HH) algorithm is proposed to solve the DAPFSP. In the BS-HH scheme, ten simple and effective heuristic rules are designed to construct a set of low-level heuristics (LLHs), and the backtracking search algorithm is employed as the high-level strategy that manipulates the LLHs to operate on the solution space. Additionally, an efficient solution encoding and decoding scheme is proposed to generate feasible schedules. The effectiveness of the BS-HH is evaluated on two typical benchmark sets, and the computational results indicate the superiority of the proposed BS-HH scheme over state-of-the-art algorithms.

1. Introduction

Production scheduling has been a very active research area because of its practical significance in decision-making for manufacturing systems [1–4]. As one of the most studied scheduling problems, the permutation flow-shop scheduling problem (PFSP) is an extensively investigated combinatorial optimization problem in manufacturing systems and industrial processes. The PFSP with the makespan criterion has been proven to be NP-hard when the number of machines is no less than three [5]. Following the pioneering work of Johnson [6], many approaches have been proposed to solve the PFSP [7–18]. A common assumption among these studies is that there is only a single production center or factory, and all jobs in the permutation are assigned to the same factory. However, production systems with more than one production center (that is, distributed manufacturing systems) are more common in practice [19–23], since they can achieve higher product quality while reducing production and distribution costs as well as management risks [24]. Scheduling in distributed systems is more challenging than regular shop scheduling; in particular, both the allocation of jobs to factories and the scheduling of jobs within each factory must be considered when making decisions.

Recently, an extension of the regular PFSP called the distributed assembly permutation flow-shop scheduling problem (DAPFSP) was introduced by Hatami et al. [25], in which a set of products and a set of factories are combined with the regular PFSP. Each job in the DAPFSP belongs to one product and is processed in one factory. All products are assembled in a single assembly factory with one assembly machine. Hatami et al. [25] also considered the minimization of the makespan at the assembly factory and presented 14 heuristics based on constructive heuristics and variable neighborhood descent (VND). In [26], an estimation of distribution algorithm based memetic algorithm (EDAMA) was developed for solving the DAPFSP with the objective of minimizing the maximum completion time. In our previous work [27], an effective hybrid biogeography-based optimization (HBBO) algorithm that integrates several novel heuristics was proposed to solve the DAPFSP.

A recent trend in search and optimization suggests that hyper-heuristics have emerged as an effective search methodology that controls other heuristics to provide near-optimal solutions for various problems [28,29]. Instead of searching directly in the solution space, hyper-heuristics operate on a set of low-level heuristics (LLHs) and attempt to find an optimal sequence of heuristics [30]. During the past few years, there has been a growing literature in the field of hyper-heuristics [28]. In particular, meta-heuristics have been used to construct hyper-heuristic schemes, e.g., a particle swarm optimization based hyper-heuristic approach by Koulinas et al. [31], evolutionary hyper-heuristics by Salcedo-Sanz et al. [32] and Gascón-Moreno et al. [33], a harmony search based hyper-heuristic by Anwar et al. [34], and a bacterial foraging based hyper-heuristic by Rajni and Chana [35]. However, to the best of our knowledge, there is no hyper-heuristic approach for solving the DAPFSP. The motivation behind this paper is to propose a hyper-heuristic based scheduling algorithm applicable to the DAPFSP.

The backtracking search optimization algorithm (BSA) [36] is a recently developed and powerful evolutionary algorithm that has proved very promising in comparison with other evolutionary algorithms (EAs) [36–40]. In particular, the BSA is a dual-population algorithm that uses the current as well as a historical population, and it has a simple structure. This paper aims at employing an effective backtracking search hyper-heuristic (BS-HH) algorithm to solve the DAPFSP with the objective of minimizing the makespan. In the BS-HH, the BSA is used as the high-level hyper-heuristic strategy, which manages solution methods rather than solutions and employs a set of designed LLHs. Experiments and comparisons are conducted on the two benchmark sets provided by Hatami et al. [25] to verify the effectiveness of the proposed scheme.

The rest of the paper is organized as follows. In Section 2, the DAPFSP is briefly introduced. In Section 3, the BS-HH scheme is proposed for the DAPFSP. The computational results on benchmark instances, together with comparisons to some state-of-the-art algorithms, are presented in Section 4. Finally, conclusions are drawn in Section 5.

2. Distributed assembly permutation flow-shop scheduling problem

As illustrated in Fig. 1, the DAPFSP [25,27] is a combination of the distributed PFSP and the assembly flow-shop scheduling problem. It consists of two stages, production and assembly, and can be decomposed into three sub-problems: job scheduling, product scheduling and factory assignment. The notations used in the optimization model of the DAPFSP are presented in Table 1.

Table 1
The notations used in the optimization model for the DAPFSP.
Indices
  i : index for jobs, i = 1, ..., n
  j : index for machines, j = 1, ..., m
  h : index for products, h = 1, ..., H
  f : index for factories, f = 1, 2, ..., F
  k : index for jobs in product P_h assigned to factory f, k = 1, ..., n_h^f
Parameters
  n : the number of jobs
  m : the number of machines
  F : the number of factories
  H : the number of products
  p_{ij} : the processing time of operation O_{ij} on machine M_j
  N_h : the number of jobs belonging to product P_h
  Q_h : the processing time to assemble product P_h
  Λ : a given feasible schedule
Variables
  π_h^f : the sequence of jobs in factory f that belong to product P_h, π_h^f = [π_h^f(1), π_h^f(2), ..., π_h^f(n_h^f)]
  n_h^f : the total number of jobs in product P_h assigned to factory f
  C_{i,j} : the completion time of operation O_{ij} on machine M_j
  C_{MA,h} : the completion time of product P_h on the assembly machine M_A
  C_max : the makespan value

[Fig. 1. Illustration of the DAPFSP: in the production stage, factories 1, ..., F are identical permutation flow shops with machines M_1, M_2, ..., M_m; in the assembly stage, a single assembly factory with machine M_A assembles the products P_1, P_2, ..., P_H; the three sub-problems are job scheduling, factory assignment and product scheduling.]

In the production stage, there are n jobs {J_1, J_2, ..., J_n} to be processed in F identical factories. All factories are capable of processing all jobs, and each factory can be considered as a PFSP with m machines {M_1, M_2, ..., M_m}. Each job J_i requires a sequence of operations {O_{i1}, O_{i2}, ..., O_{im}} to be processed one after another on the m machines. In the assembly stage, there is an assembly factory with a single assembly machine M_A which assembles the jobs into H different products {P_1, P_2, ..., P_H}. Each product P_h consists of N_h jobs, and these jobs must first be processed in the production stage before product P_h can be assembled; hence ∑_{h=1}^{H} N_h = n. In this paper, the maximum completion time (makespan) at the assembly factory is the objective to be minimized.

Let π_h^f = [π_h^f(1), π_h^f(2), ..., π_h^f(n_h^f)] be the sequence of jobs in factory f (f = 1, ..., F) that belong to product P_h, where n_h^f (n_h^f ≤ N_h) is the total number of jobs in product P_h assigned to factory f. C_{MA,h} and C_{i,j} denote the completion time of product P_h on the assembly machine M_A and the completion time of operation O_{ij} on machine M_j, respectively. For a schedule Λ of the DAPFSP, i.e., a set of sequences {π_1^f, π_2^f, ..., π_H^f}, the makespan C_max(Λ) is given by:

C_{π_h^f(1),1} = p_{π_h^f(1),1},   f = 1, 2, ..., F; h = 1, 2, ..., H,   (1)

C_{π_h^f(k),1} = C_{π_h^f(k-1),1} + p_{π_h^f(k),1},   f = 1, 2, ..., F; k = 2, ..., n_h^f; h = 1, 2, ..., H,   (2)

C_{π_h^f(1),j} = C_{π_h^f(1),j-1} + p_{π_h^f(1),j},   f = 1, 2, ..., F; j = 2, ..., m; h = 1, 2, ..., H,   (3)

C_{π_h^f(k),j} = max{ C_{π_h^f(k-1),j}, C_{π_h^f(k),j-1} } + p_{π_h^f(k),j},   f = 1, 2, ..., F; k = 2, ..., n_h^f; j = 2, ..., m; h = 1, 2, ..., H,   (4)

C_{MA,h} = max{ C_{π_h^f(n_h^f),m}, C_{MA,h-1} } + Q_h,   C_{MA,0} = 0; f = 1, 2, ..., F; h = 1, 2, ..., H.   (5)

The makespan is defined by:

C_max(Λ) = C_{MA,H}.   (6)

The goal of the DAPFSP is to find an optimal permutation Λ* with the minimum makespan.
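To make the recursions of Eqs. (1)–(6) concrete, the following is a minimal C++ sketch, not the authors' code, of one way to evaluate the makespan of a fixed schedule. It assumes that the product sub-sequences assigned to a factory are processed back to back on that factory's machines, that products are assembled on M_A in index order, and that the containers p, seq and Q are laid out as described in the comments; all identifiers and the toy data in main() are illustrative.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// p[i][j]: processing time of job i on machine j; seq[f][h]: jobs of product h in factory f;
// Q[h]: assembly time of product h.  Returns C_max = C_{MA,H} (Eqs. (1)-(6)).
double dapfspMakespan(const Matrix& p,
                      const std::vector<std::vector<std::vector<int>>>& seq,
                      const std::vector<double>& Q)
{
    const std::size_t F = seq.size();
    const std::size_t H = Q.size();
    const std::size_t m = p.empty() ? 0 : p[0].size();

    // C_prod[f][h]: completion time of the last job of product h in factory f on machine m,
    // i.e. C_{pi_h^f(n_h^f), m} used in Eq. (5).
    std::vector<std::vector<double>> C_prod(F, std::vector<double>(H, 0.0));

    for (std::size_t f = 0; f < F; ++f) {
        std::vector<double> machineFree(m, 0.0);   // completion time of the previous job per machine
        for (std::size_t h = 0; h < H; ++h) {
            for (int job : seq[f][h]) {
                double prev = 0.0;                 // completion on the previous machine
                for (std::size_t j = 0; j < m; ++j) {
                    prev = machineFree[j] = std::max(machineFree[j], prev) + p[job][j];
                }
                C_prod[f][h] = prev;               // last-machine completion of this product so far
            }
        }
    }

    // Assembly stage (Eqs. (5)-(6)): products assembled in index order on M_A.
    double C_MA = 0.0;
    for (std::size_t h = 0; h < H; ++h) {
        double ready = 0.0;                        // all jobs of product h must be finished
        for (std::size_t f = 0; f < F; ++f) ready = std::max(ready, C_prod[f][h]);
        C_MA = std::max(C_MA, ready) + Q[h];
    }
    return C_MA;
}

int main() {
    // Toy data, made up for illustration: 3 jobs, 2 machines, 1 factory, 1 product.
    Matrix p = {{2, 3}, {1, 2}, {4, 1}};
    std::vector<std::vector<std::vector<int>>> seq = {{{0, 1, 2}}};
    std::vector<double> Q = {5.0};
    return dapfspMakespan(p, seq, Q) > 0.0 ? 0 : 1;
}
```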


3. Backtracking search hyper-heuristic algorithm

3.1. Backtracking search algorithm

The BSA [36,37,41] is a population-based EA developed on the basis of differential evolution (DE), but it is not the same as DE. The flowchart of the BSA is illustrated in Fig. 2; it mainly consists of five phases: initialization, selection-I, mutation, crossover and selection-II.

[Fig. 2. The flowchart of the BSA: start; set general data of the BSA and problem parameters; initialization; selection-I; mutation; crossover; boundary control; selection-II; if the stopping conditions are not met, loop back to selection-I; otherwise end.]

[Fig. 3. Illustration of the solution encoding scheme: the job sequence 2 5 4 | 9 7 | 3 8 1 6, in which the first three jobs form Product 1, the next two jobs form Product 2 and the last four jobs form Product 3.]

[Fig. 4. The pseudo code of the improved LLH (an LLH wrapped in a simulated annealing acceptance loop; see Section 3.3).]

(1) Initialization: The initial population P and the historical population oldP are generated using the uniform distribution operator rand() as in Eqs. (7) and (8), in which S and D are the population size and the problem dimension, respectively:

P_{i,j} = rand(low_j, up_j),   i = 1, 2, ..., S; j = 1, 2, ..., D,   (7)

oldP_{i,j} = rand(low_j, up_j),   i = 1, 2, ..., S; j = 1, 2, ..., D.   (8)

(2) Selection-I: In each generation, the population oldP is updated by the "if-then" rule in Eq. (9) and the random shuffling operation in Eq. (10):

if a < b then oldP := P,   a, b = rand(0, 1),   (9)

oldP := permuting(oldP).   (10)

(3) Mutation: A trial population muP is generated as in Eq. (11), where F = λ·U(0, 1) and λ is a user-defined parameter that controls the amplitude of the search-direction matrix. By simultaneously considering the current population P and the historical population oldP, the trial population is improved by taking advantage of the experiences from previous generations:

muP = P + F·(oldP − P).   (11)

(4) Crossover: First, a binary integer-valued matrix map of size S × D is generated by Eq. (12), in which ⌈·⌉ is the ceiling function, r1 ∈ [0, 1] and r2 ∈ [0, 1] are two randomly generated real values, and r_max ∈ [0, 1] is the mix-rate parameter. Then muP is updated to generate a new trial population trialP as follows: if map_{i,j} = 1, trialP_{i,j} is set to P_{i,j}; otherwise, trialP_{i,j} is set to muP_{i,j}. Individuals of trialP lying beyond the boundary are regenerated using Eq. (8).

map_{i, permuting(1 : ⌈r_max · D · rand(0,1)⌉)} = 0  if r1 > r2;   otherwise  map_{i, randi(D)} = 0.   (12)

(5) Selection-II: The original individuals in P are replaced by the corresponding individuals of the trial population trialP that have better fitness values. The best individual of P is recorded if it has a better fitness value than the historical best solution found so far.
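The following is a minimal, hedged C++ sketch of the real-coded BSA steps described above (Eqs. (7)–(11)): initialization of P and oldP, the selection-I update of the historical population, and the mutation step. The class name Bsa, the fixed random seed and the chosen value of λ are illustrative assumptions, not taken from the paper.

```cpp
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

struct Bsa {
    std::mt19937 rng{12345};                       // fixed seed, for reproducibility of the sketch
    double lambda = 3.0;                           // user-defined amplitude parameter (assumed value)
    std::uniform_real_distribution<double> u01{0.0, 1.0};

    using Pop = std::vector<std::vector<double>>;

    // Eqs. (7)-(8): uniform initialization of a population of S individuals with D dimensions.
    Pop init(std::size_t S, std::size_t D, double low, double up) {
        std::uniform_real_distribution<double> u(low, up);
        Pop P(S, std::vector<double>(D));
        for (auto& x : P) for (auto& v : x) v = u(rng);
        return P;
    }

    // Eqs. (9)-(10): "if a < b then oldP := P", then shuffle the individuals of oldP.
    void selectionI(const Pop& P, Pop& oldP) {
        if (u01(rng) < u01(rng)) oldP = P;
        std::shuffle(oldP.begin(), oldP.end(), rng);
    }

    // Eq. (11): muP = P + F (oldP - P), with F = lambda * U(0,1).
    Pop mutation(const Pop& P, const Pop& oldP) {
        const double Fscale = lambda * u01(rng);
        Pop muP = P;
        for (std::size_t i = 0; i < P.size(); ++i)
            for (std::size_t j = 0; j < P[i].size(); ++j)
                muP[i][j] = P[i][j] + Fscale * (oldP[i][j] - P[i][j]);
        return muP;
    }
};

int main() {
    Bsa bsa;
    auto P    = bsa.init(10, 5, 1.0, 10.0);        // S = 10 individuals, D = 5 heuristic slots
    auto oldP = bsa.init(10, 5, 1.0, 10.0);
    bsa.selectionI(P, oldP);
    auto muP = bsa.mutation(P, oldP);
    return muP.empty() ? 1 : 0;
}
```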

[Fig. 5. The framework of the BS-HH scheme: the high-level strategy (the BSA) constructs new heuristics from the set of low-level heuristics LLH1, LLH2, LLH3, LLH4, ..., and the selected (optimal) heuristic is applied to the problem domain (problem instances, solution representation, objective function, ...).]

[Fig. 6. The main procedure of the BS-HH: generate an initial population P and its corresponding initial historical population oldP; update the historical population oldP by Eqs. (9) and (10); calculate the binary integer-valued matrix map by Eq. (12); generate the initial trial population muP by Eq. (11); generate the final trial population trialP by handling the infeasible solutions based on Eq. (13); for each individual in trialP, apply each low-level heuristic successively to the solution space; compute the fitness value of each individual in trialP based on Eqs. (1)–(6); update the corresponding individual in P if a better individual is found; repeat until the termination condition is met; output the best solution found so far.]

Table 2
Combinations of parameter values.
Parameter | Factor level 1 | Factor level 2 | Factor level 3 | Factor level 4
Itrmax    | 50   | 100  | 150  | 200
λ         | 2    | 3    | 4    | 5
rmax      | 0.25 | 0.50 | 0.75 | 1.00
ξ         | 0.60 | 0.70 | 0.80 | 0.90

Table 3
Orthogonal array and AM values.
Experiment number | Itrmax | λ | rmax | ξ | AM
1  | 1 | 1 | 1 | 1 | 962.10
2  | 1 | 2 | 2 | 2 | 962.40
3  | 1 | 3 | 3 | 3 | 961.53
4  | 1 | 4 | 4 | 4 | 961.17
5  | 2 | 1 | 2 | 3 | 961.23
6  | 2 | 2 | 1 | 4 | 960.83
7  | 2 | 3 | 4 | 1 | 961.70
8  | 2 | 4 | 3 | 2 | 962.00
9  | 3 | 1 | 3 | 4 | 960.73
10 | 3 | 2 | 4 | 3 | 960.77
11 | 3 | 3 | 1 | 2 | 961.23
12 | 3 | 4 | 2 | 1 | 961.17
13 | 4 | 1 | 4 | 2 | 960.90
14 | 4 | 2 | 3 | 1 | 960.97
15 | 4 | 3 | 2 | 4 | 960.67
16 | 4 | 4 | 1 | 3 | 960.53

Table 4
AM value and significant rank of each parameter.
Factor level | Itrmax | λ      | rmax   | ξ
1            | 961.80 | 961.24 | 961.17 | 961.49
2            | 961.44 | 961.24 | 961.37 | 961.63
3            | 960.98 | 961.28 | 961.31 | 961.02
4            | 960.77 | 961.22 | 961.14 | 960.85
F value      | 2.3481 | 0.0080 | 0.1316 | 1.5123
Rank         | 1      | 4      | 3      | 2
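As a small illustration of how Table 4 relates to Table 3, the sketch below (not part of the paper) recomputes the level averages: for each parameter, it averages the AM values of the four experiments of the orthogonal array run at each factor level. The array literals simply transcribe Table 3.

```cpp
#include <array>
#include <cstdio>

int main() {
    // Factor levels of Itrmax, lambda, rmax, xi for the 16 experiments (Table 3).
    const std::array<std::array<int, 4>, 16> level = {{
        {1,1,1,1}, {1,2,2,2}, {1,3,3,3}, {1,4,4,4},
        {2,1,2,3}, {2,2,1,4}, {2,3,4,1}, {2,4,3,2},
        {3,1,3,4}, {3,2,4,3}, {3,3,1,2}, {3,4,2,1},
        {4,1,4,2}, {4,2,3,1}, {4,3,2,4}, {4,4,1,3}}};
    // Averaged makespan (AM) of each experiment (Table 3).
    const std::array<double, 16> am = {
        962.10, 962.40, 961.53, 961.17, 961.23, 960.83, 961.70, 962.00,
        960.73, 960.77, 961.23, 961.17, 960.90, 960.97, 960.67, 960.53};

    const char* name[4] = {"Itrmax", "lambda", "rmax", "xi"};
    for (int p = 0; p < 4; ++p) {
        std::printf("%s:", name[p]);
        for (int lv = 1; lv <= 4; ++lv) {
            double sum = 0.0;
            int cnt = 0;
            for (int e = 0; e < 16; ++e)
                if (level[e][p] == lv) { sum += am[e]; ++cnt; }
            std::printf("  level %d -> %.2f", lv, sum / cnt);   // reproduces the rows of Table 4
        }
        std::printf("\n");
    }
    return 0;
}
```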

[Fig. 7. Factor level trend of the BS-HH for the parameters Itrmax (50, 100, 150, 200), λ (2, 3, 4, 5), rmax (0.25, 0.5, 0.75, 1.0) and ξ (0.6, 0.7, 0.8, 0.9).]

Table 5
Statistical results (ARPD) of the BS-HH with and without SA on small-sized instances.
F×n     | BS-HH (without SA) | BS-HH (with SA)
2×8     | 0.38  | 0.38
2×12    | 0.42  | 0.41
2×16    | 0.20  | 0.16
2×20    | 0.06  | −0.17
2×24    | −0.17 | −0.34
3×8     | 0.45  | 0.40
3×12    | 0.12  | 0.12
3×16    | 0.10  | 0.08
3×20    | 0.11  | 0.09
3×24    | −0.02 | −0.12
4×8     | 0.25  | 0.25
4×12    | 0.38  | 0.38
4×16    | 0.16  | 0.13
4×20    | 0.17  | 0.08
4×24    | 0.03  | −0.03
Average | 0.18  | 0.12
Note: The bold values correspond to better results.

3.2. Solution encoding and decoding schemes

Each individual in the BS-HH is represented by a sequence of LLHs and is associated with a solution of the DAPFSP. The LLHs in an individual are applied to the solution space, which is composed of a set of product sequences, each product containing a sub-sequence of its jobs. Fig. 3 illustrates an example of a solution with nine jobs and three products: the first three jobs belong to product 1, the next two jobs belong to product 2, and the last four jobs belong to product 3. Note that the jobs of one product are not separated in the solution encoding scheme. For each product, an effective decoding rule called NR2, as used in [25], is employed to assign each job to the factory that can complete it with the earliest completion time. The job sequence shown in Fig. 3 can then be decoded into a feasible schedule, whose makespan is calculated as discussed in Section 2.
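The decoding just described can be sketched as follows. This is an illustrative approximation of the NR2-style rule (each job is taken product by product in the encoded order and assigned to the factory that would complete it earliest), not the authors' implementation; the types Factory and decode are made up for the example.

```cpp
#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

struct Factory {
    std::vector<double> machineFree;                    // completion time of the last job per machine
    explicit Factory(std::size_t m) : machineFree(m, 0.0) {}

    // Completion time of a job (processing times pt) on the last machine if appended here.
    double tryJob(const std::vector<double>& pt) const {
        std::vector<double> mf = machineFree;
        double prev = 0.0;
        for (std::size_t j = 0; j < mf.size(); ++j)
            prev = mf[j] = std::max(mf[j], prev) + pt[j];
        return prev;
    }
    void addJob(const std::vector<double>& pt) {
        double prev = 0.0;
        for (std::size_t j = 0; j < machineFree.size(); ++j)
            prev = machineFree[j] = std::max(machineFree[j], prev) + pt[j];
    }
};

// p[i][j]: processing times; jobsByProduct[h]: encoded job order of product h.
// Returns, for each product, its ready time for assembly (feeds Eqs. (5)-(6)).
std::vector<double> decode(const std::vector<std::vector<double>>& p,
                           const std::vector<std::vector<int>>& jobsByProduct,
                           std::size_t F)
{
    const std::size_t m = p.empty() ? 0 : p[0].size();
    std::vector<Factory> fac(F, Factory(m));
    std::vector<double> ready(jobsByProduct.size(), 0.0);

    for (std::size_t h = 0; h < jobsByProduct.size(); ++h) {
        for (int job : jobsByProduct[h]) {
            std::size_t best = 0;
            double bestC = std::numeric_limits<double>::max();
            for (std::size_t f = 0; f < F; ++f) {
                const double c = fac[f].tryJob(p[job]);
                if (c < bestC) { bestC = c; best = f; }
            }
            fac[best].addJob(p[job]);
            ready[h] = std::max(ready[h], bestC);
        }
    }
    return ready;
}

int main() {
    // Toy data, made up: 4 jobs on 2 machines, grouped into 2 products, 2 factories.
    std::vector<std::vector<double>> p = {{2, 1}, {3, 2}, {1, 4}, {2, 2}};
    std::vector<std::vector<int>> jobsByProduct = {{0, 1}, {2, 3}};
    auto ready = decode(p, jobsByProduct, 2);
    return ready.size() == 2 ? 0 : 1;
}
```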

Table 6
Statistical results (ARPD) of various algorithms on small-sized instances. Each row gives the values for the F×n groups 2×8, 2×12, 2×16, 2×20, 2×24, 3×8, 3×12, 3×16, 3×20, 3×24, 4×8, 4×12, 4×16, 4×20, 4×24, followed by the overall average.
H11:    14.62 13.70 12.52 10.23 8.71 11.35 9.96 10.10 9.86 7.77 9.03 5.63 7.21 6.80 5.14 | 9.51
H12:    13.61 12.78 11.40 9.59 8.34 9.96 9.13 9.16 8.93 6.48 8.01 4.53 6.34 6.00 4.43 | 8.58
H21:    6.91 5.74 5.77 4.55 5.00 4.57 3.03 3.77 2.72 3.11 2.16 1.82 2.86 2.96 2.02 | 3.80
H22:    5.99 5.17 5.10 3.78 4.74 3.15 2.55 3.14 2.19 2.52 1.25 1.38 2.27 2.61 1.60 | 3.16
H31:    13.55 11.58 10.00 8.96 7.54 8.92 8.72 9.59 8.53 7.24 6.41 4.58 6.14 5.66 4.87 | 8.15
H32:    12.17 11.05 9.16 8.46 7.15 7.79 7.50 8.73 7.84 6.32 5.25 3.58 5.18 5.04 4.19 | 7.29
VNDH11: 1.00 0.93 0.73 0.53 0.54 1.09 0.44 0.86 0.43 0.64 1.08 0.74 0.59 1.10 0.57 | 0.75
VNDH12: 0.76 0.87 0.55 0.36 0.21 0.70 0.28 0.56 0.43 0.33 0.63 0.47 0.28 0.63 0.26 | 0.49
VNDH21: 1.00 0.93 0.72 0.51 0.54 1.15 0.44 0.91 0.43 0.64 0.99 0.74 0.59 1.10 0.57 | 0.75
VNDH22: 0.76 0.87 0.53 0.37 0.21 0.76 0.28 0.56 0.43 0.33 0.63 0.47 0.28 0.63 0.26 | 0.49
VNDH31: 1.02 0.93 1.09 0.57 0.54 1.15 0.44 0.91 0.43 0.64 0.99 0.74 0.59 1.10 0.57 | 0.78
VNDH32: 0.78 0.87 0.53 0.37 0.21 0.76 0.28 0.56 0.43 0.33 0.63 0.56 0.28 0.63 0.26 | 0.50
BS-HH:  0.38 0.41 0.16 −0.17 −0.34 0.40 0.12 0.08 0.09 −0.12 0.25 0.38 0.13 0.08 −0.03 | 0.12
Note: The bold values correspond to better results.

Fig. 8. Gantt chart of the new best solution obtained by the BS-HH for instance I_24_5_2_4_2.

Table 7
Pair-wise t-test of various algorithms on small-sized instances.
Algorithm         | Mean   | SD    | SEM   | IC-lower | IC-upper | t      | Significance
VNDH12 vs. BS-HH  | 0.367  | 0.142 | 0.035 | 0.291    | 0.442    | 10.367 | 0.000
VNDH22 vs. BS-HH  | 0.370  | 0.141 | 0.035 | 0.295    | 0.445    | 10.486 | 0.000
VNDH32 vs. BS-HH  | 0.378  | 0.131 | 0.033 | 0.308    | 0.447    | 11.547 | 0.000
VNDH12 vs. VNDH22 | −0.003 | 0.016 | 0.004 | −0.012   | 0.006    | −0.771 | 0.453
VNDH12 vs. VNDH32 | −0.011 | 0.027 | 0.007 | −0.025   | 0.004    | −1.577 | 0.136
VNDH22 vs. VNDH32 | −0.008 | 0.023 | 0.006 | −0.020   | 0.005    | −1.324 | 0.205

3.3. Low-level heuristics (LLHs)

It is widely agreed that the design of the set of LLHs is important for the efficiency of a hyper-heuristic approach [31]. In this section, ten easy-to-implement LLHs are designed to construct the set of LLHs; they are detailed as follows, and a small code sketch of two of them is given after the list.

(1) Job-Swap: Randomly select two different jobs Ja and Jb from the sequence and swap them.
(2) Job-Forward-Insert: Randomly select two different jobs Ja and Jb (b > a) from the sequence and insert Jb before Ja.
(3) Job-Backward-Insert: Randomly select two different jobs Ja and Jb (b > a) from the sequence and insert Ja before Jb.
(4) Job-Inverse: Randomly select two different positions a and b (b > a) from the job sequence, then inverse the subsequence {Ja, Ja+1, ..., Jb}.
(5) Job-Adjacent-Swap: Randomly select one position a from the sequence, and swap it with the next position of the sequence. In particular, if the selected position a is the last position of the sequence, swap it with the first position of the sequence.
(6) Product-Swap: Randomly select two products Pa and Pb (b > a) and swap them.
(7) Product-Forward-Insert: Randomly select two different products Pa and Pb (b > a) from the sequence and insert Pb before Pa.
(8) Product-Backward-Insert: Randomly select two different products Pa and Pb (b > a) from the sequence and insert Pa before Pb.
(9) Product-Inverse: Randomly select two different positions a and b (b > a) from the product sequence, then inverse the subsequence {Pa, Pa+1, ..., Pb}.
(10) Product-Adjacent-Swap: Randomly select one product Pa from the sequence, and swap it with the next product Pa+1 of the sequence. If the selected product Pa is the last one of the sequence, swap it with the first product P1 of the sequence.
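As referenced above, here is an illustrative C++ sketch of two of the ten LLHs operating on the encoding of Fig. 3. It is not the authors' code; for brevity the randomly chosen positions are allowed to coincide, whereas the definitions above require two different positions.

```cpp
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

// Jobs grouped by product, in the encoded product order (so jobs of a product stay contiguous).
using ProductSeq = std::vector<std::vector<int>>;

// LLH (1) Job-Swap: pick one product at random, then swap two jobs inside it.
void jobSwap(ProductSeq& s, std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> pickProd(0, s.size() - 1);
    auto& jobs = s[pickProd(rng)];
    if (jobs.size() < 2) return;
    std::uniform_int_distribution<std::size_t> pickJob(0, jobs.size() - 1);
    std::swap(jobs[pickJob(rng)], jobs[pickJob(rng)]);
}

// LLH (6) Product-Swap: swap two products in the product sequence.
void productSwap(ProductSeq& s, std::mt19937& rng) {
    if (s.size() < 2) return;
    std::uniform_int_distribution<std::size_t> pick(0, s.size() - 1);
    std::swap(s[pick(rng)], s[pick(rng)]);
}

int main() {
    std::mt19937 rng(7);
    ProductSeq s = {{2, 5, 4}, {9, 7}, {3, 8, 1, 6}};   // the example of Fig. 3
    jobSwap(s, rng);
    productSwap(s, rng);
    return 0;
}
```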

To guarantee that the jobs belonging to one product are not separated after applying the LLHs, a product is randomly selected before conducting the job-related LLHs (1)–(5). In addition, to improve the local search ability of the designed heuristic rules, simulated annealing (SA) [42] is embedded into each LLH. The pseudo code of the improved version of an LLH is shown in Fig. 4, where ω is a given job or product sequence to be processed, ξ is the annealing rate, and T0 and Tf are the initial and final temperatures, respectively.

Fig. 9. Boxplots for ARPD values of the compared algorithms in solving the small-sized instances.

Fig. 10. Boxplots for ARPD values of the compared algorithms in solving the large-sized instances.

Table 9
Pair-wise t-test of various algorithms on large-sized instances.
Algorithm         | Mean  | SD    | SEM   | IC-lower | IC-upper | t     | Significance
VNDH12 vs. BS-HH  | 0.021 | 0.011 | 0.003 | 0.01307  | 0.02833  | 6.136 | 0.000
VNDH22 vs. BS-HH  | 0.014 | 0.009 | 0.003 | 0.00716  | 0.02024  | 4.739 | 0.001
VNDH32 vs. BS-HH  | 0.014 | 0.009 | 0.003 | 0.00716  | 0.02024  | 4.739 | 0.001
VNDH12 vs. VNDH22 | 0.007 | 0.007 | 0.002 | 0.00217  | 0.01183  | 3.280 | 0.010
VNDH12 vs. VNDH32 | 0.007 | 0.007 | 0.002 | 0.00217  | 0.01183  | 3.280 | 0.010

Table 8
Statistical results (ARPD) of various algorithms on large-sized instances. Each row gives the values for the groups F = 4, 6, 8; H = 30, 40, 50; n = 100, 200, 500, followed by the overall average.
H11:    5.57 3.77 3.09 3.78 4.30 4.36 6.30 3.76 2.37 | 4.14
H12:    5.09 3.29 2.66 3.34 3.85 3.85 5.61 3.28 2.16 | 3.68
H21:    0.32 0.11 0.04 0.21 0.15 0.11 0.17 0.15 0.14 | 0.16
H22:    0.19 0.06 0.02 0.11 0.10 0.05 0.08 0.07 0.10 | 0.09
H31:    2.96 1.64 1.21 2.23 1.94 1.65 2.02 1.92 1.87 | 1.94
H32:    2.56 1.31 0.93 1.86 1.62 1.32 1.58 1.55 1.67 | 1.60
VNDH11: 0.06 0.03 0.02 0.03 0.04 0.04 0.05 0.03 0.03 | 0.04
VNDH12: 0.03 0.01 0.00 0.01 0.02 0.01 0.02 0.01 0.01 | 0.01
VNDH21: 0.05 0.02 0.01 0.04 0.02 0.02 0.03 0.02 0.03 | 0.03
VNDH22: 0.01 0.00 0.00 0.01 0.01 0.00 0.01 0.00 0.01 | 0.01
VNDH31: 0.05 0.02 0.01 0.04 0.02 0.02 0.03 0.02 0.03 | 0.03
VNDH32: 0.01 0.00 0.00 0.01 0.01 0.00 0.01 0.00 0.01 | 0.01
BS-HH:  −0.014 −0.004 −0.005 −0.013 −0.007 −0.003 −0.003 −0.004 −0.016 | −0.008
Note: The bold values correspond to better results.

3.4. Outline of the BS-HH

This section describes the framework of the backtracking search based hyper-heuristic (BS-HH) algorithm and its main procedure. In the BS-HH, each individual is a sequence of low-level heuristics and is associated with a solution; an individual is evaluated by applying its low-level heuristics to the solution space. The general framework of the BS-HH scheme is presented in Fig. 5. In the BS-HH, the BSA is employed as the high-level hyper-heuristic strategy to manipulate the low-level heuristics; in particular, the BSA searches the solution space indirectly by constructing an optimal heuristic, while for a given problem instance each low-level heuristic searches the solution space directly to find an optimal schedule for the DAPFSP. The main procedure of the BS-HH is shown in Fig. 6 and can be described as follows:

(1) The initial population and the initial historical population are generated randomly in such a way that each heuristic appears once in each individual.

(2) The initial trial population muP produced by Eq. (11) may contain infeasible solutions. To tackle this problem, Eq. (13) is used to map each entry to an integer within the heuristic domain:

muP_{i,j} = 1 if muP_{i,j} < 1;   Ln if muP_{i,j} > Ln;   [muP_{i,j}] otherwise,   (13)

where Ln is the number of available low-level heuristics and [muP_{i,j}] denotes the rounding of muP_{i,j}.

(3) The low-level heuristics of each individual are applied consecutively to the solution space to find a better solution. Each individual is evaluated based on the best solution obtained by its sequence of low-level heuristics, and the solution decoding scheme is employed to generate a feasible schedule.
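A minimal sketch of steps (2)–(3) is given below: Eq. (13) repairs a real-valued entry into a heuristic index in {1, ..., Ln}, and the repaired sequence of LLHs is applied consecutively to the associated solution, keeping the best schedule found. The helpers applyLLH and makespan are placeholders (assumptions) standing in for the operators of Section 3.3 and the decoding of Section 3.2.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Eq. (13): clamp to [1, Ln] and round to the nearest heuristic index.
int repairIndex(double v, int Ln) {
    if (v < 1.0) return 1;
    if (v > static_cast<double>(Ln)) return Ln;
    return static_cast<int>(std::lround(v));
}

// Placeholder types and helpers, not the paper's code: applyLLH(k, sol) would apply
// low-level heuristic k to the solution encoding, and makespan(sol) would decode and
// evaluate it via Eqs. (1)-(6).
struct Solution { std::vector<int> perm; };
Solution applyLLH(int /*k*/, Solution s) { return s; }
double makespan(const Solution& /*s*/) { return 0.0; }

// Step (3): apply the LLH sequence of one trial individual and return its fitness.
double evaluateIndividual(const std::vector<double>& trialRow, int Ln, Solution sol) {
    double best = makespan(sol);
    for (double v : trialRow) {
        sol = applyLLH(repairIndex(v, Ln), sol);
        best = std::min(best, makespan(sol));
    }
    return best;
}

int main() {
    Solution s;
    std::vector<double> row = {1.7, 3.2, 0.4, 12.9};   // hypothetical real-coded entries
    return evaluateIndividual(row, 10, s) >= 0.0 ? 0 : 1;
}
```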

Table 10
CPU time (s) of various algorithms on large-sized instances. Each row gives the values for the groups F = 4, 6, 8; H = 30, 40, 50; n = 100, 200, 500, followed by the overall average.
H11:    0.01 0.01 0.01 0.01 0.01 0.01 0.00 0.00 0.03 | 0.01
H12:    0.01 0.01 0.01 0.01 0.01 0.01 0.00 0.00 0.02 | 0.01
H21:    0.01 0.01 0.01 0.02 0.01 0.01 0.00 0.00 0.03 | 0.01
H22:    0.01 0.01 0.01 0.02 0.01 0.01 0.00 0.00 0.04 | 0.01
H31:    0.01 0.01 0.01 0.01 0.01 0.01 0.00 0.00 0.03 | 0.01
H32:    0.01 0.01 0.01 0.01 0.01 0.01 0.00 0.00 0.02 | 0.01
VNDH11: 4.39 3.49 3.26 3.64 3.59 3.91 1.09 2.02 8.03 | 3.71
VNDH12: 6.79 7.73 9.56 8.05 7.12 8.91 2.84 3.85 17.39 | 8.03
VNDH21: 2.90 2.85 1.86 3.14 2.45 2.02 0.27 0.58 6.76 | 2.54
VNDH22: 7.67 8.94 10.21 11.00 8.05 7.77 0.72 2.22 23.88 | 8.94
VNDH31: 2.55 1.95 1.83 2.70 1.96 1.66 0.24 0.66 5.41 | 2.11
VNDH32: 42.87 6.11 20.64 45.20 5.54 18.88 0.43 1.37 67.81 | 23.20
BS-HH:  12.01 12.61 13.07 12.57 12.63 12.49 1.02 3.99 32.68 | 12.56

Table 11
Statistical results of the compared EAs on large-sized instances (ARPD and CPU time in seconds).
Group   | EDAMA ARPD | EDAMA CPU | HBBO ARPD | HBBO CPU | BS-HH ARPD | BS-HH CPU
F = 4   | 0.013 | 22.06 | −0.013 | 59.41  | −0.014 | 12.01
F = 6   | 0.004 | 22.84 | −0.005 | 62.08  | −0.004 | 12.61
F = 8   | 0.004 | 24.58 | −0.005 | 71.06  | −0.005 | 13.07
H = 30  | 0.011 | 23.22 | −0.012 | 64.07  | −0.013 | 12.57
H = 40  | 0.008 | 23.31 | −0.006 | 64.04  | −0.007 | 12.63
H = 50  | 0.003 | 22.96 | −0.004 | 64.45  | −0.003 | 12.49
n = 100 | 0.008 | 3.57  | −0.003 | 1.50   | −0.003 | 1.02
n = 200 | 0.006 | 11.27 | −0.003 | 8.26   | −0.004 | 3.99
n = 500 | 0.007 | 54.65 | −0.016 | 182.80 | −0.016 | 32.68
Average | 0.007 | 23.16 | −0.007 | 64.19  | −0.008 | 12.56
Note: The bold values correspond to better results.

Table 12
Pair-wise t-test of the EAs on large-sized instances.
Algorithm       | Mean  | SD    | SEM   | IC-lower | IC-upper | t     | Significance
EDAMA vs. BS-HH | 0.015 | 0.008 | 0.003 | 0.00873  | 0.02083  | 5.633 | 0.000
HBBO vs. BS-HH  | 0.000 | 0.001 | 0.000 | 0.00028  | 0.00086  | 0.800 | 0.447
EDAMA vs. HBBO  | 0.015 | 0.007 | 0.002 | 0.00246  | 0.02023  | 5.913 | 0.000

4. Computational results

To verify the effectiveness of the proposed BS-HH scheme, computational experiments are carried out on the two sets of benchmarks generated in [25]. Set one includes 900 small-sized instances, where n ∈ {8, 12, 16, 20, 24}, m ∈ {2, 3, 4, 5}, F ∈ {2, 3, 4} and H ∈ {2, 3, 4}. Set two includes 810 large-sized instances, where n ∈ {100, 200, 500}, m ∈ {5, 10, 20}, F ∈ {4, 6, 8} and H ∈ {30, 40, 50}. The performance of the BS-HH is compared with existing algorithms, including the heuristics developed in [25] and two state-of-the-art EAs (EDAMA [26] and HBBO [27]). The BS-HH is coded in Visual C++ 6.0 and run on a Core i5-4210U processor at 2.40 GHz with 4 GB RAM. The average relative percentage deviation (ARPD), as defined by Ruiz et al. [43], is employed to evaluate algorithm performance:

ARPD = (1/R) ∑_{i=1}^{R} ((C_i − C_best)/C_best × 100) %,   (14)

where C_best is the makespan of the best known or optimal solution for an instance and C_i is the makespan obtained in the i-th of the R replications of the considered algorithm. A new best solution has been found when the ARPD value is less than 0.

4.1. Parameter setting

To make a fair comparison, the population size is set to 50, which is the same value used in the literature [26,27]. However, the proposed BS-HH still contains several key parameters: the maximum number of iterations Itrmax, the user-defined parameter λ, the mix rate rmax and the annealing rate ξ. To investigate the influence of these parameters on the performance of the BS-HH, the Taguchi method of experimental design [44] is applied to instance I_20_3_2_2_1 [25], where 20_3_2_2 denotes the size of the instance (n = 20, m = 3, F = 2, H = 2) and the final 1 denotes that it is the first replication of this particular size. The tested combinations of parameter values are presented in Table 2. Each parameter combination is run repeatedly and independently 30 times with the BS-HH, and the reported result is the averaged makespan (AM) value over the 30 runs. The orthogonal array L16(4^4) is chosen based on the number of parameters and factor levels; the orthogonal array and the obtained results are listed in Table 3. For each parameter, the AM value is calculated to carry out the significance test; the F values and the significance ranks are presented in Table 4, and the trend of each factor level is illustrated in Fig. 7.

It can be seen from Table 4 and Fig. 7 that, among the parameters, the maximum number of iterations Itrmax is the most significant one. Note that the AM value is not satisfactory when Itrmax is set to 50, but changes very slightly when Itrmax is larger than 150. The annealing rate ξ is the second most significant parameter. From Fig. 7, it can be observed that a larger value of ξ results in a smaller AM value; however, a large ξ also causes a high computational cost. Hence, a good choice of parameter combination is Itrmax = 150, λ = 5, rmax = 1.0 and ξ = 0.8. Additionally, the initial temperature T0 and the final temperature Tf used in the simulated annealing are set to 2 and 1, respectively.

4.2. Effect of the improved LLHs on BS-HH

To demonstrate the effectiveness of the improved LLHs, the BS-HH with and without the SA method are compared on the set of small-sized instances. The two algorithms use the same stopping condition and are run 30 times independently. The statistical results, grouped by combinations of F × n and averaged over the 60 instances of each group, are summarized in Table 5. It can be seen from Table 5 that lower ARPD values are obtained by the BS-HH with SA for most combinations; in particular, the average ARPD values obtained by the BS-HH with and without SA are 0.12 and 0.18, respectively. Table 5 thus shows that the improved LLHs help to improve the performance of the proposed BS-HH scheme.
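For reference, the ARPD measure of Eq. (14), which underlies the comparisons reported in Tables 5–12, can be computed as in the following small sketch; the makespan values in main() are hypothetical.

```cpp
#include <cstdio>
#include <vector>

// Eq. (14): average relative percentage deviation of R replications from the best-known makespan.
double arpd(const std::vector<double>& c, double cBest) {
    double sum = 0.0;
    for (double ci : c) sum += (ci - cBest) / cBest * 100.0;
    return sum / static_cast<double>(c.size());
}

int main() {
    // Hypothetical makespans of R = 5 replications against a best-known value of 1000.
    std::vector<double> c = {1005, 998, 1002, 1000, 995};
    std::printf("ARPD = %.3f%%\n", arpd(c, 1000.0));   // a negative value indicates a new best solution
    return 0;
}
```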

4.3. Comparison with different heuristics

In this section, the performance of the proposed BS-HH is compared with the twelve heuristics developed in [25] on the two benchmark sets. First, the 900 small-sized instances [25] are used. For each instance, 30 independent replications are carried out with the BS-HH, and the makespan values obtained by the algorithm are converted into ARPD values. The statistical results are listed in Table 6, where the results of the twelve heuristics are taken directly from the literature [25]. It can be seen from Table 6 that, for all instance groups, the ARPD values obtained by the BS-HH are much better than those of the other algorithms. In particular, new best solutions have been found by the BS-HH for 76 small-sized instances, which are listed in detail in Appendix A. Moreover, as one example of the new best solutions, the corresponding Gantt chart for instance I_24_5_2_4_2 is depicted in Fig. 8. In addition, a pair-wise t-test [45] is employed to check the differences between VNDH12, VNDH22, VNDH32 and the BS-HH. Using the two-tailed t-test at the 95% confidence level, the statistical comparisons between the four algorithms are listed in Table 7, where SD is the standard deviation, SEM is the standard error of the mean, and IC-lower and IC-upper are the bounds of the confidence interval of the difference. It can be seen from the table that the BS-HH is statistically different from the other compared algorithms, whereas no statistically significant difference is found between the VNDH12, VNDH22 and VNDH32 heuristics. The boxplots presented in Fig. 9 also illustrate the difference between the BS-HH and the six selected best heuristics. Additionally, the average CPU times consumed by all the algorithms are within 0.01 s, which is negligible when solving small-sized instances.

To further validate the performance, the BS-HH is tested on the 810 large-sized instances; specifically, we consider 270 instances for each value of F, H and n. The statistical results of the algorithms over all instances for 5 independent runs are summarized in Table 8. It can be concluded from the table that the BS-HH outperforms the other algorithms on all the large-sized instance groups, which is also evident from the boxplots in Fig. 10. On average, the BS-HH yields an ARPD of almost −0.01% with respect to the best known solutions. In particular, new best solutions for 92 out of the 810 large-sized instances are obtained by the BS-HH; they are listed in Appendix A. The statistical results of the two-tailed t-test at the 95% confidence level are listed in Table 9. As shown by Table 9, the BS-HH is significantly different from the other compared heuristics. Since the results obtained by VNDH22 are identical to those of VNDH32 in Table 8, the t-test is not conducted between VNDH22 and VNDH32. In addition, the CPU times consumed by the compared algorithms are listed in Table 10. Observe that the BS-HH uses more time than most of the heuristics; however, its average CPU time is within 13 s and is only about half of the time required by the VNDH32 heuristic. Moreover, this time cost is acceptable since the DAPFSP can be solved offline, and solutions of much better quality are found by the BS-HH algorithm.

4.4. Comparison with EAs

In this section, the performance of the proposed BS-HH is compared with two state-of-the-art EAs: the EDAMA [26] and the HBBO [27]. The parameters of the EDAMA are set as follows: a population size of 50, a percentage of the superior sub-population of 10, a learning rate of 0.3 and a local search intensity of 0.25. The HBBO is implemented with a population size of 50 and maximum mutation, immigration and emigration rates of 0.1, 1 and 1, respectively. Since the small-sized instances can be solved easily, only the large-sized instances are employed for the comparisons in this section. The statistical results in terms of the ARPD value and the average CPU time are listed in Table 11, where the results of the compared algorithms are taken directly from the literature [26,27]. From the table, it can be seen that the BS-HH outperforms the EDAMA and the HBBO in terms of overall solution quality. In addition, the two-tailed t-test at the 95% confidence level is conducted on the ARPD values of the compared algorithms, and the results are listed in Table 12. It can be concluded from the table that the compared algorithms are statistically different from each other, except for the BS-HH and the HBBO, between which no statistically significant difference is found. However, the large-sized instances are solved by the BS-HH within an average of 13 s, which is much less than the time required by the other two algorithms; this also demonstrates the efficiency of the BS-HH algorithm. The corresponding Gantt chart for the large-sized instance I_100_10_4_30_6 is presented in Fig. 11.

Fig. 11. Gantt chart of the new best solution obtained by the BS-HH for instance I_100_10_4_30_6.

In conclusion, on the two sets of benchmark instances studied herein, the proposed BS-HH is significantly better and more effective than the other compared algorithms for solving the DAPFSP.

5. Conclusions

In this paper, an effective BS-HH scheme was proposed to solve the distributed assembly permutation flow-shop scheduling problem (DAPFSP). To the best of our knowledge, this is the first hyper-heuristic based algorithm for solving the DAPFSP. The main contributions of this work are as follows: (1) a novel encoding scheme was proposed to encode a feasible solution more effectively; (2) a set of low-level heuristics was designed and embedded within the BS-HH scheme; (3) the backtracking search algorithm was employed as the high-level hyper-heuristic strategy to manipulate the low-level heuristics to find an optimal solution for the DAPFSP; (4) the performance of the proposed scheme was evaluated using 900 small-sized instances and 810 large-sized instances, and the computational results and comparisons with state-of-the-art algorithms demonstrated the feasibility and effectiveness of the proposed hybrid scheme. Compared to the best known results, new best solutions for 76 small-sized instances and 92 large-sized instances were obtained using the BS-HH scheme.

In our future work, fitness landscape analysis techniques will be used to determine the characteristics of the problem, which can further guide the design of hyper-heuristics. Moreover, it will also be interesting to apply the hyper-heuristic scheme to the DAPFSP with multiple assembly factories.

Acknowledgment

The authors would like to thank the anonymous reviewers for their valuable comments and suggestions. This work is part of a project supported by the National Natural Science Foundation of China (Grant Nos. 61503331, 71671160 and 61503330) and the Zhejiang Provincial Natural Science Foundation of China (Grant Nos. LQ15F030002 and LY15G010004).

Appendix A

The new best known solutions obtained by the BS-HH for the two sets of benchmark instances are listed in Tables A1 and A2. In each table the data are given column by column; the k-th entries of the four lists in a block form one row (Instance, Best known, BS-HH, ARPD).

Table A1
New best solutions for 76 small-sized instances.
Instances (1–38): I_16_4_2_2_3 I_20_2_2_2_2 I_20_2_2_2_3 I_20_2_4_2_2 I_20_2_4_2_3 I_20_3_2_2_1 I_20_3_2_2_3 I_20_3_2_2_4 I_20_3_2_3_1 I_20_3_2_3_2 I_20_3_2_3_4 I_20_3_3_2_2 I_20_4_2_2_1 I_20_4_2_2_2 I_20_4_2_2_4 I_20_4_2_2_5 I_20_4_2_3_2 I_20_4_2_3_4 I_20_4_3_3_5 I_20_5_2_2_1 I_20_5_2_2_2 I_20_5_2_2_3 I_20_5_2_2_5 I_20_5_2_3_2 I_20_5_2_3_3 I_24_2_2_2_3 I_24_2_2_3_2 I_24_2_2_3_3 I_24_2_2_3_5 I_24_2_3_2_4 I_24_2_3_2_5 I_24_2_4_2_2 I_24_2_4_2_5 I_24_3_2_2_1 I_24_3_2_2_2 I_24_3_2_2_3 I_24_3_2_2_5 I_24_3_2_3_1
Best known: 1146 859 1462 369 553 968 997 1722 1756 1121 1482 1561 1229 862 1675 1394 858 1725 1137 1478 1258 1500 1589 1181 1499 1986 1514 1495 1861 1706 1084 1264 563 1705 1538 879 1685 1077
BS-HH: 1143 857 1460 365 551 961 982 1718 1749 1118 1479 1560 1220 858 1673 1381 845 1716 1129 1474 1250 1479 1587 1176 1494 1985 1509 1492 1856 1705 1082 1258 556 1694 1537 868 1674 1075
ARPD: −0.26 −0.23 −0.14 −1.08 −0.36 −0.72 −1.50 −0.23 −0.40 −0.27 −0.20 −0.06 −0.73 −0.46 −0.12 −0.93 −1.52 −0.52 −0.70 −0.27 −0.64 −1.40 −0.13 −0.42 −0.33 −0.05 −0.33 −0.20 −0.27 −0.06 −0.18 −0.47 −1.24 −0.65 −0.07 −1.25 −0.65 −0.19
Instances (39–76): I_24_3_2_3_2 I_24_3_2_4_1 I_24_3_2_4_2 I_24_3_3_2_2 I_24_3_3_2_3 I_24_3_3_2_4 I_24_3_3_2_5 I_24_3_3_3_2 I_24_3_3_4_1 I_24_3_3_4_3 I_24_4_2_2_1 I_24_4_2_2_2 I_24_4_2_2_5 I_24_4_2_3_2 I_24_4_2_3_3 I_24_4_2_4_2 I_24_4_3_2_2 I_24_4_3_2_3 I_24_4_3_2_4 I_24_4_3_4_5 I_24_4_4_2_5 I_24_5_2_2_1 I_24_5_2_2_2 I_24_5_2_2_3 I_24_5_2_2_4 I_24_5_2_2_5 I_24_5_2_3_1 I_24_5_2_3_3 I_24_5_2_3_5 I_24_5_2_4_2 I_24_5_2_4_3 I_24_5_3_2_2 I_24_5_3_2_3 I_24_5_3_2_4 I_24_5_3_3_3 I_24_5_3_3_4 I_24_5_4_2_2 I_24_5_4_2_5
Best known: 1417 1323 755 1232 1999 584 856 1078 918 1738 1350 1365 999 1225 1711 1205 1694 2458 1844 1480 1356 1625 2136 2049 1764 2100 2333 1340 914 1054 1691 1518 1890 2287 1761 2318 1771 1792
BS-HH: 1398 1320 751 1227 1995 578 854 1073 917 1736 1335 1358 974 1214 1673 1198 1684 2448 1836 1473 1355 1605 2118 2024 1739 2095 2328 1333 910 1031 1687 1488 1870 2277 1759 2308 1757 1791
ARPD: −1.34 −0.23 −0.53 −0.41 −0.20 −1.03 −0.23 −0.46 −0.11 −0.12 −1.11 −0.51 −2.50 −0.90 −2.22 −0.58 −0.59 −0.41 −0.43 −0.47 −0.07 −1.23 −0.84 −1.22 −1.42 −0.24 −0.21 −0.52 −0.44 −2.18 −0.24 −1.98 −1.06 −0.44 −0.11 −0.43 −0.79 −0.06

Table A2
New best solutions for 92 large-sized instances.
Instances (1–48): I_100_10_4_30_6 I_100_20_4_40_9 I_200_5_4_30_2 I_200_5_4_30_5 I_200_5_4_40_3 I_200_5_8_30_10 I_200_10_4_30_2 I_200_20_4_30_1 I_200_20_4_30_2 I_200_20_4_40_7 I_200_20_4_50_6 I_200_20_4_50_9 I_200_20_6_30_2 I_200_20_6_50_6 I_200_20_8_30_8 I_500_5_4_30_1 I_500_5_4_30_2 I_500_5_4_30_4 I_500_5_4_30_6 I_500_5_4_30_7 I_500_5_4_30_8 I_500_5_4_30_9 I_500_5_4_30_10 I_500_5_4_40_4 I_500_5_4_40_7 I_500_5_4_40_8 I_500_5_4_40_9 I_500_5_4_50_7 I_500_5_6_30_3 I_500_5_6_30_7 I_500_5_6_30_9 I_500_5_6_40_1 I_500_5_6_40_3 I_500_5_6_40_5 I_500_5_6_50_2 I_500_5_8_30_1 I_500_5_8_30_5 I_500_5_8_30_6 I_500_5_8_30_9 I_500_5_8_50_8 I_500_10_4_30_1 I_500_10_4_30_3 I_500_10_4_30_5 I_500_10_4_30_6 I_500_10_4_30_8 I_500_10_4_30_9 I_500_10_4_30_10 I_500_10_4_40_2
Best known: 6296 5393 10759 9309 9204 7612 10904 11639 11152 9320 11939 9532 11873 10408 12692 27845 30191 24811 28052 29111 27785 23787 20595 25925 29700 28543 24801 24512 28170 26723 19654 22655 22361 24017 25661 24383 29782 26461 27737 24103 26749 22034 22961 29194 26650 26670 28040 25598
BS-HH: 6284 5365 10754 9308 9203 7607 10903 11635 11137 9289 11918 9525 11866 10394 12678 27841 30189 24794 28050 29094 27776 23771 20571 25913 29693 28538 24799 24499 28165 26715 19646 22650 22356 24010 25656 24377 29772 26453 27731 24099 26743 22019 22951 29166 26649 26648 28037 25593
ARPD: −0.19 −0.52 −0.05 −0.01 −0.01 −0.07 −0.01 −0.03 −0.13 −0.33 −0.18 −0.07 −0.06 −0.13 −0.11 −0.01 −0.01 −0.07 −0.01 −0.06 −0.03 −0.07 −0.12 −0.05 −0.02 −0.02 −0.01 −0.05 −0.02 −0.03 −0.04 −0.02 −0.02 −0.03 −0.02 −0.02 −0.03 −0.03 −0.02 −0.02 −0.02 −0.07 −0.04 −0.10 0.00 −0.08 −0.01 −0.02
Instances (49–92): I_500_10_4_40_5 I_500_10_4_40_6 I_500_10_4_40_8 I_500_10_4_40_9 I_500_10_4_40_10 I_500_10_4_50_1 I_500_10_6_30_1 I_500_10_6_30_3 I_500_10_6_30_6 I_500_10_6_30_7 I_500_10_6_30_8 I_500_10_6_30_10 I_500_10_6_40_1 I_500_10_6_40_6 I_500_10_8_30_1 I_500_10_8_30_2 I_500_10_8_30_5 I_500_10_8_30_6 I_500_10_8_40_2 I_500_10_8_40_4 I_500_20_4_30_4 I_500_20_4_30_5 I_500_20_4_30_9 I_500_20_4_30_10 I_500_20_4_40_1 I_500_20_4_40_2 I_500_20_4_40_4 I_500_20_4_40_9 I_500_20_4_50_2 I_500_20_4_50_3 I_500_20_4_50_4 I_500_20_4_50_5 I_500_20_4_50_6 I_500_20_4_50_9 I_500_20_6_30_1 I_500_20_6_30_4 I_500_20_6_30_7 I_500_20_6_30_9 I_500_20_6_30_10 I_500_20_8_30_6 I_500_20_8_30_8 I_500_20_8_30_9 I_500_20_8_40_6 I_500_20_8_40_9
Best known: 22904 25341 23335 24650 22928 28887 28319 25022 31658 23584 22654 30512 26234 23791 20965 17054 27900 21922 24055 24971 23096 26810 27519 24783 25222 28057 24585 21272 27350 23868 25697 26082 23666 25599 24512 26019 26381 26169 27023 25654 22579 22575 24129 24699
BS-HH: 22898 25333 23296 24649 22885 28837 28317 25003 31657 23576 22638 30476 26217 23784 20952 17031 27899 21886 24050 24968 23089 26791 27499 24769 25203 28020 24583 21267 27346 23840 25691 26074 23663 25582 24475 25998 26341 26138 26999 25552 22568 22571 24120 24667
ARPD: −0.03 −0.03 −0.17 0.00 −0.19 −0.17 −0.01 −0.08 0.00 −0.03 −0.07 −0.12 −0.06 −0.03 −0.06 −0.13 0.00 −0.16 −0.02 −0.01 −0.03 −0.07 −0.07 −0.06 −0.08 −0.13 −0.01 −0.02 −0.01 −0.12 −0.02 −0.03 −0.01 −0.07 −0.15 −0.08 −0.15 −0.12 −0.09 −0.40 −0.05 −0.02 −0.04 −0.13

References

[1] J. Behnamian, S.M.T. Fatemi Ghomi, A survey of multi-factory scheduling, J. Intell. Manuf. 27 (2016) 231–249.
[2] K.Z. Gao, P.N. Suganthan, Q.K. Pan, M.F. Tasgetiren, A. Sadollah, Artificial bee colony algorithm for scheduling and rescheduling fuzzy flexible job shop problem with new job insertion, Knowl.-Based Syst. 109 (2016) 1–16.
[3] K.Z. Gao, P.N. Suganthan, Q.K. Pan, T.J. Chua, C.S. Chong, T.X. Cai, An improved artificial bee colony algorithm for flexible job-shop scheduling problem with fuzzy processing time, Expert Syst. Appl. 65 (2016) 52–67.
[4] K.Z. Gao, P.N. Suganthan, M.F. Tasgetiren, Q.K. Pan, Q.Q. Sun, Effective ensembles of heuristics for scheduling flexible job shop problem with new job insertion, Comput. Ind. Eng. 90 (2015) 107–117.
[5] T. Gonzalez, S. Sahni, Flowshop and jobshop schedules: complexity and approximation, Oper. Res. 26 (1978) 36–52.
[6] S.M. Johnson, Optimal two- and three-stage production schedules with setup times included, Nav. Res. Logist. Q. 1 (1954) 61–68.
[7] D.G. Dannenbring, An evaluation of flow shop sequencing heuristics, Manag. Sci. 23 (1977) 1174–1182.
[8] M. Nawaz, E.E. Enscore, I. Ham, A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem, Omega 11 (1983) 91–95.
[9] B. Liu, L. Wang, Y.-H. Jin, An effective PSO-based memetic algorithm for flow shop scheduling, IEEE Trans. Syst. Man Cybern. B: Cybern. 37 (2007) 18–27.
[10] M.F. Tasgetiren, Q.K. Pan, P.N. Suganthan, A.H. Chen, A discrete artificial bee colony algorithm for the total flowtime minimization in permutation flow shops, Inf. Sci. 181 (2011) 3459–3475.
[11] Y.F. Liu, S.Y. Liu, A hybrid discrete artificial bee colony algorithm for permutation flowshop scheduling problem, Appl. Soft Comput. 13 (2013) 1459–1463.
[12] Q.K. Pan, M.F. Tasgetiren, Y.C. Liang, A discrete differential evolution algorithm for the permutation flowshop scheduling problem, Comput. Ind. Eng. 55 (2008) 795–816.
[13] J.Y. Xu, Y.Q. Yin, T.C.E. Cheng, C.C. Wu, S.S. Gu, An improved memetic algorithm based on a dynamic neighbourhood for the permutation flowshop scheduling problem, Int. J. Prod. Res. 52 (2014) 1188–1199.
[14] D.Z. Zheng, L. Wang, An effective hybrid heuristic for flow shop scheduling, Int. J. Adv. Manuf. Technol. 21 (2003) 38–44.
[15] P.C. Chang, W.H. Huang, J.L. Wu, T.C.E. Cheng, A block mining and recombination enhanced genetic algorithm for the permutation flowshop scheduling problem, Int. J. Prod. Econ. 141 (2013) 45–55.
[16] P.C. Chang, W.H. Huang, C.J. Ting, A hybrid genetic-immune algorithm with improved lifespan and elite antigen for flow-shop scheduling problems, Int. J. Prod. Res. 49 (2011) 5207–5230.
[17] Y.M. Chen, M.C. Chen, P.C. Chang, S.H. Chen, Extended artificial chromosomes genetic algorithm for permutation flowshop scheduling problems, Comput. Ind. Eng. 62 (2012) 536–545.
[18] V. Fernandez-Viagas, J.M. Framinan, On insertion tie-breaking rules in heuristics for the permutation flowshop scheduling problem, Comput. Oper. Res. 45 (2014) 60–67.
[19] C. Moon, J. Kim, S. Hur, Integrated process planning and scheduling with minimizing total tardiness in multi-plants supply chain, Comput. Ind. Eng. 43 (2002) 331–349.
[20] J. Deng, L. Wang, A competitive memetic algorithm for multi-objective distributed permutation flow shop scheduling problem, Swarm Evolut. Comput. 32 (2017) 121–131.
[21] M. Ji, Y. Yang, W. Duan, S. Wang, B. Liu, Scheduling of no-wait stochastic distributed assembly flowshop by hybrid PSO, in: Proceedings of the IEEE Congress on Evolutionary Computation (CEC 16), Vancouver, Canada, 2016, pp. 2649–2654.
[22] X. Du, M. Ji, Z. Li, B. Liu, Scheduling of stochastic distributed assembly flowshop under complex constraints, in: Proceedings of the IEEE Symposium Series on Computational Intelligence (SSCI 16), Athens, Greece, 2016, pp. 1–7.
[23] L. Wang, H.-Y. Feng, N. Cai, W. Jin, An effective approach for distributed process planning enabled by event-driven function blocks, in: Process Planning and Scheduling for Distributed Manufacturing, Springer, London, 2007, pp. 1–30.
[24] F.T. Chan, S.H. Chung, P. Chan, An adaptive genetic algorithm with dominated genes for distributed scheduling problems, Expert Syst. Appl. 29 (2005) 364–371.
[25] S. Hatami, R. Ruiz, C. Andrés-Romano, The distributed assembly permutation flowshop scheduling problem, Int. J. Prod. Res. 51 (2013) 5292–5308.
[26] S.Y. Wang, L. Wang, An estimation of distribution algorithm-based memetic algorithm for the distributed assembly permutation flow-shop scheduling problem, IEEE Trans. Syst. Man Cybern.: Syst. 46 (2016) 139–149.
[27] J. Lin, S. Zhang, An effective hybrid biogeography-based optimization algorithm for the distributed assembly permutation flow-shop scheduling problem, Comput. Ind. Eng. 97 (2016) 128–136.
[28] E.K. Burke, M. Gendreau, M. Hyde, G. Kendall, G. Ochoa, E. Özcan, R. Qu, Hyper-heuristics: a survey of the state of the art, J. Oper. Res. Soc. 64 (2013) 1695–1724.
[29] J. Branke, S. Nguyen, C.W. Pickardt, M. Zhang, Automated design of production scheduling heuristics: a review, IEEE Trans. Evolut. Comput. 20 (2016) 110–124.
[30] B. Dong, L. Jiao, J. Wu, A two-phase knowledge based hyper-heuristic scheduling algorithm in cellular system, Knowl.-Based Syst. 88 (2015) 244–252.
[31] G. Koulinas, L. Kotsikas, K. Anagnostopoulos, A particle swarm optimization based hyper-heuristic algorithm for the classic resource constrained project scheduling problem, Inf. Sci. 277 (2014) 680–693.
[32] S. Salcedo-Sanz, J.M. Matías-Román, S. Jiménez-Fernández, A. Portilla-Figueras, L. Cuadra, An evolutionary-based hyper-heuristic approach for the Jawbreaker puzzle, Appl. Intell. 40 (2014) 404–414.
[33] J. Gascón-Moreno, S. Salcedo-Sanz, B. Saavedra-Moreno, L. Carro-Calvo, A. Portilla-Figueras, An evolutionary-based hyper-heuristic approach for optimal construction of group method of data handling networks, Inf. Sci. 247 (2013) 94–108.
[34] K. Anwar, A.T. Khader, M.A. Al-Betar, M.A. Awadallah, Harmony search-based hyper-heuristic for examination timetabling, in: Proceedings of the 2013 IEEE 9th International Colloquium on Signal Processing and its Applications, Kuala Lumpur, Malaysia, 2013, pp. 176–181.
[35] Rajni, I. Chana, Bacterial foraging based hyper-heuristic for resource scheduling in grid computing, Future Gener. Comput. Syst. 29 (2014) 751–762.
[36] P. Civicioglu, Backtracking search optimization algorithm for numerical optimization problems, Appl. Math. Comput. 219 (2013) 8121–8144.
[37] J. Lin, Oppositional backtracking search optimization algorithm for parameter identification of hyperchaotic systems, Nonlinear Dyn. 80 (2015) 209–219.
[38] K. Bhattacharjee, A. Bhattacharya, S. Halder nee Dey, Backtracking search optimization based economic environmental power dispatch problems, Int. J. Electr. Power Energy Syst. 73 (2015) 830–842.
[39] Q. Lin, L. Gao, X. Li, C. Zhang, A hybrid backtracking search algorithm for permutation flow-shop scheduling problem, Comput. Ind. Eng. 85 (2015) 437–446.
[40] C. Zhang, Q. Lin, L. Gao, X. Li, Backtracking search algorithm with three constraint handling methods for constrained optimization problems, Expert Syst. Appl. 42 (2015) 7831–7845.
[41] M. Modiri-Delshad, S.H. Aghay Kaboli, E. Taslimi-Renani, N.A. Rahim, Backtracking search algorithm for solving economic dispatch problems with valve-point effects and multiple fuel options, Energy 116 (Part 1) (2016) 637–649.
[42] B. Naderi, R. Tavakkoli-Moghaddam, M. Khalili, Electromagnetism-like mechanism and simulated annealing algorithms for flowshop scheduling problems minimizing the total weighted tardiness and makespan, Knowl.-Based Syst. 23 (2010) 77–85.
[43] R. Ruiz, C. Maroto, J. Alcaraz, Two new robust genetic algorithms for the flowshop scheduling problem, Omega 34 (2006) 461–476.
[44] D.C. Montgomery, Design and Analysis of Experiments, John Wiley & Sons, 2008.
[45] I. Boussaid, A. Chatterjee, P. Siarry, M. Ahmed-Nacer, Biogeography-based optimization for constrained optimization problems, Comput. Oper. Res. (2012).