Swarm and Evolutionary Computation (2015), http://dx.doi.org/10.1016/j.swevo.2015.01.002

A genetic algorithm for unconstrained multi-objective optimization

Qiang Long (a), Changzhi Wu (b), Tingwen Huang (c), Xiangyu Wang (b,d)

(a) School of Science, Southwest University of Science and Technology, Mianyang, China
(b) Australasian Joint Research Centre for Building Information Modelling, School of Built Environment, Curtin University, Australia
(c) Texas A&M University at Qatar, Doha, P.O. Box 23874, Qatar
(d) Department of Housing and Interior Design, Kyung Hee University, Seoul, South Korea

Article history: Received 29 August 2014; Received in revised form 18 November 2014; Accepted 19 January 2015.

Abstract

In this paper, we propose a genetic algorithm for unconstrained multi-objective optimization. The multi-objective genetic algorithm (MOGA) is a direct method for multi-objective optimization problems. Compared to traditional multi-objective optimization methods, whose aim is to find a single Pareto solution, MOGA tends to find a representation of the whole Pareto frontier. In the process of solving multi-objective optimization problems using a genetic algorithm, one needs to consider the fitness, diversity and elitism of solutions together. In this paper, more specifically, the optimal sequence method is adapted to evaluate fitness; cell-based density and Pareto-based ranking are combined to achieve diversity; and the elitism of solutions is maintained by greedy selection. To compare the proposed method with others, a numerical performance evaluation system is developed. We test the proposed method on some well-known multi-objective benchmarks and compare its results with other MOGAs'; the results show that the proposed method is robust and efficient.

Keywords: Genetic algorithm; Optimal sequence method; Multi-objective optimization; Numerical performance evaluation

1. Introduction

In this paper, we consider the following multi-objective optimization problem:

$$(\mathrm{MOP}) \qquad \begin{array}{ll} \text{Minimize} & F(x) \\ \text{Subject to} & x \in X, \end{array} \qquad (1)$$

where $F(x) = (f_1(x), f_2(x), \ldots, f_p(x))^T$ is a vector-valued function, $X = \{x \in \mathbb{R}^n : l_b \le x \le u_b\} \subset \mathbb{R}^n$ is a box set, and $l_b$ and $u_b$ are the lower and upper bounds, respectively. We suppose that the $f_i(x)$, $i = 1, 2, \ldots, p$, are Lipschitz continuous but not necessarily differentiable.

Multi-objective optimization has extensive applications in engineering and management [2,28,29]. Most optimization problems appearing in real-world applications have multiple objectives and can be modeled as multi-objective optimization problems. However, due to theoretical and computational challenges, multi-objective optimization problems are not easy to solve, and the field has therefore attracted a great deal of research over the last few decades.

So far, there are two types of methods for solving multi-objective optimization problems: indirect and direct methods. An indirect method converts the multiple objectives into a single one. One strategy is to combine the objective functions using utility theory or the weighted sum method. The difficulties for such methods are the selection of a utility function or of proper

weights that satisfy the decision-maker's preferences; furthermore, the greatest deficiency of the (linear) weighted sum method is that it cannot recover the concave part of the Pareto frontier. Another indirect method is to formulate all the objectives but one as constraints. However, it is not easy to determine the upper bounds of these objectives: small upper bounds could exclude some Pareto solutions, while large upper bounds could enlarge the objective function value space and lead to sub-Pareto solutions. Additionally, an indirect method can only obtain a single Pareto solution in each run. In practical applications, however, decision-makers often prefer a number of Pareto solutions, so that they can choose a strategy according to their preferences.

Direct methods devote themselves to exploring the entire set of Pareto solutions or a representative subset. However, it is extremely hard or impossible to obtain the entire set of Pareto solutions for most multi-objective optimization problems, except in some simple cases. Therefore, stepping back to a representative subset is preferred. The genetic algorithm (GA), as a population-based algorithm, is a good choice to achieve this goal. The generic single-objective genetic algorithm can be modified to find a set of multiple non-dominated solutions in a single run. The ability of the genetic algorithm to simultaneously search different regions of a solution space makes it possible to find a diverse set of solutions for difficult problems. The crossover operator of the genetic algorithm can exploit structures of good solutions with respect


to different objectives, which, in turn, creates new non-dominated solutions in unexplored parts of the Pareto frontier. In addition, a multi-objective genetic algorithm does not require the user to prioritize, scale, or weight the objectives. Therefore, the genetic algorithm is one of the most popular metaheuristic approaches for solving multi-objective optimization problems [18,24,36].

The first multi-objective optimization method based on the genetic algorithm, called the vector evaluated GA (VEGA), was proposed by Schaffer [35]. Afterwards, several multi-objective evolutionary algorithms were developed, such as the Multi-objective Genetic Algorithm (MOGA) [6], Niched Pareto Genetic Algorithm (NPGA) [15], Weight-based Genetic Algorithm (WBGA) [13], Random Weighted Genetic Algorithm (RWGA) [31], Nondominated Sorting Genetic Algorithm (NSGA) [38], Strength Pareto Evolutionary Algorithm (SPEA) [52], improved SPEA (SPEA2) [51], Pareto-Archived Evolution Strategy (PAES) [21], Pareto Envelope-based Selection Algorithm (PESA) [3], Nondominated Sorting Genetic Algorithm-II (NSGA-II) [5], Multi-objective Evolutionary Algorithm Based on Decomposition (MOEA/D) [1,46,47] and the Indicator-Based Evolutionary Algorithm (IBEA) [34].

There are three basic issues [50] in solving multi-objective optimization problems using the genetic algorithm:

1. Fitness: Solutions whose objective values are close to the real Pareto frontier should be selected as parents of the next generation. This gives rise to the task of ranking candidate solutions in each generation.
2. Diversity: The obtained subset of Pareto solutions should distribute uniformly over the real Pareto frontier. This reveals the true tradeoff of the multi-objective optimization problem for decision-makers.
3. Elitism: In solving single-objective optimization problems with the genetic algorithm, the best candidate solution is always kept to the next generation. We extend this idea to the multi-objective case. However, the extension is not straightforward, since a large number of candidate solutions are involved in a multi-objective genetic algorithm.

The aim of this paper is to introduce new techniques to tackle the issues mentioned above. The rest of the paper is organized as follows. In Section 2, we review some basic definitions of multi-objective optimization and the process of the genetic algorithm. In Section 3, we propose an improved genetic algorithm for multi-objective optimization problems. In Section 4, an evaluation system for the numerical performance of multi-objective genetic algorithms is developed. Some simulation studies are carried out in Section 5. Section 6 concludes the paper.

2. Preliminaries

In this section, we first review some definitions and theorems in multi-objective optimization, and then introduce the general procedure of the genetic algorithm.

2.1. Definitions in multi-objective optimization

First of all, we present the following notations, which are often used in vector optimization. Given two vectors $x = (x_1, x_2, \ldots, x_n)^T$ and $y = (y_1, y_2, \ldots, y_n)^T \in \mathbb{R}^n$, then

- $x = y \Leftrightarrow x_i = y_i$ for all $i = 1, 2, \ldots, n$;
- $x < y \Leftrightarrow x_i < y_i$ for all $i = 1, 2, \ldots, n$;
- $x \le y \Leftrightarrow x_i \le y_i$ for all $i = 1, 2, \ldots, n$, and there is at least one $i \in \{1, 2, \ldots, n\}$ such that $x_i < y_i$, i.e., $x \ne y$;
- $x \leqq y \Leftrightarrow x_i \le y_i$ for all $i = 1, 2, \ldots, n$.

The relations "$>$", "$\ge$" and "$\geqq$" can be defined similarly. In this paper, when $x \le y$ we say that $x$ dominates $y$, or that $y$ is dominated by $x$.

Definition 2.1. Suppose that $X \subset \mathbb{R}^n$ and $x^* \in X$. If $x^* \leqq x$ for any $x \in X$, then $x^*$ is called an absolute optimal point of $X$. An absolute optimal point is an ideal point, but it may not exist.

Definition 2.2. Let $X \subset \mathbb{R}^n$ and $x^* \in X$. If there is no $x \in X$ such that $x \le x^*$ (or $x < x^*$), then $x^*$ is called an efficient point (or weakly efficient point).

The sets of absolute optimal points, efficient points and weakly efficient points of $X$ are denoted by $X_{ab}$, $X_{ep}$ and $X_{wp}$, respectively. For the problem (MOP), $X \subset \mathbb{R}^n$ is called the decision variable space, and its image set $F(X) = \{y \in \mathbb{R}^p \mid y = F(x),\ x \in X\} \subset \mathbb{R}^p$ is called the objective function value space.

Definition 2.3. Suppose that $x^* \in X$. If $F(x^*) \leqq F(x)$ for any $x \in X$, then $x^*$ is called an absolute optimal solution of the problem (MOP). The set of absolute optimal solutions is denoted by $S_{as}$. The concept of the absolute optimal solution is a direct extension of that for single-objective optimization. It is the ideal solution, but in most cases it does not exist.

Definition 2.4. Suppose that $x^* \in X$. If there is no $x \in X$ such that $F(x) \le F(x^*)$ (or $F(x) < F(x^*)$), i.e., $F(x^*)$ is an efficient point (or weakly efficient point) of the objective function value space $F(X)$, then $x^*$ is called an efficient solution (or weakly efficient solution) of the problem (MOP). The sets of efficient solutions and weakly efficient solutions are denoted by $S_{es}$ and $S_{ws}$, respectively.

Another name for an efficient solution is Pareto solution, a notion introduced by T.C. Koopmans in 1951 [22]. The meaning of a Pareto solution $x^* \in S_{es}$ is that there is no feasible solution $x \in X$ for which every $f_i(x)$ is at least as good as the corresponding $f_i(x^*)$ and at least one is strictly better; in other words, $x^*$ is a best solution in the sense of "$\le$". Another intuitive interpretation is that a Pareto solution cannot be improved with respect to any objective without worsening at least one of the other objectives. The set of Pareto solutions is denoted by $P^*$; its image set $F(P^*)$ is called the Pareto frontier, denoted by $PF^*$.
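To make the order relations concrete, the following is a minimal Python sketch (our own illustration, not part of the paper; the function names are hypothetical) of a dominance test and a brute-force efficient-point filter under the convention above:

```python
import numpy as np

def dominates(fx, fy):
    """True if objective vector fx dominates fy, i.e. fx <= fy
    componentwise with at least one strict inequality."""
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx <= fy) and np.any(fx < fy))

def efficient_mask(F):
    """Brute-force O(N^2) filter: True for rows of F (one objective
    vector per candidate) that no other row dominates."""
    N = len(F)
    mask = np.ones(N, dtype=bool)
    for i in range(N):
        for j in range(N):
            if i != j and dominates(F[j], F[i]):
                mask[i] = False
                break
    return mask

# Example with F(x) = (x^2, (x-2)^2) sampled on a grid (problem SCH):
xs = np.linspace(-1.0, 3.0, 9)
F = np.column_stack([xs**2, (xs - 2.0)**2])
print(xs[efficient_mask(F)])   # Pareto-optimal samples lie in [0, 2]
```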

2.2. Genetic algorithm

The genetic algorithm is one of the most important evolutionary algorithms. It was introduced by John Holland in the 1960s and then developed by his students and colleagues at the University of Michigan between the 1960s and 1970s [14]. Over the last two decades, the genetic algorithm has been enriched by a large body of literature, such as [9,10,12,19]. Nowadays, various genetic algorithms are applied in different areas, for example, mathematical programming, combinatorial optimization, automatic control and image processing.

Suppose that P(t) and O(t) represent the parents and offspring of the t-th generation, respectively. Then the general structure of the genetic algorithm can be described by the following pseudocode.

General structure of the genetic algorithm.

1 Initialization
  1.1 Generate the initial population P(0),
  1.2 Set the crossover rate, mutation rate and maximal generation number,
  1.3 Let t ← 0.
2 While the maximal generation number is not reached, do
  2.1 Crossover operator: generate O1(t),
  2.2 Mutation operator: generate O2(t),
  2.3 Evaluate O1(t) and O2(t): compute the fitness function,
  2.4 Selection operator: build the next population,
  2.5 t ← t + 1, go to 2.1.
End

From the pseudocode, we can see that there are three important operators in a general genetic algorithm: the crossover operator, the mutation operator and the selection operator. Usually, a different encoding leads to different operators.
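As an illustration of how the encoding determines the operators, here is a minimal sketch of a real-coded crossover/mutation pair (our own example of standard operators for a box-constrained real encoding; it is not the operator pair used by the authors):

```python
import numpy as np

rng = np.random.default_rng(0)

def blend_crossover(p1, p2, alpha=0.5):
    """BLX-alpha style crossover for real-coded chromosomes: sample each
    child gene uniformly from an interval stretched around the parents."""
    lo, hi = np.minimum(p1, p2), np.maximum(p1, p2)
    span = hi - lo
    return rng.uniform(lo - alpha * span, hi + alpha * span)

def gaussian_mutation(x, lb, ub, rate=0.25, sigma=0.1):
    """Perturb each gene with probability `rate` by Gaussian noise scaled
    to the box [lb, ub], then clip back into the box."""
    x = x.copy()
    hit = rng.random(x.shape) < rate
    x[hit] += rng.normal(0.0, sigma, size=x.shape)[hit] * (ub - lb)[hit]
    return np.clip(x, lb, ub)

lb, ub = np.array([-4.0] * 3), np.array([4.0] * 3)   # FON's box, for example
child = blend_crossover(rng.uniform(lb, ub), rng.uniform(lb, ub))
print(gaussian_mutation(child, lb, ub))
```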

3. A multi-objective genetic algorithm

In this section, we present a genetic algorithm for solving problem (1). We present this method through the three aspects discussed in Section 1, i.e., the fitness, diversity and elitism of solutions.

3.1. Fitness

For a single-objective function, fitness is normally assigned as its function value. However, assigning fitness for a multi-objective function is not straightforward. So far, there are three typical approaches. The first is the weighted sum approach [31,32], which converts the multiple objectives into a single objective using normalized weights $\lambda_i > 0$, $i = 1, 2, \ldots, p$, with $\sum_{i=1}^{p} \lambda_i = 1$; the choice of the weight parameters is not an easy task for this approach. The second is altering objective functions [35,51], which randomly divides the current population into $p$ equal sub-populations $P_1, P_2, \ldots, P_p$; each solution in sub-population $P_i$ is then assigned a fitness value based on objective function $f_i$. In fact, this approach is a straightforward extension of the single-objective genetic algorithm. The last is the Pareto-ranking approach [5,8,38], which is a direct application of the definition of Pareto solutions: the population is ranked according to a dominance rule, and each solution is assigned a fitness value based on its rank in the population rather than on its actual objective function value. In the following, we present a new ranking strategy based on the optimal sequence method [25].

Definition 3.1. Let $P = \{1, 2, \ldots, p\}$ and $N = \{1, 2, \ldots, n\}$. For any $x^i, x^j \in X$, define

$$a_{ijl} = \begin{cases} 1, & f_l(x^i) < f_l(x^j), \\ 0.5, & f_l(x^i) = f_l(x^j), \\ 0, & f_l(x^i) > f_l(x^j) \text{ or } i = j. \end{cases} \qquad (2)$$

Then

$$a_{ij} = \sum_{l \in P} a_{ijl}$$

is called the optimal number of $x^i$ corresponding to $x^j$ over all objectives. Furthermore, $K_i = \sum_{j \in N} a_{ij}$ is defined as the total optimal number of $x^i$ corresponding to all the other solutions over all objectives.

Table 1
Table of optimal numbers.

x^i \ x^j   x^1     x^2     ⋯   x^j     ⋯   x^n     K_i
x^1         0       a_12    ⋯   a_1j    ⋯   a_1n    K_1
x^2         a_21    0       ⋯   a_2j    ⋯   a_2n    K_2
⋮           ⋮       ⋮           ⋮           ⋮       ⋮
x^i         a_i1    a_i2    ⋯   a_ij    ⋯   a_in    K_i
⋮           ⋮       ⋮           ⋮           ⋮       ⋮
x^n         a_n1    a_n2    ⋯   a_nj    ⋯   0       K_n

Obviously, for a minimization problem, a bigger optimal number reflects a better point. Therefore, the optimal numbers can be considered as a basis for ranking the current population. Based on this observation, we propose the following algorithm to rank the current population.

Algorithm 3.1. Optimal Sequence Method (OSM).

Step 1: Input: the current population and their objective function values.
Step 2: Compute optimal numbers: compute all the optimal numbers and total optimal numbers in Table 1 according to Eq. (2).
Step 3: Rank the solutions: re-arrange the order of the solutions according to the decreasing order of the total optimal numbers $K_i$. More precisely, denote

$$K_{\alpha_1} = \max_{i \in N} \{K_i\}, \qquad K_{\alpha_2} = \max_{i \in N \setminus \{\alpha_1\}} \{K_i\},$$

and so on. Then the solutions $x^{\alpha_1}$, $x^{\alpha_2}$, etc., corresponding to $K_{\alpha_1}$, $K_{\alpha_2}$, etc., are called the best solution, the second best solution, and so forth. The new order is called the optimal order.

It is worth mentioning that the optimal sequence method does not necessarily rank Pareto solutions in the front positions; rather, it ranks those solutions which are more reasonable in the foremost positions. This is different from the Pareto-ranking approach.

Lemma 3.1. Suppose that $x^i, x^j \in X$. If $F(x^i) \le F(x^j)$, then $a_{ij} > a_{ji}$.

Proof. Let
$$A = \{l \in P \mid f_l(x^i) < f_l(x^j)\}, \quad a = |A|; \qquad B = \{l \in P \mid f_l(x^i) = f_l(x^j)\}, \quad b = |B|,$$
where $|\cdot|$ denotes the number of elements of a set. Obviously, we have $a > 0$, $b \ge 0$ and $a + b = p$. Then
$$a_{ijl} = \begin{cases} 1, & l \in A, \\ 0.5, & l \in B, \end{cases} \qquad \text{and} \qquad a_{jil} = \begin{cases} 0, & l \in A, \\ 0.5, & l \in B. \end{cases}$$
Therefore,
$$a_{ij} = \sum_{l \in A} a_{ijl} + \sum_{l \in B} a_{ijl} = a + 0.5b \qquad \text{and} \qquad a_{ji} = \sum_{l \in A} a_{jil} + \sum_{l \in B} a_{jil} = 0.5b.$$
Since $a > 0$, $a_{ij} > a_{ji}$. □

Lemma 3.2. Suppose that $x^i, x^j \in X$. If $F(x^i) \le F(x^j)$, then, for any $x^k$ ($k \ne i, j$, $k \in N$), $a_{ik} \ge a_{jk}$.

Proof. For any $x^k$ ($k \ne i, j$, $k \in N$), let
$$A = \{l \in P \mid f_l(x^k) < f_l(x^i)\}, \quad a = |A|; \qquad B = \{l \in P \mid f_l(x^k) > f_l(x^i)\}, \quad b = |B|; \qquad C = \{l \in P \mid f_l(x^k) = f_l(x^i)\}, \quad c = |C|.$$
Then we have $a, b, c \ge 0$ and $a + b + c = p$.

- When $l \in A$: since $F(x^i) \le F(x^j)$, we have $f_l(x^k) < f_l(x^i)$ and $f_l(x^k) < f_l(x^j)$. Therefore,
$$a^1_{ik} = \sum_{l \in A} a_{ikl} = 0 \cdot a = 0 \qquad (3)$$
and
$$a^1_{jk} = \sum_{l \in A} a_{jkl} = 0 \cdot a = 0. \qquad (4)$$

- When $l \in B$: we have $f_l(x^k) > f_l(x^i)$, and thus
$$a^2_{ik} = \sum_{l \in B} a_{ikl} = 1 \cdot b = b. \qquad (5)$$
On the other hand, let
$$M_1 = \{l \in B \mid f_l(x^k) < f_l(x^j)\}, \quad m_1 = |M_1|; \qquad M_2 = \{l \in B \mid f_l(x^k) = f_l(x^j)\}, \quad m_2 = |M_2|; \qquad M_3 = \{l \in B \mid f_l(x^k) > f_l(x^j)\}, \quad m_3 = |M_3|.$$
Obviously, $m_1, m_2, m_3 \ge 0$ and $m_1 + m_2 + m_3 = b$. Therefore,
$$a^2_{jk} = m_1 \cdot 0 + m_2 \cdot 0.5 + m_3 \cdot 1 \le b. \qquad (6)$$

- When $l \in C$: we have
$$a^3_{ik} = \sum_{l \in C} a_{ikl} = 0.5 \cdot c. \qquad (7)$$
On the other hand, let
$$M_1 = \{l \in C \mid f_l(x^k) < f_l(x^j)\}, \quad m_1 = |M_1|; \qquad M_2 = \{l \in C \mid f_l(x^k) = f_l(x^j)\}, \quad m_2 = |M_2|.$$
Note that it is not possible to have $f_l(x^k) > f_l(x^j)$; otherwise we would have $f_l(x^i) > f_l(x^j)$, which is contrary to $F(x^i) \le F(x^j)$. Again, we have $m_1, m_2 \ge 0$ and $m_1 + m_2 = c$. Thus,
$$a^3_{jk} = \sum_{l \in C} a_{jkl} = m_1 \cdot 0 + m_2 \cdot 0.5 \le 0.5c. \qquad (8)$$

In light of Eqs. (3)–(8), we have
$$a_{ik} = \sum_{\alpha=1}^{3} a^{\alpha}_{ik} \ \ge\ a_{jk} = \sum_{\alpha=1}^{3} a^{\alpha}_{jk}. \qquad \square$$

Theorem 3.1. If $K_e = \max_{i \in N} \{K_i\}$, then the solution $x^e$ corresponding to $K_e$ must be an efficient solution (Pareto solution).

Proof. If $x^e$ is not an efficient solution, then there exists an $x^s \in X$ such that
$$F(x^s) \le F(x^e).$$
Therefore, according to Lemma 3.1,
$$a_{es} < a_{se}. \qquad (9)$$
Due to Lemma 3.2, we have, for any $x^j$ ($j \in N$, $j \ne e, s$),
$$a_{ej} \le a_{sj}. \qquad (10)$$
Based on (9), (10) and $a_{ee} = a_{ss} = 0$, we have
$$K_e = \sum_{j \in N} a_{ej} < \sum_{j \in N} a_{sj} = K_s,$$
which is a contradiction to the assumption $K_e = \max_{i \in N} \{K_i\}$. The proof is completed. □

Based on Theorem 3.1, the following results are obvious.

Corollary 3.1. If $x^a$ is an absolute optimal solution, then $K_a = \max_{j \in N} \{K_j\}$.

Corollary 3.2. If $x^e$ corresponds to $K_e = \max_{j \in N} \{K_j\}$ and $x^s$ corresponds to $K_s = \max_{j \in N \setminus \{e\}} \{K_j\}$, then $x^s$ is an efficient solution of the population without $x^e$.

Theorem 3.1 and Corollary 3.2 reveal the rationality of the optimal sequence method: the best solution must be an efficient solution, and the second best solution, although possibly not an efficient solution itself, is an efficient solution of the population once the best one is removed. This procedure can be continued until the last solution is reached. It is therefore reasonable to rank the population based on the total optimal numbers and to assign fitness according to the ranking index.
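A compact Python sketch of Algorithm 3.1 under these definitions (a direct $O(pN^2)$ translation; the vectorized pairwise comparison is our own implementation choice, not the authors' code):

```python
import numpy as np

def optimal_sequence_rank(F):
    """Rank a population by the optimal sequence method (Algorithm 3.1).
    F is an (N, p) array of objective values, one row per solution.
    Returns the solution indices in optimal order (best first) and the
    total optimal numbers K_i."""
    N = len(F)
    # a[i, j, l] = 1 if f_l(x^i) < f_l(x^j), 0.5 if equal, 0 otherwise (Eq. (2))
    less = (F[:, None, :] < F[None, :, :]).astype(float)
    equal = (F[:, None, :] == F[None, :, :]).astype(float)
    a3 = less + 0.5 * equal
    a3[np.arange(N), np.arange(N), :] = 0.0   # a_ijl = 0 when i = j
    a = a3.sum(axis=2)                        # optimal numbers a_ij
    K = a.sum(axis=1)                         # total optimal numbers K_i
    order = np.argsort(-K, kind="stable")     # decreasing K_i
    return order, K

# Example on problem SCH, F(x) = (x^2, (x-2)^2):
xs = np.linspace(-1.0, 3.0, 9)
F = np.column_stack([xs**2, (xs - 2.0)**2])
order, K = optimal_sequence_rank(F)
print(xs[order[0]])   # by Theorem 3.1, the top-ranked solution is efficient
```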

3.2. Diversity

In order to obtain uniformly distributed solutions, it is important to maintain the diversity of populations. Without diversity-maintaining measures, the population tends to form relatively few clusters in a multi-objective genetic algorithm; this phenomenon is known as genetic drift, and several approaches have been devised to prevent it. Fitness sharing [6,11] encourages the search in unexplored sections of the Pareto frontier by artificially reducing the fitness of solutions in densely populated areas: densely populated areas are identified, and a penalty method is incorporated so as to spread out the solutions located in them. The difficulty of this method is how to identify the dense areas and measure their density. The crowding distance approach [5] aims to obtain a uniform spread of solutions along the best-known Pareto frontier; its advantage is that the population density around a solution can be computed without requiring a user-defined parameter. The cell-based density approach [20,21,30,44] divides the objective space into p-dimensional cells. The number of solutions in each cell is defined as the density of the cell, and the density of a solution is equal to the density of the cell in which the solution is located. An efficient approach to dynamically divide the objective function space into cells was proposed by Lu and Yen [30,44]; its main advantage is that a global density map of the objective function space is obtained as a by-product of the density calculation, while keeping a computational efficiency similar to other approaches.

In this paper, we design a new selection operator based on the combination of the cell-based density approach and the Pareto-ranking approach. This selection operator not only maintains the


density of the solutions but also guarantees that all Pareto solutions are preserved to the next generation.

Algorithm 3.2. Cell-based Pareto selection operator.

Step 1: Input the current ranked population P (the solutions are first sorted by Algorithm 3.1) and their corresponding objective function values F(P). Set α as the number of sections for each dimension.
Step 2: Separate each dimension of the objective function space into α sections using the width

$$w_k = \frac{z_k^{\max} - z_k^{\min}}{\alpha}, \qquad k = 1, 2, \ldots, p,$$

where $z_k^{\max}$ and $z_k^{\min}$ are the maximum and minimum values of objective function k in the current population, respectively.
Step 3: Classify the objective function values F(P) into cells, and measure the density of each cell and the density of each point in F(P).
Step 4: Select solutions one by one according to the following substeps until the population is fully filled.
  Step 4.1: Successively check each cell; if the density of a cell is not zero, then choose the first best solution in the cell (the order was already sorted by the optimal sequence method). Let the set E be the collection of all the solutions selected.
  Step 4.2: Identify the Pareto frontier of the set E and collect its points in a set denoted by PE.
  Step 4.3: Maintain the solutions corresponding to the points in PE (i.e., S = {x ∈ E | F(x) = y, y ∈ PE}) to the next generation.
  Step 4.4: Meanwhile, for each point in PE, reduce the density of the cell in which it is located by 1, marking this point as selected (so that it cannot be considered as a candidate again when the loop goes back to Step 4.1).
  Step 4.5: Check the number of solutions kept for the next generation. If the number reaches the population size, then stop and return; otherwise, go to Step 4.1.

Remark 3.1. There is only one user-determined parameter in this selection operator: α, which determines the size of the cells. Evidently, a large α leads to small cells. Thus, enlarging α will increase the diversity of the population, but will lead to a heavier computational burden. For simplicity, we fix the number of sections to be the same for all dimensions of the objective function value space, instead of applying a different number of sections for each dimension [30,44].

Remark 3.2. In Step 4.1, the solutions are not chosen strictly according to their optimal numbers, since solutions with high optimal numbers may be located in the same cell; only the best solution in each such cell can be selected. However, we can guarantee that each solution chosen from its cell is the most reasonable one there.

Remark 3.3. In Step 4.2, only the efficient points (Pareto points) of E are maintained to the next generation. During the selection process, we thus not only guarantee the diversity of the next generation but also ensure that the solutions selected to be parents of the next generation are better ones.

To illustrate the procedure of Algorithm 3.2, let us investigate a simple example.

Fig. 1. Optimal numbers and cell-based classifying of the population.

Fig. 2. Choose the best point of each cell and collect them as a candidate set E.

Fig. 1 shows a population of solutions and their optimal sequence orders assigned by Algorithm 3.1. The solutions are classified into two-dimensional cells. Clearly, some cells contain many solutions, such as cells A, B and C, while others contain few, such as D, E and F. We then select the best point from each cell and collect these points into a candidate set E, as depicted in Fig. 2. Note that for cells containing more than one solution, only the one with the best optimal number can be selected: for instance, solutions 1, 3 and 6 are selected from cells A, B and C, respectively. For cells containing only one solution, that solution is selected directly, such as solutions 22, 29 and 30 from cells D, E and F, respectively. Finally, the Pareto frontier points of the set E are identified and maintained to the next generation (see Fig. 3). We can see from the figure that the obtained solutions are not only better solutions but are also well distributed.

Fig. 3. Identify the Pareto frontier PE of the candidate set E.

Suppose that the dimension of the vector function F is p and the population size is N. Then, for each component function of F, one needs $\frac{1}{2}N^2$ comparisons, so the computational complexity of Algorithm 3.1 is $O(pN^2)$. In Algorithm 3.2, in order to allocate a cell to one solution, one needs $\alpha p$ comparisons: for each of the p dimensions of F, we need α comparisons in the worst case. Since there are N individuals in a population, the computational complexity of Algorithm 3.2 is $O(\alpha p N)$.
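The cell allocation that drives this bookkeeping can be sketched in a few lines (our own minimal version of the cell-based density computation in Algorithm 3.2, without the selection loop; the flat-dimension guard is our assumption):

```python
import numpy as np
from collections import Counter

def cell_density(F, alpha):
    """Assign each objective vector to a cell (alpha sections per
    dimension, as in Step 2 of Algorithm 3.2) and return the cell index
    of every point plus the density of each cell."""
    zmin, zmax = F.min(axis=0), F.max(axis=0)
    width = (zmax - zmin) / alpha
    width[width == 0] = 1.0             # guard against a flat dimension
    idx = np.floor((F - zmin) / width).astype(int)
    idx = np.clip(idx, 0, alpha - 1)    # boundary points go to the last cell
    cells = [tuple(row) for row in idx]
    density = Counter(cells)            # number of solutions per cell
    return cells, density

F = np.array([[0.1, 0.9], [0.12, 0.88], [0.5, 0.5], [0.9, 0.1]])
cells, density = cell_density(F, alpha=4)
print(density)   # the two nearby points share one cell: density 2
```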

3.3. Elitism

Elitism in the single-objective genetic algorithm means that the best individual in a population always survives to the next generation. In the process of solving multi-objective optimization problems using the genetic algorithm, all the nondominated solutions should be considered as elite solutions, so the number of possible elite solutions becomes very large. The early multi-objective genetic algorithms did not use elitism at all. Since genetic algorithms with elitist strategies outperform those without, elitism has been incorporated into recent multi-objective genetic algorithms and their variations [4,42,52]. For these methods, two strategies are used to implement elitism [17]: (i) maintaining elitist solutions in the population, and (ii) storing elitist solutions in an external secondary list and reintroducing them into the population by a certain strategy.

Our strategy to implement elitism is simple. We use an extra set, say B, to store the best population achieved so far. At each iteration, we put the population obtained from the last generation together with B to construct a new set, say C. Then the optimal sequence method (Algorithm 3.1) is applied to rank all the solutions in C. Finally, we update the best population B by choosing the first population-size-many best solutions in the ranked C. This elitism strategy is summarized in the following algorithm.

Algorithm 3.3. Optimal sequence elitism.

Step 1: Input the current population P obtained by the selection operator (Algorithm 3.2), the current best population B, and the population size. Let C = B ∪ P.
Step 2: Apply the optimal sequence method (Algorithm 3.1) to the set C. Denote the ranked set by C′.
Step 3: Update B by assigning to it the first population size of best solutions in C′.

Remark 3.4. The initial best population B is equal to the initial population.

3.4. Optimal sequence multi-objective GA (OSMGA)

Based on the algorithms provided above, we are ready to present the improved multi-objective GA:

Algorithm 3.4. Optimal sequence multi-objective GA (OSMGA).

1 Initialization
  1.1 Generate the initial population P(0),
  1.2 Initialize the best population B = P(0),
  1.3 Set the crossover rate, mutation rate and maximal generation number,
  1.4 Let t ← 0.
2 While the maximal generation number is not reached, do
  2.1 Crossover operator: generate O1(t),
  2.2 Mutation operator: generate O2(t),
  2.3 Evaluate objective function values: construct the selection pool S(t) = O1(t) ∪ O2(t) and compute the multi-objective function values FS(t) = {F(x) | x ∈ S(t)},
  2.4 Rank the solutions in the selection pool: apply the optimal sequence method (Algorithm 3.1) to rank the solutions in S(t),
  2.5 Selection operator: apply the cell-based Pareto selection operator (Algorithm 3.2) to construct the parent population P(t + 1),
  2.6 Elitism: update the best population B by applying optimal sequence elitism (Algorithm 3.3),
  2.7 t ← t + 1, go to 2.1.
End
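Putting the pieces together, the main loop of Algorithm 3.4 can be sketched as below. This is a schematic of the flow only, not the authors' code: it assumes callables `crossover`, `mutate` and `cell_select` with the signatures shown, plus the `optimal_sequence_rank` from the earlier sketch.

```python
import numpy as np

def osmga(F_eval, lb, ub, pop_size, max_gen, crossover, mutate,
          optimal_sequence_rank, cell_select, rng):
    """Schematic OSMGA loop: crossover/mutation build a selection pool,
    Algorithm 3.1 ranks it, Algorithm 3.2 selects the next parents, and
    Algorithm 3.3 keeps an elite population B."""
    P = rng.uniform(lb, ub, size=(pop_size, len(lb)))    # 1.1 initial population
    B = P.copy()                                         # 1.2 best population B = P(0)
    for t in range(max_gen):                             # 2   main loop
        O1 = crossover(P, rng)                           # 2.1
        O2 = mutate(P, lb, ub, rng)                      # 2.2
        S = np.vstack([O1, O2])                          # 2.3 selection pool S(t)
        FS = np.apply_along_axis(F_eval, 1, S)
        order, _ = optimal_sequence_rank(FS)             # 2.4 rank the pool
        P = cell_select(S[order], FS[order], pop_size)   # 2.5 next parents
        C = np.vstack([B, P])                            # 2.6 elitism (Algorithm 3.3)
        FC = np.apply_along_axis(F_eval, 1, C)
        order, _ = optimal_sequence_rank(FC)
        B = C[order[:pop_size]]                          #     keep best pop_size of B ∪ P
    return B
```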

4. An evaluation system for MOGAs

In order to show the improvement brought by a new method, it is necessary to compare its numerical performance with that of other methods. For multi-objective genetic algorithms, however, numerical performance evaluation is not an evident task, because such an algorithm delivers a population of approximate Pareto solutions, so the assessment has to be based on the performance of all the solutions rather than on any single one of them. It is therefore necessary to develop a reasonable evaluation system for multi-objective genetic algorithms, and in this section we develop one. As pointed out by Deb [4], multi-objective optimization is a mixture of two optimization strategies: (i) minimizing the distance of the obtained solutions from the true optimal solutions, and (ii) maximizing the spread of solutions in the obtained set. The former task is similar to the usual task in single-objective optimization, while the latter is similar to finding multiple optimal solutions in multi-modal optimization. Based on this understanding, we develop a system that evaluates the distance between the image set of the obtained solutions and the real Pareto frontier in the objective function value space, and, furthermore, the distributional degree of the obtained solutions. In the following, we provide two performance metrics corresponding to the closeness and the diversity of the image set of the obtained solutions. For convenience, we use Q to represent the solution set obtained by a multi-objective genetic algorithm.

4.1. Closeness metric

Veldhuizen [41] introduced a metric called the Error Ratio, which simply counts the solutions in Q that belong to the Pareto frontier $PF^*$ and calculates their ratio with respect to the total number of solutions. Mathematically,

$$ER = \frac{\sum_{i=1}^{|Q|} e_i}{|Q|}, \qquad (11)$$

where

$$e_i = \begin{cases} 1 & \text{if } x^i \in Q \text{ and } x^i \in PF^*, \\ 0 & \text{if } x^i \in Q \text{ and } x^i \notin PF^*, \end{cases}$$

and $|Q|$ represents the cardinality of the set Q. This error ratio metric is simple and evident. However, the requirement that $x^i$ belong exactly to $PF^*$ is too strict: by formula (11), $e_i = 0$ even when $x^i$ is very close to, but not on, $PF^*$, which fails to represent the closeness between the real Pareto frontier and the image set of the obtained solutions. Although a threshold δ was introduced in some later literature, the determination of δ remains a problem. In our modified error ratio, we introduce the following function:

$$e_i = e^{-\alpha d_i}, \qquad i = 1, 2, \ldots, |Q|,$$

where $\alpha \in [1, 4]$ is a parameter and $d_i$ is the closest distance between the point and the real Pareto frontier. Mathematically,

$$d_i = \min_{P' \in PF^*} d(x^i, P').$$

The distance function $d(\cdot, \cdot)$ can be the Euclidean distance. The modified error ratio is then still expressed as

$$MER = \frac{\sum_{i=1}^{|Q|} e_i}{|Q|}.$$

Since $d_i \in [0, +\infty)$, we have $e_i \in (0, 1]$, which makes $MER \in (0, 1]$. A bigger MER means that the whole solution set is closer to the real Pareto frontier. Note that for some problems, especially those whose Pareto frontier is unknown, the calculation of the distance $d_i$ could be difficult, even impossible; however, for the test problems in this paper, the Pareto frontiers are known.

4.2. Diversity metric

For the diversity of solutions, we expect the solution set to spread as uniformly and extensively as possible. Suppose that we uniformly grid the objective function space into many cubes of equal volume. The solutions then distribute over different cubes; depending on the diversity of the population, some cubes may contain no solution, some exactly one, and some two or more. A cube containing only one solution counts that solution as an individual one. For cubes containing two or more solutions, those solutions are so close to each other that the distribution is poor, so we count all of them as just one. We then calculate the ratio between the number of cubes containing solutions and the total number of solutions; this ratio is used as an evaluation index of the diversity of solutions. The process of this cell-based diversity metric is as follows.

Cell-based diversity metric:

Step 1: Data: input the upper bound (ub) and the lower bound (lb) of the Pareto frontier; set a parameter β as the number of sections for each dimension.
Step 2: Grid the objective function space: divide each dimension of the objective function space into β sections from lb to ub. This grids the objective function space into a cell space; assign each cube an indicator, initially set to 0.
Step 3: Consecutively check each solution of the population; set the indicator of a cell to 1 if one or more solutions are located in the cell.
Step 4: Count the number of cells whose indicator is 1 and assign this number to M; the diversity metric is then evaluated as

$$R = \frac{M}{|Q|}.$$
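Both metrics reduce to a few lines once the Pareto frontier is sampled. The following is our own sketch (the frontier sample, bounds, and the α and β values are illustrative assumptions, not prescriptions from the paper):

```python
import numpy as np

def modified_error_ratio(FQ, PF, alpha=2.0):
    """MER = mean of e_i = exp(-alpha * d_i), where d_i is the distance
    from the i-th obtained objective vector to the frontier sample PF."""
    d = np.sqrt(((FQ[:, None, :] - PF[None, :, :])**2).sum(-1)).min(axis=1)
    return np.exp(-alpha * d).mean()

def cell_diversity(FQ, lb, ub, beta):
    """R = (number of occupied cells) / |Q| on a beta^p grid of the
    objective space between lb and ub."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    width = (ub - lb) / beta
    idx = np.clip(np.floor((FQ - lb) / width).astype(int), 0, beta - 1)
    occupied = {tuple(row) for row in idx}
    return len(occupied) / len(FQ)

# SCH frontier sample: F = (x^2, (x-2)^2) for x in [0, 2]
xs = np.linspace(0.0, 2.0, 200)
PF = np.column_stack([xs**2, (xs - 2.0)**2])
FQ = PF[::20]                              # a hypothetical obtained set
print(modified_error_ratio(FQ, PF))        # close to 1: on the frontier
print(cell_diversity(FQ, [0, 0], [4, 4], beta=10))
```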

Remark 4.1. This diversity metric is evidently sensitive to the number of sections β. On the one hand, the grid of the objective function value space should not be too fine (β chosen as a large number); otherwise almost every solution can be located in a different cell, which is not good for the evaluation of diversity. On the other hand, the grid should not be too rough (β chosen as a small number) either, because then many solutions, even uniformly distributed ones, may be put into the same cell and consequently counted as just one solution. It is reasonable to set β equal to, or a little larger than, the population size; in this way the ratio R is guaranteed to lie between 0 and 1. For a fixed number of solutions, a larger R clearly implies that the solutions occupy more distinct cells, which means a better diversity is obtained. Fig. 4 illustrates an example of the cell-based diversity metric for a hypothetical problem with 20 solutions in total. In the picture, two or more solutions lie in each of the cells A, B, C, D and E, so they are counted as only 5 individuals; adding the other 8 cells that contain just one solution each, the diversity metric for this set of solutions is R = 13/20 = 0.65.

Fig. 4. An example of the cell-based diversity metric.

5. Numerical experiments

In this section, we investigate the numerical performance of OSMGA. First, we apply OSMGA to two multi-objective benchmarks quoted from [5]. Then, we compare OSMGA with the methods proposed at CEC'09 [48] using the test problems proposed for CEC'09 [49]. All the numerical experiments are implemented in MATLAB (2010a) installed on an ACER ASPIRE 4730Z laptop with 2 GB of RAM and a 2.16 GHz CPU.


Table 2
Multi-objective test problems.

Pro.  n  Variable bounds  Objective functions                              Optimal solutions                    Comments
SCH   1  [-5, 10]         f_1(x) = x^2,  f_2(x) = (x - 2)^2                x ∈ [0, 2]                           convex
FON   3  [-4, 4]          f_1(x) = 1 - exp(-Σ_{i=1}^{3} (x_i - 1/√3)^2),   x_1 = x_2 = x_3 ∈ [-1/√3, 1/√3]      nonconvex
                          f_2(x) = 1 - exp(-Σ_{i=1}^{3} (x_i + 1/√3)^2)


Fig. 5. Objective function value space of Problems (a) SCH and (b) FON. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)

5.1. Numerical performance of OSMGA

The test problems applied in this subsection (see Table 2) are quoted from [5]. Problem SCH is a one-dimensional convex problem; its Pareto solutions are $x \in [0, 2]$. Problem FON is a three-dimensional nonconvex problem; its Pareto solutions satisfy $x_1 = x_2 = x_3$, where $x_i \in [-1/\sqrt{3}, 1/\sqrt{3}]$, $i = 1, 2, 3$. Fig. 5 depicts the objective function value spaces of Problems SCH and FON (the red curves represent the Pareto frontiers). From the figure, one can observe that their Pareto frontiers are connected.

We solve these benchmarks by OSMGA and other MOGAs, namely VEGA [35], NSGA [38], NSGAII [5] and NPGA [15], and compare the results using the evaluation system developed in the previous section. In the implementation of OSMGA, the parameters are adjusted according to the scale of the problem. Empirically, if the dimension of the problem is n, then the population size is 2n–5n and the maximal number of generations is 20n–50n; the crossover rate and the mutation rate are 0.4–0.5 and 0.2–0.3, respectively. In Algorithm 3.2, we use 10–15 for the number of sections in each dimension. In the following tables, these notations are used:

- MER: the modified error ratio of the Pareto solutions;
- ave. MER: the average of the modified error ratio;
- std. MER: the standard deviation of the modified error ratio;
- R: the diversity metric of the Pareto solutions;
- ave. R: the average of the diversity metric;
- std. R: the standard deviation of the diversity metric.

For each problem, we run OSMGA independently 100 times. Table 3 reports the statistical results of the experiments.

Table 3
Numerical performance of OSMGA.

Pro.  ave. MER  std. MER  ave. R  std. R
SCH   0.99      0.00      0.50    0.11
FON   0.99      0.00      0.83    0.08

From the table, the average modified error ratio is very close to 1 and its standard deviation is almost 0, which reveals that, for the considered benchmarks, the approximate Pareto frontier obtained by OSMGA lies steadily on the real Pareto frontier. The average and standard deviation of the diversity metric for Problem SCH's solutions are 0.50 and 0.11, respectively, which is not very good; for Problem FON, however, they are 0.83 and 0.08, respectively, which is very promising.

Fig. 6 depicts some intermediate results of Problems SCH and FON solved by OSMGA. From the figures, we can observe that OSMGA converges very fast on these two problems. For Problem SCH, the range of the Pareto frontier is located by the third generation and the run terminates at the tenth generation. For Problem FON, the range of the Pareto frontier is located by the 19th generation and the run terminates at the 30th generation. The obtained approximate Pareto frontier is accurate and uniformly distributed for both problems.

Table 4 shows the numerical results of Problem SCH solved by OSMGA, VEGA, NSGA, NSGAII and NPGA over 10 independent runs. From the table, one can observe that the approximate Pareto frontiers obtained by the different methods are all very close to the real Pareto frontier.



Fig. 6. Intermediate results of Problems (a) SCH and (b) FON solved by OSMGA.

Table 4
Comparison among OSMGA and other algorithms on problem SCH.

Implementation  OSMGA         VEGA          NSGA          NSGAII        NPGA
index           MER    R      MER    R      MER    R      MER    R      MER    R
1               0.99   0.53   0.99   0.06   0.99   0.33   0.99   0.23   0.99   0.33
2               0.99   0.43   0.99   0.13   0.99   0.33   0.99   0.33   0.99   0.33
3               0.99   0.60   0.99   0.06   0.99   0.33   0.99   0.06   0.99   0.33
4               0.99   0.56   0.99   0.13   0.99   0.33   0.99   0.43   0.99   0.33
5               0.99   0.46   0.99   0.13   0.99   0.33   0.98   0.13   0.99   0.33
6               0.99   0.46   0.99   0.13   0.99   0.33   0.98   0.06   0.99   0.33
7               0.99   0.40   0.99   0.13   0.67   0.33   0.99   0.06   0.99   0.33
8               0.99   0.54   0.99   0.13   0.99   0.33   0.99   0.30   0.99   0.33
9               0.99   0.70   0.99   0.06   0.99   0.33   0.99   0.03   0.99   0.33
10              0.99   0.30   0.99   0.06   0.41   0.33   0.98   0.43   0.99   0.33


Fig. 7. Numerical comparison of MOGAs on Problems (a) SCH and (b) FON.

However, the diversity metric of OSMGA is evidently larger than that of the other methods. It is not surprising that the diversity metric of VEGA is lower than that of the other four methods: VEGA, as the first multi-objective genetic algorithm, did not take diversity into account. NSGAII, although not very stable, enjoys a modest diversity metric. Fig. 7(a) depicts the best run of each method. We can observe that the approximate Pareto solutions obtained by these methods are close to the real Pareto frontier. This fits with

the data in Table 4. However, the solutions obtained by VEGA, NSGA and NPGA are highly concentrated, while the solutions obtained by OSMGA and NSGAII are better distributed.

Problem FON is a nonconvex multi-objective optimization problem whose Pareto solutions satisfy $x_1 = x_2 = x_3 \in [-1/\sqrt{3}, 1/\sqrt{3}]$. Table 5 shows 10 independent runs of OSMGA, VEGA, NSGA, NSGAII and NPGA on Problem FON.


Table 5
Comparison among OSMGA, VEGA, NSGA, NSGAII and NPGA on problem FON.

Implementation  OSMGA         VEGA          NSGA          NSGAII        NPGA
index           MER    R      MER    R      MER    R      MER    R      MER    R
1               0.99   0.37   0.79   0.04   0.84   0.01   0.99   0.67   0.99   0.02
2               0.99   0.35   0.76   0.02   0.81   0.01   0.98   0.57   0.99   0.04
3               0.99   0.30   0.69   0.02   0.74   0.01   0.99   0.33   0.99   0.06
4               0.99   0.25   0.70   0.04   0.85   0.01   0.98   0.46   0.99   0.08
5               0.99   0.30   0.90   0.02   0.69   0.01   0.99   0.07   0.99   0.05
6               0.99   0.53   0.89   0.04   0.88   0.01   0.98   0.48   0.99   0.05
7               0.99   0.30   0.88   0.04   0.75   0.01   0.98   0.54   0.99   0.07
8               0.99   0.50   0.89   0.04   0.79   0.01   0.97   0.72   0.99   0.04
9               0.99   0.28   0.89   0.02   0.89   0.01   0.98   0.35   0.99   0.07
10              0.99   0.34   0.89   0.04   0.76   0.01   0.97   0.58   0.99   0.05


Fig. 8. The objective function value space for test problems. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)

From the table, we can observe that the solutions obtained by OSMGA, NSGAII and NPGA are all very close to the real Pareto frontier, some of the solutions obtained by VEGA are not very good, and the solutions obtained by NSGA fail to reach it. As for the diversity metric, comparing OSMGA, NSGAII and NPGA, OSMGA enjoys the best performance: the solutions obtained by OSMGA are well distributed along the Pareto frontier, whereas most of the solutions obtained by NSGAII lie on the boundary, and the solutions obtained by NPGA lie in the middle of the real Pareto frontier but very close to each other. Fig. 7(b) shows the best performance of each method among the 10 runs. We can observe that the figure is consistent with the data in the table.

5.2. Numerical comparison

In this subsection, we compare the numerical performance of OSMGA with the methods proposed in the special session on performance assessment of unconstrained/bound constrained multi-objective optimization algorithms at CEC'09. Thirteen algorithms were submitted to the special session:

1. MOEAD [47];
2. GDE3 [23];
3. MOEADGM [1];
4. MTS [40];
5. LiuLiAlgorithm [26];
6. DMOEADD [27];
7. NSGAIILS [37];
8. OWMOSaDE [16];
9. ClusteringMOEA [43];
10. AMGA [39];
11. MOEP [33];
12. DECMOSA-SQP [45];
13. OMOEAII [7].

The test problems applied in this subsection are quoted from [49] and were used as benchmarks at CEC'09. Fig. 8 illustrates the objective function value spaces of these test problems (the figure for test problem 2 is omitted since it is similar to that of test problem 1); the red curve/surface represents their Pareto frontiers. Among these test problems, Problems 1–7 have two objective functions, whereas Problems 8–10 have three objective functions. The Pareto solutions of Problems 5, 6 and 9 are disconnected, while the others are connected.

In order to evaluate the numerical performance, we use the performance metric IGD proposed in [49]. Suppose that $P^*$ is a set


Table 6
The numerical performance evaluated by IGD.

Rank  UF1              IGD       UF2              IGD       UF3              IGD
1     MOEAD            0.00435   MTS              0.00615   MOEAD            0.00742
2     GDE3             0.00534   MOEADGM          0.0064    LiuLiAlgorithm   0.01497
3     MOEADGM          0.0062    DMOEADD          0.00679   OSMGA            0.02241
4     MTS              0.00646   MOEAD            0.00679   DMOEADD          0.03337
5     LiuLiAlgorithm   0.00785   OWMOSaDE         0.0081    MOEADGM          0.049
6     DMOEADD          0.01038   GDE3             0.01195   MTS              0.0531
7     NSGAIILS         0.01153   LiuLiAlgorithm   0.0123    ClusteringMOEA   0.0549
8     OWMOSaDE         0.0122    NSGAIILS         0.01237   AMGA             0.06998
9     OSMGA            0.02868   AMGA             0.01623   DECMOSA-SQP      0.0935
10    ClusteringMOEA   0.0299    OSMGA            0.01815   MOEP             0.099
11    AMGA             0.03588   MOEP             0.0189    OWMOSaDE         0.103
12    MOEP             0.0596    ClusteringMOEA   0.0228    NSGAIILS         0.10603
13    DECMOSA-SQP      0.07702   DECMOSA-SQP      0.02834   GDE3             0.10639
14    OMOEAII          0.08564   OMOEAII          0.03057   OMOEAII          0.27141

Rank  UF4              IGD       UF5              IGD       UF6              IGD
1     MTS              0.02356   MTS              0.01489   MOEAD            0.00587
2     GDE3             0.0265    GDE3             0.03928   MTS              0.05917
3     DECMOSA-SQP      0.03392   AMGA             0.09405   DMOEADD          0.06673
4     AMGA             0.04062   LiuLiAlgorithm   0.16186   OMOEAII          0.07338
5     DMOEADD          0.04268   DECMOSA-SQP      0.16713   ClusteringMOEA   0.0871
6     MOEP             0.0427    OMOEAII          0.1692    MOEP             0.1031
7     LiuLiAlgorithm   0.0435    MOEAD            0.18071   DECMOSA-SQP      0.12604
8     OMOEAII          0.04624   MOEP             0.2245    AMGA             0.12942
9     MOEADGM          0.0476    ClusteringMOEA   0.2473    OSMGA            0.14062
10    OWMOSaDE         0.0513    DMOEADD          0.31454   LiuLiAlgorithm   0.17555
11    OSMGA            0.05392   OWMOSaDE         0.4303    OWMOSaDE         0.1918
12    NSGAIILS         0.0584    NSGAIILS         0.5657    GDE3             0.25091
13    ClusteringMOEA   0.0585    OSMGA            0.71407   NSGAIILS         0.31032
14    MOEAD            0.06385   MOEADGM          1.7919    MOEADGM          0.5563

Rank  UF7              IGD       UF8              IGD       UF9              IGD
1     MOEAD            0.00444   MOEAD            0.0584    OSMGA            0.04762
2     LiuLiAlgorithm   0.0073    DMOEADD          0.06841   DMOEADD          0.04896
3     MOEADGM          0.0076    LiuLiAlgorithm   0.08235   NSGAIILS         0.0719
4     DMOEADD          0.01032   NSGAIILS         0.0863    MOEAD            0.07896
5     MOEP             0.0197    OWMOSaDE         0.0945    GDE3             0.08248
6     NSGAIILS         0.02132   MTS              0.11251   LiuLiAlgorithm   0.09391
7     ClusteringMOEA   0.0223    AMGA             0.17125   OWMOSaDE         0.0983
8     DECMOSA-SQP      0.02416   OMOEAII          0.192     MTS              0.11442
9     GDE3             0.02522   OSMGA            0.21083   DECMOSA-SQP      0.14111
10    OMOEAII          0.03354   DECMOSA-SQP      0.21583   MOEADGM          0.1878
11    MTS              0.04079   ClusteringMOEA   0.2383    AMGA             0.18861
12    AMGA             0.05707   MOEADGM          0.2446    OMOEAII          0.23179
13    OWMOSaDE         0.0585    GDE3             0.24855   ClusteringMOEA   0.2934
14    OSMGA            0.13056   MOEP             0.423     MOEP             0.342

Rank  UF10             IGD
1     MTS              0.15306
2     OSMGA            0.24821
3     DMOEADD          0.32211
4     AMGA             0.32418
5     MOEP             0.3621
6     DECMOSA-SQP      0.36985
7     ClusteringMOEA   0.4111
8     GDE3             0.43326
9     LiuLiAlgorithm   0.44691
10    MOEAD            0.47415
11    MOEADGM          0.5646
12    OMOEAII          0.62754
13    OWMOSaDE         0.743
14    NSGAIILS         0.84468

of uniformly distributed points along the Pareto frontier (in the objective function value space). Let $A$ be a set of solutions obtained by a certain solver. Then the average distance from $P^*$ to $A$ is defined as

$$\mathrm{IGD}(A, P^*) = \frac{\sum_{v \in P^*} d(v, A)}{|P^*|},$$

where $d(v, A)$ is the minimum Euclidean distance between $v$ and the points in $A$, i.e.,

$$d(v, A) = \min_{y \in A} \|v - y\|.$$

In fact, $P^*$ represents a sample set of the real Pareto frontier; if $|P^*|$ is large enough to approximate the Pareto frontier very well, $\mathrm{IGD}(A, P^*)$ measures both the diversity and the convergence of $A$ in a sense.
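For reference, IGD reduces to a few lines of Python (our own sketch; `PF` is the sampled frontier $P^*$ and `A` the obtained objective vectors):

```python
import numpy as np

def igd(A, PF):
    """Average, over v in PF, of the minimum Euclidean distance from v
    to the obtained set A; smaller is better."""
    d = np.sqrt(((PF[:, None, :] - A[None, :, :])**2).sum(-1))
    return d.min(axis=1).mean()

xs = np.linspace(0.0, 2.0, 500)
PF = np.column_stack([xs**2, (xs - 2.0)**2])   # dense SCH frontier sample
A = PF[::50]                                   # a coarse approximation
print(igd(A, PF))
```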



Fig. 9. The approximated Pareto frontier of Problems (a) 1, (b) 2, (c) 4 and (d) 9.

A smaller $\mathrm{IGD}(A, P^*)$ means that the set $A$ is closer to the real Pareto frontier and has better diversity. To keep consistent with the final report of CEC'09 [48], in the implementation of OSMGA we set the population size to 100 for problems with two objectives and 150 for problems with three objectives, and the number of function evaluations is less than 300,000. For each test instance, we run OSMGA independently 30 times. The numerical performance evaluated by IGD is reported in Table 6.

From Table 6, the proposed method OSMGA performs best at solving Problem 9; its IGD value is 0.04762, better than all the other solvers. In solving Problem 10, OSMGA (whose IGD value is 0.24821) performs worse only than MTS (whose IGD value is 0.15306) and better than all the other solvers. In solving Problem 3, the IGD value of OSMGA (0.02241) ranks third, worse than MOEAD (0.00742) and LiuLiAlgorithm (0.01497). The numerical performance of OSMGA is moderate in solving Problems 1, 2, 4, 6 and 8, where its IGD ranks are 9, 10, 11, 9 and 9, respectively. For Problems 5 and 7, the numerical performance of OSMGA is not so good, ranking 13th and 14th, respectively.

It is not uncommon that the numerical performance of the proposed solver OSMGA is unstable across different test problems, because the numerical results are affected not only by the performance of the solvers but also by the linearity and structure of the objective functions themselves. Furthermore, the crossover and mutation operators are also affected by the distribution of points in the objective function value space. Generally speaking, if a new point generated by the crossover or mutation operators

has a high probability of landing near the Pareto frontier, then the Pareto frontier can be well approximated by the solver, as for Problems 3, 4 and 9. On the contrary, if it is hard for the crossover or mutation operators to generate new points near the Pareto frontier, then the problem is difficult for MOGAs to solve, as for Problems 5, 6 and 7. In fact, this instability also appears in the other solvers, such as MOEAD, which is reported as the best solver in [48]: the IGD ranks of MOEAD on Problems 4, 5 and 10 are 14, 7 and 10, respectively, which is not very good. Fig. 9 shows the Pareto frontiers of Problems 1, 2, 4 and 9 obtained by OSMGA. From Fig. 9(a) and (b), we can observe that for Problems 1 and 2 the proposed solver obtained very good representations of the Pareto frontiers. The results for Problem 4 (see Fig. 9(c)) are not very uniformly distributed, which may degrade the IGD value. Fig. 9(d) illustrates the approximate Pareto frontier of Problem 9; its structure is more or less an approximation of the real Pareto frontier.
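As a small illustration of the protocol described above (30 independent runs per instance, with the IGD value recorded for each run), the per-instance statistics might be aggregated as follows. The callable `run_solver`, standing for one independent run of a solver that returns the objective vectors of its final population, is a hypothetical placeholder, and `igd` refers to the sketch given earlier.

```python
import numpy as np

def evaluate_instance(run_solver, pareto_samples, n_runs=30):
    """Aggregate IGD over n_runs independent runs of a solver.

    run_solver : zero-argument callable returning the (k, p) array of
                 objective vectors found by one independent run (placeholder).
    """
    scores = [igd(pareto_samples, run_solver()) for _ in range(n_runs)]
    return float(np.mean(scores)), float(np.std(scores))
```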

6. Conclusion

In this paper, we proposed an improved genetic algorithm, abbreviated OSMGA, for solving multi-objective optimization problems. In each iteration, the population is first ranked by the optimal sequence method, and the fitness of each solution is assigned according to its rank. The diversity of the population is measured by a method combining Pareto-based ranking and cell-based density. Then, parents for the next generation are selected according to their fitness values and diversity measurements.

In the proposed method, the best population is updated at each iteration so that the images of its solutions are diversely distributed along the real Pareto frontier. In order to compare the numerical performance of different multi-objective genetic algorithms, we developed an evaluation system that takes account of both a closeness metric and a diversity metric. We tested OSMGA on two well-known multi-objective benchmark suites; the results show that OSMGA is promising for small-scale convex multi-objective optimization problems. We also applied OSMGA to the test instances proposed in CEC'09 and compared its numerical results with those of the methods presented at CEC'09; the resulting rank of OSMGA shows that it is efficient and robust in solving unconstrained multi-objective optimization problems.
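For concreteness, one generation of the loop summarized above might be sketched as follows. This is not the authors' OSMGA implementation: the optimal sequence ranking and the combined Pareto/cell-based diversity measure are replaced by deliberately simple stand-ins (a dominance count and a grid-cell density), and the variation operators are generic blend crossover and Gaussian mutation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dominates(a, b):
    """Pareto dominance for minimization: a dominates b."""
    return bool(np.all(a <= b) and np.any(a < b))

def dominance_count(F):
    """Stand-in for optimal-sequence fitness: number of points dominating each one."""
    n = len(F)
    return np.array([sum(dominates(F[j], F[i]) for j in range(n)) for i in range(n)])

def cell_density(F, n_cells=10):
    """Stand-in diversity measure: number of points sharing each point's grid cell."""
    lo, hi = F.min(axis=0), F.max(axis=0)
    idx = np.floor((F - lo) / (hi - lo + 1e-12) * n_cells).astype(int)
    keys = [tuple(row) for row in idx]
    return np.array([keys.count(k) for k in keys])

def generation(X, F, evaluate):
    """One schematic generation: rank, select greedily, recombine, mutate, evaluate."""
    score = dominance_count(F) + cell_density(F)        # lower is better
    order = np.argsort(score)
    parents = X[order[: len(X) // 2]]                   # greedy (elitist) selection
    mates = parents[rng.permutation(len(parents))]
    alpha = rng.random((len(parents), 1))
    children = alpha * parents + (1.0 - alpha) * mates  # blend crossover
    children += 0.01 * rng.standard_normal(children.shape)  # Gaussian mutation
    X_new = np.vstack([parents, children])
    F_new = np.array([evaluate(x) for x in X_new])
    return X_new, F_new
```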

Acknowledgments

Changzhi Wu was partially supported by the Australian Research Council Linkage Program, the Natural Science Foundation of China (61473326), and the Natural Science Foundation of Chongqing (cstc2013jcyjA00029 and cstc2013jjB0149). This publication was made possible by NPRP Grant # NPRP 41162-1-181 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the author[s].

References

[1] C.M. Chen, Y.P. Chen, Q.F. Zhang, Enhancing MOEA/D with guided mutation and priority update for multi-objective optimization, in: IEEE Congress on Evolutionary Computation, 2009 (CEC'09), IEEE, 2009, pp. 209–216.
[2] J. Cheng, Q.S. Liu, H.Q. Lu, Y.W. Chen, Supervised kernel locality preserving projections for face recognition, Neurocomputing 67 (8) (2005) 443–449.
[3] D. Corne, J. Knowles, M. Oates, The Pareto envelope-based selection algorithm for multiobjective optimization, in: Parallel Problem Solving from Nature PPSN VI, Springer, 2000, pp. 839–848.
[4] K. Deb, Multi-objective optimization, in: Multi-objective Optimization Using Evolutionary Algorithms, Wiley, 2001, pp. 13–46.
[5] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Trans. Evol. Comput. 6 (2) (2002) 182–197.
[6] C.M. Fonseca, P.J. Fleming, Multiobjective genetic algorithms, in: IEE Colloquium on Genetic Algorithms for Control Systems Engineering, IET, 1993, pp. 6/1–6/5.
[7] S. Gao, S.Y. Zeng, B. Xiao, L. Zhang, An orthogonal multi-objective evolutionary algorithm with lower-dimensional crossover, in: IEEE Congress on Evolutionary Computation, 2009 (CEC'09), IEEE, 2009, pp. 1959–1964.
[8] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, 1989.
[9] D.E. Goldberg, A note on Boltzmann tournament selection for genetic algorithms and population-oriented simulated annealing, Complex Syst. 4 (4) (1990) 445–460.
[10] D.E. Goldberg, B. Korb, K. Deb, Messy genetic algorithms: motivation, analysis, and first results, Complex Syst. 3 (5) (1989) 493–530.
[11] D.E. Goldberg, J. Richardson, Genetic algorithms with sharing for multimodal function optimization, in: Proceedings of the Second International Conference on Genetic Algorithms and Their Application, L. Erlbaum Associates Inc., 1987, pp. 41–49.
[12] J.J. Grefenstette, Optimization of control parameters for genetic algorithms, IEEE Trans. Syst. Man Cybern. 16 (1) (1986) 122–128.
[13] P. Hajela, C.Y. Lin, Genetic search strategies in multicriterion optimal design, Struct. Multidiscip. Optim. 4 (2) (1992) 99–107.
[14] J.H. Holland, Adaptation in Natural and Artificial Systems, no. 53, University of Michigan Press, 1975.
[15] J. Horn, N. Nafpliotis, D.E. Goldberg, A niched Pareto genetic algorithm for multiobjective optimization, in: Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, IEEE, 1994, pp. 82–87.
[16] H.L. Huang, S.Z. Zhao, R. Mallipeddi, P.N. Suganthan, Multi-objective optimization using self-adaptive differential evolution algorithm, in: IEEE Congress on Evolutionary Computation, 2009 (CEC'09), IEEE, 2009, pp. 190–194.
[17] M.T. Jensen, Reducing the run-time complexity of multiobjective EAs: the NSGA-II and other algorithms, IEEE Trans. Evol. Comput. 7 (5) (2003) 503–515.
[18] D.F. Jones, S.K. Mirrazavi, M. Tamiz, Multi-objective meta-heuristics: an overview of the current state-of-the-art, Eur. J. Oper. Res. 137 (1) (2002) 1–9.

[19] H. Kitano, Neurogenetic learning: an integrated method of designing and training neural networks using genetic algorithms, Physica D: Nonlinear Phenomena 75 (1–3) (1994) 225–238.
[20] J. Knowles, D. Corne, The Pareto archived evolution strategy: a new baseline algorithm for Pareto multiobjective optimisation, in: Proceedings of the 1999 Congress on Evolutionary Computation (CEC'99), vol. 1, IEEE, 1999.
[21] J.D. Knowles, D.W. Corne, Approximating the nondominated front using the Pareto archived evolution strategy, Evol. Comput. 8 (2) (2000) 149–172.
[22] T.C. Koopmans, S. Reiter, A model of transportation, in: Activity Analysis of Production and Allocation, 1951, p. 222.
[23] S. Kukkonen, J. Lampinen, Performance assessment of generalized differential evolution 3 (GDE3) with a given set of problems, in: IEEE Congress on Evolutionary Computation, 2007 (CEC 2007), IEEE, 2007, pp. 3593–3600.
[24] P. Kumar, D. Gospodaric, P. Bauer, Improved genetic algorithm inspired by biological evolution, Soft Comput.—A Fusion of Foundations, Methodologies and Applications 11 (10) (2007) 923–941.
[25] C. Lin, J. Dong, Multi-objective Optimization: Method and Theory (in Chinese), Jilin Education Press, 1992.
[26] H.L. Liu, X.Q. Li, The multiobjective evolutionary algorithm based on determined weight and sub-regional search, in: IEEE Congress on Evolutionary Computation, 2009 (CEC'09), IEEE, 2009, pp. 1928–1934.
[27] M.H. Liu, X.F. Zou, Y. Chen, Z.J. Wu, Performance assessment of DMOEA-DD with CEC 2009 MOEA competition test instances, in: IEEE Congress on Evolutionary Computation, 2009 (CEC'09), IEEE, 2009, pp. 2913–2918.
[28] Q.S. Liu, H.L. Jin, X.O. Tang, H.Q. Lu, S.D. Ma, A new extension of kernel feature and its application for visual recognition, Neurocomputing 71 (10–12) (2008) 1850–1856.
[29] Q.S. Liu, Y.C. Huang, D.N. Metaxas, Hypergraph with sampling for image retrieval, Pattern Recognit. 44 (10) (2011) 2255–2262.
[30] H. Lu, G.G. Yen, Rank-density-based multiobjective genetic algorithm and benchmark test function study, IEEE Trans. Evol. Comput. 7 (4) (2003) 325–343.
[31] T. Murata, H. Ishibuchi, MOGA: multi-objective genetic algorithms, in: IEEE International Conference on Evolutionary Computation, 1995, vol. 1, IEEE, 1995, p. 289.
[32] T. Murata, H. Ishibuchi, H. Tanaka, Multi-objective genetic algorithm and its applications to flowshop scheduling, Comput. Ind. Eng. 30 (4) (1996) 957–968.
[33] B.Y. Qu, P.N. Suganthan, Multi-objective evolutionary programming without non-domination sorting is up to twenty times faster, in: IEEE Congress on Evolutionary Computation, 2009 (CEC'09), IEEE, 2009, pp. 2934–2939.
[34] R. Allmendinger, J. Handl, J. Knowles, Multiobjective optimization: when objectives exhibit non-uniform latencies, Eur. J. Oper. Res. (2014) 1–17.
[35] J.D. Schaffer, Multiple objective optimization with vector evaluated genetic algorithms, in: Proceedings of the First International Conference on Genetic Algorithms, L. Erlbaum Associates Inc., 1985, pp. 93–100.
[36] K. Sindhya, S. Ruuska, T. Haanpää, K. Miettinen, A new hybrid mutation operator for multiobjective optimization with differential evolution, Soft Comput.—A Fusion of Foundations, Methodologies and Applications 15 (10) (2011) 2041–2055.
[37] K. Sindhya, A. Sinha, K. Deb, K. Miettinen, Local search based evolutionary multi-objective optimization algorithm for constrained and unconstrained problems, in: IEEE Congress on Evolutionary Computation, 2009 (CEC'09), IEEE, 2009, pp. 2919–2926.
[38] N. Srinivas, K. Deb, Muiltiobjective optimization using nondominated sorting in genetic algorithms, Evol. Comput. 2 (3) (1994) 221–248.
[39] S. Tiwari, G. Fadel, P. Koch, K. Deb, Performance assessment of the hybrid archive-based micro genetic algorithm (AMGA) on the CEC'09 test problems, in: IEEE Congress on Evolutionary Computation, 2009 (CEC'09), IEEE, 2009, pp. 1935–1942.
[40] L.Y. Tseng, C. Chen, Multiple trajectory search for unconstrained/constrained multi-objective optimization, in: IEEE Congress on Evolutionary Computation, 2009 (CEC'09), IEEE, 2009, pp. 1951–1958.
[41] D.A. Van Veldhuizen, Multiobjective Evolutionary Algorithms: Classifications, Analyses, and New Innovations, Technical Report, DTIC Document, 1999.
[42] D.A.V. Veldhuizen, G.B. Lamont, Multiobjective evolutionary algorithms: analyzing the state-of-the-art, Evol. Comput. 8 (2) (2000) 125–147.
[43] Y.P. Wang, C.Y. Dang, H.C. Li, L.X. Han, J.X. Wei, A clustering multi-objective evolutionary algorithm based on orthogonal and uniform design, in: IEEE Congress on Evolutionary Computation, 2009 (CEC'09), IEEE, 2009, pp. 2927–2933.
[44] G.G. Yen, H. Lu, Dynamic multiobjective evolutionary algorithm: adaptive cell-based rank and density estimation, IEEE Trans. Evol. Comput. 7 (3) (2003) 253–274.
[45] A. Zamuda, J. Brest, B. Bošković, V. Žumer, Differential evolution with self-adaptation and local search for constrained multiobjective optimization, in: IEEE Congress on Evolutionary Computation, 2009 (CEC'09), IEEE, 2009, pp. 195–202.
[46] Q. Zhang, H. Li, MOEA/D: a multiobjective evolutionary algorithm based on decomposition, IEEE Trans. Evol. Comput. 11 (6) (2007) 712–731.
[47] Q.F. Zhang, W.D. Liu, H. Li, The performance of a new version of MOEA/D on CEC'09 unconstrained MOP test instances, in: IEEE Congress on Evolutionary Computation, 2009 (CEC'09), IEEE, 2009, pp. 203–208.
[48] Q.F. Zhang, P.N. Suganthan, Final report on CEC'09 MOEA competition, in: Congress on Evolutionary Computation (CEC 2009), 2009.
[49] Q.F. Zhang, A. Zhou, S.Z. Zhao, P.N. Suganthan, W.D. Liu, S. Tiwari, Multiobjective Optimization Test Instances for the CEC 2009 Special Session and Competition, Technical Report CES-487, University of Essex and Nanyang Technological University, 2008.

[50] E. Zitzler, K. Deb, L. Thiele, Comparison of multiobjective evolutionary algorithms: empirical results, Evol. Comput. 8 (2) (2000) 173–195.

[51] E. Zitzler, M. Laumanns, L. Thiele, SPEA2: Improving the Strength Pareto Evolutionary Algorithm, 2001.
[52] E. Zitzler, L. Thiele, Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach, IEEE Trans. Evol. Comput. 3 (4) (1999) 257–271.
