An adaptive multi-population differential evolution algorithm for continuous multi-objective optimization
Xianpeng Wang a,b, Lixin Tang a,∗
a Institute of Industrial Engineering & Logistics Optimization, Northeastern University, Shenyang 110819, PR China
b State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, Shenyang 110819, PR China
Article history: Received 11 July 2015; Revised 26 December 2015; Accepted 3 January 2016; Available online xxx
Keywords: Adaptive differential evolution; Multi-objective optimization; Multiple subpopulation
Abstract
For evolutionary algorithms, the search data generated during evolution have attracted considerable attention, and many kinds of data mining methods have been proposed to extract useful information from these data so as to guide the evolutionary search. However, these methods have mainly centered on single-objective optimization problems. In this paper, an adaptive differential evolution algorithm based on the analysis of search data is developed for multi-objective optimization problems. In this algorithm, useful information is first derived from the search data gathered during the evolution process by clustering and statistical methods, and the derived information is then used to guide the generation of the new population and the local search. In addition, the proposed differential evolution algorithm adopts multiple subpopulations, each of which evolves according to an assigned crossover operator borrowed from genetic algorithms to generate perturbed vectors. During the evolution process, the size of each subpopulation is adaptively adjusted based on the information derived from its search results. The local search consists of two phases that focus on exploration and exploitation, respectively. Computational results on benchmark multi-objective problems show that the improvements contributed by these strategies are positive and that the proposed differential evolution algorithm is competitive with or superior to some previous multi-objective evolutionary algorithms in the literature.
© 2016 Elsevier Inc. All rights reserved.
1. Introduction
In practical industries, most optimization problems need to deal with multiple objectives simultaneously, which often results in conflict because an improvement in one objective will inevitably cause a deterioration in some other objectives. These problems are referred to as multi-objective optimization problems (MOPs), and multi-objective evolutionary algorithms (MOEAs) have shown very good performance on them [7,8,30]. Owing to their ability to obtain well-distributed solutions, Pareto-based MOEAs are widely adopted, e.g., NSGA-II [9], microGA [5], SPEA2 [47], MOEA/D [43], multi-objective particle swarm optimization (MOPSO) [6], multi-objective scatter search [21], and hybrid MOEA [34]. Differential evolution (DE) [32] is an evolutionary algorithm that has shown very good performance on single-objective problems (SOPs) [3,24,26,33,35,37,48]. Consequently, many researchers have attempted to extend DE to deal with multiple objectives. The first attempt was made by Abbass et al. [1], in which a Pareto DE (PDE) algorithm was developed to solve continuous MOPs. Another multi-objective DE (MODE) similar to PDE was proposed by Madavan [18] by incorporating the fast non-dominated sorting and ranking method of NSGA-II into DE.
∗ Correspondence to: Institute of Industrial Engineering & Logistics Optimization, Northeastern University, Shenyang 110819, PR China. E-mail address: [email protected] (L. Tang).
To achieve a balance of convergence and diversity, Robic and Filipic [28] developed the DEMO algorithm, which adopts two mechanisms for convergence and diversity, respectively. Huang et al. [14] extended their self-adaptive DE (SaDE), originally designed for SOPs, to the multi-objective space and constructed a multi-objective SaDE (MOSaDE) that can adaptively select an appropriate mutation strategy. Santana-Quintero et al. [29] incorporated a local search based on rough set theory into MODE to solve constrained MOPs, and the computational results showed that the hybrid strategy succeeds in making the MODE robust and efficient. Ali et al. [2] proposed a multi-objective DE algorithm (MODEA) that adopts an opposition-based learning strategy to generate the initial population, random localization in the mutation step, and a new selection strategy based on the PDEA [18] and DEMO [28]. Soliman et al. [31] presented a memetic coevolutionary MODE in which coevolution and local search are both incorporated; two populations are maintained, one consisting of solutions and the other storing search directions of solutions, to implement the coevolution. Kukkonen and Lampinen [16] developed a generalized DE (GDE3) by extending the selection operator of canonical DE so that it can handle constrained MOPs.

Although many MODEs have been successfully applied to different MOPs, two issues have not been taken into account by previous research. First, previous MODEs maintain only one population throughout the evolution process. The main disadvantage is that the diversity of the population may become very poor, because the population can easily converge to one or several locally optimal areas for MOPs with many local optima. In the literature, some researchers have developed MOEAs with multiple subpopulations, and the computational results illustrate that the performance of MOEAs can be considerably improved. In the MOPSO proposed in [25], the population is divided into several subpopulations by a clustering technique, and the computational results reported by the authors showed that the distribution of the obtained solutions can be much improved. In another MOPSO proposed by Yen and Leong [39], a dynamic mechanism is designed to adaptively adjust the number of subpopulations; the reported results showed that this strategy can significantly improve the performance of the MOPSO. Some papers also adopt multiple subpopulations in DE, but most of them deal only with SOPs [17,40]. For MOPs, Zaharie and Petcu [41] developed a parallel adaptive Pareto DE (APDE) algorithm in which the population was divided into several subpopulations and the parallelization was implemented with an island model and a random connection topology. Parsopoulos et al. [23] also presented a multi-population DE for MOPs, called vector evaluated DE, in which the parallelization was implemented with a ring topology. Although the multiple subpopulation strategy was adopted in [23,41], the focus of those two algorithms was the parallelization of MODE. In addition, in the two parallel MODEs only one mutation strategy was used during the evolution process.
Since different mutation strategies have different search behaviors and performance, how to combine the multi-population strategy with multiple mutation strategies still needs to be studied so as to further improve the performance of MODE. Second, the information contained in the search data generated during evolution is neglected by most MODEs in the literature, though such information is very valuable and has attracted considerable attention from researchers. Several kinds of data mining methods have been developed to derive useful information from the search data so as to guide the evolution toward promising regions [3,15,19,22,27,38,42,44]. However, these methods are mainly centered on SOPs [45]. For DE with data mining techniques, Qin et al. [26] used a statistical method to help DE adaptively select the most appropriate mutation strategies, and Huang et al. [14] later extended this strategy to MODE. An opposition-based learning method was adopted by Ali et al. [2] to generate a high-quality initial population. However, in the above three references the data mining techniques focused only on mutation strategy selection and the generation of the initial population. The incorporation of data mining techniques into the multi-population strategy and local search of MODE still needs to be studied.

Motivated by the above two main issues, in this paper we propose a hybrid DE algorithm for MOPs, namely an adaptive multi-population DE (AMPDE). The proposed AMPDE has the following three main features in comparison with previous MODEs in the literature.

• The AMPDE adopts multiple populations for MOPs, each of which maintains a different evolution path, so as to improve the search robustness of traditional MODEs. Instead of the canonical mutation strategies used in MODEs, the AMPDE adopts the crossover operators of genetic algorithms to generate perturbed vectors.
• The AMPDE adopts data mining methods such as clustering and statistical methods to derive useful information from the search data gathered during the evolution process. The derived information is then used to guide the generation of the new population and the local search. For example, the size of each subpopulation is adaptively adjusted based on the information derived from its previous search results.
• The AMPDE adopts a two-phase local search to improve its exploration and exploitation abilities: the first phase focuses on improving exploration through data analysis of the evolution process, and the second phase centers on improving exploitation through data analysis of the current non-dominated solutions.

The remainder of this paper is organized as follows. Section 2 describes some related definitions of multi-objective optimization and the DE algorithm. The details of the proposed AMPDE are provided in Section 3. In Section 4, the AMPDE is compared with some other state-of-the-art MOEAs on bi-objective and tri-objective benchmark MOPs. Finally, the conclusions of the present study are drawn in Section 5.
2. Background
2.1. Multi-objective optimization
The definition of a multi-objective optimization problem can be given as follows:
  Minimize   F(X) = (f1(X), f2(X), …, fk(X))      (1)
  s.t.       gi(X) ≥ 0,   i = 1, 2, …, m          (2)
             hi(X) = 0,   i = 1, 2, …, p          (3)
             X ∈ R^n                              (4)
In the above definition, X = (x1, x2, …, xn) is the decision vector of n variables, and F(X) is the objective vector consisting of k objectives f1(X), …, fk(X) that should be optimized simultaneously. Constraints (2) and (3) are the inequality and equality constraints, respectively. Given two decision vectors X = (x1, x2, …, xn) and Y = (y1, y2, …, yn), X is said to dominate Y if and only if fi(X) ≤ fi(Y) for every i ∈ {1, 2, …, k} and fj(X) < fj(Y) for at least one objective j ∈ {1, 2, …, k}. If there is no other X in the decision space that dominates the decision vector X∗, then X∗ is called a Pareto optimal vector. The set of objective vectors corresponding to all Pareto optimal vectors in the objective space (the space to which the objective vectors belong) is called the Pareto front. The task in solving a MOP is to obtain the set of all Pareto optimal vectors and the corresponding Pareto front.
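As a concrete illustration of the dominance relation above, the following is a minimal Python sketch; the function name dominates and the use of plain float sequences are our own choices, not notation from the paper:

```python
from typing import Sequence

def dominates(fx: Sequence[float], fy: Sequence[float]) -> bool:
    """Pareto dominance for minimization: fx dominates fy if it is no worse
    in every objective and strictly better in at least one objective."""
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))

# Example: (1, 2) dominates (2, 2); (1, 3) and (2, 2) are mutually non-dominated.
assert dominates((1.0, 2.0), (2.0, 2.0))
assert not dominates((1.0, 3.0), (2.0, 2.0))
assert not dominates((2.0, 2.0), (1.0, 3.0))
```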
2.2. Canonical single-objective DE algorithm
In the canonical DE algorithm, the evolution of the population at each generation G consists of three main steps, namely mutation, crossover and selection. For the ith solution (Xi,G) in the population, the mutation step (with the traditional DE/rand/1/bin strategy) first randomly selects three different solutions Xr1,G, Xr2,G, and Xr3,G from the population (the three solutions are also different from Xi,G), and then generates a new perturbed solution Vi,G by Vi,G = Xr1,G + F × (Xr2,G − Xr3,G), in which F is the control parameter. Based on Vi,G = (v1,i,G, …, vn,i,G), a trial solution Ui,G = (u1,i,G, …, un,i,G) is generated in the crossover step by

  uj,i,G = vj,i,G   if randj ≤ Cr or j = jrand;   uj,i,G = xj,i,G   otherwise,

where randj is a random number in [0, 1] and Cr ∈ [0, 1] is the crossover probability. jrand is a random integer index in [1, n] used to ensure that the trial solution Ui,G differs from solution Xi,G. In the third step, the new solution Xi,G+1 of the next generation is determined by comparing the trial solution Ui,G with solution Xi,G. To ensure convergence, the selection rule is defined by

  Xi,G+1 = Ui,G   if f(Ui,G) ≤ f(Xi,G);   Xi,G+1 = Xi,G   otherwise.
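To make the three steps concrete, here is a self-contained Python sketch of the canonical single-objective DE/rand/1/bin loop described above. The population size, parameter values, bound handling by clipping, and the sphere test function are our own illustrative assumptions, not choices from the paper:

```python
import random

def de_rand_1_bin(f, bounds, NP=30, F=0.5, Cr=0.9, generations=200):
    """Canonical DE/rand/1/bin for minimizing f over box constraints."""
    n = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(NP):
            # Mutation: three mutually distinct indices, all different from i.
            r1, r2, r3 = random.sample([j for j in range(NP) if j != i], 3)
            v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(n)]
            # Binomial crossover: j_rand guarantees the trial differs from the target.
            j_rand = random.randrange(n)
            u = [v[j] if (random.random() <= Cr or j == j_rand) else pop[i][j]
                 for j in range(n)]
            u = [min(max(u[j], bounds[j][0]), bounds[j][1]) for j in range(n)]
            # Selection: greedy one-to-one replacement.
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = min(range(NP), key=lambda i: fit[i])
    return pop[best], fit[best]

# Example: minimize the sphere function on [-5, 5]^5.
x, fx = de_rand_1_bin(lambda x: sum(v * v for v in x), [(-5, 5)] * 5)
```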
3. Proposed adaptive multi-population DE algorithm
As mentioned in Section 1, the proposed AMPDE is designed to test the incorporation of a multi-population strategy and data mining techniques into DE for MOPs. The two key components of the AMPDE are the multi-population management strategy and the strategy for analyzing the search data obtained from the previous evolution process. The two strategies are not independent of each other, because the adjustment of the subpopulation sizes is based on the analysis results. In addition, the data analysis strategy also derives useful information from the evolution process and the non-dominated solutions to guide the local search. The main procedure of the AMPDE is provided in Algorithm 1, and its components are elaborated in the following sections.
3.1. Population initialization – InitializePopulation()
To obtain an initial population with good diversity, we developed a diversification method that follows the main ideas of reference [21]. The procedure of this method is given in Algorithm 2, where NP denotes the size of the population and Xi = (xi,1, xi,2, …, xi,n) is the ith solution in the population. Please note that each subpopulation has the same size at the initialization stage.
3.2. Adaptive generation method of new population
As the most widely studied evolutionary algorithm, the genetic algorithm has attracted considerable attention in recent years, and many kinds of crossover operators have been designed for it. Among these operators, the blend crossover (denoted as BLX-α) [13], the simulated binary crossover (denoted as SBX) [10], the simplex crossover (denoted as SPX) [36], and the parent centric crossover (denoted as PCX) [11] have proved to be very effective for continuous problems.
Algorithm 1 Main procedure of the proposed AMPDE
1: Initialization: Set the termination criterion, and initialize the values of the parameters.
2: Set the generation count G = 0.
3: Set the external archive (denoted as EXA) to be empty.
4: Generate an initial population with four subpopulations using the Initialize-Population() method in Section 3.1, and then assign a crossover operator to each subpopulation.
5: Evaluate each solution in each subpopulation.
6: Add the non-dominated solutions of all the subpopulations to the EXA.
7: while (the termination criterion is not reached) do
8:   Set G = G + 1.
9:   Improve the EXA by the two-phase local search method described in Section 3.3.
10:    (1) Phase I: Improve the diversity of the non-dominated solutions of the EXA using the EXA-propagation-search() method in Section 3.3.1.
11:    (2) Phase II: If G is a multiple of nCLS (frequency of performing phase II), improve the EXA using the EXA-clustering-search() method in Section 3.3.2, and then repair the diversity of the EXA using the EXA-propagation-search() method.
12:   Generate the new population using the adaptive generation method in Section 3.2.
13:    (1) If G is a multiple of ntrim (frequency of adjusting the subpopulations' sizes), determine the new size of each subpopulation with the New-size-calculation() method described in Section 3.2.1; otherwise, keep the sizes unchanged.
14:    (2) Generate new solutions for each subpopulation with the given size according to the Subpopulation-adjustment() method described in Section 3.2.2.
15:   Evaluate each solution in each subpopulation.
16:   Update the EXA using the EXA-update-strategy() method described in Section 3.4.
17: end while
Algorithm 2 Initialize-Population()
1: Divide the range of each variable into L = 5 equal sub-ranges.
2: for j := 1 to n do
3:   for i := 1 to NP do
4:     if i == 1 then
5:       The selection probability of each sub-range l of the jth variable is initialized as pl = 1/L.
6:     end if
7:     Use the roulette wheel to select a sub-range according to the selection probability of each sub-range.
8:     Randomly generate the value of xi,j within the selected sub-range.
9:     Update the selection probability of each sub-range l by
         pl = pl − 1/NP, if sub-range l was selected;   pl = pl + 1/(NP × (L − 1)), otherwise.
10:    end for
11: end for
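A Python sketch of Algorithm 2 follows. It assumes box bounds per variable; clamping the weights at zero before the roulette-wheel draw is our own safeguard, since the listing above does not state how a non-positive probability would be handled:

```python
import random

def initialize_population(NP, bounds, L=5):
    """Diversified initialization following Algorithm 2: for every variable, a
    roulette wheel over L equal sub-ranges steers successive individuals toward
    sub-ranges that have been sampled less often."""
    n = len(bounds)
    pop = [[0.0] * n for _ in range(NP)]
    for j in range(n):
        lo, hi = bounds[j]
        width = (hi - lo) / L
        p = [1.0 / L] * L                 # reset per variable (the i == 1 case)
        for i in range(NP):
            # Roulette-wheel selection of a sub-range.
            l = random.choices(range(L), weights=[max(w, 0.0) for w in p])[0]
            pop[i][j] = random.uniform(lo + l * width, lo + (l + 1) * width)
            # The selected sub-range loses probability mass, the others gain it.
            for k in range(L):
                p[k] += -1.0 / NP if k == l else 1.0 / (NP * (L - 1))
    return pop

pop = initialize_population(NP=100, bounds=[(0.0, 1.0)] * 30)
```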
The different characteristics of these operators determine their different search powers for different problems. Therefore, it is reasonable to use these operators concurrently by assigning an operator to each subpopulation and designing an appropriate management strategy that adaptively adjusts the size of each subpopulation through analyzing and learning from the search data gathered during the evolution process. Such an adaptive subpopulation generation method can make EAs more robust for different kinds of MOPs. In this paper, the above four crossover operators are used, and the population is correspondingly divided into four parts. Each subpopulation is assigned a crossover operator. The evolution procedure follows the idea of canonical DE; the only difference is that in the AMPDE the perturbed vector is generated by applying the assigned crossover operator. That is, the mutation operation is performed through the assigned operator. For a solution Xi in one subpopulation, one non-dominated solution will be randomly selected from the EXA if this subpopulation is assigned BLX-α or SBX, while two non-dominated solutions will be randomly selected from the EXA if this subpopulation is assigned SPX or PCX. The operator is applied to the current solution and the selected non-dominated solutions to generate the perturbed vector. After this mutation operation, the canonical crossover operation of DE is performed to generate the trial solution, and finally the selection operation is applied to select the solution for the next generation. Therefore, we only use the four crossover operators of genetic algorithms to act as the mutation strategy of canonical DE, and consequently our algorithm is still a DE algorithm. To dynamically adapt the size of each subpopulation, we propose the following adaptive subpopulation generation method based on the contribution of each subpopulation to the external archive.

3.2.1. Determining the new size of each subpopulation – New-size-calculation()
The four subpopulations are first set to equal size after the population is initialized, i.e., NP/4. At the end of each generation, the external archive is updated with the new solutions in each subpopulation. For simplicity, in the implementation a
flag is added to each non-dominated solution in the EXA to record by which subpopulation it was obtained. Once the subpopulation adjustment procedure is called, the new size of each subpopulation is determined as follows.

Step 1. Determine the number of non-dominated solutions provided by each subpopulation j (denoted as bj). Note that if one solution is found by more than one subpopulation, the flag contains the indexes of all these subpopulations.
Step 2. The new size of each subpopulation j (denoted as SubPj) is then calculated as SubPj = NP × bj / Σ(i=1..4) bi. To ensure that each subpopulation always has a chance to evolve, a minimum size (i.e., 5) is set for each subpopulation.

3.2.2. Adjusting each subpopulation – Subpopulation-adjustment()
After the determination of the new size of each subpopulation, the solutions in each subpopulation are first sorted using the fast non-dominated sorting and ranking method of NSGA-II. Then each subpopulation can be adjusted according to two cases:

• In the case of a size increase for a certain subpopulation j with crossover operator k, we first select a random solution from subpopulation j, then select a random solution (or two solutions if operator k is SPX or PCX) from the EXA, and finally generate a new solution by applying operator k to the selected solutions. This process continues until the size of subpopulation j reaches its new size.
• On the contrary, in the case of a size decrease for a certain subpopulation j, the worst solution (or the most crowded one if all the solutions in this subpopulation are non-dominated) is repeatedly deleted until the new size is reached.
Besides the above two cases, for the other solutions that remain in a certain subpopulation j with crossover operator k, we use the following DE-based update method: first, for each solution we select a random solution (or two solutions if operator k is SPX or PCX) from the EXA; second, we generate a perturbed solution Vi,G by applying operator k to this solution and the selected non-dominated solution(s); and finally we perform the traditional crossover and selection steps of DE based on this perturbed solution. In the selection step, the trial solution is selected only if it dominates the target solution.
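A small Python sketch of the New-size-calculation() step of Section 3.2.1 follows, under our own assumptions about rounding and remainder handling, since the paper specifies only the proportional formula and the minimum size of 5:

```python
def new_subpopulation_sizes(contributions, NP=100, min_size=5):
    """Resize the four subpopulations in proportion to the number of archive
    members each contributed (b_j), with a floor of min_size per subpopulation.
    How to round and where to put the remainder is our own choice."""
    total = sum(contributions)
    if total == 0:                       # no contributions yet: keep equal sizes
        return [NP // len(contributions)] * len(contributions)
    sizes = [max(min_size, int(NP * b / total)) for b in contributions]
    # Give any remainder (or take any excess) to/from the largest subpopulation.
    sizes[sizes.index(max(sizes))] += NP - sum(sizes)
    return sizes

# Example: subpopulation 2 contributed most non-dominated solutions.
print(new_subpopulation_sizes([3, 40, 12, 1]))   # -> [5, 69, 21, 5]
```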
3.3. Two-phase local search
Since the solutions in the EXA are involved in the generation of new solutions in our algorithm, the quality and diversity of the EXA are very important to the overall performance of our algorithm. Therefore, we develop a two-phase local search to improve the EXA. The first phase focuses on improving exploration (or diversity) through data analysis of the evolution process by a statistical method, and the second phase centers on improving exploitation (or quality) through data analysis of the non-dominated solutions of the EXA by a clustering method. We prefer this order because the second phase needs the many diversified solutions produced by the first phase in order to derive useful information.
3.3.1. Phase I: EXA-propagation-search()
If the size of the EXA is small or its diversity is poor, the new subpopulations will inevitably move toward a few local regions, and the search will subsequently converge too quickly. To overcome this premature convergence, the diversity and the size of the EXA need to be improved, especially in the initial evolution process. In this paper, a statistics-based improvement method named the EXA-propagation-search is proposed based on the analysis of the evolution process of each subpopulation. The main idea of the first phase is to generate new solutions that improve the EXA through the better crossover operators, as determined by the search progress of the subpopulations. Let nEXA and |EXA| be the maximum size and the current size of the EXA; the EXA-propagation-search() method is then given in Algorithm 3. In this algorithm, the perturbation() procedure changes the value of a randomly selected dimension of a given solution to a random value within the range of this dimension. The update procedure of the EXA is given in Section 3.4. Since there are at most nEXA solutions in the EXA and the update of the EXA needs at most k×nEXA comparisons (k is the number of objectives), the complexity of Algorithm 3 is O(k×nEXA²).

3.3.2. Phase II: EXA-clustering-search()
Experimental results show that after phase I the diversity of the EXA is much improved and the number of non-dominated solutions in the EXA is also increased. With this large set of non-dominated solutions, the EXA-clustering-search is then performed to further improve the quality of the EXA based on the information derived from these solutions. The main idea of the EXA-clustering-search is to classify the non-dominated solutions into ncluster clusters (Algorithm 4), and then to generate new solutions based on an appropriate crossover operator to improve the quality of each cluster (Algorithm 5). In the clustering, the distance between two solutions is defined as their Euclidean distance in the objective space, and the distance between two clusters is defined as the distance between their centers.
Algorithm 3 EXA-propagation-search()
1: if |EXA| ≤ 2 then
2:   for k := 1 to nEXA do
3:     Generate a new solution Sk by applying perturbation() to a randomly selected solution from the EXA.
4:     Update the EXA with the new solution Sk.
5:   end for
6: else
7:   for k := 1 to nEXA do
8:     for each operator i := 1 to 4 do
9:       Determine the contribution of subpopulation i as ci = bi/|EXA|, where bi is the number of non-dominated solutions provided by subpopulation i to the EXA (note that the contribution of subpopulation i can also be seen as the contribution of operator i).
10:      end for
11:     Use the roulette-wheel method to select an operator based on the contribution of each subpopulation.
12:     Perform the selected operator on randomly selected solutions from the EXA to generate a new solution Sk (the operator used to generate this solution is memorized by Sk).
13:     Update the EXA with the new solution Sk.
14:   end for
15: end if
Algorithm 4 Clustering
1: if |EXA| ≤ ncluster then
2:   Stop and exit from the EXA-clustering-search.
3: else
4:   Let each solution in the EXA be a cluster, and denote the clusters as C = {C1, C2, …, C|EXA|}.
5: end if
6: while |C| > ncluster do
7:   Calculate the distance between any two clusters Ci and Cj, and denote the obtained distance as dij (dij = dji).
8:   Determine the minimum value of dij, and denote the corresponding two clusters as Cp and Cq (p < q).
9:   Add all the solutions in cluster Cq into cluster Cp.
10:  Delete cluster Cq.
11: end while
Algorithm 5 Generating new solutions for each cluster
1: for each operator k := 1 to 4 do
2:   Calculate the selection probability pk of operator k.
3: end for
4: for each cluster i := 1 to ncluster do
5:   for j := 1 to |Ci| do
6:     Use the roulette-wheel method to select an operator.
7:     if cluster i does not have enough solutions needed by the selected operator then
8:       Randomly select the other solutions from the EXA.
9:     end if
10:    Perform the selected operator on the randomly selected solutions from Ci or Ci ∪ EXA to generate a new offspring solution Sk.
11:    if Sk dominates at least one of the parent solutions then
12:      Replace that parent solution with solution Sk.
13:      Update the EXA with the new solution Sk.
14:    end if
15:  end for
16: end for
for each operator k: =1 to 4 do Calculate the selection probability pk of operator k. end for for each cluster i: =1 to ncluster do for j: =1 to |Ci | do Use the roulette-wheel method to select an operator. if cluster i does not have enough solutions needed by the selected operator then Randomly select the other solutions from the EXA. end if Perform the selected operator on the randomly selected solutions from Ci or Ci ∪EXA to generate a new offspring solution Sk . if Sk dominates at least one of the parent solutions do Replace the solution with solution Sk . Update the EXA with the new solution Sk . end if end for end for
Based on Algorithm 4, when combining the two nearest clusters we need to calculate the distance of (|EXA| − ncluster)(|EXA| − ncluster − 1)/2 cluster pairs. Since ncluster is generally much smaller than |EXA| and |EXA| generally equals nEXA during the evolution, the complexity of the clustering process is O(nEXA³). The update of the EXA generally needs k×nEXA comparisons and nEXA new solutions are generated, so the complexity of Algorithm 5 is at most O(k×nEXA²). Since the number of objectives is at most 3, the complexity of the two-phase local search is O(nEXA³). In addition, as shown in Algorithm 1, phase II is not performed at every generation like phase I; it is applied only when the generation counter is a multiple of nCLS (i.e., the frequency of performing phase II). We prefer this strategy because phase II is based on clustering and learning from the EXA, and the EXA generally changes little between two consecutive generations. If we performed phase II at every generation, the clustering and learning results might be the same while a great deal of computational effort would be spent. It is therefore reasonable to apply phase II only after the EXA has changed sufficiently. The value of nCLS and its impact on the performance of the AMPDE are analyzed through the experiment in Section 4.4.1.3.
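The following Python sketch mirrors Algorithm 4 under our reading: clusters start as singletons and the two clusters whose centers are closest (Euclidean distance in objective space) are merged until ncluster clusters remain. The quadratic pair scan per merge matches the complexity argument above; the function name cluster_archive is ours:

```python
def cluster_archive(objs, n_cluster):
    """Agglomerative clustering of archive members in objective space.
    `objs` is a list of objective vectors; returns clusters as lists of
    indices into `objs`, or None when the EXA is too small (phase II skipped)."""
    if len(objs) <= n_cluster:
        return None
    clusters = [[i] for i in range(len(objs))]

    def center(c):
        k = len(objs[0])
        return [sum(objs[i][d] for i in c) / len(c) for d in range(k)]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    while len(clusters) > n_cluster:
        centers = [center(c) for c in clusters]
        # Find the pair of clusters with minimum center-to-center distance.
        p, q = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist2(centers[ij[0]], centers[ij[1]]))
        clusters[p].extend(clusters[q])
        del clusters[q]
    return clusters

groups = cluster_archive([[0.1, 0.9], [0.12, 0.88], [0.9, 0.1],
                          [0.85, 0.15], [0.5, 0.5]], n_cluster=2)
```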
3.4. Update of external archive
For a given non-dominated solution i in the current population at generation G, this paper uses the update procedure in reference [4] that can be described as follows.
Step 1. If solution i is dominated by any solution in the EXA, then solution i is discarded.
Step 2. If no solution in the EXA dominates solution i, add it to the EXA, and then delete all solutions in the EXA that are dominated by this solution.
Step 3. If the number of non-dominated solutions exceeds nEXA, delete the most crowded one.
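A compact Python sketch of these three steps follows. The nearest-neighbour distance used to pick the "most crowded" member is a stand-in for the crowding measure of reference [4], which the paper does not restate:

```python
def update_archive(archive, candidate, n_exa=100):
    """Archive update per Section 3.4; `archive` and `candidate` are objective
    vectors (minimization)."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    if any(dominates(a, candidate) for a in archive):
        return archive                                        # Step 1: discard
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)                                 # Step 2: insert, purge
    if len(archive) > n_exa:                                  # Step 3: truncate
        nn = lambda a: min(dist2(a, b) for b in archive if b is not a)
        archive.remove(min(archive, key=nn))                  # smallest gap = most crowded
    return archive

exa = []
for point in [[0.2, 0.8], [0.8, 0.2], [0.5, 0.5], [0.6, 0.6]]:
    exa = update_archive(exa, point, n_exa=3)
```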
3.5. Constraint handling
For MOPs with constraints, we use the constraint-handling approach proposed in [4] and [21] to compare solutions for such MOPs. That is, solution Xi is said to dominate Xj if any of the following three conditions is satisfied: (1) two solutions are both feasible and solution Xi dominates Xj ; (2) solution Xi is feasible and solution Xj is infeasible; (3) both solutions are infeasible but solution Xi has a smaller overall violation of constraints.
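The three conditions translate directly into code. In this Python sketch, the overall constraint violation is assumed to be a non-negative number that is zero for feasible solutions; the function name is ours:

```python
def constrained_dominates(fa, fb, viol_a, viol_b):
    """Constraint-handling comparison of Section 3.5: feasibility first,
    then Pareto dominance, then total constraint violation."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    if viol_a == 0 and viol_b == 0:       # (1) both feasible: plain dominance
        return dominates(fa, fb)
    if viol_a == 0 or viol_b == 0:        # (2) exactly one feasible
        return viol_a == 0
    return viol_a < viol_b                # (3) both infeasible: less violation wins
```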
4. Computational results
To test the performance of our AMPDE algorithm, experiments on benchmark test problems are carried out. The AMPDE algorithm is implemented in C++, and all the experiments are performed on a PC with an Intel Core 9550 CPU (2.83 GHz for a single core) running the Windows 7 operating system.
4.1. Test problems
Twelve bi-objective and three-objective benchmark problems from the literature are chosen as the test problems; their definitions are given in Appendix A. These problems are often used in published papers dealing with evolutionary algorithms for MOPs.
(1) The bi-objective problems are the ZDT series: ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6 in [46].
(2) The three-objective problems are the DTLZ family of scalable test problems: DTLZ1, DTLZ2, DTLZ3, DTLZ4, DTLZ5, DTLZ6, and DTLZ7 in [12].
4.2. Performance metrics
For the selected test problems, the true Pareto fronts are known. In this paper, we adopt two performance metrics that are often used in the research literature: the generational distance (GD) and the hypervolume (IH). Descriptions of these performance metrics are provided in Appendix B.
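For reference, here is a Python sketch of one widely used form of the GD metric. The paper's exact formulation is given in its Appendix B, which is not reproduced in this section, so the formula below (GD = sqrt(sum of squared nearest distances) / N) is an assumption:

```python
import math

def generational_distance(front, true_front):
    """Average closeness of an obtained front to the true Pareto front:
    each obtained point is matched to its nearest true-front point."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    dists2 = [min(d2(p, q) for q in true_front) for p in front]
    return math.sqrt(sum(dists2)) / len(front)

gd = generational_distance([[0.1, 0.92], [0.55, 0.5]],
                           [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
```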
4.3. Parameter setting
For the parameters of the four operators, we adopt the settings suggested in the original literature: α = 0.5 for the BLX-α operator according to [13]; η = 20 for the SBX operator according to [9] and [10]; ε = 1 for the SPX operator according to [40]; and ση = σε = 0.1 for the PCX operator according to [11]. The value of the crossover probability Cr in the crossover step is set by a normal distribution with mean value Crk and standard deviation 0.05, denoted by N(Crk, 0.05), where Crk is the median value of Cr over the solutions in subpopulation k. The value of Crk for each subpopulation k is initially set to 0.1. For the other parameters of the proposed AMPDE, we adopt the following setting based on the computational results: npop = 100, nEXA = 100, nCLS = 60, ntrim = 20, and ncluster = 20. In Section 4.4.1 below, the impact of the value of each parameter on the performance of the AMPDE is analyzed.
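A one-line Python sketch of this sampling rule; clipping the sampled value into [0, 1] is our own assumption, since the paper states only the distribution N(Crk, 0.05):

```python
import random

def sample_cr(cr_k, sd=0.05):
    """Per-solution crossover probability drawn from N(Cr_k, sd), clipped to [0, 1]."""
    return min(1.0, max(0.0, random.gauss(cr_k, sd)))

cr_values = [sample_cr(0.1) for _ in range(5)]   # Cr_k starts at 0.1
```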
4.4. Experimental results
In this section, we first analyze the impact of the parameters on the AMPDE, and then carry out experiments to show the efficiency of the proposed improvement strategies, namely the multiple subpopulations, the dynamic management of the subpopulation sizes, and the two-phase local search. Finally, the proposed AMPDE is compared to several state-of-the-art MOEAs, namely NSGA-II [9], AbYSS [21], GDE3 [16], and SMPSO [20]. In these experiments, all the tested algorithms use the same stopping criterion of 50,000 function evaluations. We made 100 independent runs for each problem, and collected the median (xm) and interquartile range (IQR) for each problem as measures of location (or central tendency) and statistical dispersion.
Table 1 Comparison results for different values of ncluster. Entries are xm (IQR).

GD
Problems | ncluster = 5 | ncluster = 10 | ncluster = 20 | ncluster = 30 | ncluster = 40
ZDT1 | 1.439e−04 (3.432e−05) | 1.362e−04 (3.479e−05) | 1.381e−04 (3.794e−05) | 1.428e−04 (3.496e−05) | 1.399e−04 (2.980e−05)
ZDT2 | 4.735e−05 (4.698e−06) | 4.701e−05 (3.415e−06) | 4.693e−05 (3.287e−06) | 4.764e−05 (4.177e−06) | 4.717e−05 (3.575e−06)
ZDT3 | 3.433e−05 (2.875e−06) | 3.411e−05 (3.546e−06) | 3.367e−05 (3.440e−06) | 3.400e−05 (2.908e−06) | 3.388e−05 (3.364e−06)
ZDT4 | 1.419e−04 (3.719e−05) | 1.338e−04 (3.573e−05) | 1.368e−04 (3.160e−05) | 1.434e−04 (3.539e−05) | 1.404e−04 (3.759e−05)
ZDT6 | 8.638e−04 (4.679e−04) | 8.817e−04 (5.344e−04) | 1.127e−03 (1.735e−03) | 9.447e−04 (5.989e−04) | 8.438e−04 (6.657e−04)
DTLZ1 | 5.729e−04 (2.795e−05) | 5.709e−04 (3.139e−05) | 6.007e−04 (4.172e−05) | 5.668e−04 (4.085e−05) | 5.685e−04 (2.346e−05)
DTLZ2 | 6.340e−04 (3.892e−05) | 6.358e−04 (4.093e−05) | 6.356e−04 (3.288e−05) | 6.363e−04 (3.339e−05) | 6.381e−04 (4.896e−05)
DTLZ3 | 2.056e−03 (6.866e−04) | 2.070e−03 (5.459e−04) | 2.022e−03 (8.089e−04) | 2.059e−03 (6.581e−04) | 2.124e−03 (1.397e−03)
DTLZ4 | 4.660e−03 (3.383e−04) | 4.595e−03 (4.014e−04) | 4.678e−03 (1.948e−04) | 4.643e−03 (2.692e−04) | 4.671e−03 (2.638e−04)
DTLZ5 | 2.505e−04 (3.608e−05) | 2.473e−04 (3.201e−05) | 2.551e−04 (3.043e−05) | 2.473e−04 (3.284e−05) | 2.496e−04 (3.493e−05)
DTLZ6 | 5.772e−04 (1.773e−03) | 5.737e−04 (1.878e−03) | 5.685e−04 (3.130e−05) | 5.750e−04 (5.593e−05) | 5.773e−04 (1.542e−03)
DTLZ7 | 1.983e−03 (5.608e−04) | 1.968e−03 (5.993e−04) | 1.868e−03 (8.754e−04) | 1.916e−03 (5.630e−04) | 1.932e−03 (6.528e−04)

IH
Problems | ncluster = 5 | ncluster = 10 | ncluster = 20 | ncluster = 30 | ncluster = 40
ZDT1 | 6.619e−01 (5.234e−05) | 6.619e−01 (4.909e−05) | 6.619e−01 (3.757e−05) | 6.619e−01 (3.767e−05) | 6.619e−01 (4.013e−05)
ZDT2 | 3.287e−01 (4.116e−05) | 3.287e−01 (3.720e−05) | 3.286e−01 (3.911e−05) | 3.287e−01 (4.097e−05) | 3.287e−01 (3.788e−05)
ZDT3 | 5.876e−01 (1.820e−05) | 5.876e−01 (2.244e−05) | 5.876e−01 (1.781e−05) | 5.876e−01 (2.268e−05) | 5.876e−01 (1.875e−05)
ZDT4 | 6.620e−01 (4.735e−05) | 6.620e−01 (3.273e−05) | 6.619e−01 (3.741e−05) | 6.620e−01 (3.430e−05) | 6.620e−01 (3.505e−05)
ZDT6 | 4.013e−01 (5.687e−05) | 4.013e−01 (6.856e−05) | 4.012e−01 (1.182e−04) | 4.013e−01 (5.871e−05) | 4.013e−01 (6.014e−05)
DTLZ1 | 7.659e−01 (8.117e−03) | 7.657e−01 (7.811e−03) | 7.698e−01 (7.439e−03) | 7.662e−01 (6.333e−03) | 7.672e−01 (7.127e−03)
DTLZ2 | 3.880e−01 (6.302e−03) | 3.873e−01 (6.313e−03) | 3.889e−01 (6.776e−03) | 3.869e−01 (5.953e−03) | 3.882e−01 (5.553e−03)
DTLZ3 | 3.192e−01 (3.485e−02) | 3.193e−01 (2.913e−02) | 3.222e−01 (3.954e−02) | 3.209e−01 (3.222e−02) | 3.210e−01 (3.721e−02)
DTLZ4 | 3.828e−01 (8.561e−03) | 3.832e−01 (7.714e−03) | 3.853e−01 (8.114e−03) | 3.824e−01 (7.322e−03) | 3.842e−01 (7.074e−03)
DTLZ5 | 9.401e−02 (4.535e−05) | 9.400e−02 (4.952e−05) | 9.397e−02 (3.836e−05) | 9.401e−02 (4.526e−05) | 9.401e−02 (4.924e−05)
DTLZ6 | 9.494e−02 (4.124e−05) | 9.494e−02 (4.299e−05) | 9.491e−02 (4.830e−05) | 9.493e−02 (3.894e−05) | 9.493e−02 (3.583e−05)
DTLZ7 | 2.944e−01 (3.794e−03) | 2.940e−01 (3.524e−03) | 2.930e−01 (3.346e−02) | 2.952e−01 (3.831e−03) | 2.942e−01 (3.508e−03)
Fig. 1. Means plots of GD and IH metrics for different values of ncluster .
4.4.1. Analysis of the parameters' impacts on the AMPDE
4.4.1.1. Number of clusters to be classified – ncluster. Five levels of ncluster are tested to show its impact on the performance of the AMPDE, i.e., ncluster is selected from {5, 10, 20, 30, 40}. The computational results are given in Table 1, in which the best results are shown in bold. The means plots of the GD and IH metrics for the 12 problems obtained with each value of ncluster are presented in Fig. 1. Based on the results, it can be seen that for the GD metric the performance of the AMPDE first improves but later deteriorates as ncluster increases. The value of 20 gives the best results for 5 out of the 12 test problems, and for the other problems the performance differences between 20 and the other values are not statistically significant at a confidence level of 95% based on the analysis of variance (ANOVA).
Table 2 Comparison results for different values of ntrim. Entries are xm (IQR).

GD
Problems | ntrim = 5 | ntrim = 10 | ntrim = 20 | ntrim = 30 | ntrim = 40
ZDT1 | 1.359e−04 (3.125e−05) | 1.384e−04 (3.427e−05) | 1.306e−04 (3.492e−05) | 1.387e−04 (3.386e−05) | 1.377e−04 (3.638e−05)
ZDT2 | 4.691e−05 (3.258e−06) | 4.684e−05 (3.536e−06) | 4.659e−05 (3.355e−06) | 4.652e−05 (2.580e−06) | 4.652e−05 (3.103e−06)
ZDT3 | 3.425e−05 (2.689e−06) | 3.438e−05 (2.644e−06) | 3.466e−05 (3.069e−06) | 3.404e−05 (2.709e−06) | 3.424e−05 (2.420e−06)
ZDT4 | 1.391e−04 (3.835e−05) | 1.372e−04 (3.517e−05) | 1.369e−04 (3.892e−05) | 1.342e−04 (4.017e−05) | 1.325e−04 (3.356e−05)
ZDT6 | 1.019e−03 (1.752e−03) | 1.241e−03 (2.034e−03) | 1.081e−03 (1.811e−03) | 1.092e−03 (2.111e−03) | 1.033e−03 (1.745e−03)
DTLZ1 | 5.945e−04 (5.585e−05) | 5.943e−04 (4.049e−05) | 5.947e−04 (5.171e−05) | 5.980e−04 (9.144e−05) | 5.914e−04 (4.946e−05)
DTLZ2 | 6.452e−04 (4.224e−05) | 6.420e−04 (3.807e−05) | 6.351e−04 (3.430e−05) | 6.386e−04 (3.606e−05) | 6.449e−04 (3.379e−05)
DTLZ3 | 1.901e−03 (5.716e−04) | 2.148e−03 (7.600e−04) | 1.998e−03 (6.149e−04) | 2.121e−03 (7.458e−04) | 2.028e−03 (5.013e−04)
DTLZ4 | 4.665e−03 (2.732e−04) | 4.666e−03 (2.823e−04) | 4.701e−03 (2.622e−04) | 4.627e−03 (2.856e−04) | 4.730e−03 (2.818e−04)
DTLZ5 | 2.478e−04 (2.765e−05) | 2.480e−04 (2.926e−05) | 2.554e−04 (2.957e−05) | 2.527e−04 (3.118e−05) | 2.511e−04 (3.166e−05)
DTLZ6 | 5.719e−04 (3.441e−05) | 5.710e−04 (3.274e−05) | 5.610e−04 (3.810e−05) | 5.619e−04 (4.208e−05) | 5.664e−04 (3.131e−05)
DTLZ7 | 2.132e−03 (1.061e−03) | 2.224e−03 (6.736e−04) | 2.137e−03 (6.839e−04) | 2.131e−03 (6.913e−04) | 2.230e−03 (9.274e−04)

IH
Problems | ntrim = 5 | ntrim = 10 | ntrim = 20 | ntrim = 30 | ntrim = 40
ZDT1 | 6.619e−01 (2.928e−05) | 6.619e−01 (2.998e−05) | 6.619e−01 (4.320e−05) | 6.619e−01 (3.141e−05) | 6.619e−01 (3.964e−05)
ZDT2 | 3.286e−01 (3.033e−05) | 3.286e−01 (3.055e−05) | 3.286e−01 (3.765e−05) | 3.286e−01 (3.704e−05) | 3.286e−01 (4.018e−05)
ZDT3 | 5.876e−01 (2.133e−05) | 5.876e−01 (1.894e−05) | 5.876e−01 (1.885e−05) | 5.876e−01 (2.077e−05) | 5.876e−01 (2.149e−05)
ZDT4 | 6.619e−01 (3.521e−05) | 6.619e−01 (3.256e−05) | 6.619e−01 (3.152e−05) | 6.619e−01 (3.426e−05) | 6.619e−01 (3.631e−05)
ZDT6 | 4.012e−01 (1.123e−04) | 4.012e−01 (1.312e−04) | 4.012e−01 (1.327e−04) | 4.012e−01 (1.324e−04) | 4.012e−01 (1.131e−04)
DTLZ1 | 7.691e−01 (5.816e−03) | 7.698e−01 (9.827e−03) | 7.702e−01 (7.515e−03) | 7.692e−01 (7.961e−03) | 7.696e−01 (6.679e−03)
DTLZ2 | 3.882e−01 (8.127e−03) | 3.885e−01 (5.872e−03) | 3.888e−01 (6.784e−03) | 3.879e−01 (5.634e−03) | 3.882e−01 (5.621e−03)
DTLZ3 | 3.290e−01 (3.560e−02) | 3.232e−01 (3.001e−02) | 3.161e−01 (3.020e−02) | 3.183e−01 (3.714e−02) | 3.163e−01 (3.500e−02)
DTLZ4 | 3.850e−01 (5.455e−03) | 3.843e−01 (6.692e−03) | 3.856e−01 (6.814e−03) | 3.848e−01 (6.266e−03) | 3.849e−01 (7.587e−03)
DTLZ5 | 9.396e−02 (5.187e−05) | 9.396e−02 (5.098e−05) | 9.397e−02 (4.856e−05) | 9.397e−02 (5.226e−05) | 9.397e−02 (5.046e−05)
DTLZ6 | 9.491e−02 (4.417e−05) | 9.491e−02 (3.676e−05) | 9.491e−02 (3.898e−05) | 9.491e−02 (3.644e−05) | 9.491e−02 (4.030e−05)
DTLZ7 | 2.916e−01 (3.370e−02) | 2.936e−01 (4.152e−03) | 2.936e−01 (5.247e−03) | 2.946e−01 (4.503e−03) | 2.942e−01 (5.575e−03)
The reason behind these results can be analyzed as follows. When ncluster takes a small value such as 5, the number of solutions in each cluster is large and the distribution of these solutions is wide, so many of the new solutions generated based on the learning of solutions in each cluster may be dominated by solutions already in the cluster, especially when the solutions selected to perform the crossover are far away from each other. On the contrary, when ncluster takes a large value such as 40, the number of solutions in each cluster decreases significantly and consequently the solutions in each cluster become too similar to each other. That is, the probability of finding better new solutions becomes small and the algorithm's performance deteriorates. For the IH metric, the result in Fig. 1 shows that a large value of ncluster tends to give a better IH. This is because the more clusters we classify, the better the distribution of solutions in the EXA, and thus the better the IH metric. However, the ANOVA results show that the different values of ncluster do not have a significant impact on the algorithm's performance, especially for the values of 20 and 40. Therefore, the value of ncluster is set to 20.

4.4.1.2. Frequency of adjusting the subpopulations' sizes – ntrim. The frequency of adjusting the subpopulations' sizes may affect the performance of the AMPDE. If ntrim is small, the adjustment operation is performed often. However, this may cause two disadvantages: (1) the frequent adjustment occupies a lot of computational effort; and (2) each subpopulation cannot evolve for enough generations with a given set of initial solutions. On the contrary, if ntrim is large, some subpopulations with inappropriate crossover operators may already be trapped in locally optimal areas, and their further evolution wastes computational effort because they cannot find more promising solutions even if more generations are allocated to them. Therefore, there is a trade-off in the value of ntrim, so that each subpopulation has sufficient generations to evolve while more computational effort is devoted to promising subpopulations. To support this analysis, we carried out an experiment in which ntrim takes five levels {5, 10, 20, 30, 40}. The results are given in Table 2, and the means plots of the GD and IH metrics obtained with each value are given in Fig. 2. It can be seen that for many problem instances the algorithm's performance first improves but then deteriorates on the GD and IH metrics as ntrim increases. Based on these results, the value of 20 can be taken as a trade-off. In addition, the ANOVA results show that the different values of ntrim do not have a significant impact on the algorithm's performance for the GD and IH metrics.
Fig. 2. Means plots of GD and IH metrics for different values of ntrim .
Fig. 3. Means plots of GD and IH for different nCLS .
4.4.1.3. Frequency of performing phase II of the local search – nCLS. As analyzed in Section 3.3.2, phase II of the local search is not performed at every generation. A small value of nCLS is harmful because the frequent application of phase II occupies a lot of computational effort while the learning results may differ little, since the external archive may not change significantly within nCLS generations. In addition, a small value of nCLS also makes the algorithm focus its search on exploitation, and thus the number of solutions in the EXA may be small. Since the update of the new subpopulations is based on the EXA, the overall performance then decreases. On the contrary, if nCLS is large, the search focus of the algorithm is put on exploration, and thus the distribution of solutions in the EXA improves (that is, the IH metric becomes better), but the GD metric may suffer. The computational results for nCLS with five levels {5, 20, 40, 60, 80} are shown in Table 3 and Fig. 3, which support this analysis: the small value of 5 gives the worst performance, while the large value of 80 has a worse GD metric (though its IH metric is the best). In addition, the computational results show that the average size of the EXA for many problems is about 30 when nCLS = 5, which is much less than its maximum value of 100. Based on these results, the value of nCLS is set to 60 to balance the GD and IH metrics.

4.4.2. Efficiency analysis of the improvement strategies in the AMPDE
4.4.2.1. Adoption of multiple subpopulations. To show the positive effect of adopting multiple subpopulations, we replaced the multiple subpopulations with a single population and thereby derived an adaptive single-population DE (denoted as ASPDE). Since four kinds of crossover operators are used, there are four versions of the ASPDE, namely the ASPDE based on BLX-α (denoted as ASPDEBLX), the ASPDE based on SBX (denoted as ASPDESBX), the ASPDE based on SPX (denoted as ASPDESPX), and the ASPDE based on PCX (denoted as ASPDEPCX). In these four ASPDE algorithms, only the given single operator is used in the generation of new solutions and in the local search; in addition, the subpopulation procedure is not applied.
Table 3 Comparison results for different values of nCLS. Entries are xm (IQR).

GD
Problems | nCLS = 5 | nCLS = 20 | nCLS = 40 | nCLS = 60 | nCLS = 80
ZDT1 | 1.085e−04 (4.396e−05) | 1.297e−04 (3.733e−05) | 1.381e−04 (3.794e−05) | 1.289e−04 (4.222e−05) | 1.359e−04 (3.322e−05)
ZDT2 | 6.529e−05 (3.249e−05) | 4.651e−05 (3.591e−06) | 4.693e−05 (3.287e−06) | 4.623e−05 (3.111e−06) | 4.677e−05 (3.520e−06)
ZDT3 | 5.822e−05 (2.260e−05) | 3.399e−05 (3.294e−06) | 3.367e−05 (3.440e−06) | 3.420e−05 (3.267e−06) | 3.384e−05 (3.141e−06)
ZDT4 | 1.117e−04 (5.063e−05) | 1.173e−04 (4.348e−05) | 1.368e−04 (3.160e−05) | 1.350e−04 (2.908e−05) | 1.333e−04 (4.043e−05)
ZDT6 | 4.876e−04 (2.045e−03) | 8.902e−04 (1.945e−03) | 1.127e−03 (1.735e−03) | 1.438e−03 (2.804e−03) | 1.173e−03 (1.942e−03)
DTLZ1 | 1.671e−03 (2.046e−04) | 1.069e−03 (4.251e−04) | 6.007e−04 (4.172e−05) | 6.001e−04 (5.263e−05) | 5.856e−04 (4.080e−05)
DTLZ2 | 7.630e−04 (1.350e−04) | 6.464e−04 (2.934e−05) | 6.356e−04 (3.288e−05) | 6.461e−04 (3.050e−05) | 6.392e−04 (3.758e−05)
DTLZ3 | 2.608e−03 (5.584e−04) | 2.445e−03 (7.435e−04) | 2.022e−03 (8.089e−04) | 1.087e−03 (9.913e−05) | 1.106e−03 (4.191e−04)
DTLZ4 | 1.001e−02 (3.245e−03) | 4.668e−03 (2.515e−04) | 4.678e−03 (1.948e−04) | 4.647e−03 (2.422e−04) | 4.653e−03 (3.130e−04)
DTLZ5 | 3.671e−04 (6.808e−05) | 2.465e−04 (3.362e−05) | 2.551e−04 (3.043e−05) | 2.502e−04 (2.694e−05) | 2.501e−04 (2.692e−05)
DTLZ6 | 5.859e−04 (7.978e−05) | 5.733e−04 (3.280e−05) | 5.685e−04 (3.130e−05) | 5.627e−04 (3.123e−05) | 5.666e−04 (2.911e−05)
DTLZ7 | 3.118e−03 (2.653e−03) | 2.096e−03 (1.079e−03) | 1.868e−03 (8.754e−04) | 2.063e−03 (8.619e−04) | 1.981e−03 (7.805e−04)

IH
Problems | nCLS = 5 | nCLS = 20 | nCLS = 40 | nCLS = 60 | nCLS = 80
ZDT1 | 6.534e−01 (5.197e−03) | 6.619e−01 (3.988e−05) | 6.619e−01 (3.957e−05) | 6.619e−01 (3.235e−05) | 6.619e−01 (3.612e−05)
ZDT2 | 3.226e−01 (9.845e−03) | 3.286e−01 (3.658e−05) | 3.286e−01 (3.911e−05) | 3.286e−01 (3.518e−05) | 3.286e−01 (2.979e−05)
ZDT3 | 5.841e−01 (3.908e−03) | 5.876e−01 (1.921e−05) | 5.876e−01 (1.820e−05) | 5.876e−01 (1.549e−05) | 5.876e−01 (2.151e−05)
ZDT4 | 6.564e−01 (8.518e−03) | 6.619e−01 (2.986e−03) | 6.619e−01 (3.741e−05) | 6.619e−01 (3.893e−05) | 6.619e−01 (2.995e−05)
ZDT6 | 3.957e−01 (1.018e−02) | 4.012e−01 (1.184e−04) | 4.012e−01 (1.182e−04) | 4.012e−01 (1.351e−04) | 4.012e−01 (1.149e−04)
DTLZ1 | 6.834e−01 (1.900e−02) | 7.289e−01 (3.057e−02) | 7.698e−01 (7.439e−03) | 7.719e−01 (6.939e−03) | 7.703e−01 (6.586e−03)
DTLZ2 | 3.717e−01 (1.186e−02) | 3.885e−01 (5.640e−03) | 3.889e−01 (6.776e−03) | 3.873e−01 (7.513e−03) | 3.875e−01 (7.053e−03)
DTLZ3 | 2.856e−01 (1.516e−02) | 2.991e−01 (2.433e−02) | 3.222e−01 (3.954e−02) | 3.882e−01 (6.784e−03) | 3.866e−01 (8.864e−03)
DTLZ4 | 3.226e−01 (2.820e−02) | 3.857e−01 (7.371e−03) | 3.853e−01 (8.114e−03) | 3.848e−01 (6.737e−03) | 3.848e−01 (6.717e−03)
DTLZ5 | 9.109e−02 (1.282e−03) | 9.397e−02 (4.765e−05) | 9.397e−02 (3.836e−05) | 9.397e−02 (4.968e−05) | 9.397e−02 (4.276e−05)
DTLZ6 | 9.486e−02 (6.004e−04) | 9.491e−02 (3.714e−05) | 9.491e−02 (4.830e−05) | 9.491e−02 (4.137e−05) | 9.491e−02 (4.281e−05)
DTLZ7 | 2.739e−01 (3.439e−02) | 2.930e−01 (6.200e−03) | 2.930e−01 (3.346e−02) | 2.939e−01 (8.259e−03) | 2.926e−01 (3.406e−02)
The comparison results between our AMPDE and the four ASPDE algorithms are given in Table 4, where the symbol "+" denotes that the performance difference between our AMPDE and the best of the others is significant, and the symbol "–" denotes that the difference is not significant at a confidence level of 95% (these symbols are used in the following tables as well). Based on the results, it can be seen that the four crossover operators have different advantages on different problems and that the adoption of the multiple subpopulation strategy obtains the best GD metrics for 8 out of the 12 problems (the differences are significant for 6 problems) and the best IH metrics for 10 out of the 12 problems (the differences are significant for 10 problems). Therefore, it can be concluded that this strategy combines the advantages of the four operators and thus makes the proposed AMPDE more effective and robust across different problems.

4.4.2.2. Two-phase local search. In this section, the effect of the two-phase local search is tested by comparing the proposed AMPDE to a version of the AMPDE in which the two-phase local search is not used (denoted as AMPDEnoLS). The comparison results are presented in Table 5. From this table, it appears that the two-phase local search obtains better GD results for 10 problems (the improvement is significant for 6 problems) and better IH results for 8 problems (the improvement is significant for 5 problems). Although the AMPDEnoLS obtains a significantly better GD for problem ZDT4, the distribution of the EXA it obtains is significantly worse than that obtained by the AMPDE. The AMPDEnoLS obtains better results for some problems because it can evolve for more generations than the AMPDE. In general, we can conclude that the two-phase local search benefits the AMPDE due to its effectiveness in improving the convergence and diversity of the obtained EXA.

4.4.2.3. Adaptive population generation method. The aim of this section is to test the effect of the adaptive population generation method, i.e., the dynamic size control of the subpopulations. In this experiment, we compared the proposed AMPDE with a version of the AMPDE in which the size of each subpopulation is fixed to 25 (denoted as AMPDEfixed). The comparison results between the AMPDE and the AMPDEfixed are given in Table 6. Based on these results, it can be found that the AMPDE obtains better GD and IH metrics for 8 out of the 12 problems (the improvement is significant for 5 problems). It can then be concluded that the adaptive population generation method can adaptively select the appropriate subpopulation, give it more chances to evolve, and thus make the AMPDE more robust for different MOPs.
Table 4 Comparison results between AMPDE and the four ASPDE algorithms. Entries are xm; the IQR is listed for the AMPDE.

GD
Problems | AMPDE xm (IQR) | ASPDEBLX xm | ASPDESBX xm | ASPDESPX xm | ASPDEPCX xm | Sig.
ZDT1 | 1.289e−04 (4.222e−05) | 1.117e−04 | 6.124e−04 | 2.577e−04 | 1.216e−04 | +
ZDT2 | 4.623e−05 (3.111e−06) | 8.463e−05 | 2.608e−03 | 1.101e−04 | 7.940e−05 | +
ZDT3 | 3.420e−05 (3.267e−06) | 6.980e−05 | 5.248e−04 | 2.129e−04 | 5.567e−05 | +
ZDT4 | 1.350e−04 (2.908e−05) | 3.657e−01 | 2.338e−03 | 5.893e−04 | 1.261e−04 | –
ZDT6 | 1.438e−03 (2.804e−03) | 1.170e−02 | 1.177e−01 | 4.550e−05 | 9.292e−05 | +
DTLZ1 | 6.001e−04 (5.263e−05) | 6.911e−01 | 6.320e+00 | 3.182e+01 | 1.375e−03 | +
DTLZ2 | 6.461e−04 (3.050e−05) | 7.501e−04 | 3.183e−02 | 1.426e−01 | 6.339e−04 | –
DTLZ3 | 1.087e−03 (9.913e−05) | 1.809e+01 | 1.958e+01 | 6.559e+01 | 2.601e−03 | +
DTLZ4 | 4.647e−03 (2.422e−04) | 4.862e−03 | 3.136e−02 | 1.853e−01 | 4.774e−03 | +
DTLZ5 | 2.502e−04 (2.694e−05) | 2.551e−04 | 3.675e−02 | 2.572e−01 | 2.626e−04 | –
DTLZ6 | 5.627e−04 (3.123e−05) | 9.393e−04 | 9.047e−01 | 4.672e−02 | 1.004e−03 | +
DTLZ7 | 2.063e−03 (8.619e−04) | 8.023e−03 | 3.373e−03 | 1.252e−03 | 3.456e−03 | +

IH
Problems | AMPDE xm (IQR) | ASPDEBLX xm | ASPDESBX xm | ASPDESPX xm | ASPDEPCX xm | Sig.
ZDT1 | 6.619e−01 (3.235e−05) | 6.468e−01 | 6.490e−01 | 6.488e−01 | 1.791e−01 | +
ZDT2 | 3.286e−01 (3.518e−05) | 3.151e−01 | 3.214e−01 | 2.684e−01 | 2.415e−01 | +
ZDT3 | 5.876e−01 (1.549e−05) | 5.811e−01 | 5.845e−01 | 5.445e−01 | 1.316e−01 | +
ZDT4 | 6.619e−01 (3.893e−05) | 3.128e−02 | 6.612e−01 | 6.081e−01 | 4.075e−01 | +
ZDT6 | 4.012e−01 (1.351e−04) | 3.953e−01 | 3.854e−01 | 4.013e−01 | 2.438e−01 | +
DTLZ1 | 7.719e−01 (6.939e−03) | 4.283e−02 | 7.074e−01 | 4.013e−01 | 2.386e−01 | +
DTLZ2 | 3.873e−01 (7.513e−03) | 3.866e−01 | 3.868e−01 | 7.726e−03 | 7.922e−02 | +
DTLZ3 | 3.882e−01 (6.784e−03) | 3.813e−01 | 2.923e−01 | 4.984e−03 | 7.267e−02 | +
DTLZ4 | 3.848e−01 (6.737e−03) | 3.776e−01 | 3.764e−01 | 4.984e−03 | 1.122e−01 | +
DTLZ5 | 9.397e−02 (4.968e−05) | 9.395e−02 | 9.383e−02 | 4.984e−03 | 5.902e−02 | +
DTLZ6 | 9.491e−02 (4.137e−05) | 9.024e−02 | 9.109e−02 | 8.154e−02 | 9.296e−09 | +
DTLZ7 | 2.939e−01 (8.259e−03) | 2.575e−01 | 2.658e−01 | 2.210e−01 | 1.892e−01 | +
Table 5
Comparison results for the adoption of the two-phase local search. Each cell gives the median of the metric with the IQR in parentheses.

GD
Problem  AMPDE                  AMPDEnoLS              Sig.
ZDT1     1.289e-04 (4.222e-05)  1.324e-04 (3.643e-05)  –
ZDT2     4.623e-05 (3.111e-06)  4.708e-05 (3.761e-06)  –
ZDT3     3.420e-05 (3.267e-06)  3.540e-05 (5.152e-06)  +
ZDT4     1.350e-04 (2.908e-05)  8.473e-06 (1.228e-04)  +
ZDT6     1.438e-03 (2.804e-03)  2.053e-03 (2.008e-02)  +
DTLZ1    6.001e-04 (5.263e-05)  1.351e-03 (5.285e-04)  +
DTLZ2    6.461e-04 (3.050e-05)  6.451e-04 (3.994e-05)  –
DTLZ3    1.087e-03 (9.913e-05)  2.361e-03 (7.551e-04)  +
DTLZ4    4.647e-03 (2.422e-04)  4.680e-03 (3.889e-04)  –
DTLZ5    2.502e-04 (2.694e-05)  2.540e-04 (2.853e-05)  –
DTLZ6    5.627e-04 (3.123e-05)  5.802e-04 (5.189e-05)  +
DTLZ7    2.063e-03 (8.619e-04)  2.292e-03 (9.180e-04)  +

IH
Problem  AMPDE                  AMPDEnoLS              Sig.
ZDT1     6.619e-01 (3.235e-05)  6.619e-01 (2.813e-05)  –
ZDT2     3.286e-01 (3.518e-05)  3.286e-01 (4.748e-05)  –
ZDT3     5.876e-01 (1.549e-05)  5.876e-01 (7.419e-05)  –
ZDT4     6.619e-01 (3.893e-05)  6.608e-01 (4.460e-03)  +
ZDT6     4.012e-01 (1.351e-04)  4.011e-01 (1.431e-03)  –
DTLZ1    7.719e-01 (6.939e-03)  7.064e-01 (4.144e-02)  +
DTLZ2    3.873e-01 (7.513e-03)  3.883e-01 (7.858e-03)  +
DTLZ3    3.882e-01 (6.784e-03)  2.992e-01 (3.310e-02)  +
DTLZ4    3.848e-01 (6.737e-03)  3.835e-01 (6.832e-03)  +
DTLZ5    9.397e-02 (4.968e-05)  9.399e-02 (5.779e-05)  –
DTLZ6    9.491e-02 (4.137e-05)  9.489e-02 (2.454e-04)  +
DTLZ7    2.939e-01 (8.259e-03)  2.950e-01 (4.697e-03)  +
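The "+"/"–" entries in Tables 4–6 summarize a pairwise statistical test at the 95% confidence level. The specific test is not restated here, so the following sketch merely illustrates one common choice in the MOEA literature, a Wilcoxon rank-sum test applied to the per-run metric values of two algorithms; the sample data are invented.

    from scipy.stats import ranksums

    # Hypothetical per-run GD values of two algorithms on one problem.
    gd_a = [1.29e-4, 1.31e-4, 1.27e-4, 1.33e-4, 1.28e-4]
    gd_b = [1.45e-4, 1.52e-4, 1.39e-4, 1.61e-4, 1.48e-4]

    stat, p = ranksums(gd_a, gd_b)
    # "+" if the difference is significant at the 95% level, "-" otherwise.
    print("+" if p < 0.05 else "-")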
4.4.3. Comparison to other state-of-the-art MOEAs
In this section, the proposed AMPDE is compared with four other state-of-the-art MOEAs: NSGA-II [9], AbYSS [21], GDE3 [16], and SMPSO [20]. For these four algorithms we use the Java implementations in jMetal 4.5, available at http://jmetal.sourceforge.net/, with the parameters set as suggested in the original references. To make the comparison fair, the size of the external archive is set to 100 and the maximum number of function evaluations is set to 50,000 for all 12 benchmark problems. The computational results are presented in Table 7. For the GD metric, the AMPDE obtains the best values for 7 of the 12 problems, with statistical confidence in 3 cases. GDE3 performs best on 2 problems (1 of them with a significant difference over the others), AbYSS obtains the best result for only one problem, SMPSO achieves superior results for 2 problems (both significantly better than the others), and NSGA-II fails to obtain the best result on any problem.
Table 6
Comparison results for the adoption of the adaptive population generation method. Each cell gives the median of the metric with the IQR in parentheses.

GD
Problem  AMPDE                  AMPDEfixed             Sig.
ZDT1     1.289e-04 (4.222e-05)  1.332e-04 (3.901e-05)  +
ZDT2     4.623e-05 (3.111e-06)  4.682e-05 (3.209e-06)  –
ZDT3     3.420e-05 (3.267e-06)  3.372e-05 (2.407e-06)  +
ZDT4     1.350e-04 (2.908e-05)  1.449e-04 (3.319e-05)  +
ZDT6     1.438e-03 (2.804e-03)  9.925e-04 (1.397e-03)  +
DTLZ1    6.001e-04 (5.263e-05)  5.801e-04 (5.947e-05)  +
DTLZ2    6.461e-04 (3.050e-05)  8.500e-04 (1.630e-03)  +
DTLZ3    1.087e-03 (9.913e-05)  2.318e-03 (8.000e-01)  +
DTLZ4    4.647e-03 (2.422e-04)  4.676e-03 (2.688e-04)  –
DTLZ5    2.502e-04 (2.694e-05)  2.481e-04 (3.338e-05)  –
DTLZ6    5.627e-04 (3.123e-05)  5.714e-04 (4.008e-05)  +
DTLZ7    2.063e-03 (8.619e-04)  2.346e-03 (7.726e-04)  –

IH
Problem  AMPDE                  AMPDEfixed             Sig.
ZDT1     6.619e-01 (3.235e-05)  6.619e-01 (3.681e-05)  –
ZDT2     3.286e-01 (3.518e-05)  3.286e-01 (4.095e-05)  –
ZDT3     5.876e-01 (1.549e-05)  5.876e-01 (2.051e-05)  +
ZDT4     6.619e-01 (3.893e-05)  6.619e-01 (3.183e-05)  +
ZDT6     4.012e-01 (1.351e-04)  4.012e-01 (8.094e-05)  +
DTLZ1    7.719e-01 (6.939e-03)  7.673e-01 (8.429e-03)  +
DTLZ2    3.873e-01 (7.513e-03)  3.867e-01 (7.130e-03)  –
DTLZ3    3.882e-01 (6.784e-03)  3.149e-01 (3.609e-02)  +
DTLZ4    3.848e-01 (6.737e-03)  3.846e-01 (7.714e-03)  +
DTLZ5    9.397e-02 (4.968e-05)  9.400e-02 (4.944e-05)  +
DTLZ6    9.491e-02 (4.137e-05)  9.494e-02 (3.240e-05)  +
DTLZ7    2.939e-01 (8.259e-03)  2.935e-01 (4.214e-03)  +
Table 7
Comparison results between AMPDE and the other MOEAs. Each cell gives the median of the metric with the IQR in parentheses.

GD
Problem  AMPDE                  NSGA-II                AbYSS                  GDE3                   SMPSO                  Sig.
ZDT1     1.289e-04 (4.222e-05)  4.586e-04 (5.976e-05)  1.525e-04 (2.139e-05)  1.299e-04 (4.590e-05)  1.088e-04 (4.290e-05)  +
ZDT2     4.623e-05 (3.111e-06)  5.225e-04 (9.523e-05)  5.223e-05 (1.147e-05)  4.661e-05 (3.562e-06)  4.740e-05 (2.683e-06)  –
ZDT3     3.420e-05 (3.267e-06)  1.799e-04 (3.695e-05)  3.393e-05 (3.417e-06)  3.305e-05 (2.250e-06)  3.609e-05 (2.718e-06)  –
ZDT4     1.350e-04 (2.908e-05)  1.606e-04 (5.000e-05)  2.316e-04 (1.022e-04)  1.614e-01 (1.625e-01)  1.212e-04 (4.160e-05)  +
ZDT6     1.438e-03 (2.804e-03)  9.627e-02 (1.357e-02)  7.475e-05 (8.606e-06)  4.344e-05 (5.622e-06)  5.031e-05 (8.380e-03)  +
DTLZ1    6.001e-04 (5.263e-05)  7.416e-04 (2.683e-04)  6.034e-04 (8.943e-05)  1.345e-03 (3.854e-04)  5.191e-03 (1.378e-03)  –
DTLZ2    6.461e-04 (3.050e-05)  1.366e-03 (2.414e-04)  6.841e-04 (3.744e-05)  1.192e-03 (2.504e-04)  3.393e-03 (6.469e-04)  +
DTLZ3    1.087e-03 (9.913e-05)  4.420e-03 (1.156e-02)  4.091e-03 (1.015e-01)  9.587e-01 (1.203e+00)  3.360e-03 (1.156e-03)  +
DTLZ4    4.647e-03 (2.422e-04)  5.137e-03 (3.862e-04)  4.881e-03 (3.154e-04)  4.925e-03 (2.496e-04)  5.745e-03 (5.061e-04)  +
DTLZ5    2.502e-04 (2.694e-05)  3.684e-04 (7.521e-05)  2.539e-04 (3.135e-05)  2.520e-04 (3.071e-05)  2.611e-04 (3.347e-05)  –
DTLZ6    5.627e-04 (3.123e-05)  6.314e-04 (2.361e-03)  9.373e-03 (7.674e-03)  5.760e-04 (3.118e-05)  5.664e-04 (3.344e-05)  –
DTLZ7    2.063e-03 (8.619e-04)  2.665e-03 (7.560e-04)  1.287e-03 (1.057e-03)  2.881e-03 (5.106e-04)  4.675e-03 (1.072e-03)  +

IH
Problem  AMPDE                  NSGA-II                AbYSS                  GDE3                   SMPSO                  Sig.
ZDT1     6.619e-01 (3.235e-05)  6.551e-01 (9.119e-04)  6.620e-01 (7.165e-05)  6.620e-01 (3.394e-05)  6.620e-01 (4.578e-05)  –
ZDT2     3.286e-01 (3.518e-05)  3.206e-01 (1.362e-03)  3.287e-01 (7.603e-05)  3.287e-01 (3.921e-05)  3.287e-01 (3.720e-05)  –
ZDT3     5.876e-01 (1.549e-05)  5.835e-01 (8.462e-04)  5.876e-01 (2.403e-05)  5.876e-01 (1.705e-05)  5.875e-01 (7.421e-05)  –
ZDT4     6.619e-01 (3.893e-05)  6.595e-01 (9.191e-04)  6.599e-01 (2.319e-03)  1.864e-01 (2.356e-01)  6.618e-01 (1.002e-04)  –
ZDT6     4.012e-01 (1.351e-04)  6.587e-01 (0.000e+00)  4.006e-01 (1.433e-04)  4.013e-01 (2.373e-05)  4.013e-01 (6.396e-05)  +
DTLZ1    7.719e-01 (6.939e-03)  7.623e-01 (7.801e-03)  7.612e-01 (8.167e-03)  7.672e-01 (5.962e-03)  7.401e-01 (9.501e-03)  +
DTLZ2    3.873e-01 (7.513e-03)  3.754e-01 (9.428e-03)  3.847e-01 (7.116e-03)  3.771e-01 (7.784e-03)  3.548e-01 (7.298e-03)  –
DTLZ3    3.882e-01 (6.784e-03)  3.342e-01 (3.843e-02)  3.417e-01 (4.182e-02)  2.907e-01 (1.006e-01)  3.561e-01 (1.448e-02)  +
DTLZ4    3.848e-01 (6.737e-03)  3.759e-01 (8.662e-03)  3.867e-01 (5.254e-03)  3.764e-01 (6.219e-03)  3.632e-01 (8.697e-03)  +
DTLZ5    9.397e-02 (4.968e-05)  9.288e-02 (2.400e-04)  9.401e-02 (2.998e-05)  9.397e-02 (5.044e-05)  9.390e-02 (6.704e-05)  –
DTLZ6    9.491e-02 (4.137e-05)  9.355e-02 (1.857e-02)  4.883e-02 (1.903e-02)  9.487e-02 (3.629e-05)  9.488e-02 (4.576e-05)  –
DTLZ7    2.939e-01 (8.259e-03)  2.841e-01 (7.348e-03)  2.604e-01 (4.918e-02)  2.933e-01 (3.471e-03)  2.764e-01 (6.196e-03)  –
More specifically, for the GD metric our AMPDE obtains the best result for only one bi-objective problem (ZDT2), but it shows an overwhelming advantage over the others on the three-objective problems. The pairwise comparison results between our AMPDE and its rivals are given in Table 8, where the symbol "Y" denotes that our AMPDE obtains a better result than the corresponding rival and the symbol "N" denotes that the rival is superior. Table 8 shows that our AMPDE obtains better GD results than AbYSS and SMPSO on 9 problems, better GD results than NSGA-II on all 12 problems, and better GD results than GDE3 on 10 problems. For the IH metric, our AMPDE obtains the best results for 2 bi-objective problems and 5 three-objective problems. AbYSS is superior to the other algorithms on 2 three-objective problems, GDE3 also shows better performance than the others on 2 problems, and NSGA-II and SMPSO each obtain the best IH metric for only one problem.
Table 8
Pairwise comparison results between AMPDE and the other MOEAs for the GD metric ("Y": AMPDE is better than the rival on that problem; "N": the rival is better).

Problem  NSGA-II  AbYSS  GDE3  SMPSO
ZDT1     Y        Y      Y     N
ZDT2     Y        Y      Y     Y
ZDT3     Y        N      N     Y
ZDT4     Y        Y      Y     N
ZDT6     Y        N      N     N
DTLZ1    Y        Y      Y     Y
DTLZ2    Y        Y      Y     Y
DTLZ3    Y        Y      Y     Y
DTLZ4    Y        Y      Y     Y
DTLZ5    Y        Y      Y     Y
DTLZ6    Y        Y      Y     Y
DTLZ7    Y        N      Y     Y
Total    12       9      10    9
Fig. 4. Pareto front for the best GD metric obtained by each algorithm on problem DTLZ7.
To give a graphical overview of the behavior of these algorithms, Fig. 4 shows the Pareto front obtained by each algorithm with its best GD value on problem DTLZ7 (the results are from a single run). The figure makes clear that the Pareto front obtained by AbYSS is trapped in the first patch of the true Pareto front, even though its GD value is the best. SMPSO shows a slightly worse solution distribution than AMPDE, NSGA-II, and GDE3. Although AMPDE, NSGA-II, and GDE3 look similar in Fig. 4, the maximum value of objective 3 obtained by GDE3 and NSGA-II exceeds 6.000 (6.006 and 6.013, respectively), whereas the maximum value obtained by our AMPDE is exactly 6.000, the value on the true Pareto front. That is, the fronts obtained by NSGA-II and GDE3 lie slightly farther from the true Pareto front than the front obtained by our AMPDE. Overall, these comparisons indicate that the performance of the AMPDE is good and robust, since it obtains comparable or superior results on most of the test problems. The major reasons can be analyzed as follows.
Table 9
Comparison results between AMPDEnoLS and the other MOEAs for the GD metric. Each cell gives the median with the IQR in parentheses.

Problem  AMPDEnoLS              NSGA-II                AbYSS                  GDE3                   SMPSO                  Sig.
ZDT1     1.324e-04 (3.643e-05)  4.586e-04 (5.976e-05)  1.525e-04 (2.139e-05)  1.299e-04 (4.590e-05)  1.088e-04 (4.290e-05)  +
ZDT2     4.708e-05 (3.761e-06)  5.225e-04 (9.523e-05)  5.223e-05 (1.147e-05)  4.661e-05 (3.562e-06)  4.740e-05 (2.683e-06)  +
ZDT3     3.540e-05 (5.152e-06)  1.799e-04 (3.695e-05)  3.393e-05 (3.417e-06)  3.305e-05 (2.250e-06)  3.609e-05 (2.718e-06)  +
ZDT4     8.473e-06 (1.228e-04)  1.606e-04 (5.000e-05)  2.316e-04 (1.022e-04)  1.614e-01 (1.625e-01)  1.212e-04 (4.160e-05)  +
ZDT6     2.053e-03 (2.008e-02)  9.627e-02 (1.357e-02)  7.475e-05 (8.606e-06)  4.344e-05 (5.622e-06)  5.031e-05 (8.380e-03)  +
DTLZ1    1.351e-03 (5.285e-04)  7.416e-04 (2.683e-04)  6.034e-04 (8.943e-05)  1.345e-03 (3.854e-04)  5.191e-03 (1.378e-03)  –
DTLZ2    6.451e-04 (3.994e-05)  1.366e-03 (2.414e-04)  6.841e-04 (3.744e-05)  1.192e-03 (2.504e-04)  3.393e-03 (6.469e-04)  +
DTLZ3    2.361e-03 (7.551e-04)  4.420e-03 (1.156e-02)  4.091e-03 (1.015e-01)  9.587e-01 (1.203e+00)  3.360e-03 (1.156e-03)  +
DTLZ4    4.680e-03 (3.889e-04)  5.137e-03 (3.862e-04)  4.881e-03 (3.154e-04)  4.925e-03 (2.496e-04)  5.745e-03 (5.061e-04)  +
DTLZ5    2.540e-04 (2.853e-05)  3.684e-04 (7.521e-05)  2.539e-04 (3.135e-05)  2.520e-04 (3.071e-05)  2.611e-04 (3.347e-05)  –
DTLZ6    5.802e-04 (5.189e-05)  6.314e-04 (2.361e-03)  9.373e-03 (7.674e-03)  5.760e-04 (3.118e-05)  5.664e-04 (3.344e-05)  +
DTLZ7    2.292e-03 (9.180e-04)  2.665e-03 (7.560e-04)  1.287e-03 (1.057e-03)  2.881e-03 (5.106e-04)  4.675e-03 (1.072e-03)  +
Table 10
Pairwise comparison results between AMPDEnoLS and the other MOEAs for the GD metric ("Y": AMPDEnoLS is better than the rival on that problem; "N": the rival is better).

Problem  NSGA-II  AbYSS  GDE3  SMPSO
ZDT1     Y        Y      N     N
ZDT2     Y        Y      N     Y
ZDT3     Y        N      N     Y
ZDT4     Y        Y      Y     Y
ZDT6     Y        N      N     N
DTLZ1    N        N      N     Y
DTLZ2    Y        Y      Y     Y
DTLZ3    Y        Y      Y     Y
DTLZ4    Y        Y      Y     Y
DTLZ5    Y        N      N     Y
DTLZ6    Y        Y      N     N
DTLZ7    Y        N      Y     Y
Total    11       7      5     9
Firstly, the two-phase local search improves both the quality and the diversity of the non-dominated solutions stored in the EXA, and these improved solutions are then used to generate new solutions. This mechanism drives the new solutions to converge quickly toward the true Pareto front while keeping a good spread of non-dominated solutions in the EXA. To illustrate the positive effect of the local search method, we further compared the AMPDE without local search (AMPDEnoLS) against the four state-of-the-art MOEAs on the GD metric in Tables 9 and 10. The comparison shows that AMPDEnoLS obtains the best result for only 4 problems: although it is still superior to NSGA-II and SMPSO on most problems, AbYSS becomes competitive with it and GDE3 is superior to it. Secondly, the adaptive population generation method self-adaptively changes the size of each subpopulation and allocates more evolution chances to the subpopulations whose crossover operators are better suited to the problem at hand. This mechanism helps to guarantee a robust performance of the AMPDE across different kinds of multi-objective problems.
5. Conclusions
This paper proposes an adaptive multi-population DE algorithm (AMPDE) for multi-objective optimization problems. One major feature of the proposed algorithm is that it uses multiple subpopulations: each subpopulation is assigned a crossover operator, and the perturbed vectors in that subpopulation are generated by this operator rather than by the mutation step of the canonical DE algorithm. Another main feature is that it adopts clustering and statistical methods to derive useful information from the search process. The derived information is then used to adaptively adjust the size of each subpopulation according to its contribution to the external archive, and to guide the two-phase local search so as to improve both the quality and the diversity of the external archive. Computational experiments on benchmark problems were carried out to test the performance of the proposed AMPDE, and the results show that the proposed strategies bring positive improvements and that the AMPDE is competitive with or superior to some state-of-the-art MOEAs in the literature on the ZDT and DTLZ series of problems. Our future research will include the incorporation of more advanced clustering techniques into the AMPDE to assist the subpopulation management, and the application of the AMPDE to complex MOPs from practical industries.
Acknowledgments
This research is partly supported by the National Natural Science Foundation of China (no. 61374203, no. 61573086), the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (no. 71321001), the National 863 High-Tech Research and Development Program of China (2013AA040704), and the State Key Laboratory of Synthetical Automation for Process Industries Fundamental Research Funds (grant no. 2013ZCX02).
Appendix A. Definitions of test problems
ZDT1 (n = 30; 0 ≤ x_i ≤ 1, i = 1, ..., n):
$f_1(x) = x_1$, $f_2(x) = g(x)\,[1 - \sqrt{x_1/g(x)}]$, $g(x) = 1 + 9\sum_{i=2}^{n} x_i/(n-1)$.

ZDT2 (n = 30; 0 ≤ x_i ≤ 1):
$f_1(x) = x_1$, $f_2(x) = g(x)\,[1 - (x_1/g(x))^2]$, $g(x) = 1 + 9\sum_{i=2}^{n} x_i/(n-1)$.

ZDT3 (n = 30; 0 ≤ x_i ≤ 1):
$f_1(x) = x_1$, $f_2(x) = g(x)\,[1 - \sqrt{x_1/g(x)} - (x_1/g(x))\sin(10\pi x_1)]$, $g(x) = 1 + 9\sum_{i=2}^{n} x_i/(n-1)$.

ZDT4 (n = 10; 0 ≤ x_1 ≤ 1, -5 ≤ x_i ≤ 5 for i = 2, ..., n):
$f_1(x) = x_1$, $f_2(x) = g(x)\,[1 - \sqrt{x_1/g(x)}]$, $g(x) = 1 + 10(n-1) + \sum_{i=2}^{n}[x_i^2 - 10\cos(4\pi x_i)]$.

ZDT6 (n = 10; 0 ≤ x_i ≤ 1):
$f_1(x) = 1 - \exp(-4x_1)\sin^6(6\pi x_1)$, $f_2(x) = g(x)\,[1 - (f_1(x)/g(x))^2]$, $g(x) = 1 + 9\,[\sum_{i=2}^{n} x_i/(n-1)]^{0.25}$.

DTLZ1 (n = 7; 0 ≤ x_i ≤ 1):
$f_1(x) = 0.5x_1x_2(1+g(x))$, $f_2(x) = 0.5x_1(1-x_2)(1+g(x))$, $f_3(x) = 0.5(1-x_1)(1+g(x))$,
$g(x) = 100\,[5 + \sum_{i=3}^{n}((x_i-0.5)^2 - \cos(20\pi(x_i-0.5)))]$.

DTLZ2 (n = 12; 0 ≤ x_i ≤ 1):
$f_1(x) = (1+g(x))\cos(0.5\pi x_1)\cos(0.5\pi x_2)$, $f_2(x) = (1+g(x))\cos(0.5\pi x_1)\sin(0.5\pi x_2)$, $f_3(x) = (1+g(x))\sin(0.5\pi x_1)$,
$g(x) = \sum_{i=3}^{n}(x_i-0.5)^2$.

DTLZ3 (n = 12; 0 ≤ x_i ≤ 1): objectives as in DTLZ2, with $g(x) = 100\,[10 + \sum_{i=3}^{n}((x_i-0.5)^2 - \cos(20\pi(x_i-0.5)))]$.

DTLZ4 (n = 12; 0 ≤ x_i ≤ 1): objectives as in DTLZ2 with $x_1$, $x_2$ replaced by $x_1^{100}$, $x_2^{100}$, and $g(x) = \sum_{i=3}^{n}(x_i-0.5)^2$.

DTLZ5 (n = 12; 0 ≤ x_i ≤ 1):
$f_1(x) = (1+g(x))\cos(0.5\pi\theta_1)\cos(0.5\pi\theta_2)$, $f_2(x) = (1+g(x))\cos(0.5\pi\theta_1)\sin(0.5\pi\theta_2)$, $f_3(x) = (1+g(x))\sin(0.5\pi\theta_1)$,
$g(x) = \sum_{i=3}^{n}(x_i-0.5)^2$, $\theta_i = \pi(1 + 2x_i g(x))/(4(1+g(x)))$.

DTLZ6 (n = 12; 0 ≤ x_i ≤ 1): objectives and $\theta_i$ as in DTLZ5, with $g(x) = \sum_{i=3}^{n} x_i^{0.1}$.

DTLZ7 (n = 22; 0 ≤ x_i ≤ 1):
$f_1(x) = x_1$, $f_2(x) = x_2$, $f_3(x) = (1+g(x))\,h(f_1, f_2, g(x))$,
$g(x) = 1 + 9\sum_{i=3}^{n} x_i/20$, $h(f_1, f_2, g(x)) = 3 - \sum_{i=1}^{2} f_i(1 + \sin(3\pi f_i))/(1+g(x))$.
All objectives are to be minimized.
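As a concrete illustration of how these definitions translate into code, the sketch below evaluates ZDT1; the function name and the vectorized style are our own choices, and any faithful implementation of the formulas above is equivalent.

    import numpy as np

    def zdt1(x):
        """Evaluate ZDT1 for a decision vector x with n = 30 components
        in [0, 1]; returns the two objective values (f1, f2)."""
        x = np.asarray(x, dtype=float)
        n = x.size
        f1 = float(x[0])
        g = 1.0 + 9.0 * x[1:].sum() / (n - 1)
        f2 = float(g * (1.0 - np.sqrt(f1 / g)))
        return f1, f2

    # A point on the true Pareto front has x2 = ... = xn = 0, so g = 1
    # and f2 = 1 - sqrt(f1).
    print(zdt1([0.25] + [0.0] * 29))  # -> (0.25, 0.5)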
Appendix B. Performance measures
• The GD metric indicates how far the obtained Pareto front is from the true Pareto optimal front. It is defined as $GD = \sqrt{\sum_{i=1}^{|EXA|} d_i^2}/|EXA|$, where $|EXA|$ is the number of non-dominated solutions obtained by a given algorithm and $d_i$ is the Euclidean distance (measured in objective space) between the $i$th solution of the obtained Pareto front and the nearest member of the true Pareto optimal front.
• The IH metric measures the hypervolume of the region in objective space dominated by a given Pareto set of points. In the case of two or three objectives, IH is the area or volume covered by a set of non-dominated solutions with respect to a reference point. As is common practice in the literature, we normalize the objective values of the non-dominated solutions obtained by the different algorithms into [0, 1]; that is, each objective of a solution $x$ is normalized by $f_j'(x) = (f_j(x) - f_j^{\min})/(f_j^{\max} - f_j^{\min})$, where $f_j(x)$ is the $j$th objective value of solution $x$ and $f_j^{\min}$ ($f_j^{\max}$) is the minimum (maximum) value of the $j$th objective over the union of the non-dominated solutions obtained by all tested algorithms. The reference point is then set to (1.0, 1.0), so that the maximum value of IH is 1. Note that when two Pareto fronts are compared, the higher IH value indicates the better front.
When calculating IH, if all solutions obtained by a certain algorithm are dominated by the reference point, then the hypervolume of that algorithm is equal to 0.
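For two normalized objectives, both measures reduce to a few lines of code. The following sketch follows the definitions above; the function names are our own, and the hypervolume routine assumes minimization with the normalization into [0, 1] already applied.

    import numpy as np

    def gd(front, true_front):
        """Generational distance: sqrt of the sum of squared
        nearest-neighbor distances, divided by the front size."""
        front = np.asarray(front)
        true_front = np.asarray(true_front)
        # Distance from each obtained point to its nearest true-front point.
        d = np.min(np.linalg.norm(front[:, None, :] - true_front[None, :, :],
                                  axis=2), axis=1)
        return np.sqrt(np.sum(d ** 2)) / len(front)

    def hypervolume_2d(front, ref=(1.0, 1.0)):
        """Hypervolume of a 2-objective minimization front with respect
        to the reference point `ref`, via a sorted sweep of rectangles."""
        pts = np.asarray([p for p in front
                          if p[0] < ref[0] and p[1] < ref[1]])
        if len(pts) == 0:
            return 0.0      # every point is dominated by the reference
        pts = pts[np.argsort(pts[:, 0])]
        hv, prev_f2 = 0.0, ref[1]
        for f1, f2 in pts:
            if f2 < prev_f2:    # keep only non-dominated slices
                hv += (ref[0] - f1) * (prev_f2 - f2)
                prev_f2 = f2
        return hv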
References
[1] H.A. Abbass, R. Sarker, C. Newton, PDE: a Pareto-frontier differential evolution approach for multi-objective optimization problems, in: Proceedings of the IEEE Congress on Evolutionary Computation, 2001, pp. 971–978.
[2] M. Ali, P. Siarry, M. Pant, An efficient differential evolution based algorithm for solving multi-objective optimization problems, Eur. J. Oper. Res. 217 (2) (2012) 404–416.
[3] Y.Q. Cai, J.H. Wang, Differential evolution with hybrid linkage crossover, Inform. Sci. 320 (2015) 244–287.
[4] Y. Chia, C.K. Goh, V.A. Shim, K.C. Tan, A data mining approach to evolutionary optimisation of noisy multi-objective problems, Int. J. Syst. Sci. 43 (7) (2012) 1217–1247.
[5] C.A.C. Coello, G.T. Pulido, Multiobjective optimization using a micro-genetic algorithm, in: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2001), 2001, pp. 274–282.
[6] C.A.C. Coello, G.T. Pulido, M.S. Lechuga, Handling multiple objectives with particle swarm optimization, IEEE Trans. Evol. Comput. 8 (3) (2004) 256–279.
[7] C.A.C. Coello, D.A.V. Veldhuizen, G.B. Lamont, Evolutionary Algorithms for Solving Multi-Objective Problems, Kluwer, Norwell, MA, 2002.
[8] K. Deb, Multi-objective Optimization Using Evolutionary Algorithms, Wiley, Chichester, UK, 2001.
[9] K. Deb, S. Agrawal, A. Pratap, T. Meyarivan, A fast and elitist multi-objective genetic algorithm: NSGA-II, IEEE Trans. Evol. Comput. 6 (2) (2002) 182–197.
[10] K. Deb, R.B. Agrawal, Simulated binary crossover for continuous search space, Complex Syst. 9 (1995) 115–148.
[11] K. Deb, A. Anand, D. Joshi, A computationally efficient evolutionary algorithm for real-parameter optimization, Evol. Comput. 10 (4) (2002) 371–395.
[12] K. Deb, L. Thiele, M. Laumanns, E. Zitzler, Scalable test problems for evolutionary multi-objective optimization, in: Evolutionary Multiobjective Optimization, Springer-Verlag, Berlin, 2005, pp. 105–145.
[13] L.J. Eshelman, J.D. Schaffer, Real-coded genetic algorithms and interval-schemata, in: L.D. Whitley (Ed.), Foundations of Genetic Algorithms 2, Morgan Kaufmann, California, 1993, pp. 187–202.
[14] V.L. Huang, A.K. Qin, P.N. Suganthan, M.F. Tasgetiren, Multi-objective optimization based on self-adaptive differential evolution algorithm, in: Proceedings of the IEEE Congress on Evolutionary Computation, 2007, pp. 3601–3608.
[15] L. Jourdan, C. Dhaenens, E.G. Talbi, Using datamining techniques to help metaheuristics: a short survey, in: Proceedings of the Third International Workshop on Hybrid Metaheuristics, 2006, pp. 57–69.
[16] S. Kukkonen, J. Lampinen, GDE3: the third evolution step of generalized differential evolution, in: Proceedings of the IEEE Congress on Evolutionary Computation, 2005, pp. 443–450.
[17] F. Lu, J. Zhang, L. Gao, A hierarchical differential evolution algorithm with multiple sub-population parallel search mechanism, in: Proceedings of the International Conference on Computer Design and Applications (ICCDA), 2010, pp. 482–486.
[18] N.K. Madavan, Multiobjective optimization using a Pareto differential evolution approach, in: Proceedings of the IEEE Congress on Evolutionary Computation, 2002, pp. 1145–1150.
[19] A.C. Martínez-Estudillo, C. Hervás-Martínez, F.J. Martínez-Estudillo, N. García-Pedrajas, Hybridization of evolutionary algorithms and local search by means of a clustering method, IEEE Trans. Syst. Man Cybern. B 36 (3) (2006) 534–545.
[20] A.J. Nebro, J.J. Durillo, J. García-Nieto, C.A. Coello Coello, F. Luna, E. Alba, SMPSO: a new PSO-based metaheuristic for multi-objective optimization, in: Proceedings of the IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making (MCDM 2009), 2009, pp. 66–73.
[21] A.J. Nebro, F. Luna, E. Alba, B. Dorronsoro, J.J. Durillo, A. Beham, AbYSS: adapting scatter search to multiobjective optimization, IEEE Trans. Evol. Comput. 12 (4) (2008) 439–457.
[22] N. Noman, H. Iba, Accelerating differential evolution using an adaptive local search, IEEE Trans. Evol. Comput. 12 (1) (2008) 107–125.
[23] K.E. Parsopoulos, D.K. Tasoulis, N.G. Pavlidis, V.P. Plagianakos, M.N. Vrahatis, Vector evaluated differential evolution for multiobjective optimization, in: Proceedings of the IEEE Congress on Evolutionary Computation, 2004, pp. 204–211.
[24] I. Poikolainen, F. Neri, F. Caraffini, Cluster-based population initialization for differential evolution frameworks, Inform. Sci. 297 (2015) 216–235.
[25] G.T. Pulido, C.A.C. Coello, Using clustering techniques to improve the performance of a multi-objective particle swarm optimizer, in: Proceedings of the Genetic and Evolutionary Computation Conference, 2002, pp. 1051–1056.
[26] A.K. Qin, V.L. Huang, P.N. Suganthan, Differential evolution algorithm with strategy adaptation for global numerical optimization, IEEE Trans. Evol. Comput. 13 (2) (2009) 398–417.
[27] S. Rahnamayan, H.R. Tizhoosh, M.M.A. Salama, Opposition-based differential evolution, IEEE Trans. Evol. Comput. 12 (1) (2008) 64–79.
[28] T. Robic, B. Filipic, DEMO: differential evolution for multiobjective optimization, in: Proceedings of the 3rd International Conference on Evolutionary Multi-Criterion Optimization, 2005, pp. 520–533.
[29] L.V. Santana-Quintero, A.G. Hernández-Díaz, J. Molina, C.A. Coello Coello, R. Caballero, DEMORS: a hybrid multi-objective optimization algorithm using differential evolution and rough set theory for constrained problems, Comput. Oper. Res. 37 (3) (2010) 470–480.
[30] J.D. Schaffer, Multiple objective optimization with vector evaluated genetic algorithms, in: Proceedings of the First International Conference on Genetic Algorithms, 1985, pp. 93–100.
[31] O. Soliman, L.T. Bui, H. Abbass, A memetic coevolutionary multi-objective differential evolution algorithm, in: Multi-Objective Memetic Algorithms, Springer, Berlin, 2009, pp. 369–388.
[32] R. Storn, K. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim. 11 (4) (1997) 341–359.
[33] L.X. Tang, Y. Dong, J.Y. Liu, Differential evolution with an individual-dependent mechanism, IEEE Trans. Evol. Comput. 19 (4) (2015) 560–574.
[34] L.X. Tang, X.P. Wang, A hybrid multi-objective evolutionary algorithm for multi-objective optimization problems, IEEE Trans. Evol. Comput. 17 (1) (2013) 20–46.
[35] L.X. Tang, Y. Zhao, J.Y. Liu, An improved differential evolution algorithm for practical dynamic scheduling in steelmaking-continuous casting production, IEEE Trans. Evol. Comput. 18 (2) (2014) 209–225.
[36] S. Tsutsui, M. Yamamura, T. Higuchi, Multi-parent recombination with simplex crossover in real coded genetic algorithms, in: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'99), 1999, pp. 657–664.
[37] J. Vesterstrom, R. Thomsen, A comparative study of differential evolution, particle swarm optimization and evolutionary algorithms on numerical benchmark problems, in: Proceedings of the IEEE Congress on Evolutionary Computation, 2004, pp. 1980–1987.
[38] Y. Wang, B. Li, T. Weise, J.Y. Wang, B. Yuan, Q.J. Tian, Self-adaptive learning based particle swarm optimization, Inform. Sci. 181 (20) (2011) 4515–4538.
[39] G.G. Yen, W.F. Leong, Dynamic multiple swarms in multiobjective particle swarm optimization, IEEE Trans. Syst. Man Cybern. A: Syst. Hum. 39 (4) (2009) 890–911.
[40] W. Yu, J. Zhang, Multi-population differential evolution with adaptive parameter control for global optimization, in: Proceedings of the 13th Genetic and Evolutionary Computation Conference (GECCO 2011), 2011, pp. 1093–1098.
[41] D. Zaharie, D. Petcu, Adaptive Pareto differential evolution and its parallelization, in: Parallel Processing and Applied Mathematics, Springer, 2004, pp. 261–268.
[42] Z.H. Zhan, J. Zhang, Y. Li, Y.H. Shi, Orthogonal learning particle swarm optimization, IEEE Trans. Evol. Comput. 15 (6) (2011) 832–847.
[43] Q. Zhang, H. Li, MOEA/D: a multiobjective evolutionary algorithm based on decomposition, IEEE Trans. Evol. Comput. 11 (6) (2007) 712–731.
[44] J.Q. Zhang, A.C. Sanderson, JADE: adaptive differential evolution with optional external archive, IEEE Trans. Evol. Comput. 13 (5) (2009) 945–958.
[45] J. Zhang, Z. Zhan, Y. Lin, N. Chen, Y. Gong, J. Zhong, Evolutionary computation meets machine learning: a survey, IEEE Comput. Intell. Mag. 6 (4) (2011) 69–75.
[46] E. Zitzler, K. Deb, L. Thiele, Comparison of multiobjective evolutionary algorithms: empirical results, Evol. Comput. 8 (2) (2000) 173–195.
[47] E. Zitzler, M. Laumanns, L. Thiele, SPEA2: improving the strength Pareto evolutionary algorithm, Tech. Rep. 103, Computer Engineering and Networks Laboratory (TIK), Swiss Federal Institute of Technology (ETH), Zurich, Switzerland, 2001.
[48] W. Zhu, Y. Tang, J.A. Fang, W.B. Zhang, Adaptive population tuning scheme for differential evolution, Inform. Sci. 223 (2013) 164–191.