Multiobjective memetic algorithm based on decomposition


Applied Soft Computing 21 (2014) 221–243


Wali Khan Mashwani a,∗, Abdellah Salhi b

a Department of Mathematics, Kohat University of Science & Technology, Kohat 26000, Khyber Pukhtunkhwa, Pakistan
b Department of Mathematical Sciences, University of Essex, Colchester, Wivenhoe Park CO4 3SQ, UK

Article info

Article history: Received 8 October 2012; Received in revised form 21 February 2014; Accepted 7 March 2014; Available online 29 March 2014

Keywords: Multiobjective optimization; Pareto optimality; Memetic algorithm; MOEA/D; DE; PSO

Abstract

In recent years, the hybridization of multi-objective evolutionary algorithms (MOEAs) with traditional mathematical programming techniques has received significant attention in the field of evolutionary computing (EC). The use of multiple strategies in a self-adaptive manner can further improve the performance of decomposition-based evolutionary algorithms. In this paper, we propose a new multiobjective memetic algorithm based on the decomposition approach and the particle swarm optimization (PSO) algorithm. For brevity, we refer to the developed approach as MOEA/D-DE+PSO. In the proposed methodology, PSO acts as a local search engine and differential evolution (DE) works as the main search operator in the whole process of optimization. PSO updates the position of each solution with the help of the best information about itself and its neighboring solutions. The experimental results produced by the developed memetic algorithm are more promising than those of the plain MOEA/D algorithm on most test problems. Results on the sensitivity of the suggested algorithm to key parameters, such as the population size, the neighborhood size and the maximum number of solutions to be altered for a given subproblem in the decomposition process, are also included.
© 2014 Published by Elsevier B.V.

1. Introduction

In this paper, we consider the following general continuous multiobjective optimization problem (MOP), which can be mathematically formulated as

minimize F(x) = (f_1(x), f_2(x), ..., f_m(x)),  x ∈ X,    (1)

where x = (x_1, ..., x_n)^T ∈ X ⊆ R^n is an n-dimensional vector of decision variables, X is the decision (variable) space, and F is the objective vector function that contains m real-valued functions. In the last few decades, various types of multiobjective evolutionary algorithms (MOEAs) have been developed, and they are considered a natural choice for solving MOPs of the form (1). All MOEAs are population-based and nature-inspired, and operate on sets of candidate solutions (a population). An MOEA generates a set of solutions in each of its generations during the whole course of evolution while moving towards the real Pareto front (PF) of (1). Despite the current widespread use of evolutionary algorithms for test and real-world problems, their computational cost remains a big issue, among others.


However, hybridization is a good way forward to address these issues when dealing with complicated and diverse sets of standard test problems and practical ones. The main positive effect of hybridizing evolutionary algorithms is fast convergence towards the Pareto front, but it comes with an increase in the overall computation cost. For instance, the number of generations is decreased when the available computation time is fixed or limited; as a result, the global search ability of an MOEA is not fully utilized. These positive and negative effects are experimentally studied in [1]. The multi-objective evolutionary algorithm based on decomposition (MOEA/D) was first proposed by Zhang and Li [2]. This paradigm decomposes a MOP into several single-objective optimization subproblems and then uses a population-based method to optimize these subproblems simultaneously. In this way, a set of approximate solutions to the Pareto optimal set is generated by minimizing each subproblem. Different search operators suit different optimization problems; it is, therefore, crucial to choose a suitable operator within a particular evolutionary algorithm for solving complicated optimization problems. Such issues have recently been studied in [3]. Memetic algorithms (MAs) represent one of the growing areas of research in evolutionary computation. They are population-based meta-heuristic search methods inspired by Darwinian principles of natural evolution and Dawkins' notion of a meme, defined as a unit of cultural evolution that is capable of local refinement [4–6].


Memetic algorithms are also known as genetic local search, hybrid genetic algorithms, and cultural algorithms. They can be derived by using multiple components/strategies/operators/local search engines within the framework of a particular evolutionary algorithm. In this paper, we propose a new multiobjective memetic algorithm that employs particle swarm optimization (PSO) [7] and differential evolution (DE) [8] in the framework of MOEA/D [2]. The algorithm is called MOEA/D-DE+PSO. MOEA/D [2] is employed as the global search algorithm, while PSO and DE are employed as local search operators in this new algorithm. In case of stagnation of any of the search operators, we relocate it to a number of subproblems/solutions to keep it switched on during the whole process of evolution. Keeping in mind the principle of Ockham's razor, the reader will see later the benefits of MOEA/D-DE+PSO on the ZDT [9] and CEC'09 test instances [10]. The proposed algorithm is more robust in finding the set of optimal solutions on almost all test problems. The following issues are addressed in this paper:
1. How is each search algorithm rewarded in MOEA/D-DE+PSO?
2. How is each search algorithm chosen and used in the next generation?
3. What are the effects of different population sizes on the performance of MOEA/D-DE+PSO?
The rest of this paper is organized as follows: Section 2 introduces PSO and DE. Section 3 outlines the framework of MOEA/D-DE+PSO. Section 4 presents the parameter settings for the experiments. Section 5 discusses the experimental results. Finally, Section 6 concludes the paper.

Table 1
Formulation of DE mutation strategies.

S/no.  Strategy name          DE-mutation strategy
1      DE/rand/1              ỹ = x_{r1} + F(x_{r2} - x_{r3})
2      DE/rand/2              ỹ = x_{r1} + F(x_{r2} - x_{r3}) + F(x_{r4} - x_{r5})
3      DE/current-to-best/1   ỹ = x_i + F(x_{best} - x_i) + F(x_{r1} - x_{r2})
4      DE/current-to-best/2   ỹ = x_i + F(x_{best} - x_i) + F(x_{r1} - x_{r2}) + F(x_{r3} - x_{r4})
5      DE/best/1              ỹ = x_{best} + F(x_{r2} - x_{r3})
6      DE/best/2              ỹ = x_{best} + F(x_{r1} - x_{r2}) + F(x_{r3} - x_{r4})
7      DE/rand-to-best/1      ỹ = x_{r1} + F(x_{best} - x_{r1}) + F(x_{r2} - x_{r3})

A mutant vector ỹ is uniformly crossed with the base individual x_i to generate a new solution y:

y = ỹ if u_r ≤ CR, and y = x_i otherwise,    (2)

where u_r ∈ [0, 1] is a uniformly distributed random number and CR is the crossover rate.
• The mutation strategy "DE/current-to-best/1" is similar to the crossover operator employed by some genetic algorithms (GAs) [15].
• The mutation strategy "DE/rand/1" is derived from "DE/current-to-best/1", where the best individual x_{best} of the previous generation is replaced with a random individual x_{r1}.
• The mutation strategies "DE/rand/2", "DE/best/2" and "DE/current-to-best/2" are modified forms (with an additional difference vector) of "DE/rand/1", "DE/best/1" and "DE/current-to-best/1", respectively.
Other new DE-mutation strategies, including the trigonometric mutation operator [16], the genetically programmed mutation operator [17] and some others, have been suggested recently.

2. The basic idea and formulation of DE and PSO

In the following we describe the basic ideas of DE [8] and PSO [7].

2.1. Differential evolution

DE is a reliable and versatile optimizer proposed by Storn and Price [8,11]. It has very few control parameters and has been successfully applied to diverse sets of test and real-world optimization problems. DE uses vector differences to perturb the population vectors. In DE, a parent vector from the current generation is called a target vector, a mutant vector obtained through the differential mutation operation is called a donor vector, and the final offspring formed by recombining the donor and target vectors is called a trial vector [12–14]. DE offers several mutation strategies; it generates an offspring by perturbing solutions with a scaled difference of randomly selected population members. DE has attracted a lot of attention since 1995, and detailed reviews can be found in [12–14]. Some DE-mutation strategies are listed in Table 1, where the indices r1, r2, r3, r4 and r5 are mutually exclusive integers randomly chosen within [1, N] and different from the index i, x_{best} is the best solution in the current generation, (x_{rj} − x_{rk}) is a difference vector, and F > 0 is a positive control parameter, called the scaling or mutation factor, that controls the amplification of the difference between two individuals. F controls the balance between exploration and exploitation: a large value of F encourages exploration and a small value of F favors exploitation [13].
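To make the operator concrete, the following is a minimal Python sketch of the "DE/rand/1" strategy of Table 1 followed by the uniform crossover of Eq. (2) (the authors' implementation is in Matlab; applying the crossover componentwise, with one donor component forced, is a common DE convention and an assumption here, since Eq. (2) is stated at the vector level).

import numpy as np

def de_rand_1_bin(pop, i, F=0.5, CR=0.5, rng=None):
    """DE/rand/1 mutation (Table 1) plus uniform crossover (Eq. (2)).
    pop is an (N, n) array of current solutions and i is the index of the base
    (target) individual x_i; F and CR follow the ZDT settings of Section 4.1."""
    rng = rng or np.random.default_rng()
    N, n = pop.shape
    # r1, r2, r3: mutually exclusive indices, all different from i
    r1, r2, r3 = rng.choice([r for r in range(N) if r != i], size=3, replace=False)
    donor = pop[r1] + F * (pop[r2] - pop[r3])      # mutant vector y~
    cross = rng.random(n) <= CR                    # components taken from the donor
    cross[rng.integers(n)] = True                  # force at least one donor component
    return np.where(cross, donor, pop[i])          # new solution y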

2.2. Particle swarm optimization (PSO)

PSO was first developed in 1995 by Kennedy and Eberhart [7] for solving global optimization problems. Its mechanism is mainly inspired by the social behavior of bird flocking, fish schooling and animal societies. PSO shares many similarities with evolutionary algorithms; however, it does not normally use variation operators (i.e., crossover and mutation). PSO starts the optimization search with a group of random solutions (particles). Each particle has a position vector and a velocity, which are updated as it moves through the search space. In recent years, many interesting and important modifications have been made to the original PSO [7]; recent reviews can be found in [18–21]. One important modified version of PSO was developed in [22]. In this modified PSO, the velocity and position of each particle are calculated as follows:

v^{t+1} = w × v^t + c_1 × r_1 × (p^t − x^t) + c_2 × r_2 × (g^t − x^t),    (3)

x^{t+1} = x^t + χ × v^{t+1},    (4)

x^{t+1} = x^{t+1} + u_r × x^{t+1},    (5)

where
• r_1 and r_2 are two uniformly distributed random numbers;
• v^t is the current velocity of the particle;
• x^t is the current position of the particle;
• p^t is the particle's personal best position and g^t is the global best particle;
• w is the inertia factor, which lies in [0.8, 1.2];
• c_1 and c_2 are the two acceleration constants (acceleration coefficients), which usually lie between 1 and 4;
• u_r ∈ [−1, 1] is a continuous uniform random number.
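A minimal Python sketch of the particle update of Eqs. (3)–(5) is given below. The velocity-scaling symbol chi in Eq. (4) stands in for a glyph lost in the source (its value, 0.7, and the sampled inertia weight w = 0.3 + rand/2 are taken from the parameter headers of Tables 5 and 6), so the exact parameterization should be treated as an assumption rather than the authors' definitive code.

import numpy as np

def pso_update(x, v, p_best, g_best, c1=0.4, c2=0.4, chi=0.7, rng=None):
    """One particle update following Eqs. (3)-(5); x, v, p_best and g_best are
    n-dimensional arrays (position, velocity, personal best p^t, global best g^t)."""
    rng = rng or np.random.default_rng()
    w = 0.3 + rng.random() / 2.0                   # inertia weight as in Tables 5 and 6
    r1, r2 = rng.random(), rng.random()
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)   # Eq. (3)
    x_new = x + chi * v_new                                            # Eq. (4)
    u_r = rng.uniform(-1.0, 1.0)                                       # u_r in [-1, 1]
    x_new = x_new + u_r * x_new                                        # Eq. (5), turbulence step
    return x_new, v_new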


In multiobjective particle swarm optimization (MOPSO), selecting the best local guide (the global best particle) for each particle from the population of Pareto optimal solutions has a great impact on diversity and proximity, especially when dealing with problems with many objectives. Different methods for finding a good local guide are introduced and explained in [23–25]. In this paper, we use a technique based on the Euclidean distance: the global best particle g^t is replaced with an individual having the lowest Euclidean distance from the individual with the minimum objective function values.
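One possible reading of this guide-selection rule is sketched below: it assumes that "the individual with minimum objective function values" is represented by the ideal point z (the componentwise minimum of the population's objective vectors) and that the guide is the member closest to z in Euclidean distance. This is an interpretation for illustration, not necessarily the authors' exact rule.

import numpy as np

def select_global_guide(obj_values):
    """obj_values is an (N, m) array whose k-th row is F(x^k). Returns the index
    of the member whose objective vector is nearest (Euclidean distance) to the
    ideal point z -- one reading of the guide-selection rule described above."""
    z = obj_values.min(axis=0)                      # ideal point
    dist = np.linalg.norm(obj_values - z, axis=1)   # distance of each member to z
    return int(np.argmin(dist))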

3. Framework of MOEA/D-DE+PSO

Algorithm 1. Pseudocode of MOEA/D-DE+PSO

Input:
• MOP: the multiobjective optimization problem;
• N: the population size (i.e., the number of subproblems);
• Feval: the maximal number of function evaluations;
• a uniform spread of N weight vectors, λ^1, ..., λ^N;
• T: the number of weight vectors in the neighborhood of each weight vector.
Output: {x^1, x^2, ..., x^N} and {F(x^1), ..., F(x^N)}.

Step 1: Initialization
0.1: Uniformly randomly generate a population of size N, P = {x^1, ..., x^N}, within the search space X;
0.2: Initialize the set of N weight vectors, {λ^1, λ^2, ..., λ^N};
0.3: Compute the Euclidean distances between any two weight vectors and find the T closest weight vectors to each weight vector. For the i-th subproblem, set B(i) = {i_1, ..., i_T}, where λ^{i_1}, ..., λ^{i_T} are the T closest weight vectors to λ^i;
0.4: Compute the F-function value of each member of P, F(x^i), i = 1, 2, ..., N;
0.5: Initialize the reference point z = (z_1, ..., z_m)^T by a problem-specific method;
0.6: Set φ_1 = 0.5N and φ_2 = N − φ_1, where φ_1 is the number of subproblems handled by PSO and φ_2 is the number of subproblems handled by DE;
0.7: Set t = 0.

Step 2: Evolving scheme of MOEA/D-DE+PSO
1:  while t < Feval do
2:    Randomly divide I = {1, 2, ..., N} into two subsets, IA and IB, such that IA has φ_1 indices and IB has φ_2 indices at t;
3:    for i = 1 : N do
4:      if i ∈ IA then
5:        Apply PSO to subproblem i to produce y^i;
6:      else
7:        Apply DE to subproblem i to produce y^i;
8:      end if
9:      Compute the F-function value of y^i, F(y^i);
10:     Update z: for each j = 1, ..., m, if f_j(y^i) < z_j, then set z_j = f_j(y^i);
11:     Update neighboring solutions: for each index k ∈ B(i) = {i_1, ..., i_T},
12:       if g^{te}(y^i | λ^k, z) ≤ g^{te}(x^k | λ^k, z) then
13:         x^k = y^i and F(x^k) = F(y^i)
14:       end if
15:    end for
16:    Update φ_1 and φ_2 (details in Algorithm 2);
17:    t = t + 1
18:  end while
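The core of Step 2 is the ideal-point and neighborhood update (lines 10–14) together with the success-based reallocation of Algorithm 2. The Python sketch below illustrates these two pieces under the usual MOEA/D conventions (Tchebycheff aggregation g^te and ideal point z); it is an illustration of the pseudocode, not the authors' Matlab code, and the clamp that keeps both operators active is only implied by the text.

import math
import numpy as np

def g_te(fx, lam, z):
    """Tchebycheff aggregation g^te(x | lambda, z) = max_j lambda_j * |f_j(x) - z_j|."""
    return float(np.max(lam * np.abs(fx - z)))

def update_neighbours(i, y, Fy, pop, Fpop, B, lam, z):
    """Lines 10-14 of Algorithm 1: update the ideal point z and the T neighboring
    subproblems of subproblem i with the new solution y. Returns the number of
    replacements, which serves as the success count sigma used by Algorithm 2."""
    np.minimum(z, Fy, out=z)                    # line 10: z_j = min(z_j, f_j(y))
    replaced = 0
    for k in B[i]:                              # B[i]: indices of the T closest weight vectors
        if g_te(Fy, lam[k], z) <= g_te(Fpop[k], lam[k], z):
            pop[k], Fpop[k] = y.copy(), Fy.copy()
            replaced += 1
    return replaced

def reallocate(N, s_pso, s_de, phi1, phi2):
    """Algorithm 2 / Eq. (6): split the N subproblems between PSO (phi1 of them)
    and DE (phi2 of them) according to their success rates in the last generation."""
    r1, r2 = s_pso / max(phi1, 1), s_de / max(phi2, 1)
    pi1 = r1 / (r1 + r2) if (r1 + r2) > 0 else 0.5
    phi1_new = min(max(math.ceil(N * pi1), 1), N - 1)   # keep both operators switched on
    return phi1_new, N - phi1_new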

Algorithm 2. Resource allocation to DE and PSO

Step 1: Compute φ_1, the number of subproblems handled by PSO [7], and φ_2, the number of subproblems handled by DE [8].


Table 2
Details of the ZDT benchmark functions used.

Function  Search space range       Characteristics
ZDT1      [0, 1]^n                 convex PF
ZDT2      [0, 1]^n                 nonconvex PF
ZDT3      [0, 1]^n                 discontinuous PF
ZDT4      [0, 1] × [−5, 5]^{n−1}   many local Pareto fronts
ZDT6      [0, 1]^n                 low density of solutions near the Pareto front; nonuniformly spaced, nonconvex

Step 2: Compute the success probability π_1 of PSO [7] applied to the φ_1 subproblems:

π_1 = (σ_1 / φ_1) / (σ_1 / φ_1 + σ_2 / φ_2),    (6)

where σ_1 is the total number of successful solutions among the φ_1 subproblems allocated to PSO [7] and σ_2 is the corresponding number for DE. A new solution produced by PSO is successful if it replaces at least one solution among the T neighboring solutions; a successful solution gets reward 1 and an unsuccessful one gets 0.
Step 3: Set φ_1 = ⌈N × π_1⌉ and φ_2 = N − φ_1.

In the initialization step of Algorithm 1, B(i) contains the indices of the T weight vectors closest to λ^i. We use the Euclidean distance to measure the closeness between two weight vectors; therefore, λ^i's closest vector is itself, and hence i ∈ B(i). If k ∈ B(i), then the k-th subproblem can be regarded as a neighbor of the i-th subproblem. Initially, the values of φ_1 and φ_2 are kept fixed; they are then updated during the evolutionary process of Algorithm 1. Here φ_1 denotes the number of subproblems in the subset IA and φ_2 the number of subproblems in the subset IB. In Step 2 of Algorithm 1, we divide the N subproblem indices into two different index sets, IA and IB: the index set IA contains the φ_1 subproblems handled by PSO, and the index set IB contains the φ_2 subproblems handled by DE. The values of φ_1 and φ_2 are updated according to the steps described in Algorithm 2. Finally, we can say that the combined use of PSO and DE establishes a trade-off between the exploration and exploitation abilities of MOEA/D. The dynamic use of DE and PSO enables MOEA/D-DE+PSO to obtain better approximate solutions in all 30 independent runs, the results of which are presented later in this paper.

4. Parameter settings and experimental results

In this section, we present the parameter settings used in our experiments for solving two types of test problems: the ZDT problems [9] and the CEC'09 test instances [26]. The range of the search space and the characteristics of the Pareto fronts of the ZDT test problems are given in Table 2. All these problems have two objective functions, and none of them has any constraint.

Table 3
Average CPU time spent by MOEA/D-DE+PSO and MOEA/D-DE on the CEC'09 test instances [10].

CEC'09   MOEA/D-DE+PSO   MOEA/D-DE
UF1      287.14          207.03
UF2      306.24          211.013
UF3      302.59          219.48
UF4      307.51          209.420
UF5      236.87          161.17
UF6      221.98          153.67
UF7      306.22          207.90
UF8      794.97          653.30
UF9      710.13          632.12
UF10     797.01          638.393


Table 4
The IGD values for ZDT1–ZDT4 and ZDT6 obtained by (A) MOEA/D-DE and (B) MOEA/D-DE+PSO in 30 independent runs.

ZDT    Min         Median      Mean        Std         Max         Algorithm
ZDT1   0.0040137   0.004196    0.004215    0.0001186   0.004557    A
       0.004012    0.004142    0.004153    0.000071    0.004369    B
ZDT2   0.0038379   0.0038768   0.0038865   0.00004268  0.004039    A
       0.003811    0.003850    0.003874    0.0000424   0.003992    B
ZDT3   0.0084837   0.009063    0.009147    0.0006620   0.012523    A
       0.008400    0.009054    0.0090258   0.00059532  0.0124062   B
ZDT4   0.0119630   0.0305620   0.0425856   0.0334212   0.15796097  A
       0.004141    0.007099    0.007550    0.001971    0.011584    B
ZDT6   0.00856320  0.0151035   0.0147892   0.00405373  0.02350384  A
       0.006977    0.014680    0.014575    0.004037    0.022697    B

4.1. Parameter settings for ZDT problems


• N = 100: population size for the bi-objective test instances;
• T = 0.1N: the neighborhood size for each subproblem;
• F = 0.5: scaling factor of DE;
• CR = 0.5: crossover probability for DE;
• Feval = 25,000: maximum number of function evaluations;
• λ^1, ..., λ^N are the weight vectors for the bi-objective problems. Each individual weight component takes a value from

  {0/H, 1/H, ..., H/H}, where H = N − 1.    (7)

4.2. Parameter settings for CEC'09 test instances

The parameter settings used in MOEA/D-DE+PSO when applied to the CEC'09 test instances [10] are as follows.
• N = 600: population size for the bi-objective test instances;
• N = 1000: population size for the 3-objective test instances;
• T = 0.1N: the neighborhood size for each subproblem;
• F = 0.5: scaling factor of DE;
• CR = 1: crossover probability for DE;
• Feval = 300,000: maximum number of function evaluations;
• The weight vectors for the 3-objective test instances are generated according to the criteria described in [3,26]; the weight vectors for the bi-objective CEC'09 test instances are generated by formula (7) (a small generation sketch follows this list).
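For the bi-objective case, formula (7) yields N evenly spaced weight vectors. A minimal sketch of their generation is given below, using the usual convention that the two components of each vector sum to one; the 3-objective weights follow the construction of [3,26] and are not reproduced here.

import numpy as np

def biobjective_weights(N):
    """Uniform spread of N bi-objective weight vectors from Eq. (7):
    lambda^i = (i/H, 1 - i/H) with H = N - 1 and i = 0, ..., H."""
    H = N - 1
    w1 = np.arange(N) / H
    return np.column_stack((w1, 1.0 - w1))

# biobjective_weights(5) -> [[0, 1], [0.25, 0.75], [0.5, 0.5], [0.75, 0.25], [1, 0]]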

4.3. Performance metric

In general, the quality of the non-dominated sets of solutions should be assessed in terms of convergence and diversity. Convergence describes the closeness of the final non-dominated solutions to the true PF, whereas diversity refers to the distribution of the final solutions along the true PF. The inverted generational distance (IGD) [27] is used as a performance indicator to measure the performance of the developed algorithms. Let P* be a set of uniformly distributed points along the PF and let A be an approximation set to the PF. The average distance from P* to A is defined as [2,26]:

D(A, P*) = ( Σ_{v ∈ P*} d(v, A) ) / |P*|,

where d(v, A) is the minimum Euclidean distance between v and the points in A. If P* is large enough to represent the PF very well, D(A, P*) can, in a sense, measure both the diversity and the convergence of A.
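A compact sketch of the IGD computation described above: for every reference point v in P*, take its distance d(v, A) to the closest member of the approximation set A, and average over P*.

import numpy as np

def igd(P_star, A):
    """Inverted generational distance D(A, P*): P_star is a (K, m) array of points
    on the true PF and A is an (L, m) array of objective vectors of the
    approximation set; d(v, A) is the minimum Euclidean distance from v to A."""
    dists = np.linalg.norm(P_star[:, None, :] - A[None, :, :], axis=2)   # (K, L) pairwise distances
    return float(dists.min(axis=1).mean())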

5. Discussion of experimental results

In MOEA/D-DE+PSO (Algorithm 1), Step 2(10) performs O(m) comparisons and assignments, and Step 2(11) needs O(mT) basic operations, since its major cost is computing the single-objective subproblem values for T solutions and each such computation requires O(m) basic operations. Therefore, the computational complexity of Step 2 is O(mNT), since it makes N passes. We implemented all algorithms in Matlab and carried out the experiments on the same machine. We recorded the average CPU time elapsed by each algorithm, measured with Matlab's built-in tic and toc commands, as given in Table 3. Table 3 shows that the proposed algorithm uses more time than MOEA/D. The extra computational overhead of the proposed algorithm is due to the fact that PSO needs a global guide in each generation, which requires more time than DE.

5.1. Test problems and performance indicator

Benchmark problems are used to bring out the capabilities, important characteristics, and possible pitfalls of the algorithm under investigation. The detailed definitions of the CEC'09 test instances are presented in [10], where the set P* ⊂ PF used for the IGD calculation is also available. The ZDT test problems [9] are summarized in Table 2. These two suites of problems were chosen to validate the effectiveness of MOEA/D-DE+PSO in this paper. We applied the IGD metric [27] to evaluate the performance of MOEA/D-DE+PSO against MOEA/D [2] over both test suites. To calculate the IGD-metric values, we took P* as a set of 1000 uniformly distributed points on the PF for the ZDT test problems [9]. All parameters are kept the same in the experiments of MOEA/D [2] and MOEA/D-DE+PSO.

5.2. Discussion of the statistical experimental results for the ZDT test problems

Here, we discuss some observations on the results obtained by MOEA/D-DE+PSO and MOEA/D [2]. Table 4 records the IGD-metric values after 25,000 function evaluations in 30 independent runs. The five columns refer to the best (i.e., minimum), median, mean, standard deviation (std), and worst (i.e., maximum) of the IGD values obtained by MOEA/D [2] and MOEA/D-DE+PSO for each ZDT problem. The experimental results clearly indicate that MOEA/D-DE+PSO performed much better than MOEA/D [2] on ZDT1, ZDT2, ZDT4, and ZDT6; MOEA/D-DE+PSO ranks first on four of the five ZDT problems used in this paper. MOEA/D [2] performed better on ZDT3 than MOEA/D-DE+PSO. This weaker performance of MOEA/D-DE+PSO on ZDT3 could be attributed to the fact that the objective functions of this problem are disparately scaled along its PF.


Fig. 1. Plots of the final non-dominated solutions in the objective space of the ZDT1–ZDT3. The left panel is for MOEA/D [2] and right panel is for our MOEA/D-DE+PSO.


Fig. 2. Plots of the final non-dominated solutions in the objective space of the ZDT4 and ZDT6. The left panel is for MOEA/D [2] and right panel is for our MOEA/D-DE+PSO.

5.3. Discussion of the PF plots of the ZDT test problems

Figs. 1 and 2 show the location of the final sets of solutions in the objective space achieved by MOEA/D-DE and MOEA/D-DE+PSO after executing each algorithm 30 times on the ZDT problems [9]. The left panel of Figs. 1 and 2 is for MOEA/D [2] and the right panel is for MOEA/D-DE+PSO. Figs. 1 and 2 clearly indicate that the approximate PFs generated by MOEA/D-DE+PSO for each ZDT problem are uniformly distributed along the real PFs of the ZDT problems. In Figs. 3 and 4, we have also plotted the PFs of all 30 independent runs together. It can be seen from these figures that MOEA/D-DE+PSO produced solutions with better distribution ranges than MOEA/D [2] on each ZDT problem. As recorded in Table 2, ZDT4 has many local Pareto fronts; the good performance of MOEA/D-DE+PSO on ZDT4 is therefore a useful addition to the existing literature on memetic algorithms.

5.4. Discussion of the IGD-metric value figures for the ZDT test problems

Fig. 5 displays the evolution of the average IGD-metric values versus the number of generations over 30 independent runs of MOEA/D [2] and MOEA/D-DE+PSO on each ZDT problem. These figures show that MOEA/D-DE+PSO is much better at reducing the IGD values on almost all ZDT test problems, especially on ZDT4 and ZDT6. Initially, the converging ability of MOEA/D-DE+PSO is much better than that of MOEA/D [2] on ZDT3; however, in the final few generations MOEA/D [2] shows better performance in terms of reducing the IGD-metric values.

5.5. Discussion of the statistical experimental results for the CEC'09 test instances

In this subsection, we discuss the effectiveness of MOEA/D-DE+PSO in comparison with MOEA/D [2] on the CEC'09 test instances [10]. Table 5 collects the IGD values (minimum, median, mean, standard deviation and maximum) obtained by MOEA/D [2] and MOEA/D-DE+PSO. We carried out our experiments by executing MOEA/D [2] and MOEA/D-DE+PSO 30 times; each algorithm was allowed 300,000 function evaluations in each run on each CEC'09 test instance [10]. A bold figure in Table 5 denotes the best result among the algorithms used in this paper. As can be seen in Table 5, MOEA/D-DE+PSO ranks first on eight (UF1–UF4 and UF7–UF10) of the ten CEC'09 test instances [10]. MOEA/D-DE+PSO shows weaker performance than MOEA/D [2] on UF5 and UF6 in terms of reducing the IGD values. This weaker performance could be due to the fact that UF5 has 21 local Pareto fronts and UF6 has one isolated point and two disconnected parts. The presence of many local PFs in UF5 may hinder the convergence of MOEA/D-DE+PSO toward its real PF, while the piecewise PF of UF6 hinders MOEA/D-DE+PSO in finding a well-diversified set of solutions. The allowed number of function evaluations might not be sufficient to deal with such complicated problems.


Fig. 3. Plots of the all 30 PFs for ZDT1-ZDT3. The left panel is for MOEA/D [2] and right panel is for our MOEA/D-DE+PSO.


Fig. 4. Plots of the all 30 PFs for ZDT4 and ZDT6. The left panel is for MOEA/D [2] and right panel is for our MOEA/D-DE+PSO.


Fig. 5. Average IGD values versus number of generations over ZDT1–ZDT4 and ZDT6. The left panel is for MOEA/D [2] and the right panel is for MOEA/D-DE+PSO.

Table 5
The IGD values of the final non-dominated sets of solutions for UF1–UF10 obtained by (A) MOEA/D-DE and (B) MOEA/D-DE+PSO.
Parameters: w = 0.3 + rand/2, c1 = c2 = 0.4, χ = 0.7, CR = 1, F = 0.5, N = 600 (bi-objective) and 1000 (3-objective).

CEC'09   Min         Median      Mean        Std         Max         Algorithm
UF1      0.004499    0.073061    0.078707    0.051734    0.193602    A
         0.004466    0.0182737   0.0285723   0.02360902  0.0711827   B
UF2      0.018233    0.057095    0.062814    0.032670    0.142833    A
         0.010218    0.011671    0.011737    0.000756    0.013261    B
UF3      0.027292    0.238281    0.220884    0.084442    0.319024    A
         0.00394040  0.00582842  0.0060393   0.0018267   0.010494    B
UF4      0.062797    0.070280    0.070560    0.004064    0.077784    A
         0.047843    0.0546041   0.00404584  0.004053    0.071241    B
UF5      0.210113    0.428818    0.426645    0.131033    0.707106    A
         0.282111    0.454364    0.490882    0.118561    0.708999    B
UF6      0.244165    0.456803    0.508164    0.142289    0.798462    A
         0.186140    0.823351    0.778214    0.157339    0.843885    B
UF7      0.007096    0.106932    0.239466    0.244556    0.648772    A
         0.006726    0.008776    0.243517    0.237355    0.674953    B
UF8      0.057705    0.076824    0.079935    0.015975    0.138522    A
         0.057600    0.076222    0.078455    0.006532    0.1019020   B
UF9      0.047530    0.151860    0.139659    0.035320    0.160975    A
         0.035499    0.038980    0.071131    0.035008    0.149478    B
UF10     0.296028    0.427088    0.442905    0.067564    0.672801    A
         0.184050    0.187033    0.187158    0.001552    0.190097    B


Fig. 6. Plots of the best PF in the objective space of the UF1–UF3 in 30 independent runs. Left panel is MOEA/D [2] and right panel is for MOEA/D-DE+PSO.


Fig. 7. Plots of the best PF in the objective space of the UF4–UF6 in 30 independent runs. Left panel is MOEA/D [2] and right panel is for MOEA/D-DE+PSO.


Fig. 8. Plots of the best PF in the objective space of the UF7 and UF8 in 30 independent runs. Left panel is MOEA/D [2] and right panel is for MOEA/D-DE+PSO.

5.6. Discussion of the Pareto fronts of the CEC'09 test instances

Figs. 6–9 depict the approximated Pareto fronts of UF1–UF10 generated in the best of 30 independent runs; the left panel is for MOEA/D [2] and the right panel is for MOEA/D-DE+PSO. It can be seen in these figures that MOEA/D-DE+PSO produced better approximated PFs than MOEA/D [2] on most CEC'09 test instances; MOEA/D-DE+PSO generated "a good looking PF", in the parlance of multiobjective optimization. To show the distribution ranges of all 30 PFs, covered by the 30 different sets of final non-dominated solutions in the objective space of each CEC'09 test instance, we display them in Figs. 10–13. In these figures, the left panel is for MOEA/D [2] and the right panel is for MOEA/D-DE+PSO. From these figures, it can be seen that MOEA/D-DE+PSO produced much better PFs than MOEA/D [2] on most CEC'09 test problems.

5.7. Discussion of the IGD-metric value figures for the CEC'09 test instances

Fig. 14 displays the evolution of the average IGD-metric values, over 30 independent runs, versus the number of generations of MOEA/D-DE+PSO and MOEA/D [2] on each CEC'09 test instance. These figures demonstrate that MOEA/D-DE+PSO performed better than MOEA/D [2] in terms of reducing the average IGD-metric values on almost all CEC'09 test instances [10]. Fig. 15 depicts the evolution of the IGD values in the best run among the 30 independent runs of MOEA/D [2] and MOEA/D-DE+PSO on each CEC'09 test instance [10]. From these figures, one can conclude that the convergence of MOEA/D-DE+PSO is much better than that of MOEA/D [2].


Fig. 9. Plots of the best PF in the objective space of the UF9 and UF10 in 30 independent runs. Left panel is MOEA/D [2] and right panel is for MOEA/D-DE+PSO.

5.8. Sensitivity to population size and neighborhood size in MOEA/D-DE+PSO

To study the sensitivity of MOEA/D-DE+PSO to different population sizes N and neighborhood sizes T on the CEC'09 test instances [10], we examined the values N = 100, 200, 300, 400, 500, 600 and T = 10, 20, 30, 40, 50, 60 for the 2-objective CEC'09 test instances. For the 3-objective test instances, we assigned N = 250, 500, 600, 1000 and T = 25, 50, 60, 100 in the framework of MOEA/D-DE+PSO. Table 6 summarizes the IGD values obtained by MOEA/D-DE+PSO with the aforementioned population and neighborhood sizes on each CEC'09 test instance. The table shows that the search ability of MOEA/D-DE+PSO is not much affected by the different population-size and neighborhood-size settings. Fig. 17 shows the evolution of the average IGD-metric values versus the different population sizes. These visual results and Table 6 indicate that MOEA/D-DE+PSO is not very sensitive to the population size on most CEC'09 problems. Furthermore, the figures suggest that MOEA/D-DE+PSO can accommodate any population size N within 200–600 when dealing with UF1–UF7 and within 400–1000 when tackling UF8–UF10. Fig. 16 shows the evolution of the average IGD-metric values versus different settings of T. T is one of the important control parameters in the MOEA/D paradigm; therefore, we plotted the different values of T against the average IGD-metric values in order to identify appropriate settings for MOEA/D-DE+PSO. However, MOEA/D-DE+PSO, like its competitor, performs poorly on UF5 and UF6. Both problems are more complicated than the others; MOEA/D-DE+PSO gets stuck in a local optimal basin of attraction and, therefore, no further improvement is observed on these problems (Fig. 17).
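A sketch of how such a sensitivity study can be scripted is shown below. The callable run_algorithm is a hypothetical placeholder standing in for the authors' Matlab MOEA/D-DE+PSO implementation, and pf_reference stands for the P* sets of [10]; only the bookkeeping around them is illustrated.

import numpy as np

def igd_value(pf_reference, approx):
    """IGD of an approximation set against the reference front (see Section 4.3)."""
    d = np.linalg.norm(pf_reference[:, None, :] - approx[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def sensitivity_study(run_algorithm, pf_reference, settings, runs=30):
    """Protocol behind Table 6 and Figs. 16-17: rerun the optimizer for every
    (N, T) pair in `settings` and record mean/std of the IGD over independent runs.
    run_algorithm(N, T, seed) -> (L, m) array of final objective vectors; it is a
    placeholder for the actual optimizer, which is not part of this sketch."""
    results = {}
    for N, T in settings:                     # e.g. [(100, 10), (200, 20), ..., (600, 60)]
        vals = [igd_value(pf_reference, run_algorithm(N, T, seed)) for seed in range(runs)]
        results[(N, T)] = (float(np.mean(vals)), float(np.std(vals)))
    return results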


Table 6
IGD value statistics of the non-dominated solutions found by MOEA/D-DE+PSO on UF1–UF10 with different population sizes (N) and neighborhood sizes (T), over 30 independent runs.
Parameters: w = 0.3 + rand/2, c1 = c2 = 0.4, χ = 0.7, CR = 1, F = 0.5.

CEC'09   Min         Median      Mean        Std         Max         N     T
UF1      0.00848601  0.0432089   0.0464794   0.03443536  0.1504137   100   10
         0.00503239  0.0403459   0.0347456   0.02053173  0.0600952   200   20
         0.00630236  0.0378201   0.0324587   0.02004103  0.0625536   300   30
         0.00519335  0.0253672   0.0296343   0.02218467  0.0701052   400   40
         0.00519335  0.0253672   0.0296343   0.02218467  0.0701052   500   50
UF2      0.013190    0.014774    0.015165    0.001154    0.018112    100   10
         0.010383    0.011732    0.011902    0.001138    0.015254    200   20
         0.009860    0.011136    0.011167    0.000611    0.012303    300   30
         0.010557    0.012184    0.012315    0.001151    0.015706    400   40
         0.010254    0.011286    0.011468    0.000759    0.013094    500   50
UF3      0.00499431  0.0087208   0.0092065   0.0028841   0.020124    100   10
         0.00412732  0.0063829   0.0065183   0.0018276   0.010447    200   20
         0.00399730  0.0050561   0.0059154   0.0019803   0.011129    300   30
         0.00398105  0.0054549   0.0058070   0.0016600   0.009685    400   40
         0.00395048  0.0054913   0.0058391   0.0017154   0.011247    500   50
UF4      0.050347    0.057500    0.057960    0.004838    0.071733    100   10
         0.049817    0.056582    0.0562579   0.003299    0.062676    200   20
         0.0484987   0.052430    0.0529812   0.0035208   0.062597    300   30
         0.0470158   0.056452    0.0565206   0.0051251   0.066531    400   40
         0.047473    0.057109    0.0564868   0.005015    0.0666803   500   50
UF5      0.2273      0.7071      0.6742      0.1050      0.7071      100   10
         0.2838      0.7071      0.6593      0.1221      0.7071      200   20
         0.2123      0.7071      0.6805      0.1046      0.7112      300   30
         0.2233      0.7071      0.6308      0.1521      0.7071      400   40
         0.2997      0.7071      0.6532      0.1168      0.7090      500   50
UF6      0.6570      0.8131      0.8057      0.0388      0.8490      100   10
         0.2065      0.8214      0.7981      0.1132      0.8449      200   20
         0.1838      0.8218      0.7419      0.1921      0.8407      300   30
         0.6409      0.8238      0.8167      0.0341      0.8379      400   40
         0.1831      0.8209      0.7763      0.1631      0.8433      500   50
UF7      0.0107      0.4826      0.4818      0.1599      0.6869      100   10
         0.0059      0.4355      0.3551      0.2634      0.6864      200   20
         0.0067      0.5496      0.4318      0.2484      0.6948      300   30
         0.0075      0.1588      0.2574      0.2684      0.6730      400   40
         0.0062      0.4224      0.3370      0.2510      0.6971      500   50
UF8      0.1150      0.1509      0.1498      0.0156      0.1744      250   25
         0.0891      0.0998      0.1011      0.0086      0.1221      500   50
         0.0737      0.0989      0.0997      0.0192      0.1851      600   60
         0.0751      0.0904      0.0893      0.0066      0.1004      800   80
UF9      0.073589    0.180044    0.142312    0.049597    0.188414    250   25
         0.039016    0.145791    0.102507    0.052039    0.150000    500   50
         0.037572    0.096752    0.093896    0.054576    0.150470    600   60
         0.037572    0.096752    0.093896    0.054576    0.150470    800   80
UF10     0.2052173   0.211854    0.212762    0.005705    0.228089    250   25
         0.183229    0.186690    0.186787    0.001969    0.192183    500   50
         0.184304    0.186031    0.186248    0.001633    0.190058    600   60
         0.183445    0.185819    0.186224    0.001904    0.191934    800   80


Fig. 10. Plots of the all 30 PFs in the objective space of the UF1 and UF2 in 30 independent runs. Left panel is MOEA/D [2] and right panel is for MOEA/D-DE+PSO.



Fig. 11. Plots of the all 30 PFs in the objective space of the UF3 and UF4 in 30 independent runs. Left panel is MOEA/D [2] and right panel is for MOEA/D-DE+PSO.


Fig. 12. Plots of the all 30 PFs in the objective space of the UF5–UF7 in 30 independent runs. Left panel is MOEA/D [2] and right panel is for MOEA/D-DE+PSO.


Fig. 13. Plots of the 30 PFs in the objective space of the UF8–UF10 in 30 independent runs. Left panel is MOEA/D [2] and right panel is for MOEA/D-DE+PSO.


Fig. 14. Evolution of the average IGD-metric values versus number of generations for UF1–UF10.


Fig. 15. Evolution of the IGD-metric values in the best run versus number of generations for UF1–UF10.


Fig. 16. Average evolution of the IGD-metric values versus the value of T in MOEA/D-DE+PSO in 30 independent runs for UF1–UF10.


Fig. 17. Average evolution of the IGD-metric values versus the size of the population N in MOEA/D-DE+PSO in 30 independent runs for UF1–UF10.


6. Conclusion and future plan

In this paper, a multiobjective memetic algorithm based on MOEA/D [2], called MOEA/D-DE+PSO, has been developed. MOEA/D-DE+PSO uses two well-established search algorithms, namely differential evolution (DE) [8] and particle swarm optimization (PSO) [7]. The numbers of subproblems dealt with by DE and PSO are changed dynamically, based on the success rewards they obtain during the evolutionary process of MOEA/D-DE+PSO. Based on the experimental study, we found that DE works well globally while PSO acts as a local search operator. We analyzed the performance of MOEA/D-DE+PSO on two sets of continuous multiobjective optimization problems, the ZDT [9] and the CEC'09 unconstrained test instances [10], and found that MOEA/D-DE+PSO works better in terms of reducing the IGD-metric values on most of the test problems. We would like to point out that:
• We have used a simple implementation of MOEA/D [2] in this study.
• We have selected parent solutions randomly from the neighborhood scheme only for the DE operation.
• In MOEA/D-DE+PSO, each new solution has the option to replace any number of old solutions. The restriction to a maximum number of replacements, as used in [28], might deprive some good solutions of taking part in further evolutionary steps; we did not use any such restriction in MOEA/D-DE+PSO.
• The use of an open maximal replacement strategy in MOEA/D-DE+PSO is a good idea, because each subproblem or solution has the opportunity to make full use of its neighborhood information.
• Overall, MOEA/D-DE+PSO has performed reasonably well on both test suites of problems used in this study. The inclusion of PSO has helped a lot in finding the best final sets of optimal solutions for most test instances; moreover, PSO has sped up the search process toward the PF.
• MOEA/D-DE+PSO has performed better than the original MOEA/D [2] with different population sizes and neighborhood sizes.
This paper also reports an analysis of useful parameters of MOEA/D-DE+PSO. In the future, we intend to test the proposed algorithm on an industrial design problem, namely the tubular permanent magnet linear synchronous motor, which involves multiple conflicting objectives.

References

[1] H. Ishibuchi, T. Yoshida, T. Murata, Balance between genetic search and local search in memetic algorithms for multiobjective permutation flowshop scheduling, IEEE Trans. Evol. Comput. 7 (April (2)) (2003) 204–233.
[2] Q. Zhang, H. Li, MOEA/D: a multiobjective evolutionary algorithm based on decomposition, IEEE Trans. Evol. Comput. 11 (December (6)) (2007) 712–731.
[3] W.K. Mashwani, A. Salhi, A decomposition-based hybrid multiobjective evolutionary algorithm with dynamic resource allocation, Appl. Soft Comput. 12 (9) (2012) 2765–2780.


[4] N. Krasnogor, J. Smith, A tutorial for competent memetic algorithms: model, taxonomy, and design issues, IEEE Trans. Evol. Comput. 9 (5) (2005) 474–488. [5] J. Knowles, D. Corne, Memetic algorithms for multiobjective optimization: issues, methods and prospects, in: Studies in Fuzziness and Soft Computing, 2005. [6] Y. Soon, A.J. Kean, Meta-lamarckian learning in memetic algorithms, IEEE Trans. Evol. Comput. 8 (2) (2004) 99–110. [7] R. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of the Sixth International Symposium on Micro Machine and Human Science, MHS’95, October, 1995, pp. 39–43. [8] R. Storn, K.V. Price, Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces, ICSI, Technical Report TR-95-012, 1995. [9] E. Zitzler, K. Deb, L. Thiele, Comparsion of multiobjective evolutionary algorithms: emperical results, Evol. Comput. 8 (2) (2000) 173–195. [10] Q. Zhang, A. Zhou, S. Zhaoy, P.N. Suganthany, W. Liu, S. Tiwariz, Multiobjective Optimization Test Instances for the CEC 2009 Special Session and Competition, Technical Report CES-487, 2009. [11] R. Storn, K.V. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, J. Global Opt. 11 (December (4)) (1997) 341–359. [12] F. Neri, V. Tirronen, Recent advances in differential evolution: a survey and experimental analysis, Artif. Intell. Rev. 33 (2010) 61–106. [13] S. Das, P.N. Suganthan, Differential evolution: a Survey of the state-of-the-art, IEEE Trans. Evol. Comput. 15 (1) (2011) 4–31. [14] U.K. Chakraborty, Advances in Differential Evolution, Springer Publishing Company, Incorporated, London, 2008. [15] M.G. Epitropakis, D.K. Tasoulis, N.G. Pavlidis, V.P. Plagianakos, M.N. Vrahatis, Enhancing differential evolution utilizing proximity-based mutation operators, IEEE Trans. Evol. Comput. 15 (1) (2011) 99–119. [16] H.-Y. Fan, J. Lampinen, A trigonometric mutation operation to differential evolution, J. Global Optim. 27 (2003) 105–129. [17] N. Pavlidis, V. Plagianakos, D. Tasoulis, M. Vrahatis, Human designed vs. genetically programmed differential evolution operators, in: Proceeding of IEEE Congress on Evolutionary Computation. heraton Vancouver Wall Centre Hotel, IEEE Press, Vancouver, BC, Canada, July 16–21, 2006, pp. 1880–1886. [18] M. Reyes-Sierra, C.A. Colleo Colleo, Multi-objective particle swarm optimizers: a survey of the state-of-art, Int. J. Comput. Intell. Res. 2 (3) (2006) 287–308. [19] A. Banks, J. Vincent, C. Anyakoha, A review of particle swarm optimization. Part I: background and development, Nat. Comput. 6 (December) (2007) 467–484. [20] A. Banks, J. Vincent, C. Anyakoha, A review of particle swarm optimization. Part II: hybridisation, combinatorial, multicriteria and constrained optimization and indicative applications, Nat. Comput. 7 (March) (2008) 109–124. [21] R. Thangaraj, M. Pant, A. Abraham, P. Bouvry, Particle swarm optimization: hybridization perspectives and experimental illustrations, Appl. Math. Comput. 217 (12) (2011) 5208–5226. [22] Y. Shi, R. Eberhart, A modified particle swarm optimizer, in: Proceedings of IEEE International Conference on Evolutionary Computation, CEC’98, May, 1998, pp. 69–73. [23] J.T. Sanaz Mostaghim, Strategies for finding good local guides in multi-objective particle swarm optimization (MOPSO), in: IEEE Swarm Intelligence Symposium Proceedings 2(5), 2003, pp. 26–33. [24] M.A. 
Abido, Two-level of nondominated solutions approach to multiobjective particle swarm optimization, in: Proceedings of the 9th annual conference on Genetic and evolutionary computation, ser. GECCO ’07. New York, ACM, NY, USA, 2007, pp. 726–733. [25] Q. Jiang, J. Li, A novel method for finding global best guide for multiobjective particle swarm optimization, in: Proceedings of the Third International Symposium on Intelligent Information Technology Application – Volume 03, ser. IITA ’09, IEEE Computer Society, Washington, DC, USA, 2009, pp. 146–150. [26] Q. Zhang, W. Liu, H. Li, The Performance of a New Version of MOEA/D on CEC’09 Unconstrained MOP Test Instances, in: IEEE Congress On Evolutionary Computation (IEEE CEC 2009), Trondheim, Norway, May 18–21, 2009, pp. 203–208. [27] E. Zitzler, L. Thiele, M. Laumanns, C.M. Fonseca, V.G. da Fonseca, Performance assessment of multiobjective optimizers: an analysis and review, IEEE Trans. Evol. Comput. 7 (2003) 117–132. [28] H. Li, Q. Zhang, Multiobjective optimization problems with complicated pareto sets: MOEA/D and NSGA-II, IEEE Trans. Evol. Comput. 13 (April (2)) (2009) 284–302.