Using reference points to update the archive of MOPSO algorithms in Many-Objective Optimization


Andre Britto*, Aurora Pozo
Computer Science Department, Federal University of Paraná (UFPR), PO 19081, ZIP Code: 81531-970, Curitiba, Brazil

Article info

Article history: Received 10 January 2013; received in revised form 3 May 2013; accepted 14 May 2013. Communicated by Chennai Guest Editor.

Abstract

Many-Objective Optimization Problems are problems that have more than three objective functions. In general, Multi-Objective Evolutionary Algorithms scale poorly when the number of objectives increases. To overcome this limitation, in a previous study, a new MOPSO algorithm called I-MOPSO was proposed. In this study, that work is extended, and we pursue two goals. The first goal is to perform an in-depth evaluation of the I-MOPSO algorithm in different many-objective scenarios. Two versions of this algorithm are studied: I-MOPSO and I-Sigma. The second goal is to generalize the I-MOPSO algorithm; the new version, called REF-I-MOPSO, uses a new archiving method that guides the search to different regions of the Pareto front using reference points. Two variants of this algorithm are presented: REF_M and REF_Ex. All these algorithms are evaluated on several Many-Objective Problems in terms of their convergence to and diversity along the Pareto front. Additionally, we present an empirical analysis of the distribution of the solutions generated by the REF-I-MOPSO algorithm. The results show that the solutions generated by this algorithm are close to the selected reference point. Furthermore, the results of REF-I-MOPSO are notably similar to those of I-MOPSO.

Keywords: Many-Objective Optimization; Particle Swarm Optimization; Archiving methods

1. Introduction

Multi-Objective Particle Swarm Optimization (MOPSO) is a population-based meta-heuristic inspired by animal swarm intelligence [1]. MOPSO has been widely used to solve Multi-Objective Optimization Problems (MOPs), which involve the simultaneous optimization of two or more conflicting objectives subject to certain constraints. Different Multi-Objective Evolutionary Algorithms (MOEAs) include algorithms based on MOPSO.¹ However, in spite of the good results of MOEAs, these algorithms scale poorly when the number of objective functions increases, and they usually encounter difficulty with more than 3 objective functions [2,3]. Such problems are called Many-Objective Optimization Problems (MaOPs). One of the main challenges faced by MOEAs with many objectives is the deterioration of the search ability. This deterioration mainly occurs because of an increase in the number of non-dominated

* Corresponding author. Tel.: +55 7999158717. E-mail addresses: [email protected], [email protected] (A. Britto), [email protected] (A. Pozo).
¹ In the literature, there is a differentiation between Evolutionary Algorithms and other bio-inspired meta-heuristics, such as Particle Swarm Optimization and Artificial Immune Systems, among others. However, in the multi-objective optimization literature all bio-inspired algorithms are often classified as MOEAs.

solutions as the number of objectives grows; consequently, there is no selection pressure toward the Pareto front. Many-Objective Optimization is the research area that investigates new techniques for overcoming these limitations [4]. In the literature, MaOPs are usually tackled with different strategies [2–6]. Among them we can highlight a new algorithm called I-MOPSO, first presented in [7]. I-MOPSO has two primary aspects: an archiving method that introduces more convergence to the Pareto front and a leader's selection method that addresses the diversity of the obtained solutions. The archiving method uses the idea of guiding the solutions in the archive to a specific area of the objective space that is near the ideal point [8]. The ideal point is a vector containing the best possible value for each objective function. For the leader's selection, we chose the NWSum method [9], which introduces more diversity into the search. The results presented in [7] showed that I-MOPSO performed well in terms of convergence and diversity when applied to the Many-Objective Problem DTLZ2 [10]. Furthermore, it was shown that I-MOPSO generated an approximated Pareto front that was close to the ideal point. In the current study, the main goal is to extend the previous study on the I-MOPSO algorithm. Two new tasks are addressed. First, the I-MOPSO algorithm is evaluated in different scenarios: it is evaluated with two different leader's selection methods, the original NWSum method [9] and the Sigma method [11] (the latter variant is called I-Sigma). Furthermore,


the experimental set is extended, and the proposed algorithm is applied to the DTLZ4 and DTLZ6 problems. In this empirical analysis, I-MOPSO is compared to CDAS-SMPSO [12], to SMPSO [13] and to I-Sigma. A set of quality indicators is used to investigate how these algorithms scale up in terms of convergence and diversity in many-objective scenarios. Second, the I-MOPSO algorithm is extended; the new algorithm is called REF-I-MOPSO. The main idea of this algorithm is to allow the search to be guided to different areas of the objective space, previously chosen by the user. Thus, a new archiving method is proposed: instead of guiding the search toward the ideal point, a reference point is chosen as a guide. Furthermore, this reference point belongs to a hyperplane that represents a good region toward which the algorithm must converge. This hyperplane was first used in Many-Objective Optimization in the extension of the NSGA-II algorithm called MO-NSGA-II [5]. REF-I-MOPSO needs an extra parameter to define the reference point that is used as a guide in the archiver. In this study, the approach of selecting specific regions of the hyperplane is used instead of a single fixed reference point, and two configurations of REF-I-MOPSO are defined: REF_M (select a point in the center of the hyperplane) and REF_Ex (select a point near an extreme point of the hyperplane). An empirical analysis to evaluate REF-I-MOPSO is presented, in which the results of REF-I-MOPSO are compared to I-MOPSO's results in terms of convergence and diversity. Furthermore, we present an analysis that observes whether the solutions generated by REF-I-MOPSO approximate the chosen reference point. The methodology used in this analysis is discussed further in this paper. The remainder of this paper is organized as follows: Section 2 presents the main concepts of Many-Objective Optimization and discusses some related studies. The Multi-Objective Particle Swarm Optimization algorithms proposed in this work are described in Section 3. Afterwards, the extended empirical evaluation of I-MOPSO and the evaluation of the new algorithm REF-I-MOPSO are reported in Section 4. Finally, Section 5 presents the conclusions and future studies.

2. Many-Objective Optimization

A Multi-Objective Optimization Problem involves the simultaneous satisfaction of two or more objective functions. Furthermore, in such problems, the objectives to be optimized are usually in conflict, which means that there is no single best solution; instead, there is a set of solutions. To characterize this set of solutions, Pareto Optimality Theory is used [14]. The purpose is to optimize m objective functions simultaneously, seeking the set of solutions that represents the best compromise between the objective functions. Thus, given a solution x with objective vector f(x) = (f_1(x), f_2(x), ..., f_m(x)) and a solution y with objective vector f(y) = (f_1(y), f_2(y), ..., f_m(y)), f(x) dominates f(y), denoted f(x) ≺ f(y), if and only if (for minimization):

∀ i ∈ {1, 2, ..., m}: f_i(x) ≤ f_i(y), and
∃ i ∈ {1, 2, ..., m}: f_i(x) < f_i(y).

f(x) is non-dominated if there is no f(y) that dominates f(x). Furthermore, if there is no solution y that dominates x, then x is called Pareto Optimal and f(x) is a non-dominated objective vector. The set of all Pareto Optimal solutions is called the Pareto Optimal Set, denoted P*. The set of all non-dominated objective vectors is called the Pareto Front, denoted PF*.
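To make the definition concrete, the following is a minimal Python sketch of the dominance test and of filtering a finite set of objective vectors down to its non-dominated members; the function names and the use of plain tuples are our own illustrative choices, not part of any algorithm in this paper.

    from typing import List, Sequence

    def dominates(fx: Sequence[float], fy: Sequence[float]) -> bool:
        """True if objective vector fx Pareto-dominates fy (minimization):
        no worse in every objective and strictly better in at least one."""
        no_worse = all(a <= b for a, b in zip(fx, fy))
        strictly_better = any(a < b for a, b in zip(fx, fy))
        return no_worse and strictly_better

    def non_dominated(vectors: List[Sequence[float]]) -> List[Sequence[float]]:
        """Keep only the vectors that no other vector in the set dominates."""
        return [f for f in vectors
                if not any(dominates(g, f) for g in vectors if g is not f)]

    # With two objectives, (1, 2) dominates (2, 3), while (1, 2) and (2, 1) are incomparable.
    print(non_dominated([(1, 2), (2, 3), (2, 1)]))  # -> [(1, 2), (2, 1)]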

MOEAs modify Evolutionary Algorithms by incorporating a selection mechanism based on Pareto optimality and by adopting a diversity preservation mechanism that avoids convergence to a single solution [14]. However, because in most applications the search for the Pareto optimal set is NP-hard, MOEAs focus on finding an approximated Pareto front that is as close as possible to the true Pareto front. Although most studies on MOPs have focused on problems with a small number of objectives, practical optimization problems often involve a large number of criteria [15]. Therefore, research efforts have been oriented toward investigating the scalability of these algorithms with respect to the number of objectives [2]. MOPs with more than 3 objectives are referred to as Many-Objective Optimization Problems (MaOPs), and the field that studies new solutions for these problems is called Many-Objective Optimization. Several studies have shown that MOEAs scale poorly on Many-Objective Optimization problems [2,12]. The main reason is that the number of non-dominated solutions increases exponentially with the number of objectives, with the following consequences. First, the search ability deteriorates because it is not possible to impose preferences for selection purposes. Second, the number of solutions required for approximating the entire Pareto front increases. Finally, visualizing the solutions becomes difficult. Among the studies presented in the literature, several papers address the issues of Many-Objective Optimization and deserve to be highlighted. In [5], an extension of the NSGA-II algorithm applied to Many-Objective Problems is presented. The new algorithm, known as MO-NSGA-II, uses a set of reference points aimed at guiding the search toward the Pareto front without losing diversity. These points help both the convergence and the diversity of the algorithm. Basically, the proposed algorithm builds niches around each reference point. Thus, the specialization of the algorithm enables convergence, and the different niches enable diversification. With these goals, the authors proposed a new density estimator that is based on a hyperplane and that defines the set of reference points. This hyperplane is constructed from the extreme points in each dimension and defines different regions in the objective space. The new density estimator calculates the concentration of solutions around the reference points. Solutions that are close to less crowded reference points obtain an advantage in the selection process, similar to the crowding distance in NSGA-II. MO-NSGA-II was compared to MOEA/D [16] and presented the best results in different many-objective scenarios. Other recent work is presented in [4]. In this prior study, it is argued that, because of the high complexity of the search in Many-Objective Problems, most algorithms are concerned only with convergence and cover only a small region of the Pareto front. Thus, aiming to promote more diversity in Many-Objective Optimization, that work proposes two new strategies. The first strategy activates or de-activates the promotion of diversity according to the spread of the solutions; the second strategy adapts the range of the mutations, also according to the spread and the crowding of the solutions. The experimental results show that the mutation strategies failed to promote diversity during the search, but the first strategy obtained notably good results.
This strategy was applied to the NSGA-II algorithm on different Many-Objective Problems and significantly improved the results of the original algorithm in terms of diversity. In [3], Schütze et al. discuss the main aspects of the deterioration of MOEAs in Many-Objective Optimization. These researchers argue that the deterioration is not always the result of a large number of objectives. Their study shows that sometimes the growth of multi-modality, rather than the growth of the number of non-dominated solutions, is responsible for the deterioration. However, the study does not invalidate the common-sense expectation that increasing the number of


non-dominated solutions causes deterioration in the search; instead, it notes that this relationship cannot be considered true in all cases. Another important factor that can influence the search ability of MOEAs in Many-Objective Optimization is the correlation between the objective functions. A study that discusses the behavior of some MOEAs when the objective functions are correlated is presented in [17]. In this study, well-known algorithms, such as NSGA-II, SPEA2 and MOEA/D, are evaluated on different many-objective scenarios in which the functions are highly correlated. It was shown that the convergence of NSGA-II and SPEA2 does not deteriorate; furthermore, in certain situations, the results improved. However, the MOEA/D algorithm did not obtain good results in these scenarios. Similar to [3], this study showed that the growth of the number of objective functions does not always degrade the search ability. In spite of the existence of different studies that address Many-Objective Optimization, until very recently, most research focused on a small group of algorithms, often NSGA-II. In our project, the behavior of Particle Swarm Optimization on MaOPs is investigated. In this direction, several research studies can be noted. The algorithm proposed in [18] uses a distance metric based on user preferences to efficiently find solutions. In that study, the user defines good regions in the objective space that must be explored by the algorithm. PSO is used as a baseline, and the particles update their positions and velocities according to their closeness to the preferred regions. In this method, the PSO algorithm does not rely on Pareto dominance comparisons to find solutions. However, because this algorithm uses only the distance to the reference points as a fitness function, there is no assurance that dominated points are avoided. Selecting dominated points as the best solutions can prevent the search from finding good solutions. Furthermore, an algorithm that uses only reference points can lose diversity and thus can be trapped in a local optimum. In our work, the reference point is used as a guide for the archiving process, and the Pareto dominance relation is used to select the best solutions. Additionally, in [19], Castro et al. perform an analysis of different leader's selection methods for Many-Objective Problems. This study extends the CDAS-SMPSO algorithm [12], aiming to improve its results by exploring different leader's selection methods. The study showed that the NWSum [9] and Sigma [11] methods are the most suitable for Many-Objective Problems. Additionally, several previous studies by the authors can be highlighted. In [12], the influence of the Control of Dominance Area of Solutions [20] on different MOPSO algorithms is studied. The study showed that the technique improves the results of MOPSO for problems with many objectives. Additionally, de Carvalho and Pozo [21] present a comprehensive study that applies the Average Ranking (AR) technique and the Control of Dominance Area of Solutions (CDAS) to a PSO algorithm to identify aspects such as convergence and diversity in the PSO search. Furthermore, in [8], different archiving methods are applied to a MOPSO algorithm and compared in different many-objective scenarios.
Finally, in [7], the algorithm I-MOPSO is proposed; this algorithm explores specific characteristics of MOPSO, such as the archiving method and the leader's selection, to enhance the MOPSO meta-heuristic in Many-Objective Optimization. To summarize, these studies reflect that there is current research that focuses on addressing Many-Objective Optimization Problems (MaOPs). One of the main conclusions of these studies relates to the weakness of the Pareto dominance relation when addressing MaOPs, and several alternatives have been proposed. Next, the


I-MOPSO algorithm will be explored and extended to decrease the negative effect of the Pareto dominance relation in Many-Objective Optimization.

3. MOPSO algorithms based on archiving methods

Multi-Objective Particle Swarm Optimization is a cooperative population-based heuristic inspired by the social behavior of birds flocking to find food [1]. In Particle Swarm Optimization, the algorithm initializes a set of solutions and searches for optimal solutions by updating them throughout many generations. The set of possible solutions is a set of particles called a swarm, which moves in the search space with a cooperative search procedure. These movements are performed by the velocity operator, which is guided by a local and a social component. In MOPSO, the Pareto dominance relation is adopted to establish preferences among the solutions that are considered to be leaders. By exploring the Pareto dominance concepts, each particle in the swarm can have different leaders, but only one is selected to update the velocity. This set of leaders is stored in an external archive that contains the best non-dominated solutions found to date. Often, the archive of leaders has a maximum size. When this archive becomes full, the algorithm must decide which particles will remain in the external archive for the next iteration. In the literature, there are different archiving methods for making this decision, such as the MGA (Multi-level Grid Archiving) archiver, the Random archiver and the Adaptive Grid archiver, among others [22]. The MGA archiver, proposed in [23], combines the Adaptive Grid scheme with ϵ-Pareto archiving. This archiver divides the objective space into boxes, and every point in the archive has a box index. Then, the dominance relation between the box indices is examined. If the new solution to be added to the archive belongs to a dominated box, it is rejected; otherwise, one of the points located in a dominated box is removed randomly [23]. If there is no dominance relation between the boxes, the objective space is re-divided into smaller boxes until a dominated box is found. In addition, the selection of the leaders from the external archive is responsible for the convergence and diversity characteristics that will affect the entire swarm. When all of the particles in the swarm have been updated, the leaders' repository is filled with the best particles. Before the particles update their positions, each must choose its guide. Thus, if the leader is chosen in a crowded region, diversity will be affected, and the search could converge to a local optimum. In contrast, if the leader is chosen from a less crowded region, then convergence will be affected, and the search could have difficulty converging to a good result, which decreases the MOPSO search capabilities. To address these problems, various leader's selection methods have been proposed in the literature, such as those discussed in [19].
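As a concrete illustration of the archive mechanics described above, the sketch below maintains a bounded external archive of non-dominated leaders and delegates the overflow decision to a pluggable truncation policy, which is exactly the component that archivers such as MGA, the Ideal archiver and the Hyperplane archiver instantiate differently. It reuses the dominates helper from the earlier sketch; the class and method names are ours and are not taken from the original implementations.

    from typing import Callable, List, Sequence

    ObjVec = Sequence[float]

    class LeaderArchive:
        """Bounded archive of non-dominated objective vectors.

        'truncate' receives the over-full member list and returns the index of the
        entry to drop; plugging in different policies yields different archivers."""

        def __init__(self, capacity: int, truncate: Callable[[List[ObjVec]], int]):
            self.capacity = capacity
            self.truncate = truncate
            self.members: List[ObjVec] = []

        def try_add(self, candidate: ObjVec) -> bool:
            # Reject a candidate that is dominated by an archived leader.
            if any(dominates(m, candidate) for m in self.members):
                return False
            # Drop archived leaders that the candidate dominates.
            self.members = [m for m in self.members if not dominates(candidate, m)]
            self.members.append(candidate)
            # When the archive overflows, the archiving method decides who leaves.
            if len(self.members) > self.capacity:
                self.members.pop(self.truncate(self.members))
            return True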


3.1. I-MOPSO algorithm

Different MOPSO algorithms have been proposed in recent years, including the Multi-Objective Particle Swarm Optimization algorithm I-MOPSO (Ideal Point Guided MO-PSO) [7]. This algorithm has two main features: the archiving process, which introduces more convergence toward the Pareto front, and the leader's selection method, which provides diversity of the obtained solutions. I-MOPSO is based on the SMPSO algorithm [13], which uses a constriction factor (χ) that limits the velocity of each particle. This factor varies based on the values of C1 and C2. Furthermore, SMPSO introduces a mechanism that bounds the accumulated velocity of each dimension j of the particles. The proposed algorithm differs from SMPSO in the archiving method and the leader's selection strategy.

In I-MOPSO, the archiving method introduces more convergence toward the Pareto front. I-MOPSO uses the Ideal archiver, which guides the solutions to a specific area of the objective space; in this method, the ideal point [14] is selected as a guide. In this approach, the distance to the ideal point defines which solutions remain in the archive; the Euclidean distance is used to measure the distance between the vectors. With this method, when the archive becomes full and a new non-dominated solution attempts to enter, the following procedure is executed: first, the ideal point over all of the solutions in the archive plus the new solution is obtained; second, the Euclidean distance from each point to the ideal point is calculated; and finally, the point with the largest distance is removed.

To avoid concentrating the generated approximation front in a small region, a leader's selection method that introduces diversity into the search is chosen: the NWSum method proposed in [9]. This method guides each particle toward the dimension to which it is closest (d_closer); the selected leader is the particle in the repository that is closest to d_closer. In this way, it is possible to guide the particles closer to the axis of each dimension, preventing them from being located only near the ideal point. The method calculates a weight for each objective value and gives more influence to the objectives where the particle has good values. It is defined by Eq. (1), where x_i represents the position of particle i and p_i is a candidate leader for x_i:

F = Σ_j [ f_j(x_i) / Σ_k f_k(x_i) ] · f_j(p_i)    (1)

The candidate p_i that produces the greatest weighted sum is used for the update, aiming to push the particle toward the axis to which it is already close.

This study extends the I-MOPSO algorithm by applying it with a different leader's selection method. The main characteristic of I-MOPSO is the Ideal archiver, but the leader's selection is important because it introduces diversity into the search. Here, the Sigma method [11] is also used, in order to observe how different leader's selection methods behave within I-MOPSO. The I-MOPSO variant with the Sigma leader's selection method is called I-Sigma in this study. In the Sigma method, the guide is chosen according to its sigma vector, which represents the direction between the point and the origin of the objective space. The leader of a swarm particle is the solution in the repository with the smallest Euclidean distance between its sigma vector and the sigma vector of the swarm particle [11]; in other words, a particle chooses as leader a solution in a nearby region of the objective space.

3.2. Reference point based I-MOPSO

The basic idea of the new proposed algorithm is to extend I-MOPSO so that the search is guided to a previously defined region of the objective space. As discussed earlier, I-MOPSO uses the Ideal archiver, which guides the leaders toward the ideal point and thereby increases the search convergence. The new algorithm, called REF-I-MOPSO, uses the same strategy as I-MOPSO; however, we extend the archiver to work with other reference points. This reference point is any vector in the objective space, e.g., a point chosen by the user in a specific region of the objective space, the ideal point, or a point near a specific axis. This archiver intends to obtain an approximated Pareto front around the reference point together with good convergence toward the Pareto front.

REF-I-MOPSO uses the Hyperplane archiver, which is based on the Ideal archiver. The main difference is the point that is used as a guide. In the Hyperplane archiver, the solutions kept in the archive are the points in the external archive that are closest to the reference point (instead of the ideal point). Thus, if the archive becomes full, the solution with the greatest distance to the reference point is removed.

The REF-I-MOPSO algorithm is described in Algorithm 1. First, the reference point is obtained; this procedure is discussed later in this section. The next step is the initialization procedure of the MOPSO algorithm. After the initialization phase, the evolutionary loop is performed. In this loop, each particle first performs a movement in the search space. Next, the Hyperplane archiver is executed to update the archive. Then, each particle chooses its leader using the NWSum method [9]. Finally, after the end of the evolutionary loop, the solutions in the external archive form the approximation of the Pareto front obtained by REF-I-MOPSO. The end conditions are a maximum number of iterations or a maximum number of fitness evaluations.

Algorithm 1. REF-I-MOPSO algorithm.
  Select a reference point or a region in the hyperplane
  Initialize the MOPSO algorithm
  While the end condition is not met
    For each particle p_i do
      Update the position and velocity of p_i
      Archive p_i using the Hyperplane archiver
      Update leaders using the NWSum method
  Return: solutions in the external archive
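A minimal sketch of the two components just described follows: the distance-based truncation rule, with either the ideal point (I-MOPSO) or a user-supplied reference point (REF-I-MOPSO) as the guide, and the NWSum leader selection of Eq. (1). It plugs into the LeaderArchive skeleton shown earlier; the helper names and the lack of tie-breaking are our own simplifications.

    import math
    from typing import List, Sequence

    def euclidean(a: Sequence[float], b: Sequence[float]) -> float:
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def ideal_point(vectors: List[Sequence[float]]) -> List[float]:
        """Best (minimum) value observed in each objective."""
        return [min(v[j] for v in vectors) for j in range(len(vectors[0]))]

    def ideal_truncate(members: List[Sequence[float]]) -> int:
        """Ideal archiver: drop the member farthest from the ideal point."""
        z = ideal_point(members)
        return max(range(len(members)), key=lambda i: euclidean(members[i], z))

    def hyperplane_truncate(reference: Sequence[float]):
        """Hyperplane archiver: drop the member farthest from a fixed reference point."""
        def truncate(members: List[Sequence[float]]) -> int:
            return max(range(len(members)), key=lambda i: euclidean(members[i], reference))
        return truncate

    def nwsum_leader(particle_objs: Sequence[float],
                     leaders: List[Sequence[float]]) -> Sequence[float]:
        """NWSum (Eq. (1)): weight each objective of a candidate leader by the
        particle's normalized objective values and return the candidate with the
        largest weighted sum."""
        total = sum(particle_objs) or 1.0  # guard against an all-zero objective vector
        weights = [f / total for f in particle_objs]
        return max(leaders, key=lambda p: sum(w * fj for w, fj in zip(weights, p)))

    # Example wiring: archive = LeaderArchive(capacity=250, truncate=hyperplane_truncate(ref_point))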

3.3. Selection of the reference point

Because it is important to obtain good convergence toward the Pareto front, REF-I-MOPSO uses a specific strategy to create a set of reference points in different areas of the objective space. This strategy was first used in many-objective problems in [5]. Importantly, this set of reference points must be well distributed, so that it is possible to choose points in different areas of the objective space. Therefore, before the beginning of the search, a hyperplane is defined that contains reference points spread over different regions of the objective space. For its construction, it is necessary to have the extreme points of each dimension of the objective space. With these points, the procedure to construct the hyperplane discussed in [5] is used. The reference points are distributed among the extreme points, and a parameter defines the spacing of the reference points. In Algorithm 1, the first step for selecting the reference point is the generation of the hyperplane from a set of extreme points. Next, a reference point must be chosen by the user. Alternatively, it is possible to choose a region of the hyperplane instead of a single reference point; for example, a region in the middle of the hyperplane, a region near a specific dimension or a point at the edge of the hyperplane. If a region is chosen, then a subset of points in that region is selected by the algorithm, and the reference point is chosen randomly among those points. In the proposed approach, this subset is limited to 10% of the total number of points in the hyperplane. This value was chosen experimentally to limit the number of possible reference points.
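For illustration, the sketch below generates a lattice of uniformly spread reference points on the hyperplane spanned by unit extreme points such as (1, 0, 0), (0, 1, 0) and (0, 0, 1), in the spirit of the construction from [5], and then picks a point either from the central region (as REF_M does) or from a region near a random extreme (as REF_Ex does). The granularity parameter 'divisions' and the random draw from the nearest 10% of candidates are our assumptions about details the paper does not fix.

    import random
    from itertools import combinations
    from typing import List, Tuple

    def simplex_reference_points(m: int, divisions: int) -> List[Tuple[float, ...]]:
        """All points whose coordinates are multiples of 1/divisions and sum to 1,
        i.e. a uniform lattice on the hyperplane through the unit extreme points."""
        points = []
        for cuts in combinations(range(divisions + m - 1), m - 1):
            parts = [b - a - 1 for a, b in zip((-1,) + cuts, cuts + (divisions + m - 1,))]
            points.append(tuple(p / divisions for p in parts))
        return points

    def pick_reference(points: List[Tuple[float, ...]], mode: str,
                       fraction: float = 0.1) -> Tuple[float, ...]:
        """REF_M: candidates nearest the centre of the hyperplane; REF_Ex: candidates
        nearest a randomly chosen extreme. The reference point is drawn at random
        from the closest 'fraction' of the candidates."""
        m = len(points[0])
        if mode == "REF_M":
            target = tuple(1.0 / m for _ in range(m))
        else:  # "REF_Ex"
            axis = random.randrange(m)
            target = tuple(1.0 if j == axis else 0.0 for j in range(m))
        ranked = sorted(points, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, target)))
        subset = ranked[: max(1, int(fraction * len(ranked)))]
        return random.choice(subset)

    refs = simplex_reference_points(m=3, divisions=12)   # 91 points for 3 objectives
    print(pick_reference(refs, "REF_M"), pick_reference(refs, "REF_Ex"))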


4. Empirical analysis

This section presents the evaluation of the I-MOPSO and REF-I-MOPSO algorithms. An empirical analysis was carried out to study the behavior of the algorithms when applied to different many-objective scenarios, in terms of their convergence to the Pareto front and the diversity of the obtained solutions. First, the methodology applied in this study is explained (Section 4.1). Next, the algorithms, problems and quality indicators are described in Sections 4.2–4.4, respectively. Finally, the results are discussed in Section 4.5.

4.1. Methodology

This empirical analysis comprises two different evaluations: first, an extended evaluation of the I-MOPSO algorithm, and second, an evaluation of REF-I-MOPSO. The first evaluation has two aspects: to observe how I-MOPSO works with a different leader's selection method and to observe how the algorithm behaves on different many-objective problems. In this analysis, I-MOPSO and I-Sigma are compared to two other algorithms, CDAS-SMPSO [12] and SMPSO [13]. These are the same algorithms used in the initial comparison presented in [7]. The former was specifically designed for Many-Objective Problems and explores the CDAS technique [20]; CDAS-SMPSO presented good results when applied to different problems [12]. The latter, SMPSO, is one of the most suitable MOPSO algorithms proposed to date, as presented in [13]. In this empirical analysis, the algorithms are confronted with different many-objective scenarios and evaluated in terms of convergence and diversity using the Generational Distance and Inverted Generational Distance quality indicators.

The REF-I-MOPSO analysis has two goals: to compare the performance of the proposed algorithm to the I-MOPSO, I-Sigma and SMPSO algorithms in terms of convergence and diversity, and to analyze the distribution of the approximated Pareto front of each algorithm around some reference points. The second analysis aims to observe whether REF-I-MOPSO generates solutions that are closer to the selected reference point.

4.2. Algorithms and parameters

I-MOPSO, I-Sigma, CDAS-SMPSO and REF-I-MOPSO are based on the SMPSO algorithm. Thus, all of the algorithms share the same basic MOPSO parameters. All of the algorithms were executed for 50,000 fitness evaluations over 30 independent runs. The population was limited to 250 particles, and the external archive was limited to 250 solutions. In each iteration, ω was varied randomly in the interval [0, 0.8], ϕ1 and ϕ2 were varied randomly in [0, 1], and C1 and C2 were varied randomly over the interval [1.5, 2.5]. The CDAS-SMPSO algorithm uses a parameter that controls the dominance area of the solutions (Si). This parameter was given 5 different values, yielding five configurations, the same as in [7]: Si varies in steps of 0.05 over the interval [0.25, 0.45]. In the results, each configuration of the CDAS-SMPSO algorithm is referred to by its Si value. REF-I-MOPSO needs an extra parameter to define the reference point that is used as a guide in the archiver. In this study, the approach of selecting specific regions of the hyperplane is used instead of a single reference point (as discussed in Section 3.3). Thus, two configurations of REF-I-MOPSO are defined: REF_M, which always chooses a point in the middle of the hyperplane, and REF_Ex, which always chooses a point near a random extreme point of the hyperplane. The hyperplane was generated at the


beginning of the search, and the extreme points used were (1, 0, 0), (0, 1, 0) and (0, 0, 1).

4.3. Benchmark problems

The Many-Objective Problems used here are from the DTLZ family [10], specifically the DTLZ2, DTLZ4 and DTLZ6 problems. The DTLZ family is a set of benchmark problems that is often used in the analysis of MOEAs [13,18]. In this study, we are interested in analyzing the behavior of the algorithms with many objectives. Thus, the algorithms are applied to the problems with high-dimensional objective spaces: 3, 5, 10, 15 and 20 objectives. These problems were selected because they share the following important features: (a) the implementation effort is relatively small (a bottom-up approach and a constraint surface approach), (b) the number of objectives (M) and of decision variables (n) can be scaled, (c) the global Pareto front is known analytically, and (d) convergence and diversity difficulties can be easily controlled. For each problem, the variable k represents the search's complexity, where k = n - M + 1 (n is the number of variables and M is the number of objectives).

The Pareto front of the DTLZ2 problem has the shape of a sphere. DTLZ2 can be used to investigate the ability of the algorithms to scale up their performance with large numbers of objectives. DTLZ4 and DTLZ6 were used to extend the evaluation of the I-MOPSO algorithm. DTLZ4 is an extension of the DTLZ2 problem that is often used to investigate the ability to maintain a good solution distribution. This problem generates more solutions near the f_M-f_1 plane; as a result, the algorithms tend to concentrate their solutions in this region. DTLZ6 is a variation of DTLZ2 in which the Pareto front is defined by a curve instead of a sphere. Additionally, this problem presents (3^k - 1) local Pareto-optimal fronts. These problems were chosen to allow an empirical analysis that observes the algorithms in different scenarios: a scenario where the main difficulty is the increase in the number of objective functions (DTLZ2), a scenario where it is harder to diversify the search (DTLZ4) and a scenario where it is hard to converge to the Pareto front (DTLZ6). There are several other problems in the DTLZ family, including problems that pose major threats to MOPSO algorithms. Future work will address other functions such as DTLZ3 and DTLZ7.

4.4. Quality indicators

The selection of experimental measures is a major obstacle in Many-Objective Optimization research. In the literature, different quality indicators are used [4,3], and there is no consensus on which measure is the best for identifying the behavior of MOEAs on MaOPs. In our experiments, because we are addressing benchmark problems for which the Pareto front can be obtained analytically, we combine a set of distance-to-the-Pareto-front metrics with the goal of observing the behavior of the MOEAs in terms of their convergence toward the Pareto front and the diversity of the obtained solutions. This set of quality indicators is described as follows [14]. Generational Distance (GD) measures how far the approximation of the Pareto front (PFapprox) is from the true Pareto front of the problem (PFtrue); in other words, it measures how far the solutions generated by the algorithms are from the true Pareto front. If GD is equal to 0, then all of the points of PFapprox belong to the true Pareto front. GD allows us to observe whether the algorithm converges to some region of the true Pareto front.
Inverted Generational Distance (IGD) measures the minimum distance from each point of PFtrue to the points of PFapprox. If IGD is


equal to zero, then PFapprox contains every point of the true Pareto front. IGD allows us to observe whether PFapprox converges to the true Pareto front and also whether this set is well diversified. Because PSO is a stochastic algorithm, in the following experiments each algorithm is executed a pre-determined number of times. To perform a statistical comparison for each measure, the Friedman test was used. The Friedman test is a non-parametric test for detecting differences between several related samples [24]. This test was applied to the raw values of each metric at a 5% significance level. The post-test of the Friedman test indicates whether there is a statistical difference between the analyzed data sets; it was performed through a set of functions of the R statistical tool [25]. In addition to the analysis of these quality indicators, this work has the goal of observing how the REF-I-MOPSO algorithm distributes its approximated Pareto front. For this goal, a metric is used that calculates the distribution of the solutions around a reference point and compares the distributions produced by different algorithms. Thus, given the sets of solutions of the different algorithms and the reference point, the Euclidean distance from every point to the reference point is first calculated for each algorithm. Next, the smallest and largest distances are obtained. Third, the range from the smallest to the largest distance is divided into 10 intervals, which correspond to regions of increasing distance from the reference point. Finally, the metric counts how many points of each algorithm are located in each of these proximity regions. Algorithms that have more points in the smaller intervals, i.e., within 10% or 20% of the distance range from the reference point, are closer to this point.
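The three measurements above can be written compactly. The sketch below follows the common mean-of-nearest-distances formulation of GD and IGD (some authors use a root-mean-square variant instead) and the ten-interval proximity count described for the reference-point analysis; function names are ours.

    import math
    from typing import Dict, List, Sequence

    def _nearest(point: Sequence[float], front: List[Sequence[float]]) -> float:
        return min(math.dist(point, q) for q in front)

    def gd(approx: List[Sequence[float]], true_front: List[Sequence[float]]) -> float:
        """Generational Distance: average distance from each obtained point to the true front."""
        return sum(_nearest(p, true_front) for p in approx) / len(approx)

    def igd(approx: List[Sequence[float]], true_front: List[Sequence[float]]) -> float:
        """Inverted Generational Distance: average distance from each true-front point
        to the obtained approximation."""
        return sum(_nearest(p, approx) for p in true_front) / len(true_front)

    def proximity_histogram(fronts: Dict[str, List[Sequence[float]]],
                            reference: Sequence[float], bins: int = 10) -> Dict[str, List[float]]:
        """Fraction of each algorithm's solutions falling into each of 'bins' equal-width
        distance intervals between the overall smallest and largest distance to 'reference'."""
        dists = {name: [math.dist(p, reference) for p in pts] for name, pts in fronts.items()}
        lo = min(min(d) for d in dists.values())
        hi = max(max(d) for d in dists.values())
        width = (hi - lo) / bins or 1.0  # avoid division by zero when all distances coincide
        hist = {}
        for name, ds in dists.items():
            counts = [0] * bins
            for d in ds:
                counts[min(bins - 1, int((d - lo) / width))] += 1
            hist[name] = [c / len(ds) for c in counts]
        return hist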

4.5. Results and discussion

4.5.1. I-MOPSO evaluation

First, I-MOPSO and I-Sigma are confronted with the CDAS-SMPSO and SMPSO algorithms on the DTLZ4 problem. On this problem, algorithms tend to concentrate their solutions on specific regions of the Pareto front and to lose diversity. The best algorithms according to the Friedman test are presented in Table 1. Additionally, Fig. 1(a) and (b) present the mean value of each quality indicator for all of the objective values. Observing the GD measure, both I-MOPSO and I-Sigma obtained the best convergence toward the Pareto front. These algorithms had the best values of GD for almost all of the objectives, especially for high-dimensional spaces. SMPSO did not have good results for any number of objective functions; furthermore, as discussed in [12], this algorithm loses performance when the number of objectives grows. For CDAS-SMPSO, only the configuration Si = 0.45 obtained a good result for 3 objective functions. Although good convergence is desired from the search, for DTLZ4 the ability to avoid the crowded regions of the Pareto front and to obtain good diversity is more important. In this context, the CDAS-SMPSO algorithm presented the best results, especially for the configuration with Si = 0.45; I-MOPSO also presented the best results. These two algorithms obtained the best IGD values for all of the objective values and, consequently, a better distribution of the solutions. SMPSO had good results with a small number of objectives, but its performance deteriorated when the number of objective functions was large.

Fig. 1. Mean values of the quality indicators for the DTLZ4 problem. (a) DTLZ4-GD and (b) DTLZ4-IGD.

Table 1
Best algorithms according to the post-test of the Friedman test: I-MOPSO × I-Sigma × CDAS-SMPSO × SMPSO.

Prob     Obj   Best algorithms (GD)                    Best algorithms (IGD)
DTLZ4    3     0.45 and I-Sigma                        0.45, I-MOPSO, I-Sigma and SMPSO
         5     I-Sigma                                 0.45, I-MOPSO and SMPSO
         10    I-MOPSO and I-Sigma                     0.45, I-MOPSO and SMPSO
         15    I-MOPSO and I-Sigma                     0.45, I-MOPSO and SMPSO
         20    I-MOPSO and I-Sigma                     0.4, 0.45 and I-MOPSO
DTLZ6    3     0.35, 0.4, 0.45, SMPSO and I-Sigma      0.3, 0.45 and SMPSO
         5     0.35, 0.4 and I-Sigma                   0.3, 0.45 and SMPSO
         10    0.35, 0.4 and I-Sigma                   0.3, 0.4 and 0.45
         15    0.35 and I-Sigma                        0.3, 0.4 and 0.45
         20    0.35 and I-Sigma                        0.3, 0.4 and 0.45


Observing Fig. 1(a) and (b), the good GD results of I-MOPSO and I-Sigma can be seen. It can also be observed that CDAS-SMPSO with Si = 0.25 obtained a GD value close to 0. The reason is that this configuration generates only one solution, and consequently it was discarded from the Friedman test analysis. For IGD, although CDAS-SMPSO had the best results, almost all of the algorithms had similar results, including I-MOPSO. The DTLZ6 problem introduces several local optima with the goal of making convergence harder. These local optima hampered the search of I-MOPSO. The results of the post-test of the Friedman test are presented in Table 1. I-MOPSO did not obtain the best GD for any number of objectives. However, I-Sigma obtained the best results for all of the numbers of objective functions. Fig. 2(a) shows the poor results of I-MOPSO and the good results of I-Sigma. The difference between these two versions of I-MOPSO stresses the influence of the leader's selection in a MOPSO algorithm. Although the archiving method is responsible for convergence in the I-MOPSO algorithm, the leader's selection method can introduce different behaviors into the algorithm. CDAS-SMPSO obtained the best convergence for several of the numbers of objective functions. Observing IGD (see Fig. 2(b)), CDAS-SMPSO had the best results, especially in high-dimensional spaces, while I-Sigma and I-MOPSO did not obtain the best results. SMPSO presented the expected

Fig. 2. Mean values of the quality indicators for the DTLZ6 problem. (a) DTLZ6-GD and (b) DTLZ6-IGD.


behavior and obtained good values only for a small number of objective functions; its search deteriorated when the number of objective functions increased. It is interesting to note that some configurations of CDAS-SMPSO significantly improve their results in terms of IGD when the number of objectives increases. The greater the number of objective functions, the greater the number of non-dominated solutions, and the set of solutions generated by CDAS-SMPSO also grows. This behavior allows the algorithm to cover a larger region of the objective space. However, the IGD values improve only for DTLZ6, because its Pareto front is located in the center of the objective space. Studies in the literature [12,26] show that the CDAS technique generates an approximated Pareto front in the center of the objective space; thus, a large set of solutions in a region close to the Pareto front yields a better IGD value for DTLZ6. In summary, the proposed algorithms produced notably good results in terms of convergence; thus, it is effective to use an archiving method to guide the search in Many-Objective Problems. Furthermore, the I-Sigma algorithm obtained good results in terms of convergence in a scenario with several local optima. Additionally, the leader's selection method can influence the search, introducing more convergence or diversity. CDAS-SMPSO also obtained good results for the DTLZ4 and DTLZ6 problems; however, it was outperformed by I-MOPSO or I-Sigma (especially in terms of convergence). This algorithm managed to diversify its search better than I-MOPSO for DTLZ4; however, the results of these algorithms were similar. One of the major drawbacks of CDAS-SMPSO is the choice of Si. Observing the results in Table 1, none of the configurations stood out, and the value of Si must be defined for each context.

4.5.2. REF-I-MOPSO evaluation

The second set of experiments evaluates the REF-I-MOPSO algorithms. In these experiments, the quality indicators GD and IGD are considered first; afterwards, the distribution of the approximated Pareto front of each algorithm is analyzed. The first analysis discusses convergence toward the Pareto front using the DTLZ2, DTLZ4 and DTLZ6 problems. Both variants of REF-I-MOPSO, REF_M and REF_Ex (Section 4.2), are compared to I-MOPSO, I-Sigma and SMPSO. The second experiment compares both variants of REF-I-MOPSO to the SMPSO and I-MOPSO algorithms on the DTLZ2 problem. Observing the quality indicators, REF-I-MOPSO obtained very good results in terms of IGD, especially the REF_M variant. The best results according to the Friedman test are presented in Table 2. For the DTLZ2 problem, I-MOPSO and I-Sigma obtained the best values of GD for almost all numbers of objective functions. The approach that selects the reference point in the middle of the hyperplane, REF_M, obtained the best results for 5 objective functions, while the variant that selects points near the extremes, REF_Ex, did not obtain the best results in any scenario. However, when observing IGD, REF_M obtained the best values, especially on high-dimensionality problems. I-MOPSO also obtained the best values of IGD. This result shows that it is possible to obtain good results in many-objective problems by choosing a reference point (other than the ideal point) to guide the archiving process. For DTLZ4, I-MOPSO and I-Sigma again outperformed both variants of REF-I-MOPSO in terms of GD. However, for this problem the analysis of IGD is more important, since it also measures diversity. Again, REF_M and I-MOPSO obtained the best values of IGD. These algorithms controlled diversity better and avoided the traps of the DTLZ4 problem more effectively. The REF_Ex variant obtained the best IGD values only for 3 and 5 objective


Table 2
Best algorithms according to the post-test of the Friedman test: REF-I-MOPSO × I-MOPSO × I-Sigma × SMPSO.

Prob     Obj   Best algorithms (GD)              Best algorithms (IGD)
DTLZ2    3     I-MOPSO and I-Sigma               I-MOPSO and SMPSO
         5     I-MOPSO and I-Sigma               I-MOPSO
         10    REF_M, I-MOPSO and I-Sigma        REF_M, I-MOPSO and I-Sigma
         15    I-MOPSO and I-Sigma               REF_M and I-MOPSO
         20    I-MOPSO and I-Sigma               REF_M and I-MOPSO
DTLZ4    3     I-Sigma                           REF_M, REF_Ex, I-MOPSO, I-Sigma and SMPSO
         5     I-Sigma                           REF_M, REF_Ex and I-MOPSO
         10    I-MOPSO and I-Sigma               REF_M and I-MOPSO
         15    REF_M, I-MOPSO and I-Sigma        REF_M and I-MOPSO
         20    I-MOPSO and I-Sigma               REF_M and I-MOPSO
DTLZ6    3     I-Sigma and SMPSO                 SMPSO
         5     I-Sigma                           SMPSO
         10    I-Sigma                           REF_M, REF_Ex, I-MOPSO and I-Sigma
         15    I-Sigma                           REF_M, REF_Ex and I-MOPSO
         20    I-Sigma                           REF_M, REF_Ex and I-MOPSO

functions. Again, the Hyperplane archiver introduced good diversity into the search of the PSO algorithm. The last problem is DTLZ6. For the GD measure, I-Sigma outperformed all other algorithms. Since REF-I-MOPSO is based on I-MOPSO, it suffered from the same difficulties on DTLZ6 (discussed in the previous section). Again, REF-I-MOPSO obtained the best values of IGD for the largest numbers of objective functions: both REF_M and REF_Ex presented the best value of this quality indicator for 10, 15 and 20 objective functions. I-MOPSO also had good IGD results. Summarizing the analysis of the quality indicators, REF-I-MOPSO had the best results in terms of diversity; the versions using the Hyperplane archiver obtained competitive results. The major advantage of REF-I-MOPSO is that it makes it possible to direct the search to a user-defined region while still performing a good search, especially in terms of diversity. Comparing the two proposed variants of REF-I-MOPSO, the REF_M algorithm presented the best results: this variant obtained the best values of IGD for almost all studied scenarios and allowed a better diversification of the search. On the other hand, the choice of an extreme reference point proved to be less suitable. Thus, the choice of the reference point can change the quality of the search in terms of convergence and diversity. However, it is important to emphasize that the reference point must be defined by the user. These experiments were designed to show that REF-I-MOPSO can be competitive with other MOPSO algorithms. The next goal is to observe how each algorithm distributes its solutions around a specific point in the objective space. In this assessment, two analyses are performed, first comparing REF_M to I-MOPSO and SMPSO and then comparing REF_Ex to I-MOPSO and SMPSO. The points used to calculate the distribution were the reference points selected at the beginning of the search by each of the REF-I-MOPSO variants. This analysis allows us to observe whether REF-I-MOPSO generates its solutions distributed around the reference point compared to the other algorithms. The results are presented in Figs. 3 and 4, for REF_M and REF_Ex, respectively. These figures present charts in which the x-axis represents the distance to the reference point (e.g., 10% of the distance, 20% of the distance). The y-axis represents the fraction of the solutions generated by each algorithm (varying from 0 to 1, i.e., 0–100% of the solutions). One point in the chart represents how many solutions generated by an algorithm are located at that

distance from the reference point (measured in the objective space). This analysis is restricted to high-dimensional objective spaces: 10, 15 and 20 objective functions. Observing the distributions of REF_M, given in Fig. 3, this algorithm generated its solutions nearer the reference point than I-MOPSO and SMPSO. For all of the numbers of objectives, REF_M obtained all of its solutions within 10% of the distance range from the reference point. A similar result was obtained by REF_Ex, as shown in Fig. 4. For 10 objective functions, this algorithm generated all of its solutions within 30% of the distance range from the reference point, concentrating most within 10%. For 15 and 20 objective functions, the algorithm concentrated all of its solutions within 10% of the distance range. These results showed that, compared to the other algorithms, REF-I-MOPSO generated an approximated Pareto front closer to the reference point.

5. Conclusions

Multi-Objective Particle Swarm Optimization algorithms have been successfully applied to Many-Objective Optimization Problems in recent years [12]. The MOPSO meta-heuristic has specific characteristics that can be exploited to avoid the deterioration of the search that occurs when the number of objectives increases. This work explored the archiving process of MOPSO to improve its results in Many-Objective Optimization. Thus, two tasks were designed: first, to extend the evaluation of the I-MOPSO algorithm and to use a different leader's selection method (the I-Sigma algorithm); and second, to extend I-MOPSO through the proposition of the REF-I-MOPSO algorithm. The REF-I-MOPSO algorithm aims to guide the search to different regions without losing performance in terms of convergence toward the Pareto front and diversity of the obtained solutions. For the first task, I-MOPSO and I-Sigma were compared to the CDAS-SMPSO and SMPSO algorithms. The results showed that, in general, both I-MOPSO and I-Sigma exhibited good results in terms of convergence. I-Sigma obtained the best results for both problems; however, I-MOPSO faced a number of difficulties when addressing the DTLZ6 problem. The second set of experiments explored the REF-I-MOPSO algorithm. In conclusion, it is possible to guide the search of MOPSO to a specific region of the Pareto front using an archiving method.


Fig. 3. Distribution over the reference point for REF_M, I-MOPSO and SMPSO. (a) 10, (b) 15, and (c) 20.

The results of REF-I-MOPSO showed that the solutions generated by this algorithm were close to the selected reference point. Furthermore, the results of REF-I-MOPSO, in terms of IGD, were similar to I-MOPSO; therefore, the search was guided to a specific region, but the algorithm's performance was not reduced.


Fig. 4. Distribution over the reference point for REF_Ex, I-MOPSO and SMPSO. (a) 10, (b) 15, and (c) 20.

Future research could include exploring REF-I-MOPSO with the aim of observing whether the use of the hyperplane is useful for guiding the search for problems with Pareto fronts that have


different shapes. Additionally, different characteristics of MOPSO must be explored, such as the use of Multi-Swarm algorithms.

References

[1] M. Reyes-Sierra, C.A.C. Coello, Multi-objective particle swarm optimizers: a survey of the state-of-the-art, International Journal of Computational Intelligence Research 2 (3) (2006) 287–308.
[2] H. Ishibuchi, N. Tsukamoto, Y. Nojima, Evolutionary many-objective optimization: a short review, in: IEEE Congress on Evolutionary Computation (CEC 2008), 2008, pp. 2419–2426.
[3] O. Schütze, A. Lara, C.A.C. Coello, On the influence of the number of objectives on the hardness of a multiobjective optimization problem, IEEE Transactions on Evolutionary Computation 15 (4) (2011) 444–455.
[4] S. Adra, P. Fleming, Diversity management in evolutionary many-objective optimization, IEEE Transactions on Evolutionary Computation 15 (2) (2011) 183–195.
[5] K. Deb, H. Jain, Handling many-objective problems using an improved NSGA-II procedure, in: IEEE Congress on Evolutionary Computation (CEC 2012), 2012, pp. 1–8.
[6] M. Garza-Fabre, G. Toscano-Pulido, C.A. Coello Coello, Two novel approaches for many-objective optimization, in: IEEE Congress on Evolutionary Computation (CEC 2010), 2010, pp. 1–8.
[7] A. Britto, A. Pozo, I-MOPSO: a suitable PSO algorithm for many-objective optimization, in: Eleventh Brazilian Symposium on Neural Networks, IEEE Computer Society, 2012, pp. 166–171.
[8] A. Britto, A. Pozo, Using archiving methods to control convergence and diversity for many-objective problems in particle swarm optimization, in: IEEE Congress on Evolutionary Computation (CEC 2012), 2012, pp. 605–612.
[9] N. Padhye, J. Branke, S. Mostaghim, Empirical comparison of MOPSO methods: guide selection and diversity preservation, Evolutionary Computation (2009) 2516–2523.
[10] K. Deb, L. Thiele, M. Laumanns, E. Zitzler, Scalable multi-objective optimization test problems, in: Congress on Evolutionary Computation (CEC 2002), 2002, pp. 825–830.
[11] S. Mostaghim, J. Teich, Strategies for finding good local guides in multi-objective particle swarm optimization, in: Proceedings of the 2003 IEEE Swarm Intelligence Symposium (SIS '03), IEEE Computer Society, 2003, pp. 26–33.
[12] A.B. Carvalho, A. Pozo, Measuring the convergence and diversity of CDAS multi-objective particle swarm optimization algorithms: a study of many-objective problems, Neurocomputing 75 (2012) 43–51.
[13] A. Nebro, J. Durillo, J. Garcia-Nieto, C.A.C. Coello, F. Luna, E. Alba, SMPSO: a new PSO-based metaheuristic for multi-objective optimization, in: IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making (MCDM '09), 2009, pp. 66–73.
[14] C.A.C. Coello, G.B. Lamont, D.A.V. Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems (Genetic and Evolutionary Computation), Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006.
[15] P. Fleming, R. Purshouse, R. Lygoe, Many-objective optimization: an engineering design perspective, in: C. Coello Coello, A. Hernández Aguirre, E. Zitzler (Eds.), Evolutionary Multi-Criterion Optimization, Lecture Notes in Computer Science, vol. 3410, Springer, Berlin/Heidelberg, 2005, pp. 14–32.
[16] Q. Zhang, H. Li, MOEA/D: a multiobjective evolutionary algorithm based on decomposition, IEEE Transactions on Evolutionary Computation 11 (6) (2007) 712–731.
[17] H. Ishibuchi, N. Akedo, H. Ohyanagi, Y. Nojima, Behavior of EMO algorithms on many-objective optimization problems with correlated objectives, in: IEEE Congress on Evolutionary Computation (CEC 2011), 2011, pp. 1465–1472.
[18] U.K. Wickramasinghe, X. Li, Using a distance metric to guide PSO algorithms for many-objective optimization, in: 11th Annual Conference on Genetic and Evolutionary Computation (GECCO '09), ACM, New York, NY, USA, 2009, pp. 667–674.
[19] O. Castro, A. Britto, A. Pozo, A comparison of methods for leader selection in many-objective problems, in: IEEE Congress on Evolutionary Computation (CEC 2012), 2012, pp. 1–8.
[20] H. Sato, H.E. Aguirre, K. Tanaka, Controlling dominance area of solutions and its impact on the performance of MOEAs, in: Evolutionary Multi-Criterion Optimization, Lecture Notes in Computer Science, vol. 4403, Springer, Berlin, 2007, pp. 5–20.
[21] A.B. de Carvalho, A. Pozo, Using different many-objective techniques in particle swarm optimization for many objective problems: an empirical study, International Journal of Computer Information Systems and Industrial Management Applications 3 (2011) 096–107.
[22] M. López-Ibáñez, J. Knowles, M. Laumanns, On sequential online archiving of objective vectors, in: R. Takahashi, K. Deb, E. Wanner, S. Greco (Eds.), Evolutionary Multi-Criterion Optimization, Lecture Notes in Computer Science, vol. 6576, Springer, Berlin/Heidelberg, 2011, pp. 46–60.
[23] M. Laumanns, R. Zenklusen, Stochastic convergence of random search methods to fixed size Pareto front approximations, European Journal of Operational Research 213 (2) (2011) 414–421.
[24] D.J. Sheskin, Handbook of Parametric and Nonparametric Statistical Procedures, 4th edition, Chapman & Hall/CRC, 2007.
[25] R Core Team, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria, ISBN 3-900051-07-0, 2012. URL: http://www.R-project.org.
[26] A.L. Jaimes, C.A.C. Coello, Study of preference relations in many-objective optimization, in: 11th Annual Conference on Genetic and Evolutionary Computation (GECCO '09), 2009, pp. 611–618.

André Britto is a Ph.D. student in the Computer Science Department at the Federal University of Paraná. He received the Master's degree in Computer Science from the Federal University of Paraná in 2009 and the B.S. degree in Computer Science from the Federal University of Sergipe in 2006. His main interests are evolutionary algorithms, meta-heuristics and multi-objective optimization.

Aurora Pozo has been an associate professor in the Computer Science Department and the Numerical Methods for Engineering program at the Federal University of Paraná, Brazil, since 1997. She received an M.S. in electrical engineering from the Federal University of Santa Catarina, Brazil, in 1991, and a Ph.D. in electrical engineering from the same university. Her research interests are evolutionary computation, data mining and complex problems.
