Accepted Manuscript

A parallel MOEA with criterion-based selection applied to the Knapsack Problem

Kantour Nedjmeddine, Bouroubi Sadek, Chaabane Djamel
PII: S1568-4946(19)30189-9
DOI: https://doi.org/10.1016/j.asoc.2019.04.005
Reference: ASOC 5433
To appear in: Applied Soft Computing Journal
Received date: 6 November 2018
Revised date: 19 March 2019
Accepted date: 1 April 2019

Please cite this article as: K. Nedjmeddine, B. Sadek and C. Djamel, A parallel MOEA with criterion-based selection applied to the Knapsack Problem, Applied Soft Computing Journal (2019), https://doi.org/10.1016/j.asoc.2019.04.005
A PARALLEL MOEA WITH CRITERION-BASED SELECTION APPLIED TO THE KNAPSACK PROBLEM

KANTOUR NEDJMEDDINE1, BOUROUBI SADEK2 AND CHAABANE DJAMEL3

[email protected], [email protected], [email protected]

1,2 LIFORCE Laboratory, 3 AMCD-RO Laboratory
USTHB, Faculty of Mathematics, Department of Operations Research, P.B. 32 El-Alia, 16111, Bab Ezzouar, Algiers, Algeria.

Abstract. In this paper, we propose a parallel multiobjective evolutionary algorithm called Parallel Criterion-based Partitioning MOEA (PCPMOEA), with an application to the Multiobjective Knapsack Problem (MOKP). The suggested search strategy is based on a periodic partitioning of the potentially efficient solutions, which are distributed to multiple multiobjective evolutionary algorithms (MOEAs). Each MOEA is dedicated to a sole objective, and combines both criterion-based and dominance-based approaches. The suggested algorithm addresses two main sub-objectives: minimizing the distance between the current non-dominated solutions and the ideal point, and ensuring the spread of the potentially efficient solutions. Experimental results are included, where we assess the performance of the suggested algorithm against the above-mentioned sub-objectives, compared with state-of-the-art results using well-known multiobjective metaheuristics.
Keywords and phrases. Parallel evolutionary algorithms, multiobjective discrete optimization, multiobjective Knapsack Problem.
1. Introduction
Multiobjective Optimization Problems (MOPs) involve several conflicting criteria, where a solution is qualified as optimal if it belongs to the set of criteria trade-offs called the Pareto front [30]. This contrasts with single-objective optimization problems, which aim to find a single solution that achieves a desired "goal" (to maximize or minimize), called the optimal solution. The notion of optimality in the MOP context needs to be rethought, since the goal here is to find a set of compromise solutions between the different objectives, called Pareto optimal solutions and characterized by introducing the dominance relation. The difficulty of multiobjective optimization lies in the absence of a total order relation linking all the solutions of the problem. In terms of evolutionary algorithms, this lack appears in the difficulty of conceiving a selection operation that assigns to each individual a selection probability proportional to that individual's performance. Another drawback is the premature loss of diversity; hence the need for techniques that maintain diversity within the population. Against such problems, one has to conceive algorithms that satisfy the following criteria [29]: the algorithm must converge towards the true Pareto front in a reasonable time, and it must also produce diverse solutions on the front, in order to obtain a good representative sample instead of focusing on one area of the objective space. A number of approaches (exact and heuristic) have been considered for solving
MOPs, mainly metaheuristics, which have proven their ability to give good approximations. Among these methods, one of the most popular resorts for solving MOPs is Multiobjective Evolutionary Algorithms (MOEAs). The very early work on MOEAs was initiated by D. Schaffer (VEGA, [8]), who is considered to be the first to design a MOEA, during the 1980s. Since then, many successful MOEAs have been proposed; see for example: PAES [32], NSGA-II [6], SPEA2 [7], IBEA [26], MOEA/D [41], etc. The reader may find further details about early studies of MOEAs in [30]. The employment of MOEAs in Multiobjective Optimization (MOO) is often justified by their population-based character, managing to achieve high-quality approximations of Pareto optimal solutions. On the other hand, for most MOPs, executing the generational cycle of standard MOEAs on large instances of the problem and/or with large population sizes requires considerable resources in terms of computational time and memory [1]. Therefore, a variety of design and implementation difficulties are studied to construct more effective MOEAs. Overcoming these difficulties usually involves defining new operators, hybrid algorithms, parallel models, etc. Hence, the parallelization of MOEAs emerges naturally when dealing with computationally expensive algorithms. However, the parallelization of MOEAs aims not only to reduce computational time, but also to improve the quality of the approximated Pareto fronts, and to increase the robustness of MOEAs against MOPs in both real-life and theoretical research fields [1]. In this work, we employ the master/worker model, handling multiple MOEAs with different populations and a separate "main" monitoring algorithm. Many pMOEAs of different models and implementations can be found in the literature; for a literature review, see [2], [37], [39].
In the following part, we briefly present some of the existing pMOEAs using a master/worker model and/or multiple search algorithms (MOEAs), partitioning either the search space or the objective space.

Related work. An early work on master/worker pMOEAs is the Parallel Multi-Objective Genetic Algorithm (PMOGA), applied to eigenstructure assignment problems [31]. This algorithm launches multiple MOEAs using identical initial populations and different decision-making logic (fitness assignment and selection operator), where each one is executed on a worker processor. The master uses the populations produced by the workers to form the final population [2]. Another master/worker pMOEA is applied to solve scheduling problems [35], where two versions were implemented: heterogeneous and homogeneous populations. The first uses multiple subpopulations, one distributed to each objective function, while the heterogeneous subpopulations are Pareto-oriented. Furthermore, both versions utilize a unidirectional migration flow by sending individuals to a separate main population [2]. We mention also the master/worker Parallel Single Front Genetic Algorithm (PSFGA), applied to several benchmark functions [36]. This algorithm periodically performs the following tasks: it sorts the population according to the values of the objective functions, then partitions it into subpopulations and sends them to different processors, where a sequential multiobjective genetic algorithm (SFGA) is applied to each subpopulation. In [50], the authors introduced a new parallel model for the MOEA/D algorithm, where two parallel versions of MOEA/D were proposed. The parallelization framework employed consists of dividing the population into many sub-populations, and distributing each sub-population to a worker. This study was mainly focused on conceiving a parallelization that can consider the computational effort required by the different sub-problems
(i.e., generated by dividing an MOP). More recently, in [54] the authors proposed a pMOEA for uncertain optimization problems called the parallel double-level multiobjective evolutionary algorithm (PDL-MOEA). This algorithm addresses a biobjective problem formulated by transforming a single-objective robust optimization problem into a biobjective one. Besides, this parallelization aims to reduce the execution time of the MOEA. Furthermore, an extension of IBMOLS to a cooperative model is presented in [22], with an application to the multiobjective Knapsack Problem. The suggested approach uses a multipopulation-based cooperative framework, W-CMOLS, in which different subpopulations are executed independently with different configurations of IBMOLS. Furthermore, each subpopulation is focused on a specific part of the search space using weighted versions of the epsilon indicator. Moreover, a new framework for MOEA parallelization, and a new algorithm called the parallel evolutionary algorithm (PEA), were proposed in [49]. The novelty of this framework is that it separates the selection mechanism from the rest of the evolutionary process. Accordingly, the PEA algorithm mainly consists of separating convergence and diversity: convergence is achieved by a number of evolving subpopulations, while diversity is emphasized on the converged solutions from each subpopulation using a proper selection mechanism. We mention also a parallel multiobjective cooperative coevolutionary (CC) variant of the Speed-constrained Multiobjective Particle Swarm Optimization (SMPSO) algorithm [51], called CCSMPSO [52]. In that paper, the authors show the advantages, in terms of parallelization benefits, quality of approximated fronts, and efficiency, that can be offered by the employment of the CC framework.
We mention a recent practical study presented in [53], where a new parallel multiobjective Particle Swarm Optimization (PSO) algorithm is introduced and applied to solve the cascade hydropower reservoirs operation problem, balancing benefit and firm output (i.e., the case of the Lancang cascade hydropower system in southwest China). The parallelization of PSO was achieved using the so-called Fork/Join framework, benefiting from the divide-and-conquer strategy to generate and handle subpopulations (i.e., it divides its evolutionary process into subtasks). We mention also a recent multiobjective heuristic called the multiobjective three-level parallel PSO algorithm (MO-3LPPSO) [55], designed and implemented in a master/worker topology on multiple parallel machines; this algorithm was proposed to handle structural alignment of complex RNA sequences.

In this paper, we propose a parallel multiobjective evolutionary algorithm designed in a master/worker model, which we call Parallel Criterion-based Partitioning MOEA (PCPMOEA). The proposed algorithm uses multiple MOEAs with a criterion-based selection operation, each one dedicated to a sole objective function, while these algorithms are monitored by a master entity that periodically collects and partitions the current Pareto solutions according to their distribution in the objective space. The parts are used to update the current subpopulations of the MOEAs. Experimental results are provided, where we used several benchmark instances of the multiobjective Knapsack Problem, and compared PCPMOEA with some effective multiobjective evolutionary algorithms. This paper is organized as follows: in the next section, we recall some basic concepts of MOO and the multiobjective Knapsack Problem. In Section 3, we present a brief description of parallel MOEAs and their taxonomy. In Section 4, we present the suggested parallel approach, with a detailed description of its functioning.
In Section 5, we discuss the experimental results obtained with the suggested algorithm. Finally, in Section 6, we end the paper with the conclusion and our perspectives.
2. Background
Multiobjective Problems. The goal in Multiobjective Problems (MOPs) is to optimize simultaneously k objective functions. More formally, MOPs can be stated as follows [3]:

$$(MOP)\quad \begin{cases} \text{``max''}\; Z(x) = (Z^1(x), Z^2(x), \dots, Z^k(x)),\\ \text{s.t.}\quad x \in \Omega. \end{cases}$$
where Ω is the decision space, and x ∈ Ω is a decision vector (feasible solution). The vector Z(x) consists of k objective functions.
$$Z^i : \Omega \to D_i,\quad i = 1, \dots, k,$$

where $\prod_{i=1}^{k} D_i$ is the feasible objective space.

Very often, the objectives in MOPs contradict each other: there exists no point that maximizes all the objectives simultaneously, and one has to balance them. Moreover, a set of trade-off solutions can be defined in terms of Pareto optimality using the dominance relation.
Pareto optimality. Since the aim in MOPs is to find good compromises rather than a single solution as in single-objective optimization problems, we present the dominance relation, allowing optimality to be defined for MOPs. A solution $x$ is said to dominate another solution $x'$, denoted $x \succ x'$, if and only if $\forall i \in \{1, \dots, k\},\; Z^i(x) \ge Z^i(x')$ and $Z(x) \ne Z(x')$. A feasible solution $x^* \in \Omega$ is called a Pareto optimal solution if and only if $\nexists y \in \Omega$ such that $Z(y) \succ Z(x^*)$. The set of all Pareto optimal solutions is called the Pareto-optimal set (PS); moreover:

$$PS = \{x \in \Omega \mid \nexists y \in \Omega,\; Z(y) \succ Z(x)\}.$$
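The dominance relation and the extraction of the Pareto set can be sketched directly in code. The following is a minimal illustration (not the authors' implementation); maximization is assumed, and the helper names are chosen for this example:

```python
from typing import List, Sequence

def dominates(zx: Sequence[float], zy: Sequence[float]) -> bool:
    """True if objective vector zx dominates zy (maximization):
    zx >= zy in every component and zx != zy."""
    return all(a >= b for a, b in zip(zx, zy)) and tuple(zx) != tuple(zy)

def pareto_set(points: List[Sequence[float]]) -> List[Sequence[float]]:
    """Naive O(k|P|^2) extraction of the non-dominated subset PS."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Note the quadratic cost of the naive extraction, which is consistent with the complexity of dominance-based operators discussed later in the paper.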
The evaluation of PS in the objective space is called the Pareto front (PF):
P F = {Z(x)|x ∈ P S}.
From a MOEA point of view, Pareto-optimal solutions can be seen as those solutions within the genotype search space, whose corresponding phenotype objective vector components cannot be all simultaneously improved [2].
Ideal vector. The ideal vector contains the optimum of each separately considered objective function, all gathered in a single vector of the objective space (usually $\mathbb{R}^k$) [2]. In other words, let $x_0^{(i)} \in \Omega$ be a vector of decision variables that maximizes the ith objective function $Z^i$; that is, $x_0^{(i)}$ verifies the following equality [2]:

$$Z^i(x_0^{(i)}) = \max_{x \in \Omega} Z^i(x).$$
We denote by $Z_0^i$ the optimum value $Z^i(x_0^{(i)})$ of the ith objective function, and by $Z_0 = (Z_0^1, Z_0^2, \dots, Z_0^k)$ the ideal vector of a MOP [2]. The ideal vector is usually used in some methods as a reference point.

Multiobjective multidimensional Knapsack Problem. The Multiobjective multidimensional Knapsack Problem (MOMKP) is a widely studied combinatorial optimization problem. Furthermore, the MOMKP is the multidimensional version of the multiobjective Knapsack Problem (MOKP) [3], which is proven to be NP-hard [15]; besides, it is known that the size of the Pareto optimal set can grow exponentially with the number of items in the knapsack [20]. Despite its simple formulation, the MOKP can model many real
problems such as budget allocation and resource allocation [5]. Therefore, its resolution has both a theoretical and a practical character that draws researchers' attention. The MOMKP is a particular case of multiobjective linear integer programming (MOILP) [3]. Mathematically, the MOMKP can be stated as follows: given n items having p characteristics $w_j^i$, $j \in \{1, \dots, n\}$ (weight, volume, etc.), and k profits $c_j^i$, $j \in \{1, \dots, n\}$, we want to select items so as to maximize the k profits while not exceeding the p knapsack capacities $W_i$ [3]:

$$(MOMKP)\quad \begin{cases} \text{``max''}\; Z^i(x) = \sum_{j=1}^{n} c_j^i x_j, & i \in \{1, \dots, k\},\\ \text{s.t.}\; \sum_{j=1}^{n} w_j^i x_j \le W_i, & i \in \{1, \dots, p\},\\ x_j \in \{0, 1\}, & \forall j \in \{1, \dots, n\}, \end{cases}$$

where n is the number of items, $x_j$ denotes the jth binary decision variable, k represents the number of objectives, and $Z^i$ stands for the ith objective function.
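As an illustration of the formulation above, evaluating a candidate 0/1 selection vector can be sketched as follows. The data layout (`profits[i][j]` holding $c_j^i$ and `weights[i][j]` holding $w_j^i$) and the function name are assumptions made for this example, not part of the paper:

```python
def evaluate_momkp(x, profits, weights, capacities):
    """Evaluate a 0/1 selection vector x for a MOMKP instance.

    profits[i][j]    -- profit c_j^i of item j under objective i (k rows)
    weights[i][j]    -- weight w_j^i of item j in dimension i (p rows)
    capacities[i]    -- knapsack capacity W_i
    Returns the k-dimensional objective vector and a feasibility flag."""
    chosen = [j for j, bit in enumerate(x) if bit]
    z = tuple(sum(c[j] for j in chosen) for c in profits)      # k profit sums
    feasible = all(sum(w[j] for j in chosen) <= cap            # p capacity checks
                   for w, cap in zip(weights, capacities))
    return z, feasible
```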
The multiobjective Knapsack Problem (MOKP) and its variants have been the subject of many studies, addressing new approaches and analyses of either exact or heuristic resolution methods. Namely, Zitzler and Thiele [16] pioneered the work on the multidimensional variant of the MOKP using evolutionary algorithms, where they also introduced a set of MOMKP instances that were widely used afterwards. Many other research works are dedicated to different variants of the MOKP (see for example [17], [18], [19], [20], [24], [27]). However, the existing heuristic approaches are not restricted to evolutionary algorithms; one may find, for instance: indicator-based ACO [21], a parallel multi-population algorithm using local search [22], iterated local search based on quality indicators [23], tabu search [25], and swarm intelligence [5]. The interested reader may find a detailed discussion of resolution approaches for the MOKP and its variants in [3]. Due to the popularity and simplicity of the MOKP, and the fact that MOEAs are known to perform at their best on problems with string structures, we have chosen the MOMKP as the test problem for the assessment of the suggested approach. Furthermore, we used the known MOMKP benchmark instances introduced by Zitzler et al. in [16] (also referred to as ZMPK instances in [3]).
3. Parallel multiobjective evolutionary algorithms (pMOEAs)
In this section, we present a brief sweep over the common theoretical aspects of the parallelization models and taxonomy used in (MO)EAs. MOEAs are among the metaheuristics most commonly considered for parallelization: when dealing with populations of individuals, parallelism arises naturally (i.e., each individual can be considered as an independent unit) [13]. As a natural extension, the parallelization of MOEAs is derived from models designed for single-objective optimization [12]: master/worker, island, and diffusion models. Furthermore, there exist many criteria that provide the theoretical description of a given parallelization. In a recent publication, E.-G. Talbi has given a taxonomy for pMOEAs, which describes a unified view over pMOEAs [1].
In this work, we adopt a master/worker or global parallelization paradigm, in which one processor serves as the master and usually performs tasks that require a global outlook over the
target problem (search space, objective functions, etc.), as is the case for the selection operator in MOPs. Furthermore, the master supervises multiple processors called workers, by distributing workload (i.e., tasks, subproblems) to the workers to process (i.e., execute, solve). Moreover, for the design of a pMOEA, three major hierarchical models can be identified [1]: algorithmic-level, iteration-level, and solution-level. This classification is done according to their dependency on the target MOPs, behavior, granularity, and aim of the parallelization. Note that each paradigm can be implemented in either a synchronous or an asynchronous way [14]. A further particularity in designing a pMOEA is to specify whether the Pareto front is distributed among the parallel search algorithms, or is a centralized element of the parallel scheme. This leads to another classification criterion of pMOEAs, where these two strategies are called distributed and centralized approaches [1], [13].
Accordingly, the suggested parallel algorithm is designed in a master/worker paradigm with full independence from the target multiobjective problem. Besides, it works as an algorithmic-level parallelization, where we use multiple asynchronous search algorithms working on subpopulations in parallel (intra-algorithm parallelization); each one is assigned a region of the objective space to work on. Furthermore, the suggested search strategy uses a centralized Pareto front handled by the master. This Pareto front is built by combining multiple distributed Pareto fronts where "local" non-dominated solutions are archived. Then, it is advisedly partitioned and distributed again to the workers/search algorithms so as to proceed with the "local" search.
4. Parallel Criterion-based Partitioning MOEA (PCPMOEA)
In this section, we describe the general functioning of the suggested parallel scheme and the search strategy. The idea is to launch multiple asynchronous MOEAs with different populations, where each MOEA is dedicated to a given criterion, while the search entities (MOEAs) are supervised by a global processing element, in order to adjust and redirect the search process in real time. In summary, the suggested pMOEA can be classified as a cooperative algorithmic-level parallel model designed in a master/worker paradigm, handling:
• Workers (search entities) consisting of criterion-based MOEAs: multiple MOEAs with criterion-based selection.
• A master entity whose main mission is to update the current population of each search entity (worker), by performing a global selection among the elite individuals produced by the workers, and then advisedly partitioning and redistributing the selected individuals to the search entities.
Figure 1. Schematic of the adopted parallel model: MOEA_1, MOEA_2, …, MOEA_k exchanging information with the master.
In other words, each MOEA_i, i = 1, …, k, represents a criterion-based search entity dedicated to the ith objective function. The master node periodically accomplishes the following tasks: it performs a global selection among the potentially efficient solutions produced by the search entities, and then updates the current population of each MOEA (see Figure 1). In the rest of this section, we present a detailed description of the suggested pMOEA.

4.1. MOEA_i (criterion-based MOEA). With the view to design a pMOEA that aims to minimize the distance between the current Pareto front and the ideal point, the suggested algorithm proceeds by targeting a different region of the Pareto front with each search entity (i.e., MOEA). This is done by using a proper selection mechanism which gives preponderance to a given objective function, while retaining its multiobjective character. The suggested MOEA is characterized by a secondary population and its criterion-based selection mechanism. Many MOEAs involve an additional population in the search process, so as to store potentially efficient solutions found through the generational process. This approach has proven to be effective in finding good approximations of the optimal Pareto front (see for example: PAES [32], SPEA [16], SPEA2 [7], MOMGA [33], MOMGA-II [34]). This secondary population is often called an archive or external archive [2]. In this algorithm, like most MOEAs, we follow the pattern of initializing a population of individuals, then executing a generational loop with evolutionary operators, fitness assignment, and ranking of individuals, and then storing the non-dominated solutions in an archive. Furthermore, the archive size is not limited, while the number of dominated individuals is fixed beforehand. However, the archive is updated periodically by the master entity so as to keep it enclosed in, and dedicated to, its allocated search space.
The following algorithm summarizes the adopted MOEA, which we call MOEA_i, referring to the ith objective function Z^i:

Algorithm 1 MOEA_i (criterion-based: ith objective)
Begin
  Generate the initial population P_i;
  repeat
    Evaluate the individuals in P_i;
    Apply the selection operator (see Figure 2);
    Select parents from P_i and perform evolutionary operations;
    if (Migration condition is satisfied) then
      Send the current archived solutions to the master;
      Update the archive (see Section 4.2);
    endif
  until (Termination condition met)
End
Selection mechanism. The most commonly employed selection operators are based on the dominance relation: they assign a fitness value (order/rank) to each individual in the population according to different properties based on the dominance relation, mainly dominance rank, dominance count, and dominance depth [2]. The common point among these operators is that they preserve Pareto solutions throughout the generational process. Furthermore, the current non-dominated solutions will iteratively approach the true Pareto front of the target
problem. On the other hand, there are also criterion-based selection operators that differ from the above-mentioned ones, in that they basically use each of the objective functions periodically to select the solutions that pass to the next generation. In other words, preponderance is given to one of the objectives in each iteration of the algorithm; a classic example of criterion-based techniques is VEGA [8]. In this work, we use both dominance-based and criterion-based selection techniques: we extract and archive the non-dominated individuals, and then assign fitness values to the rest of the individuals according to their fitness in the ith criterion environment (only the ith objective function is considered, see Figure 2). This keeps the search process of each MOEA_i focused on the ith criterion, while storing the non-dominated solutions met throughout the search process. Hence, the MOEA_i's population will incrementally adapt to the ith criterion environment. For the selection operator of each MOEA_i, the order relation is
Figure 2. Schematic of the selection mechanism (iteration t): the non-dominated solutions PS_t of the current population P_t are sent to the archive; the N elite individuals according to Z^i form the new population P_{t+1}; the remaining individuals are rejected.
defined so as to give preponderance to the ith objective function, while retaining the non-dominated solutions to be archived. The order relation is defined as follows. Let PS_t be the set of Pareto solutions obtained at iteration t; then

$$\forall x, y \in P_t,\quad x \ge_i y \iff Z^i(x) \ge Z^i(y).$$
Moreover, the number of dominated solutions maintained for the next generation is determined by the population size N. More precisely, the parameter N denotes the size of the set of dominated individuals to be retained. Hence, a dominated solution is retained only if its rank according to the ith criterion is at most N, while the archive size is dynamic. The process of selecting the individuals that pass to the next generation is given explicitly as follows: let PS_t ⊂ P_t be the set of current non-dominated individuals in P_t, PS_t = {x ∈ P_t | ∄ y ∈ P_t : y ≻ x}. The population of the next generation P_{t+1} is constructed as follows:

$$P_{t+1} = \{x \in P_t \mid (x \in PS_t) \vee (rank_i(x, P_t \setminus PS_t) \le N)\} = \{x \in P_t \mid (x \in PS_t) \vee (rank_i(x, P_t) \le N + |PS_t|)\}.^1$$
In terms of complexity, the selection operator can be summarized in two major tasks: extracting the non-dominated solutions, running in $O(k|P_t|^2)$, and sorting the dominated ones, which runs in the very worst case in $O(|P_t| \log(|P_t|))$. Hence, the run-time of the selection procedure for each
^1 rank_i(x, A) is the order of x among the elements of a set A, according to the ith objective function Z^i.
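The construction of P_{t+1} can be sketched as follows. This is a schematic rendering, not the authors' code; it assumes individuals are hashable and that `objs` maps an individual to its objective vector:

```python
def criterion_based_selection(pop, objs, i, N):
    """Keep all current non-dominated individuals PS_t (also returned for the
    archive), plus the N best dominated individuals ranked on objective i
    alone -- i.e., P_{t+1} = PS_t  U  {top-N of P_t \\ PS_t under rank_i}."""
    def dominates(a, b):
        return all(u >= v for u, v in zip(a, b)) and tuple(a) != tuple(b)
    ps_t = [x for x in pop
            if not any(dominates(objs(y), objs(x)) for y in pop)]
    dominated = [x for x in pop if x not in ps_t]
    dominated.sort(key=lambda x: objs(x)[i], reverse=True)  # rank_i ordering
    return ps_t + dominated[:N], ps_t  # (P_{t+1}, PS_t)
```

The dominance test inside the double loop is what gives the $O(k|P_t|^2)$ term noted above; the sort contributes the $O(|P_t|\log|P_t|)$ term.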
MOEA_i is dominated by the process of extracting the non-dominated solutions. Furthermore, the worst-case complexity of the selection operator is $O(k|P_t|^2)$.
4.2. Master entity (global selection and partitioning technique). The master entity performs a periodic collection of the current archives from the criterion-based MOEAs, in order to extract from them the current global Pareto solutions; it then partitions these potentially efficient solutions into k subsets, where k is the number of objective functions. The obtained subsets are used to update the archives (current non-dominated solutions) of each search entity MOEA_i (dedicated to the ith objective). The main goal of the partitioning procedure is to keep each worker focused on a given area, by periodically adjusting the elite solutions of the search entities according to the topology of the current Pareto front. In other words, it helps intensify the search in each area assigned to the parallel MOEAs. The partitioning procedure basically uses the statistical quantile of a given order α [10]. Let X be a discrete random variable; a value $x_\alpha$ is called a quantile of order α, 0 < α < 1, if:

$$P(X < x_\alpha) \le \alpha \le P(X \le x_\alpha).$$
If we project this concept onto the distribution of the Pareto solutions in the objective space, the quantile of order α can be defined as the evaluation of the individual which divides the Pareto front into two parts, such that a proportion α of the total number of individuals in the Pareto front are less than or equal to this quantile, and a proportion (1 − α) of the individuals in the current Pareto front PF_t (iteration t) are greater than this quantile. Furthermore, let $p_\alpha \in PS_t$ be a non-dominated individual; $Z^i(p_\alpha) \in PF_t$ is said to be the Pareto front's quantile of order α according to the ith objective function if:

$$\forall p \in PS_t,\quad P(Z^i(p) < Z^i(p_\alpha)) \le \alpha \le P(Z^i(p) \le Z^i(p_\alpha)).$$

Hence, for a given α, we can construct for each objective function $Z^i$, $i \in \{1, \dots, k\}$, a partition $\{PS_t \setminus P^i, P^i\}$ using $p_\alpha$, the quantile of order α, such that $P^i = \{p \in PS_t \mid Z^i(p) \ge Z^i(p_\alpha)\}$.
However, for a problem with k objective functions, the set of non-dominated solutions is partitioned into k subsets $\{P^i\}_{1 \le i \le k}$ using the partitioning procedure (see Algorithm 2), where $PS_t = \cup_{1 \le i \le k} P^i$, and each pair of the $\{P^i\}_{1 \le i \le k}$ shares a subset of solutions F. The size of F is determined by the parameter α; for example, $|F| = (1 - 2\alpha)|PS_t|$ for bi-objective problems.

Algorithm 2 Partitioning procedure
Requires: Set of non-dominated solutions P, and α ∈ ]0, 0.5].
Ensures: k subsets {P^i}_{1≤i≤k} of a given set P.
For each objective Z^i do
  P ← sort(P, Z^i, ↑);^2
  P^i ← P([α|P|] : |P|);^3
  Send P^i to worker MOEA_i; // update MOEA_i's archive.
End

^2 sort(P, Z^i, ↑) denotes the procedure of sorting the individuals in P according to Z^i, in increasing order.
^3 P(a : b) = {p_j ∈ P | a ≤ j ≤ b}, and [a] is the integer part of a.
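Algorithm 2 can be sketched as follows. This is a simplified rendering with illustrative names: for each objective, the lowest ⌊α|P|⌋ individuals are dropped, which matches the paper's 1-indexed slice up to an off-by-one at the cut point:

```python
import math

def partition(P, objs, k, alpha):
    """Quantile-based partitioning: for each objective Z^i, sort P increasingly
    on Z^i and keep the upper part, so P^i holds the best solutions on Z^i."""
    parts = []
    for i in range(k):
        ordered = sorted(P, key=lambda p: objs(p)[i])
        cut = math.floor(alpha * len(ordered))  # [alpha * |P|]
        parts.append(ordered[cut:])
    return parts
```

With α = 0.25 and two objectives, each subset keeps the top 75% on its objective, so about half of the individuals end up shared between the two workers, as in the |F| = (1 − 2α)|PS_t| remark above.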
Since the partitioning procedure is invoked periodically throughout the search process, and also needs to take all objective functions into consideration, it is imperative that the chosen partitioning procedure run in a reasonable time. Hence, we employed the concept of quantiles so as to efficiently construct an adequate partition of the current non-dominated individuals recovered by the master, while the partitioning is accomplished according to the distribution of the non-dominated solutions in the objective space. In other words, all the objective functions are taken into consideration by the partitioning procedure. The run-time of the partitioning procedure is $O(k|P| \log(|P|))$. The following algorithm summarizes the functioning of the master entity.

Algorithm 3 Master process
repeat
  if (Migration condition is satisfied) then
    Collect the current archives P^i from MOEA_i, i = 1, …, k, P = P ∪ (∪_i P^i);
    Remove dominated solutions from P;
    Perform the partitioning procedure, (P^1, …, P^k) ← Partition(P, α);
    // P^1, …, P^k are sent to MOEA_1, …, MOEA_k, respectively.
  endif
until (Termination condition met)
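The master's merge-and-filter step, performed before the partitioning procedure is called, can be sketched as follows (illustrative names; individuals are assumed hashable so duplicates across archives can be removed):

```python
def merge_archives(archives, objs):
    """Union the workers' archives and remove duplicates and dominated
    solutions, yielding the current global Pareto set P."""
    def dominates(a, b):
        return all(u >= v for u, v in zip(a, b)) and tuple(a) != tuple(b)
    P = list({p for archive in archives for p in archive})  # union + dedup
    return [p for p in P if not any(dominates(objs(q), objs(p)) for q in P)]
```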
Figure 3 presents an example of the partitioning procedure applied to a bi-objective Knapsack instance, 2KP100-50 [9], showing the Pareto front obtained at iteration 10³, partitioned using α = 0.25. Here the shared solutions (i.e., those contained in the gray area) are enclosed in the interquartile range (IQR) of the Pareto solutions. The IQR thus represents the 50% of the current non-dominated solutions that are shared among the search entities.

Figure 3. Illustrative example of the partitioning procedure (solutions handled only by MOEA_1, solutions handled only by MOEA_2, solutions shared among MOEA_1 and MOEA_2, and the ideal point).
5. Experimental studies
5.1. Experiment methodology. We tested the suggested algorithm on benchmark instances of the MOMKP chosen from the instance library of Zitzler et al. [16], from which we consider for these experiments six instances with 250, 500, and 750 items, and with two and three objectives. The parallelization of the suggested algorithm is implemented via multi-threading under the JAVA SE platform. The tests were carried out on a personal computer equipped with an Intel® Core™ i7-5600U CPU at 2.60GHz with 8GB of RAM. We compared the performance of the PCPMOEA with five multiobjective algorithms with different concepts and/or different search strategies:
• NSGA-II [6]: an elitist non-dominated sorting genetic algorithm, a multiobjective genetic algorithm using dominance depth and crowding distance for its selection operator and search guidance.
• SPEA2 [7]: the strength Pareto evolutionary algorithm, a MOEA with an external archive using dominance rank and a nearest-neighbor density estimation technique.
• MOEA/D [41]: a multiobjective evolutionary algorithm based on decomposition; essentially, it decomposes an MOP into a number of scalar optimization subproblems and optimizes them simultaneously.
• NSGA-III [47]: follows the framework of NSGA-II with a significant improvement in the diversity maintenance strategy. Roughly, this improvement consists of replacing the crowding operator by a reference-vector-based approach.
• MOFPA [5]: the multiobjective Firefly algorithm with Particle swarm optimization, a hybrid discrete swarm intelligence algorithm employing the cooperation of two intelligent swarm algorithms: the Firefly Algorithm and Particle Swarm Optimization.
This experimental study consists of testing and evaluating the suggested algorithm by addressing the following points:
• comparing the suggested algorithm against a set of well-known multiobjective metaheuristics, according to both the quality of the produced solutions and efficiency;
• carrying out a sensitivity analysis of the suggested algorithm against variations of the partitioning parameter.
In order to evaluate and compare the quality of the solutions evolved by these algorithms, we used five performance metrics: the Inverted Generational Distance (IGD) [42], the Hypervolume (H) [43], the Spacing metric [40], the Diversity metric [32], and the Set Coverage metric [43]. As mentioned, one of the aims of the suggested algorithm is to minimize the distance between the potentially efficient solutions and the ideal point. Therefore, we also used a modified version of the GD metric to measure this distance.
Furthermore, we analyzed, using the above-mentioned performance metrics, the impact of the choice of the partitioning parameter on the quality of the solutions produced by the suggested algorithm.

5.2. Constraints handling. Throughout the generational process of the search entities, each constructed individual must be repaired if the corresponding decision does not satisfy the constraints. Hence, a correction procedure is designed to fulfill the required feasibility conditions. The correction procedure consists of randomly flipping decision variables of value one to zero, until a feasible decision is obtained. We use a random iterative correction so as to avoid imposing genes (solution characteristics), or exerting any other influence on the algorithm's behavior. To make the correction procedure efficient, it maintains a list of the decision variables of value one, then iteratively chooses a random decision variable to flip and removes it from the list (see Algorithm 4). Thus, the worst-case complexity of the correction procedure is O(pn) for a MOMKP with n decision variables and p constraints.

Algorithm 4 Correction procedure
Requires: A non-feasible solution x = (x_1, . . . , x_n) ∈ {0, 1}^n.
Ensures: A feasible solution induced from x.
Construct the set of indices L = {i | x_i = 1};
Repeat
    Withdraw a random element of L, say k;
    Flip the value of x_k (i.e., x_k = 0);
    Remove k from the set L (i.e., L = L \ {k});
Until (x is feasible or L = ∅)
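A minimal sketch of this repair scheme for a maximization MOMKP, assuming p knapsack constraints given by a weight matrix and capacity vector (the names `weights` and `capacities` are illustrative, not the authors' code):

```python
# Sketch of the random repair procedure (Algorithm 4) for the MOMKP.
import random

def repair(x, weights, capacities, rng=random):
    """Flip random 1-bits of x until every knapsack constraint is satisfied.

    x          -- 0/1 list of n decision variables (modified in place)
    weights    -- p x n matrix, weights[j][i] = w_ji
    capacities -- list of the p capacities
    """
    load = [sum(w[i] for i in range(len(x)) if x[i]) for w in weights]
    ones = [i for i, v in enumerate(x) if v == 1]   # the list L of Algorithm 4
    while ones and any(l > c for l, c in zip(load, capacities)):
        k = ones.pop(rng.randrange(len(ones)))      # withdraw a random index
        x[k] = 0                                    # flip x_k to zero
        for j, w in enumerate(weights):             # update the constraint loads
            load[j] -= w[k]
    return x

x = [1, 1, 1, 1]
repair(x, weights=[[4, 3, 2, 1]], capacities=[5])   # x is now feasible
```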
5.3. Performance evaluation metrics.
Hypervolume (H) [43]. One of the most popular indicators for multiobjective optimization algorithms is the hypervolume. The hypervolume indicator measures the volume of the k-dimensional region of the objective space dominated by a set of points A. The aim of this indicator is to measure both the convergence of the obtained Pareto fronts towards the true Pareto front and their diversity. The calculation of this volume requires the designation of a reference point Z_ref, which is dominated by all the points of the set A. Consequently, a set with a larger hypervolume is likely to present a better set of trade-offs than sets with a lower hypervolume. The reference point considered here to compute the hypervolume is the null vector (the origin point, Z_ref = (0, . . . , 0), where k is the number of objective functions).
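For the bi-objective case, the hypervolume with the origin as reference point can be computed by a simple sweep over the front (an illustrative sketch, not the authors' implementation):

```python
# Sketch: hypervolume of a bi-objective (maximisation) front with the
# reference point at the origin, as used in the paper.
def hypervolume_2d(front, ref=(0.0, 0.0)):
    """Area dominated by `front` and bounded below by `ref`."""
    # Sort by the first objective in decreasing order; the sweep then adds
    # disjoint rectangles each time the second objective improves.
    pts = sorted(set(front), key=lambda p: p[0], reverse=True)
    volume, prev_z2 = 0.0, ref[1]
    for z1, z2 in pts:
        if z2 > prev_z2:                      # point extends the front upward
            volume += (z1 - ref[0]) * (z2 - prev_z2)
            prev_z2 = z2                      # dominated points are skipped
    return volume

print(hypervolume_2d([(3, 1), (2, 2), (1, 3)]))  # → 6.0
```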
Inverted generational distance (IGD) [39]. The IGD metric is an inverted version of the GD (generational distance) metric used to measure convergence. This metric evaluates the average distance between the approximated set A and a reference set PF (the true Pareto front). In other words, the IGD measures the proximity, in the objective space, between the obtained potentially efficient solutions and the true Pareto front:

IGD(PF, A) = ( Σ_{s∈PF} d_s² )^{1/2} / |PF|,

where d_s is the Euclidean distance in the objective space between the solution s ∈ PF and the nearest solution in A:

∀s ∈ PF, d_s = min_{s'∈A} ||Z(s) − Z(s')||₂.

The relation between the GD and the IGD can be expressed as follows: GD(A, B) = IGD(B, A), where B is a reference set.
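GD and IGD are instances of the same computation with the roles of the two sets exchanged; a direct transcription of the formula above might look like this (illustrative sketch, objective vectors given directly as tuples):

```python
# Sketch of the IGD/GD computation as defined above.
import math

def mean_min_dist(X, Y):
    """sqrt(sum over s in X of d_s^2) / |X|, with d_s the Euclidean distance
    from s to its nearest neighbour in Y (objective space)."""
    return math.sqrt(sum(min(math.dist(s, y) for y in Y) ** 2 for s in X)) / len(X)

def igd(PF, A):    # IGD(PF, A): sums over the reference front PF
    return mean_min_dist(PF, A)

def gd(A, PF):     # GD(A, PF): the same formula, summing over the approximation A
    return mean_min_dist(A, PF)

PF = [(0.0, 4.0), (2.0, 2.0), (4.0, 0.0)]
print(igd(PF, PF))  # → 0.0  (an approximation matching the reference front)
```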
Ideal distance (ID). To evaluate the distance between the obtained potentially efficient solutions and the ideal vector, we used a modified version of the GD metric, which we refer to as the ideal distance (ID). Here, we compute the average distance between the approximated set A and the ideal vector Z_0. Similarly to the GD metric, the ID is calculated as follows:

ID = ( Σ_{s∈A} d_s² )^{1/2} / |A|,

where ∀s ∈ A, d_s = ||Z(s) − Z_0||₂.
Spacing metric (SP) [40]. This metric evaluates the spread of a given set of non-dominated solutions, say A, through its distribution in the objective space, by computing the standard deviation of the distances between neighboring solutions:

SP = sqrt( (1 / (|A| − 1)) Σ_{s∈A} (d̄ − d_s)² ),

where d_s represents the distance between the solution s and the closest neighboring solution in the objective space,

∀s ∈ A, d_s = min_{s'∈A, s'≠s} Σ_{m=1}^{k} |Z_m(s) − Z_m(s')|,

and d̄ denotes the average of the distances d_s, ∀s ∈ A. However, a set A with a good spacing metric does not necessarily present a good distribution of solutions compared to the entire Pareto optimal front [44].

Diversity metric (DM) [45]. To provide decision makers with a comprehensive view of the Pareto front, it is imperative that the obtained non-dominated solutions are converged, yet as diverse as possible in the objective space. Hence, the DM is introduced in order to evaluate the diversity of solutions in a given approximated Pareto front. The basic idea of this metric is to divide the objective space into a number of distinct equal-size cells (i.e., to define a grid), and then to assign a value according to the distribution of the obtained non-dominated points in these cells (i.e., depending on the existence of non-dominated points in each cell and its neighbors, dimension-wise). This metric takes advantage of the fact that a more diversified set of solutions usually occupies more cells in the objective space [46]. Furthermore, this metric also takes into account the attainment (i.e., convergence towards the optimal Pareto front) of the approximated solutions by considering a reference set when attributing scores.

Set coverage metric [43]. This metric is used for comparing two sets of potentially efficient solutions, say A and B, by calculating the proportion of solutions in the set B that are weakly dominated by solutions in the first set A:

C(A, B) = |{b ∈ B | ∃a ∈ A : a ⪰ b}| / |B|.
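A direct transcription of the spacing and set-coverage definitions above, for a maximization problem (an illustrative sketch, not the authors' code):

```python
# Sketch of the spacing (SP) and set-coverage C(A, B) metrics.
import math

def spacing(A, k):
    """Standard deviation of the nearest-neighbour distances in A."""
    # d_s: Manhattan distance over the k objectives to the closest other solution
    d = [min(sum(abs(s[m] - t[m]) for m in range(k)) for t in A if t is not s)
         for s in A]
    mean = sum(d) / len(d)
    return math.sqrt(sum((mean - ds) ** 2 for ds in d) / (len(d) - 1))

def coverage(A, B):
    """Fraction of B weakly dominated by some point of A (maximisation)."""
    weakly_dom = lambda a, b: all(x >= y for x, y in zip(a, b))
    return sum(any(weakly_dom(a, b) for a in A) for b in B) / len(B)

A = [(0, 4), (2, 2), (4, 0)]          # evenly spaced front: SP = 0
B = [(0, 3), (3, 3)]
print(spacing(A, 2), coverage(A, B))  # → 0.0 0.5
```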
Note that C(A, B) and C(B, A) are not necessarily equal; hence, both must be calculated in order to compare the approximations A and B.

5.4. Experimental results and comments. In the rest of this section we present the experimental results obtained for the following algorithms: PCPMOEA, NSGA-II, SPEA2,
MOEA/D, NSGA-III and MOFPA. The forthcoming study is presented in three parts: an assessment of the suggested algorithm in comparison with a selection of multiobjective metaheuristics according to its effectiveness (solution quality); a comparison according to its efficiency; and a sensitivity analysis of the suggested algorithm against the variation of the partitioning parameter α.
5.4.1. Quality of the approximated fronts. In this experimentation we used fifteen sets of potentially efficient solutions produced by each of the above-mentioned algorithms for each instance (15 runs per instance, per algorithm). The aim of these experimentations is to compare the quality of the solutions produced by the suggested algorithm with that of a set of efficient algorithms that use different selection mechanisms and search strategies. The parameter settings used for each algorithm are chosen according to the papers from which they originated [6, 7, 41, 5], and for the PCPMOEA the parameters are fixed as follows:
Algorithm   Parameters
NSGA-II     Population size N = 100, crossover probability: 0.9, mutation probability: 1/n.
SPEA2       Population size N = 100, crossover probability: 0.8, mutation probability: 0.1.
MOEA/D      Population size N = 100, subproblems: 10, number of neighbors: 10, mutation rate: 0.5, selection probability: 0.9, distribution index µ = 30.
NSGA-III    Population size: 90 to 150, mutation probability: 1/n, selection probability: 0.9.
MOFPA       Population size = 50, α = 0.2, γ = 1, C1 = (2t³/max_it³) + 2, C2 = 2t³/max_it³.⁴
PCPMOEA     Population size N = 150 and partitioning parameter α = 0.25 for 2 objectives; N = 250 and α = 0.3 for 3 objectives, with different values of the period (100ms to 500ms). Crossover probability: 0.8, mutation probability: 0.1.
TABLE 1. Parameter settings used for each tested algorithm.
To illustrate the results used for comparison, we qualitatively show in Figures 4-8 some of the obtained results, where each figure shows the Pareto fronts obtained for each instance using the above-mentioned algorithms. To maintain the clarity of the three-dimensional illustrations, we have chosen to show only the most competitive Pareto fronts, namely those evolved using PCPMOEA and MOFPA. Table 2 summarizes the obtained results regarding the Hypervolume indicator. The aim of this experimentation is to evaluate and compare the quality of the suggested algorithm with the above-mentioned algorithms. The obtained results show that the PCPMOEA algorithm achieves a good quality of solutions; in particular, when compared to NSGA-II, SPEA2, MOEA/D, and NSGA-III, the PCPMOEA demonstrates a better Hypervolume with a significant difference. However, MOFPA appears to be the most competitive with the PCPMOEA, namely for the instance with three objectives and 750 items (3.750).
⁴ α and γ are the Firefly Algorithm parameters; C1 and C2 are the PSO parameters.
Instance  Stat     NSGA-II     SPEA2       MOEA/D      NSGA-III    MOFPA       PCPMOEA
2.250     Average  8.76512     9.18159     9.83537     9.60001     9.85685     9.86569
          Median   8.77438     9.18573     9.83568     9.67123     9.85572     9.86649
          std      2.716E-2    6.837E-2    6.258E-3    4.01E-2     2.808E-3    6.579E-3
          Best     8.81628     9.26775     9.84387     9.76521     9.86246     9.87017
          Worst    8.72353     9.04351     9.82125     7.65103     9.85441     9.84700
2.500     Average  34.22430    36.65644    40.52875    39.54558    40.71232    40.74421
          Median   34.24338    36.67221    40.53538    39.58278    40.70868    40.76286
          std      0.1296      0.25230     0.003981    0.28692     0.00230     0.00358
          Best     34.40142    36.95143    40.59179    39.92388    40.75594    40.78454
          Worst    34.24338    36.19984    40.53538    38.69412    40.66021    40.69426
2.750     Average  71.42760    77.37992    88.49387    84.94718    88.70273    89.20554
          Median   71.41220    77.33868    88.53043    85.25770    88.70192    89.22105
          std      0.503297    0.403669    9.422E-2    9.0384E-2   4.897E-2    8.0114E-2
          Best     72.95212    78.03857    88.61037    86.04454    88.81874    89.27410
          Worst    70.91351    77.33868    88.32110    82.97746    88.63514    88.99637
3.250     Average  72322.4     77614.4     92409.8     80033.9     93009.9     93119.6
          Median   72311.1     77443.4     92434.8     79960.8     93016.2     93126.2
          std      435.1785    709.9357    80.0224     82.4145     39.3793     37.8173
          Best     73187.1     79520.4     92534.5     82414.5     93071.6     93182.2
          Worst    71477.3     76865.1     92434.8     78469.6     92956.6     93054.2
3.500     Average  547738.3    605712.2    758240.9    665189.7    758312.6    766498.7
          Median   546821.8    605755.9    758202.0    674343.7    758331.7    766488.1
          std      3608.78     3340.01     670.63      2535.38     870.71      1014.90
          Best     555481.1    612315.4    759647.3    699826.4    759959.4    768077.1
          Worst    542001.8    600666.8    756955.4    623399.7    756555.1    764012.6
3.750     Average  2195528.6   2207060.2   2649034.1   2314272.2   2703273.4   2703522.7
          Median   2192486.3   2206634.2   2648629.3   23.224464   2703114.3   2703648.3
          std      13524.87    7704.53     2214.50     806.41      732.79      580.84
          Best     2217966.1   2225107.6   2653975.2   2403030.3   2704278.8   2704231.5
          Worst    2171692.9   2191793.4   264862.93   2210880.8   2702333.1   2702581.8
TABLE 2. Experimental results concerning the Hypervolume indicator of the MOMKP instances (the unit of values is 10^7).
In order to assess the convergence of the obtained Pareto fronts towards the true Pareto front, we computed the IGD metric values for all fifteen runs of each algorithm. The obtained results are summarized in Table 3, from which it is clear that the average (and median) IGD values obtained for each instance using PCPMOEA are better than those obtained by the other five algorithms. NSGA-III and MOFPA are shown to be the most competitive for the instances with two and three objective functions, respectively. Accordingly, the PCPMOEA
has converged better towards the true Pareto front than the other tested algorithms. These results can also be confirmed qualitatively by observing Figures 4-8.

Instance  Stat     NSGA-II     SPEA2       MOEA/D      NSGA-III    MOFPA       PCPMOEA
2.250     Average  76.09615    15.64012    4.00665     0.74219     1.07286     0.29718
          Median   72.33389    15.63951    3.94590     0.75323     1.05223     0.29637
          std      9.88158     2.25782     0.39901     0.16485     0.10124     0.12132
          Best     61.35042    12.16928    3.13348     0.42731     0.89778     0.12869
          Worst    95.26060    20.76262    4.86990     0.94188     1.26336     0.57562
2.500     Average  355.87107   81.02844    14.67452    2.73726     2.18277     0.69520
          Median   359.74476   80.88057    14.67389    2.31321     2.17060     0.58288
          std      43.97135    12.46397    1.01867     1.12115     0.23388     0.29223
          Best     293.13791   65.77132    12.72309    1.78177     1.84681     0.37414
          Worst    418.81440   111.14670   16.61654    6.16143     2.64536     1.35093
2.750     Average  616.90264   81.028445   32.19721    9.50524     10.54516    3.53590
          Median   622.15216   80.88057    32.51318    8.29712     10.94526    3.33629
          std      55.24098    12.46397    1.48806     3.90665     0.62207     0.90879
          Best     486.57075   65.77132    29.99165    6.57967     9.70699     2.37712
          Worst    722.35960   111.14670   34.86158    21.38223    11.02817    6.22080
3.250     Average  45.52375    16.23703    6.4468998   26.4132550  2.3051193   1.3234943
          Median   45.82247    16.33518    6.4847662   26.4953481  2.3301823   1.2882777
          std      3.11597     1.36025     0.2016878   1.9970100   0.0500886   0.1688584
          Best     39.93595    13.60152    6.0662901   22.6421173  2.2376310   1.0267159
          Worst    51.34482    18.37225    6.7613303   29.1085911  2.3671946   1.6474703
3.500     Average  159.08672   47.10854    14.4351057  51.2543061  4.7504078   3.7412855
          Median   158.70766   47.61122    14.4262958  44.2505703  4.8048472   3.3671623
          std      11.05657    4.19196     0.5505419   15.7447354  0.1500751   1.1078586
          Best     135.34139   37.73514    13.6882246  30.2252162  4.4624758   2.4686905
          Worst    174.41828   54.96809    15.6030902  75.1006594  4.9110414   6.9519051
3.750     Average  59.62771    55.42584    26.7652333  46.43090086 3.9739178   2.6863341
          Median   59.82934    55.06092    26.6458393  41.1368432  3.9824589   2.6076377
          std      3.16625     3.49813     0.5282267   9.3990180   0.0337948   0.2612799
          Best     52.14942    47.98128    25.9533346  40.8729764  3.9126965   2.4197297
          Worst    64.00565    61.57806    27.9284213  57.2828828  4.0265507   3.2129786
TABLE 3. Experimental results concerning the IGD of the MOMKP instances.
Table 4 summarizes the results regarding the spacing metric values, obtained over all fifteen runs of each algorithm. The aim of this experimentation is to compare the distribution uniformity of the produced Pareto fronts. The results show that all of the competing algorithms can produce Pareto fronts with a good spacing. However, the Pareto fronts evolved by the proposed algorithm have proven to be the most uniformly distributed for almost all the tested instances, namely for the instances 2.500, 2.750, 3.250, 3.500, and 3.750. Consequently, we can say that the diversity of the Pareto fronts evolved by PCPMOEA is at least comparable to that of the other algorithms.
Instance  Stat     NSGA-II     SPEA2       MOEA/D      NSGA-III    MOFPA       PCPMOEA
2.250     Average  4.8441E-3   1.6520E-3   1.6788E-3   1.6369E-3   1.8034E-3   1.0830E-3
          Median   4.1695E-3   1.4919E-3   1.7452E-3   1.6756E-3   1.7452E-3   1.0712E-3
          std      2.7102E-3   5.5580E-4   2.9105E-4   4.6281E-4   2.9105E-4   3.9267E-4
          Best     2.0031E-3   1.2372E-3   1.1081E-3   8.8243E-4   9.7163E-4   4.3610E-4
          Worst    1.1410E-2   3.2591E-3   2.0978E-3   2.7188E-3   2.6706E-3   1.7888E-3
2.500     Average  5.3383E-3   1.5020E-3   1.0604E-3   8.9274E-4   9.9709E-4   4.5168E-4
          Median   5.4688E-3   1.4126E-3   9.9696E-4   7.7029E-4   9.6733E-4   4.2077E-4
          std      3.2216E-3   4.3462E-4   1.5039E-4   2.6573E-4   1.5387E-4   1.1816E-4
          Best     1.5329E-3   9.4383E-4   9.1382E-4   5.9806E-4   7.8784E-4   2.5867E-4
          Worst    1.2600E-2   2.6173E-3   1.4491E-3   1.6304E-3   1.4351E-3   6.6605E-4
2.750     Average  3.5060E-3   1.5047E-3   1.0523E-3   8.2189E-4   1.4097E-3   4.4397E-4
          Median   2.8953E-3   1.4157E-3   1.0648E-3   8.1641E-4   1.3104E-3   4.3931E-4
          std      1.4508E-3   4.1546E-4   6.5984E-5   1.9786E-4   3.6208E-4   7.3312E-5
          Best     1.8880E-3   9.6539E-4   8.9261E-4   4.9616E-4   9.0932E-4   3.3732E-4
          Worst    6.6667E-3   2.1664E-3   1.1373E-3   1.2855E-3   1.8850E-3   6.4364E-4
3.250     Average  4.5967E-3   2.5863E-3   2.6694E-3   4.8477E-3   2.1473E-3   1.2423E-3
          Median   4.5748E-3   2.5642E-3   2.6888E-3   4.8211E-3   2.1386E-3   1.2427E-3
          std      8.4339E-4   3.6247E-4   8.1093E-5   4.5662E-4   1.0228E-4   6.9140E-5
          Best     3.5991E-3   1.9196E-3   2.5017E-3   4.2689E-3   1.9909E-3   1.9909E-3
          Worst    6.3322E-3   3.1437E-3   2.8073E-3   5.8693E-3   2.3278E-3   2.3278E-3
3.500     Average  3.8769E-3   1.9653E-3   2.3068E-3   3.6764E-3   2.2222E-3   1.2039E-3
          Median   3.7883E-3   1.9151E-3   2.3036E-3   3.6859E-3   2.2210E-3   1.1587E-3
          std      6.9103E-4   2.4949E-4   3.8202E-5   5.3312E-4   8.0886E-5   1.7005E-4
          Best     2.7061E-3   1.5828E-3   2.2471E-3   3.0052E-3   2.1169E-3   1.0065E-3
          Worst    5.5471E-3   2.3522E-3   2.3762E-3   4.6509E-3   2.4065E-3   1.6452E-3
3.750     Average  1.6451E-3   9.7395E-4   2.2277E-3   2.8563E-3   1.2556E-3   7.1063E-4
          Median   1.6529E-3   9.9614E-4   2.2156E-3   2.7339E-3   1.2700E-3   7.0007E-4
          std      8.5805E-5   1.0503E-4   7.3563E-5   5.5959E-4   7.3563E-5   2.6296E-5
          Best     1.5284E-3   6.9086E-4   2.0839E-3   2.1468E-3   2.0839E-3   6.8251E-4
          Worst    1.7873E-3   1.1137E-3   2.3914E-3   3.8166E-3   2.3914E-3   7.4769E-4
TABLE 4. Experimental results concerning the spacing metric of the MOMKP instances.
Aspiring to achieve a good convergence towards the true Pareto front, the suggested algorithm operates with multiple MOEAs, within each of which a selection operator is designed to take one sole criterion into consideration throughout the search process. This is, as mentioned above, to minimize the distance between the current Pareto front and the ideal point. To evaluate this distance, we defined a modified version of the GD metric, called the ideal distance (ID). Table 5 summarizes the obtained results regarding the ID metric. The obtained mean values clearly show that the Pareto fronts evolved using the proposed algorithm are the nearest to the ideal point.
Instance  Stat     NSGA-II  SPEA2    MOEA/D   NSGA-III  MOFPA   PCPMOEA
2.250     Average  495.30   349.40   213.24   267.59    206.26  137.97
          Median   344.28   344.28   214.67   267.04    206.26  136.11
          std      11.54    11.54    9.18     2.99      4.89    6.79
          Best     333.77   333.77   197.83   264.13    200.19  129.23
          Worst    369.89   369.89   226.26   272.72    217.28  152.99
2.500     Average  795.7    669.99   336.78   434.37    300.33  190.83
          Median   810.5    668.75   338.80   433.59    299.53  184.77
          std      132.0    68.69    8.94     76.01     7.60    17.68
          Best     588.5    567.98   567.98   425.32    288.13  165.24
          Worst    1003.4   842.07   349.65   451.75    316.11  229.60
2.750     Average  1145.6   1071.2   525.94   726.88    464.08  328.33
          Median   1111.7   1063.0   526.68   725.23    465.33  329.61
          std      12.24    10.13    10.61    7.64      11.94   32.88
          Best     908.1    952.5    507.04   719.79    440.59  286.80
          Worst    1325.3   1325.3   542.69   743.77    486.11  419.83
3.250     Average  208.35   135.82   70.75    286.31    71.43   42.20
          Median   213.46   135.53   70.84    286.91    71.27   42.36
          std      14.80    8.54     1.48     4.25      0.63    2.12
          Best     183.87   124.22   68.35    277.79    69.58   38.69
          Worst    231.83   154.58   74.20    292.22    72.3    46.72
3.500     Average  378.24   230.40   110.95   392.35    120.60  86.50
          Median   377.71   229.35   110.77   346.07    121.18  84.11
          std      25.53    8.51     1.62     85.59     2.61    11.06
          Best     335.10   217.55   108.23   289.55    115.95  71.81
          Worst    434.09   245.55   113.60   510.28    124.07  113.87
3.750     Average  413.47   417.46   161.56   408.38    117.38  86.56
          Median   417.85   417.85   161.79   391.15    117.03  86.56
          std      3.16     3.16     3.25     0.56      0.90    4.36
          Best     411.34   411.34   156.38   365.83    116.55  81.70
          Worst    422.11   422.11   166.41   485.40    118.88  94.73
TABLE 5. Experimental results concerning the ideal distance of the MOMKP instances.
Table 6 summarizes the obtained results regarding the diversity metric. This experiment aims to compare the diversity of the solutions obtained using the suggested algorithm with that of the five competing algorithms. As shown in Table 6, the solutions obtained using the suggested algorithm present a good diversity when compared to the other algorithms, especially for the instances with two objective functions and the rather small instances with three objectives, for which it has proven superior. However, the suggested algorithm has also shown some drawbacks for large instances with three criteria. These results can also be observed in the qualitative comparison (see Figures 4-8).
Instance  Stat     NSGA-II   SPEA2     MOEA/D    NSGA-III  MOFPA     PCPMOEA
2.250     Average  0.431020  0.422543  0.733156  0.668941  0.787706  0.827007
          Median   0.436471  0.432509  0.741896  0.657210  0.785704  0.824704
          std      0.053189  0.041466  0.025304  0.039278  0.019302  0.023959
          Worst    0.325714  0.346995  0.685963  0.609775  0.756875  0.789647
          Best     0.497407  0.472651  0.685963  0.737771  0.838331  0.871025
2.500     Average  0.448914  0.329398  0.713179  0.541829  0.702784  0.745999
          Median   0.445000  0.333571  0.708588  0.546321  0.699252  0.749443
          std      0.068231  0.029096  0.022155  0.050225  0.016381  0.021698
          Worst    0.333750  0.260625  0.684622  0.423398  0.665456  0.692070
          Best     0.559000  0.383714  0.754859  0.602364  0.724838  0.772977
2.750     Average  0.335777  0.313474  0.766237  0.570511  0.670938  0.808304
          Median   0.333846  0.321073  0.759391  0.586556  0.668632  0.807167
          std      0.041969  0.040611  0.013905  0.064971  0.022447  0.024498
          Worst    0.271250  0.250170  0.750287  0.455256  0.626113  0.779512
          Best     0.422667  0.381235  0.799135  0.645027  0.705843  0.869450
3.250     Average  0.377472  0.445919  0.733179  0.451124  0.742303  0.806551
          Median   0.377602  0.452759  0.732999  0.460905  0.739854  0.831004
          std      0.014421  0.015039  0.010327  0.027410  0.007297  0.034034
          Worst    0.356508  0.416207  0.714890  0.389613  0.734002  0.753976
          Best     0.409063  0.463239  0.758089  0.480436  0.755723  0.844520
3.500     Average  0.296290  0.361604  0.733902  0.469194  0.723575  0.685794
          Median   0.296640  0.359516  0.733379  0.497382  0.722961  0.682033
          std      0.016839  0.014680  0.008795  0.060448  0.007045  0.014016
          Worst    0.259264  0.343978  0.721893  0.356402  0.710117  0.669736
          Best     0.325672  0.390555  0.750445  0.537289  0.738623  0.715957
3.750     Average  0.367596  0.377044  0.712588  0.520077  0.711044  0.672295
          Median   0.364609  0.377480  0.711547  0.525631  0.708614  0.674658
          std      0.020697  0.018457  0.012164  0.023284  0.006755  0.019521
          Worst    0.330063  0.350830  0.689704  0.487117  0.697864  0.641762
          Best     0.400636  0.420949  0.733370  0.541932  0.721628  0.694853
TABLE 6. Experimental results concerning the diversity metric of the MOMKP instances.
To confirm and support the accuracy of the obtained results regarding the comparison of the convergence and diversity of the Pareto fronts, we computed the coverage of each pair of Pareto fronts (PCPMOEA, competing algorithm) produced by all 15 runs of each algorithm. Table 7 shows the obtained mean coverage values for each pair; in each cell, the two values refer to C(PCPMOEA, Algorithm) and C(Algorithm, PCPMOEA), respectively. The results show that PCPMOEA produces Pareto fronts of better quality when compared to NSGA-II, SPEA2, and MOEA/D. MOFPA and NSGA-III are shown to be the most competitive with the suggested algorithm, for the instances 2.750, 3.250, 3.500, 3.750 and 2.250, 2.500, respectively. Nevertheless, the suggested algorithm remained dominant over both algorithms, scoring overall mean coverage values of 52.7% for PCPMOEA against 25.9% for MOFPA, and 58.3% for PCPMOEA against 10.2% for NSGA-III.
Instance  NSGA-II  SPEA2  MOEA/D            NSGA-III         MOFPA
2.250     1 / 0    1 / 0  0.9973 / 0.0003   0.4568 / 0.3058  0.7421 / 0.0819
2.500     1 / 0    1 / 0  0.9997 / 0        0.3369 / 0.3034  0.5859 / 0.2013
2.750     1 / 0    1 / 0  1 / 0             0.5498 / 0.0070  0.3834 / 0.2092
3.250     1 / 0    1 / 0  0.9278 / 2.7E-5   0.7029 / 0       0.4525 / 0.2761
3.500     1 / 0    1 / 0  0.9563 / 0        0.6983 / 0       0.5671 / 0.4028
3.750     1 / 0    1 / 0  1 / 0             0.7579 / 0       0.4332 / 0.3840
Average   1 / 0    1 / 0  0.9801 / 5.45E-5  0.5837 / 0.1027  0.5273 / 0.2592
Each cell gives C(PCPMOEA, Algorithm) / C(Algorithm, PCPMOEA); the last row contains the mean values for each column.
TABLE 7. Coverage metric of PCPMOEA against the other algorithms.
As mentioned earlier, in order to visually compare the quality of the non-dominated solutions obtained using the six algorithms (NSGA-II, SPEA2, MOEA/D, NSGA-III, MOFPA, and PCPMOEA), the Pareto fronts obtained for the six tested instances are shown in Figures 4-8. These observations confirm the test results regarding the convergence and the spread of the Pareto fronts evolved using PCPMOEA, as can be seen for the instances 2.250, 2.500, 2.750, and 3.500. They also confirm that MOFPA is the most competitive with the suggested algorithm. Furthermore, we can observe, especially in Figure 8 (instance 3.750), that the suggested algorithm tends to neglect some subspaces with only two conflicting criteria, which may result in blind spots in the objective space. These observable results corroborate the results obtained for the diversity metric (see Table 6). This is because the chosen selection mechanisms of PCPMOEA tend to handle either one sole objective function (for each parallel MOEA) or all objective functions combined through the dominance relation (for the master's selection operator).
Figure 4. Illustration of the non-dominated solutions obtained for the 2.250 instance (algorithms shown: MOFPA, PCPMOEA, NSGA-III, MOEA/D, NSGA-II, SPEA2).
Figure 5. Illustration of the non-dominated solutions obtained for the 2.500 instance (algorithms shown: MOFPA, PCPMOEA, NSGA-III, MOEA/D, NSGA-II, SPEA2).
Figure 6. Illustration of the non-dominated solutions obtained for the 2.750 instance (algorithms shown: MOFPA, PCPMOEA, NSGA-III, MOEA/D, NSGA-II, SPEA2).
Figure 7. Illustration of the Pareto fronts obtained for the 3.250 and 3.500 instances (MOFPA and PCPMOEA).
Figure 8. Illustration of the Pareto solutions obtained for the 3.750 instance (MOFPA and PCPMOEA).
5.4.2. Computational efficiency. In this experimentation we focus on the efficiency advantages offered by the PCPMOEA algorithm, by comparing its CPU running time against that of the NSGA-III algorithm. To achieve this, we used the computational speedup as a reference metric for efficiency measurement [48]. The speedup is defined as the ratio between the best obtained execution time of the recent sequential algorithm NSGA-III and the execution time of the suggested parallel algorithm PCPMOEA; moreover, the execution time values gathered in this experiment are obtained by fixing a predefined target hypervolume value. For each tested instance, the chosen target value is smaller than the average value obtained for the NSGA-III algorithm (see Table 2). Table 8 summarizes the obtained results regarding the efficiency comparison between the NSGA-III algorithm and PCPMOEA.
Instance   2.250       2.500       2.750       3.250      3.500       3.750
NSGA-III   63.889E+3   162.320E+3  235.281E+3  42.487E+3  122.102E+3  256.460E+3
PCPMOEA    20.406E+3   69.264E+3   91.037E+3   24.171E+3  66.088E+3   154.837E+3
Speedup    3.131       2.343       2.584       1.756      1.847       1.656
TABLE 8. Results of execution time (in ms) for the algorithms PCPMOEA and NSGA-III.
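As a quick check on the figures above, each speedup entry is simply the ratio of the two mean times in its column; e.g. for instance 2.250:

```python
# Speedup = sequential (NSGA-III) time / parallel (PCPMOEA) time,
# using the 2.250 column of Table 8 (times in milliseconds).
t_nsga3, t_pcpmoea = 63.889e3, 20.406e3
speedup = t_nsga3 / t_pcpmoea
print(round(speedup, 3))  # → 3.131
```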
As presented in Table 8, the execution time of the suggested algorithm is clearly lower than that of the NSGA-III algorithm; over all the tested instances, it achieves a minimum computational speedup of 1.65. To emphasize the efficiency of the proposed algorithm compared to NSGA-III, we present in Figure 9 an example of one run of both algorithms, showing the evolution of the quality of the produced solutions with respect to the hypervolume and the IGD metric. However, one can notice in Table 8 that the computational speedup decreases when increasing the instance size and/or the number of objective functions, especially for the instances with two objectives. This can be explained by the fact that the size of the current Pareto front (i.e., the archive size) increases for large instances and, as mentioned above, the PCPMOEA uses a dynamical archive; hence, this may affect its efficiency (i.e., decelerate the suggested algorithm).
Figure 9. Illustrative example of one run of PCPMOEA and NSGA-III according to the hypervolume and the IGD metric (instance 2.500).
5.4.3. Sensitivity analysis of the partitioning parameter α. The following experiment studies the sensitivity of the suggested algorithm to the partitioning parameter α. As mentioned earlier, the Master entity supervises the search entities by periodically updating their current archives. The injected solutions are deliberately chosen among the current elite solutions (i.e., Pareto solutions extracted from all archive solutions) using a partitioning procedure based on the quantile of order α ∈ ]0, 0.5]. We seek to analyze the impact of α on the performance (i.e., convergence and diversity) of the proposed algorithm. To this end, for each instance we recorded the results produced after 10^3 iterations while varying α. Table 9 summarizes the obtained results concerning the hypervolume, IGD, spacing metric, and execution time with respect to the partitioning parameter, together with the size of the Pareto set. As shown in Table 9, the IGD values obtained for α = 0.2 are significantly better than those obtained for other values of α, especially for the instances 2.250, 2.500, and 3.500, with a sole exception for the instance 3.250 where the difference is minor. Moreover, observing the variation of the IGD with α, we notice that the IGD tends to grow with the partitioning parameter; hence, we can say that PCPMOEA converges better when the search entities share a sufficiently large space. Regarding the obtained hypervolume values, we can distinguish two cases according to the number of objective functions (two or three criteria). For the bi-objective instances, the difference between the hypervolume values obtained for α ≤ 0.4 is minor, while the difference with the values obtained for α = 0.5 is rather significant.
For the instances with three criteria, by contrast, the best hypervolume values are obtained for α = 0.5, with only a minor difference from those obtained for α = 0.4, while the number of Pareto solutions is mostly in direct proportion to the parameter α. The values obtained for α = 0.5 can be explained by two facts: (1) the Pareto set is significantly larger than those obtained for smaller values of α, and a larger Pareto set may significantly increase the hypervolume; (2) for α close to or greater than 0.5, the computational effort of each
search entity will be focused on a region where the allocated objective function predominates, which likely induces fast convergence toward the extrema (i.e., the optimum of each objective function considered separately); this yields well-spread solutions and increases the corresponding hypervolume. The execution time is also considered in this experiment: Table 9 shows that it is mostly in inverse proportion to the parameter α. This can be explained by the fact that the size of a search entity's current population is inversely proportional to α, while the computational cost of a search entity (MOEAi) is strongly related to the size of its population. Finally, as shown in Table 9, varying α has no visible impact on the spacing of the produced Pareto solutions. Considering the other metrics, however, the quality of the produced Pareto solutions may differ when varying α (see Figure 10).
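The quantile-based partition described above can be loosely sketched as follows. The helper names and the exact assignment rule are illustrative assumptions; the paper's procedure may differ in detail.

```python
import math

def quantile(values, q):
    """Nearest-rank empirical quantile of order q (0 < q <= 1)."""
    s = sorted(values)
    rank = max(1, math.ceil(q * len(s) - 1e-9))  # small eps guards float error
    return s[rank - 1]

def partition_front(front, alpha):
    """Illustrative sketch of the quantile split: a Pareto solution (a tuple
    of maximized objective values) is assigned to the search entity of an
    objective on which it reaches the (1 - alpha)-quantile of the current
    front; solutions reaching no threshold form the shared region of highly
    conflicting criteria."""
    m = len(front[0])
    thresholds = [quantile([sol[k] for sol in front], 1 - alpha) for k in range(m)]
    parts, shared = {k: [] for k in range(m)}, []
    for sol in front:
        owners = [k for k in range(m) if sol[k] >= thresholds[k]]
        (parts[owners[0]] if owners else shared).append(sol)
    return parts, shared

# A linear bi-objective front: under this rule a larger alpha shrinks the
# shared region, consistent with the shrinking |F| reported in Table 10.
front = [(i, 11 - i) for i in range(1, 11)]
print(len(partition_front(front, 0.2)[1]), len(partition_front(front, 0.3)[1]))  # 4 2
```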
Instance   α     |PS|   H             IGD        SP            Time (s)
2.250      0.2   105    9.80632E+7    6.04851    2.72182E-03   32
2.250      0.3   108    9.83812E+7    7.38744    1.91869E-03   30
2.250      0.4   102    9.81509E+7    6.95425    2.05759E-03   28
2.250      0.5   88     9.75131E+7    14.48878   2.78290E-03   25
2.500      0.2   120    4.03017E+8    12.8172    1.01800E-03   66
2.500      0.3   130    4.02046E+8    16.7881    1.03633E-03   70
2.500      0.4   121    4.04383E+8    17.3475    1.27680E-03   69
2.500      0.5   85     4.01620E+8    30.6735    2.43016E-03   65
3.250      0.2   869    8.80753E+11   4.35477    3.18604E-03   115
3.250      0.3   866    8.93434E+11   4.92615    2.55782E-03   97
3.250      0.4   970    9.00557E+11   4.82399    2.71375E-03   93
3.250      0.5   1019   9.07789E+11   4.99096    2.48203E-03   87
3.500      0.2   773    6.79145E+12   10.8491    2.03975E-03   201
3.500      0.3   765    6.94450E+12   11.7951    2.25788E-03   185
3.500      0.4   821    7.06583E+12   11.7075    2.07973E-03   171
3.500      0.5   958    7.22077E+12   12.3809    2.01348E-03   164

Values in bold indicate the best obtained results for the corresponding instance.
TABLE 9. Experimental results concerning the variations of the partitioning parameter α.
As described above, the partitioning parameter α determines the size of the shared space according to the distribution of the current Pareto solutions. Since the solutions enclosed in this area are characterized by highly conflicting criteria (i.e., no criterion is at its best), the following experiment focuses on the impact of varying α on the convergence within the shared region. Hence, we measure the distance between F, the obtained solutions enclosed in the shared space, and Ft, the corresponding solutions in the true Pareto front (for each distribution, we extract from the obtained solutions and from the true Pareto front the sets F and Ft enclosed in the shared space determined by α). For this, we use the generational distance (GD), since we only want to know how far the obtained solutions are from the chosen subset of the true Pareto front.
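A common formulation of the generational distance averages, over the obtained set F, each solution's Euclidean distance to its nearest reference point in Ft (some variants use a root-mean-square instead of a plain mean):

```python
import math

def generational_distance(F, Ft):
    """Mean Euclidean distance from each obtained solution in F to its
    nearest solution in the reference subset Ft of the true Pareto front."""
    return sum(min(math.dist(a, b) for b in Ft) for a in F) / len(F)
```

Unlike IGD, which averages over the reference set, GD only penalizes obtained solutions that lie far from the reference subset, which is exactly the convergence question asked here.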
Table 10 summarizes the obtained results regarding the impact of varying the parameter α on the convergence within the space shared between the search entities. We can observe that the distance (i.e., the GD value) between F and Ft grows as α increases. Hence, we can say that decreasing the size of the shared space impairs the convergence of the solutions produced within it. Figure 10 shows an illustrative example of the impact of the partitioning parameter α on the resulting Pareto fronts.
             2.250                   2.500
α       |F|   |Ft|   GD          |F|   |Ft|   GD
0.2     56    284    2.62503     89    714    3.99704
0.3     44    226    4.89355     51    570    5.07591
0.4     10    114    13.02007    9     286    12.46097

Values in bold indicate the best results for the corresponding instance.
TABLE 10. Experimental results concerning the impact of the parameter α variations on the convergence in the shared space.
Figure 10. Illustrative example (instance 2.500) of the impact of the partitioning parameter α variation on the suggested algorithm behavior.
As a general conclusion of this analysis of the impact of α on the behavior of the PCPMOEA algorithm, the choice of this parameter may affect the convergence, spread, and execution time of the proposed algorithm. Moreover, no single value of α generally satisfies all of these quality criteria; the results stated above, however, describe the impact of a given choice of α on solution quality.
6. Conclusion
In this paper we presented a parallel multiobjective evolutionary algorithm applied to the multidimensional multiobjective Knapsack Problem (MOMKP), called Parallel Criterion-based Partitioning multiobjective evolutionary algorithm (PCPMOEA). The suggested algorithm is
designed in a master/worker paradigm, handling multiple MOEAs with a criterion-based selection operator. An experimental study was carried out in which we compared the suggested algorithm, using five metrics, against four algorithms that are well established in the literature: NSGA-II, SPEA2, MOEA/D, and MOFPA. The tests were performed on six instances of the well-known multiobjective multidimensional Knapsack Problem, with two and three objective functions and between 250 and 750 items. The suggested algorithm has shown conclusive results regarding the standard goals of heuristic multiobjective optimization (convergence and diversity), which encourages further applications of the suggested algorithm. However, the proposed algorithm has also shown some drawbacks for large instances with three objectives, for which a brief explanation was given; seeking possible improvements opens several directions to investigate as short-term future work.

References

[1] TALBI, El-Ghazali. A unified view of parallel multi-objective evolutionary algorithms. Journal of Parallel and Distributed Computing, 2018.
[2] COELLO, Carlos A. Coello, LAMONT, Gary B., VAN VELDHUIZEN, David A., et al. Evolutionary algorithms for solving multi-objective problems. New York: Springer, 2007.
[3] LUST, Thibaut and TEGHEM, Jacques. The multiobjective multidimensional knapsack problem: a survey and a new approach. International Transactions in Operational Research, 2012, vol. 19, no 4, p. 495-520.
[4] TALBI, El-Ghazali. Metaheuristics: from design to implementation. John Wiley & Sons, 2009.
[5] ZOUACHE, Djaafar, MOUSSAOUI, Abdelouahab, and ABDELAZIZ, Fouad Ben. A cooperative swarm intelligence algorithm for multi-objective discrete optimization with application to the knapsack problem. European Journal of Operational Research, 2018, vol. 264, no 1, p. 74-88.
[6] DEB, Kalyanmoy, PRATAP, Amrit, AGARWAL, Sameer, et al.
A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 2002, vol. 6, no 2, p. 182-197.
[7] ZITZLER, Eckart, LAUMANNS, Marco, and THIELE, Lothar. SPEA2: Improving the strength Pareto evolutionary algorithm. TIK-report, 2001, vol. 103.
[8] SCHAFFER, J. David. Multiple objective optimization with vector evaluated genetic algorithms. In: Proceedings of the First International Conference on Genetic Algorithms and Their Applications, 1985. Lawrence Erlbaum Associates, Inc., Publishers, 1985.
[9] GANDIBLEUX, Xavier. MOCOlib: Test Problems of the MCDMlib for MultiObjective Combinatorial Optimization. [online]. 2015. [Accessed 22 August 2018]. Available at: http://xgandibleux.free.fr/MOCOlib/.
[10] AHSANULLAH, Mohammad, NEVZOROV, Valery B., and SHAKIL, Mohammad. An introduction to order statistics. Paris: Atlantis Press, 2013.
[11] BRANKE, Jürgen, DEB, Kalyanmoy, et al. (ed.). Multiobjective optimization: Interactive and evolutionary approaches. Springer Science & Business Media, 2008.
[12] BRANKE, Jürgen, SCHMECK, Hartmut, DEB, Kalyanmoy, et al. Parallelizing multi-objective evolutionary algorithms: cone separation. In: IEEE Congress on Evolutionary Computation, 2004, p. 1952-1957.
[13] ALBA, Enrique (ed.). Parallel evolutionary computations. Springer, 2006.
[14] VAN VELDHUIZEN, David A., ZYDALLIS, Jesse B., and LAMONT, Gary B. Considerations in engineering parallel multiobjective evolutionary algorithms. IEEE Transactions on Evolutionary Computation, 2003, vol. 7, no 2, p. 144-173.
[15] KARP, Richard M. Reducibility among combinatorial problems. In: Complexity of Computer Computations. Springer, Boston, MA, 1972, p. 85-103.
[16] ZITZLER, Eckart and THIELE, Lothar. Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Transactions on Evolutionary Computation, 1999, vol. 3, no 4, p. 257-271.
[17] KNOWLES, Joshua D. and CORNE, David W.
M-PAES: A memetic algorithm for multiobjective optimization. In: Proceedings of the 2000 Congress on Evolutionary Computation. IEEE, 2000, p. 325-332.
[18] RAIDL, Günther R. An improved genetic algorithm for the multiconstrained 0-1 knapsack problem. In: Proceedings of the 1998 IEEE International Conference on Evolutionary Computation (IEEE World Congress on Computational Intelligence). IEEE, 1998, p. 207-211.
[19] KUMAR, Rajeev, SINGH, P. K., SINGHAL, A. P., et al. Evolutionary and Heuristic Algorithms for Multiobjective 0-1 Knapsack Problem. In: Applications of Soft Computing. Springer, Berlin, Heidelberg, 2006, p. 331-340.
[20] KUMAR, Rajeev and BANERJEE, Nilanjan. Analysis of a multiobjective evolutionary algorithm on the 0-1 knapsack problem. Theoretical Computer Science, 2006, vol. 358, no 1, p. 104-120.
[21] BEN MANSOUR, Imen and ALAYA, Ines. Indicator based ant colony optimization for multi-objective knapsack problem. Procedia Computer Science, 2015, vol. 60, p. 448-457.
[22] BEN MANSOUR, Imen, BASSEUR, Matthieu, and SAUBION, Frédéric. A multi-population algorithm for multi-objective knapsack problem. Applied Soft Computing, 2018, vol. 70, p. 814-825.
[23] CHABANE, Brahim, BASSEUR, Matthieu, and HAO, Jin-Kao. R2-IBMOLS applied to a practical case of the multiobjective knapsack problem. Expert Systems with Applications, 2017, vol. 71, p. 457-468.
[24] CHEN, Min-Rong, WENG, Jian, and LI, Xia. A novel multiobjective optimization algorithm for 0/1 multiobjective knapsack problems. In: 2010 5th IEEE Conference on Industrial Electronics and Applications (ICIEA). IEEE, 2010, p. 1511-1516.
[25] GANDIBLEUX, Xavier and FREVILLE, Arnaud. Tabu search based procedure for solving the 0-1 multiobjective knapsack problem: The two objectives case. Journal of Heuristics, 2000, vol. 6, no 3, p. 361-383.
[26] ZITZLER, Eckart and KÜNZLI, Simon. Indicator-based selection in multiobjective search. In: International Conference on Parallel Problem Solving from Nature. Springer, Berlin, Heidelberg, 2004, p. 832-842.
[27] SHINDE, G. N., JAGTAP, Sudhir B., and PANI, Subhendu Kumar.
Parallelizing multi-objective evolutionary genetic algorithms. In: Proceedings of the World Congress on Engineering, 2011.
[28] JIN, Huidong and WONG, Man-Leung. Adaptive, convergent, and diversified archiving strategy for multiobjective evolutionary algorithms. Expert Systems with Applications, 2010, vol. 37, no 12, p. 8462-8470.
[29] ZITZLER, Eckart. Evolutionary algorithms for multiobjective optimization: Methods and applications. Ithaca: Shaker, 1999.
[30] COELLO, C. A. Coello. Evolutionary multi-objective optimization: a historical view of the field. IEEE Computational Intelligence Magazine, 2006, vol. 1, no 1, p. 28-36.
[31] DA FONSECA NETO, João V. and BOTTURA, Celso P. Parallel genetic algorithm fitness function team for eigenstructure assignment via LQR designs. In: Proceedings of the 1999 Congress on Evolutionary Computation (CEC 99). IEEE, 1999, p. 1035-1042.
[32] KNOWLES, Joshua D. and CORNE, David W. Approximating the nondominated front using the Pareto archived evolution strategy. Evolutionary Computation, 2000, vol. 8, no 2, p. 149-172.
[33] VAN VELDHUIZEN, David A. and LAMONT, Gary B. Multiobjective optimization with messy genetic algorithms. In: Proceedings of the 2000 ACM Symposium on Applied Computing, Volume 1. ACM, 2000, p. 470-476.
[34] ZYDALLIS, Jesse B., VAN VELDHUIZEN, David A., and LAMONT, Gary B. A statistical comparison of multiobjective evolutionary algorithms including the MOMGA-II. In: International Conference on Evolutionary Multi-Criterion Optimization. Springer, Berlin, Heidelberg, 2001, p. 226-240.
[35] SZMIT, Ricardo and BARAK, Amnon. Evolution strategies for a parallel multi-objective genetic algorithm. In: Proceedings of the 2nd Annual Conference on Genetic and Evolutionary Computation. Morgan Kaufmann Publishers Inc., 2000, p. 227-234.
[36] DE TORO, Francisco, ORTEGA, Julio, FERNÁNDEZ, Javier, et al. PSFGA: a parallel genetic algorithm for multiobjective optimization.
In: 10th Euromicro Workshop on Parallel, Distributed and Network-based Processing. IEEE, 2002, p. 384-391.
[37] KACPRZYK, Janusz and PEDRYCZ, Witold (ed.). Springer Handbook of Computational Intelligence. Springer, 2015.
[38] ALBA, Enrique. Parallel metaheuristics: a new class of algorithms. John Wiley & Sons, 2005.
[39] VAN VELDHUIZEN, David A. and LAMONT, Gary B. Multiobjective Evolutionary Algorithm Research: A History and Analysis. Department of Electrical and Computer Engineering, Air Force Institute of Technology, OH, Technical Report TR-98-03, 1998.
[40] SCHOTT, Jason R. Fault Tolerant Design Using Single and Multicriteria Genetic Algorithm Optimization. Air Force Institute of Technology, Wright-Patterson AFB, OH, 1995.
[41] ZHANG, Qingfu and LI, Hui. MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Transactions on Evolutionary Computation, 2007, vol. 11, no 6, p. 712-731.
[42] VAN VELDHUIZEN, David A. and LAMONT, Gary B. On measuring multiobjective evolutionary algorithm performance. In: Proceedings of the 2000 Congress on Evolutionary Computation. IEEE, 2000, p. 204-211.
[43] ZITZLER, Eckart and THIELE, Lothar. Multiobjective optimization using evolutionary algorithms: a comparative case study. In: International Conference on Parallel Problem Solving from Nature. Springer, Berlin, Heidelberg, 1998, p. 292-301.
[44] DEB, Kalyanmoy, THIELE, Lothar, LAUMANNS, Marco, et al. Scalable test problems for evolutionary multiobjective optimization. In: Evolutionary Multiobjective Optimization. Springer, London, 2005, p. 105-145.
[45] DEB, Kalyanmoy and JAIN, Sachin. Running performance metrics for evolutionary multi-objective optimization. 2002.
[46] LI, Miqing and YAO, Xin. Quality Evaluation of Solution Sets in Multiobjective Optimisation: A Survey. ACM Computing Surveys, 2018, vol. 1, no 1.
[47] DEB, Kalyanmoy and JAIN, Himanshu. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints. IEEE Transactions on Evolutionary Computation, 2014, vol. 18, no 4, p. 577-601.
[48] ALBA, Enrique and LUQUE, Gabriel. Evaluation of parallel metaheuristics. Lecture Notes in Computer Science, 2006, vol. 4193, p. 9-14.
[49] CHEN, Huangke, ZHU, Xiaomin, PEDRYCZ, Witold, et al.
PEA: Parallel Evolutionary Algorithm by Separating Convergence and Diversity for Large-Scale Multi-Objective Optimization. In: 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS). IEEE, 2018, p. 223-232.
[50] DURILLO, Juan J., ZHANG, Qingfu, NEBRO, Antonio J., et al. Distribution of computational effort in parallel MOEA/D. In: International Conference on Learning and Intelligent Optimization. Springer, Berlin, Heidelberg, 2011, p. 488-502.
[51] NEBRO, Antonio J., DURILLO, Juan José, GARCIA-NIETO, Jose, et al. SMPSO: A new PSO-based metaheuristic for multi-objective optimization. In: 2009 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making (MCDM). IEEE, 2009, p. 66-73.
[52] ATASHPENDAR, Arash, DORRONSORO, Bernabé, DANOY, Grégoire, et al. A scalable parallel cooperative coevolutionary PSO algorithm for multi-objective optimization. Journal of Parallel and Distributed Computing, 2018, vol. 112, p. 111-125.
[53] NIU, Wen-jing, FENG, Zhong-kai, CHENG, Chun-tian, et al. A parallel multi-objective particle swarm optimization for cascade hydropower reservoir operation in southwest China. Applied Soft Computing, 2018, vol. 70, p. 562-575.
[54] YU, Wei-Jie, LI, Jin-Zhou, CHEN, Wei-Neng, et al. A parallel double-level multiobjective evolutionary algorithm for robust optimization. Applied Soft Computing, 2017, vol. 59, p. 258-275.
[55] LALWANI, Soniya and SHARMA, Harish. Multi-objective three level parallel PSO algorithm for structural alignment of complex RNA sequences. Evolutionary Intelligence, 2019, p. 1-9.