Landscape-assisted multi-operator differential evolution for solving constrained optimization problems





Journal Pre-proof

Landscape-Assisted Multi-Operator Differential Evolution for Solving Constrained Optimization Problems

Karam M. Sallam, Saber M. Elsayed, Ruhul A. Sarker, Daryl L. Essam

PII: S0957-4174(19)30750-X
DOI: https://doi.org/10.1016/j.eswa.2019.113033
Reference: ESWA 113033

To appear in: Expert Systems With Applications

Received date: 3 July 2019
Revised date: 27 September 2019
Accepted date: 15 October 2019

Please cite this article as: Karam M. Sallam , Saber M. Elsayed, Ruhul A. Sarker, Daryl L. Essam, Landscape-Assisted Multi-Operator Differential Evolution for Solving Constrained Optimization Problems, Expert Systems With Applications (2019), doi: https://doi.org/10.1016/j.eswa.2019.113033

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain. © 2019 Published by Elsevier Ltd.

Highlights

• A multi-operator differential evolution algorithm is proposed.
• A landscape-based adaptive operator selection mechanism is developed.
• Three constrained benchmark optimization problem sets were solved.
• Ten real-world constrained optimization problems were solved.
• Experiments showed that the proposed method outperforms state-of-the-art algorithms.


Landscape-Assisted Multi-Operator Differential Evolution for Solving Constrained Optimization Problems

Karam M. Sallam∗, Saber M. Elsayed, Ruhul A. Sarker, Daryl L. Essam

University of New South Wales, Canberra, Australia

Abstract

Over time, many differential evolution (DE) algorithms have been proposed for solving constrained optimization problems (COPs). However, no single DE algorithm has been found to be the best for all types of COPs. Researchers have tried to mitigate this shortcoming by running multiple DE algorithms under a single algorithm structure, while putting more emphasis on the best-performing one, but the use of landscape information in such designs has not yet been fully explored. Therefore, in this research, a multi-operator DE algorithm is developed, which uses a landscape-based indicator to choose the best-performing DE operator throughout the evolutionary process. The performance of the proposed algorithm was tested by solving a set of constrained optimization problems: 22 from CEC2006, 36 test problems from CEC2010 (18 with 10D and 18 with 30D), 10 real-application constrained problems from CEC2011 and 84 test problems from CEC2017 (28 with 10D, 28 with 30D and 28 with 50D). Several experiments were designed and carried out to analyze the effects of different components on the proposed algorithm's performance, and the results from the final variant were compared with variants of the same algorithm that use different selection criteria. Subsequently, the best variant found after analyzing the algorithm's components was compared to several state-of-the-art algorithms, with the results showing the capability of the proposed algorithm to attain high-quality results.

Keywords: evolutionary algorithms, differential evolution, landscape analysis, adaptive operator selection, constrained optimization

1. Introduction

Solving constrained optimization problems (COPs) successfully has had a significant impact in many scientific areas, such as computer science and operations research.
Optimizing a constrained problem is more challenging than its unconstrained counterpart, due to the additional constraints that must be satisfied (Hamza et al., 2016). Such constraints may have difficult characteristics, i.e., the feasible region may be tiny and/or be composed of a set of disjointed ones. Generally, a COP can be described as:

minimize f(x)

subject to: g_k(x) ≤ 0, k = 1, 2, ..., s
            h_e(x) = 0, e = 1, 2, ..., q     (1)

L_j ≤ x_j ≤ U_j, j = 1, 2, ..., D     (2)

∗Corresponding author. Email addresses: [email protected] (Karam M. Sallam), [email protected] (Saber M. Elsayed), [email protected] (Ruhul A. Sarker), [email protected] (Daryl L. Essam). Preprint submitted to Elsevier.

where s is the number of inequality constraints, g_k(x), q is the number of equality constraints, h_e(x), and each variable, x_j, has a lower and upper bound, L_j and U_j, respectively. The target of a COP is to determine the values of all variables, x_1, x_2, ..., x_D, that minimize (or maximize) the objective function, f(x), while satisfying all the constraints, including the boundary ones. COPs have different characteristics and mathematical properties: their objective functions and constraints may be uni-modal or multi-modal, continuous or discontinuous, linear or nonlinear, and their variables can be discrete or real. Also, the feasible region of a COP can be either a small or large portion of the search space and either one bounded region, a set of disjointed ones, or, in some practical problems, even unbounded (Sallam et al., 2017d; Elsayed et al., 2011b). These different characteristics make the process of locating the optimal solution challenging.

Computational intelligence (CI) based methods, such as evolutionary algorithms (EAs), are widely used and have been successfully applied to COPs, since they have some essential advantages over traditional mathematical programming methods (K. Deb, 2012): they are resilient to dynamic changes, have the capability to self-organize, do not require particular mathematical characteristics to be satisfied and can evaluate several solutions in parallel (Fogel et al., 1966; Rutkowski, 2008). However, there is no guarantee that they will obtain optimum solutions, and the quality of their solutions relies on the particular algorithm's design, the selection of its operators and its parameter settings. Among current CI techniques, EAs such as differential evolution (DE) (Storn and Price, 1997), evolutionary programming (EP) (Attaviriyanupap et al., 2002), evolution strategy (ES) (Xia and Elaiw, 2010) and genetic algorithms (GAs) (Golberg, 1989) are population-based approaches that utilize some sort of selection, mutation and crossover operators to produce new solutions and to guide them during the search to attain an optimal solution. Of them, DE has been extensively implemented in several fields, has gained popularity for solving problems in continuous domains and has proven its superiority over other well-known algorithms for solving complex optimization problems with different properties (Neri and Tirronen, 2010; Das et al., 2016).

A function landscape is represented by a surface in a search space that reflects the fitness function value of each solution. Fitness landscape analysis (FLA) has become a popular method for analyzing the characteristics of optimization problems, such as their ruggedness, complexity, modality, presence of funnels, neutrality and variable separability.
A fitness landscape comprises a population of individuals (a set of solutions), a fitness value (also known as an objective function value) for each individual, and a neighborhood operator that can be expressed as a distance measure (Malan and Engelbrecht, 2013; Sallam et al., 2017a). Moreover, such analysis is often carried out in an offline mode, i.e., the required processes are conducted independently from the evolutionary process (Malan and Engelbrecht, 2013). It is also computationally expensive (Muñoz et al., 2015), and limited work has been carried out on using it to solve optimization problems. Although the use of function and search-space-specific information may help in selecting the proper evolutionary operator for a particular problem, it has not been fully explored in the design of such optimization algorithms.

Although EAs have been widely used for solving COPs (Elsayed et al., 2016b; Sallam et al., 2016, 2017b,c, 2018b; Vrugt et al., 2009), no single algorithm has been consistently the best for all types of COPs. To minimize this shortcoming, researchers have developed several methodologies that use multiple algorithms and/or search operators (Skakovski and Jedrzejowicz, 2019) in a single framework. Such approaches use adaptive algorithm/operator selection methods to put more emphasis on the most-suitable algorithm and/or operator during the optimization process. However, to our knowledge, utilizing information from both the objective function and constraint landscapes to design efficient algorithms for solving COPs has not previously been proposed.

Therefore, in this paper, we introduce a new DE framework that uses a modified Information Landscape Negative Searchability (ILNS) approach to analyze the objective function and constraint landscapes, so that more emphasis is given to the most appropriate DE operator during the evolutionary process. The proposed algorithm is named MODE-ILNS. The performance of MODE-ILNS was assessed by solving four sets of COP benchmark problems. These problems were introduced in the CEC2006 (Suganthan et al., 2005) and CEC2010 (Mallipeddi and Suganthan, 2010b) special competition sessions on constrained optimization problems, the CEC2011 special competition on real-world problems (Das and Suganthan, 2010) and the CEC2017 (Wu et al., 2017) special competition sessions on constrained optimization problems. Several experiments have been designed and carried out, which showed that the proposed algorithm is superior to well-known algorithms. Overall, the proposed framework was able to achieve 69.0% and 33.0% savings in computational time and fitness evaluations, respectively.

It is worth mentioning here that this work builds on our earlier work presented in (Sallam et al., 2018a).
However, it has significant differences: 1) the selection of the best-performing DE mutation operator is different from that used in (Sallam et al., 2018a); 2) the designs of the two algorithms are different; 3) the proposed constraint-handling methods in the two papers are different; 4) the proposed algorithm in this paper is justified by considering four sets of constrained test problems; and 5) all the algorithm's components and parameters have been analyzed.

This paper is organized as follows. Section 2 reviews recent advances in DE algorithms and fitness landscape measures. Then, the proposed algorithm is discussed in Section 3, followed by the experimental results and analysis in Section 4. Finally, conclusions are given in Section 5.
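For concreteness, here is a minimal Python sketch of how the total constraint violation ψ(x) referenced throughout the paper could be computed for the formulation in Equations (1)-(2). The summed form and the ε-relaxation of equality constraints are assumptions (common practice in the CEC constrained suites), not definitions taken from this paper, and all names are hypothetical:

```python
def total_violation(x, ineq=(), eq=(), eps=1e-4):
    """Total degree of constraint violation psi(x): zero iff x is feasible.

    Inequalities g_k(x) <= 0 contribute max(0, g_k(x)); equalities
    h_e(x) = 0 are relaxed to |h_e(x)| - eps <= 0 (an assumption, as is
    the unweighted sum).
    """
    v = sum(max(0.0, g(x)) for g in ineq)            # g_k(x) <= 0
    v += sum(max(0.0, abs(h(x)) - eps) for h in eq)  # h_e(x) = 0, relaxed
    return v
```

A feasible point then yields ψ(x) = 0, and an infeasible one a positive value that grows with the violation.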

2. Background and Literature review

2.1. Differential Evolution (DE)

Storn and Price (Storn and Price, 1997) proposed DE as an EA variant. DE is popular due to its simplicity, the fact that it usually converges, and that the same parameter values can be used for different problems. Due to these useful characteristics, DE has been successfully and widely used for different real-world optimization problems in many scientific and engineering fields (Elsayed et al., 2012; Zamuda et al., 2016; Zamuda and Sosa, 2019; Sallam et al., 2015). In the literature, DE's performance has been shown to be better than many other EAs for a wide range of optimization problems (Elsayed et al., 2012). DE is consistent and reliable for solving many real-life nonlinear COPs, such as those in power and chemical systems, communication, and pattern reconciliation (Elsayed et al., 2013b). DE starts with an initial population; a mutant vector (donor vector) is then obtained for each solution in the current population (target vector) by adding the weighted difference vector (DV) between two solutions to a third one; a new solution (offspring) is generated by crossover; and finally, a selection mechanism is conducted to decide which of the parent and offspring vectors will survive to the next generation. Its structure and main search operators are described below.

2.1.1. Mutation

DE uses a mutation operator before crossover. For example, in the (DE/rand/1) variant, three candidate solutions are randomly chosen and a mutant solution is generated by multiplying a scaling parameter (F) by the difference vector between two solutions, with the result added to a third solution as (Elsayed et al., 2011a):

V_{i,G} = x_{r1,G} + F(x_{r2,G} − x_{r3,G})     (3)

where r_z ∈ [1, NP], z = 1, 2, 3 are random numbers such that i ≠ r1 ≠ r2 ≠ r3, NP is the population size, F > 0 is a mutation factor used to scale the DV, and G is the current iteration. Many mutation strategies have been proposed in the literature (Elsayed et al., 2011a), such as: DE/best/1 (Price et al., 2006), DE/current-to-rand/1 (Iorio and Li, 2004), DE/rand/2 (Price et al., 2006), DE/rand-to-best/1 (Qin et al., 2009), DE/current-to-best/1 (Price et al., 2006), and DE/current-to-φbest/1 (Zhang and Sanderson, 2009).

In this paper, the following three DE mutation search operators are used.

• DE/current-to-φbest with archive/1/bin:

u_{i,j} = x_{i,j} + F_i(x_{φ,j} − x_{i,j} + x_{r1,j} − x_{r2,j})  if (rand ≤ Cr_i or j = j_rand)
u_{i,j} = x_{i,j}  otherwise     (4)

• DE/rand-to-φbest with archive/1/bin:

u_{i,j} = x_{r1,j} + F_i(x_{φ,j} − x_{r1,j} + x_{r3,j} − x_{r2,j})  if (rand ≤ Cr_i or j = j_rand)
u_{i,j} = x_{i,j}  otherwise     (5)

• DE/φbest/1/bin:

u_{i,j} = x_{φ,j} + F_i(x_{r1,j} − x_{r3,j})  if (rand ≤ Cr_i or j = j_rand)
u_{i,j} = x_{i,j}  otherwise     (6)

where r1, r2, r3 and i are random integer numbers, x_{r1} and x_{r3} are randomly chosen from the entire population, x_{φ,j} is selected from the best 10% of individuals in the entire population, and x_{r2,j} is selected from the union of the entire population and the archive. Similar to JADE (Zhang and Sanderson, 2009), in this paper an archive is used to maintain the diversity of the population. Parents that were worse than their trial vectors are inserted into the archive. If the archive exceeds its predefined size, the worst solutions are removed from it to make room for new elements.

2.1.2. Crossover

After mutation, a crossover search operator is applied to the mutant solution to produce an offspring vector. The most common and simple crossover operators are binomial and exponential. The former operates on each decision variable j, in the case when a generated random number is less than the crossover rate (Cr), by:

u_{i,j,G} = v_{i,j,G}  if rand ≤ Cr or j = j_rand
u_{i,j,G} = x_{i,j,G}  otherwise     (7)

where rand ∈ [0, 1] and j_rand ∈ [1, 2, ..., D] are randomly chosen to ensure that at least one decision variable is inherited from the donor vector (Elsayed et al., 2011a).

In the latter, firstly an integer, d, is randomly chosen so that d ∈ [1, D], where D is the number of decision variables; d represents the starting point of the target vector for crossover. Then, another integer value, b, chosen from [d, D], represents how many decision variables are selected from the donor vector. Once d and b are chosen, an offspring is produced as:

u_{i,j,G} = v_{i,j,G}  for j = ⟨d⟩_D, ⟨d + 1⟩_D, ..., ⟨d + b − 1⟩_D
u_{i,j,G} = x_{i,j,G}  otherwise     (8)

where the angular bracket ⟨·⟩_D denotes the modulo-D function with a starting point of d, and j = 1, 2, ..., D.
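To make the building blocks above concrete, the following sketch combines DE/rand/1 mutation (Equation 3), binomial crossover (Equation 7) and the greedy selection of Equation (9), introduced in the next subsection, into one generation of classic DE. It illustrates the textbook operators only, not MODE-ILNS itself (which mixes the φbest operators and an archive):

```python
import numpy as np

def de_rand_1_bin_step(pop, fit, f, F, Cr, rng):
    """One generation of classic DE/rand/1/bin (Eqs. (3), (7), (9)).

    pop : (NP, D) array of solutions; fit : (NP,) objective values;
    f : objective function to minimize.
    """
    NP, D = pop.shape
    new_pop, new_fit = pop.copy(), fit.copy()
    for i in range(NP):
        # Eq. (3): mutant from three distinct random vectors (all != i)
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                                size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # Eq. (7): binomial crossover; j_rand guarantees one donor gene
        cross = rng.random(D) <= Cr
        cross[rng.integers(D)] = True
        u = np.where(cross, v, pop[i])
        # Eq. (9): one-to-one greedy selection
        fu = f(u)
        if fu <= fit[i]:
            new_pop[i], new_fit[i] = u, fu
    return new_pop, new_fit
```

Because selection is one-to-one and greedy, every individual's fitness is non-increasing from one generation to the next.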

2.1.3. Selection

Considering the fitness values of the parent and offspring, a selection operator is applied to choose the fittest solution. In this step, if the offspring has a better fitness value, it survives to the upcoming generation; otherwise, the parent vector is copied into that generation:

x_{i,G+1} = u_{i,G}  if f(u_{i,G}) ≤ f(x_{i,G})
x_{i,G+1} = x_{i,G}  otherwise     (9)

2.2. Improved variants of DE

This section discusses improved DE variants that can be used for solving COPs.

A modified DE for solving COPs was proposed by Mezura et al. (Mezura-Montes et al., 2006), in which each parent vector was allowed to generate multiple offspring using different DE mutation strategies. It uses Deb's feasibility rule (Deb, 2000) to handle constraints. A self-adaptive multi-operator differential evolution (SAMO-DE) algorithm was proposed by Elsayed et al. (Elsayed et al., 2011c) to solve COPs. In it, the feasibility rule (Deb, 2000) was used to handle constraints, and each operator evolved its own sub-population. An improvement index that combines the constraint violation, feasibility ratio and solution quality was applied to calculate the success of each operator, which is used to dynamically update the number of solutions in each sub-population. More weight was then given to the operator with the highest success rate. To justify the performance of SAMO-DE, two benchmark data-sets were considered, with the results showing that SAMO-DE performed better than other state-of-the-art algorithms. Later, an improved version of SAMO-DE, called ISAMODE-CMA, was proposed by Elsayed et al. (Elsayed et al., 2013a), in which CMA-ES was periodically applied to enhance its local search capability, and a dynamic penalty was used to handle constraints.

Elsayed et al. (Elsayed et al., 2014) proposed a DE with self-adaptive multi-combination strategies, called SAS-DE. SAS-DE utilizes two crossover and four mutation operators and two constraint-handling techniques. SAS-DE was analyzed and tested by solving a set of benchmark problems, with consistently better performance than well-known algorithms. A composite DE for constrained optimization (C2oDE) was proposed by Wang et al. (Wang et al., 2018), in which three different mutation strategies were used to generate three different trial vectors. To balance diversity and convergence, one of the mutation operators was used to increase diversity, while the other two were utilized to increase convergence. Also, to balance the objective function and constraints, the mutation strategy for diversity was used by the solution with the best objective function value, while one of the two mutation strategies for convergence was used by the individual with the least degree of constraint violation. Moreover, to handle constraints, a hybrid constraint-handling method, consisting of the ε-constrained technique and the feasibility rule, was proposed. A restart mechanism was also developed to handle complex constraints.

Xu et al. proposed a Constrained Optimization EA (COEA) (Xu et al., 2018), which uses an adaptive solution generation strategy and a cluster-replacement-based feasibility rule to handle constraints. In COEA, mutation and crossover strategies, crossover rates and mutation factors are stored in pools, with a selection rate assigned to each element in the pools. The selection probabilities are dynamically updated, based on information learned from previous generations. The population is divided into many clusters, and an archived infeasible solution with a low objective function value is replaced by one solution from its cluster. The performance of COEA was judged by solving two sets of benchmark problems and five mechanical design problems.

An adaptive hybrid DE algorithm (AH-DEa) was proposed by Asafuddoula et al. (Asafuddoula et al., 2014), in which DE/rand/1/bin is used in the early generations for exploration, while DE/rand/1 with an exponential crossover operator is employed in the later generations for exploitation. Based on the success of generated solutions, Cr is adaptively updated. A local search is also used to improve the best solution. Gao et al. (Gao et al., 2015) proposed a Dual-Population DE with co-evolution (DPDE) to solve COPs. In DPDE, the constrained problem is treated as a bi-objective one. The first objective denotes the actual objective function, while the second is the degree of constraint violation. Based on the individuals' feasibility, the whole population is separated into two sub-populations. Each sub-population focuses on optimizing only its corresponding objective, which leads to a clear division of work. An information-sharing strategy is also used to exchange search information between these two sub-populations.

In (Yu et al., 2019), an improved DE algorithm (e-DE) to solve COPs was proposed. In e-DE, one of two different mutation strategies is chosen randomly to evolve the whole population. Also, a new mechanism is used to transform the equality constraints into inequality constraints. Trivedi et al. (Trivedi et al., 2017) proposed a unified DE (UDE) algorithm to solve COPs. UDE unifies the main ideas of some existing algorithms: SaDE, CoDE, JADE and a ranking-based mutation strategy. UDE utilizes three mutation strategies and two parameter-setting mechanisms. In UDE, the population is divided into two sub-populations. In the top one, similar to CoDE, three generation strategies are employed on each individual, while for the bottom sub-population, similar to SaDE, UDE uses an adaptation mechanism to generate new solutions. It also uses the static penalty technique to handle constraints. Later, an improved version of UDE (IUDE) was introduced by Trivedi et al. (Trivedi et al., 2018). In IUDE, similar to C2oDE, a combination of the epsilon-constrained and feasibility rules was used to handle constraints. Wagdy (Mohamed, 2018) proposed a novel DE (NDE) algorithm, which uses a triangular mutation rule in a DE algorithm. Its main aim was to balance the global and local search, to speed up the convergence rate of the algorithm. For more information about DE variants, readers are referred to (Das et al., 2016).

2.3. Fitness landscape (FL): a review

Generally, a FL refers to i) a search space that consists of all individuals (populations of candidate solutions), ii) a fitness function (objective function) value, which is given to each solution in the search space, and iii) a neighborhood search operator, which may be, for example, a distance metric (Malan and Engelbrecht, 2013; Mersmann et al., 2011). Finding the fitness landscape of a problem helps in determining problem difficulty (Poursoltan and Neumann, 2015). Several landscape metrics for analyzing and understanding the different characteristics of problems have been developed (Malan and Engelbrecht, 2013; Pitzer and Affenzeller, 2012), such as: auto-correlation (Chicano et al., 2012), fitness distance correlation (FDC) (Jones and Forrest, 1995), the dispersion metric (DM) (Sutton et al., 2006) and length scale (LS) (Morgan and Gallagher, 2017).

Recently, fitness landscape techniques have been used by many researchers and practitioners to choose and determine the most suitable algorithm and/or operators for solving optimization problems. Sallam et al. (Sallam et al., 2018a) proposed an algorithm that uses fitness distance correlation to choose the best-performing DE operator from a pool of many DE mutation strategies. The performance of the proposed algorithm was justified by solving 10 real-world constrained optimization problems taken from CEC2011. Bischl et al. (Bischl et al., 2012) used a cost-sensitive model, based on learning, to choose the most-suitable algorithm from a pool of four for solving black-box optimization benchmarking (BBOB) problems (Hansen et al., 2009, 2010). To do this, exploratory landscape analysis (ELA) techniques were used to extract 19 measurements, to be used along with low-level features (Mersmann et al., 2011) to characterize 10D functions. Then, the modality, separability and global structure of an optimization problem are determined as the first step in identifying the landscape (performed off-line). Next, to select the most-suitable algorithm, a machine-learning model was constructed and validated, based on two different cross-validation schemes. Nevertheless, the results may not be generalizable to problems with different dimensions. As the low-level features are obtained in a separate step, the computational cost of calculating them was not added to the number of fitness evaluations. Also, as the selection of the algorithm pool is manual, its validation on unobserved problems is weakened.

A model proposed by Malan et al. (Malan and Engelbrecht, 2014b) aims to determine the reason for the failure of a particle swarm optimization (PSO) algorithm to solve a particular problem. This model uses decision trees to predict the failures of seven different PSOs, by utilizing several fitness landscape measures. A multilayer feed-forward neural network has been used to find the best parameter combination from eight of them for the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm (Muñoz et al., 2012). To train this model, a database of 1800 problem instances, taken from the comparing continuous optimization (COCO) benchmark (Hansen et al., 2010), was used. For validating this model, data from the CEC2005 competition (Suganthan et al., 2005) and seven ELA measures were employed to characterize and analyze each problem. The main drawbacks of that model are that all the landscape analyses were done off-line and that the sample size of 15000 × D used to compute the ELA measurement is too expensive. Also, the model's accuracy was examined and compared with only random configurations of unseen problems during the validation stage.

By utilizing a set of four ELA techniques, an adaptive operator selection mechanism (Consoli et al., 2014) was proposed and used to train an online regression model that uses a dynamic weighted majority to predict the weight of each crossover operator, with the aim of solving a number of Capacitated Arc Routing Problems (CARPs). Also, an instantaneous reward was used to calculate the reward of each operator. When the performance of this algorithm was compared with some well-known algorithms, it did not show a significant benefit. A new self-feedback DE algorithm (SFDE) was proposed by Li et al. (Li et al., 2019). In each iteration, SFDE's optimal variation strategy is chosen by extracting the local fitness landscape characteristics. It also combines the probability distributions of multi-modality and uni-modality in each local fitness landscape. SFDE was tested by solving a suite of 17 unconstrained problems.

Algorithm 1 MODE-ILNS
1: Define CS, c ← 0, MAXFES, and FES ← 0;
2: Generate an initial random population (X) of size NP using LHD;
3: Evaluate f(X) and ψ(X), and update the number of fitness evaluations FES ← FES + NP;
4: while FES ≤ MAXFES do
5:   c ← c + 1;
6:   if c < CS then
7:     Randomly assign each operator to the same number of solutions;
8:     Generate a new population using the assigned operators;
9:     Calculate LD_op, based on individuals updated by operator op, as explained in Section 3.1.2;
10:  end if
11:  if c == CS then
12:    Compute the normalized value of LD_op of every operator over the last CS generations;
13:    Update the number of solutions generated by each DE operator (op) (Equation 17);
14:  end if
15:  if c > CS and c < 2CS then
16:    Each DE operator evolves the updated number of solutions assigned to it, based on Equation 17;
17:  end if
18:  if c == 2CS then
19:    c ← 0, LD_op = [ ];
20:  end if
21:  FES ← FES + NP;
22:  Update the population size (NP) using Equation (11);
23: end while

3. Proposed algorithm

This section presents the proposed MODE-ILNS algorithm, which utilizes modified landscape information of the problem to dynamically choose the best-performing DE mutation operator to solve COPs. It also presents the modified landscape measures, the general framework, and its components.

3.1. MODE-ILNS

3.1.1. Population initialization and updating method

MODE-ILNS starts with a random initial population of size NP. Each solution is then evaluated, and the number of fitness evaluations so far (FES) is updated. Subsequently, each DE mutation strategy generates the same number of solutions. At the end of each generation, LD is calculated for each operator (see Section 3.1.2). After a pre-defined number of generations (CS), the number of solutions generated by each DE mutation strategy is updated for the subsequent CS generations, as explained in Section 3.1.3. Then, as the performance of an operator may improve during later generations, all the DE mutation operators are reused and again generate the same number of solutions for the next CS generations. This process continues until the stopping criterion is met.
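The alternating schedule described above (equal shares for CS generations, then LD-guided shares for the next CS generations) can be sketched as follows. Since Equation (17) is not reproduced in this excerpt, the proportional split in the second phase is an assumption, as is the reading that a smaller normalized LD marks a better-performing operator:

```python
def operator_shares(c, CS, NP, ld_norm, n_ops=3):
    """Number of offspring each operator produces at cycle counter c.

    c < CS: equal shares while LD_op statistics are collected;
    CS <= c < 2*CS: shares proportional to (1 - normalized LD), so that
    operators whose sub-populations look 'easier' (LD near 0) receive
    more solutions. The exact update rule (Eq. (17)) is not shown in
    this excerpt; this proportional split is only a plausible stand-in.
    """
    if c < CS or ld_norm is None:
        shares = [NP // n_ops] * n_ops
    else:
        quality = [1.0 - v for v in ld_norm]
        total = sum(quality) or 1.0
        shares = [int(NP * q / total) for q in quality]
    shares[0] += NP - sum(shares)  # hand any rounding remainder to op 0
    return shares
```

Whatever the exact allocation rule, the shares always sum to the current population size NP.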

As it has the capability of generating a sample of points that more efficiently covers the whole search space (Rajabi et al., 2015), the Latin Hypercube Design (LHD) is applied to produce the initial population:

x_{i,j} = x_j^min + (x_j^max − x_j^min) × lhd(1, NP),  i = 1, 2, ..., NP and j = 1, 2, ..., D     (10)

where lhd is a function that produces random numbers using LHD. A linear population size reduction mechanism is also applied, to dynamically reduce NP during the search process (Tanabe and Fukunaga, 2014), as

NP_{t+1} = round[((NP_min − NP_init) / MAXFES) × FES + NP_init]     (11)

where NP_min refers to the smallest number of solutions that MODE-ILNS can use, NP_init is the initial population size, FES is the current number of fitness evaluations, and MAXFES is the maximum number of fitness evaluations.

Table 1: The values of x, y, f(x, y) and ψ(x, y)

No.    x          y         f(x, y)    ψ(x, y)
1      15.9997    4.1173    -6321.1    2.6024
2      14.4503    2.1789    -5571.7    2.7325
3      14.4354    1.4401    -6310.8    0
4      15.0003    4.3179    -3731.7    0
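The two mechanisms above can be sketched directly. The LHD routine below is a hand-rolled stand-in for the paper's lhd(1, NP) generator (one sample per stratum in each dimension, with the strata independently permuted per dimension), and next_population_size implements Equation (11):

```python
import numpy as np

def lhd_population(NP, D, lower, upper, rng):
    """Latin hypercube initial population (Eq. (10)): each dimension is
    split into NP equal strata and every stratum is sampled exactly once
    (a simple stand-in for the paper's lhd(1, NP) routine)."""
    # row i gets a uniform point in stratum [i/NP, (i+1)/NP) per dimension
    u = (rng.random((NP, D)) + np.arange(NP)[:, None]) / NP
    for j in range(D):
        rng.shuffle(u[:, j])   # decouple the dimensions
    return lower + (upper - lower) * u

def next_population_size(FES, MAXFES, NP_init, NP_min):
    """Linear population size reduction, Eq. (11)."""
    return round((NP_min - NP_init) / MAXFES * FES + NP_init)
```

With this schedule, NP shrinks linearly from NP_init at FES = 0 to NP_min when the evaluation budget MAXFES is exhausted.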

3.1.2. Landscape measure

In (Sallam et al., 2017b), the Information Landscape Negative Searchability (ILNS) measure was used in an adaptive way to choose the most-suitable DE search operator from a pool of m operators, and was applied to unconstrained optimization problems. Dealing with constrained optimization problems is different from dealing with unconstrained ones, as all the functional constraints must be satisfied, so ILNS must be modified to be able to solve COPs. The suggested modification is explained below.

First, an information matrix M = [a_{i,j}] of the problem is created for minimization as:

a_{ij} = 1    if (ψ(x_i) = ψ(x_j) = 0 and f(x_i) < f(x_j)) or
              (ψ(x_i) ≠ 0 and ψ(x_j) ≠ 0 and ψ(x_i) < ψ(x_j)) or
              (ψ(x_i) = 0 and ψ(x_j) ≠ 0)
a_{ij} = 0    if (ψ(x_i) = ψ(x_j) = 0 and f(x_i) > f(x_j)) or
              (ψ(x_i) ≠ 0 and ψ(x_j) ≠ 0 and ψ(x_i) > ψ(x_j)) or
              (ψ(x_i) ≠ 0 and ψ(x_j) = 0)
a_{ij} = 0.5  otherwise     (12)

where ψ(x_i) is the total degree of constraint violation of individual x_i.

To construct the information landscape, not all entries in the information matrix M are necessary. Owing to symmetry, the upper and lower triangles are opposite, i.e., if a_12 = 1, then a_21 = 0; therefore, the lower triangle is omitted. The values on the diagonal are always equal to 0.5, so the diagonal is also omitted. Also, the row and column of the optimum solution should be omitted. Thus, the matrix can be expressed by a vector LS = (ls_1, ls_2, ..., ls_|LS|), where the number of entries in LS is |LS| = (NP − 1) × (NP − 2) / 2, and NP is the population size.

For more clarification, an example is introduced to explain how LS is computed for COPs. Consider the following COP:

Minimize f(x, y) = (x − 10)^3 + (y − 20)^3     (13)

subject to:
g1(x, y) = −(x − 5)^2 − (y − 5)^2 + 100 ≤ 0
g2(x, y) = (x − 6)^2 + (y − 5)^2 − 82.81 ≤ 0
13 ≤ x ≤ 100 and 0 ≤ y ≤ 100

The values of x, y, f and ψ for a sample of four solutions are given in Table 1. The pairwise matrix M is then:

        f1     f2     f3     f4
f1     0.5     1      0      0
f2      0     0.5     0      0
f3      1      1     0.5     1
f4      1      1      0     0.5

To create the vector LS from the matrix M, the main diagonal, the lower triangle, and the third row and third column (those of the best solution) are deleted. So, LS = (a_12, a_14, a_24) = (1, 0, 0).

Given two landscapes, LS_f = (ls_{f,1}, ls_{f,2}, ..., ls_{f,|LS_f|}) and LS_ref = (ls_{ref,1}, ls_{ref,2}, ..., ls_{ref,|LS_ref|}), the difference between the two landscapes is calculated by Equation (14):

LD_op = (1 / |LS_f|) × Σ_{z=1}^{|LS_f|} |ls_{f,z} − ls_{ref,z}|     (14)

where z = 1, 2, ..., |LS_f|. When LD_op is near 0 or 1, the problem is considered easy or difficult, respectively. Physically, LD_op measures the hardness of the problem, based on the difference between the information landscape vector of the considered problem and a reference landscape vector (Malan and Engelbrecht, 2014a). In this study, we use the well-known sphere function as a reference landscape, for the following reasons:

• The sphere function presents non-negative information for the search. In other words, for any two given solutions A and B, if f(A) < f(B), then A lies nearer to the optimal solution than B.

• The sphere function is scalable, so it can be scaled up to any dimension.

Algorithm 2 MODE-ILNS 1: Input: sample of solutions, X s , randomly generated by a LHD (Equation 10); 2: Find the best solution, xbest , in the sample; 3: Fill the pairwise matrix M using Equation 12 and construct the vector LS f that represents the information matrix of the problem; 4: Define reference function (Sphere function with the same number of variables and using the same constraints of the problem to be solved). 5: Construct the vector LS re f , that represents the information matrix of the reference problem. 6: Compute LDop as the difference between the two vectors, using Equation 14;

ever, setting their values is not an easy task. Therefore, a self-adaptive technique to manage the values of F and Cr, is used in this study (Elsayed et al., 2016b; Sallam et al., 2018b; Tanabe and Fukunaga, 2014). A recording memory of length H for both F and Cr is applied. The parameter values in this recording memory are expressed as µF and µCr , and initially were set to 0.5. Each −x is linked with its own (F ) and (Cr ), and solution → z z z their values are produced using the following equations:

Given the above-mentioned discussion, the following algorithm presents the main steps for computing LDop . 3.1.3. Updating number of individuals evolved by operator op The normalized value of LDop is calculated by Equation 15, based on which the number of individuals each operator op evolves is calculated. LDop

op=1

LDop

, ∀op = 1, 2, 3

(15) µCR,h,G+1

where LDop is the mean value of LD of operator op over the last CS generations. Based on the normalized value of the LD for each operator, LDop , the probability is computed by: Prbop = max(0.1, min(0.9, P3

NLDop

op=1

NLDop

µF,h,G+1

)), ∀op = 1, 2, 3

Fz = randci(µF,rz , 0.1)

(19)

    meanwL (S Cr ) =   µCR,h,G

if S CR , φ otherwise

    meanwL (S F ) if S F , φ =   µF,h,G otherwise

(20)

(21)

where 1 ≤ h ≤ H is the position of the memory to update. It is initialized with a value of 1 and is consequently incremented whenever a new element is inserted into the history. If h > H, h is reset to 1 and meanwL (S F), the Lehmer mean, is computed by:

(16) After that the number of solutions that every DE operator PS op evolves is calculated by : PS op = Prbop × NPinit , ∀op = 1, 2, 3

(18)

where rz is randomly chosen from [1, H], randni and randci are randomly chosen from normal and Cauchy distributions with mean µCr,rz and µF,rz respectively, with variance 0.1. A repair mechanism is used to handle any values of Crz and Fz , if their values are outside [0, 1]. They are repaired as follows. If Crz is out of the range, it is replaced by the limit value (0 or 1) closest to the generated value. If Fz > 1, its value is changed to 1, and if Fz ≤ 0, equation 19 is repeatedly carried out until a valid value is obtained. At the end of every iteration, the (Fz ) and (Crz ) utilized by the successful solutions are put in S F and S Cr , then the values of the recording memory are updated as follows:

• It is easy to shift the sphere function, so that the optimum is positioned anywhere in the search space.

NLDop = P3

Crz = randni(µCr,rz , 0.1)

(17)

Note: the summation of PS op must equal the whole population size. As a kind of information sharing, the individuals every operator op evolves, is randomly assigned at every generation.

meanwL (S F) =

|SP F| h=1

|SP F| h=1

ωh .S 2Fh

(22)

ωh .S Fh

where ωh is the weight computed using:

3.1.4. Adaptation of F and Cr As previously mentioned, DE’s performance depends on its search operators and control parameters. How-

βh ωh = P|S | Cr h=1

9

βh

(23)
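The construction of the pairwise matrix (Equation 12), the landscape vector LS, and the difference LD (Equation 14) can be sketched in Python. This is a minimal illustration, not the authors' code: the tie-breaking used to pick the best sample point is an assumption, and the fitness values used in the test are hypothetical, chosen only so that the resulting matrix matches the worked example above.

```python
from itertools import combinations

def pairwise_entry(f_i, psi_i, f_j, psi_j):
    """Equation 12: pairwise comparison under the feasibility rule."""
    if psi_i == 0 and psi_j == 0:          # both feasible: compare fitness
        if f_i < f_j: return 1.0
        if f_i > f_j: return 0.0
        return 0.5
    if psi_i != 0 and psi_j != 0:          # both infeasible: compare violation
        if psi_i < psi_j: return 1.0
        if psi_i > psi_j: return 0.0
        return 0.5
    return 1.0 if psi_i == 0 else 0.0      # feasible beats infeasible

def landscape_vector(f, psi):
    """Build LS: the upper triangle of M with the best solution's
    row and column removed (diagonal and lower triangle are implicit)."""
    # assumption: 'best' = feasible before infeasible, then lowest psi, then lowest f
    best = min(range(len(f)), key=lambda i: (psi[i] != 0, psi[i], f[i]))
    idx = [i for i in range(len(f)) if i != best]
    return [pairwise_entry(f[i], psi[i], f[j], psi[j])
            for i, j in combinations(idx, 2)]

def landscape_difference(ls_f, ls_ref):
    """Equation 14: mean absolute difference of the two landscape vectors."""
    return sum(abs(a - b) for a, b in zip(ls_f, ls_ref)) / len(ls_f)
```

With all-feasible hypothetical fitness values f = (3, 4, 1, 2), f3 is best and the remaining upper-triangle entries reproduce LS = (1, 0, 0) from the example.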

The value of β_h is computed based on one of the following cases (Elsayed et al., 2016b):
1. Infeasible to infeasible: the best individual in the population is infeasible at both iterations G − 1 and G;
2. Infeasible to feasible: the best individual in the population is infeasible at iteration G − 1 and becomes feasible at G; and
3. Feasible to feasible: the best individual is feasible at both iterations G − 1 and G.

Firstly, for every successful solution (h ∈ {1, 2, ..., |S_Cr|}) which falls in case 1, its β_h is computed as:

\[
\beta_h = I_h = \max\left(0, \frac{\psi_{h,G-1} - \psi_{h,G}}{\psi_{h,G-1}}\right) + \max\left(0, \frac{f_{h,G-1} - f_{h,G}}{f_{h,G-1}}\right) \tag{24}
\]

Then, for every successful solution which falls in case 2 or 3, its β_h is calculated as:

\[
\beta_h = \max(0, I_h) + \frac{\psi_{h,G-1} - \psi_{h,G}}{\psi_{h,G-1}} + \max\left(0, \frac{f_{h,G-1} - f_{h,G}}{f_{h,G-1}}\right) \tag{25}
\]

3.2. Constraints handling
In this method, a feasibility rule is used to select between any individual and its parent: 1) of two feasible solutions, the one with the better fitness value is selected; 2) of two infeasible solutions, the one with the smaller sum of constraint violations (ψ) is chosen, where ψ is calculated using Equation 26; and 3) a feasible individual is always better than an infeasible one.

\[
\psi(x_z) = \sum_{k=1}^{s} \max(0, g_k(x_z)) + \sum_{e=1}^{q} \max(0, |h_e(x_z)| - \delta_e) \tag{26}
\]

where g_k(x_z) and h_e(x_z) are the kth inequality and eth equality constraints, respectively. For every equality constraint h_e, δ_e starts with a large value and is then gradually decreased to 0.0001; its initial value is problem dependent (Elsayed et al., 2016b; Mezura-Montes and Coello, 2003; Si et al., 2014).

4. Experimental Setup and Results
This section presents, analyzes and discusses the performance and computational results obtained by the proposed algorithm on four sets of constrained benchmark problems. The stopping conditions of all algorithms were set as follows:

• For CEC2006: all algorithms were run to a maximum of 200000 fitness evaluations (Liang et al., 2006);
• For CEC2010: all algorithms were run to a maximum of 2000 × D fitness evaluations (Mallipeddi and Suganthan, 2010b);
• For CEC2011: all algorithms were run to a maximum of 150000 fitness evaluations (Das and Suganthan, 2010); and
• For CEC2017: all algorithms were run to a maximum of 2000 × D fitness evaluations (Wu et al., 2017).

All algorithms were coded in Matlab R2014a and run on a PC with a 3.4 GHz Core i7 processor, 16 GB RAM, and Windows 7. Each comparative algorithm was run 25 times, with the best, mean and standard deviation results recorded. To conduct a statistical comparison between algorithms, two non-parametric tests were performed (Friedman's ranking test and the Wilcoxon signed-rank test (García et al., 2010)). The performance of the proposed algorithm was also judged graphically by plotting performance profiles (Dolan and Moré, 2002; Barbosa et al., 2013), which compare the performance of a number of algorithms (M) on a number of problems (P) with respect to a comparison goal (i.e., the average number of FEs or the computational time) needed to achieve a specific performance criterion (i.e., the optimal objective function value). For an algorithm s, the performance profile Rho_s is computed as:

\[
Rho_s(\tau) = \frac{1}{n_p} \times |\{p \in P : r_{ps} \leq \tau\}| \tag{27}
\]

where Rho_s(τ) is the probability for s ∈ M that the performance ratio r_{ps}, calculated by Equation 28, is within a factor τ > 1 of the best possible ratio, and Rho_s is the cumulative distribution function of the performance ratio.

\[
r_{ps} = \frac{t_{ps}}{\min\{t_{ps} : s \in S\}} \tag{28}
\]

where t_{ps} is the CPU time taken by algorithm s to reach the objective function value f_p on problem p. In the following sections, the effect of the parameters and each component of the proposed algorithm is examined, after which the best variant of the proposed algorithm is compared with state-of-the-art algorithms.
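The feasibility rule and the violation measure of Equation 26 can be sketched as follows. This is a minimal Python illustration, assuming a fixed tolerance δ_e (the paper decreases it gradually to 0.0001); the constraint functions in the usage below are hypothetical.

```python
def violation(x, ineq, eq, delta=1e-4):
    """Equation 26: total constraint violation psi(x) for inequality
    constraints g(x) <= 0 and equality constraints h(x) = 0."""
    psi = sum(max(0.0, g(x)) for g in ineq)
    psi += sum(max(0.0, abs(h(x)) - delta) for h in eq)
    return psi

def better(cand, parent, f, ineq, eq, delta=1e-4):
    """Feasibility rule: True if 'cand' is preferred over 'parent'."""
    pc = violation(cand, ineq, eq, delta)
    pp = violation(parent, ineq, eq, delta)
    if pc == 0 and pp == 0:      # both feasible: lower fitness wins
        return f(cand) < f(parent)
    if pc > 0 and pp > 0:        # both infeasible: lower violation wins
        return pc < pp
    return pc == 0               # feasible always beats infeasible
```

For example, with the single hypothetical constraint g(x) = x[0] − 1 ≤ 0, a feasible point is always preferred over an infeasible one regardless of fitness.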

4.1. Parameter analysis
This section analyzes the effect of: 1) different adaptation rates of F and Cr; 2) CS; 3) NP_init; 4) NP_min; and 5) using the landscape method in the selection mechanism. The default values of the parameters were as follows: ϕ was set to 0.5 for DE/ϕbest/1 to maintain diversity and, with the aim of speeding up the convergence rate, to 0.1 for the remaining variants; CS was set to 25; the archive rate (A) to 1.4; and the memory size (H) to 5. As previously stated, an LPSR mechanism (Tanabe and Fukunaga, 2014) is used, in which NP_init was set to 180 and NP_min to 4.

4.1.1. Effect of self-adaptation
To see the effect of the self-adaptation strategy used in this paper, two variants of the proposed algorithm with fixed mutation and crossover rates are considered, and their results are compared with those obtained from the proposed algorithm with the self-adaptive mechanism: V1 uses F = 0.9 and Cr = 0.1, V2 uses F = 0.5 and Cr = 0.5, and V3 uses the self-adaptive mechanism described in Section 3.1.4. The best, average and standard deviation results obtained from 25 runs of each variant are presented in supplementary material Table I. Comparing the computational time in seconds and the FEs of the different variants, V3, which uses the self-adaptive approach, is the best, as shown in Table 2. The performance profiles for both computational time and FEs depicted in Figure 1 also demonstrate that the variant with self-adaptation is the best.

Table 2: Total average time and total average fitness evaluations of different variants for adapting F and Cr

Variant | Total average time (sec) | Total average FEs
V1 | 2.65 | 163861
V2 | 1.9 | 123725
V3 | 1.07 | 67066

Regarding the quality of solutions, Table 3 presents a comparison summary between V3 and the other two variants (V1 and V2). From the results, the performance of V3 with self-adaptive F and Cr is better than that of the two other variants for both the best and average results obtained. The Wilcoxon test also confirms that V3 statistically outperforms V1 and V2.

Table 3: Comparison summary between different variants for adapting F and Cr

Criteria | Algorithms | Better | Equal | Worse | P-value | Dec.
Best | V3 vs. V1 | 13 | 8 | 1 | 0.005 | +
Best | V3 vs. V2 | 10 | 11 | 1 | 0.049 | +
Average | V3 vs. V1 | 15 | 6 | 1 | 0.003 | +
Average | V3 vs. V2 | 11 | 10 | 1 | 0.001 | +

4.1.2. Effect of CS
In this section, MODE-ILNS is tested using the self-adaptation of F and Cr, as it was better than both of the fixed settings, but with different values of CS, the number of generations during which MODE-ILNS selects the best-performing operator. Experiments were conducted with CS = 25, 50, 75, 100, 125 and 150 generations, with detailed results presented in supplementary material Tables II and III. Table 4 presents the total average time and total average number of fitness evaluations. From the results, the variant with CS = 125 is slightly better than all other variants in both average time and FEs. The Friedman test results are presented in Table 5, from which it is clear that the variant with CS = 125 has the best rank for both the best and average results.

Table 4: Total average time and total average fitness evaluations of different CS values

CS | Total average time (sec) | Total average FEs
25 | 1.065 | 67066
50 | 1.109 | 68570
75 | 0.978 | 68407
100 | 0.996 | 66988
125 | 0.963 | 66074
150 | 0.981 | 66469

Table 5: Ranks of proposed algorithm with different CS values

CS | Mean rank | Order
25 | 3.47 | 2
50 | 3.70 | 6
75 | 3.61 | 5
100 | 3.48 | 3
125 | 3.25 | 1
150 | 3.49 | 4

4.1.3. Effect of NP_init
To see the effect of the initial population size, NP_init, experiments with NP_init = 50, 75, 100, 125, 150 and 200 were conducted. The detailed results are presented in supplementary material Tables IV and V. The total average time and FEs are presented in Table 6, from which it is clear that the variant with NP_init = 150 is slightly better. This is also shown by the performance profiles depicted in Figure 2, from which it is clear that the variant with NP_init = 150 is the best for both average computational time and FEs.

[Figure 1: Comparison of performance profiles of different variants for adapting F and Cr: (a) based on computational time; and (b) based on FEs.]
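The success-history update of Equations 18-23, whose effect is analyzed in Section 4.1.1, can be sketched as below. This is an illustrative reading, not the authors' implementation: the Cauchy sample is drawn via the inverse-CDF transform, and the weighted Lehmer mean is applied to both F and Cr, as Equations 20-22 suggest.

```python
import math
import random

H = 5
mu_F = [0.5] * H          # historical memory for F (Equation 21)
mu_Cr = [0.5] * H         # historical memory for Cr (Equation 20)

def sample_Cr(r):
    """Equation 18: normal sample around mu_Cr[r], repaired to [0, 1]."""
    return min(1.0, max(0.0, random.gauss(mu_Cr[r], 0.1)))

def sample_F(r):
    """Equation 19: Cauchy sample around mu_F[r]; truncate at 1, redraw if <= 0."""
    while True:
        F = mu_F[r] + 0.1 * math.tan(math.pi * (random.random() - 0.5))
        if F > 1.0:
            return 1.0
        if F > 0.0:
            return F

def lehmer_mean(values, weights):
    """Equation 22: weighted Lehmer mean."""
    num = sum(w * v * v for v, w in zip(values, weights))
    den = sum(w * v for v, w in zip(values, weights))
    return num / den

def update_memory(h, S_F, S_Cr, betas):
    """Equations 20-23: overwrite slot h when successes exist, then advance h."""
    if S_F:
        total = sum(betas)
        w = [b / total for b in betas]      # Equation 23
        mu_F[h] = lehmer_mean(S_F, w)
        mu_Cr[h] = lehmer_mean(S_Cr, w)
        h = (h + 1) % H                     # wrap the memory pointer
    return h
```

With a single successful solution, the weighted Lehmer mean reduces to that solution's own parameter value, so the memory slot is simply overwritten by it.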

Table 6: Total average time and total average fitness evaluations of different NP_init values

NP_init | Total average time (sec) | Total average FEs
50 | 1.513 | 51912
75 | 1.068 | 57134
100 | 0.977 | 55279
125 | 0.942 | 61032
150 | 0.913 | 50475
200 | 0.963 | 68297

Regarding the quality of solutions, Table 7 shows a summary of the obtained results based on the average results: NP_init = 150 was better than all of the other variants. Regarding the Wilcoxon test results, there is no significant difference between the variants; however, there is a bias towards NP_init = 150.

Table 7: Comparison summary between different variants of the proposed algorithm with different NP_init values

Variants | Better | Equal | Worse | Dec.
NP_init = 150 vs. NP_init = 50 | 5 | 16 | 1 | ≈
NP_init = 150 vs. NP_init = 75 | 4 | 17 | 1 | ≈
NP_init = 150 vs. NP_init = 100 | 4 | 17 | 1 | ≈
NP_init = 150 vs. NP_init = 125 | 4 | 17 | 1 | ≈
NP_init = 150 vs. NP_init = 200 | 3 | 17 | 2 | ≈

4.1.4. Effect of NP_min
For this analysis, the proposed algorithm was run with different NP_min values, namely NP_min = 4, 10, 20, 30, 40 and 50 individuals. The detailed results are shown in supplementary material Tables VI and VII. The average computational time and FEs are presented in Table 8, which shows that the variant with NP_min = 40 requires the least average computational time, and the second-least average FEs.

Table 8: Total average time and total average fitness evaluations of different NP_min values

NP_min | Total average time (sec) | Total average FEs
4 | 1.038 | 55515
10 | 1.176 | 57173
20 | 0.961 | 56496
30 | 0.936 | 57678
40 | 0.880 | 56384
50 | 0.935 | 57240

A Friedman test was carried out to rank all the variants. From the results shown in Table 9, it can be seen that the variant with NP_min = 40 is ranked first, while the ones with NP_min = 30 and NP_min = 50 come second and third, respectively.

Table 9: Ranks of proposed algorithm with different NP_min values

NP_min | Mean rank | Order
4 | 3.73 | 4.5
10 | 3.77 | 6
20 | 3.73 | 4.5
30 | 3.32 | 2
40 | 3.07 | 1
50 | 3.39 | 3

As a further comparison, a Wilcoxon test was carried out, with a summary of the results presented in Table 10. The variant with NP_min = 40 is significantly better than those with NP_min = 4 and NP_min = 10, while there is no significant difference with the others.

Table 10: Comparison summary between different variants of the proposed algorithm with different NP_min values

Variants | Better | Equal | Worse | Dec.
NP_min = 40 vs. NP_min = 4 | 5 | 17 | 0 | +
NP_min = 40 vs. NP_min = 10 | 5 | 17 | 0 | +
NP_min = 40 vs. NP_min = 20 | 4 | 17 | 1 | ≈
NP_min = 40 vs. NP_min = 30 | 4 | 17 | 1 | ≈
NP_min = 40 vs. NP_min = 50 | 4 | 17 | 1 | ≈

4.1.5. Comparing MODE-ILNS with different selection mechanisms
In this section, the effect of the modified landscape method in the selection mechanism is analyzed. To do this, MODE-ILNS is compared with four of its variants (Q-MODE, D-MODE, QD-MODE and FDC-MODE), each of which differs only in the selection mechanism, described below:

1. In D-MODE, population diversity is used to select the most-suitable DE mutation operator, as:

\[
D_{op,G} = \frac{\sum_{i=1}^{NP_{op}} dis(x_i^{op}, x_b^{op})}{NP_{op}}, \quad \forall op = 1, 2, ..., m \tag{29}
\]

where dis(x_i^{op}, x_b^{op}) is the distance between the ith individual and the best individual generated by operator op, NP_op is the number of solutions operator op evolves, x_b^{op} is the best solution among those evolved by operator op, and m is the number of operators.
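The LPSR mechanism whose bounds NP_init and NP_min are analyzed above can be sketched as below, assuming the standard linear reduction formula of Tanabe and Fukunaga (2014); the default arguments use the best-found settings NP_init = 150 and NP_min = 40 from this section.

```python
def lpsr_population_size(fes, max_fes, np_init=150, np_min=40):
    """Linear population size reduction (LPSR): NP shrinks linearly from
    np_init to np_min as the fitness-evaluation budget is consumed."""
    return round(np_init + (np_min - np_init) * fes / max_fes)
```

For example, with a budget of 100000 FEs the population starts at 150 individuals, is at 95 halfway through, and finishes at 40.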

[Figure 2: Comparison of performance profiles of proposed algorithm with different values of NP_init, based on (a) computational time; and (b) FEs.]

2. In Q-MODE, the selection mechanism is based on the improvement rates in objective function values (Elsayed et al., 2016a), as:

\[
Q_{op,G} = \frac{\sum_{z=1}^{NP_{op}} \max(0, f_{G+1,z} - f_{G,z})}{\sum_{z=1}^{NP_{op}} f_{G,z}}, \quad \forall op = 1, 2, ..., m \tag{30}
\]

where f_{G+1,z} and f_{G,z} are the new and old objective function values, respectively.

3. In QD-MODE, the selection mechanism is based on both population diversity and the quality of solutions, as:

\[
QD_{op,G} = D_{op,G} + Q_{op,G} \tag{31}
\]

4. In FDC-MODE, the proposed method is compared with the fitness-distance-correlation-based variant of (Sallam et al., 2018a).

The detailed results obtained from these five variants are presented in Tables VIII and IX in the supplementary material. A Friedman rank test was conducted to rank all the variants over all problems of CEC2006. From the results shown in Table 11, it is clear that MODE-ILNS performs best, while FDC-MODE comes second.

Table 11: Ranks of 5 different variants, based on Friedman rank test

Algorithm | Mean rank | Order
MODE-ILNS | 2.59 | 1
D-MODE | 3.29 | 5
Q-MODE | 3.07 | 3
QD-MODE | 3.09 | 4
FDC-MODE | 2.96 | 2

A summary of the comparison between MODE-ILNS and the other four variants is shown in Table 12. MODE-ILNS is better than D-MODE, Q-MODE, QD-MODE and FDC-MODE on 11, 11, 12 and 10 test problems, respectively, while MODE-ILNS is inferior to D-MODE, Q-MODE, QD-MODE and FDC-MODE on 4, 6, 4 and 6 test problems, respectively.

Table 12: Comparison summary between MODE-ILNS, D-MODE, Q-MODE, QD-MODE and FDC-MODE

Variants | Better | Equal | Worse | Dec.
MODE-ILNS vs. D-MODE | 11 | 7 | 4 | ≈
MODE-ILNS vs. Q-MODE | 11 | 5 | 6 | ≈
MODE-ILNS vs. QD-MODE | 12 | 6 | 4 | ≈
MODE-ILNS vs. FDC-MODE | 10 | 6 | 6 | ≈

Furthermore, the average number of fitness evaluations and the average computational time needed to attain the optimal solution with an error of 0.0001 (i.e., the stopping condition f(x) − f(x*) ≤ 0.0001, where f(x*) is the best-known solution) were recorded and are presented in Table 13. It is clear that MODE-ILNS saves 34%, 35%, 69% and 17% of the computational time in comparison to Q-MODE, D-MODE, QD-MODE and FDC-MODE, respectively. Regarding FEs, MODE-ILNS saves 30%, 33%, 19% and 24% in comparison to Q-MODE, D-MODE, QD-MODE and FDC-MODE, respectively. In addition, the performance profiles for both the computational time and FEs results are depicted in Figure 3, from which the proposed MODE-ILNS is considered the best.

Table 13: Comparison among MODE-ILNS, Q-MODE, D-MODE, QD-MODE and FDC-MODE, based on average computational time and FEs

Variants | Total average time (sec) | Total average FEs
MODE-ILNS | 0.999 | 53377
Q-MODE | 1.519 | 76420
D-MODE | 1.530 | 79659
QD-MODE | 3.200 | 65906
FDC-MODE | 1.209 | 70939

4.2. Comparison of MODE-ILNS with its constituent DE variants
In this section, the performance of the proposed MODE-ILNS is compared with the following three variants on the CEC2010 30D test problems:

1. Var1: DE/current-to-φbest with archive/1/bin;
2. Var2: DE/rand-to-φbest with archive/1/bin; and
3. Var3: DE/φbest/1/bin.

The detailed results obtained from the proposed algorithm, as well as the other three variants, are presented in Table X in the supplementary material. A summary of the obtained solutions is presented in Table 14, from which it is clear that MODE-ILNS is superior to its constituent DE variants. Considering the solution quality for the average obtained results, MODE-ILNS is better than, similar to and worse than Var1 on 10, 7 and 1 test problems, respectively; than Var2 on 10, 6 and 2 test problems, respectively; and than Var3 on 15, 2 and 1 test problems, respectively.

Table 14: Comparison summary between MODE-ILNS and the three other variants, on CEC2010's 30D test problems

Algorithms | Criteria | Better | Equal | Worse | Dec.
MODE-ILNS vs. Var1 | Best | 2 | 15 | 1 | ≈
MODE-ILNS vs. Var1 | Average | 10 | 7 | 1 | +
MODE-ILNS vs. Var2 | Best | 2 | 15 | 1 | ≈
MODE-ILNS vs. Var2 | Average | 10 | 6 | 2 | +
MODE-ILNS vs. Var3 | Best | 15 | 3 | 0 | +
MODE-ILNS vs. Var3 | Average | 15 | 2 | 1 | +
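The landscape-based operator shares of Equations 15-17, which give MODE-ILNS its emphasis on the best-performing operator, can be sketched as below. The final rounding repair is an assumption: the paper only states that the summation of PS_op must equal the whole population size, without specifying how rounding is resolved.

```python
def operator_shares(ld_means, np_total, p_min=0.1, p_max=0.9):
    """Equations 15-17: normalize the mean LD of each operator, clip the
    probabilities to [p_min, p_max], and convert them to individual counts."""
    total = sum(ld_means)
    nld = [v / total for v in ld_means]              # Equation 15
    prb = [max(p_min, min(p_max, v)) for v in nld]   # Equation 16 (clipped)
    prb = [p / sum(prb) for p in prb]                # re-normalize after clipping
    ps = [int(p * np_total) for p in prb]            # Equation 17
    ps[ps.index(max(ps))] += np_total - sum(ps)      # assumed rounding repair
    return ps
```

The clipping keeps every operator active (at least a 10% share of the probability mass), so even a currently poor operator keeps evolving a few individuals and can recover later.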

[Figure 3: Performance profiles comparing MODE-ILNS, Q-MODE, D-MODE, QD-MODE and FDC-MODE, based on (a) the average computational time and (b) the average number of FEs.]
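Performance profiles such as those in Figures 1-3 follow Equations 27-28; below is a minimal sketch of their computation, assuming a matrix of per-problem costs (computational time or FEs) with one row per algorithm.

```python
def performance_profile(times, taus):
    """Equations 27-28: for each algorithm s, the fraction of problems whose
    performance ratio r_ps = t_ps / min_s t_ps is within a factor tau."""
    n_alg = len(times)          # times[s][p]: cost of algorithm s on problem p
    n_prob = len(times[0])
    best = [min(times[s][p] for s in range(n_alg)) for p in range(n_prob)]
    ratios = [[times[s][p] / best[p] for p in range(n_prob)]
              for s in range(n_alg)]
    # cumulative fraction of problems solved within each factor tau
    return [[sum(r <= tau for r in ratios[s]) / n_prob for tau in taus]
            for s in range(n_alg)]
```

An algorithm whose curve reaches 1.0 at small τ dominates: it is near-best on every problem, which is how the figures above are read.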

Table 15: Average ranking of MODE-ILNS, Var1, Var2 and Var3 on CEC2010 with 30D test problems

Algorithm | Best | Mean
MODE-ILNS | 2.08 | 1.64
Var1 | 2.11 | 2.39
Var2 | 2.08 | 2.36
Var3 | 3.75 | 3.61

Regarding the Wilcoxon test results, as presented in Table 14, the proposed algorithm was significantly superior to all three variants for the average results obtained. For the best results obtained, it is also significantly better than Var3, while there were no significant differences in comparison with Var1 and Var2, as all achieved the optimal results on 15 test problems. Further analysis was done by conducting a Friedman test to rank all algorithms; the results recorded in Table 15 reveal the superiority of the proposed algorithm. One reason for this superiority is its ability to put more emphasis on the best-performing DE mutation strategy during the evolutionary process.

Table 16: Summary of comparisons of the proposed MODE-ILNS against AH-DEa, SAMO-GA, SAMO-DE, AIS-ZHY, ECHT-EP2, ECHT-DE, APF-GA, ISAMODE-CMA, DEg, and rank-iMDDE, based on the average results, where 'Dec.' is the statistical decision based on the Wilcoxon signed-rank test results

Algorithms | Better | Equal | Worse | Dec.
MODE-ILNS vs. AH-DEa | 7 | 15 | 0 | +
MODE-ILNS vs. SAMO-GA | 11 | 9 | 2 | +
MODE-ILNS vs. SAMO-DE | 8 | 14 | 0 | +
MODE-ILNS vs. AIS-ZHY | 6 | 16 | 0 | +
MODE-ILNS vs. ECHT-EP2 | 6 | 16 | 0 | +
MODE-ILNS vs. ECHT-DE | 6 | 16 | 0 | +
MODE-ILNS vs. APF-GA | 10 | 11 | 1 | +
MODE-ILNS vs. ISAMODE-CMA | 2 | 20 | 0 | ≈
MODE-ILNS vs. DEg | 1 | 21 | 0 | ≈
MODE-ILNS vs. rank-iMDDE | 3 | 19 | 0 | ≈

4.3. Comparisons with state-of-the-art algorithms
From the above-mentioned analysis, it was found that MODE-ILNS with CS = 125, NP_init = 150 and NP_min = 40 is the best variant. So, this variant is compared with state-of-the-art algorithms by solving the four benchmark problem sets (CEC2006, CEC2010, CEC2011 and CEC2017).

4.3.1. Comparison to the state-of-the-art algorithms for CEC2006
In this section, to verify the performance of the proposed MODE-ILNS algorithm, comparisons were carried out with:
1. an adaptive hybrid DE algorithm (AH-DEa) (Asafuddoula et al., 2014);
2. a self-adaptive multi-operator genetic algorithm (SAMO-GA) (Elsayed et al., 2011c);
3. a self-adaptive algorithm with a multi-operator strategy (SAMO-DE) (Elsayed et al., 2011c);
4. an improved version of SAMO-DE (ISAMODE-CMA) (Elsayed et al., 2013b);
5. adaptive penalty formulation with GA (APF-GA) (Tessema and Yen, 2009);
6. an evolutionary programming algorithm based on an ensemble of constraint-handling techniques (ECHT-EP2) (Mallipeddi and Suganthan, 2010a);
7. DE based on an ensemble of constraint-handling techniques (ECHT-DE) (Mallipeddi and Suganthan, 2010a);
8. a rank-based multi-operator DE algorithm (rank-iMDDE) (Gong et al., 2014);
9. DE with gradient-based mutation (DEg) (Takahama and Sakai, 2009); and
10. an artificial immune system based approach for COPs (AIS-ZHY) (Zhang et al., 2014).

The detailed results of MODE-ILNS, based on 200000 FEs, along with those obtained from the state-of-the-art algorithms, are presented in supplementary material Tables XI, XII, XIII and XIV, which show the mean and standard deviation (Std.) results obtained from 25 runs. It must be mentioned here that MODE-ILNS used 200000 fitness evaluations (FEs), while ISAMODE-CMA, SAMO-DE, ECHT-EP2, ECHT-DE, SAMO-GA, rank-iMDDE and AH-DEa used 240000 FEs, and APF-GA, MDE and DEg used 500000 FEs. It should also be mentioned that all algorithms solved 22 out of 24 test problems; thus, the analysis is based on 22 test problems. The proposed MODE-ILNS algorithm was able to obtain the optimal solutions for all test problems, with feasibility and success rates equal to 100%.

Table 16 presents a summary of the comparison between MODE-ILNS and the state-of-the-art algorithms. MODE-ILNS is better than AH-DEa, SAMO-GA, SAMO-DE, AIS-ZHY, ECHT-EP2, APF-GA, ECHT-DE, ISAMODE-CMA, DEg and rank-iMDDE on 7, 11, 8, 6, 6, 10, 6, 2, 1 and 3 test problems, respectively, while MODE-ILNS is inferior to SAMO-GA and APF-GA on 2 and 1 test problems, respectively. Regarding the Wilcoxon test results, MODE-ILNS was significantly better than AH-DEa, SAMO-GA, SAMO-DE, AIS-ZHY, ECHT-EP2, ECHT-DE and APF-GA.

Table 17: Average ranking of MODE-ILNS, ECHT-DE, AIS-ZHY, ISAMODE-CMA, SAMO-DE, ECHT-EP2, DEg, AH-DEa, SAMO-GA, APF-GA and rank-iMDDE, by the Friedman test for the 22 functions, in terms of mean value

Algorithm | Mean rank | Order
MODE-ILNS | 4.70 | 1
ECHT-DE | 6.45 | 6
AIS-ZHY | 5.84 | 5
ISAMODE-CMA | 5.14 | 2
SAMO-DE | 6.59 | 8
ECHT-EP2 | 5.80 | 4
DEg | 7.75 | 10
AH-DEa | 6.55 | 7
SAMO-GA | 7.86 | 11
APF-GA | 7.07 | 9
rank-iMDDE | 5.25 | 3

[Figure 4: Average ranking of MODE-ILNS, ECHT-DE, AIS-ZHY, ISAMODE-CMA, SAMO-DE, ECHT-EP2, DEg, AH-DEa, SAMO-GA, APF-GA and rank-iMDDE, by the Friedman test for 22 functions, in terms of mean value.]

Although there is no significant difference between MODE-ILNS and ISAMODE-CMA, DEg and rank-iMDDE, there is a bias towards MODE-ILNS in the number of better functions. One advantage of MODE-ILNS is its ability to reach the optimal solution faster than ISAMODE-CMA, DEg and rank-iMDDE: the average number of FEs consumed by MODE-ILNS is 53377, while ISAMODE-CMA, DEg and rank-iMDDE consumed 76420, 79659 and 65906 FEs, respectively, meaning that MODE-ILNS saves 30.15%, 32.99% and 19.01% of the FEs in comparison to them, respectively. Furthermore, a Friedman test was used to rank all the algorithms according to the mean results obtained, with the mean ranks presented in Table 17 and Figure 4. From Table 17, MODE-ILNS ranks first among the 11 algorithms on the 22 test functions.

The proposed MODE-ILNS has the ability to deal with different kinds of COPs. It performs well on problems with different numbers of constraints, such as problems with a small number of constraints (g06 and g08), a moderate number of constraints (g04), and a large number of constraints (such as g16 and g20). Also, MODE-ILNS performs very well on problems with low (g06, g11 and g24), moderate (g07 and g14) and high (g02 and g20) dimensionality, with different types of combined constraints (linear, nonlinear, equality and inequality). Regarding the feasible region, the proposed MODE-ILNS is able to successfully solve problems with very small (g05, g11, g13, g17 and g23), moderate (g04 and g19), very large (g02) or even disjoint (g12) feasible regions.

Also, the proposed MODE-ILNS is able to deal with large search spaces (based on the intervals of the decision variables) with a very small feasible region (g10). Moreover, the algorithm can find the optimal solution in problems where such solutions lie on the feasible region boundaries (such as g01, g02, g03, g04, g05, g06, g07 and g09).

4.3.2. Comparison to the state-of-the-art algorithms for CEC2010
In this section, the proposed MODE-ILNS is tested by solving the test set of the CEC2010 (Mallipeddi and Suganthan, 2010b) constrained optimization competition. MODE-ILNS is compared with the following state-of-the-art algorithms:
1. DE with an archive and gradient-based mutation (DEag) (Takahama and Sakai, 2010), which won the CEC2010 COP competition;
2. self-adaptive multi-operator DE (SAMODE) (Elsayed et al., 2011b);
3. DE combined with DE-DBmax (DE-DBmax) (Hamza et al., 2012);
4. co-evolutionary comprehensive learning particle swarm optimizer (Co-CLPSO) (Liang et al., 2010);
5. adaptive ranking mutation operator-based DE (ECHT-ARMOR-DE) (Gong et al., 2015);
6. elitist artificial bee colony (eABC) (Mezura-Montes and Velez-Koeppel, 2010);
7. multi-operator GA (SAMO-GA) (Elsayed et al., 2011b); and
8. constraint-consensus mutation based DE (DEbavCC) (Hamza et al., 2016).

The detailed results obtained from MODE-ILNS and the state-of-the-art algorithms, are presented in the supplementary material Tables XV, XVI, and XVII. For 10D and 30D test problems, MODE-ILNS, SAMODE, DE-DBmax and DEbavDBmax were able to reach a 100% feasibility rate, while DEag achieved 100% feasibility ratio for only 35 out of 36 test problems, as it only got a 12% feasibility ratio for C12 with 30D. All other algorithms were also unable to attain the 100% feasibility rate.

4.3.3. Comparison to the state-of-the-art algorithms for CEC2011 In this section, the proposed MODE-ILNS is judged by solving 10 real-world application problems, taken from the CEC2011 (Das and Suganthan, 2010) competition on real-world optimization problems. The performance of the proposed MODE-ILNS is compared with the following state-of-the-art algorithms. 1. a continuous DE ant-stigmergy algorithm (CDASA) (Korošec and Šilc, 2011) 2. an adaptive DE algorithm (ADE) (Asafuddoula et al., 2011). 3. an ensemble DE algorithm (EPSDE) (Mallipeddi and Suganthan, 2011). 4. SAMODE (Elsayed et al., 2011a). 5. DE with adaptive crossover rate (DE-Acr) (Mandal et al., 2011). 6. a competitive DE with local search (CDELS) (Reynoso-Meza et al., 2011),

Table 18 shows the summary of the quality of solutions obtained. From this table, it is clear that MODEILNS was able to obtain better results, for many problems of both the 10D and 30D test problems. Based on the Wilcoxon test results, for 10D test problems, MODE-ILNS is significantly better than DEbavDBmax, DE-DBmax, eABC, Co-CLPSO and SAMO-GA for the average results obtained, and better than eABC and Co-CLPSO for the best results obtained. Based on 30D test problems, MODE-ILNS was found to be significantly better than all the algorithms for the best results obtained, except DEbavDBmax and DE-DBmax, which were statistically similar. Regarding the average fitness values, MODE-ILNS was statistically better than DEag, eABC, Co-CLPSO, SAMOGA and ECHT-ARMOR-DE, and statistically similar to DEbavDBmax, SAMODE, and DE-DBmax.

The detailed results are presented in the supplementary material Table XVIII. All these algorithms solve these test problems as unconstrained optimization problems, and there is no information about whether the optimal solution obtained is feasible or not. To compare the performance of the proposed MODEILNS with the state-of-the-art algorithms, a Friedman test is conducted. Table 20 shows the average ranking of the seven algorithms. The highest-ranking is shown in boldface. As seen, MODE-ILNS and DE-Acr obtained the best ranking. Regarding the quality of solutions, a summary has been reported in Table 21. MODE-ILNS was better than ADE, EPSDE, SAMODE, DE-ACr, CDELS and CDASA for 8, 8, 7, 5, 10 and 10 test problems, respectively, While ADE, EPSDE, SAMODE, DE-ACr, CDELS, and CDASA are better than MODE-ILNS in 2, 2, 3, 5, 0, and 0 test problems, respectively. Based on the Wilcoxon test, it was found that MODE-ILNS is significantly better than ADE, EPSDE, CDELS, and CDASA, while there is no significant difference with SAMODE and DE-Acr.

Furthermore, a Friedman test was conducted to rank all algorithms, based on the best and average results. Table 19 and Figure 5 show the results, from which it is clear that MODE-ILNS is ranked first for both 10D and 30D. MODE-ILNS performs well on problems with different numbers of constraints: a small number (C03), a moderate number (C01), and a large number (such as C04 and C16). It also performs very well on problems with different types of combined constraints (linear, nonlinear, equality, and inequality). Regarding the feasible region, the proposed MODE-ILNS is able to successfully solve problems with tiny (C05, C09, and C17), moderate (C07 and C08), and very large (C01) feasible regions.
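The mean ranks in Table 19 are obtained by ranking the algorithms on every test problem (rank 1 = best) and averaging these ranks over all problems; the Friedman test then assesses whether the rank differences are significant. A minimal sketch of the ranking step (an illustrative helper of ours, not the statistical package used in the experiments):

```python
def friedman_mean_ranks(results):
    """results[p][s] is the error of solver s on problem p (lower is better).
    Returns the mean rank of every solver across all problems, with tied
    values receiving the average of the ranks they span."""
    n_solvers = len(results[0])
    totals = [0.0] * n_solvers
    for row in results:
        order = sorted(range(n_solvers), key=lambda s: row[s])
        i = 0
        while i < n_solvers:
            j = i
            while j + 1 < n_solvers and row[order[j + 1]] == row[order[i]]:
                j += 1
            for k in range(i, j + 1):
                totals[order[k]] += (i + j) / 2 + 1  # average tied rank
            i = j + 1
    return [t / len(results) for t in totals]
```

The solver with the smallest mean rank is the one reported as ranked first.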

4.3.4. Comparison to the state-of-the-art algorithms for CEC2017

This section presents the solutions obtained by MODE-ILNS for the CEC2017 test problems with 10D, 30D and 50D. The proposed algorithm is compared with the following well-known algorithms.

Also, the proposed algorithm is able to deal with large search spaces (based on the intervals of the decision variables) with a tiny feasible region (C09 and C10). Moreover, the algorithm can find the optimal solution in problems where such a solution lies on the feasible region boundaries (such as C03, C09, and C17).

Table 18: Summary of comparison of MODE-ILNS and state-of-the-art algorithms for CEC2010

| Algorithms | Criteria | Better (10D) | Equal (10D) | Worse (10D) | Dec. (10D) | Better (30D) | Equal (30D) | Worse (30D) | Dec. (30D) |
| MODE-ILNS vs. DEbavDBmax | Best | 3 | 12 | 3 | ≈ | 7 | 9 | 2 | ≈ |
| MODE-ILNS vs. DEbavDBmax | Average | 13 | 5 | 0 | + | 11 | 3 | 4 | ≈ |
| MODE-ILNS vs. SAMODE | Best | 2 | 12 | 4 | ≈ | 13 | 2 | 3 | + |
| MODE-ILNS vs. SAMODE | Average | 12 | 2 | 4 | ≈ | 12 | 1 | 5 | ≈ |
| MODE-ILNS vs. DEag | Best | 5 | 10 | 3 | ≈ | 15 | 1 | 2 | + |
| MODE-ILNS vs. DEag | Average | 8 | 6 | 4 | ≈ | 15 | 0 | 3 | + |
| MODE-ILNS vs. DE-DBmax | Best | 5 | 10 | 3 | ≈ | 8 | 8 | 2 | ≈ |
| MODE-ILNS vs. DE-DBmax | Average | 16 | 2 | 0 | + | 11 | 3 | 4 | ≈ |
| MODE-ILNS vs. eABC | Best | 16 | 1 | 1 | + | 16 | 0 | 2 | + |
| MODE-ILNS vs. eABC | Average | 17 | 0 | 1 | + | 17 | 0 | 1 | + |
| MODE-ILNS vs. Co-CLPSO | Best | 14 | 3 | 1 | + | 15 | 1 | 2 | + |
| MODE-ILNS vs. Co-CLPSO | Average | 16 | 0 | 2 | + | 16 | 0 | 2 | + |
| MODE-ILNS vs. SAMO-GA | Best | 10 | 5 | 3 | ≈ | 13 | 1 | 4 | + |
| MODE-ILNS vs. SAMO-GA | Average | 14 | 0 | 4 | + | 13 | 0 | 5 | + |
| MODE-ILNS vs. ECHT-ARMOR-DE | Best | 6 | 10 | 2 | ≈ | 11 | 4 | 3 | + |
| MODE-ILNS vs. ECHT-ARMOR-DE | Average | 8 | 5 | 5 | ≈ | 14 | 1 | 3 | + |

Table 19: Average ranking of MODE-ILNS, DEbavDBmax, SAMODE, DEag, DE-DBmax, eABC, Co-CLPSO, SAMO-GA and ECHT-ARMOR-DE, by the Friedman test for CEC2010

| Algorithms | 10D rank (best) | 10D rank (average) | 10D overall | 30D rank (best) | 30D rank (average) | 30D overall |
| MODE-ILNS | 3.86 | 2.67 | 3.27 | 2.83 | 2.72 | 2.78 |
| DEbavDBmax | 3.81 | 4.39 | 4.10 | 2.97 | 2.94 | 2.96 |
| SAMODE | 3.47 | 4.11 | 3.79 | 4.47 | 3.81 | 4.14 |
| DEag | 4.22 | 4.00 | 4.11 | 6.81 | 5.47 | 6.14 |
| DE-DBmax | 4.31 | 5.47 | 4.89 | 3.11 | 3.67 | 3.39 |
| eABC | 8.36 | 8.42 | 8.39 | 7.94 | 8.39 | 8.17 |
| Co-CLPSO | 6.81 | 6.78 | 6.80 | 6.89 | 7.08 | 6.99 |
| SAMO-GA | 5.47 | 5.33 | 5.40 | 5.50 | 5.00 | 5.25 |
| ECHT-ARMOR-DE | 4.69 | 3.83 | 4.26 | 4.47 | 5.92 | 5.20 |


[Figure 5 appears here: two bar charts of the overall Friedman mean ranks (as in Table 19) of the nine compared algorithms.]

Figure 5: Average ranking achieved by the Friedman test for CEC2010: (a) 10D and (b) 30D.


For the 10D test problems and based on the best results, MODE-ILNS was superior to LSHADE44, LSHADE44-IEpsilon, CAL-SHADE and UDE for 16, 12, 18 and 18 test problems, respectively, and based on the average results for 17, 17, 21 and 21 test problems, respectively, while MODE-ILNS was inferior to them in 7, 8, 6 and 5 (best results) and 8, 7, 5 and 5 (average results), respectively. Also, it is clear that MODE-ILNS was able to obtain better results for most of both the 30D and 50D test problems, for both best and average results. The Wilcoxon test was used to statistically compare the performance of the algorithms, with the results presented in Table 22. It shows that MODE-ILNS was statistically better than all other algorithms, for both the best and average results obtained for the 10D, 30D and 50D test problems, except LSHADE44-IEpsilon for average results for 50D, which was statistically similar. Although there was no significant difference between MODE-ILNS and LSHADE44-IEpsilon for average results for the 50D test problems, there was a bias towards MODE-ILNS in the number of better problems. Regarding the Friedman rank test, the proposed algorithm was the best for 10D, 30D and 50D, based on both the best and average results obtained, as presented in Table 23. Finally, it can be concluded from the performance profiles, depicted in Figure 6, that MODE-ILNS was superior to all other algorithms: MODE-ILNS has the highest probability at the start for all cases, and was the first to reach a probability of 1.0.
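A performance profile (Dolan and Moré, 2002) plots, for each solver s, the fraction ρs(τ) of problems on which that solver's cost is within a factor τ of the best cost achieved by any solver; the curve that starts highest and reaches 1.0 first indicates the most robust solver. A minimal sketch of this computation, assuming strictly positive costs (the helper name is ours):

```python
def performance_profile(costs, taus):
    """costs[s][p] is the (strictly positive) cost of solver s on problem p.
    Returns rho, where rho[s][i] is the fraction of problems on which
    solver s is within a factor taus[i] of the best solver."""
    n_solvers, n_problems = len(costs), len(costs[0])
    best = [min(costs[s][p] for s in range(n_solvers))
            for p in range(n_problems)]
    rho = []
    for s in range(n_solvers):
        ratios = [costs[s][p] / best[p] for p in range(n_problems)]
        rho.append([sum(r <= tau for r in ratios) / n_problems
                    for tau in taus])
    return rho
```

Plotting rho[s] against taus for every solver reproduces curves of the kind shown in Figure 6.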

Table 20: Average Ranking achieved by Friedman test for CEC2011

| Algorithms | Mean rank |
| MODE-ILNS | 2.20 |
| ADE | 4.40 |
| EPSDE | 3.70 |
| SAMODE | 3.10 |
| DE-Acr | 2.20 |
| CDELS | 6.20 |
| CDASA | 6.20 |

Table 21: Comparison summary of MODE-ILNS against ADE, EPSDE, SAMODE, DE-Acr, CDELS and CDASA for CEC2011

| Algorithms | Better | Equal | Worse | Dec. |
| MODE-ILNS vs. ADE | 8 | 0 | 2 | + |
| MODE-ILNS vs. EPSDE | 8 | 0 | 2 | + |
| MODE-ILNS vs. SAMODE | 7 | 0 | 3 | ≈ |
| MODE-ILNS vs. DE-Acr | 5 | 0 | 5 | ≈ |
| MODE-ILNS vs. CDELS | 10 | 0 | 0 | + |
| MODE-ILNS vs. CDASA | 10 | 0 | 0 | + |

1. an enhanced version of LSHADE that uses four mutation strategies to create trial vectors (LSHADE44) (Poláková, 2017);
2. LSHADE44 with an improved ε constraint-handling technique (LSHADE44-IEpsilon) (Fan et al., 2018);
3. an LSHADE algorithm with adaptive constraint handling (CAL-SHADE) (Zamuda, 2017);
4. a unified differential evolution algorithm (UDE) (Trivedi et al., 2017);
5. a matrix adaptation evolution strategy with an ε constraint-handling method (cMAg-ES) (Hellwig and Beyer, 2018).

5. Conclusion and Future Work

Many DE variants for solving constrained optimization problems have been developed. Since no single DE search operator has proven able to solve all kinds of COPs, the evolutionary computation community has adopted methods and frameworks that utilize more than one operator within a single algorithmic framework. Although these frameworks have shown superiority over single-operator algorithms, their designs were still based on trial-and-error approaches, and no information from the problem or function to be solved has been used. Consequently, a new multi-operator DE algorithm based on the problem landscape (objective function and constraints) has been introduced. Our proposed algorithm was developed using powerful DE mutation strategies; then, based on the problem landscape, more weight was put on the best-performing DE mutation strategy during the optimization process.
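The operator-weighting idea summarized above can be sketched generically as follows. Note that this is only an illustrative probability-matching scheme under our own assumptions (the helper names, the reward signal and the p_min floor); it is not the authors' exact landscape-based selection rule, which additionally uses landscape information:

```python
import random

def operator_probabilities(scores, p_min=0.05):
    """Map per-operator quality scores (e.g. recent success counts) to
    selection probabilities, keeping a floor p_min for every operator so
    that no mutation strategy is discarded entirely."""
    total = sum(scores) or 1.0  # guard against an all-zero generation
    k = len(scores)
    return [p_min + (1 - k * p_min) * s / total for s in scores]

def select_operator(probs):
    """Roulette-wheel selection of a mutation strategy index."""
    r, acc = random.uniform(0, sum(probs)), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1
```

In each generation, the better an operator scored recently, the more offspring it is asked to generate, while the floor keeps every strategy available in case the landscape changes.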

The detailed results achieved by MODE-ILNS and the other well-known algorithms used in the comparison are presented in Tables XIX, XX and XXI of the supplementary material. MODE-ILNS was able to obtain a 100% feasibility ratio for 26 out of the 28 test problems for 10D, 30D and 50D, while LSHADE44 achieved a 100% feasibility ratio for 21 out of 28 test problems for 10D, 30D and 50D. LSHADE44-IEpsilon achieved a 100% feasibility ratio for 22 out of 28 test problems for both 10D and 30D, and for 21 test problems for 50D. CAL-SHADE attained a 100% feasibility ratio for 15, 14 and 12 test problems for 10D, 30D and 50D, respectively. UDE achieved a 100% feasibility ratio for 19, 21 and 20 test problems for 10D, 30D and 50D, respectively. Considering the quality of the obtained solutions, a summary of the results is presented in Table 22.
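For reference, a feasibility ratio is simply the fraction of independent runs on a problem whose final best solution satisfies all constraints; a trivial sketch (the 25-run count in the example follows the usual CEC protocol and is an assumption here):

```python
def feasibility_ratio(runs):
    """runs holds one flag per independent run of a problem, True when the
    run's final best solution satisfied all constraints. A result of 1.0
    corresponds to the 100% feasibility ratios reported above."""
    return sum(bool(f) for f in runs) / len(runs)
```

For example, 25 feasible runs out of 25 give a ratio of 1.0, i.e. a 100% feasibility ratio.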

Table 22: Comparison summary between MODE-ILNS, LSHADE44, LSHADE44-IEpsilon, CAL-SHADE, UDE and cMAg-ES, based on both the best and average results obtained by solving the CEC2017 test problems, where 'Dec.' denotes the statistical decision based on the Wilcoxon signed-rank test results.

| Dim. | Algorithms | Criteria | Better | Equal | Worse | Dec. |
| 10D | MODE-ILNS vs. LSHADE44 | Best | 16 | 5 | 7 | + |
| 10D | MODE-ILNS vs. LSHADE44 | Average | 17 | 3 | 8 | + |
| 10D | MODE-ILNS vs. LSHADE44-IEpsilon | Best | 12 | 8 | 8 | + |
| 10D | MODE-ILNS vs. LSHADE44-IEpsilon | Average | 17 | 4 | 7 | + |
| 10D | MODE-ILNS vs. CAL-SHADE | Best | 18 | 4 | 6 | + |
| 10D | MODE-ILNS vs. CAL-SHADE | Average | 21 | 2 | 5 | + |
| 10D | MODE-ILNS vs. UDE | Best | 18 | 4 | 6 | + |
| 10D | MODE-ILNS vs. UDE | Average | 21 | 2 | 5 | + |
| 10D | MODE-ILNS vs. cMAg-ES | Best | 12 | 8 | 8 | + |
| 10D | MODE-ILNS vs. cMAg-ES | Average | 19 | 4 | 5 | + |
| 30D | MODE-ILNS vs. LSHADE44 | Best | 19 | 3 | 6 | + |
| 30D | MODE-ILNS vs. LSHADE44 | Average | 19 | 0 | 9 | + |
| 30D | MODE-ILNS vs. LSHADE44-IEpsilon | Best | 18 | 4 | 6 | + |
| 30D | MODE-ILNS vs. LSHADE44-IEpsilon | Average | 20 | 0 | 8 | + |
| 30D | MODE-ILNS vs. CAL-SHADE | Best | 21 | 4 | 3 | + |
| 30D | MODE-ILNS vs. CAL-SHADE | Average | 23 | 0 | 5 | + |
| 30D | MODE-ILNS vs. UDE | Best | 20 | 2 | 6 | + |
| 30D | MODE-ILNS vs. UDE | Average | 18 | 2 | 8 | + |
| 30D | MODE-ILNS vs. cMAg-ES | Best | 16 | 3 | 9 | ≈ |
| 30D | MODE-ILNS vs. cMAg-ES | Average | 18 | 2 | 8 | + |
| 50D | MODE-ILNS vs. LSHADE44 | Best | 20 | 1 | 7 | + |
| 50D | MODE-ILNS vs. LSHADE44 | Average | 19 | 0 | 9 | + |
| 50D | MODE-ILNS vs. LSHADE44-IEpsilon | Best | 22 | 0 | 6 | + |
| 50D | MODE-ILNS vs. LSHADE44-IEpsilon | Average | 18 | 0 | 10 | + |
| 50D | MODE-ILNS vs. CAL-SHADE | Best | 21 | 1 | 6 | + |
| 50D | MODE-ILNS vs. CAL-SHADE | Average | 21 | 0 | 7 | + |
| 50D | MODE-ILNS vs. UDE | Best | 22 | 0 | 6 | + |
| 50D | MODE-ILNS vs. UDE | Average | 21 | 0 | 7 | + |
| 50D | MODE-ILNS vs. cMAg-ES | Best | 14 | 3 | 11 | ≈ |
| 50D | MODE-ILNS vs. cMAg-ES | Average | 17 | 1 | 10 | + |

Table 23: Average ranking of MODE-ILNS, LSHADE44, LSHADE44-IEpsilon, CAL-SHADE, UDE and cMAg-ES by the Friedman test for CEC2017

| Algorithms | 10D rank (best) | 10D rank (average) | 30D rank (best) | 30D rank (average) | 50D rank (best) | 50D rank (average) |
| MODE-ILNS | 2.77 | 2.34 | 2.36 | 2.29 | 2.38 | 2.55 |
| LSHADE44 | 4.05 | 3.91 | 4.36 | 4.02 | 3.96 | 3.93 |
| LSHADE44-IEpsilon | 3.46 | 2.93 | 3.09 | 2.84 | 3.45 | 3.04 |
| CAL-SHADE | 4.30 | 4.93 | 4.36 | 4.63 | 4.55 | 4.57 |
| UDE | 3.61 | 3.52 | 3.59 | 3.91 | 3.80 | 3.79 |
| cMAg-ES | 2.80 | 3.38 | 3.25 | 3.32 | 2.86 | 3.13 |

[Figure 6 appears here: six performance-profile plots of ρs(τ) versus τ, each comparing the six algorithms.]

Figure 6: Performance profiles comparing MODE-ILNS, LSHADE44, LSHADE44-IEpsilon, CAL-SHADE, UDE and cMAg-ES for (a) 10D best results obtained, (b) 10D average results obtained, (c) 30D best results obtained, (d) 30D average results obtained, (e) 50D best results obtained and (f) 50D average results obtained.

The proposed MODE-ILNS was used to solve four benchmark data sets (CEC2006, CEC2010, CEC2011 and CEC2017) with different dimensions and characteristics. From the results obtained, it was concluded that MODE-ILNS was 100% successful in achieving statistically better or similar results to other algorithms. MODE-ILNS also achieved significant improvements, with savings in computational time and number of fitness evaluations of up to 69% and 33%, respectively. The Friedman ranking and Wilcoxon tests were also used to compare the algorithms, with the results showing the superiority of the proposed algorithm over well-known algorithms. Generally, MODE-ILNS was statistically competitive with several existing algorithms. MODE-ILNS was able to solve constrained problems with different characteristics, such as problems with small, moderate and large numbers of constraints; problems with small, moderate and high dimensionality; and problems with different types of constraints (linear, nonlinear, equality and inequality). It was also able to solve problems with tiny, moderate, very large and even disjoint feasible regions. In the future, we intend to investigate the use of other fitness landscape measures to solve more complex COPs and more real-world application problems. We also intend to extend the proposed algorithm to solve larger-scale problems.

Conflict of Interest: All the authors declare that they have no conflict of interest.

References

Asafuddoula, M., Ray, T., and Sarker, R. (2011). An adaptive differential evolution algorithm and its performance on real world optimization problems. In 2011 IEEE Congress of Evolutionary Computation (CEC), pages 1057–1062. IEEE.

Barbosa, H. J., Bernardino, H. S., and Barreto, A. M. (2013). Using performance profiles for the analysis and design of benchmark experiments. In Advances in Metaheuristics, pages 21–36. Springer.

Bischl, B., Mersmann, O., Trautmann, H., and Preuß, M. (2012). Algorithm selection based on exploratory landscape analysis and cost-sensitive learning. In Proceedings of the 14th annual conference on Genetic and evolutionary computation, pages 313–320. ACM.

Chicano, F., Luque, G., and Alba, E. (2012). Autocorrelation measures for the quadratic assignment problem. Applied Mathematics Letters, 25(4):698–705.

Consoli, P. A., Minku, L. L., and Yao, X. (2014). Dynamic selection of evolutionary algorithm operators based on online learning and fitness landscape metrics. In Simulated Evolution and Learning, pages 359–370. Springer.

Das, S., Mullick, S. S., and Suganthan, P. (2016). Recent advances in differential evolution – an updated survey. Swarm and Evolutionary Computation, 27:1–30.

Das, S. and Suganthan, P. N. (2010). Problem definitions and evaluation criteria for cec 2011 competition on testing evolutionary algorithms on real world optimization problems. Jadavpur University, Nanyang Technological University, Kolkata.

Deb, K. (2000). An efficient constraint handling method for genetic algorithms. Computer Methods in Applied Mechanics and Engineering, 186(2):311–338.

Dolan, E. D. and Moré, J. J. (2002). Benchmarking optimization software with performance profiles. Mathematical Programming, 91(2):201–213.

Elsayed, S., Hamza, N., and Sarker, R. (2016a). Testing united multi-operator evolutionary algorithmsii on single objective optimization problems. In Evolutionary Computation (CEC), 2016 IEEE Congress on, pages 2966–2973. IEEE.

Asafuddoula, M., Ray, T., and Sarker, R. (2014). An adaptive hybrid differential evolution algorithm for single objective optimization. Applied Mathematics and Computation, 231:601–618.

Elsayed, S., Sarker, R., and Coello, C. C. (2016b). Enhanced multi-operator differential evolution for constrained optimization. In Evolutionary Computation (CEC), 2016 IEEE Congress on, pages 4191–4198. IEEE.

Attaviriyanupap, P., Kita, H., Tanaka, E., and Hasegawa, J. (2002). A hybrid ep and sqp for dynamic economic dispatch with nonsmooth fuel cost function. IEEE Power Engineering Review, 22(4):77–77.

Elsayed, S. M., Sarker, R., and Essam, D. L. (2013a). An improved self-adaptive differential evolution algorithm for optimization problems. Industrial Informatics, IEEE Transactions on, 9(1):89–99.


Elsayed, S. M., Sarker, R. A., and Essam, D. L. (2011a). Differential evolution with multiple strategies for solving cec2011 real-world numerical optimization problems. In Evolutionary Computation (CEC), 2011 IEEE Congress on, pages 1041–1048. IEEE.

Goldberg, D. E. (1989). Genetic algorithms in search, optimization, and machine learning. Addison-Wesley.

Gong, W., Cai, Z., and Liang, D. (2014). Engineering optimization by means of an improved constrained differential evolution. Computer Methods in Applied Mechanics and Engineering, 268:884–904.

Elsayed, S. M., Sarker, R. A., and Essam, D. L. (2011b). Multi-operator based evolutionary algorithms for solving constrained optimization problems. Computers & Operations Research, 38:1877–1896.

Gong, W., Cai, Z., and Liang, D. (2015). Adaptive ranking mutation operator based differential evolution for constrained optimization. IEEE transactions on cybernetics, 45(4):716–727.

Elsayed, S. M., Sarker, R. A., and Essam, D. L. (2011c). Multi-operator based evolutionary algorithms for solving constrained optimization problems. Computers & operations research, 38(12):1877–1896.

Hamza, N. M., Essam, D. L., and Sarker, R. A. (2016). Constraint consensus mutation-based differential evolution for constrained optimization. IEEE Transactions on Evolutionary Computation, 20(3):447–459.

Elsayed, S. M., Sarker, R. A., and Essam, D. L. (2012). On an evolutionary approach for constrained optimization problem solving. Applied Soft Computing, 12(10):3208–3227.

Hamza, N. M., Sarker, R. A., and Essam, D. L. (2012). Differential evolution with a mix of constraint consenus methods for solving a real-world optimization problem. In Evolutionary Computation (CEC), 2012 IEEE Congress on, pages 1–7. IEEE.

Elsayed, S. M., Sarker, R. A., and Essam, D. L. (2013b). An improved self-adaptive differential evolution algorithm for optimization problems. IEEE Transactions on Industrial Informatics,, 9:89–99.

Hansen, N., Auger, A., Finck, S., and Ros, R. (2010). Real-parameter black-box optimization benchmarking 2010: Experimental setup. PhD thesis, INRIA.

Elsayed, S. M., Sarker, R. A., and Essam, D. L. (2014). A self-adaptive combined strategies algorithm for constrained optimization using differential evolution. Applied Mathematics and Computation, 241:267–282.

Hansen, N., Finck, S., Ros, R., and Auger, A. (2009). Real-parameter black-box optimization benchmarking 2009: Noiseless functions definitions. PhD thesis, INRIA.

Fan, Z., Fang, Y., Li, W., Yuan, Y., Wang, Z., and Bian, X. (2018). LSHADE44 with an improved ε constraint-handling method for solving constrained single-objective optimization problems. In 2018 IEEE Congress on Evolutionary Computation (CEC), pages 1–8. IEEE.

Hellwig, M. and Beyer, H.-G. (2018). A matrix adaptation evolution strategy for constrained real-parameter optimization. In 2018 IEEE Congress on Evolutionary Computation (CEC), pages 1–8. IEEE.

Fogel, L. J., Owens, A. J., and Walsh, M. J. (1966). Artificial intelligence through simulated evolution.

Iorio, A. W. and Li, X. (2004). Solving rotated multiobjective optimization problems using differential evolution. In Australasian Joint Conference on Artificial Intelligence, pages 861–872. Springer.

Gao, W.-F., Yen, G. G., and Liu, S.-Y. (2015). A dualpopulation differential evolution with coevolution for constrained optimization. IEEE transactions on cybernetics, 45(5):1108–1121.

Jones, T. and Forrest, S. (1995). Fitness distance correlation as a measure of problem difficulty for genetic algorithms. In ICGA, volume 95, pages 184– 192.

García, S., Fernández, A., Luengo, J., and Herrera, F. (2010). Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Information Sciences, 180(10):2044–2064.

K.Deb (2012). Optimization for engineering design: Algorithms and examples. PHI Learning Pvt. Ltd.

Mandal, A., Das, A. K., Mukherjee, P., Das, S., and Suganthan, P. N. (2011). Modified differential evolution with local search algorithm for real world optimization. In Evolutionary Computation (CEC), 2011 IEEE Congress on, pages 1565–1572. IEEE.

Korošec, P. and Šilc, J. (2011). The continuous differential ant-stigmergy algorithm applied to real-world optimization problems. In Evolutionary Computation (CEC), 2011 IEEE Congress on, pages 1327– 1334. IEEE.

Mersmann, O., Bischl, B., Trautmann, H., Preuss, M., Weihs, C., and Rudolph, G. (2011). Exploratory landscape analysis. In Proceedings of the 13th annual conference on Genetic and evolutionary computation, pages 829–836. ACM.

Li, W., Li, S., Chen, Z., Zhong, L., and Ouyang, C. (2019). Self-feedback differential evolution adapting to fitness landscape characteristics. Soft Computing, 23(4):1151–1163.

Mezura-Montes, E. and Coello, C. A. C. (2003). Adding a diversity mechanism to a simple evolution strategy to solve constrained optimization problems. In Evolutionary Computation, 2003. CEC’03. The 2003 Congress on, volume 1, pages 6–13. IEEE.

Liang, J., Runarsson, T. P., Mezura-Montes, E., Clerc, M., Suganthan, P., Coello, C. C., and Deb, K. (2006). Problem definitions and evaluation criteria for the cec 2006 special session on constrained real-parameter optimization. Journal of Applied Mechanics, 41:8.

Mezura-Montes, E., Velázquez-Reyes, J., and Coello Coello, C. (2006). Modified differential evolution for constrained optimization. In Evolutionary Computation, 2006. CEC 2006. IEEE Congress on, pages 25–32. IEEE.

Liang, J. J., Zhigang, S., and Zhihui, L. (2010). Coevolutionary comprehensive learning particle swarm optimizer. In Evolutionary Computation (CEC), 2010 IEEE Congress on, pages 1–8. IEEE.

Mezura-Montes, E. and Velez-Koeppel, R. E. (2010). Elitist artificial bee colony for constrained real-parameter optimization. In Evolutionary Computation (CEC), 2010 IEEE Congress on, pages 1–8. IEEE.

Malan, K. and Engelbrecht, A. (2014a). Characterising the searchability of continuous optimisation problems for pso. Swarm Intelligence, 8(4):275–302.

Malan, K. M. and Engelbrecht, A. P. (2013). A survey of techniques for characterising fitness landscapes and some possible ways forward. Information Sciences, 241:148–163.

Mohamed, A. W. (2018). A novel differential evolution algorithm for solving constrained engineering optimization problems. Journal of Intelligent Manufacturing, 29(3):659–692.

Malan, K. M. and Engelbrecht, A. P. (2014b). Particle swarm optimisation failure prediction based on fitness landscape characteristics. In Swarm Intelligence (SIS), 2014 IEEE Symposium on, pages 1–9. IEEE.

Morgan, R. and Gallagher, M. (2017). Analysing and characterising optimization problems using length scale. Soft Computing, 21(7):1735–1752.

Mallipeddi, R. and Suganthan, P. N. (2010a). Ensemble of constraint handling techniques. IEEE Transactions on Evolutionary Computation, 14(4):561– 579.

Muñoz, M. A., Kirley, M., and Halgamuge, S. K. (2012). A meta-learning prediction model of algorithm performance for continuous optimization problems. In Parallel Problem Solving from Nature-PPSN XII, pages 226–235. Springer.

Mallipeddi, R. and Suganthan, P. N. (2010b). Problem definitions and evaluation criteria for the cec 2010 competition on constrained real-parameter optimization. Nanyang Technological University, Singapore.

Muñoz, M. A., Sun, Y., Kirley, M., and Halgamuge, S. K. (2015). Algorithm selection for black-box continuous optimization problems: A survey on methods and challenges. Information Sciences, 317:224–245.

Mallipeddi, R. and Suganthan, P. N. (2011). Ensemble differential evolution algorithm for cec2011 problems. In Evolutionary Computation (CEC), 2011 IEEE Congress on, pages 1557–1564. IEEE.

Neri, F. and Tirronen, V. (2010). Recent advances in differential evolution: a survey and experimental analysis. Artificial Intelligence Review, 33(1–2):61–106.

Pitzer, E. and Affenzeller, M. (2012). A comprehensive survey on fitness landscape analysis. In Recent Advances in Intelligent Engineering Systems, pages 161–191. Springer.

Sallam, K. M., Elsayed, S. M., Sarker, R. A., and Essam, D. L. (2017a). Differential evolution with landscape-based operator selection for solving numerical optimization problems. In Intelligent and Evolutionary Systems, pages 371–387. Springer.

Poláková, R. (2017). L-shade with competing strategies applied to constrained optimization. In 2017 IEEE congress on evolutionary computation (CEC), pages 1683–1689. IEEE.

Sallam, K. M., Elsayed, S. M., Sarker, R. A., and Essam, D. L. (2017b). Landscape-based adaptive operator selection mechanism for differential evolution. Information Sciences, 418:383–404.

Poursoltan, S. and Neumann, F. (2015). Ruggedness quantifying for constrained continuous fitness landscapes. In Evolutionary Constrained Optimization, pages 29–50. Springer.

Sallam, K. M., Elsayed, S. M., Sarker, R. A., and Essam, D. L. (2017c). Multi-method based orthogonal experimental design algorithm for solving cec2017 competition problems. In Evolutionary Computation (CEC), 2017 IEEE Congress on, pages 1350–1357. IEEE.

Price, K., Storn, R. M., and Lampinen, J. A. (2006). Differential evolution: a practical approach to global optimization. Springer Science & Business Media.

Sallam, K. M., Elsayed, S. M., Sarker, R. A., and Essam, D. L. (2018b). Improved united multioperator algorithm for solving optimization problems. In 2018 IEEE Congress on Evolutionary Computation (CEC), pages 1–8. IEEE.

Qin, A. K., Huang, V. L., and Suganthan, P. N. (2009). Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Transactions on Evolutionary Computation, 13:398.

Sallam, K. M., Sarker, R. A., and Essam, D. L. (2017d). Reduced search space mechanism for solving constrained optimization problems. Engineering Applications of Artificial Intelligence, 65:147–158.

Rajabi, M. M., Ataie-Ashtiani, B., and Janssen, H. (2015). Efficiency enhancement of optimized latin hypercube sampling strategies: Application to monte carlo uncertainty analysis and metamodeling. Advances in Water Resources, 76:127– 139.

Sallam, K. M., Sarker, R. A., Essam, D. L., and Elsayed, S. M. (2015). Neurodynamic differential evolution algorithm and solving cec2015 competition problems. In 2015 IEEE Congress on Evolutionary Computation (CEC), pages 1033–1040. IEEE.

Reynoso-Meza, G., Sanchis, J., Blasco, X., and Herrero, J. M. (2011). Hybrid de algorithm with adaptive crossover operator for solving real-world numerical optimization problems. In Evolutionary Computation (CEC), 2011 IEEE Congress on, pages 1551–1556. IEEE.

Si, C., An, J., Lan, T., Ußmüller, T., Wang, L., and Wu, Q. (2014). On the equality constraints tolerance of constrained optimization problems. Theoretical Computer Science, 551:55–65.

Skakovski, A. and Jędrzejowicz, P. (2019). An island-based differential evolution algorithm with the multi-size populations. Expert Systems with Applications, 126:308–320.

Rutkowski, L. (2008). Computational intelligence: methods and techniques. Springer. Sallam, K., Elsayed, S., Sarker, R., and Essam, D. (2018a). Landscape-based differential evolution for constrained optimization problems. In 2018 IEEE Congress on Evolutionary Computation (CEC), pages 1–8. IEEE.

Storn, R. and Price, K. (1997). Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11(4):341–359.

Sallam, K. M., Elsayed, S. M., Sarker, R. A., and Essam, D. L. (2016). Two-phase differential evolution framework for solving optimization problems. In Computational Intelligence (SSCI), 2016 IEEE Symposium Series on, pages 1–8. IEEE.

Suganthan, P. N., Hansen, N., Liang, J. J., Deb, K., Chen, Y.-P., Auger, A., and Tiwari, S. (2005). Problem definitions and evaluation criteria for the cec 2005 special session on real-parameter optimization. KanGAL report, 2005005.

Sutton, A. M., Whitley, D., Lunacek, M., and Howe, A. (2006). Pso and multi-funnel landscapes: how cooperation might limit exploration. In Proceedings of the 8th annual conference on Genetic and evolutionary computation, pages 75–82. ACM.

Wu, G., Mallipeddi, R., and Suganthan, P. (2017). Problem definitions and evaluation criteria for the cec 2017 competition on constrained real-parameter optimization. National University of Defense Technology, Changsha, Hunan, PR China and Kyungpook National University, Daegu, South Korea and Nanyang Technological University, Singapore, Technical Report.

Takahama, T. and Sakai, S. (2009). Solving difficult constrained optimization problems by the ε constrained differential evolution with gradient-based mutation. In Constraint-Handling in Evolutionary Optimization, pages 51–72. Springer.

Xia, X. and Elaiw, A. (2010). Optimal dynamic economic dispatch of generation: A review. Electric Power Systems Research, 80:975 – 986.

Takahama, T. and Sakai, S. (2010). Constrained optimization by the ε constrained differential evolution with an archive and gradient-based mutation. In IEEE congress on evolutionary computation, pages 1–9. IEEE.

Xu, B., Chen, X., and Tao, L. (2018). Differential evolution with adaptive trial vector generation strategy and cluster-replacement-based feasibility rule for constrained optimization. Information Sciences, 435:240–262.

Tanabe, R. and Fukunaga, A. S. (2014). Improving the search performance of shade using linear population size reduction. In Evolutionary Computation (CEC), 2014 IEEE Congress on, pages 1658–1665. IEEE.

Yu, X., Lu, Y., Wang, X., Luo, X., and Cai, M. (2019). An effective improved differential evolution algorithm to solve constrained optimization problems. Soft Computing, 23(7):2409–2427.

Zamuda, A. (2017). Adaptive constraint handling and success history differential evolution for cec 2017 constrained real-parameter optimization. In 2017 IEEE Congress on Evolutionary Computation (CEC), pages 2443–2450. IEEE.

Tessema, B. and Yen, G. G. (2009). An adaptive penalty formulation for constrained evolutionary optimization. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 39(3):565–578.

Zamuda, A. and Sosa, J. D. H. (2019). Success history applied to expert system for underwater glider path planning using differential evolution. Expert Systems with Applications, 119:155–170.

Trivedi, A., Sanyal, K., Verma, P., and Srinivasan, D. (2017). A unified differential evolution algorithm for constrained optimization problems. In 2017 IEEE Congress on Evolutionary Computation (CEC), pages 1231–1238. IEEE.

Zamuda, A., Sosa, J. D. H., and Adler, L. (2016). Constrained differential evolution optimization for underwater glider path planning in sub-mesoscale eddy sampling. Applied Soft Computing, 42:93– 118.

Trivedi, A., Srinivasan, D., and Biswas, N. (2018). An improved unified differential evolution algorithm for constrained optimization problems. In 2018 IEEE Congress on Evolutionary Computation (CEC), pages 1–10.

Zhang, J. and Sanderson, A. (2009). Jade: Adaptive differential evolution with optional external archive. IEEE Transactions on Evolutionary Computation, 13(5):945–958.

Vrugt, J. A., Robinson, B. A., and Hyman, J. M. (2009). Self-adaptive multimethod search for global optimization in real-parameter spaces. IEEE Transactions on Evolutionary Computation, 13(2):243– 259.

Zhang, W., Yen, G. G., and He, Z. (2014). Constrained optimization via artificial immune system. IEEE transactions on cybernetics, 44(2):185–198.

Wang, B.-C., Li, H.-X., Li, J.-P., and Wang, Y. (2018). Composite differential evolution for constrained evolutionary optimization. IEEE Transactions on Systems, Man, and Cybernetics: Systems, (99):1–14.

Author contributions Use this form to specify the contribution of each author of your manuscript. A distinction is made between five types of contributions: Conceived and designed the analysis; Collected the data; Contributed data or analysis tools; Performed the analysis; Wrote the paper. For each author of your manuscript, please indicate the types of contributions the author has made. An author may have made more than one type of contribution. Optionally, for each contribution type, you may specify the contribution of an author in more detail by providing a one-sentence statement in which the contribution is summarized. In the case of an author who contributed to performing the analysis, the author’s contribution for instance could be specified in more detail as ‘Performed the computer simulations’, ‘Performed the statistical analysis’, or ‘Performed the text mining analysis’. If an author has made a contribution that is not covered by the five pre-defined contribution types, then please choose ‘Other contribution’ and provide a one-sentence statement summarizing the author’s contribution.

Manuscript title: Landscape-Assisted Multi-Operator Differential Evolution for Solving Constrained Optimization Problems

Author 1: Karam Sallam ☒
• Conceived and designed the analysis
• Collected the data
• Contributed data or analysis tools
• Performed the analysis
• Wrote the paper
• Other contribution: Wrote the MATLAB code.

Author 2: Saber Elsayed ☒
• Conceived and designed the analysis
• Collected the data
• Contributed data or analysis tools
• Performed the analysis
• Wrote the paper
• Other contribution: Helped with the coding.

Author 3: Ruhul Sarker ☒
• Conceived and designed the analysis
• Collected the data
• Contributed data or analysis tools
• Performed the analysis
• Wrote the paper
• Other contribution: Revised the paper.

Author 4: Daryl Essam ☒
• Conceived and designed the analysis
• Collected the data
• Contributed data or analysis tools
• Performed the analysis
• Wrote the paper
