On an evolutionary approach for constrained optimization problem solving


Applied Soft Computing 12 (2012) 3208–3227


Saber M. Elsayed*, Ruhul A. Sarker, Daryl L. Essam
School of Engineering and Information Technology, University of New South Wales, Australian Defence Force Academy Campus, Canberra 2600, Australia

Article info

Article history: Received 3 November 2011; Received in revised form 16 April 2012; Accepted 10 May 2012; Available online 2 June 2012.

Keywords: Constrained optimization; Differential evolution; Memetic algorithms

Abstract

Over the last few decades, many different evolutionary algorithms have been introduced for solving constrained optimization problems. However, due to the variability of problem characteristics, no single algorithm performs consistently over a range of problems. In this paper, instead of introducing another such algorithm, we propose an evolutionary framework that utilizes existing knowledge to make logical changes for better performance. The algorithmic aspects considered here are: the way of using search operators, dealing with feasibility, setting parameters, and refining solutions. The combined impact of such modifications is significant as has been shown by solving two sets of test problems: (i) a set of 24 test problems that were used for the CEC2006 constrained optimization competition and (ii) a second set of 36 test instances introduced for the CEC2010 constrained optimization competition. The results demonstrate that the proposed algorithm shows better performance in comparison to the state-of-the-art algorithms. © 2012 Elsevier B.V. All rights reserved.

1. Introduction

Solving constrained optimization problems (COPs) has been a challenging research area in the computer science and optimization fields, due to physical, geometric and other limitations [1]. Formally, a COP is stated as follows:

min f(X)

subject to:

g_c(X) ≤ 0,  c = 1, 2, . . ., C,
h_e(X) = 0,  e = 1, 2, . . ., E,
L_j ≤ x_j ≤ U_j,  j = 1, 2, . . ., D,   (1)

where X = [x1, x2, . . ., xD]^T is a vector of D decision variables, f(X) is the objective function, g_c(X) is the cth inequality constraint, h_e(X) is the eth equality constraint, and each x_j has a lower limit L_j and an upper limit U_j. COPs can be divided into many different categories based on their characteristics and mathematical properties. They may contain different types of variables, such as real, integer and discrete, and may have equality and/or inequality constraints. The objective and constraint functions can be linear or nonlinear, continuous or discontinuous, and unimodal or multimodal. The feasible region

∗ Corresponding author. Tel.: +61 2 626 88324; fax: +61 2 626 88581. E-mail addresses: [email protected] (S.M. Elsayed), [email protected] (R.A. Sarker), [email protected] (D.L. Essam). 1568-4946/$ – see front matter © 2012 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.asoc.2012.05.013

of such problems can be either a tiny or a significant portion of the search space. Moreover, it can be either one single bounded region or a collection of multiple disjointed regions. In some practical problems, the feasible region can even be unbounded. The optimal solution may exist either on the boundary of the feasible space or in the interior of the feasible region. Also, high dimensionality, due to a large number of variables and constraints, may add further complexity to solving COPs. These different characteristics have made COPs a challenging research area in the optimization domain. Evolutionary algorithms (EAs) have a long history of successfully solving COPs, such as the genetic algorithm (GA) [2], differential evolution (DE) [3], evolutionary strategies (ES) [4], and evolutionary programming (EP) [5]. Some of the recent studies are discussed here. Li et al. [6] proposed a DE that adopted an α (comparison level) constrained method for handling constraints. Furthermore, the scale factor of its mutation was set to a random number to vary the searching scale, and a certain percentage of the population was replaced with random individuals to enrich its diversity and, hence, to avoid being trapped in local optima. This algorithm was able to reach optimal solutions for all 13 benchmark problems. Brest et al. [7] proposed an improved version of the self-adaptive DE algorithm which used more strategies, that is, an aging mechanism to reinitialize individuals which stagnated in local optima and an ε level to control constraint violations. It was tested on the CEC2010 constrained real-parameter optimization competition problems [8] and performed well. Takahama and Sakai [9] proposed an ε-constrained DE with an archive and gradient-based mutation (εDEag), which utilized the archive to maintain the diversity of individuals and adopted a new way of selecting its ε-level control parameter. It


Nomenclature

X — the decision vector
D — number of decision variables
g_c — inequality constraints
h_e — equality constraints
L_j — lower boundary for a variable j
U_j — upper boundary for a variable j
V_z — mutant vector for an individual z
u_z — a trial vector for an individual z
t — the generation number
PS — population size
F — amplification factor
Cr — crossover rate
WS — window size of generations
Nopt — number of operators
Vio^best_i,t — constraint violation of the best individual at generation t for an operator i
FR — feasibility ratio
MSS — minimum subpopulation size
n_i,t — subpopulation size for the ith operator at generation t
FEs — fitness function evaluations
FEs_LS^max — the maximum fitness function evaluations for the local search procedure
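Using the notation above, the formulation in Eq. (1) can be made concrete with a minimal sketch in code. The toy objective and constraints below are purely illustrative, and the ε-tolerance used to relax the equality constraint follows the transformation described later in this paper (the value 1e-4 is an assumed placeholder, not a value prescribed by the authors).

```python
import numpy as np

def objective(x):
    # toy objective f(X): minimize the sum of squares
    return float(np.sum(x ** 2))

def inequality_constraints(x):
    # g_c(X) <= 0 form; here a single illustrative constraint: x1 + x2 >= 1
    return np.array([1.0 - x[0] - x[1]])

def equality_constraints(x):
    # h_e(X) = 0 form; here an illustrative constraint: x1 - x2 = 0
    return np.array([x[0] - x[1]])

def constraint_violation(x, eps=1e-4):
    # total violation: positive parts of g_c, plus positive parts of |h_e| - eps
    # (equalities relaxed to |h_e(x)| - eps <= 0, eps a small assumed tolerance)
    g = inequality_constraints(x)
    h = equality_constraints(x)
    return float(np.sum(np.maximum(0.0, g))
                 + np.sum(np.maximum(0.0, np.abs(h) - eps)))
```

A point such as x = (0.5, 0.5) satisfies both constraints and has zero violation, while x = (0, 0) violates the inequality by 1.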

came first in the CEC2010 constrained real-parameter optimization competition. Elsayed et al. [10] proposed a multi-operator based GA (SAMO-GA) for solving COPs, in which feasibility rules were used to handle the constraints. Tessema and Yen [12] introduced a GA based on an adaptive penalty formulation to solve COPs. The proposed method aimed to exploit infeasible individuals with low objective values and low constraint violations. It showed an ability to find feasible solutions in every run, but its performance was not robust in terms of solution quality. Singh et al. [13] presented an infeasibility empowered memetic algorithm (IEMA) which was combined with a local search strategy to solve COPs. It explicitly preserved marginally infeasible solutions to intensify the search around the constraint boundaries. Although it performed well on small problems, it had difficulties with high-dimensional ones. A novel GA for locating the global minimum of a COP was introduced by [14]. It suggested using: (1) modified genetic operators that preserve the feasibility of the chromosomes; (2) application of a local search procedure to randomly selected chromosomes; and (3) a stopping rule based on asymptotic considerations. The proposed algorithm performed well on the majority of the test problems but still lacked robustness. Yong et al. [15] used an ES algorithm as an EA combined with an adaptive tradeoff model (ATMES) to solve COPs. In ATMES, three main phases were considered: (1) the evaluation of infeasible solutions when the population contained only infeasible individuals; (2) the balancing of feasible and infeasible solutions when the population consisted of a combination of feasible and infeasible individuals; and (3) the selection of feasible solutions when the population was composed of only feasible individuals.
In each phase, a different CHT was used, i.e., in phase 1, the Pareto dominance technique, in phase 2, a normalized fitness function added to a normalized violation to act as a new fitness value and, in phase 3, handling the problem as unconstrained. The algorithm was tested on 13 well-known benchmark test problems/functions and the results showed that it either outperformed or performed similarly to other state-of-the-art techniques. Mezura-Montes and Coello [16] explored the capabilities of different types of ES for solving COPs in which five versions of


ES were tested on well-known benchmark problems. For all the variants, constraints were handled based on [17]. The most competitive version was (μ + 1)-ES, which was compared with a similar approach based on a GA. The choice of operators in EAs is important in designing a high-performing algorithm. This choice is often made by trial-and-error. Due to the variability of the characteristics, and the underlying mathematical properties, of problems, most EAs use specialized search operators that suit the problem at hand. Such an algorithm, even if it works well for one problem, or a class of problems, is not guaranteed to work for another class, or range, of problems. This behavior is consistent with the no free lunch (NFL) theorem [18]. In other words, there is no single algorithm, or algorithm with a known search operator, that will perform consistently across all classes of optimization problems. In this research, we consider multi-operator based evolutionary algorithms, where multiple operators, of different types, work together within the framework of one algorithm. This type of algorithm can be seen not only as a better alternative to trial-and-error based design, but also as a provider of better coverage of problems. The multi-operator based evolutionary algorithm is not new in the literature. However, its actual performance in practice for solving constrained optimization problems has not been fully explored. The choice of operators and their appropriate mix, and the strategy for their use in designing an effective EA, are also not well studied. A review of previous multi-operator based EAs is provided in a later section. In constrained optimization, feasible-is-always-better is the basic rule that is widely used for constraint handling. It is implemented during the selection process and has no influence on the search operator once the selection is done.
In this paper, in addition to the basic rule applied in the selection process, a self-adaptive parameter selection mechanism is derived based on the feasibility status, as well as the quality of fitness. These parameters influence the operators in guiding the search. As indicated earlier, a constrained optimization problem may contain equality and/or inequality constraints. The existence of any equality constraints makes the feasible region tiny compared to the entire search space, which makes it difficult for EAs to locate the optimal solution. A local search approach can be very useful in finding better solutions in this case. In fact, using local search, one would not only improve the quality of solution, but would also reduce the overall computational time. In this research, we consider DE [3] as the base algorithm. Firstly, we have implemented eight different variants of the DE algorithm, and have analyzed their performance by solving 24 benchmark problems [19]. From the analysis, it is clear that no single variant of DE is superior when considering all the test problems. We have then implemented a self-adaptive multi-strategy differential evolution (SAMSDE) algorithm, using the best two, three, four, and five variants. SAMSDE divides the population into a number of sub-populations, where each sub-population evolves with its own mutation and crossover. SAMSDE uses a self-adaptive learning process that changes the sub-population sizes, depending on their reproductive success, as the search progresses. To speed up the convergence of the best version of SAMSDE, we have applied a local search procedure to a randomly selected individual from the entire population. This algorithm is called Memetic-SAMSDE. After reading the last few paragraphs, the readers may get the view that the algorithm is complex as it has more operators and parameters than a simple single mutation strategy based DE. 
However, some of the parameters can be determined adaptively as the evolution progresses, while others can easily be determined experimentally. As detailed results will be provided later, in summary, the improvement in solution quality and, at the same time,


the reduction of computational time, are the key features of the algorithm. To assess this improvement, the algorithms were tested by solving two sets of test problems, which contain a variety of optimization problems, differing in type and mathematical properties. The first set, of 24 widely known test problems, was taken from the CEC2006 constrained optimization competition, and the second set, of 36 specialized test instances, was taken from the CEC2010 competition [8]. This paper is organized as follows. After the introduction, Section 2 presents the DE algorithm with an overview of its parameters, and a brief review of the literature regarding memetic-DE. Section 3 describes the design of our memetic-DE with an adaptive multi-operator based evolutionary algorithm. The experimental results and the analysis of those results are presented in Section 4. Finally, the conclusions are given in Section 5.

2. Differential evolution

In this section, we discuss the commonly used DE operators, parameter analysis, and the memetic DE literature.

2.1. Mutation

The simplest form of this operation is that a mutant vector at generation t, V_z(t), is generated by multiplying the amplification factor, F, by the difference of two random vectors, and adding the result to a third random vector (Eq. (2)):

V_z(t) = x_r1(t) + F · (x_r2(t) − x_r3(t))   (2)

where r1, r2, r3 are random numbers in {1, 2, . . ., PS}, r1 ≠ r2 ≠ r3 ≠ z, x is a decision vector, PS is the population size, the scaling factor F is a positive control parameter for scaling the difference vector, and t is the current generation. This operation enables DE to explore the search space and maintain diversity. There are many strategies for mutation, such as:

1. DE/best/1 [20]:
V_z(t) = x_best(t) + F · (x_r2(t) − x_r3(t))   (3)

2. DE/rand-to-best/1 [21]:
V_z(t) = x_r1(t) + F · (x_best(t) − x_r2(t)) + F · (x_r3(t) − x_r4(t))   (4)

3. rand/2/dir [22]:
V_z(t) = v_1(t) + (F/2) · ((v_1(t) − v_2(t)) + (v_3(t) − v_4(t)))   (5)
where f(v_1(t)) ≤ f(v_2(t)) and f(v_3(t)) ≤ f(v_4(t))

4. DE/current-to-rand/1 [23]:
V_z(t) = x_r1(t) + F · (x_r2(t) − x_z(t)) + F · (x_r3(t) − x_r4(t))   (6)

5. DE/current-to-best/1 [24]:
V_z(t) = x_r1(t) + F · (x_best(t) − x_z(t)) + F · (x_r3(t) − x_r4(t))   (7)

For more details, readers are referred to [25].

2.2. Crossover

The DE family of algorithms uses two crossover schemes: exponential and binomial crossover. These crossovers are briefly discussed below. In exponential crossover, we first choose an integer l randomly within the range [1, D]. This integer acts as a starting point in the target vector, from where the crossover or exchange of components with the donor vector starts. We also choose another integer L from the interval [1, D]; L denotes the number of components that the donor vector actually contributes to the target. After the generation of l and L, the trial vector u_z(t) is obtained as:

u_z,j(t) = v_z,j(t), for j = ⟨l⟩_D, ⟨l + 1⟩_D, . . ., ⟨l + L − 1⟩_D
u_z,j(t) = x_z,j(t), for all other j ∈ [1, D]   (8)

where j = 1, 2, . . ., D, and the angular brackets ⟨·⟩_D denote a modulo function with modulus D, with a starting index l.

The binomial crossover is performed on each of the jth variables whenever a randomly picked number (between 0 and 1) is less than or equal to a crossover rate (Cr). The generation number is indicated here by t. In this case, the number of parameters inherited from the donor has a (nearly) binomial distribution:

u_z,j(t) = v_z,j(t), if (rand ≤ Cr or j = j_rand)
u_z,j(t) = x_z,j(t), otherwise   (9)

where rand ∈ [0, 1], and j_rand ∈ {1, 2, . . ., D} is a randomly chosen index, which ensures that u_z(t) gets at least one component from V_z(t).

2.3. A brief review and analysis

From an analysis of the literature, it is known that there is no single DE variant that is able to obtain the best results for all types of problems. Mezura-Montes et al. [22] performed a comparative study of several DE variants solving unconstrained global optimization problems. They found that "current-to-best/1" and "current-to-rand/1", with arithmetic recombination, had difficulty exploring the search space when solving multimodal functions. They suggested that a combination of arithmetic recombination with DE mutation based on differences was not suitable for solving problems with high dimensionality. They considered that the alternatives "best/1/bin" and "rand/1/bin" were much better than "best/1/exp" and "rand/1/exp", due to the fact that, in exponential recombination, not all corners of the hypercube formed by the mutation vector and current parent can be sampled, regardless of the Cr value used. The variant "best/1/bin" was judged the most competitive for unimodal problems with both separable and non-separable test functions. However, for multimodal functions, this variant provided competitive results only when the function was separable. For multimodal and non-separable functions, the variant "rand/2/dir" was more suitable, due to its ability to incorporate information about the fitness of the individuals in the mutation process. In DE, there is no single fixed value for each parameter (F, Cr and PS) that is able to solve all types of problems with a reasonable quality of solution. Many studies have been conducted on parameter selection. Storn and Price [3] recommended a population size of 5–20 times the dimensionality of the problem, and that a good initial choice of F could be 0.5. Gämperle et al. [26] evaluated different parameter settings of DE, where they found that a plausible choice of the population size NP is between 3D and 8D, with the scaling factor F = 0.6 and the crossover rate Cr in [0.3, 0.9]. Rönkkönen et al. [27] claimed that typically 0.4 < F < 0.95, with F = 0.9 being a good first choice, and that Cr typically lies in (0, 0.2) when the function is separable, and in (0.9, 1) when the function's parameters are dependent. Abbass [28] proposed a self-adaptive operator (crossover and mutation) for multi-objective optimization problems, where the scaling factor F is generated using a Gaussian distribution N(0, 1). Zaharie [29] proposed a parameter adaptation strategy for DE (ADE) based on the idea of controlling the population diversity, and implemented a multiple population approach. Das et al. [30] introduced two schemes for adapting the scale factor F in DE. In the first scheme (called DERSF: DE with random scale factor), they varied F randomly between 0.5 and 1.0 in successive generations. Zamuda et al. [31] presented differential evolution with self-adaptation and


local search for the constrained multiobjective optimization algorithm (DECMOSA-SQP), which uses the self-adaptation mechanism from the DEMOwSA algorithm and a local search known as sequential quadratic programming (SQP). Multi-operator DE algorithms have emerged in the last couple of years. Mallipeddi et al. [32] proposed an ensemble of mutation strategies and control parameters with DE (EPSDE). In EPSDE, a pool of distinct mutation strategies, along with a pool of values for each control parameter, coexists throughout the evolution process and competes to produce offspring. The algorithm has been used to solve a set of unconstrained problems. In this research, we deal with the constrained optimization problem, which is much more complex than its unconstrained counterpart. Tasgetiren et al. [33] proposed an ensemble DE, in which each individual is assigned to either a variable parameter search (VPS) or one of two distinct mutation strategies. The authors selected VPS to enhance the local exploitation capability. Qin et al. [21] proposed the self-adaptive differential evolution algorithm (SaDE), where the choice of learning strategy and the two control parameters, F and CR, are not required to be pre-specified. It should be noted that their algorithm design is different from our proposed algorithm, for the following reasons. They used two mutation strategies (while we use four), where each individual is assigned to one of them based on a given probability. After evaluation of all newly generated trial vectors, the numbers of trial vectors successfully entering the next generation are recorded as ns1 and ns2. These two numbers are accumulated within a specified number of generations, called the "learning period". Then, the probability of assigning each individual is updated. Besides this difference, their learning strategy is entirely different from ours: in their algorithm, a strategy may be totally excluded from the list if its p = 0, whereas in our algorithm there is a lower limit on the use of each strategy. We also use an information-sharing scheme every couple of generations, in which each strategy learns from all other strategies. Moreover, we use a local search procedure. Elsayed et al. [34] proposed a three-strategy based DE algorithm for solving a variety of COPs that uses an adaptive learning process. The experimental results showed that the algorithm is superior to individual single-mutation based DE algorithms. However, that study was limited to a small set of test problems, and did not analyze how the mutation operators were selected. Elsayed et al. [35] proposed a DE algorithm with multiple strategies that was used only for solving a set of real-world problems. That study did not use any specific criteria for selecting the search operators. Elsayed et al. [11] also proposed an integrated strategies DE algorithm for solving a limited set of benchmark problems. The design of that algorithm differs from the algorithm proposed in this paper, in that the population was not divided into a number of sub-populations, and the algorithm used two crossover operators as well as two constraint handling techniques. Elsayed et al. [10] also proposed a multi-operator based evolutionary framework for solving COPs. The selection of the operators was based on an experimental analysis of 10 variants of GA. The framework was tested using two variants of GA, i.e., with and without a self-adaptive mechanism. The algorithm with the self-adaptive mechanism showed its superiority, and was analyzed by solving different benchmark problems. The framework was then validated with DE on a very limited set of problems.
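As a concrete reference point for the operators reviewed in this section, the following sketch implements basic DE/rand/1 mutation (Eq. (2)) with binomial crossover (Eq. (9)). The function name and the fixed F and Cr values are our own illustrative choices, not the self-adaptive settings developed later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_trial_vector(pop, z, F=0.5, Cr=0.9):
    """Generate a trial vector for individual z via DE/rand/1/bin (Eqs. (2), (9))."""
    PS, D = pop.shape
    # pick r1 != r2 != r3, all different from z
    candidates = [i for i in range(PS) if i != z]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    v = pop[r1] + F * (pop[r2] - pop[r3])   # mutation, Eq. (2)
    j_rand = rng.integers(D)                # index guaranteed to come from the donor
    mask = rng.random(D) <= Cr              # binomial crossover mask, Eq. (9)
    mask[j_rand] = True
    return np.where(mask, v, pop[z])
```

In a full DE loop, the trial vector would then replace its parent if it is better according to the selection rules.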

2.4. Memetic DE

Memetic differential evolution (MDE) has appeared in the literature over the last few years, but with very limited analysis. Gao and Wang [37] introduced memetic differential evolution modified by initialization and local searching, in which the stochastic properties of a chaotic system were used to spread the individuals in search spaces as much as possible, the simplex (Nelder–Mead) search method was employed to speed up the local exploitation, and the DE operators helped the algorithm to jump to a better point. The algorithm was tested on 13 high-dimensional continuous problems with improved results. Tirronen et al. [38] proposed a hybridization of a DE framework with the Hooke–Jeeves algorithm (HJA) and a stochastic local searcher (SLS). Its local search mechanisms are coordinated by means of a novel adaptive rule which estimates the fitness diversity among the individuals of a population. Caponio et al. [39] proposed a fast adaptive memetic algorithm (FAMA) that uses DE with a dynamic parameter setting and two local search mechanisms that are adaptively launched, either one by one or simultaneously, according to the needs of the evolution. The employed local search methods are the Hooke–Jeeves method and the Nelder–Mead simplex. The Hooke–Jeeves method is executed only on the elite individual, while the Nelder–Mead simplex is carried out on 11 randomly selected individuals. This algorithm has been compared to another well-known algorithm, and achieved better results for the problem of permanent magnet synchronous motors. Caponio et al. [40] proposed the super-fit memetic differential evolution algorithm, which is a DE framework hybridized with three meta-heuristics, each having different roles and features. Particle swarm optimization assists the DE in the beginning of the optimization process by helping to generate a super-fit individual. The two other meta-heuristics are local search mechanisms adaptively coordinated by means of an index measuring the quality of the super-fit individual with respect to the rest of the population. The choice of the local search mechanisms and their application is then executed by means of a probabilistic scheme which makes use of a generalized beta distribution.

In this paper, we consider SQP [41] as the local search technique, as SQP has shown its benefits when combined with EAs [42,13]. The idea of SQP is to model the problem at a given approximate solution by a quadratic programming subproblem, and then to use the solution of this subproblem to construct a better approximation (new point) [43]. As this method may be viewed as an extension of Newton and quasi-Newton methods to the constrained optimization setting, SQP methods share the characteristics of Newton-like methods: rapid convergence when the iterates are close to the solution, but possibly erratic behavior, which needs to be carefully controlled, when the iterates are far from a solution [43].

2.5. Different DE variants

According to the literature, the DE variants "rand/*/*" performed better, because they randomly find new search directions [44]. The investigation by [45] confirmed that there is a benefit in using more than one difference vector (DV) in DE. Interestingly, two or three DVs are good enough, but more than this may lead to premature convergence. As it is widely accepted that the binomial crossover is superior to the exponential one [44], in this research we have selected the following eight variants for a comparative study. Note that all of them use the binomial crossover.

1. Var1: V_z(t) = x_best(t) + F · ((x_r1(t) − x_r2(t)) + (x_r3(t) − x_r4(t)) + (x_r5(t) − x_r6(t)))   (10)

2. Var2: V_z(t) = x_r1(t) + F · ((x_r2(t) − x_r3(t)) + (x_r4(t) − x_r5(t)) + (x_r6(t) − x_r7(t)))   (11)


3. Var3: V_z(t) = x_r1(t) + F · ((x_best(t) − x_r2(t)) + (x_r3(t) − x_r4(t)))   (12)

4. Var4: V_z(t) = x_r1(t) + F · ((x_r2(t) − x_best(t)) + (x_r3(t) − x_r4(t)))   (13)

5. Var5: V_z(t) = x_r1(t) + F · ((x_r2(t) − x_z(t)) + (x_r3(t) − x_r4(t)))   (14)

6. Var6: V_z(t) = x_z(t) + F · ((x_best(t) − x_z(t)) + (x_r3(t) − x_r4(t)))   (15)

7. Var7: V_z(t) = x_z(t) + F · ((x_r2(t) − x_z(t)) + (x_r3(t) − x_r4(t)))   (16)

8. Var8: V_z(t) = x_r1(t) + F · ((x_best(t) − x_r2(t)) + (x_r3(t) − x_z(t)))   (17)

As indicated earlier, we take advantage of the self-adaptive concept to generate F and Cr. So, for each decision vector z, we generate two Gaussian numbers N(0.5, 0.15) (as in Abbass [28]), one for F_z, while the other represents Cr_z. For the variants Var1 and Var2, the overall F and Cr are calculated according to formulas (18) and (19), while the same parameters are calculated for the other six variants according to formulas (20) and (21).

F = F_r1,G + N(0, 0.5) × (F_r2,G − F_r3,G) + N(0, 0.5) × (F_r4,G − F_r5,G) + N(0, 0.5) × (F_r6,G − F_r7,G)   (18)

Cr = Cr_r1,G + N(0, 0.5) × (Cr_r2,G − Cr_r3,G) + N(0, 0.5) × (Cr_r4,G − Cr_r5,G) + N(0, 0.5) × (Cr_r6,G − Cr_r7,G)   (19)

F = F_r1,G + N(0, 0.5) × (F_r2,G − F_r3,G) + N(0, 0.5) × (F_r4,G − F_r5,G)   (20)

Cr = Cr_r1,G + N(0, 0.5) × (Cr_r2,G − Cr_r3,G) + N(0, 0.5) × (Cr_r4,G − Cr_r5,G)   (21)

3. Memetic self-adaptive multi-strategy differential evolution

In this section, we describe the key design aspects of the proposed algorithm, followed by its main steps, improvement scheme, and the selection rules used in this research. Firstly, it is worth mentioning the design aspects of the proposed algorithm, namely:

(1) It is inappropriate to give equal emphasis to all search operators throughout the entire evolutionary process, because their relative performances may vary with the progression of generations. This means that one operator may work well in the early stages of the search process but may perform poorly later, or vice versa.
(2) A poorly performing operator should not be totally discarded; instead, a lower limit on its use should be assigned. This is because one operator may work well in some stages of the evolutionary search process but perform poorly in others.
(3) It is important to share information among the operators, as the information from a good strategy may help a bad strategy to become competitive.
(4) A local search is periodically applied, to accelerate the convergence pattern of the proposed algorithm. A periodic application will help to maintain a balance between the steps required for escaping local optima and the increase of computational time due to local search.

3.1. Memetic-SAMSDE

In Memetic-SAMSDE, as shown in Table 1, an initial population of size PS is generated and then divided into a number of subpopulations (Nopt) of equal size. Each subpopulation, with n_i individuals, evolves with its own mutation and allocated crossover operator. The new trial vector (U_z) is evaluated according to the fitness function value and/or the constraint violation of the problem under consideration; if it is better than its parent, it replaces it in the next generation, or else the existing parent is kept. To measure the success of each operator, an improvement index is calculated using the method discussed in Section 3.2. Based on the improvement index, the subpopulation sizes are increased, decreased or kept unchanged. As this process may abandon certain operators which may be useful at later stages of the evolution process, we set a minimum subpopulation size for each operator. After every WS generations (WS indicates the window size), a random individual is selected from the whole population and a local search technique (SQP) is applied to it with a given maximum number of fitness evaluations FEs_LS^max. To incorporate the information-sharing aspect, after every WS generations, the best solutions among the subpopulations are exchanged, and if an exchanged individual is redundant, it is replaced by a random vector. The algorithm continues until the stopping criterion is met.

3.2. Improvement measure

To measure the improvement of each operator (/subpopulation) in a given generation, we consider both the feasibility status and the fitness value, where the consideration of any improvement in feasibility is always better than any improvement in the infeasibility. For any generation t > 1, there arises one of three scenarios. These scenarios, in order from least desirable, to most desirable, are discussed below. (A) Infeasible to infeasible: for any subpopulation i, where the best solution was infeasible at generation t − 1, and is still infeasible in generation t, then the improvement index is calculated as follows: VIi,t =

|Viobest − Viobest | i,t i,t−1 avg · Vioi,t

= Ii,t

(22)

where Vio^best_{i,t} is the constraint violation of the best individual at generation t, and avg·Vio_{i,t} is the average violation at generation t. Hence VI_{i,t} = I_{i,t} above represents the relative improvement as compared to the average violation in the current generation.

(B) Feasible to feasible: for any subpopulation i, where the best solution was feasible at generation t − 1 and is still feasible at generation t, the improvement index is:

I_{i,t} = max_i(VI_{i,t}) + |F^best_{i,t} − F^best_{i,t−1}| × FR_{i,t}    (23)

where I_{i,t} is the improvement for subpopulation i at generation t, F^best_{i,t} is the objective function value of the best individual at generation t, and FR_{i,t}, the feasibility ratio of operator i at generation t, is:

FR_{i,t} = (number of feasible solutions in subpopulation i) / (subpopulation size at iteration t)    (24)

S.M. Elsayed et al. / Applied Soft Computing 12 (2012) 3208–3227

Table 1
Memetic self-adaptive multi-strategy differential evolution (Memetic-SAMSDE).

STEP 1: In generation t = 1, generate an initial random population of size PS. Each variable z in each individual must be within its range, i.e., it is initialized between its lower and upper bounds using a uniform random number in [0, 1].
STEP 2: Divide the population into four equal subpopulations, where i denotes the i-th subpopulation. Each subpopulation consists of PS/4 individuals, and each subpopulation has its own operators.
STEP 3: For each DE variant, generate the self-adaptive parameters F and Cr using Eqs. (18)–(21), then generate the trial vectors and update the count of fitness evaluations (FEs).
STEP 4: Sort the individuals in each subpopulation according to their fitness value and/or constraint violation. Every WS generations:
  1- replace the worst 3 individuals in each subpopulation with the 3 best solutions from the other subpopulations (the best solution in each other subpopulation);
  2- if there is any redundant decision vector, replace it by generating a random vector;
  else, go to STEP 5.
STEP 5: Store the best individual for each operator, based on fitness value and/or constraint violation.
STEP 6: Every WS generations, update the subpopulation sizes.
STEP 7: Periodically, select a random vector from the population and apply SQP as a local search, up to a fixed budget FEs^max_LS.
STEP 8: Stop if the termination criterion is met; else set t = t + 1 and go to STEP 3.

To assign a higher index value to a subpopulation with a higher feasibility ratio, we multiply the improvement in fitness value by the feasibility ratio. To differentiate between the improvement indices of feasible and infeasible subpopulations, we have added the term max_i(VI_{i,t}) to Eq. (23). If all the best solutions are feasible, then max_i(VI_{i,t}) will be zero.

(C) Infeasible to feasible: for any subpopulation i, where the best solution was infeasible at generation t − 1 and is feasible at generation t, the improvement index is:

I_{i,t} = max_i(VI_{i,t}) + |V^best_{i,t−1} + F^best_{i,t} − F^bv_{i,t−1}| × FR_{i,t}    (25)

where F^bv_{i,t−1} is the fitness value of the least violated individual at generation t − 1. To assign a higher index value to an individual that changes from infeasible to feasible, we add V^best_{i,t−1} to the change in fitness value in Eq. (25).

After calculating the improvement index for each subpopulation, the subpopulation sizes are updated according to the following equation:

n_{i,t} = MSS + (I_{i,t} / Σ_{m=1}^{Nopt} I_{m,t}) × (PS − MSS × Nopt)    (26)
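A sketch of the resizing rule of Eq. (26), under assumed improvement indices (names are ours; MSS, PS and Nopt denote the minimum subpopulation size, the population size, and the number of operators, as in the text):

```python
def subpop_sizes(improvements, ps, mss):
    """Eq. (26): each operator keeps a guaranteed MSS slots, and the
    remaining PS - MSS * Nopt slots are shared among the operators in
    proportion to their improvement indices."""
    nopt = len(improvements)
    total = sum(improvements)
    free = ps - mss * nopt
    return [mss + (imp / total) * free for imp in improvements]

sizes = subpop_sizes([0.4, 0.1, 0.3, 0.2], ps=80, mss=8)
# The sizes always sum to PS and never drop below MSS, so no operator
# is ever discarded entirely -- a poor operator simply shrinks to MSS.
```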

where n_{i,t} is the subpopulation size for the i-th operator at generation t, MSS is the minimum subpopulation size per operator, and Nopt is the number of operators.

3.3. Selection rules and constraint handling

In this paper, we consider the selection of individuals for the purposes of a tournament [17] as follows: (i) between two feasible solutions, the fitter one (according to the fitness function) is better; (ii) a feasible solution is always better than an infeasible one; (iii) between two infeasible solutions, the one having the smaller sum of constraint violations is preferred. The violation here is equal to the sum of all violated constraints, where each equality constraint h_e(x) = 0 is transformed to an inequality of the form, with ε a small value:

|h_e(x)| − ε ≤ 0,  for e = 1, . . ., E    (27)
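The three selection rules and the ε-transformation of Eq. (27) can be sketched as follows (the tolerance follows the paper's setting ε = 0.0001; function names are ours):

```python
EPS = 1.0e-4  # the paper's equality-constraint tolerance

def violation(g_values, h_values, eps=EPS):
    """Sum of all violated constraints: inequalities g(x) <= 0 as given,
    each equality h(x) = 0 relaxed to |h(x)| - eps <= 0 per Eq. (27)."""
    v = sum(max(0.0, g) for g in g_values)
    v += sum(max(0.0, abs(h) - eps) for h in h_values)
    return v

def better(a, b):
    """Tournament rules of Section 3.3. Each solution is a
    (fitness, violation) pair for a minimization problem; returns True
    when a is preferred over b."""
    fa, va = a
    fb, vb = b
    if va == 0 and vb == 0:      # (i) both feasible: smaller fitness wins
        return fa < fb
    if (va == 0) != (vb == 0):   # (ii) feasible beats infeasible
        return va == 0
    return va < vb               # (iii) both infeasible: smaller violation wins

assert better((1.0, 0.0), (0.5, 0.3))  # feasible beats fitter-but-infeasible
```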

Although this rule is widely used for constraint handling, it is implemented only during the selection process and has no influence on the search operators once selection is done. In our algorithm, in addition to the basic rule applied in the selection process, a self-adaptive parameter selection mechanism is derived based on the feasibility status as well as the quality of fitness. These parameters influence the search operators in guiding the search in an effective manner.

4. Experimental results

In this section, by presenting and analyzing the computational results of 8 variants of DE on 24 well-known test problems from CEC2006, we decide how many and which DE variants we should use in designing SAMSDE. We also present and analyze the computational results of Memetic-SAMSDE and compare them with those of SAMSDE, as well as with the state-of-the-art algorithms. To show the robustness of the proposed algorithm, we also solve the CEC2010 test problems and compare the results with the CEC2010 competition winning algorithm. All the algorithms have been coded in Visual C#, and have been run on a PC with a 3.5 GHz Core 2 Duo processor, 3 GB RAM, and Windows XP. We have solved the set of 24 test instances that were proposed in CEC2006 [19]. The details of all of these test problems are given in Table 2.

4.1. Eight different variants of DE

In this section, a comparative study among the eight variants of DE defined in Section 2.5 is discussed. For all variants, we set the parameters PS = 80 and ε = 0.0001; 30 independent runs were performed for each test problem, and the stopping criterion was to run up to 240K fitness evaluations (FEs). The detailed results (best, median, average, standard deviation (St. d) and feasibility ratio) are presented in Appendix A. The results demonstrate that no single variant is able to obtain a good quality solution for all 24 test instances. If one variant is good for one test instance, it performs badly for another.


Table 2
Details of the 24 test problems, where Ratio is the estimated ratio between the feasible region and the search space, LI is the number of linear inequality constraints, NI the number of nonlinear inequality constraints, LE the number of linear equality constraints, NE the number of nonlinear equality constraints, and act the number of active constraints.

Prob.  D   Obj. fun.   Ratio (%)  LI  NI  LE  NE  act  Optimal
g01    13  Quadratic    0.0000     9   0   0   0   6   −15.000000000
g02    20  Nonlinear   99.9971     0   2   0   0   1   −0.8036191042
g03    10  Polynomial   0.0000     0   0   0   1   1   −1.0005001000
g04     5  Quadratic   52.1230     0   6   0   0   2   −30665.53867178
g05     4  Cubic        0.0000     2   0   0   3   3   5126.4967140071
g06     2  Cubic        0.0066     0   2   0   0   2   −6961.813875580
g07    10  Quadratic    0.0003     3   5   0   0   6   24.3062090681
g08     2  Nonlinear    0.8560     0   2   0   0   0   −0.0958250415
g09     7  Polynomial   0.5121     0   4   0   0   2   680.6300573745
g10     8  Linear       0.0010     3   3   0   0   6   7049.2480205286
g11     2  Quadratic    0.0000     0   0   0   1   1   0.7499000000
g12     3  Quadratic    4.7713     0   1   0   0   0   −1.0000000000
g13     5  Nonlinear    0.0000     0   0   0   3   3   0.0539415140
g14    10  Nonlinear    0.0000     0   0   3   0   3   −47.7648884595
g15     3  Quadratic    0.0000     0   0   1   1   2   961.7150222899
g16     5  Nonlinear    0.0204     4  34   0   0   4   −1.9051552586
g17     6  Nonlinear    0.0000     0   0   0   4   4   8853.5396748064
g18     9  Quadratic    0.0000     0  13   0   0   6   −0.8660254038
g19    15  Nonlinear   33.4761     0   5   0   0   0   32.6555929502
g20    24  Linear       0.0000     0   6   2  12  16   0.2049794002
g21     7  Linear       0.0000     0   1   0  15   6   193.7245100700
g22    22  Linear       0.0000     0   1   8  11  19   236.4309755040
g23     9  Linear       0.0000     0   2   3   1   6   −400.0551000000
g24     2  Linear      79.6556     0   2   0   0   2   −5.5080132716

The average feasibility ratios over the 24 test problems were: Var1 (86.81%), Var2 (86.25%), Var8 (80.00%), Var6 (75.00%), Var5 (74.44%), Var4 (68.33%), Var7 (65.83%), and Var3 (64.58%). Based on solution quality, we found that Var2 was able to obtain the optimal solution for 13 test problems and Var1 for 12, while the other variants, Var6, Var8, Var5, Var4, Var7 and Var3, reached the optimal solution for 11, 9, 8, 7, 7, and 5 test problems, respectively. Note that no variant could reach any feasible solution for 3 test problems (g20, g21 and g22). Based on the average results, all variants obtained the same average results for 3 test problems (g08, g11 and g12); for the remaining problems, Var1 and Var2 each obtained the best average results for 8 test problems, while Var6, Var5, Var8, Var7, Var3 and Var4 attained the best average results for 7, 6, 3, 2, 0 and 0 problems, respectively.

From the above comparisons, it is clear that Var1 and Var2 are the best two variants, but it is difficult to decide the third and fourth placed variants: among the other six, a variant may be good on one criterion but worse on another. So, we have ranked all variants based on a simple scoring scheme that uses 3 criteria (the best and average solutions plus the feasibility ratio). To rank the variants, we assign a score of '1.0' if a variant obtains the best fitness value for a given test instance and '0.0' if it fails to achieve any feasible solution. If a variant achieves a feasible solution, but not the best fitness, it receives a fractional score (between 0 and 1) as discussed below. We assume that all the problem instances have a minimization objective function.
For a variant r and test instance y, and a total number of test problems Y, we define Fry as the actual fitness, and BFy = minr (Fry ), WFy = maxr (Fry ) as the overall best and worst fitness value for a test instance y, respectively. The score of a variant r for instance y is then:

S_ry = (1 − |F_ry − BF_y| / (a × |BF_y − WF_y|))^λ,  if F_ry is feasible
S_ry = 0,  otherwise    (28)

where a ≥ 1 and λ > 1. A value a > 1 differentiates between the worst feasible and any infeasible solution by assigning the former a small positive value of S_ry. A higher value of λ puts a higher emphasis on good solutions. In this study, we use a = 1.1 and λ = 2. In a similar way, we can also calculate scores based on the average results. The final score for a variant r on problem y can then be calculated as follows:

FS_ry = (ϑ × S^best_ry + (1 − ϑ) × S^average_ry) × FR_ry    (29)

where FS_ry is the final score of variant r for test problem y, FR_ry is the feasibility ratio of variant r for test problem y, S^best_ry is the score based on the best solutions, S^average_ry is the score based on the average values, and ϑ is a constant between 0 and 1. A higher value of ϑ (1 or close to 1) puts a higher emphasis on the best solutions, which is appropriate when we are only interested in the best fitness value. A lower value of ϑ (0 or close to 0) puts a higher emphasis on the average solutions, which is appropriate when we are interested in a number of good alternative solutions. In this study, we use ϑ = 0.5 to balance the best and the average results. The overall score (OS_r) for each variant can then be calculated using the following equation:

OS_r = Σ_y FS_ry    (30)
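Under assumed inputs, the scoring scheme of Eqs. (28)–(30) can be sketched as follows (function and parameter names are ours; `lam` stands for the exponent of Eq. (28)):

```python
def score(f, best_f, worst_f, feasible, a=1.1, lam=2.0):
    """Eq. (28): fractional score of one variant on one problem -- 1 for
    the overall best feasible value, 0 for an infeasible result, and a
    value in (0, 1) in between. Using a > 1 keeps the worst feasible
    solution strictly above any infeasible one."""
    if not feasible:
        return 0.0
    if worst_f == best_f:
        return 1.0
    return (1.0 - abs(f - best_f) / (a * abs(best_f - worst_f))) ** lam

def final_score(s_best, s_avg, feas_ratio, theta=0.5):
    """Eq. (29): blend best- and average-based scores, weighted by the
    variant's feasibility ratio on that problem."""
    return (theta * s_best + (1.0 - theta) * s_avg) * feas_ratio

# Eq. (30): the overall score is the sum of final scores over problems.
fs = [final_score(1.0, 0.8, 1.0), final_score(0.6, 0.5, 0.9)]
overall = sum(fs)
```

Since each per-problem score is at most 1, a variant that is best everywhere with a 100% feasibility ratio attains the maximum possible overall score, equal to the number of problems considered.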

Based on the 24 test problems, the overall scores were Var2 (17.55), Var1 (16.56), Var8 (13.87), Var5 (12.74), Var6 (12.28), Var4 (7.85), Var7 (7.528) and Var3 (5.72). In designing the multi-operator based DE, we consider the best four variants based on the scores calculated here. These four variants cover the best solutions found for 22 out of the 24 test problems.

4.2. SAMSDE and its variants

In the previous section, we studied DE variants based on only one mutation operator each. In this section, we use the top two, three, four and five mutation operators in designing self-adaptive multi-strategy differential evolution (SAMSDE) variants.


Fig. 1. The general structure for the self-adaptive multi-strategy differential evolution with four different DE variants.

SAMSDE follows the same steps as in Table 1, without the local search procedure, and its main structure is shown in Fig. 1. The parameter settings are the same as those described in the previous section, while the additional parameters are set as follows: the minimum subpopulation size is 10% of PS, and WS = 50 generations. We have designed four different versions of SAMSDE. The first version uses only the best 2 mutation operators and is indicated as SAMSDE-2, while the second, third and fourth versions use the best 3, 4, and 5 mutation operators, indicated as SAMSDE-3, SAMSDE-4, and SAMSDE-5, respectively. The results (best, median, average, standard deviation (St. d) and feasibility ratio) are presented in Appendix A.

As can be seen in Appendix A, all four SAMSDE versions cannot solve g20 and g22. From the literature, we know that there is no feasible solution for g20, while for g22 a feasible solution is rarely found [19]. So we exclude these 2 problems from these comparisons. As for the results, the average feasibility ratios for SAMSDE-4, SAMSDE-5, SAMSDE-2, and SAMSDE-3 were 100%, 98.93%, 97.6%, and 97.4%, respectively. Based on the best solutions, SAMSDE-4 was able to obtain the optimal solutions for 21 test problems, while SAMSDE-3, SAMSDE-2 and SAMSDE-5 obtained the optimal solutions for 18, 16, and 15 problems, respectively. When comparing the average results, SAMSDE-4 is either better than or equal to all the other versions. According to the scoring scheme discussed earlier, SAMSDE-4 comes in 1st place with the maximum possible score of 22, while SAMSDE-3, SAMSDE-2 and SAMSDE-5 are in the 2nd, 3rd, and 4th positions with scores of 16.83, 12.78, and 11.90, respectively. In the next section, we use SAMSDE-4 to design our Memetic-SAMSDE. Note that, from here to the end of the paper, we will refer to SAMSDE-4 as SAMSDE.

4.3. Memetic-SAMSDE

The parameter settings in Memetic-SAMSDE are the same as those in the previous section, but with the additional parameter FEs^max_LS = 500; the local search is applied every 50 generations. The detailed results are shown in Appendix A. Here we consider all of the 24 test problems. As presented in Appendix A, the best solution obtained for g22 using Memetic-SAMSDE is better than the best solution recorded in the literature. Memetic-SAMSDE is able to obtain the optimal solutions for 22 test problems. For problem g20, there is no feasible solution recorded in the literature; our Memetic-SAMSDE algorithm also cannot obtain any feasible solution, but its violation is very small compared to those reported in the literature. From the obtained results we can see that the proposed Memetic-SAMSDE is robust, in that it is able to obtain the optimal solutions for all test problems over all runs.

Comparing Memetic-SAMSDE with SAMSDE based on the average results, the two algorithms are equal for 14 test problems, while Memetic-SAMSDE is better for all other problems. For the standard deviation results, both algorithms are equal for 14 test problems, while Memetic-SAMSDE achieves better standard deviations for all other test problems. From the convergence patterns, we found that Memetic-SAMSDE is able to converge at a faster rate than SAMSDE for all test problems. The detailed results showing the number of generations needed to converge to the optimal solution can be made available on request. Furthermore, some convergence plots for both algorithms are shown in Fig. 2. From the computational time record, it is interesting to report that the average computation time required by Memetic-SAMSDE is approximately 25% lower than that of SAMSDE. From the above analysis, it is clear that the addition of local search to SAMSDE not only improved the quality of solutions, but also reduced the required computational time significantly.

4.4. Comparison to the state-of-the-art algorithms

The first 13 problems of the CEC2006 test set have been widely used for performance testing. The detailed results of Memetic-SAMSDE are provided in Appendix B, along with those of the state-of-the-art algorithms: the adaptive penalty formulation with GA (APF-GA) [12], modified differential evolution (MDE) [46], the Adaptive Tradeoff Model with evolution strategy (ATMES) [15], the multimembered evolution strategy (SMES) [47], and stochastic ranking with evolution strategy (SR) [48]. APF-GA and MDE solved all 24 test problems, whereas ATMES, SMES and SR solved only the first 13 problems. We must mention here that our algorithm (Memetic-SAMSDE) and two other algorithms (ATMES and SMES) used 240K FEs, while SR used 350K FEs, and APF-GA and MDE used 500K FEs. The parameter ε for Memetic-SAMSDE, APF-GA, MDE and SR was set to 1.0E−04, while it was set to 4.0E−04 and 5.0E−06 for SMES and ATMES, respectively. The results are based on 30 independent runs for all algorithms except APF-GA and MDE, which are based on 25 runs.

From Appendix B, Memetic-SAMSDE was always better than or equal to APF-GA and MDE. Firstly, no algorithm was able to reach any feasible solution for problem g20. For the other 23 problems, Memetic-SAMSDE, APF-GA, and MDE obtained the optimal solutions for 22, 17, and 20 problems, respectively. In regard to the average results, we found that Memetic-SAMSDE is better than APF-GA and MDE for 9 and 6 instances, respectively, with no significant difference between Memetic-SAMSDE and APF-GA or MDE for 14 and 17 test problems, respectively.

To study the difference between any two stochastic algorithms in a more meaningful way, we have performed statistical significance testing. We have chosen a non-parametric test, the Wilcoxon


Fig. 2. The convergence plots for g01, g04, g09 and g18, represented in (a), (b), (c) and (d), respectively. The graphs represent only the feasible individuals.

Signed Rank Test [49], which allows us to judge the difference between paired scores when we cannot make the assumptions required by the paired-samples t test, such as that the populations are normally distributed. The results based on the best and average fitness values are presented in Table 3, where W = w− or w+ is the sum of ranks based on the absolute values of the differences between the two test variables. The sign of the difference between the two samples is used to classify cases into one of two groups: differences below zero (negative rank w−), or above zero (positive rank w+). The null hypothesis is that there is no significant difference between the best and/or mean values of the two samples, whereas the alternative hypothesis is that there is a significant difference in the best and/or mean fitness values of the two samples, at the 5% significance level. Based on the test results/rankings, we assign one of three signs (+, −, and ≈) to the comparison of any two algorithms (the "Decision" column), where the "+" sign means that the first algorithm is significantly better than the second, the "−" sign means that the first algorithm is significantly worse, and the "≈" sign means that there is no significant difference between the two algorithms. From Table 3, Memetic-SAMSDE is clearly superior to both APF-GA and

Table 3
Wilcoxon sign rank results for Memetic-SAMSDE against APF-GA, MDE, ATMES, SMES and SR.

Algorithms                   Criteria         w−   w+   Decision
Memetic-SAMSDE-to-APF-GA     Best fitness     21    0   +
                             Average fitness  36    0   +
Memetic-SAMSDE-to-MDE        Best fitness     28    0   +
                             Average fitness  21    0   ≈
Memetic-SAMSDE-to-ATMES      Best fitness     10    0   ≈
                             Average fitness  28    0   +
Memetic-SAMSDE-to-SMES       Best fitness     28    0   +
                             Average fitness  36    0   +
Memetic-SAMSDE-to-SR         Best fitness     21    0   +
                             Average fitness  36    0   +


Table 4
Properties of the CEC2010 test problems, where Feasibility region is the estimated ratio between the feasible region and the search space, E is the number of equality constraints, and I the number of inequality constraints.

Prob.  Search range      Objective type  E (equality)                    I (inequality)                  Feas. region (10D)  Feas. region (30D)
C01    [0, 10]^D         Non separable   0                               2 Non separable                 0.997689            1.000000
C02    [−5.12, 5.12]^D   Separable       1 Separable                     2 Separable                     0.000000            0.000000
C03    [−1000, 1000]^D   Non separable   1 Separable                     0                               0.000000            0.000000
C04    [−50, 50]^D       Separable       2 Non separable, 2 Separable    0                               0.000000            0.000000
C05    [−600, 600]^D     Separable       2 Separable                     0                               0.000000            0.000000
C06    [−600, 600]^D     Separable       2 Rotated                       0                               0.000000            0.000000
C07    [−140, 140]^D     Non separable   0                               1 Separable                     0.505123            0.503725
C08    [−140, 140]^D     Non separable   0                               1 Rotated                       0.379512            0.375278
C09    [−500, 500]^D     Non separable   1 Separable                     0                               0.000000            0.000000
C10    [−500, 500]^D     Non separable   1 Rotated                       0                               0.000000            0.000000
C11    [−100, 100]^D     Rotated         1 Non separable                 0                               0.000000            0.000000
C12    [−1000, 1000]^D   Separable       1 Non separable                 1 Separable                     0.000000            0.000000
C13    [−500, 500]^D     Separable       0                               2 Separable, 1 Non separable    0.000000            0.000000
C14    [−1000, 1000]^D   Non separable   0                               3 Separable                     0.003112            0.006123
C15    [−1000, 1000]^D   Non separable   0                               3 Rotated                       0.003210            0.006023
C16    [−10, 10]^D       Non separable   2 Separable                     1 Separable, 1 Non separable    0.000000            0.000000
C17    [−10, 10]^D       Non separable   1 Separable                     2 Non separable                 0.000000            0.000000
C18    [−50, 50]^D       Non separable   1 Separable                     1 Separable                     0.000010            0.000000

MDE with regard to both best and average results. Memetic-SAMSDE is also better than the other three algorithms. Based on our scoring scheme on the CEC2006 test set, Memetic-SAMSDE takes 1st place with a score of 23, while MDE and APF-GA take the 2nd and 3rd positions with scores of 19.5 and 17.03, respectively. Based on the first 13 test problems, Memetic-SAMSDE is in the 1st position with a score of 13, ATMES is in 2nd place with a score of 10.92, while SR and SMES are in the 3rd and 4th positions with scores of 8.54 and 7.38, respectively.

Based on the previous analysis, we have demonstrated that Memetic-SAMSDE obtains better quality solutions than SAMSDE. It might be expected that a memetic multi-operator based algorithm with a self-adaptive mechanism would take much higher computational time per generation, but we found that the opposite was true. To test this, we compared the computational time taken by Memetic-SAMSDE, SAMSDE and a standard DE (rand/1/bin) with F = 0.9 and Cr = 0.95 [3]. For all algorithms, we calculated the time consumed to reach the best known solutions (as recorded in Table 2) within an error of 0.0001, i.e., the stopping criterion is f(x) − f(x*) ≤ 0.0001, where f(x*) is the best known solution. We found that Memetic-SAMSDE saves 48% of the computational time as compared to SAMSDE, and SAMSDE requires about 25% lower overall computational time compared to the standard DE; hence Memetic-SAMSDE saves 73% of the computational time as compared to the standard DE.

4.5. CEC2010 test set

As indicated earlier, we also solved another set of 36 test instances (18 problems, each with 10 and 30 dimensions) that were introduced in CEC2010 [8], and have compared our results with the algorithm that won the CEC2010 constrained optimization competition, as well as with the best GA in the same competition. The details of the test problems are given in Table 4. The algorithm has been run 25 times for each test problem, where the stopping criterion is to run for up to 200K FEs for the 10D instances, and 600K FEs for 30D. In this experiment, we have used parameters similar to those of the previous section, except that PS is set to 100 for the 30D test instances. The detailed computational results for both the 10D and 30D instances are shown in Appendix C, together with the detailed results of εDEag [50] (the CEC2010 competition winning algorithm) and of the best known GA based algorithm, IEMA [13]. We also wish to mention here that our Memetic-SAMSDE is able to reach a 100% feasibility ratio for both 10D and 30D, while εDEag is able to reach

Fig. 3. Convergence plots for C09, C10, C14 and C15, for 10D on the left (only the first 10,000 FEs are plotted) and 30D on the right (only the first 10,000 FEs are plotted). The graphs represent only the feasible individuals.


Fig. 4. Convergence plots for C17 and C18, for 10D on the left (only the first 10,000 FEs are plotted) and 30D on the right (only the first 30,000 FEs are plotted). The graphs represent only the feasible individuals.

Fig. 5. Self-adaptive changes in the subpopulation sizes over the first 50 generations for g09 (the x-axis represents the number of generations and the y-axis the subpopulation size).

a 100% feasibility ratio for only 35 out of the 36 test instances. This occurred because it obtained only a 12% feasibility ratio for C12 with 30D. Note that the "*" in Appendix C for C12 means that infeasible solutions were included in calculating the mean and other statistics. The average feasibility ratios for IEMA are 97% and 66% for the 10D and 30D cases, respectively.

Considering the best solutions in 10D, Memetic-SAMSDE obtained better solutions than εDEag and IEMA for 5 and 12 test problems, respectively, and obtained the same optimal solutions as εDEag and IEMA for 13 and 6 test problems, respectively. For the 30D test problems, Memetic-SAMSDE obtained better solutions than both εDEag and IEMA for 17 test problems, while for C16 both εDEag and IEMA obtained the same best solution as Memetic-SAMSDE. So it is clear that our proposed algorithm is superior with regard to the best solutions. Based on the average results obtained in 10D, Memetic-SAMSDE was better than εDEag and IEMA for 10 and 16 test problems, respectively; εDEag is better, by very small margins, for 6 test problems, and IEMA is better for only one test problem. For 30D, Memetic-SAMSDE is better than εDEag and IEMA for 16 and 17 test problems, respectively. We present a few sample convergence plots for different problems, for both 10D and 30D, in Figs. 3 and 4. From these figures, it is

Table 5
Wilcoxon sign rank results for Memetic-SAMSDE against εDEag and IEMA on the CEC2010 test problems.

Algorithms                 Dim.  Criteria         w−    w+   Decision
Memetic-SAMSDE-to-εDEag    10D   Best fitness      15    0   +
                                 Average fitness  105   31   ≈
                           30D   Best fitness     153    0   +
                                 Average fitness  161   10   +
Memetic-SAMSDE-to-IEMA     10D   Best fitness      78    0   +
                                 Average fitness  148    5   +
                           30D   Best fitness     153    0   +
                                 Average fitness  167    4   +


clear that the proposed algorithm is able to converge very quickly to the optimal solutions. According to the non-parametric statistical test, as shown in Table 5, for the 10D test problems, Memetic-SAMSDE is clearly superior to IEMA in terms of both the best and average results, while it is better than εDEag in terms of the best results. Considering the 30D test problems, Memetic-SAMSDE is better than both of the other two algorithms. Using our scoring scheme, we found that Memetic-SAMSDE is significantly better, with a score of 35.5, in comparison to the εDEag and IEMA scores of 21.44 and 17.41, respectively.

For problems with equality constraints, on the 10D test problems the proposed algorithm obtained the best known solutions for C02, C03, C04, C05, C06, C09, C10, C11, C12, C16, C17 and C18. On the 30D test problems, the proposed algorithm obtained better results, compared to the best known, for C01, C02, C03, C04, C05, C06 and C11, and the same results for C12, C16 and C17, while our best values for C09, C10 and C18 differ from the best known by very small margins. Finally, to show how the subpopulation sizes change during the evolutionary process, we provide an illustrative graph in Fig. 5.

5. Conclusion and future work

During the last few decades, many evolutionary algorithms have been introduced to solve constrained optimization problems. Most of these algorithms were designed to use a single crossover and/or a single mutation operator. In this paper, we have shown that the efficiency of evolutionary algorithms can be improved by adopting a concept of self-adaptation with an increased number of search operators. To choose the operator-mix, we performed a comparative study of well-known DE variants by solving the 24 test problems introduced in CEC2006. The study showed that no single operator was able to solve all the test problems optimally.
From the analyzed results, we found that Var2 (DE/rand/3/bin) was superior to the other variants, while Var1 (DE/best/3/bin) took second position, followed by Var8 (DE/rand-to-(best & current)/1/bin) and Var5 (DE/rand-to-current/1/bin). We then designed our algorithm SAMSDE with these four DE variants. In our opinion, the real power of multi-strategy based algorithms has not yet been fully uncovered, and their applicability to different problem domains has also not been fully explored. This paper therefore contributed to further advancing knowledge on this topic


in constrained optimization. The idea behind our algorithm was simple and quite logical: a set of carefully selected mutation strategies would be able to solve a wider range of problems effectively, whereas any single mutation strategy based algorithm would perform well for only a subset of the problems. The algorithm design that we have proposed differs from the existing algorithms in the following ways. The mutation strategies were selected based on a systematic analysis. The population was divided into a number of subpopulations, where each subpopulation uses only one mutation strategy. A number of equations were modified with new components (such as mutation strategies, F, and Cr). The subpopulation sizes vary adaptively, with the progress of evolution, depending on the success of the search operators. Considering the fact that a mutation strategy may work badly at the initial stage of the evolutionary process yet do well at a later stage, no subpopulation was discarded; instead, a lower bound on the subpopulation size was set. The subpopulations exchange information periodically.

In constrained optimization, the feasibility rule is the most widely used technique for handling constraints; it is implemented during the selection process and has no influence on the search operators once selection is done. In our algorithm, in addition to this basic rule applied in the selection process, a self-adaptive parameter selection mechanism was derived based on the feasibility status as well as the quality of fitness. These parameters influence the search operators in guiding the search in an effective manner. A local search approach was applied that guided the search process with respect to the feasibility status. Another important aspect of Memetic-SAMSDE was the adaptive specification of the amplification factor (F), the crossover rate (Cr), and each of the subpopulation sizes (p_i).

To show the robustness of our algorithm, we have tested it on more constrained optimization benchmark problems than any other research reported in the literature. Memetic-SAMSDE was able to either reach the same or better solutions than SAMSDE, with a faster convergence rate. Moreover, its average computational time was approximately 25% lower than that of SAMSDE. Memetic-SAMSDE has also been used to solve another set of test problems, presented in CEC2010, and it was found superior to the best algorithm in the literature so far.

Appendix A. The computational results for the 8 different variants of DE, the 4 SAMSDE variants and Memetic-SAMSDE. Bold-face values are the best values obtained.

Prob.

Variant

Best

Median

Mean

Worst

St. d

Fr

g01

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

−15.0000 −15.0000 −15.0000 −15.0000 −15.0000 −14.93612 −13.84243 −15.0000 −15.0000 −15.0000 −15.0000 −15.0000 −15.0000

−15.0000 −15.0000 −14.99772 −14.99999 −15.0000 −11.27281 −10.42081 −15.0000 −15.0000 −15.0000 −15.0000 −15.0000 −15.0000

−15.0000 −15.0000 −14.72997 −14.88902 −15.0000 −11.22802 −10.33411 −15.0000 −15.0000 −15.0000 −15.0000 −15.0000 −15.0000

−15.0000 −15.0000 −12.80830 −13.89072 −15.0000 −6.92534 −5.35221 −15.0000 −15.0000 −15.0000 −15.0000 −15.0000 −15.0000

0.000E+00 0.000E+00 6.348E−01 3.038E−01 0.000E+00 2.063E+00 2.179E+00 2.783E−12 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00

100% 100% 100% 100% 100% 100% 83% 100% 100% 100% 100% 100% 100%

g02

Var1 Var2 Var3 Var4 Var5 Var6 Var7

−0.792607 −0.803616 −0.802957 −0.803383 −0.803492 −0.794691 −0.803590

−0.75094889 −0.78525935 −0.56434695 −0.79231407 −0.80315591 −0.74520163 −0.79449554

−0.72777859 −0.77941386 −0.58500731 −0.78383929 −0.80277924 −0.73727212 −0.78715469

−0.62657042 −0.72990464 −0.45762017 −0.71409465 −0.79251767 −0.62672549 −0.74060639

5.268E−02 1.902E−02 8.986E−02 2.291E−02 1.953E−03 3.897E−02 1.880E−02

100% 100% 100% 100% 100% 100% 100%

S.M. Elsayed et al. / Applied Soft Computing 12 (2012) 3208–3227

Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

−0.803577 −0.803619 −0.803619 −0.803619 −0.80354089 −0.803619

−0.80317653 −0.7830533 −0.790770 −0.803615 −0.79251776 −0.803619

−0.80143889 −0.7749171 −0.790192 −0.798735 −0.79084087 −0.803619

−0.79238179 −0.7293984 −0.774025 −0.78656612 −0.77824257 −0.803619

4.078E−03 2.459E−02 9.624E−03 5.505E−03 6.3733E−03 0.000E+00

100% 100% 100% 100% 100% 100%

g03

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

−1.0005E+00 −1.0005E+00 −1.000392 −0.981477 −8.7963E−01 −1.0005E+00 −1.0005E+00 −8.7948E−01 −1.0005E+00 −1.0005E+00 −1.0005E+00 −1.0005E+00 −1.0005E+00

−1.0005E+00 −1.0005E+00 −0.998534 −0.301294 −5.1691E−01 −1.0004E+00 −9.9921E−01 −4.8251E−01 −1.0005E+00 −1.0005E+00 −1.0005E+00 −1.0005E+00 −1.0005E+00

−1.0005E+00 −1.0005E+00 −0.996051 −0.327997 −5.2722E−01 −1.0004E+00 −9.9336E−01 −5.1357E−01 −1.0005E+00 −1.0005E+00 −1.0005E+00 −1.0005E+00 −1.0005E+00

−1.0002E+00 −1.0005E+00 −0.967157 −0.050538 −2.1889E−01 −9.9984E−01 −9.1698E−01 −1.8886E−01 −1.0005E+00 −1.0005E+00 −1.0005E+00 −9.9999E−01 −1.0005E+00

5.483E−05 1.290E−05 6.900E−03 2.440E−01 1.862E−01 1.378E−04 1.851E−02 2.003E−01 8.216E−06 1.943E−06 0.000E+00 1.05839E−04 0.000E+00

100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100%

g04

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

−30665.53867 −30665.53867 −30665.53861 −30665.53867 −30665.53867 −30665.53867 −30665.53867 −30665.53867 −30665.53867 −30665.53867 −30665.53867 −30665.53867 −30665.53867

−30665.53867 −30665.53867 −30598.22830 −30645.64006 −30665.53867 −30665.53867 −30517.52764 −30665.53867 −30665.53867 −30665.53867 −30665.53867 −30665.53867 −30665.53867

−30665.53867 −30662.69876 −30507.30664 −30618.58900 −30665.53867 −30665.53867 −30491.58165 −30665.53864 −30665.53867 −30665.53867 −30665.53867 −30665.53867 −30665.53867

−30665.53867 −30634.93399 −30063.80410 −30350.44067 −30665.53866 −30665.53867 −30205.92244 −30665.53818 −30665.53867 −30665.53867 −30665.53867 −30665.53867 −30665.53867

3.549E−07 8.658E+00 2.033E+02 7.584E+01 3.248E−06 7.400E−12 1.451E+02 9.254E−05 0.000E+00 0.000E+00 0.000E+00 2.62164E−11 0.000E+00

100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100%

g05

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

5126.4986 5126.49674 5126.84975 5128.51882 5126.49672 5126.496714 5126.49679 5126.49672 5126.496714 5126.496714 5126.496714 5126.496714 5126.496714

5127.55094 5126.98662 5165.10296 5154.10451 5128.57673 5126.82144 5126.53355 5126.76752 5126.496748 5126.496714 5126.496714 5126.496714 5126.496714

5139.24089 5133.70682 5215.0651 5300.69776 5159.375 5127.60153 5126.66692 5146.10659 5126.635427 5126.496716 5126.496714 5126.496716 5126.496714

5321.52365 5217.94796 5476.49308 6019.29817 5746.11511 5134.42675 5127.27191 5439.4559 5128.806225 5126.496733 5126.496714 5126.496737 5126.496714

3.836E+01 1.992E+01 1.348E+02 2.961E+02 1.173E+02 1.864E+00 3.387E−01 5.875E+01 4.968E−01 4.671E−06 0.000E+00 4.95672E−06 0.000E+00

100% 100% 20% 53% 97% 73% 17% 100% 100% 100% 100% 100% 100%

g06

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

−6961.81388 −6961.81388 −6961.65199 −6955.79948 −6961.81388 −6961.81388 −6961.19154 −6961.81388 −6961.81388 −6961.81388 −6961.81388 −6961.81388 −6961.81388

−6961.81388 −6961.80882 −6583.17632 −5954.14448 −6961.81388 −6961.81388 −6751.41491 −6961.81381 −6961.81388 −6961.81388 −6961.81388 −6961.81388 −6961.81388

−6961.81388 −6937.41732 −6365.24249 −5952.08478 −6961.81388 −6961.81388 −6602.66730 −6961.81344 −6961.81388 −6961.81388 −6961.81388 −6961.81388 −6961.81388

−6961.81388 −6804.36991 −4841.31506 −4724.92326 −6961.81388 −6961.81388 −5302.91507 −6961.81166 −6961.81388 −6961.81388 −6961.81388 −6961.81388 −6961.81388

2.536E−10 4.785E+01 5.942E+02 7.000E+02 0.000E+00 0.000E+00 4.499E+02 6.564E−04 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00

100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100%

g07

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

24.3065 24.3069 25.0982 24.3093 24.3905 24.3141 27.5349 24.3557 24.3069 24.3062 24.3062 24.3082 24.3062

24.3211 24.3205 56.1442 24.3786 24.5539 24.9317 148.5187 24.5890 24.3287 24.3197 24.3097 24.3198 24.3062

24.3562 24.3289 259.5011 24.4392 24.5537 27.4315 224.3423 24.5971 24.3330 24.3273 24.3096 24.3254 24.3062

24.6157 24.4085 2104.6870 24.9022 24.7469 45.6479 833.9120 24.8384 24.3881 24.3660 24.312 24.3620 24.3062

7.264E−02 2.528E−02 5.027E+02 1.523E−01 9.127E−02 5.081E+00 2.059E+02 1.343E−01 2.497E−02 1.938E−02 1.589E−03 1.5184E−02 0.000E+00

100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100%

g08

Var1 Var2 Var3 Var4

−0.09582504 −0.09582504 −0.09582504 −0.09582504

−0.09582504 −0.09582504 −0.09582504 −0.09582504

−0.09582504 −0.09582504 −0.09582504 −0.09582504

−0.09582504 −0.09582504 −0.09582504 −0.09582504

0.000E+00 0.000E+00 0.000E+00 0.000E+00

100% 100% 100% 100%


Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

−0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504

−0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504

−0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504

−0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504 −0.09582504

0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00

100% 100% 100% 100% 100% 100% 100% 100% 100%

g09

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

680.630 680.630 680.643 680.645 680.631 680.630 680.630 680.631 680.630 680.630 680.630 680.630 680.630

680.631 680.633 692.080 680.783 680.636 680.632 684.124 680.636 680.631 680.630 680.630 680.630 680.630

680.635 680.634 709.048 682.053 680.639 680.633 695.189 680.638 680.631 680.630 680.630 680.631 680.630

680.690 680.646 846.972 696.090 680.676 680.642 732.508 680.652 680.632 680.631 680.630 680.634 680.630

1.108E−02 3.979E−03 4.263E+01 3.401E+00 1.072E−02 3.598E−03 1.754E+01 5.860E−03 6.763E−04 3.525E−04 1.157E−05 8.00174E−04 0.000E+00

100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100%

g10

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

7052.90735 7049.56975 7253.67448 7136.60588 7110.24729 7049.26064 7125.27024 7139.26999 7049.37848 7049.24802 7049.24802 7055.20928 7049.24802

7201.68608 7071.05981 9255.19180 7449.83952 7223.17625 7103.34539 7125.27024 7256.83458 7105.922 7082.63023 7060.63373 7121.77451 7049.24802

7207.32656 7105.84371 9810.10624 7547.13172 7258.18394 7152.03603 10872.51384 7259.31204 7112.012 7091.18063 7059.81345 7142.32343 7049.24802

7474.97111 7347.26421 16206.18366 8252.45423 7443.93855 7517.67488 16568.94495 7407.47287 7191.295 7160.25786 7071.30327 7263.34947 7049.24802

1.074E+02 7.252E+01 2.365E+03 3.162E+02 9.459E+01 1.244E+02 2.715E+03 7.076E+01 7.049E+03 3.058E+01 7.856E+00 6.43355E+01 0.000E+00

100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100%

g11

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499

0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499

0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499

0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499 0.7499

0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00

100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100%

g12

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

−1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000

−1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000

−1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000

−1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000 −1.0000

0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00

100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100%

g13

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

0.05394151 0.05394521 – 0.05412206 0.05394161 0.05394309 0.05394257 0.05394154 0.05394151 0.05394151 0.05394151 0.05394151 0.05394151

0.05394156 0.05468931 – 0.09819895 0.0539425 0.05431651 0.05395372 0.05394171 0.05394151 0.05394175 0.05394151 0.0539424 0.05394151

0.05394193 0.06028431 – 0.52572748 0.1260918 0.05534815 0.05404684 0.08213685 0.05394166 0.05394181 0.05394151 0.05394265 0.05394151

0.05394862 0.08281988 – 4.05142982 0.70414959 0.06243634 0.05460277 0.70240346 0.0539428 0.0539423 0.05394151 0.0539457 0.05394151

1.409E−06 9.850E−03 – 1.183E+00 2.100E−01 2.235E−03 2.455E−04 1.352E−01 2.968E−07 2.669E−07 1.754E−08 1.1314E−06 0.000E+00

100% 100% 0% 37% 60% 67% 23% 77% 100% 100% 100% 100% 100%


g14

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

−47.66182 −47.76206 −42.99755 −44.99615 −46.12642 −45.83477 – −46.66291 −47.75117 −47.75740 −47.76489 −47.75208 −47.76489

−47.24947 −47.52454 – – −44.79094 – – −46.63655 −47.6591 −47.65839 −47.67968 −47.65362 −47.76489

−47.08731 −47.46548 – – −44.64022 – – −46.43386 −47.597 −47.62644 −47.68115 −47.62349 −47.76489

−45.69602 −46.53409 – – −42.85257 – – −46.00211 −47.134385 −47.42405 −47.60681 −47.41915 −47.76489

5.160E−01 3.044E−01 – – 1.387E+00 – – 3.741E−01 1.663E−01 9.708E−02 4.043E−02 1.022E−01 0.000E+00

100% 100% 3% 3% 17% 3% 0% 100% 100% 100% 100% 100% 100%

g15

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

961.7150223 961.7150223 961.7261865 961.7150273 961.7150227 961.7150223 961.7150761 961.7150223 961.7150223 961.7150223 961.7150223 961.7150223 961.7150223

961.7150479 961.7150223 962.5881094 962.3347265 961.7150321 961.7258983 961.7175236 961.7150243 961.7150223 961.7150223 961.7150223 961.7150223 961.7150223

961.7153402 961.7166034 963.0149786 963.5089152 961.7195018 961.7912551 961.9357546 961.7764624 961.7150223 961.7150223 961.7150223 961.7150223 961.7150223

961.7194871 961.7573710 965.3104942 971.5931973 961.7835877 962.9247336 963.9488793 963.2689452 961.7150224 961.7150223 961.7150223 961.7150223 961.7150223

8.815E−04 7.867E−03 1.357E+00 2.587E+00 1.552E−02 2.313E−01 5.638E−01 2.837E−01 4.109E−08 7.283E−10 0.000E+00 0.000E+00 0.000E+00

100% 100% 47% 77% 100% 100% 63% 100% 100% 100% 100% 100% 100%

g16

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

−1.9051553 −1.9051553 −1.9049509 −1.9051553 −1.9051553 −1.9051553 −1.9050978 −1.9051553 −1.9051553 −1.9051553 −1.9051553 −1.9051553 −1.9051553

−1.9051553 −1.9051553 −1.8360519 −1.9037112 −1.9051553 −1.9051553 −1.6699701 −1.9051552 −1.9051553 −1.9051553 −1.9051553 −1.9051553 −1.9051553

−1.9051553 −1.9051553 −1.7803699 −1.8804328 −1.9051553 −1.9051553 −1.6653851 −1.9051552 −1.9051553 −1.9051553 −1.9051553 −1.9051553 −1.9051553

−1.9051553 −1.9051553 −1.5150565 −1.7420216 −1.9051553 −1.9051553 −1.2531025 −1.9051552 −1.9051553 −1.9051553 −1.9051553 −1.9051553 −1.9051553

0.000E+00 3.229E−10 1.262E−01 4.699E−02 1.249E−09 6.260E−15 1.491E−01 9.582E−09 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00

100% 100% 100% 100% 100% 100% 93% 100% 100% 100% 100% 100% 100%

g17

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

8853.5830 8853.5398 8884.3566 8933.5484 8853.7346 8927.6413 8854.3748 8854.9060 8853.5397 8853.5397 8853.5397 8853.5397 8853.5397

8927.6245 8949.0383 8935.6364 8955.8365 8895.1022 8933.2121 8933.6632 8906.5570 8927.5977 8853.5442 8853.5397 8854.6280 8853.5397

8906.6158 8937.0179 8930.1918 8948.5367 8896.7260 8938.7521 8944.3237 8905.5853 8914.0841 8861.0374 8853.5397 8884.2418 8853.5397

8953.5082 8956.7443 8958.7359 8956.2252 8942.9652 8972.7385 9132.1231 8955.2387 8941.2845 8927.5976 8853.5397 8934.6320 8853.5397

3.660E+01 2.690E+01 2.470E+01 1.300E+01 4.680E+01 1.300E+01 7.270E+01 4.060E+01 3.095E+01 2.257E+01 1.150E−05 37.040844 0.000E+00

100% 100% 20% 10% 13% 43% 33% 33% 100% 100% 100% 100% 100%

g18

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

−0.8660247 −0.866025 −0.8656157 −0.8659147 −0.8656695 −0.8659593 −0.8659053 −0.8657295 −0.866025 −0.866025 −0.866025 −0.866025 −0.866025

−0.865918 −0.866024 −0.693437 −0.864423 −0.855914 −0.790401 −0.717658 −0.859722 −0.866024 −0.866024 −0.866025 −0.865932 −0.866025

−0.865634 −0.866020 −0.679722 −0.858775 −0.853913 −0.683671 −0.655643 −0.857528 −0.866018 −0.866021 −0.866024 −0.821271 −0.866025

−0.860471 −0.865983 −0.423627 −0.779581 −0.822679 −0.116692 −0.208438 −0.841841 −0.865959 −0.866000 −0.866023 −0.673401 −0.866025

1.022E−03 9.806E−06 1.619E−01 1.731E−02 1.081E−02 2.077E−01 2.289E−01 6.458E−03 1.400E−05 6.000E−06 7.044E−07 8.2292E−02 0.000E+00

100% 100% 60% 60% 100% 87% 67% 100% 100% 100% 100% 100% 100%

g19

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3

32.733574 32.694400 34.337936 38.979160 33.267702 33.134872 84.805281 33.321226 32.674727 32.660512

33.264270 32.859724 38.763762 193.115623 33.908131 37.908671 463.831694 34.069069 32.886802 32.796922

33.641059 32.930357 47.387551 230.142185 34.071673 59.641999 497.356785 34.191125 32.928571 32.843528

38.583624 33.462962 107.248951 696.302035 35.481096 157.471637 1223.905920 35.427756 33.373128 33.232222

1.189E+00 2.112E−01 1.844E+01 1.522E+02 5.990E−01 1.189E+00 3.149E+02 5.989E−01 2.004E−01 1.596E−01

100% 100% 100% 100% 100% 100% 100% 100% 100% 100%


SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

32.655592 33.032823 32.655592

32.747529 34.058272 32.655592

32.757340 34.147526 32.655592

32.877319 35.754285 32.655592

6.145E−02 7.92953E−01 0.000E+00

100% 100% 100%

g20

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE (Violation)

– – – – – – – – – – – – 0.123425(2.023E−06)

– – – – – – – – – – – – 0.124009(4.50E−05)

– – – – – – – – – – – – 0.122097(4.5039E−05)

– – – – – – – – – – – – 0.139065(5.02E−03)

– – – – – – – – – – – – 0.010122

0% 0% 0% 0% 0% 0% 0% 0% 0% 0% 0% 0%

g21

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

– – – – – – – – 308.908985 193.751890 193.724510 193.779209 193.724510

– – – – – – – – 324.712344 308.826920 193.777252 296.203018 193.724510

– – – – – – – – 324.104402 262.936182 193.771375 280.583585 193.724510

– – – – – – – – 329.548876 326.131610 193.802357 324.873440 193.724510

– – – – – – – – 4.564E+00 6.680E+01 1.964E−02 4.0090E+01 0.000E+00

0% 0% 0% 0% 0% 0% 0% 0% 47% 43% 100% 100% 100%

g22

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

– – – – – – – – – – – – 236.370313

– – – – – – – – – – – – 244.287118

– – – – – – – – – – – – 245.738829

– – – – – – – – – – – – 262.298133

– – – – – – – – – – – – 9.05939E+00

0% 0% 0% 0% 0% 0% 0% 0% 0% 0% 0% 0% 100%

g23

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

−309.959147 −382.522771 – – – −176.574898 – −163.241640 −374.201398 −376.547624 −396.165732 −233.312952 −400.055100

−113.851981 −191.134573 – – – −100.974123 – 56.424551 −262.684823 −277.622914 −357.138015 −115.499963 −400.055100

−68.339320 −199.903075 – – – −65.110554 – 9.734071 −265.241202 −283.477008 −360.817656 −48.563576 −400.055100

258.711079 −6.556980 – – – 111.962597 – 136.019301 −176.606485 −207.154198 −337.163277 280.898759 −400.055100

1.645E+02 9.725E+01 – – – 1.197E+02 – 1.550E+02 5.312E+01 4.715E+01 1.962E+01 137.612796 0.000E+00

83% 70% 0% 0% 0% 27% 0% 10% 100% 100% 100% 76.67% 100%

g24

Var1 Var2 Var3 Var4 Var5 Var6 Var7 Var8 SAMSDE-2 SAMSDE-3 SAMSDE-4 SAMSDE-5 Memetic-SAMSDE

−5.5080133 −5.5080133 −5.5080133 −5.5080133 −5.5080133 −5.5080133 −5.5080133 −5.5080133 −5.5080133 −5.5080133 −5.5080133 −5.5080133 −5.5080133

−5.5080133 −5.5080133 −5.5080133 −5.5075853 −5.5080133 −5.5080133 −5.5080133 −5.5080133 −5.5080133 −5.5080133 −5.5080133 −5.5080133 −5.5080133

−5.5080133 −5.5080133 −5.5079728 −5.5039679 −5.5080133 −5.5080133 −5.5068265 −5.5080133 −5.5080133 −5.5080133 −5.5080133 −5.5080133 −5.5080133

−5.5080133 −5.5080133 −5.5071330 −5.4687543 −5.5080133 −5.5080133 −5.4847366 −5.5080133 −5.5080133 −5.5080133 −5.5080133 −5.5080133 −5.5080133

0.000E+00 0.000E+00 1.651E−04 8.271E−03 4.921E−13 0.000E+00 4.487E−03 1.276E−12 0.000E+00 0.000E+00 0.000E+00 0.000E+00 0.000E+00

100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100%



Appendix B. Function values achieved via Memetic-SAMSDE, APF-GA, MDE, ATMES, SMES, and SR+ES for 24 CEC2006 test problems. The bold-face values mean they are the best values obtained.

Problem

Criteria: Best / Average / St. d. Algorithms (FEs, Runs): Memetic-SAMSDE (240,000, 30), APF-GA (500,000, 25), MDE (500,000, 25), ATMES (240,000, 30), SMES (240,000, 30), SR+ES (350,000, 30).

g01

Best Average St. d

−15.0000 −15.0000 0.00E+00

−15.0000 −15.0000 0.00E+00

−15.0000 −15.0000 0.00E+00

−15.0000 −15.0000 1.6E−14

−15.0000 −15.0000 0.00E+00

−15.0000 −15.0000 0.00E+00

g02

Best Average St. d

−0.8036191 −0.8036191 0.00E+00

−0.803601 −0.803518 1.00E−04

−0.8036191 −0.78616 1.26E−02

−0.803339 −0.790148 1.3E−02

−0.803601 −0.785238 1.67E−02

−0.803515 −0.781975 2.0E−02

g03

Best Average St. d

−1.0005 −1.0005 0.00E+00

−1.001 −1.001 0.00E+00

−1.0005 −1.0005 0.00E+00

−1.000 −1.000 5.9E−05

−1.000 −1.000 2.09E−05

−1.000 −1.000 2.09E−05

g04

Best Average St. d

−30665.539 −30665.539 0.00E+00

−30665.539 −30665.539 1.00E−04

−30665.5386 −30665.5386 0.00E+00

−30665.539 −30665.539 7.4E−12

−30665.539 −30665.539 0.00E+00

−30665.539 −30665.539 2.0E−05

g05

Best Average St. d

5126.497 5126.497 0.00E+00

5126.497 5127.5423 1.4324 E+00

5126.497 5126.497 0.00E+00

5126.498 5127.648 1.8E+00

5126.599 5174.492 5.006E+01

5126.497 5128.881 3.5E+00

g06

Best Average St. d

−6961.813875 −6961.813875 0.00E+00

−6961.814 −6961.814 0.00E+00

−6961.814 −6961.814 0.00E+00

−6961.814 −6961.814 4.6E−12

−6961.814 −6961.284 1.85E+00

−6961.814 −6875.940 1.6E+02

g07

Best Average St. d

24.3062 24.3062 0.00E+00

24.3062 24.3062 0.00E+00

24.3062 24.3062 0.00E+00

24.306 24.316 1.1E−02

24.327 24.475 1.32E−01

24.307 24.374 6.6E−02

g08

Best Average St. d

−0.095825 −0.095825 0.00E+00

−0.095825 −0.095825 0.00E+00

−0.095825 −0.095825 0.00E+00

−0.095825 −0.095825 2.8E−17

−0.095825 −0.095825 0.00E+00

−0.095825 −0.095825 0.00E+00

g09

Best Average St. d

680.630 680.630 0.00E+00

680.630 680.630 0.00E+00

680.630 680.630 0.00E+00

680.630 680.639 1.0E−02

680.632 680.643 1.55E−02

680.630 680.656 3.4E−02

g10

Best Average St. d

7049.24802 7049.24802 0.00E+00

7049.24802 7077.6821 5.1240E+01

7049.24802 7049.24802 0.00E+00

7052.253 7250.437 1.2E+02

7051.903 7253.047 1.36E+02

7051.903 7253.047 1.36E+02

g11

Best Average St. d

0.7499 0.7499 0.00E+00

0.7499 0.7499 0.00E+00

0.7499 0.7499 0.00E+00

0.75 0.75 3.4E−04

0.75 0.75 1.52E−04

0.750 0.750 8.0E−05

g12

Best Average St. d

−1.0000 −1.0000 0.00E+00

−1.0000 −1.0000 0.00E+00

−1.0000 −1.0000 0.00E+00

−1.0000 −1.0000 1.0E−03

−1.0000 −1.0000 0.00E+00

−1.0000 −1.0000 0.00E+00

g13

Best Average St. d

0.053942 0.053942 0.00E+00

0.053942 0.053942 0.00E+00

0.053942 0.053942 0.00E+00

0.053950 0.053959 1.3E−05

0.053986 0.166385 1.77E−01

0.067543 0.067543 3.1E−02

g14

Best Average St. d

−47.764888 −47.764888 0.00E+00

−47.76479 −47.76479 1.00E−04

−47.764887 −47.764874 1.400E−05

– – –

– – –

– – –

g15

Best Average St. d

961.71502 961.71502 0.00E+00

961.71502 961.71502 0.00E+00

961.71502 961.71502 0.00E+00

– – –

– – –

– – –

g16

Best Average St. d

−1.905155 −1.905155 0.00E+00

−1.905155 −1.905155 0.00E+00

−1.905155 −1.905155 0.00E+00

– – –

– – –

– – –

g17

Best Average St. d

8853.5397 8853.5397 0.00E+00

8853.5398 8888.4876 29.0347

8853.5397 8853.5397 0.00E+00

– – –

– – –

– – –

g18

Best Average St. d

−0.866025 −0.866025 0.00E+00

−0.866025 −0.865925 1.00E−04

−0.866025 −0.866025 0.00E+00

– – –

– – –

– – –

g19

Best Average St. d

32.655593 32.655593 0.00E+00

32.655593 32.655593 0.00E+00

32.64827 33.34125 8.475E−01

– – –

– – –

– – –

g21

Best Average St. d

193.72451 193.72451 0.00E+00

196.63301 199.51581 2.3565

193.72451 193.72451 0.00E+00

– – –

– – –

– – –


g22

Best Average St. d

236.370313 245.738829 9.05939E+00

– – –

– – –

– – –

– – –

– – –

g23

Best Average St. d

−400.0551 −400.0551 0.00E+00

−399.7624 −394.7627 3.8656E+00

−400.0551 −400.0551 0.00E+00

– – –

– – –

– – –

g24

Best Average St. d

−5.508013 −5.508013 0.00E+00

−5.508013 −5.508013 0.00E+00

−5.508013 −5.508013 0.00E+00

– – –

– – –

– – –

Appendix C. The function values achieved via Memetic-SAMSDE, εDEag, and IEMA for 10D and 30D, presented in (a) and (b), respectively. The bold-face values mean they are the best values obtained.

Prob.

Alg. | Best | Median | Mean | Worst | St. d

C01

Memetic-SAMSDE εDEag IEMA

−0.7473104 −0.7473104 −0.74731

−0.7473104 −0.7473104 −0.74615

−0.7473104 −0.7470402 −0.743189

−0.7473104 −0.7405572 −0.738026

0.0000E+00 1.323339E−03 0.00433099

C02

Memetic-SAMSDE εDEag IEMA

−2.2777099 −2.277702 −2.27771

−2.2776697 −2.269502 −2.27771

−2.2776477 −2.258870 −2.27771

−2.2775011 −2.174499 −2.27771

5.9704E−05 2.389779E−02 1.82278E−07

C03

Memetic-SAMSDE εDEag IEMA

0.0000E+00 0.0000E+00 1.46667E−16

1.176214E−21 0.0000E+00 3.2005E−15

4.815855E−21 0.0000E+00 6.23456E−07

2.923471E−20 0.0000E+00 4.51805E−06

6.862013E−21 0.0000E+00 1.40239E−06

C04

Memetic-SAMSDE εDEag IEMA

−1.0000E−05 −9.992345E−06 −9.98606E−06

−1.0000E−05 −9.977276E−06 −9.95109E−06

−1.0000E−05 −9.918452E−06 −9.91135E−06

−1.0000E−05 −9.282295E−06 −9.68939E−06

2.2847E−10 1.546730E−07 8.99217E−08

C05

Memetic-SAMSDE εDEag IEMA

−483.610625 −483.6106 −483.611

−483.610625 −483.6106 −483.611

−483.610625 −483.6106 −379.156

−483.610624 −483.6106 134.827

1.4883E−07 3.890350E−13 179.424

C06

Memetic-SAMSDE εDEag IEMA

−578.66236 −578.6581 −578.662

−578.6622 −578.6533 −578.662

−578.6622 −578.6528 −551.47

−578.6619 −5.786448 −351.933

1.182591E−04 3.627169E−03 73.5817

C07

Memetic-SAMSDE εDEag IEMA

0.000E+00 0.000E+00 1.74726E−10

2.321176E−24 0.000E+00 1.9587E−09

9.297296E−24 0.000E+00 3.25685E−09

7.414433E−23 0.000E+00 1.33403E−08

1.706221E−23 0.000E+00 3.38717E−09

C08

Memetic-SAMSDE εDEag IEMA

0.000E+00 0.000E+00 1.00753E−10

3.767414E−22 1.094154E+01 3.94831E−09

9.514571E−20 6.727528E+00 4.0702

1.712791E−18 1.537535E+01 21.0992

3.494602E−19 5.560648E+00 6.38287

C09

Memetic-SAMSDE εDEag IEMA

0.000E+00 0.000E+00 1.20218E−09

3.1431E−23 0.000E+00 333.32

1.2949E−21 0.000E+00 1.95109E+12

1.2751E−20 0.000E+00 1.96407E+13

2.8690E−21 0.000E+00 5.40139E+12

C10

Memetic-SAMSDE εDEag IEMA

0.000E+00 0.000E+00 5.4012E−09

8.1403E−24 0.000E+00 42130.4

7.4597E−23 0.000E+00 2.5613E+12

6.6775E−22 0.000E+00 1.10111E+13

1.5575E−22 0.000E+00 3.96979E+12

C11

Memetic-SAMSDE εDEag IEMA

−1.52271E−03 1.522713E−03 −0.00152271

−1.52271E−03 1.522713E−03 −0.00152271

−1.52271E−03 1.522713E−03 −0.00152271

−1.52271E−03 1.522713E−03 −0.00152258

1.3605E−08 6.341035E−11 2.73127E−08

C12

Memetic-SAMSDE εDEag IEMA

−570.0899 −570.0899 −10.9735

−0.19924565 −423.1332 −0.199246

−33.55340799 −336.7349 −0.648172

−0.199245475 −0.1989129 −0.199246

1.160889E+02 1.782166E+02 2.19928

C13

Memetic-SAMSDE εDEag IEMA

−68.42937 −68.42937 −68.4294

−68.42937 −68.42936 −68.4294

−68.42937 −68.42936 −68.0182

−68.42937 −68.42936 −63.2176

0.000E+00 1.02596E−06 1.40069

C14

Memetic-SAMSDE εDEag IEMA

0.000E+00 0.000E+00 8.03508E−10

5.708920E−22 0.000E+00 1.29625E−08

9.776551E−21 0.000E+00 56.3081

7.546749E−20 0.000E+00 853.321

2.101378E−20 0.000E+00 182.866

C15

Memetic-SAMSDE εDEag IEMA

0.000E+00 0.000E+00 9.35405E−10

6.9974E−22 0.000E+00 26.1715

2.4762E−20 1.798978E−01 1.57531E+08

5.0809E−19 4.497445 2.8734E+09

1.0131E−19 8.813156E−01 6.04477E+08

(a) 10D


Prob.

Alg.

Best

Median

Mean

Worst

St. d

C16

Memetic-SAMSDE εDEag IEMA

0.000E+00 0.000E+00 4.44089E−16

0.000E+00 2.819841E−01 0.0320248

0.000E+00 3.702054E−01 0.0330299

0.000E+00 1.018265 0.0766038

0.000E+00 3.710479E−01 0.0226013

C17

Memetic-SAMSDE εDEag IEMA

0.0000E+00 1.463180E−17 9.47971E−15

3.5243E−16 5.653326E−03 2.59284E−12

8.1740E−16 1.249561E−01 0.00315093

5.8365E−15 7.301765E−01 0.0787733

1.4492E−15 1.937197E−01 0.0157547

C18

Memetic-SAMSDE εDEag IEMA

0.0000E+00 3.731439E−20 2.23664E−15

8.3539E−30 4.097909E−19 6.78077E−15

3.0395E−25 9.678765E−19 1.61789E−14

7.4918E−24 9.227027E−18 1.91678E−13

1.4976E−24 1.811234E−18 3.82034E−14

C01

Memetic-SAMSDE εDEag IEMA

−0.821884397 −0.8218255 −0.821883

−0.817736123 −0.8206172 −0.819145

−0.815632419 −0.8208687 −0.817769

−0.806881644 −0.8195466 −0.801843

4.413638E−03 7.103893E−04 0.00478853

C02

Memetic-SAMSDE ␧DEag IEMA

−2.2809621 −2.169248 −2.28091

−2.2774425 −2.152145 −2.27767

−2.2777017 −2.151424 −1.50449

−2.277206 −2.117096 4.40943

9.847005E−04 1.197582E−02 2.14056

C03

Memetic-SAMSDE εDEag IEMA

1.149732E−20 2.867347E+01 –

5.501674E−19 2.867347E+01 –

9.85425E−18 2.883785E+01 –

1.74052E−16 3.278014E+01 –

3.47535E−17 8.047159E−01 –

C04

Memetic-SAMSDE εDEag IEMA

−3.332720E−06 4.698111E−03 –

−3.328525E−06 6.947614E−03 –

1.512122E−05 8.162973E−03 –

4.523993E−04 1.777889E−02 –

9.110543E−05 3.067785E−03 –

C05

Memetic-SAMSDE εDEag IEMA

−483.610624 −453.1307 −286.678

−483.610622 −450.0404 –

−483.61058 −449.5460 −270.93

−483.610208 −442.1590 −253.165

9.6009E−05 2.899105 14.1169

C06

Memetic-SAMSDE εDEag IEMA

−530.636798 −528.5750 −529.593

−530.1155585 −528.0407 –

−530.0979007 −527.9068 −132.876

−529.4906113 −526.4539 263.84

3.0770E−01 4.748378E−01 561.042

C07

Memetic-SAMSDE εDEag IEMA

5.01729E−25 1.147112E−15 4.81578E−10

3.01491E−20 2.114429E−15 6.32192E−10

9.34410E−20 2.603632E−15 8.48609E−10

7.62179E−19 5.481915E−15 2.19824E−09

1.66599E−19 1.233430E−15 4.84296E−10

C08

Memetic-SAMSDE εDEag IEMA

3.404908E−21 2.518693E−14 1.12009E−09

9.274585E−20 6.511508E−14 0.101033

1.645260E−17 7.831464E−14 17.7033

1.255612E−16 2.578112E−13 139.111

3.885606E−17 4.855177E−14 40.8025

C09

Memetic-SAMSDE εDEag IEMA

9.823762E−23 2.770665E−16 7314.23

2.690738E−17 1.124608E−08 7.91089E+06

1.377364E−14 1.072140E+01 2.98793E+07

2.285227E−13 1.052759E+02 1.76199E+08

4.586357E−14 2.821923E+01 4.50013E+07

C10

Memetic-SAMSDE εDEag IEMA

4.884692E−25 3.252002E+01 27,682

9.559575E−19 3.328903E+01 1.1134E+07

1.622473E−15 3.326175E+01 1.58342E+07

1.696935E−14 3.463243E+01 6.86131E+07

3.642993E−15 4.545577E−01 1.68363E+07

C11

Memetic-SAMSDE εDEag IEMA

−3.922441E−04 −3.268462E−04 –

−3.915911E−04 −2.843296E−04 –

−3.915947E−04 −2.863882E−04 –

−3.907394E−04 −2.236338E−04 –

4.797565E−07 2.707605E−05 –

C12

Memetic-SAMSDE εDEag IEMA

−0.1992611 −0.1991453 –

−0.1992552 5.337125E+02* –

−0.1992558 3.562330E+02 –

−0.1992531 5.461723E+02* –

2.0089E−06 2.889253E+02 –

C13

Memetic-SAMSDE εDEag IEMA

−68.42936 −66.42473 −68.4294

−68.42936 −65.31507 −67.6537

−68.2398 −65.35310 −67.4872

−67.56873 −64.29690 −65.3341

3.4466E−01 5.733005E−01 0.983662

C14

Memetic-SAMSDE εDEag IEMA

1.568781E−19 5.015863E−14 3.28834E−09

3.693949E−16 1.359306E−13 7.38087E−09

2.806450E−12 3.089407E−13 0.0615242

1.876775E−11 2.923513E−12 1.53683

6.249252E−12 5.608409E−13 0.307356

C15

Memetic-SAMSDE εDEag IEMA

7.472138E−21 2.160345E+01 31,187.6

7.304483E−19 2.160375E+01 7.28118E+07

3.940227E−15 2.160376E+01 2.29491E+08

6.478787E−14 2.160403E+01 2.30084E+09

1.306798E−14 1.104834E−04 4.64046E+08

C16

Memetic-SAMSDE εDEag IEMA

0.0000E+00 0.0000E+00 6.15674E−12

0.0000E+00 0.0000E+00 1.26779E−10

0.0000E+00 2.168404E−21 0.00163294

0.0000E+00 5.421011E−20 0.0408235

0.0000E+00 1.062297E−20 0.0081647

C17

Memetic-SAMSDE εDEag IEMA

6.906494E−11 2.165719E−01 9.27664E−10

3.532013E−08 5.315949E+00 5.67557E−06

1.229944E−07 6.326487E+00 0.0883974

4.388244E−07 1.889064E+01 0.351339

1.477504E−07 4.986691E+00 0.15109

C18

Memetic-SAMSDE εDEag IEMA

3.003416E−20 1.226054E+00 1.37537E−14

3.378179E−17 2.679497E+01 2.12239E−14

8.360598E−15 8.754569E+01 4.73841E−14

1.926489E−13 7.375363E+02 3.25877E−13

3.843566E−14 1.664753E+02 6.5735E−14

(b) 30D


References

[1] J.J. Liang, P.N. Suganthan, Dynamic multi-swarm particle swarm optimizer with a novel constraint-handling mechanism, in: IEEE Congress on Evolutionary Computation, 2006, pp. 9–16.
[2] D. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, MA, 1989.
[3] R. Storn, K. Price, Differential evolution – a simple and efficient adaptive scheme for global optimization over continuous spaces, in: Technical Report, International Computer Science Institute, 1995.
[4] I. Rechenberg, Evolutionsstrategie: Optimierung Technischer Systeme nach Prinzipien der Biologischen Evolution, Fromman-Holzboog, Stuttgart, 1973.
[5] L. Fogel, J. Owens, M. Walsh, Artificial Intelligence Through Simulated Evolution, John Wiley & Sons, New York, 1966.
[6] J. Li, B. Suolang, J.-P. Liu, Urban climatology, energy conservation and thermal performance of Lhasa City, in: International Conference on Energy and Environment Technology (ICEET'09), 2009, pp. 497–500.
[7] J. Brest, B. Bošković, V. Žumer, An improved self-adaptive differential evolution algorithm in single objective constrained real-parameter optimization, in: IEEE Congress on Evolutionary Computation, 2010, pp. 1–8.
[8] R. Mallipeddi, P.N. Suganthan, Problem definitions and evaluation criteria for the CEC 2010 competition and special session on single objective constrained real-parameter optimization, in: Technical Report, Nanyang Technological University, Singapore, 2010.
[9] T. Takahama, S. Sakai, Constrained optimization by the ε constrained differential evolution with an archive and gradient-based mutation, in: IEEE Congress on Evolutionary Computation, 2010, pp. 1–9.
[10] S.M. Elsayed, R.A. Sarker, D.L. Essam, Multi-operator based evolutionary algorithms for solving constrained optimization problems, Computers and Operations Research 38 (2011) 1877–1896.
[11] S.M. Elsayed, R.A. Sarker, D.L. Essam, Integrated strategies differential evolution algorithm with a local search for constrained optimization, in: IEEE Congress on Evolutionary Computation, 2011, pp. 2618–2625.
[12] B. Tessema, G.G. Yen, An adaptive penalty formulation for constrained evolutionary optimization, IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans 39 (2009) 565–578.
[13] H.K. Singh, T. Ray, W. Smith, Performance of infeasibility empowered memetic algorithm for CEC 2010 constrained optimization problems, in: IEEE Congress on Evolutionary Computation, 2010, pp. 1–8.
[14] I.G. Tsoulos, Solving constrained optimization problems using a novel genetic algorithm, Applied Mathematics and Computation 208 (2009) 273–283.
[15] W. Yong, C. Zixing, Z. Yuren, Z. Wei, An adaptive tradeoff model for constrained evolutionary optimization, IEEE Transactions on Evolutionary Computation 12 (2008) 80–92.
[16] E. Mezura-Montes, C.A.C. Coello, An empirical study about the usefulness of evolution strategies to solve constrained optimization problems, International Journal of General Systems 37 (2008) 443–473.
[17] K. Deb, An efficient constraint handling method for genetic algorithms, Computer Methods in Applied Mechanics and Engineering 186 (2000) 311–338.
[18] D.H. Wolpert, W.G. Macready, No free lunch theorems for optimization, IEEE Transactions on Evolutionary Computation 1 (1997) 67–82.
[19] J.J. Liang, T.P. Runarsson, E. Mezura-Montes, M. Clerc, P.N. Suganthan, C.A.C. Coello, K. Deb, Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization, in: Technical Report, Nanyang Technological University, Singapore, 2005.
[20] R. Storn, K. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, Journal of Global Optimization 11 (1997) 341–359.
[21] A.K. Qin, V.L. Huang, P.N. Suganthan, Differential evolution algorithm with strategy adaptation for global numerical optimization, IEEE Transactions on Evolutionary Computation 13 (2009) 398–417.
[22] E. Mezura-Montes, J.V. Reyes, C.A. Coello Coello, A comparative study of differential evolution variants for global optimization, in: The 8th Annual Conference on Genetic and Evolutionary Computation, ACM, Seattle, Washington, USA, 2006, pp. 485–492.
[23] A. Iorio, X. Li, Solving rotated multi-objective optimization problems using differential evolution, in: Australian Conference on Artificial Intelligence, 2004, pp. 861–872.
[24] K.V. Price, R.M. Storn, J.A. Lampinen, Differential Evolution: A Practical Approach to Global Optimization, Springer, Berlin, 2005.
[25] S. Das, P.N. Suganthan, Differential evolution: a survey of the state-of-the-art, IEEE Transactions on Evolutionary Computation 15 (2011) 4–31.
[26] R. Gämperle, S.D. Müller, P. Koumoutsakos, Parameter study for differential evolution, in: WSEAS International Conference on Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation, 2002, pp. 293–298.


[27] J. Ronkkonen, S. Kukkonen, K.V. Price, Real-parameter optimization with differential evolution, in: The 2005 IEEE Congress on Evolutionary Computation, vol. 1, 2005, pp. 506–513.
[28] H.A. Abbass, The self-adaptive Pareto differential evolution algorithm, in: IEEE Congress on Evolutionary Computation, 2002, pp. 831–836.
[29] D. Zaharie, Control of population diversity and adaptation in differential evolution algorithms, in: The 9th International Conference on Soft Computing, 2003, pp. 41–46.
[30] S. Das, A. Konar, U.K. Chakraborty, Two improved differential evolution schemes for faster global search, in: The 2005 Conference on Genetic and Evolutionary Computation, ACM, Washington, DC, USA, 2005, pp. 991–998.
[31] A. Zamuda, J. Brest, B. Bošković, V. Žumer, Differential evolution with self-adaptation and local search for constrained multiobjective optimization, in: IEEE Congress on Evolutionary Computation, 2009, pp. 195–202.
[32] R. Mallipeddi, S. Mallipeddi, P.N. Suganthan, Differential evolution algorithm with ensemble of parameters and mutation strategies, Applied Soft Computing 11 (2010) 1679–1696.
[33] M.F. Tasgetiren, P.N. Suganthan, P. Quan-Ke, R. Mallipeddi, S. Sarman, An ensemble of differential evolution algorithms for constrained function optimization, in: IEEE Congress on Evolutionary Computation, 2010, pp. 1–8.
[34] S.M. Elsayed, R.A. Sarker, D.L. Essam, A three-strategy based differential evolution algorithm for constrained optimization, in: The 17th International Conference on Neural Information Processing: Theory and Algorithms – Volume Part I, Springer-Verlag, Sydney, Australia, 2010, pp. 585–592.
[35] S.M. Elsayed, R.A. Sarker, D.L. Essam, Differential evolution with multiple strategies for solving CEC2011 real-world numerical optimization problems, in: IEEE Congress on Evolutionary Computation, 2011, pp. 1041–1048.
[37] Y. Gao, Y.J. Wang, A memetic differential evolutionary algorithm for high dimensional functions' optimization, in: The Third International Conference on Natural Computation, 2007, pp. 188–192.
[38] V. Tirronen, F. Neri, T. Karkkainen, K. Majava, T. Rossi, A memetic differential evolution in filter design for defect detection in paper production, in: The 2007 EvoWorkshops on EvoCoMnet, EvoFIN, EvoIASP, EvoINTERACTION, EvoMUSART, EvoSTOC and EvoTransLog: Applications of Evolutionary Computing, Springer-Verlag, Valencia, Spain, 2007, pp. 320–329.
[39] A. Caponio, G.L. Cascella, F. Neri, N. Salvatore, M. Sumner, A fast adaptive memetic algorithm for online and offline control design of PMSM drives, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 37 (2007) 28–41.
[40] A. Caponio, F. Neri, V. Tirronen, Super-fit control adaptation in memetic differential evolution frameworks, Soft Computing – A Fusion of Foundations, Methodologies and Applications 13 (2009) 811–831.
[41] M. Powell, A fast algorithm for nonlinearly constrained optimization calculations, in: G. Watson (Ed.), Numerical Analysis, Springer, Berlin/Heidelberg, 1978, pp. 144–157.
[42] N. Noman, H. Iba, Accelerating differential evolution using an adaptive local search, IEEE Transactions on Evolutionary Computation 12 (2008) 107–125.
[43] P.T. Boggs, J.W. Tolle, Sequential quadratic programming, Acta Numerica 4 (1995) 1–51.
[44] E. Mezura-Montes, J.V. Reyes, C.A.C. Coello, A comparative study of differential evolution variants for global optimization, in: The 8th Annual Conference on Genetic and Evolutionary Computation, ACM, Seattle, Washington, USA, 2006, pp. 485–492.
[45] T. Chuan-Kang, H. Chih-Hui, Varying number of difference vectors in differential evolution, in: IEEE Congress on Evolutionary Computation, 2009, pp. 1351–1358.
[46] E. Mezura-Montes, J. Velazquez-Reyes, C.A. Coello Coello, Modified differential evolution for constrained optimization, in: IEEE Congress on Evolutionary Computation, 2006, pp. 25–32.
[47] E. Mezura-Montes, C.A.C. Coello, A simple multimembered evolution strategy to solve constrained optimization problems, IEEE Transactions on Evolutionary Computation 9 (2005) 1–17.
[48] T. Runarsson, X. Yao, Stochastic ranking for constrained evolutionary optimization, IEEE Transactions on Evolutionary Computation 4 (2000) 284–294.
[49] G.W. Corder, D.I. Foreman, Nonparametric Statistics for Non-Statisticians: A Step-by-Step Approach, John Wiley, Hoboken, NJ, 2009.
[50] T. Takahama, S. Sakai, Constrained optimization by the ε constrained differential evolution with an archive and gradient-based mutation, in: IEEE Congress on Evolutionary Computation (CEC), 2010, pp. 1–9.