Self-adaptive mix of particle swarm methodologies for constrained optimization

Information Sciences xxx (2014) xxx–xxx

Saber M. Elsayed a,*, Ruhul A. Sarker a, Efrén Mezura-Montes b

a School of Engineering and Information Technology, University of New South Wales at Canberra, Canberra 2600, Australia
b Departamento de Inteligencia Artificial, Universidad Veracruzana, Sebastián Camacho 5, Centro, Xalapa, Veracruz 91000, Mexico

Article history: Received 12 November 2012; Received in revised form 24 July 2013; Accepted 27 January 2014; Available online xxxx

Keywords: Constrained optimization; Particle swarm optimization; Ensemble of operators

Abstract

In recent years, many variants of the particle swarm optimizer (PSO) for solving optimization problems have been proposed. However, PSO has an inherent drawback in handling constrained problems, mainly because of its complexity and dependency on parameters. Furthermore, one PSO variant may perform well on some test problems but not obtain good results on others. In this paper, our purpose is to develop a new PSO algorithm that can efficiently solve a variety of constrained optimization problems. It considers a mix of different PSO variants, each of which evolves a different number of individuals from the current population. In each generation, the algorithm assigns more individuals to the better-performing variants and fewer to the worse-performing ones. Also, a new PSO variant is developed for use in the proposed algorithm to maintain a better balance between its local and global PSO versions. A new methodology for adapting PSO parameters is presented, and the proposed self-adaptive PSO algorithm is tested and analyzed on two sets of test problems, namely the CEC2006 and CEC2010 constrained optimization problems. Based on the results, the proposed algorithm shows significantly better performance than the same global and local PSO variants as well as other state-of-the-art algorithms. Although, based on our analysis, it cannot guarantee an optimal solution for any unknown problem, it is expected to be able to solve a wide variety of practical problems.

© 2014 Elsevier Inc. All rights reserved.

1. Introduction

Constrained optimization problems (COPs) are common in many real-world applications. Their purpose is to determine the values of a set of decision variables by optimizing the objective function while satisfying the functional constraints and variable bounds:

$$
\begin{aligned}
\min\ & f(\vec{X}) \\
\text{s.t.}\ & g_k(\vec{X}) \le 0, && k = 1, 2, \ldots, K \\
& h_e(\vec{X}) = 0, && e = 1, 2, \ldots, E \\
& L_j \le x_j \le U_j, && j = 1, 2, \ldots, D
\end{aligned}
\tag{1}
$$

* Corresponding author. Tel.: +61 425151130.
E-mail addresses: [email protected] (S.M. Elsayed), [email protected] (R.A. Sarker), [email protected] (E. Mezura-Montes).
http://dx.doi.org/10.1016/j.ins.2014.01.051
0020-0255/© 2014 Elsevier Inc. All rights reserved.

Please cite this article in press as: S.M. Elsayed et al., Self-adaptive mix of particle swarm methodologies for constrained optimization, Inform. Sci. (2014), http://dx.doi.org/10.1016/j.ins.2014.01.051


where $\vec{X} = [x_1, x_2, \ldots, x_D]^T$ is a vector of D decision variables, $f(\vec{X})$ the objective function, $g_k(\vec{X})$ the kth inequality constraint, $h_e(\vec{X})$ the eth equality constraint, and $L_j$ and $U_j$ the lower and upper limits of $x_j$, respectively.

COPs are considered complex because of their unique structures and variations in mathematical properties. They may contain different types of variables, such as real, integer and discrete, and have equality and/or inequality constraints. A COP's objective and constraint functions can be linear or nonlinear, continuous or discontinuous, and unimodal or multimodal. Its feasible region can be either a tiny or a significant portion of the search space, and either a single bounded region or a collection of multiple disjoint regions. Its optimal solution may exist either on the boundary of the feasible space or in the interior of the feasible region. Also, high dimensionality, due to large numbers of variables and constraints, may add further complexity to solving COPs. Researchers and practitioners have attempted to solve such problems with different computational intelligence methods, such as PSO [25,36,38,49,53,56], genetic algorithms (GAs) [19] and differential evolution (DE) [52].

PSO was conceived as a simulation of the social behavior of flocks of birds and schools of fish [25] and is easy to both understand and implement. As a result, during the last decade, it has become very popular and has been successfully applied to problems in areas such as control [1,59,61], image processing [40], mobile robot navigation [5], airfoil design in transonic flow [43], and mechanical engineering [21]. However, the choice of operators for any PSO algorithm plays a pivotal role in its success but is often made by trial and error. Because of the variability of the characteristics of optimization problems, most PSO algorithms may work well for one class of problems but are not guaranteed to do so for another.
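For illustration, the formulation in Eq. (1) is commonly paired with a summed constraint-violation measure when comparing candidate solutions. Below is a minimal Python sketch; the helper names and the toy problem are ours, and the small tolerance on equality constraints is the usual relaxation used by constrained benchmarks, not something stated in Eq. (1) itself.

```python
def constraint_violation(x, ineq_cons, eq_cons, eps=1e-4):
    """Summed violation of g_k(x) <= 0 and h_e(x) = 0, with |h_e(x)| <= eps
    treated as satisfied; a return value of 0.0 means x is feasible."""
    viol = sum(max(0.0, g(x)) for g in ineq_cons)          # inequality part
    viol += sum(max(0.0, abs(h(x)) - eps) for h in eq_cons)  # relaxed equality part
    return viol

# Toy COP: min f(x) = x1 + x2, s.t. g(x) = 1 - x1 <= 0 and h(x) = x2 - 2 = 0
ineq = [lambda x: 1.0 - x[0]]
eq = [lambda x: x[1] - 2.0]
print(constraint_violation([2.0, 2.0], ineq, eq))  # feasible point -> 0.0
print(constraint_violation([0.0, 3.0], ineq, eq))  # violates both constraints
```

A solution with violation 0.0 is feasible; infeasible solutions can then be ranked by this value, as several of the constraint-handling rules reviewed later do.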
To deal with this issue, as discussed later, an ensemble of evolutionary algorithms (EAs) and/or operator methodologies have recently been introduced [39]. Elsayed et al. [11] proposed a multi-topology PSO algorithm with a local search technique which was tested by solving a small set of constrained problems. The algorithm produced very encouraging results that indicate it is worthy of further investigation. Spanevello and de Oca [50] proposed two adaptive heterogeneous PSO algorithms for solving optimization problems. In the first, each particle counted the number of consecutive function evaluations in which its personal best did not improve. When the value of the counter exceeded pre-defined parameter values, the particle switched its search mechanism and reset its counter. In the second, each particle adopted one mechanism based on a probability and, at the same time, some of the particles were fixed to use a specific PSO strategy. However, the results obtained from solving a set of unconstrained problems showed that using the adaptive mechanisms hindered the algorithms’ performances. Engelbrecht [17] proposed a heterogeneous PSO algorithm for six unconstrained problems in which each particle randomly selected one of five PSO strategies. It demonstrated good performances in comparison with a single strategy PSO and also for large-scale problems [18] as well as dynamic problems [26]. However, no mechanism for favoring the best-performing strategy was proposed. To deal with the abovementioned shortcoming, Nepomuceno and Engelbrecht [37] introduced two new self-adaptive heterogeneous PSO algorithms which were influenced by the ant colony optimization meta-heuristic. They were tested on unconstrained problems, with the results showing that the performances of both outranked those of other state-of-the-art algorithms. Wang et al. 
[58] introduced a self-adaptive learning-based PSO algorithm to tackle unknown landscapes, in which a probability model was used to describe the probability of a strategy being selected to update a particle; it performed well on a set of unconstrained problems. Generally speaking, as the potential of ensemble methodologies in PSO for solving COPs has not been fully investigated, it is our focus in this paper. In the literature on ensemble operators, Qin et al. [45] proposed a DE algorithm that used two mutation strategies, to one of which each individual was assigned based on a given probability. After evaluating all the newly generated trial vectors, the numbers of them that successfully entered the next generation were recorded as ns1 and ns2 and accumulated over a specified number of generations called the ''learning period''. Then, the probability of assigning each individual to a strategy was updated but, if it reached zero, the strategy might have been totally excluded from the list. Mallipeddi et al. [30] proposed an ensemble of mutation strategies and control parameters with DE (EPSDE) for solving unconstrained optimization problems. In EPSDE, a pool of distinct mutation strategies, along with a pool of values for each control parameter, coexisted throughout the evolution process and competed to produce offspring. Mallipeddi et al. [31] then extended their algorithm to constrained optimization by adding an ensemble of four constraint-handling techniques. Tasgetiren et al. [55] proposed an ensemble DE designed in such a way that each individual was assigned to one of two distinct mutation strategies or a variable parameter search (VPS), with the latter used to enhance the local exploitation capability. However, no adaptive strategy was used in their algorithm. Elsayed et al.
[15] proposed a mix of four different DE mutation strategies within a single algorithm framework to solve COPs which performed well for solving a set of small-scale theoretical benchmark constrained problems and was further extended in [14,16]. Elsayed et al. [12] proposed two novel DE variants each of which utilized the strengths of multiple mutation and crossover operators to solve 60 constrained problems and demonstrated competitive, if not better, performances in comparison with those of state-of-the-art algorithms. Spears [51] applied an adaptive strategy in a GA using two different crossovers for solving N-peak problems. Eiben et al. [10] developed an adaptive GA framework with multiple crossover operators for solving unconstrained problems in which the population was divided into a number of sub-populations, each of which used a particular crossover. Although, based on the success of these crossovers, the sub-population sizes were varied, their adaptive GAs did not outperform the standard GA using only the best crossover. Hyun-Sook and Byung-Ro [23] investigated whether a combination of crossover operators could perform better than using only the best crossover operator by solving the traveling salesman problem (TSP) and graph bisection problem, with their adaptive strategy being to assign probabilities to using different crossovers.


Based on the above discussion, the key contributions made in this paper are: (1) proposing an ensemble of different PSO variants for solving COPs; (2) introducing a self-adaptive mechanism to decide the parameter values on which PSO is highly dependent; and (3) developing a new PSO variant with an archive that maintains a balance between the global and local PSO variants and, thus, reduces the chance of solutions becoming stuck in local optima, which occurs with other variants. In the proposed ensemble algorithm, a set of different PSO variants is used, each of which evolves a different subset of individuals in the current population and, at each generation, has an improvement index calculated for it. Based on this measure, the number of individuals assigned to each variant is determined adaptively while ensuring that no PSO variant is discarded during the evolution process, since, at different stages, the performances of different variants may vary. More interestingly, each individual in the population is assigned three values, each representing one PSO parameter, which are then evolved using a particle swarm variant. Generally speaking, this type of algorithm can be seen not only as a better alternative to a trial-and-error-based design but also as a provider of better problem coverage. To judge the performance of the proposed algorithm, a set of CEC2006 [27] and CEC2010 [33] constrained problems is solved. Based on the quality of the solutions it obtains and the computational time it requires, it demonstrates superior performance over the independent PSO variants as well as other state-of-the-art algorithms.

This paper is organized as follows. After this introduction, Section 2 presents an overview of PSO, Section 3 describes the proposed algorithm and its components, Section 4 presents the experimental results and their analyses and, finally, conclusions and proposed future work are discussed in Section 5.
2. Particle swarm optimization (PSO)

As a stochastic global optimization method, PSO takes inspiration from the motions of a flock of birds searching for food [25]. The algorithm starts with an initial population of particles which fly through a problem's hyperspace at given velocities, with each particle's velocity updated at each generation based on its own and its neighbors' experiences [9,29]. The movement of each particle then naturally evolves towards an optimal or near-optimal solution.

2.1. Operators and parameters

In the PSO field, particles have been studied with two general types of neighborhoods: global and local. In the former, particles move towards the best particle found so far, which gives the algorithm a quick convergence pattern but may lead it to become trapped in local optima. On the other hand, in the local PSO, each particle's velocity is adjusted according to both its personal best and the best particle within its neighborhood. In this variant, the algorithm is most probably able to escape from local solutions but may suffer from a slower convergence pattern [29]. These two variants can be stated as follows: in a D-dimensional search space, the position and velocity of particle z are represented as vectors $X_z$ and $V_z$, respectively. Let $pbestx_z = (x^{pbest}_{z,1}, x^{pbest}_{z,2}, \ldots, x^{pbest}_{z,j}, \ldots, x^{pbest}_{z,D})$ and $gbestx = (x^{gbest}_{1}, x^{gbest}_{2}, \ldots, x^{gbest}_{j}, \ldots, x^{gbest}_{D})$ be the best previous position of the particle and the best position found by its neighbors so far, respectively. Then, its velocity and position in generation t can be calculated as:

(1) Global PSO:

$$V^t_{z,j} = w V^{t-1}_{z,j} + c_1 r_1 (pbestx_{z,j} - x^{t-1}_{z,j}) + c_2 r_2 (gbestx_j - x^{t-1}_{z,j}) \tag{2}$$

(2) Local PSO:

$$V^t_{z,j} = w V^{t-1}_{z,j} + c_1 r_{1,j} (pbestx_{z,j} - x^{t-1}_{z,j}) + c_2 r_{2,j} (lbestx_{z,j} - x^{t-1}_{z,j}) \tag{3}$$

$$x^t_z = x^{t-1}_z + V^t_z \tag{4}$$

where $V^{t-1}_z$ is the velocity of particle z at generation t − 1, $V_z \in [-V_{max}, V_{max}]$, w the inertia weight factor, $c_1$ and $c_2$ the acceleration coefficients, $r_{1,j}$ and $r_{2,j}$ uniform random numbers within [0, 1], and $lbestx_z$ the local best for individual z. Clerc and Kennedy [6] proposed an alternative version of PSO in which the velocity adjustment is given as:

$$V^t_{z,j} = \chi \left( V^{t-1}_{z,j} + c_1 r_1 (pbestx_{z,j} - x^{t-1}_{z,j}) + c_2 r_2 (gbestx_j - x^{t-1}_{z,j}) \right) \tag{5}$$

where $\chi$ is the constriction factor, $\chi = \frac{2}{\left|2 - \varphi - \sqrt{\varphi^2 - 4\varphi}\right|}$, with $\varphi = c_1 + c_2$, $\varphi > 4$. This version of PSO eliminates the parameter $V_{max}$, with $\chi = 0.729$ when $c_1 + c_2 > 4$, while other researchers have mentioned that $c_1 = c_2 = 1.49445$ is a good choice [11]. Also, because PSO has a tendency to converge prematurely, Mezura-Montes and Flores-Mendoza [35] proposed a PSO with a dynamic (deterministic) adaptation mechanism for both $\chi$ and $c_2$ that was designed to start with low velocity values for some particles and then increase them during the search process. They also assumed that not all particles adapt, as some would use fixed values of those two parameters.

Motivated by the fact that an individual is not influenced by only the best performer among its neighbors, Mendes et al. [34] proposed making all individuals fully informed, with each particle using information from all its neighbors rather than just the best. Although their proposed algorithm was better than other state-of-the-art algorithms, it was not able to obtain consistent results over all the unconstrained problems tested. Parsopoulos and Vrahatis [42] proposed a unified PSO variant that harnessed both the local and global variants of PSO with the aim of combining their exploration and exploitation abilities without imposing additional requirements in terms of function evaluations. For the purpose of handling multi-modal problems, Qu et al. [46] proposed a distance-based locally informed PSO which eliminated the need to specify any niching parameter and enhanced the fine search ability of PSO. It used several local bests to guide each particle's search, and the neighborhoods were estimated in terms of Euclidean distances. The algorithm was tested on a set of unconstrained problems and showed superior performance in comparison with niching algorithms. However, our empirical results showed that this algorithm is poor at handling constrained problems. For the local PSO, different neighborhood structures may affect the performance of swarms, such as the ring, wheel and random local topologies. In the ring topology, each particle is connected to two neighbors; in the wheel topology, individuals are isolated from one another and information is only communicated to a focal individual; and, in the random topology, each individual obtains information from a random individual. To tackle the premature convergence behavior of PSO, many research studies have been proposed [24,4,57,58].

2.2. Review of solving COPs by PSO

Over the last few years, PSO has gradually gained more attention in terms of solving COPs. Cagnina et al. [3] proposed a hybrid PSO for solving COPs. In their algorithm, they introduced different methods for updating particle information and used a double population and a special shake mechanism to maintain diversity.
Although the results were competitive, optimal solutions to several test problems could not be obtained. As mentioned earlier, Mezura-Montes and Flores-Mendoza [35] proposed a PSO with a dynamic adaptation of two different PSO parameters which was tested on 24 test problems. Although their proposed algorithm was better than others, it was not able to obtain optimal solutions on many occasions and its performance was not consistent. The PSO algorithm developed by Hu and Eberhart [22] started with a group of feasible solutions, with the feasibility function used to check the satisfaction of constraints achieved by the newly explored solutions, and all particles kept only the feasible solutions in their memories. The algorithm was tested on 11 test problems but the results were not consistent. Liang and Suganthan [29] proposed a dynamic multi-swarm PSO with a constraint-handling mechanism for solving COPs in which the sub-swarms were adaptively assigned to explore different constraints according to their difficulties. In addition, a sequential quadratic programming (SQP) method was altered to improve its local search ability. Although the algorithm showed superior performance to many other algorithms, it seems that its search capabilities were highly dependent on SQP. The algorithm of Pulido and Coello [44] used the feasibility rule to handle constraints as well as a turbulence operator to improve its exploratory capability. However, the results obtained were not consistent. Renato and Leandro dos Santos [48] introduced a PSO algorithm in which the accelerating coefficients of PSO were generated using a Gaussian probability distribution. In addition, two populations were evolved, one for the variable vector and the other for the Lagrange multiplier vector. However, their algorithm was not able to obtain optimal solutions over all runs. 
He and Wang [20] proposed a co-evolutionary PSO algorithm in which the notion of co-evolution was employed and two kinds of swarms, for evolutionary exploration and exploitation, as well as penalty factors, were evolved. The algorithm was competitive with other algorithms for solving different engineering problems. Motivated by multi-objective optimization techniques, Ray and Liew [47] proposed an effective multi-level information-sharing strategy within a swarm to handle single-objective, constrained and unconstrained optimization problems, in which a better-performer list (BPL) was generated by a multi-level Pareto ranking scheme which treated every constraint as an objective, while the particles which were not in the BPL gradually congregated around their closest neighbor in the BPL. Zahara and Kao [60,15] proposed a PSO algorithm using a gradient repair method and a constraint fitness priority-based ranking method, which performed well on different test problems. Elsayed et al. [16] introduced a PSO algorithm, with a periodic mode for handling constraints, which made periodic copies of the search space at the start of a run. It evidenced an improved search performance compared with those of conventional modes and other algorithms. Parsopoulos and Vrahatis [41] proposed a unified PSO version (which balanced the influences of the global and local search directions in a unified scheme) and used a penalty function technique to handle constraints. Cagnina et al. [2] proposed a PSO algorithm for solving COPs, called CPSO, in which each particle consisted of a D-dimensional real number vector. The particles were evaluated using a fitness function that chose a feasible individual over an infeasible one and, of the infeasible particles, preferred those that were closer to the feasible region.
To determine the degree of infeasibility, CPSO saved the largest violation obtained for each constraint and, when a particle was detected as infeasible, added the amount of violation corresponding to that particle (normalized with respect to the largest violation recorded so far).

3. Self-adaptive mix of particle swarm methodologies

In this section, we describe a new self-adaptive parameter control method in PSO, a new PSO variant with an archive, our proposed algorithm which uses a self-adaptive mix of PSO variants (SAM-PSO), and the constraint-handling technique used in this research.


3.1. New self-adaptive parameter control

From the literature, it is known that PSO is highly dependent on its own parameters ($\chi$, $c_1$ and $c_2$). Although researchers usually use fixed parameters in each generation (t), one of our aims is to allow the algorithm to choose its own parameters. To do this, each individual z in the current population is assigned three values, as shown in Fig. 1: one for the constriction factor ($\chi_z$) and one for each of $c_{1z}$ and $c_{2z}$, which are initialized within their own boundaries, as shown in Eqs. (6)–(8).

$$\chi_z = \underline{\chi} + rand \cdot (\overline{\chi} - \underline{\chi}) \tag{6}$$

$$c_{1z} = \underline{c}_1 + rand \cdot (\overline{c}_1 - \underline{c}_1) \tag{7}$$

$$c_{2z} = \underline{c}_2 + rand \cdot (\overline{c}_2 - \underline{c}_2) \tag{8}$$

where $\underline{\chi}$ and $\overline{\chi}$ are the lower and upper bounds of $\chi_z$, respectively, $\underline{c}_1$ and $\overline{c}_1$ the lower and upper bounds of $c_1$, respectively, and $\underline{c}_2$ and $\overline{c}_2$ the lower and upper bounds of $c_2$, respectively. Following the PSO mechanism in Eq. (5), the velocity (V) of each parameter is then calculated, and the new parameter value is the sum of its previous value and its new velocity, as shown by:

$$V_{\chi_z} = 0.5\, r_1 \left( \chi_z + r_2 (pbest\chi_z - \chi_z) + r_3 (gbest\chi - \chi_z) \right) \tag{9}$$

$$\chi_z = \chi_z + V_{\chi_z} \tag{10}$$

where $pbest\chi_z$ is the $\chi$ value of the local best individual of individual z, $gbest\chi$ the $\chi$ value of the global best individual obtained so far, and $r_1$, $r_2$ and $r_3$ uniform random numbers between zero and one. Note that, as $0.5\, r_1$ represents a constriction factor, as shown in Eq. (5), and based on our empirical results, a value of 0.5 is used in this research; this also applies in Eqs. (11) and (13).
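Eqs. (9) and (10) amount to a PSO-style update applied to the parameter itself. A minimal Python sketch follows (the function and variable names are ours); the same rule is repeated for $c_1$ and $c_2$ in Eqs. (11)–(14).

```python
import random

def adapt_parameter(val, pbest_val, gbest_val):
    """Parameter 'velocity' per Eq. (9) and the update per Eq. (10);
    the same rule is applied to chi, c1 and c2 of each individual."""
    r1, r2, r3 = (random.random() for _ in range(3))
    # Eq. (9): 0.5*r1 acts as the constriction factor of the parameter update
    v = 0.5 * r1 * (val + r2 * (pbest_val - val) + r3 * (gbest_val - val))
    # Eq. (10): new parameter value = old value + parameter velocity
    return val + v

random.seed(0)
chi_z = adapt_parameter(0.6, 0.7, 0.729)  # pulled towards the best chi values
```

Note that the sketch does not clip the result back into $[\underline{\chi}, \overline{\chi}]$, since Eqs. (9) and (10) state no such step.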

$$V_{c_{1z}} = 0.5\, r_1 \left( c_{1z} + r_2 (pbestc_{1z} - c_{1z}) + r_3 (gbestc_1 - c_{1z}) \right) \tag{11}$$

$$c_{1z} = c_{1z} + V_{c_{1z}} \tag{12}$$

where $pbestc_{1z}$ is the $c_1$ value of the local best for individual z, $gbestc_1$ the $c_1$ value of the global best individual found so far, and $r_1$, $r_2$ and $r_3$ uniform random numbers $\in [0, 1]$.

$$V_{c_{2z}} = 0.5\, r_1 \left( c_{2z} + r_2 (pbestc_{2z} - c_{2z}) + r_3 (gbestc_2 - c_{2z}) \right) \tag{13}$$

$$c_{2z} = c_{2z} + V_{c_{2z}} \tag{14}$$

where $pbestc_{2z}$ is the $c_2$ value of the local best for individual z, $gbestc_2$ the $c_2$ value of the global best individual found so far, and $r_1$, $r_2$ and $r_3$ uniform random numbers $\in [0, 1]$.

3.2. New PSO variant with archive

The global PSO has the ability to converge quickly but may easily become stuck in local optima. In contrast, the local PSO topology can avoid premature convergence but has the drawback of slower convergence. Therefore, it is beneficial to adopt a new variant that can deal with this important issue. Our method maintains an archive of individuals which, to ensure a balance of intensification and diversification, is filled with the best individuals as well as those far from the center of the population, according to the following procedure.

1. Set two parameter values ($PS_{arch}$ and @). The former defines the archive size and the latter determines the number of best individuals to be inserted.
2. Sort the entire population based on the fitness function and/or constraint violation.
3. Fill the archive with the best @ individuals, that is, those with the best fitness function value and/or lowest constraint violation.
4. Calculate the center vector ($\vec{x}_c$) of all individuals in the current population, i.e., $x_{c,j} = \frac{\sum_{z=1}^{PS} x_{z,j}}{PS}$, where PS is the population size.

Fig. 1. Initialization of individuals and PSO parameters.


5. Calculate the Euclidean distance to $\vec{x}_c$ of each individual $(z, \forall\, @ < z \le PS)$, and then sort them from the largest to the smallest distance.
6. Add the best ($PS_{arch} - @$) individuals, based on their Euclidean distances, to the archive.
7. Evolve all individuals using:

If $rand < CFE/(FFE - 0.25\, FFE)$:

$$V^t_z = \chi_z \left( V^{t-1}_z + c_{1z} r_1 (pbestx_z - x^{t-1}_z) + c_{2z} r_2 (x_H - x^{t-1}_z) \right) \tag{15}$$

Else:

$$V^t_{z,j} = \chi_z \left( V^{t-1}_{z,j} + c_{1z} r_{1,j} (pbestx_{z,j} - x^{t-1}_{z,j}) + c_{2z} r_{2,j} (x_{H,j} - x^{t-1}_{z,j}) \right) \tag{16}$$

$$\vec{x}^t_z = \vec{x}^{t-1}_z + V^t_z \tag{17}$$

where $x_H$ is selected from the archive pool, H is an integer random number $\in [1, w]$, and CFE and FFE the current and maximum numbers of fitness evaluations, respectively. We must mention here that the difference between Eqs. (15) and (16) is that, in (15), H, $r_1$ and $r_2$ are fixed for updating all variables while, in (16), for every j = 1, 2, . . . , D, a random H, $r_1$ and $r_2$ are selected. Furthermore, the algorithm places emphasis on (16) in the early stages of the search process to maintain diversity and then adaptively focuses on (15).

8. Gradually emphasize the best individuals by updating w in the previous step as:

$$w = PS_{arch} - \left\lceil PS_{arch} \cdot \frac{CFE}{FFE} \right\rceil \tag{18}$$

We must mention here that the abovementioned steps 4 and 5 are discarded once w is less than @.
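Steps 1–6 above can be sketched as follows. This is a minimal Python illustration with our own function and variable names; for simplicity it assumes a feasible minimization problem with a scalar fitness, whereas the paper sorts by fitness and/or constraint violation.

```python
def build_archive(population, fitness, ps_arch, n_best):
    """Archive per steps 1-6: the best n_best ('@') individuals by fitness,
    plus the (ps_arch - n_best) individuals farthest from the population
    center, to balance intensification and diversification."""
    # Steps 2-3: sort by fitness and keep the best n_best individuals
    order = sorted(range(len(population)), key=lambda z: fitness[z])
    archive = [population[z] for z in order[:n_best]]
    rest = order[n_best:]
    # Step 4: center vector of the current population
    dim = len(population[0])
    center = [sum(ind[j] for ind in population) / len(population)
              for j in range(dim)]
    # Steps 5-6: sort the remainder by Euclidean distance to the center
    def dist(z):
        return sum((population[z][j] - center[j]) ** 2 for j in range(dim)) ** 0.5
    rest.sort(key=dist, reverse=True)
    archive += [population[z] for z in rest[:ps_arch - n_best]]
    return archive

pop = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [-4.0, -4.0]]
fit = [0.0, 2.0, 10.0, 8.0]
arch = build_archive(pop, fit, ps_arch=3, n_best=1)
# best individual [0.0, 0.0] plus the two individuals farthest from the center
```

The individuals $x_H$ used in Eqs. (15) and (16) would then be drawn from the first w entries of this archive.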

Algorithm I. Self-adaptive mix of particle swarm algorithms

STEP 1: At generation t = 0, generate an initial random population of size PS. The variables of each individual must be initialized within their ranges as $x_{z,j} = L_j + rand \cdot (U_j - L_j)$, and each individual must generate its own $\chi_z$, $c_{1z}$ and $c_{2z}$.
STEP 2: Set $n_i = \frac{PS}{m}$, where $n_i$ is the number of individuals assigned to the ith PSO variant and m the number of PSO variants.
STEP 3: Randomly assign $n_i$ individuals to the ith PSO variant.
STEP 4: For each PSO variant:
– generate new parameter values, as shown in Section 3.1;
– generate new individuals using the assigned PSO topology;
– if $rand \le p_{mut}$, apply mutation, as shown in Eq. (19); and
– store the information (fitness value and constraint violation) of the new best individual.
STEP 5: Calculate the improvement, as shown in Section 3.4.
STEP 6: Update each $n_i$ using Eq. (29).
STEP 7: Stop if the termination criterion is met; else, set t = t + 1 and go to STEP 3.
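Algorithm I's control flow can be sketched as a short driver. In the sketch below, the stub "variants" simply return a fixed improvement index (real variants would evolve their sub-populations as in STEP 4 and report the measure of Section 3.4), and the re-allocation mirrors Eq. (29) with a floor of MSS individuals per variant; all names are ours.

```python
def sam_pso_skeleton(variants, ps, mss, generations):
    """Schematic of Algorithm I's control flow (not the full SAM-PSO):
    equal sub-population sizes at t = 0, then re-allocation in proportion
    to each variant's improvement index, with a floor of `mss` each."""
    m = len(variants)
    sizes = [ps // m] * m                 # STEP 2: equal split at t = 0
    history = [sizes[:]]
    for t in range(generations):
        # STEPs 4-5: each variant evolves its individuals and reports improvement
        improvements = [variant(sizes[i]) for i, variant in enumerate(variants)]
        total = sum(improvements) or 1.0  # guard against all-zero improvement
        # STEP 6: re-allocation in the spirit of Eq. (29); rounding is ours
        sizes = [mss + round(I / total * (ps - mss * m)) for I in improvements]
        history.append(sizes[:])
    return history

# Stub variants returning a fake, constant improvement index
v_good = lambda n: 0.9
v_bad = lambda n: 0.1
hist = sam_pso_skeleton([v_good, v_bad], ps=40, mss=4, generations=3)
# the better-performing stub keeps the larger sub-population every generation
```

Because of rounding, the sub-population sizes may not always sum exactly to PS; a full implementation would repair the remainder.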

3.3. Self-adaptive mix of PSO (SAM-PSO)

In the proposed algorithm, at the first generation, all PSO variants are assigned the same number of individuals, i.e., $n_i$ individuals for the ith PSO variant, which are randomly chosen from the current population and are not exactly the same. In subsequent generations, it is inappropriate to place equal emphasis on all PSO variants throughout the entire evolution process, as one or more may perform badly in later generations. Therefore, to assign more importance to the better-performing variants, we propose to adaptively change the number of individuals assigned to each based on a success measure regarding changes in the fitness values, constraint violations and feasibility ratios of individuals in the sub-populations, details of which are discussed later. Moreover, considering the fact that any PSO variant may perform very well at an early stage of the evolution process and badly at a later stage, or vice versa, a lower bound is set on each $n_i$. Bearing in mind that any PSO can easily become trapped in local solutions, a mutation operator, applied with a pre-defined probability, is adopted. In this mutation, the new individual is shifted to search new areas as:

$$\vec{x}^t_z = pbestx_{\#} + \beta \cdot (pbestx_{r_{i1}} - pbestx_{r_{i2}}) \tag{19}$$

where # is an integer random number $\in [0.1PS, 0.5PS]$, and $r_{i1}$ and $r_{i2} \in [1, PS]$, with these values based on [13]. The mutation step is applied after the PSO update of an individual z. SAM-PSO continues until the stopping criterion is met, and its basic steps are presented in Algorithm I. We must mention that the following three PSO variants are considered in this research; the effect of this number is analyzed in a later section.


1. The PSO with an archive, as described in Section 3.2.
2. The local PSO, with the aim of maintaining diversity.
3. A sub-swarm PSO [29] in which all individuals are divided among five different sub-swarms. Each individual, except the best (leader), in each sub-swarm is updated as:

$$V^t_{z,j} = \chi_z \left( V^{t-1}_{z,j} + c_{1z} r_{1,j} (pbestx_{z,j} - x^{t-1}_{z,j}) + c_{2z} r_{2,j} (pbestx_{x,j} - x^{t-1}_{z,j}) \right) \tag{20}$$

$$\vec{x}^t_z = \vec{x}^{t-1}_z + V^t_z \tag{21}$$

where $pbestx_x$ is the local best of a random individual x within the same sub-swarm. The best particle (leader) in each sub-swarm is then updated using information about a random best particle in another sub-swarm as:

$$V^t_{z,j} = \chi_z \left( V^{t-1}_{z,j} + c_{1z} r_{1,j} (pbestx_{z,j} - x^{t-1}_{z,j}) + c_{2z} r_{2,j} (pbestx_{h,j} - x^{t-1}_{z,j}) \right) \tag{22}$$

$$\vec{x}^t_z = \vec{x}^{t-1}_z + V^t_z \tag{23}$$

where $pbestx_h$ is the local best of a random best individual h in the other sub-swarms.
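The sub-swarm updates of Eqs. (20)–(23) share one velocity pattern, differing only in whose personal best is followed: a non-leader follows a random peer in its own sub-swarm, while a leader follows a random leader from another sub-swarm. A minimal Python sketch under our own naming (not the authors' code):

```python
import random

def subswarm_velocity(v, x, pbest, peer_pbest, chi, c1, c2):
    """Velocity update of Eqs. (20)/(22): peer_pbest is the personal best
    of a random peer in the same sub-swarm (non-leaders) or of a random
    leader from another sub-swarm (leaders)."""
    new_v = []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()  # per-dimension r1,j and r2,j
        new_v.append(chi * (v[j]
                            + c1 * r1 * (pbest[j] - x[j])
                            + c2 * r2 * (peer_pbest[j] - x[j])))
    return new_v

random.seed(3)
v = subswarm_velocity([0.0], [1.0], [0.2], [0.0],
                      chi=0.729, c1=1.49445, c2=1.49445)
x_new = [1.0 + v[0]]  # position update per Eq. (21)/(23)
```

With both attractors below the current position, the particle is pulled downwards, illustrating how the shared attractor steers each sub-swarm.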

3.4. Improvement measure

To measure the improvement of each variant in a given generation, we consider both its feasibility status and fitness value, as it is always better to reward an improvement in feasibility over one in infeasibility. For any generation (t > 1), one of the following four scenarios arises.

(1) Infeasible to infeasible: for any variant i, if the best solution is infeasible in generation t − 1 and still infeasible in generation t, the improvement index is calculated as:

$$VI_{i,t} = \frac{\left| Vio^{best}_{i,t} - Vio^{best}_{i,t-1} \right|}{avg\text{-}Vio_{i,t}} = I_{i,t} \tag{24}$$

where $Vio^{best}_{i,t}$ is the constraint violation of the best individual in generation t and $avg\text{-}Vio_{i,t}$ the average violation. Therefore, $VI_{i,t} = I_{i,t}$ represents the relative improvement compared with the average violation in the current generation.

(2) Feasible to feasible: for any variant i, if the best solution is feasible in generation t − 1 and still feasible in generation t, the improvement index is:

$$I_{i,t} = \max_i (VI_{i,t}) + \left| F^{best}_{i,t} - F^{best}_{i,t-1} \right| \times FR_{i,t} \tag{25}$$

where $I_{i,t}$ is the improvement of PSO variant i in generation t, $F^{best}_{i,t}$ the objective function value of the best individual in generation t, and the feasibility ratio of variant i in generation t is:

$$FR_{i,t} = \frac{\text{Number of feasible solutions in sub-population } i}{\text{Sub-population size at generation } t} \tag{26}$$

To assign a higher index value to a PSO variant with a higher feasibility ratio, we multiply the improvement in fitness value by the feasibility ratio. To differentiate between the improvement indices of the feasible and infeasible groups of individuals, we add the term $\max_i (VI_{i,t})$ to (25). If all the best solutions are feasible, $\max_i (VI_{i,t})$ will be zero.

(3) Infeasible to feasible: for any variant i, if the best solution is infeasible in generation t − 1 and feasible in generation t, the improvement index is:

$$I_{i,t} = \max_i (VI_{i,t}) + \left| Vio^{best}_{i,t-1} + F^{best}_{i,t} - F^{bv}_{i,t-1} \right| \times FR_{i,t} \tag{27}$$

where $F^{bv}_{i,t-1}$ is the fitness value of the least violated individual in generation t − 1.

To assign a higher index value to an individual that changes from infeasible to feasible, we add $Vio^{best}_{i,t-1}$ to the change of fitness value in (27).


(4) Becoming worse: considering the fitness function and/or constraint violation for any variant (i), if it becomes inferior in generation t to its previous value in generation t - 1, the improvement index is:

$$I_{i,t} = 0 \qquad (28)$$

After calculating the improvement index for each PSO variant, the number of individuals assigned to each variant is calculated according to:

$$n_{i,t} = MSS + \frac{I_{i,t}}{\sum_{i=1}^{m} I_{i,t}} \times \left(PS - MSS \times Nopt\right) \qquad (29)$$

where $n_{i,t}$ is the number of individuals assigned to the ith variant at generation t, MSS the minimum subpopulation size of each variant i at generation t, PS the total population size and Nopt the number of PSO variants.
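Eqs. (25), (26) and (29) can be sketched as follows (an illustrative reading, not the authors' code; the rounding policy in Eq. (29) and the even split when no variant improves are our assumptions):

```python
def improvement_feasible(max_vi, f_best, f_best_prev, fr):
    """Eq. (25): fitness improvement scaled by the feasibility ratio FR,
    shifted by max_i(VI_{i,t}) so feasible variants rank above infeasible ones."""
    return max_vi + abs(f_best - f_best_prev) * fr

def allocate_individuals(improvements, PS, MSS):
    """Eq. (29): every variant keeps MSS individuals; the remaining
    PS - MSS*Nopt slots are shared in proportion to the indices I_{i,t}."""
    nopt = len(improvements)
    free = PS - MSS * nopt
    total = sum(improvements)
    if total == 0:                 # no variant improved: even split (assumption)
        return [MSS + free // nopt] * nopt
    return [MSS + round(I / total * free) for I in improvements]
```

For example, with three variants, PS = 110 and MSS = 10, improvement indices of 1, 1 and 2 leave 80 free slots split 20/20/40, so the variants receive 30, 30 and 50 individuals.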

3.5. Constraint handling

In this paper, constraints are handled as follows [8]: (i) between two feasible solutions, the fittest (according to the fitness function) is better; (ii) a feasible solution is always better than an infeasible one; and (iii) between two infeasible solutions, the one with the smaller sum of constraint violations is preferred. The equality constraints are also transformed into inequalities of the following form, where $\varepsilon$ is a small value (here equal to 0.0001):

$$\left|h_e(\vec{x})\right| - \varepsilon \le 0, \quad \text{for } e = 1, \ldots, E \qquad (30)$$
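The feasibility rules and the equality transformation of Eq. (30) can be sketched as follows (function names and the (fitness, violation) pairing are illustrative):

```python
EPS = 1e-4  # the epsilon of Eq. (30)

def violation(g_values, h_values, eps=EPS):
    """Sum of constraint violations: inequalities g(x) <= 0 plus
    equalities relaxed to |h(x)| - eps <= 0 per Eq. (30)."""
    v = sum(max(0.0, g) for g in g_values)
    v += sum(max(0.0, abs(h) - eps) for h in h_values)
    return v

def better(a, b):
    """Feasibility rules [8] on (fitness, violation) pairs:
    True if a is preferred over b (minimization)."""
    (fa, va), (fb, vb) = a, b
    if va == 0 and vb == 0:
        return fa < fb        # both feasible: lower fitness wins
    if (va == 0) != (vb == 0):
        return va == 0        # feasible beats infeasible
    return va < vb            # both infeasible: smaller violation wins
```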

4. Experimental results

In this section, the performance of the different components of the proposed algorithm is presented and analyzed by solving 24 benchmark problems, the characteristics of which can be found in [27]. However, as none of the algorithms considered in this paper was able to obtain feasible solutions for g20 and g22, we excluded them from our experiments. All algorithms were coded in Matlab and run on a PC with a 3 GHz Core 2 Duo processor, 3.5 GB RAM and Windows XP.

4.1. Analysis of the new self-adaptive parameter control

In this sub-section, the proposed parameter adaptation method is analyzed. To do this, we solved all the test problems using the global PSO variant (Eq. (5)) both with and without the proposed method described in Section 3.1.1 (called the sa-g-pso and g-pso variants, respectively). A summary of their parameters is presented in Table 1. Based on the experimental results, a comparison summary is provided in Table 2 in which, for example, considering the best results obtained, sa-g-pso was better than g-pso for 15 test problems, equal for 5 and worse for only 2. It is more interesting to statistically study the difference between any two stochastic algorithms. We chose a non-parametric test, the Wilcoxon Signed Rank Test [7], which allowed us to judge the difference between paired scores when it was not

Table 1
Summary of parameter values used in both PSO variants.

Variant                                             Parameters
Global PSO without parameter adaptations (g-pso)    PS = 120, s = 0.729, c1 = c2 = 1.49445 and FFE = 240,000
Global PSO with parameter adaptations (sa-g-pso)    PS = 120, s ∈ [0.4, 0.729], c1 ∈ [1.4, 2] and c2 ∈ [1.4, 2]; s, c1 and c2 are self-updated as shown in Section 3.1, and FFE = 240,000

Table 2
Comparison summary of g-pso and sa-g-pso (numbers shown in the table are for the first algorithm in column 1).

Algorithms           Criteria           Better   Equal   Worse   Test
sa-g-pso-to-g-pso    Best results       15       5       2       +
                     Average results    19       2       0       +
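The sign convention of the Test column above can be sketched as follows (a sketch assuming SciPy's `wilcoxon` is available and that lower fitness is better; deciding the direction from the sums of the paired results, and writing '~' for the no-difference sign, are our simplifications):

```python
from scipy.stats import wilcoxon

def compare(results_a, results_b, alpha=0.05):
    """Paired Wilcoxon signed-rank test at a 5% level: '+' if A is
    significantly better, '-' if significantly worse, '~' otherwise."""
    _, p = wilcoxon(results_a, results_b)
    if p >= alpha:
        return "~"            # no significant difference
    return "+" if sum(results_a) < sum(results_b) else "-"
```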


possible to make the assumptions required by the t-test, such as that the population should be normally distributed. The results based on the best and average fitness values are presented in the last column of Table 2. As the null hypothesis, it is assumed that there is no significant difference between the best and/or mean values of the two samples, whereas the alternative hypothesis is that there is a significant difference, at a 5% significance level. Based on the test results, we assigned one of three signs (+, −, and ≈) to the comparison of any two algorithms (shown in the last column), where '+' means that sa-g-pso is significantly better than the other algorithm, '−' means that it is significantly worse and '≈' means that there is no significant difference between the two. From Table 2, it is clear that sa-g-pso was statistically superior to g-pso. Considering the feasibility ratios, 90% for sa-g-pso and 86% for g-pso, sa-g-pso was still better.

In the previous analysis, we demonstrated that sa-g-pso achieved better-quality solutions than g-pso. However, it is also important to report their computational times. To test this, we compared the time taken by each algorithm to reach the best-known solution with an error of 0.0001, i.e., the stopping criterion was $f(\vec{x}) - f(\vec{x}^{*}) \le 0.0001$, where $f(\vec{x}^{*})$ was the best-known solution; it was found that sa-g-pso took 49.8% less computational time than g-pso. As further analysis, some convergence plots illustrating the benefit of the proposed technique are presented in Fig. 2, which clearly shows that sa-g-pso was able to converge faster than g-pso. The algorithms' performances on the remaining test problems were similar to those presented in these two plots.

4.2. Analysis of the proposed PSO variant with an archive

In this sub-section, an analysis of the proposed PSO with an archive discussed in Section 3.2, called pso-w-archive, is undertaken by comparing its performance with those of both a global and a local PSO variant (sa-g-pso and sa-l-pso, respectively). To begin with, the proposed self-adaptive mechanism is used to update the PSO parameters in all variants. Then, two more parameters, $PS_{arch} = PS/2$ and $\partial = PS/3$, are added to pso-w-archive, the values of which are based on our empirical results.

Fig. 2. Convergence plots of g-pso and sa-g-pso for (a) g01 (the y-axis is in the log scale) and (b) g07 (the x-axis is in the log scale).

Table 3
Comparison summary of pso-w-archive against sa-g-pso and sa-l-pso.

Algorithms                   Criteria           Better   Equal   Worse   Test
pso-w-archive-to-sa-g-pso    Best results       13       9       0       +
                             Average results    15       7       0       +
pso-w-archive-to-sa-l-pso    Best results       15       7       0       +
                             Average results    14       7       1       +


Detailed results are shown in Appendix A and a comparison summary is presented in Table 3, from which it is clear that both the best and average results of pso-w-archive were better than those of sa-g-pso and sa-l-pso; as confirmed by the Wilcoxon test, pso-w-archive was statistically better. More interestingly, pso-w-archive was able to obtain a 100% feasibility ratio while sa-g-pso and sa-l-pso reached 90% and 70%, respectively. Using the abovementioned measure to calculate the average computational time, compared with those of sa-g-pso and sa-l-pso, pso-w-archive achieved savings of 7.05% and 10.36%, respectively.

4.3. Analysis of SAM-PSO

In this sub-section, to further analyze the proposed SAM-PSO, we compare its performance firstly against its independent variants and then against state-of-the-art algorithms. To begin with, SAM-PSO uses the same parameter settings as previously, as well as two extra ones: MSS = 10%, as in [15], and the mutation rate (mr) set to 0.2. Detailed results from a comparison of SAM-PSO and its three independent variants (sa-l-pso, pso-w-archive and sub-swarm-pso) are presented in Appendix A. The comparison summary in Table 4 shows that, for example, based on the best results obtained, SAM-PSO was better than pso-w-archive for 6 test problems while both were similar for 16 and, generally speaking, SAM-PSO performed better than all its independent variants. It is also worth mentioning that the Wilcoxon test results showed that SAM-PSO was superior to all other variants, as seen in Table 4. In terms of feasibility ratios, both SAM-PSO and pso-w-archive attained 100% while sa-l-pso and sub-swarm-pso reached 70% and 93%, respectively. The convergence patterns of the proposed algorithm and its variants are depicted in Fig. 3, from which it is clear that SAM-PSO converged fastest to the optimal solution, followed by pso-w-archive, sub-swarm-pso and sa-l-pso, respectively.

Finally, to illustrate how the number of individuals assigned to each PSO variant changed, a plot which gives a sense of which variant performed best during the search process is depicted in Fig. 4. However, considering all test problems, pso-w-archive was assigned 41% of the total number of individuals, and sa-l-pso and sub-swarm-pso 40% and 39%, respectively, which confirms that one variant may be good for one problem but bad for another.

Table 4
Comparison summary of SAM-PSO against pso-w-archive, sa-l-pso and sub-swarm-pso.

Algorithms                  Criteria           Better   Equal   Worse   Test
SAM-PSO-to-pso-w-archive    Best results       6        16      0       +
                            Average results    11       11      0       +
SAM-PSO-to-sa-l-pso         Best results       14       8       0       +
                            Average results    15       7       0       +
SAM-PSO-to-sub-swarm-pso    Best results       11       11      0       +
                            Average results    13       9       0       +

Fig. 3. Convergence plots of SAM-PSO, sa-l-pso, sub-swarm-pso and pso-w-archive for g07, the x-axis is in the log scale.


Fig. 4. Example of self-adaptive change in number of individuals assigned to each PSO variant for g02 (x-axis represents generation number and y-axis population size).

Table 5
Comparison summary of SAM-PSO against CPSO-shake, IPSO, SAMO-GA, SAMO-DE and ECHT-EP2.

Algorithms               Criteria           Better   Equal   Worse   Test
SAM-PSO-to-CPSO-shake    Best results       9        13      0       +
                         Average results    16       5       1       +
SAM-PSO-to-IPSO          Best results       13       9       0       +
                         Average results    15       7       0       +
SAM-PSO-to-SAMO-GA       Best results       2        19      1       ≈
                         Average results    14       8       0       +
SAM-PSO-to-SAMO-DE       Best results       1        20      1       ≈
                         Average results    8        13      1       +
SAM-PSO-to-ECHT-EP2      Best results       2        20      0       ≈
                         Average results    4        16      2       ≈

Table 6
Comparison summary of SAM-PSO against eDEag and CO-CLPSO.

D      Comparison             Fitness    Better   Equal   Worse   Test
10D    SAM-PSO-to-eDEag       Best       6        6       6       ≈
                              Average    4        13      1       ≈
       SAM-PSO-to-CO-CLPSO    Best       14       3       1       +
                              Average    15       0       3       +
30D    SAM-PSO-to-eDEag       Best       16       1       1       +
                              Average    13       0       5       +
       SAM-PSO-to-CO-CLPSO    Best       17       1       0       +
                              Average    17       0       1       +

4.4. SAM-PSO against state-of-the-art algorithms

Here, the results obtained by SAM-PSO are compared with those from other state-of-the-art PSO algorithms, namely the constrained PSO with a shake mechanism (CPSO-shake) [3] and the improved PSO (IPSO) [35], and other algorithms that use ensembles of operators and/or constraint-handling techniques, that is, the self-adaptive multi-operator GA and DE (SAMO-GA [15] and SAMO-DE [15], respectively) and the ensemble of constraint-handling techniques based on evolutionary programming (ECHT-EP2) [32]. We must mention that SAM-PSO and three other algorithms (SAMO-GA, SAMO-DE and


ECHT-EP2) used 240,000 FFEs, and both CPSO-shake and IPSO 350,000, while the ε parameter was set to 1.0E−04 for all algorithms. Detailed results are provided in Appendix B and a comparison summary is presented in Table 5. In this table, it can be seen that, based on the best and average results: SAM-PSO outperformed CPSO-shake for 9 and 16 test problems, respectively, while its average value for one was inferior; it was better than IPSO for 13 and 15 test problems, respectively; and superior to SAMO-GA for 2 and 14 test problems, respectively, while worse for only one. Furthermore, based on the best results, SAM-PSO was better than SAMO-DE and ECHT-EP2 for 1 and 2 test problems, respectively, while inferior to SAMO-DE for only one and, considering the average results, inferior to ECHT-EP2 for only two. Therefore, generally speaking, SAM-PSO performed better for the majority of the test problems. Furthermore, the results from the Wilcoxon test proved that SAM-PSO was statistically better than both CPSO-shake and IPSO and, based on the average results, better than SAMO-GA and SAMO-DE, while there was no significant difference between it and ECHT-EP2.

4.5. Solving the CEC2010 constrained problems

In this section, the 36 test problems (18 test problems, each with 10 and 30 dimensions) introduced in CEC2010 [33] are considered. The algorithm's performance is compared with that of a DE algorithm (eDEag [54]), which won the CEC2010 constrained optimization competition, as well as the best PSO algorithm in the same competition, the co-evolutionary comprehensive learning particle swarm optimizer (Co-CLPSO) [28]. 25 runs were conducted with a stopping criterion of up to 200K FEs for the 10D and 600K FEs for the 30D cases, for which detailed computational results are shown in Appendix C. To begin with, it is important to highlight that SAM-PSO was able to reach a 100% feasibility ratio for both the 10D and 30D cases, whereas eDEag attained a 100% feasibility ratio for only 35 of the 36 test instances (with only a 12% feasibility ratio for C12 with 30D) and Co-CLPSO obtained a 94.4% feasibility ratio for the 10D and 87.3% for the 30D test problems. A summary of the quality of solutions is provided in Table 6, which shows that, for the 10D problems, SAM-PSO was competitive with eDEag and better than Co-CLPSO in terms of average results and, for the 30D problems, superior to the other algorithms for the majority of test problems. Furthermore, in terms of the statistical test, based on the best and average results obtained, SAM-PSO was superior to both other algorithms for the 30D test problems, but only better than Co-CLPSO for the 10D test problems, for which there was no significant difference between it and eDEag.

5. Conclusions and future work

During the last two decades, many different particle swarm variants for solving diverse types of optimization problems have been proposed. However, from the literature review, it was observed that the performances of PSO variants: (1) are highly dependent on their own parameters; (2) are not consistent over multiple runs; (3) are not consistent over a wide range of test problems; and (4) are significantly poor in terms of solving constrained problems unless hybridized with other search techniques. Because of these drawbacks, in this paper, a number of contributions have been made to improve the performance of PSO in solving a wide range of constrained optimization problems.

Firstly, a new self-adaptive mechanism for adapting the PSO parameters was proposed. In it, each individual in the population was assigned a set of values for its parameters, which were then optimized using a PSO variant. This algorithm showed better performance than a PSO variant with fixed parameters. Secondly, a new PSO variant was proposed in which an archive of both the best and more diversified individuals is maintained. In the early stages of the search process, the algorithm placed more emphasis on the local PSO variant and then adaptively emphasized the best individuals, outperforming both the local and global PSO variants. Thirdly, as no single PSO variant was suitable for a wide range of test problems, a mix of particle swarm variants was used together in one algorithmic structure in which a self-adaptive mechanism was used to assign more individuals to the best-performing variants. This algorithm was tested on 58 test problems and showed more consistent performance than each of its individual variants. Furthermore, it was always either better than, or on a par with, the state-of-the-art algorithms.

Generally speaking, the proposed algorithm demonstrated several advantages, such as the ability to: (i) maintain a balance between diversification and intensification; (ii) solve a wide variety of test problems; (iii) effectively handle problems with tiny feasible regions; and (iv) deal with scalable problems (those with different dimensions). Although a weakness of this algorithm is that its performance was not stable for multi-modal problems with higher dimensions, many algorithms in the literature exhibit the same problem. From the analysis of its results, although the proposed algorithm cannot predict the quality of the solution to a new problem, its strength is its high level of performance in solving a wide variety of test problems. In future work, we intend to analyze this algorithm by solving more test problems (both constrained and unconstrained) and different real-world applications.


Appendix A

Function values obtained by sa-g-pso, sa-l-pso, pso-w-archive, sub-swarm-pso and SAM-PSO.

Prob  Criteria  sa-g-pso      sa-l-pso      pso-w-archive  sub-swarm-pso  SAM-PSO
g01   Best      -15.0000      -15.0000      -15.0000       -15.0000       -15.0000
      Avg.      -15.0000      -15.0000      -15.0000       -15.0000       -15.0000
      St. dev   0.00E+00      0.00E+00      0.00E+00       0.00E+00       0.00E+00
g02   Best      -0.78444      -0.8034342    -0.803619      -0.803619      -0.803619
      Avg.      -0.69059      -0.79415      -0.792797      -0.767577      -0.79606
      St. dev   6.0086E-02    9.4667E-03    7.1006E-03     3.0157E-02     5.3420E-03
g03   Best      -1.0005       -0.9986598    -1.0005        -1.0005        -1.0005
      Avg.      -1.00048      -0.96448      -1.0005        -1.0005        -1.0005
      St. dev   3.7689E-05    5.3490E-02    1.6900E-08     3.2075E-07     0.00E+00
g04   Best      -30665.539    -30665.539    -30665.539     -30665.539     -30665.539
      Avg.      -30665.535    -30665.539    -30665.539     -30665.539     -30665.539
      St. dev   1.0734E-02    0.00E+00      0.00E+00       2.0899E-11     0.00E+00
g05   Best      5127.53565    -             5126.4967      5126.826       5126.4967
      Avg.      5341.03868    -             5130.7813      5192.6383      5126.4967
      St. dev   3.2507E+02    -             6.7400E+00     9.6614E+01     1.3169E-10
g06   Best      -6961.8139    -6961.8139    -6961.8139     -6961.8139     -6961.8139
      Avg.      -6961.8139    -6961.8139    -6961.8139     -6961.8139     -6961.8139
      St. dev   2.4995E-08    0.00E+00      0.00E+00       2.0133E-07     0.00E+00
g07   Best      24.3354323    24.349512     24.309881      24.3227        24.306209
      Avg.      25.4598933    24.4293622    24.356475      24.492075      24.306209
      St. dev   9.8841E-01    7.9630E-02    3.7269E-02     2.3146E-01     1.9289E-08
g08   Best      -0.09582504   -0.09582504   -0.09582504    -0.09582504    -0.09582504
      Avg.      -0.09582504   -0.09582504   -0.09582504    -0.09582504    -0.09582504
      St. dev   0.00E+00      0.00E+00      0.00E+00       0.00E+00       0.00E+00
g09   Best      680.630667    680.63127     680.630057     680.6302       680.630057
      Avg.      680.634811    680.637222    680.63007      680.63158      680.630057
      St. dev   5.2418E-03    4.5260E-03    1.3870E-05     1.1994E-03     0.00E+00
g10   Best      7110.34154    7054.1733     7049.35833     7110.5933      7049.248
      Avg.      7713.23933    7146.48167    7066.2101      7290.2654      7049.248
      St. dev   4.9479E+02    6.8071E+01    1.3924E+01     1.0360E+02     1.5064E-05
g11   Best      0.7499        0.7499        0.7499         0.7499         0.7499
      Avg.      0.7499        0.75015864    0.7499         0.7499         0.7499
      St. dev   9.3452E-09    9.2073E-04    0.00E+00       1.8258E-08     0.00E+00
g12   Best      -1.0000       -1.0000       -1.0000        -1.0000        -1.0000
      Avg.      -1.0000       -1.0000       -1.0000        -1.0000        -1.0000
      St. dev   0.00E+00      0.00E+00      0.00E+00       0.00E+00       0.00E+00
g13   Best      0.053982      0.936987      0.053942       0.0539474      0.053942
      Avg.      0.15822881    0.98838548    0.053942       0.2106026      0.053942
      St. dev   1.6412E-01    1.9420E-02    1.7754E-07     1.8327E-01     0.00E+00
g14   Best      -47.620205    -             -47.762792     -47.70201      -47.76489
      Avg.      -45.299491    -             -47.73825      -47.39022      -47.76489
      St. dev   1.2527E+00    -             1.9834E-02     2.0564E-01     2.8342E-07
g15   Best      961.71503     961.72245     961.71502      961.71502      961.71502
      Avg.      961.902529    961.857301    961.71502      961.73876      961.71502
      St. dev   4.0733E-01    2.4191E-01    0.00E+00       1.1393E-01     0.00E+00
g16   Best      -1.9051553    -1.9051553    -1.9051553     -1.9051553     -1.9051553
      Avg.      -1.9051553    -1.9051553    -1.9051553     -1.9051553     -1.9051553
      St. dev   8.2568E-10    0.00E+00      0.00E+00       0.00E+00       0.00E+00
g17   Best      8856.09207    8996.4176     8853.5397      8892.5674      8853.5397
      Avg.      9021.31626    9011.12197    8903.4781      8958.4176      8853.5397
      St. dev   1.4609E+02    2.0795E+01    3.9396E+01     6.8207E+01     2.1125E-07
g18   Best      -0.86602      -0.8660079    -0.866025      -0.866025      -0.866025
      Avg.      -0.8637285    -0.8629285    -0.865853      -0.865186      -0.866025
      St. dev   2.6114E-03    4.4663E-03    2.2208E-04     1.2654E-03     7.5801E-08
g19   Best      40.9841944    34.166736     32.6954216     33.414259      32.656734
      Avg.      68.0547587    36.1279533    33.331036      37.021167      32.66553
      St. dev   1.2838E+01    1.3251E+00    5.6921E-01     3.0795E+00     9.8422E-03
g20   Best      -             -             -              -              -
      Avg.      -             -             -              -              -
      St. dev   -             -             -              -              -
g21   Best      -             -             193.74716      -              32.656235
      Avg.      -             -             193.75381      -              32.666236
      St. dev   -             -             1.8409E-03     -              1.5762E-02
g22   Best      -             -             -              -              -
      Avg.      -             -             -              -              -
      St. dev   -             -             -              -              -
g23   Best      -221.88355    -             -239.4370012   -122.3332      -400.0551
      Avg.      -141.657785   -             -44.42996588   -19.72075      -400.0512
      St. dev   2.1318E+02    -             8.2282E+01     8.3234E+01     1.2014E-03
g24   Best      -5.508013     -5.508013     -5.508013      -5.508013      -5.508013
      Avg.      -5.508013     -5.508013     -5.508013      -5.508013      -5.508013
      St. dev   0.00E+00      0.00E+00      0.00E+00       0.00E+00       0.00E+00

The boldface value means that it is better than that of the other algorithm(s).

Appendix B

Function values obtained by SAM-PSO, CPSO-shake, IPSO, SAMO-GA, SAMO-DE and ECHT-EP2 for the 24 test problems (in CPSO-shake, '-' and '*' mean value not available or infeasible, respectively).

Prob  Criteria  SAM-PSO       CPSO-shake    IPSO          SAMO-GA        SAMO-DE         ECHT-EP2
g01   Best      -15.0000      -15           -15.0000      -15.0000       -15.0000        -15.0000
      Avg.      -15.0000      -15           -15.0000      -15.0000       -15.0000        -15.0000
      St. dev   0.00E+00      -             0.00E+00      0.00E+00       0.00E+00        0.00E+00
g02   Best      -0.803619     -0.803        -0.802629     -0.80359052    -0.8036191      -0.8036191
      Avg.      -0.79606      -0.79661      -0.713879     -0.79604769    -0.79873521     -0.7998220
      St. dev   5.3420E-03    -             4.62E-02      5.8025E-03     8.8005E-03      1.26E-02
g03   Best      -1.0005       -1            -0.641        -1.0005        -1.0005         -1.0005
      Avg.      -1.0005       -1            -0.154        -1.0005        -1.0005         -1.0005
      St. dev   0.00E+00      -             1.70E-01      0.00E+00       0.00E+00        0.00E+00
g04   Best      -30665.539    -30665.538    -30665.539    -30665.5386    -30665.5386     -30665.539
      Avg.      -30665.539    -30646.179    -30665.539    -30665.5386    -30665.5386     -30665.539
      St. dev   0.00E+00      -             7.40E-12      0.00E+00       0.00E+00        0.00E+00
g05   Best      5126.4967     5126.498      5126.498      5126.497       5126.497        5126.497
      Avg.      5126.4967     5240.49671    5135.521      5127.97643     5126.497        5126.497
      St. dev   1.3169E-10    -             1.23E+01      1.1166E+00     0.00E+00        0.00E+00
g06   Best      -6961.8139    -6961.825     -6961.814     -6961.814      -6961.814       -6961.814
      Avg.      -6961.8139    -6859.0759    -6961.814     -6961.814      -6961.814       -6961.814
      St. dev   0.00E+00      -             2.81E-05      0.00E+00       0.00E+00        0.00E+00
g07   Best      24.306209     24.309        24.366        24.3062        24.3062         24.3062
      Avg.      24.306209     24.9122091    24.691        24.4113        24.3096         24.3063
      St. dev   1.9289E-08    -             2.20E-01      4.5905E-02     1.5888E-03      3.19E-05
g08   Best      -0.09582504   -0.095        -0.095825     -0.09582504    -0.09582504     -0.095825
      Avg.      -0.09582504   -0.095        -0.095825     -0.09582504    -0.09582504     -0.095825
      St. dev   0.00E+00      -             4.23E-17      0.00E+00       0.00E+00        2.61E-08
g09   Best      680.630057    680.63        680.638       680.630        680.630         680.630
      Avg.      680.630057    681.373057    680.674       680.634        680.630         680.630
      St. dev   0.00E+00      -             3.00E-02      1.4573E-03     1.1567E-05      0.00E+00
g10   Best      7049.248      7049.285      7053.963      7049.2481      7049.2481       7049.2483
      Avg.      7049.248      7850.40102    7306.466      7144.40311     7059.81345      7049.2490
      St. dev   1.5064E-05    -             2.22E+02      6.7860E+01     7.856E+00       6.60E-04
g11   Best      0.7499        0.7499        0.7499        0.7499         0.7499          0.7499
      Avg.      0.7499        0.7499        0.753         0.7499         0.7499          0.7499
      St. dev   0.00E+00      -             6.53E-03      0.00E+00       0.00E+00        0.00E+00
g12   Best      -1.0000       -1            -1.0000       -1.0000        -1.0000         -1.0000
      Avg.      -1.0000       -1            -1.0000       -1.0000        -1.0000         -1.0000
      St. dev   0.00E+00      -             0.00E+00      0.00E+00       0.00E+00        0.00E+00
g13   Best      0.053942      0.054         0.066845      0.053942       0.053942        0.053942
      Avg.      0.053942      0.45094151    0.430408      0.054028       0.053942        0.053942
      St. dev   0.00E+00      -             2.30E+00      5.9414E-05     1.7541E-08      1.00E-12
g14   Best      -47.76489     -47.635       -47.449       -47.1883       -47.76489       -47.7649
      Avg.      -47.76489     -45.665888    -44.572       -46.4731793    -47.68115       -47.7648
      St. dev   2.8342E-07    -             1.58E+00      3.1590E-01     4.043E-02       2.72E-05
g15   Best      961.71502     961.715       961.715       961.71502      961.71502       961.71502
      Avg.      961.71502     962.516022    962.242       961.715087     961.71502       961.71502
      St. dev   0.00E+00      -             6.20E-01      5.5236E-05     0.00E+00        2.01E-13
g16   Best      -1.9051553    -1.905        -1.9052       -1.905155      -1.905155       -1.905155
      Avg.      -1.9051553    -1.7951553    -1.9052       -1.905154      -1.905155       -1.905155
      St. dev   0.00E+00      -             2.42E-12      6.9520E-07     0.00E+00        1.12E-10
g17   Best      8853.5397     8853.5397     8863.293      8853.5397      8853.5397       8853.5397
      Avg.      8853.5397     8894.70867    8911.738      8853.8871      8853.5397       8853.5397
      St. dev   2.1125E-07    -             2.73E+01      1.7399E-01     1.1500E-05      2.13E-08
g18   Best      -0.866025     -0.866        -0.865994     -0.866025      -0.866025       -0.866025
      Avg.      -0.866025     -0.7870254    -0.862842     -0.865545      -0.866024       -0.866025
      St. dev   7.5801E-08    -             4.41E-03      4.0800E-04     7.043672E-07    1.00E-09
g19   Best      32.656235     34.018        33.967        32.655592      32.655592       32.6591
      Avg.      32.666236     64.505593     37.927        36.427463      32.757340       32.6623
      St. dev   1.5762E-02    -             3.20E+00      1.0372E+00     6.145E-02       3.4E-03
g20   Best      -             -             -             -              -               -
      Avg.      -             -             -             -              -               -
      St. dev   -             -             -             -              -               -
g21   Best      193.72451     *             193.758       193.724510     193.724510      193.7245
      Avg.      193.72474     *             217.356       246.091539     193.771375      193.7438
      St. dev   8.9521E-05    -             2.65E+01      1.4917E+01     1.9643E-02      1.65E-02
g22   Best      -             -             -             -              -               -
      Avg.      -             -             -             -              -               -
      St. dev   -             -             -             -              -               -
g23   Best      -400.0551     -326.963      -250.707      -355.661000    -396.165732     -398.9731
      Avg.      -400.0512     -271.8431     -99.598       -194.760335    -360.817656     -373.2178
      St. dev   1.2014E-03    -             1.20E+02      5.3278E+01     1.9623E+01      3.37E+01
g24   Best      -5.508013     -5.508        -5.508        -5.508013      -5.508013       -5.508013
      Avg.      -5.508013     -5.508        -5.508        -5.508013      -5.508013       -5.508013
      St. dev   0.00E+00      -             9.03E-16      0.00E+00       0.00E+00        1.8E-15

The boldface value means that it is better than that of the other algorithm(s).

Appendix C Function values obtained by SAM-PSO, eDEag and CO-CLPSO for CEC2010 test problems. Prob. Alg.

10D Best

30D Mean

St. dev

Best

Mean

St. dev

C01 SAM-PSO 7.473104E01 7.405873E01 6.845061E03 8.218844E01 7.964331E01 1.841715E02 eDEag 7.473104E01 7.470402E01 1.323339E03 8.218255E01 8.208687E01 7.103893E04 CO-CLPSO 7.4731E01 7.3358E01 1.7848E02 8.0688E01 7.1598E01 5.0252E02 C02 SAM-PSO 2.277709E+00 2.274400E+00 6.743072E03 2.280937E+00 2.270597E+00 5.105588E03 eDEag 2.277702E+00 2.269502E+00 2.3897790E02 2.169248E+00 2.151424E+00 1.197582E02 2.2777 2.2666 1.4616E02 2.2809 2.2029 1.9267E01 C03 SAM-PSO eDEag

0.00000E+00 0.000000E+00 2.4748E13

0.000000E+00 0.0000000E+00 0.000000E+00 0.0000000E+00 3.5502E01 1.77510

2.554119E22 2.867347E+01 –

3.309261E07 1.654628E06 2.883785E+01 8.047159E01 – –

C04 SAM-PSO 1.0000E05 1.0000E05 0.0000000E+00 3.333246E06 3.332197E06 9.692637E10 eDEag 9.992345E06 9.918452E06 1.5467300E07 4.698111E03 8.162973E03 3.067785E03 1.0000E05 9.3385E06 1.0748E06 2.9300E06 1.1269E01 5.6335E01 C05 SAM-PSO 4.836106E+02 4.836106E+02 1.160311E13 4.836106E+02 4.658651E+02 6.144266E+01 eDEag 4.836106E+02 4.836106E+02 3.89035E13 4.531307E+02 4.495460E+02 2.899105E+00 4.8361E+02 4.8360E+02 1.9577E02 4.8360E+02 3.1249E+02 8.8332E+01 C06 SAM-PSO 5.786622E+02 5.786431E+02 1.796101E02 5.306379E+02 5.306378E+02 7.757346E05 eDEag 5.786581E+02 5.786528E+02 3.6271690E03 5.285750E+02 5.279068E+02 4.748378E01 5.7866E+02 5.7866E+02 5.7289E04 2.8601E+02 2.4470E+02 3.9481E+01 C07 SAM-PSO eDEag

0.00000E+00 0.00000E+00 1.0711E09

0.000000E+00 0.0000000E+00 0.000000E+00 0.0000000E+00 7.9732E01 1.6275

3.375699E21 1.147112E15 3.7861E11

4.043819E14 1.308356E13 2.603632E15 1.233430E15 1.1163 1.8269

C08 SAM-PSO eDEag

0.000000E+00 0.00000E+00 9.6442E10

7.861859E+00 4.901086E+00 6.727528E+00 5.560648E+00 6.0876E01 1.4255

9.682223E21 2.518693E14 4.3114E14

8.233284E+01 9.222257E+01 7.831464E14 4.855177E14 4.7517E+01 1.1259E+02

C09 SAM-PSO eDEag

0.000000E+00 0.00000E+00 3.7551E16

7.643747E28 3.821873E27 0.000000E+00 0.0000000E+00 1.9938E+10 9.9688E+10

5.079997E21 2.770665E16 1.9695E+02

3.339778E04 1.669889E03 1.072140E+01 2.821923E+01 1.4822E+08 2.4509E+08

C10 SAM-PSO eDEag

0.000000E+00 0.000000E+00 2.3967E15

3.102937E27 7.259108E27 0.000000E+00 0.0000000E+00 4.9743E+10 2.4871E+11

3.190775E22 3.252002E+01 3.1967E+01

4.863491E01 1.344199E+00 3.326175E+01 4.545577E01 1.3951E+09 5.8438E+09

Please cite this article in press as: S.M. Elsayed et al., Self-adaptive mix of particle swarm methodologies for constrained optimization, Inform. Sci. (2014), http://dx.doi.org/10.1016/j.ins.2014.01.051

17

S.M. Elsayed et al. / Information Sciences xxx (2014) xxx–xxx

[Appendix C (continued): best, mean and standard deviation of the results obtained by SAM-PSO and eDEag for test problems C11–C18 with 10D and 30D.]

Boldface values indicate results better than those of the other algorithm(s).

References

[1] M.A. Abido, Optimal design of power-system stabilizers using particle swarm optimization, IEEE Trans. Energy Convers. 17 (2002) 406–413.
[2] L. Cagnina, S. Esquivel, C.A. Coello Coello, A particle swarm optimizer for constrained numerical optimization, in: T. Runarsson, H.-G. Beyer, E. Burke, J. Merelo-Guervós, L. Whitley, X. Yao (Eds.), Parallel Problem Solving from Nature – PPSN IX, Springer, Berlin/Heidelberg, 2006, pp. 910–919.
[3] L.C. Cagnina, S.C. Esquivel, C.A. Coello Coello, Solving constrained optimization problems with a hybrid particle swarm optimization algorithm, Eng. Optim. 43 (2011) 843–866.
[4] W.N. Chen, J. Zhang, Y. Lin, N. Chen, Z.H. Zhan, H.S.H. Chung, Y. Li, Y.H. Shi, Particle swarm optimization with an aging leader and challengers, IEEE Trans. Evol. Comput. 17 (2013) 241–258.
[5] C.-F. Juang, Y.-C. Chang, Evolutionary-group-based particle-swarm-optimized fuzzy controller with application to mobile-robot navigation in unknown environments, IEEE Trans. Fuzzy Syst. 19 (2011) 379–392.
[6] M. Clerc, J. Kennedy, The particle swarm – explosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evol. Comput. 6 (2002) 58–73.
[7] G.W. Corder, D.I. Foreman, Nonparametric Statistics for Non-Statisticians: A Step-by-Step Approach, John Wiley, Hoboken, NJ, 2009.
[8] K. Deb, An efficient constraint handling method for genetic algorithms, Comput. Methods Appl. Mech. Eng. 186 (2000) 311–338.
[9] Y. del Valle, G.K. Venayagamoorthy, S. Mohagheghi, J.C. Hernandez, R.G. Harley, Particle swarm optimization: basic concepts, variants and applications in power systems, IEEE Trans. Evol. Comput. 12 (2008) 171–195.
[10] A.E. Eiben, I.G. Sprinkhuizen-Kuyper, B.A. Thijssen, Competing crossovers in an adaptive GA framework, in: IEEE Congress on Evolutionary Computation, IEEE, 1998, pp. 787–792.
[11] S. Elsayed, R. Sarker, D. Essam, Memetic multi-topology particle swarm optimizer for constrained optimization, in: IEEE Congress on Evolutionary Computation, World Congress on Computational Intelligence (WCCI2012), IEEE, Brisbane, 2012.
[12] S. Elsayed, R. Sarker, D. Essam, Self-adaptive differential evolution incorporating a heuristic mixing of operators, Comput. Optim. Appl. (2012) 1–20.
[13] S. Elsayed, R. Sarker, T. Ray, Parameters adaptation in differential evolution, in: IEEE Congress on Evolutionary Computation, IEEE, Brisbane, 2012, pp. 1–8.
[14] S.M. Elsayed, R.A. Sarker, D.L. Essam, An improved self-adaptive differential evolution algorithm for optimization problems, IEEE Trans. Indust. Inform. 9 (2013) 89–99.
[15] S.M. Elsayed, R.A. Sarker, D.L. Essam, Multi-operator based evolutionary algorithms for solving constrained optimization problems, Comput. Oper. Res. 38 (2011) 1877–1896.
[16] S.M. Elsayed, R.A. Sarker, D.L. Essam, On an evolutionary approach for constrained optimization problem solving, Appl. Soft Comput. 12 (2012) 3208–3227.



[17] A. Engelbrecht, Heterogeneous particle swarm optimization, in: M. Dorigo, M. Birattari, G. Di Caro, R. Doursat, A. Engelbrecht, D. Floreano, L. Gambardella, R. Groß, E. Sahin, H. Sayama, T. Stützle (Eds.), Swarm Intelligence, Springer, Berlin/Heidelberg, 2010, pp. 191–202.
[18] A.P. Engelbrecht, Scalability of a heterogeneous particle swarm optimizer, in: 2011 IEEE Symposium on Swarm Intelligence (SIS), IEEE, 2011, pp. 1–8.
[19] D. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, MA, 1989.
[20] Q. He, L. Wang, An effective co-evolutionary particle swarm optimization for constrained engineering design problems, Eng. Appl. Artif. Intell. 20 (2007) 89–99.
[21] S. He, E. Prempain, Q.H. Wu, An improved particle swarm optimizer for mechanical design optimization problems, Eng. Optim. 36 (2004) 585–605.
[22] X. Hu, R. Eberhart, Solving constrained nonlinear optimization problems with particle swarm optimization, in: The 6th World Multiconference on Systemics, Cybernetics and Informatics, 2002, pp. 203–206.
[23] H.-S. Yoon, B.-R. Moon, An empirical study on the synergy of multiple crossover operators, IEEE Trans. Evol. Comput. 6 (2002) 212–223.
[24] Y.-T. Juang, S.-L. Tung, H.-C. Chiu, Adaptive fuzzy particle swarm optimization for global optimization of multimodal functions, Inform. Sci. 181 (2011) 4539–4549.
[25] J. Kennedy, R. Eberhart, Particle swarm optimization, in: IEEE International Conference on Neural Networks, IEEE, 1995, pp. 1942–1948.
[26] B.J. Leonard, A.P. Engelbrecht, A.B. van Wyk, Heterogeneous particle swarms in dynamic environments, in: IEEE Symposium on Swarm Intelligence (SIS), IEEE, 2011, pp. 1–8.
[27] J.J. Liang, T.P. Runarsson, E. Mezura-Montes, M. Clerc, P.N. Suganthan, C.A. Coello Coello, K. Deb, Problem Definitions and Evaluation Criteria for the CEC 2006 Special Session on Constrained Real-Parameter Optimization, Technical Report, Nanyang Technological University, Singapore, 2005.
[28] J.J. Liang, Z. Shang, Z. Li, Coevolutionary comprehensive learning particle swarm optimizer, in: IEEE Congress on Evolutionary Computation, IEEE, 2010, pp. 1–8.
[29] J.J. Liang, P.N. Suganthan, Dynamic multi-swarm particle swarm optimizer with a novel constraint-handling mechanism, in: IEEE Congress on Evolutionary Computation, IEEE, 2006, pp. 9–16.
[30] R. Mallipeddi, P.N. Suganthan, Q.-K. Pan, M.F. Tasgetiren, Differential evolution algorithm with ensemble of parameters and mutation strategies, Appl. Soft Comput. 11 (2011) 1679–1696.
[31] R. Mallipeddi, P.N. Suganthan, Differential evolution with ensemble of constraint handling techniques for solving CEC 2010 benchmark problems, in: IEEE Congress on Evolutionary Computation, IEEE, 2010, pp. 1–8.
[32] R. Mallipeddi, P.N. Suganthan, Ensemble of constraint handling techniques, IEEE Trans. Evol. Comput. 14 (2010) 561–579.
[33] R. Mallipeddi, P.N. Suganthan, Problem Definitions and Evaluation Criteria for the CEC 2010 Competition and Special Session on Single Objective Constrained Real-Parameter Optimization, Technical Report, Nanyang Technological University, Singapore, 2010.
[34] R. Mendes, J. Kennedy, J. Neves, The fully informed particle swarm: simpler, maybe better, IEEE Trans. Evol. Comput. 8 (2004) 204–210.
[35] E. Mezura-Montes, J. Flores-Mendoza, Improved particle swarm optimization in constrained numerical search spaces, in: R. Chiong (Ed.), Nature-Inspired Algorithms for Optimisation, Springer, Berlin/Heidelberg, 2009, pp. 299–332.
[36] M.A. Montes de Oca, J. Pena, T. Stutzle, C. Pinciroli, M. Dorigo, Heterogeneous particle swarm optimizers, in: IEEE Congress on Evolutionary Computation, IEEE, 2009, pp. 698–705.
[37] F. Nepomuceno, A. Engelbrecht, A self-adaptive heterogeneous PSO inspired by ants, in: M. Dorigo, M. Birattari, C. Blum, A. Christensen, A. Engelbrecht, R. Groß, T. Stützle (Eds.), Swarm Intelligence, Springer, Berlin/Heidelberg, 2012, pp. 188–195.
[38] F. Neri, E. Mininno, G. Iacca, Compact particle swarm optimization, Inform. Sci. 239 (2013) 96–121.
[39] O. Olorunda, A.P. Engelbrecht, An analysis of heterogeneous cooperative algorithms, in: IEEE Congress on Evolutionary Computation, IEEE, 2009, pp. 1562–1569.
[40] M. Omran, A.P. Engelbrecht, A. Salman, Particle swarm optimization method for image clustering, Int. J. Pattern Recognit. Artif. Intell. 19 (2005) 297–321.
[41] K. Parsopoulos, M. Vrahatis, Unified particle swarm optimization for solving constrained engineering optimization problems, in: L. Wang, K. Chen, Y. Ong (Eds.), Advances in Natural Computation, Springer, Berlin/Heidelberg, 2005, pp. 582–591.
[42] K. Parsopoulos, M. Vrahatis, Unified particle swarm optimization for solving constrained engineering optimization problems, in: L. Wang, K. Chen, Y. Ong (Eds.), Advances in Natural Computation, Springer, Berlin/Heidelberg, 2005, pp. 582–591.
[43] Y.V. Pehlivanoglu, A new particle swarm optimization method enhanced with a periodic mutation strategy and neural networks, IEEE Trans. Evol. Comput. 17 (2013) 436–452.
[44] G.T. Pulido, C.A. Coello Coello, A constraint-handling mechanism for particle swarm optimization, in: IEEE Congress on Evolutionary Computation, IEEE, 2004, pp. 1396–1403.
[45] A.K. Qin, V.L. Huang, P.N. Suganthan, Differential evolution algorithm with strategy adaptation for global numerical optimization, IEEE Trans. Evol. Comput. 13 (2009) 398–417.
[46] B.Y. Qu, P.N. Suganthan, S. Das, A distance-based locally informed particle swarm model for multimodal optimization, IEEE Trans. Evol. Comput. 17 (2013) 387–402.
[47] T. Ray, K.M. Liew, A swarm with an effective information sharing mechanism for unconstrained and constrained single objective optimisation problems, in: IEEE Congress on Evolutionary Computation, IEEE, 2001, pp. 75–80.
[48] R.A. Krohling, L. dos Santos Coelho, Coevolutionary particle swarm optimization using Gaussian distribution for solving constrained optimization problems, IEEE Trans. Syst. Man Cybernet. Part B: Cybernet. 36 (2006) 1407–1416.
[49] Y. Shi, H. Liu, L. Gao, G. Zhang, Cellular particle swarm optimization, Inform. Sci. 181 (2011) 4460–4493.
[50] P. Spanevello, M.A. Montes de Oca, Experiments on adaptive heterogeneous PSO algorithms, in: Doctoral Symposium on Engineering Stochastic Local Search Algorithms, IRIDIA, Institut de Recherches Interdisciplinaires, Université Libre de Bruxelles, Belgium, 2009, pp. 36–40.
[51] W.M. Spears, Adapting crossover in evolutionary algorithms, in: J.R. McDonnell, R.G. Reynolds, D.B. Fogel (Eds.), The 4th Annual Conference on Evolutionary Programming, MIT Press, 1995, pp. 367–384.
[52] R. Storn, K. Price, Differential Evolution – A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces, Technical Report, International Computer Science Institute, 1995.
[53] L. Sun, S. Yoshida, X. Cheng, Y. Liang, A cooperative particle swarm optimizer with statistical variable interdependence learning, Inform. Sci. 186 (2012) 20–39.
[54] T. Takahama, S. Sakai, Constrained optimization by the ε constrained differential evolution with an archive and gradient-based mutation, in: IEEE Congress on Evolutionary Computation, IEEE, 2010, pp. 1–9.
[55] M.F. Tasgetiren, P.N. Suganthan, Q.-K. Pan, R. Mallipeddi, S. Sarman, An ensemble of differential evolution algorithms for constrained function optimization, in: IEEE Congress on Evolutionary Computation, IEEE, 2010, pp. 1–8.
[56] H. Wang, I. Moon, S. Yang, D. Wang, A memetic particle swarm optimization algorithm for multimodal optimization problems, Inform. Sci. 197 (2012) 38–52.
[57] H. Wang, H. Sun, C. Li, S. Rahnamayan, J.-S. Pan, Diversity enhanced particle swarm optimization with neighborhood search, Inform. Sci. 223 (2013) 119–135.
[58] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, Q. Tian, Self-adaptive learning based particle swarm optimization, Inform. Sci. 181 (2011) 4515–4538.
[59] T. Xiaoyong, Y. Fei, C. Ruijuan, Path planning of underwater vehicle based on particle swarm optimization, in: 2010 International Conference on Intelligent Control and Information Processing (ICICIP), 2010, pp. 123–126.
[60] E. Zahara, Y.-T. Kao, Hybrid Nelder–Mead simplex search and particle swarm optimization for constrained engineering design problems, Expert Syst. Appl. 36 (2009) 3880–3886.
[61] Z.-L. Gaing, A particle swarm optimization approach for optimum design of PID controller in AVR system, IEEE Trans. Energy Convers. 19 (2004) 384–391.
