Multi-strategy adaptive particle swarm optimization for numerical optimization


Engineering Applications of Artificial Intelligence 37 (2015) 9–19


Kezong Tang a,b,*, Zuoyong Li c, Limin Luo a, Bingxiang Liu b

a School of Computer Science and Engineering, Southeast University, Nanjing, Jiangsu 210094, PR China
b Information Engineering Institute, Jingdezhen Ceramic Institute, Jingdezhen, Jiangxi 333000, PR China
c Department of Computer Science, Minjiang University, Fuzhou, Fujian 350108, PR China

Article history: Received 7 February 2014; Received in revised form 7 July 2014; Accepted 1 August 2014

Keywords: Particle swarm optimization; Optimization problems; Diversity of population; Entropy; Image segmentation

Abstract

To search for the global optimum across the entire search space with fast convergence, we propose a multi-strategy adaptive particle swarm optimization (MAPSO). MAPSO develops an innovative diversity-measurement strategy to evaluate the population distribution, and performs a real-time alternating strategy to determine one of two predefined evolutionary states, exploration and exploitation, in each iteration. During iterative optimization, MAPSO dynamically controls the inertia weight according to the diversity of the particles. Moreover, MAPSO introduces an elitist learning strategy to enhance population diversity and to prevent the population from falling into local optima. The elitist learning strategy acts not only on the globally best particle, but also on some special particles that are very near to it. The aforementioned features of MAPSO have been comprehensively analyzed and tested on eight benchmark problems and a standard test image. Experimental results show that MAPSO can substantially enhance the ability of PSOs to escape local optima and significantly improve search efficiency and convergence speed. © 2014 Elsevier Ltd. All rights reserved.

1. Introduction

In real-world applications, there exist many complicated optimization problems whose models may change with the environment. Over the past decades, researchers have focused on solving complicated optimization problems in chemical production and bioengineering, such as the economic dispatch problem, the minimum spanning tree problem, and the vehicle routing problem. To solve these problems, traditional optimization methods have emerged, such as the gradient-based method, Nelder-Mead's simplex method, and the quasi-Newton method. However, these methods often yield inaccurate or even infeasible solutions, which restricts their application to practical problems, as such problems are frequently discontinuous or even non-differentiable at some points of the domain. Moreover, it is worth noting that, according to the no-free-lunch theorem, no single method can solve all complex problems equally well. Apart from the aforementioned traditional methods, there are also well-established heuristic population-based methods such as the Genetic Algorithm (GA) (Holland, 1975), the Ant Colony Algorithm (ACA) (Dorigo and Gambardella, 1997),

* Corresponding author. Tel.: +86 13627987439. E-mail address: [email protected] (K. Tang).

http://dx.doi.org/10.1016/j.engappai.2014.08.002 0952-1976/© 2014 Elsevier Ltd. All rights reserved.

Particle Swarm Optimization (PSO) (Kennedy and Eberhart, 1995), the Bacterial Foraging Algorithm (BFA) (Passino, 2002), Differential Evolution (Storn and Price, 1997), and the Artificial Neural Network (ANN) (Lu et al., 2003). These methods have their own characteristics, merits, and demerits. Among them, PSO is probably the simplest and the most effective, and much research has applied it to the optimization of complex systems (Kathiravan and Ganguli, 2007; Gudla and Ganguli, 2005; Samanta and Nataraj, 2009; Modares et al., 2010; Dutta et al., 2013; Lim and Isa, 2014). As a well-known swarm intelligence method, PSO has attracted considerable interest owing to its simple mechanism and easy implementation, as it requires only a few parameters. Compared with traditional methods, PSO has four notable advantages (Yang et al., 2007; Tang et al., 2011a): (1) PSO is conceptually simple and relatively easy to implement, mainly because of a simple search mechanism that mimics the feeding behavior of bird flocks and fish schools. (2) Like other population-based iterative algorithms, PSO is less sensitive to the characteristics of optimization problems; it does not require functions to be continuous or even differentiable, as traditional methods do. (3) In most cases, PSO has the ability to jump out of local optima because of the individual memory of the particles; thus, knowledge of good solutions is retained throughout the search process. (4) PSO can be programmed easily and is computationally inexpensive in terms of both memory and running speed.


Despite PSO having been successfully applied to complex problems such as the vehicle routing problem, the economic dispatch problem, and the power allocation problem, as well as to problems with irregular, noisy, and multimodal characteristics, some difficulties remain. For instance, PSO sometimes falls into local optima because diversity is lost too quickly on some optimization problems. When the swarm enters a region near the optimal solution, the loss of diversity is too fast for the entire population to guarantee convergence to the optimum, so the convergence speed can drop dramatically in the later stages of evolution. To overcome such problems, various attempts have been made to enhance the performance of PSO, such as adjusting the inertia weight, varying the neighborhood topology, or dynamically regulating diversity. Tang and Wu (2013) used a neighboring-function criterion to maintain diversity among population members. Sun et al. (2011) proposed a variant of the quantum-behaved particle swarm optimization (QPSO) algorithm with the local attractor point subject to a Gaussian probability distribution (GAQPSO). Zhao (2010) presented a perturbed particle swarm algorithm based on a new particle-updating strategy, derived from the concept of a perturbed global best particle, to deal with premature convergence and diversity maintenance within the swarm. Zhan et al. (2009) presented an adaptive PSO (APSO): by evaluating the population distribution and particle fitness, a real-time evolutionary state estimation procedure identifies one of four predefined evolutionary states, and an elitist learning strategy is performed when the evolutionary state is classified as convergence. Experimental results show that APSO substantially enhances the PSO paradigm in terms of convergence speed, global optimality, solution accuracy, and reliability. Du and Li (2008) divided the particles into two parts and introduced two new strategies (Gaussian local search and differential mutation) into the two parts, respectively; experiments show that the two mechanisms can enhance the convergence ability of PSO while extending the search area of the population to avoid being trapped at local optima. Carlisle and Dozier (2002) used a multi-niche crowding mechanism to maintain population diversity throughout the run, and the spread-out population enhanced the algorithm's adaptability to dynamic environments. Blackwell and Bentley (2002) proposed an atom-analogy PSO optimizer, which uses the Coulomb force to increase diversity. Additionally, multi-population techniques have also been used to enhance population diversity so as to respond promptly to environmental changes (Blackwell and Branke, 2004, 2006; Li et al., 2006; Parrott and Li, 2004).

Among the studies on PSO, the two most important and appealing objectives are accelerating convergence and avoiding local optima, and adaptive control of the inertia weight together with adjustment of diversity has become one of the most promising approaches. In this paper, we propose a novel multi-strategy adaptive PSO (MAPSO). MAPSO first adopts a novel diversity-evaluation strategy to adjust the population distribution in the search phases. Then, MAPSO uses this diversity evaluation to dynamically change the inertia weight, achieving a good balance between exploration and exploitation. Finally, MAPSO utilizes an elitist learning strategy (ELS) to enhance population diversity and to prevent the search from stagnating in a locally optimal region, which expands the search area at the next stage. ELS acts not only on the globally best particle, but also on some special particles that are very near to it. Experimental results show that MAPSO is not only easy to implement, but also computationally efficient for complex multidimensional problems, and better than the compared algorithms in terms of solution quality.

The rest of this paper is organized as follows: Section 2 describes the basic PSO model. Section 3 proposes the MAPSO algorithm through the development of a novel diversity-evaluation strategy, adaptive adjustment of the inertia weight, and an elitist learning strategy. Section 4 experimentally compares MAPSO with other PSO algorithms using a set of benchmark functions. Section 5 concludes with a brief summary of the paper and some paths for future research.


2. PSO scheme

Particle swarm optimization, introduced by Kennedy and Eberhart (1995), is one of the most important stochastic global optimization techniques. It mimics the foraging behavior of bird flocks and fish schools, starting with a group of artificial particles (the "birds"). PSO evolves the fitter particles: by applying a particle-updating strategy (adjustment of velocity and displacement), each particle attempts to move to a position in the search space better than its previous one. Despite the diversity of PSO schemes, most are based on the same iterative procedure. As a stochastic population-based optimization method, PSO behaves like a "black box", largely independent of the characteristics of the optimized problem. Fig. 1 shows the classic PSO flow chart. An initial population of particles is generated randomly, where each particle represents a potential solution. Each particle is evaluated by a "fitness function" that guides the particles toward globally optimal solutions. Kennedy's three basic updating operations (extremum, position, velocity) are the main components of PSO. Updating the extremum includes updating the individual extremum "pbest" and the global extremum "gbest" in each iteration. The pbest is the best position found so far by the ith particle (memorized by every particle). The gbest is the best position in a neighborhood, where typical topologies (Ghosh et al., 2009) include the fully connected structure, the ring structure, and the Von Neumann structure. Then, the velocity v_{i,d} and position x_{i,d} of particle i on the dth dimension

Fig. 1. PSO flow chart. (Population initialization, fitness evaluation, update of extremum, update of velocity, and update of position are repeated until the termination criterion is verified.)

K. Tang et al. / Engineering Applications of Artificial Intelligence 37 (2015) 9–19

are updated by

v_{i,d} = w·v_{i,d} + c_1·r_1·(pbest_{i,d} − x_{i,d}) + c_2·r_2·(gbest_d − x_{i,d}),   (1)

x_{i,d} = x_{i,d} + v_{i,d},   (2)

where w ∈ [0, 1] is the inertia weight, which decides how much of its current velocity each particle keeps in the next iteration. Choosing a suitable w helps PSO balance global exploration and local exploitation: in general, a higher value of w favors the particles' global exploration, whereas lower values favor local exploitation. c1 and c2 are positive constants named learning factors, also called acceleration constants; both are usually set to 2. r1 and r2 are two uniformly distributed random numbers in [0, 1] for the dth dimension. The velocity of a particle is limited to [−Vmax, Vmax]; the purpose of Vmax is to prevent excessive roaming of particles in each dimension, which tuning of the inertia weight alone cannot fully guarantee. The search terminates when the predefined criterion is satisfied.
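In code, one iteration of Eqs. (1)-(2) might look as follows. This is a sketch in Python: the per-dimension random numbers and the velocity clamp follow the description above, while the array layout and function name are our own choices.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, vmax=None, rng=None):
    """One velocity/position update per Eqs. (1)-(2).

    x, v, pbest: arrays of shape (N, D); gbest: array of shape (D,).
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)            # fresh uniform numbers per particle/dimension
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (1)
    if vmax is not None:
        v = np.clip(v, -vmax, vmax)     # limit velocity to [-Vmax, Vmax]
    x = x + v                           # Eq. (2)
    return x, v
```

When x, pbest, and gbest coincide, only the inertia term w·v survives, which makes the update easy to sanity-check.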

3. The proposed algorithm

3.1. A novel diversity evaluation strategy

The diversity of the particles reflects the discrete information on the population distribution within the search space. Therefore, it affects the ability of global exploration and local exploitation during the evolution process. In general, a higher diversity of particles means a stronger ability to explore the global space, and vice versa. Hence, the diversity-evaluation mechanism has a significant influence on the convergence rate and the accuracy of the final solution.

At present, there are two common diversity measures in the PSO community: population-distribution entropy and average distance amongst points (Yu et al., 2005). Average distance amongst points is the simplest diversity measure in n-dimensional space: the distribution information is obtained by calculating the mean distance from each particle to the average point of all particles. Assuming a population of size N, each particle is considered as a point in n-dimensional space, and |L| is the length of the longest diagonal of the search space. The diversity of the population is then calculated as

D(t) = (1 / (N·|L|)) · Σ_{i=1}^{N} sqrt( Σ_{j=1}^{n} (p_{ij} − p̄_j)² ),   (3)

where p_{ij} and p̄_j are the jth coordinate of the ith particle and of the average point p̄ of all particles, respectively.

The population-distribution-entropy scheme was originally motivated by the principle of Shannon entropy in information theory. The current search space is divided into Q areas of equal size; the number of particles in the kth area is denoted Z_k, which determines the probability q_k = Z_k/N that a particle is situated in the kth area. The diversity of the population is then defined by

E(t) = − Σ_{k=1}^{Q} q_k ln q_k.   (4)
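The two classical measures of Eqs. (3) and (4) can be sketched as follows. How the search space is partitioned into the Q areas for E(t) is not fixed by the text, so the sketch leaves that assignment to the caller; the function names are our own.

```python
import numpy as np

def distance_diversity(P, L):
    """Average-distance-amongst-points measure, Eq. (3).

    P: (N, n) array of particle positions; L: length of the longest
    diagonal of the search space.
    """
    N = P.shape[0]
    mean_point = P.mean(axis=0)                  # the average point p-bar
    dists = np.sqrt(((P - mean_point) ** 2).sum(axis=1))
    return float(dists.sum() / (N * L))

def entropy_diversity(area_index, Q):
    """Population-distribution entropy, Eq. (4).

    area_index: length-N array giving, for each particle, which of the
    Q equal-sized areas of the search space it falls into; the
    partitioning scheme itself is left to the caller.
    """
    N = len(area_index)
    counts = np.bincount(area_index, minlength=Q)
    q = counts[counts > 0] / N                   # q_k = Z_k / N; empty areas contribute 0
    return float(-(q * np.log(q)).sum())
```

With all particles in one area the entropy is 0; with one particle per area (Q = N) it reaches its maximum ln Q.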

Based on the diversity measures above, it is worth noting that D(t) represents the degree of dispersion of the particles' positions, whereas E(t) describes how the particles are distributed among the various domains of the search space. These two existing methods have some


shortcomings when describing the diversity of a population. For example, E(t) is very small when all particles converge to a few scattered local optima, whereas D(t) is then very large, even though the actual diversity of the particles is already poor in this circumstance. To overcome this problem, several researchers have attempted to build better diversity measures for describing the population distribution in each generation. Riget and Vesterstrøm (2002) introduced two phases, attraction and repulsion: the search alternates between exploiting and exploring (attraction at high diversity, repulsion at low diversity). As long as the diversity is above a threshold dlow, the particles attract each other; when the diversity declines below dlow, the particles switch and start to repel each other until the threshold dhigh is met. Zhan et al. (2009) adopted an evolutionary state estimation technique that takes the population distribution into account in every generation: the mean distance of each particle to all the other particles is combined into a new formula (the evolutionary factor), and the population distribution is characterized through the identification of the evolutionary state. As an extension of this previous work, we propose the following diversity measurement:

Step 1. For convenience, take the minimization of a single-objective function f(x) as an example. Let f_p be an array with N elements, where f_p(i) is the fitness value of the ith particle; let fmax = max(f_p) and fmin = f(gbest). Then form the inspection interval [fmin, fmax].
Step 2. Divide [fmin, fmax] into N intervals of equal length, and count the number n_i of particles falling into the ith interval (i = 1, 2, …, N), so that Σ_{i=1}^{N} n_i = N.
Step 3.
The diversity of the population distribution is then given as

E_t = − Σ_{i=1}^{N} p_i ln p_i,  with p_i = n_i / N.   (5)

This method takes the population distribution information fully into account from three different perspectives: fitness value, distance, and entropy. Fig. 2 graphically compares particle distributions arising during evolution in terms of Eq. (5). The entropy takes its smallest value (0) when all particles fall into the same interval, as in Fig. 2(c); the fitness values of all particles are then very similar. The more evenly the fitness values are spread, the more uniformly the particles are distributed: the entropy takes its maximum value (ln N) when each interval contains exactly one particle, as in Fig. 2(b). Fig. 2(a) shows a general stochastic distribution of the fitness values obtained during evolution. To some extent, a higher value of E_t means a more dispersed population distribution, and vice versa.
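The three steps above might be implemented as follows. This is a sketch: clipping the particle that sits exactly at fmax into the last interval is our own choice, as the text does not specify the boundary handling.

```python
import numpy as np

def fitness_entropy(fitness):
    """Diversity E_t of Eq. (5): bin the N fitness values into N
    equal-length intervals over [fmin, fmax] and take the Shannon
    entropy of the interval occupancies."""
    f = np.asarray(fitness, dtype=float)
    N = f.size
    fmin, fmax = f.min(), f.max()
    if fmax == fmin:                      # all particles in one interval
        return 0.0
    # map each fitness to an interval index; clip keeps fmax in the last bin
    idx = np.clip(((f - fmin) / (fmax - fmin) * N).astype(int), 0, N - 1)
    n_i = np.bincount(idx, minlength=N)
    p = n_i[n_i > 0] / N                  # p_i = n_i / N; empty intervals contribute 0
    return float(-(p * np.log(p)).sum())
```

As expected from the discussion of Fig. 2, identical fitness values give E_t = 0, while one particle per interval gives the maximum ln N.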

3.2. Adaptive control of inertia weight w

In conventional PSO, w is decreased linearly with time while searching for the global optimum. However, for many complicated real-world problems, decreasing w purely with time is not adequate, as w then cannot adapt to the constantly changing search environment characterized by the diversity of the population distribution. Since the search process of PSO is nonlinear and highly complex, a linearly decreasing w does not really reflect it. To address this weakness, numerous researchers have suggested that w should be larger in the exploration stage and smaller in the exploitation stage.


Fig. 2. Particle distribution map. (a) General distribution, (b) uniform distribution, and (c) extreme distribution.

In particular, this would help balance global exploration and local exploitation. For this reason, we design an adjusting scheme in which w is initialized to 1 and adaptively controlled according to the E_t of Eq. (5):

w(E_t) = 1 / (0.8 + 3.2·e^{−2·E_t·ln 4}) ∈ [0.25, 1],  for all E_t ∈ [0, 1],   (6)

where w varies with the diversity E_t of the population distribution throughout the evolution process. The purpose of this variation is to help particles balance global and local search capabilities: w does not change purely with time, but regularly with E_t, and hence adapts to the environmental change characterized by E_t during the search.

3.3. Elitist learning strategy

A high similarity of the population means that all particles tend to concentrate their activity in a very small region near the gbest; the entire population can then easily converge to local optima when solving complex optimization problems. Therefore, the ability to jump out of local optima has become one of the most important and attractive objectives in PSO improvement, and a number of PSO variants have been proposed to achieve it (Zhan et al., 2009; Yu et al., 2005; Riget and Vesterstrøm, 2002). Adjusting the algorithm parameters and introducing an auxiliary search operator have become the two most appealing strategies for this. Accordingly, an elitist learning strategy is designed here and applied in locally optimal regions to help some special particles escape them. Unlike the other particles, the globally best particle has no leader to follow and needs fresh energy to improve itself. Hence, a mutation-based ELS is proposed to help the globally best particle move to a potentially better region; if a better region is found, the other particles in the population will follow the new leader, jump out, and move to the new region. In addition, some special particles that are very close to the gbest should also be mutated, because these particles are likely to share some good structural properties of the global optimum.
To some extent, this helps accelerate the search and find a potentially better region. ELS is based on the concept of the mutation operator of DE (Storn and Price, 1997), and is described as follows:

Step 1. For j = 1 to D (number of dimensions): generate a random number r_j. If r_j < p_m, perform a mutation operation by means of the modified scheme

x_{gj}^{new} = gbest_j + F·(x_j^{max} − x_j^{min}),

where the interval [x_j^{min}, x_j^{max}] coincides with the lower and upper bounds of the optimized problem, F is a constant from (0, 2), and gbest_j is the jth coordinate of the globally best particle gbest. End For.

Step 2. Calculate the average distance of the particles, defined as

AVE = (1 / (N − 1)) · Σ_{i=1}^{N} sqrt( Σ_{j=1}^{D} (x_{ij} − x̄_j)² ),

where N is the size of the population and x̄_j is the jth coordinate of the mean position of all particles.

Step 3. For i = 1 to N (number of particles):
3.1. Compute the distance between the particle p_i = (x_{i1}, x_{i2}, …, x_{iD}) and the globally best particle gbest = (gbest_1, gbest_2, …, gbest_D) according to the Euclidean metric:

Dis_{i,g} = sqrt( Σ_{j=1}^{D} (x_{ij} − gbest_j)² ).

3.2. If Dis_{i,g} < sqrt(AVE), perform a mutation operation on a randomly selected jth dimension of particle i according to

x_{ij}^{new} = x_{ij} + rand1·(x_{ij} − gbest_j),

where rand1 is a uniform random number between 0 and 1. End For.
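A sketch of the two-part ELS follows. The default values of F and p_m, the clipping of mutated coordinates back into the bounds, and the reading of the Step 2 average as a distance to the swarm mean are illustrative assumptions, not the authors' exact choices.

```python
import numpy as np

def elitist_learning(X, gbest, lower, upper, F=0.5, pm=0.1, rng=None):
    """Elitist learning strategy (Section 3.3), as a sketch.

    Step 1: DE-style mutation of the globally best particle.
    Steps 2-3: particles closer to gbest than sqrt(AVE) get one
    randomly chosen dimension perturbed.
    """
    rng = np.random.default_rng() if rng is None else rng
    N, D = X.shape
    # Step 1: mutate gbest dimension by dimension with probability pm
    gbest_new = gbest.copy()
    for j in range(D):
        if rng.random() < pm:
            gbest_new[j] = gbest[j] + F * (upper[j] - lower[j])
    gbest_new = np.clip(gbest_new, lower, upper)     # keep inside bounds (assumption)
    # Step 2: average distance of the particles to the swarm mean
    mean = X.mean(axis=0)
    ave = np.sqrt(((X - mean) ** 2).sum(axis=1)).sum() / (N - 1)
    # Step 3: perturb particles that crowd around gbest
    X_new = X.copy()
    for i in range(N):
        dis_ig = np.sqrt(((X[i] - gbest) ** 2).sum())
        if dis_ig < np.sqrt(ave):
            j = rng.integers(D)                       # random dimension
            X_new[i, j] = X[i, j] + rng.random() * (X[i, j] - gbest[j])
    return np.clip(X_new, lower, upper), gbest_new
```

With p_m = 0 and no particle crowding the gbest, the operator is the identity, which is a convenient sanity check.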

3.4. Summary of MAPSO

The procedure for the MAPSO algorithm can be summarized as follows:

Step 1. Form an initial population array P of N particles with random positions and velocities on D dimensions in the search space. Let the current mode be mode = "Exploit", and let the maximum and minimum of diversity be Emax and Emin, respectively.
Step 2. Loop:
2.1. Compute E_t. If E_t < Emin, switch mode: mode = "Explore"; on the contrary, if E_t > Emax, switch mode: mode = "Exploit".
2.2. If mode = "Exploit", go to step 2.3; otherwise, perform ELS and go to step 2.4.
2.3. Update the inertia weight w according to Eq. (6); generate a new population Pt[N] by updating each particle's velocity and displacement according to Eqs. (1)-(2).
2.4. Evaluate the new population Pt[N]: calculate the fitness value ft[i] of each particle Pt[i] (i = 1, 2, …, N). If Pt[i] is better than pbest_i, update pbest_i = Pt[i]; otherwise it remains the same. Then, if pbest_i is better than gbest, replace gbest with pbest_i; otherwise gbest remains the same.
2.5. If the termination criteria are met (usually a maximal number of iterations or a satisfactory fitness), exit the loop.
Step 3. End loop.
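Putting the pieces together, the following minimal, self-contained sketch runs a MAPSO-style loop on the Sphere function. The velocity clamp, the bound handling, and the greedy single-dimension ELS nudge are illustrative assumptions rather than the authors' exact implementation; the adaptive weight is exactly Eq. (6).

```python
import numpy as np

def inertia_weight(Et):
    """Adaptive inertia weight of Eq. (6): w(0) = 0.25, w(1) = 1."""
    return 1.0 / (0.8 + 3.2 * np.exp(-2.0 * Et * np.log(4.0)))

def mapso_sphere(N=20, D=10, iters=300, e_min=0.005, e_max=0.5, seed=1):
    """Minimal MAPSO-style loop on f(x) = sum(x_i^2)."""
    rng = np.random.default_rng(seed)
    lo, hi = -100.0, 100.0
    vmax = 0.2 * (hi - lo)                        # velocity clamp (assumption)
    X = rng.uniform(lo, hi, (N, D))
    V = rng.uniform(-vmax, vmax, (N, D))
    f = (X ** 2).sum(axis=1)
    pbest, pf = X.copy(), f.copy()
    g = pf.argmin()
    gbest, gf = pbest[g].copy(), pf[g]
    mode = "Exploit"
    for _ in range(iters):
        # Step 2.1: diversity E_t of Eq. (5), normalized by its maximum ln N
        fmin, fmax = f.min(), f.max()
        if fmax > fmin:
            idx = np.clip(((f - fmin) / (fmax - fmin) * N).astype(int), 0, N - 1)
            cnt = np.bincount(idx, minlength=N)
            p = cnt[cnt > 0] / N
            Et = float(-(p * np.log(p)).sum()) / np.log(N)
        else:
            Et = 0.0
        if Et < e_min:
            mode = "Explore"
        elif Et > e_max:
            mode = "Exploit"
        # Step 2.2: ELS when exploring (greedy nudge of one gbest dimension)
        if mode == "Explore":
            cand = gbest.copy()
            j = rng.integers(D)
            cand[j] += 0.01 * (hi - lo) * rng.standard_normal()
            cand = np.clip(cand, lo, hi)
            cf = (cand ** 2).sum()
            if cf < gf:
                gbest, gf = cand, cf
        # Step 2.3: Eqs. (1)-(2) with the adaptive w of Eq. (6)
        w = inertia_weight(Et)
        r1, r2 = rng.random((N, D)), rng.random((N, D))
        V = np.clip(w * V + 2.0 * r1 * (pbest - X) + 2.0 * r2 * (gbest - X),
                    -vmax, vmax)
        X = np.clip(X + V, lo, hi)
        # Step 2.4: update pbest and gbest
        f = (X ** 2).sum(axis=1)
        better = f < pf
        pbest[better], pf[better] = X[better], f[better]
        g = pf.argmin()
        if pf[g] < gf:
            gbest, gf = pbest[g].copy(), pf[g]
    return gf
```

Note the endpoints of Eq. (6): with E_t = 0 the weight is 1/(0.8 + 3.2) = 0.25, and with E_t = 1 it is 1/(0.8 + 3.2·4^{-2}) = 1, so low diversity damps the velocities while high diversity preserves them.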

4. Numerical experiment

4.1. Experimental methodology

The MAPSO developed in this study uses an improved PSO with ELS and adaptive control of the inertia weight to reach the global optimum. It is well known that PSO variants perform best when run several times with a good set of parameters; we therefore tested several combinations of parameter settings and selected the best one according to the obtained mean fitness values. For a fuller comparison, we adopt three types of test performance: the optimal solution value, the mean value, and the standard variance of the solution, with experiments repeated several times under the same conditions. To evaluate MAPSO further, an additional t-test was carried out, so that the significance of the difference between two averages can be inferred from t-distribution theory. It is worth noting that the search spaces of many real-world optimization problems are asymmetric; if the proposed method can solve problems with asymmetric search spaces, this demonstrates, to some extent, its robustness in real applications. For this reason, we test asymmetric problems using MAPSO to assess its robustness. In addition, to further verify the algorithm in a real engineering situation, we tested


MAPSO on a standard test image, which illustrates the advantages or disadvantages of MAPSO.

4.2. Description of functions and parameters

Many engineering and economic problems can be stated in simple mathematical formulations. In this section, the six benchmark optimization functions f1-f6 listed in Table 1 are used for the numerical simulations. The first three functions are unimodal, and the remaining ones are multimodal. These benchmark functions have previously been solved by a variety of other algorithms, and each has its own unique characteristics. The Griewank function is similar to the Sphere function with an added noise term ∏_{i=1}^{D} cos(x_i/√i); it is highly multimodal in low dimensions, whereas it resembles the plain Sphere function in higher dimensions because the noise continuously diminishes (lim_{D→∞} ∏_{i=1}^{D} cos(x_i/√i) = 0 for random x). The Rosenbrock function is very steep far from the optimum and "banana"-shaped close to it. The Ackley and Rastrigin functions are both highly multimodal in all dimensions, but with different steepnesses. Additionally, all benchmark functions have a global minimum of 0, attained at x = (0, 0, …, 0), except for Rosenbrock, whose minimum lies at x = (1, 1, …, 1).

To evaluate the performance of the MAPSO algorithm, we compare it with three other algorithms: a diversity-guided adaptive PSO (Riget and Vesterstrøm, 2002), a comprehensive learning PSO (Liang et al., 2006), and a perturbed PSO (Zhao, 2010), denoted APSO, CLPSO, and PPSO, respectively. APSO uses the concept of attraction and repulsion to control the diversity of the swarm and prevent premature convergence. CLPSO offers a comprehensive learning strategy, which aims at better performance on the optimized functions.
PPSO is based on the concept of a perturbed global best to deal with premature convergence and diversity maintenance within the swarm. The parameter configurations of these PSO variants, taken from their corresponding references, are given in Table 2. The MAPSO configuration is as follows: the initial population size, popsize, is set to 20; the maximum number of fitness evaluations (FE) is 2.0 × 10^5; the inertia weight w is initialized to 1, and c1 and c2 are set to 2.0, the same as the common configuration of conventional PSO; Emin = 0.005 and Emax = 0.5. For a fair comparison, the population size is set to 20 for every variant, as commonly adopted in comparisons of improved PSOs. More importantly, each test function is independently executed 30 times to reduce statistical error. The three test performances (TPs) are the best value (BV), mean value (MV), and standard variance (SV) of the solution. All simulations are performed in Matlab 7.0 on the same machine with an Intel Core i5 CPU at 2.27 GHz, 2 GB of memory, and the Windows 7 operating system.

Table 1
Six benchmark functions used in this paper.

Type       | Test function                                                             | Name       | D  | Search space  | Min | Acceptance
Unimodal   | f1(x) = Σ_{i=1}^{D} x_i²                                                  | Sphere     | 30 | [−100, 100]   | 0   | 0.01
Unimodal   | f2(x) = Σ_{i=1}^{D} (Σ_{j=1}^{i} x_j)²                                    | Quadric    | 30 | [−100, 100]   | 0   | 10
Unimodal   | f3(x) = Σ_{i=1}^{D−1} (100·(x_{i+1} − x_i²)² + (x_i − 1)²)                | Rosenbrock | 30 | [−30, 30]     | 0   | 100
Multimodal | f4(x) = Σ_{i=1}^{D} (x_i² − 10·cos(2πx_i) + 10)                           | Rastrigin  | 30 | [−5.12, 5.12] | 0   | 60
Multimodal | f5(x) = (1/4000)·Σ_{i=1}^{D} x_i² − ∏_{i=1}^{D} cos(x_i/√i) + 1           | Griewank   | 30 | [−600, 600]   | 0   | 0.01
Multimodal | f6(x) = −20·exp(−0.2·sqrt((1/D)·Σ_{i=1}^{D} x_i²)) − exp((1/D)·Σ_{i=1}^{D} cos(2πx_i)) + 20 + e | Ackley | 30 | [−30, 30] | 0 | 0.01
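For reference, the six benchmarks of Table 1 can be written directly in code; note that the classical Rosenbrock form used here attains its minimum of 0 at x = (1, …, 1) rather than at the origin.

```python
import numpy as np

def sphere(x):                                   # f1
    return float((x ** 2).sum())

def quadric(x):                                  # f2: sum_i (sum_{j<=i} x_j)^2
    return float((np.cumsum(x) ** 2).sum())

def rosenbrock(x):                               # f3, minimum at x = (1, ..., 1)
    return float((100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2).sum())

def rastrigin(x):                                # f4
    return float((x ** 2 - 10.0 * np.cos(2 * np.pi * x) + 10.0).sum())

def griewank(x):                                 # f5
    i = np.arange(1, x.size + 1)
    return float((x ** 2).sum() / 4000.0 - np.cos(x / np.sqrt(i)).prod() + 1.0)

def ackley(x):                                   # f6
    D = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt((x ** 2).sum() / D))
                 - np.exp(np.cos(2 * np.pi * x).sum() / D) + 20.0 + np.e)
```

Evaluating each function at its minimizer returns 0 (up to floating-point rounding), which is a quick way to check the transcriptions.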


Table 2
Parameter configurations used in the comparison.

Algorithm | Year | Parameter settings                                                                              | Reference
APSO      | 2002 | w: 0.9-0.4, c1 = c2 = 2.0, dlow = 5.0 × 10⁻⁶, dhigh = 0.25, popsize = 20, FE = 2.0 × 10⁵        | Riget and Vesterstrøm (2002)
CLPSO     | 2006 | w: 0.9-0.4, c = 1.49445, m = 7, popsize = 20, FE = 2.0 × 10⁵                                    | Liang et al. (2006)
PPSO      | 2010 | w = 0.9, c1 = 0.5, c2 = 0.3, σmax = 0.15, σmin = 0.001, a = 0.5, popsize = 20, FE = 2.0 × 10⁵   | Zhao (2010)

4.3. Comparisons on the performances of solutions Table 3 shows a comparison of the performances of the obtained solutions between MAPSO and the three known algorithms. The boldfaced results in the table indicate the best result among those obtained by all four algorithms. Fig. 3 provides a graphical comparison of the convergence characteristics of the evolutionary processes in solving the six different functions. Longitudinal coordinates are given by a base-10 logarithmic function for the mean fitness value. It can be seen that MAPSO achieves the best performances among all algorithms in terms of several criteria including BV, MV, and SV. For function f1, MAPSO offers a better performance than the other three algorithms, i.e., it obtains the global optimum at the global optimal location. The accuracy of BV and MV that are obtained are also both very satisfactory compared to the other algorithms. Hence, this problem can be solved relatively easily in terms of solution accuracy. For function f2, the quality of solutions is much better than the other three algorithms. It can easily be observed from Fig. 1(b) that MAPSO is much faster than the other algorithms when approaching the optimum. Additionally, for function f3, it should be noted that several results of criteria of assessment are very close among all the PSO algorithms. Compared to solutions obtained on the other testing functions, it is clear that the solutions of function f3 are much worse than those of the other functions. The comparison in both Table 3 and Fig. 3 shows that, when solving unimodal problems, MAPSO can provide better performance than the other algorithms, especially in functions f1 and f2. MAPSO also obtains satisfactory solutions on the optimization of complex multimodal problems f4, f5, and f6. APSO performs better than the other algorithms according to three criteria for all multimodal functions except for f5. 
This means that CLPSO achieves the best performance among all algorithms in terms of the test criteria on f5. Furthermore, although APSO finds a slightly better BV than MAPSO for problem f6, it performs slightly worse than MAPSO according to MV and SV. Hence, comparing Table 3 and Fig. 3, the results on the six test functions show that MAPSO outperforms the other improved PSO algorithms: it is able to jump out of local optima and enhances solution accuracy during the evolutionary process, which suggests that MAPSO really benefits from the ELS.

4.4. Comparisons on the convergence speed

The speed of obtaining the global optimum is also a very important performance criterion for comparing the algorithms. Table 4 shows that MAPSO is much faster, whether measured by the average number of FEs or by the average CPU time required to obtain an acceptable solution. SR in Table 4 represents the rate of obtaining an acceptable solution over 30 independent runs, and MFE and Time are the average number of FEs and the corresponding mean CPU time to achieve an acceptable solution, respectively. Of the six test functions, MAPSO achieves a fully successful rate on five (f1, f2, f3, f4, and f6). The SR of the MAPSO algorithm is higher than those of the other three algorithms for

Table 3
Comparisons of test results for four PSO algorithms on six functions.

Function  TP   APSO           CLPSO          PPSO           MAPSO
f1        BV   1.97 × 10^−15  1.57 × 10^−20  5.30 × 10^−6   0
          MV   7.06 × 10^−13  1.62 × 10^−19  7.62 × 10^−5   1.35 × 10^−30
          SV   9.23 × 10^−13  1.28 × 10^−19  0.98 × 10^−5   2.42 × 10^−30
f2        BV   3.15 × 10^−2   20.93          4.62 × 10^−5   1.23 × 10^−11
          MV   6.18 × 10^−3   38.26          12.36          3.61 × 10^−10
          SV   6.36 × 10^−3   129.18         40.63          5.87 × 10^−10
f3        BV   1.23           5.02           13.65          1.13
          MV   29.45          7.47           62.12          2.68
          SV   16.86          8.48           81.36          3.14
f4        BV   15.364         2.51 × 10^−11  46.32          3.15 × 10^−15
          MV   30.12          3.84 × 10^−11  56.31          1.89 × 10^−14
          SV   7.63           5.92 × 10^−11  17.32          1.02 × 10^−14
f5        BV   5.68 × 10^−3   1.26 × 10^−14  3.02 × 10^−7   7.16 × 10^−3
          MV   2.07 × 10^−3   2.39 × 10^−13  1.16 × 10^−6   2.54 × 10^−2
          SV   2.61 × 10^−2   1.39 × 10^−12  5.17 × 10^−6   6.21 × 10^−2
f6        BV   1.35 × 10^−15  6.39 × 10^−13  1.35 × 10^−6   1.56 × 10^−15
          MV   2.91 × 10^−14  1.94 × 10^−12  1.05 × 10^−6   1.69 × 10^−14
          SV   4.06 × 10^−13  1.01 × 10^−12  1.95 × 10^−5   3.82 × 10^−14

functions f2 and f4, and equal to that of the other algorithms for functions f1, f3, and f6. The success rate of the proposed algorithm is slightly worse than that of the CLPSO algorithm for f5 according to the testing criteria (MFE, Time, SR). In short, MAPSO uses the least CPU time and the smallest number of FEs to obtain acceptable solutions for functions f1, f2, f3, f4, and f6, and, in terms of the average reliability over all test functions, MAPSO achieves the highest reliability of 98.71%, followed by APSO, CLPSO, and PPSO.

4.5. Analysis of parameter settings

To investigate the effects of parameter settings on the proposed algorithm, we tested MAPSO under different sets of parameters; the combinations are described in Table 5. Table 6 compares the results of MAPSO under these different settings. The results make clear that MAPSO with C4 delivers good solutions to the six functions in terms of MV; we therefore adopted combination C4 in the above experiments. In many cases, however, the parameters can only be set through experiments or through the experience of the researchers, so more research on parameter selection is needed.

4.6. Discussion of results

Each benchmark optimization function has a single global solution but more than one local optimal solution distributed in the solution space. More importantly, since MAPSO is a stochastic population-based algorithm, each independent run produces different results. Therefore, to evaluate more fully the stability of the results obtained by MAPSO, we have run

K. Tang et al. / Engineering Applications of Artificial Intelligence 37 (2015) 9–19


Fig. 3. Convergence performance of the four different PSO algorithms on the six functions: (a) f1, (b) f2, (c) f3, (d) f4, (e) f5, and (f) f6.

the same set of experiments many times for all the test functions. As expected, the proposed algorithm gives very good results. Tables 3 and 4 summarize the experimental results obtained over 30 independent runs. As Table 3 shows, MAPSO is perfectly capable of locating the minimum of the unimodal functions with a limited number of particles, much better than the other PSO algorithms. It is also worth noting that the SV of our results is much better than that of the other algorithms; for example, MV and SV on function f1 are 1.35 × 10^−30 and 2.42 × 10^−30, respectively, which means that after each run of MAPSO the obtained optima concentrate in the neighborhood of the global solution. For multimodal problems f4,

f5, and f6, the proposed algorithm also performs well compared with the other algorithms. For example, the BV, MV, and SV of APSO on function f6 are 1.35 × 10^−15, 2.91 × 10^−14, and 4.06 × 10^−13, respectively: APSO is superior to the other algorithms only in terms of BV, while its MV and SV are both slightly worse than those of MAPSO. To some extent, this also means that the probability that MAPSO produces optimal solutions is much greater than that of APSO. Furthermore, throughout our study the average reliability of MAPSO is 98.71% over all the test problems, which means the stability of obtaining acceptable solutions is very good. For f1 and f4, the best solutions are 0 and 3.15 × 10^−15, respectively, which



means the proposed algorithm has a strong advantage over the other algorithms in its ability to search for the global optimum. It is also worth noting that the MFE and Time of MAPSO are both the smallest among all the compared algorithms, with an SR of 100%. A significant difference of the proposed algorithm with respect to the others is the way the diversity-evaluation strategy is incorporated into the ELS and into the setting of the inertia weight w during evolution. This prevents the algorithm from stagnating and prematurely converging to a local optimum: the strategy is responsible for keeping the diversity required in the population to move the search towards the global optimum of the problem. Together with the ELS and the adjustment of w, this confirms that the performance of an improved PSO is determined by the introduction of additional operators as well as by the nature of the particular problems, which implies that no evolutionary algorithm is absolutely superior to all other approaches. The curves in Fig. 3 show that MAPSO achieves good convergence on the six different problems. The experimental results also show that performing the ELS with a mutation operator is effective in helping particles jump out of local optima. Further work is needed to understand how the natural characteristics of problems influence the quality of solutions with respect to the diversity-measurement strategy, the ELS, and the adjustment of the inertia weight w. In summary, the results make clear that MAPSO performs well on different complex optimization problems in terms of both solution quality and convergence speed.
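The interplay of diversity measurement, inertia-weight adaptation, and ELS described above can be sketched in a few lines of Python. The paper's exact diversity formula, thresholds, and ELS schedule are not reproduced in this excerpt, so the choices below (distance-to-centroid diversity, a linear map of diversity into the w range 0.9–0.4 using the dlow/dhigh thresholds from the parameter settings, and a Gaussian perturbation of one dimension of an elite particle) are common stand-ins, not the authors' definitions.

```python
import math
import random

def diversity(swarm):
    """Average Euclidean distance of the particles to the swarm centroid
    (a common diversity measure; an assumption, not the paper's formula)."""
    n, d = len(swarm), len(swarm[0])
    centroid = [sum(p[k] for p in swarm) / n for k in range(d)]
    return sum(math.sqrt(sum((p[k] - centroid[k]) ** 2 for k in range(d)))
               for p in swarm) / n

def adapt_inertia(div, d_low=5.0e-6, d_high=0.25, w_min=0.4, w_max=0.9):
    """Map diversity into [w_min, w_max]: low diversity -> large w to
    favor exploration, high diversity -> small w to favor exploitation."""
    t = min(max((div - d_low) / (d_high - d_low), 0.0), 1.0)
    return w_max - (w_max - w_min) * t

def elitist_learning(particle, sigma, bounds=(-100.0, 100.0)):
    """ELS-style move: perturb one randomly chosen dimension of an elite
    particle with Gaussian noise scaled to the search range, clamped to
    the bounds."""
    lo, hi = bounds
    mutant = list(particle)
    k = random.randrange(len(mutant))
    mutant[k] = min(max(mutant[k] + (hi - lo) * random.gauss(0.0, sigma), lo), hi)
    return mutant
```

The mutant would replace the elite only if it improves its fitness, so the perturbation can help an elite escape a local basin without ever degrading the stored best.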

Table 4
Convergence speed and mean reliability among four PSO algorithms on six functions.

Function  TP    APSO    CLPSO   PPSO    MAPSO
f1        MFE   11,364  7501    13,048  6025
          Time  1.23    0.45    2.68    0.35
          SR    100.00  100.00  100.00  100.00
f2        MFE   13,789  –       10,265  2314
          Time  3.16    –       1.51    0.76
          SR    96.25   0.00    50.45   100.00
f3        MFE   10,564  7301    9014    6010
          Time  1.23    0.56    0.97    0.43
          SR    100.00  100.00  100.00  100.00
f4        MFE   9632    5612    13,645  325
          Time  1.35    1.05    2.84    0.06
          SR    92.65   97.46   56.25   100.00
f5        MFE   12,045  8251    8391    13,051
          Time  1.98    0.81    1.02    1.47
          SR    30.95   100.00  90.26   92.25
f6        MFE   12,968  7725    11,392  3924
          Time  1.56    0.56    1.24    0.68
          SR    100.00  100.00  100.00  100.00
AR              86.64%  82.91%  82.83%  98.71%

MAPSO is more efficient in solving complex optimization problems, reaching near-optimal solutions with smaller MFE and Time. Our results also show that MAPSO has significant universality to some extent. It should also be pointed out that the stability of any algorithm depends on the setting of its parameters. To evaluate MAPSO more fully, additional t-tests (Zhan et al., 2009) were also carried out. Table 7 presents the t-values and P-values of this two-tailed test with a significance level of 0.05, on every function, between MAPSO and each of the other PSO algorithms. Rows "1 (Better)," "0 (Same)," and "−1 (Worse)" give the numbers of functions on which MAPSO performs significantly better than, about the same as, and significantly worse than the compared algorithm, respectively. Row "General merit" shows the difference between the number of 1's and the number of −1's, giving an overall comparison between the two algorithms. For example, comparing MAPSO and PPSO, MAPSO is significantly superior to PPSO on three functions (f2, f3, and f4), does as well as PPSO on the other three (f1, f5, and f6), and does worse on none, yielding a general merit figure of 3 − 0 = 3 and indicating that MAPSO is generally superior to PPSO. Despite slightly weaker performance on some functions, MAPSO in general offers a significant performance improvement over the other compared PSOs, as confirmed in Table 7.

Analogously to the previous experiment on six benchmark functions, two minimization problems (F1–F2) from the literature (Suganthan et al., 2005) are used to further evaluate the performance of MAPSO:

Minimize F1(X) = Σ_{i=1}^{D} z_i^2 + f_bias1, z = x − o, x = [x1, x2, …, xD], o = [o1, o2, …, oD].

Minimize F2(X) = Σ_{i=1}^{D} (Σ_{j=1}^{i} z_j)^2 + f_bias2, z = x − o, x = [x1, x2, …, xD], o = [o1, o2, …, oD].

where D is the number of dimensions, X ∈ [−100, 100]^D, and o is the shifted global optimum. Both functions are unimodal and shifted, with global optimum x* = o and f_bias1 = f_bias2 = −450. We compare MAPSO with two different types of heuristic algorithms: the IGA (Tang et al., 2011a) and the PDE (Tasoulis et al., 2004). To make a reliable comparison, the evaluation criteria are chosen exactly as in the literature (Suganthan et al., 2005): the number of dimensions is set to 10, each problem is executed 25 times, and the maximum number of fitness evaluations (FES) is 1.0 × 10^5; a run terminates before reaching the maximum number of FES if the function error value is 10^−8 or less. For each problem, we recorded the function error value (f(x) − f(x*)) after 1e3, 1e4, and 1e5 FES and at termination, then sorted the error values of the 25 runs from smallest (Best) to largest (Worst), reporting the 1st (Best), 7th, 13th (Median), 19th, and 25th (Worst) values together with the MV and SV over the 25 runs. Table 8 shows the error values achieved at FES = 1e3, 1e4, and 1e5 for problems 1 and 2. The results indicate that MAPSO is superior to the two known algorithms from the viewpoint of the evaluation criteria. In particular, the mean error values obtained by MAPSO are remarkably better than those of the

Table 5
The combinations of parameter settings used in MAPSO.

Combination  Parameter settings
C1           popsize = 20, w: 0.9, c1 = c2 = 2.0, Emin = 0.005, Emax = 0.5, FEs = 200,000
C2           popsize = 20, w: 0.9, c1: 2.5–0.5, c2: 0.5–2.5, Emin = 0.005, Emax = 0.5, FEs = 200,000
C3           popsize = 20, w: 1, c1: 3–1, c2: 1–3, Emin = 0.005, Emax = 0.5, FEs = 200,000
C4           popsize = 20, w: 1, c1 = c2 = 2.0, Emin = 0.005, Emax = 0.5, FEs = 200,000


Table 6
Comparison results for MAPSO with different combinations (C1–C4).

           MAPSO with C1                   MAPSO with C2                  MAPSO with C3                   MAPSO with C4
Functions  MV             SV               MV             SV              MV             SV               MV             SV
f1         1.08 × 10^−19  3.25 × 10^−20    6.39 × 10^−12  8.50 × 10^−12   3.68 × 10^−15  4.06 × 10^−15    1.35 × 10^−30  2.42 × 10^−30
f2         3.94 × 10^−8   5.62 × 10^−8     3.14 × 10^−4   2.35 × 10^−4    7.88 × 10^−6   6.14 × 10^−6     3.61 × 10^−10  5.87 × 10^−10
f3         1.85 × 10^1    1.02 × 10^1      6.98           2.41            3.13 × 10^1    4.72 × 10^1      2.68           3.14
f4         3.65 × 10^−13  2.84 × 10^−13    2.68 × 10^−12  1.06 × 10^−12   2.35 × 10^−11  1.38 × 10^−11    1.89 × 10^−14  1.02 × 10^−14
f5         9.35           35.2             3.62           6.98            2.38 × 10^1    1.98 × 10^1      1.47           92.25
f6         2.69 × 10^−12  8.54 × 10^−12    6.25 × 10^−4   7.84 × 10^−4    2.36 × 10^−8   4.97 × 10^−8     1.69 × 10^−14  3.82 × 10^−14

other two algorithms on F1 and F2. In addition, the best solutions obtained by MAPSO are better than those of the other two algorithms, and its mean error values are the smallest among all compared algorithms. Simulation results on the other test criteria also show that the proposed algorithm is more effective than the compared algorithms.
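The two shifted benchmarks above can be implemented directly from their definitions. The following sketch uses the CEC 2005 bias value of −450 for both functions; the function names are ours, and the shift vector o would in practice be loaded from the benchmark's data files rather than chosen by hand.

```python
def f1_shifted_sphere(x, o, bias=-450.0):
    """Shifted sphere: F1(x) = sum_i z_i^2 + bias, with z = x - o."""
    z = [xi - oi for xi, oi in zip(x, o)]
    return sum(zi * zi for zi in z) + bias

def f2_shifted_schwefel12(x, o, bias=-450.0):
    """Shifted Schwefel 1.2: F2(x) = sum_i (sum_{j<=i} z_j)^2 + bias."""
    z = [xi - oi for xi, oi in zip(x, o)]
    total, prefix = 0.0, 0.0
    for zj in z:
        prefix += zj          # running sum z_1 + ... + z_i
        total += prefix ** 2
    return total + bias

def error_value(fx, f_star):
    """Function error value f(x) - f(x*), the quantity recorded at
    1e3, 1e4, and 1e5 FES and used in the 1e-8 stopping test."""
    return fx - f_star
```

At the global optimum x = o both functions return exactly the bias, so the error value is zero there.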

Table 7
Comparisons between MAPSO and the other PSOs on t-tests.

Functions          PPSO        APSO        CLPSO
f1   t-value       1.6248      5.7624^a    7.3291^a
     P-value       0.2158      0.0000      0.0000
f2   t-value       2.2458^a    6.5614^a    16.3294^a
     P-value       0.0213      0.0000      0.0000
f3   t-value       5.6891^a    34.1209^a   2.3158^a
     P-value       0.0000      0.0000      0.0392
f4   t-value       17.3624^a   16.0294^a   1.8674
     P-value       0.0000      0.0000      0.0754
f5   t-value       0.5962      −5.3621^a   12.9257^a
     P-value       0.6517      0.0000      0.0000
f6   t-value       1.8021      −3.5492^a   −1.6984^a
     P-value       0.0814      0.0007      0.1103
1 (Better)         3           4           5
0 (Same)           3           0           1
−1 (Worse)         0           2           0
General merit      3           2           5

a The value of t with 29 degrees of freedom is significant at α = 0.05 by a two-tailed test.

4.7. Application to the Gray21 image

As an additional test on image segmentation, the MAPSO method was also compared with another efficient method to show its advantage in multilevel thresholding. In this work, we employed the popular uniformity measure used by Maitra and Chatterjee (2008) to provide an effective comparison of the performances achieved. This uniformity measure is given as

U = 1 − 2c · [Σ_{j=0}^{c} Σ_{i∈R_j} (f_i − u_j)^2] / [N (f_max − f_min)^2]    (7)

where c is the number of thresholds, R_j is segmented region j, f_i is the gray level of pixel i, u_j is the mean gray level of the pixels in segmented region j, and N, f_max, and f_min are the total number of pixels and the maximum and minimum gray levels of the pixels in the given image, respectively. The value of the uniformity measure lies between zero and one; a higher uniformity indicates a higher quality of the thresholded image. A typical image (Gray21) was tested to illustrate the effect of the MAPSO method. Fig. 4 shows the Gray21 image and its corresponding histogram. This image was selected because it is composed of 21 rectangles of different gray shades and produces a unique histogram with 21 distinct peaks; in addition, these peaks are significantly different from those in the histograms shown in Fig. 1. To provide a visual comparison, the resulting threshold values (for c = 3 and 4) are illustrated through the corresponding thresholded images shown in Figs. 5 and 6. The segmented images become more informative as the number of thresholds increases. From Figs. 5 and 6, it can be seen that the quality of the thresholded images differs greatly between MAPSO and HCOCLPSO (Maitra and Chatterjee, 2008). Table 9 shows a comparison of the uniformity measures obtained by MAPSO and HCOCLPSO for the Gray21 image. The value of U for c = 2 from the MAPSO method is very close to that of HCOCLPSO, implying that the segmentation of the Gray21 image using MAPSO is very close to that of the HCOCLPSO method. On the other hand, the values of U using MAPSO with c = 3 and 4 are slightly higher than those obtained by HCOCLPSO, and we can achieve a maximum of 0.9345 for c = 4. Hence, our method is feasible for finding the optimal thresholds in other image analyses.
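The uniformity measure of Eq. (7) is straightforward to compute once each pixel is assigned to the region determined by the thresholds. A minimal sketch, assuming the image is given as a flat list of gray values (the paper works on the 2-D image directly, and the helper name is ours):

```python
def uniformity(gray_levels, thresholds):
    """Uniformity measure of Eq. (7):
    U = 1 - 2c * sum_j sum_{i in R_j} (f_i - u_j)^2 / (N * (f_max - f_min)^2),
    where the c sorted thresholds partition the gray levels into c + 1
    regions R_0 ... R_c."""
    ts = sorted(thresholds)
    c = len(ts)                                  # number of thresholds
    n = len(gray_levels)
    f_min, f_max = min(gray_levels), max(gray_levels)
    # Assign each pixel to region j = number of thresholds it exceeds.
    regions = [[] for _ in range(c + 1)]
    for f in gray_levels:
        j = sum(f > t for t in ts)
        regions[j].append(f)
    # Sum of squared deviations from each region's mean gray level.
    sq_dev = 0.0
    for region in regions:
        if region:
            u_j = sum(region) / len(region)
            sq_dev += sum((f - u_j) ** 2 for f in region)
    return 1.0 - 2.0 * c * sq_dev / (n * (f_max - f_min) ** 2)
```

A perfectly segmented image (every region internally uniform) gives U = 1, which matches the statement that higher U means higher-quality thresholding.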

5. Conclusion

In this paper, a multi-strategy adaptive particle swarm optimization algorithm is proposed for numerical optimization problems. In the proposed algorithm, a novel diversity-evaluation strategy is introduced to assess the population distribution during the search phases. This strategy is incorporated into a dynamically changing inertia weight to balance exploration and exploitation. More importantly, an elitist learning strategy is used to enhance population diversity and prevent the search from being trapped in a local optimum, which helps to expand the search region at later stages. The performance of the proposed algorithm has been compared with three well-known algorithms on six benchmark problems. Moreover, two minimization problems with asymmetrical distributions in the search space and a standard test image were used to further evaluate the performance of MAPSO. In general, the proposed algorithm clearly outperforms the other algorithms reported in the literature in terms of the chosen set of criteria. Overall, although MAPSO is not the best choice for every kind of complex optimization problem, it requires no prior knowledge, such as differentiability or continuity, when solving practical problems. Another attractive property of MAPSO is that it does not add much computation to the original PSO scheme: the only differences from the original PSO are the diversity measure incorporated into the inertia weight w and the ELS. For future work, we intend


Table 8
Error values achieved at FES = 1e3, FES = 1e4, and FES = 1e5.

FES  Algorithm  Problem  1st (Best)  7th        13th (Median)  19th       25th (Worst)  MV         SV
1e3  MAPSO      F1       6.8692e+2   7.4281e+2  9.0183e+2      2.3943e+3  2.6941e+3     2.0132e+3  6.3243e+2
                F2       7.0173e+2   8.9453e+2  2.3614e+3      2.5624e+3  3.7954e+3     2.3654e+3  5.2487e+2
     IGA        F1       7.2514e+2   8.5147e+2  9.8547e+2      2.9247e+3  3.5471e+3     2.3814e+3  7.3629e+2
                F2       8.2145e+2   9.1023e+2  4.1257e+3      4.6892e+3  8.6954e+3     4.2514e+3  6.0125e+2
     PDE        F1       8.2642e+2   1.2951e+3  3.0214e+3      4.2157e+3  9.5147e+3     4.5612e+3  7.0214e+2
                F2       9.7891e+2   1.5429e+3  4.8457e+3      4.9281e+3  3.7954e+3     6.0214e+3  3.2569e+2
1e4  MAPSO      F1       3.9824e−3   5.2149e−2  7.8433e−2      2.2548e−1  8.2491e−1     6.2541e−2  9.0124e−2
                F2       1.9547e+0   2.0453e+1  2.5473e+1      4.5691e+1  3.0278e+2     3.5841e+1  3.2567e+1
     IGA        F1       5.1474e−3   6.5987e−2  8.6987e−2      5.6278e−1  9.6584e−1     9.9658e−2  1.0247e−1
                F2       2.5671e+0   2.6984e+1  3.6519e+1      5.3691e+1  5.6214e+2     3.1278e+1  2.3254e+1
     PDE        F1       6.0214e−3   7.0215e−2  9.5897e−2      7.6258e−1  3.2459e−1     9.2361e−2  1.2365e−2
                F2       8.5671e−1   5.1281e+0  8.9614e+1      7.6548e+1  5.6214e+2     1.0214e+2  6.2365e+1
1e5  MAPSO      F1       5.2487e−9   8.2193e−9  9.6478e−9      9.8915e−9  1.2546e−8     8.7845e−9  1.0143e−9
                F2       5.9217e−9   8.7742e−9  9.9147e−9      9.9214e−9  2.3456e−8     1.0241e−8  4.0215e−8
     IGA        F1       4.2354e−8   5.9581e−8  7.6589e−8      8.3625e−8  7.3659e−7     6.2314e−8  5.3629e−8
                F2       4.6547e−8   5.6548e−8  7.9658e−8      8.6598e−8  7.9584e−7     7.2518e−8  3.2658e−8
     PDE        F1       5.6245e−8   6.5478e−8  8.0648e−8      9.1241e−8  8.2659e−7     7.0129e−8  3.2658e−8
                F2       6.0928e−8   6.9781e−8  8.2374e−8      9.1034e−8  9.1204e−7     7.2914e−8  5.0129e−8

Fig. 4. (a) Gray21 image (615 × 819) and (b) its histogram.

Fig. 5. Three-level thresholding images obtained using (a) MAPSO and (b) HCOCLPSO.

to test the proposed algorithm and enhance its performance on multi-objective and higher-dimensional optimization problems. It would also be interesting to combine MAPSO with other

Fig. 6. Four-level thresholding images obtained using (a) MAPSO and (b) HCOCLPSO.

heuristic population-based methods such as GA, ACA, and BFA. To some extent, such a combination should offer the merits of the constituent heuristic methods while offsetting their demerits.


Table 9
Uniformity measures for the Gray21 image obtained using MAPSO and HCOCLPSO.

Method     Thresholds         Uniformity
MAPSO      84, 194            0.9212
           57, 126, 190       0.9366
           47, 87, 124, 205   0.9345
HCOCLPSO   83, 194            0.9213
           55, 126, 175       0.9341
           46, 92, 123, 207   0.9327

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant Nos. 61202313 and 61202318), the Natural Science Foundation of Jiangxi Province of China (GJJ13637, 2012BAB201044, and 2013BAB211020), and the Natural Science Foundation of Fujian Province (2012D109).

References

Blackwell, T., Bentley, P.J., 2002. Dynamic search with charged swarms. In: Genetic and Evolutionary Computation Conference. Morgan Kaufmann, pp. 19–26.
Blackwell, T., Branke, J., 2004. Multi-swarm optimization in dynamic environments. In: Applications of Evolutionary Computing, vol. 3005 of Lecture Notes in Computer Science. Springer-Verlag, Berlin, Germany, pp. 489–500.
Blackwell, T., Branke, J., 2006. Multi-swarms, exclusion, and anti-convergence in dynamic environments. IEEE Trans. Evolut. Comput. 10 (4), 459–472.
Carlisle, A., Dozier, G., 2002. Tracking changing extrema with adaptive particle swarm optimizer. In: Proceedings of the World Automation Congress, Orlando, FL, USA, pp. 265–270.
Dorigo, M., Gambardella, L., 1997. Ant colony system: a cooperative learning approach to the traveling salesman problem. IEEE Trans. Evolut. Comput. 1, 53–56.
Dutta, R., Ganguli, R., Mani, V., 2013. Exploring isospectral cantilever beams using electromagnetism inspired optimization technique. Swarm Evolut. Comput. 9, 37–46.
Du, W.L., Li, B., 2008. Multi-strategy ensemble particle swarm optimization for dynamic optimization. Inf. Sci. 178, 3096–3109.
Gudla, P.K., Ganguli, R., 2005. An automated hybrid genetic-conjugate gradient algorithm for multimodal optimization problems. Appl. Math. Comput. 2, 1457–1474.
Ghosh, S., Kundu, D., Suresh, K., Das, S., et al., 2009. On some properties of the lbest topology in particle swarm optimization. In: Proceedings of the 2009 Ninth International Conference on Hybrid Intelligent Systems, pp. 370–375.
Holland, J., 1975. Adaptation in Natural and Artificial Systems. The University of Michigan Press, Ann Arbor.
Kennedy, J., Eberhart, R., 1995. Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia. IEEE Service Center, Piscataway, pp. 1942–1948.


Kathiravan, R., Ganguli, R., 2007. Strength design of composite beam using gradient and particle swarm optimization. Compos. Struct. 4, 471–479.
Lu, S.W., Wang, Z.Q., Shen, J., 2003. Neuro-fuzzy synergism to the intelligent system for edge detection and enhancement. Pattern Recognit. 10, 2395–2409.
Lim, W.H., Isa, N.A.M., 2014. Particle swarm optimization with increasing topology connectivity. Eng. Appl. Artif. Intell. 27, 80–102.
Li, X., Branke, J., Blackwell, T., 2006. Particle swarm with speciation and adaptation in a dynamic environment. In: Genetic and Evolutionary Computation Conference (GECCO'06), pp. 51–58.
Liang, J.J., Qin, A.K., Suganthan, P.N., Baskar, S., 2006. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evolut. Comput. 10 (3), 281–295.
Modares, H., Alfi, A., Sistani, M.-B.N., 2010. Parameter estimation of bilinear systems based on an adaptive particle swarm optimization. Eng. Appl. Artif. Intell. 7, 1105–1111.
Maitra, M., Chatterjee, A., 2008. A hybrid cooperative–comprehensive learning based PSO algorithm for image segmentation using multilevel thresholding. Expert Syst. Appl. 34 (2), 1341–1350.
Passino, K.M., 2002. Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Syst. Mag. 3, 52–67.
Parrott, D., Li, X., 2004. A particle swarm model for tracking multiple peaks in a dynamic environment using speciation. In: Proceedings of the 2004 Congress on Evolutionary Computation, pp. 98–103.
Riget, J., Vesterstrøm, J.S., 2002. A Diversity-guided Particle Swarm Optimizer—the ARPSO. University of Aarhus, Aarhus.
Storn, R., Price, K., 1997. Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 4, 341–359.
Samanta, B., Nataraj, C., 2009. Use of particle swarm optimization for machinery fault detection. Eng. Appl. Artif. Intell. 2, 308–316.
Sun, J., Fang, W., Palade, V., Wu, X.J., et al., 2011. Quantum-behaved particle swarm optimization with Gaussian distributed local attractor point. Appl. Math. Comput. 218, 3763–3775.
Suganthan, P.N., Hansen, N., Liang, J.J., Deb, K., Chen, Y.-P., Auger, A., Tiwari, S., 2005. Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization. Technical Report, Nanyang Technological University, Singapore, and KanGAL Report #2005005, IIT Kanpur, India.
Tang, K.Z., Sun, T.K., Yang, J.Y., 2011a. An improved genetic algorithm based on a novel selection strategy for nonlinear programming problems. Comput. Chem. Eng. 35, 615–621.
Tang, K.Z., Wu, J., 2013. Improved multi-objective differential evolution for maintaining population diversity. In: Proceedings of the 9th International Conference on Natural Computation, Fuxin, Liaoning, China, pp. 528–533.
Tang, K.Z., Yang, J.Y., Chen, H.Y., Gao, S., 2011b. Improved genetic algorithm for nonlinear programming problems. J. Syst. Eng. Electron. 22 (3), 540–546.
Tasoulis, D.K., Pavlidis, N.G., Plagianakos, V.P., 2004. Parallel differential evolution. In: Proceedings of the 2004 Congress on Evolutionary Computation (CEC2004), pp. 2023–2029.
Yang, X.M., Yuan, J.S., Yuan, J.G., Mao, H.N., 2007. A modified particle swarm optimizer with dynamic adaptation. Appl. Math. Comput. 189, 1205–1213.
Yu, H.J., Zhang, L.P., Chen, D.Z., Hu, S.X., 2005. Adaptive particle swarm optimization algorithm based on feedback mechanism. Chin. J. Zhejiang Univ. 39 (9), 1286–1291.
Zhao, X.C., 2010. A perturbed particle swarm algorithm for numerical optimization. Appl. Soft Comput. 10, 119–124.
Zhan, Z.H., Zhang, J., Li, Y., Chung, H.S.H., 2009. Adaptive particle swarm optimization. IEEE Trans. Syst. Man Cybern. B Cybern. 39 (6), 1362–1381.