Co-evolving bee colonies by forager migration: A multi-swarm based Artificial Bee Colony algorithm for global search space


Applied Mathematics and Computation 232 (2014) 216–234


Subhodip Biswas a, Swagatam Das b,*, Shantanab Debchoudhury a, Souvik Kundu a

a Dept. of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata 700 032, India
b Electronics and Communication Sciences Unit, Indian Statistical Institute, Kolkata 700 108, West Bengal, India

Keywords: Artificial Bee Colony algorithm; Foraging; Population-based; Metaheuristics; Migration; No Free Lunch theorem

Abstract: Swarm intelligence algorithms focus on imitating the collective intelligence of a group of simple agents that can work together as a unit. Such algorithms have had a particularly significant impact in fields like optimization and artificial intelligence (AI). This research article focuses on a recently proposed swarm-based metaheuristic called the Artificial Bee Colony (ABC) algorithm and suggests modifications to the algorithmic framework in order to enhance its performance. The proposed ABC variant is referred to as the Migratory Multi-swarm Artificial Bee Colony (MiMSABC) algorithm. Different perturbation schemes of ABC function differently in varying landscapes. Hence, to retain the basic essence of all these schemes, MiMSABC deploys multiple swarm populations, each characterized by a different and unique perturbation strategy. The conventional ABC concept of reinitializing foragers around a depleted food source using a limiting parameter has been avoided. Instead, a performance-based set of criteria has been introduced to detect subpopulations that have shown limited progress towards the global optimum. Once failure is detected in a subpopulation, provisions have been made so that constituent foragers can migrate to a better-performing subpopulation, while maintaining the minimum number of members needed for the successful functioning of a subpopulation. To evaluate the performance of the algorithm, we have conducted a comparative study involving 8 algorithms on the set of 25 benchmark functions proposed in the Special Session of the IEEE Congress on Evolutionary Computation (CEC) 2005. Through a detailed analysis we highlight the statistical superiority of the proposed MiMSABC approach over a set of population-based metaheuristics. © 2013 Elsevier Inc. All rights reserved.

1. Introduction

The field of AI aims at developing, studying, and exploiting potential relations between cognitive science and the theories pertaining to computation. Age-old techniques have mimicked simple concepts of human behavior or biological processes to simplify and solve complex and sophisticated real-life problems. Real-parameter optimization boasts the use of such nature-inspired processes. Many researchers have made use of the fundamentals of evolutionary processes [1] to model population-based systems that can successfully track the optima in a fitness landscape. Others have studied swarm intelligence [2] to deploy the simple characteristic traits of group agents like birds, bees, and ants to successfully perform optimization in domains that are widely varying as well as complex. In the first category, algorithms like Evolutionary Programming [3,4], Evolution Strategy [5,6], Genetic Programming [7,8] and, most commonly, the Genetic Algorithm [9,10] are widely used.

* Corresponding author. E-mail addresses: [email protected] (S. Biswas), [email protected] (S. Das), [email protected] (S. Debchoudhury), [email protected] (S. Kundu).
0096-3003/$ - see front matter © 2013 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.amc.2013.12.023

With the


introduction of Differential Evolution (DE), proposed by Storn and Price [11–13], evolutionary algorithms assumed widespread popularity. Although similar in essence to Genetic Algorithms (GAs), DE credited all population members with equal probability of being selected as a parent unlike GA’s where fitness of the member formed an essential criterion for the selection process of parents. However, at the same time, swarm intelligence as a research topic continued to flourish as well and provided new avenues in the field of optimization. Bonabeau [14] defined Swarm Intelligence as any attempt to design algorithms or distributed problem-solving devices inspired by collective behavior of social insect colonies and other animal societies. While Bonabeau et al. primarily focused on the group behavior of social insects like wasps, bees and termites, a swarm need not necessarily be confined to these. Any group of particles or individuals interacting amongst themselves for co-existence and survival may be classified as a swarm. Thus even an immune system [15] may be considered as a swarm of cells with functional properties. A human form may be assumed as a cluster of molecules which act as a swarm of functional units with vital properties. More common examples range from colony of ants to group of fireflies to flock of birds. The main essence of swarm intelligence lies in the proper inclusion of functional behavior of these group agents into a practical computational model. Thus optimization algorithms have exploited the flashing of fireflies for attraction [16,17], the breeding behavior of the cuckoo bird [18,19], and the motion of ants seeking the shortest path [20,21]. One of the most popular forms of applications of swarm intelligence lies in the Particle Swarm Optimization (PSO) algorithm [22–24] first put forward by Eberhart and Kennedy in 1995. Devised on the modeling of bird flocking or fish schooling, PSO implements a stochastic optimization technique. 
It has found applications in many real life problems [25] with varying complexity and till day, it remains one of the primary interests of researchers in the domain of swarm intelligence. The idea of bees or honeybees as swarm particles was initially put to use in many combinatorial problems in the domain of optimization [26–28]. In the works of Tereshko and Loengarov [29] swarm agents were considered like robots adapting dynamically to changes, without any prior information. The collective enhancement of knowledge of the entire group as a whole, rather than individual successes, determined the benchmark for proper functioning of such a novel idea. The same authors also used the idea of honey bees gathering around a nectar source as well as abandoning of the same in [30,31]. Metaheuristics modeled on honeybees were also proposed in the form of Bee Colony Optimization [32] and Bee Swarm Optimization [33] both in the domain of combinatorial optimization. The BeeHive algorithm [34] by Wedde et al. introduced the idea of foraging zones or network regions. Apart from application in combinatorial problems, two-dimensional numeric functions were optimized by using the Virtual Bee Algorithm (VBA) put forward by Yang [35]. The competitive interaction of bees to seek out a food source laid the foundation for determination of the fitness in the latter algorithm. With a wide set of control parameters, the Bees algorithm (BA) [36] was first put forward by Pham et al. in 2005. Karaboga and Basturk [37] improved upon the BA and devised the ABC algorithm for continuous derivative-free, real-parameter optimization. ABC algorithm has been used widely for optimization in varied domains. The authors of ABC have given thorough comparison of the same with DE, PSO, GA and ES [38–41]. Applications were far and wide in problems involving constraints [42,43], IIR filters [44] and neural networks [45]. 
For single objective problems involving tracking of a global minimum, ABC has also been widely used. Guopu and Kwong introduced the GABC algorithm [46], while Chen, Sarosh and Dong utilized simulated annealing [47] for global optimization problems. In chaotic systems algorithms were devised with various modifications on ABC [48,49]. Problems with multimodal landscapes were also tested using ABC [50]. Apart from single objective, ABC found use in solving multi-objective problems as well [51] thereby enhancing its range of application. However, ABC algorithm suffers from disadvantages in the form of narrow search zone [52] and slow convergence. The method of perturbation used in the classical ABC can be modified to enhance its robustness to a greater set of problems. According to the No Free Lunch theorems [53] for any algorithm, any elevated performance over one class of problems is offset by performance over another class. It has been observed that the mutation operators in EAs [54] are modified to improve the search process keeping in mind the nature of the benchmark problems. Another interesting approach to solving optimization problem is the maintenance of multipopulation models [55–58]. This is done to divide the search space into subspaces, each covering certain number of optima, one of which is the global one. However problem arises in allocating the number of subpopulations due to their individual size. Motivated by these findings, we synergize the idea of multiple populations with a pool of perturbation strategies in our MiMSABC algorithm. Furthermore improvisation in the form of migrating forager population contributes to the versatility of the approach. The migration phase replaces the random scout phase present in the original ABC algorithm that relied on user defined limit. Thus our MiMSABC algorithm is relatively devoid of problem dependent parameter. The content of the rest of the paper is organized as follows. 
The basic ABC algorithm is depicted in Section 2, followed by the essentials of the MiMSABC algorithm in Section 3. The experimental analysis, the study on the components of the algorithm, the validation of several experimental settings, and a discussion on computational complexity are covered thoroughly in Section 4. The paper is concluded in Section 5, where the entire work is summarized and areas for further improvement are mentioned.

2. Artificial Bee Colony algorithm

Bee colonies are hierarchical distributed systems just like other insect societies. Entomologists are of the view that, in spite of the simplicity of individuals, an insect colony presents a highly structured social organization, by virtue of which

bee colonies can accomplish highly complex tasks surpassing the individual capabilities of a single bee. Drawing an analogy from their behavioral pattern and mapping it to the field of optimization led to the birth of a new branch of swarm metaheuristics, the ABC algorithm [59], introduced by Karaboga in 2005. The entire bee colony is in itself a large cluster of specialized individuals, and our main attention is focused on the honey bees entrusted with the duty of finding food sources and collecting nectar from them. Let us consider that the food sources are distributed continuously throughout the solution space and each one has a nectar content associated with it. Each food source has an associated honey-bee forager that continually evaluates its fitness value (nectar content). These food sources represent probable solutions to the objective function that are gradually improved through a combined sequence of actions: positional perturbation, greedy selection, etc. Analogically, the forager workforce represents the members of a population-based metaheuristic. The steps involved in the ABC algorithm are as follows.

2.1. Initialization

The food sources indicating probable solutions to the objective function are represented as θ⃗_i(c), where c is the cycle number and i is a running index that can take values {1, 2, ..., SN} (SN being the number of food sources). The jth component of the ith food source (a real vector) is initialized as

θ_ij(0) = θ_ij^min + rand(0, 1) · (θ_ij^max − θ_ij^min)    (1)

where j ∈ {1, 2, ..., D} for a D-dimensional problem. The food sources continue to improve upon their previous values till the termination criterion is met. A sample run is terminated if the maximum cycle number MCN is traversed or a given accuracy threshold ε is reached. Here, however, we have used a fixed budget of function evaluations (FEs) as the termination criterion.

2.2. Employer bee phase

Each food source has an employed forager associated with it, so the employed-bee population size equals the number of food sources. An employed bee modifies the position of its food source and searches for an adjacent food source. The search for a neighboring food site is executed using the following equation:

v_ij(c) = θ_ij(c) + φ_ij · (θ_kj(c) − θ_ij(c))    (2)

The perturbation is done for a single component j so that local regions are explored. Note that k and i are mutually exclusive, that is, k ≠ i. If the modified value exceeds the search-space bounds, it is reset to the violated bound. The fitness of the newly produced site is then calculated. The fitness function depends on our primary motive, maximization or minimization. Here our objective is minimization, and accordingly the fitness function is designed as

fitness_i = 1 / (1 + f(θ⃗_i))   if f(θ⃗_i) ≥ 0
fitness_i = 1 + |f(θ⃗_i)|       if f(θ⃗_i) < 0    (3)

where f(·) denotes the objective function. Between θ⃗_i and v⃗_i, the fitter one is retained by greedy selection. In case the employed forager fails to improve upon the fitness of its assigned food source, the associated trial counter is incremented by 1; otherwise it is reset to 0.

2.3. Calculating probability

The employed foragers gather information about the nectar content of the food sources and convey it to the waiting onlooker bees. The onlooker bees select a food source based on its nectar content. The chance of selecting a food source is determined by its probability value, computed as

p_i = fitness_i / Σ_{k=1}^{SN} fitness_k    (4)

There is a proportional rise in the onlooker population employed at a fitter food source. 2.4. Onlooker phase This phase is identical to the employer bee phase except the fact that a random number is generated in the range [0, 1] and if this value is less than or equal to probability (calculated above) of a given food source, positional modification using (2) is done. This is carried on till all the onlookers have been allotted a food source. This is followed by identical greedy selection and trial counter values are accordingly updated. Note that in the original ABC algorithm the number of onlooker bees equaled the employed bee population size.
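As a concrete illustration, the neighbor search of Eq. (2), the fitness mapping of Eq. (3), and the onlooker selection probabilities of Eq. (4) can be sketched as follows (a minimal Python sketch; the function names are ours, not from the paper):

```python
import random

def neighbor_source(theta, partner):
    """Perturb one randomly chosen component j of food source `theta`
    using a partner source `partner` (Eq. (2))."""
    v = list(theta)
    j = random.randrange(len(theta))
    phi = random.uniform(-1.0, 1.0)
    v[j] = theta[j] + phi * (partner[j] - theta[j])
    return v

def abc_fitness(f_val):
    """Map an objective value to ABC fitness (Eq. (3)); minimization."""
    return 1.0 / (1.0 + f_val) if f_val >= 0 else 1.0 + abs(f_val)

def selection_probabilities(fitness_values):
    """Roulette-wheel probabilities for the onlooker phase (Eq. (4))."""
    total = sum(fitness_values)
    return [f / total for f in fitness_values]
```

Note that `neighbor_source` changes at most one coordinate, matching the single-component perturbation of the classical ABC.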

2.5. Scout phase

The scout phase was designed to prevent the wastage of computational resources. In case the trial counter for a food source exceeds a pre-defined limit, the source is deemed exhausted, such that no further improvement is possible. In such a case, it is randomly reinitialized using (1), its fitness value is recalculated, and its counter is reset to 0. This is motivated by the fact that repeated exploitation of nectar by a forager exhausts the nectar present in the food source.
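The scout phase amounts to a counter-guarded re-initialization; a minimal sketch, assuming a common scalar bound pair for all dimensions (names are ours):

```python
import random

def scout_phase(population, trials, lower, upper, limit):
    """Reinitialize any food source whose trial counter exceeds `limit`
    via uniform sampling within the bounds (Eq. (1)) and reset its
    counter -- the classical ABC scout phase."""
    for i, t in enumerate(trials):
        if t > limit:
            population[i] = [lower + random.random() * (upper - lower)
                             for _ in range(len(population[i]))]
            trials[i] = 0
    return population, trials
```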

3. Our proposed algorithm

The dynamics of the proposed MiMSABC algorithm is outlined in this section. Before delving into its details, we would like to draw attention to the EA-inspired perturbation strategies, which are discussed for the convenience of the readers. The relative tendency of each strategy towards exploration, exploitation, or a balanced approach is specified. The subpopulations make use of these strategies while exploring the search space. This is identical to allocating different perturbation modes to the subpopulations and gradually shifting foragers to the better-performing ones.

3.1. Modes of positional perturbation

In the original ABC, Eq. (2) is used for bringing about the positional change. But this method is restricted to narrow search basins and its performance degrades as the dimension of the problem rises. Inspired by DE, we propose a set of perturbation strategies that follows the nomenclature given below. Taking into account the number of food sites selected and the nature of the food sources involved, the nomenclature describes a perturbation strategy as ABC—{s}{d}{dp}. {s} represents the nature of the employed food source and can be sto, src or fit depending on whether a stochastically selected source, the current source, or the fittest one in the current cycle is chosen as the starting point of the forager perturbation. {d} denotes the destination food source towards which the forager guides its motion and can take the same values as {s}. When {s} and {d} would carry the same symbol, it is written only once inside the curly braces. {dp} denotes the number of difference vectors formed by pairs of food sources; generally it takes the value 1 or 2. Although manifold formulations of perturbation modes are possible, we restrict ourselves to the four implemented here in addition to the basic ABC—{src}{sto}{1} defined in (2). In the notations that follow:

(i) ABC—{sto}{1}: Well balanced between exploration and exploitation, this perturbation mode has a greater search range than (2) and was found to be suitable for a greater variety of test functions. It can be formulated as

v⃗_i(c) = θ⃗_r1(c) + φ_i · (θ⃗_r2(c) − θ⃗_r3(c))    (5)

It was found to be a good variant for both unimodal and multimodal test functions.

(ii) ABC—{sto}{fit}{1}: An exploitative perturbation mode, this strategy guides randomly chosen food sources towards the fittest member of the current cycle. It is preferred for unimodal functions only, since its strong exploitation makes it prone to trapping in local optima, which inhibits its use in multimodal basins. It can be written as

v⃗_i(c) = θ⃗_r1(c) + φ_i · (θ⃗_fittest(c) − θ⃗_r2(c))    (6)

(iii) ABC—{src}{fit}{2}: This strategy is formulated by combining the above two modes. While it ensures exploitation, its explorative tendency is greater than that of ABC—{sto}{fit}{1} but less than that of ABC—{sto}{1}, owing to the extra pair of randomly chosen food sources. It is a good choice for a fair share of problems. Its notation is given as

v⃗_i(c) = θ⃗_i(c) + φ_i · (θ⃗_fittest(c) − θ⃗_i(c)) + φ_i · (θ⃗_r1(c) − θ⃗_r2(c))    (7)

(iv) ABC—{sto}{2}: On adding an extra difference vector to (5) we get the following form:

v⃗_i(c) = θ⃗_r1(c) + φ_i · (θ⃗_r2(c) − θ⃗_r3(c) + θ⃗_r4(c) − θ⃗_r5(c))    (8)
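Under the simplifying assumption that a common scalar φ scales each difference pair, the four modes of Eqs. (5)–(8) can be sketched as follows (the mode labels are our shorthand for the paper's notation):

```python
import random

def perturb(mode, pop, i, fittest):
    """Generate a candidate for forager i under one of the four MiMSABC
    perturbation modes (Eqs. (5)-(8)). `pop` is a list of food-source
    vectors and `fittest` is the index of the best source this cycle."""
    D = len(pop[i])
    phi = random.uniform(-1.0, 1.0)
    # distinct partner indices, all different from the running index i
    r = random.sample([k for k in range(len(pop)) if k != i], 5)
    if mode == "sto-1":       # Eq. (5)
        return [pop[r[0]][j] + phi * (pop[r[1]][j] - pop[r[2]][j])
                for j in range(D)]
    if mode == "sto-fit-1":   # Eq. (6)
        return [pop[r[0]][j] + phi * (pop[fittest][j] - pop[r[1]][j])
                for j in range(D)]
    if mode == "src-fit-2":   # Eq. (7)
        return [pop[i][j] + phi * (pop[fittest][j] - pop[i][j])
                + phi * (pop[r[0]][j] - pop[r[1]][j]) for j in range(D)]
    if mode == "sto-2":       # Eq. (8)
        return [pop[r[0]][j] + phi * (pop[r[1]][j] - pop[r[2]][j]
                + pop[r[3]][j] - pop[r[4]][j]) for j in range(D)]
    raise ValueError(mode)
```

The sampling of five distinct partners requires a population of at least six foragers, which the paper's population sizes easily satisfy.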

The added difference vector incorporates additional explorative ability into the forager population. In Eqs. (5)–(8), the indices r1, r2, ... denote mutually exclusive food sources, all different from the running index i.

3.2. MiMSABC algorithm

The proposed algorithm is initialized through the basic framework of a standard ABC algorithm, as shown in (1). This involves scattering the bees randomly within the permissible limits of the search range, after which the fitness of the locations is calculated. The subsequent stages of the MiMSABC algorithm deal with the introduction of two features that effectively define its performance in single-objective landscapes.

3.2.1. Multipopulation-based grouping of forager members

The classical ABC algorithm utilizes a fixed perturbation scheme. In our proposed algorithm, however, the bee colony comprising SN foragers is equally divided into n groups, each perturbed by a different mode. Thus enough randomness in behavior is guaranteed, ensuring effective exploration and, if needed, effective exploitation of the neighboring food sources. We randomly group the members as follows:

rnd_ind = randperm(SN)    (9)

sub_i = θ⃗( rnd_ind( (i − 1) · (SN/n) + 1 : i · (SN/n) ), : )    (10)
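Eqs. (9)–(10) amount to a random permutation followed by contiguous slicing; a sketch, assuming SN is divisible by n (as in the settings SN = 100, n = 4 used later):

```python
import random

def random_grouping(SN, n):
    """Randomly partition forager indices {0, ..., SN-1} into n equal
    groups, mirroring Eqs. (9)-(10): randperm then contiguous slices."""
    perm = random.sample(range(SN), SN)   # randperm(SN)
    size = SN // n
    return [perm[g * size:(g + 1) * size] for g in range(n)]
```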

This implies that initially the foragers are randomly chosen and assigned to groups, with each group having its characteristic perturbation scheme. The random grouping method ensures that unbiased attention is given to the foragers. Akay and Karaboga [60] recently proposed some modifications to the existing ABC framework for real-parameter optimization, focusing on the importance of controlling the frequency of perturbation through a new parameter, the modification rate MR. The underlying idea behind the modified ABC (m-ABC) was to facilitate better search in complex, higher-dimensional search spaces, as opposed to classical ABC, which perturbs a single parameter. The following technique was adopted in m-ABC:

v_ij^C = θ_ij^C + φ_ij · (θ_kj^C − θ_ij^C)   if rand(0, 1) ≤ MR
v_ij^C = θ_ij^C                              otherwise    (11)

The impact of the introduced control parameters was studied individually and a detailed analysis was made. The modified ABC was able to improve upon the existing classical ABC framework for highly multimodal, composite, and rotated problems. Greedy selection follows the evaluation of the fitness function. The original ABC dealt with positional perturbation, evaluation, and selection on an orderly basis: every food source was modified, evaluated, and compared before moving on to the next one. This made the newly obtained sites available for perturbation in the present cycle and helped to accelerate convergence. But when using the parameter MR for the same purpose, a check must be in place to prevent trapping in local optima. This is achieved by a synchronous model [61] of updating the positions. The basic steps involved in the process are:

- Identify the group to which the forager member belongs and note the characteristic perturbation scheme for that group.
- Check if a randomly chosen number between 0 and 1 is less than the frequency-controlling modification rate MR.
- If the test returns success, go for perturbation, i.e., search for nectar around neighboring search spaces. If, on the contrary, it fails, no perturbation is undertaken and the forager member remains satisfied with the current nectar source.
- The fitness is calculated and compared with the previous fitness, i.e., the memory of the previously held nectar amount.
- If the new nectar location provides an equal or better food source in terms of fitness-based quality, the forager member shifts to the new source and its current nectar quantity is held in memory, replacing the previous one. The value of trial is reset to zero for every such successful discovery.
- However, if the new location proves to be inferior in quality, the forager member retains the previously held position and its nectar information. The value of trial is incremented for each such failure to procure a new food source for the subpopulation.

The idea is summarized below in Algorithm 1.

Algorithm 1. Synchronous update of forager positions

for k = 1 : n    // n: number of strategies used
    for i = 1 : SN
        v⃗_i(c) = Strategy_k(θ⃗_i)  ∀ θ⃗_i ∈ sub_k,  if rand[0, 1] < MR
        v⃗_i(c) = θ⃗_i(c)                           otherwise
    endfor
endfor
for i = 1 : SN    // synchronous update of position and fitness
    if f(v⃗_i(c)) ≤ f(θ⃗_i(c))    // minimization problem
        θ⃗_i(c) ← v⃗_i(c); f(θ⃗_i(c)) ← f(v⃗_i(c)); trial_i ← 0;
    else
        trial_i ← trial_i + 1;
    endif
    Compute fitness_i using Eq. (3)
endfor
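The synchronous model of Algorithm 1 first generates all candidates and only then applies greedy selection in a single pass; a sketch of that selection pass (candidate generation and the MR gate are assumed to happen upstream; names are ours):

```python
def synchronous_update(pop, f_vals, trials, candidates, objective):
    """Greedy selection of Algorithm 1 applied synchronously: every
    candidate is evaluated against the position held at the start of
    the cycle; ties are accepted (minimization)."""
    new_f = [objective(v) for v in candidates]
    for i in range(len(pop)):
        if new_f[i] <= f_vals[i]:
            pop[i], f_vals[i], trials[i] = candidates[i], new_f[i], 0
        else:
            trials[i] += 1
    return pop, f_vals, trials
```

Because all objective evaluations use the positions held at the start of the cycle, no forager's move influences another's within the same cycle, which is the point of the synchronous model.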

3.2.2. Migration based on the performance of a subpopulation

As the number of cycles increases, the modes will be perceived as relatively poor, moderate, or better performing. The factors that effectively judge the performance are discussed later in this section. In case of a poorly performing forager population, its members will migrate towards a superior mode in the hope of excavating improved fitness basins and thereby reaching the optimal solution. The main reason behind the introduction of the migration scheme is that it effectively removes the possibility of wasting function evaluations. To judge the various perturbation modes relative to one another, ranks are allotted to them based on statistical evaluation. The performance of a given scheme in both the employer and onlooker phases is taken into account to devise an adaptive rank-based migration strategy. The steps for the statistical ranking include:

- Calculate the sum of fitness of each subpopulation characterized by a perturbation scheme. Divide it by the number of members of the subpopulation to determine the average fitness avg_fit for the subpopulation.
- Determine the best fitness value achieved in the subpopulation and denote it by sub_bestfit. This gives an estimate of the best food source excavated by the subgroup of foragers.
- Determine the total number of foragers for which the value of trial is 0. In other words, for each concluded cycle, estimate the number of forager bees that have successfully discovered a better neighboring food site by virtue of the characteristic perturbation scheme of the subpopulation. This value is denoted by the success value of the subpopulation for the given cycle.
- The final performance of the subpopulation is a deterministic function of both avg_fit and sub_bestfit, given by

perf_i = w · avg_fit_i + (1 − w) · sub_bestfit_i    (12)

It should be noted that a low value of avg_fit and sub_bestfit, which is ideal, implies a low value of perf, which might misleadingly suggest poor performance. However, the situation is quite the opposite: a low value of perf necessarily means high performance. Hence the ranking is done in ascending order of perf, and the best rank is given to the subpopulation with the lowest value of perf. Algorithm 2 elucidates the statistical evaluation scheme.

Algorithm 2. Statistical evaluation and ranking

Apply Algorithm 1
for i = 1 : n
    sum_fit_i ← 0; sub_bestfit_i ← 0; success_i ← 0;
    for j = 1 : size(sub_i)
        if fitness_j > sub_bestfit_i
            sub_bestfit_i ← fitness_j;
        endif
        if trial_j == 0
            success_i ← success_i + 1;
        endif
        sum_fit_i ← sum_fit_i + fitness_j;
    endfor
    avg_fit_i ← sum_fit_i / size(sub_i);
    perf_i ← w · avg_fit_i + (1 − w) · sub_bestfit_i;
endfor
(perf, rank) ← sort(perf);    // ascending order
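Following Algorithm 2's conventions (ranks are assigned in ascending order of perf, so rank 1 goes to the lowest perf), the evaluation can be sketched in Python (the function and variable names are ours):

```python
def rank_subpopulations(fitness, trials, groups, w):
    """Per-subpopulation statistics of Algorithm 2: average fitness,
    best fitness, success count, and ranks in ascending order of
    perf = w*avg_fit + (1-w)*sub_bestfit."""
    perf, success = [], []
    for members in groups:
        fits = [fitness[i] for i in members]
        avg_fit = sum(fits) / len(fits)
        sub_bestfit = max(fits)
        success.append(sum(1 for i in members if trials[i] == 0))
        perf.append(w * avg_fit + (1 - w) * sub_bestfit)
    order = sorted(range(len(groups)), key=lambda g: perf[g])
    rank = [0] * len(groups)
    for pos, g in enumerate(order):
        rank[g] = pos + 1          # rank 1 = lowest perf
    return perf, rank, success
```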

The initial value of the weight factor w_o is set to 0.8, and w is varied as

w = w_o − 0.6 · (e^(m·(C/MCN)) − 1) / (e^m − 1)    (13)

where MCN denotes the maximum cycle number and C denotes the current cycle number. The suitability of and requirement for such a functional variation are discussed in detail in Section 4.6. The limit parameter used in ABC is problem dependent and needs careful tuning to obtain suitable results. Whenever a food source is excavated beyond further modification and the number of failures to procure a new source exceeds this limiting value, the scout phase is activated with re-initialization. This problem is mitigated by a migratory strategy that involves the transfer of forager population members: a member of a subpopulation functioning unsatisfactorily migrates to a better-performing subpopulation in the hope of improvement. The steps involved in the process of migration include:

- Compare every subpopulation with every other subpopulation. For any pair of subpopulations under comparison, check the values of rank and success attained previously.
- A subpopulation is considered better performing if and only if its rank is lower and its value of success is not less than its competitor's. Simultaneous satisfaction of both these factors is essential, as together they give an overall view of the progress of a subpopulation in a particular cycle.
- The fitness-wise worst member of the worse-performing subpopulation is noted.
- At this stage the algorithm checks the pop_limit value for the worse-performing population. If the latter contains more members than this limiting value, normal migration takes place: the worst-performing bee is transferred to the better-performing population.
- If, on the other hand, the size of the worse-performing population just equals the limiting value, simple migration is prohibited, as the minimum constituent requirement for the subpopulation would be breached. In this case a mutual replacement takes place between the two compared subpopulations.
The process sees a swap between the two worst-performing bees in the two subpopulations. In this way the limiting condition is maintained and every subpopulation retains a minimum number of members to fully utilize the search for better food sources. Algorithm 3 elucidates the migration scheme that is implemented in place of the scout phase.

Algorithm 3. Adaptive migration of the forager population

for i = 1 : n − 1
    for j = i + 1 : n
        if (rank_i ≥ rank_j) && (success_j ≥ success_i)
            mig_i ← worstmember(sub_i);
            if size(sub_i) > pop_limit
                sub_j ← sub_j ∪ mig_i; sub_i ← sub_i \ mig_i;
            else
                interchange(mig_j, mig_i);
            endif
        endif
    endfor
endfor
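Algorithm 3 can be sketched as follows; we assume lower fitness marks the worse forager and that a migrating forager leaves its source group (the container names are ours):

```python
def migrate(subpops, ranks, successes, fitness_of, pop_limit=10):
    """Adaptive migration of Algorithm 3: for each ordered pair (i, j),
    if subpopulation i is doing no better (rank_i >= rank_j) and j has
    at least as many successes, move i's worst forager to j, or swap
    the two worst foragers when i is already at pop_limit."""
    n = len(subpops)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if ranks[i] >= ranks[j] and successes[j] >= successes[i]:
                worst_i = min(subpops[i], key=fitness_of)
                if len(subpops[i]) > pop_limit:
                    subpops[i].remove(worst_i)   # plain migration
                    subpops[j].append(worst_i)
                else:                            # mutual replacement
                    worst_j = min(subpops[j], key=fitness_of)
                    subpops[i].remove(worst_i)
                    subpops[j].remove(worst_j)
                    subpops[i].append(worst_j)
                    subpops[j].append(worst_i)
    return subpops
```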

In the adaptive migration strategy, migration occurs only if two conditions are simultaneously satisfied: the candidate subpopulation must have a rank no better than that of the compared subpopulation, and the number of food sources successfully passed on to the next cycle by the compared subpopulation must be at least as large as its own. During migration, we prefer to transfer the worst member to the better-performing subpopulation, provided the size of the source subpopulation does not fall below a certain threshold (pop_limit = 10). In such cases, we interchange the worst members of the two subpopulations. Hence the recent history of both subpopulations is taken into account before migration, thereby contributing to its adaptive nature. Having illustrated our MiMSABC approach, we shall verify it experimentally in the following section.

4. Experiments and analysis

4.1. Benchmark set

We have used the standard benchmark problems that appeared in the IEEE CEC 2005 competition [62]. The set contains a wide variety of problems encompassing a broad range of complexity. The problems are based on well-known classical functions like Rastrigin's, Rosenbrock's, Griewank's, Schwefel's and Ackley's functions. These basic problems have been combined with the incorporation of multimodality, ruggedness, interdependence, noise, non-separability and other forms of complexity. Functions f1–f5 are unimodal, f6–f12 are multimodal, and f13–f25 are listed as hybrid composition functions. The huge number of local optima and the presence of narrow basins and of optima on the bounds are some of the challenging features of this experimental set. We report our results for 30-D and 50-D problems. The budget of function evaluations is set to D·1e+4 for a D-dimensional problem, which amounts to 3e+5 for 30-D problems and 5e+5 for 50-D problems. The algorithm is stopped as soon as the aforesaid condition is reached; the least functional value attained by the population over the entire process is marked as the overall best functional value, and the corresponding vector as the obtained global optimum.

4.2. Algorithms compared and set-up

The algorithms used for comparison are given below along with their respective parametric set-ups.

1. BA [36]: n = 50, m = 15, e = 5, nep = nsp = 25, ngh = 0.3.
2. ABC [40]: SN = 100, limit = 200.
3. m-ABC [60]: SN = 100, limit = 200, MR = 0.4.
4. SPC-PNX [63]: g = 2.0, k = 1, NREP = 2, N tuned.
5. DMS-L-PSO [64]: ω = 0.729, c1 = c2 = 1.4944, R = 10, L = 100.
6. GSO [65]: a = round(√(D + 1)), ranger/population = 0.2.
7. DE [11,66]: F = 0.5, Cr = 0.5.

Interested readers are referred to the cited papers for a fuller understanding of the algorithms used. The parametric settings of our proposed MiMSABC are: colony size = 200 = 2 × SN, MR = 0.4, pop_limit = 10, n = 4, m = 10. These parameters have been kept similar to those of the ABC variants so that all are evaluated against common standards, with the purpose of highlighting the enhancement induced by the multi-population strategy and the adaptive migration control. The associated parameter w is varied as per (13).

4.3. Simulation environment

The tabulated test results were obtained from 50 independent runs of each competing algorithm. The host platform used is:

1. Language: MATLAB (version R2012a).
2. Processor: Intel(R) Core(TM) i5-2450M.
3. Speed: 2.50 GHz.
4. Memory: 4.00 GB (usable 3.90 GB).
5. Operating System: Microsoft Windows 7 Home Premium SP1.
6. System Type: 64-bit.
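Pulling Sections 4.1–4.3 together, the evaluation protocol (a budget of D × 1e+4 function evaluations per run, 50 independent runs, best-of-the-run error) can be sketched as below. The `sphere` objective and the random-search loop are placeholder stand-ins for illustration only, not the CEC 2005 functions or MiMSABC itself.

```python
import random
import statistics

def budget(dim):
    """Function-evaluation budget per run: D * 1e+4 (3e+5 for 30-D, 5e+5 for 50-D)."""
    return dim * 10_000

def sphere(x):
    """Placeholder objective; the benchmark set uses shifted/rotated CEC 2005 functions."""
    return sum(v * v for v in x)

def run_once(objective, dim, f_star=0.0, seed=0, evals=None):
    """One run: spend the evaluation budget, return the best-of-the-run error f(best) - f*.
    Random search stands in for the optimizer under test."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(evals if evals is not None else budget(dim)):
        x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        best = min(best, objective(x))
    return best - f_star

def mean_error(objective, dim, runs=50, evals=None):
    """Mean and standard deviation of the error over independent runs, as in Tables 1-3."""
    errs = [run_once(objective, dim, seed=s, evals=evals) for s in range(runs)]
    return statistics.mean(errs), statistics.stdev(errs)
```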

4.4. Evaluation criteria

The following criteria are used for evaluation.

1. Mean error: Each algorithm is executed for a pre-defined number of runs and the error value is recorded in each case; the mean of the error values is reported along with the standard deviation.
2. Box plot: A graphical way of comparing algorithm performance by depicting groups of numerical data (the error values obtained after termination) through their quartiles. Please refer to Figs. 1 and 2.


S. Biswas et al. / Applied Mathematics and Computation 232 (2014) 216–234

3. Wilcoxon's test: Statistical significance is evaluated by applying the Wilcoxon rank-sum test [67]. Cases where the null hypothesis is rejected at the 5% significance level and MiMSABC exhibits superior performance are marked "+"; cases where it is rejected at the same level and MiMSABC exhibits inferior performance are marked "−"; "=" marks no significant performance difference between MiMSABC and the best performing algorithm.
4. Bar diagrams have also been used; please refer to Fig. 5.

4.5. Performance analysis

Based on the experimental data in Tables 1 and 2, we can infer that the incorporation of the multi-strategic subpopulation brings about a significant improvement in the performance of ABC and also mitigates the problem of tuning the control parameter limit. The least mean error values are obtained by the proposed MiMSABC approach in 19 and 20 instances out

Table 1
Mean, standard deviation and Wilcoxon's rank-sum test for best-of-the-run error values of 30D problems. (Best entries are marked in boldface.)

Rows, top to bottom: f1–f25, followed by the (+/=/−) summary row. Columns, in the order of the data blocks that follow (each entry: mean with standard deviation in parentheses): BA, mABC, ABC, SPC-PNX, DMS-PSO, GSO, DE, MiMSABC.

1.2039e+04 (2.0345e+03) 1.4174e+04 (3.4927e+03) 1.1211e+08 (3.3961e+07) 1.9014e+04 (4.3193e+03) 1.5160e+04 (1.5160e+03) 2.0273e+08 (8.0305e+07) 4.6558e+03 (9.8388e+02) 2.0838e+01 (7.0835e+00) 1.0168e+02 (9.2401e+00) 1.4611e+02 (1.0738e+01) 1.2343e+01 (0.7666e+00) 1.0792e+05 (2.5222e+04) 2.9522e+01 (6.6815e+00) 4.4644e+01 (1.1032e+00) 8.1065e+02 (5.4466e+01) 4.7357e+02 (7.4595e+01) 1.2743e+03 (4.5238e+01) 1.2324e+03 (6.0562e+01) 1.2621e+03 (6.0223e+01) 1.4374e+03 (3.0647e+01) 1.2080e+03 (8.0619e+01) 1.4401e+03 (2.2291e+01) 1.3724e+03 (4.7672e+01) 2.0130e+03 (3.6293e+01) 5.6624e+02 (6.5717e+01) 15/7/3

4.6334e24 (5.2573e25) 4.3172e17 (3.6379e15) 2.2971e+05 (1.9675e+05) 2.3012e+05 (2.0123e+04) 6.0934e+03 (8.2951e+02) 1.5621e+02 (6.4385e+01) 1.2946e+02 (7.7856e+01) 2.1019e+01 (3.2167e+00) 7.3421e+01 (1.4212e+01) 2.1924e+02 (1.8453e+01) 4.2198e+01 (2.1456e+01) 1.0065e+05 (1.6372e+04) 1.1231e+01 (1.8674e+00) 1.3464e+01 (9.4465e+00) 2.9023e+02 (3.2316e+01) 3.2316e+02 (2.2167e+01) 3.0176e+02 (2.1276e+01) 8.8423e+02 (6.6789e+01) 8.7267e+02 (5.7845e+01) 8.7969e+02 (6.2786e+01) 7.2376e+02 (1.7843e+02) 9.0654e+02 (4.8938e+01) 8.3102e+02 (6.6704e+01) 2.0165e+02 (1.6708e+00) 2.0001e+02 (3.8712e02)

1.0075e09 (1.9454e08) 8.5451e+03 (1.7258e+03) 5.0903e+07 (1.3437e+07) 1.2255e+04 (2.0365e+03) 8.0021e+03 (1.1184e+03) 3.7644e+07 (1.5149e+07) 6.1255e+03 (1.0194e+02) 2.0957e+01 (3.5087e02) 1.1542e+02 (1.2343e+01) 2.1940e+02 (1.1478e+01) 3.2407e+01 (1.6009e+00) 1.5916e+05 (4.3912e+04) 5.7836e+01 (1.4627e+01) 1.2774e+01 (1.8352e01) 4.8678e+02 (7.1429e+01) 2.8733e+02 (3.6337e+01) 3.4402e+02 (7.8310e+01) 9.6592e+02 (7.5363e+00) 9.6456e+02 (6.0133e+00) 9.6980e+02 (4.5616e+00) 7.2374e+02 (4.7423e++01) 1.0487e+03 (1.6308e+01) 7.1178e+02 (3.5533e+01) 5.5351e+02 (3.9123e+01) 5.7227e+02 (4.2111e+01)

1.2324e08 (4.6482e10) 5.9327e07 (2.1491e06) 1.0121e+06 (3.8211e+05) 7.3125e05 (8.4775e06) 4.2435e+03 (1.2572e+03) 1.5197e+01 (1.5013e+01) 1.4598e02 (1. 3021e02) 2.1039e+01 (3.7586e02) 2.4014e+01 (6.2501e+00) 1.0302e+02 (4.1126e+01) 2.1315e+01 (3.3029e+00) 1.3221e+04 (1.2016e+04) 3.6021e+00 (1.1227e+00) 1.2981e+01 (2.7017e01) 3.6452e+02 (8.9458e+01) 2.4723e+02 (3.6214e+01) 3.5411e+02 (3.8912e+01) 9.0390e+02 (1.4312e+00) 9.0427e+02 (1.0792e+00) 9.0517e+02 (1.1027e+00) 5.0000e+02 (0.0000e+00) 8.8125e+02 (1.6213e+01) 5.3416e+02 (3.4895e04) 2.0000e+02 (0.0000e+00) 2.1321e+02 (6.2752e01)

5.3531e16 (1.9724e15) 1.1577e07 (7.5924e08) 1.6343e+06 (3.9247e+06) 2.4667e+03 (4.0458e+02) 2.2162e+03 (8.3144e+02) 4.6850e+01 (1.3222e+00) 7.0349e03 (4.5371e03) 2.0340e+01 (2.3029e04) 1.7591e+01 (3.0222e+00) 1.7512e+02 (5.2783e+01) 2.7238e+01 (1.5652e+00) 2.5541e+04 (2.8903e+03) 2.3595e+00 (5.1763e01) 1.1872e+01 (4.2406e01) 3.4512e+02 (5.1182e+01) 2.3287e+02 (1.1283e+02) 2.4519e+02 (7.3247e+01) 9.1053e+02 (1.5761e+00) 9.1060e+02 (1.3383e+00) 9.0189e+02 (3.0719e+01) 7.8521e+02 (1.3592e+02) 1.5838e+03 (5.1014e+02) 9.5321e+02 (2.4353e+02) 4.2656e+02 (1.9956e+02) 5.3219e+02 (4.1203e+01)

5.9627e+02 (4.8747e+02) 1. 3427e+04 (6.2434e+03) 8.9741e+06 (3.1279e+06) 3.1267e+04 (7.8321e+03) 1.5436e+04 (2.4943e+03) 1.8038e+03 (9.0139e+02) 4.7174e+03 (7.0142e01) 2.0467e+01 (1.1178e01) 7.4517e+01 (8.1907e+00) 3.2311e+02 (3.1621e+01) 3.2973e+01 (1.7651e+00) 5.9168e+04 (2.8719e+04) 8.0041e+00 (5.4107e+00) 1.3036e+01 (3.1453e01) 5.7023e+02 (7.6125e+01) 4.7463e+02 (6.8722e+01) 5.7211e+02 (5.9712e+01) 1.1045e+03 (2.5102e+01) 1.1024e+03 (1.8121e+01) 1.1092e+03 (5.3121e+01) 1.2678e+03 (3.1570e+01) 1.2775e+03 (1.3242e+02) 1.2671e+03 (4.2938e+00) 1.3302e+03 (1.1016e+01) 1.3602e+03 (3.6015e+01)

2.4315e23 (1.2954e28) 5.3374e23 (3.3472e25) 5.2187e+05 (3.3786e+05) 2.3468e02 (3.8704e+00) 1.4704e02 (2.8904e03) 2.3034e+00 (1.7904e+00) 4.7002e+03 (8.0123e04) 2.1041e+01 (3.8045e02) 1.4235e+02 (3.0453e+01) 2.0763e+02 (8.9620e+01) 3.8794e+01 (3.9628e+00) 2.3423e+04 (1.9389e+04) 1.5398e+01 (6.8267e01) 1.3658e+01 (2.375e02) 2.8945e+02 (1.0266e+02) 2.3203e+02 (4.5767e+01) 2.3568e+02 (4.846e+01) 9.0453e+02 (1.5761e01) 9.0393e+02 (2.5761e01) 9.0563e+02 (1.0461e01) 5.8740e+02 (7.5646e+01) 8.6143e+02 (1.7244e+01) 5.6786e+02 (2.4895e+02) 9.8371e+02 (1.3543e+02) 1.6547e+03 (5.9551e+01)

6.5643e27 (+) (3.7854e32) 3.3342e33 (+) (6.7391e32) 2.0016e+05 (+) (4.3649e+04) 2.0007e05 (=) (1.4155e04) 2.1666e+02 () (1.2322e+01) 1.6744e+00 (+) (1.0326e+00) 9.0330e+01 () (8.2115e+01) 2.0033e+01 (=) (1.2316e02) 1.2322e08 (+) (1.3165e08) 7.8531e+01 (+) (2.3516e+01) 9.9544e+00 (+) (3.0023e+00) 2.6665e+02 (+) (1.0333e+02) 2.4833e+00 (=) (1.1131e02) 3.2323e+00 (+) (1.8912e02) 2.0000e+02 (+) (0.0000e+00) 1.9921e+02 (+) (2.3105e+01) 1.2362e+02 (+) (4.3288e+01) 8.4223e+02 (+) (3.1445e+01) 8.3447e+02 (+) (2.3164e+01) 8.3912e+02 (=) (2.3576e+00) 7.1668e+02 () (2.2471e+02) 8.4530e+02 (+) (1.5431e+01) 5.3404e+02 (=) (1.2688e06) 2.0000e+02 (=) (0.0000e+00) 2.0000e+02 (=) (0.0000e+00)


Table 2
Mean, standard deviation and Wilcoxon's rank-sum test for mean best-of-the-run error values of 50D problems. (Best entries are marked in boldface.)

Rows, top to bottom: f1–f25, followed by the (+/=/−) summary row. Columns, in the order of the data blocks that follow (each entry: mean with standard deviation in parentheses): BA, mABC, ABC, SPC-PNX, DMS-PSO, GSO, DE, MiMSABC.

9.2831e+04 (6.5098e+03) 1.1876e+05 (1.0826e+04) 1.8389e+08 (3.8172e+07) 1.5168e+05 (2.1908e+04) 4.1723e+04 (2.3694e+03) 6.2126e+09 (1.0602e+09) 3.5723e+03 (3.1661e+02) 2.1277e+01 (6.5900e02) 4.9132e+02 (2.5978e+01) 7.7140e+02 (5.3153e+01) 4.6363e+01 (1.2783e+00) 2.0967e+06 (1.9040e+05) 8.4211e+02 (2.0694e+02) 1.4333e+01 (1.3030e01) 1.2115e+03 (3.7699e+01) 1.0376e+03 (7.2542e+01) 1.1388e+03 (7.0046e+01) 1.3646e+03 (2.6968e+01) 1.3772e+03 (2.3988e+01) 1.3673e+03 (4.0274e+01) 1.5299e+03 (2.9546e+01) 1.7601e+03 (5.8328e+01) 1.4839e+03 (3.8892e+01) 1.5263e+03 (3.0725e+01) 1.9275e+03 (1.6677e+01) 17/7/1

3.6754e18 (0.0000e+00) 1.3675e15 (5.3341e23) 1.1245e+06 (1.0234e+05) 4.0234e+02 (3.7569e+02) 9.8963e+03 (8.7608e+02) 2.5321e+03 (1.2248e+03) 1.0867e+00 (6.9843e01) 2.1204e+01 (5.6493e02) 2.6326e+02 (2.4387e+02) 4.7648e+02 (2.3457e+01) 8.1432e+01 (1.8672e+01) 9.3268e+05 (1.0223e+05) 3.7632e+01 (3.1186+00) 2.4124e+01 (2.2667e+00) 3.9643e+02 (2.7582e+02) 3.5857e+02 (4.0672e+01) 3.2134e+02 (2.1045e+01) 1.0004e+03 (1.9834e+02) 9.7375e+02 (3.2845e+00) 9.7254e+02 (2.2854e+00) 8.9632e+02 (2.8745e+02) 8. 8564e+02 (2.2845e+01) 6.0845e+02 (4.4387e+01) 2.0134e+02 (2.8034e+00) 2.0198e+02 (6.6548e01)

3.9586e05 (5.4419e02) 2.9570e02 (3.8134e08) 1.5518e+07 (2.7136e+06) 4.1402e+04 (2.8410e+03) 1.4334e+04 (5.8657e+02) 2.5675e+03 (4.5336e+01) 8.2911e+00 (1.1576e02) 2.1142e+01 (2.5157e02) 2.6679e+02 (2.3175e+01) 4.4697e+02 (1.0893e+01) 6.4269e+01 (1.5541e+00) 8.3994e+05 (9.1289e+04) 5.4768e+02 (1.6313e+02) 2.2629e+01 (1.2810e01) 4.7895e+02 (9.1882e+00) 3.3201e+02 (3.6514e+01) 3.9659e+02 (5.6166e+01) 1.0383e+03 (1.0493e+01) 1.0320e+03 (7.6347e+00) 1.0322e+03 (1.0295e+01) 9.2271e+02 (4.9654e+01) 1.0964e+03 (8.9899e+00) 9.0514e+02 (5.9692e+01) 1.2241e+03 (3.0745e+01) 2.2187e+02 (1.8899e+00)

4.3654e07 (7. 4325e08) 3.7682e04 (8.4391e06) 1.3425e+07 (3.8211e+06) 3.3125e02 (2.2545e04) 5.2614e+03 (3.3712e+03) 1.5197e+01 (1.5013e+01) 1.4598e02 (1. 3021e02) 2.1039e+01 (3.7586e02) 2.4014e+01 (6.2501e+00) 3.0302e+02 (4.1126e+01) 1.1315e+01 (3.3029e+00) 1.3221e+04 (1.2016e+04) 3.6021e+00 (1.1227e+00) 1.2981e+01 (2.7017e01) 3.9452e+02 (8.9458e+01) 2.4723e+02 (3.6214e+01) 2.7321e+02 (4.3213e+01) 9.9593e+02 (4.6342e+00) 9.9473e+02 (2.1194e+01) 9.7522e+02 (5.4322e+00) 5.0000e+02 (0.0000e+00) 9.8245e+02 (6.5632e+01) 8.4457e+02 (6.4045e+01) 2.0000e+02 (0.0000e+00) 2.0215e+02 (5.2752e+00)

6.5274e08 (3.6483e10) 1. 5457e02 (7.5294e03) 4.6723e+06 (3.9547e+06) 5.4667e+04 (6.5328e+04) 5.5432e+03 (1.5564e+03) 4.6850e+01 (1.3222e+00) 7.0349e03 (4.5371e03) 2.0340e+01 (2.3029e04) 1.7591e+01 (3.0222e+00) 3.7512e+02 (5.2783e+00) 2.7238e+01 (1.5652e+00) 2.5541e+04 (2.8903e+02) 2.3595e+00 (5.1763e01) 1.1872e+01 (4.2406e01) 3.9512e+02 (5.1182e+01) 3.2187e+02 (1.1283e+02) 5.3549e+02 (3.6267e+01) 9.9143e+02 (3.4721e+00) 1.1360e+03 (4.6832e+01) 9. 8219e+02 (5.7231e+01) 9.4531e+02 (2.3472e+02) 1.4348e+03 (4.3414e+02) 1.2514e+03 (3.3183e+02) 9.2648e+02 (2.4326e+02) 8.4329e+02 (8.7863e+01)

1.6580e+04 (7.3408e+03) 6.0926e+04 (2.3285e+04) 7.7343e+07 (3.5464e+07) 9.2658e+04 (7.1483e+03) 2.3662e+04 (2.2227e+03) 8.4898e+08 (1.2879e+09) 6.2055e+03 (3.3784e+00) 2.0676e+01 (4.0057e02) 3.0171e+02 (2.7324e+01) 6.2579e+02 (1.4198e+02) 6.8566e+01 (3.0434e+00) 7.6201e+05 (2.5427e+05) 4.0614e+01 (1.2514e+01) 2.2102e+01 (8.2407e01) 8.0007e+02 (1.3666e+02) 4.8300e+02 (1.1806e+02) 6.3521e+02 (7.7593e+01) 1.2158e+03 (3.7003e+01) 1.2279e++03 (2.6464e+01) 1.2539e+03 (3.0142e+01) 1.3431e+03 (3.5842e+01) 1.3941e+03 (2.5470e+01) 1.3515e+03 (3.4894e+01) 9.8529e+02 (6.1248e+01) 9.0153e+02 (4.9213e+01)

4.8246e28 (7.7226e36) 3.9602e23 (9.4207e22) 5.4214e+07 (1.2132e+07) 1.2303e+04 (3.1252e+03) 2.8133e+03 (7.0941e+02) 4.2131e+01 (1.2182e+01) 6.2025e+03 (4.6142e02) 2.1091e+01 (3.3032e01) 2.5424e+02 (1.1321e+01) 3.7463e+02 (1.7658e+01) 7.2635e+01 (1.3224e+00) 2.122e+06 (5.6327e+05) 3.3216e+01 (1.6536e+00) 2.3401e+01 (1.5436e01) 2.6627e+02 (3.1239e+01) 2.6885e+02 (1.5321e+01) 3.0502e+02 (2.3857e+01) 9.5521e+02 (7.0217e01) 9.5494e+02 (1.033e+00) 9.4543e+02 (5.5353e01) 1.0034e+03 (1.0201e+00) 9.0621e+02 (4.0567e+00) 1.0033e+03 (1.0229e+00) 1.0395e+03 (6.3247e+00) 1.6878e+03 (2.5941e+00)

6.5271e37 (+) (7.8557e56) 8.3611e32 (+) (7.5156e29) 1.0222e+06 (=) (2.5599e+05) 2.1489e02 (+) (1.6137e04) 1.0043e+03 (+) (2.5962e+02) 3.2642e+01 () (1.2648e+01) 5.3255e06 (+) (2.3206e04) 2.0611e+01 (=) (1.2100e+00) 1.5631e08 (+) (2.3489e09) 1.0361e+02 (+) (2.3448e+01) 9.9461e+00 (+) (1.0374e+00) 1.0884e+04(+) (2.0344e+03) 2.4130e+00(=) (1.2517e+00) 1.0691e+01 (=) (3.4839e+00) 2.0000e+02 (+) (3.0236e08) 2.1088e+02 (+) (2.3116e+01) 1.3277e+02 (+) (6.3592e+01) 8.8962e+02 (+) (2.6411e+02) 8.8729e+02 (+) (3.2934e+02) 8.8872e+02 (+) (3.6234e+02) 5.0000e+02 (=) (0.0000e+00) 8.9948e+02 (=) (2.3451e+02) 5.9434e+02 (+) (2.9135e+01) 2.0000e+02 (=) (0.0000e+00) 2.0000e+02 (+) (0.0000e+00)

of possible 25 cases in the 30D and 50D problems respectively. It must be noted that integrating our suggested framework into ABC brings a marked enhancement in its optimizing ability, as reported by the Wilcoxon rank-sum test. The test results show that our algorithm is not hampered by increased dimensional complexity: while it achieves success in 15 cases (indicated by "+") for the 30D problems, it does even better on the 50D set with 17 best results out of 25 among the contending algorithms. The emphasis is on 'best' since in several cases, such as f24 and f25, MiMSABC performs on the same level as the best algorithm, and we have refrained from using that superlative where the results are statistically similar. The main strength of MiMSABC is that it uses historical data to identify the best performing strategy for a given benchmark and adaptively migrates the population towards the better mode of perturbation; no deterministic transfer of population is enacted. Another advantage is the absence of the parameter tuning that the general ABC framework requires. Our


proposed algorithm manages to outperform powerful EAs like basic DE and SPC-PNX, which have reported competitive performances on the CEC 2005 benchmark set. Among the bee-based algorithms, MiMSABC successfully outperforms BA, ABC and m-ABC in the majority of test instances across both dimensionalities. This can be attributed to the fact that, although MiMSABC and m-ABC both make use of the parameter MR, m-ABC lacks the multi-strategic approach. For function f5 the problem landscape, as seen in [62], is highly conducive to parallel search agents that rely on effective exploitation. It is thus expected that algorithms like DE would succeed on it, which is evident from the 30D results. However, DE fails to repeat that success in the 50D case due to the curse of dimensionality. DMS-PSO, which also employs multiple swarms, succeeds in both the 30D and 50D environments on f13, a shifted combination of Griewank's and Rosenbrock's functions. In fact MiMSABC attains only the second rank among

Table 3
Mean, standard deviation and Wilcoxon's rank-sum test for best-of-the-run error values in various algorithm configurations in the scout phase for 50D benchmark problems. (Best entries are marked in boldface.)

Rows, top to bottom: f1–f25, followed by the (+/=/−) summary row. Columns, in the order of the data blocks that follow (each entry: mean with standard deviation in parentheses): MiMSABC w/c, MiMSABC w/s, MiMSABC.

2.9364e19 (2.1985e10) 1.2658e08 (2.9354e10) 3.9623e+07 (2.1238e+06) 1.6921e+02 (1.3649e+01) 4.9325e+03 (2.3561e+03) 9.2156e+01 (2.1568e+01) 6.8454e+03 (2.3641e10) 1.0091e+02 (2.3648e+01) 1.6982e+01 (9.3554e+00) 2.6548e+04 (1.2551e+03) 6.3569e+01 (6.3259e+01) 6.4846e+04 (2.3641e+04) 1.2363e+01 (6.3688e+00) 5.3698e+01 (3.3633e+01) 5.5556e+02 (2.3618e+02) 2.8267e+02 (1.3694e+02) 3.1689e+02 (1.269e+02) 9.3645e+02 (4.3329e+02) 9.3811e+02 (5.2284e+02) 9.4002e+02 (1.2322e+02) 8.3541e+02 (2.3112e+02) 1.0032e+03 (6.3112e+02) 8.1129e+02 (1.0036e+02) 9.9116e+02 (2.0015e+02) 1.1099e+03 (4.6612e+02) 20/4/1

2.39685e13 (2.6581e14) 2.39685e13 (2.6581e14) 4.2397e+06 (1.3692e+05) 2.9637e02 (2.3691e03) 1.9634e+03 (2.3544e+02) 2.8924e+01 (1.3519e+01) 3.3641e+03 (2.3642e12) 2.1235e+01 (2.6943e+00) 1.2684e+01 (2.355e+00) 1.5965e+02 (3.8912e+01) 8.1593e+00 (2.2541e+00) 1.2366e+04 (2.3615e+03) 8.9638e+00 (1.2365e01) 2.8881e+01 (7.9322e+00) 4.8552e+02 (1.3328e+02) 2.3684e+02 (1.2227e+02) 2.4887e+02 (1.1115e+02) 9.4862e+02 (2.3399e+02) 9.5012e+02 (3.1452e+02) 9.4747e+02 (3.3915e+02) 6.8226e+02 (2.1163e+02) 9.9356e++02 (4.3321e+02) 7.2366e+02 (1.3302e+02) 2.0000e+02 (0.0000e+00) 2.0000e+02 (0.0000e+00)

6.5271e37 (+) (7.8557e56) 8.3611e32 (+) (7.5156e29) 1.0222e+06 (+) (2.5599e+05) 2.1489e02 (+) (1.6137e04) 1.0043e+03 (+) (2.5962e+02) 3.2642e+01 () (1.2648e+01) 5.3255e06 (+) (2.3206e04) 2.0611e+01 (+) (1.2100e+00) 1.5631e08 (+) (2.3489e09) 1.0361e+02 (+) (2.3448e+01) 9.9461e+00 (=) (1.0374e+00) 1.0884e+04(+) (2.0344e+03) 2.4130e+00(+) (1.2517e+00) 1.0691e+01 (=) (3.4839e+00) 2.0000e+02 (+) (3.0236e08) 2.1088e+02 (+) (2.3116e+01) 1.3277e+02 (+) (6.3592e+01) 8.8962e+02 (+) (2.6411e+02) 8.8729e+02 (+) (3.2934e+02) 8.8872e+02 (+) (3.6234e++02) 5.0000e+02 (+) (0.0000e+00) 8.9948e+02 (+) (2.3451e+02) 5.9434e+02 (+) (2.9135e+01) 2.0000e+02 (=) (0.0000e+00) 2.0000e+02 (=) (0.0000e+00)


Fig. 1. Box plots showing mean error along with extent of deviation for 30D problems f1–f25.

the contending algorithms for this function. The use of multiple agents is highly influential on this problem, since a shifted landscape demands diversity, which both of these algorithms ensure. SPC-PNX, a real-parameter genetic algorithm, shows appreciable results on the 50D hybrid composition functions, where landscape complexity is very high. GSO, another biologically inspired model, fails in most cases. Experiments have also shown that the synchronous mode of update helps preserve diversity, and better results have been achieved with it in most cases. It is thus seen that hybridizing multiple ideas, namely information sharing, diversity preservation and improved exploitation, has contributed to the overall statistical superiority of MiMSABC. In Table 3, three frameworks are compared to justify the use of migration control.
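The "+/=/−" marks reported alongside the tables can be generated by a rank-sum comparison of two algorithms' best-of-the-run errors. The following is a minimal pure-Python sketch using the normal approximation (reasonable for 50 runs per algorithm; no tie-variance correction); it is illustrative, not the exact test implementation used in the paper.

```python
import math

def _ranks(values):
    """Average ranks (1-based), with tied values sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def rank_sum_mark(a, b, z_crit=1.96):
    """'+' if sample a has significantly lower errors than b at the 5% level,
    '-' if significantly higher, '=' otherwise (Wilcoxon rank-sum, normal
    approximation, suitable for samples of ~50 runs each)."""
    n, m = len(a), len(b)
    ranks = _ranks(list(a) + list(b))
    w = sum(ranks[:n])                  # rank sum of the first sample
    mu = n * (n + m + 1) / 2            # mean of W under the null hypothesis
    sigma = math.sqrt(n * m * (n + m + 1) / 12)
    z = (w - mu) / sigma
    if z < -z_crit:
        return "+"
    if z > z_crit:
        return "-"
    return "="
```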


Fig. 2. Box plots showing mean error along with extent of deviation for 50D problems f1–f25.

Box plots of the errors obtained are shown in Figs. 1 and 2 for the entire benchmark set f1–f25 in both the 30D and 50D cases. The vertical axis shows the difference between the achieved best functional value and the actual optimum. The red lines mark the mean error, while the blue boxes indicate the extent of deviation. From the experimental section it can be inferred that simulating population migration, as observed in living organisms, within the algorithmic framework not only strengthens its resemblance to swarm behavior but also offers a novel approach for improving the performance of nature-inspired metaheuristics.


Fig. 3. Variation of subpopulation members due to migration with progress for 30D functions f2, f4, f9, f14, f17 and f22.

4.6. Validation of algorithmic components

The constituent features of MiMSABC are dissected in this section, with validation and justification for each design choice.

4.6.1. Effectiveness of forager migration

To establish the effectiveness of forager migration, we compare three algorithmic configurations:

a. MiMSABC w/c: scout phase deactivated.
b. MiMSABC w/s: re-initialization scout phase as in basic ABC, with limit = 200.
c. MiMSABC: forager migration enabled in the scout phase (the proposed approach).
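Configuration (c) can be sketched as follows. The exact failure criterion and migration bookkeeping are defined earlier in the paper and not reproduced in this excerpt, so the rule used here, the worst-scoring subpopulation donates one forager to the best-scoring one, is an assumed simplification; pop_limit is enforced as described in Section 4.7.

```python
import random

POP_LIMIT = 10   # minimum foragers a subpopulation must retain (Section 4.7)

def migrate(subpops, scores, pop_limit=POP_LIMIT):
    """Move one forager from the worst-scoring subpopulation to the
    best-scoring one, never shrinking a subpopulation below pop_limit.

    subpops: dict mapping a strategy id to its list of foragers.
    scores:  dict mapping a strategy id to its performance score
             (lower is better, e.g. the weighted avg_fit/bestfit measure).
    Returns True if a migration happened.  MiMSABC's actual failure test
    is history-based; this ranking rule is only an illustration."""
    ranked = sorted(subpops, key=lambda s: scores[s])
    best, worst = ranked[0], ranked[-1]
    if worst == best or len(subpops[worst]) <= pop_limit:
        return False                       # nothing to migrate
    forager = subpops[worst].pop(random.randrange(len(subpops[worst])))
    subpops[best].append(forager)
    return True
```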


The three configurations are simulated for 50 sample runs each and the results are tabulated in Table 3. The rank-sum test result (+/=/−) of 20/4/1 indicates that the forager migration model performs significantly better than its counterparts on 20 of the possible 25 instances, a testament to the superiority of the migration framework. Random scouts can improve the performance of the multi-swarm model, but because re-initialization may occur prematurely or too late, a parameter-free approach is preferable. Migration, moreover, enables the transfer of information between parallel swarms, which is useful in the long run. The scheme of random scouts is suited to low-dimensional cases (such as the 3D space encountered by living organisms); in a much vaster hyperspace (30D and 50D) the chance of a randomly searching scout unearthing a new basin of attraction is almost negligible, and random scout simulation fails. That the migration scheme functions effectively is shown by the population distribution plots of the 30D functions f2, f4, f9, f14, f17 and f22. Since the size of a more effective subpopulation, characterized by a particular perturbation scheme, grows as the optimization progresses, the subpopulation holding the greatest number of swarm agents at the end of the process is inferred to have been the most successful. From Fig. 3 we can see that the growth in each perturbation scheme's share of the total swarm varies with the function: no single perturbation scheme can be adjudged the best in every case, in accordance with the No Free Lunch theorem [53]. The subpopulation-based approach, with its pool of perturbation schemes and the tendency to migrate, lets the algorithm adjust to the nature of the problem.
This justifies our approach of including more than one scheme with the objective of balancing exploration and exploitation.

4.6.2. Suitability of the weight factor formula for subpopulation performance

The performance of a subpopulation in a particular cycle is determined by two contributing values: avg_fit, the average fitness of its members, and bestfit, the best fitness among all the foragers in that subpopulation. The contribution of the former is weighted by a factor w that decreases with the cycle number; correspondingly, bestfit is weighted by 1 − w, so that its influence grows as the cycles progress. The decrease of w is given by (13), where m = 10 and the starting value w0 is set to 0.8. The curves obtained for different values of m are shown in Fig. 4; the black curve (third from the top) corresponds to m = 10 as proposed in this paper. The set of curves shows that the weight factor decreases with increasing cycle count, with the steepness of the descent varying with m. As m decreases the curve approaches linearity. When m exceeds 10 the curve hugs the outer envelope of the constant initial weight w0 = 0.8, with a sudden, abrupt decrease near the end of the cycles. Applying these observations to our algorithm, m = 10 provides the most suitable conditions. The weight factor w sets the proportion between the contributions of the average fitness and the best fitness of the subpopulation: if w weights avg_fit, then 1 − w is the contribution of bestfit.
If we rely too heavily on avg_fit as the criterion for evaluating a subpopulation's performance, we emphasize the explorative capability of the forager population, since a suitably low avg_fit is obtained only when the summed fitness values of all members are low, i.e. the total nectar content is high enough. This rewards a varied set of foragers holding potentially suitable nectar sources, which is nothing but exploration. Conversely, an evaluation biased towards a low bestfit would make the algorithm over-reliant on exploitation.
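Eq. (13) itself is not reproduced in this excerpt; the form below is an assumed reconstruction chosen only to match the behavior described above (w starts at w0 = 0.8, stays near w0 for large m with an abrupt late drop, and approaches a linear decay as m shrinks):

```python
def weight(cycle, max_cycles, w0=0.8, m=10):
    """Assumed form of Eq. (13): w = w0 * (1 - (cycle/max_cycles)**m).
    Large m keeps w near w0 until late cycles (exploration-heavy start);
    small m makes the decay nearly linear."""
    return w0 * (1.0 - (cycle / max_cycles) ** m)

def subpop_score(avg_fit, best_fit, cycle, max_cycles):
    """Performance measure of a subpopulation: w weights the average
    fitness (exploration), 1 - w weights the best fitness (exploitation)."""
    w = weight(cycle, max_cycles)
    return w * avg_fit + (1.0 - w) * best_fit
```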

Fig. 4. Variation of w with increasing number of cycles.


Fig. 5. Mean error values for different values of pop_limit corresponding to 30D functions f1, f2, f3, f6, f10, f14, f17, f20, f22 and f24, shown in column diagrams.


In any optimization problem, too much of either exploration or exploitation is harmful, and m = 10 strikes the right balance for proper functioning. When m is lower than 10 (the curves below the black one in Fig. 4) the curve approaches linearity, meaning exploration and exploitation are weighted almost equally; but if exploitation sets in too early, the risk of stagnation in local optima remains. Conversely, if m is more than 10, exploitation sets in too late and computational labor is wasted, possibly leading to unsuccessful optimization. For m = 10 the most suitable conditions are met: exploitation starts at the right moment towards the end, after thorough exploration, and the desired results follow.

4.7. Suitability of the population limit parameter in case of migration

In the subpopulation-based migration concept used in MiMSABC, a minimum subpopulation strength of 10 is always maintained; this is referred to as pop_limit. Initially, the 100 food sources (SN = 100) are equally subdivided into 4 subgroups according to the 4 perturbation schemes of ABC. The migration scheme manages the scout phase: forager members not functioning satisfactorily migrate to another subpopulation. A scenario may well arise, however, in which successive migrations leave a subpopulation with very few members. The perturbation schemes of ABC draw on randomly chosen members, so a subpopulation with fewer members has curtailed chances of exploring productive food sources. This may cripple certain perturbation schemes, defeating the entire motive of the subpopulation approach. We therefore fix a value that keeps a minimum number of members present in every subpopulation. If this number is too small, the detrimental effects mentioned above occur.
On the other hand, keeping too many members handicaps the algorithm in terms of the computational time allotted, besides limiting the effectiveness of the migration scheme. A proper value must therefore be chosen, and experimentation shows that 10 meets the requirements for pop_limit. The column diagrams in Fig. 5, in order, show the comparative performance of the migratory scheme with pop_limit values of 5, 7, 10, 12, 15 and 20 on functions f1, f2, f3, f6, f10, f14, f17, f20, f22 and f24. The charts graphically establish the superior performance obtained with pop_limit = 10, shown by the green bar in each plot. Mean error values are plotted along the vertical axis, and the bolded caption atop each graph displays the corresponding function number. In every case the green bar is the shortest, indicating the least error and validating this choice.

4.8. Computational effort

Compared to the basic ABC framework, the MiMSABC approach makes use of the parameter MR, which causes it to scan through the D parameters of each vector representing a food source. Including both the employed and onlooker phases followed by the synchronous update, the algorithmic complexity is O(MCN × CS × D) for a colony of CS D-dimensional vectors running for MCN cycles. The scout phase employs a sorting step that incurs worst-case complexity O(n²) for n subpopulations; here n = 4 (n² < D), so no significant overhead is incurred. The ratio of the computation time of MiMSABC to that of ABC, measured over simulation runs on the host platform, has median values of 1.0028 and 1.0032 for the 30D and 50D problems respectively, which is nominal for practical purposes.

5.
Conclusions

A biologically inspired ABC model has been introduced in this paper that synergizes multiple forager populations with different perturbation techniques. The perturbation schemes have varying degrees of suitability and effectiveness in different landscapes, and each subpopulation of foragers is demarcated by a unique perturbation scheme that remains fixed through the course of the optimization process. The migratory scheme helps allot more population members to the better performing strategies and also facilitates information sharing. Performance is measured through a time-varying balance between individual and collective swarm behavior, an idea that addresses the drawbacks of the parameter-dependent re-initialization performed once a food source is depleted. Through the proposed algorithm we aim to alleviate the observed shortcomings of the ABC framework without adversely affecting its basic idea. At every juncture, justification with figures has been provided to validate the necessity of each algorithmic component. Statistical tests have been conducted to outline the wide suitability of MiMSABC, and comparisons made under identical conditions with other state-of-the-art algorithms further support our claim that the proposed scheme functions as a powerful global optimizer. The model of migration proposed here holds potential and can be extended to other population-based EAs. Investigating migration tendencies through clustering, and developing a co-evolutionary framework through adaptation and information sharing among swarms, are interesting directions for future work.

References

[1] A.E. Eiben, J.E. Smith, Introduction to Evolutionary Computing, Springer, 2003.
[2] R.C. Eberhart, Y. Shi, J. Kennedy, Swarm Intelligence, Morgan Kaufmann, 2001.


[3] L.J. Fogel, A.J. Owens, M.J. Walsh, Artificial Intelligence Through Simulated Evolution, John Wiley & Sons, New York, NY, 1966.
[4] M. Ji, H. Yang, Y. Yang, Z. Jin, A single component mutation evolutionary programming, Appl. Math. Comput. 215 (10) (2010) 3759–3768.
[5] H.P. Schwefel, Kybernetische Evolution als Strategie der experimentellen Forschung in der Strömungstechnik (Master's thesis), Technical University of Berlin, Germany, 1965.
[6] Y. Liang, K.S. Leung, Evolution strategies with exclusion-based selection operators and a Fourier series auxiliary function, Appl. Math. Comput. 74 (2) (2006) 1080–1109.
[7] J.R. Koza, Genetic programming: a paradigm for genetically breeding populations of computer programs to solve problems, Technical Report STAN-CS-90-1314, Stanford University Computer Science Department, 1990.
[8] A. Vincent Antony Kumar, P. Balasubramaniam, Optimal control for linear singular system using genetic programming, Appl. Math. Comput. 192 (1) (2007) 78–89.
[9] J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI, 1975.
[10] K. Deep, K.P. Singh, M.L. Kansal, C. Mohan, A real coded genetic algorithm for solving integer and mixed integer optimization problems, Appl. Math. Comput. 212 (2) (2009) 505–518.
[11] R. Storn, K. Price, Differential evolution – a simple and efficient adaptive scheme for global optimization over continuous spaces, Technical Report, International Computer Science Institute, Berkeley, 1995.
[12] M. Ali, M. Pant, A. Abraham, Unconventional initialization methods for differential evolution, Appl. Math. Comput. 219 (9) (2013) 4474–4494.
[13] L. Wang, F. Huang, Parameter analysis based on stochastic model for differential evolution algorithm, Appl. Math. Comput. 217 (7) (2010) 3263–3273.
[14] E. Bonabeau, M. Dorigo, G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press, New York, NY, 1999.
[15] L.N. De Castro, F.J. Von Zuben, Artificial immune systems, part I: basic theory and applications, Technical Report RT DCA 01/99, FEEC/UNICAMP, Brazil, 1999.
[16] X.-S. Yang, Firefly algorithm, stochastic test functions, and design optimization, Int. J. Bio-Inspired Comput. 2 (2) (2010) 78–84.
[17] A.H. Gandomi, X.-S. Yang, S. Talatahari, A.H. Alavi, Firefly algorithm with chaos, Commun. Nonlinear Sci. Numer. Simul. 18 (1) (2013) 89–98.
[18] X.-S. Yang, S. Deb, Cuckoo search via Lévy flights, in: World Congress on Nature & Biologically Inspired Computing, 2009, pp. 210–214.
[19] R. Rajabioun, Cuckoo optimization algorithm, Appl. Soft Comput. 11 (8) (2011) 5508–5518.
[20] M. Dorigo, M. Birattari, T. Stützle, Ant colony optimization – artificial ants as a computational intelligence technique, IEEE Comput. Intell. Mag. 1 (4) (2006) 28–39.
[21] O. Baskan, S. Haldenbilen, H. Ceylan, H. Ceylan, A new solution algorithm for improving performance of ant colony optimization, Appl. Math. Comput. 211 (1) (2009) 75–84.
[22] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, 1995, pp. 1942–1948.
[23] R. Thangaraj, M. Pant, A. Abraham, P. Bouvry, Particle swarm optimization: hybridization perspectives and experimental illustrations, Appl. Math. Comput. 217 (12) (2011) 5208–5226.
[24] Y. Jiang, T. Hu, C.C. Huang, X. Wu, An improved particle swarm optimization algorithm, Appl. Math. Comput. 193 (1) (2007) 231–239.
[25] D. Hu, A. Sarosh, Y.-F. Dong, An improved particle swarm optimizer for parametric optimization of flexible satellite controller, Appl. Math. Comput. 217 (21) (2011) 8512–8521.
[26] P. Lucic, D. Teodorović, Transportation modeling: an artificial life approach, in: ICTAI, 2002, pp. 216–223.
[27] D. Teodorović, Transport modeling by multi-agent systems: a swarm intelligence approach, Transp. Plann. Technol. 26 (4) (2003) 89–96.
[28] K. Benatchba, L. Admane, M. Koudil, Using bees to solve a data-mining problem expressed as a max-sat one, in: Artificial Intelligence and Knowledge Engineering Applications: A Bioinspired Approach, First International Work-Conference on the Interplay Between Natural and Artificial Computation (IWINAC 2005), Las Palmas, Canary Islands, Spain, June 15–18, 2005.
[29] V. Tereshko, Reaction–diffusion model of a honeybee colony's foraging behaviour, in: M. Schoenauer (Ed.), Parallel Problem Solving from Nature VI, Lecture Notes in Computer Science, vol. 1917, Springer-Verlag, Berlin, 2000, pp. 807–816.
[30] V. Tereshko, T. Lee, How information mapping patterns determine foraging behaviour of a honeybee colony, Open Syst. Inform. Dyn. 9 (2002) 181–193.
[31] V. Tereshko, A. Loengarov, Collective decision-making in honeybee foraging dynamics, Comput. Inform. Syst. J. 9 (3) (2005).
[32] D. Teodorović, M. Dell'Orco, Bee colony optimization – a cooperative learning approach to complex transportation problems, in: 10th EWGT Meeting, Poznan, 13–16 September 2005.
[33] H. Drias, S. Sadeg, S. Yahi, Cooperative bees swarm for solving the maximum weighted satisfiability problem, in: Computational Intelligence and Bioinspired Systems, Lecture Notes in Computer Science, vol. 3512, 2005, pp. 318–325.
[34] H.F. Wedde, M. Farooq, Y. Zhang, BeeHive: an efficient fault-tolerant routing algorithm inspired by honeybee behavior, in: Ant Colony Optimization and Swarm Intelligence, 4th International Workshop, ANTS 2004, Brussels, Belgium, September 5–8, 2004.
[35] X.-S. Yang, Engineering optimizations via nature-inspired virtual bee algorithms, in: Artificial Intelligence and Knowledge Engineering Applications: A Bioinspired Approach, Lecture Notes in Computer Science, vol. 3562, Springer, Berlin/Heidelberg, 2005, pp. 317–323.
[36] D.T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, M. Zaidi, The bees algorithm, Technical Note, Manufacturing Engineering Centre, Cardiff University, UK, 2005.
[37] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, J. Global Optim. 39 (3) (2007) 459–471.
[38] B. Basturk, D. Karaboga, An artificial bee colony (ABC) algorithm for numeric function optimization, in: IEEE Swarm Intelligence Symposium, Indianapolis, Indiana, USA, May 2006.
[39] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, J. Global Optim. 39 (3) (2007) 459–471.
[40] D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Appl. Soft Comput. 8 (1) (2008) 687–697.
[41] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Appl. Math. Comput. 214 (1) (2009) 108–132.
[42] D. Karaboga, B. Basturk, Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems, in: Foundations of Fuzzy Logic and Soft Computing, LNCS, vol. 4529, Springer-Verlag, 2007, pp. 789–798.
[43] E.M. Montes, O.C. Domínguez, Empirical analysis of a modified artificial bee colony for constrained numerical optimization, Appl. Math. Comput. 218 (22) (2012) 10943–10973.
[44] N. Karaboga, A new design method based on artificial bee colony algorithm for digital IIR filters, J. Franklin Inst. 346 (4) (2009) 328–348.
[45] D. Karaboga, B. Basturk Akay, C. Ozturk, Artificial bee colony (ABC) optimization algorithm for training feed-forward neural networks, in: Modeling Decisions for Artificial Intelligence, LNCS, vol. 4617, Springer-Verlag, 2007, pp. 318–329.
[46] G. Zhu, S. Kwong, Gbest-guided artificial bee colony algorithm for numerical function optimization, Appl. Math. Comput. 217 (7) (2010) 3166–3173.
[47] S.M. Chen, A. Sarosh, Y.F. Dong, Simulated annealing based artificial bee colony algorithm for global numerical optimization, Appl. Math. Comput. 219 (8) (2012) 3575–3589.
[48] W. Gao, S. Liu, F. Jiang, An improved artificial bee colony algorithm for directing orbits of chaotic systems, Appl. Math. Comput. 218 (7) (2011) 3868–3879.
[49] F. Gao, F. Fei, Q. Xu, Y. Deng, Y. Qi, I. Balasingham, A novel artificial bee colony algorithm with space contraction for unknown parameters identification and time-delays of chaotic systems, Appl. Math. Comput. 219 (2) (2012) 552–568.
[50] S. Biswas, S. Kundu, S. Das, A.V. Vasilakos, Information sharing in bee colony for detecting multiple niches in non-stationary environments, in: C. Blum (Ed.), Proceedings of the Fifteenth Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '13 Companion), Amsterdam, The Netherlands, July 6–10, ACM, New York, NY, USA, 2013, pp. 1–2, http://dx.doi.org/10.1145/2464576.2464588.
[51] R. Akbari, R. Hedayatzadeh, K. Ziarati, B. Hassanizadeh, A multi-objective artificial bee colony algorithm, Swarm Evol. Comput. 2 (2012) 39–52.

[52] P.-W. Tsai, J.-S. Pan, B.-Y. Liao, S.-C. Chu, Enhanced artificial bee colony optimization, Int. J. Innovative Comput. Inf. Control 5 (12) (2009).
[53] D.H. Wolpert, W.G. Macready, No free lunch theorems for optimization, IEEE Trans. Evol. Comput. 1 (1) (1997) 67–82.
[54] H.G. Beyer, E. Brucherseifer, W. Jakob, H. Pohlheim, B. Sendhoff, T.B. To, Evolutionary algorithms – terms and definitions, 2002.
[55] C. Li, S. Yang, A general framework of multipopulation methods with clustering in undetectable dynamic environments, IEEE Trans. Evol. Comput. 16 (2012) 556–577.
[56] X. Li, Adaptively choosing neighborhood bests using species in a particle swarm optimizer for multimodal function optimization, in: Proc. Genetic and Evolutionary Computation Conference, 2004, pp. 105–116.
[57] S. Yang, C. Li, A clustering particle swarm optimizer for locating and tracking multiple optima in dynamic environments, IEEE Trans. Evol. Comput. 14 (6) (2010) 959–974.
[58] S. Yang, X. Yao, Experimental study on population-based incremental learning algorithms for dynamic optimization problems, Soft Comput. 9 (11) (2005) 815–834.
[59] D. Karaboga, An idea based on honey bee swarm for numerical optimization, Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Dept., 2005.
[60] B. Akay, D. Karaboga, A modified artificial bee colony algorithm for real parameter optimization, Inf. Sci. 192 (2012) 120–142.
[61] B. Dorronsoro, P. Bouvry, Improving classical and decentralized differential evolution with new mutation operators and population topologies, IEEE Trans. Evol. Comput. 15 (1) (2011) 67–98.
[62] P.N. Suganthan, N. Hansen, J.J. Liang, K. Deb, Y.P. Chen, A. Auger, S. Tiwari, Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization, Nanyang Technological University, Singapore, and IIT Kanpur, Kanpur, India, KanGAL Rep. #2005005, May 2005.
[63] P.J. Ballester, J. Stephenson, J.N. Carter, K. Gallagher, Real-parameter optimization performance study on the CEC-2005 benchmark with SPC-PNX, in: Proc. IEEE Congress on Evolutionary Computation, 2005, pp. 498–505.
[64] J.J. Liang, P.N. Suganthan, Dynamic multi-swarm particle swarm optimizer with local search, in: Proc. IEEE Congress on Evolutionary Computation, 2005, pp. 522–528.
[65] S. He, Q.H. Wu, J.R. Saunders, Group search optimizer: an optimization algorithm inspired by animal searching behavior, IEEE Trans. Evol. Comput. 13 (5) (2009) 973–991.
[66] S. Das, P.N. Suganthan, Differential evolution – a survey of the state-of-the-art, IEEE Trans. Evol. Comput. 15 (1) (2011) 4–31.
[67] J. Derrac, S. García, D. Molina, F. Herrera, A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm Evol. Comput. 1 (1) (2011) 3–18.