Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions


Fei Kang, Junjie Li, Zhenyue Ma
Faculty of Infrastructure Engineering, Dalian University of Technology, Dalian 116024, China

Article history: Received 9 April 2010; Received in revised form 9 April 2011; Accepted 14 April 2011; Available online 21 April 2011.

Keywords: Evolutionary computation; Artificial bee colony algorithm; Rosenbrock method; Unconstrained optimization; Memetic algorithm

Abstract

A Rosenbrock artificial bee colony algorithm (RABC) that combines Rosenbrock's rotational direction method with an artificial bee colony algorithm (ABC) is proposed for accurate numerical optimization. RABC alternates between two phases: an exploration phase realized by ABC and an exploitation phase carried out by the rotational direction method. The proposed algorithm was tested on a comprehensive set of complex benchmark problems encompassing a wide range of dimensionality, and it was also compared with several other algorithms. Numerical results show that the new algorithm is promising in terms of convergence speed, success rate, and accuracy. The proposed RABC is also capable of keeping up with the direction changes in the problems.

© 2011 Elsevier Inc. All rights reserved.

1. Introduction

Global optimization is a field with applications in many areas of science and engineering. Without loss of generality, unconstrained global optimization problems can be formulated using the following minimization model:

$$\min f(x), \quad x = (x_1, x_2, \ldots, x_n), \tag{1}$$

where $f: \mathbb{R}^n \to \mathbb{R}$ is a real-valued objective function, $x \in \mathbb{R}^n$, and n is the number of parameters to be optimized. As many real-world optimization problems become increasingly complex, global optimization using traditional methods is becoming a challenging task. Evolutionary algorithms (EAs) such as genetic algorithms (GAs) [9,15,20,28,58,59], differential evolution (DE) [7,43,46,53–55,64,67,74], particle swarm optimization (PSO) [21,22,38,61,70], and bacterial foraging optimization [6,12,13] are powerful tools for solving difficult optimization problems. Recently, many global optimization algorithms have been inspired by nature [34,41,56]. Most of these algorithms are not sensitive to the starting point and generally are not prone to being trapped at a local minimum. However, they take a long time to locate the exact local optimum within the region of convergence. As extensions of EAs, memetic algorithms (MAs) [49,62] have been developed to combine the advantages of EAs with local search strategies. Different EAs, such as GA and DE, have been adopted as the main framework of MAs, and many traditional local search procedures, such as Nelder–Mead (NM) simplex search, Hooke–Jeeves pattern search, and the Rosenbrock method (RM), have been adopted as local procedures. Such hybrids have been successfully applied to the global optimization of numerical functions [8,36,39,48] and have been used to solve various real-world optimization problems [46,47,66,67]. Studies on the swarm intelligence of honeybees are currently popular [1–3,5,26,27,30,42,45]. An artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honeybees, was proposed by Karaboga [30].


It has received increasing interest because of its simplicity, wide applicability, and outstanding performance. The basic ABC has been compared with other evolutionary algorithms, such as GA, PSO, and DE, on a limited number of test functions [31,32]. ABC methods that use chaotic maps as efficient alternatives to pseudorandom sequences were proposed by Alatas [3], and a global-best-solution-guided ABC was proposed by Zhu and Kwong [75]. ABC has been used to solve real-world problems such as flow shop scheduling problems [52], minimum spanning tree problems [65], and parameter estimation problems [27]. ABC has already been shown to be a promising population-based global optimization algorithm [1,26,33]. However, it still has limitations in handling functions that have a narrow curving valley or a highly eccentric ellipse, and complex multimodal functions. Like most population-based algorithms, ABC also converges slowly because of its stochastic nature. To improve the performance of ABC, NM simplex search has been introduced into it, and a hybrid simplex ABC was proposed for inverse analysis [27]. According to Hvattum and Glover [23], the Rosenbrock method can solve more high-dimensional functions, and it is better at adapting to directional changes and at adjusting the step length than NM simplex search. In this paper, a Rosenbrock artificial bee colony (RABC) algorithm is proposed for the global optimization of numerical functions. The memetic algorithm retains the main steps of ABC and incorporates an RM-based local search technique. Like most local search techniques, RM easily gets trapped at a local optimum, and its convergence is extremely sensitive to the starting point. However, RM is guaranteed to find a local optimum in a finite number of steps for a unimodal problem, and the rotating direction operator of RM can keep up with the directional changes in the problems being solved. Therefore, ABC and RM have complementary advantages, and a hybrid of the two algorithms can result in a faster and more robust technique. The efficiency of the new algorithm was tested extensively.

The remainder of this paper is organized as follows. In Section 2, the ABC algorithm is described. In Section 3, the RM is introduced. In Section 4, the proposed RABC algorithm is described. In Section 5, comparative studies are presented. A discussion is given in Section 6, and conclusions are provided in Section 7. All test problems are listed in Appendix A.

2. Artificial bee colony algorithm

2.1. Foraging behavior of honeybees

Honeybees are creatures of unexpected complexity. They are models of domesticity that can produce food, learn, navigate, and communicate through intricate dances. Indeed, their colonies represent the ultimate socialist state, with complete selflessness and redistribution of "income" [18]. Nearly all bees in a hive are female: a single queen and tens of thousands of worker bees. When a worker bee is approximately three weeks old, it begins foraging, making trip after trip to collect food. There are three essential components in the foraging process: food sources, employed bees, and unemployed bees [30]. For simplicity, the "profitability" of a food source can be represented by a single value. The employed bees search around the food sources already known to them and, with a certain probability, share information about these food sources with other bees via the waggle dance [60].
The direction and duration of each waggle run are closely correlated with the information about the food source being advertised. There are two types of unemployed bees: onlookers and scouts. The onlookers tend to select attractive food sources from the information shared by the employed bees and then search around the selected sources; the probability of a food source being selected is proportional to its profitability. The scouts search for new food sources randomly.

2.2. Artificial bee colony algorithm

ABC is a swarm intelligence-based optimization algorithm inspired by honeybee foraging behavior. In ABC, just as in real life, there are three types of honeybees: employed bees, onlookers, and scouts. The number of employed bees is equal to the number of onlookers and to the number of food sources. If a food source position cannot be improved further within a predetermined number of cycles, whose value limit is set by the user, that food source is assumed to be abandoned: the corresponding employed bee becomes a scout, and the abandoned position is replaced with a new food source found by the scout. The position of a food source represents a possible solution of the optimization problem, and the profitability of a food source corresponds to the quality (fitness) of the associated solution. In the beginning, an initial population containing NS solutions is generated randomly, where NS is the number of food sources and equals the number of employed bees. Each solution x_i (i = 1, 2, ..., NS) is an n-dimensional vector. In the original ABC, the fitness function is defined as

$$\mathit{fit}_i = \begin{cases} \dfrac{1}{1+f_i}, & f_i \ge 0, \\ 1 + |f_i|, & f_i < 0, \end{cases} \tag{2}$$

where f_i is the objective function value of solution i and fit_i is the corresponding fitness value after transformation. The probability of a food source being selected by an onlooker bee is given by

$$p_i = \mathit{fit}_i \Big/ \sum_{j=1}^{NS} \mathit{fit}_j. \tag{3}$$
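As a minimal sketch (in Python with NumPy; the function names are ours, not the paper's), Eqs. (2) and (3) translate directly into:

```python
import numpy as np

def abc_fitness(f):
    # Eq. (2): map objective values (minimization) to positive fitness.
    return np.where(f >= 0, 1.0 / (1.0 + f), 1.0 + np.abs(f))

def selection_probabilities(fit):
    # Eq. (3): roulette-wheel probabilities used by the onlooker bees.
    return fit / fit.sum()
```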


Fig. 1. The main steps of the ABC algorithm.

A candidate solution v_i can be generated from the old solution x_i as

$$v_{ij} = x_{ij} + \phi_{ij}\,(x_{ij} - x_{kj}), \tag{4}$$

where k ∈ {1, 2, ..., NS} and j ∈ {1, 2, ..., n} are randomly chosen indexes; k must be different from i; and φ_ij is a random number in the range [-1, 1]. The candidate solution is compared with the old one. If the new food source has equal or better quality than the old source, the old source is replaced by the new source; otherwise, the old source is retained. If the abandoned source is x_i, the scout discovers a new food source according to

$$x_{ij} = l_j + \mathrm{rand}(0, 1)\,(u_j - l_j), \tag{5}$$

where l_j and u_j are the lower and upper bounds of variable x_ij. The main steps of ABC are shown in Fig. 1.
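A sketch of the two ABC operators just described (Python/NumPy; the helper names and the final bound clipping are our additions, not prescribed by the paper):

```python
import numpy as np

def generate_candidate(X, i, lower, upper):
    # Eq. (4): perturb one randomly chosen dimension j of solution i
    # toward or away from a randomly chosen partner k (k != i).
    NS, n = X.shape
    k = np.random.choice([s for s in range(NS) if s != i])
    j = np.random.randint(n)
    v = X[i].copy()
    v[j] += np.random.uniform(-1.0, 1.0) * (X[i, j] - X[k, j])
    return np.clip(v, lower, upper)  # clipping to the box is our assumption

def scout_solution(lower, upper):
    # Eq. (5): a scout replaces an abandoned source with a random point in the box.
    return lower + np.random.uniform(size=lower.shape) * (upper - lower)
```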

3. Rosenbrock's rotational direction method

The Rosenbrock method [37,57] was designed to cope with the peculiar features of the famous "banana function," whose minimum lies inside a narrow, curved valley. It is a classical derivative-free local search technique with adaptive search orientation and step size. The RM local search technique is described in Fig. 2, where d_1, ..., d_n are the n vectors forming the orthonormal basis and δ_1, ..., δ_n are the n step lengths. The initial stage of the RM local search begins with the coordinate axes as the search directions. It searches along these directions, cycling over each in turn and moving to new points that yield successful steps. This continues until there has been at least one successful and one unsuccessful step in each search direction, at which point the current stage terminates. If an improvement is found in a direction d_i, the step size δ_i is multiplied by α > 1; if no improvement is found in that direction, the step size is multiplied by β, with -1 < β < 0. The factor β not only reduces the step size in that direction but also reverses the search to the opposite direction in the next loop. The step adjustment factors recommended by Rosenbrock are α = 3.0 and β = -0.5 [57]. As soon as at least one successful and one unsuccessful move has been carried out in each direction, the algorithm updates its orthonormal basis to reflect the cumulative effect of all successful steps in all directions, and it resets the step sizes to their original values. The update of the orthonormal basis is done by the Gram–Schmidt orthonormalization procedure or by Palmer's method [51]. It is described as follows:

$$x_{k+1} - x_k = \sum_{i=1}^{n} \lambda_i d_i, \tag{6}$$

where λ_i is the sum of the successful step sizes along d_i.

Fig. 2. Pseudo-code of RM.


The direction x_{k+1} - x_k is the most "promising" direction from the just-completed stage and therefore should be included in the updated search directions. A new set of directions can be defined as follows:

$$p_i = \begin{cases} d_i, & \lambda_i = 0, \\ \displaystyle\sum_{j=i}^{n} \lambda_j d_j, & \lambda_i \ne 0. \end{cases} \tag{7}$$

Then the Gram–Schmidt orthonormalization procedure is adopted to update the search directions as

$$q_i = \begin{cases} p_i, & i = 1, \\ p_i - \displaystyle\sum_{j=1}^{i-1} \frac{q_j^{T} p_i}{q_j^{T} q_j}\, q_j, & i \ge 2. \end{cases} \tag{8}$$

After normalization, the updated search directions are

$$d_i = \frac{q_i}{\lVert q_i \rVert}, \quad i = 1, 2, \ldots, n. \tag{9}$$
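Pulled together, Eqs. (6)–(9) amount to the following direction update, shown here as a direct NumPy transcription (the array layout and names are our choices):

```python
import numpy as np

def update_directions(D, lam):
    """Rotate the orthonormal basis per Eqs. (6)-(9).

    D   : (n, n) array whose rows are the current directions d_1..d_n.
    lam : length-n array of summed successful step sizes lambda_i.
    """
    n = D.shape[0]
    # Eq. (7): accumulate the promising directions.
    P = np.array([D[i] if lam[i] == 0 else (lam[i:, None] * D[i:]).sum(axis=0)
                  for i in range(n)])
    # Eq. (8): Gram-Schmidt orthogonalization against the new basis.
    Q = np.zeros_like(P)
    for i in range(n):
        q = P[i].copy()
        for j in range(i):
            q -= (Q[j] @ P[i]) / (Q[j] @ Q[j]) * Q[j]
        Q[i] = q
    # Eq. (9): normalize each direction to unit length.
    return Q / np.linalg.norm(Q, axis=1, keepdims=True)
```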

Cyclic search is then performed along the new orthogonal directions, and the process is repeated until some termination condition is met.

4. Rosenbrock artificial bee colony algorithm

4.1. The modified RM procedure

The basic RM is modified to accommodate the hybrid strategy. The original termination criteria of RM are suitable for unimodal problems. On multimodal problems, however, they tend to get stuck at a local optimum; consequently, the directions are not updated in a timely manner, and the promising directions obtained in previous search iterations are left unused. Therefore, the termination criteria of the two RM loops have been modified. The detailed pseudo-code of the modified RM procedure is presented in Fig. 3, where z is a temporary vector; k_1 is an integer variable that counts the number of loops of searching along the current orthogonal directions; and k_2 is an integer variable that counts the number of times the absolute value of the relative change in the objective function is smaller than a preset value ε_2. In the authors' experience, ε_2 = 1.0e-4 is appropriate. The parameter ε_1 is a very small value, for example 1.0e-150, incorporated to avoid division by zero. The inner loop ends when the number of cyclic searches along the current orthogonal directions reaches a preset number n_l; that is, after every n_l cyclic searches along the current directions, the algorithm updates the directions. The outer loop terminates when the absolute value of the relative change in the objective function has been smaller than ε_2 more than 2n_l times, or when the step size is sufficiently small. This condition is given by

dmin < 1:0e  ða þ 1:0  nRM Þ

ð10Þ

where n_RM is a variable that counts the number of runs of the RM procedure, δ_min is the current minimum step size over all directions, and a is a parameter that controls the accuracy of the step size. Another feature of the modified RM is that the step sizes are not reset after the orthonormal basis is updated. This strategy reduces the number of function evaluations needed to adapt the step sizes at each stage.

4.2. The proposed RABC algorithm

The modified RM algorithm is introduced into ABC as a local exploitation tool. The rank-based fitness transformation [26,27] adopted to replace Eq. (2) is

Fig. 3. Pseudo-code of the modified RM.


Fig. 4. The main steps of RABC.

$$\mathit{fit}_i = 2 - SP + \frac{2(SP - 1)(r_i - 1)}{NS - 1}, \tag{11}$$

where r_i is the position of solution i in the entire population after ranking and SP ∈ [1.0, 2.0] is the selection pressure; a medium value of SP = 1.5 is appropriate. The main steps of the hybrid algorithm are summarized as follows. After initialization, the modified RM procedure is called to check whether it can solve the problem. In the main loop, after every n_c cycles of ABC, the modified RM algorithm is activated to perform a local search using the best solution as the starting point. The step size δ should suit the state of the solutions; therefore, an adaptive step size is used. It is set as a fraction of the average distance between selected solutions and the best solution achieved thus far. The first 10% of the solutions after ranking are selected to calculate the step size as follows:

$$\delta_j = 0.1\, \frac{\sum_{i=1}^{m} \left| x'_{ij} - x_{best,j} \right|}{m}, \tag{12}$$

where δ_j is the step size in the jth dimension, m = NS × 10% is the number of solutions selected to calculate the step size, x'_i is the ith solution after ranking, and x_best is the current best solution. In the early stages, the diversity of the population is large, which results in a large δ_j; as the population converges, the distance between different solutions decreases, and so does the step size of the modified RM. The number of iterations of the modified RM is controlled by the parameters n_l and a. When the termination condition is met, the algorithm returns to the main framework of RABC. After ranking, the solution in the middle position is replaced by the better solution obtained in the previous RM step. The worst solution in the population is not replaced, because doing so would affect the function of the scout operator and lead to premature convergence on a few multimodal problems. For the sake of clarity, the main steps of RABC are outlined in Fig. 4. The new algorithm carries out the optimization in two phases: during the exploration phase, it employs the ABC algorithm to locate regions of attraction; thereafter, during the exploitation phase, it employs the adaptive RM technique to carry out a local exploitation search near the best solution. This process is repeated until the termination condition is met, for example, when the maximum number of function evaluations (NFE_max) is reached.
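A compact sketch of the two ingredients above (Python/NumPy); we assume the ranking in Eq. (11) assigns the best solution the largest position, r_i = NS, which is our reading rather than an explicit statement in the paper:

```python
import numpy as np

def rank_fitness(f, SP=1.5):
    # Eq. (11); f holds objective values (smaller is better).
    NS = len(f)
    r = np.empty(NS)
    r[np.argsort(f)] = np.arange(NS, 0, -1)  # best solution gets r_i = NS (assumption)
    return 2.0 - SP + 2.0 * (SP - 1.0) * (r - 1.0) / (NS - 1.0)

def adaptive_step(X_ranked, x_best):
    # Eq. (12): per-dimension step size from the top 10% of the ranked population
    # (X_ranked holds solutions as rows, best first).
    m = max(1, int(0.1 * len(X_ranked)))
    return 0.1 * np.abs(X_ranked[:m] - x_best).mean(axis=0)
```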

5. Experimental results

5.1. Test suite and experimental setup

A comprehensive set of benchmark functions, including 41 global optimization problems collected from Refs. [4,9,20,35,40,44,50,53,55,71,72], was used to test the performance of the proposed approach. The 41 test functions in this set are described in Appendix A. The Colville and Rosenbrock functions have a narrow curving valley and are hard to optimize if the search space cannot be explored properly and the direction changes cannot be followed. The Colville function is illustrated in Fig. 5.

Fig. 5. Colville function, x1 = x3, x2 = x4.

Zakharov and Schwefel Ridge are functions with a highly eccentric ellipse. The main difficulty is that their gradients are not oriented along the axes, and adapting to both the orientation and the high eccentricity of the ellipse can be a significant challenge for some evolutionary algorithms. The Zakharov function is illustrated in Fig. 6. Ackley, Griewank, Rastrigin, and Schwefel's problem 2.26 are complex multimodal functions with a large number of local optima. To obtain good results on these functions, the search strategy must efficiently combine the exploratory and exploitative components. The Fletcher–Powell, Modified Shekel's Foxholes, and Modified Langerman functions are highly multimodal, like Ackley and Griewank; however, they are asymmetrical, and their local optima are randomly distributed. The objective function therefore has no implicit symmetry that might simplify optimization for certain algorithms. The Fletcher–Powell and Modified Shekel's Foxholes functions are illustrated in Figs. 7 and 8, respectively. The common parameters of ABC and RABC are the population size, limit, and NFE_max. The population size was set to 50 [31,32], and different NFE_max values were set for different experiments. The parameter limit was defined in accordance with the dimensionality of the problem and the colony size as limit = NS × n [31,32].

Fig. 6. Zakharov function, n = 2.

Fig. 7. Fletcher-Powell function, n = 2.


Fig. 8. Modified Shekel’s Foxholes, n = 2.

Compared to the basic ABC, three new control parameters a, n_l, and n_c are used in RABC. The parameter n_c controls the frequency of calling the RM procedure, n_l controls the termination of the inner RM loop, and a controls the termination of the entire RM procedure. The parameter n_c was defined in accordance with the dimension of the problem as n_c = k·n, k = 1, 2, ... In the following subsections, we describe the effect of the parameters and compare the proposed RABC with the basic ABC and with other algorithms. Each experiment was repeated 50 times with different random seeds.

5.2. Effect of parameters on RABC

To study the effect of the three new parameters on the performance of RABC, different parameter values were tested on 10 typical functions. The maximum number of function evaluations was set to 200,000 in this test. The mean best function values (Mean) and the standard deviations (SD) of RABC for different values of a, n_l, and n_c are listed in Tables 1–3, respectively. The results suggest that a proper combination of parameter values can improve the performance of RABC and the quality of the solutions; good results are obtained over wide ranges, i.e., a = 10–25, n_l = 10–25, and n_c = 3n–10n. The combination a = 20, n_l = 15, and n_c = 5n was used in the subsequent comparisons.

5.3. Comparison between ABC and RABC

5.3.1. Fixed-iteration results

RABC was compared with ABC on all 41 problems listed in Appendix A. The number of dimensions and the global minimum (GM) of each test problem are listed in Table 4. The total number of function evaluations was fixed at 300,000. The mean solutions, the standard deviations, and the results of the nonparametric Wilcoxon signed-rank test [73] over 50 independent runs are listed in Table 4. The best result for each function is highlighted in boldface. The Wilcoxon signed-rank test verifies whether the results of RABC are significantly superior to those of ABC; the null hypothesis is that the median of the difference between the two samples is zero. A significance level of 5% was used, and a p-value less than 0.05 indicates a significant difference between the two samples. From Table 4, it can be inferred that for 30 test problems the mean solutions obtained by RABC are closer to the theoretical optima, whereas ABC performs better than RABC for only six functions. Compared with ABC, the accuracy improves considerably on a few hard problems, such as Colville, Rosenbrock, Schwefel Ridge, Zakharov, and Fletcher–Powell. The Wilcoxon signed-rank test results show that RABC performed better than ABC on 29 problems, whereas from a statistical viewpoint ABC performed better than RABC on only three problems.
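As an illustration only (not the authors' code), the significance test described above can be run with SciPy's wilcoxon; the two arrays are assumed to hold the paired best-of-run values of the 50 runs:

```python
from scipy.stats import wilcoxon

def significantly_different(best_abc, best_rabc, alpha=0.05):
    # Two-sided Wilcoxon signed-rank test on paired best-of-run values.
    # Note: wilcoxon raises an error if all paired differences are zero.
    stat, p = wilcoxon(best_abc, best_rabc)
    return p < alpha, p
```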

Table 1. Results of RABC with different values of parameter a on 10 typical functions, n_l = 5, n_c = 3n. Entries are Mean(SD).

| F | n | a = 5 | a = 10 | a = 15 | a = 20 | a = 25 |
| SP | 30 | 2.51e-52(1.02e-51) | 2.10e-52(7.83e-52) | 1.59e-52(6.11e-52) | 1.79e-53(5.09e-53) | 2.26e-53(8.81e-53) |
| CO | 4 | 1.30e-11(3.62e-11) | 1.67e-12(5.58e-12) | 1.44e-12(3.57e-12) | 7.94e-13(1.65e-12) | 4.40e-12(2.63e-11) |
| RO | 30 | 5.58e-7(1.83e-6) | 1.91e-12(1.12e-11) | 6.16e-13(2.61e-12) | 8.80e-13(4.48e-12) | 2.12e-13(1.18e-12) |
| SR | 30 | 5.56e-9(1.95e-8) | 8.16e-10(2.76e-9) | 5.68e-10(1.18e-9) | 6.41e-10(2.86e-9) | 3.89e-10(1.36e-9) |
| ZA | 30 | 3.79e-9(8.58e-9) | 2.85e-11(9.94e-11) | 4.07e-12(2.15e-11) | 4.52e-11(2.33e-10) | 5.28e-11(2.87e-10) |
| AC | 30 | 3.44e-14(4.55e-15) | 3.41e-14(4.13e-15) | 3.37e-14(3.89e-15) | 3.42e-14(4.32e-15) | 3.33e-14(3.31e-15) |
| GR | 30 | 1.48e-4(1.05e-3) | 1.97e-4(1.39e-3) | 3.94e-4(2.01e-3) | 1.97e-4(1.39e-3) | 2.96e-4(2.09e-3) |
| RA | 30 | 0(0) | 0(0) | 0(0) | 0(0) | 0(0) |
| FP | 10 | 1.55e-5(5.35e-5) | 8.50e-12(3.25e-11) | 1.82e-12(6.71e-12) | 5.07e-12(2.04e-11) | 1.21e-12(4.67e-12) |
| MS | 10 | -7.67869(3.57179) | -7.54801(3.59574) | -7.566381(3.57088) | -7.53070(3.61835) | -7.45702(3.55994) |

Table 2. Results of RABC with different values of parameter n_l on 10 typical functions, a = 20, n_c = 3n. Entries are Mean(SD).

| F | n | n_l = 5 | n_l = 10 | n_l = 15 | n_l = 20 | n_l = 25 |
| SP | 30 | 1.79e-53(5.09e-53) | 3.42e-64(2.11e-63) | 1.88e-68(1.11e-67) | 6.84e-75(3.13e-74) | 2.38e-81(7.71e-81) |
| CO | 4 | 7.94e-13(1.65e-12) | 5.07e-24(2.82e-23) | 3.38e-27(7.90e-27) | 6.37e-28(6.81e-28) | 9.40e-28(1.11e-27) |
| RO | 30 | 8.80e-13(4.48e-12) | 6.65e-21(1.02e-20) | 2.05e-22(2.18e-21) | 9.09e-23(2.80e-22) | 2.93e-22(2.01e-21) |
| SR | 30 | 6.41e-10(2.86e-9) | 3.86e-18(1.30e-17) | 1.99e-21(1.94e-21) | 8.54e-21(1.84e-20) | 5.75e-22(7.35e-22) |
| ZA | 30 | 4.52e-11(2.33e-10) | 2.72e-24(1.68e-23) | 3.91e-24(1.79e-23) | 2.52e-25(5.39e-25) | 1.21e-24(7.36e-24) |
| AC | 30 | 3.42e-14(4.32e-15) | 3.45e-14(3.74e-15) | 3.31e-14(3.82e-15) | 3.34e-14(4.57e-15) | 3.43e-14(4.77e-15) |
| GR | 30 | 1.97e-4(1.39e-3) | 3.45e-4(1.73e-3) | 2.17e-4(1.15e-3) | 2.96e-4(2.09e-4) | 3.45e-4(2.44e-3) |
| RA | 30 | 0(0) | 0(0) | 0(0) | 0(0) | 0(0) |
| FP | 10 | 5.07e-12(2.04e-11) | 1.21e-20(4.01e-19) | 4.84e-23(1.12e-22) | 1.12e-22(2.87e-22) | 6.16e-23(1.76e-22) |
| MS | 10 | -7.53070(3.61835) | -7.68034(3.57095) | -7.57936(3.55205) | -7.48466(3.54024) | -7.03324(3.78419) |

Table 3. Results of RABC with different values of parameter n_c on 10 typical functions, a = 20, n_l = 15. Entries are Mean(SD).

| F | n | n_c = n | n_c = 3n | n_c = 5n | n_c = 10n | n_c = 15n |
| SP | 30 | 4.06e-57(1.62e-56) | 1.88e-68(1.11e-67) | 1.86e-74(8.36e-74) | 4.23e-62(1.39e-61) | 1.88e-58(3.59e-58) |
| CO | 4 | 3.36e-27(7.47e-27) | 3.38e-27(7.90e-27) | 4.90e-27(1.61e-26) | 7.92e-27(3.73e-26) | 7.31e-26(3.42e-25) |
| RO | 30 | 1.95e-22(6.25e-21) | 2.05e-22(2.18e-21) | 1.41e-22(9.12e-22) | 2.59e-22(9.43e-22) | 3.76e-21(2.29e-20) |
| SR | 30 | 3.34e-24(7.10e-24) | 1.99e-21(1.94e-21) | 3.46e-16(1.08e-15) | 8.40e-10(1.29e-9) | 2.20e-5(2.56e-5) |
| ZA | 30 | 8.92e-25(6.30e-24) | 3.91e-24(1.79e-23) | 3.36e-24(1.65e-23) | 2.89e-23(1.17e-22) | 3.69e-23(9.35e-23) |
| AC | 30 | 4.49e-14(6.31e-15) | 3.31e-14(3.82e-15) | 3.38e-14(4.11e-15) | 3.38e-14(3.64e-15) | 3.41e-14(4.55e-15) |
| GR | 30 | 1.48e-4(1.05e-3) | 2.17e-4(1.15e-3) | 2.02e-4(1.43e-3) | 2.46e-4(1.74e-3) | 1.48e-4(1.05e-3) |
| RA | 30 | 0(0) | 0(0) | 0(0) | 0(0) | 0(0) |
| FP | 10 | 1.13e-22(2.15e-22) | 4.84e-23(1.12e-22) | 3.78e-23(9.54e-23) | 8.21e-23(3.05e-22) | 1.36e-22(5.96e-22) |
| MS | 10 | -6.65161(3.75632) | -7.57936(3.55205) | -7.64253(3.62300) | -7.58871(3.54625) | -7.50161(3.65699) |

A few sample graphs comparing the performance of ABC, PSO, DE, and RABC are shown in Fig. 9. These graphs show that RABC converged towards the optimal solution faster than the other three algorithms. The parameters of PSO and DE were set to the values used in Refs. [38] and [74], respectively. It can therefore be concluded that RABC is more efficient than ABC, and that the final solution of RABC is better than that of ABC in most cases.

5.3.2. Robustness analysis

NFE_max was set to 300,000 for this test. We consider a trial to be successful if the following inequality holds:

$$\left| FOBJ_{ALG} - FOBJ_{ANAL} \right| < \varepsilon_{rel} \left| FOBJ_{ANAL} \right| + \varepsilon_{abs}, \tag{13}$$

where FOBJ_ANAL is the analytical global minimum, FOBJ_ALG is the best function value obtained by the algorithm, and the accuracy-controlling parameters are set as ε_rel = 10⁻⁴ and ε_abs = 10⁻⁶ [9]. To compare convergence speeds, we use the acceleration rate (AR) [55], which is defined from the numbers of function evaluations (NFEs) of ABC and RABC as

$$AR = \frac{NFE_{ABC}}{NFE_{RABC}}. \tag{14}$$
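Both measures are one-liners; a minimal sketch under the paper's settings (ε_rel = 10⁻⁴, ε_abs = 10⁻⁶), with names of our choosing:

```python
def is_success(f_alg, f_anal, eps_rel=1e-4, eps_abs=1e-6):
    # Success criterion of Eq. (13).
    return abs(f_alg - f_anal) < eps_rel * abs(f_anal) + eps_abs

def acceleration_rate(nfe_abc, nfe_rabc):
    # Eq. (14): a value > 1 means RABC needed fewer evaluations than ABC.
    return nfe_abc / nfe_rabc
```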

We compared the new algorithm with ABC in terms of convergence speed and robustness; the results for the 41 benchmark functions are summarized in Table 5. The average number of objective function evaluations was computed over the successful minimizations only. The best NFE and success rate (SR) results for each function are highlighted in boldface. The average acceleration rate AR_ave and the average success rate SR_ave over the 57 test cases are also listed in the last row of Table 5. As can be inferred from Table 5, RABC converged much faster than ABC in most cases, except for seven functions: Shekel's Foxholes, Michalewicz, Expansion of F10, Rastrigin, Weierstrass, Noncontinuous Rastrigin, and Step; the last two of these are noncontinuous. The overall average acceleration rate of RABC over ABC is 17.03; excluding the eight functions with AR larger than 20, it is 3.01. To check whether RABC saves computational time, the execution time of 50 runs is also shown in Table 5; RABC needed less time for most functions. The results reported in Table 5 show that 20 functions could not be solved with 100% success by ABC, whereas for RABC this number was 11. Except for Quartic, Schwefel's problem 2.21, the 10-D Modified Langerman, and the 100-D Schwefel Ridge and Zakharov, the success rates on the other hard problems were much improved by RABC. The SR of RABC was lower than that of ABC only on the 100-D Rastrigin function. The average success rate of RABC was 0.89, much higher than the 0.67 obtained by ABC. Therefore, it can be concluded that RABC is faster and more robust than ABC.


Table 4. Fixed-iteration comparison between ABC and RABC. Entries are Mean(SD).

| No. | F | n | GM | ABC | RABC | p-value | h |
| 1 | BO1 | 2 | 0 | 0(0) | 0(0) | 1.0 | 0 |
| 2 | BO2 | 2 | 0 | 0(0) | 0(0) | 1.0 | 0 |
| 3 | RC | 2 | 0.397887 | 0.397887(3.3650e-16) | 0.397887(3.3650e-16) | 1.0 | 0 |
| 4 | ES | 2 | -1 | -1(0) | -1(0) | 1.0 | 0 |
| 5 | GP | 2 | 3 | 3.000010(3.7232e-05) | 3.000000(8.9720e-16) | 1.11e-09 | 1 |
| 6 | SF | 2 | 0.998004 | 0.998004(5.6075e-16) | 0.998004(5.6075e-16) | 1.0 | 0 |
| 7 | SB | 2 | -1.03163 | -1.03163(2.2430e-16) | -1.03163(2.3093e-16) | 1.0 | 0 |
| 8 | SH | 2 | -186.7309 | -186.7309(1.1613e-13) | -186.7309(7.4204e-14) | 0.0027 | 1 |
| 9 | H3,4 | 3 | -3.86278 | -3.862780(2.6916e-15) | -3.862780(2.6916e-15) | 1.0 | 0 |
| 10 | HV | 3 | 0 | 5.5088e-09(2.3903e-08) | 1.7712e-42(4.3447e-42) | 7.56e-10 | 1 |
| 11 | CO | 4 | 0 | 1.6073e-01(1.0917e-01) | 1.1989e-27(3.3911e-27) | 7.56e-10 | 1 |
| 12 | KO | 4 | 3.075e-4 | 4.4451e-04(7.4222e-05) | 3.1215e-04(2.2996e-05) | 1.30e-09 | 1 |
| 13 | PE | 4 | 0 | 1.6863e-03(3.0784e-03) | 7.3565e-14(1.2209e-13) | 7.56e-10 | 1 |
| 14 | PS | 4 | 0 | 6.6536e-03(5.7332e-03) | 3.4194e-07(7.8542e-07) | 7.56e-10 | 1 |
| 15 | S4,5 | 4 | -10.15320 | -10.15320(7.2579e-15) | -10.15320(7.4158e-15) | 1.0 | 0 |
| 16 | S4,7 | 4 | -10.40294 | -10.40294(6.7903e-15) | -10.40294(7.2534e-15) | 0.0625 | 0 |
| 17 | S4,10 | 4 | -10.53641 | -10.53461(1.1252e-14) | -10.53461(1.0438e-14) | 1.0 | 0 |
| 18 | H6,4 | 6 | -3.32237 | -3.322368(8.9720e-16) | -3.322368(8.9720e-16) | 1.0 | 0 |
| 19 | MI | 10 | -9.66015 | -9.660152(3.5888e-16) | -9.660152(5.0753e-16) | 0.5625 | 0 |
| 20 | WI | 10 | 0 | 4.5777e-01(8.0434e-01) | 1.7558e-01(6.4615e-01) | 1.23e-6 | 1 |
| 21 | PO | 24 | 0 | 1.3710e-03(1.9426e-04) | 1.6442e-11(1.2160e-11) | 7.56e-10 | 1 |
| 22 | F10 | 25 | 0 | 1.5127e-22(2.1818e-22) | 1.1993e-17(1.4046e-17) | 8.03e-10 | 2 |
| 23 | AC | 30 | 0 | 3.1213e-14(3.2805e-15) | 3.2350e-14(5.9798e-15) | 0.0676 | 0 |
| 24 | GR | 30 | 0 | 0(0) | 0(0) | 1.0 | 0 |
| 25 | P1 | 30 | 0 | 1.5704e-32(1.6588e-47) | 1.5704e-32(1.6588e-47) | 1.0 | 0 |
| 26 | P2 | 30 | 0 | 1.3497e-32(8.2941e-48) | 1.3497e-32(8.2941e-48) | 1.0 | 0 |
| 27 | QU | 30 | 0 | 3.4251e-02(7.3449e-03) | 3.5789e-02(6.8367e-03) | 0.3320 | 0 |
| 28 | RA | 30 | 0 | 0(0) | 0(0) | 1.0 | 0 |
| 29 | NR | 30 | 0 | 0(0) | 0(0) | 1.0 | 0 |
| 30 | RO | 30 | 0 | 1.3431e-01(1.9219e-01) | 6.4175e-23(3.8493e-22) | 7.56e-10 | 1 |
| 31 | SR | 30 | 0 | 1.9868e+03(8.3930e+02) | 2.4016e-22(2.1732e-22) | 7.56e-10 | 1 |
| 32 | SP | 30 | 0 | 1.9285e-93(2.5889e-93) | 1.9682e-98(1.2977e-97) | 7.56e-10 | 1 |
| 33 | ST | 30 | 0 | 0(0) | 0(0) | 1.0 | 0 |
| 34 | S21 | 30 | 0 | 9.7380e-02(3.9357e-02) | 4.0328e-01(2.0077e-01) | 1.38e-09 | 2 |
| 35 | S22 | 30 | 0 | 1.9832e-96(5.9984e-96) | 7.3795e-97(4.3449e-96) | 1.99e-06 | 1 |
| 36 | S26 | 30 | 0 | 0(0) | 0(0) | 1.0 | 0 |
| 37 | WE | 30 | 0 | 0(0) | 0(0) | 1.0 | 0 |
| 38 | ZA | 30 | 0 | 1.6466e+02(3.0345e+01) | 2.5876e-35(8.1584e-35) | 7.56e-10 | 1 |
| 39 | FP | 2 | 0 | 0(0) | 0(0) | 1.0 | 0 |
| 39 | FP | 5 | 0 | 1.4593e-01(1.4422e-01) | 9.7767e-24(4.3782e-23) | 7.56e-10 | 1 |
| 39 | FP | 10 | 0 | 1.2616e+01(1.2322e+01) | 2.7952e-23(7.8447e-23) | 7.56e-10 | 1 |
| 40 | ML | 2 | -1.0809 | -1.080938(6.0454e-08) | -1.080938(3.9998e-16) | 2.70e-05 | 1 |
| 40 | ML | 5 | -0.965 | -0.964959(7.8491e-05) | -0.965000(1.1266e-15) | 7.56e-10 | 1 |
| 40 | ML | 10 | -0.965 | -0.547477(1.1611e-01) | -0.551841(1.1805e-01) | 0.0126 | 1 |
| 41 | MS | 2 | -12.119 | -12.11901(8.9791e-15) | -12.11901(7.8667e-15) | 0.0039 | 1 |
| 41 | MS | 5 | -10.4056 | -10.35277(8.1346e-02) | -10.40562(9.8773e-15) | 7.56e-10 | 1 |
| 41 | MS | 10 | -10.2088 | -8.526342(2.9684e+00) | -8.205473(3.2517e+00) | 0.1284 | 0 |
| 23 | AC | 100 | 0 | 2.5317e-11(9.0195e-12) | 3.5217e-13(2.7939e-13) | 7.56e-10 | 1 |
| 24 | GR | 100 | 0 | 8.3267e-16(2.8695e-15) | 2.2205e-18(1.5701e-17) | 0.0012 | 1 |
| 25 | P1 | 100 | 0 | 2.1030e-22(9.0628e-22) | 1.2793e-29(5.2057e-29) | 7.56e-10 | 1 |
| 26 | P2 | 100 | 0 | 4.5454e-20(6.3950e-20) | 1.6994e-31(8.0578e-31) | 7.56e-10 | 1 |
| 28 | RA | 100 | 0 | 5.7189e-09(4.0438e-08) | 3.7809e-01(4.8784e-01) | 4.78e-09 | 2 |
| 30 | RO | 100 | 0 | 1.082803(1.388891) | 1.061286(1.760282) | 0.2949 | 0 |
| 31 | SR | 100 | 0 | 9.6249e+04(8.0981e+03) | 2.5139e+01(1.3178e+01) | 7.56e-10 | 1 |
| 32 | SP | 100 | 0 | 3.8114e-20(3.6304e-20) | 1.2388e-43(8.7071e-43) | 7.56e-10 | 1 |
| 35 | S22 | 100 | 0 | 1.1441e-22(1.1098e-22) | 8.2525e-41(5.8353e-40) | 7.56e-10 | 1 |
| 38 | ZA | 100 | 0 | 1.3027e+03(1.0086e+02) | 3.2435e-01(9.3042e-01) | 7.56e-10 | 1 |

Meaning of h values: 0 – there is no difference between ABC and RABC; 1 – RABC is better; 2 – ABC is better.

5.3.3. Comparison with chaotic ABC

To improve the convergence characteristics and to prevent ABC from becoming trapped in local solutions, several chaotic ABCs (CABCs) have been proposed by introducing chaotic maps into ABC [3]. In CABC1, chaotic maps are used to initialize the artificial colony; in CABC2, chaotic search is performed by the scout bee; and CABC3 is a combination of CABC1 and CABC2. In this section, RABC is compared with CABCs using a logistic map. The parameters of the functions and algorithms were identical to those in [3]. The results, summarized in Tables 6 and 7, show that RABC performed much better than the CABCs on the Rosenbrock function, and the average success rate of RABC was the highest among all the algorithms.
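The chaotic sequences in [3] replace uniform random draws; a minimal sketch using the logistic map (the initial value x0 and the control parameter 4.0 are illustrative assumptions, not taken from the paper):

```python
def logistic_sequence(length, x0=0.345):
    # x_{k+1} = 4 x_k (1 - x_k): a chaotic sequence on (0, 1) that can
    # stand in for pseudorandom numbers in colony initialization.
    xs = [x0]
    for _ in range(length - 1):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs
```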


Fig. 9. Sample graphs for convergence process comparison of PSO, DE, ABC, and RABC.

5.4. Comparison with EP and DE

RABC was compared with five variants of evolutionary programming (EP) and four variants of DE on 23 test problems used by Yao et al. [72]. The EP and DE variants are listed in Table 8. The test functions, the corresponding numbers of the functions in Appendix A, and the maximum numbers of function evaluations are shown in Table 9.


Table 5. Comparison between ABC and RABC in terms of NFE, SR, and execution time.

| No. | F | n | ABC NFE | ABC SR | ABC Time (s) | RABC NFE | RABC SR | RABC Time (s) | AR |
| 1 | BO1 | 2 | 1998 | 1.0 | 0.093 | 500 | 1.0 | 0.031 | 4.00 |
| 2 | BO2 | 2 | 2625 | 1.0 | 0.109 | 656 | 1.0 | 0.031 | 4.00 |
| 3 | RC | 2 | 1608 | 1.0 | 0.093 | 72 | 1.0 | 0.015 | 22.33 |
| 4 | ES | 2 | 3314 | 1.0 | 0.218 | 989 | 1.0 | 0.094 | 3.35 |
| 5 | GP | 2 | 27,773 | 1.0 | 0.953 | 326 | 1.0 | 0.015 | 85.19 |
| 6 | SF | 2 | 890 | 1.0 | 0.359 | 988 | 1.0 | 0.437 | 0.90 |
| 7 | SB | 2 | 854 | 1.0 | 0.062 | 239 | 1.0 | 0.031 | 3.57 |
| 8 | SH | 2 | 2114 | 1.0 | 0.156 | 465 | 1.0 | 0.046 | 4.55 |
| 9 | H3,4 | 3 | 1082 | 1.0 | 0.218 | 154 | 1.0 | 0.109 | 7.03 |
| 10 | HV | 3 | 84,229 | 1.0 | 3.484 | 235 | 1.0 | 0.031 | 358.42 |
| 11 | CO | 4 | – | 0 | 16.28 | 1031 | 1.0 | 0.109 | – |
| 12 | KO | 4 | – | 0 | 8.640 | 11,321 | 0.98 | 0.813 | – |
| 13 | PE | 4 | – | 0 | 35.38 | 948 | 1.0 | 0.156 | – |
| 14 | PS | 4 | – | 0 | 41.58 | 35,817 | 0.88 | 9.703 | – |
| 15 | S4,5 | 4 | 4343 | 1.0 | 0.812 | 1181 | 1.0 | 0.265 | 3.68 |
| 16 | S4,7 | 4 | 7266 | 1.0 | 1.656 | 1177 | 1.0 | 0.328 | 6.17 |
| 17 | S4,10 | 4 | 7827 | 1.0 | 2.421 | 1266 | 1.0 | 0.468 | 6.18 |
| 18 | H6,4 | 6 | 3240 | 1.0 | 0.718 | 855 | 1.0 | 0.218 | 3.79 |
| 19 | MI | 10 | 25,381 | 1.0 | 3.718 | 26,380 | 1.0 | 3.921 | 0.96 |
| 20 | WI | 10 | – | 0 | 482.6 | 33,092 | 0.88 | 104.5 | – |
| 21 | PO | 24 | – | 0 | 51.20 | 11,251 | 1.0 | 2.312 | – |
| 22 | F10 | 25 | 141,701 | 1.0 | 79.72 | 149,196 | 1.0 | 85.39 | 0.95 |
| 23 | AC | 30 | 58,103 | 1.0 | 7.781 | 27,616 | 1.0 | 4.187 | 2.10 |
| 24 | GR | 30 | 56,555 | 1.0 | 8.625 | 32,168 | 1.0 | 5.281 | 1.76 |
| 25 | P1 | 30 | 25,300 | 1.0 | 13.44 | 10,415 | 1.0 | 5.812 | 2.43 |
| 26 | P2 | 30 | 33,688 | 1.0 | 18.11 | 17,845 | 1.0 | 9.937 | 1.89 |
| 27 | QU | 30 | – | 0 | 11.08 | – | 0 | 12.56 | – |
| 28 | RA | 30 | 52,326 | 1.0 | 5.687 | 58,922 | 1.0 | 6.937 | 0.89 |
| 29 | NR | 30 | 23,970 | 1.0 | 5.016 | 31,220 | 1.0 | 6.671 | 0.77 |
| 30 | RO | 30 | – | 0 | 114.02 | 36,830 | 1.0 | 15.14 | – |
| 31 | SR | 30 | – | 0 | 57.70 | 88,207 | 1.0 | 17.64 | – |
| 32 | SP | 30 | 31,778 | 1.0 | 1.218 | 1322 | 1.0 | 0.156 | 24.04 |
| 33 | ST | 30 | 6816 | 1.0 | 0.546 | 16,543 | 1.0 | 1.281 | 0.41 |
| 34 | S21 | 30 | – | 0 | 22.56 | – | 0 | 25.52 | – |
| 35 | S22 | 30 | 27,680 | 1.0 | 2.265 | 1128 | 1.0 | 0.187 | 24.54 |
| 36 | S26 | 30 | 78,428 | 1.0 | 15.98 | 73,783 | 1.0 | 15.73 | 1.06 |
| 37 | WE | 30 | 66,915 | 1.0 | 557.8 | 76,131 | 1.0 | 636.4 | 0.88 |
| 38 | ZA | 30 | – | 0 | 15.48 | 23,851 | 1.0 | 2.312 | – |
| 39 | FP | 2 | 3878 | 1.0 | 0.531 | 92 | 1.0 | 0.031 | 42.15 |
| 39 | FP | 5 | – | 0 | 119.0 | 943 | 1.0 | 0.437 | – |
| 39 | FP | 10 | – | 0 | 407.2 | 2126 | 1.0 | 2.890 | – |
| 40 | ML | 2 | 13,824 | 1.0 | 2.031 | 1586 | 1.0 | 0.296 | 8.72 |
| 40 | ML | 5 | 99,815 | 0.96 | 25.19 | 58,349 | 1.0 | 13.80 | 1.71 |
| 40 | ML | 10 | 171,255 | 0.02 | 116.1 | 125,148 | 0.02 | 114.8 | 1.37 |
| 41 | MS | 2 | 8712 | 1.0 | 0.968 | 845 | 1.0 | 0.140 | 10.31 |
| 41 | MS | 5 | 130,562 | 0.10 | 41.86 | 47,511 | 1.0 | 6.968 | 2.75 |
| 41 | MS | 10 | 152,711 | 0.02 | 79.53 | 87,555 | 0.74 | 40.59 | 1.74 |
| 23 | AC | 100 | 204,092 | 1.0 | 72.80 | 103,479 | 1.0 | 47.61 | 1.97 |
| 24 | GR | 100 | 134,512 | 1.0 | 61.92 | 68,650 | 1.0 | 37.94 | 1.96 |
| 25 | P1 | 100 | 73,294 | 1.0 | 123.6 | 42,453 | 1.0 | 75.45 | 1.73 |
| 26 | P2 | 100 | 112,612 | 1.0 | 189.2 | 57,270 | 1.0 | 100.7 | 1.97 |
| 28 | RA | 100 | 227,535 | 1.0 | 69.75 | 288,431 | 0.5 | 110.6 | 0.79 |
| 30 | RO | 100 | – | 0 | 372.9 | 160,632 | 0.74 | 318.8 | – |
| 31 | SR | 100 | – | 0 | 495.6 | – | 0 | 531.7 | – |
| 32 | SP | 100 | 105,202 | 1.0 | 8.625 | 4979 | 1.0 | 2.359 | 21.13 |
| 35 | S22 | 100 | 95,891 | 1.0 | 21.19 | 3988 | 1.0 | 2.296 | 24.04 |
| 38 | ZA | 100 | – | 0 | 34.97 | 261,575 | 0.04 | 114.0 | – |
| Ave. | – | – | – | 0.67 | – | – | 0.89 | – | 17.03 |

The mean solution values and the standard deviations of 50 independent runs are listed in Tables 9 and 10, respectively; the best results are highlighted in boldface. In Table 9, RABC performed better than the five EP algorithms on 14 problems. The EP algorithms performed better than RABC on only three problems: the Kowalik function, the Quartic function, and Schwefel's problem 2.21. It can be concluded that RABC is more accurate and robust than these variants of EP. From Table 10, the results of RABC are better than or similar to those of the DE variants. RABC performed much better than several DE variants on

Table 6. Comparison of success rates among ABC, RABC, and CABCs when the termination accuracy is 1.0e-5.

| F(n) | ABC | CABC1 | CABC2 | CABC3 | RABC |
| RO(2) | 0 | 0 | 6 | 6 | 100 |
| GR(10) | 13 | 16 | 26 | 25 | 24 |
| RA(10) | 75 | 69 | 91 | 89 | 60 |
| Ave. | 29.3 | 28.3 | 41.0 | 40.0 | 61.3 |

Table 7. Comparison of success rates among ABC, RABC, and CABCs when the termination accuracy is 1.0e-6.

| F(n) | ABC | CABC1 | CABC2 | CABC3 | RABC |
| RO(2) | 0 | 0 | 4 | 4 | 100 |
| GR(10) | 13 | 10 | 23 | 22 | 20 |
| RA(10) | 60 | 59 | 85 | 69 | 58 |
| Ave. | 24.3 | 23.0 | 37.3 | 31.7 | 59.3 |

Table 8. List of the EP and DE algorithms used for comparison.

| Method | Reference |
| Conventional evolutionary programming (CEP) | Yao et al. [72] |
| Fast evolutionary programming (FEP) | Yao et al. [72] |
| Evolutionary programming using a Lévy mutation (LEP) | Dong et al. [14] |
| Single-point mutation evolutionary programming (SPMEP) | Ji et al. [24,25] |
| Mixed strategy evolutionary programming (MSEP) | Dong et al. [14] |
| Differential evolution (DE/rand/1/bin) | Storn and Price [64] |
| Differential evolution with self-adapting control parameters (jDE) | Brest et al. [7] |
| Self-adaptive DE (SaDE) | Qin and Suganthan [54] |
| Adaptive DE with optional external archive (JADE) | Zhang and Sanderson [74] |

Schwefel's problem 2.22 and the Rosenbrock function, while the DE variants performed much better than RABC on Schwefel's problem 2.21.

5.5. Comparison between RABC and RCGAs

RABC was compared with several real-coded genetic algorithms (RCGAs) on 10 test problems used by Herrera et al. [20]. The RCGAs used for comparison are listed in Table 11. The total number of function evaluations was set to 100,000 [20]. The test problems and the results of the different algorithms are listed in Table 12, where SLE denotes the system-of-linear-equations problem and FMS the frequency-modulation-sounds parameter identification problem [20]. The best results for each function are highlighted in boldface. The results show that RABC performed more accurately in most cases than the RCGAs, except for the RCGA using complex hybrid crossover operators with multiple descendants (RCGA-HCX-MDs), which performed more accurately than RABC on five of the 10 problems. However, RABC performed much better than RCGA-HCX-MDs on the Rosenbrock function and on the SLE and FMS problems. Therefore, it can be concluded that RABC is more robust than the RCGAs used for comparison.

5.6. Comparison with PSO and clonal selection algorithms

RABC was compared with several variants of PSO and with clonal selection algorithms (CSA) on eight commonly used test functions with 10 and 30 dimensions. The methods used for comparison are listed in Table 13, and the functions are listed in Table 14. On the 10-dimensional problems, NFE_max was set to 30,000 [38] for the comparison of RABC with PSO and its variants, and to 50,000 [17] for the comparison of RABC with CSA. On the 30-dimensional problems, NFE_max was set to 200,000 [17,38] in all cases. The mean solution values and the standard deviations are listed in Tables 14–17; the best results for each case are highlighted in boldface. Tables 14 and 15 show that RABC performed better than the other algorithms except CLPSO. On the Rosenbrock function, the most accurate solution was obtained by RABC, whereas the PSO variants and the basic CSA could not solve it accurately. From Tables 16 and 17, RABC performed better than all the other algorithms on the 30-D problems, because it obtained the best results most often.


Table 9. Comparison between RABC and EP in terms of Mean(SD).

| No. | F(n) | NFE_max | GM | CEP | FEP | LEP | SPMEP | MSEP | RABC |
| 32 | SP(30) | 1500×100 | 0 | 2.2e-4(5.9e-4) | 5.7e-4(1.3e-4) | 6.3e-4(7.6e-5) | 1.3e-4(1.0e-4) | 1.0e-4(1.3e-5) | 9.1148e-61(2.1063e-60) |
| 35 | S22(30) | 2000×100 | 0 | 2.6e-3(1.7e-4) | 8.1e-3(7.4e-4) | 2.4e-3(1.6e-3) | 5.1e-4(1.5e-5) | 4.1e-4(2.1e-4) | 3.2100e-74(1.9714e-73) |
| 31 | SR(30) | 5000×100 | 0 | 5.0e-2(6.6e-2) | 1.6e-2(1.4e-2) | – | – | – | 2.9083e-24(1.4586e-23) |
| 34 | S21(30) | 5000×100 | 0 | 2.0(1.2) | 0.3(0.5) | 0.67(0.46) | 6.5e-3(2.0e-3) | 2.7e-2(1.7e-2) | 2.7753e-2(1.7366e-2) |
| 30 | RO(30) | 20,000×100 | 0 | 6.17(13.61) | 5.06(5.87) | – | – | – | 3.0815e-23(1.0443e-22) |
| 30 | RO(30) | 1500×100 | 0 | – | – | 162.4(135.1) | – | 29.2(24.9) | 8.0453e-22(3.7371e-21) |
| 33 | ST(30) | 1500×100 | 0 | 577.76(1125.76) | 0(0) | 0(0) | 0(0) | 0(0) | 0(0) |
| 27 | QU(30) | 3000×100 | 0 | 1.8e-2(6.4e-3) | 7.6e-3(2.6e-3) | 1.5e-2(5.7e-3) | 1.0e-2(1.5e-3) | 6.1e-3(1.7e-3) | 3.5789e-2(6.8367e-3) |
| 36 | S26(30) | 9000×100 | -12569.5 | -7917.1(634.5) | -12554.5(52.6) | -11937.8(68.4) | -12569.5(9.1e-12) | -12569.48(3.4e-4) | -12569.5(7.2852e-12) |
| 28 | RA(30) | 5000×100 | 0 | 89.0(23.1) | 4.6e-2(1.2e-2) | 8.8(2.73) | 2.9e-7(6.0e-8) | 2.5e-5(2.1e-5) | 0(0) |
| 23 | AC(30) | 1500×100 | 0 | 9.2(2.8) | 1.8e-2(2.1e-3) | 5.5e-2(2.3e-2) | 1.9e-3(4.4e-4) | 1.7e-3(4.3e-4) | 3.8247e-14(4.3659e-15) |
| 24 | GR(30) | 2000×100 | 0 | 8.6e-2(0.12) | 1.6e-2(2.2e-2) | 9.8e-3(8.4e-3) | 5.6e-3(1.7e-3) | 8.5e-4(2.3e-3) | 2.0200e-04(1.4283e-03) |
| 25 | P1(30) | 1500×100 | 0 | 1.76(2.4) | 9.2e-6(3.6e-6) | 6.0e-6(1.0e-6) | 8.5e-7(9.7e-9) | 7.5e-7(4.0e-7) | 1.5704e-32(1.6588e-47) |
| 26 | P2(30) | 1500×100 | 0 | 1.4(3.7) | 1.6e-4(7.3e-5) | 9.8e-5(1.2e-5) | 1.4e-5(9.2e-6) | 1.2e-5(1.1e-5) | 1.3497e-32(8.2941e-48) |
| 6 | SF(2) | 100×100 | 0.998004 | 1.66(1.19) | 1.22(0.56) | 0.998004(0) | 1.0(1.6e-15) | 0.998004(0) | 0.998004(5.6075e-16) |
| 12 | KO(4) | 4000×100 | 3.075e-4 | 4.7e-4(3.0e-4) | 5.0e-4(3.2e-4) | 4.38e-4(3.5e-4) | 4.5e-4(1.5e-4) | 3.08e-4(1.3e-6) | 3.1215e-4(2.2996e-5) |
| 7 | SB(2) | 100×100 | -1.031628 | -1.03(4.9e-7) | -1.03(4.9e-7) | -1.03(2.9e-4) | -1.03(4.3e-10) | -1.03(0) | -1.031628(4.2794e-16) |
| 3 | RC(2) | 100×100 | 0.397887 | 0.398(1.5e-7) | 0.398(1.5e-7) | 0.398(4.4e-4) | 0.398(5.7e-10) | 0.398(0) | 0.397887(6.8851e-16) |
| 5 | GP(2) | 100×100 | 3 | 3.0(0) | 3.02(0.11) | 3(3.7e-3) | 3.0(5.6e-9) | 3(0) | 3(1.0520e-14) |
| 9 | H3,4(3) | 100×100 | -3.86278 | -3.86(1.4e-2) | -3.86(1.4e-5) | -3.85(9.3e-3) | -3.86(1.5e-9) | -3.86(0) | -3.86278(2.5995e-15) |
| 18 | H6,4(6) | 200×100 | -3.32237 | -3.28(5.8e-2) | -3.27(5.9e-2) | -2.87(0.16) | -3.25(5.4e-2) | -3.32(1.7e-4) | -3.32237(7.8984e-16) |
| 15 | S4,5(4) | 100×100 | -10.1532 | -6.86(2.67) | -5.52(1.59) | -4.76(0.16) | -6.63(3.5) | -10.15(5.0e-5) | -10.1532(8.3709e-14) |
| 16 | S4,7(4) | 100×100 | -10.40294 | -8.27(2.95) | -5.52(2.12) | -5.53(1.41) | -7.40(3.00) | -10.4(4.7e-6) | -10.40294(2.5488e-13) |
| 17 | S4,10(4) | 100×100 | -10.53641 | -9.10(2.92) | -6.57(3.14) | -8.23(1.16) | -6.53(4.00) | -10.54(1.3e-4) | -10.53641(4.1694e-13) |


Table 10. Comparison between RABC and DE in terms of Mean(SD).

| F(n) | NFE_max | DE | SaDE | jDE | JADE | RABC |
| SP(30) | 1500×100 | 9.8e-14(8.4e-14) | 4.5e-20(6.9e-20) | 2.5e-28(3.5e-28) | 1.8e-60(8.4e-60) | 9.1e-61(2.1e-60) |
| S22(30) | 2000×100 | 1.6e-09(1.1e-09) | 1.9e-14(1.1e-14) | 1.5e-23(1.0e-23) | 1.8e-25(8.8e-25) | 3.2e-74(2.0e-73) |
| SR(30) | 5000×100 | 6.6e-11(8.8e-11) | 9.0e-37(5.4e-36) | 5.2e-14(1.1e-13) | 5.7e-61(2.7e-60) | 2.9e-24(1.5e-23) |
| S21(30) | 5000×100 | 4.2e-01(1.1e+00) | 7.4e-11(1.8e-10) | 1.4e-15(1.0e-15) | 8.2e-24(4.0e-23) | 2.8e-02(1.7e-02) |
| RO(30) | 20000×100 | 8.0e-02(5.6e-01) | 1.8e+01(6.7e+0) | 8.0e-02(5.6e-01) | 8.0e-02(5.6e-01) | 3.1e-23(1.0e-22) |
| RO(30) | 3000×100 | 2.1e+00(1.5e+0) | 2.1e+01(7.8e+0) | 1.3e+01(1.4e+01) | 8.0e-02(5.6e-01) | 6.4e-23(3.8e-22) |
| ST(30) | 1500×100 | 0(0) | 0(0) | 0(0) | 0(0) | 0(0) |
| QU(30) | 3000×100 | 4.7e-03(1.2e-03) | 4.8e-03(1.2e-03) | 3.3e-03(8.5e-04) | 6.4e-04(2.5e-04) | 3.6e-02(6.8e-03) |
| S26(30) | 9000×100 | 5.7e+01(7.6e+01) | 4.7e+00(3.3e+01) | 0(0) | 0(0) | 0(0) |
| RA(30) | 5000×100 | 7.1e+01(2.1e+01) | 0(0) | 0(0) | 0(0) | 0(0) |
| AC(30) | 2000×100 | 9.7e-11(5.0e-11) | 4.3e-14(2.6e-14) | 4.7e-15(9.6e-16) | 4.4e-15(0) | 3.8e-14(4.4e-15) |
| GR(30) | 3000×100 | 0(0) | 0(0) | 0(0) | 0(0) | 0(0) |
| P1(30) | 1500×100 | 1.1e-14(1.0e-14) | 1.2e-19(2.0e-19) | 2.6e-29(7.5e-29) | 1.6e-32(5.5e-48) | 1.6e-32(1.7e-47) |
| P2(30) | 1500×100 | 7.5e-14(4.8e-14) | 1.7e-19(2.4e-19) | 1.9e-28(2.2e-28) | 1.4e-32(1.1e-47) | 1.3e-32(8.3e-48) |

Table 11. List of the RCGA methods used for comparison.

| Method | Reference |
| RCGA using dynamic heuristic crossover operators (RCGA-DHX) | Herrera et al. [20] |
| Real-coded memetic algorithm with crossover hill-climbing (RCMA-XHC) | Lozano et al. [40] |
| RCGA using fuzzy recombination crossover with multiple descendants (RCGA-FR-MDs) | Sánchez et al. [58] |
| RCGA based on parent-centric crossover operators (RCGA-GL-25) | García-Martínez et al. [15] |
| RCGA using hybrid crossover operators with multiple descendants (RCGA-HCX-MDs) | Sánchez et al. [59] |

Table 12. Comparison between RABC and RCGAs in terms of mean best solution value.

| F | n | GM | RCGA-DHX | RCMA-XHC | RCGA-FR-MDs | RCGA-GL-25 | RCGA-HCX-MDs | RABC Mean(SD) |
| SP | 25 | 0 | 1.37e-14 | 6.5e-101 | 1.0e-16 | 4.36e-147 | 0 | 8.7333e-54(1.8143e-53) |
| RO | 25 | 0 | 2.17e+01 | 2.2e+00 | 2.1e+01 | 8.22e-01 | 2.04e+01 | 6.6195e-20(4.0033e-19) |
| SR | 25 | 0 | 6.04e+01 | 3.8e-07 | 2.5e+00 | 3.43e-09 | 0 | 1.0873e-11(2.2940e-11) |
| RA | 25 | 0 | 1.13e-11 | 1.4e+00 | 1.3e+00 | 1.38e+01 | 0 | 0(0) |
| GR | 25 | 0 | 9.67e-03 | 1.3e-02 | 9.0e-04 | 2.26e-18 | 0 | 3.4507e-04(1.7256e-03) |
| F10 | 25 | 0 | 1.31e+00 | – | 7.8e-03 | 1.99e-01 | 0 | 8.6630e-02(3.9454e-02) |
| AC | 25 | 0 | 3.81e-07 | – | 5.2e-08 | – | 0 | 4.1516e-14(7.4329e-15) |
| BO2 | 25 | 0 | 0 | – | 0 | 0 | 0 | 0(0) |
| SLE | 10 | 0 | 1.27e+02 | 5.5e+01 | 4.8e+01 | 4.7e+00 | 1.40e+00 | 1.2577e-13(1.8990e-13) |
| FMS | 6 | 0 | 1.64e+01 | 7.7e+00 | 3.7e+00 | 2.96e-01 | 8.41e+00 | 2.5747e+00(2.8213e+00) |

Table 13. List of the PSO and CSA methods used for comparison.

| Method | Reference |
| Particle swarm optimization (PSO) | Liang et al. [38] |
| Fully informed particle swarm (FIPS) | Liang et al. [38] |
| Cooperative particle swarm optimization (CPSO-H) | Liang et al. [38] |
| Comprehensive learning particle swarm optimization (CLPSO) | Liang et al. [38] |
| Clonal selection algorithm (CSA) | Gong et al. [17] |
| Baldwinian clonal selection algorithm (BCSA) | Gong et al. [17] |

5.7. Comparison with other algorithms

RABC was further compared with eight algorithms on Chelouah and Siarry's test set [9], which is widely used for testing algorithms. The methods are listed in Table 18; with the exception of CGA, ACOr, and HKA, most of them are hybrid/memetic algorithms. The control parameters were set as NFE_max = 200,000, ε_rel = 10⁻⁴, and ε_abs = 10⁻⁶ [9]. The numbers of function evaluations and the success rates of each algorithm are listed in Tables 19 and 20, respectively. The average number of function evaluations is again evaluated in relation to the successful minimizations. The average error is defined


Table 14. Comparison with PSO and its variants on 10-D problems in terms of Mean(SD), NFE_max = 30,000.

| Fn | PSO | FIPS | CPSO-H | CLPSO | RABC |
| SP10 | 7.96e-51(3.56e-50) | 3.15e-30(4.56e-30) | 4.98e-45(1.00e-44) | 5.15e-29(2.16e-28) | 3.19e-47(1.00e-46) |
| RO10 | 3.08e+00(7.69e-01) | 2.78e+00(2.26e-01) | 1.53e+00(1.70e+00) | 2.46e+00(1.70e+00) | 8.50e-22(5.76e-21) |
| AC10 | 1.58e-14(1.60e-14) | 3.75e-15(2.13e-14) | 1.49e-14(6.97e-15) | 4.32e-14(2.55e-14) | 1.64e-14(5.19e-15) |
| GR10 | 9.69e-02(5.01e-02) | 1.31e-01(9.32e-02) | 4.07e-02(2.80e-02) | 4.56e-03(4.81e-03) | 5.17e-03(6.89e-03) |
| WE10 | 2.28e-03(7.04e-03) | 2.02e-03(6.40e-03) | 1.07e-15(1.67e-15) | 0(0) | 3.71e-14(6.83e-14) |
| RA10 | 5.82e+00(2.96e+00) | 2.12e+00(1.33e+00) | 0(0) | 0(0) | 0(0) |
| NR10 | 4.05e+00(2.58e+00) | 4.35e+00(2.80e+00) | 2.00e-01(4.10e-01) | 0(0) | 0(0) |
| S2610 | 3.20e+02(1.85e+02) | 7.10e+01(1.50e+02) | 2.13e+02(1.41e+02) | 0(0) | 2.04e-12(6.65e-12) |

Table 15. Comparison with CSA and BCSA on 10-D problems in terms of Mean(SD), NFE_max = 50,000.

| Fn | CSA | BCSA | RABC |
| SP10 | 1.1483e+00(4.8303e-01) | 6.1735e-30(9.1544e-30) | 1.5779e-61(5.5490e-61) |
| RO10 | 7.7924e-01(2.9267e-01) | 6.8559e-13(1.7711e-12) | 1.2440e-22(6.5507e-22) |
| AC10 | 9.7770e-01(3.0621e-01) | 4.4409e-15(0) | 8.8310e-15(1.9578e-15) |
| GR10 | 6.5710e-01(1.3957e-01) | 4.1900e-02(2.4100e-02) | 4.8785e-03(5.6403e-03) |
| WE10 | 8.3160e-01(7.4920e-02) | 0(0) | 0(0) |
| RA10 | 7.4799e-01(3.1822e-01) | 4.9146e-15(2.6918e-14) | 0(0) |
| NR10 | 8.9879e-01(3.3916e-01) | 2.8171e-07(1.5430e-06) | 0(0) |
| S2610 | 3.0710e+00(1.7815e+00) | 0(0) | 0(0) |

Table 16. Comparison with PSO and its variants on 30-D problems in terms of Mean(SD), NFE_max = 200,000.

| Fn | PSO | FIPS | CPSO-H | CLPSO | RABC |
| SP30 | 9.78e-30(2.50e-29) | 2.69e-12(6.84e-13) | 1.16e-113(2.92e-113) | 4.46e-14(1.73e-14) | 1.86e-74(8.36e-74) |
| RO30 | 2.93e+01(2.51e+01) | 2.45e+01(2.19e-01) | 7.08e+00(8.01e+00) | 2.10e+01(2.98e+00) | 1.41e-22(9.12e-22) |
| AC30 | 3.94e-14(1.12e-14) | 4.81e-07(9.17e-08) | 4.93e-14(1.10e-14) | 0(0) | 3.38e-14(4.11e-15) |
| GR30 | 8.13e-03(7.16e-03) | 1.16e-06(1.87e-06) | 3.63e-02(3.60e-02) | 3.14e-10(4.64e-10) | 2.02e-04(1.43e-03) |
| WE30 | 1.30e-04(3.30e-04) | 1.54e-01(1.48e-01) | 7.82e-15(8.50e-15) | 3.45e-07(1.94e-07) | 2.84e-16(1.41e-15) |
| RA30 | 2.90e+01(7.70e+00) | 7.30e+01(1.24e+01) | 0(0) | 4.85e-10(3.63e-10) | 0(0) |
| NR30 | 2.97e+01(1.39e+01) | 6.08e+01(8.35e+00) | 1.00e-01(3.16e-01) | 4.36e-10(2.44e-10) | 0(0) |
| S2630 | 1.10e+03(2.56e+02) | 2.05e+03(9.58e+02) | 1.08e+03(2.59e+02) | 1.27e-12(8.79e-13) | 0(0) |

Table 17. Comparison with CSA and BCSA on 30-D problems in terms of Mean(SD), NFE_max = 200,000.

| Fn | CSA | BCSA | RABC |
| SP30 | 2.3252e-03(5.5760e-04) | 5.6879e-41(1.3119e-40) | 1.8563e-74(8.3557e-74) |
| RO30 | 1.4631e-01(1.0069e-01) | 1.1896e-11(4.2596e-11) | 1.4140e-22(9.1163e-22) |
| AC30 | 1.1321e-02(1.6591e-03) | 4.4409e-15(0) | 3.3842e-14(4.1082e-15) |
| GR30 | 6.5029e-03(2.0366e-03) | 8.6168e-03(1.1259e-02) | 2.0200e-04(1.4283e-03) |
| WE30 | 2.0976e-01(1.8000e-02) | 3.1955e-03(5.9246e-03) | 2.8422e-16(1.4065e-15) |
| RA30 | 3.0853e-01(4.6031e-01) | 0(0) | 0(0) |
| NR30 | 4.6780e-01(4.8535e-01) | 1.2443e+00(5.5933e+00) | 0(0) |
| S2630 | 2.2505e+02(8.4318e+01) | 1.9596e-07(1.0404e-06) | 0(0) |

Table 18. List of the methods compared on Chelouah and Siarry's test set.

| Method | Reference |
| Rosenbrock ABC (RABC) | This work |
| Continuous genetic algorithm (CGA) | Chelouah and Siarry [9] |
| Continuous hybrid algorithm (CHA) | Chelouah and Siarry [10] |
| Continuous tabu simplex search (CTSS) | Chelouah and Siarry [11] |
| Directed tabu search using adaptive pattern search strategy (DTSAPS) | Hedar and Fukushima [19] |
| Ant colony optimization for continuous domains (ACOr) | Socha and Dorigo [63] |
| Hybrid genetic algorithm and PSO (GA-PSO) | Kao and Zahara [29] |
| Heuristic Kalman algorithm (HKA) | Toscano and Lyonnet [68] |
| Hybrid optimization technique combining LPsO and the Nelder–Mead method (LPsNM) | Georgieva and Jordanov [16] |

Table 19. Comparison on Chelouah and Siarry's test set in terms of NFE.

| F | n | CGA | CHA | CTSS | DTSAPS | ACOr | HKA | LPsNM | GA-PSO | RABC |
| RC | 2 | 620 | 295 | 125 | 212 | 857 | 625 | 247 | 8254 | 72 |
| BO1 | 2 | 430 | 132 | 98 | – | 559 | 1275 | – | 174 | 500 |
| ES | 2 | 1504 | 952 | 325 | 223 | – | – | 248 | 809 | 989 |
| GP | 2 | 410 | 259 | 119 | 230 | – | – | 182 | 25,706 | 326 |
| SH | 2 | 575 | 345 | 283 | 274 | – | – | 303 | 96,211 | 465 |
| RO | 2 | 960 | 459 | 369 | 254 | – | – | 226 | 140,894 | 355 |
| ZA | 2 | 620 | 215 | 78 | 201 | – | – | 180 | 95 | 69 |
| SP | 3 | 750 | 371 | 155 | 446 | 392 | 600 | 266 | 206 | 127 |
| H3,4 | 3 | 582 | 492 | 225 | 438 | – | – | 292 | 2117 | 154 |
| S4,5 | 4 | 610 | 698 | 538 | 819 | 793 | 675 | 839 | 529,344 | 1181 |
| S4,7 | 4 | 680 | 620 | 590 | 812 | 748 | 686 | 837 | 56,825 | 1177 |
| S4,10 | 4 | 650 | 635 | 555 | 828 | 715 | 687 | 1079 | 43,314 | 1266 |
| RO | 5 | 3990 | 3290 | – | 1684 | – | – | 2353 | 1,358,064 | 1826 |
| ZA | 5 | 1350 | 950 | – | 1003 | – | – | 1163 | 398 | 276 |
| H6,4 | 6 | 970 | 930 | – | 1784 | 722 | 667 | 1552 | 12,568 | 855 |
| RO | 10 | 21,563 | 14,563 | – | 9037 | – | – | 9188 | 5,319,160 | 5698 |
| ZA | 10 | 6991 | 4291 | – | 4032 | – | – | 6826 | 872 | 1686 |
| RO | 50 | 78,356 | 55,356 | – | – | – | – | – | – | 48,973 |
| ZA | 50 | 75,201 | 75,520 | – | – | – | – | – | – | 72,565 |

Table 20. Comparison on Chelouah and Siarry's test set in terms of success rate (%).

| F | n | CGA | CHA | CTSS | DTSAPS | ACOr | HKA | LPsNM | GA-PSO | RABC |
| RC | 2 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| BO1 | 2 | 100 | 100 | 100 | – | 100 | 100 | – | 100 | 100 |
| ES | 2 | 100 | 100 | 100 | 82 | – | – | 100 | 100 | 100 |
| GP | 2 | 100 | 100 | 100 | 100 | – | – | 100 | 100 | 100 |
| SH | 2 | 100 | 100 | 100 | 100 | – | – | 85 | 100 | 100 |
| RO | 2 | 100 | 100 | 100 | 100 | – | – | 100 | 100 | 100 |
| ZA | 2 | 100 | 100 | 100 | 100 | – | – | 100 | 100 | 100 |
| SP | 3 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| H3,4 | 3 | 100 | 100 | 100 | 100 | – | – | 100 | 100 | 100 |
| S4,5 | 4 | 76 | 85 | 75 | 75 | 57 | 93 | 100 | 100 | 100 |
| S4,7 | 4 | 83 | 85 | 77 | 65 | 79 | 92 | 100 | 100 | 100 |
| S4,10 | 4 | 81 | 85 | 74 | 52 | 81 | 93 | 96 | 100 | 100 |
| RO | 5 | 100 | 100 | – | 85 | – | – | 91 | 100 | 100 |
| ZA | 5 | 100 | 100 | – | 100 | – | – | 100 | 100 | 100 |
| H6,4 | 6 | 100 | 100 | – | 83 | 100 | 97 | 100 | 100 | 100 |
| RO | 10 | 80 | 83 | – | 100 | – | – | 88 | 100 | 100 |
| ZA | 10 | 100 | 100 | – | 85 | – | – | 100 | 100 | 100 |
| RO | 50 | 77 | 79 | – | – | – | – | – | – | 100 |
| ZA | 50 | 100 | 100 | – | – | – | – | – | – | 100 |

as the average of the function value gaps between the best successful point found and the known global optimum. The average errors available for six algorithms are listed in Table 21, with the best results for each case highlighted in boldface. The results show that RABC was the only algorithm to achieve a 100% success rate on all 19 test problems. RABC was more efficient than the other algorithms on eight problems, especially on Branin, Sphere, Zakharov, and the high-dimensional Rosenbrock. GA-PSO also achieved a 100% success rate on the first 17 problems; however, it needed more function evaluations than RABC in most cases. The comparison results indicate the high efficiency and robustness of RABC.

6. Discussion

From Section 5, it can be seen that there is considerable scope for improving ABC in terms of accuracy, efficiency, and robustness by introducing the modified RM. The proposed RABC algorithm is competitive with other well-known and recently proposed global optimization algorithms. This improvement can mainly be attributed to the following reasons.

(1) ABC produces the candidate solution from its parent by the simple operator of Eq. (4); it always searches along the coordinate axes during the entire optimization process. The modified RM algorithm increases the number of search directions of ABC by rotating the search directions.

(2) In ABC, the best solution discovered thus far is not always held in the population, because it may be replaced with a randomly produced solution discovered by a scout, and hence may not contribute to the production of trial solutions. In RABC, however, the best solution discovered thus far is taken as the starting point for RM,


Table 21. Comparison on Chelouah and Siarry's test set in terms of average error.

| F | n | GM | CGA | CHA | CTSS | HKA | GA-PSO | RABC |
| RC | 2 | 0.397887 | 1.0e-4 | 1.00e-4 | 5.00e-3 | 5.00e-6 | 9.00e-5 | 2.00e-5 |
| BO1 | 2 | 0 | 3.0e-4 | 2.00e-7 | 5.00e-6 | 4.00e-5 | 1.00e-5 | 4.13e-7 |
| ES | 2 | -1.0 | 1.0e-3 | 1.00e-3 | 5.00e-3 | – | 3.00e-5 | 4.27e-5 |
| GP | 2 | 3.0 | 1.0e-3 | 1.00e-3 | 1.00e-3 | – | 1.20e-4 | 1.52e-4 |
| SH | 2 | -186.7309 | 5.0e-3 | 5.00e-3 | 1.00e-3 | – | 7.00e-5 | 8.15e-3 |
| RO | 2 | 0 | 4.0e-3 | 4.00e-3 | 4.00e-3 | – | 6.40e-4 | 4.15e-7 |
| ZA | 2 | 0 | 3.0e-6 | 3.00e-6 | 3.00e-7 | – | 5.00e-5 | 3.97e-7 |
| SP | 3 | 0 | 2.0e-4 | 2.00e-4 | 2.00e-4 | 2.00e-6 | 4.00e-5 | 4.09e-7 |
| H3,4 | 3 | -3.86278 | 5.0e-3 | 5.00e-3 | 5.00e-3 | – | 2.00e-4 | 1.79e-4 |
| S4,5 | 4 | -10.1532 | 0.14 | 9.00e-3 | 7.00e-3 | 2.00e-4 | 1.40e-4 | 5.22e-4 |
| S4,7 | 4 | -10.40294 | 0.12 | 1.00e-2 | 1.00e-3 | 8.00e-4 | 1.50e-4 | 5.52e-4 |
| S4,10 | 4 | -10.53461 | 0.15 | 1.50e-2 | 1.00e-3 | 5.00e-4 | 1.20e-4 | 5.12e-4 |
| RO | 5 | 0 | 0.15 | 1.80e-2 | – | – | 1.30e-4 | 6.97e-7 |
| ZA | 5 | 0 | 4.0e-4 | 6.00e-5 | – | – | 0.00e-5 | 6.61e-7 |
| H6,4 | 6 | -3.32237 | 4.0e-2 | 8.00e-3 | – | 8.00e-3 | 2.40e-4 | 2.02e-4 |
| RO | 10 | 0 | 2.0e-2 | 8.00e-3 | – | – | 5.00e-5 | 8.80e-7 |
| ZA | 10 | 0 | 1.0e-6 | 1.00e-6 | – | – | 0.00e-5 | 9.19e-7 |
| RO | 50 | 0 | 5.00e-2 | 5.00e-3 | – | – | – | 8.61e-7 |
| ZA | 50 | 0 | 1.00e-5 | 1.00e-5 | – | – | – | 9.80e-7 |

and if it is improved, the solution in the middle position can be replaced by the updated solution from RM. Therefore, the best solution obtained using RABC is more likely to contribute to the production of trial solutions than that obtained using ABC.

(3) By incorporating the local search property of the modified RM technique, the population in RABC can still maintain diversity while approaching an optimum, thereby avoiding premature convergence.

A repulsion factor was introduced in [2,21] to increase the diversity of the population and alleviate stagnation in swarm-based algorithms. It encourages an individual to search in a direction opposite to that of its elites and to move toward a region that may be ignored by the population; the repulsion technique thus guarantees that not all individuals move towards the best solution. The purpose of introducing RM into swarm-based algorithms is to increase the number of search directions, which is also the purpose of introducing the repulsion factor. Compared to the repulsion factor, the advantage of RM is that it rotates the search directions according to the search process of previous iterations. Moreover, RM is a local search tool and is therefore more efficient. A multiple trajectory search (MTS) method for large-scale optimization was presented by Tseng and Chen [69]. In MTS, an agent tests the performance of three predefined methods and chooses the best one to conduct the local search, so MTS can adapt to the landscape of the solution's neighborhood. RM can also adapt to the landscape by changing its search directions; moreover, RM searches not only along each dimension but also along other directions.

7. Conclusion

A novel hybrid global optimization method, RABC, which combines ABC and a slightly modified RM, has been proposed in this paper. In the new algorithm, ABC is utilized to locate regions of attraction, and RM is then adopted to keep up with the direction changes and refine the best solution. The proposed RABC algorithm has been tested on a number of benchmark functions and has shown very reliable performance. Compared with ABC and other algorithms, RABC has demonstrated strong competitive capabilities in terms of robustness, efficiency, and accuracy.

Acknowledgement

This research was supported by the China Postdoctoral Science Foundation under Grant No. 20100471444 and the National Natural Science Foundation of China under Grant No. 90815024. The authors thank the editors and three anonymous reviewers for their valuable comments. We also acknowledge the support of Prof. Xiaohui Su in proofreading.

Appendix A. List of benchmark functions

To define the test problems, we have adopted the following format: No. Name (Symbol, Characteristic: U – Unimodal, M – Multimodal, S – Separable, N – Nonseparable), followed by a description of the problem.

1. Bohachevsky 1 Problem (BO1, MS)
7. Conclusion

A novel hybrid global optimization method, RABC, which combines ABC and a slightly modified RM, has been proposed in this paper. In the new algorithm, ABC is utilized to locate regions of attraction, and then RM is adopted to keep up with the direction changes and refine the best solution. The proposed RABC algorithm has been tested on a number of benchmark functions and has shown very reliable performance. Compared with ABC and other algorithms, RABC has demonstrated strong competitive capabilities in terms of robustness, efficiency, and accuracy.

Acknowledgement

This research was supported by the China Postdoctoral Science Foundation under Grant No. 20100471444 and the National Natural Science Foundation of China under Grant No. 90815024. The authors thank the editors and three anonymous reviewers for their valuable comments. We also acknowledge the support of Prof. Xiaohui Su in proofreading the manuscript.

Appendix A. List of benchmark functions

To define the test problems, we have adopted the following format:
No. Name (Symbol, Characteristic: U – Unimodal, M – Multimodal, S – Separable, N – Nonseparable). Description of the problem.

1. Bohachevsky 1 Problem (BO1, MS)
$f(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7$, subject to $-100 \le x_1, x_2 \le 100$.


The number of local minima is unknown, but the global minimum is located at x* = (0, 0) with f(x*) = 0.

2. Bohachevsky 2 Problem (BO2, MN)

$f(x) = \sum_{i=1}^{n-1} \left[ x_i^2 + 2x_{i+1}^2 - 0.3\cos(3\pi x_i)\cos(4\pi x_{i+1}) + 0.3 \right]$, subject to $-100 \le x_i \le 100$, $i = 1, \ldots, n$.

The number of local minima is unknown, but the global minimum is located at the origin x* = (0, ..., 0) with f(x*) = 0.

3. Branin RCOS (RC, MN)

$f(x) = \left( x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10\left( 1 - \frac{1}{8\pi} \right)\cos x_1 + 10$, subject to $-5 \le x_1, x_2 \le 15$.

There are three global minima, located at x* ≈ (−π, 12.275), (π, 2.275), (3π, 2.475) with f(x*) = 5/(4π).

4. Easom (ES, UN)

$f(x) = -\cos(x_1)\cos(x_2)\exp\!\left( -\left( (x_1 - \pi)^2 + (x_2 - \pi)^2 \right) \right)$, subject to $-10 \le x_1, x_2 \le 10$.

The minimum value is located at x* = (π, π) with f(x*) = −1.

5. Goldstein and Price (GP, MN)

$f(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2) \right] \times \left[ 30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2) \right]$, subject to $-2 \le x_1, x_2 \le 2$.

There are four local minima, and the global minimum is located at x* = (0, −1) with f(x*) = 3.

6. Shekel's Foxholes (SF, MN)

$f(x) = \left[ \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right]^{-1}$, subject to $-65.536 \le x_1, x_2 \le 65.536$, where

$a_1 = [-32, -16, 0, 16, 32, -32, -16, 0, 16, 32, -32, -16, 0, 16, 32, -32, -16, 0, 16, 32, -32, -16, 0, 16, 32]$,
$a_2 = [-32, -32, -32, -32, -32, -16, -16, -16, -16, -16, 0, 0, 0, 0, 0, 16, 16, 16, 16, 16, 32, 32, 32, 32, 32]$.

The global optimum is located at x* ≈ (−32, −32) with f(x*) ≈ 0.998004.
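Since the constants above simply enumerate the 25 nodes of a 5 × 5 grid over {−32, −16, 0, 16, 32}², the function is compact to code. A hedged sketch follows; the function name and the NumPy dependency are our own.

```python
import numpy as np

def shekel_foxholes(x):
    """De Jong's foxholes: 25 narrow pits on a regular 5x5 grid."""
    pts = np.array([-32.0, -16.0, 0.0, 16.0, 32.0])
    g1, g2 = np.meshgrid(pts, pts)          # g1 cycles fast (a1), g2 slow (a2)
    a = np.stack([g1.ravel(), g2.ravel()])  # shape (2, 25); columns are pits
    x = np.asarray(x, dtype=float)
    terms = [1.0 / (j + 1 + np.sum((x - a[:, j]) ** 6)) for j in range(25)]
    return 1.0 / (1.0 / 500.0 + sum(terms))

print(shekel_foxholes([-32.0, -32.0]))  # ~0.998004 at the global optimum
```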

7. Six-hump Camel Back (SB, MN)

$f(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$, subject to $-5 \le x_1, x_2 \le 5$.

The function has two global optima at x* ≈ (0.0898, −0.7126) and (−0.0898, 0.7126) with f(x*) ≈ −1.03163.

8. Shubert (SH, MN)

$f(x) = \left( \sum_{i=1}^{5} i\cos((i+1)x_1 + i) \right) \left( \sum_{i=1}^{5} i\cos((i+1)x_2 + i) \right)$, subject to $-10 \le x_1, x_2 \le 10$.

The function has 760 local minima, 18 of which are global with f(x*) ≈ −186.7309.

9. Hartman 3 (H3,4, MN)

$f(x) = -\sum_{i=1}^{4} c_i \exp\!\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right)$, subject to $0 \le x_i \le 1$, $i = 1, 2, 3$, with constants $a_{ij}$, $p_{ij}$, and $c_i$ given in the following table.

There are four local minima, x_local ≈ (p_i1, p_i2, p_i3) with f(x_local) ≈ −c_i. The global minimum is located at x* = (0.114614, 0.555649, 0.852547) with f(x*) = −3.862782.

i | c_i | a_ij, j = 1, 2, 3 | p_ij, j = 1, 2, 3
1 | 1.0 | 3.0  10  30 | 0.3689  0.1170  0.2673
2 | 1.2 | 0.1  10  35 | 0.4699  0.4387  0.7470
3 | 3.0 | 3.0  10  30 | 0.1091  0.8732  0.5547
4 | 3.2 | 0.1  10  35 | 0.03815 0.5743  0.8828

10. Helical valley problem (HV, UN)

$f(x) = 100\left[ (x_3 - 10\theta(x_1, x_2))^2 + \left( \sqrt{x_1^2 + x_2^2} - 1 \right)^2 \right] + x_3^2$, where

$\theta(x_1, x_2) = \begin{cases} \dfrac{1}{2\pi}\arctan\dfrac{x_2}{x_1}, & \text{if } x_1 > 0, \\[4pt] \dfrac{1}{2\pi}\arctan\dfrac{x_2}{x_1} + 0.5, & \text{if } x_1 < 0, \end{cases}$

subject to $-10 \le x_i \le 10$, $i = 1, 2, 3$.

This is a steep-sided valley which follows a helical path. The minimum is located at x* = (1, 0, 0) with f(x*) = 0.
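A direct transcription of the piecewise θ is shown below as a sketch; the x₁ = 0 branch is not covered by the definition above, so the limiting value used here is our own assumption.

```python
import math

def helical_valley(x):
    """Helical valley; theta follows the piecewise definition above."""
    x1, x2, x3 = x
    if x1 > 0:
        theta = math.atan(x2 / x1) / (2.0 * math.pi)
    elif x1 < 0:
        theta = math.atan(x2 / x1) / (2.0 * math.pi) + 0.5
    else:
        # x1 == 0 is undefined above; use a limiting value (an assumption).
        theta = 0.25 if x2 >= 0 else -0.25
    return (100.0 * ((x3 - 10.0 * theta) ** 2
                     + (math.hypot(x1, x2) - 1.0) ** 2) + x3 ** 2)

print(helical_valley([1.0, 0.0, 0.0]))  # 0.0 at the global minimum
```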

11. Colville (CO, UN)

$f(x) = 100(x_1^2 - x_2)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90(x_3^2 - x_4)^2 + 10.1\left( (x_2 - 1)^2 + (x_4 - 1)^2 \right) + 19.8(x_2 - 1)(x_4 - 1)$, subject to $-10 \le x_i \le 10$, $i = 1, 2, 3, 4$.

This function has a saddle near (1, 1, 1, 1). The only minimum is located at x* = (1, 1, 1, 1) with f(x*) = 0.

12. Kowalik (KO, MN)

$f(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$, subject to $-5 \le x_i \le 5$, $i = 1, 2, 3, 4$, where

$a = [0.1957, 0.1947, 0.1735, 0.1600, 0.0844, 0.0627, 0.0456, 0.0342, 0.0323, 0.0235, 0.0246]$,
$b = [4.0, 2.0, 1.0, 0.5, 0.25, 0.167, 0.125, 0.1, 0.0833, 0.0714, 0.0625]$.

The global optimal value is f(x*) ≈ 3.0748e−4, located at x* ≈ (0.192, 0.190, 0.123, 0.135).

" #2 n n X X k k f ðxÞ ¼ ði þ 0:5Þððxi =iÞ  1Þ ; subject to  n 6 xi 6 n; i ¼ 1; 2; 3; 4: i¼1

k¼1

The global optimum is located at x⁄ = (1, 2, 3, 4) with f(x⁄) = 0. 14. Power Sum (PS, MN)

14. Power Sum (PS, MN)

$f(x) = \sum_{k=1}^{n} \left[ \left( \sum_{i=1}^{n} x_i^k \right) - b_k \right]^2$, $b = [8, 18, 44, 114]$, subject to $-n \le x_i \le n$, $i = 1, 2, 3, 4$.

The global optimum is located at x* = (1, 2, 2, 3) with f(x*) = 0.

15–17 (m = 5, 7, 10). Shekel's Problem Family (S4,m, MN)

$f(x) = -\sum_{j=1}^{m} \frac{1}{\sum_{i=1}^{4} (x_i - a_{ij})^2 + c_j}$, subject to $0 \le x_i \le 10$, $i = 1, 2, 3, 4$, where $m = 5, 7, 10$.

There are m local minima, and the global minimum is located at x* = (4, 4, 4, 4) with f(x*) ≈ −10.1532 for m = 5, f(x*) ≈ −10.40294 for m = 7, and f(x*) ≈ −10.53641 for m = 10. The constants can be seen in [9].
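A parametric sketch follows. The paper takes its constants from [9], which we do not reproduce here; the matrix and weights below are the widely used classical Shekel data and are an assumption on our part.

```python
import numpy as np

# Classical Shekel constants (an assumption; the paper cites [9] for its values).
A = np.array([[4.0, 4.0, 4.0, 4.0], [1.0, 1.0, 1.0, 1.0],
              [8.0, 8.0, 8.0, 8.0], [6.0, 6.0, 6.0, 6.0],
              [3.0, 7.0, 3.0, 7.0], [2.0, 9.0, 2.0, 9.0],
              [5.0, 5.0, 3.0, 3.0], [8.0, 1.0, 8.0, 1.0],
              [6.0, 2.0, 6.0, 2.0], [7.0, 3.6, 7.0, 3.6]])
c = np.array([0.1, 0.2, 0.2, 0.4, 0.4, 0.6, 0.3, 0.7, 0.5, 0.5])

def shekel(x, m):
    """S4,m: superposition of m inverted peaks centered at the rows of A."""
    x = np.asarray(x, dtype=float)
    return -np.sum(1.0 / (np.sum((x - A[:m]) ** 2, axis=1) + c[:m]))

print(shekel([4.0, 4.0, 4.0, 4.0], 5))  # ~ -10.1532
```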

18. Hartman 6 (H6,4, MN)

$f(x) = -\sum_{i=1}^{4} c_i \exp\!\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right)$,

subject to $0 \le x_i \le 1$, $i = 1, \ldots, 6$. The constants $a_{ij}$, $p_{ij}$, and $c_i$ can be seen in [9]. There are four local minima, and the global minimum is located at x* = (0.201690, 0.150011, 0.476874, 0.275332, 0.311652, 0.657301) with f(x*) = −3.322368.

19. Michalewicz (MI, MS)

$f(x) = -\sum_{i=1}^{n} \sin(x_i)\left( \sin(i x_i^2 / \pi) \right)^{2m}$,

where m = 10, subject to $0 \le x_i \le \pi$, $i = 1, \ldots, n$. There are n! local optima, and the global minimum value for n = 10 is f(x*) = −9.660152.
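Because the exponent structure is easy to misread, note that the inner sine argument contains x_i squared. A short sketch with our naming:

```python
import math

def michalewicz(x, m=10):
    """Michalewicz; larger m makes the valleys steeper and narrower."""
    return -sum(math.sin(xi) * math.sin((i + 1) * xi ** 2 / math.pi) ** (2 * m)
                for i, xi in enumerate(x))  # i + 1 because the formula indexes from 1
```

For n = 10, a good optimizer should approach the value −9.660152 quoted above.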

20. Whitley (WI, MN)

$f(x) = \sum_{k=1}^{n} \sum_{j=1}^{n} \left( \frac{y_{j,k}^2}{4000} - \cos(y_{j,k}) + 1 \right)$, $\quad y_{j,k} = 100(x_k - x_j^2)^2 + (1 - x_j)^2$,

subject to $-100 \le x_i \le 100$, $i = 1, \ldots, n$. The global minimum is located at x* = (1, 1, ..., 1) with f(x*) = 0. This function is a composition of the Griewank and Rosenbrock functions: its landscape resembles Rosenbrock's function at large scale and Griewank's function at small scale.
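The composition is easiest to see in code: every ordered pair (j, k) feeds a Rosenbrock term y through a one-dimensional Griewank shell. A sketch under our naming:

```python
import math

def whitley(x):
    """Whitley: Griewank shell applied to all pairwise Rosenbrock terms."""
    n, total = len(x), 0.0
    for k in range(n):
        for j in range(n):
            y = 100.0 * (x[k] - x[j] ** 2) ** 2 + (1.0 - x[j]) ** 2
            total += y ** 2 / 4000.0 - math.cos(y) + 1.0
    return total

print(whitley([1.0, 1.0, 1.0]))  # 0.0 at x* = (1, ..., 1)
```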

21. Powell (PO, UN)

$f(x) = \sum_{i=1}^{n/4} \left[ (x_{4i-3} + 10x_{4i-2})^2 + 5(x_{4i-1} - x_{4i})^2 + (x_{4i-2} - 2x_{4i-1})^4 + 10(x_{4i-3} - x_{4i})^4 \right]$,

subject to $-4 \le x_i \le 5$, $i = 1, \ldots, 24$. This is a unimodal function with f(x*) = 0 at x* = (0, ..., 0). The minimum is hard to obtain accurately because the Hessian matrix at the optimum is singular.

22. Expansion of F10 (F10, MN)

$f(x) = f_{10}(x_1, x_2) + \cdots + f_{10}(x_{i-1}, x_i) + \cdots + f_{10}(x_n, x_1)$,
$f_{10}(x, y) = (x^2 + y^2)^{0.25} \left[ \sin^2\!\left( 50 (x^2 + y^2)^{0.1} \right) + 1 \right]$,

subject to $-100 \le x_i \le 100$, $i = 1, \ldots, n$. The global minimum value is f(x*) = 0.
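The expansion chains consecutive coordinates and wraps around from x_n back to x_1, which the modulo index below makes explicit. A sketch with our naming:

```python
import math

def f10(x, y):
    """Pairwise F10 term."""
    s = x * x + y * y
    return s ** 0.25 * (math.sin(50.0 * s ** 0.1) ** 2 + 1.0)

def expanded_f10(x):
    """Sum of f10 over the ring (x1, x2), (x2, x3), ..., (xn, x1)."""
    n = len(x)
    return sum(f10(x[i], x[(i + 1) % n]) for i in range(n))

print(expanded_f10([0.0] * 10))  # 0.0 at the global minimum
```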

23. Ackley (AC, MN)

$f(x) = -20\exp\!\left( -0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2} \right) - \exp\!\left( \frac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i) \right) + 20 + e$,

subject to $-32 \le x_i \le 32$, $i = 1, \ldots, n$. The number of local minima is not known, but the global minimum is located at the origin with f(x*) = 0.

24. Griewank (GR, MN)

$f(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\!\left( \frac{x_i}{\sqrt{i}} \right) + 1$, subject to $-600 \le x_i \le 600$, $i = 1, \ldots, n$.

The global minimum is located at x* = (0, ..., 0) with f(x*) = 0. This function becomes simpler and smoother as the dimensionality of the search space is increased.

25. Penalized Levy and Montalvo 1 (P1, MN)

$f(x) = \frac{\pi}{n}\left\{ 10\sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10\sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$,

where $y_i = 1 + 0.25(x_i + 1)$ and

$u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a, \\ 0, & -a \le x_i \le a, \\ k(-x_i - a)^m, & x_i < -a, \end{cases}$

subject to $-50 \le x_i \le 50$, $i = 1, \ldots, n$. The global minimum is known to be x* = (−1, −1, ..., −1) with f(x*) = 0.
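The penalty term u is what keeps the search inside [−a, a]; it can be coded directly from the case analysis above. A sketch with our naming:

```python
import math

def u(x, a, k, m):
    """Penalty: zero on [-a, a], polynomial growth of order m outside."""
    if x > a:
        return k * (x - a) ** m
    if x < -a:
        return k * (-x - a) ** m
    return 0.0

def penalized_1(x):
    """P1: penalized Levy and Montalvo 1."""
    n = len(x)
    y = [1.0 + 0.25 * (xi + 1.0) for xi in x]
    s = 10.0 * math.sin(math.pi * y[0]) ** 2
    s += sum((y[i] - 1.0) ** 2 * (1.0 + 10.0 * math.sin(math.pi * y[i + 1]) ** 2)
             for i in range(n - 1))
    s += (y[-1] - 1.0) ** 2
    return math.pi / n * s + sum(u(xi, 10.0, 100.0, 4) for xi in x)

print(penalized_1([-1.0] * 30))  # ~0 (up to floating point) at x* = (-1, ..., -1)
```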

26. Penalized Levy and Montalvo 2 (P2, MN)

$f(x) = 0.1\left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_{i+1}) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2\pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$,

subject to $-50 \le x_i \le 50$, $i = 1, \ldots, n$, with u as defined for P1. The global minimum is known to be x* = (1, 1, ..., 1) with f(x*) = 0.

27. Quartic function (QU, US)

$f(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$, subject to $-1.28 \le x_i \le 1.28$, $i = 1, \ldots, n$.

The global minimum is x* = (0, ..., 0) with f(x*) = 0. This function is padded with noise; the random noise ensures that the algorithm never gets the same value at the same point.
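Because a fresh uniform draw is added on every call, two evaluations at the same point generally differ, which is exactly what the noise is meant to test. A sketch with our naming:

```python
import random

def quartic(x):
    """Noisy quartic: deterministic part plus a fresh uniform[0, 1) draw."""
    return sum((i + 1) * xi ** 4 for i, xi in enumerate(x)) + random.random()

print(quartic([0.0] * 30), quartic([0.0] * 30))  # two different values at x*
```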

28. Rastrigin (RA, MS)

$f(x) = 10n + \sum_{i=1}^{n} \left( x_i^2 - 10\cos(2\pi x_i) \right)$, subject to $-5.12 \le x_i \le 5.12$, $i = 1, \ldots, n$.

The number of local minima is not known. The global minimum is x* = (0, ..., 0) with f(x*) = 0.

29. Non-continuous Rastrigin (NR, MS)

$f(x) = 10n + \sum_{i=1}^{n} \left( y_i^2 - 10\cos(2\pi y_i) \right)$,

where $y_i = \begin{cases} x_i, & |x_i| < 1/2, \\ \mathrm{round}(2x_i)/2, & |x_i| \ge 1/2, \end{cases}$ subject to $-5.12 \le x_i \le 5.12$, $i = 1, \ldots, n$. The number of local minima is not known. The global minimum is x* = (0, ..., 0) with f(x*) = 0.
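The snapping rule makes the landscape piecewise constant away from the center region; it can be coded directly from the definition. A sketch with our naming; note that Python's round() breaks ties to even, which the definition above does not pin down.

```python
import math

def noncontinuous_rastrigin(x):
    """Rastrigin evaluated on coordinates snapped to a half-unit grid."""
    y = [xi if abs(xi) < 0.5 else round(2.0 * xi) / 2.0 for xi in x]
    return 10.0 * len(x) + sum(yi ** 2 - 10.0 * math.cos(2.0 * math.pi * yi)
                               for yi in y)
```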

30. Rosenbrock (RO, UN)

$f(x) = \sum_{i=1}^{n-1} \left[ 100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$, subject to $-30 \le x_i \le 30$, $i = 1, \ldots, n$.

It is unimodal, yet due to a saddle point it is very difficult to locate the minimum x* = (1, 1, ..., 1) with f(x*) = 0. The global optimum lies inside a long, narrow, parabolic-shaped flat valley. Finding the valley is trivial; however, converging to the global optimum is difficult.

31. Schwefel's Ridge or Schwefel's problem 1.2 (SR, UN)

$f(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$, subject to $-100 \le x_i \le 100$, $i = 1, \ldots, n$.

The global minimum is located at x* = (0, ..., 0) with f(x*) = 0.

32. Sphere (SP, US)

$f(x) = \sum_{i=1}^{n} x_i^2$, subject to $-100 \le x_i \le 100$, $i = 1, \ldots, n$.

The global minimum is located at x* = (0, 0, ..., 0) with f(x*) = 0.

33. Step (ST, US)

$f(x) = \sum_{i=1}^{n} \left( \lfloor x_i + 0.5 \rfloor \right)^2$, subject to $-100 \le x_i \le 100$, $i = 1, \ldots, n$.

This is a noncontinuous function. The global minimum f(x*) = 0 is attained whenever $-0.5 \le x_i < 0.5$, $i = 1, \ldots, n$.

34. Schwefel's problem 2.21 (S21, UN)

$f(x) = \max_i \{ |x_i|,\ 1 \le i \le n \}$, subject to $-100 \le x_i \le 100$, $i = 1, \ldots, n$.

The global minimum is located at x* = (0, ..., 0) with f(x*) = 0.

35. Schwefel's problem 2.22 (S22, UN)

$f(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$, subject to $-10 \le x_i \le 10$, $i = 1, \ldots, n$.

The global minimum is located at x* = (0, ..., 0) with f(x*) = 0.

36. Schwefel's problem 2.26 (S26, MN)

$f(x) = -\sum_{i=1}^{n} x_i \sin\!\left( \sqrt{|x_i|} \right)$ or $f(x) = 418.98288727243369\,n - \sum_{i=1}^{n} x_i \sin\!\left( \sqrt{|x_i|} \right)$, subject to $-500 \le x_i \le 500$, $i = 1, \ldots, n$.

For the first format, the global minimum is located at x* ≈ (420.9687, ..., 420.9687) with f(x*) ≈ −12569.5. For the second format, the global minimum is located at x* ≈ (420.9687, ..., 420.9687) with f(x*) ≈ 0.

37. Weierstrass (WE, MN)

$f(x) = \sum_{i=1}^{n} \left\{ \sum_{k=0}^{k_{\max}} a^k \cos\!\left( 2\pi b^k (x_i + 0.5) \right) \right\} - n \sum_{k=0}^{k_{\max}} a^k \cos(\pi b^k)$, $\quad a = 0.5$, $b = 3$, $k_{\max} = 30$,

subject to $-0.5 \le x_i \le 0.5$, $i = 1, \ldots, n$. The global minimum is located at x* = (0, ..., 0) with f(x*) = 0.
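The subtracted term is simply the inner sum evaluated at x_i = 0, so f(0) = 0 by construction. A direct sketch with the parameters above (our naming):

```python
import math

def weierstrass(x, a=0.5, b=3.0, k_max=30):
    """Truncated Weierstrass sum; extremely rugged but bounded."""
    at_zero = sum(a ** k * math.cos(math.pi * b ** k) for k in range(k_max + 1))
    outer = sum(sum(a ** k * math.cos(2.0 * math.pi * b ** k * (xi + 0.5))
                    for k in range(k_max + 1))
                for xi in x)
    return outer - len(x) * at_zero

print(weierstrass([0.0] * 10))  # 0.0 at the global minimum
```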

38. Zakharov (ZA, UN)

$f(x) = \sum_{i=1}^{n} x_i^2 + \left( \sum_{i=1}^{n} 0.5 i x_i \right)^2 + \left( \sum_{i=1}^{n} 0.5 i x_i \right)^4$, subject to $-5 \le x_i \le 10$, $i = 1, \ldots, n$.

The global minimum is located at x* = (0, ..., 0) with f(x*) = 0.

39. Fletcher–Powell (FP, MN)

$f(x) = \sum_{i=1}^{n} (A_i - B_i)^2$, where $A_i = \sum_{j=1}^{n} (a_{ij}\sin\alpha_j + b_{ij}\cos\alpha_j)$, $B_i = \sum_{j=1}^{n} (a_{ij}\sin x_j + b_{ij}\cos x_j)$, subject to $-\pi \le x_i \le \pi$, $i = 1, \ldots, n$.

The number of local minima is unknown, but the global minimum is located at x* = α with f(x*) = 0. The constants can be found in the work of Karaboga and Akay [33].


40. Modified Langerman (ML, MN)

$f(x) = -\sum_{j=1}^{m} c_j \exp\!\left( -\frac{1}{\pi} \sum_{i=1}^{n} (x_i - a_{ji})^2 \right) \cos\!\left( \pi \sum_{i=1}^{n} (x_i - a_{ji})^2 \right)$, subject to $0 \le x_i \le 10$, $i = 1, \ldots, n$,

where m = 5. The number of local minima is not known. The constants can be seen in [53], and the global minima are shown in the following table.

n  | f(x*)    | x*
2  | −1.0809  | (9.6810707, 0.6666515)
5  | −0.965   | (8.074, 8.777, 3.467, 1.867, 6.708)
10 | −0.965   | (8.074, 8.777, 3.467, 1.867, 6.708, 6.349, 4.534, 0.276, 7.633, 1.567)

41. Modified Shekel's Foxholes (MS, MN)

$f(x) = -\sum_{j=1}^{30} \frac{1}{c_j + \sum_{i=1}^{n} (x_i - a_{ji})^2}$, subject to $0 \le x_i \le 10$, $i = 1, \ldots, n$.

The number of local minima is not known. The constants can be seen in [53], and the global minima are shown in the following table.

n  | f(x*)      | x*
2  | −12.11901  | (8.024, 9.147)
5  | −10.4056   | (8.025, 9.152, 5.114, 7.621, 4.564)
10 | −10.2088   | (8.025, 9.152, 5.114, 7.621, 4.564, 4.711, 2.996, 6.126, 0.734, 4.982)

References

[1] B. Akay, D. Karaboga, A modified artificial bee colony algorithm for real-parameter optimization, Information Sciences (2010), doi:10.1016/j.ins.2010.07.015.
[2] R. Akbari, A. Mohammadi, K. Ziarati, A novel bee swarm optimization algorithm for numerical function optimization, Communications in Nonlinear Science and Numerical Simulation 15 (10) (2010) 3142–3155.
[3] B. Alatas, Chaotic bee colony algorithms for global numerical optimization, Expert Systems with Applications 37 (8) (2010) 5682–5687.
[4] M.M. Ali, C. Khompatraporn, Z.B. Zabinsky, A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems, Journal of Global Optimization 31 (4) (2005) 635–672.
[5] A. Baykasoglu, L. Ozbakir, P. Tapkan, Artificial bee colony algorithm and its application to generalized assignment problem, in: Swarm Intelligence, Focus on Ant and Particle Swarm Optimization, ITech Education and Publishing, Vienna, Austria, 2007, pp. 113–144.
[6] A. Biswas, S. Das, A. Abraham, S. Dasgupta, Stability analysis of the reproduction operator in bacterial foraging optimization, Theoretical Computer Science 411 (21) (2010) 2127–2139.
[7] J. Brest, S. Greiner, B. Boskovic, M. Mernik, V. Zumer, Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems, IEEE Transactions on Evolutionary Computation 10 (6) (2006) 646–657.
[8] A. Caponio, F. Neri, V. Tirronen, Super-fit control adaptation in memetic differential evolution frameworks, Soft Computing – A Fusion of Foundations, Methodologies and Applications 13 (8) (2009) 811–831.
[9] R. Chelouah, P. Siarry, A continuous genetic algorithm designed for the global optimization of multimodal functions, Journal of Heuristics 6 (2) (2000) 191–213.
[10] R. Chelouah, P. Siarry, Genetic and Nelder–Mead algorithms hybridized for a more accurate global optimization of continuous multiminima functions, European Journal of Operational Research 148 (2) (2003) 335–348.
[11] R. Chelouah, P. Siarry, A hybrid method combining continuous tabu search and Nelder–Mead simplex algorithms for the global optimization of multiminima functions, European Journal of Operational Research 161 (3) (2005) 636–654.
[12] S. Das, S. Dasgupta, A. Biswas, A. Abraham, A. Konar, On stability of chemotactic dynamics in bacterial-foraging optimization algorithm, IEEE Transactions on Systems Man and Cybernetics Part A – Systems and Humans 39 (3) (2009) 670–679.
[13] S. Dasgupta, S. Das, A. Abraham, A. Biswas, Adaptive computational chemotaxis in bacterial foraging optimization: an analysis, IEEE Transactions on Evolutionary Computation 13 (4) (2009) 919–941.
[14] H.B. Dong, J. He, H.K. Huang, W. Hou, Evolutionary programming using a mixed mutation strategy, Information Sciences 177 (1) (2007) 312–327.
[15] C. García-Martínez, M. Lozano, F. Herrera, D. Molina, A.M. Sánchez, Global and local real-coded genetic algorithms based on parent-centric crossover operators, European Journal of Operational Research 185 (3) (2008) 1088–1113.
[16] A. Georgieva, I. Jordanov, A hybrid meta-heuristic for global optimization using low-discrepancy sequences of points, Computers & Operations Research 37 (3) (2010) 456–469.
[17] M.G. Gong, L.C. Jiao, L.N. Zhang, Baldwinian learning in clonal selection algorithm for optimization, Information Sciences 180 (8) (2010) 1218–1236.
[18] J.L. Gould, C.G. Gould, The Honey Bee, Scientific American Library, New York, 1988.
[19] A. Hedar, M. Fukushima, Tabu search directed by direct search methods for nonlinear global optimization, European Journal of Operational Research 170 (2) (2006) 329–349.
[20] F. Herrera, M. Lozano, A.M. Sánchez, A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study, International Journal of Intelligent Systems 18 (3) (2003) 309–338.
[21] S.T. Hsieh, T.Y. Sun, C.L. Lin, C.C. Liu, Effective learning rate adjustment of blind source separation based on an improved particle swarm optimizer, IEEE Transactions on Evolutionary Computation 12 (2) (2008) 242–251.
[22] H. Huang, H. Qin, Z. Hao, A. Lim, Example-based learning particle swarm optimization for continuous optimization, Information Sciences (2010), doi:10.1016/j.ins.2010.10.018.
[23] L.M. Hvattum, F. Glover, Finding local optima of high-dimensional functions using direct search methods, European Journal of Operational Research 195 (1) (2009) 31–45.
[24] M. Ji, H. Tang, J. Guo, A single-point mutation evolutionary programming, Information Processing Letters 90 (6) (2004) 293–299.
[25] M. Ji, H. Yang, Y. Yang, Z. Jin, A single component mutation evolutionary programming, Applied Mathematics and Computation 215 (10) (2010) 3759–3768.
[26] F. Kang, J. Li, H. Li, Z. Ma, Q. Xu, An improved artificial bee colony algorithm, in: IEEE 2nd International Workshop on Intelligent Systems and Applications, Wuhan, China, 2010, pp. 791–794.
[27] F. Kang, J. Li, Q. Xu, Structural inverse analysis by hybrid simplex artificial bee colony algorithms, Computers & Structures 87 (13–14) (2009) 861–870.
[28] F. Kang, J. Li, Q. Xu, Virus coevolution partheno-genetic algorithms for optimal sensor placement, Advanced Engineering Informatics 22 (3) (2008) 362–370.
[29] Y.T. Kao, E. Zahara, A hybrid genetic algorithm and particle swarm optimization for multimodal functions, Applied Soft Computing 8 (2) (2008) 849–857.
[30] D. Karaboga, An Idea based on Bee Swarm for Numerical Optimization, Tech. Rep. TR-06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
[31] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, Journal of Global Optimization 39 (3) (2007) 459–471.
[32] D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Applied Soft Computing 8 (1) (2008) 687–697.
[33] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Applied Mathematics and Computation 214 (1) (2009) 108–132.
[34] P. Korošec, J. Šilc, B. Filipič, The differential ant-stigmergy algorithm, Information Sciences (2010), doi:10.1016/j.ins.2010.05.002.
[35] M. Laguna, R. Martí, Experimental testing of advanced scatter search designs for global optimization of multimodal functions, Journal of Global Optimization 33 (2) (2005) 235–255.
[36] M.N. Le, Y.S. Ong, Y. Jin, B. Sendhoff, Lamarckian memetic algorithms: local optimum and connectivity structure analysis, Memetic Computing 1 (3) (2009) 175–190.
[37] R.M. Lewis, V. Torczon, M.W. Trosset, Direct search methods: then and now, Journal of Computational and Applied Mathematics 124 (1–2) (2000) 191–207.
[38] J.J. Liang, A.K. Qin, P.N. Suganthan, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Transactions on Evolutionary Computation 10 (3) (2006) 281–295.
[39] X. Li, J. Luo, M. Chen, N. Wang, An improved shuffled frog-leaping algorithm with extremal optimisation for continuous optimisation, Information Sciences (2010), doi:10.1016/j.ins.2010.07.016.
[40] M. Lozano, F. Herrera, N. Krasnogor, D. Molina, Real-coded memetic algorithms with crossover hill-climbing, Evolutionary Computation 12 (3) (2004) 273–302.
[41] H. Ma, C. Liao, An analysis of the equilibrium of migration models for biogeography-based optimization, Information Sciences 180 (18) (2010) 3444–3464.
[42] Y. Marinakis, M. Marinaki, G. Dounias, Honey bees mating optimization algorithm for the Euclidean traveling salesman problem, Information Sciences (2010), doi:10.1016/j.ins.2010.06.032.
[43] E. Mezura-Montes, M.E. Miranda-Varela, R. del Carmen Gómez-Ramón, Differential evolution in constrained numerical optimization: an empirical study, Information Sciences 180 (22) (2010) 4223–4262.
[44] J.J. Moré, B.S. Garbow, K.E. Hillstrom, Testing unconstrained optimization software, ACM Transactions on Mathematical Software 7 (1) (1981) 17–41.
[45] M.A. Munoz, J.A. López, E. Caicedo, An artificial beehive algorithm for continuous optimization, International Journal of Intelligent Systems 24 (11) (2009) 1080–1093.
[46] F. Neri, J. Toivanen, G.L. Cascella, Y.S. Ong, An adaptive multimeme algorithm for designing HIV multidrug therapies, IEEE/ACM Transactions on Computational Biology and Bioinformatics 4 (2) (2007) 264–278.
[47] F. Neri, E. Mininno, Memetic compact differential evolution for Cartesian robot control, IEEE Computational Intelligence Magazine 5 (2) (2010) 54–65.
[48] Q.C. Nguyen, Y.S. Ong, M.H. Lim, A probabilistic memetic framework, IEEE Transactions on Evolutionary Computation 13 (3) (2009) 604–623.
[49] Y.S. Ong, A.J. Keane, Meta-Lamarckian learning in memetic algorithm, IEEE Transactions on Evolutionary Computation 8 (2) (2004) 99–110.
[50] D. Ortiz-Boyer, C. Hervás-Martínez, N. García-Pedrajas, CIXL2: a crossover operator for evolutionary algorithms based on population features, Journal of Artificial Intelligence Research 24 (1) (2005) 1–48.
[51] J.R. Palmer, An improved procedure for orthogonalising the search vectors in Rosenbrock's and Swann's direct search optimisation methods, The Computer Journal 12 (1) (1969) 69–71.
[52] Q.K. Pan, M.F. Tasgetiren, P.N. Suganthan, T.J. Chua, A discrete artificial bee colony algorithm for the lot-streaming flow shop scheduling problem, Information Sciences 181 (12) (2011) 2455–2468.
[53] K. Price, R.M. Storn, J.A. Lampinen, Differential Evolution: A Practical Approach to Global Optimization, Springer, New York, 2005.
[54] A.K. Qin, P.N. Suganthan, Self-adaptive differential evolution algorithm for numerical optimization, in: Proceedings of the IEEE Conference on Evolutionary Computation, Edinburgh, Scotland, September 2005, pp. 1785–1791.
[55] S. Rahnamayan, H.R. Tizhoosh, M.M.A. Salama, Opposition-based differential evolution, IEEE Transactions on Evolutionary Computation 12 (1) (2008) 64–79.
[56] E. Rashedi, H. Nezamabadi-pour, S. Saryazdi, GSA: a gravitational search algorithm, Information Sciences 179 (13) (2009) 2232–2248.
[57] H.H. Rosenbrock, An automatic method for finding the greatest or least value of a function, The Computer Journal 3 (3) (1960) 175–184.
[58] A.M. Sánchez, M. Lozano, C. García-Martínez, D. Molina, F. Herrera, Real-parameter crossover operators with multiple descendents: an experimental study, International Journal of Intelligent Systems 23 (2) (2008) 246–268.
[59] A.M. Sánchez, M. Lozano, P. Villar, F. Herrera, Hybrid crossover operators with multiple descendents for real-coded genetic algorithms: combining neighborhood-based crossover operators, International Journal of Intelligent Systems 24 (5) (2009) 540–567.
[60] T.D. Seeley, The Wisdom of the Hive, Harvard University Press, Cambridge, MA, 1995.
[61] Y. Shi, H. Liu, L. Gao, G. Zhang, Cellular particle swarm optimization, Information Sciences (2010), doi:10.1016/j.ins.2010.05.025.
[62] J.E. Smith, Co-evolving memetic algorithms: a review and progress report, IEEE Transactions on Systems Man and Cybernetics Part B – Cybernetics 37 (1) (2007) 6–17.
[63] K. Socha, M. Dorigo, Ant colony optimization for continuous domains, European Journal of Operational Research 185 (3) (2008) 1155–1173.
[64] R. Storn, K. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, Journal of Global Optimization 11 (4) (1997) 341–359.
[65] S. Sundar, A. Singh, A swarm intelligence approach to the quadratic minimum spanning tree problem, Information Sciences 180 (17) (2010) 3182–3191.
[66] C. Ting, C. Liao, A memetic algorithm for extending wireless sensor network lifetime, Information Sciences 180 (24) (2010) 4818–4833.
[67] V. Tirronen, F. Neri, T. Karkkainen, K. Majava, T. Rossi, An enhanced memetic differential evolution in filter design for defect detection in paper production, Evolutionary Computation 16 (4) (2008) 529–555.
[68] R. Toscano, P. Lyonnet, A new heuristic approach for non-convex optimization problems, Information Sciences 180 (10) (2010) 1955–1966.
[69] L. Tseng, C. Chen, Multiple trajectory search for large scale global optimization, in: Proceedings of the IEEE Conference on Evolutionary Computation, 2008, pp. 3052–3059.
[70] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, Q. Tian, Self-adaptive learning based particle swarm optimization, Information Sciences (2010), doi:10.1016/j.ins.2010.07.013.
[71] D. Whitley, D. Rana, J. Dzubera, E. Mathias, Evaluating evolutionary algorithms, Artificial Intelligence 85 (1–2) (1996) 245–276.
[72] X. Yao, Y. Liu, G. Lin, Evolutionary programming made faster, IEEE Transactions on Evolutionary Computation 3 (2) (1999) 82–102.
[73] E.L. Yu, P.N. Suganthan, Ensemble of niching algorithms, Information Sciences 180 (15) (2010) 2815–2833.
[74] J. Zhang, A.C. Sanderson, JADE: adaptive differential evolution with optional external archive, IEEE Transactions on Evolutionary Computation 13 (5) (2009) 945–958.
[75] G. Zhu, S. Kwong, Gbest-guided artificial bee colony algorithm for numerical function optimization, Applied Mathematics and Computation 217 (7) (2010) 3166–3173.