Applied Soft Computing 23 (2014) 227–238
A quick artificial bee colony (qABC) algorithm and its performance on optimization problems

Dervis Karaboga, Beyza Gorkemli*

Erciyes University, Engineering Faculty, Intelligent Systems Research Group, Kayseri, Turkey

* Corresponding author. Tel.: +90 3522076666 32554. E-mail addresses: [email protected] (D. Karaboga), [email protected] (B. Gorkemli).
Article info

Article history: Received 5 March 2013; received in revised form 28 March 2014; accepted 22 June 2014; available online 28 June 2014.

Keywords: Optimization; Swarm intelligence; Artificial bee colony; Quick artificial bee colony
Abstract

The artificial bee colony (ABC) algorithm, inspired by the foraging behaviour of honey bees, is one of the most popular swarm intelligence based optimization techniques. Quick artificial bee colony (qABC) is a new version of the ABC algorithm that models the behaviour of onlooker bees more accurately and improves the performance of standard ABC in terms of local search ability. In this study, the qABC method is described and its performance is analysed as a function of the neighbourhood radius on a set of benchmark problems. The effects of the parameter limit and of the colony size on qABC optimization are also analysed. Moreover, the performance of qABC is compared with that of state-of-the-art algorithms.
1. Introduction

Optimization is the task of finding the best solution, with respect to an objective, within the search space of a problem. In order to solve real-world optimization problems – especially NP-hard problems – evolutionary computation (EC) based optimization methods, which comprise evolutionary algorithms and swarm intelligence based algorithms, are frequently preferred. A swarm intelligence based algorithm models an intelligent behaviour of social creatures that can be characterised as an intelligent swarm, and this model can be used to search for the optimal solutions of various engineering problems. The artificial bee colony (ABC) algorithm is a swarm intelligence based optimization technique that models the foraging behaviour of honey bees in nature [1]. The algorithm was first introduced by Karaboga in 2005 [2] and has since been used in many applications; a good survey of the ABC algorithm can be found in [3]. Although standard ABC optimization generally produces successful results in many application studies, some researchers have attempted to obtain better performance by implementing ABC in parallel [4–8], and others have integrated concepts from other EC-based methods into the ABC algorithm [9–20]. Zhu and Kwong, influenced by particle swarm optimization (PSO), proposed an improved ABC algorithm named
gbest-guided ABC [9]; they placed the global best solution into the solution search equation. For the multiple sequence alignment problem, Xu and Lei introduced the Metropolis acceptance criterion into the search process of ABC to prevent the algorithm from sliding into local optima [10]; they called the improved version ABC_SA. Tuba et al. proposed the guided ABC (GABC) algorithm, which integrates ABC optimization with self-adaptive guidance [11]. Li et al. used an inertia weight and acceleration coefficients to improve the search process of the ABC algorithm [12]. A new scheduling method based on the best-so-far ABC for solving the job shop scheduling problem (JSSP) was proposed by Banharnsakun et al. [13]; in this method, the solution direction is biased toward the best-so-far solution rather than a neighbour one. Bi and Wang presented a modification of the scouts' behaviour in the ABC algorithm [14]; they used a mutation strategy based on opposition-based learning and called the new method fast mutation ABC. Inspired by the differential evolution (DE) algorithm, Gao and Liu presented a new solution search equation for standard ABC [15]; the new equation aims to improve the exploitation capability and is based on the idea that the bee searches only around the best solution of the previous iteration. Mezura-Montes and Cetina-Dominguez presented a modified ABC to solve constrained numerical optimization problems [16], modifying the ABC algorithm through the selection mechanism, the scout bee operator, and the handling of equality and boundary constraints. For constrained optimization problems, Bacanin and Tuba introduced some modifications based on genetic algorithm operators into the ABC algorithm [17]. In order to improve the exploitation capability of the ABC algorithm, Gao et al. presented an improved ABC [18]; in this version, they used a modified search strategy to generate new food sources.
They also used an opposition-based learning method and chaotic maps to produce the initial population for better global convergence. For better exploitation, Gao et al. also presented a modified ABC algorithm inspired by DE [19]; with this modification, each bee searches only around the best solution of the previous cycle, while chaotic systems and opposition-based learning are again used when producing the initial population and the scout bees, in order to improve the global convergence. Liu et al. introduced an improved ABC algorithm with mutual learning [20]; this approach adjusts the produced candidate food source using individuals selected by a mutual learning factor. In order to obtain a more powerful optimization technique, some researchers combined the ABC algorithm with other EC-based methods or traditional algorithms. Kang et al. proposed a hybrid simplex ABC that combines ABC with the Nelder–Mead simplex method [21], and in another study they used this method for inverse analysis problems [22]. Marinakis et al. proposed a hybrid algorithm based on ABC optimization and the greedy randomised adaptive search procedure in order to cluster n objects into k clusters [23]. Another hybrid ABC was described by Xiao and Chen, using an artificial immune network algorithm [24]; they applied the hybrid algorithm to the multi-mode resource-constrained multi-project scheduling problem. Bin and Qian introduced a differential ABC algorithm for global numerical optimization [25]. Sharma and Pant used DE operators within the standard ABC algorithm [26]. Kang et al. demonstrated how the standard ABC can be improved by incorporating a hybridization strategy, suggesting a novel hybrid optimization technique composed of Hooke–Jeeves pattern search and the ABC algorithm [27]. Hsieh et al. proposed a new hybrid algorithm of PSO and ABC optimization [28]. Abraham et al. hybridised the ABC algorithm with a DE strategy and called this approach the hybrid differential ABC algorithm [29].

In this work, instead of presenting a new hybrid ABC algorithm or integrating an operator of an existing algorithm into ABC, our aim is to model the behaviour of the foragers in ABC more accurately by introducing a new definition for the onlooker bees. With the proposed definition, ABC achieves a better performance than standard ABC in terms of local search ability; hence, we call the new algorithm quick ABC (qABC) [30]. In this work, the qABC algorithm is described in more detail, its performance is tested on a larger set of test problems than in [30], and the effects of the control parameters neighbourhood radius, limit and colony size on its performance are investigated. The performance of qABC is also compared with state-of-the-art algorithms.

The rest of the paper is organised as follows: Section 2 describes the standard ABC, and the novel strategy (qABC) is presented in Section 3. The computational study and simulation results are discussed in Section 4 and, finally, the conclusion is given in Section 5.
2. Standard ABC algorithm

In the ABC algorithm, the artificial bees are divided into three groups according to the foraging behaviour of the colony. The first group consists of employed bees. These bees have a food source position in mind when they leave the hive, and they perform dances about their food sources on the dancing area in the hive. Some of the bees decide which food sources to exploit by watching the dances of the employed bees; this group is called the onlookers. In the algorithm, onlookers select the food sources with a probability related to the qualities of the food sources. The last group is the scouts. Without using any information from other bees, a scout finds a new food source and starts to consume it, and then continues her work as an employed bee. Hence, while the known sources are being consumed, new food sources are explored at the same time. At the beginning of the search (initialization phase), all of the bees in the hive work as scouts and they all start with random solutions (food sources). In later cycles, when a food source is abandoned, the employed bee associated with the abandoned source becomes a scout. In the algorithm, a parameter called limit is used to control the abandonment of food sources. For each solution, the number of unsuccessful improvement trials is recorded; in every cycle, the solution with the maximum trial count is determined and its trial count is compared with limit. If it reaches the limit value, the solution is abandoned and the search continues from a randomly produced new solution.

In the ABC algorithm, a food source position represents a possible solution, and the nectar amount or quality of the food source corresponds to the fitness of the related solution in the optimization process. Since each employed bee is associated with one and only one food source, the number of employed bees is equal to the number of food sources (SN). The general algorithmic structure of ABC optimization is given below:

Initialization phase
REPEAT
    Employed bees phase
    Onlooker bees phase
    Scout bees phase
    Memorize the best solution achieved so far
UNTIL (cycle = maximum cycle number or a maximum CPU time)

2.1. Initialization phase

In the initialization phase, the food sources are randomly initialised within a given range by Eq. (1):

x_{m,i} = l_i + rand(0, 1) × (u_i − l_i)    (1)

where x_{m,i} is the value of the i-th dimension of the m-th solution, and l_i and u_i are the lower and upper bounds of the parameter x_{m,i}. The obtained solutions are then evaluated and their objective function values are calculated.

2.2. Employed bees phase

This phase of the algorithm describes how the employed bees search for a better food source within the neighbourhood of the food source (x_m) held in their memory. They leave the hive with the information of the food source position, but when they arrive at the target point they are affected by the traces of the other bees on the flowers, and so they find a candidate food source. In ABC optimization, this neighbour food source is determined by Eq. (2):

ν_{m,i} = x_{m,i} + φ_{m,i} (x_{m,i} − x_{k,i})    (2)

where x_k is a randomly selected food source, i is a randomly chosen dimension and φ_{m,i} is a random number within the range [−1, 1]. After the new candidate food source ν_m is produced, its profitability is calculated and a greedy selection is applied between ν_m and x_m. The fitness of a solution, fit(x_m), is calculated from its objective function value f(x_m) by Eq. (3):

fit(x_m) = { 1/(1 + f(x_m))   if f(x_m) ≥ 0;   1 + |f(x_m)|   if f(x_m) < 0 }    (3)
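To make these steps concrete, a minimal Python sketch of Eqs. (1)–(3) is given below. This is our own illustration rather than the authors' original C# implementation, and the helper names (initialize, fitness, employed_step) are our own choices:

```python
import random

def initialize(sn, lower, upper):
    # Eq. (1): each of the SN food sources is a random point between the bounds
    return [[l + random.random() * (u - l) for l, u in zip(lower, upper)]
            for _ in range(sn)]

def fitness(fx):
    # Eq. (3): map an objective value f(x_m) to a fitness value
    return 1.0 / (1.0 + fx) if fx >= 0 else 1.0 + abs(fx)

def employed_step(foods, m, f):
    # Eq. (2): perturb one randomly chosen dimension i of x_m using a random
    # partner x_k and phi in [-1, 1], then apply greedy selection
    i = random.randrange(len(foods[m]))
    k = random.choice([j for j in range(len(foods)) if j != m])
    phi = random.uniform(-1.0, 1.0)
    candidate = foods[m][:]
    candidate[i] = foods[m][i] + phi * (foods[m][i] - foods[k][i])
    if fitness(f(candidate)) > fitness(f(foods[m])):
        foods[m] = candidate      # improvement: the trial counter would be reset
        return True
    return False                  # failure: the trial counter would be incremented
```

For example, foods = initialize(25, [-100]*30, [100]*30) would create SN = 25 random solutions for the 30-dimensional Sphere problem of Table 1.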
2.3. Onlooker bees phase

When employed bees return to the hive, they share their food source information with onlooker bees. An onlooker chooses her
food source to exploit depending on this information, probabilistically. In the ABC algorithm, the probability value p_m is calculated from the fitness values of the solutions by Eq. (4):

p_m = fit(x_m) / Σ_{m=1}^{SN} fit(x_m)    (4)

After a food source is selected, a neighbour source ν_m is determined by Eq. (2), as in the employed bees phase, and its fitness value is computed. Then a greedy selection is applied between ν_m and x_m. Since richer sources recruit more onlookers, a positive feedback behaviour appears.

2.4. Scout bees phase

At the end of every cycle, the trial counters of all solutions are checked, and the abandonment of the solution with the maximum counter value is decided by comparison with the limit parameter. If an abandonment is detected, the related employed bee is converted into a scout and takes a new randomly produced solution generated with Eq. (1). In this phase, a negative feedback behaviour arises to balance the positive feedback.

3. The novel definition for the searching behaviour of onlookers (qABC algorithm)

In real honey bee colonies, an employed bee exploits the food source that she has visited before, but an onlooker chooses a food source region depending on the dances of the employed bees. After reaching that region, which she visits for the first time, the onlooker bee examines the food sources in the area and chooses the fittest one to exploit. So it can be said that onlookers choose their food sources in a different way from employed bees. In the standard ABC algorithm, however, this difference is not considered, and artificial employed bees and onlookers determine a new candidate solution by the same formula (Eq. (2)). The onlookers' behaviour should be modelled by a formula different from Eq. (2). Therefore, in the qABC algorithm, a new definition is introduced for the behaviour of onlookers:

ν_{N_m,i}^{best} = x_{N_m,i}^{best} + φ_{m,i} (x_{N_m,i}^{best} − x_{k,i})    (5)

In this formula, x_{N_m}^{best} represents the best solution among the neighbours of x_m and x_m itself (N_m). A similarity measure defined on the structure of solutions can be used to determine a neighbourhood for x_m. Different approaches could be used to define an appropriate neighbourhood, and different similarity measures can be defined for different representations of the solutions. Hence, by using the novel formula of Eq. (5), combinatorial or binary problems can be optimised with the qABC algorithm, too. For numerical optimization problems, for instance, the neighbourhood of a solution x_m can be defined through the mean Euclidean distance between x_m and the rest of the solutions. Representing the Euclidean distance between x_m and x_j as d(m, j), the mean Euclidean distance for x_m, md_m, is calculated by Eq. (6):

md_m = (Σ_{j=1}^{SN} d(m, j)) / (SN − 1)    (6)

If the Euclidean distance of a solution from x_m is less than the mean Euclidean distance md_m, it can be accepted as a neighbour of x_m. This means that an onlooker bee watches the dances of the employed bees in the hive and, being affected by them, selects the region centred at the food source x_m. When she arrives at the region of x_m, she examines all of the food sources in N_m and chooses the best one, x_{N_m}^{best}, to improve. Including x_m, if there are S solutions in N_m, the best one is described by Eq. (7):

fit(x_{N_m}^{best}) = max{ fit(x_{N_m}^{1}), fit(x_{N_m}^{2}), ..., fit(x_{N_m}^{S}) }    (7)

In order to determine a neighbour of x_m, the following more general and flexible definition can be used:

if d(m, j) ≤ r × md_m then x_j is a neighbour of x_m, else it is not    (8)
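To make the neighbourhood definition concrete, the sketch below (again our own Python illustration, assuming Python 3.8+ for math.dist and restating the fitness helper of Eq. (3)) shows one way the selection of Eq. (4), the distance of Eq. (6), the test of expression (8) and the move of Eqs. (5) and (7) could be put together:

```python
import math
import random

def fitness(fx):
    # Eq. (3)
    return 1.0 / (1.0 + fx) if fx >= 0 else 1.0 + abs(fx)

def mean_distance(foods, m):
    # Eq. (6): mean Euclidean distance between x_m and the other SN - 1 solutions
    return sum(math.dist(foods[m], foods[j])
               for j in range(len(foods)) if j != m) / (len(foods) - 1)

def neighbourhood(foods, m, r):
    # Expression (8): x_j is a neighbour of x_m if d(m, j) <= r * md_m;
    # N_m also contains x_m itself
    md = mean_distance(foods, m)
    return [j for j in range(len(foods))
            if j == m or math.dist(foods[m], foods[j]) <= r * md]

def select_by_probability(foods, f):
    # Eq. (4): choose a food source with probability proportional to its fitness
    weights = [fitness(f(x)) for x in foods]
    return random.choices(range(len(foods)), weights=weights, k=1)[0]

def onlooker_step(foods, m, r, f):
    # Eq. (7): pick the fittest solution in N_m, then Eq. (5): search around it
    best = max(neighbourhood(foods, m, r), key=lambda j: fitness(f(foods[j])))
    i = random.randrange(len(foods[best]))
    k = random.choice([j for j in range(len(foods)) if j != best])
    phi = random.uniform(-1.0, 1.0)
    candidate = foods[best][:]
    candidate[i] = foods[best][i] + phi * (foods[best][i] - foods[k][i])
    if fitness(f(candidate)) > fitness(f(foods[best])):
        foods[best] = candidate   # greedy selection, as in the detailed steps
```

Note that with r = 0 the neighbourhood function returns only m itself, so the move reduces to the standard ABC update, matching the remark below.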
With expression (8), a new parameter r, which refers to the "neighbourhood radius", is added to the parameters of the standard ABC algorithm. This parameter must satisfy r ≥ 0. When r = 0, Eq. (5) works the same as Eq. (2) and qABC turns into the standard ABC, since x_{N_m}^{best} becomes x_m. As the value of r increases, the neighbourhood of x_m enlarges; as r decreases, the neighbourhood shrinks. The detailed steps of the qABC algorithm are given below:

Initialization phase:
    Initialize the control parameters: colony size CS, maximum number of cycles MaxNum, limit l.
    Initialize the positions of the food sources (initial solutions) using Eq. (1), x_m, m = 1, 2, ..., SN.
    Evaluate the solutions. Memorize the best solution.
    c = 0
repeat
    Employed bees phase: for each employed bee;
        Generate a new candidate solution ν_m in the neighbourhood of x_m (Eq. (2)) and evaluate it.
        Apply a greedy selection between x_m and ν_m.
    Compute the fitness of the solutions by Eq. (3) and calculate the probability values p_m of the solutions x_m with Eq. (4).
    Onlooker bees phase: for each onlooker bee;
        Select a solution x_m depending on the p_m values.
        Find the best solution x_{N_m}^{best} among the neighbours of x_m and itself; these neighbours are determined by expression (8).
        Generate a new candidate solution ν_{N_m}^{best} from x_{N_m}^{best} (Eq. (5)) and evaluate it.
        Apply a greedy selection between x_{N_m}^{best} and ν_{N_m}^{best}.
    Memorize the best solution found so far.
    Scout bee phase:
        Using the limit parameter value l, determine the abandoned solution. If one exists, replace it with a new solution for the scout by using Eq. (1).
    c = c + 1
until (c = MaxNum)

4. Computational study and discussion

We conducted the experiments with different values of r (r = 0, 0.25, 0.5, 1, 1.5, 2, 2.5, 3, ∞), and the effect of this parameter was analysed in terms of the convergence performance and the quality of the solutions obtained by the algorithm. The results of qABC were compared with state-of-the-art algorithms including the genetic algorithm (GA), particle swarm optimization (PSO), the differential evolution (DE) algorithm and the standard ABC. The results of GA, PSO and DE are taken from [31]. For a fair comparison, the same parameter settings and evaluation number are used as in [31]: the colony size is 50 and the maximum evaluation number is 500,000. For the scout process, the limit parameter l is calculated with Eq. (9) [31]:

l = (CS × D) / 2    (9)

where CS is the colony size and D is the dimension of the problem.
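As a worked instance of Eq. (9) under the settings above: with CS = 50 and D = 30, the limit becomes l = (50 × 30)/2 = 750, the value that the experiments in Section 4.3 later confirm as a suitable choice.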
Table 1. Test problems.

Test function | C | D | Interval | Min | Formulation
Sphere | US | 30 | [−100, 100] | F_min = 0 | f(x) = Σ_{i=1}^{D} x_i²
Rosenbrock | UN | 30 | [−30, 30] | F_min = 0 | f(x) = Σ_{i=1}^{D−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²]
Rastrigin | MS | 30 | [−5.12, 5.12] | F_min = 0 | f(x) = Σ_{i=1}^{D} [x_i² − 10 cos(2πx_i) + 10]
Griewank | MN | 30 | [−600, 600] | F_min = 0 | f(x) = (1/4000) Σ_{i=1}^{D} x_i² − Π_{i=1}^{D} cos(x_i/√i) + 1
Schaffer | MN | 2 | [−100, 100] | F_min = 0 | f(x) = 0.5 + (sin²(√(Σ_{i=1}^{D} x_i²)) − 0.5) / (1 + 0.001 Σ_{i=1}^{D} x_i²)²
Dixon-Price | UN | 30 | [−10, 10] | F_min = 0 | f(x) = (x_1 − 1)² + Σ_{i=2}^{D} i(2x_i² − x_{i−1})²
Ackley | MN | 30 | [−32, 32] | F_min = 0 | f(x) = 20 + e − 20 exp(−0.2 √((1/D) Σ_{i=1}^{D} x_i²)) − exp((1/D) Σ_{i=1}^{D} cos(2πx_i))
Schwefel | MS | 30 | [−500, 500] | F_min = −12,569.5 | f(x) = Σ_{i=1}^{D} −x_i sin(√|x_i|)
SixHumpCamelBack | MN | 2 | [−5, 5] | F_min = −1.03163 | f(x) = 4x_1² − 2.1x_1⁴ + (1/3)x_1⁶ + x_1x_2 − 4x_2² + 4x_2⁴
Branin | MS | 2 | [−5, 10] × [0, 15] | F_min = 0.398 | f(x) = (x_2 − (5.1/(4π²))x_1² + (5/π)x_1 − 6)² + 10(1 − 1/(8π)) cos x_1 + 10
Table 2. Performance comparison of qABC algorithms with different r values on the Rosenbrock function.

Algorithm | Mean | SD | Best | Worst
qABC(r = 0) | 0.1766957 | 0.2661797 | 0.0005833 | 1.0439761
qABC(r = 0.25) | 0.1464777 | 0.2563342 | 0.0004032 | 1.0708763
qABC(r = 0.5) | 0.1066600 | 0.2173580 | 0.0001952 | 1.1410752
qABC(r = 1) | 0.1329198 | 0.1799131 | 0.0001488 | 0.8556796
qABC(r = 1.5) | 0.0886332 | 0.1181011 | 0.0010554 | 0.5050920
qABC(r = 2) | 0.1391871 | 0.2204129 | 0.0004161 | 0.9241398
qABC(r = 2.5) | 0.0976565 | 0.1918210 | 3.4632951E−05 | 0.9517281
qABC(r = 3) | 0.1319018 | 0.1821670 | 0.0016222 | 0.6486930
qABC(r = ∞) | 0.1194790 | 0.2266048 | 9.2660744E−05 | 0.9657111
The best values are written in bold.

Table 3. Performance comparison of qABC algorithms with different r values on the Griewank function.

Algorithm | Mean | SD | Best | Worst
qABC(r = 0) | 0 | 0 | 0 | 0
qABC(r = 0.25) | 0 | 0 | 0 | 0
qABC(r = 0.5) | 0 | 0 | 0 | 1.3322676E−15
qABC(r = 1) | 0 | 0 | 0 | 0
qABC(r = 1.5) | 0 | 0 | 0 | 2.1094237E−15
qABC(r = 2) | 0 | 0 | 0 | 0
qABC(r = 2.5) | 1.1139238E−15 | 3.6522213E−15 | 0 | 1.9317881E−14
qABC(r = 3) | 5.8878828E−15 | 3.1233220E−14 | 0 | 1.7408297E−13
qABC(r = ∞) | 0 | 0 | 0 | 0
A Wilcoxon statistical test was also carried out for the standard ABC and qABC algorithms. Ten well-known benchmark problems with different characteristics were considered in order to test the performance of qABC. These test problems, the characteristics of the functions (C), the dimensions of the problems (D), the bounds of the search spaces and the global optimum values are presented in Table 1. One of these benchmarks is unimodal-separable (US), two are unimodal-nonseparable (UN), three are multimodal-separable (MS) and four are multimodal-nonseparable (MN). For each test case, 30 independent runs were carried out with random seeds. Values below 1E−15 are accepted as 0. Tables 2–9 show the mean values (Mean) and the standard deviation values (SD) calculated for the test problems over 30 runs.
Table 4. Performance comparison of qABC algorithms with different r values on the Schaffer function.

Algorithm | Mean | SD | Best | Worst
qABC(r = 0) | 1.0367306E−10 | 4.8286140E−10 | 0 | 2.6913450E−09
qABC(r = 0.25) | 3.3495204E−07 | 7.2781864E−07 | 5.0494242E−10 | 3.5700525E−06
qABC(r = 0.5) | 2.3833678E−06 | 2.9895980E−06 | 1.7145336E−08 | 1.1465332E−05
qABC(r = 1) | 8.6610854E−06 | 7.8289333E−06 | 3.4646886E−08 | 3.1484602E−05
qABC(r = 1.5) | 1.1965610E−05 | 1.8307200E−05 | 1.6412526E−07 | 8.8334161E−05
qABC(r = 2) | 1.2654317E−05 | 1.5067672E−05 | 1.1556006E−08 | 5.1342794E−05
qABC(r = 2.5) | 1.0554241E−05 | 1.2934532E−05 | 1.6327515E−08 | 6.4734260E−05
qABC(r = 3) | 5.4603177E−06 | 6.9975635E−06 | 1.0385392E−08 | 3.4239721E−05
qABC(r = ∞) | 7.4164064E−06 | 8.4686549E−06 | 8.9004138E−08 | 3.4904411E−05
Table 5. Performance comparison of qABC algorithms with different r values on the Dixon-Price function.

Algorithm | Mean | SD | Best | Worst
qABC(r = 0) | 4.0922605E−15 | 0 | 2.2791451E−15 | 5.6569644E−15
qABC(r = 0.25) | 7.9822146E−14 | 2.0983737E−13 | 3.6601337E−15 | 1.1324986E−12
qABC(r = 0.5) | 2.0912869E−13 | 6.1424217E−13 | 2.3120888E−15 | 3.1851131E−12
qABC(r = 1) | 1.1543113E−12 | 3.3608330E−12 | 7.6586127E−15 | 1.7660988E−11
qABC(r = 1.5) | 4.8758038E−10 | 1.7748020E−09 | 7.1680057E−15 | 8.4398125E−09
qABC(r = 2) | 1.7512132E−11 | 3.7392269E−11 | 4.8438112E−14 | 1.6990125E−10
qABC(r = 2.5) | 4.7203508E−11 | 1.5378112E−10 | 3.1991671E−14 | 8.0074050E−10
qABC(r = 3) | 4.5301684E−12 | 1.3752168E−11 | 7.5714139E−15 | 7.6488374E−11
qABC(r = ∞) | 1.1022304E−09 | 5.7130716E−09 | 3.9034247E−14 | 3.1861559E−08
Table 6. Performance comparison of qABC algorithms with different r values on the Ackley function.

Algorithm | Mean | SD | Best | Worst
qABC(r = 0) | 3.0819791E−14 | 2.3206228E−15 | 2.7977620E−14 | 3.8635761E−14
qABC(r = 0.25) | 3.3306691E−14 | 4.5635433E−15 | 2.0872193E−14 | 3.8635761E−14
qABC(r = 0.5) | 3.2359300E−14 | 3.5150128E−15 | 2.7977620E−14 | 3.8635761E−14
qABC(r = 1) | 3.5556743E−14 | 3.6385215E−15 | 2.7977620E−14 | 3.8635761E−14
qABC(r = 1.5) | 3.5319895E−14 | 3.9914263E−15 | 2.7977620E−14 | 4.2188475E−14
qABC(r = 2) | 3.6148862E−14 | 5.1196893E−15 | 2.0872193E−14 | 4.2188475E−14
qABC(r = 2.5) | 3.4964624E−14 | 3.7242352E−15 | 2.7977620E−14 | 4.2188475E−14
qABC(r = 3) | 3.5201471E−14 | 4.0489854E−15 | 2.7977620E−14 | 4.2188475E−14
qABC(r = ∞) | 3.4964624E−14 | 4.3495560E−15 | 2.7977620E−14 | 4.2188475E−14
Table 7. Performance comparison of qABC algorithms with different r values on the Schwefel function.

Algorithm | Mean | SD | Best | Worst
qABC(r = 0) | −12,569.4866182 | 2.0739670E−12 | −12,569.4866182 | −12,569.4866182
qABC(r = 0.25) | −12,569.4866182 | 1.8189894E−12 | −12,569.4866182 | −12,569.4866182
qABC(r = 0.5) | −12,569.4866182 | 1.9926030E−12 | −12,569.4866182 | −12,569.4866182
qABC(r = 1) | −12,569.4866182 | 2.5073039E−12 | −12,569.4866182 | −12,569.4866182
qABC(r = 1.5) | −12,569.4866182 | 2.6359661E−12 | −12,569.4866182 | −12,569.4866182
qABC(r = 2) | −12,569.4866182 | 2.4404304E−12 | −12,569.4866182 | −12,569.4866182
qABC(r = 2.5) | −12,569.4866182 | 2.2277979E−12 | −12,569.4866182 | −12,569.4866182
qABC(r = 3) | −12,569.4866182 | 2.4404304E−12 | −12,569.4866182 | −12,569.4866182
qABC(r = ∞) | −12,569.4866182 | 2.1522573E−12 | −12,569.4866182 | −12,569.4866182
Table 8. Performance comparison of qABC algorithms with different r values on the SixHumpCamelBack function.

Algorithm | Mean | SD | Best | Worst
qABC(r = 0) | −1.0316284 | 0 | −1.0316284 | −1.0316284
qABC(r = 0.25) | −1.0316284 | 0 | −1.0316284 | −1.0316284
qABC(r = 0.5) | −1.0316284 | 0 | −1.0316284 | −1.0316284
qABC(r = 1) | −1.0316284 | 0 | −1.0316284 | −1.0316284
qABC(r = 1.5) | −1.0316284 | 5.9542190E−15 | −1.0316284 | −1.0316284
qABC(r = 2) | −1.0316284 | 3.9448606E−15 | −1.0316284 | −1.0316284
qABC(r = 2.5) | −1.0316284 | 2.0726806E−15 | −1.0316284 | −1.0316284
qABC(r = 3) | −1.0316284 | 1.1523523E−15 | −1.0316284 | −1.0316284
qABC(r = ∞) | −1.0316284 | 0 | −1.0316284 | −1.0316284
Table 9. Performance comparison of qABC algorithms with different r values on the Branin function.

Algorithm | Mean | SD | Best | Worst
qABC(r = 0) | 0.3978874 | 0 | 0.3978874 | 0.3978874
qABC(r = 0.25) | 0.3978874 | 5.3906531E−12 | 0.3978874 | 0.3978874
qABC(r = 0.5) | 0.3978874 | 4.4721075E−11 | 0.3978874 | 0.3978874
qABC(r = 1) | 0.3978874 | 9.7048440E−10 | 0.3978874 | 0.3978874
qABC(r = 1.5) | 0.3978874 | 1.6710417E−09 | 0.3978874 | 0.3978874
qABC(r = 2) | 0.3978874 | 5.7917265E−09 | 0.3978874 | 0.3978874
qABC(r = 2.5) | 0.3978874 | 3.0718302E−09 | 0.3978874 | 0.3978874
qABC(r = 3) | 0.3978874 | 2.4476542E−09 | 0.3978874 | 0.3978874
qABC(r = ∞) | 0.3978874 | 2.9740517E−09 | 0.3978874 | 0.3978874
Table 10. Comparison of the qABC algorithm with state-of-the-art algorithms (the "E8" and "E9" entries are illegible in the source).

Function | GA Mean / SD | PSO Mean / SD | DE Mean / SD | ABC Mean / SD | qABC Mean / SD
Sphere | 1.11E+03 / 74.214474 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0
Rosenbrock | 1.96E+05 / 3.85E+04 | 15.088617 / 24.170196 | 18.203938 / 5.036187 | 0.1766957 / 0.2661797 | 0.1329198 / 0.1799131
Rastrigin | 52.92259 / 4.564860 | 43.9771369 / 11.728676 | 11.716728 / 2.538172 | 0 / 0 | 0 / 0
Griewank | 10.63346 / 1.161455 | 0.0173912 / 0.020808 | 0.0014792 / 0.002958 | 0 / 0 | 0 / 0
Ackley | 14.67178 / 0.178141 | 0.1646224 / 0.493867 | 0 / 0 | 0 / 0 | 0 / 0
Schaffer | 0.004239 / 0.004763 | 0 / 0 | 0 / 0 | 1.0367306E−10 / 4.8286140E−10 | 8.6610854E−06 / 7.8289333E−06
Schwefel | −11,593.4 / 93.254240 | −6909.1359 / 457.957783 | −10,266 / 521.849292 | −12,569.4866182 / 2.0739670E−12 | −12,569.4866182 / 2.5073039E−12
Dixon-Price | 1.22E+03 / 2.66E+02 | 0.6666667 / E8 | 0.6666667 / E9 | 0 / 0 | 1.1543113E−12 / 3.3608330E−12
SixHumpCamelBack | −1.03163 / 0 | −1.0316285 / 0 | −1.031628 / 0 | −1.0316284 / 0 | −1.0316284 / 0
Branin | 0.397887 / 0 | 0.3978874 / 0 | 0.3978874 / 0 | 0.3978874 / 0 | 0.3978874 / 9.7048440E−10
Table 11. Wilcoxon signed rank test results.

Function | Mean difference | p-Value
Sphere | 0 | –
Rosenbrock | 0.0437759 | 0.711
Rastrigin | 0 | –
Griewank | 0 | –
Schaffer | −8.66098E−06 | 0.000
Dixon-Price | −1.15022E−12 | 0.000
Ackley | 0 | –
Schwefel | 0 | –
SixHumpCamelBack | 0 | –
Branin | −5.90257E−10 | 0.000
Moreover, the objective function values of the best and the worst of the 30 runs are also given in these tables. The tables show similar mean and SD values for the different values of the parameter r. The qABC algorithm hits the optimum result in each of the 30 independent runs on the Sphere and Rastrigin problems for all r values. Tables 7 and 8 present the effect of the parameter r on the Schwefel and SixHumpCamelBack functions, respectively; for all r values, qABC finds the optimum results in terms of the mean, best and worst of 30 runs on these functions. The same performance comparison on the Rosenbrock function is demonstrated in Table 2: qABC(r = 1.5) produces the most successful mean, SD and worst results, and qABC(r = 2.5) is the most successful in terms of the best of 30 runs among the qABC algorithms with different r values. In Table 9, the effect of the parameter r on the Branin function is presented. For all r values, qABC finds the same objective function value, very close to the optimum, in all columns of the table except the SD column; only qABC(r = 0) obtains a standard deviation of 0 on this function. For most of the r values, qABC finds the optimum results for the Griewank function in Table 3; only qABC(r = 2.5) and qABC(r = 3) give mean and SD values different from the optimum, and even these are very close to it. At least once in 30 runs, qABC finds the optimum solution of the Griewank function. Tables 4 and 5 demonstrate the performance of qABC on the Schaffer and Dixon-Price functions, respectively. These results show that qABC is affected by the value of the parameter r on these functions: qABC(r = 0) presents the most successful results in all fields of the table for both functions, and only qABC(r = 0) finds the optimum result for the Schaffer function. Table 6 shows the results of qABC on the Ackley function. For all r values, qABC presents similar results at the end of the optimization process; qABC(r = 0) generates the best results in terms of the mean and SD of 30 runs, and when r is smaller than 1.5, qABC gives better values in the "worst" column. Considering the objective function values and the evaluation numbers, the convergence graphics of qABC with different r values are shown in Figs. 1–10 for the Sphere, Rosenbrock, Rastrigin, Griewank, Schaffer, Dixon-Price, Ackley, Schwefel, SixHumpCamelBack and Branin functions, respectively. When these figures are examined, the first six of them present a remarkable difference between the qABC variants having r ≥ 1 and the others. The speed of convergence looks similar for the qABC algorithms with r ≥ 1 on these test problems, except for Schwefel: although qABC(r = 1) converges more quickly than qABC(r = 0) and qABC(r = 0.5), in early evaluations its convergence is slower than in the case of r > 1 on the Schwefel function. When the graphics of the first six functions are examined for values of r smaller than 1, qABC(r = 0.5) converges slightly better than qABC(r = 0) on the Griewank, Dixon-Price and Ackley functions. On the Schaffer function, the convergence speed of qABC(r = 0.5) is considerably better than that of qABC(r = 0), while there is no significant difference
Fig. 1. qABC algorithms' convergence performance on Sphere function (objective function value vs. number of evaluations; curves for r = 0, 0.5, 1, 2, 3 and r = ∞; the same legend applies to Figs. 2–10).
Fig. 2. qABC algorithms' convergence performance on Rosenbrock function.
between these two r values on the Rosenbrock and Rastrigin functions. Since qABC converges to the optimal values in very early evaluations on the Branin and SixHumpCamelBack functions for all r values, comparing convergence speeds over the optimization process is meaningless; it can simply be said that qABC has a very successful convergence performance on these two 2-dimensional test problems for all considered r values. These graphics show that the parameter r is one of the main factors in the convergence speed of the qABC algorithm. It should be remembered that when r = 0, Eq. (5) becomes equal to Eq. (2) and qABC works like the standard ABC. Generally, when r ≥ 1, the standard ABC requires at least twice as many function evaluations as qABC to reach the same mean value. When the convergence graphics and the results in the tables are evaluated together, it can be generalised that a value around 1 is an appropriate setting of r for the qABC algorithm. Therefore, in the comparison of qABC with the state-of-the-art algorithms (GA, PSO, DE, ABC), the results of qABC with the parameter r = 1 were used. The comparison results are given in Table 10.
Fig. 3. qABC algorithms' convergence performance on Rastrigin function.
Fig. 4. qABC algorithms' convergence performance on Griewank function.
Fig. 5. qABC algorithms' convergence performance on Schaffer function.
In the table, the mean of 30 independent runs and the standard deviations are presented for the considered problems. For a fair comparison, table values below 1E−12 are accepted as 0, as in [31]. When Table 10 is examined, it can be seen that GA has the worst performance among the considered algorithms for all problems except Schwefel, SixHumpCamelBack and Branin; it has the best performance on the SixHumpCamelBack and Branin functions. However, since there is no really remarkable difference between the performances of the algorithms there, it can be said that all algorithms perform well on the SixHumpCamelBack and Branin test functions. On the Rosenbrock function, the standard ABC and qABC algorithms find
smaller objective function values than the other compared algorithms, and the best mean and SD values belong to the qABC algorithm. The qABC and ABC algorithms find the optimum results for the Rastrigin and Griewank functions, while the other algorithms do not. The performance of PSO and DE on Griewank is better than on Rastrigin, whereas GA's performance does not show a remarkable difference between these two functions. On the Schaffer function, PSO and DE achieve the optimum results within the given number of function evaluations, while ABC, qABC and GA, in that order, converge to the optimum value with some error. The qABC and ABC algorithms present excellent performance, while the other algorithms do not
Fig. 6. qABC algorithms' convergence performance on Dixon-Price function.
Fig. 7. qABC algorithms' convergence performance on Ackley function.
Fig. 8. qABC algorithms' convergence performance on Schwefel function.
provide such good results, on the Schwefel and Dixon-Price functions. Both ABC algorithms give the same mean value, very close to the optimum, for the Schwefel function, and ABC hits the optimum mean value of Dixon-Price while qABC does not. For the Ackley problem, all of the algorithms find the optimum results except GA and PSO, and PSO's result is closer to 0 than GA's. The table clearly shows that the ABC and qABC algorithms outperform GA, PSO and DE under these conditions on the considered test problems. However, it is not clear whether there is a significant difference between the performances of the two ABC algorithms, which produce very similar results and present the best mean values for six of the ten problems among the compared optimization algorithms. Therefore, in order to compare the performances of qABC and ABC, the Wilcoxon signed rank test was used in this paper. The Wilcoxon test is a nonparametric statistical test that can be used for analysing the behaviour of evolutionary algorithms [33]. The test results are shown in Table 11. The first column of the table presents the test functions, the second column gives the mean difference between the results of ABC and qABC, and the last column gives the p-value, the main determiner of the test. Since the mean difference is 0 for six test functions, there are four test problems for which the significance of the
Fig. 9. qABC algorithms' convergence performance on SixHumpCamelBack function.
Fig. 10. qABC algorithms' convergence performance on Branin function.
difference between the performances of the algorithms can be discussed. Among these four test problems, the p-value is different from 0 only for Rosenbrock, so only for the Rosenbrock problem is there not enough evidence to reject the null hypothesis (0.711 > 0.05). These tests show that, under these conditions, the performance of the ABC algorithm is significantly better than that of qABC on the other three test functions (Schaffer, Dixon-Price and Branin). It should be emphasised that these tests are based on the final results obtained by the algorithms. Generally, the simulation and test results can be interpreted as follows: when Eq. (5) is used for onlookers to produce new solutions with r ≥ 1, the local convergence performance of ABC is significantly improved, especially in the early cycles of the optimization process. Hence, a good tuning of the parameter r promises a superior convergence performance for the qABC algorithm.

4.1. Time complexity of the ABC algorithms

In this section, a time complexity analysis is carried out for the ABC and qABC algorithms on the Rosenbrock function. In order to show the relationship between time complexity and the dimension of the problem, the complexities are calculated for dimensions 10, 30 and 50, as described in [32]. The results for the ABC and qABC algorithms are shown in Table 12. These analyses were performed on Windows 7 Professional (SP1) on an Intel(R) Core(TM) i7 M640 2.80 GHz processor with 8 GB RAM; the algorithms were coded in C# using the .NET Framework 3.5. The code execution time of this system was obtained and is shown in the table as T0. The computing time of the Rosenbrock function for 200,000 function evaluations is presented as T1. Each of the algorithms was run 5 times for 200,000 function evaluations, and the average computing time of the algorithms is presented as T̂2. The algorithm complexities were calculated as (T̂2 − T1)/T0. The time complexity of qABC is higher than that of ABC, since there is an additional part in the onlooker bees phase.
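As an illustration of how such a measurement can be scripted, the following Python sketch follows the same (T̂2 − T1)/T0 recipe; the synthetic T0 workload and the run_algorithm stub are our own placeholders, not the exact definitions of [32] or the authors' C# code:

```python
import time
import random

def rosenbrock(x):
    # the test objective used in Table 12
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def elapsed(fn):
    # wall-clock time of a single call, in seconds
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

D, EVALS = 30, 200_000
x = [random.uniform(-30, 30) for _ in range(D)]

# T0: time of a fixed synthetic workload on this machine
T0 = elapsed(lambda: sum(i * 0.55 for i in range(1_000_000)))
# T1: time of 200,000 evaluations of the objective alone
T1 = elapsed(lambda: [rosenbrock(x) for _ in range(EVALS)])

def run_algorithm():
    # placeholder for one complete ABC/qABC run at the same evaluation budget
    for _ in range(EVALS):
        rosenbrock(x)   # stand-in for one search step plus its evaluation

# T2_hat: average time of 5 complete runs of the algorithm
T2_hat = sum(elapsed(run_algorithm) for _ in range(5)) / 5
print("complexity:", (T2_hat - T1) / T0)
```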
However, it should be noticed that the increase in the total computing time of qABC grows more slowly than the dimension of the problem, as is also the case for ABC. So it can be stated that there is no strict dependence between the dimension of the problem and the complexities of qABC and ABC.

4.2. Experiments on "colony size"

For the experiments in this section, four test functions with different characteristics were selected from Table 1: the Sphere, Rosenbrock, Rastrigin and Griewank functions. In these experiments, r was set to 1 and the same parameter setting as in the previous experiments was used (maximum evaluation number = 500,000); the limit value was calculated using Eq. (9), as indicated before. The qABC algorithm was tested on the mentioned functions for several different colony size (CS) values: 4, 6, 12, 24, 50, 100 and 200. The results of these experiments are presented in Table 13. The table shows that the qABC algorithm gives the optimal results in all fields, without being influenced by the change in CS, for CS > 6 on the Sphere function and for CS > 12 on the Rastrigin function. The optimal values are also found on the Griewank function with CS = 50 and CS = 200; although the algorithm finds results very close to the optimum for CS = 100, there is no efficient convergence to the optimum for CS < 50 on this function. Considering the standard deviations, the mean values on the Rosenbrock problem are very similar within the CS intervals 24–200 and 6–12; however, when CS = 4, the results of qABC get significantly worse.

4.3. Experiments on "limit"

The same test functions as in the previous section were used to test the qABC algorithm for different limit values (10, 50, 187, 375, 750, 1500), in order to observe the relation between the parameter limit and the performance of the algorithm. The
Table 12. Time complexities of the ABC and qABC algorithms on the Rosenbrock function.

D | T0 | T1 | T̂2 of ABC | T̂2 of qABC | Complexity of ABC ((T̂2 − T1)/T0) | Complexity of qABC ((T̂2 − T1)/T0)
10 | 0.088005 | 0.4280245 | 0.58063322 | 2.85596336 | 1.73409147207545 | 27.5886467814329
30 | 0.088005 | 1.3590778 | 1.66389516 | 8.28667398 | 3.46363683881598 | 78.7182112379978
50 | 0.088005 | 2.3511345 | 2.72615592 | 13.7839884 | 4.26136492244759 | 129.911412987898
Table 13. Effect of the colony size (CS) on the performance of the qABC algorithm.

CS | Sphere Mean / SD | Rosenbrock Mean / SD | Rastrigin Mean / SD | Griewank Mean / SD
4 | 0.0295194 / 0.0564562 | 80.0869673 / 46.9672644 | 5.9836256 / 1.8967871 | 0.1236460 / 0.0935611
6 | 2.6999635E−15 / 2.3764678E−15 | 0.2882164 / 0.3470707 | 0.25516277 / 0.4203245 | 2.4260047E−05 / 7.2695534E−05
12 | 0 / 0 | 0.3033491 / 0.5071798 | 3.7133555E−12 / 6.8609101E−12 | 4.1002980E−10 / 1.6459408E−09
24 | 0 / 0 | 0.1086825 / 0.1165915 | 0 / 0 | 1.5733340E−13 / 7.7332947E−13
50 | 0 / 0 | 0.1329198 / 0.1799131 | 0 / 0 | 0 / 0
100 | 0 / 0 | 0.0618102 / 0.1052786 | 0 / 0 | 2.1353290E−15 / 1.1087109E−14
200 | 0 / 0 | 0.1350616 / 0.1581752 | 0 / 0 | 0 / 0

Table 14. Effect of the limit on the performance of the qABC algorithm.

Limit | Sphere Mean / SD | Rosenbrock Mean / SD | Rastrigin Mean / SD | Griewank Mean / SD
10 | 7.4129058 / 2.9555841 | 654.3308640 / 181.7699715 | 26.6525432 / 3.6289378 | 1.0672783 / 0.0355146
50 | 6.2867716E−09 / 4.7047270E−09 | 0.6935193 / 0.4453130 | 1.0218641E−05 / 1.5189265E−05 | 3.6988779E−06 / 5.1940736E−06
187 | 1.1402053E−15 / 0 | 0.1007734 / 0.1877014 | 6.8093679E−15 / 6.9419476E−15 | 2.5103216E−12 / 1.1533350E−11
375 | 0 / 0 | 0.1084152 / 0.1497979 | 0 / 0 | 1.8577732E−14 / 8.8020357E−14
750 | 0 / 0 | 0.1329198 / 0.1799131 | 0 / 0 | 0 / 0
1500 | 0 / 0 | 0.1919979 / 0.3661197 | 0 / 0 | 0 / 0
same parameter setting as in the previous experiments (colony size = 50 and maximum evaluation number = 500,000) was used. The simulation results, in terms of the mean and standard deviation of 30 independent runs, are given in Table 14. On the Griewank, Sphere and Rastrigin functions, the results improve as the limit value increases; the algorithm achieves the optimum results when l ≥ 750 for the Griewank function and l ≥ 375 for the Sphere and Rastrigin functions. When the standard deviation is considered, the differences between the mean objective function values produced for different limit values look very small on the Rosenbrock function, except for the smallest limit value, 10. These experiments show that 750, the value calculated by Eq. (9), is a suitable setting of the limit parameter.

5. Conclusion

In this paper, a new definition for the behaviour of the onlooker bees of the ABC algorithm was presented, and a novel version of ABC called quick ABC (qABC) was described. Experimental studies showed that the new definition significantly improves the convergence performance of the standard ABC when the neighbourhood radius r is set appropriately. The performance of the qABC algorithm was compared with the standard ABC and state-of-the-art algorithms, and the results showed that qABC produces promising results on the considered problems. In order to analyse the effect of the parameters limit and colony size on the performance of qABC, some experiments were also conducted, and time complexity analyses were carried out for the ABC and qABC algorithms. In the future, the adaptation of the parameter r can be studied to improve the performance of qABC. It is also noted that the qABC algorithm can be used for all types of optimization problems, such as binary, combinatorial and integer optimization problems.

References

[1] D. Karaboga, Artificial bee colony algorithm, Scholarpedia 5 (3) (2010) 6915. www.scholarpedia.org/article/Artificial_bee_colony_algorithm
[2] D. Karaboga, An Idea Based on Honey Bee Swarm for Numerical Optimization, Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
[3] D. Karaboga, B. Gorkemli, C. Ozturk, N. Karaboga, A comprehensive survey: artificial bee colony (ABC) algorithm and applications, Artif. Intell. Rev. (2012), http://dx.doi.org/10.1007/s10462-012-9328-0.
[4] P.W. Tsai, J.S. Pan, B.Y. Liao, S.C. Chu, Enhanced artificial bee colony optimization, Int. J. Innov. Comput. Inf. Control 5 (12) (2009) 5081–5092.
[5] H. Narasimhan, Parallel artificial bee colony (PABC) algorithm, in: Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), 2009, pp. 306–311.
[6] M. Subotic, M. Tuba, N. Stanarevic, Parallelization of the artificial bee colony (ABC) algorithm, in: Proceedings of the 11th WSEAS International Conferences on Neural Networks, Evolutionary Computing and Fuzzy Systems, WSEAS, Stevens Point, Wisconsin, USA, 2010, pp. 191–196.
[7] W. Zou, Y. Zhu, H. Chen, X. Sui, A clustering approach using cooperative artificial bee colony algorithm, Discret. Dyn. Nat. Soc. (2010), http://dx.doi.org/10.1155/2010/459796.
[8] M. Subotic, M. Tuba, N. Stanarevic, Different approaches in parallelization of the artificial bee colony algorithm, Int. J. Math. Model. Method Appl. Sci. 5 (4) (2011) 755–762.
[9] G. Zhu, S. Kwong, Gbest-guided artificial bee colony algorithm for numerical function optimization, Appl. Math. Comput. (2010), http://dx.doi.org/10.1016/j.amc.2010.08.049.
[10] X. Xu, X. Lei, Multiple sequence alignment based on ABC_SA, in: Proceedings of Artificial Intelligence and Computational Intelligence, Lecture Notes in Computer Science, vol. 6320, 2010, pp. 98–105.
[11] M. Tuba, N. Bacanin, N. Stanarevic, Guided artificial bee colony algorithm, in: Proceedings of the European Computing Conference (ECC11), 2011, pp. 398–403.
[12] G. Li, P. Niu, X. Xiao, Development and investigation of efficient artificial bee colony algorithm for numerical function optimization, Appl. Soft Comput. (2011), http://dx.doi.org/10.1016/j.asoc.2011.08.040.
[13] A. Banharnsakun, B. Sirinaovakul, T. Achalakul, Job shop scheduling with the best-so-far ABC, Eng. Appl. Artif. Intell. 25 (3) (2012) 583–593.
[14] X. Bi, Y. Wang, An improved artificial bee colony algorithm, in: Proceedings of the 3rd International Conference on Computer Research and Development (ICCRD), vol. 2, 2011, pp. 174–177.
[15] W.F. Gao, S.Y. Liu, A modified artificial bee colony algorithm, Comput. Oper. Res. 39 (3) (2012) 687–697.
[16] E. Mezura-Montes, O. Cetina-Dominguez, Empirical analysis of a modified artificial bee colony for constrained numerical optimization, Appl. Math. Comput. 218 (22) (2012) 10943–10973.
[17] N. Bacanin, M. Tuba, Artificial bee colony (ABC) algorithm for constrained optimization improved with genetic operators, Stud. Inf. Control 21 (2) (2012) 137–146.
[18] W.F. Gao, S.Y. Liu, F. Jiang, An improved artificial bee colony algorithm for directing orbits of chaotic systems, Appl. Math. Comput. 218 (7) (2011) 3868–3879.
[19] W.F. Gao, S.Y. Liu, L.L. Huang, A global best artificial bee colony algorithm for global optimization, J. Comput. Appl. Math. 236 (11) (2012) 2741–2753.
[20] Y. Liu, X.X. Ling, Y. Liang, G.H. Liu, Improved artificial bee colony algorithm with mutual learning, J. Syst. Eng. Electron. 23 (2) (2012) 265–275.
[21] F. Kang, J. Li, Q. Xu, Hybrid simplex artificial bee colony algorithm and its application in material dynamic parameter back analysis of concrete dams, J. Hydraul. Eng. 40 (6) (2009) 736–742.
[22] F. Kang, J. Li, Q. Xu, Structural inverse analysis by hybrid simplex artificial bee colony algorithms, Comput. Struct. 87 (13–14) (2009) 861–870.
[23] Y. Marinakis, M. Marinaki, N. Matsatsinis, A hybrid discrete artificial bee colony – GRASP algorithm for clustering, in: Proceedings of the International Conference on Computers and Industrial Engineering (CIE 2009), vols. 1–3, 2009, pp. 548–553.
[24] R. Xiao, T. Chen, Enhancing ABC optimization with Ai-net algorithm for solving project scheduling problem, in: Proceedings of the 7th International Conference on Natural Computation (ICNC), vol. 3, 2011, pp. 1284–1288.
[25] W. Bin, C.H. Qian, Differential artificial bee colony algorithm for global numerical optimization, J. Comput. 6 (5) (2011) 841–848.
[26] T.K. Sharma, M. Pant, Differential operators embedded artificial bee colony algorithm, Int. J. Appl. Evol. Comput. 2 (3) (2011) 1–14.
[27] F. Kang, J. Li, Z. Ma, H. Li, Artificial bee colony algorithm with local search for numerical optimization, J. Softw. 6 (3) (2011) 490–497.
[28] T.J. Hsieh, H.F. Hsiao, W.C. Yeh, Mining financial distress trend data using penalty guided support vector machines based on hybrid of particle swarm optimization and artificial bee colony algorithm, Neurocomputing 82 (2012) 196–206.
[29] A. Abraham, R.K. Jatoth, A. Rajasekhar, Hybrid differential artificial bee colony algorithm, J. Comput. Theor. Nanosci. 9 (2) (2012) 249–257.
[30] D. Karaboga, B. Gorkemli, A quick artificial bee colony – qABC – algorithm for optimization problems, in: Proceedings of the 2012 International Symposium on Innovations in Intelligent Systems and Applications (INISTA), 2–4 July, Turkey, 2012, http://dx.doi.org/10.1109/INISTA.2012.6247010.
[31] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Appl. Math. Comput. 214 (1) (2009) 108–132.
[32] P.N. Suganthan, N. Hansen, J.J. Liang, K. Deb, Y.P. Chen, A. Auger, S. Tiwari, Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization, Technical Report, Nanyang Technological University, Singapore, 2005.
[33] S. Garcia, D. Molina, M. Lozano, F. Herrera, A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 special session on real parameter optimization, J. Heuristics 15 (6) (2009) 617–644.