Applied Mathematics and Computation 219 (2012) 3575–3589
Simulated annealing based artificial bee colony algorithm for global numerical optimization
Shi-Ming Chen *, Ali Sarosh, Yun-Feng Dong
School of Astronautics, Beijing University of Aeronautics and Astronautics, Beijing 100191, PR China
Keywords: Artificial bee colony; Simulated annealing algorithm; Global best guided ABC; Global numerical optimization; Swarm intelligence; Optimization
Abstract: The artificial bee colony (ABC) algorithm is a global optimization algorithm that has been shown to be competitive with conventional population-based algorithms such as the genetic algorithm (GA) and particle swarm optimization (PSO). However, ABC still has a shortcoming: its convergence rate is poor in some situations. Inspired by the simulated annealing algorithm, a simulated annealing based ABC algorithm (SAABC) is proposed, in which simulated annealing is introduced into the employed bees' search process to improve the exploitation ability of the algorithm. The algorithms are tested on a set of numerical benchmark functions with different dimensions. The results show that SAABC outperforms the ABC and global best guided ABC (GABC) algorithms in most of the experiments. © 2012 Elsevier Inc. All rights reserved.
1. Introduction

The artificial bee colony (ABC) algorithm was proposed by Karaboga in [1]; it is based on swarm intelligence and is applied to global optimization problems [1]. It is inspired by the intelligent foraging behavior of honey bee swarms. Compared with the genetic algorithm (GA) and particle swarm optimization (PSO) [2,3], ABC has lower computational complexity, is easier to program, and performs well, which has raised great interest among researchers in recent years. The ABC algorithm has been extended to a multitude of design problems, including constrained optimization [4], training of neural networks [5], design of digital IIR filters [6], solution of TSP problems [1], image processing and pattern recognition [7–9], structural optimization [10–13] and data clustering [14–19]. Akay and Karaboga [20] gave a modified version of the ABC algorithm for real-parameter optimization. Xu et al. [21] discussed an improved ABC algorithm based on chaos theory for the path planning problem of Uninhabited Combat Air Vehicles (UCAV). Pan et al. [22] proposed a discrete artificial bee colony (DABC) algorithm for the lot-streaming flow shop scheduling problem. Bernardino et al. [23] developed an efficient traffic loading algorithm using ABC to find the routing of the Weighted Ring Arc-Loading Problem (WRALP). Yeh and Hsieh [24] proposed a penalty-guided ABC to solve the reliability redundancy allocation problem. Although ABC has been successfully applied in many fields, according to [25] the basic ABC algorithm is good at exploration but poor at exploitation. Both exploration and exploitation are necessary in population-based optimization algorithms [25,26]. While exploration is the independent search for an optimal solution, exploitation uses existing knowledge to find better solutions. In practice the two abilities contradict each other, and to achieve good optimization performance they must be well balanced.
* Corresponding author. Address: 37 Xueyuan Road, New Main Building, Office B-324, Haidian District, Beijing 100191, PR China.
E-mail addresses: [email protected] (S.-M. Chen), [email protected] (A. Sarosh), [email protected] (Y.-F. Dong).
© 2012 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.amc.2012.09.052
Like most population-based algorithms, ABC can take a long time to converge because of its stochastic nature. To improve its convergence characteristics and to prevent it from getting stuck in local solutions, chaos theory [27,21,16] was introduced into the algorithm to replace the random number generator; the results showed that this increased solution quality and improved global search capability. Local search techniques such as the Rosenbrock method (RM) and the Nelder–Mead simplex search (NMSS) were introduced into the ABC algorithm by Kang et al. [28,10], and comparisons showed that the resulting methods perform better than the basic ABC algorithm. The best-so-far ABC algorithm presented by Banharnsakun et al. [26] modifies the solution update of the onlooker bees in three ways. First, the best feasible solutions found so far are shared globally among the entire population. Second, the search for new candidates uses a larger radius early in the search process and gradually reduces it as the process approaches convergence. Third, a robust calculation is used to determine and compare the quality of alternative solutions. The results demonstrated that this method produces higher-quality solutions with faster convergence. To improve exploitation, the global-best-solution guided ABC (GABC) was proposed by Zhu and Kwong [25]; in GABC, information about the global best solution is incorporated into the solution search equation, which improves the convergence rate and the ability to converge to a global optimum. The experimental results for these improved ABC algorithms showed that they can successfully solve numerical optimization problems; in some cases, however, convergence speed remains an issue. To overcome this shortcoming and improve the performance of the ABC algorithm, this paper introduces a simulated annealing based ABC algorithm.

Simulated annealing (SA) is a general-purpose stochastic optimization method that has proven quite effective at finding global optima for many problems. Recent interest began with the works of Kirkpatrick et al. [29] and Cerny [30]. The algorithm was initially proposed by Kirkpatrick et al. [29], who drew an analogy between the cooling of a fluid and the optimization of a complex system. Simulated annealing is based on the theory of Markov chains: it accepts or rejects randomly generated 'moves' on the basis of a probability related to an 'annealing' temperature [31]. It can accept moves that change the value of the objective function in the direction opposite to the desired long-term trend. Thus, for a global minimization problem, a move that increases the value of the objective function may be accepted as part of a full series of moves whose general trend is to decrease the value of the objective function. In this way simulated annealing can explore the full solution space and escape from local optima. It has proven to be a practical method for solving combinatorially large optimization problems; Press [32] reports that simulated annealing can obtain solutions to combinatorial optimization problems within a specified 'distance' of the global optimum. ABC is a swarm intelligence algorithm, while SA focuses on the evolution process of individuals.
If the individual evolution process of the ABC algorithm is improved, the convergence speed increases and the algorithm obtains better results, so a new algorithm combining the two is proposed. In this paper the employed bees' search process is modified by the SA algorithm so as to increase the convergence rate; hereinafter the SA based ABC algorithm is referred to as the SAABC algorithm. The design of the computational experiments for testing the SAABC heuristic follows the guidelines of Barr et al. [33]. Experimental results on numerical function optimization show that SAABC is superior to the ABC and GABC algorithms in most cases. Details of the work are presented in the subsequent sections. Section 2 summarizes the basic ABC, SA and SAABC algorithms. Section 3 describes the test functions and parameters. In Section 4 the optimal parameters of SAABC are chosen and the numerical results obtained for the ABC, GABC and SAABC algorithms are compared. Conclusions are drawn from the numerical comparisons of the aforementioned methods.

2. Optimization theory and methods

2.1. Basic artificial bee colony algorithm

In a natural bee swarm there are generally three kinds of honey bees that search for food: employed bees, onlookers, and scouts. Employed bees are responsible for exploiting the nectar sources; they explore the site beforehand and inform the onlooker bees in the hive about the quality of the food at the source sites they are exploiting. Onlooker bees wait in the hive and decide which food source to exploit based on the information shared by the employed bees. Scouts search the environment randomly in order to find a new food source, either depending on an internal motivation or based on possible external clues [20]. In the ABC algorithm [19], each employed bee uses its currently associated food source to determine a new neighboring source. Eq. (1) shows how the position of this new food source is determined:
v_ij = x_ij + φ_ij (x_ij − x_kj),   (1)

where j ∈ {1, 2, ..., D} and k ∈ {1, 2, ..., SN} are randomly chosen indexes; D is the number of variables (the dimension), and SN is the number of food sources, which equals the number of employed bees. Although k is determined randomly, it has to be different from i. φ_ij is a randomly produced number in [−1, 1].
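As a concrete illustration of Eq. (1), the following is a minimal sketch of the employed bee neighbor search; the array layout, function name and use of NumPy are our own illustrative choices, not the authors' code:

```python
import numpy as np

def generate_neighbor(foods, i, rng):
    """Eq. (1): v_ij = x_ij + phi_ij * (x_ij - x_kj) for one random dimension j."""
    sn, dim = foods.shape              # SN food sources, D variables
    j = rng.integers(dim)              # random dimension j to perturb
    k = rng.integers(sn)
    while k == i:                      # partner k must differ from source i
        k = rng.integers(sn)
    phi = rng.uniform(-1.0, 1.0)       # phi_ij drawn from [-1, 1]
    v = foods[i].copy()
    v[j] = foods[i, j] + phi * (foods[i, j] - foods[k, j])
    return v

rng = np.random.default_rng(0)
foods = rng.uniform(-100.0, 100.0, size=(40, 20))   # e.g. 40 sources, D = 20
v = generate_neighbor(foods, i=0, rng=rng)
```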
As can be seen from Eq. (1), if the nectar amount of the new food source is higher than that of the currently associated one, the employed bee moves to the new food source; otherwise it keeps the old one. After all employed bees complete the search process, they share the information about their food sources with the onlooker bees. An onlooker bee evaluates the nectar information obtained from all employed bees and chooses a food source with a probability related to its nectar amount. Eq. (2) gives this probability function, known as the roulette wheel selection method; it gives better candidates a greater chance of being selected:
p_i = fit_i / Σ_{n=1}^{SN} fit_n,   (2)
where fit_i is the fitness value of solution i, which is proportional to the nectar amount of the food source at position i. The fitness function is defined as
fit_i = 1/(1 + f_i) if f_i ≥ 0;  fit_i = 1 + |f_i| if f_i < 0,   (3)
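A short sketch of Eqs. (2) and (3); the names and the example values are illustrative:

```python
import numpy as np

def fitness(f):
    """Eq. (3): map objective values f_i to non-negative fitness values."""
    f = np.asarray(f, dtype=float)
    return np.where(f >= 0, 1.0 / (1.0 + f), 1.0 + np.abs(f))

def selection_probabilities(f):
    """Eq. (2): roulette-wheel probabilities proportional to fitness."""
    fit = fitness(f)
    return fit / fit.sum()

p = selection_probabilities([0.5, 2.0, -1.0])   # three example objective values
rng = np.random.default_rng(0)
i = rng.choice(len(p), p=p)                     # an onlooker picks source i
```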
where f_i is the objective function value of solution i. Once all onlooker bees have selected their food sources, each of them determines a new neighboring food source and computes its nectar amount. Provided that the nectar content of the new source is higher than that of the previous one, the bee memorizes the new position and discards the old one. A food source that has been exhausted by the employed and onlooker bees is marked as 'abandoned', and its employed bee becomes a scout. This means that if a position cannot be improved within a predetermined number of cycles, called the limit parameter, the food source is assumed to be abandoned and the employed bee of that source becomes a scout. At that position the scout randomly generates a new solution as given in Eq. (4). Assume the abandoned source is x_i and j ∈ {1, 2, ..., D}, where D is the dimension of the solution vector; the scout discovers a new food source that replaces x_i:
x_i^j = x_min^j + rand(0, 1) (x_max^j − x_min^j),   (4)

where j is determined randomly for each component of the new solution.
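A sketch of the scout step of Eq. (4); applying the redraw to every component of the abandoned source is our reading of the usual ABC formulation:

```python
import numpy as np

def scout_replace(x_min, x_max, rng):
    """Eq. (4): redraw an abandoned food source uniformly inside the bounds."""
    x_min = np.asarray(x_min, dtype=float)
    x_max = np.asarray(x_max, dtype=float)
    return x_min + rng.uniform(size=x_min.shape) * (x_max - x_min)

rng = np.random.default_rng(1)
new_source = scout_replace([-100.0] * 20, [100.0] * 20, rng)   # Sphere bounds, D = 20
```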
2.2. An overview of simulated annealing algorithm

Simulated annealing (SA) is a stochastic relaxation technique that has its origin in statistical mechanics. The SA methodology draws its analogy from the annealing process of solids: a solid is heated to a high temperature and gradually cooled to allow it to crystallize. The heating allows the atoms to move randomly, and if the cooling is not done too rapidly the atoms have enough time to align themselves and reach a minimum energy state. This analogy carries over to combinatorial optimization, with the states of the solid corresponding to feasible solutions and the minimum energy state to the optimal solution. The SA algorithm repeats an iterative neighbor generation procedure and follows search directions that improve the objective function value [34]. While exploring the solution space, SA also accepts worse neighbor solutions in a controlled manner in order to escape from local minima. More precisely, in each iteration, for a current solution x characterized by an objective function value f(x), a neighbor x′ is selected from the neighborhood N(x) of x, defined as the set of all its immediate neighbors. For each move, the objective difference Δ = f(x′) − f(x) is evaluated. A worse x′ can still be accepted, with the probability given by Eq. (5):
P_s = exp(−Δ / T),   (5)
The acceptance probability P_s is compared with a number r ∈ (0, 1) generated randomly, and x′ is accepted whenever P_s > r. T is the temperature, which is controlled by a cooling scheme. There exist theoretical schedules [34] that guarantee asymptotic convergence toward the optimal solution but require infinite computing time; in practice, much simpler finite-time schedules are preferred even though they do not guarantee an optimal solution. The SA algorithm includes several specific components: a neighbor generation procedure, objective function evaluation, a method for assigning the initial temperature, a procedure to change the temperature, and a cooling scheme including stopping criteria. The basic structure of the SA algorithm is presented in Fig. 1.
Fig. 1. Basic structure of simulated annealing algorithm.
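Since only the caption of Fig. 1 survives in this text version, the following sketch captures the structure it describes: Metropolis acceptance of Eq. (5) inside a loop with geometric cooling. The neighbor move, parameter values and one-dimensional objective are illustrative assumptions:

```python
import math
import random

def simulated_annealing(f, x0, t0=100.0, beta=0.9, steps=50, trials=20, width=0.1):
    """Generic SA skeleton: Eq. (5) acceptance plus a simple cooling scheme."""
    x, fx = x0, f(x0)
    t = t0
    for _ in range(steps):                        # one temperature level per step
        for _ in range(trials):                   # fixed number of trials per level
            xn = x + random.uniform(-width, width)   # neighbor of x
            fn = f(xn)
            delta = fn - fx
            # accept improving moves; accept worse ones with prob exp(-delta/T)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                x, fx = xn, fn
        t *= beta                                 # geometric cooling
    return x, fx

# one-dimensional toy objective with its global minimum at x = 2
best_x, best_f = simulated_annealing(lambda x: (x - 2.0) ** 2, x0=10.0)
```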
2.3. Simulated annealing based artificial bee colony algorithm (SAABC)

According to the solution search equation of the ABC algorithm, Eq. (1), a new candidate solution is generated by moving the old solution towards another solution selected randomly from the population. This increases the probability of locating the global optimum, but it cannot guarantee a better solution, so the convergence speed is slow. If every new candidate solution were better than the old one, convergence would be faster; but if only better solutions are accepted, the algorithm can get trapped in a local optimum. Thus, to increase the convergence speed and to produce better-quality solutions, the SA algorithm is introduced into the employed bee search process. In this case SA helps find a better solution for the best bee of each generation, while accepting worse neighboring solutions with a certain probability. In each iteration, the probability of accepting worse solutions is higher early in the search process and decreases as the cooling process takes place. As described in the previous subsection, the important components of SA are the neighbor generation procedure, the objective function evaluation, a procedure to change the temperature, and a cooling scheme including stopping criteria. The neighbor generation procedure uses the employed bee search rule of Eq. (1), and the objective function evaluation is the same as in the ABC algorithm. The SA probability of Eq. (5) is adopted, where Δ represents the fitness error. In this paper a relative error replaces the absolute error, and a positive factor α is introduced to control the speed. The SA probability of a neighbor solution in Eq. (5) is therefore replaced by Eq. (6):
P_s = exp( −(f(v_ij) − f(x_ij)) / (f(v_ij) · α · T) ),   (6)
where v_ij is produced by Eq. (1), and f(v_ij) and f(x_ij) are the objective function values. Since Δ, α and T are positive quantities, when T is high the probability of acceptance is high, whereas for the same values of Δ and α, when T is very low the probability of acceptance asymptotically converges to 0. The initial probability of acceptance is high, giving even worse solutions a good chance of acceptance and ensuring that the algorithm does not get stuck in a local optimum. In the later stages of the search, the probability of accepting worse solutions goes down and the algorithm becomes greedier, focusing the search on the potentially optimal region of the search space. In any implementation of SA a cooling schedule must be specified [35]. The temperature T is set to an initial value T_0, which in general is relatively high, so that most trials are accepted and there is little chance of the algorithm zooming in on a local optimum in the early stages. A scheme is then required for reducing T and for deciding how many trials are to be attempted at each value of T. Finally, a stopping criterion is required to terminate the algorithm. The choice of a cooling schedule clearly has an important bearing on the performance of an SA algorithm.
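A minimal sketch of the acceptance probability of Eq. (6); the restored minus sign follows the discussion above (high T gives probabilities near 1, low T drives them to 0), and f_new ≠ 0 is assumed:

```python
import math

def saabc_acceptance_prob(f_new, f_old, alpha, t):
    """Eq. (6): P_s = exp(-(f_new - f_old) / (f_new * alpha * t))."""
    return math.exp(-(f_new - f_old) / (f_new * alpha * t))

# a worse move (f_new > f_old) early in the search (large T) is almost surely
# accepted with the paper's optimal alpha = 5e7:
p_early = saabc_acceptance_prob(f_new=1.2, f_old=1.0, alpha=5e7, t=100.0)
```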
Fig. 2. The simulated annealing search process pseudo-code of the SAABC algorithm.
Fig. 3. Main steps of SAABC algorithm.
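As a companion to Fig. 3, here is a compact sketch of one reading of the full SAABC cycle: an SA-modified employed bee phase (Eqs. (1) and (6), with the geometric cooling of Eq. (7) below), the onlooker phase with roulette-wheel selection (Eqs. (2) and (3)), and the scout phase (Eq. (4)). Whether SA runs for every employed bee or only for the best one, and how trial counters interact with SA-accepted worse moves, are not fully specified in the text, so this sketch makes plain assumptions on both points:

```python
import numpy as np

def saabc(f, bounds, sn=40, dim=20, limit=100, max_cycles=2000,
          t0=100.0, alpha=5e7, beta=0.9, scn=5, seed=0):
    """Sketch of the SAABC cycle (assumption: SA runs for every employed bee)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, size=(sn, dim))
    vals = np.apply_along_axis(f, 1, foods)
    trials = np.zeros(sn, dtype=int)

    def neighbor(i):                      # Eq. (1)
        j, k = rng.integers(dim), rng.integers(sn)
        while k == i:
            k = rng.integers(sn)
        v = foods[i].copy()
        v[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        return np.clip(v, lo, hi)

    for _ in range(max_cycles):
        for i in range(sn):               # employed bees with SA acceptance
            t = t0
            for _ in range(scn):          # SCN cycles of SA search
                v = neighbor(i)
                fv = f(v)
                # Eq. (6); abs() and the tiny epsilon are our guards for f(v) <= 0
                p = np.exp(-(fv - vals[i]) / (abs(fv) * alpha * t + 1e-300))
                if fv < vals[i] or rng.random() < p:
                    foods[i], vals[i], trials[i] = v, fv, 0
                else:
                    trials[i] += 1
                t *= beta                 # Eq. (7)
        fit = np.where(vals >= 0, 1 / (1 + vals), 1 + np.abs(vals))  # Eq. (3)
        prob = fit / fit.sum()                                       # Eq. (2)
        for _ in range(sn):               # onlooker bees, greedy selection
            i = rng.choice(sn, p=prob)
            v = neighbor(i)
            fv = f(v)
            if fv < vals[i]:
                foods[i], vals[i], trials[i] = v, fv, 0
            else:
                trials[i] += 1
        worn = int(np.argmax(trials))     # scout phase, Eq. (4)
        if trials[worn] > limit:
            foods[worn] = rng.uniform(lo, hi, size=dim)
            vals[worn], trials[worn] = f(foods[worn]), 0
    best = int(np.argmin(vals))
    return foods[best], vals[best]

best_x, best_f = saabc(lambda x: float(np.sum(x ** 2)),
                       bounds=(-100.0, 100.0), max_cycles=50)  # Sphere, short run
```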
A typical finite-time implementation of SA decreases the temperature T in s steps, starting from an initial value T_0 and using an attenuation factor β ∈ (0, 1). The initial temperature T_0 is supposed to be high enough to allow acceptance of any new neighbor proposed in the first step. In each step s, the procedure generates a fixed number of neighbor solutions and evaluates them using the current temperature value given by Eq. (7):
T_s = β^s · T_0.   (7)
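The geometric schedule of Eq. (7) in two lines (T_0 = 100 and β = 0.9 are the values found optimal in Section 3.2):

```python
def temperature(s, t0=100.0, beta=0.9):
    """Eq. (7): T_s = beta**s * T_0."""
    return (beta ** s) * t0

schedule = [temperature(s) for s in range(5)]   # [100.0, 90.0, 81.0, 72.9, 65.61]
```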
The pseudo-code for the simulated annealing search process in the SAABC algorithm is given in Fig. 2. The performance of SA depends on the definition of several control parameters: in Fig. 2, SCN is the maximum number of cycles for each employed bee search process, while the initial temperature T_0 and α decide the acceptance probability in the first iteration of the algorithm. The most commonly used temperature-reducing function is geometric, typically with 0.75 ≤ β ≤ 0.95 [34]. The main steps of the SAABC algorithm are described in Fig. 3.

3. Designing and executing the experiment

3.1. Selection of test functions

In order to test the performance of the SAABC algorithm for numerical function optimization, several numerical benchmark functions from Refs. [2,3,25] are used here. According to their properties, these six functions are divided into two groups: Sphere, SumSquares and Dixon–Price are unimodal functions, and the others are multimodal functions.

(a) Sphere function, Eq. (8):
f_1(x) = Σ_{i=1}^{D} x_i²,   (8)
where x = [x_1, x_2, ..., x_D], the initial range of x is [−100, 100]^D, and D denotes the dimension of the solution space. The minimum solution of the Sphere function is x_opt = [0, 0, ..., 0], with f_1(x_opt) = 0.

(b) SumSquares function, Eq. (9):
f_2(x) = Σ_{i=1}^{D} i·x_i²,   (9)
where the initial range of x is [−10, 10]^D. The minimum solution of the SumSquares function is x_opt = [0, 0, ..., 0], with f_2(x_opt) = 0.

(c) Dixon–Price function, Eq. (10):
f_3(x) = Σ_{i=2}^{D} i(2x_i² − x_{i−1})² + (x_1 − 1)²,   (10)
where the initial range of x is [−65.535, 65.535]^D. The minimum solution of the Dixon–Price function is x_opt = [0, 0, ..., 0], with f_3(x_opt) = 0.

(d) Rastrigin function, Eq. (11):
f_4(x) = Σ_{i=1}^{D} [x_i² − 10 cos(2πx_i) + 10],   (11)
where the initial range of x is [−5.12, 5.12]^D. The minimum solution of the Rastrigin function is x_opt = [0, 0, ..., 0], with f_4(x_opt) = 0.

(e) Griewank function, Eq. (12):
f_5(x) = Σ_{i=1}^{D} (x_i − 100)²/4000 − Π_{i=1}^{D} cos((x_i − 100)/√i) + 1,   (12)
where the initial range of x is [−600, 600]^D. The minimum solution of the Griewank function is x_opt = [100, 100, ..., 100], with f_5(x_opt) = 0.

(f) Ackley function, Eq. (13):
f_6(x) = −20 exp(−0.2 √((1/D) Σ_{i=1}^{D} x_i²)) − exp((1/D) Σ_{i=1}^{D} cos(2πx_i)) + 20 + e,   (13)
where the initial range of x is [−32.768, 32.768]^D. The minimum solution of the Ackley function is x_opt = [0, 0, ..., 0], with f_6(x_opt) = 0.

3.2. Design of experiments

For each benchmark function, two dimension sizes of the solution space, 20 and 40, were tested. The reported results are the means and standard deviations of the experimental data.

(a) ABC algorithm. The honey bee population size is 80 and the maximum number of generations is 2000. Each experiment was repeated 30 times.

(b) GABC algorithm. The honey bee population size and the maximum number of generations are the same as for the ABC algorithm, and each experiment was again repeated 30 times. As shown in [25], the parameter C of the GABC algorithm plays an important role in balancing the exploration and exploitation of the new candidate solution search. For the test functions, as the value of C increases, the best mean value first decreases and then, beyond a certain point, begins to increase; this behavior is the same in most cases. GABC with C = 1.5 has the best performance.

(c) SAABC algorithm. The honey bee population size and the maximum number of generations are the same as for the other algorithms, and each experiment was repeated 30 times as in the previous cases. To investigate the effect of the parameters T_0, α and β on the performance of SAABC, the algorithm was tested with T_0 = 10, 50, 100, 500; α = 5e3, 5e5, 5e7, 5e9; and β = 0.85, 0.9, 0.95, 0.99. Repeating the experiment N times while changing one parameter at a time would take 4³·N runs to finish the parameter selection. To reduce the computation time, the orthogonal experimental design method is used for choosing the parameters [33]. With three parameters at four levels each, we choose the L16(4^5) orthogonal table to analyze the three parameters. The orthogonal table and experimental results are shown in Table 1, where the first column is the experiment number and columns 2 to 4 show the settings of the three parameters; the numbers '1', '2', '3' and '4' refer to the four levels (for example, '1' in column T_0 refers to the value 10). The last column is the logarithm of the minimum deviation of the Dixon–Price function with dimension size 40 obtained by SAABC using the parameters of the corresponding row.
Table 1. Orthogonal experimental table and results (levels 1–4: T_0 = 10, 50, 100, 500; α = 5e3, 5e5, 5e7, 5e9; β = 0.85, 0.9, 0.95, 0.99).

Exp. no.   T_0       α         β         Result
1          1 (10)    2 (5e5)   2 (0.9)   3.53
2          1 (10)    1 (5e3)   1 (0.85)  0.78
3          2 (50)    4 (5e9)   3 (0.95)  2.39
4          1 (10)    3 (5e7)   3 (0.95)  2.33
5          4 (500)   3 (5e7)   2 (0.9)   3.84
6          1 (10)    4 (5e9)   4 (0.99)  1.97
7          4 (500)   1 (5e3)   4 (0.99)  1.04
8          3 (100)   4 (5e9)   2 (0.9)   2.00
9          3 (100)   3 (5e7)   1 (0.85)  3.63
10         4 (500)   4 (5e9)   1 (0.85)  2.19
11         3 (100)   2 (5e5)   4 (0.99)  2.76
12         2 (50)    2 (5e5)   1 (0.85)  2.98
13         2 (50)    3 (5e7)   4 (0.99)  2.92
14         2 (50)    1 (5e3)   2 (0.9)   0.95
15         4 (500)   2 (5e5)   3 (0.95)  0.96
16         3 (100)   1 (5e3)   3 (0.95)  0.97
Table 2. Main effect analysis of the three factors.
         T_0     α        β
K1       8.61    3.74     9.59
K2       9.24    10.23    10.31
K3       9.37    12.72    6.65
K4       8.02    8.55     8.69
Range    1.35    8.98     3.66
Fig. 4. Convergence curves of the ABC, GABC and SAABC algorithms for the Sphere function (D = 20 and D = 40; mean of best function values vs. time in seconds).
Table 2 shows the main effects of the three parameters computed from the results in Table 1. The first row of Table 2 gives, for each parameter, the sum of the deviations of all experiments in which that parameter is at level 1; rows 2 to 4 are analogous for levels 2, 3 and 4. Eqs. (14)–(17) show how the elements of column 2 of Table 2 are obtained:
K_11 = R_1 + R_2 + R_4 + R_6,   (14)
K_21 = R_3 + R_12 + R_13 + R_14,   (15)
K_31 = R_8 + R_9 + R_11 + R_16,   (16)
K_41 = R_5 + R_7 + R_10 + R_15,   (17)
where K_i1 are the elements of column 2 of Table 2, and R_j refers to the elements of column 5 of Table 1. Row 5 gives the range of the effect values for each parameter. According to the ranges shown in Table 2, the parameters can be ranked by the degree of their influence on the experimental results as α > β > T_0. As Table 2 also shows, the relative strength of the four levels of each parameter is T_0(3) > T_0(2) > T_0(1) > T_0(4), α(3) > α(2) > α(4) > α(1), and β(2) > β(1) > β(4) > β(3). Therefore the optimal parameters of the SAABC algorithm are T_0(3) = 100, α(3) = 5e7 and β(2) = 0.9; a short computational check of this analysis is given below.

3.3. Performance measurements

In order to exhibit the performance of the SAABC heuristic vis-à-vis the other algorithms (ABC and GABC), graphs of solution quality are plotted as a function of time in Figs. 4–9. These graphs depict the variation of the fitness function values over time. Each heuristic is used to solve the mathematical test functions of Section 3.1 for dimension sizes D = 20 and 40. Comparison of the plots depicts the speed and rate of convergence to the optimal solution and hence the solution quality.
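As a concrete check of Eqs. (14)–(17), the sketch below recomputes the K rows and ranges of Table 2 from the level assignments and results transcribed from Table 1; small differences from the published values come from the two-decimal rounding of the results:

```python
import numpy as np

# level index (1-4) of each factor in the 16 runs of Table 1, and results R_j
levels = {
    "T0":    [1, 1, 2, 1, 4, 1, 4, 3, 3, 4, 3, 2, 2, 2, 4, 3],
    "alpha": [2, 1, 4, 3, 3, 4, 1, 4, 3, 4, 2, 2, 3, 1, 2, 1],
    "beta":  [2, 1, 3, 3, 2, 4, 4, 2, 1, 1, 4, 1, 4, 2, 3, 3],
}
results = np.array([3.53, 0.78, 2.39, 2.33, 3.84, 1.97, 1.04, 2.00,
                    3.63, 2.19, 2.76, 2.98, 2.92, 0.95, 0.96, 0.97])

for factor, lv in levels.items():
    lv = np.array(lv)
    k = [results[lv == level].sum() for level in (1, 2, 3, 4)]  # Eqs. (14)-(17)
    print(factor, np.round(k, 2), "range =", round(max(k) - min(k), 2))
# T0    [8.61  9.24  9.36  8.03]  range = 1.33  (Table 2: 9.37, 8.02, 1.35)
# alpha [3.74 10.23 12.72  8.55]  range = 8.98
# beta  [9.58 10.32  6.65  8.69]  range = 3.67  (Table 2: 9.59, 10.31, 3.66)
```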
Fig. 5. Convergence curves of the ABC, GABC and SAABC algorithms for the SumSquares function (D = 20 and D = 40; mean of best function values vs. time in seconds).
Fig. 6. Convergence curves of the ABC, GABC and SAABC algorithms for the Dixon–Price function (D = 20 and D = 40; mean of best function values vs. time in seconds).
In order to evaluate which heuristic produces superior solutions, the speed of computation is determined for each algorithm. The heuristics are timed to determine (a) the time to the best-found solution and (b) the total run time. Timings are recorded for each heuristic when applied to the mathematical test functions in the two dimension sizes (Table 4). Finally, robustness is portrayed by the ability of each heuristic to perform well over a wide range of test problems. For this purpose, each heuristic is used to solve the six test functions of Section 3.1. The sensitivity of the results of each heuristic is then plotted (Fig. 11) for the Sphere function by varying the optimal variable x_i within ±0.5 x_i while x_i → x_opt.

4. Test results and discussion

4.1. Computational environment

The computing environment used for the assessment of the heuristics is identical in all cases: a Pentium(R) Dual-Core CPU at 2.7 GHz with 2 GB of random access memory. The operating system is Windows XP SP3, and Matlab V7.9.0 is used for implementation of the code and computation of the results. Computation timings are taken from the system clock, and provision is made in the algorithms for recording the time to the best-found solution and the total run time.
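The sensitivity test described in Section 3.3 (plotted in Fig. 11) can be sketched as follows: one coordinate of a solution near the optimum is swept over ±50% of its value and the percent change in fitness is recorded. The exact perturbation protocol is our reading of the text, and the Sphere stand-in for a found optimum is illustrative:

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def sensitivity(f, x_opt, i, scales):
    """Percent change in f when coordinate i of x_opt is scaled by each factor."""
    base = f(x_opt)
    out = []
    for s in scales:
        x = x_opt.copy()
        x[i] *= s
        out.append(100.0 * (f(x) - base) / base)
    return out

rng = np.random.default_rng(2)
x_best = 1e-9 * rng.standard_normal(20)          # stand-in for a found optimum
curve = sensitivity(sphere, x_best, i=19,        # i = 20 in the paper, 0-based here
                    scales=np.linspace(0.5, 1.5, 11))
```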
Fig. 7. Convergence curves of the ABC, GABC and SAABC algorithms for the Rastrigin function (D = 20 and D = 40; mean of best function values vs. time in seconds).
In order to measure the quality of solutions, the results of the heuristics are compared with the optimal solution. Since the optimal value of every employed test function is zero, the mean best values for each heuristic in the two dimension sizes, as shown in Table 3, are used to measure solution quality.

4.2. Execution of the experiment: results of ABC, GABC and SAABC

The six functions with different search dimensions are used to simulate and compare ABC, Gbest-guided ABC (GABC) and simulated annealing based ABC (SAABC). The results are shown in Table 3. The first column lists the benchmark functions; the 'D' column gives the search dimension; the 'Mean' columns contain the average best values of the benchmark functions; and the 'SD' columns give the standard deviations of the best values.

For the Sphere and SumSquares functions, GABC has a better best mean value than ABC, but for all dimensions SAABC is better than both of the other algorithms, and its standard deviation is smaller than those of ABC and GABC for all dimensions.

For the Dixon–Price function, the data show that GABC has a worse best mean value than ABC, which indicates that GABC is not suitable for the Dixon–Price function. For all dimensions, however, SAABC is better than both of the other algorithms.
Fig. 8. Convergence curves of the ABC, GABC and SAABC algorithms for the Griewank function (D = 20 and D = 40; mean of best function values vs. time in seconds).
The standard deviation of the SAABC algorithm is smaller than those of ABC and GABC for all dimensions, so SAABC again shows better performance than the other algorithms.

For the Rastrigin function, the means and standard deviations of the best values of GABC and SAABC are 0 when the dimension size is 20, both better than ABC. When the dimension size is 40, the average best value and its deviation for GABC are smaller than for ABC, but SAABC again performs better than GABC. Hence SAABC is superior to the ABC and GABC algorithms.

For the Griewank function, GABC has a greater (worse) best mean value than ABC for all dimensions, especially when the dimension size is 20. For all dimensions SAABC is better than both of the other algorithms, and its standard deviation is smaller for all dimensions.

For the Ackley function, GABC has a better best mean value than ABC for both dimension sizes; the gap is especially large for D = 40, where ABC stalls near 1.40e−9 (Table 3). For all dimensions SAABC is better than both of the other algorithms, and its standard deviation is smaller than those of ABC and GABC in nearly all cases.

Thus, from the simulation results on the six test functions with different dimensions, SAABC obtains better results than ABC and GABC. For all test functions the precision of the SAABC results is at least one order of magnitude better than ABC and GABC, and in the case of the Dixon–Price and Ackley functions SAABC's accuracy is several orders of magnitude higher; SAABC is therefore particularly well suited to these two functions.
Fig. 9. Convergence curves of the ABC, GABC and SAABC algorithms for the Ackley function (D = 20 and D = 40; mean of best function values vs. time in seconds).
The convergence curves of the ABC, GABC and SAABC algorithms are plotted for the test functions in dimension sizes 20 and 40 in Figs. 4–9. The convergence curves for the Sphere and SumSquares functions (Figs. 4 and 5) are nearly identical across the three algorithms: for both D = 20 and D = 40, ABC converges at the slowest speed and locates only a local optimum; GABC has the fastest convergence speed but fails to locate the global optimum; SAABC not only converges fast but also manages to locate the true global optimum.

The convergence curves of the Dixon–Price function (Fig. 6) show that GABC converges at the fastest rate but locates only a local optimal solution. ABC and SAABC continue to refine the search; however, ABC consistently locates a local optimum while SAABC produces globally optimal results. The behavior is the same for both dimension sizes.

The convergence curves of the Rastrigin function (Fig. 7) show that in both cases (D = 20 and 40) ABC converges at the slowest speed and locates a local optimum. GABC converges faster than ABC and locates the global optimum when D = 20, but fails to do so when D = 40. SAABC has the fastest convergence speed and successfully locates the true global optimum in both cases.

The convergence curves of the Griewank function show markedly different behavior of the ABC, GABC and SAABC curves as the dimension is changed from D = 20 to D = 40.
Fig. 10. Total run time comparison of the ABC, GABC and SAABC algorithms on the six benchmark functions (D = 20 and 40).
Fig. 11. Sensitivity analysis of the optimal result for the Sphere function (D = 20, i = 20): fitness value variation (percent) as x_i is varied between 0.5 x_i and 1.5 x_i.
Fig. 8 shows that for D = 20 the ABC curve converges at the slowest rate and locates a local optimum; GABC and SAABC converge at a similar speed, but GABC locates a local optimum while SAABC locates the globally optimal result. As the dimension is increased to D = 40, the convergence rates and final results of GABC and ABC become almost identical to each other, while SAABC still manages to locate the global optimum.

The convergence curves of the Ackley function (Fig. 9) also show different behavior for the two dimension sizes. When D = 20, ABC converges most slowly and acquires a local optimum, while SAABC and GABC converge at an identical rate, but the solution of SAABC is closer to the global optimum than that of GABC. When the dimension size is changed to D = 40, the global optimum is located by both GABC and SAABC, while ABC still locates only a local optimal solution.

From the results in Table 3 and the convergence plots in Figs. 4–9, it can be deduced that the ABC algorithm has the slowest convergence speed and tends to locate only local optima. The convergence speed of GABC is better than that of ABC, but it too gets trapped in local optima. The plots of SAABC show that, for all employed test functions, it has the fastest convergence rate and tends always to locate the true global optimum.
Table 3. Comparison of the SAABC, ABC and GABC algorithms on optimizing the six benchmark functions (means and standard deviations of the best values).

Function      D    ABC Mean    ABC SD      GABC Mean   GABC SD     SAABC Mean  SAABC SD
Sphere        20   3.86e−16    8.91e−17    2.33e−16    4.83e−17    1.20e−18    5.02e−19
Sphere        40   1.56e−15    3.50e−16    9.25e−16    1.16e−16    2.54e−17    1.22e−17
SumSquares    20   3.29e−16    8.05e−17    2.35e−16    4.50e−17    1.06e−18    5.74e−19
SumSquares    40   1.18e−15    1.91e−16    9.02e−16    1.22e−16    2.67e−17    1.28e−17
Dixon–Price   20   1.86e−4     2.55e−4     0.178       0.30        1.04e−10    1.47e−10
Dixon–Price   40   3.42e−2     2.35e−2     0.918       1.04        3.52e−5     3.18e−5
Rastrigin     20   1.23e−14    1.43e−14    0           0           0           0
Rastrigin     40   4.24e−11    1.86e−10    6.06e−14    4.46e−14    7.58e−15    1.97e−14
Griewank      20   2.51e−11    1.01e−10    2.47e−4     1.35e−3     7.92e−15    2.01e−14
Griewank      40   3.26e−11    1.48e−10    7.04e−11    2.70e−10    5.71e−15    6.87e−15
Ackley        20   2.97e−14    3.28e−15    2.03e−14    2.59e−15    7.88e−15    1.14e−15
Ackley        40   1.40e−9     6.38e−10    5.87e−14    4.47e−15    3.70e−14    4.67e−15
Table 4. Time comparison of the SAABC, ABC and GABC algorithms on optimizing the six benchmark functions.

                        BestSolutionTime (s)         TotalRunTime (s)
Function      D    ABC     GABC    SAABC        ABC     GABC    SAABC
Sphere        20   2.52    3.68    2.90         4.87    4.92    5.35
Sphere        40   4.86    3.02    5.12         4.92    4.99    5.38
SumSquares    20   3.64    3.28    4.29         5.25    5.32    5.75
SumSquares    40   5.12    3.11    5.44         5.33    5.39    5.94
Dixon–Price   20   4.55    4.40    4.86         4.56    4.61    5.07
Dixon–Price   40   4.37    4.69    4.78         4.65    4.72    5.14
Rastrigin     20   3.03    1.80    3.07         5.32    5.39    5.94
Rastrigin     40   5.11    3.86    5.76         5.50    5.56    6.06
Griewank      20   5.17    5.43    5.96         8.76    8.82    9.69
Griewank      40   8.44    6.21    9.04         9.05    9.11    9.94
Ackley        20   4.40    3.42    4.59         5.78    5.83    6.41
Ackley        40   5.94    5.72    6.54         5.94    5.95    6.54
4.3. Analysis and interpretation

The effects of variation of the independent factors T_0, α, β and D are considered in Section 3.2, and the data pertaining to these variations are presented in Tables 1 and 2. From these results it can be concluded that variations in T_0, α and β are the main drivers for the selection of the optimal parameters of the SAABC algorithm, and that the optimal parametric values result when T_0 = 100, α = 5e7 and β = 0.9.

As far as solution quality is concerned, the tabulated results of Table 4 and the plots of Figs. 4–9 show that the computational effort depends directly on the dimension size D. As D increases from 20 to 40, the time to the best-found solution increases substantially, yet the mean best function values do not vary significantly. This implies that it is the computational effort, and not the solution accuracy, that depends on the dimension size. The total run times are presented in Fig. 10, which shows that the solution time is largely independent of problem size but does depend on the type of problem: the Rastrigin function takes the least computational effort while Griewank consumes the most.

The robustness of the solutions is analyzed by considering the variation of the fitness value as a function of one of the optimal variables. The data plotted in Fig. 11 show that SAABC exhibits the least variation in its solution when the value of the optimal variable is varied over a ±50% range; ABC and GABC, by contrast, suffer from increasing levels of sensitivity. This implies that SAABC offers the most robust solution.

5. Conclusion

In this paper the artificial bee colony (ABC) algorithm was studied. To increase the convergence rate of the algorithm, a simulated annealing based ABC algorithm (SAABC) was proposed, which uses simulated annealing to find better solutions in the employed bees' search process. Experimental results on six benchmark functions show that, as far as convergence speed and accuracy are concerned, the SAABC algorithm with appropriate parameters outperforms the ABC and GABC algorithms on both counts.
Acknowledgments

The authors thank the anonymous reviewers for providing valuable comments to improve this paper.

References

[1] D. Karaboga, An idea based on honey bee swarm for numerical optimization, Erciyes University, Kayseri, Turkey, Technical Report TR06, 2005.
[2] D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Applied Soft Computing 8 (1) (2008) 687–697.
[3] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Applied Mathematics and Computation 214 (1) (2009) 108–132.
[4] D. Karaboga, B. Basturk, Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems, in: Foundations of Fuzzy Logic and Soft Computing, vol. 4529, Springer, Berlin/Heidelberg, 2007.
[5] D. Karaboga, B. Akay, Artificial bee colony (ABC) algorithm on training artificial neural networks, in: IEEE 15th Signal Processing and Communications Applications, SIU 2007, 2007, pp. 1–4.
[6] N. Karaboga, A new design method based on artificial bee colony algorithm for digital IIR filters, Journal of the Franklin Institute 346 (4) (2009) 328–348.
[7] C. Xu, H. Duan, Artificial bee colony (ABC) optimized edge potential function (EPF) approach to target recognition for low-altitude aircraft, Pattern Recognition Letters 31 (13) (2010) 1759–1772.
[8] M. Horng, T. Jiang, Multilevel image thresholding selection using the artificial bee colony algorithm, in: Artificial Intelligence and Computational Intelligence, vol. 6320, Springer, Berlin/Heidelberg, 2010.
[9] B. Rao et al., Swarm intelligence for optimizing hybridized smoothing filter in image edge enhancement, in: Swarm, Evolutionary, and Memetic Computing, vol. 6466, Springer, Berlin/Heidelberg, 2010.
[10] F. Kang, J. Li, Q. Xu, Structural inverse analysis by hybrid simplex artificial bee colony algorithms, Computers and Structures 87 (13–14) (2009) 861–870.
[11] M. Sonmez, Artificial bee colony algorithm for optimization of truss structures, Applied Soft Computing 11 (2) (2011) 2406–2418.
[12] S.N. Omkar et al., Artificial bee colony (ABC) for multi-objective design optimization of composite structures, Applied Soft Computing 11 (1) (2011) 489–499.
[13] M. Sonmez, Discrete optimum design of truss structures using artificial bee colony algorithm, Structural and Multidisciplinary Optimization 43 (1) (2011) 85.
[14] D. Karaboga, C. Ozturk, A novel clustering approach: artificial bee colony (ABC) algorithm, Applied Soft Computing 11 (1) (2011) 652–657.
[15] C. Zhang, D. Ouyang, J. Ning, An artificial bee colony approach for clustering, Expert Systems with Applications 37 (7) (2010) 4761–4767.
[16] Y. Zhang et al., Chaotic artificial bee colony used for cluster analysis, in: Intelligent Computing and Information Science, vol. 134, Springer, Berlin/Heidelberg, 2011.
[17] Y. Marinakis, M. Marinaki, N. Matsatsinis, A hybrid discrete artificial bee colony – GRASP algorithm for clustering, in: International Conference on Computers and Industrial Engineering, CIE 2009, 2009, pp. 548–553.
[18] X. Lei, X. Huang, A. Zhang, Improved artificial bee colony algorithm and its application in data clustering, in: IEEE Fifth International Conference on Bio-Inspired Computing: Theories and Applications (BIC-TA), 2010, pp. 514–521.
[19] D. Karaboga, S. Okdem, C. Ozturk, Cluster based wireless sensor network routings using artificial bee colony algorithm, in: International Conference on Autonomous and Intelligent Systems (AIS), 2010, pp. 1–5.
[20] B. Akay, D. Karaboga, A modified artificial bee colony algorithm for real-parameter optimization, Information Sciences 192 (2012) 120–142.
[21] C. Xu, H. Duan, F. Liu, Chaotic artificial bee colony approach to uninhabited combat air vehicle (UCAV) path planning, Aerospace Science and Technology 14 (8) (2010) 535–541.
[22] Q. Pan et al., A discrete artificial bee colony algorithm for the lot-streaming flow shop scheduling problem, Information Sciences 181 (12) (2011) 2455–2468.
[23] A. Bernardino et al., Efficient load balancing for a resilient packet ring using artificial bee colony, in: Applications of Evolutionary Computation, vol. 6025, Springer, Berlin/Heidelberg, 2010.
[24] W. Yeh, T. Hsieh, Solving reliability redundancy allocation problems using an artificial bee colony algorithm, Computers & Operations Research 38 (11) (2011) 1465–1473.
[25] G. Zhu, S. Kwong, Gbest-guided artificial bee colony algorithm for numerical function optimization, Applied Mathematics and Computation 217 (7) (2010) 3166–3173.
[26] A. Banharnsakun, T. Achalakul, B. Sirinaovakul, The best-so-far selection in artificial bee colony algorithm, Applied Soft Computing 11 (2) (2011) 2888–2901.
[27] B. Alatas, Chaotic bee colony algorithms for global numerical optimization, Expert Systems with Applications 37 (8) (2010) 5682–5687.
[28] F. Kang, J. Li, Z. Ma, Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions, Information Sciences 181 (16) (2011) 3508–3531.
[29] S. Kirkpatrick, C.D. Gelatt, M. Vecchi, Optimization by simulated annealing, Science 220 (4598) (1983) 671–680.
[30] V. Cerny, Thermodynamical approach to the traveling salesman problem: an efficient simulation algorithm, Journal of Optimization Theory and Applications 45 (1) (1985) 41–51.
[31] W. Dolan, P.T. Cummings, M. LeVan, Process optimization via simulated annealing: application to network design, AIChE Journal 35 (5) (1989) 725–736.
[32] W.H. Press, Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, 2001.
[33] R.S. Barr, B.L. Golden, J.P. Kelly, M.G.C. Resende, W.R. Stewart, Designing and reporting on computational experiments with heuristic methods, Journal of Heuristics 1 (1) (1995) 9–32.
[34] B. Abbasi et al., A hybrid variable neighborhood search and simulated annealing algorithm to estimate the three parameters of the Weibull distribution, Expert Systems with Applications 38 (1) (2011) 700–708.
[35] M.M. Ali, A. Törn, S. Viitanen, A direct search variant of the simulated annealing algorithm for optimization involving continuous variables, Computers & Operations Research 29 (1) (2002) 87–102.