Artificial bee colony algorithm with memory
Xianneng Li, Guangfei Yang*

Faculty of Management and Economics, Dalian University of Technology, No. 2 Linggong Road, Ganjingzi, Dalian 116024, China

* Corresponding author. Tel.: +86 41184707917. E-mail addresses: [email protected] (X. Li), [email protected] (G. Yang).
Article history: Received 3 December 2014; received in revised form 12 November 2015; accepted 24 December 2015; available online xxx.

Abstract
The artificial bee colony algorithm (ABC) is a relatively new type of swarm intelligence method that imitates the foraging behavior of honeybees. Due to its simple implementation with a very small number of control parameters, much effort has been devoted to ABC research, in both algorithms and applications. In this paper, a new ABC variant named the ABC with memory algorithm (ABCM) is described, which adds a memory mechanism to the artificial bees so that they memorize their previous successful foraging experiences; the memorized experience is then applied to guide their further foraging. Essentially, ABCM is inspired by the biological study of natural honeybees, unlike most other ABC variants, which integrate existing algorithms into the ABC framework. The superiority of ABCM is analyzed on a set of benchmark problems in comparison with ABC, quick ABC and several state-of-the-art algorithms.
Keywords: Artificial bee colony; Swarm intelligence; Bee memory; Foraging
1. Introduction
As a relatively new optimization method inspired by swarm intelligence, the artificial bee colony algorithm (ABC) [1,2] imitates the foraging behavior of honeybees to perform its search. Many studies have confirmed that ABC achieves very competitive performance compared with classical evolutionary algorithms [3]. More importantly, it is very simple to implement and has only three control parameters to tune: the population size (number of food sources) SN, the maximal number of generations (terminal condition) maxGeneration, and the exploration parameter limit. (Since SN and maxGeneration are general parameters of almost all evolutionary algorithms, ABC can be viewed as having only one control parameter: limit.) Accordingly, researchers have successfully applied ABC to numerous applications [4], e.g., numerical function optimization [5–8], scheduling problems [9–11], clustering [12,13] and program generation [14,15].

Many researchers have focused on improving ABC from different perspectives. Inspired by particle swarm optimization (PSO), Zhu and Kwong [16] developed a gbest-guided ABC (GABC) that improves search efficiency by incorporating the information of the global best (gbest) solution into the original search equation of ABC. Gao and Liu [6], influenced by differential evolution (DE), proposed a new search equation called ABC/best/1 together with a chaotic initialization method, which improves the exploitation ability of ABC. Banharnsakun et al. [17] developed a modified search equation that biases the solution direction towards the best-so-far solution rather than a randomly selected neighbor. Li et al. [18] and Xiang and An [19] followed similar concepts, combining the information of the best-so-far solution into the search equation in different ways to accelerate the evolution. In these methods, the newly proposed search equations are combined with other search equations to create multiple new solutions in parallel, and a greedy selection picks the best one. The authors argued that search efficiency is improved significantly; however, this tends to make the comparison unfair, since such methods require a larger number of fitness evaluations than ABC within a predefined maxGeneration. Das et al. [8] developed an improved ABC by introducing a fitness learning mechanism with a weighted selection scheme and proximity-based stimuli to balance the exploitation and exploration of ABC.

It has been recognized that most current ABC variants are dedicated to presenting new hybrid ABC algorithms or combining operators of existing algorithms into ABC, rather than modeling the natural behavior of honeybees in biology, especially neuroscience. A very recent work by Karaboga and Gorkemli [20] indicated that by modeling the foraging behavior of artificial bees more accurately, ABC can achieve better local search ability than standard ABC. They introduced the quick ABC algorithm (qABC) to imitate the real onlooker bees' behavior: Euclidean distance is used to help each onlooker bee choose the fittest food source within a restricted dancing area, rather than deterministically processing the selected food source itself. Apart from this work, there are few studies on improving ABC from the perspective of real honeybees.

In this paper, we develop a new ABC variant named the ABC with memory algorithm (ABCM). ABCM is inspired by studies of natural honeybees in neuroscience [21–23], which indicate that, in addition to their swarm behavior, honeybees possess a memory ability that allows them to efficiently trace new nectar by associatively learning from previous experience. Following this concept, ABCM equips the artificial bees of ABC with a memory mechanism to memorize their previous successful foraging experiences; the memory is applied to guide the further foraging of the artificial bees, which leads to a more efficient search than the traditional memoryless ABC. ABCM is developed in a fairly simple way to preserve ABC's simplicity as much as possible: only one new control parameter, the memory size M, is added.

In addition to our work, two recent studies have also introduced memory mechanisms into ABC [24,25]. Kiran and Babalik [24] added a memory board to save the solutions whose qualities are better than the average fitness value; these solutions are only used in the neighbor selection of the onlooker bees to increase the exploitative tendency of ABC. Bayraktar [25] integrated the short term tabu list (STTL) of tabu search to memorize abandoned solutions, which are then prohibited from being repeatedly generated and visited. The memory mechanisms of these two studies are designed at the solution level, memorizing entire solutions, while ABCM memorizes the successful search experience of each parameter via its neighboring food source and updating coefficient. In other words, ABCM is designed at the parameter level, which is conceptually different from the aforementioned studies; it can be considered a complementary study rather than a competitor.
The remainder of this paper is organized as follows: Section 2 presents a brief introduction of ABC. Section 3 describes ABCM in detail. Section 4 evaluates our proposal in numerical experiments. Finally, Section 5 concludes the paper.
2. Artificial bee colony algorithm (ABC)
The artificial bee colony algorithm (ABC) maintains a population of individuals/solutions, specifically named food sources. The population consists of SN food sources, which are evolved by three groups of artificial bees: employed bees, onlooker bees and scout bees. Each employed bee corresponds to a specific food source and memorizes its position. The employed bees search the neighboring regions of their food sources to seek better ones; afterwards, the new food sources are updated and their information is shared with the onlooker bees. Onlooker bees work differently from employed bees: they add exploitation by means of roulette wheel selection, where each onlooker bee probabilistically selects a food source according to its quality. The selected food source is then evolved by the onlooker bee to search for a better position, in the same way as by the employed bees. Occasionally, a third kind of artificial bee, the scout bee, is sent to explore the search space. When a food source has not been improved after a certain number of trials (defined by the control parameter limit) by the employed bees and onlooker bees, it is considered to have a poor position and is abandoned, and a completely new food source is randomly generated by a scout bee to replace it. ABC iteratively sends the three groups of artificial bees to search the solution space until a terminal condition is met, i.e., the maximal number of generations maxGeneration. Originally, the number of employed bees and the number of onlooker bees are both equal to the number of food sources, that is, the population size SN. The algorithmic description of ABC is presented in the following parts.
2.1. Initialization
ABC maintains a population of food sources with size SN. For a numerical optimization problem, each food source is a D-dimensional parameter vector encoding a candidate solution, i.e., $X_i = \{x_i^1, x_i^2, \ldots, x_i^D\}$, $i = 1, 2, \ldots, SN$. Like other evolutionary algorithms, ABC generates the initial population of food sources randomly. To cover the search space as much as possible, the initial food sources are uniformly placed within the search space constrained by the predefined minimum and maximum parameter bounds $X_{min} = \{x_{min}^1, x_{min}^2, \ldots, x_{min}^D\}$ and $X_{max} = \{x_{max}^1, x_{max}^2, \ldots, x_{max}^D\}$. For parameter j of food source i, the initial value $x_i^j$ is generated by

$$x_i^j = x_{min}^j + \text{rand}(0, 1) \times (x_{max}^j - x_{min}^j), \quad (1)$$

where $i = 1, 2, \ldots, SN$, $j = 1, 2, \ldots, D$, and rand(0, 1) is a random value in [0, 1].
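To make the initialization concrete, here is a minimal Python sketch of Eq. (1); the use of NumPy and all function names are our own illustrative choices, not from the paper.

```python
import numpy as np

def init_population(sn, d, x_min, x_max, rng):
    """Uniformly place SN food sources within the bounds [x_min, x_max]^D (Eq. (1))."""
    # rand(0, 1) is drawn independently for every parameter of every food source.
    return x_min + rng.random((sn, d)) * (x_max - x_min)

# Example: SN = 25 food sources for a D = 30 problem bounded by [-100, 100].
rng = np.random.default_rng(seed=0)
population = init_population(sn=25, d=30, x_min=-100.0, x_max=100.0, rng=rng)
```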
2.2. Employed bees
As mentioned above, the number of employed bees equals the population size SN, and each employed bee maintains a specific food source. For each food source i, its employed bee performs a neighboring search to generate a new vector $V_i$ by updating the vector $X_i$. Setting $V_i = X_i$, the neighboring search modifies one parameter $v_i^j$ of $V_i$, where $j \in \{1, 2, \ldots, D\}$ is a randomly selected index, according to the solution search equation

$$v_i^j = x_i^j + \phi_i^j \times (x_i^j - x_k^j). \quad (2)$$
Here, $k \in \{1, 2, \ldots, SN\}$ is a randomly selected neighboring food source that guides the update of $V_i$, where k must be different from i, and $\phi_i^j$ is a random value in [−1, 1]. After $V_i$ is obtained, it is evaluated and compared with $X_i$: if $V_i$ is better than $X_i$, it replaces $X_i$ in the population; otherwise $X_i$ is retained. In other words, a greedy selection is applied between $V_i$ and $X_i$; a sketch is given below.
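The following sketch (again an illustration in our own notation, with `objective` assumed to be the problem's objective function supplied by the caller) implements one such neighboring search with the greedy selection:

```python
import numpy as np

def employed_bee_step(population, i, objective, rng):
    """One neighboring search on food source i (Eq. (2)), followed by greedy selection."""
    sn, d = population.shape
    j = rng.integers(d)                                  # randomly selected parameter index
    k = rng.choice([s for s in range(sn) if s != i])     # neighboring food source, k != i
    phi = rng.uniform(-1.0, 1.0)                         # phi_i^j in [-1, 1]
    v = population[i].copy()                             # V_i = X_i
    v[j] = population[i, j] + phi * (population[i, j] - population[k, j])
    if objective(v) < objective(population[i]):          # greedy selection (minimization)
        population[i] = v
        return True                                      # successful trial
    return False                                         # failed trial
```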
2.3. Probability calculation

After searching the space, the employed bees return to the hive and share the nectar information of their food sources with the onlooker bees by dancing. The onlooker bees apply roulette wheel selection to choose food sources: each onlooker bee prefers a food source depending on the nectar information distributed by the employed bees, so the probability of selecting each food source must be calculated. The probability $p_i$ of an onlooker bee selecting food source i is

$$p_i = \frac{fit_i}{\sum_{j=1}^{SN} fit_j}, \quad (3)$$

where $fit_i$ denotes the fitness value of food source i, a problem-specific value measuring the quality of the candidate solution. The probability $p_i$ is proportional to $fit_i$, so food sources with higher fitness values are assigned higher selection probabilities. For the minimization problems of numerical optimization, $fit_i$ is calculated by

$$fit_i = \begin{cases} \dfrac{1}{1 + f_i} & \text{if } f_i \geq 0, \\[2mm] 1 + |f_i| & \text{otherwise}, \end{cases} \quad (4)$$

where $f_i$ is the objective function value of food source i.
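As an illustration, Eqs. (3) and (4) translate directly into the following sketch (function names ours):

```python
import numpy as np

def fitness(f_i):
    """Fitness of a food source from its objective value f_i for minimization (Eq. (4))."""
    return 1.0 / (1.0 + f_i) if f_i >= 0 else 1.0 + abs(f_i)

def selection_probabilities(objective_values):
    """Roulette wheel selection probabilities of the food sources (Eq. (3))."""
    fits = np.array([fitness(f) for f in objective_values])
    return fits / fits.sum()

# Each onlooker bee then draws a food source index i with probability p[i], e.g.:
# i = rng.choice(len(p), p=p)
```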
2.4. Onlooker bees
Based on the calculated probabilities, each onlooker bee selects a food source and further searches its neighboring area. The searching procedure of the onlooker bees is the same as that of the employed bees: if an onlooker bee selects food source i, a new vector $V_i$ is produced by Eq. (2), $X_i$ and $V_i$ are compared, and the better one survives in the population.
2.5. Scout bees

It is possible that a food source cannot be improved even though the employed bees and onlooker bees have visited it many times. In this case, the food source is considered abandoned: it performs poorly in the evolution process and should be eliminated from the population. To maintain the necessary population diversity, once an abandoned food source is found, a scout bee is sent to randomly generate a completely new food source to replace it. To implement this procedure, each food source i carries a counter $trial_i$, counting the consecutive failed trials that the employed bees and onlooker bees have performed on it without improving its quality. Once $trial_i$ reaches the predefined threshold limit, food source i is considered abandoned; it is replaced by a new food source and $trial_i$ is reset to 0. Originally, ABC allows only one scout bee to be sent per generation.
[Fig. 1. Memory structure of ABCM. Each parameter j (1, ..., D) of each food source i (1, ..., SN) is associated with a memory $M_i^j$ holding M records indexed m = 1, ..., M, each storing a neighbor index $k_i^j(m)$ and an updating coefficient $\phi_i^j(m)$.]
2.6. Basic algorithm of ABC
The basic framework of ABC is described in Algorithm 1. It is a fairly simple algorithm with only three control parameters: the population size SN, the terminal condition maxGeneration, and the exploration parameter limit.
Algorithm 1. Basic framework of ABC.
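As an illustration of Algorithm 1, the following self-contained Python sketch renders the whole framework in one plausible way; all names, defaults and implementation details are our own assumptions, not the paper's code.

```python
import numpy as np

def abc(objective, sn=25, d=30, x_min=-100.0, x_max=100.0,
        max_generation=10000, seed=0):
    """Minimal sketch of standard ABC for minimization."""
    rng = np.random.default_rng(seed)
    limit = d * sn                                    # exploration parameter (cf. Eq. (6))
    pop = x_min + rng.random((sn, d)) * (x_max - x_min)
    trials = np.zeros(sn, dtype=int)

    def fitness(f):                                   # Eq. (4)
        return 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)

    def try_update(i):                                # Eq. (2) + greedy selection
        j = rng.integers(d)
        k = rng.choice([s for s in range(sn) if s != i])
        phi = rng.uniform(-1.0, 1.0)
        v = pop[i].copy()
        v[j] = pop[i, j] + phi * (pop[i, j] - pop[k, j])
        if objective(v) < objective(pop[i]):
            pop[i], trials[i] = v, 0
        else:
            trials[i] += 1

    for _ in range(max_generation):
        for i in range(sn):                           # employed bee phase
            try_update(i)
        fits = np.array([fitness(objective(x)) for x in pop])
        p = fits / fits.sum()                         # Eq. (3)
        for _ in range(sn):                           # onlooker bee phase
            try_update(rng.choice(sn, p=p))
        worst = int(np.argmax(trials))                # scout bee phase (one per generation)
        if trials[worst] >= limit:
            pop[worst] = x_min + rng.random(d) * (x_max - x_min)
            trials[worst] = 0
    return min(pop, key=objective)

# Example usage on the Sphere function:
# best = abc(lambda x: float(np.sum(x**2)))
```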
3. ABC with memory (ABCM)

Studies in neuroscience have shown that real honeybees have evolved an impressive associative learning ability for their foraging behavior and dancing communication; their learning and memorizing capacities are comparable to those of intelligent species [26]. By training with a discrimination task, an employed bee is capable of learning preferences that guide its further foraging behavior, and these preferences can be transferred to the unemployed bees by dancing communication. However, this kind of memory ability is not considered in the foraging behavior of standard ABC or its variants: every time a new food source is searched, the neighboring food source k in the term $x_k^j$ of Eq. (2) is picked randomly, regardless of the learning and memorizing abilities of each bee.

Therefore, in this paper the ABC with memory algorithm (ABCM) is proposed to imitate this memory ability in the artificial bees of ABC. A memory mechanism is introduced that allows the artificial bees to memorize their previous successful foraging experiences. The memorized experiences are constantly updated and used to guide further foraging, leading to a more efficient search procedure than standard ABC. The details of ABCM are described in the following parts.

3.1. Memory structure

Each parameter of each food source is associated with a new memory component; collectively, the memory is denoted by
$$M = \begin{bmatrix} M_1^1 & M_1^2 & \cdots & M_1^D \\ M_2^1 & M_2^2 & \cdots & M_2^D \\ \vdots & \vdots & \ddots & \vdots \\ M_{SN}^1 & M_{SN}^2 & \cdots & M_{SN}^D \end{bmatrix}, \quad (5)$$
where $M_i^j$ denotes the memory of parameter j of food source i ($j = 1, 2, \ldots, D$; $i = 1, 2, \ldots, SN$).

Initially, each memory $M_i^j$ is empty. During the search procedure, $M_i^j$ is constantly updated according to the foraging experiences of the employed bees and onlooker bees. Conceptually, if an employed bee or onlooker bee updates parameter $x_i^j$ by search equation (2) and a better food source is found, the corresponding updating information is treated as a successful experience and saved into $M_i^j$. Each $M_i^j$ memorizes the following two components of Eq. (2):

• the neighboring food source k;
• the updating coefficient $\phi_i^j$.
In other words, if the used values of k and $\phi_i^j$ improve the quality of parameter j of food source i, they are memorized and treated as successful experience. The memory structure is illustrated in Fig. 1. In ABCM, a new control parameter M is introduced to define the memory size: each memory $M_i^j$ stores up to M records, consisting of the neighboring food sources $k_i^j(m)$ and updating coefficients $\phi_i^j(m)$, $m = 1, 2, \ldots, M$.

3.2. Use of memory
In standard ABC, the neighboring food source k and the updating coefficient $\phi$ are generated randomly to perform the search equation (2). ABCM, instead, directly incorporates the memorized values to determine the neighboring food source and the updating coefficient: if parameter j of food source i is selected for updating, a paired k and $\phi$ are randomly selected from the memory $M_i^j$ to perform the search equation (2). In other words, the artificial bees follow their previously successful experiences to forage for nectar, which is expected to significantly improve the search efficiency compared with the traditional random choice. A minimal sketch is given below.
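The sketch assumes each memory cell $M_i^j$ is kept as a Python list of (k, φ) pairs, which is a data-structure choice of ours, not specified in the paper:

```python
import numpy as np

def pick_k_phi(cell, m_size, sn, i, rng):
    """Choose (k, phi) for updating parameter j of food source i.

    `cell` is the memory M_i^j. If it is full, a memorized successful pair is
    reused; otherwise the random choice of standard ABC applies (Section 3.3)."""
    if len(cell) == m_size:
        idx = rng.integers(m_size)
        return cell[idx], idx                            # follow past experience
    k = rng.choice([s for s in range(sn) if s != i])     # standard ABC fallback
    phi = rng.uniform(-1.0, 1.0)
    return (k, phi), None                                # None: pair not from memory
```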
3.3. Update of memory
In the initialization, all memories in M are set to empty. During the search procedure, the neighboring food source and updating coefficient of each successful trial (i.e., one that generates a better food source) are memorized in the corresponding memory. During this experience collection process, ABCM behaves exactly like standard ABC, with k and $\phi$ determined randomly, until M successful trials have been recorded, i.e., the memory is full.

Once the memory $M_i^j$ is full, the next time parameter j of food source i is updated, the neighboring food source k and updating coefficient $\phi_i^j$ are randomly selected from one of the M memorized records in $M_i^j$. If food source i is improved, the memory $M_i^j$ remains unchanged. However, if food source i cannot be improved, the used k and $\phi_i^j$ are erased from $M_i^j$, since they are considered a failed experience; $M_i^j$ is then no longer full, so the standard ABC rule is invoked to randomly determine a new k and $\phi_i^j$ to refill $M_i^j$. This process is executed constantly, so that the latest successful experiences are always memorized by the artificial bees to guide their further foraging behavior. The detailed description of ABCM is shown in Algorithm 2.
Algorithm 2. Basic framework of ABCM.
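As with Algorithm 1, we illustrate the key step of Algorithm 2 with a self-contained sketch of one ABCM trial, combining the use (Section 3.2) and update (Section 3.3) of memory; the data layout and all names are our own assumptions.

```python
import numpy as np

def abcm_trial(pop, i, memory, m_size, objective, rng):
    """One neighboring search on food source i under ABCM's memory mechanism.

    `memory[i][j]` is a list of at most `m_size` memorized (k, phi) pairs."""
    sn, d = pop.shape
    j = rng.integers(d)
    cell = memory[i][j]
    from_memory = (len(cell) == m_size)                  # memory full: reuse experience
    if from_memory:
        idx = rng.integers(m_size)
        k, phi = cell[idx]
    else:                                                # memory not full: standard ABC
        k = rng.choice([s for s in range(sn) if s != i])
        phi = rng.uniform(-1.0, 1.0)
    v = pop[i].copy()
    v[j] = pop[i, j] + phi * (pop[i, j] - pop[k, j])     # Eq. (2)
    if objective(v) < objective(pop[i]):                 # successful trial
        pop[i] = v
        if not from_memory:
            cell.append((k, phi))                        # memorize the new experience
    elif from_memory:
        cell.pop(idx)                                    # erase the failed experience

# The memory starts empty: memory = [[[] for _ in range(d)] for _ in range(sn)]
```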
Table 1
Benchmark functions used in the paper (formula; parameter bound; optimum; name).

f1: $f_1(x) = \sum_{i=1}^{D} x_i^2$; [−100, 100]; 0; Sphere
f2: $f_2(x) = \sum_{i=1}^{D-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$; [−30, 30]; 0; Rosenbrock
f3: $f_3(x) = \sum_{i=1}^{D} (x_i^2 - 10\cos(2\pi x_i) + 10)$; [−5.12, 5.12]; 0; Rastrigin
f4: $f_4(x) = \frac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\!\left(\frac{x_i}{\sqrt{i}}\right) + 1$; [−600, 600]; 0; Griewank
f5: $f_5(x) = 0.5 + \dfrac{\sin^2\!\left(\sqrt{\sum_{i=1}^{D} x_i^2}\right) - 0.5}{\left(1 + 0.001\sum_{i=1}^{D} x_i^2\right)^2}$; [−100, 100]; 0; Schaffer
f6: $f_6(x) = (x_1 - 1)^2 + \sum_{i=2}^{D} i(2x_i^2 - x_{i-1})^2$; [−10, 10]; 0; Dixon–Price
f7: $f_7(x) = 20 + e - 20\exp\!\left(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2}\right) - \exp\!\left(\frac{1}{D}\sum_{i=1}^{D} \cos(2\pi x_i)\right)$; [−32, 32]; 0; Ackley
f8: $f_8(x) = 418.9828D - \sum_{i=1}^{D} x_i \sin\!\left(\sqrt{|x_i|}\right)$; [−500, 500]; 0; Schwefel
f9: $f_9(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$; [−5, 5]; −1.03163; Six Hump Camel Back
f10: $f_{10}(x) = \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos x_1 + 10$; [−5, 10] × [0, 15]; 0.39789; Branin
4. Experimental study and analysis
4.1. Benchmark functions
Ten benchmark functions that have been widely studied in the ABC literature [3,5,20] are employed to verify the performance of ABCM. The definition of each function is presented in Table 1, including its mathematical formula, parameter bound and optimal (minimum) value. The features of a function can be characterized by modality and separability [3]. The modality of a function refers to the number of ambiguous peaks in its landscape; the algorithm may be trapped in one of these peaks, resulting in a local optimum. A function with only one peak is called unimodal; otherwise it is multimodal. Separability is a measure of difficulty: separable functions are generally easier to solve than nonseparable ones, because each parameter is independent of the others, so the entire optimization can be performed by optimizing each parameter independently. In nonseparable functions there are interrelations among the parameters, which therefore cannot be optimized separately, making such functions more difficult. The 10 benchmark functions studied in this paper cover the four resulting types: unimodal-separable (US), unimodal-nonseparable (UN), multimodal-separable (MS) and multimodal-nonseparable (MN). The dimension D of each function is set to 30, except for the Schaffer function (f5), the Six Hump Camel Back function (f9) and the Branin function (f10), whose definitions fix D = 2. This configuration is the same as the one reported in [20]. In addition, the success rates of the compared algorithms are reported: a run counts as successful if the error between its best value and the optimal value is less than a predefined acceptance threshold, and the success rate is the number of successful runs divided by the total number of runs. For the 10 benchmark functions, the features, dimensions D and acceptance thresholds are listed in Table 2.
Table 2
Characteristics of the benchmark functions.

Index | Name | Feature | D | Acceptance threshold
f1 | Sphere | US | 30 | 1.0E−10
f2 | Rosenbrock | UN | 30 | 1.0E−10
f3 | Rastrigin | MS | 30 | 1.0E−10
f4 | Griewank | MN | 30 | 1.0E−10
f5 | Schaffer | MN | 2 | 1.0E−10
f6 | Dixon–Price | UN | 30 | 1.0E−10
f7 | Ackley | MN | 30 | 1.0E−10
f8 | Schwefel | MS | 30 | 1.0E−05
f9 | Six Hump Camel Back | MN | 2 | 1.0E−05
f10 | Branin | MS | 2 | 1.0E−05
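For illustration, two of these benchmarks can be coded directly from Table 1 (our own implementations of the standard definitions):

```python
import numpy as np

def sphere(x):        # f1, unimodal-separable, optimum 0 at x = 0
    return float(np.sum(x**2))

def rastrigin(x):     # f3, multimodal-separable, optimum 0 at x = 0
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

assert sphere(np.zeros(30)) == 0.0
assert rastrigin(np.zeros(30)) == 0.0
```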
4.2. Parameter settings
ABCM is primarily compared with standard ABC to verify the effectiveness of the memory ability. For ABCM, we conducted experiments with different values of the newly introduced parameter M (memory size) so that the effect of this parameter can be analyzed. The parameter settings are kept the same as those of [20], so that we can further compare ABCM with state-of-the-art algorithms such as the genetic algorithm (GA), PSO, DE and the recently proposed quick ABC (qABC). The population size SN is set to 25, and the maximal number of generations maxGeneration is 10,000, so that the maximal number of fitness evaluations (NFEs) is around 500,000. The exploration parameter limit is calculated by

$$limit = D \times SN. \quad (6)$$
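In code, this experimental configuration amounts to something like the following (constants from the text; variable names ours):

```python
SN = 25                  # population size (number of food sources)
MAX_GENERATION = 10_000  # terminal condition; about 500,000 fitness evaluations
D = 30                   # problem dimension (2 for f5, f9 and f10)
LIMIT = D * SN           # exploration parameter, Eq. (6)
RUNS = 30                # independent runs per function
```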
For each function, 30 independent runs are carried out, and the final results are the averages over these runs.

4.3. Experimental results

4.3.1. ABCM vs. ABC

In the first experiment, we conduct an empirical comparison between ABCM and standard ABC. In the reported results, the memory size M of ABCM is set to 2 based on hand-tuning. Table 3 reports the mean best objective function values and standard deviations, as well as the success rates, obtained by
Table 3
Objective function values obtained by ABC, qABC and ABCM (Mean / St. dev. / Suc. rate).

No. | ABC | qABC | ABCM
f1 | 4.6371E−16 / 5.3469E−17 / 100% | 4.6357E−16 / 7.8174E−17 / 100% | 6.4505E−17 / 1.9982E−17 / 100%
f2 | 0.10875208 / 0.19060393 / 0% | 0.17777490 / 0.40701350 / 0% | 0.06749250 / 0.13977455 / 0%
f3 | 0 / 0 / 100% | 0 / 0 / 100% | 0 / 0 / 100%
f4 | 4.6629E−16 / 7.3759E−17 / 100% | 4.9960E−16 / 6.3544E−17 / 100% | 5.9212E−17 / 5.6335E−17 / 100%
f5 | 1.8504E−18 / 1.0135E−17 / 100% | 1.2104E−08 / 4.3127E−08 / 53.3% | 0 / 0 / 100%
f6 | 5.2864E−15 / 8.8777E−15 / 100% | 5.5228E−11 / 1.4733E−10 / 86.7% | 3.1737E−15 / 8.3862E−16 / 100%
f7 | 3.2596E−14 / 3.5096E−15 / 100% | 3.6030E−14 / 3.7242E−15 / 100% | 3.0228E−14 / 6.4964E−15 / 100%
f8 | 5.7125E−08 / 3.3210E−13 / 100% | 5.7126E−08 / 7.8250E−13 / 100% | 5.7125E−08 / 6.2891E−13 / 100%
f9 | −1.03162845 / 6.7752E−16 / 100% | −1.03162845 / 6.7752E−16 / 100% | −1.03162845 / 6.7752E−16 / 100%
f10 | 0.39788736 / 0 / 100% | 0.39788736 / 0 / 100% | 0.39788736 / 0 / 100%
applying ABC and ABCM to the 10 benchmark functions; bold values indicate the better results. From Table 3, both ABC and ABCM achieve a 100% success rate within the specified maximal number of generations on every function except the Rosenbrock function (f2). In the Sphere (f1), Rosenbrock (f2), Griewank (f4), Schaffer (f5), Dixon–Price (f6) and Ackley (f7) functions, ABCM converges to smaller mean values than ABC, while both algorithms find the same mean values in the Rastrigin (f3), Schwefel (f8), Six Hump Camel Back (f9) and Branin (f10) functions. Overall, ABCM obtains results at least as good as ABC's in all 10 benchmark functions.

To test whether the difference between ABC and ABCM is significant, the non-parametric Wilcoxon signed-rank test is performed; it is widely used to analyze whether a result comparison has statistical meaning for evolutionary algorithms [20,27]. Other statistical tests, e.g., the t-test [28], could also be applied, but based on our investigation they lead to similar conclusions. In the Wilcoxon signed-rank test, a P-value is calculated from the results of the 30 independent runs of ABC and ABCM; if the P-value is smaller than 0.05, the difference between ABC and ABCM is statistically significant at the 95% confidence level, and vice versa. Table 4 shows that ABCM is statistically better than ABC in functions f1, f4, f5, f6 and f7; in the other functions, the two achieve equivalent performance. Therefore, ABCM generally obtains statistically better mean values than ABC within the maximal number of generations.

Beyond the mean values, the convergence characteristics are further discussed. Figs. 2 and 3 plot the convergence curves of ABC and ABCM for the 10 benchmark functions. ABC generally shows fast convergence in most functions, quickly locating near-optimal solutions within a very small number of generations. ABCM converges even faster than ABC in functions f3, f5, f6, f7, f8, f9 and f10, whereas in functions f1, f2 and f4 its convergence is slower than ABC's.
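The significance test described above can be reproduced with, e.g., SciPy; the sketch below uses placeholder data standing in for the 30 paired best values per algorithm (not the paper's data):

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder data standing in for the 30 best objective values per algorithm.
rng = np.random.default_rng(seed=1)
abc_results = rng.normal(loc=1e-15, scale=1e-16, size=30)
abcm_results = 0.5 * abc_results

stat, p_value = wilcoxon(abc_results, abcm_results)
print(f"P-value = {p_value:.4f}; significant at the 95% level: {p_value < 0.05}")
```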
Table 4
Results of the Wilcoxon signed-rank test (P-values).

No. | ABC vs. ABCM | qABC vs. ABCM
f1 | 0 | 0
f2 | 0.125 | 0.043
f3 | — | —
f4 | 0 | 0
f5 | 0.003 | 0
f6 | 0.034 | 0
f7 | 0.047 | 0
f8 | — | —
f9 | — | —
f10 | — | —
Table 5
Number of failed trials after the maximal NFEs.

No. | ABC | ABCM
f1 | 395,546.3 | 296,273.2
f2 | 458,513.7 | 397,341.6
f3 | 438,767.1 | 309,928.8
f4 | 396,669.8 | 356,861.8
f5 | 492,862.6 | 474,731.1
f6 | 439,978.0 | 411,877.8
f7 | 399,183.2 | 377,387.1
f8 | 447,913.1 | 343,549.1
f9 | 477,611.6 | 383,796.4
f10 | 396,208.9 | 312,592.8
Average | 434,325.4 | 366,434.0
However, ABCM eventually converges to much better results within the maximal number of generations. Therefore, ABCM obtains either faster convergence or smaller mean values than ABC on the studied benchmark functions. In conclusion, the experimental results show that the introduced memory ability improves the performance of ABC significantly.

To investigate the effectiveness of the proposed memory mechanism, we count the number of failed trials² after the maximal NFEs for each function; the results are presented in Table 5. ABCM yields a much smaller number of failed trials than ABC on all 10 benchmark functions. In other words, the proposed memory mechanism helps the artificial bees find better food sources with higher probability.

4.3.2. ABCM vs. qABC

In addition to standard ABC, the results of ABCM are further compared with a recent, sophisticated ABC variant called quick ABC (qABC) [20], which also studies the improvement of ABC from the perspective of real honeybees. Different from ABC, whose onlooker bees select a food source i by the roulette wheel selection of Eq. (3), qABC enforces a further elitism step that selects the best food source within the neighborhood of i, where the neighborhood is defined by Euclidean distance. Consequently, the exploitative tendency of the onlooker bees is significantly strengthened, realizing the fast convergence of qABC.

The results of qABC are presented in Table 3 as well (the new parameter of qABC, called r, is set to 1). Similar to the results of Section 4.3.1, qABC finds the same mean values as ABCM in the Rastrigin function (f3), Schwefel function (f8), Six Hump Camel
² In each vector update (trial), if the new vector $V_i$ is worse than its incumbent $X_i$, the trial is called failed. Counting the failed trials reflects the search efficiency towards better solutions: a smaller number generally indicates a higher probability of improving the solutions.
Fig. 2. Convergence curves of ABC and ABCM (function f1 to f6 ).
Back function (f9) and Branin function (f10), but converges to worse results in the other six functions. Especially in the Schaffer function (f5) and the Dixon–Price function (f6), qABC cannot achieve a 100% success rate, while both ABC and ABCM solve them perfectly. We note that the results presented here are consistent with the original ones reported in [20], which also observed that qABC yields worse mean values than ABC. The Wilcoxon signed-rank test between qABC and ABCM is also conducted; the results shown in Table 4 demonstrate that ABCM is statistically better than qABC in six functions. However, due to its enhanced exploitation, qABC may retain an advantage in convergence speed. As the convergence curves in Figs. 2 and 3 show, qABC converges faster than ABC and ABCM in the very early generations, but after a sufficient number of generations it converges to worse results than ABCM in some functions, including
Fig. 3. Convergence curves of ABC and ABCM (function f7 to f10 ).
441 442 443 444 445 446 447 448 449
f1 , f2 , f4 , f5 and f6 . In function f7 and f8 , the convergence speed of qABC decreases in later generations. Only in function f3 , f9 and f10 , qABC requires smaller number of generations to reach the identical objective function values as ABCM. Therefore, if we consider the final results and the convergence speed as a whole, it can be analyzed that ABCM is on average better than qABC. Moreover, it should be particularly mentioned that qABC actually requires much more CPU time per each generation than ABC and ABCM, since the calculation of Euclidean distance for all the paired food sources is computationally expensive. Therefore, requiring smaller number of generations to find the solutions of the same quality does not firmly ensure faster convergence speed of qABC, since it may cost more actual CPU time. The comparison and discussion of computational costs will be presented in Section 4.3.6. 4.3.3. Effect of problem dimensionality In the previous experiments, we fixed the problem dimension to 30 for the 7 scalable functions (f1 , f2 , f3 , f4 , f6 , f7 and f8 as shown in Table 2). It is of interest to further testify the performance changes of ABCM when the problem dimension varies. Accordingly, we conduct two additional experiments to compare the results of ABC and ABCM under low-dimensional (D = 15) and high-dimensional (D = 60) functions. The 7 scalable functions are examined under the same conditions used in the previous experiments. For
the low-dimensional and high-dimensional cases, maxGeneration is set to 5000 and 20,000, respectively. The results of ABC and ABCM are reported in Table 6. In both cases, ABCM surpasses ABC in five functions (f1, f2, f4, f6 and f7), while in the other two (f3 and f8) they achieve the same performance. In other words, ABCM on average obtains better results than ABC. The Wilcoxon signed-rank test is also applied to these results: as the P-values in Table 6 show, the superiority of ABCM over ABC is in general statistically significant; only for f4 with D = 15 does the P-value not show a significant difference.
4.3.4. Effect of memory size M

In ABCM, a new control parameter, the memory size M, is introduced. We conducted experiments with different values of M to analyze its effect in terms of the mean, standard deviation, and best and worst objective function values over 30 independent runs. With small values of M, the artificial bees have a small memory and only remember very recent successful trials; large values of M allow them to remember successful trials over a longer period. Six ABCM variants with M = {1, 2, 3, 4, 5, 10} are tested. The experimental results are shown in Table 7, where bold values indicate the best results. From the table, we
Table 6
Objective function values obtained by ABC and ABCM with D = 15 and 60 (Mean / St. dev. / Suc. rate).

No. | D | ABC | ABCM | P
f1 | 15 | 1.6797E−16 / 5.3464E−17 / 100% | 7.7166E−17 / 1.8242E−17 / 100% | 0
f1 | 60 | 1.1026E−15 / 1.1945E−16 / 100% | 7.9564E−16 / 1.9696E−16 / 100% | 0
f2 | 15 | 0.05834160 / 0.07777991 / 0% | 0.02306071 / 0.02739972 / 0% | 0.027
f2 | 60 | 0.13201098 / 0.25285252 / 0% | 0.05705444 / 0.10771555 / 100% | 0.003
f3 | 15 | 0 / 0 / 100% | 0 / 0 / 100% | —
f3 | 60 | 0 / 0 / 100% | 0 / 0 / 100% | —
f4 | 15 | 1.1102E−16 / 0 / 100% | 1.0362E−16 / 2.8167E−17 / 100% | 0.157
f4 | 60 | 1.0362E−15 / 1.0239E−16 / 100% | 8.3267E−16 / 2.3461E−16 / 100% | 0
f6 | 15 | 3.3953E−14 / 1.1439E−13 / 100% | 2.1356E−15 / 7.0044E−16 / 100% | 0.038
f6 | 60 | 9.7011E−15 / 1.2676E−14 / 100% | 5.1828E−15 / 7.1699E−16 / 100% | 0.009
f7 | 15 | 1.1517E−14 / 2.7174E−15 / 100% | 7.9640E−15 / 2.5523E−15 / 100% | 0
f7 | 60 | 7.4518E−14 / 5.2239E−15 / 100% | 6.8834E−14 / 1.3925E−14 / 100% | 0.037
f8 | 15 | 2.8563E−08 / 0 / 100% | 2.8563E−08 / 0 / 100% | —
f8 | 60 | 1.1429E−07 / 0 / 100% | 1.1429E−07 / 0 / 100% | —
Table 7
Performance of ABCM with different M values (Mean / St. dev. / Best / Worst).

function f1:
M = 1: 5.6181E−17 / 1.9234E−17 / 0 / 9.3855E−17
M = 2: 6.4505E−17 / 1.9982E−17 / 0 / 1.0521E−16
M = 3: 1.2180E−16 / 4.2623E−17 / 0 / 1.9622E−16
M = 4: 7.3443E−17 / 1.8009E−17 / 0 / 1.0132E−16
M = 5: 1.5502E−16 / 5.0331E−17 / 0 / 2.3016E−16
M = 10: 3.4119E−16 / 9.8966E−17 / 0 / 4.9263E−16

function f2:
M = 1: 0.15338240 / 0.46193076 / 5.3846E−04 / 2.54933825
M = 2: 0.06749250 / 0.13977455 / 7.8355E−05 / 0.49241241
M = 3: 0.09948655 / 0.23405245 / 2.4733E−04 / 0.57401856
M = 4: 0.09619985 / 0.14981454 / 2.4472E−05 / 0.55185348
M = 5: 0.10484573 / 0.13899793 / 1.3097E−04 / 0.45832022
M = 10: 0.16091519 / 0.34116009 / 6.5990E−04 / 1.76100461

function f3: all entries are 0 for every M.

function f4:
M = 1: 8.5117E−17 / 4.7760E−17 / 0 / 1.1102E−16
M = 2: 5.9212E−17 / 5.6335E−17 / 0 / 1.1102E−16
M = 3: 1.4063E−16 / 5.7824E−17 / 0 / 2.2204E−16
M = 4: 1.9244E−16 / 5.7824E−17 / 0 / 2.2204E−16
M = 5: 2.5165E−16 / 8.7143E−17 / 0 / 4.4409E−16
M = 10: 4.1448E−16 / 1.2347E−16 / 0 / 5.5511E−16

function f5: all entries are 0 for every M.

function f6:
M = 1: 3.8159E−15 / 8.6628E−16 / 2.3189E−15 / 4.9223E−15
M = 2: 3.1737E−15 / 8.3862E−16 / 1.3827E−15 / 4.9683E−15
M = 3: 3.6152E−15 / 1.0675E−15 / 1.3438E−15 / 6.3922E−15
M = 4: 3.6225E−15 / 9.5760E−16 / 2.1022E−15 / 6.5284E−15
M = 5: 3.3979E−15 / 7.7143E−16 / 1.8609E−15 / 5.8687E−15
M = 10: 3.2810E−15 / 7.1905E−16 / 1.8127E−15 / 4.5421E−15

function f7:
M = 1: 3.0376E−14 / 6.8178E−15 / 2.7534E−14 / 3.8192E−14
M = 2: 3.0228E−14 / 6.4964E−15 / 2.7978E−14 / 3.8636E−14
M = 3: 3.1678E−14 / 7.1105E−15 / 2.7534E−14 / 3.8192E−14
M = 4: 3.1323E−14 / 6.9777E−15 / 2.7534E−14 / 3.8192E−14
M = 5: 3.0376E−14 / 6.6236E−15 / 2.7534E−14 / 3.8192E−14
M = 10: 3.1293E−14 / 6.7236E−15 / 2.7978E−14 / 3.8636E−14

function f8: Mean 5.7126E−08, Best 5.7125E−08 and Worst 5.7127E−08 for every M; St. dev. 3.3210E−13, 6.2891E−13, 8.4782E−13, 6.8949E−13, 7.8250E−13 and 6.2891E−13 for M = 1, 2, 3, 4, 5 and 10, respectively.

function f9: Mean, Best and Worst −1.03162845 and St. dev. 6.7752E−16 for every M.

function f10: Mean, Best and Worst 0.39788736 and St. dev. 0 for every M.
Table 8
Objective function values obtained by the state-of-the-art algorithms (Mean / St. dev.).

No. | GA | PSO | DE | qABC | ABCM
f1 | 1.11E+03 / 74.21447 | 0 / 0 | 0 / 0 | 0 / 0 | 0 / 0
f2 | 1.96E+05 / 3.85E+04 | 15.08862 / 24.17020 | 18.20394 / 5.03619 | 0.13292 / 0.17991 | 0.06749 / 0.13977
f3 | 52.92259 / 4.56486 | 43.97714 / 11.72868 | 11.71673 / 2.53817 | 0 / 0 | 0 / 0
f4 | 10.63346 / 1.16146 | 0.01739 / 0.02081 | 0.00148 / 0.00296 | 0 / 0 | 0 / 0
f5 | 0.00424 / 0.00476 | 0 / 0 | 0 / 0 | 8.66E−06 / 7.83E−06 | 0 / 0
f6 | 1.22E+03 / 2.66E+02 | 0.66667 / E−08 | 0.66667 / E−09 | 1.15E−12 / 3.36E−12 | 0 / 0
f7 | 14.67178 / 0.17814 | 0.16462 / 0.49387 | 0 / 0 | 0 / 0 | 0 / 0
f8 | 976.08663 / 93.25424 | 5.66E+03 / 4.58E+02 | 2.30E+03 / 5.22E+02 | 5.71E−08 / 2.51E−12 | 5.71E−08 / 6.28E−13
f9 | −1.03163 / 0 | −1.03163 / 0 | −1.03163 / 0 | −1.03163 / 0 | −1.03163 / 0
f10 | 0.397887 / 0 | 0.397887 / 0 | 0.397887 / 0 | 0.397887 / 0 | 0.397887 / 0
find that similar mean values and standard deviations are obtained by ABCM under the different values of M. In functions f3, f5, f8, f9 and f10, the optimal results are found in terms of mean, best and worst values in each of the 30 independent runs for all M values. In the other five functions, M = 2 finds the best mean values in four (f2, f4, f6, f7) and ranks second in f1. Meanwhile, M = 1 reaches the best mean value in f1 and finds competitive values in f4, f6 and f7, but performs significantly worse than M = 2 in f2. In most functions, M = 3, 4 and 5 give very similar performance to M = 2. M = 10 achieves the worst performance in f2 and f4, but still finds very competitive results in the other functions. Overall, the results under different values of M are generally similar and competitive; in other words, ABCM is not very sensitive to the setting of the newly introduced parameter M. Nevertheless, the results show a trend that M = 2 reaches the best results, and too large a value of M (e.g., 10) may deteriorate the effectiveness of ABCM.

4.3.5. ABCM vs. state-of-the-art algorithms

The results of ABCM are further compared with state-of-the-art algorithms, including GA, PSO, DE and qABC. The results of these algorithms (except ABC) are taken from [20], together with their parameter settings. In particular, any value below 1.0E−12 is treated as 0. The performance comparison is presented in Table 8. GA achieves the worst results among the compared algorithms in most of the 10 benchmark functions; it performs best only on functions f9 and f10, where the other compared algorithms perform equally well. PSO and DE hit the optimal values in functions f1, f5, f9 and f10, but cannot provide good results in the other functions. The recently proposed qABC performs better than GA, PSO and DE in all functions except f5. Overall, the table shows that ABCM clearly outperforms GA, PSO, DE and qABC on all 10 selected benchmark functions.

4.3.6. Computational costs

Another important factor in comparing ABC and the proposed ABCM is the computational cost. Conceptually, ABCM requires more computational time than ABC, since a new process to maintain and use the memory is introduced. However, since, as discussed above, the memory size does not need to be large, ABCM is not expected to increase the computational time of ABC by much. In this section, we report the computational costs of ABC, qABC and ABCM with M = 2 on the 10 benchmark functions in Table 9. The experiments are carried out on a PC with an Intel Core i7 running at 2.00 GHz with 8 GB RAM; the compiler is Eclipse SDK ver. 4.3.2 on Windows 7. The reported values are the total computational time of 30 independent runs.
Table 9
Computational time of ABC, qABC and ABCM (unit: millisecond).

No. | ABC | qABC | ABCM
f1 | 2559 | 5715 | 2748
f2 | 3404 | 6808 | 3764
f3 | 21,075 | 24,460 | 21,386
f4 | 29,924 | 32,876 | 29,744
f5 | 3075 | 4028 | 3185
f6 | 2874 | 6152 | 3123
f7 | 22,822 | 26,067 | 22,993
f8 | 26,254 | 30,267 | 26,457
f9 | 5840 | 6795 | 6026
f10 | 2825 | 3809 | 3029
Average | 12,065.2 | 14,697.7 | 12,245.5
ABCM requires more computational time than standard ABC due to the processing of its memories, but in each function the time increases only slightly, as shown in Table 9. Averaged over the 10 functions, ABCM increases the computational time of ABC by only (12,245.5 − 12,065.2)/12,065.2 = 1.49%. In other words, even with the memory ability integrated, ABCM requires almost the same computational time as standard ABC. On the other hand, as mentioned in Section 4.3.2, qABC requires considerably more computational time per generation than ABC and ABCM, since the Euclidean distances between all pairs of food sources must be calculated, which is computationally expensive. The results in Table 9 confirm this argument: qABC clearly increases the computational time of ABC.
5. Conclusions and future work

A novel ABC variant called ABC with memory (ABCM) has been proposed in this paper. ABCM introduces a memory ability to the artificial bees of standard ABC to memorize their previous successful trials, which then guide the further foraging behavior of the employed bees and onlooker bees. Essentially, ABCM is inspired by the biological study of real honeybees, which potentially provides a new direction for improving ABC; it is conceptually different from most other ABC variants, which focus on integrating existing algorithms into the ABC framework. ABCM is designed to be implemented as simply as possible while realizing the memory ability of the artificial bees. Experimental studies show that the proposed memory mechanism significantly improves the final objective function values of standard ABC with only slightly more computational time. Our investigation also shows that ABCM obtains promising results on the studied benchmark functions compared with state-of-the-art algorithms. In addition, we verify that ABCM is not very sensitive to the setting of its newly introduced parameter, the memory size M.
In the future, we will improve this topic from several perspectives. Firstly, the memory mechanism of ABCM mainly records and reuses the neighboring food source k and updating coefficient $\phi$ of successful trials; in future work, the memory mechanism will be extended to adapt the search equations of ABC themselves. Another research direction is the adaptation of the control parameter M. ABCM will also be applied to other problems, such as scheduling problems and program generation.
Acknowledgments
571
572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592
This work is supported by the National Natural Science Foundation of China (71001016, 71421001), the Fundamental Research Funds for the Central Universities (DUT15RC(3)076, DUT15TD48), Humanity and Social Science Foundation of Ministry of Education of China (15YJCZH198) and Economic and Social Development Foundation of Liaoning (2016lslktzizzx-01). References [1] D. Karaboga, An idea based on honey bee swarm for numerical optimization, Tech. Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005. [2] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, J. Glob. Optim. 39 (3) (2007) 459–471. [3] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Appl. Math. Comput. 214 (1) (2009) 108–132. [4] D. Karaboga, B. Akay, A survey: algorithms simulating bee swarm intelligence, Artif. Intell. Rev. 31 (1) (2009) 68–85. [5] D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Appl. Soft Comput. 8 (1) (2008) 687–697. [6] W. Gao, S. Liu, A modified artificial bee colony algorithm, Comput. Oper. Res. 39 (3) (2012) 687–697. [7] B. Akay, D. Karaboga, A modified artificial bee colony algorithm for realparameter optimization, Inf. Sci. 192 (2012) 120–142. [8] S. Das, S. Biswas, S. Kundu, Synergizing fitness learning with proximity-based food source selection in artificial bee colony algorithm for numerical optimization, Appl. Soft Comput. 13 (12) (2013) 4676–4694. [9] Q.-K. Pan, M. Fatih Tasgetiren, P.N. Suganthan, T.J. Chua, A discrete artificial bee colony algorithm for the lot-streaming flow shop scheduling problem, Inf. Sci. 181 (12) (2011) 2455–2468.
[10] J. Taheri, Y.C. Lee, A.Y. Zomaya, H.J. Siegel, A bee colony based optimization approach for simultaneous job scheduling and data replication in grid environments, Comput. Oper. Res. 40 (6) (2013) 1564–1578.
[11] H. Garg, M. Rani, S. Sharma, An efficient two phase approach for solving reliability-redundancy allocation problem using artificial bee colony technique, Comput. Oper. Res. 40 (12) (2013) 2961–2969.
[12] C. Zhang, D. Ouyang, J. Ning, An artificial bee colony approach for clustering, Expert Syst. Appl. 37 (7) (2010) 4761–4767.
[13] D. Karaboga, C. Ozturk, A novel clustering approach: artificial bee colony (ABC) algorithm, Appl. Soft Comput. 11 (1) (2011) 652–657.
[14] D. Karaboga, C. Ozturk, N. Karaboga, B. Gorkemli, Artificial bee colony programming for symbolic regression, Inf. Sci. 209 (2012) 1–15.
[15] X. Li, G. Yang, K. Hirasawa, Evolving directed graphs with artificial bee colony algorithm, in: Proc. of the 14th International Conference on Intelligent Systems Design and Applications, 2014, pp. 89–94.
[16] G. Zhu, S. Kwong, Gbest-guided artificial bee colony algorithm for numerical function optimization, Appl. Math. Comput. 217 (7) (2010) 3166–3173.
[17] A. Banharnsakun, T. Achalakul, B. Sirinaovakul, The best-so-far selection in artificial bee colony algorithm, Appl. Soft Comput. 11 (2) (2011) 2888–2901.
[18] G. Li, P. Niu, X. Xiao, Development and investigation of efficient artificial bee colony algorithm for numerical function optimization, Appl. Soft Comput. 12 (1) (2012) 320–332.
[19] W. Xiang, M. An, An efficient and robust artificial bee colony algorithm for numerical optimization, Comput. Oper. Res. 40 (5) (2013) 1256–1265.
[20] D. Karaboga, B. Gorkemli, A quick artificial bee colony (qABC) algorithm and its performance on optimization problems, Appl. Soft Comput. 23 (2014) 227–238.
[21] M. Hammer, R. Menzel, Learning and memory in the honeybee, J. Neurosci. 15 (3) (1995) 1617–1630.
[22] S. Zhang, F. Bock, A. Si, J. Tautz, M.V. Srinivasan, Visual working memory in decision making by honey bees, Proc. Natl. Acad. Sci. U. S. A. 102 (14) (2005) 5250–5255.
[23] S. Zhang, S. Schwarz, M. Pahl, H. Zhu, J. Tautz, Honeybee memory: a honeybee knows what to do and when, J. Exp. Biol. 209 (22) (2006) 4420–4428.
[24] M.S. Kiran, A. Babalik, Improved artificial bee colony algorithm for continuous optimization problems, J. Comput. Commun. 2 (2014) 108–116.
[25] T. Bayraktar, A memory-integrated artificial bee algorithm for heuristic optimisation, M.Sc. Thesis, University of Bedfordshire, 2014.
[26] M. Giurfa, S. Zhang, A. Jenett, R. Menzel, M.V. Srinivasan, The concepts of sameness and difference in an insect, Nature 410 (6831) (2001) 930–933.
[27] S. García, D. Molina, M. Lozano, F. Herrera, A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC2005 special session on real parameter optimization, J. Heuristics 15 (6) (2009) 617–644.
[28] X. Li, S. Mabu, K. Hirasawa, A novel graph-based estimation of the distribution algorithm and its extension using reinforcement learning, IEEE Trans. Evol. Comput. 18 (1) (2014) 98–113.