Neurocomputing 148 (2015) 39–45
A mechanism based on Artificial Bee Colony to generate diversity in Particle Swarm Optimization
L.N. Vitorino, S.F. Ribeiro, C.J.A. Bastos-Filho*
University of Pernambuco, Recife, Pernambuco, Brazil
Article history: Received 7 April 2012; received in revised form 2 February 2013; accepted 5 March 2013; available online 1 August 2014.

Abstract
Particle Swarm Optimization (PSO) presents fast convergence for problems with continuous variables, but in most cases it may not properly balance exploration and exploitation behaviours. On the other hand, Artificial Bee Colony (ABC) presents an interesting capability to generate diversity when employed bees stagnate in a certain region of the search space. In this paper we put forward a mechanism based on the ABC to generate diversity when all particles of the PSO converge to a single point of the search space. The swarm entities can then switch between two pre-defined behaviours by using fuzzy rules, depending on the diversity of the whole swarm. As the basis of our proposal, we utilize the Adaptive PSO (APSO) approach, because it presents the capability to properly weight the terms of the velocity equation depending mainly on the current diversity of the entire swarm. We name our proposal ABeePSO. It was evaluated and compared to other well-known swarm-based approaches on all benchmark functions recently proposed at CEC 2010 for large scale optimization. Our proposal outperformed previous approaches in most of the cases. © 2014 Elsevier B.V. All rights reserved.
Keywords: Swarm intelligence; Multidimensional optimization; Artificial Bee Colony; Particle Swarm Optimization
1. Introduction

Swarm Intelligence has been widely used to solve real-world optimization problems, especially in high dimensional search spaces. There are many population-based approaches which are suitable to tackle this type of problem, such as Particle Swarm Optimization – PSO [10,11,7], Artificial Bee Colony – ABC [5,13–15,25], Fish School Search – FSS [3,4] and Ant Colony Optimization – ACO [9,8].

PSO was inspired by the social behaviour of flocks of birds [11]. Currently, PSO is one of the most used swarm intelligence algorithms. Although the original PSO presents a high convergence velocity, it does not present the capability to escape from local minima. This occurs because the original PSO is not able to maintain diversity within the swarm whenever it is necessary during the search process. These issues affect the PSO performance, mainly in dynamic problems or high dimensional multimodal search spaces. Many efforts have been deployed to overcome these weaknesses. Among them, we can cite the Adaptive PSO (APSO) [24,20], Charged PSO [6], Restart PSO [12] and Barebones PSO (BPSO) [16]. The Charged PSO presents problems in terms of convergence in some cases, since some of the particles repel one another. On the
* Corresponding author at: University of Pernambuco, Av. Benfica, 455, ZIP 50720-001, Recife/PE, Brazil. Tel.: +55 81 91083450. E-mail address: [email protected] (C.J.A. Bastos-Filho).
http://dx.doi.org/10.1016/j.neucom.2013.03.076
0925-2312/© 2014 Elsevier B.V. All rights reserved.
other hand, the Restart PSO loses information when some of the particles stagnate and are mutated. In the Barebones PSO algorithm (BPSO), the positions of the particles are updated by using a Gaussian distribution. This distribution uses operations with the values of the cognitive and social memories, such as the average value and the standard deviation. The coherence of the movement of the particles can be mitigated in the BPSO. In our opinion, the APSO is a promising approach, since it maintains the original ethos of the basic PSO and can self-adapt the acceleration coefficients used to update the velocity of the particles depending on the current state of the swarm [19]. Although the APSO presents these very interesting characteristics, the mechanism used to generate diversity when the swarm is trapped in the same region of the search space just alters one dimension of one of the particles by performing a Gaussian mutation, which does not include previous information about the search process.

On the other hand, the ABC presents the capability to increase the diversity of the swarm when an employed bee stagnates during the search process around its food source. In this case, the bee changes its behaviour to explore another region of the search space. When a bee finds a promising region, a food source is assigned to this region, and the employed bee can attract other bees to exploit this new food source. One can observe that it is possible to use this feature of the ABC algorithm to generate diversity in other algorithms. In this paper, we propose an approach to generate diversity by using a mechanism based on the ABC algorithm when the APSO stagnates. We defined
two types of behaviours and the swarm alternates between them depending on the diversity of the entire swarm. The remainder of the paper is organized as follows. In Sections 2 and 3, we present the APSO and the ABC, respectively. In Section 4, we describe our proposal. In Section 5, we present the simulation setup and the results. Finally, we present our conclusions in Section 6.
2. Adaptive Particle Swarm Optimization (APSO)

Zhan et al. [24] proposed the APSO in 2009. This algorithm updates the PSO parameters based on fuzzy rules and can be viewed as a sequence of iterations of the following steps: (i) estimation and classification of the diversity state; (ii) determination of the acceleration coefficients based on the diversity state; (iii) application of a mechanism to generate diversity; and (iv) adaptation of the inertia factor. The diversity state can be inferred by evaluating the diversity factor (f_d) as shown in the following equation:

f_d = (d_g − d_min) / (d_max − d_min) ∈ [0, 1],  (1)

in which d_g is the average distance between the best particle of the swarm and the other particles, and d_min and d_max are the smallest and the largest average distances among all particles, respectively. The average distance (d_i) of the ith particle can be evaluated by using

d_i = (1 / (N − 1)) ∑_{j=1, j≠i}^{N} sqrt( ∑_{k=1}^{D} (x_i^k − x_j^k)^2 ),  (2)

in which N is the number of particles in the swarm and D is the number of dimensions of the particles. The convergence state of the swarm is classified based on fuzzy rules: at each iteration, the diversity state is assigned according to the membership function with the highest value. Fig. 1 presents the four membership functions used in the APSO.

The next step is to update the acceleration coefficients (c1 and c2). According to Zhan et al. [24], c1 and c2 are initialized with values equal to 2.0 and are updated according to the current value of f_d. If the swarm is in the Convergence state, then we slightly increment c1 and slightly increment c2. If the swarm is in the Exploitation state, then we slightly increment c1 and slightly decrement c2. If the swarm is in the Exploration state, then we increment c1 and decrement c2. If the swarm is in the Jumping-out state, then we decrement c1 and increment c2. Since c1 + c2 must be equal to 4.0, one has to recalculate c1 and c2 based on their previous weights.

The third step is the application of a mechanism to generate diversity, which is a greedy local search applied to only one dimension of the current best particle of the swarm, aiming to allow this particle to escape from a local optimum. The operator is a Gaussian mutation generated by a normal distribution N(0, σ), where σ is called the elitism learning rate. Zhan et al. [24] proposed to use σ = 1.0 in the beginning of the simulations and decrease it to σ = 0.1 at the end of the simulation. The position of the best particle is just updated if the new position found by the operator is better than the previous one. In the last step, the inertia factor ω is updated using Eq. (3) in order to self-adjust the exploration–exploitation capability of the swarm:

ω(f_d) = 1 / (1 + 1.5 e^{−2.6 f_d}) ∈ [0.4, 0.9],  ∀ f_d ∈ [0, 1].  (3)
Fig. 1. Fuzzy membership functions for the APSO algorithm.
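The APSO machinery above (Eqs. (1)–(3) and the coefficient rules) can be sketched in a few lines of Python. This is an illustrative reading, not the authors' code: the state labels, the default value of δ and the renormalization to c1 + c2 = 4.0 are assumptions based on the description in this section.

```python
import numpy as np

def diversity_factor(positions, gbest_index):
    """Diversity factor f_d (Eqs. (1)-(2)) for an (N, D) array of positions."""
    N = positions.shape[0]
    diff = positions[:, None, :] - positions[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=2))      # (N, N) pairwise Euclidean distances
    d = dists.sum(axis=1) / (N - 1)               # mean distance of each particle
    dg, dmin, dmax = d[gbest_index], d.min(), d.max()
    return (dg - dmin) / (dmax - dmin + 1e-12)    # f_d in [0, 1]

def inertia(fd):
    """Self-adapted inertia weight, Eq. (3)."""
    return 1.0 / (1.0 + 1.5 * np.exp(-2.6 * fd))

def update_coefficients(c1, c2, state, delta=0.05):
    """State-dependent c1/c2 rules; 'slight' changes use delta/2 (our assumption)."""
    rules = {"convergence": (delta / 2, delta / 2),
             "exploitation": (delta / 2, -delta / 2),
             "exploration": (delta, -delta),
             "jumping_out": (-delta, delta)}
    dc1, dc2 = rules[state]
    c1, c2 = c1 + dc1, c2 + dc2
    # renormalize so that c1 + c2 = 4.0, keeping the relative weights
    s = c1 + c2
    return 4.0 * c1 / s, 4.0 * c2 / s
```

Note that ω(0) = 0.4 and ω(1) ≈ 0.9, matching the interval stated in Eq. (3).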
3. Artificial Bee Colony (ABC)

The ABC algorithm was proposed by Basturk and Karaboga [5] and was inspired by the behaviour of honey bees. In the original version, there are three types of artificial bees: employed, onlooker and scout bees. The employed bees explore the food sources found by themselves. The bees that decide to exploit a food source depending on the information shared by the employed bees are called onlooker bees. The bees that try to find new food sources are the scout bees. In practice, there are just two types of entities: the guide bees and the guided bees. The guide bees have two different behaviours: exploration (guide bees in the scout mode) and exploitation (guide bees in the employed mode). The guided bees are the onlooker bees. In general, the numbers of guide bees and guided bees are the same. Besides, there is only one employed bee (guide bee in exploitation mode) for every food source.

The number of food sources is SN. Each food source, which is associated with the ith guide bee in the exploitation mode, is updated according to the following equation:

v_i = x_i + r_i (x_i − x_k),  (4)

in which r_i is a vector of random numbers generated within the interval [−1, 1], x_i is the current position of the guide bee, and x_k is a different food source, i.e. k = 1, 2, …, SN with k ≠ i. The new candidate solution (v_i) is compared with the original one (x_i), and the better one is assigned as the ith food source. Besides, each guided bee needs to select one of the available food sources to explore. The probability to select a food source can be evaluated by

p_i = fit_i / ∑_{j=1}^{SN} fit_j,  (5)
in which p_i is the probability to select the ith food source and fit_i is the fitness of the ith food source. Each guided bee searches for better solutions around the selected food source.

Each food source is evaluated at every iteration. If a food source does not improve its fitness after a pre-determined number of trials (called MaxTrial), this food source is discarded and the guide bee associated with it changes its behaviour from exploitation mode to exploration mode. Then, the food source is reinitialized by using

x_i = x_min + r (x_max − x_min),  (6)

in which r is a vector of random numbers generated within the range [0, 1], and x_min and x_max are the lower and upper boundaries of the search space, respectively.

Some other approaches based on the behaviour of honey bees were proposed in the last decade. Teodorovic and Dell'Orco [23] proposed Bee Colony Optimization (BCO) for combinatorial problems [22]. Pham et al. proposed the Bees Algorithm (BA) for combinatorial problems [17] and function optimization [18], mimicking the food foraging behaviour of honey bees. Akbari et al. [1] developed the Bee Swarm Optimization (BSO), which was also inspired by the foraging behaviour of honey bees, to provide different patterns for the adjustment of the flying trajectories of the bees [2]. Although the BA and BSO algorithms can be used
for continuous optimization, we have chosen the ABC algorithm because of its simplicity and the capability to quickly generate diversity.
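The ABC cycle described above (Eqs. (4)–(6) with the MaxTrial counter) can be sketched as follows. This is a minimal illustration assuming the common fitness transformation fit = 1/(1 + f(x)) for minimization; the function `abc_iteration` and its argument layout are ours, not from the original paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def abc_iteration(food, fitness, trials, f, lo, hi, max_trial=5):
    """One ABC iteration: employed phase (Eq. (4)), onlooker phase (Eq. (5))
    and scout phase (Eq. (6)). `food` is (SN, D); arrays are updated in place."""
    SN, D = food.shape

    def try_move(i, v):
        new_fit = 1.0 / (1.0 + f(v))
        if new_fit > fitness[i]:                  # greedy selection
            food[i], fitness[i], trials[i] = v, new_fit, 0
        else:
            trials[i] += 1

    # employed bees: perturb each source using a randomly chosen other source
    for i in range(SN):
        k = rng.choice([j for j in range(SN) if j != i])
        r = rng.uniform(-1, 1, D)
        try_move(i, food[i] + r * (food[i] - food[k]))

    # onlooker bees: pick sources with probability proportional to fitness (Eq. (5))
    p = fitness / fitness.sum()
    for _ in range(SN):
        i = rng.choice(SN, p=p)
        k = rng.choice([j for j in range(SN) if j != i])
        r = rng.uniform(-1, 1, D)
        try_move(i, food[i] + r * (food[i] - food[k]))

    # scout bees: reinitialize exhausted sources uniformly within the bounds (Eq. (6))
    for i in range(SN):
        if trials[i] >= max_trial:
            food[i] = lo + rng.uniform(0, 1, D) * (hi - lo)
            fitness[i] = 1.0 / (1.0 + f(food[i]))
            trials[i] = 0
```

The scout phase is what the paper exploits: it discards a stagnated source and restarts it anywhere in the search space, which injects diversity.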
4. Our proposal: Adaptive Bee and Particle Swarm Optimization (ABeePSO)
Fig. 2. Fuzzy membership functions for the ABeePSO algorithm.
In this paper, we propose a hybrid algorithm, called Adaptive Bee and Particle Swarm Optimization (ABeePSO), which combines two classic algorithms: APSO and ABC. We chose the APSO due to its fast convergence and the presence of adaptive mechanisms based on the diversity factor. On the other hand, the ABC has the capability to generate diversity in the swarm when the guide bees are in the exploration mode. In essence, we propose to include the exploration ability of the ABC algorithm instead of using the Learning Strategy of the APSO, since the APSO Learning Strategy is not enough to ensure the diversity required for optimization in high dimensional spaces.

In our proposal, the ABC is only executed when the swarm reaches the Convergence diversity state. In this case, the swarm with N particles is divided: the N/2 particles with better fitness become guide bees and the other particles become guided bees. The ith guide bee is updated according to the following equation:

v_i = x_i + s r_i (B − x_i) / Dist(B, x_i),  (7)

in which B is the barycentre of the guide bees, evaluated as defined in Eq. (8), r_i is a vector of random numbers generated by a uniform distribution within the interval [0, 1], and Dist is the Euclidean distance between the barycentre (B) and the current position (x_i). Although one can observe some similarity between Eq. (8) and the term used to define the best position of the swarm in the fully informed PSO, in this case we just use the barycentre to coherently generate diversity with respect to the positions of the particles:

B = ( ∑_{i=1}^{N/2} x_i fit_i ) / ( ∑_{i=1}^{N/2} fit_i ).  (8)
s is a vector that defines the dispersion step of the swarm. s is evaluated by

s = {1 − 1 / (1 + e^{α(β − f_d)})} (x_max − x_min),  (9)
in which α and β are the parameters of the sigmoid function. When the fitness value changes, we update the value of the dispersion according to Eq. (10). If the new position is better than the current position, the step value is decreased; otherwise, the step value is increased:

s ← s ∓ f_d s.  (10)
After this, the guided bees need to select one of the food sources, where the probability of choosing a food source is evaluated by using Eq. (5). Each guided bee searches for a new solution around the food source by using

v_i = x_i + r_i (fs_i − B),  (11)

in which r_i is a vector of random numbers generated by a uniform distribution within the interval [0, 1] and fs_i is the food source selected by the guided bee. The next step is to estimate the diversity state, as in the APSO algorithm. However, this estimation only considers the food sources, which represent the current potential solutions for the optimization problem.
Besides, we observed that other intervals for the fuzzy membership functions led to better results. Fig. 2 presents the membership functions used by our proposal. The algorithm stops the execution of the ABC only when the diversity state is Jumping out or Exploration. During the execution of the ABC-based diversity mechanism, the memory of the particles is preserved. Before the reinitialization of the APSO algorithm, it is necessary to compare the memory of the particles with the food sources. If the food source of a bee is better than the best position stored in the memory of the entity, the memory of the particle is updated. Otherwise, the memory is re-established.
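Our reading of the ABeePSO diversity mechanism (Eqs. (7)–(11)) can be sketched as below. Since the printed equations are ambiguous in places, the sign conventions and the element-wise products here are assumptions; treat this as a sketch rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def barycentre(guides, fit):
    """Fitness-weighted barycentre of the guide bees (Eq. (8))."""
    return (guides * fit[:, None]).sum(axis=0) / fit.sum()

def dispersion_step(fd, lo, hi, alpha=1.0, beta=0.5):
    """Dispersion step (Eq. (9)): larger for low f_d, smaller for high f_d;
    alpha and beta shape the sigmoid."""
    return (1.0 - 1.0 / (1.0 + np.exp(alpha * (beta - fd)))) * (hi - lo)

def move_guide(x, B, s):
    """Guide-bee move (Eq. (7)) along the normalized direction to the barycentre."""
    d = np.linalg.norm(B - x) + 1e-12            # Dist(B, x_i)
    r = rng.uniform(0, 1, x.shape)
    return x + s * r * (B - x) / d

def move_guided(x, fs, B):
    """Guided-bee move (Eq. (11)) around its selected food source fs."""
    r = rng.uniform(0, 1, x.shape)
    return x + r * (fs - B)
```

Because the step in Eq. (9) shrinks as f_d grows, the mechanism disperses the swarm strongly only when diversity is low, which matches the behaviour described in Section 5.2.4.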
5. Simulations and results

5.1. Simulation setup

We performed our simulations on all the 20 benchmark minimization problems proposed by Tang et al. [21] for performance evaluation and comparison to other algorithms. Basically, the functions are modified versions of the Elliptic, Rastrigin, Ackley, Rosenbrock and Schwefel functions. For the parameter analysis, we selected six functions in order to represent all classes of problems described in the entire benchmark.

We used 30 particles for all PSO-based approaches, while 60 bees were used for the ABC algorithm. For ABeePSO and APSO, c1 and c2 vary in the interval between 1.5 and 2.5. For the other PSO-based approaches, c1 = c2 = 2.05. The inertia factor for all PSO-based approaches decays linearly from 0.9 to 0.4 along the iterations. We performed 1000, 3000 and 5000 iterations for 100, 300 and 500 dimensions, respectively. The standard MaxTrial value for the ABC is equal to 5. In the ChPSO, we used Q = 16 and 50% of charged particles.

5.2. Parameter analysis and results

We performed a parametric analysis regarding the dispersion step, the bounds of the acceleration coefficients (c1 and c2), the intervals of the fuzzy membership functions, and the correlation between the diversity factor and the proposed diversity mechanism.

5.2.1. Analysis of the dispersion step

This analysis aims to investigate the influence of the parameters α and β used to calculate the dispersion step of the guide bees (Eq. (9)) for one unimodal and one multimodal function. We selected a logistic function since we aim to have a step value around 1.0 for low f_d values and low step values for high f_d values. We used 100 dimensions and 1000 iterations for this analysis. We varied the value of α between 0 and 100, for β = 0.1, 0.2 and 0.5. We observed that the best results were obtained for β = 0.5. We also observed that the variation of α does not significantly interfere with the performance for the studied unimodal function (F1).
For the multimodal function (F16), we observed that lower α values led to better results. Because of this, we assume α = 1.0 and β = 0.5 for further simulations. Figs. 3 and 4 depict the fitness value as a function of α for three different values of β for functions F1 and F16, respectively.
observations carried out in this section can be generalized to all cases. The influence on the performance is not significant for the unimodal function, but the best result was achieved for δ = 0.2. For the multimodal function (F13), we achieved the best results when δ is randomly chosen from the intervals [0.01; 0.05] and [0.05; 0.1]. Since the difference between the results is not statistically significant, we chose the same interval defined in the original APSO for further simulations, i.e. δ randomly chosen from the interval [0.05; 0.1].
Fig. 3. Performance of the algorithm for the F1 function for different values of α and β.
Fig. 4. Performance of the algorithm for the F16 function for different values of α and β.
Table 1
Average fitness value for different policies for δ.

δ policy              F4          F13
Random(0.01; 0.1)     1.23E+14    1.02E+08
Random(0.01; 0.05)    1.33E+14    5.26E+07
Random(0.02; 0.1)     1.21E+14    5.91E+07
Random(0.02; 0.2)     1.38E+14    8.42E+07
Random(0.05; 0.1)     1.32E+14    5.72E+07
Random(0.05; 0.2)     1.37E+14    5.82E+07
Fixed at 0.1          1.30E+14    8.91E+07
Fixed at 0.2          1.13E+14    6.56E+07
Fixed at 0.05         1.32E+14    1.05E+08
5.2.2. Acceleration coefficient (c1 and c2) analysis

Since the acceleration coefficients (c1 and c2) must be updated at each iteration [24] and our proposal presents a different dynamic behaviour, it is necessary to analyze the impact of the variation rate of c1 and c2 on the performance of our proposal. The increment (or decrement) of c1 and c2 per iteration is defined as δ; a slight increment (or decrement) is performed by using δ/2. We used 100 dimensions and 1000 iterations for this analysis. Table 1 shows the average fitness value for functions F4 (unimodal) and F13 (multimodal) for different policies to determine δ. We chose these two functions since they represent two different families of functions, but other simulations must be performed on the entire group of functions to ensure that the
5.2.3. Analysis of the intervals of the fuzzy membership functions

We observed that it is necessary to properly define the conditions to trigger and turn off the proposed mechanism, since they determine the rules to switch between APSO and ABC and vice versa. We performed some simulations for the functions F7 (unimodal) and F20 (multimodal) for different fuzzy membership functions, since they represent two different types of functions. We selected these other functions to avoid biasing the analysis. Configuration 1 uses the fuzzy membership functions depicted in Fig. 2. In configuration 2, we modified the interval of the Convergence state from 0.2 to 0.3 and the threshold of the Exploitation state from 0.1 to 0.2. This variation allows the algorithm to anticipate the start of the execution of the ABC algorithm. In configuration 3, we modified the interval of the Exploitation state from 0.5 to 0.6 and the threshold of the Exploration state from 0.4 to 0.5. This variation allows the execution of the ABC algorithm for a longer time. We used 100 dimensions and 1000 iterations for this analysis.

For the unimodal function (F7), configurations 1, 2 and 3 achieved average fitness and (standard deviation) equal to 4.2E+10 (8.53E+09), 4.59E+10 (5.43E+09) and 4.9E+10 (1.31E+09), respectively. For the multimodal function (F20), configurations 1, 2 and 3 achieved average fitness and (standard deviation) equal to 8.23E+08 (2.5E+08), 1.02E+09 (6.39E+08) and 1.29E+09 (4.6E+08), respectively. The best results were achieved for the configuration depicted in Fig. 2, i.e. configuration 1. In other benchmark functions, we observed that the dependence on the fuzzy intervals is higher for the multimodal functions, but the configuration depicted in Fig. 2 is still the best option.

5.2.4. Analysis of the correlation between the diversity factor and the proposed mechanism

We performed simulations on most of the benchmark functions aiming at analyzing the evolution of the diversity factor along the simulations. We observed empirically that, in most of the benchmark functions, the diversity factor decays monotonically when the swarm is converging to a particular region of the search space. On the other hand, the diversity factor increases whenever the proposed mechanism is triggered, until it stops. As a consequence, there is a strong indication that the proposed mechanism can generate diversity. We also observed that the particles tend to escape from the barycentre of the swarm coherently, changing the granularity of the search but partially maintaining the relative spatial positions of the particles. In order to illustrate this, we show in Fig. 5 the diversity factor value as a function of the number of iterations for the F2 function.

5.2.5. Performance comparison with previous approaches

This subsection aims to compare the ABeePSO with other classical approaches, such as PSO, APSO and ABC. We used the parameters proposed by the authors for the PSO, APSO and ABC. We performed the simulations in 100, 300 and 500 dimensions for 1000, 3000 and 5000 iterations, respectively. We used the same number of fitness evaluations for all algorithms (Table 2).
The population size of the PSO and the APSO is 30 particles and the communication topology is local. We used 60 bees and MaxTrial = 5 for the ABC algorithm. Table 3 shows the average fitness and (standard deviation) for our proposal and the other
Table 4 Results of Wilcoxon test for the comparison of our proposal to the other approaches for 100D and 1000 iterations.
Fig. 5. Diversity factor as a function of the number of iterations for the F2 function.
Table 2
Variation of intervals of fuzzy membership functions.

Conf.   F7                      F20
1       3.95E+10 (8.03E+09)     5.21E+08 (1.26E+08)
2       4.04E+10 (6.40E+09)     5.45E+08 (1.51E+08)
3       4.45E+10 (7.77E+09)     7.77E+08 (1.75E+08)
F#    ABC   APSO   PSO   ChPSO   BPSO   RPSO
F1    ▴     ▴      ▴     ▴       ▴      ▴
F2    ▴     ▿      ▴     ▴       ▿      ▴
F3    ▴     ▴      ▴     ▴       ▴      ▴
F4    ▴     ▴      ▴     ▴       ▴      ▴
F5    ▴     ▴      ▴     ▴       ▴      ▴
F6    ▴     ▴      ▴     ▴       ▴      ▴
F7    ▴     ▴      ▴     ▴       ▴      ▴
F8    ▴     ▴      ▴     ▴       ▴      ▴
F9    ▴     ▴      ▴     ▴       ▴      ▴
F10   ▴     ▿      ▴     ▴       ▿      ▴
F11   ▴     ▴      ▴     ▴       ▴      ▴
F12   ▴     ▴      ▴     ▴       ▴      ▴
F13   ▴     ▴      ▴     ▴       ▴      ▴
F14   ▴     ▴      ▴     ▴       ▴      ▴
F15   ▴     ▴      ▴     ▴       ▴      ▴
F16   ▴     ▴      ▴     ▴       ▴      ▴
F17   ▴     ▴      ▴     ▴       ▴      ▴
F18   ▴     ▴      ▴     ▴       ▴      ▴
F19   ▴     ▴      ▴     ▴       ▴      ▴
F20   ▴     ▴      ▴     ▴       ▴      ▴
Table 3
Performance comparison in terms of average fitness value and (standard deviation) among ABeePSO, PSO, APSO and ABC for all the 20 benchmark functions for 100 dimensions.

F#
ABeePSO
ABC
APSO
PSO
Charged-PSO
F1
2.86E+08 (8.22E+07) 9.58E+02 (63.09667) 12.07975 (0.656529) 1.32E+14 (3.05E+13) 3.81E+08 (2.64E+07) 7.59E+06 (8.71E+05) 4.33E+10 (9.10E+09) 6.78E+13 (5.49E+05) 4.02E+08 (9.68E+07) 9.73E+02 (39.83971) 25.93234 (2.34576) 9.26E+04 (1.67E+04) 8.08E+07 (7.19E+07) 6.29E+08 (1.21E+08) 9.82E+02 (40.05412) 25.48114 (2.66121) 1.84E+05 (2.29E+04) 1.02E+09 (6.82E+08) 2.09E+05 (2.82E+04) 1.09E+09 (6.41E+08)
9.20E+09 (1.92E+09) 1.31E+03 (85.5529) 20.62107 (0.08368) 1.58E+15 (3.05E+14) 7.28E+08 (3.32E+07) 2.09E+07 (6.01E+04) 2.19E+11 (2.79E+10) 7.98E+16 (2.01E+16) 2.16E+09 (6.70E+08) 1.64E+03 (68.497) 41.67951 (0.132561) 5.86E+05 (7.60E+04) 6.46E+10 (1.52E+10) 5.36E+09 (9.68E+08) 1.75E+03 (57.49683) 42.02208 (0.098697) 6.79E+05 (7.42E+04) 2.01E+11 (3.15E+10)
3.23E+09 (1.19E+09) 9.10E+02 (70.90304) 19.62592 (0.401446) 3.11E+14 (1.24E+14)
9.16E+08 (1.62E+08) 1.21E+03 (62.80075) 17.45967 (0.635132) 2.84E+14 (5.87E+13) 4.81E+08 (2.75E+07) 1.28E+07 (6.37E+05) 7.50E+10 (1.2E+10)
1.69E+09 (3.8E+08)
F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13 F14 F15 F16 F17 F18 F19 F20
8.01E+05 (7.82E+04) 2.58E+11 (4.56E+10)
4.45E+08 (4.70E+07) 1.97E+07 (3.27E+05) 8.45E+10 (1.61E+10) 2.99E+16 (9.34E+15) 1.15E+09 (4.64E+08) 8.54E+02 (77.301) 38.83987 (0.875927) 3.08E+05 (5.03E+04) 2.00E+10 (5.99E+09) 1.26E+09 (3.63E+08) 1.15E+03 (62.5245) 40.38697 (0.411129) 3.04E+05 (3.85E+04) 8.16E+10 (1.37E+10)
Barebones PSO
4.60E+08 (1.20E+06) 1.33E+03 (62.8571) 6.31E+02 (86.2147) 20.1836 (0.2693) 19.4055 (1.6169) 4.19E+14 (1.19E+14) 2.31E+14 (3.76E+13) 5.31E+08 (3.36E+07) 4.76E+08 (6.22E+07) 1.67E+07 (8.79E+05) 1.29E+07 (6.44E+05) 1.12E+11 (2.10E+10) 7.18E+10 (1.47E+10)
4.49E+14 (1.19E+14) 2.50E+15 (1.06E+15) 1.11E+09 (1.52E+08) 2.49E+09 (1.09E+09) 1.23E+03 (64.016) 1.38E+03 (61.3140) 34.68351 (0.87051) 39.5357 (0.7336) 1.95E+05 (2.19E+04) 3.08E+05 (4.64E+04) 4.69E+08 (1.19E+08) 2.99E+09 (1.03E+09) 1.32E+09 (2.21E+08) 2.13E+09 (4.24E+08) 1.24E+03 (67.00242) 1.37E+03 (64.5416) 35.10031 (0.811823) 39.6941 (0.7631) 2.77E+05 (3.40E+04) 3.85E+05 (6.36E+04) 1.16E+10 (3.25E+09) 4.63E+10 (1.03E+10) 4.03E+05 (9.40E+04) 3.08E+05 (3.96E+04) 4.39E+05 (5.92E+04) 1.12E+11 (1.81E+10) 1.17E+10 (5.35E+09) 6.27E+10 (1.55E+10)
Restart PSO
8.11E+09 (1.15E+09)
1.44E+03 (44.6243) 20.6819 (0.0829) 2.05E+15 (4.26E+14) 6.23E+08 (2.87E+07) 2.04E+07 (1.40E+05) 1.92E+11 (2.97E+10) 1.31E+14 (1.10E+12) 4.56E+16 (5.76E+15) 2.45E+09 (6.70E+07) 1.47E+10 (1.86E+09) 7.76E+02 (133.6236) 1.53E+03 (53.8489) 30.3871 (3.7690) 41.2393 (0.2381) 1.20E+05 (2.37E+04) 6.85E+05 (7.27E+04) 2.52E+08 (1.54E+08) 3.16E+10 (5.27E+09) 8.47E+08 (1.33E+08) 1.03E+10 (1.46E+09) 1.03E+03 (110.9042) 1.53E+03 (43.7585) 29.0369 (1.5069) 41.2976 (0.1351) 2.67E+05 (3.25E+04) 8.20E+05 (1.12E+05) 2.00E+09 (1.08E+10) 1.10E+11 (6.46E+08) 3.23E+05 (4.58E+04) 8.12E+05 (1.33E+05) 2.31E+09 (4.88E+08) 1.49E+11 (1.72E+10)
Table 5
Results of Wilcoxon test for the comparison of our proposal to the other approaches for 300D and 3000 iterations.

F#    ABC   APSO   PSO   ChPSO   BPSO   RPSO
F1    ▴     –      ▴     ▴       ▴      ▴
F2    ▴     ▿      ▴     ▿       ▿      ▴
F3    ▴     ▴      ▴     ▴       ▴      ▴
F4    ▴     ▴      ▴     ▴       ▴      ▴
F5    ▴     ▴      ▴     ▴       ▴      ▴
F6    ▴     ▴      ▴     ▴       ▴      ▴
F7    ▴     ▴      ▴     ▴       ▴      ▴
F8    ▴     ▴      ▴     ▴       ▴      ▴
F9    ▴     –      ▴     ▴       ▴      ▴
F10   ▴     ▴      ▴     ▴       ▴      ▴
F11   ▴     ▴      ▴     ▴       ▴      ▴
F12   ▴     ▴      ▴     ▴       ▴      ▴
F13   ▴     ▴      ▴     ▴       ▴      ▴
F14   ▴     ▴      ▴     ▴       ▴      ▴
F15   ▴     ▿      ▴     ▴       –      ▴
F16   ▴     ▴      ▴     ▴       ▴      ▴
F17   ▴     ▴      ▴     ▴       ▴      ▴
F18   ▴     ▴      ▴     ▴       ▴      ▴
F19   ▴     ▿      ▴     ▴       –      ▴
F20   ▴     ▴      ▴     ▴       ▴      ▴
Table 6
Results of Wilcoxon test for the comparison of our proposal to the other approaches for 500D and 5000 iterations.

F#    ABC   APSO   PSO   ChPSO   BPSO   RPSO
F1    ▴     –      ▴     ▴       –      ▴
F2    ▴     ▿      ▴     ▴       ▿      ▴
F3    ▴     ▴      ▴     ▴       ▴      ▴
F4    ▴     ▴      ▴     ▴       ▴      ▴
F5    ▴     ▴      ▴     ▴       ▴      ▴
F6    ▴     ▴      ▴     ▴       ▴      ▴
F7    ▴     ▴      ▴     ▴       ▴      ▴
F8    ▴     ▴      ▴     ▴       –      ▴
F9    ▴     ▿      ▴     ▴       –      ▴
F10   ▴     ▿      ▴     ▴       –      ▴
F11   ▴     ▴      ▴     ▴       ▴      ▴
F12   ▴     ▿      ▴     ▴       –      ▴
F13   ▴     ▴      ▴     ▴       ▴      ▴
F14   ▴     ▿      ▴     ▴       ▴      ▴
F15   ▴     ▿      ▴     ▴       ▴      ▴
F16   ▴     ▴      ▴     ▴       ▴      ▴
F17   ▴     ▴      ▴     ▴       ▴      ▴
F18   ▴     ▴      ▴     ▴       ▴      ▴
F19   ▴     ▿      ▴     ▴       ▴      ▴
F20   ▴     ▴      ▴     ▴       ▴      ▴
algorithms for 100 dimensions. One can observe that our proposal achieved better results in most of the benchmark functions. Since we also performed simulations for 300 and 500 dimensions, and the difference between the algorithms is not very large in a few cases, we compared the ABeePSO and the other approaches by using a statistical test. We show the results for the Wilcoxon test with a significance level of 0.05. An up-triangle (▴) means that our approach is better, a down-triangle (▿) means that our approach is worse, and "–" means that there is no statistical difference. The test shows that our proposal is superior in most cases, except for F2 and F10 in 100 dimensions (see Table 4), F2, F15 and F19 in 300 dimensions (see Table 5), and F2, F9, F10, F12, F14, F15 and F19 in 500 dimensions (see Table 6).
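The Wilcoxon comparison used above can be reproduced with a small signed-rank routine (normal approximation, no tie correction). The function and the sample data below are illustrative; they are not the paper's raw results.

```python
import numpy as np
from math import erf, sqrt

def wilcoxon_signed_rank(x, y):
    """Two-sided Wilcoxon signed-rank test of paired samples x and y.
    Returns the W+ statistic and an approximate (normal) p-value."""
    d = np.asarray(x, float) - np.asarray(y, float)
    d = d[d != 0]                                  # drop zero differences
    n = len(d)
    ranks = np.abs(d).argsort().argsort() + 1.0    # ranks 1..n of |d| (ties ignored)
    w_plus = ranks[d > 0].sum()                    # sum of ranks of positive differences
    mu = n * (n + 1) / 4.0                         # mean of W+ under H0
    sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24.0) # std of W+ under H0
    z = (w_plus - mu) / sigma
    p = 2.0 * 0.5 * (1.0 + erf(-abs(z) / sqrt(2.0)))  # 2 * Phi(-|z|)
    return w_plus, p

# Hypothetical final-fitness values over 30 paired runs (lower is better)
rng = np.random.default_rng(3)
runs_a = rng.normal(0.0, 1.0, 30)
runs_b = runs_a + 5.0                              # algorithm B clearly worse
w, p = wilcoxon_signed_rank(runs_a, runs_b)
```

At the 0.05 level, p < 0.05 with a negative median difference would be marked ▴ (our approach better), p < 0.05 with a positive median difference ▿, and p ≥ 0.05 "–".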
6. Discussion and conclusions In this paper we present a proposal to include the diversity generation capability of the ABC in an adaptive Particle Swarm
Optimization approach. We believe that this type of analysis is important, since all swarm intelligence algorithms have some kind of weakness. The most used swarm intelligence algorithm for continuous optimization is the PSO, but it is well known that the PSO loses population diversity very quickly and, as a consequence, cannot properly tackle dynamic or multimodal problems. Many efforts have been made to overcome this weakness, but the previous approaches are either too complicated for a non-specialist or use mechanisms that deploy operators imposing a loss of the information acquired along the search process (for example, mutation).

From our results, we observed that the analyzed parameters mainly affect the optimization process in multimodal functions. Our proposal outperformed well-known previous approaches, such as the PSO, ABC and APSO, in most benchmark functions; a statistical test showed that our algorithm achieved better results than these approaches in most of the benchmark functions. On the other hand, we have included extra parameters in the algorithm to trigger the diversity mechanism. Although this is not ideal, since the optimum values for the parameters can vary depending on the type of optimization function, we have some evidence that the number of extra parameters can be reduced in future work.
References

[1] R. Akbari, A. Mohammadi, K. Ziarati, A powerful bee swarm optimization algorithm, in: IEEE 13th International Multitopic Conference, INMIC'09, 2009, pp. 1–6.
[2] R. Akbari, A. Mohammadi, K. Ziarati, A novel bee swarm optimization algorithm for numerical function optimization, Communications in Nonlinear Science and Numerical Simulation 15 (2010) 3142–3155.
[3] C.J.A. Bastos-Filho, F.B. Lima-Neto, A.J.C.C. Lins, A.I.S. Nascimento, M.P. Lima, A novel search algorithm based on fish school behavior, in: IEEE International Conference on Systems, Man, and Cybernetics, 2008, pp. 2646–2651.
[4] C.J.A. Bastos-Filho, F.B.L. Neto, M.F.C. Sousa, M.R. Pontes, S.S. Madeiro, On the influence of the swimming operators in the fish school search algorithm, in: 2009 IEEE International Conference on Systems, Man and Cybernetics, 2009, pp. 5012–5017.
[5] B. Basturk, D. Karaboga, An artificial bee colony (ABC) algorithm for numeric function optimization, in: IEEE Swarm Intelligence Symposium, 2006, pp. 12–14.
[6] T.M. Blackwell, Dynamic search with charged swarms, in: Genetic and Evolutionary Computation Conference, 2002, pp. 19–26.
[7] M. Clerc, J. Kennedy, The particle swarm—explosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evol. Comput. 6 (2002) 58–73.
[8] M. Dorigo, C. Blum, Ant colony optimization theory: a survey, Theoretical Computer Science 344 (2005) 243–278.
[9] M. Dorigo, G.D. Caro, Ant colony optimization: a new meta-heuristic, in: Proceedings of the Congress on Evolutionary Computation, IEEE Press, 1999, pp. 1470–1477.
[10] R. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: MHS'95, Proceedings of the Sixth International Symposium on Micro Machine and Human Science, 1995, pp. 39–43.
[11] R. Eberhart, J. Kennedy, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, IEEE Service Center, 1995, pp. 1942–1948.
[12] J. Garcia-Nieto, E. Alba, Restart particle swarm optimization with velocity modulation: a scalability test, Soft Comput. 15 (2011) 2221–2232.
[13] D. Karaboga, An Idea Based on Honey Bee Swarm for Numerical Optimization, Technical Report, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
[14] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Appl. Math. Comput. 214 (2009) 108–132.
[15] D. Karaboga, B. Akay, A survey: algorithms simulating bee swarm intelligence, Artificial Intelligence Review 31 (2009) 61–85.
[16] J. Kennedy, Bare bones particle swarms, in: IEEE Swarm Intelligence Symposium (SIS'2003), 2003, pp. 80–87.
[17] D.T. Pham, A. Ghanbarzadeh, E. Koç, S. Otri, S. Rahim, M. Zaidi, The Bees Algorithm, Technical Report, Manufacturing Engineering Center, Cardiff University, Cardiff CF24 3AA, UK, 2005.
[18] D.T. Pham, S. Otri, A. Afify, M. Mahmuddin, H. Al-Jabbouli, Data clustering using the bees algorithm, in: The 40th CIRP International Manufacturing Systems Seminar, Liverpool, 2007.
[19] M. Pontes, F. Neto, C. Bastos-Filho, Adaptive clan particle swarm optimization, in: 2011 IEEE Symposium on Swarm Intelligence (SIS), 2011, pp. 1–6.
[20] T.J. Su, M.Y. Huang, Y.J. Sun, An adaptive particle swarm optimization for the coverage of wireless sensor network, in: Communications in Computer and Information Science, vol. 218, Springer-Verlag, Berlin, Heidelberg, 2011.
[21] K. Tang, X. Li, P.N. Suganthan, Z. Yang, T. Weise, Benchmark Functions of the CEC'2010 Special Session and Competition on Large Scale Global Optimization, Technical Report, Nature Inspired Computation and Applications Laboratory (NICAL), School of Computer Science and Technology, University of Science and Technology of China (USTC), China, 2009.
[22] D. Teodorovic, Bee colony optimization: principles and applications, in: 8th Seminar on Neural Network Applications in Electrical Engineering, 2009, pp. 151–156.
[23] D. Teodorovic, M. Dell'Orco, Bee colony optimization: a cooperative learning approach to complex transportation problems, in: Proceedings of the 16th Mini-EURO Conference on Advanced OR and AI Methods in Transportation, 2005, pp. 51–60.
[24] Z. Zhan, J. Zhang, Y. Li, H. Chung, Adaptive particle swarm optimization, IEEE Trans. Syst. Man Cybern. B Cybern. 39 (2009) 1362–1381.
[25] K. Ziarati, R. Akbari, V. Zeighami, On the performance of bee algorithms for resource-constrained project scheduling problem, Appl. Soft Comput. 11 (2011) 3720–3733.
Lumadaiara do Nascimento Vitorino received the B.Sc. degree in Computing Engineering from the University of Pernambuco (UPE) in 2010. She is an M.Sc. candidate in Computing Engineering. Her interests are related to swarm intelligence and evolutionary computation.
Sergio Ferreira Ribeiro is an undergraduate student in Computer Engineering at the Polytechnic School of the University of Pernambuco. His interests are related to swarm intelligence and evolutionary computation.
Carmelo J.A. Bastos-Filho was born in Recife, Brazil, in 1978. He received the B.Sc. degree in Electronics Engineering from the Federal University of Pernambuco (UFPE) in 2000, and the M.Sc. and Ph.D. degrees in Electrical Engineering from the same university in 2003 and 2005, respectively. In 2006, he received the best Brazilian Thesis award in Electrical Engineering. His interests are related to optical networks, swarm intelligence, evolutionary computation, multi-objective optimization and biomedical applications. He is currently an associate professor at the Polytechnic School of the University of Pernambuco, where he heads the research division and coordinates the Masters course on Systems Engineering.