Accepted Manuscript Title: A hybrid optimizer based on firefly algorithm and particle swarm optimization algorithm Author: Xuewen Xia Ling Gui Guoliang He Chengwang Xie Bo Wei Ying Xing Ruifeng Wu Yichao Tang PII: DOI: Reference:
S1877-7503(17)30353-8 http://dx.doi.org/doi:10.1016/j.jocs.2017.07.009 JOCS 724
To appear in: Received date: Revised date: Accepted date:
31-3-2017 16-5-2017 11-7-2017
Please cite this article as: Xuewen Xia, Ling Gui, Guoliang He, Chengwang Xie, Bo Wei, Ying Xing, Ruifeng Wu, Yichao Tang, A hybrid optimizer based on firefly algorithm and particle swarm optimization algorithm, (2017), http://dx.doi.org/10.1016/j.jocs.2017.07.009 This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
ip t
A hybrid optimizer based on firefly algorithm and particle swarm optimization algorithm Xuewen Xia*a,b , Ling Guic , Guoliang Hed , Chengwang Xiea,b , Bo Weia,b , Ying Xinga,b , Ruifeng Wua,b , Yichao Tanga,b a School
cr
of Software, East China Jiaotong University, Jiangxi 330013, China Optimization and Information Processing Lab., East China Jiaotong University, Jiangxi 330013, China c School of Economics and Management, East China Jiaotong University, Jiangxi 330013, China d School of Computer, Wuhan University, Hubei 430079, China
an
us
b Intelligent
Abstract
M
As two widely used evolutionary algorithms, particle swarm optimization (PSO) and firefly algorithm (FA) have been successfully applied to diverse difficult applications. And extensive experiments verify their own merits and character-
d
istics. To efficiently utilize different advantages of PSO and FA, three novel operators are proposed in a hybrid optimizer based on the two algorithms,
te
named as FAPSO in this paper. Firstly, the population of FAPSO is divided into two sub-populations selecting FA and PSO as their basic algorithm to
Ac ce p
carry out the optimization process, respectively. To exchange the information of the two sub-populations and then efficiently utilize the merits of PSO and FA, the sub-populations share their own optimal solutions while they have stagnated more than a predefined threshold. Secondly, each dimension of the search space is divided into many small-sized sub-regions, based on which much historical knowledge is recorded to help the current best solution to carry out a detecting operator. The purposeful detecting operator enables the population to find a more promising sub-region, and then jumps out of a possible local optimum. Lastly, a classical local search strategy, i.e., BFGS Quasi-Newton method, is introduced to improve the exploitative capability of FAPSO. Exten∗ Corresponding
author
Preprint submitted to Journal of Computational Science
May 16, 2017
Page 1 of 37
sive simulations upon different functions demonstrate that FAPSO is not only outperforms the two basic algorithm, i.e., FA and PSO, but also surpasses some
ip t
state-of-the-art variants of FA and PSO, as well as two hybrid algorithms.
Keywords: Firefly algorithm, Particle swarm optimization, Knowledge-based
cr
detecting, Local search operator
us
1. Introduction
In recent years, many real-world problems become extremely complex and
an
are difficult solved by conventional algorithms. Thus, non-deterministic algorithms and heuristic algorithms play more and more important roles in various 5
applications [10, 29, 30, 44]. As a type of heuristic algorithm, evolutionary
M
algorithms (EAs) have shown very favorable performance on non-convex and non-differentiable problems, and various EAs have been developed and applied to diverse difficult real-life problems during the last few decades.
10
d
Firefly algorithm (FA) [40] and particle swarm optimization (PSO) [22] are two widely used evolutionary algorithms inspired by some social behaviors of eu-
te
social organisms. Through simple interaction among individuals, the entire population can manifest very high intelligence when optimizing a problem. Aming
Ac ce p
to further improve the performance of them and broaden their application fields, many strategies are proposed during the last decades, such as adjusting param-
15
eters [2, 3, 25, 38, 43, 45] and enriching learning models [14, 20, 21, 39].However, considering that different optimizers have their own merits and characteristics which are suitable for different problems, many researchers pay much attention on hybridizing of different EAs to deal with real-world problems involving complexity, noise, imprecision, uncertainty, and vagueness [11, 33, 36].
20
In the research field of EAs, hybridization refers to merging different opti-
mization techniques into a single framework. Through the synergistic mechanism, a hybrid algorithm could take advantage of various merits within different algorithms, and then yields more favorable performance than a single algorithm. Some preliminary research manifests that hybrid optimizers are effective and
2
Page 2 of 37
25
competent for global optimization [6, 7, 13]. Inspired by these researches, we proposed a hybrid evolutionary algorithm
ip t
based on FA and PSO. In the hybrid optimizer, which is called FAPSO in
this paper, there are three modules proposed to enhance its comprehensive
30
cr
performance. The first module is parallel-evolving module, in which an entire population is divided into two sub-populations parallel evolved by FA and PSO,
us
respectively. To take advantage of different merits of PSO and FA, the two subpopulations share their own optimal solutions while they have ceased improve more than a predefined threshold. The second one is detecting module in which
35
an
a purposeful detecting operator is adopted to help the best individual of the population to jump out of local optimum solutions. The last module is local search module, in which the BFGS Quasi-Newton method is applied to improve
M
solutions’ accuracy.
The rest of this paper is organized as follows. In section 2, a brief introduction on FA and PSO is provided. The details of FAPSO are demonstrated in Section 3. Experimental setups, including details of benchmark function-
d
40
te
s and peer algorithms, are introduced in Section 4. Section 5 experimentally compares FAPSO with other 12 peer algorithms using 30 benchmark functions.
Ac ce p
Moreover, the efficiency and effectiveness of the modules involved in FAPSO are also discussed in this section. Finally, Section 6 concludes this paper.
45
2. A brief introduction on FA and PSO 2.1. Firefly Algorithm (FA) Firefly algorithm (FA) inspired by the social behavior of fireflies flying in
the tropical and temperate summer sky was proposed by Yang in 2009 [40]. In FA, a firefly’s brightness I depends on its position X which is regarded as a
50
potential solution. And the trajectory of the swarm can be characterized as a search process. During the optimization process of FA, a firefly moves towards a brighter one not only depending on I of the brighter firefly but also relying on the distance r between the two fireflies.
3
Page 3 of 37
In the canonical FA, I of a firefly is determined by X which is proportional to the value of objective function I(X) ∝ f (X). In addition, inspired by the
ip t
phenomenon that the brightness is always absorbed in the light propagation media, I in FA decreases with the distance r from its source. A widely accepted
I(r) = I0 · e−γr
cr
update form of I is defined as (1). 2
(1)
55
us
where I0 denotes the light intensity of the light source, and γ is the light absorption coefficient of the propagation media.
Accordingly, a firefly’s attractiveness β, which is proportional to I, can be
an
described as (2).
2
β(r) = (β0 − βmin ) · e−γr + βmin
(2)
where β0 is the attractiveness at r=0, generally takes β0 =1; βmin is the mini-
M
mum attractiveness.
The distance between any pair of fireflies, whose positions are denoted as Xi and Xj , respectively, can be represented by the Euclidean distance as (3).
d
te
rij
v u d u∑ = ||Xi − Xj ||= t (xik − xjk )2
(3)
k=1
where xik and xjk are the k th component of the spatial coordinate Xi and Xj ,
Ac ce p
respectively.
Based on the definition introduced above, the movement of firefly Xi at-
tracted by anther brighter firefly Xj can be described as (4). 2
Xi = Xi + ((β0 − βmin ) · e−γrij + βmin ) · (Xj − Xi ) + α · (rnd − 0.5)
60
(4)
where α is the parameter deciding the size of the random walk, and rnd is a random number uniformly distributed in [0, 1]. The pseudo code of FA is detailed in Algorithm 1.
2.2. Particle Swarm Optimization (PSO) Particle swarm optimization algorithm (PSO) is a widely known swarm intelligence algorithm proposed by Kennedy and Eberhart in 1995 [22, 31]. During the optimizing process for a specific problem with D dimension variables, the 4
Page 4 of 37
Algorithm 1. FA Begin
ip t
01: Generate initial population of fireflies Xi (i = 1, ..., N ); 02: Initialize parameters: α, γ, βmin , t = 0, and f es = 0; 03: Brightness Ii at Xi is determined by f (Xi ); 04: Define light absorption coefficient γ; 05: While (not meet the stop conditions)
08:
For i=1:N all N fireflies
cr
07:
For j =1:N all N fireflies If Ij > Ii Then
09:
Move firefly i towards j in all dimension according to Equation (4);
us
06:
10:
End If
11:
Attractiveness varies with distance according to Equation (2);
13: 14:
Evaluate the new solution and update its brightness; f es=f es+1; End For
an
12:
End For
15: Rank the fireflies and find the current best; 16: t=t+1;
18: Post process results. End
M
17: End While
d
ith particle has a velocity vector and a position vector represented as Vi = [vi1 ,
te
vi2 , ..., viD ] and Xi = [xi1 , xi2 , ..., xiD ], respectively. The vector Xi is regarded as a candidate solution of the problem while the vector Vi is treated as the par-
Ac ce p
ticle’s search direction and step size. During the process of optimization, each particle decides its trajectory according to its personal historical best position Pbi = [pbi1 , pbi2 , ..., pbiD ] and the global best-so-far position Gb = [gb1 , gb2 , ..., gbD ]. In the canonical PSO, the update rules of Vi and Xi are defined as (5) and (6), respectively.
t+1 t vij = ω · vij + c1 · rnd1 · (pbtij − xtij ) + c2 · rnd2 · (gbtj − xtij )
(5)
t+1 t xt+1 ij = xij + vij
(6)
where ω represents an inertia weight indicating how much the previous velocity 65
is preserved; c1 and c2 are known as two acceleration coefficients determining relative learning weights for Pbi and Gb, which called “self-cognitive” and
5
Page 5 of 37
“social-learning”, respectively; rnd1 and rnd2 are two random numbers uniformly distributed over [0, 1].
ip t
The pseudo-code of PSO is detailed as Algorithm 2. Algorithm 2. PSO
01: Initialize parameters: w = 0.9, c1 = c2 = 2.0, t = 0, f es = 0;
cr
Begin
02: Generate initial population’s positions Xi (i = 1, ..., N ) and velocities Vi (i = 1, ..., N ); 03: Evaluate all Xi ; f es = f es + N ;
us
04: Initialize Pbi and Gb according to the evaluation results; 05: While (not meet the stop conditions) 06:
For i=1:N all N particles
Update Vi and Xi according to Equation (5) and (6), respectively;
08:
If f it(Xi ) < f it(Pbi )
09:
Pbi = Xi ; f it(Pbi ) = f it(Xi );
10:
If f it(Xi ) < f it(Gb)
13: 14:
Gb = Xi ; f it(Gb) = f it(Xi ); End If
M
12:
End If f es = f es + 1;
15:
End For
16:
t = t + 1;
17:
w = 0.9 − 0.5 · (t/T );
te
18: End While
d
11:
an
07:
19: Post process results. End
Ac ce p
* f it(X) is the fitness of individual X. For a minimum problem, a smaller f it(X) is better, the same below.
70
2.3. Modifications of FA and PSO To further expand their application scopes, various modifications are intro-
duced to improve the comprehensive performance of the canonical FA and PSO. According to the different objects to be dealt with, these improvements can be generally categorized into the following cases.
75
1) Regulation of parameters. Although there are many parameters in the
canonical FA and PSO, it is not to say that the values are the optimal choices. For example, a larger α in FA is beneficial for exploration capability while a smaller α tends to facilitate exploitation ability [38]. Similarly, Many works indicate that a larger w facilitates global search capability for PSO while a
6
Page 6 of 37
80
smaller one is beneficial for its local search ability. Thus, tuning parameters has attracted much attention during the last years.
ip t
For example, α and γ that given by constant values in the canonical FA are replaced by different chaotic maps in [3, 25] aiming to help population to escape
85
cr
from local optima more easily. In addition, the fuzzy technology [26] and the
Levy flights operator [41] are also introduced to regulate the parameters with
us
the aim of gaining trade-off between exploration and exploitation capability. Furthermore, using light intensity difference (LFA) to adjust α and β0 is introduced in [5]. The fundamental though of LFA is that β of a firefly is not only
90
an
determined by its I and r to another firefly, but also relied on the light intensity difference between the two fireflies.
In PSO, the most ubiquitous update rule of w, introduced by Shi and Eber-
M
hart [43] in 1998, is linearly decreasing from 0.9 to 0.4 during the evolutionary process. Motivated by the time-varying w, Ratnaweera et al. further advocated a self-organizing hierarchical PSO (H-PSO) with time-varying c1 and c2 in [4]. However, these linearly adjustments do not truly reflect the actual
d
95
te
searching process, which is nonlinear and complicated. Thus, some nonlinearlyvarying strategies [2, 45] are proposed. Many simulation results illustrate that
Ac ce p
nonlinearly-varying adjustment strategies [2, 43, 45] improve the performance of PSO to some extent.Although these iteration-based tuning strategies can
100
improve PSO’s performance in various degrees, they may run into a risk of inappropriate regulating parameters since the information of evolutionary state is not appropriately utilized. To layout a more satisfactory adjustment, Zhan [46] takes advantage of population’s distribution and particles’ fitness to carry out an evolutionary state estimation (ESE), based on which an adaptive strategy
105
for tuning w, c1 and c2 is proposed. 2) Adjustment of search models. Although the search models in FA and PSO are easily implemented, they may cause the algorithms are vulnerable to premature convergence when optimizing complex multimodal problems. Thus, many researchers propose various strategies to enrich search behaviors of the
110
algorithms. 7
Page 7 of 37
Although much research suggests that different leaning model have various characteristics [23, 45], it is very difficult for us to fix a proper model for a black-
ip t
box problem. So, quite a few dynamical leaning model have been proposed in recent years [20, 21, 39]. For example, in [39], Veeramachaneni proposed a
fitness-distance-ration-based PSO (FDR) in which Euclidian distance and fit-
cr
115
ness are deemed as criteria for selecting exemplars for a specific particle. Liang
us
[21] proposed a comprehensive learning particle swarm optimization (CLPSO), in which a particle chooses different particles’ historical best information to update its velocity. The new strategy makes particles have more exemplars to learn from and a larger potential space to fly through, which give CLPSO a promising
an
120
performance on multimodal problems. Moreover, inspired by the niche technology, many researchers propose multi-swarm strategies [12, 20] to keep the popu-
M
lation diversity which is crucial for optimizing multimodal problems. Based on this mechanism, each sub-group periodically exchanges information with other 125
sub-groups in the optimizing process. These dynamic adjustment techniques not
d
only keep the diversity of population but also enable much useful information
te
obtained by a sub-swarm to be periodically exchanged with other sub-swarms. In addition, to endow population with more intelligence to deal with differ-
Ac ce p
ent complex situations, a self-learning particle swarm optimizer (SLPSO) was 130
proposed in 2012 [8]. In SLPSO, each particle has four candidate learning models (i.e., population topologies), and each individual can adopt a proper model based on its local fitness landscape to carry out a more efficient search at each generation.
3) Hybridization strategy. Since different intelligence algorithms have their
135
own merits, it seems natural to think of integrating different algorithms or strategies to deal with complex problems. For example, three genetic operators, i.e., selection [38], crossover and mutation operators [9, 15], are integrated into FA and PSO to improve their performance. Afterwards, some scholars began to pour attention into integrating various EAs into a single framework with diverse
140
design ideas [6, 7, 24, 37]. For instance, Mohammad [24] presented a new hybrid optimization algorith8
Page 8 of 37
m by integrating FA with the sequential quadratic programming (SQP) for optimum design of reinforced concrete foundation. The novel algorithm combines
145
ip t
the global exploration ability of the FA and the accurate local exploitation abili-
ty of SQP to offset the contradiction between the explorative capability and the
cr
exploitative capability. In [1], a hybrid evolutionary firefly algorithm (HEFA) was proposed, which combines the standard FA with differential evolution (DE)
us
algorithm to improve the search accuracy.
Recently, hybridizing local search strategies with PSO and FA has attracted 150
more attentions, and the experimental results verify the promising comprehen-
an
sive performance, including faster convergence speed and higher solution quality, of the hybrid algorithms [8, 35, 47]. Meanwhile, detecting strategies [42] and opposition-based learning (OBL) strategies [14, 16, 34] are also introduced to
155
M
help the population to jump out of locally optimal solutions. Extensive experimental results manifest that the comprehensive performance of a hybrid algorithm can be dramatically improved if various merits within
te
nism.
d
different algorithms/strategies are fully utilized by a proper integration mecha-
Ac ce p
3. A hybrid algorithm based on FA and PSO 160
In FA, the movement of an individual is only depends on all other brighter
individuals. Thus, the historical knowledge of the individual does not effect it’s current search behavior. In this case, while the brightest individual is located in a local optimum, the population is easily trapped into premature convergence. However, this search behavior cause FA has a promising exploitative ability
165
on unimodal problems and simple multimodal problems. In contrast with FA, PSO selects much historical knowledge of particles to guide them to search for promising regions. Thus, PSO has more favorable explorative capability, which is more suitable for complicated problems then FA. Considering that FA and PSO have their own merits, we regard integrating FA and PSO with a proper
170
mechanism can design an outstanding hybrid optimizer.
9
Page 9 of 37
In this research, we propose a hybrid evolutionary algorithm called FAPSO in which the canonical PSO and FA are two basic algorithms. Furthermore, a
ip t
knowledge-based detecting operator and a local search operator are adopted to enhance its comprehensive performance.
In this section, we firstly describe the framework of FAPSO, and then details
cr
175
each components involved in it.
us
3.1. Framework of FAPSO
There are two main issues need to be deal when designing a hybrid optimiza-
180
an
tion algorithm. The one issue is how to integrate different evolutionary algorithms (EAs) into a single framework. In this research, the collaboration-based strategy is applied to cooperate FA with PSO, which means that the entire pop-
M
ulation is divided into two sub-populations selecting FA and PSO as their basic search algorithm, respectively. In each generation, the two sub-populations are parallel evolution not only for the high efficiency of parallel structure verified by many researches [17–19] but also for better diversity caused by the different
d
185
te
search model of the two independent populations. The other one issue is how to exchange information between the two sub-populations. In this work, elite individuals within a sub-population are shared by the other sub-population under
Ac ce p
certain conditions.
190
In addition, it is unavoidable that the population may fall into a local opti-
mum when optimizing a difficult multimodal function. To help the population jump out of a potential local optimum, a detecting operator is introduced in this research. Unlike mutation strategy and perturbation strategy, in which randomly disturbances are adopted, the detecting operator takes advantage of much
195
historical knowledge to help the population jump out of the local optimum. Although the aforementioned mechanisms are beneficial for keeping the pop-
ulation’s diversity and enhancing the explorative capability, they may reduce the optimizer’s exploitation ability. To compensate the defects, a local search strategy is proposed to improve solutions’ accuracy. 200
According to the above discussions, the framework of FAPSO is described 10
Page 10 of 37
as Fig. 1. Since basic algorithms applied in the two sub-populations are FA and PSO, which have been introduced in Section 2, the evolutionary processes of
ip t
the sub-populations are not illustrated in Fig. 1. The details of other modules in FAPSO are introduced in follows sections.
cr
6WDUW
an
1R
us
,QLWLDOL]DWLRQ
6XESRSXODWLRQ3RSI(YROYLQJEDVHGRQ)$
6XESRSXODWLRQ3RSS(YROYLQJEDVHGRQ362
0RGXOH$
M
([FKDQJLQJHOLWHNQRZOHGJHEHWZHHQ WZRVXESRSXODWLRQV
0RGXOH%
d
&DUULQJRXWWKHGHWHFWLQJRSHUDWRU
te
0RGXOH&
Ac ce p
&DUULQJRXWWKHORFDOVHDUFKRSHUDWRU
(QG
Figure 1: Flowchart of FAPSO algorithm.
205
3.2. Exchanging elite knowledge During the process of optimization, an entire population is divided into two
parallel evolutionary sub-populations named as P opf and P opp which choose FA and PSO as the basic algorithm, respectively. To share much useful knowledge, the two sub-populations exchange their own best solutions under certain
210
conditions (see Modul A in Fig. 1). Specifically, if the best solution BestF in P opf consecutively stagnates more than a predefined generation Stagsub , it 11
Page 11 of 37
will be replaced by the best solution BestP in P opp while BestP is better than BestF, and vice versa. The details of exchanging elite knowledge are described
ip t
as Algorithm 3.
Algorithm 3. Exchanging Elite(Stagf , BestF, Stagp , BestP, Stagsub )
cr
Begin
02:
BestF = BestP;
03:
f it(BestF) = f it(BestP);
04:
Stagf = 0;
us
01: If (Stagf > Stagsub ) && (f it(BestF) > f it(BestP)) Then
06:
BestP = BestF;
07:
f it(BestP) = f it(BestF);
08:
Stagp = 0;
an
05: ElseIf (Stagp > Stagsub ) && (f it(BestP) > f it(BestF)) Then
09: End If
3.3. Detecting operator
d
215
M
End
te
Although multi-population maintains the population’s diversity and overcomes premature convergence to some degree, it is unavoidable that the population may fall into a local optimum when optimizing a difficult multimodal
Ac ce p
problem. To solve the problem, a detecting operator (see Modul B in Fig.
220
1) is proposed in this research. Unlike mutation and perturbation strategy, in which randomly disturbances are adopted, the detecting operator takes advantage of much historical knowledge of the population to guide current global best solution (GBest) to search for a promising region. To easily collect historical knowledge and carry out the detecting operator,
the search space of each dimension is divided into many small-sized sub-regions. Based on the segmentation mechanism, we can easily determine which subregion that each dimension of GBest belongs to and which sub-region the elite individual will detect to. While taking the detecting operator, some historical information of the swarm is adopted to help GBest find out whether there are more promising positions within other sub-regions. In this research, each 12
Page 12 of 37
dimension has the same number of sub-regions. And these sub-regions satisfy
Rn ∪
sij = Si , sij
∩
ip t
the follow conditions: sik = Ø, j ̸= k
(7)
j=1
225
cr
where Si is the entire searching space of the ith dimension; Rn is the number
of sub-regions; sij and sik are the j th and the k th sub-region (1 ≤ j, k ≤ Rn) of
us
the ith dimension, respectively.
The easiest way to carry out the detecting operator is, like a random mutation operator, that GBest randomly selects a sub-region and then tests whether
230
an
there is a more promising solution. Although the random detecting strategy takes effect to some extent, we believe that it is a more effective way to
M
help GBest jump out of a local optimum that it selects an appropriate subregion rather than an arbitrary one to carry out the detecting operator. In this research, much useful information is selected to guide GBest to choose an
d
appropriate sub-region.
To obtain the useful knowledge, each individual’s historical best position
te
at each generation has been recorded. While GBest has stagnated more than StagGBest generation, statistical information of the population is used to guide it
Ac ce p
to perform a detecting operator. For simplicity, we only record how many times that all individuals’ historical best positions lie within a specific sub-region. The statistical information is described as (8). ST Aji = ST Aji + 1, if
235
pbXki
within
sji
(8)
where ST Aji (1 ≤ i ≤ D, 1 ≤ j ≤ Rn) is the times that the ith dimension value of all individuals’ historical best positions lie within the sub-region sji ; pbXki (1 ≤ i ≤ D, 1 ≤ k ≤ N ) is the ith dimension value of the k th individual’s historical best position; N , D and Rn are the population size, dimensionality of a problem, and the number of sub-regions, respectively.
240
It is worth to note that, for a fixed value of i, the sub-regions are categorized into three types: 1) superior sub-regions which have the largest value of ST Aji ,
13
Page 13 of 37
2) inferior sub-regions which have the smallest value of ST Aji , and 3) moderate sub-regions which have neither the largest value nor the smallest value of ST Aji .
245
ip t
To facilitate presentation, bestXi denotes the ith dimension value of the
GBest, and stai denotes the statistical result of ST Aji on the ith dimension.
cr
Correspondingly, the detecting operator on the ith dimension can be described as follows. If bestXi lies within a superior sub-region of stai , GBest will detect an
us
inferior sub-region on ith dimension and subsequently performs a replacement. Since the swarm, including GBest, has never or seldom reached the inferior 250
sub-region, so if there is a global optimum within the sub-region, it may be an
an
efficient operator that GBest detects the inferior sub-region to find out whether there is a neglected optimum solution. Even if there is no global optimum within the inferior sub-region, the operation wastes only one evaluation.
255
M
It is worth to note that a greedy strategy is adopted in the operator. Specifically, bestXi would be replaced by the detected position only if the new position improves the performance of the original GBest.
d
Considering statistical information in different periods may cause bestXi to
te
detect a same sub-region, which makes bestXi lose an opportunity to probe other undetected sub-regions, we thus adopt a tabu-based strategy for bestXi . For example, a sub-region will be set a taboo flag tabui,j =1 after bestXi detects
Ac ce p
260
the j th sub-region on the ith dimension. In this case, this sub-region is no longer detected by bestXi until tabui,j is reset to ‘0’. While all taboo flags tabui,j ( 1 ≤ j ≤ Rn) of the ith dimension have been set as ‘1’, these taboo flags will be removed at once. In other words, after bestXi carrying out the detecting
265
operator on all sub-regions, tabui,j (1 ≤ j ≤ Rn) will be reset to ‘0’; and new statistical information will be recorded. Based on the aforementioned discussion, the detecting procedure of the ith
dimension can be written as Algorithm 4. 3.4. Local search operator 270
The main objectives of the strategies introduced above are maintaining the population’s diversity and enhancing its explorative capability. To improve the 14
Page 14 of 37
Algorithm 4. Detecting(stai , GBest, f es, det times) Begin
02: tmpX = GBest;
ip t
01: Dividing stai into three categories: superior, inferior and moderate sub-regions; 03: If tmpXi lies within a superior sub-region Then /* tmpXi is the ith dimension value of tmpX */ Choosing an undetected inferior sub-region Ri and setting a detected flag on the sub-region; Random generating a value rnd within the sub-region Ri ;
06:
tmpXi = rnd; Evaluating the new tmpX; f es++;
07:
If f it(tmpX) < f it(GBest) Then
09:
GBest = tmpX; f it(GBest) < f it(tmpX); End If
us
08:
10: End If 11: If all sub-regions have been detected Then 12:
cr
04: 05:
Removing detected flags on all sub-regions;
an
13: End If
14: det times = det times+1; // Updating the times that detecting operator has been carried out
M
End
population’s expoitative ability, we adopt BFGS Quasi-Newton method as a local search operator (see Modul C in Fig. 1) at the later stage of evolution.
275
d
For simplicity, we divide the evolutionary process into two parts, i.e., initial stage and later stage according to the fitness evaluation times (f es) consumed by the
te
population. In this research, the evolutionary process is in “later stage” while f es is more than the half of maximum number of fitness evaluations (M axF Es).
Ac ce p
During the later stage of evolutionary, only GBest is selected to carry out
the local search process in every 20 times detecting operators. In each local
280
search operator, we assign [0.1*f es] fitness evaluations to BFGS Quasi-Newton method to refine GBest. The reason why we not assign a constant fitness evaluations to each search operator is that we believe the later the evolutionary stage is, the more fitness evaluations allocated to the operator is beneficial for improving solutions accuracy. According to the aforementioned, the local search
285
operator is detailed as Algorithm 5. To simplify, the function “f minunc” in Matlab 2009a is employed in this research to realize the BFGS Quasi-Newton method.
15
Page 15 of 37
Algorithm 5. Local searching(GBest,det times, f es, M axF Es) Begin
ip t
01: If (f es > M axF Es/2) && (mod(det times, 20)==0) Then 02:
Refining GBest by Quasi-Newton method using 0.05*f es fitness evaluations;
03:
f es = f es + 0.05*f es;
04: End If
cr
End
us
3.5. Pseudocode of FAPSO
Together with the aforementioned modules, FAPSO can be described as Algorithm 6.
an
290
4. Test functions and experimental setup
M
4.1. Benchmark Functions
In this research, we choose 30 benchmark functions, including basic unimodal problems (F1 -F4 ), modified unimodal problems (F5 -F8 ), basic multimodal problems (F9 -F20 ), and modified multimodal problems (F21 -F30 ), to testify FAPSO’s
d
295
performance on different environments. The basic information of the benchmark
te
functions are given in Table 1. The last column of it, abbreviated to “Acc. Err.”, is the predefined acceptable errors for the tested functions, the aim of which
Ac ce p
is to gauge whether a solution found by an algorithm is acceptable or not. Ac-
300
cording to the property of the functions, we choose 10−6 and 100 as the accepted errors for separable and non-separable functions, respectively. Due to the space limitation, the details of the functions can refer to literatures [8, 20, 28, 42, 46]. 4.2. Peer Algorithms
Twelve existing stochastic algorithms, including 5 FAs, 5 PSOs, and 2 hybrid
305
algorithms, are chosen to compare with FAPSO. The basic information and configurations of each peer algorithm detailed in Table 2 are exactly the same as that in the original papers. The characteristics of the peer algorithms can refer to the corresponding references.
16
Page 16 of 37
Algorithm 6. FAPSO Begin
ip t
01: Initializing a population with N individuals; 02: Dividing the population into two sub-populations: P opf and P opp ; 03: Initializing parameters: M axF Es, T , Stagsub ,Stagpop ;
04: Evaluating each xfi and xpi which are individuals in P opf and P opp , respectively;
07: While (f es < M axF Es && t < T ) 08:
t = t+1;
cr
05: f es = N , t = 1, Stagf = 0, Stagp = 0, Stagg = 0; 06: Updating BestF, BestP and GBest;
Updating individuals in P opf according to (??); Updating f es;
10:
If (f it(xfbest ) < f it(BestF) Then /* xfbest is the current best individual in P opf */
12:
BestF = xfbest ; f it(BestF) = f it(xfbest );
Stagf = 0; /* Stagf is the stagnation generations of firefly population */
13:
Else Stagf = Stagf +1;
14:
End If
an
11:
us
09:
15:
Updating individuals in P opp according to (5) and (6); Updating f es;
16:
If (f it(xpbest ) < f it(BestP) Then /* xpbest is the current best individual in P opp */
18:
BestP = xpbest ; f it(BestP) = f it(xpbest );
M
17:
Stagp = 0; /* Stagp is the stagnation generations of particle population */ Else Stagp = Stagp +1;
20:
End If
21:
TmpX={X| f it(X)=min(f it(BestF),f it(BestP))};
22:
If (f it(TmpX) < f it(GBest)) Then
24:
GBest = TmpX; f it(GBest) = f it(TmpX);
te
23:
d
19:
Stagg = 0; /* Stagg is the stagnation generations of the whole population */ Else Stagg = Stagg +1;
26:
End If
27:
Exchanging elite individuals between two populations according to Algorithm 1;
Ac ce p
25:
28:
Carrying out the detecting operator according to Algorithm 2;
29:
Carrying out the local-searching operator according to Algorithm 3;
30: End While End
4.3. Setup of Experiments
310
To fairly compare the performance among the 13 algorithms, each peer al-
gorithm carried out 30 independent runs on each test functions. The maximum number of fitness evaluations (M axF Es) in each run is set as 150 000. The population size setting is N = 30 for all the peer algorithms except HEFA and DEPSO in which the value is N =50 according to the original literatures.
17
Page 17 of 37
Table 1: Basic information of 30 benchmark functions. Function
D
Search space
Properties
F1
Sphere
30
[−100, 100]D
U/Sep
Acc. Err. 10−6
F2
Schwefel P2.22
30
[−10, 10]D
U/Sep
10−6
F3
Schwefel P1.2
30
[−100, 100]D
U/Non-Sep
102
F4
Schwefel P2.6 with Global optimum on Bounds
30
[−100, 100]D
U/Non-Sep
102
cr
ip t
No.
F5
Shifted Sphere
30
[−100, 100]
U/S/Sep
10−6
F6
Shifted Schwefel P1.2
30
[−100, 100]D
U/S/Non-Sep
102
F7
Shifted Schwefel P1.2 with Noise in fitness
30
[−100, 100]D
U/S/Non-Sep
102
F8
Shifted Rotated High Conditioned Elliptic
30
F12 F13
Rastrigin
30
Weierstrass
[−32, 32]
D
[−500, 500]
[−5.12, 5.12] [−0.5, 0.5]
D
M/Sep
10−6
M/Sep
10−6
M/Sep
10−6
D
M/Sep
10−6
M/Sep
10−6
D
30
102
D
D
[−5.12, 5.12]
U/S/R/Non-Sep
30
[−50, 50]
M/Sep
10−6
30
[−100, 100]D
M/Sep
10−6
30
[−100, 100]D
M/Non-Sep
102
30
[−30, 30]D
M/Non-Sep
102
30
[−600, 600]D
M/Non-Sep
102
Expanded Extended Griewank plus Rosenbrock
30
[−3, 1]D
M/Non-Sep
102
Schwefel P2.13
30
[−π, π]D
M/Non-Sep
102
[−5.12, 5.12]
D
M/S/Sep
10−6
30
[−5.12, 5.12]
D
M/S/Sep
10−6
D
F14
Penalized Salomon
F16
Pathological
F17
Rosenbrock
F18
Griewank
F19 F20
Shifted Rastrigin
Shifted Noncont Rastrigin
te
F22
30
Noncont Rastrigin
F15
F21
30
us
Schwefel
[−100, 100]D
an
F11
30
M
F10
Ackley
d
F9
D
30
Shifted Rosenbrock
30
[−100, 100]
M/S/Non-Sep
102
F24
Shifted Rotated Expanded Scaffer F6
30
[−100, 100]D
M/S/R/Non-Sep
102
F25
Shifted Rotated Griewank
30
[−600, 600]D
M/S/R/Non-Sep
102
30
[−32, 32]D
M/S/R/Non-Sep
102
Ac ce p
F23
without Bounds
F26
Shifted Rotated Ackley
with Global optimum on bounds
F27
Shifted Rotated Rastrigin
30
[−5.12, 5.12]D
M/S/R/Non-Sep
102
F28
Shifted Rotated Noncont Rastrigin
30
[−5.12, 5.12]D
M/S/R/Non-Sep
102
F29
Shifted Rotated Weierstrass
30
[−0.5, 0.5]D
M/S/R/Non-Sep
102
F30
Shifted Rotated Salomon
30
[−100, 100]D
M/S/R/Non-Sep
102
* In the fifth column, “S” and “R” represent the modified problems by adding shifting and rotating, respectively. “U”, “M”, “Sep”, and “Non-Sep” represent the unimodal, multimodal ,separable and non-separable, respectively.
315
5. Experimental results and analysis 5.1. Comparison on solution accuracy The comparison results between FAPSO and other 12 peer algorithms, in terms of mean value (Mean) and standard deviation (Std.Dev.) of the solutions,
18
Page 18 of 37
Table 2: Twelve peer algorithms Year
Parameters Settings
FA [40]
2009
α=0.2, γ=1, β0 =1
FAC [25]
2011
β0 =1, [µ1 , µ2 ]=4, [γ(1), α(1)] ∈ / 0, 0.25, 0.50, 0.75, 1.0
MFA [12]
2013
α=0.31, △α=0.98,γ=1, β0 =1, Number of sub-groups is 3
LFA [5]
2016
γ0 =4, η1 =0.3, η1 =0.1
OBLFA [34]
2015
α=0.2, γ=1, β0 =1, p=0.25
cr
ip t
Algorithm
1995
w:[0.4,0.9],c1 =c2 =2.0
2011
w:[0.4,0.9],c=2.0,G=5
SLPSO [8]
2012
w:[0.4,0.9],η=1.496,γ=0.01
SL-PSO [32]
2015
N = 100 + ⌊D/10⌋, α = 0.5, β = 0.01
HCLPSO [27]
2015
w:[0.4,0.9], c=1.49445
HEFA [1]
2012
α=0.2, γ=0.01, β0 =0.5, F =0.9, CR=0.9
DEPSO [6]
2010
LP =100, N =30, 50, 100 (D=10, 30, 50)
an
us
PSO [22] OLPSO [47]
320
algorithms is marked in bold.
M
are listed in Table 3, in which the best mean value on each problem among all
It can be seen from Table 3 that FAPSO yields most favorable performance
d
among the 13 algorithms on the 30 functions since the number of the best results achieved by FAPSO is 14 which is larger than the figures 5, 5, 3, and 3 obtained
325
te
by other 4 excellent peer algorithms, i.e., SLPSO, SL-PSO, HCLPSO, and MFA, respectively. The performance of LFA, PSO, HEFA, and DEPSO are the worst,
Ac ce p
in terms of the numbers of best mean results. 1) Unimodal problems (F1 -F8 ): Among the 8 unimodal problems, FAPSO
and SL-PSO achieve most promising performance since the numbers of best mean results obtained by them all are 3, followed by OBLFA, MFA, and SLPSO,
330
who obtain the best mean result on F1 , F2 , and F5 , respectively. It is worth to note that, among the 5 non-separable unimodal functions (i.e., F3 , F4 , F6 , F7 , and F8 ), FAPSO yields the best mean results on F3 , F6 , and F8 while SL-PSO offers the most favorable results on the other 2 non-separable problems (i.e., F4 and F7 ). From this perspective, FAPSO shows more pleasurable characteristics
335
than other peer algorithms on the difficult and non-separable problems. 2) Multimodal problems (F9 -F30 ): Among the 22 unimodal problems, FAPSO manifests the most outstanding performance on 11 out of the 22 functions,
19
Page 19 of 37
Table 3: Comparison results of solution accuracy.
MFA LFA OLBFA PSO OLPSO SLPSO SL-PSO HCLPSO HEFA DEPSO
F3
F4
1.68E-11±2.49E-11 5.75E+02±3.24E+02(+)
4.09E+03±6.53E+02 3.90E+03±6.93E+02(=)
6.90E-29±1.11E-28 6.26E-28±2.39E-28(=)
9.35E-06±3.28E-06(+) 8.48E-84±1.97E-84(+)
5.85E+02±1.74E+03(=) 1.90E-42±4.03E-43(=)
1.26E-05±5.58E-06(+) 1.46E-27±4.74E-28(=)
2.51E-05±3.77E-05(+)
2.43E+01±1.34E+01(+) 3.66E-02±6.81E-02(+)
4.71E-129±9.43E-130(=) 8.67E-30±1.96E-29(+)
1.67E+00±3.79E+00(+) 3.26E-13±1.79E-13(=)
3.02E-23±5.00E-23(+) 2.40E-17±1.31E-16(+)
1.12E-23±1.62E-23(=) 6.60E-35±5.10E-35(=)
1.31E-66±2.39E-66(+) 1.55E-21±3.26E-21(+)
1.85E-13±2.37E-13(=) 1.32E-02±4.03E-02(=)
2.80E-76±3.94E-77(+) 1.60E-26±6.56E-26(+)
2.89E-13±1.54E-12(=) F7 7.32E+03±2.58E+03 1.71E+04±4.96E+03(+)
FAC
1.44E-01±5.28E-02(+)
MFA
3.90E+03±1.51E+03(+) 6.66E+00±2.87E+00(+)
5.72E+04±1.99E+04(+) 1.46E+04±3.21E+03(+)
OLBFA PSO OLPSO SLPSO
HEFA DEPSO
1.77E+02±1.95E+02(+)
FAPSO
0.00E+00±0.00E+00 4.49E+01±9.41E+00(+)
FAC MFA LFA
2.22E+03±1.01E+03(−) 2.81E+03±7.97E+03(−)
PSO
OLPSO SLPSO
SL-PSO
HCLPSO HEFA
DEPSO
9.44E+03±1.94E+03(+) 4.99E+03±9.09E+02(+) 1.03E+04±1.90E+03(+)
5.95E+02±4.05E+02(+) 2.28E+03±5.80E+03(+)
4.24E+03±1.07E+03(=) 9.07E+03±3.98E+03(+)
5.18E+03±4.50E+03(+)
3.88E+04±8.30E+03(+) 1.62E+02±1.14E+02(+)
3.78E+03±1.70E+03(=) 8.58E+03±1.84E+03(+)
4.16E+01±3.13E+01(+) 0.00E+00±0.00E+00(−)
3.22E-04±2.99E-04(+) 9.03E-01±7.65E-01(+)
2.43E+03±6.11E+02(−) 3.46E+03±5.30E+02(−)
0.00E+00±0.00E+00(−) 1.09E-27±1.70E-27(=)
3.69E+02±4.34E+02(+) 3.71E-01±2.39E-01(+)
2.84E+03±1.47E+03(−) 3.49E+03±2.09E+03(=)
1.10E+03±1.10E+03(+)
3.02E+02±5.21E+02(−)
3.74E-04±1.71E-03(+) 6.06E-28±2.43E-28(=)
1.41E-27±1.05E-27(=)
F8
F9
F10
2.40E-03±5.70E-04 7.16E+06±3.37E+06(+)
4.86E-15±1.74E-15 3.09E-14±6.06E-15(=)
2.48E-11±6.44E-12 5.57E+03±8.29E+02(+)
1.89E+06±6.72E+05(+)
1.92E+01±2.02E-01(+)
5.43E+03±6.70E+02(+)
8.55E+06±3.44E+06(+) 2.58E+06±5.07E+05(+)
1.07E-15±1.66E-15(=) 1.35E+01±1.15E+00(+)
5.27E+03±7.20E+02(+) 7.55E+03±4.48E+02(+)
8.98E+06±4.43E+06(+) 1.31E+07±1.08E+07(+)
1.93E-01±5.06E-01(+) 4.64E-14±1.71E-14(=)
5.89E+03±1.02E+03(+) 2.92E+03±5.93E+02(+)
4.68E+07±1.60E+07(+) 4.07E+06±1.60E+06(+)
6.71E-11±2.65E-11(=) 3.71E-14±7.27E-15(=)
1.18E+01±3.61E+01(=) 3.41E-10±9.70E-10(=)
1.16E+06±4.10E+05(+) 1.99E+06±7.63E+05(+)
6.39E-15±1.45E-15(=) 8.63E-12±1.19E-11(=)
1.40E+03±3.41E+02(+) 5.13E+01±7.41E+01(+)
2.02E+06±3.87E+06(+) 4.81E+06±3.57E+06(+)
4.35E-01±6.19E-01(+) 2.98E-13±1.51E-12(=)
3.35E+03±8.03E+02(+) 4.68E+03±9.42E+02(+)
F12
F13
F14
F15
0.00E+00±0.00E+00 5.02E+01±1.37E+01(+)
3.80E-02±5.32E-02 1.28E+01±2.98E+00(+)
1.57E-32±0.00E+00 5.40E-32±2.81E-32(=)
3.36E-01±5.04E-02 5.37E-01±1.03E-01(+)
1.87E+02±3.12E+01(+) 4.75E+01±9.58E+00(+)
2.13E+02±2.94E+01(+) 5.96E+01±1.17E+01(+)
3.20E+01±7.28E+00(+) 3.68E-02±1.12E-01(+)
2.36E+01±2.83E+00(+) 9.06E-01±1.30E-01(+)
7.03E+01±1.73E+01(+) 5.27E+01±1.35E+01(+)
1.41E+02±3.51E+01(+) 6.25E+01±1.63E+01(+)
3.22E+01±4.55E+00(+) 0.00E+00±0.00E+00(−) 3.04E+01±1.95E+00(+)
1.68E+01±6.58E+00(+) 2.63E-01±7.04E-01(+)
1.55E+01±1.83E+00(+) 6.97E-01±1.54E-01(+)
3.63E+01±1.58E+01(+)
2.38E+01±1.61E+01(+) 2.00E-01±4.07E-01(+)
7.86E-29±1.93E-28(+)
5.22E+00±1.57E+00(+) 1.78E+00±6.10E-01(+)
Ac ce p
OLBFA
1.87E+04±7.42E+03(+) 4.71E+00±7.58E+00(−)
te
F11 FA
4.28E+03±5.23E+03(+) 4.25E+04±1.20E+04(+)
4.49E+04±1.19E+04(+) 3.70E+02±3.50E+02(+) 6.77E-04±9.40E-04(+) 1.16E+00±6.56E-01(+) 9.22E+02±2.21E+03(+)
SL-PSO HCLPSO
6.04E+04±1.52E+04(+) 1.73E+04±5.05E+03(+)
3.98E+03±1.27E+03(+) 3.47E+03±5.19E+03(+)
M
LFA
d
FA
8.24E-02±3.07E-02(+) 2.99E+02±1.41E+02(+) 7.36E-02±7.80E-02(+)
an
F6 1.00E-09±8.42E-10 4.25E+03±1.41E+03(+)
FAPSO
F5
ip t
FAC
F2 1.02E-17±1.43E-17 1.84E-02±3.93E-02(+)
cr
FA
F1 2.87E-127±1.76E-127 4.86E-129±7.96E-130(=)
us
FAPSO
0.00E+00±0.00E+00(=) 2.09E-13±1.14E-12(+)
1.11E+01±4.35E+00(+) 1.08E-01±3.79E-01(=)
5.06E-11±1.37E-10(=)
2.72E-14±4.52E-14(−) 8.53E-15±7.08E-15(−)
3.95E-25±6.94E-25(+) 1.57E-32±0.00E+00(=)
1.41E+01±5.01E+00(+) 8.62E-09±1.97E-08(+)
2.30E+01±3.64E+00(+) 6.67E-02±2.54E-01(=)
1.93E-03±1.06E-02(−) 1.44E-13±2.28E-13(−)
1.57E-32±0.00E+00(=) 5.42E-23±9.65E-23(+)
4.59E+01±1.18E+01(+) 4.07E+01±1.19E+01(+)
7.18E+01±3.90E+01(+) 2.69E+01±1.03E+01(+)
5.75E+00±2.33E+00(+) 6.59E-01±9.50E-01(+)
1.71E-01±3.92E-01(+) 2.02E-29±9.67E-29(+)
9.50E-01±1.80E-01(+) 3.60E-01±6.75E-02(=) 4.23E-01±7.28E-02(+) 7.90E-01±1.40E-01(+) 2.59E+00±1.37E+00(+)
followed by SLPSO, HCLPSO, SL-PSO, and MFA, the numbers of the best mean results achieved by which are 4, 3, 2, and 2, respectively. Moreover, we
340
also note that FAPSO demonstrates the best result on the non-separable multimodal functions because it yields 5 out of the 13 non-separable multimodal problems, followed by HCLPSO who achieves the best solutions on 3 non-separable problems (i.e., F24 , F27 , and F28 ). The comparison results indicate that FAPSO
20
Page 20 of 37
Table 3: (Continue) Comparison results of solution accuracy. F17
F18
F19
6.55E-11±1.99E-11 7.71E+01±8.56E+01(+)
1.74E-16±3.60E-16 3.12E-03±5.71E-03(+)
1.38E+00±2.59E-01 6.50E+00±2.05E+00(+)
1.88E+03±3.09E+03 3.33E+04±5.34E+04(+)
1.27E+01±3.57E-01(+) 1.12E+01±5.72E-01(+)
1.62E+02±3.33E+02(+) 3.55E+01±4.35E+01(+)
3.05E+01±1.41E+01(+)
1.38E+01±5.09E+00(+) 4.99E+00±1.13E+00(+)
6.82E+04±8.59E+04(+) 2.89E+04±1.73E+04(+)
1.13E+01±4.15E-01(+) 1.17E+01±7.41E-01(+)
1.64E+02±4.50E+02(+) 1.06E+02±1.76E+02(+)
3.44E+01±1.40E+01(+) 7.93E+00±3.41E+00(+)
1.57E+06±3.64E+05(+) 2.95E+04±1.75E+04(+)
7.90E+00±1.45E+00(+) 3.47E+00±4.26E-01(=)
6.24E+03±2.28E+04(+) 2.15E+01±2.25E+01(+)
2.55E+00±5.69E-01(−) 1.09E+01±1.14E+00(+)
3.25E+01±3.00E+01(+) 2.49E+01±1.49E+01(+)
0.00E+00±0.00E+00(−) 1.95E-02±2.08E-02(+)
2.92E+00±5.02E-01(+) 2.53E+00±4.77E-01(+) 1.08E+00±2.22E-01(−)
3.81E+04±1.89E+04(+) 6.29E+03±4.55E+03(+)
1.15E+01±7.25E+00(+) 2.66E+01±2.46E+01(+)
1.40E-03±3.22E-03(+) 7.77E-17±1.73E-16(=)
3.86E+00±9.48E-01(+) 1.65E+00±3.89E-01(+)
7.71E+03±7.45E+03(+) 6.49E+03±5.03E+03(+)
HEFA
4.32E+00±5.07E-01(+) 5.61E+00±1.08E+00(+)
DEPSO
8.20E+00±2.56E+00(+)
4.20E+01±3.28E+01(+)
4.92E-03±8.21E-03(+) 1.69E-02±1.82E-02(+)
9.81E+00±8.22E+00(+) 2.59E+00±8.90E-01(+)
4.36E+04±5.29E+04(+) 2.71E+04±2.40E+04(+)
F21
F22
F23
F24
F25
FAPSO
1.47E-12±6.16E-13 8.39E+01±1.89E+01(+)
1.66E-12±6.13E-13 1.25E+02±2.95E+01(+)
6.60E-08±1.41E-08 6.12E+09±6.61E+07(+)
1.27E+01±3.85E-01 1.29E+01±4.33E-01(+)
1.55E-02±1.62E-02 6.35E-03±8.34E-03(−)
3.13E+02±4.36E+01(+) 7.26E+01±1.55E+01(+)
3.19E+02±7.16E+01(+) 1.06E+02±1.92E+01(+)
5.97E+09±2.51E+07(+) 3.19E+09±7.06E+08(+)
1.42E+01±2.51E-01(+) 1.31E+01±4.28E-01(+)
8.72E+02±2.34E+02(+) 7.20E-01±5.24E-01(+)
2.16E+02±2.88E+01(+) 1.12E+02±2.98E+01(+)
3.34E+02±4.47E+01(+) 1.42E+02±3.65E+01(+)
1.64E+10±2.37E+09(+) 6.17E+09±7.67E+07(+)
1.28E+01±3.74E-01(=) 1.31E+01±5.50E-01(+)
2.20E-02±1.32E-02(+) 3.07E-02±4.18E-02(=)
6.58E+01±1.97E+01(+) 3.95E+00±3.85E+00(+)
3.66E+01±2.06E+01(+) 3.20E+00±1.40E+00(+)
7.11E+09±1.50E+09(+) 5.83E+09±0.00E+00(+)
1.23E+01±4.51E-01(−) 1.38E+01±1.95E-01(+)
2.77E+02±2.08E+02(+) 4.39E+00±3.38E+00(+)
5.83E+09±0.00E+00(+) 5.83E+09±1.43E+05(+)
1.31E+01±3.17E-01(+) 1.30E+01±2.05E-01(+)
1.55E-02±1.70E-02(=) 1.01E-02±1.25E-02(=)
5.83E+09±0.00E+00(+) 5.83E+09±2.38E+05(+)
1.22E+01±6.08E-01(−) 1.30E+01±3.13E-01(+)
1.90E-02±1.78E-02(=) 2.64E-01±4.11E-01(+) 4.42E+01±3.80E+01(+)
OLBFA PSO OLPSO SLPSO SL-PSO HCLPSO
FA FAC MFA LFA OLBFA PSO OLPSO SLPSO
4.74E-16±1.80E-15(=)
3.06E-11±9.37E-11(=)
SL-PSO
1.50E+01±4.75E+00(+) 9.95E-02±3.04E-01(=)
2.42E+01±5.76E+00(+) 2.33E-01±5.04E-01(+)
9.40E+01±2.32E+01(+) 6.23E+01±1.65E+01(+)
9.79E+01±2.48E+01(+) 4.25E+01±1.61E+01(+)
HEFA DEPSO
F26 FA FAC MFA LFA
2.00E+01±4.55E-02(=) 2.03E+01±4.81E-01(+)
PSO
OLPSO SLPSO
SL-PSO
2.10E+01±5.24E-02(+) 2.01E+01±3.62E-01(=) 2.09E+01±7.09E-02(+) 2.11E+01±6.62E-02(+) 2.01E+01±9.28E-02(+) 2.10E+01±5.56E-02(+)
HEFA
2.07E+01±9.72E-02(+) 2.04E+01±3.99E-01(+)
DEPSO
2.10E+01±6.44E-02(+)
HCLPSO
1.24E+01±3.88E-01(=)
F28
F29
F30
1.09E+02±3.88E+01 7.49E+01±1.99E+01(−)
1.57E+02±3.16E+01 1.02E+02±3.19E+01(−)
2.44E+01±3.00E+00 2.13E+01±3.69E+00(−)
3.23E-01±1.04E-01 5.33E-01±1.09E-01(+) 2.41E+01±2.65E+00(+) 8.66E-01±1.87E-01(+)
4.58E+02±1.14E+02(+)
5.94E+02±1.41E+02(+)
3.05E+01±3.87E+00(+)
6.76E+01±1.28E+01(−) 4.38E+02±6.82E+01(+)
9.66E+01±1.73E+01(−) 5.69E+02±8.01E+01(+)
2.29E+01±2.68E+00(−) 4.92E+01±2.09E+00(+)
1.10E+02±3.39E+01(=) 1.16E+02±3.71E+01(=)
1.41E+02±3.46E+01(=) 1.23E+02±3.15E+01(−)
2.17E+01±3.54E+00(−) 1.98E+01±2.85E+00(−)
1.68E+02±4.57E+01(+) 1.11E+02±3.73E+01(=)
1.84E+02±3.86E+01(+) 1.25E+02±2.98E+01(−)
2.74E+01±3.87E+00(+) 3.05E+01±3.01E+00(+)
1.64E+02±9.50E+00(+) 6.17E+01±1.14E+01(−)
1.32E+02±1.21E+01(−) 7.85E+01±2.64E+01(−)
6.34E+00±2.35E+00(−) 2.34E+01±2.50E+00(=)
1.46E+02±4.88E+01(+) 7.14E+01±4.05E+01(−)
1.52E+02±5.64E+01(=) 9.85E+01±4.65E+01(−)
2.63E+01±7.45E+00(=) 1.44E+01±3.37E+00(−)
Ac ce p
OLBFA
2.00E+01±5.44E-04 2.00E+01±5.56E-04(=)
5.96E+09±3.08E+08(+)
te
FAPSO
5.78E+04±5.75E+04(+)
F27
d
HCLPSO
5.01E-03±8.63E-03(+) 1.95E-02±2.14E-02(+)
cr
LFA
0.00E+00±0.00E+00(−) 1.55E-02±4.40E-03(+)
us
MFA
an
FAC
M
FA
F20
ip t
F16 3.36E+00±5.58E-01 1.07E+01±8.21E-01(+)
FAPSO
1.58E+01±2.23E+00(+) 7.30E-01±1.29E-01(+) 6.18E+00±1.71E+00(+) 1.63E+00±4.93E-01(+) 1.03E+00±2.45E-01(+) 3.34E-01±4.62E-02(=) 4.00E-01±6.43E-02(+) 9.00E-01±2.08E-01(+) 2.33E+00±1.54E+00(+)
has a pleasurable performance on not only separable multimodal functions but
345
also non-separable multimodal problems. It is worth to note that the canonical FA surpasses PSO on majority of the 8 unimodal problems. On the contrary, PSO shows more favorable performance on the complicated problems than FA. The results manifest PSO and FA have their own merits on different problems. Although PSO and FA are two basic
21
Page 21 of 37
350
components of FAPSO, the hybrid optimizer significantly outperforms PSO and FA on most of the test functions except slightly worsen than FA and PSO on
ip t
6 functions (i.e., F1 , F4 , F25 , F27 , F28 , and F29 ) and 5 functions (i.e., F7 , F24 ,
F28 , and F29 ), respectively, contributing to the mean solution accuracy. We
355
cr
believe that the superior characteristics of FAPSO is ascribable to the rational
integration of the two basic algorithms as well as the purposeful detecting and
us
local search operators. 5.2. Comparison on success rate and convergence speed
an
According to the acceptable error given for each function, we present the comparison results among the 13 peer algorithms in terms of convergence speed 360
and success rate. In Table 4, “FEs” indicates the average number of function
M
evaluations that an algorithm needed to reach the acceptable error while “SR%” denotes a success rate that an algorithm has reached a predefined acceptable error. The symbol “-” in the table stands for that an algorithm has never
error.
te
365
d
obtained a solution which is more accurate than the corresponding acceptable
1) Success rate (SR%): Table 4 demonstrates that FAPSO achieves 100% success rate on 23 out of the 30 benchmark functions, followed by SLPSO, H-
Ac ce p
CLPSO, and SL-PSO which yield 100% success rate on 20, 19, and 17 functions, respectively. Among the 8 unimodal functions, FAPSO and SL-PSO manifest
370
the same property, in terms of the number of functions solved (i.e., SR%=100). It is worth to note that FAPSO is the only algorithm yielding 100% success rate on F8 . On the contrary, SL-PSO solves F7 on which FAPSO never reach the predefined acceptable error in 30 runs. In addition, FAPSO also demonstrates the best performance among the 22 multimodal functions since there are only t-
375
wo functions that FAPSO never solved (i.e., SR%=0), followed by HCLPSO and SLPSO, the number of unsolved problems by which is 3. Moreover, among the non-separable functions including unimodal and multimodal functions, FAPSO and DEPSO manifest the best performance, in terms of the total number of problems that solved and partially solved. From this perspective, we regard 22
Page 22 of 37
Table 4: Mean function evaluations (FEs) in obtaining acceptable solutions
acceptable solutions.)
OBLFA
PSO
OLPSO
SLPSO
SL-PSO
HCLPSO
HEFA
DEPSO
F6
F7
F8
F9
f10
F11
F12
F13
F14
F15
11639
55462
-
61889
22793
57708
94459
91875
87340
21206
-
SR%
100
100
100
0
100
100
0
100
100
100
100
100
6.7
100
0
FEs
11246
17708
-
-
11255
-
-
-
16624
-
-
-
-
8609
-
SR%
100
20
0
0
100
0
0
0
100
0
0
0
0
100
0
-
-
-
-
-
-
-
-
-
-
-
0
0
0
0
0
0
0
0
-
25909
-
-
-
35952
14079
-
0
100
0
0
0
100
6.7
0
-
-
-
-
-
-
-
-
0
0
0
0
0
0
0
0
-
16628
-
-
-
-
8641
-
0
0
0
0
53.3
0
FEs
-
-
SR%
0
0
100
0
0
100
0
FEs
7813
26932
132910
-
17836
-
-
SR%
100
100
10
0
100
0
0
FEs
-
-
32508
-
-
105043
-
SR%
0
0
100
0
0
100
0
FEs
11260
17828
-
11230
-
-
SR%
100
13.3
6.7
0
100
FEs
103104
107670
117190
-
-
SR%
100
83.3
80
0
0
FAC
MFA
LFA
OBLFA
PSO
OLPSO
SLPSO
SL-PSO
HCLPSO
HEFA
DEPSO
0
0
86.7
-
-
-
117109
-
-
-
119790
99998
-
0
0
0
100
0
0
0
26.7
100
0
-
-
-
116760
95783
111460
124420
124370
81446
-
0
0
0
100
90
100
80
100
100
0
89694
107480
-
-
89312
SR%
100
100
0
0
3.3
FEs
51800
67605
104540
-
51649
110640
-
-
66923
79321
65609
75716
78159
42550
-
SR%
100
100
26.7
0
100
10
0
0
100
100
100
100
100
100
0
FEs
22404
33256
42406
-
SR%
100
100
100
0
FEs
79033
87294
73072
-
SR%
100
100
100
0
FEs
11312
71603
37922
-
11327
SR%
100
70
40
0
100
FEs
87805
92805
85711
SR%
100
FEs
30
SR%
100
100
100
22033
43407
72540
-
32838
-
-
-
48208
19448
-
100
100
100
0
100
0
0
0
96.7
100
0
49121
74266
-
-
100280
113110
127390
132460
119520
71989
-
100
100
0
0
100
63.3
100
93.3
100
100
0
50642
125190
-
16800
-
-
-
-
98344
-
80
30
0
60
0
0
0
0
60
0 -
-
80164
81967
-
101570
-
-
-
107130
81931
10
0
40
56.7
0
100
0
0
0
6.7
100
0
F17
F18
F19
F20
F21
F22
F23
F24
F25
F26
F27
F28
F29
F30
14499
139
101
59155
92480
92271
67989
30
1411
30
20091
-
30
30
100
100
100
40.0
100
100
100
100
100
100
53.3
0
100
100
-
-
30
881
30
2971
2541
30
30
100
100
100
93.3
46.7
100
100
-
-
Ac ce p FA
0
FEs
F16 FAPSO
cr
F5
-
us
LFA
F4
47410
an
MFA
F3
48423
M
FAC
F2
11677
d
FA
F1 FEs
te
FAPSO
ip t
and success rate (SR%) by tested algorithms (“-” indicates no trials reached
FEs
30
3632
81
63
-
-
SR%
100
73.3
100
100
0
0
0
0
-
-
-
-
FEs
-
SR%
100
73.3
100
100
0
0
0
0
100
0
0
100
100
FEs
30
6448
534
156
-
-
-
-
30
1908
0
100 30
4756
5350
30
30
SR%
100
96.7
100
100
0
0
0
0
100
100
100
96.7
53.3
100
100
FEs
30
128094
128
60
-
-
-
-
30
13445
30
-
-
30
30
SR%
100
73.3
100
100
0
0
0
0
100
100
100
0
0
100
100
FEs
30
3596
100
63
-
-
-
-
30
873
30
3250
2603
30
30
SR%
100
73.3
100
100
0
0
0
0
100
100
100
40
13.3
100
100
FEs
60
91299
376
135
-
-
-
-
60
19300
60
108755
109810
60
60
SR%
100
83.3
100
100
0
0
0
0
100
10
100
33.3
16.7
100
100
FEs
1020
76854
5262
2901
-
114080
-
-
1020
18207
1020
98599
66268
1020
1020
SR%
100
100
100
100
0
26.7
100
100
100
100
100
0
0
6.7
3.3
FEs
289
29579
367
237
-
61181
71368
-
300
1158
291
65921
85264
303
SR%
100
100
100
100
0
100
100
0
100
100
100
40
16.7
100
100
FEs
205
13961
1235
627
82519
-
-
-
205
2622
205
-
205
205
SR%
100
100
100
100
3.3
0
0
0
100
100
100
0
0
100
100
FEs
35
62734
3326
404
-
115260
121470
-
35
6581
35
39500
49358
35
35
SR%
100
100
100
100
0
90
80
0
100
100
100
100
76.7
100
100
FEs
60
18149
126
142
-
-
-
-
60
1456
60
4110
60
60
SR%
100
100
100
100
0
0
0
0
100
100
100
20
100
100
FEs
76
75668
439
395
-
-
-
-
77
6273
78
89199
109600
75
74
SR%
100
93.3
100
100
0
0
0
0
100
86.7
100
86.7
53.3
100
100
-
30
205
23
Page 23 of 37
380
hybridizing various algorithms may be a promising way for optimizing difficult problems.
ip t
2) Convergence speed : In this study, we only compare the convergence speed
among those algorithms who achieve 100% success rate on the test functions, in
385
cr
terms of the mean function evaluations (FEs). From the results we can see that FA variants yield higher convergence speed on the unimodal problems while
us
PSO variants have slower convergence speed. On the contrary, PSOs have more competitive performance on the difficult multimodal functions than FAs. Note that FAPSO achieves neither the best nor the worst performance in terms of
390
an
the convergence speed though it demonstrates the most favorable characteristics in terms of success rate. From the result we regard that if solution quality is the main concern, paying more fitness evaluations for the local search operator
M
is more desirable. On the contrary, if the number of function evaluations is restricted, less fitness evaluations for the local search operator is beneficial for keeping population diversity.
Comparing the results attained by FAPSO, FA, and PSO, we can find out
d
395
te
that FAPSO overpasses FA and PSO on the majority of test functions, in terms of the convergence speed and success ratio as well as the solution accuracy. We
Ac ce p
regard that the comprehensive performance of the hybrid optimizer relies on the various merits of PSO and FA. And the favorable results offered by FAPSO
400
manifest the strategies proposed in our search do take advantages of different merits in the different algorithms. 5.3. Statistical results
5.3.1. Best mean values
From the first row in Table 5, it can be seen that FAPSO yields the best per-
405
formance among the 13 algorithms. Note that three PSO variants, i.e., SLPSO, SL-PSO, and HCLPSO, manifest more favorable performance than the other peer algorithms in terms of the number of best mean results. The performances of PSO, HEFA, and DEPSO are the worst among all the peer algorithms where the figures are all zero. 24
Page 24 of 37
410
5.3.2. Success rate The second row in Table 5 demonstrates that FAPSO has the best perfor-
ip t
mance among the 13 algorithms in terms of the number of solved problems (#S).
SLPSO and HCLPSO achieves slightly worse results than FAPSO. Moreover,
415
cr
according to the number of never solved problems (#NS), FAPSO also obtains
the best result since there are only 4 problems it has never solved, followed by
us
SLPSO and HCLPSO. Although HCLPSO is slightly worsen than SLPSO in terms of (#S) and (#NS), it achieves a higher average success rate (Avg. SR%) than SLPSO. According to the values listed in the third row in Table 5 we
420
an
can draw a conservative conclusion that FAPSO has a promising comprehensive performance.
M
5.3.3. t-test
To investigate whether FAPSO is significantly better or worse than other twelve peer algorithms on the test functions, a two-tailed t-test operation was
“+”, “-” and “=” mean that FAPSO is significantly better than, significantly
te
425
d
carried out. The results of t-test are presented in Table 3, in which the symbol
worse than and almost the same as the corresponding competitor algorithm, respectively. The statistical information of t-test results is listed in Table 5 in
Ac ce p
which the symbol “#+”, “#=”, and “#-” denote the number that FAPSO is significantly better than, almost the same as, and significantly worse than the
430
corresponding competitor algorithm, respectively. It can be seen from the last row in Table 5 that FAPSO outperforms the
other 12 optimizers in terms of the t-test results. The number of functions where FAPSO yields significantly better results than the other algorithms is much larger than the number of functions where FAPSO is significantly worse
435
than the other competitors. It is also notable that the numbers of problems that FAPSO significantly outperforms FA and PSO are 20 and 24, respectively, though FA and PSO are the basic components of FAPSO. Consequently, we can draw a conservative conclusion that integrating different algorithms and strategies with proper
25
Page 25 of 37
Table 5: Statistical results of the 13 peer algorithms on the 30 problems FA
#BM
14
2
1
3
0
1
0
2
5
5
3
0
0
#S
23
12
9
13
10
10
10
15
20
17
19
11
12
#PS
3
4
1
5
1
7
7
6
4
2
5
8
8
#NS
4
14
20
12
19
13
13
9
6
11
6
11
10
52.1
35.8
42.9
44.4
57.0
69.8
60.0
76.8
49.7
54.4
80.0
47.8 32.4
cr
Avg. SR%
FAC MFA LFA OBLFA PSO OLPSO SLPSO SL-PSO HCLPSO HEFA DEPSO
ip t
FAPSO
-
20
28
22
29
22
24
22
16
18
16
24
22
#=
-
6
2
3
1
7
3
6
9
6
8
4
4
#-
-
4
0
5
0
1
3
6
2
4
us
#+
2
5
6
* Four different aspects of statistical results are proposed in this table, including the number of best mean values achieved (#BM), the number of solved (#S), partially solved (#PS), and never solved problems (#NS), the average success rate (Avg. SR%), and t-test over the 30 problems (α=0.05), where “#+”, “#=”, and “#-” represent the number of problems that the performance of FAPSO is
mechanisms might be a promising way to handle complex problems.
M
440
an
significant better than, almost the same as, and significant worse than the corresponding algorithm, respectively.
5.3.4. Friedman-test
The Friedman-test results of the 13 algorithms are listed in Table 6. In
d
addition, we also separately carry out Friedman-test on the unimodal and mul-
445
te
timodal functions, the results of which are also listed in Table 6. The results of the peer algorithms and their rankings are listed in ascending order (the lower the better). Furthermore, the statistics and the corresponding p values are
Ac ce p
shown at the bottom of the table.
It can be seen from Table 6 that FAPSO offers the best overall performance
on the 30 functions, while HCLPSO is the second best, followed by SL-PSO and
450
SLPSO. The Friedman-test results on the 8 unimodal functions demonstrates that SL-PSO yields more favorable performance than FAPSO. The reason of it may be that the detecting operator in FAPSO wastes a few of fitness evaluations since there is no local optimum within such problems. On the contrary, FAPSO offers the best performance on the 22 multimodal functions. The results verify
455
the effectiveness of the detecting operator on multimodal functions.
26
Page 26 of 37
Table 6: Friedman-test of all compared algorithms.
ip t
2.93 4.23 4.30 5.35 6.52 6.73 6.83 7.60 7.62 8.35 8.65 10.90 10.98
cr
FAPSO HCLPSO SL-PSO SLPSO MFA DEPSO FA OLPSO HEFA PSO OBLFA LFA FAC 140.808 0.000
Multimodal:F9 -F30 Algorithm Ranking FAPSO 2.82 HCLPSO 3.91 SLPSO 4.73 SL-PSO 5.07 MFA 6.30 FA 6.55 OLPSO 6.64 DEPSO 6.86 PSO 7.75 HEFA 8.20 OBLFA 8.84 LFA 11.45 FAC 11.89 125.296 0.000
us
Unimodal:F1 -F8 Algorithm Ranking SL-PSO 2.19 FAPSO 3.25 HCLPSO 5.13 HEFA 6.00 DEPSO 6.38 SLPSO 7.06 MFA 7.13 FA 7.63 OBLFA 8.13 FAC 8.50 LFA 9.38 PSO 10.00 OLPSO 10.25 37.599 0.000
Algorithm Ranking
an
Average Rank 1 2 3 4 5 6 7 8 9 10 11 12 13 Statistic p value
M
5.4. Efficiency of introduced modules
In FAPSO, there are three new introduced modules besides the two parallel evolution modules, i.e., exchanging elite knowledge module, detecting module
performance of FAPSO, a series of experiments was carried out in this subsection.
te
460
d
and local-searching module. To measure how the three components affect the
Ac ce p
To quantify the significance of each module, we choose 3 FAPSO variants, named as FAPSO-EM , FAPSO-DM , and FAPSO-LM , to compare with FAPSO. In FAPSO, if BestF consecutively stagnates more than Stagsub generations,
465
it is replaced by BestP while BestP is better than BestF, and vice versa. However, in FAPSO-EM , if BestF consecutively unchanged more than Stagsub iterations, the worst individual in P opf rather than BestF will be replaced by BestP if BestP is better than BestF, and vice versa. In FAPSO-DM and FAPSO-LM , the detecting operator and local search operator are removed from
470
FAPSO, respectively. The comparison results are listed in Table 7 in terms of the mean and the standard deviation of the solutions. Comparing the results between FAPSO and FAPSO-EM , we observe that FAPSO-EM yields more favorable results than FAPSO on 5 out of the 8 uni-
27
Page 27 of 37
Table 7: Comparison results of introduced modules’ efficiency FAPSO-EM
FAPSO-DM
2.87E-127±1.76E-127 3.27E-129±5.21E-130 8.41E-129±8.17E-130 1.53E+127±6.18E-128 1.02E-17±1.43E-17
7.33E-21±4.72E-21
1.06E-20±1.40E-20
F3
1.68E-11±2.49E-11
1.72E-11±2.61E-11
4.02E-11±4.13E-11
4.09E+03±6.53E+02 3.17E+03±5.68E+02 4.23E+03±6.41E+02
F5
6.90E-29±1.11E-28
5.79E-29±9.68E-30
3.74E-28±4.94E-28
F6
1.00E-09±8.42E-10
1.39E-09±7.73E-10
4.67E-09±1.28E-10
7.32E+03±2.58E+03 7.34E+03±2.44E+03 6.14E+03±2.86E+03
1.37E-01±5.43E-02
4.42E+03±9.11E+02 1.26E-28±1.10E-28
1.26E+00±1.06E+00 3.06E+03±9.85E+02
us
F7
4.36E-22±3.91E-22
cr
F2 F4
FAPSO-LM
ip t
FAPSO F1
2.40E-03±5.70E-04
3.69E-04±2.37E-04
4.81E-28±9.42E-28
5.55E-25±1.24E-24
F9
4.86E-15±1.74E-15
4.69E-15±2.17E-15
4.97E-15±1.95E-15
7.11E-15±0.00E+00
F10
2.48E-11±6.44E-12
4.71E-11±4.28E-12
5.03E+03±1.99E+02
1.59E+00±1.21E+00
F11 0.00E+00±0.00E+00 0.00E+00±0.00E+00 2.33E+01±5.78E+00
1.62E+00±1.64E+00
an
F8
1.06E+00±1.16E+00
F12 0.00E+00±0.00E+00 0.00E+00±0.00E+00 2.38E+01±8.93E+00 F13
3.80E-02±5.32E-02
1.51E-01±4.79E-02
F14
1.57E-32±0.00E+00
1.57E-32±0.00E+00
F15
3.36E-01±5.04E-02
4.52E-01±3.29E-01
F16
3.36E+00±5.58E-01
6.33E+00±5.72E-01
F17
6.55E-11±1.99E-11
F18
1.74E-16±3.60E-16
F19
1.38E+00±2.59E-01
4.25E-01±6.33E-01
3.33E-02±4.96E-02
1.57E-32±0.00E+00
1.57E-32±0.00E+00 3.40E-01±8.94E-01 4.35E+00±1.03E+00
7.83E-11±2.31E-11
1.59E+00±2.18E+00
3.70E+01±3.34E+01
5.27E-04±2.77E-05
0.00E+00±0.00E+00
4.44E-03±6.14E-03
1.43E+00±8.17E-01
4.39E+00±7.56E-01
1.74E+00±2.46E-01
F20 1.88E+03±3.09E+03 2.01E+03±3.02E+03 4.01E+03±5.71E+03
7.74E+03±6.55E+03
d
M
2.80E-01±1.64E-01
7.07E+00±8.33E-01
1.47E-12±6.16E-13
1.88E-12±8.27E-13
1.04E+02±1.38E+01
F22
1.66E-12±6.13E-13
1.63E-12±7.22E-13
1.06E+02±2.57E+01
9.40E-01±8.58E-01
F23
6.60E-08±1.41E-08
6.71E-08±2.33E-08
8.38E-08±4.92E-08
5.83E+09±0.00E+00
F24
1.27E+01±3.85E-01
1.25E+01±3.99E-01
1.24E+01±4.69E-01
1.22E+01±6.06E-01
F25
1.55E-02±1.62E-02
1.54E-02±1.69E-02
1.67E-02±1.65E-02
2.07E-02±1.72E-02
Ac ce p
te
F21
F26
2.00E+01±5.44E-04
4.00E+00±7.71E+00 4.00E+00±8.94E+00
5.31E-01±4.90E-01
1.86E-01±4.17E-02
F27 1.09E+02±3.88E+01 9.25E+01±1.77E+01 7.62E+01±2.62E+01
7.80E+01±1.34E+01
F28 1.57E+02±3.16E+01 1.47E+02±4.03E+01 1.28E+02±3.17E+01
1.60E+02±4.97E+01
F29 2.44E+01±3.00E+00 2.33E+01±1.84E+00 2.16E+01±1.72E+00
1.89E+01±1.84E+00
F30
3.23E-01±1.04E-01
3.23E-01±9.77E-02
3.20E-01±8.37E-02
4.00E-01±7.07E-02
modal functions, in term of mean solution values. Furthermore, FAPSO-EM
475
manifests almost the same performance as FAPSO on F3 and F7 . On the contrary, the number of multimodal problems that FAPSO-EM is better than, almost same as, and worse than FAPSO are 5, 8, and 9. From the characteristics demonstrated by FAPSO-EM on different functions, we can obtain a conservative conclusion that the exchanging elite module in FAPSO is more suitable for
480
multimodal problems for the population diversity been well kept.
28
Page 28 of 37
From Table 7, it can be observed that FAPSO-DM demonstrates better performances than FAPSO on 4 out of the 8 unimodal functions. The reason lies
ip t
behind the comparison results is that the detecting module of FAPSO wastes a
few fitness evaluations since there is no local optimum in the unimodal functions. On the contrary, the modified FAPSO only achieves more promising results
cr
485
than the original algorithm on 4 out of the 22 multimodal problems. Moreover,
us
the performances of FAPSO on 11 multimodal functions are deteriorated when the detecting operator is removed. This results verify the efficiency of the operator on keeping the population diversity which is beneficial for multimodal optimization.
an
490
On the other hand, it can be seen that FAPSO domimates FAPSO-LM on the majority multimodal functions (18 out of the 22 problems). Furthermore,
M
FAPSO is slightly worsen than FAPSO-LM on the unimodal functions since the former yields better results on 2 problems while the latter achieves more 495
promising solutions on 3 functions. From the results we can see that the local
6. Conclusions
te
d
search operator has positive effects on both unimodal and multimodal problems.
Ac ce p
In this paper, we introduced a hybrid meta-heuristic optimizer based on
FA and PSO. In the new proposed algorithm, called FAPSO in this paper, the
500
population is divided into two sub-populations with the same size. During the optimizing process, the two sub-populations choose PSO and FA as their basic algorithm respectively to carry out the search operator in parallel. To efficiently utilize the merits of PSO and FA, the two sub-populations share their own optimal solutions if one of which has stagnated more than a threshold generations.
505
In addition, a knowledge-based detecting operator and a local search operator are introduced to tradeoff the contradiction between explorative and exploitative capabilities. While carrying out the purposeful detecting operator, each dimension of the searching space is divided into many small-sized sub-regions, based on which the historical knowledge of individuals is recorded to help the
29
Page 29 of 37
510
population take a purposeful detecting operator. The aim of the detecting operator is to enable the best solution to search for a more promising sub-region and
ip t
then dragging itself out of a possible local optimum. During the last stage of
evolutionary, the classical BFGS Quasi-Newton method is adopted to improve
515
cr
the exploitative ability of FAPSO and then accelerate its convergence speed.
From the comparison results, several conclusions can be drawn for FAPSO.
us
First, exchanging elite module in FAPSO is conductive to population diversity which is an important issue for multimodal optimization. Second, the detecting operator is more suitable for multimodal optimization while the local search
520
an
operator is conductive for improving solutions accuracy for different problems in various degrees. However, it is worth to note that the performance of FAPSO on those non-separable problems need to be further improved. The reason why
M
FAPSO cannot yield pleasurable results is that the detecting operator do not consider the correlation between different variables. As a result, FAPSO may waste lots of fitness evaluations. Our future works will focus on how to discern the interactions among different variables for the non-separable problems based
d
525
te
on the historical knowledge of population, and then help the population carry
Ac ce p
out a more efficient detecting operator.
Acknowledgement
This study was funded by the National Natural Science Foundation of China
530
(No.61663009, 61602174, 61562028), the National Natural Science Foundation of Jiangxi Province (No. 20161BAB202064, 20161BAB212052, 20151BAB207022), and the National Natural Science Foundation of Jiangxi Provincial Department of Education (No. GJJ150539, GJJ150496).
References 535
[1] A. Abdullah, S. Deris, M.S. Mohamad, S.Z.M. Hashim, A new hybrid firefly algorithm for complex and nonlinear problem, Dist. Comput. Artif. Intell. 151(2012)673-680. 30
Page 30 of 37
[2] A. Chatterjee, P. Siarry, Nonlinear inertia weight variation for dy-
540
33(3)(2004)859-871.
ip t
namic adaptation in particle swarm optimization, Comput. Oper. Res.
with chaos, Commun. Nonlinear Sci. 18(1)(2013)89-98.
cr
[3] A.H. Gandomi, X.S. Yang, S. Talatahari, A.H. Alavi, Firefly algorithm
[4] A. Ratnaweera, S.K. Halgamuge, H.C. Watson, Self-organizing hierarchical
545
Trans. Evol. Comput. 8(3)(2004)240-255.
us
particle swarm optimizer with time-varying acceleration coefficients, IEEE
an
[5] B. Wang, D.X. li, J.P. Jiang, Y.H. Liao, A modified firefly algorithm based on light intensity difference, J. Comb. Optim. 31(3)(2016)1045-1060.
M
[6] B. Xin, J. Chen, Z.H. Peng, F. Pan, An adaptive hybrid optimizer based on particle swarm and differential evolution for global optimization, Sci. 550
China Inf. Sci. 53(5)(2010)980-989.
d
[7] B. Xin, J. Chen, J. Zhang, H. Fang, Z.H. Peng, Hybridizing differential
te
evolution and particle swarm optimization to design powerful optimizers: A review and taxonomy, IEEE Trans. Syst. Man, Cybern. C, Appl. Rev.
Ac ce p
42(5)(2012)744-767. 555
[8] C. Li, S. Yang, T.T. Nguyen, A self-learning particle swarm optimizer for global optimization problems, IEEE Trans. Syst., Man, Cybern. B, Cybern. 42(3)(2012)627-646.
[9] C. Wei, Z. He, Y. Zhang, Swarm directions embedded in fast evolutionary programming, in: Proceedings of Congress on Evolutionary Computation,
560
Hawaii, 2002, pp.1278-1283.
[10] C. Yuan, Z. Xia, X. Sun, Coverless image steganography based on SIFT and BOF, J. Internet Technol. 18(2)(2017)209-216. [11] F.J. Rodriguez, C. Garcia-Martinez, M. Lozano, Hybrid metaheuristics based on evolutionary algorithms and simulated annealing: Taxonomy, 31
Page 31 of 37
565
comparison, and synergy test, IEEE Trans. Evol. Comput. 16(6)(2012)787-
ip t
800. [12] F. Qiang, N. Tong, M. Zhao, Firefly algorithm based on multi-group learning mechanism, Appl. Res. Comput. 30(12)(2013)3600-3604.
on krill herd and quantum-behaved particle swarm optimization, Neural Comput. Appl. 27(4)(2016)989-1006.
us
570
cr
[13] G.G. Wang, A.H. Gandomi, A.H. Alavi, S. Deb, A hybrid method based
[14] G.G. Wang, S. Deb, A.H. Gandomi, A.H. Alavi, Opposition-based krill herd
an
algorithm with Cauchy mutation and position clamping, Neurocomputing 177(2016)147-157. 575
[15] H. Higashi, H. Iba, Particle swarm optimization with Gaussian mutation,
M
in: Proceedings of the IEEE Swarm Intelligence Symposium, Indianapolis, 2003, pp. 72-79.
d
[16] H. Wang, Z. Wu, S. Rahnamayan, Y. Liu, M. Ventresca, Enhancing particle swarm optimization using generalized opposition-based learning, Inform. Sci. 181(2011)4699-4714.
te
580
Ac ce p
[17] J. Li, X. Chen, M. Li, J. Li, P. Lee, W. Lou, Secure deduplication with efficient and reliable convergent key management, IEEE Trans. Parall. Distr. Sys. 25(6)(2014)1615-1625
[18] J. Li, X. Huang, J. Li, X. Chen, X. Huang, D.S. Wong, Securely outsourcing
585
attribute-based encryption with checkability, IEEE Trans. Parall. Distr. Sys. 25(8)(2014)2201-2210.
[19] J. Li, H. Yan, Z. Liu, X. Chen, Y. Xiang, Location-sharing systems with enhanced privacy in mobile online social networks, IEEE Sys. J. DOI: 10.1109/JSYST.2015.2415835.
590
[20] J.J. Liang, P.N. Suganthan, Dynamic multi-swarm particle swarm optimizer, in: Proceedings of IEEE Swarm Intelligence Symposium, Pasadena, 2005, pp.124-129. 32
Page 32 of 37
[21] J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baska, Comprehensive learning
595
IEEE Trans. Evol. Comput. 10(3)(2006)281-295.
ip t
particle swarm optimizer for global optimization of multimodal functions,
[22] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings
cr
of the IEEE Conference on Neural Network, Perth, 1995, pp.1942-1948.
[23] J. Kennedy, R. Mendes, Population structure and particle swarm perfor-
600
us
mance, in: Proceedings of Congress on Evolutionary Computation, Hawaii, 2002, pp. 1671-1676.
an
[24] K. Mohammad, R.T. Mohd, E. Mahdiyeh, A new hybrid firefly algorithm for foundation optimization, Nati. Aca. Sci. Lett. 36(3)(2013)279-288.
M
[25] L. dos Santos Coelho, D.L. de Andrade Bernert, V.C. Mariani, A chaotic firefly algorithm applied to reliability-redundancy optimization, in: Pro605
ceedings of Congress on Evolutionary Computation, New Orleans, 2011,
d
pp. 517-521.
te
[26] M. Bidar, H. Rashidy Kanan, Modified firefly algorithm using fuzzy tuned parameters, in: Proceedings of Iranian Conference on fuzzy systems,
Ac ce p
Oazvin, 2013, pp. 1-4. 610
[27] N. Lynn, P.N. Suganthan, Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation, Swarm Evol. Comput. 24(2015)11-24.
[28] P.N. Suganthan, N. Hansen, J.J. Liang, Y.P.C.K. Deb, A. Auger, S. Tiwari, Problem definitions and evaluation criteria for the CEC 2005 special session
615
on real-parameter optimization, Nanyang Technological Univ., Singapore, Tech. Rep., 2005.
[29] Q. Liu, W. Cai, J. Shen, Z. Fu, X. Liu, N. Linge, A speculative approach to spatial-temporal efficiency with multi-objective optimization in a heterogeneous cloud environment, Secur. Commun. Netw. 9(17)(2016)4002-4012.
33
Page 33 of 37
620
[30] Q. Tian, S. Chen, Cross-heterogeneous-database age estimation through
ip t
correlation represention learning, Neurocomputing, 238(2017)286-295. [31] R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of International Symposium on Micro machine & Human
625
cr
Science, Nagoya, 1995, pp.39-43.
[32] R. Cheng, Y.C. Jin, A social learning particle swarm optimization algorith-
us
m for scalable optimization, Inform. Sci. 291(2015)43-60.
[33] S.H. Ling, H.H.C. Iu, K.Y. Chan, H.K. Lam, B.C.W. Yeung, and F.H.
an
Leung, Hybrid particle swarm optimization with wavelet mutation and its industrial applications, IEEE Trans. Syst., Man, Cybern. B, Cybern. 630
38(3)(2008)743-763.
M
[34] S.H. Yu, S.L. Zhu, Y. Ma, D.M. Mao, Enhancing firefly algorithm using generalized opposition-based learning, Comput. 97(7)(2015)741-754.
d
[35] S.J. Huang, X.Z. Liu, W.F. Su, S.H. Yang, Application of hybrid firefly algorithm for sheath loss reduction of undergroud transmission systems, IEEE Trans. Power Delive. 4(2013)2085-2092. C.C. Yang,
Enhancing differential evolution utilizing
Ac ce p
[36] S.M. Guo,
te
635
eigenvector-based crossover operator,
IEEE Trans. Evol. Comput.
19(1)(2015)31-49.
[37] S. Sayah, A. Hamouda, A hybrid differential evolution algorithm based on
640
particle swarm optimization for nonconvex economic dispatch problems, Appl. Soft Comput. 13(4)(2013)1608-1619.
[38] T. Niknam, R. Azizipanah-Abarghooee, A. Roosta, Reserve constrained dynamic economic dispatch: a new fast self-adaptive modified firefly algorithm, IEEE Syst. J. 6(4)(2012)636-646.
645
[39] T. Peram, K. Veeramachaneni, and C. K. Mohan, Fitness-distance-ratio based particle swarm optimization, in: Proceedings of Swarm Intelligence Symposium, Indianapolis, 2003, pp. 174-181. 34
Page 34 of 37
[40] X.S. Yang, Firefly algorithms for multimodal optimization, Mathematics,
650
ip t
5972(2009)169-178. [41] X.S. Yang, S. Deb, Cuckoo search via Levy flights, in: World Congress on
cr
Nature and biologically inspired computing, Coimbatore, 2009, pp 210-214. [42] X.W. Xia, J.N. Liu, Z.B. Hu, An improved particle swarm optimizer based
Appl. Soft Comput. 23(23)(2014)76-90. 655
us
on tabu detecting and local learning strategy in a shrunk search space,
[43] Y. Shi, R.C. Eberhart, A modified particle swarm optimizer, in: Proceed-
an
ings of the IEEE World Congress of Computational Intelligence, Alaska, 1998, pp. 68-73.
M
[44] Y. Zheng, J. Byeungwoo, D. Xu, Q.M.J. Wu, H. Zhang, Image segmentation by generalized hierarchical fuzzy C-means algorithm, J. Intell. Fuzzy 660
Syst. 28(2)(2015)961-973.
d
[45] Y. Shi, R.C. Eberhart, Fuzzy adaptive particle swarm optimization, in:
pp.101-106.
te
Proceedings of Congress on Evolutionary Computation, Korea, 2001,
Ac ce p
[46] Z.H. Zhan, J. Zhang, Y. Li, S.H.C. Henry , Adaptive particle swarm op-
665
timization, IEEE Trans. Syst. Man, Cybern. B, Cybern. 39(6)(2009)13621381.
[47] Z.H. Zhan, Y. Li, Y.H. Shi, Orthogonal learning particle swarm optimization, IEEE Trans. Evol. Comput. 15(6)(2011)832-847.
35
Page 35 of 37
Highlights
The population is divided into two sub-populations which select FA and PSO as their basic algorithm to carry out the searching process, respectively.
The two sub-populations share their own optimal solutions while they have stagnated more than a threshold generations.
The population’s historical knowledge is recorded to help it carry out a
ip t
purposeful detecting operator, and then drags the population jump out of a possible local optimum.
cr
A classical local-searching strategy, i.e., BFGS Quasi-Newton method, is
ce pt
ed
M
an
us
introduced to improve the exploitative capability of FAPSO.
Ac
Page 36 of 37
ip t
Xuewen Xia, received the Ph.D. degree in Computer Software and Theory from Wuhan University, Wuhan, China, in 2009. In 2009, he was a Lecturer with the Hubei Engineering University, Xiaogan, China. In 2012, he worked as a postdoctoral researcher at Wuhan University, Wuhan, China. He is currently an Associate Professor with the School of Software, East China Jiaotong University, Nanchang, China. His current research interests include the areas of computational intelligence techniques and their applications.
cr
Ling Gui, received the bachelor degree in Accounting from Huazhong Normal University, Wuhan, China, in 2012. She is currently an experimentalist at East China Jiaotong University, Nanchang, China. She current research interests include the areas of computational intelligence techniques and their applications.
an
us
Guoliang He, received the M.S. degree and Ph. D degree in Computer Software and Theory from Wuhan University, Wuhan, P.R. China in 2004 and 2007 respectively. He is currently an Associate Professor at the State Key Laboratory of Software Engineering, Wuhan University. His current research interests include data mining, machine learning and intelligent algorithms.
d
M
Chengwang Xie, received the Ph.D. degree in Computer Software and Theory from Wuhan University, Wuhan, China, in 2010. In 2012, he was an Associate Professor with the School of Software, East China Jiaotong University, Nanchang, China. In 2013, he worked as a postdoctoral researcher at Wuhan University, Wuhan, China. His current research interests include swarm intelligence techniques and their applications.
Ac ce p
te
Bo Wei, received the Ph.D. degree in Computer Software and Theory from Wuhan University, Wuhan, China, in 2013. In 2013,he was a Lecturer with East China Jiaotong University, Nanchang, China. His research interests include intelligent computation and machine learning. Ying Xing, received the bachelor degree in Software Engineering from East China Jiaotong University, Nanchang, China, in 2016. Currently, she is a postgraduate at East China Jiaotong University, Nanchang, China. She current research interests include the areas of computational intelligence techniques and their applications.
Ruifeng Wu, received the bachelor degree in Mechanical Engineering and Automation from Hebei University of Technology City College, Tianjin, China, in 2016. Currently, he is a postgraduate at East China Jiaotong University, Nanchang, China. She current research interests include the areas of computational intelligence techniques and their applications.
Yichao Tang, received the bachelor degree in Software Engineering from East China Jiaotong University, Nanchang, China, in 2016. Currently, he is a postgraduate at East China Jiaotong University, Nanchang, China. He current research interests include the areas of computational intelligence techniques and their applications.
Page 37 of 37