Adaptive comprehensive learning particle swarm optimization with cooperative archive

Anping Lin, Wei Sun, Hongshan Yu, Guohua Wu, Hongwei Tang

PII: S1568-4946(19)30055-9
DOI: https://doi.org/10.1016/j.asoc.2019.01.047
Reference: ASOC 5319

To appear in: Applied Soft Computing Journal

Received date: 10 April 2017. Revised date: 7 May 2018. Accepted date: 31 January 2019.
Please cite this article as: A. Lin, W. Sun, H. Yu et al., Adaptive comprehensive learning particle swarm optimization with cooperative archive, Applied Soft Computing Journal (2019), https://doi.org/10.1016/j.asoc.2019.01.047

Highlights
1. Proposed an adaptive mechanism for dynamically adjusting the comprehensive learning probability.
2. Proposed a cooperative archive to exploit the valuable information of the current swarm and the archive.
3. The proposed ACLPSO-CA is tested on the CEC2013 and CEC2017 test suites and compared with seven popular PSO variants to evaluate its performance.
4. The proposed ACLPSO-CA is applied to a radar system design problem to demonstrate its potential in real-life optimization.


Adaptive comprehensive learning particle swarm optimization with cooperative archive
Anping Lin a,b,c, Wei Sun *a,b,c, Hongshan Yu *b,c, Guohua Wu *d,e, Hongwei Tang b,c

Abstract: Comprehensive learning particle swarm optimization (CLPSO) enhances its exploration capability by exploiting all other particles' historical information to update each particle's velocity. However, CLPSO adopts a set of fixed comprehensive learning (CL) probabilities for learning from other particles, which may impair its performance on complex optimization problems. To improve the performance and adaptability of CLPSO, an adaptive mechanism for adjusting the CL probability and a cooperative archive (CA) are combined with CLPSO; the resultant algorithm is referred to as adaptive comprehensive learning particle swarm optimization with cooperative archive (ACLPSO-CA). The adaptive mechanism divides the CL probability into three levels and adjusts each particle's CL probability level dynamically according to the performance of the particles during the optimization process. The cooperative archive provides additional promising information for ACLPSO-CA, and the archive itself is updated through the cooperative operation of the current swarm and the archive. To evaluate its performance, ACLPSO-CA is tested on the CEC2013 and CEC2017 test suites and compared with seven popular PSO variants. The test results show that ACLPSO-CA outperforms the other comparative PSO variants on the two CEC test suites, achieving high performance on different types of benchmark functions and exhibiting high adaptability as well. Finally, ACLPSO-CA is applied to a radar system design problem to demonstrate its potential in real-life optimization. Keywords: comprehensive learning; particle swarm optimization; cooperative archive; radar system design

1 Introduction
Optimization problems are very common in the real world, arising in distribution networks [1], electronics and electromagnetics [2], power systems and plants [3], image processing [4], antenna design [5], communication network design and optimization [6], clustering and classification [7], control systems [8] and so on. Most optimization problems can be expressed by the following formula in D-dimensional space:

Minimize $f(\mathbf{x})$, where $\mathbf{x} = [x_1, x_2, \ldots, x_D]$.

Some optimization problems are so complex that it is almost impossible to find analytic solutions. In addition, approximate solutions are sufficient for some real-life optimization problems. Hence

* Joint corresponding authors: W. Sun ([email protected]), H. Yu ([email protected]), G. Wu ([email protected])
a State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, Hunan University, Changsha, 410082, PR China
b College of Electrical and Information Engineering, Hunan University, Changsha, 410082, PR China
c Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing, Changsha, 410082, PR China
d School of Traffic and Transportation Engineering, Central South University, Changsha, 410073, China
e College of Information Systems and Management, National University of Defense Technology, Changsha, 410082, PR China

scholars and engineers are often seeking better alternative approaches for such complex optimization problems. Many evolutionary algorithms (EAs) and swarm intelligence algorithms (SIAs) have emerged to solve complex optimization problems in the past decades, for example the genetic algorithm (GA) [9], ant colony optimization (ACO) [10], differential evolution (DE) [11, 12], harmony search (HS) [13], the evolution strategy with covariance matrix adaptation (CMA-ES) [14], the artificial bee colony (ABC) algorithm [15], the gravitational search algorithm (GSA) [16], teaching-learning-based optimization (TLBO) [17], and the across neighborhood search algorithm (ANS) [18].

Table 1 The state-of-the-art PSO algorithms

| Year | PSO variant | Contribution | Improvements |
| 2001 | FAPSO [19], Y. Shi, R.C. Eberhart | Implemented a fuzzy system to adapt the inertia weight of PSO. | FAPSO is not sensitive to the population size, and its scalability is acceptable. Tuning the inertia weight with a fuzzy system can improve the performance of FAPSO to some extent. |
| 2002 | PSO-cf [20], M. Clerc, J. Kennedy | Explored how PSO works from the individual particle's point of view. Applied constriction coefficients to control the dynamical characteristics of the particle swarm, including its exploration versus exploitation properties. | Constriction coefficients can increase the ability of the algorithm to find the optimal points in the search space, and can be applied to various parts of the PSO formula to guarantee convergence while encouraging exploration. |
| 2003 | FDR-PSO [21], T. Peram, K. Veeramachaneni | Introduced a fitness-distance-ratio based mechanism that moves particles towards nearby particles of high fitness to combat premature convergence. | Empirical examination demonstrates that the convergence of FDR-PSO does not occur at an early phase of particle evolution; avoiding premature convergence allows FDR-PSO to continue searching for global optima in difficult multimodal optimization problems. |
| 2004 | FIPS [22], R. Mendes, J. Kennedy, et al. | Proposed a "fully informed" approach where all the neighbors' previous best values are used to modify the velocity of one particle. | The results of FIPS are very promising, as informed individuals seem to find better solutions on all the benchmark functions [34]. The U-versions of FIPS, for instance USquare, perform better in many cases than versions where the self is included in the neighborhood. |
| 2004 | HPSO-TVAC [23], A. Ratnaweera, S. K. Halgamuge | Employed time-varying acceleration coefficients (TVAC) to reduce premature convergence. Introduced the concept of "mutation" of the velocity vector to enhance swarm diversity, and the "self-organizing hierarchical particle swarm optimizer" (HPSO) as a performance improvement strategy. | TVAC can improve the performance of PSO significantly, especially on unimodal functions. The "mutation" along with TVAC improves performance in terms of the optimum solution. Combining HPSO with TVAC, the resultant HPSO-TVAC achieved a significant performance improvement compared with both the PSO-TVIW and PSO-RANDIW methods. |
| 2005 | UPSO [24], K. E. Parsopoulos, M. N. Vrahatis | Proposed a new scheme to harness the local and global variants of standard PSO. | UPSO seems to exploit the good properties of both the global and local PSO variants and exhibited superiority over the standard PSO. |
| 2005 | DMS-PSO [25], J.J. Liang, P. N. Suganthan | Proposed a novel dynamic multi-swarm PSO (DMS-PSO), in which the whole population is divided into many small swarms that are regrouped frequently using various regrouping schedules so that information is exchanged among the swarms. | DMS-PSO gives better performance on complex multimodal problems than some other PSO variants by combining the local version of PSO with a dynamic multi-swarm neighborhood topology. |
| 2006 | CLPSO [26], J.J. Liang, P. N. Suganthan | Proposed a novel comprehensive learning strategy where other particles' previous best positions serve as exemplars, and each dimension of a particle can potentially learn from a different exemplar. | The comprehensive learning strategy gives the particles more exemplars to learn from and a larger potential space to fly in. Experiments show the strategy enables CLPSO to use the information in the swarm more effectively. CLPSO improves the performance of PSO significantly, especially on multimodal problems. |
| 2009 | APSO [27], Z. Zhan, J. Zhang, Y. Li, et al. | Proposed a real-time evolutionary state estimation (ESE) procedure and automatic control of the inertia weight, acceleration coefficients and other algorithmic parameters at run time according to the evolutionary state. Performs an elitist learning strategy (ELS) to jump out of possible local optima when the evolutionary state is classified as the convergence state. | The experimental results show the ESE-based adaptive parameter control makes the algorithm extremely efficient, offering a substantially improved convergence speed in terms of both the number of FEs and the CPU time needed to reach acceptable solutions for both unimodal and multimodal functions. The benchmark tests show ELS can substantially improve the global solution accuracy. |
| 2011 | OLPSO [28], Z. Zhan, J. Zhang, Y. Li, et al. | Employed an orthogonal learning (OL) strategy to guide particles to fly in better directions by constructing a more promising and efficient exemplar via orthogonal experimental design. | The OL strategy can be applied to PSO with any topological structure, such as the star, the ring, the wheel, and the von Neumann structures. The comparison tests show OLPSO can significantly improve the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness. |
| 2013 | ALC-PSO [29], W. Chen, J. Zhang, Y. Lin, et al. | Transplanted an aging mechanism into PSO. The aging mechanism assigns the leader of the swarm a growing age and a lifespan, and allows the other individuals to challenge the leadership when the leader becomes aged. The lifespan of the leader is adaptively tuned according to the leader's leading power during the optimization process. | The aging mechanism in ALC-PSO serves as a challenging mechanism for promoting a suitable leader to lead the swarm. ALC-PSO manages to prevent premature convergence while keeping the fast-converging feature of the original PSO. |

Inspired by the behavior of a flock of birds, James Kennedy and Russell Eberhart [30, 31] proposed the particle swarm optimization (PSO) algorithm in 1995. Since PSO is easy to implement and has a high convergence rate, it is widely used in solving real-world optimization problems. However, on complex multimodal problems, PSO suffers from premature convergence, and many PSO variants have been reported to improve its performance on such problems. Some state-of-the-art PSO variants are summarized in Table 1.
PSO has continuously attracted much attention from the evolutionary computation (EC) community since its inception, and scholars have carried out a great deal of research to improve the performance of PSO on complex multimodal problems. For example, Valdez et al. [32] employed fuzzy logic to adapt some parameters and used a co-evolution technique to improve the performance of PSO. Li et al. [33] utilized an estimation of distribution algorithm to estimate and preserve the distribution information of particles' historical promising Pbests and proposed historical memory based PSO (HMPSO). Ouyang et al. [34] developed an improved global-best-guided particle swarm optimization with learning operation (IGPSO). Gong et al. [35] employed a genetic evolution strategy to breed promising exemplars for PSO and obtained genetic learning PSO (GL-PSO). Qin et al. [36] presented an improved PSO algorithm with an inter-swarm interactive learning strategy (IILPSO) to overcome the drawbacks of the canonical PSO algorithm's learning strategy. Lim et al. [37] combined teaching-learning-based optimization with PSO and proposed teaching and peer-learning particle swarm optimization (TPLPSO). Hu et al. [38] proposed a parameter control mechanism to adaptively change the parameters and thus improve the robustness of particle swarm optimization with multiple adaptive methods (PSO-MAM).
Compared with the aforementioned PSO variants, CLPSO is relatively simple in concept and achieves high performance on multimodal problems. Its comprehensive learning (CL) strategy, in which all other particles' historical best information is used to update a particle's velocity, is adopted by many sophisticated PSO variants, and CLPSO remains one of the state-of-the-art PSO algorithms. Since CLPSO obtains satisfactory performance on multimodal problems, it has attracted great attention, and several improved CLPSO variants have been proposed in recent years. For example, Lynn et al. [39] combined an exploration-enhanced CLPSO with an exploitation-enhanced CLPSO-G and proposed heterogeneous comprehensive learning particle swarm optimization (HCLPSO); the test results show HCLPSO performs better than state-of-the-art PSO variants on the CEC05 test suite. Lynn et al. [40] introduced the ensemble particle swarm optimizer (EPSO), in which a pool of PSO strategies is constructed from the inertia weight PSO, LIPS, HPSO-TVAC, FDR-PSO and CLPSO algorithms. Saban et al. [41] introduced a novel parallel multi-swarm algorithm based on comprehensive learning particle swarm optimization (PCLPSO), which has multiple swarms based on the master-slave paradigm that work cooperatively and concurrently. Mohammad et al. [42] developed two classes of learning automata for adaptive parameter selection to improve CLPSO. Omran et al. [43] employed a fuzzy controller to tune the learning probability, inertia weight and acceleration coefficients of each particle in the swarm and thereby presented a fuzzy-controlled CLPSO.
Existing research improves the performance of CLPSO through heterogeneity, ensembles, multi-swarm schemes and adaptive parameter adjustment strategies, while the investigation of dynamically adjusting the CL probability is lacking. The original CLPSO assigns each particle a different, fixed CL probability of learning from other particles; since the CL probability is not adjusted adaptively during the optimization process, the performance of CLPSO may be weakened. To overcome the drawback of the fixed CL probability, an adaptive mechanism is employed to adjust the CL probability dynamically according to the performance of the particles; the resultant PSO algorithm is referred to as adaptive comprehensive learning particle swarm optimization (ACLPSO). To make good use of the information explored by the particles, ACLPSO with a cooperative archive (ACLPSO-CA) is proposed by incorporating a cooperative archive into ACLPSO.
The rest of this article is organized as follows: Section 2 briefly reviews the related works, Section 3 introduces the methodologies, Section 4 presents the experimental results, and Section 5 concludes the paper.

2. Literature review
2.1 Canonical PSO
In the canonical PSO, each particle in the swarm is attracted by its Pbest and the Gbest [44]. The velocity and position are updated according to the following equations:

$V_i^d = \omega V_i^d + c_1 r_1 (Pbest_i^d - X_i^d) + c_2 r_2 (Gbest^d - X_i^d)$ (1)

$X_i^d = X_i^d + V_i^d$ (2)

where i = 1, 2, 3, …, nP denotes the index of each particle, nP is the swarm size, and d = 1, 2, …, D indexes the dimensions of the solution space. $V_i = [V_i^1, V_i^2, \ldots, V_i^D]$ and $X_i = [X_i^1, X_i^2, \ldots, X_i^D]$ stand for the velocity vector and position vector of the ith particle, respectively. ω is the inertia weight, c1 and c2 are two acceleration coefficients, and r1 and r2 are two uniformly distributed random numbers within the range (0, 1). The canonical PSO obtains a high convergence rate; however, it only utilizes the Pbest and Gbest to guide the motion of particles. If the Pbest of one particle and the Gbest are trapped in the same basin of one local optimum, it is hard for the particle to escape from the local optimum without a special operation.
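To make the update concrete, the following minimal sketch implements Eqs. (1) and (2) with NumPy. The vectorized layout, the function name, and the default coefficient values are illustrative assumptions, not the paper's code.

```python
import numpy as np

def pso_step(X, V, pbest, gbest, rng, w=0.7298, c1=2.05, c2=2.05):
    """One canonical PSO step, Eqs. (1)-(2). X, V, pbest: (nP, D); gbest: (D,)."""
    nP, D = X.shape
    r1 = rng.random((nP, D))     # uniform random numbers in (0, 1), per dimension
    r2 = rng.random((nP, D))
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # Eq. (1)
    X = X + V                                                   # Eq. (2)
    return X, V
```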

2.2 CLPSO
To overcome the premature convergence problem, CLPSO [26] was proposed. In CLPSO, the neighbor particles' Pbests are employed to enhance exploration, and different dimensions of one particle may learn from different neighbor particles. The velocity of CLPSO is updated according to equation (3):

$V_i^d = \omega V_i^d + c \, r^d \left(Pbest_{f_i(d)}^d - X_i^d\right)$ (3)

The CL list of the ith particle, $f_i = [f_i(1), f_i(2), \ldots, f_i(D)]$, defines which particles' Pbests the ith particle should follow; $Pbest_{f_i(d)}^d$ can be the corresponding dimension of any particle's Pbest, including the particle's own Pbest. The CL probability PC decides whether to learn from neighbor particles at each iteration. For each dimension of one particle, a uniformly distributed random number is generated first; if the random number is smaller than PC(i), the relevant dimension learns from the better of two tournament-selected neighbor particles in the current swarm, otherwise it learns from the particle's own Pbest. The PC for each particle is generated according to equation (4):

$PC_i = a + b \cdot \frac{\exp\left(\frac{10(i-1)}{nP-1}\right) - 1}{\exp(10) - 1}$ (4)

where nP denotes the swarm size, a = 0.05 and b = 0.45. CLPSO enlarges the search range of PSO and improves its performance on multimodal optimization problems. Since CLPSO removes the global learning component, it converges relatively slowly on unimodal problems. To speed up the convergence of CLPSO, Lynn et al. adopted CLPSO-G [39] to enhance the exploitation of the original CLPSO. The velocity of CLPSO-G is updated according to equation (5):

$V_i^d = \omega V_i^d + c_1 r_1 \left(Pbest_{f_i(d)}^d - X_i^d\right) + c_2 r_2 \left(Gbest^d - X_i^d\right)$ (5)

where ω, c1 and c2 are linearly adjusted during the evolutionary process to obtain better performance. At the early stage, CLPSO-G employs a bigger ω and c1 and a smaller c2 to enhance exploration, while at the latter stage it adopts a smaller ω and c1 and a bigger c2 to enhance exploitation. The overall performance of CLPSO-G is better than that of CLPSO, especially on unimodal problems.
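The sketch below illustrates Eq. (4) and the CL exemplar construction just described. The function names are assumptions, and drawing the two tournament candidates uniformly from the whole swarm is a simplification for illustration.

```python
import numpy as np

def cl_probability(nP, a=0.05, b=0.45):
    """Eq. (4): comprehensive learning probability PC for particles 1..nP."""
    i = np.arange(nP)
    return a + b * (np.exp(10.0 * i / (nP - 1)) - 1.0) / (np.exp(10.0) - 1.0)

def build_exemplar(i, D, pbest_val, Pc, rng):
    """CL list f_i: for each dimension, with probability Pc[i] learn from the
    better of two tournament-picked particles, otherwise from particle i itself."""
    nP = len(pbest_val)
    f = np.full(D, i)                         # default: learn from own Pbest
    for d in range(D):
        if rng.random() < Pc[i]:
            a, b = rng.choice(nP, size=2, replace=False)  # tournament candidates
            f[d] = a if pbest_val[a] < pbest_val[b] else b
    return f  # dimension d of the velocity update then uses pbest[f[d], d]
```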

2.3 Archive
The archive strategy is widely used in multi-objective optimization to collect and reuse non-dominated solutions. For example, Zhang et al. [45] introduced an archive to collect and reuse the explored inferior solutions. Yang et al. [46] adopted an elite archiving scheme to store non-dominated solutions and used the members of the archive to direct successive search behaviors. Zhang et al. [47] proposed a multi-elite guide particle swarm optimization (MGPSO) by introducing an archive into standard particle swarm optimization; the external archive, which preserves elite solutions along the evolutionary process, was employed to provide multi-elite flying directions for the particles. Wu et al. [48] maintained a collection of super solutions for comprehensive learning by other particles. Cheng et al. [49] applied a dynamic archive maintenance strategy to improve the diversity of solutions in multi-objective particle swarm optimization. Lin et al. [50] employed an external archive to preserve the non-dominated solutions visited by the particles, enabling evolutionary search strategies to exchange useful information among them. Patel et al. [51] used a grid-based approach for the archiving process and an ε-dominance method to update the archive, which may help the algorithm to increase the diversity of solutions.

3. Methodologies
3.1 ACLPSO
The classic CLPSO [26] assigns every particle a different, fixed CL probability. If one particle is assigned an improper CL probability, it may evolve slowly and impair the performance of the swarm. To further improve the performance of CLPSO, an adaptive mechanism is combined with CLPSO; the resultant algorithm is referred to as ACLPSO. In ACLPSO, the CL probability is divided into three levels, namely the low, medium and high levels, denoted by P1, P2 and P3, respectively. Each particle is randomly assigned one level of CL probability at initialization. The performance of the particles at each CL probability level is tracked, and the CL probability is re-allocated before the CL exemplar is refreshed. The selecting probability of each CL probability level is adjusted according to eqs. (6)-(9):

$s_{k,t} = \frac{ns_{k,t}}{nc_{k,t}} + \varepsilon$ (6)

$rs_{k,t} = \frac{s_{k,t}}{\sum_{j=1}^{3} s_{j,t}}$ (7)

$pl_{k,t} = (1-\mu)\, pl_{k,t-1} + \mu\, rs_{k,t}$ (8)

$pl_{k,t} = \frac{pl_{k,t}}{\sum_{j=1}^{3} pl_{j,t}}$ (9)

where k = 1, 2, 3 denotes the index of the CL probability level, and $ns_{k,t}$ and $nc_{k,t}$ stand for the numbers of successful searches (finding a better solution than the particle's Pbest) and total searches completed by the particles at the kth CL probability level, respectively. t is the number of the current iteration. $s_{k,t}$ and $rs_{k,t}$ denote the successful search rate and the relative successful search rate of the kth CL probability level at the tth generation. The small constant ε is employed to avoid a possible null successful search rate, and the updating ratio μ determines the impact of the swarm's performance in the current iteration against that of the previous iterations; the effect of μ is analyzed in Section 4.4.4. When one particle's Pc needs updating, the adaptive mechanism employs a roulette wheel to select a CL probability level for that particle: a high-performing CL probability level is encouraged with a high selecting probability and, vice versa, a low-performing CL probability level is discouraged with a low selecting probability. Thereby most of the particles have a chance to select suitable CL probability levels, improving the performance of ACLPSO.
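A compact sketch of the adaptive mechanism follows, based on the reconstruction of Eqs. (6)-(9) above. The helper names and the guard against division by zero are assumptions, not the authors' implementation.

```python
import numpy as np

def update_level_probabilities(ns, nc, pl_prev, eps=0.025, mu=0.3):
    """Selection probabilities of the three CL probability levels, Eqs. (6)-(9)."""
    s = ns / np.maximum(nc, 1) + eps      # Eq. (6); eps avoids a null success rate
    rs = s / s.sum()                      # Eq. (7): relative successful search rate
    pl = (1.0 - mu) * pl_prev + mu * rs   # Eq. (8); mu weighs the current iteration
    return pl / pl.sum()                  # Eq. (9): normalize for the roulette wheel

def select_level(pl, rng):
    """Roulette-wheel selection of a level index (0 -> P1, 1 -> P2, 2 -> P3)."""
    return rng.choice(len(pl), p=pl)
```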

3.2 Cooperative archive
The archive technique is widely used by PSO and other EAs to enhance swarm diversity. Zhang et al. [45] point out that an archive of explored inferior solutions can provide additional information about promising search directions. To make the archive more promising, the cooperative archive is proposed to exploit the valuable information of the current swarm and the archive. A cooperative archive particle is generated according to equations (10) and (11):

$\widetilde{arch}_w^d = rd \cdot Pbest_i^d + (1 - rd) \cdot arch_w^d, \quad d = 1, \ldots, D$ (10)

$arch_w = \widetilde{arch}_w, \quad \text{if } Pbestval_i < archval_w$ (11)

where i denotes the index of any successfully updated particle in the current iteration, rd is a random number within the range (0, 1) generated anew for each dimension, and w is the index of the worst archive particle to be updated. $arch_w^d$ denotes the dth dimension of the wth archive particle, and $archval_w$ and $Pbestval_i$ denote the fitness values of $arch_w$ and $Pbest_i$, respectively. The archive is updated once a better position is found by one particle and the new position's fitness value is better than that of the worst archive particle; the cooperative archive thus updates its worst particle through the cooperative action of the latest Pbest and the worst archive particle. The generation process of archive particles is shown in Fig. 1, where "x" and "y" denote two coordinate axes of the search space and twenty archive particles are generated from the same Pbesti and Archw. Fig. 1 shows that the archive particles are generated within the hyper-box determined by Pbesti and Archw.
When one particle's exemplar needs refreshing, ACLPSO-CA updates the exemplar according to CLPSO, with the archive particles included in the CL-probability-controlled tournament selection: for each dimension of the new exemplar, two candidate particles are selected from the union of the current swarm and the archive, and the one with the better fitness value is selected as the exemplar of the corresponding dimension. Each archive particle has an equal chance with a current swarm particle of being selected when the CL exemplar is updated. In other words, ACLPSO-CA constructs the exemplar of one particle from the information of both the current swarm and the cooperative archive, so the cooperative archive can guide particles to search potentially promising areas via the CL strategy (see the sketch after Fig. 1).

Fig. 1 The generation of archive particles
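The following sketch mirrors the archive update of Eqs. (10) and (11): the worst archive particle is replaced by a random point in the hyper-box spanned by Pbest_i and Arch_w. Drawing rd independently per dimension and re-evaluating the candidate's fitness are assumptions consistent with Fig. 1, not a verbatim reproduction of the authors' code.

```python
import numpy as np

def cooperative_archive_update(arch, arch_val, pbest_i, pbest_val_i, fitness, rng):
    """Eqs. (10)-(11): replace the worst archive particle by a random point inside
    the hyper-box spanned by Pbest_i and that archive particle."""
    w = int(np.argmax(arch_val))            # worst (largest error) archive member
    if pbest_val_i < arch_val[w]:           # update only if the new Pbest is better
        rd = rng.random(arch.shape[1])      # fresh rd for every dimension d
        candidate = rd * pbest_i + (1.0 - rd) * arch[w]   # Eq. (10)
        arch[w] = candidate                 # Eq. (11): accept the candidate
        arch_val[w] = fitness(candidate)    # assumed re-evaluation of the candidate
    return arch, arch_val
```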

3.3 The proposed method
Combining ACLPSO with the cooperative archive, ACLPSO-CA is proposed. The pseudo code of ACLPSO-CA is presented in Algorithm 1.

Algorithm 1 The pseudo code of ACLPSO-CA
1  Initialization
2  While ct <= maxfes
3    For i = 1:nP
4      Update Vi, Xi according to eq. (5), (2)
5      ct = ct + 1, t = t + 1
6      nck(i) = nck(i) + 1
7      If fit(Xi) < fit(Pbesti)
8        Pbesti = Xi, nsk(i) = nsk(i) + 1, Stag(i) = 0
9        Update the worst archive particle according to eq. (10), (11)
10       If fit(Pbesti) < fit(Gbest)
11         Gbest = Pbesti
12       End
13     Else
14       Stag(i) = Stag(i) + 1
15     End
16     If Stag(i) > rg
17       Update Pci by roulette wheel
18       Update CL exemplar from the current swarm and the archive
19       Stag(i) = 0
20     End
21   End
22   Update plk,t according to eq. (6)-(9)
23   nck(i) = 0, nsk(i) = 0
24 End

Here ct, maxfes and t denote the count of function evaluations (FEs), the allowed maximum number of FEs and the count of the current iteration, respectively. Stag(i) counts the number of successive iterations without improvement of the fitness value of particle i, and rg is the refreshing gap for updating the CL exemplar. At every iteration, the successful search rate of each CL probability level is calculated. If one particle's fitness value ceases to improve for rg consecutive iterations, ACLPSO first updates the particle's CL probability and then updates its CL exemplar. Once a better position is found by one particle, the worst archive particle is updated using the information of the latest Pbest, thereby improving the quality of the archive particles.

4. Experiments
4.1 Test problems
To test the performance and adaptability of the proposed ACLPSO and ACLPSO-CA, the CEC2013 [52] and CEC2017 [53] test suites are employed. CEC2013 contains five unimodal functions, fifteen basic multimodal functions and eight composition functions. CEC2017 contains three unimodal functions, seven basic multimodal functions, ten hybrid functions and ten composition functions. The latest CEC2017 benchmark functions are developed with several novel features, such as new basic problems, test problems composed by extracting features dimension-wise from several problems, graded levels of linkages, rotated trap problems, and so on. For a fair comparison, the exact equations of the test functions should not be exploited by the algorithms. The search range of all functions is [-100, 100], the dimensionality is D = 30, and the maximum number of FEs is set as maxfes = 10000×D. Each algorithm is executed for 30 independent runs. The mean errors and the Wilcoxon signed-rank test [54-56] results are used to compare the performance of the involved algorithms. The experiments were conducted on a PC with an Intel Core i7-4790 3.60 GHz CPU, 8 GB RAM, MS Windows 7 Ultimate SP1 64-bit OS and the Matlab R2014b compiler.
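As a concrete illustration of this comparison protocol, the sketch below runs a paired Wilcoxon signed-rank test on the per-run errors of two algorithms for one function. The data are synthetic placeholders rather than the paper's recorded results, and SciPy's wilcoxon is used in place of whatever routine the authors employed.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
# Hypothetical per-run errors of two algorithms on one function (30 runs each).
errors_a = rng.lognormal(mean=1.0, sigma=0.3, size=30)   # e.g. ACLPSO-CA
errors_b = rng.lognormal(mean=1.2, sigma=0.3, size=30)   # e.g. a peer PSO variant

stat, p = wilcoxon(errors_a, errors_b)    # paired, two-sided test over the 30 runs
if p < 0.05:                              # 5% significance level, as in the paper
    symbol = ">" if np.median(errors_a) < np.median(errors_b) else "<"
else:
    symbol = "="
print(f"p = {p:.4f}; ACLPSO-CA {symbol} peer")
```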

4.2 Parameters configuration
Seven peer PSO variants are employed for comparison with the proposed algorithms. Among them, PSO-cf, FIPS and FDR-PSO are three classical PSO algorithms, while ELPSO, SL-PSO and HCLPSO are three up-to-date PSO variants. CLPSO-G is developed from CLPSO, and ACLPSO and ACLPSO-CA are the two PSO algorithms proposed in this study. PSO-cf [20] adopts a constriction factor to control the velocity of the swarm; it is a global PSO version and converges fast on unimodal functions. FIPS [22] adopts a U-ring topology and performs well on complex multimodal functions. ELPSO [57] applies a five-staged successive mutation strategy to the swarm leader to mitigate premature convergence. SL-PSO [58] employs a social learning strategy in which each particle learns only from its own Pbest and the Pbests of particles with better fitness values than itself. HCLPSO [39] employs an exploration subpopulation and an exploitation subpopulation to obtain a good balance between exploration and exploitation. CLPSO-G [39] is modified from CLPSO and employs the information of Gbest to speed up the convergence rate. ACLPSO adopts the scheme of CLPSO-G and employs an adaptive mechanism to adjust the CL probability during the optimization process (see Section 3.1). ACLPSO-CA further adopts the cooperative archive to improve the performance of ACLPSO (see Section 3.2). The source codes of all the comparative PSO algorithms were obtained from their authors, and the parameter configurations suggested in the original publications are utilized. The detailed parameter configurations of the PSO variants are listed in Table 2. The special parameters ε, P1, P2 and P3 of ACLPSO and ACLPSO-CA defined in Section 3.1 are set as ε = 0.025, P1 = 0, P2 = 0.1 and P3 = 0.9. P1 = 0 means that only one randomly chosen dimension learns from another particle, as in the CL strategy [26].

Table 2 The parameter configurations of the nine involved PSO algorithms

| No. | PSO algorithm | Parameter settings |
| 1 | PSO-cf | nP = 40, χ = 0.7298, c1 = c2 = 2.05 |
| 2 | FIPS | nP = 40, χ = 0.7298, φ = 4.1 |
| 3 | FDR-PSO | nP = 40, ω = 0.9-0.4, φ1 = 1, φ2 = 1, φ3 = 2 |
| 4 | ELPSO | nP = 1500, ω = 0.9-0.4, c1 = c2 = 2, h = 1, s = 2, F = 1.2 (h, s, F denote the st.d. of Gaussian mutation, the scale of Cauchy mutation and the scale factor of DE mutation, respectively) |
| 5 | SL-PSO | M = 100, α = 0.5, β = 0.01 |
| 6 | HCLPSO | nP = 40, ω = 0.99-0.2, c1 = 2.5-0.5, c2 = 0.5-2.5, c = 3-1.5, g1 = 15, g2 = 25 |
| 7 | CLPSO-G | nP = 40, ω = 0.99-0.2, c1 = 2.5-0.5, c2 = 0.5-2.5, m = 5 (refreshing gap) |
| 8 | ACLPSO | nP = 40, ω = 0.99-0.2, c1 = 2.5-0.5, c2 = 0.5-2.5, m = 5, μ = 0.3 |
| 9 | ACLPSO-CA | nP = 40, ω = 0.99-0.2, c1 = 2.5-0.5, c2 = 0.5-2.5, m = 5, nAR = 20, μ = 0.3 |

4.3 Search behavior of ACLPSO
To reveal the characteristics of the adaptive mechanism adopted by ACLPSO, ACLPSO is tested on three different types of functions, namely the unimodal function f3, the basic multimodal function f15 and the composition function f27. The search behavior of the swarm and of individual particles is presented in Fig. 2. Fig. 2(a), (c), (e) show the search behavior of the swarm, where n(P1), n(P2) and n(P3) denote the numbers of particles whose CL probability levels are P1, P2 and P3 in one iteration, respectively. Fig. 2(b), (d), (f) show the search behavior of two individual particles, where the "index of P" stands for the particle's CL probability level; for example, an index of 1 means the corresponding particle's CL probability equals P1. Without loss of generality, two randomly selected particles, denoted particler1 and particler2, are used to demonstrate how the CL probability level of an individual particle is regulated.
Fig. 2(a) shows that on the unimodal function f3, n(P1), n(P2) and n(P3) vary during the optimization process: at the early stage n(P1) is high, while at the latter stage n(P2) is high. Fig. 2(b) shows that particler1 mainly evolves with P3 at the early stage and with P1 and P2 at the latter stage; particler2 mostly evolves with P2 and P3 at the early stage and with P1 and P2 for most iterations at the latter stage. Fig. 2(c) shows that on f15, n(P3) is relatively high at the early stage and n(P1) reaches a very high value at the latter stage. Fig. 2(d) reveals that particler1 mostly evolves with P3, while particler2 mostly evolves with P1 and P2 at the early stage and with P1 and P3 at the latter stage. Fig. 2(e) shows that on f27, n(P3) and n(P1) reach high values alternately. Fig. 2(f) shows that particler1 mainly evolves with P2 at the early stage and with P3 at the latter stage; particler2 mainly evolves with P1 and P2 at the early stage, with P3 in the middle stage, and with P1 and P2 at the latter stage. In general, the adaptive mechanism adjusts the number of particles at each CL probability level by dynamically regulating individual particles' CL probability levels according to the performance of the swarm, so as to achieve better performance.

Fig. 2 Search behavior of ACLPSO on three CEC2013 benchmark functions: (a) f3, swarm; (b) f3, individual particles; (c) f15, swarm; (d) f15, individual particles; (e) f27, swarm; (f) f27, individual particles

4.4 Comparison test on CEC2013
4.4.1 Experimental results
In this section, the proposed ACLPSO and ACLPSO-CA are tested and compared with seven PSO variants on the CEC2013 test suite. The mean errors of the tested algorithms are given in Table 3. The Wilcoxon signed-rank test [54, 55] with a significance level of 0.05 is carried out to compare the performance of ACLPSO-CA versus the other involved PSO algorithms; its results are also presented in Table 3. The summary of the Wilcoxon signed-rank test results on different types of functions is listed in Table 4.

Table 3 The mean errors and Wilcoxon signed-rank test results of nine PSO variants on the CEC2013 test suite

| Func. | PSO-cf | FIPS | FDR-PSO | ELPSO | SL-PSO | HCLPSO | CLPSO-G | ACLPSO | ACLPSO-CA |
| f1 | 2.274E-13 (>) | 2.274E-13 (>) | 4.547E-13 (>) | 2.274E-13 (>) | 2.274E-13 (>) | 2.274E-13 (>) | 2.274E-13 (>) | 2.122E-13 (>) | 1.819E-13 |
| f2 | 3.015E+06 (>) | 1.698E+07 (>) | 3.094E+05 (<) | 4.930E+05 (>) | 4.149E+05 (<) | 1.493E+06 (>) | 7.002E+05 (>) | 4.698E+05 (<) | 4.844E+05 |
| f3 | 1.321E+08 (>) | 4.235E+06 (<) | 4.642E+07 (>) | 2.083E+08 (>) | 2.769E+07 (>) | 2.989E+07 (>) | 1.556E+07 (>) | 6.692E+06 (>) | 5.031E+06 |
| f4 | 4.558E+02 (>) | 8.736E+03 (>) | 7.405E+02 (>) | 6.600E+01 (<) | 6.779E+03 (>) | 1.377E+03 (>) | 1.072E+03 (>) | 2.546E+02 (<) | 2.625E+02 |
| f5 | 2.270E-13 (<) | 4.550E-13 (>) | 3.411E-13 (>) | 3.843E-03 (>) | 1.137E-13 (<) | 1.137E-13 (<) | 1.137E-13 (<) | 2.008E-13 (<) | 2.350E-13 |
| f6 | 8.463E+01 (>) | 6.122E+01 (>) | 1.119E+01 (<) | 3.726E+01 (>) | 1.802E+01 (>) | 1.502E+01 (>) | 2.215E+01 (>) | 1.828E+01 (>) | 1.440E+01 |
| f7 | 9.744E+01 (>) | 1.176E+01 (>) | 2.358E+01 (>) | 7.437E+02 (>) | 3.464E+00 (<) | 1.828E+02 (>) | 1.603E+01 (>) | 5.860E+00 (<) | 6.253E+00 |
| f8 | 2.094E+01 (=) | 2.095E+01 (=) | 2.093E+01 (=) | 2.095E+01 (=) | 2.096E+01 (=) | 2.094E+01 (=) | 2.096E+01 (=) | 2.094E+01 (=) | 2.095E+01 |
| f9 | 2.659E+01 (>) | 2.810E+01 (>) | 2.108E+01 (>) | 2.463E+01 (>) | 1.027E+01 (<) | 1.783E+01 (>) | 1.667E+01 (>) | 1.181E+01 (>) | 1.085E+01 |
| f10 | 9.775E-02 (<) | 2.766E-01 (>) | 7.886E-02 (<) | 4.577E+00 (>) | 3.000E-01 (>) | 2.275E-01 (>) | 1.479E-01 (>) | 1.196E-01 (<) | 1.334E-01 |
| f11 | 1.257E+02 (>) | 5.036E+01 (>) | 3.706E+01 (>) | 1.121E+02 (>) | 1.466E+01 (<) | 1.386E+00 (<) | 1.950E+01 (<) | 2.189E+01 (<) | 2.242E+01 |
| f12 | 1.605E+02 (>) | 1.735E+02 (>) | 4.577E+01 (>) | 1.314E+02 (>) | 1.568E+02 (>) | 6.031E+01 (>) | 5.141E+01 (>) | 4.696E+01 (>) | 4.203E+01 |
| f13 | 2.295E+02 (>) | 1.745E+02 (>) | 1.064E+02 (>) | 1.872E+02 (>) | 1.613E+02 (>) | 1.260E+02 (>) | 1.285E+02 (>) | 1.063E+02 (>) | 9.128E+01 |
| f14 | 2.584E+03 (>) | 4.533E+03 (>) | 1.138E+03 (>) | 2.704E+03 (>) | 7.960E+02 (<) | 2.081E+01 (<) | 7.800E+02 (<) | 1.051E+03 (>) | 9.765E+02 |
| f15 | 3.791E+03 (>) | 7.066E+03 (>) | 3.813E+03 (>) | 4.100E+03 (>) | 4.407E+03 (>) | 3.461E+03 (>) | 3.592E+03 (>) | 3.489E+03 (>) | 3.154E+03 |
| f16 | 1.320E+00 (<) | 2.590E+00 (>) | 1.946E+00 (<) | 9.948E-01 (<) | 2.397E+00 (>) | 1.386E+00 (<) | 1.608E+00 (<) | 2.429E+00 (>) | 2.259E+00 |
| f17 | 1.017E+02 (>) | 1.758E+02 (>) | 7.168E+01 (>) | 1.359E+02 (>) | 1.611E+02 (>) | 3.509E+01 (<) | 5.639E+01 (<) | 6.348E+01 (>) | 6.013E+01 |
| f18 | 1.255E+02 (>) | 2.062E+02 (>) | 1.498E+02 (>) | 1.159E+02 (>) | 1.960E+02 (>) | 9.058E+01 (>) | 8.946E+01 (>) | 7.877E+01 (>) | 7.294E+01 |
| f19 | 4.536E+00 (>) | 1.210E+01 (>) | 2.907E+00 (>) | 6.826E+00 (>) | 3.409E+00 (>) | 1.534E+00 (<) | 2.685E+00 (<) | 2.886E+00 (>) | 2.756E+00 |
| f20 | 1.430E+01 (>) | 1.182E+01 (>) | 1.420E+01 (>) | 1.171E+01 (>) | 1.347E+01 (>) | 1.044E+01 (>) | 1.060E+01 (>) | 9.656E+00 (=) | 9.649E+00 |
| f21 | 3.478E+02 (>) | 2.604E+02 (<) | 3.359E+02 (>) | 2.826E+02 (<) | 3.125E+02 (>) | 2.432E+02 (<) | 3.412E+02 (>) | 3.154E+02 (>) | 3.002E+02 |
| f22 | 2.407E+03 (>) | 4.379E+03 (>) | 1.137E+03 (>) | 2.710E+03 (>) | 6.339E+02 (<) | 1.314E+02 (<) | 7.451E+02 (<) | 7.075E+02 (<) | 8.693E+02 |
| f23 | 5.690E+03 (>) | 7.023E+03 (>) | 3.662E+03 (>) | 4.909E+03 (>) | 3.831E+03 (>) | 3.960E+03 (>) | 3.614E+03 (>) | 3.351E+03 (>) | 3.309E+03 |
| f24 | 2.816E+02 (>) | 2.341E+02 (>) | 2.454E+02 (>) | 2.654E+02 (>) | 2.216E+02 (>) | 2.262E+02 (>) | 2.294E+02 (>) | 2.088E+02 (=) | 2.082E+02 |
| f25 | 3.107E+02 (>) | 2.822E+02 (>) | 2.780E+02 (>) | 3.036E+02 (>) | 2.545E+02 (>) | 2.584E+02 (>) | 2.513E+02 (=) | 2.493E+02 (=) | 2.491E+02 |
| f26 | 3.270E+02 (>) | 2.176E+02 (<) | 3.358E+02 (>) | 2.000E+02 (<) | 2.528E+02 (>) | 2.001E+02 (<) | 2.082E+02 (<) | 2.219E+02 (<) | 2.300E+02 |
| f27 | 1.019E+03 (>) | 8.543E+02 (>) | 7.571E+02 (>) | 9.447E+02 (>) | 4.704E+02 (>) | 5.767E+02 (>) | 5.496E+02 (>) | 4.356E+02 (>) | 4.225E+02 |
| f28 | 7.060E+02 (>) | 3.000E+02 (=) | 2.500E+02 (<) | 3.000E+02 (=) | 3.000E+02 (=) | 2.867E+02 (<) | 3.000E+02 (=) | 3.000E+02 (=) | 3.000E+02 |

Note: The symbols ">", "=", "<" in Table 3 indicate that the performance of ACLPSO-CA is significantly better than, tied with, or significantly worse than that of the compared PSO algorithm. The best results are highlighted in bold.

On the five unimodal functions (f1-f5), ACLPSO-CA ranks first on f1. FDR-PSO, FIPS and ELPSO show the best performance on f2, f3 and f4, respectively, and SL-PSO, HCLPSO and CLPSO-G tie for first on f5. The summary of the Wilcoxon signed-rank test results in Table 4 shows that, compared with ACLPSO, ACLPSO-CA performs better on two functions and worse on three. ACLPSO-CA performs better than all the other involved PSO algorithms except ACLPSO. With the adaptive mechanism for adjusting the CL probability, both ACLPSO and ACLPSO-CA achieve high performance on unimodal problems.
On the fifteen basic multimodal functions (f6-f20), ACLPSO-CA takes first place on f12, f13, f15, f18 and f20, while HCLPSO ranks first on f11, f14, f17 and f19. The summary of the Wilcoxon signed-rank test results in Table 4 shows that ACLPSO-CA wins on most of the basic multimodal functions: compared with PSO-cf, FIPS and FDR-PSO, it wins on twelve, fourteen and eleven functions, respectively, and compared with ELPSO, SL-PSO, HCLPSO, CLPSO-G and ACLPSO, it wins on thirteen, ten, nine, nine and ten functions, respectively. With the cooperative archive, ACLPSO-CA performs much better than ACLPSO.
On the eight composition functions (f21-f28), ACLPSO-CA exhibits the best performance on f23, f24, f25 and f27.

HCLPSO performs the best on f21 and f22. The summary of the Wilcoxon signed-rank test results in Table 4 shows that compared with HCLPSO, ACLPSO-CA wins on four functions and loses on four functions; compared with ACLPSO, it wins, ties and loses on three, three and two functions, respectively. Compared with the remaining PSO algorithms, ACLPSO-CA shows significantly better performance. Overall, the performance of ACLPSO-CA is comparable to HCLPSO and better than the other involved PSO algorithms.

Table 4 Summary of the Wilcoxon signed-rank test for comparing ACLPSO-CA versus other peer PSO variants on the CEC2013 test suite

| Function type | PSO-cf | FIPS | FDR-PSO | ELPSO | SL-PSO | HCLPSO | CLPSO-G | ACLPSO |
| Unimodal | 4/0/1 | 4/0/1 | 4/0/1 | 4/0/1 | 3/0/2 | 4/0/1 | 4/0/1 | 2/0/3 |
| Basic multimodal | 12/1/2 | 14/1/0 | 11/1/3 | 13/1/1 | 10/1/4 | 9/1/5 | 9/1/5 | 10/2/3 |
| Composition | 8/0/0 | 5/1/2 | 7/0/1 | 5/1/2 | 6/1/1 | 4/0/4 | 4/2/2 | 3/3/2 |
| Total | 24/1/3 | 23/2/3 | 21/1/5 | 22/2/4 | 19/2/7 | 17/1/10 | 17/3/8 | 15/5/8 |

Note: The data format in each cell is "w/t/l", standing for the number of functions on which ACLPSO-CA performs significantly better than, ties with, or performs significantly worse than the compared PSO algorithm, respectively.

Table 5 Rank of mean performance among nine PSO variants on the CEC2013 test suite

| Func. | PSO-cf | FIPS | FDR-PSO | ELPSO | SL-PSO | HCLPSO | CLPSO-G | ACLPSO | ACLPSO-CA |
| f1 | 3 | 3 | 9 | 3 | 3 | 3 | 3 | 2 | 1 |
| f2 | 8 | 9 | 1 | 5 | 2 | 7 | 6 | 3 | 4 |
| f3 | 8 | 1 | 7 | 9 | 5 | 6 | 4 | 3 | 2 |
| f4 | 4 | 9 | 5 | 1 | 8 | 7 | 6 | 2 | 3 |
| f5 | 5 | 8 | 7 | 9 | 1 | 1 | 1 | 4 | 6 |
| f6 | 9 | 8 | 1 | 7 | 4 | 3 | 6 | 5 | 2 |
| f7 | 7 | 4 | 6 | 9 | 1 | 8 | 5 | 2 | 3 |
| f8 | 2 | 5 | 1 | 5 | 8 | 2 | 8 | 2 | 5 |
| f9 | 8 | 9 | 6 | 7 | 1 | 5 | 4 | 3 | 2 |
| f10 | 2 | 7 | 1 | 9 | 8 | 6 | 5 | 3 | 4 |
| f11 | 9 | 7 | 6 | 8 | 2 | 1 | 3 | 4 | 5 |
| f12 | 8 | 9 | 2 | 6 | 7 | 5 | 4 | 3 | 1 |
| f13 | 9 | 7 | 3 | 8 | 6 | 4 | 5 | 2 | 1 |
| f14 | 7 | 9 | 6 | 8 | 3 | 1 | 2 | 5 | 4 |
| f15 | 5 | 9 | 6 | 7 | 8 | 2 | 4 | 3 | 1 |
| f16 | 2 | 9 | 5 | 1 | 7 | 3 | 4 | 8 | 6 |
| f17 | 6 | 9 | 5 | 7 | 8 | 1 | 2 | 4 | 3 |
| f18 | 6 | 9 | 7 | 5 | 8 | 4 | 3 | 2 | 1 |
| f19 | 7 | 9 | 5 | 8 | 6 | 1 | 2 | 4 | 3 |
| f20 | 9 | 6 | 8 | 5 | 7 | 3 | 4 | 2 | 1 |
| f21 | 9 | 2 | 7 | 3 | 5 | 1 | 8 | 6 | 4 |
| f22 | 7 | 9 | 6 | 8 | 2 | 1 | 4 | 3 | 5 |
| f23 | 8 | 9 | 4 | 7 | 5 | 6 | 3 | 2 | 1 |
| f24 | 9 | 6 | 7 | 8 | 3 | 4 | 5 | 2 | 1 |
| f25 | 9 | 7 | 6 | 8 | 4 | 5 | 3 | 2 | 1 |
| f26 | 8 | 4 | 9 | 1 | 7 | 2 | 3 | 5 | 6 |
| f27 | 9 | 7 | 6 | 8 | 3 | 5 | 4 | 2 | 1 |
| f28 | 9 | 3 | 1 | 3 | 3 | 2 | 3 | 3 | 3 |
| Avg. rank | 6.857 | 6.893 | 5.107 | 6.179 | 4.821 | 3.536 | 4.071 | 3.250 | 2.857 |

Note: Avg. rank denotes the average rank.

On all twenty-eight CEC2013 functions, ACLPSO-CA, HCLPSO and FDR-PSO possess the best performance on ten, seven and five functions, respectively. The summary of the Wilcoxon signed-rank test results in Table 4 shows that ACLPSO-CA performs better than the other peer PSO algorithms: it wins against PSO-cf, FIPS and FDR-PSO on twenty-four, twenty-three and twenty-one functions, respectively; against ELPSO, SL-PSO and HCLPSO on twenty-two, nineteen and seventeen functions, respectively; and against CLPSO-G and ACLPSO on seventeen and fifteen functions, respectively. ACLPSO-CA achieves high performance on different kinds of benchmark functions, especially on the basic multimodal functions.
The ranks of the mean errors on the twenty-eight functions are presented in Table 5. The average rank, i.e. the mean rank of one algorithm over all twenty-eight functions, is employed to evaluate overall performance; the lower the average rank, the better. The order of average rank is ACLPSO-CA, ACLPSO, HCLPSO, CLPSO-G, SL-PSO, FDR-PSO, ELPSO, PSO-cf and FIPS, which indicates that both ACLPSO-CA and ACLPSO perform better than the other comparative PSO algorithms, and that ACLPSO-CA performs better than ACLPSO. The test results indicate that the cooperative archive can further improve the performance of ACLPSO.

4.4.2 Convergence analysis

Fig. 3 The convergence curves of six representative CEC2013 functions: (a) f3; (b) f7; (c) f15; (d) f18; (e) f22; (f) f27

To compare the convergence rates of the nine tested PSO algorithms, the convergence curves on six representative functions are presented in Fig. 3. On the unimodal function f3, Fig. 3(a) shows that SL-PSO converges fast at the beginning but is then surpassed by FIPS, ACLPSO-CA, ACLPSO and CLPSO-G; in the end, FIPS, ACLPSO-CA and ACLPSO occupy the top three places. With the adaptive CL probability, both ACLPSO-CA and ACLPSO yield higher performance. Because ACLPSO and ACLPSO-CA adopt linearly adjusted inertia weight and acceleration coefficients to enhance exploration at the early stage, they converge relatively slowly at the beginning. With the cooperative archive, ACLPSO-CA performs slightly better than ACLPSO.

On the basic multimodal functions, Fig. 3(b) shows that on f7 SL-PSO converges fast and achieves the best performance; the convergence curves of ACLPSO and ACLPSO-CA almost overlap, and they rank second and third, respectively. Fig. 3(c) indicates that on f15, PSO-cf and ELPSO converge fast at the early stage, while ACLPSO-CA and ACLPSO converge relatively slowly at the beginning but accelerate at the middle stage and yield higher performance; HCLPSO keeps a steady convergence rate and outperforms ELPSO, PSO-cf and CLPSO-G in the end, so ACLPSO-CA, HCLPSO and ACLPSO take the first three places. Fig. 3(d) shows that on f18, PSO-cf converges fast at the beginning, while ACLPSO-CA and ACLPSO accelerate at the latter stage and rank first and second, respectively; CLPSO-G and HCLPSO follow ACLPSO and generate almost the same mean error.
On the composition functions, Fig. 3(e) shows that on f22, SL-PSO and HCLPSO converge fast; HCLPSO surpasses SL-PSO at the middle stage and yields the best performance. ACLPSO and ACLPSO-CA speed up at about 110000 FEs and surpass ELPSO, PSO-cf and FDR-PSO; in the end, HCLPSO, SL-PSO and ACLPSO occupy the top three places and ACLPSO-CA ranks fifth. Fig. 3(f) shows that on f27, SL-PSO converges fast at the early stage and achieves the third lowest mean error, while ACLPSO-CA and ACLPSO keep a steady convergence speed at the early stage, outperform the other PSO variants, and rank first and second, respectively. The convergence curves show that both ACLPSO-CA and ACLPSO achieve high performance on different kinds of benchmark functions. With the cooperative archive, ACLPSO-CA ranks first on f15, f18 and f27; its general performance is better than that of ACLPSO.

4.4.3 Computational complexity analysis
The order of computational complexity reflects an algorithm's computational efficiency. The orders of computational complexity of the nine involved PSO algorithms, in O notation, for initialization, evaluation, update and overall [59] are presented in Table 6, where N and D stand for the swarm size and the dimensionality, respectively. FDR-PSO employs a fitness-distance-ratio based learning strategy in which each dimension of one particle may learn from any particle in the swarm, so its update complexity is O(N²D). ELPSO applies a separate mutation to all dimensions of the swarm leader at each iteration, so its update complexity is O(ND+D²). The overall complexity of the remaining PSO algorithms is O(ND). The complexity of adjusting the CL probability level is O(N) and can be neglected, while the complexity of updating the cooperative archive is O(ND), the same as updating the velocity. Hence ACLPSO and ACLPSO-CA have the same order of computational complexity as CLPSO-G; all three update the velocity according to equation (5).

Table 6 Order of computational complexity of nine PSO variants

| Algorithm | Initialize | Evaluate | Update | Overall |
| PSO-cf | O(ND) | O(ND) | O(ND) | O(ND) |
| FIPS | O(ND) | O(ND) | O(ND) | O(ND) |
| FDR-PSO | O(ND) | O(ND) | O(N²D) | O(N²D) |
| ELPSO | O(ND) | O(ND) | O(ND+D²) | O(ND+D²) |
| SL-PSO | O(ND) | O(ND) | O(ND) | O(ND) |
| HCLPSO | O(ND) | O(ND) | O(ND) | O(ND) |
| CLPSO-G | O(ND) | O(ND) | O(ND) | O(ND) |
| ACLPSO | O(ND) | O(ND) | O(ND) | O(ND) |
| ACLPSO-CA | O(ND) | O(ND) | O(ND) | O(ND) |

The average computational times over thirty independent runs of the nine PSO algorithms are presented in Fig. 4. The average computational times of PSO-cf, FIPS, FDR-PSO and ELPSO are longer than those of SL-PSO, HCLPSO, CLPSO-G, ACLPSO and ACLPSO-CA, because PSO-cf, FIPS, FDR-PSO and ELPSO update velocity and position particle by particle while SL-PSO, HCLPSO, CLPSO-G, ACLPSO and ACLPSO-CA update all particles' velocities and positions at the same time. SL-PSO consumes the least computational time. The computational time of CLPSO-G is a bit higher than that of HCLPSO. The computational times of CLPSO-G and ACLPSO are almost the same, while ACLPSO-CA needs slightly more computational time than ACLPSO. Due to their higher orders of complexity, FDR-PSO and ELPSO consume the most and the second most average computational time. These results support the computational complexity analysis in this section.

Fig. 4 The average computational time (unit: second) of nine PSO variants on the CEC2013 functions f3, f15 and f27

4.4.4 Parameter sensitivity analysis

Fig. 5 Parameter sensitivity analysis of ACLPSO-CA: (a) the effect of the updating ratio μ; (b) the effect of the archive size nAR

To demonstrate the effects of the updating ratio and the archive size, ACLPSO-CA is tested on six representative functions; the results are presented in Fig. 5. In each test, only one parameter is assigned different values, and the other parameters are set to the default values given in Table 2. As the mean errors achieved by ACLPSO-CA differ greatly across functions, normalized mean errors are employed to evaluate the performance of ACLPSO-CA under different parameter configurations. The normalized error is defined as the mean error of one parameter setting divided by the maximum mean error on the tested benchmark function, so a lower normalized error corresponds to a lower mean error. Fig. 5(a), (b) show that the unimodal function f3 is more sensitive to parameter changes than the multimodal functions. Multimodal functions have many local optima, and the fitness values of some local optima may be close to that of the global optimum, so even if the algorithm is trapped in a local optimum it still has a chance to find a nearly promising solution; hence the multimodal functions are less sensitive to different parameter values than the unimodal functions.
Fig. 5(a) indicates that ACLPSO-CA achieves the lowest mean errors on f3 and f7 when the updating ratio μ = 0.1, on f18 and f27 when μ = 0.3, on f22 when μ = 0.5, and on f15 when μ = 0.7. With μ = 0.3, ACLPSO-CA achieves high performance on f18, f22 and f27; as a result, μ = 0.3 is adopted as the default setting of ACLPSO-CA. Fig. 5(b) shows that ACLPSO-CA achieves the lowest mean error on f7 with nAR = 10, on f15 with nAR = 15, on f3 and f27 with nAR = 20, and on f18 and f22 with nAR = 30. Since ACLPSO-CA achieves high performance on f3, f18, f22 and f27 with nAR = 20, nAR = 20 is adopted as the default setting of ACLPSO-CA.

4.5 Comparison test on CEC2017
In this section, the nine involved PSO algorithms are tested on the latest CEC2017 test suite. The test results are presented in Table 7, Table 8 and Table 9. On the three unimodal functions (f1-f3), HCLPSO, CLPSO-G and FDR-PSO achieve the best performance on f1, f2 and f3, respectively. The summary of the Wilcoxon signed-rank test results in Table 8 shows that compared with PSO-cf, FIPS, FDR-PSO, ELPSO, SL-PSO, HCLPSO, CLPSO-G and ACLPSO, ACLPSO-CA wins on two, three, two, two, three, two, two and three functions, respectively; ACLPSO-CA performs better than the other involved PSO algorithms.

Table 7 The mean errors and Wilcoxon signed-rank test results of nine PSO variants on the CEC2017 test suite

| Func. | PSO-cf | FIPS | FDR-PSO | ELPSO | SL-PSO | HCLPSO | CLPSO-G | ACLPSO | ACLPSO-CA |
| f1 | 5.819E+03 (>) | 3.513E+03 (>) | 4.773E+03 (>) | 2.404E+03 (<) | 5.106E+03 (>) | 7.772E+01 (<) | 3.111E+03 (>) | 3.858E+03 (>) | 2.989E+03 |
| f2 | 5.320E+20 (>) | 1.108E+15 (>) | 1.259E+10 (>) | 4.634E+14 (>) | 1.523E+11 (>) | 2.145E+06 (>) | 2.502E+05 (<) | 9.033E+07 (>) | 1.442E+06 |
| f3 | 1.717E-08 (<) | 4.089E+03 (>) | 1.546E-08 (<) | 3.258E-02 (>) | 7.047E+03 (>) | 1.575E-03 (>) | 6.147E-04 (>) | 4.186E-07 (>) | 6.987E-08 |
| f4 | 1.447E+02 (>) | 1.247E+02 (>) | 2.850E+01 (<) | 7.800E+01 (=) | 7.736E+01 (=) | 6.848E+01 (<) | 7.312E+01 (<) | 7.560E+01 (<) | 7.799E+01 |
| f5 | 8.075E+01 (>) | 1.375E+02 (>) | 5.657E+01 (>) | 1.070E+02 (>) | 1.842E+01 (<) | 4.277E+01 (>) | 4.107E+01 (>) | 3.831E+01 (>) | 3.695E+01 |
| f6 | 6.666E+00 (>) | 2.600E-08 (<) | 7.507E-02 (>) | 2.064E+01 (>) | 1.682E-06 (<) | 3.956E-13 (<) | 6.614E-04 (>) | 4.824E-05 (<) | 5.554E-05 |
| f7 | 1.277E+02 (>) | 1.915E+02 (>) | 9.450E+01 (>) | 1.335E+02 (>) | 1.878E+02 (>) | 8.523E+01 (>) | 9.434E+01 (>) | 7.537E+01 (>) | 7.508E+01 |
| f8 | 9.030E+01 (>) | 1.361E+02 (>) | 5.715E+01 (>) | 8.856E+01 (>) | 1.738E+01 (<) | 4.372E+01 (>) | 4.712E+01 (>) | 3.920E+01 (>) | 3.423E+01 |
| f9 | 6.503E+02 (>) | 0.000E+00 (<) | 1.907E+01 (>) | 9.486E+02 (>) | 8.401E-02 (<) | 2.074E+01 (>) | 5.802E+00 (>) | 7.897E-01 (>) | 6.936E-01 |
| f10 | 3.638E+03 (>) | 6.305E+03 (>) | 3.082E+03 (>) | 3.423E+03 (>) | 1.006E+03 (<) | 2.070E+03 (<) | 2.528E+03 (<) | 2.580E+03 (<) | 2.659E+03 |
| f11 | 1.371E+02 (>) | 7.088E+01 (>) | 1.039E+02 (>) | 6.677E+01 (>) | 2.806E+01 (>) | 5.509E+01 (>) | 5.739E+01 (>) | 3.665E+01 (>) | 2.713E+01 |
| f12 | 1.554E+06 (>) | 5.291E+05 (>) | 2.136E+04 (>) | 7.220E+04 (>) | 4.592E+04 (>) | 3.588E+04 (>) | 2.341E+04 (>) | 1.831E+04 (<) | 2.058E+04 |
| f13 | 1.060E+06 (>) | 1.330E+04 (>) | 1.679E+04 (>) | 2.707E+03 (<) | 1.573E+04 (>) | 7.374E+02 (<) | 1.045E+04 (>) | 1.089E+04 (>) | 9.419E+03 |
| f14 | 4.570E+04 (>) | 6.170E+03 (>) | 4.440E+03 (>) | 9.221E+01 (<) | 1.743E+04 (>) | 3.442E+03 (>) | 5.145E+03 (>) | 2.816E+03 (>) | 2.552E+03 |
| f15 | 1.457E+04 (>) | 1.651E+04 (>) | 7.190E+03 (>) | 1.388E+03 (<) | 1.890E+03 (<) | 3.305E+02 (<) | 4.153E+03 (>) | 3.492E+03 (<) | 3.863E+03 |
| f16 | 9.090E+02 (>) | 8.411E+02 (>) | 6.923E+02 (>) | 8.995E+02 (>) | 1.537E+02 (<) | 4.413E+02 (=) | 6.832E+02 (>) | 4.338E+02 (=) | 4.392E+02 |
| f17 | 4.503E+02 (>) | 1.626E+02 (>) | 1.961E+02 (>) | 2.324E+02 (>) | 9.080E+01 (>) | 9.811E+01 (>) | 1.139E+02 (>) | 8.250E+01 (>) | 7.535E+01 |
| f18 | 1.328E+05 (>) | 3.075E+05 (>) | 8.378E+04 (=) | 2.274E+04 (<) | 1.024E+05 (>) | 9.355E+04 (>) | 1.146E+05 (>) | 7.291E+04 (<) | 8.491E+04 |
| f19 | 1.302E+04 (>) | 5.781E+03 (>) | 8.812E+03 (>) | 3.712E+03 (<) | 2.226E+03 (<) | 1.580E+02 (<) | 6.650E+03 (>) | 4.591E+03 (<) | 5.891E+03 |
| f20 | 4.708E+02 (>) | 1.927E+02 (>) | 2.182E+02 (>) | 3.312E+02 (>) | 1.326E+02 (>) | 1.267E+02 (>) | 1.478E+02 (>) | 1.524E+02 (>) | 1.238E+02 |
| f21 | 2.896E+02 (>) | 3.381E+02 (>) | 2.595E+02 (>) | 2.951E+02 (>) | 2.200E+02 (<) | 2.401E+02 (>) | 2.456E+02 (>) | 2.391E+02 (>) | 2.369E+02 |
| f22 | 2.555E+03 (>) | 1.000E+02 (=) | 9.137E+02 (>) | 2.675E+02 (>) | 1.002E+02 (=) | 1.002E+02 (=) | 1.006E+02 (=) | 1.002E+02 (=) | 1.002E+02 |
| f23 | 4.392E+02 (>) | 4.624E+02 (>) | 4.123E+02 (>) | 4.909E+02 (>) | 3.706E+02 (<) | 3.941E+02 (>) | 4.059E+02 (>) | 3.836E+02 (=) | 3.825E+02 |
| f24 | 5.057E+02 (>) | 5.636E+02 (>) | 4.700E+02 (>) | 5.172E+02 (>) | 4.477E+02 (<) | 4.673E+02 (>) | 4.719E+02 (>) | 4.565E+02 (=) | 4.545E+02 |
| f25 | 4.171E+02 (>) | 3.918E+02 (=) | 3.882E+02 (=) | 3.911E+02 (=) | 3.874E+02 (=) | 3.869E+02 (=) | 3.874E+02 (=) | 3.872E+02 (=) | 3.873E+02 |
| f26 | 2.255E+03 (>) | 1.942E+03 (>) | 1.355E+03 (>) | 2.036E+03 (>) | 1.186E+03 (>) | 4.306E+02 (<) | 1.056E+03 (<) | 9.961E+02 (<) | 1.091E+03 |
| f27 | 5.892E+02 (>) | 5.457E+02 (>) | 5.304E+02 (>) | 5.363E+02 (>) | 5.195E+02 (>) | 5.124E+02 (>) | 5.171E+02 (>) | 5.086E+02 (=) | 5.093E+02 |
| f28 | 4.392E+02 (>) | 4.066E+02 (>) | 3.132E+02 (<) | 4.133E+02 (>) | 3.755E+02 (>) | 3.758E+02 (>) | 3.444E+02 (=) | 3.295E+02 (<) | 3.441E+02 |
| f29 | 9.545E+02 (>) | 6.983E+02 (>) | 6.613E+02 (>) | 8.371E+02 (>) | 4.895E+02 (>) | 5.089E+02 (>) | 5.188E+02 (>) | 4.877E+02 (>) | 4.823E+02 |
| f30 | 4.606E+04 (>) | 2.809E+04 (>) | 4.378E+03 (<) | 6.639E+03 (>) | 3.709E+03 (<) | 4.500E+03 (>) | 4.511E+03 (>) | 4.013E+03 (<) | 4.430E+03 |

On the seven basic multimodal functions (f4-f10), SL-PSO achieves the best performance on f5, f8 and f10, while FDR-PSO, HCLPSO, ACLPSO-CA and FIPS generate the best results on f4, f6, f7 and f9, respectively. According to the summary of the Wilcoxon signed-rank test results in Table 8, ACLPSO-CA performs better than the other comparative PSO algorithms except SL-PSO, which achieves the best performance on the basic multimodal functions.
On the ten hybrid functions (f11-f20), ACLPSO-CA exhibits the best performance on f11, f17 and f20, while HCLPSO generates the best results on f13, f15 and f19. The summary of the Wilcoxon signed-rank test results in Table 8 shows that ACLPSO-CA ties with ELPSO and is superior to the other involved PSO algorithms; both ACLPSO-CA and ELPSO achieve high performance on the hybrid functions.
On the ten composition functions (f21-f30), SL-PSO wins the best performance on f21, f23, f24 and f30, and HCLPSO yields the best performance on f25 and f26. FIPS, ACLPSO, FDR-PSO and ACLPSO-CA achieve the best performance on f22, f27, f28 and f29, respectively. The summary of the Wilcoxon signed-rank test results in Table 8 shows that ACLPSO-CA performs worse than ACLPSO, ties with SL-PSO, and performs better than PSO-cf, FIPS, FDR-PSO, ELPSO, HCLPSO and CLPSO-G.

Table 8 Summary of the Wilcoxon signed-rank test for comparing ACLPSO-CA versus other peer PSO variants on the CEC2017 test suite

| Function type | PSO-cf | FIPS | FDR-PSO | ELPSO | SL-PSO | HCLPSO | CLPSO-G | ACLPSO |
| Unimodal | 2/0/1 | 3/0/0 | 2/0/1 | 2/0/1 | 3/0/0 | 2/0/1 | 2/0/1 | 3/0/0 |
| Basic multimodal | 7/0/0 | 5/0/2 | 6/0/1 | 6/1/0 | 1/1/5 | 4/0/3 | 5/0/2 | 4/0/3 |
| Hybrid | 10/0/0 | 9/0/1 | 9/1/0 | 5/0/5 | 7/0/3 | 6/1/3 | 10/0/0 | 5/1/4 |
| Composition | 10/0/0 | 8/2/0 | 7/1/2 | 9/1/0 | 4/2/4 | 7/2/1 | 6/3/1 | 2/5/3 |
| Total | 29/0/1 | 25/2/3 | 26/2/2 | 22/2/6 | 15/3/12 | 19/3/8 | 23/3/4 | 14/6/10 |

Table 9 Rank of mean performance among nine PSO variants on the CEC2017 test suite

| Func. | PSO-cf | FIPS | FDR-PSO | ELPSO | SL-PSO | HCLPSO | CLPSO-G | ACLPSO | ACLPSO-CA |
| f1 | 9 | 5 | 7 | 2 | 8 | 1 | 4 | 6 | 3 |
| f2 | 9 | 8 | 5 | 7 | 6 | 3 | 1 | 4 | 2 |
| f3 | 2 | 8 | 1 | 7 | 9 | 6 | 5 | 4 | 3 |
| f4 | 9 | 8 | 1 | 7 | 5 | 2 | 3 | 4 | 6 |
| f5 | 7 | 9 | 6 | 8 | 1 | 5 | 4 | 3 | 2 |
| f6 | 8 | 2 | 7 | 9 | 3 | 1 | 6 | 4 | 5 |
| f7 | 6 | 9 | 5 | 7 | 8 | 3 | 4 | 2 | 1 |
| f8 | 8 | 9 | 6 | 7 | 1 | 4 | 5 | 3 | 2 |
| f9 | 8 | 1 | 6 | 9 | 2 | 7 | 5 | 4 | 3 |
| f10 | 8 | 9 | 6 | 7 | 1 | 2 | 3 | 4 | 5 |
| f11 | 9 | 7 | 8 | 6 | 2 | 4 | 5 | 3 | 1 |
| f12 | 9 | 8 | 3 | 7 | 6 | 5 | 4 | 1 | 2 |
| f13 | 9 | 6 | 8 | 2 | 7 | 1 | 4 | 5 | 3 |
| f14 | 9 | 7 | 5 | 1 | 8 | 4 | 6 | 3 | 2 |
| f15 | 8 | 9 | 7 | 2 | 3 | 1 | 6 | 4 | 5 |
| f16 | 9 | 7 | 6 | 8 | 1 | 4 | 5 | 2 | 3 |
| f17 | 9 | 6 | 7 | 8 | 3 | 4 | 5 | 2 | 1 |
| f18 | 8 | 9 | 3 | 1 | 6 | 5 | 7 | 2 | 4 |
| f19 | 9 | 5 | 8 | 3 | 2 | 1 | 7 | 4 | 6 |
| f20 | 9 | 6 | 7 | 8 | 3 | 2 | 4 | 5 | 1 |
| f21 | 7 | 9 | 6 | 8 | 1 | 4 | 5 | 3 | 2 |
| f22 | 9 | 1 | 8 | 7 | 2 | 2 | 6 | 2 | 2 |
| f23 | 7 | 8 | 6 | 9 | 1 | 4 | 5 | 3 | 2 |
| f24 | 7 | 9 | 5 | 8 | 1 | 4 | 6 | 3 | 2 |
| f25 | 9 | 8 | 6 | 7 | 4 | 1 | 4 | 2 | 3 |
| f26 | 9 | 7 | 6 | 8 | 5 | 1 | 3 | 2 | 4 |
| f27 | 9 | 8 | 6 | 7 | 5 | 3 | 4 | 1 | 2 |
| f28 | 9 | 7 | 1 | 8 | 5 | 6 | 4 | 2 | 3 |
| f29 | 9 | 7 | 6 | 8 | 3 | 4 | 5 | 2 | 1 |
| f30 | 9 | 8 | 3 | 7 | 1 | 5 | 6 | 2 | 4 |
| Avg. rank | 8.200 | 7.000 | 5.500 | 6.433 | 3.767 | 3.300 | 4.700 | 3.033 | 2.833 |

On all thirty tested CEC2017 functions, SL-PSO, HCLPSO and ACLPSO-CA show the best performance on eight, seven and five functions. Though ACLPSO-CA only yields the best performance on five functions, it ranks among the top three ranks on twenty two functions. The summary of Wilcoxon signed-rank test results in Table 8 show compared with three state-of-the-art PSO variants, ACLPSO-CA wins on twenty nine, twenty five and twenty six functions, respectively. ACLPSO-CA performs significantly better than PSO-cf, FDR-PSO and FIPS. Compared with three recently reported PSO variants, ACLPSO-CA outperforms ELPSO, SL-PSO and HCLPSO on twenty two, fifteen, and nineteen functions. ACLPSO-CA performs better than ELPSO, SL-PSO and HCLPSO as well. 18

ACLPSO-CA outperforms CLPSO-G on twenty-three functions. Compared with ACLPSO, ACLPSO-CA wins on fourteen functions, ties on six and loses on ten. Owing to the cooperative archive, ACLPSO-CA performs better than ACLPSO on the unimodal, basic multimodal and hybrid functions, although it performs worse than ACLPSO on the composition functions; its overall performance is nevertheless better than that of ACLPSO. The ranks of mean performance in Table 9 show that ACLPSO-CA has the lowest average rank, which means its overall performance on all thirty CEC2017 functions is the best among the involved PSO algorithms. The average ranks of the nine PSO variants in Table 9 give the overall performance order on the CEC2017 benchmark suite as ACLPSO-CA, ACLPSO, HCLPSO, SL-PSO, CLPSO-G, FDR-PSO, ELPSO, FIPS and PSO-cf. Both ACLPSO-CA and ACLPSO perform better than the other involved PSO algorithms.

4.6 Discussions
The test results in Sections 4.4 and 4.5 show that the adaptive mechanism can significantly improve the performance of ACLPSO: with the adaptive mechanism regulating the CL probability, ACLPSO performs much better than CLPSO-G. The cooperative archive makes good use of the explored information to improve performance further; with the cooperative archive, ACLPSO-CA performs better than ACLPSO. Both ACLPSO and ACLPSO-CA exhibit robust performance on different types of functions, occupying the top two places on both the CEC2013 and CEC2017 test suites. The computational complexity of ACLPSO and ACLPSO-CA is of the same order as that of CLPSO-G; the test results in Section 4.4.3 show that the average computational time of ACLPSO is almost the same as that of CLPSO-G, while ACLPSO-CA needs slightly more.

4.7 Application to a radar system design problem
ACLPSO-CA is applied to the classic spread spectrum radar polyphase code design problem to test its effectiveness in solving real-life problems. Many radar modulation methods exist for pulse compression. Dukic and Dobrosavljevic [60] introduced a method for polyphase pulse compression code design based on the properties of the aperiodic autocorrelation function, assuming coherent radar pulse processing in the receiver. Polyphase codes are competitive because they yield lower side-lobes in signal compression and are easier to implement with digital processing techniques [61]. The polyphase code design problem can be modeled as a continuous min-max nonlinear non-convex global optimization problem with numerous local optima [62]. Its mathematical model is as follows:

\min_{X \in \mathbb{X}} f(X) = \max\{\phi_1(X), \phi_2(X), \ldots, \phi_{2m}(X)\},  (12)

where

\mathbb{X} = \{(x_1, \ldots, x_n) \in \mathbb{R}^n \mid 0 \le x_j \le 2\pi,\ j = 1, \ldots, n\}, \qquad m = 2n - 1,  (13)

and

\phi_{2i-1}(X) = \sum_{j=i}^{n} \cos\!\Big(\sum_{k=|2i-j-1|+1}^{j} x_k\Big), \quad i = 1, \ldots, n; \qquad \phi_{2i}(X) = 0.5 + \sum_{j=i+1}^{n} \cos\!\Big(\sum_{k=|2i-j|+1}^{j} x_k\Big), \quad i = 1, \ldots, n-1,  (14)

\phi_{m+i}(X) = -\phi_i(X), \quad i = 1, \ldots, m,  (15)

where x_k represents the symmetrized phase differences, and the objective is to minimize the modulus of the biggest among the samples of the autocorrelation function \phi [62].
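For concreteness, the objective defined by Eqs. (12)-(15) can be evaluated directly. The following is a minimal NumPy sketch following the CEC2011 formulation [61]; the function name radar_polyphase_objective is ours, not taken from the cited code:

import numpy as np

def radar_polyphase_objective(x):
    """Largest sample of the aperiodic autocorrelation function,
    i.e. the min-max objective f(X) of Eqs. (12)-(15).
    x: sequence of n symmetrized phase differences, 0 <= x_j <= 2*pi."""
    x = np.asarray(x, dtype=float)
    n = x.size
    m = 2 * n - 1
    phi = np.empty(2 * m)
    # phi_{2i-1}(X), i = 1..n  (first family of Eq. (14))
    for i in range(1, n + 1):
        s = 0.0
        for j in range(i, n + 1):
            # 1-based sum_{k=|2i-j-1|+1}^{j} x_k  ->  0-based slice x[|2i-j-1|:j]
            s += np.cos(x[abs(2 * i - j - 1):j].sum())
        phi[2 * i - 2] = s
    # phi_{2i}(X), i = 1..n-1  (second family of Eq. (14))
    for i in range(1, n):
        s = 0.5
        for j in range(i + 1, n + 1):
            s += np.cos(x[abs(2 * i - j):j].sum())
        phi[2 * i - 1] = s
    phi[m:] = -phi[:m]       # phi_{m+i}(X) = -phi_i(X), Eq. (15)
    return phi.max()         # f(X) = max{phi_1, ..., phi_2m}, Eq. (12)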

In this section, a 20-dimensional spread spectrum radar polyphase code design problem is tested. The maximum number of FEs is set to 200,000, and the other parameters are in accordance with Table 2. All the PSO algorithms are run 30 independent times, and the test results are presented in Table 10. Evaluated by mean error, SL-PSO presents the best performance, with ACLPSO-CA and ACLPSO ranking second and third, respectively. Both ACLPSO and ACLPSO-CA perform better than HCLPSO and CLPSO-G. HCLPSO achieves the lowest standard deviation of error and exhibits robust performance on the radar system design problem. Although ACLPSO and ACLPSO-CA are inferior to SL-PSO, they perform better than the rest of the comparative PSO algorithms without any parameter tuning, which suggests that ACLPSO and ACLPSO-CA are applicable to real-life optimization problems.

Table 10 Test results for the 20-dimensional spread spectrum radar polyphase code design problem
Algorithm  Mean  St.D  Rank

PSO-cf 1.096 0.1659 6

FIPS 1.420 0.1311 9

FDR-PSO 1.2915 0.1780 8

ELPSO 1.199 0.1532 7

SL-PSO 0.8359 0.1407 1

HCLPSO 0.9775 0.1035 4

CLPSO-G 1.064 0.1524 5

ACLPSO 0.9569 0.1755 3

ACLPSO-CA 0.9278 0.1703 2
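To illustrate how such an experiment can be set up, the following sketch optimizes the 20-dimensional radar objective from Section 4.7 with a plain global-best PSO. This is deliberately not ACLPSO-CA: the swarm size, inertia weight and acceleration coefficients are common constriction-style textbook values rather than the settings of Table 2, so it serves only as a runnable baseline:

import numpy as np

def gbest_pso(obj, dim=20, swarm_size=40, max_fes=200_000, seed=0):
    """Plain global-best PSO with constriction-style coefficients."""
    rng = np.random.default_rng(seed)
    lo, hi = 0.0, 2.0 * np.pi                      # bounds of Eq. (13)
    pos = rng.uniform(lo, hi, (swarm_size, dim))
    vel = np.zeros((swarm_size, dim))
    fit = np.apply_along_axis(obj, 1, pos)
    pbest, pbest_fit = pos.copy(), fit.copy()
    g = pbest_fit.argmin()                         # index of the global best
    fes = swarm_size
    w, c1, c2 = 0.7298, 1.49618, 1.49618
    while fes + swarm_size <= max_fes:
        r1 = rng.random((swarm_size, dim))
        r2 = rng.random((swarm_size, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (pbest[g] - pos)
        pos = np.clip(pos + vel, lo, hi)
        fit = np.apply_along_axis(obj, 1, pos)
        fes += swarm_size
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        g = pbest_fit.argmin()
    return pbest[g], pbest_fit[g]

best_x, best_err = gbest_pso(radar_polyphase_objective)
print(f"best objective over 200000 FEs: {best_err:.4f}")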

5 Conclusions and future works
In this study, an adaptive mechanism for adjusting the CL probability and a cooperative archive are combined with CLPSO; the resultant PSO algorithm is referred to as ACLPSO-CA. To test the effectiveness of the adaptive mechanism and the cooperative archive, the proposed ACLPSO-CA is evaluated on the CEC2013 and CEC2017 test suites and compared with seven popular PSO variants. The test results show that ACLPSO-CA achieves high performance on different types of functions, and its overall performance is better than that of the other selected peer PSO algorithms on both test suites. With the cooperative archive, ACLPSO-CA performs better than ACLPSO. Finally, the comparison results on the radar system design problem show that ACLPSO and ACLPSO-CA are applicable to real-life optimization problems.
For future research, more intelligent adaptive mechanisms deserve further investigation. Such mechanisms could consider more information, such as the population distribution, the velocities of the particles and the success rate of the search behavior. More parameters could be regulated adaptively and dynamically, e.g., the archive size and the updating ratio. Furthermore, new archive strategies might be studied to record and reuse the information of explored candidate solutions more efficiently.

Acknowledgements
The authors thank the anonymous reviewers for providing constructive comments, which were very helpful for improving the quality of the manuscript. The authors thank Prof. Suganthan (http://www.ntu.edu.sg/home/epnsugan/) and Prof. Yaochu Jin (http://www.soft-computing.de/jin-pub_year.html) for sharing valuable code and materials on their homepages, and the cited references' authors for offering their code and selfless help. This work is supported by the Independent Research Project of the State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body under Grant 71765003, the National Natural Science Foundation of China under Grant 61573135, the Hunan Province Graduate Student Scientific Research Innovation Fund under Grant CX2017B110, and the Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing Open Foundation under Grant 2017TP1011.

References
[1] S. Kannan, S. M. R. Slochanal, and N. P. Padhy, "Application and comparison of metaheuristic techniques to generation expansion planning problem," IEEE Transactions on Power Systems, vol. 20, pp. 466-475, Feb 2005.
[2] F. Grimaccia, M. Mussetta, and R. E. Zich, "Genetical swarm optimization: Self-adaptive hybrid evolutionary algorithm for electromagnetics," IEEE Transactions on Antennas and Propagation, vol. 55, pp. 781-785, Mar 2007.
[3] M. R. AlRashidi and M. E. El-Hawary, "A survey of particle swarm optimization applications in electric power systems," IEEE Transactions on Evolutionary Computation, vol. 13, pp. 913-918, Aug 2009.
[4] A. Banharnsakun, T. Achalakul, and B. Sirinaovakul, "The best-so-far selection in Artificial Bee Colony algorithm," Applied Soft Computing, vol. 11, pp. 2888-2901, 2011.
[5] N. B. Jin and Y. Rahmat-Samii, "Advances in particle swarm optimization for antenna designs: Real-number, binary, single-objective and multiobjective implementations," IEEE Transactions on Antennas and Propagation, vol. 55, pp. 556-567, Mar 2007.
[6] X. L. Liang, W. F. Li, Y. Zhang, and M. C. Zhou, "An adaptive particle swarm optimization method based on clustering," Soft Computing, vol. 19, pp. 431-448, Feb 2015.
[7] A. A. A. Esmin, R. A. Coelho, and S. Matwin, "A review on particle swarm optimization algorithm and its variants to clustering high-dimensional data," Artificial Intelligence Review, vol. 44, pp. 23-45, Jun 2015.
[8] A. Moharam, M. A. El-Hosseini, and H. A. Ali, "Design of optimal PID controller using hybrid differential evolution and particle swarm optimization with an aging leader and challengers," Applied Soft Computing, vol. 38, pp. 727-737, Jan 2016.
[9] J. H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Ann Arbor, MI: University of Michigan Press, 1975.
[10] A. Colorni, M. Dorigo, and V. Maniezzo, "Distributed optimization by ant colonies," in Proceedings of the European Conference on Artificial Life (ECAL 91), 1991.
[11] R. Storn and K. Price, "Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, pp. 341-359, 1997.
[12] G. H. Wu, X. Shen, H. F. Li, H. K. Chen, A. P. Lin, and P. N. Suganthan, "Ensemble of differential evolution variants," Information Sciences, vol. 423, pp. 172-186, Jan 2018.
[13] Z. W. Geem, J. H. Kim, and G. V. Loganathan, "A new heuristic optimization algorithm: Harmony search," Simulation, vol. 76, pp. 60-68, Feb 2001.
[14] N. Hansen, S. D. Muller, and P. Koumoutsakos, "Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES)," Evolutionary Computation, vol. 11, pp. 1-18, 2003.
[15] D. Karaboga and B. Basturk, "A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm," Journal of Global Optimization, vol. 39, pp. 459-471, Nov 2007.
[16] E. Rashedi, H. Nezamabadi-Pour, and S. Saryazdi, "GSA: A gravitational search algorithm," Information Sciences, vol. 179, pp. 2232-2248, Jun 2009.
[17] R. V. Rao, V. J. Savsani, and D. P. Vakharia, "Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems," Computer-Aided Design, vol. 43, pp. 303-315, Mar 2011.
[18] G. Wu, "Across neighbourhood search for numerical optimization," Information Sciences, vol. 329, pp. 597-618, 2016.
[19] Y. Shi and R. C. Eberhart, "Fuzzy adaptive particle swarm optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, 2001, pp. 101-106.
[20] M. Clerc and J. Kennedy, "The particle swarm - explosion, stability, and convergence in a multidimensional complex space," IEEE Transactions on Evolutionary Computation, vol. 6, pp. 58-73, 2002.
[21] T. Peram, K. Veeramachaneni, and C. K. Mohan, "Fitness-distance-ratio based particle swarm optimization," in Proceedings of the IEEE Swarm Intelligence Symposium, 2003, pp. 174-181.
[22] R. Mendes, J. Kennedy, and J. Neves, "The fully informed particle swarm: Simpler, maybe better," IEEE Transactions on Evolutionary Computation, vol. 8, pp. 204-210, 2004.
[23] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, pp. 240-255, 2004.
[24] K. E. Parsopoulos and M. N. Vrahatis, "A unified particle swarm optimization scheme," in Proceedings of the International Conference of Computational Methods in Sciences and Engineering, Lecture Series on Computer and Computational Sciences, Attica, Greece: VSP International Science, 2004.
[25] J. J. Liang and P. N. Suganthan, "Dynamic multi-swarm particle swarm optimizer with local search," in Proceedings of the IEEE Congress on Evolutionary Computation, 2005, pp. 522-528.
[26] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Transactions on Evolutionary Computation, vol. 10, pp. 281-295, 2006.
[27] Z.-H. Zhan, J. Zhang, Y. Li, and H. S.-H. Chung, "Adaptive particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 39, pp. 1362-1381, 2009.
[28] Z. H. Zhan, J. Zhang, Y. Li, and Y. H. Shi, "Orthogonal learning particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 15, pp. 832-847, Dec 2011.
[29] W. N. Chen, J. Zhang, Y. Lin, N. Chen, Z. H. Zhan, H. S. H. Chung, et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, pp. 241-258, 2013.
[30] R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, 1995, pp. 39-43.
[31] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, 1995.
[32] F. Valdez, J. C. Vazquez, P. Melin, and O. Castillo, "Comparative study of the use of fuzzy logic in improving particle swarm optimization variants for mathematical functions using co-evolution," Applied Soft Computing, vol. 52, pp. 1070-1083, 2017.
[33] J. Li, J. Q. Zhang, C. J. Jiang, and M. C. Zhou, "Composite particle swarm optimizer with historical memory for function optimization," IEEE Transactions on Cybernetics, vol. 45, pp. 2350-2363, Oct 2015.
[34] H.-b. Ouyang, L.-q. Gao, S. Li, and X.-y. Kong, "Improved global-best-guided particle swarm optimization with learning operation for global optimization problems," Applied Soft Computing, vol. 52, pp. 987-1008, 2017.
[35] Y. J. Gong, J. J. Li, Y. C. Zhou, Y. Li, H. S. H. Chung, Y. H. Shi, et al., "Genetic learning particle swarm optimization," IEEE Transactions on Cybernetics, vol. 46, pp. 2277-2290, Oct 2016.
[36] Q. D. Qin, S. Cheng, Q. Y. Zhang, L. Li, and Y. H. Shi, "Particle swarm optimization with interswarm interactive learning strategy," IEEE Transactions on Cybernetics, vol. 46, pp. 2238-2251, Oct 2016.
[37] W. H. Lim and N. A. M. Isa, "Teaching and peer-learning particle swarm optimization," Applied Soft Computing, vol. 18, pp. 39-58, May 2014.
[38] M. Hu, T. Wu, and J. D. Weir, "An adaptive particle swarm optimization with multiple adaptive methods," IEEE Transactions on Evolutionary Computation, vol. 17, pp. 705-720, 2013.
[39] N. Lynn and P. N. Suganthan, "Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation," Swarm and Evolutionary Computation, vol. 24, pp. 11-24, 2015.
[40] N. Lynn and P. N. Suganthan, "Ensemble particle swarm optimizer," Applied Soft Computing, vol. 55, pp. 533-548, 2017.
[41] S. Gulcu and H. Kodaz, "A novel parallel multi-swarm algorithm based on comprehensive learning particle swarm optimization," Engineering Applications of Artificial Intelligence, vol. 45, pp. 33-45, Oct 2015.
[42] M. Hasanzadeh, M. R. Meybodi, and M. M. Ebadzadeh, "Adaptive parameter selection in comprehensive learning particle swarm optimizer," in Artificial Intelligence and Signal Processing (AISP 2013), vol. 427, A. Movaghar, M. Jamzad, and H. Asadi, Eds. Berlin: Springer-Verlag, 2014, pp. 267-276.
[43] M. G. H. Omran, M. Clerc, A. Salman, and S. Alsharhan, "A fuzzy-controlled comprehensive learning particle swarm optimizer," in Swarm Intelligence Based Optimization, vol. 8472, P. Siarry, L. Idoumghar, and J. Lepagnot, Eds. Berlin: Springer-Verlag, 2014, pp. 35-41.
[44] R. C. Eberhart and Y. H. Shi, "Particle swarm optimization: Developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation, 2001, pp. 81-86.
[45] J. Q. Zhang and A. C. Sanderson, "JADE: Adaptive differential evolution with optional external archive," IEEE Transactions on Evolutionary Computation, vol. 13, pp. 945-958, Oct 2009.
[46] I. T. Yang, "Using elitist particle swarm optimization to facilitate bicriterion time-cost trade-off analysis," Journal of Construction Engineering and Management, vol. 133, pp. 498-505, 2007.
[47] R. Zhang, J. Z. Zhou, S. Ouyang, X. M. Wang, and H. F. Zhang, "Optimal operation of multi-reservoir system by multi-elite guide particle swarm optimization," International Journal of Electrical Power & Energy Systems, vol. 48, pp. 58-68, Jun 2013.
[48] G. Wu, D. Qiu, Y. Yu, W. Pedrycz, M. Ma, and H. Li, "Superior solution guided particle swarm optimization combined with local search techniques," Expert Systems with Applications, vol. 41, pp. 7536-7548, 2014.
[49] S. X. Cheng, H. Zhan, and Z. X. Shu, "An innovative hybrid multi-objective particle swarm optimization with or without constraints handling," Applied Soft Computing, vol. 47, pp. 370-388, Oct 2016.
[50] Q. Z. Lin, J. Q. Li, Z. H. Du, J. Y. Chen, and Z. Ming, "A novel multi-objective particle swarm optimization with multiple search strategies," European Journal of Operational Research, vol. 247, pp. 732-744, Dec 2015.
[51] S. Chinta, R. Kommadath, and P. Kotecha, "A multi-objective improved teaching-learning based optimization algorithm (MO-ITLBO)," Information Sciences, vol. 373, pp. 337-350, Dec 2016.
[52] J. J. Liang, B. Y. Qu, P. N. Suganthan, and A. G. Hernández-Díaz, "Problem definitions and evaluation criteria for the CEC 2013 special session on real-parameter optimization," Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China, Tech. Rep., 2013.
[53] N. H. Awad, M. Z. Ali, J. J. Liang, B. Y. Qu, and P. N. Suganthan, "Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective bound constrained real-parameter numerical optimization," Tech. Rep., 2016. Available: http://www.ntu.edu.sg/home/epnsugan/
[54] M. Hollander and D. A. Wolfe, Nonparametric Statistical Methods, 2nd ed. John Wiley & Sons, 1999.
[55] J. D. Gibbons and S. Chakraborti, Nonparametric Statistical Inference, 5th ed. Chapman & Hall, 2010.
[56] J. Derrac, S. García, D. Molina, and F. Herrera, "A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms," Swarm and Evolutionary Computation, vol. 1, pp. 3-18, 2011.
[57] A. R. Jordehi, "Enhanced leader PSO (ELPSO): A new PSO variant for solving global optimisation problems," Applied Soft Computing, vol. 26, pp. 401-417, 2015.
[58] R. Cheng and Y. Jin, "A social learning particle swarm optimization algorithm for scalable optimization," Information Sciences, vol. 291, pp. 43-60, 2015.
[59] M. R. Tanweer, S. Suresh, and N. Sundararajan, "Self regulating particle swarm optimization algorithm," Information Sciences, vol. 294, pp. 182-202, 2015.
[60] M. L. Dukic and Z. S. Dobrosavljevic, "A method of a spread-spectrum radar polyphase code design," IEEE Journal on Selected Areas in Communications, vol. 8, pp. 743-749, 1990.
[61] S. Das and P. N. Suganthan, "Problem definitions and evaluation criteria for CEC 2011 competition on testing evolutionary algorithms on real world optimization problems," Tech. Rep., 2010.
[62] A. M. Perez-Bellido, S. Salcedo-Sanz, E. G. Ortiz-Garcia, and A. Portilla-Figueras, "A hybrid evolutionary programming algorithm for spread spectrum radar polyphase codes design," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2007), 2007, pp. 682-688.

Self introduction:
Anping Lin received the B.S. and M.S. degrees in marine engineering from Dalian Maritime University, Dalian, China, in 2004 and 2007, respectively. From 2007 to 2014 he taught at Guangdong Ocean University, Zhanjiang, China. He is currently working towards the Ph.D. degree at Hunan University, Changsha, China. His main research interests include evolutionary computation, multimodal optimization and computational intelligence.
Wei Sun received his B.S., M.S. and Ph.D. degrees from the Department of Automation Engineering, Hunan University, P. R. China, in 1997, 1999 and 2003, respectively. He is now a Professor at the College of Electrical and Information Engineering, Hunan University. His areas of interest are computer vision and robotics, neural networks, and intelligent control.
Hongshan Yu is an Associate Professor at the College of Electrical and Information Engineering, Hunan University. He received his B.S., M.S. and Ph.D. degrees from Hunan University in 2001, 2004 and 2007. His research interests include mobile robot navigation and computer vision.
Guohua Wu received the B.S. degree in Information Systems and the Ph.D. degree in Operations Research from the National University of Defense Technology, China, in 2008 and 2014, respectively. From 2012 to 2014, he was a visiting Ph.D. student at the University of Alberta, Edmonton, Canada, supervised by Prof. Witold Pedrycz. He is now a Lecturer at the College of Information Systems and Management, National University of Defense Technology, Changsha, China. His current research interests include planning and scheduling, evolutionary computation and machine learning. He has authored more than 20 refereed papers, including those published in IEEE Transactions on Systems, Man, and Cybernetics: Systems, Information Sciences, Computers & Operations Research, and Applied Soft Computing. He serves as an Associate Editor of Swarm and Evolutionary Computation and an editorial board member of the International Journal of Bio-Inspired Computation. He is a regular reviewer for more than 20 journals, including IEEE TEVC, IEEE TCYB, Information Sciences and Applied Soft Computing.
Hongwei Tang was born in 1982. He is a Ph.D. candidate at the College of Electrical and Information Engineering, Hunan University. His research interests include intelligent control, robot vision, SLAM and path planning, and intelligent optimization algorithms.
