
Multipopulation Cooperative Particle Swarm Optimization with a Mixed Mutation Strategy

Wei Li a, Xiang Meng a, Ying Huang b, Zhang-Hua Fu c,d,*

a School of Information Engineering, Jiangxi University of Science and Technology, Ganzhou 341000, China
b Institute of Mathematical and Computer Sciences, Gannan Normal University, Ganzhou 341000, China
c Shenzhen Institute of Artificial Intelligence and Robotics for Society
d Institute of Robotics and Intelligent Manufacturing, The Chinese University of Hong Kong, Shenzhen 518172, China

Abstract

The traditional particle swarm optimization algorithm learns from two best experiences: the best position previously found by the particle itself and the best position found by the entire population to date. This learning strategy is simple, but when addressing high-dimensional optimization problems, its low efficiency makes it unable to quickly find the global optimal solution. This paper proposes a multipopulation cooperative particle swarm optimization (MPCPSO) algorithm with a dynamic segment-based mean learning strategy and a multidimensional comprehensive learning strategy. In MPCPSO, the dynamic segment-based mean learning strategy (DSMLS), which is employed to construct learning exemplars, achieves information sharing and coevolution between populations. The multidimensional comprehensive learning strategy (MDCLS) is employed to speed up convergence and improve the accuracy of MPCPSO solutions. Additionally, a differential mutation operator is introduced to increase the population diversity and enhance the global exploration ability of MPCPSO. Sixteen benchmark functions and seven well-known PSO variants are employed to verify the advantages of MPCPSO. The comparison results indicate that MPCPSO has a faster convergence speed, obtains more accurate solutions, and is more robust.

Keywords: Particle swarm optimization, Dynamic segment-based mean learning strategy, Multidimensional comprehensive learning, Differential mutation

* Corresponding author. Email address: [email protected] (Zhang-Hua Fu)


1. Introduction

Over the last few years, many real-world problems have proven difficult to address with traditional methods due to their increasingly complex structures. Therefore, scholars have tried to find better methods for solving complex optimization problems. Evolutionary algorithms (EAs), especially swarm intelligence optimization algorithms, have attracted increasing attention from researchers due to their efficiency and stability on complex problems; examples include the genetic algorithm (GA) [1], the artificial bee colony (ABC) algorithm [3], the differential evolution (DE) algorithm [4], simulated annealing (SA) [5], the ant colony optimization (ACO) algorithm [6], and the particle swarm optimization (PSO) algorithm [10]. PSO has been widely adopted because it requires few parameters to be adjusted, is built on relatively simple concepts, and is easy to implement.

The particle swarm optimization algorithm, inspired by the foraging behavior of birds, was originally proposed by Kennedy and Eberhart in 1995 to optimize continuous functions [7, 8]. Since then, PSO and its variants have been widely applied to complex practical problems, including data clustering [9], image processing [10], feature selection [11], power systems [12], artificial neural networks [13, 14], scheduling planning [15] and many others [16]. Although PSO has been intensively developed, some problems remain to be solved. In particular, PSO tends to converge prematurely when the problem space becomes complicated, and it is prone to a lack of population diversity in the later stages of the search.

Research aimed at improving the PSO algorithm falls into four main aspects: algorithm parameter fine-tuning, topology choices, learning strategy improvements, and integration with other algorithms. The following briefly introduces the main research branches of these four aspects.

(1) Parameter tuning: Shi et al. proposed a PSO algorithm with an inertia weight set to a constant value to balance the global exploration and local exploitation abilities of the algorithm [36]. However, this approach limits the search ability of PSO to some extent. Therefore, a strategy that linearly decreases the inertia weight was proposed, which better balances the search capability of PSO [17]. Ratnaweera et al. proposed the HPSO-TVAC algorithm, in which the acceleration coefficients are varied over time to enhance the search capability of the algorithm [18]. Zhan et al. proposed the APSO algorithm, in which the control parameters of the algorithm are adaptively changed to balance local and global search [19]. Chen et al. proposed the CDWPSO algorithm, which introduces an inertia weight with chaotic mapping to modify the search direction [20].

(2) Topological structure: Some scholars have studied the topological structure of PSO to enhance its population diversity and avoid premature convergence. Several typical topologies were proposed in [21], including a ring topology and a tooth topology. Mendes et al. proposed the FIPSO algorithm, which uses a fully informed strategy in which each particle's update is based on the historical best experience of all its neighbors rather than only its own [22]. Peram et al. proposed the FDRPSO algorithm based on the fitness-distance ratio; in this approach, for each dimension of a particle, the particle with the highest fitness-distance ratio is regarded as a learning exemplar to guide that dimension [35]. Janson et al. constructed a dynamically changing tree topology in which each particle learns from its parent, so that the information of each particle is utilized effectively [37]. Wang et al. proposed the DTT-PSO algorithm with a dynamic tournament topology, in which each particle learns from particles randomly selected from the entire population; this significantly increases the population diversity and yields excellent local search performance compared to other algorithms [24].

(3) Learning strategy: PSO is prone to falling into a local optimum and converging prematurely because it uses only two experiences to guide particle learning. Improving the PSO learning strategy has therefore attracted the attention of many researchers. Liang et al. proposed the DMS-PSO algorithm with a dynamic neighborhood structure, in which the learning of each particle is no longer limited to its own subpopulation but also draws on other subpopulations [25]. Cao et al. proposed the NLPSO algorithm with a neighborhood learning strategy, where learned memories are used to update the entire population [26]. Liu et al. proposed the hierarchical simple THSPSO algorithm, which uses diverse learning strategies at different search stages so that the promising information from each particle is fully utilized [27]. Deng et al. proposed the RBLSO algorithm with biased learning to accelerate convergence [28]. Zhan et al. proposed the OLPSO algorithm with orthogonal learning, in which each particle obtains useful information both from its own historical best experience and from its neighborhood's historical best experience through an orthogonal experimental design [34]. Yang et al. proposed the SPLSO algorithm, which uses segment-based predominant learning: the dimensions of each poor-performing particle are first segmented, and each segment then learns from a better-performing particle, allowing the algorithm to capitalize on the information of the better particles and avoid premature convergence [29].

(4) Hybridization: Hybridization is a crucial research field in which the PSO algorithm is combined with other evolutionary algorithms to improve its performance. Zhang et al. proposed the DSPSO algorithm by combining the


differential mutation operation with the SLPSO algorithm [30]. Meng et al. proposed the CSPSO algorithm, which introduces a crossover operator to increase the population diversity [31]. Nasiraghdam et al. proposed the HGAPSO algorithm by combining PSO with the GA; using genetic operators in the PSO algorithm can enhance the population diversity and improve the algorithm's ability to converge to the global optimum [32]. Gong et al. proposed the GL-PSO algorithm with genetic learning, in which learning exemplars are generated by genetic operations and guide the movements of all the particles, accelerating the convergence of PSO [33]. Other relevant studies [13, 42] showed that hybrid PSO algorithms combined with other evolutionary algorithms can not only enhance the population diversity but also prevent premature convergence and improve the probability of finding the global optimum.

According to the above analysis, the keys to improving PSO are to balance its global exploration and local exploitation abilities while maintaining the population diversity, which can prevent the PSO algorithm from converging prematurely. However, the nature of each optimization problem is different; consequently, a single improvement strategy does not ensure that the algorithm is effective on every problem. The existing literature shows that multiple learning strategies perform well at balancing the exploration and exploitation capabilities of PSO, help prevent the algorithm from falling into a local optimum on complex optimization problems, and increase the probability of converging to the global optimum.

To further improve the performance of PSO, inspired by the SPLSO algorithm, a multipopulation cooperative particle swarm optimization (MPCPSO) algorithm with a mixed mutation strategy is proposed in this paper. In each MPCPSO iteration, the population is divided into an "elitist population (EP)" and a "general population (GP)" using a competitive mechanism, and each subpopulation adopts a different learning strategy. For the general population, a dynamic segment-based mean learning strategy (DSMLS) is developed to divide the dimensions of each particle into m segments; meanwhile, a tournament selection operation is introduced that selects a fixed number of particles from the elitist population to construct a learning exemplar for each segment. For the elitist population, a multidimensional comprehensive learning strategy (MDCLS) is presented to guide particle evolution, and a differential mutation operator is introduced to increase population diversity and prevent MPCPSO from converging prematurely. The comparison results indicate that this learning strategy is highly effective and achieves comprehensively competitive solutions.


The remainder of this paper is organized as follows. Section 2 briefly introduces related works. Section 3 describes the two proposed learning strategies and the MPCPSO algorithm in detail. In Section 4, sixteen benchmark functions and seven well-known PSO variants are employed to verify the effectiveness of MPCPSO. Finally, relevant conclusions are given in Section 5.

2. Related works

This section reviews the classical particle swarm optimization algorithm and briefly introduces two typical PSO variants. In contrast to the traditional learning strategy, these two PSO algorithms use new learning strategies, and both provided some inspiration for this paper.

2.1 Classic PSO

PSO is a population-based stochastic optimization algorithm in which each particle represents a potential solution to the problem. Each particle adjusts its trajectory in the feasible domain based on its personal historical best experience ($P_{id}$) and its neighborhood's historical best experience ($P_{gd}$). Without loss of generality, particle i at time t consists of a position vector $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ and a velocity vector $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$ in the D-dimensional search space. During the optimization process, the next-generation population is generated by Equations (1) and (2):

$$V_{id}(t+1) = \omega V_{id}(t) + c_1 r_1 \left(P_{id}(t) - X_{id}(t)\right) + c_2 r_2 \left(P_{gd}(t) - X_{id}(t)\right) \qquad (1)$$

$$X_{id}(t+1) = X_{id}(t) + V_{id}(t+1) \qquad (2)$$

where $i = 1, 2, \ldots, N$, $d = 1, 2, \ldots, D$, N is the population size, $\omega$ is the inertia weight, $r_1$ and $r_2$ are random numbers in the interval [0, 1], and $c_1$ and $c_2$ are acceleration coefficients. In addition, the velocity of the particles is limited by a velocity constraint, $V_{max}$. A minimal sketch of this update loop is given at the end of this subsection.

Although the PSO algorithm has undergone substantial development, some problems remain to be solved. For example, PSO can easily become trapped in a local optimum when solving high-dimensional problems: each particle utilizes only two experiences, the previous best position encountered by the particle itself and that of the best particle in the population, to guide its movements. The strategy is easy to use, but it is ineffective on problems in complex environments. Hence, researchers have proposed a variety of learning strategies. Two typical learning strategies are the comprehensive learning strategy (CLS) and the segment-based predominant learning strategy (SPL). A brief introduction to these two strategies is given below.
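For concreteness, the following is a minimal sketch of the update in Equations (1) and (2) in Python; the population size, coefficient values, bounds, and the sphere objective are illustrative choices, not the settings of any experiment in this paper.

import numpy as np

def classic_pso(f, D=30, N=40, w=0.729, c1=1.49445, c2=1.49445,
                lo=-100.0, hi=100.0, iters=1000, seed=0):
    """Minimal classic PSO for minimization (Eqs. (1) and (2))."""
    rng = np.random.default_rng(seed)
    vmax = 0.2 * (hi - lo)                       # velocity clamp, Vmax = 0.2 x Range
    X = rng.uniform(lo, hi, (N, D))              # positions
    V = rng.uniform(-vmax, vmax, (N, D))         # velocities
    P, pbest = X.copy(), np.apply_along_axis(f, 1, X)   # personal bests
    g = P[pbest.argmin()].copy()                 # global best position, Pg

    for _ in range(iters):
        r1, r2 = rng.random((N, D)), rng.random((N, D))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)   # Eq. (1)
        V = np.clip(V, -vmax, vmax)
        X = np.clip(X + V, lo, hi)                          # Eq. (2)
        fx = np.apply_along_axis(f, 1, X)
        improved = fx < pbest                    # update personal and global bests
        P[improved], pbest[improved] = X[improved], fx[improved]
        g = P[pbest.argmin()].copy()
    return g, pbest.min()

best_x, best_f = classic_pso(lambda x: np.sum(x ** 2))      # sphere function f1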


2.2 PSO with a comprehensive learning strategy

Liang et al. [23] proposed the CLPSO algorithm, which differs from the classical PSO: each dimension of particle i learns from its personal historical best experience with probability $1 - P_c$ or from the corresponding dimension of another particle's historical best experience, selected by tournament selection, with probability $P_c$. The update process can be calculated by Equation (3):

$$V_{id}(t+1) = \omega V_{id}(t) + c\, r\, \left(P_{f_i(d)}(t) - X_{id}(t)\right) \qquad (3)$$

where $\omega$ is the inertia weight, r is a random number in the interval [0, 1], c is the acceleration coefficient, and $f_i(d)$ is the index of the learning exemplar constructed by the CLS. The process of the CLS is described briefly in the following. For each dimension of particle i:

(1) Randomly generate a number p, p ∈ [0, 1].
(2) When p > $P_c$, the corresponding dimension of particle i follows its personal historical best experience.
(3) When p < $P_c$, the best particle is selected from the population through tournament selection and used to guide the corresponding dimension of particle i.

Thus, instead of learning from a single particle, the CLS can learn from different particles, which effectively preserves the population diversity and provides a large and promising search space.

2.3 PSO with a segment-based predominant learning strategy

Inspired by the observation in OLPSO [34] and CLPSO [23] that exemplars generated by combining the dimensional information of different particles can provide potentially better search directions, Yang et al. [29] proposed a segment-based learning strategy (SL). The SL strategy randomly divides the dimensions of particles into m segments, and each segment is guided by a learning exemplar. With this approach, not only can poor-performing particles learn potentially beneficial information, but information from different dimensions of the exemplars can also be preserved. Additionally, inspired by the competitive learning strategy of the CSO [38], they proposed the segment-based predominant learning (SPL) strategy by combining the SL and predominant learning (PL) strategies. The main idea of the SPL strategy is as follows:

(1) Based on a competition mechanism, the population is divided into a better individual set RG and a poor individual set RP.
(2) The inferior particles learn from the superior particles, and the superior particles enter the next generation directly.
(3) The dimensions of each poor particle are segmented, and each segment learns from a better particle selected from the RG.

The update equation for particle velocity is as follows:

$$V^{G_i}_{RP_j}(t+1) = r_1 V^{G_i}_{RP_j}(t) + r_2 \left(X^{G_i}_{RG_{g(j,i)}}(t) - X^{G_i}_{RP_j}(t)\right) + \varphi\, r_3 \left(\hat{X}^{G_i}(t) - X^{G_i}_{RP_j}(t)\right) \qquad (4)$$

where $G = (G_1, \ldots, G_i, \ldots, G_m)$ represents the segmentation of the particle dimensions and $G_i$ its i-th segment, m is the number of segments, $g(j, i)$ indicates the better particle in the RG, $\hat{X}$ represents the weighted mean of the entire population, $r_1$, $r_2$, and $r_3$ are random numbers from the interval [0, 1], and $\varphi$ is employed to control the influence of $\hat{X}$. The related literature indicates that the SPL strategy significantly improves the performance of PSO and yields good solutions; a sketch of one SPL generation is given at the end of this subsection. Building on the above two strategies, we propose applying different learning strategies to the evolutionary process of different subpopulations, as described in detail in the next section.
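The following sketch illustrates one generation of the SPL idea under simplifying assumptions: the population is split in half by fitness, the plain population mean stands in for the weighted mean of Eq. (4), and the values of m and of the control coefficient are arbitrary illustrative choices.

import numpy as np

def spl_generation(X, V, f, m=4, phi=0.1, rng=np.random.default_rng()):
    """One generation of segment-based predominant learning (Eq. (4)), minimization."""
    N, D = X.shape
    order = np.apply_along_axis(f, 1, X).argsort()
    RG, RP = order[: N // 2], order[N // 2:]     # better half / poorer half
    xbar = X.mean(axis=0)                        # plain mean stands in for the weighted mean
    segments = np.array_split(np.arange(D), m)   # dimensions split into m segments
    for j in RP:                                 # only the poorer particles are updated
        for s in segments:
            g = rng.choice(RG)                   # a better particle guides this segment
            r1, r2, r3 = rng.random(3)
            V[j, s] = (r1 * V[j, s]
                       + r2 * (X[g, s] - X[j, s])
                       + phi * r3 * (xbar[s] - X[j, s]))     # Eq. (4)
        X[j] = X[j] + V[j]                       # better particles pass through unchanged
    return X, V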

3. The proposed MPCPSO algorithm

In this section, the multipopulation cooperative PSO algorithm with a mixed mutation strategy is introduced in detail. First, the proposed dynamic segment-based mean learning strategy is introduced in subsection 3.1. Then, the multidimensional comprehensive learning strategy is described in subsection 3.2. Finally, the general framework and the relevant pseudocode of MPCPSO are given in subsection 3.3.

3.1 The dynamic segment-based mean learning strategy

In recent decades, multipopulation-based heuristic optimization algorithms have attracted researchers' interest and have gradually become some of the most commonly used methods for addressing real-world optimization problems [39]. Yazdani et al. proposed the FTMPSO algorithm, which is based on a multipopulation strategy, to find multiple optimal solutions in a dynamically changing environment [40]. Liu et al. proposed the CMPSODMO algorithm, in which an improved coevolutionary multipopulation strategy is used to solve dynamic multiobjective problems in a rapidly changing environment, with an information sharing strategy that lets all the swarms coevolve [41]. Xu et al. proposed the TSLPSO algorithm with two swarms and presented two different learning strategies to guide the search process of the algorithm [43].

In the proposed MPCPSO algorithm, the population in each iteration is divided into two groups by a competitive mechanism based on fitness ranking. First, the fitness value of each particle is calculated, and the particles are sorted in ascending order of fitness. Second, the number of selected particles t is determined in advance. Finally, the first t particles are selected as one subpopulation, and the remaining particles are selected

as another subpopulation. The first group is called the elitist population and uses the MDCLS for population evolution. The second group is called the general population and adopts the DSMLS for population evolution. By using a different learning strategy for each subpopulation, the population diversity is maintained, and the particle search process is guided toward a promising search range.

The PSO algorithm is also subject to the "oscillation" phenomenon and the "two steps forward, one step back" phenomenon [42]. Although the global search capability of some PSO variants is significantly improved, they tend to converge slowly in the later stages of the algorithm. Therefore, we propose a novel dynamic segment-based mean learning strategy (DSMLS) to accelerate the algorithm's convergence. In the DSMLS, each particle in the general population learns from the elitist population. Algorithm 1 lists the pseudocode for the dynamic segment-based mean learning strategy, and the detailed process is described below.

Algorithm 1: Dynamic segment-based mean learning strategy
Input: general population GP
Output: position vector Xi, velocity vector Vi
1. for i = 1 → N1 do    % N1 is the subpopulation size
2.   Calculate the number of segments of the particle dimensions, m, by Eq. (5);
3.   Calculate ds = ⌈D/m⌉, y = D / m;
4.   for j = 1 : ds : D do
5.     rod ← select k individuals from the elitist population by tournament;
6.     if j + ds − 1 > D then
7.       for d = D − y + 1 → D do
8.         Construct the learning exemplar pl by Eq. (6);
9.       end for
10.    else
11.      for d = j → j + ds − 1 do
12.        Construct the learning exemplar pl by Eq. (6);
13.      end for
14.    end if
15.  end for
16. end for
17. Update Xi, Vi by Eq. (2) and Eq. (7).

First, inspired by the SPLSO algorithm, the dimensions of each particle are divided dynamically into m segments, each of which follows a learning exemplar constructed from the elitist population. A novel dynamic segmentation strategy is designed to obtain more promising information from better particles in the elitist population. The number of segments m is obtained by Equation (5):

$$m = \left\lceil D \cdot \frac{gen}{maxgen} \right\rceil \qquad (5)$$

where gen represents the current iteration number, maxgen is the maximum number of iterations, and D is the problem dimension. Equation (5) shows that m increases from 1 to D; thus, it can guide the global search of the algorithm in the early stage and helps the algorithm jump out of local optima near the end of the search.

Second, tournament selection is employed to select k particles from the elitist population. So that the obtained exemplar includes more effective information, a weighted mean strategy is used to construct the learning exemplar pl, as shown in Equation (6):

$$pl_d = \frac{1}{k} \sum_{i=1}^{k} \frac{f(rod_i)}{\sum_{j=1}^{k} f(rod_j)}\, rod_i^d \qquad (6)$$

where rod is the selected set of k particles from the elitist population, and f(·) is the fitness function. A learning exemplar constructed from the information of many different particles can provide more effective search information and better guide the movement of particles in the feasible domain.

Finally, the particle velocity is updated by Equation (7):

$$V_{id}(t+1) = \omega V_{id}(t) + c_1 r_1 \left(pl_{id}(t) - X_{id}(t)\right) + \lambda r_2 \left(P_{gd}(t) - X_{id}(t)\right) \qquad (7)$$

where $\omega$, $r_1$, $r_2$, and $c_1$ have the same meanings as in Equation (1), $P_{gd}$ represents the best position in the current population, and $\lambda$ is the learning rate, which controls the influence of $P_{gd}$ and is calculated by Equation (8):

$$\lambda = \frac{1}{\left(1 + \exp\left(-\frac{1}{me_1 S} \sum_{i=1}^{S} f(X_i)\right)\right) \cdot gen} \qquad (8)$$

where S is the current population size, $me_1$ is the mean fitness value in the first iteration, and gen represents the current iteration number.

Based on the above improvements, each particle in the subpopulation with poor fitness values can learn from the dimensional information of multiple better particles and thereby obtain better dimensional information; thus, the population evolves in a good direction, which prevents the algorithm from falling into a local optimum.

3.2 The multidimensional comprehensive learning strategy

For the elitist population, when the particle velocity is updated, particle i is affected only by mbest, which is generated by a dimensional learning strategy. The velocity updating equation is as follows:


$$V_{id}(t+1) = \omega V_{id}(t) + c\, r\, \left(mbest_{id}(t) - X_{id}(t)\right) \qquad (9)$$

where $\omega$, r, and c have the same meanings as in Equation (3), and mbest is obtained by Equation (10). As shown in Equation (9), mbest is treated as the only learning exemplar used to guide particle movements. However, if the algorithm falls into a local optimum in some iterations, mbest will not be able to effectively guide the particles toward a better solution. Inspired by the fact that human beings always refer to related problems and consult other people when solving difficult problems, the MDCLS was developed to help particles escape from local optima. In the MDCLS, each particle learns not only from other particles but also from its own other dimensions. Therefore, each particle obtains more promising information and can search for the global optimum more quickly:

$$mbest_{id} = \frac{\theta r}{D} \sum_{d=1}^{D} X_{id} + \frac{(1-\theta)(1-r)}{M} \sum_{i=1}^{M} X_{id} \qquad (10)$$

where r is a random number in the interval [0, 1], D is the dimension of the optimization problem, M is the number of particles in the elitist population, and $\theta$ is a dynamic weight calculated by Equation (11):

$$\theta = \frac{1}{1 + \exp\left(-\left(X_{id} - \frac{1}{D} \sum_{d=1}^{D} X_{id}\right)\right)} \qquad (11)$$

where i is the particle index. An improved position update equation is proposed to accelerate the search process, as shown in Equation (12):

$$X_{id}(t+1) = \lambda X_{id}(t) + V_{id}(t+1) \qquad (12)$$

where $\lambda$ is a dynamically changing parameter used to control the influence of the current position of the particles, calculated by Equation (8).

According to Equation (12), the convergence of MPCPSO is greatly accelerated, but this approach also increases the risk of falling into a local optimum. Therefore, when the algorithm falls into a local optimum over several iterations, a differential mutation operator is introduced to perturb the particles and enhance the population diversity [2, 44]. A particle is updated with the best location ($P_{gd}$) in the entire population and two particles selected randomly from the elitist population. In other words, first, two different individuals are selected randomly from the population. Then, the difference between those two individuals is calculated to form a differential vector. Finally, the differential vector is scaled and summed with $X_{best}$. This operation is expressed in Equation (13):

$$X_{id}(t+1) = X_{best}(t) + F \left(X_m(t) - X_n(t)\right) \qquad (13)$$

where $X_{best}$ is the best position in the whole population, F is the scale factor, and m and n are randomly selected particle indexes with $i \neq m \neq n$. Equation (13) shows that the mutation operation significantly changes the current particle search state, thereby improving the global search capability of the algorithm. Algorithm 2 lists the pseudocode for realizing the MDCLS.

Algorithm 2: Multidimensional comprehensive learning strategy
Input: elitist population EP
Output: position vector Xi, velocity vector Vi
1. for i = 1 → N2 do
2.   Calculate the mean of Xi → X_meani;
3.   Calculate the mean of Xd → D_meand;
4.   Calculate mbest by Eq. (10);
5.   if T > 5 then    % T counts the iterations for which the algorithm has been stuck in a local optimum
6.     Update Xi, Vi by Eq. (9) and Eq. (13);
7.   else
8.     Update Xi, Vi by Eq. (9) and Eq. (12);
9.   end if
10. end for

3.3 The general framework of the MPCPSO algorithm

Based on the above improvements, the MPCPSO is constructed. The steps of the MPCPSO algorithm are described below:

Step 1: Initialize the population and set the parameters: F = 0.5, t = 0.5, and T = 0.
Step 2: Calculate the fitness value f(Xi) of each particle and the global optimal value gbestfitness.
Step 3: Sort the population based on the fitness values of all particles, and select the first t particles as subpopulation EP and the remaining particles as subpopulation GP.
Step 4: Use the DSMLS to update the positions of the GP subpopulation according to Eq. (7) and Eq. (2), and use the MDCLS to evolve the EP subpopulation through Eqs. (9) and (12).
Step 5: When T is greater than 5, use Equation (13) to update the positions of the particles in the EP subpopulation.
Step 6: Calculate the fitness values of all new particles and update gbestfitness.
Step 7: Repeat Steps 3-6 while the number of function evaluations is less than the maximum allowable number (maxFEs).

The framework of MPCPSO is shown in Fig. 1. The global exploration capability of an algorithm is used to find a more promising search space, while the local exploitation capability is used to conduct the local search within the best range found thus far. Therefore, finding a balance between the exploration and exploitation abilities of the algorithm is an important focus that should be carefully considered when improving the algorithm. By combining the DSMLS with the MDCLS, the proposed MPCPSO algorithm not only has good global exploration capability, because it can quickly locate a good search range, but also has good local exploitation capability, because it converges accurately to a good solution. Algorithm 3 gives the general implementation framework of the MPCPSO algorithm.

Fig. 1. General framework of the MPCPSO algorithm

Algorithm 3: Multipopulation cooperative particle swarm optimization algorithm
1. Initialization: particle positions Xi, velocities Vi, F = 0.5, T = 0;
2. Calculate each particle's fitness value f(Xi) and gbestfitness, fit = gbestfitness;
3. while FEs < maxFEs do
4.-14. Sort the population and divide it into the EP and GP subpopulations; construct exemplars and update each GP particle with the DSMLS (Algorithm 1); then, for each particle in the EP:
15. if T > C then
16. Update Xi, Vi by Eq. (7) and Eq. (11);
17. else
18. Update Xi, Vi by Eq. (7) and Eq. (10);
19. end if
20. end for
21. Calculate the new particles' fitness values f(Xi);
22. Update gbestfitness;
23. if gbestfitness < fit then
24. fit = gbestfitness, T = 0;
25. else
26. T = T + 1;
27. end if
28. end while
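To make the data flow of Algorithms 1-3 concrete, the following is a compact, simplified sketch of one MPCPSO iteration in Python. It follows Eqs. (5)-(13) as reconstructed above, but it simplifies several details (the tournament is replaced by a uniform random draw from the EP, λ is a scalar, and boundary handling is omitted), so it should be read as an illustration of the method rather than a faithful reimplementation.

import numpy as np

def mpcpso_iteration(X, V, f, gen, maxgen, me1, T,
                     w=0.729, c1=1.49445, F=0.5, t=0.5, k=3,
                     rng=np.random.default_rng()):
    """One simplified MPCPSO iteration (minimization). me1 is the mean fitness
    of the first iteration; T counts stagnant iterations."""
    N, D = X.shape
    fit = np.apply_along_axis(f, 1, X)
    order = fit.argsort()                        # ascending: best particles first
    EP, GP = order[:int(t * N)], order[int(t * N):]
    Pg = X[order[0]].copy()                      # best position in the population
    lam = 1.0 / ((1.0 + np.exp(-fit.mean() / me1)) * gen)   # Eq. (8), scalar sketch

    m = max(1, int(np.ceil(D * gen / maxgen)))   # Eq. (5): segment count grows with gen
    for i in GP:                                 # --- DSMLS for the general population
        pl = np.empty(D)
        for s in np.array_split(np.arange(D), m):
            rod = rng.choice(EP, size=k)         # stand-in for tournament selection
            wgt = fit[rod] / fit[rod].sum()      # fitness-weighted mean, Eq. (6)
            pl[s] = (wgt[:, None] * X[rod][:, s]).sum(axis=0) / k
        r1, r2 = rng.random(D), rng.random(D)
        V[i] = w * V[i] + c1 * r1 * (pl - X[i]) + lam * r2 * (Pg - X[i])  # Eq. (7)
        X[i] = X[i] + V[i]                       # Eq. (2)

    for i in EP:                                 # --- MDCLS for the elitist population
        theta = 1.0 / (1.0 + np.exp(-(X[i] - X[i].mean())))              # Eq. (11)
        r = rng.random()
        mbest = (theta * r * X[i].mean()
                 + (1 - theta) * (1 - r) * X[EP].mean(axis=0))           # Eq. (10)
        V[i] = w * V[i] + c1 * rng.random(D) * (mbest - X[i])            # Eq. (9)
        if T > 5:                                # stagnation: differential mutation
            a, b = rng.choice(EP[EP != i], size=2, replace=False)
            X[i] = Pg + F * (X[a] - X[b])        # Eq. (13)
        else:
            X[i] = lam * X[i] + V[i]             # Eq. (12)
    return X, V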


4. Experiments

In this section, to verify the efficiency of MPCPSO, a set of sixteen benchmark functions that have been widely used in [45] is adopted. The results are examined by comparing MPCPSO with other PSO variants. The details of the experimental process are as follows.

Table 1 Sixteen test functions used in the comparison

f1 Sphere: $f_1(x) = \sum_{i=1}^{D} x_i^2$; search range [-100, 100]; initialization range [50, 100]; fmin = 0; accept = 10^-6
f2 Schwefel 2.2: $f_2(x) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$; search range [-10, 10]; initialization range [5, 10]; fmin = 0; accept = 10^-6
f3 Rosenbrock: $f_3(x) = \sum_{i=1}^{D-1} \left[100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2\right]$; search range [-30, 30]; initialization range [15, 30]; fmin = 0; accept = 100
f4 Noise: $f_4(x) = \sum_{i=1}^{D} i x_i^4 + rand[0, 1)$; search range [-1.28, 1.28]; initialization range [0.64, 1.28]; fmin = 0; accept = 10^-2
f5 Schwefel 2.6: $f_5(x) = 418.9829 D - \sum_{i=1}^{D} x_i \sin(\sqrt{|x_i|})$; search range [-500, 500]; initialization range [250, 500]; fmin = 0; accept = 2000
f6 Griewank: $f_6(x) = \frac{1}{4000} \sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$; search range [-600, 600]; initialization range [300, 600]; fmin = 0; accept = 10^-6
f7 Ackley: $f_7(x) = -20 \exp\left(-0.2 \sqrt{\frac{1}{D} \sum_{i=1}^{D} x_i^2}\right) - \exp\left(\frac{1}{D} \sum_{i=1}^{D} \cos 2\pi x_i\right) + 20 + e$; search range [-32, 32]; initialization range [16, 32]; fmin = 0; accept = 10^-6
f8 Rastrigin: $f_8(x) = \sum_{i=1}^{D} \left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$; search range [-5.12, 5.12]; initialization range [2.56, 5.12]; fmin = 0; accept = 10^-6
f9 Noncontinuous Rastrigin: $f_9(x) = \sum_{i=1}^{D} \left[y_i^2 - 10\cos(2\pi y_i) + 10\right]$, with $y_i = x_i$ if $|x_i| < 0.5$ and $y_i = round(2x_i)/2$ if $|x_i| \geq 0.5$; search range [-5, 5]; initialization range [2.5, 5]; fmin = 0; accept = 10^-6
f10 Penalized: $f_{10}(x) = \frac{\pi}{D} \left\{10\sin^2(\pi y_1) + \sum_{i=1}^{D-1} (y_i - 1)^2 \left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_D - 1)^2\right\} + \sum_{i=1}^{D} u(x_i, 10, 100, 4)$, with $y_i = 1 + \frac{1}{4}(x_i + 1)$ and $u(x_i, a, k, m) = k(x_i - a)^m$ if $x_i > a$, $0$ if $-a \leq x_i \leq a$, $k(-x_i - a)^m$ if $x_i < -a$; search range [-50, 50]; initialization range [25, 50]; fmin = 0; accept = 10^-6
f11 Alpine: $f_{11}(x) = \sum_{i=1}^{D} |x_i \sin(x_i) + 0.1 x_i|$; search range [-10, 10]; initialization range [5, 10]; fmin = 0; accept = 10^-6
f12 Rotated hyper-ellipsoid: $f_{12}(x) = \sum_{i=1}^{D} \left(\sum_{j=1}^{i} x_j\right)^2$; search range [-100, 100]; initialization range [50, 100]; fmin = 0; accept = 10^-6
f13 Rotated Schwefel: $f_{13}(z) = 418.9829 D - \sum_{i=1}^{D} z_i \sin(\sqrt{|z_i|})$, z = M*x, M an orthogonal matrix; search range [-500, 500]; initialization range [250, 500]; fmin = 0; accept = 5000
f14 Rotated Rastrigin: $f_{14}(z) = \sum_{i=1}^{D} \left[z_i^2 - 10\cos(2\pi z_i) + 10\right]$, z = M*x; search range [-5.12, 5.12]; initialization range [2.56, 5.12]; fmin = 0; accept = 10^-6
f15 Rotated Noncontinuous Rastrigin: $f_{15}(x) = \sum_{i=1}^{D} \left[y_i^2 - 10\cos(2\pi y_i) + 10\right]$, with $y_i = z_i$ if $|z_i| < 0.5$, $y_i = round(2z_i)/2$ if $|z_i| \geq 0.5$, z = M*x; search range [-5, 5]; initialization range [2.5, 5]; fmin = 0; accept = 10^-6
f16 Rotated Griewank: $f_{16}(z) = \frac{1}{4000} \sum_{i=1}^{D} z_i^2 - \prod_{i=1}^{D} \cos\left(\frac{z_i}{\sqrt{i}}\right) + 1$, z = M*x; search range [-600, 600]; initialization range [300, 600]; fmin = 0; accept = 10^-6

4.1 Benchmark functions and parameter settings

We employed the sixteen benchmark functions listed in Table 1 to demonstrate the advantages of MPCPSO. The test functions fall into three main categories. The first category consists of four unimodal functions (f1-f4), where f1 and f2 are simple unimodal functions in a high-dimensional space; f3 can be considered a multimodal function, and f4 includes a random perturbation. The second category includes seven complex multimodal functions (f5-f11). The last category includes a rotated unimodal function and four rotated multimodal functions (f12-f16). The global optimum of all the test functions is zero, and biased initializations (Table 1, column 3) are used according to the definitions in [46] for all the functions: the initialization range is 1/4 of the feasible solution space, and the global optimum is not included in the initialization space.

We selected seven well-known PSO variants for comparison, including PSO with an inertia weight [40], SPLSO [29], SLPSO [47], CLPSO [23], BLPSO [48], GL-PSO [35] and THSPSO [27]. Table 2 provides detailed descriptions of the parameter settings for these algorithms.

Table 2 Parameter settings of PSO variants

PSO: ω = 0.729, c1 = c2 = 1.49445, Vmax = 0.2 × Range
SPLSO: m = D, φ = 0.1, Vmax = 0.2 × Range
GL-PSO: ω = 0.729, c1 = c2 = 1.49445, Pm = 0.01, sg = 7, Vmax = 0.2 × Range
SLPSO: N = 100 + floor(D/10), c = D/M × 0.01, M = 100, α = 0.5, β = 0.01
CLPSO: ω = 0.9~0.4, c = 2.0, gapm = 7, Vmax = 0.2 × Range
BLPSO: ω = 0.9~0.4, c = 2.0, gapm = 5, I = E = 1, Vmax = 0.2 × Range
THSPSO: ω = 0.9, c1 = c2 = 2.0, t1 = 0.2, t2 = 0.8, Vmax = 0.2 × Range
MPCPSO: ω = 0.729, c = 1.49445, F = 0.5, t = 0.5, Vmax = 0.2 × Range

To ensure a fair comparison, the experiments used the same settings for all the algorithms, including the maximum number of function evaluations (maxFEs), the maximum number of iterations (maxGen), and the number of independent runs (runNum), where maxGen = maxFEs / N and N is the population size. The other common parameter settings are given in Table 2. To reduce the influence of random error, 25 independent trials were executed with each algorithm on each test function, and


the mean and standard deviation of the results were recorded and compared. All the test functions in this paper were tested in 30, 50, and 100 dimensions. Other than the SLPSO algorithm, all the PSO algorithms were tested using the same population size (N = 40), and maxFEs was set to 2 × 10^5 for each function. Additionally, the ranking achieved by each algorithm was used to intuitively compare the results of all the algorithms.

To determine the optimal value of the parameter t shown in Algorithm 3, extensive experiments were performed on the sixteen test functions. As shown in Table 3, when t = 0.5 (that is, when the elitist population and the general population each account for half of the initial population), the MPCPSO algorithm yields better results. Therefore, in the following experiments, t was set to 0.5.

Table 3 Selection of parameter t

(Entries are Mean ± Std (Rank); columns: t = 0.1 | 0.2 | 0.25 | 0.3 | 0.35 | 0.4 | 0.45 | 0.5)

f1, f2, f6, f8, f9, f11, f12, f14, f15, f16: 0.00E+00 ± 0.00E+00 (1) for every value of t
f7: 8.88E-16 ± 0.00E+00 (1) for every value of t
f3: 2.79E+01 ± 7.00E-01 (8) | 2.74E+01 ± 4.92E-01 (5) | 2.74E+01 ± 5.89E-01 (6) | 2.72E+01 ± 4.51E-01 (1) | 2.74E+01 ± 3.43E-01 (4) | 2.73E+01 ± 3.61E-01 (2) | 2.74E+01 ± 2.86E-01 (3) | 2.76E+01 ± 3.78E-01 (7)
f4: 6.77E-05 ± 6.41E-05 (4) | 9.58E-05 ± 7.64E-05 (8) | 6.98E-05 ± 7.37E-05 (6) | 8.91E-05 ± 1.04E-04 (7) | 5.14E-05 ± 5.27E-05 (3) | 6.91E-05 ± 6.33E-05 (5) | 3.38E-05 ± 2.97E-05 (1) | 4.63E-05 ± 4.07E-05 (2)
f5: 1.97E+03 ± 1.59E+03 (8) | 5.82E+02 ± 5.15E+02 (4) | 7.71E+02 ± 7.14E+02 (7) | 5.15E+02 ± 4.79E+02 (1) | 6.06E+02 ± 3.46E+02 (5) | 5.49E+02 ± 4.86E+02 (2) | 7.03E+02 ± 9.03E+02 (6) | 5.80E+02 ± 6.44E+02 (3)
f10: 1.21E-01 ± 1.61E-01 (8) | 9.68E-03 ± 4.11E-03 (7) | 6.54E-03 ± 3.08E-03 (6) | 5.49E-03 ± 2.26E-03 (5) | 3.60E-03 ± 3.36E-03 (4) | 3.18E-03 ± 1.97E-03 (3) | 2.52E-03 ± 1.25E-03 (2) | 1.63E-03 ± 1.07E-03 (1)
f13: 5.66E+03 ± 1.86E+03 (8) | 3.87E+03 ± 9.85E+02 (1) | 4.08E+03 ± 1.06E+03 (3) | 4.17E+03 ± 1.02E+03 (5) | 4.20E+03 ± 1.12E+03 (6) | 4.25E+03 ± 1.26E+03 (7) | 4.15E+03 ± 1.02E+03 (4) | 4.04E+03 ± 1.21E+03 (2)
Average rank: 2.94 | 2.25 | 2.44 | 1.88 | 2.06 | 1.88 | 1.69 | 1.63

Table 4 Results of comparison algorithms on benchmark functions (30-D).

(Entries are Mean ± Std (Rank); columns: PSO | SPLSO | SLPSO | CLPSO | GL-PSO | BLPSO | THSPSO | MPCPSO)

f1: 1.84E+04 ± 8.98E+03 (8) | 0.00E+00 ± 0.00E+00 (1) | 2.83E-90 ± 4.45E-90 (4) | 1.16E-09 ± 4.86E-10 (6) | 1.60E-69 ± 5.97E-69 (5) | 2.36E-08 ± 1.50E-08 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f2: 5.64E+01 ± 1.63E+01 (8) | 1.34E-289 ± 0.00E+00 (3) | 1.69E-46 ± 1.31E-46 (4) | 7.50E-07 ± 2.26E-07 (6) | 2.03E-41 ± 5.35E-41 (5) | 4.34E-05 ± 1.33E-05 (7) | 6.23E-322 ± 0.00E+00 (2) | 0.00E+00 ± 0.00E+00 (1)
f3: 4.48E+07 ± 5.69E+07 (8) | 2.77E+01 ± 2.81E-01 (4) | 7.23E+01 ± 5.17E+01 (6) | 2.08E+01 ± 9.59E+00 (1) | 2.53E+01 ± 1.66E-01 (3) | 9.50E+01 ± 3.17E+01 (7) | 2.85E+01 ± 1.81E-01 (5) | 2.52E+01 ± 2.41E-01 (2)
f4: 2.91E+01 ± 2.61E+01 (8) | 8.29E-05 ± 3.48E-05 (3) | 9.54E-03 ± 2.78E-03 (6) | 8.08E-03 ± 1.99E-03 (5) | 1.59E-03 ± 6.76E-04 (4) | 1.45E-02 ± 3.23E-03 (7) | 5.13E-06 ± 4.57E-06 (1) | 9.41E-06 ± 8.44E-06 (2)
f5: 1.21E+03 ± 8.01E+02 (8) | 7.11E+03 ± 2.95E+02 (6) | 3.82E-04 ± 0.00E+00 (1) | 3.82E-04 ± 3.15E-12 (3) | 5.56E+03 ± 7.49E+02 (5) | 3.82E-04 ± 2.50E-08 (2) | 7.59E+03 ± 5.42E+02 (7) | 4.23E+02 ± 6.13E+02 (4)
f6: 2.13E+02 ± 1.04E+02 (8) | 0.00E+00 ± 0.00E+00 (1) | 8.88E-04 ± 2.45E-03 (6) | 6.43E-07 ± 6.26E-07 (4) | 1.18E-03 ± 4.61E-03 (7) | 2.65E-04 ± 3.63E-04 (5) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f7: 1.98E+01 ± 8.83E-02 (8) | 7.71E-15 ± 9.84E-16 (3) | 1.22E+00 ± 4.26E+00 (5) | 6.17E-05 ± 3.29E-05 (4) | 5.66E+00 ± 9.26E+00 (6) | 5.77E+00 ± 4.03E+00 (7) | 8.88E-16 ± 0.00E+00 (1) | 8.88E-16 ± 0.00E+00 (1)
f8: 2.51E+02 ± 4.21E+01 (8) | 7.22E-01 ± 1.62E+00 (4) | 2.40E+01 ± 8.29E+00 (6) | 6.53E-05 ± 7.93E-05 (3) | 1.08E+01 ± 1.88E+01 (5) | 2.56E+01 ± 2.63E+00 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f9: 2.42E+02 ± 3.95E+01 (8) | 4.95E+01 ± 2.79E+01 (6) | 3.93E+01 ± 1.33E+01 (5) | 2.40E-04 ± 1.82E-04 (3) | 5.64E+01 ± 2.85E+01 (7) | 2.59E+01 ± 2.70E+00 (4) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f10: 2.05E+07 ± 7.09E+07 (8) | 4.42E-02 ± 9.81E-03 (6) | 4.15E-03 ± 2.07E-02 (4) | 5.53E-11 ± 2.42E-11 (1) | 2.12E-02 ± 7.72E-03 (5) | 4.21E-09 ± 2.37E-09 (2) | 1.67E+00 ± 9.06E-16 (7) | 1.92E-06 ± 9.00E-07 (3)
f11: 2.72E+01 ± 1.04E+01 (8) | 3.57E-290 ± 0.00E+00 (3) | 1.72E-14 ± 1.11E-14 (4) | 9.89E-04 ± 2.95E-04 (6) | 4.39E-04 ± 2.09E-03 (5) | 2.88E+00 ± 5.30E-01 (7) | 1.19E-322 ± 0.00E+00 (2) | 0.00E+00 ± 0.00E+00 (1)
f12: 5.44E+04 ± 2.92E+04 (8) | 4.00E-311 ± 0.00E+00 (3) | 1.88E-06 ± 2.22E-06 (5) | 3.17E+03 ± 6.42E+02 (6) | 3.50E-07 ± 1.68E-06 (4) | 9.11E+03 ± 9.03E+02 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f13: 3.49E+03 ± 6.21E+02 (3) | 7.92E+03 ± 3.59E+02 (7) | 2.73E+03 ± 3.68E+02 (2) | 2.42E+03 ± 3.80E+02 (1) | 4.03E+03 ± 1.36E+03 (6) | 3.94E+03 ± 2.85E+02 (5) | 8.82E+03 ± 6.39E+02 (8) | 3.64E+03 ± 1.30E+03 (4)
f14: 2.43E+02 ± 5.15E+01 (8) | 0.00E+00 ± 0.00E+00 (1) | 3.03E+01 ± 3.73E+01 (4) | 1.35E+02 ± 1.68E+01 (6) | 7.78E+01 ± 4.75E+01 (5) | 1.96E+02 ± 1.19E+01 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f15: 2.94E+02 ± 5.45E+01 (8) | 6.43E+01 ± 2.26E+01 (3) | 1.20E+02 ± 2.63E+01 (5) | 1.44E+02 ± 2.04E+01 (6) | 1.09E+02 ± 2.26E+01 (4) | 2.02E+02 ± 1.63E+01 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f16: 1.91E+02 ± 1.08E+02 (8) | 0.00E+00 ± 0.00E+00 (1) | 4.44E-17 ± 7.17E-17 (4) | 8.19E-05 ± 9.97E-05 (5) | 4.47E-03 ± 8.07E-03 (6) | 1.87E-02 ± 1.15E-02 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
Average rank: 7.69 | 3.44 | 4.44 | 4.13 | 5.13 | 5.94 | 2.56 | 1.63
Final rank: 8 | 3 | 5 | 4 | 6 | 7 | 2 | 1

Table 5 Results of comparison algorithms on benchmark functions (50-D).

(Entries are Mean ± Std (Rank); columns: PSO | SPLSO | SLPSO | CLPSO | GL-PSO | BLPSO | THSPSO | MPCPSO)

f1: 5.80E+04 ± 1.80E+04 (8) | 0.00E+00 ± 0.00E+00 (1) | 2.13E-60 ± 2.00E-60 (4) | 1.89E-04 ± 6.27E-05 (6) | 1.29E-45 ± 3.53E-45 (5) | 2.06E-03 ± 7.62E-04 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f2: 1.18E+02 ± 2.57E+01 (8) | 6.26E-273 ± 0.00E+00 (3) | 4.18E-31 ± 5.38E-31 (4) | 1.29E-03 ± 2.23E-04 (6) | 2.96E-29 ± 5.90E-29 (5) | 3.61E-02 ± 1.04E-02 (7) | 1.05E-321 ± 0.00E+00 (2) | 0.00E+00 ± 0.00E+00 (1)
f3: 1.82E+08 ± 8.46E+07 (8) | 4.77E+01 ± 2.63E-01 (3) | 9.31E+01 ± 6.38E+01 (5) | 1.49E+02 ± 4.04E+01 (6) | 4.57E+01 ± 5.00E-01 (2) | 4.87E+02 ± 1.26E+02 (7) | 4.85E+01 ± 2.75E-02 (4) | 4.54E+01 ± 2.12E-01 (1)
f4: 1.65E+02 ± 7.40E+01 (8) | 1.03E-04 ± 3.92E-05 (3) | 2.62E-02 ± 5.52E-03 (6) | 2.46E-02 ± 5.74E-03 (5) | 4.46E-03 ± 1.83E-03 (4) | 4.56E-02 ± 7.20E-03 (7) | 6.21E-06 ± 5.73E-06 (1) | 7.90E-06 ± 8.75E-06 (2)
f5: 2.83E+03 ± 9.67E+02 (5) | 1.36E+04 ± 9.43E+02 (7) | 4.42E+01 ± 8.36E+01 (3) | 6.44E-04 ± 2.09E-06 (1) | 1.21E+04 ± 9.72E+02 (6) | 5.78E-03 ± 3.00E-03 (2) | 1.39E+04 ± 4.85E+02 (8) | 3.45E+02 ± 7.41E+02 (4)
f6: 5.66E+02 ± 1.99E+02 (8) | 0.00E+00 ± 0.00E+00 (1) | 2.07E-03 ± 5.06E-03 (5) | 6.96E-04 ± 2.22E-04 (4) | 3.05E-03 ± 5.70E-03 (6) | 1.14E-02 ± 6.65E-03 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f7: 1.99E+01 ± 4.48E-02 (8) | 7.99E-15 ± 0.00E+00 (3) | 4.80E-01 ± 2.40E+00 (4) | 2.17E+00 ± 9.54E-01 (5) | 1.15E+01 ± 1.04E+01 (6) | 1.88E+01 ± 1.27E+00 (7) | 8.88E-16 ± 0.00E+00 (1) | 8.88E-16 ± 0.00E+00 (1)
f8: 4.85E+02 ± 5.95E+01 (8) | 3.98E-02 ± 1.99E-01 (3) | 3.95E+01 ± 1.49E+01 (5) | 8.33E-01 ± 6.68E-01 (4) | 4.62E+01 ± 5.54E+01 (6) | 1.13E+02 ± 7.51E+00 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f9: 4.06E+02 ± 7.32E+01 (8) | 3.72E+01 ± 7.23E+01 (4) | 8.45E+01 ± 3.76E+01 (5) | 1.00E+01 ± 2.35E+00 (3) | 1.50E+02 ± 6.20E+01 (7) | 1.16E+02 ± 1.39E+01 (6) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f10: 3.79E+08 ± 2.97E+08 (8) | 1.09E-01 ± 8.01E-03 (6) | 4.98E-03 ± 1.72E-02 (4) | 1.13E-05 ± 4.37E-06 (1) | 8.93E-02 ± 3.05E-02 (5) | 2.94E-04 ± 1.68E-04 (3) | 1.47E+00 ± 4.53E-16 (7) | 1.76E-05 ± 6.38E-06 (2)
f11: 5.19E+01 ± 1.15E+01 (8) | 3.63E-273 ± 0.00E+00 (3) | 4.63E-14 ± 1.34E-14 (4) | 3.95E-02 ± 1.50E-02 (6) | 7.96E-05 ± 2.06E-04 (5) | 1.60E+01 ± 1.84E+00 (7) | 2.12E-322 ± 0.00E+00 (2) | 0.00E+00 ± 0.00E+00 (1)
f12: 1.96E+05 ± 1.56E+05 (8) | 1.72E-262 ± 0.00E+00 (3) | 4.74E+01 ± 2.45E+01 (5) | 3.05E+04 ± 3.59E+03 (6) | 2.27E+01 ± 5.75E+01 (4) | 4.88E+04 ± 4.63E+03 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f13: 6.14E+03 ± 9.13E+02 (3) | 1.52E+04 ± 3.71E+02 (7) | 4.41E+03 ± 6.28E+02 (1) | 4.60E+03 ± 4.23E+02 (2) | 9.58E+03 ± 1.78E+03 (6) | 7.75E+03 ± 3.56E+02 (5) | 1.63E+04 ± 6.61E+02 (8) | 6.41E+03 ± 1.29E+03 (4)
f14: 5.06E+02 ± 9.45E+01 (8) | 0.00E+00 ± 0.00E+00 (1) | 3.50E+02 ± 1.85E+01 (5) | 3.71E+02 ± 2.79E+01 (6) | 2.28E+02 ± 7.27E+01 (4) | 4.46E+02 ± 1.88E+01 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f15: 5.21E+02 ± 7.38E+01 (8) | 5.16E+01 ± 8.65E+01 (3) | 3.08E+02 ± 2.66E+01 (5) | 3.83E+02 ± 3.74E+01 (6) | 2.62E+02 ± 3.38E+01 (4) | 4.57E+02 ± 1.73E+01 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f16: 5.45E+02 ± 1.39E+02 (8) | 0.00E+00 ± 0.00E+00 (1) | 1.08E-03 ± 3.11E-03 (4) | 2.44E-03 ± 9.21E-04 (5) | 2.86E-03 ± 7.43E-03 (6) | 2.95E-02 ± 1.18E-02 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
Average rank: 7.5 | 3.25 | 4.31 | 4.5 | 5.06 | 6.25 | 2.56 | 1.5
Final rank: 8 | 3 | 4 | 5 | 6 | 7 | 2 | 1

4.2 Comparison and analysis of the algorithm convergence accuracy

In this section, we compare MPCPSO with the seven well-known PSO variants described in subsection 4.1 on the 30-, 50-, and 100-dimensional test functions. The parameter settings of each algorithm are shown in Table 2. The experimental results are as follows.

From Table 4, the proposed MPCPSO achieves a better mean and standard deviation on most of the functions when solving problems with 30 dimensions. For the four unimodal functions (f1-f4), MPCPSO found the global optimum of functions f1 and f2, while SPLSO and THSPSO obtained the global optimum only on function f1. On functions f3 and f4, MPCPSO performs slightly worse than CLPSO and THSPSO, respectively, but better than the other six algorithms. For the seven multimodal functions (f5-f11), MPCPSO is able to jump out of local minima easily on functions f6, f8, f9, and f11. MPCPSO performs worse than SLPSO, CLPSO, and BLPSO on function f5, because the elitist population that guides the learning of the general population converges so fast that the whole population falls into a local optimum. On function f7, MPCPSO and THSPSO obtain equally good results. On function f10, MPCPSO performs worse than CLPSO and BLPSO but much better than the other four algorithms. For the last five rotated functions, MPCPSO and THSPSO obtain the global optimum on functions f12, f14, f15, and f16, but on f13, MPCPSO performs better than THSPSO. The SPLSO algorithm achieves good results on functions f12, f14, and f16, especially on f14 and f16. CLPSO performs better on function f13 than the other seven algorithms. The remaining algorithms show no particular advantage.

Among the sixteen test functions, MPCPSO ranks first 11 times, second twice, third once and fourth twice. The PSO, SPLSO, SLPSO, CLPSO, GL-PSO, BLPSO, and THSPSO algorithms obtain 0, 4, 1, 3, 0, 0, and 10 first-place rankings, respectively. The average ranking of MPCPSO is 1.63, far lower than those of the other algorithms, and its overall ranking is first. The above analysis shows that MPCPSO demonstrates strongly competitive advantages when tested on problems with 30 dimensions, and its optimization results are acceptable.

From Table 5, it is clear that MPCPSO performs well on the four unimodal functions (f1-f4) compared with the others. Similar to the 30-dimensional results, SPLSO and THSPSO still achieve the global optimum on function f1. On functions f6-f9 and f11, MPCPSO obtains better results than the other algorithms. However, on function f5, the performance of MPCPSO is still unsatisfactory; nevertheless, its results on f5 in 50 dimensions do not differ widely from those in 30 dimensions, which suggests that this function could be further optimized by MPCPSO. For functions f12-f16, MPCPSO still obtains the global optimum on functions f12, f14, f15, and f16. As shown in Table 5, the average ranking of MPCPSO is 1.5, and its overall ranking is still first. The results on the sixteen test functions in 50 dimensions show that MPCPSO not only achieves higher solution accuracy but is also more robust.

Table 6 Results of comparison algorithms on benchmark functions (100-D).

(Entries are Mean ± Std (Rank); columns: PSO | SPLSO | SLPSO | CLPSO | GL-PSO | BLPSO | THSPSO | MPCPSO)

f1: 1.92E+05 ± 2.33E+04 (8) | 0.00E+00 ± 0.00E+00 (1) | 2.64E-31 ± 3.86E-31 (4) | 4.06E+00 ± 5.91E-01 (6) | 8.18E-24 ± 3.11E-23 (5) | 2.66E+01 ± 5.80E+00 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f2: 2.93E+02 ± 4.23E+01 (8) | 1.66E-256 ± 0.00E+00 (3) | 9.43E-16 ± 2.08E-15 (5) | 9.74E-01 ± 1.51E-01 (6) | 4.68E-17 ± 4.84E-17 (4) | 9.93E+00 ± 2.39E+00 (7) | 2.14E-321 ± 0.00E+00 (2) | 0.00E+00 ± 0.00E+00 (1)
f3: 6.18E+08 ± 1.41E+08 (8) | 9.79E+01 ± 3.27E-01 (3) | 1.65E+02 ± 6.19E+01 (5) | 1.28E+04 ± 1.80E+03 (6) | 9.71E+01 ± 8.37E-01 (2) | 6.33E+04 ± 1.07E+04 (7) | 9.80E+01 ± 2.47E-03 (4) | 9.52E+01 ± 2.55E-01 (1)
f4: 1.23E+03 ± 4.23E+02 (8) | 1.30E-04 ± 4.41E-05 (3) | 1.02E-01 ± 1.44E-02 (5) | 1.49E-01 ± 1.93E-02 (6) | 1.72E-02 ± 6.88E-03 (4) | 2.86E-01 ± 2.84E-02 (7) | 6.47E-06 ± 7.24E-06 (1) | 1.22E-05 ± 1.32E-05 (2)
f5: 8.19E+03 ± 1.54E+03 (5) | 3.08E+04 ± 1.04E+03 (8) | 2.00E+02 ± 2.01E+02 (4) | 2.63E-01 ± 4.80E-02 (1) | 2.81E+04 ± 1.49E+03 (6) | 8.37E+01 ± 3.96E+01 (2) | 2.84E+04 ± 5.65E+02 (7) | 1.07E+02 ± 1.60E+02 (3)
f6: 1.70E+03 ± 2.80E+02 (8) | 0.00E+00 ± 0.00E+00 (1) | 2.96E-04 ± 1.48E-03 (4) | 9.47E-01 ± 5.40E-02 (6) | 3.64E-03 ± 7.59E-03 (5) | 1.26E+00 ± 4.62E-02 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f7: 1.99E+01 ± 3.24E-02 (7) | 9.41E-15 ± 2.90E-15 (3) | 5.01E-01 ± 2.51E+00 (4) | 1.88E+01 ± 2.00E-01 (6) | 1.07E+01 ± 1.05E+01 (5) | 2.02E+01 ± 1.34E-01 (8) | 8.88E-16 ± 0.00E+00 (1) | 8.88E-16 ± 0.00E+00 (1)
f8: 1.15E+03 ± 1.15E+02 (8) | 0.00E+00 ± 0.00E+00 (1) | 2.73E+02 ± 1.84E+02 (6) | 9.56E+01 ± 8.13E+00 (5) | 7.09E+01 ± 7.28E+01 (4) | 5.02E+02 ± 1.96E+01 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f9: 9.85E+02 ± 1.36E+02 (8) | 0.00E+00 ± 0.00E+00 (1) | 8.14E+02 ± 4.26E+01 (7) | 1.22E+02 ± 1.11E+01 (4) | 4.64E+02 ± 1.75E+02 (5) | 5.54E+02 ± 4.05E+01 (6) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f10: 1.04E+09 ± 6.40E+08 (8) | 3.01E-01 ± 2.52E-02 (4) | 2.53E-02 ± 8.10E-02 (2) | 1.41E+00 ± 3.48E-01 (6) | 2.86E-01 ± 4.31E-02 (3) | 4.54E+00 ± 9.14E-01 (7) | 1.33E+00 ± 4.53E-16 (5) | 2.28E-04 ± 5.79E-05 (1)
f11: 1.29E+02 ± 2.19E+01 (8) | 1.48E-257 ± 0.00E+00 (3) | 1.12E-13 ± 2.51E-14 (4) | 4.16E+00 ± 4.68E-01 (6) | 1.51E-04 ± 6.81E-04 (5) | 7.41E+01 ± 5.69E+00 (7) | 4.35E-322 ± 0.00E+00 (2) | 0.00E+00 ± 0.00E+00 (1)
f12: 1.01E+06 ± 3.81E+05 (8) | 2.06E-219 ± 0.00E+00 (3) | 1.02E+05 ± 7.79E+03 (5) | 1.67E+05 ± 1.25E+04 (6) | 4.88E+04 ± 1.74E+04 (4) | 2.40E+05 ± 2.27E+04 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f13: 1.27E+04 ± 1.28E+03 (3) | 3.35E+04 ± 5.83E+02 (7) | 9.57E+03 ± 8.71E+02 (1) | 1.09E+04 ± 8.50E+02 (2) | 2.68E+04 ± 2.37E+03 (6) | 2.08E+04 ± 5.71E+02 (5) | 3.48E+04 ± 1.01E+03 (8) | 1.37E+04 ± 3.16E+03 (4)
f14: 1.16E+03 ± 1.20E+02 (7) | 0.00E+00 ± 0.00E+00 (1) | 8.88E+02 ± 1.85E+01 (5) | 1.08E+03 ± 6.19E+01 (6) | 6.65E+02 ± 1.17E+02 (4) | 1.17E+03 ± 3.01E+01 (8) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f15: 1.17E+03 ± 1.15E+02 (8) | 0.00E+00 ± 0.00E+00 (1) | 8.63E+02 ± 3.16E+01 (5) | 1.07E+03 ± 4.34E+01 (6) | 7.18E+02 ± 3.41E+01 (4) | 1.16E+03 ± 3.19E+01 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
f16: 1.73E+03 ± 2.02E+02 (8) | 0.00E+00 ± 0.00E+00 (1) | 1.18E-03 ± 3.27E-03 (4) | 9.53E-01 ± 4.61E-02 (6) | 1.39E-02 ± 3.44E-02 (5) | 1.27E+00 ± 5.12E-02 (7) | 0.00E+00 ± 0.00E+00 (1) | 0.00E+00 ± 0.00E+00 (1)
Average rank: 7.38 | 2.63 | 4.38 | 5.25 | 4.44 | 6.63 | 2.38 | 1.38
Final rank: 8 | 3 | 4 | 6 | 5 | 7 | 2 | 1

Table 7 Comparisons of convergence speed, algorithm reliability and success performance

(Entries are FEs / SR(%) / SP; columns: PSO | SPLSO | SLPSO | CLPSO | GL-PSO | BLPSO | THSPSO | MPCPSO; "—" indicates no successful run)

f1: 20697.50/8/258718.75 | 3732.24/100/3732.24 | 23089.76/100/23089.76 | 168325.60/100/168325.60 | 27251.16/100/27251.16 | 181566.80/100/181566.80 | 4914.88/100/4914.88 | 647.44/100/647.44
f2: —/0/— | 5435.96/100/5435.96 | 34668.52/100/34668.52 | 177247.96/88/201418.14 | 36779/100/36779 | —/0/— | 7293.44/100/7293.44 | 1271.72/100/1271.72
f3: 13567.18/44/30834.50 | 1612.60/100/1612.60 | 49965.74/76/65744.39 | 145856.12/100/145856.12 | 12296.88/100/12296.88 | 184978.16/48/385371.18 | 1777.44/100/1777.44 | 290.04/100/290.04
f4: 43777/4/1094425 | 2476.61/100/2476.61 | 115134.08/52/221411.69 | 180095.42/76/236967.66 | 25503.04/100/25503.04 | 185603.50/8/2320043.75 | 5966.44/100/5966.44 | 405.28/100/405.28
f5: 1948.39/92/2117.82 | —/0/— | 1187.92/100/1187.92 | 15579.88/100/15579.88 | —/0/— | 35578.08/100/35578.08 | —/0/— | 4894.78/92/5320.42
f6: —/0/— | 3998.16/100/3998.16 | 23815.96/92/25886.91 | 191312.82/88/217400.93 | 33783.38/84/40218.31 | —/0/— | 5057.96/100/5057.96 | 650.84/100/650.84
f7: —/0/— | 5393.52/100/5393.52 | 37962.29/96/39544.05 | —/0/— | 50606.40/40/126516 | —/0/— | 7015.80/100/7015.80 | 874.68/100/874.68
f8: —/0/— | 13820.25/96/14396.09 | —/0/— | —/0/— | 97833.55/44/222348.97 | —/0/— | 41971.56/100/41971.56 | 892.32/100/892.32
f9: —/0/— | —/0/— | —/0/— | —/0/— | —/0/— | —/0/— | 41996.04/100/41996.04 | 905.32/100/905.32
f10: 25373.55/44/57667.15 | —/0/— | 22415.50/88/25472.16 | 153822/100/153822 | —/0/— | 173222.48/100/173222.48 | —/0/— | 194084.17/24/808684.03
f11: —/0/— | 5084.92/100/5084.92 | 35525.12/100/35525.12 | —/0/— | 86845.53/76/114270.43 | —/0/— | 8155.84/100/8155.84 | 853.92/100/853.92
f12: —/0/— | 6128.2/100/6128.2 | 190811.38/52/366944.97 | —/0/— | 154432.96/100/154432.96 | —/0/— | 5998.72/100/5998.72 | 1443.92/100/1443.92
f13: 823/100/823 | —/0/— | 1627.56/100/1627.56 | 12201.08/100/12201.08 | 47982.64/88/54525.72 | 105033.68/100/105033.68 | —/0/— | 9884.83/96/10296.70
f14: —/0/— | 18293.32/100/18293.32 | —/0/— | —/0/— | —/0/— | —/0/— | 10279.40/100/10279.40 | 879.68/100/879.68
f15: —/0/— | 82419/4/2060475 | —/0/— | —/0/— | —/0/— | —/0/— | 41970.20/100/41970.20 | 956.20/100/956.20
f16: 22907/4/572675 | 3853.64/100/3853.64 | 26333.24/84/31349.09 | —/0/— | 29265.09/88/33255.79 | —/0/— | 5238.76/100/5238.76 | 670.08/100/670.08
Average SR (%): 18.5 | 68.75 | 65 | 47 | 57.50 | 28.50 | 81.25 | 94.50
SR rank: 8 | 3 | 4 | 6 | 5 | 7 | 2 | 1

Additionally, we applied the proposed MPCPSO to experiments on the 100-dimensional functions to verify the scalability and effectiveness of the algorithm when solving complex problems of higher dimensions. To ensure a fair comparison, the same parameter settings listed in Table 2 are used, and the experimental results are shown in Table 6.

Fig. 2. The convergence curves under 30 dimensions for four unimodal functions

From Table 6, on the whole, the performance of MPCPSO on the 100-dimensional tests is similar to its performance on the 30- and 50-dimensional tests, which shows that the proposed MPCPSO performs steadily during the convergence process. MPCPSO still obtains the global optimum on functions f1, f2, f6, f8, f9, f11, f12, and f14-f16, which indicates that it is highly effective on these functions and has a strong global search capability. However, on function f13, the result of the MPCPSO algorithm is poor, which indicates that as the dimension increases, the optimization ability of MPCPSO is reduced on some complex multimodal functions, and it cannot find a high-quality solution. Similar to the

MPCPSO algorithm, SPLSO and THSPSO also show better search capabilities than the other algorithms: on functions f1, f6, f8, f9, f11, f12, and f14-f16, these algorithms also obtain the global optimum or a high-quality solution. However, their overall optimization capabilities on the sixteen test functions are lower than that of MPCPSO. The other PSO algorithms performed poorly in the search space. The average ranking of MPCPSO is 1.38, and its overall ranking is still first. The above analysis indicates that the optimization performance of MPCPSO is better than those of the seven comparison algorithms. MPCPSO effectively handles high-dimensional complex optimization problems and has higher robustness and convergence accuracy.


Fig. 3. The convergence curves under 30 dimensions for seven multimodal functions

The above experimental results indicate that MPCPSO has advantages on most of the test functions, primarily for the following reasons. First, population information is shared between the two subpopulations by the proposed dynamic segment-based mean learning strategy, which causes the algorithm to jump out of local optima and lets the subpopulations coevolve in a better direction. In addition, the proposed multidimensional comprehensive learning strategy ensures that the population finds a better search range in complex environments, because the learning exemplars constructed from the information of multiple better particles provide more effective information for population evolution. Therefore, the proposed MPCPSO algorithm has better robustness and convergence accuracy.

4.3 Analysis of the algorithm convergence

Because the MDCLS speeds up convergence and provides better learning exemplars for the general population, MPCPSO can reach a solution with high precision at a fast convergence rate. Performance indicators including the average number of fitness evaluations (FEs), the success rate (SR), and the success performance (SP) are listed in Table 7, where SR = (number of successful runs) / (total runs) and SP = mean(FEs for successful runs) × (total runs) / (number of successful runs) [49]. The accuracy threshold that each algorithm needs to meet on each function is listed in Table 1.

From Table 7, MPCPSO has a higher success rate and better success performance than the other tested PSO algorithms on almost all functions, especially on functions f1, f2, f4, and f6. These results demonstrate the advantages of the proposed DSMLS and MDCLS learning strategies, which significantly improve the convergence performance of MPCPSO. The SR results in Table 7 also indicate that MPCPSO is the most effective, achieving a mean SR of 94.5%, followed by THSPSO (81.25%), SPLSO (68.75%), and SLPSO (65%). These experimental results demonstrate that MPCPSO indeed has a competitive advantage over the seven tested PSO variants.
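As an illustration, SR and SP can be computed from per-run records as follows; "runs" is a hypothetical list of (success, FEs) pairs, one entry per independent run, not a data structure from the experiments above.

def success_metrics(runs):
    """runs: list of (succeeded, fes) pairs, one per independent run.
    SR = successful runs / total runs.
    SP = mean FEs of successful runs * total runs / successful runs."""
    total = len(runs)
    succ_fes = [fes for ok, fes in runs if ok]
    if not succ_fes:
        return 0.0, float("inf")                 # SR = 0; SP is undefined ("-")
    sr = len(succ_fes) / total
    sp = (sum(succ_fes) / len(succ_fes)) * total / len(succ_fes)
    return sr, sp

# Example: 3 runs, 2 successful -> SR = 2/3, SP = 1100 * 3 / 2 = 1650
print(success_metrics([(True, 1000.0), (True, 1200.0), (False, 2.0e5)]))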


As shown in Fig. 2, for the four unimodal functions, MPCPSO also has better convergence efficiency than the compared PSO algorithms on functions f1 and f2. On functions f3 and f4, although MPCPSO converges faster than the other PSO algorithms, its convergence accuracy is only moderate, and it does not converge to the global optimal solution. For the seven multimodal functions shown in Fig. 3, except for f5, MPCPSO has a significant convergence advantage over the other PSO algorithms. From Fig. 4, on the five rotated functions, the experimental results verify that MPCPSO performs competitively. SPLSO and THSPSO also achieve good convergence rates, but overall, they perform worse than MPCPSO.

According to the above analysis, MPCPSO obtains better results than the other seven PSO variants, not only with regard to solution accuracy but also with regard to convergence speed, especially when addressing complex problems. The main reasons are that, on the one hand, the proposed DSMLS guarantees information sharing from the optimal particles and coevolution between the two subpopulations; consequently, the algorithm achieves better convergence accuracy. On the other hand, the proposed MDCLS increases the population diversity and ensures a better convergence speed by constructing efficient learning exemplars and introducing a differential mutation strategy.
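To make these two mechanisms concrete, the Python sketch below illustrates one possible realization under our own simplifying assumptions: a fixed equal-width segmentation stands in for the dynamic segmentation of the DSMLS, and the classic DE/rand/1 form v = x_r1 + F(x_r2 - x_r3) stands in for the differential mutation operator. The exact operators, selection rules, and parameter settings of MPCPSO are those given in the method section, not this sketch.

import numpy as np

rng = np.random.default_rng()

def segment_mean_exemplar(better_particles, n_segments):
    # Split the dimensions into segments and let each segment of the
    # exemplar take the mean of several better particles over those
    # dimensions (a fixed split; the DSMLS segments dynamically).
    dim = better_particles.shape[1]
    exemplar = np.empty(dim)
    for seg in np.array_split(np.arange(dim), n_segments):
        exemplar[seg] = better_particles[:, seg].mean(axis=0)
    return exemplar

def de_rand_1(population, i, F=0.5):
    # Classic DE/rand/1 mutation: perturb one random particle with the
    # scaled difference of two others to inject diversity.
    candidates = [j for j in range(len(population)) if j != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return population[r1] + F * (population[r2] - population[r3])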


Fig. 4. The convergence curves under 30 dimensions for five rotated functions

5. Conclusions

In this paper, we proposed a multipopulation cooperative particle swarm optimization algorithm with a mixed mutation strategy. In MPCPSO, the DSMLS is used to ensure the algorithm’s exploration ability, and the MDCLS is used to improve the algorithm’s local search ability. Additionally, a differential mutation operator is introduced to enhance the population diversity in the later stage of the algorithm. The comparison results between MPCPSO and seven well-known PSO algorithms verify the effectiveness of MPCPSO and indicate that the proposed DSMLS and MDCLS effectively help the algorithm handle complex optimization problems and achieve a faster convergence speed and better convergence accuracy. However, the comparison results also show that MPCPSO has some weaknesses when solving certain complex functions; it still cannot always find the global optimum quickly, for example, on function f2. Therefore, our future work will focus on the global search characteristics and population diversity of MPCPSO to improve its search efficiency. Moreover, practical applications of MPCPSO will be investigated.

Credit Author Statement

Wei Li: Conceptualization, Methodology, Validation. Xiang Meng: Data curation, Writing - Original draft preparation. Ying Huang: Visualization, Software, Investigation. Zhang-Hua Fu: Supervision, Writing - Reviewing and Editing.


Declaration of interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgement

This work was supported in part by projects from the National Natural Science Foundation of China (Grant Nos. 61903089, U1613226 and 1813216), in part by the Shenzhen Science and Technology Innovation Council (Grant Nos. JCYJ20170410171923840 and JCYJ20180508162601910), in part by funding from the Shenzhen Institute of Artificial Intelligence and Robotics for Society (Grant No. 2019-INT003), and in part by the Science Foundation of Jiangxi University of Science and Technology (Grant No. JXXJBS18059).

References

[1] Hosseinabadi, A. A. R., Vahidi, J., Saemi, B., Sangaiah, A. K., & Elhoseny, M. (2019). Extended genetic algorithm for solving open-shop scheduling problem. Soft Computing, 23(13).
[2] Huang, Y., & Li, W. (2018). A self-feedback strategy differential evolution with fitness landscape analysis. Soft Computing, 22(23), 7773-7785.
[3] Xue, Y., Jiang, J., Zhao, B., & Ma, T. (2018). A self-adaptive artificial bee colony algorithm based on global best for global optimization. Soft Computing, 22(9), 2935-2952.
[4] Li, W., & Chen, Z. X. (2019). Self-feedback differential evolution adapting to fitness landscape characteristics. Soft Computing, 23(4), 1151-1163.
[5] Wei, L., Zhang, Z., Zhang, D., & Leung, S. C. (2018). A simulated annealing algorithm for the capacitated vehicle routing problem with two-dimensional loading constraints. European Journal of Operational Research, 265(3), 843-859.
[6] Engin, O., & Güçlü, A. (2018). A new hybrid ant colony optimization algorithm for solving the no-wait flow shop scheduling problems. Applied Soft Computing, 72, 166-176.
[7] Kennedy, J., & Eberhart, R. C. (1995). Particle swarm optimization. In Proc. IEEE Int. Conf. Neural Netw., 4, 1942-1948.
[8] Eberhart, R. C., & Kennedy, J. (1995). A new optimizer using particle swarm theory. In Proc. 6th Int. Symp. Micromach. Human Sci., 39-43.
[9] Alswaitti, M., Albughdadi, M., & Isa, N. A. M. (2018). Density-based particle swarm optimization algorithm for data clustering. Expert Systems with Applications, 91, 170-186.
[10] Yadav, S., Ekbal, A., & Saha, S. (2018). Feature selection for entity extraction from multiple biomedical corpora: A PSO-based approach. Soft Computing, 22, 6881-6904.


[11] Jordehi, A. R. (2015). Particle swarm optimisation (PSO) for allocation of FACTS devices in electric transmission systems: a review. Renewable and Sustainable Energy Reviews, 52, 1260-1267.
[12] Pang, H., Liu, F., & Xu, Z. R. (2018). Variable universe fuzzy control for vehicle semi-active suspension system with MR damper combining fuzzy neural network and particle swarm optimization. Neurocomputing, 306, 130-140.
[13] Wang, F., Zhu, H. Q., Li, W., & Li, K. S. (2020). A hybrid convolution network for serial number recognition on banknotes. Information Sciences, 512, 952-963.
[14] Zhang, H. W., Xie, J. W., Ge, J. A., Lu, W. L., & Zong, B. F. (2018). An entropy-based PSO for DAR task scheduling problem. Applied Soft Computing, 73, 862-873.
[15] Wang, F., Li, Y. X., Zhou, A., & Tang, K. (2019). An estimation of distribution algorithm for mixed-variable newsvendor problems. IEEE Transactions on Evolutionary Computation, https://doi.org/10.1109/TEVC.2019.2932624.
[16] Belkadi, A., Oulhadj, H., Touati, Y., Khan, S. A., & Daachi, B. (2017). On the robust PID adaptive controller for exoskeletons: A particle swarm optimization-based approach. Applied Soft Computing, 60, 87-100.
[17] Shi, Y., & Eberhart, R. (1998). A modified particle swarm optimizer. In 1998 IEEE International Conference on Evolutionary Computation Proceedings, IEEE World Congress on Computational Intelligence, 69-73.
[18] Ratnaweera, A., Halgamuge, S. K., & Watson, H. C. (2004). Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Transactions on Evolutionary Computation, 8(3), 240-255.
[19] Zhan, Z. H., Zhang, J., Li, Y., & Chung, H. S. H. (2009). Adaptive particle swarm optimization. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 39(6), 1362-1381.
[20] Chen, K., Zhou, F. Y., & Liu, A. L. (2018). Chaotic dynamic weight particle swarm optimization for numerical function optimization. Knowledge-Based Systems, 139, 23-40.
[21] Kennedy, J., & Mendes, R. (2002). Population structure and particle swarm performance. In Proceedings of the IEEE Congress on Evolutionary Computation, 2, 1671-1676.
[22] Mendes, R., Kennedy, J., & Neves, J. (2004). The fully informed particle swarm: simpler, maybe better. IEEE Transactions on Evolutionary Computation, 8(3), 204-210.
[23] Liang, J. J., Qin, A., Suganthan, P. N., & Baskar, S. (2006). Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Transactions on Evolutionary Computation, 10(3), 281-295.
[24] Wang, F., Zhang, H., Li, K. S., Lin, Z. Y., Yang, J., & Shen, X. L. (2018). A hybrid particle swarm optimization algorithm using adaptive learning strategy. Information Sciences, 436-437, 162-177.
[25] Liang, J., & Suganthan, P. (2005). Dynamic multi-swarm particle swarm optimizer. In Proceedings of the IEEE Swarm Intelligence Symposium, 124-129.
[26] Cao, L., Xu, L., & Goodman, E. D. (2018). A neighbor-based learning particle swarm optimizer with short-term and long-term memory for dynamic optimization problems. Information Sciences, 453, 463-485.


[27] Liu, H. R., Cui, J. C., Lu, Z. D., Liu, D. Y., & Deng, Y. J. (2019). A hierarchical simple particle swarm optimization with mean dimensional information. Applied Soft Computing, 76, 712-725.
[28] Deng, H., Peng, L., Zhang, H., Yang, B., & Chen, Z. X. (2019). Ranking-based biased learning swarm optimizer for large-scale optimization. Information Sciences, 493, 120-137.
[29] Yang, Q., Chen, W. N., Gu, T., Zhang, H. X., Deng, J. D., Li, Y., & Zhang, J. (2016). Segment-based predominant learning swarm optimizer for large-scale optimization. IEEE Transactions on Cybernetics, 47(9), 2896-2910.
[30] Zhang, X., Wang, X., Kang, Q., & Cheng, J. (2019). Differential mutation and novel social learning particle swarm optimization algorithm. Information Sciences, 480, 109-129.
[31] Meng, A., Li, Z., Yin, H., Chen, S., & Guo, Z. (2016). Accelerating particle swarm optimization using crisscross search. Information Sciences, 329, 52-72.
[32] Nasiraghdam, M., & Nafar, M. (2011). New approach based on hybrid GA and PSO as HGAPSO in low-frequency oscillation damping using UPFC controller. Journal of Basic and Applied Scientific Research, 1(11), 2208-2218.
[33] Gong, Y. J., Li, J. J., Zhou, Y. C., Li, Y., Chung, H. S. H., & Shi, Y. H. (2016). Genetic learning particle swarm optimization. IEEE Transactions on Cybernetics, 46, 2277-2290.
[34] Zhan, Z. H., Zhang, J., Li, Y., & Shi, Y. H. (2010). Orthogonal learning particle swarm optimization. IEEE Transactions on Evolutionary Computation, 15(6), 832-847.
[35] Peram, T., Veeramachaneni, K., & Mohan, C. K. (2003). Fitness-distance-ratio based particle swarm optimization. In Proceedings of the 2003 IEEE Swarm Intelligence Symposium (SIS '03), 174-181.
[36] Shi, Y., & Eberhart, R. (1998). Parameter selection in particle swarm optimization. In Evolutionary Programming VII, Springer Berlin Heidelberg, 591-600.
[37] Janson, S., & Middendorf, M. (2005). A hierarchical particle swarm optimizer and its adaptive variant. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 35(6), 1272-1282.
[38] Cheng, R., & Jin, Y. (2015). A competitive swarm optimizer for large scale optimization. IEEE Transactions on Cybernetics, 45(2), 191-204.
[39] Wang, F., Li, Y. X., Zhang, H., Hu, T., & Shen, X. L. (2019). An adaptive weight vector guided evolutionary algorithm for preference-based multi-objective optimization. Swarm and Evolutionary Computation, 49, 220-233.
[40] Yazdani, D., Nasiri, B., Sepas-Moghaddam, A., & Meybodi, M. R. (2013). A novel multi-swarm algorithm for optimization in dynamic environments based on particle swarm optimization. Applied Soft Computing, 13(4), 2144-2158.
[41] Liu, R., Li, J., Mu, C., & Jiao, L. (2017). A coevolutionary technique based on multi-swarm particle swarm optimization for dynamic multi-objective optimization. European Journal of Operational Research, 261(3), 1028-1051.
[42] van den Bergh, F., & Engelbrecht, A. P. (2004). A cooperative approach to particle swarm optimization. IEEE Transactions on Evolutionary Computation, 8(3), 225-239.


[43] Xu, G., Cui, Q., Shi, X., Ge, H., Zhan, Z. H., Lee, H. P., & Wu, C. (2019). Particle swarm optimization based on dimensional learning strategy. Swarm and Evolutionary Computation, 45, 33-51.
[44] Wang, F., Zhang, H., Li, Y. X., Zhao, Y. Y., & Rao, Q. (2018). External archive matching strategy for MOEA/D. Soft Computing, 22(23), 7833-7846.
[45] Nagra, A. A., Han, F., Ling, Q. H., & Mehta, S. (2019). An improved hybrid method combining gravitational search algorithm with dynamic multi swarm particle swarm optimization. IEEE Access, 7, 50388-50399.
[46] Cao, Y., Zhang, H., Li, W., Zhou, M., Zhang, Y., & Chaovalitwongse, W. A. (2018). Comprehensive learning particle swarm optimization algorithm with local search for multimodal functions. IEEE Transactions on Evolutionary Computation, 23, 718-731.
[47] Cheng, R., & Jin, Y. (2015). A social learning particle swarm optimization algorithm for scalable optimization. Information Sciences, 291, 43-60.
[48] Chen, X., Tianfield, H., Mei, C., Du, W., & Liu, G. (2017). Biogeography-based learning particle swarm optimization. Soft Computing, 21(24), 7519-7541.
[49] Suganthan, P. N., Hansen, N., Liang, J. J., & Deb, K. (2005). Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. In Proc. IEEE Congr. Evol. Comput., 1-50.
