Chaotic dynamic weight particle swarm optimization for numerical function optimization

Chaotic dynamic weight particle swarm optimization for numerical function optimization

Accepted Manuscript Chaotic Dynamic Weight Particle Swarm Optimization for Numerical Function Optimization Ke Chen , Fengyu Zhou , Aling Liu PII: DOI...

940KB Sizes 1 Downloads 228 Views

Accepted Manuscript

Chaotic Dynamic Weight Particle Swarm Optimization for Numerical Function Optimization Ke Chen , Fengyu Zhou , Aling Liu PII: DOI: Reference:

S0950-7051(17)30471-9 10.1016/j.knosys.2017.10.011 KNOSYS 4073

To appear in:

Knowledge-Based Systems

Received date: Revised date: Accepted date:

9 January 2017 5 October 2017 6 October 2017

Please cite this article as: Ke Chen , Fengyu Zhou , Aling Liu , Chaotic Dynamic Weight Particle Swarm Optimization for Numerical Function Optimization, Knowledge-Based Systems (2017), doi: 10.1016/j.knosys.2017.10.011

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

ACCEPTED MANUSCRIPT

Chaotic Dynamic Weight Particle Swarm Optimization for Numerical Function Optimization

AN US

CR IP T

Ke Chen1 Fengyu Zhou1 Aling Liu2 (1. School of Control Science and Engineering, Shandong University, Jinan 250061, Shandong, PR China) (2. Yuanshang Central Middle School, Laixi, Qingdao 266600, Shandong, PR China) *Corresponding author (Fengyu Zhou) [email protected] Abstract: Particle swarm optimization (PSO), which is inspired by social behaviors of individuals in bird swarms, is a nature-inspired and global optimization algorithm. The PSO method is easy to implement and has shown good performance for many real-world optimization tasks. However, PSO has problems with premature convergence and easy trapping into local optimum solutions. In order to overcome these deficiencies, a chaotic dynamic weight particle swarm optimization (CDW-PSO) is proposed. In the CDW-PSO algorithm, a chaotic map and dynamic weight are introduced to modify the search process. The dynamic weight is defined as a function of the fitness. The search accuracy and performance of the CDW-PSO algorithm are verified on seventeen well-known classical benchmark functions. The experimental results show that, for almost all functions, the CDW-PSO technique has superior performance compared with other nature-inspired optimizations and well-known PSO variants. Namely, the proposed algorithm of CDW-PSO has better search performance. Keyword: Particle swarm optimization; Chaotic map; Dynamic weight; Optimization

1. Introduction

AC

CE

PT

ED

M

In recent years, optimization problems are frequently encountered in many real-world applications such as statistical physics [1], computer science [2], artificial intelligence [3], pattern recognition and information theory [4, 5], etc. It is widely believed that it takes too much time to solve actual optimization problems with traditional optimization techniques, and searching for optimal solutions is extremely hard. So, there has been a growing interest in developing and investigating different optimization methods [6, 7] in the past twenty years, especially meta-heuristic optimization methods such as particle swarm optimization (PSO) [8], sine cosine algorithm (SCA) [9], moth-flame optimization algorithm (MFO) [10], ant colony optimization (ACO) [11], firefly algorithm [12], and the artificial bee colony (ABC) method [13]. These optimization algorithms have been adopted by researchers and are well suited for optimizing tasks such as function optimization [14], feature selection [15, 16], logic circuit design [17, 18] and artificial neural networks [19, 20]. With the development of nonlinear dynamics, chaotic theory has been widely applied to various domains. In this case, one of the most famous applications is the introduction of the chaotic concept into optimization algorithms [21]. At present, the chaotic concept has been successfully combined with several nature-inspired methods such as biogeography-based optimization (BBO) [22], gravitational search algorithm (GSA) [23], harmony search (HS) [24], and krill herd algorithm (KH) [25]. To date, there is no clear theory to explain the use of the chaotic sequences to replace certain parameters that change the performance of meta-heuristic optimization algorithms. However, empirical studies indicate that the chaotic theory owns a high-level of mixing ability, and thus, a chaotic map replaced the algorithm parameter. The generated solutions may have higher diversity and mobility; therefore, the chaotic map should be adopted in many studies, especially in meta-heuristic approaches. Particle swarm optimization (PSO) is a nature-inspired and global optimization algorithm originally developed by Kennedy et al. [26], which mimics the social behaviors of individuals in bird and fish swarms.

ACCEPTED MANUSCRIPT The PSO method is robust, efficient, and straightforward; it can be implemented in any computer language

CR IP T

very easily [27-29]. Therefore, this method has been applied to search optimum values in many fields. In some research, the performance of PSO has already been compared with other similar population-based optimization techniques such as genetic algorithm (GA) [30], differential evolution (DE) algorithm [31], and the bat algorithm (BA) [32]. The results show that the original PSO can find better values. However, basic PSO may get trapped in local optimum and premature convergence when solving complex optimization problems [33-35]. So to improve the performance of the original PSO on complex optimization problems, many different types of topologies have been designed to improve PSO performance. Liang et al. [36] proposed comprehensive learning PSO (CLPSO), which incorporates a novel learning strategy into PSO. Compared with other PSO variants, CLPSO shows better performance in solving multimodal problems, but the convergence rate of CLPSO on most problems is much lower than other methods. Zhan et al. [37] incorporated an orthogonal learning strategy into PSO and proposed an orthogonal learning PSO (OLPSO). Compared with other PSO algorithms and some well-known evolutionary methods,

ED

M

AN US

the experimental results illustrate that OLPSO has higher efficiency; however, for a few problems, OLPSO is easily trapped in the local optimal. Parsopoulos and Vrahatis [38] introduced local and global variants of PSO to combine their exploration and exploitation abilities. Liu et al. [39] introduced differential evolution to PSO and proposed a hybrid method called PSO-DE. Based on grouping and reorganization techniques, Liang and Suganthan [40] proposed dynamic multi-swarm PSO to improve PSO for avoiding local optima. To overcome premature convergence, Peram et al. [41] developed the fitness-distance-ratio-based PSO (FDR-PSO). Mendes et al. [42] proposed a fully informed PSO. Ratnaweera et al. [43] introduced self-organizing and time-varying acceleration coefficients to provide the required momentum for particles to search the global optimum solution. In the above-mentioned PSO variants, it is difficult to simultaneously achieve both fast convergence speed and escape from the local optima. Undeniably, many PSO improvements can enhance the performance of PSO, but the results of these efforts are still unsatisfactory in addressing the many challenges.

CE

PT

In this paper, in order to strengthen PSO performance and further balance the exploration and exploitation process, a chaotic dynamic weight particle swarm optimization called CDW-PSO is proposed. In CDW-PSO, there are two major highlights. Firstly, in the chaotic map, the future is completely determined by the past. In addition, the chaotic map has the properties of ergodicity, randomness and sensitivity. It can perform downright searches at higher speed compared with stochastic searches that mainly rely on probabilities. Therefore, the chaotic map is introduced to tune the inertia weight  . Secondly, the

AC

exploration and exploitation capabilities contradict each other. So to achieve better optimization performance, the exploration and exploitation abilities must be balanced effectively. According to the analysis of the PSO algorithm, three major changes are proposed by introducing dynamic weight, an acceleration coefficient and the best-so-far position to modify the search process. To verify the validity and feasibility of the proposed CDW-PSO method, seventeen well-known classical benchmark functions are adopted to test CDW-PSO search performance. Firstly, CDW-PSO is compared with PSO, C-PSO and DW-PSO, and the dimension is set to 30 and 50 in turn. The experimental results show that the CDW-PSO method is effective. Secondly, CDW-PSO is compared with other well-known intelligent algorithms, and the dimension is set to 30. The simulation results show that the CDW-PSO method acquires better results in relatively all classical benchmark functions tested and is more stable in most. Finally, to further verify the performance of the CDW-PSO approach, it is compared with the other excellent PSO variants while the dimension is set to 30. The results indicate that the CDW-PSO method outperforms other PSO variants for most classical benchmark functions. In summary, the proposed CDW-PSO algorithm is successful, the CDW-PSO has a

ACCEPTED MANUSCRIPT faster convergence speed and better search performance, it can simultaneously find the global optimal values or close to the theoretical optima values effectively, and the algorithm can escape trapping in local minima better than other relevant algorithms for relatively all functions. The remainder of this paper is presented as follows: the original PSO algorithm is presented in Section 2. Section 3 describes the proposed CDW-PSO algorithm, which shows better search performance and fast convergence speed. The simulation results and comparison of the methods are shown in Section 4. Finally, conclusions are drawn and future work is presented in Section 5.

2. The original PSO algorithm

CR IP T

Particle swarm optimization is a biological-inspired optimization inspired by social behaviors of the individuals in bird and fish swarms. This technique is very robust, straightforward, and efficient; it is population-based on a stochastic optimization algorithm. The PSO algorithm updates the positions and velocities of the members of the swarm by adaptively learning from good experiences. In the original PSO algorithm, a particle represents a potential solution. When searching in the D-dimensional space, each particle i has a position vector X id  [ xi1 , xi 2 , , xiD ] and a velocity vector Vid  [vi1, vi 2 ,, viD ] to

AN US

calculate its current state, where D is the dimensions of the solution space. Moreover, particle i will keep its personal previous best position vector pbesti  [ pbesti1 , pbesti2 ,, pbestiD ] . The best position discovered by the whole population is denoted as gbest  [ gbest1 , gbest2 , , gbestD ] . The position X id and velocity Vid are initialized randomly, and updates of D-dimensional of the i particle are calculated as follows:

Vi d  Vi d  c1  r1  ( pbestid  X id )  c2  r2  ( gbest d  X id )

(1)

X id  X id  Vi d

(2)

M

where c1 and c2 are the acceleration parameters, which are set to 2.0 commonly; r1 and r2 are two

ED

uniform distributed values in the range [0,1]. Vmax and Vmin parameters may be set for the velocity values determined for each particle to control the velocity within a reasonable range. In this study, Vmax and Vmin are set as 10% of the upper and lower

PT

values, respectively. Shi and Eberhart introduced an inertia weight in order to balance exploitation and exploration abilities [44]. Using inertia weight  , Eq. (1) is modified to Eq. (3) as follows: Vi d    Vi d  c1  r1  ( pbestid  X id )  c2  r2  ( gbest d  X id ) (3)

  max 

AC

CE

Shi and Eberhart also proposed a linearly decreasing inertial weight within [0,1]. In [44], an update mechanism is made as follows:

Mj M max

 (max  min )

(4)

where M j and M max are the current iteration and maximum iteration, respectively; max , min are defined by the user.

3. The CDW-PSO algorithm The PSO, proposed in 1995, has been widely applied to many real-world optimization problems, and many different PSO variants have been introduced in the literature. However, the challenges of premature convergence and easy trapping in the local optimum solution still persist [33-35]. Therefore, a chaotic dynamic weight particle swarm optimization called CDW-PSO is proposed to improve the optimization performance of the original PSO. In the CDW-PSO algorithm, the two major highlights are described below. Firstly, from Eq. (3) the inertia weight  is shown to be vitally important in deciding the convergence

ACCEPTED MANUSCRIPT speed. Here,  linearly decreases with increasing iterations, reducing the convergence speed and accuracy of the PSO algorithm. In the PSO algorithm,  does not need to continue to linearly decrease. In fact, the chaotic map has the properties of ergodicity, non-repetition and sensitivity; it can perform downright searches at higher speeds compared with stochastic searches that mainly rely on probabilities [45]. Therefore, the chaotic map is introduced to adjust the inertia weight  , and the improved PSO is called C-PSO. According to Ref. [25, 46], the sine map is selected to tune inertia weight  . The range of the sine map is always [0,1]. Mathematically, the inertia weight  is given as follows:

  xk  A  sin(xk 1 )

xk  (0,1) , 0  A  1, k  1,2,, M max

(5)

where k is the current iteration number.

CR IP T

Secondly, the global search process refers to the capability of finding the global optimal in the solution space of the optimization problem, while the local search process refers to the capability of applying the knowledge of previous solutions to find a better solution [47]. It is known that the global search and local search abilities contradict each other. In order to achieve better optimization performance, the exploration and exploitation abilities must be effectively balanced. Eq. (2) shows that the new particle position mainly depends on the following two parts: the previous position X id and the velocity Vi d . In order to further

AN US

improve the performance of PSO, three major changes are proposed by introducing dynamic weight, the acceleration coefficient and the best-so-far position to update the new position with the other two parts. Such dynamic weight-improved PSO is called DW-PSO. The search form of DW-PSO, described as Eq. (6), is good at balancing the exploration and exploitation abilities. The operation process can be modified with the following form:

X id  X id  wij  Vi d  w'ij    gbestd 

(6)

M

where wij and w 'ij are the dynamic weight, which controls the influence of the previous solution X id and

ED

the velocity Vi d ; gbestd is the best-so-far position, which can accelerate the convergence speed of DW-PSO;  is the acceleration coefficient, which decides the maximum step size;  is a random number between 0 and 1.

PT

In the search process, if the best particle fitness is very large, the particles are far away from the global optimal position. This means that a large adjustment is needed to find the optimum solution, and then wij and  should show larger values. Correspondingly, when the best particle fitness is very small, only a small correction is needed, and wij and  must have smaller values. In addition, the dynamic weight w'ij

AC

CE

is introduced to strengthen the coupling between the previous position and the velocity. Based on the above considerations, three major changes (dynamic weight, acceleration coefficient and best-so-far position) are proposed in the search stage to tune the parameters that are used to calculate new particle positions. In this paper, the dynamic weight and acceleration coefficient are defined as functions of fitness in the search process of PSO. wij , w'ij and  are defined as follows:

wij   

exp( f ( j ) / u ) (1  exp(  f ( j ) / u ))iter w'ij  1  wij

(7) (8)

where u is the mean fitness value in the first iteration; iter is the current iteration; f ( j ) is the fitness of the jth particle. Based on the above explanation, the pseudo-code of the CDW-PSO algorithm is shown in Fig. 1.

ACCEPTED MANUSCRIPT 1 Initialize the parameters (NP, D, Xmin, Xmax, Max_iter, Vmin, Vmax) 2 Uniformly randomly initialize each particle position Xi ( i  1,2, , NP ) in the swarm 3 Generate the initial velocities Vi ( i  1,2, , NP ) for each particle randomly 4 Evaluate the fitness value of each particle using the objective function f 5 Set pbest and gbest in the swarm 6 While iter < Max_iter do 7 8

Update inertia weight

9 for i=1:NP(all particle) do 10 Update the velocity Vi

CR IP T

  xiter  A  sin(xiter 1 )

Vi d    Vi d  c1  r1  ( pbestid  X id )  c2  r2  ( gbestd  X id )

11 12

Update the position of particle X i X id  X id  wij  Vi d  w'ij    gbestd 

13

Calculate the fitness values of the new particle X i using f

15

if X i is better than pbesti

16

Set X i to be pbesti

17 18

End if if X i is better than gbest

19

Set X i to be gbest

20

End for i

22

iter  iter  1

ED

23 End While

M

21

End if

AN US

14

Fig. 1 Pseudo-code of the CDW-PSO algorithm

PT

4. Experimental study and discussion

AC

CE

In this section, seventeen classical benchmark functions are listed in Table 1, which are used to verify the performance of the CDW-PSO algorithm. These benchmark functions are widely adopted in numerical optimization methods [9, 13-14, 36-37, 48]. In this paper, the seventeen benchmark functions are divided into three groups. The first group includes seven unimodal functions. The second group includes five complex multimodal functions, where f 8 (Rastrigin) owns a large number of local optima so that it is difficult to find the global optimization values, f 9 (Ackley) has one narrow global optimum basin and many minor local optima, f10 (Griewank) includes linkage components among variables so it is difficult to reach theoretical optimal values, and f11 and f12 are continuous but own many minor local optima. The third group includes three rotated multimodal functions, one shifted function and one shifted and rotated function. In Table 1, n denotes the solution space dimension, S denotes a subset of R n , and the global optimal solution and the global optimal value f min of seventeen classical benchmark functions are given in column 5 and column 6, respectively. Twenty independent experiments are completed for each optimization function. Furthermore, nonparametric Wilcoxon rank-sum tests are conducted to determine whether the results searched by CDW-PSO are statistically different from the best result obtained by other methods. The k values presented in column 3 of Tables 2, 4 and 6 are the results of t-tests. A k value of one indicates that the

ACCEPTED MANUSCRIPT performances of the two algorithms are statistically different with 95% certainty; however, a k value of 0 implies that the performances are not statistically different. Table 1 High-dimensional classical benchmark functions Group

Name

Test function

Global opt. x*

f min

[-100,100]n

[0]n

0

[-10,10]n

[0]n

0

[-100,100]n

[0]n

0

[-100,100]n

[0]n

0

n

 f1 ( x ) 

Sphere [48]

S

x

2 i

i 1

 x  x i

i

i 1

 f3 ( x ) 

Schwefel’s 1.2 [48]

i 1

n

i

i 1

j 1

 ( x )

2

j

 f 4 ( x )  max{ xi ,1  i  n}

Schwefel’s 2.21 [48]

i

 f5 ( x ) 

Rosenbrock [48]

n 1

[100( x

i 1

 xi2 )2  ( xi  1)2 ]

i 1

 f6 ( x ) 

Step [48]

n

 ([x  0.5])

2

i

i 1

 f7 ( x ) 

Noise [48]

n

 ix

4 i

 random[0,1)

i 1

n  f 8 ( x )   ( xi2  10 cos(2xi )  10)

Rastrigin [48]

i 1

 f 10 ( x ) 

Griewank [48]



n

cos(2x ))  20  e n

1 2 i )  exp(

i 1

i

[-30,30]n

[1]n

0

[-100,100]n

[-0.5]n

0

[-1.28,1.28]n

[0]n

0

[-5.12,5.12]n

[0]n

0

[-32,32]n

[0]n

0

[-600,600]n

[0]n

0

[-50,50]n

[0]n

0

[-50,50]n

[0]n

0

[-600,600]n

[0]n

0

[-0.5,0.5]n

[0]n

0

[-5.12,5.12]n

[0]n

0

[-100,100]n

o

-450

[-32,32]n

o

-140

i 1

1 n 2 xi  4000 i 1



  f11( x )  {10sin(y1 )  n

Multimodal

n

x

AN US

 1 f9 ( x )  20exp(0.2 n

Ackley [48]

CR IP T

Unimodal

n

n

 f2 ( x) 

Schwefel’s 2.22 [48]

n

 cos( i 1

xi

) 1

i

n 1

 ( y  1) [1  10sin (y 2

i 1 )]  ( yn

2

i

 1) 2}

i 1

n

 u( x ,10,100,4) i

i 1

M

 k ( xi  a) m , xi  a 1  where y i  1  ( x i  1) , u ( x i , a, k , m)   0 ,  a  x i  a 4 k (  x  a ) m , x   a i i 

Generalized Penalized [48]

 x  1 1  sin 3x  1 x

ED

 1  f12 x   sin 2 3x1   10  

n



n

2

2

i

i

n

i 1





  12 1  sin 2 2xn   

u xi ,5,100,4

PT

i 1

Rotated Griewank [36]

AC

Rotated and

Rotated Rastrigin [36]

Shifted

Shifted Schwefel’s 1.2 [49]

1 n 2 xi  4000 i 1



n

 cos( i 1

xi

) 1

i

where y  M  x , M is an orthogonal matrix

n   k max  k max f 14 ( x )     [a k cos(2b k ( y i  0.5))]  n  [a k cos(2b k  0.5)] i 1  k  0 k 0 

CE

Rotated Weierstrass [36]

 f13 ( x ) 

where a  0.5, b  3, k max  20 , y  M  x , M is an orthogonal matrix n

 f15 ( x ) 

 (x

2 i

 10 cos(2xi )  10)

i 1

where y  M  x , M is an orthogonal matrix  n   f16 ( x )       i 1 

 zj   j 1  i



2

  1  0.4 N (0,1)   f _ bias , z  x  o 4  

o  [o1 , o2 ,, on ] : shifted global optimum

Shifted and Rotated Ackley [49]

 1 f 17 ( x )  20 exp(0.2 n

n

z i 1

2 i

1 )  exp( n

n

 cos(2z ))  20  e  f _ bias , z  x  o i

i 1

o  [o1 , o2 ,, on ] : shifted global optimum

8

To obtain an unbiased CPU time comparison, all experiments are performed in a Windows XP Professional environment using an AMD Athlon ™ 64 X2 Dual Core system with 2.11 GHz and 2.0 GB RAM, and the codes are implemented in Matlab 2009a. 4.1 Comparison of CDW-PSO with PSO, C-PSO and DW-PSO

ACCEPTED MANUSCRIPT In this subsection, to verify the effectiveness of the chaotic map combined with dynamic weight, the proposed CDW-PSO algorithm was compared with PSO, C-PSO and DW-PSO. In the CDW-PSO algorithm, the maximum number of cycles is set as 500 for all functions; the number of the particle size is taken as 40, and the acceleration parameters c1  c2  2.0 . The values of the CDW-PSO algorithm equal those of the

CR IP T

PSO, C-PSO and DW-PSO. In addition, for every classical benchmark function of Table 2, the dimension is set to 30 and 50 in turn. In order to make a clear comparison, the value below 10-50 (the error rate) is assumed to be 0. The results are shown in Table 2 and Figs. 2 - 12. As indicated in Table 2, seventeen classical benchmark functions are used. The best mean of the objective values and the best standard deviations of the classical benchmark functions obtained by the four algorithms are shown in bold. As seen from Table 2, for seven unimodal functions, the CDW-PSO algorithm achieves better results on all unimodal functions than the original PSO and C-PSO. For five multimodal functions, CDW-PSO achieves the global optimal solutions or is close to the global optimal solution on 3 functions ( f 8 , f 9 and f10 ). However, on two multimodal functions ( f11 and f12 ), the CDW-PSO algorithm could not find the

ED

M

AN US

global optimal values under the specified maximum number of iterations. For five rotated and shifted functions, CDW-PSO can directly find the theoretical optimal value on 4 functions ( f13 , f14 , f15 and f16 ). On the f17 function, all algorithms could not find the global optimal values. The CDW-PSO method surpasses the other three algorithms on 13 functions ( f1 , f 2 , f 3 , f 5 , f 7 , f 8 , f 9 , f10 , f12 , f13 , f14 , f15 and f16 ), except for the functions f 7 , f 9 and f12 under Dim = 30. CDW-PSO achieves the same best result as the DW-PSO algorithm on 9 functions ( f 4 , f 6 , f 7 , f 8 , f 9 , f10 , f13 , f14 and f15 ), except for the functions f 4 , f 7 , f 9 , f13 and f14 under Dim = 50. The DW-PSO performed better than CDW-PSO on f 4 and f11 functions under Dim = 50, while the PSO also performs well on f11 and f12 under Dim = 30 and f17 under Dim = 50. Compared with PSO, the CDW-PSO could search much better solutions than PSO on all functions in Table 1, expect for the functions f11 and f12 under Dim = 30 and f17 under Dim = 50. Compared with C-PSO, the CDW-PSO algorithm gives better results for all functions, except for the function f17 under Dim = 30. According to the t-test results, these results are mostly different

PT

from the second best results. Figs. 2 - 12 show the convergence curves obtained by four algorithms on 11 functions ( f1 , f 3 , f 5 , f 7 , f 8 , f 9 , f10 , f12 , f14 , f16 and f17 ). The values shown in these figures are the average function

AC

CE

optimum achieved from 20 independent experiments. Fig. 2 clearly shows that CDW-PSO has better performance of escaping from the local optimum and better search accuracy than the other three methods. From Fig. 3, it can be seen that CDW-PSO has the best performance for this benchmark function. Fig. 4 shows that the CDW-PSO algorithm shows the fastest convergence speed initially. Fig. 5 illustrates the values achieved for the four algorithms when using f 7 , and the figure reveals that CDW-PSO owns stronger robustness and stability than PSO, C-PSO and DW-PSO. From Fig. 6, CDW-PSO overtakes all other approaches, and the algorithm found the global optimal solution for approximately 15 generations. Fig. 7 illustrates that CDW-PSO has faster convergence speed than other algorithms when using f 9 . In Fig. 8, with reference to this classical benchmark function and very similar to the f 8 function shown in Fig. 6, CDW-PSO shows a faster convergence rate initially. Fig. 9 shows that compared with the other three methods, the CDW-PSO method has fast convergence speed towards the global values. Fig. 10 references the rotated Weierstrass function and shows that CDW-PSO has high search accuracy and fast convergence speed. Fig. 11 illustrates that CDW-PSO can obtain the global optimum -450 on the shifted Schwefel’s function ( f16 ) with a fast convergence rate. Fig. 12 demonstrates that all algorithms cannot find the global optimal value, and the original PSO shows better convergence accuracy than the other three methods. In summary, as seen in Table 2 and Figs. 2 - 12, the CDW-PSO algorithm achieves better search

ACCEPTED MANUSCRIPT performance and a stronger ability to escape from local optimum solutions than PSO, C-PSO and DW-PSO. So the CDW-PSO method is very efficient for numerical optimization problems. Table 2 Mean and standard deviations of the function values Dim

f7

f11 f12 f13

1.64×10-38

0

0

50

1

2.12×101

5.47×100

1.21×103

1.42×103

2.13×10-27

5.58×10-27

0

0

5.14×10

0

3.25×10

0

1.37×10

-20

4.64×10

-20

0

1.93×10

0

8.90×10

0

9.84×10

-15

2.95×10

-14

30

1

2.03×10

-1 0

4.58×10

-2

3.16×10

0

50

1

2.25×10

30

1

5.73×102

2.90×102

3.46×103

1.46×103

1

6.99×10

3

1.73×10

3

4

4.38×10

3

3.46×10

4.15×10

0

9.55×10

-1

1.70×10

0

0

1

2.86×10

0

3.26×10

0

30

0

1.32×10

1

1.11×10

1

50

1

1.28×10

30

0

2.10×102

2.04×102

6.88×103

6.92×103

50

0

8.83×102

3.30×102

1.07×105

30

0

1.05×100

1.23×100

50

0

4.61×101

1.38×101

30

30

30

30

0 1 0 0 0 1 0

8.38×10

-8

2.39×10

-4

2.44×10

1

9.37×10

1

4.92×10

-1

2.56×10

0

4.96×10

-1

50

0

1.22×10

0

30

1

1.22×10

-2

50

1

2.38×100 -1

30

1

1.08×10

50

0

7.79×100

30

0

1.22×100

1

7.30×10

0

3.05×10

1

6.13×10

1

9.70×10

1

2.32×10

2

30

30

0 1 0 0

AC

f17

S.D.

6.64×10-39

50

f16

Mean

2.02×102

50

f15

S.D.

1.71×102

50

f14

Mean

2.03×10-1

50

f10

S.D.

3.90×10-1

50

f9

Mean

1

50

f8

S.D.

5.02×10

-8

1.71×10

-4

6.18×10

0

1.94×10

1

3.21×10

-1

2.82×10

-1

1.05×10

-1

6.43×10

-2

2.83×10

-2

1.81×10

1.33×100 7.95×10

-2

3.87×100

1.72×10-1 1.41×10

0

2.45×10

0

4.84×10

0

2.94×10

1

4.24×10

1 2

30

1

-386.35

1.40×10

50

1

-301.19

590×102 -2

30

0

-118.91

4.77×10

50

0

-118.78

2.70×10-3

9.99×10-38 -21

1.43×10

-20

0

0

3.79×10-36 0

0 0

-43

0 0 2.35×10-42

0

5.26×10

2.83×101

2.34×10-1

2.81×101

1.88×10-1

2.06×105

4.83×101

2.03×10-1

4.82×101

1.13×10-1

2.12×102

1.96×102

0

0

0

0

1.04×103

5.18×102

0

0

0

0

0

0

0

0

0

0

0

0

0

3.40×10

-3

9.61×10

-2

6.05×10

1

1.86×10

2

4.37×10

0

6.82×10

0

6.70×10

-3

7.90×10

-2

8.99×10

2.12×10

1

0

4.43×10

1

0

1.03×10

0

8.88×10

-16

1.20×10

0

7.64×10

-15

2.07×10

0

1.01×10

1

0

1.80×10

3.07×10

0

0

1.41×10

1

3.13×10

0

0

2.51×101

2.00×101

1

1

2.88×10

0

3.87×10-37

8.47×10

0 -37

CR IP T

f6

Mean

AN US

f5

CDW-PSO

30

50

f4

DW-PSO

M

f3

C-PSO

ED

f2

PSO

PT

f1

k

CE

f

4.03×10

0

1.77×10

0

-49

0

1.79×10

-14

0

0

5.10×10

8.88×10

0

8.88×10

-16

0 0

0 -2

8.45×10-2 1.96×10

0 -16

0

0 -1

3.72×10-1 2.24×10

3.91×10

-48

-1

0 -1

1.27×10-1

8.29×10-1

2.66×10-1

2.93×10

2.31×10

0

2.95×10-1

2.01×104

6.36×104

4.31×100

2.12×10-1

4.29×100

2.33×10-1

1.51×101

8.36×100

0

0

0

0

6.65×10

1

3.84×10

1

7.00×10

0

0

3.30×10

1

3.12×10

0

0

0

0

6.36×10

1

5.55×10

0

7.91×10

0

0

1.54×10

2

4.41×10

1

0

0

0

0

3.57×10

2

7.82×10

1

0

0

-446.04

4.43×10

0

-398.28

1.08×101

-119.08

8.71×10

-2

-118.77

4.17×10-2

Mean: mean of objective values; S.D.: standard deviation.

-15

3.07×10

-14

0 -9

2.74×10

-8

0

0

898.71

6.14×10

2

-450.00

3.01×10-6

989.75

5.39×103

-448.16

1.96×10-2

-118.95

1.04×10

-1

-118.95

1.04×10-1

-118.74

6.30×10-3

-118.76

4.44×10-2

ACCEPTED MANUSCRIPT 20

10

0

10

-20

-40

10

PSO C-PSO DW-PSO CDW-PSO

CR IP T

Mean of Best Function Values

10

-60

10

-80

10

-100

-120

10

0

50

100

150

200

AN US

10

250 Iterations

300

350

400

450

500

M

Fig. 2 Comparison of performances for minimization of f1 with Dim = 30 10

ED

10

0

10

-10

PT

-20

CE

10

-30

10

AC

Mean of Best Function Values

10

PSO C-PSO DW-PSO CDW-PSO

-40

10

-50

10

-60

10

0

50

100

150

200

250 Iterations

300

350

400

450

Fig. 3 Comparison of performances for minimization of f 3 with Dim = 50

500

ACCEPTED MANUSCRIPT

9

10

PSO C-PSO DW-PSO CDW-PSO

8

10

7

Mean of Best Function Values

10

6

10

5

CR IP T

10

4

10

3

10

1

10

0

50

100

150

200

AN US

2

10

250 Iterations

300

350

400

450

500

M

Fig. 4 Comparison of performances for minimization of f 5 with Dim = 30 20

ED

10

0

10

-20

PT

-40

CE

10

PSO C-PSO DW-PSO CDW-PSO

-60

10

AC

Mean of Best Function Values

10

-80

10

-100

10

-120

10

0

50

100

150

200

250 Iterations

300

350

400

450

Fig. 5 Comparison of performances for minimization of f 7 with Dim = 50

500

ACCEPTED MANUSCRIPT

5

10

0

-5

10

PSO C-PSO DW-PSO CDW-PSO

CR IP T

Mean of Best Function Values

10

-10

10

-15

-20

10

0

50

100

150

200

AN US

10

250 Iterations

300

350

400

450

500

M

Fig. 6 Comparison of performances for minimization of f 8 with Dim = 30 2

10

0

ED

10

-2

PT

-4

10

-6

PSO C-PSO DW-PSO CDW-PSO

CE

10

-8

10

AC

Mean of Best Function Values

10

-10

10

-12

10

-14

10

-16

10

0

50

100

150

200

250 Iterations

300

350

400

450

Fig. 7 Comparison of performances for minimization of f 9 with Dim = 50

500

ACCEPTED MANUSCRIPT

5

10

0

-5

10

PSO C-PSO DW-PSO CDW-PSO

CR IP T

Mean of Best Function Values

10

-10

10

-15

-20

10

0

50

100

150

200

AN US

10

250 Iterations

300

350

400

450

500

M

Fig. 8 Comparison of performances for minimization of f10 with Dim = 30 10

10

PSO C-PSO DW-PSO CDW-PSO

9

ED

10

8

10

PT

7

6

CE

10

5

10

4

10

AC

Mean of Best Function Values

10

3

10

2

10

1

10

0

10

0

50

100

150

200

250 Iterations

300

350

400

450

Fig. 9 Comparison of performances for minimization of f12 with Dim = 50

500

ACCEPTED MANUSCRIPT 5

10

0

-5

10

PSO C-PSO DW-PSO CDW-PSO

CR IP T

Mean of Best Function Values

10

-10

10

-15

AN US

10

-20

10

0

50

100

150

200

250 Iterations

300

350

400

450

500

Fig. 10 Comparison of performances for minimization of f14 with Dim = 30 4

M

2.5

PT

2

CE

1.5

1

AC

Mean of Best Function Values

PSO C-PSO DW-PSO CDW-PSO

ED

3

x 10

0.5

0

-0.5

0

50

100

150

200

250 300 Iterations

350

400

450

Fig. 11 Comparison of performances for minimization of f16 with Dim = 30

500

ACCEPTED MANUSCRIPT

-118.5 PSO C-PSO DW-PSO CDW-PSO

-118.6

-118.65

CR IP T

Mean of Best Function Values

-118.55

-118.7

-118.8

0

50

100

150

AN US

-118.75

200

250 Iterations

300

350

400

450

500

Fig. 12 Comparison of performances for minimization of f17 with Dim = 50

M

4.2 Comparison of CDW-PSO with other algorithms

In this subsection, the CDW-PSO algorithm is compared with the other optimization methods that have

AC

CE

PT

ED

been proposed in recent years. Biogeography-based optimization (BBO) [50], chaotic biogeography-based optimization (CBBO) [22], the krill herd algorithm (KH) [51], the chaotic krill herd algorithm (CKH) [25], the gravitational search algorithm (GSA) [52], the chaotic gravitational search algorithm (CGSA) [23], the sine cosine algorithm (SCA) [9], the moth-flame optimization algorithm (MFO) [10] and the artificial bee colony (ABC) method [13] are compared with the proposed method. The experimental parameter settings for the four algorithms are shown in Table 3. The functions used for comparison are all the classical benchmark functions listed in Table 1. The dimension of the seventeen classical benchmark functions is determined as 30. For the purpose of reducing statistical errors, each experiment is repeated 20 times independently for every function, and the means of the best classical benchmark function solutions achieved were recorded. The best results among the ten methods are shown in bold. In addition, to make a clear comparison, the values below 10-50 (error rate) are assumed to be 0. The mean and the standard deviation (S.D.) of the final values are given and compared in Table 4, while Figs. 13 – 20 show the convergence graphs in terms of the best mean fitness value of each algorithm for each benchmark function. The results in Table 4 show that the CDW-PSO algorithm achieves the theoretical global optima values on 12 functions ( f1 , f 2 , f 3 , f 4 , f 6 , f 7 , f 8 , f10 , f13 , f14 , f15 and f16 ). The t-test results show that the performance of CDW-PSO and the second best algorithm are statistically different. On 1 function ( f 9 ), the CDW-PSO method could find solutions close to the global optima value. However, on 4 functions ( f 5 , f11 , f12 and f17 ), the CDW-PSO method could not achieve the global optima solutions under the maximum iteration. KH is good with the Rosenbrock function ( f 5 ). ABC performs best on the multimodal

ACCEPTED MANUSCRIPT function f11 . Also, GSA does best on the generalized penalized function ( f12 ). CGSA yields the best solution on the rotated and shifted Ackley function ( f17 ). Table 3 Parameter settings of ten algorithms Population

Maximum iteration

Dim of each object

Other

BBO

40

500

30

Mu = 0.005,   0.8

CBBO

40

500

30

Mu = 0.005,   0.8

KH

40

500

30

N max = 0.01, V f = 0.02, D max = 0.005

CKH

40

500

30

N max = 0.01, V f = 0.02, D max = 0.005

GSA

40

500

30

G0  100,  20

CGSA

40

500

30

G0  100,  20

SCA

40

500

30

r1 , r2 , r3 and r4 are random numbers

MFO

40

500

30

t is a random number in the range [-2,1]

ABC

40

500

30

Limit = 200

CDW-PSO

40

500

30

c1  c2  2.0

CR IP T

Algorithms

AN US

On the unimodal functions, the CDW-PSO method is shown to offer superior search performance among all other well-known population-based algorithms except KH. The CDW-PSO method is not the most efficient at solving problems with a narrow valley, such as the Rosenbrock optimization problem ( f 5 ). The

ED

M

Rosenbrock function has a narrow valley from the perceived local optimizer to the global optimum. This will increase the difficulty in obtaining the global optimal solutions. In this experiment, all approaches cannot find the theoretical global optima values under the specified iteration. The algorithm itself lacks diversity in the search process, likely causing this deficiency. On the multimodal functions, the CDW-PSO algorithm generally outperforms all the other meta-heuristic algorithms. The CDW-PSO method can achieve the theoretically optimal value of the Rastrigin ( f 8 ), Ackley ( f 9 ) and Griewank ( f10 ) functions. Similar to the other algorithms, the CDW-PSO method fails on the Generalized Penalized ( f11 and f12 ). The ABC algorithm obtains high-quality mean solutions with an error value of 10-5 to the f11 function, while only GSA can achieve high-quality mean solutions with an error value of 10-23 to the f12 function.

PT

On the rotated and shifted functions, the CDW-PSO method generally does better than other intelligent optimization algorithms. The CDW-PSO algorithm still obtains the global optimal value of the rotated Griewank ( f13 ), rotated Weierstrass ( f14 ), rotated Rastrigin ( f15 ) and shifted Schwefel’s 1.2 functions. Like

AC

CE

the other methods, the CDW-PSO fails on the shifted and rotated Ackley function. Figs. 13 – 20 shows the convergence graphs obtained by ten algorithms on the 8 functions ( f 2 , f 4 , f 6 , f 8 , f 9 , f11 , f13 and f15 ). The values shown in these figures are the average function optimum achieved from 20 independent experiments. From Fig. 13, the CDW-PSO algorithm avoids falling into the local optimum more successfully than the other nine algorithms. From Fig. 14, with reference to this classical benchmark function, very similar to the f 2 function shown in Fig. 13, CDW-PSO shows a better ability of escaping from the local optimum and achieves higher search accuracy. From Fig. 15, the CDW-PSO method has good local search capabilities and converges quickly. From Fig. 16, the CDW-PSO algorithm is more successful than all of the other approaches, and the algorithm determines the global optimal solution at approximately 20 generations. From Fig. 17, CDW-PSO shows better convergence speed than the other nine algorithms. From Fig. 18, the ABC and GSA algorithms slightly outperform CDW-PSO and the other seven approaches, but the CDW-PSO technique shows better stability than the other nine methods. Fig. 19 shows that CDW-PSO variants can determine the theoretical optimal value 0, while CS-PSO shows the quickest convergence speed. From Fig. 20, with reference to this rotated Rastrigin function and very similar to the

ACCEPTED MANUSCRIPT f13 function shown in Fig. 19, the CDW-PSO method shows higher search accuracy.

AC

CE

PT

ED

M

AN US

CR IP T

In summary, as seen in Table 4 and Figs. 13 – 20, the proposed CDW-PSO algorithm shows stable search ability and better convergence accuracy than the other nine algorithms. Thus, it is concluded that the CDW-PSO technique has better search performance.

CR IP T

ACCEPTED MANUSCRIPT

Table 4 Mean and the standard deviations of the function values (Dim = 30) k

BBO

CBBO

KH

CKH

GSA

CGSA

Mean

1

7.17×100

1.77×100

5.19×10-1

7.89×10-1

1.25×10-7

1.18×10-16

SCA

MFO

ABC

CDW-PSO

1.11×101

2.00×103

3.15×10-5

4.79×100

8.92×10-1

2.76×10-1

2.15×10-1

2.59×10-8

2.14×10-17

0

1.83×101

4.22×103

3.47×10-5

6.82×10-1

1.20×10-1

3.84×100

2.31×102

1.60×10-3

2.70×10-1

0

1.62×10-2

3.51×101

4.42×10-3

2.28×10-1

6.60×10-2

2.11×100

3.03×101

1.80×10-4

5.69×10-1

0

2.12×10-2

2.27×101

1.40×10-3

6.85×103

7.55×103

3.31×102

4.31×102

3.19×102

6.55×102

0

2.10×104

1.86×104

1.78×104

1.67×103

3.23×103

1.41×102

1.31×102

1.14×102

2.69×102

0

1.12×104

1.40×104

2.92×103

1.20×100

1.71×101

6.09×100

9.71×100

9.32×10-2

4.86×100

0

7.37×101

6.76×101

4.35×101

4.50×100

3.00×100

1.46×100

1.44×100

1.87×10-1

0

1.23×100

2.11×101

9.49×100

5.05×100

8.49×102

4.18×102

8.96×100

1.91×102

0

2.62×101

5.41×101

1.70×106

9.59×102

4.36×101

5.24×102

2.96×102

1.08×102

1.24×102

2.81×101

2.00×10-1

5.16×101

4.86×106

1.04×103

4.41×101

1.48×101

1.45×100

4.61×10-1

5.15×10-1

1.88×10-1

1.37×10-7

7.77×10-17

3.69×101

1.01×103

5.82×10-5

2.59×101

6.30×10-1

1.56×10-1

1.90×10-1

0

2.77×10-8

2.40×10-17

8.84×101

3.19×103

9.74×10-5

8.31×10-2

9.01×10-2

7.76×10-2

9.44×10-2

0

3.55×10-2

1.64×10-1

6.65×10-1

6.31×100

3.03×10-1

4.53×10-2

5.33×10-2

3.17×10-2

3.07×10-2

0

1.62×10-2

2.76×10-1

1.81×100

8.85×100

8.00×10-2

3.21×102

3.14×102

1.45×101

6.43×101

0

2.19×101

2.33×101

5.91×101

1.59×102

5.09×100

6.47×100

2.62×100

1.07×101

9.44×101

0

6.82×100

5.75×100

3.65×101

2.66×101

2.32×100

1.69×100

6.45×10-1

5.52×100

8.07×100

0

2.66×10-4

6.35×10-9

1.15×101

8.86×10-8

1.54×10-1

4.70×10-1

4.88×10-1

1.40×100

7.34×10-1

8.88×10-16

3.23×10-5

7.30×10-10

9.95×100

6.38×10-8

1.43×10-1

1.08×100

8.25×10-1

1.48×10-1

0

1.65×10-1

5.65×100

2.09×101

1.04×100

8.59×10-1

2.60×10-2

7.83×10-2

1.57×10-1

0

4.04×10-2

8.30×10-2

2.21×100

5.32×100

2.94×10-1

1.44×10-1

2.86×10-2

3.42×10-1

0

2.10×10-2

3.41×100

4.37×100

3.58×10-2

1.20×100

4.17×106

4.69×100

1.93×10-5

2.93×10-1

4.44×10-1

3.77×10-2

1.20×100

1.09×100

7.42×10-2

8.20×10-1

1.38×107

1.81×100

4.33×10-5

1.27×10-1

6.52×10-1

1.08×10-1

3.85×10-2

1.11×101

3.67×10-23

1.36×10-1

1.51×107

1.10×101

1.01×10-5

2.31×100

2.18×10-1

5.85×10-2

1.41×10-1

3.23×101

8.82×10-23

3.50×10-1

2.87×107

9.65×100

1.04×10-5

2.95×10-1

6.08×100

5.68×100

1.11×100

1.25×100

8.39×101

7.74×101

1.99×100

2.20×102

1.42×100

0

3.25×100

1.32×100

4.22×10-2

1.06×10-1

2.88×101

2.31×101

9.78×10-1

2.52×102

2.60×10-1

0

3.13×101

3.29×101

1.91×101

2.65×101

8.45×100

9.99×100

4.24×101

3.11×101

3.14×101

0

3.06×100

2.37×101

3.02×100

3.28×100

1.81×100

2.77×100

1.13×100

4.34×100

1.98×100

0

6.17×101

7.04×101

2.46×101

6.34×101

2.21×101

2.74×101

2.18×102

2.64×102

4.17×102

0

1.74×101

1.93×101

1.20×101

9.01×101

7.26×100

7.98×100

1.36×101

6.36×101

3.16×101

0

1

3.23×104

3.98×104

5.24×104

4.89×104

6.25×104

5.69×104

3.46×104

5.37×104

5.54×104

-450.00

8.89×103 -118.92

1.18×104 -118.98

3.59×104 -119.03

1.94×104 -119.53

2.18×104

0

9.67×103 -118.98

-119.56

5.78×103 -118.30

6.48×104 -118.30

1.07×104 -118.96

3.01×10-6 -118.95

8.83×10-2

4.12×10-2

5.75×10-2

6.03×10-2

9.49×10-2

8.93×10-2

1.50×10-14

1.50×10-14

3.87×10-2

1.04×10-1

1

S.D.

f3

Mean

1

S.D.

f4

Mean

1

S.D.

f5

Mean

1

S.D.

f6

Mean

1

S.D.

f7

Mean

1

S.D.

f8

Mean

1

S.D.

f9

Mean

1

S.D.

f10

Mean

1

S.D.

f11

Mean

1

S.D.

f12

Mean

1

S.D.

f 13

Mean

1

S.D.

f 14

Mean

1

S.D.

f 15

Mean

1

S.D.

f 16

Mean S.D. S.D.

Mean: mean of objective values; S.D.: standard deviation.

AC

f17

Mean

M

Mean

ED

f2

PT

S.D.

AN US

Criteria

CE

f f1

ACCEPTED MANUSCRIPT 20

10

10

10

0

-10

BBO CBBO KH CKH GSA CGSA SCA MFO ABC CDW-PSO

10

-20

10

CR IP T

Mean of Best Function Values

10

-30

10

-40

10

-60

10

0

50

100

150

200

AN US

-50

10

250 Iterations

300

350

400

450

500

M

Fig. 13 Comparison of performances for minimization of f 2 with Dim = 30 10

10

ED

0

10

-10

PT

BBO CBBO KH CKH GSA CGSA SCA MFO ABC CDW-PSO

-20

CE

10

-30

10

-40

10

AC

Mean of Best Function Values

10

-50

10

-60

10

-70

10

0

50

100

150

200

250 Iterations

300

350

400

450

Fig. 14 Comparison of performances for minimization of f 4 with Dim = 30

500

ACCEPTED MANUSCRIPT 5

10

0

-5

BBO CBBO KH CKH GSA CGSA SCA MFO ABC CDW-PSO

-10

10

-15

10

-20

10

CR IP T

10

0

50

100

150

200

AN US

Mean of Best Function Values

10

250 Iterations

300

350

400

450

500

M

Fig. 15 Comparison of performances for minimization of f 6 with Dim = 30 5

ED

10

0

PT CE -5

10

BBO CBBO KH CKH GSA CGSA SCA MFO ABC CDW-PSO

AC

Mean of Best Function Values

10

-10

10

-15

10

0

50

100

150

200

250 Iterations

300

350

400

450

Fig. 16 Comparison of performances for minimization of f 8 with Dim = 30

500

ACCEPTED MANUSCRIPT 2

10

0

10

-2

-4

BBO CBBO KH CKH GSA CGSA SCA MFO ABC CDW-PSO

10

-6

10

-8

10

-10

10

-12

CR IP T

Mean of Best Function Values

10

10

-16

10

0

50

100

150

200

AN US

-14

10

250 Iterations

300

350

400

450

500

M

Fig. 17 Comparison of performances for minimization of f 9 with Dim = 30 10

PT

ED

BBO CBBO KH CKH GSA CGSA SCA MFO ABC CDW-PSO

5

CE

10

AC

Mean of Best Function Values

10

0

10

-5

10

0

50

100

150

200

250 Iterations

300

350

400

450

Fig. 18 Comparison of performances for minimization of f11 with Dim = 30

500

ACCEPTED MANUSCRIPT 5

10

0

-5

10

CR IP T

Mean of Best Function Values

10

-10

10

-15

-20

10

0

50

100

150

200

AN US

10

250 Iterations

300

350

400

BBO CBBO KH CKH GSA CGSA SCA MFO ABC CDW-PSO 450

500

M

Fig. 19 Comparison of performances for minimization of f13 with Dim = 30 5

ED

10

0

PT CE -5

10

BBO CBBO KH CKH GSA CGSA SCA MFO ABC CDW-PSO

AC

Mean of Best Function Values

10

-10

10

-15

10

0

50

100

150

200

250 Iterations

300

350

400

450

Fig. 20 Comparison of performances for minimization of f15 with Dim = 30

500

ACCEPTED MANUSCRIPT 4.3 Comparison of CDW-PSO with other PSO variants

CR IP T

In this subsection, in order to evaluate the effectiveness of the proposed approach, the CDW-PSO method is compared with other PSO variants with the results shown in Table 6. The results of the PSO variant algorithms in Table 6 are found in [37] and [53]. The classical benchmark functions used for comparison of methods are the eleven functions listed in Table 1. The dimension of the classical benchmark functions is set as 30. The performance of the CDW-PSO algorithm is compared with the other eleven PSO variants including GPSO, LPSO, SPSO-40, FIPSO, HPSO-TVAC, DMS-PSO, CLPSO, OPSO, OLPSO-G, OLPSO-L and LFPSO. The best mean of the objective values and the standard deviations of the classical benchmark functions obtained by the algorithms are shown in bold. In addition, the parameter settings in experiments for twelve algorithms are shown in Table 5. As seen in Table 6, the search results of the CDW-PSO method are the theoretical global optima on 4 functions ( f 8 , f10 , f13 and f15 ). In addition, the search results of the CDW-PSO method are close to the theoretical optima values on 4 functions ( f1 , f 2 , f 7 and f 9 ). However, on 3 functions ( f 5 , f11 and f12 ), the CDW-PSO technique could not determine the theoretical global optima solutions under the maximum iteration. According to the results of Wilcoxon rank sum tests, the experimental results are

AN US

different from the second best results. Through the above analysis, the following conclusions can be drawn: compared with the other 11 algorithms, the proposed CDW-PSO technique is successful, and it has shown very good search performance on numerical optimization problems. Table 5 Parameter settings of twelve algorithms Algorithms

Population size

GPSO

40

LPSO

40

SPSO-40

Parameter settings  : 0.9~0.4, c1  c2 = 2.0, Vmax = 0.2×Range

Reference [44]

FIPSO

40

 =0.729,

[42]

HPSO-TVAC

40

DMS-PSO

40

CLPSO

40

OPSO

40

OLPSO-G

40

OLPSO-L

40 40 40

AC

CDW-PSO

ED

c

i

= 4.1, Vmax = 0.5×Range

 :0.9~0.4, c1 :2.5~0.5, c2 : 0.5~2.5, Vmax = 0.5×Range  : 0.9~0.2, c1  c2 = 2.0, m = 3, R =5, Vmax = 0.2×Range  : 0.9~0.4, c =1.49445, m = 7, Vmax = 0.2×Range  : 0.9~0.4, c1  c2 =2.0, Vmax = 0.5×Range  : 0.9~0.4, c =2.0, G = 5, Vmax = 0.2×Range  : 0.9~0.4, c =2.0, G = 5, Vmax = 0.2×Range  : 1.0~0.0, c1  c2 = 2.0, Vmax = 0.2×Range  : 1.0~0.0, c1  c2 = 2.0, Vmax = 0.1×Range

PT

CE

LFPSO

M

[54]

40

 :0.9~0.4, c1  c2 = 2.0, Vmax = 0.2×Range  =0.721, c1  c2 = 1.193, K = 3, without Vmax

[55]

[43] [40] [36] [56] [37] [37] [53] --

CR IP T

ACCEPTED MANUSCRIPT

Table 6 Mean and the standard deviations of the function values f

Criteria

k

GPSO

LPSO

SPSO-40

FIPSO

HPSO-TVAC

DMS-PSO

CLPSO

f1

Mean

1

2.05×10-32

3.34×10-14

2.29×10-96

2.42×10-13

2.83×10-33

2.65×10-31

1.58×10-12

3.56×10-32

5.39×10-14

9.48×10-96

1.73×10-13

3.19×10-33

6.25×10-31

7.70×10-13

1.49×10-21

1.70×10-10

1.74×10-53

2.76×10-8

9.03×10-20

1.57×10-18

2.51×10-8

3.60×10

-21

1.39×10

-10

1.58×10

-53

9.04×10

-9

9.58×10

-20

-18

-9

4.07×10

1

2.81×10

1

1.35×10

1

2.51×10

1

2.39×10

1

3.22×10

1

2.18×10

1

1.46×10

1

5.10×10

-1

2.65×10

1

f5

Mean

1

S.D.

f7

Mean

1

S.D.

f8

Mean

0

S.D.

f9

Mean

1

S.D.

f10

Mean

0

S.D.

f11

Mean S.D.

f12

Mean

1

f13

Mean

1

S.D.

f15

Mean

CDW-PSO

4.69×10-31

2.65×10-103

4.64×10-18

6.34×10-54

1.28×10-38

2.50×10-30

1.19×10-102

1.26×10-10

9.85×10-30

7.67×10-22

2.64×10-17

8.44×10-54

-11

1.01×10

-29

5.63×10

-22

6.92×10

-17

4.16×10

1

3.03×10

1

-4

3.73×10-53

2.15×10

1

1.26×10

0

2.38×10

1

2.81×101

2.99×10

1

1.40×10

0

3.17×10

-1

1.88×10-1

5.58×10 4.96×10

1

1.28×10

3.65×10

1

3.82×10

-7

2.28×10-2

4.02×10-3

4.24×10-3

9.82×10-2

1.45×10-2

5.85×10-3

5.50×10-2

1.16×10-2

1.64×10-2

2.41×10-3

3.74×10-228

5.60×10-3

1.66×10-3

1.28×10-3

3.26×10-2

5.05×10-3

1.11×10-3

1.70×10-3

4.10×10-3

3.25×10-3

8.07×10-4

0

2.60×10

1

7.27×10

0

3.51×10

1

6.89×10

0

4.10×10

1

1

6.51×10

1

1.34×10

1

9.43×10

0

3.48×10

0

2.72×10

1

6.02×10

0

9.09×10

-5

1.25×10

-4

6.97×10

0

3.07×10

0

1.07×10

0

9.90×10

-1

0

4.54×10

0

0

1

0

0

1.03×10

1.31×10-14

8.20×10-8

3.73×10-2

2.33×10-7

7.29×10-14

1.84×10-14

3.66×10-7

6.23×10-9

7.98×10-15

4.14×10-15

1.68×10-14

8.88×10-16

2.08×10-15

6.73×10-8

1.90×10-1

7.19×10-8

3.00×10-14

4.35×10-15

7.57×10-8

1.87×10-9

2.03×10-15

0

4.84×10-15

0

2.12×10-2

1.53×10-3

7.48×10-3

9.01×10-12

9.75×10-3

6.21×10-3

9.02×10-9

2.29×10-3

4.83×10-3

0

8.14×10-17

0

-2

-3

-2

-11

-3

-3

-9

-3

-3

-16

0

2.23×10-31 -31

4.32×10

8.10×10-16 1.07×10

-5

1.11×10

1.25×10

7.47×10-2 3.11×10

1.84×10

1.96×10-15

8.33×10

2.71×10-29

0

1.11×10

-11

1.88×10

-29

0

4.46×10

2.51×10-30

6.45×10-14

1.56×10-19

1.59×10-32

1.57×10-32

4.67×10-31

2.93×10-1

-29

-14

-19

-33

-48

-31

1.27×10-1

8.14×10

1.02×10

8.57×10

3.70×10

5.48×10

1.67×10

8.63×10

1.03×10

2.79×10

9.01×10

1.32×10-3

3.26×10-13

1.76×10-3

2.70×10-14

2.79×10-28

2.64×10-3

1.25×10-12

1.46×10-18

4.39×10-4

1.35×10-32

1.51×10-28

2.31×100

3.64×10-3

3.70×10-3

4.11×10-3

1.57×10-14

2.18×10-28

4.79×10-3

9.45×10-13

1.33×10-18

2.20×10-3

5.59×10-48

8.00×10-28

2.95×10-1

-2

-3

-3

1.80×10

1.68×10

3.05×10

1.28×10

-8

9.26×10

-3

1.02×10

-2

7.96×10

-5

1.28×10

-3

1.68×10

-3

4.19×10

-8

1.48×10

-3

0

2.41×10-2

3.47×10-3

5.70×10-3

4.29×10-8

8.80×10-3

1.24×10-2

7.66×10-5

3.70×10-3

4.13×10-3

2.06×10-7

6.17×10-3

0

6.00×101

5.34×101

4.34×101

1.50×102

5.29×101

4.20×101

8.71×101

6.38×101

4.61×101

5.34×101

1.79×100

0

1.60×101

1.40×101

1.74×101

1.45×101

1.25×101

9.74×100

1.08×101

1.97×101

1.29×101

1.34×101

9.81×100

0

CE

S.D.

1

LFPSO

1.11×10-38

2.39×10-3

7.07×10

S.D.

5.84×10

OLPSO-L

4.12×10-54

9.32×10-3

2.18×10 1

3.79×10

OLPSO-G

6.45×10-18

AN US

S.D.

M

1

ED

Mean

PT

S.D.

f2

OPSO

AC

Mean: mean of objective values; S.D.: standard deviation.

ACCEPTED MANUSCRIPT 5. Conclusions and future work

CR IP T

The CDW-PSO algorithm is proposed to simultaneously enhance the search performance of PSO and acquire good global optimal ability. PSO is combined with the chaotic map and dynamic weight in this study. Firstly, CDW-PSO is compared with PSO, C-PSO and DW-PSO, and four algorithms are proposed on seventeen classical benchmark functions with 30 and 50 dimensions. Experimental results show that the CDW-PSO method combined with the chaotic map and dynamic weight is effective. Secondly, CDW-PSO is compared with nine other well-known intelligent algorithms. The simulation results show that the CDW-PSO method obtains better results in almost all classical benchmark functions tested and is more stable in most of the functions. Finally, in order to further verify the performance of the CDW-PSO approach, it is compared with the other PSO variants. The results indicate that the CDW-PSO method outperforms other PSO variants for most of the classical benchmark functions. In summary, the proposed CDW-PSO algorithm is successful, and CDW-PSO is suitable for application to optimization problems in the numerical domain. As future work, the study of the CDW-PSO algorithm will be expanded to include global optima not located on a special point (center or diagonal). In addition, the CDW-PSO algorithm will be applied to

AN US

various optimization fields such as image segmentation, artificial intelligence, training neural networks, pattern recognition, etc. Acknowledgments Project supported by National Natural Science Foundation of China (Grant No. 61375084 and Grant No.61773242), Key Program of Natural Science Foundation of Shandong Province (ZR2015QZ08), Key Program of Scientific and Technological Innovation of Shandong Province (2017CXGC0926), Key

AC

CE

PT

ED

M

Research and Development Program of Shandong Province (2017GGX30133). In addition, we are grateful to the anonymous reviewers for their valuable comments that helped us improve this paper. References [1] Q. K. Pan, M.F. Tasgetiren, P.N. Suganthan, et al. A Discrete Artificial Bee Colony Algorithm for the Lot-Streaming Flow Shop Scheduling Problem. Information sciences, 2011, 181(12): 2455-2468. [2] V. Ho-Huu, T. Nguyen-Thoi, T. Vo-Duy, et al. An adaptive elitist differential evolution for optimization of truss structures with discrete design variables. Computers & Structures, 2016, 165: 59-75. [3] G. Bartsch, A.P. Mitra, S.A. Mitra, et al. Use of Artificial Intelligence and Machine Learning Algorithms with Gene Expression Profiling to Predict Recurrent Nonmuscle Invasive Urothelial Carcinoma of the Bladder. The Journal of urology, 2016, 195(2): 493-498. [4] H.Q. Mu, K.V. Yuen, S.C. Kuok. Modal frequency-ambient condition pattern recognition by a novel machine learning algorithm//Mechanics of Structures and Materials: Advancements and Challenges. CRC Press, 2016: 1013-1018. [5] C.M. Lai, W.C. Yeh, Y.C. Huang. Entropic simplified swarm optimization for the task assignment problem. Applied Soft Computing, 2017, 58: 115-127. [6] M. Singh, B.K. Panigrahi, A.R. Abhyankar. Optimal coordination of directional over-current relays using Teaching Learning-Based Optimization (TLBO) algorithm. International Journal of Electrical Power & Energy Systems, 2013, 50: 33-41. [7] S. Duman, U. Güvenç, Y. Sönmez, et al. Optimal power flow using gravitational search algorithm. Energy Conversion and Management, 2012, 59: 86-95. [8] K.L. Du, M.N.S Swamy. Particle swarm optimization//Search and Optimization by Metaheuristics.

ACCEPTED MANUSCRIPT Springer International Publishing, 2016: 153-173.

CR IP T

[9] S. Mirjalili. SCA: a sine cosine algorithm for solving optimization problems. Knowledge-Based Systems, 2016, 96: 120-133. [10] S. Mirjalili. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowledge-Based Systems, 2015, 89: 228-249. [11] K.L. Du, M.N.S Swamy. Ant colony optimization//Search and Optimization by Metaheuristics. Springer International Publishing, 2016: 191-199. [12] X.S. Yang. Multiobjective firefly algorithm for continuous optimization. Engineering with Computers, 2013, 29(2): 175-184. [13] G. Li, P. Niu, Y. Ma, et al. Tuning extreme learning machine by an improved artificial bee colony to model and optimize the boiler efficiency. Knowledge-Based Systems, 2014, 67: 278-289. [14] M. Li, D. Lin, J. Kou. A hybrid niching PSO enhanced with recombination-replacement crowding strategy for multimodal function optimization. Applied Soft Computing, 2012, 12(3): 975-987.

ED

M

AN US

[15] B. Xue, M. Zhang, W.N. Browne, et al. A survey on evolutionary computation approaches to feature selection. IEEE Transactions on Evolutionary Computation, 2016, 20(4): 606-626. [16] S. Fong, R. Wong, A.V. Vasilakos. Accelerated PSO swarm search feature selection for data stream mining big data. IEEE Transactions on Services Computing, 2016, 9(1): 33-45. [17] J.P. Yang, C.K. Kung, F.T. Liu, et al. Logic circuit design by neural network and PSO algorithm//Pervasive Computing Signal Processing and Applications (PCSPA), 2010 First International Conference on. IEEE, 2010: 456-459. [18] P. Siano, C. Citro. Designing fuzzy logic controllers for DC–DC converters using multi-objective particle swarm optimization. Electric Power Systems Research, 2014, 112: 74-83. [19] Y.Y. Lin, J.Y. Chang, C.T. Lin. Identification and prediction of dynamic systems using an interactively recurrent self-evolving fuzzy neural network. IEEE transactions on neural networks and learning systems, 2013, 24(2): 310-321.

AC

CE

PT

[20] C. Karakuzu, F. Karakaya, M.A. Çavuşlu. FPGA implementation of neuro-fuzzy system with improved PSO learning. Neural Networks, 2016, 79: 128-140. [21] L. Huang, S. Ding, S. Yu, et al. Chaos-enhanced Cuckoo search optimization algorithms for global optimization. Applied Mathematical Modelling, 2016, 40(5): 3860-3875. [22] S. Saremi, S. Mirjalili, Lewis A. Biogeography-based optimization with chaos. Neural Computing and Applications, 2014, 25(5): 1077-1097. [23] F.Y. Ju, W.C. Hong. Application of seasonal SVR with chaotic gravitational search algorithm in electricity forecasting. Applied Mathematical Modeling, 2013, 37(23): 9643-9651. [24] B. Alatas. Chaotic harmony search algorithms. Applied Mathematics and Computation, 2010, 216(9): 2687-2699. [25] G.G. Wang, L. Guo, A.H. Gandomi, et al. Chaotic krill herd algorithm. Information Sciences, 2014, 274: 17-34. [26] R. Ebenhart. Kennedy. Particle swarm optimization//Proceeding IEEE Inter Conference on Neural Networks, Perth, Australia, Piscat-away. 1995, 4: 1942-1948. [27] J. Dai, H. Han, Q. Hu, et al. Discrete particle swarm optimization approach for cost sensitive attribute reduction. Knowledge-Based Systems, 2016, 102: 116-126. [28] M.R. Bonyadi, Z. Michalewicz. Particle swarm optimization for single objective continuous space problems: a review. Evolutionary computation, 2016. [29] M.S. Arumugam, M.V.C. Rao, A.W.C. Tan. A novel and effective particle swarm optimization like

ACCEPTED MANUSCRIPT algorithm with extrapolation technique. Applied soft computing, 2009, 9(1): 308-320.

CR IP T

[30] T. Vidal, T.G. Crainic, M. Gendreau, et al. A hybrid genetic algorithm with adaptive diversity management for a large class of vehicle routing problems with time-windows. Computers & operations research, 2013, 40(1): 475-489. [31] P. Civicioglu, E. Besdok. A conceptual comparison of the Cuckoo-search, particle swarm optimization, differential evolution and artificial bee colony algorithms. Artificial intelligence review, 2013: 1-32. [32] X.S. Yang, A.H. Gandomi. Bat algorithm: a novel approach for global engineering optimization. Engineering Computations, 2012, 29(5): 464-483. [33] D. Datta, J.R. Figueira. A real-integer-discrete-coded particle swarm optimization for design problems. Applied Soft Computing, 2011, 11(4): 3625-3633. [34] D. Datta, J.R. Figueira. Graph partitioning by multi-objective real-valued metaheuristics: A comparative study. Applied Soft Computing, 2011, 11(5): 3976-3987.

ED

M

AN US

[35] Y. Wang, B. Li, T. Weise, et al. Self-adaptive learning based particle swarm optimization. Information Sciences, 2011, 181(20): 4515-4538. [36] J.J. Liang, A.K. Qin, P.N. Suganthan, et al. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE transactions on evolutionary computation, 2006, 10(3): 281-295. [37] Z.H. Zhan, J. Zhang, Y. Li, et al. Orthogonal learning particle swarm optimization. IEEE transactions on evolutionary computation, 2011, 15(6): 832-847. [38] K.E. Parsopoulos, M.N. Vrahatis. UPSO: A unified particle swarm optimization scheme. Lecture series on computer and computational sciences, 2004, 1(5): 868-873. [39] H. Liu, Z. Cai, Y. Wang. Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization. Applied Soft Computing, 2010, 10(2): 629-640. [40] J.J. Liang, P.N. Suganthan. Dynamic multi-swarm particle swarm optimizer with local

AC

CE

PT

search//Evolutionary Computation, 2005. The 2005 IEEE Congress on. 2005, 1: 522-528. [41] T. Peram, K. Veeramachaneni, C.K. Mohan. Fitness-distance-ratio based particle swarm optimization//Swarm Intelligence Symposium, 2003. SIS'03. Proceedings of the 2003 IEEE. 2003: 174-181. [42] R. Mendes, J. Kennedy, J. Neves. The fully informed particle swarm: simpler, maybe better. IEEE Transactions on evolutionary computation, 2004, 8(3): 204-210. [43] A. Ratnaweera, S.K. Halgamuge, H.C. Watson. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Transactions on evolutionary computation, 2004, 8(3): 240-255. [44] Y. Shi, R. Eberhart. A modified particle swarm optimizer//Evolutionary Computation Proceedings, 1998. IEEE World Congress on Computational Intelligence, The 1998 IEEE International Conference on. IEEE, 1998: 69-73. [45] A.H. Gandomi, X.S. Yang, S. Talatahari, et al. Firefly algorithm with chaos. Communications in Nonlinear Science and Numerical Simulation, 2013, 18(1): 89-98. [46] P. Niu, K. Chen, Y. Ma, et al. Model turbine heat rate by fast learning network with tuning based on ameliorated krill herd algorithm. Knowledge-Based Systems, 2017, 118: 80-92. [47] G. Li, P. Niu, X. Xiao. Development and investigation of efficient artificial bee colony algorithm for numerical function optimization. Applied soft computing, 2012, 12(1): 320-332. [48] X. Yao, Y. Liu, G. Lin. Evolutionary programming made faster. IEEE Transactions on Evolutionary

ACCEPTED MANUSCRIPT computation, 1999, 3(2): 82-102.

CR IP T

[49] P.N. Suganthan, N. Hansen, J.J. Liang, et al. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. IEEE Congress Evolutionary Computation, 2005: 91-107. [50] D. Simon. Biogeography-based optimization. IEEE transactions on evolutionary computation, 2008, 12(6): 702-713. [51] A.H. Gandomi, A.H. Alavi. Krill herd: a new bio-inspired optimization algorithm. Communications in Nonlinear Science and Numerical Simulation, 2012, 17(12): 4831-4845. [52] E. Rashedi, H. Nezamabadi-Pour, S. Saryazdi. GSA: a gravitational search algorithm. Information sciences, 2009, 179(13): 2232-2248. [53] H. Haklı, H. Uğuz. A novel particle swarm optimization algorithm with Levy flight. Applied Soft Computing, 2014, 23: 333-345. [54] J. Kennedy, R. Mendes. Population structure and particle swarm performance. IEEE Congr. Evol.

AC

CE

PT

ED

M

AN US

Comput, 2002, 1671-1676. [55] Particle Swarm Central [Online]. Available: http://www.particleswarm.info. [56] S.Y. Ho, H.S. Lin, W.H. Liauh, et al. OPSO: Orthogonal particle swarm optimization and its application to task assignment problems. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 2008, 38(2): 288-298.