Differential evolution algorithm-based parameter estimation for chaotic systems


Chaos, Solitons and Fractals 39 (2009) 2110–2118 www.elsevier.com/locate/chaos

Bo Peng b, Bo Liu a,b,*, Fu-Yi Zhang b, Ling Wang b

a Center for Chinese Agricultural Policy, Institute of Geographical Sciences and Natural Resource Research, Chinese Academy of Sciences, Beijing 100101, China
b Department of Automation, Tsinghua University, Beijing 100084, China

* Corresponding author. Address: Department of Automation, Tsinghua University, Beijing 100084, China. Tel.: +86 10 62783125; fax: +86 10 62786911. E-mail address: [email protected] (B. Liu).

Accepted 21 June 2007

Abstract

Parameter estimation for chaotic systems is an important issue in nonlinear science that has attracted increasing interest from various research fields; it can essentially be formulated as a multidimensional optimization problem. As a novel evolutionary computation technique, the differential evolution algorithm (DE) has attracted much attention and found wide application, owing to its simple concept, easy implementation and quick convergence. However, to the best of our knowledge, there is no published work on DE for estimating parameters of chaotic systems. In this paper, a DE approach is applied to estimate the parameters of the Lorenz system. Numerical simulation and comparisons demonstrate the effectiveness and robustness of DE. Moreover, the effect of the population size on the optimization performance is investigated as well.
© 2007 Elsevier Ltd. All rights reserved.

1. Introduction

As a characteristic of nonlinear systems, chaos is a bounded unstable dynamic behavior that exhibits sensitive dependence on initial conditions and includes infinitely many unstable periodic motions. Control and synchronization of chaotic systems have been investigated intensively in various fields in recent years [1–6]. Many of the proposed approaches only work under the assumption that the parameters of the chaotic system are known in advance. Nevertheless, in the real world, the parameters may be difficult to determine due to the complexity of chaotic systems. Therefore, parameter estimation for chaotic systems has become a hot topic in the past decade [7–16].

Some studies focused on synchronization-based methods for parameter estimation. In [7,8], the parameters of a given chaotic dynamic model were estimated by minimizing the average synchronization error using a scalar time series. In [9], a feedback-based synchronization method and an adaptive control method (suggested in [10]) were both introduced to estimate parameters for several chaotic systems. Simulation demonstrated that this kind of combination was effective and reasonably robust in a noisy environment. The approach proposed in [9] was also used in [11] to estimate one parameter of the transmitter in chaotic signal communication so as to guarantee security. Alvarez et al. [12] estimated a parameter from the two-valued symbolic sequences generated by iterations of the quadratic map when its initial value was known. Wu et al. [13] further studied this kind of symbolic sequence and identified parameters even when the initial value was unknown by adding some features of the chaotic orbits to a bi-search scheme. In [14], several feedback control gains were introduced to synchronize the model system and the original physical system. The feedback gains were also treated as design parameters, and all the parameters were then estimated via a minimization procedure on the synchronization errors. In [15], an adaptive control-based synchronization method was presented for parameter identification of a modified chaotic Van der Pol–Duffing oscillator. Besides the methods based on chaotic synchronization and control theories, the linear associative memory method was applied to estimate parameters of the logistic map in [16]. Additionally, a genetic algorithm (GA) and particle swarm optimization (PSO) were adopted to estimate parameters of the Lorenz chaotic system in [17] and [18], respectively.

Recently, a new evolutionary technique, differential evolution (DE), has been proposed [22] as an alternative to the genetic algorithm (GA) [23] and particle swarm optimization (PSO) [19–21,24] for unconstrained continuous optimization problems. Although the original objective in the development of DE was to solve the Chebychev polynomial problem, it has been found to be an efficient and effective technique for complex functional optimization problems. In a DE system, a population of solutions is initialized randomly and then evolved toward optimal solutions through mutation, crossover and selection procedures. Compared with GA and PSO, DE has some attractive characteristics. It uses a simple differential operator to create new candidate solutions and a one-to-one competition scheme to greedily select new candidates; these operators work on real numbers in a natural manner and avoid the complicated generic search operators of GA. It has memory, so knowledge of good solutions is retained in the current population, whereas in GA previous knowledge of the problem is destroyed once the population changes, and in PSO a secondary archive is needed. It also features constructive cooperation between individuals: individuals in the population share information with each other. Owing to its simple concept, easy implementation and quick convergence, DE has attracted much attention and found wide application in different fields, mainly for various continuous optimization problems [25]. However, to the best of our knowledge, there is no research on DE for parameter estimation of chaotic systems.

In this paper, parameter estimation for chaotic systems is formulated as a multidimensional optimization problem, and a DE approach is implemented to solve it. To the best of our knowledge, this is the first work to apply DE to estimate parameters of chaotic systems. Numerical simulation based on the Lorenz system and comparisons with results obtained by PSO [18] and GA [17,18] demonstrate the effectiveness, efficiency and robustness of DE.

The remainder of this paper is organized as follows. Parameter estimation is formulated as a multidimensional optimization problem in Section 2. Section 3 presents a brief review and an implementation of DE. Numerical simulation and comparisons are provided in Section 4. Finally, we conclude in Section 5 with a brief summary of the results.
2. Problem formulation

Consider the following n-dimensional chaotic system:

\dot{X} = F(X, X_0, \theta_0),    (1)

where X = (x_1, x_2, \ldots, x_n)^T \in R^n denotes the state vector, X_0 denotes the initial state, and \theta_0 = (\theta_{10}, \theta_{20}, \ldots, \theta_{d0})^T is the set of original parameters. When estimating the parameters, the structure of the system is assumed to be known in advance, so the estimated system can be described as

\dot{Y} = F(Y, X_0, \theta),    (2)

where Y = (y_1, y_2, \ldots, y_n)^T \in R^n denotes the state vector of the estimated system, and \theta = (\theta_1, \theta_2, \ldots, \theta_d)^T is the set of estimated parameters. The problem of parameter estimation can therefore be formulated as the following optimization problem:

\min_{\theta} J = \frac{1}{M} \sum_{k=1}^{M} \| X_k - Y_k \|^2,    (3)

where M denotes the length of data used for parameter estimation, and X_k and Y_k (k = 1, 2, \ldots, M) denote the state vectors of the original and the estimated systems at time k, respectively.

Obviously, parameter estimation for chaotic systems is a multidimensional continuous optimization problem, where the decision vector is \theta and the optimization goal is to minimize J. The principle of parameter estimation for chaotic systems in the sense of optimization is illustrated in Fig. 1. Due to the unstable dynamic behavior of chaotic systems, the parameters are not easy to obtain. In addition, there are often multiple variables in the problem and multiple local optima in the landscape of J, so traditional optimization methods easily become trapped in local optima and find it difficult to reach the globally optimal parameters.
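As a concrete illustration of Eq. (3), the following minimal sketch computes J from two sampled state trajectories. It is not code from the paper; the function name objective_J and the array layout are assumptions made here for illustration.

```python
import numpy as np

def objective_J(X_states: np.ndarray, Y_states: np.ndarray) -> float:
    """Mean squared state error of Eq. (3).

    X_states, Y_states -- arrays of shape (M, n) holding the states X_k and
    Y_k of the original and the estimated system at times k = 1, ..., M.
    """
    diff = X_states - Y_states                       # X_k - Y_k for every k
    return float(np.mean(np.sum(diff ** 2, axis=1)))  # (1/M) * sum_k ||X_k - Y_k||^2
```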

Fig. 1. The principle of parameter estimation for chaotic systems. (The diagram shows the original system \dot{X} = F(X, X_0, \theta_0) and the estimated system \dot{Y} = F(Y, X_0, \theta) driven by the same initial state X_0; their outputs X_1, \ldots, X_M and Y_1, \ldots, Y_M are compared to calculate J, and the algorithm adjusts \theta.)

3. Differential evolution algorithm

DE is a population-based evolutionary computation technique that uses a simple differential operator to create new candidate solutions and a one-to-one competition scheme to greedily select among them. The theoretical framework of DE is very simple, and DE is easy to code and implement on a computer. Besides, it is computationally inexpensive in terms of memory requirements and CPU time. Thus, DE has attracted much attention and found wide application in various fields [25].

DE starts with the random initialization of a population of individuals in the search space and exploits the cooperative behavior of the individuals in the population. It searches for the global best solution by utilizing the distance and direction information contained in the differences among population members, and the search behavior of each individual is adjusted by dynamically altering the direction and step length along which these differences act. The ith individual in the d-dimensional search space at generation t can be represented as X_i(t) = [x_{i,1}, x_{i,2}, \ldots, x_{i,d}] (i = 1, 2, \ldots, NP, where NP denotes the size of the population). At each generation t, the mutation and crossover operators are applied to the individuals, and a new population arises. Then selection takes place, and the corresponding individuals of both populations compete to form the next generation.

For each target individual X_i(t), the mutation operator generates a mutant vector V_i(t+1) = [v_{i,1}(t+1), \ldots, v_{i,d}(t+1)] by adding the weighted difference between two randomly selected individuals of the previous population to a base individual:

V_i(t+1) = X_{best}(t) + F \cdot (X_{r1}(t) - X_{r2}(t)),    (4a)

where r1, r2 \in \{1, 2, \ldots, NP\} are randomly chosen, mutually different and also different from the current index i; F \in [0, 2] is a constant called the scaling factor, which controls the amplification of the differential variation X_{r1}(t) - X_{r2}(t); and NP must be at least 4 so that the mutation can be applied. X_{best}(t), the base vector to be perturbed, is the best member of the current population, so that the best information is shared among the population.

After the mutation phase, the crossover operator is applied to increase the diversity of the population. For each target individual X_i(t), a trial vector U_i(t+1) = [u_{i,1}(t+1), \ldots, u_{i,d}(t+1)] is generated by

u_{i,j}(t+1) = \begin{cases} v_{i,j}(t+1), & \text{if } rand(j) \le CR \text{ or } j = randn(i), \\ x_{i,j}(t), & \text{otherwise,} \end{cases} \quad j = 1, 2, \ldots, d,    (4b)

where rand(j) is the jth independent random number uniformly distributed in the range [0, 1], randn(i) is a randomly chosen index from the set \{1, 2, \ldots, d\}, and CR \in [0, 1] is a constant called the crossover parameter, which controls the diversity of the population.

Following the crossover operation, selection decides whether the trial vector U_i(t+1) becomes a member of the population of the next generation t + 1. For a minimization problem, U_i(t+1) is compared with the target individual X_i(t) by the following one-to-one greedy selection criterion:

X_i(t+1) = \begin{cases} U_i(t+1), & \text{if } F(U_i(t+1)) < F(X_i(t)), \\ X_i(t), & \text{otherwise,} \end{cases}    (4c)

where F(\cdot) denotes the objective function under consideration (not to be confused with the scaling factor F) and X_i(t+1) is the individual of the new population.

The procedure described above is the standard version of DE, denoted DE/best/1/bin. Several variants of DE have been proposed, depending on the selection of the base vector to be perturbed, the number and selection of the difference vectors, and the type of crossover operator [25]. The key parameters in DE are NP (population size), F (scaling factor) and CR (crossover parameter). Proper configuration of these parameters achieves a good tradeoff between global exploration and local exploitation, and thereby increases the convergence speed and robustness of the search process.


Some basic principles have been given for selecting appropriate parameters for DE [25]. In general, the population size NP is chosen between 5d and 10d (where d is the number of dimensions), F lies in the range [0.4, 1.0] and CR in [0.1, 1.0]. The procedure of standard DE is summarized as follows:

Step 1: Randomly initialize the population of individuals for DE, where each individual contains d variables (d being the dimension of the decision vector).
Step 2: Evaluate the objective values of all individuals, and determine X_best, the individual with the best objective value.
Step 3: Perform the mutation operation for each individual according to Eq. (4a) to obtain its mutant counterpart.
Step 4: Perform the crossover operation between each individual and its corresponding mutant counterpart according to Eq. (4b) to obtain its trial individual.
Step 5: Evaluate the objective values of the trial individuals.
Step 6: Perform the selection operation between each individual and its corresponding trial counterpart according to Eq. (4c) to generate the new individual for the next generation.
Step 7: Determine the best individual of the current new population. If its objective value is better than that of X_best, update X_best and its objective value accordingly.
Step 8: If a stopping criterion is met, output X_best and its objective value; otherwise go back to Step 3.
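The following sketch puts Steps 1–8 and Eqs. (4a)–(4c) together as a minimal DE/best/1/bin loop. It is an illustrative reconstruction rather than the authors' code; the function name differential_evolution, the default parameter values and the clipping of mutants to the search range are assumptions made here.

```python
import numpy as np

def differential_evolution(objective, bounds, NP=40, F=0.8, CR=0.9, max_gen=100, seed=None):
    """Minimal DE/best/1/bin sketch following Steps 1-8 and Eqs. (4a)-(4c)."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)            # shape (d, 2): search range per variable
    d = len(bounds)

    # Steps 1-2: random initialization and evaluation
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(NP, d))
    fitness = np.array([objective(x) for x in pop])
    best = pop[fitness.argmin()].copy()

    for t in range(max_gen):                            # Step 8: stopping criterion (generation count)
        for i in range(NP):
            # Step 3: mutation, Eq. (4a) -- perturb the best member of the previous generation
            r1, r2 = rng.choice([j for j in range(NP) if j != i], size=2, replace=False)
            v = best + F * (pop[r1] - pop[r2])
            v = np.clip(v, bounds[:, 0], bounds[:, 1])  # keep mutants inside the search range (assumption)

            # Step 4: binomial crossover, Eq. (4b)
            cross = rng.random(d) <= CR
            cross[rng.integers(d)] = True               # j = randn(i): copy at least one component
            u = np.where(cross, v, pop[i])

            # Steps 5-6: one-to-one greedy selection, Eq. (4c)
            fu = objective(u)
            if fu < fitness[i]:
                pop[i], fitness[i] = u, fu

        # Step 7: update the best individual of the new population
        best = pop[fitness.argmin()].copy()

    return best, fitness.min()
```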

4. Simulation and comparisons

As a typical chaotic system, the Lorenz system is employed as an example in this paper. Its mathematical description is

\dot{x}_1 = a (x_2 - x_1),
\dot{x}_2 = b x_1 - x_1 x_3 - x_2,    (5)
\dot{x}_3 = x_1 x_2 - c x_3,

where a = 10, b = 28 and c = 8/3 are the original parameter values.

In our simulation, the original Lorenz system first evolves freely from a random initial state. After a period of transient behavior, a state vector is selected as the initial state X_0 for parameter estimation, as shown in Fig. 1. Then M = 300 successive states of both the original system and the estimated system are used to calculate J. In DE, the maximum generation number is set to 100 (used as the stopping condition), and the population size is set to 20, 40 and 120 when the number of unknown parameters is 1, 2 and 3, respectively. The search ranges of a, b and c are set to [9, 11], [20, 30] and [2, 3], respectively. Moreover, the PSO of [18] and the binary-coded GA of [17] are used for comparison, with all their parameters the same as in the literature [17,18] (the chromosome length L is set to 20, the crossover rate Pr = 0.8 and the mutation rate Pm = 0.1). To perform a fair comparison, the same computational effort is used for DE, PSO and GA; that is, the maximum generation number, population size and search ranges of the parameters in DE are the same as those in PSO and GA.

4.1. Simulation on one-dimensional parameter estimation

First, we consider one-dimensional parameter estimation, i.e., only one parameter among a, b and c is unknown and needs to be estimated. Table 1 lists the statistical results obtained by DE, PSO and GA for the three cases, where each algorithm is run 20 times independently for each case. From Table 1, it can be seen that the best results (estimated values) obtained by DE, PSO and GA are all very close to the true values. Nevertheless, the average and worst results obtained by DE greatly outperform those obtained by GA.

4.2. Simulation on two-dimensional parameter estimation

Second, two-dimensional parameter estimation is considered, i.e., two of the three parameters (a, b and c) are unknown and need to be estimated. Tables 2–4 list the statistical results obtained by DE, PSO and GA for the three cases, where each algorithm is run 20 times independently for each case.
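To make the simulation setup described at the beginning of this section concrete, the sketch below generates the reference trajectory of the Lorenz system of Eq. (5) and defines the objective J of Eq. (3) for a candidate parameter vector. The paper does not state the numerical integration scheme, step size or transient length, so the fixed-step RK4 scheme, dt = 0.01 and the 2000-step transient used here are assumptions.

```python
import numpy as np

def lorenz(state, a, b, c):
    """Right-hand side of the Lorenz system, Eq. (5)."""
    x1, x2, x3 = state
    return np.array([a * (x2 - x1), b * x1 - x1 * x3 - x2, x1 * x2 - c * x3])

def simulate(theta, x0, M=300, dt=0.01):
    """Integrate the Lorenz system for M steps with a fixed-step RK4 scheme (assumed)."""
    a, b, c = theta
    states = np.empty((M, 3))
    x = np.asarray(x0, dtype=float)
    for k in range(M):
        k1 = lorenz(x, a, b, c)
        k2 = lorenz(x + 0.5 * dt * k1, a, b, c)
        k3 = lorenz(x + 0.5 * dt * k2, a, b, c)
        k4 = lorenz(x + dt * k3, a, b, c)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        states[k] = x
    return states

# Reference data: let the original system (a, b, c) = (10, 28, 8/3) run past its transient,
# then record M = 300 successive states starting from X0, as described in the text.
theta_true = (10.0, 28.0, 8.0 / 3.0)
x0 = simulate(theta_true, np.random.rand(3), M=2000)[-1]   # state after the transient
X_ref = simulate(theta_true, x0, M=300)

def J(theta):
    """Objective of Eq. (3) for a candidate parameter vector theta = (a, b, c)."""
    Y = simulate(theta, x0, M=300)
    return float(np.mean(np.sum((X_ref - Y) ** 2, axis=1)))
```

Combined with the differential_evolution sketch from Section 3, estimating all three parameters would then amount to something like differential_evolution(J, bounds=[(9, 11), (20, 30), (2, 3)], NP=120, max_gen=100).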


Table 1
Statistical results of different methods for one-dimensional parameter estimation

                  a               J                   b               J
Average result
  DE              10.00000029     0.00000000031       28.00000033     0.00000000
  PSO             10.000000       0.000000            28.000000       0.000000
  GA              10.000004       0.000039            28.142774       9141.76472
Best result
  DE              9.999999825     0.00000000002       28.00000005     0.00000000
  PSO             10.000000       0.000000            28.000000       0.000000
  GA              10.000000       0.000000            28.000001       0.000002
Worst result
  DE              9.99999552      0.00000000098       28.00000212     0.00000000
  PSO             10.000000       0.000000            27.999999       0.000000
  GA              10.000076       0.000726            28.75000        45063.5572

                  c               J
Average result
  DE              2.666666706     0.00000000003
  PSO             2.666667        0.000000
  GA              2.652081        10829.504876
Best result
  DE              2.66666668      0.00000000000
  PSO             2.666667        0.000000
  GA              2.666667        0.000014
Worst result
  DE              2.666666521     0.00000000097
  PSO             2.666667        0.000000
  GA              2.624999        30714.343450

Table 2
Statistical results of different methods for two-dimensional parameter estimation (a and b are unknown)

                  a               b               J
Average result
  DE              9.999994871     28.00000232     0.00000000037
  PSO             10.003777       27.998373       0.016936
  GA              9.863719        28.089400       37.350294
Best result
  DE              10.0000072      27.99999869     0.000000000042
  PSO             9.999778        28.000081       0.000191
  GA              10.030342       27.987013       0.231930
Worst result
  DE              9.999904133     28.00004134     0.000000092
  PSO             10.017997       27.992305       0.081147
  GA              9.197794        28.479586       154.011912

From Tables 2–4, it can be seen that the best, average and worst results obtained by DE all outperform those obtained by PSO and GA, respectively. In particular, in Tables 2 and 3 the worst results obtained by DE are even better than the best results obtained by GA.

4.3. Simulation on three-dimensional parameter estimation

Subsequently, we further consider three-dimensional parameter estimation, where all the parameters of the Lorenz system are unknown and need to be estimated. Table 5 lists the statistical results obtained by DE, PSO and GA, where each algorithm is run 20 times independently.


Table 3
Statistical results of different methods for two-dimensional parameter estimation (a and c are unknown)

                  a               c               J
Average result
  DE              10.00000414     2.666666795     0.000000000560
  PSO             9.999580        2.666653        0.007440
  GA              9.751140        2.654561        455.573793
Best result
  DE              10.00000014     2.666666617     0.000000000051
  PSO             10.000001       2.666667        0.000000
  GA              10.012075       2.667048        0.206471
Worst result
  DE              9.999963944     2.666667459     0.00000000099
  PSO             9.992503        2.666448        0.059740
  GA              9.029427        2.624927        1386.977496

Table 4
Statistical results of different methods for two-dimensional parameter estimation (b and c are unknown)

                  b               c               J
Average result
  DE              27.99999584     2.666666221     0.00000000048
  PSO             28.004684       2.666998        0.597665
  GA              27.161623       2.616623        4769.4260986
Best result
  DE              28.00000035     2.666666692     0.000000000019
  PSO             27.999417       2.666627        0.000793
  GA              28.014393       2.667658        0.487866
Worst result
  DE              27.99996646     2.666668285     0.00000000095
  PSO             28.034171       2.669051        2.909609
  GA              24.953003       2.499762        29226.143472

Table 5
Statistical results of different methods for three-dimensional parameter estimation

                  a               b               c               J
Average result
  DE              10.010050       27.993870       2.666551        0.00036
  PSO             10.018417       27.993390       2.666281        4.18278
  GA              10.139783       27.742735       2.648585        943.76294
Best result
  DE              10.000096       27.999999       2.666664        0.0000002
  PSO             9.995332        28.007146       2.667013        0.048645
  GA              10.067167       27.922058       2.663426        4.310715
Worst result
  DE              10.054064       27.971791       2.665526        0.0016939
  PSO             10.608212       27.704424       2.657231        39.406026
  GA              10.929003       26.127605       2.562049        6461.4801

As shown in Table 5, it is again clear that the best, average and worst results obtained by DE are better than those obtained by PSO and GA, respectively, and the average result obtained by DE is even better than the best results obtained by PSO and GA. In addition, the estimated parameter values obtained by PSO are still very close to the true values of the original parameters. So, it can be concluded that DE is more effective and robust than PSO and GA for estimating the parameters of chaotic systems.

Fig. 2. A typical evolving process of the objective function value J (J versus generation).

Fig. 3. A typical searching process for parameters a and b (estimated values of a and b versus generation).

4.4. Discussion on search efficiency and parameter settings of DE

First, to study the search efficiency of DE, a case of the two-dimensional parameter estimation problem (a and b need to be estimated) is used as an example. With the same control parameters as before, a typical evolution of the objective function J is illustrated in Fig. 2, and a typical convergence process of parameters a and b is shown in Fig. 3. Fig. 2 shows that the value of J decreases very quickly to zero, which implies that DE can converge to the global optimum very fast. Moreover, it can be seen from Fig. 3 that both parameters a and b converge to their true values rapidly, which demonstrates the great efficiency of DE in achieving global optimization.

To investigate the effect of the population size on the performance of DE, experiments are carried out on the above two-dimensional parameter estimation problem with the other control parameters of DE fixed. Figs. 4 and 5 illustrate the effect of the population size on the average value of J (over 20 independent runs) and on the total number of fitness evaluations, respectively. As shown in Fig. 4, when the population size is too small, the results are poor because the solution space is not explored sufficiently. As the population size increases, the results become better, at the cost of more fitness evaluations (see Fig. 5).

Fig. 4. The average objective value J obtained by DE with different population sizes.

Fig. 5. Total number of fitness evaluations in DE with different population sizes.

However, there is a threshold beyond which the results are not affected in a significant manner. Consequently, considering both search quality and computational effort, it is recommended to choose a population size between 40 and 60; if more parameters need to be estimated, a larger population size is recommended.
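A hypothetical sweep of this kind is sketched below. It reuses the differential_evolution and J helpers sketched earlier (so it is not self-contained on its own), and the set of population sizes, the fixed c = 8/3 for the two-dimensional case and the evaluation count formula are assumptions rather than the authors' actual script.

```python
import numpy as np

# Sweep the population size for the two-dimensional case (a and b unknown, c fixed),
# averaging J over 20 independent runs of 100 generations each, as described in the text.
for NP in (20, 40, 60, 80, 100, 120):
    results = [differential_evolution(lambda th: J((th[0], th[1], 8.0 / 3.0)),
                                      bounds=[(9, 11), (20, 30)],
                                      NP=NP, max_gen=100)[1]
               for _ in range(20)]
    # NP evaluations for initialization plus NP per generation
    print(f"NP = {NP:3d}: average J = {np.mean(results):.3e}, "
          f"fitness evaluations per run = {NP * (100 + 1)}")
```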

5. Conclusion

From the viewpoint of optimization, parameter estimation for chaotic systems was formulated in this paper as a multidimensional optimization problem, and a novel evolutionary algorithm, DE, was applied to solve it. Numerical simulation and comparisons based on the Lorenz system demonstrated the effectiveness, efficiency and robustness of DE. To the best of our knowledge, this is the first report of applying DE to estimate parameters of chaotic systems. Future work will apply DE to other chaotic systems and develop more effective and adaptive DE-based approaches.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grant Nos. 60204008, 60374060 and 60574072) and the National 973 Program (Grant No. 2002CB312200).


References

[1] Hubler AW. Adaptive control of chaotic system. Helv Phys Acta 1989;62:343–6.
[2] Ott E, Grebogi C, Yorke JA. Controlling chaos. Phys Rev Lett 1990;64:1196–9.
[3] Lu Z, Shieh LS, Chen GR. On robust control of uncertain chaotic systems: a sliding-mode synthesis via chaotic optimization. Chaos, Solitons & Fractals 2003;18(4):819–27.
[4] Kapitaniak T. Continuous control and synchronization in chaotic systems. Chaos, Solitons & Fractals 1995;6:237–44.
[5] Yang SS, Duan CK. Generalized synchronization in chaotic systems. Chaos, Solitons & Fractals 1998;9(10):1703–7.
[6] Elabbasy EM, Agiza HN, El-Dessoky MM. Global synchronization criterion and adaptive synchronization for new chaotic system. Chaos, Solitons & Fractals 2005;23(4):1299–309.
[7] Parlitz U, Junge L. Synchronization based parameter estimation from time series. Phys Rev E 1996;54:6253–9.
[8] Parlitz U. Estimating model parameters from time series by autosynchronization. Phys Rev Lett 1996;76:1232–5.
[9] Maybhate A, Amritkar RE. Use of synchronization and adaptive control in parameter estimation from a time series. Phys Rev E 1999;59:284–93.
[10] Huberman BA, Lumer E. Dynamics of adaptive systems. IEEE Trans Circ Syst 1990;37:547–50.
[11] Saha P, Banerjee S, Chowdhury AR. Chaos, signal communication and parameter estimation. Phys Lett A 2004;326:133–9.
[12] Alvarez G, Montoya F, Romera M, Pastor G. Cryptanalysis of an ergodic chaotic cipher. Phys Lett A 2003;311:172–9.
[13] Wu XG, Hu HP, Zhang BL. Parameter estimation only from the symbolic sequences generated by chaos system. Chaos, Solitons & Fractals 2004;22(2):359–66.
[14] Xu DL, Lu FF. An approach of parameter estimation for non-synchronous systems. Chaos, Solitons & Fractals 2005;25(2):361–6.
[15] Fotsin HB, Woafo P. Adaptive synchronization of a modified and uncertain chaotic Van der Pol–Duffing oscillator based on parameter identification. Chaos, Solitons & Fractals 2005;24:1363–71.
[16] Gu M, Kalaba RE, Taylor GA. Obtaining initial parameter estimates for chaotic dynamical systems using linear associative memories. Appl Math Comput 1996;76:143–59.
[17] Dai D, Ma XK, Li FC, You Y. An approach of parameter estimation for a chaotic system based on genetic algorithm. Acta Physica Sinica 2002;11:2459–62 [in Chinese].
[18] He Q, Wang L, Liu B. Parameter estimation for chaotic systems by particle swarm optimization. Chaos, Solitons & Fractals 2006;34(2):654–61.
[19] Liu B, Wang L, Jin YH, Tang F, Huang DX. Improved particle swarm optimization combined with chaos. Chaos, Solitons & Fractals 2005;25:1261–71.
[20] Liu B, Wang L, Jin YH, Tang F, Huang DX. Directing orbits of chaotic systems by particle swarm optimization. Chaos, Solitons & Fractals 2006;29:454–61.
[21] Kennedy J, Eberhart RC, Shi Y, editors. Swarm intelligence. San Francisco: Morgan Kaufmann; 2001.
[22] Storn R, Price K. Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 1997;11:341–59.
[23] Wang L. Intelligent optimization algorithms with applications. Beijing: Tsinghua University & Springer Press; 2001.
[24] Liu B, Wang L, Jin YH, Huang DX. Advances in particle swarm optimization algorithm. Control Instrum Chem Ind 2005;32(3):1–6.
[25] Price K, Storn R. Differential evolution homepage. Available online.