Hybrid particle swarm cooperative optimization algorithm and its application to MBC in alumina production


Progress in Natural Science 18 (2008) 1423–1428 www.elsevier.com/locate/pnsc

Shengli Song a,b,*, Li Kong a, Yong Gan b, Rijian Su a,b

a Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
b College of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou 450002, China

Received 4 March 2008; received in revised form 2 April 2008; accepted 4 April 2008

Abstract

An effective hybrid particle swarm cooperative optimization (HPSCO) algorithm combining the simulated annealing method and the simplex method is proposed. The main idea is to divide the particle swarm into several sub-groups and to achieve optimization through cooperation among the different sub-groups. The proposed algorithm is tested on benchmark functions and applied to material balance computation (MBC) in alumina production. Results show that HPSCO has a faster convergence speed and a stronger global convergence ability than the single methods and the improved particle swarm optimization method, together with better stability and steadier convergence. Most importantly, the results demonstrate that HPSCO is more feasible and efficient than the other algorithms in MBC.

© 2008 National Natural Science Foundation of China and Chinese Academy of Sciences. Published by Elsevier Limited and Science in China Press. All rights reserved.

Keywords: PSO; Simulated annealing; Simplex; Cooperative optimization

1. Introduction

In scientific research and engineering projects, we often encounter problems that can be formulated as global optimization of an objective function with nonlinear and multi-modal characteristics. Since a global solution is required for such nonlinear and multi-modal optimization problems, global optimization is one of the most important topics in optimization. Particle swarm optimization (PSO) has recently become one of the most powerful methods for solving unconstrained and constrained global optimization problems. It is an evolutionary computation technique, first introduced as a stochastic optimization algorithm by Eberhart and Kennedy in 1995 [1,2] and inspired by the social behavior of bird flocking and fish schooling.

* Corresponding author. Tel./fax: +86 2787559613. E-mail address: [email protected] (S. Song).

Compared with traditional evolutionary computation, PSO has gained much attention and wide application in different fields [3-8], owing to its simple concept, easy implementation and quick convergence. However, traditional PSO suffers from easily falling into local solutions, slow convergence speed and low convergence accuracy, especially for nonlinear and multi-modal optimization problems. Therefore, since its first publication, a large amount of research has been devoted to studying and improving the performance of PSO [9-14], and improving the global convergence ability has been the main research direction so far. In this study, the advantages and disadvantages of the traditional optimization algorithms are analyzed, and an effective hybrid particle swarm cooperative optimization (HPSCO) algorithm combining the simulated annealing method and the simplex method is proposed.

doi:10.1016/j.pnsc.2008.04.008


2. Particle swarm optimization algorithm

PSO is a stochastic optimization approach which maintains a swarm of candidate solutions, referred to as particles. All particles have fitness values, evaluated by the fitness function to be optimized, and velocities, which direct their flight. Particles are "flown" through the hyper-dimensional search space, with each particle attracted towards the best solution found by its neighborhood and the best solution found by the particle itself.

PSO is initialized with a group of random particles (solutions) and then searches for optima by updating generations. In every iteration, each particle is updated by following two "best" values. The first is the best solution (fitness) the particle has achieved so far, called pbest. The other is the best value obtained so far by any particle in the population; this global best is called gbest. After finding the two best values, the particle updates its velocity and position as follows [3]:

v_{id}^{k+1} = w \cdot v_{id}^{k} + c_1 \cdot \mathrm{rand}() \cdot (p_{id} - x_{id}^{k}) + c_2 \cdot \mathrm{rand}() \cdot (p_{gd} - x_{id}^{k})    (1)

x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}    (2)

where d = 1, 2, ..., N; i = 1, 2, ..., M, and M is the size of the swarm; v_{id}^{k+1} is the ith particle's new velocity at the kth iteration step; p_{id} is the position at which the particle has achieved its best fitness so far, and p_{gd} is the position at which the best global fitness has been achieved so far; x_{id}^{k+1} is the ith particle's next position, based on its previous position and new velocity at the kth iteration; w is the inertia weight; c_1 and c_2 are two positive constants called the cognitive and social parameters, respectively; rand() denotes random numbers uniformly distributed in [0, 1]; and k = 1, 2, ... is the iteration number. The particles find the optimal solution by cooperation and competition among themselves.
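As a concrete illustration, here is a minimal NumPy sketch of updates (1) and (2); the function name and array shapes are our own choices, not code from the paper.

```python
import numpy as np

def pso_update(x, v, pbest, gbest, w, c1=2.0, c2=2.0):
    """One iteration of Eqs. (1)-(2) for an (M, N) swarm.

    x, v  : current positions and velocities, shape (M, N)
    pbest : each particle's best-known position p_id, shape (M, N)
    gbest : swarm's best-known position p_gd, shape (N,)
    """
    M, N = x.shape
    r1 = np.random.rand(M, N)   # rand() in Eq. (1), drawn anew per dimension
    r2 = np.random.rand(M, N)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (1)
    x = x + v                                                   # Eq. (2)
    return x, v
```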

3. Hybrid particle swarm cooperative optimization algorithm

The simplex method (SM) [15,16], also called the variable polyhedron search method, is a traditional, non-derivative, deterministic and direct approach for solving unconstrained function optimization problems. Its main idea is to initialize a convex polyhedron containing N + 1 vertices in N-dimensional Euclidean space and to evaluate the objective at each vertex, identifying the maximum, second-maximum and minimum values. A better solution is then sought through the strategies of reflection, expansion and edge shrinking, replacing the worst vertex to form a new polyhedron; after many iterations the polyhedron approaches a minimum point. SM's main advantages are its simplicity, fast optimizing speed and low computational cost; it does not require the function to be differentiable and can be applied widely. However, its convergence depends strongly on the initial values, and it easily falls into local minima.

Simulated annealing (SA) [15,16] is an intelligent optimization algorithm. It is an extension of local search and a global optimization algorithm. In the search process, SA accepts not only better but also worse neighboring solutions with a certain probability. This mechanism can be regarded as a way to explore new regions of the solution space. The probability of accepting a worse solution is large at a high initial temperature and gradually approaches zero as the temperature decreases, which makes it possible for SA to jump out of a local optimum and search for the global optimum. SA's main advantages are robustness, simplicity and ease of implementation. However, it is inherently a serial algorithm with generally low efficiency, its performance depends strongly on the algorithm parameters, and slow convergence is its main shortcoming.

Based on the main idea of the multi-swarm PSO algorithm [12,14], we here propose the hybrid particle swarm cooperative optimization (HPSCO) algorithm. Each algorithm combined in HPSCO has its own characteristics, so the hybrid can exploit their favorable properties while avoiding the unfavorable ones. The basic idea of HPSCO is to achieve optimization through cooperation among the different sub-groups. Firstly, the particle swarm is divided into S (S > 1) sub-groups, which carry out the optimization cooperatively. Secondly, the simplex algorithm and simulated annealing are used to search in the first sub-group; from the second to the (S - 1)th sub-group, the velocity and position of each particle are updated by standard PSO; and in the Sth sub-group, each particle's velocity and position are updated according to the whole group. Thus, the independent searches of the first S - 1 sub-groups ensure a large-scale exploration of the search space, while the Sth sub-group, chasing the current global optimum, ensures the overall convergence of the algorithm, improving both the efficiency and the accuracy of the optimization.

There are two key factors in the HPSCO algorithm: how the main group and the sub-groups cooperate, and how the search proceeds during optimization. The convergence properties of PSO are determined by the value of the inertia weight w: a larger w gives better global convergence, while a smaller one gives stronger local convergence. In HPSCO, different sub-groups can therefore adopt different values of w, and the main group and the sub-groups use different search strategies, so the swarm always retains strong capabilities for both local and global convergence. With HPSCO, not only is the optimization carried out within a larger solution space, but the search speed of the main group is also accelerated, so the convergence has high efficiency and precision.


In order to further improve the space exploration capability of the particles, a mutation mechanism is introduced into PSO. Let f_pbest(s, i, k) be the best fitness value of the ith particle of the sth sub-group at the kth iteration, and let f_gbest(s, k) be the best fitness value of all the particles of the sth sub-group at the kth iteration. Then the average fitness value of the sth sub-group at the kth iteration is given by

\mathrm{fit\_avg}(s, k) = \frac{1}{N} \sum_{i=1}^{N} \mathrm{f\_pbest}(s, i, k)    (3)

Let mut_prb(s, k) be the mutation probability at the kth iteration, defined as follows:

\mathrm{mut\_prb}(s, k) = \mathrm{f\_gbest}(s, k) + a \cdot (\mathrm{fit\_avg}(s, k) - \mathrm{f\_gbest}(s, k))    (4)

where a 2 [0,1]. During the iteration process, the mutation probability for the current particle is determined by its fitness value. If the fitness value of a particle is greater than mut_prb(s,k), the particle updates its positions as follows: xkid ¼ xkid þ b  ð1  2  randðÞÞ  maxfxkid  sleftd ; srightd  xkid g

ð5Þ

where xkid 2 regionðdÞ ¼ ½sleftd ; srightd  and b is the mutation parameter in the [0.001,1]. In practical applications, b can be a constant or can be declined linearly or nonlinearly. Thus, Eq. (2) can be written as 8 k xid þ vkþ1 > id þ b  ð1  2  randðÞÞ > > < maxfxk  sleft ; sright  xk g; d d id id ð6Þ ¼ xkþ1 id > if ðf pbestðs; i; kÞ > mut prbðs; kÞÞ; > > : k xid þ vkþ1 id ; ðothersÞ So, the position of the ith particle is dynamically adjusted according to the fitness value at the kth iteration, and then some particles can adequately explore other areas to improve the ability of searching a global solution as the iterations go on. During the searching process, as the iterations go on, each sub-group will converge to the local or global optimum. At the same time, the optimal location of each particle of sub-groups, the globally optimal location of all particles, and each particle’s current position approach the same position, and each particle’s velocity quickly reaches 0. All the particles tend to equilibrium. In this case, according to the gathering degree and the steady degree of current sub-group [9], it can be determined whether all the particles of current sub-group will be mutated to escape from local solution and obtain the global optimum.
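A short sketch of the mutation mechanism of Eqs. (3)-(6), assuming minimization; the helper name and array layout are ours:

```python
import numpy as np

def mutate_step(x, v, f_pbest, f_gbest, sleft, sright, a=1.0, b=1.0):
    """Apply Eq. (6): a plain PSO move plus the perturbation of Eq. (5)
    for particles whose personal-best fitness exceeds the threshold (4).

    x, v          : positions and freshly updated velocities, shape (M, N)
    f_pbest       : personal-best fitness values f_pbest(s, i, k), shape (M,)
    f_gbest       : sub-group best fitness f_gbest(s, k), scalar
    sleft, sright : bounds of region(d) = [sleft_d, sright_d], shape (N,)
    """
    fit_avg = f_pbest.mean()                        # Eq. (3)
    mut_prb = f_gbest + a * (fit_avg - f_gbest)     # Eq. (4)
    x_new = x + v                                   # non-mutated branch of Eq. (6)
    span = np.maximum(x - sleft, sright - x)        # max{...} term of Eq. (5)
    noise = b * (1.0 - 2.0 * np.random.rand(*x.shape)) * span
    mask = f_pbest > mut_prb                        # particles worse than threshold
    x_new[mask] += noise[mask]                      # mutated branch of Eq. (6)
    return x_new
```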


The HPSCO algorithm can be summarized as follows:

Step 1: Divide the group into S sub-groups and determine the relevant parameters: the initial position and velocity of each particle, the inertia weight, the initial temperature T0, a suitable annealing strategy, the stopping temperature of annealing, the allowed error E, and the maximum generations of simulated annealing and simplex.

Step 2: Evaluate the fitness value of each particle and the average fitness value of each sub-group; the best position of the Sth sub-group is set to be the global optimum position.

Step 3: According to the gathering degree and the steady degree of the current sub-group, decide whether all the particles of the current sub-group should be mutated to escape from a local optimum.

Step 4: Compare the pbest of each particle with its current fitness value. If the current fitness value is better, assign it to pbest.

Step 5: Determine the current best fitness value of each sub-group. If it is better than gbest, assign it to gbest. Finally, the best position of the Sth sub-group is set as the global optimum position.

Step 6: Every particle in the first sub-group moves to a new location by the simulated annealing algorithm, and the new position is accepted according to the Metropolis sampling criterion; after L1 iterations, a simplex computation is run in the first sub-group. The other sub-groups update particle velocities according to Eq. (1) and particle positions according to Eq. (6). (The inertia weight and the cognitive and social acceleration factors can differ between the main group and the different sub-groups.)

Step 7: Repeat Steps 2-6 until a stop criterion is satisfied or a predefined number of iterations is completed.
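The following Python skeleton mirrors Steps 1-7 under our own naming; it is a simplified sketch, not the authors' code. SciPy's Nelder-Mead routine stands in for the simplex computation, and the Step 3 gathering/steadiness test and the Eq. (6) mutation are omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize   # Nelder-Mead used as a stand-in simplex

def hpsco(f, lo, hi, S=2, n=50, iters=500, T0=100.0, cool=0.95, L1=10):
    """Sketch of the HPSCO loop (Steps 1-7); minimizes f over the box [lo, hi]."""
    N = lo.size
    # Step 1: divide the swarm into S sub-groups and initialize randomly
    x = lo + np.random.rand(S, n, N) * (hi - lo)
    v = np.zeros_like(x)
    f_pbest = np.apply_along_axis(f, 2, x)          # Step 2: initial fitness
    pbest = x.copy()
    s0, i0 = np.unravel_index(f_pbest.argmin(), f_pbest.shape)
    gbest, f_gbest = pbest[s0, i0].copy(), f_pbest[s0, i0]
    T = T0
    for k in range(iters):
        # Step 6a: SA moves in the first sub-group, Metropolis acceptance
        for i in range(n):
            cand = np.clip(x[0, i] + 0.1 * (hi - lo) * (np.random.rand(N) - 0.5),
                           lo, hi)
            d = f(cand) - f(x[0, i])
            if d < 0 or np.random.rand() < np.exp(-d / T):
                x[0, i] = cand
        if (k + 1) % L1 == 0:                       # periodic simplex refinement
            res = minimize(f, gbest, method="Nelder-Mead")
            if res.fun < f_gbest:
                gbest, f_gbest = res.x.copy(), res.fun
        # Step 6b: PSO updates in the remaining sub-groups; the last sub-group
        # chases the current global best (all use gbest here for brevity)
        w = 0.9 - 0.5 * k / iters                   # linearly decreasing inertia
        for s in range(1, S):
            r1, r2 = np.random.rand(2, n, N)
            v[s] = w * v[s] + 2.0 * r1 * (pbest[s] - x[s]) + 2.0 * r2 * (gbest - x[s])
            x[s] = np.clip(x[s] + v[s], lo, hi)
        # Steps 4-5: refresh personal and global bests
        fx = np.apply_along_axis(f, 2, x)
        better = fx < f_pbest
        pbest[better], f_pbest[better] = x[better], fx[better]
        s0, i0 = np.unravel_index(f_pbest.argmin(), f_pbest.shape)
        if f_pbest[s0, i0] < f_gbest:
            gbest, f_gbest = pbest[s0, i0].copy(), f_pbest[s0, i0]
        T *= cool                                   # annealing schedule from Step 1
    return gbest, f_gbest
```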

4. Computation results and analysis

To test the performance of the HPSCO algorithm, two benchmark functions are first introduced to test the new algorithm; then it is applied to the material balance computation of the alumina production process; finally, the results of HPSCO are compared with those of PSO and SM-PSO [9].

4.1. Benchmark function simulation

(i) Generalized Griewank function

f(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1, \quad -600 \le x_i \le 600    (7)


(ii) Generalized Rosenbrock function

f(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right], \quad -30 \le x_i \le 30    (8)

PSO, SM-PSO and HPSCO were each run 50 times. The parameters are set as follows: S = 2, a = b = 1, c1 = c2 = 2.0, and w decreases linearly from 0.9 to 0.4. The new-state generating function of the SA algorithm is x' = x + s · dist(x) · (rand() - 0.5), where dist(x) is the length of the interval of x. The convergence accuracy is 1e-10 for SM. The swarm size, the maximum number of iterations, the variable dimension and the ranges of the x values are given in Table 1. Comparisons of PSO, SM-PSO and HPSCO are shown in Table 2.

Table 2 shows that HPSCO has a higher convergence speed and superior accuracy compared with PSO and SM-PSO for multi-dimensional and multi-modal functions. From the means and the deviations listed in Table 2, we can see that HPSCO outperforms PSO and SM-PSO with both better stability and steadier convergence, and the average success rate of the new algorithm is also better than those of PSO and SM-PSO.
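Both test functions are easy to code directly from Eqs. (7) and (8); a NumPy version of our own:

```python
import numpy as np

def griewank(x):
    """Generalized Griewank function, Eq. (7); global minimum f(0) = 0."""
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

def rosenbrock(x):
    """Generalized Rosenbrock function, Eq. (8); global minimum f(1,...,1) = 0."""
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2)

# Example with the hypothetical hpsco sketch above, 30-dimensional Griewank:
# best_x, best_f = hpsco(griewank, np.full(30, -600.0), np.full(30, 600.0))
```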


Table 1. Configuration of some parameters

             Griewank                             Rosenbrock
Algorithm    Dimension  Generation  Swarm size    Dimension  Generation  Swarm size
PSO          30         1000        100           30         1000        100
SM-PSO       30         1000        100           30         1000        100
HPSCO        30         500         100           30         500         100

Table 2. Comparison of the computational results of PSO, SM-PSO and HPSCO

Function     Algorithm   Best           Worst           Mean             Deviation
Griewank     PSO         0              0.59695072      3.0073573e-2     9.85373e-3
             SM-PSO      0              0.30446696      6.091299e-3      1.8539784e-3
             HPSCO       0              1.2372117e-2    8.82734355e-4    6.85964e-6
Rosenbrock   PSO         26.662077      6.2334239e+2    51.44359146      1.3162873e+4
             SM-PSO      23.063086      2.6443657e+2    80.8026905       2.8527095e+3
             HPSCO       3.5570951e-6   5.9407248e-5    1.1504045e-5     1e-10


4.2. Bayer material balance computation

The material balance computation in the alumina production process is an important method for guiding production and technical design [17,18]. Although there are many technical schemes for constructing a new alumina plant, only through the material balance computation can the best production method and technical process be selected, attaining the lowest investment and the lowest cost. Of the worldwide production of alumina and aluminum hydroxide, more than 90% is produced by the Bayer process. The production process of alumina is very complicated, which makes its material balance calculation complex and tedious (a flowsheet of the Bayer process can be found in Refs. [17,18]). The material balance computation is finally turned into a nonlinear multi-objective constrained optimization problem. Traditional methods and calculation tools take a long time, easily fall into locally optimal solutions, and therefore cannot meet the needs of production and technical design.

Bayer production can be seen as a complex control system, many of whose processes involve recycle computations, and each process has a direct impact on the material balance results of the entire flow. Based on analysis and deduction of the actual process, the entire flow must satisfy seven balance equations and two balance relations: (1) conservation of the additive soda quantity; (2) conservation of alumina; (3) conservation of alumina in the cycle mother liquor; (4) conservation of caustic alkali in the cycle mother liquor; (5) conservation of alumina in red mud washing; (6) conservation of caustic alkali in red mud washing; (7) conservation of carbon alkali in red mud washing; (8) the balance relation between finished aluminum hydroxide and alumina in the roasting process; and (9) the balance relation between the evaporation mother liquor and the Nc additive liquor in the evaporation process. The balance equations (1)-(7) are objective functions bound to satisfy the two constraint conditions (8) and (9). The material balance calculation as a whole can thus be turned into a nonlinear multi-objective constrained optimization problem:

\min_{x \in R} F(x) = \min_{x \in R} (f_1(x), f_2(x), \ldots, f_7(x))^T

R = \{x \mid g(x) \le 0\}, \quad g(x) = (g_1(x), g_2(x))^T    (9)

x = (x_1, x_2, \ldots, x_9)^T, \quad x \in R \subset E^9

where F(x) is the objective vector, g(x) is the constraint vector, and x is the variable vector whose components x_i (i = 1, 2, ..., 9) are, respectively: the mass of alumina in the finished aluminum products, the mass of alumina in the finished aluminum hydroxide products, the mass of additive soda in the recombination of the mother liquor, the mass of alumina in the red mud lotion, the mass of caustic alkali in the finished red mud lotion, the mass of carbon alkali in the red mud lotion, the total mass of the red mud lotion, the mass of water in the evaporation process, and the mass of water in the causticization liquor. This is a typical nonlinear multi-objective constrained optimization problem.

PSO, SM-PSO and HPSCO were all run 50 times. The population sizes are 100 for HPSCO and 200 for PSO and SM-PSO, and the maximum evolution generation is 500 for HPSCO and 2000 for PSO and SM-PSO. The other parameters are set as follows: c1 = c2 = 2.0, w again decreases linearly from 0.9 to 0.4, and the ranges of the x values in the multi-objective optimization problem are given in Table 3. Comparisons of PSO, SM-PSO and HPSCO are shown in Table 4 and Figs. 1-4.

From Table 4 and Figs. 1-4, we can see that HPSCO has a higher convergence speed and accuracy than PSO and SM-PSO, and that HPSCO can easily escape from locally optimal solutions through the mutation of particles and attain the globally optimal solution, while PSO cannot. From the means and the deviations listed in Table 4, we can see that HPSCO outperforms PSO and SM-PSO with both better stability and steadier convergence, and the average success rate of the new algorithm is better than those of PSO and SM-PSO. All these results demonstrate that HPSCO is more feasible and efficient than PSO and SM-PSO.
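The paper does not publish the seven Bayer balance equations explicitly, so the sketch below only illustrates one plausible way to feed problem (9) to a swarm optimizer: scalarize the seven residuals into a single fitness and penalize violations of g1 and g2. Both callables are hypothetical placeholders, and the sum-of-squares scalarization is our assumption, not necessarily the authors' choice.

```python
import numpy as np

def mbc_fitness(x, balance_residuals, constraints, penalty=1e6):
    """Scalarized fitness for the multi-objective problem (9).

    balance_residuals(x) -> (7,) residuals of the balance equations (1)-(7)
    constraints(x)       -> (2,) values of g1, g2 (feasible when <= 0)
    Both callables are placeholders for the unpublished Bayer-process
    equations; the quadratic penalty is one common choice, not the paper's.
    """
    f = balance_residuals(x)
    g = constraints(x)
    return np.sum(f**2) + penalty * np.sum(np.maximum(g, 0.0)**2)
```

With such a wrapper, the hypothetical hpsco sketch above could be applied directly, e.g. `hpsco(lambda x: mbc_fitness(x, res_fn, con_fn), lo, hi)` with bounds taken from Table 3.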

Table 3. The ranges of the initial values of x

Variable        x1, x2      x3, x4, x5   x6         x7          x8          x9
Initial value   [200,450]   [5,300]      [5,100]    [5,3000]    [5,5000]    [50,500]

Fig. 1. Comparison of convergence curves of PSO, SM-PSO and HPSCO for MBC.

Fig. 2. The mass of alumina in the finished aluminum products over 50 runs.


Table 4. Comparison of the computational results of PSO, SM-PSO and HPSCO on the multi-objective optimization problem (9)

Algorithm   Best         Worst        Mean            Deviation
PSO         0.41258308   12.094435    1.6116925808    8.37781841
SM-PSO      0.42014157   0.82538876   0.5002210146    5.881472e-3
HPSCO       0.13700788   0.13730576   0.13701990280   1.706e-9



Fig. 3. The mass of alumina in the finished aluminum hydroxide products over 50 runs.

Fig. 4. The mass of additive soda in the recombination of the mother liquor over 50 runs.

5. Conclusions

Based on analyses of the advantages and disadvantages of the traditional single optimization algorithms (PSO, simulated annealing and the simplex method), a hybrid particle swarm cooperative optimization algorithm combining the simulated annealing method and the simplex method is proposed to improve the speed and success rate of convergence. The results of simulation and practical application show that HPSCO not only has a powerful ability to search for globally optimal solutions, but also effectively avoids falling into locally optimal solutions, and it is highly efficient. The application of HPSCO to other areas deserves further study, and the convergence behavior and the dynamic and steady-state performance of the algorithm on specific complex optimization functions can be further improved by combining it with other evolutionary computation techniques.

Acknowledgement

This work was supported in part by the National High Technology Research and Development Program of China (Grant No. 2004AA1Z2420).

References

[1] Eberhart RC, Kennedy J. A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science, Nagoya, Japan. Piscataway, NJ: IEEE Press; 1995, p. 39-43.
[2] Kennedy J, Eberhart RC. Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, vol. IV. Piscataway, NJ: IEEE Press; 1995, p. 1942-8.
[3] Zeng JC, Jie Q, Cui ZH. Particle swarm optimization algorithm. Beijing: Science Press; 2004 [in Chinese].
[4] Yang SY, Ni GZ. A particle swarm optimization-based method for multiobjective design optimizations. IEEE Trans Magnet 2005;41(5):1756-9.
[5] Zhang XH, Meng HY, Jiao LC. Intelligent particle swarm optimization in multiobjective optimization. IEEE Congress Evol Comput 2005;1:714-9.
[6] Yang L, Hu WW, Yang SY, et al. Application of particle swarm optimization in multi-sensor multi-target tracking. In: 1st international symposium on systems and control in aerospace and astronautics; 2006, p. 715-9.
[7] Li Y, Chen X. Mobile robot navigation using particle swarm optimization and adaptive NN. In: Advances in natural computation. Heidelberg: Springer-Verlag; 2005, p. 628-31.
[8] Zhang LP, Yu HJ, Hu SX. Optimal choice of parameters for particle swarm optimization. J Zhejiang Univ Sci 2005;6(6):528-34.
[9] Song SL, Kong L, Cheng JJ, et al. A novel stochastic mutation technique for particle swarm optimization. In: Proceedings of the first international conference on bio-inspired computing: theory and applications, Wuhan, China, September 18-21, 2006, p. 282-8.
[10] Wen SH, Zhang XL, Li HN, et al. A modified particle swarm optimization algorithm. In: Proceedings of the 2005 international conference on neural networks and brain, vol. 1; 2005, p. 318-21.
[11] Li YG, Chen X. A new stochastic PSO technique for neural network training. In: Advances in neural networks. Berlin: Springer-Verlag; 2006, p. 564-9.
[12] van den Bergh F, Engelbrecht AP. A cooperative approach to particle swarm optimization. IEEE Trans Evol Comput 2004;8(3):225-39.
[13] Li AG. Particle swarms cooperative optimizer. J Fudan Univ (Nat Sci) 2004;43(5):923-5 [in Chinese].
[14] Zhao N, Zhang FS, Wei P. Reactive power optimization based on improved poly-particle swarm optimization algorithm. J Xi'an Jiaotong Univ 2006;40(4):463-7 [in Chinese].
[15] Wang L. Intelligent optimization algorithms and their applications. Beijing: Tsinghua University Press; 2000 [in Chinese].
[16] Wang L, Yan M, Li QS, et al. Effective hybrid optimization strategy for complex functions with high dimension. J Tsinghua Univ (Sci Technol) 2001;41(9):118-21 [in Chinese].
[17] Wu JS. Material balance computation in alumina production by the Bayer and series-to-parallel processes. Beijing: Metallurgical Industry Press; 2002 [in Chinese].
[18] Bi SW. Alumina production process. Beijing: Chemical Industry Press; 2006 [in Chinese].