An improved particle swarm optimizer for parametric optimization of flexible satellite controller


Applied Mathematics and Computation 217 (2011) 8512–8521


Di Hu ⇑, Ali Sarosh, Yun-Feng Dong
School of Astronautics, Beihang University, Beijing 100191, PR China


Keywords: Improved particle swarm optimization; Genetic algorithm PSO (GAPSO); Cross operation; Global optimal solution; Flexible satellite; Controller parameter optimization

Abstract: Parametric optimization of the flexible satellite controller is essential for almost all modern satellites. The particle swarm algorithm is a global optimization algorithm, but it suffers from two major shortcomings: premature convergence and low searching accuracy. To solve these problems, this paper proposes an improved particle swarm optimization (IPSO) which replaces poorly fitted particles through a cross operation. Based on a decision possibility, the cross operation can interchange local optima between three particles. Thereafter the swarm is split into two halves, and random numbers are generated by crossing the dimensions of particles from both halves, producing a new swarm. The new swarm and the old swarm are then mixed and, based on relative fitness, half of the particles are selected for the next generation. As a result of the cross operation, IPSO can easily jump out of local optima, has improved searching accuracy, and accelerates the convergence speed. Test functions with different dimensions are used to analyze the performance of the IPSO algorithm. Simulation results show that the IPSO has more advantages than standard PSO and genetic algorithm PSO (GAPSO), in that it has a more stable performance and a lower level of complexity. The IPSO is then applied to parametric optimization of flexible satellite control, for a satellite having solar wings and antennae. Simulation results show that the IPSO can effectively find the best controller parameters vis-a-vis the other optimization methods. © 2011 Elsevier Inc. All rights reserved.

1. Introduction

Particle swarm optimization (PSO), proposed by Eberhart and Kennedy in 1995, is based on swarm intelligence and is applied to global optimization problems [1]. It simulates birds' predatory behavior. Compared with the genetic algorithm and the ant colony algorithm, PSO has lower computational complexity, easier programming, and faster convergence, which has raised great interest amongst researchers in recent years. Although PSO converges quickly, is robust and widely applicable, and has been successfully applied in many areas, it suffers from premature convergence, low searching accuracy, and iterative inefficiency, especially on problems with multiple peaks, where it is likely to fall into local optima [2]. In order to overcome these limitations, many scholars have attempted to improve the PSO algorithm, for example the genetic algorithm PSO [3], differential evolution PSO [4], dynamic multi-point detecting PSO [5], binary PSO [6], self-adaptive PSO [7], knowledge-based PSO [8], and so on. These improved PSO algorithms have enriched PSO theory and are convenient to apply in various areas.

⇑ Corresponding author. E-mail address: [email protected] (D. Hu).
0096-3003/$ - see front matter © 2011 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2011.03.055


Scholars have also proposed modified PSO algorithms for various applications. Some have introduced the concept of distance to maintain the diversity of particles [9], mutation as a way to train neural networks [10], and modification of the velocity formula to classify sample data [11]. PSO has also been applied to parametric optimization of PID controllers [12,13] and to fault diagnosis [14]. However, parametric optimization of flexible satellite control systems has been studied in only a few works [15]. Modern satellites are very complex because of the strong nonlinearity of their systems, especially the attitude control system with its flexible appendages such as solar wings, fluid fuel, and antennae. As a result the attitude control algorithm becomes very complex, because a large number of parameters need to be considered for optimization [16–19]. It is therefore important to improve PSO so that it can solve the parametric optimization problem of a nonlinear system. This paper introduces a cross operation method to improve the particle swarm optimizer. The first step of the cross operation is called the "local optima cross operation", whereby each particle's local optimum exchanges information amongst three particles chosen by three random numbers. The local optima cross operation keeps the particles tending toward the best position and accelerates their convergence speed. The second step is called the "swarm cross operation"; here the particles' dimensions are crossed to improve the particles' adaptability. This paper uses four benchmarks to test the improved PSO (IPSO) algorithm. The simulation results show that the IPSO is convenient to apply and quick in achieving the global optimum. Comparative analysis shows that, for different particle dimensions, the IPSO has better performance than the standard PSO (SPSO) and the genetic algorithm PSO (GAPSO).
Finally an application is demonstrated to show that the IPSO can solve complex problems such as the strongly nonlinear parametric optimization of the system controller for flexible satellite attitude control.

2. Problem and basic PSO

2.1. Flexible satellite system [17]

As scientific tasks increase, so does the complexity of satellite platforms. Modern satellites generally carry solar wings and fuel, which give rise to elastic vibrations that can lead to oscillation of the satellite attitude. Coupled with other flexible appendages, like cameras and antennae, the whole satellite system becomes strongly nonlinear and very complex. This paper considers three orthogonal flywheels, and uses the Euler–Lagrange equation, the satellite dynamic equation, the antenna dynamic equation, the solar panel vibration equation, and the antenna vibration equation to define the flexible attitude control system of a satellite. Reference [17] gives the kinematics (1) and dynamics (2):

\dot{q} = \tfrac{1}{2}(\tilde{q} + q_0 E)\omega_s, \qquad \dot{q}_0 = -\tfrac{1}{2}\, q^{T}\omega_s,    (1)

I\dot{\omega}_s + \omega_s \times (I\omega_s + J\omega_F) + C\dot{\omega}_{sA} + C_1\ddot{\eta}_1 + C_2\ddot{\eta}_2 + f_{um} = T_c + T_d,
I_A\dot{\omega}_{sA} + \omega_A \times I_A\omega_A + C^{T}\dot{\omega}_s + C_3\ddot{\eta}_2 = T,
\ddot{\eta}_1 + 2\xi_1\lambda_1\dot{\eta}_1 + \lambda_1^2\eta_1 + C_1^{T}\dot{\omega}_s = 0,
\ddot{\eta}_2 + 2\xi_2\lambda_2\dot{\eta}_2 + \lambda_2^2\eta_2 + C_2^{T}\dot{\omega}_s + C_3^{T}\dot{\omega}_{sA} = 0.    (2)
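As an illustrative sketch only (not the authors' simulation code), the quaternion kinematics (1) can be propagated numerically; the step size and initial values below are arbitrary example choices:

```python
import numpy as np

def quat_rates(q, q0, omega_s):
    # Eq. (1): q_dot = 1/2 (q_tilde + q0*E) w_s,  q0_dot = -1/2 q^T w_s,
    # where q_tilde is the cross-product (skew-symmetric) matrix of q.
    q_tilde = np.array([[0.0, -q[2], q[1]],
                        [q[2], 0.0, -q[0]],
                        [-q[1], q[0], 0.0]])
    q_dot = 0.5 * (q_tilde + q0 * np.eye(3)) @ omega_s
    q0_dot = -0.5 * q @ omega_s
    return q_dot, q0_dot

# Propagate one Euler step from the identity attitude and renormalize.
q, q0 = np.zeros(3), 1.0
omega_s = np.radians([0.001, 0.02, 0.03])   # example body rates (deg/s -> rad/s)
dt = 0.1
dq, dq0 = quat_rates(q, q0, omega_s)
q, q0 = q + dt * dq, q0 + dt * dq0
n = np.sqrt(q @ q + q0 ** 2)
q, q0 = q / n, q0 / n
```

The renormalization step compensates for the drift that a plain Euler integration introduces into the unit-norm constraint of the quaternion.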

Under speed mode, the control torque equation of the flywheel and its kinetic equations are given in formula (3):

T_c = -J\dot{\omega}_F, \qquad J\dot{\omega}_F + f(\omega_F) = K_m A - T_{Fd}, \qquad L\dot{A} + RA + (K_e + K_p K_x)\omega_F = K_p\omega_c.    (3)

The detailed parameters are defined in [17], which also gives the control law based on a sliding mode controller. The controller is designed in three steps. The first-level sub-sliding mode tracks the quaternion command q_d quickly and accurately, as in formula (4):

s_1 = (q - q_d) + B_1 \int_0^t [q(\tau) - q_d(\tau)]\, d\tau - [q(0) - q_d(0)] = 0.    (4)

It then obtains the desired value \tilde{\omega} as in formula (5):

\tilde{\omega} = -2(\tilde{q} + q_0 E)^{-1}[H_1 s_1 + B_1(q - q_d) - \dot{q}_d + \rho_1 R(s_1)].    (5)

The second-level sub-sliding mode tracks the command \tilde{\omega} accurately with a faster response, as in formula (6):

s_2 = e_2(t) + B_2 \int_0^t e_2(\tau)\, d\tau - e_2(0) = 0,    (6)

where e_2 = \omega - \tilde{\omega}; we then get \tilde{T}_c as in formula (7):

\tilde{T}_c = -I[H_2 s_2 - I^{-1}(\omega \times (I\omega_s + J\omega_F) + C\dot{\omega}_{sA}) - \dot{\tilde{\omega}} + B_2 e_2 + b + \rho_2 R(s_2)].    (7)


The third-level sub-sliding mode tracks the command \tilde{T}_c accurately and generates the control input instruction of the flywheel, \omega_c, as in formula (8):

s_3 = e_3(t) + B_3 \int_0^t e_3(\tau)\, d\tau - e_3(0) = 0,    (8)

where e_3 = J\dot{\omega}_F - \hat{T}_c. Then we get the desired value \omega_c as in formula (9):

_ F þ C1 ð Tb c  Te c Þ þ B3 e3 þ v þ q3 Rðs3 ÞÞ: wc ¼ ½L1 K m Kp1 ðH3 s3  L1 RJw

ð9Þ

For the derived control law, many parameters need to be adjusted, as shown in formula (10); these can be optimized by the improved PSO (IPSO).

B_1 = diag(b_1 > 0)_{3×3}, \quad H_1 = diag(h_1 > 0)_{2×2}, \quad \rho_1 > 0,
B_2 = diag(b_2 > 0)_{3×3}, \quad H_2 = diag(h_2 > 0)_{3×3}, \quad \rho_2 > 0,
B_3 = diag(b_3 > 0)_{3×3}, \quad H_3 = diag(h_3 > 0)_{3×3}, \quad \rho_3 > 0.    (10)

Now there are nine parameters that need to be optimized: b_1, h_1, \rho_1, b_2, h_2, \rho_2, b_3, h_3, \rho_3; \omega_s is the satellite inertial angular velocity vector. The parameters are detailed in [17].

2.2. Basic particle swarm optimizer

Standard PSO is an optimization algorithm. It updates the positions and speeds of a swarm of particles in the solution space to obtain the global optimal solution. The position represents a solution of the optimized function, and the speed determines the flight direction and distance. Fitness is the value of the optimized function at a particle's position; the local and global optimal solutions are evaluated through the fitness of every particle. A detailed introduction to PSO is given in reference [1]. Assume that, in a D-dimensional search space, m particles form a swarm. For the ith particle, the position is X(x_1, x_2, ..., x_D) and the speed is V(v_1, v_2, ..., v_D). Each particle updates itself by keeping track of two "best locations". One is the best location found by the particle itself so far, the local optimal solution (lPbest); the other is the best location found among all particles so far, the global optimal solution (gPbest). gPbest is the best of all lPbest values. For the kth iteration, each particle changes according to Eqs. (11) and (12):

v_{ij}(k+1) = w\, v_{ij}(k) + c_1\, rand()\, (lPbest_{ij} - x_{ij}(k)) + c_2\, rand()\, (gPbest_j - x_{ij}(k)),    (11)

x_{ij}(k+1) = x_{ij}(k) + v_{ij}(k+1).    (12)
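As a minimal sketch (variable names are illustrative, not from the paper), one SPSO update per Eqs. (11) and (12) looks like:

```python
import random

def spso_step(x, v, lpbest, gpbest, w=0.7298, c1=1.4962, c2=1.4962):
    # Eq. (11): new velocity = inertia term + cognitive pull + social pull.
    # Eq. (12): new position = old position + new velocity.
    for j in range(len(x)):
        v[j] = (w * v[j]
                + c1 * random.random() * (lpbest[j] - x[j])
                + c2 * random.random() * (gpbest[j] - x[j]))
        x[j] += v[j]
    return x, v

x, v = spso_step([1.0, 2.0], [0.0, 0.0], [0.5, 0.5], [0.0, 0.0])
```

With zero initial velocity and both best positions below the current one, the update necessarily pulls the particle downward.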

In Eqs. (11) and (12), i = 1, 2, ..., m indexes the m particles and j = 1, 2, ..., D indexes the dimensions. v_ij(k) is the jth component of the flight speed vector at the kth iteration for particle i, and x_ij(k) is the jth component of the position vector at the kth iteration for particle i. lPbest_ij is the jth component of the local best location (lPbest) of particle i, and gPbest_j is the jth component of the global best location. c_1, c_2 are learning factors, and rand() is a random function producing random numbers in [0, 1].

3. Improved particle swarm optimizer

The improved particle swarm optimizer is composed of two cross operations: the local optima cross operation and the swarm cross operation, for maintaining the particles' diversity and accelerating the convergence speed, respectively.

3.1. Local optima cross operation

The local optima cross operation crosses the optima between particles and adjusts a particle's flight direction. It is defined as follows. Whether a particle's local optimum is replaced or not is decided by a random possibility, as given in formula (13):

P = 0.1 + 0.4\, exp(-10\, Iter / MaxIter),    (13)

where P is a random number called the decision possibility, Iter is the current iteration, and MaxIter is the maximum iteration. P shows that the learning possibility tends to 0.1 as the iteration approaches the maximum: when the particles reach the later period they may be near the global optimum, so the cross operation needs to slow down or stop, and the possibility becomes smaller. After obtaining the decision possibility, it must be decided how to cross the local optima. To avoid impoverishing the particles, the cross operation is divided into two steps. The first step produces a random coefficient, a random number in [0, 1]. The second step decides whether to cross or to diversify the particles. If the decision possibility is bigger than the random coefficient, the current particle's local optimum is replaced by the best local optimum among three randomly chosen particles, and the particle's position and velocity are replaced as well. If the decision possibility is smaller than the random coefficient, the current particle's local optimum is replaced by the global best optimum multiplied by a diversity parameter. The cross operation is shown as condition (14):

if P > RandomCoef
    MinP = min(localP_1, localP_2, localP_3)
    if localP > MinP
        localP = MinP
        X = X_MinP
        V = V_MinP
    end
else
    X = X_gPbest * (1 + rand())
    V = V_gPbest * (1 + rand())
end    (14)

where P is the decision possibility; RandomCoef is a random coefficient in [0, 1]; localP_i, i = 1, 2, 3 are local optima randomly selected from the whole particle swarm; X_MinP, V_MinP are the position and velocity of the particle with the minimum local optimum among the three; X, V are the current particle's position and velocity; gPbest is the global optimum; and "1 + rand()" is a diversity coefficient.
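A sketch of the local optima cross operation, under an assumed particle representation (dicts with position 'x', velocity 'v', and personal best 'lbest_x'/'lbest_f'; these names are hypothetical, not from the paper):

```python
import math
import random

def decision_possibility(it, max_it):
    # Eq. (13): P decays from about 0.5 toward 0.1 over the run.
    return 0.1 + 0.4 * math.exp(-10.0 * it / max_it)

def local_optima_cross(swarm, gbest, it, max_it):
    # Condition (14), minimization: with possibility P adopt the best local
    # optimum of three random particles; otherwise diversify around gbest.
    P = decision_possibility(it, max_it)
    for p in swarm:
        if P > random.random():
            best = min(random.sample(swarm, 3), key=lambda s: s['lbest_f'])
            if p['lbest_f'] > best['lbest_f']:
                p['lbest_f'] = best['lbest_f']
                p['lbest_x'] = best['lbest_x'][:]
                p['x'] = best['x'][:]
                p['v'] = best['v'][:]
        else:
            coef = 1.0 + random.random()      # diversity coefficient 1 + rand()
            p['x'] = [xi * coef for xi in gbest['x']]
            p['v'] = [vi * coef for vi in gbest['v']]
    return swarm
```

Note that adoption only copies personal bests that already exist in the swarm, so the best local optimum found so far is never lost by this operation.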

3.2. Swarm cross operation

The swarm cross operation is divided into two steps: the first step produces a new swarm according to the cross operation, and the second step keeps the better half of the two swarms' particles. The cross operation exchanges the dimensions of particles: the swarm is divided into two halves, a serial number is produced according to the particle dimension, and the two halves' particle dimensions are exchanged according to the serial numbers, as defined in pseudo code (15):

for i = 1 : N/2
    RandNumber = max(floor(rand() * D), 1)
    for j = 1 : RandNumber
        exchange(Part1, Part2, j)
    end
end    (15)

where RandNumber is a random number determined by the dimension, D is the particle dimension, Part1 and Part2 are the two halves of the swarm, N is the number of swarm particles, and exchange is a function that exchanges the jth dimension of Part1 and Part2. After the cross operation, the particles' fitness is recalculated and a new swarm is produced. The best particles must then be selected to form the next-generation swarm. In the swarm selection operation the two swarms of particles are mixed and arranged according to fitness; the half of the particles with better fitness are chosen to form the new particle swarm, and the other half are discarded. The new generation of particles then updates positions and velocities according to formulas (11) and (12).
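The swarm cross operation and the subsequent selection can be sketched as follows (the list-of-lists representation and the `fitness` callback are assumed interfaces, not the paper's code):

```python
import random

def swarm_cross(swarm, fitness):
    # Pseudo code (15): pair the two halves of the swarm and swap a random
    # number of leading dimensions; then mix parents and offspring and keep
    # the better (lower-fitness) half, as in the selection step.
    n, d = len(swarm), len(swarm[0])
    half = n // 2
    offspring = [p[:] for p in swarm]
    for i in range(half):
        rand_number = max(int(random.random() * d), 1)
        for j in range(rand_number):
            offspring[i][j], offspring[half + i][j] = \
                offspring[half + i][j], offspring[i][j]
    mixed = swarm + offspring
    mixed.sort(key=fitness)
    return mixed[:n]
```

Because the parents stay in the mixed pool, the selection step can never make the best retained particle worse than the best parent.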

3.3. Steps of improved PSO

In this section, the basic steps of the improved PSO are described as follows:

Step 1. Initialize a swarm of N particles.
Step 2. Compute each particle's fitness and initialize each particle's local optimal solution and the global optimal solution.


Step 3. Carry out the local optima cross operation according to the decision possibility to form the offspring. Compute the offspring particles' fitness and update the local optima and global optimum.
Step 4. Run the swarm cross operation on the parents to form new offspring. Mix the two particle swarms and choose the best N particles for the next generation according to their fitness.
Step 5. Compute the inertia weight, and update the velocities and positions according to formulas (11) and (12). Then update the local optima and global optimum.
Step 6. Compute the fitness of the new generation of swarm particles. If the stopping criterion is met, exit; else return to Step 3.

4. Experiment and analysis

4.1. Test functions

According to their properties, four functions are divided into two groups: the first group describes simple multi-modal problems and the second un-rotated multi-modal problems. Every function uses the same parameters when testing PSO, GAPSO, and IPSO. The Sphere function (16) and Rosenbrock function (17) describe simple multi-modal problems:

f_1(x) = \sum_{i=1}^{D} x_i^2,    (16)

f_2(x) = \sum_{i=1}^{D-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2].    (17)

The Sphere function is very simple, and its optimal solution is 0. The Rosenbrock function is a classical complex optimization problem: it has a narrow valley from the perceived local optima to the global optimum. Because the function provides little information to the optimization algorithm, the search direction is hard to decide and the global optimum has little chance of being found. The global optimal solution of Rosenbrock is at x_i = 1, where the function minimum is 0. The Ackley function (18) and Rastrigin function (19) describe the un-rotated multi-modal problems:

f_3(x) = -20\, exp\left(-0.2 \sqrt{\tfrac{1}{D} \sum_{i=1}^{D} x_i^2}\right) - exp\left(\tfrac{1}{D} \sum_{i=1}^{D} cos(2\pi x_i)\right) + 20 + e,    (18)

f_4(x) = \sum_{i=1}^{D} (x_i^2 - 10\, cos(2\pi x_i) + 10).    (19)

The Ackley function has one narrow global optimum basin, and its local optimal solutions are symmetrically distributed around the search space; the minimum of this function is 0. The Rastrigin function has a large number of towering local optimal solutions. If the differences between particles are small, the algorithm easily gets trapped in a local optimal solution.

4.2. Results comparison

The four functions above are chosen to simulate and compare the standard PSO (SPSO), genetic algorithm PSO (GAPSO), and improved PSO (IPSO). The parameters are the same for the four functions: learning factors c_1 = c_2 = 1.4962, inertia weight w = 0.7298, number of particles N = 40, and maximum iterations MaxDT = 5000. IPSO uses a linear inertia weight in the range [0.4, 1.5]. Different search dimensions are computed for the four functions; each run has 5000 generations. The global optima are evaluated for comparison, as shown in Tables 1–4.

Table 1 shows the global optimal results of the Sphere function with different dimensions. The SPSO has a slow convergence speed and may even get trapped in a local optimum, whereas the others achieve the global optimum (close to zero); the IPSO is better than the GAPSO in the accuracy of the optimal result. Table 2 shows the global optima of the Rosenbrock function with different dimensions. The data show that the SPSO in higher dimensions is again trapped in a local optimum. The GAPSO is also trapped in a local optimum in higher dimensions, and its convergence speed is slow. The IPSO again shows better performance and achieves the global optimum (by using the cross operation), especially in the 6- and 10-dimensional cases. Table 3 shows the global optima of the Ackley function: the SPSO is once again trapped in a local optimum. The GAPSO and IPSO have the same performance in lower dimensions, but for 40 dimensions the IPSO is much better and more stable than the GAPSO. Table 4 shows the global optima of the Rastrigin function. For all dimensions the IPSO is stable and the best, with a global optimum of 0; the IPSO is thus very well suited to problems like the Rastrigin function. The SPSO and GAPSO perform differently in different dimensions and are easily trapped in local optima.
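The four benchmarks (16)–(19) can be written compactly; these are the standard textbook forms and should match the equations above:

```python
import math

def sphere(x):                      # Eq. (16), minimum 0 at the origin
    return sum(xi * xi for xi in x)

def rosenbrock(x):                  # Eq. (17), minimum 0 at (1, ..., 1)
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def ackley(x):                      # Eq. (18), minimum 0 at the origin
    d = len(x)
    return (-20.0 * math.exp(-0.2 * math.sqrt(sum(xi * xi for xi in x) / d))
            - math.exp(sum(math.cos(2.0 * math.pi * xi) for xi in x) / d)
            + 20.0 + math.e)

def rastrigin(x):                   # Eq. (19), minimum 0 at the origin
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)
```

Each function takes a position vector of arbitrary dimension D, matching the dimension sweeps (6, 10, 20, 40) reported in Tables 1–4.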

Table 1
Comparison of global optima of the Sphere function with different dimensions.

Algorithm   Dimension   Initial search range   Optimal result
SPSO        6           [-5.12, 5.12]          0
GAPSO       6           [-5.12, 5.12]          0
IPSO        6           [-5.12, 5.12]          4.02334e-300
SPSO        10          [-5.12, 5.12]          2.146e-11
GAPSO       10          [-5.12, 5.12]          4.1879e-104
IPSO        10          [-5.12, 5.12]          1.460e-233
SPSO        20          [-5.12, 5.12]          3.6294
GAPSO       20          [-5.12, 5.12]          1.2032e-41
IPSO        20          [-5.12, 5.12]          4.9286e-129
SPSO        40          [-5.12, 5.12]          0.4637
GAPSO       40          [-5.12, 5.12]          3.8257e-10
IPSO        40          [-5.12, 5.12]          6.1524e-45

Table 2
Comparison of global optima of the Rosenbrock function with different dimensions.

Algorithm   Dimension   Initial search range   Optimal result
SPSO        6           [-5.12, 5.12]          0.317915
GAPSO       6           [-5.12, 5.12]          0.397439
IPSO        6           [-5.12, 5.12]          1.80321e-28
SPSO        10          [-5.12, 5.12]          8.1919
GAPSO       10          [-5.12, 5.12]          1.3277e-4
IPSO        10          [-5.12, 5.12]          1.7811e-8
SPSO        20          [-5.12, 5.12]          346.6204
GAPSO       20          [-5.12, 5.12]          15.2848
IPSO        20          [-5.12, 5.12]          9.8086
SPSO        40          [-5.12, 5.12]          355.1
GAPSO       40          [-5.12, 5.12]          36.9447
IPSO        40          [-5.12, 5.12]          34.515

Table 3
Comparison of global optima of the Ackley function with different dimensions.

Algorithm   Dimension   Initial search range   Optimal result
SPSO        6           [-1, 1]                0.13251
GAPSO       6           [-1, 1]                4.227e-15
IPSO        6           [-1, 1]                8.881e-16
SPSO        10          [-1, 1]                2.013
GAPSO       10          [-1, 1]                4.4409e-15
IPSO        10          [-1, 1]                8.8818e-16
SPSO        20          [-1, 1]                5.2141
GAPSO       20          [-1, 1]                2.8140
IPSO        20          [-1, 1]                4.4409e-15
SPSO        40          [-1, 1]                5.1773
GAPSO       40          [-1, 1]                6.2251
IPSO        40          [-1, 1]                7.9936e-15

Table 4
Comparison of global optima of the Rastrigin function with different dimensions.

Algorithm   Dimension   Initial search range   Optimal result
SPSO        6           [-5.12, 5.12]          4.71610
GAPSO       6           [-5.12, 5.12]          9.21330
IPSO        6           [-5.12, 5.12]          0
SPSO        10          [-5.12, 5.12]          18.9042
GAPSO       10          [-5.12, 5.12]          8.9546
IPSO        10          [-5.12, 5.12]          0
SPSO        20          [-5.12, 5.12]          49.9226
GAPSO       20          [-5.12, 5.12]          36.8134
IPSO        20          [-5.12, 5.12]          0
SPSO        40          [-5.12, 5.12]          185.6968
GAPSO       40          [-5.12, 5.12]          206.95
IPSO        40          [-5.12, 5.12]          0


From the simulation results for the four test functions with different dimensions, the IPSO in general obtains better results than SPSO and GAPSO. In the particular cases of the Ackley and Rosenbrock functions, SPSO and GAPSO tend to get trapped in local optima, but the IPSO does not; the IPSO is also very well suited to the Rastrigin function. From the analysis below, it can also be seen that IPSO needs fewer iterations to converge than the other two algorithms, so its convergence time is shorter. IPSO therefore has the best final result of the three.

4.3. Characteristics comparison

The best fitness is chosen for comparison once 5000 generations of each function with 10 dimensions have been completed, giving 5000 data points to compare. For effective analysis, parts of the data are chosen for comparison. For the simple Sphere test function in Fig. 1, all three algorithms exhibit excellent characteristics: IPSO converges quickly to the optimal solution, the global optimal solution of GAPSO is not stable, and SPSO converges to the optimal solution gradually. Referring to Fig. 2, for the Rosenbrock function SPSO gets trapped in the local optimal solution; GAPSO can jump out of the local optimal solution, while IPSO reaches the global optimum quickly, mainly owing to the two-step cross operation. Hence, overall, IPSO converges faster than the other two algorithms. Fig. 3 shows the fitness trends of the three algorithms on the Ackley test function. As seen in the figure, IPSO achieves the optimal solution more quickly than SPSO and GAPSO: IPSO converges to the optimum by about the 40th generation, while GAPSO and SPSO take more than 60 generations to begin convergence. Once again the result of IPSO is better than the other two algorithms. Fig. 4 shows the fitness trends of the three algorithms on the Rastrigin test function.
As seen in the figure, SPSO may get trapped in the local optimal solution. GAPSO may have the same problem, and in addition its local optimal solution is unstable, and this instability

Fig. 1. Sphere function fitness trend.

Fig. 2. Rosenbrock function fitness trend.


Fig. 3. Ackley function fitness trend.

Fig. 4. Rastrigin function fitness trend.

leads to oscillation in the global optimal solution as well. For IPSO, the mechanisms of the local optima cross operation and the swarm cross operation guarantee that it can jump out of local optimal solutions and gradually find the optimal solution, without the oscillation seen in GAPSO. As stated previously, IPSO has more advantages than SPSO and GAPSO. GAPSO fully utilizes the crossover operation of the genetic algorithm to avoid getting trapped in local optima, but it cannot change the situation radically and may still get trapped in a local optimal solution. IPSO not only retains the strengths of GAPSO, but also increases the particles' diversity through the crossover of each particle's local optimum, avoids trapping in local optima through the swarm cross operation, and finally converges to the global optimal solution. IPSO therefore has the fastest convergence speed and much more stable performance than SPSO and GAPSO.

5. IPSO for flexible satellite controller parameter optimization

From the analysis of the flexible satellite control problem, many parameters need to be adjusted; in this paper these parameters are defined as the input variables of IPSO. In general control system design, the integrated error, integrated absolute error, integrated square error, etc., are used as evaluation functions. For the satellite control system, the fitness of each particle is given by formula (20):

F = \frac{1}{\int_0^\infty t\, |e(t)|\, dt},    (20)

where F is the fitness, t is the control system running time, and e(t) is the error of the control result. The satellite inertia matrix is given by formula (21):

[5247.97   230.52   115.30
  230.52  5110.05    41.11
  115.30    41.11  4142.48]    (21)

Fig. 5. Comparison of the best fitness between IPSO and SPSO.

Fig. 6. Attitude angle of satellite by IPSO optimization.

Fig. 7. Attitude rate of satellite by IPSO optimization.

The inertia of each flywheel is taken as 0.119366, the initial attitude Euler angle is [0.5°, 0.5°, 0.5°], and the attitude rate is [0.001°/s, 0.02°/s, 0.03°/s]. The disturbance torque is T_d = 0.005 + 0.004 sin(t) (N·m). For the remaining parameters, refer to [17]. The fitness of SPSO and IPSO is then compared, as shown in Fig. 5. The comparison shows that the SPSO gets trapped in a local optimum while the IPSO quickly achieves the global result: the fitness curve shows that the IPSO jumps out at the 18th iteration, while the SPSO remains trapped in the local optimum until the cycle ends. The fitness curve of SPSO shows that the optimal result reached at the end of the 100th iteration remains unchanged up to the end of the run at 1000 iterations. Compared with IPSO's global optimal result achieved at the 18th iteration, this indicates that SPSO is far inferior in efficiency to IPSO.
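As a numerical sketch, the fitness (20) can be approximated from sampled attitude errors with a rectangle rule (the sampling scheme and the small guard term are assumptions, not from the paper):

```python
def itae_fitness(errors, dt):
    # Eq. (20): F = 1 / integral of t * |e(t)| dt, so smaller and
    # shorter-lived errors yield a larger fitness value.
    itae = sum((k * dt) * abs(e) * dt for k, e in enumerate(errors))
    return 1.0 / (itae + 1e-12)     # guard against a zero-error response

good = itae_fitness([0.5, 0.1, 0.01, 0.0], dt=0.1)
bad = itae_fitness([0.5, 0.4, 0.3, 0.2], dt=0.1)
```

Here `good` exceeds `bad`, matching the intent that fast, small error responses receive higher fitness.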


Now, using the global result of Fig. 5, an attitude control solution is plotted for both IPSO and SPSO, as shown in Figs. 6 and 7. Figs. 6 and 7 are the attitude control curves of the flexible satellite system for the first 400 s of operation. These curves indicate angular variations (Fig. 6) and rate variations (Fig. 7) along the Y-axis direction only. The figures illustrate that the IPSO-optimized parameters give a much more stable response than SPSO. Therefore the IPSO is effective for the parametric optimization of the flexible satellite system.

6. Conclusion

For the parametric optimization of a flexible satellite attitude controller, this paper proposes an improved PSO. The IPSO adopts the local optima cross operation and the swarm cross operation for the particles' diversity and for acceleration of the convergence speed, respectively. Four test functions are used to validate the algorithm; the test results show that for various particle dimensions the IPSO has superior performance to GAPSO and SPSO. When the IPSO is applied to the parametric optimization of a flexible satellite attitude control system, the results validate its effectiveness.

Acknowledgment

The authors thank the anonymous reviewers for providing valuable comments to improve this paper.

References

[1] R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of the 6th International Symposium on Micro Machine and Human Science, Nagoya, Japan, 1995, pp. 39–43.
[2] P.J. Angeline, Evolutionary optimization versus particle swarm optimization: philosophy and performance difference, in: Proceedings of the 7th Annual Conference on Evolutionary Programming, Germany, 1998, pp. 601–610.
[3] Nie Ru, Yue Jianhua, A GA and particle swarm optimization based hybrid algorithm, in: IEEE Congress on Evolutionary Computation (CEC 2008), IEEE, Hong Kong, 2008, pp. 1047–1050.
[4] M.G.H. Omran, A.P. Engelbrecht, A. Salman, Differential evolution based particle swarm optimization, in: Proceedings of the 2007 IEEE Swarm Intelligence Symposium, 2007, pp. 1–8.
[5] Wang Yong, Pang Xing, A dynamic multi-point detecting PSO, in: International Conference on Logistics Systems and Intelligent Management 2010, IEEE, Harbin, 2010, pp. 474–479.
[6] J. Kennedy, R.C. Eberhart, A discrete binary version of the particle swarm algorithm, in: Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics: Computational Cybernetics and Simulation, vol. 5, October 12–15, 1997, pp. 4104–4108.
[7] Yu Wang, Bin Li, Thomas Weise, et al., Self-adaptive learning based particle swarm optimization, Information Sciences, in press. doi:10.1016/j.ins.2010.07.013.
[8] Jing Jie, Jianchao Zeng, Chongzhao Han, et al., Knowledge-based cooperative particle swarm optimization, Applied Mathematics and Computation 205 (2008) 861–873.
[9] Yuxin Zhao, Wei Zu, Haitao Zeng, A modified particle swarm optimization via particle visual modeling analysis, Computers and Mathematics with Applications 57 (2009) 2022–2029.
[10] Wei Gao, Hai Zhao, Jiuqiang Xu, et al., A dynamic mutation PSO algorithm and its application in the neural networks, in: First International Conference on Intelligent Networks and Intelligent Systems, IEEE, 2008, pp. 103–106.
[11] Qin Shen, Zhen Mei, Baoxian Ye, Simultaneous genes and training samples selection by modified particle swarm optimization for gene expression data classification, Computers in Biology and Medicine 39 (2009) 646–649.
[12] Weider Chang, PID control for chaotic synchronization using particle swarm optimization, Chaos, Solitons and Fractals 39 (2009) 910–917.
[13] Weider Chang, Shunpeng Shih, PID controller design of nonlinear systems using an improved particle swarm optimization approach, Communications in Nonlinear Science and Numerical Simulation (2010), in press. doi:10.1016/j.cnsns.2010.01.005.
[14] Shenwei Fei, Diagnostic study on arrhythmia cordis based on particle swarm optimization-based support vector machine, Expert Systems with Applications (2010), in press. doi:10.1016/j.eswa.2010.02.126.
[15] Geng Lei, Optimization design for parameters of satellite attitude control system with flexible solar array (in Chinese), Master Thesis, Harbin Institute of Technology, 2009.
[16] Liu Yingying, Zhou Jun, Fuzzy attitude control for flexible satellite during orbit maneuver, in: Proceedings of the 2009 IEEE International Conference on Mechatronics and Automation, August 9–12, Changchun, China, 2009, pp. 1239–1243.
[17] Wang Zhi, Lang Baohua, Compound control system design based on backstepping techniques and neural network sliding mode for flexible satellite, in: International Conference on Computer Design and Application (ICCDA 2010), vol. 2, 2010, pp. 418–422.
[18] Ye Jiang, Qinglei Hu, Guangfu Ma, Adaptive backstepping fault-tolerant control for flexible spacecraft with unknown bounded disturbances and actuator failures, ISA Transactions 49 (2010) 57–69.
[19] Garth Watney, A model-based architecture for a small flexible fault protection system, AIAA Infotech@Aerospace Conference, 6–9 April 2009, Seattle, Washington, AIAA-2009-2028, pp. 1–7.