Constrained self-adaptive differential evolution based design of robust optimal fixed structure controller




Engineering Applications of Artificial Intelligence 24 (2011) 1084–1093



S. Sivananaithaperumal a,*, S. Miruna Joe Amali b, S. Baskar b, P.N. Suganthan c

a Electrical and Electronics Engineering Department, Dr. Sivanthi Aditanar College of Engineering, Thiruchendur, Tamil Nadu, India
b Electrical and Electronics Engineering Department, Thiagarajar College of Engineering, Madurai, Tamil Nadu, India
c School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore

* Corresponding author. E-mail addresses: [email protected] (S. Sivananaithaperumal), [email protected] (S.M.J. Amali), [email protected] (S. Baskar), [email protected] (P.N. Suganthan).

doi:10.1016/j.engappai.2011.05.003

Article info

Article history: Received 17 March 2011; Accepted 11 May 2011; Available online 14 June 2011

Keywords: Differential evolution; Self-adaptive Differential Evolution (SaDE); Fixed-order control; H∞ performance; Robust control; Structured synthesis

Abstract

This paper presents a constrained Self-adaptive Differential Evolution (SaDE) algorithm for the design of robust optimal fixed structure controllers for systems with uncertainties and disturbance. Almost all real-world optimization problems have constraints that must be satisfied along with the best optimal solution. In evolutionary algorithms (EAs) the presence of constraints reduces the feasible region and complicates the search process; therefore, a suitable constraint handling method must also be employed. In the SaDE algorithm, four mutation strategies and the control parameter CR are self-adapted, and the Self-adaptive Penalty (SP) method is introduced into SaDE for constraint handling. The performance of the SaDE algorithm is demonstrated on the design of robust optimal fixed structure controllers for three systems, namely a linearized magnetic levitation system, the F-8 aircraft linearized model and a SISO plant. For comparison, the reported results of a constrained PSO algorithm and of five DE algorithms with different strategies and parameter values are taken into account. Statistical performance over 20 independent runs is used to compare the algorithms. From the obtained results, it is observed that the SaDE algorithm is able to self-adapt the mutation strategy and the crossover rate and hence performs better than the other DE variants and the constrained PSO algorithm. The better performance of SaDE is achieved by sustained maintenance of diversity throughout the evolutionary process, which consistently produces better individuals. This also helps the algorithm escape from local optima, thereby avoiding premature convergence.

1. Introduction

In practical control engineering, it is crucial to obtain reduced-order/fixed-structure controllers due to limitations of the available computer resources and the necessity of on-site controller tuning. Most real systems are vulnerable to external disturbances, measurement noise and model uncertainties. Robust controller designs are quite useful in dealing with systems under parameter perturbation, model uncertainties and disturbances (Doyle et al., 1990; Zhou et al., 1996). There are two approaches to robust optimal controller design: one is the structure-specified controller and the other is the output-feedback controller. However, the conventional output-feedback design of optimal control is very complicated and not easily implemented in practical industrial applications, as the order of the controller would not be lower than that of the plant.


To overcome this difficulty, the structure-specified approach solves the robust optimal control problem from a suboptimal perspective (Chen et al., 1995; Chen and Cheng, 1998; Kitsios et al., 2001; Krohling and Rey, 2001; Ho and Lin, 2003; Ho et al., 2005). For controllers with a very special structure, such as Proportional-Integral-Derivative (PID) or lead-lag compensators, various design methods are now available to control engineers (Astrom and Hagglund, 1995; Dong Hwa, 2011). In particular, in the last few years various innovative techniques have been proposed for designing controllers satisfying not only stability but also H∞ specifications (Blanchini et al., 2004; Ho, 2003; Ho et al., 2004). However, it is difficult to extend these methods for direct application to fixed-structure controller design problems, as they strongly depend on the specific structure. As far as conventional H∞ controller design with fixed order/fixed structure is concerned, most approaches utilize linear matrix inequality (LMI) formulations. Apkarian et al. (2003) posed the design problem as an optimization program with a linear cost subject to LMI constraints along with non-linear equality constraints representing a matrix inversion condition.


Further, Saeki (2006a, b) kept the controller variables directly in the LMI to cope with the fixed-structure constraints. However, it is difficult for any of these methods to treat both the controller structure and multiple specifications simultaneously. Contrary to the deterministic approaches discussed above, a probabilistic method based on randomized algorithms was proposed by Calafiore et al. (2000). In this method, a full-order controller is randomly generated using a finite-dimensional parameterization, and then model order reduction is applied. Later, Fujisaki et al. (2008) proposed a mixed probabilistic/deterministic approach aimed at minimizing the computation time.

During the past decades, great attention has been paid to optimization methods for controller design. In modern control theory, a design problem is formulated as an optimization problem with performance measures being the norms of some closed-loop properties. The controller design is formulated as a non-linear minimization problem subject to non-linear constraints for which no closed-form solution can be obtained by conventional optimization techniques. Evolutionary algorithms (EAs) are global, parallel search techniques that emulate natural evolutionary processes. Because EAs simultaneously evaluate many points in the search space, they are more likely to converge to an optimal solution. Besides, no assumption of a differentiable search space is necessary. Due to their high potential for global optimization, EAs have received great attention in the area of automatic control.

Many researchers have employed evolutionary algorithms in mixed H2/H∞ optimal design strategies for optimal robust PID controllers. Chen et al. (1995) proposed a simple genetic algorithm (GA) based mixed H2/H∞ controller design for SISO systems by minimizing the Integral Square Error (ISE, an H2 norm) subject to robust stability and disturbance attenuation constraints. Chen and Cheng (1998) adopted a GA to design structure-specified H∞ optimal controllers for practical applications by minimizing a balanced robust performance criterion combining the robust stability and disturbance attenuation performance measures. However, their procedure requires prior domain knowledge, namely the Routh-Hurwitz criterion, to reduce the domain size of each design parameter. Krohling and Rey (2001) suggested the design of a GA based optimal disturbance rejection PID controller for a servo motor system. Ho and Lin (2003) applied the Orthogonal Simulated Annealing algorithm (OSA) to design optimal robust PID controllers for MIMO systems by minimizing a combined ISE and balanced robust performance criterion. Ho et al. (2005) proposed an Intelligent Genetic Algorithm (IGA) for the design of mixed H2/H∞ optimal robust PID controllers. As an extension of this line of research, Maruta et al. (2009) proposed a constrained PSO algorithm for the design of fixed structure robust controllers satisfying multiple H∞ norm specifications, namely robust stability and load disturbance attenuation criteria.

Recently, differential evolution (DE) (Storn and Price, 1997) has been shown to be a simple yet efficient evolutionary algorithm for many real-parameter optimization problems in real-world applications. Its performance, however, is still quite dependent on the setting of control parameters, such as the mutation factor and the crossover probability, and on the trial vector generation strategy. Various methods to adapt the parameters of DE are available (Tvrdík, 2009). Qin et al. (2009) proposed the Self-adaptive Differential Evolution (SaDE) algorithm, which adapts the trial vector generation strategy and the crossover probability as per the needs of the application, avoiding the time-consuming trial-and-error method. In this paper, the SaDE algorithm is applied to the design of fixed structure robust controllers, considering the minimization of the maximum real part of the closed-loop poles subject to robust stability and load disturbance attenuation criteria.


The remainder of this paper is organized as follows. Section 2 describes the fixed structure robust controller problem. Section 3 gives the basic DE algorithm. Section 4 presents the SaDE algorithm (Qin et al., 2009) and Self-adaptive Penalty based constraint handling method (Tessema and Yen, 2006). In Section 5, details of the three test problems are given. Section 6 presents the implementation details and the simulation results of SaDE based robust optimal fixed structure controller design. Finally, Section 7 concludes the paper.

2. Robust controller

Let us consider a control system with ni inputs and no outputs, as shown in Fig. 1, where P(s) is the nominal plant transfer function, ΔP(s) is the plant perturbation transfer function, K(s) is the controller transfer function, r(t) is the reference input, u(t) is the control input, e(t) is the tracking error, d(t) is the external disturbance, and y(t) is the output of the system. Without loss of generality, the plant perturbation ΔP(s) is assumed to be bounded by a known stable function matrix W1(s):

\bar{\sigma}(\Delta P(j\omega)) \le \bar{\sigma}(W_1(j\omega)), \quad \forall \omega \in [0, \infty)    (1)

where \bar{\sigma}(A) denotes the maximum singular value of a matrix A. If a controller K(s) is designed such that the nominal closed-loop system (ΔP(s) = 0 and d(t) = 0) is asymptotically stable, the robust stability performance satisfies the inequality

f_1 = \| W_1(s) T(s) \|_\infty < 1    (2)

and the disturbance attenuation performance satisfies the inequality

f_2 = \| W_2(s) S(s) \|_\infty < 1    (3)

then the closed-loop system is also asymptotically stable with ΔP(s) and d(t), where W2(s) is a stable weighting function matrix. S(s) and T(s) = I - S(s) are the sensitivity and complementary sensitivity functions of the system, respectively:

S(s) = (I + P(s) K(s))^{-1}    (4)

T(s) = P(s) K(s) (I + P(s) K(s))^{-1}    (5)

Robust stability and disturbance attenuation alone are often insufficient for advancing the system performance. Therefore, minimization of the real part of the closed-loop poles, minimization of a balanced criterion, and minimization of a closed-loop norm are considered. Let a_i denote the ith pole of the closed-loop system T(s) and a_max be the pole whose real part is greater than that of any other pole, i.e. Re[a_max] = max_i (Re a_i); the minimization of Re[a_max] is also considered. In this paper, the two objective functions considered for the design of the robust optimal controller are

min J = Re[a_max]    (6)

min T = \| T(s) \|_\infty    (7)

Fig. 1. Control system with plant perturbation and external disturbance.



These objectives are minimized subject to the multiple constraints given by expressions (2) and (3) simultaneously.
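As a concrete illustration of how these criteria could be evaluated numerically for a SISO loop, the following sketch approximates the H∞ norms of Eqs. (2) and (3) by a dense frequency grid and the objective of Eq. (6) from the closed-loop characteristic polynomial. It is not the authors' MATLAB implementation: the (numerator, denominator) representation, the helper names and the frequency grid are illustrative assumptions.

```python
import numpy as np

def series(g1, g2):
    """Series connection G1(s)G2(s) of two (num, den) polynomial pairs."""
    return np.polymul(g1[0], g2[0]), np.polymul(g1[1], g2[1])

def hinf_norm(g, w=np.logspace(-3, 5, 4000)):
    """Gridded approximation of ||G||_inf used for Eqs. (2) and (3)."""
    jw = 1j * w
    return np.max(np.abs(np.polyval(g[0], jw) / np.polyval(g[1], jw)))

def design_criteria(P, K, W1, W2):
    """Return (f1, f2, Re[a_max]) for the unity-feedback loop of Fig. 1."""
    L_num, L_den = series(P, K)               # open loop P(s)K(s)
    char_poly = np.polyadd(L_den, L_num)      # closed-loop characteristic polynomial
    S = (L_den, char_poly)                    # sensitivity S(s), Eq. (4)
    T = (L_num, char_poly)                    # complementary sensitivity T(s), Eq. (5)
    f1 = hinf_norm(series(W1, T))             # robust stability measure, Eq. (2)
    f2 = hinf_norm(series(W2, S))             # disturbance attenuation measure, Eq. (3)
    re_amax = np.roots(char_poly).real.max()  # objective of Eq. (6)
    return f1, f2, re_amax
```

In such a sketch the constraint check simply becomes f1 < 1 and f2 < 1 for each candidate controller, while f1, f2 and Re[a_max] are fed to the constrained optimizer.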

3. Differential evolution

Differential evolution (DE) is an efficient and powerful population-based stochastic search technique that has been used for global optimization of many real-world problems (Storn and Price, 1997). The population size (NP) is a user-specified parameter. The initial population X_{i,G} = {x^1_{i,G}, ..., x^D_{i,G}}, where i = 1, ..., NP and D is the problem dimension, is chosen randomly and should cover the entire parameter space constrained by the prescribed minimum and maximum parameter bounds.

3.1. Mutation operation

At each generation G, after initialization, DE employs mutation to produce one mutant vector V_{i,G} for each target vector X_{i,G}. The mutant vector can be generated using one of the various mutation strategies available. The basic mutation operation is

v_i = x_{r1} + F (x_{r2} - x_{r3})    (8)

where r1, r2 and r3 are three random indexes in {1, ..., NP} excluding i, and F is a scaling factor.

3.2. Crossover operation

Crossover is applied to each pair of the target vector X_{i,G} and its corresponding mutant vector V_{i,G} to generate a trial vector U_{i,G}. DE generally employs the binomial crossover defined by

u^j_{i,G} = \begin{cases} v^j_{i,G}, & \text{if } rand_j[0,1) \le CR \text{ or } j = j_{rand} \\ x^j_{i,G}, & \text{otherwise} \end{cases}, \quad j = 1, 2, \ldots, D    (9)

The crossover rate CR is a user-specified constant within the range [0,1), which controls the fraction of parameter values copied from the mutant vector. j_rand is a randomly chosen integer in {1, ..., D}. The condition j = j_rand ensures that the trial vector differs from its corresponding target vector in at least one parameter.

3.3. Selection

In the selection phase, each trial vector is compared to the corresponding target vector; the better one enters the population of the next generation:

X_{i,G+1} = \begin{cases} U_{i,G}, & \text{if } f(U_{i,G}) \le f(X_{i,G}) \\ X_{i,G}, & \text{otherwise} \end{cases}    (10)

where f(U_{i,G}) is the fitness value of the ith trial vector and f(X_{i,G}) is the fitness value of the ith target vector. The loss of the best individuals in the following generation is avoided by this selection mechanism, as the better of each parent-offspring pair survives to the next generation. This process continues until the maximum number of function evaluations is reached.

4. SaDE algorithm

The performance of the original DE algorithm is highly dependent on the strategies and parameter settings. Also, during different evolution stages, different strategies with different parameter settings can be more effective than others. Self-adaptation has been found to be highly beneficial for adjusting control parameters during the evolutionary process, especially when done without any user interaction. The SaDE algorithm automatically adapts the trial vector generation strategies and the crossover rate parameter CR during evolution. The SaDE algorithm is described in the following sections; the detailed algorithm is given in Qin et al. (2009).

4.1. Initialization

The initial population is generated randomly and should cover the entire search space as much as possible by uniformly randomizing individuals within the search space constrained by the prescribed minimum and maximum parameter bounds.

4.2. Trial vector generation strategy adaptation

The mutation operator is applied to each individual or target vector X_{i,G} at generation G to produce the mutant vector. After the mutation phase, crossover is applied to each pair of the target vector X_{i,G} and its corresponding mutant vector to generate a trial vector U_{i,G}. For each individual in the current population, one strategy is chosen according to a probability learnt from its previous experience of generating promising solutions and applied to perform the mutation operation. If one of the strategies performed well in the previous generations by generating promising solutions, the probability of that strategy being used in the current generation increases. The strategy candidate pool consists of the following four strategies, as given in Qin et al. (2009):

1) DE/rand/1/bin (ST1):

u_{i,j} = \begin{cases} x_{r1,j} + F (x_{r2,j} - x_{r3,j}), & \text{if } rand[0,1) < CR \text{ or } j = j_{rand} \\ x_{i,j}, & \text{otherwise} \end{cases}    (11)

2) DE/rand-to-best/2/bin (ST2):

u_{i,j} = \begin{cases} x_{i,j} + F (x_{best,j} - x_{i,j}) + \mathrm{diff}, & \text{if } rand[0,1) < CR \text{ or } j = j_{rand} \\ x_{i,j}, & \text{otherwise} \end{cases}, \quad \mathrm{diff} = F (x_{r1,j} - x_{r2,j}) + F (x_{r3,j} - x_{r4,j})    (12)

3) DE/rand/2/bin (ST3):

u_{i,j} = \begin{cases} x_{r1,j} + \mathrm{diff}, & \text{if } rand[0,1) < CR \text{ or } j = j_{rand} \\ x_{i,j}, & \text{otherwise} \end{cases}, \quad \mathrm{diff} = F (x_{r2,j} - x_{r3,j}) + F (x_{r4,j} - x_{r5,j})    (13)

4) DE/current-to-rand/1 (ST4):

U_{i,G} = X_{i,G} + K (X_{r1,G} - X_{i,G}) + F (X_{r2,G} - X_{r3,G})    (14)

Stochastic universal selection is used to select one trial vector generation strategy for each target vector in the current population. The probabilities of the strategies are updated only after an initial learning period of LP generations, which is set by the user. The probabilities are initialized to 1/K, i.e., all strategies have an equal probability of being chosen. After the initial LP generations, the probabilities of choosing the different strategies are updated at each subsequent generation by

p_{k,G} = \frac{S_{k,G}}{\sum_{k=1}^{K} S_{k,G}}    (15)

where

S_{k,G} = \frac{\sum_{g=G-LP}^{G-1} ns_{k,g}}{\sum_{g=G-LP}^{G-1} ns_{k,g} + \sum_{g=G-LP}^{G-1} nf_{k,g}} + \epsilon, \quad k = 1, 2, \ldots, K; \; G > LP

Here ns_{k,g} is the number of trial vectors generated by the kth strategy that successfully enter the next generation, nf_{k,g} is the number of trial vectors generated by the kth strategy that are discarded, and ε = 0.01.
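The sketch below illustrates the strategy pool of Eqs. (11)-(14) and the probability update of Eq. (15) for a minimization setting. It is a simplified illustration rather than the authors' implementation; the function names, the bookkeeping arrays and the small safeguard against division by zero are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_trial(pop, i, best, F, K, strategy, CR):
    """Build one trial vector for target i using strategy ST1-ST4 (Eqs. (11)-(14))."""
    NP, D = pop.shape
    r = rng.choice([k for k in range(NP) if k != i], size=5, replace=False)
    x = pop
    if strategy == 0:      # ST1: DE/rand/1/bin
        v = x[r[0]] + F * (x[r[1]] - x[r[2]])
    elif strategy == 1:    # ST2: DE/rand-to-best/2/bin
        v = x[i] + F * (x[best] - x[i]) + F * (x[r[0]] - x[r[1]]) + F * (x[r[2]] - x[r[3]])
    elif strategy == 2:    # ST3: DE/rand/2/bin
        v = x[r[0]] + F * (x[r[1]] - x[r[2]]) + F * (x[r[3]] - x[r[4]])
    else:                  # ST4: DE/current-to-rand/1 (no binomial crossover)
        return x[i] + K * (x[r[0]] - x[i]) + F * (x[r[1]] - x[r[2]])
    # binomial crossover of Eq. (9): at least one component is taken from the mutant
    mask = rng.random(D) < CR
    mask[rng.integers(D)] = True
    return np.where(mask, v, x[i])

def strategy_probabilities(ns, nf, eps=0.01):
    """Eq. (15): ns, nf are (K, LP) arrays of success/failure counts over the
    learning period; returns the selection probability of each strategy."""
    S = ns.sum(axis=1) / np.maximum(ns.sum(axis=1) + nf.sum(axis=1), 1) + eps
    return S / S.sum()
```

A full SaDE loop would select a strategy for each target vector by stochastic universal selection according to these probabilities, sample F from N(0.5, 0.3) and CR from N(CRm_k, 0.1), and keep ns and nf over a sliding window of LP generations.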

4.3. Parameter adaptation

In the DE algorithm, the population size (NP) is a user-specified parameter because it highly depends on the complexity of the given problem. Of the other two parameters, CR is usually more sensitive to problems with different characteristics, while F is closely related to the convergence speed. Hence, in SaDE only the CR value is adapted. In the SaDE algorithm the population size (NP) is set by the user. The F parameter is approximated by a normal distribution with mean value 0.5 and standard deviation 0.3. A set of values is randomly sampled from this normal distribution and applied to each target vector in the current population. CR is normally distributed with mean CRm_k and standard deviation 0.1 with respect to the kth strategy. Initially, CRm_k is set to 0.5 for all the strategies. A set of CR values conforming to the normal distribution is generated and applied to those target vectors to which the kth strategy is assigned. After the initial LP generations, the CRm_k value is updated, at every subsequent generation, with the median of the successful CR values (those that have generated trial vectors successfully entering the next generation) over the past LP generations. The control parameter K in the strategy DE/current-to-rand/1 (ST4) is randomly generated within [0,1] so as to eliminate one additional parameter. The details of the parameter adaptation are given in Qin et al. (2009).

4.4. Constraint handling method

Real-world optimization problems are constrained, and an additional method to handle infeasible solutions needs to be employed. In EAs the presence of constraints reduces the feasible region and complicates the search process, so an additional mechanism to handle constraints is required. One of the major issues in constrained optimization is how to deal with infeasible individuals throughout the search process; therefore, different techniques have been developed to exploit the information in infeasible individuals. Several constraint handling methods for EAs are available (Coello Coello, 2002). The simplest and earliest method of involving infeasible individuals in the search process, even after a sufficient number of feasible solutions is obtained, is the static penalty method. However, static penalty methods usually require different parameters to be defined by the user to control the amount of penalty added when multiple constraints are violated. To overcome this difficulty, the Self-adaptive Penalty (SP) method was recently suggested, in which information gathered from the search process is used to control the amount of penalty added to infeasible individuals (Tessema and Yen, 2006). It does not require the user to define any parameters. Hence, this constraint handling method is employed in the SaDE algorithm for the robust controller design problems stated above. The method is explained in the following section.

4.4.1. Self-adaptive Penalty (SP)

This method was proposed by Tessema and Yen (2006), in which two types of penalties are added to each infeasible individual to identify the best infeasible individual in the current population.


The selection of individuals is based on a value determined by the overall constraint violation and the objective value. Thus, there is a chance for an individual with lower overall constraint violation and higher fitness to be selected over a feasible individual with lower fitness, even when there is a sufficient number of feasible solutions to form the parent population. The amount of the added penalties is controlled by the number of feasible individuals currently present in the combined population, including both the parent and the offspring population. If there are few feasible individuals, a higher penalty is added to infeasible individuals with a higher amount of constraint violation. On the other hand, if there are several feasible individuals, then infeasible individuals with high fitness values will have only small penalties added to their fitness values. This algorithm requires no parameter tuning. The final fitness value, based on which the population members are ranked in Tessema and Yen (2006), is

F(X) = d(X) + p(X)    (16)

where d(X) is the distance value and p(X) is the penalty value. The distance value is given by

d(X) = \begin{cases} v(X), & \text{if } r_f = 0 \\ \sqrt{f''(X)^2 + v(X)^2}, & \text{otherwise} \end{cases}    (17)

where r_f = (number of feasible individuals) / (population size). v(X) is the overall constraint violation, given by v(X) = \sum_{i=1}^{m} w_i G_i(X) / \sum_{i=1}^{m} w_i, where w_i = 1/G_{max,i} is a weight parameter and G_{max,i} is the maximum violation of constraint G_i(X). f''(X) = (f(X) - f_{min}) / (f_{max} - f_{min}), where f_max and f_min are the maximum and minimum values of the objective function f(X) in the current combined population. The penalty value is defined as

p(X) = (1 - r_f) M(X) + r_f N(X)    (18)

where

M(X) = \begin{cases} 0, & \text{if } r_f = 0 \\ v(X), & \text{otherwise} \end{cases}, \qquad N(X) = \begin{cases} 0, & \text{if } X \text{ is feasible} \\ f''(X), & \text{if } X \text{ is infeasible} \end{cases}

F(X) in Eq. (16) is used to rank the individuals in the combined population. The top NP individuals form the next generation.
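A compact sketch of this ranking value for a minimization problem is given below. The vectorized form and the small safeguards against division by zero are illustrative additions, not part of Tessema and Yen (2006).

```python
import numpy as np

def sp_fitness(f, G):
    """Self-adaptive penalty value F(X) = d(X) + p(X) of Eqs. (16)-(18).
    f: objective values of the combined population, shape (N,).
    G: non-negative constraint violations, shape (N, m) (zero when satisfied)."""
    N, m = G.shape
    Gmax = np.maximum(G.max(axis=0), 1e-12)       # per-constraint maximum violation
    w = 1.0 / Gmax                                # weights w_i = 1 / Gmax_i
    v = (G * w).sum(axis=1) / w.sum()             # overall constraint violation v(X)
    rf = np.mean(v == 0)                          # feasibility ratio r_f
    f_norm = (f - f.min()) / max(f.max() - f.min(), 1e-12)   # normalised objective f''(X)
    d = v if rf == 0 else np.sqrt(f_norm ** 2 + v ** 2)      # distance value, Eq. (17)
    M = np.zeros(N) if rf == 0 else v
    Nx = np.where(v == 0, 0.0, f_norm)
    p = (1.0 - rf) * M + rf * Nx                  # penalty value, Eq. (18)
    return d + p
```

The combined parent and offspring population is then sorted in ascending order of this value and the best NP individuals form the next generation.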

5. Test systems

In order to validate the performance of the SaDE algorithm on the design of robust optimal fixed structure controllers, three different systems, namely a simple magnetic levitation system (Test System-I), the F-8 aircraft linearized model (Test System-II) and a simple SISO plant (Test System-III), are considered. Their details are given below.

5.1. Test System-I

Let us consider the unity feedback system consisting of the linearized model of the experimental magnetic levitation system (Kim et al., 2008; Maruta et al., 2009). The linearized model of the magnetic levitation plant about an equilibrium point of y = 0.018 m is given as

P(s) = \frac{7.147}{(s - 22.5)(s + 20.9)(s + 13.99)}    (19)



The plant perturbation is unknown in fact, but is bounded by the following known stable function:

W_T(s) = 4.3867 \times 10^{-7} (s + 0.066)(s + 31.4)(s + 88) \left( \frac{10^4}{s + 10^4} \right)^3    (20)

To treat the robust H∞ disturbance attenuation problem, the weighting function is chosen as

W_S(s) = \frac{5}{s + 1}    (21)
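For illustration, the Test System-I data of Eqs. (19)-(21) can be encoded as (numerator, denominator) coefficient pairs compatible with the evaluation sketch of Section 2; the variable names below are illustrative choices.

```python
import numpy as np

# P(s) = 7.147 / ((s - 22.5)(s + 20.9)(s + 13.99)), Eq. (19)
P = (np.array([7.147]),
     np.polymul(np.polymul([1.0, -22.5], [1.0, 20.9]), [1.0, 13.99]))

# W_T(s) = 4.3867e-7 (s + 0.066)(s + 31.4)(s + 88) * (1e4 / (s + 1e4))^3, Eq. (20)
W1 = (4.3867e-7 * 1e12 * np.polymul(np.polymul([1.0, 0.066], [1.0, 31.4]), [1.0, 88.0]),
      np.polymul(np.polymul([1.0, 1e4], [1.0, 1e4]), [1.0, 1e4]))

# W_S(s) = 5 / (s + 1), Eq. (21)
W2 = (np.array([5.0]), np.array([1.0, 1.0]))
```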

5.2. Test System-II

The design problem of a reduced-order H∞ controller for the F-8 aircraft linear model is considered (Apkarian et al., 2003; Saeki, 2006b; Calafiore et al., 2000). Consider the linear time-invariant generalized plant P(s) described by

\begin{bmatrix} \dot{x}_p \\ z \\ y \end{bmatrix} = \begin{bmatrix} A & B_1 & B_2 \\ C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22} \end{bmatrix} \begin{bmatrix} x_p \\ w \\ u \end{bmatrix}    (22)

where x_p ∈ R^8 is the state vector, w ∈ R^3 is the exogenous disturbance vector, u ∈ R^2 is the control input vector, z ∈ R^4 is the performance variables vector, and y ∈ R^2 is the measured output vector. Refer to Apkarian et al. (2003) for the details of the matrices.

5.3. Test System-III

Let us consider the unity feedback system with the following transfer function:

P(s) = \frac{17 (1 + s)(1 + 16 s)(1 - s + s^2)}{s (1 - s)(90 - s)(1 + s + 4 s^2)}    (23)

To treat the robust H∞ disturbance attenuation problem, the weighting function is chosen as

W_S(s) = \frac{55 (1 + 3 s)}{1 + 800 s}    (24)

6. Simulation results

The SaDE algorithm and the test systems are implemented using MATLAB 7.3 on an Intel Core 2 Duo, 1.86 GHz PC with 2 GB RAM.

Owing to the randomness of evolutionary algorithms, statistical performance measures, namely the best, the mean and the worst value of the objectives obtained in 20 independent runs, are taken. For comparison purposes, the reported results of the constrained PSO method and the following five DE algorithms with different sets of CR and F values are considered.

1. DE/rand/1/bin: F = 0.9, CR = 0.1 (DE1)
2. DE/rand/1/bin: F = 0.5, CR = 0.3 (DE2)
3. DE/rand-to-best/1/bin: F = 0.9, CR = 0.1 (DE3)
4. DE/rand-to-best/1/bin: F = 0.5, CR = 0.3 (DE4)
5. DE/current-to-rand/1: F = 0.9 (DE5)

The DE/rand/1/bin algorithm usually has a slow convergence rate but a stronger exploration capability. The DE/rand-to-best/1/bin algorithm has a faster convergence speed but is more likely to get stuck in local optima, leading to premature convergence. The DE/current-to-rand/1 algorithm is a rotation-invariant algorithm mostly used for multimodal problems. All the algorithms are simulated separately with two stopping criteria, namely (i) a maximum number of function evaluations (Stop1) and (ii) a convergence criterion based on objective values falling below 1e-4 (Stop2).

6.1. Test System-I results

For the magnetic levitation system, the controller structure given in Eq. (25) is selected:

K(s) = 10^{x_1} \left( 1 + \frac{1}{10^{x_2} s} + \frac{10^{x_3} s}{1 + 10^{x_3 - x_4} s} \right)    (25)

where x = (x1, x2, x3, x4)^T denotes the controller design parameter vector. Therefore, the optimization problem aims at finding x ∈ R^4 which minimizes Re[a_max] while satisfying the multiple stability and disturbance attenuation H∞ constraints given in Eqs. (2) and (3). The search space of the design parameter vector is given by x ∈ R^4, (2, -1, -1, 2)^T ≤ x ≤ (4, 1, 1, 3)^T. For fair comparison, the same population size (= 100) and maximum number of function evaluations (Fevalmax = 40,000) as those of the constrained PSO are assumed (Maruta et al., 2009). The best optimal robust PID controller K(s) and the corresponding robust stability and disturbance attenuation criteria obtained using the SaDE algorithm, the five DE algorithms and the reported constrained PSO algorithm for Test System-I are given in Table 1.
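The mapping from a candidate design vector x to the controller polynomials of Eq. (25) can be sketched as below; the helper name and the polynomial bookkeeping are assumptions, and the result can be fed to the evaluation sketch of Section 2.

```python
import numpy as np

def pid_from_x(x):
    """Controller of Eq. (25):
    K(s) = 10**x1 * (1 + 1/(10**x2 * s) + 10**x3 * s / (1 + 10**(x3 - x4) * s)),
    returned as (num, den) polynomial coefficient arrays."""
    Kp, Ti, Td, tau = 10.0 ** x[0], 10.0 ** x[1], 10.0 ** x[2], 10.0 ** (x[2] - x[3])
    den = np.polymul([Ti, 0.0], [tau, 1.0])       # common denominator Ti*s*(1 + tau*s)
    term1 = den.copy()                            # the "1" term over the common denominator
    term2 = np.array([tau, 1.0])                  # integral term: (1 + tau*s)
    term3 = np.array([Td * Ti, 0.0, 0.0])         # filtered derivative term: Td*Ti*s^2
    num = Kp * np.polyadd(np.polyadd(term1, term2), term3)
    return num, den
```

In this form the quantities 10^{x1}, 10^{x2}, 10^{x3} and 10^{x3-x4} correspond directly to the gain, reset, derivative and filter constants listed for each algorithm in Table 1.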

Table 1
Optimum controller for Test System-I.

Algorithm | Optimum PID controller K(s) | Re[amax] | f1 | f2
Constrained PSO (Maruta et al., 2009) | 1821.6 (1 + 1/(0.1752 s) + 0.1529 s / (1 + 8.2224e-4 s)) | -1.7681 | 0.9995 | 0.9994
DE1 | 1810.5 (1 + 1/(0.1746 s) + 0.1557 s / (1 + 8.1696e-4 s)) | -1.7758 | 0.9999 | 1.0000
DE2 | 1808.8 (1 + 1/(0.1745 s) + 0.1547 s / (1 + 8.1527e-4 s)) | -1.7758 | 0.9999 | 1.0000
DE3 | 1781.1 (1 + 1/(0.1498 s) + 0.2209 s / (1 + 6.6435e-4 s)) | -2.1808 a | 1.1086 | 0.9951
DE4 | 1892.8 (1 + 1/(0.1626 s) + 0.2504 s / (1 + 7.2728e-4 s)) | -2.0407 a | 1.0391 | 1.0691
DE5 | 1810.9 (1 + 1/(0.1746 s) + 0.1516 s / (1 + 8.1752e-4 s)) | -1.7758 | 0.9999 | 1.0000
SaDE | 1810.9 (1 + 1/(0.1746 s) + 0.1561 s / (1 + 8.1752e-4 s)) | -1.7758 | 0.9999 | 1.0000

a Solutions with constraint violation.



Table 2
Statistical performance of Test System-I with Stop1 criteria.

Algorithm | Re[amax] (Best) | Re[amax] (Mean) | Re[amax] (Worst) | Re[amax] (St. dev.) | FSR (%)
Constrained PSO (Maruta et al., 2009) | -1.7681 | NR | NR | NR | 93
DE1 | -1.7758 | -1.7750 | -1.7702 | 0.0016 | 100
DE2 | -1.7758 | -1.7745 | -1.7674 | 0.0026 | 100
DE3 | -2.1808 a | 2.6241 | 7.3251 a | 4.7185 | 0
DE4 | -2.0407 a | 7.9661 | 30.9338 a | 11.9343 | 0
DE5 | -1.7758 | -1.7758 | -1.7752 | 1.8045e-04 | 100
SaDE | -1.7758 | -1.7758 | -1.7758 | 2.0886e-07 | 100

NR: not reported.
a Solutions with constraint violation.

Table 3
Statistical performance of Test System-I with Stop2 criteria.

Algorithm | Re[amax] (Best) | Re[amax] (Mean) | Re[amax] (Worst) | Re[amax] (St. dev.) | Mean function evaluations | FSR (%)
DE1 | -1.7755 | -1.3577 | 6.4962 a | 1.8486 | 11,875 | 95
DE2 | -1.7757 | -1.7708 | -1.7590 | 0.0049 | 8020 | 100
DE3 | -2.0512 a | 1.7949 | 7.3956 a | 4.5030 | 13,465 | 0
DE4 | -2.0834 a | 7.3597 | 33.0373 a | 12.8348 | 19,748 | 0
DE5 | -1.7757 | -1.7721 | -1.7673 | 0.0029 | 15,722 | 100
SaDE | -1.7758 | -1.7744 | -1.7664 | 0.0023 | 14,185 | 100

a Solution with constraint violation.

The best, the worst and the mean of the real pole locations obtained in 20 runs for SaDE and the five DE variants with the Stop1 criterion are reported in Table 2. Only the optimal PID controller parameters of the constrained PSO algorithm are taken from Maruta et al. (2009); their corresponding robust stability and disturbance attenuation criteria are calculated. SaDE gives 100% consistency in obtaining the optimal robust controller. The DE3 and DE4 algorithms give only infeasible solutions in all 20 runs. Even though DE1 and DE2 produce the optimal robust controller, their consistency in obtaining optimal solutions is comparatively lower than that of SaDE, as can be seen from the standard deviation values. The DE5 algorithm produces results almost equivalent to SaDE, but its consistency is slightly lower. The Feasible Success Rate (FSR) is calculated as the number of runs with a feasible solution divided by the total number of runs. The FSR of SaDE, DE1, DE2 and DE5 is comparatively better than that of the constrained PSO, DE3 and DE4 algorithms. The statistical performance of the DE variants and the SaDE algorithm with the Stop2 stopping criterion is given in Table 3. Here, it is observed that the DE1 and DE2 algorithms produce solutions close to the optimal region, but they are not able to reach the best optimal solution due to their premature convergence characteristics. The DE3 and DE4 algorithms obtain only infeasible solutions. The DE5 algorithm with Stop2 also performs close to the SaDE algorithm, but its mean number of function evaluations is slightly greater than that of SaDE. In the SaDE algorithm, because of the self-adaptation of the mutation strategy and the crossover rate, the diversity of the population is maintained throughout the evolutionary process, which helps it overcome the premature convergence problem. The SaDE algorithm converges relatively slowly, but to a better optimal solution than the other DE variants, and its consistency in obtaining optimal solutions is also better under the Stop2 criterion. The SaDE algorithm converges in around 14,185 function evaluations using the Stop2 criterion to the same optimal solution obtained with the Stop1 criterion (Fevalmax = 40,000), thus saving function evaluations. However, the standard deviation is comparatively higher under the Stop2 criterion for all the algorithms. The disturbance rejection response of the system with the SaDE-designed best controller for a disturbance signal d(t) = 1 - exp(-6t) is given in Fig. 2. The changes in CRm and in the probabilities of the four strategies over the generations with the Stop1 criterion are plotted in Figs. 3 and 4, respectively.

Fig. 2. Disturbance rejection response of Test System-I.

Fig. 3. Self-adaptation characteristics of CRm with Stop1 criteria—Test System-I.

From Figs. 3 and 4, it is clear that SaDE is capable of adapting towards the best mutation strategy and of tuning the CRm value to the best parameter setting. It can also be seen that the value of CRm increases from its initial setting of 0.5 for all three strategies. ST4 outperforms the other strategies for this problem.




Fig. 4. Self-adaptation characteristics of strategies with Stop1 criteria—Test System-I.

Fig. 5. Convergence characteristic with Stop2 criteria—Test System-I.

The convergence characteristics of all the DE variants and the SaDE algorithm with the Stop2 stopping criterion are given in Fig. 5. As can be seen, initially all five DE variants and the SaDE algorithm oscillate widely, but as the evolutionary process progresses DE1, DE2, DE5 and SaDE are able to locate the optimal region. DE1, being a slow-converging algorithm, settles in the optimal region only around 130 generations. Only DE5 and SaDE settle at the optimal solution, around 105 and 95 generations respectively. The performance of the DE5 algorithm with the Stop1 stopping criterion is almost equivalent to that of SaDE. DE3 and DE4 continue to oscillate and converge prematurely to local optima around 150 and 300 generations, respectively. In SaDE, as stated earlier, ST4 performs best, which is the same strategy as DE5 but with randomly generated F values.

6.2. Test System-II

For the F-8 aircraft model, the required controller K(s) is a first-order one, described by

\dot{x}_k = A_k x_k + B_k y, \quad u = C_k x_k + D_k y    (26)

For the above system, the optimization based controller design problem is as follows: minimize ||T(s)||∞ of the closed-loop system consisting of P(s) and K(s), subject to the constraint of internal stability. This controller design problem is solved by setting the design parameter x = (x1, x2, ..., x9) ∈ R^9 as

\begin{bmatrix} A_k & B_k \\ C_k & D_k \end{bmatrix} = \begin{bmatrix} x_1 & x_2 & x_3 \\ x_4 & x_5 & x_6 \\ x_7 & x_8 & x_9 \end{bmatrix}

Its initial search space is taken as x ∈ R^9, -5 ≤ x ≤ 5. For fair comparison, the same population size (= 300) and maximum number of function evaluations (Fevalmax = 120,000) as those of the constrained PSO are used (Maruta et al., 2009). The performance indices of the best fixed structure controllers obtained using SaDE, the five DE algorithms and the constrained PSO algorithm are given in Table 4. The best, the worst and the mean of the H∞ performance index obtained in 20 runs for SaDE and the five DE variants using the Stop1 stopping criterion are reported in Table 5. The SaDE algorithm produces better consistency in obtaining the optimal solution, with even its worst value better than the best value of the other methods. The DE1 algorithm is able to locate the optimal region, but only the DE5 algorithm is able to produce an optimal solution similar to that of SaDE; however, its standard deviation is higher than that of SaDE. The statistical performance with the Stop2 stopping criterion is given in Table 6. All the other DE algorithms tend to converge faster, leading to premature convergence, whereas the SaDE algorithm is able to overcome that effect due to the multiple strategies and varying CRm values it uses. The DE5 algorithm, which performs better with the Stop1 criterion, also suffers from premature convergence. Though the SaDE algorithm converges more slowly, it identifies the best optimal solution with a smaller number of function evaluations. This shows that the SaDE algorithm has the capability to escape from premature convergence due to the four strategies employed, which produce a diverse population. Of the five DE variants, as expected, the DE1 algorithm has a slower convergence and a better optimal solution. Similar to Test System-I, the CRm value for Test System-II also increases from the initial 0.5 to around 0.9 for all three strategies. The self-adaptation characteristic of the strategies with the Stop1 criterion is given in Fig. 6, which shows that ST4 outperforms the other three strategies.

Table 4
Optimum controller for Test System-II.

Algorithm | ||T(s)||∞ | Re[amax]
Constrained PSO (Maruta et al., 2009) | 1.7092 | -0.0154
DE1 | 1.6466 | -0.0153
DE2 | 1.7105 | -0.0152
DE3 | 1.7391 | -0.0143
DE4 | 1.7294 | -0.0143
DE5 | 1.6096 | -0.0150
SaDE | 1.6051 | -0.0151



Table 5
Statistical performance of Test System-II with Stop1 criteria.

Algorithm | ||T(s)||∞ (Best) | ||T(s)||∞ (Mean) | ||T(s)||∞ (Worst) | ||T(s)||∞ (St. dev.) | FSR (%)
Constrained PSO (Maruta et al., 2009) | 1.7092 | 1.7775 | 2.3732 | 0.0996 | 100
DE1 | 1.6466 | 1.7167 | 1.8053 | 0.0469 | 100
DE2 | 1.7105 | 1.7297 | 1.7793 | 0.0190 | 100
DE3 | 1.7391 | 1.7727 | 1.8323 | 0.0317 | 100
DE4 | 1.7294 | 1.7455 | 1.7682 | 0.0125 | 100
DE5 | 1.6096 | 1.6583 | 1.8528 | 0.0717 | 100
SaDE | 1.6051 | 1.6216 | 1.6427 | 0.0157 | 100

Table 6
Statistical performance of Test System-II with Stop2 criteria.

Algorithm | ||T(s)||∞ (Best) | ||T(s)||∞ (Mean) | ||T(s)||∞ (Worst) | ||T(s)||∞ (St. dev.) | Mean function evaluations | FSR (%)
DE1 | 1.6668 | 1.7831 | 2.0203 | 0.0946 | 37,605 | 100
DE2 | 1.7723 | 1.8347 | 1.8951 | 0.0391 | 23,505 | 100
DE3 | 1.7502 | 1.8548 | 2.0530 | 0.0819 | 25,680 | 100
DE4 | 1.7767 | 1.8370 | 1.9259 | 0.0367 | 18,695 | 100
DE5 | 1.9911 | 2.6010 | 3.4985 | 0.4266 | 19,580 | 100
SaDE | 1.6056 | 1.6340 | 1.7245 | 0.0271 | 62,215 | 100

Fig. 6. Self-adaptation characteristics of strategies with Stop1 criteria—Test System-II.

Fig. 7. Convergence characteristic with Stop1 criteria—Test System-II.

The convergence characteristics with the Stop1 criterion are given in Fig. 7. As can be seen, the SaDE algorithm has a smooth convergence to the optimum value, whereas the other DE variants tend to oscillate widely before settling to a local optimal solution. Only the DE1 algorithm settles in the optimal region. The SaDE algorithm converges to the optimal solution around 270 generations.

6.3. Test System-III

The fixed-order controller K(s) for the SISO plant considered is of the form

K(s) = \frac{y_0 + a_0 s + y_2 s^2}{1 + m_0 s + b_2 s^2}    (27)

where x = (y0, a0, y2, m0, b2)^T denotes the design parameter vector. The search space for the variables is taken as x ∈ R^5, -5 ≤ xi ≤ 5, i = 1, 2, ..., 5, based on the problem setting in Fujisaki et al. (2008). Here, the aim is to find an optimal controller which minimizes Re[a_max] subject to the disturbance attenuation constraint given in Eq. (3). In addition, the pole placement condition Re[a_max] < -0.2 should be satisfied. For fair comparison, the same population size (= 200) and maximum number of function evaluations (Fevalmax = 60,000) as those of the constrained PSO are assumed (Maruta et al., 2009). The best controller obtained in 20 runs, the corresponding pole locations and the load disturbance attenuations obtained using the SaDE algorithm, the five DE algorithms and the reported results of the constrained PSO algorithm are given in Table 7. The best, the worst and the mean of the pole locations over twenty independent runs are given in Table 8. From the results, it is clear that SaDE performs better than the other methods and produces a 100% FSR. Even the worst result produced by SaDE is better than the best values of the other methods. The DE1 algorithm is able to produce feasible solutions with an 80% FSR, whereas the DE2 algorithm produces feasible solutions with only a 30% success rate. DE3 obtains a 5% FSR. All the other DE variants produce only infeasible solutions. The statistical performance with the Stop2 criterion is tabulated in Table 9. Similar to the previous two Test Systems, the SaDE algorithm converges better but more slowly, thereby reducing the chances of premature convergence. The DE1 and DE3 algorithms with the Stop2 criterion produce only a 5% FSR, whereas the other DE variants produce only infeasible solutions. The SaDE algorithm with the Stop2 criterion is able to produce results equivalent to those with the Stop1 criterion, while requiring a smaller number of function evaluations. The disturbance rejection response of the system with the optimum controller obtained using the SaDE algorithm for the disturbance d(t) = 1 - exp(-6t) is given in Fig. 8. The self-adaptation characteristic of CRm with the Stop1 criterion is given in Fig. 9. As with the previous test cases, the ST4 strategy performs better than the other three strategies. The convergence characteristics of the five DE variants and the SaDE algorithm for Test System-III with the Stop1 criterion are given in Fig. 10. As seen, only the SaDE algorithm identifies the optimal region, while the other DE variants vary widely but settle for poor solutions even with the Stop1 criterion. The SaDE algorithm is able to identify a better feasible solution. With the Stop2 criterion, the SaDE algorithm settles around 180 generations, thus saving function evaluations. The other DE variants converge to one of the local optimal solutions.



Table 7
Optimum controller for Test System-III.

Algorithm | Optimum controller K(s) | Re[amax] | f2
Constrained PSO (Maruta et al., 2009) | (0.5891 - 0.7339s - 2.5918s^2) / (1 - 0.5578s - 1.555s^2) | -0.5780 | 0.9984
DE1 | (0.5903 - 0.7390s - 2.5953s^2) / (1 - 0.5540s - 1.5498s^2) | -0.5524 | 0.9972
DE2 | (0.6147 - 0.8141s - 2.6979s^2) / (1 - 0.6213s - 1.6339s^2) | -0.4163 | 0.9947
DE3 | (0.5585 - 0.5898s - 2.2400s^2) / (1 - 0.0980s - 1.2965s^2) | -0.4019 a | 1.0240
DE4 | (0.5186 - 0.5951s - 2.1201s^2) / (1 - 0.0983s - 0.9684s^2) | -0.3024 a | 1.3920
DE5 | (1.0086 - 0.4266s - 3.6219s^2) / (1 - 0.0900s - 2.2247s^2) | 0.0309 a | 0.9997
SaDE | (0.5854 - 0.7232s - 2.5793s^2) / (1 - 0.5542s - 1.5516s^2) | -0.5912 | 0.9995

a Solution with constraint violation.

Table 8
Statistical performance of Test System-III with Stop1 criteria.

Algorithm | Re[amax] (Best) | Re[amax] (Mean) | Re[amax] (Worst) | Re[amax] (St. dev.) | FSR (%)
Constrained PSO (Maruta et al., 2009) | -0.5780 | NR | NR | NR | 37
DE1 | -0.5524 | -0.3884 | 0.0436 a | 0.1955 | 80
DE2 | -0.4163 | -0.2416 | 0.0893 a | 0.0921 | 30
DE3 | -0.4019 a | -0.0938 | 0.0475 a | 0.1089 | 5
DE4 | -0.3024 a | -0.2158 | -0.1290 a | 0.0777 | 0
DE5 | 0.0309 a | 0.0311 | 0.0315 a | 2.2299e-04 | 0
SaDE | -0.5912 | -0.5885 | -0.5833 | 0.0021 | 100

NR: not reported.
a Solution with constraint violation.

Table 9
Statistical performance of Test System-III with Stop2 criteria.

Algorithm | Re[amax] (Best) | Re[amax] (Mean) | Re[amax] (Worst) | Re[amax] (St. dev.) | Mean function evaluations | FSR (%)
DE1 | -0.3688 | -0.0579 | 0.1050 a | 0.1556 | 25,765 | 5
DE2 | -0.3672 a | -0.1235 | 0.0825 a | 0.1445 | 34,300 | 0
DE3 | -0.2159 | 0.0305 | 0.1017 a | 0.0935 | 18,283 | 5
DE4 | -0.2168 a | -0.0710 | 0.0746 a | 0.1186 | 19,286 | 0
DE5 | 0.0311 a | 0.0838 | 0.1106 a | 0.0241 | 21,875 | 0
SaDE | -0.5900 | -0.5810 | -0.4541 | 0.0309 | 47,130 | 100

a Solution with constraint violation.

Fig. 9. Self-adaptation characteristic of CRm with Stop1—Test System-III.

Fig. 8. Disturbance rejection response—Test System-III.

The above three examples demonstrate that various DE algorithms perform better during various stages of evolution, and their relative performance also depends on the particular application. Hence, it is not possible to identify a single algorithm that performs better on all occasions. The ST4 strategy of SaDE consistently produces better individuals, and thereby the probability of ST4 is improved in all three Test Systems.

Fig. 10. Convergence characteristic with Stop1—Test System-III.


Generally, DE1 gives better solutions than the other DE variants for all three problems, at the cost of an increase in mean function evaluations. DE3 and DE4 perform poorly compared to the other DE algorithms, and DE2 is not able to give consistent performance for all three problems. The DE5 strategy performs almost equivalently to the SaDE algorithm on Test System I and Test System II, but its performance on Test System III is very poor, as it is not able to generate even a feasible solution in 20 runs. The better performance of the SaDE algorithm is attributed to its self-adaptation characteristics, involving four different mutation strategies and a varying CR value. The four strategies, along with a better CR value, ensure a diverse population, leading to identification of the best optimal solutions with Stop1 and avoiding premature convergence with Stop2.

7. Conclusion

This paper discusses the application of the SaDE algorithm to the design of robust optimal fixed structure controllers for systems with uncertainties and disturbance. Minimization of the maximum real part of the closed-loop poles and minimization of the closed-loop norm are considered as objectives, subject to robust stability and load disturbance attenuation constraints. The performance and validity of the SaDE based fixed structure robust controller design are demonstrated with three test systems, namely a linearized magnetic levitation system, the F-8 aircraft model and a SISO unity feedback system. For comparing the performance of the constrained SaDE algorithm, results of constrained PSO and of five DE algorithms with different sets of parameter values and strategies are examined. It is shown experimentally that the performance of the SaDE algorithm is better than that of the other DE algorithms and the reported constrained PSO algorithm. Moreover, the time-consuming trial-and-error method of identifying the trial vector generation strategy and its associated parameter settings is avoided by employing SaDE. The better performance of the SaDE algorithm is attributed to its self-adaptation characteristics. Different DE algorithms perform differently during the evolutionary process; hence, in most applications, a single fixed algorithm will not be able to locate the optimal solution. Due to the improved diversity of the population, SaDE is able to produce optimal solutions consistently. The Stop2 stopping criterion also clearly shows that only the SaDE algorithm is able to overcome premature convergence; all the other algorithms fail on at least one of the Test Systems. The SaDE algorithm with the Stop2 criterion produces optimal solutions in all three Test Systems while saving function evaluations, and it consistently succeeds in finding an optimal controller. Hence, the SaDE algorithm can be recommended for the design of robust optimal fixed structure controllers.

Acknowledgements

The authors thank the University Grants Commission (UGC), New Delhi, for financially supporting this work under the major project (38-248/2009(SR)) and Thiagarajar College of Engineering for providing the necessary facilities for carrying out this work.

References

Apkarian, P., Noll, D., Tuan, H.D., 2003. Fixed-order H∞ control design via a partially augmented Lagrangian method. Int. J. Robust Nonlin. Cont. 13 (12), 1137-1148.
Astrom, K.J., Hagglund, T., 1995. PID Control: Theory, Design and Tuning, 2nd ed. Instrument Society of America, Research Triangle Park, NC.
Blanchini, F., Lepschy, A., Miani, S., Viaro, U., 2004. Characterisation of PID and lead/lag compensators satisfying given H∞ specifications. IEEE Trans. Autom. Cont. 48 (5), 736-740.
Calafiore, G., Dabbene, F., Tempo, R., 2000. Randomized algorithms for reduced order H∞ controller design. In: Proceedings of the American Control Conference, pp. 3837-3839.
Chen, B.S., Cheng, Y.M., 1998. A structure specified H∞ optimal controller design for practical applications: a genetic approach. IEEE Trans. Cont. Syst. Tech. 6 (6), 707-718.
Chen, B.S., Cheng, Y.M., Lee, C.H., 1995. A genetic approach to mixed H2/H∞ optimal PID control. IEEE Cont. Syst. Mag. 15 (5), 51-60.
Coello Coello, C.A., 2002. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art. Comput. Methods Appl. Mech. Eng. 191, 1245-1287.
Dong Hwa, Kim, 2011. Hybrid GA-BF based intelligent PID controller tuning for AVR system. Appl. Soft Comput. 11 (1), 11-22.
Doyle, J.C., Francis, B., Tannenbaum, A., 1990. Feedback Control Theory. Macmillan Publishing Co.
Fujisaki, Y., Oishi, Y., Tempo, R., 2008. Mixed deterministic/randomized methods for fixed order controller design. IEEE Trans. Autom. Cont. 53 (9), 2033-2047.
Ho, M., 2003. Synthesis of H∞ PID controllers: a parametric approach. Automatica 39 (6), 1069-1075.
Ho, M., Lin, C., 2003. PID controller design for robust performance. IEEE Trans. Autom. Cont. 48 (8), 1404-1409.
Ho, S.J., Ho, S.-Y., Shu, L.-S., 2004. OSA: orthogonal simulated annealing algorithm and its application to designing mixed H2/H∞ optimal controllers. IEEE Trans. Syst. Man Cybern. A Syst. Hum. 34 (5), 588-600.
Ho, Shinn-Jang, Ho, Shinn-Ying, Hung, Ming-Hao, Shu, Li-Sun, Huang, Hui-Ling, 2005. Designing structure-specified mixed H2/H∞ optimal controllers using an intelligent genetic algorithm IGA. IEEE Trans. Cont. Syst. Tech. 13 (6), 1119-1124.
Kim, T.H., Maruta, I., Sugie, T., 2008. Robust PID controller tuning based on the constrained particle swarm optimization. Automatica 44 (4), 1104-1110.
Kitsios, I., Pimenides, T., Groumpos, P., 2001. A genetic algorithm for designing H∞ structure specified controllers. In: Proceedings of the IEEE International Conference on Control Applications, Mexico, pp. 1196-1201.
Krohling, R.A., Rey, J.P., 2001. Design of optimal disturbance rejection PID controllers using genetic algorithms. IEEE Trans. Evol. Comp. 5 (2), 78-82.
Maruta, I., Kim, T.H., Sugie, T., 2009. Fixed-structure H∞ controller synthesis: a meta-heuristic approach using simple constrained particle swarm optimization. Automatica 45 (2), 553-559.
Qin, A.K., Huang, V.L., Suganthan, P.N., 2009. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans. Evol. Comp. 13 (2), 398-417.
Saeki, M., 2006a. Fixed structure PID controller design for standard H∞ control problem. Automatica 42 (1), 93-100.
Saeki, M., 2006b. Static output feedback design for H∞ control by descent method. In: Proceedings of the 45th IEEE Conference on Decision and Control, pp. 5156-5161.
Storn, R., Price, K.V., 1997. Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. J. Global Optim. 11, 341-359.
Tessema, B., Yen, G.G., 2006. A self adaptive penalty function based algorithm for constrained optimization. In: IEEE Congress on Evolutionary Computation, pp. 246-253.
Tvrdík, J., 2009. Adaptation in differential evolution: a numerical comparison. Appl. Soft Comput. 9 (3), 1149-1155.
Zhou, K., Doyle, J.C., Glover, K., 1996. Robust and Optimal Control. Prentice Hall.