Engineering Applications of Artificial Intelligence 19 (2006) 927–938 www.elsevier.com/locate/engappai
Multiobjective controller design handling human preferences

Miguel A. Martínez, Javier Sanchis, Xavier Blasco

Grupo de Control Predictivo y Optimización Heurística (CPOH), Departamento de Ingeniería de Sistemas y Automática, Universidad Politécnica de Valencia, Camino de Vera 14, 46022-Valencia, Spain

Received 30 June 2005; received in revised form 23 January 2006; accepted 29 January 2006. Available online 23 March 2006.
Abstract

Trends in controller design point to the integration of several objectives to achieve new levels of performance. Moreover, it is easy to state the controller design problem as an optimization problem. Therefore, future improvements are likely to be based on an adequate formulation and resolution of the multiobjective optimization problem. The multiobjective optimization strategy called physical programming provides controller designers with a flexible tool to express design preferences with a 'physical' meaning. For each objective (settling time, overshoot, disturbance rejection, etc.) preferences are established through categories such as desirable, tolerable, unacceptable, etc., to which numerical values are assigned. The problem is normalized and converted into a single-objective optimization problem, but it normally results in a multimodal problem that is very difficult to solve. Genetic algorithms provide an adequate solution to this type of problem and open new possibilities in controller design and tuning.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Robust control; Multiobjective optimization; Controller design
1. Introduction

In most cases controller design has to satisfy a set of conflicting specifications. For instance, high performance is not compatible with a controller that is robust to process variations or with a bounded control effort. Therefore, the design of a controller can be understood as the search for the best trade-off among all specifications, and multiobjective optimization (MO) seems a reasonable alternative. The solution to an MO problem is normally not unique: a solution that is best for all objectives does not exist. There is a set of good solutions, referred to as non-dominated solutions (none is better for all objectives), that defines the Pareto set and the Pareto front (the objective values of the Pareto set solutions). Several techniques have been developed to obtain the Pareto set (Miettinen, 1998; Coello Coello et al., 2002). Once the Pareto set is obtained, the following step is usually the selection of a single solution.

(This research has been partially financed by DPI2004-08383-C03-02 and DPI2005-07835, MEC (Spain)-FEDER.)
This is a subjective and non-trivial procedure that depends on designer preferences and is normally based on Pareto front values. Decision maker (DM) algorithms focus on helping designers in this task. A traditional way to solve an MO problem (including the DM) is to translate it into a single-objective problem based on the weighted sum of all the objectives. Weights are generally adjusted by a trial-and-error procedure, but it is difficult for designers to translate their knowledge and preferences into them. To overcome this disadvantage, the physical programming (PP) methodology (Messac, 1996) formulates design specifications in a language that is intuitive and understandable for designers. Preferences for each objective (settling time, maximum control effort, etc.) are specified in a flexible and natural way, by means of the so-called Class Functions and ranges of preference. All settings of the MO and DM problems become more transparent for the designer, who only needs an algorithm to compute the objectives and to define the ranges of preference for each objective (this is not a limitation: a way to compute the objectives and the ranges of preference is required regardless of the design methodology selected). Notice that the resulting optimization problem can be multimodal (several local minima) and requires an adequate optimization technique. Genetic Algorithms (GAs) provide good solutions, improving on previous implementations.

This paper is organized as follows. Section 2 presents the PP methodology and the concepts associated with the Class Functions, which allow designers to express their preferences in an understandable 'physical' way. Section 3 describes the nonlinear optimization technique used to solve the MO problem: genetic algorithms (GAs). Section 4 describes an example (the ACC robust control benchmark) used to illustrate the benefits of the proposed methodology (PP + GA) and the specifications for this problem. Finally, Section 5 presents the results and compares them with those obtained by other authors and techniques.
2. Physical programming

PP is a methodology for solving an MO problem that includes information about the preferences for each objective and converts the problem into a single-objective problem. To this end, PP introduces innovations on how to include the designer's knowledge and preferences. In a first step, PP maps the designer's knowledge about the physical variables of the problem, and the values he desires for them, onto previously established Class Functions. In this step all variables are normalized and the preferences are included. The next step consists of aggregating all the Class Functions into a single function and using an optimization technique to solve this new problem. Normally the new problem is multimodal, and the optimization technique chosen has to be powerful enough to solve it. GAs have proved to perform well on this type of problem and are used in this paper.

2.1. Class Function

For each objective or specification i, the designer has to provide a way of evaluating it by means of a function $g_i(x)$, where $x$ is the vector of optimization parameters. For each specification i, a Class Function $\bar{g}_i(g_i(x))$ is defined, which is the function to be minimized for that objective. The shape of each Class Function $\bar{g}_i$ is a key point for expressing the designer's preferences. Furthermore, the mapping of the objectives $g_i$ onto their respective Class Functions $\bar{g}_i$ has a normalizing effect, since the different physical units, with different scales, are transformed into a dimensionless scale. These Class Functions can be hard or soft depending on the type of objective involved in the problem. To cover all common situations, the list of classes is:

Soft classes
  Class-1s: smaller is better.
  Class-2s: larger is better.
  Class-3s: a value is better.
  Class-4s: a range is better.

Hard classes
  Class-1h: must be smaller.
  Class-2h: must be larger.
  Class-3h: must be equal.
  Class-4h: must be in a range.
Once a Class Function is selected for an objective, the designer has to choose the $g_i^k$ values that establish the ranges of preference. For example, for a class-1s type, the $g_i^k$ values are $g_i^1 \ldots g_i^5$ and the associated ranges are:

  Highly desirable (HD): $g_i \le g_i^1$
  Desirable (D): $g_i^1 \le g_i \le g_i^2$
  Tolerable (T): $g_i^2 \le g_i \le g_i^3$
  Undesirable (U): $g_i^3 \le g_i \le g_i^4$
  Highly undesirable (HU): $g_i^4 \le g_i \le g_i^5$
  Unacceptable (UNA): $g_i \ge g_i^5$
These ranges are defined in physical units, that is, the designer expresses his preferences in a natural and intuitive way. Figs. 1 and 2 show examples of the different soft classes for different choices of preference ranges. In particular, the class-1s function of Fig. 1 could correspond to an overshoot specification $\delta$ in a controller design problem:
  Highly desirable: $\delta \le 10\%$
  Desirable: $10\% \le \delta \le 20\%$
  Tolerable: $20\% \le \delta \le 30\%$
  Undesirable: $30\% \le \delta \le 40\%$
  Highly undesirable: $40\% \le \delta \le 50\%$
  Unacceptable: $\delta \ge 50\%$
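As an illustration only (not part of the original formulation), the boundaries of such a class-1s specification can be stored as a plain list and any measured value mapped to its preference category; the names below are hypothetical:

```python
from bisect import bisect_left

# Hypothetical encoding of the class-1s preference ranges for the
# overshoot example above (boundaries g_i^1 ... g_i^5, in %).
OVERSHOOT_BOUNDS = [10.0, 20.0, 30.0, 40.0, 50.0]
CATEGORIES = ["HD", "D", "T", "U", "HU", "UNA"]

def preference_category(value, bounds=OVERSHOOT_BOUNDS):
    """Return the preference label of a class-1s objective value."""
    # bisect_left counts how many boundaries the value has crossed.
    return CATEGORIES[bisect_left(bounds, value)]

print(preference_category(12.0))   # 'D'   (10% <= overshoot <= 20%)
print(preference_category(55.0))   # 'UNA'
```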
All other soft classes are defined in a similar way. Regardless of the soft class type, the image of $\bar{g}_i$ is strictly positive. The images $\bar{g}^k$ of the $g_i^k$ values play a key role and take the same value for all soft Class Functions of the problem (therefore it is not necessary to distinguish them with a subscript $i$). This characteristic has a normalizing effect on the dimensionless space $\bar{g}_i$ for all specifications $g_i$.
Fig. 1. Classes 1s–2s.
For hard classes only two values are set, possible or impossible, and they represent hard constraints in a classical optimization problem. For example, the class-1h function of Fig. 3 could express a settling-time specification in a controller design problem: $t_{est}(98\%) \le 50$ s.
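A hard class is therefore a plain feasibility test. A minimal sketch (the names and the rejection strategy are assumptions, not taken from the paper):

```python
def class_1h_feasible(settling_time_s, limit_s=50.0):
    """Class-1h hard constraint: the value must be smaller than the limit."""
    return settling_time_s <= limit_s

# One common way to handle this in the optimization loop: a candidate that
# violates any hard class is rejected (or heavily penalized) before the
# soft-class aggregation is evaluated.
```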
2.2. Aggregated function

The Class Functions capture the designer's wishes, expressed in physical units, for each specification of the multiobjective problem. The image of each objective $g_i(x)$ through the selected Class Function, $\bar{g}_i(g_i(x))$, is a dimensionless variable. Thanks to the Class Functions, the problem is therefore moved to a different space whose variables are independent of the original multiobjective problem. The aggregated function combines all the normalized objectives $\bar{g}_i(g_i(x))$ into a single function; in this way the multiobjective problem is converted into a single-objective optimization problem. PP establishes the following aggregated function:

$$J(x) = \log_{10}\left[\frac{1}{n_{sc}\,\bar{g}^5}\sum_{i=1}^{n_{sc}} \bar{g}_i(g_i(x))\right], \qquad (1)$$

where $n_{sc}$ is the number of soft classes. The minimization problem to solve is then

$$x^{*} = \arg\left[\min_{x} J(x)\right] \qquad (2)$$
Fig. 2. Classes 3s–4s.
subject to the hard-class constraints and to the constraints on $x$. Remember that $x$ is the parameter vector and it can be subject to design constraints. Problem (2) is, in general, a nonlinear optimization problem that could be solved using several optimization techniques. The logarithmic formulation of (1) attempts to expand the search range in order to reduce the number of iterations of the selected optimization algorithm.
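For illustration, a minimal sketch of how the aggregation of Eq. (1), as reconstructed above, might be evaluated; it assumes the class functions have already been built as Python callables, and all names here are hypothetical:

```python
import math

def aggregated_objective(x, objectives, class_functions, g_bar_5):
    """Aggregated function J(x) of Eq. (1).

    objectives      -- list of callables g_i(x) returning physical values
    class_functions -- list of callables mapping each g_i(x) onto the
                       dimensionless preference scale (the class functions)
    g_bar_5         -- common image of the g_i^5 boundaries
    """
    n_sc = len(objectives)
    total = sum(cf(g(x)) for g, cf in zip(objectives, class_functions))
    return math.log10(total / (n_sc * g_bar_5))
```

Hard-class constraints and bounds on $x$ would be handled separately by the optimizer, as stated after Eq. (2).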
Fig. 3. Classes 1h–4h.
Function (1) is based on the sum of as many terms as there are soft classes, with the distinguishing characteristic that all of them weigh equally. Moreover, because of the class functions, all terms are independent of the original problem specification and show a similar shape. The relationship of $x$ with $g_i(x)$ (problem dependent) and with $\bar{g}_i(g_i(x))$ (class-function dependent) does not guarantee a single minimum of Eq. (1). For example, a simple problem with a single quadratic specification in the aggregated function generates two minima, independently of the class function selected (Messac, 1996). Of course, with several specifications the problem becomes more complex and, most probably, has multiple minima; therefore, the quality of the solution depends on the optimization technique used.

2.3. Mathematical Class Function representation

As mentioned above, the presence of local minima in (1) depends on the relationships $g_i(x)$, imposed by the specific problem, and $\bar{g}_i(g_i(x))$. The latter can be defined so as to satisfy several properties, thus making the optimization algorithm work more efficiently. With these considerations, the soft class functions have to be defined with the following properties (Messac, 1996):

  Strictly positive.
  First derivative continuity.
  Strictly positive second derivative.
  For class 1s: strictly positive first derivative and $\lim_{g_i \to -\infty} \bar{g}_i(g_i) = 0$.
  For class 2s: strictly negative first derivative and $\lim_{g_i \to +\infty} \bar{g}_i(g_i) = 0$.
  For classes 3s and 4s: the first derivative has only one zero.

All these properties have to be satisfied independently of the preference range selection. Considering these properties, it is possible to develop a method to build the class functions. Next, the method for building class 1s is presented; it can be extended to the other classes.

Definitions:

(1) Interval extreme coordinates are defined as $(g_i^k, \bar{g}^k)$, $k = [1 \ldots 5]$. $\qquad$ (3)
(2) Points inside each interval are defined as $(g_{i(k)}, \bar{g}_{i(k)})$, $k = [1 \ldots 5]$. $\qquad$ (4)

In this case $\bar{g}_{i(k)}$ has a subscript $i$ because its value depends on specification $i$; this does not happen with the extreme points. The function shape (Fig. 4) can be represented by two types of generic curves.

Fig. 4. Class function 1s for a generic specification i.

The first is a decreasing exponential curve describing interval 1 ($g_{i(1)} \le g_i^1$):

$$\bar{g}_{i(1)} = \bar{g}^1 \exp\left[\frac{s_i^1}{\bar{g}^1}\left(g_{i(1)} - g_i^1\right)\right]. \qquad (5)$$

For intervals 2, 3, 4 and 5 a spline segment is selected, defined by the extreme points $(g_i^k, \bar{g}^k)$ and $(g_i^{k-1}, \bar{g}^{k-1})$ and the slopes at these points ($s_i^{k-1}$, $s_i^k$). To guarantee strictly positive second derivatives (for interval $k \in [2 \ldots 5]$), the parameters $a$ and $b$ have to be strictly positive in the following equation:

$$\frac{d^2 \bar{g}_{i(k)}}{d g_{i(k)}^2} = (\lambda_i^k)^2\left[a\,(\xi_{i(k)})^2 + b\,(\xi_{i(k)} - 1)^2\right], \qquad (6)$$

$$\lambda_i^k = g_i^k - g_i^{k-1}, \qquad (7)$$

$$\xi_{i(k)} = \frac{g_{i(k)} - g_i^{k-1}}{\lambda_i^k}, \qquad (8)$$

$$0 \le \xi_{i(k)} \le 1. \qquad (9)$$

The first derivative is obtained by integrating (6), and integrating the first derivative gives the spline segment:

$$\frac{d \bar{g}_{i(k)}}{d g_{i(k)}} = (\lambda_i^k)^3\left[\frac{a}{3}(\xi_{i(k)})^3 + \frac{b}{3}(\xi_{i(k)} - 1)^3\right] + c, \qquad (10)$$
$$\bar{g}_{i(k)} = (\lambda_i^k)^4\left[\frac{a}{12}(\xi_{i(k)})^4 + \frac{b}{12}(\xi_{i(k)} - 1)^4\right] + c\,\lambda_i^k\,\xi_{i(k)} + d. \qquad (11)$$

For every interval $k \in [2 \ldots 5]$, the constants $a, b, c, d$ of Eq. (11) can be obtained:

  using the designer's preferences for $g_i$, by means of the selection of the extremes $g_i^{k-1}$ and $g_i^k$;
  applying (11) to obtain the images $\bar{g}^{k-1}$ and $\bar{g}^k$ of the extreme values;
  applying (10) to obtain the slopes $s_i^{k-1}$ and $s_i^k$ at the extreme points;

giving

$$a = \frac{3\left[3 s_i^k + s_i^{k-1}\right] - 12\,\tilde{s}_i^k}{2(\lambda_i^k)^3}, \qquad (12)$$

$$b = \frac{12\,\tilde{s}_i^k - 3\left[s_i^k + 3 s_i^{k-1}\right]}{2(\lambda_i^k)^3}, \qquad (13)$$

$$c = 2\,\tilde{s}_i^k - \tfrac{1}{2}\left[s_i^k + s_i^{k-1}\right], \qquad (14)$$

$$d = \bar{g}^{k-1} - \lambda_i^k\,\frac{4\,\tilde{s}_i^k - \left[s_i^k + 3 s_i^{k-1}\right]}{8}, \qquad (15)$$

where

$$\tilde{s}_i^k = \frac{\bar{g}^k - \bar{g}^{k-1}}{\lambda_i^k} = \frac{\tilde{\bar{g}}^k}{\lambda_i^k}. \qquad (16)$$

Using Eqs. (12) and (13) for $a$ and $b$, it is possible to determine the slope limits for each interval in such a way that $a$ and $b$ are always strictly positive:

$$(s_i^k)_{\max} = 4\,\tilde{s}_i^k - 3 s_i^{k-1}, \qquad (17)$$

$$(s_i^k)_{\min} = \frac{4\,\tilde{s}_i^k - s_i^{k-1}}{3}, \qquad (18)$$

$$\Delta s_i^k = (s_i^k)_{\max} - (s_i^k)_{\min} = \tfrac{8}{3}\left[\tilde{s}_i^k - s_i^{k-1}\right]. \qquad (19)$$

This last expression allows the successive slopes to be determined for every point $k$ of each specification $g_i$.

2.4. Algorithm for Class Function construction

For a number of specifications $n_{sc}$, the extreme point images $\bar{g}^k$ and the associated slopes $s_i^k$ are obtained as follows. For every specification $i \in [1 \ldots n_{sc}]$, do:

(1) Initialization (initially $\beta_i$ takes a small positive value; during the algorithm execution it is increased to ensure that $a$ and $b$ are positive; $\alpha$ also takes a small initial value):
    (a) $\beta_i = 1.5$,
    (b) $\alpha = 0.1$,
    (c) $\tilde{\bar{g}}^1 = 0.1$,
    (d) $\bar{g}^1 = \tilde{\bar{g}}^1$.
(2) For $k \in [2 \ldots 5]$, do
    (a) $\tilde{\bar{g}}^k = \beta_i\, n_{sc}\, \tilde{\bar{g}}^{k-1}$,
    (b) $\bar{g}^k = \bar{g}^{k-1} + \tilde{\bar{g}}^k$,
    (c) $\lambda_i^k = g_i^k - g_i^{k-1}$,
    (d) $\tilde{s}_i^k = \tilde{\bar{g}}^k / \lambda_i^k$.
(3) $s_i^1 = \alpha\, \tilde{s}_i^2$ (slope at the point $(g_i^1, \bar{g}^1)$).
(4) For $k \in [2 \ldots 5]$, do
    (a) $s_i^k = (s_i^k)_{\min} + \alpha\, \Delta s_i^k$,
    (b) calculation of the parameters $a$ and $b$.
(5) If any $a$ or $b$ is negative, set $\beta_i = \beta_i + 0.5$, go to step 2, and repeat until $a$ and $b$ are positive. With the maximum value of $\beta_i$, all the $\bar{g}^k$ values are fitted.

Notice that the correct application of the algorithm and its subsequent adjustments enable all specifications $g_i$ to have the same images $\bar{g}^k$ at the extreme points $g_i^k$. For class 2s the algorithm has to be modified at step 4a:

$$s_i^k = -(s_i^k)_{\max} + \alpha\, \Delta s_i^k. \qquad (20)$$
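The following sketch (illustrative only, not the authors' code) follows the class-1s loop of Section 2.4 for one set of specifications with strictly increasing boundaries; the exponential segment of Eq. (5) and the per-interval polynomial of Eq. (11) are omitted, and only the positivity of $a$ and $b$ from Eqs. (12)-(13) is checked:

```python
def build_class1s(boundaries, alpha=0.1, beta0=1.5):
    """boundaries: list of per-specification lists [g1, ..., g5] (class 1s)."""
    n_sc = len(boundaries)
    beta = beta0
    while True:
        # Steps 1-2: common images g_bar^k grown geometrically by beta * n_sc.
        g_tilde = [0.1]                        # increments  g~^k
        g_bar = [0.1]                          # images      g_bar^k
        for _ in range(4):
            g_tilde.append(beta * n_sc * g_tilde[-1])
            g_bar.append(g_bar[-1] + g_tilde[-1])

        ok, slopes = True, []
        for g in boundaries:                   # one specification at a time
            lam = [g[k] - g[k - 1] for k in range(1, 5)]           # Eq. (7)
            s_t = [g_tilde[k] / lam[k - 1] for k in range(1, 5)]   # Eq. (16)
            s = [alpha * s_t[0]]               # Step 3: slope at (g^1, g_bar^1)
            for k in range(1, 5):
                s_max = 4 * s_t[k - 1] - 3 * s[-1]                 # Eq. (17)
                s_min = (4 * s_t[k - 1] - s[-1]) / 3               # Eq. (18)
                s_k = s_min + alpha * (s_max - s_min)              # Step 4a
                a = (3 * (3 * s_k + s[-1]) - 12 * s_t[k - 1]) / (2 * lam[k - 1] ** 3)
                b = (12 * s_t[k - 1] - 3 * (s_k + 3 * s[-1])) / (2 * lam[k - 1] ** 3)
                ok = ok and a > 0 and b > 0
                s.append(s_k)
            slopes.append(s)
        if ok:
            return g_bar, slopes
        beta += 0.5                            # Step 5: retry with a larger beta
```

The loop mirrors step 5: whenever some spline interval cannot be made convex, the common images $\bar{g}^k$ are regenerated with a larger $\beta$ for all specifications at once.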
To build classes 3s and 4s, the class 1s and 2s algorithms are used independently. For class 4s a cubic function defines the 'highly desirable' (HD) region; the parameters of the cubic are obtained from the values and slopes at the extreme points of the interval. Notice that, thanks to this algorithm, the designer does not take part in building the Class Functions themselves: he only has to choose, for each objective, its class and its $g_i^k$ values to establish the ranges of preference (tolerable, desirable, etc.). The algorithm then builds the splines and their curvature in an overall way, considering all the objectives at the same time.

3. Nonlinear optimization with genetic algorithms

Minimization of the aggregated function with constraints can be a nonlinear multimodal optimization problem. To solve it, a suitable optimization technique is necessary. Genetic algorithms (GAs) have proved to perform well in this type of situation; for this reason, GAs have been selected in this article. A GA is an optimization technique that searches for the solution of the optimization problem by imitating species' evolutionary mechanisms (Goldberg, 1989; Holland, 1975). In this type of algorithm, a set of individuals (called a population) changes from one generation to another (evolution), adapting better to the environment. In an optimization problem there is a function to optimize (the cost function) and a zone to search (the search space). Every point of the search space has an associated value of the function, and the objective is to find the point that optimizes it. In the translation of the optimization problem to a GA, the different points of the search space are the different individuals of the population. Similarly to natural genetics, every individual is characterized by a chromosome; in the optimization problem, this chromosome
consists of the coordinates of the point in the search space, $x = (x_1, x_2, \ldots, x_n)$. Following the simile, each coordinate corresponds to a gene. The cost function value of an individual has to be understood as that individual's level of adaptation to the environment. For example, in a problem of minimizing a function $J(x)$, one individual is considered better adapted than another if it has a lower cost function value. Once the relationship is established between the chromosomes (the representation of individuals) and the search space points, and between the adaptation level and the cost function, it is necessary to describe an evolutionary mechanism, that is, the rules for changing the population throughout the generations. Genetic operators are in charge of such evolution. A general GA evolutionary mechanism may be described as follows. From an initial population (randomly generated), the next generation is obtained as follows:

(1) Certain individuals are selected for the next generation. This selection depends on the adaptation level (cost function value): individuals with a lower $J(x)$ value have a higher probability of being selected (of surviving); for a maximization problem, larger values are better.
(2) An exchange of information between individuals is performed by means of the so-called crossover, which produces an exchange of genes between chromosomes or, in other words, an exchange of coordinates between points. The rate of individuals undergoing crossover is fixed by $P_c$ (crossover probability). Mixing chromosomes in this way is a mechanism to explore the search space. It is considered an oriented exploration because it is based on existing information from the parent individuals and tries to extract potential qualities hidden in the population.
(3) Certain individuals of the new generation are subjected to a random variation of their genes (a random variation of coordinates). This operation is called mutation, and the rate of individuals to mutate is set by the mutation probability $P_m$. Mutation also explores the search space, but in this case the exploration is not oriented (it is totally random) and aims to cover the zones not explored by the crossover operation.

Within this general framework there are several variations in the GA implementation: different gene codifications, different implementations of the genetic operators, new operators (Goldberg, 1989; Michalewicz, 1996), etc. In this paper the GA implementation has the following characteristics:

Real value codification (Michalewicz, 1996), i.e., each gene has a real value, so the chromosome is an array of real values.
$J(x)$ is not used directly as the cost function: a 'ranking' operation is performed (Blasco, 1999; Back, 1996). The individuals are first sorted in decreasing order of $J(x)$, and then $J(x)$ is replaced by its position in that ordering, so each individual receives a new cost function value $J'(x)$. The ranking operation prevents clearly dominant individuals from prevailing too soon and thus exhausting the algorithm. Table (21) below shows an example for a four-individual population:

       J(x)     J'(x)
x1     10.51    1
x2     0.32     4
x3     1.25     3
x4     6.21     2
                          (21)
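A minimal sketch of this ranking transform (illustrative only; for a minimization problem the worst individual receives rank 1 and the best receives rank N):

```python
def rank_fitness(costs):
    """Replace raw costs J(x) by rank-based fitness J'(x).

    The individual with the largest J gets 1 and the smallest gets len(costs),
    reproducing table (21): [10.51, 0.32, 1.25, 6.21] -> [1, 4, 3, 2].
    """
    order = sorted(range(len(costs)), key=lambda i: costs[i], reverse=True)
    fitness = [0] * len(costs)
    for position, idx in enumerate(order, start=1):
        fitness[idx] = position
    return fitness

print(rank_fitness([10.51, 0.32, 1.25, 6.21]))   # [1, 4, 3, 2]
```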
Selection is made with the operator known as stochastic universal sampling (SUS) (Baker, 1987). The survival probability of an individual, $P(x_i)$, is guaranteed to be given by (22):

$$P(x_i) = \frac{J'(x_i)}{\sum_{j=1}^{N_{ind}} J'(x_j)}, \qquad (22)$$
where $N_{ind}$ is the number of individuals. For example, for the population of table (21), the survival probabilities and the expected number of individuals in the new population are shown in the following table (23):

       P(xi)   No. of expected individuals
x1     0.1     0.4
x2     0.4     1.6
x3     0.3     1.2
x4     0.2     0.8
                          (23)
In this example, the SUS operator guarantees one individual of type x2 and another of type x3. The two other new individuals could be of any type, with a greater probability of being of type x4 or x2.

For crossover, the intermediate recombination operator is used (Mühlenbein and Schlierkamp-Voosen, 1993). Offspring chromosomes $x'_1$ and $x'_2$ are obtained through the following operation on the parent chromosomes $x_1$ and $x_2$:

$$x'_1 = \alpha_1 x_1 + (1 - \alpha_1) x_2, \qquad x'_2 = \alpha_2 x_2 + (1 - \alpha_2) x_1, \qquad \alpha_1, \alpha_2 \in [-d, 1 + d].$$

The operation can be performed on the whole chromosome or on each gene separately. In the latter case the random parameters $\alpha_1$ and $\alpha_2$ have to be generated for each gene, which increases the search capabilities but at a higher computational cost. The implemented GA has been adjusted as follows:

  $\alpha_1 = \alpha_2$, generated once for each chromosome.
  $d = 0$.
  The crossover probability is set to $P_c = 0.8$.
  The mutation operation is applied with probability $P_m = 0.1$, using a normal distribution with standard deviation set to 20% of the search space range.
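A compact sketch of these two operators under the settings above (illustrative only; the array shapes, the bounds arguments and the clipping to the search space are assumptions):

```python
import numpy as np

rng = np.random.default_rng()

def intermediate_crossover(parent1, parent2, d=0.0):
    """Intermediate recombination with a single alpha per chromosome."""
    alpha = rng.uniform(-d, 1.0 + d)
    child1 = alpha * parent1 + (1.0 - alpha) * parent2
    child2 = alpha * parent2 + (1.0 - alpha) * parent1
    return child1, child2

def mutate(chromosome, lower, upper, p_m=0.1):
    """Gaussian mutation: sigma is 20% of the search-space range per gene."""
    if rng.random() < p_m:
        sigma = 0.2 * (np.asarray(upper) - np.asarray(lower))
        noise = rng.normal(0.0, sigma, size=np.shape(chromosome))
        chromosome = np.clip(chromosome + noise, lower, upper)
    return chromosome
```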
4. Robust control benchmark solution

4.1. Benchmark description

Wie and Bernstein (1990, 1991, 1992a) proposed a series of benchmark problems for robust control in which the controller designer must achieve a trade-off between maximizing the stability and performance robustness of the system and minimizing the control effort. Fig. 5 shows the process described in the benchmark: a flexible structure of two masses connected by a spring. The state space model is

$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{x}_4 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -k/m_1 & k/m_1 & 0 & 0 \\ k/m_2 & -k/m_2 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1/m_1 \\ 0 \end{bmatrix} u + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1/m_2 \end{bmatrix} w,$$

$$y = x_2 + v,$$

where $x_1$ and $x_3$ are the position and speed of mass 1, respectively, and $x_2$ and $x_4$ are the position and speed of mass 2, respectively.
Fig. 5. A two mass and spring system with uncertainties in the parameters.
The nominal values of the two masses $m_1$ and $m_2$ and of the spring constant $k$ are 1. The control action $u$ is the force applied to mass 1, and the controlled variable $y$ is the position of mass 2 affected by measurement noise. Moreover, there is a disturbance $w$ acting on mass 2. Wie and Bernstein (1992b) proposed three control scenarios, but in this study only the first is considered:

  The closed-loop system has to be stable for $m_1 = m_2 = 1$ and $k \in [0.5 \ldots 2]$.
  The maximum settling time for the nominal system ($m_1 = m_2 = k = 1$) has to be 15 s for a unit impulse in the perturbation $w$ at time $t = 0$.
  The phase and gain margins must be reasonable for a reasonable bandwidth.
  The closed loop must be relatively insensitive to high-frequency noise in the measurements.
  The control effort must be minimized.
  The controller complexity must be minimized.

Note that some of these criteria, e.g. the parametric uncertainties and the nominal dynamic performance, are completely determined, while others, e.g. measurement noise, control effort, etc., are open to the designer's interpretation.

4.2. Design objectives

The numerator and denominator coefficients of the controller form the parameter vector $x$ to be obtained by optimization. The design specifications and objectives ($g(x)$) have to be quantities that the designer wishes to maximize, minimize, set to a specific value, etc. For the robust control benchmark, six functions that supply specification values for controller design are used:

(1) Nominal settling time ($t_{est}^{nom}$): Maintaining the interpretation made in Stengel and Marrison (1992) and Messac and Wilson (1999), it is assumed that the controlled variable has reached steady state, for a unit impulse in the perturbation $w$, when it stays within a $\pm 0.1$ band.
(2) Worst-case settling time ($t_{est}^{max}$): With the above interpretation, it is the maximum settling time of a given controller evaluated in the worst case, $k = 0.5$ or $k = 2$.
(3) Robust stability and robust performance ($\mathrm{Re}(\lambda)_{max}$): Stengel and Marrison (1992) show that phase and gain margins for the worst case are poor indicators of robustness. Instead, the closed-loop poles for the worst case can be used and evaluated as

$$\mathrm{Re}(\lambda)_{max} = \max_{k \in [0.5 \ldots 2]} \mathrm{Re}\big(\lambda[A(k)]\big), \qquad (24)$$
where $A$ is the closed-loop system matrix.

(4) Noise sensitivity ($\mathrm{noise}_{max}$): For a specific frequency range, noise sensitivity is measured by the ratio of the noise
amplification with respect to a $-20$ dB/dec slope:

$$\mathrm{noise}_{max} = \max_{k \in [0.5, 2]} \left| \frac{u(j\omega)/v(j\omega)}{1/(j\omega)} \right|, \qquad \omega \in [100 \ldots 10\,000]. \qquad (25)$$

(5) Nominal control effort ($u_{nom}$): Maximum control action produced by a unit impulse in the disturbance for the nominal case.
(6) Maximum control effort ($u_{max}$): Maximum control action produced by a unit impulse in the disturbance when there are uncertainties.

There are thus six algorithms (or functions) that give the values described above for every combination of controller numerator and denominator coefficients. These functions can also be used to compare the performance with that obtained by other authors. The controller order (controller complexity) is selected a priori.
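As a hedged sketch of how one of these objective functions might be evaluated, the code below builds the two-mass-spring state matrices, closes the loop with a strictly proper controller given by its numerator and denominator coefficients, and approximates Eq. (24) on a grid of $k$ values. The feedback convention $u = K(s)\,y$, the grid density and the helper names are assumptions, not taken from the paper:

```python
import numpy as np

def plant(k, m1=1.0, m2=1.0):
    """State matrices of the two-mass-spring benchmark for spring constant k."""
    A = np.array([[0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [-k / m1, k / m1, 0, 0],
                  [k / m2, -k / m2, 0, 0]])
    Bu = np.array([[0.0], [0.0], [1.0 / m1], [0.0]])
    C = np.array([[0.0, 1.0, 0.0, 0.0]])      # y = x2
    return A, Bu, C

def controller_ss(num, den):
    """Controllable canonical realization of a strictly proper K(s) = num/den.

    den = [1, d_{n-1}, ..., d_0] (monic, highest power first); num is shorter
    than den and also given highest power first.
    """
    n = len(den) - 1
    d = np.array(den[1:], dtype=float)
    b = np.zeros(n)
    b[n - len(num):] = num
    Ac = np.zeros((n, n))
    Ac[:-1, 1:] = np.eye(n - 1)
    Ac[-1, :] = -d[::-1]
    Bc = np.zeros((n, 1)); Bc[-1, 0] = 1.0
    Cc = b[::-1].reshape(1, n)
    return Ac, Bc, Cc

def re_lambda_max(num, den, k_grid=np.linspace(0.5, 2.0, 151)):
    """Objective (3), Eq. (24): worst real part of the closed-loop poles."""
    Ac, Bc, Cc = controller_ss(num, den)
    worst = -np.inf
    for k in k_grid:
        A, Bu, C = plant(k)
        Acl = np.block([[A, Bu @ Cc], [Bc @ C, Ac]])   # u = K(s) y convention
        worst = max(worst, np.linalg.eigvals(Acl).real.max())
    return worst
```

A finer grid, an analytic worst-case search, or time-domain simulation of the same closed loop would be needed for the settling-time and control-effort objectives.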
Table 1
Preferences for a controller (2,3) design

                 g_i^1     g_i^2     g_i^3     g_i^4     g_i^5
Re(λ)max         -0.001    -0.0005   -0.0001   -5e-5     -1e-5
u_max             7         8         8.5       9         12
t_est^max         1e3       2e3       3e3       4e3       5e3
noise_max         5         10        30        40        50
u_nom             1         2         3         4         5
t_est^nom         13.5      14        14.5      17        21
Table 2
Preferences for a controller (3,4) design

                 g_i^1     g_i^2     g_i^3     g_i^4     g_i^5
Re(λ)max         -0.01     -0.005    -0.001    -0.0005   -0.0001
u_max             0.8       0.85      0.95      1         2
t_est^max         15        40        80        90        100
noise_max         1.8       2         2.2       2.5       3
u_nom             0.9       1.2       2         2.5       3
t_est^nom         14        14.2      14.4      14.6      15
4.3. Preferences and class functions

For comparison with other proposals, strictly proper controllers with transfer functions of different complexity have been designed. Class 1s functions have been selected for all the objectives. Tables 1–3 show the extreme values of the preference ranges for the six specifications and for the different controller orders $(n, m)$ (numerator and denominator polynomial orders). As can be observed, the selected extreme values are more restrictive as the controller complexity increases. As an example, Fig. 6 shows the class function for the nominal settling time ($t_{est}^{nom}$) that results from the preferences of Table 1. All the conditions are satisfied: strictly positive, first derivative continuity and strictly positive second derivative.

5. Results

In this section the results for the robust control benchmark using PP with GAs are discussed. Several works present solutions (Stengel and Marrison, 1992) but, for the purpose of comparison, attention is focused on the controller designs obtained with an application of PP (Messac and Wilson, 1999) based on classical nonlinear optimization techniques (considered very good solutions). The solution proposed in this paper goes beyond the solutions found in the literature. Table 4 shows six controllers of different complexity proposed by other authors and referenced as good solutions (some of them only for a subset of the objectives). PP combined with a GA was applied with the preferences indicated in Tables 1–3; the controllers obtained using this technique are presented in Table 5.
Table 3
Preferences for a controller (4,5) design

                 g_i^1     g_i^2     g_i^3     g_i^4     g_i^5
Re(λ)max         -0.01     -0.005    -0.001    -0.0005   -0.0001
u_max             0.85      0.90      1         1.5       2
t_est^max         14        16        18        21        25
noise_max         0.5       0.9       1.2       1.4       1.5
u_nom             0.5       0.7       1         1.5       2
t_est^nom         10        11        12        14        15
The comparison of the controllers has to be based on the analysis of the six proposed specifications. The numerical values of each specification for every controller are shown in Table 6, and Table 7 shows the preference range obtained for each controller. The analysis of Tables 6 and 7 allows us to conclude that:

  The controllers obtained by PP (Messac and Wilson, 1999) are better than their counterparts B23, W34 and J45 designed using other methods. Nevertheless, for the (4,5) controllers the improvement is not general: J45 is better in the $t_{est}^{max}$ and $t_{est}^{nom}$ specifications.
  The controllers obtained using PP with a GA are the best for all specifications. Notice that performance is improved even with controllers of lower complexity: for example, PPGA23*, and PPGA23** and PPGA34**, are obtained with the preferences corresponding to the higher order controllers (3,4) and (4,5), respectively.

These improvements are a consequence of the good quality and tuning of the optimization technique: the solutions proposed by Messac and Wilson (1999) are local minima (controller M34 is not even a local minimum).
Fig. 6. Class-1s function and its first and second derivatives for $t_{est}^{nom}$.
Table 4
Different solutions for the robust control benchmark

Proposal                                    (m,n)   Transfer function
Byrns (B23) (Byrns and Calise, 1990)        (2,3)   (40.4s^2 + 110.696s + 33.7946) / (s^3 + 164.9258s^2 + 152.6928s + 140.0193)
Messac (M23) (Messac and Wilson, 1999)      (2,3)   (12.5s^2 + 12.8375s + 3.1211) / (s^3 + 21.8124s^2 + 26.44s + 30.1605)
Wie (W34) (Wie and Bernstein, 1990)         (3,4)   (2.13s^3 - 5.327s^2 + 6.273s + 1.015) / (s^4 + 4.68s^3 + 12.94s^2 + 18.36s + 12.68)
Messac (M34) (Messac and Wilson, 1999)      (3,4)   (0.66s^3 - 4.101s^2 + 4.558s + 0.627) / (s^4 + 3.416s^3 + 10.15s^2 + 13.52s + 9.281)
Jayasuriya (J45) (Jayasuria et al., 1992)   (4,5)   (302e3 s^4 + 1.052e6 s^3 + 1.157e6 s^2 + 8.401e5 s + 2.978e5) / (s^5 + 47.55s^4 + 2234s^3 + 3.152e4 s^2 + 1.783e5 s + 3.691e5)
Messac (M45) (Messac and Wilson, 1999)      (4,5)   (1.14s^4 - 6.212s^3 + 7.353s^2 + 1.037s + 0.0154) / (s^5 + 5.481s^4 + 15.01s^3 + 23.41s^2 + 14.31s + 0.1629)
Figs. 7 and 8 show the process output for a unit impulse in the disturbance. PPGA34 offers better performance than the other controllers of the same complexity. This better performance is obtained without increasing the control effort excessively (Figs. 9 and 10), because PPGA34 also considers preferences for this specification. Regarding the measurement of noise sensitivity (in the range $[100 \ldots 10\,000]$ rad/s), Fig. 11 shows that the PPGA34 controller provides more attenuation than its counterparts W34 and PP34.
Table 5
Controllers obtained by PP with GA

Proposal    (m,n)   Preferences   Transfer function
PPGA23      (2,3)   (2,3)         (0.0025s^2 + 0.8747s + 0.1176) / (s^3 + 2.0767s^2 + 2.6282s + 1.3372)
PPGA34      (3,4)   (3,4)         (0.3226s^3 - 2.276s^2 + 4.79s + 0.7539) / (s^4 + 2.075s^3 + 8.664s^2 + 11.32s + 7.825)
PPGA45      (4,5)   (4,5)         (0.1513s^4 - 2.9872s^3 + 7.9883s^2 + 1.8056s + 0.101) / (s^5 + 3.6964s^4 + 14.1408s^3 + 20.0359s^2 + 13.491s + 0.9424)
PPGA23*     (2,3)   (3,4)         (1.5704s^2 + 3.1911s + 0.52) / (s^3 + 5.2347s^2 + 7.2333s + 5.2436)
PPGA23**    (2,3)   (4,5)         (1.0122s^2 + 2.9275s + 0.4813) / (s^3 + 4.8721s^2 + 6.476s + 4.5968)
PPGA34**    (3,4)   (4,5)         (0.6069s^3 - 0.1801s^2 + 3.6853s + 0.5740) / (s^4 + 3.5681s^3 + 9.3442s^2 + 10.1662s + 5.6949)
Table 6
Specification comparison for each controller

Controller   Re(λ)max   u_max    t_est^max   noise_max   u_nom    t_est^nom
B23           0.1416     —        —           —           0.5127   20.8100
M23          -0.0025     0.5669   21.000      12.2448     0.4548   20.5030
PPGA23       -0.0251     0.4381   25.1250     0.0091      0.3309   11.1838
W34          -0.0427     0.6793   22.125      2.1317      0.5595   16.7756
M34          -0.0542     0.6681   18.375      0.6620      0.5127   13.3580
PPGA34       -0.0166     0.7194   11.625      0.3241      0.5872   10.8731
PPGA23*      -0.0219     0.5259   15.000      1.5698      0.4170   10.8731
J45          -0.2254     47.76    6.375       337072      0.4450   6.21
M45          -0.0168     0.5873   18.375      1.1424      0.4622   12.1157
PPGA45       -0.0157     0.5581   12.0        0.1551      0.4664   10.5624
PPGA34**     -0.0233     0.5461   15.0        0.6075      0.4475   10.5624
PPGA23**     -0.0052     0.5223   15.25       1.0121      0.4126   10.8731

Fig. 7. Nominal settling time for closed loop with W34, PP34 and PPGA34.
Table 7
Range comparison obtained for each controller

Controller   Re(λ)max   u_max   t_est^max   noise_max   u_nom   t_est^nom
B23          UNA        UNA     UNA         UNA         HD      HU
M23          HD         HD      HD          T           HD      HU
PPGA23       HD         HD      HD          HD          HD      HD
W34          HD         HD      D           T           HD      U
M34          HD         HD      D           HD          HD      HD
PPGA23*      HD         HD      HD          HD          HD      HD
PPGA34       HD         HD      HD          HD          HD      HD
J45          HD         U       HD          U           HD      HD
M45          HD         HD      U           T           HD      U
PPGA23**     D          HD      D           T           HD      D
PPGA34**     HD         HD      D           D           HD      D
PPGA45       HD         HD      HD          HD          HD      D

HD-Highly Desirable, D-Desirable, T-Tolerable, U-Undesirable, HU-Highly Undesirable, UNA-Unacceptable.

Fig. 8. Worst case settling time ($k = 0.5$) for closed loop with W34, PP34 and PPGA34.
Fig. 9. Nominal control action for W34, PP34 and PPGA34.

Fig. 10. Control action in the worst case ($k = 0.5$) for W34, PP34 and PPGA34.

Fig. 11. Representation of noise sensitivity in the range $[100 \ldots 10\,000]$ rad/s for W34, PP34 and PPGA34.
6. Conclusions

A new methodology for controller design has been presented in this paper. It incorporates different types of specifications to design robust controllers in a simpler way. The incorporation of specifications and design preferences is based on the physical programming (PP) methodology: the multiobjective problem (the designer's preferences) is translated to a normalized domain by means of the so-called class functions, and all these class functions are then aggregated into a single cost function. Successful minimization is possible if an adequate optimization technique is selected. GAs have proved to be an efficient tool and open new possibilities in multiobjective controller design (Fleming and Purshouse, 2002). For the ACC robust control benchmark, all the solutions obtained improve on previous works, even those obtained with physical programming. Once the potential of PP with GAs has been demonstrated, future research will extend this technique to other controllers and other types of specifications. The methodology is also applicable to design domains other than controller design; the only condition is that the design be based on several optimization criteria.

Acknowledgements
We would like to thank the RDi Linguistic Assistance Office at the Universidad Politecnica of Valencia for their help in revising this paper.
References
Back, T., 1996. Evolutionary Algorithms in Theory and Practice. Oxford University Press, New York.
Baker, J.E., 1987. Reducing bias and inefficiency in the selection algorithms. In: Grefenstette, J.J. (Ed.), Proceedings of the Second International Conference on Genetic Algorithms. Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 14–21.
Blasco, F.X., 1999. Model based predictive control using heuristic optimization techniques. Application to non-linear and multivariable processes. Ph.D. Thesis, Universidad Politécnica de Valencia, Valencia (in Spanish).
Byrns, E.V., Calise, A.J., 1990. Fixed-order dynamic compensators for the H2/H-infinity benchmark problem. In: Proceedings of the American Control Conference, San Diego, CA, pp. 963–965.
Coello Coello, C.A., Van Veldhuizen, D.A., Lamont, G.B., 2002. Evolutionary Algorithms for Solving Multi-objective Problems. Kluwer Academic Publishers, Dordrecht.
Fleming, P.J., Purshouse, R.C., 2002. Evolutionary algorithms in control systems engineering: a survey. Control Engineering Practice 10, 1223–1241.
Goldberg, D.E., 1989. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA.
Holland, J.H., 1975. Adaptation in Natural and Artificial Systems. The University of Michigan Press, Ann Arbor.
Jayasuria, S., Yaniv, O., Nwokah, O.D.I., Chait, Y., 1992. Benchmark problem solution by quantitative feedback theory. Journal of Guidance, Control and Dynamics 15 (5), 1087–1093.
Messac, A., 1996. Physical programming: effective optimization for computational design. AIAA Journal 34 (1), 149–158.
Messac, A., Wilson, B.H., 1999. Physical programming for computational control. AIAA Journal 36 (1), 219–226.
Michalewicz, Z., 1996. Genetic Algorithms + Data Structures = Evolution Programs, Springer Series Artificial Intelligence, third ed. Springer, Berlin.
Miettinen, K.M., 1998. Nonlinear Multiobjective Optimization. Kluwer Academic Publishers, Dordrecht.
Mühlenbein, H., Schlierkamp-Voosen, D., 1993. Predictive models for the breeder genetic algorithm. I. Continuous parameter optimization. Evolutionary Computation 1 (1), 25–49.
Stengel, R., Marrison, C., 1992. Robustness of solutions to a benchmark control problem. Journal of Guidance, Control and Dynamics 15 (5), 1060–1067.
Wie, B., Bernstein, D., 1990. A benchmark problem for robust control design. In: Proceedings of the American Control Conference, San Diego, CA, pp. 961–962.
Wie, B., Bernstein, D., 1991. A benchmark problem for robust control design. In: Proceedings of the American Control Conference, Boston, MA, pp. 1929–1930.
Wie, B., Bernstein, D., 1992a. A benchmark problem for robust control design. In: Proceedings of the American Control Conference, Chicago, IL, pp. 2047–2048.
Wie, B., Bernstein, D., 1992b. Benchmark problems for robust control design. Journal of Guidance, Control and Dynamics 15 (5), 1057–1059.