Copyright © IFAC Adaptive Systems in Control and Signal Processing, Budapest, Hungary, 1995
DUAL VERSION OF DIRECT ADAPTIVE POLE PLACEMENT CONTROLLER

N. M. Filatov, H. Unbehauen and U. Keuchel
Automatic Control Laboratory, Faculty of Electrical Engineering, Ruhr University, D-44780 Bochum, Germany
Abstract: An innovative dual (active adaptive) version of the direct adaptive pole placement controller (APPC) is designed using bicriterial optimization. A new performance index for control optimization of adaptive pole placement systems is suggested. In contrast to the well-known direct APPC based on the certainty equivalence (CE) assumption, the accuracy of the parameter estimation and the necessity of an optimal active excitation signal are taken into account in the presented controller design. The derived control algorithm is compared with the controller suggested by Elliott (1982). It is emphasized that the new controller provides improved control quality, especially at the commencement of adaptation. Simulated examples are used to demonstrate the potential and superiority of the designed controller.

Keywords: Adaptive control, control system synthesis, dual control, pole placement, simulation, stochastic systems
1. INTRODUCTION

The dual effect is the cornerstone of active adaptive control systems, allowing improvement of the control quality. The control signal has the dual responsibility of making the plant output track the prescribed reference signal (control goal) and of accelerating the parameter estimation by optimal persistent excitation (identification goal) - see, for example, Astrom (1987) and references therein. The process of adaptation is relatively fast in these systems and, as a result, the adaptation time becomes shorter and the total control quality higher. So far, however, the active adaptive approach has been applied only to control systems with indirect adaptation. Hitherto, in the well-known direct adaptive systems, the current accuracy of the parameter estimates and the accuracy in tuning the controller have not been taken into consideration: the estimates are used as if they were exact values of the unknown parameters. This assumption (certainty equivalence assumption) often leads to insufficient control quality during the adaptation time. Naturally, it can be expected that the control performance of direct adaptive pole placement control systems can be improved if it is possible to apply active adaptation in these systems. However, the following three problems have to be solved for extending the active adaptive approach to direct adaptive pole placement control systems: (i) selecting an appropriate performance index for control optimization; (ii) proper description of the uncertainty in direct adaptive pole placement control systems, as well as defining a measure for this uncertainty; (iii) performing a convergence analysis of the active adaptive control in systems with direct adaptation, a problem characterized by the complexity and nonlinearity of the active adaptive control algorithms.

The present paper deals with the synthesis of new control systems with pole placement (or, in other words, an implicit reference model) and active adaptation for the design of an active adaptive version of the well-known direct APPC suggested by Elliott (1982) (see also Elliott et al., 1984). The bicriterial approach (Zhivoglyadov et al., 1993; Filatov and Unbehauen, 1994) will be extended to the synthesis of active adaptive control systems with direct adaptation. A new optimization criterion will be introduced for this kind of system. This criterion is determined as the conditional expectation of the squared deviation of the system output from the nominal system output. The covariance matrix of the unknown parameters of the regulator is used as the uncertainty description and uncertainty measure. This uncertainty measure enters the new control algorithm and provides robustness and caution properties for the developed control system.

There is neither an explicit reference model nor an error between the reference model output and the system output in APPC systems. The reference model (desired dynamics of the closed-loop system) is used in these systems implicitly in the process of indirect identification. Normally, no optimization criterion is used in this kind of adaptive system. The control quality is determined only by the structure of the controller (PI, PID, etc.), and the adaptation mechanism has the aim of providing the desired dynamics of the closed-loop system, as given by the reference model, using indirect identification. In the present paper, it will be suggested to use a new optimization criterion for adaptive pole placement control systems. At the outset, the nominal output will be defined as the desired system output when no disturbances act on the system. Thus, the nominal output is the desired output at every moment, and the optimization criterion is determined as the conditional expectation of the square of the deviation of the system output from the nominal output. In contrast to the usual criterion, the nominal desired output of the system is unknown at every moment, because the parameters of the desired regulator are unknown. It will be shown that the application of this criterion leads to an optimal control algorithm which has the structure of the desired pole placement controller. Contrary to other well-known controllers, the optimal one will depend on the covariance matrix of the uncertain parameters. Thus, the uncertainty level will be taken into consideration in this controller. It should be mentioned that, unlike the suggested approach, the usual criterion based on the error between the outputs of the system and the reference model leads to an adaptive system with an explicit reference model.

The paper is organized as follows. The algorithm for the direct adaptive pole placement controller is derived in Sec. 2, using the well-known approach suggested by Elliott (1982), which is based on the CE assumption. In contrast to Elliott's controller, the considered one is synthesized for systems with stochastic disturbances and time-varying parameters. The active adaptive version of this controller is derived in Sec. 3, using a bicriterial approach and a new optimization criterion. Two simulated examples are presented in Sec. 4 to demonstrate the potential of the designed algorithms. In Sec. 5 the properties of the new controller are discussed and compared with the well-known aforementioned one. Concluding remarks are presented in Sec. 6.

2. DESIGN OF ADAPTIVE POLE PLACEMENT CONTROLLER WITH STANDARD APPROACH

Consider the discrete-time single-input single-output plant

$A(z^{-1})Y(z) = z^{-1}B(z^{-1})U(z) + \Psi(z)$,  (1)

where $U(z)$ and $Y(z)$ are the z-transforms of the input and output signals $u(k)$ and $y(k)$, $k$ being the discrete time, and $\Psi(z)$ is the z-transform of the disturbance $\psi(k)$. $A(z^{-1})$ and $B(z^{-1})$ are polynomials of the form

$A(z^{-1}) = (1 + \bar{a}_1 z^{-1} + \dots + \bar{a}_{n_A} z^{-n_A})(1 - z^{-1})^{\kappa} = 1 + a_1 z^{-1} + \dots + a_n z^{-n}$,  (2a)
$B(z^{-1}) = b_1 + b_2 z^{-1} + \dots + b_{n_B} z^{-n_B+1}$,  (2b)

where $\kappa \ge 0$ integral actions are incorporated in the system and $n = n_A + \kappa$. It is assumed that only the plant order $n$ is known, whereas the coefficients of the polynomials according to eqs. (2a) and (2b) are unknown. Consider the following control law for the plant (1):

$U(z) = \frac{1}{r_0}\left[W(z) - S(z^{-1})Y(z) - z^{-1}R(z^{-1})U(z)\right]$, $\quad r_0 \ne 0$,  (3)

where $W(z)$ is the z-transform of the reference signal $w(k)$, and

$R(z^{-1}) = r_1 + r_2 z^{-1} + \dots + r_{n_R} z^{-n_R+1}$, $\quad n_R = n_A - 1$,  (4a)

$S(z^{-1}) = s_0 + s_1 z^{-1} + \dots + s_{n_S} z^{-n_S}$, $\quad n_S = n_A + \kappa - 1 = n - 1$,  (4b)

are the controller polynomials. After transformation of eq. (3) into the discrete-time domain, the control law can be represented in the vector form

$u(k) = \frac{1}{r_0}\left[w(k) - p_0^T m_0(k)\right]$, $\quad r_0 \ne 0$,  (5)

where

$p_0^T = [s_0 \dots s_{n_S} \vdots r_1 \dots r_{n_R}]$,  (6)

$m_0(k) = [y(k) \dots y(k - n_S) \vdots u(k-1) \dots u(k - n_R)]^T$.  (7)

Consider the following monic and asymptotically stable polynomial

$C(z^{-1}) = 1 + c_1 z^{-1} + \dots + c_m z^{-m}$, $\quad m \le n_A + n_B + \kappa - 1$,  (8)

whose zeros represent the desired locations of the poles of the transfer function between the bounded external input $w(k)$ and the output $y(k)$. The transform of the closed-loop output then takes the form

$Y(z) = \dfrac{z^{-1} B(z^{-1})}{r_0\, C(z^{-1})}\, W(z)$,  (9)

where

$r_0\, C(z^{-1}) = A(z^{-1})\left[r_0 + z^{-1} R(z^{-1})\right] + z^{-1} B(z^{-1}) S(z^{-1})$.  (10)

Equation (10) follows from eqs. (1) and (3). Equations (9) and (10) define the implicit reference model. To derive an appropriate model for estimating the controller parameter vector (6), as in Elliott (1982) and Elliott et al. (1984), the Bezout identity

$A(z^{-1})D(z^{-1}) + B(z^{-1})F(z^{-1}) = r_0\, z^{-l+2}$, $\quad l = n_A + n_S + \kappa$,  (11)

is considered, where

$D(z^{-1}) = d_0 + d_1 z^{-1} + \dots + d_{n_D} z^{-n_D}$, $\quad d_0 \ne 0$, $\quad n_D = n_S - 2$,  (12a)

$F(z^{-1}) = 1 + f_1 z^{-1} + \dots + f_{n_F} z^{-n_F}$, $\quad n_F = n_A + \kappa - 1$.  (12b)

Multiplication of eqs. (10) and (11) gives

$C(AD + BF) = z^{-l+2}\left[A(r_0 + z^{-1}R) + z^{-1}BS\right]$.  (13)
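To make the algebra of eqs. (3)-(13) concrete, the short sketch below solves eq. (10) for the controller coefficients of an assumed second-order plant with $\kappa = 0$ and $r_0$ normalized to 1, and then verifies the identity by polynomial convolution. All numerical values are illustrative assumptions, not taken from the paper, and the adaptive scheme derived below estimates the controller parameters directly instead of solving this equation explicitly.

```python
import numpy as np

# Assumed plant and desired polynomial (illustrative values only):
# A(z^-1) = 1 + a1 z^-1 + a2 z^-2,  B(z^-1) = b1 + b2 z^-1,
# C(z^-1) = 1 + c1 z^-1 + c2 z^-2 + c3 z^-3  (m = 3 <= n_A + n_B + kappa - 1).
a1, a2 = -1.5, 0.56                      # open-loop poles at 0.7 and 0.8
b1, b2 = 0.10, 0.05
C = np.array([1.0, -0.9, 0.27, -0.027])  # triple closed-loop pole at 0.3

# For n_A = 2 and kappa = 0 one has n_R = 1 and n_S = 1.  With r0 = 1,
# matching the coefficients of z^-1, z^-2, z^-3 in eq. (10) gives a small
# linear system for r1, s0, s1:
M = np.array([[1.0, b1, 0.0],
              [a1,  b2, b1],
              [a2, 0.0, b2]])
rhs = np.array([C[1] - a1, C[2] - a2, C[3]])
r1, s0, s1 = np.linalg.solve(M, rhs)

# Verify eq. (10): A(z^-1)(r0 + z^-1 R) + z^-1 B(z^-1) S(z^-1) = r0 C(z^-1).
A = np.array([1.0, a1, a2])
B = np.array([b1, b2])
lhs = np.convolve(A, [1.0, r1])                                # A * (1 + r1 z^-1)
lhs = lhs + np.concatenate(([0.0], np.convolve(B, [s0, s1])))  # + z^-1 * B * S
print(np.allclose(lhs, C))   # True: the closed-loop poles are the zeros of C
```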
The arguments of the polynomials are omitted here for notational simplicity. After multiplying eq. (13) by $Y(z)$ and introducing eq. (1), one gets

$z^{-1}D C U(z) + F C Y(z) - z^{-l}R U(z) - z^{-l+1}S Y(z) = r_0\, z^{-l+1}U(z) + \Xi(z)$,  (14)

where $\Xi(z)$ is the z-transform of the disturbance $\xi(k)$, which is obtained from $\Psi(z)$ after the above algebraic manipulations. In order to obtain the vector form of eq. (14), the following filtered values of the input and output signals are introduced:

$\bar{Y}(z) = C(z^{-1})Y(z)$,  (15a)

$\bar{U}(z) = C(z^{-1})U(z)$.  (15b)

After inverse z-transformation and implementation of eqs. (15a) and (15b), eq. (14) can be written in the vector form

$\bar{y}(k) = p^T m(k-1) + \xi(k)$,  (16)

where

$p^T = [-d_0 \dots -d_{n_D} \vdots -f_1 \dots -f_{n_F} \vdots r_0 \vdots r_1 \dots r_{n_R} \vdots s_0 \dots s_{n_S}]$,  (17)

$m(k-1) = [\bar{u}(k-1) \dots \bar{u}(k-n_D-1) \vdots \bar{y}(k-1) \dots \bar{y}(k-n_F) \vdots u(k-l+1) \vdots u(k-l) \dots u(k-l-n_R+1) \vdots y(k-l+1) \dots y(k-l-n_S+1)]^T$.  (18)

The signals $\bar{u}(k)$ and $\bar{y}(k)$ are the representations of $\bar{U}(z)$ and $\bar{Y}(z)$ in the discrete-time domain.

In the case of time-varying plant parameters, the parameter vector according to eq. (17), which includes the parameters of the controller, eq. (5), and of the polynomials given by eqs. (12a) and (12b), will also become time-varying. If a stochastic parameter drift described by a Wiener process

$p(k+1) = p(k) + \varepsilon(k)$  (19)

is considered, where $\varepsilon(k)$ is a white noise drift vector with zero mean and covariance matrix $Q_\varepsilon(k)$, and if $\xi(k)$ is stochastic white noise with zero mean and variance $\sigma_\xi^2$, then the following Kalman filter approach is applied for the estimation of the parameters in eq. (16):

$\hat{p}(k+1) = \hat{p}(k) + q(k+1)\left[\bar{y}(k+1) - \hat{p}^T(k)m(k)\right]$,  (20)

$q(k+1) = P(k)m(k)\left[m^T(k)P(k)m(k) + \sigma_\xi^2\right]^{-1}$,  (21)

$P(k+1) = P(k) - q(k+1)m^T(k)P(k) + Q_\varepsilon(k)$,  (22)

where

$\hat{p}(k) = E\{p(k) \mid \Im_k\}$,  (23)

$P(k) = E\{[p(k)-\hat{p}(k)][p(k)-\hat{p}(k)]^T \mid \Im_k\}$,  (24)

$\Im_k = \{y(0) \dots y(k); u(0) \dots u(k-1)\}$, $\quad \Im_0 = \{y(0)\}$,  (25)

and $\Im_k$ is the set of input and output values available at time $k$. It is assumed here that the initial values $\hat{p}(0)$ and $P(0)$ for eqs. (20)-(22) are given. Using the CE approach for the controller according to eq. (5) and the adaptation algorithm of eqs. (20)-(22), the direct APPC takes the form

$u(k) = \dfrac{1}{\hat{r}_0(k)}\left[w(k) - \hat{p}_0^T(k)m_0(k)\right]$,  (26)

with $\hat{p}_0$ being an estimate of $p_0$.

In contrast to the regulator suggested by Elliott (1982), the direct APPC according to eqs. (20)-(22) and (26) has been derived for systems with noise and time-varying parameters. Moreover, the structure of this controller is suitable for the development of its active adaptive version, as will be shown in the next Section.
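A compact NumPy sketch of one step of the estimator of eqs. (20)-(22) and of the CE control law of eq. (26) is given below. The function and variable names (kalman_update, p_hat, sigma_xi2, Q_eps, idx_r0, idx_p0) are my own labels introduced only for illustration; they are not notation from the paper.

```python
import numpy as np

def kalman_update(p_hat, P, m, y_bar_next, sigma_xi2, Q_eps):
    """One step of the parameter estimator of eqs. (20)-(22).

    p_hat      : current estimate of the parameter vector p(k), shape (n_p,)
    P          : current error covariance P(k), shape (n_p, n_p)
    m          : regressor m(k) built from filtered/unfiltered signals, eq. (18)
    y_bar_next : filtered output measurement corresponding to eq. (15a)
    sigma_xi2  : variance of the equation noise xi(k)
    Q_eps      : covariance of the parameter drift epsilon(k), eq. (19)
    """
    denom = float(m @ P @ m) + sigma_xi2          # scalar in eq. (21)
    q = (P @ m) / denom                           # gain vector q(k+1)
    innovation = y_bar_next - float(p_hat @ m)    # prediction error of eq. (20)
    p_hat_next = p_hat + q * innovation           # eq. (20)
    P_next = P - np.outer(q, m @ P) + Q_eps       # eq. (22)
    return p_hat_next, P_next

def ce_control(p_hat, m0, w, idx_r0, idx_p0):
    """Certainty-equivalence controller of eq. (26).

    idx_r0 picks the estimate of r0 out of p_hat; idx_p0 picks the entries of
    the estimate of p0, ordered consistently with m0(k) of eq. (7).
    """
    r0_hat = p_hat[idx_r0]
    p0_hat = p_hat[idx_p0]
    return (w - p0_hat @ m0) / r0_hat
```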
3. DESIGN OF DIRECT ACTIVE ADAPTIVE POLE PLACEMENT CONTROLLER

Define the nominal output of the system $y_n(k+1)$ as the response to the input signal

$u_n(k) = \dfrac{1}{r_0}\left[w(k) - p_0^T m_0(k)\right]$  (27)

of the unknown controller which provides the desired system transfer function of eq. (9), when the state of the system is $m_0(k)$ and no disturbances act on the system ($\xi(k) = 0$). Then the dependence of $\bar{y}_n(k+1)$ on $\bar{u}_n(k)$, according to eq. (16), takes the form

$\bar{y}_n(k+1) = p^T m_n(k)$,  (28)

where

$m_n(k) = [\bar{u}_n(k)\ \bar{u}(k-1) \dots \bar{u}(k-n_D) \vdots \bar{y}(k) \dots \bar{y}(k-n_F+1) \vdots u(k-l+2) \vdots u(k-l+1) \dots u(k-l-n_R+2) \vdots y(k-l+2) \dots y(k-l-n_S+2)]^T$,  (29)

$\bar{y}_n(k+1) = y_n(k+1) + \sum_{i=1}^{m} c_i\, y(k-i+1)$,  (30)

$\bar{u}_n(k) = u_n(k) + \sum_{i=1}^{m} c_i\, u(k-i)$.  (31)

It is clear that the control quality would be improved if the controller tries to bring the system output as close as possible to the nominal output, which would be attained only by the controller with known parameters, after full noise compensation. In accordance with this and the bicriterial approach (Zhivoglyadov et al., 1993; Filatov and Unbehauen, 1994), the following two criteria are introduced in order to derive the control law:

$J_k^c = E\left\{\tfrac{1}{2}\left(\bar{y}_n(k+1) - \bar{y}(k+1)\right)^2 \mid \Im_k\right\}$,  (32)

$J_k^a = -E\left\{\left(\bar{y}(k+1) - \hat{p}^T(k)m(k)\right)^2 \mid \Im_k\right\}$.  (33)

The first criterion, eq. (32), is used for control purposes to minimize the deviation of the system output from the unknown nominal output, which would be obtained by the adjusted unknown regulator. The coefficient $\tfrac{1}{2}$ is introduced to simplify further algebraic manipulations. The second criterion, eq. (33), is used for acceleration of the parameter estimation process (Zhivoglyadov et al., 1993; Chan and Zarrop, 1985) by increasing the value of the prediction error that appears in the square brackets of eq. (20). These two criteria correspond to the two goals of dual (or active adaptive) control: to control the system output and to accelerate the estimation for future control improvement. The direct active adaptive controller is obtained by solving the bicriterial optimization problem given by eqs. (32) and (33) for the system described by eqs. (16) and (27), using the constraints described below:

$u(k) = \arg\min_{u(k)\in\Omega_k} J_k^a$,  (34)

where

$\Omega_k = [u_c(k) - \theta(k),\; u_c(k) + \theta(k)]$, $\quad \theta(k) = \eta\,\mathrm{tr}\{P(k)\}$, $\quad \eta \ge 0$,  (35)

$u_c(k) = \arg\min_{u(k)} J_k^c$.  (36)

Here $u_c(k)$ is the cautious control (see, for example, Chan and Zarrop, 1985; Bar-Shalom, 1981), which is obtained after minimization of eq. (32). According to eq. (34), the second performance index is minimized over the domain $\Omega_k$, which is distributed symmetrically around the cautious control. Therefore the size of this domain, given by eq. (35), defines the amplitude of the excitation.

Upon implementation of eqs. (16), (28), (30) and (31) in eq. (32), one obtains

$J_k^c = E\left\{\tfrac{1}{2}\left(r_0\bar{u}_n(k) - r_0\bar{u}(k)\right)^2 \mid \Im_k\right\}$.  (37)

Inserting eq. (27) into eq. (37) gives

$J_k^c = E\left\{\tfrac{1}{2}\left(w(k) - p_0^T m_0(k) - r_0 u(k)\right)^2 \mid \Im_k\right\}$
$\quad = E\left\{-(w(k) - p_0^T m_0(k))\,r_0 u(k) + \tfrac{1}{2}(r_0 u(k))^2 \mid \Im_k\right\} + c_1(k)$
$\quad = -\hat{r}_0(k)w(k)u(k) + \left[\hat{r}_0(k)\hat{p}_0(k) + p_{r_0 p_0}(k)\right]^T m_0(k)\,u(k) + \tfrac{1}{2}\left(\hat{r}_0^2(k) + p_{r_0}(k)\right)u^2(k) + c_1(k)$,

where $c_1(k)$ does not depend on $u(k)$. The minimization of the last equation gives the cautious control action described by

$u_c(k) = \dfrac{\hat{r}_0(k)w(k) - \left[\hat{r}_0(k)\hat{p}_0(k) + p_{r_0 p_0}(k)\right]^T m_0(k)}{\hat{r}_0^2(k) + p_{r_0}(k)}$,  (38)

where the following elements of the covariance matrix $P(k)$ have been used:

$p_{r_0}(k) = E\{(r_0 - \hat{r}_0(k))^2 \mid \Im_k\}$,  (39)

$p_{r_0 p_0}(k) = E\{(r_0 - \hat{r}_0(k))(p_0 - \hat{p}_0(k)) \mid \Im_k\}$.  (40)

The minimization of eq. (34) with the constraints of eq. (35) leads to

$u(k) = u_c(k) + \theta(k)\,\mathrm{sign}\left\{J_k^a(u_c(k) - \theta(k)) - J_k^a(u_c(k) + \theta(k))\right\}$.  (41)

The second criterion is evaluated through substitution of eq. (16) into eq. (33):

$J_k^a(u(k)) = -E\left\{\left((p(k) - \hat{p}(k))^T m(k) + \xi(k+1)\right)^2 \mid \Im_k\right\}$
$\quad = -E\left\{\left((\hat{d}_0(k) - d_0(k))\bar{u}(k) + (p_1(k) - \hat{p}_1(k))^T m_1(k)\right)^2 \mid \Im_k\right\} + c_2(k)$
$\quad = -p_{d_0}(k)\bar{u}^2(k) - 2 p_{d_0 p_1}^T(k) m_1(k)\,\bar{u}(k) + c_3(k)$,  (42)

where $c_2(k)$ and $c_3(k)$ do not contain $u(k)$, and

$P(k) = \begin{bmatrix} p_{d_0}(k) & p_{d_0 p_1}^T(k) \\ p_{d_0 p_1}(k) & P_{p_1}(k) \end{bmatrix}$,  (43)

$\hat{p}^T(k) = [-\hat{d}_0(k) \vdots \hat{p}_1^T(k)]$, $\quad m(k) = [\bar{u}(k) \vdots m_1^T(k)]^T$.  (44)

It should be mentioned that eqs. (15b) and (31) lead to $\bar{u}(k) = u(k) + c_4(k)$ and, analogously, $\bar{u}_c(k) = u_c(k) + c_4(k)$ with $c_4(k) = \sum_{i=1}^{m} c_i\, u(k-i)$, where $c_4(k)$ does not contain $u(k)$ or $u_c(k)$. Introducing eqs. (42)-(44) into eq. (41) provides

$J_k^a(u_c(k) - \theta(k)) - J_k^a(u_c(k) + \theta(k))$
$\quad = -p_{d_0}(k)\left(u_c(k) - \theta(k) + c_4(k)\right)^2 - 2 p_{d_0 p_1}^T(k)m_1(k)\left(u_c(k) - \theta(k) + c_4(k)\right)$
$\qquad + p_{d_0}(k)\left(u_c(k) + \theta(k) + c_4(k)\right)^2 + 2 p_{d_0 p_1}^T(k)m_1(k)\left(u_c(k) + \theta(k) + c_4(k)\right)$
$\quad = 4 p_{d_0}(k)\bar{u}_c(k)\theta(k) + 4 p_{d_0 p_1}^T(k)m_1(k)\theta(k)$.  (45)

Substituting eq. (45) into eq. (41) provides the following active adaptive control law:

$u(k) = u_c(k) + \eta\,\mathrm{tr}\{P(k)\}\;\mathrm{sign}\left\{p_{d_0}(k)\bar{u}_c(k) + p_{d_0 p_1}^T(k)m_1(k)\right\}$,  (46)

where

$\mathrm{sign}\{a\} = \begin{cases} 1, & \text{if } a \ge 0, \\ -1, & \text{if } a < 0. \end{cases}$

Thus the direct active adaptive controller with pole placement is determined by eqs. (38) and (46).
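The complete dual control step of eqs. (38) and (46) is summarized in the following sketch. The argument names (u_past, c_poly, p_r0, p_r0p0, p_d0, p_d0p1, trP) are my own shorthand for the quantities of eqs. (35), (39), (40) and (43) and describe one plausible way of passing them in; they are not notation from the paper.

```python
import numpy as np

def dual_control(w, m0, m1, r0_hat, p0_hat, p_r0, p_r0p0,
                 p_d0, p_d0p1, trP, u_past, c_poly, eta):
    """Direct active adaptive (dual) control action of eqs. (38) and (46).

    w       : reference value w(k)
    m0, m1  : regressors according to eqs. (7) and (44)
    r0_hat  : estimate of r0;   p0_hat : estimate of p0 (eq. (6))
    p_r0    : variance of the r0 estimate, eq. (39)
    p_r0p0  : cross-covariance vector of eq. (40)
    p_d0    : variance element p_d0(k);  p_d0p1 : cross-covariance, eq. (43)
    trP     : trace of the covariance matrix P(k)
    u_past  : past controls [u(k-1), ..., u(k-m)]
    c_poly  : coefficients [c_1, ..., c_m] of C(z^-1)
    eta     : active-learning parameter of eq. (35)
    """
    # cautious part, eq. (38)
    u_c = (r0_hat * w - (r0_hat * p0_hat + p_r0p0) @ m0) / (r0_hat ** 2 + p_r0)
    # filtered cautious control: u_c_bar(k) = u_c(k) + sum_i c_i u(k-i)
    u_c_bar = u_c + float(np.dot(c_poly, u_past))
    # excitation (active learning) part, eq. (46)
    arg = p_d0 * u_c_bar + float(np.dot(p_d0p1, m1))
    s = 1.0 if arg >= 0.0 else -1.0      # sign{a} as defined after eq. (46)
    return u_c + eta * trP * s
```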
4. SIMULATED EXAMPLES

The simulated examples discussed below illustrate the behaviour of the direct active adaptive controller (DAAC) and allow a comparison of its properties with those of algorithms based on the CE assumption. In each example, both the system with disturbances and the controllers were simulated digitally, and the polynomial orders appropriate for the plant were correctly chosen. The figures associated with each example portray the graphs of the setpoint (w), the system output (y) and the controller output (u). The initial parameter estimates were taken to be 0.01 and the initial covariance matrix as $P(0) = 0.5 I$, where $I$ is the unity matrix. Different second-order plants with four unknown parameters were considered in both examples; therefore six parameters of the vector according to eq. (17) must be estimated. The model of the plants with disturbances has the following form:

$y(k+1) = b_1 u(k) + b_2 u(k-1) + a_1 y(k) + a_2 y(k-1) + \psi_1(k)$,  (47)

where $\psi_1(k)$ is white noise with zero mean and small variance. The desired closed-loop pole positions are given in Table 1.

Table 1. Pole positioning

    Re      Im
    0.6    +0.1
    0.6    -0.1
    0.2     0
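As a small aid, the desired poles of Table 1 can be converted into the coefficients of $C(z^{-1})$ of eq. (8) as sketched below; the helper code and the rounded printed values are mine, not part of the paper.

```python
import numpy as np

# Desired closed-loop poles from Table 1: 0.6 +/- 0.1j and 0.2.
poles = [0.6 + 0.1j, 0.6 - 0.1j, 0.2]

# np.poly returns the monic polynomial (in z) with these roots; reading the
# same coefficients in ascending powers of z^-1 gives C(z^-1) of eq. (8).
c = np.real(np.poly(poles))
print(c)   # approx. [1, -1.4, 0.61, -0.074]

# Sanity check on the degree bound of eq. (8): m = 3 satisfies
# m <= n_A + n_B + kappa - 1 for the second-order plants of eq. (47),
# both for kappa = 0 (bound 3) and kappa = 1 (bound 4).
```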
It should be mentioned that in the case of $\kappa = 0$ (plant without integral behaviour) the amplification of the closed-loop system, according to eq. (9), must be estimated in order to correct the setpoint, or integral action has to be incorporated into the system ($\kappa \ne 0$) as part of the control law. In the following examples the plant amplification is assumed to be known.
4.1 Example 1
The plant involved is unstable, minimum phase, with integral behaviour ($\kappa = 1$) and a model according to eq. (2) with

$A(z^{-1}) = 1 - 1.9418 z^{-1} + 0.9418 z^{-2}$,

$B(z^{-1}) = 0.0088 + 0.0086 z^{-1}$,

and the parameter for active learning $\eta$ (see eq. (35)) was taken to be 4.5. Figure 1 shows a typical run with the designed DAAC according to eqs. (38) and (46), and Fig. 2 presents a typical run with the standard Elliott controller of eq. (26), which is based on the CE assumption and uses the estimated parameters as if they had the true values. This standard controller gives a large overshoot at the beginning of the process, while the DAAC provides a smooth start-up due to its cautious and active learning properties.

Fig. 1. Direct active adaptive control for integral plant (Example 1)

Fig. 2. Standard APPC for integral plant (Example 1)
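The stated properties of this plant can be checked numerically from the given polynomials; the short fragment below is an illustrative check of my own and only restates what the example claims.

```python
import numpy as np

# Example 1 plant: A(z^-1) = 1 - 1.9418 z^-1 + 0.9418 z^-2,
#                  B(z^-1) = 0.0088 + 0.0086 z^-1.
poles = np.roots([1.0, -1.9418, 0.9418])   # roots of z^2 - 1.9418 z + 0.9418
zeros = np.roots([0.0088, 0.0086])

print(poles)   # approx. 1.0 and 0.9418: a pole at z = 1 gives the integral behaviour
print(zeros)   # approx. -0.977: inside the unit circle, i.e. minimum phase
```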
4.2 Example 2

Figures 3 and 4 show the simulation results for both controllers, the DAAC and the standard controller, respectively, for an unstable, monotone and nonminimum phase plant ($\kappa = 0$), described by a discrete model with the polynomials of eq. (2) as

$A(z^{-1}) = 1 - 2.1889 z^{-1} + 1.1618 z^{-2}$,

$B(z^{-1}) = -0.0132 + 0.0139 z^{-1}$.

For this example the parameter for active learning is $\eta = 2.2$. The same superiority of the designed DAAC can be observed in this case as in the first example.

Fig. 3. Direct active adaptive control for unstable plant (Example 2)

Fig. 4. Standard APPC for unstable plant (Example 2)
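As for the first example, the claimed pole and zero locations of this plant follow directly from the given polynomials; the brief check below is again only an illustrative sketch of my own.

```python
import numpy as np

# Example 2 plant: A(z^-1) = 1 - 2.1889 z^-1 + 1.1618 z^-2,
#                  B(z^-1) = -0.0132 + 0.0139 z^-1.
poles = np.roots([1.0, -2.1889, 1.1618])
zeros = np.roots([-0.0132, 0.0139])

print(np.abs(poles))   # approx. 1.284 and 0.905: one pole outside the unit circle (unstable)
print(np.abs(zeros))   # approx. 1.053: zero outside the unit circle (nonminimum phase)
```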
5. COMPARISON OF CONTROLLERS BASED ON STANDARD AND ACTIVE ADAPTIVE APPROACHES

It is well known that active adaptive controllers give improved control quality relative to standard adaptive controllers based on the CE assumption (Zhivoglyadov et al., 1993; Zhivoglyadov and Filatov, 1990; Chan and Zarrop, 1985; Bayard and Eslami, 1985; Milito et al., 1982; Filatov et al., 1994). This improvement of control quality has been shown in this paper through simulations and is attained as a result of the caution (robustness) and active learning properties of the active adaptive control algorithms. The caution properties of the controller are due to the fact that the control law includes not only the parameter estimates, but also the variances of their errors (see eq. (38)). The presence of the variances and covariances of the parameter estimates provides the cautious behaviour of the control system when the uncertainty in the system is large: the active adaptive system then concentrates its efforts on the plant excitation in order to speed up the parameter estimation. When the parameter estimates are more precise (the variances being small), the efforts of the active adaptive control are concentrated more on the control goal and the probing signal becomes small (see eqs. (38) and (46)). It should be noted that purely cautious controllers according to eq. (38) already provide better control quality than adaptive control based on the CE assumption. But these controllers do not provide active learning signals (optimal excitation), and they have not found broad application because of their slow identification, which sometimes leads to turn-off effects (Zhivoglyadov et al., 1993; Astrom and Wittenmark, 1971), in the presence of which the identification of the plant model is usually interrupted. The direct active adaptive control based on eqs. (38) and (46), as suggested here, gives the same improvement of the control quality as has already been pointed out by many researchers for indirect adaptive control (Astrom, 1987; Zhivoglyadov et al., 1993; Zhivoglyadov and Filatov, 1990; Chan and Zarrop, 1985; Bayard and Eslami, 1985; Milito et al., 1982).

The solution for a strong convergence analysis of the control algorithm of eqs. (38) and (46) can be obtained using the approaches suggested by Filatov and Unbehauen (1994), Radenkovic (1988) and Janecki (1988).

6. CONCLUSIONS

A new algorithm for a direct active adaptive pole placement controller has been derived for systems based on the direct adaptive pole placement structure suggested by Elliott (1982). Also, a new criterion for control optimization in direct adaptive pole placement systems has been introduced. This criterion is used within a bicriterial approach for the solution of the synthesis problem. The simulated examples clearly demonstrate the potential and superiority of the new controller. The typical properties of the new controller, i.e. the improvement of control quality and robustness, have been discussed.

7. ACKNOWLEDGEMENT

The first author wishes to express his thanks to the Alexander von Humboldt Foundation for its financial support.

8. REFERENCES

Astrom, K.J. (1987). Adaptive feedback control. Proc. of the IEEE, 75, 185-217.
Elliott, H. (1982). Direct adaptive pole placement with application to nonminimum phase systems. IEEE Trans. Autom. Control, 27, 720-722.
Elliott, H., W.A. Wolovich and M. Das (1984). Arbitrary adaptive pole placement for linear multivariable systems. IEEE Trans. Autom. Control, 29, 221-229.
Zhivoglyadov, V.P., G.P. Rao and N.M. Filatov (1993). Application of delta-operator models to active adaptive control of continuous-time plants. Control - Theory and Advanced Technology, 9, 127-137.
Filatov, N.M. and H. Unbehauen (1994). Strategies of model reference adaptive control with active learning properties. Proc. of 2nd IFAC Symposium on Intelligent Components and Instruments in Control Applications, Budapest, 305-310.
Zhivoglyadov, V.P. and N.M. Filatov (1990). Synthesis and comparison of actively and passively adaptive control algorithms. Proc. IFAC Workshop on Evaluation of Adaptive Control Strategies in Industrial Application, Tbilisi, 9-14.
Chan, S.S. and M.B. Zarrop (1985). A suboptimal dual controller for stochastic systems with unknown parameters. Int. J. Control, 41, 507-524.
Bar-Shalom, Y. (1981). Stochastic dynamic programming: caution and probing. IEEE Trans. Autom. Control, 26, 1184-1194.
Bayard, D.S. and M. Eslami (1985). Implicit dual control for general stochastic systems. Opt. Control Appl. Methods, 6, 265-273.
Milito, R., C.S. Padilla, R.A. Padilla and D. Cadorin (1982). An innovations approach to dual control. IEEE Trans. Autom. Control, 27, 132-137.
Filatov, N.M., U. Keuchel and H. Unbehauen (1994). Application of direct active adaptive control to an unstable mechanical plant. Proc. of 3rd IEEE Conference on Control Applications, Glasgow, 989-994.
Astrom, K.J. and B. Wittenmark (1971). Problems of identification and control. J. Math. Anal. Appl., 34, 90-113.
Radenkovic, M.S. (1988). Convergence of the generalised dual control algorithm. Int. J. Control, 47, 1419-1441.
Janecki, D. (1988). Stability analysis of Elliott's direct adaptive pole placement. Systems & Control Letters, 11, 19-26.