ROBUST TRAJECTORY TRACKING FOR A CLASS OF UNCERTAIN NONLINEAR SYSTEMS

Engelbert Gruenbacher ∗ and Luigi Del Re ∗∗

∗ Linz Center of Mechatronics, Altenbergerstrasse 69, A-4020 Linz, Austria
∗∗ Johannes Kepler University, Altenbergerstrasse 69, A-4020 Linz, Austria

Abstract: In this paper we consider the output tracking problem for an uncertain nonlinear system under the action of an inverse control, which makes a robustifying feedback component necessary. Conventionally, the robustifying control is designed around a single operating point. Here we present a nonlinear robustifying controller for tracking arbitrary trajectories inside the stability range. The design procedure of this nonlinear controller is based on a simple extension of a stabilizing H2 or H∞ controller. If sufficient conditions are satisfied, robustness in terms of the L2 gain from the disturbance input to the tracking error performance variable can be guaranteed and calculated a priori. Finally, simulation results verify the proposed method. Copyright © 2007 IFAC

Keywords: nonlinear control, robust tracking, H∞ analysis

1. INTRODUCTION

Output tracking has been one of the main fields of control engineering for a long time. In the case of precise model information, and under some conditions, the output tracking problem can be solved by inverse control (see (Hunt, Meyer 1997), (Isidori 1995), (Gruenbacher, del Re 2005) and (Devasia et al. 1996)). If the mathematical model of the plant is imprecise or contains uncertainties, inverse control is usually not performed. Instead, the tracking problem is solved by formulating a robust feedback control problem which guarantees robust stability and robust performance (see e.g. (Colaneri et al. 1997) for linear systems). For nonlinear systems a linear approximation in combination with some model uncertainty is often sufficient, but this approach tends to be conservative and results in a restricted stability range. For a wider stability range, nonlinear methods such as the ones proposed in (El-Farra, Christofides 2003) or in (Conte, Serrani 1998) can be applied.

In contrast with the usual approach mentioned above, to achieve a high tracking performance we use inverse control even if the system model is imprecise or uncertain. Because of the model uncertainties it is necessary to have a robust feedback compensator which attenuates deviations of the actual state from the desired state. For this purpose a stabilizing nonlinear controller as presented e.g. in (Gruenbacher et al. 2006), (Isidori, Astolfi 1992), (Astolfi 1997) and (Astolfi, Colaneri 2001) does not solve the control problem satisfactorily, since robust convergence of the tracking error is not guaranteed: the equilibrium point moves along the reference trajectory.

The method presented here rests on the assumption that there exists a stabilizing controller which guarantees robustness over a sufficiently large stability range. Such a robust controller is classically defined by a candidate Lyapunov function which is the solution of either the Hamilton-Jacobi-Bellman (HJB) equation or the Hamilton-Jacobi-Isaacs (HJI) equation or inequality (see (Gruenbacher et al. 2006)). If some conditions are satisfied, this scalar function can be used to construct a candidate Lyapunov function, and from it a controller, for any trajectory inside the stability range.

2. PROBLEM STATEMENT

Consider a nonlinear uncertain system with a well defined relative degree:

\dot{x} = f(x) + g(x)\,(u + w)    (1a)

y = h_y(x)    (1b)

z = h_z(x) + d(x)\,u    (1c)

where x ∈ R^n is the state, u ∈ R^m the control input, y ∈ R^p the output variable, z ∈ R^p the performance variable for a stabilizing optimal L2 or suboptimal H∞ controller (see (Gruenbacher et al. 2006)), and w ∈ R^m is the disturbance input. Note that with this definition it is possible to consider input disturbances caused e.g. by actuator uncertainties, as well as model uncertainties of the form f(x) = f̃(x) + Δ(x, u), with known dynamics f̃(x) and uncertainty Δ(x, u), which can be modeled by g(x) w = Δ(x, u). The latter is possible if span{Δ(x, u)} = span{g(x)}, where Δ(x, u) is assumed to be Lipschitz continuous in x and u. Note that smoothness is not a necessary condition. For system (1) we assume that x = 0 is an equilibrium point, i.e. f(0) = 0, and that h_z(0) = 0. Moreover, we assume that f(x) is a smooth vector field and that g(x), h_z(x) and h_y(x) are smooth mappings. Without loss of generality it is assumed that d(x)^T h_z(x) = 0. Finally, assume that g(x) and d(x) have full column rank; in particular, d(x)^T d(x) = R(x) > 0 for each x. For w = 0 we assume that there exists a feedforward control law that guarantees perfect tracking y = ỹ for any reference output ỹ as long as x̃ ∈ X ⊂ R^n (x̃ is the corresponding desired state trajectory), x̃̇ exists, and the initial state is known. This feedforward control law, called k_ff(x̃̇, x̃), can be calculated with different methods (see (Hunt, Meyer 1997), (Isidori 1995), (Gruenbacher, del Re 2005) and (Devasia et al. 1996)). It is obvious that if w ≠ 0 the feedforward control law fails and a tracking error occurs. Hence we want to solve a problem which we call the Robustifying the Feedforward Control (RFC) problem.
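As a minimal illustration (this particular construction is an assumption made here for exposition, not necessarily the method of the cited references): if the desired state trajectory satisfies x̃̇ − f(x̃) ∈ span{g(x̃)}, the nominal dynamics (1a) with w = 0 can be solved directly for the feedforward input, since g has full column rank:

\dot{\tilde{x}} = f(\tilde{x}) + g(\tilde{x})\,k_{ff}(\dot{\tilde{x}},\tilde{x}) \;\Longrightarrow\; k_{ff}(\dot{\tilde{x}},\tilde{x}) = \bigl(g(\tilde{x})^T g(\tilde{x})\bigr)^{-1} g(\tilde{x})^T \bigl(\dot{\tilde{x}} - f(\tilde{x})\bigr).

For the scalar example treated later in Section 5 (f(x) = x², g = 2) this reduces to k_ff(x̃̇, x̃) = (x̃̇ − x̃²)/2.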

3. DEFINITIONS AND PRELIMINARIES

Definition 1. (RFC Problem). Consider the nonlinear system (1) with matched disturbances and the feedforward control law k_ff(x̃̇, x̃). Find an additional feedback controller k_fb(x̃, x) such that the control law

u = k_{ff}(\dot{\tilde{x}}, \tilde{x}) + k_{fb}(\tilde{x}, x)    (2)

with k_fb(x̃, x)|_{x = x̃} = 0 asymptotically stabilizes the closed loop system for all x̃ ∈ X, and the state error

e = x - \tilde{x}    (3)

is bounded in L2 as long as the input disturbance is bounded in L2.

Remark 1. To validate the performance of a controller that satisfies the RFC problem, we introduce an additional performance variable z_e = h_e(e), where h_e(e) is an output map depending on the state error.

Definition 2. (Gradient difference field). The gradient difference field in x is defined as the difference of the gradients of a scalar function V(x) at the two points x + e and x, for x, e ∈ R^n:

\Gamma_x(e) = \left.\frac{\partial V(x)}{\partial x}\right|_{x+e} - \left.\frac{\partial V(x)}{\partial x}\right|_{x}    (4)

Lemma 1. For any point x the gradient difference field Γ_x(e) is the gradient of a local scalar function V_x(e) (a scalar function of e which varies locally with x):

\Gamma_x(e) = \frac{\partial V_x(e)}{\partial e}

Proof: Referring to (Khalil 1996), the vector field Γ_x(e) is the gradient of a scalar function if and only if the Jacobian matrix

\frac{\partial \Gamma_x(e)^T}{\partial e} =
\begin{pmatrix}
\left.\frac{\partial^2 V(x)}{\partial x_1^2}\right|_{x+e} & \cdots & \left.\frac{\partial^2 V(x)}{\partial x_n \partial x_1}\right|_{x+e} \\
\vdots & \ddots & \vdots \\
\left.\frac{\partial^2 V(x)}{\partial x_1 \partial x_n}\right|_{x+e} & \cdots & \left.\frac{\partial^2 V(x)}{\partial x_n^2}\right|_{x+e}
\end{pmatrix}    (5)

is symmetric. Since V(x) is a scalar function, the mixed partial derivatives satisfy

\left.\frac{\partial^2 V(x)}{\partial x_j \partial x_i}\right|_{x+e} = \left.\frac{\partial^2 V(x)}{\partial x_i \partial x_j}\right|_{x+e}

which proves that (5) is symmetric.

Lemma 2. If V(x) is a scalar convex function on the convex set x ∈ C ⊆ X, then the local scalar function V_x(e) from Lemma 1 is convex too. Furthermore, with V_x(0) = 0 the function is greater than or equal to zero.

Proof: Referring e.g. to (Mangasarian 1994), the local function V_x(e) is convex with respect to e if and only if

\bigl(\Gamma_x(e^a) - \Gamma_x(e^b)\bigr)\bigl(e^a - e^b\bigr) \ge 0    (6)

where e^a and e^b define the distances from the point x to the points x^a and x^b, respectively. With (4) the differential convexity condition (6) becomes

\left( \left.\frac{\partial V(x)}{\partial x}\right|_{x+e^a} - \left.\frac{\partial V(x)}{\partial x}\right|_{x+e^b} \right)\bigl(e^a - e^b\bigr) \ge 0    (7)

Adding and subtracting x inside the right bracket completes the proof, since from (7) we get

\left( \left.\frac{\partial V(x)}{\partial x}\right|_{x+e^a} - \left.\frac{\partial V(x)}{\partial x}\right|_{x+e^b} \right)\bigl( (x + e^a) - (x + e^b) \bigr) \ge 0

which is exactly the necessary and sufficient differential convexity condition for V(x) stated e.g. in (Mangasarian 1994). Since V(x) is convex by assumption, the inequality is satisfied and the lemma holds.

Remark 2. The local scalar function can be calculated using the path independent line integral

V_x(e) = \int_0^e \Gamma_x(e)\,de.    (8)

Setting V_x(0) = 0 ensures that the scalar function is nonnegative, and if it is convex it is positive definite. Hence this function may serve as a candidate Lyapunov function.

Definition 3. The distribution of the error vector field along a given trajectory caused by an imperfect feedforward tracking law is called the feedforward tracking error vector field and is defined as

l_{\tilde{x},\dot{\tilde{x}}}(e) = f(\tilde{x}+e) + g(\tilde{x}+e)\,k_{ff}(\dot{\tilde{x}}, \tilde{x}) - \dot{\tilde{x}}

Definition 4. The maximum eigenvalue of the input weighting matrix is defined as

\bar{\lambda}_R = \max_{x \in X} \lambda_{\max}(R(x))
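To make Definition 2 and Remark 2 concrete, the following minimal sketch (not part of the paper; the quartic V is an arbitrary strictly convex example chosen here) computes the gradient difference field (4) and the local function V_x(e) via the line integral (8) with sympy, and checks V_x(0) = 0 as well as the convexity property of Lemma 2. For a quadratic V(x) = (1/2) p x², the same computation gives Γ_x(e) = p e and V_x(e) = (1/2) p e², independent of x.

    # Illustrative sketch for Definition 2 and Remark 2 (scalar case, assumed example V).
    import sympy as sp

    x, e, s = sp.symbols('x e s', real=True)

    V = x**4 / 4 + x**2 / 2            # arbitrary strictly convex scalar function (assumption)
    dV = sp.diff(V, x)

    # Gradient difference field (4): Gamma_x(e) = dV/dx|_{x+e} - dV/dx|_{x}
    Gamma = dV.subs(x, x + e) - dV

    # Local scalar function (8): line integral of Gamma_x from 0 to e
    V_loc = sp.integrate(Gamma.subs(e, s), (s, 0, e))
    print(sp.expand(V_loc))            # e**4/4 + e**3*x + 3*e**2*x**2/2 + e**2/2

    assert sp.simplify(V_loc.subs(e, 0)) == 0        # V_x(0) = 0
    # Convexity in e (Lemma 2): d^2 V_x(e)/de^2 equals the Hessian of V evaluated at x + e
    assert sp.simplify(sp.diff(V_loc, e, 2) - sp.diff(V, x, 2).subs(x, x + e)) == 0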

4. SOLUTION OF THE RFC PROBLEM

Consider a system given by (1) and assume that a stabilizing controller with guaranteed robustness bounds exists. As shown in (Bryson, Ho 1975), (Gruenbacher et al. 2006) and also in (Van der Schaft 1992), such a controller for the given system (1) is

u = -R(x)^{-1} g(x)^T \left(\frac{\partial V(x)}{\partial x}\right)^T    (9)

where V(x) can be calculated either by solving the HJB equation for the L2 optimal control law,

\frac{\partial V(x)}{\partial x} f(x) - \frac{1}{2}\frac{\partial V(x)}{\partial x} g(x) R(x)^{-1} g(x)^T \left(\frac{\partial V(x)}{\partial x}\right)^T + \frac{1}{2} h_z(x)^T h_z(x) = 0    (10)

or the HJI for the suboptimal H∞ control law,

\frac{\partial V(x)}{\partial x} f(x) + \frac{1}{2} h_z(x)^T h_z(x) - \frac{1}{2}\frac{\partial V(x)}{\partial x} g(x) R(x)^{-1} g(x)^T \left(\frac{\partial V(x)}{\partial x}\right)^T + \frac{1}{2\gamma^2}\frac{\partial V(x)}{\partial x} g(x) g(x)^T \left(\frac{\partial V(x)}{\partial x}\right)^T \le 0    (11)

for the considered setup. It has been proven (see (Bolzern et al. 2002) for linear systems and (Gruenbacher et al. 2006) for nonlinear systems) that for both the optimal L2 controller and the suboptimal H∞ controller, robustness in terms of a maximum H∞ attenuation level from the input disturbance w to the performance output variable z can be guaranteed and calculated a priori.

4.1 Sufficient conditions for solving the RFC problem

Assume that there exists a stabilizing control law (9) which is either an optimal L2 controller or a suboptimal H∞ controller. Thus there exists a solution V(x) of the HJB or of the HJI, for which it is assumed that V(x) is strictly convex and that its Hessian matrix does not vanish:

\frac{\partial^2 V(x)}{\partial x^2} > 0, \quad \forall x \in X.    (12)

Since V(x) is a convex function, we can apply Lemma 2 and Remark 2 to define a local scalar positive definite function V_x(e) using the gradient difference field (see Definition 2). The following theorem is then sufficient for solving the RFC problem.

Theorem 1. If there exists a positive definite scalar function V(x) which obeys (12) such that the controller (9) guarantees robustness, and if there exist positive constants k > 0, κ ≥ 1 and 0 < α < κ/(k λ̄_R) such that the scalar function V_x̃(e) satisfies

\frac{m}{\alpha}\left\|\frac{\partial V_{\tilde{x}}(e)}{\partial \tilde{x}}\right\|_2 + \frac{1}{\alpha}\frac{\partial V_{\tilde{x}}(e)}{\partial e}\, l_{\tilde{x},\dot{\tilde{x}}}(e) + \frac{1}{2} h_e(e)^T h_e(e) - k\,\frac{\partial V_{\tilde{x}}(e)}{\partial e}\, g(\tilde{x}+e)\, g(\tilde{x}+e)^T \left(\frac{\partial V_{\tilde{x}}(e)}{\partial e}\right)^T < 0    (13)

for all x ∈ X, for a proper but constrained error region ||e|| < E, and for m ≥ max_t ||x̃̇(t)||_2, then a robust tracking controller which solves the RFC problem is given by

k_{fb}(\tilde{x}, x) = -\tilde{R}(x)^{-1}\, r_{\tilde{x}}(e)    (14)

with

r_{\tilde{x}}(e) = g(\tilde{x}+e)^T \left(\frac{\partial V_{\tilde{x}}(e)}{\partial e}\right)^T, \qquad \tilde{R}(x) = R(x)/\kappa.
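Operationally, evaluating (14) only requires the gradient of V at the current and at the desired state. The following sketch is an assumed numerical implementation (the callables grad_V, g and R as well as the numeric values in the usage line are placeholders chosen here, not data from the paper):

    import numpy as np

    def robustifying_feedback(x, x_des, grad_V, g, R, kappa=1.0):
        """Feedback term (14): k_fb = -R_tilde(x)^{-1} g(x)^T (grad V(x) - grad V(x_des)),
        with R_tilde(x) = R(x)/kappa.  grad_V, g and R are user-supplied callables."""
        gamma = grad_V(x) - grad_V(x_des)       # gradient difference field (4) at e = x - x_des
        r = g(x).T @ gamma                      # r_xtilde(e)
        return -kappa * np.linalg.solve(R(x), r)

    # Usage with assumed placeholder data (quadratic V(x) = 0.5 x^T P x, constant g and R):
    P = np.array([[2.0, 0.0], [0.0, 1.0]])
    k_fb = robustifying_feedback(np.array([0.2, -0.1]), np.zeros(2),
                                 grad_V=lambda x: P @ x,
                                 g=lambda x: np.array([[1.0], [0.5]]),
                                 R=lambda x: np.array([[1.0]]),
                                 kappa=2.0)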

Theorem 2. If there exists a solution of (13) with α = κ/(2k λ̄_R), then the controller (14) ensures an H∞ attenuation level ||T_{z_e,w}||_∞ ≤ γ, where

\gamma = \frac{\sqrt{2k}\,\bar{\lambda}_R}{\kappa}    (15)

Theorem 1 and Theorem 2, since they both result from the same considerations, are proven together in the following proof.

Proof: In the first part we show that the feedback control law stabilizes the system at its equilibrium point x_0 = x̃_0 = 0. Since

\left.\frac{\partial V(x)}{\partial x}\right|_{0} = 0,

the feedback control law becomes

k_{fb}(0, x) = -\tilde{R}(x)^{-1} g(x)^T \left(\frac{\partial V(x)}{\partial x}\right)^T

and this stabilizes the system, since R(x)^{-1} ≤ R̃(x)^{-1}. This can be seen by performing the robustness analysis for the closed loop system. From there it follows that the system is stable and has a guaranteed robustness bound γ if the HJI

\tilde{\mu}^T f(x) - \frac{1}{2}\tilde{\mu}^T g(x)\tilde{R}(x)^{-1} g(x)^T \mu + \frac{1}{2\gamma^2}\tilde{\mu}^T g(x) g(x)^T \tilde{\mu} + \frac{1}{2} h_e(x)^T h_e(x) \le \tilde{\mu}^T f(x) - \frac{1}{2}\tilde{\mu}^T g(x) R(x)^{-1} g(x)^T \mu + \frac{1}{2\gamma^2}\tilde{\mu}^T g(x) g(x)^T \tilde{\mu} + \frac{1}{2} h_e(x)^T h_e(x) \le 0

with μ̃^T = ∂Ṽ(x)/∂x and μ^T = ∂V(x)/∂x, is satisfied. We refer the reader to (Gruenbacher et al. 2006) for the proof that this inequality holds for both the optimal L2 and the suboptimal H∞ controller.

In order to deal with the robustness of the tracking error, we now consider the closed loop system. The error differential equation results from (3) by inserting the tracking control law (2) with (14):

\dot{e} = f(\tilde{x}+e) + g(\tilde{x}+e)\,k_{ff}(\tilde{x}, \dot{\tilde{x}}) + g(\tilde{x}+e)\,w - \dot{\tilde{x}} - g(\tilde{x}+e)\,\tilde{R}^{-1}(\tilde{x}+e)\,r_{\tilde{x}}(e)

Introducing the notation of the feedforward tracking error vector field (Definition 3), this reads

\dot{e} = l_{\tilde{x},\dot{\tilde{x}}}(e) - g(x)\,\tilde{R}^{-1}(\tilde{x}+e)\,r_{\tilde{x}}(e) + g(x)\,w.

To perform the robustness analysis we use the performance variable z_e = h_e(e) and the cost function

J_\infty = \frac{1}{2}\int_{0}^{\infty} z(t)^T z(t)\,dt - \frac{\gamma^2}{2}\int_{0}^{\infty} w(t)^T w(t)\,dt

which has to be less than or equal to zero. For the closed loop system the HJI becomes

\tilde{\lambda}_{\tilde{x}}(e)^T \dot{\tilde{x}} + \tilde{\lambda}_e(e)^T l_{\tilde{x},\dot{\tilde{x}}}(e) + \frac{1}{2\gamma^2}\tilde{\lambda}_e(e)^T g(x) g(x)^T \tilde{\lambda}_e(e) - \tilde{\lambda}_e(e)^T g(x)\tilde{R}^{-1}(x)\,r_{\tilde{x}}(e) + \frac{1}{2} h_e(e)^T h_e(e) \le 0    (16)

where λ̃_x̃(e) = (∂Ṽ_x̃(e)/∂x̃)^T and λ̃_e(e) = (∂Ṽ_x̃(e)/∂e)^T. If there exists a solution with Ṽ_x̃(0) = 0 and Ṽ_x̃(e) > 0 for all ||e|| > 0 along the trajectory x̃, the effect of the input disturbance on the output is limited by the L2 gain γ. In (16) we now attempt to exploit the positive definite function V_x̃(e), so we define (cf. (8))

\tilde{V}_{\tilde{x}}(e) = \frac{1}{\alpha} V_{\tilde{x}}(e) = \frac{1}{\alpha}\int_{0}^{e}\Gamma_{\tilde{x}}(e)\,de

with α > 0. In order to find an upper bound for the left side of (16) we need to discuss the first term of (16) in more detail. Starting with a Taylor series expansion of the function V(x),

V(x) = a_2\, x \otimes x + a_3\, x \otimes x \otimes x + \dots,    (17)

which clearly starts at power 2 since V(x) is positive definite and its Hessian matrix does not vanish, it is possible - by completing the squares and performing a worst case assumption for x̃̇ - to show that

\tilde{\lambda}_{\tilde{x}}(e)^T \dot{\tilde{x}} \le m \left\|\tilde{\lambda}_{\tilde{x}}(e)\right\|_2    (18)

where m ≥ max_t ||x̃̇(t)||_2. Furthermore, from (17) it is possible to check that the polynomials λ_x̃(e) and λ_e(e) (the unscaled counterparts of λ̃_x̃ and λ̃_e) both start at a power of 2. Hence there is a chance that there exists a k_2' > 0 such that

m \left\|\tilde{\lambda}_{\tilde{x}}(e)^T\right\|_2 \le k_2'\,\tilde{\lambda}_e(e)^T g(x) g(x)^T \tilde{\lambda}_e(e)    (19)

Note that if there is a solution of condition (13), then such a number k_2' exists. With this result it is now possible to continue the proof; we therefore consider the HJI for the error dynamics again. With (18), an upper bound of the left hand side of (16) has to satisfy

\frac{m}{\alpha}\left\|\frac{\partial V_{\tilde{x}}(e)}{\partial \tilde{x}}\right\|_2 + \frac{1}{\alpha}\frac{\partial V_{\tilde{x}}(e)}{\partial e}\, l_{\tilde{x},\dot{\tilde{x}}}(e) - \frac{1}{\alpha} r_{\tilde{x}}(e)^T \tilde{R}^{-1}(x)\, r_{\tilde{x}}(e) + \frac{1}{2\gamma^2\alpha^2} r_{\tilde{x}}(e)^T r_{\tilde{x}}(e) + \frac{1}{2} h_e(e)^T h_e(e) \le 0    (20)

Furthermore it should be assumed that there exists a k > 0 such that

\frac{m}{\alpha}\left\|\frac{\partial V_{\tilde{x}}(e)}{\partial \tilde{x}}\right\|_2 + \frac{1}{\alpha}\frac{\partial V_{\tilde{x}}(e)}{\partial e}\, l_{\tilde{x},\dot{\tilde{x}}}(e) + \frac{1}{2} h_e(e)^T h_e(e) - k\, r_{\tilde{x}}(e)^T r_{\tilde{x}}(e) \le 0.    (21)

The existence of such a k > 0 is guaranteed, at least in a small error environment, if there exists a k_2' such that (19) holds and if the error dynamics vector field and the error performance vector depend at least linearly on the error; this can be shown by a Taylor series expansion. Combining (20) and (21) it follows that

k\, r_{\tilde{x}}(e)^T r_{\tilde{x}}(e) - \frac{1}{\alpha} r_{\tilde{x}}(e)^T \tilde{R}^{-1}(x)\, r_{\tilde{x}}(e) + \frac{1}{2\gamma^2\alpha^2} r_{\tilde{x}}(e)^T r_{\tilde{x}}(e) = r_{\tilde{x}}(e)^T \Lambda(x)\, r_{\tilde{x}}(e) \le 0    (22)

where

\Lambda(x) = \left(k + \frac{1}{2\gamma^2\alpha^2}\right) I - \frac{1}{\alpha}\tilde{R}^{-1}(x).

Thus the only part still missing is to show that Λ(x) ≤ 0. For this we use a property of positive definite matrices:

\frac{1}{\lambda_{\max}(Q)}\, q^T q \le q^T Q^{-1} q \le \frac{1}{\lambda_{\min}(Q)}\, q^T q    (23)

Using (23) for the positive definite matrix R̃(x), it is easy to verify that (22) holds if

\left(k + \frac{1}{2\gamma^2\alpha^2}\right) I - \frac{\kappa}{\alpha\,\bar{\lambda}_R} \le 0.

Since α > 0, γ > 0 and λ̄_R > 0, this results in

\gamma^2 \left( 2\alpha^2 \bar{\lambda}_R k - 2\alpha\kappa \right) < -\bar{\lambda}_R.

Now we search for an α such that γ becomes a minimum. Thus the function in the brackets, β(α) = 2α²λ̄_R k − 2ακ, has to be negative and must have maximum absolute value. Hence we search for the minimum of β(α), which is attained at

\alpha = \alpha^{*} = \frac{\kappa}{2\bar{\lambda}_R k}    (24)

and which ensures a negative value of β(α). Furthermore it yields the smallest guaranteed attenuation level

\gamma = \frac{\sqrt{2k}\,\bar{\lambda}_R}{\kappa}.    (25)

This completes the proof. If (24) is combined with (21), it results in condition (13).

If there exist a constant k > 0 and a κ ≥ 1, it thus follows that the L2 gain is less than or equal to (25).

Remark 3. Even though condition (13) involves the gradient of V_x̃(e) with respect to x̃, it is not necessary to calculate the scalar function V_x̃(e) by integrating the gradient difference field. It is easy to show that the term ∂V_x̃(e)/∂x̃ can be calculated as

\frac{\partial V_{\tilde{x}}(e)}{\partial \tilde{x}} = \frac{\partial V_{\tilde{x}}(e)}{\partial e} - e^T \left.\frac{\partial^2 V(x)}{\partial x^2}\right|_{\tilde{x}}    (26)

Thus it is not necessary to calculate the scalar function V_x̃(e) to solve the RFC problem. Furthermore, the problem of finding positive constants such that (13) is fulfilled can be solved numerically; an analytical solution is difficult and not necessary. The theorem above ensures an H∞ attenuation level which depends on the positive constant k > 0 and the constant κ ≥ 1. κ should be close to 1 in order to keep R(x) ≈ R̃(x); to increase robustness, the constant k should be as small as possible.

Remark 4. If there is no solution of Theorem 2 but only a solution of Theorem 1, which means that α ≠ κ/(2k λ̄_R) and 0 < α < κ/(k λ̄_R), then it follows from the proof that the guaranteed robustness bound is

\gamma = \sqrt{\frac{\bar{\lambda}_R}{2\alpha\kappa - 2\alpha^2 \bar{\lambda}_R k}}

5. SIMULATION EXAMPLE

We now consider a system already treated in (Astolfi, Colaneri 2001):

\dot{x} = x^2 + w + 2u, \qquad y = x, \qquad z = (x, u)^T

The authors also mention that the positive definite solution of the HJB equation arising in the H2 optimal control problem is

V(x) = \frac{1}{6}\left( x^3 + \left(x^2 + 4\right)^{3/2} \right) - \frac{4}{3}.

We now want the output to track a given trajectory whose first derivative is assumed to exist. For this system the inverse control law (for w = 0) becomes

k_{ff}(\tilde{x}, \dot{\tilde{x}}) = \frac{1}{2}\left( \dot{\tilde{x}} - \tilde{x}^2 \right).

It is now possible to show that with m = 1, k = 2, κ = 4 and with the error performance variable z_e = e, the conditions of Theorem 1 and Theorem 2 are satisfied. Hence, referring to (14), the robustifying feedback controller becomes

k_{fb}(\tilde{x}, x) = 2\left( \tilde{x}^2 - x^2 - x\sqrt{x^2 + 4} + \tilde{x}\sqrt{\tilde{x}^2 + 4} \right)

and this controller guarantees an attenuation level from w to z_e which is less than 0.5 (see (15) in Theorem 2).
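The computations of this example can be reproduced symbolically; the following sketch is an illustration only (the reference trajectory, the disturbance profile and the Euler integration are assumptions made here, not specified in the paper). It checks that the closed-form feedback given above is a constant negative multiple of the gradient difference field, i.e. that it has the structure of (14), and simulates the resulting closed loop:

    # Hedged sketch of the Section 5 example; reference and disturbance are assumed.
    import sympy as sp
    import numpy as np

    x, xt, e = sp.symbols('x xt e', real=True)

    # Value function quoted from (Astolfi, Colaneri 2001)
    V = sp.Rational(1, 6) * (x**3 + (x**2 + 4)**sp.Rational(3, 2)) - sp.Rational(4, 3)
    dV = sp.diff(V, x)                          # dV/dx = (x**2 + x*sqrt(x**2 + 4))/2

    # Gradient difference field (4) along the reference: Gamma_xt(e)
    Gamma = dV.subs(x, xt + e) - dV.subs(x, xt)

    # Robustifying feedback as stated in the text (x = xt + e)
    k_fb = 2 * (xt**2 - (xt + e)**2 - (xt + e) * sp.sqrt((xt + e)**2 + 4)
                + xt * sp.sqrt(xt**2 + 4))

    # Structural check: k_fb is a constant negative gain times Gamma_xt(e),
    # i.e. a scaled gradient-difference feedback as prescribed by (14).
    assert sp.simplify(k_fb + 4 * Gamma) == 0

    # Closed-loop simulation: u = k_ff + k_fb with k_ff = (xt_dot - xt**2)/2
    k_fb_f = sp.lambdify((xt, e), k_fb, 'numpy')
    dt, T = 1e-3, 2.0
    t = np.arange(0.0, T, dt)
    x_ref = np.sin(2 * np.pi * t / 2.0)          # assumed reference trajectory
    x_ref_dot = np.gradient(x_ref, dt)
    w = np.where(t >= 0.8, 0.45, 0.0)            # assumed disturbance, switched on at 0.8 s

    xs = np.zeros_like(t)
    for i in range(len(t) - 1):
        k_ff = 0.5 * (x_ref_dot[i] - x_ref[i]**2)
        u = k_ff + k_fb_f(x_ref[i], xs[i] - x_ref[i])
        xs[i + 1] = xs[i] + dt * (xs[i]**2 + w[i] + 2.0 * u)

    print('max |tracking error| after disturbance:', np.abs(xs - x_ref)[t >= 0.8].max())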

Figure 1. Tracking results; case 1 (case 2): disturbance slightly less (greater) than the worst case assumption. Upper plot: system output and desired output; lower plot: tracking error (time in sec).

In Figure 1 the simulation results are shown. The upper plot shows the system output when the disturbance w is slightly less than the worst case assumption (case 1) and when it is slightly greater than the guaranteed worst case assumption (case 2; in the latter case stability is not guaranteed). In both cases the disturbance is switched on at a simulation time of 0.8 sec. The bottom plot shows the tracking error for both cases. The simulation results thus point out that the presented theory is useful for robustifying a feedforward control law in the nonlinear setting.

6. CONCLUSIONS

In this paper we have discussed the problem of robustifying a feedforward control law acting on an uncertain system, where input uncertainties are considered. The presented control law consists of a feedforward and a state feedback term, where the feedback term is designed via an extension of a stabilizing optimal L2 or suboptimal H∞ controller. A sufficient condition is presented under which it is ensured that, in the presence of input disturbances, the error system - defined by the error differential equation and the error performance output - is robust. This means that the L2 gain from the input disturbance to the performance error is less than a certain value which can be assessed a priori. Furthermore, we discussed the condition that has to be satisfied such that the H∞ attenuation level becomes a minimum. Simulation results underlined the theoretical results.

7. ACKNOWLEDGEMENTS

This work was funded by LCM during its starting period under grant 230100.

REFERENCES

E. Gruenbacher, P. Colaneri, L. del Re (2006) Guaranteed robustness bounds for actuator-disturbance nonlinear control, Proc. ROCOND 2006, Toulouse, France.
L. R. Hunt, G. Meyer (1997) Stable Inversion for Nonlinear Systems, Automatica, Vol. 33(8), pp. 1549-1554.
A. Isidori (1995) Nonlinear Control Systems, third edition, Springer Verlag, ISBN 3-540-19916-0.
S. Devasia, D. Chen, B. Paden (1996) Nonlinear Inversion-Based Output Tracking, IEEE Transactions on Automatic Control, Vol. 41(7).
N. H. El-Farra, P. D. Christofides (2003) Robust Inverse Optimal Control, Int. J. Robust Nonlinear Control, 13, pp. 1371-1388.
A. Isidori, A. Astolfi (1992) Disturbance Attenuation and H∞ Control via Measurement Feedback in Nonlinear Systems, IEEE Trans. on Automatic Control, 37, pp. 1283-1293.
A. J. van der Schaft (1992) L2-gain analysis of nonlinear systems and nonlinear state feedback H∞ control, IEEE Trans. Automat. Control, 37, pp. 770-784.
E. Gruenbacher, L. del Re (2005) Output Tracking of Non Input Affine Systems Using Extended Hammerstein Models, Proc. American Control Conference, Portland, USA.
A. E. Bryson, Y.-C. Ho (1975) Applied Optimal Control, Wiley, New York.
P. Bolzern, P. Colaneri, G. De Nicolao, U. Shaked (2002) Guaranteed H∞ bounds for Wiener filtering and prediction, Int. J. Robust and Nonlinear Control, Vol. 12, pp. 41-46.
A. Astolfi, P. Colaneri (2001) Trading robustness with optimality in nonlinear control, Automatica, 37, pp. 1961-1969.
A. Astolfi (1997) Singular H∞ control for nonlinear systems, Int. J. Robust and Nonlinear Control, 7, pp. 727-740.
P. Colaneri, J. C. Geromel, A. Locatelli (1997) Control Theory and Design - An RH2 and RH∞ Viewpoint, Academic Press, ISBN 0-12-179190-4.
G. Conte, A. Serrani (1998) Global Robust Tracking with Disturbance Attenuation for Unmanned Underwater Vehicles, Proceedings of the CCA, Trieste.
H. K. Khalil (1996) Nonlinear Systems, second edition, Prentice Hall, ISBN 0-13-228024-8.
O. L. Mangasarian (1994) Nonlinear Programming, SIAM, ISBN 0-89871-341-2, New York.