Nonlinear and locally optimal controllers design for input affine systems

Proceedings of the 18th World Congress, The International Federation of Automatic Control, Milano (Italy), August 28 - September 2, 2011

M. Sahnoun, V. Andrieu, M. Nadri

Laboratoire d'Automatique et de Génie des Procédés (LAGEP), UMR CNRS 5007, Université Claude Bernard Lyon 1, Bât. 308G, 43 Bd du 11 Novembre 1918, 69622 Villeurbanne Cedex, France.
sahnoun-at-lagep.univ-lyon1.fr, nadri-at-lagep.univ-lyon1.fr
DOI: 10.3182/20110828-6-IT-1002.00227

https://sites.google.com/site/vincentandrieu/

Abstract: Assuming that a globally stabilizing nonlinear controller is available, the aim is to modify the local behavior of the trajectories in order to obtain local optimality with respect to a given quadratic cost. A sufficient condition is given in terms of linear matrix inequalities to design a locally optimal and globally stabilizing control law. This approach is illustrated on an inverted pendulum model, which is stabilized at its upper equilibrium point. In addition, a stronger sufficient condition is then given (which is no longer linear) but which allows the cases in which the previous LMI condition fails to be addressed.

Keywords: Nonlinear controllers, Lyapunov stabilization, Optimal control, LMI.

1. INTRODUCTION

The design of global asymptotic stabilizers for systems described by nonlinear differential equations has received much attention from the control community over the past three decades. Depending on the structure of the model, some techniques are now available to design a control law which globally stabilizes an equilibrium: for instance backstepping (see Krstic et al. (1995) and references therein), forwarding (see Mazenc and Praly (1996); Jankovic et al. (1996) and references therein), and other approaches (see Kokotović and Arcak (2001)) have been widely studied. Although the stabilization of an equilibrium can be achieved, it is difficult to guarantee a certain performance for the closed-loop system. On the other hand, when the first order approximation of a nonlinear model is considered, performance issues can be handled by employing linear optimal control designs (for instance LQ controllers). Moreover, with such an optimal linear controller, stabilization of an equilibrium can also be obtained, but only locally. This leads to the idea of uniting an (optimal) local linear controller and a global one. This uniting controller problem has already been addressed in the literature in Prieur (2001); Teel et al. (1997); Teel and Kapoor (1997); Efimov (2006) by employing hybrid (and discontinuous) feedbacks. In the present paper, a sufficient condition is given for designing a continuous controller which unites a linear local stabilizer and a nonlinear global one. The theory behind these developments is inspired by recent results in Andrieu and Prieur (2010), in which a continuous uniting of two control Lyapunov functions allows a local stabilizer and a non-local one to be continuously united. In this paper, based on the results of Andrieu and Prieur (2010), the continuous uniting control problem is investigated and some of these results are extended to the particular case in which the local controller is linear and the non-local one is global.

More precisely, given a nonlinear controller which globally stabilizes an equilibrium, the first result of the paper provides a sufficient condition to unite this controller with a locally optimal controller. This sufficient condition is given in terms of a Linear Matrix Inequality (LMI). This approach is then exploited to modify the local behavior of a controller developed in Mazenc and Praly (1996) to stabilize an inverted pendulum at its upper position. Motivated by the fact that in some cases the sufficient condition does not apply for a given linear optimal controller, a more general sufficient condition is given (see Section 5). This condition is no longer an LMI; nevertheless, it appears that all stabilizing local controllers can be recovered with this extension.

The paper is organized as follows. In Section 2, the problem under consideration is formalized and Theorem 1, which gives a sufficient condition in terms of an LMI to solve this problem, is stated. The proof of this theorem is given in Section 3. Section 4 is devoted to illustrating the proposed approach on an inverted pendulum system. Further developments and a more general sufficient condition are given in Section 5. Finally, Section 6 contains the conclusion.

2. PROBLEM STATEMENT AND MAIN RESULT

2.1 Problem formulation

Throughout the paper, the following controlled nonlinear system, affine in the input, is considered:
$\dot{x} = f(x) + g(x)u, \quad x(0) = x_0,$   (1)
where $x \in \mathbb{R}^n$ is the state, $u \in \mathbb{R}^p$ is the control input, and $f : \mathbb{R}^n \to \mathbb{R}^n$ and $g : \mathbb{R}^n \to \mathbb{R}^{n \times p}$ are $C^2$ functions such that $f(0) = 0$. In the rest of this note, it is assumed that the system (1) satisfies the following two assumptions.


Assumption 1. (Global Stabilization). There exist a positive definite, proper and $C^2$ function $V_\infty : \mathbb{R}^n \to \mathbb{R}_+$ and a $C^0$ function $\phi_\infty : \mathbb{R}^n \to \mathbb{R}^p$ such that
$\frac{\partial V_\infty}{\partial x}(x)\left[f(x) + g(x)\phi_\infty(x)\right] < 0, \quad \forall x \neq 0.$   (2)

Assumption 2. (First order Controllability). The pair of matrices $(A, B)$ in $\mathbb{R}^{n \times n} \times \mathbb{R}^{n \times p}$, with $A = \frac{\partial f}{\partial x}(0)$ and $B = g(0)$, is controllable.

Under Assumptions 1 and 2, the problem under consideration is a stabilization with imposed local behavior problem. It can be reformulated as follows: given a linear (possibly optimal) local controller $u = K_0 x$ such that the matrix $A + BK_0$ is Hurwitz, find a continuous control law $\phi : \mathbb{R}^n \to \mathbb{R}^p$ such that the origin of the system $\dot{x} = f(x) + g(x)\phi(x)$ is globally asymptotically stable and such that
$\frac{\partial \phi}{\partial x}(0) = K_0.$   (3)

When the functions $(f, g)$ are such that the system is in backstepping form (see Krstic et al. (1995)), this problem has been solved in Pan et al. (2001). However, when no structural restriction is imposed on the nonlinear functions, a sufficient condition allowing the previous problem to be solved can be given in terms of an LMI, based on the theory developed in Andrieu and Prieur (2010).

Theorem 1. (LMI sufficient condition). Under Assumptions 1 and 2, given $P_0$ a symmetric positive definite matrix in $\mathbb{R}^{n \times n}$ and $K_0$ a matrix in $\mathbb{R}^{p \times n}$ such that
$P_0(A + BK_0) + (A + BK_0)^T P_0 < 0,$   (4)
if there is a matrix $K_m$ in $\mathbb{R}^{p \times n}$ satisfying the matrix inequalities
$P_0(A + BK_m) + (A + BK_m)^T P_0 < 0,$
$P_\infty(A + BK_m) + (A + BK_m)^T P_\infty < 0,$   (5)
where $P_\infty = \frac{\partial^2 V_\infty}{\partial x \partial x}(0)$, then there exists a continuous function $\phi : \mathbb{R}^n \to \mathbb{R}^p$ such that the origin of the system $\dot{x} = f(x) + g(x)\phi(x)$ is globally asymptotically stable, and there exists a sufficiently small positive real number $r_\infty$ such that $\phi(x) = K_0 x$ for all $x$ verifying $V_\infty(x) < r_\infty$.

It can be checked that Theorem 1 gives a sufficient condition to solve the stabilization with imposed local behavior problem. Indeed, by (4) the matrix $K_0$ is such that $A + BK_0$ is Hurwitz, and the function $\phi$ satisfies (3) (since $\phi(x) = K_0 x$ in a neighborhood of the origin). The proof of Theorem 1 is given in Section 3. Since Theorem 1 gives a sufficient condition in terms of linear matrix inequalities, efficient LMI solvers can be used to check whether this condition is satisfied. These tools are used in Section 4, where Theorem 1 is employed to modify the local behavior of a global controller for an inverted pendulum. However, as illustrated later in Remark 1, for some linear local controllers this sufficient condition does not hold. In Section 5, an extension of Theorem 1 is given which allows this difficulty to be overcome.

2.2 Discussion

It can be noticed that Assumption 1 is a strong assumption. However, depending on the structure of the functions $f$ and $g$, some tools are available for designing the globally stabilizing controller $\phi_\infty$ and its associated Lyapunov function (backstepping, forwarding, feedback linearization, passivation, ...). Note that in Mazenc and Praly (1996), a global controller for the model of an inverted pendulum is obtained employing forwarding techniques; this controller is studied in Section 4 and its local behavior is modified.

Considering Assumption 2, a local controller ensuring local asymptotic stabilization of the origin of System (1) can be designed. Among the controls which provide asymptotic stability, the problem of guaranteeing a certain performance can be addressed. This concept of performance can be formalized in terms of a cost to minimize. For instance, it may be of interest that the controller locally minimizes a criterion defined as the limit, when $t \to \infty$, of the functional
$J(x, u, t) = \int_0^t x(s)^T Q x(s) + u(s)^T R u(s)\, ds,$   (6)

where $Q$ and $R$ are symmetric positive definite matrices in $\mathbb{R}^{n \times n}$ and $\mathbb{R}^{p \times p}$ respectively. Under Assumption 2, a local minimizer is given by
$u = K_0 x = -R^{-1} B^T P_0 x,$   (7)
with $P_0$ the symmetric positive definite solution of the algebraic Riccati equation
$P A + A^T P - P B R^{-1} B^T P + Q = 0.$   (8)
In Section 4, this type of local controller is united with a global controller obtained by forwarding for the model of an inverted pendulum.

It has to be noticed that inequality (5) implies that, locally, $V_\infty$ is a strict control Lyapunov function. Hence this approach may fail for globally stabilizing controllers whose associated Lyapunov functions are not strict. This is for instance the case for most global controllers obtained using passivation arguments.
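As a concrete illustration of (7)-(8) (not part of the original development), the local LQ gain can be computed numerically with SciPy's continuous-time algebraic Riccati solver. The matrices used below are placeholder data for an arbitrary controllable pair, not the pendulum model of Section 4.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def local_lq_gain(A, B, Q, R):
    """Solve the Riccati equation (8) and return (P0, K0), with u = K0 x as in (7)."""
    P0 = solve_continuous_are(A, B, Q, R)   # P0 A + A^T P0 - P0 B R^-1 B^T P0 + Q = 0
    K0 = -np.linalg.solve(R, B.T @ P0)      # K0 = -R^-1 B^T P0
    return P0, K0

# Usage on an arbitrary controllable pair (placeholder data):
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
P0, K0 = local_lq_gain(A, B, Q, R)
print(np.linalg.eigvals(A + B @ K0))        # all eigenvalues have negative real part
```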

3. PROOF OF THEOREM 1

The proof of Theorem 1 is based on the tools developed in Andrieu and Prieur (2010). Consequently, we first review the result obtained in that paper.

3.1 Continuously uniting a local and a non-local controller

In Andrieu and Prieur (2010), a sufficient condition is given for the construction of a continuous control law which unites a local controller and a non-local one while preserving the global stability of the closed-loop system. This approach is based on the uniting of two Control Lyapunov Functions. One of the results obtained in that paper can be summarized as follows.

Theorem 2. (Andrieu and Prieur (2010)). Let $\phi_0 : \mathbb{R}^n \to \mathbb{R}^p$ and $\phi_\infty : \mathbb{R}^n \to \mathbb{R}^p$ be two continuous functions, $V_0 : \mathbb{R}^n \to \mathbb{R}_+$ and $V_\infty : \mathbb{R}^n \to \mathbb{R}_+$ be two $C^1$ positive definite and proper functions, and $R_0$ and $r_\infty$ be two positive real numbers such that the following holds.


i) For all $x$ in $\{x : 0 < V_0(x) \le R_0\}$,
$L_f V_0(x) + L_g V_0(x)\,\phi_0(x) < 0.$   (9)
ii) For all $x$ in $\{x : V_\infty(x) \ge r_\infty\}$,
$L_f V_\infty(x) + L_g V_\infty(x)\,\phi_\infty(x) < 0.$   (10)
iii) $\{x : V_\infty(x) > r_\infty\} \cup \{x : V_0(x) < R_0\} = \mathbb{R}^n$.
iv) For all $x$ in $\{x : V_\infty(x) > r_\infty,\ V_0(x) < R_0\}$ there exists $u_x$ in $\mathbb{R}^p$ such that
$L_f V_0(x) + L_g V_0(x)\,u_x < 0, \qquad L_f V_\infty(x) + L_g V_\infty(x)\,u_x < 0.$   (11)

Then, there exists a continuous function $\phi : \mathbb{R}^n \to \mathbb{R}^p$ which solves the uniting controller problem, i.e. such that
i) $\phi(x) = \phi_0(x)$ for all $x$ such that $V_\infty(x) \le r_\infty$;
ii) $\phi(x) = \phi_\infty(x)$ for all $x$ such that $V_0(x) \ge R_0$;
iii) the origin of the system $\dot{x} = f(x) + g(x)\phi(x)$ is a globally asymptotically stable equilibrium.

This result is not presented in this form in Andrieu and Prieur (2010) but can easily be obtained from (Andrieu and Prieur, 2010, Theorem 3.1) and (Andrieu and Prieur, 2010, Proposition 2.2). The idea of the proof in Andrieu and Prieur (2010) is to design a controller which is a continuous path going from $\phi_0(x)$ for small $x$ to $\phi_\infty(x)$ for larger values of the state. The good behavior of the trajectories in between is ensured by adding a sufficiently large term which depends on the uniting control Lyapunov function. More precisely, the function $\phi : \mathbb{R}^n \to \mathbb{R}^p$ obtained from Theorem 2, which solves the uniting controller problem, is defined as
$\phi(x) = H(x) - k\, c(x)\, L_g V(x)^T, \quad \forall x \in \mathbb{R}^n,$   (12)
where $V : \mathbb{R}^n \to \mathbb{R}_+$ is the United Control Lyapunov Function obtained from (Andrieu and Prieur, 2010, Theorem 2.1). This function unites the local and non-local CLFs $V_0$ and $V_\infty$ and is given by
$V(x) = R_0 \left[\varphi_0(V_0(x)) + \varphi_\infty(V_\infty(x))\right] V_\infty(x) + r_\infty \left[1 - \varphi_0(V_0(x)) - \varphi_\infty(V_\infty(x))\right] V_0(x),$   (13)
where $\varphi_0 : \mathbb{R}_+ \to [0, 1]$ and $\varphi_\infty : \mathbb{R}_+ \to [0, 1]$ are two continuously differentiable non-decreasing functions satisfying
$\varphi_0(s) = 0$ for $s \le r_0$, $\varphi_0(s) > 0$ for $r_0 < s < R_0$, $\varphi_0(s) = \tfrac{1}{2}$ for $s \ge R_0$,
$\varphi_\infty(s) = 0$ for $s \le r_\infty$, $\varphi_\infty(s) > 0$ for $r_\infty < s < R_\infty$, $\varphi_\infty(s) = \tfrac{1}{2}$ for $s \ge R_\infty$,   (16)
with $r_0 = \max_{\{x : V_\infty(x) \le r_\infty\}} V_0(x)$ and $R_\infty = \min_{\{x : V_0(x) \ge R_0\}} V_\infty(x)$. For instance, $\varphi_0$ and $\varphi_\infty$ can be defined as
$\varphi_0(s) = \frac{3}{2}\left(\frac{s - r_0}{R_0 - r_0}\right)^2 - \left(\frac{s - r_0}{R_0 - r_0}\right)^3, \quad s \in [r_0, R_0],$   (14)
$\varphi_\infty(s) = \frac{3}{2}\left(\frac{s - r_\infty}{R_\infty - r_\infty}\right)^2 - \left(\frac{s - r_\infty}{R_\infty - r_\infty}\right)^3, \quad s \in [r_\infty, R_\infty].$   (15)
In (12), the function $H$ continuously interpolates the two controllers $\phi_0$ and $\phi_\infty$ and is given as $H(x) = \gamma(x)\,\phi_0(x) + [1 - \gamma(x)]\,\phi_\infty(x)$, where $\gamma$ is any continuous function such that $\gamma(x) = 1$ if $V_\infty(x) \le r_\infty$ and $\gamma(x) = 0$ if $V_0(x) \ge R_0$; for instance, given $\varphi_0$ and $\varphi_\infty$ defined in (16), a possible choice is
$\gamma(x) = 1 - \varphi_0(V_0(x)) - \varphi_\infty(V_\infty(x)).$   (17)
Also, in (12) the function $c$ is any continuous function such that
$c(x) = 0$ if $V_0(x) \ge R_0$ or $V_\infty(x) \le r_\infty$, and $c(x) > 0$ if $V_0(x) < R_0$ and $V_\infty(x) > r_\infty$;   (19)
a possible choice is
$c(x) = \max\{0, (R_0 - V_0(x))(V_\infty(x) - r_\infty)\}.$   (18)
Finally, $k$ is a positive real number sufficiently large to ensure that $V$ is a Lyapunov function for the closed-loop system. The existence of such a $k$ is obtained employing compactness arguments (see analogous arguments in (Andrieu et al., 2008, Lemma 2.13)).
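For intuition only, the interpolation ingredients above can be transcribed directly into code. The sketch below (not from the paper) implements the choices (14)-(18) and evaluates the uniting feedback (12), under the assumption that $V_0$, $V_\infty$, their gradients, $\phi_0$, $\phi_\infty$ and $g$ are available as Python callables, and that the scalars $r_0 < R_0$, $r_\infty < R_\infty$ and $k$ have been chosen as described above.

```python
import numpy as np

def bump(s, a, b):
    """C^1 function as in (14)-(15): 0 for s <= a, 1/2 for s >= b; returns (value, derivative)."""
    if s <= a:
        return 0.0, 0.0
    if s >= b:
        return 0.5, 0.0
    z = (s - a) / (b - a)
    return 1.5 * z**2 - z**3, (3.0 * z - 3.0 * z**2) / (b - a)

def uniting_feedback(x, d):
    """Uniting feedback (12): phi(x) = H(x) - k c(x) (Lg V(x))^T.

    The dictionary d supplies the callables V0, Vinf, gradV0, gradVinf,
    phi0, phiinf, g and the scalars r0 < R0, rinf < Rinf, k (all assumed given).
    """
    V0, Vinf = d["V0"](x), d["Vinf"](x)
    gV0, gVinf = d["gradV0"](x), d["gradVinf"](x)
    w0, dw0 = bump(V0, d["r0"], d["R0"])            # varphi_0(V0(x)) and its derivative
    winf, dwinf = bump(Vinf, d["rinf"], d["Rinf"])  # varphi_inf(Vinf(x)) and its derivative

    gamma = 1.0 - w0 - winf                              # choice (17)
    c = max(0.0, (d["R0"] - V0) * (Vinf - d["rinf"]))    # choice (18)
    H = gamma * d["phi0"](x) + (1.0 - gamma) * d["phiinf"](x)

    # Gradient of the united CLF (13), obtained by the chain rule.
    dsum = dw0 * gV0 + dwinf * gVinf
    gradV = d["R0"] * (dsum * Vinf + (w0 + winf) * gVinf) \
          + d["rinf"] * (-dsum * V0 + (1.0 - w0 - winf) * gV0)

    return H - d["k"] * c * (d["g"](x).T @ gradV)        # Lg V(x)^T = g(x)^T grad V(x)
```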

3.2 Proof of Theorem 1

Consider $V_0(x) = x^T P_0 x$. Along the trajectories of System (1) with $u = K_0 x$, the function $V_0$ satisfies $\dot{V}_0(x) = x^T S_0 x + \Delta_0(x)$, where $S_0$ is the matrix in $\mathbb{R}^{n \times n}$ defined as $S_0 = P_0(A + BK_0) + (A + BK_0)^T P_0$, and where $\Delta_0 : \mathbb{R}^n \to \mathbb{R}$ is the $C^2$ function defined as $\Delta_0(x) = 2 x^T P_0\left[(f(x) - Ax) + (g(x) - B)K_0 x\right]$. It can be checked that Inequality (4) implies that $S_0$ is a symmetric negative definite matrix. Moreover, the function $\Delta_0$ satisfies $\Delta_0(0) = 0$, $\frac{\partial \Delta_0}{\partial x}(0) = 0$ and $\frac{\partial^2 \Delta_0}{\partial x \partial x}(0) = 0$. Hence $\Delta_0(x) = o(|x|^2)$. Consequently, $\dot{V}_0(x) < 0$ along the trajectories of System (1) with $u = K_0 x$ for all sufficiently small $x \neq 0$, so Item i) of Theorem 2 is satisfied with $R_0$ small enough. On the other hand, with Assumption 1, Item ii) of Theorem 2 is trivially satisfied for all $r_\infty > 0$. The functions $V_0$ and $V_\infty$ being proper and positive definite, Item iii) is satisfied provided $r_\infty$ is selected sufficiently small.

Now, along the trajectories of System (1) with $u = K_m x$, it holds that $\dot{V}_\infty(x) = x^T S_\infty x + \Delta_\infty(x)$, where $S_\infty$ is the matrix in $\mathbb{R}^{n \times n}$ defined as $S_\infty = P_\infty(A + BK_m) + (A + BK_m)^T P_\infty$, and where $\Delta_\infty : \mathbb{R}^n \to \mathbb{R}$ is the $C^2$ function defined as
$\Delta_\infty(x) = 2 x^T P_\infty\left[f(x) - Ax + (g(x) - B)K_m x\right] + \left[\frac{\partial V_\infty}{\partial x}(x) - 2 x^T P_\infty\right]\left(f(x) + g(x)K_m x\right).$
Note that with (5), $S_\infty$ is a symmetric negative definite matrix. Moreover, since $\Delta_\infty$ satisfies $\Delta_\infty(0) = 0$, $\frac{\partial \Delta_\infty}{\partial x}(0) = 0$ and $\frac{\partial^2 \Delta_\infty}{\partial x \partial x}(0) = 0$, it follows that $\Delta_\infty(x) = o(|x|^2)$. Consequently, along the trajectories of System (1) with $u = K_m x$, $\dot{V}_\infty(x) < 0$ for all sufficiently small $x \neq 0$. It can be checked that the same conclusion holds for the function $V_0$: along the trajectories of System (1) with $u = K_m x$, $\dot{V}_0(x) < 0$ for all sufficiently small $x \neq 0$. These two inequalities imply that the control law $u = K_m x$ assigns the two functions $V_0$ and $V_\infty$ for $x$ small enough. Hence, Item iv) of Theorem 2 is satisfied provided $R_0$ and $r_\infty$ are small enough.


With Theorem 2, it follows that there exists a continuous function $\phi$ (for instance the one defined in (12)) which makes the origin of the system $\dot{x} = f(x) + g(x)\phi(x)$ globally asymptotically stable and such that $\phi(x) = K_0 x$ for all $x$ satisfying $V_\infty(x) < r_\infty$.

4. APPLICATION TO THE INVERTED PENDULUM

The inverted pendulum is a classical model in control theory: it is open-loop unstable, high-order, multivariable and strongly coupled. It has been used as a benchmark for motivating the study of nonlinear control techniques and for illustrating many of the ideas emerging in the field of nonlinear control. The goal is to apply a control force that stabilizes the inverted pendulum at its upper equilibrium position while the cart displacement is brought to zero, in spite of disturbing base-point movements. The control law has to ensure the overall stability of the system and local optimality with respect to a given LQ cost.

4.1 Dynamical model

Consider the inverted pendulum consisting of a carriage moving in translation along a horizontal axis; the pendulum is attached to the carriage and free to rotate. The rod is assumed rigid and of negligible mass. Let $M$ be the mass of the carriage, $m$ the mass of the pendulum, $l$ the length of the rod, $\chi$ the position of the carriage from the origin, and $\theta$ the angle between the pendulum and the vertical. Using the Euler-Lagrange equations, the following differential equations are obtained:
$\ddot{\chi}\cos(\theta) + l\ddot{\theta} - g\sin(\theta) = 0,$
$(M + m)\ddot{\chi} + m l \cos(\theta)\ddot{\theta} - m l \sin(\theta)\dot{\theta}^2 = F.$   (20)

4.2 Globally stabilizing control law using forwarding

In this paragraph we are interested in the control law given by Mazenc and Praly using the forwarding (adding integrations) technique (see Mazenc and Praly (1996)). Following Mazenc and Praly (1996), the differential equations (20) of the inverted pendulum are rewritten in new coordinates and a new time:
$x = \frac{\chi}{l}, \quad v = \frac{\dot{\chi}}{\sqrt{gl}}, \quad \theta = \theta, \quad \omega = \dot{\theta}\sqrt{\frac{l}{g}}, \quad \tau = t\sqrt{\frac{g}{l}}.$   (21)
The new control variable is
$u = \frac{1}{g}\,\frac{F + m l \dot{\theta}^2 \sin\theta - m g \sin\theta\cos\theta}{M + m\sin^2\theta}.$   (22)
Consequently, the following equations are obtained (Mazenc and Praly (1996)), where the derivatives are taken with respect to the new time $\tau$:
$\dot{x} = v, \quad \dot{v} = u, \quad \dot{\theta} = \omega, \quad \dot{\omega} = \sin(\theta) - u\cos(\theta).$   (23)
The forwarding approach (Mazenc and Praly (1996)) consists of three steps:
i) stabilize the subsystem $(\theta, \omega)$;
ii) stabilize the subsystem $(v, \theta, \omega)$ by adding a first integration;
iii) stabilize the complete system by adding a final integration.
To work on an unbounded state space (i.e. on $\mathbb{R}^4$), the following change of coordinates is considered in Mazenc and Praly (1996):
$x_1 = x, \quad v_1 = v, \quad t_1 = \tan(\theta), \quad r_1 = (1 + t_1^2)\,\omega, \quad u_1 = u.$

In this case, (23) is rewritten as
$\dot{x}_1 = v_1, \quad \dot{v}_1 = u_1, \quad \dot{t}_1 = r_1,$
$\dot{r}_1 = \frac{2 t_1 r_1^2}{1 + t_1^2} + (t_1 - u_1)\sqrt{1 + t_1^2}.$   (24)
The forwarding approach consists in making a change of coordinates at each new addition of an integrator and in computing the corresponding integrals. Using this approach, the authors in Mazenc and Praly (1996) obtained the following control law, which stabilizes the system globally and asymptotically:
$\phi_\infty(x_1, v_1, t_1, r_1) = \phi_1(t_1, r_1) + \phi_2(v_2, t_1, r_1) + \phi_3(x_3, v_2, t_1, r_1),$   (25)
with
$\phi_1(t_1, r_1) = 2 t_1\left(1 + \frac{r_1^2}{(1 + t_1^2)^{3/2}}\right) + r_1,$
$v_2 = v_1 + \frac{2 r_1}{\sqrt{1 + t_1^2}} + t_1, \qquad \phi_2(v_2, t_1, r_1) = \frac{v_2}{10},$
$\phi_3(x_3, v_2, t_1, r_1) = 2\,[2 r_1 + t_1]\sqrt{1 + t_1^2} + 2 v_2\,[10 + |v_2|] + \frac{10\, x_2}{\sqrt{1 + x_2^2}},$
$x_2 = x_1 + 2\log\left(t_1 + \sqrt{1 + t_1^2}\right), \qquad x_3 = x_2 + 10 s_2 + s_1 + \frac{r_1}{\sqrt{1 + t_1^2}}.$
The associated control Lyapunov function is
$V_\infty(x_1, v_1, t_1, r_1) = 2 V_2(v_2, t_1, r_1) + \sqrt{1 + x_3^2} - 1,$   (26)
with
$V_2(v_2, t_1, r_1) = V_1(t_1, r_1) + 5 v_2^2 + \frac{1}{6}|v_2|^3,$
$V_1(t_1, r_1) = r_1^2 + \left[(1 + t_1^2)^{3/2} - 1\right] + r_1 t_1.$

With these data, it is shown in Mazenc and Praly (1996) that, along the trajectories of system (24) with the controller (25), the function $V_\infty$ defined in (26) satisfies $\dot{V}_\infty(x_1, v_1, t_1, r_1) < 0$ for all $(x_1, v_1, t_1, r_1) \neq 0$. Consequently, Assumption 1 is satisfied for System (24).

4.3 Locally optimal stabilizing control law

The matrices of the first order approximation of system (24) are given by
$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 1 \\ 0 \\ -1 \end{bmatrix}.$   (27)
Assumption 2 is satisfied for system (24). Consequently, a controller which minimizes a cost of the form (6) can be designed. As an example, the matrix $Q$ and the real number $R$ in (6) are chosen as
$Q = \mathrm{Diag}\{0.1, 0.2, 0.3, 0.4\}, \qquad R = 1.$   (28)
Solving the associated Riccati equation (8) with the routine care of Matlab yields
$P_0 = \begin{bmatrix} 0.4964 & 1.1321 & 1.4518 & 1.4484 \\ 1.1321 & 4.1682 & 5.7586 & 5.7380 \\ 1.4518 & 5.7586 & 10.6983 & 10.3387 \\ 1.4484 & 5.7380 & 10.3387 & 10.3291 \end{bmatrix},$   (29)
and the control law (7) is given by
$\phi_0(x) = [0.3162 \;\; 1.5698 \;\; 4.5801 \;\; 4.5910]\, x.$   (30)


This controller is optimal for the linearization of the model in terms of the cost (6). However, when considering the nonlinear model (24), only local asymptotic stability can be achieved with this control law.
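To visualize the purely local nature of this controller, one may integrate the transformed dynamics (24) in closed loop with $u_1 = \phi_0(x_1, v_1, t_1, r_1)$. The short SciPy sketch below is illustrative only; the initial condition is an arbitrary small perturbation, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

K0 = np.array([0.3162, 1.5698, 4.5801, 4.5910])   # local LQ gain from (30)

def closed_loop(t, z):
    """System (24) with the local feedback u1 = K0 z, z = (x1, v1, t1, r1)."""
    x1, v1, t1, r1 = z
    u1 = K0 @ z
    s = np.sqrt(1.0 + t1**2)
    return [v1, u1, r1, 2.0 * t1 * r1**2 / (1.0 + t1**2) + (t1 - u1) * s]

z0 = [0.2, 0.0, np.tan(0.1), 0.0]                  # hypothetical small initial perturbation
sol = solve_ivp(closed_loop, (0.0, 40.0), z0, max_step=0.01)
print(sol.y[:, -1])    # for a small enough perturbation the state approaches the origin
```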


4.4 Synthesis of a locally optimal and globally stabilizing control law


The aim of this subsection is to employ Theorem 1 to unite the local optimal controller (30) and the globally stabilizing controller (25). The matrix $P_\infty = \frac{\partial^2 V_\infty}{\partial x \partial x}(0)$ is given by
$P_\infty = \begin{bmatrix} 0.5 & 5.5 & 6 & 21/2 \\ 5.5 & 141/2 & 76 & 271/2 \\ 6 & 76 & 85 & 147 \\ 21/2 & 271/2 & 147 & 525/2 \end{bmatrix}.$   (31)
Using (31) and Theorem 1, the existence of a matrix $K_m$ solving the LMI (5), with $A$ and $B$ given in (27), has to be verified. Employing the Yalmip package (Löfberg (2004)) in Matlab together with the solver SeDuMi (Sturm (1999)), such a $K_m$ is found: $K_m = [0.0977 \;\; 1.3345 \;\; 3.5244 \;\; 3.6020]$. From Theorem 1, it follows that the control law given in (12) is a global stabilizer with the prescribed local behavior.

The performance of the proposed controller is evaluated in simulation. The functions $\varphi_0$, $\varphi_\infty$, $\gamma$ and $c$ are defined in (14), (15), (17) and (18) respectively, with parameters $R_0 = 15.85103$, $r_\infty = 0.0220$, $r_0 = 8.5352$, $R_\infty = 0.0409$ and $k = 10$. The evolution of the control law is shown in Fig. 1.
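For readers without Matlab, the same feasibility test can be written, for instance, with cvxpy in Python (an assumed transcription, not the authors' code); strict inequalities are replaced by a small negativity margin and any SDP-capable solver such as SCS can be used. A feasible $K_m$ returned this way need not coincide with the one reported above.

```python
import numpy as np
import cvxpy as cp

A = np.array([[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
B = np.array([[0.0], [1.0], [0.0], [-1.0]])
P0 = np.array([[0.4964, 1.1321, 1.4518, 1.4484],
               [1.1321, 4.1682, 5.7586, 5.7380],
               [1.4518, 5.7586, 10.6983, 10.3387],
               [1.4484, 5.7380, 10.3387, 10.3291]])      # from (29)
Pinf = np.array([[0.5, 5.5, 6.0, 10.5],
                 [5.5, 70.5, 76.0, 135.5],
                 [6.0, 76.0, 85.0, 147.0],
                 [10.5, 135.5, 147.0, 262.5]])           # from (31)

Km = cp.Variable((1, 4))
Acl = A + B @ Km
eps = 1e-6
constraints = [P0 @ Acl + Acl.T @ P0 << -eps * np.eye(4),
               Pinf @ Acl + Acl.T @ Pinf << -eps * np.eye(4)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status, Km.value)   # 'optimal' means the LMI (5) is feasible
```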

Fig. 2. The control laws in the interpolation region (local, global and modified forwarding controls).

Fig. 3. Evolution of the cost functions (forwarding vs. modified forwarding).


Remark 1. There exist locally optimal control laws for which the matrix inequality (5) has no solution. To evaluate the frequency of these problematic cases, a statistical study of the solvability of the LMI condition of Theorem 1 was carried out, using the data obtained from the inverted pendulum studied above. The matrix $Q$ defining the cost is taken as $Q = \Gamma^T \Gamma$, where $\Gamma$ is a random matrix with entries uniformly distributed in $[0, 1]$ (i.e. $\Gamma \in [0,1]^{4 \times 4}$), and $R = 1$. In simulation, the LMI condition associated with the corresponding local control law could be solved for only 36.24% of the sampled costs.
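A rough Monte-Carlo version of this study can be sketched as follows (an assumed reconstruction of the procedure, not the authors' script); the sample size and the tiny regularization added to keep $Q$ positive definite are arbitrary choices.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import solve_continuous_are

A = np.array([[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
B = np.array([[0.0], [1.0], [0.0], [-1.0]])
Pinf = np.array([[0.5, 5.5, 6.0, 10.5],
                 [5.5, 70.5, 76.0, 135.5],
                 [6.0, 76.0, 85.0, 147.0],
                 [10.5, 135.5, 147.0, 262.5]])

def lmi5_feasible(P0, eps=1e-6):
    """Feasibility of (5) for a given P0 and the pendulum's P_infinity."""
    Km = cp.Variable((1, 4))
    Acl = A + B @ Km
    prob = cp.Problem(cp.Minimize(0),
                      [P0 @ Acl + Acl.T @ P0 << -eps * np.eye(4),
                       Pinf @ Acl + Acl.T @ Pinf << -eps * np.eye(4)])
    prob.solve()
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

rng = np.random.default_rng(0)
trials, hits = 200, 0
for _ in range(trials):
    G = rng.uniform(0.0, 1.0, size=(4, 4))
    Q = G.T @ G + 1e-9 * np.eye(4)        # random cost as in Remark 1 (shift keeps Q > 0)
    P0 = solve_continuous_are(A, B, Q, np.eye(1))
    hits += lmi5_feasible(P0)
print(hits / trials)                       # the paper reports about 36% feasibility
```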

Fig. 1. Evolution of the control law of the modified forwarding and of its associated Lyapunov function (compared with the local and global controllers and Lyapunov functions).

As can be seen in Fig. 2 (a zoom on the interpolation between the two control laws), at time t = 18.29 s the modified forwarding control law leaves the usual forwarding control law, and at time t = 44.8 s it reaches the locally optimal control law. Note that with this approach there is no guarantee that, for all initial conditions, the cost obtained with the uniting controller is lower than the one obtained with the global controller. More precisely, there exist initial conditions for which the interpolation between the two controllers affects the cost too strongly. However, for the example shown in Fig. 3 the cost is improved.

All Matlab files can be downloaded from the website: https://sites.google.com/site/vincentandrieu/

5. EXTENSION OF THEOREM 1

To apply our approach and design a globally stabilizing and locally optimal control law, the LMI test (5) has to be solved. However, as seen in the statistical analysis of Remark 1, in most cases this is not possible. A solution to the problem, when the locally optimal controller does not satisfy the LMI test, is to use a transient Lyapunov function. Indeed, we have the following result.

Theorem 3. (Extension). Under Assumptions 1 and 2, given $P_0$ a symmetric positive definite matrix in $\mathbb{R}^{n \times n}$ and $K_0$ a matrix in $\mathbb{R}^{p \times n}$ such that
$P_0(A + BK_0) + (A + BK_0)^T P_0 < 0,$   (32)
if there exist two matrices $K_{m,1}$ and $K_{m,2}$ in $\mathbb{R}^{p \times n}$ and a positive definite matrix $P_m$ in $\mathbb{R}^{n \times n}$ such that the following matrix inequalities are satisfied:


$P_0(A + BK_{m,1}) + (A + BK_{m,1})^T P_0 < 0,$
$P_m(A + BK_{m,1}) + (A + BK_{m,1})^T P_m < 0,$
$P_m(A + BK_{m,2}) + (A + BK_{m,2})^T P_m < 0,$
$P_\infty(A + BK_{m,2}) + (A + BK_{m,2})^T P_\infty < 0,$   (33)

where $P_\infty = \frac{\partial^2 V_\infty}{\partial x \partial x}(0)$, then there exists a continuous function $\phi : \mathbb{R}^n \to \mathbb{R}^p$ such that the origin of the system $\dot{x} = f(x) + g(x)\phi(x)$ is globally asymptotically stable, and there exists a sufficiently small positive real number $r_\infty$ such that $\phi(x) = K_0 x$ for all $x$ verifying $V_m(x) < r_\infty$.

Proof: The proof of Theorem 3 is a direct consequence of Theorem 1. Indeed, if the matrix inequalities (33) are satisfied, Theorem 1 can be applied a first time (with $P_m$, $K_{m,1}$ and $K_{m,2}$ playing the roles of $P_0$, $K_0$ and $K_m$) to obtain a control law $\phi_m$ and a sufficiently small real number $r_{\infty,m}$ such that the origin of the system
$\dot{x} = f(x) + g(x)\phi_m(x)$   (34)
is globally asymptotically stable, and $\phi_m(x) = K_{m,1} x$ for all $x$ such that $V_\infty(x) < r_{\infty,m}$. Moreover, as shown in the proof of Theorem 2 (see Andrieu and Prieur (2010)), a Lyapunov function $V_m$ is obtained such that $\dot{V}_m(x) < 0$ along the trajectories of System (34) and such that $V_m(x) = x^T P_m x$ for all $x$ with $V_m(x) < r_{\infty,m}$. Consequently, with the inequalities (33), Theorem 1 can be applied a second time (now with $V_m$ and $P_m$ playing the roles of $V_\infty$ and $P_\infty$, and $K_{m,1}$ the role of $K_m$) to obtain a function $\phi$ and a sufficiently small real number $r_\infty$ such that the origin of the system $\dot{x} = f(x) + g(x)\phi(x)$ is globally asymptotically stable, and $\phi(x) = K_0 x$ for all $x$ such that $V_m(x) < r_\infty$.

It has to be noticed that this result is not in the form of a linear matrix inequality (the inequalities (33) are bilinear in $(P_m, K_{m,1}, K_{m,2})$). Therefore, it is not possible to directly test this sufficient condition with the usual LMI solvers. However, by randomly selecting the matrix $P_m$, the inequalities (33) become linear in the unknowns $K_{m,1}$ and $K_{m,2}$. Consequently, given the local controller $K_0$ and its associated Lyapunov matrix $P_0$, the following algorithm can be employed:

While the matrix inequalities (33) are not satisfied:
i) select randomly a positive definite matrix $Q_m$ in $\mathbb{R}^{n \times n}$;
ii) solve the associated Riccati equation to get a matrix $P_m$ in $\mathbb{R}^{n \times n}$ which defines a CLF;
iii) check whether the matrix inequalities (33) are satisfied.

Employing this simple algorithm, it was shown numerically that, for 1000 different pairs $(P_0, K_0)$, it was always possible to find a $P_m$ such that the matrix inequalities (33) were satisfied. Note that, with this algorithm, the maximal number of transient CLFs $P_m$ that had to be tested was 15. Consequently, it seems that with Theorem 3 it is possible to design a globally stabilizing controller whose first order approximation solves any of the sampled LQ optimal problems on this specific example.
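A possible implementation of this randomized search reads as follows (a sketch under the assumption that cvxpy with an SDP-capable solver and SciPy are available; the Gaussian sampling of $Q_m$ is an arbitrary choice, any distribution over positive definite matrices would do).

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import solve_continuous_are

def inequalities_33_feasible(A, B, P0, Pm, Pinf, eps=1e-6):
    """Check the four matrix inequalities (33) with K_{m,1}, K_{m,2} as unknowns."""
    n, p = B.shape
    Km1, Km2 = cp.Variable((p, n)), cp.Variable((p, n))
    A1, A2 = A + B @ Km1, A + B @ Km2
    cons = [P0 @ A1 + A1.T @ P0 << -eps * np.eye(n),
            Pm @ A1 + A1.T @ Pm << -eps * np.eye(n),
            Pm @ A2 + A2.T @ Pm << -eps * np.eye(n),
            Pinf @ A2 + A2.T @ Pinf << -eps * np.eye(n)]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

def find_transient_clf(A, B, P0, Pinf, max_tries=50, seed=0):
    """Randomized search for a transient CLF matrix Pm: sample Qm, solve the
    Riccati equation (8) for Pm and test (33). Returns Pm or None."""
    rng = np.random.default_rng(seed)
    n, p = B.shape
    for _ in range(max_tries):
        G = rng.standard_normal((n, n))
        Qm = G.T @ G + 1e-6 * np.eye(n)          # random positive definite weight
        Pm = solve_continuous_are(A, B, Qm, np.eye(p))
        if inequalities_33_feasible(A, B, P0, Pm, Pinf):
            return Pm
    return None
```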

6. CONCLUSION

In this paper, a method has been presented to obtain a globally stabilizing control law with an optimal local behavior. The approach is based on a recent technique developed in Andrieu and Prieur (2010). A sufficient condition in terms of an LMI is first given. This approach has been illustrated on an academic inverted pendulum problem, based on the global controller developed in Mazenc and Praly (1996). It has been shown numerically that around 36% of the local controllers could be obtained with this approach. By extending this approach and employing a transient Lyapunov function, it has then been shown that 100% of the local controllers could be obtained. The results show the interest of this technique for modifying the local behavior of a nonlinear controller, which is in practice difficult to tune. In future work, the performance of the proposed approach will be illustrated on disturbed systems.

REFERENCES

V. Andrieu and C. Prieur. Uniting two control Lyapunov functions for affine systems. IEEE Transactions on Automatic Control, 55(8):1923-1927, 2010.
V. Andrieu, L. Praly, and A. Astolfi. Homogeneous approximation, recursive observer design and output feedback. SIAM Journal on Control and Optimization, 47(4):1814-1850, 2008.
D.V. Efimov. Uniting global and local controllers under acting disturbances. Automatica, 42(3):489-495, 2006.
M. Jankovic, R. Sepulchre, and P.V. Kokotovic. Constructive Lyapunov stabilization of nonlinear cascade systems. IEEE Transactions on Automatic Control, 41(12):1723-1735, 1996.
P.V. Kokotović and M. Arcak. Constructive nonlinear control: a historical perspective. Automatica, 37(5):637-662, 2001.
M. Krstic, I. Kanellakopoulos, and P.V. Kokotovic. Nonlinear and Adaptive Control Design. John Wiley & Sons, Inc., New York, NY, USA, 1995.
J. Löfberg. Yalmip: A toolbox for modeling and optimization in MATLAB. In Proc. of the CACSD Conference, Taipei, Taiwan, 2004. http://control.ee.ethz.ch/~joloef/yalmip.php.
F. Mazenc and L. Praly. Adding integrations, saturated controls, and stabilization for feedforward systems. IEEE Transactions on Automatic Control, 41(11):1559-1578, 1996.
Z. Pan, K. Ezal, A.J. Krener, and P.V. Kokotovic. Backstepping design with local optimality matching. IEEE Transactions on Automatic Control, 46(7):1014-1027, 2001.
C. Prieur. Uniting local and global controllers with robustness to vanishing noise. Mathematics of Control, Signals, and Systems, 14(2):143-172, 2001.
J.F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11-12:625-653, 1999. http://sedumi.mcmaster.ca/.
A.R. Teel and N. Kapoor. Uniting local and global controllers. In European Control Conference (ECC'97), volume 172, Brussels, Belgium, 1997.
A.R. Teel, O.E. Kaiser, and R.M. Murray. Uniting local and global controllers for the Caltech ducted fan. In Proc. of the 16th American Control Conference, pages 1539-1543, Albuquerque, NM, 1997.
