PRACTICAL STABILIZATION OF LINEAR TIME-VARYING CONTINUOUS SYSTEMS

Germain Garcia and Sophie Tarbouriech

LAAS-CNRS, University of Toulouse, 7 Avenue du Colonel Roche, 31077 Toulouse cedex 4, France. E-mails: garcia,[email protected]

Abstract: In this paper, the problem of practical stabilization of linear time-varying continuous systems is considered. Necessary and sufficient conditions, based upon the solution to some Lyapunov differential matrix equations, are proposed for particular cases of interest. From these conditions, the design of a time-varying state feedback controller guaranteeing finite time closed-loop stability is presented. Numerical experiments illustrate the potentialities of the approach. Copyright © 2007 IFAC.

Keywords: Practical stability, Finite time stability, Lyapunov differential equation, Lyapunov theory, Linear time-varying system.

1. INTRODUCTION

In control theory, the concept of Lyapunov stability is very popular, particularly when dealing with systems described in the state space (Kalman, 1963). Numerous applications, in both control design and analysis contexts, are widely based on the concept of Lyapunov stability because it corresponds in general to a satisfactory behavior in practice, even if, from a performance point of view, other constraints have to be imposed. However, in many applications it is necessary to maintain the state under some bounds during, at least, a specific time interval. This is the case, for example, when saturations are present (Tarbouriech et al., 2007), or when a linear model resulting from the linearization of a nonlinear one around an equilibrium point is used (Khalil, 1992). The linear model being valid only around the equilibrium point, it is important to avoid large excursions of the state, so that the occurrence of the nonlinearities does not drive the system permanently far away from the equilibrium. In these situations, another interesting concept, finite time stability (FTS), can be advantageously considered. The FTS concept has

been introduced in (D'Angelo, 1970), (Dorato, 1961) and (Weiss and Infante, 1967). Furthermore, a more general concept, the so-called Practical Stability (PS) concept, has been studied in the literature (Michel and Porter, 1972), (Grujic, 1973). Some works dealing with analysis and control design for FTS problems have been published recently; see for example (Abdallah et al., 2002) and (Amato et al., 2001), in which sufficient conditions are proposed through Linear Matrix Inequality (LMI) formulations, or (Garcia and Tarbouriech, 2006), in which sufficient conditions are proposed through Riccati Differential Equations (RDE). In (Amato et al., 2003), a necessary and sufficient condition for FTS of linear systems has been developed, improving in particular the results of (Amato et al., 2001). Using classical concepts of operator theory, the condition is formulated through a differential Lyapunov inequality that a symmetric, uniformly bounded, piecewise continuously differentiable matrix function has to satisfy on a time interval, with constraints at the interval extremities. One of the main limitations is obtaining a solution to this differential inequality. More recently, the results of (Amato et al., 2003) have been extended to the time-varying continuous case in (Amato et al., 2005).

In this paper, the objective is to derive finite time stabilizing controllers for linear time-varying continuous systems by exploiting specific properties of the solutions of a Lyapunov differential matrix equation. This is one of the main differences with the technique developed in (Amato et al., 2003) and (Amato et al., 2005). Hence, a necessary and sufficient condition for stability analysis of linear time-varying systems is first derived. This condition, expressed through the solution of a Lyapunov differential matrix equation, is then used to derive a state feedback controller, when it exists, which finite time stabilizes the system. This controller is a state feedback with time-varying gains. Indeed, according to the definition of finite time stability considered, the controller maintains the closed-loop trajectories in a certain domain (of ellipsoidal or polyhedral shape) during a specified interval of time, provided that the initial state belongs to a given ellipsoidal set.

The paper is organized as follows. In Section 2, the system under consideration and the problem we intend to solve are described. Section 3 is dedicated to the stability analysis results, whereas Section 4 addresses the stabilization problem by state feedback. Section 5 presents some numerical experiments to illustrate the potentialities of the proposed approach. Finally, the paper ends with some concluding remarks and perspectives.

Notation. Notation is standard. The Euclidean vector norm is denoted by ‖·‖. For two symmetric matrices A and B, A > B means that A − B is positive definite. When no confusion is possible, identity and null matrices are denoted by I and 0, respectively. Ω is an interval of time, defined for example by Ω = [0, T] with a given T > 0. From this, one can define the set C(Ω, S) as C(Ω, S) = {f : Ω → S ; f is continuous on Ω}. λ_i(A), λ_min(A) and λ_max(A) denote the ith, minimal and maximal eigenvalue of matrix A, respectively.

2. PROBLEM STATEMENT

Consider the linear time-varying continuous system described by:

ẋ(t) = A(t)x(t) + B(t)u(t), x(t0) = x0    (1)

where A(t) ∈ C(Ω, R^{n×n}) and B(t) ∈ C(Ω, R^{n×m}), t0 ∈ Ω. System (1) is supposed to be completely state reachable at time t > t0. A necessary and sufficient condition for complete state reachability at time t is that the rank of the reachability grammian defined by:

W(t0, t) ≜ ∫_{t0}^{t} Φ(t, s)B(s)B(s)′Φ(t, s)′ ds    (2)

is equal to n (Antsaklis and Michel, 1997), with Φ(t, s) the state transition matrix.

In some cases, it can be necessary to maintain the state under some bounds during a specific time interval (Amato et al., 2005), (Garcia and Tarbouriech, 2006), (Tarbouriech et al., 2007). Note that in such cases, the FTS concept can be invoked (Dorato, 1961). Let us consider the family of compact sets in R^n, denoted Et and parameterized by t ∈ Ω, such that x = 0 ∈ Et. Consider also a compact set I0, containing x = 0 and such that I0 ⊆ Et0. We introduce for system (1) the following concept of stability (Garashchenko, 1987).

Definition 1. System (1) with u = 0 is said to be (I0, Et, Ω)-stable if for any x0 ∈ I0 we have:

x(t, t0, x0) ∈ Et for T ≥ t > t0 ≥ 0    (3)

or, equivalently, the solution x(t) = 0 of system (1) is said to be (I0, Et, Ω)-stable.

Definition 1, which is stated in a relatively general way, can be particularized to some interesting cases. For example, ellipsoidal and polyhedral domains are often used to bound the state of a system subject to constraints (Tarbouriech et al., 2007). Therefore, one can consider the following ellipsoidal sets I0 and Et:

I0 = {x ∈ R^n ; x′S0x ≤ ρ0², S0 = S0′ > 0, ρ0 > 0}    (4)

Et = {x ∈ R^n ; x′R(t)x ≤ ρ(t)², R(t) = R(t)′ > 0, ρ(t) > 0, ∀t ∈ Ω}    (5)

As mentioned above, other interesting sets Et can be defined, such as the polyhedral set:

Et = {x ∈ R^n ; |fi(t)′x| ≤ 1 for all t ∈ Ω, i = 1, ..., N}    (6)

where fi(t) ∈ C(Ω, R^{n×1}), i = 1, ..., N.

Remark 1. In the case where the sets Et and I0 are defined with R(t) = I, ρ(t) = γ > 0, ∀t, S0 = I, and ρ0 = δ, Definition 1 is exactly the definition of Finite Time Stability introduced in (Amato et al., 2003), (Garcia and Tarbouriech, 2006).

The problem we intend to solve in this paper can be summarized as follows.

Problem 1. Given an interval Ω and two sets I0 and Et, determine a gain K(t) ∈ C(Ω, R^{m×n}) such that, with the state feedback controller

u(t) = K(t)x(t)    (7)

the closed-loop system

ẋ(t) = (A(t) + B(t)K(t))x(t)    (8)

is (I0, Et, Ω)-stable.

Problem 1 is referred to as the (I0, Et, Ω)-Stabilization Problem.
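The reachability grammian (2) lends itself to direct numerical evaluation. The sketch below (Python/NumPy; not from the paper, and all function names are our own) integrates the state transition matrix Φ(t, s) with a fixed-step RK4 scheme and approximates the integral in (2) by the trapezoidal rule; a W(t0, t) of full rank n then certifies complete state reachability at time t.

```python
import numpy as np

def transition_matrix(A, s, t, steps=400):
    """Integrate dPhi/dtau = A(tau) Phi from tau = s to tau = t, Phi(s, s) = I, by RK4."""
    n = A(s).shape[0]
    Phi = np.eye(n)
    h = (t - s) / steps
    tau = s
    for _ in range(steps):
        k1 = A(tau) @ Phi
        k2 = A(tau + h/2) @ (Phi + h/2 * k1)
        k3 = A(tau + h/2) @ (Phi + h/2 * k2)
        k4 = A(tau + h) @ (Phi + h * k3)
        Phi = Phi + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        tau += h
    return Phi

def reachability_grammian(A, B, t0, t, steps=200):
    """W(t0, t) = int_{t0}^{t} Phi(t, s) B(s) B(s)' Phi(t, s)' ds  (trapezoidal rule, eq. (2))."""
    n = A(t0).shape[0]
    ss = np.linspace(t0, t, steps + 1)
    vals = []
    for s in ss:
        M = transition_matrix(A, s, t) @ B(s)
        vals.append(M @ M.T)
    W = np.zeros((n, n))
    h = (t - t0) / steps
    for i in range(steps):
        W += h/2 * (vals[i] + vals[i+1])
    return W

# time-invariant double integrator: reachable, so W must be positive definite
A = lambda t: np.array([[0.0, 1.0], [0.0, 0.0]])
B = lambda t: np.array([[0.0], [1.0]])
W = reachability_grammian(A, B, 0.0, 1.0)
assert np.linalg.matrix_rank(W) == 2
```

For the double integrator used here, the exact grammian over [0, 1] is [[1/3, 1/2], [1/2, 1]], which the quadrature reproduces to a few decimals.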

3. PRELIMINARY RESULTS: STABILITY ANALYSIS PROBLEM

In this section, we address the problem of stability analysis of system (1) with u = 0, which can be written:

ẋ(t) = A(t)x(t), x(t0) = x0    (9)

where A(t) ∈ C(Ω, R^{n×n}), t0 ∈ Ω. In other words, we want to study the following sub-problem of Problem 1.

Problem 2. Given an interval Ω and two sets I0 and Et, determine conditions such that system (9) is (I0, Et, Ω)-stable.

According to Definition 1, suppose that the sets I0 and Et are given and defined by (4) and (5). Indeed, the symmetric positive definite matrices S0 and R(t), and the positive scalars ρ0 and ρ(t), are known for all t ∈ Ω. The following theorem can be stated.

Theorem 1. Suppose that I0 ⊆ Et0. System (9) is (I0, Et, Ω)-stable if and only if ρ0 ≤ δ* and ρ(t) ≥ ρ*(t) for all t ∈ Ω, with δ* and ρ*(t) defined by

δ* = min_{t∈Ω} ρ(t) λ_min^{1/2}[R(t)^{−1/2} Y(t0, t) R(t)^{−1/2}]
   = min_{t∈Ω} ρ(t) λ_max^{−1/2}[R(t)^{1/2} X(t, t0) R(t)^{1/2}]    (10)

ρ*(t) = ρ0 λ_min^{−1/2}[R(t)^{−1/2} Y(t0, t) R(t)^{−1/2}]
      = ρ0 λ_max^{1/2}[R(t)^{1/2} X(t, t0) R(t)^{1/2}]    (11)

where matrices X(t, t0) and Y(t0, t) are respectively the solutions to the Lyapunov differential matrix equations:

∂/∂t X(t, t0) = A(t)X(t, t0) + X(t, t0)A(t)′, X(t0, t0) = S0^{−1}    (12)

and

∂/∂t Y(t0, t) = −A(t)′Y(t0, t) − Y(t0, t)A(t), Y(t0, t0) = S0    (13)

Proof. Let us start with the proof of relation (11). Consider the following optimization problem:

max_{x(t)} ρ(t)² = x(t)′R(t)x(t) subject to x0′S0x0 ≤ ρ0², S0 = S0′ > 0

This problem corresponds to characterizing the smallest ellipsoid Et of the form (5) containing the state at time t for all initial conditions taken in the ellipsoid I0, defined in (4). An equivalent formulation involving only the initial condition x0 is obtained as follows:

max_{x0} ρ(t)² = x0′Φ(t, t0)′R(t)Φ(t, t0)x0 subject to x0′S0x0 ≤ ρ0², S0 = S0′ > 0

The Lagrangian of this optimization problem reads:

L(x0, β) = x0′Φ(t, t0)′R(t)Φ(t, t0)x0 + β[x0′S0x0 − ρ0²]

with β ≥ 0. Then, by computing the optimality conditions, one gets:

∂L/∂x0 = 2Φ(t, t0)′R(t)Φ(t, t0)x0 + 2βS0x0 = 0
⇔ βΦ(t, t0)′R(t)^{1/2}[R(t)^{−1/2}Φ(t0, t)′S0Φ(t0, t)R(t)^{−1/2} + β^{−1}I]R(t)^{1/2}Φ(t, t0)x0 = 0    (14)

and

∂L/∂β = x0′S0x0 − ρ0² = 0    (15)

To obtain a solution x0 ≠ 0 to these two conditions, it is necessary to have

det[R(t)^{−1/2}Φ(t0, t)′S0Φ(t0, t)R(t)^{−1/2} + β^{−1}I] = 0    (16)

Multiplying condition (14) on the left by x0′, it follows:

x0′Φ(t, t0)′R(t)Φ(t, t0)x0 + βx0′S0x0 = 0

and therefore one can deduce from (15) that

ρ(t)² = −ρ0²β    (17)

Moreover, from (16), it follows that −β^{−1} is an eigenvalue of matrix R(t)^{−1/2}Φ(t0, t)′S0Φ(t0, t)R(t)^{−1/2}. Then, from (17) it follows:

ρ*(t)² = max ρ(t)² = ρ0² λ_min^{−1}[R(t)^{−1/2}Φ(t0, t)′S0Φ(t0, t)R(t)^{−1/2}] = ρ0² λ_max[R(t)^{1/2}X(t, t0)R(t)^{1/2}]

If ρ(t) ≥ ρ*(t) for all t ∈ Ω, then system (9) is (I0, Et, Ω)-stable and the sufficiency part of relation (11) is proven. To prove the necessity part, suppose that system (9) is (I0, Et, Ω)-stable while ρ(t) < ρ*(t) for some t ∈ Ω. By definition of ρ*(t), there then exist both x0 ∈ I0 and t ∈ Ω such that x(t) does not belong to the ellipsoid Et defined in (5). Consequently, from Definition 1, system (9) is not (I0, Et, Ω)-stable, which leads to a contradiction. This proves the necessity part.

Now consider relation (10). Let us define the following function:

ρ(t) = δ(t) λ_min^{−1/2}[R(t)^{−1/2}Y(t0, t)R(t)^{−1/2}]

with δ(t) > 0, and the sets:

{x ∈ R^n ; x′S0x ≤ δ(t)², S0 = S0′ > 0}

From the first part of the proof, each such set represents the set of initial conditions such that x(t) ∈ Et, defined in (5), at time t ∈ Ω. Then, for all t ∈ Ω, if the initial condition x0 belongs to the set:

∩_{t∈Ω} {x ∈ R^n ; x′S0x ≤ δ(t)², S0 = S0′ > 0} = {x ∈ R^n ; x′S0x ≤ δ*², S0 = S0′ > 0}

with

δ* = min_{t∈Ω} ρ(t) λ_min^{1/2}[R(t)^{−1/2}Y(t0, t)R(t)^{−1/2}]

then it follows that x(t) ∈ Et, defined in (5), for all t ∈ Ω. The necessity of condition (10) can be proved by contradiction as in the first part of the proof.

Some particular cases of Theorem 1 can be considered, depending on the choice of matrices S0, R(t) and scalars ρ0, ρ(t) when the sets I0 and Et are defined by (4) and (5): for example, by considering the case invoked in Remark 1. Another interesting extension of Theorem 1 corresponds to the case where Et is a polyhedral set defined by (6). Thus, suppose that the symmetric positive definite matrix S0, the positive scalar ρ0 and the matrices fi(t), i = 1, ..., N, are known for all t ∈ Ω. The following theorem can then be stated.

Theorem 2. Suppose that I0 ⊆ Et0. System (9) is (I0, Et, Ω)-stable if and only if ρ0 ≤ δ*, with δ* defined by

δ* = min_{t∈Ω} min_{i=1,...,N} 1 / √(fi(t)′X(t, t0)fi(t))    (18)
   = min_{t∈Ω} min_{i=1,...,N} 1 / √(fi(t)′[Y(t0, t)]^{−1}fi(t))    (19)

where matrices X(t, t0) and Y(t0, t) are respectively the solutions of the Lyapunov differential matrix equations (12) and (13).

Proof. The proof is similar to the one of Theorem 1.

4. MAIN RESULTS: STABILIZATION PROBLEM

In this section, we propose solutions to Problem 1 in both cases where Et is an ellipsoidal and a polyhedral set. We first investigate some properties of the solution of the following ε-parameterized Lyapunov differential matrix equation:

∂/∂t Zε(t, t0) = A(t)Zε(t, t0) + Zε(t, t0)A(t)′ − εB(t)B(t)′, Zε(t0, t0) = S0^{−1}, ε > 0 and S0 = S0′ > 0    (20)

Lemma 1. The solution Zε(t, t0) to equation (20) satisfies:

(1) For all ε1 and ε2, 0 < ε1 < ε2, it follows that Zε2(t, t0) ≤ Zε1(t, t0) < X(t, t0) for all t0, t ∈ Ω, where X(t, t0) is the solution to equation (12).

(2) For T > 0, there exists a value of ε, denoted ε(T) and defined by

ε(T) = λ_max^{−1}[∫_{t0}^{T} S0^{1/2}Φ(t0, s)B(s)B(s)′Φ(t0, s)′S0^{1/2} ds]    (21)

such that for all ε ∈ ]0, ε(T)[, one gets Zε(t, t0) > 0 for all t0, t ∈ Ω.

Proof. The solution to equation (20) can be written:

Zε(t, t0) = X(t, t0) − εW(t0, t)

where X(t, t0) is the solution to equation (12) and W(t0, t) is defined in (2). By definition, X(t, t0) and W(t0, t) are symmetric positive definite matrices for all t. Hence, for all ε > 0, it follows that Zε(t, t0) < X(t, t0). From the definition of W(t0, t), it is clear that for 0 < ε1 < ε2 one gets Zε2(t, t0) ≤ Zε1(t, t0). Point 1 is then proven. To prove point 2, let us rewrite Zε(t, t0) as:

Zε(t, t0) = εΦ(t, t0)S0^{−1/2}[ε^{−1}I − S0^{1/2}Φ(t0, t)W(t0, t)Φ(t0, t)′S0^{1/2}]S0^{−1/2}Φ(t, t0)′

Thus, Zε(t, t0) is a positive definite matrix for a given t if the following inequality is satisfied:

ε^{−1}I > S0^{1/2}Φ(t0, t)W(t0, t)Φ(t0, t)′S0^{1/2} = ∫_{t0}^{t} S0^{1/2}Φ(t0, s)B(s)B(s)′Φ(t0, s)′S0^{1/2} ds

or equivalently, one gets Zε(t, t0) > 0 for all t0, t ∈ Ω if ε satisfies, for all t0, t ∈ Ω:

εI < [∫_{t0}^{t} S0^{1/2}Φ(t0, s)B(s)B(s)′Φ(t0, s)′S0^{1/2} ds]^{−1}

Noting that the integral on the right-hand side increases with t, its value at t = T defines ε(T).

A practical way of computing ε(T) is proposed in the following lemma.

Lemma 2. The value of ε(T) is given by:

ε(T) = λ_max^{−1}[−S0^{1/2}Φ(t0, T)S(T)Φ(t0, T)′S0^{1/2}]    (22)

where Φ(t0, t) is the solution to

∂/∂t Φ(t0, t) = −Φ(t0, t)A(t), Φ(t0, t0) = I, for all t0, t ∈ Ω    (23)

and S(t) is the solution to the following Lyapunov differential matrix equation:

Ṡ(t) = A(t)S(t) + S(t)A(t)′ − B(t)B(t)′, S(t0) = 0, t0, t ∈ Ω    (24)

Proof. The solution to equation (24) evaluated at t = T reads:

S(T) = −∫_{t0}^{T} Φ(T, s)B(s)B(s)′Φ(T, s)′ ds

From the properties of Φ(t0, t), the result follows.

Based on Lemmas 1 and 2, the control law investigated to solve Problem 1 is defined by

u(t) = K(t)x(t) = −(ε/2)B(t)′Zε^{−1}(t, t0)x(t)    (25)

with ε ∈ ]0, ε(T)[ and Zε(t, t0) the solution of the Lyapunov differential matrix equation (20). Note that, considering point 2 of Lemma 1, this family of controllers is well-defined.
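The quantity ε(T) and the positivity of Zε(t, t0) can be checked numerically by integrating (23), (24) and (20). The following Python/NumPy sketch is our own construction (step counts and function names are arbitrary): it evaluates ε(T) via Lemma 2 for a time-invariant double integrator and then checks point 2 of Lemma 1, namely that Zε stays positive definite for ε below ε(T) and loses definiteness at t = T for ε above it.

```python
import numpy as np

def rk4(f, M0, t0, T, steps):
    """Fixed-step RK4 for a matrix-valued ODE; returns the whole trajectory."""
    M, h, traj = M0, (T - t0)/steps, [M0]
    for i in range(steps):
        t = t0 + i*h
        k1 = f(t, M); k2 = f(t + h/2, M + h/2*k1)
        k3 = f(t + h/2, M + h/2*k2); k4 = f(t + h, M + h*k3)
        M = M + h/6*(k1 + 2*k2 + 2*k3 + k4)
        traj.append(M)
    return traj

def eps_T(A, B, S0, t0, T, steps=400):
    """Lemma 2: eps(T) = 1 / lmax[-S0^{1/2} Phi(t0,T) S(T) Phi(t0,T)' S0^{1/2}]."""
    n = S0.shape[0]
    # integrate Phi(t0,t) (eq. (23)) and S(t) (eq. (24)) stacked side by side
    def f(t, M):
        Phi, S = M[:, :n], M[:, n:]
        return np.hstack([-Phi @ A(t), A(t) @ S + S @ A(t).T - B(t) @ B(t).T])
    M = rk4(f, np.hstack([np.eye(n), np.zeros((n, n))]), t0, T, steps)[-1]
    Phi, S = M[:, :n], M[:, n:]
    w, V = np.linalg.eigh(S0)
    S0h = V @ np.diag(np.sqrt(w)) @ V.T          # symmetric square root of S0
    return 1.0 / np.linalg.eigvalsh(-S0h @ Phi @ S @ Phi.T @ S0h).max()

def Z_trajectory(A, B, S0, eps, t0, T, steps=400):
    """Integrate the eps-parameterized equation (20), Z(t0) = S0^{-1}."""
    f = lambda t, Z: A(t) @ Z + Z @ A(t).T - eps * B(t) @ B(t).T
    return rk4(f, np.linalg.inv(S0), t0, T, steps)

# time-invariant double integrator over [0, 1]
A = lambda t: np.array([[0.0, 1.0], [0.0, 0.0]])
B = lambda t: np.array([[0.0], [1.0]])
e = eps_T(A, B, np.eye(2), 0.0, 1.0)
# Lemma 1, point 2: Z stays positive definite for eps below eps(T) ...
ok = all(np.linalg.eigvalsh(Z).min() > 0
         for Z in Z_trajectory(A, B, np.eye(2), 0.9*e, 0.0, 1.0))
# ... and loses definiteness at t = T for eps above eps(T)
bad = np.linalg.eigvalsh(Z_trajectory(A, B, np.eye(2), 1.5*e, 0.0, 1.0)[-1]).min() < 0
```

The gain of the control law (25) would then be K(t) = −(ε/2)B(t)′Zε^{−1}(t, t0), evaluated along the stored trajectory of Zε.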

4.1 Et is an ellipsoidal set

In this subsection, the sets I0 and Et are of ellipsoidal shape and therefore defined by (4) and (5), respectively. Indeed, the symmetric positive definite matrices S0 and R(t), and the positive scalars ρ0 and ρ(t), are known for all t ∈ Ω. A solution to Problem 1 is then stated in the following theorem.

Theorem 3. Suppose that I0 ⊆ Et0. Consider δ* and ρ*(t) defined in relations (10) and (11). If there exists ε* ∈ ]0, ε(T)[ such that ρ0 ≤ δ*K(ε*) and ρ(t) ≥ ρ*K(t, ε*) for all t ∈ Ω, where δ*K(ε) and ρ*K(t, ε) are defined as follows:

δ*K(ε) = min_{t∈Ω} ρ(t) λ_max^{−1/2}[R(t)^{1/2}Zε(t, t0)R(t)^{1/2}] > δ*    (26)

ρ*K(t, ε) = ρ0 λ_max^{1/2}[R(t)^{1/2}Zε(t, t0)R(t)^{1/2}]    (27)

and ρ*K(t, ε) satisfies, for all ε1 and ε2 ∈ ]0, ε(T)[ with ε1 < ε2:

ρ*K(t, ε2) ≤ ρ*K(t, ε1) ≤ ρ*(t) for all t ∈ Ω    (28)

then the control (25) solves Problem 1 for all ε ∈ [ε*, ε(T)[.

Proof. From (20), we can write:

∂/∂t Zε(t, t0) = [A(t) − (ε/2)B(t)B(t)′Zε^{−1}(t, t0)]Zε(t, t0) + Zε(t, t0)[A(t) − (ε/2)B(t)B(t)′Zε^{−1}(t, t0)]′, Zε(t0, t0) = S0^{−1}, ε ∈ ]0, ε(T)[

From point 1 in Lemma 1, it follows that for all ε ∈ ]0, ε(T)[, one gets Zε(t, t0) < X(t, t0) for all t0, t ∈ Ω. By using Weyl's monotonicity principle (see (Stewart and Sun, 1990), Corollary IV.4.9), one gets:

λ_i[Zε(t, t0)] < λ_i[X(t, t0)], i = 1, ..., n, for all t ∈ Ω

In particular, the following inequality is satisfied:

λ_max[Zε(t, t0)] < λ_max[X(t, t0)], for all t0, t ∈ Ω

Then, for all ε ∈ ]0, ε(T)[, it follows that δ*K(ε) > δ* and ρ*K(t, ε2) ≤ ρ*K(t, ε1) ≤ ρ*(t) for all t ∈ Ω and ε1 < ε2. Combining these facts with the results of Theorem 1, the result of Theorem 3 follows.

Remark 2. The limit of the control gain is attained for values of ε close to ε(T). For ε = ε(T), note that the resulting matrix Zε(t, t0) is singular, implying that the control gain (25) is not defined.

4.2 Et is a polyhedral set

In this subsection, the sets I0 and Et are supposed to be defined by (4) and (6), respectively. Thus, the symmetric positive definite matrix S0, the positive scalar ρ0 and the matrices fi(t), i = 1, ..., N, are known for all t ∈ Ω. A solution to Problem 1 is then stated in the following theorem.

Theorem 4. Suppose that I0 ⊆ Et0. If there exists ε* ∈ ]0, ε(T)[ such that ρ0 ≤ δ*K(ε*), where δ*K(ε) is defined by

δ*K(ε) = min_{t∈Ω} min_{i=1,...,N} 1 / √(fi(t)′Zε(t, t0)fi(t)) > δ*

with δ* defined in relation (18), then the control (25) solves Problem 1 for all ε ∈ [ε*, ε(T)[.

Proof. The proof is similar to the one of Theorem 3.

Remark 3. The results of Theorems 3 or 4 can be used in order to improve the property of (I0, Et, Ω)-stability, in the sense that a controller (25) can be computed in order to increase the size of the admissible sets I0 and Et for which the closed-loop system (8) is (I0, Et, Ω)-stable, with respect to the sets obtained in the open-loop case.

5. NUMERICAL ISSUES

In this section, an example is proposed to illustrate how the results of both stability analysis and control design can be used. Consider the linear time-varying continuous system (1) described by the following data:

A(t) = [ 0  1 ; 1  −2cos(√6 t) ], B(t) = [ 1 ; t sin(10t) ]

with Ω = [0, 2] and t0 = 0. Consider R(t) = I, ρ0 = 1 and S0 = I.

• Stability analysis case. The open-loop behavior is characterized by the function ρ*(t) represented in Figure 1 (ε = 0), and one obtains δ* = 0.3822. Figure 1 depicts the norm of the state for 50 initial conditions taken in I0 and generated randomly. There is no gap with the bound ρ*(t).

Fig. 1. Evolution of the norm of the state for 50 initial conditions of norm equal to 1, randomly generated
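The quantities of Theorem 1 underlying such an analysis can be evaluated by integrating (12) on a time grid. The sketch below (Python/NumPy, our own construction; function names and step counts are arbitrary) computes ρ*(t) from (11) and δ* from (10), and is checked on a case solvable by hand (A = 0, S0 = R = I, ρ0 = 1, ρ(t) = 2, for which X(t) = I, ρ*(t) = 1 and δ* = 2).

```python
import numpy as np

def sqrtm_sym(M):
    """Symmetric positive square root via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

def theorem1_bounds(A, R, rho, S0, rho0, t0, T, steps=500):
    """Integrate (12) by RK4 and evaluate (10)-(11) on the time grid."""
    X = np.linalg.inv(S0)                       # X(t0, t0) = S0^{-1}
    h = (T - t0) / steps
    f = lambda t, X: A(t) @ X + X @ A(t).T      # eq. (12)
    ts = [t0 + i*h for i in range(steps + 1)]
    rho_star, delta_cands = [], []
    for i in range(steps + 1):
        Rh = sqrtm_sym(R(ts[i]))
        lmax = np.linalg.eigvalsh(Rh @ X @ Rh).max()
        rho_star.append(rho0 * np.sqrt(lmax))          # eq. (11)
        delta_cands.append(rho(ts[i]) / np.sqrt(lmax)) # candidate for eq. (10)
        if i < steps:
            k1 = f(ts[i], X); k2 = f(ts[i] + h/2, X + h/2*k1)
            k3 = f(ts[i] + h/2, X + h/2*k2); k4 = f(ts[i] + h, X + h*k3)
            X = X + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return np.array(rho_star), min(delta_cands)

# hand-solvable sanity check: A = 0, S0 = R = I, rho0 = 1, rho(t) = 2
A = lambda t: np.zeros((2, 2))
R = lambda t: np.eye(2)
rho_star, delta_star = theorem1_bounds(A, R, lambda t: 2.0, np.eye(2), 1.0, 0.0, 2.0)
# here X(t) = I for all t, so rho_star(t) = 1 and delta_star = 2
```

Applying the same routine to the example data above would produce the curve ρ*(t) of Figure 1, up to discretization error.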

• Control design case. The value of ε(T) is equal to 0.5216. Computing δ*K(ε) for three values of ε (ε = 0.3, ε = 0.4 and ε = 0.5), one obtains δ*K(0.3) = 0.5612, δ*K(0.4) = 0.7106 and δ*K(0.5) = 0.8346. It appears (in accordance with Remark 3) that the control significantly improves the open-loop behavior. The functions ρ*K(t, 0.3), ρ*K(t, 0.4) and ρ*K(t, 0.5) are represented in Figure 2 and the corresponding control gains in Figures 3 and 4.
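The quantity δ*K(ε) from (26) can be approximated by integrating (20) for several values of ε. The sketch below (Python/NumPy) uses our reconstruction of the example data; the placement of the square root inside A(t) and the choice ρ(t) = 1 are assumptions, so we only check the qualitative property guaranteed by point 1 of Lemma 1, namely that δ*K(ε) does not decrease as ε grows above the open-loop δ*, rather than the paper's exact numbers.

```python
import numpy as np

# Reconstructed example data (the sqrt placement in A(t) and rho(t) = 1
# are assumptions; the paper's values such as delta* = 0.3822 are not asserted)
A = lambda t: np.array([[0.0, 1.0], [1.0, -2.0*np.cos(np.sqrt(6.0)*t)]])
B = lambda t: np.array([[1.0], [t*np.sin(10.0*t)]])
S0, t0, T = np.eye(2), 0.0, 2.0

def rk4(f, M0, t0, T, steps=800):
    """Fixed-step RK4 for a matrix-valued ODE; returns the whole trajectory."""
    M, h, traj = M0, (T - t0)/steps, [M0]
    for i in range(steps):
        t = t0 + i*h
        k1 = f(t, M); k2 = f(t + h/2, M + h/2*k1)
        k3 = f(t + h/2, M + h/2*k2); k4 = f(t + h, M + h*k3)
        M = M + h/6*(k1 + 2*k2 + 2*k3 + k4)
        traj.append(M)
    return traj

def delta_star_K(eps):
    """delta*_K(eps) = min_t lmax^{-1/2}[Z_eps(t, t0)] for R(t) = I, rho(t) = 1
    (eq. (26)); eps = 0 recovers the open-loop delta* of Theorem 1 (eq. (10))."""
    f = lambda t, Z: A(t) @ Z + Z @ A(t).T - eps * B(t) @ B(t).T  # eq. (20)
    return min(1.0/np.sqrt(np.linalg.eigvalsh(Z).max())
               for Z in rk4(f, np.linalg.inv(S0), t0, T))

d0, d3, d4 = delta_star_K(0.0), delta_star_K(0.3), delta_star_K(0.4)
```

Since Zε2 ≤ Zε1 < X for ε1 < ε2 (Lemma 1), the three values satisfy d0 ≤ d3 ≤ d4, mirroring the increasing sequence δ*, δ*K(0.3), δ*K(0.4) reported above.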

Fig. 2. ρ*K(t, ε), for ε = 0 (ρ*(t)), 0.3, 0.4, 0.5


Fig. 3. Variation of the control gains for ε = 0.3


Fig. 4. Variation of the control gains for ε = 0.4

6. CONCLUSION

In this paper, the problem of finite time stabilization of linear time-varying continuous systems has been considered. After deriving a necessary and sufficient condition for finite time stability, expressed through the solution of a Lyapunov differential matrix equation, a time-varying state feedback controller has been proposed. In all cases, this controller improves the open-loop finite time stability properties. A numerical example illustrates the potentialities of the proposed theory. Some interesting points not solved in this paper concern the finite time stabilization of uncertain systems and the extension of the proposed approach to time-varying discrete systems. These points will be investigated in the near future.

REFERENCES

Abdallah, C.T., F. Amato, M. Ariola, P. Dorato and V. Koltchinsky (2002). Statistical learning methods in linear algebra and control problems: the example of finite-time control of uncertain linear systems. Linear Algebra and its Applications 351-352, 11–26.

Amato, F., M. Ariola and C. Cosentino (2005). Finite-time control of linear time-varying systems via output feedback. In: Proc. of the American Control Conference. Portland, OR, USA. pp. 4722–4726.

Amato, F., M. Ariola and P. Dorato (2001). Finite-time control of linear systems subject to parametric uncertainties and disturbances. Automatica 37(9), 1459–1463.

Amato, F., M. Ariola, C. Cosentino, C.T. Abdallah and P. Dorato (2003). Necessary and sufficient condition for finite-time stability of linear systems. In: Proc. of the American Control Conference. Denver, Colorado, USA. pp. 4452–4456.

Antsaklis, P.J. and A.N. Michel (1997). Linear Systems. The McGraw-Hill Companies Inc.

D'Angelo, H. (1970). Linear Time-Varying Systems: Analysis and Synthesis. Allyn and Bacon.

Dorato, P. (1961). Short time stability in linear time-varying systems. Proc. IRE International Convention Record, Part 4, pp. 83–87.

Garashchenko, F.G. (1987). Study of practical stability problems by numerical methods and optimization of beam dynamics. Prikl. Matem. Mekhan. 51(6), 559–564.

Garcia, G. and S. Tarbouriech (2006). Finite time stabilization of linear systems. In: Proc. of the IFAC Workshop on Control Applications of Optimisation (CAO'06). Cachan, France.

Grujic, L.T. (1973). On practical stability. International Journal of Control 17(4), 881–887.

Kalman, R.E. (1963). Lyapunov functions for the problem of Lur'e in automatic control. Proc. Nat. Acad. Sci. USA 49(2), 201–205.

Khalil, H.K. (1992). Nonlinear Systems. MacMillan.

Michel, A.N. and D.W. Porter (1972). Practical stability and finite time stability of discontinuous systems. IEEE Trans. Circ. Theory 19(2), 123–129.

Stewart, G.W. and J.G. Sun (1990). Matrix Perturbation Theory. Academic Press, Boston, MA.

Tarbouriech, S., G. Garcia and A.H. Glattfelder (Eds.) (2007). Advanced Strategies in Control Systems with Input and Output Constraints. Vol. 346, Springer Verlag, LNCIS.

Weiss, L. and E.F. Infante (1967). Finite time stability under perturbing forces and on product spaces. IEEE Trans. Auto. Contr. 12, 54–59.