Adaptive control of nonlinearly parameterized systems with a triangular structure


Automatica 38 (2002) 115–123

www.elsevier.com/locate/automatica

Brief Paper

Aleksandar Kojić^a, Anuradha M. Annaswamy^b,*

^a Adaptive Control Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
^b Adaptive Control Laboratory, rm 3-461b, Massachusetts Institute of Technology, Cambridge, MA 02139, USA

Received 30 December 1999; revised 28 October 2000; received in final form 18 June 2001

Abstract

This paper deals with adaptive control of a class of nonlinear systems with a triangular structure and nonlinear parameterization. In Kojić et al. [(Systems Control Lett. 37 (1999) 267)] it was shown that a class of second-order nonlinearly parameterized systems can be adaptively controlled in a globally stable manner. In this paper, we extend our approach to all nth-order systems that have a triangular structure. Global boundedness and convergence to within a desired precision ε is established for both regulation and tracking. Extensions to cascaded systems containing linear dynamics and static nonlinearities are also presented. © 2001 Elsevier Science Ltd. All rights reserved.

Keywords: Triangular system; Nonlinear parameterization; Adaptive control; Global stability; Nonlinear system

1. Introduction

One of the most common assumptions made in the context of adaptive control is that the unknown parameters occur linearly, and appear in linear (Narendra & Annaswamy, 1989) and nonlinear systems (Kanellakopoulos, Kokotovic, & Morse, 1991; Krstić, Kanellakopoulos, & Kokotović, 1992; Sastry & Bodson, 1989; Seto, Annaswamy, & Baillieul, 1994). Recently, a new approach has been developed (Annaswamy, Loh, & Skantze, 1998a; Loh, Annaswamy, & Skantze, 1999; Kojić, Annaswamy, Loh, & Lozano, 1999) to address nonlinearly parameterized (NLP) systems and their adaptive control. The main problem that is introduced due to nonlinearity in the parameterization is the failure of the gradient approach. Whether viewed from an optimization or a stabilization viewpoint, the gradient scheme is a powerful and simple procedure for adapting the adjustable parameters to cope with parametric uncertainty.

This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor Jan Willem Polderman under the direction of Editor Frank L. Lewis. The work reported here was supported by U.S. Army Research Office Grant No. DAAG55-98-1-0235. * Corresponding author. E-mail addresses: [email protected] (A. Kojić), [email protected] (A.M. Annaswamy).

When parameters occur linearly, the gradient scheme is sufficient to minimize the underlying cost function related to the parameter error; the gradient scheme guarantees a quadratic Lyapunov function leading to global stability. These properties are not sufficient when parameters occur nonlinearly. The approach in Annaswamy et al. (1998a), Loh et al. (1999), Kojić et al. (1999) and Annaswamy, Thanomsat, Mehta, and Loh (1998b) outlines the construction of an alternative strategy for generating adaptation laws that guarantee stability. In Annaswamy et al. (1998a), it is assumed that the underlying parameterization is convex/concave, which is made use of in constructing a quadratic Lyapunov function. In Loh et al. (1999), the results are extended to include general parameterizations. In both cases, it is assumed that state variables are accessible and that the underlying class of nonlinear systems is of the form

\dot{X}_p = A_p X_p + b\,(f(\phi(t), \theta) + u),    (1)

where f is a scalar nonlinearity in the unknown parameter θ, and it can be globally stabilized. In Kojić et al. (1999), the class in (1) is extended further to include a special class of systems where matching conditions (Taylor, Kokotovic, Marino, & Kanellakopoulos, 1989) are not satisfied. These systems are second-order, have a




triangular structure, and are of the form

\dot{x}_1(t) = x_2(t) + \sum_{i=1}^{n} \alpha_i f_i(x_1(t), \theta_i),
\dot{x}_2(t) = f_0(x, \theta_0) + u(t),    (2)

where x = [x_1, x_2]^T, x_1, x_2, and u are scalar functions of time, α_i, θ_i, and θ_0 are scalar parameters which are unknown, and f_0: R^2 × R → R, f_i: R × R → R, i = 1, …, n. Global stabilization and tracking to within a desired precision ε is established in Kojić et al. (1999).

In this paper, we seek to generalize these results to systems that have a triangular structure and are of arbitrary order. These systems can be described as

\dot{x}_1 = \phi_1(x_2) + f_1(x_1, \theta_1),
\dot{x}_2 = \phi_2(x_3) + f_2(x_1, x_2, \theta_2),
\vdots
\dot{x}_n = u + f_n(x_1, x_2, \dots, x_n, \theta_n),    (3)

where x = [x_1, x_2, …, x_n]^T ∈ R^n, θ_i ∈ R^{m_i}, and the θ_i are unknown parameters. The goal is for x_1 to track a desired trajectory x_d. It is assumed that the states are accessible, and that the unknown parameters belong to a compact set Θ_i in R^{m_i}. It is also assumed that φ_i and f_i are differentiable functions of their arguments, and that ∂φ_i/∂x_{i+1} does not vanish. The proposed adaptive controller provides a stability framework for estimating the unknown parameters. For this purpose, in addition to a bounding function, functions generated using a min–max optimization problem are introduced in the adaptive controller, as in Annaswamy et al. (1998a) and Loh et al. (1999).

The paper is organized as follows. Section 2 introduces a few necessary preliminaries about NLP systems. In Section 3, we present the adaptive tracking controller for systems of the form in (3). The closed-loop global stability of the proposed controller is established. A direct extension of the proposed controller to a class of LNL systems is presented in Section 4. Section 5 presents an example of a low-velocity friction system for which a controller is derived according to the proposed results. Final remarks are offered in Section 6.

2. Preliminaries

The following definitions and lemmas are useful in the development of the adaptive controllers. For definitions of concave/convex functions, see Annaswamy et al. (1998a). The notation x_{[i,j]} for j > i is used as shorthand to represent the elements x_i, x_{i+1}, …, x_j of the vector x ∈ R^n, n ≥ j − i + 1. Also, the functions sat(x): R^n → R^n and σ(x): R^n → R^n are defined as

\mathrm{sat}(x) = \begin{cases} x/|x|, & |x| > 1, \\ x, & |x| \le 1, \end{cases} \qquad \sigma(x) = \begin{cases} x/|x|, & |x| > 0, \\ 0, & x = 0. \end{cases}    (4)

Definition 1. A smoothing function S(z, ε): R^n × R → R^n is an (n − 1)-times differentiable odd function which satisfies the following:

\forall\, |z| \ge \epsilon > 0: \quad S(z, \epsilon) = \sigma(z), \qquad \sigma(S(z, \epsilon)) = \sigma(z), \quad |S(z, \epsilon)| \le 1.    (5)

One example of S(z, ε) is a sat(z) function with smooth corners.

Definition 2. f̄(x, z, ε): R × R × R → R is said to be a smooth bounding function of f(x, θ) with respect to z and a buffer ε_f if

\bar{f}(x, z, \epsilon) = S(z, \epsilon)\Big(\max_{\theta \in \Theta} |f(x, \theta)| + \epsilon_f\Big).    (6)

Lemma 3. If f̄(x, z, ε_0) is a bounding function of f(x, θ) with respect to z with a buffer ε_f > ε, then for any y, all θ ∈ Θ, a compact set in R^m, and for all |z| ≥ ε_0, the following holds:

\sigma(z)\big[(f(x, \theta) - \bar{f}(x, z, \epsilon_0)) + \epsilon\, S(y, \epsilon_0)\big] \le -(\epsilon_f - \epsilon) < 0.    (7)

Proof. The proof follows in a straightforward manner by substituting (6) into (7) and noting that Eq. (5) implies that σ(z)S(y, ε_0) ≤ 1 and, since |z| ≥ ε_0, S(z, ε_0)σ(z) = 1.

Lemma 4. Let Θ be a simplex in R^m whose m + 1 vertices are θ_{S_i}, i = 1, …, m + 1. Let θ ∈ Θ, and for a given θ̂ ∈ Θ, let

J(\omega, \theta) = \phi\big[f(\phi, \theta) - f(\phi, \hat\theta) + \omega^T(\hat\theta - \theta)\big],    (8)

a_0 = \min_{\omega \in R^m} \max_{\theta \in \Theta} J(\omega, \theta), \qquad \omega_0 = \arg\min_{\omega \in R^m} \max_{\theta \in \Theta} J(\omega, \theta),    (9)

where φ and θ̂ are known quantities independent of θ. Then, given φ and θ̂, and defining g(θ) = sign(φ) f(φ, θ),

a_0 = \begin{cases} A_1 & \text{if } g(\theta) \text{ is convex on } \Theta, \\ 0 & \text{if } g(\theta) \text{ is concave on } \Theta, \end{cases} \qquad \omega_0 = \begin{cases} A_2 & \text{if } g(\theta) \text{ is convex on } \Theta, \\ \nabla f_{\hat\theta} & \text{if } g(\theta) \text{ is concave on } \Theta, \end{cases}    (10)


where A = [A_1, A_2^T]^T = G^{-1} b, A_1 is a scalar, A_2 ∈ R^m,

G = \begin{bmatrix} -1 & \phi(\hat\theta - \theta_{S_1})^T \\ -1 & \phi(\hat\theta - \theta_{S_2})^T \\ \vdots & \vdots \\ -1 & \phi(\hat\theta - \theta_{S_{m+1}})^T \end{bmatrix}, \qquad b = \begin{bmatrix} \phi(\hat{f} - f_{S_1}) \\ \phi(\hat{f} - f_{S_2}) \\ \vdots \\ \phi(\hat{f} - f_{S_{m+1}}) \end{bmatrix},    (11)

and

\hat{f} = f(\phi, \hat\theta), \qquad f_{S_i} = f(\phi, \theta_{S_i}).

Proof. See Annaswamy et al. (1998a) and Loh et al. (1999).

Lemma 5. Let φ, ε be arbitrary positive quantities, and let φ_max ≥ φ. For a given θ̂ ∈ Θ ⊂ R^m, let a_0 and ω_0 be chosen as in Eq. (9) with φ = φ_max sign(z), z ∈ R. If |z| ≥ ε, the following is then true ∀φ and ∀θ ∈ Θ, whether f(φ, θ) is concave or convex on Θ:

z\Big(\big[f(\phi, \theta) - f(\phi, \hat\theta) + (\hat\theta - \theta)^T \omega_0\big] - a_0\,\mathrm{sat}(z/\epsilon)\Big) \le 0,    (12)

where the function sat(·) denotes the saturation function.

Proof. See Kojić et al. (1999).
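To illustrate how the closed-form solution of Lemma 4 can be evaluated in practice, the following Python sketch computes a_0 and ω_0 for a scalar parameter (m = 1, so that the simplex Θ is an interval). The specific parameterization f(φ, θ) = exp(−θφ²), the function name minmax_gain, and all numerical values are illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np

def minmax_gain(phi, theta_hat, theta_min, theta_max):
    """Closed-form min-max solution of Lemma 4 for a scalar parameter (m = 1).

    Illustrative parameterization: f(phi, theta) = exp(-theta * phi**2), which is
    convex in theta, so g(theta) = sign(phi) * f(phi, theta) is convex for phi > 0
    and concave for phi <= 0.
    """
    f = lambda th: np.exp(-th * phi ** 2)
    grad_f = lambda th: -phi ** 2 * np.exp(-th * phi ** 2)   # df/dtheta

    f_hat = f(theta_hat)
    if phi > 0.0:
        # convex case of (10): [a0, w0] = G^{-1} b with G, b as in (11),
        # the two simplex vertices being theta_min and theta_max
        G = np.array([[-1.0, phi * (theta_hat - theta_min)],
                      [-1.0, phi * (theta_hat - theta_max)]])
        b = np.array([phi * (f_hat - f(theta_min)),
                      phi * (f_hat - f(theta_max))])
        a0, w0 = np.linalg.solve(G, b)
    else:
        # concave case of (10): a0 = 0, w0 is the gradient of f at theta_hat
        a0, w0 = 0.0, grad_f(theta_hat)
    return a0, w0

# example call; the returned pair plays the role of (a*, omega*) in the sequel
print(minmax_gain(phi=1.3, theta_hat=0.5, theta_min=0.0, theta_max=2.0))
```

For θ ∈ R^m with m > 1, the same construction applies with the m + 1 simplex vertices entering G and b as in (11).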

3. The controller structure

This section outlines the basic ideas behind our approach to designing a controller for the system in (3). In Kojić et al. (1999), we derived a stabilizing controller for (2) when n = 2. The question therefore pertains to the complexities introduced by a higher-order system. This section discusses these complexities and how they can be addressed, with the starting point being the approach taken in Kojić et al. (1999). Briefly, the problem considered in Kojić et al. (1999) is the stabilization of the system

\dot{z}_1 = z_2 + f_1(z_1, \theta), \qquad \dot{z}_2 = u    (13)

when θ is unknown. One of the two main obstacles here is that the dynamics of z_1 are not under our direct control. The other obstacle is that the unknown parameter appears nonlinearly in the dynamics of z_1. We overcome the first obstacle by choosing errors e_0 and e_1 such that e_1 → 0 assures that e_0 → 0. The task that remains then is to choose a u such that e_1 tends to zero. In particular, a choice of

e_0 = z_1, \qquad e_1 = z_2 + g_1(z_1, e_0)    (14)

leads to the error equations

\dot{e}_0 = e_1 + f_1(z_1, \theta) - g_1(z_1, e_0), \qquad \dot{e}_1 = u + f_2(z_1, z_2, \theta),    (15)

where f_2 = (∂g_1/∂z_1)(z_2 + f_1) + (∂g_1/∂e_0)(e_1 + f_1 − g_1). By making g_1 a bounding function of f_1 with respect to e_0, we essentially stabilize e_0 in the absence of e_1. To overcome the second obstacle and choose u such that e_1 → 0, especially in the presence of uncertainties due to the nonlinear parameterization, the min–max algorithm of Annaswamy et al. (1998a) is used.

The problem that we consider in this paper is for the state x_1 in (3) to asymptotically track a desired trajectory x_d. We assume that x_d is sufficiently smooth so that ẋ_d, x_d^{(2)}, …, x_d^{(n)} are bounded. A direct extension of our stabilization approach in Kojić et al. (1999) to a higher-dimensional tracking problem for the system in (3) requires appropriate characterizations of n errors e_j, j = 0, 1, …, n − 1. These errors are chosen such that, due to the structure of the system in (3), the control input u appears directly only in the dynamics of e_{n−1}. The basic idea is to define the other errors e_j, j = 0, 1, …, n − 2, in such a way that e_{n−1} → 0 guarantees that all errors e_j, j = 0, …, n − 1, tend to zero. Let us assume that the errors are of the form

e_0 = x_1 - x_d, \qquad e_j = \psi_j\big(x_{[1,j+1]}, x_d, \dots, x_d^{(j)}\big) + \bar{f}_j\big(x_{[1,j]}, e_{[0,j-1]}\big), \quad j = 1, \dots, n-1.    (16)

Suppose that the functions ψ_j are chosen in such a way that these errors satisfy the relationship

\dot{e}_j = e_{j+1} - e_{j-1} + f_{j+1}(x_{[1,j+1]}, \theta) - \bar{f}_{j+1}(x_{[1,j]}, e_{[0,j]}), \quad j = 0, \dots, n-2,    (17)

\dot{e}_{n-1} = u + f_n(x_{[1,n]}, \theta),    (18)

where e_{−1} = 0, for suitably defined f_j and f̄_j. The advantage of the structure in (17) and (18) is apparent if f̄_i is a bounding function of f_i with respect to e_{i−1}, since the latter leads to the property

(f_{i+1} - \bar{f}_{i+1})\,\sigma(e_i) \le 0.    (19)
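The role played by the bounding function in (19) can be checked numerically. The following Python sketch verifies, on a grid, that (f_1 − g_1)σ(e_0) stays below −ε_f outside the dead zone when g_1 is built as in Definition 2; the parameterization f_1, the piecewise-linear stand-in for the smoothing function, and the numerical values are illustrative assumptions.

```python
import numpy as np

EPS, EPS_F = 0.1, 0.5                       # dead zone and buffer (assumed values)
THETA_SET = np.linspace(0.0, 2.0, 41)       # compact parameter set (assumed)

def sigma(x):
    return np.sign(x)

def S(z, eps):                              # stand-in smoothing function
    return np.clip(z / eps, -1.0, 1.0)      # sat(z / eps); smooth corners omitted

def f1(z1, theta):
    return np.exp(-theta * z1 ** 2) * z1    # illustrative nonlinear parameterization

def g1(z1, e0):                             # bounding function, built as in Eq. (6)
    return S(e0, EPS) * (np.abs(f1(z1, THETA_SET)).max() + EPS_F)

worst = -np.inf
for z1 in np.linspace(-3.0, 3.0, 61):
    for e0 in np.linspace(-3.0, 3.0, 61):
        if abs(e0) < EPS:
            continue                        # only the region outside the dead zone
        for th in THETA_SET:
            worst = max(worst, (f1(z1, th) - g1(z1, e0)) * sigma(e0))
print("max of (f1 - g1) * sigma(e0) outside the dead zone:", worst)  # <= -EPS_F
```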



That the structure in (17) and (18) is advantageous follows since V = \tfrac{1}{2}\sum_{i=0}^{n-1} e_i^2 yields a time derivative of the form

\dot{V} = \sum_{i=0}^{n-2} \big[e_i(e_{i+1} - e_{i-1}) + e_i(f_{i+1} - \bar{f}_{i+1})\big] + e_{n-1}(u + f_n) \le e_{n-1}(e_{n-2} + u + f_n),    (20)

which suggests that V is a control Lyapunov function for (17) and (18), leading to global stabilization.
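The inequality in (20) relies on the cross terms e_i(e_{i+1} − e_{i−1}) telescoping down to e_{n−2}e_{n−1} when e_{−1} = 0; a quick numerical check of this identity (a Python sketch with randomly generated errors) is given below.

```python
import numpy as np

# With e_{-1} = 0, sum_{i=0}^{n-2} e_i (e_{i+1} - e_{i-1}) = e_{n-2} * e_{n-1}.
rng = np.random.default_rng(0)
for n in (2, 3, 5, 8):
    e = rng.standard_normal(n)                    # e_0, ..., e_{n-1}
    e_ext = np.concatenate(([0.0], e))            # prepend e_{-1} = 0
    lhs = sum(e_ext[i + 1] * (e_ext[i + 2] - e_ext[i]) for i in range(n - 1))
    assert np.isclose(lhs, e[n - 2] * e[n - 1])
print("telescoping identity verified")
```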

The question that arises is whether errors e_j can indeed be constructed as in (16) so that (a) they satisfy (17) and (18) and (b) the resulting controller has no discontinuities. This is answered by the following recursive relationships:

e_0 = x_1 - x_d, \qquad e_{-1} = 0,
e_i = e_{i-2} + \Phi_{i-1}(x_{[2,i]})\,\phi_i(x_{i+1}) + g_i(x_{[1,i]}, e_{[0,i-1]}) - x_d^{(i)}, \quad i = 1, \dots, n-1,    (21)

g_i(x_{[1,i]}, e_{[0,i-1]}) = k_{i-1} + \bar{h}_{i-1},    (22)

where, with k_0 = 0, h_{-1} = 0, h_0 = f_1(x_1, \theta_1), \Phi_0 = 1, and for i = 2, \dots, n-1,

k_i(x_{[1,i+1]}, e_{[0,i]}) = e_{i-1} - e_{i-3} - \bar{h}_{i-2} + \sum_{j=2}^{i} \frac{\partial \Phi_{i-1}}{\partial x_j}\,\phi_j(x_{j+1})\,\phi_i(x_{i+1}) + \sum_{j=1}^{i} \Big(\frac{\partial k_{i-1}}{\partial x_j} + \frac{\partial \bar{h}_{i-1}}{\partial x_j}\Big)\phi_j(x_{j+1}) + \sum_{j=0}^{i-1} \Big(\frac{\partial k_{i-1}}{\partial e_j} + \frac{\partial \bar{h}_{i-1}}{\partial e_j}\Big)(e_{j+1} - e_{j-1} - \bar{h}_j),    (23)

h_i(x_{[1,i+1]}, e_{[0,i]}, \theta_{[1,i+1]}) = h_{i-2} + \sum_{j=2}^{i} \frac{\partial \Phi_{i-1}}{\partial x_j}\,f_j(x_{[1,j]}, \theta_j)\,\phi_i(x_{i+1}) + \sum_{j=1}^{i} \Big(\frac{\partial k_{i-1}}{\partial x_j} + \frac{\partial \bar{h}_{i-1}}{\partial x_j}\Big) f_j(x_{[1,j]}, \theta_j) + \sum_{j=0}^{i-1} \Big(\frac{\partial k_{i-1}}{\partial e_j} + \frac{\partial \bar{h}_{i-1}}{\partial e_j}\Big) h_j + \Phi_i f_{i+1}(x_{[1,i+1]}, \theta_{i+1}),    (24)

\Phi_i = \Phi_{i-1}\,\frac{\partial \phi_i}{\partial x_{i+1}}, \quad i = 1, \dots, n-1.    (25)

In (22)–(24), the h̄_i are chosen as smooth bounding functions of h_i with respect to e_i, with buffers ε_{f_i} such that ε_{f_0} ≥ ε_1 and ε_{f_i} ≥ ε_{i−1} + ε_{i+1} for i = 2, …, n − 2. Algebraic manipulations of (22)–(24) can be carried out to show that the e_i satisfy (17) and (18).

We now propose an adaptive controller for the tracking problem when θ is unknown. For this purpose, we note that the recursive relation in (24) implies that the function h_i can be cast in the form

h_i = \sum_{j=1}^{i+1} h_{ij}(x_{[1,j]}, e_{[0,j-1]}, \theta_j) = \sum_{j=1}^{i+1} \chi_{ij}(x_{[1,j]}, e_{[0,j-1]})\,f_j(x_{[1,j]}, \theta_j),    (26)

where χ_{ij} is a known function of its arguments. Since the arguments of χ_{ij} are themselves functions of time only, from (26) it follows that the dependence of h_i on the unknown parameter θ_j, j = 1, …, i + 1, is equivalent to the dependence of f_j on θ_j. Therefore, any initial assumptions on the convexity/concavity of f_j with respect to the unknown parameter θ_j, j = 1, …, i + 1, are preserved in h_i.

The adaptive controller is given by

u = \Phi_{n-1}^{-1}\big(-e'_{n-2} - c\,e'_{n-1} - \hat{f}_n - a^{*} S(e_{n-1}, \epsilon_{n-1}) + x_d^{(n)}\big),    (27)

\dot{\hat\theta}_j = e'_{n-1}\,\omega_j^{*}, \quad j = 1, \dots, n,    (28)

with c > 0, ε_i > 0, i = 0, …, n − 1, e'_i = e_i − ε_i sat(e_i/ε_i), and

\hat{f}_n = k_{n-1} + \hat{h}_{n-1}(x_{[1,n]}, \hat\theta_{[1,n]}), \qquad \hat{h}_{n-1} = \sum_{j=1}^{n} \chi_{n-1,j}(x_{[1,j]}, e_{[0,j-1]})\,f_j(x_{[1,j]}, \hat\theta_j), \qquad a^{*} = \sum_{j=1}^{n} a_j^{*},    (29)

(a_j^{*}, \omega_j^{*}) = \min_{\omega_j}\max_{\theta_j \in \Theta_j}\big[h_{n-1,j}(x, \theta_j) - h_{n-1,j}(x, \hat\theta_j) + (\hat\theta_j - \theta_j)^T \omega_j\big].    (30)

As mentioned earlier in Lemma 4, closed-form solutions to (30) can be found along the lines outlined in Annaswamy et al. (1998a) and Loh et al. (1999), with the solutions being considerably simpler when h_{n−1,j} is convex/concave with respect to θ_j.
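As a concrete illustration of how (27) and (28) are assembled once the problem-specific quantities are available, the following Python sketch implements the dead-zone error e′, a smooth stand-in for S(·, ε), the control input, and one Euler step of the adaptive law. All function names are illustrative, and the quantities Φ_{n−1}, f̂_n, a*, ω_j*, and x_d^{(n)} are assumed to have been computed from (21)–(26) and (29)–(30).

```python
import numpy as np

def sat(y):
    return np.clip(y, -1.0, 1.0)

def smooth_S(e, eps):
    # odd, saturating approximation of sign(e) outside the dead zone
    # (a smooth stand-in for the S(., eps) of Definition 1)
    return np.tanh(e / eps)

def dead_zone(e, eps):
    # e'_i = e_i - eps_i * sat(e_i / eps_i): zero inside the dead zone
    return e - eps * sat(e / eps)

def control_input(e_prime_nm2, e_prime_nm1, e_nm1, f_n_hat, a_star,
                  Phi_nm1, xd_n, c=1.0, eps_nm1=0.1):
    # Eq. (27): u = Phi_{n-1}^{-1} ( -e'_{n-2} - c e'_{n-1} - f_n_hat
    #                                - a* S(e_{n-1}, eps_{n-1}) + x_d^{(n)} )
    return (-e_prime_nm2 - c * e_prime_nm1 - f_n_hat
            - a_star * smooth_S(e_nm1, eps_nm1) + xd_n) / Phi_nm1

def theta_hat_step(theta_hat_j, e_prime_nm1, omega_star_j, dt):
    # Eq. (28): d/dt theta_hat_j = e'_{n-1} * omega_j^*   (one explicit Euler step)
    return theta_hat_j + dt * e_prime_nm1 * omega_star_j
```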


The stabilizing property of the controller in (27)–(30) is given in the following theorem.

Theorem 6. For the system in (3), the controller defined by (21)–(24) and (27)–(30) results in each |e_i(t)| tending to ε_i as t → ∞, thereby achieving global stability and desired tracking performance.

Proof. By differentiating (21) with respect to time, and with the functions k_i as in (23) and h_i as in (24), it can be shown that

\dot{e}_i = e_{i+1} - e_{i-1} + h_i - \bar{h}_i, \quad i = 0, \dots, n-2.    (31)

Choosing V = \tfrac{1}{2}\big(\sum_{i=0}^{n-1} e_i'^2 + \tilde\theta^T\tilde\theta\big) and using Eqs. (21), (29), and (31), and the fact that (d/dt)(e_i'^2) = 2 e'_i \dot{e}_i, we obtain a time derivative

\dot{V} = \sum_{i=0}^{n-2} e'_i(e_{i+1} - e_{i-1} + h_i - \bar{h}_i) + e'_{n-1}\big(\Phi_{n-1} u + f_n - x_d^{(n)}\big) - e'_{n-1}\,\tilde\theta^T \omega^{*},    (32)

where f_n = k_{n-1} + h_{n-1}. Substituting the control law from (27) into (32), we have

\dot{V} = \sum_{i=0}^{n-2} e'_i\big(e_{i+1} - e_{i-1} + \epsilon_{i+1}\,\mathrm{sat}(y_{i+1}) - \epsilon_{i-1}\,\mathrm{sat}(y_{i-1}) + h_i - \bar{h}_i\big) - e'_{n-2} e'_{n-1} - c\,e_{n-1}'^2 + e'_{n-1}\big(f_n - \hat{f}_n - \tilde\theta^T\omega^{*} - a^{*} S(e_{n-1}, \epsilon)\big),

where y_i = e_i/ε_i. Therefore,

\dot{V} \le -c\,e_{n-1}'^2 + \sum_{i=0}^{n-2} e'_i\big(h_i - \bar{h}_i + \epsilon_{i+1}\,\mathrm{sat}(y_{i+1}) - \epsilon_{i-1}\,\mathrm{sat}(y_{i-1})\big).    (33)

Since ε_{f_0} ≥ ε_1 and ε_{f_i} ≥ ε_{i−1} + ε_{i+1}, and the h̄_i are bounding functions of h_i with buffers ε_{f_i}, respectively, it follows that the second term is nonpositive. Hence, V̇ ≤ 0. Since the e_i are bounded, and Φ_i, φ_i, g_i are bounded functions of their arguments, it follows that the x_i are bounded. By Barbalat's lemma this implies that all e'_i tend to zero. This means that all |e_i| tend to ε_i, which in turn sets the bounds on all x_i.

Comments. (1) We note that the main result, stated above, establishes global boundedness and tracking to


within a desired precision, in contrast to local results obtained in the literature (for example, Hotzel & Karsenti, 1997).

(2) The stabilizing controller proposed here requires the availability of the two functions a* and ω*. These in turn imply that closed-form solutions of (30) are needed. These can be constructed in a simple manner, as outlined in Annaswamy et al. (1998a), when f_n is a convex/concave function of θ. Convexity/concavity of the underlying nonlinearity has also been exploited in Fomin, Fradkov, and Yakubovich (1981) and Ortega (1996). The computational burden increases when f_n is a general function, and is discussed in Loh et al. (1999). Special classes of functions f_i which can be reparameterized so as to result in concavity (or convexity) are described in Netto, Moya, Ortega, and Annaswamy (1999). For all such functions, the controller proposed above results in global boundedness.

(3) The proof of Theorem 6 and the preceding discussions also demonstrate that stabilization of systems in chain form can be accomplished without adapting to the parameter θ. Instead of estimating f_n as f̂_n, one could simply construct yet another bounding function and stabilize (3). However, the advantage of using f̂_n is that it enables the unknown parameter to be estimated in addition to stabilization. Once such a stable framework is generated for parameter estimation, conditions related to persistent excitation can be invoked to obtain parameter convergence. In Kojić and Annaswamy (2000), it has been shown that for a class of error models of the form

\dot{e} = -e + f(\phi, \theta) - f(\phi, \hat\theta) - a^{*} S(e, \epsilon),    (34)

\dot{\hat\theta} = e\,\omega^{*},    (35)

conditions of persistent excitation of φ with respect to f for nonlinearly parameterized systems can be derived so as to result in the convergence of the parameter θ̂ to θ to within a desired precision ε. It is worth noting that the error e_{n−1} satisfies a differential equation that is quite similar in form to (34). Hence, an extension of the result in Kojić and Annaswamy (2000) to parameter convergence using the controller presented here is quite feasible.

(4) The stability result in Theorem 6 can be viewed as an extension of the parametric-strict-feedback systems considered in Kanellakopoulos et al. (1991) to the case when the unknown parameters occur nonlinearly. Examples of such systems abound in several applications (Annaswamy et al., 1998a, b; Netto et al., 1999; Bošković, 1998). In contrast to the back-stepping approach suggested in Kanellakopoulos et al. (1991), we use a Bounding Function to generate the errors e_i in the system. We note that, in contrast to linear adaptive control, where parameter adaptation is proposed at each ith level of back-stepping (Kanellakopoulos et al., 1991; Seto et al., 1994), we determine the adaptive laws at the final step so as to generate a Lyapunov function. This enables us to achieve a stabilizing adaptive controller without over-parameterization of the controller.

(5) Robustness to additive external disturbances in (3) can be established in a straightforward manner by modifying the adaptive law in (28) as

\dot{\hat\theta}_j = e'_{n-1}\,\omega_j^{*} - \sigma_e \hat\theta_j, \quad \sigma_e > 0, \quad j = 1, \dots, n-1.

We refer the reader to Kojić (2001) for further details.

(6) The same approach of using a Bounding Function together with the min–max algorithm can be used to stabilize coupled second-order systems in chain form given by

\ddot{x}_1 = x_2 + f_1(x_1, \theta), \qquad \ddot{x}_i = x_{i+1},

where x_{n+1} = u and x_i ∈ R^p, i = 1, …, n. Towards this end, vector Bounding Functions along the lines of Definition 2 have to be constructed (Kojić, 2001). A specific example of a system in chain form is often used in nonlinear system identification and is discussed in detail in the next section.

4. Control of LNL systems

A special class of chain-form systems has three systems in cascade, consisting of linear dynamics, followed by static nonlinearities, followed by linear dynamics, and is referred to as LNL systems (Kornenberg & Hunter, 1986). One such form is given by

x^{(m)} = f(z, \theta), \qquad z^{(n)} = u,    (36)

where the unknown parameter θ lies in a compact set Θ in R^p and the goal is to stabilize this system and enable x to track a desired trajectory. Eq. (36) can be considered to be a particular case of the general form of (3). In what follows, we present a stabilizing controller for the case when m is arbitrary and n = 1. The approach presented can be extended in a straightforward manner to include systems where n ≥ 2. Define

e_0 = D(s)\Big[\int_0^t x(\tau)\,d\tau\Big], \qquad e_1 = D_1(s)[x] + g(z, e_0),    (37)

where D(s) = s^m + a_1 s^{m-1} + \cdots + a_m is a Hurwitz polynomial, D_1(s) = D(s) − s^m, and g(z, e_0) is a bounding function, with a buffer ε_f + ε_0, of f(z, θ) with respect to e_0. Then,

\dot{e}_0 = e_1 + f(z, \theta) - g(z, e_0),
\dot{e}_1 = \frac{\partial g}{\partial z}\,u + \Big(a_1 + \frac{\partial g}{\partial e_0}\Big) f(z, \theta) + D_2(s)[x] + \frac{\partial g}{\partial e_0}\,D_1(s)[x],    (38)

with D_2(s) = s(D_1(s) − a_1 s^{m-1}). The adaptive controller is designed as

u = \Big(\frac{\partial g}{\partial z}\Big)^{-1}\Big[-e_0 - c\,e_1 - D_2(s)[x] - \frac{\partial g}{\partial e_0}\,D_1(s)[x] - \Big(a_1 + \frac{\partial g}{\partial e_0}\Big) f(z, \hat\theta) - a^{*} S(e_1, \epsilon)\Big], \quad c > 0,
\dot{\hat\theta} = e_1\,\omega^{*},    (39)

with a* and ω* adjusted according to the min–max algorithm. This leads to a time derivative of V = \tfrac{1}{2}(e_0^2 + e_1^2 + \tilde\theta^T\tilde\theta) being nonpositive,

\dot{V} \le -|e_0|\,\epsilon_f - c\,e_1^2,

indicating global stability of the system. From the definition of e_0, |e_0| → 0 implies that the state x and all its time derivatives x^{(i)}, i = 1, …, m, are bounded as well.

It should be noted that in order to be able to compute the control input u as in Eq. (39), it is required that the inverse of ∂g/∂z always exists. However, g is a constructed feature of the controller, and not of the physical system. Furthermore, g is a bounding function constructed from (6) as

g(x, z, \epsilon) = S(z, \epsilon)\Big(\max_{\theta \in \Theta}|f(x, \theta)| + \epsilon_f\Big).    (40)

S is smooth in z, and ε_f is arbitrary such that ε_f ≥ ε > 0. Hence, g can be designed in such a way that (∂g/∂z)^{-1} always exists.
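To make the error definitions in (37) concrete, the following Python sketch evaluates e_0 and e_1 when the state x, its derivatives up to order m − 1, and its integral are measured, so that the operators D(s) and D_1(s) reduce to weighted sums; the Hurwitz coefficients, the test signal, and the value supplied for g are illustrative assumptions.

```python
import numpy as np

a = np.array([3.0, 3.0, 1.0])       # D(s) = s^3 + 3 s^2 + 3 s + 1 (Hurwitz), m = 3

def e0(x_int, x_derivs):
    # D(s)[ integral of x ] = x^{(m-1)} + a1 x^{(m-2)} + ... + a_m * integral(x)
    signals = np.concatenate(([x_int], x_derivs))   # [int(x), x, x', ..., x^{(m-1)}]
    coeffs = np.concatenate((a[::-1], [1.0]))       # [a_m, ..., a_1, 1]
    return float(coeffs @ signals)

def e1(x_derivs, g_val):
    # D1(s)[x] = D(s)[x] - s^m[x] = a1 x^{(m-1)} + ... + a_m x, plus the bounding term g
    coeffs = a[::-1]                                # [a_m, ..., a_1]
    return float(coeffs @ x_derivs) + g_val

# usage with x = sin(t) at t = 0.3 (integral and derivatives known in closed form)
t = 0.3
xd = np.array([np.sin(t), np.cos(t), -np.sin(t)])
print(e0(1.0 - np.cos(t), xd), e1(xd, g_val=0.0))
```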

5. Numerical example

Using the methodology discussed in the previous sections, we present here an adaptive controller for a particular class of low-velocity friction compensation systems. Friction compensation plays an important part in the control of high-precision positioning systems. As with any controller design, accurate models of the underlying physical system are required in order to improve performance. The model which we use incorporates two features: compliance among the elements of the system and an accurate low-velocity model of the friction force. The first feature allows for nonideal transmission elements by assuming small strains, and thus modeling them as linear springs. The friction force model is based on evidence (Armstrong-Hélouvry, Dupont, & Canudas de Wit, 1994) that, at low velocities, the friction force exhibits behavior that is best characterized by a nonlinearly parameterized model. For this example, we employ the following simplified version of the model suggested in Armstrong-Hélouvry et al. (1994):

F = K\,\mathrm{sgn}(\dot{x})\,e^{-(\dot{x}/v_s)^2},    (41)

where ẋ represents the relative velocity of the bodies in contact, v_s is the Stribeck parameter, and K is the friction coefficient. Thus, we are interested in controlling the following system:

\ddot{x}_1 = k_{x_1}(x_2 - x_1) - K_1\,\mathrm{sgn}(\dot{x}_1)\,e^{-(\dot{x}_1/v_s)^2},
\ddot{x}_2 = u + k_{x_1}(x_1 - x_2).    (42)

For the sake of simplicity, we take the linear coefficients k_{x_1} and K_1 as unity. Furthermore, the exact value of the Stribeck parameter v_s is unknown, but it is assumed to lie in the range [v_{s,min}, v_{s,max}]. The goal is to develop an adaptive controller that can track a trajectory specified by x_{1d} = 10 sin(t) in the face of pronounced nonlinear effects.

First, we put the above system into the following form:

\dot{z}_1 = z_2,
\dot{z}_2 = z_3 - z_1 - \mathrm{sgn}(z_2)\,e^{-\theta z_2^2},
\dot{z}_3 = z_4,
\dot{z}_4 = u + (z_1 - z_3),    (43)

where θ = 1/v_s^2. Using the presented control algorithm, we define the following errors:

e_0 = z_1 - x_{1d},
e_1 = e_0 + z_2 - \dot{x}_{1d},
e_2 = e_1 + z_3 - z_1 - \ddot{x}_{1d} + g_1(z_2, e_1),
e_3 = e_2 + z_4 - x_{1d}^{(3)} - g_1(z_2, e_1) - \dot{x}_{1d} + \frac{\partial g_1}{\partial z_2}(z_3 - z_1) + \frac{\partial g_1}{\partial e_1}(e_2 - e_0 - g_1) + g_2(z_2, e_1, e_2).    (44)

The bounding functions g_1 and g_2 are constructed as

g_1 = S(e_1, \epsilon_1)\big(e^{-\theta_{min} z_2^2} + \epsilon_{f_1}\big), \qquad g_2 = S(e_2, \epsilon_2)\big(e^{-\theta_{min} z_2^2}\,|\gamma| + \epsilon_{f_2}\big),    (45)

where S(·) is an odd-powered fifth-order polynomial that satisfies the smoothing-function conditions given in Definition 1, and γ = 1 + ∂g_1/∂z_2 + ∂g_1/∂e_1. The values of all the dead-zone coefficients ε_i, i = 0, …, 3, are set to unity, ε_{f_1} = ε_{f_2} = 2, and it is assumed that θ ∈ [0, 200]. The adaptive controller is then chosen as

u = -e_2 - e_3 - z_4 - \frac{\partial g_1}{\partial z_2}(z_4 - z_2) - y_1^T K_x + \hat{f}_1 - a^{*} S(e_3, \epsilon_3) + x_{1d}^{(4)} + x_{1d}^{(3)} + 2\ddot{x}_{1d} + \dot{x}_{1d},
\dot{\hat\theta} = -e_3\,\omega_1,    (46)

where

K_x = \begin{bmatrix} e_2 - e_0 - g_1 \\ e_3 - e_1 - g_2 \\ z_3 - z_1 \end{bmatrix},

y_1 is the corresponding vector of known coefficient functions (obtained by differentiating e_3 in (44) along (43), and involving g_1, g_2, γ, and their partial derivatives with respect to z_2, e_1, and e_2), and

\hat{f}_1 = \sigma(z_2)\big(1 + y_1^T [1, 1, 0]^T\big)\,e^{-\hat\theta z_2^2},    (47)

and a* and ω_1 are obtained through the min–max algorithm.
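For reference, a short Python sketch of the friction model (41) and the transformed dynamics (43) used in this example is given below (with k_{x_1} = K_1 = 1 and θ = 1/v_s²); the Stribeck value chosen here is an illustrative assumption, not the one used in the simulation of Fig. 1.

```python
import numpy as np

def friction(xdot, v_s=0.1, K=1.0):
    # Eq. (41): F = K sgn(xdot) exp(-(xdot / v_s)**2)
    return K * np.sign(xdot) * np.exp(-(xdot / v_s) ** 2)

def z_dot(z, u, theta):
    # Eq. (43): z = (x1, x1_dot, x2, x2_dot) with unity spring constants
    z1, z2, z3, z4 = z
    return np.array([z2,
                     z3 - z1 - np.sign(z2) * np.exp(-theta * z2 ** 2),
                     z4,
                     u + (z1 - z3)])

# the Stribeck effect: the friction force decays rapidly with velocity
for v in (0.01, 0.05, 0.1, 0.5):
    print(f"xdot = {v:4.2f}  ->  F = {friction(v):.4f}")
```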


Fig. 1. Time behavior of the position of two system elements, z_1 and z_3. The ε bounds of guaranteed convergence to the desired trajectory are given by dotted lines.

The above controller and system were numerically simulated and the resulting positions are shown in Fig. 1, indicating satisfactory convergence of the system to the desired trajectory.

6. Summary

In this paper, a class of nonlinear systems of the form of (3) was considered, where the f_i are nonlinear with respect to the dynamics as well as the parameters. We show that a globally stabilizing and tracking controller can be determined. The stability approach consists of using Bounding Functions to define errors e_i in a recursive manner, chosen such that when e_i → 0, it guarantees that e_j, j = 0, …, i, tend to zero. Finally, the control input and the adaptive law for estimating the unknown parameters are chosen so that they guarantee the convergence of e_{n−1} to zero. For this purpose, a min–max strategy, originally proposed in Annaswamy et al. (1998a), is used.

References

Annaswamy, A. M., Loh, A. P., & Skantze, F. P. (1998a). Adaptive control of continuous time systems with convex/concave parametrization. Automatica, 34, 33–49.
Annaswamy, A. M., Thanomsat, C., Mehta, N., & Loh, A. P. (1998b). Applications of adaptive controllers based on nonlinear parametrization. ASME Journal of Dynamic Systems, Measurement, and Control, 120, 477–487.
Armstrong-Hélouvry, B., Dupont, P., & Canudas de Wit, C. (1994). A survey of models, analysis tools and compensation methods for the control of machines with friction. Automatica, 30(7), 1083–1138.
Bošković, J. D. (1998). Adaptive control of a class of nonlinearly parametrized plants. IEEE Transactions on Automatic Control, 43, 930–934.
Fomin, V., Fradkov, A., & Yakubovich, V. (1981). Adaptive control of dynamical systems. Moscow: Nauka.

Hotzel, R., & Karsenti, L. (1997). Tracking control scheme for uncertain systems with nonlinear parametrization. Proceedings of the European Control Conference, Brussels, Belgium, May.
Kanellakopoulos, I., Kokotovic, P. V., & Morse, A. S. (1991). Systematic design of adaptive controllers for feedback linearizable systems. IEEE Transactions on Automatic Control, 36, 1241–1253.
Kojić, A. (2001). Stable identification and control in nonlinear regression problems. Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, MA, September.
Kojić, A., & Annaswamy, A. M. (2000). Parameter convergence in systems with convex/concave parameterization. The 2000 American Control Conference, Chicago, IL, June (pp. 2240–2244).
Kojić, A., Annaswamy, A. M., Loh, A.-P., & Lozano, R. (1999). Adaptive control of a class of nonlinear systems with convex/concave parameterization. Systems and Control Letters, 37, 267–274.
Kornenberg, M. J., & Hunter, I. W. (1986). The identification of nonlinear biological systems: LNL cascade models. Biological Cybernetics, 55, 125–134.
Krstić, M., Kanellakopoulos, I., & Kokotović, P. V. (1992). Adaptive nonlinear control without overparameterization. Systems and Control Letters, 19, 177–186.
Loh, A. P., Annaswamy, A. M., & Skantze, F. P. (1999). Adaptation in the presence of a general nonlinear parametrization: An error model approach. IEEE Transactions on Automatic Control, AC-44, 1634–1652.
Narendra, K. S., & Annaswamy, A. M. (1989). Stable adaptive systems. Englewood Cliffs, NJ: Prentice-Hall.
Netto, M. S., Moya, P., Ortega, R., & Annaswamy, A. M. (1999). Adaptive control of a class of first-order nonlinearly parametrized systems. Proceedings of the CDC, Phoenix, AZ, USA, December.
Ortega, R. (1996). Some remarks on adaptive neuro-fuzzy systems. International Journal of Adaptive Control and Signal Processing, 10, 79–83.
Sastry, S. S., & Bodson, M. (1989). Adaptive control: Stability, convergence, and robustness. Englewood Cliffs, NJ: Prentice-Hall.
Seto, D., Annaswamy, A. M., & Baillieul, J. (1994). Adaptive control of nonlinear systems with a triangular structure. IEEE Transactions on Automatic Control, 39(7), 1411–1428.
Taylor, D. G., Kokotovic, P. V., Marino, R., & Kanellakopoulos, I. (1989). Adaptive regulation of nonlinear systems with unmodeled dynamics. IEEE Transactions on Automatic Control, 34, 405–412.

Aleksandar Kojić was born in 1974 and is a native of Kragujevac, Yugoslavia. He received the B.Sc. degree from the Mechanical Engineering Department of the University of Kragujevac in 1995. In 1998 he received the MSME degree from the MIT Mechanical Engineering Department, where he is currently a Ph.D. candidate. His present research interests are in the area of identification and control of nonlinearly parameterized systems.

Anuradha Annaswamy received the Ph.D. degree in electrical engineering from Yale University, New Haven, CT, in 1985. She was a Post-doctoral Fellow from 1985 to 1987 and subsequently a visiting assistant professor at Yale. During 1988–1991, she was at Boston


University in the Department of Aerospace and Mechanical Engineering as an Assistant Professor. Since 1992, she has been a faculty member in the Department of Mechanical Engineering, MIT, Cambridge, MA, where she is currently a Principal Research Scientist. Dr. Annaswamy's research interests pertain to dynamics and control, information and adaptation, neural networks, dynamic instabilities, and active control of combustion processes. She has published several articles in conferences and journals as well as coauthored a textbook, Stable Adaptive Systems, Prentice-Hall, 1989. Annaswamy has received a number of awards and honors including the Alfred Hay Medal from the Indian Institute of Science in 1977, the Arthur E. Stennard Fellowship from Yale University in 1980, the IBM post-doctoral fellowship during 1985–86, the George Axelby Outstanding Paper award from the IEEE Control Systems Society in 1988, and the Presidential Young Investigator award from the National Science Foundation in 1991. Dr. Annaswamy is a member of AIAA and Sigma Xi and an elected member of the Board of Governors of IEEE.