A variable structure gradient adaptive algorithm for a class of dynamical systems


Systems & Control Letters 33 (1998) 171—186

Andrew Alleyne*

Department of Mechanical and Industrial Engineering and The Coordinated Science Laboratory, University of Illinois, Urbana-Champaign, Urbana, IL 61801, USA

Received 1 January 1995; received in revised form 1 May 1995, 1 July 1996 and 1 June 1997; accepted 1 October 1997

Abstract

A standard assumption in adaptive control is that the parameters being estimated are either constant or vary 'slowly' as a function of time. This paper investigates the adaptive control of a class of systems in which the parameters vary as a specified function of state. The dynamic structure of the systems may be either linear or nonlinear. For this class of systems, the state space is separated into distinct subsets. The parameters are then required to remain constant, or be slowly time varying, within the subsets. Given a controller for the system, an analysis of the output error dynamics and the parameter error dynamics leads to a parameter adaptation algorithm with a variable structure. The stability and convergence of both the parameter error and the output tracking error are investigated. An analysis of SISO linear systems with full state information is used to motivate and illustrate the treatment of SISO feedback linearizable systems. © 1998 Elsevier Science B.V. All rights reserved.

Keywords: Adaptive control; State-varying parameters; Switching algorithm

1. Introduction For the most part, the globally stable adaptive control algorithms that have been developed [3,11,16] make the same assumption. The plant being controlled is linearly parameterized and the parameters are fixed or slowly time varying. There has been a significant amount of previous research concerning the control of linear systems with time-varying parameters [2,21]. Additionally, there has been work done on controlling classes of nonlinear systems with time-varying parameters. For example, in [9,14] the control of a class of nonlinear systems subject to time variations in parameters is examined by regularly identifying a local approximation to the system’s dynamics. Based on the spatially and temporally local system representation, a controller is designed to provide a bounded level of reference model tracking where the bound is a function of the sample time. This paper investigates the control of a different class of systems with varying parameters. The parameters vary with respect to the location of the system within its state space. This parametric

* Tel.: +1 217 244 9993; fax: +1 217 244 9956; e-mail: [email protected].

0167-6911/98/$19.00 © 1998 Elsevier Science B.V. All rights reserved. PII S0167-6911(97)00112-6


variation is usually a consequence of good mechanical design. Many systems needing control contain mechanical elements which are designed for a specific purpose. Often, when the system has to operate in different modes, the mechanical designer tries to optimize the system with respect to safety, performance, cost, etc. Therefore, the system behaves differently when it is in different modes. Although the physical system has not changed, the parameters used in a dynamic model of the system may have changed. Therefore, even though the structure of the system's dynamics is fixed, it is still possible to have parameters switch suddenly as a function of a state variable. As a motivating example, consider an automobile driven at low to moderate speed. The dynamics of the system, including steering dynamics, can be approximated as a linear transfer function with five poles and two zeros, all in the closed left half-plane. When driving forward, the system is open-loop Lyapunov stable: it can travel in a straight line with no driver input. However, if the car travels in reverse, the system becomes unstable: one of the poles on the real axis moves into the open right half-plane. The reader is encouraged to try this in an empty parking lot for verification. There is a caster angle on the kingpin of the front wheels that is put there intentionally to make the car more stable to drive in the forward direction, which it does most of the time. However, the tradeoff is that it makes the car harder to stabilize when going backward. Changing the sign of the longitudinal velocity state also puts the zeros in the open RHP (nonminimum phase), since the steer inputs are now behind the vehicle center of gravity. A second example is as simple as a changing gear ratio in an automobile or bicycle.
The dynamics of the system are the same, however the high-frequency gain of the system is changed suddenly to optimize the power output of the actuator (engine or person). Both of these examples show how good mechanical design can result in a system whose parameters vary as a function of state variables. In essence, the scheme proposed here connects a controller to a parameter adaptation algorithm (PAA) in a certainty equivalence fashion. The PAA switches between sets of parameter estimates as a function of the system location in the state space. There have been several recent investigations into ‘switching control’. Most notably, Morse [10] uses a ‘family’ of set point controllers acting under a ‘supervisor’ which compares the output estimation errors of the controllers and then decides which one to put into feedback with a SISO process. Additionally, Narendra and Balakrishnan [12] have used multiple identification models to switch between controllers and determine an appropriate control output. For linear systems they have shown improved transient response of the control. Both of these investigations deal primarily with changing the controller as a function of output error. In this investigation, the controller structure remains fixed with the controller parameters varying as a specified function of state. The approaches of [10,12] are limited to linear systems and do not consider nonlinear adaptive control. The field of nonlinear adaptive control has seen increased activity over the past half decade. Some of the major contributions to this field were developed in the field of robotics [13,15]. The principal methods were for feedback linearizable systems. For a further review of these techniques the reader is referred to [8,13,15,17,18,20] and the references contained therein. As with linear systems, most of the work done with nonlinear adaptive control has made the assumption of constant parameters. 
The results presented here were initially designed for SISO systems with a feedback linearizable dynamic structure; in particular, an asymmetric hydraulic actuator with unknown coefficients [1]. However, the basic idea can be easily applied to specific classes of systems with either linear or nonlinear dynamic structure. The rest of the paper is organized as follows. Section 2 defines a restricted class of systems with linear dynamics and gives an analysis. In the following, these systems will be referred to as 'linear' in reference to their dynamic structure and to differentiate them from the class of systems in Section 4. The class of linear systems to be considered is that with full state information. Section 3 offers some extensions of the basic methodology to more general linear systems. The analysis used in Section 2 will be used to develop a similar approach for a class of systems in Section 4 with nonlinear dynamic structure, henceforth called 'nonlinear' systems. The class of nonlinear systems to be considered in Section 4 are those that are feedback linearizable. Section 5 offers some conclusions.


2. Canonical linear systems with full state feedback

The variable structure adaptive algorithm will first be developed for restricted classes of linear systems. The class of systems to initially be considered are SISO, LTI systems in canonical form described by

y = \frac{b}{A(s)} u = \frac{b}{s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0}\, u,   (2.1)

where the polynomial A(s) is monic with known degree n and b > 0 is the high-frequency gain. The relative degree of this system is n. The reason for initially choosing this particular class of systems is the similarity to feedback linearizable systems in the stability analysis. Section 3 will comment on extensions of the approach to more general linear systems.

Assumptions. (1) The entire R^n state space of the system can be partitioned into a collection of a \in Z^+ subsets, S_j, where a is a finite integer.
(2) The subsets, S_j, consist of connected sets that are either open or the closure of open sets.
(3) The coefficients of A(s) and the high-frequency gain, b, are assumed to be unknown and to vary with location, S_j, in the state space. Moreover, the uncontrolled process satisfies conditions for global existence and uniqueness of solutions.
(4) The entire state vector is accessible.
(5) A reference model to be followed is a stable, SISO, LTI, canonical form system described by

y_m = \frac{b_m}{A_m(s)} r = \frac{b_m}{s^n + a_{m,n-1}s^{n-1} + \cdots + a_{m,1}s + a_{m,0}}\, r,   (2.2)

where the polynomial A_m(s) is monic with known degree n, b_m > 0, and r is a bounded reference signal.

A standard adaptive control problem is to determine a control input, u, such that the actual system output asymptotically tracks the reference output; i.e. e(t) = y(t) - y_m(t) converges to zero. Additionally, the internal signals of the systems should remain bounded. Assumptions (1) and (2) are used to exclude pathological partitioning of the state space into subsets such as S_1 = Q and S_2 = R^1 \setminus Q, a \in [1, 2], which would be problematic in determining the presence of the system in a subset.
Assumption (3) indicates that the uncontrolled system has unique trajectory solutions in the sense of Filippov (ISF) [5]. Consider the neighborhood of any point along the closure, S_3, separating two adjoining open subsets, S_1 and S_2. For the described systems to possess solutions ISF, one of two criteria must be satisfied: (i) a trajectory reaching S_3 from subset S_1 (S_2) intersects S_3 in exactly one point and continues to subset S_2 (S_1); (ii) a trajectory reaching S_3 from subset S_1 (S_2) is forced to stay on S_3 in a sliding condition. Fig. 1 shows the possible trajectory flows near a parameter switching boundary. The reader is referred to [5] for further details. The assumption of existence and uniqueness for the uncontrolled system reflects the fact that nearly all systems to which the following approach would be applied are designed to satisfy it. The reader is referred ahead to Example 2.1 for a physical interpretation. In the following analysis, Assumption (5) can be relaxed to simply having a reference signal, y_m(t), that is continuously differentiable as needed.

Definition. The time-varying index, \{k(x) : R^n \to [1, a]\}, is an explicit function of the state and is defined as k(x) = j \in [1, a] if x \in S_j.

The index k(x) determines the present location of the system within the state space \{S_1, \ldots, S_{k(x)}, \ldots, S_a\}. For example, if k(x) = 4 then the system state presently resides in the 4th subset of the partitioned state space. The transfer function given in Eq. (2.1) represents an nth-order linear system in canonical form with x \in R^n, y \in R^1, u \in R^1. Choose z \in R^1 as a synthetic input

z = y_m^{(n)} - b_{n-1}e^{(n-1)} - \cdots - b_0 e,   (2.3)
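As a concrete illustration of the definition above, the index k(x) is simply a lookup from the current state to the label of the subset containing it. A minimal sketch, assuming the two-subset velocity-sign partition used later in Example 2.1 (the partition itself is an assumption for illustration):

```python
# Sketch: the index k(x) maps the state x to the label j of the subset S_j
# that contains it. The partition here (sign of the velocity state) is the
# one used in Example 2.1 of the text; any finite partition would do.
def k(x):
    """Return the subset index j in {1, 2} for state x = (y, ydot)."""
    _, ydot = x
    return 1 if ydot < 0 else 2

assert k((0.0, -0.5)) == 1   # negative velocity: subset S_1
assert k((0.0, 0.5)) == 2    # positive velocity: subset S_2
```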


Fig. 1. Trajectory flows for uncontrolled system.

where y_m^{(n)} denotes the nth derivative of y_m and s^n + b_{n-1}s^{n-1} + \cdots + b_0 is a Hurwitz polynomial. Define the actual input to be

u = \frac{1}{\hat b_j}\left(z + \hat a_{0,j}y + \hat a_{1,j}\dot y + \cdots + \hat a_{n-1,j}y^{(n-1)}\right) = W^{\mathrm T}\hat\theta_j, \quad j \in [1, a],   (2.4)

where a sets of estimated parameters are defined as

\hat\theta_j = \left[\frac{1}{\hat b_j}\;\; \frac{\hat a_{0,j}}{\hat b_j}\;\; \cdots\;\; \frac{\hat a_{n-1,j}}{\hat b_j}\right]^{\mathrm T} = [\hat\theta_{0,j}\; \hat\theta_{1,j}\; \cdots\; \hat\theta_{n,j}]^{\mathrm T} \in R^{n+1}, \quad j \in [1, a]   (2.5)

and

W = [z\; y\; \dot y\; \cdots\; y^{(n-1)}]^{\mathrm T} \in R^{n+1}.   (2.6)
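The certainty-equivalence computation of the input from the regressor and the active parameter set can be sketched as follows (a minimal illustration; the function name and second-order test values are assumptions):

```python
import numpy as np

def control(z, y_derivs, theta_hat):
    """Certainty-equivalence input u = W^T theta_hat, cf. Eqs. (2.4)-(2.6).

    z        : synthetic input from Eq. (2.3)
    y_derivs : [y, ydot, ..., y^(n-1)]
    theta_hat: [1/b_hat, a0_hat/b_hat, ..., a_{n-1}_hat/b_hat] for the
               subset the state currently occupies
    """
    W = np.array([z] + list(y_derivs))   # regressor, Eq. (2.6)
    return float(W @ theta_hat)

# second-order check with b = 2, a0 = 3, a1 = 4 (assumed values):
# u = (z + a0*y + a1*ydot)/b = (1 + 3 + 4)/2 = 4
u = control(1.0, [1.0, 1.0], [0.5, 1.5, 2.0])
assert abs(u - 4.0) < 1e-12
```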

To ensure that the estimates of the unknown parameters in Eq. (2.5) remain bounded, each set of estimated parameters can be projected onto a desirable set with predetermined bounds [16]. Therefore, upper and/or lower bounds could be put on the allowable magnitudes of \hat\theta_{i,j} for i \in [0, n], j \in [1, a] in Eq. (2.5). The input in Eq. (2.4) uses the coefficients appropriate to the present state space location, S_{k(x)}. Defining the state-dependent parameter error as \tilde\theta_j = \hat\theta_j - \theta_j, Eqs. (2.3)-(2.6) result in the error dynamics, with E = [e\; \dot e\; \cdots\; e^{(n-1)}]^{\mathrm T} \in R^n:

\dot E = A_e E + b_j B_e W^{\mathrm T}\tilde\theta_j =
\begin{bmatrix}
0 & 1 & & 0 \\
0 & 0 & 1 & \\
 & & \ddots & 1 \\
-b_0 & -b_1 & \cdots & -b_{n-1}
\end{bmatrix} E +
\begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} b_j W^{\mathrm T}\tilde\theta_j, \quad j \in [1, a],   (2.7)

e = C_e E = [1\; 0\; \cdots\; 0]E.

Theorem 2.1. Consider a system of the form in Eq. (2.1) and a reference model of the form of Eq. (2.2). Define P as the symmetric, positive-definite matrix satisfying the Lyapunov matrix equation

P A_e + A_e^{\mathrm T} P = -Q, \quad Q = Q^{\mathrm T} > 0   (2.8)
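The matrix P in Eq. (2.8) can be computed numerically and, for the second-order case with Q = I, checked against the closed form used later in Eq. (2.18). A sketch using SciPy's Lyapunov solver (the use of SciPy and the numerical values b_0 = 9, b_1 = 6 from the later simulation are assumptions):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

b0, b1 = 9.0, 6.0                        # Hurwitz coefficients from Eq. (2.3)
Ae = np.array([[0.0, 1.0], [-b0, -b1]])  # companion matrix of Eq. (2.7)
Q = np.eye(2)

# Eq. (2.8) is P Ae + Ae^T P = -Q.  SciPy solves A X + X A^H = Q_in,
# so pass A = Ae^T and Q_in = -Q to recover the symmetric P of the text.
P = solve_continuous_lyapunov(Ae.T, -Q)

# closed-form entries for the 2x2, Q = I case (cf. Eq. (2.18))
p1 = (b1**2 + b0 + b0**2) / (2 * b0 * b1)
p2 = 1 / (2 * b0)
p3 = (1 + b0) / (2 * b0 * b1)
assert np.allclose(P, [[p1, p2], [p2, p3]])
```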


by the choice of the b_i's, i \in [0, n-1] in Eq. (2.3). If the control input of Eq. (2.4) is applied in conjunction with the parameter adaptation algorithm

\dot{\tilde\theta}_{k(x)} = \dot{\hat\theta}_{k(x)} = -\Gamma_{k(x)} W B_e^{\mathrm T} P E, \quad \Gamma_{k(x)} \in R^{(n+1)\times(n+1)}, \; \Gamma_{k(x)} > 0,
\dot{\tilde\theta}_j = \dot{\hat\theta}_j = 0 \quad \forall j \in [1, a], \; j \ne k(x),   (2.9)
u = \frac{1}{\hat b_{k(x)}}\left(z + \hat a_{0,k(x)}y + \hat a_{1,k(x)}\dot y + \cdots + \hat a_{n-1,k(x)}y^{(n-1)}\right) = W^{\mathrm T}\hat\theta_{k(x)}, \quad k(x) \in [1, a],

where \Gamma_{k(x)} > 0 is the diagonal gain matrix corresponding to subset S_{k(x)}, then the error between the model and the plant converges to zero asymptotically.

Proof. Consider the positive-definite scalar function

V = \frac12 E^{\mathrm T} P E + \frac12 \sum_{j=1}^{a} \tilde\theta_j^{\mathrm T}\Gamma_j^{-1}\tilde\theta_j = \frac12 E^{\mathrm T} P E + \frac12\left(\sum_{j \ne k(x)} \tilde\theta_j^{\mathrm T}\Gamma_j^{-1}\tilde\theta_j + \tilde\theta_{k(x)}^{\mathrm T}\Gamma_{k(x)}^{-1}\tilde\theta_{k(x)}\right) \ge 0,   (2.10)

where P and \Gamma_{k(x)} are symmetric, positive definite and P satisfies the Lyapunov matrix equation (2.8). Differentiating Eq. (2.10) gives

\dot V = \frac12 E^{\mathrm T}(P A_e + A_e^{\mathrm T} P)E + \tilde\theta_{k(x)}^{\mathrm T}\left(b_{k(x)} W B_e^{\mathrm T} P E + \Gamma_{k(x)}^{-1}\dot{\hat\theta}_{k(x)}\right) + \sum_{j \ne k(x)} \tilde\theta_j^{\mathrm T}\Gamma_j^{-1}\dot{\tilde\theta}_j = -\frac12 E^{\mathrm T} Q E \le 0   (2.11)

if the parameter update laws from Eq. (2.9) are used and b_{k(x)}, the high-frequency gain, is absorbed into the adaptation gain \Gamma_{k(x)}. Eq. (2.11) shows that \dot V(x, t) is negative semi-definite. A Lyapunov-like corollary to Barbalat's Lemma [16,18] is now used to prove stability.

Lyapunov-like Lemma (Slotine and Li [18]). If a scalar function V(x, t) satisfies the following conditions: (i) V(x, t) is lower bounded, (ii) \dot V(x, t) is negative semi-definite, and (iii) \dot V(x, t) is uniformly continuous in time, then \dot V(x, t) \to 0 as t \to \infty.

The first two conditions are shown directly in Eqs. (2.10) and (2.11). The uniform continuity of \dot V(x, t) can be checked by examining the boundedness of \ddot V(x, t):

\ddot V(x, t) = -E^{\mathrm T} Q\left(A_e E + b_{k(x)} B_e W^{\mathrm T}\tilde\theta_{k(x)}\right).   (2.12)

The matrices Q, A_e, and B_e are constant and the parameter b_j \in L_\infty \; \forall j \in [1, a]. Since V(x, t) is positive definite (lower bounded by 0) and \dot V(x, t) is negative semidefinite, V(x, t) is also upper bounded by V(x(0), 0), i.e. V(x, t) is L_\infty bounded. From the construction of V(x, t) in Eq. (2.10), this implies that E \in L_\infty and also \tilde\theta_j \in L_\infty \; \forall j \in [1, a]. Using Assumption (5), y_m and all its necessary derivatives are bounded. Therefore, the L_\infty boundedness of E implies that y and its derivatives are also L_\infty bounded. The boundedness of W is then given from the definition of W in Eq. (2.6). Therefore, \ddot V(x, t) \in L_\infty and hence \dot V(x, t) is uniformly continuous. From the previous Lyapunov-like lemma, \dot V(x, t) \to 0 \Rightarrow E \to 0 as t \to \infty. Consequently, the output of the plant tracks the output of the reference model asymptotically. □

The control in Eq. (2.4) is discontinuous. Therefore, the existence and uniqueness of solutions to the controlled system's governing differential equations is not guaranteed. To address this, the method of hysteretic switching [10] can be employed.
As the solution trajectory evolves in the state space, the activation of a new set of parameter estimates for the controller does not occur until the orthogonal distance between the trajectory and the boundary separating the two adjoining open subsets equals δ. The size of the hysteresis, δ, should be chosen sufficiently small, on the order of the sensing capability of the feedback. Fig. 2 shows the hysteretic switching and indicates where the new parameter estimates are introduced along the solution trajectory. An alternative to hysteretic switching is dwell time switching [10], in which the activation of a new set of parameter estimates does not occur until some time τ_d after the trajectory enters the new subset. The dwell
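The hysteretic switching rule can be sketched in a few lines; the velocity-sign boundary and the width δ = 0.05 below are assumptions for illustration:

```python
# Sketch of hysteretic switching: the active parameter set changes only
# after the trajectory has crossed a distance delta past the boundary
# (here the boundary ydot = 0, as in Example 2.1 of the text).
DELTA = 0.05

def switch(active, ydot, delta=DELTA):
    """Return the active subset index in {1, 2}, with hysteresis delta."""
    if active == 1 and ydot > delta:
        return 2
    if active == 2 and ydot < -delta:
        return 1
    return active          # inside the hysteresis band: no switch

# chatter near the boundary does not toggle the active set ...
active = 1
for ydot in (0.01, -0.02, 0.03, -0.01):
    active = switch(active, ydot)
assert active == 1
# ... but a genuine crossing does
assert switch(1, 0.1) == 2
```

Because the active set can change only after a finite excursion of size δ, infinitely fast switching is ruled out, which is exactly the role the hysteresis plays in the text.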


Fig. 2. Hysteretic switching.

time switching is a de facto artifact of digital implementation, where τ_d ≥ τ_s, the controller's sampling time. In either the hysteretic or dwell time switching case, infinitely fast switching cannot occur, and so existence and uniqueness of solutions of the differential equations involved is not an issue. The price of the switching hysteresis or dwell time is that there may be perturbations to the tracking error dynamics in Eq. (2.7) during transitions between subsets. However, these can be made as small as necessary by choosing smaller δ or τ_d. Additionally, the use of a hysteresis or dwell time allows Assumption (2) to be relaxed and the closures (S_3 in Fig. 2) to be disregarded as separate subsets with their own parameter estimates. The use of a switching function such as in Fig. 2, however small, guarantees that there need not be a distinct set of parameters for the separating manifold. It also eliminates the problem of detecting when the solution trajectory is on the manifold, which would be difficult in practice.

As shown, the output of the actual system will track the desired output asymptotically. As the control in Eqs. (2.4) and (2.9) steers the system through the kth subset, S_{k(x)}, of the a regions of the state space, \{S_1, \ldots, S_{k(x)}, \ldots, S_a\}, the kth adaptation algorithm corresponding to that region is activated, with all other adaptation algorithms deactivated. Therefore, the kth parameter set \hat\theta_{k(x)} of the a sets of parameters \{\hat\theta_1, \ldots, \hat\theta_{k(x)}, \ldots, \hat\theta_a\} is updated, with the other parameter estimates remaining constant until a new region of the state space is encountered. At this time, the algorithm of Eq. (2.9) deactivates the kth update law and switches to a new one corresponding to the new state subset. When the state trajectory leaves the kth state subset, the estimated parameter values from that kth state subset are frozen and stored. When that kth subset is re-entered by the system at a later time, the adaptation of \hat\theta_{k(x)} in Eq. (2.9) continues from the values of the parameter estimates corresponding to the most recent presence of the system within that state partition.

The convergence of the parameter errors \tilde\theta_{k(x)}, k(x) \in [1, a], is not guaranteed by the previous analysis. To guarantee parameter convergence in addition to error convergence, the input to the reference model must be persistently exciting (PE). Moreover, the PE condition must hold for all regions of the state space if all a sets of parameters are to converge to their actual values. For the linear case it is well known [11,16,18] that n/2 sine waves are sufficient excitation for the identification of n parameters.

Example 2.1. Consider the 1 DOF oscillator with the asymmetric viscous damping curve shown in Fig. 3 and the free response dynamics shown in Fig. 4. For practical relevance, the asymmetry in the damping element is characteristic of the difference between 'jounce' and 'rebound' phases of automotive suspensions. As shown in Fig. 4, the system in this example satisfies Assumption (3). Assuming an input force on the mass, the equation of motion of the system is

m\ddot y + c\dot y + q y = u, \quad c \in [c_1, c_2], \; c_2 > c_1 > 0.   (2.13)
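The freeze-and-resume bookkeeping described above amounts to storing one estimate vector per subset and updating only the active one. A minimal sketch, assuming a = 2 subsets and a scalar adaptation gain for brevity:

```python
import numpy as np

# One estimate vector per subset; only the active subset's vector is
# updated, the others stay frozen until their subset is re-entered.
estimates = {j: np.zeros(3) for j in (1, 2)}   # a = 2 subsets (assumed)

def update(k_x, grad, dt, gamma=1.0):
    """Apply the spirit of Eq. (2.9): adapt subset k_x only."""
    estimates[k_x] = estimates[k_x] - dt * gamma * grad

before = estimates[2].copy()
update(1, np.array([1.0, 2.0, 3.0]), dt=0.1)
assert np.allclose(estimates[2], before)              # subset 2 frozen
assert np.allclose(estimates[1], [-0.1, -0.2, -0.3])  # subset 1 adapted
```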

As is illustrated in Fig. 4, the state space is divided into two open subsets \{S_1, S_2\}, where S_1 = \{(y, \dot y): \dot y < 0\} and S_2 = \{(y, \dot y): \dot y > 0\}. In strict accordance with Assumption (2), the set S_3 = \{(y, \dot y): \dot y = 0\} should be


Fig. 3. Force—Velocity curve.


Fig. 4. Partitioned state space with free response trajectories.

included; however, here the relaxed version is used along with one of the aforementioned hystereses. As in Eq. (2.2), the reference model is defined as

a_{m,1}\ddot y_m + a_{m,2}\dot y_m + a_{m,3} y_m = r.   (2.14)

Using the procedure outlined above, the input to the system is

u = \hat m_j\left(\ddot y_m - b_1(\dot y - \dot y_m) - b_0(y - y_m)\right) + \hat c_j\dot y + \hat q_j y
  = \hat m_j z + \hat c_j\dot y + \hat q_j y = W^{\mathrm T}\hat\theta_j, \quad j \in [1, 2],   (2.15)

with the parameter and regressor vectors defined as \hat\theta_j = [\hat m_j, \hat c_j, \hat q_j]^{\mathrm T}, j \in [1, 2], and W = [z, \dot y, y]^{\mathrm T}, respectively. Following Eq. (2.9), the parameter update algorithms become

\dot{\hat\theta}_1 = \begin{cases} -\Gamma_1 W B_e^{\mathrm T} P E & \forall \dot y < 0, \\ 0 & \forall \dot y > 0 \end{cases}
\quad\text{and}\quad
\dot{\hat\theta}_2 = \begin{cases} 0 & \forall \dot y < 0, \\ -\Gamma_2 W B_e^{\mathrm T} P E & \forall \dot y > 0 \end{cases}   (2.16)

with B_e, P, and E determined as in Eqs. (2.7) and (2.8). Eq. (2.16) shows the variable structure nature of the adaptation algorithm. Defining Q = I \in R^{2\times 2} in Eq. (2.8), Eq. (2.16) becomes

\dot{\hat\theta}_1 = \begin{cases} -\Gamma_1 \begin{bmatrix} z \\ \dot y \\ y \end{bmatrix}(p_2 e + p_3\dot e) & \forall \dot y < 0, \\ 0 & \forall \dot y > 0 \end{cases}
\quad\text{and}\quad
\dot{\hat\theta}_2 = \begin{cases} 0 & \forall \dot y < 0, \\ -\Gamma_2 \begin{bmatrix} z \\ \dot y \\ y \end{bmatrix}(p_2 e + p_3\dot e) & \forall \dot y > 0 \end{cases}   (2.17)

where P is the solution to the Lyapunov equation (2.8):

P = \begin{bmatrix} p_1 & p_2 \\ p_2 & p_3 \end{bmatrix}
  = \begin{bmatrix} \dfrac{b_1^2 + b_0 + b_0^2}{2 b_0 b_1} & \dfrac{1}{2 b_0} \\[2mm] \dfrac{1}{2 b_0} & \dfrac{1 + b_0}{2 b_0 b_1} \end{bmatrix}.   (2.18)

Figs. 5-8 show the simulated performance of the variable structure adaptation scheme. In the simulation, m = q = 1, c_1 = 1 and c_2 = 5. The controller design parameters are chosen as b_0 = 9, b_1 = 6 and the adaptation gain matrix for both \{S_1, S_2\} is chosen as diag[2, 3, 2]. The desired reference output is the sum of two sinusoids: 2\sin \pi t + \sin 2\pi t.
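The simulation just described can be reproduced qualitatively with a short forward-Euler script. This is a sketch, not the author's code: the parameter and gain values are the ones quoted above, the reference signal is used directly via the relaxed Assumption (5), and the switching hysteresis is omitted for brevity.

```python
import numpy as np

# Example 2.1 sketch: m = q = 1, c1 = 1, c2 = 5, b0 = 9, b1 = 6,
# Gamma = diag(2, 3, 2) -- values quoted in the text.
m_true, q_true, c1, c2 = 1.0, 1.0, 1.0, 5.0
b0, b1 = 9.0, 6.0
Gamma = np.diag([2.0, 3.0, 2.0])
p2, p3 = 1 / (2 * b0), (1 + b0) / (2 * b0 * b1)   # P entries, Eq. (2.18)

def ref(t):
    """Reference y_m = 2 sin(pi t) + sin(2 pi t) and its derivatives."""
    w = np.pi
    ym = 2 * np.sin(w * t) + np.sin(2 * w * t)
    ymd = 2 * w * np.cos(w * t) + 2 * w * np.cos(2 * w * t)
    ymdd = -2 * w**2 * np.sin(w * t) - 4 * w**2 * np.sin(2 * w * t)
    return ym, ymd, ymdd

dt, T = 1e-3, 20.0
n = int(T / dt)
y, yd = 0.0, 0.0
theta = [np.array([1.0, 0.0, 0.0]),    # [m_hat, c_hat, q_hat] for S_1
         np.array([1.0, 0.0, 0.0])]    # ... and for S_2
errs = np.empty(n)
for i in range(n):
    t = i * dt
    ym, ymd, ymdd = ref(t)
    e, ed = y - ym, yd - ymd
    z = ymdd - b1 * ed - b0 * e         # synthetic input, Eq. (2.3)
    j = 0 if yd < 0 else 1              # active subset (no hysteresis here)
    W = np.array([z, yd, y])            # regressor of Eq. (2.15)
    u = W @ theta[j]                    # u = m_hat z + c_hat yd + q_hat y
    theta[j] = theta[j] - dt * Gamma @ W * (p2 * e + p3 * ed)  # Eq. (2.17)
    c = c1 if yd < 0 else c2            # state-dependent damping
    ydd = (u - c * yd - q_true * y) / m_true
    y, yd = y + dt * yd, yd + dt * ydd  # Euler step
    errs[i] = abs(e)

# the tracking error shrinks as the two parameter sets adapt
assert errs[-n // 4:].mean() < errs[:n // 4].mean()
```

Only the active subset's estimate vector is touched each step, which is exactly the freeze-and-resume behavior of Eq. (2.16); with the hysteresis of Fig. 2 added, the small residual error perturbations described above appear.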


Fig. 5. Tracking error.

Fig. 6. Damper estimates \{\hat c_1, \hat c_2\}.

Fig. 7 demonstrates the regions in which the parameter estimates \{\hat c_1, \hat c_2\} are alternately updated and frozen. The simulations also indicate that the error does not go exactly to zero. There are slight perturbations to the tracking error, and hence the parameter estimates, due to the hysteretic switching. In this case, δ was made large enough to demonstrate a noticeable effect.

Note. Two sets of estimated parameters, \{\hat\theta_1, \hat\theta_2\}, are defined in Eqs. (2.16) and (2.17). This means that two different estimates will be determined for the mass and spring terms, as well as the damper. Reference model tracking will still be guaranteed, although the only parameter that varies with respect to the state space is the


Fig. 7. Damper estimate ‘Close up’.

Fig. 8. Other parameter estimates: m, q.

damper. Consequently, in an effort to improve algorithmic efficiency, Eq. (2.16) or Eq. (2.17) could be modified so that the same update laws are used throughout the state space for the mass and the spring, while the adaptation of the asymmetric damper coefficient retains its variable structure framework. This is the approach used in Example 2.1 and is evidenced in Fig. 8 above.

Note. The partitioning of the state space must be based on the structure of the dynamics of the system, e.g. the mass velocity dependence in the example. The system representation which most clearly indicates the change of parameters as a function of state is the judicious choice for a system model. If the original


system were to undergo a state space transformation (e.g. T^{-1}AT to modal coordinates), the new representation would simply be partitioned according to the transformation of the original partitions.

Note. There are separate adaptation gains for each subset, as shown in Eqs. (2.9) and (2.16). This allows flexibility in tuning the convergence rate of the parameter estimates. If the system performance in one subset is more critical than in the other subsets, and fast parameter convergence is required there, then the adaptation gain for that subset can be increased relative to the other subsets.

3. Extensions to more general linear systems

The methodology used in Section 2 can be extended to a more general class of linear systems. The class of systems is given by

y = b\,\frac{B(s)}{A(s)}\, u = b\,\frac{s^m + b_{m-1}s^{m-1} + \cdots + b_1 s + b_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0}\, u,   (3.1)

where the polynomials A(s) and B(s) are monic with known degrees n and m, respectively, and b > 0 is the high-frequency gain. A(s) and B(s) share no common factors, n \ge m, and B(s) is Hurwitz.

Assumptions. (1)-(5) of Section 2. Assumption (3) is modified so that the coefficients of B(s) are also assumed to be unknown and to vary with location, S_j, in the state space. Again, Assumption (5) can be relaxed as in Section 2. Define a synthetic input z \in R^1 as

z = y_m^{(n)} - b_{n-1}e^{(n-1)} - \cdots - b_0 e,   (3.3)

where y_m^{(n)} denotes the nth derivative of y_m and s^n + b_{n-1}s^{n-1} + \cdots + b_0 is a Hurwitz polynomial. We now use the technique of dynamic extension [7] to redefine the input to the system. Define an extended input as the mth derivative of the actual input: v = u^{(m)}. Choose the extended input as

v = \frac{1}{\hat b_j}\left(z + \hat a_{0,j}y + \hat a_{1,j}\dot y + \cdots + \hat a_{n-1,j}y^{(n-1)}\right) - \hat b_{0,j}u - \hat b_{1,j}\dot u - \cdots - \hat b_{m-1,j}u^{(m-1)} = W^{\mathrm T}\hat\theta_j, \quad j \in [1, a],   (3.4)

where a sets of parameters are defined as

\hat\theta_j = \left[\frac{1}{\hat b_j}\;\; \frac{\hat a_{0,j}}{\hat b_j}\;\; \cdots\;\; \frac{\hat a_{n-1,j}}{\hat b_j}\;\; -\hat b_{0,j}\;\; -\hat b_{1,j}\;\; \cdots\;\; -\hat b_{m-1,j}\right]^{\mathrm T} = [\hat\theta_{0,j}\; \hat\theta_{1,j}\; \cdots\; \hat\theta_{n,j}\; \hat\theta_{n+1,j}\; \cdots\; \hat\theta_{n+m,j}]^{\mathrm T} \in R^{n+m+1}, \quad j \in [1, a]   (3.5)

and

W = [z\; y\; \dot y\; \cdots\; y^{(n-1)}\; u\; \dot u\; \cdots\; u^{(m-1)}]^{\mathrm T} \in R^{n+m+1}.   (3.6)

Defining \tilde\theta_j as before, Eqs. (3.3)-(3.6) result in the error dynamics, with E = [e\; \dot e\; \cdots\; e^{(n-1)}]^{\mathrm T} \in R^n:

\dot E = A_e E + b_j B_e W^{\mathrm T}\tilde\theta_j =
\begin{bmatrix}
0 & 1 & & 0 \\
0 & 0 & 1 & \\
 & & \ddots & 1 \\
-b_0 & -b_1 & \cdots & -b_{n-1}
\end{bmatrix} E +
\begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} b_j W^{\mathrm T}\tilde\theta_j,   (3.7)

e = C_e E = [1\; 0\; \cdots\; 0]E, \quad j \in [1, a].


This error system is similar to the one from Section 2 apart from the forcing function b_j B_e W^{\mathrm T}\tilde\theta_j. An analysis similar to the proof of Theorem 2.1 demonstrates asymptotic convergence of E as t \to \infty.

Theorem 3.1. Consider a system of the form in Eq. (3.1) and a reference model of the form of Eq. (2.2). Define P as the symmetric, positive-definite matrix satisfying the Lyapunov matrix equation

P A_e + A_e^{\mathrm T} P = -Q, \quad Q = Q^{\mathrm T} > 0   (3.8)

by the choice of the b_i's, i \in [0, n-1] in Eq. (3.3). If the extended input of Eq. (3.4) is applied in conjunction with the parameter adaptation algorithm

\dot{\tilde\theta}_{k(x)} = \dot{\hat\theta}_{k(x)} = -\Gamma_{k(x)} W B_e^{\mathrm T} P E, \quad \Gamma_{k(x)} \in R^{(n+m+1)\times(n+m+1)}, \; \Gamma_{k(x)} > 0,
\dot{\tilde\theta}_j = \dot{\hat\theta}_j = 0 \quad \forall j \in [1, a], \; j \ne k(x),
v = \frac{1}{\hat b_{k(x)}}\left(z + \hat a_{0,k(x)}y + \hat a_{1,k(x)}\dot y + \cdots + \hat a_{n-1,k(x)}y^{(n-1)}\right) - \hat b_{0,k(x)}u - \hat b_{1,k(x)}\dot u - \cdots - \hat b_{m-1,k(x)}u^{(m-1)} = W^{\mathrm T}\hat\theta_{k(x)}, \quad k(x) \in [1, a],   (3.9)

where \Gamma_{k(x)} > 0 is the diagonal gain matrix corresponding to subset S_{k(x)}, then the error between the model and the plant converges to zero asymptotically.

Proof. Consider the positive-definite scalar function

V = \frac12 E^{\mathrm T}PE + \frac12\left(\sum_{j \ne k(x)} \tilde\theta_j^{\mathrm T}\Gamma_j^{-1}\tilde\theta_j + \tilde\theta_{k(x)}^{\mathrm T}\Gamma_{k(x)}^{-1}\tilde\theta_{k(x)}\right) \ge 0,   (3.10)

where P and \Gamma_{k(x)} are symmetric, positive definite and P satisfies the Lyapunov matrix equation (3.8). Differentiating Eq. (3.10) gives

\dot V = \frac12 E^{\mathrm T}(PA_e + A_e^{\mathrm T}P)E + \tilde\theta_{k(x)}^{\mathrm T}\left(b_{k(x)}WB_e^{\mathrm T}PE + \Gamma_{k(x)}^{-1}\dot{\hat\theta}_{k(x)}\right) + \sum_{j \ne k(x)}\tilde\theta_j^{\mathrm T}\Gamma_j^{-1}\dot{\tilde\theta}_j = -\frac12 E^{\mathrm T}QE \le 0   (3.11)

as in Theorem 2.1. The Lyapunov-like lemma can now be used to determine stability. The first two conditions, V(x, t) is lower bounded and \dot V(x, t) is negative semidefinite, are given by Eqs. (3.10) and (3.11). The uniform continuity of \dot V(x, t) can be checked by examining the boundedness of \ddot V(x, t):

\ddot V(x, t) = -E^{\mathrm T}Q\left(A_e E + b_{k(x)}B_e W^{\mathrm T}\tilde\theta_{k(x)}\right).   (3.12)

The matrices Q, A_e, and B_e are constant and the parameter b_j \in L_\infty \; \forall j \in [1, a]. Since V(x, t) is positive definite (lower bounded by 0) and \dot V(x, t) is negative semidefinite, V(x, t) is also upper bounded by V(x(0), 0), i.e. V(x, t) is L_\infty bounded. From the construction of V(x, t) in Eq. (3.10), this implies that E \in L_\infty and also \tilde\theta_j \in L_\infty \; \forall j \in [1, a]. Using Assumption (5), y_m and all its necessary derivatives are bounded. Therefore, the L_\infty boundedness of E implies that y and its derivatives are also L_\infty bounded. However, unlike Theorem 2.1, the boundedness of W cannot be given directly since it involves derivatives of the input. Consider the function

W(t) = y^{(n)} + a_{n-1,j}y^{(n-1)} + \cdots + a_{1,j}\dot y + a_{0,j}y.   (3.13)

Since y and all its derivatives are bounded, W(t) \in L_\infty. Considering Eq. (3.1),

u(s) = \frac{(1/b_j)}{s^m + b_{m-1,j}s^{m-1} + \cdots + b_{1,j}s + b_{0,j}}\,W(s).   (3.14)

Since W(t) \in L_\infty and B(s) is Hurwitz, Eq. (3.14) implies u \in L_\infty. Differentiating Eq. (3.14) and, by abuse of notation, mixing time and Laplace domain notation,

s\,u(s) = \dot u(s) = \frac{(s/b_j)}{s^m + b_{m-1,j}s^{m-1} + \cdots + b_{1,j}s + b_{0,j}}\,W(s).   (3.15)


Since W(t) \in L_\infty and

\frac{s}{s^m + b_{m-1,j}s^{m-1} + \cdots + b_{1,j}s + b_{0,j}}

is a BIBO stable transfer function, \dot u \in L_\infty. Similarly, all m derivatives of u can be shown to be L_\infty bounded. Therefore, W is L_\infty bounded by Eq. (3.6). Therefore, \ddot V(x, t) \in L_\infty in Eq. (3.12) and hence \dot V(x, t) is uniformly continuous. Therefore, \dot V(x, t) \to 0 \Rightarrow E \to 0 as t \to \infty and reference model tracking is asymptotically achieved. □

Note. The actual input can now be derived from the extended input by a series of m integrations. Additionally, all the elements of the regressor vector associated with the input can be obtained by successive integration of the extended input.

Note. Either of the previously mentioned hysteretic switching or dwell time switching schemes from Section 2 would also be employed in this case. The error would converge to a bounded region around zero and the estimates would converge to neighborhoods of the true values for PE signals.

Note. The class of systems examined here is minimum phase and hence may be suitable for robust nonadaptive approaches. However, robustness to significant parameter changes may necessitate an increase in the overall 'loop gain' of the system. This may be practically infeasible in the presence of noise or of unmodeled dynamics. The approach given here makes explicit use of prior information to achieve its performance.

The procedure of Sections 2 and 3 requires knowledge of the full state vector. While this assumption is restrictive, it is also necessary at present. This is because knowing when to change update algorithms depends on knowing the partition S_j that the state is in. If the system parameters change only as a function of the input or output of the system, which are directly accessible signals, the extension of standard [3,11,16] output-based algorithms is straightforward (e.g. [19]). However, standard output-based adaptive algorithms rely on filtered input and output signals to generate stable adaptive controllers. The adaptive observers that are generated are nonminimal representations of the plant dynamics and, consequently, the states of the observer do not correspond to the physical plant states. Stable pole-zero cancellation takes care of this in the output tracking, but there is no way of determining exactly where the system is in the state space using this approach. Therefore, the adaptive observer states cannot be used to regulate the activation/deactivation of estimated parameter sets for the systems described in this work. As a result, if the plant parameters change as a function of a state, the states must be available for the approach presented here to apply.
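The m integrations recovering u (and the u-derivatives needed in the regressor of Eq. (3.6)) from the extended input v amount to an integrator chain. A minimal forward-Euler sketch (the discretization is an assumption for illustration):

```python
# Integrator chain: the state holds [u, u', ..., u^(m-1)] and the
# extended input v = u^(m) drives the last integrator.
def step_chain(state, v, dt):
    """One Euler step of u^(m) = v; returns updated [u, u', ..., u^(m-1)]."""
    m = len(state)
    new = list(state)
    for i in range(m - 1):
        new[i] += dt * state[i + 1]   # each entry integrates the next one
    new[m - 1] += dt * v              # the top derivative integrates v
    return new

# with v = 0 and u'(0) = 1, u grows linearly: after 1 s, u is about 1
state = [0.0, 1.0]          # m = 2: [u, u']
for _ in range(1000):
    state = step_chain(state, 0.0, 1e-3)
assert abs(state[0] - 1.0) < 1e-9
```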

4. SISO feedback linearizable systems

The analysis of Section 2 provides a clear insight into the variable structure adaptation approach. This section extends the results to a class of nonlinear systems: SISO systems that are feedback linearizable [6,17,22]. These systems take the form

$$\dot{x} = f(x) + g(x)u, \qquad y = h(x), \tag{4.1}$$

where the vector fields f and g and the function h are as smooth as necessary and $x \in \mathbb{R}^n$.

Assumptions. (1), (2) and (4) from Section 2 hold, with (4) now reading: the systems of Eq. (4.1) are minimum-phase [4] nonlinear systems.
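The Lie-derivative machinery used throughout this section can be made concrete with a small symbolic sketch. The two-state plant below is a hypothetical example chosen purely for illustration (it is not from the paper), and sympy is used for the symbolic differentiation:

```python
import sympy as sp

# Hypothetical two-state example: x1_dot = x2, x2_dot = -x1**3 + u, y = h(x) = x1.
x1, x2, v = sp.symbols('x1 x2 v')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -x1**3])   # drift vector field f(x)
g = sp.Matrix([0, 1])         # input vector field g(x)
h = x1

def lie(vec, scalar):
    """Lie derivative of a scalar field along a vector field: (d scalar/dx) * vec."""
    return (sp.Matrix([scalar]).jacobian(x) * vec)[0]

Lf_h = lie(f, h)              # = x2
Lg_h = lie(g, h)              # = 0, so the input does not yet appear: r > 1
Lg_Lf_h = lie(g, Lf_h)        # = 1, nonzero, so the relative degree is r = 2
u = (-lie(f, Lf_h) + v) / Lg_Lf_h   # the feedback-linearizing law with r = 2
print(Lg_h, Lg_Lf_h, sp.simplify(u))
```

Here L_g h = 0 but L_g L_f h = 1 is nonzero, so two differentiations of the output are needed before the input appears, and the computed u renders the input-output map y'' = v.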


(5) The functions f and g can be linearly parameterized as

$$f(x) = \sum_{i=1}^{p} \theta^{f}_{i} f_{i}(x) \quad \text{and} \quad g(x) = \sum_{j=1}^{q} \theta^{g}_{j} g_{j}(x), \tag{4.2}$$

where the functions $f_i(x)$ and $g_j(x)$ are known, smooth functions and the parameters $\theta^{f}_{i}$ and $\theta^{g}_{j}$ are unknown and vary with the location of the state within the partitioned state space. Moreover, the uncontrolled process satisfies global existence and uniqueness of solutions.

If $L_f h$ and $L_g h$ denote the Lie derivatives of h with respect to f and g, respectively, and $L_g L_f^{r-1} h(x) \neq 0\ \forall x \in \mathbb{R}^n$, where r is the relative degree, then the control law

$$u = \frac{1}{L_{g}L_{f}^{r-1}h}\left(-L_{f}^{r}h + v\right) \tag{4.3}$$

results in the linear system

$$y^{(r)} = v \tag{4.4}$$

and v can be chosen to achieve desired system performance. Some of the states may be rendered unobservable by the state feedback; these states correspond to the zero dynamics of the system. Since the system in Eq. (4.1) is minimum phase, any zero dynamics are asymptotically stable. The adaptive problem arises when the functions f(x) and g(x) contain unknown parameters. In the sequel, we consider the relative degree r = 1 case, $L_g h(x) \neq 0\ \forall x \in \mathbb{R}^n$, for clarity of exposition. As with the linear case of Section 2, the index $k(x) \in [1, a]$ determines the present location of the system within the partitioned state space $\{S_1, \ldots, S_{k(x)}, \ldots, S_a\}$. Again, this requires full knowledge of the system state; however, this restriction is quite common in the nonlinear control arena. In the following, the explicit dependence of the parameters on k(x) will be used. The estimates of the functions become

$$\hat{f}(x) = \sum_{i=1}^{p} \hat{\theta}^{f}_{i,k(x)} f_{i}(x) \quad \text{and} \quad \hat{g}(x) = \sum_{j=1}^{q} \hat{\theta}^{g}_{j,k(x)} g_{j}(x). \tag{4.5}$$

Defining the estimated Lie derivatives to be

$$L_{\hat{f}}h = \sum_{i=1}^{p} \hat{\theta}^{f}_{i,k(x)} L_{f_{i}}h \quad \text{and} \quad L_{\hat{g}}h = \sum_{j=1}^{q} \hat{\theta}^{g}_{j,k(x)} L_{g_{j}}h, \tag{4.6}$$

the corresponding input u, cf. Eq. (4.3), becomes

$$u = \frac{1}{L_{\hat{g}}h}\left(-L_{\hat{f}}h + v\right). \tag{4.7}$$
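The certainty-equivalence structure of Eqs. (4.5)-(4.7) can be checked on a toy example. The scalar plant below is hypothetical (chosen only for illustration): when the estimates happen to equal the true parameters, the input of Eq. (4.7) reduces the closed loop exactly to the linear system of Eq. (4.4).

```python
# Hypothetical scalar plant (not from the paper): x_dot = theta_f * x**3 + theta_g * u,
# y = x, so L_f h = theta_f * x**3 and L_g h = theta_g (relative degree r = 1).

def certainty_equivalence_input(x, theta_f_hat, theta_g_hat, v):
    """Eq. (4.7): u = (-L_fhat h + v) / L_ghat h, built from the estimates of Eq. (4.5)."""
    L_fhat_h = theta_f_hat * x**3      # estimated Lie derivative, Eq. (4.6)
    L_ghat_h = theta_g_hat             # input basis function g_1(x) = 1 in this example
    assert abs(L_ghat_h) > 1e-9, "L_ghat h must be nonzero for Eq. (4.7)"
    return (-L_fhat_h + v) / L_ghat_h

# With perfect estimates the closed loop collapses to y_dot = v, i.e. Eq. (4.4) with r = 1:
theta_f, theta_g = 2.0, 0.5
x, v = 1.3, -0.7
u = certainty_equivalence_input(x, theta_f, theta_g, v)
y_dot = theta_f * x**3 + theta_g * u
print(abs(y_dot - v) < 1e-12)  # exact cancellation when the estimates equal the true values
```

With imperfect estimates the same input produces the mismatch terms collected in Eq. (4.8) below, which is what the adaptation law must drive to zero.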

Taking the derivative of y in Eq. (4.1) and substituting for u gives

$$\dot{y} = L_{f}h + \left(\frac{L_{g}h - L_{\hat{g}}h}{L_{\hat{g}}h} + 1\right)\left(-L_{\hat{f}}h + v\right)$$

$$= \sum_{i=1}^{p} \theta^{f}_{i,k(x)}(t) L_{f_{i}}h - \sum_{i=1}^{p} \hat{\theta}^{f}_{i,k(x)}(t) L_{f_{i}}h + v + \frac{\sum_{j=1}^{q} \theta^{g}_{j,k(x)}(t) L_{g_{j}}h - \sum_{j=1}^{q} \hat{\theta}^{g}_{j,k(x)}(t) L_{g_{j}}h}{\sum_{j=1}^{q} \hat{\theta}^{g}_{j,k(x)}(t) L_{g_{j}}h}\left(-\sum_{i=1}^{p} \hat{\theta}^{f}_{i,k(x)}(t) L_{f_{i}}h + v\right). \tag{4.8}$$

We define the parameter vector $\theta_{k(x)} = [\theta^{f}_{k(x)},\, \theta^{g}_{k(x)}]^{\mathrm{T}} \in \mathbb{R}^{p+q}$, the parameter estimate vector $\hat{\theta}_{k(x)} = [\hat{\theta}^{f}_{k(x)},\, \hat{\theta}^{g}_{k(x)}]^{\mathrm{T}} \in \mathbb{R}^{p+q}$ and the parameter error vector $\tilde{\theta}_{k(x)} = \theta_{k(x)} - \hat{\theta}_{k(x)}$. If the regressor vector $W(x,t)$ is defined as

$$W = \left[L_{f_{1}}h,\; \ldots,\; L_{f_{p}}h,\; L_{g_{1}}h\left(\frac{-L_{\hat{f}}h + v}{L_{\hat{g}}h}\right),\; \ldots,\; L_{g_{q}}h\left(\frac{-L_{\hat{f}}h + v}{L_{\hat{g}}h}\right)\right]^{\mathrm{T}} \in \mathbb{R}^{p+q}, \tag{4.9}$$


then Eq. (4.8) can be rewritten as

$$\dot{y} = v + W^{\mathrm{T}}\tilde{\theta}_{k(x)}. \tag{4.10}$$

Choosing v to be

$$v = \dot{y}_{\mathrm{desired}} - \lambda(y - y_{\mathrm{desired}}), \qquad \lambda > 0, \tag{4.11}$$

and defining the output error $e = y - y_{\mathrm{desired}}$ results in the error dynamics

$$\dot{e} + \lambda e = W^{\mathrm{T}}\tilde{\theta}_{k(x)}. \tag{4.12}$$

Theorem 4.1. Given a system of the form in Eq. (4.1) that is feedback linearizable with the functions f(x) and g(x) as defined in Eq. (4.2), if $y_{\mathrm{desired}}$ and its derivative are bounded, then the control input and parameter update law

$$\dot{\tilde{\theta}}_{k(x)} = -\dot{\hat{\theta}}_{k(x)} = -\Gamma_{k(x)} W e, \qquad \Gamma_{k(x)} \in \mathbb{R}^{(p+q)\times(p+q)},\ \Gamma_{k(x)} > 0,$$
$$\dot{\tilde{\theta}}_{j} = \dot{\hat{\theta}}_{j} = 0 \quad \forall j \in [1, a],\ j \neq k(x), \tag{4.13}$$

$$u = \frac{1}{L_{\hat{g}}h}\left(-L_{\hat{f}}h + \dot{y}_{\mathrm{desired}} - \lambda(y - y_{\mathrm{desired}})\right), \qquad \lambda > 0, \tag{4.14}$$

with $L_{\hat{g}}h$ nonzero for all x, yields asymptotic convergence of the error e to zero with bounded state variables x(t).

Note. Note the similarity of Eqs. (4.13) and (4.14) to Eq. (2.9).

Proof. Construct the positive-definite scalar function V(x,t):

$$V = \frac{1}{2}e^{2} + \frac{1}{2}\sum_{j=1}^{a} \tilde{\theta}_{j}^{\mathrm{T}}\Gamma_{j}^{-1}\tilde{\theta}_{j} = \frac{1}{2}e^{2} + \frac{1}{2}\sum_{j \neq k(x)} \tilde{\theta}_{j}^{\mathrm{T}}\Gamma_{j}^{-1}\tilde{\theta}_{j} + \frac{1}{2}\tilde{\theta}_{k(x)}^{\mathrm{T}}\Gamma_{k(x)}^{-1}\tilde{\theta}_{k(x)}. \tag{4.15}$$

Differentiating Eq. (4.15) gives

$$\dot{V} = e\dot{e} + \dot{\tilde{\theta}}_{k(x)}^{\mathrm{T}}\Gamma_{k(x)}^{-1}\tilde{\theta}_{k(x)} + \sum_{j \neq k(x)} \dot{\tilde{\theta}}_{j}^{\mathrm{T}}\Gamma_{j}^{-1}\tilde{\theta}_{j}$$
$$= e\left(-\lambda e + W^{\mathrm{T}}\tilde{\theta}_{k(x)}\right) + \dot{\tilde{\theta}}_{k(x)}^{\mathrm{T}}\Gamma_{k(x)}^{-1}\tilde{\theta}_{k(x)} + \sum_{j \neq k(x)} \dot{\tilde{\theta}}_{j}^{\mathrm{T}}\Gamma_{j}^{-1}\tilde{\theta}_{j}$$
$$= -\lambda e^{2} + \left(eW^{\mathrm{T}} + \dot{\tilde{\theta}}_{k(x)}^{\mathrm{T}}\Gamma_{k(x)}^{-1}\right)\tilde{\theta}_{k(x)} + \sum_{j \neq k(x)} \dot{\tilde{\theta}}_{j}^{\mathrm{T}}\Gamma_{j}^{-1}\tilde{\theta}_{j}. \tag{4.16}$$

Using the parameter adaptation algorithms in Eq. (4.13) results in the desired derivative

$$\dot{V} = -\lambda e^{2}. \tag{4.17}$$

To prove asymptotic convergence of e(t) to zero, another corollary to Barbalat's lemma is invoked.

Corollary (Narendra and Annaswamy [11]). If $e \in \mathcal{L}_2 \cap \mathcal{L}_\infty$ and $\dot{e} \in \mathcal{L}_\infty$, then $\lim_{t\to\infty} e(t) = 0$.

Eq. (4.17) shows that V is bounded and therefore establishes the $\mathcal{L}_\infty$ boundedness of e and $\tilde{\theta}_{k(x)}$. Moreover,

$$\int_{0}^{t} e^{2}(\tau)\,\mathrm{d}\tau = -\frac{1}{\lambda}\int_{0}^{t} \dot{V}(\tau)\,\mathrm{d}\tau = \frac{V(0) - V(t)}{\lambda} \leq \frac{V(0)}{\lambda} < \infty, \tag{4.18}$$

establishing the $\mathcal{L}_2$ boundedness of e(t). The derivative $\dot{e}$ is a function of e, $\tilde{\theta}_{k(x)}$ and W(x,t). The regressor vector W(x,t) is a continuous function of x by the smoothness of the functions f, g and h. The following analysis, similar to the proof of Theorem 2.1, demonstrates the boundedness of W(x,t). Since $e = y - y_{\mathrm{desired}}$ and e and $y_{\mathrm{desired}}$ are bounded, y is bounded. Therefore x(t) is bounded, using the minimum phase assumption from above. Consequently, since W(x,t) is a continuous function of x(t), it is bounded. Therefore, the derivative of e is bounded by Eq. (4.12). Using the above corollary, the error $e \to 0$ asymptotically. $\square$
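To make the update law concrete, the following is a minimal simulation sketch of the theorem's mechanism for a relative degree 1 plant with forward-Euler integration. The plant, the partition boundary, the gains, and the assumption of a known unity input gain are all hypothetical choices for illustration and are not from the paper. The variable-structure aspect is that only the parameter estimate belonging to the active partition is updated, while the others stay frozen.

```python
import numpy as np

# Hypothetical scalar plant: x_dot = theta[k(x)] * x + u, y = x, with one unknown
# drift parameter per partition and a known unity input gain. Partitions:
# S_1 = {x < 0}, S_2 = {x >= 0}; the true parameter jumps at the boundary.
theta_true = (-1.0, 2.0)                 # true parameter value in S_1 and S_2
theta_hat = np.zeros(2)                  # one estimate per partition, initially zero
gamma, lam, dt, steps = 5.0, 2.0, 1e-3, 40000

x = 0.5
for n in range(steps):
    t = n * dt
    yd, yd_dot = np.sin(t), np.cos(t)    # bounded y_desired and its derivative
    k = 0 if x < 0.0 else 1              # k(x): index of the active partition
    W = x                                # scalar regressor: L_{f_1} h = x
    e = x - yd                           # output tracking error
    u = -theta_hat[k] * W + yd_dot - lam * e   # certainty-equivalence input
    # Variable-structure gradient update: adapt only the active partition's estimate
    # and freeze the rest. The sign is chosen so that V = e**2/2 + err**2/(2*gamma)
    # decreases for this example's conventions (e = y - y_desired, estimate
    # subtracted in u).
    theta_hat[k] += dt * gamma * W * e
    x += dt * (theta_true[k] * x + u)    # forward-Euler step of the plant
print(abs(x - np.sin(steps * dt)) < 0.05)  # tracking error has converged near zero
```

In this sketch the reference crosses the partition boundary every half period, so both estimate sets are excited; the tracking error converges even though neither estimate need reach its true value without persistent excitation.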


For nonlinear systems, the persistency of excitation (PE) condition [11,16,18] states that e and $\tilde{\theta}_{k(x)}$ both converge to zero asymptotically if there exist $\alpha, \beta, \delta > 0$ such that

$$\beta I \geq \int_{t_{0}}^{t_{0}+\delta} W W^{\mathrm{T}}\,\mathrm{d}\tau \geq \alpha I \quad \forall t_{0} \geq 0 \tag{4.19}$$

for each $\{S_{k(x)}\}$. The PE condition must hold for all a subsets of the state space if all sets of parameters are to converge to their actual values. Unlike linear systems, where n/2 sinusoids are needed to identify n parameters, the PE condition is difficult to verify a priori for nonlinear systems. The hysteresis/dwell-time switching would also be employed in the nonlinear case so that, even if PE conditions were satisfied, the parameter estimates would converge to small neighborhoods about the actual values and the error would converge to a bounded region. The case considered here can be generalized to feedback linearizable systems with higher relative degree in a straightforward, although more tedious, manner. The reader is referred to the work of Sastry and Isidori [17] for the extension in the case of constant parameters.
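Although the PE condition (4.19) is hard to verify a priori for nonlinear systems, it is easy to check numerically along a recorded trajectory. The sketch below uses hypothetical regressor signals and a window length delta, approximates the windowed Gram integral, and inspects its smallest eigenvalue:

```python
import numpy as np

# Numerical check of the PE condition (4.19): accumulate the Gram matrix of the
# regressor over a window of length delta and inspect its smallest eigenvalue.
def pe_margin(W_samples, dt):
    """Smallest eigenvalue of the windowed Gram integral int W W^T dtau (Riemann sum)."""
    G = sum(np.outer(w, w) for w in W_samples) * dt
    return np.linalg.eigvalsh(G)[0]

dt, delta = 1e-3, 5.0
t = np.arange(0.0, delta, dt)
# A two-component regressor driven by two distinct frequencies is PE ...
rich = np.stack([np.sin(t), np.sin(2.0 * t)], axis=1)
# ... while a rank-one regressor (second component proportional to the first) is not.
poor = np.stack([np.sin(t), 0.5 * np.sin(t)], axis=1)
print(pe_margin(rich, dt) > 0.1, pe_margin(poor, dt) < 1e-8)
```

A positive margin over every window corresponds to the lower bound alpha I in (4.19); for the partitioned systems of this section the check would be applied separately to the trajectory segments spent in each subset of the state space.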

5. Conclusions

This paper has investigated the performance of a variable structure gradient parameter adaptation algorithm for a specific class of dynamical systems. A clear motivation, based on good physical design, was given. The algorithm was first developed for a class of SISO, full state feedback, linear systems and was extended to more general linear systems as well as SISO, feedback linearizable, nonlinear systems. All of the systems had sets of parameters that were constant or slowly time varying within predefined regions of the state space. The adaptation algorithm changes its structure as a function of the states of the system by updating one set of parameters and holding the other sets constant. For all classes of systems presented, the output error can be shown to converge, but the key requirements are knowledge of the state space partitions and detection of the system's location within the state space. While this may seem somewhat restrictive, there is a significant class of physical systems to which this scheme applies. The extension to output feedback adaptive schemes, in which the parameters vary as a function of the input or output, is straightforward [19]. Extension of the variable structure approach to general output feedback systems, where the parameters vary with nonmeasurable states, is an open problem. The approach presented in this paper has proven very effective in practice for the adaptive control of an asymmetric electrohydraulic actuator [1].

Acknowledgements

The author would like to thank an anonymous reviewer whose comments greatly improved the final version of this paper. Thanks are also due to B. Bamieh for his opinions and A.S. Morse for his comments.

References

[1] A. Alleyne, Nonlinear adaptive control of active suspensions, IEEE Trans. Control Systems Technol. 3 (1) (1995) 94-101.
[2] A.M. Annaswamy, K.S. Narendra, Adaptive control of a first-order plant with a time-varying parameter, in: Proc. 1989 American Control Conf., Pittsburgh, PA, June 1989, pp. 975-980.
[3] K.J. Astrom, B. Wittenmark, Adaptive Control, Addison-Wesley, New York, 1989.
[4] C. Byrnes, A. Isidori, A frequency domain philosophy for nonlinear systems with applications to stabilization and adaptive control, in: Proc. 23rd Conf. on Decision and Control, Las Vegas, NV, 1984, pp. 1569-1573.
[5] A.F. Filippov, Differential equations with discontinuous right hand side, Amer. Math. Soc. Transl. 42 (2) (1964) 191-231.
[6] L.R. Hunt, R. Su, G. Meyer, Design for multi-input nonlinear systems, in: R.W. Brockett, R.S. Millman, H. Sussman (Eds.), Differential Geometric Control Theory, Birkhauser, Boston, 1983, pp. 268-298.


[7] A. Isidori, Nonlinear Control Systems, 3rd ed., Springer, Berlin, 1995.
[8] I. Kanellakopoulos, P.V. Kokotovic, A.S. Morse, Systematic design of adaptive controllers for feedback linearizable systems, IEEE Trans. Automat. Control 36 (11) (1991) 1241-1253.
[9] I.M. Mareels, H.B. Penfold, R.J. Evans, Controlling nonlinear time-varying systems via Euler approximations, Automatica 28 (4) (1992) 681-696.
[10] A.S. Morse, Control using logic based switching, in: A. Isidori (Ed.), Trends in Control: A European Perspective, Springer, Berlin, 1995, pp. 69-113.
[11] K.S. Narendra, A.M. Annaswamy, Stable Adaptive Systems, Prentice-Hall, Englewood Cliffs, NJ, 1989.
[12] K.S. Narendra, J. Balakrishnan, Improving transient response of adaptive control systems using multiple models and switching, in: Proc. 32nd Conf. on Decision and Control, San Antonio, TX, 1993, pp. 1067-1072.
[13] R. Ortega, M.W. Spong, Adaptive motion control of rigid robots: a tutorial, Automatica 25 (1989) 877-888.
[14] H.B. Penfold, I.M. Mareels, R.J. Evans, Adaptively controlling nonlinear systems using trajectory approximations, Internat. J. Adaptive Control Signal Process. 6 (1992) 395-411.
[15] N. Sadegh, R. Horowitz, Stability and robustness analysis of a class of adaptive controllers for robotic manipulators, Internat. J. Robotics Res. 9 (3) (1990) 74-92.
[16] S. Sastry, M. Bodson, Adaptive Control: Stability, Convergence and Robustness, Prentice-Hall, Englewood Cliffs, NJ, 1989.
[17] S. Sastry, A. Isidori, Adaptive control of linearizable systems, IEEE Trans. Automat. Control 34 (11) (1989) 1123-1131.
[18] J.J. Slotine, W. Li, Applied Nonlinear Control, Prentice-Hall, Englewood Cliffs, NJ, 1991.
[19] G. Tao, P.V. Kokotovic, Adaptive control of plants with unknown output deadzones, Automatica 31 (2) (1995) 287-291.
[20] D.G. Taylor, P.V. Kokotovic, R. Marino, I. Kanellakopoulos, Adaptive regulation of nonlinear systems with unmodeled dynamics, IEEE Trans. Automat. Control 34 (1989) 405-412.
[21] K. Tsakalis, P.A. Ioannou, Linear Time-Varying Plants: Control and Adaptation, Prentice-Hall, Englewood Cliffs, NJ, 1993.
[22] M. Vidyasagar, Nonlinear Systems Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1993.