Systems & Control Letters 10 (1988) 41-44 North-Holland
Remarks on smooth feedback stabilization of nonlinear systems *

Kyun K. LEE and Aristotle ARAPOSTATHIS
Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712, USA

Received 3 February 1987
Revised 3 August 1987

Abstract: We investigate the problem of smooth feedback stabilization of nonlinear systems with stable uncontrolled dynamics. We present sufficient conditions for the existence of a smooth feedback stabilizing control that are also necessary in the case of linear systems. Analogous results are established for discrete time systems.
Keywords: Control systems, Stabilization, Controllability, Nonlinear systems.
* This research was supported in part by the Air Force Office of Scientific Research under Grant AFOSR-86-0029 and in part by the National Science Foundation under Grants ECS-8307547 and ECS-8412100.

1. Introduction

One of the major results in linear system theory is the fact that linear feedback can arbitrarily change the stability characteristics of a controllable system. A satisfactory nonlinear analogue of this fundamental theorem does not exist so far. Sussmann [8] noted that even the use of a continuous feedback will not in general stabilize a smooth nonlinear controllable system. It is well known that, for the existence of a control law resulting in local asymptotic stability, it is sufficient that the unstable modes of the linearized system are controllable and necessary that the linearized system has no uncontrollable unstable modes. Hence, as Brockett [3] notes, insofar as local asymptotic stability is concerned, the only difficult problems involve cases where the linearized system has some eigenvalues on the imaginary axis which correspond to uncontrollable modes. Kalouptsidis and Tsinias [5] obtain sufficient
conditions for global stabilizability of affine nonlinear systems by smooth feedback. Their work extends the results of Jurdjevic and Quinn [4], who considered the case of systems whose free dynamics are linear and the state transition matrix is unitary, to systems with stable free dynamics, under certain restrictive assumptions on the form of the associated Lyapunov function. In this communication we show that these conditions can be weakened. Theorem 1 generalizes these results and, in addition, its hypothesis becomes both necessary and sufficient in the case of a linear system. An interesting variant of this result extends to more general nonlinear systems.

Consider a smooth affine control system, in R^n,

  ẋ = f(x) + g(x)u
(1)
and suppose that x_0 ∈ R^n is a rest point of the free dynamics (uncontrolled system)

  ẋ = f(x).
(2)
The pair (f, g) is said to be smoothly stabilizable at x_0 if there is a C^∞ function u, with u(x_0) = 0, such that x_0 is an asymptotically stable critical point of the vector field f + ug. Without loss of generality, we will assume that x_0 is the origin in R^n. It is well known, from the theory of Lyapunov stability and its ramifications [7], that a necessary and sufficient condition for equation (2) to be stable at the origin is the existence of a C^∞ function V, defined on a neighborhood U of 0 in R^n, satisfying:
(i) V(0) = 0, and V(x) > 0 for x ≠ 0;
(ii) f[V] ≤ 0, where f[V] = ⟨dV, f⟩.
Furthermore, the function V can be chosen so as to make the inequality in (ii) strict if and only if stability is asymptotic, and the possibility of defining V on the whole of R^n and uniformly unbounded (i.e., V(x) → ∞ as ‖x‖ → ∞) is equivalent to stability in the large. Henceforth, any C^∞ function V satisfying (i) and (ii)
above will be called a smooth Lyapunov function for (2).

For smooth vector fields f and g on R^n, [f, g] denotes their Lie bracket and for each integer k = 0, 1, 2, ... we define inductively

  ad_f^0 g = g,   ad_f^{k+1} g = [f, ad_f^k g].   (3)

In a similar manner, if V is a C^∞ function, we let f^k[V] = f[f^{k-1}[V]]. We introduce the distribution

  D = span{f, ad_f^k g : k = 0, 1, 2, ...}.

The involutive closure of D is the accessibility distribution of (1).

2. Main results

For a C^∞ function V and a distribution D, D[V](x) = 0 or ⟨dV, D⟩(x) = 0 denotes that dV is orthogonal to D at x, i.e.,

  ⟨dV, h⟩(x) = 0   for all h ∈ D.

Theorem 1. Suppose that the C^∞ affine control system (1) has stable free dynamics at 0 and let V be an associated smooth Lyapunov function. If, on some neighborhood W of the origin,

  f^m[⟨dV, D⟩](x) = 0

for all integers m ≥ 0 implies x = 0, then the pair (f, g) is smoothly stabilizable at 0. In addition u(x) = -g[V](x) is a smooth stabilizing feedback control law. Stabilization is global provided that V is uniformly unbounded and W = R^n.

Proof. The origin is clearly a stable critical point for the closed loop system since

  (f - g[V]g)[V](x) = f[V](x) - g[V]²(x) ≤ 0.

To show that stability is asymptotic we argue by contradiction. Suppose that x̄ ≠ 0 is an ω-limit point of some trajectory of the controlled system and let γ_x̄ denote its orbit. Clearly, then, g[V] ≡ 0 and f[V] ≡ 0 on γ_x̄. It follows that γ_x̄ is an integral curve of f. Consider the identity

  ⟨dV, [f, g]⟩ = f[g[V]] - g[f[V]].   (4)

The first term on the right hand side is clearly 0 since g[V] is constant on γ_x̄. Considering the second term, observe that ⟨dV, f⟩ is maximal on γ_x̄ and therefore d⟨dV, f⟩ ≡ 0. Hence

  g[f[V]] = ⟨d⟨dV, f⟩, g⟩ = 0,

implying by (4) that ad_f g[V] = 0 on γ_x̄. Applying recursively the above argument to

  ad_f^{k+1} g[V] = f[ad_f^k g[V]] - ad_f^k g[f[V]]

for k = 1, 2, ..., we obtain ad_f^k g[V] = 0, and therefore ⟨dV, D⟩ = 0, on γ_x̄. It hence follows that f^m[⟨dV, D⟩](x̄) = 0, contradicting the original hypothesis. □

There are two main ways in which the above theorem extends the results of Kalouptsidis and Tsinias [5]. First, there is no restriction imposed on the Lyapunov function V in Theorem 1, while they require V to be of a form such that

  f[V](x) = -Σ_{i=1}^q a_i²(x) ψ_i(x),

where a_i, ψ_i are smooth and ψ_i(x) > 0 for all x ≠ 0. Secondly, they assume that dim D = n. It is a direct conclusion of Theorem 1 that, provided the set {x : dV(x) = 0} contains no positive semitrajectory, other than x = 0, of the free dynamics (2), then dim D(x) = n for all x ≠ 0 is a sufficient condition for u(x) = -g[V](x) to be a smooth stabilizing feedback control. Note, though, that the requirement dim D = n is too strong even for a linear system, as the following example shows.

Example 1. Consider the linear system
  ẋ = Ax + bu,   A = [0 0 0; 0 -1 2; 0 0 -1],   b = (1, 0, 0)^T.

Clearly

  D(x) = span{Ax, b, Ab, A²b}

and hence dim D ≤ 2. Furthermore, if we let V(x) = x_1² + x_2² + x_3², then f[V] ≤ 0 and ⟨dV, D⟩(x̄) = 0 for x̄ = (0, 1, 1)^T. On the other hand, f³[V] = -16 ≠ 0 at x̄ = (0, 1, 1)^T, and u(x) = -g[V](x) = -2x_1 is a smooth stabilizing feedback control law.
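As a quick sanity check on the computations in Example 1, one can evaluate the relevant Lie derivatives symbolically. The following is only an illustrative sketch, assuming the sympy library; the helper lie() is ours and not part of the paper.

```python
# Sketch: symbolic check of the Lie-derivative conditions of Example 1 (sympy assumed).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
x = sp.Matrix([x1, x2, x3])

A = sp.Matrix([[0, 0, 0], [0, -1, 2], [0, 0, -1]])   # data of Example 1
b = sp.Matrix([1, 0, 0])
V = x1**2 + x2**2 + x3**2

f = A * x                                  # free dynamics f(x) = Ax

def lie(h, field):
    """Lie derivative of the scalar h along the vector field 'field'."""
    return (sp.Matrix([h]).jacobian(x) * field)[0]

gV  = lie(V, b)                            # g[V] = <dV, b> = 2*x1, so u(x) = -2*x1
fV  = lie(V, f)                            # f[V]
f3V = lie(lie(fV, f), f)                   # f^3[V]

pt = {x1: 0, x2: 1, x3: 1}
print(sp.factor(fV))                       # -2*(x2 - x3)**2 <= 0
print(gV.subs(pt), fV.subs(pt), f3V.subs(pt))   # 0, 0, -16 at (0, 1, 1)
```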
In the case of the linear system

  ẋ = Ax + bu,
(5)
the distribution D takes the familiar form
  D(x) = span{Ax, b, Ab, ..., A^{n-1}b}.
Suppose now that A is a stable matrix, i.e., none of its characteristic roots has a positive real part and those on the imaginary axis are simple zeros of its minimal polynomial. As is well known, a Lyapunov function may be constructed having the structure of an algebraic form of any given (even) degree. For our purposes, consider a positive definite quadratic form V(x) = x^T V x satisfying A^T V + VA = -W with W positive semidefinite. Under these assumptions, the hypothesis of Theorem 1 becomes necessary in the linear case. In order to show necessity, we argue by contradiction. First observe that, in view of the similarity transformation A → (√V)^{-1} A (√V), where √V is the unique positive square root of V, we may, without loss of generality, assume that V = I, the identity matrix. Suppose now that, for some x ≠ 0,

  f^m[⟨dV, D⟩](x) = 0
for all m ≥ 0.
It follows that
  f^m[V](x) = (d^m/dt^m) ‖e^{At}x‖² |_{t=0} = 0,   m = 1, 2, ...,   (6a)

  (e^{At}x)^T [b, Ab, ..., A^{n-1}b] = 0.   (6b)
Let R " = Z o • Z_ be the direct sum decomposition of R" relative to A, such that the spectrum of the restriction of A on Z0 (Z_) lies on the imaginary axis (the open left half complex plane). The orbit {ea'x }, ~ r being bounded, there exists a sequence { t,, },% ~, t~ --, ~ , such that lira,, _. ~eAt"X = ~. Clearly, :~ ~ Z 0, ~ 4:0 and, in addition, (6b) implies that ~ is orthogonal to the controllable subspace. Hence, the system in (5) is not stabilizable. Note that in the linear case the stabilizing
Finally, the results of Theorem 1 may be adapted to more general nonlinear systems. Consider

  ẋ = f(x) + g(x, u)   (7)

and assume, as before, that 0 is a stable rest point of the free dynamics and let V be an associated smooth Lyapunov function. Suppose that there is a smooth α(x) such that, with ḡ(x) = g(x, α(x)), ḡ[V] ≤ 0 and, in addition, ḡ[V](x) = 0 implies ḡ(x) = 0. Then, u(x) = α(x) is a smooth stabilizing feedback for (7) provided the hypothesis of Theorem 1 is satisfied if one replaces the distribution D with D̄ = {f, ḡ}_LA, the Lie algebra generated by f and ḡ. The following example illustrates this remark.

Example 2. Consider

  ẋ_1 = x_2 + x_1²x_2 u,
  ẋ_2 = -x_1 + x_1x_2² u.

We let V(x) = ½(x_1² + x_2²) and take the feedback control law u(x) = -x_1x_2. Then with

  ḡ(x) = g(x, u(x)) = -(x_1³x_2², x_1²x_2³)^T

we obtain ⟨dV, ḡ⟩ = -(x_1⁴x_2² + x_1²x_2⁴) ≤ 0. By a straightforward computation we verify that ⟨dV, ḡ⟩ = 0 and f²[⟨dV, ḡ⟩] = 0 imply x = 0. Hence, u(x) = -x_1x_2 is a smooth stabilizing feedback law.
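A short simulation gives a picture of Example 2: it integrates the closed loop under u(x) = -x_1x_2 and monitors V along the trajectory. This is a sketch only; scipy is assumed, and the initial condition and time horizon are arbitrary illustrative choices.

```python
# Sketch: integrate the closed loop of Example 2 and monitor the Lyapunov function.
import numpy as np
from scipy.integrate import solve_ivp

def closed_loop(t, x):
    x1, x2 = x
    u = -x1 * x2                        # feedback u(x) = -x1*x2 of Example 2
    return [x2 + x1**2 * x2 * u,        # xdot = f(x) + g(x, u), with f = (x2, -x1)
            -x1 + x1 * x2**2 * u]

sol = solve_ivp(closed_loop, (0.0, 200.0), [1.5, -0.5], max_step=0.05)
V = 0.5 * (sol.y[0]**2 + sol.y[1]**2)   # V(x) = (x1^2 + x2^2)/2 along the trajectory

print(V[0], V[-1])                      # V is nonincreasing and decays toward 0
```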
3. The case of discrete time
Analogous results for affine discrete time control systems, in R^n, of the form

  x_{k+1} = f(x_k) + g(x_k) u_k
(8)
whose free dynamics x_{k+1} = f(x_k) have a stable rest point 0 ∈ R^n, and an associated differentiable Lyapunov function V such that
  ΔV(x_k) ≡ V(f(x_k)) - V(x_k) ≤ 0
(9)
are summarized in the following theorem.

Theorem 2. Suppose that the affine discrete control system (8) has stable free dynamics at 0 with a differentiable Lyapunov function V. If, on some neighborhood W of the origin,

  ΔV(f^m(x)) = 0,   ⟨dV(f^{m+1}(x)), g(f^m(x))⟩ = 0

for all m ≥ 0 implies x = 0, then the discrete time control system (8) is stabilizable at 0. In addition,

  u(x) = -β(x) ⟨dV(f(x)), g(x)⟩
is a stabilizing feedback control law, where β(x) is selected by

  β(x) = argmin_{β ∈ R} { V(y) : y = f(x) - β ⟨dV(f(x)), g(x)⟩ g(x),  y ∈ W },
where argmin denotes 'the argument which minimizes'. Stabilization is global provided that V is uniformly unbounded and W = R^n.
The proof is analogous to that of Theorem 1 and will be omitted. Note that β(x), as selected, satisfies

  ⟨dV(f(x) - β(x)⟨dV(f(x)), g(x)⟩ g(x)), g(x)⟩ = 0,

and that the resulting feedback is not necessarily a continuous function of x. In the case of the linear system x_{k+1} = Ax_k + bu_k, and associated P > 0 satisfying A^T P A - P = -W ≤ 0, the hypothesis of Theorem 2 is also necessary for asymptotic stabilization and the corresponding feedback is given by

  u(x) = -(b^T P b)^{-1} b^T P A x.
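Since β(x) is defined through a one-dimensional minimization, the feedback of Theorem 2 can be realized numerically by a scalar line search at every step. The sketch below is one possible implementation, not the paper's; scipy is assumed, the search interval for β and the linear test data are placeholder choices, and the constraint y ∈ W is dropped since W = R^n is taken here.

```python
# Sketch: Theorem 2 feedback with beta(x) chosen by minimizing V at the candidate next state.
import numpy as np
from scipy.optimize import minimize_scalar

def discrete_feedback(x, f, g, V, dV, beta_max=10.0):
    """u(x) = -beta(x)*<dV(f(x)), g(x)>, beta minimizing V(f(x) - beta*<dV(f(x)), g(x)>*g(x))."""
    fx, gx = f(x), g(x)
    s = dV(fx) @ gx                                     # <dV(f(x)), g(x)>
    res = minimize_scalar(lambda beta: V(fx - beta * s * gx),
                          bounds=(0.0, beta_max), method='bounded')
    return -res.x * s

# Placeholder linear data: x_{k+1} = A x_k + b u_k with V(x) = x^T x.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([1.0, 0.0])
f = lambda x: A @ x
g = lambda x: b
V = lambda x: float(x @ x)
dV = lambda x: 2.0 * x

x = np.array([1.0, 1.0])
for _ in range(20):
    x = f(x) + g(x) * discrete_feedback(x, f, g, V, dV)
print(np.linalg.norm(x))                                # close to 0 after a few steps
```

For the linear placeholder data above, the line search reproduces the closed-form feedback u(x) = -(b^T P b)^{-1} b^T P A x with P = I.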
The following example illustrates Theorem 2.

Example 3. Consider

  x_1(k+1) = -x_2(k),
  x_2(k+1) = x_1(k) + u.

Let V = x^T x. Then ΔV(f^m(x)) and ⟨dV(f^{m+1}(x)), g(f^m(x))⟩ vanish simultaneously only at x = 0. Thus, the system is stabilizable and we obtain u(x) = -x_1 as a stabilizing feedback law.
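The vanishing condition of Theorem 2 can also be checked directly for Example 3 by iterating the free dynamics. A minimal sketch, assuming numpy and using the data of Example 3 above:

```python
# Sketch: evaluate Delta V(f^m(x)) and <dV(f^{m+1}(x)), g(f^m(x))> for Example 3, V(x) = x^T x.
import numpy as np

A = np.array([[0.0, -1.0], [1.0, 0.0]])    # free dynamics x(k+1) = A x(k)
b = np.array([0.0, 1.0])

def hypothesis_terms(x, m_max=4):
    """List the pairs (Delta V(f^m(x)), <dV(f^{m+1}(x)), g(f^m(x))>) for m = 0, ..., m_max."""
    out, y = [], np.array(x, dtype=float)
    for _ in range(m_max + 1):
        delta_V = float(y @ (A.T @ A - np.eye(2)) @ y)   # here Delta V is identically 0
        inner   = float(2.0 * (A @ y) @ b)               # <dV(f(y)), g(y)> with dV(z) = 2z
        out.append((delta_V, inner))
        y = A @ y
    return out

print(hypothesis_terms([0.0, 0.0]))        # every term vanishes at the origin
print(hypothesis_terms([1.0, 0.0]))        # away from 0 some inner product is nonzero
```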
4. Conclusions

From the point of view of classical Lyapunov theory, the central point in the analysis of the stabilization properties of a control system is the construction of a smooth feedback law as well as a Lyapunov function associated with it. For an arbitrary nonlinear system, this is clearly a very difficult task. In the particular case of stable free dynamics, the uncontrolled system provides a Lyapunov function candidate V and the feedback control law. We should note that higher derivatives of a Lyapunov function have been utilized in the past [6,9], in classification studies of the trajectories of a differential equation in a neighborhood of an equilibrium point. Finally, related work on the subject, but in a somewhat different direction, has recently appeared in [1,2].
References

[1] D. Aeyels, Stabilization of a class of nonlinear systems by a smooth feedback control, Systems Control Lett. 5 (1985) 289-294.
[2] E.H. Abed and J.-H. Fu, Local feedback stabilization and bifurcation control, I. Hopf bifurcation, Systems Control Lett. 7 (1986) 11-17.
[3] R.W. Brockett, Asymptotic stability and feedback stabilization, in: R.W. Brockett, R.S. Millman and H.J. Sussmann, Eds., Differential Geometric Control Theory (Birkhäuser, Boston, MA, 1983) pp. 181-191.
[4] V. Jurdjevic and J.P. Quinn, Controllability and stability, J. Differential Equations 28 (1978) 381-389.
[5] N. Kalouptsidis and J. Tsinias, Stability improvement of nonlinear systems by feedback, IEEE Trans. Automat. Control 29 (1984) 364-367.
[6] M.B. Kudaev, The use of Lyapunov functions for investigating the behavior of trajectories of systems of differential equations, Soviet Math. Dokl. 3 (1962) 1802-1804.
[7] J. Kurzweil, The converse second Lyapunov's theorem concerning the stability of motion, Amer. Math. Soc. Transl. 24 (1963) 19-77.
[8] H.J. Sussmann, Subanalytic sets and feedback control, J. Differential Equations 31 (1979) 31-52.
[9] J.A. Yorke, Invariance for ordinary differential equations, Math. Systems Theory 1 (1967) 353-372.