Computers Math. Applic. Vol. 29, No. 3, pp. 31-36, 1995
Pergamon. Copyright © 1995 Elsevier Science Ltd. Printed in Great Britain. All rights reserved.
0898-1221/95 $9.50 + 0.00
0898-1221(94)00226-6

A New Decomposition Method for Stochastic Dynamic Stabilization

P. FLORCHINGER*
URA CNRS No. 399, Département de Mathématiques, UFR MIM, Université de Metz
Ile du Saulcy, F 57045 Metz Cedex, France

(Received October 1993; revised and accepted May 1994)

Abstract—The aim of this paper is to prove that the stabilizability of a controlled nonlinear stochastic system can be deduced from the stabilizability of a "reduced" system for which the existence of a stabilizing feedback law can be decided more easily. This result extends the well-known deterministic "chain of integrators lemma" by using a different reduction scheme for the original stochastic system.

Keywords—Nonlinear stochastic control system, Feedback law, Stochastic stability, Stochastic dynamic stabilization.

1. INTRODUCTION

The dynamic stabilization of deterministic control systems has been studied in recent years by Sontag-Sussmann [1], Kokotovic-Sussmann [2], Tsinias [3,4] and Coron-Praly [5]. In these papers, the stabilizability of a controlled system is deduced from the stabilizability of a system of smaller dimension given by the "chain of integrators" method. Furthermore, using a new reduction scheme, of which the one used in the "chain of integrators lemma" is a particular case, Outbib-Sallet [6] have extended some of the results obtained in the papers listed above to more general systems. By contrast, one can find only a few results on the stabilization of controlled nonlinear stochastic systems in the literature. A stochastic version of the "chain of integrators lemma" for the dynamic stabilization of stochastic differential control systems is proved in [7], following the scheme of the proof of Theorem 1 in [4] for deterministic control systems (see also [8]). Our aim in this paper is to extend the main result proved in [6] to the dynamic stabilization of stochastic control systems.
Using a decomposition scheme analogous to the one proposed in the last cited paper, we prove that the stabilizability of a stochastic control system can be deduced from that of a "reduced" system for which the existence of a stabilizing feedback law can be decided more easily. Note that the reduction scheme used in the "stochastic chain of integrators lemma" is a particular case of the result proved in this paper. This paper is divided into two parts, organized as follows. In Section 2, we introduce the class of stochastic differential control systems and the associated notion of stochastic stability that we deal with in this paper. In Section 3, we state and prove the main result of the paper, and we conclude by providing an illustrative example.

*Also: INRIA Lorraine, Projet CONGE, CESCOM, Technopôle de Metz 2000, 4 Rue Marconi, F 57070 Metz, France.


2. PROBLEM STATEMENT

Denote by $(\Omega, \mathcal{F}, P)$ a complete probability space, and by $w$ a standard $\mathbb{R}^m$-valued Wiener process defined on this space. Let $(\mathcal{F}_t)_{t \geq 0}$ be the complete right-continuous filtration generated by the Wiener process $w$. Consider the stochastic process $x_t \in \mathbb{R}^n$ solution of the multi-input stochastic differential equation, written in the sense of Itô,

$$dx_t = F(x_t, u)\,dt + G(x_t, u)\,dw_t, \tag{1}$$

where
1. $x_0$ is given in $\mathbb{R}^n$,
2. $u$ is an $\mathbb{R}^p$-valued control law,
3. $F$ and $G$ are Lipschitz functionals mapping $\mathbb{R}^n \times \mathbb{R}^p$ into $\mathbb{R}^n$ and $\mathbb{R}^{n \times m}$, respectively, such that
   - $F(0,0) = G(0,0) = 0$,
   - there exists a nonnegative constant $K$ such that for any $x \in \mathbb{R}^n$ and $u \in \mathbb{R}^p$,
$$|F(x,u)| + |G(x,u)| \leq K \left( 1 + |x| + |u| \right).$$

The stochastic differential system (1) is said to be stabilizable by means of a $C^r$ feedback law ($r \in \mathbb{N} \cup \{\infty, \omega\}$) if there exists a function $u$ in $C^r(\mathbb{R}^n, \mathbb{R}^p)$ such that the equilibrium solution $x_t \equiv 0$ of the closed-loop system

$$dx_t = F(x_t, u(x_t))\,dt + G(x_t, u(x_t))\,dw_t \tag{2}$$

is globally asymptotically stable in probability. (For a complete presentation of stochastic stability theory, we refer the reader to [9] or [10].) A useful tool for studying the stabilizability of stochastic differential systems is the "stochastic chain of integrators lemma," which we recall in the following lines.

THEOREM 2.1. (cf. [7]) Assume that the stochastic differential system
$$dx_t = F(x_t, u)\,dt + G(x_t)\,dw_t$$
is stabilizable by means of a $C^\infty$ feedback law. Then the stochastic differential system
$$dx_t = F(x_t, y_t)\,dt + G(x_t)\,dw_t, \qquad dy_t = u\,dt$$
is also stabilizable by means of a $C^\infty$ feedback law.

The proof of the latter theorem uses the stochastic version of Artstein's theorem [11], which relies on the notion of stochastic control Lyapunov function. In this paper, we want to prove a stabilization result while avoiding the use of such a result.
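The stability notion used in the definition above can be explored numerically. The following sketch is not part of the original paper; the function name and parameters are illustrative. It applies the Euler-Maruyama scheme to the simple scalar Itô system $dx_t = -x_t\,dt + x_t\,dw_t$ (a closed-loop system of the kind considered here) and shows sample paths contracting toward the equilibrium:

```python
import math
import random

def simulate_closed_loop(x0, T=10.0, dt=1e-3, seed=0):
    """Euler-Maruyama simulation of the scalar Ito system
    dx_t = -x_t dt + x_t dw_t, started from x0."""
    rng = random.Random(seed)
    x = x0
    for _ in range(int(T / dt)):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment over dt
        x += -x * dt + x * dw
    return x

# Paths started away from the equilibrium shrink toward 0; repeating the
# experiment over many seeds would give a Monte Carlo picture of
# asymptotic stability in probability.
for x0 in (1.0, -2.0, 5.0):
    print(x0, "->", simulate_closed_loop(x0))
```

Repeated runs over independent seeds, together with decreasing initial conditions, illustrate (but of course do not prove) the "globally asymptotically stable in probability" property.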

3. THE MAIN RESULT

Before we state the main result of the paper, we introduce the following class of functions, needed in the sequel.

NOTATION 3.1. Denote by $\mathcal{C}$ the class of smooth nonnegative functions $\phi$ mapping $\mathbb{R}^n \times \mathbb{R}^p$ into $\mathbb{R}$ such that
1. $\phi(0,y) = 0$ if and only if $y = 0$.
2. For any fixed $x \in \mathbb{R}^n$, the function $y \mapsto \phi(x,y)$ is proper (i.e., the set $\{y \in \mathbb{R}^p : \phi(x,y) \leq L\}$ is compact for each $L > 0$).


Further, assume that for any $(x,y) \in \mathbb{R}^n \times \mathbb{R}^p$, the functions $F$ and $G$ can be decomposed as
$$F(x,y) = f(x, l(x,y))\,h(x,y), \tag{3}$$
where $l$ is a function mapping $\mathbb{R}^n \times \mathbb{R}^p$ into $\mathbb{R}^p$ and $h$ is a nonnegative function mapping $\mathbb{R}^n \times \mathbb{R}^p$ into $\mathbb{R}$ such that
$$h(x,y) = 0 \implies y = 0,$$
and
$$G(x,y) = g(x,y)\,\sqrt{h(x,y)}. \tag{4}$$

REMARK 3.2. Note that such a decomposition of the functions $F$ and $G$ always exists, since equalities (3) and (4) hold with $f = F$, $g = G$, $l(x,y) = y$ and $h(x,y) \equiv 1$.

Then, one can prove the following result, which extends [6, Theorem 2.1] to the stochastic case.

THEOREM 3.3. Assume that the following conditions are satisfied:

1. There exists a stabilizing $C^\infty$ feedback law $\hat{u}$ mapping $\mathbb{R}^n$ into $\mathbb{R}^p$ for the stochastic differential system
$$dx_t = f(x_t, u)\,dt + g(x_t, u)\,dw_t. \tag{5}$$
2. There exist a matrix function $M(x,y)$ with values in $\mathcal{M}_{p \times p}(\mathbb{R})$ and a function $\phi$ in $\mathcal{C}$ such that for any $(x,y) \in \mathbb{R}^n \times \mathbb{R}^p$,
$$\left( l(x,y) - \hat{u}(x) \right) h(x,y) = M(x,y)\,\frac{\partial \phi}{\partial y}(x,y).$$
3. There exists a $C^\infty$ function $k$ mapping $\mathbb{R}^n \times \mathbb{R}^p$ into $\mathbb{R}^p$ such that for any $(x,y) \in \mathbb{R}^n \times \mathbb{R}^p$,
$$\beta(x,y) = \left\langle f(x,\hat{u}(x)), \frac{\partial \phi}{\partial x}(x,y) \right\rangle h(x,y) + \frac{h(x,y)}{2} \sum_{i,j=1}^{n} \left( g(x)g(x)^* \right)_{i,j} \frac{\partial^2 \phi}{\partial x_i \partial x_j}(x,y) + \left\langle k(x,y), \frac{\partial \phi}{\partial y}(x,y) \right\rangle \leq 0.$$
4. If $V$ denotes a Lyapunov function associated with the closed-loop system
$$dx_t = f(x_t, \hat{u}(x_t))\,dt + g(x_t, \hat{u}(x_t))\,dw_t, \tag{6}$$
the set
$$A = \left\{ (x,y) \in \mathbb{R}^n \times \mathbb{R}^p \ \text{s.t.}\ h(x,y) = 0,\ \frac{\partial \phi}{\partial y}(x,0) = 0,\ M(x,0)^* G(x,0)^*\, \frac{\partial \left( V(x) + \phi(x,0) \right)}{\partial x} = k(x,0) \right\},$$
where, for any $(x,y) \in \mathbb{R}^n \times \mathbb{R}^p$, $G(x,y)$ reads
$$G(x,y) = \int_0^1 \frac{\partial f}{\partial u} \big( x,\ t\,l(x,y) + (1-t)\,\hat{u}(x) \big)\,dt,$$
is either empty or reduced to $\{(0,0)\}$.


Then, the function $u$ mapping $\mathbb{R}^n \times \mathbb{R}^p$ into $\mathbb{R}^p$ defined by
$$u(x,y) = -M(x,y)^* G(x,y)^* \left( \frac{\partial V}{\partial x}(x) + \frac{\partial \phi}{\partial x}(x,y) \right) + k(x,y) - \frac{\partial \phi}{\partial y}(x,y) \tag{7}$$
is a stabilizing $C^\infty$ feedback law for the stochastic differential system
$$dx_t = f(x_t, l(x_t,y_t))\,h(x_t,y_t)\,dt + g(x_t,y_t)\,\sqrt{h(x_t,y_t)}\,dw_t, \qquad dy_t = u\,dt. \tag{8}$$
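For scalar systems ($n = p = 1$), the feedback law (7) can be assembled numerically from the ingredients of Theorem 3.3. The sketch below is not from the paper: the function `feedback_u` and its argument names are assumptions, and the partial derivatives of $V$ and $\phi$ are approximated by central differences rather than computed symbolically.

```python
def feedback_u(x, y, M, G, V, phi, k, eps=1e-6):
    """Scalar (n = p = 1) sketch of the feedback law (7):
    u(x,y) = -M(x,y) G(x,y) (dV/dx + dphi/dx) + k(x,y) - dphi/dy,
    with the partial derivatives approximated by central differences."""
    dV_dx = (V(x + eps) - V(x - eps)) / (2 * eps)
    dphi_dx = (phi(x + eps, y) - phi(x - eps, y)) / (2 * eps)
    dphi_dy = (phi(x, y + eps) - phi(x, y - eps)) / (2 * eps)
    return -M(x, y) * G(x, y) * (dV_dx + dphi_dx) + k(x, y) - dphi_dy

# Data matching the concluding example of this section, where f(x,u) = x + u
# (so the averaged Jacobian G of Theorem 3.3 is identically 1):
example = dict(
    M=lambda x, y: 1.0,
    G=lambda x, y: 1.0,
    V=lambda x: 0.5 * x * x,
    phi=lambda x, y: 0.5 * y**6 + 2 * x * y**3 + 5 * x * x,
    k=lambda x, y: 2 * x,
)
print(feedback_u(0.5, 0.5, **example))  # ≈ -5.59375, i.e. -3y^5 - 2y^3 - 6xy^2 - 9x
```

With these data the numerical feedback agrees, up to discretization error, with the closed-form law $u(x,y) = -3y^5 - 2y^3 - 6xy^2 - 9x$ derived in the example at the end of the section.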

REMARK 3.4.
1. The existence of a Lyapunov function $V$ associated with the closed-loop system (6) is given by the stochastic converse Lyapunov theorem proved by Kushner [12].
2. Assuming that the function $G$ in (1) depends only on the $x$ variable, and setting, for any $(x,y) \in \mathbb{R}^n \times \mathbb{R}^p$, $\phi(x,y) = \frac{1}{2}\|y - \hat{u}(x)\|^2$, $M(x,y) = I$, and $k(x,y) = D\hat{u}(x)\,F(x,y)$, Theorem 3.3 reduces to the stochastic chain of integrators lemma (see [7]).
3. Conditions 1 and 4 in Theorem 3.3 can be weakened by assuming the existence of a "weak" Lyapunov function associated with the closed-loop system (6), and adding conditions under which one can invoke the stochastic La Salle theorem (see [13]) to obtain asymptotic stability in probability.

PROOF OF THEOREM 3.3. Let $W$ be the function mapping $\mathbb{R}^n \times \mathbb{R}^p$ into $\mathbb{R}$ defined by

$$W(x,y) = V(x) + \phi(x,y).$$
Then, one can prove easily that $W$ is a Lyapunov function on $\mathbb{R}^n \times \mathbb{R}^p$, and straightforward computations give, for any $(x,y) \in \mathbb{R}^n \times \mathbb{R}^p$,
$$F(x,y) = f(x, \hat{u}(x))\,h(x,y) + G(x,y) \left( l(x,y) - \hat{u}(x) \right) h(x,y).$$
Thus, invoking Assumption 2, one has
$$F(x,y) = f(x, \hat{u}(x))\,h(x,y) + G(x,y)\,M(x,y)\,\frac{\partial \phi}{\partial y}(x,y).$$
Therefore, denoting by $\mathcal{L}$ the infinitesimal generator of the stochastic differential system (8), one has

$$\mathcal{L}W(x,y) = \left\langle f(x,\hat{u}(x)), \nabla V(x) \right\rangle h(x,y) + \frac{h(x,y)}{2} \sum_{i,j=1}^{n} \left( g(x)g(x)^* \right)_{i,j} \left( \frac{\partial^2 V}{\partial x_i \partial x_j}(x) + \frac{\partial^2 \phi}{\partial x_i \partial x_j}(x,y) \right) + \left\langle f(x,\hat{u}(x)), \frac{\partial \phi}{\partial x}(x,y) \right\rangle h(x,y) + \left\langle M(x,y)^* G(x,y)^*\, \frac{\partial W}{\partial x}(x,y) + u(x,y),\ \frac{\partial \phi}{\partial y}(x,y) \right\rangle.$$

Hence, since for any $(x,y) \in \mathbb{R}^n \times \mathbb{R}^p$,
$$u(x,y) = -M(x,y)^* G(x,y)^* \left( \frac{\partial V}{\partial x}(x) + \frac{\partial \phi}{\partial x}(x,y) \right) + k(x,y) - \frac{\partial \phi}{\partial y}(x,y),$$


one has
$$\mathcal{L}W(x,y) = \left\{ \left\langle f(x,\hat{u}(x)), \nabla V(x) \right\rangle + \frac{1}{2} \sum_{i,j=1}^{n} \left( g(x)g(x)^* \right)_{i,j} \frac{\partial^2 V}{\partial x_i \partial x_j}(x) \right\} h(x,y) + \beta(x,y) - \left| \frac{\partial \phi}{\partial y}(x,y) \right|^2 = LV(x)\,h(x,y) + \beta(x,y) - \left| \frac{\partial \phi}{\partial y}(x,y) \right|^2, \tag{9}$$
where $L$ denotes the infinitesimal generator of the closed-loop system (6). Thus, for any $(x,y) \in \mathbb{R}^n \times \mathbb{R}^p$,
$$\mathcal{L}W(x,y) \leq 0,$$
which implies that the equilibrium solution of the closed-loop system deduced from (8) when the control law $u$ is given by (7) is stable in probability. Furthermore, invoking the stochastic La Salle theorem (see [13]), one can deduce that the stochastic process $x_t$ tends in probability to the largest invariant set whose support is contained in the locus $\mathcal{L}W(x_t, y_t) = 0$ for all $t \geq 0$. On the other hand, according to (9), one deduces easily that, if $\mathcal{L}W(x_t, y_t) = 0$ for all $t \geq 0$, one has
$$LV(x_t)\,h(x_t, y_t) = 0, \tag{10}$$
and
$$\frac{\partial \phi}{\partial y}(x_t, y_t) = 0 \tag{11}$$

for any $t \geq 0$. In the following, we prove that $x_t \equiv 0$ and $y_t \equiv 0$ by considering cases according to the value of the function $h$ at $(x_0, y_0)$.
1. If $h(x_0, y_0) \neq 0$, equality (10) implies that $LV(x_0) = 0$, and therefore, since $V$ is a Lyapunov function for the stochastic differential system (5), one has $x_0 = 0$. Then, invoking (11) and the fact that $\phi \in \mathcal{C}$, one deduces that $y_0 = 0$, which gives the result.
2. If $h(x_0, y_0) = 0$, one has, according to the hypotheses on the function $h$, that $y_0 = 0$. Hence, either $\frac{d}{dt} y_t \big|_{t=0} = 0$ or $\frac{d}{dt} y_t \big|_{t=0} \neq 0$. In the first case, one can conclude easily by invoking Assumption 4, whereas in the second case, since $t \mapsto y_t$ is not constant, one can prove easily that $h(x_t, y_t) \neq 0$ for some $t > 0$, which is impossible by arguing as previously.

Therefore, $x_0 = y_0 = 0$, and according to the stochastic La Salle theorem, $x_t$ tends to 0 in probability, which implies that $(0,0)$ is asymptotically stable in probability. This concludes the proof of Theorem 3.3.

REMARK 3.5. Theorem 3.3 is stated in terms of $C^\infty$ feedback laws; nevertheless, it remains true if one uses $C^r$ feedback laws, $r \in \mathbb{N} \cup \{\infty, \omega\}$.

EXAMPLE. Let $x_0$ be given in $\mathbb{R}$, and denote by $(x_t, y_t) \in \mathbb{R}^2$ the solution of the stochastic differential system,

$$x_t = x_0 + \int_0^t 3 y_s^2 \left( x_s + y_s^3 \right) ds + \int_0^t \sqrt{3}\,|y_s|\,x_s\,dw_s, \qquad y_t = \int_0^t u_s\,ds. \tag{12}$$

For any $x, y, u \in \mathbb{R}$, define the functions $f$, $g$, $h$ and $l$ of the reduction scheme for the stochastic differential system (12) by
$$f(x,u) = x + u, \qquad g(x,y) = x, \qquad h(x,y) = 3y^2, \qquad l(x,y) = y^3.$$


Then, the "reduced" system associated with the stochastic differential system (12) reads
$$dx_t = (x_t + u)\,dt + x_t\,dw_t. \tag{13}$$

Furthermore, one can prove easily that the function $\hat{u}$ mapping $\mathbb{R}$ into $\mathbb{R}$ defined by
$$\hat{u}(x) = -2x$$
is a stabilizing $C^\infty$ feedback law for the "reduced" system (13), and that the function $V$ defined on $\mathbb{R}$ by
$$V(x) = \frac{1}{2}\,x^2$$
is an associated Lyapunov function. On the other hand, setting, for any $(x,y) \in \mathbb{R}^2$,
$$\phi(x,y) = \frac{1}{2}\,y^6 + 2 x y^3 + 5 x^2,$$
and
$$M(x,y) = 1, \qquad k(x,y) = 2x,$$
one gets
$$\beta(x,y) = -3 x^2 y^2 \leq 0$$
and

$$A = \{(0,0)\}.$$
Therefore, applying Theorem 3.3, one can deduce that the function $u$ defined on $\mathbb{R}^2$ by
$$u(x,y) = -3 y^5 - 2 y^3 - 6 x y^2 - 9 x$$
is a stabilizing $C^\infty$ feedback law for the stochastic differential system (12).

REFERENCES
1. E.D. Sontag and H.J. Sussmann, Further comments on the stability of the angular velocity of a rigid body, Systems and Control Letters 12, 213-217 (1988).
2. P.V. Kokotovic and H.J. Sussmann, A positive real condition for global stabilization of nonlinear systems, Systems and Control Letters 13, 125-133 (1989).
3. J. Tsinias, Sufficient Lyapunov-like conditions for stabilization, Mathematics of Control, Signals and Systems 2, 343-357 (1989).
4. J. Tsinias, A local stabilization theorem for interconnected systems, Systems and Control Letters 18, 429-434 (1992).
5. J.M. Coron and L. Praly, Adding an integrator for the stabilization problem, Systems and Control Letters 17, 84-104 (1991).
6. R. Outbib and G. Sallet, Reduction principle for the stabilization problem, Systems and Control Letters (to appear).
7. P. Florchinger, On the stabilization of interconnected stochastic systems (submitted).
8. P. Florchinger, Lyapunov-like techniques for stochastic stability, SIAM Journal on Control and Optimization (to appear).
9. R.Z. Khasminskii, Stochastic Stability of Differential Equations, Sijthoff & Noordhoff, Alphen aan den Rijn, (1980).
10. L. Arnold, Stochastic Differential Equations: Theory and Applications, Wiley, New York, (1974).
11. P. Florchinger, A universal formula for the stabilization of control stochastic differential equations, Stochastic Analysis and Applications 11 (2), 155-162 (1993).
12. H.J. Kushner, Converse theorems for stochastic Liapunov functions, SIAM Journal on Control 5 (2), 228-233 (1967).
13. H.J. Kushner, Stochastic stability, In Stability of Stochastic Dynamical Systems, Lecture Notes in Mathematics (R. Curtain, Editor), Vol. 294, pp. 97-124, Springer-Verlag, Berlin, (1972).
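As a sanity check on the worked example, the short script below (not part of the original paper) verifies numerically, at randomly sampled points, the identities behind the construction: condition 2 of Theorem 3.3, the value $\beta(x,y) = -3x^2y^2$, and the closed-form feedback obtained from (7). All data are as in the example above.

```python
# Numerical check of the worked example: f(x,u) = x + u, g(x,y) = x,
# h(x,y) = 3y^2, l(x,y) = y^3, u_hat(x) = -2x, V(x) = x^2/2,
# phi(x,y) = y^6/2 + 2xy^3 + 5x^2, M = 1, k(x,y) = 2x.
import random

def close(a, b, tol=1e-9):
    return abs(a - b) <= tol * (1.0 + abs(a) + abs(b))

rng = random.Random(42)
for _ in range(200):
    x, y = rng.uniform(-2, 2), rng.uniform(-2, 2)
    h = 3 * y**2
    dphi_dx = 2 * y**3 + 10 * x
    dphi_dy = 3 * y**5 + 6 * x * y**2
    # Condition 2: (l(x,y) - u_hat(x)) h(x,y) = M dphi/dy, with M = 1.
    assert close((y**3 + 2 * x) * h, dphi_dy)
    # Condition 3: beta = {<f(x,u_hat), dphi/dx> + (1/2) g^2 d2phi/dx2} h
    # + k dphi/dy, with f(x, u_hat(x)) = -x and d2phi/dx2 = 10.
    beta = (-x * dphi_dx + 0.5 * x * x * 10.0) * h + (2 * x) * dphi_dy
    assert close(beta, -3 * x**2 * y**2) and beta <= 1e-9
    # Feedback (7) with G = integral of df/du = 1 and dV/dx = x:
    u = -(x + dphi_dx) + 2 * x - dphi_dy
    assert close(u, -3 * y**5 - 2 * y**3 - 6 * x * y**2 - 9 * x)
print("example identities verified")
```

The check confirms, in particular, that the Hessian term of condition 3 is what turns the cross terms of $\phi$ into the sign-definite expression $-3x^2y^2$.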