Nonlinear Analysis, Theory, Methods & Applications, Vol. 8, No. 10, pp. 1227-1239, 1984. Printed in Great Britain. © 1984 Pergamon Press Ltd.

CONTINUOUS AND IMPULSIVE CONTROL OF DIFFUSION PROCESSES IN R^N


B. PERTHAME
E.N.S., 45, rue d'Ulm, 75230 Paris Cedex 05, France

(Received 1 October 1983; received for publication 23 January 1984)

Key words and phrases: Optimal stochastic control, impulsive control, Hamilton-Jacobi-Bellman equations, quasi-variational inequalities, semi-concave functions, dynamic programming principle.

INTRODUCTION

THE STUDY of the optimal cost function for controlled diffusion processes leads to Hamilton-Jacobi-Bellman (H.J.B.) equations and to quasi-variational inequalities (Q.V.I.). The first case corresponds to continuous controls and was studied by Krylov [6] and by Lions [7]. On the other hand, Bensoussan and Lions [1] introduced Q.V.I. that correspond to the control of processes which "jump" at a sequence of stopping times. Here, we combine these two kinds of controls in the general situation of possibly degenerate diffusion processes. In this case the dynamic programming principle leads us to think that the optimal cost function is the solution of a nonlinear degenerate second order equation:

    Max( sup_{v ∈ V} (A(v)u − f(v)), u − Mu ) = 0   a.e. in R^N,   (1)

where V is the set of controls and M is an operator defined on C_b(R^N) by:

    Mu(x) = k + inf_{ξ ≥ 0} [c_0(ξ) + u(x + ξ)],   (2)

where k > 0 is fixed and ξ ≥ 0 means ξ = (ξ_1, . . ., ξ_N) with ξ_i ≥ 0.

If u is regular (say C²) then the derivation of (1) is classical. But in general the optimal cost function is not in C². In this paper we will prove that, although u does not have the desired regularity, (1) holds in some sense (see Section 1 for precise results). Furthermore we prove that the optimal cost function is the maximum sub-solution of (1) and that (1) has a unique solution. Let us recall that this type of result has been proved in [7, 10] for H.J.B. equations (i.e. (1) with k = +∞). In this paper we will often use the methods which were introduced in [7]. We will also use the techniques of Krylov [6] to prove some regularity for the optimal cost functions. Finally we will need a result we have already proved (see [12, 13]): in the case of nondegenerate diffusion processes the equation (1) has a unique solution in W^{2,∞}(R^N). It is easily shown that this solution is the optimal cost function for the associated control problem (see Section 3 for such a result).

This paper is ordered as follows: first we give some precision on the stochastic problem (we introduce the equations of the controlled process and the optimal cost function, and we state the main results that we will prove in this paper); next we prove some regularity for the optimal cost function (this is principally a generalisation of regularity results proved in [6]; indeed here we deal with noncontinuous random processes, which are only right-continuous and left-limited); finally we have put together in Section 3 the proofs of the theorems announced in Section 1. We will come back in a future study to the case where the diffusion process is stopped on the boundary of an open set of R^N.
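As a concrete illustration of the intervention operator M in (2), here is a minimal numerical sketch, not part of the paper: the grid, the jump cost c_0 and the sample function u below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the operator (2): Mu(x) = k + inf_{xi >= 0} [c_0(xi) + u(x + xi)].
# Grid, cost c_0 and the sample u are illustrative assumptions, not data from the paper.
def intervention_operator(u, x, k=1.0, c0=lambda xi: 0.5 * xi):
    """Evaluate Mu on a 1-d grid x, the infimum being taken over jumps xi >= 0 that stay on the grid."""
    Mu = np.empty_like(u)
    for i in range(len(x)):
        xi = x[i:] - x[i]              # admissible jump sizes xi >= 0
        Mu[i] = k + np.min(c0(xi) + u[i:])
    return Mu

x = np.linspace(0.0, 5.0, 101)
u = np.cos(x) + 1.5                    # bounded sample function standing in for a cost u in C_b(R^N)
print(intervention_operator(u, x)[:3])
```

Since c_0(0) = 0, taking ξ = 0 shows Mu ≤ k + u, which is consistent with the constraint u ≤ Mu appearing in (1).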

1. THE CONTROL PROBLEM

1.1. Notations and assumptions
Let V be a separable metric space (the set of values of the control), let a_ij(x, v), b_i(x, v), f(x, v) be functions from R^N × V into R and let λ > 0 be a constant. We assume some regularity on these functions:

    φ(·, v) ∈ W^{2,∞}(R^N) and ||φ(·, v)||_{W^{2,∞}(R^N)} ≤ K, for all v ∈ V and φ = a_ij, b_i, f,   (3)

    φ(x, ·) ∈ C(V), ∀x ∈ R^N, φ = a_ij, b_i, f.   (4)

Finally let c_0 be a continuous sub-additive function on (R_+)^N with values in R_+ such that c_0(0) = 0 (it defines (2)), and we shall denote:

    a = σσ*,   (5)

and:

    A(v) = −a_ij(x, v) ∂²/∂x_i∂x_j − b_i(x, v) ∂/∂x_i + λ

(where we use, here and everywhere below, the repeated index convention), so that (1) and (2) are now entirely defined.

1.2. The optimal cost problem
We will call "admissible system" A the collection of:
(a) a probability space (Ω, F, F_t, P) with a right-continuous increasing filtration of complete sub-σ-fields;
(b) a Wiener process w_t in R^N, F_t-adapted;
(c) two sequences θ¹ ≤ θ² ≤ . . . ≤ θⁿ ≤ . . . and ξ¹, ξ², . . ., where θⁿ is an increasing sequence of stopping times (θⁿ < ∞ a.e.) such that θⁿ → ∞ a.e., and ξⁿ is a sequence of random variables in (R_+)^N, F_{θⁿ}-measurable;
(d) v(t, ω) ∈ V, a progressively measurable process (roughly speaking, v(t, ω) is the control).

Then, let y_x^0(t), y_x^n(t), 0 ≤ n < ∞, be the solutions of:

    dy_x^0(t) = σ(y_x^0(t), v(t)) dw_t + b(y_x^0(t), v(t)) dt,   t ≥ 0,
    y_x^0(0) = x,

    dy_x^n(t) = σ(y_x^n(t), v(t)) dw_t + b(y_x^n(t), v(t)) dt,   t ≥ θⁿ,
    y_x^n(θⁿ) = y_x^{n−1}(θⁿ) + ξⁿ,

and:

    dy_x(t) = σ(y_x(t), v(t)) dw_t + b(y_x(t), v(t)) dt + Σ_{n≥1} ξⁿ δ_{θⁿ}(t),
    y_x(0) = x,

where this equation means that y_x(t) is a left-limited, right-continuous process such that:

    y_x(t) = x + ∫_0^t σ(y_x(s), v(s)) dw_s + ∫_0^t b(y_x(s), v(s)) ds + (ξ¹ + . . . + ξ^{n_t}),   (6)

where n_t = max{n : θⁿ ≤ t}; one then shows easily that:

    y_x(t) = y_x^n(t),   θⁿ ≤ t < θ^{n+1}.

We can now define the cost function for such a system:

    J(x, A) = E{ ∫_0^∞ f(y_x(t), v(t)) e^{−λt} dt + Σ_{n≥1} (k + c_0(ξⁿ)) e^{−λθⁿ} },   (7)

and in the following we will study the "optimal cost function":

    u(x) = inf_A J(x, A).   (8)
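Formulas (6)-(8) translate directly into simulation: for one fixed admissible system one can generate paths of y_x(t) by an Euler scheme, add the jumps ξⁿ at the times θⁿ, and average the discounted cost (7). The sketch below, which is in no way part of the paper, does this in dimension N = 1 for a constant control and a deterministic impulse schedule; every coefficient and numerical value is an assumption chosen only for illustration, and it estimates J(x, A), not the infimum u(x) of (8).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Monte Carlo estimate of the cost (7) for one fixed admissible system, N = 1.
# sigma, b, f, c0, the impulse schedule (thetas, xis) and all values are assumptions.
def sample_cost(x, T=10.0, dt=1e-2, lam=1.0, k=1.0,
                sigma=lambda y, v: 0.5, b=lambda y, v: -y, f=lambda y, v: y**2,
                c0=lambda xi: 0.5 * xi, thetas=(2.0, 5.0), xis=(1.0, 1.0), v=0.0):
    """One Euler path of y_x(t) with impulses, and its discounted cost (integral truncated at T)."""
    y, t, cost = x, 0.0, 0.0
    jumps = list(zip(thetas, xis))
    while t < T:
        cost += f(y, v) * np.exp(-lam * t) * dt                       # running cost f e^{-lam t} dt
        y += sigma(y, v) * np.sqrt(dt) * rng.standard_normal() + b(y, v) * dt
        t += dt
        while jumps and t >= jumps[0][0]:                             # impulse at theta^n: jump by xi^n
            theta, xi = jumps.pop(0)
            y += xi
            cost += (k + c0(xi)) * np.exp(-lam * theta)               # pay (k + c0(xi)) e^{-lam theta}
    return cost                                                       # tail beyond T is neglected

print(np.mean([sample_cost(1.0) for _ in range(200)]))                # Monte Carlo estimate of J(x, A)
```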

1.3. Main results
We denote by D²_−(R^N) the cone of semi-concave functions, i.e. of functions u in C_b(R^N) such that u(x) − (1/2)C|x|² is concave for some constant C > 0. This is equivalent to:

    ∂²u/∂χ² ≤ C in D'(R^N),   ∀χ ∈ R^N, |χ| = 1.

In particular we know that if u is semi-concave then u is Lipschitz. Then we have:

THEOREM 1. Under assumptions (3), (4) there exists some constant λ_0 such that, if λ > λ_0, then:
(i) u ∈ D²_−(R^N);
(ii) A(v)u ∈ L^∞(R^N) and sup_{v ∈ V} ||A(v)u||_{L^∞} < +∞;
(iii) u is a solution of (1).

Remark. The constant λ_0 is computed explicitly in [6, 10] and it depends only on N, ||Da||_{L^∞}, ||Db||_{L^∞}.

Moreover we have the following uniqueness result for solutions of (1):

THEOREM 2. Under assumptions (3), (4):
(i) If w ∈ C_b(R^N) satisfies:

    A(v)w ≤ f(v) in D'(R^N), ∀v ∈ V,
    w ≤ Mw,   (9)

then w ≤ u.
(ii) If w ∈ W^{1,∞}(R^N) satisfies: A(v)w ∈ L^∞(R^N) and sup_{v ∈ V} ||A(v)w||_{L^∞(R^N)} < +∞, ∃C such that Δw ≤ C, and if w is a solution of (1), then w = u.

Let us recall also that it is proved in [10] that if we assume, under the assumptions of theorem 1, a partial nondegeneracy condition on an open set O ⊂ R^N:

    ∃p ∈ {1, . . ., N}, ∃γ > 0, ∀x ∈ O, ∀v ∈ V, a_pp(x, v) ≥ γ,   (10)

then u is more regular on O (we refer to [10] for the precise statement).
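The semi-concavity criterion ∂²u/∂χ² ≤ C above can be tested numerically through second difference quotients, which is exactly the quantity bounded in the proof of corollary 2 below. A small sketch in dimension one, with an assumed sample function (not the paper's u):

```python
import numpy as np

# Estimate a semi-concavity constant via second difference quotients
# (u(x + h) + u(x - h) - 2 u(x)) / h^2 <= C.  The sample function is an assumption.
def semiconcavity_constant(u, h=1e-3, xs=np.linspace(-5.0, 5.0, 2001)):
    quotients = (u(xs + h) + u(xs - h) - 2.0 * u(xs)) / h**2
    return float(np.max(quotients))

u = lambda x: -np.abs(x) + np.cos(x)   # semi-concave: downward kink plus a smooth bounded part
print(semiconcavity_constant(u))        # approximately 1, coming from the cos part; the kink only lowers it
```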

2. REGULARITY OF u

Our purpose in this section is to prove that, under the assumptions of theorem 1, the cost functions and u are semi-concave. This will be done by showing some regularity (in x) of the processes y_x.

2.1. Derivability
We first prove that J(x, A) ∈ C¹_b(R^N), and so we study "the derivability" of the processes y_x(t). That is why we introduce z_x(t), the solution of:

    z_x(t) = χ + ∫_0^t σ'_x(y_x(s), v(s)) z_x(s) dw_s + ∫_0^t b'_x(y_x(s), v(s)) z_x(s) ds,   (11)

the existence of which is ensured by the linearity of the coefficients and because σ'_x and b'_x are bounded. Throughout this section we will suppose, in order to simplify notations, that N = 1. We then have:

LEMMA 1. (y_{x+h}(t) − y_x(t))/h → z_x(t) in L²(Ω) as h → 0.

Proof. We denote Z_h(s) = (y_{x+h}(s) − y_x(s))/h and

    A_h(s) = (σ(y_{x+h}) − σ(y_x))/(y_{x+h} − y_x),   B_h(s) = (b(y_{x+h}) − b(y_x))/(y_{x+h} − y_x)

(we omit the dependence of y_x on the control); then:

    Z_h(t) = 1 + ∫_0^t A_h Z_h dw_s + ∫_0^t B_h Z_h ds,

    z_x(t) = 1 + ∫_0^t A_0 z_x dw_s + ∫_0^t B_0 z_x ds,

where A_0(s) = σ'_x(y_x(s)) and B_0(s) = b'_x(y_x(s)). Because of [4, Chapter 2, Section 7] it is enough to check that

    |B_h − B_0| + |A_h − A_0| → 0 in probability, for each s,

but as b and σ are in C¹_b, the result follows from:

    E{(y_{x+h}(t) − y_x(t))²} → 0 as h → 0,

and to show this result we remark that:

    E{(y_{x+h}(t) − y_x(t))²} ≤ 4|h|² + 4E{ (∫_0^t [σ(y_{x+h}) − σ(y_x)] dw_s)² } + 4E{ (∫_0^t [b(y_{x+h}) − b(y_x)] ds)² }
                              ≤ 4|h|² + 4(1 + t)K² ∫_0^t E{ |y_{x+h}(s) − y_x(s)|² } ds,

and one easily concludes by Gronwall lemma.

LEMMA 2. ∀m ∈ N, ∃C = C(m, K) such that: E{z_x(t)^{2m}} ≤ e^{Ct}, ∀t ≥ 0.

Proof. By Itô's formula:

    z_x(t)^{2m} = 1 + ∫_0^t [ 2m z_x^{2m−1}(s) b'(y_x) z_x(s) + m(2m − 1) z_x^{2m−2}(s) σ'(y_x)² z_x(s)² ] ds + ∫_0^t 2m z_x^{2m−1}(s) σ'(y_x) z_x(s) dw_s,

and so:

    E[z_x(t)^{2m}] ≤ 1 + K(2m² + m) ∫_0^t E{z_x^{2m}(s)} ds,

and one concludes by Gronwall lemma.


PROPOSITION 1. If λ > λ_1, for each admissible system A, |DJ(x, A)| ≤ C, where C only depends on K.

COROLLARY 1. u(x) ∈ W^{1,∞}(R^N).

This corollary is immediately deduced from:

    |u(x + hχ) − u(x)| ≤ sup_A |J(x + hχ, A) − J(x, A)| ≤ C|h|   for any χ ∈ R^N, |χ| = 1.

Proof of proposition 1. We directly prove that J'(x, A) is given by:

    J_x(x, A) = E{ ∫_0^∞ f'_x(y_x(s), v(s)) z_x(s) e^{−λs} ds }.

Indeed we have:

    (J(x + h, A) − J(x, A))/h = E{ ∫_0^∞ [f(y_{x+h}(s), v(s)) − f(y_x(s), v(s))]/h · e^{−λs} ds },

and proposition 1 follows now from lemma 2 and the convergence that has already been proved. In the same way we obtain the second order estimates of the next subsection.
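Equation (11) and the formula for J_x in the proof of proposition 1 suggest a pathwise ("tangent process") estimator of the derivative of the cost: simulate z_x along with y_x and average f'_x(y_x) z_x e^{−λs}. The sketch below is only an illustration under assumed coefficients (constant control, no impulses, N = 1); none of the numerical choices come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Joint Euler scheme for y_x (state) and z_x (tangent process as in (11)), and the pathwise
# derivative estimate  J_x ~ E int_0^T f'_x(y_x(s)) z_x(s) e^{-lam s} ds.
# sigma, b, f and all numerical values are illustrative assumptions; impulses are left out.
def pathwise_derivative(x, T=10.0, dt=1e-2, lam=1.0,
                        sigma=lambda y: 0.3 * y, dsigma=lambda y: 0.3,
                        b=lambda y: -y, db=lambda y: -1.0,
                        df=lambda y: 2.0 * y):             # f(y) = y^2  =>  f'(y) = 2y
    y, z, t, grad = x, 1.0, 0.0, 0.0
    while t < T:
        grad += df(y) * z * np.exp(-lam * t) * dt
        dW = np.sqrt(dt) * rng.standard_normal()
        y, z = (y + sigma(y) * dW + b(y) * dt,
                z + dsigma(y) * z * dW + db(y) * z * dt)    # z solves the linear equation (11)
        t += dt
    return grad

print(np.mean([pathwise_derivative(1.0) for _ in range(500)]))
```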

2.2. Estimates in W^{2,∞}
First we assume that the functions a_ij, b_i and f are in C²_b, and we check that the estimates proved below only depend on the norm in W^{2,∞} of the coefficients, so that the general result is easily deduced from [6].

LEMMA 3. Let η_x(t) be the solution of:

    η_x(t) = ∫_0^t [ σ''_{xx}(y_x(s), v(s)) z_x(s)² + σ'_x(y_x(s), v(s)) η_x(s) ] dw_s + ∫_0^t [ b''_{xx}(y_x(s), v(s)) z_x(s)² + b'_x(y_x(s), v(s)) η_x(s) ] ds;   (12)

then:

    (y_{x+h}(t) + y_{x−h}(t) − 2y_x(t))/h² → η_x(t) as h → 0,

and moreover:

    E{η_x(t)^{2m}} ≤ C e^{ct}, ∀t ≥ 0 (where c = c(K)).

We don't prove again this lemma since the method is very close to the one given above; for a related proof see [4, 6]. We deduce:


PROPOSITION 2. There exists λ_0 (depending only on N, ||Da_ij||_{L^∞}, ||Db_i||_{L^∞}) such that for λ > λ_0 and for each admissible system A we have: J(x, A) ∈ C²_b(R^N); moreover there exists some constant C(K) such that:

    ||D²J(x, A)||_{L^∞} ≤ C.

COROLLARY 2. u ∈ D²_−(R^N).

Proof. We have, for χ ∈ R^N, |χ| = 1:

    (u(x + hχ) + u(x − hχ) − 2u(x))/h² ≤ sup_A [ J(x + hχ, A) + J(x − hχ, A) − 2J(x, A) ]/h² ≤ C

(by proposition 2), and since the left-hand side of the inequality converges to ∂²u/∂χ² (in D'(R^N)), the result is proved.

3. PROOF OF THEOREMS 1 AND 2

We divide this section in three parts: (1) dynamic programming; (2) proof of theorem 1; (3) uniqueness results (theorem 2).

3.1. Dynamic programming
We begin by proving the "dynamic programming" identity for impulse control. This is an identity satisfied by the optimal cost function: let τ be any stopping time (it may depend on A); then we have:

    u(x) = inf_{A adm} E{ ∫_0^τ f(y_x(s), v(s)) e^{−λs} ds + Σ_n (k + c_0(ξⁿ)) e^{−λθⁿ} χ_{θⁿ ≤ τ} + u(y_x(τ)) e^{−λτ} }.   (13)

To prove (13) we begin by deducing it from (1) in the case where the matrices a(v) are uniformly elliptic, and then generalize it to the degenerate case.

Remark. One easily checks that one also has:

    u(x) = inf_{A adm} E{ ∫_0^τ f(y_x(s), v(s)) e^{−λs} ds + Σ_n (k + c_0(ξⁿ)) e^{−λθⁿ} χ_{θⁿ < τ} + u(y_x^−(τ)) e^{−λτ} },   (13')

where y_x^−(t) = lim_{s↑t} y_x(s).

(i) The uniformly elliptic case. Here we assume the matrices a(x, v) are uniformly elliptic. In this case we have already proved (see [12]) that (1) has a unique solution in W^{2,∞}(R^N). Applying Itô's formula as in [6, 2] to this function, it is easy to prove that it is the optimal cost function. We now prove (13) in this case: denote by t_R^n the first exit time of y_x^n from the ball B_R (for a fixed admissible system) and apply the generalized Itô's formula (see [6]); then we have:

    E{ χ_{θⁿ ≤ τ ∧ t_R^n} u(y_x^n(θⁿ ∧ τ ∧ t_R^n)) exp[−λ(θⁿ ∧ τ ∧ t_R^n)] }
      ≤ E{ ∫_{θⁿ ∧ τ ∧ t_R^n}^{θ^{n+1} ∧ τ ∧ t_R^n} f(y_x^n(s), v(s)) e^{−λs} ds
           + χ_{θ^{n+1} ≤ τ ∧ t_R^n} [ k + c_0(ξ^{n+1}) + u(y_x^{n+1}(θ^{n+1} ∧ τ ∧ t_R^n)) ] exp[−λ(θ^{n+1} ∧ τ ∧ t_R^n)]
           + χ_{τ ∧ t_R^n < θ^{n+1}} u(y_x^n(τ ∧ t_R^n)) exp[−λ(τ ∧ t_R^n)] }.   (14)

But t_R^n → +∞ as R → +∞, and thus we obtain:

    E{ χ_{θⁿ ≤ τ} u(y_x^n(θⁿ ∧ τ)) exp[−λ(θⁿ ∧ τ)] }
      ≤ E{ ∫_{θⁿ ∧ τ}^{θ^{n+1} ∧ τ} f(y_x^n(s), v(s)) e^{−λs} ds
           + χ_{θ^{n+1} ≤ τ} [ k + c_0(ξ^{n+1}) + u(y_x^{n+1}(θ^{n+1} ∧ τ)) ] exp[−λ(θ^{n+1} ∧ τ)]
           + χ_{τ < θ^{n+1}} u(y_x(τ)) e^{−λτ} };

then, adding these inequalities, we get:

    u(x) ≤ E{ ∫_0^τ f(y_x(s), v(s)) e^{−λs} ds + Σ_n (k + c_0(ξⁿ)) e^{−λθⁿ} χ_{θⁿ ≤ τ} + u(y_x(τ)) e^{−λτ} }

for each admissible system. To get the other inequality, we need systems that allow us to approach (13) within ε. Therefore we fix ε > 0 and define a measurable function v̂(x) from R^N into V such that on {u < Mu} we have:

    A(v̂(x)) u(x) − f(x, v̂(x)) ≥ −ε,

(recall that u satisfies (1), and see [3] for the existence of v̂). Then let us choose some probability space on which there exists some solution of the following equation (see [6, 14] for existence results on stochastic differential equations with measurable coefficients):

    dy⁰(s) = σ[y⁰(s), v̂(y⁰(s))] dw_s + b[y⁰(s), v̂(y⁰(s))] ds,   y⁰(0) = x.

Then denote by θ¹ the first time y⁰ enters C = {u = Mu} and ξ¹ = ξ̂¹(y⁰(θ¹)), where ξ̂¹ is chosen measurable such that Mu(x) ≥ k + c_0(ξ̂¹(x)) + u(x + ξ̂¹(x)) − ε/2 (ξ¹ is anything measurable on {θ¹ = +∞}), and by induction:

    dyⁿ(s) = σ[yⁿ(s), v̂(yⁿ(s))] dw_s + b[yⁿ(s), v̂(yⁿ(s))] ds,   s ≥ θⁿ,
    yⁿ(θⁿ) = y^{n−1}(θⁿ) + ξⁿ,

where θⁿ is the first time y^{n−1} enters C and ξⁿ = ξ̂ⁿ(y^{n−1}(θⁿ)), where ξ̂ⁿ is chosen measurable such that

    Mu(x) ≥ k + c_0(ξ̂ⁿ(x)) + u(x + ξ̂ⁿ(x)) − ε/2ⁿ.

Formula (14) is now written, for this particular system, with the reverse inequality up to an error of order ε:

    E{ χ_{θⁿ ≤ τ ∧ t_R} u(yⁿ(θⁿ ∧ τ ∧ t_R)) exp[−λ(θⁿ ∧ τ ∧ t_R)] }
      ≥ E{ ∫_{θⁿ ∧ τ ∧ t_R}^{θ^{n+1} ∧ τ ∧ t_R} f(yⁿ(s), v̂(yⁿ(s))) e^{−λs} ds
           + χ_{θ^{n+1} ≤ τ ∧ t_R} [ k + c_0(ξ^{n+1}) + u(y^{n+1}(θ^{n+1} ∧ τ ∧ t_R)) − ε/2^{n+1} ] exp[−λ(θ^{n+1} ∧ τ ∧ t_R)]
           + χ_{τ ∧ t_R < θ^{n+1}} u(yⁿ(τ ∧ t_R)) exp[−λ(τ ∧ t_R)] } − ε E{ ∫_{θⁿ ∧ τ ∧ t_R}^{θ^{n+1} ∧ τ ∧ t_R} e^{−λs} ds }.

As before, letting R → +∞, adding these formulas and letting ε → 0, we get the desired inequality, and (13) is now proved.

Remark. A precise justification of Itô's formula for y_x^n is given in [2]; it is also proved there that θⁿ → +∞ a.e. To be rigorous we should have taken bounded θⁿ, but one easily checks that this does not affect the proof.
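The construction just given is, in effect, an algorithm: follow the feedback control v̂ on the continuation region {u < Mu}, and, when the process reaches {u = Mu}, jump by a ξ that (almost) achieves the infimum in Mu. Purely as an illustration of how such a quasi-optimal impulse policy can be read off a computed cost function on a grid (all names, tolerances and sample data are assumptions, not the paper's):

```python
import numpy as np

# Read the quasi-optimal impulse policy off a computed cost u on a 1-d grid:
# intervene only where u = Mu (up to a tolerance) and jump to a minimiser of c0(xi) + u(x + xi).
# For the true optimal cost one has u <= Mu, with equality on the intervention region.
# Grid, cost c0, tolerance and the stand-in u are assumptions for the sketch.
def impulse_policy(u, x, k=0.3, c0=lambda xi: 0.5 * xi, tol=1e-8):
    Mu, jump = np.empty_like(u), np.zeros_like(u)
    for i in range(len(x)):
        xi = x[i:] - x[i]                                  # admissible jumps xi >= 0
        vals = k + c0(xi) + u[i:]
        j = int(np.argmin(vals))
        Mu[i], jump[i] = vals[j], xi[j]
    intervene = u >= Mu - tol                              # the set {u = Mu}
    return intervene, np.where(intervene, jump, 0.0)

x = np.linspace(0.0, 5.0, 201)
u = 2.0 * np.exp(-x) + 0.2                                 # stand-in for a computed cost function
intervene, jump = impulse_policy(u, x)
print(int(intervene.sum()), float(jump.max()))
```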

(ii) The general case. We now turn to the proof of (13) in the general case, by approximating y_x(t) by the process y_x^ε(t) associated to the diffusion matrices σ_ε(x, v) = (σ | √ε I_N) ∈ M_{N×2N}, as y_x(t) is associated to σ by Section 1.2 (note that a_ε = σ_ε σ_ε* = σσ* + εI_N, so the corresponding operators are uniformly elliptic). We then set:

    u_ε(x) = inf_A E{ ∫_0^∞ f(y_x^ε(t), v(t)) e^{−λt} dt + Σ_{n≥1} (k + c_0(ξⁿ)) e^{−λθⁿ} }.

First we prove that y_x^ε(t) tends to y_x(t) as ε goes to 0. To this purpose we denote σ̃ = (σ | 0) ∈ M_{N×2N} and we remark that the cost functions associated to σ or σ̃ are the same; and so we may identify admissible systems for σ and σ_ε. Then we have:

    y_x^ε(t) − y_x(t) = ∫_0^t (σ_ε(y_x^ε) − σ̃(y_x)) dw_s + ∫_0^t (b(y_x^ε) − b(y_x)) ds,

and:

    E{ (y_x^ε(t) − y_x(t))² } ≤ K(t + 1) ∫_0^t E{ |y_x^ε(s) − y_x(s)|² } ds + εt;

then Gronwall lemma shows that: E{ (y_x^ε(t) − y_x(t))² } ≤ Cε e^{Kt} (where C = C(K)). We deduce that:

    |u − u_ε|(x) ≤ sup_A E{ ∫_0^∞ |f(y_x^ε(s), v(s)) − f(y_x(s), v(s))| e^{−λs} ds }
                ≤ sup_A E{ ∫_0^∞ K C |y_x^ε − y_x|(s) e^{−λs} ds } ≤ C√ε,


so that u_ε → u uniformly in R^N, for λ large enough, as ε → 0. But now (13) holds for u_ε and y_x^ε, and if

    V_ε(τ) = ∫_0^τ f(y_x^ε(s), v(s)) e^{−λs} ds + Σ_n (k + c_0(ξⁿ)) e^{−λθⁿ} χ_{θⁿ ≤ τ} + u_ε(y_x^ε(τ)) e^{−λτ},

we have easily:

    |E{V_ε(τ)} − E{V(τ)}| ≤ sup_A E{ ∫_0^∞ |f(y_x^ε(s), v(s)) − f(y_x(s), v(s))| e^{−λs} ds + |u_ε(y_x^ε(τ)) − u(y_x(τ))| e^{−λτ} }.

But we know:

    sup_A E{ ∫_0^∞ |f(y_x^ε(s), v(s)) − f(y_x(s), v(s))| e^{−λs} ds } → 0   as ε → 0,

and:

    |u_ε(y_x^ε(τ)) − u(y_x(τ))| e^{−λτ} ≤ |u_ε(y_x^ε(τ)) − u(y_x^ε(τ))| e^{−λτ} + |u(y_x^ε(τ)) − u(y_x(τ))| e^{−λτ}.

As |u_ε(y) − u(y)| ≤ Cε, it remains to prove that:

    sup_A E{ |u(y_x^ε(τ)) − u(y_x(τ))| e^{−λτ} } → 0   as ε → 0;

but we have:

    E{ |u(y_x^ε(τ)) − u(y_x(τ))| e^{−λτ} } ≤ C e^{−λT} + E{ |u(y_x^ε(τ)) − u(y_x(τ))| e^{−λτ} χ_{τ ≤ T} }.

Since u is bounded and Lipschitz (corollary 1), we conclude easily.

3.2. Proof of theorem 1
We are now able to prove theorem 1; we successively show:
(i) u is a sub-solution of (1), i.e.:

    A(v)u ≤ f(v) in D'(R^N), ∀v ∈ V,
    u ≤ Mu in R^N,

and this will yield A(v)u ∈ L^∞(R^N);
(ii) u is a solution of (1).

(i) u is a sub-solution of (1). We first show that u satisfies:

    A(v)u ≤ f(v) in D'(R^N).   (15)

To this end let us fix a system A as follows: v(t) = v, constant, choose ξⁱ = 0 and θⁱ large enough; from (13) with τ = t we get:

    (1/t)[ u(x) − E{u(y_x(t)) e^{−λt}} ] ≤ (1/t) E{ ∫_0^t f(y_x(s), v) e^{−λs} ds },

and when t → 0, the second member converges to f(x, v) (at least in D'(R^N)) and the first member converges in D'(R^N) to A(v)u (cf. [7]). And the inequality (15) is proved.

Next we show that u ≤ Mu: indeed, choose v(t) = v, constant, θ¹ = τ = t, ξ¹ = ξ ≥ 0 and θⁱ large enough (i ≥ 2); by (13) we get:

    u(x) ≤ Ct + (k + c_0(ξ)) e^{−λt} + E{ u(y_x(t) + ξ) e^{−λt} };

as t → 0, we get:

    u(x) ≤ k + c_0(ξ) + u(x + ξ)   for every ξ ≥ 0,

and the second inequality is proved.

Finally we prove that A(v)u ∈ L^∞(R^N). But this is easily deduced from (15) and the fact that u ∈ D²_−(R^N), by [7]; moreover:

    sup_{v ∈ V} ||A(v)u||_{L^∞} < +∞.

(ii) u is a solution of (1). Using the dynamic programming principle, one shows as in [9] that u is a viscosity solution of:

    Max( sup_{v ∈ V} (A(v)u − f(v)), u − ψ ) = 0   with ψ = Mu,

and using the fact that A(v)u ∈ L^∞ and u ∈ D²_−, this implies as in [9] that (1) is satisfied.
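Before turning to uniqueness, here is a hedged numerical illustration of theorem 1: on a grid, the Q.V.I. (1) can be approximated by iterating the stopping-problem operator T introduced in Section 3.3 below (solve an obstacle problem with obstacle Mw, then update the obstacle). Everything in the sketch (grid, coefficients, control set, starting point, boundary treatment) is an assumption made only for illustration; it is not the paper's construction.

```python
import numpy as np

# Sketch of a grid approximation of the Q.V.I. (1) by the iteration w_n = T w_{n-1}, where
# T w is the value of the optimal stopping problem with obstacle Mw (cf. (17)-(18) below),
# computed by an explicit value-iteration scheme.  All choices below are illustrative assumptions.
x = np.linspace(-3.0, 3.0, 121)
h = x[1] - x[0]
lam, k = 1.0, 0.3
V = [-1.0, 0.0, 1.0]                                   # finite control set standing in for V
sigma = lambda y, v: 0.7
b = lambda y, v: v - 0.5 * y
f = lambda y, v: y**2 + 0.1 * v**2
c0 = lambda xi: 0.4 * xi                               # sub-additive jump cost, c0(0) = 0

def M(u):
    """Mu(x) = k + inf_{xi >= 0} [c0(xi) + u(x + xi)], jumps restricted to the grid."""
    return np.array([k + np.min(c0(x[i:] - x[i]) + u[i:]) for i in range(len(x))])

def T(w, sweeps=3000):
    """Approximate Tw: optimal stopping with obstacle Mw; grid ends treated as reflecting."""
    obstacle = M(w)
    z = obstacle.copy()
    dt = 0.4 * h**2 / max(sigma(0.0, v)**2 for v in V)  # stability-motivated pseudo-time step
    for _ in range(sweeps):
        zp = np.append(z[1:], z[-1])                    # z(x + h), copied at the right end
        zm = np.insert(z[:-1], 0, z[0])                 # z(x - h), copied at the left end
        best = np.full_like(z, np.inf)
        for v in V:
            s2, drift = sigma(x, v)**2, b(x, v)
            cont = f(x, v) * dt + np.exp(-lam * dt) * (
                z + 0.5 * s2 * dt / h**2 * (zp - 2.0 * z + zm)
                  + drift * dt / (2.0 * h) * (zp - zm))
            best = np.minimum(best, cont)
        z = np.minimum(obstacle, best)                  # stop (pay Mw) or continue
    return z

w = np.zeros_like(x)                                    # crude starting point for the iteration
for n in range(6):
    w = T(w)                                            # w_n = T w_{n-1}; converges to an approximation of u
print(float(w[len(x) // 2]))
```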

3.3. Proof of theorem 2
(i) If w ∈ C_b(R^N) satisfies (9) then w ≤ u. To prove this result we will introduce a sequence u_n ∈ D²_−(R^N) which approximates u. It was first used in [1] to prove existence of solutions of Q.V.I. and is given by:

    u_n(x) = inf E{ ∫_0^∞ f(y_x(s), v(s)) e^{−λs} ds + Σ_{i=1}^n [k + c_0(ξⁱ)] e^{−λθⁱ} },   (16)

where the infimum is taken over all systems A_n which are admissible with n jumps, i.e. such that θ¹ ≤ θ² ≤ . . . ≤ θⁿ and θ^{n+1} = θ^{n+2} = . . . = +∞. We also define an operator T from C_b(R^N) into itself by:

    Tw(x) = inf E{ ∫_0^θ f(y_x(s), v(s)) e^{−λs} ds + Mw(y_x(θ)) e^{−λθ} },   (17)

where the infimum is taken over all systems A' composed of: a canonical probability space (Ω, F, F_t, P) with a Wiener process w_t (F_t-adapted), a progressively measurable process v(t) and an F_t-stopping time θ. Since Mw ∈ C_b(R^N), we know by [9, 10] that Tw is the unique viscosity solution z of:

    Max( sup_{v ∈ V} (A(v)z − f(v)), z − Mw ) = 0.   (18)

Moreover, if w ∈ D²_−(R^N) we know by [11, 13] that Mw ∈ D²_−(R^N), and so Tw ∈ D²_−(R^N),

    sup_{v ∈ V} ||A(v)Tw − f(v)||_{L^∞(R^N)} < +∞,

and (18) holds a.e. in R^N. Next we take w_n = Tw_{n−1} and we have:

LEMMA 4. u_n = w_n.

The proof of this lemma is the same as in Section 3.1 and so we only indicate the main steps. First, if the diffusion matrices are uniformly elliptic, the result is deduced by an easy induction from Itô's formula and the choice of suitable admissible systems. Next we can show that the functions u_n^ε (where y_x^ε was built in Section 3.1(ii)) satisfy: u_n^ε → u_n in C_b(R^N) as ε → 0. In the same way we can introduce w_n^ε, the solution of (17) with y_x replaced by y_x^ε, and then: w_n^ε → w_n in C_b(R^N) as ε → 0. As for ε > 0 we know that u_n^ε = w_n^ε, we get u_n = w_n in passing to the limit.

We can now prove (i). First we remark that u_n decreases with n (by formula (16)) and converges to u: indeed u_n^ε → u_ε as n → ∞ in C_b(R^N) by [12], and u_ε → u as ε → 0 by Section 3.1(ii), so that u_n → u in C_b(R^N). Now if w ∈ C_b(R^N) is a subsolution of (1) we have: w ≤ u_0 (see [9], or remark that w ≤ u_0^ε and pass to the limit), and so Mw ≤ Mu_0, and then w ≤ w_1. By induction we get w ≤ u_n for each n ≥ 0, and thus w ≤ u.

(ii) Uniqueness. We will use the method introduced by Hanouzet and Joly [5] to prove uniqueness for Q.V.I. To this end we first need to deal with positive functions and so, adding some nonnegative and large enough constant C to f and C/λ to w or u, we may suppose: f ≥ 0, u ≥ 0, w ≥ 0. Moreover if v ∈ D²_−(R^N) is nonnegative, Tv is nonnegative. Then we can show the following properties of T:

LEMMA 5. T is increasing and concave.

We don't prove this lemma, which is immediately deduced from the fact that M itself is increasing and concave.

Now take some μ ∈ ]0, 1[ such that:

    μ ||Mu_0||_{L^∞} ≤ k,

and let w_1, w_2 be nonnegative functions of D²_−(R^N) with w_1 ≤ u_0; then we have:

LEMMA 6. If there exists α ∈ [0, 1] such that:

    w_1 − w_2 ≤ α w_1,

then:

    Tw_1 − Tw_2 ≤ α(1 − μ) Tw_1.

Proof. The assumption may be written: (1 − α)w_1 + α·0 ≤ w_2 and, by lemma 5, it is enough to show:

    T(0) ≥ μ Tw_1;

since w_1 ≤ u_0, it is enough to prove:

    T(0) ≥ μ Tu_0,

i.e., comparing the obstacles in (17),

    μ Mu_0 ≤ M(0) = k,

and this is clear since μ Mu_0 ≤ μ ||Mu_0||_{L^∞} ≤ k.

Finally we may complete the uniqueness proof: we apply lemma 6 with w_1 = u, w_2 = w, and since they are nonnegative fixed points of T we get successively:

    u − w ≤ (1 − μ) u,   u − w ≤ (1 − μ)^p u,   ∀p ≥ 0.

Since we know by (i) that w ≤ u, we get u = w, and theorem 2 is proved.

Acknowledgement. The author wishes to thank CEREMADE, University of Paris IX Dauphine, in particular for material assistance.
REFERENCES
1. BENSOUSSAN A. & LIONS J. L., Nouvelle formulation de problèmes de contrôle impulsionnel et applications, C.r. hebd. Séanc. Acad. Sci. Paris 276, 1189-1192 (1973).
2. BENSOUSSAN A. & LIONS J. L., Contrôle Impulsionnel et Inéquations Quasi-variationnelles, Dunod, Paris (1982).
3. FLEMING W. H. & RISHEL R., Deterministic and Stochastic Optimal Control, Springer, Berlin (1975).
4. GIHMAN I. I. & SKOROHOD A. V., Stochastic Differential Equations, Springer, Berlin (1972).
5. HANOUZET B. & JOLY J. L., Convergence uniforme des itérés définissant la solution faible d'une inéquation quasi-variationnelle, C.r. hebd. Séanc. Acad. Sci. Paris 286, 735-738 (1978).
6. KRYLOV N. V., Controlled Diffusion Processes, Springer, Berlin (1980).
7. LIONS P. L., Control of diffusion processes in R^N, Communs pure appl. Math. 34, 121-147 (1981).
8. LIONS P. L., Optimal control of diffusion processes and Hamilton-Jacobi-Bellman equations, Part 1: The dynamic programming principle and applications, Communs partial diff. Eqns (to appear).
9. LIONS P. L., Optimal control of diffusion processes and Hamilton-Jacobi-Bellman equations, Part 2: Viscosity solutions and uniqueness, Communs partial diff. Eqns (to appear).
10. LIONS P. L., Optimal control of diffusion processes and Hamilton-Jacobi-Bellman equations, Part 3: Regularity of the optimal cost function, in Nonlinear Partial Differential Equations and Applications, Collège de France Seminar Vol. V, Pitman, London (1983).
11. LIONS P. L. & PERTHAME B., Une remarque sur les opérateurs non linéaires intervenant dans les I.Q.V., Ann. Fac. Sci. Toulouse (to appear).
12. PERTHAME B., Inéquations quasi-variationnelles et équations de Hamilton-Jacobi-Bellman dans R^N, Ann. Fac. Sci. Toulouse (to appear).
13. PERTHAME B., Inéquations quasi-variationnelles et équations de Hamilton-Jacobi-Bellman, C.r. hebd. Séanc. Acad. Sci. Paris 296, 373-376 (1983).
14. STROOCK D. W. & VARADHAN S. R. S., Multidimensional Diffusion Processes, Springer, Berlin (1979).