Systems & Control Letters 14 (1990) 55-61, North-Holland

Adapted solution of a backward stochastic differential equation

E. PARDOUX
Mathématiques, URA 225, Université de Provence, 13331 Marseille 3, and INRIA, France

S.G. PENG
Institute of Mathematics, Shandong University, Jinan, and Institute of Mathematics, Fudan University, Shanghai, China

Received 24 July 1989
Revised 10 October 1989

Abstract: Let $\{W_t;\ t \in [0,1]\}$ be a standard $k$-dimensional Wiener process defined on a probability space $(\Omega, \mathcal{F}, P)$, and let $\{\mathcal{F}_t\}$ denote its natural filtration. Given an $\mathcal{F}_1$-measurable $d$-dimensional random vector $X$, we look for an adapted pair of processes $\{x(t), y(t);\ t \in [0,1]\}$ with values in $\mathbb{R}^d$ and $\mathbb{R}^{d \times k}$ respectively, which solves an equation of the form
$$x(t) + \int_t^1 f(s, x(s), y(s))\,ds + \int_t^1 [g(s, x(s)) + y(s)]\,dW_s = X.$$
A linearized version of that equation appears in stochastic control theory as the equation satisfied by the adjoint process. We also generalize our results to the following equation:
$$x(t) + \int_t^1 f(s, x(s), y(s))\,ds + \int_t^1 g(s, x(s), y(s))\,dW_s = X$$
under rather restrictive assumptions on $g$.

Keywords and phrases: Backward stochastic differential equation; adapted process.

1. Introduction

The equation for the adjoint process in optimal stochastic control (see Bensoussan [1], Bismut [2], Haussmann [4], Kushner [6]) is a linear version of the following equation:
$$x(t) + \int_t^1 f(s, x(s), y(s))\,ds + \int_t^1 [g(s, x(s)) + y(s)]\,dW(s) = X, \qquad 0 \le t \le 1, \tag{1.1}$$
where $\{W(t),\ t \in [0,1]\}$ is a standard $k$-dimensional Wiener process defined on $(\Omega, \mathcal{F}, P)$, $\{\mathcal{F}_t,\ t \in [0,1]\}$ is its natural filtration (i.e. $\mathcal{F}_t = \sigma(W(s),\ 0 \le s \le t)$), $X$ is a given $\mathcal{F}_1$-measurable $d$-dimensional random vector, and $f$ maps $\Omega \times [0,1] \times \mathbb{R}^d \times \mathbb{R}^{d \times k}$ into $\mathbb{R}^d$. $f$ is assumed to be $\mathcal{P} \otimes \mathcal{B}^d \otimes \mathcal{B}^{d \times k} / \mathcal{B}^d$ measurable, where $\mathcal{P}$ denotes the $\sigma$-algebra of $\mathcal{F}_t$-progressively measurable subsets of $\Omega \times [0,1]$. We assume moreover that $f$ is uniformly Lipschitz with respect to both $x$ and $y$.

We are looking for a pair $\{x(t), y(t);\ t \in [0,1]\}$ with values in $\mathbb{R}^d \times \mathbb{R}^{d \times k}$ which we require to be $\{\mathcal{F}_t\}$-adapted. Note that it is the freedom of choosing the process $\{y(t)\}$ which allows us to find an adapted solution. Indeed, in case $y \equiv 0$ and $X$ is deterministic, the above equation would have a unique $\bar{\mathcal{F}}_t$-adapted solution $\{x(t)\}$, where $\bar{\mathcal{F}}_t = \sigma(W(s) - W(t);\ t \le s \le 1)$. Note also that the following relation holds between $\{x(t)\}$, $\{y(t)\}$ and $\{W(t)\}$:
$$y(t) = \frac{d}{dt}\langle x, W \rangle_t - g(t, x(t)), \qquad t \text{ a.e.},$$
where $\langle x, W \rangle_t$ denotes the joint quadratic variation process between $x$ and $W$.
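To fix ideas, here is a simple illustration of this point (our example, not part of the original text). Take $d = k = 1$, $f \equiv 0$, $g \equiv 0$ and $X = W(1)$. Then the adapted solution of the above equation is $x(t) = W(t)$, $y(t) \equiv 1$, since
$$x(t) + \int_t^1 y(s)\,dW(s) = W(t) + \bigl(W(1) - W(t)\bigr) = W(1) = X,$$
whereas insisting on $y \equiv 0$ would force $x(t) = X = W(1)$ for all $t$, which is not $\{\mathcal{F}_t\}$-adapted.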


Our main result will be an existence and uniqueness result for an adapted pair $\{x(t), y(t);\ t \in [0,1]\}$ which solves (1.1). We expect that our result will prove to be useful in optimal stochastic control. We shall also consider the more general equation

$$x(t) + \int_t^1 f(s, x(s), y(s))\,ds + \int_t^1 g(s, x(s), y(s))\,dW(s) = X \tag{1.2}$$

where $g: \Omega \times [0,1] \times \mathbb{R}^d \times \mathbb{R}^{d \times k} \to \mathbb{R}^{d \times k}$ is $\mathcal{P} \otimes \mathcal{B}^d \otimes \mathcal{B}^{d \times k} / \mathcal{B}^{d \times k}$ measurable, and satisfies a rather restrictive assumption which implies in particular that the mapping $y \mapsto g(s, x, y)$ is a bijection for any $(s, \omega, x)$.

The paper is organized as follows: in Section 2 we study a version of equation (1.1) where $f$ and $g$ do not depend on $x$, in Section 3 we study equation (1.1), and in Section 4 we study equation (1.2).

Let us close this section with some notations that will be used throughout the paper. $M^2(0,1;\mathbb{R}^d)$ (resp. $M^2(0,1;\mathbb{R}^{d \times k})$) will denote the set of $\mathbb{R}^d$-valued (resp. $\mathbb{R}^{d \times k}$-valued) processes which are $\mathcal{F}_t$-progressively measurable and square integrable over $\Omega \times (0,1)$ with respect to $P \times \lambda$ (here $\lambda$ denotes Lebesgue measure on $[0,1]$). For $x \in \mathbb{R}^d$, $|x|$ denotes its Euclidean norm. An element $y \in \mathbb{R}^{d \times k}$ is considered as a $d \times k$ matrix; its Euclidean norm is given by $|y| = \sqrt{\operatorname{Tr}(y y^*)}$, and the associated scalar product is $(y, z) = \operatorname{Tr}(y z^*)$.

2. A simplified version of equation (1.1)

As a preparation for the study of equation (1.1), we consider in this section simplified versions of that equation. Let us first prove:

Lemma 2.1. Given $X \in L^2(\Omega, \mathcal{F}_1, P; \mathbb{R}^d)$, $f \in M^2(0,1;\mathbb{R}^d)$ and $g \in M^2(0,1;\mathbb{R}^{d \times k})$, there exists a unique pair $(x, y) \in M^2(0,1;\mathbb{R}^d) \times M^2(0,1;\mathbb{R}^{d \times k})$ such that
$$x(t) + \int_t^1 f(s)\,ds + \int_t^1 [g(s) + y(s)]\,dW(s) = X, \qquad 0 \le t \le 1. \tag{2.1}$$

Proof. Define
$$x(t) = E^{\mathcal{F}_t}\Bigl[X - \int_t^1 f(s)\,ds\Bigr], \qquad 0 \le t \le 1.$$
It follows from a well-known martingale representation theorem (see e.g. Karatzas and Shreve [5], p. 182) that there exists $\hat{y} \in M^2(0,1;\mathbb{R}^{d \times k})$ such that
$$E^{\mathcal{F}_t}\Bigl[X - \int_0^1 f(s)\,ds\Bigr] = x(0) + \int_0^t \hat{y}(s)\,dW(s).$$
We finally define $y(t) = \hat{y}(t) - g(t)$, $0 \le t \le 1$. It is easily seen that the pair $(x, y)$ which we have constructed solves (2.1), and that any solution of (2.1) takes the above form. □
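The construction above can be checked numerically in a simple special case. The following Python sketch is ours, not part of the original paper: it takes $d = k = 1$, $f \equiv 0$, $g \equiv 0$ and $X = W(1)^2$, for which the conditional expectation is explicit, $x(t) = E^{\mathcal{F}_t}[X] = W(t)^2 + 1 - t$, and the martingale representation integrand is $y(t) = 2W(t)$; it then verifies by Monte Carlo that this pair satisfies (2.1) along a time grid.

```python
# Sketch (not from the paper): Monte Carlo check of Lemma 2.1 for d = k = 1,
# f == 0, g == 0 and X = W(1)^2.  In this case
#   x(t) = E[W(1)^2 | F_t] = W(t)^2 + (1 - t)   and   y(t) = 2 W(t),
# and the identity x(t) + int_t^1 y(s) dW(s) = X should hold for every t.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 20_000, 500
dt = 1.0 / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
t = np.linspace(0.0, 1.0, n_steps + 1)

X = W[:, -1] ** 2                 # terminal value
x = W ** 2 + (1.0 - t)            # candidate adapted process x(t)
y = 2.0 * W                       # candidate integrand y(t)

# Euler approximation of the tail integral int_t^1 y(s) dW(s) at every grid time t:
incr = y[:, :-1] * dW
tail = np.concatenate(
    [np.cumsum(incr[:, ::-1], axis=1)[:, ::-1], np.zeros((n_paths, 1))], axis=1
)

residual = x + tail - X[:, None]  # should vanish in expectation at every t
print("max_t |E[x(t) + int_t^1 y dW - X]| =", np.abs(residual.mean(axis=0)).max())
```

Up to Monte Carlo and time-discretisation error, the printed value is close to zero.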

We now consider the equation
$$x(t) + \int_t^1 f(s, y(s))\,ds + \int_t^1 [g(s) + y(s)]\,dW(s) = X \tag{2.2}$$

where now $f: \Omega \times (0,1) \times \mathbb{R}^{d \times k} \to \mathbb{R}^d$ is $\mathcal{P} \otimes \mathcal{B}^{d \times k} / \mathcal{B}^d$ measurable ($\mathcal{P}$ denotes the $\sigma$-algebra of progressively measurable subsets of $\Omega \times (0,1)$) with the property that
$$f(\cdot, 0) \in M^2(0,1;\mathbb{R}^d) \tag{2.3i}$$
and there exists $c > 0$ such that
$$|f(t, y_1) - f(t, y_2)| \le c\,|y_1 - y_2| \tag{2.3ii}$$
for any $y_1, y_2 \in \mathbb{R}^{d \times k}$, $(t, \omega)$ a.e. Note that (2.3i) + (2.3ii) imply that $f(\cdot, y(\cdot)) \in M^2(0,1;\mathbb{R}^d)$ whenever $y \in M^2(0,1;\mathbb{R}^{d \times k})$.

Proposition 2.2. Let $X \in L^2(\Omega, \mathcal{F}_1, P; \mathbb{R}^d)$, $g \in M^2(0,1;\mathbb{R}^{d \times k})$ and let $f: \Omega \times (0,1) \times \mathbb{R}^{d \times k} \to \mathbb{R}^d$ satisfy the above requirements, in particular (2.3). Then there exists a unique pair $(x, y) \in M^2(0,1;\mathbb{R}^d) \times M^2(0,1;\mathbb{R}^{d \times k})$ which satisfies (2.2).

Proof. Uniqueness. Let $(x_1, y_1)$ and $(x_2, y_2)$ be two solutions. It follows from Itô's formula, applied to $|x_1(s) - x_2(s)|^2$ from $s = t$ to $s = 1$, that
$$|x_1(t) - x_2(t)|^2 + \int_t^1 |y_1(s) - y_2(s)|^2\,ds = -2\int_t^1 \bigl(f(s, y_1(s)) - f(s, y_2(s)),\, x_1(s) - x_2(s)\bigr)\,ds - 2\int_t^1 \bigl(x_1(s) - x_2(s),\, [y_1(s) - y_2(s)]\,dW_s\bigr).$$
It follows from well-known inequalities for semimartingales that $\sup_{0 \le t \le 1}|x_1(t) - x_2(t)|^2$ is $P$-integrable. Since moreover $y_1 - y_2 \in M^2(0,1;\mathbb{R}^{d \times k})$, the above stochastic integral is $P$-integrable and has zero expectation. We obtain
$$E|x_1(t) - x_2(t)|^2 + E\int_t^1 |y_1(s) - y_2(s)|^2\,ds = -2E\int_t^1 \bigl(f(s, y_1(s)) - f(s, y_2(s)),\, x_1(s) - x_2(s)\bigr)\,ds \le \tfrac12\,E\int_t^1 |y_1(s) - y_2(s)|^2\,ds + 2c^2 E\int_t^1 |x_1(s) - x_2(s)|^2\,ds.$$
Uniqueness now follows from Gronwall's lemma.

Existence. With the help of Lemma 2.1, we define an approximating sequence by a kind of Picard iteration. Let $y_0(t) \equiv 0$, and let $\{(x_n(t), y_n(t));\ 0 \le t \le 1\}_{n \ge 1}$ be the sequence in $M^2(0,1;\mathbb{R}^d) \times M^2(0,1;\mathbb{R}^{d \times k})$ defined recursively by
$$x_n(t) + \int_t^1 f(s, y_{n-1}(s))\,ds + \int_t^1 [g(s) + y_n(s)]\,dW(s) = X. \tag{2.4_n}$$

Using again Itô's formula and the same inequalities as above, we get (with $K = 2c^2$)
$$E\bigl(|x_{n+1}(t) - x_n(t)|^2\bigr) + E\int_t^1 |y_{n+1}(s) - y_n(s)|^2\,ds \le K\,E\int_t^1 |x_{n+1}(s) - x_n(s)|^2\,ds + \tfrac12\,E\int_t^1 |y_n(s) - y_{n-1}(s)|^2\,ds.$$
Define $u_n(t) = E\int_t^1 |x_n(s) - x_{n-1}(s)|^2\,ds$ and $v_n(t) = E\int_t^1 |y_n(s) - y_{n-1}(s)|^2\,ds$ for $n \ge 1$ (with $x_0(t) \equiv 0$). Since $u_{n+1}'(t) = -E|x_{n+1}(t) - x_n(t)|^2$, the last inequality implies that
$$-\frac{d}{dt}\bigl(u_{n+1}(t)\,e^{Kt}\bigr) + e^{Kt}v_{n+1}(t) \le \tfrac12\,e^{Kt}v_n(t).$$
Integrating from $t$ to $1$, and using $u_{n+1}(1) = 0$, we obtain
$$u_{n+1}(t) + \int_t^1 e^{K(s-t)}v_{n+1}(s)\,ds \le \tfrac12 \int_t^1 e^{K(s-t)}v_n(s)\,ds.$$
It follows in particular that
$$\int_0^1 e^{Kt}v_{n+1}(t)\,dt \le 2^{-n}\,\bar{c}\,e^{K} \tag{2.5}$$


with $\bar{c} = E\int_0^1 |y_1(t)|^2\,dt = \sup_{0 \le t \le 1} v_1(t)$; but also then
$$u_{n+1}(0) \le 2^{-n}\,\bar{c}\,e^{K}. \tag{2.6}$$
Moreover, from the last differential inequality, $(d/dt)u_{n+1}(t) \le 0$ and (2.6), we deduce that
$$v_{n+1}(0) \le K u_{n+1}(0) + \tfrac12 v_n(0) \le 2^{-n}\bar{K} + \tfrac12 v_n(0)$$
with $\bar{K} = \bar{c}\,K\,e^{K}$, from which follows immediately
$$v_{n+1}(0) \le 2^{-n}\bigl(n\bar{K} + v_1(0)\bigr). \tag{2.7}$$
Since the square roots of the right-hand sides of (2.6) and (2.7) form summable series, $\{x_n\}$ (resp. $\{y_n\}$) is a Cauchy sequence in $M^2(0,1;\mathbb{R}^d)$ (resp. in $M^2(0,1;\mathbb{R}^{d \times k})$). Then, from $(2.4)_n$, $\{x_n\}$ is also a Cauchy sequence in $L^2(\Omega; C(0,1;\mathbb{R}^d))$, and passing to the limit in $(2.4)_n$ as $n \to \infty$, we obtain that the pair $(x, y)$ defined by
$$x = \lim_{n \to \infty} x_n, \qquad y = \lim_{n \to \infty} y_n$$
solves equation (2.2). □
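To spell out the last step (our elaboration, not part of the original text): since $u_{n+1}(0) = \|x_{n+1} - x_n\|^2_{M^2(0,1;\mathbb{R}^d)}$ and $v_{n+1}(0) = \|y_{n+1} - y_n\|^2_{M^2(0,1;\mathbb{R}^{d \times k})}$, the estimates (2.6) and (2.7) give
$$\sum_{n \ge 1} \|x_{n+1} - x_n\|_{M^2} \le \sum_{n \ge 1} 2^{-n/2}\bigl(\bar{c}\,e^{K}\bigr)^{1/2} < \infty, \qquad \sum_{n \ge 1} \|y_{n+1} - y_n\|_{M^2} \le \sum_{n \ge 1} 2^{-n/2}\bigl(n\bar{K} + v_1(0)\bigr)^{1/2} < \infty,$$
which is exactly the Cauchy property used above.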

3. Equation (1.1)

We can now study the equation
$$x(t) + \int_t^1 f(s, x(s), y(s))\,ds + \int_t^1 [g(s, x(s)) + y(s)]\,dW(s) = X \tag{3.1}$$

where $f: \Omega \times (0,1) \times \mathbb{R}^d \times \mathbb{R}^{d \times k} \to \mathbb{R}^d$ is $\mathcal{P} \otimes \mathcal{B}^d \otimes \mathcal{B}^{d \times k} / \mathcal{B}^d$ measurable and $g: \Omega \times (0,1) \times \mathbb{R}^d \to \mathbb{R}^{d \times k}$ is $\mathcal{P} \otimes \mathcal{B}^d / \mathcal{B}^{d \times k}$ measurable, with the properties that
$$f(\cdot, 0, 0) \in M^2(0,1;\mathbb{R}^d), \qquad g(\cdot, 0) \in M^2(0,1;\mathbb{R}^{d \times k}) \tag{3.2i}$$
and there exists $c > 0$ such that
$$|f(t, x_1, y_1) - f(t, x_2, y_2)| \le c\,(|x_1 - x_2| + |y_1 - y_2|), \tag{3.2ii}$$
$$|g(t, x_1) - g(t, x_2)| \le c\,|x_1 - x_2| \tag{3.2iii}$$
for all $x_1, x_2 \in \mathbb{R}^d$, $y_1, y_2 \in \mathbb{R}^{d \times k}$, $(t, \omega)$ a.e. Note that (3.2i) + (3.2ii) + (3.2iii) imply that $f(\cdot, x(\cdot), y(\cdot)) \in M^2(0,1;\mathbb{R}^d)$ and $g(\cdot, x(\cdot)) \in M^2(0,1;\mathbb{R}^{d \times k})$ whenever $x \in M^2(0,1;\mathbb{R}^d)$, $y \in M^2(0,1;\mathbb{R}^{d \times k})$.

Theorem 3.1. Given $X \in L^2(\Omega, \mathcal{F}_1, P; \mathbb{R}^d)$, and $f$ and $g$ as above, satisfying in particular (3.2), there exists a unique pair $(x, y) \in M^2(0,1;\mathbb{R}^d) \times M^2(0,1;\mathbb{R}^{d \times k})$ which solves equation (3.1).

Proof. Uniqueness. Let $(x_1, y_1)$ and $(x_2, y_2)$ be two solutions in $M^2(0,1;\mathbb{R}^d) \times M^2(0,1;\mathbb{R}^{d \times k})$. By a similar argument to that of Proposition 2.2,

$$E|x_1(t) - x_2(t)|^2 + E\int_t^1 |y_1(s) - y_2(s)|^2\,ds = -2E\int_t^1 \bigl(f(s, x_1(s), y_1(s)) - f(s, x_2(s), y_2(s)),\, x_1(s) - x_2(s)\bigr)\,ds - 2E\int_t^1 \bigl(g(s, x_1(s)) - g(s, x_2(s)),\, y_1(s) - y_2(s)\bigr)\,ds - E\int_t^1 |g(s, x_1(s)) - g(s, x_2(s))|^2\,ds \le \bar{c}\,E\int_t^1 |x_1(s) - x_2(s)|^2\,ds + \tfrac12\,E\int_t^1 |y_1(s) - y_2(s)|^2\,ds$$
for a certain constant $\bar{c}$. The result follows from Gronwall's lemma.


Existence. We now construct an approximating sequence using a Picard-type iteration, with the help of Proposition 2.2. Let $x_0(t) \equiv 0$, and let $\{(x_n(t), y_n(t));\ 0 \le t \le 1\}_{n \ge 1}$ be the sequence in $M^2(0,1;\mathbb{R}^d) \times M^2(0,1;\mathbb{R}^{d \times k})$ defined recursively by
$$x_n(t) + \int_t^1 f(s, x_{n-1}(s), y_n(s))\,ds + \int_t^1 [g(s, x_{n-1}(s)) + y_n(s)]\,dW(s) = X, \qquad 0 \le t \le 1. \tag{3.3}$$

Using a by now 'usual' procedure, we obtain, for some constant $\bar{c}$ depending only on $c$,
$$E\bigl(|x_{n+1}(t) - x_n(t)|^2\bigr) + \tfrac12\,E\int_t^1 |y_{n+1}(s) - y_n(s)|^2\,ds \le \bar{c}\Bigl(E\int_t^1 |x_{n+1}(s) - x_n(s)|^2\,ds + E\int_t^1 |x_n(s) - x_{n-1}(s)|^2\,ds\Bigr). \tag{3.4}$$
Define $u_n(t) = E\int_t^1 |x_n(s) - x_{n-1}(s)|^2\,ds$. It follows from (3.4) that
$$-\frac{du_{n+1}}{dt}(t) - \bar{c}\,u_{n+1}(t) \le \bar{c}\,u_n(t), \qquad u_{n+1}(1) = 0,$$
or
$$u_{n+1}(t) \le \bar{c}\int_t^1 e^{\bar{c}(s-t)}u_n(s)\,ds.$$
Iterating that inequality, we deduce that
$$u_{n+1}(0) \le \frac{(\bar{c}\,e^{\bar{c}})^n}{n!}\,u_1(0).$$

This, together with (3.4), implies that $\{x_n\}$ is a Cauchy sequence in $M^2(0,1;\mathbb{R}^d)$ and $\{y_n\}$ a Cauchy sequence in $M^2(0,1;\mathbb{R}^{d \times k})$. Then, from (3.3), $\{x_n\}$ converges also in $L^2(\Omega; C(0,1;\mathbb{R}^d))$. It then follows from (3.3) that
$$x = \lim_{n \to \infty} x_n, \qquad y = \lim_{n \to \infty} y_n$$
solves equation (3.1). □
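To illustrate how an adapted pair solving (3.1) can be approximated in practice, here is a self-contained numerical sketch. It is ours and not part of the paper: the explicit time discretisation, the regression-based conditional expectations and the test data $f(t,x,y) = -(x+y)$, $g \equiv 0$, $X = W(1)^2$ are all our own choices, picked so that the exact solution is known, namely $x(t) = e^{1-t}\bigl[(W(t) + 1 - t)^2 + 1 - t\bigr]$, $y(t) = 2e^{1-t}(W(t) + 1 - t)$, hence $x(0) = 2e$.

```python
# Sketch (not from the paper): a least-squares Monte Carlo discretisation of
# equation (3.1) in the scalar Markovian case d = k = 1, with our test data
#   f(t, x, y) = -(x + y),  g == 0,  X = W(1)^2,
# for which the exact solution is
#   x(t) = e^{1-t} [ (W(t) + 1 - t)^2 + 1 - t ],   y(t) = 2 e^{1-t} (W(t) + 1 - t),
# so that x(0) = 2e.  Conditional expectations E[ . | F_{t_i}] are approximated
# by quadratic regression on W(t_i) (a Longstaff-Schwartz type approximation).
import numpy as np

def f(t, x, y):            # driver; Lipschitz in (x, y) as required by (3.2ii)
    return -(x + y)

def g(t, x):               # the "g(s, x(s))" term of (3.1); zero in this test
    return 0.0 * x

def cond_exp(w, target, deg=2):
    """Regression estimate of E[target | W(t_i)], evaluated along the samples w."""
    if np.allclose(w, w[0]):                    # t_i = 0: the sigma-field is trivial
        return np.full_like(target, target.mean())
    coef = np.polyfit(w, target, deg)
    return np.polyval(coef, w)

rng = np.random.default_rng(1)
n_paths, n_steps = 50_000, 50
dt = 1.0 / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

x = W[:, -1] ** 2                               # x(1) = X = W(1)^2
for i in reversed(range(n_steps)):
    t_i = i * dt
    x_hat = cond_exp(W[:, i], x)                # ~ E[x(t_{i+1}) | F_{t_i}]
    # E[x(t_{i+1}) dW_i | F_{t_i}] ~ (g(t_i, x(t_i)) + y(t_i)) dt  gives y(t_i):
    y_i = cond_exp(W[:, i], x * dW[:, i]) / dt - g(t_i, x_hat)
    # conditional expectation of (3.1) between t_i and t_{i+1} gives x(t_i):
    x = x_hat - f(t_i, x_hat, y_i) * dt

print("x(0) estimate:", x.mean(), "   exact value 2e =", 2 * np.e)
```

The printed estimate should be close to $2e \approx 5.44$, up to the time-discretisation and regression errors of this crude scheme.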

4. Equation (1.2)

We now consider the equation
$$x(t) + \int_t^1 f(s, x(s), y(s))\,ds + \int_t^1 g(s, x(s), y(s))\,dW(s) = X \tag{4.1}$$

where $X \in L^2(\Omega, \mathcal{F}_1, P; \mathbb{R}^d)$, $f: \Omega \times (0,1) \times \mathbb{R}^d \times \mathbb{R}^{d \times k} \to \mathbb{R}^d$ is $\mathcal{P} \otimes \mathcal{B}^d \otimes \mathcal{B}^{d \times k} / \mathcal{B}^d$ measurable, $g: \Omega \times (0,1) \times \mathbb{R}^d \times \mathbb{R}^{d \times k} \to \mathbb{R}^{d \times k}$ is $\mathcal{P} \otimes \mathcal{B}^d \otimes \mathcal{B}^{d \times k} / \mathcal{B}^{d \times k}$ measurable, and they satisfy:
$$f(\cdot, 0, 0) \in M^2(0,1;\mathbb{R}^d), \qquad g(\cdot, 0, 0) \in M^2(0,1;\mathbb{R}^{d \times k}); \tag{4.2i}$$
there exists $c > 0$ such that
$$|f(t, x_1, y_1) - f(t, x_2, y_2)| + |g(t, x_1, y_1) - g(t, x_2, y_2)| \le c\,(|x_1 - x_2| + |y_1 - y_2|) \tag{4.2ii}$$
for all $x_1, x_2 \in \mathbb{R}^d$, $y_1, y_2 \in \mathbb{R}^{d \times k}$, $(t, \omega)$ a.e.; and there exists $\alpha > 0$ such that
$$|g(t, x, y_1) - g(t, x, y_2)| \ge \alpha\,|y_1 - y_2| \tag{4.2iii}$$


for all $x \in \mathbb{R}^d$, $y_1, y_2 \in \mathbb{R}^{d \times k}$, $(t, \omega)$ a.e. Note that (4.2iii) is satisfied in the case of equation (3.1), with $\alpha = 1$. Moreover, (4.2ii) + (4.2iii) imply that for all $x \in \mathbb{R}^d$ and $(t, \omega)$ a.e.,
$$y \mapsto g(t, x, y)$$
is a bijection from $\mathbb{R}^{d \times k}$ onto itself. Indeed, the one-to-one property follows at once from (4.2iii), and the onto property follows from continuity (4.2ii), injectivity, and the fact that
$$\lim_{|y| \to +\infty} |g(t, x, y)| = +\infty$$
for all $x \in \mathbb{R}^d$, $(t, \omega)$ a.e. (see the comment following the proof of Theorem 4.1 in Protter [7]).
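As a simple illustration of (4.2ii)-(4.2iii) beyond the case $g(t, x, y) = g(t, x) + y$ of equation (3.1), consider (our example, not from the paper) $d = k = 1$ and
$$g(t, x, y) = y + \tfrac12 \sin y,$$
which satisfies the Lipschitz bound in (4.2ii) with $c = \tfrac32$ and (4.2iii) with $\alpha = \tfrac12$, since
$$\tfrac12\,|y_1 - y_2| \le \bigl|y_1 - y_2 + \tfrac12(\sin y_1 - \sin y_2)\bigr| \le \tfrac32\,|y_1 - y_2|;$$
in particular $y \mapsto g(t, x, y)$ is indeed a bijection of $\mathbb{R}$ onto itself.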

We have:

Theorem 4.1. Under the above conditions on $X$, $f$ and $g$, in particular (4.2), there exists a unique pair $(x, y) \in M^2(0,1;\mathbb{R}^d) \times M^2(0,1;\mathbb{R}^{d \times k})$ which solves equation (4.1).

Proof. The proof is an adaptation, with essentially obvious changes, of the proofs of the previous results. We just indicate the three steps, and explain the one new argument which is needed in the first step. The first step consists in studying the equation
$$x(t) + \int_t^1 f(s)\,ds + \int_t^1 g(s, y(s))\,dW(s) = X \tag{4.3}$$

where $g$ satisfies the simplified version of (4.2) obtained by suppressing the dependence on $x$. The second step solves the equation
$$x(t) + \int_t^1 f(s, y(s))\,ds + \int_t^1 g(s, y(s))\,dW(s) = X,$$
where $f$ and $g$ satisfy the simplified version of (4.2) obtained by suppressing the dependence on $x$, and the third step solves equation (4.1).

Let us only consider equation (4.3). From Lemma 2.1, there exists a unique pair $(x, \bar{y}) \in M^2(0,1;\mathbb{R}^d) \times M^2(0,1;\mathbb{R}^{d \times k})$ such that
$$x(t) + \int_t^1 f(s)\,ds + \int_t^1 \bar{y}(s)\,dW(s) = X.$$
It remains to show that, given $\bar{y} \in M^2(0,1;\mathbb{R}^{d \times k})$, there exists a unique $y \in M^2(0,1;\mathbb{R}^{d \times k})$ such that
$$g(t, y(t)) = \bar{y}(t), \qquad (t, \omega) \text{ a.e.}$$
It follows from the properties of $g$ that for any $(t, \omega, y) \in [0,1] \times \Omega \times \mathbb{R}^{d \times k}$, there exists a unique element $\psi_t(\omega, y)$ of $\mathbb{R}^{d \times k}$ such that
$$g(t, \omega, \psi_t(\omega, y)) = y.$$
It remains only to show that $\psi$ is $\mathcal{P} \otimes \mathcal{B}^{d \times k} / \mathcal{B}^{d \times k}$ measurable. We can, without loss of generality, assume that $\Omega = C([0,1];\mathbb{R}^k)$, $W_t(\omega) = \omega(t)$ and $\mathcal{F}_1$ is the Borel field over $\Omega$. Note that the mapping
$$G(t, \omega, y) = (t, \omega, g(t, \omega, y))$$
is a bijection from $\mathcal{O} = [0,1] \times \Omega \times \mathbb{R}^{d \times k}$ onto itself. Since $\mathcal{O}$ is a complete and separable metric space, it follows from Theorem 10.5, page 506 in Ethier and Kurtz [3] that $G^{-1}$ is Borel measurable, i.e. $\mathcal{B}([0,1]) \otimes \mathcal{F}_1 \otimes \mathcal{B}^{d \times k}$ measurable. By considering for each $t$ the restriction of the same map to $[0,t] \times C([0,t];\mathbb{R}^k) \times \mathbb{R}^{d \times k}$, we obtain that $G^{-1}$ is $\mathcal{P} \otimes \mathcal{B}^{d \times k}$ measurable, which proves the above assertion for $\psi$. □

Remark 4.2. It would be nice to solve the above equations for an arbitrary (not necessarily square integrable) final condition $X$. That extension of our results would follow from the following result: let $X_1, X_2 \in L^2(\Omega, \mathcal{F}_1, P; \mathbb{R}^d)$, and let $\{x_1(t)\}$, $\{x_2(t)\}$ denote the corresponding solutions of equation (4.1); then $x_1 = x_2$ on $[0,1] \times \{X_1 = X_2\}$. However, even in the simplest case of equation (2.1) with $f \equiv 0$ and $g \equiv 0$,
$$x_i(t) = E\bigl(X_i \mid \mathcal{F}_t\bigr), \qquad i = 1, 2,$$
and the above result does not hold, except if $\{X_1 = X_2\} \in \mathcal{F}_0$.

Bibliography

[1] A. Bensoussan, Lectures on stochastic control, in: S.K. Mitter and A. Moro, Eds., Nonlinear Filtering and Stochastic Control, Lecture Notes in Math. No. 972 (Springer-Verlag, Berlin-New York, 1982).
[2] J.M. Bismut, Théorie probabiliste du contrôle des diffusions, Mem. Amer. Math. Soc. No. 176 (1973).
[3] S.N. Ethier and T.G. Kurtz, Markov Processes: Characterization and Convergence (J. Wiley, New York, 1986).
[4] U.G. Haussmann, A Stochastic Maximum Principle for Optimal Control of Diffusions, Pitman Research Notes in Math. No. 151 (1986).
[5] I. Karatzas and S. Shreve, Brownian Motion and Stochastic Calculus (Springer-Verlag, Berlin-New York, 1988).
[6] H.J. Kushner, Necessary conditions for continuous parameter stochastic optimization problems, SIAM J. Control 10 (1972) 550-565.
[7] P. Protter, Stochastic Integration and Differential Equations. A New Approach (Springer-Verlag, to appear).