Available online at www.sciencedirect.com
Stochastic Processes and their Applications 123 (2013) 3328–3357 www.elsevier.com/locate/spa
BSDEs with jumps, optimization and applications to dynamic risk measures Marie-Claire Quenez a,b , Agn`es Sulem b,c,∗ a LPMA, Universit´e Paris 7 Denis Diderot, Boite courrier 7012, 75251 Paris cedex 05, France b INRIA Paris-Rocquencourt, Domaine de Voluceau, Rocquencourt, BP 105, Le Chesnay Cedex, 78153, France c Universit´e Paris-Est, F-77455 Marne-la-Vall´ee, France
Received 11 June 2012; received in revised form 8 December 2012; accepted 22 February 2013 Available online 14 March 2013
Abstract In the Brownian case, the links between dynamic risk measures and BSDEs have been widely studied. In this paper, we consider the case with jumps. We first study the properties of BSDEs driven by a Brownian motion and a Poisson random measure. In particular, we provide a comparison theorem under quite weak assumptions, extending that of Royer [21]. We then give some properties of dynamic risk measures induced by BSDEs with jumps. We provide a representation property of such dynamic risk measures in the convex case as well as some results on a robust optimization problem in the case of model ambiguity. c 2013 Published by Elsevier B.V. ⃝ MSC: 93E20; 60J60; 47N10 Keywords: Backward stochastic differential equations with jumps; Comparison theorems; Risk measures; Dual representation; Robust optimization
1. Introduction Linear backward stochastic differential equations (BSDEs) were introduced by Bismut (1976) [4] as the adjoint equations associated with stochastic Pontryagin maximum principles in stochastic control theory. The general case of non-linear BSDEs was then studied by Pardoux and Peng (1990) (see [16] and [17] in the Brownian framework). In [17], they provided Feynman–Kac ∗ Corresponding author at: INRIA Paris-Rocquencourt, Domaine de Voluceau, Rocquencourt, BP 105, Le Chesnay Cedex, 78153, France. Tel.: +33 3963 55 69. E-mail addresses:
[email protected] (M.-C. Quenez),
[email protected] (A. Sulem).
c 2013 Published by Elsevier B.V. 0304-4149/$ - see front matter ⃝ http://dx.doi.org/10.1016/j.spa.2013.02.016
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
3329
representations of solutions of non-linear parabolic partial differential equations. In the paper by El Karoui et al. (1997) [9], some additional properties are given and several applications to option pricing and recursive utilities are studied. The case of a discontinuous framework is more involved, especially concerning the comparison theorem, which requires an additional assumption. In 1994, Tang and Li [22] provided an existence and uniqueness result in the case of a natural filtration associated with a Brownian motion and a Poisson random measure. In 1995, Barles, Buckdahn, Pardoux [1] provided a comparison theorem as well as some links between BSDEs and non-linear parabolic integral–partial differential equations, generalizing some results of [16] to the case of jumps. In 2006, Royer [21] proved a comparison theorem under weaker assumptions, and introduced the notion of non-linear expectations in this framework. Furthermore, in 2004–2005, various authors have introduced dynamic risk measures in a Brownian framework, defined as the solutions of BSDEs (see [18,2,11,19]). More precisely, given a Lipschitz driver f (t, x, π ) and a terminal time T , the risk measure ρ at time t of a position ξ is given by −X t , where X is the solution of the BSDE driven by a Brownian motion, associated with f and terminal condition ξ . By the comparison theorem, ρ satisfies the monotonicity property, which is usually required for a risk measure (see [10]). Many studies have been recently done on such dynamic risk measures, especially concerning robust optimization problems and optimal stopping problems, in the case of a Brownian filtration and a concave driver (see, among others, Bayraktar and coauthors in [3]). In this paper, we are concerned with dynamic risk measures induced by BSDEs with jumps. We study their properties as well as some related optimization problems. We begin by studying BSDEs with jumps and their properties. We first focus on linear BSDEs which play an important role in the comparison theorems as well as in the applications to finance. We show that the solution is given by a conditional expectation via an exponential semimartingale, usually called the adjoint process. We also provide some additional properties of the solution and its adjoint process, which are specific to the jump case. Using these properties, we provide a comparison theorem as well as a strict comparison theorem, under mild assumptions, which generalize those stated in [21]. We also prove some optimization principles for BSDEs with jumps. More precisely, we consider a family of controlled drivers f α , α ∈ A and show that, under some hypothesis, the infimum of the associated solutions X α can be characterized as the solution of a BSDE. Moreover, the driver of this BSDE is given by the infimum of the drivers f α , α ∈ A. We provide a sufficient condition of optimality. Also, from the strict comparison theorem, we derive a necessary optimality condition. We then state some properties of dynamic risk measures induced by BSDEs with jumps. Note that contrary to the Brownian case, the monotonicity property does not generally holds, and requires an additional assumption. In the case of a concave driver f , we provide a dual representation property of the associated convex risk measure via a set of probability measures which are absolutely continuous with respect to the initial probability P. At last, we study the case of ambiguity on the model. More precisely, we consider a model parameterized by a control α as follows. 
With each coefficient α, is associated a probability measure Q α , equivalent to P, called prior, as well as a monotone risk measure ρ α induced, under Q α , by a BSDE with jumps. We consider an agent who is averse to ambiguity and define her risk measure as the supremum over α of the risk measures ρ α . We show that this dynamic risk measure is induced, under P, by a BSDE. The paper is organized as follows. In Section 2, we introduce the notation and the basic definitions. Section 3 is dedicated to linear BSDEs with jumps. In Section 4, comparison
3330
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
theorems for BSDEs with jumps are provided. We also prove two optimization principles which allow us to characterize the value function of an optimization problem written in terms of BSDEs. Section 5 is dedicated to dynamic risk measures induced by BSDE with jumps and related robust optimization problems. In Section 5.1, we give properties of dynamic risk measures induced by BSDEs with jumps. In the case of a concave driver, we provide a dual representation of the associated convex risk measure (Section 5.2). The problem of dynamic risk measures under model ambiguity is addressed in Section 5.3. Finally, in Section 5.4, we interpret the dependence of the driver with respect to x in terms of the instantaneous interest rate. In the Appendix, we provide some useful additional properties on exponential local martingales, and BSDEs with jumps. 2. BSDEs with jumps: notation and definitions Let (Ω , F, P) be a probability space. Let W be a one-dimensional Brownian motion and N (dt, du) be a Poisson random measure with compensator ν(du)dt such that ν is a σ -finite measure on R∗ , equipped with its Borel field B(R∗ ). Let N˜ (dt, du) be its compensated process. Let F = {Ft , t ≥ 0} be the natural filtration associated with W and N . The results of this paper can be generalized to multi-dimensional Brownian motions and Poisson random measures without difficulty. Notation. Let P be the predictable σ -algebra on [0, T ] × Ω . For each T > 0 and p > 1, we use the following notation. • L p (FT ) is the set of random variables ξ which are FT -measurable and p-integrable. • H p,T is the set of real-valued predictable processes φ such that T 2p p ∥φ∥H p,T := E φt2 dt < ∞. 0
For β > 0 and φ ∈ H2,T , we introduce the norm ∥φ∥2β,T := E p
p
βs φ 2 ds . e s 0 := R∗ |ℓ(u)| p ν(du) < +∞.
T
• L ν is the set of borelian functions ℓ : R∗ → R such that ∥ℓ∥ p,ν The set L 2ν is a Hilbert space equipped with the scalar product ⟨δ, ℓ⟩ν := δ(u)ℓ(u)ν(du) for all δ, ℓ ∈ L 2ν × L 2ν , and the norm •
p,T Hν
R∗ ∥ℓ∥22,ν
=
R∗
|ℓ(u)|2 ν(du) < +∞, also denoted by ∥ℓ∥2ν .
is the set of processes l which are predictable, that is, measurable l : ([0, T ] × Ω × R∗ , P ⊗ B(R∗ )) → (R, B(R));
(ω, t, u) → lt (ω, u)
such that ∥l∥
p p,T
Hν
:= E
0
T
∥lt ∥2ν dt
2p
< ∞.
2 For β > 0 and l ∈ H2,T ν , we set ∥l∥ν,β,T := E
T 0
eβs ∥ls ∥2ν ds . p
• S p,T is the set of real-valued RCLL adapted processes φ with ∥φ∥ S p := E(sup0≤t≤T |φt | p ) < ∞.
3331
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357 p
When T is fixed and there is no ambiguity, we denote H p instead of H p,T , Hν instead of p,T Hν , and S p instead of S p,T . • T0 denotes the set of stopping times τ such that τ ∈ [0, T ] a.s. Definition 2.1 (Driver, Lipschitz Driver). A function f is said to be a driver if • f : [0, T ] × Ω × R2 × L 2ν → R (ω, t, x, π, ℓ(·)) → f (ω, t, x, π, ℓ(·)) is P ⊗ B(R2 ) ⊗ B(L 2ν )-measurable, • f (., 0, 0, 0) ∈ H2 . A driver f is called a Lipschitz driver if moreover there exists a constant C ≥ 0 such that d P ⊗ dt-a.s., for each (x1 , π1 , ℓ1 ), (x2 , π2 , ℓ2 ), | f (ω, t, x1 , π1 , ℓ1 ) − f (ω, t, x2 , π2 , ℓ2 )| ≤ C(|x1 − x2 | + |π1 − π2 | + ∥ℓ1 − ℓ2 ∥ν ). Definition 2.2 (BSDE with Jumps). A solution of a BSDE with jumps with terminal time T , terminal condition ξ and driver f consists of a triple of processes (X, π, l) satisfying lt (u) N˜ (dt, du); XT = ξ (2.1) − d X t = f (t, X t − , πt , lt (·))dt − πt d Wt − R∗
where X is a RCLL optional process, and π (resp. l) is an R-valued predictable process defined on Ω × [0, T ] (resp. Ω × [0, T ] × R∗ ) such that the stochastic integral with respect to W (resp. N˜ ) is well defined. This solution is denoted by (X (ξ, T ), π(ξ, T ), lt (ξ, T )). Note that the process f (t, X t − , πt , lt (·)) is predictable and satisfies f (t, X t − , πt , lt (·)) = f (t, X t , πt , lt (·))d P ⊗ dt-a.s. We recall the existence and uniqueness result for BSDEs with jumps established by Tang and Li (1994) in [22]. Theorem 2.3 (Existence and Uniqueness). Let T > 0. For each Lipschitz driver f , and each terminal condition ξ ∈ L 2 (FT ), there exists a unique solution (X, π, l) ∈ S 2,T × H2,T × H2,T ν of the BSDE with jumps (2.1). 3. Linear BSDEs with jumps We now focus on linear BSDEs with jumps which play a crucial role in the study of properties of general BSDEs. We first provide some useful properties of exponential local martingales driven by a Brownian motion and a Poisson random measure. 3.1. Some properties of exponential local martingales Let (βt ) be an R-valued predictable process, a.s. integrable with respect to d Wt . Let (γt (.)) be an R-valued predictable process defined on [0, T ] × Ω × R∗ , that is, P ⊗ B(R∗ )-measurable, and a.s. integrable with respect to N˜ (ds, du). Let M = (Mt )0≤t≤T be a local martingale given by t t Mt := βs d W s + γs (u) N˜ (ds, du). (3.2) 0
0
R∗
3332
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
Let Z = (Z t )0≤t≤T be the solution of d Z s = Z s − d Ms ; Z 0 = 1. The process Z is the so-called exponential local martingale associated with the local martingale M, denoted by E(M). It is given by the Dol´eans-Dade formula (see (A.60) in the Appendix): s s 1 s 2 E(M)s = exp βu d W u − β du − γr (u)ν(du)dr 2 0 u 0 0 R∗ (1 + γr (∆Yr )) (3.3) × 0
where Yt := R∗ u N ([0, t], du). Classically, if γt (∆Yt ) ≥ −1, 0 ≤ t ≤ T a.s. then we have E(M)t ≥ 0, 0 ≤ t ≤ T a.s. Note that this property still holds for general exponential local martingales (see Appendix). Since here M is driven by a Brownian motion and a Poisson random measure, we have more precisely the following property.
Proposition 3.1. Let (βt ) and (γt (.)) be predictable R-valued processes and let M be the local martingale defined by (3.2). The following assertions are equivalent. • (i) For each n ∈ N, γTn (∆YTn ) ≥ −1P-a.s., where (Tn )n∈N is the increasing sequence of stopping times corresponding to the jump times of Y . • (ii) γt (u) ≥ −1d P ⊗ dt ⊗ dν(u)-a.s. Moreover, if one of this condition is satisfied, then we have E(M)t ≥ 0, 0 ≤ t ≤ T a.s. Similarly, if γt (u) > −1d P ⊗ dt ⊗ dν(u)-a.s., then, for each t, E(M)t > 0 a.s. These precisions will be useful in the sequel, in particular to prove Theorem 5.2. Proof. For each s > 0, we have 0
n∈N
= E R∗ ×R+
1{γr (u)<−1} ν(du)dr ,
because ν(du)dt is the predictable compensator of the Poisson random measure N (du, dt). The result follows. We now provide a sufficient condition for the square integrability property of E(M). Proposition 3.2. Let (βt ) and (γt (.)) be predictable R-valued processes and let M be the local martingale defined by (3.2). Suppose that T T βs2 ds + ∥γs ∥2ν ds (3.4) 0
0
is bounded. Then, we have E[E(M)2T ] < +∞. Note that in this case, by martingale inequalities, (E(M)s )0≤t≤T ∈ S 2,T .
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
3333
Proof. By the product formula (or by using the Dol´eans-Dade formula (3.3)), we get E(M)2 = E(2M + [M, M]), t where [M, M]t = 0 βs2 ds + s≤t γs2 (∆Ys ). Now, t t t 2 2 2 γs (∆Ys ) = γs (u)N (ds, du) = ∥γs ∥ν ds + R∗
0
s≤t
0
0
R∗
γs2 (u) N˜ (ds, du).
It follows that 2
E(M) = E N + 0
= E(N ) exp
.
βs2 ds .
0
+ 0
βs2 ds +
.
∥γs ∥2ν
.
0
ds
∥γs ∥2ν ds ,
(3.5)
t where Nt := 2Mt + 0 R∗ γs2 (u) N˜ (ds, du). Note that N is a local martingale. Now, by T T assumption, there exists K > 0 such that exp{ 0 βs2 ds + 0 ∥γs ∥2ν ds} ≤ K a.s. Also, by (3.5), E(N ) is non negative. Since it is also a local martingale, it follows that it is a supermartingale. Hence, we have E[E(M)2T ] ≤ E[E(N )T ] K ≤ K , which ends the proof. Remark 3.3. For example, if the processes βt and ∥γt ∥ν are bounded, the random variable (3.4) is then bounded, and hence, by the above proposition, E(M)T ∈ L 2 . This property will be used in the study of linear BSDEs as well as in the comparison theorem (Theorem 4.2). For example, condition is satisfied when there exists ψ ∈ L 2ν such that dt ⊗ d P ⊗ dν(u)-a.s. |γt (u)| ≤ ψ(u). More generally, we have the following property: if βt and ∥γt ∥ p,ν are bounded, for all p ≥ 2, then E(M)T is p-integrable for all p ≥ 2. This property, as well as additional ones, is shown in the Appendix (see Proposition A.1). It will be used in Section 5.3, to solve a robust optimization problem, where some p-integrability conditions, with p > 2, are required. 3.2. Properties of linear BSDEs with jumps We now show that the solution of a linear BSDE with jumps can be written as a conditional expectation via an exponential semimartingale. Let (δt ) and (βt ) be R-valued predictable processes, supposed to be a.s. integrable with respect to dt and d Wt . Let (γt (.)) be a predictable R-valued process defined on [0, T ]×Ω ×R∗ , supposed to be a.s. integrable with respect to N˜ (ds, du). For each t ∈ [0, T ], let (Γt,s )s∈[t,T ] be the unique solution of the following forward SDE dΓt,s = Γt,s − δs ds + βs d Ws + γs (u) N˜ (ds, du) ; Γt,t = 1. (3.6) R∗
s
The process Γt,. can be written as Γt,s = e t δu du Z t,s , where (Z t,s )s∈[t,T ] is the solution of the following SDE d Z t,s = Z t,s − βs d Ws + γs (u) N˜ (ds, du) ; Z t,t = 1. R∗
Theorem 3.4. Let (δ, β, γ ) be a bounded predictable process. Let Γ be the so-called adjoint process defined as the solution of SDE (3.6). Suppose that Γ ∈ S 2 .
3334
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
Let (X t , πt , lt ) be the solution in S 2,T × H2,T × Hν2,T of the linear BSDE −d X t = (ϕt + δt X t + βt πt + ⟨γt , lt ⟩ν )dt − πt d Wt − lt (u) N˜ (dt, du); R∗
(3.7)
X T = ξ. The process (X t ) satisfies X t = E Γt,T ξ +
T
Γt,s ϕ(s)ds | Ft ,
0 ≤ t ≤ T, a.s.
(3.8)
t
Proof. Fix t ∈ [0, T ]. To simplify notation, let us denote Γt,s by Γs for s ∈ [t, T ]. By the Itˆo product formula, and denoting Γt,s by Γs for s ∈ [t, T ], we have: −d(X s Γs ) = −X s − dΓs − Γs − d X s − d[X, Γ ]s = −X s Γs δs ds + Γs [ϕs + δs X s + βs πs + ⟨γs , ls ⟩ν ] ds − βs πs Γs ds − Γs ⟨γs , ls ⟩ν ds − Γs (X s βs + πs )d Ws − Γs − ls (u)(1 + γs (u)) N˜ (du, ds) R
= Γs ϕs ds − d Ms , with d Ms = −Γs (X s βs + πs )d Ws − Γs − R ls (u)(1 + γs (u)) N˜ (du, ds). By integrating between t and T , we get T X t − ξ Γt,T = Γt,s ϕs ds − MT + Mt a.s. (3.9) t
Recall that Γt,. ∈ and that X ∈ S 2 , π ∈ H2 and l ∈ H2ν . Moreover, the processes δ, β and γ are bounded. It follows that the local martingale M is a martingale. Hence, by taking the conditional expectation in equality (3.9), we get equality (3.8). S2
This property together with Proposition 3.1 yields the following corollary, which will be used to prove the comparison theorems. Corollary 3.5. Suppose that the assumptions of Theorem 3.4 are satisfied. • Suppose that the inequality γt (u) ≥ −1 holds d P ⊗ dt ⊗ dν(u)-a.s. If ϕt ≥ 0, t ∈ [0, T ], d P ⊗ dt a.s. and ξ ≥ 0 a.s., then X t ≥ 0 a.s. for all t ∈ [0, T ]. • Suppose that the inequality γt (u) > −1 holds d P ⊗ dt ⊗ dν(u)-a.s. If ϕt ≥ 0, t ∈ [0, T ], d P ⊗ dt a.s. and ξ ≥ 0 a.s., and if X t0 = 0 a.s. for some t0 ∈ [0, T ], then ϕt = 0 d P ⊗ dt-a.s. on [t0 , T ], and ξ = 0 a.s on A. Proof. Let us prove the first assertion. Since γt (u) ≥ −1d P ⊗dt ⊗dν(u)-a.s., by Proposition 3.1, we get Γt,T ≥ 0 a.s. and the result follows from the representation formula for linear BSDEs (3.8). The second assertion follows from similar arguments and the fact that if γt (u) > −1d P ⊗ dt ⊗ dν(u)-a.s., then Γt,T > 0 a.s. Note that when ξ ≥ 0 and ϕ ≥ 0, if the process γ can take values < −1 with strictly positive probability, then the solution X of the linear BSDE may take strictly negative values. Moreover,
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
3335
suppose that ξ ≥ 0, ϕ ≥ 0 a.s. and that the process γ ≥ −1 a.s., but can take the value −1 with strictly positive probability. Then, in general, the equality X 0 = 0 does not imply that ξ = 0 and ϕ = 0 a.s. This is illustrated in the example below. Example 3.1. Suppose that γ is a real constant, δ = β = 0, ϕ = 0, ν(du) := δ1 (du), where δ1 denotes Dirac measure at 1. The process Nt := N ([0, t] × {1}) is then a Poisson process with parameter 1, and we have N˜ t := N˜ ([0, t] × {1}) = Nt − t. Suppose that the driver f is given by f (ℓ) := ⟨γ , ℓ⟩ν = γ ℓ(1).
(3.10)
One can show that the associated adjoint process Γ0,s , denoted here by Γs , satisfies ΓT = (1 + γ ) NT e−γ T .
(3.11)
Consider now the special case when the terminal condition is given by ξ := N T . Let X be the solution of the BSDE associated with driver f and this terminal condition. By the representation property of linear BSDEs with jumps (see equality (3.8)) and classical computations, we get X 0 = E[ΓT N T ] = (1 + γ )T . Consequently, if γ < −1, then X 0 < 0 although ξ = N T ≥ 0 a.s. Consider now the special case when f (ℓ) = −ℓ(1)
and ξ = 1{T1 ≤T } .
(3.12)
From equality (3.11) with γ = −1, it follows that ΓT ≥ 0. a.s. and ΓT = 0 a.s. on {N T ≥ 1} = {T1 ≤ T }. The solution X of the associated BSDE satisfies X 0 = E[ΓT 1{T1 ≤T } ] = 0 although ξ ≥ 0 a.s. and P(ξ > 0) = P(T1 ≤ T ) > 0. 4. Comparison theorems and optimization principles for BSDEs with jumps 4.1. Comparison theorems The comparison theorems are key properties of BSDEs and play a crucial role in the study of optimization problems expressed in terms of BSDEs. In [21], Royer established a comparison theorem and a strict comparison theorem for BSDEs with jumps. Here, we prove these theorems under less restrictive hypotheses and provide some optimization principles for BSDEs with jumps. We begin by a preliminary result which will be used to prove the comparison theorems. Lemma 4.1 (Comparison Result with Respect to a Linear BSDE). Let (δ, β, γ ) be a bounded predictable process and for each t, let Γt,. be the exponential semimartingale solution of SDE (3.6). Suppose that Γt,. ∈ S 2 ∀t
and γt (u) ≥ −1 d P ⊗ dt ⊗ ν(du)-a.s.
Let ξ ∈ L 2 (FT ) and h be a driver (non necessarily Lipschitz). Let (X t , πt , lt ) be a solution in S 2,T × H2,T × H2,T ν of the BSDE − d X t = h(t, X t , πt , lt (·))dt − πt d Wt − lt (u) N˜ (dt, du); X T = ξ. (4.13) R∗
Let ϕ ∈ H2,T . Suppose that h(t, X t , πt , lt ) ≥ ϕt + δt X t + βt πt + ⟨γt , lt ⟩ν ,
0 ≤ t ≤ T,
d P ⊗ dt-a.s.
(4.14)
3336
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
Then, X is a.s. greater or equal to the solution given by (3.8) of the linear BSDE (3.7). In other terms, T X t ≥ E Γt,T ξ + Γt,s ϕ(s)ds | Ft , 0 ≤ t ≤ T, a.s. (4.15) t
Proof. Fix t ∈ [0, T ]. Since γt (u) ≥ −1d P ⊗ dt ⊗ ν(du)-a.s., it follows that Γt,. ≥ 0 a.s. To simplify notation, let us denote Γt,s by Γs for s ∈ [t, T ]. By the Itˆo product formula, and denoting Γt,s by Γs for s ∈ [t, T ], we have: −d(X s Γs ) = −X s − Γs − δs ds + βs d Ws + γs (u) N˜ (du, ds) + Γs − h(s, X s , πs , ls )ds R − Γs − πs d Ws + ls (u) N˜ (du, ds) − πs Γs βs ds R − Γs − γs (u)ls (u)N (du, ds). R
Using inequality (4.14) together with the non negativity of Γ , and doing the same computations as in the proof of Theorem 3.4, we derive that −d(X s Γs ) ≥ Γs ϕs ds − d Ms , where M is a martingale (since Γt,. ∈ S 2 and since (δt ) and (βt ) are bounded). By integrating between t and T and by taking the conditional expectation, we derive inequality (4.15). Now, by Theorem 3.4, the second member of this inequality corresponds to the solution of the linear BSDE (3.7). The proof is thus complete. The comments made in the linear case (see in particular Example 3.1, with γ < −1 and ξ = N T ) show the relevance of the assumption γt (u) ≥ −1 in the above lemma. Note also that if δ, β, γ are bounded and if |γt | ≤ ψ, where ψ ∈ L 2ν , Proposition 3.2 yields that, for each t, Γt,. ∈ S 2 . Using this remark together with the above lemma, we now show the general comparison theorems for BSDEs with jumps. Theorem 4.2 (Comparison Theorem for BSDEs with Jumps). Let ξ1 and ξ2 ∈ L 2 (FT ). Let f 1 be a Lipschitz driver. Let f 2 be a driver. For i = 1, 2, let (X ti , πti , lti ) be a solution in S 2,T × H2,T × H2,T ν of the BSDE i i i i i X Ti = ξi . lti (u) N˜ (dt, du); −d X t = f i (t, X t , πt , lt )dt − πt d Bt − (4.16) R∗
Assume that there exists a bounded predictable process (γt ) such that dt ⊗ d P ⊗ ν(du)-a.s., γt (u) ≥ −1 where ψ ∈
L 2ν ,
and
|γt (u)| ≤ ψ(u),
(4.17)
and such that
f 1 (t, X t2 , πt2 , lt1 ) − f 1 (t, X t2 , πt2 , lt2 ) ≥ ⟨γt , lt1 − lt2 ⟩ν ,
t ∈ [0, T ],
dt ⊗ d P a.s.
(4.18)
Assume that ξ1 ≥ ξ2 a.s. dt ⊗ d P a.s.
and
f 1 (t, X t2 , πt2 , lt2 ) ≥ f 2 (t, X t2 , πt2 , lt2 ),
t ∈ [0, T ],
(4.19)
3337
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
Then, we have X t1 ≥ X t2
(4.20)
a.s. for all t ∈ [0, T ].
Moreover, if inequality (4.19) is satisfied for (X t1 , πt1 , lt1 ) instead of (X t2 , πt2 , lt2 ) and if f 2 (instead of f 1 ) is Lipschitz and satisfies (4.18), then inequality (4.20) still holds. Proof. Put X¯ t = X t1 − X t2 ; π¯ t = πt1 − πt2 ; l¯t (u) = lt1 (u) − lt2 (u). Then ¯ −d X t = h t dt − π¯ t d Wt − l¯t (u) N˜ (dt, du); X¯ T = ξ1 − ξ2 , R∗
where h t := f 1 (t, X t1 , πt1 , lt1 ) − f 2 (t, X t2 , πt2 , lt2 ). The proof now consists to show that there exists δ and β such that h t satisfies inequality (4.14) and then to apply the comparison result with respect to a linear BSDE (Lemma 4.1). We have h t = f 1 (t, X t1 , πt1 , lt1 ) − f 1 (t, X t2 , πt1 , lt1 ) + f 1 (t, X t2 , πt1 , lt1 ) − f 1 (t, X t2 , πt2 , lt1 ) + f 1 (t, X t2 , πt2 , lt1 ) − f 1 (t, X t2 , πt2 , lt2 ) + f 1 (t, X t2 , πt2 , lt2 ) − f 2 (t, X t2 , πt2 , lt2 ). Let ϕt := f 1 (t, δt := βt :=
X t2− , πt2 , lt2 ) − f 2 (t,
X t2− , πt2 , lt2 )
(4.21)
and
f 1 (t, X t1− , πt1 , lt1 ) − f 1 (t, X t2− , πt1 , lt1 ) 1{ X¯ t ̸=0} X¯ t f 1 (t, X t2− , πt1 , lt1 ) − f 1 (t, X t2− , πt2 , lt1 ) π¯ t
1{π¯ t ̸=0} .
By the assumption (4.18) on f 1 , we get h t ≥ ϕt + δt X¯ t + βt π¯ t + ⟨γt l¯t ⟩ν dt ⊗ d P-a.s. Since f 1 is Lipschitz, the predictable processes (δt ) and (βt ) are bounded. By assumption (4.17), it follows from Proposition 3.2 that for each t, Γt,. ∈ S 2 , where the process Γt,. is defined by (3.6). Since γt (u) ≥ −1, it follows that Γt,. ≥ 0 a.s. By the comparison result with respect to a linear BSDE (see Lemma 4.1), we thus derive that T ¯ X t ≥ E Γt,T (ξ1 − ξ2 ) + Γt,s ϕ(s)ds | Ft , 0 ≤ t ≤ T, a.s. (4.22) t
Now, by assumption, ϕt ≥ 0d P ⊗ dt-a.s. and ξ1 − ξ2 ≥ 0 a.s. Hence X .1 ≥ X .2 . The second assertion follows from the same arguments but linearizing f 2 instead of f 1 .
Remark 4.3. Note that the presence of jumps as well as inequality (4.18), which is a relatively weak assumption, do not really allow us to proceed with a linearization method as in the Brownian case (see [9]). Indeed, there is somehow an asymmetry between the role of negative jumps and that of positive ones of X¯ . The above lemma thus appears as a preliminary step before proving the general comparison theorem in the case of jumps. We now provide a strict comparison theorem, which holds under an additional assumption. This property is an important tool for the study of optimization problems expressed in terms of BSDEs since it allows us to obtain a necessary condition of optimality (see Proposition 4.9 assertion 2). Theorem 4.4 (Strict Comparison Theorem). Suppose that the assumptions of Theorem 4.2 hold and that the inequality γt (u) > −1 holds dt ⊗ d P ⊗ dν(u)-a.s.
3338
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
If X t10 = X t20 a.s. on A for some t0 ∈ [0, T ] and A ∈ Ft0 , then X ·1 = X ·2 a.s. on [t0 , T ] × A, ξ1 = ξ2 a.s. on A and (4.19) holds as an equality in [t0 , T ] × A. Proof. The result follows from inequality (4.22) and the second assertion of Corollary 3.5.
Remark 4.5. Example 3.1 with (3.12) shows the relevance of the assumption γt (u) > −1 in the strict comparison theorem. Note that the conditions under which the above comparison theorems hold, are weaker that those made in [21] (see some more details in Section 4.3). 4.2. Optimization principles From the comparison theorem, we derive optimization principles for minima of BSDEs which generalize those of El Karoui et al. (1997) to the case of jumps. Theorem 4.6 (Optimization Principle I). Let ξ in L 2 (FT ) and let ( f, f α ; α ∈ AT ) be a family of Lipschitz drivers. Let (X, π, l) (resp. (X α , π α , l α )) be the solution of the BSDE associated with terminal condition ξ and driver f (resp. f α ). Suppose that f (t, X t , πt , lt ) = ess inf f α (t, X t , πt , lt ) = f α¯ (t, X t , πt , lt ), α
0 ≤ t ≤ T, d P ⊗ dt-a.s. for some parameter α¯ ∈ AT and that for each α ∈ A, there exists a predictable process α
f (t,
X t , πt , ltα ) −
α
f (t, X t , πt , lt ) ≥
⟨γtα , ltα
γα
− lt ⟩ν ,
(4.23) satisfying (4.17) and t ∈ [0, T ], dt ⊗ d P a.s.
(4.24)
Then, X t = ess inf X tα = X tα¯ , α
0 ≤ t ≤ T, a.s.
(4.25)
Proof. For each α, since f (t, X t , πt , lt ) ≤ f α (t, X t , πt , lt )d P ⊗ dt-a.s., the comparison Theorem 4.2 gives that X t ≤ X tα , 0 ≤ t ≤ T, P-a.s. It follows that X t ≤ ess inf X tα α
0 ≤ t ≤ T, P-a.s.
(4.26)
By assumption, X t is the solution of the BSDE associated with f α¯ . By uniqueness of the solution of the Lipschitz BSDE associated with f α¯ , we derive that X t = X tα¯ , 0 ≤ t ≤ T , a.s. which implies that inequality in (4.26) is an equality. Theorem 4.7 (Optimization Principle II). Let ξ in L 2 (FT ) and let ( f, f α ; α ∈ A) be a family of Lipschitz drivers. Suppose that the drivers f α , α ∈ A are equi-Lipschitz with common Lipschitz constant C. Let (X, π, l) be a solution of the BSDE associated with terminal condition ξ and driver f and (X α , π α , l α ) be the solution of the BSDE associated with terminal condition ξ and driver f α . Suppose that for each α ∈ A, f (t, X t , πt , lt ) ≤ f α (t, X t , πt , lt ),
0 ≤ t ≤ T, d P ⊗ dt-a.s.
(4.27)
and that there exists γ α and δ α satisfying (4.17) and ⟨γtα , ltα − lt ⟩ν ≤ f α (t, X t , πt , ltα ) − f α (t, X t , πt , lt ) ≤ ⟨δtα , l α − lt ⟩ν ,
t ∈ [0, T ], dt ⊗ d Pa.s.
(4.28)
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
3339
Suppose also that for each ε > 0, there exists α ε ∈ A such that ε
f α (t, X t , πt , lt ) − ϵ ≤ f (t, X t , πt , lt ),
0 ≤ t ≤ T, d P ⊗ dt-a.s.
(4.29)
Then, X t = ess inf X tα , α
0 ≤ t ≤ T, a.s.
(4.30)
Proof. By the comparison theorem, X t ≤ X tα , 0 ≤ t ≤ T , a.s. Hence, X t ≤ ess infα X tα a.s. for each t ∈ [0, T ]. Let us now show the inverse inequality. By estimation (A.65) in the Appendix, ε with η = C12 and β = 3C 2 + 2C, we get |X t − X tα |2 ≤ C12 eβT T ε a.s., which yields that ε
X t ≥ X tα − εK C,T ≥ ess inf X tα − εK C,T , α
0 ≤ t ≤ T, a.s.,
(4.31)
βT √ where K C,T = C1 e 2 T . Since this inequality holds for each ε > 0, we obtain X t ≥ ess infα X tα a.s. Hence, this inequality is an equality.
Remark 4.8. Note that α ε is ε ′ -optimal for (4.30) with ε ′ = εK C,T since by (4.31), we have ε X t ≥ X tα − ε ′ a.s. By the strict comparison theorem, we derive the following necessary optimality condition. Proposition 4.9. Suppose that Assumptions of Theorem 4.6 (resp. Theorem 4.7) are satisfied. Let αˆ ∈ A and let S ∈ T0 . Suppose αˆ is S-optimal, that is, ess inf X αS = X αSˆ α
a.s.
and that the associated process γtαˆ satisfies the strict inequality γtαˆ > −1
d P ⊗ dt ⊗ dν-a.s.
Then, we have f (t, X t , πt , lt ) = f αˆ (t, X t , πt , lt ),
S ≤ t ≤ T,
d P ⊗ dt-a.s.
4.3. Remarks on the assumptions of the comparison theorem Let us introduce the following condition. Let T > 0. Assumption 4.1. A driver f is said to satisfy Assumption 4.1 if the following holds: d P ⊗dt-a.s. for each (x, π, l1 , l2 ) ∈ R2 × (L 2ν )2 , f (t, x, π, l1 ) − f (t, x, π, l2 ) ≥ ⟨θtx,π,l1 ,l2 , l1 − l2 ⟩ν , with θ : [0, T ] × Ω × R2 × (L 2ν )2 → L 2ν ;
(ω, t, x, π, l1 , l2 ) → θtx,π,l1 ,l2 (ω, .)
P ⊗ B(R2 ) ⊗ B((L 2ν )2 )-measurable, bounded, and satisfying d P ⊗ dt ⊗ dν(u)-a.s., for each (x, π, l1 , l2 ) ∈ R2 × (L 2ν )2 , θtx,π,l1 ,l2 (u) ≥ −1 where ψ ∈ L 2ν .
and
|θtx,π,l1 ,l2 (u)| ≤ ψ(u),
(4.32)
3340
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
Assumption 4.1 is stronger than the one made in the comparison theorem (Theorem 4.2). Indeed, if the driver f 1 satisfies Assumption 4.1, then it satisfies condition (4.18) with γt = X 2− ,πt2 ,lt1 ,lt2
θt t , but the converse does not hold. Note also that condition (4.18) is only required along (X t2 , πt2 , lt2 ) (the solution of the second BSDE) as well as lt1 (the third coordinate of the solution of the first BSDE) but not necessarily for all (x, π, l). Also, if f α satisfies Assumption 4.1, then it satisfies the weaker condition (4.28) assumed in the optimization principle II. An important point is that Assumption 4.1 ensures a monotony property with respect to the terminal condition, in the following sense: for all ξ 1 , ξ 2 ∈ L 2 (FT ) with ξ 1 ≥ ξ 2 a.s., we have X (ξ 1 ) ≥ X (ξ 2 ) a.s., where X (ξ 1 ) (resp. X (ξ 2 )) denotes the solution of the BSDE associated with f and ξ 1 (resp. ξ 2 ). This clearly follows from the comparison theorem applied to f 1 = f 2 = f . As we will see in the next section, this assumption will be appropriate to ensure the monotonicity property of a dynamic risk measure induced by a BSDE. Remark 4.10. Assumption 4.1 implies that for each (x, π, l1 , l2 ), f (t, x, π, l1 ) − f (t, x, π, l2 ) ≤ ⟨γtx,π,l1 ,l2 , l1 − l2 ⟩ν
(4.33)
where γtx,π,l1 ,l2 (u) = θtx,π,l2 ,l1 (u). Note that Assumption 4.1 is weaker than the assumption made by Royer [21], where, in particular, it is moreover required that θtx,π,l1 ,l2 ≥ C1 (or equivalently γtx,π,l1 ,l2 ≥ C1 ) with C1 > −1. 5. Dynamic risk measures induced by BSDEs with jumps, robust optimization problems 5.1. Definitions and first properties ′
Let T ′ > 0 be a time horizon. Let f be a Lipschitz driver such that f (·, 0, 0, 0) ∈ H2,T . We define the following functional: for each T ∈ [0, T ′ ] and ξ ∈ L 2 (FT ), set f
ρt (ξ, T ) = ρt (ξ, T ) := −X t (ξ, T ),
0 ≤ t ≤ T,
(5.34)
where X t (ξ, T ) denotes the solution of the BSDE (2.1) with terminal condition ξ and terminal time T . If T represents a given maturity and ξ a financial position at time T , then ρt (ξ, T ) will be interpreted as the risk measure of ξ at time t. The functional ρ : (ξ, T ) → ρ· (ξ, T ) defines then a dynamic risk measure induced by the BSDE with driver f . We now provide properties of such a dynamic risk measure. We point out that, contrary to the Brownian case, the monotonicity property of ρt , which is naturally required for risk measures, is not automatically satisfied and needs Assumption 4.1, introduced in Section 4.3. • Consistency. By the flow property (see (A.69) in the Appendix), ρ is consistent: more precisely, let T ∈ [0, T ′ ] and let S ∈ T0,T be a stopping time, then for each time t smaller than S, the riskmeasure associated with position ξ and maturity T coincides with the risk-measure associated with maturity S and position −ρ S (ξ, T ) = X S (ξ, T ), that is, (CS) ∀t ≤ S, ρt (ξ, T ) = ρt (−ρ S (ξ, T ), S) a.s. • Continuity. Let T ∈ [0, T ′ ]. Let {θ α , α ∈ R} be a family of stopping times in T0,T , converging a.s. to a stopping time θ ∈ T0,T as α tends to α0 . Let (ξ α , α ∈ R) be a family of random variables such that E[ess supα (ξ α )2 ] < +∞, and for each α, ξ α is Fθ α -measurable. Suppose also that ξ α converges a.s. to an Fθ -measurable random variable ξ as α tends to α0 . Then, for each S ∈ T0,T , the random variable ρ S (ξ α , θ α ) → ρ S (ξ, θ ) a.s. and the process ρ(ξ α , θ α ) → ρ(ξ, θ ) in S 2,T when α → α0 (see Proposition A.6 in the Appendix).
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
3341
• zero–one law. If f (t, 0, 0) = 0, then the risk-measure associated with the null position is equal to 0. More precisely, the risk-measure satisfies the zero–one law property: (ZO) ρt (1 A ξ, T ) = 1 A ρt (ξ, T )a.s for t ≤ T , A ∈ Ft , and ξ ∈ L 2 (FT ). • Translation invariance. If f does not depend on x, then the associated risk-measure satisfies the translation invariance property: (TI) ρt (ξ + ξ ′ , T ) = ρt (ξ, T ) − ξ ′ , for any ξ ∈ L 2 (FT ) and ξ ′ ∈ L 2 (Ft ). This situation can be interpreted as a market with an interest rate rt equal to zero. The case rt ̸= 0 corresponds to a BSDE with a driver of the form −rt x + g(t, π, l) and can be reformulated as a problem with a driver independent of x by discounting the positions ξ (see Section 5.4.1). The general case when f depends on x in a non-linear way may be interpreted in terms of ambiguity on the interest rate (see Sections 5.4.2 and 5.4.3). • Homogeneous property. If f is positively homogeneous with respect to (x, π, l), then the riskmeasure ρ is positively homogeneous with respect to ξ , that is, for λ ≥ 0, T ∈ [0, T ′ ] and ξ ∈ L 2 (FT ), ρ. (λξ, T ) = λρ. (ξ, T ). From now on, we assume that the driver f satisfies Assumption 4.1 with T = T ′ . The comparison theorem for BSDEs with jumps (see Theorem 4.2) can then be applied, and yields the monotonicity of the risk measure ρ. • Monotonicity. ρ is nonincreasing with respect to ξ , that is : for each T ∈ [0, T ′ ]. (MO) For each ξ 1 , ξ 2 ∈ L 2 (FT ), if ξ 1 ≥ ξ 2 a.s., then ρt (ξ 1 , T ) ≤ ρt (ξ 2 , T ), 0 ≤ t ≤ T a.s. Note that the dynamic risk measure ρ f associated with driver f (ℓ) = γ ℓ(1) with γ < −1, in the case of a Poisson process with parameter 1, is not monotone (see Example 3.1). The comparison theorem also yields the following property. • Convexity. If f is concave with respect to (x, π, l), then the dynamic risk-measure ρ is convex, that is, for any λ ∈ [0, 1], T ∈ [0, T ′ ], ξ 1 , ξ 2 ∈ L 2 (FT ), ρ(λξ 1 + (1 − λ)ξ 2 , T ) ≤ λρ(ξ 1 , T ) + (1 − λ)ρ(ξ 2 , T ).
(5.35)
Suppose now that in Assumption 4.1, we have θtx,π,l1 ,l2 > −1. The strict comparison theorem (see Theorem 4.4) can then be applied and yields the no arbitrage property. • No Arbitrage. The dynamic risk measure ρ satisfies the no arbitrage property: for each T ∈ [0, T ′ ], and ξ 1 , ξ 2 ∈ L 2 (FT ) (NA) If ξ 1 ≥ ξ 2 a.s. and if ρt (ξ 1 , T ) = ρt (ξ 2 , T ) a.s. on an event A ∈ Ft , then ξ 1 = ξ 2 a.s. on A. Contrary to the monotonicity property, the no arbitrage property is generally not required for risk-measures. Note that the dynamic risk measure ρ f associated driver f (ℓ) = −ℓ(1), in the case of a Poisson process with parameter 1, is monotone but does not satisfy the no arbitrage property (see Example 3.1).
3342
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
The inverse problem. We now look at the inverse problem: when can a dynamic risk-measure be represented by a BSDE with jumps? The following proposition gives an answer. Proposition 5.1 (Royer M.). Suppose that the intensity measure ν of the Poisson random measure satisfies R∗ (1 ∧ u 2 )ν(du) < +∞. Let ρ be a dynamic risk measure, that is, a map which, to each ξ ∈ L 2 (FT ) and T ≥ 0, associates an adapted RCLL process (ρt (ξ, T )){0≤t≤T } . Suppose that ρ is nonincreasing, consistent, translation invariant and satisfies the zero–one law 1 as well as the no arbitrage property. Moreover, suppose that ρ satisfies the so-called E C,C domination property: there exists C > 0 and −1 < C1 ≤ 0 such that ρt (ξ + ξ ′ , T ) − ρt (ξ, T ) ≥ −X tC,C1 (ξ ′ , T ),
(5.36)
for any ξ, ξ ′ ∈ L 2 (FT ), where X tC,C1 (ξ ′ , T ) is the solution of the BSDE associated with terminal condition ξ ′ and driver f C,C1 (t, π, ℓ) := C|π| + C R∗ (1 ∧ |u|)ℓ+ (u)ν(du) − C1 R∗ (1 ∧ |u|)ℓ− (u)ν(du). Then, there exists a Lipschitz driver f (t, π, ℓ) such that ρ = ρ f , that is, ρ is the dynamic risk measure induced by a BSDE with jumps with driver f (t, π, ℓ). This proposition corresponds to Theorem 4.6 in [21], here written in terms of risk measures, which generalizes the result shown in the Brownian case by [5] to the case of jumps. For the proof, we refer the reader to [21]. 5.2. Representation of convex dynamic risk measures We now provide a representation for dynamic risk measures induced by concave BSDEs with jumps (which thus are convex risk measures). This dual representation is given via a set of probability measures which are absolutely continuous with respect to P. Let f be a given driver independent of x. For each (ω, t), let F(ω, t, ·, ·, ·) be the polar function of f with respect to (π, ℓ), defined for each (α 1 , α 2 ) in R × L 2ν by F(ω, t, α 1 , α 2 ) :=
sup (π,ℓ)∈R2 ×L 2ν
[ f (ω, t, π, ℓ) − α 1 π − ⟨α 2 , ℓ⟩ν ].
(5.37)
Theorem 5.2. Suppose that the Hilbert space L 2ν is separable. Let f be a Lipschitz driver with Lipschitz constant C, which does not depend on x. Suppose also that f satisfies Assumption 4.1 and is concave with respect to (π, ℓ). Let T ∈ [0, T ′ ]. Let AT be the set of predictable processes α = (α 1 , α 2 (.)) such that F(t, αt1 , αt2 ) belongs to H2T , where F is defined by (5.37). For each α ∈ AT , let Q α be the probability absolutely continuous with respect to P which admits Z Tα as density with respect to P on FT , where Z α is the solution of d Z tα = Z tα− αt1 d Wt + αt2 (u)d N˜ (dt, du) ; Z 0α = 1. (5.38) R∗
The convex risk-measure ρ(., T ) has the following representation: for each ξ ∈ L 2 (FT ), ρ0 (ξ, T ) = sup E Q α (−ξ ) − ζ (α, T ) , (5.39) α∈AT
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
3343
where the function ζ , called penalty function, is defined, for each T and α ∈ AT , by T ζ (α, T ) := E Q α F(s, αs1 , αs2 )ds . 0
Moreover, for each ξ ∈ L 2 (FT ), there exists α¯ = (α¯ 1 , α¯ 2 ) ∈ AT such that F(t, α¯ t1 , α¯ t2 ) = f (t, πt , lt ) − α¯ t1 πt − ⟨α¯ t2 , lt ⟩ν ,
0 ≤ t ≤ T,
d P ⊗ dt-a.s.,
where (X, π, l) is the solution to the BSDE with driver f , terminal time T and terminal condition ξ . Also, the process α¯ is optimal for (5.39). Remark 5.3. In the particular case of a Brownian filtration, this representation corresponds to that provided in [9,6] by BSDEs arguments. In this case, contrary to the case with jumps, all the probability measures Q α , α ∈ A, are equivalent to P. In our framework, due to the presence of jumps, the controls α are not valued in a finite dimensional space as R p but in the Hilbert space R × L 2ν . Note that the separability assumption made on this Hilbert space is used in the proof to solve some measurability problems. In particular, it allows us to apply the section theorem of [7], which requires that the space is lusinian, that is, isomorph to a borelian part of a polish space. Note that the above representation is related to some recent studies on robust portfolio optimization, with a specific quadratic driver (see [13]). Proof. Since, by assumption, R × L 2ν is separable, it admits a dense countable subset I . Since f is continuous with respect to (π, ℓ), the supremum in (5.37) thus coincides with the supremum over I , which implies the measurability of F. By results of convex analysis in Hilbert spaces (see e.g. Ekeland and Temam (1976) [8]), the polar function F is convex. It is also lower semicontinuous with respect to α 1 , α 2 as supremum of continuous functions. Also, since f is concave and continuous, f and F satisfy the conjugacy relation, that is, f (ω, t, π, ℓ) =
inf {F(ω, t, α1 , α2 ) + α1 π + ⟨α2 , ℓ⟩ν },
α∈Dt (ω)
where for each (t, ω), Dt (ω) is the non empty set of α = (α1 , α2 ) ∈ R × L 2ν such that F(ω, t, α1 , α2 ) > −∞. Now, the following lemma holds. Lemma 5.4. For each (t, ω), Dt (ω) ⊂ U , where U is the closed subset of the Hilbert space R × L 2ν of the elements α = (α1 , α2 ) such that α1 is bounded by C and ν(du)-a.s α2 (u) ≥ −1
and
|α2 (u)| ≤ ψ(u) ∧ C,
(5.40)
where C is the Lipschitz constant of f . For each process αt = (αt1 , αt2 ) ∈ AT , let f α be the associated linear driver defined by f α (ω, t, π, ℓ) := F(ω, t, αt1 (ω), αt2 (ω)) + αt1 (ω)π + ⟨αt2 (ω), ℓ⟩ν . Note first that for each α ∈ AT , f α ≥ f . Let T ∈ [0, T ′ ] and ξ ∈ L 2 (FT ). Let (X (ξ, T ), π(ξ, T ), l(ξ, T )) (also denoted (X, π, l)) be the solution in S 2 × H2 × H2ν of the BSDE associated with driver f , terminal time T and terminal condition ξ . The following lemma holds.
3344
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
Lemma 5.5. There exists a process α¯ = (α¯ 1 , α¯ 2 (.)) ∈ AT such that f (t, πt , ℓt ) = ess inf { f α (t, πt , ℓt )} = f α¯ (t, πt , ℓt ), α∈AT
0 ≤ t ≤ T,
d P ⊗ dt-a.s.
By the optimization principle for BSDEs with jumps (see Theorem 4.6), we thus derive that X 0 (ξ, T ) = inf X 0α (ξ, T ) = X 0α¯ (ξ, T ) α∈AT
(5.41)
where for each process α ∈ AT , X α (ξ, T ) is the solution of the linear BSDE associated with driver f α , terminal time T and terminal condition ξ . Let α = (α 1 , α 2 ) ∈ AT . By Lemma 5.4, |αt2 (u)| ≤ ψ(u) ∧ Cd P ⊗ dt ⊗ ν(du) a.s. Hence, by Proposition 3.2, the process Z α , defined by (5.38), belongs to S 2 . Consequently, by the representation formula of linear BSDEs (see (3.8)), we have T Z sα F(s, αs1 , αs2 )ds . X 0α (ξ, T ) = E Z Tα ξ + 0
Now, by Lemma 5.4, we also have that αt2 ≥ −1dt ⊗ d P ⊗ dν-a.s. Hence, (Z tα )0≤t≤T is a non negative martingale and the probability Q α which admits Z Tα as density with respect to P on FT is well defined. We thus have T α 1 2 X 0 (ξ, T ) = E Q α ξ + F(s, αs , αs )ds , 0
which completes the proof of the theorem.
Proof of Lemma 5.4. Without loss of generality, we can suppose that Assumption 4.1 is satisfied for each (ω, t). Let (t, ω) ∈ [0, T ] × Ω and let α = (α1 , α2 ) ∈ Dt (ω). Let us first show that α2 ≥ −1ν-a.s. Suppose by contradiction that ν({u ∈ R∗ , α2 (u) < −1}) > 0. Since f satisfies Assumption 4.1, the following inequality f (ω, t, 0, l) ≥ f (ω, t, 0, 0) + ⟨θt0,l,0 (ω), l⟩ν holds for each l ∈ L 2ν . It follows that, using the definition of F (see (5.37)), F(ω, t, α1 , α2 ) ≥ f (ω, t, 0, l) − ⟨α2 , l⟩ν ≥ f (ω, t, 0, 0) + ⟨θt0,l,0 (ω) − α2 , l⟩ν . By applying this inequality to l := n1{α2 <−1} , where n ∈ N, we thus derive that, F(ω, t, α1 , α2 ) ≥ f (ω, t, 0, 0) + n θt0,l,0 (ω, u) − α2 (u) ν(du), {α2 <−1}
and this holds for each n ∈ N. Now, θt0,l,0 (ω, u) ≥ −1. By letting n tend to +∞ in this inequality, we get F(ω, t, α1 , α2 ) = +∞, which provides the expected contradiction since (α1 , α2 ) ∈ Dt (ω). We thus have proven that α2 ≥ −1ν-a.s. By similar arguments, one can show that α1 is bounded by C and that |α2 (u)| ≤ ψ(u) ∧ Cν(du)-a.s., which ends the proof. Proof of Lemma 5.5. The proof is divided in two steps.
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
3345
Step 1: Let us first prove that for each (ω, t), there exists α¯ = (α¯ 1 , α¯ 2 ) ∈ U such that inf {F(ω, t, α1 , α2 ) + α1 πt (ω) + ⟨α2 , lt (ω)⟩ν }
α∈U
= F(ω, t, α¯ 1 , α¯ 2 ) + α¯ 1 πt (ω) + ⟨α¯ 2 , lt (ω)⟩ν .
(5.42)
The proof is based on classical arguments of convex analysis. Fix (ω, t). The set U is strongly closed and convex in R × L 2ν . Hence, U is closed for the weak topology. Moreover, since U is bounded, it is a compact set for the weak topology. Let φ be the function defined for each α = (α1 , α2 ) ∈ R × L 2ν by φ(α) := F(ω, t, α1 , α2 ) + α1 πt (ω) + ⟨α2 , lt (ω)⟩ν . This function is convex and lower semi-continuous (l.s.c.) for the strong topology in R × L 2ν . By classical results of convex analysis, it is l.s.c. for the weak topology. Now, there exists a sequence α n = (α1n , α2n )n∈N of U such that φ(α n ) → infα∈U φ(α) as n → ∞. Since U is weakly compact, there exists an extracted sequence still denoted by (α n ) which converges for the weak topology to α¯ = (α¯ 1 , α¯ 2 ) for some α ∈ U . Since φ is l.s.c. for the weak topology, it follows that φ(α) ¯ ≤ lim inf φ(α n ) = infα∈U φ(α). Therefore, φ(α) ¯ = infα∈U φ(α). Hence α¯ satisfies (5.42), which ends the proof of step 1. Step 2: Let us now introduce the set U of processes α: [0, T ] × Ω → R × L 2ν ; (t, ω) → αt (ω, .) which are measurable with respect to σ -algebras P and B(R) ⊗ B(L 2ν ) and which take their values in U d P ⊗ dt-a.s. Since the Hilbert space L 2ν is supposed to be separable, it is a polish space. Hence, the section theorem Section 81 in the Appendix of Ch. III in Dellacherie and Meyer (1975) [7] can be applied. It follows that there exists a process α¯ = (α¯ 1 , α¯ 2 (.)) which belongs to U such that, d P ⊗ dt-a.s., f (t, πt , ℓt ) = ess inf { f α (t, πt , ℓt )} = f α¯ (t, πt , ℓt ), αt ∈U
0≤t ≤T (5.43)
d P ⊗ dt-a.s. Let us show that the process α¯ t2 (.) is predictable. Since L 2ν is a separable Hilbert space, there exists a countable orthonormal basis (ei )i∈N of L 2ν . For each i ∈ N, define λit (ω) = ⟨α¯ t2 (ω), ei ⟩ν . Since on L 2ν , the process (λit ) is P-measurable. As α¯ t2 (u) = i the map ⟨., ei ⟩ν is continuous 2 : [0, T ] × Ω × R∗ → R; (t, ω, u) → α (ω)e (u), it follows that α ¯ ¯ t2 (ω, u) is P ⊗ B(R∗ )λ i i t measurable. It is thus predictable. Equality (5.43) together with the definition of f α¯ yields that F(t, α¯ t1 , α¯ t2 ) = f (t, πt , lt )−α¯ t1 πt −⟨α¯ t2 , lt ⟩ν , which implies that the process F(t, α¯ t1 , α¯ t2 ) belongs to H2T as a sum of processes in H2T . Hence, (α¯ t ) ∈ AT , which ensures that equality (5.43) still holds with U replaced by AT . Remark 5.6. In the dual representation, the supremum cannot be generally taken over the probability measures Q α equivalent to P. For instance, consider Example 3.1 with γ = −1, that is with driver f (ℓ) := −ℓ(1). Then, Dt = {−1} and A = {−1}. Also, Q −1 is the probability measure with density with respect to P given by ΓT = 0 NT e T , and is thus not equivalent to P. Note also that if f is positively homogeneous, then F = 0. The penalty function ζ is thus equal to zero, and for all T , the set AT coincides with the set of predictable processes (αt ) valued in Dt .
3346
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
5.3. Dynamic risk-measures under model ambiguity We consider now dynamic risk-measures in the case of model ambiguity, parameterized by a control α as follows. Let A be a polish space (or a borelian subset of a polish space) and let A the set of A-valued predictable processes α. To each coefficient α ∈ A, is associated a model via a probability measure Q α called prior as well as a dynamic risk measure ρ α . More precisely, for each α ∈ A, let Z α be the solution of the SDE: d Z tα = Z tα− β 1 (t, αt )d Wt + β 2 (t, αt , u)d N˜ (dt, du) ; Z 0α = 1, R∗
where β 1 : (t, ω, α) → β 1 (t, ω, α), is a P ⊗ B(A)-measurable function defined on [0, T ′ ] × Ω × A and valued in [−C, C], with C > 0, and β 2 : (t, ω, α, u) → β 2 (t, ω, α, u) is a P ⊗ B(A) ⊗ B(R∗ )-measurable function defined on [0, T ′ ] × Ω × A × R∗ which satisfies dt ⊗ d P ⊗ dν(u)-a.s. β 2 (t, α, u) ≥ C1
and
|β 2 (t, α, u)| ≤ ψ(u),
(5.44) Z Tα ′
p Lν
with C1 > −1 and ψ is a bounded function ∈ for all p ≥ 1. Hence, > 0 a.s. and, by Proposition A.1, Z Tα ′ ∈ L p (FT ′ ) for all p ≥ 1. For each α ∈ A, let Q α be the probability measure equivalent to P which admits Z Tα ′ as density with respect to P on FT ′ . t By Girsanov’s theorem (see [14]), the process Wtα := Wt − 0 β 1 (s, αs )ds is a Brownian motion under Q α and N is a Poisson random measure independent of W α under Q α with compensated process N˜ α (dt, du) = N˜ (dt, du) − β 2 (t, αt , u)ν(du)dt. Even if the filtration F is not generated by W α and N˜ α , we have a representation theorem for Q α -martingales with respect to W α and N˜ α . More precisely, we have the following. Lemma 5.7. Let (Mt ) be a martingale under Q α , and p-integrable under Q α , for some p > 2. Then, there exists a unique pair of predictable processes (πt , lt (.)) such that t t ls (u) N˜ α (ds, du) 0 ≤ t ≤ T a.s. (5.45) πs d Wsα + Mt = M0 + 0
0
R∗
Proof. Suppose first that (Mt ) is p-integrable under Q α . Since (Mt ) is a Q α -martingale, the process Nt := Z tα Mt is a martingale under P. By assumption (5.44), it follows from Proposition A.1 that Z Tα ′ ∈ L q (FT ′ ) for all q ≥ 1. By H¨older’s inequality, N T is thus square integrable under P. By the martingale representation theorem of Tang and Li [22], there exists a unique pair of predictable processes (ψt , kt (.)) ∈ H2 × H2ν such that t t Nt = N0 + ψs d Ws + ks (u) N˜ (ds, du) 0 ≤ t ≤ T a.s. 0
0
R∗
Then, by applying Itˆo’s formula to Mt = Nt (Z tα )−1 and by classical computations, one can derive the existence of (πt , lt (.)) satisfying (5.45). p
p
p
p
Remark 5.8. By Proposition A.2, (πt , lt (.)) ∈ Hα × Hν,α , where the spaces Hα and Hν,α are p defined as H p and Hν , but under probability Q α instead of P. For each control α, the associated dynamic risk measure will be induced by a BSDE under Q α and driven by W α and N˜ α , which makes sense by the above Q α -martingale representation property. Let us first introduce a function
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
3347
F : [0, T ′ ] × Ω × R × L 2ν × A → R; (t, ω, π, ℓ, α) → F(t, ω, π, ℓ, α) which is P ⊗ B(R) ⊗ B(L 2ν ) ⊗ B(A)-measurable. Suppose F is uniformly Lipschitz with respect to (π, ℓ), continuous with respect to α, and such that ess supα∈A |F(·, t, 0, 0, 0, α)| ∈ H p,T , for each p ≥ 2. Suppose also that F(t, π, l1 , α) − F(t, π, l2 , α) ≥ ⟨θtπ,l1 ,l2 ,α , l1 − l2 ⟩ν ,
(5.46)
for some adapted process θtπ,l1 ,l2 ,α (·) with θ : Ω × [0, T ′ ] × R × (L2ν )2 × A → L2ν ¯ being P ⊗ B(R) ⊗ B(L2ν )2 ⊗ B(A)-measurable and satisfying |θtπ,l1 ,l2 ,α (u)| ≤ ψ(u), where ψ¯ p π,l1 ,l2 ,α is bounded and in L ν , for all p ≥ 1, and θt ≥ −1 − C1 . For each α ∈ A, the associated driver is given by F(t, ω, π, ℓ, αt (ω)).
(5.47)
Note that these drivers are equi-Lipschitz. For each α ∈ A, let ρ α be the dynamic risk-measure induced by the BSDE associated with driver F(., αt ) and driven by W α and N˜ α . More precisely, fix T ∈ [0, T ′ ] and ξ ∈ L p (FT ) with p > 2. By Proposition A.1, Z Tα ∈ q L (FT ′ ) for all q ≥ 1. Hence, by H¨older’s inequality, ξ ∈ L 2α , where L 2α denotes the space of random variables which are square integrable under Q α . Similarly, |F(·, t, 0, 0, 0, αt )| ∈ H2,T α . Hence, there exists a unique solution (X α , π α , l α ) in Sα2 × H2α × H2α,ν of the Q α -BSDE − d X tα = F(t, πtα , ltα , αt )dt − πtα d Wtα − ltα (u) N˜ α (dt, du); X Tα = ξ, (5.48) R∗
driven by W α and N˜ α . The dynamic risk-measure ρ α (ξ, T ) of position ξ is thus well defined by ρtα (ξ, T ) := −X tα (ξ, T ),
0 ≤ t ≤ T,
(5.49)
with X α (ξ, T ) = X α . Assumption (5.46) yields the monotonicity property of ρ α . The agent is supposed to be averse to ambiguity. Her risk measure at time t is thus given, for each T ∈ [0, T ′ ] and ξ ∈ L p (FT ), p > 2, by ess sup ρtα (ξ, T ) = −ess inf X tα (ξ, T ).
(5.50)
α∈A
α∈A
Note that it defines a monotonous dynamic risk measure. We now show that this dynamic risk measure is induced by a BSDE driven by W and N˜ under probability P. Theorem 5.9. Let f be the function defined for each (t, ω, π, ℓ) by f (t, ω, π, ℓ) := inf {F(t, ω, π, ℓ, α) + β 1 (t, ω, α)π + ⟨β 2 (t, ω, α), ℓ⟩ν }. α∈A
(5.51)
Let ρ be the dynamic risk measure associated with driver f , defined for each T ∈ [0, T ′ ] and ξ ∈ L p (FT ) ( p > 2), by ρt (ξ, T ) := −X t (ξ, T ),
0 ≤ t ≤ T,
(5.52)
with X (ξ, T ) = X , where (X, π, l) is the unique solution in associated with driver f , that is, − d X t = f (t, πt , lt )dt − πt d Wt − lt (u) N˜ (dt, du);
S 2,T
R∗
× H2,T
× H2,T ν
X T = ξ.
of the P-BSDE
(5.53)
3348
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
For each T ∈ [0, T ′ ] and ξ ∈ L p (FT ) with p > 2, we have for each t ∈ [0, T ], ρt (ξ, T ) = ess sup ρtα (ξ, T ) a.s.
(5.54)
α∈A
Proof. In order to prove this result, we will express the problem in terms of BSDEs under probability P and then apply the second optimization principle. Fix now ξ ∈ L p (FT ) with p > 2. Since (X α , π α , l α ) is a solution of BSDE (5.48), it clearly satisfies α α α α α − d X t = f (t, πt , lt )dt − πt d Wt − ltα (u) N˜ (dt, du); X Tα = ξ, (5.55) R∗
which is a P-BSDE driven by W and N˜ , and where the driver is given by f α (t, π, ℓ) := F(t, π, ℓ, αt ) + β 1 (t, αt )π + ⟨β 2 (t, αt ), ℓ⟩ν .
(5.56)
The drivers f α are clearly equi-Lipschitz. Let p ′ be a real number such that 2 < p ′ < p. Now, Z Tα is q-integrable, for all q ≥ 1. Hence, p′
p′
by H¨older’s inequality, ξ ∈ L α , where L α denotes the space of random variables which are p′ p ′ -integrable under Q α . Similarly, F(t, 0, 0, αt ) ∈ Hα . By Proposition A.2 in the Appendix, p′ p′ p′ there exists a unique solution (X α , π α , l α ) in Sα × Hα × Hα,ν of the Q α -BSDE (5.48). ′ Now, suppose we have shown that (Z α )−1 ∈ S q,T for all q ≥ 1. Since p ′ > 2, by H¨older’s α α α 2 inequality, we derive that (X , π , l ) belongs to S × H2 × H2ν and is thus the unique solution of P-BSDE (5.55) in S 2 × H2 × H2ν . Moreover, for each α, f α satisfies Assumption 4.1. Indeed, we have f α (t, π, ℓ1 ) − f α (t, π, ℓ2 ) = F(t, π, ℓ1 , αt ) − F(t, π, ℓ2 , αt ) + ⟨β 2 (t, αt ), ℓ1 − ℓ2 ⟩ν ≥ ⟨θtπ,ℓ1 ,ℓ2 + β 2 (t, αt ), ℓ1 − ℓ2 ⟩ν ,
¯ with θtπ,ℓ1 ,ℓ2 + β 2 (t, αt ) ≥ (−1 − C1 ) + C1 ≥ −1 and |θtπ,ℓ1 ,ℓ2 + β 2 (t, αt )| ≤ ψ + ψ. Let us show that f , defined by (5.51), is a Lipschitz driver. Since A is a polish space, there exists a countable subset D of A which is dense in A. As F is continuous with respect to α, it follows that the above equality still holds with A replaced by D, which gives that f is P ⊗ B(R) ⊗ B(L 2ν )-measurable. Also, f is Lipschitz and f (·, 0, 0) ∈ H2,T , which yields that f is a Lipschitz driver. By the definitions of f (see (5.51)) and f α (see (5.56)), we get that for each α ∈ A, f ≤ f α . Also, for each ϵ > 0 and each (t, ω, π, l) ∈ Ω × [0, T ′ ] × R × L 2ν , there exists α ϵ ∈ A such that f (t, ω, π, ℓ) + ϵ ≥ F(t, ω, π, ℓ, α ϵ ) + β 1 (t, ω, α ϵ )π + ⟨β 2 (t, ω, α ϵ ), ℓ⟩ν . By the section theorem of [7], for each ϵ > 0, there exists an A-valued predictable process (αtϵ ) ϵ such that f (t, πt , lt ) + ϵ ≥ f α (t, πt , lt )d P ⊗ dt-a.s. Consequently, by the second optimization principle for BSDEs (Theorem 4.7), equality (5.54) holds, which is the desired result. q,T ′
It remains to show that (Z α )−1 ∈ Sα for all q > 1. Now, by classical computations, we derive that (Z α )−1 satisfies the following SDE: d(Z tα )−1 = (Z tα− )−1 −β 1 (t, αt )d Wtα − β 2 (t, αt , u)d N˜ α (dt, du) ; (Z 0α )−1 = 1. R∗
3349
M.-C. Quenez, A. Sulem / Stochastic Processes and their Applications 123 (2013) 3328–3357
Since β^1 is bounded and since β^2 satisfies (5.44), it follows from Proposition A.1 that (Z^α_{T′})^{-1} ∈ L^q_α for all q > 1. Hence, by martingale inequalities, (Z^α)^{-1} ∈ S^{q,T′}_α for all q > 1. The proof is thus complete.
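As a purely numerical illustration of the two ingredients used above, the pointwise infimum (5.51) and the ε-optimal selection, the following sketch (not part of the paper) evaluates the infimum driver over a finite set of controls and picks an ε-optimal control; the functions F, beta1, beta2, the grids and all numerical values are hypothetical stand-ins for the objects of Section 5.3.

import numpy as np

u_grid = np.array([-0.5, 0.3, 1.0])           # grid of jump sizes (discretized mark space)
nu_weights = np.array([0.4, 0.8, 0.2])        # weights playing the role of nu({u}) on the grid

controls = [0.0, 0.5, 1.0]                    # hypothetical finite subset of the control set A

def F(pi, ell, a):                            # illustrative stand-in for F(t, ., ., alpha)
    return 0.5 * a + abs(pi) * (1.0 - a)

def beta1(a):                                 # bounded coefficient beta^1(t, alpha)
    return -0.2 * a

def beta2(a):                                 # coefficient beta^2(t, alpha, u) on the grid, > -1
    return -0.1 * a * np.minimum(1.0, np.abs(u_grid))

def f_alpha(pi, ell, a):                      # controlled driver f^alpha, cf. (5.56)
    return F(pi, ell, a) + beta1(a) * pi + np.sum(beta2(a) * ell * nu_weights)

def f_inf(pi, ell):                           # driver f of (5.51): pointwise infimum over the controls
    return min(f_alpha(pi, ell, a) for a in controls)

pi, ell = 0.7, np.array([0.1, -0.2, 0.3])
values = {a: f_alpha(pi, ell, a) for a in controls}
print("f(pi, ell) =", f_inf(pi, ell), "  min over alpha =", min(values.values()))

eps = 1e-8                                    # epsilon-optimal control, as in the selection argument
a_eps = next(a for a in controls if values[a] <= f_inf(pi, ell) + eps)
print("epsilon-optimal control:", a_eps)

In this finite setting the selection is trivial; the section theorem of [7] is what replaces this exhaustive search when A is a general Polish space.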
Moreover, from Theorem 4.6 and Proposition 4.9, the following property follows.

Proposition 5.10. Let (X, π, l) be the solution of the BSDE associated with driver f and terminal condition (T, ξ). Let ᾱ ∈ A and let t ∈ [0, T].
• (Sufficient condition of optimality) Suppose that
\[
f(s, \pi_s, l_s) = F(s, \pi_s, l_s, \bar\alpha_s) + \beta^1(s, \bar\alpha_s)\,\pi_s + \langle \beta^2(s, \bar\alpha_s), l_s\rangle_\nu, \qquad t \le s \le T, \; dP \otimes ds\text{-a.s.} \tag{5.57}
\]
Then ᾱ is t-optimal, that is, optimal for (5.54).
• (Necessary condition of optimality) Suppose that ᾱ is t-optimal and that β^2(t, ᾱ_t) + θ_t^{π_t, l_t^{ᾱ}, l_t} > −1, dP ⊗ dt-a.s. (which is satisfied if, for example, C₁ + C₁′ > −1). Then equality (5.57) holds.

Remark 5.11. Suppose A is compact and F, β^1 and β^2 are continuous with respect to α. Then, for each (t, ω, π, ℓ) ∈ Ω × [0, T′] × R × L^2_ν, there exists ᾱ ∈ A such that f(t, ω, π, ℓ) = F(t, ω, π, ℓ, ᾱ) + β^1(t, ᾱ)π + ⟨β^2(t, ᾱ), ℓ⟩_ν. By the section theorem of [7], there exists an A-valued predictable process (ᾱ_t) such that (5.57) is satisfied with t = 0. It follows that, for each t, ᾱ is optimal for (5.54). If, instead of being compact, A is a bounded, convex and closed subset of a separable Hilbert space, and if F, β^1 and β^2 are convex and lower semicontinuous with respect to α, then, by arguments similar to those used in the proof of Theorem 5.2, there exists ᾱ ∈ A such that (5.57) is satisfied, which implies that ᾱ is optimal for (5.54).

Example. We suppose that L^2_ν is separable and that A is a Borel subset of the Hilbert space R × L^2_ν such that A ⊂ [−K, K] × Υ, where
\[
\Upsilon := \{\varphi \in \mathcal{P} : \; C_1' \le \varphi(u) \text{ and } |\varphi(u)| \le \psi(u), \; \nu(du)\text{-a.s.}\},
\]
with C₁′ > −1 and ψ bounded and in L^p_ν for all p ≥ 1. In this case, for each process α := (α^1, α^2) ∈ A, the prior Q^α admits Z^α_{T′} as density with respect to P, Z^α being the solution of (5.38). Theorem 5.9 can then be applied, as well as the previous remark.

Remark 5.12. In the specific case of a controlled linear driver, that is, when F(t, ω, π, ℓ, α_t(ω)) is linear with respect to π and ℓ, the above problem is related to some classical control problems, generally studied in the case of a Brownian filtration (see [9]), and to some robust utility maximisation problems studied e.g. in [12,15] in a discontinuous filtration.

5.4. Some links between dynamic risk measures induced by BSDEs and the instantaneous interest rate
From a financial point of view, the dependence of f(t, x, π, l) with respect to x is relevant for the modeling of dynamic risk measures, since it allows us to take into account the instantaneous interest rate in the market, or some ambiguity on this rate.
5.4.1. Case where the driver f is linear with respect to x
Let f be a Lipschitz driver which is linear with respect to x, that is, which can be written as f(., t, x, π, l) = −r_t x + g(., t, π, l), where g is a Lipschitz driver which does not depend on x and where (r_t) is a bounded predictable process, which can be interpreted as an instantaneous interest rate. Let us denote by ρ the associated risk measure. Let T ∈ [0, T′] and let ξ ∈ L^2(F_T). Set X_t := X_t(ξ, T) and consider X̃_t := e^{−∫_0^t r_s ds} X_t, which can be seen as the discounted process. One can show that X̃ is the solution of the BSDE associated with driver
\[
\tilde g(., t, \pi, l) = e^{-\int_0^t r_s\,ds}\; g\big(., t,\; e^{\int_0^t r_s\,ds}\,\pi,\; e^{\int_0^t r_s\,ds}\, l\big)
\]
and with terminal condition e^{−∫_0^T r_s ds} ξ. The risk measure ρ thus reduces to a new risk measure ρ̃ associated with driver g̃. More precisely, for each ξ ∈ L^2(F_T), we have
\[
e^{-\int_0^t r_s\,ds}\,\rho_t(\xi, T) = \tilde\rho_t\big(e^{-\int_0^T r_s\,ds}\,\xi,\; T\big), \qquad 0 \le t \le T, \; \text{a.s.}
\]
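For the reader's convenience, here is the elementary verification of this change of variables (a routine application of the product rule; the notation D_t := e^{−∫_0^t r_s ds} is introduced only for this computation). Since dX_t = (r_t X_t − g(t, π_t, l_t))dt + π_t dW_t + ∫_{R*} l_t(u) Ñ(dt, du) and D is continuous with finite variation, we get
\[
d\tilde X_t = D_t\, dX_t - r_t D_t X_t\, dt = -\,D_t\, g(t, \pi_t, l_t)\, dt + D_t\,\pi_t\, dW_t + \int_{\mathbb{R}^*} D_t\, l_t(u)\,\tilde N(dt, du),
\]
so that, setting \(\tilde\pi_t := D_t \pi_t\) and \(\tilde l_t := D_t l_t\),
\[
-\,d\tilde X_t = \tilde g(t, \tilde\pi_t, \tilde l_t)\, dt - \tilde\pi_t\, dW_t - \int_{\mathbb{R}^*} \tilde l_t(u)\,\tilde N(dt, du), \qquad \tilde X_T = e^{-\int_0^T r_s\, ds}\,\xi.
\]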
In particular, for the initial time t = 0, we get ρ_0(ξ, T) = ρ̃_0(e^{−∫_0^T r_s ds} ξ, T). This new measure ρ̃, which operates on discounted positions, is translation invariant because g̃ does not depend on x. We thus have, for each position ξ ∈ L^2(F_T) and each constant m ∈ R,
\[
\rho_0\big(\xi + m\, e^{\int_0^T r_s\,ds},\, T\big) = \tilde\rho_0\big(e^{-\int_0^T r_s\,ds}\,\xi + m,\, T\big) = \tilde\rho_0\big(e^{-\int_0^T r_s\,ds}\,\xi,\, T\big) - m = \rho_0(\xi, T) - m.
\]
In other words, the risk measure ρ is (r_t)-translation invariant (here at time 0), which makes sense from a financial point of view. The constant m may be interpreted as an initial amount invested in the riskless asset with instantaneous interest rate (r_t). An analogous property can be proven similarly at any time t ∈ [0, T].

5.4.2. Case of a driver f concave with respect to x
We now consider the case when the driver f(t, x, π, l) is concave with respect to x (but not necessarily with respect to (x, π, l)) and show that it can be interpreted in terms of ambiguity on the instantaneous interest rate process. For each (ω, t, π, l), let F(ω, t, ·, π, l) be the polar function of f with respect to x, defined for each δ in R by
\[
F(\omega, t, \delta, \pi, l) := \sup_{x \in \mathbb{R}}\,\big[\, f(\omega, t, x, \pi, l) - \delta x\,\big].
\]
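To fix ideas, consider the following elementary example (ours, not taken from the paper). Let f(ω, t, x, π, l) = g(ω, t, π, l) − max(r_1 x, r_2 x) with constants 0 ≤ r_1 ≤ r_2 and g a Lipschitz driver independent of x; f is then concave in x. A direct computation gives
\[
F(\omega, t, \delta, \pi, l) = \sup_{x \in \mathbb{R}} \big[\, g(\omega, t, \pi, l) - \max(r_1 x,\, r_2 x) - \delta x \,\big]
= \begin{cases} g(\omega, t, \pi, l) & \text{if } -r_2 \le \delta \le -r_1,\\ +\infty & \text{otherwise}, \end{cases}
\]
so that the conjugacy relation (5.58) below reads f = inf_{δ∈[−r_2,−r_1]}{g + δx}: the coefficients −δ range exactly over the interval [r_1, r_2] of possible instantaneous interest rates.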
Proposition 5.13 (Robust Representation of ρ). Suppose that f is a Lipschitz driver with Lipschitz constant C, satisfying Assumption 4.1 and concave with respect to x. Let ρ be the dynamic risk measure induced by the BSDE associated with driver f. Let T ∈ [0, T′]. Let D_T be the set of predictable processes δ valued in R such that F(t, δ_t, 0, 0) belongs to H^{2,T}. For each R-valued predictable process δ, let f^δ be the driver defined by f^δ(ω, t, x, π, ℓ) := F(ω, t, δ_t(ω), π, ℓ) + δ_t(ω)x. For each ξ ∈ L^2(F_T) and each t ∈ [0, T], we have
\[
\rho_t(\xi, T) = \operatorname*{ess\,sup}_{\delta \in D_T}\, \rho_t^{\delta}(\xi, T) \quad \text{a.s.},
\]
where, for each δ ∈ D_T, ρ^δ is the dynamic risk measure induced by the non-linear BSDE associated with driver f^δ.

The coefficients (−δ, δ ∈ D_T) can be interpreted as possible instantaneous interest rates when there is ambiguity on the interest rate. Also, as seen above, the risk measure ρ^δ can be interpreted as a dynamic risk measure in a market with interest rate process (−δ_t).

Proof. Since f is concave with respect to x, f and F satisfy the conjugacy relation
\[
f(\omega, t, x, \pi, \ell) = \inf_{\delta \in D_t(\omega, \pi, l)} \{ F(\omega, t, \delta, \pi, l) + \delta x \}, \tag{5.58}
\]
where, for each (t, ω, π, l), D_t(ω, π, l) is the nonempty set of reals δ such that F(ω, t, δ, π, l) < +∞. Since f is Lipschitz with constant C, D_t(ω, π, l) is bounded by C. The end of the proof follows from arguments similar to, and even simpler than (since we work in R), those used in the proof of Theorem 5.2.

5.4.3. Model with ambiguity on the interest rate and on the model
In this section, we consider the model described in Section 5.3, but when there is also ambiguity on the instantaneous interest rate process. To each control α ∈ A corresponds an instantaneous interest rate process δ(t, α_t), where δ : (t, ω, α) → δ(t, ω, α) is a P ⊗ B(A)-measurable function defined on [0, T′] × Ω × A and valued in [−C, C], where C > 0. Also, for each α ∈ A, the associated driver is given here by
\[
F(t, \pi, \ell, \alpha_t) + \delta(t, \alpha_t)\,x, \tag{5.59}
\]
instead of (5.47). Also, ρ^α is the dynamic risk measure induced by the Q^α-BSDE associated with driver (5.59) and driven by W^α and Ñ^α. In this case, we have ρ_t(ξ, T) = ess sup_{α∈A} ρ_t^α(ξ, T) a.s., where ρ is the dynamic risk measure induced by the P-BSDE associated with the driver f given by
\[
f(t, \omega, x, \pi, \ell) = \inf_{\alpha \in A}\big\{ F(t, \omega, \pi, \ell, \alpha) + \delta(t, \omega, \alpha)\,x + \beta^1(t, \omega, \alpha)\,\pi + \langle \beta^2(t, \omega, \alpha), \ell\rangle_\nu \big\}.
\]
The driver f thus depends on x (and is not necessarily concave with respect to x).

Appendix

A.1. Exponential local martingales
Recall that, in the case of a filtration satisfying the usual conditions, if X is a semimartingale with X_0 = 0, there exists a unique semimartingale Z, denoted by E(X), which satisfies the equation Z_t = 1 + ∫_0^t Z_{s^-} dX_s for all t, and it is given by the so-called exponential formula of Doléans-Dade:
\[
\mathcal{E}(X)_t := \exp\Big( X_t - \tfrac12 \langle M_X^c, M_X^c\rangle_t \Big) \prod_{r \le t} (1 + \Delta X_r)\, e^{-\Delta X_r}, \tag{A.60}
\]
where M_X^c is the continuous part of the local martingale associated with X. If Y is also a semimartingale with Y_0 = 0, we have E(X)E(Y) = E(X + Y + [X, Y]), usually called the product formula.
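As a purely illustrative complement (not part of the paper), note that for M_t = ∫_0^t β_s dW_s + ∫_0^t∫_{R*} γ_s(u) Ñ(ds, du) with a finite intensity measure ν, formula (A.60) specializes to E(M)_T = exp(β W_T − ½β²T − T∫γ(u)ν(du)) ∏_i (1 + γ(U_i)) when β and γ are constant in time, the product running over the jump marks U_i. The Python sketch below simulates this expression and checks numerically that E[E(M)_T] ≈ 1, consistent with the martingale property; the coefficients, the mark law and all numerical values are hypothetical choices.

import numpy as np

rng = np.random.default_rng(1)
T, beta, lam = 1.0, 0.4, 2.0                  # horizon, Brownian coefficient, total jump intensity nu(R*)
gamma = lambda u: 0.5 * np.tanh(u)            # bounded jump coefficient with gamma > -1

def sample_marks(n):                          # marks distributed as nu / lam (standard normal here, hypothetical)
    return rng.standard_normal(n)

# int_{R*} gamma(u) nu(du), estimated by Monte Carlo on the mark law
int_gamma_nu = lam * gamma(sample_marks(500_000)).mean()

def stochastic_exponential(n_paths=50_000):
    """E(M)_T by the closed formula above, path by path."""
    W_T = np.sqrt(T) * rng.standard_normal(n_paths)
    Z = np.exp(beta * W_T - 0.5 * beta ** 2 * T - T * int_gamma_nu)
    n_jumps = rng.poisson(lam * T, size=n_paths)
    for i in range(n_paths):
        if n_jumps[i]:
            Z[i] *= np.prod(1.0 + gamma(sample_marks(n_jumps[i])))
    return Z

Z_T = stochastic_exponential()
print("E[E(M)_T]   =", Z_T.mean())            # close to 1 (martingale property)
print("E[E(M)_T^2] =", (Z_T ** 2).mean())     # finite, in line with Proposition A.1 below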
Note that if ∆X ≥ −1 (resp. > −1), then E(X) is nonnegative (resp. positive). Moreover, if X is a local martingale, then E(X) is a local martingale. In the case of exponential local martingales driven by a Brownian motion and a Poisson random measure, we show the following property, used in Section 5.3.

Proposition A.1. Let (β_t) and (γ_t(·)) be predictable R-valued processes and let
\[
M_t := \int_0^t \beta_s\, dW_s + \int_0^t\!\!\int_{\mathbb{R}^*} \gamma_s(u)\, \tilde N(ds, du).
\]
Suppose that the random variable ∫_0^T β_s² ds is bounded.
(a) Suppose that E(M) ≥ 0. For each integer p ≥ 2, the following assertion holds: if the random variable ∫_0^T ∥γ_t∥_{q,ν} dt is bounded for each q ∈ [2, p], then E[E(M)_T^p] < +∞.
(b) Otherwise, the above assertion holds for each even number p ≥ 2.
Moreover, if ∫_0^T ∥γ_t∥_{p,ν} dt is bounded for all p ≥ 2, then E(M)_T is p-integrable for all p ≥ 1.

Proof. Let us prove Property (a) by induction. Suppose that E(M) ≥ 0 a.s. We have already shown that the property holds for p = 2. To clarify the main arguments of the proof, we first show the property for p = 3 before proceeding with the induction. Suppose that ∫_0^T ∥γ_t∥_{2,ν} dt and ∫_0^T ∥γ_t∥_{3,ν} dt are bounded. We have to prove that E[E(M)_T^3] < +∞. By equality (3.5), it is sufficient to show that E[E(N)_T E(M)_T] < +∞, with
\[
N_t = 2M_t + \int_0^t\!\!\int_{\mathbb{R}^*} \gamma_s^2(u)\, \tilde N(ds, du)
= \int_0^t 2\beta_s\, dW_s + \int_0^t\!\!\int_{\mathbb{R}^*} \big(2\gamma_s(u) + \gamma_s^2(u)\big)\, \tilde N(ds, du).
\]
Using more concise notation, N can be written as N = 2β·W + (2γ + γ²)·Ñ. By the product formula, E(N)E(M) = E(N + M + [N, M]). Now, by classical properties of [·, ·], we get [N, M] = [2β·W + (2γ + γ²)·Ñ, β·W + γ·Ñ] = 2β²·dt + (2γ² + γ³)·N. Let N₃ be the local martingale given by N₃ = N + M + (2γ² + γ³)·Ñ. We thus have
\[
\mathcal{E}(N)\,\mathcal{E}(M) = \mathcal{E}(N_3)\, \exp\Big\{ \int_0^{\cdot} 2\beta_s^2\, ds + \int_0^{\cdot}\!\Big(\int_{\mathbb{R}^*} \big(2\gamma_s^2 + \gamma_s^3\big)\,\nu(du)\Big) ds \Big\}. \tag{A.61}
\]
We have seen in the proof of Proposition 3.2 that E(N) ≥ 0 and, by assumption, we have E(M) ≥ 0. This, together with the above equality, yields that E(N₃) ≥ 0. Since E(N₃) is also a local martingale, it follows that it is a supermartingale. Using the boundedness of ∫_0^T β_t² dt and of ∫_0^T ∥γ_t∥_{q,ν} dt for q = 2, 3, we get E[E(N)_T E(M)_T] ≤ K E[E(N₃)_T] ≤ K, where K is a positive constant, which yields that E[E(M)_T^3] < +∞.
Let us now show by induction that, for all p ≥ 2,
\[
\mathcal{E}(M)^p = \mathcal{E}(N_p)\, \exp\Big\{ \int_0^{\cdot} a_p(s)\, ds\Big\}, \qquad a_p(s) = k_p\, \beta_s^2 + \int_{\mathbb{R}^*} R_p(\gamma_s(u))\,\nu(du), \qquad N_p = p\beta\cdot W + Q_p(\gamma)\cdot \tilde N,
\]
with E(N_p) ≥ 0, where R_p and Q_p are polynomials of degree p. Suppose we have shown this property for some p ≥ 2 and let us show that it still holds at rank p + 1. Using the induction hypothesis, we get
\[
\mathcal{E}(M)^{p+1} = \mathcal{E}(M)^p\, \mathcal{E}(M) = \mathcal{E}(N_p)\, \mathcal{E}(M)\, \exp\Big\{\int_0^{\cdot} a_p(s)\, ds\Big\}.
\]
Now, E(N_p)E(M) = E(N_p + M + [N_p, M]) and [N_p, M] = [pβ·W + Q_p(γ)·Ñ, β·W + γ·Ñ] = pβ²·dt + Q_p(γ)γ·N. Let N_{p+1} be the local martingale given by N_{p+1} = N_p + M + Q_p(γ)γ·Ñ. Then, using these equalities, we derive the desired property at rank p + 1. Using this result and arguments similar to those above, one can derive Property (a).
Let us show Property (b). First, we have already shown that the property holds for p = 2. Let us now show the property for p = 2k, with k ≥ 2. Suppose ∫_0^T ∥γ_t∥_{q,ν} dt is bounded for each q ∈ [2, 2k]. Let us prove that E[E(M)_T^{2k}] < +∞. By equality (3.5), it is sufficient to show that E[E(N)_T^k] < +∞. Now, E(N) ≥ 0 a.s. Applying the first assertion with M replaced by N and using the fact that ∫_0^T ∥γ_t∥_{q,ν} dt is bounded for each q ∈ [2, 2k], we derive the desired result. The last assertion of Proposition A.1 follows from Property (b).

Example. Suppose that the intensity measure ν of the Poisson random measure satisfies ∫_{R*}(1 ∧ u²)ν(du) < +∞. Let (β_t) and (γ_t(·)) be predictable R-valued processes and let M be the local martingale defined by (3.2). Suppose β is bounded and that, dt ⊗ dP ⊗ ν(du)-a.s., |γ_t(u)| ≤ K(1 ∧ |u|), where K is a positive constant. Then, for all p ≥ 2, |γ_t(u)|^p ≤ K^p(1 ∧ |u|²). Hence, for all p ≥ 1, E(M)_T is p-integrable.

A.2. Some complementary results on BSDEs with jumps

A.2.1. BSDEs with jumps in L^p, p ≥ 2
Proposition A.2. Let p ≥ 2 and let T > 0. For each Lipschitz driver f such that f(t, 0, 0, 0) ∈ H^{p,T}, and each terminal condition ξ ∈ L^p(F_T), there exists a unique solution (X, π, l) ∈ S^{p,T} × H^{p,T} × H^{p,T}_ν of the BSDE with jumps (2.1).

Remark A.3. The above result still holds when there is an F-martingale representation theorem with respect to W and Ñ, even if F is not generated by W and Ñ.

Proof. Let us first consider the case when the driver f does not depend on x, π, ℓ. Then, X is given by the right-continuous version of X_t = E[ξ + ∫_t^T f(s)ds | F_t]. Also, since p ≥ 2, by the martingale representation theorem of Tang and Li [22] for locally square integrable martingales, (π, l) corresponds to the unique pair of predictable processes satisfying
\[
E\Big[\xi + \int_0^T f(s)\, ds \,\Big|\, \mathcal{F}_t\Big] = X_0 + \int_0^t \pi_s\, dW_s + \int_0^t\!\!\int_{\mathbb{R}^*} l_s(u)\,\tilde N(ds, du) \quad \text{a.s.} \tag{A.62}
\]
We have |X_t| ≤ E[|ξ| + ∫_0^T |f(s)|ds | F_t]. Hence, using martingale inequalities, we get
\[
\|X\|^p_{S^{p,T}} \le C_p\, E\Big[\Big(|\xi| + \int_0^T |f(s)|\, ds\Big)^p\Big],
\]
where C_p is a constant which does not depend on T. We derive that
\[
\|X\|^p_{S^{p,T}} \le C_p\, E\Big[\,|\xi|^p + T^{p/2}\Big(\int_0^T f(s)^2\, ds\Big)^{p/2}\Big] = C_p\Big( E[|\xi|^p] + T^{p/2}\, \|f\|^p_{H^{p,T}}\Big), \tag{A.63}
\]
for another constant still denoted by C_p. Also, by the Burkholder–Davis–Gundy inequalities, since p > 1, we have
\[
E\Big[\Big(\int_0^T \pi_s^2\, ds + \int_0^T \|l_s\|_\nu^2\, ds\Big)^{p/2}\Big]
\le C_p\, E\Big[\,\Big|\int_0^T \pi_s\, dW_s + \int_0^T\!\!\int_{\mathbb{R}^*} l_s(u)\,\tilde N(ds, du)\Big|^p\Big].
\]
Equality (A.62), together with the above estimates, leads to
\[
\|\pi\|^p_{H^{p,T}} + \|l\|^p_{H^{p,T}_\nu} \le C_p\Big( E[|\xi|^p] + T^{p/2}\, \|f\|^p_{H^{p,T}}\Big). \tag{A.64}
\]
Let us now consider the case of a general driver f(t, x, π, ℓ). Denote by H^{p,T} the Banach space S^{p,T} × H^{p,T} × H^{p,T}_ν equipped with the norm ∥(X, π, l)∥_{p,T} := ∥X∥_{S^{p,T}} + ∥π∥_{H^{p,T}} + ∥l∥_{H^{p,T}_ν}. Let us define the map Ψ from H^{p,T} into itself as follows. Given (U, V, k) ∈ H^{p,T}, let (X, π, l) = Ψ(U, V, k) be the unique element of H^{p,T} which solves the BSDE associated with driver f(s) = f(s, U_s, V_s, k_s). Let us prove that Ψ is a contraction for T small enough. Let (U′, V′, k′) be another element of H^{p,T} and define (X′, π′, l′) = Ψ(U′, V′, k′). Set Ū = U − U′, V̄ = V − V′, k̄ = k − k′, X̄ = X − X′, π̄ = π − π′, l̄ = l − l′. The process (X̄, π̄, l̄) is the solution of the BSDE associated with terminal condition 0 and driver ∆f(·) := f(·, U, V, k) − f(·, U′, V′, k′). By inequalities (A.63) and (A.64), we derive that
\[
\|\bar X\|^p_{S^{p,T}} + \|\bar\pi\|^p_{H^{p,T}} + \|\bar l\|^p_{H^{p,T}_\nu} \le C_p\, T^{p/2}\, \|f(\cdot, U, V, k) - f(\cdot, U', V', k')\|^p_{H^{p,T}}.
\]
Using the Lipschitz property of f, we get, for another constant still denoted by C_p,
\[
\|\bar X\|_{S^{p,T}} + \|\bar\pi\|_{H^{p,T}} + \|\bar l\|_{H^{p,T}_\nu} \le C_p \sqrt{T}\,\big( \sqrt{T}\, \|\bar U\|_{S^{p,T}} + \|\bar V\|_{H^{p,T}} + \|\bar k\|_{H^{p,T}_\nu}\big).
\]
Choosing T such that C_p T < 1 and C_p √T < 1, the map Ψ is a contraction on the Banach space H^{p,T} and hence admits a fixed point, which corresponds to the solution of BSDE (2.1) in H^{p,T}. The general case is obtained by subdividing the interval [0, T] into a finite number of small intervals.
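Purely as an illustration of this fixed-point argument (and not of the stochastic construction itself), the following sketch runs the Picard iteration underlying the contraction on a degenerate, deterministic toy BSDE −dX_t = f(t, X_t)dt, X_T = ξ, with ξ and f deterministic so that conditional expectations are trivial; the time grid and the driver below are hypothetical choices.

import numpy as np

# Degenerate deterministic toy of the contraction argument: -dX_t = f(t, X_t) dt, X_T = xi.
T, n_steps, xi = 1.0, 200, 1.0
t = np.linspace(0.0, T, n_steps + 1)
dt = T / n_steps

def f(s, x):                                   # hypothetical driver, Lipschitz in x with constant 0.8
    return np.cos(s) - 0.8 * x

def Psi(U):
    """One Picard step: X_t = xi + int_t^T f(s, U_s) ds, computed backward on the grid."""
    X = np.empty_like(U)
    X[-1] = xi
    for i in range(n_steps - 1, -1, -1):
        X[i] = X[i + 1] + f(t[i + 1], U[i + 1]) * dt
    return X

U = np.zeros(n_steps + 1)                      # starting point of the iteration
for k in range(30):
    U_next = Psi(U)
    gap = np.max(np.abs(U_next - U))           # sup-norm distance between successive iterates
    U = U_next

print("gap after 30 iterations:", gap)         # decays geometrically, reflecting the contraction
print("X_0 =", U[0])

In the genuine stochastic setting, each application of Ψ additionally requires the martingale representation step of the first part of the proof; only the contraction mechanism is illustrated here.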
A.2.2. Estimates and continuity result

Proposition A.4 (Estimates). Let T > 0 and let ξ^1, ξ^2 ∈ L^2(F_T). Let f^1 be a Lipschitz driver with Lipschitz constant C and let f^2 be a driver. For i = 1, 2, let (X^i, π^i, l^i) be a solution of the BSDE (2.1) associated with terminal time T, driver f^i and terminal condition ξ^i. For s in [0, T], denote X̄_s := X^1_s − X^2_s, π̄_s := π^1_s − π^2_s, l̄_s := l^1_s − l^2_s, as well as f̄(s) := f^1(s, X^2_s, π^2_s, l^2_s) − f^2(s, X^2_s, π^2_s, l^2_s) and ξ̄ := ξ^1 − ξ^2. Let η, β > 0 be such that β ≥ 3/η + 2C. If η ≤ 1/C², then, for each t ∈ [0, T], we have
\[
e^{\beta t}\, \bar X_t^2 \le E\big[e^{\beta T}\, \bar\xi^2 \,\big|\, \mathcal{F}_t\big] + \eta\, E\Big[\int_t^T e^{\beta s}\, \bar f(s)^2\, ds \,\Big|\, \mathcal{F}_t\Big] \quad \text{a.s.}; \tag{A.65}
\]
\[
\|\bar X\|^2_\beta \le T\big[e^{\beta T} E[\bar\xi^2] + \eta\, \|\bar f\|^2_\beta\big]. \tag{A.66}
\]
Also, if η < 1/C², we then have
\[
\|\bar\pi\|^2_\beta + \|\bar l\|^2_{\nu,\beta} \le \frac{1}{1 - \eta C^2}\big[e^{\beta T} E[\bar\xi^2] + \eta\, \|\bar f\|^2_\beta\big]. \tag{A.67}
\]
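For instance (a standard consequence, recalled here only for convenience), taking ξ^1 = ξ^2 and f^1 = f^2 in the above gives ξ̄ = 0 and f̄ ≡ 0, so that (A.66) and (A.67) yield
\[
\|\bar X\|^2_\beta = \|\bar\pi\|^2_\beta = \|\bar l\|^2_{\nu,\beta} = 0,
\]
that is, uniqueness of the solution of BSDE (2.1) in S^{2,T} × H^{2,T} × H^{2,T}_ν.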
Proof. From Itô's formula applied to the semimartingale e^{βs} X̄_s² between t and T, it follows that
\[
e^{\beta t}\bar X_t^2 + \beta\int_t^T e^{\beta s}\bar X_s^2\, ds + \int_t^T e^{\beta s}\bar\pi_s^2\, ds + \int_t^T e^{\beta s}\|\bar l_s\|_\nu^2\, ds
= e^{\beta T}\bar X_T^2 + 2\int_t^T e^{\beta s}\bar X_s\big(f^1(s, X_s^1, \pi_s^1, l_s^1) - f^2(s, X_s^2, \pi_s^2, l_s^2)\big)\, ds
- 2\int_t^T e^{\beta s}\bar X_s\,\bar\pi_s\, dW_s - 2\int_t^T\!\!\int_{\mathbb{R}^*} e^{\beta s}\bar X_{s^-}\,\bar l_s(u)\,\tilde N(du, ds).
\]
Now, by martingale inequalities, one can show that X^1 and X^2 belong to S^2, which ensures that the local martingales on the right hand side of the above equality are martingales. Moreover,
\[
|f^1(s, X_s^1, \pi_s^1, l_s^1) - f^2(s, X_s^2, \pi_s^2, l_s^2)| \le |f^1(s, X_s^1, \pi_s^1, l_s^1) - f^1(s, X_s^2, \pi_s^2, l_s^2)| + |\bar f_s| \le C|\bar X_s| + \big(C|\bar\pi_s| + C\|\bar l_s\|_\nu + |\bar f_s|\big).
\]
Now, for all real numbers x, π, l, f and all ϵ > 0,
\[
2x(C\pi + Cl + f) \le \frac{x^2}{\epsilon^2} + \epsilon^2 (C\pi + Cl + f)^2 \le \frac{x^2}{\epsilon^2} + 3\epsilon^2\big(C^2\pi^2 + C^2 l^2 + f^2\big).
\]
Hence, we get
\[
e^{\beta t}\bar X_t^2 + E\Big[\beta\int_t^T e^{\beta s}\bar X_s^2\, ds + \int_t^T e^{\beta s}\big(\bar\pi_s^2 + \|\bar l_s\|_\nu^2\big)\, ds \,\Big|\, \mathcal{F}_t\Big]
\le E\Big[e^{\beta T}(\xi^1 - \xi^2)^2 + \Big(2C + \frac{1}{\epsilon^2}\Big)\int_t^T e^{\beta s}\bar X_s^2\, ds + 3C^2\epsilon^2 \int_t^T e^{\beta s}\big(\bar\pi_s^2 + \|\bar l_s\|_\nu^2\big)\, ds \,\Big|\, \mathcal{F}_t\Big] + 3\epsilon^2\, E\Big[\int_t^T e^{\beta s}\bar f_s^2\, ds \,\Big|\, \mathcal{F}_t\Big]. \tag{A.68}
\]
Let us make the change of variable η = 3ϵ². Then, for each β, η > 0 chosen as in the proposition, these inequalities lead to (A.65). We obtain inequality (A.66) by integrating (A.65). Then (A.67) follows from inequality (A.68).

Remark A.5. By classical results on the norms of semimartingales, one shows similarly that ∥X̄∥²_{S²} ≤ K(E[ξ̄²] + ∥f̄∥²_{H²}), where K is a positive constant depending only on T and C.

We denote by (X(ξ, T), π(ξ, T), l(ξ, T)) the solution of the BSDE associated with f, terminal time T > 0 and terminal condition ξ ∈ L²(F_T). Let S be a stopping time smaller than T and let ξ ∈ L²(F_S). Let (X(ξ, S), π(ξ, S), l(ξ, S)) (denoted here by (X, π, l)) be the solution associated with f, terminal time S and terminal condition ξ. By a conventional notation, we extend the solution to the whole interval [0, T] by setting X_t = ξ, π_t = 0, l_t = 0 for t ≥ S. Then ((X_t, π_t, l_t); t ≤ T) is the unique solution of the BSDE with driver f(t, x, π, l)1_{t≤S} and terminal conditions (T, ξ). By the uniqueness result (as in the Brownian case, see [9]), we derive the flow property for BSDEs with jumps. More precisely,
\[
X_t(\xi, T) = X_t\big(X_S(\xi, T), S\big), \qquad t \in [0, S], \; \text{a.s.,} \tag{A.69}
\]
and the same property holds for π(ξ, T) and l(ξ, T). This property ensures the consistency property of the associated risk measure ρ_f, namely ρ_t(ξ, T) = ρ_t(−ρ_S(ξ, T), S) for t ∈ [0, S]. Moreover, the above estimates allow us to show the continuity of X(ξ, T) with respect to (ξ, T), which yields the continuity of the associated dynamic risk measure ρ_f with respect to (ξ, T).

Proposition A.6 (A Continuity Result). Let T > 0. Let {θ^α, α ∈ R} be a family of stopping times in T_0, converging a.s. to θ ∈ T_0 as α tends to α_0. Let (ξ^α, α ∈ R) be a family of random variables such that E[ess sup_α (ξ^α)²] < +∞ and, for each α, ξ^α is F_{θ^α}-measurable. Suppose that ξ^α converges a.s. to an F_θ-measurable random variable ξ as α tends to α_0. Let f be a given standard driver. Let X^α := X(ξ^α, θ^α); π^α := π(ξ^α, θ^α); l^α := l(ξ^α, θ^α); and X := X(ξ, θ); π := π(ξ, θ); l := l(ξ, θ). Then, for each S ∈ T_0, the random variable X^α_S converges to X_S a.s., and the process X^α converges to X in S^{2,T}.

Proof. By the convention given in Section 5, (X, π, l) is the solution of the BSDE with terminal time T, terminal condition ξ and driver f(t, x, π, l)1_{t≤θ}. Also, (X^α, π^α, l^α) is the solution of the BSDE with terminal time T, terminal condition ξ^α and driver f(t, x, π, l)1_{t≤θ^α}. Applying estimate (A.65), we get, for each stopping time S,
\[
e^{\beta S}\big(X_S - X_S^{\alpha}\big)^2 \le E\Big[ e^{\beta T}(\xi - \xi^{\alpha})^2 + \eta \int_{(\theta^{\alpha}\wedge\theta)\vee S}^{(\theta^{\alpha}\vee\theta)\vee S} e^{\beta s}\, f(s, X_s, \pi_s, l_s)^2\, ds \,\Big|\, \mathcal{F}_S\Big],
\]
with β and η as in Proposition A.4. By the assumptions and the Lebesgue theorem, we conclude that X^α_S converges to X_S a.s. Moreover, by (A.66), the process X^α converges to X in S^{2,T}.

The above two properties, in particular estimate (A.65), are also useful for the study of optimal stopping problems for dynamic risk measures, which is carried out in [20].

References
[1] G. Barles, R. Buckdahn, E. Pardoux, Backward stochastic differential equations and integral-partial differential equations, Stochastics and Stochastics Reports (1995).
[2] P. Barrieu, N. El Karoui, Optimal derivatives design under dynamic risk measures, in: Mathematics of Finance, in: Contemporary Mathematics (A.M.S. Proceedings), 2004, pp. 13–26.
[3] E. Bayraktar, I. Karatzas, S. Yao, Optimal stopping for dynamic convex risk measures, Illinois Journal of Mathematics 54 (3) (2010) 1025–1067. A special volume in honor of Donald Burkholder.
[4] J.M. Bismut, Théorie probabiliste du contrôle des diffusions, Memoirs of the American Mathematical Society 4 (1976) 167.
[5] F. Coquet, Y. Hu, J. Memin, S. Peng, Filtration consistent nonlinear expectations and related g-expectations, Probability Theory and Related Fields 123 (2002) 1–27.
[6] F. Delbaen, S. Peng, E. Rosazza Gianin, Representation of the penalty term of dynamic concave utilities, Finance and Stochastics 14 (3) (2010) 449–472.
[7] C. Dellacherie, P.-A. Meyer, Probabilités et Potentiel, Chap. I–IV, Hermann, 1975. MR0488194.
[8] I. Ekeland, R. Temam, Convex Analysis and Variational Problems, North Holland, Amsterdam, 1976.
[9] N. El Karoui, S. Peng, M.C. Quenez, Backward stochastic differential equations in finance, Mathematical Finance 7 (1) (1997) 1–71.
[10] H. Föllmer, A. Schied, Stochastic Finance. An Introduction in Discrete Time, in: Studies in Mathematics, de Gruyter, Berlin, 2002.
[11] M. Frittelli, E. Rosazza-Gianin, Dynamic convex risk measures, in: G. Szegö (Ed.), Risk Measures in the 21st Century, John Wiley & Sons, Hoboken, NJ, 2004, pp. 227–248.
[12] M. Jeanblanc, A. Matoussi, A. Ngoupeyou, Robust utility maximisation in a discontinuous filtration, arXiv:1201.2690v2, 2012.
[13] R.J.A. Laeven, M.A. Stadje, Robust Portfolio Choice and Indifference Valuation, 2012. Eurandom Preprint Series: 2012-006.
[14] B. Øksendal, A. Sulem, Stochastic Control of Jump Diffusions, Springer Verlag, 2007.
[15] B. Øksendal, A. Sulem, Forward–backward stochastic differential games and stochastic control under model uncertainty, Journal of Optimization Theory and Applications (2012). http://dx.doi.org/10.1007/s10957-012-0166-7.
[16] E. Pardoux, S. Peng, Adapted solution of backward stochastic differential equation, Systems & Control Letters 14 (1990) 55–61.
[17] E. Pardoux, S. Peng, Backward stochastic differential equations and quasilinear parabolic partial differential equations, in: Lect. Notes in CIS, vol. 176, 1992, pp. 200–217.
[18] S. Peng, Nonlinear expectations, nonlinear evaluations and risk measures, in: Lecture Notes in Math., vol. 1856, Springer, Berlin, pp. 165–253.
[19] M.-C. Quenez, Backward stochastic differential equations, in: Rama Cont (Ed.), Encyclopedia of Quantitative Finance, John Wiley and Sons, 2010.
[20] M.-C. Quenez, A. Sulem, Reflected BSDEs and robust optimal stopping for dynamic risk measures with jumps, INRIA Research Report RR-8211, 2013. arXiv:1212.6744v1.
[21] M. Royer, Backward stochastic differential equations with jumps and related non-linear expectations, Stochastic Processes and their Applications 116 (2006) 1358–1376.
[22] S.H. Tang, X. Li, Necessary conditions for optimal control of stochastic systems with random jumps, SIAM Journal on Control and Optimization 32 (1994) 1447–1475.