Multi-dimensional G-Brownian motion and related stochastic calculus under G-expectation


Stochastic Processes and their Applications 118 (2008) 2223–2253 www.elsevier.com/locate/spa

Shige Peng*
Institute of Mathematics, Shandong University, 250100 Jinan, China

Received 26 January 2006; received in revised form 14 November 2006; accepted 29 October 2007. Available online 14 January 2008.

Abstract

We develop a notion of nonlinear expectation – G-expectation – generated by a nonlinear heat equation with infinitesimal generator G. We first study multi-dimensional G-normal distributions. With this nonlinear distribution we can introduce our G-expectation, under which the canonical process is a multi-dimensional G-Brownian motion. We then establish the related stochastic calculus, especially stochastic integrals of Itô's type with respect to our G-Brownian motion, and derive the related Itô formula. We also obtain the existence and uniqueness of solutions of stochastic differential equations under our G-expectation.
© 2008 Elsevier B.V. All rights reserved.

MSC: 60H10; 60H05; 60H30

Keywords: g-expectation; G-expectation; G-normal distribution; BSDE; SDE; Nonlinear probability theory; Nonlinear expectation; Brownian motion; Itô's stochastic calculus; Itô's integral; Itô's formula; Gaussian process; Quadratic variation process; Jensen's inequality; G-convexity

1. Introduction

The purpose of this paper is to extend classical stochastic calculus for multi-dimensional Brownian motion to the setting of nonlinear G-expectation. We first recall the general framework of nonlinear expectation studied in [40,39], where the usual linearity is replaced by positive homogeneity and subadditivity. Such a sublinear expectation functional enables us to construct a Banach space, similar to an L¹-space, starting from a functional lattice of Daniell's type.

* Tel.: +86 53188564100; fax: +86 53188564100.

E-mail address: [email protected].

0304-4149/$ - see front matter © 2008 Elsevier B.V. All rights reserved.
doi:10.1016/j.spa.2007.10.015


Then we proceed to construct a sublinear expectation on the space of continuous paths from R⁺ to R^d starting from 0, which will be an analogue of Wiener's law. The operation mainly consists in replacing the Brownian semigroup by a nonlinear semigroup coming from the solution of a nonlinear parabolic partial differential equation (1), in which a mapping G acts on Hessian matrices. Indeed, the Markov property permits one to define in the same way nonlinear conditional expectations with respect to the past. We then present some rules and examples of computations under the newly constructed G-expectation. The fact that the underlying marginal nonlinear expectations are G-normal distributions, derived from the nonlinear heat equation (1), is very helpful for estimating natural functionals. As a result, our G-Brownian motion also has independent increments with identical G-normal distributions.

G-Brownian motion has a very rich and interesting new structure which non-trivially generalizes the classical one. We thus can establish the related stochastic calculus, especially G-Itô integrals (see Itô [23], 1942) and the related quadratic variation process ⟨B⟩. A very interesting new phenomenon of our G-Brownian motion is that its quadratic variation process ⟨B⟩ also has independent increments which are identically distributed. The corresponding G-Itô formula is obtained. We then introduce the notion of G-martingales and the related Jensen inequality for a new type of "G-convex" functions. We also establish the existence and uniqueness of the solution of a stochastic differential equation under our stochastic calculus, by the same Picard iteration as in the classical situation. Books on stochastic calculus, e.g., [10,20,22,24,29,34,46,47,51], are recommended for understanding the present results and some further possible developments of this new stochastic calculus.
As indicated in Remark 2, the nonlinear expectations discussed in this paper can be regarded as coherent risk measures. Together with the related conditional expectations (Ê[·|H_t])_{t≥0}, this yields a dynamic risk measure: the G-risk measure.

The other motivation for our G-expectation is the notion of (nonlinear) g-expectations introduced in [36,37]. Here g is the generating function of a backward stochastic differential equation (BSDE) on a given probability space (Ω, F, P). The natural definition of the conditional g-expectations with respect to the past induces rich properties of nonlinear g-martingale theory (see, among others, [3,5–7,11,12,8,9,26,27,38,41,42,45]). Recently g-expectations have also been studied as dynamic risk measures: g-risk measures (cf. [48,4,16]). Fully nonlinear superhedging is also a possible application (cf. [31,49], where a new BSDE approach was introduced).

The notion of g-expectation is defined on a given probability space. In [40] (see also [39]), we constructed a kind of filtration-consistent nonlinear expectation through the so-called nonlinear Markov chain. As compared with the framework of g-expectations, the theory of G-expectation is intrinsic, in a meaning similar to that of "intrinsic geometry": it is not based on a given (linear) probability space. Since the classical Brownian expectation, as well as many other linear and nonlinear expectations, is dominated by our G-expectation (see Remark 26, Example 42 and [40]) and thus can be considered as a continuous functional, our theory also provides a flexible theoretical framework.

One-dimensional G-Brownian motion was studied in [43]. Unlike in the classical situation, in general we cannot find a system of coordinates under which the corresponding components Bⁱ, i = 1, …, d, are mutually independent. The mutual quadratic variations ⟨Bⁱ, Bʲ⟩ will play an essential role.

During the revision of this paper, the author found a very interesting paper [17] by Denis and Martini on the super-pricing of contingent claims under model uncertainty of volatility. They introduce a norm on the space of continuous paths Ω = C([0, T]) which corresponds to our L²_G-norm and develop a stochastic integral. There is no notion of nonlinear expectation


such as G-expectation, conditional G-expectation, the related G-normal distribution or the notion of independence in their paper. On the other hand, powerful tools from capacity theory enable them to obtain pathwise results for random variables and stochastic processes through the language of "quasi-surely" (see Feyel and de La Pradelle [18]) in place of the "almost surely" of classical probability theory. Their method provides a way to proceed with a pathwise analysis for our G-Brownian motion and the related stochastic calculus under G-expectation; see the forthcoming paper of Denis, Hu and Peng.

This paper is organized as follows. In Section 2, we recall the framework of nonlinear expectation established in [40] and adapt it to our objective. In Section 3 we introduce the d-dimensional G-normal distribution and discuss its main properties. In Section 4 we introduce d-dimensional G-Brownian motion, the corresponding G-expectation and their main properties. We then establish the stochastic integral with respect to G-Brownian motion of Itô's type, the related quadratic variation processes and G-Itô's formula in Section 5, G-martingales and Jensen's inequality for G-convex functions in Section 6, and the existence and uniqueness theorem for SDEs driven by G-Brownian motion in Section 7.

All the results of this paper are based only on very basic knowledge of Banach spaces and the parabolic partial differential equation (1). When this G-heat equation (1) is linear, our G-Brownian motion becomes the classical Brownian motion. This paper thus also provides an analytical shortcut for reaching the sophisticated Itô calculus.

2. Nonlinear expectation: A general framework

We briefly recall the notion of nonlinear expectations introduced in [40]. Following Daniell's famous integration (cf. Daniell, 1918 [14]; see also [50]), we begin with a vector lattice.
Let Ω be a given set and let H be a vector lattice of real functions defined on Ω containing 1; namely, H is a linear space such that 1 ∈ H and X ∈ H implies |X| ∈ H. H is a space of random variables. We assume that the functions in H are all bounded.

Definition 1. A nonlinear expectation Ê is a functional Ê : H → R satisfying the following properties:
(a) Monotonicity: if X, Y ∈ H and X ≥ Y, then Ê[X] ≥ Ê[Y].
(b) Preservation of constants: Ê[c] = c.

In this paper we are interested in sublinear expectations, which further satisfy:
(c) Subadditivity (or the self-dominated property): Ê[X + Y] ≤ Ê[X] + Ê[Y], for all X, Y ∈ H.
(d) Positive homogeneity: Ê[λX] = λÊ[X], for all λ ≥ 0, X ∈ H.

Remark 2. It is clear that (b) + (c) implies
(e) Translation by constants: Ê[X + c] = Ê[X] + c.
Indeed,

$$\hat{\mathbb{E}}[X] + c = \hat{\mathbb{E}}[X] - \hat{\mathbb{E}}[-c] \le \hat{\mathbb{E}}[X + c] \le \hat{\mathbb{E}}[X] + \hat{\mathbb{E}}[c] = \hat{\mathbb{E}}[X] + c.$$

We recall that the notion of sublinear expectation above was systematically introduced by Artzner, Delbaen, Eber and Heath [1,2], in the case where Ω is a finite set, and by Delbaen [15] for the general situation, with the notation of risk measure: ρ(X) = Ê[−X]. See also Huber [21] for an even earlier study of this notion Ê (called the upper expectation E* in Ch. 10 of [21]).
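As a concrete illustration of Definition 1 (a sketch of ours, not from the paper): on a finite Ω, the upper envelope of any family of linear expectations is a sublinear expectation, and properties (a)–(d) can be checked directly. The probability vectors below are illustrative choices.

```python
import numpy as np

# Hedged numerical sketch: a sublinear expectation on a three-point Ω as the
# sup over a (here finite) family of linear expectations.
probs = [np.array([0.2, 0.3, 0.5]),
         np.array([0.5, 0.3, 0.2]),
         np.array([1/3, 1/3, 1/3])]

def E_hat(X):
    """Sublinear expectation: sup over a family of linear expectations."""
    return max(float(p @ X) for p in probs)

X = np.array([1.0, -2.0, 3.0])
Y = np.array([0.5, 0.5, -1.0])

assert E_hat(np.full(3, 7.0)) == 7.0                    # (b) constants
assert E_hat(X) >= E_hat(X - 1.0)                       # (a) monotonicity
assert E_hat(X + Y) <= E_hat(X) + E_hat(Y) + 1e-12      # (c) subadditivity
assert abs(E_hat(2.5 * X) - 2.5 * E_hat(X)) < 1e-12     # (d) pos. homogeneity
```

Translation by constants, property (e), follows automatically here since every probability vector sums to one.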


We follow [40] in introducing a Banach space via H and Ê. We define ‖X‖ := Ê[|X|], X ∈ H. H forms a normed space (H, ‖·‖) in the following sense: for each X, Y ∈ H such that ‖X − Y‖ = 0, we set X = Y. This is equivalent to saying that the linear subspace H₀ := {X ∈ H : ‖X‖ = 0} is the null space, or in other words, that we only consider the elements of the quotient space H/H₀. Under such an arrangement (H, ‖·‖) is a normed space. We denote by ([H], ‖·‖), or simply [H], the completion of (H, ‖·‖); (H, ‖·‖) is a dense subspace of the Banach space ([H], ‖·‖) (see, e.g., Yosida [52], Sec. I-10).

For any X ∈ H, the mappings X⁺(ω) = max{X(ω), 0} and X⁻(ω) = max{−X(ω), 0}, from H to H, satisfy

$$|X^+ - Y^+| \le |X - Y|, \qquad X^- - Y^- \le (Y - X)^+ \le |X - Y|.$$

Thus both are contractions under the norm ‖·‖ and can be continuously extended to the Banach space [H]. We define the partial order "≥" in this Banach space.

Definition 3. An element X in ([H], ‖·‖) is said to be nonnegative, written X ≥ 0 or 0 ≤ X, if X = X⁺. We also write X ≥ Y, or Y ≤ X, if X − Y ≥ 0.

It is easy to check that if X ≥ Y and Y ≥ X, then X = Y in ([H], ‖·‖). The nonlinear expectation Ê[·] can be continuously extended to ([H], ‖·‖), on which (a)–(e) still hold.

3. G-normal distributions

For a given positive integer n, we denote by (x, y) the scalar product of x, y ∈ Rⁿ and by |x| = (x, x)^{1/2} the Euclidean norm of x. We denote by lip(Rⁿ) the space of all bounded Lipschitz real functions on Rⁿ.

We introduce a notion of nonlinear distribution, the G-normal distribution. A G-normal distribution is a nonlinear expectation defined on lip(R^d) (here R^d is considered as Ω and lip(R^d) as H):

$$P_1^G(\phi) = u(1, 0) : \phi \in \mathrm{lip}(\mathbb{R}^d) \mapsto \mathbb{R},$$

where u = u(t, x) is a bounded continuous function on [0, ∞) × R^d which is the viscosity solution of the following nonlinear parabolic partial differential equation (PDE):

$$\frac{\partial u}{\partial t} - G(D^2 u) = 0, \qquad u(0, x) = \phi(x), \qquad (t, x) \in [0, \infty) \times \mathbb{R}^d, \tag{1}$$

where D²u is the Hessian matrix of u, i.e., D²u = (∂²_{x_i x_j} u)_{i,j=1}^d, and

$$G(A) = G_\Gamma(A) = \frac{1}{2}\sup_{\gamma\in\Gamma}\operatorname{tr}[\gamma\gamma^T A], \qquad A = (A_{ij})_{i,j=1}^d \in \mathbb{S}_d. \tag{2}$$

Here S_d denotes the space of d × d symmetric matrices and Γ is a given non-empty, bounded and closed subset of R^{d×d}, the space of all d × d matrices.
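The function G of (2) is explicitly computable for simple choices of Γ. The following is a hedged sketch of ours (the diagonal Γ is the one of Remark 6 below; function names are illustrative): for Γ = {diag(γ₁, …, γ_d) : γᵢ ∈ [σᵢ, 1]} one has tr(γγᵀA) = Σᵢ γᵢ²Aᵢᵢ, so the supremum is attained coordinatewise.

```python
import numpy as np

# Hedged sketch (our illustration): G(A) = (1/2) sup_{γ∈Γ} tr(γγᵀA) for the
# diagonal set Γ = {diag(γ_1,...,γ_d) : γ_i ∈ [σ_i, 1]}.  For diagonal γ,
# tr(γγᵀA) = Σ_i γ_i² A_ii, so each term is maximized separately:
# γ_i = 1 where A_ii ≥ 0, and γ_i = σ_i where A_ii < 0.
def G_diagonal(A, sigma):
    d = np.diag(A)
    return 0.5 * float(np.sum(np.where(d >= 0, d, sigma**2 * d)))

A = np.array([[2.0, 0.7],
              [0.7, -4.0]])        # off-diagonal entries do not matter here
sigma = np.array([0.5, 0.5])
assert abs(G_diagonal(A, sigma) - 0.5) < 1e-12   # (1/2)(2 + 0.25·(-4)) = 0.5
```

For a general bounded closed Γ the supremum would have to be approximated, e.g. by sampling Γ; the closed form above is special to the diagonal case.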


Remark 4. The nonlinear heat equation (1) is a special kind of Hamilton–Jacobi–Bellman equation. The existence and uniqueness of (1) in the sense of a viscosity solution can be found in, for example, [13,19,35,51], and in [30] for the C^{1,2}-solution if γγᵀ ≥ σ₀I_d for each γ ∈ Γ, for a given constant σ₀ > 0 (see also [34] for elliptic cases). It is a known result that u(t, ·) ∈ lip(R^d) (see e.g. [51], Ch. 4, Prop. 3.1, or [35], Lemma 3.1, for the Lipschitz continuity of u(t, ·), or Lemma 5.5 and Proposition 5.6 in [39] for a more general conclusion). The boundedness follows directly from the comparison theorem (or maximum principle) of this PDE. It is also easy to check that, for a given ψ ∈ lip(R^d × R^d), P₁^G(ψ(x, ·)) is still a bounded Lipschitz function in x.

Remark 5. An equivalent definition of the G-normal distribution and the corresponding G-normally distributed random variables is given in [44]. In the case where Γ is a singleton {γ₀}, the above PDE becomes a standard linear heat equation. Thus for G₀ = G_{{γ₀}} the corresponding G₀-distribution is just the d-dimensional classical normal distribution N(0, γ₀γ₀ᵀ). In the typical case γ₀ = I_d ∈ Γ, we have

$$P_1^{G_0}(\phi) = \frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d}\exp\left[-\sum_{i=1}^d\frac{(x^i)^2}{2}\right]\phi(x)\,\mathrm{d}x.$$

In the case where γ₀ ∈ Γ, the comparison theorem for PDEs gives

$$P_1^G(\phi) \ge P_1^{G_0}(\phi), \qquad \forall \phi \in \mathrm{lip}(\mathbb{R}^d). \tag{3}$$

More generally, for each subset Γ′ ⊂ Γ, the corresponding P^{G_{Γ′}}-distribution is dominated by P^G in the following sense:

$$P_1^{G_{\Gamma'}}(\phi) - P_1^{G_{\Gamma'}}(\psi) \le P_1^{G}(\phi - \psi), \qquad \forall \phi, \psi \in \mathrm{lip}(\mathbb{R}^d).$$

In fact it is easy to check that G_{Γ′}(A) − G_{Γ′}(B) ≤ G_Γ(A − B) for any A, B ∈ S_d. From this we have the above inequality (see the Appendix of [44] for the proof).

Remark 6. In [43] we discussed the one-dimensional case, which corresponds to d = 1 and Γ = [σ, 1] ⊂ R, where σ ∈ [0, 1] is a given constant. In this case the nonlinear heat equation (1) becomes

$$\frac{\partial u}{\partial t} - \frac{1}{2}\left[(\partial^2_{xx} u)^+ - \sigma^2(\partial^2_{xx} u)^-\right] = 0, \qquad u(0, x) = \phi(x), \qquad (t, x) \in [0, \infty) \times \mathbb{R}.$$

In the multi-dimensional case we also have the following typical nonlinear heat equation:

$$\frac{\partial u}{\partial t} - \frac{1}{2}\sum_{i=1}^d\left[(\partial^2_{x^i x^i} u)^+ - \sigma_i^2(\partial^2_{x^i x^i} u)^-\right] = 0,$$

where σᵢ ∈ [0, 1] are given constants. This corresponds to Γ = {diag[γ₁, …, γ_d] : γᵢ ∈ [σᵢ, 1], i = 1, …, d}.

The corresponding normal distribution with mean x ∈ R^d and square variation t > 0 is P₁^G(φ(x + √t × ·)). Just as in the classical situation of a normal distribution, we have


Lemma 7. For each φ ∈ lip(R^d), the function

$$u(t, x) = P_1^G(\phi(x + \sqrt{t}\times\cdot)), \qquad (t, x) \in [0, \infty)\times\mathbb{R}^d \tag{4}$$

is the solution of the nonlinear heat equation (1) with the initial condition u(0, ·) = φ(·).

Proof. Let u ∈ C([0, ∞) × R^d) be the viscosity solution of (1) with u(0, ·) = φ(·) ∈ lip(R^d). For a fixed (t̄, x̄) ∈ (0, ∞) × R^d, we define ū(t, x) = u(t t̄, x√t̄ + x̄). Then ū is the viscosity solution of (1) with the initial condition ū(0, x) = φ(x√t̄ + x̄). Indeed, let ψ be a C^{1,2} function on (0, ∞) × R^d such that ψ ≥ ū (resp. ψ ≤ ū) and ψ(τ, ξ) = ū(τ, ξ) for a fixed (τ, ξ) ∈ (0, ∞) × R^d. We then have ψ(t/t̄, (x − x̄)/√t̄) ≥ u(t, x) for all (t, x), and

$$\psi\!\left(\frac{t}{\bar t}, \frac{x - \bar x}{\sqrt{\bar t}}\right) = u(t, x) \quad \text{at } (t, x) = (\tau\bar t,\ \xi\sqrt{\bar t} + \bar x).$$

Since u is the viscosity solution of (1), at the point (t, x) = (τ t̄, ξ√t̄ + x̄) we have

$$\frac{\partial}{\partial t}\,\psi\!\left(\frac{t}{\bar t}, \frac{x - \bar x}{\sqrt{\bar t}}\right) - G\!\left(D^2\psi\!\left(\frac{t}{\bar t}, \frac{x - \bar x}{\sqrt{\bar t}}\right)\right) \le 0 \quad (\text{resp.} \ge 0).$$

But G is positively homogeneous, i.e., G(λA) = λG(A) for λ ≥ 0; we thus derive

$$\left[\frac{\partial\psi(t, x)}{\partial t} - G(D^2\psi(t, x))\right]\bigg|_{(t,x)=(\tau,\xi)} \le 0 \quad (\text{resp.} \ge 0).$$

This implies that ū is a viscosity subsolution (resp. supersolution) of (1). According to the definition of P^G(·), we obtain (4). □

Definition 8. We define

$$P_t^G(\phi)(x) = P_1^G(\phi(x + \sqrt{t}\times\cdot)) = u(t, x), \qquad (t, x) \in [0, \infty)\times\mathbb{R}^d. \tag{5}$$

From the above lemma, for each φ ∈ lip(R^d), we have the following nonlinear version of the chain rule:

$$P_t^G(P_s^G(\phi))(x) = P_{t+s}^G(\phi)(x), \qquad s, t \in [0, \infty),\ x \in \mathbb{R}^d. \tag{6}$$

This chain rule was first established by Nisio [32,33] under the name of the "envelope of Markovian semigroups". See also [40].

Lemma 9. The solution of (1) with initial condition u(0, x) = φ((a, x)), for a given φ ∈ lip(R), has the form u(t, x) = ū(t, x̄), x̄ = (a, x), where ū is the solution of

$$\frac{\partial\bar u}{\partial t} - G_a(\partial^2_{\bar x\bar x}\bar u) = 0, \qquad \bar u(0, \bar x) = \phi(\bar x), \qquad (t, \bar x) \in [0, \infty)\times\mathbb{R}, \tag{7}$$

where

$$G_a(\beta) = \frac{1}{2}\max_{\gamma\in\Gamma}\operatorname{tr}[\gamma\gamma^T aa^T\beta], \qquad \beta \in \mathbb{R}.$$

The above PDE can be written as

$$\frac{\partial\bar u}{\partial t} - \frac{1}{2}\left[\sigma_{aa^T}(\partial^2_{\bar x\bar x}\bar u)^+ + \sigma_{-aa^T}(\partial^2_{\bar x\bar x}\bar u)^-\right] = 0, \qquad \bar u(0, \bar x) = \phi(\bar x), \tag{8}$$


where we define aaᵀ = [aⁱaʲ]_{i,j=1}^d ∈ S_d and

$$\sigma_A = \sup_{\gamma\in\Gamma}\operatorname{tr}[\gamma\gamma^T A] = 2G(A), \qquad A \in \mathbb{S}_d. \tag{9}$$

Remark 10. It is clear that the functional P₁^{G_a}(φ) = ū(1, 0) : φ ∈ lip(R) → R constitutes a special one-dimensional nonlinear normal distribution, called the G_a-normal distribution.

Proof. It is clear that the PDE (7) has a unique viscosity solution. We then can set u(t, x) = ū(t, (a, x)) and check that u is the viscosity solution of (1); (8) is then easy to check. □

Example 11. In the above lemma, if φ is convex and σ_{aaᵀ} > 0, then

$$P_t^G(\phi((a,\cdot)))(x) = \frac{1}{\sqrt{2\pi\sigma_{aa^T}\,t}}\int_{-\infty}^{\infty}\phi(y)\exp\left(-\frac{(y-x)^2}{2\sigma_{aa^T}\,t}\right)\mathrm{d}y.$$

If φ is concave and σ_{−aaᵀ} < 0, then

$$P_t^G(\phi((a,\cdot)))(x) = \frac{1}{\sqrt{2\pi|\sigma_{-aa^T}|\,t}}\int_{-\infty}^{\infty}\phi(y)\exp\left(-\frac{(y-x)^2}{2|\sigma_{-aa^T}|\,t}\right)\mathrm{d}y.$$

Proposition 12. We have:
(i) For each t > 0, the G-normal distribution P_t^G is a nonlinear expectation on the lattice lip(R^d), with Ω = R^d, satisfying (a)–(e) of Definition 1. The corresponding completion space [H] = [lip(R^d)]_t under the norm ‖φ‖_t := P_t^G(|φ|)(0) contains φ(x) = x₁^{n₁} × ⋯ × x_d^{n_d}, nᵢ = 1, 2, …, i = 1, …, d, x = (x₁, …, x_d)ᵀ, as well as x₁^{n₁} × ⋯ × x_d^{n_d} × ψ(x), ψ ∈ lip(R^d), as its special elements. Relation (5) still holds.
(ii) For each a = (a₁, …, a_d)ᵀ ∈ R^d and A ∈ S_d,

$$P_t^G((a,x)_{x\in\mathbb{R}^d}) = 0, \qquad P_t^G(((Ax,x))_{x\in\mathbb{R}^d}) = t\,\sigma_A = 2G(A)\,t,$$
$$P_t^G(((a,x)^2)_{x\in\mathbb{R}^d}) = \sigma_{aa^T}\,t, \qquad P_t^G((-(a,x)^2)_{x\in\mathbb{R}^d}) = \sigma_{-aa^T}\,t,$$
$$P_t^G(((a,x)^4)_{x\in\mathbb{R}^d}) = 3\,\sigma_{aa^T}^2\,t^2, \qquad P_t^G((-(a,x)^4)_{x\in\mathbb{R}^d}) = -3\,|\sigma_{-aa^T}|^2\,t^2.$$

Proof. (ii) By Lemma 9, we have explicit solutions of the nonlinear PDE (1) for the following initial conditions u(0, x) = φ(x):

$$\phi(x) = (a,x) \;\Longrightarrow\; u(t,x) = (a,x),$$
$$\phi(x) = (a,x)^4 \;\Longrightarrow\; u(t,x) = (a,x)^4 + 6(a,x)^2\sigma_{aa^T}\,t + 3\,\sigma_{aa^T}^2\,t^2,$$
$$\phi(x) = -(a,x)^4 \;\Longrightarrow\; u(t,x) = -(a,x)^4 + 6(a,x)^2\sigma_{-aa^T}\,t - 3\,|\sigma_{-aa^T}|^2\,t^2.$$


Similarly, we can check that φ(x) = (Ax, x) implies u(t, x) = (Ax, x) + σ_A t. Setting A = aaᵀ and A = −aaᵀ, this yields

$$\phi(x) = (a,x)^2 \;\Longrightarrow\; u(t,x) = (a,x)^2 + \sigma_{aa^T}\,t,$$
$$\phi(x) = -(a,x)^2 \;\Longrightarrow\; u(t,x) = -(a,x)^2 + \sigma_{-aa^T}\,t.$$

More generally, for φ(x) = (a, x)^{2n} we have

$$u(t,x) = \frac{1}{\sqrt{2\pi\sigma_{aa^T}\,t}}\int_{-\infty}^{\infty} y^{2n}\exp\left(-\frac{(y-(a,x))^2}{2\sigma_{aa^T}\,t}\right)\mathrm{d}y.$$

By this we can prove (i). □
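The explicit solutions above can be cross-checked numerically. The following is a hedged sketch of ours (not from the paper): an explicit finite-difference scheme for the one-dimensional G-heat equation of Remark 6 with Γ = [σ, 1], used to verify the second-moment formulas of Proposition 12 for d = 1, a = 1, namely P_t^G(x²) = σ_{aaᵀ}t = t and P_t^G(−x²) = σ_{−aaᵀ}t = −σ²t. All grid parameters are illustrative.

```python
import numpy as np

# Hedged sketch: explicit finite differences for the 1-d G-heat equation
#   ∂u/∂t = ½[(u_xx)⁺ − σ²(u_xx)⁻],  Γ = [σ, 1]  (Remark 6).
def solve_G_heat(phi, sigma=0.5, T=1.0, L=8.0, nx=401, nt=4000):
    x = np.linspace(-L, L, nx)
    dx, dt = x[1] - x[0], T / nt    # dt < dx² keeps the explicit scheme stable
    u = phi(x)
    for _ in range(nt):
        uxx = np.zeros_like(u)
        uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * 0.5 * (np.maximum(uxx, 0.0)
                            - sigma**2 * np.maximum(-uxx, 0.0))
    return u[np.argmin(np.abs(x))]   # u(T, 0) ≈ P_T^G(φ)(0)

# Convex φ feels the largest variance σ̄² = 1; concave φ the smallest, σ² = 0.25.
assert abs(solve_G_heat(lambda x: x**2) - 1.0) < 0.05
assert abs(solve_G_heat(lambda x: -(x**2)) + 0.25) < 0.05
```

The sign flip between the two runs is exactly the convex/concave dichotomy of Example 11.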

4. G-Brownian motions under G-expectations

In the rest of this paper, we set Ω = C₀^d(R⁺), the space of all R^d-valued continuous paths (ω_t)_{t∈R⁺} with ω₀ = 0, equipped with the distance

$$\rho(\omega^1, \omega^2) := \sum_{i=1}^{\infty} 2^{-i}\left[\left(\max_{t\in[0,i]}|\omega^1_t - \omega^2_t|\right)\wedge 1\right].$$

Ω is the classical canonical space and ω = (ω_t)_{t≥0} is the corresponding canonical process. It is well known that on this canonical space there exists a Wiener measure (Ω, F, P) under which the canonical process B_t(ω) = ω_t is a d-dimensional Brownian motion.

For each fixed T ≥ 0 we consider the following space of random variables:

$$L_{ip}^0(\mathcal{H}_T) := \{X(\omega) = \phi(\omega_{t_1}, \dots, \omega_{t_m}):\ m \ge 1,\ t_1, \dots, t_m \in [0, T],\ \phi \in \mathrm{lip}(\mathbb{R}^{d\times m})\}.$$

It is clear that {L⁰_{ip}(H_t)}_{t≥0} constitutes a family of sublattices such that L⁰_{ip}(H_t) ⊆ L⁰_{ip}(H_T) for t ≤ T < ∞. L⁰_{ip}(H_t) represents the past history of ω before time t. Its completion will play the same role as the Brownian filtration F_t^B in classical stochastic analysis. We also define

$$L_{ip}^0(\mathcal{H}) := \bigcup_{n=1}^{\infty} L_{ip}^0(\mathcal{H}_n).$$

Remark 13. lip(R^{d×m}), L⁰_{ip}(H_T) and L⁰_{ip}(H) are vector lattices. Moreover, since φ, ψ ∈ lip(R^{d×m}) implies φ·ψ ∈ lip(R^{d×m}), we have that X, Y ∈ L⁰_{ip}(H_T) implies X·Y ∈ L⁰_{ip}(H_T), and X, Y ∈ L⁰_{ip}(H) implies X·Y ∈ L⁰_{ip}(H).

We consider the canonical space and set B_t(ω) = ω_t, t ∈ [0, ∞), for ω ∈ Ω.

Definition 14. The canonical process B is called a (d-dimensional) G-Brownian motion under a nonlinear expectation Ê defined on L⁰_{ip}(H) if:
(i) For each s, t ≥ 0 and ψ ∈ lip(R^d), B_t and B_{t+s} − B_s are identically distributed:

$$\hat{\mathbb{E}}[\psi(B_{t+s} - B_s)] = \hat{\mathbb{E}}[\psi(B_t)] = P_t^G(\psi).$$

(ii) For each m = 1, 2, …, 0 ≤ t₁ < ⋯ < t_m < ∞, the increment B_{t_m} − B_{t_{m−1}} is "backwardly" independent of B_{t₁}, …, B_{t_{m−1}} in the following sense: for each φ ∈ lip(R^{d×m}),

$$\hat{\mathbb{E}}[\phi(B_{t_1}, \dots, B_{t_{m-1}}, B_{t_m})] = \hat{\mathbb{E}}[\phi_1(B_{t_1}, \dots, B_{t_{m-1}})],$$


where φ₁(x¹, …, x^{m−1}) = Ê[φ(x¹, …, x^{m−1}, B_{t_m} − B_{t_{m−1}} + x^{m−1})], x¹, …, x^{m−1} ∈ R^d.

The related conditional expectation of φ(B_{t₁}, …, B_{t_m}) under H_{t_k} is defined by

$$\hat{\mathbb{E}}[\phi(B_{t_1}, \dots, B_{t_k}, \dots, B_{t_m})\,|\,\mathcal{H}_{t_k}] = \phi_{m-k}(B_{t_1}, \dots, B_{t_k}), \tag{10}$$

where

$$\phi_{m-k}(x^1, \dots, x^k) = \hat{\mathbb{E}}[\phi(x^1, \dots, x^k, B_{t_{k+1}} - B_{t_k} + x^k, \dots, B_{t_m} - B_{t_k} + x^k)].$$

It is proved in [40] that Ê[·] consistently defines a nonlinear expectation on the vector lattice L⁰_{ip}(H_T), as well as on L⁰_{ip}(H), satisfying (a)–(e) in Definition 1. It follows that Ê[|X|], X ∈ L⁰_{ip}(H_T) (resp. L⁰_{ip}(H)), is a norm, and thus L⁰_{ip}(H_T) (resp. L⁰_{ip}(H)) can be extended, under this norm, to a Banach space. We denote this space by L¹_G(H_T) (resp. L¹_G(H)). For each 0 ≤ t ≤ T < ∞, we have L¹_G(H_t) ⊆ L¹_G(H_T) ⊂ L¹_G(H). In L¹_G(H_T) (resp. L¹_G(H)), Ê[·] still satisfies (a)–(e) in Definition 1.

Remark 15. It is suggestive to denote L⁰_{ip}(H_t) by H_t⁰, L¹_G(H_t) by H_t and L¹_G(H) by H, and thus to consider the conditional expectation Ê[·|H_t] as a projective mapping from H to H_t. The notation L¹_G(H_t) is due to the similarity with L¹(Ω, F_t, P) in classical stochastic analysis.

Definition 16. The expectation Ê[·] : L¹_G(H) → R introduced through the above procedure is called G-expectation, or G-Brownian expectation. The corresponding canonical process B is said to be a G-Brownian motion under Ê[·].

For a given p > 1, we also define L^p_G(H) = {X ∈ L¹_G(H) : |X|^p ∈ L¹_G(H)}. L^p_G(H) is also a Banach space under the norm ‖X‖_p := (Ê[|X|^p])^{1/p}. We have (see the Appendix)

$$\|X + Y\|_p \le \|X\|_p + \|Y\|_p$$

and, for each X ∈ L^p_G, Y ∈ L^q_G with 1/p + 1/q = 1,

$$\|XY\| = \hat{\mathbb{E}}[|XY|] \le \|X\|_p\,\|Y\|_q.$$

With this we have ‖X‖_p ≤ ‖X‖_{p′} if p ≤ p′.

We now consider the conditional expectation introduced in Definition 14 (see (10)). For each fixed t = t_k ≤ T, the conditional expectation Ê[·|H_t] : L⁰_{ip}(H_T) → L⁰_{ip}(H_t) is a continuous mapping under ‖·‖. Indeed, we have Ê[Ê[X|H_t]] = Ê[X], X ∈ L⁰_{ip}(H_T), and, since P_t^G is subadditive,

$$\hat{\mathbb{E}}[X|\mathcal{H}_t] - \hat{\mathbb{E}}[Y|\mathcal{H}_t] \le \hat{\mathbb{E}}[X - Y|\mathcal{H}_t] \le \hat{\mathbb{E}}[|X - Y|\,|\mathcal{H}_t].$$

We thus obtain

$$\hat{\mathbb{E}}[\hat{\mathbb{E}}[X|\mathcal{H}_t] - \hat{\mathbb{E}}[Y|\mathcal{H}_t]] \le \hat{\mathbb{E}}[X - Y]$$

and

$$\|\hat{\mathbb{E}}[X|\mathcal{H}_t] - \hat{\mathbb{E}}[Y|\mathcal{H}_t]\| \le \|X - Y\|.$$

It follows that Ê[·|H_t] can also be extended as a continuous mapping L¹_G(H_T) → L¹_G(H_t). If the above T is not fixed, then we obtain Ê[·|H_t] : L¹_G(H) → L¹_G(H_t).


Proposition 17. We list properties of Ê[·|H_t], t ∈ [0, T], that hold in L⁰_{ip}(H_T) and still hold for X, Y ∈ L¹_G(H_T):

(i) Ê[X|H_t] = X, for X ∈ L¹_G(H_t), t ≤ T.
(ii) If X ≥ Y, then Ê[X|H_t] ≥ Ê[Y|H_t].
(iii) Ê[X|H_t] − Ê[Y|H_t] ≤ Ê[X − Y|H_t].
(iv) Ê[Ê[X|H_t]|H_s] = Ê[X|H_{t∧s}]; in particular, Ê[Ê[X|H_t]] = Ê[X].
(v) Ê[X + η|H_t] = Ê[X|H_t] + η, for η ∈ L¹_G(H_t).
(vi) Ê[ηX|H_t] = η⁺Ê[X|H_t] + η⁻Ê[−X|H_t], for bounded η ∈ L¹_G(H_t).
(vii) We have the following independence:

$$\hat{\mathbb{E}}[X|\mathcal{H}_t] = \hat{\mathbb{E}}[X], \qquad \forall X \in L_G^1(\mathcal{H}_T^t),\ \forall T \ge 0,$$

where L¹_G(H_T^t) is the extension, under ‖·‖, of L⁰_{ip}(H_T^t), which consists of random variables of the form φ(B^t_{t₁}, B^t_{t₂}, …, B^t_{t_m}), φ ∈ lip(R^{d×m}), t₁, …, t_m ∈ [0, T], m = 1, 2, …. Here we define

$$B^t_s = B_{t+s} - B_t, \qquad s \ge 0.$$

(viii) The increments of B are identically distributed:

$$\hat{\mathbb{E}}[\phi(B^t_{t_1}, B^t_{t_2}, \dots, B^t_{t_m})] = \hat{\mathbb{E}}[\phi(B_{t_1}, B_{t_2}, \dots, B_{t_m})].$$

The meaning of the independence in (vii) is similar to the classical one:

Definition 18. An Rⁿ-valued random variable Y ∈ (L¹_G(H))ⁿ is said to be independent of H_t for some given t if for each φ ∈ lip(Rⁿ) we have

$$\hat{\mathbb{E}}[\phi(Y)|\mathcal{H}_t] = \hat{\mathbb{E}}[\phi(Y)].$$

It is seen that the above property (vii) also holds for the situation X ∈ L¹_G(H^t), where L¹_G(H^t) is the completion of the sublattice ∪_{T≥0} L¹_G(H_T^t) under ‖·‖.

From the above results we have:

Proposition 19. For each fixed t ≥ 0, (B^t_s)_{s≥0} is a G-Brownian motion in L¹_G(H^t) under the same G-expectation Ê[·].

Remark 20. We can prove, using Lemma 7, that B̃ = (√λ B_{t/λ})_{t≥0} is also a G-Brownian motion. This is the scaling property of G-Brownian motion, which is the same as that of the usual Brownian motion.

The following property is very useful.

Proposition 21. Let X, Y ∈ L¹_G(H) be such that Ê[Y|H_t] = −Ê[−Y|H_t], for some t ∈ [0, T]. Then we have

$$\hat{\mathbb{E}}[X + Y|\mathcal{H}_t] = \hat{\mathbb{E}}[X|\mathcal{H}_t] + \hat{\mathbb{E}}[Y|\mathcal{H}_t].$$

In particular, if Ê[Y|H_t] = Ê[−Y|H_t] = 0, then Ê[X + Y|H_t] = Ê[X|H_t].

Proof. This follows from the two properties Ê[X + Y|H_t] ≤ Ê[X|H_t] + Ê[Y|H_t] and Ê[X + Y|H_t] ≥ Ê[X|H_t] − Ê[−Y|H_t] = Ê[X|H_t] + Ê[Y|H_t]. □


Example 22. From the last relation of Proposition 12(ii), we have

$$\hat{\mathbb{E}}[(AB_t, B_t)] = \sigma_A\,t = 2G(A)\,t, \qquad \forall A \in \mathbb{S}_d.$$

More generally, for each s ≤ t and η = (η_{ij})_{i,j=1}^d ∈ L²_G(H_s; S_d),

$$\hat{\mathbb{E}}[(\eta B^s_t, B^s_t)\,|\,\mathcal{H}_s] = \sigma_\eta\,t = 2G(\eta)\,t, \qquad s, t \ge 0. \tag{11}$$

Definition 23. We will use, in the rest of this paper, the notation

$$B^a_t = (a, B_t), \qquad \text{for each } a = (a_1, \dots, a_d)^T \in \mathbb{R}^d. \tag{12}$$

From Lemma 9 and Remark 10,

$$\hat{\mathbb{E}}[\phi(B^a_t)] = P_t^G(\phi((a, \cdot))) = P_t^{G_a}(\phi),$$

where P^{G_a} is the (one-dimensional) G_a-normal distribution. Thus, according to Definition 14 for d-dimensional G-Brownian motion, B^a forms a one-dimensional G_a-Brownian motion for which the G_a-expectation coincides with Ê[·].

Example 24. For each 0 ≤ s ≤ t, we have

$$\hat{\mathbb{E}}[\psi(B_t - B_s)\,|\,\mathcal{H}_s] = \hat{\mathbb{E}}[\psi(B_t - B_s)].$$

If X ∈ L¹_G(H_t) is bounded and φ is a real convex function on R not growing too fast, then

$$\hat{\mathbb{E}}[X\phi(B^a_T - B^a_t)\,|\,\mathcal{H}_t] = X^+\hat{\mathbb{E}}[\phi(B^a_T - B^a_t)\,|\,\mathcal{H}_t] + X^-\hat{\mathbb{E}}[-\phi(B^a_T - B^a_t)\,|\,\mathcal{H}_t]$$
$$= \frac{X^+}{\sqrt{2\pi(T-t)\sigma_{aa^T}}}\int_{-\infty}^{\infty}\phi(x)\exp\left(-\frac{x^2}{2(T-t)\sigma_{aa^T}}\right)\mathrm{d}x$$
$$\quad - \frac{X^-}{\sqrt{2\pi(T-t)|\sigma_{-aa^T}|}}\int_{-\infty}^{\infty}\phi(x)\exp\left(-\frac{x^2}{2(T-t)|\sigma_{-aa^T}|}\right)\mathrm{d}x.$$

In particular, for n = 1, 2, …,

$$\hat{\mathbb{E}}[|B^a_t - B^a_s|^n\,|\,\mathcal{H}_s] = \hat{\mathbb{E}}[|B^a_{t-s}|^n] = \frac{1}{\sqrt{2\pi(t-s)\sigma_{aa^T}}}\int_{-\infty}^{\infty}|x|^n\exp\left(-\frac{x^2}{2(t-s)\sigma_{aa^T}}\right)\mathrm{d}x.$$

But we have Ê[−|B^a_t − B^a_s|ⁿ|H_s] = Ê[−|B^a_{t−s}|ⁿ], which is 0 when σ_{−aaᵀ} = 0, and equals

$$-\frac{1}{\sqrt{2\pi(t-s)|\sigma_{-aa^T}|}}\int_{-\infty}^{\infty}|x|^n\exp\left(-\frac{x^2}{2(t-s)|\sigma_{-aa^T}|}\right)\mathrm{d}x, \qquad \text{if } \sigma_{-aa^T} < 0.$$

Exactly as in classical cases, we have Ê[B^a_t − B^a_s|H_s] = 0 and

$$\hat{\mathbb{E}}[(B^a_t - B^a_s)^2\,|\,\mathcal{H}_s] = \sigma_{aa^T}(t-s), \qquad \hat{\mathbb{E}}[(B^a_t - B^a_s)^4\,|\,\mathcal{H}_s] = 3\sigma_{aa^T}^2(t-s)^2,$$
$$\hat{\mathbb{E}}[(B^a_t - B^a_s)^6\,|\,\mathcal{H}_s] = 15\sigma_{aa^T}^3(t-s)^3, \qquad \hat{\mathbb{E}}[(B^a_t - B^a_s)^8\,|\,\mathcal{H}_s] = 105\sigma_{aa^T}^4(t-s)^4,$$
$$\hat{\mathbb{E}}[|B^a_t - B^a_s|\,|\,\mathcal{H}_s] = \frac{\sqrt{2(t-s)\sigma_{aa^T}}}{\sqrt{\pi}}, \qquad \hat{\mathbb{E}}[|B^a_t - B^a_s|^3\,|\,\mathcal{H}_s] = \frac{2\sqrt{2}\,[(t-s)\sigma_{aa^T}]^{3/2}}{\sqrt{\pi}},$$
$$\hat{\mathbb{E}}[|B^a_t - B^a_s|^5\,|\,\mathcal{H}_s] = \frac{8\sqrt{2}\,[(t-s)\sigma_{aa^T}]^{5/2}}{\sqrt{\pi}}.$$
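The asymmetry between Ê[|B^a_t − B^a_s|ⁿ] and Ê[−|B^a_t − B^a_s|ⁿ] can be illustrated by simulation. A hedged sketch of ours: in the one-dimensional case Γ = [σ, 1] with a = 1, the convex function |x|ⁿ picks out the maximal variance σ_{aaᵀ} = 1, while −|x|ⁿ, being concave, picks out the minimal one, σ_{−aaᵀ} = −σ². All numerical values are illustrative.

```python
import numpy as np

# Hedged Monte Carlo illustration (ours, not from the paper) of the asymmetry
# in Example 24, for d = 1, a = 1, Γ = [σ, 1]: σ_{aaᵀ} = 1 and σ_{−aaᵀ} = −σ².
rng = np.random.default_rng(1)
sig_lo, t = 0.5, 2.0
Z = rng.normal(size=1_000_000)

E_hat_pos = np.mean((np.sqrt(t) * Z) ** 2)            # Ê[B_t²]: max variance
E_hat_neg = -np.mean((sig_lo * np.sqrt(t) * Z) ** 2)  # Ê[−B_t²]: min variance

assert abs(E_hat_pos - t) < 0.01              # Ê[B_t²] = σ_{aaᵀ} t = t
assert abs(E_hat_neg + sig_lo**2 * t) < 0.01  # Ê[−B_t²] = σ_{−aaᵀ} t = −σ² t
```

In particular −Ê[−B_t²] < Ê[B_t²] strictly whenever σ < 1, which is exactly the mean–variance uncertainty that a single linear expectation cannot express.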

Example 25. For each 0 ≤ t ≤ T and X ∈ L¹_G(H_t), we have

$$\hat{\mathbb{E}}[X(B^a_T - B^a_t)\,|\,\mathcal{H}_t] = X^+\hat{\mathbb{E}}[(B^a_T - B^a_t)\,|\,\mathcal{H}_t] + X^-\hat{\mathbb{E}}[-(B^a_T - B^a_t)\,|\,\mathcal{H}_t] = 0.$$

This, together with Proposition 21, yields

$$\hat{\mathbb{E}}[Y + X(B^a_T - B^a_t)\,|\,\mathcal{H}_t] = \hat{\mathbb{E}}[Y\,|\,\mathcal{H}_t], \qquad Y \in L_G^1(\mathcal{H}).$$

We also have

$$\hat{\mathbb{E}}[X(B^a_T - B^a_t)^2\,|\,\mathcal{H}_t] = X^+\hat{\mathbb{E}}[(B^a_T - B^a_t)^2\,|\,\mathcal{H}_t] + X^-\hat{\mathbb{E}}[-(B^a_T - B^a_t)^2\,|\,\mathcal{H}_t]$$
$$= [X^+\sigma_{aa^T} + X^-\sigma_{-aa^T}](T - t).$$

Remark 26. It is clear that we can define an expectation E⁰[·] on L⁰_{ip}(H) in the same way as in Definition 14, with the standard normal distribution P₁⁰(·) in place of P₁^G(·). If I_d ∈ Γ, then it follows from (3) that P₁⁰(·) is dominated by P₁^G(·) in the sense

$$P_1^0(\phi) - P_1^0(\psi) \le P_1^G(\phi - \psi).$$

Then E⁰[·] can be continuously extended to L¹_G(H). E⁰[·] is a linear expectation under which (B_t)_{t≥0} behaves as a Brownian motion. We have

$$-\hat{\mathbb{E}}[-X] \le E^0[X] \le \hat{\mathbb{E}}[X], \qquad -\hat{\mathbb{E}}[-X\,|\,\mathcal{H}_t] \le E^0[X\,|\,\mathcal{H}_t] \le \hat{\mathbb{E}}[X\,|\,\mathcal{H}_t]. \tag{13}$$

More generally, if Γ′ ⊂ Γ, then, since the corresponding P^{G_{Γ′}} is dominated by P^G = P^{G_Γ}, the corresponding expectation Ê′ is well defined on L¹_G(H), and Ê′ is dominated by Ê:

$$\hat{\mathbb{E}}'[X] - \hat{\mathbb{E}}'[Y] \le \hat{\mathbb{E}}[X - Y], \qquad X, Y \in L_G^1(\mathcal{H}).$$

Such an extension through the above type of domination relations was discussed in detail in [40]. With this domination we can then introduce a large class of time-consistent linear or nonlinear expectations, and the corresponding conditional expectations, not necessarily positively homogeneous and/or subadditive, as continuous functionals in L¹_G(H). See Example 42 for a further discussion.

Example 27. Since

$$\hat{\mathbb{E}}[2B^a_s(B^a_t - B^a_s)\,|\,\mathcal{H}_s] = \hat{\mathbb{E}}[-2B^a_s(B^a_t - B^a_s)\,|\,\mathcal{H}_s] = 0,$$

we have

$$\hat{\mathbb{E}}[(B^a_t)^2 - (B^a_s)^2\,|\,\mathcal{H}_s] = \hat{\mathbb{E}}[(B^a_t - B^a_s + B^a_s)^2 - (B^a_s)^2\,|\,\mathcal{H}_s]$$
$$= \hat{\mathbb{E}}[(B^a_t - B^a_s)^2 + 2(B^a_t - B^a_s)B^a_s\,|\,\mathcal{H}_s] = \sigma_{aa^T}(t - s)$$

and

$$\hat{\mathbb{E}}[((B^a_t)^2 - (B^a_s)^2)^2\,|\,\mathcal{H}_s] = \hat{\mathbb{E}}[\{(B^a_t - B^a_s + B^a_s)^2 - (B^a_s)^2\}^2\,|\,\mathcal{H}_s]$$
$$= \hat{\mathbb{E}}[\{(B^a_t - B^a_s)^2 + 2(B^a_t - B^a_s)B^a_s\}^2\,|\,\mathcal{H}_s]$$
$$= \hat{\mathbb{E}}[(B^a_t - B^a_s)^4 + 4(B^a_t - B^a_s)^3 B^a_s + 4(B^a_t - B^a_s)^2(B^a_s)^2\,|\,\mathcal{H}_s]$$
$$\le \hat{\mathbb{E}}[(B^a_t - B^a_s)^4] + 4\hat{\mathbb{E}}[|B^a_t - B^a_s|^3]\,|B^a_s| + 4\sigma_{aa^T}(t - s)(B^a_s)^2$$
$$= 3\sigma_{aa^T}^2(t - s)^2 + 8\sqrt{\tfrac{2}{\pi}}\,[\sigma_{aa^T}(t - s)]^{3/2}|B^a_s| + 4\sigma_{aa^T}(t - s)(B^a_s)^2.$$

Remark 28. A G-Brownian motion can be characterized as a zero-mean process with independent and stationary increments such that Ê[‖B_t‖³]/t → 0 as t ↓ 0 (see [44]).

5. Itô's integral of G-Brownian motion

5.1. Bochner's integral

Definition 29. For T ∈ R⁺, a partition π_T of [0, T] is a finite ordered subset π_T = {t₀, t₁, …, t_N} such that 0 = t₀ < t₁ < ⋯ < t_N = T. We set

$$\mu(\pi_T) = \max\{|t_{i+1} - t_i| : i = 0, 1, \dots, N-1\}.$$

We use π_T^N = {t₀^N < t₁^N < ⋯ < t_N^N} to denote a sequence of partitions of [0, T] such that lim_{N→∞} μ(π_T^N) = 0.

Let p ≥ 1 be fixed. We consider the following type of simple processes: for a given partition {t₀, …, t_N} = π_T of [0, T], we set

$$\eta_t(\omega) = \sum_{k=0}^{N-1}\xi_k(\omega)\,I_{[t_k, t_{k+1})}(t),$$

where ξ_k ∈ L^p_G(H_{t_k}), k = 0, 1, …, N − 1, are given. The collection of these processes is denoted by M_G^{p,0}(0, T).

Definition 30. For an η ∈ M_G^{1,0}(0, T) with η_t = Σ_{k=0}^{N−1} ξ_k(ω)I_{[t_k,t_{k+1})}(t), the related Bochner integral is

$$\int_0^T \eta_t(\omega)\,\mathrm{d}t = \sum_{k=0}^{N-1}\xi_k(\omega)(t_{k+1} - t_k).$$
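For a simple process, Definition 30 is just a finite weighted sum, which can be sketched directly (a hedged toy illustration of ours; the partition and values are made up):

```python
import numpy as np

# Hedged sketch of Definition 30 (names ours): the Bochner integral of a
# simple process η_t = Σ_k ξ_k I_{[t_k, t_{k+1})}(t) is the weighted sum
# Σ_k ξ_k (t_{k+1} − t_k), computed here pathwise for a toy partition.
def bochner_integral(xi, t):
    """xi[k] is the value on [t[k], t[k+1]); t lists the partition points."""
    xi, t = np.asarray(xi, dtype=float), np.asarray(t, dtype=float)
    return float(np.sum(xi * np.diff(t)))

t = [0.0, 0.5, 1.5, 2.0]   # partition 0 = t_0 < t_1 < t_2 < t_3 = T
xi = [1.0, -2.0, 4.0]      # ξ_0, ξ_1, ξ_2 on the three subintervals
assert bochner_integral(xi, t) == 0.5   # 1·0.5 + (−2)·1.0 + 4·0.5
```

On a path space each ξ_k would be a random variable; the sum above is then computed ω by ω, which is what makes the extension of Proposition 33 a statement about the norm rather than about the formula.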

Remark 31. We set, for each η ∈ M_G^{1,0}(0, T),

$$\tilde{\mathbb{E}}_T[\eta] := \frac{1}{T}\int_0^T\hat{\mathbb{E}}[\eta_t]\,\mathrm{d}t = \frac{1}{T}\sum_{k=0}^{N-1}\hat{\mathbb{E}}[\xi_k(\omega)](t_{k+1} - t_k).$$

It is easy to check that Ẽ_T : M_G^{1,0}(0, T) → R forms a nonlinear expectation satisfying (a)–(e) of Definition 1. We then can introduce a natural norm

$$\|\eta\|_T = \tilde{\mathbb{E}}_T[|\eta|] = \frac{1}{T}\int_0^T\hat{\mathbb{E}}[|\eta_t|]\,\mathrm{d}t.$$

Under this norm M_G^{1,0}(0, T) can be extended to M_G^1(0, T), which is a Banach space.

2236

S. Peng / Stochastic Processes and their Applications 118 (2008) 2223–2253 p

Definition 32. For each $p \ge 1$, we denote by $M_G^p(0,T)$ the completion of $M_G^{p,0}(0,T)$ under the norm
$$\left(\frac{1}{T}\int_0^T \hat{\mathbb{E}}[|\eta_t|^p]\,dt\right)^{1/p} = \left(\frac{1}{T}\sum_{k=0}^{N-1}\hat{\mathbb{E}}[|\xi_k(\omega)|^p](t_{k+1}-t_k)\right)^{1/p}.$$
We observe that
$$\hat{\mathbb{E}}\left[\left|\int_0^T \eta_t(\omega)\,dt\right|\right] \le \sum_{k=0}^{N-1}\hat{\mathbb{E}}[|\xi_k(\omega)|](t_{k+1}-t_k) = \int_0^T \hat{\mathbb{E}}[|\eta_t|]\,dt. \tag{14}$$
We then have

Proposition 33. The linear mapping $\int_0^T \eta_t(\omega)\,dt : M_G^{1,0}(0,T) \mapsto L_G^1(\mathcal{H}_T)$ is continuous and thus can be continuously extended to $M_G^1(0,T) \mapsto L_G^1(\mathcal{H}_T)$. We still denote this extended mapping by $\int_0^T \eta_t(\omega)\,dt$, $\eta \in M_G^1(0,T)$.

Since $M_G^p(0,T) \subset M_G^1(0,T)$ for $p \ge 1$, this definition makes sense for $\eta \in M_G^p(0,T)$.

5.2. Itô's integral of G-Brownian motion

We still use $B_t^a := (a, B_t)$ as in (12).

Definition 34. For each $\eta \in M_G^{2,0}(0,T)$ of the form $\eta_t(\omega) = \sum_{k=0}^{N-1}\xi_k(\omega)\mathbf{1}_{[t_k,t_{k+1})}(t)$, we define
$$I(\eta) = \int_0^T \eta(s)\,dB_s^a := \sum_{k=0}^{N-1}\xi_k(B_{t_{k+1}}^a - B_{t_k}^a).$$

Lemma 35. We have, for each $\eta \in M_G^{2,0}(0,T)$,
$$\hat{\mathbb{E}}\left[\int_0^T \eta(s)\,dB_s^a\right] = 0, \tag{15}$$
$$\hat{\mathbb{E}}\left[\left(\int_0^T \eta(s)\,dB_s^a\right)^2\right] \le \sigma_{aa^T}\int_0^T \hat{\mathbb{E}}[\eta^2(s)]\,ds. \tag{16}$$
Consequently, the linear mapping $I : M_G^{2,0}(0,T) \mapsto L_G^2(\mathcal{H}_T)$ is continuous and thus can be continuously extended to $I : M_G^2(0,T) \mapsto L_G^2(\mathcal{H}_T)$.

Definition 36. We define, for a fixed $\eta \in M_G^2(0,T)$, the stochastic integral
$$\int_0^T \eta(s)\,dB_s^a := I(\eta).$$

It is clear that (15) and (16) still hold for $\eta \in M_G^2(0,T)$.
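Lemma 35 can be sanity-checked numerically. The sketch below is my own illustration, not part of the paper's construction: it evaluates the discrete integral $I(\eta)=\sum_k\xi_k(B_{t_{k+1}}-B_{t_k})$ under individual constant-volatility scenarios $\sigma$ drawn from an assumed two-point set; scenario by scenario, the sample mean is near zero, consistent with (15), and the second moment stays below the bound of (16) with $\sigma_{aa^T}$ played by the largest $\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmas = [1.0, 0.5]            # assumed constant-volatility scenarios
sigma_bar2 = max(s**2 for s in sigmas)   # plays the role of sigma_{aa^T}
T, N, M = 1.0, 4, 200_000      # horizon, partition size, Monte Carlo paths
t = np.linspace(0.0, T, N + 1)

for s in sigmas:
    dB = s * rng.normal(0.0, np.sqrt(np.diff(t)), size=(M, N))
    B = np.hstack([np.zeros((M, 1)), np.cumsum(dB, axis=1)])
    eta = B[:, :-1]                          # simple integrand: xi_k = B_{t_k}
    I = np.sum(eta * dB, axis=1)             # sum_k xi_k (B_{t_{k+1}} - B_{t_k})
    mean, second = I.mean(), (I**2).mean()
    bound = sigma_bar2 * np.sum((eta**2).mean(axis=0) * np.diff(t))  # RHS of (16)
    print(f"sigma={s}: mean(I)={mean:+.4f}, E[I^2]={second:.4f}, bound={bound:.4f}")
```

Note that no single scenario realizes the sublinear expectation itself; the check only confirms that each dominated linear expectation obeys the two estimates.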


Proof of Lemma 35. From Example 25, for each $k$,
$$\hat{\mathbb{E}}[\xi_k(B_{t_{k+1}}^a - B_{t_k}^a)\,|\,\mathcal{H}_{t_k}] = 0.$$
We have
$$\hat{\mathbb{E}}\left[\int_0^T \eta(s)\,dB_s^a\right] = \hat{\mathbb{E}}\left[\int_0^{t_{N-1}}\eta(s)\,dB_s^a + \xi_{N-1}(B_{t_N}^a - B_{t_{N-1}}^a)\right]$$
$$= \hat{\mathbb{E}}\left[\int_0^{t_{N-1}}\eta(s)\,dB_s^a + \hat{\mathbb{E}}[\xi_{N-1}(B_{t_N}^a - B_{t_{N-1}}^a)\,|\,\mathcal{H}_{t_{N-1}}]\right] = \hat{\mathbb{E}}\left[\int_0^{t_{N-1}}\eta(s)\,dB_s^a\right].$$
We can then repeat this procedure to obtain (15). We now prove (16):
$$\hat{\mathbb{E}}\left[\left(\int_0^T \eta(s)\,dB_s^a\right)^2\right] = \hat{\mathbb{E}}\left[\left(\int_0^{t_{N-1}}\eta(s)\,dB_s^a + \xi_{N-1}(B_{t_N}^a - B_{t_{N-1}}^a)\right)^2\right]$$
$$= \hat{\mathbb{E}}\left[\left(\int_0^{t_{N-1}}\eta(s)\,dB_s^a\right)^2 + \hat{\mathbb{E}}\left[2\left(\int_0^{t_{N-1}}\eta(s)\,dB_s^a\right)\xi_{N-1}(B_{t_N}^a - B_{t_{N-1}}^a) + \xi_{N-1}^2(B_{t_N}^a - B_{t_{N-1}}^a)^2\,\Big|\,\mathcal{H}_{t_{N-1}}\right]\right]$$
$$= \hat{\mathbb{E}}\left[\left(\int_0^{t_{N-1}}\eta(s)\,dB_s^a\right)^2 + \xi_{N-1}^2\sigma_{aa^T}(t_N - t_{N-1})\right].$$
Thus $\hat{\mathbb{E}}[(\int_0^{t_N}\eta(s)\,dB_s^a)^2] \le \hat{\mathbb{E}}[(\int_0^{t_{N-1}}\eta(s)\,dB_s^a)^2] + \hat{\mathbb{E}}[\xi_{N-1}^2]\sigma_{aa^T}(t_N - t_{N-1})$. We then repeat this procedure to deduce
$$\hat{\mathbb{E}}\left[\left(\int_0^T \eta(s)\,dB_s^a\right)^2\right] \le \sigma_{aa^T}\sum_{k=0}^{N-1}\hat{\mathbb{E}}[\xi_k^2](t_{k+1}-t_k) = \sigma_{aa^T}\int_0^T \hat{\mathbb{E}}[\eta^2(t)]\,dt. \qquad\square$$

We list some main properties of Itô's integral of G-Brownian motion. We define, for $0 \le s \le t \le T$,
$$\int_s^t \eta_u\,dB_u^a := \int_0^T \mathbf{1}_{[s,t]}(u)\eta_u\,dB_u^a.$$
We have

Proposition 37. Let $\eta, \theta \in M_G^2(0,T)$ and let $0 \le s \le r \le t \le T$. Then in $L_G^1(\mathcal{H}_T)$ we have:
(i) $\int_s^t \eta_u\,dB_u^a = \int_s^r \eta_u\,dB_u^a + \int_r^t \eta_u\,dB_u^a$;
(ii) $\int_s^t(\alpha\eta_u + \theta_u)\,dB_u^a = \alpha\int_s^t \eta_u\,dB_u^a + \int_s^t \theta_u\,dB_u^a$, if $\alpha$ is bounded and in $L_G^1(\mathcal{H}_s)$;
(iii) $\hat{\mathbb{E}}[X + \int_r^T \eta_u\,dB_u^a\,|\,\mathcal{H}_s] = \hat{\mathbb{E}}[X\,|\,\mathcal{H}_s]$, for all $X \in L_G^1(\mathcal{H})$;
(iv) $\hat{\mathbb{E}}[(\int_r^T \eta_u\,dB_u^a)^2\,|\,\mathcal{H}_s] \le \sigma_{aa^T}\int_r^T \hat{\mathbb{E}}[|\eta_u|^2\,|\,\mathcal{H}_s]\,du$.


5.3. Quadratic variation process of G-Brownian motion

We now consider the quadratic variation of G-Brownian motion. It captures the 'uncertainty' part of the G-Brownian motion $B$, and this marks a major difference from classical Brownian motion. Let $\pi_t^N$, $N = 1,2,\dots$, be a sequence of partitions of $[0,t]$. We consider
$$(B_t^a)^2 = \sum_{k=0}^{N-1}[(B_{t_{k+1}^N}^a)^2 - (B_{t_k^N}^a)^2] = \sum_{k=0}^{N-1}2B_{t_k^N}^a(B_{t_{k+1}^N}^a - B_{t_k^N}^a) + \sum_{k=0}^{N-1}(B_{t_{k+1}^N}^a - B_{t_k^N}^a)^2.$$
As $\mu(\pi_t^N) = \max_{0\le k\le N-1}(t_{k+1}^N - t_k^N) \to 0$, the first term on the right side tends to $2\int_0^t B_s^a\,dB_s^a$. The second term must therefore also converge. We denote its limit by $\langle B^a\rangle_t$, i.e.,
$$\langle B^a\rangle_t = \lim_{\mu(\pi_t^N)\to 0}\sum_{k=0}^{N-1}(B_{t_{k+1}^N}^a - B_{t_k^N}^a)^2 = (B_t^a)^2 - 2\int_0^t B_s^a\,dB_s^a. \tag{17}$$
By the above construction, $\langle B^a\rangle_t$, $t \ge 0$, is an increasing process with $\langle B^a\rangle_0 = 0$. We call it the quadratic variation process of the G-Brownian motion $B^a$. It is also clear that, for each $0 \le s \le t$ and each smooth real function $\psi$ such that $\psi(\langle B^a\rangle_{t-s}) \in L_G^1(\mathcal{H}_{t-s})$, we have $\hat{\mathbb{E}}[\psi(\langle B^a\rangle_{t-s})] = \hat{\mathbb{E}}[\psi(\langle B^a\rangle_t - \langle B^a\rangle_s)]$. We also have
$$\langle B^a\rangle_t = \langle B^{-a}\rangle_t = \langle -B^a\rangle_t.$$
It is important to keep in mind that $\langle B^a\rangle_t$ is not a deterministic process except in the case $\sigma_{aa^T} = -\sigma_{-aa^T}$, in which $B^a$ becomes a classical Brownian motion. In fact we have

Lemma 38. For each $0 \le s \le t < \infty$,
$$\hat{\mathbb{E}}[\langle B^a\rangle_t - \langle B^a\rangle_s\,|\,\mathcal{H}_s] = \sigma_{aa^T}(t-s), \tag{18}$$
$$\hat{\mathbb{E}}[-(\langle B^a\rangle_t - \langle B^a\rangle_s)\,|\,\mathcal{H}_s] = \sigma_{-aa^T}(t-s). \tag{19}$$

Proof. By the definition of $\langle B^a\rangle$ and Proposition 37(iii), then Example 27,
$$\hat{\mathbb{E}}[\langle B^a\rangle_t - \langle B^a\rangle_s\,|\,\mathcal{H}_s] = \hat{\mathbb{E}}\left[(B_t^a)^2 - (B_s^a)^2 - 2\int_s^t B_u^a\,dB_u^a\,\Big|\,\mathcal{H}_s\right] = \hat{\mathbb{E}}[(B_t^a)^2 - (B_s^a)^2\,|\,\mathcal{H}_s] = \sigma_{aa^T}(t-s).$$
We then have (18). (19) can be proved analogously by using the equality $\hat{\mathbb{E}}[-((B_t^a)^2 - (B_s^a)^2)\,|\,\mathcal{H}_s] = \sigma_{-aa^T}(t-s)$. $\square$
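The non-deterministic character of $\langle B^a\rangle$ can be illustrated with a toy simulation (my own sketch, with an assumed two-point volatility set): paths whose volatility switches inside an interval $[\sigma_*,\bar\sigma]$ produce realized quadratic variations genuinely spread out over $[\sigma_*^2 t, \bar\sigma^2 t]$, while the extreme constant-volatility scenarios recover the values $\sigma_{aa^T}t$ and $-\sigma_{-aa^T}t$ appearing in Lemma 38.

```python
import numpy as np

rng = np.random.default_rng(1)
sig_lo, sig_hi = 0.5, 1.0        # assumed volatility interval [sig_lo, sig_hi]
T, N, M = 1.0, 2000, 2000
dt = T / N

# each path switches volatility regime step by step -- one admissible scenario family
sig = rng.choice([sig_lo, sig_hi], size=(M, N))
dB = sig * rng.normal(0.0, np.sqrt(dt), size=(M, N))
QV = np.sum(dB**2, axis=1)       # realized quadratic variation <B>_T, path by path
spread = QV.max() - QV.min()     # strictly positive: <B>_T is not deterministic

upper = max(sig_lo, sig_hi)**2 * T    # = sigma_{aa^T} T, cf. (18)
lower = min(sig_lo, sig_hi)**2 * T    # = -sigma_{-aa^T} T, cf. (19)
print(f"<B>_T ranges over [{QV.min():.3f}, {QV.max():.3f}] within [{lower}, {upper}]")
```

The discretization and the particular regime-switching family are illustrative assumptions; the paper's construction works with the completed space $L_G^1(\mathcal{H})$ rather than with sample paths.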

An interesting new phenomenon of our G-Brownian motion is that its quadratic variation process $\langle B\rangle$ also has independent increments. In fact, we have

Lemma 39. An increment of $\langle B^a\rangle$ is the quadratic variation of the corresponding increment of $B^a$, i.e., for each fixed $s \ge 0$,
$$\langle B^a\rangle_{t+s} - \langle B^a\rangle_s = \langle (B^s)^a\rangle_t,$$


where $B_t^s = B_{t+s} - B_s$, $t \ge 0$, and $(B^s)_t^a = (a, B_t^s)$.

Proof.
$$\langle B^a\rangle_{t+s} - \langle B^a\rangle_s = (B_{t+s}^a)^2 - 2\int_0^{t+s}B_u^a\,dB_u^a - \left[(B_s^a)^2 - 2\int_0^s B_u^a\,dB_u^a\right]$$
$$= (B_{t+s}^a - B_s^a)^2 - 2\int_s^{t+s}(B_u^a - B_s^a)\,dB_u^a = (B_{t+s}^a - B_s^a)^2 - 2\int_0^t(B_{s+u}^a - B_s^a)\,d(B_{s+u}^a - B_s^a) = \langle (B^s)^a\rangle_t. \qquad\square$$

Lemma 40. We have
$$\hat{\mathbb{E}}[\langle B^a\rangle_t^2] = \hat{\mathbb{E}}[(\langle B^a\rangle_{t+s} - \langle B^a\rangle_s)^2\,|\,\mathcal{H}_s] = \sigma_{aa^T}^2 t^2, \qquad s, t \ge 0. \tag{20}$$

Proof. We set $\varphi(t) := \hat{\mathbb{E}}[\langle B^a\rangle_t^2]$. Then
$$\varphi(t) = \hat{\mathbb{E}}\left[\left((B_t^a)^2 - 2\int_0^t B_u^a\,dB_u^a\right)^2\right] \le 2\hat{\mathbb{E}}[(B_t^a)^4] + 8\hat{\mathbb{E}}\left[\left(\int_0^t B_u^a\,dB_u^a\right)^2\right]$$
$$\le 6\sigma_{aa^T}^2 t^2 + 8\sigma_{aa^T}\int_0^t \hat{\mathbb{E}}[(B_u^a)^2]\,du = 10\sigma_{aa^T}^2 t^2.$$
This also implies $\hat{\mathbb{E}}[(\langle B^a\rangle_t - \langle B^a\rangle_s)^2] = \varphi(t-s) \le 10\sigma_{aa^T}^2(t-s)^2$. For each $s \in [0,t)$,
$$\varphi(t) = \hat{\mathbb{E}}\left[\left(\langle B^a\rangle_s + \langle B^a\rangle_t - \langle B^a\rangle_s\right)^2\right]$$
$$\le \hat{\mathbb{E}}[\langle B^a\rangle_s^2] + \hat{\mathbb{E}}[(\langle B^a\rangle_t - \langle B^a\rangle_s)^2] + 2\hat{\mathbb{E}}[(\langle B^a\rangle_t - \langle B^a\rangle_s)\langle B^a\rangle_s]$$
$$= \varphi(s) + \varphi(t-s) + 2\hat{\mathbb{E}}\big[\hat{\mathbb{E}}[\langle B^a\rangle_t - \langle B^a\rangle_s\,|\,\mathcal{H}_s]\,\langle B^a\rangle_s\big] = \varphi(s) + \varphi(t-s) + 2\sigma_{aa^T}^2 s(t-s).$$
We set $\delta_N = t/N$, $t_k^N = kt/N = k\delta_N$ for a positive integer $N$. By the above inequalities,
$$\varphi(t_N^N) \le \varphi(t_{N-1}^N) + \varphi(\delta_N) + 2\sigma_{aa^T}^2 t_{N-1}^N\delta_N \le \varphi(t_{N-2}^N) + 2\varphi(\delta_N) + 2\sigma_{aa^T}^2(t_{N-1}^N + t_{N-2}^N)\delta_N \le \cdots.$$
We then have
$$\varphi(t) \le N\varphi(\delta_N) + 2\sigma_{aa^T}^2\sum_{k=0}^{N-1}t_k^N\delta_N \le 10t^2\sigma_{aa^T}^2/N + 2\sigma_{aa^T}^2\sum_{k=0}^{N-1}t_k^N\delta_N.$$


Let $N \to \infty$; we obtain $\varphi(t) \le 2\sigma_{aa^T}^2\int_0^t s\,ds = \sigma_{aa^T}^2 t^2$. Thus $\hat{\mathbb{E}}[\langle B^a\rangle_t^2] \le \sigma_{aa^T}^2 t^2$. This, together with $\hat{\mathbb{E}}[\langle B^a\rangle_t^2] \ge E_{P_1^0}[\langle B^a\rangle_t^2] = \sigma_{aa^T}^2 t^2$, implies (20). In the last step the classical normal distribution $P_1^0$, or $N(0, \gamma_0\gamma_0^T)$, $\gamma_0 \in \Gamma$, is chosen such that
$$\mathrm{tr}[\gamma_0\gamma_0^T aa^T] = \sigma_{aa^T} = \sup_{\gamma\in\Gamma}\mathrm{tr}[\gamma\gamma^T aa^T]. \qquad\square$$
Similarly we have
$$\hat{\mathbb{E}}[(\langle B^a\rangle_t - \langle B^a\rangle_s)^3\,|\,\mathcal{H}_s] = \sigma_{aa^T}^3(t-s)^3, \qquad \hat{\mathbb{E}}[(\langle B^a\rangle_t - \langle B^a\rangle_s)^4\,|\,\mathcal{H}_s] = \sigma_{aa^T}^4(t-s)^4. \tag{21}$$

Proposition 41. Let $0 \le s \le t$, $\xi \in L_G^1(\mathcal{H}_s)$, $X \in L_G^1(\mathcal{H})$. Then
$$\hat{\mathbb{E}}[X + \xi((B_t^a)^2 - (B_s^a)^2)] = \hat{\mathbb{E}}[X + \xi(B_t^a - B_s^a)^2] = \hat{\mathbb{E}}[X + \xi(\langle B^a\rangle_t - \langle B^a\rangle_s)].$$

Proof. By (17) and applying Proposition 21, we have
$$\hat{\mathbb{E}}[X + \xi((B_t^a)^2 - (B_s^a)^2)] = \hat{\mathbb{E}}\left[X + \xi\left(\langle B^a\rangle_t - \langle B^a\rangle_s + 2\int_s^t B_u^a\,dB_u^a\right)\right] = \hat{\mathbb{E}}[X + \xi(\langle B^a\rangle_t - \langle B^a\rangle_s)].$$
We also have
$$\hat{\mathbb{E}}[X + \xi((B_t^a)^2 - (B_s^a)^2)] = \hat{\mathbb{E}}[X + \xi\{(B_t^a - B_s^a)^2 + 2(B_t^a - B_s^a)B_s^a\}] = \hat{\mathbb{E}}[X + \xi(B_t^a - B_s^a)^2]. \qquad\square$$

Example 42. Assume that a stock price $(S_t)_{t\ge0}$ is observed in a financial market. Let $B_t = \log(S_t)$, $t \ge 0$, be a one-dimensional G-Brownian motion ($d = 1$) with $\Gamma = [\sigma_*, \sigma^*]$, where $\sigma_* \in [0, \frac12)$ and $\sigma^* \in [1, \infty)$ are fixed. Two traders $a$ and $b$ in the same bank use their own statistics to price a contingent claim $X = \langle B\rangle_T$ with maturity $T$. Suppose, for example, that under the probability measure $P_a$ of trader $a$, $B$ is a (classical) Brownian motion, whereas under $P_b$ of trader $b$, $\frac12 B$ is a Brownian motion; here $P_a$ (resp. $P_b$) is a classical probability measure with its linear expectation $\hat{\mathbb{E}}^a$ (resp. $\hat{\mathbb{E}}^b$) generated by the heat equation $\partial_t u = \frac12\partial_{xx}^2 u$ (resp. $\partial_t u = \frac14\partial_{xx}^2 u$). Since $\hat{\mathbb{E}}^a$ and $\hat{\mathbb{E}}^b$ are both dominated by $\hat{\mathbb{E}}$ in the sense of (3), both are well defined as bounded linear functionals on $L_G^1(\mathcal{H})$. This framework cannot be provided within a single classical probability space: it is known that $\langle B\rangle_T = T$, $P_a$-a.s., and $\langle B\rangle_T = \frac{T}{4}$, $P_b$-a.s., so there is no probability measure on $\Omega$ with respect to which $P_a$ and $P_b$ are both absolutely continuous. Practically, the sublinear expectation $\hat{\mathbb{E}}$ provides a realistic dynamic risk measure for a risk supervisor of the traders $a$ and $b$: given a risk position $X \in L_G^1(\mathcal{H}_T)$ we always have $\hat{\mathbb{E}}[-X\,|\,\mathcal{H}_t] \ge \hat{\mathbb{E}}^a[-X\,|\,\mathcal{H}_t] \vee \hat{\mathbb{E}}^b[-X\,|\,\mathcal{H}_t]$ for the loss $-X$ of this position; that is, the supervisor uses a more sensitive risk measure. Clearly no linear expectation can play this role. The subset $\Gamma$ represents the volatility-model uncertainty of a risk regulator: the larger the subset $\Gamma$, the bigger the uncertainty, and thus the stronger the corresponding $\hat{\mathbb{E}}$.
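The price comparison in Example 42 reduces to elementary arithmetic, since by Lemma 38 the supervisor's prices are $\hat{\mathbb{E}}[\langle B\rangle_T] = (\sigma^*)^2 T$ and $-\hat{\mathbb{E}}[-\langle B\rangle_T] = \sigma_*^2 T$. A minimal sketch (the concrete numbers are assumptions chosen for illustration; trader $b$'s volatility is taken as $\frac12$, matching $\langle B\rangle_T = T/4$ under $P_b$):

```python
# Supervisor's sublinear price of X = <B>_T versus the two traders' linear prices.
T = 2.0
sigma_star, sigma_up = 0.25, 1.5     # assumed sigma_* in [0, 1/2), sigma^* in [1, inf)

price_a = 1.0**2 * T                 # under P_a: <B>_T = T a.s.
price_b = 0.5**2 * T                 # under P_b: <B>_T = T/4 a.s.
price_sup = sigma_up**2 * T          # upper price E_hat[<B>_T] = (sigma^*)^2 T
price_inf = sigma_star**2 * T        # lower price -E_hat[-<B>_T] = sigma_*^2 T

print(price_a, price_b, price_sup, price_inf)
assert price_sup >= max(price_a, price_b)   # the supervisor is more conservative
assert price_inf <= min(price_a, price_b)
```

The interval of prices $[\sigma_*^2 T, (\sigma^*)^2 T]$ widens with $\Gamma$, which is exactly the sense in which a larger $\Gamma$ produces a stronger $\hat{\mathbb{E}}$.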


It is worth considering the creation of a hierarchic and dynamic risk control system for a bank, or a banking system, in which the Chief Risk Officer (CRO) uses $\hat{\mathbb{E}} = \hat{\mathbb{E}}^G$ as her risk measure and the Risk Officer of the $i$th division of the bank uses $\hat{\mathbb{E}}^i = \hat{\mathbb{E}}^{G_i}$ as his, where
$$G(A) = \frac12\sup_{\gamma\in\Gamma}\mathrm{tr}[\gamma\gamma^T A], \qquad G_i(A) = \frac12\sup_{\gamma\in\Gamma_i}\mathrm{tr}[\gamma\gamma^T A], \quad \Gamma_i \subset \Gamma,\ i = 1,\dots,I.$$
Thus $\hat{\mathbb{E}}^i$ is dominated by $\hat{\mathbb{E}}$ for each $i$. For a large banking system we can even consider creating $\hat{\mathbb{E}}^{ij} = \hat{\mathbb{E}}^{G_{ij}}$ for its $(i,j)$th subdivision. The reasoning is that, in general, a risk regulator has less statistical information about, and knowledge of, a specific risk position $X$ than a trader who is concretely involved in the business of the product $X$.

To define the integral of a process $\eta \in M_G^1(0,T)$ with respect to $d\langle B^a\rangle$, we first define a mapping
$$Q_{0,T}(\eta) = \int_0^T \eta(s)\,d\langle B^a\rangle_s := \sum_{k=0}^{N-1}\xi_k(\langle B^a\rangle_{t_{k+1}} - \langle B^a\rangle_{t_k}) :\ M_G^{1,0}(0,T) \mapsto L^1(\mathcal{H}_T).$$

Lemma 43. For each $\eta \in M_G^{1,0}(0,T)$,
$$\hat{\mathbb{E}}[|Q_{0,T}(\eta)|] \le \sigma_{aa^T}\int_0^T \hat{\mathbb{E}}[|\eta_s|]\,ds. \tag{22}$$
Thus $Q_{0,T} : M_G^{1,0}(0,T) \mapsto L^1(\mathcal{H}_T)$ is a continuous linear mapping. Consequently, $Q_{0,T}$ can be uniquely extended to $M_G^1(0,T)$. We still denote this mapping by
$$\int_0^T \eta(s)\,d\langle B^a\rangle_s = Q_{0,T}(\eta), \qquad \eta \in M_G^1(0,T).$$
We still have
$$\hat{\mathbb{E}}\left[\left|\int_0^T \eta(s)\,d\langle B^a\rangle_s\right|\right] \le \sigma_{aa^T}\int_0^T \hat{\mathbb{E}}[|\eta_s|]\,ds, \qquad \forall\eta \in M_G^1(0,T). \tag{23}$$

Proof. By applying Lemma 38, (22) can be checked as follows:
$$\hat{\mathbb{E}}\left[\left|\sum_{k=0}^{N-1}\xi_k(\langle B^a\rangle_{t_{k+1}} - \langle B^a\rangle_{t_k})\right|\right] \le \sum_{k=0}^{N-1}\hat{\mathbb{E}}\big[|\xi_k|\cdot\hat{\mathbb{E}}[\langle B^a\rangle_{t_{k+1}} - \langle B^a\rangle_{t_k}\,|\,\mathcal{H}_{t_k}]\big]$$
$$= \sum_{k=0}^{N-1}\hat{\mathbb{E}}[|\xi_k|]\sigma_{aa^T}(t_{k+1}-t_k) = \sigma_{aa^T}\int_0^T \hat{\mathbb{E}}[|\eta_s|]\,ds. \qquad\square$$

We have the following isometry.

Proposition 44. Let $\eta \in M_G^2(0,T)$. Then
$$\hat{\mathbb{E}}\left[\left(\int_0^T \eta(s)\,dB_s^a\right)^2\right] = \hat{\mathbb{E}}\left[\int_0^T \eta^2(s)\,d\langle B^a\rangle_s\right]. \tag{24}$$
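Before turning to the proof, the isometry (24) can be illustrated numerically under a single constant-volatility scenario, in which $d\langle B^a\rangle_s$ reduces to $\sigma^2\,ds$ and is approximated pathwise by the squared increments $(\Delta B)^2$. This is my own Monte Carlo sketch with an arbitrarily chosen integrand, not part of the paper's argument:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, T, N, M = 0.8, 1.0, 50, 100_000
dt = T / N
dB = sigma * rng.normal(0.0, np.sqrt(dt), size=(M, N))
B = np.hstack([np.zeros((M, 1)), np.cumsum(dB, axis=1)])
eta = np.cos(B[:, :-1])                          # adapted integrand eta_s = cos(B_s)

lhs = (np.sum(eta * dB, axis=1) ** 2).mean()     # E[(int_0^T eta dB)^2]
rhs = (np.sum(eta**2 * dB**2, axis=1)).mean()    # E[int_0^T eta^2 d<B>], d<B> ~ (dB)^2
print(lhs, rhs)                                  # close up to Monte Carlo error
```

The two sides differ only by cross terms whose (classical) expectation vanishes, which is the discrete shadow of the orthogonality step used in the proof below.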


Proof. We first consider $\eta \in M_G^{2,0}(0,T)$ of the form
$$\eta_t(\omega) = \sum_{k=0}^{N-1}\xi_k(\omega)\mathbf{1}_{[t_k,t_{k+1})}(t),$$
so that $\int_0^T \eta(s)\,dB_s^a = \sum_{k=0}^{N-1}\xi_k(B_{t_{k+1}}^a - B_{t_k}^a)$. By Proposition 21 we have, for $X \in L_G^1(\mathcal{H})$ and $l \ne k$,
$$\hat{\mathbb{E}}[X + 2\xi_k(B_{t_{k+1}}^a - B_{t_k}^a)\xi_l(B_{t_{l+1}}^a - B_{t_l}^a)] = \hat{\mathbb{E}}[X].$$
Thus
$$\hat{\mathbb{E}}\left[\left(\int_0^T \eta(s)\,dB_s^a\right)^2\right] = \hat{\mathbb{E}}\left[\left(\sum_{k=0}^{N-1}\xi_k(B_{t_{k+1}}^a - B_{t_k}^a)\right)^2\right] = \hat{\mathbb{E}}\left[\sum_{k=0}^{N-1}\xi_k^2(B_{t_{k+1}}^a - B_{t_k}^a)^2\right].$$
This, together with Proposition 41, implies
$$\hat{\mathbb{E}}\left[\left(\int_0^T \eta(s)\,dB_s^a\right)^2\right] = \hat{\mathbb{E}}\left[\sum_{k=0}^{N-1}\xi_k^2(\langle B^a\rangle_{t_{k+1}} - \langle B^a\rangle_{t_k})\right] = \hat{\mathbb{E}}\left[\int_0^T \eta^2(s)\,d\langle B^a\rangle_s\right].$$
Thus (24) holds for $\eta \in M_G^{2,0}(0,T)$. We can then continuously extend this equality to the case $\eta \in M_G^2(0,T)$ and obtain (24). $\square$

5.4. Mutual variation processes for G-Brownian motion

Let $a = (a_1,\dots,a_d)^T$ and $\bar a = (\bar a_1,\dots,\bar a_d)^T$ be two given vectors in $\mathbb{R}^d$. We then have their quadratic variation processes $\langle B^a\rangle$ and $\langle B^{\bar a}\rangle$, and we can define their mutual variation process by
$$\langle B^a, B^{\bar a}\rangle_t := \frac14\big[\langle B^a + B^{\bar a}\rangle_t - \langle B^a - B^{\bar a}\rangle_t\big] = \frac14\big[\langle B^{a+\bar a}\rangle_t - \langle B^{a-\bar a}\rangle_t\big].$$
Since $\langle B^{a-\bar a}\rangle = \langle B^{\bar a - a}\rangle = \langle -B^{a-\bar a}\rangle$, we see that $\langle B^a, B^{\bar a}\rangle_t = \langle B^{\bar a}, B^a\rangle_t$. In particular, $\langle B^a, B^a\rangle = \langle B^a\rangle$. Let $\pi_t^N$, $N = 1,2,\dots$, be a sequence of partitions of $[0,t]$. We observe that
$$\sum_{k=0}^{N-1}(B_{t_{k+1}^N}^a - B_{t_k^N}^a)(B_{t_{k+1}^N}^{\bar a} - B_{t_k^N}^{\bar a}) = \frac14\sum_{k=0}^{N-1}\big[(B_{t_{k+1}^N}^{a+\bar a} - B_{t_k^N}^{a+\bar a})^2 - (B_{t_{k+1}^N}^{a-\bar a} - B_{t_k^N}^{a-\bar a})^2\big].$$
Thus, as $\mu(\pi_t^N) \to 0$, we have
$$\lim_{N\to\infty}\sum_{k=0}^{N-1}(B_{t_{k+1}^N}^a - B_{t_k^N}^a)(B_{t_{k+1}^N}^{\bar a} - B_{t_k^N}^{\bar a}) = \langle B^a, B^{\bar a}\rangle_t.$$
We also have
$$\langle B^a, B^{\bar a}\rangle_t = \frac14\big[\langle B^{a+\bar a}\rangle_t - \langle B^{a-\bar a}\rangle_t\big] = \frac14\left[(B_t^{a+\bar a})^2 - 2\int_0^t B_s^{a+\bar a}\,dB_s^{a+\bar a} - (B_t^{a-\bar a})^2 + 2\int_0^t B_s^{a-\bar a}\,dB_s^{a-\bar a}\right]$$
$$= B_t^a B_t^{\bar a} - \int_0^t B_s^a\,dB_s^{\bar a} - \int_0^t B_s^{\bar a}\,dB_s^a.$$
Now for each $\eta \in M_G^1(0,T)$ we can consistently define
$$\int_0^T \eta_s\,d\langle B^a, B^{\bar a}\rangle_s = \frac14\int_0^T \eta_s\,d\langle B^{a+\bar a}\rangle_s - \frac14\int_0^T \eta_s\,d\langle B^{a-\bar a}\rangle_s.$$

Lemma 45. Let $\eta^N \in M_G^{1,0}(0,T)$, $N = 1,2,\dots$, be of the form
$$\eta_t^N(\omega) = \sum_{k=0}^{N-1}\xi_k^N(\omega)\mathbf{1}_{[t_k^N,t_{k+1}^N)}(t)$$
with $\mu(\pi_T^N) \to 0$ and $\eta^N \to \eta$ in $M_G^1(0,T)$ as $N \to \infty$. Then we have the following convergence in $L_G^1(\mathcal{H}_T)$:
$$\int_0^T \eta^N(s)\,d\langle B^a, B^{\bar a}\rangle_s := \sum_{k=0}^{N-1}\xi_k^N(B_{t_{k+1}^N}^a - B_{t_k^N}^a)(B_{t_{k+1}^N}^{\bar a} - B_{t_k^N}^{\bar a}) \to \int_0^T \eta(s)\,d\langle B^a, B^{\bar a}\rangle_s.$$
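The polarization identity behind $\langle B^a, B^{\bar a}\rangle$ holds term by term along any partition, so it can be checked pathwise on simulated increments. A small sketch with arbitrarily chosen vectors $a$, $\bar a$ (any scenario for the increments will do, since the identity is purely algebraic):

```python
import numpy as np

rng = np.random.default_rng(3)
d, N = 3, 1000
a = np.array([1.0, -0.5, 2.0])
abar = np.array([0.3, 1.2, -1.0])
dB = rng.normal(0.0, 0.03, size=(N, d))   # increments of some d-dimensional path

cross = np.sum((dB @ a) * (dB @ abar))    # sum_k dB^a_k dB^abar_k
polar = 0.25 * (np.sum((dB @ (a + abar))**2) - np.sum((dB @ (a - abar))**2))
print(cross, polar)                       # agree pathwise, up to rounding
```

Because the agreement is exact term by term, the convergence statement of Lemma 45 is really about the limit $\mu(\pi_T^N)\to0$, not about the polarization step.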

5.5. Itˆo’s formula for G-Brownian motion We have the corresponding Itˆo’s formula of Φ(X t ) for a “G-Itˆo process” X . For simplification, we only treat the case where the function Φ is sufficiently regular. For notational simplification, we define B i = B ei , the i-th coordinate of the G-Brownian motion B, under a given orthonormal basis (e1 , . . . , ed ) of Rd . Lemma 46. Let Φ ∈ C 2 (Rn ) be bounded with bounded derivatives and {∂x2µ x ν Φ}nµ,ν=1 be uniformly Lipschitz. Let s ∈ [0, T ] be fixed and let X = (X 1 , . . . , X n )T be an n-dimensional process on [s, T ] of the form D E D E j j + β ν j (Bt − Bs ), X tν = X sν + α ν (t − s) + ηνi j B i , B j − B i , B j t

αν ,

s

ηνi j

β νi j

where, for ν = 1, . . . , n, i, j = 1, . . . , d, and are bounded elements of L 2G (Hs ) 2 1 n T n and X s = (X s , . . . , X s ) is a given R -vector in L G (Hs ). Then we have Z t Z t j ν j Φ(X t ) − Φ(X s ) = ∂x ν Φ(X u )β dBu + ∂xν Φ(X u )α ν du s

s

 D Z t E 1 + ∂x ν Φ(X u )ηνi j + ∂x2µ x ν Φ(X u )β νi β ν j d B i , B j . u 2 s

(25)

Here we use the Einstein convention, i.e., the above repeated indices $\mu$, $\nu$, $i$ and $j$ (but not $k$) imply summation.

Proof. For each positive integer $N$ we set $\delta = (t-s)/N$ and take the partition
$$\pi_{[s,t]}^N = \{t_0^N, t_1^N, \dots, t_N^N\} = \{s, s+\delta, \dots, s+N\delta = t\}.$$
We have
$$\Phi(X_t) - \Phi(X_s) = \sum_{k=0}^{N-1}\big[\Phi(X_{t_{k+1}^N}) - \Phi(X_{t_k^N})\big]$$
$$= \sum_{k=0}^{N-1}\Big\{\partial_{x^\mu}\Phi(X_{t_k^N})(X^\mu_{t_{k+1}^N} - X^\mu_{t_k^N}) + \frac12\big[\partial^2_{x^\mu x^\nu}\Phi(X_{t_k^N})(X^\mu_{t_{k+1}^N} - X^\mu_{t_k^N})(X^\nu_{t_{k+1}^N} - X^\nu_{t_k^N}) + \eta_k^N\big]\Big\}, \tag{26}$$
where
$$\eta_k^N = \big[\partial^2_{x^\mu x^\nu}\Phi\big(X_{t_k^N} + \theta_k(X_{t_{k+1}^N} - X_{t_k^N})\big) - \partial^2_{x^\mu x^\nu}\Phi(X_{t_k^N})\big](X^\mu_{t_{k+1}^N} - X^\mu_{t_k^N})(X^\nu_{t_{k+1}^N} - X^\nu_{t_k^N})$$
with $\theta_k \in [0,1]$. We have
$$\hat{\mathbb{E}}[|\eta_k^N|] \le c\,\hat{\mathbb{E}}[|X_{t_{k+1}^N} - X_{t_k^N}|^3] \le C[\delta^3 + \delta^{3/2}],$$
where $c$ is the Lipschitz constant of $\{\partial^2_{x^\mu x^\nu}\Phi\}_{\mu,\nu=1}^n$; in the last step we use Example 24 and (21). Thus $\sum_k \hat{\mathbb{E}}[|\eta_k^N|] \to 0$. The remaining terms in the summation on the right side of (26) are $\xi_t^N + \zeta_t^N$ with
$$\xi_t^N = \sum_{k=0}^{N-1}\Big\{\partial_{x^\mu}\Phi(X_{t_k^N})\big[\alpha^\mu(t_{k+1}^N - t_k^N) + \eta^{\mu ij}\big(\langle B^i,B^j\rangle_{t_{k+1}^N} - \langle B^i,B^j\rangle_{t_k^N}\big) + \beta^{\mu j}(B^j_{t_{k+1}^N} - B^j_{t_k^N})\big]$$
$$\qquad + \frac12\partial^2_{x^\mu x^\nu}\Phi(X_{t_k^N})\beta^{\mu i}\beta^{\nu j}(B^i_{t_{k+1}^N} - B^i_{t_k^N})(B^j_{t_{k+1}^N} - B^j_{t_k^N})\Big\}$$
and
$$\zeta_t^N = \frac12\sum_{k=0}^{N-1}\partial^2_{x^\mu x^\nu}\Phi(X_{t_k^N})\Big\{\big[\alpha^\mu(t_{k+1}^N - t_k^N) + \eta^{\mu ij}\big(\langle B^i,B^j\rangle_{t_{k+1}^N} - \langle B^i,B^j\rangle_{t_k^N}\big)\big]\big[\alpha^\nu(t_{k+1}^N - t_k^N) + \eta^{\nu lm}\big(\langle B^l,B^m\rangle_{t_{k+1}^N} - \langle B^l,B^m\rangle_{t_k^N}\big)\big]$$
$$\qquad + 2\big[\alpha^\mu(t_{k+1}^N - t_k^N) + \eta^{\mu ij}\big(\langle B^i,B^j\rangle_{t_{k+1}^N} - \langle B^i,B^j\rangle_{t_k^N}\big)\big]\beta^{\nu l}(B^l_{t_{k+1}^N} - B^l_{t_k^N})\Big\}.$$
We observe that, for each $u \in [t_k^N, t_{k+1}^N)$,
$$\hat{\mathbb{E}}\Big[\Big|\partial_{x^\mu}\Phi(X_u) - \sum_{k=0}^{N-1}\partial_{x^\mu}\Phi(X_{t_k^N})\mathbf{1}_{[t_k^N,t_{k+1}^N)}(u)\Big|^2\Big] = \hat{\mathbb{E}}\big[|\partial_{x^\mu}\Phi(X_u) - \partial_{x^\mu}\Phi(X_{t_k^N})|^2\big] \le c^2\hat{\mathbb{E}}[|X_u - X_{t_k^N}|^2] \le C[\delta + \delta^2].$$
Thus $\sum_{k=0}^{N-1}\partial_{x^\mu}\Phi(X_{t_k^N})\mathbf{1}_{[t_k^N,t_{k+1}^N)}(\cdot)$ tends to $\partial_{x^\mu}\Phi(X_\cdot)$ in $M_G^2(0,T)$. Similarly,
$$\sum_{k=0}^{N-1}\partial^2_{x^\mu x^\nu}\Phi(X_{t_k^N})\mathbf{1}_{[t_k^N,t_{k+1}^N)}(\cdot) \to \partial^2_{x^\mu x^\nu}\Phi(X_\cdot) \quad \text{in } M_G^2(0,T).$$

Let $N \to \infty$; by Lemma 45, as well as the definitions of the integrals with respect to $dt$, $dB_t$ and $d\langle B\rangle_t$, the limit of $\xi_t^N$ in $L_G^2(\mathcal{H}_t)$ is exactly the right hand side of (25). By the next remark we also have $\zeta_t^N \to 0$ in $L_G^2(\mathcal{H}_t)$. We have thus proved (25). $\square$

Remark 47. In the proof of $\zeta_t^N \to 0$ in $L_G^2(\mathcal{H}_t)$ we use the following estimates: for $\psi^N \in M_G^{1,0}(0,T)$ such that $\psi_t^N = \sum_{k=0}^{N-1}\xi^N_{t_k}\mathbf{1}_{[t_k^N,t_{k+1}^N)}(t)$ and $\pi_T^N = \{0 = t_0^N, \dots, t_N^N = T\}$ with $\lim_{N\to\infty}\mu(\pi_T^N) = 0$ and $\sum_{k=0}^{N-1}\hat{\mathbb{E}}[|\xi^N_{t_k}|](t_{k+1}^N - t_k^N) \le C$ for all $N = 1,2,\dots$, we have $\hat{\mathbb{E}}[|\sum_{k=0}^{N-1}\xi_k^N(t_{k+1}^N - t_k^N)^2|] \to 0$ and, for any fixed $a, \bar a \in \mathbb{R}^d$,
$$\hat{\mathbb{E}}\Big[\Big|\sum_{k=0}^{N-1}\xi_k^N\big(\langle B^a\rangle_{t_{k+1}^N} - \langle B^a\rangle_{t_k^N}\big)^2\Big|\Big] \le \sum_{k=0}^{N-1}\hat{\mathbb{E}}\big[|\xi_k^N|\cdot\hat{\mathbb{E}}\big[(\langle B^a\rangle_{t_{k+1}^N} - \langle B^a\rangle_{t_k^N})^2\,\big|\,\mathcal{H}_{t_k^N}\big]\big] = \sum_{k=0}^{N-1}\hat{\mathbb{E}}[|\xi_k^N|]\,\sigma_{aa^T}^2(t_{k+1}^N - t_k^N)^2 \to 0,$$
$$\hat{\mathbb{E}}\Big[\Big|\sum_{k=0}^{N-1}\xi_k^N\big(\langle B^a\rangle_{t_{k+1}^N} - \langle B^a\rangle_{t_k^N}\big)(t_{k+1}^N - t_k^N)\Big|\Big] \le \sum_{k=0}^{N-1}\hat{\mathbb{E}}\big[|\xi_k^N|(t_{k+1}^N - t_k^N)\cdot\hat{\mathbb{E}}\big[\langle B^a\rangle_{t_{k+1}^N} - \langle B^a\rangle_{t_k^N}\,\big|\,\mathcal{H}_{t_k^N}\big]\big] = \sum_{k=0}^{N-1}\hat{\mathbb{E}}[|\xi_k^N|]\,\sigma_{aa^T}(t_{k+1}^N - t_k^N)^2 \to 0,$$
as well as
$$\hat{\mathbb{E}}\Big[\Big|\sum_{k=0}^{N-1}\xi_k^N(t_{k+1}^N - t_k^N)(B^a_{t_{k+1}^N} - B^a_{t_k^N})\Big|\Big] \le \sum_{k=0}^{N-1}\hat{\mathbb{E}}[|\xi_k^N|](t_{k+1}^N - t_k^N)\,\hat{\mathbb{E}}[|B^a_{t_{k+1}^N} - B^a_{t_k^N}|] = \sqrt{\frac{2\sigma_{aa^T}}{\pi}}\sum_{k=0}^{N-1}\hat{\mathbb{E}}[|\xi_k^N|](t_{k+1}^N - t_k^N)^{3/2} \to 0$$
and
$$\hat{\mathbb{E}}\Big[\Big|\sum_{k=0}^{N-1}\xi_k^N\big(\langle B^a\rangle_{t_{k+1}^N} - \langle B^a\rangle_{t_k^N}\big)(B^{\bar a}_{t_{k+1}^N} - B^{\bar a}_{t_k^N})\Big|\Big] \le \sum_{k=0}^{N-1}\hat{\mathbb{E}}[|\xi_k^N|]\,\hat{\mathbb{E}}\big[\big(\langle B^a\rangle_{t_{k+1}^N} - \langle B^a\rangle_{t_k^N}\big)|B^{\bar a}_{t_{k+1}^N} - B^{\bar a}_{t_k^N}|\big]$$
$$\le \sum_{k=0}^{N-1}\hat{\mathbb{E}}[|\xi_k^N|]\,\hat{\mathbb{E}}\big[(\langle B^a\rangle_{t_{k+1}^N} - \langle B^a\rangle_{t_k^N})^2\big]^{1/2}\,\hat{\mathbb{E}}\big[|B^{\bar a}_{t_{k+1}^N} - B^{\bar a}_{t_k^N}|^2\big]^{1/2} = \sum_{k=0}^{N-1}\hat{\mathbb{E}}[|\xi_k^N|]\,\sigma_{aa^T}\sigma_{\bar a\bar a^T}^{1/2}(t_{k+1}^N - t_k^N)^{3/2} \to 0.$$

We now can claim our G-Itˆo’s formula. Consider Z t Z t Z t D E νj j νi j βs dBs . X tν = X 0ν + αsν ds + ηs d B i , B j + 0

s

0

0

Proposition 48. Let α ν , β ν j and ηνi j , ν = 1, . . . , n, i, j = 1, . . . , d be bounded processes of MG2 (0, T ). Then for each t ≥ 0 and in L 2G (Ht ) we have Z t Z t j νj ∂xν Φ(X u )αuν du Φ(X t ) − Φ(X s ) = ∂x ν Φ(X u )βu dBu + s

s

Z t + s

 D E 1 νj νi j ∂x ν Φ(X u )ηu + ∂x2µ x ν Φ(X u )βuνi βu d B i , B j . u 2

(27)

Proof. We first consider the case where α, η and β are step processes of the form ηt (ω) =

N −1 X

ξk (ω)I[tk ,tk+1 ) (t).

k=0

From the above lemma, it is clear that (27) holds true. Now let Z t Z t Z t D E j ν j,N νi j,N d Bi , B j + βs dBs , αsν,N ds + ηs X tν,N = X 0ν + 0

0

s

0

where α N , η N and β N are uniformly bounded step processes that converge to α, η and β in MG2 (0, T ) as N → ∞. From Lemma 46 Z t Z t j ν j,N ∂xν Φ(X uN )αuν,N du Φ(X tN ) − Φ(X 0 ) = ∂x ν Φ(X uN )βu dBu + 0 0  D Z t E 1 νi j,N ν j,N + ∂x ν Φ(X uN )ηu + ∂x2µ x ν Φ(X uN )βuµi,N βu d Bi , B j . u 2 0 (28) We have N ,µ µ ˆ − X t |2 ] ≤ C E[|X t

Z 0

T

ˆ sµ,N − αsµ )2 ] + E[|η ˆ sµ,N − ηsµ |2 ] + E[(β ˆ sµ,N − βsµ )2 ]}ds {E[(α

We then can prove that, in MG2 (0, T ), ∂x ν Φ(X ·N )η·νi j,N → ∂x ν Φ(X · )η·νi j ∂x2µ x ν Φ(X ·N )β·µi,N β·ν j,N → ∂x2µ x ν Φ(X · )β·µi β·ν j ∂xν Φ(X ·N )α·ν,N → ∂xν Φ(X · )α·ν

∂x ν Φ(X ·N )β·ν j,N → ∂x ν Φ(X · )β·ν j . We then can pass to the limit in both sides of (28) and thus prove (27).
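For a quadratic $\Phi$ in one dimension the telescoping behind (26) is exact along any partition, so Itô's formula can be checked pathwise: with $\Phi(x) = x^2$ and $X = B$, the identity $B_T^2 = 2\int_0^T B\,dB + \langle B\rangle_T$ holds for the discrete sums themselves. A sketch under one constant-volatility scenario (my own illustration, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, T, N = 0.7, 1.0, 100_000
dt = T / N
dB = sigma * rng.normal(0.0, np.sqrt(dt), size=N)
B = np.concatenate([[0.0], np.cumsum(dB)])

ito_int = np.sum(B[:-1] * dB)   # int_0^T B dB with left-endpoint sums
qv = np.sum(dB**2)              # discrete quadratic variation <B>_T
lhs = B[-1]**2                  # Phi(B_T) - Phi(B_0) with Phi(x) = x^2
rhs = 2.0 * ito_int + qv
print(lhs, rhs)                 # equal up to floating-point rounding
```

For non-quadratic $\Phi$ the same check only holds in the limit of fine partitions, which is exactly the content of the convergence arguments above.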




6. G-martingales, G-convexity and Jensen's inequality

6.1. The notion of G-martingales

We now give the notion of G-martingales.

Definition 49. A process $(M_t)_{t\ge0}$ is called a G-martingale (resp. G-supermartingale, G-submartingale) if for each $0 \le s \le t < \infty$ we have $M_t \in L_G^1(\mathcal{H}_t)$ and
$$\hat{\mathbb{E}}[M_t\,|\,\mathcal{H}_s] = M_s \quad (\text{resp. } \le M_s,\ \ge M_s).$$

It is clear that, for a fixed $X \in L_G^1(\mathcal{H})$, the process $(\hat{\mathbb{E}}[X\,|\,\mathcal{H}_t])_{t\ge0}$ is a G-martingale. In general, how to characterize a G-martingale or a G-supermartingale is still a very interesting problem, but the following example gives an important characterization.

Example 50. Let $M_0 \in \mathbb{R}$, $\varphi = (\varphi^i)_{i=1}^d \in M_G^2(0,T;\mathbb{R}^d)$ and $\eta = (\eta^{ij})_{i,j=1}^d \in M_G^2(0,T;\mathbb{S}_d)$ be given, and let
$$M_t = M_0 + \int_0^t \varphi_u^i\,dB_u^i + \int_0^t \eta_u^{ij}\,d\langle B^i, B^j\rangle_u - \int_0^t 2G(\eta_u)\,du, \quad t \in [0,T].$$
Then $M$ is a G-martingale on $[0,T]$. To see this it suffices to prove the case $\eta \in M_G^{2,0}(0,T;\mathbb{S}_d)$, i.e.,
$$\eta_t = \sum_{k=0}^{N-1}\eta_{t_k}\mathbf{1}_{[t_k,t_{k+1})}(t).$$
We have, for $s \in [t_{N-1}, t_N]$,
$$\hat{\mathbb{E}}[M_t\,|\,\mathcal{H}_s] = M_s + \hat{\mathbb{E}}\big[\eta_{t_{N-1}}^{ij}\big(\langle B^i, B^j\rangle_t - \langle B^i, B^j\rangle_s\big) - 2G(\eta_{t_{N-1}})(t-s)\,\big|\,\mathcal{H}_s\big]$$
$$= M_s + \hat{\mathbb{E}}\big[\eta_{t_{N-1}}^{ij}(B_t^i - B_s^i)(B_t^j - B_s^j)\,\big|\,\mathcal{H}_s\big] - 2G(\eta_{t_{N-1}})(t-s) = M_s.$$
In the last step we apply the relation (11). We can then repeat this procedure, step by step backwards, to prove the result for any $s \in [0, t_{N-1}]$.

Remark 51. It is worth mentioning that if $M$ is a G-martingale, $-M$ need not be a G-martingale. In the above example, if $\eta \equiv 0$, then $-M$ is still a G-martingale. This marks an essential difference between the $dB$ part and the $d\langle B\rangle$ part of a G-martingale.

6.2. G-convexity and Jensen's inequality for G-expectation

A very interesting question is whether the well-known Jensen inequality still holds for G-expectation. In the framework of g-expectation, this problem was investigated in [3], where a counterexample shows that, even for a linear (hence convex) function, Jensen's inequality for g-expectation generally does not hold. Stimulated by this example, [28] proved that Jensen's inequality holds for any convex function under a g-expectation if and only if the corresponding generating function $g = g(t,z)$ is super-homogeneous in $z$. Here we discuss this problem from a quite different point of view. We will define a new notion of convexity:

ˆ t (Bti − Bsi )(Bt − Bs )|Hs ] − 2G(ηt )(t − s) = Ms + E[η N −1 N −1 = Ms . In the last step, we apply the relation (11). We can then repeat this procedure, step by step backward, to prove the result for any s ∈ [0, t N −1 ]. Remark 51. It is worth mentioning that if M is a G-martingale, −M need not be a G-martingale. In the above example, if η ≡ 0, then −M is still a G-martingale. This makes an essential difference between the dB part and the d hBi part of a G-martingale. 6.2. G-convexity and Jensen’s inequality for G-expectation A very interesting question is whether the well-known Jensen’s inequality still holds for Gexpectation. In the framework of g-expectation, this problem was investigated in [3] in which a counterexample is given to indicate that, even for a linear function which is obviously convex, Jensen’s inequality for g-expectation generally does not hold. Stimulated by this example, [28] proved that Jensen’s inequality holds for any convex function under a g-expectation if and only if the corresponding generating function g = g(t, z) is super-homogeneous in z. Here we will discuss this problem from a quite different point of view. We will define a new notion of convexity:


Definition 52. A $C^2$-function $h : \mathbb{R} \mapsto \mathbb{R}$ is called G-convex if the following condition holds for each $(y,z,A) \in \mathbb{R}\times\mathbb{R}^d\times\mathbb{S}_d$:
$$G(h'(y)A + h''(y)zz^T) - h'(y)G(A) \ge 0, \tag{29}$$
where $h'$ and $h''$ denote the first and second derivatives of $h$. It is clear that in the special situation where $G(D^2u) = \frac12\Delta u$, a G-convex function is a convex function in the classical sense.

Lemma 53. The following two conditions are equivalent:
(i) the function $h$ is G-convex;
(ii) the following Jensen inequality holds: for each $T \ge 0$,
$$\hat{\mathbb{E}}[h(\varphi(B_T))] \ge h(\hat{\mathbb{E}}[\varphi(B_T)]) \tag{30}$$
for each $C^2$-function $\varphi$ such that $h(\varphi(B_T)), \varphi(B_T) \in L_G^1(\mathcal{H}_T)$.

Proof. (i) $\Rightarrow$ (ii): By definition, $u(t,x) := P_t^G[\varphi](x) = \hat{\mathbb{E}}[\varphi(x + B_t)]$ solves the nonlinear heat equation (1). Here we only consider the case where $u$ is a $C^{1,2}$-function; otherwise we can use the language of viscosity solutions as we did in the proof of Lemma 7. By a simple calculation,
$$\partial_t h(u(t,x)) = h'(u(t,x))\partial_t u = h'(u(t,x))G(D^2u(t,x)),$$
or
$$\partial_t h(u(t,x)) - G(D^2h(u(t,x))) - f(t,x) = 0, \qquad h(u(0,x)) = h(\varphi(x)),$$
where we define $f(t,x) = h'(u(t,x))G(D^2u(t,x)) - G(D^2h(u(t,x)))$. Since $h$ is G-convex, it follows that $f \le 0$ and thus $h(u)$ is a G-subsolution. It follows from the maximum principle that $h(P_t^G(\varphi)(x)) \le P_t^G(h(\varphi))(x)$. In particular (30) holds. Thus we have (ii).

(ii) $\Rightarrow$ (i): For a fixed $(y,z,A) \in \mathbb{R}\times\mathbb{R}^d\times\mathbb{S}_d$, we set $\varphi(x) := y + \langle x, z\rangle + \frac12\langle Ax, x\rangle$. By the definition of $P_t^G$ we have $\partial_t(P_t^G(\varphi)(x))|_{t=0} = G(D^2\varphi(x))$. By (ii) we have $h(P_t^G(\varphi)(x)) \le P_t^G(h(\varphi))(x)$. Thus, for $t > 0$,
$$\frac1t\big[h(P_t^G(\varphi)(x)) - h(\varphi(x))\big] \le \frac1t\big[P_t^G(h(\varphi))(x) - h(\varphi(x))\big].$$
Letting $t$ tend to $0$ yields $h'(\varphi(x))G(D^2\varphi(x)) \le G(D^2_{xx}h(\varphi(x)))$. Since $D_x\varphi(x) = z + Ax$ and $D^2_{xx}\varphi(x) = A$, we then set $x = 0$ and obtain (29). $\square$

Proposition 54. The following two conditions are equivalent:
(i) the function $h$ is G-convex;




(ii) the following Jensen inequality holds:
$$\hat{\mathbb{E}}[h(X)\,|\,\mathcal{H}_t] \ge h(\hat{\mathbb{E}}[X\,|\,\mathcal{H}_t]), \qquad t \ge 0, \tag{31}$$
for each $X \in L_G^1(\mathcal{H})$ such that $h(X) \in L_G^1(\mathcal{H})$.

Proof. The part (ii) $\Rightarrow$ (i) is already provided by the above lemma. We can also apply that lemma to prove (31) for the case $X \in L_{ip}^0(\mathcal{H})$ of the form $X = \varphi(B_{t_1}, \dots, B_{t_m} - B_{t_{m-1}})$ by using the procedure defining $\hat{\mathbb{E}}[\cdot]$ and $\hat{\mathbb{E}}[\cdot\,|\,\mathcal{H}_t]$ given in Definition 14. We can then extend this Jensen inequality, under the norm $\|\cdot\| = \hat{\mathbb{E}}[|\cdot|]$, to the general situation. $\square$

Remark 55. The above notion of G-convexity can also be applied to the case where the nonlinear heat equation (1) has the more general form
$$\frac{\partial u}{\partial t} - G(u, \nabla u, D^2u) = 0, \qquad u(0,x) = \psi(x) \tag{32}$$
(see Examples 4.3, 4.4 and 4.5 in [40]). In this case a $C^2$-function $h : \mathbb{R} \mapsto \mathbb{R}$ is said to be G-convex if the following condition holds for each $(y,z,A) \in \mathbb{R}\times\mathbb{R}^d\times\mathbb{S}_d$:
$$G(h(y), h'(y)z, h'(y)A + h''(y)zz^T) - h'(y)G(y,z,A) \ge 0.$$
We do not need the subadditivity and/or positive homogeneity of $G(y,z,A)$. A particularly interesting situation is the case of the g-expectation for a given generating function $g = g(y,z)$, $(y,z) \in \mathbb{R}\times\mathbb{R}^d$; in this case we have the following g-convexity:
$$\frac12 h''(y)|z|^2 + g(h(y), h'(y)z) - h'(y)g(y,z) \ge 0. \tag{33}$$
We will discuss such g-convex functions in [25].
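In dimension $d = 1$ with $\Gamma = [\sigma_*, \bar\sigma]$, $G$ reduces to $G(a) = \frac12(\bar\sigma^2 a^+ - \sigma_*^2 a^-)$, and condition (29) can be scanned numerically over a grid (an assumption-laden sketch of my own; a grid scan is only evidence, not a proof). One can check that convex nondecreasing functions such as $h = \exp$ satisfy (29), by monotonicity and positive homogeneity of $G$, while the concave $h(y) = -y^2$ fails at $y = 0$, $z \ne 0$, where (29) reads $G(-2z^2) \ge 0$.

```python
import numpy as np

sig_lo, sig_hi = 0.5, 1.0        # assumed Gamma = [sig_lo, sig_hi], d = 1

def G(a):                        # G(a) = (1/2)(sig_hi^2 a^+ - sig_lo^2 a^-)
    return 0.5 * (sig_hi**2 * max(a, 0.0) - sig_lo**2 * max(-a, 0.0))

def is_G_convex(h1, h2, grid):   # scan (29): G(h'(y) A + h''(y) z^2) - h'(y) G(A) >= 0
    return all(G(h1(y) * A + h2(y) * z * z) - h1(y) * G(A) >= -1e-12
               for y in grid for z in grid for A in grid)

grid = np.linspace(-2.0, 2.0, 9)
print(is_G_convex(np.exp, np.exp, grid))                       # h = exp passes
print(is_G_convex(lambda y: -2.0 * y, lambda y: -2.0, grid))   # h(y) = -y^2 fails
```

Note that $h(y) = -y$, which is affine, also passes the scan: $G(-A) + G(A) \ge G(0) = 0$ by subadditivity, which is one way G-convexity differs from the classical notion.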

Example 56. Let $h$ be a G-convex function and let $X \in L_G^1(\mathcal{H})$ be such that $h(X) \in L_G^1(\mathcal{H})$. Then $Y_t = h(\hat{\mathbb{E}}[X\,|\,\mathcal{H}_t])$, $t \ge 0$, is a G-submartingale: for each $s \le t$,
$$\hat{\mathbb{E}}[Y_t\,|\,\mathcal{H}_s] = \hat{\mathbb{E}}\big[h(\hat{\mathbb{E}}[X\,|\,\mathcal{H}_t])\,\big|\,\mathcal{H}_s\big] \ge h(\hat{\mathbb{E}}[X\,|\,\mathcal{H}_s]) = Y_s.$$

7. Stochastic differential equations

We consider the following SDE driven by G-Brownian motion:
$$X_t = X_0 + \int_0^t b(X_s)\,ds + \int_0^t h_{ij}(X_s)\,d\langle B^i, B^j\rangle_s + \int_0^t \sigma_j(X_s)\,dB_s^j, \quad t \in [0,T], \tag{34}$$
where the initial condition $X_0 \in \mathbb{R}^n$ is given and $b, h_{ij}, \sigma_j : \mathbb{R}^n \mapsto \mathbb{R}^n$ are given Lipschitz functions, i.e., $|\varphi(x) - \varphi(x')| \le K|x - x'|$ for each $x, x' \in \mathbb{R}^n$, for $\varphi = b$, $h_{ij}$ and $\sigma_j$. Here the horizon $[0,T]$ can be arbitrarily large. The solution is a process $X \in M_G^2(0,T;\mathbb{R}^n)$ satisfying the above SDE. We first introduce the following mapping on a fixed interval $[0,T]$:
$$\Lambda_\cdot(Y) : Y \in M_G^2(0,T;\mathbb{R}^n) \longmapsto M_G^2(0,T;\mathbb{R}^n)$$


by setting $\Lambda_t(Y) = X_t$, $t \in [0,T]$, with
$$\Lambda_t(Y) = X_0 + \int_0^t b(Y_s)\,ds + \int_0^t h_{ij}(Y_s)\,d\langle B^i, B^j\rangle_s + \int_0^t \sigma_j(Y_s)\,dB_s^j.$$
We immediately have

Lemma 57. For each $Y, Y' \in M_G^2(0,T;\mathbb{R}^n)$ we have the following estimate:
$$\hat{\mathbb{E}}[|\Lambda_t(Y) - \Lambda_t(Y')|^2] \le C\int_0^t \hat{\mathbb{E}}[|Y_s - Y'_s|^2]\,ds, \quad t \in [0,T],$$
where the constant $C$ depends only on $K$, $\Gamma$ and the dimension $n$.

Proof. This is a direct consequence of the inequalities (14), (16) and (23). $\square$

We now prove that SDE (34) has a unique solution. Multiplying both sides of the above inequality by $e^{-2Ct}$ and integrating over $[0,T]$, we obtain
$$\int_0^T \hat{\mathbb{E}}[|\Lambda_t(Y) - \Lambda_t(Y')|^2]e^{-2Ct}\,dt \le C\int_0^T e^{-2Ct}\int_0^t \hat{\mathbb{E}}[|Y_s - Y'_s|^2]\,ds\,dt$$
$$= C\int_0^T\left(\int_s^T e^{-2Ct}\,dt\right)\hat{\mathbb{E}}[|Y_s - Y'_s|^2]\,ds = (2C)^{-1}C\int_0^T(e^{-2Cs} - e^{-2CT})\hat{\mathbb{E}}[|Y_s - Y'_s|^2]\,ds.$$
We then have
$$\int_0^T \hat{\mathbb{E}}[|\Lambda_t(Y) - \Lambda_t(Y')|^2]e^{-2Ct}\,dt \le \frac12\int_0^T \hat{\mathbb{E}}[|Y_t - Y'_t|^2]e^{-2Ct}\,dt.$$
We observe that the following two norms are equivalent on $M_G^2(0,T;\mathbb{R}^n)$:
$$\int_0^T \hat{\mathbb{E}}[|Y_t|^2]\,dt \sim \int_0^T \hat{\mathbb{E}}[|Y_t|^2]e^{-2Ct}\,dt.$$
From this estimate we obtain that $\Lambda$ is a contraction mapping. Consequently, we have

Theorem 58. There exists a unique solution $X \in M_G^2(0,T;\mathbb{R}^n)$ of the stochastic differential equation (34).

Acknowledgements

The author thanks The National Basic Research Program of China (973 Program), grant No. 2007CB814900 (Financial Risk), for partial support. He thanks the anonymous referee for constructive suggestions. In particular, the discussion of Jensen's inequality for G-convex functions in Section 6 was stimulated by one of the referee's interesting questions.

Appendix. Some inequalities in $L_G^p(\mathcal{H})$

For $r > 0$ and $1 < p, q < \infty$ such that $\frac1p + \frac1q = 1$, we have
$$|a+b|^r \le \max\{1, 2^{r-1}\}(|a|^r + |b|^r), \qquad \forall a, b \in \mathbb{R}, \tag{35}$$
$$|ab| \le \frac{|a|^p}{p} + \frac{|b|^q}{q}. \tag{36}$$

Proposition 59. We have
$$\hat{\mathbb{E}}[|X+Y|^r] \le C_r\big(\hat{\mathbb{E}}[|X|^r] + \hat{\mathbb{E}}[|Y|^r]\big), \tag{37}$$
$$\hat{\mathbb{E}}[|XY|] \le \hat{\mathbb{E}}[|X|^p]^{1/p}\cdot\hat{\mathbb{E}}[|Y|^q]^{1/q}, \tag{38}$$
$$\hat{\mathbb{E}}[|X+Y|^p]^{1/p} \le \hat{\mathbb{E}}[|X|^p]^{1/p} + \hat{\mathbb{E}}[|Y|^p]^{1/p}. \tag{39}$$
In particular, for $1 \le p < p'$, we have $\hat{\mathbb{E}}[|X|^p]^{1/p} \le \hat{\mathbb{E}}[|X|^{p'}]^{1/p'}$.

Proof. (37) follows from (35). For (38) we set
$$\xi = \frac{X}{\hat{\mathbb{E}}[|X|^p]^{1/p}}, \qquad \eta = \frac{Y}{\hat{\mathbb{E}}[|Y|^q]^{1/q}}.$$
By (36) we have
$$\hat{\mathbb{E}}[|\xi\eta|] \le \hat{\mathbb{E}}\left[\frac{|\xi|^p}{p} + \frac{|\eta|^q}{q}\right] \le \hat{\mathbb{E}}\left[\frac{|\xi|^p}{p}\right] + \hat{\mathbb{E}}\left[\frac{|\eta|^q}{q}\right] = \frac1p + \frac1q = 1.$$
Thus (38) follows. For (39),
$$\hat{\mathbb{E}}[|X+Y|^p] = \hat{\mathbb{E}}[|X+Y|\cdot|X+Y|^{p-1}] \le \hat{\mathbb{E}}[|X|\cdot|X+Y|^{p-1}] + \hat{\mathbb{E}}[|Y|\cdot|X+Y|^{p-1}]$$
$$\le \hat{\mathbb{E}}[|X|^p]^{1/p}\cdot\hat{\mathbb{E}}[|X+Y|^{(p-1)q}]^{1/q} + \hat{\mathbb{E}}[|Y|^p]^{1/p}\cdot\hat{\mathbb{E}}[|X+Y|^{(p-1)q}]^{1/q}.$$
We observe that $(p-1)q = p$. Thus we have (39). $\square$
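The inequalities (37)–(39) use only the sublinearity of $\hat{\mathbb{E}}$, so they can be spot-checked on a toy sublinear expectation $\hat{\mathbb{E}}[X] = \max_i E_{P_i}[X]$ built from finitely many probability vectors on a finite sample space (an illustrative sketch of my own, not tied to the G-framework):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
P = rng.dirichlet(np.ones(n), size=3)    # three probability vectors on n points
X, Y = rng.normal(size=n), rng.normal(size=n)

def E_hat(Z):                            # sublinear: upper expectation over {P_i}
    return max(float(p @ Z) for p in P)

p_, q_ = 3.0, 1.5                        # conjugate exponents: 1/3 + 1/1.5 = 1
holder = E_hat(np.abs(X * Y)) <= E_hat(np.abs(X)**p_)**(1/p_) * E_hat(np.abs(Y)**q_)**(1/q_)
minkow = (E_hat(np.abs(X + Y)**p_)**(1/p_)
          <= E_hat(np.abs(X)**p_)**(1/p_) + E_hat(np.abs(Y)**p_)**(1/p_))
print(holder, minkow)
```

The argument in the proof goes through verbatim for this toy $\hat{\mathbb{E}}$, since each $E_{P_i}$ satisfies the classical Hölder and Minkowski inequalities and the maximum preserves them.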

References

[1] Ph. Artzner, F. Delbaen, J.-M. Eber, D. Heath, Thinking Coherently, RISK 10, November, 1997, pp. 68–71.
[2] Ph. Artzner, F. Delbaen, J.-M. Eber, D. Heath, Coherent measures of risk, Math. Finance 9 (1999) 203–228.
[3] Ph. Briand, F. Coquet, Y. Hu, J. Mémin, S. Peng, A converse comparison theorem for BSDEs and related properties of g-expectations, Electron. Comm. Probab. 5 (2000).
[4] P. Barrieu, N. El Karoui, Pricing, hedging and optimally designing derivatives via minimization of risk measures, Contemp. Math. 2004 (in press), preprint.
[5] Z. Chen, A property of backward stochastic differential equations, C. R. Acad. Sci. Paris Sér. I 326 (4) (1998) 483–488.
[6] Z. Chen, L. Epstein, Ambiguity, risk and asset returns in continuous time, Econometrica 70 (4) (2002) 1403–1443.
[7] Z. Chen, R. Kulperger, L. Jiang, Jensen's inequality for g-expectation: Part 1, C. R. Acad. Sci. Paris Sér. I 337 (2003) 725–730.
[8] Z. Chen, S. Peng, A nonlinear Doob–Meyer type decomposition and its application, SUT J. Math. (Japan) 34 (2) (1998) 197–208.
[9] Z. Chen, S. Peng, A general downcrossing inequality for g-martingales, Statist. Probab. Lett. 46 (2) (2000) 169–175.
[10] K.L. Chung, R. Williams, Introduction to Stochastic Integration, 2nd ed., Birkhäuser, 1990.
[11] F. Coquet, Y. Hu, J. Mémin, S. Peng, A general converse comparison theorem for backward stochastic differential equations, C. R. Acad. Sci. Paris Sér. I 333 (2001) 577–581.


[12] F. Coquet, Y. Hu, J. Mémin, S. Peng, Filtration-consistent nonlinear expectations and related g-expectations, Probab. Theory Related Fields 123 (2002) 1–27.
[13] M. Crandall, H. Ishii, P.-L. Lions, User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. 27 (1) (1992) 1–67.
[14] P.J. Daniell, A general form of integral, Ann. Math. 19 (1918) 279–294.
[15] F. Delbaen, Coherent Risk Measures (Lectures given at the Cattedra Galileiana at the Scuola Normale di Pisa, March 2000), Scuola Normale di Pisa, 2002.
[16] F. Delbaen, E. Rosazza Gianin, S. Peng, m-Stable sets, risk measures and g-expectations, 2005, preprint.
[17] L. Denis, C. Martini, A theoretical framework for the pricing of contingent claims in the presence of model uncertainty, Ann. Appl. Probab. 16 (2) (2006) 827–852.
[18] D. Feyel, A. de La Pradelle, Espaces de Sobolev gaussiens, Ann. Inst. Fourier 39 (1989) 875–908.
[19] W.H. Fleming, H.M. Soner, Controlled Markov Processes and Viscosity Solutions, Springer-Verlag, New York, 1992.
[20] S.W. He, J.G. Wang, J.-A. Yan, Semimartingale Theory and Stochastic Calculus, CRC Press, Beijing, 1992.
[21] P.J. Huber, Robust Statistics, John Wiley & Sons, 1981.
[22] N. Ikeda, S. Watanabe, Stochastic Differential Equations and Diffusion Processes, North-Holland, Amsterdam, 1981.
[23] K. Itô, Differential equations determining a Markoff process, J. Pan-Japan Math. Coll. No. 1077 (1942); in: Kiyosi Itô: Selected Papers, Springer, 1987.
[24] K. Itô, H.P. McKean, Diffusion Processes and Their Sample Paths, Springer-Verlag, 1965.
[25] G. Jia, S. Peng, A new look at Jensen's inequality for g-expectations: g-convexity and their applications, 2007, preprint.
[26] L. Jiang, Some results on the uniqueness of generators of backward stochastic differential equations, C. R. Acad. Sci. Paris, Ser. I 338 (2004) 575–580.
[27] L. Jiang, Z. Chen, A result on the probability measures dominated by g-expectation, Acta Math. Appl. Sinica (English Ser.) 20 (3) (2004) 507–512.
[28] L. Jiang, Z. Chen, On Jensen's inequality for g-expectation, Chin. Ann. Math. 25B (3) (2004) 401–412.
[29] I. Karatzas, S.E. Shreve, Brownian Motion and Stochastic Calculus, Springer-Verlag, New York, 1988.
[30] N.V. Krylov, Controlled Diffusion Processes, Springer-Verlag, New York, 1980.
[31] T. Lyons, Uncertain volatility and the risk free synthesis of derivatives, Appl. Math. Finance 2 (1995) 117–133.
[32] M. Nisio, On a nonlinear semigroup attached to optimal stochastic control, Publ. RIMS, Kyoto Univ. 13 (1976) 513–537.
[33] M. Nisio, On stochastic optimal controls and envelope of Markovian semi-groups, in: Proc. of Int. Symp. Kyoto, 1976, pp. 297–325.
[34] B. Øksendal, Stochastic Differential Equations, fifth ed., Springer, 1998.
[35] S. Peng, A generalized dynamic programming principle and Hamilton–Jacobi–Bellman equation, Stochast. Stochast. Rep. 38 (2) (1992) 119–134.
[36] S. Peng, Backward SDE and related g-expectation, in: El Karoui, Mazliak (Eds.), Backward Stochastic Differential Equations, in: Pitman Research Notes in Math. Series, vol. 364, 1997, pp. 141–159.
[37] S. Peng, BSDE and stochastic optimizations, Ch. 2 in: J. Yan, S. Peng, S. Fang, L.M. Wu, Topics in Stochastic Analysis, Science Press, Beijing, 1997 (in Chinese).
[38] S. Peng, Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob–Meyer's type, Probab. Theory Related Fields 113 (4) (1999) 473–499.
[39] S. Peng, Filtration consistent nonlinear expectations and evaluations of contingent claims, Acta Math. Appl. Sinica (English Ser.) 20 (2) (2004) 1–24.
[40] S. Peng, Nonlinear expectations and nonlinear Markov chains, Chin. Ann. Math. 26B (2) (2005) 159–184.
[41] S. Peng, Dynamical evaluations, C. R. Acad. Sci. Paris, Ser. I 339 (2004) 585–589.
[42] S. Peng, Dynamically consistent nonlinear evaluations and expectations, 2005, preprint (available at arXiv:math.PR/0501415v1, 24 Jan 2005).
[43] S. Peng, G-expectation, G-Brownian motion and related stochastic calculus of Itô's type, in: Proceedings of the 2005 Abel Symposium, 2006 (in press); preprint available at arXiv:math.PR/0601035v1, 3 Jan 2006.
[44] S. Peng, G-Brownian motion and dynamic risk measure under volatility uncertainty, 2007, preprint (arXiv:0711.2834v1).
[45] S. Peng, M. Xu, Numerical calculations to solve BSDE, 2003, preprint.
[46] Ph. Protter, Stochastic Integration and Differential Equations, Springer-Verlag, 1990.
[47] D. Revuz, M. Yor, Continuous Martingales and Brownian Motion, Springer-Verlag, 1991.


[48] E. Rosazza Gianin, Some examples of risk measures via g-expectations, Insurance Math. Econom. 2003 (in press).
[49] P. Cheridito, H.M. Soner, N. Touzi, N. Victoir, Second order backward stochastic differential equations and fully non-linear parabolic PDEs, preprint (available at arXiv:math.PR/0509295v1, 14 Sep 2005).
[50] J.-A. Yan, Lecture Note on Measure Theory, Science Press, Beijing, 1998 (in Chinese).
[51] J. Yong, X. Zhou, Stochastic Controls: Hamiltonian Systems and HJB Equations, Springer-Verlag, 1999.
[52] K. Yosida, Functional Analysis, sixth ed., Springer, 1980.

Further reading

[1] C. Dellacherie, P.A. Meyer, Probabilities and Potential A and B, North-Holland, 1978, 1982.
[2] N. El Karoui, M.C. Quenez, Dynamic programming and pricing of contingent claims in incomplete market, SIAM J. Control Optim. 33 (1) (1995).
[3] N. El Karoui, S. Peng, M.C. Quenez, Backward stochastic differential equations in finance, Math. Finance 7 (1) (1997) 1–71.
[4] E. Pardoux, S. Peng, Adapted solution of a backward stochastic differential equation, Syst. Control Lett. 14 (1) (1990) 55–61.
[5] S. Peng, Nonlinear expectation, nonlinear evaluations and risk measures, in: K. Back, T.R. Bielecki, C. Hipp, S. Peng, W. Schachermayer (Eds.), Stochastic Methods in Finance Lectures, in: LNM, vol. 1856, Springer-Verlag, 2004, pp. 143–217.
[6] S. Peng, Law of large numbers and central limit theorem under nonlinear expectations, Stochast. Process. Appl. 2007 (in press); preprint available at arXiv:math.PR/0702358v1, 13 Feb 2007.
[7] S. Peng, M. Xu, gΓ-expectations and the related nonlinear Doob–Meyer decomposition theorem, 2005, preprint.