J. Math. Anal. Appl. 451 (2017) 906–923
The least squares estimator of random variables under sublinear expectations

Sun Chuanfeng a,1, Ji Shaolin b,∗,2

a School of Mathematical Sciences, University of Jinan, Jinan, Shandong 250022, PR China
b Institute for Financial Studies and Institute of Mathematics, Shandong University, Jinan, Shandong 250100, PR China

Article history: Received 27 May 2016. Available online 22 February 2017. Submitted by V. Pozdnyakov.

Keywords: Least squares estimator; Conditional expectation; Sublinear expectation; Coherent risk measure; g-Expectation

Abstract. In this paper, we study the least squares estimator for sublinear expectations. Under some mild assumptions, we prove the existence and uniqueness of the least squares estimator. The relationship between the least squares estimator and the conditional coherent risk measure (resp. the conditional g-expectation) is also explored. Then some characterizations of the least squares estimator are given. © 2017 Elsevier Inc. All rights reserved.
1. Introduction

Let (Ω, F, P) be a probability space and let ξ be a random variable in L^2_F(P), where L^2_F(P) := {ξ : Ω → R; ξ ∈ L^2(P) and ξ is F-measurable}. If C ⊂ F is a σ-algebra, then the conditional expectation of ξ, denoted by E_P[ξ|C], is just the solution of the following problem: find an η̂ ∈ L^2_C(P) such that

E_P[(ξ − η̂)^2] = inf_{η ∈ L^2_C(P)} E_P[(ξ − η)^2].
* Corresponding author. Fax: +86 0531 88564100. E-mail addresses: [email protected] (C. Sun), [email protected] (S. Ji).
1 This work was supported by the Fund of Doctoral Program Research of University of Jinan (No. 160100119).
2 This work was supported by National Natural Science Foundation of China (No. 11571203) and by the Programme of Introducing Talents of Discipline to Universities of China (No. B12023).
http://dx.doi.org/10.1016/j.jmaa.2017.02.020 0022-247X/© 2017 Elsevier Inc. All rights reserved.
The minimizer η̂ is called the least squares estimator of ξ. The fact that the conditional expectation coincides with the least squares estimator is the basis of filtering theory (see for example Bensoussan [3], Davis [4] or Kallianpur [9]). From another viewpoint, the least squares estimator can also be used as an alternative definition of the conditional expectation.

In recent decades, nonlinear expectations have been proposed and have developed rapidly, and various definitions of conditional nonlinear expectations have been introduced. For example, Peng studied the g-expectation in [10] and defined the conditional g-expectation E_g[ξ|F_t] as the solution of a backward stochastic differential equation at time t. The g-expectation has many good properties, including time consistency: for all 0 ≤ s ≤ t ≤ T, E_g[E_g[ξ|F_t]|F_s] = E_g[ξ|F_s]. Artzner et al. [1] studied coherent risk measures, which can be seen as one kind of nonlinear expectation (see [6] and [12]). The conditional coherent risk measure in [2] is defined as

Φ̄_t[ξ] := ess sup_{P∈P} E_P[ξ|F_t],

where ξ ∈ F_T, F_t is a sub-σ-algebra of F_T and P is a family of probability measures. They proved that if P is 'm-stable', then the conditional coherent risk measure defined above is time consistent.

For nonlinear expectations, it is not known whether the conditional nonlinear expectation still coincides with the least squares estimator, so it is interesting to explore the relationship between the two. In this paper, we focus on the sublinear case and investigate the least squares estimator under sublinear expectations. Since a sublinear expectation ρ can be represented as a supremum of a family of linear expectations, our least squares problem can be seen as a minimax problem, and the key tool for solving it is the minimax theorem. We assume that ρ is continuous from above so that the minimax theorem applies, and we then obtain the existence and uniqueness of the least squares estimator. Denoting the least squares estimator by ρ(ξ|C), we note that ρ(λξ|C) = λρ(ξ|C) for every real number λ. This implies that, generally speaking, both Artzner et al.'s and Peng's nonlinear conditional expectations fail to be least squares estimators, since they are only positively homogeneous. A natural question then arises: when do least squares estimators coincide with Artzner et al.'s and Peng's conditional expectations? To answer it, we investigate the relationship between the least squares estimators and these nonlinear conditional expectations, and give a necessary and sufficient condition for the coincidence. Finally, by variational methods, we show that finding the least squares estimator is equivalent to solving an associated nonlinear equation.

The paper is organized as follows. In section 2, we formulate our problem. Under some mild assumptions, we prove the existence and uniqueness of the least squares estimator in section 3.
In section 4, we first give the basic properties of the least squares estimator and then explore its relationship with the conditional coherent risk measure and the conditional g-expectation. In the last section, we obtain several characterizations of the least squares estimator.

2. Problem formulation

2.1. Preliminaries

For a given measurable space (Ω, F), we denote by F the set of bounded F-measurable functions, by N the set of natural numbers and by R the set of real numbers.

Definition 2.1. A sublinear expectation ρ is a functional ρ : F → R satisfying

(i) Monotonicity: ρ(ξ1) ≥ ρ(ξ2) if ξ1 ≥ ξ2;
(ii) Constant preserving: ρ(c) = c for c ∈ R;
(iii) Sub-additivity: ρ(ξ1 + ξ2) ≤ ρ(ξ1) + ρ(ξ2) for each ξ1, ξ2 ∈ F;
(iv) Positive homogeneity: ρ(λξ) = λρ(ξ) for λ ≥ 0.

Note that F is a Banach space under the supremum norm. Denote the dual space of F by F^*. Since there is a one-to-one correspondence between F^* and the class of additive set functions, we denote an element of F^* by E_P, where P is an additive set function; sometimes we also write P instead of E_P.

Lemma 2.2. Suppose that the sublinear expectation ρ can be represented by a family of probability measures P, i.e., ρ(ξ) = sup_{P∈P} E_P[ξ]. For a sequence {ξn}_{n∈N}, if there exists an M ∈ R such that ξn ≥ M for all n, then

ρ(lim inf_n ξn) ≤ lim inf_n ρ(ξn).

Proof. Let ζn = inf_{k≥n} ξk. Then ζn ≤ ξn and {ζn}_{n∈N} is an increasing sequence bounded below by M, so by monotone convergence under each P ∈ P,

ρ(lim inf_n ξn) = ρ(lim_n ζn) = lim_n ρ(ζn) ≤ lim inf_n ρ(ξn). □

Definition 2.3. We say a sublinear expectation ρ is continuous from above on F if for any sequence {ξn}_{n∈N} ⊂ F satisfying ξn ↓ 0, we have ρ(ξn) ↓ 0.

Lemma 2.4. If a sublinear expectation ρ is continuous from above on F, then for any linear expectation E_P dominated by ρ, P is a probability measure.

Proof. For any An ↓ ∅, we have ρ(I_{An}) ↓ 0. If a linear expectation E_P is dominated by ρ, then P(An) ↓ 0, so P is countably additive. Moreover P(Ω) = 1, since E_P[1] ≤ ρ(1) = 1 and −E_P[1] = E_P[−1] ≤ ρ(−1) = −1. Thus P is a probability measure. □

Proposition 2.5. A sublinear expectation ρ is continuous from above on F if and only if there exist a probability measure P0 and a family of probability measures P such that

i) ρ(X) = sup_{P∈P} E_P[X] for all X ∈ F;
ii) any element of P is absolutely continuous with respect to P0;
iii) the set {dP/dP0; P ∈ P} is σ(L^1(P0), L^∞(P0))-compact,

where σ(L^1(P0), L^∞(P0)) denotes the weak topology on L^1(P0).

Proof. ⇒ By Theorem A.1 in Appendix A, ρ can be represented by the family of linear expectations dominated by ρ; we denote this family by P. Since ρ is continuous from above on F, by Lemma 2.4 each element of P is a probability measure. By Theorem A.2, P is σ(F^*, F)-compact, where σ(F^*, F) denotes the weak* topology on F^*. By Theorem A.3, there exists a P0 ∈ F^*_c such that all the elements of P are absolutely continuous with respect to P0, where F^*_c is the set of linear expectations generated by countably additive measures. We may then replace F by L^∞(P0); recall that L^1(P0) is the space of integrable random variables and L^∞(P0) the space of all equivalence classes of bounded real-valued random variables. Since ρ is continuous from above on F and the dual space of L^1(P0) is L^∞(P0), by Corollary 4.35 in [7] the set {dP/dP0; P ∈ P} is σ(L^1(P0), L^∞(P0))-compact.

⇐ The result is deduced directly from Dini's theorem. □
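The four axioms of Definition 2.1 are easy to test numerically when ρ is generated by finitely many measures on a finite state space. The following Python sketch is our own illustration (the state space and the family of measures are arbitrary choices, not taken from the paper); by Theorem A.1 every sublinear expectation arises as such a supremum of linear expectations.

```python
import random

# Three probability measures on a three-point state space (arbitrary choices).
family = [(0.2, 0.3, 0.5), (0.5, 0.25, 0.25), (1/3, 1/3, 1/3)]

def rho(xi):
    """Sublinear expectation rho(xi) = max_P E_P[xi] over the finite family."""
    return max(sum(p * x for p, x in zip(meas, xi)) for meas in family)

rng = random.Random(0)
for _ in range(200):
    xi1 = [rng.uniform(-5, 5) for _ in range(3)]
    xi2 = [rng.uniform(-5, 5) for _ in range(3)]
    lam = rng.uniform(0, 5)
    c = rng.uniform(-5, 5)
    hi = [max(a, b) for a, b in zip(xi1, xi2)]          # hi >= xi2 pointwise
    assert rho(hi) >= rho(xi2) - 1e-12                  # (i) monotonicity
    assert abs(rho([c, c, c]) - c) < 1e-12              # (ii) constant preserving
    assert rho([a + b for a, b in zip(xi1, xi2)]) <= \
           rho(xi1) + rho(xi2) + 1e-12                  # (iii) sub-additivity
    assert abs(rho([lam * a for a in xi1]) - lam * rho(xi1)) < 1e-9  # (iv)
print("axioms (i)-(iv) of Definition 2.1 verified on random samples")
```

Note that (iv) is asserted only for λ ≥ 0; for negative λ the identity generally fails, which is the asymmetry exploited in Section 4.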
Remark 2.6. It is worth pointing out that the proof of Proposition 2.5 follows directly from Theorem 3.6 in [5]. In the following, we denote by P the family of linear expectations dominated by ρ and by P0 the probability measure given in Proposition 2.5.

Definition 2.7. We say a sublinear expectation ρ is proper if all the elements of P are equivalent to P0.

2.2. Least squares estimator of a random variable

Let C be a sub-σ-algebra of F and C the set of all bounded C-measurable functions on (Ω, F). For a given random variable ξ ∈ F, our problem is to obtain its least squares estimator under the sublinear expectation ρ when we only know "the information" C. More precisely, we want to solve the following optimization problem.

Problem 2.8. For a given ξ ∈ F, find an η̂ ∈ C such that

ρ(ξ − η̂)^2 = inf_{η∈C} ρ(ξ − η)^2.  (2.1)
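When P is a singleton {P}, (2.1) is the classical least squares problem whose solution is E_P[ξ|C]. The following Python sketch is our own sanity check (four states and a partition of our choosing, not from the paper): it recovers the conditional expectation by brute-force minimisation of the quadratic loss over C-measurable candidates.

```python
# Four states with probabilities p; the sub-sigma-algebra C is generated by
# the partition {{0, 1}, {2, 3}}, so a C-measurable eta takes one value per block.
p = [0.1, 0.2, 0.3, 0.4]
xi = [2.0, 6.0, 1.0, 5.0]
blocks = [(0, 1), (2, 3)]

def loss(a, b):
    """E_P[(xi - eta)^2] for eta = a on the first block, b on the second."""
    eta = [a, a, b, b]
    return sum(pi * (x - e) ** 2 for pi, x, e in zip(p, xi, eta))

# Brute-force minimisation of the quadratic loss over a grid of block values.
grid = [i / 50 for i in range(401)]          # 0.00, 0.02, ..., 8.00
a_hat, b_hat = min(((a, b) for a in grid for b in grid),
                   key=lambda ab: loss(*ab))

# The conditional expectation E_P[xi | C]: probability-weighted mean per block.
cond = [sum(p[i] * xi[i] for i in blk) / sum(p[i] for i in blk)
        for blk in blocks]

assert abs(a_hat - cond[0]) < 0.02 and abs(b_hat - cond[1]) < 0.02
print("least squares estimator:", (a_hat, b_hat), "conditional expectation:", cond)
```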
The optimal solution η̂ of (2.1) is called the least squares estimator. If ρ degenerates to a linear expectation, then P contains only one probability measure P and ρ is the mathematical expectation under P; in this case it is well known that the least squares estimator η̂ is just the conditional expectation E_P[ξ|C].

3. Existence and uniqueness

In this section, we study the existence and uniqueness of the least squares estimator. For any ξ ∈ F, there exists a positive constant M such that sup|ξ| ≤ M, and we denote by G the set of all C-measurable functions bounded by M.

3.1. Existence

Lemma 3.1. Suppose that ξ ∈ F. Then

inf_{η∈C} ρ(ξ − η)^2 = inf_{η∈G} ρ(ξ − η)^2.

Proof. For any η ∈ C, let η̄ := ηI_{−M≤η≤M} + MI_{η>M} − MI_{η<−M}. Then η̄ ∈ G. For any P ∈ P,

E_P[(ξ − η)^2] − E_P[(ξ − η̄)^2] = E_P[(η̄ − η)(2ξ − η − η̄)]
≥ E_P[(M − η)(2ξ − 2M)I_{η>M}] + E_P[(−M − η)(2ξ + 2M)I_{η<−M}].

Since −M ≤ ξ ≤ M, we have E_P[(M − η)(2ξ − 2M)I_{η>M}] ≥ 0 and E_P[(−M − η)(2ξ + 2M)I_{η<−M}] ≥ 0.
Thus for any P ∈ P, E_P[(ξ − η)^2] ≥ E_P[(ξ − η̄)^2], which yields ρ(ξ − η)^2 ≥ ρ(ξ − η̄)^2 and

inf_{η∈C} ρ(ξ − η)^2 ≥ inf_{η∈G} ρ(ξ − η)^2.

On the other hand, since G ⊂ C, the reverse inequality is obviously true. □

Theorem 3.2. If the sublinear expectation ρ is continuous from above on F, then there exists an optimal solution η̂ ∈ G for Problem 2.8.

Proof. By Lemma 3.1, there exists a sequence {ηn}_{n≥1} ⊂ G such that

ρ(ξ − ηn)^2 < α + 1/2^n,

where α := inf_{η∈C} ρ(ξ − η)^2. By the Komlós theorem (see Theorem A.4), there exist a subsequence {η_{n_i}}_{i≥1} and a random variable η̂ such that

lim_{k→∞} (1/k) Σ_{i=1}^k η_{n_i} = η̂,  P0-a.s.

Since {ηn}_{n≥1} is bounded by M, we have η̂ ∈ G. By Lemma 2.2 and the convexity of x ↦ x^2,

ρ(ξ − η̂)^2 ≤ lim inf_{k→∞} (1/k) Σ_{i=1}^k ρ(ξ − η_{n_i})^2 ≤ lim_{k→∞} (α + 1/k) = α.

Thus η̂ is an optimal solution for Problem 2.8. □

Remark 3.3. Theorem 3.2 still holds if we only assume that there exists a probability measure P0 such that ρ is a sublinear expectation generated by a family of probability measures which are all absolutely continuous with respect to P0.

3.2. Uniqueness

In this subsection, we prove that if ρ is continuous from above on F and proper, then the optimal solution of Problem 2.8 is unique in the P0-a.s. sense.

Lemma 3.4. If the sublinear expectation ρ is continuous from above on F, then for a given ξ ∈ F we have

sup_{P∈P} inf_{η∈G} E_P[(ξ − η)^2] = max_{P∈P} inf_{η∈G} E_P[(ξ − η)^2].
Proof. By Proposition 2.5, there exists a probability measure P0 such that P ≪ P0 for all P ∈ P. Let f_P := dP/dP0 and

β := sup_{P∈P} inf_{η∈G} E_P[(ξ − η)^2] = sup_{P∈P} inf_{η∈G} E_{P0}[f_P(ξ − η)^2].

Take a sequence {f_{P_n}; P_n ∈ P}_{n≥1} such that

inf_{η∈G} E_{P0}[f_{P_n}(ξ − η)^2] ≥ β − 1/2^n.

By Theorem A.4, there exist a subsequence {f_{P_{n_i}}}_{i≥1} of {f_{P_n}}_{n≥1} and a random variable f_{P̂} ∈ L^1(P0) such that

lim_{k→∞} (1/k) Σ_{i=1}^k f_{P_{n_i}} = f_{P̂},  P0-a.s.

Let g_k := (1/k) Σ_{i=1}^k f_{P_{n_i}}; then g_k ∈ {f_P; P ∈ P} by the convexity of P. By Proposition 2.5, {f_P; P ∈ P} is σ(L^1(P0), L^∞(P0))-compact, so by the Dunford–Pettis theorem it is uniformly integrable. Thus {g_k}_{k≥1} is also uniformly integrable and ||g_k − f_{P̂}||_{L^1(P0)} → 0, which shows P̂ ∈ P.

On the other hand, for any η ∈ G and k ∈ N, we have

E_{P0}[g_k(ξ − η)^2] ≥ inf_{η̃∈G} E_{P0}[g_k(ξ − η̃)^2].

Then for any η ∈ G,

lim_{k→∞} E_{P0}[g_k(ξ − η)^2] ≥ lim sup_{k→∞} inf_{η̃∈G} E_{P0}[g_k(ξ − η̃)^2],

and therefore

inf_{η∈G} lim_{k→∞} E_{P0}[g_k(ξ − η)^2] ≥ lim sup_{k→∞} inf_{η̃∈G} E_{P0}[g_k(ξ − η̃)^2].

Since {g_k}_{k≥1} is uniformly integrable and ||(ξ − η)^2||_{L^∞} ≤ 4M^2, we have

inf_{η∈G} E_{P0}[f_{P̂}(ξ − η)^2] = inf_{η∈G} E_{P0}[lim_{k→∞} g_k(ξ − η)^2] = inf_{η∈G} lim_{k→∞} E_{P0}[g_k(ξ − η)^2].

Thus

inf_{η∈G} E_{P0}[f_{P̂}(ξ − η)^2] ≥ lim sup_{k→∞} (1/k) Σ_{i=1}^k inf_{η∈G} E_{P0}[f_{P_{n_i}}(ξ − η)^2] ≥ β.

Since P̂ ∈ P, we have

inf_{η∈G} E_{P̂}[(ξ − η)^2] = sup_{P∈P} inf_{η∈G} E_P[(ξ − η)^2]. □

Corollary 3.5. If the sublinear expectation ρ is continuous from above on F, then for a given ξ ∈ F we have

sup_{P∈P} inf_{η∈C} E_P[(ξ − η)^2] = max_{P∈P} inf_{η∈C} E_P[(ξ − η)^2].
Proof. Choose P̂ as in Lemma 3.4. By Lemma 3.1 and Lemma 3.4, we have

sup_{P∈P} inf_{η∈C} E_P[(ξ − η)^2] ≤ sup_{P∈P} inf_{η∈G} E_P[(ξ − η)^2] = inf_{η∈G} E_{P̂}[(ξ − η)^2] = inf_{η∈C} E_{P̂}[(ξ − η)^2].

On the other hand, the reverse inequality is obvious. Then

inf_{η∈C} E_{P̂}[(ξ − η)^2] = sup_{P∈P} inf_{η∈C} E_P[(ξ − η)^2].

Since P̂ ∈ P, we have

sup_{P∈P} inf_{η∈C} E_P[(ξ − η)^2] = max_{P∈P} inf_{η∈C} E_P[(ξ − η)^2]. □

Theorem 3.6. If the sublinear expectation ρ is continuous from above on F and proper, then for any given ξ ∈ F there exists a unique optimal solution of Problem 2.8.

Proof. The existence is proved in Theorem 3.2; we now prove the uniqueness. Since P is σ(L^1(P0), L^∞(P0))-compact, by Theorem A.5,

inf_{η∈C} sup_{P∈P} E_P[(ξ − η)^2] = sup_{P∈P} inf_{η∈C} E_P[(ξ − η)^2].

Since the optimal solution exists, by Corollary 3.5,

min_{η∈C} sup_{P∈P} E_P[(ξ − η)^2] = max_{P∈P} inf_{η∈C} E_P[(ξ − η)^2].

Let η̂ be an optimal solution and P̂ as in Corollary 3.5. By Theorem A.7, (η̂, P̂) is a saddle point, i.e.,

E_P[(ξ − η̂)^2] ≤ E_{P̂}[(ξ − η̂)^2] ≤ E_{P̂}[(ξ − η)^2],  ∀P ∈ P, η ∈ C.

This shows that if η̂ is an optimal solution, then there exists a P̂ ∈ P such that η̂ = E_{P̂}[ξ|C].

Suppose that there exist two optimal solutions η̂1 and η̂2, with accompanying probabilities P̂1 and P̂2 respectively; thus η̂1 = E_{P̂1}[ξ|C] and η̂2 = E_{P̂2}[ξ|C]. Let P^λ := λP̂1 + (1 − λ)P̂2, where λ ∈ (0, 1). Denote λ E_{P^λ}[dP̂1/dP^λ | C] by λ_{P̂1} and (1 − λ) E_{P^λ}[dP̂2/dP^λ | C] by λ_{P̂2}. It is easy to see that λ_{P̂1} + λ_{P̂2} = 1. Since

η̂1 = E_{P̂1}[ξ|C] = E_{P^λ}[ξ dP̂1/dP^λ | C] / E_{P^λ}[dP̂1/dP^λ | C]

and

η̂2 = E_{P̂2}[ξ|C] = E_{P^λ}[ξ dP̂2/dP^λ | C] / E_{P^λ}[dP̂2/dP^λ | C],

we have λ_{P̂1} η̂1 + λ_{P̂2} η̂2 = E_{P^λ}[ξ|C]. Moreover,

E_{P^λ}[(ξ − E_{P^λ}[ξ|C])^2]
= E_{P^λ}[(ξ − λ_{P̂1} η̂1 − λ_{P̂2} η̂2)^2]
= E_{P^λ}[(λ_{P̂1}(ξ − η̂1) + λ_{P̂2}(ξ − η̂2))^2]
= E_{P^λ}[λ_{P̂1}^2 (ξ − η̂1)^2 + λ_{P̂2}^2 (ξ − η̂2)^2 + 2 λ_{P̂1} λ_{P̂2} (ξ − η̂1)(ξ − η̂2)]
= E_{P^λ}[λ_{P̂1} (ξ − η̂1)^2 + λ_{P̂2} (ξ − η̂2)^2 − λ_{P̂1} λ_{P̂2} (η̂1 − η̂2)^2]
= λ E_{P̂1}[(ξ − η̂1)^2 − λ_{P̂2}(ξ − η̂1)^2 + λ_{P̂2}(ξ − η̂2)^2 − λ_{P̂1} λ_{P̂2} (η̂1 − η̂2)^2]
  + (1 − λ) E_{P̂2}[λ_{P̂1}(ξ − η̂1)^2 − λ_{P̂1}(ξ − η̂2)^2 + (ξ − η̂2)^2 − λ_{P̂1} λ_{P̂2} (η̂1 − η̂2)^2]
= λ E_{P̂1}[(ξ − η̂1)^2] + λ E_{P̂1}[λ_{P̂2}^2 (η̂1 − η̂2)^2] + (1 − λ) E_{P̂2}[(ξ − η̂2)^2] + (1 − λ) E_{P̂2}[λ_{P̂1}^2 (η̂1 − η̂2)^2]
≥ α,

where α := inf_{η∈C} ρ(ξ − η)^2. It follows that E_{P^λ}[(ξ − E_{P^λ}[ξ|C])^2] = α if and only if η̂1 = η̂2 P0-a.s. On the other hand, since (η̂1, P̂1) is a saddle point,

E_{P^λ}[(ξ − E_{P^λ}[ξ|C])^2] ≤ E_{P^λ}[(ξ − η̂1)^2] ≤ E_{P̂1}[(ξ − η̂1)^2] = α.

Then E_{P^λ}[(ξ − E_{P^λ}[ξ|C])^2] = α, and thus η̂1 = η̂2 P0-a.s. □
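On a finite state space, Theorem 3.6 and the minimax identity behind it can be checked directly. The following Python sketch is our own two-state illustration (the extreme measures and ξ are arbitrary choices): it finds the least squares estimator by grid search and verifies that inf_η sup_P and sup_P inf_η coincide.

```python
# Two states; the representing family P consists of all mixtures of two
# extreme measures, so sup over P is attained at an extreme point (E_P is
# affine in the mixing weight).  C is trivial, hence eta is a scalar.
extremes = [(0.25, 0.75), (0.75, 0.25)]
xi = (2.0, 8.0)

def rho_sq(eta):
    """rho((xi - eta)^2) = max over the extreme measures of E_P[(xi - eta)^2]."""
    return max(p1 * (xi[0] - eta) ** 2 + p2 * (xi[1] - eta) ** 2
               for p1, p2 in extremes)

grid = [i / 100 for i in range(1001)]        # eta in [0, 10]
eta_hat = min(grid, key=rho_sq)              # unique minimiser (Theorem 3.6)

# sup_P inf_eta: for a fixed measure the inner infimum is the variance of xi.
def variance(lam):
    p1 = lam * extremes[0][0] + (1 - lam) * extremes[1][0]
    m = p1 * xi[0] + (1 - p1) * xi[1]
    return p1 * (xi[0] - m) ** 2 + (1 - p1) * (xi[1] - m) ** 2

sup_inf = max(variance(l / 100) for l in range(101))

assert eta_hat == 5.0                        # the least squares estimator
assert rho_sq(eta_hat) == 9.0                # inf_eta sup_P
assert abs(sup_inf - 9.0) < 1e-9             # equals sup_P inf_eta (Theorem A.5)
print(eta_hat, rho_sq(eta_hat), sup_inf)
```

Here sup_P inf_η is the largest variance in the family, attained at the mixture with weight 1/2 on each state, in line with the saddle point (η̂, P̂) constructed in the proof: η̂ = E_{P̂}[ξ|C].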
4. Properties of the least squares estimator

In this section, we first give some basic properties of the least squares estimator; then we explore its relationship with the conditional coherent risk measure and the conditional g-expectation. For a given ξ ∈ F, we write ρ(ξ|C) for the least squares estimator with respect to C instead of η̂. The following properties hold.

Proposition 4.1. If the sublinear expectation ρ is continuous from above on F and proper, then for any ξ ∈ F we have:

i) If C1 ≤ ξ(ω) ≤ C2 for two constants C1 and C2, then C1 ≤ ρ(ξ|C) ≤ C2.
ii) ρ(λξ|C) = λρ(ξ|C) for λ ∈ R.
iii) For each η0 ∈ C, ρ(ξ + η0|C) = ρ(ξ|C) + η0.
iv) If, under each P ∈ P, ξ is independent of the sub-σ-algebra C, then ρ(ξ|C) is a constant.

Proof. i) If C1 ≤ ξ(ω) ≤ C2, then C1 ≤ E_P[ξ|C] ≤ C2 for any P ∈ P. Since ρ(ξ|C) ∈ {E_P[ξ|C]; P ∈ P} (see the proof of Theorem 3.6), ρ(ξ|C) lies in [C1, C2].

ii) When λ = 0, the statement is obvious. When λ ≠ 0,

λ^2 ρ(ξ − ρ(λξ|C)/λ)^2 = ρ(λξ − ρ(λξ|C))^2 = inf_{η∈C} ρ(λξ − η)^2 = λ^2 inf_{η∈C} ρ(ξ − η)^2,

which yields

ρ(ξ − ρ(λξ|C)/λ)^2 = inf_{η∈C} ρ(ξ − η)^2.

Thus ρ(λξ|C)/λ = ρ(ξ|C) by uniqueness.
iii) Note that

ρ(ξ + η0 − (η0 + ρ(ξ|C)))^2 = ρ(ξ − ρ(ξ|C))^2 = inf_{η∈C} ρ(ξ − η)^2 = inf_{η∈C} ρ(ξ + η0 − η)^2.

By the uniqueness of the least squares estimator, we have ρ(ξ + η0|C) = η0 + ρ(ξ|C).

iv) If, under each P ∈ P, ξ is independent of the sub-σ-algebra C, then E_P[ξ|C] is a constant for each P ∈ P. Since ρ(ξ|C) ∈ {E_P[ξ|C]; P ∈ P}, ρ(ξ|C) is also a constant. □

The coherent risk measure was introduced by Artzner et al. [1] and the g-expectation by Peng [10]. The conditional coherent risk measure and some special conditional g-expectations can be defined by ess sup_{P∈P} E_P[ξ|C]. In the next two examples, we show that the least squares estimator is different from the conditional coherent risk measure and the conditional g-expectation.

Example 4.2. Let Ω = {ω1, ω2}, F = {∅, {ω1}, {ω2}, Ω} and C = {∅, Ω}. Let P1 = (1/4)I_{ω1} + (3/4)I_{ω2}, P2 = (3/4)I_{ω1} + (1/4)I_{ω2} and P = {λP1 + (1 − λ)P2; λ ∈ [0, 1]}. For each ξ ∈ F, define ρ(ξ) = sup_{P∈P} E_P[ξ]. Take ξ = 2I_{ω1} + 8I_{ω2}. It is easy to check that

sup_{P∈P} E_P[ξ] = 6 + 1/2,  while  ρ(ξ|C) = E_{P̂}[ξ|C] = 5,

where P̂ = (1/2)I_{ω1} + (1/2)I_{ω2}.

Example 4.3. Let W(·) be a standard 1-dimensional Brownian motion defined on a complete probability space (Ω, F, P0). The information structure is given by a filtration {F_t}_{0≤t≤T}, which is generated by W(·) and augmented by all P0-null sets. M^2(0, T; R) denotes the space of all F_t-progressively measurable processes y_t such that E_{P0} ∫_0^T |y_t|^2 dt < ∞. Let us consider the g-expectation defined by the following BSDE:

y_t = ξ + ∫_t^T |z_s| ds − ∫_t^T z_s dW(s),  (4.1)

where ξ is a bounded F_T-measurable function. Since there exists a unique adapted pair (y, z) solving (4.1), the solution y_t is defined as the g-expectation with respect to F_t, denoted by E_{|z|}(ξ|F_t). Consider the following linear case:

y_t = ξ + ∫_t^T μ_s z_s ds − ∫_t^T z_s dW(s),  (4.2)

where |μ_s| ≤ 1 P0-a.s. By the Girsanov transform, there exists a probability P^μ such that the solution {y_t}_{0≤t≤T} of (4.2) is a martingale under P^μ. Let P := {P^μ : |μ_s| ≤ 1 P0-a.s.}. By Theorem 2.1 in [8],

E_{|z|}(ξ) = sup_{P^μ∈P} E_{P^μ}[ξ],  ∀ξ ∈ F_T,

and

E_{|z|}(ξ|F_t) = ess sup_{P^μ∈P} E_{P^μ}[ξ|F_t].
Since E_{|z|}(·) is a sublinear expectation, denote the corresponding least squares estimator by ρ_{|z|}(ξ|F_t). Then ρ_{|z|}(ξ|F_t) cannot coincide with E_{|z|}(ξ|F_t) for all bounded ξ ∈ F_T. Otherwise, if ρ_{|z|}(ξ|F_t) = E_{|z|}(ξ|F_t) for all bounded ξ ∈ F_T, then by property ii) in Proposition 4.1 we would have

ess sup_{P^μ∈P} E_{P^μ}[ξ|F_t] = ρ_{|z|}(ξ|F_t) = −ρ_{|z|}(−ξ|F_t) = ess inf_{P^μ∈P} E_{P^μ}[ξ|F_t].

Since the set P contains more than one probability measure, the above equation cannot hold for all bounded ξ ∈ F_T.

In the following, for simplicity, we denote ess sup_{P∈P} E_P[ξ|C] by η_ess. We first prove that η_ess is the optimal solution of a constrained least squares optimization problem.

Definition 4.4. A sublinear expectation ρ is called strictly comparable if for any ξ1, ξ2 ∈ F satisfying ξ1 > ξ2 P0-a.s., we have ρ(ξ1) > ρ(ξ2).

The notion of the m-stable property was proposed in [1] (see Definition B.2). One result about this property is: if P is m-stable, then the sublinear expectation ρ is time consistent (see Proposition B.3).

Proposition 4.5. Suppose that a sublinear expectation ρ is continuous from above and strictly comparable, and that the representation set P is m-stable. Then for ξ ∈ F, η_ess is the unique solution of the following optimization problem:

inf_{η∈C} sup_{η̃∈C+} ρ[(ξ − η)^2 + η̃(ξ − η)],  (4.3)

where C+ denotes the set of all nonnegative elements of C.

Proof. We first show that if an optimal solution η̂ of (4.3) exists, it lies in B, where B := {η ∈ C; η ≥ η_ess P0-a.s.}. Since

inf_{η∈C} sup_{η̃∈C+} ρ[(ξ − η)^2 + η̃(ξ − η)] ≤ sup_{η̃∈C+} ρ[(ξ − η_ess)^2 + η̃(ξ − η_ess)]
≤ ρ[(ξ − η_ess)^2] + sup_{η̃∈C+} ρ[η̃(ξ − η_ess)]
= ρ[(ξ − η_ess)^2] + sup_{η̃∈C+} ρ[ess sup_{P∈P} E_P[η̃(ξ − η_ess)|C]]
= ρ[(ξ − η_ess)^2] < ∞,

the value of our problem is finite. Then

sup_{η̃∈C+} ρ[η̃(ξ − η̂)] ≤ sup_{η̃∈C+} ρ[(ξ − η̂)^2 + η̃(ξ − η̂)] < ∞.

On the other hand, since the representation set P is m-stable, we have

sup_{η̃∈C+} ρ[η̃(ξ − η̂)] = sup_{η̃∈C+} ρ[η̃(η_ess − η̂)].

If A := {ω; η̂ < η_ess} is not a P0-null set, then, as ρ is strictly comparable, we can choose η̃ to make ρ[η̃(ξ − η̂)] larger than any given real number. Thus P0(A) = 0 and η̂ ≥ η_ess P0-a.s.
For any η ∈ B and η̃ ∈ C+, we have

ρ[(ξ − η)^2 + η̃(ξ − η)] − ρ[(ξ − η)^2] ≤ ρ[η̃(ξ − η)] ≤ 0.

Thus for any η ∈ B,

sup_{η̃∈C+} ρ[(ξ − η)^2 + η̃(ξ − η)] ≤ ρ[(ξ − η)^2].

On the other hand,

sup_{η̃∈C+} ρ[(ξ − η)^2 + η̃(ξ − η)] ≥ ρ[(ξ − η)^2 + 0·(ξ − η)] = ρ[(ξ − η)^2].

Thus for any η ∈ B,

ρ[(ξ − η)^2] = sup_{η̃∈C+} ρ[(ξ − η)^2 + η̃(ξ − η)]

and

inf_{η∈B} sup_{η̃∈C+} ρ[(ξ − η)^2 + η̃(ξ − η)] = inf_{η∈B} ρ[(ξ − η)^2].

For any η ∈ B, since

ρ[(ξ − η)^2] = ρ[(ξ − η_ess)^2 + (η_ess − η)^2 − 2(η − η_ess)(ξ − η_ess)]
≥ ρ[(ξ − η_ess)^2 + (η_ess − η)^2] − 2ρ[(η − η_ess)(ξ − η_ess)]
≥ ρ[(ξ − η_ess)^2],

η_ess is the least squares estimator over B. On the other hand, for any η ∉ B, sup_{η̃∈C+} ρ[(ξ − η)^2 + η̃(ξ − η)] is equal to ∞. Thus η_ess is also the optimal solution of (4.3). The uniqueness follows from the strict comparability of ρ. □

Proposition 4.5 is equivalent to saying that η_ess is the unique solution of the problem

inf_{η∈C} ρ[(ξ − η)^2]  subject to  ρ[(η_ess − η)^+] = 0.

We now give a necessary and sufficient condition for η_ess to be the least squares estimator.

Theorem 4.6. Under the assumptions of Proposition 4.5, for a given ξ ∈ F, η_ess is the optimal solution of Problem 2.8 if and only if

inf_{η∈C} sup_{η̃∈C+} ρ[(ξ − η)^2 + η̃(ξ − η)] = sup_{η̃∈C+} inf_{η∈C} ρ[(ξ − η)^2 + η̃(ξ − η)].

Proof. If η_ess is the optimal solution of Problem 2.8, then

sup_{η̃∈C+} inf_{η∈C} ρ[(ξ − η)^2 + η̃(ξ − η)] ≥ inf_{η∈C} ρ[(ξ − η)^2] = ρ[(ξ − η_ess)^2].

On the other hand, by Proposition 4.5,
inf_{η∈C} sup_{η̃∈C+} ρ[(ξ − η)^2 + η̃(ξ − η)] = sup_{η̃∈C+} ρ[(ξ − η_ess)^2 + η̃(ξ − η_ess)]
≤ ρ[(ξ − η_ess)^2] + sup_{η̃∈C+} ρ[η̃(ξ − η_ess)] = ρ[(ξ − η_ess)^2].

Then

inf_{η∈C} sup_{η̃∈C+} ρ[(ξ − η)^2 + η̃(ξ − η)] ≤ sup_{η̃∈C+} inf_{η∈C} ρ[(ξ − η)^2 + η̃(ξ − η)].

Since

inf_{η∈C} sup_{η̃∈C+} ρ[(ξ − η)^2 + η̃(ξ − η)] ≥ sup_{η̃∈C+} inf_{η∈C} ρ[(ξ − η)^2 + η̃(ξ − η)]

is obvious, we get

inf_{η∈C} sup_{η̃∈C+} ρ[(ξ − η)^2 + η̃(ξ − η)] = sup_{η̃∈C+} inf_{η∈C} ρ[(ξ − η)^2 + η̃(ξ − η)].

Conversely, for any η̃ ∈ C+,

inf_{η∈C} ρ[(ξ − η)^2 + η̃(ξ − η)] = inf_{η∈C} ρ[(ξ + η̃/2 − η)^2 − η̃^2/4] ≤ inf_{η∈C} ρ[(ξ + η̃/2 − η)^2] = inf_{η∈C} ρ[(ξ − η)^2],

which implies

sup_{η̃∈C+} inf_{η∈C} ρ[(ξ − η)^2 + η̃(ξ − η)] ≤ inf_{η∈C} ρ[(ξ − η)^2].

Since

sup_{η̃∈C+} inf_{η∈C} ρ[(ξ − η)^2 + η̃(ξ − η)] ≥ inf_{η∈C} ρ[(ξ − η)^2 + 0·(ξ − η)] = inf_{η∈C} ρ[(ξ − η)^2]

is obvious, we get

sup_{η̃∈C+} inf_{η∈C} ρ[(ξ − η)^2 + η̃(ξ − η)] = inf_{η∈C} ρ[(ξ − η)^2].

This shows that sup_{η̃∈C+} inf_{η∈C} ρ[(ξ − η)^2 + η̃(ξ − η)] attains its supremum at η̃ = 0. By Proposition 4.5, we also know that inf_{η∈C} sup_{η̃∈C+} ρ[(ξ − η)^2 + η̃(ξ − η)] attains its infimum at η = η_ess. Since

inf_{η∈C} sup_{η̃∈C+} ρ[(ξ − η)^2 + η̃(ξ − η)] = sup_{η̃∈C+} inf_{η∈C} ρ[(ξ − η)^2 + η̃(ξ − η)],

it follows that

min_{η∈C} sup_{η̃∈C+} ρ[(ξ − η)^2 + η̃(ξ − η)] = max_{η̃∈C+} inf_{η∈C} ρ[(ξ − η)^2 + η̃(ξ − η)].

By Theorem A.7, (η_ess, 0) is a saddle point, i.e., for any η ∈ C and η̃ ∈ C+,

ρ[(ξ − η_ess)^2 + η̃(ξ − η_ess)] ≤ ρ[(ξ − η_ess)^2] ≤ ρ[(ξ − η)^2].

The second inequality means that η_ess is the optimal solution of Problem 2.8. □
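In the setting of Example 4.2 (two states, trivial C, P the mixtures of P1 and P2), η_ess = 6 + 1/2 while the least squares estimator is 5, so by Theorem 4.6 the minimax equality must fail there. The following Python sketch is our own numerical check (the unbounded sets C and C+ are truncated to grids): sup_{η̃} inf_η is attained at η̃ = 0 with value 9, while at η = 5 the inner supremum over η̃ grows without bound.

```python
# Example 4.2's setting: two states, C trivial, P = mixtures of P1 and P2.
extremes = [(0.25, 0.75), (0.75, 0.25)]
xi = (2.0, 8.0)

def rho(f):
    """rho(f) = max over the extreme measures of E_P[f], f given pointwise."""
    return max(p1 * f[0] + p2 * f[1] for p1, p2 in extremes)

def g(eta, t):
    """rho[(xi - eta)^2 + t (xi - eta)] for scalars eta and t >= 0."""
    return rho(tuple((x - eta) ** 2 + t * (x - eta) for x in xi))

etas = [i / 100 for i in range(1001)]        # eta-grid on [0, 10]
ts = [j / 10 for j in range(101)]            # t-grid on [0, 10] (C+ truncated)

# sup_t inf_eta is attained at t = 0 and equals inf_eta rho[(xi - eta)^2] = 9.
sup_inf = max(min(g(e, t) for e in etas) for t in ts)
assert abs(sup_inf - 9.0) < 1e-6

# eta_ess = rho(xi) = 6.5 here, but the least squares estimator is 5; at
# eta = 5 the coefficient of t is rho[xi - 5] = 1.5 > 0, so the inner sup
# over t is unbounded and inf_eta sup_t > sup_t inf_eta; by Theorem 4.6,
# eta_ess cannot be the least squares estimator.
assert rho(xi) == 6.5
assert g(5.0, 100.0) > 100.0
print(sup_inf, rho(xi), g(5.0, 100.0))
```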
5. Characterizations of the least squares estimator

In this section, we obtain several characterizations of the least squares estimator.

5.1. The orthogonal projection

If P contains only one probability P, then by classical probability theory the least squares estimator η̂ is just the conditional expectation E_P[ξ|C], and it is well known that a conditional expectation is an orthogonal projection: for any η ∈ C,

E_P[(ξ − η̂)η] = E_P[(ξ − E_P[ξ|C])η] = 0.

Does this property still hold when ρ is a sublinear expectation? Note that for any η ∈ C,

ρ[(ξ − η̂)η] = sup_{P∈P} E_P[(ξ − η̂)η] = sup_{P∈P} E_P[(ξ − E_{P̂}[ξ|C])η] ≥ 0,

where P̂ is the accompanying probability of η̂ as in the proof of Theorem 3.6. But we notice that

inf_{η∈C} ρ[(ξ − η̂)η] = 0.

This motivates us to introduce the following definition. For any given ξ ∈ F, define f : C → R by

f(η̃) = inf_{η∈C} ρ[(ξ − η̃)η],

and denote the kernel of f by ker(f) := {η̃ ∈ C | f(η̃) = 0}. We have proved that the least squares estimator is an element of the set {E_P[ξ|C]; P ∈ P}. In the following, we show that this set can be described by the kernel of f.

Lemma 5.1. If ρ is a sublinear expectation continuous from above on F, then for any given ξ ∈ F, ker(f) = {E_P[ξ|C]; P ∈ P}.

Proof. For any P ∈ P and η ∈ C, considering E_P[ξ|C], we have

ρ[(ξ − E_P[ξ|C])η] ≥ E_P[(ξ − E_P[ξ|C])η] = 0.

Then inf_{η∈C} ρ[(ξ − E_P[ξ|C])η] ≥ 0. It is obvious that

inf_{η∈C} ρ[(ξ − E_P[ξ|C])η] ≤ ρ[(ξ − E_P[ξ|C]) · 0] = 0,

which leads to inf_{η∈C} ρ[(ξ − E_P[ξ|C])η] = 0 for any P ∈ P. Thus {E_P[ξ|C]; P ∈ P} ⊂ ker(f).
On the other hand, for any η̃ ∈ ker(f), since C is a convex set and ρ is a sublinear expectation continuous from above, by Theorem A.6 there exists a probability P̃ ∈ P such that

inf_{η∈C} E_{P̃}[(ξ − η̃)η] = inf_{η∈C} ρ[(ξ − η̃)η] = 0.

If η̃ ≠ E_{P̃}[ξ|C], then it is easy to find an η′ ∈ C such that E_{P̃}[(ξ − η̃)η′] < 0, a contradiction. Thus η̃ = E_{P̃}[ξ|C], which gives ker(f) ⊂ {E_P[ξ|C]; P ∈ P}. □

If P satisfies the m-stable property, we obtain the following result.

Theorem 5.2. If ρ is a sublinear expectation continuous from above on F and the corresponding P is m-stable, then for a given ξ ∈ F, ker(f) is just the set

B := {η̃ ∈ C | ess inf_{P∈P} E_P[ξ|C] ≤ η̃ ≤ ess sup_{P∈P} E_P[ξ|C]}.

Proof. By Lemma 5.1, ker(f) is a subset of B, so we only need to prove B ⊂ ker(f). Since P is m-stable, for any η ∈ C and η̃ ∈ B, we have

ρ[(ξ − η̃)η] = ρ[ess sup_{P∈P} E_P[(ξ − η̃)η|C]] = ρ[η^+ (ess sup_{P∈P} E_P[ξ|C] − η̃) − η^− (ess inf_{P∈P} E_P[ξ|C] − η̃)],

where η^+ := η ∨ 0 and η^− := −(η ∧ 0). It follows that for any η̃ ∈ B and η ∈ C, ρ[(ξ − η̃)η] ≥ 0. Thus for any η̃ ∈ B,

inf_{η∈C} ρ[(ξ − η̃)η] ≥ 0.

Since

inf_{η∈C} ρ[(ξ − η̃)η] ≤ ρ[(ξ − η̃) · 0] = 0,

we get inf_{η∈C} ρ[(ξ − η̃)η] = 0 for any η̃ ∈ B, which implies that B ⊂ ker(f). □

5.2. A sufficient and necessary condition

In this subsection, we give a sufficient and necessary condition for a candidate to be the least squares estimator. Notably, here we do not need to assume that ρ is continuous from above on F.

Lemma 5.3. For a given ξ ∈ F, if η̂ is an optimal solution of Problem 2.8, then

ρ[(ξ − η)(ξ − η̂)] ≥ ρ(ξ − η̂)^2,  ∀η ∈ C.

Proof. For any η ∈ C, define f : [0, 1] → R by

f(λ) = λ^2 ρ(ξ − η)^2 + (1 − λ)^2 ρ(ξ − η̂)^2 + 2λ(1 − λ)ρ[(ξ − η)(ξ − η̂)].
920
It is easy to check that f (λ) ≥ ρ[ξ − (λη + (1 − λ)ˆ η )]2 ≥ ρ(ξ − ηˆ)2 , which implies that f (λ) attains the minimum on [0, 1] when λ = 0. Then for any η ∈ C, f (λ)|0+ = −2ρ(ξ − ηˆ)2 + 2ρ[(ξ − η)(ξ − ηˆ)] ≥ 0. Thus ρ[(ξ − η)(ξ − ηˆ)] ≥ ρ(ξ − ηˆ)2 ,
∀η ∈ C.
2
Lemma 5.4. For any ξ1 , ξ2 ∈ F such that ρ(|ξ1 |p ) > 0 and ρ(|ξ2 |q ) > 0, we have 1
1
ρ(|ξ1 ξ2 |) ≤ (ρ(|ξ1 |p )) p (ρ(|ξ2 |q )) q , where 1 < p, q < ∞ with
1 p
+
1 q
= 1.
Proof. Let X= Since |XY | ≤
|X|p p
+
|Y |q q ,
ξ1 (ρ(|ξ1 |p ))
1 p
,
Y =
ξ2 1
(ρ(|ξ2 |q )) q
.
then ρ(|XY |) ≤ ρ(
|Y |q |X|p |Y |q |X|p + ) ≤ ρ( ) + ρ( ) = 1. p q p q
Thus 1
1
ρ(|ξ1 ξ2 |) ≤ (ρ(|ξ1 |p )) p (ρ(|ξ2 |q )) q .
2
Remark 5.5. If ρ can be represented by a family of probability measures, then the condition ρ(|ξ1 |p ) > 0 and ρ(|ξ2 |q ) > 0 can be abandoned. Theorem 5.6. Suppose inf ρ(ξ − η)2 > 0. For a given ξ ∈ F, ηˆ is the optimal solution of Problem 2.8 if and η∈C
only if it is the bounded C-measurable solution of the following equation inf ρ[(ξ − ηˆ)(ξ − η)] = ρ(ξ − ηˆ)2 .
η∈C
Proof. ⇒ Since ηˆ is the optimal solution of Problem 2.8, by Lemma 5.3, inf ρ[(ξ − ηˆ)(ξ − η)] ≥ ρ(ξ − ηˆ)2 .
η∈C
It is obvious that inf ρ[(ξ − ηˆ)(ξ − η)] ≤ ρ(ξ − ηˆ)2 .
η∈C
Then ηˆ is the solution of (5.1).
(5.1)
C. Sun, S. Ji / J. Math. Anal. Appl. 451 (2017) 906–923
921
⇐ If ηˆ ∈ C satisfying equation (5.1), by Lemma 5.4, we have ρ(ξ − ηˆ)2 = inf ρ[(ξ − ηˆ)(ξ − η)] η∈C
1
1
≤ inf (ρ(ξ − ηˆ)2 ) 2 (ρ(ξ − η)2 ) 2 η∈C
1
1
= (ρ(ξ − ηˆ)2 ) 2 [ inf ρ(ξ − η)2 ] 2 . η∈C
Thus ρ(ξ − ηˆ)2 ≤ inf ρ(ξ − η)2 . η∈C
2
Remark 5.7. If ρ is a linear expectation generated by probability measure P , then EP [EP (ξ|C)η] = EP [ξη],
∀η ∈ C.
This means EP (ξ|C) not only satisfies (5.1) but also satisfies the following equation sup EP [(ξ − ηˆ)(ξ − η)] = EP (ξ − ηˆ)2 . η∈C
Remark 5.8. If ρ can be represented by a family of probability measures, then the condition inf ρ(ξ −η)2 > 0 η∈C
in Theorem 5.6 can be abandoned since Lemma 5.4 still holds for either ρ(|ξ1 |p ) = 0 or ρ(|ξ2 |q ) = 0. Appendix A. Some basic results In this section, some results used in our paper are restated. Theorem A.1. If ρ is a sublinear expectation and P is the family of all linear expectations dominated by ρ, then ρ(ξ) = max EP [ξ], ∀ξ ∈ F. P ∈P
Proof. By Corollary 2.4 of Chapter I in [13], for any ξ ∈ F, there exists a linear expectation L such that L ≤ ρ and L(ξ) = ρ(ξ). If we take all linear expectations dominated by ρ, then ρ(ξ) = max EP [ξ], ∀ξ ∈ F. 2 P ∈P
Theorem A.2. Let F be a normed space and ρ be a sublinear expectation from F to R dominated by some scalar multiple of the norm of F. Then {x∗ ∈ F∗ : x∗ ≤ ρ
on
F}
is
σ(F∗ , F) − compact.
Proof. See Theorem 4.2 of Chapter 1 in [13]. 2 We denote F∗c as the set of linear expectations generated by countably additive measures. Theorem A.3. If P is a subset of F∗c which is σ(F∗ , F) − compact, then there exists a nonnegative P0 ∈ P such that the measures in P are absolutely P0 -continuous, i.e., if P0 (A) = 0, then supP ∈P P (A) = 0. Proof. See Corollary 1.2 in [15].
2
C. Sun, S. Ji / J. Math. Anal. Appl. 451 (2017) 906–923
922
Theorem A.4 (Komlós theorem). Let {ξn }n≥1 be a sequence of random variables bounded in L1 (P0 ). Then there exist a subsequence {ξni }i≥1 and a random variable ξ ∈ L1 (P0 ) such that k 1 lim ξni = ξ n→∞ k i=1
P0 − a.s.
Proof. See Theorem A.3.4 in [11]. □

Theorem A.5. Let $X$ be a normed vector space, $A \subset X$ be a convex set which is compact for the weak topology $\sigma(X, X^*)$ and $B$ be a nonempty convex subset of a vector space $Y$. Let also $f : A \times B \to \mathbb{R}$ be a function with the property that

(i) $x \mapsto f(x, y)$ is concave and continuous on $X$ for each $y \in B$;
(ii) $y \mapsto f(x, y)$ is convex on $Y$ for each $x \in A$.

Then
$$\sup_{x\in A}\inf_{y\in B} f(x, y) = \inf_{y\in B}\sup_{x\in A} f(x, y).$$
Proof. See Theorem B.1.2 of Appendix B in [11]. □

Theorem A.6 (Mazur–Orlicz theorem). Let $\mathcal{X}$ be a nonzero vector space, $\rho : \mathcal{X} \to \mathbb{R}$ be sublinear and $D$ be a nonempty convex subset of $\mathcal{X}$. Then there exists a linear functional $L$ on $\mathcal{X}$ such that $L$ is dominated by $\rho$, i.e., $L(X) \le \rho(X)$ for all $X \in \mathcal{X}$, and
$$\inf_{X\in D} L(X) = \inf_{X\in D} \rho(X).$$
Proof. See Lemma 1.6 of Chapter 1 in [13]. □

Theorem A.7. Let $A$ and $B$ be two nonempty sets and $f$ be a function from $A \times B$ to $\mathbb{R} \cup \{+\infty\}$. Then $f$ has a saddle point, i.e., there exists $(\bar x, \bar y) \in A \times B$ such that
$$f(x, \bar y) \le f(\bar x, \bar y) \le f(\bar x, y), \quad \forall x \in A,\ \forall y \in B,$$
if and only if
$$\inf_{y\in B} f(\bar x, y) = \max_{x\in A}\inf_{y\in B} f(x, y) = \min_{y\in B}\sup_{x\in A} f(x, y) = \sup_{x\in A} f(x, \bar y).$$
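The equivalence in Theorem A.7 is easy to see in the finite case. The following sketch uses a hypothetical $2\times 2$ matrix game (chosen so that a pure-strategy saddle point exists) to check that the common value of the maximin and minimax identifies the saddle point.

```python
import numpy as np

# Hypothetical payoff f(x, y) = A[x, y]; the matrix is chosen to admit a
# pure-strategy saddle point at (x_bar, y_bar) = (1, 1).
A = np.array([[3.0, 1.0],
              [2.0, 2.0]])

maximin = max(min(row) for row in A)      # sup_x inf_y f(x, y)
minimax = min(max(col) for col in A.T)    # inf_y sup_x f(x, y)
assert maximin == minimax == 2.0          # the common value of Theorem A.7

# Saddle-point inequalities: f(x, y_bar) <= f(x_bar, y_bar) <= f(x_bar, y).
x_bar, y_bar = 1, 1
assert all(A[x, y_bar] <= A[x_bar, y_bar] for x in range(2))
assert all(A[x_bar, y_bar] <= A[x_bar, y] for y in range(2))
```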
Proof. See Theorem 2.10.1 of Chapter 2 in [14]. □

Appendix B. Some results about coherent risk measures

In this appendix, we give some basic definitions and results about coherent risk measures; see [1,2,5,7] for more details. Note that, to keep the statements of the entire paper consistent, the expectation used in our paper is sublinear, which differs from the coherent risk measure defined in [1,2] or [5], where it is superlinear. Thus it is represented as $\sup_{P\in\mathcal{P}} E_P$ instead of $\inf_{P\in\mathcal{P}} E_P$, and the conditional expectation is taken as $\operatorname{ess\,sup}_{P\in\mathcal{P}} E_P[\,\cdot\,|\mathcal{C}]$ instead of $\operatorname{ess\,inf}_{P\in\mathcal{P}} E_P[\,\cdot\,|\mathcal{C}]$. Though the definitions differ, the methods and results are not affected. For a given probability space $(\Omega, \mathcal{F}, P_0)$, let $\{\mathcal{F}_n\}_{n\ge 1}$ be a filtration satisfying $\mathcal{F} := \vee_{n=1}^{\infty} \mathcal{F}_n$.
Definition B.1. A map $\pi : L^\infty(P_0) \to \mathbb{R}$ is called a coherent risk measure if it satisfies the following properties:

i) Monotonicity: for all $X$ and $Y$, if $X \ge Y$, then $\pi(X) \ge \pi(Y)$.
ii) Translation invariance: for all $X$ and every constant $\lambda$, $\pi(\lambda + X) = \lambda + \pi(X)$.
iii) Positive homogeneity: for all $X$ and every $\lambda \ge 0$, $\pi(\lambda X) = \lambda \pi(X)$.
iv) Subadditivity: for all $X$ and $Y$, $\pi(X + Y) \le \pi(X) + \pi(Y)$.
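A finite-state sketch may make the four axioms concrete: $\pi(X) = \max_P E_P[X]$ over any family of probability vectors satisfies all of them. The three-state space and the particular family below are invented for illustration.

```python
import numpy as np

# Hypothetical family of probability vectors on a 3-state space;
# pi(X) = max_P E_P[X] is then a coherent risk measure (in the sublinear
# convention of this paper).
family = np.array([[0.25, 0.25, 0.5],
                   [0.6, 0.2, 0.2]])

def pi(X):
    return max(p @ X for p in family)

X = np.array([1.0, -1.0, 2.0])
Y = np.array([0.0, -1.0, 1.0])   # Y <= X pointwise

assert pi(X) >= pi(Y)                               # i) monotonicity
assert abs(pi(X + 3.0) - (3.0 + pi(X))) < 1e-12     # ii) translation invariance
assert abs(pi(2.0 * X) - 2.0 * pi(X)) < 1e-12       # iii) positive homogeneity
assert pi(X + Y) <= pi(X) + pi(Y) + 1e-12           # iv) subadditivity
```

Translation invariance holds because each $p$ in the family sums to one, so $E_P[X + \lambda] = E_P[X] + \lambda$ measure by measure.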
Definition B.2 (Stability). We say that the set $\mathcal{P}$ is stable if, for elements $Q^0, Q \in \mathcal{P}^e$ with associated martingales $Z^0_n, Z_n$ and for each stopping time $\tau$, the martingale $L$ defined as $L_n = Z^0_n$ for $n \le \tau$ and $L_n = Z^0_\tau \frac{Z_n}{Z_\tau}$ for $n \ge \tau$ defines an element of $\mathcal{P}$, where $\mathcal{P}^e$ denotes the set of elements in $\mathcal{P}$ which are equivalent to $P_0$ and $Z^Q_n := E_{P_0}[\frac{dQ}{dP_0}|\mathcal{F}_n]$.

Proposition B.3. Let $\Phi_\tau(\xi) = \operatorname{ess\,inf}_{Q\in\mathcal{P}^e} E_Q[\xi|\mathcal{F}_\tau]$. Then the following are equivalent:

i) Stability of the set $\mathcal{P}$.
ii) Recursivity: for each bounded $\mathcal{F}_N$-random variable $\xi$, the family $\{\Phi_\nu(\xi) \mid \nu \text{ is a stopping time}\}$ satisfies: for every two stopping times $\sigma \le \tau$, we have $\Phi_\sigma(\xi) = \Phi_\sigma(\Phi_\tau(\xi))$.

References

[1] P. Artzner, F. Delbaen, J.M. Eber, D. Heath, Coherent measures of risk, Math. Finance 9 (3) (1999) 203–228.
[2] P. Artzner, F. Delbaen, J.M. Eber, D. Heath, H. Ku, Coherent multiperiod risk adjusted values and Bellman's principle, Ann. Oper. Res. 152 (2007) 5–22.
[3] A. Bensoussan, Stochastic Control of Partially Observable Systems, Cambridge Univ. Press, 1992.
[4] M. Davis, Lectures on Stochastic Control and Nonlinear Filtering, Tata Institute of Fundamental Research, vol. 75, Springer-Verlag, 1984.
[5] F. Delbaen, Coherent risk measures on general probability spaces, in: Advances in Finance and Stochastics, Springer-Verlag, Berlin, Heidelberg, New York, 2002.
[6] F. Delbaen, S. Peng, E. Rosazza Gianin, Representation of the penalty term of dynamic concave utilities, Finance Stoch. 14 (2010) 449–472.
[7] H. Föllmer, A. Schied, Stochastic Finance. An Introduction in Discrete Time, Walter de Gruyter, Berlin, New York, 2002.
[8] L. Jiang, Z. Chen, A result on the probability measures dominated by g-expectation, Acta Math. Appl. Sin. Engl. Ser. 20 (2004) 507–512.
[9] G. Kallianpur, Stochastic Filtering Theory, Springer-Verlag, 1980.
[10] S. Peng, BSDE and related g-expectations, in: Backward Stochastic Differential Equations, vol. 364, Pitman, 1997, pp. 141–159.
[11] H. Pham, Continuous-Time Stochastic Control and Optimization with Financial Applications, Springer-Verlag, Berlin, Heidelberg, 2009.
[12] E. Rosazza Gianin, Risk measures via g-expectations, Insurance Math. Econom. 39 (2006) 19–34.
[13] S. Simons, From Hahn–Banach to Monotonicity, Springer-Verlag, Berlin, Heidelberg, 2008.
[14] C. Zălinescu, Convex Analysis in General Vector Spaces, World Scientific, River Edge, NJ, 2002.
[15] X. Zhang, On weak compactness in spaces of measures, J. Funct. Anal. 143 (1997) 1–9.