Statistics and Probability Letters 119 (2016) 248–258
A strong law of large numbers for sub-linear expectation under a general moment condition

Cheng Hu

School of Mathematics, Shandong University, Jinan, Shandong, 250100, China

Article history: Received 30 March 2016; Received in revised form 21 August 2016; Accepted 23 August 2016; Available online 31 August 2016.

Abstract: In this paper, we derive a strong law of large numbers for sub-linear expectation under a general moment condition. The result reduces to the classical strong law of large numbers when the sub-linear expectation coincides with the classical linear expectation. Moreover, we show that this moment condition for the strong law of large numbers in the sub-linear setting is the weakest possible.

MSC: 60F15

Keywords: Strong law of large numbers; Sub-linear expectation; Capacity
1. Introduction

The classical strong laws of large numbers (SLLNs for short) are widely known as fundamental results in probability theory whose importance requires no discussion. Recently, motivated by problems in statistics, measures of risk, mathematical economics and super-hedging in finance, more and more studies have adopted non-additive probabilities and non-linear expectations to describe uncertain phenomena in these fields which cannot be modeled exactly by classical probability theory; see, for example, Chen and Epstein (2002), Choquet (1953), Huber and Strassen (1973), Peng (1997, 2010) and Wasserman and Kadane (1990). Nowadays SLLNs for non-additive probabilities and non-linear expectations are widely studied; see, for instance, Chen et al. (2013), Cozman (2010), Korchevsky (2015), Li and Chen (2011), Marinacci (1999), Maccheroni and Marinacci (2005) and Zhang (2016).

The general sub-linear expectation and the related notions of independent and identically distributed random variables were introduced by Peng (2010). Previous SLLNs for sub-linear expectation show that, given a sequence $\{X_n\}_{n=1}^{\infty}$ of i.i.d. random variables under sub-linear expectation, any cluster point of the empirical averages lies between the lower expectation (super-linear expectation) $\mathcal E[X_1]$ and the upper expectation (sub-linear expectation) $\mathbb E[X_1]$ quasi-surely. That is,
\[
v\Big(\mathcal E[X_1]\le\liminf_{n\to\infty}\frac{S_n}{n}\le\limsup_{n\to\infty}\frac{S_n}{n}\le\mathbb E[X_1]\Big)=1,
\]
where $S_n=\sum_{i=1}^{n}X_i$.
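For instance, if each $X_n$ takes only the values $0$ and $1$ and its distribution is ambiguous, with $\mathcal E[X_1]=\underline p$ and $\mathbb E[X_1]=\overline p$, the above display states that the empirical frequency $S_n/n$ clusters inside the interval $[\underline p,\overline p]$ quasi-surely; when $\underline p=\overline p$ this recovers the classical frequency interpretation.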
Chen et al. (2013) obtained a SLLN of the above form for independent random variables under the condition of a finite $(1+\alpha)$th moment for the upper expectation. Zhang (2016) derived a SLLN of this form for negatively dependent, identically distributed random variables under the condition of a finite first moment for the Choquet expectation. As is well known, the Choquet expectation with respect to a sub-additive capacity can be much larger than the corresponding sub-linear expectation, so it is of interest to know whether a SLLN can be obtained under some weaker moment condition for the sub-linear expectation itself.
In this paper we need the following notation. Let $\Phi_c$ (respectively, $\Phi_d$) denote the set of functions $\psi(x)$ defined on $[0,\infty)$ such that:

(1) $\psi(x)$ is nonnegative and nondecreasing on $[0,\infty)$ and positive on $[x_0,\infty)$ for some $x_0\ge 0$;
(2) for any fixed $a>0$, there exists $C>0$ such that $\psi(x+a)\le C\psi(x)$ for any $x\ge x_0$;
(3) the series $\sum_{n=[x_0]+1}^{\infty}\frac{1}{n\psi(n)}$ converges (respectively, diverges).

The value of $x_0$ is not assumed to be the same for different $\psi$. The functions $x^{\alpha}$ and $(\ln(1+x))^{1+\alpha}$ for any $\alpha>0$ are examples of the class $\Phi_c$; the functions $\ln(1+x)$ and $\ln\ln(e\vee x)$ belong to the class $\Phi_d$.
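For instance, by the integral test,
\[
\sum_{n\ge 2}\frac{1}{n\cdot n^{\alpha}}<\infty,\qquad \sum_{n\ge 2}\frac{1}{n(\ln(1+n))^{1+\alpha}}<\infty,\qquad\text{while}\qquad \sum_{n\ge 2}\frac{1}{n\ln(1+n)}=\infty,
\]
and condition (2) is easily checked for these functions (e.g. $(x+a)^{\alpha}\le(1+a/x_0)^{\alpha}x^{\alpha}$ for $x\ge x_0>0$), which confirms, for example, that $x^{\alpha},(\ln(1+x))^{1+\alpha}\in\Phi_c$ and $\ln(1+x)\in\Phi_d$.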
The purpose of this paper is to show that the SLLN is still true under a general moment condition which is weaker than a finite $(1+\alpha)$th moment. Inspired by Petrov (1975) and Korchevsky (2015), we conjecture and then prove that the moment condition $\sup_n\mathbb E[|X_n|\psi(|X_n|)]<\infty$ for some $\psi\in\Phi_c$ maintains the validity of the SLLN, and that this moment condition is the weakest: when only $\sup_n\mathbb E[|X_n|\psi(|X_n|)]<\infty$ for some $\psi\in\Phi_d$ holds, we can always find a counterexample for which the SLLN fails.

The rest of this paper is organized as follows. In Section 2, we recall some basic concepts and lemmas related to sub-linear expectation. In Section 3, we state and prove our main result, the SLLN under a general moment condition for sub-linear expectation. In Section 4, we illustrate that this condition is the weakest.

2. Basic concepts and lemmas

We use notation similar to that of Peng (2010). Let $(\Omega,\mathcal F)$ be a given measurable space and let $\mathcal H$ be a linear space of real functions defined on $\Omega$ such that $X_1,X_2,\ldots,X_n\in\mathcal H$ implies $\varphi(X_1,X_2,\ldots,X_n)\in\mathcal H$ for each $\varphi\in C_{l,Lip}(\mathbb R^{n})$, where $C_{l,Lip}(\mathbb R^{n})$ denotes the linear space of local Lipschitz functions $\varphi$ satisfying
\[
|\varphi(x)-\varphi(y)|\le C\,(1+|x|^{m}+|y|^{m})\,|x-y|,\qquad\forall x,y\in\mathbb R^{n},
\]
for some $C>0$ and $m\in\mathbb N$ depending on $\varphi$. $\mathcal H$ contains all $I_A$ with $A\in\mathcal F$. We consider $\mathcal H$ as the space of random variables.

Definition 2.1. A sub-linear expectation $\mathbb E$ on $\mathcal H$ is a functional $\mathbb E:\mathcal H\to\overline{\mathbb R}:=[-\infty,\infty]$ satisfying the following properties: for all $X,Y\in\mathcal H$, we have

(a) Monotonicity: if $X\ge Y$ then $\mathbb E[X]\ge\mathbb E[Y]$;
(b) Constant preserving: $\mathbb E[c]=c$, $\forall c\in\mathbb R$;
(c) Positive homogeneity: $\mathbb E[\lambda X]=\lambda\mathbb E[X]$, $\forall\lambda\ge 0$;
(d) Sub-additivity: $\mathbb E[X+Y]\le\mathbb E[X]+\mathbb E[Y]$ whenever $\mathbb E[X]+\mathbb E[Y]$ is not of the form $\infty-\infty$ or $-\infty+\infty$.
Remark 2.1. By combining (b) and (d) we easily obtain a further basic property of sub-linear expectation:
(e) Translation invariance: $\mathbb E[X+c]=\mathbb E[X]+c$, $\forall c\in\mathbb R$.

The triple $(\Omega,\mathcal H,\mathbb E)$ is called a sub-linear expectation space. Given a sub-linear expectation $\mathbb E$, we define the conjugate expectation $\mathcal E$ of $\mathbb E$ by
\[
\mathcal E[X]:=-\mathbb E[-X],\qquad\forall X\in\mathcal H.
\]
Obviously, $\mathcal E[X]\le\mathbb E[X]$ for all $X\in\mathcal H$.

Definition 2.2. A set function $V:\mathcal F\to[0,1]$ is called a capacity if it satisfies
(a) $V(\emptyset)=0$, $V(\Omega)=1$;
(b) $V(A)\le V(B)$ whenever $A\subset B$, $A,B\in\mathcal F$.

A capacity $V$ is said to be sub-additive if it satisfies $V(A\cup B)\le V(A)+V(B)$ for all $A,B\in\mathcal F$. The corresponding Choquet expectation is defined by
\[
C_V[X]:=\int_{0}^{\infty}V(X\ge t)\,dt+\int_{-\infty}^{0}\big[V(X\ge t)-1\big]\,dt.
\]
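As an illustration of the comparison between $C_V$ and a sub-linear expectation mentioned in the Introduction, consider the following small constructed example. Take $\Omega=\{1,2,3\}$, the two probabilities $P_1=(\tfrac12,0,\tfrac12)$ and $P_2=(0,1,0)$, the sub-linear expectation $\mathbb E[X]=\max(E_{P_1}[X],E_{P_2}[X])$ and the sub-additive capacity $V(A)=\max(P_1(A),P_2(A))$ (this is exactly the capacity induced by $\mathbb E$ in the sense defined below). For $X(i)=i$,
\[
\mathbb E[X]=\max(2,2)=2,\qquad C_V[X]=\int_{0}^{3}V(X\ge t)\,dt=1+1+\tfrac12=\tfrac52,
\]
so the Choquet expectation dominates the sub-linear expectation, and the inequality can be strict.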
In this paper we only consider capacities induced by sub-linear expectations. Given a sub-linear expectation space $(\Omega,\mathcal H,\mathbb E)$, we define a capacity by $V(A):=\mathbb E[I_A]$, $\forall A\in\mathcal F$, and the conjugate capacity by $v(A):=1-V(A^{c})$, $\forall A\in\mathcal F$. Clearly, $V$ is a sub-additive capacity and $v(A)=\mathcal E[I_A]$.

Definition 2.3. A sub-linear expectation $\mathbb E:\mathcal H\to\overline{\mathbb R}$ is said to be continuous if it satisfies:
(1) Lower-continuity: if $X_n\uparrow X$, then $\mathbb E[X_n]\uparrow\mathbb E[X]$, where $0\le X_n$, $X\in\mathcal H$;
(2) Upper-continuity: if $X_n\downarrow X$, then $\mathbb E[X_n]\downarrow\mathbb E[X]$, where $0\le X_n$, $X\in\mathcal H$.
A capacity $V:\mathcal F\to[0,1]$ is called a continuous capacity if it satisfies:
(1) Lower-continuity: if $A_n\uparrow A$, then $V(A_n)\uparrow V(A)$, where $A_n,A\in\mathcal F$;
(2) Upper-continuity: if $A_n\downarrow A$, then $V(A_n)\downarrow V(A)$, where $A_n,A\in\mathcal F$.

We now give some basic properties of $\mathbb E$ and $V$; the proofs can be found in Zhang (2016).
Proposition 2.1. (1) If $\mathbb E$ is lower-continuous, then it is countably sub-additive, i.e. $\mathbb E\big[\sum_{n=1}^{\infty}X_n\big]\le\sum_{n=1}^{\infty}\mathbb E[X_n]$ for any $0\le X_n$ with $\sum_{n=1}^{\infty}X_n\in\mathcal H$.
(2) If $V$ is lower-continuous, then it is countably sub-additive, i.e. $V\big(\bigcup_{n=1}^{\infty}A_n\big)\le\sum_{n=1}^{\infty}V(A_n)$ for any $A_n\in\mathcal F$.
(3) If $\mathbb E$ is lower-continuous, then the capacity $V$ induced by $\mathbb E$ is also lower-continuous.

The following representation theorem for sub-linear expectations was initiated by Peng (2010), where its proof can be found.

Proposition 2.2. Let $(\Omega,\mathcal H,\mathbb E)$ be a sub-linear expectation space. Then there exists a family of finitely additive probabilities $\{P_\theta\}_{\theta\in\Theta}$ defined on $(\Omega,\mathcal F)$ such that for each $X\in\mathcal H$,
\[
\mathbb E[X]=\sup_{\theta\in\Theta}E_{P_\theta}[X].
\]
Example 2.1. Let $\mathcal P$ be a family of probability measures defined on $(\Omega,\mathcal F)$. For any random variable $X$ on $(\Omega,\mathcal F)$, we define the upper expectation by
\[
\mathbb E[X]:=\sup_{Q\in\mathcal P}E_Q[X].
\]
Then $\mathbb E[X]$ is a sub-linear expectation. For any $X_n\uparrow X$ with $0\le X_n$, $X\in\mathcal H$,
\[
\mathbb E[X]=\sup_{Q\in\mathcal P}E_Q[X]=\sup_{Q\in\mathcal P}\lim_{n}E_Q[X_n]=\sup_{Q\in\mathcal P}\sup_{n}E_Q[X_n]=\sup_{n}\sup_{Q\in\mathcal P}E_Q[X_n]=\sup_{n}\mathbb E[X_n]=\lim_{n}\mathbb E[X_n].
\]
So $\mathbb E$ is lower-continuous and hence countably sub-additive. Moreover, we can also define the capacity $V(A):=\mathbb E[I_A]=\sup_{Q\in\mathcal P}Q(A)$. By Proposition 2.1, $V$ is also lower-continuous and countably sub-additive.

Remark 2.2. By Example 2.1, it is obvious that our main results are also true for upper expectations.

Definition 2.4. Given a capacity $V$, a set $A$ is said to be a polar set if $V(A)=0$. We say that a property holds "quasi-surely" (q.s.) if it holds outside a polar set.

Definition 2.5 (Independence). Let $X=(X_1,\ldots,X_m)$, $X_i\in\mathcal H$, and $Y=(Y_1,\ldots,Y_n)$, $Y_i\in\mathcal H$, be two random vectors on $(\Omega,\mathcal H,\mathbb E)$. $Y$ is said to be independent of $X$ if for each test function $\varphi\in C_{l,Lip}(\mathbb R^{m}\times\mathbb R^{n})$ we have $\mathbb E[\varphi(X,Y)]=\mathbb E\big[\mathbb E[\varphi(x,Y)]|_{x=X}\big]$ whenever $\bar\varphi(x):=\mathbb E[|\varphi(x,Y)|]<\infty$ for all $x$ and $\mathbb E[|\bar\varphi(X)|]<\infty$. $\{X_n\}_{n=1}^{\infty}$ is said to be a sequence of independent random variables if $X_{n+1}$ is independent of $(X_1,\ldots,X_n)$ for each $n\ge 1$.

Remark 2.3. This notion of independence for sub-linear expectation was introduced by Peng (2010). Actually, in the proof of our main result we only need the following weaker independence condition: let $X_1,X_2,\ldots,X_{n+1}$ be random variables on $(\Omega,\mathcal H,\mathbb E)$; $X_{n+1}$ is said to be independent of $(X_1,\ldots,X_n)$ if for all non-negative bounded continuous functions $\varphi_i(\cdot)$ on $\mathbb R$ with $\mathbb E[\varphi_i(X_i)]<\infty$, $i=1,\ldots,n+1$, we have
\[
\mathbb E\Big[\prod_{i=1}^{n+1}\varphi_i(X_i)\Big]=\mathbb E\Big[\prod_{i=1}^{n}\varphi_i(X_i)\Big]\,\mathbb E\big[\varphi_{n+1}(X_{n+1})\big].
\]
Definition 2.6 (Identical Distribution). Let $X_1$, $X_2$ be two $n$-dimensional random variables defined respectively on sub-linear expectation spaces $(\Omega_1,\mathcal H_1,\mathbb E_1)$ and $(\Omega_2,\mathcal H_2,\mathbb E_2)$. They are called identically distributed if
\[
\mathbb E_1[\varphi(X_1)]=\mathbb E_2[\varphi(X_2)],\qquad\forall\varphi\in C_{l,Lip}(\mathbb R^{n}),
\]
whenever the sub-linear expectations are finite.

Definition 2.7 (IID Random Variables). A sequence of random variables $\{X_n\}_{n=1}^{\infty}$ is said to be independent and identically distributed if $X_{n+1}$ is independent of $(X_1,\ldots,X_n)$ and $X_n$ is identically distributed with $X_1$ for each $n\ge 1$.
To prove our main results, we need the following lemmas. The proofs of Lemmas 2.1–2.3 can be found in Chen et al. (2013).

Lemma 2.1 (Borel–Cantelli Lemma). Let $\{A_n\}_{n=1}^{\infty}$ be a sequence of events in $\mathcal F$ and let $V$ be the capacity induced by a lower-continuous sub-linear expectation $\mathbb E$. If $\sum_{n=1}^{\infty}V(A_n)<\infty$, then
\[
V\Big(\bigcap_{n=1}^{\infty}\bigcup_{i=n}^{\infty}A_i\Big)=0.
\]
Lemma 2.2 (Chebyshev's Inequality). Let $f(x)>0$ be a nondecreasing function on $\mathbb R$. Then for any $x$,
\[
V(X\ge x)\le\frac{\mathbb E[f(X)]}{f(x)},\qquad v(X\ge x)\le\frac{\mathcal E[f(X)]}{f(x)}.
\]
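For the capacity $V$ induced by $\mathbb E$, the first inequality can, for instance, be seen directly: since $f$ is positive and nondecreasing, $I(X\ge x)\le f(X)/f(x)$, so by the monotonicity and positive homogeneity of $\mathbb E$,
\[
V(X\ge x)=\mathbb E[I(X\ge x)]\le\frac{\mathbb E[f(X)]}{f(x)};
\]
the second inequality follows in the same way from the monotonicity and positive homogeneity of $\mathcal E$, since $v(X\ge x)=\mathcal E[I(X\ge x)]$.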
Lemma 2.3 (Jensen's Inequality). Let $f(\cdot)$ be a convex function on $\mathbb R$. Suppose that $\mathbb E[X]$ and $\mathbb E[f(X)]$ exist. Then
\[
\mathbb E[f(X)]\ge f(\mathbb E[X]).
\]

Lemma 2.4. If $\mathbb E[|X|]<\infty$, then $|X|<\infty$ q.s., i.e. $V(|X|=\infty)=0$.

Proof. For any $i\ge 1$,
\[
V(|X|=\infty)=V\Big(\bigcap_{i=1}^{\infty}\{|X|>i\}\Big)\le V(|X|>i)\le\frac{\mathbb E[|X|]}{i}.
\]
Since $\mathbb E[|X|]<\infty$, letting $i\to\infty$ we have $V(|X|=\infty)=0$. $\square$
Lemma 2.5. Assume that $\psi(x)\in\Phi_c$. Then
\[
\sum_{n=[x_0^2]+1}^{\infty}\frac{1}{n\,\psi\big(\frac{n}{\ln(1+n)}\big)}<\infty.
\]

Proof. Since $\psi(x)\in\Phi_c$, we have $\psi\big(\frac{n}{\ln(1+n)}\big)\ge\psi(\sqrt n)$ for any $n\ge 1$, so we only need to prove that $\sum_{n=[x_0^2]+1}^{\infty}\frac{1}{n\psi(\sqrt n)}<\infty$. Let $n_0=k_0^2$ be the smallest square number not less than $[x_0^2]+1$. Then we have
\[
\sum_{n=n_0}^{\infty}\frac{1}{n\psi(\sqrt n)}=\sum_{k=k_0}^{\infty}\sum_{k^2\le n<(k+1)^2}\frac{1}{n\psi(\sqrt n)}\le\sum_{k=k_0}^{\infty}\frac{(k+1)^2-k^2}{k^2\psi(k)}=2\sum_{k=k_0}^{\infty}\frac{1}{k\psi(k)}+\sum_{k=k_0}^{\infty}\frac{1}{k^2\psi(k)}<\infty.
\]
This implies $\sum_{n=[x_0^2]+1}^{\infty}\frac{1}{n\psi(\sqrt n)}<\infty$. $\square$
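For instance, taking $\psi(x)=x\in\Phi_c$ (with, say, $x_0=1$), Lemma 2.5 asserts that $\sum_{n\ge 2}\frac{\ln(1+n)}{n^{2}}<\infty$, which can also be verified directly by the integral test.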
3. The strong law of large numbers for sub-linear expectation

Before we state our main result we need to prove the following lemmas. The next lemma is a new exponential inequality and it is the key point of this paper.

Lemma 3.1. For any $x\in\mathbb R$, we have
\[
e^{x}\le 1+x+|x|\ln(1+|x|)\,e^{2|x|}.
\]

Proof. Case 1: $x\in[0,e-1]$. By Taylor's theorem we have $e^{x}=1+x+\frac{x^{2}}{2!}+\frac{x^{3}}{3!}+\cdots$. Moreover,
\[
1+x+x\ln(1+x)e^{2x}=1+x+x\ln(1+x)\Big(1+2x+\frac{2^{2}}{2!}x^{2}+\cdots\Big)=1+x+x\ln(1+x)+2x^{2}\ln(1+x)+\cdots.
\]
Comparing the above two expansions term by term, we only need to prove that $2\ln(1+x)\ge x$ for $x\in[0,e-1]$; indeed, $g(x):=2\ln(1+x)-x$ is concave with $g(0)=0$ and $g(e-1)=3-e>0$, so $g\ge 0$ on $[0,e-1]$.

Case 2: $x\in[-(e-1),0)$. Let $y=-x$; then the inequality becomes
\[
e^{-y}\le 1-y+y\ln(1+y)e^{2y}\qquad\text{for }y\in(0,e-1],
\]
and a calculation similar to that in Case 1 verifies it.

Case 3: $x\in(-\infty,-(e-1))\cup(e-1,\infty)$. The result is obvious. $\square$
Lemma 3.2. If $0\le X_n\le\frac{2n}{\ln(1+n)}$ and $\sup_{n\ge 1}\mathbb E[X_n\psi(X_n)]<\infty$ for some $\psi\in\Phi_c$, then for any $m>1$,
\[
\sup_{1\le i\le n}m\ln(1+n)\,\mathbb E\Big[X_i\ln\Big(1+\frac{m\ln(1+n)}{n}X_i\Big)\Big]\to 0\qquad\text{when }n\to\infty.
\]
Proof. For $n\ge x_0^{3}$,
\begin{align*}
&\sup_{1\le i\le n}m\ln(1+n)\,\mathbb E\Big[X_i\ln\Big(1+\frac{m\ln(1+n)}{n}X_i\Big)\Big]\\
&\quad\le\sup_{1\le i\le n}m\ln(1+n)\,\mathbb E\Big[X_i\ln\Big(1+\frac{m\ln(1+n)}{n}X_i\Big)I\big(X_i\le n^{1/3}\big)\Big]\\
&\qquad+\sup_{1\le i\le n}m\ln(1+n)\,\mathbb E\Big[X_i\psi(X_i)\,\frac{\ln\big(1+\frac{m\ln(1+n)}{n}X_i\big)}{\psi(X_i)}\,I\Big(n^{1/3}\le X_i\le\frac{2i}{\ln(1+i)}\Big)\Big]\\
&\quad:=I+J.
\end{align*}
For $I$,
\[
I\le m\ln(1+n)\cdot n^{1/3}\cdot\ln\Big(1+\frac{m\ln(1+n)}{n^{2/3}}\Big)\le m^{2}\ln(1+n)\cdot n^{1/3}\cdot\frac{\ln(1+n)}{n^{2/3}}=\frac{m^{2}(\ln(1+n))^{2}}{n^{1/3}}\to 0.
\]
Since $\psi\in\Phi_c$, the function $f(x)=\frac{\psi(x)}{\ln(1+x)}$ must approach infinity when $x\to\infty$. In addition we have $\frac{i}{\ln(1+i)}\le\frac{n}{\ln(1+n)}$ for any $i\le n$, so that $\frac{m\ln(1+n)}{n}X_i\le 2m$ on the event $\{X_i\le\frac{2i}{\ln(1+i)}\}$. For $J$,
\[
J\le\frac{m\ln(1+n)\ln(1+2m)}{\psi(n^{1/3})}\sup_{1\le i\le n}\mathbb E[X_i\psi(X_i)]=\frac{m\ln(1+n)\ln(1+2m)}{f(n^{1/3})\ln(1+n^{1/3})}\sup_{i\ge 1}\mathbb E[X_i\psi(X_i)]\to 0.\qquad\square
\]
Lemma 3.3. Given a sub-linear expectation space $(\Omega,\mathcal H,\mathbb E)$, let $\{X_n\}_{n=1}^{\infty}$ be a sequence of independent random variables with $\sup_{n\ge 1}\mathbb E[|X_n|\psi(|X_n|)]<\infty$ for some $\psi\in\Phi_c$ and $\mathbb E[X_n]=\mu$ for each $n\ge 1$. Suppose that
\[
|X_n-\mathbb E[X_n]|\le\frac{2n}{\ln(1+n)},\qquad n=1,2,\ldots.
\]
Then for any $m>1$ we have
\[
\sup_{n\ge 1}\mathbb E\Big[\exp\Big(\frac{m\ln(1+n)}{n}\sum_{i=1}^{n}\big(X_i-\mathbb E[X_i]\big)\Big)\Big]<\infty.
\]
Proof. By noticing that $\sup_{n\ge 1}\mathbb E[|X_n|\psi(|X_n|)]<\infty$ implies $\sup_{n\ge 1}\mathbb E[\psi(|X_n|)]<\infty$, we have
\begin{align*}
\sup_{n\ge 1}\mathbb E\big[|X_n-\mathbb E[X_n]|\,\psi(|X_n-\mathbb E[X_n]|)\big]&\le\sup_{n\ge 1}\mathbb E\big[(|X_n|+|\mu|)\psi(|X_n|+|\mu|)\big]\\
&\le\sup_{n\ge 1}\mathbb E\big[(|X_n|+|\mu|)\psi(|X_n|+|\mu|)I(|X_n|\le x_0)\big]+\sup_{n\ge 1}\mathbb E\big[(|X_n|+|\mu|)\psi(|X_n|+|\mu|)I(|X_n|\ge x_0)\big]\\
&\le(x_0+|\mu|)\psi(x_0+|\mu|)+C\sup_{n\ge 1}\mathbb E\big[(|X_n|+|\mu|)\psi(|X_n|)\big]<\infty.
\end{align*}
Substituting $x=\frac{m\ln(1+n)}{n}(X_i-\mathbb E[X_i])$ into the exponential inequality of Lemma 3.1, we obtain
\begin{align*}
\exp\Big(\frac{m\ln(1+n)}{n}(X_i-\mathbb E[X_i])\Big)&\le 1+\frac{m\ln(1+n)}{n}(X_i-\mathbb E[X_i])\\
&\quad+\frac{m\ln(1+n)}{n}|X_i-\mathbb E[X_i]|\,\ln\Big(1+\frac{m\ln(1+n)}{n}|X_i-\mathbb E[X_i]|\Big)\exp\Big(\frac{2m\ln(1+n)}{n}|X_i-\mathbb E[X_i]|\Big).
\end{align*}
By the assumptions of Lemma 3.3, it is easy to check that
\[
\frac{m\ln(1+n)}{n}|X_i-\mathbb E[X_i]|\le\frac{m\ln(1+i)}{i}|X_i-\mathbb E[X_i]|\le 2m,\qquad 1\le i\le n.
\]
By Lemma 3.2, for some constant $L>0$ we have, for sufficiently large $n\ge n_L$,
\[
\sup_{1\le i\le n}\frac{m\ln(1+n)}{n}\,\mathbb E\Big[|X_i-\mathbb E[X_i]|\ln\Big(1+\frac{m\ln(1+n)}{n}|X_i-\mathbb E[X_i]|\Big)\Big]\le\frac{L}{n}.
\]
So, taking $\mathbb E[\cdot]$ on both sides of the exponential inequality above and combining the analyses above, we have for any $n\ge n_L$,
\[
\mathbb E\Big[\exp\Big(\frac{m\ln(1+n)}{n}(X_i-\mathbb E[X_i])\Big)\Big]\le 1+\frac{L}{n}e^{4m}.
\]
By the independence of $\{X_n\}_{n=1}^{\infty}$, we have for any $n\ge n_L$,
\[
\mathbb E\Big[\exp\Big(\frac{m\ln(1+n)}{n}\sum_{i=1}^{n}(X_i-\mathbb E[X_i])\Big)\Big]=\prod_{i=1}^{n}\mathbb E\Big[\exp\Big(\frac{m\ln(1+n)}{n}(X_i-\mathbb E[X_i])\Big)\Big]\le\Big(1+\frac{L}{n}e^{4m}\Big)^{n}\le e^{n\cdot\frac{L}{n}e^{4m}}=e^{Le^{4m}}<\infty.
\]
For the finitely many indices $n<n_L$ each expectation is also finite, since the exponent is bounded by $2mn$; hence the supremum over all $n\ge 1$ is finite. $\square$
Theorem 3.1. Given a sub-linear expectation space $(\Omega,\mathcal H,\mathbb E)$, where $\mathbb E$ is lower-continuous and $V$ is the induced capacity. Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of independent random variables with $\sup_{n\ge 1}\mathbb E[|X_n|\psi(|X_n|)]<\infty$ for some $\psi\in\Phi_c$ and $\mathbb E[X_n]=\overline\mu$, $\mathcal E[X_n]=\underline\mu$ for each $n\ge 1$. Let $S_n=\sum_{i=1}^{n}X_i$. Then
\[
V\Big(\Big\{\liminf_{n\to\infty}\frac{1}{n}S_n<\underline\mu\Big\}\cup\Big\{\limsup_{n\to\infty}\frac{1}{n}S_n>\overline\mu\Big\}\Big)=0
\]
and
\[
v\Big(\underline\mu\le\liminf_{n\to\infty}\frac{1}{n}S_n\le\limsup_{n\to\infty}\frac{1}{n}S_n\le\overline\mu\Big)=1.
\]

Proof. By the monotonicity and sub-additivity of $V$, the two assertions are equivalent, and we only need to prove that
\[
V\Big(\limsup_{n\to\infty}\frac{1}{n}S_n>\overline\mu\Big)=0\qquad\text{and}\qquad V\Big(\liminf_{n\to\infty}\frac{1}{n}S_n<\underline\mu\Big)=0.
\]
Step 1. Assume that $|X_n-\overline\mu|\le\frac{2n}{\ln(1+n)}$ for any $n\ge 1$. Then $\{X_n\}_{n=1}^{\infty}$ satisfies the conditions of Lemma 3.3. For any $\varepsilon>0$ we choose $m>\frac{1}{\varepsilon}$. Then by Chebyshev's inequality,
\begin{align*}
V\Big(\frac{S_n}{n}\ge\overline\mu+\varepsilon\Big)&=V\Big(\frac{m\ln(1+n)}{n}\sum_{i=1}^{n}(X_i-\overline\mu)\ge\varepsilon m\ln(1+n)\Big)\\
&\le e^{-\varepsilon m\ln(1+n)}\,\mathbb E\Big[\exp\Big(\frac{m\ln(1+n)}{n}\sum_{i=1}^{n}(X_i-\overline\mu)\Big)\Big]\\
&\le\frac{1}{(1+n)^{\varepsilon m}}\,\sup_{n\ge 1}\mathbb E\Big[\exp\Big(\frac{m\ln(1+n)}{n}\sum_{i=1}^{n}(X_i-\overline\mu)\Big)\Big].
\end{align*}
By Lemma 3.3 and the convergence of $\sum_{n=1}^{\infty}\frac{1}{(1+n)^{\varepsilon m}}$, we have
\[
\sum_{n=1}^{\infty}V\Big(\frac{S_n}{n}\ge\overline\mu+\varepsilon\Big)<\infty.
\]
By Lemma 2.1 we have
\[
V\Big(\limsup_{n\to\infty}\frac{S_n}{n}\ge\overline\mu+\varepsilon\Big)=0.
\]
By the arbitrariness of $\varepsilon$ and the lower-continuity of $V$, we have
\[
V\Big(\limsup_{n\to\infty}\frac{S_n}{n}>\overline\mu\Big)=0.
\]
Step 2. Define $f_n(x)=\big(-\frac{n}{\ln(1+n)}\big)\vee\big(x\wedge\frac{n}{\ln(1+n)}\big)$ and $\widetilde f_n(x)=x-f_n(x)$. Then $f_n(\cdot),\widetilde f_n(\cdot)\in C_{l,Lip}(\mathbb R)$. Let
\[
Y_n=f_n(X_n-\overline\mu)-\mathbb E[f_n(X_n-\overline\mu)]+\overline\mu\qquad\text{and}\qquad\overline S_n=\sum_{i=1}^{n}Y_i.
\]
Then $Y_n$, $n=1,2,\ldots$, are independent, $|Y_n-\overline\mu|\le\frac{2n}{\ln(1+n)}$ and $\mathbb E[Y_n]=\overline\mu$. Noticing that
\[
|Y_n|\le|X_n-\overline\mu|+\mathbb E[|X_n-\overline\mu|]+\overline\mu,
\]
by an argument similar to that in Lemma 3.3 we also have $\sup_{n\ge 1}\mathbb E[|Y_n|\psi(|Y_n|)]<\infty$. Then $\{Y_n\}_{n=1}^{\infty}$ satisfies the conditions of Lemma 3.3, and by virtue of Step 1 we have
\[
V\Big(\limsup_{n\to\infty}\frac{\overline S_n}{n}>\overline\mu\Big)=0.\tag{3.1}
\]
Moreover $X_n=Y_n+\widetilde f_n(X_n-\overline\mu)+\mathbb E[f_n(X_n-\overline\mu)]$, and then
\[
\frac{1}{n}S_n=\frac{1}{n}\overline S_n+\frac{1}{n}\sum_{i=1}^{n}\widetilde f_i(X_i-\overline\mu)+\frac{1}{n}\sum_{i=1}^{n}\mathbb E[f_i(X_i-\overline\mu)].
\]
By the sub-additivity and translation invariance of $\mathbb E[\cdot]$, we have
\[
\mathbb E[f_i(X_i-\overline\mu)]=\mathbb E[X_i-\overline\mu-\widetilde f_i(X_i-\overline\mu)]\le\mathbb E[X_i-\overline\mu]+\mathbb E[-\widetilde f_i(X_i-\overline\mu)]\le\mathbb E[|\widetilde f_i(X_i-\overline\mu)|].
\]
Therefore
\[
\frac{1}{n}S_n\le\frac{1}{n}\overline S_n+\frac{1}{n}\sum_{i=1}^{n}|\widetilde f_i(X_i-\overline\mu)|+\frac{1}{n}\sum_{i=1}^{n}\mathbb E[|\widetilde f_i(X_i-\overline\mu)|].\tag{3.2}
\]
Noticing that
\begin{align*}
\sum_{i=1}^{\infty}\frac{\mathbb E[|\widetilde f_i(X_i-\overline\mu)|]}{i}&\le\sum_{i=1}^{\infty}\frac{\mathbb E\big[|X_i-\overline\mu|\,I\big(|X_i-\overline\mu|>\frac{i}{\ln(1+i)}\big)\big]}{i}\\
&\le\sum_{i=1}^{[x_0^2]}\frac{\mathbb E\big[|X_i-\overline\mu|\,I\big(|X_i-\overline\mu|>\frac{i}{\ln(1+i)}\big)\big]}{i}+\sum_{i=[x_0^2]+1}^{\infty}\frac{\mathbb E\big[|X_i-\overline\mu|\,\psi(|X_i-\overline\mu|)\big]}{i\,\psi\big(\frac{i}{\ln(1+i)}\big)}\\
&\le\sup_{i\le[x_0^2]}\mathbb E[|X_i-\overline\mu|]\sum_{i=1}^{[x_0^2]}\frac{1}{i}+\sup_{i\ge 1}\mathbb E\big[|X_i-\overline\mu|\,\psi(|X_i-\overline\mu|)\big]\sum_{i=[x_0^2]+1}^{\infty}\frac{1}{i\,\psi\big(\frac{i}{\ln(1+i)}\big)},
\end{align*}
by Lemma 2.5, $\sup_{i\ge 1}\mathbb E[|X_i-\overline\mu|]<\infty$ and $\sup_{i\ge 1}\mathbb E[|X_i-\overline\mu|\psi(|X_i-\overline\mu|)]<\infty$, we have
\[
\sum_{i=1}^{\infty}\frac{\mathbb E[|\widetilde f_i(X_i-\overline\mu)|]}{i}<\infty.
\]
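Recall the Kronecker lemma in its deterministic form: if $0<b_n\uparrow\infty$ and the series $\sum_{i=1}^{\infty}x_i/b_i$ converges, then $\frac{1}{b_n}\sum_{i=1}^{n}x_i\to 0$ as $n\to\infty$; below it is applied with $b_i=i$.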
By the Kronecker lemma we have
\[
\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\mathbb E[|\widetilde f_i(X_i-\overline\mu)|]=0.
\]
On the other hand,
\[
\mathbb E\Big[\sum_{i=1}^{\infty}\frac{|\widetilde f_i(X_i-\overline\mu)|}{i}\Big]\le\sum_{i=1}^{\infty}\frac{\mathbb E[|\widetilde f_i(X_i-\overline\mu)|]}{i}<\infty.
\]
Then by Lemma 2.4 we have
\[
\sum_{i=1}^{\infty}\frac{|\widetilde f_i(X_i-\overline\mu)|}{i}<\infty\quad\text{q.s.}
\]
Also by the Kronecker lemma we have
\[
\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}|\widetilde f_i(X_i-\overline\mu)|=0\quad\text{q.s.}
\]
Taking $\limsup_{n\to\infty}$ on both sides of (3.2) and combining the analyses above, we have
\[
\limsup_{n\to\infty}\frac{S_n}{n}\le\limsup_{n\to\infty}\frac{\overline S_n}{n}\quad\text{q.s.}
\]
Then by expression (3.1) we have
\[
V\Big(\limsup_{n\to\infty}\frac{S_n}{n}>\overline\mu\Big)=0.
\]
Considering $\{-X_n\}_{n=1}^{\infty}$, since $\mathbb E[-X_i]=-\underline\mu$, we have
\[
V\Big(\limsup_{n\to\infty}\Big(-\frac{S_n}{n}\Big)>-\underline\mu\Big)=0.
\]
This is equivalent to
\[
V\Big(\liminf_{n\to\infty}\frac{S_n}{n}<\underline\mu\Big)=0.\qquad\square
\]
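In particular, when $\overline\mu=\underline\mu=\mu$ (i.e. there is no mean uncertainty), the first assertion of Theorem 3.1 states that the set on which $S_n/n$ does not converge to $\mu$ is polar, and the second assertion reads $v\big(\lim_{n\to\infty}\frac{1}{n}S_n=\mu\big)=1$.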
Remark 3.1. If $\mathbb E$ is the classical linear expectation, we have $V=v=P$ and $\overline\mu=\underline\mu=E_P[X_1]$, where $P$ is the classical probability. In this case our strong law of large numbers reduces to the standard Kolmogorov limit law:
\[
P\Big(\lim_{n\to\infty}\frac{1}{n}S_n=E_P[X_1]\Big)=1.
\]
The following corollary reveals that the strong law of large numbers can be extended to the situation where the sub-linear expectations of the random variables are not all equal.

Corollary 3.1. Given a sub-linear expectation space $(\Omega,\mathcal H,\mathbb E)$, where $\mathbb E$ is lower-continuous and $V$ is the induced capacity. Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of independent random variables with $\sup_{n\ge 1}\mathbb E[|X_n|\psi(|X_n|)]<\infty$ for some $\psi\in\Phi_c$ and $\mathbb E[X_n]=\overline\mu_n$, $\mathcal E[X_n]=\underline\mu_n$ for each $n\ge 1$. Let $S_n=\sum_{i=1}^{n}X_i$. Then
\[
V\Big(\Big\{\liminf_{n\to\infty}\frac{1}{n}S_n<\liminf_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\underline\mu_i\Big\}\cup\Big\{\limsup_{n\to\infty}\frac{1}{n}S_n>\limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\overline\mu_i\Big\}\Big)=0
\]
and
\[
v\Big(\liminf_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\underline\mu_i\le\liminf_{n\to\infty}\frac{1}{n}S_n\le\limsup_{n\to\infty}\frac{1}{n}S_n\le\limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\overline\mu_i\Big)=1.
\]

Proof. Taking $Y_n=X_n-\overline\mu_n$, we can verify that $\sup_{n\ge 1}\mathbb E[|Y_n|\psi(|Y_n|)]<\infty$. Then $\{Y_n\}_{n=1}^{\infty}$ satisfies the conditions of Theorem 3.1 with $\mathbb E[Y_n]=0$, and we have
\[
V\Big(\limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}Y_i>0\Big)=0.
\]
This implies
\[
V\Big(\limsup_{n\to\infty}\frac{1}{n}S_n>\limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\overline\mu_i\Big)=0.
\]
Similarly, taking $Z_n=\underline\mu_n-X_n$, we can also obtain
\[
V\Big(\limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}Z_i>0\Big)=0,
\]
which implies
\[
V\Big(\liminf_{n\to\infty}\frac{1}{n}S_n<\liminf_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\underline\mu_i\Big)=0.\qquad\square
\]
4. The weakest condition for sub-linear expectation

In this section we show that the moment condition $\sup_{n\ge 1}\mathbb E[|X_n|\psi(|X_n|)]<\infty$ for some $\psi\in\Phi_c$ is the weakest moment condition ensuring the validity of the SLLN. We need the following theorem, which can be found in Zhang (2016).

Theorem 4.1. Given a sub-linear expectation space $(\Omega,\mathcal H,\mathbb E)$ such that the induced capacity $V$ is continuous. Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of i.i.d. random variables. If
\[
V\Big(\limsup_{n\to\infty}\frac{|S_n|}{n}=\infty\Big)<1,
\]
then $C_V[|X_1|]<\infty$.

Remark 4.1. Theorem 4.1 reveals that $C_V[|X_1|]=\infty$ implies $V\big(\limsup_{n\to\infty}\frac{|S_n|}{n}=\infty\big)=1$.
The following example shows that, for any given $\psi\in\Phi_d$, there exists a sequence $\{X_n\}_{n=1}^{\infty}$ of i.i.d. random variables satisfying $\sup_{n\ge 1}\mathbb E[|X_n|\psi(|X_n|)]=\mathbb E[|X_1|\psi(|X_1|)]<\infty$ together with the conditions of Theorem 4.1, so that the SLLN of Theorem 3.1 is not valid.

Example 4.1. Let $\Omega_i=\{a_0,a_1,\ldots,a_n,\ldots\}$, $i=1,2,\ldots$, be a family of sample spaces and let $\mathcal F_i$ be the collection of all subsets of $\Omega_i$. Let $\mathcal P_i=\{P_1,P_2,\ldots,P_n,\ldots\}$, $i=1,2,\ldots$, be countable families of probability measures, where $P_1,P_2,\ldots,P_n,\ldots$ are defined on each $\Omega_i$ by
\[
P_i(a_j)=
\begin{cases}
1-\dfrac{1}{i\psi(i)} & \text{if } j=0,\\[4pt]
\dfrac{1}{i\psi(i)} & \text{if } j=i,\\[4pt]
0 & \text{if } j\ne 0,i,
\end{cases}
\]
where $\psi\in\Phi_d$ and $\psi(1)\ge 1$. Define the sequence $\{X_n\}_{n=1}^{\infty}$ of i.i.d. random variables on $(\Omega_i,\mathcal F_i)$, $i=1,2,\ldots$, by
\[
X_n(a_j)=j+\tfrac12,\qquad j=0,1,2,\ldots,
\]
for any $n\ge 1$. Then $\{X_n\}_{n=1}^{\infty}$ satisfies, for any $i,j\ge 1$,
\[
P_i(X_n>j)=
\begin{cases}
0 & \text{if } j>i,\\[2pt]
\dfrac{1}{i\psi(i)} & \text{if } j\le i,
\end{cases}
\qquad n=1,2,\ldots.
\]
Define the full space
\[
\Omega=\prod_{i=1}^{\infty}\Omega_i=\Omega_1\times\Omega_2\times\cdots,
\]
the product $\sigma$-algebra on $\Omega$
\[
\mathcal F=\prod_{i=1}^{\infty}\mathcal F_i=\mathcal F_1\times\mathcal F_2\times\cdots,
\]
and the set $\mathcal P$ of probabilities on the measurable space $(\Omega,\mathcal F)$
\[
\mathcal P=\prod_{i=1}^{\infty}\mathcal P_i=\mathcal P_1\times\mathcal P_2\times\cdots=\{P_{i_1}\times P_{i_2}\times\cdots:\ i_j\in\{1,2,\ldots\}\}.
\]
We consider the sub-linear expectation defined by the upper expectation
\[
\mathbb E[X]=\sup_{P\in\mathcal P}E_P[X].
\]
Define a sequence $\{Y_n\}_{n=1}^{\infty}$ of random variables on $(\Omega,\mathcal F)$ by $Y_n(\omega)=Y_n(\omega_1,\omega_2,\ldots)=X_n(\omega_n)$. It is easy to check that $\{Y_n\}_{n=1}^{\infty}$ is a sequence of independent and identically distributed random variables under the sub-linear expectation $\mathbb E$ and that the induced capacity $V$ is continuous. Then we have
\begin{align*}
\mathbb E[Y_1\psi(Y_1)]&=\sup_{P\in\mathcal P}E_P[Y_1\psi(Y_1)]=\sup_{P\in\mathcal P}\sum_{\omega}Y_1(\omega)\psi(Y_1(\omega))P(\omega)\\
&=\sup_{P\in\mathcal P_1}\sum_{\omega_1}X_1(\omega_1)\psi(X_1(\omega_1))P(\omega_1)=\sup_{P\in\mathcal P_1}E_P[X_1\psi(X_1)]\\
&=\sup_{n\ge 1}\Big[\frac{\psi(\frac12)}{2}\Big(1-\frac{1}{n\psi(n)}\Big)+\Big(n+\frac12\Big)\psi\Big(n+\frac12\Big)\frac{1}{n\psi(n)}\Big]\\
&=\sup_{n\ge 1}\Big[\frac{\psi(\frac12)}{2}-\frac{\psi(\frac12)}{2n\psi(n)}+\frac{\psi(n+\frac12)}{\psi(n)}+\frac{\psi(n+\frac12)}{2n\psi(n)}\Big].
\end{align*}
Since, by condition (2), $\psi(n+\frac12)\le C\psi(n)$ for all large $n$, the supremum above is finite, so we have
\[
\mathbb E[Y_1\psi(Y_1)]<\infty.
\]
But
\[
C_V[Y_1]\ge\sum_{i=1}^{\infty}V(Y_1>i)=\sum_{i=1}^{\infty}\sup_{P\in\mathcal P_1}P(X_1>i)\ge\sum_{i=1}^{\infty}P_{i+1}(X_1>i)=\sum_{i=1}^{\infty}\frac{1}{(i+1)\psi(i+1)}=\infty,
\]
since $\psi\in\Phi_d$.
So all the conditions of Theorem 4.1 are satisfied. Since $C_V[Y_1]=\infty$ and $Y_n\ge\frac12$, Remark 4.1 gives $V\big(\limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}Y_i=\infty\big)=1$, whereas $\mathbb E[Y_1]\le\frac12+\frac{3}{2\psi(1)}<\infty$; hence the SLLN of Theorem 3.1 is not valid in this example.

Acknowledgments

The author would like to thank Professor Zengjing Chen and Professor Qiying Wang for their help. He also thanks the editor and anonymous referees for their valuable comments and useful suggestions.
References

Chen, Z., Epstein, L., 2002. Ambiguity, risk, and asset returns in continuous time. Econometrica 70 (4), 1403–1443.
Chen, Z., Wu, P., Li, B., 2013. A strong law of large numbers for non-additive probabilities. Internat. J. Approx. Reason. 54 (3), 365–377.
Choquet, G., 1953. Theory of capacities. Ann. Inst. Fourier 5, 131–295.
Cozman, F.G., 2010. Concentration inequalities and laws of large numbers under epistemic and regular irrelevance. Internat. J. Approx. Reason. 51 (9), 1069–1084.
Huber, P., Strassen, V., 1973. Minimax tests and the Neyman–Pearson lemma for capacities. Ann. Statist. 1 (2), 251–263.
Korchevsky, V., 2015. A generalization of the Petrov strong law of large numbers. Statist. Probab. Lett. 104, 102–108.
Li, W., Chen, Z., 2011. Laws of large numbers of negatively correlated random variables for capacities. Acta Math. Appl. Sin. Engl. Ser. 27 (4), 749–760.
Maccheroni, F., Marinacci, M., 2005. A strong law of large numbers for capacities. Ann. Probab. 33 (3), 1171–1178.
Marinacci, M., 1999. Limit laws for non-additive probabilities and their frequentist interpretation. J. Econom. Theory 84 (2), 145–195.
Peng, S., 1997. Backward SDE and related g-expectation. In: Mazliak, El Karoui (Eds.), Backward Stochastic Differential Equations. Pitman Research Notes in Math. Series, vol. 364, pp. 141–159.
Peng, S., 2010. Nonlinear expectations and stochastic calculus under uncertainty. arXiv:1002.4546v1 [math.PR].
Petrov, V.V., 1975. Sums of Independent Random Variables. Springer-Verlag, Berlin.
Wasserman, L., Kadane, J., 1990. Bayes's theorem for Choquet capacities. Ann. Statist. 18 (3), 1328–1339.
Zhang, L., 2016. Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectation with applications. Sci. China Math. 59 (4), 751–768.