J. Math. Anal. Appl. 484 (2020) 123662
Complete convergence for widely acceptable random variables under the sublinear expectations

Anna Kuczmaszewska

Department of Applied Mathematics, Lublin University of Technology, Nadbystrzycka 38D, 20-618 Lublin, Poland
Article info: Received 10 July 2019; available online 15 November 2019. Submitted by U. Stadtmueller.

Keywords: Complete convergence; Sub-linear expectations; Widely acceptable random variables; Regularly varying function; Baum-Katz theorem

Abstract. In this work we consider complete convergence for widely acceptable random variables under sub-linear expectations. The presented results are Baum-Katz type theorems that extend the corresponding results from the classical probability space to the case of a sub-linear expectation space.

© 2019 The Author. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
E-mail address: [email protected].
https://doi.org/10.1016/j.jmaa.2019.123662

1. Introduction

Classical probabilistic models have long been used for decision making in various areas of our lives. However, decision making under uncertainty means that models built on a classical probability space (Ω, F, P) are insufficient, and modelling such phenomena requires new tools. Sub-additive probability measures and sub-linear expectations belong to this group of tools and are very useful for modelling phenomena and processes under uncertainty in statistics, finance and insurance (see, for example, Denis and Martini [1] or Marinacci [3]). They make it possible to construct new probabilistic models that help financial decision makers answer questions about acceptable risk positions or the amount of capital protecting the client from excessive risk.

The use of new tools requires new definitions of many basic concepts, such as independence or identical distribution, and under non-linear expectations and sub-additive probabilities many basic lemmas and theorems take new forms. Many limit theorems have been established relatively recently, for example the central limit theorem and the weak law of large numbers (Peng [5], [4]), moment inequalities for maximum partial sums and the Kolmogorov strong law of large numbers (Zhang [9]), exponential inequalities, the strong law of large numbers and the law of the iterated logarithm for extended independent and
extended negatively dependent random variables (Zhang [7]), and complete and complete moment convergence for extended negatively dependent random variables (Zhong and Wu [10]).

This work presents new versions of known inequalities (an exponential inequality and a Hoffmann-Jørgensen type inequality). They are used to formulate Baum-Katz type theorems under a non-linear expectation E. The work is inspired by Fazekas et al. [2], which studies widely acceptable random variables {ξn, n ≥ 1}, i.e. random variables fulfilling the condition

\[ E e^{\lambda \sum_{i=1}^{n} \xi_i} \le g(n) \prod_{i=1}^{n} E e^{\lambda \xi_i}, \]
where 0 ≤ g(n) < ∞, λ ∈ R, n ∈ N, and E denotes the linear expectation corresponding to the probability measure P. We will conduct our considerations for random variables that satisfy the above condition, but under some non-linear expectation E.

2. Preliminaries

In this section we introduce some basic notation and concepts; we use the notation established by Peng [5]. Let (Ω, F) be a measurable space and let H be a linear space of real-valued functions defined on (Ω, F) such that if X1, X2, . . . , Xn ∈ H, then ϕ(X1, X2, . . . , Xn) ∈ H for each ϕ ∈ C_{l,Lip}(R^n), where C_{l,Lip}(R^n) denotes the linear space of functions ϕ satisfying

\[ |\varphi(x) - \varphi(y)| \le C\,\big(1 + |x|^m + |y|^m\big)\,|x - y| \quad \text{for all } x, y \in \mathbb{R}^n, \]
for some C > 0 and m ∈ N depending on ϕ. The space H can be considered as a space of “random variables”. In particular,

• if ϕ(x) = ax + b or ϕ(x) = x^m, then both of these functions belong to C_{l,Lip}(R),
• if ϕ, ψ ∈ C_{l,Lip}(R), then ϕ ∨ ψ, ϕ ∧ ψ ∈ C_{l,Lip}(R),
• if X, Y ∈ H and ϕ, ψ ∈ C_{l,Lip}(R), then |X|, |X|^m ∈ H and ϕ(X) · ψ(Y) ∈ H.

Particularly important in our further considerations are the assumptions that H contains all constants and that X ∈ H implies |X| ∈ H.

Definition 1. A sub-linear expectation E is a functional E : H → R satisfying the following conditions:

(i) monotonicity: E(X) ≥ E(Y) if X ≥ Y and X, Y ∈ H;
(ii) constant preserving: E(c) = c for c ∈ R;
(iii) sub-additivity: E(X + Y) ≤ E(X) + E(Y) for X, Y ∈ H;
(iv) positive homogeneity: E(λX) = λE(X) for λ ≥ 0.
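A standard way to picture Definition 1 is as an upper expectation over a family of probability measures, E(X) = sup_P E_P(X). The following sketch is purely illustrative and not from the paper: the three-outcome sample space, the set of probability vectors (the "models") and the value vectors are all made-up examples used to check properties (i)-(iv) numerically.

```python
# Illustrative sketch (not from the paper): a sub-linear expectation realized
# as an upper expectation over a finite family of probability measures,
# E[X] = max_P E_P[X].  Checks properties (i)-(iv) of Definition 1.

def make_sublinear_E(models):
    """models: probability vectors; a 'random variable' is a value vector."""
    def E(x):
        return max(sum(p_i * x_i for p_i, x_i in zip(p, x)) for p in models)
    return E

models = [(0.5, 0.3, 0.2), (0.2, 0.5, 0.3), (1/3, 1/3, 1/3)]
E = make_sublinear_E(models)

X = (1.0, -2.0, 3.0)
Y = (0.5, -2.5, 2.0)                     # Y <= X pointwise
XY = tuple(a + b for a, b in zip(X, Y))

assert E(Y) <= E(X)                                         # (i)  monotonicity
assert abs(E((7.0, 7.0, 7.0)) - 7.0) < 1e-9                 # (ii) constants
assert E(XY) <= E(X) + E(Y) + 1e-9                          # (iii) sub-additivity
assert abs(E(tuple(2 * a for a in X)) - 2 * E(X)) < 1e-9    # (iv) pos. homogeneity
assert -E(tuple(-a for a in X)) <= E(X)                     # conjugate -E(-X) <= E(X)
```

The last assertion previews the conjugate expectation defined below: for a maximum over linear expectations, −E(−X) is the corresponding minimum, so it never exceeds E(X).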
The triple (Ω, H, E) is called a sub-linear expectation space. In a sub-linear expectation space (Ω, H, E) we have new concepts of independence and negative dependence.

Definition 2. (Peng [4], Zhang [8])

(i) A random vector Y = (Y1, Y2, . . . , Yn), Yi ∈ H, is said to be independent of another random vector X = (X1, X2, . . . , Xm), Xi ∈ H, under E if for each function ϕ ∈ C_{l,Lip}(R^m × R^n) we have

\[ E\big[\varphi(X, Y)\big] = E\Big[ E\big[\varphi(x, Y)\big]\big|_{x=X} \Big], \]

whenever \( \bar{\varphi}(x) := E\big[|\varphi(x, Y)|\big] < \infty \) for all x and \( E\big[|\bar{\varphi}(X)|\big] < \infty \).

(ii) A random vector Y = (Y1, Y2, . . . , Yn), Yi ∈ H, is said to be negatively dependent on another random vector X = (X1, X2, . . . , Xm), Xi ∈ H, under E if for each pair of functions ϕ1 ∈ C_{l,Lip}(R^m) and ϕ2 ∈ C_{l,Lip}(R^n) we have

\[ E\big[\varphi_1(X)\,\varphi_2(Y)\big] \le E\big[\varphi_1(X)\big]\, E\big[\varphi_2(Y)\big], \]

whenever ϕ1, ϕ2 are coordinatewise both non-decreasing or both non-increasing, with ϕ1(X) ≥ 0, E[ϕ2(Y)] ≥ 0, E[|ϕ1(X)ϕ2(Y)|] < ∞, E[|ϕ1(X)|] < ∞, E[|ϕ2(Y)|] < ∞.

(iii) A sequence {Xn, n ≥ 1} is said to be a sequence of independent random variables if Xi+1 is independent of (X1, X2, . . . , Xi) for each i ≥ 1.

(iv) A sequence {Xn, n ≥ 1} is said to be a sequence of negatively dependent random variables if Xi+1 is negatively dependent on (X1, X2, . . . , Xi) for each i ≥ 1.

For a given sub-linear expectation E we define the conjugate expectation Ê by

\[ \hat{E}(X) := -E(-X) \quad \text{for all } X \in \mathcal{H}. \]
Another important concept is that of a capacity, defined as a function V : G → [0, 1], where G ⊂ F, satisfying

\[ V(\emptyset) = 0, \qquad V(\Omega) = 1, \qquad V(A) \le V(B) \ \text{ for all } A \subset B, \ A, B \in \mathcal{G}. \]
If the capacity V satisfies V(A ∪ B) ≤ V(A) + V(B) for all A, B ∈ G, it is said to be sub-additive. In a classical probability space (Ω, F, P), because of the equality P(A) = E I_A, A ∈ F, the indicator function I_A plays an important role. In a sub-linear expectation space this equality is no longer true, so we need to replace the indicator function by functions in C_{l,Lip}(R). One such modification is a function h ∈ C_{l,Lip}(R) fulfilling the following conditions: 0 ≤ h(x) ≤ 1; h(x) = 1 if |x| ≤ μ, for some 0 < μ < 1; h(x) = 0 if |x| > 1; and h(x) is non-increasing for x > 0. For this function we have

\[ I[|x| \le \mu] \le h(x) \le I[|x| \le 1], \qquad I[|x| > 1] \le 1 - h(x) \le I[|x| > \mu]. \]

For a fixed (Ω, H, E) we define
\[ V(A) := \inf\big\{\, E(\xi) : I_A \le \xi,\ \xi \in \mathcal{H} \,\big\}, \qquad \forall A \in \mathcal{F}. \tag{1} \]
Then the Choquet integral of X ∈ H with respect to V has the following form:

\[ C_V(X) := \int X \, dV = \int_{-\infty}^{0} \big( V(X > x) - 1 \big)\, dx + \int_{0}^{\infty} V(X > x)\, dx. \]
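For a non-negative variable on a finite sample space, the Choquet integral above reduces to a "layer cake" sum over the decreasingly sorted values. The sketch below is an illustration and not from the paper; the capacity ν(A) = (|A|/n)^θ with θ ≤ 1 is an assumed sub-additive distortion of the uniform probability, chosen only as an example.

```python
# Discrete Choquet integral C_V(X) = ∫_0^∞ V(X > x) dx for X >= 0 on a
# finite sample space (illustrative sketch, not from the paper).

def choquet(values, capacity_of_size):
    """Choquet integral of a non-negative X given by its outcome values;
    capacity_of_size(k) = V(A) for any set A of k outcomes (symmetric V)."""
    xs = sorted(values, reverse=True)           # x_(1) >= ... >= x_(n)
    n = len(xs)
    # sum over layers: (x_(k) - x_(k+1)) * V({top k outcomes}), x_(n+1) = 0
    return sum((xs[k - 1] - (xs[k] if k < n else 0.0)) * capacity_of_size(k)
               for k in range(1, n + 1))

n = 3
prob = lambda k: k / n                  # theta = 1: an ordinary probability
dist = lambda k: (k / n) ** 0.5         # theta = 0.5: sub-additive capacity

assert abs(choquet([3.0, 1.0, 2.0], prob) - 2.0) < 1e-12   # equals the mean
assert choquet([3.0, 1.0, 2.0], dist) >= 2.0               # capacity >= prob.
```

Since t^θ ≥ t on [0, 1] for θ ≤ 1, this capacity dominates the uniform probability pointwise, which is why the Choquet integral dominates the ordinary mean in the last assertion.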
In the remaining part of this section we present the main tools and preliminary lemmas that are useful in the proofs of the results presented in Section 4. First, we recall the concepts of slowly varying and regularly varying functions.

Definition 3. A function L : (0, ∞) → (0, ∞) is

(i) a slowly varying function (at infinity) if for any a > 0

\[ \lim_{x \to \infty} \frac{L(ax)}{L(x)} = 1, \]

(ii) a regularly varying function with index α > 0 if for any a > 0

\[ \lim_{x \to \infty} \frac{L(ax)}{L(x)} = a^{\alpha}. \]
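A quick numerical illustration of Definition 3 (not from the paper): L(x) = ln x is the standard example of a slowly varying function, and l(x) = x^α ln x is regularly varying with index α. Convergence of the ratios is only logarithmic, hence the loose tolerance at a large but finite x.

```python
# Numerical check of Definition 3 with L(x) = ln x (slowly varying) and
# l(x) = x**alpha * ln x (regularly varying with index alpha); the specific
# test points are arbitrary.
import math

def ratio(f, a, x):
    return f(a * x) / f(x)

L = math.log                                   # slowly varying
alpha = 1.5
l = lambda x: x**alpha * math.log(x)           # regularly varying, index 1.5

a, x = 5.0, 1e12
assert abs(ratio(L, a, x) - 1.0) < 0.1                    # L(ax)/L(x) -> 1
assert abs(ratio(l, a, x) / a**alpha - 1.0) < 0.1         # l(ax)/l(x) -> a**alpha
```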
Lemma 1. Every regularly varying function (with index α > 0) l : (0, ∞) → (0, ∞) is of the form l(x) = x^α · L(x), where L is a slowly varying function.

Lemma 2. (Zhidong and Chun [11]) Let L : (0, ∞) → (0, ∞) be a slowly varying function (at infinity). Then

(i) \( \lim_{x \to \infty} \frac{L(x+u)}{L(x)} = 1 \) for all u > 0,

(ii) \( \lim_{k \to \infty} \sup_{2^k \le x < 2^{k+1}} \frac{L(x)}{L(2^k)} = 1, \)

(iii) \( \lim_{x \to \infty} x^{\delta} L(x) = \infty \) and \( \lim_{x \to \infty} x^{-\delta} L(x) = 0 \) for all δ > 0,

(iv) \( c_1\, 2^{kr} L(\varepsilon 2^k) \le \sum_{j=1}^{k} 2^{jr} L(\varepsilon 2^j) \le c_2\, 2^{kr} L(\varepsilon 2^k) \) for every r > 0, ε > 0, positive integer k, and some constants c1 > 0, c2 > 0,

(v) \( c_3\, 2^{kr} L(\varepsilon 2^k) \le \sum_{j=k}^{\infty} 2^{jr} L(\varepsilon 2^j) \le c_4\, 2^{kr} L(\varepsilon 2^k) \) for every r < 0, ε > 0, positive integer k, and some constants c3 > 0, c4 > 0.

Lemma 3. (Zhong and Wu [10]) Let α > 0 and p > 0. Suppose that X ∈ H and L is a slowly varying function.

(i) Then for all c > 0

\[ C_V\big( |X|^p L(|X|^{1/\alpha}) \big) < \infty \]

is equivalent to

\[ \sum_{n=1}^{\infty} n^{\alpha p - 1} L(n)\, V\big( |X| > c n^{\alpha} \big) < \infty. \]

(ii) If \( C_V\big( |X|^p L(|X|^{1/\alpha}) \big) < \infty \), then for any θ > 0 and c > 0

\[ \sum_{k=1}^{\infty} \theta^{k \alpha p} L(\theta^k)\, V\big( |X| > c\, \theta^{k\alpha} \big) < \infty. \]
At the end of this section we introduce a definition that generalizes the concept of widely acceptable random variables (Wang, Li and Gao [6]) to the case of random variables in a sub-linear expectation space (Ω, H, E).

Definition 4. Let {Yn, n ≥ 1} be a sequence of random variables in a sub-linear expectation space (Ω, H, E). The sequence {Yn, n ≥ 1} is called widely acceptable if for all t ≥ 0 and for all n ∈ N

\[ E e^{t \sum_{i=1}^{n} Y_i} \le g(n) \prod_{i=1}^{n} E e^{t Y_i}, \tag{2} \]

where 0 < g(n) < ∞.

3. Exponential inequalities

Let (Ω, H, E) be a fixed sub-linear expectation space, let X ∈ H and let d > 0 be a real number. We define X^{(d)} = min{X, d}. Under the above notation we have the following version of the well-known exponential inequality.
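Before turning to the inequalities, condition (2) can be illustrated in the classical setting (this numerical sketch is not from the paper): for independent random variables the joint moment generating function factorizes, so (2) holds with g(n) = 1, while for strongly positively dependent variables the left-hand side can exceed the product, forcing g(n) > 1. All expectations below are exact sums over small made-up discrete distributions.

```python
# Condition (2) in the classical setting: independence gives g(n) = 1,
# a comonotone pair needs g(n) > 1 (illustrative, not from the paper).
import math
from itertools import product

def mgf(dist, t):
    """E[e^{tY}] for a discrete distribution given as {value: probability}."""
    return sum(p * math.exp(t * y) for y, p in dist.items())

t = 1.0
d1 = {-1.0: 0.5, 2.0: 0.5}
d2 = {0.0: 0.5, 1.0: 0.5}

# Independent pair: the joint law is the product measure.
lhs_indep = sum(p1 * p2 * math.exp(t * (y1 + y2))
                for (y1, p1), (y2, p2) in product(d1.items(), d2.items()))
rhs_indep = mgf(d1, t) * mgf(d2, t)
assert abs(lhs_indep - rhs_indep) < 1e-9       # (2) holds with g(2) = 1

# Comonotone pair Y1 = Y2 = Y with Y in {0, 1}: lhs exceeds the product.
dY = {0.0: 0.5, 1.0: 0.5}
lhs_dep = sum(p * math.exp(t * 2 * y) for y, p in dY.items())
rhs_dep = mgf(dY, t) ** 2
assert lhs_dep > rhs_dep                       # here g(2) must be > 1
```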
Theorem 1. Let X1, X2, . . . , Xn be random variables in (Ω, H, E) with E(Xi) ≤ 0 for 1 ≤ i ≤ n, and let \( S_n = \sum_{i=1}^{n} X_i \). Let d > 0. Assume that Yi := Xi^{(d)}, 1 ≤ i ≤ n, satisfy (2) for all t > 0. Then, for all x > 0, we have

\[ V(S_n > x) \le V\Big( \max_{1 \le i \le n} X_i > d \Big) + g(n) \exp\left\{ \frac{x}{d} - \frac{x}{d} \ln\left( 1 + \frac{xd}{\sum_{i=1}^{n} E|X_i|^2} \right) \right\}. \tag{3} \]

If X1, X2, . . . , Xn are such that E(−Xi) = E(Xi) = 0 and (2) is satisfied both for Yi := Xi^{(d)} and for Yi := (−Xi)^{(d)}, 1 ≤ i ≤ n, and all t > 0, then for all x > 0 we have

\[ V(|S_n| > x) \le V\Big( \max_{1 \le i \le n} |X_i| > d \Big) + 2 g(n) \exp\left\{ \frac{x}{d} - \frac{x}{d} \ln\left( 1 + \frac{xd}{\sum_{i=1}^{n} E|X_i|^2} \right) \right\}. \tag{4} \]
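As a sanity check in the classical case (where V is a probability, E the usual expectation, and, by independence, g(n) = 1), the bound (3) can be compared with an exact tail computation for symmetric Bernoulli variables. The numbers below are an illustration and are not part of the paper.

```python
# Exact check of inequality (3) for i.i.d. Rademacher variables: X_i = ±1
# with probability 1/2, so E(X_i) = 0 and E|X_i|^2 = 1.  For d > 1 the
# truncation X_i^(d) = X_i is inactive, V(max X_i > d) = 0, and (3) reads
# P(S_n > x) <= exp{x/d - (x/d) ln(1 + xd/n)}.
import math

def exact_tail(n, x):
    """P(S_n > x) exactly: S_n = 2K - n with K ~ Binomial(n, 1/2)."""
    return sum(math.comb(n, k) for k in range(n + 1) if 2 * k - n > x) / 2 ** n

def rhs(n, x, d):
    """Exponential factor on the right-hand side of (3) with g(n) = 1."""
    return math.exp(x / d - (x / d) * math.log(1.0 + x * d / n))

n, d = 10, 3.0
for x in range(1, n + 1):
    assert exact_tail(n, x) <= rhs(n, x, d)    # (3) holds at every level x
assert rhs(n, 8, d) < 1.0                      # and is non-trivial at x = 8
```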
Proof. Let \( Y_k = X_k^{(d)} \) and \( T_n = \sum_{i=1}^{n} Y_i \). Then \( X_k - Y_k = (X_k - d)^{+} \ge 0 \) and E(Yk) ≤ E(Xk) ≤ 0. For any x > 0 we have

\[ V(S_n > x) \le V\big( S_n > x,\ \{X_1 > d\} \cup \dots \cup \{X_n > d\} \big) + V\big( S_n > x,\ \{X_1 \le d\} \cap \dots \cap \{X_n \le d\} \big) \le V\Big( \max_{k \le n} X_k > d \Big) + V(T_n > x). \]

Now we can see that for any t > 0

\[ V(T_n > x) \le e^{-tx}\, E e^{t T_n} \le e^{-tx}\, g(n) \prod_{k=1}^{n} E e^{t Y_k}. \]

Moreover,

\[ e^{t Y_k} = 1 + t Y_k + \frac{e^{t Y_k} - 1 - t Y_k}{Y_k^2}\, Y_k^2 \le 1 + t Y_k + \frac{e^{td} - 1 - td}{d^2}\, Y_k^2, \]

by the monotonicity of the function \( f(x) = (e^{x} - 1 - x)/x^2 \). Therefore

\[ E e^{t Y_k} \le 1 + t E(Y_k) + \frac{e^{td} - 1 - td}{d^2}\, E(|Y_k|^2) \le \exp\left\{ \frac{e^{td} - 1 - td}{d^2}\, E(|X_k|^2) \right\} \]

and then we obtain

\[ V(T_n > x) \le e^{-tx} g(n) \prod_{k=1}^{n} \exp\left\{ \frac{e^{td} - 1 - td}{d^2}\, E(|X_k|^2) \right\} \le g(n) \exp\left\{ -tx + \frac{e^{td} - 1 - td}{d^2} \sum_{k=1}^{n} E(|X_k|^2) \right\}. \tag{5} \]

Because the minimum of the function \( f(t) = -tx + \frac{e^{td} - 1 - td}{d^2} \sum_{k=1}^{n} E(|X_k|^2) \) is reached at \( t_0 = \frac{1}{d} \ln\Big( 1 + \frac{xd}{\sum_{k=1}^{n} E(|X_k|^2)} \Big) \), putting t = t₀ in (5) we get the estimation

\[ V(T_n > x) \le g(n)\, e^{x/d} \exp\left\{ -\frac{x}{d} \ln\left( 1 + \frac{xd}{\sum_{k=1}^{n} E(|X_k|^2)} \right) \right\} \left( 1 + \frac{xd}{\sum_{k=1}^{n} E(|X_k|^2)} \right)^{-\sum_{k=1}^{n} E(|X_k|^2)/d^2} \le g(n) \exp\left\{ \frac{x}{d} - \frac{x}{d} \ln\left( 1 + \frac{xd}{\sum_{k=1}^{n} E(|X_k|^2)} \right) \right\}, \tag{6} \]

which proves (3). To prove (4), we note that

\[ V(|S_n| > x) = V\big( \{S_n > x\} \cup \{-S_n > x\} \big) \le V\Big( \max_{1 \le i \le n} |X_i| > d \Big) + V\Big( \sum_{i=1}^{n} X_i^{(d)} > x \Big) + V\Big( \sum_{i=1}^{n} (-X_i)^{(d)} > x \Big). \]
By (6), we get

\[ V\Big( \sum_{i=1}^{n} X_i^{(d)} > x \Big) \le g(n) \exp\left\{ \frac{x}{d} - \frac{x}{d} \ln\left( 1 + \frac{xd}{\sum_{k=1}^{n} E(|X_k|^2)} \right) \right\}. \]

Therefore it is enough to show that

\[ V\Big( \sum_{i=1}^{n} (-X_i)^{(d)} > x \Big) \le g(n) \exp\left\{ \frac{x}{d} - \frac{x}{d} \ln\left( 1 + \frac{xd}{\sum_{k=1}^{n} E(|X_k|^2)} \right) \right\}. \tag{7} \]

By the assumption that (2) is true for Yi := (−Xi)^{(d)}, 1 ≤ i ≤ n, and t > 0, we have

\[ V\Big( \sum_{i=1}^{n} (-X_i)^{(d)} > x \Big) \le e^{-tx}\, E e^{t \sum_{i=1}^{n} (-X_i)^{(d)}} \le e^{-tx}\, g(n) \prod_{i=1}^{n} E e^{t (-X_i)^{(d)}} \tag{8} \]

for all x > 0. Moreover, by E(−Xi) = E(Xi) = 0 we get \( E\big( (-X_i)^{(d)} \big) \le 0 \). Therefore

\[ E e^{t(-X_i)^{(d)}} \le 1 + t E\big( (-X_i)^{(d)} \big) + \frac{e^{td} - 1 - td}{d^2}\, E\big( |(-X_i)^{(d)}|^2 \big) \le 1 + \frac{e^{td} - 1 - td}{d^2}\, E\big( |(-X_i)^{(d)}|^2 \big) \le \exp\left\{ \frac{e^{td} - 1 - td}{d^2}\, E(|X_i|^2) \right\}. \tag{9} \]

Applying the considerations from (5) and (6) to the random variables (−X1)^{(d)}, (−X2)^{(d)}, . . . , (−Xn)^{(d)}, by (8) and (9) we get (7). This ends the proof of (4).

Using the above exponential inequality, we can get the following Hoffmann-Jørgensen type inequality.

Theorem 2. Let X1, X2, . . . , Xn be random variables in (Ω, H, E) with E(−Xi) = E(Xi) = 0 for 1 ≤ i ≤ n. Assume that (2) is satisfied both for Yi := Xi^{(d)} and for Yi := (−Xi)^{(d)}, 1 ≤ i ≤ n, for all d > 0 and all t > 0. Then, for any x > 0 and any positive integer j, there exists a constant Cj > 0 such that

\[ V(|S_n| > x) \le V\Big( \max_{1 \le i \le n} |X_i| \ge \frac{x}{j} \Big) + C_j\, g(n) \left( \frac{1}{x^2} \sum_{i=1}^{n} E|X_i|^2 \right)^{j}. \tag{10} \]

Proof. Putting d = x/j in (4) we get

\[
V(|S_n| > x) \le V\Big( \max_{1 \le i \le n} |X_i| \ge \frac{x}{j} \Big) + 2 g(n) \exp\left\{ j - j \ln\left( 1 + \frac{x^2}{j \sum_{i=1}^{n} E|X_i|^2} \right) \right\}
= V\Big( \max_{1 \le i \le n} |X_i| \ge \frac{x}{j} \Big) + 2 g(n)\, e^{j} \left( 1 + \frac{x^2}{j \sum_{i=1}^{n} E|X_i|^2} \right)^{-j}
\]
\[
\le V\Big( \max_{1 \le i \le n} |X_i| \ge \frac{x}{j} \Big) + 2 g(n)\, e^{j} \left( \frac{j \sum_{i=1}^{n} E|X_i|^2}{x^2 + j \sum_{i=1}^{n} E|X_i|^2} \right)^{j}
\le V\Big( \max_{1 \le i \le n} |X_i| \ge \frac{x}{j} \Big) + 2 g(n)\, e^{j} j^{j} \left( \frac{1}{x^2} \sum_{i=1}^{n} E|X_i|^2 \right)^{j}.
\]

This ends the proof of (10) with \( C_j = 2\, e^{j} j^{j} \).
4. Complete convergence

In this section we will use the following notation:

\[ \tilde{X}^{(d)} = -d\, I[X < -d] + X\, I[|X| \le d] + d\, I[X > d] \]

and

\[ \widetilde{(-X)}^{(d)} = d\, I[X < -d] - X\, I[|X| \le d] - d\, I[X > d] \]

for d > 0.

Theorem 3. Let {Xn, n ≥ 1} be a sequence of random variables in (Ω, H, E) such that the sequences \( \{\tilde{X}_n^{(d)}, n \ge 1\} \) and \( \{\widetilde{(-X_n)}^{(d)}, n \ge 1\} \) of truncated random variables are widely acceptable for all d > 0. Let 1 ≤ p < r ≤ 2 and αp ≥ 1. Assume that g(·) is a regularly varying function with index s, where 0 < s < α(r − p). If for any ε > 0 the following conditions are satisfied:

(a) \( \sum_{n=1}^{\infty} n^{\alpha(p-r)-2}\, g(n) \sum_{k=1}^{n} E\big| \tilde{X}_k^{(\varepsilon n^{\alpha}/4)} \big|^r < \infty, \)

(b) \( \sum_{n=1}^{\infty} n^{\alpha p - 2}\, V\Big( \max_{1 \le i \le n} |X_i| > \frac{\varepsilon n^{\alpha}}{4} \Big) < \infty, \)

(c) \( n^{-\alpha} \sum_{i=1}^{n} E\Big( |X_i| - \frac{\varepsilon n^{\alpha}}{4} \Big)^{+} \to 0, \)

then

\[ \sum_{n=1}^{\infty} n^{\alpha p - 2}\, V\left( \Big\{ \sum_{i=1}^{n} \big( X_i - E(X_i) \big) \ge \varepsilon n^{\alpha} \Big\} \cup \Big\{ \sum_{i=1}^{n} \big( X_i - \hat{E}(X_i) \big) \le -\varepsilon n^{\alpha} \Big\} \right) < \infty. \tag{11} \]
Proof. Note that

\[
V\left( \Big\{ \sum_{i=1}^{n} \big( X_i - E(X_i) \big) \ge \varepsilon n^{\alpha} \Big\} \cup \Big\{ \sum_{i=1}^{n} \big( X_i - \hat{E}(X_i) \big) \le -\varepsilon n^{\alpha} \Big\} \right)
= V\left( \Big\{ \sum_{i=1}^{n} \big( X_i - E(X_i) \big) \ge \varepsilon n^{\alpha} \Big\} \cup \Big\{ \sum_{i=1}^{n} \big( -X_i - E(-X_i) \big) \ge \varepsilon n^{\alpha} \Big\} \right)
\]
\[
\le V\Big( \sum_{i=1}^{n} \big( X_i - E(X_i) \big) \ge \varepsilon n^{\alpha} \Big) + V\Big( \sum_{i=1}^{n} \big( -X_i - E(-X_i) \big) \ge \varepsilon n^{\alpha} \Big). \tag{12}
\]

In order to simplify the notation in this proof, let us denote
\[ \overline{X}_i = \tilde{X}_i^{(d)} = -d\, I[X_i < -d] + X_i\, I[|X_i| \le d] + d\, I[X_i > d], \quad \text{where } d := \frac{\varepsilon n^{\alpha}}{4}. \]

Then we have

\[
I = \sum_{n=1}^{\infty} n^{\alpha p - 2}\, V\Big( \sum_{i=1}^{n} \big( X_i - E(X_i) \big) \ge \varepsilon n^{\alpha} \Big)
= \sum_{n=1}^{\infty} n^{\alpha p - 2}\, V\Big( \sum_{i=1}^{n} \big( \overline{X}_i - E(\overline{X}_i) \big) + \sum_{i=1}^{n} \big[ (X_i + d)\, I[X_i < -d] + (X_i - d)\, I[X_i > d] \big] + \sum_{i=1}^{n} \big( E(\overline{X}_i) - E(X_i) \big) \ge \varepsilon n^{\alpha} \Big) \tag{13}
\]
\[
\le \sum_{n=1}^{\infty} n^{\alpha p - 2}\, V\Big( \sum_{i=1}^{n} \big( \overline{X}_i - E(\overline{X}_i) \big) + \sum_{i=1}^{n} (X_i - d)\, I[X_i > d] + \sum_{i=1}^{n} \big( E(\overline{X}_i) - E(X_i) \big) \ge \varepsilon n^{\alpha} \Big)
\]
\[
\le \sum_{n=1}^{\infty} n^{\alpha p - 2}\, V\Big( \sum_{i=1}^{n} \big( \overline{X}_i - E(\overline{X}_i) \big) \ge \frac{\varepsilon n^{\alpha}}{2} \Big)
+ \sum_{n=1}^{\infty} n^{\alpha p - 2}\, V\Big( \sum_{i=1}^{n} (X_i - d)\, I[X_i > d] \ge \frac{\varepsilon n^{\alpha}}{4} \Big)
+ \sum_{n=1}^{\infty} n^{\alpha p - 2}\, V\Big( n^{-\alpha} \sum_{i=1}^{n} \big| E(\overline{X}_i) - E(X_i) \big| \ge \frac{\varepsilon}{4} \Big)
= I_1 + I_2 + I_3.
\]
For the estimation of I3 we will use assumption (c). We have

\[
n^{-\alpha} \sum_{i=1}^{n} \big| E(\overline{X}_i) - E(X_i) \big| \le n^{-\alpha} \sum_{i=1}^{n} E\big| \overline{X}_i - X_i \big|
= n^{-\alpha} \sum_{i=1}^{n} E\big| (-d - X_i)\, I[X_i < -d] + (d - X_i)\, I[X_i > d] \big|
\le n^{-\alpha} \sum_{i=1}^{n} E\big( |X_i| - d \big)^{+} \to 0,
\]

as n → ∞. Therefore, for sufficiently large n,

\[ n^{-\alpha} \sum_{i=1}^{n} \big| E(\overline{X}_i) - E(X_i) \big| < \frac{\varepsilon}{4}, \tag{14} \]
which implies I3 < ∞. Moreover, by assumption (b),

\[ I_2 = \sum_{n=1}^{\infty} n^{\alpha p - 2}\, V\Big( \sum_{i=1}^{n} (X_i - d)\, I[X_i > d] \ge \frac{\varepsilon n^{\alpha}}{4} \Big) \le \sum_{n=1}^{\infty} n^{\alpha p - 2}\, V\Big( \max_{1 \le i \le n} |X_i| > d \Big) < \infty. \]
To prove the convergence of I1, we will use the method from the proof of Theorem 1. Let us first notice that if X1, X2, . . . , Xn satisfy the widely acceptable condition (2), then for any λ > 0 and n ∈ N we have

\[
E e^{\lambda \sum_{i=1}^{n} ( X_i - E(X_i) )}
= e^{-\lambda \sum_{i=1}^{n} E(X_i)}\, E e^{\lambda \sum_{i=1}^{n} X_i}
\le e^{-\lambda \sum_{i=1}^{n} E(X_i)}\, g(n) \prod_{i=1}^{n} E e^{\lambda X_i}
= g(n) \prod_{i=1}^{n} e^{-\lambda E(X_i)}\, E e^{\lambda X_i}
= g(n) \prod_{i=1}^{n} E e^{\lambda ( X_i - E(X_i) )},
\]
which proves that X1 − E(X1), X2 − E(X2), . . . , Xn − E(Xn) are widely acceptable too. Hence we have

\[
\sum_{n=1}^{\infty} n^{\alpha p - 2}\, V\Big( \sum_{i=1}^{n} \big( \overline{X}_i - E(\overline{X}_i) \big) \ge \frac{\varepsilon n^{\alpha}}{2} \Big)
\le \sum_{n=1}^{\infty} n^{\alpha p - 2}\, e^{-t \varepsilon n^{\alpha}/2}\, E e^{t \sum_{i=1}^{n} ( \overline{X}_i - E(\overline{X}_i) )}
\le \sum_{n=1}^{\infty} n^{\alpha p - 2}\, e^{-t \varepsilon n^{\alpha}/2}\, g(n) \prod_{i=1}^{n} E e^{t ( \overline{X}_i - E(\overline{X}_i) )}.
\]
Similarly as in the proof of Theorem 1, we make the following estimation:

\[
e^{t( \overline{X}_k - E(\overline{X}_k) )}
= 1 + t\big( \overline{X}_k - E(\overline{X}_k) \big) + \frac{e^{t( \overline{X}_k - E(\overline{X}_k) )} - 1 - t\big( \overline{X}_k - E(\overline{X}_k) \big)}{\big( \overline{X}_k - E(\overline{X}_k) \big)^2}\, \big( \overline{X}_k - E(\overline{X}_k) \big)^2
\]
\[
\le 1 + t\big( \overline{X}_k - E(\overline{X}_k) \big) + \frac{e^{2td} - 1 - 2td}{(2d)^2}\, \big( \overline{X}_k - E(\overline{X}_k) \big)^2
\le 1 + t\big( \overline{X}_k - E(\overline{X}_k) \big) + \frac{e^{2td} - 1 - 2td}{(2d)^r}\, \big| \overline{X}_k - E(\overline{X}_k) \big|^r,
\]

since \( |\overline{X}_k - E(\overline{X}_k)| \le 2d \) and r ≤ 2.
Hence we have

\[ E e^{t( \overline{X}_k - E(\overline{X}_k) )} \le 1 + \frac{e^{2td} - 1 - 2td}{(2d)^r}\, E\big| \overline{X}_k - E(\overline{X}_k) \big|^r \le \exp\left\{ \frac{e^{2td} - 1 - 2td}{(2d)^r}\, E\big| \overline{X}_k - E(\overline{X}_k) \big|^r \right\} \]

and

\[
V\Big( \sum_{i=1}^{n} \big( \overline{X}_i - E(\overline{X}_i) \big) \ge x \Big)
\le e^{-tx}\, g(n) \exp\left\{ \frac{e^{2td} - 1 - 2td}{(2d)^r} \sum_{k=1}^{n} E\big| \overline{X}_k - E(\overline{X}_k) \big|^r \right\}
= g(n) \exp\left\{ -tx + \frac{e^{2td} - 1 - 2td}{(2d)^r} \sum_{k=1}^{n} E\big| \overline{X}_k - E(\overline{X}_k) \big|^r \right\}
\]

for any x > 0. Putting \( t = \frac{1}{2d} \ln\Big( 1 + \frac{x (2d)^{r-1}}{\sum_{k=1}^{n} E|\overline{X}_k - E(\overline{X}_k)|^r} \Big) \), we have

\[
V\Big( \sum_{i=1}^{n} \big( \overline{X}_i - E(\overline{X}_i) \big) \ge x \Big)
\le g(n) \exp\left\{ \frac{x}{2d} - \frac{x}{2d} \ln\left( 1 + \frac{x (2d)^{r-1}}{\sum_{k=1}^{n} E|\overline{X}_k - E(\overline{X}_k)|^r} \right) \right\}
\]
\[
\le g(n)\, e^{x/(2d)} \left( 1 + \frac{x (2d)^{r-1}}{\sum_{k=1}^{n} E|\overline{X}_k - E(\overline{X}_k)|^r} \right)^{-x/(2d)}
\le g(n)\, e^{x/(2d)} \left( \frac{\sum_{k=1}^{n} E|\overline{X}_k - E(\overline{X}_k)|^r}{x (2d)^{r-1}} \right)^{x/(2d)}. \tag{15}
\]
Hence, for \( x = \frac{\varepsilon n^{\alpha}}{2} \) and \( d = \frac{\varepsilon n^{\alpha}}{4} \) (so that x/(2d) = 1 and \( x(2d)^{r-1} = (\varepsilon/2)^r n^{\alpha r} \)), we get

\[
I_1 = \sum_{n=1}^{\infty} n^{\alpha p - 2}\, V\Big( \sum_{i=1}^{n} \big( \overline{X}_i - E(\overline{X}_i) \big) \ge \frac{\varepsilon n^{\alpha}}{2} \Big)
\le \sum_{n=1}^{\infty} n^{\alpha p - 2}\, g(n)\, e\, \Big( \frac{\varepsilon}{2} \Big)^{-r} n^{-\alpha r} \sum_{k=1}^{n} E\big| \overline{X}_k - E(\overline{X}_k) \big|^r
\]
\[
\le e\, \Big( \frac{\varepsilon}{2} \Big)^{-r} \sum_{n=1}^{\infty} n^{\alpha(p-r)-2}\, g(n) \sum_{k=1}^{n} E\big| \overline{X}_k - E(\overline{X}_k) \big|^r < \infty,
\]

since, by the Cr-inequality, \( E|\overline{X}_k - E(\overline{X}_k)|^r \le C\, E|\tilde{X}_k^{(\varepsilon n^{\alpha}/4)}|^r \), and the last series is finite by (a).
This proves that I < ∞. On the basis of analogous considerations, conducted for \( \{\widetilde{(-X_n)}^{(d)}, n \ge 1\} \), we obtain

\[ J = \sum_{n=1}^{\infty} n^{\alpha p - 2}\, V\Big( \sum_{i=1}^{n} \big( -X_i - E(-X_i) \big) \ge \varepsilon n^{\alpha} \Big) < \infty, \]

which completes the proof of (11).

An important tool used in the proof of our next result is a sequence of functions hj(x) ∈ C_{l,Lip}(R), j ≥ 1, satisfying 0 ≤ hj(x) ≤ 1 for all x ∈ R and

\[ h_j\Big( \frac{x}{2^{j\beta}} \Big) = \begin{cases} 1 & \text{if } 2^{j\beta} < |x| \le 2^{(j+1)\beta}, \\ 0 & \text{if } |x| \le \mu 2^{j\beta} \ \text{or} \ |x| > (1+\mu)\, 2^{(j+1)\beta}. \end{cases} \]

For these functions we have

\[ h_j\Big( \frac{X}{2^{j\beta}} \Big) \le I\big[ \mu 2^{j\beta} < |X| \le (1+\mu)\, 2^{(j+1)\beta} \big], \qquad |X|^m\, h\Big( \frac{X}{2^{k\beta}} \Big) \le 1 + \sum_{j=1}^{k} |X|^m\, h_j\Big( \frac{X}{2^{j\beta}} \Big), \quad m > 0, \tag{16} \]

and

\[ 1 - h\Big( \frac{X}{2^{k\beta}} \Big) \le \sum_{j=k}^{\infty} h_j\Big( \frac{X}{2^{j\beta}} \Big). \tag{17} \]
Corollary 1. Let {Xn, n ≥ 1} be a sequence of random variables in (Ω, H, E) such that both sequences \( \{\tilde{X}_n^{(d)}, n \ge 1\} \) and \( \{\widetilde{(-X_n)}^{(d)}, n \ge 1\} \) of truncated random variables are widely acceptable for all d > 0 and some regularly varying (with exponent s) function g(·), where 0 < s < α(r − p), 1 ≤ p < r ≤ 2. Moreover, we assume that there exist a random variable X and a constant C > 0 satisfying

\[ E\big( \varphi(X_n) \big) \le C\, E\big( \varphi(X) \big) \tag{18} \]

for all non-negative functions ϕ ∈ C_{l,Lip}(R), and

\[ E\big( |X|^p\, g(|X|^{1/\alpha}) \big) \le C_V\big( |X|^p\, g(|X|^{1/\alpha}) \big) < \infty. \tag{19} \]

Then for αp ≥ 1 and any ε > 0

\[ \sum_{n=1}^{\infty} n^{\alpha p - 2}\, V\left( \Big\{ \sum_{i=1}^{n} \big( X_i - E(X_i) \big) \ge \varepsilon n^{\alpha} \Big\} \cup \Big\{ \sum_{i=1}^{n} \big( X_i - \hat{E}(X_i) \big) \le -\varepsilon n^{\alpha} \Big\} \right) < \infty. \]
Proof. Using the same notation as in Theorem 3, i.e. \( \overline{X}_n := \tilde{X}_n^{(\varepsilon n^{\alpha}/4)} \), we need to verify conditions (a)–(c) from Theorem 3. By (1), the Cr-inequality, (18) and (19) we have

\[
E\big| \overline{X}_k \big|^r \le C\, E\Big( |X_k|^r\, I\Big[ |X_k| \le \frac{\varepsilon n^{\alpha}}{4} \Big] \Big) + \Big( \frac{\varepsilon n^{\alpha}}{4} \Big)^r E\Big( I\Big[ |X_k| > \frac{\varepsilon n^{\alpha}}{4} \Big] \Big)
\le C\, E\Big( |X_k|^r\, h\Big( \frac{4\mu X_k}{\varepsilon n^{\alpha}} \Big) \Big) + \Big( \frac{\varepsilon n^{\alpha}}{4} \Big)^r E\Big( 1 - h\Big( \frac{4 X_k}{\varepsilon n^{\alpha}} \Big) \Big)
\]
\[
\le C\, E\Big( |X|^r\, h\Big( \frac{4\mu X}{\varepsilon n^{\alpha}} \Big) \Big) + \Big( \frac{\varepsilon n^{\alpha}}{4} \Big)^r E\Big( 1 - h\Big( \frac{4X}{\varepsilon n^{\alpha}} \Big) \Big)
\le C\, E\Big( |X|^r\, h\Big( \frac{4\mu X}{\varepsilon n^{\alpha}} \Big) \Big) + \Big( \frac{\varepsilon n^{\alpha}}{4} \Big)^r V\Big( |X| > \mu \frac{\varepsilon n^{\alpha}}{4} \Big). \tag{20}
\]
Therefore, for condition (a), by (20) we get

\[
\sum_{n=1}^{\infty} n^{\alpha(p-r)-2}\, g(n) \sum_{k=1}^{n} E\big| \overline{X}_k \big|^r
\le \sum_{n=1}^{\infty} n^{\alpha(p-r)-2}\, g(n) \sum_{k=1}^{n} \left[ C\, E\Big( |X|^r h\Big( \frac{4\mu X}{\varepsilon n^{\alpha}} \Big) \Big) + \Big( \frac{\varepsilon n^{\alpha}}{4} \Big)^r V\Big( |X| > \mu \frac{\varepsilon n^{\alpha}}{4} \Big) \right]
\]
\[
\le C \sum_{n=1}^{\infty} n^{\alpha(p-r)-1}\, g(n)\, E\Big( |X|^r h\Big( \frac{4\mu X}{\varepsilon n^{\alpha}} \Big) \Big) + C \sum_{n=1}^{\infty} n^{\alpha p - 1}\, g(n)\, V\Big( |X| > \mu \frac{\varepsilon n^{\alpha}}{4} \Big) = I_1 + I_2.
\]
We can estimate I2 using Lemma 3. Indeed, we see that

\[ I_2 = \sum_{n=1}^{\infty} n^{\alpha p - 1}\, g(n)\, V\Big( |X| > \mu \frac{\varepsilon n^{\alpha}}{4} \Big) \sim \int_{0}^{\infty} \alpha p\, y^{\alpha p - 1}\, g(y)\, V\big( |X| > c y^{\alpha} \big)\, dy \le C \cdot C_V\big( |X|^p\, g(|X|^{1/\alpha}) \big) < \infty \tag{21} \]

by assumption (19).
\[
I_1 = \sum_{n=1}^{\infty} n^{\alpha(p-r)-1}\, g(n)\, E\Big( |X|^r h\Big( \frac{4\mu X}{\varepsilon n^{\alpha}} \Big) \Big)
= \sum_{k=1}^{\infty} \sum_{n=2^{k-1}}^{2^k - 1} n^{\alpha(p-r)-1}\, g(n)\, E\Big( |X|^r h\Big( \frac{4\mu X}{\varepsilon n^{\alpha}} \Big) \Big)
\le C \sum_{k=1}^{\infty} 2^{k\alpha(p-r)}\, g(2^k)\, E\Big( |X|^r h\Big( \frac{4\mu X}{\varepsilon (2^k)^{\alpha}} \Big) \Big)
\]
\[
\le C \sum_{k=1}^{\infty} 2^{k\alpha(p-r)}\, g(2^k) \sum_{j=1}^{k} E\Big( |X|^r h_j\Big( \frac{4\mu X}{\varepsilon (2^j)^{\alpha}} \Big) \Big)
= C \sum_{j=1}^{\infty} \Big( \sum_{k=j}^{\infty} 2^{k\alpha(p-r)}\, g(2^k) \Big)\, E\Big( |X|^r h_j\Big( \frac{4\mu X}{\varepsilon (2^j)^{\alpha}} \Big) \Big)
\le C \sum_{j=1}^{\infty} 2^{j\alpha(p-r)}\, g(2^j)\, E\Big( |X|^r h_j\Big( \frac{4\mu X}{\varepsilon (2^j)^{\alpha}} \Big) \Big)
\]
\[
\le C \sum_{j=1}^{\infty} 2^{j\alpha(p-r)}\, g(2^j)\, (2^{j\alpha})^{r}\, V\Big( |X| > \frac{\varepsilon 2^{j\alpha}}{4} \Big)
\le C \sum_{n=1}^{\infty} n^{\alpha p - 1}\, g(n)\, V\Big( |X| > \frac{\varepsilon n^{\alpha}}{4} \Big) < \infty
\]
by the considerations in (21) and assumption (19). To prove that, under the assumptions of Corollary 1, condition (b) is fulfilled, we note that

\[
\sum_{n=1}^{\infty} n^{\alpha p - 2}\, V\Big( \max_{1 \le i \le n} |X_i| > \frac{\varepsilon n^{\alpha}}{4} \Big)
\le \sum_{n=1}^{\infty} n^{\alpha p - 2} \sum_{i=1}^{n} V\Big( |X_i| > \frac{\varepsilon n^{\alpha}}{4} \Big)
\le \sum_{n=1}^{\infty} n^{\alpha p - 2} \sum_{i=1}^{n} E\Big( 1 - h\Big( \frac{4 X_i}{\varepsilon n^{\alpha}} \Big) \Big)
\]
\[
\le C \sum_{n=1}^{\infty} n^{\alpha p - 1}\, E\Big( 1 - h\Big( \frac{4X}{\varepsilon n^{\alpha}} \Big) \Big)
\le C \sum_{n=1}^{\infty} n^{\alpha p - 1}\, V\Big( |X| > \mu \frac{\varepsilon n^{\alpha}}{4} \Big) < \infty
\]
by assumption (19). To complete the proof, we should also show that \( n^{-\alpha} \sum_{i=1}^{n} E\big( |X_i| - \frac{\varepsilon n^{\alpha}}{4} \big)^{+} \to 0 \) as n → ∞. Indeed, we have

\[
n^{-\alpha} \sum_{i=1}^{n} E\Big( |X_i| - \frac{\varepsilon n^{\alpha}}{4} \Big)^{+}
\le n^{-\alpha} \sum_{i=1}^{n} E\Big( |X_i|\, I\Big[ |X_i| > \frac{\varepsilon n^{\alpha}}{4} \Big] \Big)
\le n^{-\alpha} \sum_{i=1}^{n} E\Big( |X_i| \Big( 1 - h\Big( \frac{4 X_i}{\varepsilon n^{\alpha}} \Big) \Big) \Big)
\le C\, n^{-\alpha} \sum_{i=1}^{n} E\Big( |X| \Big( 1 - h\Big( \frac{4X}{\varepsilon n^{\alpha}} \Big) \Big) \Big)
\]
\[
\le C\, n^{-\alpha+1}\, \frac{E|X|^p}{(n^{\alpha})^{p-1}} = C\, n^{-\alpha p + 1}\, E|X|^p \to 0, \quad \text{as } n \to \infty,
\]
because αp > 1. The proof is completed.

5. Extended negatively dependent random variables

In this section we apply our results to extended negatively dependent random variables. Let C_{b,Lip}(R) denote the space of bounded and Lipschitz continuous functions.

Definition 5. (Zhang [7]) In a sub-linear expectation space (Ω, H, E), random variables {Xn, n ≥ 1} are said to be upper (resp. lower) extended negatively dependent if there exists some constant K ≥ 1 such that

\[ E\Big( \prod_{i=1}^{n} \varphi_i(X_i) \Big) \le K \prod_{i=1}^{n} E\big( \varphi_i(X_i) \big), \qquad n \ge 1, \tag{22} \]

where ϕi ∈ C_{b,Lip}(R), ϕi ≥ 0, 1 ≤ i ≤ n, and all functions ϕi are non-decreasing (resp. all are non-increasing). They are called extended negatively dependent if they are both upper extended negatively dependent and lower extended negatively dependent.

If {Xn, n ≥ 1} is a sequence of upper extended negatively dependent random variables and f1(x), f2(x), . . . ∈ C_{l,Lip}(R) are all non-decreasing (resp. all non-increasing) functions, then {fn(Xn), n ≥ 1} is also a sequence of upper (resp. lower) extended negatively dependent random variables.

Moreover, we note that if {Xn, n ≥ 1} is a sequence of extended negatively dependent random variables, then {Xn^{(d)}, n ≥ 1} and \( \{\tilde{X}_n^{(d)}, n \ge 1\} \) are widely acceptable with g(n) = K, i.e.

\[ E e^{t \sum_{i=1}^{n} X_i^{(d)}} \le K \prod_{i=1}^{n} E e^{t X_i^{(d)}} \quad \text{and} \quad E e^{t \sum_{i=1}^{n} \tilde{X}_i^{(d)}} \le K \prod_{i=1}^{n} E e^{t \tilde{X}_i^{(d)}}, \qquad n \in \mathbb{N}, \tag{23} \]
for all t > 0. This is a consequence of the fact that the functions ϕ(x) := e^{t(x∧d)} and ϕ(x) := e^{t[(x∧d)∨(−d)]}, d > 0, belong to C_{b,Lip}(R) and are non-decreasing. Similarly, because the functions ϕ(x) := e^{t(−x∧d)} and ϕ(x) := e^{t[(−x∧d)∨(−d)]}, d > 0, also belong to C_{b,Lip}(R) and are non-increasing, we conclude that {(−Xn)^{(d)}, n ≥ 1} and \( \{\widetilde{(-X_n)}^{(d)}, n \ge 1\} \) are widely acceptable, too. Therefore we have the following corollary.

Corollary 2. Let {Xn, n ≥ 1} be a sequence of extended negatively dependent random variables in a sub-linear expectation space (Ω, H, E). We assume that there exist a random variable X and a constant C > 0 satisfying

\[ E\big( \varphi(X_n) \big) \le C\, E\big( \varphi(X) \big), \qquad n \ge 1, \]

for all non-negative functions ϕ ∈ C_{l,Lip}(R). Moreover, we assume E(|X|^p) ≤ C_V(|X|^p) < ∞ for 1 ≤ p ≤ 2. Then for αp > 1 and any ε > 0

\[ \sum_{n=1}^{\infty} n^{\alpha p - 2}\, V\left( \Big\{ \sum_{i=1}^{n} \big( X_i - E(X_i) \big) \ge \varepsilon n^{\alpha} \Big\} \cup \Big\{ \sum_{i=1}^{n} \big( X_i - \hat{E}(X_i) \big) \le -\varepsilon n^{\alpha} \Big\} \right) < \infty. \]
Proof. The proof is a simple consequence of Theorem 3.

Now, let us consider random variables X1, X2, . . . such that for any positive integer n there exists a finite g(n) > 0 satisfying

\[ E\Big( \prod_{i=1}^{n} \varphi_i(X_i) \Big) \le g(n) \prod_{i=1}^{n} E\big( \varphi_i(X_i) \big), \tag{24} \]

where ϕi ∈ C_{b,Lip}(R), ϕi ≥ 0, 1 ≤ i ≤ n, and all functions ϕi are non-decreasing or all are non-increasing. It is obvious that condition (24) generalizes condition (22) and, similarly as in the extended negatively dependent case, {Xn^{(d)}, n ≥ 1} and \( \{\tilde{X}_n^{(d)}, n \ge 1\} \), d > 0, are widely acceptable if {Xn, n ≥ 1} satisfies (24). Therefore we can formulate the following generalization of Corollary 2 as a consequence of Theorem 3.

Corollary 3. Let {Xn, n ≥ 1} be a sequence of random variables in (Ω, H, E) satisfying (24) for all non-negative and non-decreasing or non-increasing functions ϕi ∈ C_{b,Lip}(R), 1 ≤ i ≤ n, n ∈ N, and some regularly varying (with exponent s) function g(·), where 0 < s < α(r − p), 1 ≤ p < r ≤ 2. Assume that there exist a random variable X and a constant C > 0 satisfying (18) and (19) for all non-negative functions ϕ ∈ C_{l,Lip}(R). Then for αp ≥ 1 and any ε > 0

\[ \sum_{n=1}^{\infty} n^{\alpha p - 2}\, V\left( \Big\{ \sum_{i=1}^{n} \big( X_i - E(X_i) \big) \ge \varepsilon n^{\alpha} \Big\} \cup \Big\{ \sum_{i=1}^{n} \big( X_i - \hat{E}(X_i) \big) \le -\varepsilon n^{\alpha} \Big\} \right) < \infty. \]
This result generalizes the result of I. Fazekas, S. Pescora and B. Porvazsnyik [2] for widely orthant dependent sequences of random variables {Xn, n ≥ 1}, weakly mean dominated by a random variable X, from the classical probability space (Ω, F, P) to the case of random variables from a sub-linear expectation space (Ω, H, E).

References

[1] L. Denis, C. Martini, A theoretical framework for pricing of contingent claims in the presence of model uncertainty, Ann. Appl. Probab. 16 (2006) 827–852.
[2] I. Fazekas, S. Pescora, B. Porvazsnyik, General theorems on exponential and Rosenthal's inequalities and on complete convergence, J. Math. Inequal. 12 (2018) 433–446.
[3] M. Marinacci, Limit laws for non-additive probabilities and their frequentist interpretation, J. Econom. Theory 84 (1999) 145–195.
[4] S.G. Peng, A new central limit theorem under sublinear expectations, arXiv:0803.2656v1 [math.PR], 2008.
[5] S. Peng, Nonlinear expectations and stochastic calculus under uncertainty, arXiv:1002.4546v1 [math.PR], 2010.
[6] Y. Wang, Y. Li, Q. Gao, On the exponential inequality for acceptable random variables, J. Inequal. Appl. 40 (2011), 10 pp.
[7] L.X. Zhang, Strong limit theorems for extended independent and extended negatively dependent random variables under non-linear expectations, arXiv:1608.0071v1 [math.PR], 2016.
[8] L. Zhang, Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm, Sci. China Math. 59 (2016) 2503–2526.
[9] L.X. Zhang, Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectations with applications, Sci. China Math. 59 (2016) 751–768, arXiv:1408.5291 [math.PR].
[10] H. Zhong, Q. Wu, Complete convergence and complete moment convergence for weighted sums of extended negatively dependent random variables under sub-linear expectation, J. Inequal. Appl. 2017 (2017) 261, https://doi.org/10.1186/s13660-017-1538-1.
[11] B. Zhidong, S. Chun, The complete convergence for partial sums of i.i.d. random variables, Sci. Sinica Ser. A 28 (1985) 1261–1277.