Statistics and Probability Letters 83 (2013) 13–20
Equivalent conditions of complete moment convergence of weighted sums for ρ*-mixing sequence of random variables✩

Mingle Guo*, Dongjin Zhu

School of Mathematics and Computer Science, Anhui Normal University, Wuhu 241000, China
Article history: Received 19 May 2012; Received in revised form 19 August 2012; Accepted 20 August 2012; Available online 27 August 2012.

MSC: 60F15

Keywords: ρ*-mixing sequence of random variables; Weighted sums; Complete convergence; Complete moment convergence

Abstract. In this paper, the complete moment convergence of weighted sums for ρ*-mixing sequences of random variables is investigated. By applying a moment inequality and truncation methods, equivalent conditions of complete moment convergence of weighted sums for ρ*-mixing sequences of random variables are established. These results extend and improve the corresponding results of Li et al. (1995) and Gut (1993) from the i.i.d. case to the ρ*-mixing setting.

© 2012 Elsevier B.V. All rights reserved.
1. Introduction

Let {X_n, n ≥ 1} be a sequence of random variables defined on a probability space (Ω, F, P). Write $\mathcal{F}_S = \sigma(X_k, k \in S) \subset \mathcal{F}$ and

$$\rho^*(k) = \sup_{S,T}\ \sup_{X \in L_2(\mathcal{F}_S),\, Y \in L_2(\mathcal{F}_T)} \frac{\operatorname{Cov}(X,Y)}{\sqrt{\operatorname{Var}(X)\cdot\operatorname{Var}(Y)}},$$
where S, T range over the finite subsets of positive integers such that dist(S, T) ≥ k. We call {X_n, n ≥ 1} a ρ*-mixing sequence if there exists k ≥ 0 such that ρ*(k) < 1. Without loss of generality we may assume that a ρ*-mixing sequence {X_n, n ≥ 1} satisfies ρ*(1) < 1 (see, e.g., Bryc and Smolenski, 1993). The notion of ρ*-mixing is similar to that of ρ-mixing, but the two are quite different. Bryc and Smolenski (1993) and Peligrad (1998) pointed out the importance of the condition ρ*(1) < 1 in estimating the moments of partial sums or of maxima of partial sums, and various limit properties have been studied under this condition. We refer to Bradley (1992) for the central limit theorem, Bryc and Smolenski (1993) for moment inequalities and almost sure convergence, Peligrad and Gut (1999) for the Rosenthal-type maximal inequality, and An and Yuan (2008) for complete convergence. When {X_n, n ≥ 1} is independent and identically distributed (i.i.d.), Baum and Katz (1965) proved the following remarkable result concerning the convergence rate of the tail probabilities P(|S_n| > ϵn^{1/p}), where S_n = X_1 + ⋯ + X_n, for any ϵ > 0.
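To make the coefficient ρ*(k) concrete, consider the following numerical sketch (an editorial illustration, not part of the original paper). For a stationary Gaussian sequence, the supremum over L₂(F_S) and L₂(F_T) for a fixed pair of blocks S, T reduces, by a classical fact about Gaussian maximal correlation, to the largest canonical correlation between (X_i)_{i∈S} and (X_j)_{j∈T}; fixing one pair of blocks therefore yields a lower bound on ρ*(k). For a Gaussian AR(1) sequence with parameter φ (the values of φ and the block length m below are arbitrary sample choices) this quantity decays like φ^k, so ρ*(k) < 1 for every k ≥ 1.

```python
import numpy as np

def largest_canonical_corr(Sigma, S, T):
    """Largest canonical correlation between coordinate blocks S and T of a
    zero-mean Gaussian vector with covariance Sigma: the square root of the
    top eigenvalue of A^{-1} C B^{-1} C^T."""
    A = Sigma[np.ix_(S, S)]          # covariance within S
    B = Sigma[np.ix_(T, T)]          # covariance within T
    C = Sigma[np.ix_(S, T)]          # cross-covariance between S and T
    M = np.linalg.solve(A, C) @ np.linalg.solve(B, C.T)
    return float(np.sqrt(np.max(np.linalg.eigvals(M).real)))

phi, m = 0.6, 5                                      # AR(1) parameter, block length (sample values)
n = 2 * m + 10
idx = np.arange(n)
Sigma = phi ** np.abs(np.subtract.outer(idx, idx))   # Cov(X_i, X_j) = phi^{|i-j|}
for k in range(1, 6):
    S = list(range(m))                               # S = {0, ..., m-1}
    T = list(range(m - 1 + k, 2 * m - 1 + k))        # dist(S, T) = k
    print(k, largest_canonical_corr(Sigma, S, T))    # approximately phi**k < 1
```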
✩ The research is supported by the National Natural Science Foundation of China (Nos. 11271020 and 11201004), the Key Project of Chinese Ministry of Education (No. 211077) and the Anhui Provincial Natural Science Foundation (Nos. 10040606Q30 and 1208085MA11).
* Corresponding author. Tel.: +86 553 4119901.
E-mail addresses: [email protected] (M. Guo), [email protected] (D. Zhu).
Theorem A. Let 0 < p < 2 and r ≥ p. Then

$$\sum_{n=1}^{\infty} n^{r/p-2} P\left(|S_n| > \epsilon n^{1/p}\right) < \infty \quad \text{for all } \epsilon > 0,$$

if and only if $E|X|^r < \infty$, where $EX_1 = 0$ whenever $1 \le p < 2$.

There is an interesting and substantial literature extending the Baum–Katz theorem along a variety of different paths. Since partial sums are a particular case of weighted sums, and weighted sums often arise in practical problems, complete convergence for weighted sums is all the more important. Li et al. (1995) discussed complete convergence for weighted sums of independent random variables and obtained the following results.

Theorem B. Let {X, X_n, n ≥ 1} be a sequence of i.i.d. random variables. Let β > −1 and let {a_ni ≈ (i/n)^β(1/n), 1 ≤ i ≤ n, n ≥ 1} be a triangular array of real numbers such that $\sum_{i=1}^{n} a_{ni} = 1$ for all n ≥ 1. Then the following are equivalent:

(i) $E|X|^{1/(1+\beta)} < \infty$ for $-1 < \beta < -1/2$; $E|X|^2\log(1+|X|) < \infty$ for $\beta = -1/2$; $E|X|^2 < \infty$ for $\beta > -1/2$.

(ii) $\sum_{n=1}^{\infty} P\left(\left|\sum_{i=1}^{n} a_{ni}X_i - EX\right| > \epsilon\right) < \infty$ for all $\epsilon > 0$.

The most commonly studied summation method is Cesàro summation. Set, for α > −1,
$$A_n^{\alpha} = \frac{(\alpha+1)(\alpha+2)\cdots(\alpha+n)}{n!}, \quad n = 1, 2, \ldots, \qquad A_0^{\alpha} = 1.$$
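The Cesàro numbers are easy to generate and check numerically; the proof of Theorem 2.2 below uses $A_n^{\alpha} \approx n^{\alpha}/\Gamma(\alpha+1)$ and the identity $A_n^{\alpha} = \sum_{i=0}^{n} A_{n-i}^{\alpha-1}$. A short sketch (our illustration, not from the paper; the helper name `cesaro_numbers` is ours) verifies both via the recursion $A_n^{\alpha} = A_{n-1}^{\alpha}(\alpha+n)/n$, which follows directly from the product formula:

```python
import numpy as np
from math import gamma

def cesaro_numbers(alpha, N):
    """A_0^alpha = 1 and A_n^alpha = A_{n-1}^alpha * (alpha + n) / n."""
    A = np.ones(N + 1)
    for n in range(1, N + 1):
        A[n] = A[n - 1] * (alpha + n) / n
    return A

alpha, N = 0.5, 2000                                # sample values
A = cesaro_numbers(alpha, N)
print(A[N], N**alpha / gamma(alpha + 1))            # A_n^alpha ~ n^alpha / Gamma(alpha + 1)
print(A[N], cesaro_numbers(alpha - 1, N).sum())     # A_n^alpha = sum_{k=0}^n A_k^{alpha - 1}
```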
Gut (1993) proved the following results on Cesàro summation of i.i.d. random variables.

Theorem C. Let {X, X_n, n ≥ 0} be a sequence of i.i.d. random variables. Suppose that r > 1, 0 < α ≤ 1 and EX = 0. Then the following are equivalent:

(i) $E|X|^{(r-1)/\alpha} < \infty$ for $0 < \alpha < 1 - 1/r$; $E|X|^r\log(1+|X|) < \infty$ for $\alpha = 1 - 1/r$; $E|X|^r < \infty$ for $1 - 1/r < \alpha \le 1$.

(ii) $\sum_{n=1}^{\infty} n^{r-2} P\left(\left|\sum_{i=0}^{n} A_{n-i}^{\alpha-1}X_i\right| > \epsilon A_n^{\alpha}\right) < \infty$ for all $\epsilon > 0$.

Liang (2000) extended Theorems B and C to negatively associated random variables. In the current work, we establish in Theorems 2.1 and 2.2 equivalent conditions of complete moment convergence which are new versions of Theorems B and C, respectively, where the previous i.i.d. hypothesis is replaced by the assumption that {X, X_n, n ≥ 1} are identically distributed and ρ*-mixing, and where $\sum_{i=1}^{n} a_{ni}X_i$ (resp., $\sum_{i=0}^{n} A_{n-i}^{\alpha-1}X_i$) is replaced by $\max_{1\le k\le n}\left|\sum_{i=1}^{k} a_{ni}X_i\right|$ (resp., $\max_{0\le k\le n}\left|\sum_{i=0}^{k} A_{n-i}^{\alpha-1}X_i\right|$).

For the proofs of the main results, we need to restate a few definitions and lemmas for easy reference. Throughout this paper, C denotes a positive constant whose value may change from one place to another. The symbol I(A) denotes the indicator function of A. Let a_n ≪ b_n denote that there exists a constant C > 0 such that a_n ≤ Cb_n for sufficiently large n, and let a_n ≈ b_n mean that a_n ≪ b_n and b_n ≪ a_n. The following lemma can be easily proved, so we omit the details of the proof.
Lemma 1.1. Let X be a random variable. Then

(i) $\int_1^{\infty} u^{\beta} E|X|^{\alpha} I(|X| > u^{\gamma})\,du \ll E|X|^{(\beta+1)/\gamma+\alpha}$ for any $\alpha > 0$, $\gamma > 0$ and $\beta > -1$;

(ii) $\int_1^{\infty} u^{\beta}\log u\, E|X|^{\alpha} I(|X| > u^{\gamma})\,du \ll E|X|^{(\beta+1)/\gamma+\alpha}\log(1+|X|)$ for any $\alpha > 0$, $\gamma > 0$ and $\beta > -1$;

(iii) $\int_1^{\infty} u^{\beta} E|X|^{\alpha} I(|X| \le u^{\gamma})\,du \ll E|X|^{(\beta+1)/\gamma+\alpha}$ for any $\alpha > 0$, $\gamma > 0$ and $\beta < -1$;

(iv) $\int_1^{\infty} u^{\beta}\log u\, E|X|^{\alpha} I(|X| \le u^{\gamma})\,du \ll E|X|^{(\beta+1)/\gamma+\alpha}\log(1+|X|)$ for any $\alpha > 0$, $\gamma > 0$ and $\beta < -1$.
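As a quick numerical sanity check of part (i) (our illustration, not from the paper; we assume X ~ Exp(1), for which $E|X|\,I(|X| > t) = (t+1)e^{-t}$ and $E|X|^p = \Gamma(p+1)$ are available in closed form, and take α = 1):

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

# Lemma 1.1(i) with alpha = 1 and X ~ Exp(1), for which
# E|X| I(|X| > t) = (t + 1) e^{-t} and E|X|^p = Gamma(p + 1).
beta, gam = 0.5, 2.0                                   # any beta > -1, gamma > 0

lhs, _ = quad(lambda u: u**beta * (u**gam + 1.0) * np.exp(-(u**gam)), 1.0, np.inf)
p = (beta + 1.0) / gam + 1.0                           # exponent (beta + 1)/gamma + alpha
rhs = gamma(p + 1.0)                                   # E|X|^p in closed form
print(lhs, rhs)                                        # lhs is finite whenever rhs is, as asserted
```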
The following lemma will play an important role in the proof of our main results. The proof is due to Peligrad and Gut (1999).
Lemma 1.2. Let {X_i, 1 ≤ i ≤ n} be a ρ*-mixing sequence of random variables, and let $Y_i \in \sigma(X_i)$ with $EY_i = 0$ and $E|Y_i|^M < \infty$, i ≥ 1, M ≥ 2. Then there exists a positive constant C such that

$$E\left|\sum_{i=1}^{n} Y_i\right|^{M} \le C\left\{\sum_{i=1}^{n} E|Y_i|^{M} + \left(\sum_{i=1}^{n} E|Y_i|^{2}\right)^{M/2}\right\}, \tag{1.1}$$

$$E\max_{1\le j\le n}\left|\sum_{i=1}^{j} Y_i\right|^{M} \le C\left\{\sum_{i=1}^{n} E|Y_i|^{M} + (\log_2 n)^{M}\left(\sum_{i=1}^{n} E|Y_i|^{2}\right)^{M/2}\right\}. \tag{1.2}$$
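Since an i.i.d. sequence is ρ*-mixing with ρ*(k) = 0 for all k ≥ 1, inequality (1.2) can be sanity-checked by simulation in that special case. The sketch below (ours; the sample sizes and the choice M = 4 are arbitrary) estimates the left-hand side of (1.2) for standard normal summands and compares it with the bracketed bound, whose constant C the lemma leaves unspecified:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 4  # moment order; E|Y_i|^4 = 3 for standard normal Y_i
for n in (2**6, 2**8, 2**10):
    x = rng.standard_normal((5000, n))               # i.i.d. rows: rho*(k) = 0 for k >= 1
    smax = np.max(np.abs(np.cumsum(x, axis=1)), axis=1)
    lhs = np.mean(smax**M)                           # estimates E max_j |S_j|^M
    rhs = 3.0 * n + np.log2(n)**M * n**(M / 2)       # bracketed bound in (1.2)
    print(n, lhs / rhs)                              # ratio stays bounded, consistent with (1.2)
```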
Lemma 1.3. Let {X_n, n ≥ 1} be a ρ*-mixing sequence of random variables, and let {a_ni, 1 ≤ i ≤ n, n ≥ 1} be an array of real numbers. Then there exists a positive constant C such that, for any x ≥ 0 and all n ≥ 1,

$$\left(\frac{1}{2} - P\left(\max_{1\le i\le n}|a_{ni}X_i| > x\right)\right)\sum_{i=1}^{n} P(|a_{ni}X_i| > x) \le \left(\frac{C}{2} + 1\right) P\left(\max_{1\le i\le n}|a_{ni}X_i| > x\right). \tag{1.3}$$
Proof. Since $\left\{\max_{1\le i\le n}|a_{ni}X_i| > x\right\} = \bigcup_{i=1}^{n}\left\{|a_{ni}X_i| > x,\ \max_{1\le j\le i-1}|a_{nj}X_j| \le x\right\}$, we have

$$\begin{aligned}\sum_{i=1}^{n} P(|a_{ni}X_i| > x) &= \sum_{i=1}^{n} P\left(|a_{ni}X_i| > x,\ \max_{1\le j\le i-1}|a_{nj}X_j| \le x\right) + \sum_{i=1}^{n} P\left(|a_{ni}X_i| > x,\ \max_{1\le j\le i-1}|a_{nj}X_j| > x\right)\\ &= P\left(\max_{1\le i\le n}|a_{ni}X_i| > x\right) + \sum_{i=1}^{n} P\left(|a_{ni}X_i| > x,\ \max_{1\le j\le i-1}|a_{nj}X_j| > x\right).\end{aligned}\tag{1.4}$$
Note that

$$\begin{aligned}\sum_{i=1}^{n} P\left(|a_{ni}X_i| > x,\ \max_{1\le j\le i-1}|a_{nj}X_j| > x\right) &= \sum_{i=1}^{n} E\,I(|a_{ni}X_i| > x)\, I\left(\max_{1\le j\le i-1}|a_{nj}X_j| > x\right)\\ &\le E\sum_{i=1}^{n}\big(I(|a_{ni}X_i| > x) - EI(|a_{ni}X_i| > x)\big)\, I\left(\max_{1\le j\le n}|a_{nj}X_j| > x\right)\\ &\quad + \sum_{i=1}^{n} P(|a_{ni}X_i| > x)\, P\left(\max_{1\le j\le n}|a_{nj}X_j| > x\right).\end{aligned}\tag{1.5}$$
Combining the Cauchy–Schwarz inequality with (1.1), we obtain

$$\begin{aligned}&E\sum_{i=1}^{n}\big(I(|a_{ni}X_i| > x) - EI(|a_{ni}X_i| > x)\big)\, I\left(\max_{1\le j\le n}|a_{nj}X_j| > x\right)\\ &\quad\le \left\{E\left(\sum_{i=1}^{n}\big(I(|a_{ni}X_i| > x) - EI(|a_{ni}X_i| > x)\big)\right)^{2}\right\}^{1/2}\left\{P\left(\max_{1\le j\le n}|a_{nj}X_j| > x\right)\right\}^{1/2}\\ &\quad\le \left\{C\sum_{i=1}^{n} P(|a_{ni}X_i| > x)\right\}^{1/2}\left\{P\left(\max_{1\le j\le n}|a_{nj}X_j| > x\right)\right\}^{1/2}\\ &\quad\le \frac{1}{2}\sum_{i=1}^{n} P(|a_{ni}X_i| > x) + \frac{C}{2}\, P\left(\max_{1\le i\le n}|a_{ni}X_i| > x\right).\end{aligned}\tag{1.6}$$

Now, substituting (1.6) into (1.5) and then into (1.4), we obtain (1.3). □
2. Main results

We now state our main results; the proofs are given in Section 3.

Theorem 2.1. Let {X, X_n, n ≥ 1} be a ρ*-mixing sequence of identically distributed random variables and let r > 1. Assume β > −1 and {a_ni ≈ (i/n)^β(1/n), 1 ≤ i ≤ n, n ≥ 1} is a triangular array of real numbers such that $\sum_{i=1}^{n} a_{ni} = 1$ for all n ≥ 1.
Then the following are equivalent:

(i) $E|X|^{(r-1)/(1+\beta)} < \infty$ for $-1 < \beta < -1/r$; $E|X|^r\log(1+|X|) < \infty$ for $\beta = -1/r$; $E|X|^r < \infty$ for $\beta > -1/r$; and $EX = 0$. (2.1)

(ii) $\sum_{n=1}^{\infty} n^{r-2} E\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k} a_{ni}X_i\right| - \epsilon\right)^{+} < \infty$ for all $\epsilon > 0$. (2.2)
Corollary 2.1. Let {X, X_n, n ≥ 0} be a ρ*-mixing sequence of identically distributed random variables and let r > 1. Let β > −1 and let {a_ni ≈ ((n−i)/n)^β(1/n), 0 ≤ i ≤ n−1, n ≥ 1} be a triangular array of real numbers such that $\sum_{i=0}^{n-1} a_{ni} = 1$ for all n ≥ 1. Then the following are equivalent:

(i) $E|X|^{(r-1)/(1+\beta)} < \infty$ for $-1 < \beta < -1/r$; $E|X|^r\log(1+|X|) < \infty$ for $\beta = -1/r$; $E|X|^r < \infty$ for $\beta > -1/r$; and $EX = 0$.

(ii) $\sum_{n=1}^{\infty} n^{r-2} E\left(\max_{0\le k\le n-1}\left|\sum_{i=0}^{k} a_{ni}X_i\right| - \epsilon\right)^{+} < \infty$ for all $\epsilon > 0$.

Theorem 2.2. Let {X, X_n, n ≥ 0} be a ρ*-mixing sequence of identically distributed random variables and let r > 1, 0 < α ≤ 1. Then the following are equivalent:
(i) $E|X|^{(r-1)/\alpha} < \infty$ for $0 < \alpha < 1 - 1/r$; $E|X|^r\log(1+|X|) < \infty$ for $\alpha = 1 - 1/r$; $E|X|^r < \infty$ for $1 - 1/r < \alpha \le 1$; and $EX = 0$. (2.3)

(ii) $\sum_{n=1}^{\infty} n^{r-2} E\left(\max_{0\le k\le n}\left|\sum_{i=0}^{k} A_{n-i}^{\alpha-1}X_i\right| / A_n^{\alpha} - \epsilon\right)^{+} < \infty$ for all $\epsilon > 0$. (2.4)
Remark 2.1. Note that

$$\begin{aligned}\infty &> \sum_{n=1}^{\infty} n^{r-2} E\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k} a_{ni}X_i\right| - \epsilon\right)^{+}\\ &= \sum_{n=1}^{\infty} n^{r-2}\int_0^{\infty} P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k} a_{ni}X_i\right| - \epsilon > x\right)dx\\ &= \sum_{n=1}^{\infty} n^{r-2}\int_0^{\infty} P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k} a_{ni}X_i\right| > x + \epsilon\right)dx.\end{aligned}\tag{2.5}$$

Therefore, from (2.5) we see that complete moment convergence implies complete convergence; that is, under the conditions of Theorem 2.1, result (2.2) implies

$$\sum_{n=1}^{\infty} n^{r-2} P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k} a_{ni}X_i\right| > \epsilon\right) < \infty \quad \text{for all } \epsilon > 0. \tag{2.6}$$

Similarly, result (2.4) implies

$$\sum_{n=1}^{\infty} n^{r-2} P\left(\max_{0\le k\le n}\left|\sum_{i=0}^{k} A_{n-i}^{\alpha-1}X_i\right| > \epsilon A_n^{\alpha}\right) < \infty \quad \text{for all } \epsilon > 0. \tag{2.7}$$

Thus, we improve the results of Li et al. (1995) and Gut (1993) to maximum partial sums.
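The first equality in (2.5) is the tail-integral formula $E(Y - \epsilon)^{+} = \int_0^{\infty} P(Y - \epsilon > x)\,dx$ applied to $Y = \max_{1\le k\le n}\left|\sum_{i=1}^{k} a_{ni}X_i\right|$. A small Monte Carlo sketch (our illustration, not from the paper; Y is taken to be |N(0,1)| purely for concreteness) confirms the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.abs(rng.standard_normal(10**6))    # Y stands in for the maximum in (2.5)
eps = 0.5

lhs = np.mean(np.maximum(y - eps, 0.0))   # E (Y - eps)^+

xs = np.linspace(0.0, 8.0, 4001)          # truncate the tail integral at 8
ys = np.sort(y)
# P(Y > x + eps) estimated as the fraction of samples exceeding x + eps
tails = 1.0 - np.searchsorted(ys, xs + eps, side="right") / y.size
rhs = np.trapz(tails, xs)                 # integral of P(Y > x + eps) dx, truncated
print(lhs, rhs)                           # agree up to Monte Carlo / truncation error
```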
Remark 2.2. Without imposing any extra conditions, we not only extend and improve the results of Li et al. (1995) and Gut (1993) from the i.i.d. case to the ρ*-mixing setting, but also obtain the complete moment convergence of maximum partial sums for ρ*-mixing sequences of random variables. The method used to prove our main results differs from those of Li et al. (1995), Gut (1993) and Liang (2000), and it applies efficiently to complete moment convergence problems for ρ*-mixing sequences of random variables. Taking α = 1 in Theorem 2.2, we obtain the following corollary, which not only extends the results of Hsu and Robbins (1947) but also gives a precise result on the convergence properties of ρ*-mixing sequences.

Corollary 2.2. Let {X, X_n, n ≥ 1} be a ρ*-mixing sequence of identically distributed random variables with EX = 0 and let r > 1. Then the following are equivalent:
(i) $E|X|^r < \infty$. (2.8)

(ii) $\sum_{n=1}^{\infty} n^{r-2} E\left(\max_{1\le k\le n}\left|n^{-1}\sum_{i=1}^{k} X_i\right| - \epsilon\right)^{+} < \infty$ for all $\epsilon > 0$. (2.9)
3. Proofs of the main results

Proof of Theorem 2.1. We first prove (2.1) ⇒ (2.2). Set $X_{ni} = a_{ni}X_i I(|a_{ni}X_i| \le 1)$, 1 ≤ i ≤ n, n ≥ 1. Since EX = 0, we have

$$\sum_{i=1}^{k} a_{ni}X_i = \sum_{i=1}^{k}(X_{ni} - EX_{ni}) + \sum_{i=1}^{k}\big(a_{ni}X_i I(|a_{ni}X_i| > 1) - E a_{ni}X_i I(|a_{ni}X_i| > 1)\big). \tag{3.1}$$
By Lemma 1.2, the C_r inequality, Markov's inequality and Hölder's inequality, for M ≥ 2 we obtain

$$\begin{aligned}E\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k} a_{ni}X_i\right| - \epsilon\right)^{+} &\le E\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}(X_{ni} - EX_{ni})\right| - \epsilon\right)^{+} + 2\sum_{i=1}^{n} E|a_{ni}X_i| I(|a_{ni}X_i| > 1)\\ &\le \int_0^{\infty} P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k}(X_{ni} - EX_{ni})\right| > x + \epsilon\right)dx + 2\sum_{i=1}^{n} E|a_{ni}X_i| I(|a_{ni}X_i| > 1)\\ &\le \int_0^{\infty}\frac{1}{(x+\epsilon)^{M}}\, E\max_{1\le k\le n}\left|\sum_{i=1}^{k}(X_{ni} - EX_{ni})\right|^{M} dx + 2\sum_{i=1}^{n} E|a_{ni}X_i| I(|a_{ni}X_i| > 1)\\ &\ll E\max_{1\le k\le n}\left|\sum_{i=1}^{k}(X_{ni} - EX_{ni})\right|^{M} + \sum_{i=1}^{n} E|a_{ni}X_i| I(|a_{ni}X_i| > 1)\\ &\ll \sum_{i=1}^{n} E|X_{ni}|^{M} + (\log_2 n)^{M}\left(\sum_{i=1}^{n} E|X_{ni}|^{2}\right)^{M/2} + \sum_{i=1}^{n} E|a_{ni}X_i| I(|a_{ni}X_i| > 1).\end{aligned}\tag{3.2}$$
Then

$$\begin{aligned}\sum_{n=1}^{\infty} n^{r-2} E\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k} a_{ni}X_i\right| - \epsilon\right)^{+} &\ll \sum_{n=1}^{\infty} n^{r-2}\left\{\sum_{i=1}^{n} E|X_{ni}|^{M} + (\log_2 n)^{M}\left(\sum_{i=1}^{n} E|X_{ni}|^{2}\right)^{M/2} + \sum_{i=1}^{n} E|a_{ni}X_i| I(|a_{ni}X_i| > 1)\right\}\\ &=: I_1 + I_2 + I_3.\end{aligned}\tag{3.3}$$
So, in order to prove (2.2), it suffices to show that $I_1 < \infty$, $I_2 < \infty$ and $I_3 < \infty$. Since $a_{ni} \approx (i/n)^{\beta}(1/n)$, by Lemma 1.1 we have

$$\begin{aligned}I_3 &= \sum_{n=1}^{\infty} n^{r-2}\sum_{i=1}^{n} E|a_{ni}X_i| I(|a_{ni}X_i| > 1)\\ &\ll \sum_{n=1}^{\infty} n^{r-2}\sum_{i=1}^{n} i^{\beta} n^{-(1+\beta)} E|X| I\left(|X| > \frac{1}{C}\, n^{1+\beta} i^{-\beta}\right)\\ &\approx \int_1^{\infty} x^{r-2}\int_1^{x} y^{\beta} x^{-(1+\beta)} E|X| I\left(|X| > \frac{1}{C}\, x^{1+\beta} y^{-\beta}\right) dy\, dx\\ &= \frac{1}{1+\beta}\int_1^{\infty} du\int_1^{u} u^{(r-2-\beta)/(1+\beta)-1}\, v^{\beta(r-1)/(1+\beta)}\, E|X| I\left(|X| > \frac{u}{C}\right) dv \quad (\text{letting } u = x^{1+\beta}y^{-\beta},\ v = y)\\ &\approx \begin{cases}\displaystyle\int_1^{\infty} u^{(r-1)/(1+\beta)-2}\, E|X| I\left(|X| > \frac{u}{C}\right) du \ll E|X|^{(r-1)/(1+\beta)} & \text{for } -1 < \beta < -1/r,\\[1ex] \displaystyle\int_1^{\infty} u^{r-2}\log u\, E|X| I\left(|X| > \frac{u}{C}\right) du \ll E|X|^{r}\log(1+|X|) & \text{for } \beta = -1/r,\\[1ex] \displaystyle\int_1^{\infty} u^{r-2}\, E|X| I\left(|X| > \frac{u}{C}\right) du \ll E|X|^{r} & \text{for } \beta > -1/r.\end{cases}\end{aligned}\tag{3.4}$$
By (2.1), we have $I_3 < \infty$. Taking M sufficiently large that $(r-2-\beta)/(1+\beta) - M < -1$ and $r - 1 - M < -1$, by Lemma 1.1 we obtain

$$\begin{aligned}I_1 &= \sum_{n=1}^{\infty} n^{r-2}\sum_{i=1}^{n} E|a_{ni}X_i|^{M} I(|a_{ni}X_i| \le 1)\\ &\ll \sum_{n=1}^{\infty} n^{r-2}\sum_{i=1}^{n} i^{M\beta} n^{-M(1+\beta)} E|X|^{M} I\left(|X| \le \frac{1}{C}\, n^{1+\beta} i^{-\beta}\right)\\ &\approx \frac{1}{1+\beta}\int_1^{\infty} du\int_1^{u} u^{(r-2-\beta)/(1+\beta)-M}\, v^{\beta(r-1)/(1+\beta)}\, E|X|^{M} I\left(|X| \le \frac{u}{C}\right) dv\\ &\approx \begin{cases}\displaystyle\int_1^{\infty} u^{(r-2-\beta)/(1+\beta)-M}\, E|X|^{M} I\left(|X| \le \frac{u}{C}\right) du \ll E|X|^{(r-1)/(1+\beta)} & \text{for } -1 < \beta < -1/r,\\[1ex] \displaystyle\int_1^{\infty} u^{r-1-M}\log u\, E|X|^{M} I\left(|X| \le \frac{u}{C}\right) du \ll E|X|^{r}\log(1+|X|) & \text{for } \beta = -1/r,\\[1ex] \displaystyle\int_1^{\infty} u^{r-1-M}\, E|X|^{M} I\left(|X| \le \frac{u}{C}\right) du \ll E|X|^{r} & \text{for } \beta > -1/r.\end{cases}\end{aligned}\tag{3.5}$$
Therefore, $I_1 < \infty$. Thus, in order to prove (2.2), it remains to show that $I_2 < \infty$. We consider two separate cases.

When r ≥ 2, noting that (2.1) implies $EX^2 < \infty$, we obtain

$$\begin{aligned}I_2 &\ll \sum_{n=1}^{\infty} n^{r-2}(\log_2 n)^{M}\left(\sum_{i=1}^{n} a_{ni}^{2}\right)^{M/2} \approx \sum_{n=1}^{\infty} n^{r-2}(\log_2 n)^{M}\left(\sum_{i=1}^{n} i^{2\beta} n^{-2(1+\beta)}\right)^{M/2}\\ &\approx \begin{cases}\displaystyle\sum_{n=1}^{\infty} n^{r-2-M(1+\beta)}(\log_2 n)^{M} & \text{for } -1 < \beta < -1/2,\\[1ex] \displaystyle\sum_{n=1}^{\infty} n^{r-2-M/2}(\log n)^{M/2}(\log_2 n)^{M} & \text{for } \beta = -1/2,\\[1ex] \displaystyle\sum_{n=1}^{\infty} n^{r-2-M/2}(\log_2 n)^{M} & \text{for } \beta > -1/2.\end{cases}\end{aligned}\tag{3.6}$$
Thus, taking M sufficiently large that $r - 2 - M(1+\beta) < -1$ and $r - 2 - M/2 < -1$, we get $I_2 < \infty$.

When 1 < r < 2, since (2.1) implies $E|X|^r < \infty$, taking M sufficiently large that $r - 2 - M(r-1)/2 < -1$ and $r - 2 - Mr(1+\beta)/2 < -1$, we have

$$\begin{aligned}I_2 &\ll \sum_{n=1}^{\infty} n^{r-2}(\log_2 n)^{M}\left(\sum_{i=1}^{n} a_{ni}^{r}\right)^{M/2} \approx \sum_{n=1}^{\infty} n^{r-2}(\log_2 n)^{M}\left(\sum_{i=1}^{n} i^{r\beta} n^{-r(1+\beta)}\right)^{M/2}\\ &\approx \begin{cases}\displaystyle\sum_{n=1}^{\infty} n^{r-2-Mr(1+\beta)/2}(\log_2 n)^{M} & \text{for } -1 < \beta < -1/r,\\[1ex] \displaystyle\sum_{n=1}^{\infty} n^{r-2-M(r-1)/2}(\log n)^{M/2}(\log_2 n)^{M} & \text{for } \beta = -1/r,\\[1ex] \displaystyle\sum_{n=1}^{\infty} n^{r-2-M(r-1)/2}(\log_2 n)^{M} & \text{for } \beta > -1/r\end{cases}\\ &< \infty.\end{aligned}\tag{3.7}$$
Now we prove (2.2) ⇒ (2.1). By Remark 2.1, we see that (2.2) implies

$$\sum_{n=1}^{\infty} n^{r-2} P\left(\max_{1\le k\le n}\left|\sum_{i=1}^{k} a_{ni}X_i\right| > \epsilon\right) < \infty. \tag{3.8}$$

Since $\max_{1\le k\le n}|a_{nk}X_k| \le 2\max_{1\le k\le n}\left|\sum_{i=1}^{k} a_{ni}X_i\right|$, from (3.8) we have

$$\sum_{n=1}^{\infty} n^{r-2} P\left(\max_{1\le k\le n}|a_{nk}X_k| > \epsilon\right) < \infty; \tag{3.9}$$

$$P\left(\max_{1\le k\le n}|a_{nk}X_k| > \epsilon\right) \to 0 \quad \text{as } n \to \infty. \tag{3.10}$$

Therefore, by Lemma 1.3 and (3.10), it is easy to see that

$$\sum_{i=1}^{n} P(|a_{ni}X_i| > \epsilon) \ll P\left(\max_{1\le k\le n}|a_{nk}X_k| > \epsilon\right). \tag{3.11}$$

Now (3.9) and (3.11) yield

$$\sum_{n=1}^{\infty} n^{r-2}\sum_{i=1}^{n} P(|a_{ni}X_i| > \epsilon) < \infty. \tag{3.12}$$

From the proof of Theorem 2.4 in Li et al. (1995), we see that (3.12) is equivalent to

$$E|X|^{(r-1)/(1+\beta)} < \infty \ \text{for } -1 < \beta < -1/r; \qquad E|X|^{r}\log(1+|X|) < \infty \ \text{for } \beta = -1/r; \qquad E|X|^{r} < \infty \ \text{for } \beta > -1/r. \tag{3.13}$$

Moreover, noting that $\sum_{i=1}^{n} a_{ni} = 1$, n ≥ 1, by (3.13) and the same argument as above, we have

$$\sum_{n=1}^{\infty} n^{r-2} P\left(\left|\sum_{i=1}^{n} a_{ni}X_i - EX\right| > \epsilon\right) = \sum_{n=1}^{\infty} n^{r-2} P\left(\left|\sum_{i=1}^{n} a_{ni}(X_i - EX_i)\right| > \epsilon\right) < \infty. \tag{3.14}$$

Combining (3.8) and (3.14), we have EX = 0. □
Proof of Corollary 2.1. Set $X_{ni} = a_{ni}X_i I(|a_{ni}X_i| \le 1)$, 0 ≤ i ≤ n−1, n ≥ 1. We apply the same method as in the proof of Theorem 2.1. Note that

$$\begin{aligned}\sum_{i=0}^{n-1} E|a_{ni}X_i| I(|a_{ni}X_i| > 1) &\ll \sum_{i=0}^{n-1}(n-i)^{\beta} n^{-(1+\beta)} E|X| I\left(|X| > \frac{1}{C}\, n^{1+\beta}(n-i)^{-\beta}\right)\\ &= \sum_{j=1}^{n} j^{\beta} n^{-(1+\beta)} E|X| I\left(|X| > \frac{1}{C}\, n^{1+\beta} j^{-\beta}\right).\end{aligned}\tag{3.15}$$

By (3.4), we have

$$I_3 \ll \sum_{n=1}^{\infty} n^{r-2}\sum_{i=0}^{n-1} E|a_{ni}X_i| I(|a_{ni}X_i| > 1) = \sum_{n=1}^{\infty} n^{r-2}\sum_{j=1}^{n} j^{\beta} n^{-(1+\beta)} E|X| I\left(|X| > \frac{1}{C}\, n^{1+\beta} j^{-\beta}\right) < \infty. \tag{3.16}$$

The rest of the proof is similar to that of Theorem 2.1 and is omitted. □

Proof of Theorem 2.2. Put $a_{ni} = A_{n-i}^{\alpha-1}/A_n^{\alpha}$, 0 ≤ i ≤ n, n ≥ 1. Note that, for α > −1, $A_n^{\alpha} \approx n^{\alpha}/\Gamma(\alpha+1)$. Therefore, for α > 0, we obtain $a_{ni} \approx (n-i)^{\alpha-1} n^{-\alpha}$ for 0 ≤ i < n, n ≥ 1, and $a_{nn} \approx n^{-\alpha}$. Since $A_n^{\alpha} = \sum_{i=0}^{n} A_{n-i}^{\alpha-1}$, we have $\sum_{i=0}^{n} a_{ni} = 1$. Thus, by Corollary 2.1, we see that (2.3) is equivalent to (2.4). □
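The reduction in this proof is easy to confirm numerically. The sketch below (ours, reusing the Cesàro recursion from the sketch in Section 1; the values of n and α are arbitrary samples) builds $a_{ni} = A_{n-i}^{\alpha-1}/A_n^{\alpha}$ and checks that the weights sum to one and behave like $(n-i)^{\alpha-1} n^{-\alpha}$ away from i = n:

```python
import numpy as np

def cesaro_numbers(alpha, N):
    # A_0^alpha = 1, A_n^alpha = A_{n-1}^alpha * (alpha + n) / n
    A = np.ones(N + 1)
    for n in range(1, N + 1):
        A[n] = A[n - 1] * (alpha + n) / n
    return A

alpha, n = 0.5, 1000                                   # sample values
Am1 = cesaro_numbers(alpha - 1.0, n)                   # A_j^{alpha - 1}, j = 0..n
An = cesaro_numbers(alpha, n)[n]                       # A_n^alpha
a = Am1[::-1] / An                                     # a_{ni} = A_{n-i}^{alpha-1} / A_n^alpha
print(a.sum())                                         # = 1, since A_n^alpha = sum_i A_{n-i}^{alpha-1}
i = np.arange(n)
ratio = a[:n] / ((n - i) ** (alpha - 1.0) * n ** (-alpha))
print(ratio.min(), ratio.max())                        # bounded: a_{ni} ~ (n-i)^{alpha-1} n^{-alpha}
```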
Acknowledgments The authors are deeply grateful to the anonymous referee and the Editor Prof. Yimin Xiao for the careful reading, valuable comments and correcting some errors, which have greatly improved the quality of the paper.
References

An, J., Yuan, D.M., 2008. Complete convergence of weighted sums for ρ*-mixing sequence of random variables. Statist. Probab. Lett. 78, 1466–1472.
Baum, L.E., Katz, M., 1965. Convergence rates in the law of large numbers. Trans. Amer. Math. Soc. 120, 108–123.
Bradley, R.C., 1992. On the spectral density and asymptotic normality of weakly dependent random fields. J. Theoret. Probab. 5, 355–374.
Bryc, W., Smolenski, W., 1993. Moment conditions for almost sure convergence of weakly correlated random variables. Proc. Amer. Math. Soc. 119, 629–635.
Gut, A., 1993. Complete convergence and Cesàro summation for i.i.d. random variables. Probab. Theory Related Fields 97, 169–178.
Hsu, P.L., Robbins, H., 1947. Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 33, 25–31.
Li, D.L., Rao, M.B., Jiang, T.F., Wang, X.C., 1995. Complete convergence and almost sure convergence of weighted sums of random variables. J. Theoret. Probab. 8, 49–76.
Liang, H.Y., 2000. Complete convergence for weighted sums of negatively associated random variables. Statist. Probab. Lett. 48, 317–325.
Peligrad, M., 1998. Maximum of partial sums and invariance principle for a class of weakly dependent random variables. Proc. Amer. Math. Soc. 126, 1181–1189.
Peligrad, M., Gut, A., 1999. Almost-sure results for a class of dependent random variables. J. Theoret. Probab. 12, 87–104.