J. Math. Anal. Appl. 373 (2011) 422–431
On the strong convergence rate for positively associated random variables ✩

Guo-dong Xing a,∗, Shan-chao Yang b

a Dept. of Mathematics, Hunan University of Science and Engineering, Yongzhou, Hunan 425100, PR China
b Dept. of Mathematics, Guangxi Normal University, Guilin, Guangxi 541004, PR China
Article history: Received 5 June 2009. Available online 6 August 2010. Submitted by Goong Chen.

Abstract.
For sequences of positively associated random variables, a notion strictly weaker than the classical association introduced by Esary et al. (1967) [7], the strong convergence rate n^{−1/2}(log n)^{1/2} is obtained; this matches the rate available for independent random variables via a Bernstein-type inequality. Further, we give the corresponding precise asymptotics with respect to the rate mentioned above, which extend and improve the relevant results in Fu (2009) [8]. © 2010 Elsevier Inc. All rights reserved.
Keywords: Positively associated random variables; Strong convergence rate; Strictly stationary; Maximal and moment inequalities
1. Introduction

Definition 1.1. A finite family of random variables {X_i, 1 ≤ i ≤ n} is said to be positively associated if for every pair of disjoint subsets A_1 and A_2 of {1, 2, …, n},

Cov(f_1(X_i, i ∈ A_1), f_2(X_j, j ∈ A_2)) ≥ 0

whenever f_1 and f_2 are coordinatewise increasing and the covariance exists. An infinite family is positively associated if every finite subfamily is positively associated.

Definition 1.2. A finite family of random variables {X_i, 1 ≤ i ≤ n} is said to be associated if for any subset A of {1, 2, …, n},

Cov(f_1(X_i, i ∈ A), f_2(X_j, j ∈ A)) ≥ 0

whenever f_1 and f_2 are coordinatewise increasing and the covariance exists. An infinite family is associated if every finite subfamily is associated.

The associated random variables of Definition 1.2, introduced by Esary et al. [7], play an important role in a wide variety of areas, including reliability theory, mathematical physics, multivariate statistical analysis, the life sciences and percolation theory. One may refer to Barlow and Proschan [1], Newman [10–14], Cox and Grimmett [6], Birkel [3–5], Roussas [16–20], Gut [9], Shao and Yu [15] and Yang [21,22] for further background. Positive association in Definition 1.1, however, is strictly weaker than classical association in Definition 1.2. In fact, "association" implies "positive association". However, there
✩ Supported by the National Natural Science Foundation of China (No. 10161004).
* Corresponding author.
E-mail addresses: [email protected] (G.-d. Xing), [email protected] (S.-c. Yang).
0022-247X/$ – see front matter © 2010 Elsevier Inc. All rights reserved.
doi:10.1016/j.jmaa.2010.08.004
exist positively associated random variables X_1, X_2 in [7] which are not associated. Hence, it is of much interest to investigate limit theorems for positively associated random variables. In this paper, by means of the maximal and moment inequalities established for positively associated random variables in the sense of Definition 1.1, we obtain the strong convergence rate n^{−1/2}(log n)^{1/2}, which matches the rate available for independent random variables via a Bernstein-type inequality. To get more precise results, we also give the corresponding precise asymptotics, which extend and improve the relevant results in [8].

Throughout this paper, let {X_i, i ≥ 1} be a strictly stationary and positively associated sequence with E X_1 = 0, 0 < E X_1² < ∞ and 0 < σ² := E X_1² + 2 Σ_{i=2}^∞ E X_1 X_i < ∞, unless otherwise mentioned. We always let C denote a positive constant which depends only on some given numbers and may vary from line to line; a_n ∼ b_n means that a_n/b_n → 1 as n → ∞; a ≲ b means that a ≤ Cb; u(n) := sup_{i≥1} Σ_{j: j−i≥n} Cov(X_i, X_j); and S_n := Σ_{i=1}^n X_i.

The paper is organized as follows. Section 2 contains the main results. Section 3 contains some lemmas, and Section 4 the proofs of the main results. The proof of Lemma 3.5 is given in Appendix A.

2. Main results

In this section we state our main theorems.

Theorem 2.1. Let {X_i, i ≥ 1} be a strictly stationary and positively associated sequence with E X_1 = 0 and 0 < E X_1² < ∞. Assume that E(|X_1|^{2+γ} (log(1 + |X_1|))^{(γ_2−γ_1−2)/2}) < ∞ for some γ > 0 and 0 < γ_2 < γ_1 < γ + γ_2, and that u(n) ≤ Cn^{−θ} for θ ≥ γ_1(1 + γ + γ_2)/(γ + γ_2 − γ_1). Then we have

S_n/(n log n)^{1/2} → 0 a.s.   (2.1)
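As a quick numerical illustration (ours, not from the paper), the rate in (2.1) can be observed in the simplest special case of an i.i.d. centered sequence; independent random variables are associated, hence positively associated. The sketch below estimates E|S_n|/√(n log n) by Monte Carlo; all names and parameter choices are our own.

```python
# Monte Carlo sketch of (2.1) for i.i.d. N(0,1) summands (a special
# case of positive association).  S_n/sqrt(n log n) ~ N(0, 1/log n),
# so the averages below should shrink as n grows.
import math
import random

random.seed(1)

def normalized_sum(n):
    """One sample of |S_n| / sqrt(n log n) for i.i.d. N(0,1) steps."""
    s = sum(random.gauss(0.0, 1.0) for _ in range(n))
    return abs(s) / math.sqrt(n * math.log(n))

paths = 50
ests = {n: sum(normalized_sum(n) for _ in range(paths)) / paths
        for n in (10**2, 10**4)}
print(ests)
```

With these (arbitrary) sample sizes the averages are already well below 1 and decrease in n, consistent with almost sure convergence to zero at the stated rate.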
Remark 2.1. By (2.1), S_n/n → 0 a.s. at the rate n^{−1/2}(log n)^{1/2}, which matches the rate available for independent random variables via a Bernstein-type inequality.

Theorem 2.2. Let |k_n| ≤ C/log n. Then for −1 < β < 0, we have

lim_{ε↓0} ε^{2β+2} Σ_{n=2}^∞ ((log n)^β/n) P(|S_n| ≥ (ε + k_n)σ√(n log n)) = E|N|^{2β+2}/(β + 1)   (2.2)

and

lim_{ε↓0} ε^{2β+2} Σ_{n=2}^∞ ((log n)^β/n) P(max_{1≤j≤n} |S_j| ≥ (ε + k_n)σ√(n log n)) = (2E|N|^{2β+2}/(β + 1)) Σ_{l=0}^∞ (−1)^l/(2l + 1)^{2β+2},   (2.3)

where N stands for the standard normal random variable. In particular, we have for −1 < β < 0,
lim_{ε↓0} ε^{2β+2} Σ_{n=2}^∞ ((log n)^β/n) P(|S_n| ≥ εσ√(n log n)) = E|N|^{2β+2}/(β + 1)   (2.4)

and

lim_{ε↓0} ε^{2β+2} Σ_{n=2}^∞ ((log n)^β/n) P(max_{1≤j≤n} |S_j| ≥ εσ√(n log n)) = (2E|N|^{2β+2}/(β + 1)) Σ_{l=0}^∞ (−1)^l/(2l + 1)^{2β+2},   (2.5)
when k_n is chosen as 0.

Remark 2.2. (1) Theorem 2.2 extends Theorem 2.1 in [8] from the associated setting to the positively associated case. (2) Although Definition 1 in [8] is stated for positively associated random variables, the results there are true only for associated random variables, since the lemmas quoted there hold only for associated random variables.

Theorem 2.3. Let |k_n| ≤ C/log n. Then for −1 < β < 0,

lim_{ε↓0} ε^{2β+2} Σ_{n=2}^∞ ((log n)^β/n) I(|S_n| ≥ (ε + k_n)σ√(n log n)) = E|N|^{2β+2}/(β + 1) a.s. and in L².   (2.6)

In particular, we have
lim_{ε↓0} ε^{2β+2} Σ_{n=2}^∞ ((log n)^β/n) I(|S_n| ≥ εσ√(n log n)) = E|N|^{2β+2}/(β + 1) a.s. and in L²   (2.7)

when k_n = 0.

Remark 2.3. Theorem 2.3 extends Theorem 2.2 in [8] from the associated setting to the positively associated case. On the other hand, in order to get (2.7), Fu [8] used the following much more restrictive conditions: (i) E|X_1|^{2+ϑ} < ∞ for some 0 < ϑ ≤ 1, which is obviously stronger than E X_1² < ∞ in Theorem 2.3; (ii) the covariance structure is required to satisfy
u(n) = Σ_{j=n+1}^∞ Cov(X_1, X_j) ≤ Cn^{−α} for some α > 1.   (2.8)

Our result, however, still holds without condition (2.8). Hence, Theorem 2.3 substantially extends and improves Theorem 2.2 in [8]. This is due to the application of the maximal moment inequality in Lemma 3.1.

3. Some lemmas

In this section we give the following lemmas, which will be used in Section 4 and Appendix A.

Lemma 3.1. Let {X_i, i ≥ 1} be a positively associated sequence with zero mean and finite variances satisfying

L := sup_{i≥1} Σ_{j: |j−i|≥1} Cov(X_i, X_j) < ∞.

Then we can obtain

E max_{1≤k≤n} S_k² ≤ 2n max_{1≤i≤n} E X_i² + 4nL   (3.1)

for any n ≥ 1.

Proof. Since E X_i = 0 and (S_1, …, S_i) is a coordinatewise increasing function of (X_1, …, X_i), we see that {S_n, n ≥ 1} is a demimartingale. Let n ≥ 1 be a given integer and set M_n := max_{1≤i≤n} S_i and m_n := min_{1≤i≤n} S_i. As m_n ≤ S_i ≤ M_n for all 1 ≤ i ≤ n, we have
S_i² ≤ max(m_n², M_n²) ≤ m_n² + M_n² for any 1 ≤ i ≤ n.

Hence max_{1≤i≤n} S_i² ≤ m_n² + M_n², and by Corollary 5 in [12] we have E m_n² ≤ E S_n² and E M_n² ≤ E S_n². Therefore, we can get

E max_{1≤k≤n} S_k² ≤ 2E S_n² = 2 Σ_{i=1}^n Σ_{j=1}^n Cov(X_i, X_j)

for all n ≥ 1. If n = 1, the desired conclusion holds trivially. So suppose that n ≥ 2 and define a := max_{1≤i≤n} E X_i². Then

Σ_{i=1}^n Σ_{j=1}^n Cov(X_i, X_j) = Σ_{i=1}^n E X_i² + 2 Σ_{i=1}^{n−1} Σ_{j=i+1}^n Cov(X_i, X_j) ≤ an + 2nL.

Combining the two displays gives the desired result. The proof is complete. □
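The inequality (3.1) can be checked numerically in the independent case, where L = 0 and the bound reads E max_{k≤n} S_k² ≤ 2n max_i E X_i². The following Monte Carlo sketch (our own illustration; the names and parameters are not from the paper) compares an empirical estimate with that bound.

```python
# Monte Carlo sanity check of the maximal inequality (3.1) for i.i.d.
# N(0,1) summands, where L = 0 and max_i E X_i^2 = 1, so the bound is
# E max_{1<=k<=n} S_k^2 <= 2n.
import random

random.seed(2)

def max_sq_partial_sum(n):
    """max_{1<=k<=n} S_k^2 along one sample path."""
    s, best = 0.0, 0.0
    for _ in range(n):
        s += random.gauss(0.0, 1.0)
        best = max(best, s * s)
    return best

n, trials = 200, 2000
estimate = sum(max_sq_partial_sum(n) for _ in range(trials)) / trials
bound = 2.0 * n  # 2 n max E X_i^2 + 4 n L with E X_i^2 = 1, L = 0
print(estimate, bound)
```

The estimate sits between E S_n² = n and the bound 2n, so the constant 2 in (3.1) is not wasteful by much in this case.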
By Lemma 3.1 and induction on n, we have

Lemma 3.2. Let 2 < p < r and let {X_i, 1 ≤ i ≤ n} be positively associated random variables with E X_i = 0, ‖X_i‖_r < ∞ and u(n) < ∞, where ‖X_i‖_r = (E|X_i|^r)^{1/r}. Assume that L < ∞ and that

u(n) ≤ Cn^{−θ} for some C > 0 and θ > 0.

Then, for every ε > 0, there exists K = K(ε, r, p, θ) such that

E|S_n|^p ≤ K n^{1+ε} max_{1≤i≤n} E|X_i|^p + K n^{p/2} (2 max_{1≤i≤n} E X_i² + 4L)^{p/2} + K C^{(r−p)/(r−2)} n^{((r(p−1)−p+θ(p−r))/(r−2)) ∨ (1+ε)} (max_{1≤i≤n} ‖X_i‖_r)^{r(p−2)/(r−2)}.   (3.2)

In particular, we have

E|S_n|^p ≤ K n^{1+ε} max_{1≤i≤n} E|X_i|^p + K n^{p/2} (2 max_{1≤i≤n} E X_i² + 4L)^{p/2} + K C^{(r−p)/(r−2)} n^{1+ε} (max_{1≤i≤n} ‖X_i‖_r)^{r(p−2)/(r−2)}   (3.3)

if θ ≥ (r − 1)(p − 2)/(r − p), and

E|S_n|^p ≤ K n^{1+ε} max_{1≤i≤n} E|X_i|^p + K n^{p/2} (2 max_{1≤i≤n} E X_i² + 4L)^{p/2} + K C^{(r−p)/(r−2)} n^{p/2} (max_{1≤i≤n} ‖X_i‖_r)^{r(p−2)/(r−2)}   (3.4)

if θ ≥ r(p − 2)/(2(r − p)).

Lemma 3.3. Let {X_i, i ≥ 1} be a positively associated sequence with zero mean and finite variances. If 0 < ϑ_1 < ϑ_2 and (ϑ_2 − ϑ_1)² > E S_n², then we have

P(max_{1≤j≤n} |S_j| ≥ ϑ_2) ≤ ((ϑ_2 − ϑ_1)²/((ϑ_2 − ϑ_1)² − E S_n²)) P(|S_n| ≥ ϑ_1).   (3.5)
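Lemma 3.3 is a Lévy-type maximal inequality, and it can be illustrated by simulation in the i.i.d. special case (independent variables are positively associated). The sketch below is our own; the choices n = 100, ϑ_1 = √n and ϑ_2 = 3√n are arbitrary but satisfy (ϑ_2 − ϑ_1)² = 4n > E S_n² = n, so the factor in (3.5) is 4/3.

```python
# Monte Carlo illustration of Lemma 3.3 for i.i.d. N(0,1) summands:
# P(max_j |S_j| >= theta2) <= factor * P(|S_n| >= theta1),
# with factor = (theta2-theta1)^2 / ((theta2-theta1)^2 - E S_n^2).
import math
import random

random.seed(3)

n = 100
theta1 = math.sqrt(n)        # one standard deviation of S_n
theta2 = 3.0 * math.sqrt(n)  # (theta2 - theta1)^2 = 4n > E S_n^2 = n
trials = 4000
hit_max = hit_end = 0
for _ in range(trials):
    s, m = 0.0, 0.0
    for _ in range(n):
        s += random.gauss(0.0, 1.0)
        m = max(m, abs(s))
    hit_max += m >= theta2
    hit_end += abs(s) >= theta1
factor = (theta2 - theta1) ** 2 / ((theta2 - theta1) ** 2 - n)  # = 4/3
print(hit_max / trials, factor * hit_end / trials)
```

In this regime the left side is tiny (the maximum rarely reaches 3√n) while the right side is of order P(|N| ≥ 1) ≈ 0.32, so the inequality holds with a wide margin.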
Proof. Define

S_n* = max(0, S_1, …, S_n).

Then, by the proof of Lemma 3.1 with X_i replaced by Y_i = −X_{n−i+1},

E(S*_{n−1} − S_n)² = E(max(Y_1, Y_1 + Y_2, …, Y_1 + ⋯ + Y_n))² ≤ E S_n².   (3.6)

Noting that S*_{n−1} and S_n − S*_{n−1} are positively associated and applying (3.6), we obtain, for ϑ_1 < ϑ_2,

P(S_n* ≥ ϑ_2) = P(S_n* ≥ ϑ_2, S_n ≥ ϑ_1) + P(S_n* ≥ ϑ_2, S_n < ϑ_1)
≤ P(S_n ≥ ϑ_1) + P(S*_{n−1} ≥ ϑ_2, S*_{n−1} − S_n ≥ ϑ_2 − ϑ_1)
≤ P(S_n ≥ ϑ_1) + P(S*_{n−1} ≥ ϑ_2) P(S*_{n−1} − S_n ≥ ϑ_2 − ϑ_1)
≤ P(S_n ≥ ϑ_1) + P(S_n* ≥ ϑ_2) E(S*_{n−1} − S_n)²/(ϑ_2 − ϑ_1)²
≤ P(S_n ≥ ϑ_1) + P(S_n* ≥ ϑ_2) E S_n²/(ϑ_2 − ϑ_1)²,

which implies that

P(S_n* ≥ ϑ_2) ≤ ((ϑ_2 − ϑ_1)²/((ϑ_2 − ϑ_1)² − E S_n²)) P(S_n ≥ ϑ_1)   (3.7)

for (ϑ_2 − ϑ_1)² > E S_n². Adding to (3.7) the analogous inequality with each X_i replaced by −X_i, we get

P(max_{1≤j≤n} |S_j| ≥ ϑ_2) ≤ ((ϑ_2 − ϑ_1)²/((ϑ_2 − ϑ_1)² − E S_n²)) P(|S_n| ≥ ϑ_1),

which completes the proof of Lemma 3.3. □
Lemma 3.4. (See [2].) Let {W(t), t ≥ 0} be a standard Wiener process. Then for any x > 0,

P(sup_{0≤s≤1} |W(s)| ≥ x) = 1 − Σ_{l=−∞}^∞ (−1)^l P((2l − 1)x ≤ N ≤ (2l + 1)x)
= 4 Σ_{l=0}^∞ (−1)^l P(N ≥ (2l + 1)x)
= 2 Σ_{l=0}^∞ (−1)^l P(|N| ≥ (2l + 1)x).

In particular,

P(sup_{0≤s≤1} |W(s)| ≥ x) ∼ 2P(|N| ≥ x) ∼ (4/(√(2π) x)) e^{−x²/2} as x → +∞.
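The alternating series in Lemma 3.4 is easy to evaluate numerically using the normal tail P(|N| ≥ y) = erfc(y/√2); the sketch below (ours, not from the paper) also checks the stated asymptotic equivalence with 2P(|N| ≥ x).

```python
# Evaluate P(sup_{0<=s<=1} |W(s)| >= x) via the series of Lemma 3.4,
# using P(|N| >= y) = erfc(y / sqrt(2)) for a standard normal N.
import math

def sup_abs_bm_tail(x, terms=50):
    """2 * sum_{l>=0} (-1)^l P(|N| >= (2l+1)x), truncated."""
    return 2.0 * sum((-1) ** l * math.erfc((2 * l + 1) * x / math.sqrt(2.0))
                     for l in range(terms))

# The values are probabilities, decreasing in x, and for large x the
# leading term 2 P(|N| >= x) dominates.
vals = [sup_abs_bm_tail(x) for x in (0.5, 1.0, 2.0, 3.0)]
ratio = sup_abs_bm_tail(3.0) / (2.0 * math.erfc(3.0 / math.sqrt(2.0)))
print(vals, ratio)
```

Already at x = 3 the ratio to the leading term 2P(|N| ≥ x) is 1 to within rounding, because the next term P(|N| ≥ 9x/3·3) is astronomically small.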
Lemma 3.5. Under the conditions of Theorem 2.2, we have

max_{1≤j≤n} |S_j|/(σ√n) → sup_{0≤s≤1} |W(s)| and S_n/(σ√n) → N in distribution.
The proof will be given in Appendix A.

4. Proofs

In this section we give the proofs of our main results. First we give the

Proof of Theorem 2.1. Let

X_{i1} = −(n log n)^{1/2} I(X_i ≤ −(n log n)^{1/2}) + X_i I(|X_i| < (n log n)^{1/2}) + (n log n)^{1/2} I(X_i ≥ (n log n)^{1/2}),

S_{j1} = Σ_{i=1}^j (X_{i1} − E X_{i1}) and M_{j1} = max_{1≤k≤j} |S_{k1}|. Then for any ε > 0,

P(max_{1≤j≤n} |S_j| ≥ ε(n log n)^{1/2}) ≤ n P(|X_1| ≥ (n log n)^{1/2}) + P(M_{n1} ≥ ε(n log n)^{1/2}) =: I + II.

Hence, by the subsequence method, it is sufficient to prove that

Σ_{n=1}^∞ n^{−1} I < ∞ and Σ_{n=1}^∞ n^{−1} II < ∞.   (4.1)

Noticing that E(|X_1|^{2+γ}(log(1 + |X_1|))^{(γ_2−γ_1−2)/2}) < ∞, we have

Σ_{n=1}^∞ n^{−1} I ≤ Σ_{n=2}^∞ Σ_{j=n}^∞ P((j log j)^{1/2} ≤ |X_1| < ((j + 1) log(j + 1))^{1/2})
≤ Σ_{j=2}^∞ j P((j log j)^{1/2} ≤ |X_1| < ((j + 1) log(j + 1))^{1/2})
≲ E(|X_1|^{2+γ}(log(1 + |X_1|))^{(γ_2−γ_1−2)/2}) < ∞.

It remains to prove that Σ_{n=1}^∞ n^{−1} II < ∞. Since E X_1² ≤ E S_n²/n ≤ u(0) for positively associated random variables, we obtain ε(n log n)^{1/2}/(2√(E S_n²)) > 1 for n large enough. Thus, by Lemma 3.3 with ϑ_1 = ε(n log n)^{1/2}/2 and ϑ_2 = ε(n log n)^{1/2}, and the moment inequality (3.3) with p = 2 + γ_1, r = 2 + γ + γ_2 and ε = (γ_1 − γ_2)/2, it follows that
Σ_{n=1}^∞ n^{−1} II ≤ 2 Σ_{n=1}^∞ n^{−1} P(|S_{n1}| ≥ (1/2)ε(n log n)^{1/2})
≤ C Σ_{n=2}^∞ n^{−(4+γ_1)/2} (log n)^{−(2+γ_1)/2} E|S_{n1}|^{2+γ_1}
≤ C Σ_{n=2}^∞ n^{−(4+γ_1)/2} (log n)^{−(2+γ_1)/2} [n^{1+(γ_1−γ_2)/2} E|X_{11}|^{2+γ_1} + n^{(2+γ_1)/2} (2E|X_{11}|² + 4L)^{(2+γ_1)/2} + K C^{(γ+γ_2−γ_1)/(γ+γ_2)} n^{1+(γ_1−γ_2)/2} (E|X_{11}|^{2+γ+γ_2})^{γ_1/(γ+γ_2)}]
≤ C Σ_{n=2}^∞ n^{−1−γ_2/2} (log n)^{−(2+γ_1)/2} E|X_{11}|^{2+γ_1} + C Σ_{n=2}^∞ n^{−1} (log n)^{−(2+γ_1)/2} + C Σ_{n=2}^∞ n^{−1−γ_2/2} (log n)^{−(2+γ_1)/2} (E|X_{11}|^{2+γ+γ_2})^{γ_1/(γ+γ_2)}
≤ C Σ_{n=2}^∞ n^{−1} (log n)^{−(2+γ_1)/2} + C Σ_{n=2}^∞ n^{−1−γ_2/2} (log n)^{−(2+γ_1)/2} (E|X_{11}|^{2+γ+γ_2})^{(2+γ_1)/(2+γ+γ_2)} + C Σ_{n=2}^∞ n^{−1−γ_2/2} (log n)^{−(2+γ_1)/2} (E|X_{11}|^{2+γ+γ_2})^{γ_1/(γ+γ_2)}
:= II1 + II2 + II3 . Therefore, we need only to show that
II1 < ∞,
II2 < ∞
and
II3 < ∞. 2+γ1
Obviously, II1 < ∞. Turn to II2 . Noticing that either ( E | X 11 |2+γ +γ2 ) 2+γ +γ2 1 for sufficiently large n or E | X 11 |2+γ +γ2 < ∞, we have for the first case
II2 C
∞
n −1 −
γ2
(log n)−
2+γ1 2
−1 −
γ2
−
2+γ1 2
2
n =2
C
∞
n
2
(log n)
E | X 11 |2+γ +γ2
(n log n)
1 1 P ( j log j ) 2 | X 1 | < ( j + 1) log( j + 1) 2
2+γ +γ2 2
n =2
j n
2+γ +γ2 12 1 2 2 + ( j log j ) P ( j − 1) log( j − 1) | X 1 | < ( j log j ) j n
γ2 −γ21 −2 C E | X 1 |2+γ log 1 + | X 1 | < ∞. With respect to the second case, we can directly get II2 < ∞. Similarly, we can obtain II3 < ∞. Combining the results mentioned above yields (4.1). The proof is completed, now. 2 To prove Theorems 2.2 and 2.3, we need the following propositions. Without loss of generality, assume that what follows. By a similar proof to the one of Proposition 3.1 in [8], we have Proposition 4.1. For any β > −1, we have
lim 2β+2
0
∞ (log n)β n =2
n
P | N | ( + kn ) log n =
E | N |2β+2
β +1
and
lim 2β+2
0
∞ (log n)β n =2
n
P
sup W (s) ( + kn ) log n
0s1
where N stands for the standard normal random variable.
=
∞ 2E | N |2β+2
β +1
l =0
(−1)l , (2l + 1)2β+2
σ = 1 in
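The constant in the first limit of Proposition 4.1 can be checked numerically: replacing the sum by its integral analogue and substituting u = ε² log n turns ε^{2β+2} Σ (log n)^β n^{−1} P(|N| ≥ ε√(log n)) into ∫_0^∞ u^β P(|N| ≥ √u) du = E|N|^{2β+2}/(β + 1). The quadrature below is our own sketch (β = −1/2 chosen arbitrarily), not part of the paper.

```python
# Numeric check of the constant E|N|^(2b+2)/(b+1) in Proposition 4.1
# via the integral identity  ∫_0^∞ u^b P(|N| >= sqrt(u)) du,
# here with b = -1/2, so the target is E|N|/(1/2) = 2*sqrt(2/pi).
import math

beta = -0.5

def tail(y):
    """P(|N| >= y) for a standard normal N."""
    return math.erfc(y / math.sqrt(2.0))

# Midpoint rule on u in (0, 40]; the integrand u^beta * tail(sqrt(u))
# is integrable at 0 and decays like a Gaussian at infinity.
h = 1e-4
integral = sum((k * h + h / 2.0) ** beta * tail(math.sqrt(k * h + h / 2.0))
               for k in range(int(40 / h))) * h
target = 2.0 * math.sqrt(2.0 / math.pi)
print(integral, target)
```

The midpoint rule handles the integrable u^{−1/2} singularity at 0 with only a small one-sided error, so the computed value lands within a percent of the target.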
Proposition 4.2. For −1 < β < 0, we obtain

lim_{ε↓0} ε^{2β+2} Σ_{n=2}^∞ ((log n)^β/n) |P(|S_n| ≥ (ε + k_n)√(n log n)) − P(|N| ≥ (ε + k_n)√(log n))| = 0   (4.2)

and

lim_{ε↓0} ε^{2β+2} Σ_{n=2}^∞ ((log n)^β/n) |P(max_{1≤j≤n} |S_j| ≥ (ε + k_n)√(n log n)) − P(sup_{0≤s≤1} |W(s)| ≥ (ε + k_n)√(log n))| = 0.   (4.3)
Proof. Let J(ε) = exp(M/ε²), where M > 4 and 0 < ε < 1/4. Since

ε^{2β+2} Σ_{n=2}^∞ ((log n)^β/n) |P(max_{1≤j≤n} |S_j| ≥ (ε + k_n)√(n log n)) − P(sup_{0≤s≤1} |W(s)| ≥ (ε + k_n)√(log n))|
≤ ε^{2β+2} Σ_{n≤J(ε)} ((log n)^β/n) |P(max_{1≤j≤n} |S_j| ≥ (ε + k_n)√(n log n)) − P(sup_{0≤s≤1} |W(s)| ≥ (ε + k_n)√(log n))|
+ ε^{2β+2} Σ_{n>J(ε)} ((log n)^β/n) P(max_{1≤j≤n} |S_j| ≥ (ε + k_n)√(n log n))
+ ε^{2β+2} Σ_{n>J(ε)} ((log n)^β/n) P(sup_{0≤s≤1} |W(s)| ≥ (ε + k_n)√(log n))
=: I_1 + I_2 + I_3,

it is sufficient to prove that

lim_{ε↓0} I_1 = 0, lim_{ε↓0} I_2 = 0 and lim_{ε↓0} I_3 = 0   (4.4)

in order to obtain (4.3); (4.2) is obtained analogously. We consider I_1 first. Set

Δ_n = sup_x |P(max_{1≤j≤n} |S_j| ≥ x√n) − P(sup_{0≤s≤1} |W(s)| ≥ x)|.

By Lemma 3.5, Δ_n → 0 as n → ∞. It follows that

lim_{ε↓0} I_1 ≤ lim_{ε↓0} ε^{2β+2} (log J(ε))^{β+1} · (1/(log J(ε))^{β+1}) Σ_{n≤J(ε)} ((log n)^β/n) Δ_n → 0,

which implies that lim_{ε↓0} I_1 = 0. Turning to I_2, we have, by |k_n| < ε/4 for sufficiently large n, the maximal moment inequality in Lemma 3.1 and −1 < β < 0,
lim_{ε↓0} I_2 ≤ lim_{ε↓0} ε^{2β+2} Σ_{n>J(ε)} ((log n)^β/n) P(max_{1≤j≤n} |S_j| ≥ (ε/2)√(n log n))
≤ lim_{ε↓0} ε^{2β+2} Σ_{n>J(ε)} ((log n)^β/n) · 4E(max_{1≤j≤n} |S_j|)²/(ε² n log n)
≤ lim_{ε↓0} ε^{2β+2} Σ_{n>J(ε)} ((log n)^β/n) · (8n max_{1≤i≤n} E X_i² + 16nL)/(ε² n log n)
≤ lim_{ε↓0} ε^{2β+2} Σ_{n>J(ε)} ((log n)^β/n) · (8nE X_1² + 16nσ²)/(ε² n log n)
≤ C lim_{ε↓0} ε^{2β} Σ_{n>J(ε)} (log n)^β/(n log n)
≤ C lim_{ε↓0} ε^{2β} ∫_{J(ε)}^∞ ((log x)^β/(x log x)) dx
= −(C/β) lim_{ε↓0} ε^{2β} (log J(ε))^β = −(C/β) M^β → 0 as M → ∞,

uniformly for 0 < ε < 1/4. Hence lim_{ε↓0} I_2 = 0 when M → ∞. On the other hand, noting Lemma 3.4 and that M > 4 and 0 < ε < 1/4 imply J(ε) − 1 ≥ √(J(ε)), we can obtain
lim_{ε↓0} I_3 ≤ 2 lim_{ε↓0} ε^{2β+2} Σ_{n>J(ε)} ((log n)^β/n) P(|N| ≥ (ε + k_n)√(log n)) ≤ C ∫_{√M/8}^∞ y^{2β+1} P(N ≥ y) dy → 0 as M → ∞,

uniformly for 0 < ε < 1/4. Thus we have lim_{ε↓0} I_3 = 0 when M → ∞. Combining the earlier results yields (4.4). Similarly, we have

lim_{ε↓0} ε^{2β+2} Σ_{n=2}^∞ ((log n)^β/n) |P(|S_n| ≥ (ε + k_n)√(n log n)) − P(|N| ≥ (ε + k_n)√(log n))| = 0.

The proof is completed. □
Proposition 4.3. Assume that f is a real function satisfying |f(x)| ≤ B and |f′(x)| ≤ B. Then for β < 0, 0 < ε < 1/4 and exp(2/ε) ≤ m < l,

Var(Σ_{n=m}^l ((log n)^β/n) f(S_n/((ε + k_n)√(n log n)))) ≤ (C/ε²)(log m)^{2β}.   (4.5)

Proof. By Lemma 3.1 and simple computation, we can obtain (4.5). □

Now we can give the

Proof of Theorem 2.2. Combining Propositions 4.1 and 4.2 yields the desired results. □

Proof of Theorem 2.3. By (2.2) and Proposition 4.3, the desired result (2.6) follows. □
Acknowledgment

The authors are grateful to an anonymous referee for his/her careful reading and valuable comments, which improved the presentation of the manuscript.
Appendix A

In this appendix we give the proof of Lemma 3.5. For this purpose, we need the following lemma.

Lemma A.1. Let {X_i, 1 ≤ i ≤ n} be positively associated random variables with joint and marginal characteristic functions φ(r_1, …, r_n) and φ_j(r), respectively. Then we have

|φ(r_1, …, r_n) − Π_{j=1}^n φ_j(r_j)| ≤ (1/2) Σ_{1≤j≠k≤n} |r_j||r_k| Cov(X_j, X_k).   (A.1)
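For n = 2 the bound (A.1) can be verified in closed form for a bivariate normal pair with nonnegative correlation, a standard example of positive association. The grid check below is our own illustration (the correlation 0.3 and the grid are arbitrary choices).

```python
# Grid check of the characteristic-function bound (A.1) for n = 2 with
# a bivariate normal pair, standard marginals and correlation rho >= 0:
#   |phi(r1, r2) - phi1(r1) phi2(r2)| <= |r1||r2| Cov(X1, X2).
import cmath

rho = 0.3  # Cov(X1, X2) for standard normal marginals

def phi_joint(r1, r2):
    return cmath.exp(-0.5 * (r1 * r1 + 2.0 * rho * r1 * r2 + r2 * r2))

def phi_marg(r):
    return cmath.exp(-0.5 * r * r)

worst = 0.0  # largest observed violation lhs - rhs (should stay <= 0)
for i in range(-10, 11):
    for j in range(-10, 11):
        r1, r2 = 0.3 * i, 0.3 * j
        lhs = abs(phi_joint(r1, r2) - phi_marg(r1) * phi_marg(r2))
        rhs = abs(r1) * abs(r2) * rho
        worst = max(worst, lhs - rhs)
print(worst)
```

Here the left side equals e^{−(r_1²+r_2²)/2}|e^{−ρr_1r_2} − 1|, and the elementary bound |e^{x} − 1| ≤ |x|e^{|x|} shows why the inequality holds with room to spare on this grid.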
Proof. We will proceed by induction on n. By the Hoeffding equality, we have

|Cov(exp(ir_1X_1), exp(ir_2X_2))| = |i²r_1r_2 ∫_{−∞}^∞ ∫_{−∞}^∞ exp(ir_1x + ir_2y)[P(X_1 > x, X_2 > y) − P(X_1 > x)P(X_2 > y)] dx dy|
≤ |r_1r_2| ∫_{−∞}^∞ ∫_{−∞}^∞ [P(X_1 > x, X_2 > y) − P(X_1 > x)P(X_2 > y)] dx dy
= |r_1||r_2| Cov(X_1, X_2),   (A.2)

which implies that (A.1) holds for n = 2. Now assume that (A.1) is true for n ≤ m. For n = m + 1, suppose that for some ζ = ±1, η = ±1 and ñ ∈ {1, …, m}, we have ζr_j ≥ 0 for 1 ≤ j ≤ ñ and ηr_j ≥ 0 for ñ + 1 ≤ j ≤ m + 1. Define

Y_1 = Σ_{j=1}^{ñ} ζr_jX_j,  Y_2 = Σ_{j=ñ+1}^{m+1} ηr_jX_j.

Obviously, Y_1 and Y_2 are positively associated by Definition 1.1. Denoting the joint characteristic function of Y_1 and Y_2 by ψ, the marginal characteristic functions by ψ_l (l = 1, 2), Π_{j=1}^{ñ} φ_j(r_j) by γ_1 and Π_{j=ñ+1}^{m+1} φ_j(r_j) by γ_2, we have, by (A.2) and the induction hypothesis,

|φ(r_1, …, r_n) − Π_{j=1}^n φ_j(r_j)| ≤ |ψ(ζ, η) − ψ_1(ζ)ψ_2(η)| + |ψ_1(ζ)| · |ψ_2(η) − γ_2| + |ψ_1(ζ) − γ_1| · |γ_2|
≤ |ζ||η| Cov(Y_1, Y_2) + (1/2) Σ_{ñ+1≤j≠k≤m+1} |r_j||r_k| Cov(X_j, X_k) + (1/2) Σ_{1≤j≠k≤ñ} |r_j||r_k| Cov(X_j, X_k)
= (1/2) Σ_{1≤j≠k≤n} |r_j||r_k| Cov(X_j, X_k),

which completes the proof. □
Now we can give the

Proof of Lemma 3.5. The first thing we need to do is to show that

S_n/√n → N(0, σ²) in distribution.   (A.3)

For this purpose, we need only to prove that

ψ_n(r) := E exp(irS_n/√n) → exp(−σ²r²/2).   (A.4)

For any fixed l ∈ ℕ, let q = [n/l]. Then

|ψ_n(r) − ψ_{ql}(r)| = |E(cos(rS_n/√n) − cos(rS_{ql}/√(ql))) + iE(sin(rS_n/√n) − sin(rS_{ql}/√(ql)))|
= |−2E sin((rS_n/√n + rS_{ql}/√(ql))/2) sin((rS_n/√n − rS_{ql}/√(ql))/2) + 2iE cos((rS_n/√n + rS_{ql}/√(ql))/2) sin((rS_n/√n − rS_{ql}/√(ql))/2)|
≤ √2 · 2 (E sin²((rS_n/√n − rS_{ql}/√(ql))/2))^{1/2}
≤ √2 |r| (Var(S_n/√n − S_{ql}/√(ql)))^{1/2} → 0.   (A.5)
Next define Y_{jl} := (S_{jl} − S_{(j−1)l})/√l for j = 1, …, q (with S_0 = 0), so that S_{ql}/√(ql) = (Y_{1l} + ⋯ + Y_{ql})/√q, and set σ_n² := Var(S_n/√n). Then, from Lemma A.1 with X_j = Y_{jl} and r_j = r/√q and the fact that σ_n² → σ², it follows that

|ψ_{ql}(r) − (ψ_l(r/√q))^q| ≤ (1/2) Σ_{1≤j≠k≤q} (r²/q) Cov(Y_{jl}, Y_{kl})
= (r²/2) [Cov(S_{ql}/√(ql), S_{ql}/√(ql)) − q^{−1} Σ_{j=1}^q Cov(Y_{jl}, Y_{jl})]
= (r²/2) [Var(S_{ql}/√(ql)) − Var(S_l/√l)] → (r²/2)(σ² − σ_l²).   (A.6)

By Taylor expansion, we have

(ψ_l(r/√q))^q = (E exp(i(r/√q)S_l/√l))^q = (1 − (r²/(2q))σ_l² + o(q^{−1}))^q → exp(−σ_l²r²/2).   (A.7)

From (A.5)–(A.7) and σ_l² → σ² as l → ∞, we can obtain

limsup_{n→∞} |ψ_n(r) − exp(−σ²r²/2)| ≤ limsup_{n→∞} |ψ_n(r) − ψ_{ql}(r)| + limsup_{n→∞} |ψ_{ql}(r) − (ψ_l(r/√q))^q|
+ limsup_{n→∞} |(ψ_l(r/√q))^q − exp(−σ_l²r²/2)| + limsup_{n→∞} |exp(−σ_l²r²/2) − exp(−σ²r²/2)|
≤ (r²/2)(σ² − σ_l²) + |exp(−σ_l²r²/2) − exp(−σ²r²/2)| → 0 as l → ∞.

Hence (A.4) holds. On the other hand, by Lemma 3.3 and Theorem 8.4 in [2], tightness follows. Hence we have

W_n(t) → W(t) in distribution on C[0, T],

where W_n(t) = (S_p + (nt − p)X_{p+1})/(σ√n) for p/n ≤ t < (p + 1)/n, 0 ≤ t ≤ T, and W(t) stands for the standard Wiener process. From the results above, we can obtain the desired conclusions. The proof is completed. □
References

[1] R.E. Barlow, F. Proschan, Statistical Theory of Reliability and Life Testing: Probability Models, Holt, Rinehart and Winston, New York, 1975.
[2] P. Billingsley, Convergence of Probability Measures, Wiley, New York, 1968.
[3] T. Birkel, Moment bounds for associated sequences, Ann. Probab. 16 (1988) 1184–1193.
[4] T. Birkel, On the convergence rate in the central limit theorem for associated processes, Ann. Probab. 16 (1988) 1685–1698.
[5] T. Birkel, A note on the strong law of large numbers for positively dependent random variables, Statist. Probab. Lett. 7 (1989) 17–20.
[6] J.T. Cox, G. Grimmett, Central limit theorem for associated random variables and the percolation model, Ann. Probab. 12 (1984) 514–528.
[7] J.D. Esary, F. Proschan, D.W. Walkup, Association of random variables, with applications, Ann. Math. Statist. 38 (1967) 1466–1474.
[8] Keang Fu, Exact rates in log law for positively associated random variables, J. Math. Anal. Appl. 356 (2009) 280–287.
[9] A. Gut, A. Spătaru, Precise asymptotics in the Baum–Katz and Davis law of large numbers, J. Math. Anal. Appl. 248 (2000) 233–246.
[10] C.M. Newman, Normal fluctuations and the FKG inequalities, Comm. Math. Phys. 74 (1980) 119–128.
[11] C.M. Newman, A.L. Wright, An invariance principle for certain dependent sequences, Ann. Probab. 9 (1981) 671–675.
[12] C.M. Newman, A.L. Wright, Associated random variables and martingale inequalities, Z. Wahrscheinlichkeitstheor. Verw. Geb. 56 (1982) 361–371.
[13] C.M. Newman, Asymptotic independence and limit theorems for positively and negatively dependent random variables, in: Y.L. Tong (Ed.), Inequalities in Statistics and Probability, IMS Lecture Notes Monogr. Ser., vol. 5, Inst. Math. Statist., Hayward, CA, 1984, pp. 127–140.
[14] C.M. Newman, Ising models and dependent percolation, in: H.W. Block, A.R. Sampson, T.H. Savits (Eds.), Topics in Statistical Dependence, IMS Lecture Notes Monogr. Ser., vol. 16, 1990, pp. 395–401.
[15] Qiman Shao, Hao Yu, Weak convergence for weighted empirical process of dependent sequences, Ann. Probab. 24 (1996) 2098–2127.
[16] G.G. Roussas, Kernel estimates under association: Strong uniform consistency, Statist. Probab. Lett. 12 (1991) 393–403.
[17] G.G. Roussas, Curve estimation in random fields of associated processes, J. Nonparametr. Stat. 2 (1993) 215–224.
[18] G.G. Roussas, Asymptotic normality of random fields of positively or negatively associated processes, J. Multivariate Anal. 50 (1994) 152–173.
[19] G.G. Roussas, Asymptotic normality of a smooth estimate of a random field distribution function under association, Statist. Probab. Lett. 24 (1995) 77–90.
[20] G.G. Roussas, Exponential probability inequalities with some applications, in: T.S. Ferguson, L.S. Shapley, J.B. MacQueen (Eds.), Statistics, Probability and Game Theory, IMS Lecture Notes Monogr. Ser., vol. 30, Inst. Math. Statist., Hayward, CA, 1996, pp. 303–319.
[21] Shanchao Yang, Complete convergence for sums of positively associated sequences, Chinese J. Appl. Probab. Statist. 17 (2) (2001) 197–202 (in Chinese).
[22] Shanchao Yang, Min Chen, Exponential inequalities for associated random variables and strong law of large numbers, Sci. China Ser. A 49 (1) (2006) 75–85.