Almost sure convergence for ρ̃-mixing random variable sequences

Available online at www.sciencedirect.com

Statistics & Probability Letters 67 (2004) 289–298

Gan Shixin

School of Mathematics and Statistics, Department of Mathematics, Wuhan University, Wuhan, Hubei 430072, PR China

Received September 2003; received in revised form December 2003

Abstract

In this paper we study the almost sure convergence for ρ̃-mixing random variable sequences and obtain some new results which extend and improve the corresponding results of Jamison et al. (Z. Wahrscheinlichkeit. 4 (1965) 40) and Wu (J. Engng. Math. (Chinese) 18 (3) (2001) 58). © 2003 Elsevier B.V. All rights reserved.

MSC: 60F15

Keywords: ρ̃-mixing random variable sequence; Three-series theorem; Almost sure convergence; Strong stability

1. Introduction and preliminaries

Let $(\Omega, \mathcal{F}, P)$ be a probability space. The random variables we deal with are all defined on $(\Omega, \mathcal{F}, P)$. Let $\{\xi_i;\ i \ge 1\}$ be a sequence of random variables. Write $\mathcal{F}_S = \sigma(\xi_i;\ i \in S \subset \mathbb{N})$. Given $\sigma$-algebras $\mathcal{B}, \mathcal{R}$ in $\mathcal{F}$, let

$$\rho(\mathcal{B}, \mathcal{R}) = \sup\{\operatorname{corr}(\xi, \eta);\ \xi \in L_2(\mathcal{B}),\ \eta \in L_2(\mathcal{R})\}, \tag{1.1}$$

where $\operatorname{corr}(\xi, \eta) = (E\xi\eta - E\xi E\eta)/\sqrt{\operatorname{Var}\xi\,\operatorname{Var}\eta}$. Bradley (1990) introduced the following coefficients of dependence:

$$\tilde{\rho}(k) = \sup\{\rho(\mathcal{F}_S, \mathcal{F}_T)\}, \quad k \ge 0, \tag{1.2}$$

where the supremum is taken over all finite subsets $S, T \subset \mathbb{N}$ such that $\operatorname{dist}(S, T) \ge k$. Obviously, $0 \le \tilde{\rho}(k+1) \le \tilde{\rho}(k) \le 1$, $k \ge 0$, and $\tilde{\rho}(0) = 1$.
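The correlation coefficient in (1.1) is the only directly computable ingredient here; the coefficient $\tilde{\rho}(k)$ then takes a supremum over all square-integrable functions of disjoint index sets, which has no closed form in general. As a purely numerical illustration of (1.1) (not of $\tilde{\rho}$ itself), here is a minimal Python sketch; the sample sizes and distributions are arbitrary choices:

```python
import math
import random

def corr(xs, ys):
    # Sample analogue of corr(xi, eta) = (E[xi*eta] - E[xi]E[eta]) / sqrt(Var xi * Var eta)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / math.sqrt(vx * vy)

random.seed(0)
xi = [random.gauss(0.0, 1.0) for _ in range(100_000)]
eta = [x + 0.5 * random.gauss(0.0, 1.0) for x in xi]  # eta = xi + independent noise

print(corr(xi, xi))   # exactly 1 up to rounding
print(corr(xi, eta))  # close to the theoretical value 1/sqrt(1.25)
```

For the pair above the population correlation is $1/\sqrt{1.25} \approx 0.894$, and the sample estimate concentrates around it.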

Supported by the National Natural Science Foundation of China (No. 10071058). E-mail address: [email protected] (G. Shixin).

0167-7152/$ - see front matter © 2003 Elsevier B.V. All rights reserved. doi:10.1016/j.spl.2003.12.011


Definition 1.1. A random variable sequence $\{\xi_i;\ i \ge 1\}$ is said to be a ρ̃-mixing sequence if there exists $k \in \mathbb{N}$ such that $\tilde{\rho}(k) < 1$. Without loss of generality we may assume that $\{\xi_i;\ i \ge 1\}$ satisfies $\tilde{\rho}(1) < 1$ (see Bryc and Smolenski, 1993). ρ̃-mixing is similar to ρ-mixing, but the two conditions are quite different from each other. A number of authors have studied ρ̃-mixing sequences: we refer to Bradley (1990, 1992) for the central limit theorem, Bryc and Smolenski (1993) for moment inequalities and almost sure convergence, Yang Shanchao (1998) for moment inequalities and the strong law of large numbers, Wu Qunying (2001, 2002) for almost sure convergence, and Peligrad and Gut (1999) for almost-sure results.

Definition 1.2. A random variable sequence $\{Y_n;\ n \ge 1\}$ is said to be strongly stable if there exist two constant sequences $\{b_n\}$ and $\{d_n\}$ with $0 < b_n \uparrow \infty$ such that

$$b_n^{-1}Y_n - d_n \to 0 \quad \text{a.s.} \tag{1.3}$$

Definition 1.3. A random variable sequence $\{X_n;\ n \ge 1\}$ is said to be stochastically dominated by a non-negative random variable $X$ if there exists a positive constant $c$ such that

$$P(|X_n| > t) \le cP(X > t) \quad \text{for all } n \ge 1,\ t \ge 0. \tag{1.4}$$

In this case we write $\{X_n\} < X$. Hereinafter $c$ always stands for a positive constant which may differ from one place to another.

The main purpose of this paper is to study the strong stability of linear sums of ρ̃-mixing sequences and to obtain some new results. In the other direction we establish a three-series theorem and use it to prove a strong law of large numbers for ρ̃-mixing sequences. Our results extend and improve the corresponding theorems of Jamison et al. (1965) and Wu Qunying (2001). For this goal we need some lemmas.

Lemma 1.1 (Yang Shanchao, 1998, Theorem 2; Peligrad and Gut, 1999). Let $\{\xi_i;\ i \ge 1\}$ be a ρ̃-mixing sequence with $\tilde{\rho}(1) < 1$, let $X_i$ be $\sigma(\xi_i)$-measurable, $EX_i = 0$, $E|X_i|^p < \infty$, $i \ge 1$, $p > 1$. Then there exists a positive constant $c$ such that

$$\text{(1)}\quad E\left|\sum_{i=1}^{n} X_i\right|^p \le c\sum_{i=1}^{n} E|X_i|^p, \quad \forall n \ge 1,\ 1 < p \le 2; \tag{1.5}$$

$$\text{(2)}\quad E\left|\sum_{i=1}^{n} X_i\right|^p \le c\left(\sum_{i=1}^{n} E|X_i|^p + \left(\sum_{i=1}^{n} EX_i^2\right)^{p/2}\right), \quad \forall n \ge 1,\ p > 2. \tag{1.6}$$

Lemma 1.2 (Yang Shanchao, 1998, Theorem 3). Let $\{\xi_n;\ n \ge 1\}$ be a ρ̃-mixing sequence with $\tilde{\rho}(1) < 1$, $1 < p \le 2$. For any mean zero, $L_p$-integrable random variable sequence $\{X_n;\ n \ge 1\}$, if $X_n$ is $\sigma(\xi_n)$-measurable, $n \ge 1$, and $\sum_{n=1}^{\infty} (\log n)^p E|X_n|^p < \infty$, then $\sum_{n=1}^{\infty} X_n$ converges almost surely.


Corollary 1.1. Let the conditions of Lemma 1.2 be fulfilled and let $\{b_n;\ n \ge 1\}$ be a sequence of positive numbers with $b_n \uparrow \infty$. If $\sum_{n=1}^{\infty} (\log n)^p E|X_n|^p/b_n^p < \infty$, then $b_n^{-1}\sum_{i=1}^{n} X_i \to 0$ a.s.

Taking $b_n = n^{1/p}$ and $b_n = n^{1/p}(\log n)^{(p+1+\delta)/p}$, respectively, in Corollary 1.1 we immediately get the following corollaries.

Corollary 1.2. Let $\{X_n;\ n \ge 1\}$ be a mean zero ρ̃-mixing sequence with $\tilde{\rho}(1) < 1$. If $E|X_n|^p \le c(\log n)^{-(p+1+\delta)}$, $1 < p \le 2$, $\delta > 0$, $n \ge 2$, then $n^{-1/p}\sum_{i=1}^{n} X_i \to 0$ a.s.

Corollary 1.3. Let $\{X_n;\ n \ge 1\}$ be a mean zero ρ̃-mixing sequence with $\tilde{\rho}(1) < 1$. If $E|X_n|^p \le c$, $n \ge 1$, $1 < p \le 2$, $\delta > 0$, then $n^{-1/p}(\log n)^{-(p+1+\delta)/p}\sum_{i=1}^{n} X_i \to 0$ a.s.

Lemma 1.3. Let $\{a_n;\ n \ge 1\}$ and $\{b_n;\ n \ge 1\}$ be two sequences of positive numbers with $c_1 = b_1/a_1$, $c_n = b_n/(a_n \log n)$, $n \ge 2$, and $b_n \uparrow \infty$. Let $\{X_n;\ n \ge 1\}$ be a sequence of mean zero random variables with $\{X_n\} < X$. Define $N(x) = \operatorname{Card}\{n : c_n \le x\}$, $x \in \mathbb{R}$. If

$$\text{(1)}\quad \sum_{i=1}^{\infty} P(|X_i| > c_i) < \infty; \qquad \text{(2)}\quad \sum_{i=1}^{\infty}\int_1^{\infty} P(|X_i| > sc_i)\,ds < \infty;$$

or

$$\text{(1}'\text{)}\quad EN(X) < \infty; \qquad \text{(2}'\text{)}\quad \int_1^{\infty} EN(X/s)\,ds < \infty;$$

then

$$b_n^{-1}\sum_{i=1}^{n} a_i EX_i I(|X_i| \le c_i) \to 0 \quad \text{as } n \to \infty. \tag{1.7}$$
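To make the normalizing sequence $c_n$ and the counting function $N(x)$ of Lemma 1.3 concrete, here is a small Python sketch under the hypothetical choice $a_n \equiv 1$, $b_n = n$ (so $c_n = n/\log n$ for $n \ge 2$); the choice is illustrative only:

```python
import math

def c_seq(n_max):
    # c_1 = b_1/a_1 and c_n = b_n/(a_n log n) for n >= 2, with a_n = 1, b_n = n
    return [1.0] + [n / math.log(n) for n in range(2, n_max + 1)]

def N(x, cs):
    # N(x) = Card{ n : c_n <= x }
    return sum(1 for c in cs if c <= x)

cs = c_seq(10_000)
# c_n grows roughly like n/log n, so N(x) grows roughly like x log x:
print(N(10.0, cs), N(100.0, cs))
```

Condition (1′), $EN(X) < \infty$, then controls how much mass $X$ may place far out relative to the growth of $c_n$.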

Proof. Clearly (1′) ⇒ (1) and (2′) ⇒ (2). It suffices to show (1.7) under conditions (1) and (2). Since $EX_i = 0$, we have $|EX_i I(|X_i| \le c_i)| = |EX_i I(|X_i| > c_i)| \le E|X_i| I(|X_i| > c_i)$, and therefore

$$\begin{aligned}
\sum_{i=1}^{\infty} \frac{a_i \log i}{b_i}\,|EX_i I(|X_i| \le c_i)|
&\le \sum_{i=1}^{\infty} c_i^{-1} E|X_i| I(|X_i| > c_i)\\
&\le \sum_{i=1}^{\infty} c_i^{-1}\left(c_i P(|X_i| > c_i) + \int_{c_i}^{\infty} P(|X_i| > t)\,dt\right)\\
&= \sum_{i=1}^{\infty} P(|X_i| > c_i) + \sum_{i=1}^{\infty}\int_1^{\infty} P(|X_i| > sc_i)\,ds < \infty.
\end{aligned} \tag{1.8}$$

Therefore $\sum_{i=1}^{\infty} a_i EX_i I(|X_i| \le c_i)/b_i$ converges. By Kronecker's lemma we have (1.7).

Lemma 1.4 (Shixin, to appear, Lemma 1.3). Let $X$ be a random variable and $X_0$ a non-negative random variable. If for any $t \ge 0$, $P(|X| > t) \le cP(X_0 > t)$, then for all $p > 0$ and $t > 0$,

$$E|X|^p I(|X| \le t) \le c\bigl(t^p P(X_0 > t) + EX_0^p I(X_0 \le t)\bigr). \tag{1.9}$$
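Lemma 1.4 is quoted from a paper that was still to appear; for the reader's convenience, (1.9) follows from a standard tail-integration argument (a sketch, not the cited proof):

```latex
\begin{aligned}
E|X|^p I(|X| \le t)
  &= \int_0^{t^p} P\bigl(u^{1/p} < |X| \le t\bigr)\,du
   \le \int_0^t p s^{p-1} P(|X| > s)\,ds \qquad (u = s^p)\\
  &\le c \int_0^t p s^{p-1} P(X_0 > s)\,ds
   = c\,E\bigl[\min(X_0, t)^p\bigr]
   = c\bigl(t^p P(X_0 > t) + E X_0^p I(X_0 \le t)\bigr).
\end{aligned}
```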


2. Main results

In this section, if $X$ is a random variable and $c$ is a positive constant, we write $X^{(c)} = XI(|X| \le c)$.

Theorem 2.1. Let $\{a_n;\ n \ge 1\}$ and $\{b_n;\ n \ge 1\}$ be two sequences of positive numbers with $c_1 = b_1/a_1$, $c_n = b_n/(a_n \log n)$, $n \ge 2$, and $b_n \uparrow \infty$. Let $\{X_n;\ n \ge 1\}$ be a ρ̃-mixing sequence with $\tilde{\rho}(1) < 1$ and $\{X_n\} < X$. Set $N(x) = \operatorname{Card}\{n : c_n \le x\}$, $x \in \mathbb{R}$, and let $1 < p \le 2$. If the following conditions are satisfied:

$$\text{(1)}\quad EN(X) < \infty, \tag{2.1}$$

$$\text{(2)}\quad \int_0^{\infty} t^{p-1} P(X > t) \int_t^{\infty} N(y)/y^{p+1}\,dy\,dt < \infty, \tag{2.2}$$

then there exist $d_n \in \mathbb{R}$, $n = 1, 2, \ldots$, such that

$$b_n^{-1}\sum_{i=1}^{n} a_i X_i - d_n \to 0 \quad \text{a.s.} \tag{2.3}$$

From Theorem 2.1 and Lemma 1.3 we can easily get

Corollary 2.1. Let the conditions of Theorem 2.1 be fulfilled. If $EX_n = 0$, $n \ge 1$, and

$$\text{(3)}\quad \int_1^{\infty} EN(X/s)\,ds < \infty, \tag{2.4}$$

then

$$b_n^{-1}\sum_{i=1}^{n} a_i X_i \to 0 \quad \text{a.s.} \tag{2.5}$$

Theorem 2.2. Let $\{a_n;\ n \ge 1\}$ and $\{b_n;\ n \ge 1\}$ be two sequences of positive numbers with $c_1 = b_1/a_1$, $c_n = b_n/(a_n \log n)$, $n \ge 2$, and $b_n \uparrow \infty$. Let $\{X_n;\ n \ge 1\}$ be a ρ̃-mixing sequence with $\tilde{\rho}(1) < 1$ and $\{X_n\} < X$. If conditions (1), (3) and (4) are satisfied, where

$$\text{(4)}\quad \left(\max_{1 \le j \le n} c_j^p\right)\sum_{k=n}^{\infty} c_k^{-p} = O(n), \tag{2.6}$$

then

$$b_n^{-1}\sum_{i=1}^{n} a_i X_i \to 0 \quad \text{a.s.} \tag{2.7}$$


Theorem 2.3. Let $\{X_n;\ n \ge 1\}$ be an identically distributed ρ̃-mixing sequence with $\tilde{\rho}(1) < 1$, $1 < p \le 2$, $\delta > 0$, $\alpha \ge \max\{(1+\delta)/p,\ 1\}$. If $EX_1 = 0$ and $E|X_1|^p < \infty$, then

$$\sum_{n=1}^{\infty} n^{\alpha p-2-\delta} P\left(\left|\sum_{i=1}^{n} X_i\right| > \varepsilon n^{\alpha}\right) < \infty, \quad \forall \varepsilon > 0. \tag{2.8}$$

Taking $\delta = 1$, $p = 2$, $\alpha = 1$ in Theorem 2.3 we get the following.

Corollary 2.2. Let $\{X_n;\ n \ge 1\}$ be an identically distributed ρ̃-mixing sequence with $\tilde{\rho}(1) < 1$, $EX_1 = 0$ and $EX_1^2 < \infty$. Then

$$\sum_{n=1}^{\infty} n^{-1} P\left(n^{-1}\left|\sum_{i=1}^{n} X_i\right| > \varepsilon\right) < \infty, \quad \forall \varepsilon > 0. \tag{2.9}$$
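An independent, identically distributed, mean-zero sequence is the simplest ρ̃-mixing example (functions of disjoint index sets are independent, so $\tilde{\rho}(1) = 0$), and Corollary 2.2 applies to it. A quick Monte Carlo sketch of the underlying strong law (illustrative only; the distribution and sample sizes are arbitrary choices):

```python
import random

random.seed(42)

def scaled_partial_sum(n):
    # n^{-1} S_n for X_i i.i.d. uniform on [-1, 1]: EX_1 = 0, EX_1^2 = 1/3 < infinity
    return sum(random.uniform(-1.0, 1.0) for _ in range(n)) / n

# |n^{-1} S_n| should be small for large n, consistent with S_n / n -> 0 a.s.
for n in (10, 1_000, 100_000):
    print(n, abs(scaled_partial_sum(n)))
```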

Theorem 2.4 (The three-series theorem). Let $\{X_n;\ n \ge 1\}$ be a ρ̃-mixing sequence with $\tilde{\rho}(1) < 1$, $1 < p \le 2$. If for some $c > 0$ the following three series converge, i.e.,

$$\text{(i)}\quad \sum_{n=1}^{\infty} P(|X_n| > c) < \infty, \tag{2.10}$$

$$\text{(ii)}\quad \sum_{n=1}^{\infty} EX_n^{(c)} \quad \text{converges}, \tag{2.11}$$

$$\text{(iii)}\quad \sum_{n=1}^{\infty} (\log n)^p E|X_n^{(c)}|^p < \infty, \tag{2.12}$$

then $\sum_{n=1}^{\infty} X_n$ converges almost surely.
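As a concrete check of Theorem 2.4, consider the hypothetical independent sequence $X_n = r_n n^{-0.9}$ with symmetric signs $r_n = \pm 1$ (independence gives $\tilde{\rho}(1) = 0$). With $c = 1$ and $p = 2$: series (i) has all terms zero, series (ii) is identically zero, and series (iii) is $\sum (\log n)^2 n^{-1.8}$, which converges. A short numerical sketch:

```python
import math
import random

amp = lambda n: n ** -0.9  # |X_n| = n^{-0.9}

# Series (iii) with p = 2, c = 1: sum (log n)^2 * n^{-1.8}. Successive partial
# sums increase by less and less, consistent with convergence:
partials = [sum(math.log(n) ** 2 * amp(n) ** 2 for n in range(2, N))
            for N in (10**3, 10**4, 10**5)]
print(partials)

# Theorem 2.4 then gives a.s. convergence of sum X_n; one sample path:
random.seed(1)
s = sum(random.choice((-1.0, 1.0)) * amp(n) for n in range(1, 200_000))
print(abs(s))  # a single realization of |sum X_n|, finite
```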

Theorem 2.5. Let $\{X_n;\ n \ge 1\}$ be a ρ̃-mixing sequence with $\tilde{\rho}(1) < 1$, $1 < p \le 2$, and let $\{g_n(x);\ n \ge 1\}$ be a sequence of even functions, positive and non-decreasing in the interval $x > 0$. Suppose that for every $n \ge 1$ one of the following conditions is satisfied:

$$\text{(a)}\quad x/g_n(x) \uparrow \quad \text{as } 0 < x \uparrow; \tag{2.13}$$

$$\text{(b)}\quad x/g_n(x) \downarrow \ \text{and}\ g_n(x)/x^p \downarrow \quad \text{as } 0 < x \uparrow, \ \text{and also } EX_n = 0. \tag{2.14}$$

Then for any positive number sequence $\{b_n;\ n \ge 1\}$ with $b_n \uparrow \infty$ satisfying

$$\sum_{n=1}^{\infty} (\log n)^p Eg_n(X_n)/g_n(b_n) < \infty, \tag{2.15}$$

the series $\sum_{n=1}^{\infty} X_n/b_n$ converges almost surely, and therefore $b_n^{-1}\sum_{i=1}^{n} X_i \to 0$ a.s.

Remark. Theorems 2.4 and 2.5 reduce to Theorems 4 and 5 of Wu Qunying (2001) when $p = 2$. But our results are more general, and conditions (2.12) and (2.15) are slightly weaker than Wu's when $1 < p < 2$.


3. Proofs

Proof of Theorem 2.1. Let $S_n = \sum_{i=1}^{n} a_i X_i$, $T_n = \sum_{i=1}^{n} a_i X_i^{(c_i)}$, $n \ge 1$. Then

$$\sum_{n=1}^{\infty} P(X_n \ne X_n^{(c_n)}) = \sum_{n=1}^{\infty} P(|X_n| > c_n) \le c\sum_{n=1}^{\infty} P(X > c_n) \le cEN(X) < \infty. \tag{3.1}$$

By the Borel–Cantelli lemma, for any sequence $\{d_n\} \subset \mathbb{R}$ the sequences $b_n^{-1}T_n - d_n$ and $b_n^{-1}S_n - d_n$ converge on the same set and to the same limit. From the $c_r$-inequality, Jensen's inequality and Lemma 1.4 it follows that

$$\begin{aligned}
\sum_{n=1}^{\infty} (\log n)^p E|a_n(X_n^{(c_n)} - EX_n^{(c_n)})|^p / b_n^p
&\le c\sum_{n=1}^{\infty} c_n^{-p} E|X_n|^p I(|X_n| \le c_n)\\
&\le c\sum_{n=1}^{\infty} c_n^{-p}\bigl(c_n^p P(X > c_n) + EX^p I(X \le c_n)\bigr)\\
&\le c\sum_{n=1}^{\infty} P(X > c_n) + c\sum_{n=1}^{\infty} p c_n^{-p}\int_0^{c_n} t^{p-1} P(X > t)\,dt\\
&\le cEN(X) + cp\int_0^{\infty} t^{p-1} P(X > t)\sum_{\{n:\,c_n > t\}} c_n^{-p}\,dt\\
&\le cEN(X) + cp^2\int_0^{\infty} t^{p-1} P(X > t)\int_t^{\infty} N(y)/y^{p+1}\,dy\,dt.
\end{aligned} \tag{3.2}$$

The last inequality follows from the fact that

$$\begin{aligned}
\sum_{\{n:\,c_n > t\}} c_n^{-p}
&= \lim_{u\to\infty}\sum_{\{n:\,t < c_n \le u\}} c_n^{-p} = \lim_{u\to\infty}\int_{(t,u]} y^{-p}\,dN(y)\\
&= \lim_{u\to\infty}\left(u^{-p}N(u) - t^{-p}N(t) + p\int_t^{u} y^{-(p+1)} N(y)\,dy\right)
\le p\int_t^{\infty} y^{-(p+1)} N(y)\,dy,
\end{aligned} \tag{3.3}$$

together with $u^{-p}N(u) \le p\int_u^{\infty} y^{-(p+1)} N(y)\,dy \to 0$ as $u \to \infty$.


Hence from (3.2), (3.3), (2.1) and (2.2) we obtain $\sum_{n=1}^{\infty} (\log n)^p E|a_n(X_n^{(c_n)} - EX_n^{(c_n)})|^p/b_n^p < \infty$. By Lemma 1.2 it follows that $\sum_{n=1}^{\infty} a_n(X_n^{(c_n)} - EX_n^{(c_n)})/b_n$ converges almost surely. Therefore, by Kronecker's lemma, $b_n^{-1}\sum_{i=1}^{n} a_i(X_i^{(c_i)} - EX_i^{(c_i)}) \to 0$ a.s. Take $d_n = b_n^{-1}\sum_{i=1}^{n} a_i EX_i^{(c_i)}$, $n = 1, 2, \ldots$; then $b_n^{-1}\sum_{i=1}^{n} a_i X_i - d_n \to 0$ a.s. The proof is complete.

Proof of Theorem 2.2. By Lemma 1.3 and the proof of Theorem 2.1 we need only show $b_n^{-1}\sum_{i=1}^{n} a_i(X_i^{(c_i)} - EX_i^{(c_i)}) \to 0$ a.s. Put $\lambda_n = \max_{1 \le j \le n} c_j$, $\lambda_0 = 0$. Then by the $c_r$-inequality, Jensen's inequality and Lemma 1.4 we have

$$\begin{aligned}
\sum_{n=1}^{\infty} (\log n)^p E|a_n(X_n^{(c_n)} - EX_n^{(c_n)})|^p/b_n^p
&\le c\sum_{n=1}^{\infty} c_n^{-p} E|X_n|^p I(|X_n| \le c_n)\\
&\le c\sum_{n=1}^{\infty} c_n^{-p}\bigl(c_n^p P(X > c_n) + EX^p I(X \le c_n)\bigr)\\
&\le cEN(X) + c\sum_{n=1}^{\infty} EX^p I(X \le c_n)/c_n^p
\end{aligned} \tag{3.4}$$

and

$$\begin{aligned}
\sum_{n=1}^{\infty} EX^p I(X \le c_n)/c_n^p
&\le \sum_{n=1}^{\infty} EX^p I(X \le \lambda_n)/c_n^p
 = \sum_{n=1}^{\infty}\sum_{j=1}^{n} EX^p I(\lambda_{j-1} < X \le \lambda_j)/c_n^p\\
&\le \sum_{j=1}^{\infty} P(\lambda_{j-1} < X \le \lambda_j)\,\lambda_j^p\sum_{n=j}^{\infty} c_n^{-p}
 \le c\sum_{j=1}^{\infty} jP(\lambda_{j-1} < X \le \lambda_j)\\
&= c\sum_{j=1}^{\infty}\sum_{n=1}^{j} P(\lambda_{j-1} < X \le \lambda_j)
 = c\sum_{n=1}^{\infty} P(X > \lambda_{n-1})\\
&\le c\left(1 + \sum_{n=1}^{\infty} P(X > c_n)\right) \le c(1 + EN(X)) < \infty.
\end{aligned} \tag{3.5}$$

From (3.4), (3.5) and Lemma 1.2, $\sum_{n=1}^{\infty} a_n(X_n^{(c_n)} - EX_n^{(c_n)})/b_n$ converges almost surely. By Kronecker's lemma we have $b_n^{-1}\sum_{i=1}^{n} a_i(X_i^{(c_i)} - EX_i^{(c_i)}) \to 0$ a.s. The proof is complete.

Proof of Theorem 2.3. Put $Y_{ni} = X_i I(|X_i| \le n^{\alpha})$, $Z_{ni} = X_i I(|X_i| > n^{\alpha})$, $1 \le i \le n$, $n \ge 1$. Then $|\sum_{i=1}^{n} X_i| \le |\sum_{i=1}^{n}(Y_{ni} - EY_{ni})| + |\sum_{i=1}^{n} Z_{ni}| + |\sum_{i=1}^{n} EY_{ni}|$, $n \ge 1$. We need only show

$$\sum_{n=1}^{\infty} n^{\alpha p-2-\delta} P\left(\left|\sum_{i=1}^{n}(Y_{ni} - EY_{ni})\right| > \varepsilon n^{\alpha}/3\right) < \infty, \tag{3.6}$$

$$\sum_{n=1}^{\infty} n^{\alpha p-2-\delta} P\left(\left|\sum_{i=1}^{n} Z_{ni}\right| > \varepsilon n^{\alpha}/3\right) < \infty, \tag{3.7}$$

$$n^{-\alpha}\left|\sum_{i=1}^{n} EY_{ni}\right| \to 0 \quad \text{as } n \to \infty. \tag{3.8}$$

Firstly, we prove (3.6). It follows from the Markov inequality, the $c_r$-inequality, Jensen's inequality and Lemma 1.1 that

$$\begin{aligned}
\sum_{n=1}^{\infty} n^{\alpha p-2-\delta} P\left(\left|\sum_{i=1}^{n}(Y_{ni} - EY_{ni})\right| > \varepsilon n^{\alpha}/3\right)
&\le c\sum_{n=1}^{\infty} n^{-(2+\delta)} E\left|\sum_{i=1}^{n}(Y_{ni} - EY_{ni})\right|^p\\
&\le c\sum_{n=1}^{\infty} n^{-(2+\delta)}\sum_{i=1}^{n} E|Y_{ni}|^p\\
&\le c\sum_{n=1}^{\infty} n^{-(1+\delta)} E|X_1|^p < \infty.
\end{aligned} \tag{3.9}$$

Secondly, we prove (3.7). Note that

$$\left\{\left|\sum_{i=1}^{n} Z_{ni}\right| > \varepsilon n^{\alpha}/3\right\} \subset \bigcup_{i=1}^{n}\{|X_i| > n^{\alpha}\}.$$

Therefore,

$$\begin{aligned}
\sum_{n=1}^{\infty} n^{\alpha p-2-\delta} P\left(\left|\sum_{i=1}^{n} Z_{ni}\right| > \varepsilon n^{\alpha}/3\right)
&\le \sum_{n=1}^{\infty} n^{\alpha p-2-\delta}\sum_{i=1}^{n} P(|X_i| > n^{\alpha})
 = \sum_{n=1}^{\infty} n^{\alpha p-1-\delta} P(|X_1| > n^{\alpha})\\
&= \sum_{n=1}^{\infty} n^{\alpha p-1-\delta}\sum_{j=n}^{\infty} P(j^{\alpha} < |X_1| \le (j+1)^{\alpha})\\
&= \sum_{j=1}^{\infty}\sum_{n=1}^{j} n^{\alpha p-1-\delta} P(j^{\alpha} < |X_1| \le (j+1)^{\alpha})\\
&\le c\sum_{j=1}^{\infty} j^{\alpha p-\delta} P(j^{\alpha} < |X_1| \le (j+1)^{\alpha}) \le cE|X_1|^p < \infty.
\end{aligned} \tag{3.10}$$

Lastly, since $EX_i = 0$ gives $EY_{ni} = -EZ_{ni}$,

$$n^{-\alpha}\left|\sum_{i=1}^{n} EY_{ni}\right| \le n^{-\alpha}\sum_{i=1}^{n}|EZ_{ni}| \le n^{-\alpha}\sum_{i=1}^{n} E|X_i| I(|X_i| > n^{\alpha}) = n^{1-\alpha} E|X_1| I(|X_1| > n^{\alpha}) \to 0 \tag{3.11}$$

as $n \to \infty$. The proof is complete.

Proof of Theorem 2.4. From the $c_r$-inequality, Jensen's inequality and (2.12) it follows that

$$\sum_{n=1}^{\infty} (\log n)^p E|X_n^{(c)} - EX_n^{(c)}|^p \le c\sum_{n=1}^{\infty} (\log n)^p E|X_n^{(c)}|^p < \infty. \tag{3.12}$$

By Lemma 1.2 we know $\sum_{n=1}^{\infty} (X_n^{(c)} - EX_n^{(c)})$ converges almost surely, and hence by (2.11) $\sum_{n=1}^{\infty} X_n^{(c)}$ converges almost surely. But from (2.10) we have

$$\sum_{n=1}^{\infty} P(X_n \ne X_n^{(c)}) = \sum_{n=1}^{\infty} P(|X_n| > c) < \infty. \tag{3.13}$$

In virtue of the Borel–Cantelli lemma we know $\sum_{n=1}^{\infty} X_n$ converges almost surely.

Proof of Theorem 2.5. For each $n \ge 1$, set $Z_n = X_n/b_n$. Then $\{Z_n;\ n \ge 1\}$ remains a ρ̃-mixing sequence with $\tilde{\rho}(1) < 1$. Take $c = 1$ in Theorem 2.4. Since $g_n(x) \uparrow$ for $x > 0$, we have $g_n(|X_n|) \ge g_n(b_n)$ on $\{|Z_n| > 1\}$. So

$$\sum_{n=1}^{\infty} P(|Z_n| > 1) \le \sum_{n=1}^{\infty}\int \frac{g_n(|X_n|)}{g_n(b_n)}\, I(|Z_n| > 1)\,dP \le \sum_{n=1}^{\infty} Eg_n(X_n)/g_n(b_n) < \infty. \tag{3.14}$$

Suppose first that $g_n(x)$ satisfies condition (a). Then in the interval $|x| \le b_n$ we have $|x|/b_n \le g_n(x)/g_n(b_n)$, and therefore $|x|^p/b_n^p \le g_n^p(x)/g_n^p(b_n) \le g_n(x)/g_n(b_n)$. If, however, $n$ is such that (b) is satisfied, then in the same interval $|x| \le b_n$ it follows from $g_n(x)/x^p \downarrow$ that $|x|^p/b_n^p \le g_n(x)/g_n(b_n)$. Consequently,

$$\sum_{n=1}^{\infty} (\log n)^p E|Z_n^{(1)}|^p \le \sum_{n=1}^{\infty} (\log n)^p Eg_n(X_n)/g_n(b_n) < \infty. \tag{3.15}$$


If condition (a) is satisfied, then

$$|EZ_n^{(1)}| \le E|Z_n| I(|X_n| \le b_n) \le E\frac{g_n(|X_n|)}{g_n(b_n)}\, I(|X_n| \le b_n) \le Eg_n(X_n)/g_n(b_n).$$

If condition (b) is satisfied, then, since $EX_n = 0$,

$$|EZ_n^{(1)}| \le E|Z_n| I(|Z_n| > 1) \le E\frac{g_n(|X_n|)}{g_n(b_n)}\, I(|Z_n| > 1) \le Eg_n(X_n)/g_n(b_n).$$

Consequently, in either case we have

$$\sum_{n=1}^{\infty} |EZ_n^{(1)}| \le \sum_{n=1}^{\infty} Eg_n(X_n)/g_n(b_n) < \infty, \tag{3.16}$$

so $\sum_{n=1}^{\infty} EZ_n^{(1)}$ converges. From Theorem 2.4 we know $\sum_{n=1}^{\infty} X_n/b_n$ converges almost surely.

References

Bradley, R.C., 1990. Equivalent mixing conditions of random fields. Technical Report No. 336, Center for Stochastic Processes, University of North Carolina, Chapel Hill.
Bradley, R.C., 1992. On the spectral density and asymptotic normality of weakly dependent random fields. J. Theoret. Probab. 5, 355–374.
Bryc, W., Smolenski, W., 1993. Moment conditions for almost sure convergence of weakly correlated random variables. Proc. Amer. Math. Soc. 119 (2), 629–635.
Jamison, B., Orey, S., Pruitt, W., 1965. Convergence of weighted averages of independent random variables. Z. Wahrscheinlichkeit. 4, 40–44.
Peligrad, M., Gut, A., 1999. Almost-sure results for a class of dependent random variables. J. Theoret. Probab. 12, 87–104.
Shixin, G. Strong stability of weighted sums of NA random variables, to appear.
Wu Qunying, 2001. Some convergence properties for ρ̃-mixing sequences. J. Engng. Math. (Chinese) 18 (3), 58–64.
Wu Qunying, 2002. Convergence for weighted sums of ρ̃-mixing random sequences. Math. Appl. (Chinese) 15 (1), 1–4.
Yang Shanchao, 1998. Some moment inequalities for partial sums of random variables and their applications. Chinese Sci. Bull. 43 (17), 1823–1827.