The functional CLT for linear processes generated by mixing random variables with infinite variance


Statistics and Probability Letters 78 (2008) 2095–2101 www.elsevier.com/locate/stapro

H.J. Moon
Department of Mathematics, Gyeongsang National University, Jinju 660-701, Republic of Korea
Received 28 January 2008; accepted 28 January 2008; available online 14 February 2008

Abstract

In this paper we study the central limit theorem and the functional central limit theorem for a linear process generated by stationary ρ-mixing random variables under the infinite variance assumption.

MSC: 60F05; 60F17

1. Introduction and results

Let {X_n, n = 0, ±1, …} be a linear process defined by
$$
X_n = \sum_{i=0}^{\infty} a_i Y_{n-i},
$$
where the innovations Y_i are strictly stationary random variables and the a_i are constants such that $\sum_{i=0}^{\infty}|a_i| < \infty$. Let $S_n = \sum_{i=1}^{n} X_i$ be the partial sum of the X_i. Limit theorems for linear processes have been studied extensively in the literature under various conditions on the Y_i. In this paper we devote our attention to the central limit theorem (CLT) and the functional central limit theorem (FCLT) for the partial sum S_n in the case where the Y_i have infinite variance. Under the assumption of finite variance, or of a suitable finite moment, there is a large body of work on these topics. For i.i.d. innovations Y_i we refer the reader to Fakhre-Zakeri and Lee (1992), Fakhre-Zakeri and Farshidi (1993) and Wang et al. (2002). For strictly stationary innovations Y_i with certain dependence structures we refer the reader to Kim and Baek (2001), Lu (2003), Lee (1997) and Wang et al. (2001). Recently, Kulik (2006) and Juodis and Račkauskas (2007) established the CLT and the FCLT for normalized partial sum processes of i.i.d. variables with infinite variance; their results may be viewed as extending the papers mentioned above to the infinite-variance case. It is then natural to ask for such limit theorems when the innovations Y_i are strictly stationary dependent random variables with infinite variance.
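For orientation, here is a minimal simulation sketch of this setting (not taken from the paper): it generates a linear process from innovations whose variance is infinite but which still admit a Gaussian limit after suitable normalization (see condition (1.1) below), using hypothetical coefficients a_i = 0.6^i truncated at lag 50, and forms the partial sums S_n.

```python
import numpy as np

rng = np.random.default_rng(0)

def innovations(size):
    """Symmetric Pareto-type innovations with P(|Y| > x) = x**(-2), x >= 1.
    Then EY = 0 and EY^2 = infinity, yet the truncated second moment
    L(x) = E[Y^2 1{|Y| <= x}] = 2 log x is slowly varying (condition (1.1))."""
    u = rng.uniform(size=size)                      # uniform on [0, 1)
    return rng.choice([-1.0, 1.0], size=size) * (1.0 - u) ** -0.5

# hypothetical absolutely summable coefficients a_i = 0.6**i, truncated at lag 50
a = 0.6 ** np.arange(51)

n = 10_000
burn = len(a)                        # extra innovations so every X_t sees a full window
Y = innovations(n + burn)
X = np.convolve(Y, a, mode="full")[burn:burn + n]   # X_t = sum_i a_i Y_{t-i} (truncated MA)
S = np.cumsum(X)                     # partial sums S_1, ..., S_n
print(S[-1], np.abs(Y).max())        # heavy tails: occasional very large innovations
```

The normalization under which S_n and the partial-sum process converge is the content of Theorem 1 below.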



The aim of this paper is to investigate the CLT and the FCLT for a linear process generated by a stationary ρ-mixing sequence of random variables under condition (1.1) below instead of a finite moment condition. Suppose that {Y_i, i = 0, ±1, …} is a strictly stationary sequence of random variables on a probability space (Ω, F, P). For −∞ < n ≤ m < ∞ let $\mathcal{F}_n^m$ denote the σ-field generated by {Y_i, n ≤ i ≤ m}. We say that {Y_i, i = 0, ±1, …} is ρ-mixing if ρ(n) → 0 as n → ∞, where
$$
\rho(n) = \sup_{m \in \mathbb{N}} \; \sup_{f \in L^2(\mathcal{F}_{-\infty}^{m}),\; g \in L^2(\mathcal{F}_{n+m}^{\infty})} |\operatorname{corr}(f, g)|.
$$
Throughout the paper we assume that
$$
E Y_0 = 0 \quad\text{and}\quad L(x) := E Y_0^2 I\{|Y_0| \le x\} \ \text{is slowly varying at } \infty. \tag{1.1}
$$
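For concreteness, here is a standard example (not taken from the paper) of a distribution with infinite variance satisfying (1.1): let Y_0 be symmetric with density $f(y) = |y|^{-3}$ for $|y| \ge 1$. Then $EY_0 = 0$, $EY_0^2 = \infty$, and
$$
L(x) = E Y_0^2 I\{|Y_0| \le x\} = \int_{1 \le |y| \le x} y^2\,|y|^{-3}\,dy = 2\log x, \qquad x \ge 1,
$$
which is slowly varying at ∞, so (1.1) holds even though the variance is infinite.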

This condition (1.1) is well known to be equivalent to saying that Y_0 belongs to the domain of attraction of the normal law. Under condition (1.1), Bradley (1988) and Shao (1993) proved, respectively, the CLT and the FCLT for a stationary ρ-mixing sequence with infinite variance. Some results of those papers will play important roles in the proof of our main theorem.

The following notation will be used: the greatest integer ≤ x is denoted by [x], and "⇒" denotes weak convergence in the space D[0, 1] with the Skorohod topology. Our main result is as follows.

Theorem 1. Suppose that ρ(1) < 1 and $\sum_{n=1}^{\infty}\rho(2^n) < \infty$. Then there exists a sequence {v_n, n = 1, 2, …} of positive numbers with v_n → ∞ as n → ∞ such that
$$
\frac{S_n}{\beta v_n} \xrightarrow{D} N(0, 1)
$$
and
$$
W_n(t) \Rightarrow W(t),
$$
where $\beta = \sum_{i=0}^{\infty} a_i$, $W_n(t) = S_{[nt]}/(\beta v_n)$ for 0 ≤ t ≤ 1, {W(t), t ≥ 0} is a standard Wiener process, and N(0, 1) is a standard normal random variable.

2. Proof

Throughout this section the following construction will play an important role. Choose a positive real sequence {z_n, n ≥ 1} such that $z_n^2 = n L(z_n)$; its precise definition can be found in Bradley (1988). For each positive integer n and each integer k define the random variables
$$
\bar Y_k := Y_k I\{|Y_k| \le z_n\} - E Y_k I\{|Y_k| \le z_n\}, \qquad \hat Y_k := Y_k - \bar Y_k,
$$
and set
$$
T_n := Y_1 + Y_2 + \cdots + Y_n, \qquad \bar T_n := \bar Y_1 + \bar Y_2 + \cdots + \bar Y_n, \qquad v_n := \big(\operatorname{Var} \bar T_n\big)^{1/2}.
$$
It is easy to check from (3.10) in Bradley (1988) and Proposition 2.3 of Peligrad (1987) that
$$
E\Big(\max_{1\le i\le n} |\bar T_i|\Big)^{2+\delta} \le C_1 v_n^{2+\delta} \le C_2 \{n L(z_n)\}^{(2+\delta)/2} \tag{2.1}
$$
for some $C_1, C_2 > 0$ and any δ > 0.

The proof of Theorem 1 is based on the following well-known facts.

Lemma 1 (Bradley, 1988, Theorem 1). Suppose {Y_i, i = 0, ±1, …} is a strictly stationary sequence of nondegenerate real-valued random variables with $EY_0 = 0$. Let $T_n = Y_1 + \cdots + Y_n$. Suppose that $L(t) := EY_0^2 I\{|Y_0| \le t\}$ is slowly varying as t → ∞, ρ(1) < 1 and $\sum_{n=1}^{\infty}\rho(2^n) < \infty$. Then T_n/v_n → N(0, 1) in distribution as n → ∞.
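The normalizing sequences z_n and v_n, and the conclusion of Lemma 1, can be illustrated numerically. The sketch below is not from the paper; it assumes i.i.d. innovations with the Pareto-type density from the example after (1.1) (an i.i.d. sequence is trivially ρ-mixing, with ρ(n) = 0 for every n ≥ 1), and it solves $z_n^2 = n L(z_n)$ by a simple fixed-point iteration.

```python
import numpy as np

rng = np.random.default_rng(1)

def L(x):
    # truncated second moment for the illustrative density f(y) = |y|**(-3), |y| >= 1
    return 2.0 * np.log(x)

def solve_zn(n, iters=60):
    """Solve z**2 = n * L(z) by fixed-point iteration (illustrative choice of z_n)."""
    z = np.sqrt(float(n))
    for _ in range(iters):
        z = np.sqrt(n * L(z))
    return z

def innovations(size):
    # symmetric, P(|Y| > x) = x**(-2) for x >= 1: EY = 0, Var Y = infinity
    u = rng.uniform(size=size)
    return rng.choice([-1.0, 1.0], size=size) * (1.0 - u) ** -0.5

n, reps = 5_000, 2_000
zn = solve_zn(n)

# For i.i.d. innovations, Var(T_bar_n) = n * Var(Y_bar_1); estimate Var(Y_bar_1) by simulation
y = innovations(500_000)
y_trunc = np.where(np.abs(y) <= zn, y, 0.0)   # Y * 1{|Y| <= z_n}; .var() below centers it
vn = np.sqrt(n * y_trunc.var())

# Monte Carlo check of Lemma 1: T_n / v_n should be approximately N(0, 1)
T = np.array([innovations(n).sum() for _ in range(reps)])
coverage = np.mean(np.abs(T / vn) <= 1.96)    # should be close to 0.95 for large n
print(round(zn, 1), round(vn, 1), coverage)
```

In this illustration $v_n \approx z_n$, consistent with $v_n^2 = \operatorname{Var}(\bar T_n) \approx n L(z_n) = z_n^2$ in the i.i.d. case.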


Lemma 2 (Shao, 1993, Theorem 1.1). Suppose {Y_i, i = 1, 2, …} is a strictly stationary ρ-mixing sequence of nondegenerate random variables with $EY_1 = 0$. Let $T_n = Y_1 + \cdots + Y_n$. Suppose that $L(t) := EY_1^2 I\{|Y_1| \le t\}$ is slowly varying as t → ∞, ρ(1) < 1 and $\sum_{n=1}^{\infty}\rho(2^n) < \infty$. Then W_n ⇒ W as n → ∞, where $W_n(t) = T_{[nt]}/v_n$.

Proof of Theorem 1. Let $\tilde S_n = \sum_{i=1}^{n} \big(\sum_{j=0}^{\infty} a_j\big) Y_i$. We first show that
$$
\frac{1}{v_n} \max_{1\le k\le n} |\tilde S_k - S_k| \xrightarrow{P} 0 \quad\text{as } n \to \infty. \tag{2.2}
$$
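Note that, by the definition of β in Theorem 1,
$$
\tilde S_k = \Big(\sum_{j=0}^{\infty} a_j\Big) \sum_{i=1}^{k} Y_i = \beta T_k,
$$
so (2.2) says that, at the scale $v_n$, the partial sums of the linear process are uniformly approximated by β times the partial sums of the innovations; this is what allows Lemmas 1 and 2, which concern $T_n$, to be transferred to $S_n$ at the end of the proof.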

We have
$$
\tilde S_k = \sum_{j=1}^{k} \Big(\sum_{i=0}^{k-j} a_i\Big) Y_j + \sum_{j=1}^{k} \Big(\sum_{i=k-j+1}^{\infty} a_i\Big) Y_j
= \sum_{j=1}^{k} \Big(\sum_{i=0}^{j-1} a_i Y_{j-i}\Big) + \sum_{j=1}^{k} \Big(\sum_{i=k-j+1}^{\infty} a_i\Big) Y_j,
$$
and then
$$
\tilde S_k - S_k = -\sum_{j=1}^{k} \Big(\sum_{i=j}^{\infty} a_i Y_{j-i}\Big) + \sum_{j=1}^{k} \Big(\sum_{i=k-j+1}^{\infty} a_i\Big) Y_j =: A_k + B_k.
$$
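For completeness, the second equality in the expression for $\tilde S_k$ is an interchange of the order of summation: substituting $l = j - i$,
$$
\sum_{j=1}^{k}\sum_{i=0}^{j-1} a_i Y_{j-i}
= \sum_{l=1}^{k}\Big(\sum_{j=l}^{k} a_{j-l}\Big) Y_l
= \sum_{l=1}^{k}\Big(\sum_{i=0}^{k-l} a_i\Big) Y_l,
$$
and subtracting $S_k = \sum_{j=1}^{k}\sum_{i=0}^{\infty} a_i Y_{j-i}$, split at $i = j$, gives the displayed formula for $\tilde S_k - S_k$.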

As for A_k, we can write
$$
-A_k = \sum_{j=1}^{k} \Big(\sum_{i=j}^{\infty} a_i \bar Y_{j-i}\Big) + \sum_{j=1}^{k} \Big(\sum_{i=j}^{\infty} a_i \hat Y_{j-i}\Big) =: A_{k1} + A_{k2}.
$$

Let δ > 0. By Hölder's inequality, Minkowski's inequality and (2.1), and then Kronecker's lemma, we obtain, as n → ∞, that
$$
\begin{aligned}
\frac{1}{z_n^2} E\Big(\max_{1\le k\le n}|A_{k1}|\Big)^2
&\le \frac{1}{z_n^2} E\Big(\max_{1\le k\le n}\Big|\sum_{j=1}^{k}\sum_{i=j}^{\infty} a_i \bar Y_{j-i}\Big|\Big)^2
= \frac{1}{z_n^2} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{\infty} a_i \sum_{j=1}^{i\wedge k} \bar Y_{j-i}\Big|\Big)^2 \\
&\le \frac{1}{z_n^2} E\Big\{\max_{1\le k\le n}\sum_{i=1}^{\infty} |a_i| \Big|\sum_{j=1}^{i\wedge k} \bar Y_{j-i}\Big|\Big\}^2
\le \frac{1}{z_n^2} E\Big\{\sum_{i=1}^{\infty} |a_i| \max_{1\le k\le n}\Big|\sum_{j=1}^{i\wedge k} \bar Y_{j-i}\Big|\Big\}^2 \\
&\le \frac{1}{z_n^2} \Big\{E\Big(\sum_{i=1}^{\infty} |a_i| \max_{1\le k\le n}\Big|\sum_{j=1}^{i\wedge k} \bar Y_{j-i}\Big|\Big)^{2+\delta}\Big\}^{2/(2+\delta)}
\le \frac{1}{z_n^2} \Big\{\sum_{i=1}^{\infty} |a_i| \Big(E\max_{1\le k\le n}\Big|\sum_{j=1}^{i\wedge k} \bar Y_{j-i}\Big|^{2+\delta}\Big)^{1/(2+\delta)}\Big\}^2 \\
&\le \frac{C}{z_n^2} \Big(\sum_{i=1}^{\infty} |a_i| \{(i\wedge n) L(z_n)\}^{1/2}\Big)^2
= C\Big(\frac{1}{\sqrt n}\sum_{i=1}^{\infty} |a_i| (i\wedge n)^{1/2}\Big)^2 = o(1).
\end{aligned} \tag{2.3}
$$
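The final step uses only $\sum_{i=1}^{\infty}|a_i| < \infty$: splitting the sum at $i = n$,
$$
\frac{1}{\sqrt n}\sum_{i=1}^{\infty}|a_i|(i\wedge n)^{1/2}
= \sum_{i=1}^{n}|a_i|\sqrt{i/n} + \sum_{i=n+1}^{\infty}|a_i| \longrightarrow 0,
$$
since the second term is the tail of a convergent series, while each summand of the first term is bounded by $|a_i|$ and tends to 0 for fixed i, so the first term tends to 0 by dominated convergence (equivalently, Kronecker's lemma).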


Also, since $E|\hat Y_1| = o(L(z_n)/z_n)$, we have
$$
\begin{aligned}
\frac{1}{z_n} E\max_{1\le k\le n}|A_{k2}|
&\le \frac{1}{z_n} E\Big(\max_{1\le k\le n}\sum_{i=1}^{\infty}|a_i|\Big|\sum_{j=1}^{i\wedge k}\hat Y_{j-i}\Big|\Big)
= \frac{1}{z_n}\sum_{i=1}^{\infty}|a_i|\, E\max_{1\le k\le n}\Big|\sum_{j=1}^{i\wedge k}\hat Y_{j-i}\Big| \\
&\le \frac{1}{z_n}\sum_{i=1}^{\infty}|a_i|\, E\sum_{j=1}^{i\wedge n}|\hat Y_{j-i}|
= \frac{1}{z_n}\sum_{i=1}^{\infty}|a_i|(i\wedge n)\, o(L(z_n)/z_n) \\
&= o(1)\,\frac{1}{n}\sum_{i=1}^{\infty}|a_i|(i\wedge n)
= o(1)\Big(\frac{1}{n}\sum_{i=1}^{n} i|a_i| + \sum_{i=n+1}^{\infty}|a_i|\Big) = o(1).
\end{aligned}
$$
Together with (2.3) this implies that
$$
\frac{1}{z_n}\max_{1\le k\le n}|A_k| \xrightarrow{P} 0. \tag{2.4}
$$

As for B_k, we can write
$$
B_k = \sum_{j=1}^{k}\Big(\sum_{i=k-j+1}^{\infty} a_i\Big) Y_j
= \sum_{i=1}^{k} a_i\Big(\sum_{j=k-i+1}^{k} Y_j\Big) + \sum_{i=k+1}^{\infty} a_i\Big(\sum_{j=1}^{k} Y_j\Big) =: B_{k1} + B_{k2}. \tag{2.5}
$$
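The rearrangement in (2.5) again only swaps the order of summation: for fixed i the constraint $i \ge k-j+1$ reads $j \ge k-i+1$, so
$$
\sum_{j=1}^{k}\Big(\sum_{i=k-j+1}^{\infty} a_i\Big) Y_j
= \sum_{i=1}^{\infty} a_i \sum_{j=\max(k-i+1,\,1)}^{k} Y_j
= \sum_{i=1}^{k} a_i \sum_{j=k-i+1}^{k} Y_j + \sum_{i=k+1}^{\infty} a_i \sum_{j=1}^{k} Y_j.
$$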

We first deal with B_{k2}. Let {p_n} be a sequence of positive integers such that p_n → ∞ and p_n = o(n). Then
$$
\begin{aligned}
\frac{1}{z_n}\max_{1\le k\le n}|B_{k2}|
&= \frac{1}{z_n}\max_{1\le k\le n}\Big|\sum_{i=k+1}^{\infty} a_i\Big(\sum_{j=1}^{k} Y_j\Big)\Big| \\
&\le \frac{1}{z_n}\Big(\sum_{i=1}^{\infty}|a_i|\Big)\max_{1\le k\le p_n}\Big|\sum_{j=1}^{k}\bar Y_j\Big|
+ \frac{1}{z_n}\Big(\sum_{i=1}^{\infty}|a_i|\Big)\max_{1\le k\le p_n}\Big|\sum_{j=1}^{k}\hat Y_j\Big| \\
&\quad+ \frac{1}{z_n}\Big(\sum_{i=p_n+1}^{\infty}|a_i|\Big)\max_{1\le k\le n}\Big|\sum_{j=1}^{k}\bar Y_j\Big|
+ \frac{1}{z_n}\Big(\sum_{i=p_n+1}^{\infty}|a_i|\Big)\max_{1\le k\le n}\Big|\sum_{j=1}^{k}\hat Y_j\Big| \\
&=: I_1 + I_2 + J_1 + J_2.
\end{aligned} \tag{2.6}
$$
As in (2.3), we obtain, as n → ∞, that
$$
\begin{aligned}
E I_1^{2+\delta} &= \frac{1}{z_n^{2+\delta}}\Big(\sum_{i=1}^{\infty}|a_i|\Big)^{2+\delta} E\max_{1\le k\le p_n}\Big|\sum_{j=1}^{k}\bar Y_j\Big|^{2+\delta}
\le C\Big(\frac{p_n L(z_n)}{z_n^2}\Big)^{(2+\delta)/2} = o(1), \\
E J_1^{2+\delta} &= \frac{1}{z_n^{2+\delta}}\Big(\sum_{i=p_n+1}^{\infty}|a_i|\Big)^{2+\delta} E\max_{1\le k\le n}\Big|\sum_{j=1}^{k}\bar Y_j\Big|^{2+\delta}
\le C\Big(\sum_{i=p_n+1}^{\infty}|a_i|\Big)^{2+\delta}\Big(\frac{n L(z_n)}{z_n^2}\Big)^{(2+\delta)/2} = o(1),
\end{aligned}
$$


and
$$
\begin{aligned}
E I_2 &\le \frac{1}{z_n}\Big(\sum_{i=1}^{\infty}|a_i|\Big) E\Big(\sum_{j=1}^{p_n}|\hat Y_j|\Big)
\le C\,\frac{o(L(z_n)/z_n)}{z_n/p_n} = o(1), \\
E J_2 &= \frac{1}{z_n}\Big(\sum_{i=p_n+1}^{\infty}|a_i|\Big) E\Big(\sum_{j=1}^{n}|\hat Y_j|\Big)
\le C\Big(\sum_{i=p_n+1}^{\infty}|a_i|\Big)\frac{o(L(z_n)/z_n)}{z_n/n} = o(1).
\end{aligned}
$$
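Both bounds are o(1) because $z_n^2 = n L(z_n)$: indeed,
$$
\frac{o(L(z_n)/z_n)}{z_n/p_n} = \frac{p_n}{n}\,o(1) \to 0
\qquad\text{and}\qquad
\Big(\sum_{i=p_n+1}^{\infty}|a_i|\Big)\frac{o(L(z_n)/z_n)}{z_n/n} = \Big(\sum_{i=p_n+1}^{\infty}|a_i|\Big)\, o(1) \to 0,
$$
using $p_n = o(n)$ and $\sum_{i=p_n+1}^{\infty}|a_i| \to 0$.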

Hence, combining these with (2.6) yields
$$
\frac{1}{z_n}\max_{1\le k\le n}|B_{k2}| \xrightarrow{P} 0 \quad\text{as } n \to \infty. \tag{2.7}
$$

We next consider
$$
\frac{1}{z_n}\max_{1\le k\le n}|B_{k1}|
\le \frac{1}{z_n}\max_{1\le k\le n}\Big|\sum_{i=1}^{k} a_i\Big(\sum_{j=k-i+1}^{k}\bar Y_j\Big)\Big|
+ \frac{1}{z_n}\max_{1\le k\le n}\Big|\sum_{i=1}^{k} a_i\Big(\sum_{j=k-i+1}^{k}\hat Y_j\Big)\Big|
=: \bar Q_n + \hat Q_n \tag{2.8}
$$
and set, for m ≥ 1,
$$
\bar Q_{n,m} := \frac{1}{z_n}\max_{1\le k\le n}\Big|\sum_{i=1}^{k} b_i\Big(\sum_{j=k-i+1}^{k}\bar Y_j\Big)\Big|,
\qquad
\hat Q_{n,m} := \frac{1}{z_n}\max_{1\le k\le n}\Big|\sum_{i=1}^{k} b_i\Big(\sum_{j=k-i+1}^{k}\hat Y_j\Big)\Big|,
$$
where $b_i = a_i I(i \le m)$. Since
$$
\Big|\sum_{i=1}^{k} b_i\Big(\sum_{j=k-i+1}^{k}\bar Y_j\Big)\Big| \le
\begin{cases}
(|a_1| + \cdots + |a_k|)\big(|\bar Y_1| + \cdots + |\bar Y_k|\big) & \text{if } 1 \le k \le m, \\
(|a_1| + \cdots + |a_m|)\big(|\bar Y_{k-m+1}| + \cdots + |\bar Y_k|\big) & \text{if } m < k \le n,
\end{cases}
$$

we have
$$
E\bar Q_{n,m} \le \sum_{i=1}^{m}|a_i|\,\frac{1}{z_n} E\sum_{j=1}^{m}|\bar Y_j|
\le \frac{1}{z_n}\sum_{i=1}^{m}|a_i|\, m\, E|\bar Y_1| = o(1) \quad\text{as } n \to \infty,
$$
which gives
$$
\bar Q_{n,m} \xrightarrow{P} 0 \quad\text{as } n \to \infty. \tag{2.9}
$$

Note that by (2.1) we get
$$
\begin{aligned}
E\big|\bar Q_n - \bar Q_{n,m}\big|^{2+\delta}
&\le \frac{1}{z_n^{2+\delta}} E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^{k}(a_i - b_i)\Big(\sum_{j=k-i+1}^{k}\bar Y_j\Big)\Big|\Big)^{2+\delta} \\
&\le \frac{1}{z_n^{2+\delta}}\Big(\sum_{i=m+1}^{\infty}|a_i|\Big)^{2+\delta} E\max_{1\le k\le n}\Big|\sum_{j=1}^{k}\bar Y_j\Big|^{2+\delta} \\
&\le \frac{C}{z_n^{2+\delta}}\Big(\sum_{i=m+1}^{\infty}|a_i|\Big)^{2+\delta}\{n L(z_n)\}^{(2+\delta)/2}
= o(1) \quad\text{as } m \to \infty,
\end{aligned}
$$


which gives
$$
\bar Q_n - \bar Q_{n,m} \xrightarrow{P} 0 \quad\text{as } m \to \infty. \tag{2.10}
$$
Using now Theorem 4.2 of Billingsley (1968, p. 25), together with (2.9) and (2.10), yields $\bar Q_n \xrightarrow{P} 0$ as n → ∞. In the same manner we obtain $\hat Q_n \xrightarrow{P} 0$ as n → ∞. Therefore, combining these with (2.8) gives
$$
\frac{1}{z_n}\max_{1\le k\le n}|B_{k1}| \xrightarrow{P} 0 \quad\text{as } n \to \infty,
$$
and then this, together with (2.5) and (2.7), implies
$$
\frac{1}{z_n}\max_{1\le k\le n}|B_k| \xrightarrow{P} 0 \quad\text{as } n \to \infty. \tag{2.11}
$$

Taking account of (2.4) and (2.11), and then using (3.10) of Bradley (1988), gives (2.2). Let us now apply Lemma 1 to $\tilde S_n$. Then we get
$$
\frac{\tilde S_n}{\beta v_n} = \frac{T_n}{v_n} \xrightarrow{D} N(0, 1) \quad\text{as } n \to \infty.
$$
Hence, by Slutsky's lemma and (2.2), we have
$$
\frac{S_n}{\beta v_n} \xrightarrow{D} N(0, 1) \quad\text{as } n \to \infty.
$$
Finally, it is easily seen from Lemma 2 and (2.2) that
$$
\begin{aligned}
\sup_{0\le t\le 1}|W_n(t) - W(t)|
&= \sup_{0\le t\le 1}\Big|\frac{S_{[nt]}}{\beta v_n} - W(t)\Big| \\
&\le \sup_{0\le t\le 1}\Big|\frac{S_{[nt]}}{\beta v_n} - \frac{\tilde S_{[nt]}}{\beta v_n}\Big|
+ \sup_{0\le t\le 1}\Big|\frac{\tilde S_{[nt]}}{\beta v_n} - W(t)\Big| \\
&= \sup_{0\le t\le 1}\Big|\frac{S_{[nt]}}{\beta v_n} - \frac{\tilde S_{[nt]}}{\beta v_n}\Big|
+ \sup_{0\le t\le 1}\Big|\frac{T_{[nt]}}{v_n} - W(t)\Big|
\xrightarrow{P} 0 \quad\text{as } n \to \infty,
\end{aligned}
$$
and the proof of the theorem is complete. □

Acknowledgements

I would like to express my gratitude to the referee for carefully reading the manuscript and for many valuable comments. This work was supported by the Korea Research Foundation Grant funded by the Korean Government (MOEHRD; KRF-2006-351-C00006).

References

Billingsley, P., 1968. Convergence of Probability Measures. Wiley, New York.
Bradley, R.C., 1988. A central limit theorem for stationary ρ-mixing sequences with infinite variance. Ann. Probab. 16 (1), 313–332.
Fakhre-Zakeri, I., Farshidi, J., 1993. A central limit theorem with random indices for stationary linear processes. Statist. Probab. Lett. 17, 91–95.
Fakhre-Zakeri, I., Lee, S., 1992. Sequential estimation of the mean of a linear process. Sequential Anal. 11, 181–197.
Juodis, M., Račkauskas, A., 2007. A central limit theorem for self-normalized sums of a linear process. Statist. Probab. Lett. 77, 1535–1541.
Kim, T.S., Baek, J.I., 2001. A central limit theorem for stationary linear processes generated by linearly positively quadrant-dependent process. Statist. Probab. Lett. 51, 299–305.
Kulik, R., 2006. Limit theorems for self-normalized linear processes. Statist. Probab. Lett. 76, 1947–1953.
Lee, S., 1997. Random central limit theorem for the linear process generated by a strong mixing process. Statist. Probab. Lett. 35, 189–196.
Lu, C.R., 2003. The invariance principle for linear processes generated by a negatively associated sequence and its applications. Acta Math. Appl. Sin. Engl. Ser. 19 (4), 641–646.


Peligrad, M., 1987. The convergence of moments in the central limit theorem for ρ-mixing sequences of random variables. Proc. Amer. Math. Soc. 101 (1), 142–148.
Shao, Q.M., 1993. An invariance principle for stationary ρ-mixing sequences with infinite variance. Chinese Ann. Math. 14B, 27–42.
Wang, Q., Lin, Y.X., Gulati, C.M., 2001. Asymptotics for moving average processes with dependent innovations. Statist. Probab. Lett. 54, 347–356.
Wang, Q., Lin, Y.X., Gulati, C.M., 2002. The invariance principle for linear processes with applications. Econometric Theory 18, 119–139.