Almost sure convergence for non-stationary random sequences


Statistics and Probability Letters 79 (2009) 857–863


Tan Zhongquan a,∗, Peng Zuoxiang b

a Department of Mathematics, Zunyi Normal College, Zunyi 563002, China
b School of Mathematics and Statistics, Southwest University, Chongqing 400715, China

Article history: Received 28 August 2007; Received in revised form 11 November 2008; Accepted 11 November 2008; Available online 18 November 2008

Abstract. The almost sure central limit theorem for the maxima of non-stationary random sequences is proved under some weak dependence conditions. © 2008 Elsevier B.V. All rights reserved.

MSC: 60F15; 60F05

1. Introduction

Suppose we have a sequence of random variables {W_n} which converges in distribution to a random variable W, i.e. W_n →_d W as n → ∞. Many authors have considered almost sure versions of this limit theorem, based on the pioneering work of Schatte (1988), Brosamler (1988) and Lacey and Philipp (1990) in the case of W_n = S_n, the partial sum of independent identically distributed (i.i.d.) random variables. Let {X_n, n ≥ 1} be a sequence of i.i.d. random variables with mean 0, variance 1, and partial sums S_n = Σ_{k=1}^n X_k. The simplest version of the so-called almost sure central limit theorem states

  lim_{n→∞} (1/log n) Σ_{k=1}^n (1/k) I(k^{−1/2} S_k ≤ x) = Φ(x)    (1)

almost surely for all x, where I denotes the indicator function and Φ(x) stands for the standard normal distribution function. For recent work on the almost sure central limit theorem see Berkes and Csáki (2001), among others. For a sharp version of (1), see Hörmann (2005). Berkes and Csáki (2001) also considered the almost sure central limit theorem in the case of W_n = M_n, the partial maximum of an i.i.d. random sequence, which was first studied by Cheng et al. (1998) and Fahrner and Stadtmüller (1998). The almost sure central limit theorem for partial maxima reads

  lim_{n→∞} (1/log n) Σ_{k=1}^n (1/k) I(M_k ≤ a_k x + b_k) = G(x) a.s.    (2)

for all x, provided P(M_n ≤ a_n x + b_n) →_w G(x) as n → ∞ with real sequences a_n > 0, b_n ∈ R, n ≥ 1, and a non-degenerate distribution G(x). For recent work on the almost sure central limit theorem related to order statistics, see Cheng et al. (2000),
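As a numerical illustration of (1) (our own sketch, not part of the original paper), one can simulate a single path of i.i.d. standard normal variables and compute the logarithmic average of the indicators; the sample size, the evaluation point x = 0 and the seed below are arbitrary choices:

```python
import math
import random

def log_average_asclt(n, x, seed=1):
    """One-path estimate of (1/log n) * sum_{k<=n} (1/k) * I(S_k / sqrt(k) <= x)
    for partial sums S_k of i.i.d. standard normal variables."""
    rng = random.Random(seed)
    s = 0.0       # running partial sum S_k
    total = 0.0   # accumulates (1/k) * I(S_k / sqrt(k) <= x)
    for k in range(1, n + 1):
        s += rng.gauss(0.0, 1.0)
        if s / math.sqrt(k) <= x:
            total += 1.0 / k
    return total / math.log(n)

# The almost sure CLT predicts a value near Phi(0) = 0.5 for large n,
# although convergence on the log n scale is slow.
avg = log_average_asclt(200_000, 0.0)
print(round(avg, 3))
```

On the logarithmic scale the fluctuations decay only like (log n)^{−1/2}, so even n = 2·10^5 gives rough rather than close agreement with Φ(0) = 0.5.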



Corresponding author. E-mail address: [email protected] (Z. Tan).

doi:10.1016/j.spl.2008.11.005


Peng and Qi (2003), Fahrner (2000) and Stadtmüller (2002). For weakly dependent cases, such as stationary or non-stationary Gaussian sequences, see Csáki and Gonchigdanzan (2002) and Chen and Lin (2006).

In this paper, we consider the almost sure central limit theorem for the maxima of the non-stationary random sequences studied by Hüsler (1986). With our method, more general results on the almost sure central limit theorem for extremes of non-stationary random sequences can be obtained under weak dependence conditions.

Let {X_i, i ≥ 1} be a sequence of random variables with distribution functions F_i(x) = P(X_i ≤ x), i ≥ 1, and assume lim_{x↑x_{0,i}} F_i(x) = F_i(x_{0,i}−) = 1, where x_{0,i} = sup{x : F_i(x) < 1}, i ≥ 1; equivalently, X_i has no mass at the upper endpoint of its range. Suppose the levels {u_{ni}, i ≤ n, n ≥ 1} satisfy

  u_{ni} ↑ x_{0,i} as n → ∞, uniformly in i,

which is equivalent to

  F̄^(n)_max = max{F̄_i(u_{ni}), i ≤ n} → 0 as n → ∞,

where F̄(x) = 1 − F(x). As discussed in Hüsler (1986), in order to prove

  P(X_i ≤ u_{ni}, 1 ≤ i ≤ n) − Π_{i=1}^n F_i(u_{ni}) → 0

and to restrict to the interesting case Π_{i=1}^n F_i(u_{ni}) → exp(−τ) as n → ∞ for some 0 < τ < ∞, the following conditions are needed (cf. Hüsler (1986)).

Condition A. Let F_n^* = F_n^*({u_{ni}}) := Σ_{i=1}^n F̄_i(u_{ni}). Then assume

  lim sup_n F_n^* < ∞ and lim inf_n F_n^* > 0.
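As a concrete example of Condition A (our illustration, not from the paper): for i.i.d. standard exponential variables with the classical levels u_n = log n + x, one gets F_n^* = n e^{−u_n} = e^{−x} for every n, so both requirements hold trivially. A quick numerical confirmation:

```python
import math

def F_n_star(n, x):
    """F_n^* = sum_{i<=n} (1 - F_i(u_n)) for i.i.d. Exp(1) margins
    (F(u) = 1 - exp(-u), u >= 0) and levels u_n = log(n) + x."""
    u_n = math.log(n) + x
    return n * math.exp(-u_n)  # equals exp(-x), independent of n

for n in (10, 1000, 10**6):
    print(n, F_n_star(n, 1.0))  # exp(-1) ≈ 0.3679 in every row
```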

The following weak dependence Condition D, introduced by Hüsler (1986), is a natural extension of Condition D(u_n) in Leadbetter et al. (1983).

Condition D (= D({u_{ni}})). For any integers 1 ≤ i_1 < i_2 < ··· < i_p < j_1 < j_2 < ··· < j_q ≤ n for which j_1 − i_p ≥ m, let I = {i_l, l = 1, ..., p}, J = {j_l, l = 1, ..., q}, B(I) = {X_i ≤ u_{ni}, i ∈ I} and similarly B(J). Then assume

  sup_{I,J} |P(B(I ∪ J)) − P(B(I))P(B(J))| ≤ α_{n,m},

where α_{n,m_n} → 0 as n → ∞ for some sequence m_n such that

  m_n F̄^(n)_max → 0 as n → ∞.

Condition D′. Let n, r be integers and I a subset of {1, ..., n} of the form {i : i_1 ≤ i ≤ i_2} such that Σ_{i∈I} F̄_i(u_{ni}) ≤ F_n^*/r. Then assume that there exist α_{n,r}^* and g(r) such that

  max_I min_{I*⊂I} Σ_{i,j∈I*, i<j} P{X_i > u_{ni}, X_j > u_{nj}} ≤ α_{n,r}^*,    (3)

where

  lim_{r→∞} lim sup_{n→∞} r α_{n,r}^* = 0,

and where the min in (3) is taken over subsets I* ⊂ I with

  Σ_{i∈I−I*} F̄_i(u_{ni}) ≤ g(r)/r

for all n ≥ n_0(r), and g(r) → 0 as r → ∞.

Condition D′, also due to Hüsler (1986), is an extension of Condition D′(u_n) in Leadbetter et al. (1983). As pointed out by Leadbetter et al. (1983) and Hüsler (1986), the role of Conditions D′ and D′(u_n) is to limit the possibility of clustering of more than one exceedance in a small interval, and thus to obtain a simple Poisson limit for the point process formed by exceedances of high levels (cf. Theorem 5.2.1 in Leadbetter et al. (1983) and Theorem 2.8 in Hüsler (1986)). For the weak convergence of the distribution of M_n = max{X_i, i ≤ n}, let a_n > 0, b_n ∈ R, n ≥ 1 be norming sequences and put u_{ni} = u_n := u_n(x) = a_n x + b_n for 1 ≤ i ≤ n. We introduce


Condition E. Assume that for the sequences u_n(x) we have

  lim_{n→∞} Σ_{i=1}^{[nt]} F̄_i(u_n(x)) = w(t, x)

for all 0 < t < 1, where 0 < w(t, x) < ∞, and

  n F̄^(n)_max → c < ∞ as n → ∞.

Hüsler (1986) proved the following results.

Theorem A. Let {X_i, i ≥ 1} be a random sequence satisfying Conditions A, D and D′ with respect to the levels {u_{ni}, i ≤ n}. Then

  P(X_i ≤ u_{ni}, i = 1, ..., n) − Π_{i=1}^n F_i(u_{ni}) → 0 as n → ∞.

If, in addition, F_n^* → τ as n → ∞ for some 0 ≤ τ < ∞, then

  P(X_i ≤ u_{ni}, i = 1, ..., n) → exp(−τ) as n → ∞;

conversely, if P(X_i ≤ u_{ni}, i = 1, ..., n) → exp(−τ) holds for some 0 ≤ τ < ∞, then also F_n^* → τ.

Theorem B. Let {X_i, i ≥ 1} be a random sequence satisfying Conditions D, D′ and E in the case of u_n = u_n(x), x ∈ R. Then

  P(M_n ≤ u_n(x)) → H(x) = exp(−w(1, x)) as n → ∞,

where either (i) log H(x) is concave, or (ii) x_0(H) < ∞ and log H(x_0(H) − exp(−x)) is concave for x > 0, or (iii) x_1(H) is finite and log H(x_1(H) + exp(x)) is concave for x > 0. Here M_n = max_{1≤i≤n} X_i, x_1(H) = inf{x : H(x) > 0} and x_0(H) = sup{x : H(x) < 1}.

In this paper, the almost sure convergence related to Theorems A and B will be considered under weak conditions. As usual, a_n ≪ b_n means lim sup_{n→∞} |a_n/b_n| < +∞.

2. Main results

Our main results are Theorems 2.1 and 2.2; the proofs are provided in Section 3. In order to formulate the main results, we need to strengthen Condition D as follows.

Condition D*. For any integers 1 ≤ i_1 < i_2 < ··· < i_p < j_1 < j_2 < ··· < j_q ≤ n for which j_1 − i_p ≥ m, let I = {i_l, l = 1, ..., p}, J = {j_l, l = 1, ..., q}, B_k(I) = {X_i ≤ u_{ki}, i ∈ I} and B_n(J) = {X_i ≤ u_{ni}, i ∈ J}. For some β > 0 and all (log n)^β ≤ k ≤ n, assume

  sup_{I,J} |P(B_k(I) ∩ B_n(J)) − P(B_k(I))P(B_n(J))| ≤ α_{n,m},

where α_{n,m_n} is defined as before.

Theorem 2.1. Let {X_i, i ≥ 1} be a random sequence satisfying Conditions D′ and D* with α_{n,m_n} ≪ (log log n)^{−(1+ε)} for some ε > 0. Assume that F_n^* → τ for some 0 < τ < ∞ as n → ∞, and in addition that n F̄^(n)_max is bounded. Then

  lim_{n→∞} (1/log n) Σ_{k=1}^n (1/k) I(X_1 ≤ u_{k1}, X_2 ≤ u_{k2}, ..., X_k ≤ u_{kk}) = exp(−τ) a.s.    (4)

Theorem 2.2. Let {X_i, i ≥ 1} be a random sequence satisfying Conditions D′, D* and E with u_{ni} = u_n = u_n(x), x ∈ R. If α_{n,m_n} ≪ (log log n)^{−(1+ε)} for some ε > 0, then

  lim_{n→∞} (1/log n) Σ_{k=1}^n (1/k) I(M_k ≤ u_k(x)) = H(x) = exp(−w(1, x)) a.s.    (5)

where H(x) satisfies the properties in Theorem B.
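Theorem 2.2 can be illustrated in the simplest i.i.d. special case (our own simulation sketch, not from the paper): for i.i.d. Exp(1) variables with u_k(x) = log k + x, the weak limit is the Gumbel law H(x) = exp(−exp(−x)), and the logarithmic average of I(M_k ≤ u_k(x)) should approach it; the sample size, the point x = 1 and the seed are arbitrary choices:

```python
import math
import random

def log_average_maxima(n, x, seed=2):
    """One-path estimate of (1/log n) * sum_{k<=n} (1/k) * I(M_k <= u_k(x))
    for i.i.d. Exp(1) variables with levels u_k(x) = log k + x."""
    rng = random.Random(seed)
    running_max = float("-inf")  # M_k = max(X_1, ..., X_k)
    total = 0.0
    for k in range(1, n + 1):
        running_max = max(running_max, rng.expovariate(1.0))
        if running_max <= math.log(k) + x:
            total += 1.0 / k
    return total / math.log(n)

# Limit value: H(1) = exp(-exp(-1)) ≈ 0.692.
avg = log_average_maxima(200_000, 1.0)
print(round(avg, 3))
```

As with (1), convergence on the log n scale is slow, so a single path agrees with H(1) only roughly.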


For stationary random sequences, based on Condition D′(u_n) in Leadbetter et al. (1983) and Condition D* with u_{ni} = u_n, we have

Corollary 2.1. Let {X_i, i ≥ 1} be a stationary random sequence satisfying Conditions D′(u_n) and D* with α_{n,m_n} ≪ (log log n)^{−(1+ε)} for some ε > 0. If n(1 − F(u_n)) → τ for some 0 ≤ τ < ∞, then

  lim_{n→∞} (1/log n) Σ_{k=1}^n (1/k) I(M_k ≤ u_k) = exp(−τ) a.s.    (6)

where M_k = max_{1≤i≤k} X_i and F denotes the marginal d.f. of {X_i}.

Corollary 2.2 (Csáki and Gonchigdanzan, 2002). Let {X_i, i ≥ 1} be a sequence of stationary Gaussian random variables with zero mean, unit variance and covariance function r_n satisfying r_n log n ≪ (log log n)^{−(1+ε)}. Let {u_n} be a real sequence satisfying n(1 − Φ(u_n)) → τ as n → ∞ for some 0 ≤ τ < ∞. Then

  lim_{n→∞} (1/log n) Σ_{k=1}^n (1/k) I(M_k ≤ u_k) = exp(−τ) a.s.    (7)

where M_k = max_{1≤i≤k} X_i.

Corollary 2.3 (Chen and Lin, 2006). Let {X_i, i ≥ 1} be a sequence of Gaussian random variables with zero mean, unit variance and covariance matrix (r_{i,j}) such that |r_{i,j}| ≤ ρ_{|i−j|} for i ≠ j, where ρ_n < 1 for all n ≥ 1 and ρ_n log n ≪ (log log n)^{−(1+ε)}. Let the constants {u_{ni}, i ≤ n} be such that λ_n = min_{1≤i≤n} u_{ni} ≥ c(log n)^{1/2} for some c > 0. If Σ_{i=1}^n (1 − Φ(u_{ni})) → τ as n → ∞ for some 0 ≤ τ < ∞, then

  lim_{n→∞} (1/log n) Σ_{k=1}^n (1/k) I(X_1 ≤ u_{k1}, X_2 ≤ u_{k2}, ..., X_k ≤ u_{kk}) = exp(−τ) a.s.    (8)

3. Proofs

Let I = {1, 2, ..., k}, J = {k+1, k+2, ..., l}, B_k(I) = {X_i ≤ u_{ki}, i ∈ I}, B_l(J) = {X_i ≤ u_{li}, i ∈ J}, C_k(I) = {X_i > u_{ki} for some i ∈ I} and C_l(J) = {X_i > u_{li} for some i ∈ J}. Split the interval I into two subintervals I* and I**, where I** = {1, ..., m_k} contains the first m_k points of I and I* contains the remaining ones. Similarly, split J into two subintervals J* and J**, where J** = {k+1, ..., k+m_l} contains the first m_l points of J and J* contains the remaining ones. Let m_l ≤ log l and m_l ↑ ∞ as l → ∞.

Lemma 3.1. Let {X_i, i ≥ 1} be a random sequence satisfying Condition D*. Assume α_{l,m_l} ≪ (log log l)^{−(1+ε)}, and assume in addition that n F̄^(n)_max is bounded. Then for (log n)^β ≤ k < l ≤ n,

  Cov(I(X_1 ≤ u_{k1}, ..., X_k ≤ u_{kk}), I(X_{k+1} ≤ u_{l(k+1)}, ..., X_l ≤ u_{ll})) ≪ (log log l)^{−(1+ε)} + m_k/k + m_l/l.

Proof. We have

  |Cov(I(X_1 ≤ u_{k1}, ..., X_k ≤ u_{kk}), I(X_{k+1} ≤ u_{l(k+1)}, ..., X_l ≤ u_{ll}))|
    = |P(B_k(I) ∩ B_l(J)) − P(B_k(I))P(B_l(J))|
    ≤ |P(B_k(I) ∩ B_l(J)) − P(B_k(I*) ∩ B_l(J*))| + |P(B_k(I*) ∩ B_l(J*)) − P(B_k(I*))P(B_l(J*))|
      + |P(B_k(I*))P(B_l(J*)) − P(B_k(I))P(B_l(J))|
    =: I_1 + I_2 + I_3.

Using the condition that n F̄^(n)_max is bounded, we get

  I_1 = |P(B_k(I) ∩ B_l(J)) − P(B_k(I*) ∩ B_l(J*))|
      ≤ P(C_k(I**)) + P(C_l(J**))
      ≤ Σ_{i∈I**} P{X_i > u_{ki}} + Σ_{i∈J**} P{X_i > u_{li}}
      ≤ m_k F̄^(k)_max + m_l F̄^(l)_max
      = (m_k/k) k F̄^(k)_max + (m_l/l) l F̄^(l)_max
      ≪ m_k/k + m_l/l.


By Condition D*, we have I_2 ≤ α_{l,m_l}. For I_3, we have

  I_3 = |P(B_k(I))P(C_l(J**)) + P(B_l(J))P(C_k(I**)) + P(C_k(I**))P(C_l(J**))|
      ≤ P(C_l(J**)) + P(C_k(I**)) + P(C_k(I**))P(C_l(J**))
      ≪ m_l/l + m_k/k + (m_k m_l)/(kl).

Noticing α_{l,m_l} ≪ (log log l)^{−(1+ε)}, we obtain

  Cov(I(X_1 ≤ u_{k1}, ..., X_k ≤ u_{kk}), I(X_{k+1} ≤ u_{l(k+1)}, ..., X_l ≤ u_{ll}))
    ≪ m_k/k + m_l/l + α_{l,m_l} + m_k/k + m_l/l + (m_k m_l)/(kl)
    ≪ α_{l,m_l} + m_k/k + m_l/l
    ≪ (log log l)^{−(1+ε)} + m_k/k + m_l/l.  □

Lemma 3.2. Let {X_i, i ≥ 1} be a random sequence such that n F̄^(n)_max is bounded. Then

  E|I(X_1 ≤ u_{l1}, ..., X_l ≤ u_{ll}) − I(X_{k+1} ≤ u_{l(k+1)}, ..., X_l ≤ u_{ll})| ≪ k/l.    (9)

Proof. Using the condition that n F̄^(n)_max is bounded, we get

  E|I(X_1 ≤ u_{l1}, ..., X_l ≤ u_{ll}) − I(X_{k+1} ≤ u_{l(k+1)}, ..., X_l ≤ u_{ll})|
    = P(X_{k+1} ≤ u_{l(k+1)}, ..., X_l ≤ u_{ll}) − P(X_1 ≤ u_{l1}, ..., X_l ≤ u_{ll})
    ≤ P(X_i > u_{li} for some 1 ≤ i ≤ k)
    ≤ Σ_{i=1}^k P(X_i > u_{li})
    ≤ k F̄^(l)_max = (k/l) l F̄^(l)_max
    ≪ k/l.  □

Lemma 3.3. Let (ξ_k)_{k=1}^∞ be a sequence of uniformly bounded random variables, i.e. assume that there exists some M ∈ (0, ∞) such that |ξ_k| ≤ M a.s. for all k ≥ 1. If

  Var(Σ_{k=1}^n (1/k) ξ_k) ≪ (log² n)(log log n)^{−(1+ε)}    (10)

for some ε > 0, then

  lim_{n→∞} (1/log n) Σ_{k=1}^n (1/k)(ξ_k − E ξ_k) = 0 a.s.    (11)

Proof. See Lemma 3.1 in Csáki and Gonchigdanzan (2002).  □

Now we give the proof of Theorem 2.1.

Proof of Theorem 2.1. Let η_k = I(X_1 ≤ u_{k1}, X_2 ≤ u_{k2}, ..., X_k ≤ u_{kk}) − P(X_1 ≤ u_{k1}, X_2 ≤ u_{k2}, ..., X_k ≤ u_{kk}). Notice that (η_k)_{k=1}^∞ is a sequence of uniformly bounded random variables and Var(η_k) ≤ 1. First, we estimate Var(Σ_{k=1}^n (1/k) η_k). Clearly

  Var(Σ_{k=1}^n (1/k) η_k) ≤ E(Σ_{k=1}^n (1/k) η_k)²
    = Σ_{k=1}^n E η_k²/k² + 2 Σ_{1≤k<l≤n} E(η_k η_l)/(kl)
    =: L_1 + L_2.

Clearly

  L_1 = Σ_{k=1}^n E η_k²/k² ≤ Σ_{k=1}^n 1/k² = O(1).

To estimate L_2, we split it into three parts, i.e.

  L_2 = Σ_{1≤k<l≤(log n)^β} + Σ_{1≤k<(log n)^β≤l≤n} + Σ_{(log n)^β≤k<l≤n} =: L_{21} + L_{22} + L_{23}.

For the bounds of L_{21} and L_{22} we have

  L_{21} ≪ Σ_{1≤k<l≤(log n)^β} 1/(kl) ≪ (log(log n)^β)²

and

  L_{22} ≪ Σ_{1≤k<(log n)^β≤l≤n} 1/(kl) ≪ (log n) log(log n)^β.

For L_{23}, by Lemmas 3.1 and 3.2, we have

  |E(η_k η_l)| = |Cov(I(X_1 ≤ u_{k1}, ..., X_k ≤ u_{kk}), I(X_1 ≤ u_{l1}, ..., X_l ≤ u_{ll}))|
    ≤ |Cov(I(X_1 ≤ u_{k1}, ..., X_k ≤ u_{kk}), I(X_1 ≤ u_{l1}, ..., X_l ≤ u_{ll}) − I(X_{k+1} ≤ u_{l(k+1)}, ..., X_l ≤ u_{ll}))|
      + |Cov(I(X_1 ≤ u_{k1}, ..., X_k ≤ u_{kk}), I(X_{k+1} ≤ u_{l(k+1)}, ..., X_l ≤ u_{ll}))|
    ≤ 2 E|I(X_1 ≤ u_{l1}, ..., X_l ≤ u_{ll}) − I(X_{k+1} ≤ u_{l(k+1)}, ..., X_l ≤ u_{ll})|
      + |Cov(I(X_1 ≤ u_{k1}, ..., X_k ≤ u_{kk}), I(X_{k+1} ≤ u_{l(k+1)}, ..., X_l ≤ u_{ll}))|
    ≪ 2k/l + (log log l)^{−(1+ε)} + m_k/k + m_l/l

for (log n)^β ≤ k < l ≤ n. Consequently, using m_k ≤ log k,

  L_{23} ≪ Σ_{(log n)^β≤k<l≤n} (1/(kl)) (2k/l + (log log l)^{−(1+ε)} + m_k/k + m_l/l)
    ≪ Σ_{1≤k<l≤n} 2/l² + Σ_{1≤k<l≤n} 1/(kl(log log l)^{1+ε}) + Σ_{1≤k<l≤n} m_k/(k²l) + Σ_{1≤k<l≤n} m_l/(kl²)
    ≪ log n + Σ_{l=3}^n (1/(l(log log l)^{1+ε})) Σ_{k=1}^{l−1} 1/k + Σ_{l=3}^n (1/l) Σ_{k=1}^{l−1} (log k)/k² + Σ_{l=3}^n ((log l)/l²) Σ_{k=1}^{l−1} 1/k
    ≪ log n + (log n) Σ_{l=3}^n 1/(l(log log l)^{1+ε}) + log n + Σ_{l=3}^n (log l)²/l²
    ≪ log n + (log² n)(log log n)^{−(1+ε)} + log n + 1
    ≪ (log² n)(log log n)^{−(1+ε)}.

Hence

  Var(Σ_{k=1}^n (1/k) η_k) ≪ (log² n)(log log n)^{−(1+ε)},

and the result of the theorem follows by Lemma 3.3 and Theorem A.  □
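A quick numerical sanity check (ours, not the paper's) of the key summation estimate used for L_{23}, namely Σ_{l=3}^n 1/(l(log log l)^{1+ε}) ≪ (log n)(log log n)^{−(1+ε)}: the ratio of this sum to log n should decrease in n, shown here with the illustrative choice ε = 1:

```python
import math

def key_sum(n, eps=1.0):
    """S(n) = sum_{l=3}^{n} 1 / (l * (log log l)^(1 + eps)),
    the sum appearing in the estimate of L_23."""
    return sum(1.0 / (l * math.log(math.log(l)) ** (1.0 + eps))
               for l in range(3, n + 1))

# S(n) grows strictly slower than log n, consistent with the claimed
# bound S(n) << (log n) * (log log n)^(-(1+eps)).
ratios = [key_sum(n) / math.log(n) for n in (10**3, 10**5, 10**6)]
print(ratios)
```

The early terms dominate the sum (log log l is tiny for small l), which is why the ratio starts large before the asymptotic decay becomes visible.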


Proof of Theorem 2.2. Theorem 2.2 is a special case of Theorem 2.1: under Condition E, F_n^* = Σ_{i=1}^n F̄_i(u_n(x)) → w(1, x) as n → ∞, so (4) with u_{ki} = u_k(x) yields (5).  □




Proof of Corollary 2.1. The proof follows the same lines as that of Theorem 2.1.  □



Proof of Corollary 2.2. The result follows from the proof of Theorem 2.1 if both

  E|I(M_k ≤ u_l) − I(M_{k,l} ≤ u_l)| ≪ k/l, for k < l ≤ n,    (12)

and

  I_2 ≪ (log log l)^{−(1+ε)}, for (log n)^β ≤ k < l ≤ n,    (13)

hold, where M_{k,l} = max_{k<i≤l} X_i and I_2 is the quantity defined in the proof of Lemma 3.1 with u_{ni} = u_n. The first assertion follows from Lemma 3.2 and n(1 − Φ(u_n)) → τ. For the second assertion, using the so-called normal comparison lemma (see Theorem 4.2.1 in Leadbetter et al. (1983)), we get

  I_2 ≪ k Σ_{s=[(log n)^β]}^{l} |r_s| exp(−(u_k² + u_l²)/(2(1 + |r_s|))).

Hence (13) follows by using Lemma 2.1 in Csáki and Gonchigdanzan (2002).  □

Proof of Corollary 2.3. The proof follows the same lines as that of Corollary 2.2, since Σ_{i=1}^n (1 − Φ(u_{ni})) → τ for some 0 ≤ τ < ∞ and (13) holds by using Lemma 2.1 in Chen and Lin (2006).  □

Acknowledgments

The authors would like to thank the Editor-in-Chief and the referee for the careful reading of the paper and for their immense help in improving it.

References

Berkes, I., Csáki, E., 2001. A universal result in almost sure central limit theory. Stoch. Process. Appl. 94, 105–134.
Brosamler, G.A., 1988. An almost everywhere central limit theorem. Math. Proc. Camb. Phil. Soc. 104, 561–574.
Chen, S., Lin, Z., 2006. Almost sure max-limits for nonstationary Gaussian sequences. Statist. Probab. Lett. 76, 1175–1184.
Cheng, S., Peng, L., Qi, Y., 1998. Almost sure convergence in extreme value theory. Math. Nachr. 190, 43–50.
Cheng, S., Peng, L., Qi, Y., 2000. Ergodic behaviour of extreme values. J. Aust. Math. Soc. (Ser. A) 68, 170–180.
Csáki, E., Gonchigdanzan, K., 2002. Almost sure limit theorem for the maximum of stationary Gaussian sequences. Statist. Probab. Lett. 58, 195–203.
Fahrner, I., Stadtmüller, U., 1998. On almost sure max-limit theorems. Statist. Probab. Lett. 37, 229–236.
Fahrner, I., 2000. An extension of the almost sure max-limit theorem. Statist. Probab. Lett. 49, 93–103.
Hüsler, J., 1986. Extreme values of non-stationary random sequences. J. Appl. Probab. 23, 937–950.
Hörmann, S., 2005. A note on the almost sure convergence of central order statistics. Probab. Math. Statist. 25, 317–329.
Lacey, M.T., Philipp, W., 1990. A note on the almost sure central limit theorem. Statist. Probab. Lett. 9, 201–205.
Leadbetter, M.R., Lindgren, G., Rootzén, H., 1983. Extremes and Related Properties of Random Sequences and Processes. Springer-Verlag, New York.
Peng, L., Qi, Y., 2003. Almost sure convergence of the distributional limit theorem for order statistics. Probab. Math. Statist. 23, 217–228.
Schatte, P., 1988. On strong versions of the central limit theorem. Math. Nachr. 137, 249–256.
Stadtmüller, U., 2002. Almost sure versions of distributional limit theorems for certain order statistics. Statist. Probab. Lett. 58, 413–426.