An approximation to the Rosenblatt process using martingale differences


Statistics and Probability Letters 82 (2012) 748–757


Chao Chen^a,∗, Liya Sun^b, Litan Yan^b

^a Department of Mathematics, East China University of Science and Technology, 130 MeiLong Rd., Xuhui, Shanghai 200237, PR China
^b Department of Mathematics, Donghua University, 2999 North Renmin Rd., Songjiang, Shanghai 201620, PR China

Article history: Received 31 August 2011; received in revised form 9 December 2011; accepted 10 December 2011; available online 20 December 2011.
MSC: primary 60G15, 60H05; secondary 60H07.
Keywords: Rosenblatt process; Martingale differences; Weak convergence.

Abstract. In this paper we give an approximation theorem for Rosenblatt processes with H > 1/2, using martingale differences. © 2011 Elsevier B.V. All rights reserved.

1. Introduction

Self-similar processes are stochastic processes that are invariant in distribution under suitable scalings of time and space. They are of considerable practical interest since aspects of self-similarity appear in many different phenomena. We refer the reader to the work of Taqqu (1986) for a guide to the appearance of self-similarity in applications, and to the monographs by Samorodnitsky and Taqqu (1994) and by Embrechts and Maejima (2002) for complete expositions on self-similar processes. Self-similar processes with long-range dependence form a rather special subclass, of practical interest in various applications, including econometrics, Internet traffic, hydrology, turbulence, and finance.

In this paper we consider the so-called Rosenblatt process with index H ∈ (1/2, 1), which is a special case of a self-similar process with long-range dependence. This process arises from the Non-Central Limit Theorem, and it was first studied by Taqqu (1979) (see also Dobrushin and Major, 1979). Consider a stationary Gaussian sequence (ξ_n)_{n∈N} with mean 0 and variance 1 such that its correlation function satisfies

    r(n) := E(ξ_0 ξ_n) = n^{(2H−2)/k} L(n)    (1.1)

with k ≥ 1 an integer, H ∈ (1/2, 1) and L a function slowly varying at infinity. Denote by H_m(x) the Hermite polynomial of degree m, given by

    H_m(x) = (−1)^m e^{x²/2} (d^m/dx^m) e^{−x²/2},   m = 1, 2, …,
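The Hermite polynomials defined above satisfy the three-term recurrence H_{m+1}(x) = x H_m(x) − m H_{m−1}(x), which gives a convenient way to evaluate them. A minimal sketch (illustrative, not part of the paper):

```python
# Probabilists' Hermite polynomials, as defined above:
#   H_m(x) = (-1)^m e^{x^2/2} (d^m/dx^m) e^{-x^2/2},  H_0(x) = 1.
# They obey the recurrence H_{m+1}(x) = x H_m(x) - m H_{m-1}(x).

def hermite(m, x):
    """Evaluate the Hermite polynomial H_m at the point x."""
    if m == 0:
        return 1.0
    h_prev, h = 1.0, x  # H_0(x), H_1(x)
    for k in range(1, m):
        h_prev, h = h, x * h - k * h_prev
    return h

# First few: H_1(x) = x, H_2(x) = x^2 - 1, H_3(x) = x^3 - 3x,
# H_4(x) = x^4 - 6x^2 + 3.
```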

✩ This project was sponsored by NSFC (11171062) and the Innovation Program of Shanghai Municipal Education Commission (12ZZ063).



∗ Corresponding author. E-mail address: [email protected] (C. Chen).

0167-7152/$ – see front matter © 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.spl.2011.12.006


with H_0(x) = 1. Let g be a function such that E(g(ξ_0)) = 0 and E(g(ξ_0)²) < ∞, and suppose that g has Hermite rank k; that is, if g admits the expansion in Hermite polynomials

    g(x) = Σ_{j≥0} c_j H_j(x),   c_j = E(g(ξ_0) H_j(ξ_0)),

then k = min{ j : c_j ≠ 0 }. Since E(g(ξ_0)) = 0, we have k ≥ 1. The Non-Central Limit Theorem (see Taqqu, 1979 and Dobrushin and Major, 1979) then says that, as n → ∞, the sequence of stochastic processes

    (1/n^H) Σ_{j=1}^{[nt]} g(ξ_j)

converges in the sense of finite-dimensional distributions to the process

    Z_H^k(t) = c(H, k) ∫_{R^k} ∫_0^t ∏_{j=1}^{k} (s − y_j)_+^{−(1/2 + (1−H)/k)} ds dB(y_1) ⋯ dB(y_k),    (1.2)
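To make the notion of Hermite rank concrete: for g(x) = x² − 1 = H_2(x) one has c_1 = E(g(ξ_0)H_1(ξ_0)) = 0 while c_2 = E(H_2(ξ_0)²) = 2, so k = 2. A Monte Carlo sketch of this computation (the function names are ours, not the paper's):

```python
import random

# Monte Carlo illustration of the Hermite rank (illustrative only):
# for g(x) = x^2 - 1 = H_2(x) one has c_1 = E[g(xi) H_1(xi)] = 0 and
# c_2 = E[g(xi) H_2(xi)] = E[H_2(xi)^2] = 2, so the Hermite rank is k = 2.

def hermite_coeff(g, herm, n_samples=200_000, seed=0):
    """Estimate c_j = E[g(xi) H_j(xi)] for xi ~ N(0, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(0.0, 1.0)
        total += g(x) * herm(x)
    return total / n_samples

g = lambda x: x * x - 1.0
c1 = hermite_coeff(g, lambda x: x)            # H_1(x) = x; estimate near 0
c2 = hermite_coeff(g, lambda x: x * x - 1.0)  # H_2(x) = x^2 - 1; near 2
```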

where x_+ = max(x, 0), the integral is a multiple Wiener–Itô stochastic integral with respect to a Brownian motion {B(y), y ∈ R}, and c(H, k) is a positive constant chosen so that E(Z_H^k(1)²) = 1.

Definition 1.1 (Taqqu, 1979). Let k ≥ 1 be an integer and H ∈ (1/2, 1). The process (Z_H^k(t))_{t≥0} defined by (1.2) is called the Hermite process of order k with index H.

It is important to note that the Hermite process is not Gaussian for k ≥ 2. However, a Hermite process has the following properties: (i) it exhibits long-range dependence; (ii) it is H-self-similar, in the sense that for any c > 0 the processes {Z_H^k(ct), t ≥ 0} and {c^H Z_H^k(t), t ≥ 0} have the same finite-dimensional distributions; (iii) it has stationary increments, that is, the joint distribution of (Z_H^k(t + h) − Z_H^k(t), t ∈ [0, T]) is independent of h > 0; (iv) its covariance function is

    E(Z_H^k(t) Z_H^k(s)) = (1/2) ( t^{2H} + s^{2H} − |t − s|^{2H} );

(v) its sample paths are Hölder continuous of any order γ < H.

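Properties (ii)–(iv) can be checked directly on the covariance: R(t, s) = (t^{2H} + s^{2H} − |t − s|^{2H})/2 is H-self-similar, R(ct, cs) = c^{2H} R(t, s), and it gives stationary second moments of increments, E((Z_H^k(t) − Z_H^k(s))²) = |t − s|^{2H}. A small numeric sketch (ours, not the paper's):

```python
# Numeric sketch of properties (ii)-(iv) for the covariance
# R(t, s) = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2 (illustrative only).

H = 0.7  # any index in (1/2, 1)

def R(t, s):
    return 0.5 * (t**(2 * H) + s**(2 * H) - abs(t - s)**(2 * H))

def increment_second_moment(t, s):
    # E[(Z(t) - Z(s))^2] = R(t,t) - 2 R(t,s) + R(s,s) = |t - s|^{2H}
    return R(t, t) - 2.0 * R(t, s) + R(s, s)
```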
These properties of the Hermite process motivate our interest in studying it. Clearly, when k = 1 the Hermite process is fractional Brownian motion with Hurst parameter H ∈ (1/2, 1). When k = 2 the Hermite process is called the Rosenblatt process, denoted by Z^H (see Taqqu, 1975), and it can be rewritten as

    Z^H(t) = ∫_0^T ∫_0^T Q(t, y_1, y_2) dW_{y_1} dW_{y_2},    (1.3)

where {W_t, t ≥ 0} is a standard Wiener process,

    Q(t, y_1, y_2) = α_H 1_{[0,t]}(y_1) 1_{[0,t]}(y_2) ∫_{y_1∨y_2}^{t} (∂K_{H′}/∂u)(u, y_1) (∂K_{H′}/∂u)(u, y_2) du,    (1.4)

    H′ = (H + 1)/2,   α_H = √(2(2H − 1)) / ((H + 1)√H),    (1.5)

and the kernel K_{H′} is given by

    K_{H′}(t, s) = c_{H′} s^{1/2−H′} ∫_s^t (u − s)^{H′−3/2} u^{H′−1/2} du

with t > s and c_{H′} = ( H′(2H′ − 1) / β(2 − 2H′, H′ − 1/2) )^{1/2}. More works on the Hermite process and the Rosenblatt process can be found in Chronopoulou et al. (2009), Maejima and Tudor (2007), Torres and Tudor (2009), Tudor (2008) and Tudor and Viens (2009) and the references therein.

In recent years fractional Brownian motion has become an object of intensive study, due to its interesting properties and its applications in various scientific areas including telecommunications, turbulence, image processing and finance.
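The constants H′, α_H and c_{H′} appearing in (1.4)–(1.5) are explicit in H and can be computed directly, writing the Beta function in terms of Gamma functions. A sketch (illustrative; it assumes only the formulas above):

```python
import math

# Computing the explicit constants from (1.4)-(1.5) (sketch, ours):
#   H' = (H + 1)/2,
#   alpha_H = sqrt(2(2H - 1)) / ((H + 1) sqrt(H)),
#   c_{H'} = ( H'(2H' - 1) / Beta(2 - 2H', H' - 1/2) )^{1/2},
# with Beta(a, b) = Gamma(a) Gamma(b) / Gamma(a + b).

def rosenblatt_constants(H):
    assert 0.5 < H < 1.0
    Hp = (H + 1.0) / 2.0  # then 3/4 < H' < 1
    alpha = math.sqrt(2.0 * (2.0 * H - 1.0)) / ((H + 1.0) * math.sqrt(H))
    a, b = 2.0 - 2.0 * Hp, Hp - 0.5
    beta = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    c_Hp = math.sqrt(Hp * (2.0 * Hp - 1.0) / beta)
    return Hp, alpha, c_Hp

Hp, alpha, c_Hp = rosenblatt_constants(0.6)
```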


Surveys and a rather complete literature on fractional Brownian motion can be found in Alós et al. (2001), Biagini et al. (2008), Decreusefond and Üstünel (1999), Gradinaru et al. (2005, 2003), Hu (2005), Mishura (2008) and Nualart (2006) and the references therein. However, in contrast to the extensive studies on fractional Brownian motion, there has been little systematic investigation of the other Hermite processes. The main reasons for this, in our opinion, are the complexity of their dependence structures and their non-Gaussianity. It therefore seems interesting to study the Rosenblatt process. In this short note we prove an approximation theorem for the Rosenblatt process using martingale differences. Our main objective is to explain and prove the following theorem.

Theorem 1.1. Let (ξ^{(n)})_{n≥1} be a sequence of square integrable martingale differences satisfying the following conditions:

    lim_{n→∞} (ξ_i^{(n)})² / (1/n) = 1   a.s.    (1.6)

for every i ≥ 1, and

    max_{1≤i≤n} |ξ_i^{(n)}| ≤ C/√n   a.s.    (1.7)

for some C ≥ 1. Define the processes W^n, n ≥ 1, by

    W_t^n := Σ_{i=1}^{[nt]} ξ_i^{(n)},   0 ≤ t ≤ 1.

Define

    Z_t^n := ∫_0^T ∫_0^T Q^{(n)}(t, u, v) dW_u^n dW_v^n = Σ_{i,j=1, i≠j}^{[nt]} n² ( ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q([nt]/n, u, v) du dv ) ξ_i^{(n)} ξ_j^{(n)},

where Q^{(n)}(t, u, v) is an approximation of Q(t, u, v), given by

    Q^{(n)}(t, u, v) = n² ∫_{u−1/n}^{u} ∫_{v−1/n}^{v} Q([nt]/n, r, p) dr dp

for u, v ∈ [1/n, 1] and t ∈ [0, 1]. Then the processes Z^n converge in distribution to the Rosenblatt process Z^H as n tends to infinity.

Remark 1 (Torres and Tudor, 2009). We eliminate the diagonal "i = j" because the Rosenblatt process is defined as a double Wiener–Itô integral and, as a consequence, it has zero mean. When the diagonal "i = j" is included in the sum defining Z_t^n, the limit is in general a double Stratonovich integral.

This note is organized as follows. In Section 2 we present some preliminaries on the Rosenblatt process and martingale differences. In Section 3 we give the proof of Theorem 1.1.

2. Preliminaries

In this section, we briefly recall some basic properties of the Rosenblatt process and of martingale differences.

2.1. The Rosenblatt process

We refer the reader to Tudor (2008) for a complete description of the Rosenblatt process. For simplicity we let C stand for a positive constant depending only on its subscripts, whose value may differ from one appearance to another. Throughout this paper we assume that 1/2 < H < 1, and we let Z^H = {Z^H(t), t ∈ [0, T]} be a Rosenblatt process with index H, defined on a probability space (Ω, F, P). As we pointed out before, the Rosenblatt process with index H is the Hermite process of order k = 2, and admits the representation (1.2) with k = 2. Recall that fractional Brownian motion B^H admits an integral representation of the form

    B_t^H = ∫_0^t K_H(t, s) dB_s,   0 ≤ t ≤ T,    (2.1)


where B is a standard Brownian motion and the kernel K_H(t, s) satisfies

    (∂K_H/∂t)(t, s) = κ_H (H − 1/2) (s/t)^{1/2−H} (t − s)^{H−3/2}

with a normalizing constant κ_H > 0 given by

    κ_H = ( 2H Γ(3/2 − H) / ( Γ(H + 1/2) Γ(2 − 2H) ) )^{1/2}.

More precisely, for 1/2 < H < 1 we have

    K_H(t, s) = (H − 1/2) κ_H s^{1/2−H} ∫_s^t u^{H−1/2} (u − s)^{H−3/2} du.    (2.2)

Similarly, the Rosenblatt process Z^H = {Z^H(t), t ∈ [0, T]} admits the following representation (see Tudor, 2008):

    Z^H(t) =_d α_H ∫_0^t ∫_0^t ( ∫_{y_1∨y_2}^{t} (∂K_{H′}/∂u)(u, y_1) (∂K_{H′}/∂u)(u, y_2) du ) dW_{y_1} dW_{y_2},    (2.3)

where {W_t, t ≥ 0} is a Wiener process and

    H′ = (H + 1)/2,   α_H = √(2(2H − 1)) / ((H + 1)√H).    (2.4)

Clearly, we have 3/4 < H′ < 1 and

    E(Z^H(t) Z^H(s)) = (1/2) ( t^{2H} + s^{2H} − |t − s|^{2H} ),   ∀ s, t ≥ 0.    (2.5)

Define, for all t ∈ [0, T],

    Q(t, y_1, y_2) = α_H 1_{[0,t]}(y_1) 1_{[0,t]}(y_2) ∫_{y_1∨y_2}^{t} (∂K_{H′}/∂u)(u, y_1) (∂K_{H′}/∂u)(u, y_2) du.    (2.6)

From Torres and Tudor (2009), we know that the Rosenblatt process Z^H can be rewritten as

    Z^H(t) = ∫_0^T ∫_0^T Q(t, y_1, y_2) dW_{y_1} dW_{y_2}.    (2.7)

2.2. Martingale differences

Denote by D = D(0, 1) the Skorohod space of right-continuous functions on the interval [0, 1] with left-hand limits, and equip D with the metric

    d(u, v) := inf{ ε > 0 : ∃ λ ∈ Λ such that ∥λ∥ ≤ ε and sup_t |u(t) − v(λ(t))| ≤ ε },

where ∥λ∥ := sup_{s≠t} | log( (λ(t) − λ(s)) / (t − s) ) | and

    Λ := { λ : [0, 1] → [0, 1] : λ is a strictly increasing and continuous mapping }.

Under this metric, D is a separable and complete metric space. Let X, X^n : Ω → D be random variables and T_X := { t ∈ (0, 1) : P(X_t ≠ X_{t−}) = 0 } ∪ {0, 1}. Recall that X^n converges in distribution to X if, for every bounded and continuous φ : D → R,

    E(φ(X^n)) → E(φ(X))    (2.8)

as n tends to infinity. Consider {ξ^{(n)} = (ξ_i^{(n)}, F_i^n)_{1≤i≤n}, n ≥ 1}, a sequence of square integrable martingale differences such that

    lim_{n→∞} (ξ_i^{(n)})² / (1/n) = 1    (2.9)

almost surely, for every i ≥ 1, and

    max_{1≤i≤n} |ξ_i^{(n)}| ≤ C/√n    (2.10)

almost surely, for every n ≥ 1, where C ≥ 1 is a constant. Define the processes W^n and Z^n for n ≥ 1 respectively by

    W_t^n := Σ_{i=1}^{[nt]} ξ_i^{(n)},   0 ≤ t ≤ 1,

and

    Z_t^n := ∫_0^T ∫_0^T Q^{(n)}(t, u, v) dW_u^n dW_v^n = Σ_{i,j=1, i≠j}^{[nt]} n² ( ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q([nt]/n, u, v) du dv ) ξ_i^{(n)} ξ_j^{(n)},

where Q^{(n)}(t, u, v) is the approximation of Q(t, u, v) given by

    Q^{(n)}(t, u, v) = n² ∫_{u−1/n}^{u} ∫_{v−1/n}^{v} Q([nt]/n, r, p) dr dp

for u, v ∈ [1/n, 1] and t ∈ [0, 1].
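Conditions (2.9) and (2.10) are satisfied, for instance, by the Rademacher array ξ_i^{(n)} = ε_i/√n with (ε_i) i.i.d. symmetric signs: then (ξ_i^{(n)})²/(1/n) = 1 exactly and max_i |ξ_i^{(n)}| = 1/√n, so (2.10) holds with C = 1. A sketch of this example (ours, not from the paper):

```python
import random

# A concrete array satisfying (2.9)-(2.10) (our illustration):
# xi_i^{(n)} = eps_i / sqrt(n) with eps_i i.i.d. Rademacher signs.
# Then (xi_i^{(n)})^2 / (1/n) = 1 exactly and max_i |xi_i^{(n)}| = 1/sqrt(n),
# so (2.10) holds with C = 1; W^n is the corresponding scaled random walk.

def martingale_differences(n, seed=0):
    rng = random.Random(seed)
    return [rng.choice((-1.0, 1.0)) / n ** 0.5 for _ in range(n)]

def W(t, xi):
    """W_t^n = sum of the first [nt] differences, 0 <= t <= 1."""
    return sum(xi[: int(len(xi) * t)])

n = 1000
xi = martingale_differences(n)
```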

Lemma 2.1. Let condition (2.10) hold and assume that, for every t,

    lim_{n→∞} Σ_{i=1}^{[nt]} (ξ_i^{(n)})² = t    (2.11)

almost surely. Then the processes W^n converge in distribution to a Brownian motion W as n → ∞.

The above lemma is a classical result; for its proof see Borovskikh and Korolyuk (1997) and Hall and Heyde (1980).

3. Proof of Theorem 1.1

In order to prove Theorem 1.1 we need two results. The first one is from Billingsley (1968); it states that weak convergence in D can be proved by combining convergence of finite-dimensional distributions with tightness of the sequence.

Theorem 3.1. Suppose that X, X^n : Ω → D are random variables and that

    (X_{t_1}^n, …, X_{t_k}^n) →_d (X_{t_1}, …, X_{t_k})

holds whenever t_1, …, t_k ∈ T_X, where →_d denotes the usual convergence in distribution. Assume that P{X_1 ≠ X_{1−}} = 0 and that

    E( |X_t^n − X_{t_1}^n|^C |X_{t_2}^n − X_t^n|^C ) ≤ [F(t_2) − F(t_1)]^{2α}

for t_1 ≤ t ≤ t_2 and n ≥ 1, where C > 0, α > 1/2 and F is a nondecreasing, continuous function on [0, 1]. Then X^n →_d X in the sense of (2.8).

Secondly, we need the Lindeberg condition (see Borovskikh and Korolyuk, 1997 and Hall and Heyde, 1980).

Theorem 3.2. Let t ∈ (0, 1], σ_t² ≥ 0 and let (ξ^n)_{n≥1} be a sequence of martingale differences as in Section 2 satisfying the Lindeberg condition

    Σ_{i=1}^{[nt]} E( (ξ_i^n)² I_{{|ξ_i^{(n)}| > ε}} | F_{i−1}^n ) →_P 0

for every ε > 0. Then Σ_{i=1}^{[nt]} (ξ_i^n)² →_P σ_t² implies W_t^n →_d N ∼ N(0, σ_t²).
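For the Rademacher array ξ_i^{(n)} = ε_i/√n (our illustrative example, not the paper's), the hypotheses of Theorem 3.2 are easy to verify: the quadratic variation Σ_{i≤[nt]}(ξ_i^{(n)})² = [nt]/n → t, and since |ξ_i^{(n)}| = 1/√n the Lindeberg indicators vanish once 1/√n ≤ ε. A sketch:

```python
# Verifying the hypotheses of Theorem 3.2 for the Rademacher array
# xi_i^{(n)} = eps_i / sqrt(n) (our example, not the paper's): every
# squared difference equals 1/n, so sum_{i <= [nt]} (xi_i^{(n)})^2
# = [nt]/n -> t, and the Lindeberg sum is zero once 1/sqrt(n) <= eps.

def quadratic_variation(n, t):
    return int(n * t) * (1.0 / n)            # = [nt]/n exactly

def lindeberg_sum(n, t, eps):
    xi_abs = 1.0 / n ** 0.5                  # |xi_i^{(n)}| for every i
    return int(n * t) * (xi_abs ** 2 if xi_abs > eps else 0.0)
```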


Before proving Theorem 1.1, we need the following auxiliary lemma.

Lemma 3.1. Let Q be as in (2.6) and let (ξ^{(n)})_{n≥1} satisfy conditions (2.9) and (2.10). Then, as n tends to infinity,

    Σ_{i,j=1, i≠j}^{n} n⁴ ( ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q([nt_k]/n, u, v) du dv ) ( ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q([nt_l]/n, u, v) du dv ) (ξ_i^{(n)})² (ξ_j^{(n)})² − ∫_0^1 ∫_0^1 Q(t_k, u, v) Q(t_l, u, v) du dv → 0

holds almost surely, for t_k, t_l ∈ [0, 1].

Proof. Let us take t_l = t_k and prove the first case, that is, that

    Σ_{i,j=1, i≠j}^{n} n⁴ ( ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q(t_k, u, v) du dv )² (ξ_i^{(n)})² (ξ_j^{(n)})² → ∫_0^1 ∫_0^1 Q²(t_k, u, v) du dv    (3.1)

holds almost surely as n tends to infinity. For every t, s ∈ [0, 1], define

    g_n(t, s) := n² Σ_{i,j=1, i≠j}^{n} 1_{((i−1)/n, i/n]}(t) 1_{((j−1)/n, j/n]}(s) ( ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q(t_k, u, v) du dv ) ( ξ_i^{(n)} / (1/√n) ) ( ξ_j^{(n)} / (1/√n) ).

Then we get

    g_n²(t, s) = ( n² ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q(t_k, u, v) du dv )² ( ξ_i^{(n)} / (1/√n) )² ( ξ_j^{(n)} / (1/√n) )²

for t ∈ ((i−1)/n, i/n], s ∈ ((j−1)/n, j/n], and

    ∫_0^1 ∫_0^1 g_n²(u, v) du dv = Σ_{i,j=1, i≠j}^{n} n⁴ ( ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q(t_k, u, v) du dv )² (ξ_i^{(n)})² (ξ_j^{(n)})².    (3.2)

Firstly, we claim that g_n²(t, s) is a.s. uniformly integrable. Indeed, by Hölder's inequality, condition (2.10) and the representation (∂K_{H′}/∂u)(u, y) = κ_{H′}(H′ − 1/2)(y/u)^{1/2−H′}(u − y)^{H′−3/2} with H′ = (H + 1)/2, we have

    ∫_0^1 ∫_0^1 g_n²(u, v) du dv = Σ_{i,j=1, i≠j}^{n} n⁴ ( ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q(t_k, u, v) du dv )² (ξ_i^{(n)})² (ξ_j^{(n)})²
      ≤ C ∫_0^1 ∫_0^1 Q²(t_k, u, v) du dv
      ≤ C ∫_0^1 ∫_0^1 u^{−H} v^{−H} ( ∫_{u∨v}^{1} x^H (x − u)^{H/2−1} (x − v)^{H/2−1} dx )² du dv
      ≤ C ∫_0^1 ∫_0^1 x^H y^H ( ∫_0^{x∧y} u^{−H} (x − u)^{H/2−1} (y − u)^{H/2−1} du )² dx dy
      ≤ C ∫_0^1 ∫_0^x (x − y)^{2H−2} dy dx < ∞,

since 1/2 < H < 1.
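The finiteness asserted above holds precisely because 2H − 2 > −1 for H > 1/2; integrating the inner variable exactly gives ∫_0^1 ∫_0^x (x − y)^{2H−2} dy dx = 1/(2H(2H − 1)). A numeric sanity check of this closed form (illustrative, not part of the proof):

```python
# The double integral in the last bound is finite because 2H - 2 > -1:
# the inner integral is exactly x^{2H-1} / (2H - 1), hence
#   int_0^1 int_0^x (x - y)^{2H-2} dy dx = 1 / (2H(2H - 1)).
# Midpoint-rule check of this closed form (illustrative only).

def double_integral(H, n=20_000):
    inner = lambda x: x ** (2 * H - 1) / (2 * H - 1)  # exact inner integral
    h = 1.0 / n
    return sum(inner((k + 0.5) * h) for k in range(n)) * h

H = 0.75
approx = double_integral(H)
exact = 1.0 / (2 * H * (2 * H - 1))  # = 4/3 for H = 3/4
```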

Furthermore, for any t ∈ ((i − 1)/n, i/n], s ∈ ((j − 1)/n, j/n],

    g_n²(t, s) = ( n² ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q(t_k, u, v) du dv )² ( ξ_i^{(n)} / (1/√n) )² ( ξ_j^{(n)} / (1/√n) )² → Q²(t_k, t, s)

dx


holds almost surely as n tends to infinity, because

    ( n² ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q(t_k, u, v) du dv )² → Q²(t_k, t, s)

and

    ( ξ_i^{(n)} / (1/√n) )² → 1,   ( ξ_j^{(n)} / (1/√n) )² → 1

hold almost surely as n tends to infinity. Thus

    ∫_0^1 ∫_0^1 g_n²(u, v) du dv → ∫_0^1 ∫_0^1 Q²(t_k, u, v) du dv

holds almost surely as n tends to infinity. Combining (3.2) with this result, it follows that (3.1) holds.

Next we prove the original claim. For simplicity, we define t_k^n := [nt_k]/n and t_l^n := [nt_l]/n. From the previous discussion we know that, as n → ∞,

    Σ_{i,j=1, i≠j}^{n} n⁴ ( ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q(t_k, u, v) du dv ) ( ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q(t_l, u, v) du dv ) (ξ_i^{(n)})² (ξ_j^{(n)})² → ∫_0^1 ∫_0^1 Q(t_k, u, v) Q(t_l, u, v) du dv   a.s.

We now have to show that the difference

    Σ_{i,j=1, i≠j}^{n} n⁴ [ ( ∫∫ Q(t_k^n, u, v) du dv ) ( ∫∫ Q(t_l^n, u, v) du dv ) − ( ∫∫ Q(t_k, u, v) du dv ) ( ∫∫ Q(t_l, u, v) du dv ) ] (ξ_i^{(n)})² (ξ_j^{(n)})²,

where every inner double integral is taken over ((i − 1)/n, i/n] × ((j − 1)/n, j/n], tends to zero almost surely as n tends to infinity. Since Q is increasing with respect to t and t_k ≥ t_k^n, t_l ≥ t_l^n, we obtain

    0 ≤ Σ_{i,j=1, i≠j}^{n} n⁴ [ ( ∫∫ Q(t_k, u, v) du dv ) ( ∫∫ Q(t_l, u, v) du dv ) − ( ∫∫ Q(t_k^n, u, v) du dv ) ( ∫∫ Q(t_l^n, u, v) du dv ) ] (ξ_i^{(n)})² (ξ_j^{(n)})²
      ≤ Σ_{i,j=1, i≠j}^{n} n⁴ ( ∫∫ Q(t_k, u, v) du dv ) ( ∫∫ |Q(t_l, u, v) − Q(t_l^n, u, v)| du dv ) (ξ_i^{(n)})² (ξ_j^{(n)})²
        + Σ_{i,j=1, i≠j}^{n} n⁴ ( ∫∫ Q(t_l, u, v) du dv ) ( ∫∫ |Q(t_k^n, u, v) − Q(t_k, u, v)| du dv ) (ξ_i^{(n)})² (ξ_j^{(n)})²
        + Σ_{i,j=1, i≠j}^{n} n⁴ ( ∫∫ |Q(t_k^n, u, v) − Q(t_k, u, v)| du dv ) ( ∫∫ |Q(t_l^n, u, v) − Q(t_l, u, v)| du dv ) (ξ_i^{(n)})² (ξ_j^{(n)})².

Using the first part of this proof, we get that

    Σ_{i,j=1, i≠j}^{n} n⁴ ( ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q(t_k, u, v) du dv ) ( ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} ( Q(t_l, u, v) − Q(t_l^n, u, v) ) du dv ) (ξ_i^{(n)})² (ξ_j^{(n)})² → ∫_0^1 ∫_0^1 Q(t_k, u, v) ( Q(t_l, u, v) − Q(t_l^n, u, v) ) du dv

almost surely, and

    ∫_0^1 ∫_0^1 Q(t_k, u, v) ( Q(t_l, u, v) − Q(t_l^n, u, v) ) du dv
      ≤ C ∫_0^1 ∫_0^1 u^{−H} v^{−H} ( ∫_{u∨v}^{t_k} x^H (x − u)^{H/2−1} (x − v)^{H/2−1} dx ) ( ∫_{t_l^n}^{t_l} y^H (y − u)^{H/2−1} (y − v)^{H/2−1} dy ) du dv
      ≤ C ∫_0^1 ∫_0^1 |y − x|^{2H−2} 1_{([nt_l]/n, t_l]}(y) dx dy
      ≤ C ( t_l − [nt_l]/n ),

which tends to zero as n tends to infinity because 0 ≤ t_l − [nt_l]/n < 1/n. Similarly, the last two summands tend to zero, and we conclude that, as n → ∞,

    Σ_{i,j=1, i≠j}^{n} n⁴ ( ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q([nt_k]/n, u, v) du dv ) ( ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q([nt_l]/n, u, v) du dv ) (ξ_i^{(n)})² (ξ_j^{(n)})² → ∫_0^1 ∫_0^1 Q(t_k, u, v) Q(t_l, u, v) du dv

holds almost surely, and the lemma follows. □

Proof of Theorem 1.1. We prove the theorem by using Theorem 3.1. We first have to prove that the finite-dimensional distributions of Z^n converge to those of Z. For any a_1, …, a_d ∈ R and t_1, …, t_d ∈ [0, 1], we want to show that the linear combination

    Y^n := Σ_{k=1}^{d} a_k Z_{t_k}^n

converges in distribution to a normally distributed random variable with expectation 0 and variance

    E( Σ_{k=1}^{d} a_k Z_{t_k} )².

That the expectation is zero is trivial. Let us write Y^n as

    Y^n = Σ_{i,j=1, i≠j}^{n} n² ξ_i^{(n)} ξ_j^{(n)} Σ_{k=1}^{d} a_k ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q([nt_k]/n, u, v) du dv =: Σ_{i,j=1, i≠j}^{n} Y_{i,j}^n.
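The random variables Y_{i,j}^n, like Z_t^n itself, form an off-diagonal quadratic form in the martingale differences. For constant weights such sums can be rearranged via the elementary identity (Σ_i ξ_i)² = Σ_i ξ_i² + Σ_{i≠j} ξ_i ξ_j, which is also what lies behind Remark 1. A toy check (ours; it does not use the paper's kernel Q):

```python
import random

# Off-diagonal quadratic forms like Y^n and Z_t^n can be rearranged via
#   (sum_i xi_i)^2 = sum_i xi_i^2 + sum_{i != j} xi_i xi_j.
# Toy check with constant weights (illustration, not the paper's kernel Q).

def offdiag_sum(xi):
    n = len(xi)
    return sum(xi[i] * xi[j] for i in range(n) for j in range(n) if i != j)

rng = random.Random(1)
xi = [rng.gauss(0.0, 1.0) for _ in range(50)]
lhs = offdiag_sum(xi)
rhs = sum(xi) ** 2 - sum(x * x for x in xi)
```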

The Lindeberg condition is satisfied if, for any ε > 0, we have that

    Σ_{i,j=1, i≠j}^{n} E( (Y_{i,j}^n)² I_{{|Y_{i,j}^n| > ε}} | F_{i−1,j−1}^n ) →_P 0

as n tends to infinity. Consider the set {|Y_{i,j}^n| > ε} = {(Y_{i,j}^n)² > ε²}. We get an upper bound for (Y_{i,j}^n)² by noticing that Q(t, u, v) is increasing with respect to t and decreasing with respect to u and v:

    (Y_{i,j}^n)² = n⁴ (ξ_i^{(n)})² (ξ_j^{(n)})² ( Σ_{k=1}^{d} a_k ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q([nt_k]/n, u, v) du dv )²
      ≤ n⁴ (ξ_i^{(n)})² (ξ_j^{(n)})² ( Σ_{k=1}^{d} |a_k| ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q(1, u, v) du dv )²
      ≤ n² (ξ_i^{(n)})² (ξ_j^{(n)})² ( Σ_{k=1}^{d} |a_k| )² ∫_0^{1/n} ∫_0^{1/n} Q²(1, u, v) du dv
      =: n² (ξ_i^{(n)})² (ξ_j^{(n)})² A δ^n,

where A := ( Σ_{k=1}^{d} |a_k| )² and δ^n := ∫_0^{1/n} ∫_0^{1/n} Q²(1, u, v) du dv. Thus we obtain

    {|Y_{i,j}^n| > ε} ⊂ { n² (ξ_i^{(n)})² (ξ_j^{(n)})² A δ^n > ε² }.    (3.3)

Combining this with the Cauchy–Schwarz inequality and (2.10), we get

    E( (Y_{i,j}^n)² I_{{|Y_{i,j}^n| > ε}} | F_{i−1,j−1}^n ) = n⁴ ( Σ_{k=1}^{d} a_k ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q([nt_k]/n, u, v) du dv )² E( (ξ_i^{(n)} ξ_j^{(n)})² I_{{|Y_{i,j}^n| > ε}} | F_{i−1,j−1}^n )
      ≤ n² A δ^n · (C⁴/n²) E( I_{{|Y_{i,j}^n| > ε}} | F_{i−1,j−1}^n )
      ≤ C⁴ A δ^n E( I_{{ n² (ξ_i^{(n)})² (ξ_j^{(n)})² A δ^n > ε² }} | F_{i−1,j−1}^n )

almost surely, which implies that the Lindeberg condition is satisfied:

    Σ_{i,j=1, i≠j}^{n} E( (Y_{i,j}^n)² I_{{|Y_{i,j}^n| > ε}} | F_{i−1,j−1}^n ) ≤ C⁴ A δ^n Σ_{i,j=1, i≠j}^{n} E( I_{{ n² (ξ_i^{(n)})² (ξ_j^{(n)})² A δ^n > ε² }} | F_{i−1,j−1}^n )
      ≤ C⁴ A δ^n Σ_{i,j=1, i≠j}^{n} I_{{ C⁴ A δ^n > ε² }} → 0

as n tends to infinity, because δ^n → 0 and I_{{C⁴ A δ^n > ε²}} = 0 for large n. Moreover, we have

    Σ_{i,j=1, i≠j}^{n} (Y_{i,j}^n)² = Σ_{k,l=1}^{d} a_k a_l Σ_{i,j=1, i≠j}^{n} n⁴ ( ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q([nt_k]/n, u, v) du dv ) ( ∫_{(i−1)/n}^{i/n} ∫_{(j−1)/n}^{j/n} Q([nt_l]/n, u, v) du dv ) (ξ_i^{(n)})² (ξ_j^{(n)})²

for all n ≥ 1. By Lemma 3.1 and the fact that

    E(Z_{t_k} Z_{t_l}) = ∫_0^{min(t_k, t_l)} ∫_0^{min(t_k, t_l)} Q(t_k, u, v) Q(t_l, u, v) du dv,    (3.4)

it follows that the sum Σ_{i,j=1, i≠j}^{n} (Y_{i,j}^n)² converges to

    Σ_{k,l=1}^{d} a_k a_l ∫_0^1 ∫_0^1 Q(t_k, u, v) Q(t_l, u, v) du dv = E( Σ_{k=1}^{d} a_k Z_{t_k} )².

So, by Theorem 3.2, the finite-dimensional distributions of Z^n converge to those of Z.

Finally, we need to prove the tightness of the sequence Z^n in D. Let s, t ∈ [0, 1], s < t. By Proposition 2 of Torres and Tudor (2009), we get that E( (Z_t^n − Z_s^n)² ) ≤ ( [nt]/n − [ns]/n )^{2H}. Thus, for any s < t < r,

    E( |Z_t^n − Z_s^n| |Z_r^n − Z_t^n| ) ≤ ( E(Z_t^n − Z_s^n)² )^{1/2} ( E(Z_r^n − Z_t^n)² )^{1/2}
      ≤ ( [nt]/n − [ns]/n )^H ( [nr]/n − [nt]/n )^H
      ≤ ( [nr]/n − [ns]/n )^{2H}.

We see that

    E( |Z_t^n − Z_s^n| |Z_r^n − Z_t^n| ) ≤ |r − s|^{2H}    (3.5)

if r − s ≥ 1/n. On the other hand, if r − s < 1/n, then either s and t, or t and r, belong to the same subinterval [m/n, (m + 1)/n] for some integer m, so the left-hand side of (3.5) is zero. Therefore, (3.5) holds for all s < t < r, and Z^n →_d Z by Theorem 3.1. This completes the proof. □
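The final case of the tightness argument rests on the elementary fact that if s < t < r and r − s < 1/n, then s and t, or t and r, fall in the same subinterval [m/n, (m + 1)/n], so the corresponding increment of Z^n vanishes. A brute-force check of this fact (illustrative):

```python
import math
import random

# Elementary fact used above: if s < t < r and r - s < 1/n, then s and t,
# or t and r, lie in a common subinterval [m/n, (m+1)/n], i.e.
# [ns] = [nt] or [nt] = [nr].  Randomized brute-force check (illustrative).

def shares_subinterval(s, t, r, n):
    return math.floor(n * s) == math.floor(n * t) or \
           math.floor(n * t) == math.floor(n * r)

rng = random.Random(2)
ok = True
for _ in range(10_000):
    n = rng.randint(2, 200)
    s = rng.uniform(0.0, 1.0 - 1.0 / n)
    r = s + 0.999 * rng.uniform(0.0, 1.0 / n)  # ensures r - s < 1/n
    t = rng.uniform(s, r)
    ok = ok and shares_subinterval(s, t, r, n)
```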


Acknowledgments

The authors would like to thank an anonymous, painstaking referee and the associate editor for their careful reading of the manuscript and helpful comments.

References

Alós, E., Mazet, O., Nualart, D., 2001. Stochastic calculus with respect to Gaussian processes. Ann. Probab. 29, 766–801.
Biagini, F., Hu, Y., Øksendal, B., Zhang, T., 2008. Stochastic Calculus for Fractional Brownian Motion and Applications. Probability and its Applications. Springer, Berlin.
Billingsley, P., 1968. Convergence of Probability Measures. Wiley, New York.
Borovskikh, Y.V., Korolyuk, V.S., 1997. Martingale Approximation. VSP, Utrecht, The Netherlands.
Chronopoulou, A., Tudor, C., Viens, F., 2009. Variations and Hurst index estimation for a Rosenblatt process using longer filters. Electron. J. Stat. 3, 1393–1435.
Decreusefond, L., Üstünel, A.S., 1999. Stochastic analysis of the fractional Brownian motion. Potential Anal. 10, 177–214.
Dobrushin, R.L., Major, P., 1979. Non-central limit theorems for non-linear functionals of Gaussian fields. Z. Wahrscheinlichkeitstheor. Verwandte Geb. 50, 27–52.
Embrechts, P., Maejima, M., 2002. Self-Similar Processes. Princeton University Press.
Gradinaru, M., Nourdin, I., Russo, F., Vallois, P., 2005. m-order integrals and the generalized Itô formula: the case of a fBm with any Hurst index. Ann. Inst. H. Poincaré Probab. Statist. 41, 781–806.
Gradinaru, M., Russo, F., Vallois, P., 2003. Generalized covariations, local time and the Stratonovich–Itô formula for fBm with Hurst index H ≥ 1/4. Ann. Probab. 31, 1772–1820.
Hall, P., Heyde, C.C., 1980. Martingale Limit Theory and its Application. Academic Press.
Hu, Y., 2005. Integral transformations and anticipative calculus for fractional Brownian motions. Mem. Amer. Math. Soc. 175 (825).
Maejima, M., Tudor, C.A., 2007. Wiener integrals and a non-central limit theorem for Hermite processes. Stoch. Anal. Appl. 25, 1043–1056.
Mishura, Y.S., 2008. Stochastic Calculus for Fractional Brownian Motion and Related Processes. Lect. Notes in Math., vol. 1929. Springer.
Nualart, D., 2006. Malliavin Calculus and Related Topics. Springer.
Samorodnitsky, G., Taqqu, M., 1994. Stable Non-Gaussian Random Processes. Chapman and Hall, London.
Taqqu, M., 1975. Weak convergence to fractional Brownian motion and to the Rosenblatt process. Z. Wahrscheinlichkeitstheor. Verwandte Geb. 31, 287–302.
Taqqu, M., 1979. Convergence of integrated processes of arbitrary Hermite rank. Z. Wahrscheinlichkeitstheor. Verwandte Geb. 50, 53–83.
Taqqu, M., 1986. A bibliographical guide to self-similar processes and long-range dependence. In: Dependence in Probability and Statistics. Birkhäuser, Boston, pp. 137–162.
Torres, S., Tudor, C.A., 2009. Donsker type theorem for the Rosenblatt process and a binary market model. Stoch. Anal. Appl. 27, 555–573.
Tudor, C.A., 2008. Analysis of the Rosenblatt process. ESAIM Probab. Stat. 12, 230–257.
Tudor, C., Viens, F., 2009. Variations and estimators for the self-similarity order through Malliavin calculus. Ann. Probab. 37, 2093–2134.