Journal of the Korean Statistical Society
Kolmogorov distance for the central limit theorems of the Wiener chaos expansion and applications

Yoon Tae Kim, Hyun Suk Park
Department of Statistics, Hallym University, Chuncheon, Gangwon-Do 200-702, South Korea
Article history: Received 6 October 2014; Accepted 7 March 2015; Available online xxxx.
AMS 2000 subject classifications: primary 60H07; secondary 60F25.
Abstract. This paper concerns the rate of convergence in the central limit theorems for the chaos expansion of functionals of a Gaussian process. The aim of the present work is to derive upper bounds on the Kolmogorov distance for the rate of convergence. We apply our results to find the upper bound of the Kolmogorov distance in the quantitative Breuer–Major theorems (Nourdin et al., 2011), and prove that the upper bound in our results is more efficient than that in the quantitative Breuer–Major theorems. We also obtain an explicit upper bound of the Kolmogorov distance for central limit theorems of sojourn times. © 2015 Published by Elsevier B.V. on behalf of The Korean Statistical Society.
Keywords: Malliavin calculus; Kolmogorov distance; Stein's equation; Central limit theorem; Quantitative Breuer–Major theorems; Sojourn times
1. Introduction

Let Z be a standard Gaussian random variable on a probability space (Ω, F, P). Suppose that {F_n} is a sequence of real-valued random variables of an infinite-dimensional Gaussian field. In Nourdin and Peccati (2009a), the authors combine Stein's method and Malliavin calculus to derive explicit upper bounds for quantities of the type
|E[h(F_n)] - E[h(Z)]|,

where h is a suitable test function. The results of Nourdin, Peccati, and Podolskij (2011) generalize and refine some classical central limit theorems for sequences of partial sums associated with Gaussian-subordinated processes, studied by Arcones (1994), Breuer and Major (1983), and Giraitis and Surgailis (1985). We consider the following sequence {S_n} of random variables of the type

S_n = \frac{1}{\sqrt{n}}\sum_{k=1}^{n}\big(f(X_k) - E[f(X_k)]\big), \quad n \ge 1,

where X = {X_k, k ∈ Z} is a d-dimensional Gaussian process and f : R^d → R is a measurable function. We state the Breuer–Major theorem for stationary vectors: under certain conditions on f and on the covariance function r^{(i,l)}(j) = E[X_1^{(i)} X_{1+j}^{(l)}], 1 ≤ i, l ≤ d, j ∈ Z, the sequence {S_n} converges in distribution to a Gaussian random variable S.

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (2012R1A1A4A01012783 and NRF-2013R1A1A2008478). Corresponding author: H.S. Park, [email protected].
http://dx.doi.org/10.1016/j.jkss.2015.03.003 1226-3192/© 2015 Published by Elsevier B.V. on behalf of The Korean Statistical Society.
In Nourdin et al. (2011), by using Malliavin calculus, interpolation techniques and Stein's method for normal approximation, the authors deduce several complete quantitative Breuer–Major theorems, which provide upper bounds \varphi_i(n), i = 1, 2, 3, in the form of an optimization problem, for the following three quantities:

(i) Let C_C^2(\mathbb R) be the class of all twice continuously differentiable functions with second derivative bounded by C. Define the distance

d_C(S_n, S) = \sup_{h \in C_C^2(\mathbb R)} |E[h(S_n)] - E[h(S)]| \le \varphi_1(n).

(ii) Let Lip(1) be the collection of all functions with Lipschitz constant bounded by 1. Define the Wasserstein distance

d_W(S_n, S) = \sup_{h \in \mathrm{Lip}(1)} |E[h(S_n)] - E[h(S)]| \le \varphi_2(n).

(iii) Define the Kolmogorov distance

d_{Kol}(S_n, S) = \sup_{z \in \mathbb R} |P(S_n \le z) - P(S \le z)| \le \varphi_3(n).

In particular, by using the relation (see Theorem 3.1 in Chen & Shao, 2005)

d_{Kol}(S_n, S) \le 2\sqrt{d_W(S_n/\sigma, S/\sigma)},   (1)

where σ² is the variance of the random variable S, Nourdin et al. (2011) prove that the upper bound \varphi_3(n) of the Kolmogorov distance is given by

\varphi_3(n) = 2\sqrt{\frac{2}{\sigma}\,\varphi_2(n)}.
By using the same arguments as for the quantitative Breuer–Major theorems, Pham (2013) gives information about the rate of convergence in the central limit theorems for sojourn times of Gaussian fields in both cases: the fixed and the moving level. In Nourdin et al. (2011), the authors obtain upper bounds in the form of an optimization problem; on the other hand, Pham (2013) deals with a continuous-time field and gives an explicit upper bound for the Wasserstein distance. In this paper, we directly derive an upper bound of the Kolmogorov distance for a sequence given by the chaos expansion of functionals of a Gaussian process, F_n = \sum_{k=q}^{\infty} I_k(f_{n,k}), q \ge 1. If F_n does not belong to the space D^{1,2} for all n, we consider the decomposition F_n = F_{n,N} + (F_n - F_{n,N}), where F_{n,N} = \sum_{k=q}^{N} I_k(f_{n,k}), using techniques developed by Nourdin and Peccati (2009a,b). As applications, we find upper bounds of the Kolmogorov distance in the quantitative Breuer–Major theorems and in the central limit theorems of sojourn times of Gaussian fields. We stress that our upper bound is more efficient than the one obtained in (2.21) of Theorem 2.1 in Nourdin et al. (2011), as our upper bound converges to zero faster.

2. Preliminaries

In this section, we recall some basic facts about Malliavin calculus for Gaussian processes. The reader is referred to Nualart (2006) for a more detailed explanation. Suppose that H is a real separable Hilbert space with scalar product ⟨·, ·⟩_H. Let B = {B(h), h ∈ H} be an isonormal Gaussian process, that is, a centered Gaussian family of random variables such that E[B(h)B(g)] = ⟨h, g⟩_H. For every n ≥ 1, let H_n be the nth Wiener chaos of B, that is, the closed linear subspace of L²(Ω) generated by {H_n(B(h)) : h ∈ H, ‖h‖_H = 1}, where H_n is the nth Hermite polynomial, defined by

H_n(x) = (-1)^n e^{x^2/2} \frac{d^n}{dx^n} e^{-x^2/2}.
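The Hermite polynomials above satisfy the three-term recurrence H_{n+1}(x) = xH_n(x) − nH_{n−1}(x) and the orthogonality relation E[H_n(Z)H_m(Z)] = n!\,δ_{n,m} for Z ∼ N(0,1), which underlies the isometry with the Wiener chaoses below. A quick numerical sanity check (a sketch using only the Python standard library; the helper names are ours):

```python
import math

def hermite(n, x):
    # probabilists' Hermite polynomials: H_0 = 1, H_1 = x,
    # H_{n+1}(x) = x*H_n(x) - n*H_{n-1}(x)
    h_prev, h_curr = 1.0, x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h_curr = h_curr, x * h_curr - k * h_prev
    return h_curr

def gauss_expect(f, lo=-12.0, hi=12.0, steps=24000):
    # E[f(Z)] for Z ~ N(0,1) by the trapezoidal rule; the integrand
    # decays like e^{-x^2/2}, so truncating at +-12 is harmless
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * f(x) * math.exp(-x * x / 2.0)
    return total * h / math.sqrt(2.0 * math.pi)

# Gram matrix E[H_n(Z) H_m(Z)]; should be diag(0!, 1!, 2!, 3!, 4!)
gram = [[gauss_expect(lambda x: hermite(n, x) * hermite(m, x))
         for m in range(5)] for n in range(5)]
```

The trapezoidal rule is spectrally accurate here because the integrand and all its derivatives vanish at the truncation points.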
We define a linear isometric mapping I_n : H^{\odot n} \to \mathcal H_n by I_n(h^{\otimes n}) = H_n(B(h)), where H^{\odot n} is the symmetric tensor product equipped with the norm \sqrt{n!}\,\|\cdot\|_{H^{\otimes n}}. It is well known that any square integrable random variable F ∈ L²(Ω, G, P) (G denotes the σ-field generated by B) can be expanded into a series of multiple stochastic integrals:

F = \sum_{k=0}^{\infty} I_k(f_k),

where f_0 = E(F), the series converges in L², and the functions f_k ∈ H^{\odot k} are uniquely determined by F. Let S be the class of smooth and cylindrical random variables F of the form

F = f(B(\varphi_1), \ldots, B(\varphi_n)),   (2)

where n ≥ 1, f ∈ C_b^{\infty}(\mathbb R^n) and \varphi_i ∈ H, i = 1, ..., n. The Malliavin derivative of F with respect to B is the element of L²(Ω, H) defined by

DF = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}(B(\varphi_1), \ldots, B(\varphi_n))\,\varphi_i.   (3)
We denote by D^{l,p} the closure of the class S of smooth and cylindrical random variables with respect to the norm

\|F\|_{l,p}^{p} = E(|F|^p) + \sum_{k=1}^{l} E\big(\|D^k F\|_{H^{\otimes k}}^{p}\big).
We denote by δ the adjoint of the operator D, also called the divergence operator. The domain of δ, denoted by Dom(δ), is the set of elements u ∈ L²(Ω; H) such that

|E(\langle DF, u\rangle_{H})| \le C\,(E|F|^2)^{1/2} \quad \text{for all } F \in \mathbb D^{1,2}.

If u ∈ Dom(δ), then δ(u) is the element of L²(Ω) defined by the duality relationship

E[F\,\delta(u)] = E[\langle DF, u\rangle_H] \quad \text{for every } F \in \mathbb D^{1,2}.

Let F ∈ L²(Ω) be a square integrable random variable. The operator L is defined through the projection operators J_n, n = 0, 1, 2, ..., as LF = \sum_{n=0}^{\infty} -n J_n F, and is called the infinitesimal generator of the Ornstein–Uhlenbeck semigroup. The relationship between the operators D, δ and L is the following: F ∈ Dom(L) if and only if F ∈ Dom(δD) (i.e., F ∈ D^{1,2} and DF ∈ Dom(δ)), and in this case δDF = −LF. We also define the operator L^{-1}, the pseudo-inverse of L, as L^{-1}F = \sum_{n=1}^{\infty} -\frac{1}{n} J_n(F). Note that L^{-1} is an operator with values in D^{2,2} and LL^{-1}F = F − E[F] for all F ∈ L²(Ω).

3. Main results
In this section, we derive an upper bound of the Kolmogorov distance in the central limit theorem for the sequence \{\sum_{k=q}^{\infty} I_k(f_{n,k})\}, q \ge 1, through reduction to a finite chaos expansion. We start with the following simple lemma.
Lemma 1. Let

g(x) = ax + \frac{b}{x^2}, \quad x > 0,

where a and b are positive constants. Then

\inf_{x>0} g(x) = \frac{3}{2^{2/3}}\, a^{2/3}\, b^{1/3}.
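Lemma 1 follows from elementary calculus: g′(x) = a − 2b/x³ vanishes at x* = (2b/a)^{1/3}, and substituting x* into g gives the stated infimum. The closed form can be checked against a brute-force grid search (a sketch; the function names are ours):

```python
import math

def g_min_grid(a, b, lo=1e-3, hi=20.0, steps=200_000):
    # brute-force minimum of g(x) = a*x + b/x^2 on a fine grid
    best = float("inf")
    step = (hi - lo) / steps
    for i in range(steps + 1):
        x = lo + i * step
        best = min(best, a * x + b / (x * x))
    return best

def g_min_closed(a, b):
    # Lemma 1: inf_{x>0} g(x) = 3/2^(2/3) * a^(2/3) * b^(1/3),
    # attained at x* = (2b/a)^(1/3)
    return 3.0 / 2.0 ** (2.0 / 3.0) * a ** (2.0 / 3.0) * b ** (1.0 / 3.0)
```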
Recall that, for every z ∈ R, the function

g_z(x) = e^{x^2/2} \int_{-\infty}^{x} \big[\mathbf 1_{(-\infty,z]}(u) - \Phi(z)\big]\, e^{-u^2/2}\, du

is a solution of Stein's equation:

\mathbf 1_{(-\infty,z]}(x) - \Phi(z) = g_z'(x) - x\, g_z(x), \quad x \in \mathbb R,   (4)

where Φ(z) = P(Z ≤ z) [Z ∼ N(0,1)]; moreover, \|g_z\|_\infty \le \sqrt{2\pi}/4 and \|g_z'\|_\infty \le 1. The following results, given in Lemma 2.3 and Theorem 2.4 of Nourdin and Peccati (2009b), will play a crucial role in the proof of Theorem 1.

Lemma 2. Let F ∈ D^{1,2} have zero mean.
(a) Assume that F has an absolutely continuous law with respect to the Lebesgue measure. Then, for every z ∈ R,

P(F \le z) - \Phi(z) = E\big[g_z'(F)\,(1 - \langle DF, -DL^{-1}F\rangle_H)\big].

(b) Let Z ∼ N(0,1). Then

d_{Kol}(F, Z) \le \sqrt{E\big[(1 - \langle DF, -DL^{-1}F\rangle_H)^2\big]}.
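On each side of z the solution g_z admits the standard closed form g_z(x) = \sqrt{2\pi}\, e^{x^2/2}\, \Phi(\min(x,z))\,(1-\Phi(\max(x,z))), which makes both Stein's equation (4) and the bound \|g_z\|_\infty \le \sqrt{2\pi}/4 easy to verify numerically (a sketch using only the standard library):

```python
import math

SQRT2PI = math.sqrt(2.0 * math.pi)

def Phi(x):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def g_z(x, z):
    # closed form of the Stein solution for the test function 1_{(-inf, z]}
    if x <= z:
        return SQRT2PI * math.exp(x * x / 2.0) * Phi(x) * (1.0 - Phi(z))
    return SQRT2PI * math.exp(x * x / 2.0) * Phi(z) * (1.0 - Phi(x))

def stein_residual(x, z, eps=1e-6):
    # g_z'(x) - x*g_z(x) - (1_{(-inf,z]}(x) - Phi(z)); should vanish
    deriv = (g_z(x + eps, z) - g_z(x - eps, z)) / (2.0 * eps)
    indicator = 1.0 if x <= z else 0.0
    return deriv - x * g_z(x, z) - (indicator - Phi(z))
```

The supremum bound is attained: for z = 0, g_0(0) = \sqrt{2\pi}\,\Phi(0)(1-\Phi(0)) = \sqrt{2\pi}/4 exactly.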
The following statement is the main result of this paper.

Theorem 1. Let {F_n, n ≥ 1} be a sequence of centered square integrable random variables given by the chaos expansion F_n = \sum_{k=q}^{\infty} I_k(f_{n,k}) for q ≥ 1, where {f_{n,k}} ⊂ H^{\odot k} for each k, and let S ∼ N(0, σ²), where

\sigma^2 = \sum_{k=q}^{\infty} \sigma_k^2 < \infty, \qquad \sigma_k^2 = \lim_{n\to\infty} k!\,\|f_{n,k}\|_{H^{\otimes k}}^2.

Then

d_{Kol}(F_n, S) \le \inf_{N \ge q} \Bigg\{ \frac{1}{\sum_{k=q}^{N} \sigma_k^2} \sqrt{E\Big[\Big(\sum_{k,l=q}^{N} \big(\delta_{k,l}\sigma_k^2 - k\langle I_{k-1}(f_{n,k}), I_{l-1}(f_{n,l})\rangle_H\big)\Big)^2\Big]} + \frac{\sum_{k=N+1}^{\infty} \sigma_k^2}{\sigma^2} + \frac{3}{2}\Big(\frac{1}{\sqrt{\pi}\,\sigma}\Big)^{2/3} \Big(\sum_{k=N+1}^{\infty} k!\,\|f_{n,k}\|_{H^{\otimes k}}^2\Big)^{1/3} \Bigg\},   (5)

where δ_{k,l} denotes the Kronecker delta, defined by δ_{k,l} = 1 if k = l and δ_{k,l} = 0 if k ≠ l.
Proof. For every z, ϵ ∈ R, it is clear that

\{F_n \le z\} \subseteq \{F_{n,N} \le z + \epsilon\} \cup \{F_n - F_{n,N} \le -\epsilon\}.

From this, we have

P(F_n \le z) \le P(F_{n,N} \le z+\epsilon) + P(F_n - F_{n,N} \le -\epsilon).   (6)

For every N ≥ q, set F_{n,N} = \sum_{k=q}^{N} I_k(f_{n,k}) and write s_N = \big(\sum_{k=q}^{N}\sigma_k^2\big)^{1/2}. Let F_N be a centered Gaussian random variable with variance s_N^2, and let \Phi_N(z) = P(Z_N \le z), where Z_N = F_N/s_N. From (6), we estimate

P(F_n \le z) - P(S \le z) \le \big[P(F_{n,N} \le z+\epsilon) - P(F_N \le z+\epsilon)\big] + \big[P(F_N \le z+\epsilon) - P(S \le z+\epsilon)\big] + \big[P(S \le z+\epsilon) - P(S \le z)\big] + P(F_n - F_{n,N} \le -\epsilon)
:= \Psi_{1,n,N}(z,\epsilon) + \Psi_{2,N}(z,\epsilon) + \Psi_3(z,\epsilon) + \Psi_{4,n,N}(\epsilon).

It is obvious that, for each n ≥ 1 and N ≥ q, F_{n,N} ∈ D^{1,2} and E[F_{n,N}] = 0. By using Stein's equation (4) with \alpha_N = (z+\epsilon)/s_N, we have

|\Psi_{1,n,N}(z,\epsilon)| = \Big|P\Big(\frac{F_{n,N}}{s_N}\le\alpha_N\Big) - P\Big(\frac{F_N}{s_N}\le\alpha_N\Big)\Big| = \Big|E\Big[g_{\alpha_N}'\Big(\frac{F_{n,N}}{s_N}\Big) - \frac{F_{n,N}}{s_N}\,g_{\alpha_N}\Big(\frac{F_{n,N}}{s_N}\Big)\Big]\Big|.   (7)

Applying the result (b) of Lemma 2 to the sequence {F_{n,N}, n ≥ 1}, we estimate, from (7),

|\Psi_{1,n,N}(z,\epsilon)| \le \frac{1}{s_N^2}\, E\Big|\sum_{k=q}^{N}\sigma_k^2 - \langle DF_{n,N}, -DL^{-1}F_{n,N}\rangle_H\Big|
\le \frac{1}{s_N^2}\, E\Big|\sum_{k,l=q}^{N}\big(\delta_{k,l}\sigma_k^2 - k\langle I_{k-1}(f_{n,k}), I_{l-1}(f_{n,l})\rangle_H\big)\Big|
\le \frac{1}{s_N^2}\sqrt{E\Big[\Big(\sum_{k,l=q}^{N}\big(\delta_{k,l}\sigma_k^2 - k\langle I_{k-1}(f_{n,k}), I_{l-1}(f_{n,l})\rangle_H\big)\Big)^2\Big]}.   (8)

Next, set F_N = s_N\, I_1(\mathbf 1_{[0,1]}); in this case we take H = L^2([0,1], \mathcal B([0,1]), d\lambda), where λ is the Lebesgue measure. By an estimate similar to that for \Psi_{1,n,N}(z,\epsilon), we have

|\Psi_{2,N}(z,\epsilon)| = \Big|P\Big(\frac{F_N}{\sigma}\le\frac{z+\epsilon}{\sigma}\Big) - P\Big(\frac{S}{\sigma}\le\frac{z+\epsilon}{\sigma}\Big)\Big|
\le \frac{1}{\sigma^2}\, E\Big|\sigma^2 - \sum_{k=q}^{N}\sigma_k^2\,\langle DI_1(\mathbf 1_{[0,1]}), -DL^{-1}I_1(\mathbf 1_{[0,1]})\rangle_{L^2([0,1])}\Big|
\le \frac{1}{\sigma^2}\sum_{k=N+1}^{\infty}\sigma_k^2.   (9)
As for the third and the fourth terms, \Psi_3(z,\epsilon) and \Psi_{4,n,N}(\epsilon), we choose ϵ > 0. Then it is obvious that

|\Psi_3(z,\epsilon)| \le \frac{\epsilon}{\sqrt{2\pi}\,\sigma}.   (10)

By the Markov inequality,

\Psi_{4,n,N}(\epsilon) \le P\big(|F_n - F_{n,N}| \ge \epsilon\big) \le \frac{\sum_{k=N+1}^{\infty} k!\,\|f_{n,k}\|_{H^{\otimes k}}^2}{\epsilon^2}.   (11)
From (8)–(11), we have

P(F_n \le z) - P(S \le z) \le \frac{1}{\sum_{k=q}^{N}\sigma_k^2}\sqrt{E\Big[\Big(\sum_{k,l=q}^{N}\big(\delta_{k,l}\sigma_k^2 - k\langle I_{k-1}(f_{n,k}), I_{l-1}(f_{n,l})\rangle_H\big)\Big)^2\Big]} + \frac{\sum_{k=N+1}^{\infty}\sigma_k^2}{\sigma^2} + \frac{\epsilon}{\sqrt{2\pi}\,\sigma} + \frac{\sum_{k=N+1}^{\infty}k!\,\|f_{n,k}\|_{H^{\otimes k}}^2}{\epsilon^2}.   (12)

Applying Lemma 1 (with a = 1/(\sqrt{2\pi}\,\sigma) and b = \sum_{k=N+1}^{\infty}k!\,\|f_{n,k}\|_{H^{\otimes k}}^2) to the third and fourth terms on the right-hand side of (12), we obtain

P(F_n \le z) - P(S \le z) \le \frac{1}{\sum_{k=q}^{N}\sigma_k^2}\sqrt{E\Big[\Big(\sum_{k,l=q}^{N}\big(\delta_{k,l}\sigma_k^2 - k\langle I_{k-1}(f_{n,k}), I_{l-1}(f_{n,l})\rangle_H\big)\Big)^2\Big]} + \frac{\sum_{k=N+1}^{\infty}\sigma_k^2}{\sigma^2} + \frac{3}{2}\Big(\frac{1}{\sqrt{\pi}\,\sigma}\Big)^{2/3}\Big(\sum_{k=N+1}^{\infty}k!\,\|f_{n,k}\|_{H^{\otimes k}}^2\Big)^{1/3}.   (13)
On the other hand,

\{F_{n,N} \le z - \epsilon\} \subseteq \{F_n \le z\} \cup \{F_n - F_{n,N} \ge \epsilon\},

which implies that

P(F_{n,N} \le z-\epsilon) \le P(F_n \le z) + P(F_n - F_{n,N} \ge \epsilon).   (14)

The inequality (14) yields

P(F_n \le z) - P(S \le z) \ge \big[P(F_{n,N} \le z-\epsilon) - P(F_N \le z-\epsilon)\big] + \big[P(F_N \le z-\epsilon) - P(S \le z-\epsilon)\big] + \big[P(S \le z-\epsilon) - P(S \le z)\big] - P(F_n - F_{n,N} \ge \epsilon)
\ge -|\Psi_{1,n,N}(z,-\epsilon)| - |\Psi_{2,N}(z,-\epsilon)| - |\Psi_3(z,-\epsilon)| - P(F_n - F_{n,N} \ge \epsilon).   (15)
From (13) and (15), it follows that

\sup_{z\in\mathbb R}|P(F_n \le z) - P(S \le z)| \le \frac{1}{\sum_{k=q}^{N}\sigma_k^2}\sqrt{E\Big[\Big(\sum_{k,l=q}^{N}\big(\delta_{k,l}\sigma_k^2 - k\langle I_{k-1}(f_{n,k}), I_{l-1}(f_{n,l})\rangle_H\big)\Big)^2\Big]} + \frac{\sum_{k=N+1}^{\infty}\sigma_k^2}{\sigma^2} + \frac{3}{2}\Big(\frac{1}{\sqrt{\pi}\,\sigma}\Big)^{2/3}\Big(\sum_{k=N+1}^{\infty}k!\,\|f_{n,k}\|_{H^{\otimes k}}^2\Big)^{1/3}.   (16)

Thus the proof of the theorem is completed. □
Remark 1. If F_n = \sum_{k=q}^{\infty} I_k(f_{n,k}) \in \mathbb D^{1,2} for all n = 1, 2, \ldots, with E[F_n] = 0, we directly obtain, from Theorem 2.4 in Nourdin and Peccati (2009b),

d_{Kol}(F_n, Z) \le \sqrt{E\big[(1 - \langle DF_n, -DL^{-1}F_n\rangle_H)^2\big]}.

We note that F_n ∈ D^{1,2} if and only if \sum_{k=q}^{\infty} k\, k!\,\|f_{n,k}\|_{H^{\otimes k}}^2 < \infty. It is obvious that F_{n,N} ∈ D^{1,2}, but we do not know whether F_n − F_{n,N} belongs to D^{1,2}.
4. Applications

In this section, we apply Theorem 1 to obtain information about the rate of convergence in the central limit theorems of Breuer–Major type and for sojourn times of Gaussian fields.

4.1. Quantitative Breuer–Major theorems

The following result will be applied to derive an upper bound of the Kolmogorov distance in the quantitative Breuer–Major theorems.

Corollary 1. Assume that for all k ≥ q and n ≥ 1 there exists a sequence {\vartheta_k, k \ge q} such that \sum_{k=q}^{\infty}\vartheta_k < \infty and

k!\,\|f_{n,k}\|_{H^{\otimes k}}^2 \le \lambda\,\vartheta_k \quad \text{for some constant } \lambda > 0.   (17)
Under the assumptions and notations of Theorem 1, we have

d_{Kol}(F_n, S) \le \inf_{N\ge q}\Bigg\{\frac{1}{\sum_{k=q}^{N}\sigma_k^2}\sqrt{E\Big[\Big(\sum_{k,l=q}^{N}\big(\delta_{k,l}\sigma_k^2 - k\langle I_{k-1}(f_{n,k}), I_{l-1}(f_{n,l})\rangle_H\big)\Big)^2\Big]} + \frac{1}{\sigma^{2/3}}\Big(1 + \frac{3}{2}\Big(\frac{1}{\sqrt{\pi}}\Big)^{2/3}\Big)\Big(\lambda\sum_{k=N+1}^{\infty}\vartheta_k\Big)^{1/3}\Bigg\}.   (18)
Proof. Using the Hölder inequality,

\sum_{k=N+1}^{\infty}\sigma_k^2 \le \sigma^{4/3}\Big(\sum_{k=N+1}^{\infty}\sigma_k^2\Big)^{1/3},

we have, from (16), that

d_{Kol}(F_n, S) \le \frac{1}{\sum_{k=q}^{N}\sigma_k^2}\sqrt{E\Big[\Big(\sum_{k,l=q}^{N}\big(\delta_{k,l}\sigma_k^2 - k\langle I_{k-1}(f_{n,k}), I_{l-1}(f_{n,l})\rangle_H\big)\Big)^2\Big]} + \frac{1}{\sigma^{2/3}}\Big(\sum_{k=N+1}^{\infty}\sigma_k^2\Big)^{1/3} + \frac{3}{2}\Big(\frac{1}{\sqrt{\pi}\,\sigma}\Big)^{2/3}\Big(\sum_{k=N+1}^{\infty}k!\,\|f_{n,k}\|_{H^{\otimes k}}^2\Big)^{1/3}.   (19)
From the assumptions of Theorem 1 and (17), the right-hand side of (19) can be estimated as

d_{Kol}(F_n, S) \le \frac{1}{\sum_{k=q}^{N}\sigma_k^2}\sqrt{E\Big[\Big(\sum_{k,l=q}^{N}\big(\delta_{k,l}\sigma_k^2 - k\langle I_{k-1}(f_{n,k}), I_{l-1}(f_{n,l})\rangle_H\big)\Big)^2\Big]} + \frac{1}{\sigma^{2/3}}\Big(1 + \frac{3}{2}\Big(\frac{1}{\sqrt{\pi}}\Big)^{2/3}\Big)\Big(\lambda\sum_{k=N+1}^{\infty}\vartheta_k\Big)^{1/3}.   (20)

Thus the proof of the corollary is completed. □
Now we directly derive an upper bound of the Kolmogorov distance in the quantitative Breuer–Major theorems by using Theorem 1; Nourdin et al. (2011), on the other hand, obtain this bound by using the inequality (1). For the rest of the paper, we assume that the conditions of Theorem 2.1 (quantitative Breuer–Major theorems) in Nourdin et al. (2011) hold. Let X = (X_k^{(1)}, \ldots, X_k^{(d)})_{k\in\mathbb Z}, d ≥ 1, be a d-dimensional centered Gaussian process with covariance

r^{(i,l)}(j) = E[X_1^{(i)} X_{1+j}^{(l)}], \quad 1 \le i, l \le d,\ j \in \mathbb Z.

Let f : R^d → R be a measurable function and set

S_n = \frac{1}{\sqrt{n}}\sum_{k=1}^{n}\big(f(X_k) - E[f(X_k)]\big).   (21)
Now we state the Breuer–Major theorem for stationary vectors.

Theorem 2 (Breuer–Major Theorem). Let E[f²(X_1)] < ∞ and assume that the function f has Hermite rank q ≥ 1. Suppose that

\sum_{j\in\mathbb Z}\big|r^{(l,l')}(j)\big|^{q} < \infty \quad \text{for every } l, l' \in \{1,\ldots,d\}.   (22)

Then we have that

S_n \xrightarrow{\ \mathcal L\ } N(0, \sigma^2),   (23)

where \sigma^2 = \mathrm{Var}[f(X_1)] + 2\sum_{j=1}^{\infty}\mathrm{Cov}[f(X_1), f(X_{1+j})] and \xrightarrow{\mathcal L} denotes convergence in distribution.
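A minimal numerical illustration of Theorem 2 (our own toy instance, d = 1): take the moving-average sequence X_k = (Z_k + Z_{k+1})/\sqrt 2, so r(0) = 1, r(±1) = 1/2 and r(j) = 0 otherwise, and f(x) = x² with Hermite rank q = 2. Since Cov[X_1², X_{1+j}²] = 2r(j)², the limit variance is σ² = 2 + 2·(2·(1/2)²) = 3:

```python
import math, random

random.seed(1)

def sample_S_n(n):
    # MA(1) stationary Gaussian sequence X_k = (Z_k + Z_{k+1})/sqrt(2):
    # r(0) = 1, r(+-1) = 1/2, r(j) = 0 otherwise; f(x) = x^2, E[f(X_k)] = 1
    z = [random.gauss(0.0, 1.0) for _ in range(n + 1)]
    x = [(z[k] + z[k + 1]) / math.sqrt(2.0) for k in range(n)]
    return sum(v * v - 1.0 for v in x) / math.sqrt(n)

n, reps = 500, 4000
vals = [sample_S_n(n) for _ in range(reps)]
mean = sum(vals) / reps
var = sum((v - mean) ** 2 for v in vals) / (reps - 1)
# Theorem 2 predicts S_n -> N(0, sigma^2) with
# sigma^2 = Var[X_1^2] + 2*Cov[X_1^2, X_2^2] = 2 + 2*(2*(1/2)^2) = 3
```

The sample sizes and seed are arbitrary; the empirical mean and variance of S_n should be close to 0 and 3.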
We will use the same notations as in Nourdin et al. (2011) to represent an upper bound of the Kolmogorov distance.

Notations.

A_{1,n} = E[f^2(X_1)]\Big(\frac{2K^2}{n} + d^q\sum_{|j|\le n}\theta(j)^q\,\frac{|j|}{n} + d^q\sum_{|j|>n}\theta(j)^q\Big),

A_{2,N} = 2(2K + d^q\theta)\sqrt{E[f^2(X_1)]\sum_{m=N+1}^{\infty}E[f_m^2(X_1)]},

A_{3,n,N} = \sum_{m=q}^{N}\frac{E[f_m^2(X_1)]\,d^m}{2\,m\,m!}\sum_{l=1}^{m-1}\binom{m}{l}^2\sqrt{(2m-2l)!\,\gamma_{n,m,l}};

the remaining two quantities A_{4,n,N} and A_{5,n,N} are built from the terms \sqrt{(2K+d^q\theta)E[f^2(X_1)]}\; d^{s/2}\sqrt{(s-p)!\,\gamma_{n,s,s-p}} summed over q \le p < s \le N; we refer to Nourdin et al. (2011) for their exact expressions. The coefficients θ(j), K, θ, σ_m² and γ_{n,m,e} are given by

\theta(j) = \max_{1\le l,l'\le d}\big|r^{(l,l')}(j)\big|, \qquad K = \inf\{k\in\mathbb N : \theta(j)\le d^{-1}\ \text{for all } |j|\ge k\}, \qquad \theta = \sum_{j\in\mathbb Z}\theta(j)^q < \infty,
\sigma_m^2 = \mathrm{Var}[f_m(X_1)] + 2\sum_{j=1}^{\infty}\mathrm{Cov}[f_m(X_1), f_m(X_{1+j})] \quad \text{for } m\ge q,

\gamma_{n,m,e} = \frac{2\theta}{n}\sum_{|j|\le n}\theta(j)^{e} + \sum_{|j|>n}\theta(j)^{m-e} \quad \text{for } m \ge q \text{ and } 1 \le e \le m-1.
Here f_m, m ≥ q, denotes the function of order m in the Hermite expansion of f, which has Hermite rank q ≥ 1. Now we derive the upper bound of the Kolmogorov distance.

Theorem 3. Let S ∼ N(0, σ²). Under the assumptions of Theorem 2.1 in Nourdin et al. (2011), we have

d_{Kol}(S_n, S) \le \inf_{N\ge q}\Bigg\{\frac{2\big(A_{1,n} + A_{3,n,N} + A_{4,n,N} + A_{5,n,N}\big)}{\sum_{k=q}^{N}\sigma_k^2} + \Big(\frac{1}{2\sigma^2}\Big)^{2/3}\Big(1 + \frac{3}{2}\Big(\frac{1}{\sqrt{\pi}}\Big)^{2/3}\Big)A_{2,N}^{2/3}\Bigg\}.   (24)
Proof. As in the proof of Theorem 2.1 in Nourdin et al. (2011), take as the kernels f_{n,m} of Theorem 1

f_{n,m} = \frac{1}{\sqrt{n}}\sum_{k=1}^{n}\sum_{t\in\{1,\ldots,d\}^m} b_t\, u_{k,t_1}\otimes\cdots\otimes u_{k,t_m},

where the b_t are coefficients such that t \mapsto b_t is symmetric on \{1,\ldots,d\}^m and

\langle u_{k,l}, u_{k',l'}\rangle_H = r^{(l,l')}(k-k'),

for every k, k' ∈ Z and every 1 ≤ l, l' ≤ d. From (4.48) of Theorem 2.1 in Nourdin et al. (2011), the first term in (5) can be estimated as follows:

\frac{1}{\sum_{k=q}^{N}\sigma_k^2}\sqrt{E\Big[\Big(\sum_{k,l=q}^{N}\big(\delta_{k,l}\sigma_k^2 - k\langle I_{k-1}(f_{n,k}), I_{l-1}(f_{n,l})\rangle_H\big)\Big)^2\Big]} \le \frac{2\big(A_{1,n} + A_{3,n,N} + A_{4,n,N} + A_{5,n,N}\big)}{\sum_{k=q}^{N}\sigma_k^2}.   (25)

Using the notations of Corollary 1, we have, from Lemma 4.1 in Nourdin et al. (2011), the estimate m!\,\|f_{n,m}\|_{H^{\otimes m}}^2 \le \lambda\vartheta_m, where \lambda = 2K + d^q\theta and \vartheta_m = E[f_m^2(X_1)].
Using the notation A_{2,N} and the estimate \sigma^2 \le (2K + d^q\theta)E[f^2(X_1)], we get

\lambda\sum_{m=N+1}^{\infty}\vartheta_m = (2K + d^q\theta)\sum_{m=N+1}^{\infty}E[f_m^2(X_1)] = \frac{A_{2,N}^2}{4(2K + d^q\theta)E[f^2(X_1)]} \le \frac{A_{2,N}^2}{4\sigma^2}.   (26)
From (26), we deduce that the second term in (18) is bounded by

\frac{1}{\sigma^{2/3}}\Big(1 + \frac{3}{2}\Big(\frac{1}{\sqrt{\pi}}\Big)^{2/3}\Big)\Big(\lambda\sum_{k=N+1}^{\infty}\vartheta_k\Big)^{1/3} \le \Big(\frac{1}{2\sigma^2}\Big)^{2/3}\Big(1 + \frac{3}{2}\Big(\frac{1}{\sqrt{\pi}}\Big)^{2/3}\Big)A_{2,N}^{2/3}.   (27)

The estimates (25) and (27), together with Corollary 1, complete the proof of the theorem. □
Remark 2. The upper bound of the Kolmogorov distance obtained in (2.21) of Theorem 2.1 in Nourdin et al. (2011) equals

\sqrt{\frac{2}{\sigma}}\Bigg[4\inf_{N\ge q}\Bigg\{A_{2,N}\Big(\frac{1}{4\sigma} + \frac{1}{4(2K + d^q\theta)E[f^2(X_1)]}\Big) + \frac{A_{1,n} + A_{3,n,N} + A_{4,n,N} + A_{5,n,N}}{\sum_{k=q}^{N}\sigma_k^2}\Bigg\}\Bigg]^{1/2}.   (28)

By comparing (24) and (28), we can see that the upper bound in (24) of Theorem 3 is more efficient than that in (28), in the sense that the latter converges to 0 more slowly as n tends to infinity.
4.2. Sojourn times of Gaussian fields

Let X = {X(t), t ∈ R^d} be a stationary centered Gaussian field and let D be a measurable subset of R^d. The sojourn time of the process X above the level u_T in D is defined by

\int_D \mathbf 1_{[X(t)\ge u_T]}\,dt.

In Pham (2013), the author studied the rate of convergence in the central limit theorems for both the fixed and the moving level, using the Wasserstein distance d_W. The central limit theorems are based on the following assumption:

(A) The process X = {X(t), t ∈ R^d} is a stationary centered Gaussian field with unit variance and covariance ρ(t) such that

\int_{\mathbb R^d}|\rho(t)|\,dt < \infty.

His results can be stated as follows. Let us set

S_T = \int_{[0,T]^d}\mathbf 1_{[X(t)\ge u]}\,dt.
Theorem 4 (Fixed Level). Let X = {X(t), t ∈ R^d} be a stationary centered Gaussian field satisfying assumption (A). Further assume that the covariance function ρ satisfies

\int_{\mathbb R^d\setminus[-a,a]^d}|\rho(t)|\,dt \le c\,(\log a)^{-1} \quad \text{as } a \to \infty.   (29)

Then it holds that

d_W\Big(\frac{S_T - E[S_T]}{\sqrt{T^d}},\ N(0,\sigma^2)\Big) \le \frac{c}{(\log T)^{1/4}}.   (30)

Here c is a constant depending on the field and the level, and

0 < \sigma^2 = \sum_{n=1}^{\infty}\frac{\varphi(u)^2 H_{n-1}^2(u)}{n!}\int_{\mathbb R^d}\rho^n(t)\,dt < \infty,

where φ is the density function of the standard Gaussian law and H_n is the nth Hermite polynomial. Let us set

S_T^* = \int_{[0,T]^d}\mathbf 1_{[X(t)\ge u_T]}\,dt.

Theorem 5 (Moving Level). Let X = {X(t), t ∈ R^d} be a stationary centered Gaussian field satisfying assumption (A). Further assume that there exists a constant α ∈ (0, 2] such that, in a neighborhood of 0, the covariance function ρ satisfies

1 - \rho(t) \simeq c\,\|t\|^{\alpha} \quad \text{as } t \to 0.   (31)

Let u_T be a function that tends to infinity as T goes to infinity. Then for every β ∈ (0, d/2) there exists a constant c_β such that

d_W\Big(\frac{S_T^* - E[S_T^*]}{\sqrt{\mathrm{Var}(S_T^*)}},\ N(0,1)\Big) \le c_\beta\Bigg(\frac{u_T^{\frac{2+\alpha}{\alpha}}}{(\log T)^{1/6}} + \frac{1}{T^{\beta}\varphi(u_T)u_T}\Bigg).   (32)
Now we derive an upper bound of the Kolmogorov distance, instead of the Wasserstein distance, in the fixed-level case of Theorem 4.

Theorem 6 (Fixed Level). Let X = {X(t), t ∈ R^d} be a stationary centered Gaussian field satisfying assumption (A). Further assume that the covariance function ρ satisfies condition (29). Then it holds that

d_{Kol}\Big(\frac{S_T - E[S_T]}{\sqrt{T^d}},\ N(0,\sigma^2)\Big) \le c\inf_{N_T}\Bigg\{\frac{1}{\sum_{p=1}^{N_T}\sigma_p^2}\frac{1}{\sqrt{\log T}} + \frac{3^{N_T}}{\sqrt{T^d}} + \frac{1}{N_T^{1/6}}\Bigg\}.   (33)
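The infimum in (33) trades the growing term 3^{N_T}/\sqrt{T^d} against the slowly decaying N_T^{-1/6} (the 1/\sqrt{\log T} term varies little with N_T). A small script (constants set to 1; this simplification is ours) shows the optimized bound decreasing, and the optimal N_T growing, as T increases:

```python
import math

def optimized_bound(T, d=1):
    # minimize the two N-dependent terms of (33) over integer N >= 1:
    # 3^N / sqrt(T^d) + 1 / N^(1/6); constants are set to 1
    best_val, best_N = float("inf"), 1
    for N in range(1, 200):
        v = 3.0 ** N / math.sqrt(T ** d) + 1.0 / N ** (1.0 / 6.0)
        if v < best_val:
            best_val, best_N = v, N
    return best_val, best_N

small_val, small_N = optimized_bound(1e4)
large_val, large_N = optimized_bound(1e12)
```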
Proof. Obviously, the expansion of S_T is given by

S_T = E[S_T] + \sum_{p=1}^{\infty}\frac{\varphi(u)H_{p-1}(u)}{p!}\int_{(0,T]^d} I_p(h(t)^{\otimes p})\,dt,   (34)

where h(t) = \mathbf 1_{(0,t_1]\times\cdots\times(0,t_d]} is the element of H corresponding to the process X = {X(t), t ∈ R^d}. We decompose

\frac{S_T - E[S_T]}{\sqrt{T^d}} = S_{T,N_T} + \bar S_{T,N_T},

where

S_{T,N_T} = \frac{1}{\sqrt{T^d}}\sum_{p=1}^{N_T}\frac{\varphi(u)H_{p-1}(u)}{p!}\int_{(0,T]^d}I_p(h(t)^{\otimes p})\,dt,
\qquad
\bar S_{T,N_T} = \frac{1}{\sqrt{T^d}}\sum_{p=N_T+1}^{\infty}\frac{\varphi(u)H_{p-1}(u)}{p!}\int_{(0,T]^d}I_p(h(t)^{\otimes p})\,dt.

An obvious modification of Theorem 1 from discrete to continuous time yields

d_{Kol}\Big(\frac{S_T - E[S_T]}{\sqrt{T^d}},\ N(0,\sigma^2)\Big)
\le \inf_{N_T}\Bigg\{\frac{1}{\sum_{p=1}^{N_T}\sigma_p^2}\Bigg(E\Bigg[\Bigg(\sum_{p,q=1}^{N_T}\Big(\delta_{p,q}\sigma_p^2 - \Big\langle\frac{\varphi(u)H_{p-1}(u)}{p!\sqrt{T^d}}\int_{(0,T]^d}pI_{p-1}(h(t)^{\otimes p})\,dt,\ \frac{\varphi(u)H_{q-1}(u)}{q!\sqrt{T^d}}\int_{(0,T]^d}qI_{q-1}(h(t)^{\otimes q})\,dt\Big\rangle_H\Big)\Bigg)^2\Bigg]\Bigg)^{1/2}
+ \frac{\sum_{p=N_T+1}^{\infty}\sigma_p^2}{\sigma^2}
+ \frac{3}{2}\Big(\frac{1}{\sqrt{\pi}\,\sigma}\Big)^{2/3}\Bigg(\frac{1}{T^d}\sum_{p=N_T+1}^{\infty}\frac{\varphi^2(u)H_{p-1}^2(u)}{(p!)^2}\int_{(0,T]^d}\int_{(0,T]^d}E\big[I_p(h(t)^{\otimes p})I_p(h(s)^{\otimes p})\big]\,dt\,ds\Bigg)^{1/3}\Bigg\}
:= \inf_{N_T}\{B_{1,T,N_T} + B_{2,N_T} + B_{3,T,N_T}\}.
In the case of B_{1,T,N_T}, the estimates (9) and (10) in the proof of Theorem 2 of Pham (2013) yield, observing that the term 1/T^{1/4} is of lower order than 1/\sqrt{\log T},

B_{1,T,N_T} \le \frac{1}{\sum_{p=1}^{N_T}\sigma_p^2}\sum_{p,q=1}^{N_T}\big|\delta_{p,q}\sigma_p^2 - \delta_{p,q}\sigma_{T,p}^2\big| + \frac{1}{\sum_{p=1}^{N_T}\sigma_p^2}\Bigg(E\Bigg[\Bigg(\sum_{p,q=1}^{N_T}\Big(\delta_{p,q}\sigma_{T,p}^2 - \Big\langle\frac{\varphi(u)H_{p-1}(u)}{p!\sqrt{T^d}}\int_{(0,T]^d}pI_{p-1}(h(t)^{\otimes p})\,dt,\ \frac{\varphi(u)H_{q-1}(u)}{q!\sqrt{T^d}}\int_{(0,T]^d}qI_{q-1}(h(t)^{\otimes q})\,dt\Big\rangle_H\Big)\Bigg)^2\Bigg]\Bigg)^{1/2}
\le \frac{1}{\sum_{p=1}^{N_T}\sigma_p^2}\Big(\frac{1}{T^{1/4}} + \frac{1}{\sqrt{\log T}}\Big) + \frac{3^{N_T}}{\sqrt{T^d}}
\le c\Bigg(\frac{1}{\sum_{p=1}^{N_T}\sigma_p^2}\frac{1}{\sqrt{\log T}} + \frac{3^{N_T}}{\sqrt{T^d}}\Bigg).   (35)
By an estimate similar to (8) in the proof of Theorem 2 of Pham (2013), we have

B_{2,N_T} \le c\,\varphi(u)\int_{\mathbb R^d}|\rho(t)|\,dt\,\sum_{p=N_T+1}^{\infty}p^{-3/2} \le \frac{c}{N_T^{1/2}}.   (36)
As for B_{3,T,N_T}, the same arguments as for B_{2,N_T} give the estimate

B_{3,T,N_T} = \frac{3}{2}\Big(\frac{1}{\sqrt{\pi}\,\sigma}\Big)^{2/3}\Bigg(\frac{1}{T^d}\sum_{p=N_T+1}^{\infty}\frac{\varphi^2(u)H_{p-1}^2(u)}{(p!)^2}\int_{(0,T]^d}\int_{(0,T]^d}E\big[I_p(h(t)^{\otimes p})I_p(h(s)^{\otimes p})\big]\,dt\,ds\Bigg)^{1/3}
= \frac{3}{2}\Big(\frac{1}{\sqrt{\pi}\,\sigma}\Big)^{2/3}\Bigg(\frac{1}{T^d}\sum_{p=N_T+1}^{\infty}\frac{\varphi^2(u)H_{p-1}^2(u)}{p!}\int_{(0,T]^d}\int_{(0,T]^d}\rho^p(t-s)\,dt\,ds\Bigg)^{1/3}
\le \frac{3}{2}\Big(\frac{1}{\sqrt{\pi}\,\sigma}\Big)^{2/3}\Bigg(\sum_{p=N_T+1}^{\infty}\frac{\varphi^2(u)H_{p-1}^2(u)}{p!}\int_{[-T,T]^d}|\rho^p(t)|\,dt\Bigg)^{1/3}
\le \frac{c}{N_T^{1/6}}.   (37)
Summing up (35)–(37), and observing that the first term in (35) and the estimate (36) for B_{2,N_T} are of lower order than N_T^{-1/6}, we get

d_{Kol}\Big(\frac{S_T - E[S_T]}{\sqrt{T^d}},\ N(0,\sigma^2)\Big) \le c\inf_{N_T}\Bigg\{\frac{1}{\sum_{p=1}^{N_T}\sigma_p^2}\frac{1}{\sqrt{\log T}} + \frac{3^{N_T}}{\sqrt{T^d}} + \frac{1}{N_T^{1/6}}\Bigg\}.   (38)
Remark 3. (a) If we choose N_T such that 3^{N_T} = \ell(T), where \lim_{T\to\infty}\ell(T)/\sqrt{T^d} = 0 and \lim_{T\to\infty}\ell(T) = \infty (for example, \ell(T) = T^{\frac{d}{2}-\alpha} for \alpha \in (0, \frac{d}{2})), we get the explicit upper bound of the Kolmogorov distance:

d_{Kol}\Big(\frac{S_T - E[S_T]}{\sqrt{T^d}},\ N(0,\sigma^2)\Big) \le c\Bigg(\frac{\ell(T)}{\sqrt{T^d}} + \frac{1}{\sqrt{\log T}} + \frac{1}{(\log\ell(T))^{1/6}}\Bigg).   (39)

(b) In Pham (2013), the author takes N_T = (\log T)/4. With this N_T, Theorem 6 gives the explicit upper bound of the Kolmogorov distance:

d_{Kol}\Big(\frac{S_T - E[S_T]}{\sqrt{T^d}},\ N(0,\sigma^2)\Big) \le \frac{c}{(\log T)^{1/6}}.   (40)

If we use the relationship (1) between the Kolmogorov distance and the Wasserstein distance, as in the quantitative Breuer–Major theorems, then we have, from Theorem 4, the upper bound

d_{Kol}\Big(\frac{S_T - E[S_T]}{\sqrt{T^d}},\ N(0,\sigma^2)\Big) \le \frac{c}{(\log T)^{1/8}}.   (41)

Hence we can see that the upper bound in (40) is more efficient than that obtained in (41), in the sense that the latter converges to 0 more slowly as T tends to infinity.

In the same way as in the fixed-level case, we can easily obtain the upper bound of the Kolmogorov distance in the moving-level case. Hence we state the result without proof in the following theorem.

Theorem 7 (Moving Level). Let X = {X(t), t ∈ R^d} be a stationary centered Gaussian field satisfying assumption (A) and condition (31) of Theorem 5. Let u_T be a function that tends to infinity as T goes to infinity. Then for every β ∈ (0, d/2) there exists a constant c_β such that

d_{Kol}\Big(\frac{S_T^* - E[S_T^*]}{\sqrt{\mathrm{Var}(S_T^*)}},\ N(0,1)\Big) \le c_\beta\inf_{N_T}\Bigg\{\frac{3^{N_T}}{T^{d/2}\varphi(u_T)u_T} + \frac{u_T^{\frac{2+\alpha}{3\alpha}}}{N_T^{1/18}}\Bigg\}.   (42)
Remark 4. In Pham (2013), the author takes N_T such that 3^{N_T} = T^{-\beta+\frac{d}{2}}. With this N_T, Theorem 7 yields the explicit upper bound of the Kolmogorov distance:

d_{Kol}\Big(\frac{S_T^* - E[S_T^*]}{\sqrt{\mathrm{Var}(S_T^*)}},\ N(0,1)\Big) \le c_\beta\Bigg(\frac{u_T^{\frac{2+\alpha}{3\alpha}}}{(\log T)^{1/18}} + \frac{1}{T^{\beta}\varphi(u_T)u_T}\Bigg).   (43)
If we use the relation (1), as in the quantitative Breuer–Major theorems, then we obtain, from Theorem 5, the upper bound of the Kolmogorov distance

d_{Kol}\Big(\frac{S_T^* - E[S_T^*]}{\sqrt{\mathrm{Var}(S_T^*)}},\ N(0,1)\Big) \le c_\beta\Bigg(\frac{u_T^{\frac{2+\alpha}{\alpha}}}{(\log T)^{1/6}} + \frac{1}{T^{\beta}\varphi(u_T)u_T}\Bigg).   (44)

This shows that the upper bound in (43) is more efficient than that obtained in (44).
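The efficiency comparisons in Remarks 3 and 4 are, at bottom, comparisons of exponents. For example, in the fixed-level case the direct bound (40) decays like (log T)^{−1/6}, while the route through relation (1) gives only (log T)^{−1/8}, so their ratio (log T)^{−1/24} tends to 0, albeit slowly. A two-line check of the fixed-level rates:

```python
import math

def direct_rate(T):
    # rate in (40): direct Kolmogorov bound, (log T)^(-1/6)
    return math.log(T) ** (-1.0 / 6.0)

def via_wasserstein_rate(T):
    # rate in (41): Kolmogorov bound obtained via relation (1), (log T)^(-1/8)
    return math.log(T) ** (-1.0 / 8.0)

ratios = [direct_rate(10.0 ** k) / via_wasserstein_rate(10.0 ** k)
          for k in (2, 6, 12, 24)]
```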
Acknowledgments

We would like to thank the editor and the anonymous referees for their comments, which considerably improved the paper.

References

Arcones, M. A. (1994). Limit theorems for nonlinear functionals of a stationary Gaussian sequence of vectors. Annals of Probability, 22(4), 2242–2274.
Breuer, P., & Major, P. (1983). Central limit theorems for nonlinear functionals of Gaussian fields. Journal of Multivariate Analysis, 13(3), 425–441.
Chen, L., & Shao, Q.-M. (2005). Stein's method for normal approximation. In Lect. notes ser. inst. math. sci. natl. univ. Singap.: Vol. 4. An introduction to Stein's method (pp. 1–59). Singapore: Singapore Univ. Press.
Giraitis, L., & Surgailis, D. (1985). CLT and other limit theorems for functionals of Gaussian processes. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 70(3), 191–212.
Nourdin, I., & Peccati, G. (2009a). Stein's method on Wiener chaos. Probability Theory and Related Fields, 145, 75–118.
Nourdin, I., & Peccati, G. (2009b). Stein's method and exact Berry–Esseen asymptotics for functionals of Gaussian fields. Annals of Probability, 37(6), 2231–2261.
Nourdin, I., Peccati, G., & Podolskij, M. (2011). Quantitative Breuer–Major theorems. Stochastic Processes and their Applications, 121, 793–812.
Nualart, D. (2006). Malliavin calculus and related topics (2nd ed.). Springer.
Pham, V.-H. (2013). On the rate of convergence for central limit theorems of sojourn times of Gaussian fields. Stochastic Processes and their Applications, 121, 2158–2174.