On the Rate of Convergence in the Central Limit Theorem for Arrays of Random Vectors
Le Van Dung^a, Ta Cong Son^{b,∗}

^a The University of Da Nang, Da Nang University of Science and Education, Vietnam
^b VNU University of Science, Vietnam National University, Hanoi, Vietnam
Abstract
Let $\{X_{n,i};\ 1 \le i \le k_n,\ n \ge 1\}$ be an array of martingale difference random vectors and $\{k_n;\ n \ge 1\}$ a sequence of positive integers such that $k_n \to \infty$ as $n \to \infty$. The aim of this paper is to establish the rate of convergence in the central limit theorem for the sum $S_n = X_{n,1} + X_{n,2} + \ldots + X_{n,k_n}$. We also show that for stationary sequences of martingale difference random vectors, under the condition $E(\|X_1\|^{2+2\delta}) < \infty$ for some $\delta \ge 1/2$, the rate $n^{-\delta/(2+2\delta)} \log n$ is attained; this rate is better than $n^{-1/4}$ for $\delta > 1$.

Keywords: Random vector, Multivariate normal, Normal approximation, Central limit theorem, Convergence rate.
2010 MSC: 60F15, 60G42
1. Introduction and Notation
Let $\{X_{n,i};\ 1 \le i \le k_n,\ n \ge 1\}$ be an array of $d$-dimensional martingale difference random vectors taking values in $\mathbb{R}^d$, adapted to the filtration $\{\mathcal{F}_{n,i};\ 0 \le i \le k_n,\ n \ge 1\}$. From Corollary 3.1 in Hall & Heyde (1980), applying the Cramér-Wold device, we obtain the following theorem.

Theorem A. Let $\{X_{n,i};\ 1 \le i \le k_n,\ n \ge 1\}$ be an array of $d$-dimensional martingale difference random vectors adapted to the filtration $\{\mathcal{F}_{n,i};\ 0 \le i \le k_n,\ n \ge 1\}$ such that $E(\|X_{n,i}\|^2) < \infty$ for all $1 \le i \le k_n$, $n \ge 1$. If
$$\sum_{i=1}^{k_n} E\big(\|X_{n,i}\|^2 I_{\{\|X_{n,i}\| > \varepsilon\}} \,\big|\, \mathcal{F}_{n,i-1}\big) \xrightarrow{P} 0 \ \text{ as } n \to \infty \ \text{ for each } \varepsilon > 0,$$
and
$$\sum_{i=1}^{k_n} E\big(X_{n,i} X_{n,i}^T \,\big|\, \mathcal{F}_{n,i-1}\big) \xrightarrow{P} I_d \ \text{ as } n \to \infty,$$
then
$$S_n = \sum_{i=1}^{k_n} X_{n,i} \xrightarrow{D} N_d(0, I_d) \ \text{ as } n \to \infty, \tag{1.1}$$
where $N_d = N_d(0, I_d)$ is the multivariate standard normal distribution, $0$ is the zero vector in $\mathbb{R}^d$ and $I_d$ denotes the $d \times d$ identity matrix.
∗Corresponding author. Email addresses: [email protected] (Le Van Dung), [email protected] (Ta Cong Son).
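As a quick empirical illustration of Theorem A (our addition, not part of the original argument), the following sketch simulates a martingale difference array whose conditional variances are random but satisfy both conditions of the theorem, and checks that $S_n$ is approximately $N_d(0, I_d)$. The scale process $h_i$ is a hypothetical example chosen so that $(1/k_n) \sum_i h_i^2 \to 1$ in probability.

```python
import numpy as np

# Sketch: X_{n,i} = h_i * eps_i / sqrt(k_n), with eps_i i.i.d. N(0, I_d) and
# h_i measurable with respect to F_{n,i-1} (it depends only on eps_{i-1}),
# so {X_{n,i}} is a martingale difference array.  By the law of large
# numbers, (1/k_n) * sum_i h_i^2 -> E[(1 + eps^2)/2] = 1 in probability,
# which is the conditional variance condition of Theorem A.
rng = np.random.default_rng(0)
d, kn, reps = 2, 2000, 4000

samples = np.empty((reps, d))
for r in range(reps):
    eps = rng.standard_normal((kn, d))
    h = np.ones((kn, d))
    h[1:] = np.sqrt(0.5 * (1.0 + eps[:-1] ** 2))      # depends only on the past
    samples[r] = (h * eps).sum(axis=0) / np.sqrt(kn)  # S_n

print("mean of S_n (should be near 0):", samples.mean(axis=0).round(3))
print("cov of S_n (should be near I_d):\n", np.cov(samples.T).round(3))
```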
In view of Corollary 2.8 in Bhattacharya & Rao (2010), an equivalent condition to (1.1) is that
$$E(f(S_n)) \to E(f(N_d)) \ \text{ as } n \to \infty$$
for any bounded Lipschitz function $f \in BL(\mathbb{R}^d) = \{f : \mathbb{R}^d \to \mathbb{R} : \|f\|_{BL} \le 1\}$, where
$$\|f\|_L = \sup_{x,y \in \mathbb{R}^d,\, x \ne y} \frac{|f(x) - f(y)|}{\|x - y\|}, \qquad \|f\|_\infty = \sup_{x \in \mathbb{R}^d} |f(x)|, \qquad \|f\|_{BL} = \|f\|_L + \|f\|_\infty.$$
Here, $\|x\|$ denotes the usual $L_2$-norm of a vector $x \in \mathbb{R}^d$. For each pair of probability laws $P$, $Q$ on $\mathbb{R}^d$, we define
$$d_{BL}(P, Q) = \sup_{f \in BL(\mathbb{R}^d)} \Big| \int f \, dP - \int f \, dQ \Big|.$$
It is well known that $d_{BL}$ is a metric, called the bounded Lipschitz metric; it metrizes the weak topology on the space of all probability measures on $\mathbb{R}^d$. In addition, the inequality $d_P^2 \le d_{BL} \le 2 d_P$ holds, where $d_P$ is the Prohorov metric. For more details, the reader may refer to Billingsley (1999) and Huber (1980).
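The supremum over all of $BL(\mathbb{R}^d)$ cannot be computed exactly, but any single test function $f$ with $\|f\|_{BL} \le 1$ gives a Monte Carlo lower bound on $d_{BL}(P, Q)$. A minimal sketch (the test function and the shifted law $Q$ are our illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # ||f||_infty = 1/2 and Lipschitz constant 1/2, hence ||f||_BL <= 1.
    return 0.5 * np.clip(x[..., 0], -1.0, 1.0)

p = rng.standard_normal((100_000, 2))        # sample from P = N_2(0, I_2)
q = rng.standard_normal((100_000, 2)) + 0.3  # sample from a shifted law Q
# |E f dP - E f dQ| <= d_BL(P, Q) for this fixed f:
print("Monte Carlo lower bound on d_BL(P, Q):", abs(f(p).mean() - f(q).mean()))
```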
For sequences of dependent random variables or random vectors $\{X_n;\ n \ge 1\}$, the rate of convergence of $F_{S_n}(x)$ to $\Phi(x)$ in different metrics was treated by Basu (1980). In the case $d = 1$, the rate of convergence in this theorem has been studied by many authors. Bolthausen (1982) established the rate of convergence in central limit theorems for bounded martingale difference sequences, and Mourrat (2013) generalized this result. Haeusler (1984, 1988) and El Machkouri & Ouchti (2007) extended the results of Bolthausen to unbounded martingale difference sequences. However, these methods on the real line seem unsuited to extension to $\mathbb{R}^d$ with $d > 1$ in the Kolmogorov metric.
In this paper, we use the methods of Bolthausen (1982) and Haeusler (1988) to establish the rate of convergence of $S_n$ to $N_d(0, I_d)$ for general martingale difference random vectors in the bounded Lipschitz metric $d_{BL}$. Dung et al. (2014) studied the case $d = 1$ for bounded martingale differences. Röllin (2018) also established convergence rates for unbounded martingale differences in the $d_{BL}$ metric via Stein's method, obtaining the rate $n^{-1/2} \log n$ for a class of martingale differences. Basu (1980) studied convergence rates for martingale difference random vectors over the class of all functions whose partial (mixed) derivatives of order up to $r$ ($r \ge 2$) are bounded and uniformly continuous (or Lipschitz).
Throughout this paper, $x^T$ and $A^T$ stand for the transposes of the vector $x$ and the matrix $A$, respectively. The norm of a matrix $A = (a_{ij})_{m \times n}$ is defined by
$$\|A\| = \Big( \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}^2 \Big)^{1/2}.$$
Let $\alpha = (\alpha^{(1)}, \ldots, \alpha^{(d)})$ denote a multi-index, i.e., $\alpha^{(1)}, \ldots, \alpha^{(d)}$ are nonnegative integers. For each multi-index $\alpha = (\alpha^{(1)}, \ldots, \alpha^{(d)})$ and each vector $x = (x^{(1)}, \ldots, x^{(d)})^T \in \mathbb{R}^d$, we write
$$|\alpha| = \alpha^{(1)} + \alpha^{(2)} + \ldots + \alpha^{(d)}, \qquad \alpha! = \alpha^{(1)}!\, \alpha^{(2)}! \cdots \alpha^{(d)}!, \qquad x^\alpha = (x^{(1)})^{\alpha^{(1)}} (x^{(2)})^{\alpha^{(2)}} \cdots (x^{(d)})^{\alpha^{(d)}}.$$
In addition, let $D^\alpha = D_1^{\alpha^{(1)}} D_2^{\alpha^{(2)}} \cdots D_d^{\alpha^{(d)}}$ denote the $\alpha$th-order derivative, where $D_k^{\alpha^{(k)}}$ is the partial differentiation of order $\alpha^{(k)}$ with respect to the $k$th coordinate variable.
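To make the notation concrete, here is a small sketch (hypothetical helper names, not from the paper) computing $|\alpha|$, $\alpha!$ and $x^\alpha$:

```python
import math

def multi_abs(alpha):        # |alpha| = alpha^(1) + ... + alpha^(d)
    return sum(alpha)

def multi_factorial(alpha):  # alpha! = alpha^(1)! * ... * alpha^(d)!
    return math.prod(math.factorial(a) for a in alpha)

def multi_power(x, alpha):   # x^alpha = prod_k (x^(k)) ** alpha^(k)
    return math.prod(xk ** ak for xk, ak in zip(x, alpha))

alpha, x = (1, 2), (2.0, 3.0)
print(multi_abs(alpha), multi_factorial(alpha), multi_power(x, alpha))
# -> 3 2 18.0
```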
As usual, all random vectors in this paper are defined on the basic probability space $(\Omega, \mathcal{F}, P)$. Finally, the symbol $C$ denotes a generic positive constant which is not necessarily the same at each appearance. The rest of the paper is arranged as follows: Section 2 is devoted to the formulation, some auxiliary results and the main theorems; all of the proofs are gathered in Section 3.
2. Statement of the Main Results

Let $\{X_{n,i} = (X_{n,i}^{(1)}, X_{n,i}^{(2)}, \ldots, X_{n,i}^{(d)})^T;\ 1 \le i \le k_n,\ n \ge 1\}$ be an array of real random vectors which form a martingale difference array with respect to the $\sigma$-fields $\mathcal{F}_{n,0} \subset \mathcal{F}_{n,1} \subset \ldots \subset \mathcal{F}_{n,k_n}$, where $\{k_n;\ n \ge 1\}$ is a sequence of positive integers such that $\lim_{n\to\infty} k_n = \infty$. Suppose that $E(\|X_{n,i}\|^2) < \infty$. Set $\sigma_{n,i}^2 = E(X_{n,i} X_{n,i}^T \mid \mathcal{F}_{n,i-1})$, $S_n = X_{n,1} + \ldots + X_{n,k_n}$,
$$a_n = \sup_{f \in BL(\mathbb{R}^d)} |E(f(S_n)) - E(f(N_d))|,$$
$$L_{n,2\delta} = \sum_{i=1}^{k_n} E\|X_{n,i}\|^{2+2\delta} \qquad \text{and} \qquad K_{n,2\delta} = \sum_{l=1}^{d} E\Big(\Big|1 - \sum_{i=1}^{k_n} (\sigma_{n,i}^{(l)})^2\Big|^{\delta+1}\Big)$$
for some $\delta \ge 1/2$.

Theorem 2.1. Suppose that
$$E(X_{n,i}^{(k)} X_{n,i}^{(l)} \mid \mathcal{F}_{n,i-1}) = \begin{cases} (\sigma_{n,i}^{(k)})^2 & \text{a.s., if } k = l, \\ 0 & \text{a.s., if } k \ne l, \end{cases}$$
for $1 \le k, l \le d$, and that $\sum_{i=1}^{k_n} \sigma_{n,i}^2 = I_d$ a.s. Then for any $\delta \ge 1/2$, there exists a positive constant $C$ depending only on $\delta$ and $d$ such that
$$a_n \le C (L_{n,2\delta})^{1/(2+2\delta)} (1 + |\log(L_{n,2\delta})|)$$
whenever $L_{n,2\delta} \le 1$.
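To see how the bound of Theorem 2.1 scales, consider the simplest i.i.d. normalized array $X_{n,i} = Z_i/\sqrt{k_n}$ with $Z_i$ i.i.d., $E(Z Z^T) = I_d$ diagonal (so $\sum_i \sigma_{n,i}^2 = I_d$ a.s.) and $m = E\|Z\|^{2+2\delta} < \infty$; then $L_{n,2\delta} = m\, k_n^{-\delta}$. The following sketch evaluates the resulting rate; the constant $C$ is not explicit in the theorem, so $C = 1$ is a placeholder:

```python
import numpy as np

def theorem_2_1_bound(kn, delta, m, C=1.0):
    # a_n <= C * L**(1/(2+2*delta)) * (1 + |log L|) with L = m * kn**(-delta)
    L = m * kn ** (-delta)
    return C * L ** (1.0 / (2.0 + 2.0 * delta)) * (1.0 + abs(np.log(L)))

for kn in (10**2, 10**4, 10**6):
    print(kn, theorem_2_1_bound(kn, delta=1.0, m=3.0))
```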
If the assumption $\sum_{i=1}^{k_n} \sigma_{n,i}^2 = I_d$ a.s. is removed, we obtain the following theorem.

Theorem 2.2. Suppose that
$$E(X_{n,i}^{(k)} X_{n,i}^{(l)} \mid \mathcal{F}_{n,i-1}) = \begin{cases} (\sigma_{n,i}^{(k)})^2 & \text{a.s., if } k = l, \\ 0 & \text{a.s., if } k \ne l, \end{cases}$$
for $1 \le k, l \le d$. For any $\delta \ge 1/2$, there exists a positive constant $C$ depending only on $\delta$ and $d$ such that
$$a_n \le C (L_{n,2\delta} + K_{n,2\delta})^{1/(2+2\delta)} |\log(L_{n,2\delta} + K_{n,2\delta})|$$
whenever $L_{n,2\delta} + K_{n,2\delta} \le 1/(2d^{\delta+1})$.

We know that when $E(X_{n,i} X_{n,i}^T \mid \mathcal{F}_{n,i-1})$ is a positive definite random matrix a.s., it has $d$ positive random eigenvalues $(\lambda_{n,i}^{(1)})^2, \ldots, (\lambda_{n,i}^{(d)})^2$, and there exists a random matrix $C_{n,i}$ with $C_{n,i}^T C_{n,i} = I_d$ a.s. such that $Y_{n,i} = C_{n,i}^T X_{n,i}$ satisfies $E(Y_{n,i} Y_{n,i}^T \mid \mathcal{F}_{n,i-1}) = \mathrm{diag}\big((\lambda_{n,i}^{(1)})^2, \ldots, (\lambda_{n,i}^{(d)})^2\big)$ a.s. Set
$$K_{n,2\delta}^* = \sum_{l=1}^{d} E\Big(\Big|1 - \sum_{i=1}^{k_n} (\lambda_{n,i}^{(l)})^2\Big|^{\delta+1}\Big).$$
We have the following corollary.

Corollary 2.3. Suppose that for each $i$ ($1 \le i \le k_n$), $\sigma_{n,i}^2 = E(X_{n,i} X_{n,i}^T \mid \mathcal{F}_{n,i-1})$ is a positive definite random matrix a.s. For any $\delta > 1/2$ there exists a finite constant $C = C(\delta)$ depending only on $\delta$ such that
$$a_n \le \sum_{i=1}^{k_n} E\|(C_{n,i}^T - I_d) X_{n,i}\| + C (L_{n,2\delta} + K_{n,2\delta}^*)^{1/(2+2\delta)} |\log(L_{n,2\delta} + K_{n,2\delta}^*)|$$
whenever $L_{n,2\delta} + K_{n,2\delta}^* \le 1/(2d^{\delta+1})$.

Note that in Corollary 2.3, when $E(X_{n,i} X_{n,i}^T \mid \mathcal{F}_{n,i-1})$ is a diagonal matrix, then $C_{n,i} = I_d$ a.s., so we recover Theorem 2.2.
In the next theorem, we give optimal convergence rates in the central limit theorem for stationary martingale difference sequences.

Theorem 2.4. Let $(X_i)_{i \in \mathbb{Z}}$ be a stationary sequence of martingale difference random vectors with respect to the $\sigma$-fields $\mathcal{F}_n = \sigma(X_i : i \le n)$. Suppose that $E(\|X_1\|^{2+2\delta}) < \infty$ for some $\delta \ge 1/2$ and that $E(X_1 X_1^T \mid \mathcal{F}_0) = E(X_1 X_1^T) = \sigma^2$ is a positive definite matrix. Then there exists a positive constant $C$ depending only on $\delta$ and $d$ such that
$$\sup_{f \in BL(\mathbb{R}^d)} \Big| E\Big( f\Big( \frac{\sigma^{-1} \sum_{i=1}^{n} X_i}{\sqrt{n}} \Big) \Big) - E(f(N_d)) \Big| \le C n^{-\delta/(2+2\delta)} \log n$$
for $n$ large enough. In the case $d = 1$, we have the following corollary.
Corollary 2.5. Let $(X_i)_{i \in \mathbb{Z}}$ be a stationary sequence of martingale differences with respect to the $\sigma$-fields $\mathcal{F}_n = \sigma(X_i : i \le n)$. Suppose that $E(|X_1|^{2+2\delta}) < \infty$ for some $\delta \ge 1/2$ and that $E(X_1^2 \mid \mathcal{F}_0) = E(X_1^2) = \sigma^2 > 0$. Then there exists a positive constant $C$ depending only on $\delta$ such that
$$\sup_{f \in BL(\mathbb{R})} \Big| E\Big( f\Big( \frac{\sigma^{-1} \sum_{i=1}^{n} X_i}{\sqrt{n}} \Big) \Big) - E(f(N_1)) \Big| \le C n^{-\delta/(2+2\delta)} \log n$$
for $n$ large enough.
Note that for $\delta > 1$ this rate is better than $n^{-1/4}$.
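A simulation sketch of Corollary 2.5 (our illustration, with hypothetical choices throughout): take $\varepsilon_i$ i.i.d. centered exponential, so all moments are finite, and $X_i = \mathrm{sign}(X_{i-1})\, \varepsilon_i$. Then $E(X_i \mid \mathcal{F}_{i-1}) = 0$ and $E(X_i^2 \mid \mathcal{F}_{i-1}) = 1$ a.s., while the skewness of $\varepsilon_i$ keeps the sequence genuinely dependent; a short burn-in stands in for a stationary start. The observed gap, for one fixed $f$ with $\|f\|_{BL} \le 1$, is only a lower bound for the left-hand side and is subject to Monte Carlo noise.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):  # one test function with ||f||_BL <= 1
    return 0.5 * np.clip(x, -1.0, 1.0)

def bl_gap(n, reps=10_000, burn=50):
    eps = rng.exponential(size=(reps, n + burn)) - 1.0  # mean 0, variance 1
    # sign(X_i) = prod_{j<=i} sign(eps_j), so the recursion vectorizes:
    sgn = np.cumprod(np.sign(eps), axis=1)
    x = eps.copy()
    x[:, 1:] = sgn[:, :-1] * eps[:, 1:]       # X_i = sign(X_{i-1}) * eps_i
    s = x[:, burn:].sum(axis=1) / np.sqrt(n)  # sigma = 1 for this model
    z = rng.standard_normal(reps)
    return abs(f(s).mean() - f(z).mean())

for n in (25, 100, 400):
    print(n, bl_gap(n))
```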
3. Proofs
For the proof of Theorem 2.1, we need the following preliminary lemma.
Lemma 3.1. Let $f : \mathbb{R}^d \to \mathbb{R}$ be a function, $\mathcal{G}$ a sub-$\sigma$-algebra of $\mathcal{F}$, and $N_d$ a random vector such that $\mathcal{L}(N_d \mid \mathcal{G}) = N_d(0, \Sigma)$, where $\Sigma$ is a random matrix. Let $Y$ be a $d$-dimensional random vector which is conditionally independent of $N_d$ given $\mathcal{G}$. Then
$$E(f(N_d + Y)) = E(f * \varphi_\Sigma(Y)), \tag{3.2}$$
where $\varphi_\Sigma(x)$ is the density function of $N_d(0, \Sigma)$ and the convolution product $f * \varphi_\Sigma$ of $f$ and $\varphi_\Sigma$ is defined by
$$f * \varphi_\Sigma(x) = \int_{\mathbb{R}^d} f(x - y) \varphi_\Sigma(y) \, dy.$$

Proof. We have that
$$F_{N_d + Y}(x \mid \mathcal{G}) = \int_{\mathbb{R}^d} F_{N_d}(x - v \mid \mathcal{G}) \, dF_Y(v \mid \mathcal{G}).$$
Hence,
$$\begin{aligned}
E(f(N_d + Y) \mid \mathcal{G}) &= \int_{\mathbb{R}^d} f(x) \, dF_{N_d + Y}(x \mid \mathcal{G}) = \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} f(x) \varphi_\Sigma(x - v) \, dx \, dF_Y(v \mid \mathcal{G}) \\
&= \int_{\mathbb{R}^d} f * \varphi_\Sigma(v) \, dF_Y(v \mid \mathcal{G}) = E(f * \varphi_\Sigma(Y) \mid \mathcal{G}),
\end{aligned}$$
which implies that (3.2) holds.
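A numerical sanity check of (3.2) in the simplest setting (our sketch: $d = 1$, trivial $\mathcal{G}$, deterministic $\Sigma = s^2$, and illustrative choices of $f$ and of the law of $Y$): both sides should agree up to Monte Carlo and quadrature error.

```python
import numpy as np

rng = np.random.default_rng(3)
s = 1.5                                   # Sigma = s^2, deterministic

def f(x):
    return np.clip(x, -1.0, 1.0)

t = np.linspace(-12.0, 12.0, 4001)
dt = t[1] - t[0]
phi = np.exp(-t * t / (2 * s * s)) / (s * np.sqrt(2 * np.pi))  # N(0, s^2) density

# (f * phi_Sigma)(x) on a grid, by a Riemann sum; then interpolate at Y.
x_grid = np.linspace(-15.0, 15.0, 1201)
conv = np.array([(f(xi - t) * phi).sum() * dt for xi in x_grid])

Y = rng.exponential(size=200_000) - 1.0   # Y independent of N_d
N = s * rng.standard_normal(200_000)
print("E f(N + Y)   ~", f(N + Y).mean())
print("E (f*phi)(Y) ~", np.interp(Y, x_grid, conv).mean())
```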
3.1. Proof of Theorem 2.1

Proof. Let $Z_{n,1}, Z_{n,2}, \ldots, Z_{n,k_n}$ be i.i.d. multivariate standard normal vectors, independent of $\{X_{n,i};\ 1 \le i \le k_n\}$ and of $\mathcal{F}_{n,k_n}$. For fixed $0 < \beta_n < 1$, let $\eta^T = (\eta^{(1)}, \ldots, \eta^{(d)})$ be a centered normal vector with $E(\eta^{(k)} \eta^{(l)}) = 0$ if $k \ne l$, $\mathrm{Var}(\eta^{(1)}) = \ldots = \mathrm{Var}(\eta^{(d)}) = 2\beta_n$, and $\eta$ independent of everything else. Let
$$U_{n,i} = \sum_{j=1}^{i-1} X_{n,j}, \qquad Y_{n,i} = \sigma_{n,i} Z_{n,i}, \qquad W_{n,i} = \sum_{j=i+1}^{k_n} Y_{n,j} + \eta, \qquad \lambda_{n,i}^2 = \sum_{j=i+1}^{k_n} \sigma_{n,j}^2 + \mathrm{cov}(\eta) = \mathrm{diag}\big((\lambda_{n,i}^{(l)})^2\big),$$
where $(\lambda_{n,i}^{(l)})^2 = \sum_{j=i+1}^{k_n} (\sigma_{n,j}^{(l)})^2 + 2\beta_n$. Note that, conditionally on the $\sigma$-field generated by the family of random vectors $\{Z_{n,1}, \ldots, Z_{n,i}\}$ and $\mathcal{F}_{n,k_n}$, $W_{n,i}$ is normally distributed with mean $0$ and covariance matrix $\lambda_{n,i}^2$, and $\sum_{j=1}^{k_n} Y_{n,j}$ is a standard multivariate normal vector. Therefore, for any function $f \in BL(\mathbb{R}^d)$,
$$|E(f(S_n)) - E(f(N_d))| \le \Big| E(f(S_n + \eta)) - E\Big( f\Big( \sum_{j=1}^{k_n} Y_{n,j} + \eta \Big) \Big) \Big| + C \beta_n^{1/2}. \tag{3.1}$$
Now we consider the first term on the right-hand side of (3.1). Let $\varphi_{\lambda_{n,i}}(x)$ be the density function of $W_{n,i}$. Note that, conditionally on $\mathcal{F}_{n,i-1}$, $W_{n,i}$ and $U_{n,i} + X_{n,i}$ are independent of each other. According to an idea which goes back to Lindeberg (1922), one writes
$$\begin{aligned}
E(f(S_n + \eta)) - E\Big( f\Big( \sum_{j=1}^{k_n} Y_{n,j} + \eta \Big) \Big) &= \sum_{i=1}^{k_n} \big\{ E(f(W_{n,i} + U_{n,i} + X_{n,i})) - E(f(W_{n,i} + U_{n,i} + Y_{n,i})) \big\} \\
&= \sum_{i=1}^{k_n} \big\{ E(f * \varphi_{\lambda_{n,i}}(U_{n,i} + X_{n,i})) - E(f * \varphi_{\lambda_{n,i}}(U_{n,i} + Y_{n,i})) \big\} \\
&= \sum_{i=1}^{k_n} \big\{ E(g_{n,i}(U_{n,i} + X_{n,i})) - E(g_{n,i}(U_{n,i} + Y_{n,i})) \big\},
\end{aligned}$$
where $g_{n,i} = f * \varphi_{\lambda_{n,i}}$. By Taylor expansion of $g_{n,i}$ on $\mathbb{R}^d$,
$$\begin{aligned}
E(f(S_n + \eta)) - E\Big( f\Big( \sum_{j=1}^{k_n} Y_{n,j} + \eta \Big) \Big)
&= \sum_{i=1}^{k_n} \sum_{|\alpha|=1} E\Big( \frac{E(X_{n,i}^\alpha \mid \mathcal{F}_{n,i-1}) - E(Y_{n,i}^\alpha \mid \mathcal{F}_{n,i-1})}{\alpha!} \, D^\alpha g_{n,i}(U_{n,i}) \Big) \\
&\quad + \sum_{i=1}^{k_n} \sum_{|\alpha|=2} E\Big( \frac{E(X_{n,i}^\alpha \mid \mathcal{F}_{n,i-1}) - E(Y_{n,i}^\alpha \mid \mathcal{F}_{n,i-1})}{\alpha!} \, D^\alpha g_{n,i}(U_{n,i}) \Big) \\
&\quad + \sum_{i=1}^{k_n} \sum_{|\alpha|=3} E\Big( \frac{X_{n,i}^\alpha}{\alpha!} \, D^\alpha g_{n,i}(U_{n,i} + \theta_{n,i} X_{n,i}) \Big) \\
&\quad - \sum_{i=1}^{k_n} \sum_{|\alpha|=3} E\Big( \frac{Y_{n,i}^\alpha}{\alpha!} \, D^\alpha g_{n,i}(U_{n,i} + \tilde\theta_{n,i} Y_{n,i}) \Big),
\end{aligned}$$
where $0 \le \theta_{n,i}, \tilde\theta_{n,i} \le 1$. It follows from $E(X_{n,i}^\alpha \mid \mathcal{F}_{n,i-1}) = E(Y_{n,i}^\alpha \mid \mathcal{F}_{n,i-1})$ for $|\alpha| \le 2$ that the first two types of summands in the above expression vanish. Hence,
$$\begin{aligned}
\Big| E(f(S_n + \eta)) - E\Big( f\Big( \sum_{j=1}^{k_n} Y_{n,j} + \eta \Big) \Big) \Big|
&\le \sum_{i=1}^{k_n} \sum_{|\alpha|=3} \frac{1}{\alpha!} E\big| X_{n,i}^\alpha \, D^\alpha g_{n,i}(U_{n,i} + \theta_{n,i} X_{n,i}) \big| + \sum_{i=1}^{k_n} \sum_{|\alpha|=3} \frac{1}{\alpha!} E\big| Y_{n,i}^\alpha \, D^\alpha g_{n,i}(U_{n,i} + \tilde\theta_{n,i} Y_{n,i}) \big| \\
&\le \sum_{|\alpha|=3} \sum_{i=1}^{k_n} E\big| X_{n,i}^\alpha \, D^\alpha g_{n,i}(U_{n,i} + \theta_{n,i} X_{n,i}) \big| + \sum_{|\alpha|=3} \sum_{i=1}^{k_n} E\big| Y_{n,i}^\alpha \, D^\alpha g_{n,i}(U_{n,i} + \tilde\theta_{n,i} Y_{n,i}) \big| \\
&:= \sum_{|\alpha|=3} \big( I^{(\alpha)} + II^{(\alpha)} \big). \tag{3.3}
\end{aligned}$$
We define a sequence of stopping times $\tau_j$ ($0 \le j \le k_n$):
$$\tau_0 = 0, \qquad \tau_j = \inf\Big\{ k : \max_{1 \le l \le d} \sum_{i=1}^{k} (\sigma_{n,i}^{(l)})^2 \ge j \beta_n \Big\} \ \text{ for } 1 \le j \le [\beta_n^{-1}], \qquad \tau_{[\beta_n^{-1}]+1} = k_n,$$
where $[\beta_n^{-1}]$ is the integer part of $\beta_n^{-1}$. On $\{\tau_{j-1} < i \le \tau_j\} \cap \{\|X_{n,i}\| \le \beta_n^{1/2}\}$ we have
$$\lambda_{n,i}^{(l)} \ge \sqrt{1 - (j-1)\beta_n} := \lambda_{n,j}.$$
Note that
$$D^\alpha g_{n,i}(x) = D^\beta f * D^\gamma \varphi_{\lambda_{n,i}}(x),$$
where $|\beta| = 1$ and $\gamma + \beta = \alpha$. Moreover,
$$|D^\alpha g_{n,i}(x)| \le \frac{d}{\lambda_{n,j}^2} \int_{\mathbb{R}^d} |D^\beta f(x - t)| (1 + \|t\|^2) \varphi(t) \, dt = \frac{d}{\lambda_{n,j}^2} \int_{\mathbb{R}^d} (1 + \|x - t\|^2) \varphi(x - t) |D^\beta f(t)| \, dt.$$
Then we get
$$\begin{aligned}
I^{(\alpha)} &= \sum_{i=1}^{k_n} E\big( |X_{n,i}^\alpha| \, |D^\alpha g_{n,i}(U_{n,i} + \theta_{n,i} X_{n,i})| \big) = \sum_{j=1}^{[\beta_n^{-1}]+1} E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} |X_{n,i}^\alpha| \, |D^\alpha g_{n,i}(U_{n,i} + \theta_{n,i} X_{n,i})| \Big) \\
&\le \sum_{j=1}^{[\beta_n^{-1}]+1} \bigg[ E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} \|X_{n,i}\|^3 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} |D^\alpha g_{n,i}(U_{n,i} + \theta_{n,i} X_{n,i})| \Big) + E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} \|X_{n,i}\|^3 I_{\{\|X_{n,i}\| > \beta_n^{1/2}\}} |D^\alpha g_{n,i}(U_{n,i} + \theta_{n,i} X_{n,i})| \Big) \bigg] \\
&:= \sum_{j=1}^{[\beta_n^{-1}]+1} I_j^\alpha + \sum_{j=1}^{[\beta_n^{-1}]+1} \tilde I_j^\alpha. \tag{3.4}
\end{aligned}$$
We set $R_{n,i} = \sum_{k=\tau_{j-1}+1}^{i-1} X_{n,k}$ and $A_{nit} = \{\|R_{n,i}\| \le \frac{1}{2} \|U_{\tau_{j-1}+1} - t\|\}$ for $t \in \mathbb{R}^d$. Let the function $\psi : \mathbb{R}^d \to \mathbb{R}$ be defined by
$$\psi(x) = \sup\big\{ (1 + \|y\|^2) \varphi(y) : \|y\| \ge \|x\|/2 - \beta_n^{1/2} \big\}.$$
Observe that on $A_{nit}$,
$$\|U_{n,\tau_{j-1}+1} - t\| = \|(U_{n,i} + \theta_{n,i} X_{n,i} - t) - R_{n,i} - \theta_{n,i} X_{n,i}\| \le \|U_{n,i} + \theta_{n,i} X_{n,i} - t\| + \|R_{n,i}\| + \|\theta_{n,i} X_{n,i}\| \le \|U_{n,i} + \theta_{n,i} X_{n,i} - t\| + \frac{1}{2} \|U_{n,\tau_{j-1}+1} - t\| + \beta_n^{1/2}.$$
Then, on the set $\{\tau_{j-1} < i \le \tau_j\} \cap \{\|X_{n,i}\| \le \beta_n^{1/2}\}$, we almost surely have
$$\begin{aligned}
|D^\alpha g_{n,i}(U_{n,i} + \theta_{n,i} X_{n,i})| &\le \frac{d}{\lambda_{n,j}^2} \int_{\mathbb{R}^d} (\|U_{n,i} + \theta_{n,i} X_{n,i} - t\|^2 + 1) \varphi(U_{n,i} + \theta_{n,i} X_{n,i} - t) |D^\beta f(t)| \, dt \\
&\le \frac{d}{\lambda_{n,j}^2} \int_{\mathbb{R}^d} \psi(U_{\tau_{j-1}+1} - t) |D^\beta f(t)| I_{A_{nit}} \, dt + \frac{d}{\lambda_{n,j}^2} \int_{\mathbb{R}^d} \psi(U_{\tau_{j-1}+1} - t) |D^\beta f(t)| I_{A^c_{nit}} \, dt \\
&\le \frac{d}{\lambda_{n,j}^2} \int_{\mathbb{R}^d} \psi(U_{\tau_{j-1}+1} - t) |D^\beta f(t)| \, dt + \frac{Cd}{\lambda_{n,j}^2} \int_{\mathbb{R}^d} I_{A^c_{nit}} |D^\beta f(t)| \, dt.
\end{aligned}$$
Hence,
$$\begin{aligned}
I_j^\alpha &\le E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} \|X_{n,i}\|^3 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} |D^\alpha g_{n,i}(U_{n,i} + \theta_{n,i} X_{n,i})| \Big) \\
&\le C \beta_n^{1/2} E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} \|X_{n,i}\|^2 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \frac{d}{\lambda_{n,j}^2} \int_{\mathbb{R}^d} \psi(U_{\tau_{j-1}+1} - t) |D^\beta f(t)| \, dt \Big) + C \beta_n^{1/2} E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} \|X_{n,i}\|^2 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \frac{d}{\lambda_{n,j}^2} \int_{\mathbb{R}^d} I_{A^c_{nit}} |D^\beta f(t)| \, dt \Big) \\
&:= C \beta_n^{1/2} (M_j^\alpha + N_j^\alpha). \tag{3.5}
\end{aligned}$$
We consider $M_j^\alpha$ first:
$$\begin{aligned}
M_j^\alpha &= E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} \|X_{n,i}\|^2 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \frac{d}{\lambda_{n,j}^2} \int_{\mathbb{R}^d} \psi(U_{\tau_{j-1}+1} - t) |D^\beta f(t)| \, dt \Big) \\
&= E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} \|X_{n,i}\|^2 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \frac{d}{\lambda_{n,j}^2} \int_{\mathbb{R}^d} |D^\beta f(U_{\tau_{j-1}+1} - t)| \psi(t) \, dt \Big) \\
&\le E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} \|X_{n,i}\|^2 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \frac{d}{\lambda_{n,j}^2} \int_{\mathbb{R}^d} \psi(t) \, dt \Big) \\
&\le \frac{C}{\lambda_{n,j}^2} \sum_{i=\tau_{j-1}+1}^{\tau_j} E\big( \|X_{n,i}\|^2 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \big) \le C \frac{\beta_n}{\lambda_{n,j}^2}. \tag{3.6}
\end{aligned}$$
Next, we consider $N_j^\alpha$. Let
$$G_{n,j} = \sum_{i=\tau_{j-1}+1}^{\tau_j} \|X_{n,i}\|^2 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}}, \qquad H_{n,j} = \max_{\tau_{j-1} < k \le \tau_j} \Big\| \sum_{i=\tau_{j-1}+1}^{k-1} X_{n,i} \Big\|, \qquad B_{njt} = \Big\{ H_{n,j} > \frac{1}{2} \|U_{\tau_{j-1}+1} - t\| \Big\}.$$
Then
$$\begin{aligned}
N_j^\alpha &= \frac{d}{\lambda_{n,j}^2} E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} \|X_{n,i}\|^2 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \int_{\mathbb{R}^d} I_{A^c_{nit}} |D^\beta f(t)| \, dt \Big) \\
&\le \frac{1}{\lambda_{n,j}^2} E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} \|X_{n,i}\|^2 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \int_{\mathbb{R}^d} I_{B_{njt}} |D^\beta f(t)| \, dt \Big) \\
&\le \frac{1}{\lambda_{n,j}^2} E\bigg( \sum_{i=\tau_{j-1}+1}^{\tau_j} \|X_{n,i}\|^2 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \int_{\mathbb{R}^d} \min\Big\{ 1, \Big( \frac{\|U_{\tau_{j-1}+1} - t\|}{2} \Big)^{-2} E(H_{n,j}^2 \mid \mathcal{F}_{\tau_{j-1}}) \Big\} |D^\beta f(t)| \, dt \bigg) \\
&= \frac{1}{\lambda_{n,j}^2} E\bigg( \sum_{i=\tau_{j-1}+1}^{\tau_j} \|X_{n,i}\|^2 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \int_{\mathbb{R}^d} |D^\beta f(U_{\tau_{j-1}+1} - t)| \min\Big\{ 1, \Big( \frac{\|t\|}{2} \Big)^{-2} E(H_{n,j}^2 \mid \mathcal{F}_{\tau_{j-1}}) \Big\} \, dt \bigg) \\
&\le \frac{1}{\lambda_{n,j}^2} E\bigg( \sum_{i=\tau_{j-1}+1}^{\tau_j} \|X_{n,i}\|^2 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \int_{\mathbb{R}^d} \min\Big\{ 1, 2d\beta_n \Big( \frac{\|t\|}{2} \Big)^{-2} \Big\} \, dt \bigg) \\
&\le C \frac{\beta_n}{\lambda_{n,j}^2} \int_{\mathbb{R}^d} \min\Big\{ 1, 2d\beta_n \Big( \frac{\|t\|}{2} \Big)^{-2} \Big\} \, dt \le C \frac{\beta_n}{\lambda_{n,j}^2}. \tag{3.7}
\end{aligned}$$
Combining (3.5), (3.6) and (3.7) yields
$$I_j^\alpha \le C \frac{\beta_n^{3/2}}{\lambda_{n,j}^2}.$$
Thus,
$$\sum_{j=1}^{[\beta_n^{-1}]+1} I_j^\alpha \le C \sum_{j=1}^{[\beta_n^{-1}]+1} \frac{\beta_n^{3/2}}{\lambda_{n,j}^2} = C \beta_n^{3/2} \sum_{j=1}^{[\beta_n^{-1}]+1} \frac{1}{1 - (j-1)\beta_n} \le C \beta_n^{1/2} |\log \beta_n|. \tag{3.8}$$
For $\sum_{j=1}^{[\beta_n^{-1}]+1} \tilde I_j^\alpha$, it is obvious that for $\tau_{j-1} < i \le \tau_j$, after the change of variables $t = U_{n,i} + \theta_{n,i} X_{n,i} - \lambda_{n,i} y$,
$$|D^\alpha g_{n,i}(U_{n,i} + \theta_{n,i} X_{n,i})| = \Big| \int_{\mathbb{R}^d} D^\gamma \varphi_{\lambda_{n,i}}(U_{n,i} + \theta_{n,i} X_{n,i} - t) \, D^\beta f(t) \, dt \Big| \le \frac{d}{\lambda_{n,j}^2} \int_{\mathbb{R}^d} \psi(y) \, dy \le \frac{C}{\lambda_{n,j}^2}.$$
Thus,
$$\sum_{j=1}^{[\beta_n^{-1}]+1} \tilde I_j^\alpha \le \sum_{j=1}^{[\beta_n^{-1}]+1} \frac{C \beta_n^{(1-2\delta)/2}}{\lambda_{n,j}^2} \sum_{i=\tau_{j-1}+1}^{\tau_j} E(\|X_{n,i}\|^{2+2\delta}) \le C \beta_n^{-1/2-\delta} L_{n,2\delta}. \tag{3.9}$$
Combining (3.4), (3.8) and (3.9), we get
$$I^{(\alpha)} \le C \beta_n^{1/2} |\log \beta_n| + C \beta_n^{-1/2-\delta} L_{n,2\delta}. \tag{3.10}$$
Next, we need to derive a bound for $II^{(\alpha)}$ on the right-hand side of (3.3):
$$\begin{aligned}
II^{(\alpha)} &= \sum_{j=1}^{[\beta_n^{-1}]+1} E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} |Y_{n,i}^\alpha| \, |D^\alpha g_{n,i}(U_{n,i} + \tilde\theta_{n,i} Y_{n,i})| \Big) \\
&\le \sum_{j=1}^{[\beta_n^{-1}]+1} E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} |Y_{n,i}^\alpha| I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} |D^\alpha g_{n,i}(U_{n,i} + \tilde\theta_{n,i} Y_{n,i})| \Big) + \sum_{j=1}^{[\beta_n^{-1}]+1} E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} |Y_{n,i}^\alpha| I_{\{\|X_{n,i}\| > \beta_n^{1/2}\}} |D^\alpha g_{n,i}(U_{n,i} + \tilde\theta_{n,i} Y_{n,i})| \Big) \\
&:= \sum_{j=1}^{[\beta_n^{-1}]+1} II_j^\alpha + \sum_{j=1}^{[\beta_n^{-1}]+1} \widetilde{II}_j^\alpha. \tag{3.11}
\end{aligned}$$
The second summand can be estimated by
$$\sum_{j=1}^{[\beta_n^{-1}]+1} \widetilde{II}_j^\alpha \le C \beta_n^{-1/2-\delta} L_{n,2\delta}. \tag{3.12}$$
As for the first summand, we introduce $\tilde A_{nit} = \{\|R_{n,i}\| \le \|U_{\tau_{j-1}+1} - t\|/4\}$ and $B_{nit} = \{\|Y_{n,i}\| \le \|U_{\tau_{j-1}+1} - t\|/4\}$; then we have
$$\begin{aligned}
II_j^\alpha &= E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} |Y_{n,i}^\alpha| I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} |D^\alpha g_{n,i}(U_{n,i} + \tilde\theta_{n,i} Y_{n,i})| \Big) \\
&= E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} |Y_{n,i}^\alpha| I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \Big| \int_{\mathbb{R}^d} D^\gamma \varphi_{\lambda_{n,i}}(U_{n,i} + \tilde\theta_{n,i} Y_{n,i} - t) D^\beta f(t) \, dt \Big| \Big) \\
&\le E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} |Y_{n,i}^\alpha| I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \int_{\mathbb{R}^d} |D^\gamma \varphi_{\lambda_{n,i}}(U_{n,i} + \tilde\theta_{n,i} Y_{n,i} - t)| \, |D^\beta f(t)| I_{\tilde A_{nit} \cap B_{nit}} \, dt \Big) \\
&\quad + C E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} |Y_{n,i}^\alpha| I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \int_{\mathbb{R}^d} I_{\tilde A^c_{nit}} |D^\gamma \varphi_{\lambda_{n,i}}(U_{n,i} + \tilde\theta_{n,i} Y_{n,i} - t)| \, |D^\beta f(t)| \, dt \Big) \\
&\quad + C E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} |Y_{n,i}^\alpha| I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \int_{\mathbb{R}^d} I_{B^c_{nit}} |D^\gamma \varphi_{\lambda_{n,i}}(U_{n,i} + \tilde\theta_{n,i} Y_{n,i} - t)| \, |D^\beta f(t)| \, dt \Big).
\end{aligned}$$
Using the independence of the $Z_{n,i}$, the first two types of summands can be estimated as $I^\alpha$ and $\tilde I^\alpha$ above. As for the third, we remark that
$$\begin{aligned}
\hat M &= E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} |Y_{n,i}^\alpha| I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \int_{\mathbb{R}^d} I_{B^c_{nit}} |D^\gamma \varphi_{\lambda_{n,i}}(U_{n,i} + \tilde\theta_{n,i} Y_{n,i} - t)| \, |D^\beta f(t)| \, dt \Big) \\
&\le \frac{d}{\lambda_{n,j}^2} E\Big( \sum_{i=\tau_{j-1}+1}^{\tau_j} \|\sigma_{n,i} Z_{n,i}\|^3 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \int_{\mathbb{R}^d} I_{B^c_{nit}} \psi(U_{\tau_{j-1}+1} - t) |D^\beta f(t)| \, dt \Big) \\
&\le \frac{C}{\lambda_{n,j}^2} \sum_{i=\tau_{j-1}+1}^{\tau_j} E\Big( \|Z_{n,i}\|^3 \|X_{n,i}\|^3 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \int_{\mathbb{R}^d} I_{B^c_{nit}} |D^\beta f(t)| \, dt \Big) \\
&\le \frac{C}{\lambda_{n,j}^2} \sum_{i=\tau_{j-1}+1}^{\tau_j} E\Big( \|X_{n,i}\|^3 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \int_{\mathbb{R}^d} E\big( \|Z_{n,i}\|^3 I_{\{\|\sigma_{n,i} Z_{n,i}\| > \|U_{\tau_{j-1}+1} - t\|/4\}} \big) |D^\beta f(t)| \, dt \Big) \\
&\le \frac{C \beta_n^{1/2}}{\lambda_{n,j}^2} \sum_{i=\tau_{j-1}+1}^{\tau_j} E\Big( \|X_{n,i}\|^2 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \int_{\mathbb{R}^d} |D^\beta f(U_{\tau_{j-1}+1} - t)| \, E\big( \|Z_{n,i}\|^3 I_{\{\|Z_{n,i}\| > C\|t\|\}} \big) \, dt \Big) \\
&\le \frac{C \beta_n^{1/2}}{\lambda_{n,j}^2} \sum_{i=\tau_{j-1}+1}^{\tau_j} E\big( \|X_{n,i}\|^2 I_{\{\|X_{n,i}\| \le \beta_n^{1/2}\}} \big) \le C \frac{\beta_n^{3/2}}{\lambda_{n,j}^2}.
\end{aligned}$$
Handling this expression as above, one obtains
$$\sum_{j=1}^{[\beta_n^{-1}]+1} II_j^\alpha \le C \beta_n^{1/2} |\log \beta_n|. \tag{3.13}$$
It follows from (3.11), (3.12) and (3.13) that
$$II^{(\alpha)} \le C \big( \beta_n^{1/2} |\log \beta_n| + \beta_n^{-1/2-\delta} L_{n,2\delta} \big). \tag{3.14}$$
Combining this with (3.1) and (3.10), one obtains
$$a_n \le C \big( \beta_n^{1/2} |\log \beta_n| + \beta_n^{-1/2-\delta} L_{n,2\delta} + \beta_n^{1/2} \big).$$
Putting $\beta_n = L_{n,2\delta}^{1/(1+\delta)}$, we have
$$a_n \le C (L_{n,2\delta})^{1/(2+2\delta)} (1 + |\log(L_{n,2\delta})|).$$
Therefore, the theorem is proved.
3.2. Proof of Theorem 2.2

Proof. For each $l = 1, 2, \ldots, d$, define the stopping times
$$\tau_l = \max\Big\{ 0 \le k \le k_n : \sum_{i=1}^{k} (\sigma_{n,i}^{(l)})^2 \le 1 \Big\},$$
with $\sum_{i=1}^{0} (\sigma_{n,i}^{(l)})^2 = 0$. Put $\tilde X_{n,i} := (\tilde X_{n,i}^1, \ldots, \tilde X_{n,i}^d) = (X_{n,i}^1 I_{\{i \le \tau_1\}}, \ldots, X_{n,i}^d I_{\{i \le \tau_d\}})$ for $1 \le i \le k_n$, and
$$\tilde X_{n,k_n+1} := (\tilde X_{n,k_n+1}^1, \ldots, \tilde X_{n,k_n+1}^d) = \Big( \Big( 1 - \sum_{i=1}^{\tau_1} (\sigma_{n,i}^{(1)})^2 \Big)^{1/2} Y^{(1)}, \ldots, \Big( 1 - \sum_{i=1}^{\tau_d} (\sigma_{n,i}^{(d)})^2 \Big)^{1/2} Y^{(d)} \Big),$$
where $Y = (Y^{(1)}, \ldots, Y^{(d)})$ is independent of $\mathcal{F}_{n,k_n}$ with $P(Y^{(l)} = -1) = P(Y^{(l)} = 1) = 1/2$, and the components of $Y$ are independently distributed. We have
$$\begin{aligned}
\sum_{i=1}^{k_n+1} E\|\tilde X_{n,i}\|^{2+2\delta} &\le L_{n,2\delta} + E(\|\tilde X_{n,k_n+1}\|^{2+2\delta}) \le L_{n,2\delta} + E\Big( \sum_{l=1}^{d} \Big( 1 - \sum_{i=1}^{\tau_l} (\sigma_{n,i}^{(l)})^2 \Big) \Big)^{1+\delta} \\
&\le L_{n,2\delta} + E\Big[ \sum_{l=1}^{d} \Big( 1 - \sum_{i=1}^{\tau_l} (\sigma_{n,i}^{(l)})^2 \Big) I_{\{\tau_l = k_n\}} + \sum_{l=1}^{d} \Big( 1 - \sum_{i=1}^{\tau_l} (\sigma_{n,i}^{(l)})^2 \Big) I_{\{\tau_l < k_n\}} \Big]^{1+\delta} \\
&\le L_{n,2\delta} + d^\delta \Big[ \sum_{l=1}^{d} E\Big( \Big| 1 - \sum_{i=1}^{k_n} (\sigma_{n,i}^{(l)})^2 \Big|^{\delta+1} \Big) + \sum_{l=1}^{d} E\Big( \max_{1 \le i \le k_n} (\sigma_{n,i}^{(l)})^{2+2\delta} \Big) \Big] \\
&\le L_{n,2\delta} + d^\delta [K_{n,2\delta} + d L_{n,2\delta}] \le 2 d^{\delta+1} [L_{n,2\delta} + K_{n,2\delta}] \le 1
\end{aligned}$$
whenever $L_{n,2\delta} + K_{n,2\delta} \le 1/(2d^{\delta+1})$.
Then $\{\tilde X_{n,i},\ 1 \le i \le k_n + 1\}$ is an array of martingale differences satisfying the assumptions of Theorem 2.1. Set $\tilde S_n = \sum_{i=1}^{k_n+1} \tilde X_{n,i}$; applying Theorem 2.1 we get
$$\sup_{f \in BL(\mathbb{R}^d)} |E(f(\tilde S_n)) - E(f(N_d))| \le C \Big( \sum_{i=1}^{k_n+1} E(\|\tilde X_{n,i}\|^{2+2\delta}) \Big)^{1/(2+2\delta)} \Big| \log\Big( \sum_{i=1}^{k_n+1} E(\|\tilde X_{n,i}\|^{2+2\delta}) \Big) \Big|.$$
Thus,
$$\begin{aligned}
\sup_{f \in BL(\mathbb{R}^d)} |E(f(S_n)) - E(f(N_d))| &\le C \Big( \sum_{i=1}^{k_n+1} E(\|\tilde X_{n,i}\|^{2+2\delta}) \Big)^{1/(2+2\delta)} \Big| \log\Big( \sum_{i=1}^{k_n+1} E(\|\tilde X_{n,i}\|^{2+2\delta}) \Big) \Big| + \big( E\|S_n - \tilde S_n\|^{2+2\delta} \big)^{1/(2+2\delta)} \\
&\le C (L_{n,2\delta} + K_{n,2\delta})^{1/(2+2\delta)} (1 + |\log(L_{n,2\delta} + K_{n,2\delta})|) + \big( E\|S_n - \tilde S_n\|^{2+2\delta} \big)^{1/(2+2\delta)}. \tag{3.15}
\end{aligned}$$
On the other hand, putting $\tau = \min_{1 \le l \le d} \{\tau_l\}$, we get
$$E\|S_n - \tilde S_n\|^{2+2\delta} \le C \Big[ E\Big\| \sum_{i=\tau+1}^{k_n} (X_{n,i} - \tilde X_{n,i}) \Big\|^{2+2\delta} + E\|\tilde X_{n,k_n+1}\|^{2+2\delta} \Big],$$
$$E\|\tilde X_{n,k_n+1}\|^{2+2\delta} \le C (L_{n,2\delta} + K_{n,2\delta}),$$
and
$$\begin{aligned}
E\Big\| \sum_{i=\tau+1}^{k_n} (X_{n,i} - \tilde X_{n,i}) \Big\|^{2+2\delta} &\le C \Big[ E\Big( \sum_{i=\tau+1}^{k_n} E(\|X_{n,i} - \tilde X_{n,i}\|^2 \mid \mathcal{F}_{n,i-1}) \Big)^{1+\delta} + E\big( \max_{1 \le i \le k_n} \|X_{n,i}\|^{2+2\delta} \big) \Big] \\
&\le C \Big[ E\Big( \sum_{l=1}^{d} \Big| 1 - \sum_{i=1}^{k_n} (\sigma_{n,i}^{(l)})^2 \Big| \Big)^{1+\delta} + E\Big( \sum_{l=1}^{d} \Big| 1 - \sum_{i=1}^{\tau_l} (\sigma_{n,i}^{(l)})^2 \Big| \Big)^{1+\delta} + L_{n,2\delta} \Big] \\
&\le C (L_{n,2\delta} + K_{n,2\delta}),
\end{aligned}$$
which imply that
$$\big( E\|S_n - \tilde S_n\|^{2+2\delta} \big)^{1/(2+2\delta)} \le C (L_{n,2\delta} + K_{n,2\delta})^{1/(2+2\delta)}. \tag{3.16}$$
The conclusion of Theorem 2.2 follows from (3.15) and (3.16), noting that $C(1 + |\log(L_{n,2\delta} + K_{n,2\delta})|) \le C |\log(L_{n,2\delta} + K_{n,2\delta})|$ whenever $L_{n,2\delta} + K_{n,2\delta} \le 1/(2d^{\delta+1})$.

3.3. Proof of Corollary 2.3
Proof. Put $Y_{n,i} = C_{n,i}^T X_{n,i}$; then $\{Y_{n,i},\ 1 \le i \le k_n,\ n \ge 1\}$ satisfies the assumptions of Theorem 2.2. Hence, for any $\delta > 1/2$, there exists a constant $C = C(\delta) > 0$ depending only on $\delta$ such that
$$\sup_{f \in BL(\mathbb{R}^d)} |E(f(S_n^*)) - E(f(N_d))| \le C (L_{n,2\delta}^* + K_{n,2\delta}^*)^{1/(2+2\delta)} (1 + |\log(L_{n,2\delta}^* + K_{n,2\delta}^*)|)$$
whenever $L_{n,2\delta}^* + K_{n,2\delta}^* \le 1/(2d^{\delta+1})$, where $S_n^* = Y_{n,1} + \ldots + Y_{n,k_n}$ and
$$L_{n,2\delta}^* = \sum_{i=1}^{k_n} E\|Y_{n,i}\|^{2+2\delta} = \sum_{i=1}^{k_n} E\|X_{n,i}\|^{2+2\delta} = L_{n,2\delta}.$$
On the other hand,
$$\begin{aligned}
a_n &\le \sup_{f \in BL(\mathbb{R}^d)} |E(f(S_n^*)) - E(f(N_d))| + E\Big\| \sum_{i=1}^{k_n} C_{n,i}^T X_{n,i} - \sum_{i=1}^{k_n} X_{n,i} \Big\| \\
&\le \sup_{f \in BL(\mathbb{R}^d)} |E(f(S_n^*)) - E(f(N_d))| + \sum_{i=1}^{k_n} E\|(C_{n,i}^T - I_d) X_{n,i}\| \\
&\le C (L_{n,2\delta} + K_{n,2\delta}^*)^{1/(2+2\delta)} (1 + |\log(L_{n,2\delta} + K_{n,2\delta}^*)|) + \sum_{i=1}^{k_n} E\|(C_{n,i}^T - I_d) X_{n,i}\|.
\end{aligned}$$
This completes the proof.
3.4. Proof of Theorem 2.4

Proof. Set $Y_{n,i} = \frac{\sigma^{-1}}{\sqrt{n}} X_i$ for $1 \le i \le n$; then $\sigma_{n,i}^2 = E(Y_{n,i} Y_{n,i}^T \mid \mathcal{F}_{n,i-1}) = \mathrm{diag}(1/n, \ldots, 1/n)$, where $\mathcal{F}_{n,i} = \sigma(X_k, k \le i)$. We see that $\{Y_{n,i};\ 1 \le i \le n,\ n \ge 1\}$ is an array of martingale difference random vectors with respect to the $\sigma$-fields $(\mathcal{F}_{n,k};\ 1 \le k \le n)$, and
$$\sum_{i=1}^{n} \sigma_{n,i}^2 = I_d, \qquad L_{n,2\delta} = \sum_{i=1}^{n} E\|Y_{n,i}\|^{2+2\delta} \le \frac{\|\sigma^{-1}\|^{2+2\delta} E\|X_1\|^{2+2\delta}}{n^\delta}, \qquad S_n = \sum_{i=1}^{n} Y_{n,i} = \sum_{i=1}^{n} \sigma^{-1} X_i / \sqrt{n}.$$
By Theorem 2.1, we have
$$\sup_{f \in BL(\mathbb{R}^d)} \Big| E\Big( f\Big( \frac{\sigma^{-1} \sum_{i=1}^{n} X_i}{\sqrt{n}} \Big) \Big) - E(f(N_d)) \Big| = \sup_{f \in BL(\mathbb{R}^d)} |E(f(S_n)) - E(f(N_d))| \le C (L_{n,2\delta})^{1/(2+2\delta)} (1 + |\log(L_{n,2\delta})|) \le C n^{-\delta/(2+2\delta)} \log n.$$
Acknowledgments
We would like to thank the anonymous referee for helpful comments, which have further improved our results. This research has been partially supported by Vietnam's National Foundation for Science and Technology Development (NAFOSTED) under grant no. 101.03.2017.24 and by the Funds for Science and Technology Development of the University of Danang under project no. B2019-DN03-33.

References
Basu, A. K. (1980). On the rate of approximation in the central limit theorem for dependent random variables and random vectors. Journal of Multivariate Analysis, 10, 565–578.

Bhattacharya, R. N., & Rao, R. R. (2010). Normal Approximation and Asymptotic Expansions. Society for Industrial and Applied Mathematics, Philadelphia.

Billingsley, P. (1999). Convergence of Probability Measures. Wiley.

Bolthausen, E. (1982). Exact convergence rates in some martingale central limit theorems. The Annals of Probability, 10, 672–688.

Dung, L. V., Son, T. C., & Tien, N. D. (2014). L1 bounds for some martingale central limit theorems. Lithuanian Mathematical Journal, 54, 48–60.

El Machkouri, M., & Ouchti, L. (2007). Exact convergence rates in the central limit theorem for a class of martingales. Bernoulli, 13, 981–999.

Haeusler, E. (1984). A note on the rate of convergence in the martingale central limit theorem. The Annals of Probability, 12, 635–639.

Haeusler, E. (1988). On the rate of convergence in the central limit theorem for martingales with discrete and continuous time. The Annals of Probability, 16, 275–299.

Hall, P., & Heyde, C. C. (1980). Martingale Limit Theory and Its Application. Academic Press, San Diego.

Huber, P. J. (1980). Robust Statistics. John Wiley & Sons.

Lindeberg, J. W. (1922). Eine neue Herleitung des Exponentialgesetzes in der Wahrscheinlichkeitsrechnung. Mathematische Zeitschrift, 15, 211–225.

Mourrat, J.-C. (2013). On the rate of convergence in the martingale central limit theorem. Bernoulli, 19, 633–645.

Röllin, A. (2018). On quantitative bounds in the mean martingale central limit theorem. Statistics & Probability Letters, 138, 171–176.