On the Nadaraya–Watson kernel regression estimator for irregularly spaced spatial data


Journal of Statistical Planning and Inference 205 (2020) 92–114


M. El Machkouri (a,∗), X. Fan (b), L. Reding (a)

(a) Laboratoire de Mathématiques Raphaël Salem, UMR CNRS 6085, Université de Rouen Normandie, France
(b) Center for Applied Mathematics, Tianjin University, Tianjin, China

Article history: Received 17 July 2018; Received in revised form 13 May 2019; Accepted 17 June 2019; Available online 27 June 2019.
MSC: 62G05; 60J25; 62G07.

Abstract. We investigate the asymptotic normality of the Nadaraya–Watson kernel regression estimator for irregularly spaced data collected on a finite region of the lattice Z^d, where d is a positive integer. The results are stated for strongly mixing random fields in the sense of Rosenblatt (1956) and for weakly dependent random fields in the sense of Wu (2005). Only minimal conditions on the bandwidth parameter and simple conditions on the dependence structure of the data are assumed.

Keywords: Nadaraya–Watson estimator; strong mixing; random fields; asymptotic normality; physical dependence measure.

1. Introduction

In many situations, practitioners want to know the relationship between some predictors and a response. If the form of the functional relation is unknown, then a nonparametric approach is necessary; this is a natural question and a very important task in statistics. A very popular tool to handle this problem is the Nadaraya–Watson estimator (NWE) introduced by Nadaraya (1965) and Watson (1964). In this work, we investigate the asymptotic normality of the NWE in the context of dependent irregularly spaced spatial data.

Let d, n and N be positive integers. Let also (Y_i, X_i)_{i∈Z^d} be a strictly stationary R × R^N-valued random field defined on a probability space (Ω, F, P). We assume that the common law μ of the random variables (X_i)_{i∈Z^d} is absolutely continuous with respect to the Lebesgue measure on R^N. We denote by f the unknown probability density function of μ. Let Λ_n be a finite region of Z^d and let (η_i)_{i∈Z^d} be iid R^N-valued random variables with zero mean and finite variance, independent of (X_i)_{i∈Z^d}. The regression model is characterized by the relation Y_i = R(X_i, η_i) for i in Λ_n, where R is an unknown functional. In our setting, it is important to note that no regularity condition is imposed on Λ_n, which can be very general (irregularly spaced data). The regression function r is defined for any x in R^N by

r(x) = E[R(x, η_0)]   if f(x) ≠ 0,   and   r(x) = E[Y_0]   otherwise,

∗ Corresponding author. E-mail addresses: [email protected] (M. El Machkouri), [email protected] (X. Fan), [email protected] (L. Reding). https://doi.org/10.1016/j.jspi.2019.06.006


and the NWE r_n of r is defined for any x in R^N by

r_n(x) = ( Σ_{i∈Λ_n} Y_i K((x − X_i)/b_n) ) / ( Σ_{i∈Λ_n} K((x − X_i)/b_n) )   if Σ_{i∈Λ_n} K((x − X_i)/b_n) ≠ 0,

r_n(x) = (1/|Λ_n|) Σ_{i∈Λ_n} Y_i   otherwise,

where |Λ_n| is the number of elements in the region Λ_n, the function K : R^N → R is a probability kernel (that is, ∫_{R^N} K(t)dt = 1) and the bandwidth parameter b_n is a positive constant going to zero as n goes to infinity. For time series (i.e. for d = 1), the problem with which we are concerned has been extensively studied; one can refer, e.g., to Lu and Cheng (1997), Masry and Fan (1997), Robinson (1983), Roussas (1988) and many references therein. In the spatial case (i.e. for d ⩾ 2), some contributions for strongly mixing random fields were made by Biau and Cadre (2004), Carbon et al. (2007), Dabo-Niang et al. (2011), Dabo-Niang and Yao (2007), El Machkouri (2007), El Machkouri and Stoica (2010), Hallin et al. (2004) and Lu and Chen (2002, 2004).

The main motivation of this work is to provide simple sufficient conditions for the NWE to be asymptotically normal in the context of mixing but also non-mixing random fields. More precisely, we consider strongly mixing random fields in the sense of Rosenblatt (1956a) and weakly dependent random fields in the sense of Wu (2005) (see also El Machkouri et al. (2013)). To the best of our knowledge, our work provides the first central limit theorem (Theorem 2) for the NWE under minimal conditions on the bandwidth parameter (i.e. b_n → 0 and |Λ_n|b_n^N → ∞ as n → ∞) and irregularly spaced dependent spatial data. In particular, our result improves in several directions a previous central limit theorem for the NWE for spatial data established by Biau and Cadre (2004) (see the comments after Corollary 1).

The paper is organized as follows. Our main results are stated and discussed in Section 2, whereas the proofs of the main results and their preliminary lemmas are deferred to Sections 4 and 5. Finally, Section 3 is devoted to a numerical illustration of the central limit theorem obtained in Section 2.

2. Main results

Given two σ-algebras U and V, the α-mixing coefficient introduced by Rosenblatt (1956a) is

α(U, V) = sup{ |P(A ∩ B) − P(A)P(B)|, A ∈ U, B ∈ V }.

Let p be fixed in [1, +∞]. The strong mixing coefficients (α_{1,p}(n))_{n⩾0} associated to (X_i)_{i∈Z^d} are defined by

α_{1,p}(n) = sup{ α(σ(X_k), F_Γ), k ∈ Z^d, Γ ⊂ Z^d, |Γ| ⩽ p, ρ(Γ, {k}) ⩾ n },

where F_Γ = σ(X_i; i ∈ Γ), |Γ| is the number of elements in Γ and the distance ρ is defined for any subsets Γ_1 and Γ_2 of Z^d by ρ(Γ_1, Γ_2) = min{ |i − j|, i ∈ Γ_1, j ∈ Γ_2 } with |i − j| = max_{1⩽s⩽d} |i_s − j_s| for any i = (i_1, ..., i_d) and j = (j_1, ..., j_d) in Z^d. We say that the random field (X_i)_{i∈Z^d} is strongly mixing if lim_{n→∞} α_{1,p}(n) = 0.

Let m be a positive integer. We are also going to establish our results for Bernoulli fields of the form

X_i = G(ε_{i−s}; s ∈ Z^d),   i ∈ Z^d,   (1)

where G : (R^m)^{Z^d} → R^N is some function and (ε_i)_{i∈Z^d} are iid R^m-valued random variables. Let (ε'_j)_{j∈Z^d} be an iid copy of (ε_j)_{j∈Z^d} and let X*_i be the coupled version of X_i defined by

X*_i = G(ε*_{i−s}; s ∈ Z^d),

where ε*_j = ε_j if j ≠ 0 and ε*_0 = ε'_0. Note that X*_i is obtained from X_i by replacing ε_0 by its copy ε'_0. For any positive integer ℓ and any R^ℓ-valued random variable Z ∈ L^p(Ω, F, P) with p > 0, we denote ∥Z∥_p := (E[∥Z∥^p])^{1/p}, where ∥·∥ is the Euclidean norm on R^ℓ. Following Wu (2005) and El Machkouri et al. (2013), we define the physical dependence measure δ_{i,p} := ∥X_i − X*_i∥_p as soon as X_i is p-integrable for p ⩾ 2. We say that X is p-stable if Σ_{i∈Z^d} δ_{i,p} < ∞. The physical dependence measure should be seen as a measure of the dependence of the function G (defined in (1)) on the coordinate zero. In some sense, it quantifies the degree of dependence of outputs on inputs in physical systems and provides a natural framework for a limit theory for stationary random fields (see El Machkouri et al. (2013)). In particular, it yields mild and easily verifiable conditions (see condition (A3)(ii) below) because it is directly related to the data-generating mechanism. In mathematical physics, various versions of similar ideas (local perturbation of a configuration) appear; one can refer for example to Liggett (1985) or Stroock and Zegarlinski (1992). As an illustration, the reader should keep in mind the following two examples:




• Linear random fields: Let (ε_i)_{i∈Z^d} be iid R^m-valued random variables such that ε_i belongs to L^p(Ω, F, P), p ⩾ 2. The linear random field X defined for all i in Z^d by

X_i = Σ_{s∈Z^d} A_s ε_{i−s},

where A_s = (a_{s,k_1,k_2})_{1⩽k_1⩽N, 1⩽k_2⩽m} is an N × m matrix such that Σ_{s∈Z^d} Σ_{k_1=1}^{N} Σ_{k_2=1}^{m} a²_{s,k_1,k_2} < ∞, is of the form (1) with a linear functional G. For all i in Z^d,

δ_{i,p} ⩽ ∥ε_0 − ε'_0∥_p × √( Σ_{k_1=1}^{N} Σ_{k_2=1}^{m} a²_{i,k_1,k_2} ).

So, X is p-stable as soon as Σ_{i∈Z^d} √( Σ_{k_1=1}^{N} Σ_{k_2=1}^{m} a²_{i,k_1,k_2} ) < ∞. Clearly, if H is a Lipschitz continuous function then, under the above condition, the subordinated process Y_i = H(X_i) is also p-stable.

• Volterra field: Another class of nonlinear random fields is the Volterra process, which plays an important role in nonlinear system theory. Let i ∈ Z^d and

X_i = Σ_{s_1,s_2∈Z^d} a_{s_1,s_2} ε_{i−s_1} ε_{i−s_2},

where the a_{s_1,s_2} are real coefficients with a_{s_1,s_2} = 0 if s_1 = s_2 and (ε_i)_{i∈Z^d} are iid real random variables with ε_i in L^p(Ω, F, P), p ⩾ 2. By the Burkholder inequality, there exists a constant C_p > 0 such that

δ_{i,p} ⩽ C_p ∥ε_0 − ε'_0∥_p ∥ε_0∥_p × √( Σ_{s∈Z^d} (a_{s,i} + a_{i,s})² ).

So, X is p-stable as soon as Σ_{i∈Z^d} √( Σ_{s∈Z^d} (a_{s,i} + a_{i,s})² ) < ∞.
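As a quick numerical sanity check on the linear-field bound above, the following sketch estimates δ_{i,p} by Monte Carlo for a truncated one-dimensional linear field (d = N = m = 1), using exactly the coupling construction of the previous paragraph (replace ε_0 by an independent copy). The truncation radius, the coefficients a_s and the sample size are illustrative assumptions, not taken from the paper; for a linear field the exact value is δ_{i,2} = |a_i| ∥ε_0 − ε'_0∥_2.

```python
import numpy as np

rng = np.random.default_rng(1)
S = 5                                   # truncation radius (illustrative)
a = 0.5 ** np.arange(2 * S + 1)         # coefficients a_s, s = -S..S (illustrative)

def X_at(i, eps):
    # truncated linear field X_i = sum_{|s| <= S} a_s eps_{i-s}
    return sum(a[s + S] * eps[i - s] for s in range(-S, S + 1))

# Monte Carlo estimate of delta_{i,p} = ||X_i - X_i*||_p with p = 2, i = 0,
# where X_i* is the coupled version obtained by replacing eps_0 by a copy
p, i, M = 2, 0, 20000
diffs = np.empty(M)
for k in range(M):
    eps = {j: rng.normal() for j in range(i - S, i + S + 1)}
    x = X_at(i, eps)
    eps[0] = rng.normal()               # eps_0 replaced by an independent copy
    diffs[k] = x - X_at(i, eps)
delta_hat = np.mean(np.abs(diffs) ** p) ** (1 / p)

# for a linear field X_i - X_i* = a_i (eps_0 - eps_0'), hence
# delta_{i,2} = |a_i| * ||eps_0 - eps_0'||_2 = |a_i| * sqrt(2) for N(0,1) noise
print(delta_hat, a[i + S] * np.sqrt(2))
```

The same coupling recipe applies verbatim to the Volterra field; only `X_at` changes.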

Let (b_n)_{n⩾1} be a sequence of positive real numbers going to zero as n goes to infinity. Denote K_n(x, v) = K((x − v)/b_n) for any (x, v) ∈ R^N × R^N and any integer n ⩾ 1. If x ∈ R^N and f_n(x) ≠ 0, then r_n(x) = ϕ_n(x)/f_n(x), where

ϕ_n(x) = Σ_{i∈Λ_n} Y_i K_n(x, X_i) / (|Λ_n| b_n^N)   and   f_n(x) = Σ_{i∈Λ_n} K_n(x, X_i) / (|Λ_n| b_n^N).
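As an illustration of these formulas, here is a minimal NumPy sketch of the estimator r_n(x) at a point x for scalar covariates (N = 1), including the fallback branch of the definition when all kernel weights vanish. The Gaussian kernel, the sample size and the bandwidth are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

def kernel(t):
    # Gaussian probability kernel K (integrates to 1)
    return np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)

def nw_estimate(x, X, Y, bn):
    """Nadaraya-Watson estimate r_n(x) = phi_n(x)/f_n(x) for N = 1.

    X, Y hold the observations (X_i, Y_i) over Lambda_n as flat arrays,
    so |Lambda_n| = X.size; bn is the bandwidth b_n.
    """
    w = kernel((x - X) / bn)                 # K_n(x, X_i) = K((x - X_i)/b_n)
    if w.sum() != 0.0:
        return float(np.dot(w, Y) / w.sum())
    return float(Y.mean())                   # second branch of the definition

# toy check with r(x) = sin(x): r_n(0) should be close to sin(0) = 0
rng = np.random.default_rng(0)
X = rng.normal(size=4000)
Y = np.sin(X) + 0.1 * rng.normal(size=4000)
print(nw_estimate(0.0, X, Y, bn=0.3), nw_estimate(1.0, X, Y, bn=0.3))
```

Note that the common factor 1/(|Λ_n| b_n^N) of ϕ_n and f_n cancels in the ratio, so it is omitted.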

Recall that f_n is the classical Parzen–Rosenblatt estimator of the marginal density f of X_0 (see El Machkouri (2011, 2014), Parzen (1962), Rosenblatt (1956b)). Similarly, if ϕ is the function defined for any x ∈ R^N by ϕ(x) = r(x)f(x), then ϕ_n is an estimator of ϕ. In the sequel, we consider the following assumptions:

(A1) Assume 1 > b_n → 0 such that |Λ_n|b_n^N → ∞, and that K is symmetric, Lipschitz and satisfies |K|_∞ := sup_{t∈R^N} |K(t)| < ∞, lim_{∥t∥→∞} ∥t∥^N |K(t)| = 0, ∫_{R^N} |K(t)|dt < ∞ and ∫_{R^N} ∥t∥² |K(t)|dt < ∞, where ∥·∥ is the Euclidean norm on R^N.

(A2) There exists κ > 0 such that |f_{0,i}(x, y) − f(x)f(y)| ⩽ κ for any (x, y) in R^N × R^N and any i in Z^d\{0}, where f_{0,i} is the joint density of (X_0, X_i).

(A3) There exists θ > 0 such that E[|Y_0|^{2+θ}] < ∞ and one of the following conditions holds:

(i) (X_i)_{i∈Z^d} is strongly mixing and Σ_{n=1}^{∞} n^{((2d−1)θ+6d−2)/(2+θ)} α_{1,∞}^{θ/(2+θ)}(n) < ∞;

(ii) (X_i)_{i∈Z^d} is of the form (1) and Σ_{i∈Z^d} |i|^{d((3N+2)θ²+(10N+8)θ+8N)/(2θ(θ+2)N)} δ_{i,2+θ}^{θ/(2+θ)} < ∞.

(A4) There exists θ > 0 such that E[|Y_0|^{2+θ}] < ∞ and the function x ↦ E[Ψ_p(|Y_0|) | X_0 = x] is continuous for p ∈ {1, 2, 2+θ}, where Ψ_p(t) = t^p for any real t. Moreover, the functions f and ϕ are twice differentiable with bounded second partial derivatives.

Assumptions (A1), (A2) and (A4) are classical conditions in nonparametric statistics (see Carbon et al. (2007), Lu and Chen (2002)). Moreover, one can notice that if θ = ∞ then (A3)(i) and (A3)(ii) reduce to the conditions obtained in El Machkouri (2011) and El Machkouri (2014) respectively, where the asymptotic normality of the Parzen–Rosenblatt estimator is established. First, we show that ϕ_n and f_n are asymptotically unbiased estimators of ϕ and f respectively.


Theorem 1. Assume that f and ϕ are twice differentiable with bounded second partial derivatives and that ∫_{R^N} ∥t∥² |K(t)|dt < ∞. Then

sup_{x∈R^N} |E[f_n(x)] − f(x)| = O(b_n²)   and   sup_{x∈R^N} |E[ϕ_n(x)] − ϕ(x)| = O(b_n²).

Consequently, if |Λ_n|b_n^{N+4} → 0 as n → ∞, then

lim_{n→∞} √(|Λ_n|b_n^N) sup_{x∈R^N} |E[f_n(x)] − f(x)| = lim_{n→∞} √(|Λ_n|b_n^N) sup_{x∈R^N} |E[ϕ_n(x)] − ϕ(x)| = 0.

Our main result is the following central limit theorem for the NWE.

Theorem 2. If (A1), (A2), (A3) and (A4) hold, then for any x ∈ R^N such that f(x) > 0,

√(|Λ_n|b_n^N) ( r_n(x) − E[ϕ_n(x)]/E[f_n(x)] ) → N(0, σ²(x)) in law as n → ∞,

where σ²(x) = (V(x)/f(x)) ∫_{R^N} K²(t)dt and V(x) = E[Y_0² | X_0 = x] − r²(x).

Using Theorem 1, the condition |Λ_n|b_n^{N+4} → 0 can be imposed for the control of the bias of the estimator and leads immediately to the following corollary (its proof is left to the reader).

Corollary 1. If (A1), (A2), (A3) and (A4) hold and |Λ_n|b_n^{N+4} → 0, then for any x ∈ R^N such that f(x) > 0,

√(|Λ_n|b_n^N) ( r_n(x) − r(x) ) → N(0, σ²(x)) in law as n → ∞,

where σ²(x) is defined in Theorem 2.

The asymptotic normality of r_n given by Theorem 2 holds under mild conditions on the regions Λ_n and the bandwidth b_n, namely b_n → 0 and |Λ_n|b_n^N → ∞. These conditions on the bandwidth parameter are sometimes called minimal conditions, since they are already required for the asymptotic normality of the Parzen–Rosenblatt estimator f_n when the observations are assumed to be independent (see Parzen (1962)). To the best of our knowledge, Theorem 2 is the first central limit theorem for the NWE under minimal conditions on the bandwidth and irregularly spaced dependent spatial data. In particular, we improve in several directions Theorem 2.2 in Biau and Cadre (2004) for strongly mixing random fields, where the authors considered a set of conditions on the bandwidth parameter and the mixing coefficients interlaced in a complicated way. More precisely, using our notations, Theorem 2.2 in Biau and Cadre (2004) gives the asymptotic normality of the NWE as soon as E[exp(|Y_0|^τ)] < ∞ for some positive real τ, the regions Λ_n are rectangular subsets of Z^d such that |Λ_n|b_n^{N+2} → 0 and |Λ_n|b_n^{N(1+2δd)} log(|Λ_n|)^{−8d/τ} → ∞ for some 0 < δ < 1/2, and there exists q_n → ∞ such that q_n^{2d} = o( b_n^{N(1+2δd)} log(|Λ_n|)^{−8d/τ} ), |Λ_n| Σ_{n⩾1} n^{d−1} α_{1,∞}^δ(n q_n) → 0 and b_n^{−Nδ} (log(|Λ_n|))^{2/τ} Σ_{n⩾q_n} n^{d−1} α_{1,∞}^δ(n) → 0. In particular, it is assumed that Σ_{n=1}^{∞} n^{d−1} α_{1,∞}^δ(n) < ∞. In order to compare with our results, one can notice that if δ = θ/(2+θ) ∈ ]0, 1/2[ and E[|Y_0|^{2+θ}] < ∞ with 0 < θ < 2, then (A3)(i) reduces to Σ_{n⩾1} n^{d(3−δ)−1} α_{1,∞}^δ(n) < ∞. However, our main result holds even if Y_0 does not have finite exponential moments, and also for general regions Λ_n (irregularly spaced spatial data) and under only minimal conditions on the bandwidth (b_n → 0 and |Λ_n|b_n^N → ∞).

3. Numerical illustration

In order to illustrate the asymptotic normality of the NWE provided by Corollary 1, we are going to consider two regression models where the predictors are given by an autoregressive random field (X^{(AR)}_{i,j})_{(i,j)∈Z²} and a Volterra random field (X^{(Vol)}_{i,j})_{(i,j)∈Z²} respectively (see Model 1 and Model 2 below). In Model 1, the autoregressive random field (X^{(AR)}_{i,j})_{(i,j)∈Z²} is defined by

X^{(AR)}_{i,j} = 0.7 X^{(AR)}_{i−1,j} + 0.15 X^{(AR)}_{i,j−1} + ε_{i,j},   (2)

where (ε_{i,j})_{(i,j)∈Z²} are iid real random variables (N = 1) with standard normal law. From Kulkarni (1992), we know that the stationary solution of (2) is the linear random field given by

X^{(AR)}_{i,j} = Σ_{s_1⩾0} Σ_{s_2⩾0} binom(s_1+s_2, s_1) (0.7)^{s_1} (0.15)^{s_2} ε_{i−s_1, j−s_2}.   (3)


So, we fix a positive integer n_1 and we simulate the ε_{i,j}'s over the grid [0, 2n_1]² ∩ Z² in order to get the data X^{(AR)}_{i,j} for (i,j) in [n_1+1, 2n_1]² ∩ Z² following (2) and (3). In Model 2, in order to consider nonlinearity, we define

X^{(Vol)}_{i,j} = Σ_{s_1⩾0} Σ_{s_2⩾0} Σ_{t_1>s_1} Σ_{t_2>s_2} binom(s_1+s_2, s_1) binom(t_1−s_1+t_2−s_2, t_1−s_1) (0.7)^{s_1+t_1} (0.15)^{s_2+t_2} ε_{i−s_1, j−s_2} ε_{i−t_1, j−t_2}.

Since

X^{(Vol)}_{i,j} = Σ_{s_1⩾0} Σ_{s_2⩾0} binom(s_1+s_2, s_1) (0.7)^{2s_1} (0.15)^{2s_2} ε_{i−s_1, j−s_2} β_{i−s_1, j−s_2},   (4)

where

β_{i,j} = Σ_{t_1>0} Σ_{t_2>0} binom(t_1+t_2, t_1) (0.7)^{t_1} (0.15)^{t_2} ε_{i−t_1, j−t_2},   (5)

we fix a positive integer n_2 and we simulate the ε_{i,j}'s over the grid [0, 4n_2]² ∩ Z², and we get the data β_{i,j} for (i,j) in [2n_2+1, 4n_2]² ∩ Z² using (5) and following the previous implementation of (X^{(AR)}_{i,j})_{(i,j)∈Z²}. Starting from the data ε_{i,j}β_{i,j} for (i,j) in [2n_2+1, 4n_2]², we simulate in the same way the data X^{(Vol)}_{i,j} for (i,j) in [3n_2+1, 4n_2]² using (4). From the two data sets

Y^{(AR)}_{i,j} = sin(X^{(AR)}_{i,j}) + ε_{i,j},   (i,j) ∈ [n_1+1, 2n_1]²   (Model 1)

and

Y^{(Vol)}_{i,j} = sin(X^{(Vol)}_{i,j}) + ε_{i,j},   (i,j) ∈ [3n_2+1, 4n_2]²   (Model 2),

we consider 500 replications of √(2√π f̂^{(AR)}_{n_1}(0) n_1² b_{n_1}) r^{(AR)}_{n_1}(0) and √(2√π f̂^{(Vol)}_{n_2}(0) n_2² b_{n_2}) r^{(Vol)}_{n_2}(0), where

f̂^{(AR)}_{n_1}(0) = (1/(n_1² b_{n_1})) Σ_{(i,j)∈[n_1+1, 2n_1]²} K( X^{(AR)}_{i,j} / b_{n_1} ),
r^{(AR)}_{n_1}(0) = Σ_{(i,j)∈[n_1+1, 2n_1]²} Y^{(AR)}_{i,j} K( X^{(AR)}_{i,j} / b_{n_1} ) / ( n_1² b_{n_1} f̂^{(AR)}_{n_1}(0) ),

f̂^{(Vol)}_{n_2}(0) = (1/(n_2² b_{n_2})) Σ_{(i,j)∈[3n_2+1, 4n_2]²} K( X^{(Vol)}_{i,j} / b_{n_2} ),
r^{(Vol)}_{n_2}(0) = Σ_{(i,j)∈[3n_2+1, 4n_2]²} Y^{(Vol)}_{i,j} K( X^{(Vol)}_{i,j} / b_{n_2} ) / ( n_2² b_{n_2} f̂^{(Vol)}_{n_2}(0) ),

the kernel K is Gaussian and the bandwidth parameters b_{n_1} and b_{n_2} are selected by cross-validation. So, in Fig. 1, we obtain the histograms for √(2√π f̂^{(AR)}_{n_1}(0) n_1² b_{n_1}) r^{(AR)}_{n_1}(0) and √(2√π f̂^{(Vol)}_{n_2}(0) n_2² b_{n_2}) r^{(Vol)}_{n_2}(0)
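A condensed version of the Model 1 experiment can be sketched as follows. Several simplifications are assumptions made only for this illustration and differ from the paper's setup: the AR recursion is started from zero boundary values (so the field is only approximately stationary on the retained sub-grid), the bandwidth is fixed instead of chosen by cross-validation, the regression noise is drawn independently of the ε's used to build the field, and fewer replications are used.

```python
import numpy as np

def model1_statistic(n1, bn, rng):
    """One replication of sqrt(2*sqrt(pi)*f_n(0)*n1^2*b_n) * r_n(0), Model 1."""
    m = 2 * n1 + 1
    eps = rng.normal(size=(m, m))
    X = np.zeros((m, m))
    for i in range(m):                       # AR recursion (2) over [0, 2n1]^2
        for j in range(m):
            left = 0.7 * X[i - 1, j] if i > 0 else 0.0
            down = 0.15 * X[i, j - 1] if j > 0 else 0.0
            X[i, j] = left + down + eps[i, j]
    Xs = X[n1 + 1:, n1 + 1:].ravel()         # keep data on [n1+1, 2n1]^2
    Ys = np.sin(Xs) + rng.normal(size=Xs.size)   # independent regression noise
    K = lambda t: np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)   # Gaussian kernel
    w = K(Xs / bn)                           # K((0 - X_ij)/b_n), K is even
    f_hat = w.sum() / (Xs.size * bn)         # Parzen-Rosenblatt f_n(0)
    r_hat = np.dot(w, Ys) / w.sum()          # NWE r_n(0); note r(0) = sin(0) = 0
    # normalized statistic, approximately N(0, 1) by Corollary 1
    return np.sqrt(2 * np.sqrt(np.pi) * f_hat * Xs.size * bn) * r_hat

rng = np.random.default_rng(2)
stats = [model1_statistic(n1=10, bn=0.4, rng=rng) for _ in range(200)]
print(np.mean(stats), np.std(stats))        # should be roughly 0 and 1
```

The factor 2√π comes from the Gaussian kernel, for which ∫K²(t)dt = 1/(2√π), so the statistic is √(n_1² b_{n_1}) r_{n_1}(0) divided by the plug-in estimate of σ(0).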

with n_1, n_2 ∈ {10, 30}, along with the standard normal law.

4. Preliminary lemmas

In the sequel, for any sequences (a_n)_{n⩾1} and (b_n)_{n⩾1} of positive real numbers, we write a_n ⊴ b_n if and only if there exists κ > 0 (not depending on n) such that a_n ⩽ κ b_n. For any real x, we also define ⌈x⌉ = ⌊x⌋ + 1, where ⌊x⌋ is the largest integer less than or equal to x. We shall need the following technical lemmas.

Lemma 1. Assume (A1), (A2) and (A4) and let x ∈ R^N. If Φ_1 : R → R and Φ_2 : R → R are two functions such that x ↦ E[Φ_1(Y_0) | X_0 = x] is continuous and the conditions sup_{t∈R^N} |Φ_2(K(t))| < ∞, lim_{∥t∥→∞} ∥t∥^N |Φ_2(K(t))| = 0 and ∫_{R^N} |Φ_2(K(t))| dt < ∞ are satisfied, then

lim_{n→∞} E[Φ_1(Y_0)Φ_2(K_n(x, X_0))] / b_n^N = E[Φ_1(Y_0) | X_0 = x] f(x) ∫_{R^N} Φ_2(K(v)) dv.

Moreover, we have also sup_{j∈Z^d, j≠0} E[|K_n(x, X_0)K_n(x, X_j)|] ⊴ b_n^{2N}.

Proof. Let x ∈ R^N and let n be a positive integer. It is obvious that

E[Φ_1(Y_0)Φ_2(K_n(x, X_0))] = b_n^N ∫_{R^N} E[Φ_1(Y_0) | X_0 = x − v b_n] Φ_2(K(v)) f(x − v b_n) dv.   (6)

Fig. 1. Histograms for √(2√π f̂^{(AR)}_{n_1}(0) n_1² b_{n_1}) r^{(AR)}_{n_1}(0) (Model 1) and √(2√π f̂^{(Vol)}_{n_2}(0) n_2² b_{n_2}) r^{(Vol)}_{n_2}(0) (Model 2), along with the standard normal density.

By Theorem 1A in Parzen (1962), we have

lim_{n→∞} ∫_{R^N} E[Φ_1(Y_0) | X_0 = x − v b_n] Φ_2(K(v)) f(x − v b_n) dv = E[Φ_1(Y_0) | X_0 = x] f(x) ∫_{R^N} Φ_2(K(v)) dv.

Consequently,

lim_{n→∞} E[Φ_1(Y_0)Φ_2(K_n(x, X_0))] / b_n^N = E[Φ_1(Y_0) | X_0 = x] f(x) ∫_{R^N} Φ_2(K(v)) dv.

For the other part, keeping in mind assumptions (A1) and (A2) and using (6), we derive

sup_{j∈Z^d, j≠0} E[|K_n(x, X_0)K_n(x, X_j)|] ⩽ κ ( ∫_{R^N} |K_n(x, u)| du )² + ( E[|K_n(x, X_0)|] )² ⊴ b_n^{2N}.

The proof of Lemma 1 is complete. □


Lemma 2. If (A3) holds, then there exists a sequence (m_n)_{n⩾1} of positive integers satisfying

lim_{n→∞} m_n = +∞,   lim_{n→∞} m_n^d b_n^{θN/(4+θ)} = 0

and

lim_{n→∞} b_n^{−θN/(2+θ)} Σ_{|i|>m_n} α_{1,∞}^{θ/(2+θ)}(|i|) = 0   if (A3)(i) holds,

lim_{n→∞} b_n^{−(θ(N+2)+2N)/(2(2+θ))} Σ_{|i|>m_n} |i|^d δ_{i,2+θ}^{θ/(2+θ)} = 0   if (A3)(ii) holds.

Notice that when |Λ_n|b_n^N → ∞, we have m_n^d = o(|Λ_n|).

Proof. First, we assume (A3)(i), so that E[|Y_0|^{2+θ}] < ∞ and Σ_{i∈Z^d} |i|^{d(4+θ)/(2+θ)} α_{1,∞}^{θ/(2+θ)}(|i|) < ∞ for some θ > 0. Let γ > (4+θ)/(2+θ) be fixed and let (m_n)_{n⩾1} be defined by

m_n = max{ v_n, ⌈ ( b_n^{−θN/(d(4+θ))} Σ_{|i|>v_n} |i|^{d(4+θ)/(2+θ)} α_{1,∞}^{θ/(2+θ)}(|i|) )^{1/(dγ)} ⌉ }   with   v_n = ⌊ b_n^{−θN/(2d(4+θ))} ⌋.

Since v_n → ∞, we have m_n → ∞ as n goes to infinity. Moreover,

m_n^d b_n^{θN/(4+θ)} ⊴ max{ b_n^{θN/(2(4+θ))}, ( Σ_{|i|>v_n} |i|^{d(4+θ)/(2+θ)} α_{1,∞}^{θ/(2+θ)}(|i|) )^{1/γ} + b_n^{θN/(4+θ)} } → 0 as n → ∞.

Since v_n ⩽ m_n, we have

m_n^d b_n^{θN/(4+θ)} ⩾ ( Σ_{|i|>m_n} |i|^{d(4+θ)/(2+θ)} α_{1,∞}^{θ/(2+θ)}(|i|) )^{1/γ}.

Consequently,

b_n^{−θN/(2+θ)} Σ_{|i|>m_n} α_{1,∞}^{θ/(2+θ)}(|i|) ⩽ ( m_n^d b_n^{θN/(4+θ)} )^{−(4+θ)/(2+θ)} Σ_{|i|>m_n} |i|^{d(4+θ)/(2+θ)} α_{1,∞}^{θ/(2+θ)}(|i|) ⩽ ( Σ_{|i|>m_n} |i|^{d(4+θ)/(2+θ)} α_{1,∞}^{θ/(2+θ)}(|i|) )^{(γ(2+θ)−4−θ)/(γ(2+θ))}.

Since γ > (4+θ)/(2+θ), we derive lim_{n→∞} b_n^{−θN/(2+θ)} Σ_{|i|>m_n} α_{1,∞}^{θ/(2+θ)}(|i|) = 0.

Similarly, assume that (A3)(ii) holds and define

m̃_n = max{ v_n, ⌈ ( b_n^{−θN/(d(4+θ))} Σ_{|i|>v_n} |i|^{d((3N+2)θ²+(10N+8)θ+8N)/(2θ(2+θ)N)} δ_{i,2+θ}^{θ/(2+θ)} )^{1/(dγ)} ⌉ }   with   γ > ((N+2)θ+2N)(θ+4) / (2θ(θ+2)N).

Then, arguing as before, we derive

lim_{n→∞} m̃_n^d b_n^{θN/(4+θ)} = 0   and   lim_{n→∞} b_n^{−(θ(N+2)+2N)/(2(2+θ))} Σ_{|i|>m̃_n} |i|^d δ_{i,2+θ}^{θ/(2+θ)} = 0.

The details of the proof are left to the reader. The proof of Lemma 2 is complete. □

For any i in Z^d, any positive integer n and any x ∈ R^N, we denote

∆_i = ( K_n(x, X_i) − E[K_n(x, X_0)] ) / √(b_n^N)   and   Θ_i = ( Y_i K_n(x, X_i) − E[Y_0 K_n(x, X_0)] ) / √(b_n^N).   (7)

Lemma 3. Assume (A1), (A2) and (A4). If there exists θ > 0 such that E[|Y_0|^{2+θ}] < ∞, then max{ ∥∆_0∥²_{2+θ}, ∥Θ_0∥²_{2+θ} } ⊴ b_n^{−θN/(2+θ)}.

Proof. Let θ > 0 be such that E[|Y_0|^{2+θ}] < ∞. We have

∥∆_0∥²_{2+θ} ⩽ 2∥K_n(x, X_0)∥²_{2+θ} / b_n^N + 2(E[K_n(x, X_0)])² / b_n^N   and   ∥Θ_0∥²_{2+θ} ⩽ 2∥Y_0 K_n(x, X_0)∥²_{2+θ} / b_n^N + 2(E[Y_0 K_n(x, X_0)])² / b_n^N.

Keeping in mind that |K|_∞ := sup_{t∈R^N} |K(t)| < ∞ and using Lemma 1, we derive

E[|K_n(x, X_0)|^{2+θ}] ⊴ b_n^N,   |E[K_n(x, X_0)]| ⊴ b_n^N,   E[|Y_0 K_n(x, X_0)|^{2+θ}] ⊴ b_n^N   and   |E[Y_0 K_n(x, X_0)]| ⊴ b_n^N.

Consequently, we obtain max{ ∥∆_0∥²_{2+θ}, ∥Θ_0∥²_{2+θ} } ⊴ b_n^{−θN/(2+θ)}. The proof of Lemma 3 is complete. □



Lemma 4. Assume (A1), (A2) and (A4). Then sup_{j∈Z^d, j≠0} E[|∆_0 ∆_j|] ⊴ b_n^N. Moreover, if E[|Y_0|^{2+θ}] < ∞ for some θ > 0, then

sup_{j∈Z^d, j≠0} E[|Θ_0 Θ_j|] ⊴ b_n^{θN/(4+θ)}   and   sup_{j∈Z^d, j≠0} E[|Θ_0 ∆_j|] ⊴ b_n^{θN/(2+θ)}.

Proof. Let j ≠ 0 in Z^d. Then

E[|∆_0 ∆_j|] ⩽ ( E[|K_n(x, X_0)K_n(x, X_j)|] + 3(E[|K_n(x, X_0)|])² ) / b_n^N.

Applying Lemma 1, we get sup_{j∈Z^d, j≠0} E[|∆_0 ∆_j|] ⊴ b_n^N.

Let L ⩾ 1. We have

E[|Θ_0 Θ_j|] ⩽ ( E[|Y_0 Y_j K_n(x, X_0)K_n(x, X_j)|] + 3(E[|Y_0 K_n(x, X_0)|])² ) / b_n^N   (8)

and

E[|Y_0 Y_j K_n(x, X_0)K_n(x, X_j)|] = E[|Y_0 Y_j| 1_{|Y_0|⩽L} 1_{|Y_j|⩽L} |K_n(x, X_0)K_n(x, X_j)|] + E[|Y_0 Y_j| 1_{|Y_0|⩽L} 1_{|Y_j|>L} |K_n(x, X_0)K_n(x, X_j)|] + E[|Y_0 Y_j| 1_{|Y_0|>L} 1_{|Y_j|⩽L} |K_n(x, X_0)K_n(x, X_j)|] + E[|Y_0 Y_j| 1_{|Y_0|>L} 1_{|Y_j|>L} |K_n(x, X_0)K_n(x, X_j)|].

By the Cauchy–Schwarz inequality, we derive

E[|Y_0 Y_j K_n(x, X_0)K_n(x, X_j)|] ⩽ L² E[|K_n(x, X_0)K_n(x, X_j)|] + √(E[Y_0² K_n²(x, X_0)]) √(E[Y_0² 1_{|Y_0|>L} K_n²(x, X_0)]) + √(E[Y_0² 1_{|Y_0|>L} K_n²(x, X_0)]) √(E[Y_0² K_n²(x, X_0)]) + E[Y_0² 1_{|Y_0|>L} K_n²(x, X_0)].

Let θ > 0 be such that E[|Y_0|^{2+θ}] < ∞. Applying Lemma 1, we get

E[|Y_0 Y_j K_n(x, X_0)K_n(x, X_j)|] / b_n^N ⊴ L² b_n^N + L^{−θ/2} + L^{−θ} ⊴ L² b_n^N + L^{−θ/2}.   (9)

Making the choice L = b_n^{−2N/(4+θ)} and combining (8), (9) and Lemma 1, we obtain

sup_{j∈Z^d, j≠0} E[|Θ_0 Θ_j|] ⊴ b_n^{θN/(4+θ)}.

Now,

E[|Θ_0 ∆_j|] ⩽ ( E[|Y_0 K_n(x, X_0)K_n(x, X_j)|] + 3 E[|Y_0 K_n(x, X_0)|] E[|K_n(x, X_0)|] ) / b_n^N.

So, if L' ⩾ 1 is fixed, then

E[|Θ_0 ∆_j|] ⩽ L' E[|K_n(x, X_0)K_n(x, X_j)|] / b_n^N + E[|Y_0| 1_{|Y_0|>L'} |K_n(x, X_0)K_n(x, X_j)|] / b_n^N + 3 E[|Y_0 K_n(x, X_0)|] E[|K_n(x, X_0)|] / b_n^N.

By the Cauchy–Schwarz inequality, we get

E[|Y_0| 1_{|Y_0|>L'} |K_n(x, X_0)K_n(x, X_j)|] ⩽ L'^{−θ/2} √(E[|Y_0|^{2+θ} K_n²(x, X_0)]) √(E[K_n²(x, X_0)]).

Applying Lemma 1 and making the choice L' = b_n^{−2N/(2+θ)}, we obtain

sup_{j∈Z^d, j≠0} E[|Θ_0 ∆_j|] ⊴ L' b_n^N + L'^{−θ/2} + b_n^N ⊴ b_n^{θN/(2+θ)}.

The proof of Lemma 4 is complete. □

The following proposition is a crucial tool in the proof of the asymptotic normality for the NWE (Theorem 2) when the random field (X_i)_{i∈Z^d} is of the form (1).

Proposition 1. Let n and M be two positive integers and let x ∈ R^N. If Λ is a finite subset of Z^d and Φ : R → R is a measurable function such that ∥Φ(Y_0)∥_{2+θ} < ∞ for some θ ∈ ]0, +∞], then for any family (c_i)_{i∈Λ} of real numbers and any (p, q) ∈ [2, +∞[ × ]0, +∞] such that p + q ⩽ 2 + θ, we have

∥ Σ_{i∈Λ} c_i W_{i,n} ∥_p ⩽ 8 p M^d |K|_∞^{q/(p+q)} C(p, q) b_n^{−q/(p+q)} √( Σ_{i∈Λ} c_i² ) Σ_{|i|>M} δ_{i,p+q}^{p/(p+q)},

where W_{i,n} := Φ(Y_i)K_n(x, X_i) − E[Φ(Y_i)K_n(x, X_i) | H_{i,M}], H_{i,M} = σ(η_i, ε_{i−s}; |s| ⩽ M),

C(p, q) = 2^{(2p+q)/(p+q)} ( ∥Φ(Y_0)∥_{p+q} ∥K∥_Lip^{q/(p+q)} + |K|_∞^{q/(p+q)} ∥ sup_{x≠y} |Φ(R(x, η_0)) − Φ(R(y, η_0))| / ∥x − y∥ ∥_p )   (10)

and ∥K∥_Lip = sup_{(x,y)∈R^N×R^N, x≠y} |K(x) − K(y)| / ∥x − y∥.

Proof. Let M and n be two positive integers and let x in R^N and i in Z^d be fixed. Recall that Y_i = R(X_i, η_i). We follow the same lines as in the proof of Proposition 1 in El Machkouri et al. (2013). Let 2 ⩽ p < 2 + θ and denote by H_n the measurable function such that W_{i,n} = H_n(H_{i,∞}) with H_{i,∞} = σ(η_i, ε_{i−s}; s ∈ Z^d). Then we define the physical dependence measure coefficient δ^{(n)}_{i,p} associated to W_{i,n} by δ^{(n)}_{i,p} = ∥W_{i,n} − W*_{i,n}∥_p, where W*_{i,n} = H_n(H*_{i,∞}) and H*_{i,∞} = σ(η_i, ε*_{i−s}; s ∈ Z^d), keeping in mind that ε*_j = ε_j if j ≠ 0 and ε*_0 = ε'_0. In other words, we obtain W*_{i,n} from W_{i,n} by just replacing ε_0 by its copy ε'_0 (see Wu (2005)). Let τ be a bijection from Z to Z^d and let ℓ in Z be fixed. We define the projection operator P_ℓ by P_ℓ f = E[f | F_ℓ] − E[f | F_{ℓ−1}] for any integrable function f, where F_ℓ = σ(ε_{τ(s)}; s ⩽ ℓ). Consequently, by stationarity, we have

∥P_ℓ W_{i,n}∥_p = ∥ E[W_{0,n} | T^i F_ℓ] − E[W_{0,n} | T^i F_{ℓ−1}] ∥_p,

where T^i F_ℓ = σ(ε_{τ(s)−i}; s ⩽ ℓ). Keeping in mind that W_{0,n} = H_n(H_{0,∞}), we derive

∥P_ℓ W_{i,n}∥_p = ∥ E[H_n(H_{0,∞}) | T^i F_ℓ] − E[H_n(H^{(i,ℓ)}_{0,∞}) | T^i F_ℓ] ∥_p ⩽ ∥ H_n(H_{0,∞}) − H_n(H^{(i,ℓ)}_{0,∞}) ∥_p,

where H^{(i,ℓ)}_{0,∞} = σ(η_0, ε'_{τ(ℓ)−i}, ε_{−s}; s ∈ Z^d \{i − τ(ℓ)}). It means that H^{(i,ℓ)}_{0,∞} is obtained from H_{0,∞} by replacing ε_{τ(ℓ)−i} by its copy ε'_{τ(ℓ)−i}. Consequently, using again the stationarity of the random field and noting that

T^{τ(ℓ)−i} H_{0,∞} = σ(η_{i−τ(ℓ)}, ε_{i−τ(ℓ)−s}; s ∈ Z^d) = H_{i−τ(ℓ),∞},

T^{τ(ℓ)−i} H^{(i,ℓ)}_{0,∞} = σ(η_{i−τ(ℓ)}, ε'_0, ε_{i−τ(ℓ)−s}; s ∈ Z^d \{i − τ(ℓ)}) = H*_{i−τ(ℓ),∞},

we obtain

∥P_ℓ W_{i,n}∥_p ⩽ ∥ H_n(T^{τ(ℓ)−i} H_{0,∞}) − H_n(T^{τ(ℓ)−i} H^{(i,ℓ)}_{0,∞}) ∥_p = ∥W_{i−τ(ℓ),n} − W*_{i−τ(ℓ),n}∥_p = δ^{(n)}_{i−τ(ℓ),p}.   (11)

Moreover, since W_{i,n} = Σ_{ℓ∈Z} P_ℓ W_{i,n}, we have

∥ Σ_{j∈Λ} c_j W_{j,n} ∥_p = ∥ Σ_{ℓ∈Z} Σ_{j∈Λ} c_j P_ℓ W_{j,n} ∥_p.

Since ( Σ_{j∈Λ} c_j P_ℓ W_{j,n} )_{ℓ∈Z} is a martingale difference sequence with respect to the filtration (F_ℓ)_{ℓ∈Z}, the Burkholder inequality (see Dedecker (2001), Remark 6, page 85) implies

∥ Σ_{j∈Λ} c_j W_{j,n} ∥_p ≤ ( 2p Σ_{ℓ∈Z} ∥ Σ_{j∈Λ} c_j P_ℓ W_{j,n} ∥_p² )^{1/2} ≤ ( 2p Σ_{ℓ∈Z} ( Σ_{j∈Λ} |c_j| ∥P_ℓ W_{j,n}∥_p )² )^{1/2}.   (12)

Moreover, by the Cauchy–Schwarz inequality, we have

( Σ_{j∈Λ} |c_j| ∥P_ℓ W_{j,n}∥_p )² ≤ Σ_{i∈Λ} c_i² ∥P_ℓ W_{i,n}∥_p × Σ_{j∈Λ} ∥P_ℓ W_{j,n}∥_p.   (13)

Using (11), we have sup_{ℓ∈Z} Σ_{j∈Λ} ∥P_ℓ W_{j,n}∥_p ≤ Σ_{j∈Z^d} δ^{(n)}_{j,p}. So, combining (12) and (13), we obtain

∥ Σ_{j∈Λ} c_j W_{j,n} ∥_p ≤ ( 2p Σ_{j∈Z^d} δ^{(n)}_{j,p} Σ_{i∈Λ} c_i² Σ_{ℓ∈Z} ∥P_ℓ W_{i,n}∥_p )^{1/2}.

Using (11) and keeping in mind that τ is a bijection, we have sup_{i∈Z^d} Σ_{ℓ∈Z} ∥P_ℓ W_{i,n}∥_p ≤ Σ_{j∈Z^d} δ^{(n)}_{j,p}. Hence, we derive

∥ Σ_{j∈Λ} c_j W_{j,n} ∥_p ≤ √(2p) ( Σ_{j∈Λ} c_j² )^{1/2} Σ_{j∈Z^d} δ^{(n)}_{j,p}.   (14)

Now, since

E[Φ(Y_i)K_n(x, X_i) | H_{i,M}]* = E[Φ(R(X*_i, η_i))K_n(x, X*_i) | H*_{i,M}],

where H*_{i,M} = σ(η_i, ε*_{i−s}; |s| ⩽ M), we have

W*_{i,n} = Φ(R(X*_i, η_i))K_n(x, X*_i) − E[Φ(R(X*_i, η_i))K_n(x, X*_i) | H*_{i,M}].

Moreover,

E[Φ(Y_i)K_n(x, X_i) | H_{i,M}] = E[Φ(Y_i)K_n(x, X_i) | H_{i,M} ∨ H*_{i,M}]

and

E[Φ(R(X*_i, η_i))K_n(x, X*_i) | H*_{i,M}] = E[Φ(R(X*_i, η_i))K_n(x, X*_i) | H*_{i,M} ∨ H_{i,M}].

Consequently,

δ^{(n)}_{i,p} = ∥W_{i,n} − W*_{i,n}∥_p ⩽ 2 ∥Φ(R(X_i, η_i))K_n(x, X_i) − Φ(R(X*_i, η_i))K_n(x, X*_i)∥_p.   (15)

Let L > 0 be fixed. From (15), we derive

δ^{(n)}_{i,p} ⩽ 2 ∥Φ(R(X_i, η_i))(K_n(x, X_i) − K_n(x, X*_i)) − (Φ(R(X*_i, η_i)) − Φ(R(X_i, η_i)))K_n(x, X*_i)∥_p
⩽ ( 2L∥K∥_Lip / b_n + 4|K|_∞ L^{−q/p} ∥Φ(Y_0)∥_{p+q}^{(p+q)/p} + 2|K|_∞ ∥ sup_{x≠y} |Φ(R(x, η_0)) − Φ(R(y, η_0))| / ∥x − y∥ ∥_p ) δ_{i,p}.

Optimizing this last inequality in L, we get

δ^{(n)}_{i,p} ⩽ 2 |K|_∞^{q/(p+q)} C(p, q) b_n^{−q/(p+q)} δ_{i,p+q}^{p/(p+q)},   (16)

where C(p, q) is given by (10). Now, by stationarity, we have δ^{(n)}_{i,p} = ∥W_{i,n} − W*_{i,n}∥_p ⩽ 2∥W_{0,n}∥_p. Let ℓ ⩾ 0 be a fixed integer. We denote by Γ_ℓ the set of all j in Z^d such that |j| = ℓ and we define

a_ℓ := Σ_{j=0}^{ℓ} |Γ_j| = 1 + 2d Σ_{j=1}^{ℓ} (2j + 1)^{d−1}.

If u = (u_1, ..., u_d) and v = (v_1, ..., v_d) are distinct elements of Z^d, the notation u <_lex v means that u is smaller than v in the lexicographic order. Let τ_0 be the bijection from the positive integers to Z^d defined by

• τ_0(1) = 0,
• τ_0(s) ∈ Γ_ℓ if a_{ℓ−1} < s ⩽ a_ℓ and ℓ > 0,
• τ_0(s) <_lex τ_0(s + 1) if a_{ℓ−1} < s < s + 1 ⩽ a_ℓ and ℓ > 0.

Let G_M = σ(η_0, ε_{τ_0(s)}; 1 ⩽ s ⩽ M) and recall that H_{0,M} = σ(η_0, ε_{−s}; |s| ⩽ M). Since 1 ≤ s ≤ a_M if and only if |τ_0(s)| ≤ M, we have G_{a_M} = H_{0,M}. Consequently,

W_{0,n} = Σ_{ℓ>a_M} D_ℓ,   where   D_ℓ = E[Φ(R(X_0, η_0))K_n(x, X_0) | G_ℓ] − E[Φ(R(X_0, η_0))K_n(x, X_0) | G_{ℓ−1}].

Since (D_ℓ)_{ℓ⩾1} is a martingale difference sequence with respect to the filtration (G_ℓ)_{ℓ⩾1}, we apply Burkholder's inequality (Dedecker (2001), Remark 6, page 85) and we obtain

∥W_{0,n}∥_p ⩽ ( 2p Σ_{ℓ>a_M} ∥D_ℓ∥_p² )^{1/2}.   (17)

Let L > 0 be fixed. Denoting X'_{0,ℓ} = G(ε'_{τ_0(ℓ)}, ε_{−s}; s ∈ Z^d \{−τ_0(ℓ)}), we have

E[Φ(R(X_0, η_0))K_n(x, X_0) | G_{ℓ−1}] = E[Φ(R(X'_{0,ℓ}, η_0))K_n(x, X'_{0,ℓ}) | G_ℓ].

Then

∥D_ℓ∥_p ⩽ ∥Φ(R(X_0, η_0))K_n(x, X_0) − Φ(R(X'_{0,ℓ}, η_0))K_n(x, X'_{0,ℓ})∥_p
= ∥Φ(R(X_0, η_0))(K_n(x, X_0) − K_n(x, X'_{0,ℓ})) − (Φ(R(X'_{0,ℓ}, η_0)) − Φ(R(X_0, η_0)))K_n(x, X'_{0,ℓ})∥_p
⩽ ( L∥K∥_Lip / b_n + 2|K|_∞ L^{−q/p} ∥Φ(Y_0)∥_{p+q}^{(p+q)/p} + |K|_∞ ∥ sup_{x≠y} |Φ(R(x, η_0)) − Φ(R(y, η_0))| / ∥x − y∥ ∥_p ) × ∥X_0 − X'_{0,ℓ}∥_p.

Optimizing in L, we obtain

∥D_ℓ∥_p ⩽ |K|_∞^{q/(p+q)} C(p, q) b_n^{−q/(p+q)} ∥X_0 − X'_{0,ℓ}∥_{p+q}^{p/(p+q)}.

Moreover, by stationarity, we have

∥X_0 − X'_{0,ℓ}∥_{p+q} = ∥ G(ε_{−s}; s ∈ Z^d) − G(ε'_{τ_0(ℓ)}, ε_{−s}; s ∈ Z^d \{−τ_0(ℓ)}) ∥_{p+q}
= ∥ G(ε_{−τ_0(ℓ)−s}; s ∈ Z^d) − G(ε'_0, ε_{−τ_0(ℓ)−s}; s ∈ Z^d \{−τ_0(ℓ)}) ∥_{p+q}
= ∥X_{−τ_0(ℓ)} − X*_{−τ_0(ℓ)}∥_{p+q} = δ_{−τ_0(ℓ),p+q}.

So, we derive

∥D_ℓ∥_p ⩽ |K|_∞^{q/(p+q)} C(p, q) b_n^{−q/(p+q)} δ_{−τ_0(ℓ),p+q}^{p/(p+q)}.   (18)

Combining (17) and (18), we obtain

∥W_{0,n}∥_p ⩽ |K|_∞^{q/(p+q)} √(2p) C(p, q) b_n^{−q/(p+q)} Σ_{ℓ>a_M} δ_{−τ_0(ℓ),p+q}^{p/(p+q)} ⩽ |K|_∞^{q/(p+q)} √(2p) C(p, q) b_n^{−q/(p+q)} Σ_{|j|>M} δ_{j,p+q}^{p/(p+q)}   (19)

and

sup_{i∈Z^d} δ^{(n)}_{i,p} ⩽ 2∥W_{0,n}∥_p ⩽ 2√(2p) |K|_∞^{q/(p+q)} C(p, q) b_n^{−q/(p+q)} Σ_{|j|>M} δ_{j,p+q}^{p/(p+q)}.   (20)

So, from (16) and (20), we get

Σ_{i∈Z^d} δ^{(n)}_{i,p} ⩽ 2√(2p) |K|_∞^{q/(p+q)} C(p, q) b_n^{−q/(p+q)} (M + 1)^d Σ_{|j|>M} δ_{j,p+q}^{p/(p+q)}.   (21)

Finally, combining (14) and (21), we derive

∥ Σ_{i∈Λ} c_i W_{i,n} ∥_p ≤ 8 p M^d |K|_∞^{q/(p+q)} C(p, q) ( Σ_{i∈Λ} c_i² )^{1/2} b_n^{−q/(p+q)} Σ_{|j|>M} δ_{j,p+q}^{p/(p+q)}.

The proof of Proposition 1 is complete. □

Now, we denote by $\mathbb{V}(Z)$ the variance of any square-integrable $\mathbb{R}$-valued random variable $Z$.

Lemma 5. Assume (A1)–(A4). For any $x\in\mathbb{R}^N$ such that $f(x)>0$, we have
$$\lim_{n\to\infty}|\Lambda_n|b_n^N\,\mathbb{V}[f_n(x)]=f(x)\int_{\mathbb{R}^N}K^2(t)\,dt,\qquad\lim_{n\to\infty}|\Lambda_n|b_n^N\,\mathbb{V}[\phi_n(x)]=\mathbb{E}\big[Y_0^2\,\big|\,X_0=x\big]f(x)\int_{\mathbb{R}^N}K^2(t)\,dt$$
and
$$\lim_{n\to\infty}|\Lambda_n|b_n^N\,\mathrm{Cov}\big[\phi_n(x),f_n(x)\big]=r(x)f(x)\int_{\mathbb{R}^N}K^2(t)\,dt.$$
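Before turning to the proof, the first limit of Lemma 5 can be sanity-checked numerically in the simplest possible setting: independent observations (so every covariance term in the proof vanishes), $N=1$, a Gaussian kernel and a known density $f$. Then $|\Lambda_n|b_n\,\mathbb{V}[f_n(x)]=\mathbb{E}[\Delta_0^2]=(\mathbb{E}[K_n^2(x,X_0)]-(\mathbb{E}[K_n(x,X_0)])^2)/b_n$, which can be evaluated by quadrature and compared with $f(x)\int K^2(t)\,dt$. A minimal sketch (the choices of $f$, $K$, the evaluation point and the quadrature grid are illustrative, not from the paper):

```python
import numpy as np

def e_delta0_squared(bn, x=0.3):
    """E[Delta_0^2] = (E[K_n^2(x, X_0)] - (E[K_n(x, X_0)])^2) / b_n for iid data."""
    phi = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)  # f = K = N(0,1) density
    t = np.linspace(-12.0, 12.0, 200001)                    # quadrature grid
    w = t[1] - t[0]
    Kn = phi((x - t) / bn)                                  # K_n(x, t) = K((x - t)/b_n)
    EK = np.sum(Kn * phi(t)) * w
    EK2 = np.sum(Kn**2 * phi(t)) * w
    return (EK2 - EK**2) / bn

x = 0.3
f_x = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
limit = f_x / (2 * np.sqrt(np.pi))   # f(x) * int K^2, since int K^2 = 1/(2 sqrt(pi))
vals = [e_delta0_squared(b, x) for b in (0.5, 0.1, 0.02)]
```

As $b_n$ decreases, the computed values approach the limiting value $f(x)\int K^2(t)\,dt$.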

Proof. Let $n\geqslant 1$ and $x\in\mathbb{R}^N$ such that $f(x)>0$ be fixed. Then
$$|\Lambda_n|b_n^N\,\mathbb{V}[f_n(x)]=\mathbb{E}\Bigg[\bigg(\frac{\sum_{i\in\Lambda_n}\Delta_i}{\sqrt{|\Lambda_n|}}\bigg)^2\Bigg]=\mathbb{E}\big[\Delta_0^2\big]+\frac{1}{|\Lambda_n|}\sum_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}|\Lambda_n\cap(\Lambda_n-j)|\,\mathbb{E}\big[\Delta_0\Delta_j\big].$$
Consequently,
$$\Big||\Lambda_n|b_n^N\,\mathbb{V}[f_n(x)]-\mathbb{E}\big[\Delta_0^2\big]\Big|\leqslant\sum_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\big|\mathbb{E}[\Delta_0\Delta_j]\big|,\qquad(22)$$
where
$$\mathbb{E}\big[\Delta_0^2\big]=\frac{\mathbb{E}\big[K_n^2(x,X_0)\big]-\big(\mathbb{E}[K_n(x,X_0)]\big)^2}{b_n^N}.$$
Applying Lemmas 1 and 4, we get
$$\lim_{n\to\infty}\mathbb{E}\big[\Delta_0^2\big]=f(x)\int_{\mathbb{R}^N}K^2(v)\,dv\qquad(23)$$
and
$$\sup_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\big|\mathbb{E}[\Delta_0\Delta_j]\big|\trianglelefteq b_n^N\leqslant b_n^{\theta N/(4+\theta)},\qquad(24)$$

where $\theta>0$ is given by (A3). Similarly, we have
$$|\Lambda_n|b_n^N\,\mathbb{V}[\phi_n(x)]=\mathbb{E}\Bigg[\bigg(\frac{\sum_{i\in\Lambda_n}\Theta_i}{\sqrt{|\Lambda_n|}}\bigg)^2\Bigg]=\mathbb{E}\big[\Theta_0^2\big]+\frac{1}{|\Lambda_n|}\sum_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}|\Lambda_n\cap(\Lambda_n-j)|\,\mathbb{E}\big[\Theta_0\Theta_j\big].$$
So, we derive
$$\Big||\Lambda_n|b_n^N\,\mathbb{V}[\phi_n(x)]-\mathbb{E}\big[\Theta_0^2\big]\Big|\leqslant\sum_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\big|\mathbb{E}[\Theta_0\Theta_j]\big|,\qquad(25)$$
where
$$\mathbb{E}\big[\Theta_0^2\big]=\frac{\mathbb{E}\big[Y_0^2K_n^2(x,X_0)\big]-\big(\mathbb{E}[Y_0K_n(x,X_0)]\big)^2}{b_n^N}.$$
Applying again Lemma 1, we obtain
$$\lim_{n\to\infty}\mathbb{E}\big[\Theta_0^2\big]=\mathbb{E}\big[Y_0^2\,\big|\,X_0=x\big]f(x)\int_{\mathbb{R}^N}K^2(v)\,dv.\qquad(26)$$
By Lemma 4, we have also
$$\sup_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\big|\mathbb{E}[\Theta_0\Theta_j]\big|\trianglelefteq b_n^{\theta N/(4+\theta)}.\qquad(27)$$

Arguing as before, we write
$$|\Lambda_n|b_n^N\,\mathrm{Cov}\big(\phi_n(x),f_n(x)\big)=\mathbb{E}[\Theta_0\Delta_0]+\frac{1}{|\Lambda_n|}\sum_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}|\Lambda_n\cap(\Lambda_n-j)|\,\mathbb{E}\big[\Theta_0\Delta_j\big].$$
Consequently,
$$\Big||\Lambda_n|b_n^N\,\mathrm{Cov}\big(\phi_n(x),f_n(x)\big)-\mathbb{E}[\Theta_0\Delta_0]\Big|\leqslant\sum_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\big|\mathbb{E}[\Theta_0\Delta_j]\big|,\qquad(28)$$
where, using Lemma 1, we have
$$\mathbb{E}[\Theta_0\Delta_0]=\frac{\mathbb{E}\big[Y_0K_n^2(x,X_0)\big]-\mathbb{E}[Y_0K_n(x,X_0)]\,\mathbb{E}[K_n(x,X_0)]}{b_n^N}\xrightarrow[n\to\infty]{}r(x)f(x)\int_{\mathbb{R}^N}K^2(v)\,dv.\qquad(29)$$
By Lemma 4, we have also
$$\sup_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\big|\mathbb{E}[\Theta_0\Delta_j]\big|\trianglelefteq b_n^{\theta N/(2+\theta)}\leqslant b_n^{\theta N/(4+\theta)}.\qquad(30)$$

Now, we assume that (A3)(i) holds and we introduce the letter $\Xi$ which can be replaced in the sequel by either $\Delta$ or $\Theta$. By Rio's inequality (see Rio, 1993), for $j\neq 0$, we have
$$\big|\mathbb{E}[\Xi_0\Xi_j]\big|\leqslant 2\int_0^{2\alpha_{1,\infty}(|j|)}Q_{\Xi_0}^2(u)\,du\qquad\text{where}\qquad Q_{\Xi_0}(u)=\inf\big\{t\geqslant 0\,\big|\,\mathbb{P}(|\Xi_0|>t)\leqslant u\big\}.$$
Using Lemma 3 and noting that $Q_{\Xi_0}(u)\leqslant u^{-1/(2+\theta)}\|\Xi_0\|_{2+\theta}$, we derive
$$\big|\mathbb{E}[\Xi_0\Xi_j]\big|\trianglelefteq\alpha_{1,\infty}^{\theta/(2+\theta)}(|j|)\,\|\Xi_0\|_{2+\theta}^2\trianglelefteq b_n^{-\theta N/(2+\theta)}\,\alpha_{1,\infty}^{\theta/(2+\theta)}(|j|).\qquad(31)$$
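The bound $Q_{\Xi_0}(u)\leqslant u^{-1/(2+\theta)}\|\Xi_0\|_{2+\theta}$ used above is Markov's inequality read through the upper quantile function: if $t=u^{-1/p}\|X\|_p$ then $\mathbb{P}(|X|>t)\leqslant\|X\|_p^p/t^p=u$, so $t$ belongs to the set defining $Q_X(u)$. A small numeric illustration with a discrete distribution (the values, probabilities and the exponent $p$ are arbitrary choices for the demonstration):

```python
import numpy as np

vals = np.array([0.5, 1.0, 3.0, 7.0])     # possible values of |X|
probs = np.array([0.4, 0.3, 0.2, 0.1])    # their probabilities

def Q(u):
    """Upper quantile function Q_X(u) = inf{t >= 0 : P(|X| > t) <= u}."""
    # the tail is a right-continuous step function, so the infimum is
    # attained at one of the values of |X| (or at 0)
    for t in np.sort(vals):
        if probs[vals > t].sum() <= u:
            return t
    return 0.0

p = 2.5                                   # plays the role of 2 + theta
norm_p = float(probs @ vals**p) ** (1 / p)

# Markov: P(|X| > t) <= ||X||_p^p / t^p, hence Q_X(u) <= u^{-1/p} ||X||_p
ok = all(Q(u) <= u ** (-1 / p) * norm_p for u in np.linspace(0.01, 0.99, 99))
```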

Combining (24), (27) and (31) and using Lemma 2, we obtain
$$\sum_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\big|\mathbb{E}[\Xi_0\Xi_j]\big|\trianglelefteq m_n^d\,b_n^{\theta N/(4+\theta)}+b_n^{-\theta N/(2+\theta)}\sum_{|j|>m_n}\alpha_{1,\infty}^{\theta/(2+\theta)}(|j|)\xrightarrow[n\to\infty]{}0,\qquad(32)$$
where $m_n$ is given by Lemma 2. Combining (22), (23), (25), (26) and (32), we get
$$\lim_{n\to\infty}|\Lambda_n|b_n^N\,\mathbb{V}[f_n(x)]=f(x)\int_{\mathbb{R}^N}K^2(t)\,dt\qquad\text{and}\qquad\lim_{n\to\infty}|\Lambda_n|b_n^N\,\mathbb{V}[\phi_n(x)]=\mathbb{E}\big[Y_0^2\,\big|\,X_0=x\big]f(x)\int_{\mathbb{R}^N}K^2(t)\,dt.$$
Applying again Rio's inequality, we have
$$\big|\mathbb{E}[\Theta_0\Delta_j]\big|\leqslant 2\int_0^{2\alpha_{1,\infty}(|j|)}Q_{\Theta_0}(u)\,Q_{\Delta_j}(u)\,du$$


and by Lemma 3, we derive
$$Q_{\Theta_0}(u)\leqslant u^{-1/(2+\theta)}\|\Theta_0\|_{2+\theta}\leqslant u^{-1/(2+\theta)}\,b_n^{-\theta N/(2(2+\theta))}\qquad\text{and}\qquad Q_{\Delta_j}(u)\leqslant u^{-1/(2+\theta)}\|\Delta_0\|_{2+\theta}\leqslant u^{-1/(2+\theta)}\,b_n^{-\theta N/(2(2+\theta))}$$
for any $u\in\,]0,1[$. Consequently, we obtain
$$\big|\mathbb{E}[\Theta_0\Delta_j]\big|\trianglelefteq b_n^{-\theta N/(2+\theta)}\,\alpha_{1,\infty}^{\theta/(2+\theta)}(|j|).\qquad(33)$$
Combining (30) and (33), we derive
$$\sum_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\big|\mathbb{E}[\Theta_0\Delta_j]\big|\trianglelefteq m_n^d\,b_n^{\theta N/(4+\theta)}+b_n^{-\theta N/(2+\theta)}\sum_{|j|>m_n}\alpha_{1,\infty}^{\theta/(2+\theta)}(|j|)\xrightarrow[n\to\infty]{}0,\qquad(34)$$
where $m_n$ is given by Lemma 2. Finally, combining (28), (29) and (34), we obtain
$$\lim_{n\to\infty}|\Lambda_n|b_n^N\,\mathrm{Cov}\big[\phi_n(x),f_n(x)\big]=r(x)f(x)\int_{\mathbb{R}^N}K^2(t)\,dt.$$

From now on, we assume that (A3)(ii) holds. Keeping in mind that $\Xi$ stands for either $\Delta$ or $\Theta$, we define $\overline{\Xi}_i=\mathbb{E}\big[\Xi_i\,\big|\,\mathcal{H}_{i,m_n}\big]$ for any $i$ in $\mathbb{Z}^d$. Note that $(\overline{\Xi}_i)_{i\in\mathbb{Z}^d}$ is a $2m_n$-dependent random field (it means that if $|i-j|>2m_n$ then $\overline{\Xi}_i$ and $\overline{\Xi}_j$ are independent) and
$$\Bigg|\mathbb{E}\Bigg[\Big(\sum_{i\in\Lambda_n}\Xi_i\Big)^2\Bigg]-\mathbb{E}\Bigg[\Big(\sum_{i\in\Lambda_n}\overline{\Xi}_i\Big)^2\Bigg]\Bigg|\leqslant\Big\|\sum_{i\in\Lambda_n}\big(\Xi_i-\overline{\Xi}_i\big)\Big\|_2^2+2\,\Big\|\sum_{i\in\Lambda_n}\overline{\Xi}_i\Big\|_2\,\Big\|\sum_{i\in\Lambda_n}\big(\Xi_i-\overline{\Xi}_i\big)\Big\|_2.\qquad(35)$$
Using Proposition 1 and Lemma 2, we obtain
$$|\Lambda_n|^{-1/2}\,\Big\|\sum_{i\in\Lambda_n}\big(\Xi_i-\overline{\Xi}_i\big)\Big\|_2\trianglelefteq b_n^{-\frac{\theta(N+2)+2N}{2(2+\theta)}}\sum_{|j|>m_n}|j|^d\,\delta_{j,2}^{2/(2+\theta)}\xrightarrow[n\to\infty]{}0.\qquad(36)$$

In the other part, since $(\overline{\Xi}_i)_{i\in\mathbb{Z}^d}$ is $2m_n$-dependent, we have
$$\frac{1}{|\Lambda_n|}\,\mathbb{E}\Bigg[\Big(\sum_{i\in\Lambda_n}\overline{\Xi}_i\Big)^2\Bigg]=\mathbb{E}\big[\overline{\Xi}_0^2\big]+\frac{1}{|\Lambda_n|}\sum_{\substack{j\in\mathbb{Z}^d\setminus\{0\}\\ |j|\leqslant 2m_n}}|\Lambda_n\cap(\Lambda_n-j)|\,\mathbb{E}\big[\overline{\Xi}_0\,\overline{\Xi}_j\big]$$
and consequently
$$\Bigg|\frac{1}{|\Lambda_n|}\,\mathbb{E}\Bigg[\Big(\sum_{i\in\Lambda_n}\overline{\Xi}_i\Big)^2\Bigg]-\mathbb{E}\big[\overline{\Xi}_0^2\big]\Bigg|\leqslant(2m_n+1)^d\sup_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\big|\mathbb{E}\big[\overline{\Xi}_0\,\overline{\Xi}_j\big]\big|.\qquad(37)$$

Moreover, using (19) and Lemma 2 and noting that $\|\Xi_0\|_2\trianglelefteq 1$, we have also
$$\Big|\mathbb{E}\big[\overline{\Xi}_0^2\big]-\mathbb{E}\big[\Xi_0^2\big]\Big|\leqslant 2\,\|\Xi_0\|_2\,\big\|\overline{\Xi}_0-\Xi_0\big\|_2\trianglelefteq b_n^{-\frac{\theta(N+2)+2N}{2(2+\theta)}}\sum_{|j|>m_n}\delta_{j,2}^{2/(2+\theta)}\xrightarrow[n\to\infty]{}0.$$
So, using (23) and (26), we derive
$$\lim_{n\to\infty}\mathbb{E}\big[\overline{\Xi}_0^2\big]=\xi(x)f(x)\int_{\mathbb{R}^N}K^2(t)\,dt,\qquad(38)$$

where $\xi(x)=1$ if $\Xi=\Delta$ and $\xi(x)=\mathbb{E}\big[Y_0^2\,\big|\,X_0=x\big]$ if $\Xi=\Theta$. Similarly, using (19), we obtain
$$m_n^d\sup_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\Big|\big|\mathbb{E}\big[\overline{\Xi}_0\,\overline{\Xi}_j\big]\big|-\big|\mathbb{E}[\Xi_0\Xi_j]\big|\Big|\leqslant 2m_n^d\,\|\Xi_0\|_2\,\big\|\overline{\Xi}_0-\Xi_0\big\|_2\trianglelefteq b_n^{-\frac{\theta(N+2)+2N}{2(2+\theta)}}\sum_{|j|>m_n}|j|^d\,\delta_{j,2}^{2/(2+\theta)}\xrightarrow[n\to\infty]{}0.$$
Using Lemma 4, we have $\sup_{j\neq 0}\big|\mathbb{E}[\Xi_0\Xi_j]\big|\trianglelefteq b_n^{\theta N/(4+\theta)}$ and consequently, by Lemma 2, we get
$$m_n^d\sup_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\big|\mathbb{E}\big[\overline{\Xi}_0\,\overline{\Xi}_j\big]\big|\trianglelefteq m_n^d\,b_n^{\theta N/(4+\theta)}+b_n^{-\frac{\theta(N+2)+2N}{2(2+\theta)}}\sum_{|j|>m_n}|j|^d\,\delta_{j,2}^{2/(2+\theta)}\xrightarrow[n\to\infty]{}0.\qquad(39)$$


Combining (37)–(39), we obtain
$$\lim_{n\to\infty}\frac{1}{|\Lambda_n|}\,\mathbb{E}\Bigg[\Big(\sum_{i\in\Lambda_n}\overline{\Xi}_i\Big)^2\Bigg]=\xi(x)f(x)\int_{\mathbb{R}^N}K^2(t)\,dt.\qquad(40)$$
Combining (35), (36) and (40), we get
$$\lim_{n\to\infty}\frac{1}{|\Lambda_n|}\,\mathbb{E}\Bigg[\Big(\sum_{i\in\Lambda_n}\Xi_i\Big)^2\Bigg]=\xi(x)f(x)\int_{\mathbb{R}^N}K^2(t)\,dt.$$
So, we have shown
$$\lim_{n\to\infty}|\Lambda_n|b_n^N\,\mathbb{V}[f_n(x)]=f(x)\int_{\mathbb{R}^N}K^2(t)\,dt\qquad\text{and}\qquad\lim_{n\to\infty}|\Lambda_n|b_n^N\,\mathbb{V}[\phi_n(x)]=\mathbb{E}\big[Y_0^2\,\big|\,X_0=x\big]f(x)\int_{\mathbb{R}^N}K^2(t)\,dt.$$
Now, it suffices to prove
$$\lim_{n\to\infty}|\Lambda_n|b_n^N\,\mathrm{Cov}\big[\phi_n(x),f_n(x)\big]=r(x)f(x)\int_{\mathbb{R}^N}K^2(t)\,dt$$

when (A3)(ii) holds. If we define
$$\overline{f}_n(x)=\frac{1}{|\Lambda_n|b_n^N}\sum_{i\in\Lambda_n}\mathbb{E}\big[K_n(x,X_i)\,\big|\,\mathcal{H}_{i,m_n}\big]\qquad\text{and}\qquad\overline{\phi}_n(x)=\frac{1}{|\Lambda_n|b_n^N}\sum_{i\in\Lambda_n}\mathbb{E}\big[Y_iK_n(x,X_i)\,\big|\,\mathcal{H}_{i,m_n}\big],$$
then $|\Lambda_n|b_n^N\,\mathrm{Cov}\big[\phi_n(x),f_n(x)\big]=C_1+C_2+C_3+C_4$, where
$$C_1=|\Lambda_n|b_n^N\,\mathbb{E}\Big[\big(\phi_n(x)-\overline{\phi}_n(x)\big)\big(f_n(x)-\overline{f}_n(x)\big)\Big],\qquad C_2=|\Lambda_n|b_n^N\,\mathbb{E}\Big[\big(\phi_n(x)-\overline{\phi}_n(x)\big)\big(\overline{f}_n(x)-\mathbb{E}\big[\overline{f}_n(x)\big]\big)\Big],$$
$$C_3=|\Lambda_n|b_n^N\,\mathbb{E}\Big[\big(\overline{\phi}_n(x)-\mathbb{E}\big[\overline{\phi}_n(x)\big]\big)\big(f_n(x)-\overline{f}_n(x)\big)\Big],\qquad C_4=|\Lambda_n|b_n^N\,\mathbb{E}\Big[\big(\overline{\phi}_n(x)-\mathbb{E}\big[\overline{\phi}_n(x)\big]\big)\big(\overline{f}_n(x)-\mathbb{E}\big[\overline{f}_n(x)\big]\big)\Big].$$

Using Proposition 1, we have
$$|C_1|\leqslant\frac{1}{\sqrt{|\Lambda_n|}}\Big\|\sum_{i\in\Lambda_n}\big(\Theta_i-\overline{\Theta}_i\big)\Big\|_2\times\frac{1}{\sqrt{|\Lambda_n|}}\Big\|\sum_{i\in\Lambda_n}\big(\Delta_i-\overline{\Delta}_i\big)\Big\|_2\trianglelefteq\Bigg(b_n^{-\frac{\theta(N+2)+2N}{2(2+\theta)}}\sum_{|i|>m_n}|i|^d\,\delta_{i,2}^{2/(2+\theta)}\Bigg)^2=o(1).$$
From (40), we have $|\Lambda_n|^{-1/2}\big\|\sum_{j\in\Lambda_n}\overline{\Delta}_j\big\|_2\trianglelefteq 1$ and $|\Lambda_n|^{-1/2}\big\|\sum_{j\in\Lambda_n}\overline{\Theta}_j\big\|_2\trianglelefteq 1$. So,
$$|C_2|\leqslant\frac{1}{\sqrt{|\Lambda_n|}}\Big\|\sum_{i\in\Lambda_n}\big(\Theta_i-\overline{\Theta}_i\big)\Big\|_2\times\frac{1}{\sqrt{|\Lambda_n|}}\Big\|\sum_{j\in\Lambda_n}\overline{\Delta}_j\Big\|_2\trianglelefteq b_n^{-\frac{\theta(N+2)+2N}{2(2+\theta)}}\sum_{|i|>m_n}|i|^d\,\delta_{i,2}^{2/(2+\theta)}=o(1)$$
and
$$|C_3|\leqslant\frac{1}{\sqrt{|\Lambda_n|}}\Big\|\sum_{i\in\Lambda_n}\big(\Delta_i-\overline{\Delta}_i\big)\Big\|_2\times\frac{1}{\sqrt{|\Lambda_n|}}\Big\|\sum_{j\in\Lambda_n}\overline{\Theta}_j\Big\|_2\trianglelefteq b_n^{-\frac{\theta(N+2)+2N}{2(2+\theta)}}\sum_{|i|>m_n}|i|^d\,\delta_{i,2}^{2/(2+\theta)}=o(1).$$

Finally, since
$$C_4=\mathbb{E}\big[\overline{\Theta}_0\overline{\Delta}_0\big]+\frac{1}{|\Lambda_n|}\sum_{\substack{j\in\mathbb{Z}^d\setminus\{0\}\\ |j|\leqslant 2m_n}}|\Lambda_n\cap(\Lambda_n-j)|\,\mathbb{E}\big[\overline{\Theta}_0\overline{\Delta}_j\big],$$
we obtain
$$\Big|C_4-\mathbb{E}\big[\overline{\Theta}_0\overline{\Delta}_0\big]\Big|\leqslant(2m_n+1)^d\sup_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\big|\mathbb{E}\big[\overline{\Theta}_0\overline{\Delta}_j\big]\big|.\qquad(41)$$
Using (19) and keeping in mind that $\|\Delta_0\|_2\trianglelefteq 1$ and $\|\Theta_0\|_2\trianglelefteq 1$, we have also
$$\Big|\mathbb{E}\big[\overline{\Theta}_0\overline{\Delta}_0\big]-\mathbb{E}[\Theta_0\Delta_0]\Big|\leqslant\|\Theta_0\|_2\big\|\Delta_0-\overline{\Delta}_0\big\|_2+\|\Delta_0\|_2\big\|\overline{\Theta}_0-\Theta_0\big\|_2\trianglelefteq b_n^{-\frac{\theta(N+2)+2N}{2(2+\theta)}}\sum_{|i|>m_n}\delta_{i,2}^{2/(2+\theta)}=o(1).$$


Moreover, using Lemma 1, we have
$$\mathbb{E}[\Theta_0\Delta_0]=\frac{1}{b_n^N}\Big(\mathbb{E}\big[Y_0K_n^2(x,X_0)\big]-\mathbb{E}[K_n(x,X_0)]\,\mathbb{E}[Y_0K_n(x,X_0)]\Big)\xrightarrow[n\to\infty]{}r(x)f(x)\int_{\mathbb{R}^N}K^2(t)\,dt$$
and consequently, we obtain
$$\lim_{n\to\infty}\mathbb{E}\big[\overline{\Theta}_0\overline{\Delta}_0\big]=r(x)f(x)\int_{\mathbb{R}^N}K^2(t)\,dt.\qquad(42)$$

Now, using (19) and Lemma 2, we have
$$m_n^d\sup_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\Big|\big|\mathbb{E}\big[\overline{\Theta}_0\overline{\Delta}_j\big]\big|-\big|\mathbb{E}[\Theta_0\Delta_j]\big|\Big|\leqslant m_n^d\Big(\|\Theta_0\|_2\big\|\Delta_0-\overline{\Delta}_0\big\|_2+\|\Delta_0\|_2\big\|\overline{\Theta}_0-\Theta_0\big\|_2\Big)\trianglelefteq b_n^{-\frac{\theta(N+2)+2N}{2(2+\theta)}}\sum_{|j|>m_n}|j|^d\,\delta_{j,2}^{2/(2+\theta)}=o(1).$$
Using Lemma 4, we have $\sup_{j\neq 0}\big|\mathbb{E}[\Theta_0\Delta_j]\big|\trianglelefteq b_n^{\theta N/(2+\theta)}\leqslant b_n^{\theta N/(4+\theta)}$ and consequently, by Lemma 2, we get
$$m_n^d\sup_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\big|\mathbb{E}\big[\overline{\Theta}_0\overline{\Delta}_j\big]\big|\trianglelefteq m_n^d\,b_n^{\theta N/(4+\theta)}+b_n^{-\frac{\theta(N+2)+2N}{2(2+\theta)}}\sum_{|j|>m_n}|j|^d\,\delta_{j,2}^{2/(2+\theta)}\xrightarrow[n\to\infty]{}0.\qquad(43)$$
Combining (41)–(43), we obtain
$$C_4\xrightarrow[n\to\infty]{}r(x)f(x)\int_{\mathbb{R}^N}K^2(t)\,dt.$$

The proof of Lemma 5 is complete. □

5. Proofs of Theorems

In this section, we present the proofs of Theorems 1 and 2.

Proof of Theorem 1. Let $n\geqslant 1$ be fixed. Since $K$ is symmetric such that $\int_{\mathbb{R}^N}\|v\|^2|K(v)|\,dv<\infty$ and $f$ and $\phi$ are twice differentiable with bounded second partial derivatives, by Taylor's formula, we get
$$\sup_{x\in\mathbb{R}^N}\big|\mathbb{E}[f_n(x)]-f(x)\big|=\sup_{x\in\mathbb{R}^N}\bigg|\int_{\mathbb{R}^N}\big(f(x-vb_n)-f(x)\big)K(v)\,dv\bigg|\trianglelefteq b_n^2\int_{\mathbb{R}^N}\|v\|^2|K(v)|\,dv$$
and
$$\sup_{x\in\mathbb{R}^N}\big|\mathbb{E}[\phi_n(x)]-\phi(x)\big|=\sup_{x\in\mathbb{R}^N}\bigg|\int_{\mathbb{R}^N}\big(\phi(x-vb_n)-\phi(x)\big)K(v)\,dv\bigg|\trianglelefteq b_n^2\int_{\mathbb{R}^N}\|v\|^2|K(v)|\,dv.$$
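The $O(b_n^2)$ rate of these two bias bounds can be checked in closed form in a toy case. For $N=1$ with $f$ and $K$ both standard Gaussian, the smoothed density $\int f(x-vb_n)K(v)\,dv$ is exactly the $\mathcal{N}(0,1+b_n^2)$ density, so the bias is available without quadrature and its ratio to $b_n^2$ stabilizes at $\tfrac{1}{2}f''(x)$ as $b_n\to 0$. A short check (the choices of $f$, $K$ and the evaluation point are illustrative):

```python
import math

def normal_pdf(x, var):
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def bias(x, b):
    # E[f_n(x)] - f(x) = int (f(x - v b) - f(x)) K(v) dv; for f = K = N(0,1)
    # the convolution integral is the N(0, 1 + b^2) density evaluated at x
    return normal_pdf(x, 1.0 + b * b) - normal_pdf(x, 1.0)

x = 0.7
ratios = [bias(x, b) / b**2 for b in (0.4, 0.1, 0.01)]
# second-order Taylor expansion: bias ~ (b^2/2) f''(x), with f''(x) = (x^2 - 1) f(x)
predicted = 0.5 * (x**2 - 1.0) * normal_pdf(x, 1.0)
```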

The proof of Theorem 1 is complete. □

Proof of Theorem 2. Let $n\geqslant 1$ and $x\in\mathbb{R}^N$ such that $f(x)>0$ be fixed. Then
$$r_n(x)-\frac{\mathbb{E}[\phi_n(x)]}{\mathbb{E}[f_n(x)]}=\frac{\big(\phi_n(x)-\mathbb{E}[\phi_n(x)]\big)\,\mathbb{E}[f_n(x)]-\big(f_n(x)-\mathbb{E}[f_n(x)]\big)\,\mathbb{E}[\phi_n(x)]}{f_n(x)\,\mathbb{E}[f_n(x)]}.$$
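For concreteness, the statistic being centered here can be sketched in a few lines: at each site $i$ of an irregularly spaced region $\Lambda_n\subset\mathbb{Z}^d$ one observes $(Y_i,X_i)$, and $r_n(x)=\phi_n(x)/f_n(x)$ is a kernel-weighted average of the responses, the common normalization $|\Lambda_n|b_n^N$ cancelling in the ratio. A minimal numpy sketch (the sites, the Gaussian kernel, the regression function and the bandwidth are illustrative choices, not the paper's assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# irregularly spaced observation sites: 150 cells of a 20 x 20 block of Z^2 (d = 2);
# the site positions only determine which observations are collected, since the
# Nadaraya-Watson formula itself depends on the data (Y_i, X_i) alone
flat = rng.choice(400, size=150, replace=False)
sites = np.stack([flat // 20, flat % 20], axis=1)

X = rng.normal(size=len(sites))                      # covariates (N = 1)
Y = np.cos(X) + 0.1 * rng.normal(size=len(sites))    # responses Y_i = R(X_i, eta_i)

def nw(x, X, Y, bn):
    """r_n(x) = phi_n(x) / f_n(x) with a Gaussian kernel."""
    w = np.exp(-((x - X) / bn) ** 2 / 2)             # K_n(x, X_i) = K((x - X_i)/b_n)
    return np.sum(w * Y) / np.sum(w)

r_hat = nw(0.0, X, Y, bn=0.3)                        # close to cos(0) = 1
r_const = nw(0.0, X, np.full(len(X), 2.5), bn=0.3)   # constant responses: exactly 2.5
```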

Combining Lemma 5 and Theorem 1, we obtain that $f_n(x)$ converges in probability to $f(x)$ as $n\to\infty$. Moreover, we have also $\lim_{n\to\infty}\mathbb{E}[\phi_n(x)]/\mathbb{E}[f_n(x)]=r(x)$. So, using Slutsky's lemma and assumptions (A1)–(A4), it is sufficient to prove that
$$\lambda_1\sqrt{|\Lambda_n|b_n^N}\,\big(\phi_n(x)-\mathbb{E}[\phi_n(x)]\big)+\lambda_2\sqrt{|\Lambda_n|b_n^N}\,\big(f_n(x)-\mathbb{E}[f_n(x)]\big)\xrightarrow[n\to\infty]{\text{Law}}\mathcal{N}\big(0,\rho^2(x)\big),$$
where $\rho^2(x)=\big(\lambda_1^2\,\mathbb{E}\big[Y_0^2\,\big|\,X_0=x\big]+2\lambda_1\lambda_2\,r(x)+\lambda_2^2\big)\times f(x)\int_{\mathbb{R}^N}K^2(t)\,dt$ for any $(\lambda_1,\lambda_2)\in\mathbb{R}^2$. Let $(\lambda_1,\lambda_2)\in\mathbb{R}^2$ be fixed. Then
$$\lambda_1\sqrt{|\Lambda_n|b_n^N}\,\big(\phi_n(x)-\mathbb{E}[\phi_n(x)]\big)+\lambda_2\sqrt{|\Lambda_n|b_n^N}\,\big(f_n(x)-\mathbb{E}[f_n(x)]\big)=|\Lambda_n|^{-1/2}\sum_{i\in\Lambda_n}U_i,$$
where
$$U_i=\lambda_1\Theta_i+\lambda_2\Delta_i=\frac{(\lambda_1Y_i+\lambda_2)\,K_n(x,X_i)-\mathbb{E}\big[(\lambda_1Y_0+\lambda_2)\,K_n(x,X_0)\big]}{\sqrt{b_n^N}}$$


with $\Theta_i$ and $\Delta_i$ defined by (7). For the asymptotic normality of $r_n$ when $(X_i)_{i\in\mathbb{Z}^d}$ is of the form (1), we are going to use an approximation by $2m_n$-dependent random fields. So, recall that $\mathcal{H}_{i,m_n}=\sigma(\eta_i,\varepsilon_{i-s};|s|\leqslant m_n)$ and define
$$\overline{\Delta}_i=\mathbb{E}\big[\Delta_i\,\big|\,\mathcal{H}_{i,m_n}\big],\qquad\overline{\Theta}_i=\mathbb{E}\big[\Theta_i\,\big|\,\mathcal{H}_{i,m_n}\big]\qquad\text{and}\qquad\overline{U}_i=\lambda_1\overline{\Theta}_i+\lambda_2\overline{\Delta}_i=\mathbb{E}\big[U_i\,\big|\,\mathcal{H}_{i,m_n}\big].$$
By construction, $(\overline{U}_i)_{i\in\mathbb{Z}^d}$ is $2m_n$-dependent (it means that if $|i-j|>2m_n$ then $\overline{U}_i$ and $\overline{U}_j$ are independent). So, if (A3)(ii) holds, applying Proposition 1 with $\Phi(t)=\lambda_1t+\lambda_2$ for any $t\in\mathbb{R}$, then
$$|\Lambda_n|^{-1/2}\,\Big\|\sum_{i\in\Lambda_n}\big(U_i-\overline{U}_i\big)\Big\|_2\trianglelefteq b_n^{-\frac{\theta(N+2)+2N}{2(2+\theta)}}\sum_{|i|>m_n}|i|^d\,\delta_{i,2}^{2/(2+\theta)}\xrightarrow[n\to\infty]{}0\quad\text{(by Lemma 2)}.$$

Consequently, when $(X_i)_{i\in\mathbb{Z}^d}$ is of the form (1), it suffices to establish the asymptotic normality of $|\Lambda_n|^{-1/2}\sum_{i\in\Lambda_n}\overline{U}_i$. From now on, we denote
$$Z_i=\begin{cases}U_i&\text{if }(X_i)_{i\in\mathbb{Z}^d}\text{ is strongly mixing}\\ \overline{U}_i&\text{if }(X_i)_{i\in\mathbb{Z}^d}\text{ is of the form (1)}\end{cases}\qquad\text{and}\qquad M_n=\begin{cases}m_n&\text{if }(X_i)_{i\in\mathbb{Z}^d}\text{ is strongly mixing}\\ 2m_n&\text{if }(X_i)_{i\in\mathbb{Z}^d}\text{ is of the form (1).}\end{cases}$$

Lemma 6. $\lim_{n\to\infty}\mathbb{E}\big[Z_0^2\big]=\rho^2(x)$.

Proof. We have
$$\mathbb{E}\big[U_0^2\big]=\frac{\mathbb{E}\big[(\lambda_1Y_0+\lambda_2)^2K_n^2(x,X_0)\big]-\big(\mathbb{E}\big[(\lambda_1Y_0+\lambda_2)K_n(x,X_0)\big]\big)^2}{b_n^N}.$$
Applying Lemma 1, we get
$$\lim_{n\to\infty}\mathbb{E}\big[U_0^2\big]=\mathbb{E}\big[(\lambda_1Y_0+\lambda_2)^2\,\big|\,X_0=x\big]f(x)\int_{\mathbb{R}^N}K^2(t)\,dt=\rho^2(x).$$
Moreover, by Proposition 1 and Lemma 2 and using $\|U_0\|_2\trianglelefteq 1$, we derive
$$\Big|\mathbb{E}\big[\overline{U}_0^2\big]-\mathbb{E}\big[U_0^2\big]\Big|\leqslant 2\,\|U_0\|_2\,\big\|U_0-\overline{U}_0\big\|_2\trianglelefteq b_n^{-\frac{\theta(N+2)+2N}{2(2+\theta)}}\sum_{|i|>M_n}|i|^d\,\delta_{i,2}^{2/(2+\theta)}\xrightarrow[n\to\infty]{}0.$$
So, $\lim_{n\to\infty}\mathbb{E}\big[\overline{U}_0^2\big]=\rho^2(x)$. The proof of Lemma 6 is complete. □



Let $(\xi_i)_{i\in\mathbb{Z}^d}$ be i.i.d. normal random variables independent of $(X_i)_{i\in\mathbb{Z}^d}$ and $(\eta_i)_{i\in\mathbb{Z}^d}$. Assume that $\mathbb{E}[\xi_0]=0$ and $\mathbb{E}\big[\xi_0^2\big]=\mathbb{E}\big[Z_0^2\big]$. For any $i\in\mathbb{Z}^d$, we define
$$T_i=\frac{Z_i}{\sqrt{|\Lambda_n|}}\qquad\text{and}\qquad\gamma_i=\frac{\xi_i}{\sqrt{|\Lambda_n|}}.$$
Let $g$ be the unique function from $[1,|\Lambda_n|]\cap\mathbb{Z}$ to $\Lambda_n$ such that $g(k)\prec g(\ell)$ for $1\leqslant k<\ell\leqslant|\Lambda_n|$, where $\prec$ denotes the lexicographic order on $\mathbb{Z}^d$. For any $1\leqslant k\leqslant|\Lambda_n|$, we define
$$S_{g(k)}(T)=\sum_{s=1}^{k}T_{g(s)}\qquad\text{and}\qquad S_{g(k)}^{c}(\gamma)=\sum_{s=k}^{|\Lambda_n|}\gamma_{g(s)}$$

with the convention $S_{g(0)}(T)=S_{g(|\Lambda_n|+1)}^{c}(\gamma)=0$. Let $\psi$ be any measurable function from $\mathbb{R}$ to $\mathbb{R}$. For any $1\leqslant k\leqslant\ell\leqslant|\Lambda_n|$, we introduce the notation $\psi_{k,\ell}=\psi\big(S_{g(k)}(T)+S_{g(\ell)}^{c}(\gamma)\big)$. Let $h:\mathbb{R}\to\mathbb{R}$ be a four times continuously differentiable function such that $\max_{0\leqslant i\leqslant 4}\|h^{(i)}\|_\infty\leqslant 1$. It suffices to prove $\lim_{n\to\infty}|L_n|=0$, where
$$L_n:=\mathbb{E}\Bigg[h\Bigg(\sum_{i\in\Lambda_n}\frac{Z_i}{\sqrt{|\Lambda_n|}}\Bigg)\Bigg]-\mathbb{E}\Bigg[h\Bigg(\sum_{i\in\Lambda_n}\frac{\xi_i}{\sqrt{|\Lambda_n|}}\Bigg)\Bigg].$$
Using Lindeberg's idea (Lindeberg, 1922) (see also Dedecker, 1998), we have
$$L_n=\mathbb{E}\big[h_{|\Lambda_n|,|\Lambda_n|+1}-h_{0,1}\big]=\sum_{k=1}^{|\Lambda_n|}\mathbb{E}\big[h_{k,k+1}-h_{k-1,k}\big]=\sum_{k=1}^{|\Lambda_n|}\Big(\mathbb{E}\big[h_{k,k+1}-h_{k-1,k+1}\big]-\mathbb{E}\big[h_{k-1,k}-h_{k-1,k+1}\big]\Big).$$
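The two rewritings of $L_n$ above are purely algebraic: with $h_{k,\ell}=h\big(S_{g(k)}(T)+S_{g(\ell)}^{c}(\gamma)\big)$, each intermediate term appears once with each sign, so the sum telescopes. A tiny deterministic check with arbitrary numbers standing in for the $T_{g(s)}$ and $\gamma_{g(s)}$ (the values and the test function $h$ are illustrative):

```python
import math

T = [0.3, -1.2, 0.7, 0.05]     # plays T_{g(1)}, ..., T_{g(n)}
G = [0.9, -0.4, 0.1, -0.8]     # plays gamma_{g(1)}, ..., gamma_{g(n)}
h = math.tanh                   # any smooth bounded test function
n = len(T)

def h_kl(k, l):
    """h_{k,l} = h(S_{g(k)}(T) + S^c_{g(l)}(gamma)), with S_{g(0)} = S^c_{g(n+1)} = 0."""
    return h(sum(T[:k]) + sum(G[l - 1:]))

telescoped = sum(h_kl(k, k + 1) - h_kl(k - 1, k) for k in range(1, n + 1))
direct = h_kl(n, n + 1) - h_kl(0, 1)   # h(sum of the T) - h(sum of the gamma)
```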

Applying Taylor’s formula, we get Ln =

|Λn | ( [ ∑

E Tg(k) h′k−1,k+1 +

k=1 2 where |vk | ⩽ Tg(k)

Ln =

2

]

[

1

2 h′′k−1,k+1 + vk − E γg(k) h′k−1,k+1 + Tg(k)

2

2 h′′k−1,k+1 + wk γg(k)

]) ,

) ( ) [ ] 2 2 1 ∧ |Tg(k) | and |wk | ⩽ γg(k) 1 ∧ |γg(k) | . Since γg(k) and h′′k−1,k+1 are independent, E γg(k) h′k−1,k+1 = 0 ]

(

[

[ 2 ] and E γg(k) =

1

E Z02

|Λn |

, we obtain

|Λn | ( ∑ [



]

E Tg(k) hk−1,k+1 +

k=1

[(

1 2

[ ])

2 Tg(k)

E



E Z02

] h′′k−1,k+1

|Λn |

)

+ E [vk − wk ] .

Since $\xi_0$ is a Gaussian random variable with zero mean and variance $\mathbb{E}\big[Z_0^2\big]$ and $\mathbb{E}\big[Z_0^2\big]\leqslant\mathbb{E}\big[U_0^2\big]$, we have
$$\sum_{k=1}^{|\Lambda_n|}\mathbb{E}[|w_k|]\leqslant\frac{\mathbb{E}\big[|\xi_0|^3\big]}{\sqrt{|\Lambda_n|}}\trianglelefteq\frac{\big(\mathbb{E}\big[U_0^2\big]\big)^{3/2}}{\sqrt{|\Lambda_n|}}.$$
By Lemma 1, we have $\mathbb{E}\big[U_0^2\big]\trianglelefteq 1$ and consequently, we obtain $\lim_{n\to\infty}\sum_{k=1}^{|\Lambda_n|}\mathbb{E}[|w_k|]=0$. Let $d_n:=\big(|\Lambda_n|b_n^N\big)^{-\theta/(2(1+\theta))}$. Then,
$$\sum_{k=1}^{|\Lambda_n|}\mathbb{E}[|v_k|]\leqslant d_n\,\mathbb{E}\big[Z_0^2\big]+\mathbb{E}\Big[Z_0^2\,\mathbb{1}_{|Z_0|>d_n\sqrt{|\Lambda_n|}}\Big]\leqslant d_n\,\mathbb{E}\big[U_0^2\big]+\frac{\mathbb{E}\big[|U_0|^{2+\theta}\big]}{d_n^{\theta}\,|\Lambda_n|^{\theta/2}}.$$

Using Lemma 3, we get
$$\sum_{k=1}^{|\Lambda_n|}\mathbb{E}[|v_k|]\trianglelefteq d_n+\frac{1}{d_n^{\theta}\big(|\Lambda_n|b_n^N\big)^{\theta/2}}=2d_n\xrightarrow[n\to\infty]{}0.$$
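The choice of $d_n$ makes the two error terms in the last display match exactly: writing $a=|\Lambda_n|b_n^N$ and $d_n=a^{-\theta/(2(1+\theta))}$, one has $d_n^{\theta}a^{\theta/2}=d_n^{-1}$, which is why the bound collapses to $2d_n$. A quick exact-arithmetic check of this exponent identity (the tested values of $\theta$ are arbitrary):

```python
from fractions import Fraction

def balances(theta):
    """d_n = a^e with e = -theta/(2(1+theta)); the term 1/(d_n^theta * a^{theta/2})
    equals d_n exactly when, comparing exponents of a, theta*e + theta/2 == -e."""
    e = -theta / (2 * (1 + theta))
    return theta * e + theta / 2 == -e

ok = all(balances(Fraction(p, q)) for p, q in [(1, 4), (1, 2), (1, 1), (3, 2), (5, 1)])
```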

Now, we have to prove that
$$\lim_{n\to\infty}\sum_{k=1}^{|\Lambda_n|}\Bigg(\mathbb{E}\big[T_{g(k)}h'_{k-1,k+1}\big]+\mathbb{E}\bigg[\frac{Z_{g(k)}^2-\mathbb{E}\big[Z_0^2\big]}{2|\Lambda_n|}\,h''_{k-1,k+1}\bigg]\Bigg)=0.\qquad(44)$$
For any integers $n\geqslant 1$ and $1\leqslant k\leqslant|\Lambda_n|$, we define
$$E_k^{(n)}=\big\{j\in\Lambda_n\,\big|\,j\prec g(k)\ \text{and}\ |j-g(k)|>M_n\big\}\qquad\text{and}\qquad S_{g(k)}^{(M_n)}(T)=\sum_{i\in E_k^{(n)}}T_i.$$
For any $1\leqslant k<\ell\leqslant|\Lambda_n|$ and any function $\psi$ from $\mathbb{R}$ to $\mathbb{R}$, we define also $\psi^{(M_n)}_{k-1,\ell}=\psi\big(S_{g(k)}^{(M_n)}(T)+S_{g(\ell)}^{c}(\gamma)\big)$. Using Taylor's formula, we have
$$T_{g(k)}h'_{k-1,k+1}=T_{g(k)}h'^{(M_n)}_{k-1,k+1}+T_{g(k)}\big(S_{g(k-1)}(T)-S_{g(k)}^{(M_n)}(T)\big)h''^{(M_n)}_{k-1,k+1}+v_k'$$
with
$$|v_k'|\leqslant 2\,\Big|T_{g(k)}\big(S_{g(k-1)}(T)-S_{g(k)}^{(M_n)}(T)\big)\Big(1\wedge\big|S_{g(k-1)}(T)-S_{g(k)}^{(M_n)}(T)\big|\Big)\Big|.$$
In order to obtain (44), we have to prove

$$\lim_{n\to\infty}\sum_{k=1}^{|\Lambda_n|}\mathbb{E}\big[T_{g(k)}h'^{(M_n)}_{k-1,k+1}\big]=0,\qquad(45)$$
$$\lim_{n\to\infty}\sum_{k=1}^{|\Lambda_n|}\mathbb{E}\Big[T_{g(k)}\big(S_{g(k-1)}(T)-S_{g(k)}^{(M_n)}(T)\big)h''^{(M_n)}_{k-1,k+1}\Big]=0,\qquad(46)$$


$$\lim_{n\to\infty}\sum_{k=1}^{|\Lambda_n|}\mathbb{E}\big[|v_k'|\big]=0,\qquad(47)$$
and
$$\lim_{n\to\infty}\frac{1}{|\Lambda_n|}\sum_{k=1}^{|\Lambda_n|}\mathbb{E}\Big[\big(Z_{g(k)}^2-\mathbb{E}\big[Z_0^2\big]\big)h''_{k-1,k+1}\Big]=0.\qquad(48)$$

First, we are going to prove (45). Since $\gamma$ is independent of $T$, we have $\mathbb{E}\big[T_{g(k)}h'\big(S_{g(k+1)}^{c}(\gamma)\big)\big]=0$. Consequently, if $\pi$ is a one-to-one map from $[1,|E_k^{(n)}|]\cap\mathbb{Z}$ to $E_k^{(n)}$ such that $|\pi(i)-g(k)|\leqslant|\pi(i-1)-g(k)|$, then
$$\mathbb{E}\big[T_{g(k)}h'^{(M_n)}_{k-1,k+1}\big]=\mathbb{E}\Big[T_{g(k)}\Big(h'^{(M_n)}_{k-1,k+1}-h'\big(S_{g(k+1)}^{c}(\gamma)\big)\Big)\Big]=\sum_{i=1}^{|E_k^{(n)}|}\mathrm{Cov}\big(T_{g(k)},\beta_i-\beta_{i-1}\big),$$
where $\beta_i=h'\big(S_{\pi(i)}(T)+S_{g(k+1)}^{c}(\gamma)\big)$ with $S_{\pi(i)}(T)=\sum_{s=1}^{i}T_{\pi(s)}$ and $S_{\pi(0)}(T)=0$. If $(X_i)_{i\in\mathbb{Z}^d}$ is strongly mixing then, using Rio's inequality (Rio, 1993, Theorem 1.1) and keeping in mind that $|\pi(i)-g(k)|\leqslant|\pi(i-1)-g(k)|$, we get
$$\Big|\mathbb{E}\big[T_{g(k)}h'^{(M_n)}_{k-1,k+1}\big]\Big|\leqslant 2\sum_{i=1}^{|E_k^{(n)}|}\int_0^{2\alpha_{1,\infty}(|\pi(i)-g(k)|)}Q_{T_{g(k)}}(u)\,Q_{\beta_i-\beta_{i-1}}(u)\,du.$$
For any $u\in\,]0,1[$, noting that $h'$ is Lipschitz, we have
$$Q_{T_{g(k)}}(u)\leqslant\frac{u^{-1/(2+\theta)}\|Z_0\|_{2+\theta}}{\sqrt{|\Lambda_n|}}\qquad\text{and}\qquad Q_{\beta_i-\beta_{i-1}}(u)\leqslant\frac{u^{-1/(2+\theta)}\|Z_0\|_{2+\theta}}{\sqrt{|\Lambda_n|}}.$$
Moreover, by Lemma 3, we have $\|Z_0\|_{2+\theta}^2\leqslant\|U_0\|_{2+\theta}^2\trianglelefteq b_n^{-\theta N/(2+\theta)}$ and consequently, we obtain
$$\sum_{k=1}^{|\Lambda_n|}\Big|\mathbb{E}\big[T_{g(k)}h'^{(M_n)}_{k-1,k+1}\big]\Big|\trianglelefteq\frac{b_n^{-\theta N/(2+\theta)}}{|\Lambda_n|}\sum_{k=1}^{|\Lambda_n|}\sum_{i=1}^{|E_k^{(n)}|}\alpha_{1,\infty}^{\theta/(2+\theta)}\big(|\pi(i)-g(k)|\big)\leqslant b_n^{-\theta N/(2+\theta)}\sum_{|i|>M_n}\alpha_{1,\infty}^{\theta/(2+\theta)}(|i|).$$
Using Lemma 2, we get (45). The following lemma is a simple consequence of Lemma 4 (its proof is left to the reader).

Lemma 7. $\sup_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\mathbb{E}\big[|U_0U_j|\big]\trianglelefteq b_n^{\theta N/(4+\theta)}$.

Since $(X_i)_{i\in\mathbb{Z}^d}$ is assumed to be strongly mixing, we have $Z_i=U_i$ for any $i\in\mathbb{Z}^d$. Using Lemmas 2 and 4, we have
$$\sum_{k=1}^{|\Lambda_n|}\mathbb{E}\big[|v_k'|\big]\leqslant 2\,\mathbb{E}\Bigg[|Z_0|\Bigg(\sum_{\substack{|i|\leqslant M_n\\ i\neq 0}}|Z_i|\Bigg)\Bigg(1\wedge\frac{\sum_{|i|\leqslant M_n,\,i\neq 0}|Z_i|}{\sqrt{|\Lambda_n|}}\Bigg)\Bigg]\leqslant 2\sum_{\substack{|i|\leqslant M_n\\ i\neq 0}}\mathbb{E}\big[|U_0U_i|\big]\trianglelefteq M_n^d\,b_n^{\theta N/(4+\theta)}\xrightarrow[n\to\infty]{}0$$
and
$$\sum_{k=1}^{|\Lambda_n|}\Big|\mathbb{E}\Big[T_{g(k)}\big(S_{g(k-1)}(T)-S_{g(k)}^{(M_n)}(T)\big)h''^{(M_n)}_{k-1,k+1}\Big]\Big|\leqslant\sum_{\substack{|i|\leqslant M_n\\ i\neq 0}}\mathbb{E}\big[|U_0U_i|\big]\trianglelefteq M_n^d\,b_n^{\theta N/(4+\theta)}\xrightarrow[n\to\infty]{}0.$$

So, we obtain (46) and (47). Now, it suffices to prove (48). Let $\beta\geqslant 1$ be a positive integer. In the sequel, for any $j\in\mathbb{Z}^d$, the notation $\mathbb{E}_\beta[Z_j]$ stands for the conditional expectation of $Z_j$ with respect to the $\sigma$-algebra $\sigma(Z_i\,;\,|i-j|\geqslant\beta)$. We have
$$\frac{1}{|\Lambda_n|}\sum_{k=1}^{|\Lambda_n|}\Big|\mathbb{E}\Big[\big(Z_{g(k)}^2-\mathbb{E}\big[Z_0^2\big]\big)h''_{k-1,k+1}\Big]\Big|\leqslant I_1+I_2,$$
where
$$I_1=\frac{1}{|\Lambda_n|}\sum_{k=1}^{|\Lambda_n|}\Big|\mathbb{E}\Big[\big(Z_{g(k)}^2-\mathbb{E}_\beta\big[Z_{g(k)}^2\big]\big)h''_{k-1,k+1}\Big]\Big|\qquad\text{and}\qquad I_2=\frac{1}{|\Lambda_n|}\sum_{k=1}^{|\Lambda_n|}\Big|\mathbb{E}\Big[\big(\mathbb{E}_\beta\big[Z_{g(k)}^2\big]-\mathbb{E}\big[Z_0^2\big]\big)h''_{k-1,k+1}\Big]\Big|.$$
The next result can be found in McLeish (1975).

Lemma 8. Let $\mathcal{U}$ and $\mathcal{V}$ be two $\sigma$-algebras and let $X$ be a random variable which is measurable with respect to $\mathcal{U}$. If $1\leqslant p\leqslant r\leqslant\infty$, then
$$\big\|\mathbb{E}[X|\mathcal{V}]-\mathbb{E}[X]\big\|_p\leqslant 2\big(2^{1/p}+1\big)\big(\alpha(\mathcal{U},\mathcal{V})\big)^{\frac{1}{p}-\frac{1}{r}}\|X\|_r.$$

Assume that $(X_i)_{i\in\mathbb{Z}^d}$ is strongly mixing. Using Lemma 8 with $p=1$ and $r=(2+\theta)/2$ and keeping in mind that $\|Z_0\|_{2+\theta}^2=\|U_0\|_{2+\theta}^2\trianglelefteq b_n^{-\theta N/(2+\theta)}$, we have
$$I_2\leqslant\big\|\mathbb{E}_\beta\big[Z_0^2\big]-\mathbb{E}\big[Z_0^2\big]\big\|_1\leqslant 6\,\alpha_{1,\infty}^{\theta/(2+\theta)}(\beta)\,\|Z_0\|_{2+\theta}^2\trianglelefteq 6\,b_n^{-\theta N/(2+\theta)}\,\alpha_{1,\infty}^{\theta/(2+\theta)}(\beta).$$
Now, we make the choice
$$\beta=\Big\lceil b_n^{-\frac{\theta N}{(2d-1)\theta+6d-2}}\Big\rceil.\qquad(49)$$
Consequently, using (A3)(i), we obtain
$$I_2\trianglelefteq\beta^{\frac{(2d-1)\theta+6d-2}{2+\theta}}\,\alpha_{1,\infty}^{\theta/(2+\theta)}(\beta)\xrightarrow[n\to\infty]{}0.$$

In the other part, noting that $\mathbb{E}\big[\big(Z_{g(k)}^2-\mathbb{E}_\beta\big[Z_{g(k)}^2\big]\big)h''^{(\beta)}_{k-1,k+1}\big]=0$, we have
$$\mathbb{E}\Big[\big(Z_{g(k)}^2-\mathbb{E}_\beta\big[Z_{g(k)}^2\big]\big)h''_{k-1,k+1}\Big]=\mathbb{E}\Big[\big(Z_{g(k)}^2-\mathbb{E}_\beta\big[Z_{g(k)}^2\big]\big)\big(h''_{k-1,k+1}-h''^{(\beta)}_{k-1,k+1}\big)\Big].$$
So, we obtain
$$I_1\leqslant\mathbb{E}\Bigg[\Bigg(2\wedge\bigg|\sum_{\substack{|i|\leqslant\beta\\ i\prec 0}}\frac{Z_i}{\sqrt{|\Lambda_n|}}\bigg|\Bigg)\Big(Z_0^2+\mathbb{E}_\beta\big[Z_0^2\big]\Big)\Bigg].$$
If $L>0$, then
$$I_1\leqslant\frac{L}{\sqrt{|\Lambda_n|}}\sum_{\substack{|i|\leqslant\beta\\ i\neq 0}}\mathbb{E}\big[|Z_0Z_i|\big]+2\,\mathbb{E}\Big[Z_0^2\,\mathbb{1}_{|Z_0|>L}\Big]+2\,\big\|\mathbb{E}_\beta\big[Z_0^2\big]-\mathbb{E}\big[Z_0^2\big]\big\|_1+\Bigg\|\sum_{\substack{|i|\leqslant\beta\\ i\prec 0}}\frac{Z_i}{\sqrt{|\Lambda_n|}}\Bigg\|_2\,\mathbb{E}\big[Z_0^2\big].$$
Recall that $Z_i=U_i$ for any $i$ in $\mathbb{Z}^d$. Since $\mathbb{E}\big[U_0^2\big]\trianglelefteq 1$ and $\|U_0\|_{2+\theta}^2\trianglelefteq b_n^{-\theta N/(2+\theta)}$, we derive from Lemma 7 that
$$I_1\trianglelefteq\frac{L\,\beta^d\,b_n^{\theta N/(4+\theta)}}{\sqrt{|\Lambda_n|}}+L^{-\theta}\,b_n^{-\theta N/2}+\beta^{\frac{(2d-1)\theta+6d-2}{2+\theta}}\,\alpha_{1,\infty}^{\theta/(2+\theta)}(\beta)+\Bigg\|\sum_{\substack{|i|\leqslant\beta\\ i\prec 0}}\frac{Z_i}{\sqrt{|\Lambda_n|}}\Bigg\|_2.$$
Now, we make the choice
$$L=\frac{|\Lambda_n|^{\frac{1}{2(1+\theta)}}}{\beta^{\frac{d}{1+\theta}}\,b_n^{\frac{\theta(\theta+6)N}{2(1+\theta)(4+\theta)}}}\qquad(50)$$
and we obtain
$$I_1\trianglelefteq\big(|\Lambda_n|b_n^N\big)^{-\frac{\theta}{2(1+\theta)}}\times b_n^{\frac{\theta^2(2+\theta)(d-1)N}{(1+\theta)(4+\theta)((2d-1)\theta+6d-2)}}+\beta^{\frac{(2d-1)\theta+6d-2}{2+\theta}}\,\alpha_{1,\infty}^{\theta/(2+\theta)}(\beta)+\Bigg\|\sum_{\substack{|i|\leqslant\beta\\ i\prec 0}}\frac{U_i}{\sqrt{|\Lambda_n|}}\Bigg\|_2.$$
Moreover,
$$\Bigg\|\sum_{\substack{|i|\leqslant\beta\\ i\prec 0}}\frac{U_i}{\sqrt{|\Lambda_n|}}\Bigg\|_2^2\leqslant\frac{(2\beta+1)^d\,\mathbb{E}\big[U_0^2\big]}{|\Lambda_n|}+\frac{1}{|\Lambda_n|}\sum_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\Big|[-\beta,\beta]^d\cap\big([-\beta,\beta]^d-j\big)\Big|\,\big|\mathbb{E}[U_0U_j]\big|\trianglelefteq\frac{\beta^d}{|\Lambda_n|}\Bigg(\mathbb{E}\big[U_0^2\big]+\sum_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\big|\mathbb{E}[U_0U_j]\big|\Bigg).$$
Using (32) and (34), we have $\sum_{j\neq 0}\big|\mathbb{E}[U_0U_j]\big|=o(1)$. Consequently, we get
$$\Bigg\|\sum_{\substack{|i|\leqslant\beta\\ i\prec 0}}\frac{U_i}{\sqrt{|\Lambda_n|}}\Bigg\|_2^2\trianglelefteq\frac{\beta^d}{|\Lambda_n|}\trianglelefteq\frac{1}{|\Lambda_n|\,b_n^{\frac{d\theta N}{(2d-1)\theta+6d-2}}}\leqslant\frac{1}{|\Lambda_n|b_n^N}.$$
So, we obtain
$$I_1\trianglelefteq\big(|\Lambda_n|b_n^N\big)^{-\frac{\theta}{2(1+\theta)}}\times b_n^{\frac{\theta^2(2+\theta)(d-1)N}{(1+\theta)(4+\theta)((2d-1)\theta+6d-2)}}+\beta^{\frac{(2d-1)\theta+6d-2}{2+\theta}}\,\alpha_{1,\infty}^{\theta/(2+\theta)}(\beta)+\frac{1}{\sqrt{|\Lambda_n|b_n^N}}\xrightarrow[n\to\infty]{}0.$$
Finally, if $(X_i)_{i\in\mathbb{Z}^d}$ is strongly mixing, then (48) holds. In order to complete the proof of Theorem 2, we only need to prove (45)–(48) when $(X_i)_{i\in\mathbb{Z}^d}$ is of the form (1). So, assume that $(X_i)_{i\in\mathbb{Z}^d}$ is of the form (1) and (A3)(ii) holds. Then $(Z_i)_{i\in\mathbb{Z}^d}=(\overline{U}_i)_{i\in\mathbb{Z}^d}$ is $M_n$-dependent. Consequently, $\mathbb{E}\big[T_{g(k)}h'^{(M_n)}_{k-1,k+1}\big]=0$ and (45) follows.

Lemma 9. $\sup_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\mathbb{E}\big[|\overline{U}_0\overline{U}_j|\big]=o\big(M_n^{-d}\big)$.

Proof. We have
$$\sup_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\Big|\mathbb{E}\big[|\overline{U}_0\overline{U}_j|\big]-\mathbb{E}\big[|U_0U_j|\big]\Big|\leqslant 2\,\|U_0\|_2\,\big\|U_0-\overline{U}_0\big\|_2.$$
Combining (19), Lemmas 2 and 7 and keeping in mind that $\|U_0\|_2\trianglelefteq 1$, we obtain
$$M_n^d\sup_{\substack{j\in\mathbb{Z}^d\\ j\neq 0}}\mathbb{E}\big[|\overline{U}_0\overline{U}_j|\big]\trianglelefteq M_n^d\,b_n^{\theta N/(4+\theta)}+b_n^{-\frac{\theta(N+2)+2N}{2(2+\theta)}}\sum_{|j|>M_n}|j|^d\,\delta_{j,2}^{2/(2+\theta)}\xrightarrow[n\to\infty]{}0.$$
The proof of Lemma 9 is complete. □

Applying Lemma 9, we have
$$\sum_{k=1}^{|\Lambda_n|}\mathbb{E}\big[|v_k'|\big]\leqslant 2\,\mathbb{E}\Bigg[|Z_0|\Bigg(\sum_{\substack{|i|\leqslant M_n\\ i\neq 0}}|Z_i|\Bigg)\Bigg(1\wedge\frac{\sum_{|i|\leqslant M_n,\,i\neq 0}|Z_i|}{\sqrt{|\Lambda_n|}}\Bigg)\Bigg]\leqslant 2\sum_{\substack{|i|\leqslant M_n\\ i\neq 0}}\mathbb{E}\big[|\overline{U}_0\overline{U}_i|\big]\xrightarrow[n\to\infty]{}0$$
and
$$\sum_{k=1}^{|\Lambda_n|}\Big|\mathbb{E}\Big[T_{g(k)}\big(S_{g(k-1)}(T)-S_{g(k)}^{(M_n)}(T)\big)h''^{(M_n)}_{k-1,k+1}\Big]\Big|\leqslant\sum_{\substack{|i|\leqslant M_n\\ i\neq 0}}\mathbb{E}\big[|\overline{U}_0\overline{U}_i|\big]\xrightarrow[n\to\infty]{}0.$$

So, we obtain (46) and (47). Moreover, we have
$$\mathbb{E}\Big[\big(Z_{g(k)}^2-\mathbb{E}\big[Z_0^2\big]\big)h''^{(M_n)}_{k-1,k+1}\Big]=\mathbb{E}\Big[\big(\overline{U}_{g(k)}^2-\mathbb{E}\big[\overline{U}_0^2\big]\big)h''^{(M_n)}_{k-1,k+1}\Big]=0.$$
Consequently,
$$\frac{1}{|\Lambda_n|}\sum_{k=1}^{|\Lambda_n|}\Big|\mathbb{E}\Big[\big(Z_{g(k)}^2-\mathbb{E}\big[Z_0^2\big]\big)h''_{k-1,k+1}\Big]\Big|=\frac{1}{|\Lambda_n|}\sum_{k=1}^{|\Lambda_n|}\Big|\mathbb{E}\Big[\big(\overline{U}_{g(k)}^2-\mathbb{E}\big[\overline{U}_0^2\big]\big)\big(h''_{k-1,k+1}-h''^{(M_n)}_{k-1,k+1}\big)\Big]\Big|\leqslant\mathbb{E}\Bigg[\Bigg(2\wedge\bigg|\sum_{\substack{|i|\leqslant M_n\\ i\prec 0}}\frac{\overline{U}_i}{\sqrt{|\Lambda_n|}}\bigg|\Bigg)\Big(\overline{U}_0^2+\mathbb{E}\big[\overline{U}_0^2\big]\Big)\Bigg].$$
As before, if $L'>0$, then using $\big\|\overline{U}_0\big\|_{2+\theta}^2\leqslant\|U_0\|_{2+\theta}^2\trianglelefteq b_n^{-\theta N/(2+\theta)}$, we get
$$\frac{1}{|\Lambda_n|}\sum_{k=1}^{|\Lambda_n|}\Big|\mathbb{E}\Big[\big(Z_{g(k)}^2-\mathbb{E}\big[Z_0^2\big]\big)h''_{k-1,k+1}\Big]\Big|\trianglelefteq\frac{M_n^d\,L'\,\sup_{j\neq 0}\mathbb{E}\big[|\overline{U}_0\overline{U}_j|\big]}{\sqrt{|\Lambda_n|}}+L'^{-\theta}\,b_n^{-\theta N/2}+\mathbb{E}\big[\overline{U}_0^2\big]\Bigg\|\sum_{\substack{|i|\leqslant M_n\\ i\prec 0}}\frac{\overline{U}_i}{\sqrt{|\Lambda_n|}}\Bigg\|_2.$$
Applying Lemma 9 and keeping in mind that $\mathbb{E}\big[U_0^2\big]\trianglelefteq 1$, we have
$$\Bigg\|\sum_{\substack{|i|\leqslant M_n\\ i\prec 0}}\frac{\overline{U}_i}{\sqrt{|\Lambda_n|}}\Bigg\|_2^2\leqslant\frac{(2M_n+1)^d\,\mathbb{E}\big[\overline{U}_0^2\big]}{|\Lambda_n|}+\frac{1}{|\Lambda_n|}\sum_{\substack{|j|\leqslant M_n\\ j\neq 0}}\Big|[-M_n,M_n]^d\cap\big([-M_n,M_n]^d-j\big)\Big|\,\big|\mathbb{E}\big[\overline{U}_0\overline{U}_j\big]\big|\trianglelefteq\frac{M_n^d}{|\Lambda_n|}.$$
Then,
$$\frac{1}{|\Lambda_n|}\sum_{k=1}^{|\Lambda_n|}\Big|\mathbb{E}\Big[\big(Z_{g(k)}^2-\mathbb{E}\big[Z_0^2\big]\big)h''_{k-1,k+1}\Big]\Big|\trianglelefteq\frac{M_n^d\,L'\,\sup_{j\neq 0}\mathbb{E}\big[|\overline{U}_0\overline{U}_j|\big]}{\sqrt{|\Lambda_n|}}+L'^{-\theta}\,b_n^{-\theta N/2}+\frac{M_n^{d/2}}{\sqrt{|\Lambda_n|}}.$$

For
$$L'=\frac{|\Lambda_n|^{\frac{1}{2(1+\theta)}}}{\Big(M_n^d\,\sup_{j\neq 0}\mathbb{E}\big[|\overline{U}_0\overline{U}_j|\big]\Big)^{\frac{1}{1+\theta}}\,b_n^{\frac{\theta N}{2(1+\theta)}}},$$
we get
$$\frac{1}{|\Lambda_n|}\sum_{k=1}^{|\Lambda_n|}\Big|\mathbb{E}\Big[\big(Z_{g(k)}^2-\mathbb{E}\big[Z_0^2\big]\big)h''_{k-1,k+1}\Big]\Big|\trianglelefteq\frac{\Big(M_n^d\,\sup_{j\neq 0}\mathbb{E}\big[|\overline{U}_0\overline{U}_j|\big]\Big)^{\frac{\theta}{1+\theta}}}{\big(|\Lambda_n|b_n^N\big)^{\frac{\theta}{2(1+\theta)}}}+\frac{M_n^{d/2}}{\sqrt{|\Lambda_n|}}.$$

Finally, using again Lemma 9 and keeping in mind that $M_n^d=o(|\Lambda_n|)$ (see Lemma 2), we derive (48). The proof of Theorem 2 is complete. □

Acknowledgments

We are grateful for the useful suggestions of two anonymous reviewers which allowed us to improve the presentation of this work. M. El Machkouri would like to thank the University of Tianjin for its hospitality; a part of this work was done there. X. Fan has been partially supported by the National Natural Science Foundation of China (Grant No. 11601375).

References

Biau, G., Cadre, B., 2004. Nonparametric spatial prediction. Stat. Inference Stoch. Process. 7 (3), 327–349.
Carbon, M., Francq, Ch., Tran, L.T., 2007. Kernel regression estimation for random fields. J. Statist. Plann. Inference 137 (3), 778–798.
Dabo-Niang, S., Rachdi, M., Yao, A., 2011. Kernel regression estimation for spatial functional random variables. Far East J. Theor. Stat. 37 (2), 77–113.
Dabo-Niang, S., Yao, A.-F., 2007. Kernel regression estimation for continuous spatial processes. Math. Methods Statist. 16 (4), 298–317.
Dedecker, J., 1998. A central limit theorem for stationary random fields. Probab. Theory Related Fields 110, 397–426.
Dedecker, J., 2001. Exponential inequalities and functional central limit theorems for random fields. ESAIM Probab. Stat. 5, 77–104.


El Machkouri, M., 2007. Nonparametric regression estimation for random fields in a fixed-design. Stat. Inference Stoch. Process. 10 (1), 29–47.
El Machkouri, M., 2011. Asymptotic normality for the Parzen–Rosenblatt density estimator for strongly mixing random fields. Stat. Inference Stoch. Process. 14 (1), 73–84.
El Machkouri, M., 2014. Kernel density estimation for stationary random fields. ALEA Lat. Am. J. Probab. Math. Stat. 11 (1), 259–279.
El Machkouri, M., Stoica, R., 2010. Asymptotic normality of kernel estimates in a regression model for random fields. J. Nonparametr. Stat. 22 (8), 955–971.
El Machkouri, M., Volný, D., Wu, W.B., 2013. A central limit theorem for stationary random fields. Stochastic Process. Appl. 123 (1), 1–14.
Hallin, M., Lu, Z., Tran, L.T., 2004. Local linear spatial regression. Ann. Statist. 32 (6), 2469–2500.
Kulkarni, P.M., 1992. Estimation of parameters of a two-dimensional spatial autoregressive model with regression. Statist. Probab. Lett. 15 (2), 157–162.
Liggett, T.M., 1985. Interacting Particle Systems. Springer-Verlag.
Lindeberg, J.W., 1922. Eine neue Herleitung des Exponentialgesetzes in der Wahrscheinlichkeitsrechnung. Math. Z. 15, 211–225.
Lu, Z., Chen, X., 2002. Spatial nonparametric regression estimation: non-isotropic case. Acta Math. Appl. Sin. Engl. Ser. 18, 641–656.
Lu, Z., Chen, X., 2004. Spatial kernel regression estimation: weak consistency. Statist. Probab. Lett. 68, 125–136.
Lu, Z., Cheng, P., 1997. Distribution-free strong consistency for nonparametric kernel regression involving nonlinear time series. J. Statist. Plann. Inference 65 (1), 67–86.
Masry, E., Fan, J., 1997. Local polynomial estimation of regression functions for mixing processes. Scand. J. Stat. 24 (2), 165–179.
McLeish, D.L., 1975. A maximal inequality and dependent strong laws. Ann. Probab. 3 (5), 829–839.
Nadaraya, È.A., 1965. On non-parametric estimates of density functions and regression. Teor. Verojatnost. i Primenen. 10, 199–203.
Parzen, E., 1962. On the estimation of a probability density and the mode. Ann. Math. Stat. 33, 1065–1076.
Rio, E., 1993. Covariance inequalities for strongly mixing processes. Ann. Inst. H. Poincaré Probab. Statist. 29 (4), 587–597.
Robinson, P.M., 1983. Nonparametric estimators for time series. J. Time Series Anal. 4 (3), 185–207.
Rosenblatt, M., 1956a. A central limit theorem and a strong mixing condition. Proc. Natl. Acad. Sci. USA 42, 43–47.
Rosenblatt, M., 1956b. Remarks on some nonparametric estimates of a density function. Ann. Math. Stat. 27, 832–837.
Roussas, G.G., 1988. Nonparametric estimation in mixing sequences of random variables. J. Statist. Plann. Inference 18 (2), 135–149.
Stroock, D.W., Zegarlinski, B., 1992. The logarithmic Sobolev inequality for discrete spin systems on a lattice. Comm. Math. Phys. 149 (1), 175–193.
Watson, G.S., 1964. Smooth regression analysis. Sankhyā A 26, 359–372.
Wu, W.B., 2005. Nonlinear system theory: another look at dependence. Proc. Natl. Acad. Sci. USA 102 (40), 14150–14154 (electronic).