Journal of Statistical Planning and Inference 138 (2008) 1828–1835
www.elsevier.com/locate/jspi

On multivariate smoothed bootstrap consistency

Daniele De Martini (a), Fabio Rapallo (b,*)

(a) Department SEMEQ, University of Piemonte Orientale, Corso Perrone 18, 28100 Novara, Italy
(b) Department DISTA, University of Piemonte Orientale, Via Bellini 25/G, 15100 Alessandria, Italy

Received 20 January 2006; received in revised form 20 June 2007; accepted 20 June 2007. Available online 15 August 2007.

Abstract

This paper deals with the convergence in Mallows metric of classical multivariate kernel distribution function estimators. We prove the convergence in Mallows metric of a locally orientated kernel smooth estimator belonging to the class of sample smoothing estimators. The consistency of the smoothed bootstrap for regular functions of the marginal means follows. Two simple simulation studies show how the smoothed versions of the bootstrap give better results than the classical technique.

© 2007 Elsevier B.V. All rights reserved.

Keywords: Mallows metric; Locally adaptive kernel estimator; Bootstrap confidence intervals

1. Introduction

Bootstrap methods have received great attention in the statistical literature, starting from Efron (1979). The books by Shao and Tu (1995) and Davison and Hinkley (1997) are major references in this area. These works describe a wide range of techniques, optimizations and applications to estimation and hypothesis testing.

The bootstrap strongly depends on the choice of the empirical distribution used in the plug-in. Thus, a smooth estimator of the distribution function may be adopted when the distribution of the phenomenon is continuous. Following Silverman and Young (1987), Hall et al. (1989) and Wang (1995), the smoothed bootstrap (SMB) uses a smooth estimator of the distribution function instead of the empirical distribution function. SMB performs better than the classical bootstrap (CLB) when the smoothing parameters are properly chosen; see, for example, De Angelis and Young (1992), Polansky and Schucany (1997) and Wang (1989). For a discussion of the relevance of the parameter choice see Silverman (1986) and Hardle et al. (2004).

The material presented here is related to the results in De Martini (2000) on univariate SMB consistency, where bootstrap consistency was proved through the convergence in Mallows metric of smooth estimates. Related studies in the same direction can be found in El-Nouty and Guillou (2000) and Alonso and Cuevas (2003), but they are limited to one-dimensional distribution functions.

Here, we first prove a convergence theorem for the k-dimensional kernel smoothed empirical distribution function. When the distribution function presents some local patterns and the sample size is small, a locally adaptive kernel estimator may produce a better fit; see for example Scott (1992). As a consequence, we introduce a locally orientated k-dimensional kernel estimator belonging to the class of sample smoothing estimators (Terrell and Scott, 1992) and we prove its convergence in Mallows metric.

(*) Corresponding author.

E-mail addresses: [email protected] (D. De Martini), [email protected] (F. Rapallo).

0378-3758/$ - see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.jspi.2007.06.035


The convergence in Mallows metric implies the consistency of the SMB for regular functions of the mean. In a simulation study we compare the performance of the different bootstrap methods, focusing on the computation of confidence intervals for regular functions of the means under two different models.

The paper is organized as follows. In Section 2, we recall some results on the multivariate CLB and its consistency in Mallows metric. In Section 3, we prove the consistency of the classical multivariate kernel estimator, while in Section 4 we prove the consistency of a locally orientated kernel estimator. It follows that the bootstrap method for statistics which are regular functions of the marginal means is consistent whenever we plug in the smooth estimators defined in Sections 3 and 4. Finally, in Section 5, we present two simple simulation studies that allow a numerical comparison of the performance of the different bootstrap methods.

2. A theoretical reminder

In this section, we present some results about the convergence in Mallows metric of distribution function estimators and its connections with bootstrap consistency.

Let $X \in \mathbb{R}^k$, with $X \sim F$, where $F$ is an unknown distribution on $\mathbb{R}^k$ such that $E[\|X\|^2] < \infty$. Let $X_1, \ldots, X_n$ be a sample of size $n$ drawn from the distribution function $F$. According to the plug-in principle, see Efron and Tibshirani (1993), we use the bootstrap method to approximate the distribution of

$$H_n = \sqrt{n}\,\{ g(\bar X_n) - g(\mu) \}, \qquad (1)$$

where $g$ is a real-valued functional, $\mu = E(X)$ is the mean parameter and $\bar X_n$ is the sample mean. In order to emphasize the dependence of the distribution of $H_n$ on the estimator $F_n$ of the distribution function $F$, we use here the functional notation

$$H_n = \sqrt{n}\,\{ T(F_n) - T(F) \}, \qquad (2)$$

where $F_n$ is an estimator of $F$ based on $X_1, \ldots, X_n$ and $T(F) = g(E(X))$.

Given $X_1, \ldots, X_n$, let $X_1^*, \ldots, X_m^*$ be an independent and identically distributed sample drawn from $F_n$. Consider the bootstrap plug-in version of (2), i.e.

$$H_m^* = \sqrt{m}\,\{ T(F_m^*) - T(F_n) \}, \qquad (3)$$

where $F_m^*$ is an estimator of $F_n$ based on $X_1^*, \ldots, X_m^*$.

The Mallows metric of index $p \ge 1$ on the space of probability distributions on $\mathbb{R}^k$ with finite $p$-th moment is defined by

$$d_p^k(F, G) = \inf \big( E[\|A - B\|^p] \big)^{1/p}, \qquad (4)$$

where the infimum is computed over all pairs of random variables $(A, B)$ such that $A \sim F$ and $B \sim G$. The reader can find the main properties of the Mallows metric in Bickel and Freedman (1981).

The following lemma shows the connection between the convergence in Mallows metric and the consistency of the bootstrap.

Lemma 1. Let $F_n$ be an estimator of $F$ such that $d_2^k(F_n, F) \to 0$ a.s. as $n \to \infty$. If, in addition, the function $g$ is differentiable and $\nabla g(\mu) \ne 0$, then

$$\sup_x \big| P[\sqrt{m}\{ g(\bar X_m^*) - g(\bar X_n) \} \le x \mid F_n] - P[\sqrt{n}\{ g(\bar X_n) - g(\mu) \} \le x] \big| \longrightarrow 0 \quad \text{a.s.} \qquad (5)$$

as $n, m \to \infty$.

Proof. By virtue of Theorem 2.2 in Bickel and Freedman (1981) and the subsequent remark, $\sqrt{n}(\bar X_n - \mu)$ and $\sqrt{m}(\bar X_m^* - \bar X_n)$ converge to the same limit in distribution. Now, we use the linearization technique as in Shao and Tu (1995, Section 3.1.5). We have

$$g(\bar X_n) = g(\mu) + \nabla g(\mu)(\bar X_n - \mu) + o_p(n^{-1/2}) \qquad (6)$$

and

$$g(\bar X_m^*) = g(\bar X_n) + \nabla g(\bar X_n)(\bar X_m^* - \bar X_n) + r_m, \qquad (7)$$

where $\sqrt{m}\, r_m$ converges to 0 in probability, conditional on $F_n$, as $m \to \infty$. From the expansions above, the distributions of $\sqrt{m}\{ g(\bar X_m^*) - g(\bar X_n) \}$ conditional on $F_n$ and of $\sqrt{n}\{ g(\bar X_n) - g(\mu) \}$ converge to the same limit as $n, m \to \infty$. Finally, Eq. (5) follows from the Glivenko–Cantelli theorem. □
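To make the mechanics of Lemma 1 concrete, the following is a minimal sketch, in our own notation and not taken from the paper, of the hybrid bootstrap for a differentiable function $g$ of the multivariate sample mean; the function name, the choice $m = n$ and the example $g$ are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_bootstrap_ci(x, g, B=1000, alpha=0.05):
    """Hybrid-bootstrap CI for g(mu), resampling from the empirical df (CLB)."""
    n = x.shape[0]
    g_hat = g(x.mean(axis=0))
    # Monte Carlo approximation of the law of sqrt(m){g(X*_bar) - g(X_bar)}, m = n
    idx = rng.integers(0, n, size=(B, n))
    h_star = np.sqrt(n) * np.array([g(x[i].mean(axis=0)) - g_hat for i in idx])
    lo, hi = np.quantile(h_star, [alpha / 2, 1 - alpha / 2])
    # Invert: by Lemma 1, H_n and H*_m share the same limit, so quantiles transfer
    return g_hat - hi / np.sqrt(n), g_hat - lo / np.sqrt(n)

# Example: g = ratio of the two marginal means of a bivariate sample
x = rng.normal(loc=[1.0, 2.0], size=(50, 2))
print(hybrid_bootstrap_ci(x, lambda m: m[0] / m[1]))
```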

In general, the distribution of (3) can be approximated by Monte Carlo methods. Moreover, the convergence in Eq. (5) shows that we can compute approximate confidence intervals for the parameters of interest using the hybrid bootstrap. The consistency of the percentile bootstrap also follows; see Shao and Tu (1995, Chapter 4). The bootstrap approximations are often faster than those based on asymptotic normality; see again Shao and Tu (1995).

We emphasize that the convergence in Mallows metric of $F_n$ to $F$ is a sufficient condition for bootstrap consistency; see Bickel and Freedman (1981) and De Martini (2000). In fact, the first work in this direction can be found in Bickel and Freedman (1981), where the authors showed the convergence of the classical empirical distribution function $\hat F_n$, which assigns weight $1/n$ to each observation $X_i$. In their paper it is stated that, for all $p \ge 1$, if $F$ has finite $p$-th moment, we have

$$\lim_{n \to \infty} d_p^k(\hat F_n, F) = 0 \quad \text{a.s.}, \qquad (8)$$

and they obtained the consistency of the CLB through the convergence in Eq. (8).
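For intuition on Eq. (8), note that in one dimension the optimal coupling pairs quantiles, so the empirical Mallows distance can be computed from order statistics. The following sketch, our own illustration under an assumed standard normal $F$, checks the convergence numerically.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Empirical d_2(F_hat_n, F) for F = N(0, 1): match order statistics
# with the true quantiles on a probability grid.
for n in (10, 100, 1000, 10_000):
    x = np.sort(rng.normal(size=n))
    u = (np.arange(n) + 0.5) / n           # probability grid
    d2 = np.sqrt(np.mean((x - norm.ppf(u)) ** 2))
    print(n, round(float(d2), 3))          # shrinks towards 0 as n grows
```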

3. Mallows metric convergence of k-dimensional kernel estimates

Kernel estimates form a large class of smooth estimates and are the most widely used; see for example Silverman (1986) and Scott (1992). A smooth estimate of the k-dimensional distribution function may be used in the plug-in in order to keep the bootstrap closer to the model hypotheses.

Let $X_1, \ldots, X_n$ be an independent and identically distributed sample with unknown distribution $F$ on $\mathbb{R}^k$. We define the kernel estimator of $F$ as follows:

$$\tilde F_n(x) = n^{-1} \sum_{j=1}^n W_n(x - X_j),$$

where $(W_n)_{n \in \mathbb{N}}$ is a sequence of distribution functions on $\mathbb{R}^k$ such that $W_n \to_w \delta_0$, the Dirac distribution at 0. In applications, the kernel estimator is often expressed through a single distribution function $W$ and a sequence $(H_n)_{n \in \mathbb{N}}$ of diagonal matrices with infinitesimal positive real elements on the main diagonal. In such a setting, the latter definition is equivalent to

$$\tilde F_n(x) = n^{-1} \sum_{j=1}^n W(H_n^{-1}(x - X_j)). \qquad (9)$$

In particular, when $H_n = h_n I_k$, with $(h_n)$ an infinitesimal sequence of positive numbers and $I_k$ the $k \times k$ identity matrix, the previous definition simplifies to

$$\tilde F_n(x) = n^{-1} \sum_{j=1}^n W\!\left( \frac{x - X_j}{h_n} \right).$$

We assume that $W$ has a density $w$ with respect to the Lebesgue measure on $\mathbb{R}^k$, where $w$ is such that $\sup_x |w(x)| < M$ for some $M \in \mathbb{R}$ and $w(x) = w(-x)$ for all $x \in \mathbb{R}^k$. Among the most commonly used kernels are the independent Gaussian kernel $W = N(0, I_k)$ and the Gaussian kernel based on the correlation matrix, $W = N(0, \Sigma)$, where $\Sigma$ is the correlation matrix of the distribution $F$; see Wand and Jones (1995) for further details. We use the latter in the simulation study, referring to it as the "classical kernel".
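Sampling from $\tilde F_n$ is straightforward: pick an observation uniformly at random and perturb it by $h_n Z$ with $Z \sim W$. Here is a minimal sketch, assuming the classical kernel and a scalar bandwidth; all names are our own.

```python
import numpy as np

def smoothed_resample(x, h, rng):
    """One bootstrap sample of size n drawn from the kernel estimate (9)."""
    n, k = x.shape
    corr = np.corrcoef(x, rowvar=False)    # plug-in estimate of the correlation matrix
    j = rng.integers(0, n, size=n)         # resample observation indices
    z = rng.multivariate_normal(np.zeros(k), corr, size=n)
    return x[j] + h * z                    # X_j + h_n * Z, with Z ~ W

rng = np.random.default_rng(2)
x = rng.normal(size=(30, 2))
x_star = smoothed_resample(x, h=0.3, rng=rng)
```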

Now, let us state the first theorem, where we prove the convergence in Mallows metric of $\tilde F_n$ to $F$. In the proof, we make use of the techniques described in Mallows (1972), Bickel and Freedman (1981) and Freedman (1981).

Theorem 2. Let $p \ge 1$. If $E[\|X_1\|^p] < +\infty$ and there exists a random variable $Z_\bullet \sim W$ such that $E[\|Z_\bullet\|^p] < +\infty$, then

$$\lim_{n \to \infty} d_p^k(\tilde F_n, F) = 0 \quad \text{a.s.} \qquad (10)$$

Proof. From the triangle inequality we have $d_p^k(\tilde F_n, F) \le d_p^k(\tilde F_n, \hat F_n) + d_p^k(\hat F_n, F)$, where the second quantity in the sum tends to zero a.s. by Eq. (8). Denote by $Z_\bullet$ and $Z_n$ the random variables with distribution functions $W$ and $W_n(x) = W(x / h_n)$, respectively, with components $Z_{\bullet,i}$ and $Z_{n,i}$. Moreover, let $A_n \sim \hat F_n$ and $B_n = A_n + Z_n$; then $B_n \sim \tilde F_n$. Now, using the definition of the Mallows metric in Eq. (4), we obtain

$$d_p^k(\hat F_n, \tilde F_n)^p \le E[\|A_n - B_n\|^p] = E[\|Z_n\|^p] = E\left[ \Big( \sum_{i=1}^k Z_{n,i}^2 \Big)^{p/2} \right] = E\left[ \Big( \sum_{i=1}^k h_n^2 Z_{\bullet,i}^2 \Big)^{p/2} \right] = h_n^p\, E[\|Z_\bullet\|^p] \to 0,$$

because the sequence $(h_n)_{n \in \mathbb{N}}$ is infinitesimal and $E[\|Z_\bullet\|^p]$ is finite. □

Through this convergence theorem, Lemma 1 applies, and the consistency of the classical kernel smoothed version of the bootstrap follows. The convergence in Eq. (5) remains valid when the smoothed estimator of $F$ defined in Eq. (9) is plugged in.

4. Mallows metric convergence of a locally orientated kernel estimator

When one uses classical kernel methods, the kernel is independent of the local shape of the observed sample. This feature may be unwelcome, especially when the sample shows a clear pattern, as can happen in many practical situations with a small amount of data. A generic k-dimensional kernel may therefore fail to reproduce the original local pattern, because its shape is not locally adaptive but is based only on global parameters of the distribution function, such as the covariance matrix; see, e.g., Wand and Jones (1995).

In order to make the kernel $W$ depend on the sample shape, we consider a locally orientated kernel estimate. This approach can bring some improvement, as shown in the simulation study in Section 5. The locally orientated kernel estimator used here belongs to the multivariate version of the class of sample smoothing estimators as defined in Terrell and Scott (1992), Eq. (1.7). In particular, we adopt a multivariate normal distribution for the kernel, and the individual scaling of the kernels is obtained through a nearest-neighbour approach.

Let $\lambda$ be a fixed integer such that $k \le \lambda \le n$. For each observation $X_j$, $j = 1, \ldots, n$, denote by $P_{n,j}^{(\lambda)}$ the set of the $\lambda$ observations closest to the $j$-th with respect to the Euclidean distance in $\mathbb{R}^k$. For each $j$ we can consider the empirical correlation matrix $S_{n,j}^{(\lambda)}$ computed on $P_{n,j}^{(\lambda)}$. Since $\lambda \ge k$, the matrix $S_{n,j}^{(\lambda)}$ is well defined, symmetric and positive semi-definite for all $j = 1, \ldots, n$. Hence, we define the locally orientated Gaussian kernel

$$W_{n,j}^{(\lambda)} \sim N(0, h_n^2 S_{n,j}^{(\lambda)}),$$

where $(h_n)_{n \in \mathbb{N}}$ is a decreasing and infinitesimal sequence of positive real numbers. The locally orientated kernel satisfies the condition

$$\lim_{n \to \infty} W_{n,j}^{(\lambda)} = \delta_0 \quad \forall j = 1, \ldots, n,\ \forall \lambda \text{ such that } k \le \lambda \le n$$

in distribution. Using this kernel we can define a locally orientated smoothed estimator of $F$:

$$\breve F_n(x) = n^{-1} \sum_{j=1}^n W_{n,j}^{(\lambda)}(x - X_j).$$
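Resampling from $\breve F_n$ proceeds analogously to the classical kernel case, with a per-observation covariance. A minimal sketch under our reading of the definition (we exclude $X_j$ itself from its neighbourhood; the names and the brute-force distance computation are our own):

```python
import numpy as np

def losmb_resample(x, h, lam, rng):
    """One bootstrap sample from the locally orientated estimate F_breve_n."""
    n, k = x.shape
    # Pairwise Euclidean distances; lam nearest neighbours of each X_j
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)
    s = []
    for j in range(n):
        nbrs = np.argsort(d[j])[1:lam + 1]            # exclude X_j itself
        s.append(np.corrcoef(x[nbrs], rowvar=False))  # local correlation matrix
    j = rng.integers(0, n, size=n)
    return np.array([x[i] + rng.multivariate_normal(np.zeros(k), h**2 * s[i])
                     for i in j])

rng = np.random.default_rng(3)
x = rng.normal(size=(30, 2))
lam = max(x.shape[1] + 1, int(np.ceil(np.sqrt(x.shape[0]))))  # rule of Section 5.2
x_star = losmb_resample(x, h=0.3, lam=lam, rng=rng)
```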


Now, we can show the consistency of $\breve F_n$ in Mallows metric.

Theorem 3. Let $p \ge 1$ and let $X_1, \ldots, X_n$ be $n$ i.i.d. random variables with distribution $F$ on $\mathbb{R}^k$. If $F$ has finite $p$-th moment, then

$$\lim_{n \to \infty} d_p^k(\breve F_n, F) = 0 \quad \text{a.s.} \qquad (11)$$

Proof. Using the triangle inequality we have

$$d_p^k(F, \breve F_n) \le d_p^k(F, \hat F_n) + d_p^k(\hat F_n, \breve F_n).$$

The first quantity in the sum vanishes a.s. by Eq. (8). As for the second quantity, denote by $Z_{n,j}^{(\lambda)}$, $Z_n$ and $Z_\bullet$ the random variables with distributions $W_{n,j}^{(\lambda)}$, $N(0, h_n^2 I_k)$ and $N(0, I_k)$, respectively. Moreover, let $A_n \sim \hat F_n$ and

$$B_n = A_n + \sum_{j=1}^n Z_{n,j}^{(\lambda)}\, I(A_n = X_j),$$

where $I$ denotes the indicator function. Then $B_n \sim \breve F_n$. From the definition of the Mallows metric, and since exactly one of the indicators is nonzero and $P(A_n = X_j) = 1/n$ for every $j$, we have

$$d_p^k(\hat F_n, \breve F_n)^p \le E[\|A_n - B_n\|^p] = E\left[ \Big\| \sum_{j=1}^n Z_{n,j}^{(\lambda)} I(A_n = X_j) \Big\|^p \right] = \frac{1}{n} \sum_{j=1}^n E[\|Z_{n,j}^{(\lambda)}\|^p] = \frac{1}{n} \sum_{j=1}^n E\left[ \Big( \sum_{i=1}^k (Z_{n,j,i}^{(\lambda)})^2 \Big)^{p/2} \right] = \frac{1}{n} \sum_{j=1}^n E\left[ \Big( \sum_{i=1}^k h_n^2 Z_{\bullet,i}^2 \Big)^{p/2} \right] = h_n^p\, E[\|Z_\bullet\|^p],$$

where the quantity on the right-hand side vanishes because the sequence $(h_n)_{n \in \mathbb{N}}$ is infinitesimal and $E[\|Z_\bullet\|^p]$ is finite. Hence, the thesis follows. □

A discussion of the smoothing parameter choice is presented in the next section.

5. Simulation study

5.1. Model and study design

As mentioned in Section 1, we argue that the SMB can be used when the distribution functions under study are continuous, and that the locally orientated smoothed bootstrap (LOSMB) is particularly appropriate when the distribution $F$ shows some local patterns. In order to compare the CLB with the SMB and the LOSMB, we simulated the performance of the three methods in the computation of 95% confidence intervals for non-linear functions of the moments of multivariate distributions.

In a first model (M1) we considered 1000 samples of size $n = 10$, 20 and 50 from the bivariate distribution

$$X \sim U[-2, 2], \qquad Y \sim X^2 + U[0, 1],$$

where $U$ denotes the uniform distribution. We want to estimate the parameter $\vartheta = \mu_x / \mu_y$, i.e., the ratio of the marginal means. The true value of $\vartheta$ is 0.
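For concreteness, here is a hedged sketch of how samples from M1 can be generated and $\vartheta$ estimated; it illustrates the study design and is not the authors' simulation code.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_m1(n, rng):
    """One sample of size n from model M1."""
    x = rng.uniform(-2.0, 2.0, size=n)
    y = x**2 + rng.uniform(0.0, 1.0, size=n)
    return np.column_stack([x, y])

data = sample_m1(20, rng)
theta_hat = data[:, 0].mean() / data[:, 1].mean()   # plug-in estimate of theta = 0
```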


Table 1
Simulation results for models M1 and M2

                         M1                             M2
              CLB      SMB      LOSMB        CLB      SMB      LOSMB
n = 10
  CP (%)      86.6     92.4     91.6         80.1     86.7     87.3
  Mean        0.61     0.79     0.77         24.0     29.6     29.2
  Me          0.62     0.81     0.78         19.3     23.5     23.4
  SD          0.11     0.15     0.13         20.1     24.5     24.5
n = 20
  CP (%)      89.6     92.0     90.4         86.1     91.3     90.8
  Mean        0.45     0.54     0.53         17.1     20.4     20.3
  Me          0.45     0.54     0.54         15.1     17.9     17.9
  SD          0.04     0.06     0.05         10.3     12.1     12.1
n = 50
  CP (%)      89.5     91.3     90.6         86.4     91.4     90.8
  Mean        0.29     0.33     0.32         10.9     12.6     12.5
  Me          0.29     0.33     0.33          4.6 *    5.2 *    5.1 *  (see SD row)
  SD          0.02     0.02     0.02          4.6      5.2      5.1

CP denotes the coverage probability; Mean, Me and SD are the mean, median and standard deviation of the confidence interval width, respectively.

In a second model (M2) we considered 1000 samples of size $n = 10$, 20 and 50 from the three-dimensional distribution

$$X \sim N(0, 1), \qquad Y \sim X + N(0, 1), \qquad Z \sim X^2 + Y^2 + N(0, 1).$$

We want to estimate the parameter $\vartheta = \mu_x^2 + \mu_y^2 + \mu_z^2$, i.e., the sum of the squares of the means. The true value of $\vartheta$ is 9.

The main task for the different confidence intervals is to provide the correct coverage probability; the secondary one is to provide an interval width as small as possible. Consequently, in order to compare the performance of the confidence intervals, the difference between the observed coverage probability and the nominal level is evaluated as the main indicator. Then, the mean, the median and the standard deviation of the interval width are considered. The simulation results for both models are reported in Table 1.

5.2. Smoothing parameter choice

According to Scott's rule of thumb, we define

$$h_n = n^{-1/(k+4)}\, \frac{1}{k} \sum_{i=1}^k \hat\sigma_i,$$

where $\hat\sigma_i$ is the estimate of the standard deviation of the $i$-th marginal distribution of $F$. Note that the sequence $(h_n)$ satisfies the asymptotic conditions for the convergence in Mallows metric of both smoothed estimators. Another choice is to set a different bandwidth for each component of the distribution; for further details, see Hardle et al. (2004).

We point out that the bandwidth choice is a delicate step in SMB applications; see Polansky (2001), Grund and Polzehl (1997) and Alonso and Cuevas (2003). It is known that a poor choice of the bandwidth yields poor results. One solution may be the application of a calibrated parameter choice, following, for example, Hall and Martin (1988). Another possibility is a generalization of the formulas in Polansky (2001) to the multidimensional case.
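In code, the rule reads as follows; this is a direct transcription of the formula above, with a function name of our choosing.

```python
import numpy as np

def scott_bandwidth(x):
    """h_n = n^(-1/(k+4)) * (average of the estimated marginal standard deviations)."""
    n, k = x.shape
    return n ** (-1.0 / (k + 4)) * x.std(axis=0, ddof=1).mean()

x = np.random.default_rng(5).normal(size=(30, 2))
h = scott_bandwidth(x)   # usable as the bandwidth in both SMB and LOSMB
```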


With regard to the parameter $\lambda$: in order to have an acceptable definition of the locally orientated kernel, $\lambda$ should be at least $k$. However, $\lambda$ should not be equal to the minimum $k$, because this choice could lead to a covariance matrix of the locally orientated kernel with eigenvalues close to zero, causing problems during the Monte Carlo resampling procedure. Notice that with low values of $\lambda$ the locally orientated kernel is mainly influenced by the local shape of the sample. On the other hand, when $n$ is fairly large we can use the information on the shape not only from the $k + 1$ closest points, but also from a greater number of points, increasing with $n$; a traditional solution is to take $n^{1/2}$ as a threshold. For these reasons, a suitable choice is $\lambda = \max\{k + 1, n^{1/2}\}$.

Finally, we used $B = 1000$ bootstrap replicates for each sample, in line with the bootstrap literature; see, for example, Efron and Tibshirani (1993) or Shao and Tu (1995, Chapter 4).

5.3. Results and conclusions

In model M1, on the one hand, the coverage probability of CLB does not reach the nominal level, although the difference is quite small for n = 20 and n = 50. On the other hand, SMB and LOSMB present a coverage probability a bit higher than the nominal level, and the difference becomes smaller as n increases. The mean width of the CLB confidence intervals is lower than that of the smoothed techniques, while the standard deviations are almost the same.

In model M2, the coverage probability of CLB does not reach the nominal level, and the differences remain considerable even with n = 50. SMB and LOSMB underestimate the nominal level for n = 10 and overestimate it for the higher sample sizes; nevertheless, the difference between the nominal level and the coverage probability becomes smaller. The mean and standard deviation of the width of the CLB confidence intervals are lower than those of the smoothed techniques, and the differences reduce as n increases.

It is worth noting that, since a conservative approach is usually preferred, a coverage probability higher than the nominal threshold is considered better than a lower one. Hence, LOSMB shows the best performance in both models.

To summarize, in this paper we have analyzed the multivariate SMB and its improvement over the classical bootstrap through an adequate choice of the smoothing parameters. In particular, we have used a classical kernel estimator and a locally orientated one, and we have proved the asymptotic consistency of both multivariate SMB estimators. A simulation on two specific models shows that both versions of SMB provide coverage probabilities closer to the nominal level than those of the classical approach. Moreover, the locally orientated bootstrap performs better than the simple SMB in terms of coverage probability, and the locally adaptive approach leads to slightly narrower confidence intervals. Finally, the SMB techniques show the best performance and reflect the continuous nature of the statistical models under study.

Acknowledgments

We wish to thank the two anonymous referees of this paper for their careful reading and helpful suggestions.

References

Alonso, A.M., Cuevas, A., 2003. On smoothed bootstrap for density functionals. J. Nonparametr. Statist. 15 (4-5), 467-477.
Bickel, P.J., Freedman, D.A., 1981. Some asymptotic theory for the bootstrap. Ann. Statist. 9 (6), 1196-1217.
Davison, A.C., Hinkley, D.V., 1997. Bootstrap Methods and Their Application. Cambridge University Press, Cambridge.
De Angelis, D., Young, A.G., 1992. Smoothing the bootstrap. Internat. Statist. Rev. 60 (1), 45-56.
De Martini, D., 2000. Smoothed bootstrap consistency through the convergence in Mallows metric of smooth estimates. J. Nonparametr. Statist. 12 (6), 819-835.
Efron, B., 1979. Bootstrap methods: another look at the jackknife. Ann. Statist. 7 (1), 1-26.
Efron, B., Tibshirani, R.J., 1993. An Introduction to the Bootstrap. Chapman & Hall, New York.
El-Nouty, C., Guillou, A., 2000. On the smoothed bootstrap. J. Statist. Plann. Inference 83 (1), 203-220.
Freedman, D.A., 1981. Bootstrapping regression models. Ann. Statist. 9 (6), 1218-1228.
Grund, B., Polzehl, J., 1997. Bias corrected bootstrap bandwidth selection. J. Nonparametr. Statist. 8 (2), 97-126.
Hall, P.W., Martin, M.A., 1988. On bootstrap resampling and iteration. Biometrika 75 (4), 661-671.
Hall, P.W., Di Ciccio, T.J., Romano, J.P., 1989. On smoothing and the bootstrap. Ann. Statist. 17 (2), 692-704.
Hardle, W., Muller, M., Sperlich, S., Werwatz, A., 2004. Nonparametric and Semiparametric Models. Springer, New York.
Mallows, C.L., 1972. A note on asymptotic joint normality. Ann. Math. Statist. 43 (2), 508-515.
Polansky, A.M., 2001. Bandwidth selection for the smoothed bootstrap percentile method. Comput. Statist. Data Anal. 36 (3), 333-349.
Polansky, A.M., Schucany, W.R., 1997. Kernel smoothing to improve bootstrap confidence intervals. J. Roy. Statist. Soc. Ser. B 59 (4), 821-838.
Scott, D.W., 1992. Multivariate Density Estimation. Wiley, New York.
Shao, J., Tu, D., 1995. The Jackknife and Bootstrap. Springer, New York.
Silverman, B.W., 1986. Density Estimation for Statistics and Data Analysis. Chapman & Hall, London.
Silverman, B.W., Young, G.A., 1987. The bootstrap: to smooth or not to smooth? Biometrika 74, 469-479.
Terrell, G.R., Scott, D.W., 1992. Variable kernel density estimation. Ann. Statist. 20 (3), 1236-1265.
Wand, M.P., Jones, M.C., 1995. Kernel Smoothing. Chapman & Hall, London.
Wang, S., 1989. On the bootstrap and smoothed bootstrap. Comm. Statist. Theory Methods 18, 3949-3962.
Wang, S., 1995. Optimizing the smoothed bootstrap. Ann. Inst. Statist. Math. 47, 65-80.
