Fiducial inference under nonparametric situations

Journal of Statistical Planning and Inference 142 (2012) 2779–2798


Shuran Zhao a,*, Xingzhong Xu b, Xiaobo Ding c

a Ocean University of China, Qingdao, China
b Beijing Institute of Technology, Beijing, China
c Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China

Article history: Received 14 January 2011; received in revised form 27 March 2012; accepted 28 March 2012; available online 4 April 2012.

Abstract

The object of this paper is to provide recipes for various fiducial inferences on a parameter under nonparametric situations. First, the fiducial empirical distribution of a random variable is introduced and its almost sure behavior is established. Based on it, a fiducial model and hence a fiducial distribution of a parameter are obtained. Fiducial intervals for parameters that are functionals of the population are then constructed, and some of their frequentist properties are investigated under mild conditions. Besides, p-values for several test hypotheses and their asymptotic properties are given. Three applications of these results, together with further refinements, are provided. For the mean, simulations on its interval estimator and on hypothesis testing were conducted; their results suggest that the fiducial method performs better than the others considered here.

Keywords: Fiducial empirical distribution; Fiducial model; Fiducial distributions; Interval estimation; p-values

1. Introduction

Fiducial inference was introduced by Fisher (1930). Its aim is to make a probability statement about a parameter using only the data. Discussions and controversies around it can be found in Lindley (1958), Fraser (1962), Barnard (1963a,b), Pedersen (1978), and Zabell (1992). When using the fiducial approach to draw inferences on a parameter $\theta = \theta(F)$, the key step is to obtain a so-called fiducial model of the form

$$\Theta = \hat\theta_x(E), \tag{1}$$

where $x = (x_1, \ldots, x_n)$ is an iid sample of $X \sim F$ and $E$ is a random variable, independent of $X$, whose distribution is known (Barnard, 1977, 1995; Dawid and Stone, 1982; Xu and Li, 2006). With the fiducial model, various statistical inferences can easily be carried out (see Xu and Li, 2006, Section 7). Taking hypothesis tests as an example, p-values can be conveniently computed as the fiducial probabilities of the null hypothesis, i.e. the probability that $\hat\theta_x(E)$ falls into the null hypothesis given $x$. The fiducial method is thus succinct. Moreover, it is a competitive candidate in problems where prior knowledge of the parameter is lacking or standard frequentist methods are unavailable. In the study of generalized p-values (Tsui and Weerahandi, 1989) and generalized confidence intervals (Weerahandi, 1993), the fiducial method has proved quite efficient, because it yields generalized test variables and generalized pivotal quantities (Hannig et al., 2006; Li et al., 2007), the key quantities of generalized inference.

☆ This study is supported by the National Natural Science Foundation of China (Grant no. 11071015), the Humanities and Social Science Research Foundation for Young Scholars of Ministry of Education, China (Grant no. 10YJC790396), the Natural Science Foundation of Shandong Province, China (Grant no. ZR2010GQ008) and the Specialized Research Fund for Young Teachers in Ocean University of China (Grant no. 82421119).
* Corresponding author. Present address: Department of Finance, Ocean University of China, Qingdao, China. Tel.: +86 1509 2298 912. E-mail address: [email protected] (S. Zhao).

0378-3758/$ - see front matter. Crown Copyright © 2012 Published by Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.jspi.2012.03.023


In this paper we try to provide a general method for constructing fiducial models under nonparametric situations, analogous to Eq. (1) in parametric cases. To this end, we first recall three nonparametric methods: the bootstrap, the Bayesian bootstrap and random weighting. Define a randomized distribution function
$$F_{n,r}(t) = \sum_{i=1}^{n} E_i I_{x_i \le t},$$
where $E = (E_1, \ldots, E_n)$ is independent of $X$ and has a known distribution. Then we obtain
$$\Theta = \theta(F_{n,r}) = \hat\theta_x(E),$$
which conforms with (1). When the weight $nE$ is distributed as the multinomial distribution $M_n(n; 1/n, \ldots, 1/n)$, or $E \sim \mathrm{Dirichlet}(n; 1, \ldots, 1)$, this yields Efron's (1979) bootstrap or Rubin's (1981) Bayesian bootstrap, respectively; for more general weights it yields the random weighting method (Zheng, 1987; Shi and Zheng, 1985). Moreover, let $X^*$ be drawn from $F_n$ and $T(x) = \theta(F_n)$, with $F_n$ the empirical distribution function of $X$; then for the bootstrap, $\hat\theta_x(E)$ and $T(X^*)$ have the same conditional distribution given $x$. Motivated by the three methods above, and likewise starting from the empirical distribution $F_n$, we seek a randomized estimate of $F$ analogous to $F_{n,r}$ from a fiducial point of view. The resulting continuous distribution is called the fiducial empirical (FE) distribution. Compared with the $F_{n,r}$'s of the bootstrap, Bayesian bootstrap and random weighting, the FE distribution is smoother because of its continuity, which makes it more suitable for a continuous population. Two applications of the FE distribution exist so far. First, Xu et al. (2009) applied it to derive new goodness-of-fit tests by substituting the FE distribution for the classical empirical distribution in the Kolmogorov-Smirnov and Cramér-von Mises statistics, among others, and then taking the qth quantile and the expectation of the resulting randomized statistics. Their simulations suggested that in most cases some of the new tests had better power than the corresponding tests based on the classical empirical distribution and Pyke's modified EDF. Second, for an M/G/1 queueing system, Zhang and Xu (2010) used the FE distribution to estimate the service time distribution and to construct confidence intervals for performance measures; their numerical examples showed that the resulting intervals dominated the bootstrap ones in relative coverage in most cases.
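The three weighting schemes above differ only in the law of the weight vector $E$. A minimal sketch of the three weight generators (NumPy assumed; variable names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Efron's bootstrap: n*E ~ Multinomial(n; 1/n, ..., 1/n)
e_boot = rng.multinomial(n, np.full(n, 1.0 / n)) / n

# Rubin's Bayesian bootstrap: E ~ Dirichlet(n; 1, ..., 1)
e_bayes = rng.dirichlet(np.ones(n))

# Random weighting: exchangeable nonnegative weights summing to 1,
# e.g. iid exponentials normalized by their sum
g = rng.exponential(size=n)
e_rw = g / g.sum()
```

In each case $E$ is a random probability vector over the observed points, which is exactly what makes $F_{n,r}$ a randomized distribution function.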
In this paper, based on the FE distribution, a fiducial model and hence a fiducial distribution of a parameter are derived. Constructions of confidence intervals and of tests for three kinds of hypotheses are then considered. Especially important, and perhaps of greatest interest to practitioners, the fiducial intervals have asymptotically accurate frequentist coverage and the tests are consistent under some mild conditions. Convergence rates are also established in several examples. The paper is organized as follows. Section 2 constructs the fiducial empirical distribution of a random variable, based on which the fiducial model and fiducial distribution of a parameter are constructed. Section 3 is devoted to fiducial inference for general parameters under nonparametric situations and studies its asymptotic properties. Section 4 gives a variety of applications of the above results and some further improvements. Simulation studies for the mean are provided in Section 5 and suggest that the proposed fiducial inference performs well. Section 6 gives some comments and some open problems. Proofs of the main results are given in the Appendix.

2. Fiducial empirical distribution, fiducial model and fiducial distribution

2.1. Fiducial empirical distribution

In what follows, we use $X$, $U$, $V$ and $x$, $u$, $v$ for real-valued random variables and the values they take. Suppose that $\{X_i, 1 \le i \le n\}$ and $\{U_i, 1 \le i \le n\}$ are two iid sequences distributed as a univariate continuous distribution $F$ and as $U(0,1)$, respectively, with corresponding order statistics $X_{(1)} \le \cdots \le X_{(n)}$ and $U_{(1)} \le \cdots \le U_{(n)}$. Given $x_1, \ldots, x_n$, the probability of $X$ falling into the interval $(x_{(i-1)}, x_{(i)}]$ is $F(x_{(i)}) - F(x_{(i-1)})$ for $i = 2, \ldots, n$, and the probabilities for $(-\infty, x_{(1)}]$ and $(x_{(n)}, \infty)$ are $F(x_{(1)})$ and $1 - F(x_{(n)})$, respectively. On the other hand, due to the continuity of $F$, we know that
$$F(X_{(i)}) \stackrel{d}{=} U_{(i)}.$$
Therefore we have
$$P\{X \in (x_{(i-1)}, x_{(i)}]\} = v_i, \qquad P\{X \in (-\infty, x_{(1)}]\} = v_1, \qquad P\{X \in (x_{(n)}, \infty)\} = v_{n+1}, \tag{2}$$
where $v_i = u_{(i)} - u_{(i-1)}$ for $i = 1, \ldots, n+1$, with $u_{(0)} = 0$ and $u_{(n+1)} = 1$. Note that $(V_1, \ldots, V_{n+1})$ is jointly distributed as $\mathrm{Dirichlet}(n+1; 1, \ldots, 1)$. From (2) we only know, for example, the total probability $v_i$ over the interval $(x_{(i-1)}, x_{(i)}]$, but how $v_i$ is distributed within it is unknown. To determine this, we invoke the principle of maximum entropy: mass $v_i$ should be uniformly assigned over the interval $(x_{(i-1)}, x_{(i)}]$ for $i = 2, \ldots, n$, while the masses $v_1$ and $v_{n+1}$ are assigned


respectively in the form of exponential distributions with scale parameters $k_1, k_2 > 0$ over $(-\infty, x_{(1)}]$ and $(x_{(n)}, \infty)$. We then obtain a randomized distribution function of $X$:
$$H_n(t; x, u) = \begin{cases} u_{(1)} e^{(t - x_{(1)})/k_1}, & t \le x_{(1)}, \\[4pt] \dfrac{u_{(i)} - u_{(i-1)}}{x_{(i)} - x_{(i-1)}}\,(t - x_{(i-1)}) + u_{(i-1)}, & x_{(i-1)} < t \le x_{(i)},\ i = 2, \ldots, n, \\[4pt] (1 - u_{(n)})(1 - e^{-(t - x_{(n)})/k_2}) + u_{(n)}, & t > x_{(n)}, \end{cases} \tag{3}$$
where $x = (x_1, \ldots, x_n)$ and $u = (u_1, \ldots, u_n)$. We call this distribution the fiducial empirical distribution of $X$. In (3), the parameters $k_1$ and $k_2$ need to be fixed; later, when dealing with parameters such as the mean, we shall provide a principle for choosing them. Their choice is unnecessary when $F$ has bounded support: if $F$ is supported on $[a,b]$ with $a, b < \infty$, we can make small modifications to both tails of $H_n$, replacing the interval $(-\infty, x_{(1)}]$ by $[a, x_{(1)}]$ and $(x_{(n)}, +\infty)$ by $(x_{(n)}, b]$, and assigning the mass $u_{(1)}$ uniformly over $[a, x_{(1)}]$ and likewise $1 - u_{(n)}$ over $(x_{(n)}, b]$, which avoids $k_1$ and $k_2$ altogether. In its original form, $H_n$ is somewhat similar to the estimate $F_{n,r}$ of Section 1; operationally they differ only in how probability mass is assigned to each interval. In $F_{n,r}$, $E_i$ is attached to the single point $x_{(i)}$, which yields the less smooth distribution $F_{n,r}$. Now define
$$d(H_n, F_n) = \sup_t |H_n(t; x, u) - F_n(t)|$$
as the Kolmogorov-Smirnov distance between $H_n$ and $F_n$. Let $P^n$ denote the conditional probability measure of $U = (U_1, \ldots, U_n)$ given the $x_i$'s. We have the following theorem.

Theorem 2.1. Let $F_u$ be the uniform distribution function on $[0,1]$ and $F_n^{(u)}$ the empirical distribution function of $u_1, \ldots, u_n$. Then
$$d(H_n, F_n) = \max_{1 \le i \le n} \max\left\{\left|u_{(i)} - \frac{i}{n}\right|, \left|u_{(i)} - \frac{i-1}{n}\right|\right\} = d(F_n^{(u)}, F_u).$$

The proof of the theorem is simple and hence omitted. The theorem implies that the Kolmogorov-Smirnov distance $d(H_n, F_n)$ is free of the $x_i$'s and depends only on the $u_i$'s. By the Glivenko-Cantelli theorem, $d(H_n, F_n)$ converges to 0 with probability 1.

Corollary 2.1. $P\{\lim_{n \to \infty} d(H_n, F) = 0\} = 1$.

Proof. Note that $d(H_n, F) \le d(H_n, F_n) + d(F_n, F) \to 0$ with probability 1 as $n \to \infty$. □
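The construction (3) translates directly into code. The following sketch (the function name `fe_cdf` and the NumPy implementation are ours, not the paper's) evaluates $H_n(t; x, u)$, interpolating the pairs $(x_{(i)}, u_{(i)})$ linearly and closing both tails exponentially:

```python
import numpy as np

def fe_cdf(t, x, u, k1=1.0, k2=1.0):
    """Evaluate the fiducial empirical distribution H_n(t; x, u) of Eq. (3).

    x: observed sample from F; u: a draw of (U_1, ..., U_n) from U(0,1);
    k1, k2 > 0: scale parameters of the two exponential tails.
    """
    xs = np.sort(np.asarray(x, dtype=float))
    us = np.sort(np.asarray(u, dtype=float))
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.empty_like(t)

    left = t <= xs[0]                       # exponential left tail
    out[left] = us[0] * np.exp((t[left] - xs[0]) / k1)
    right = t > xs[-1]                      # exponential right tail
    out[right] = us[-1] + (1.0 - us[-1]) * (1.0 - np.exp(-(t[right] - xs[-1]) / k2))
    mid = ~(left | right)                   # linear between (x_(i-1), u_(i-1)) and (x_(i), u_(i))
    out[mid] = np.interp(t[mid], xs, us)
    return out
```

By construction $H_n(x_{(i)}) = u_{(i)}$, so the middle branch is plain linear interpolation of the order statistics against the sorted uniforms.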

2.2. Fiducial model and fiducial distribution

Consider a parameter of the form
$$\theta = \theta(F),$$
a functional of $F$. For example, for the mean and the variance the relevant functionals are $\theta(F) = \int x \, dF(x)$ and $\int [x - \int x \, dF(x)]^2 \, dF(x)$, respectively. Since $H_n$ is an estimate of $F$, $\theta(H_n)$ can naturally be seen as an estimate of $\theta(F)$.

Definition 1. Write $\hat\theta_x(u) = \theta(H_n(\cdot; x, u))$. We call
$$\Theta = \hat\theta_x(U), \quad U \sim P^n$$
the fiducial model of $\theta$, and call the distribution $\tilde G_x(y)$ of $\hat\theta_x(U)$ under $P^n$ the fiducial distribution of $\theta$. Write the probability measure corresponding to $\tilde G_x(y)$ as $\tilde P_x$.

Example 1 (Mean). Consider the mean of $X$ and write
$$\bar y_i = \frac{x_{(i-1)} + x_{(i)}}{2}, \qquad x_{(0)} = x_{(1)} - 2k_1, \qquad x_{(n+1)} = x_{(n)} + 2k_2.$$
By Definition 1, the fiducial model of the mean is
$$\Theta = \hat\theta_x(U) = \int t \, dH_n(t; x, U) = \sum_{i=1}^{n+1} V_i \bar y_i, \tag{4}$$


and the fiducial distribution function of the mean is
$$\tilde G_x(y) = \tilde P_x(\Theta \le y) = P^n\left(\sum_{i=1}^{n+1} V_i \bar y_i \le y\right).$$
Note that $\sum_{i=1}^{n+1} V_i \bar y_i$ is a weighted sum of $V_1, \ldots, V_{n+1}$. Since the fiducial distribution $\tilde G_x(\cdot)$ can seldom be calculated analytically, Monte Carlo simulation can be used to approximate $\tilde G_x(\cdot)$, as for the bootstrap: repeatedly generate $V_1, \ldots, V_{n+1}$ and calculate $\hat\theta_x(U)$.

Example 2 (Median and quantiles). For $p \in (0,1)$, the $p$-th quantile $\theta(F,p)$ of $F$ is defined as $\inf\{x : F(x) \ge p\}$. Since $F$ is assumed continuous, it must be the smallest solution of $E_F I_{X \le \theta(F,p)} - p = 0$, where $E_F$ denotes expectation under $F$. Similarly, by the continuity and strict monotonicity of $H_n$, $\theta(H_n, p)$ exists and is the unique solution of the equation
$$H_n(\theta(H_n, p)) = p.$$

We write $\hat\theta_x(u) = \theta(H_n, p)$; then
$$\hat\theta_x(u) = \begin{cases} x_{(1)} + k_1(\ln p - \ln u_{(1)}), & p \le u_{(1)}, \\[4pt] \dfrac{p - u_{(i-1)}}{u_{(i)} - u_{(i-1)}}\,(x_{(i)} - x_{(i-1)}) + x_{(i-1)}, & u_{(i-1)} \le p \le u_{(i)},\ i = 2, \ldots, n, \\[4pt] x_{(n)} - k_2[\ln(1-p) - \ln(1 - u_{(n)})], & p \ge u_{(n)}. \end{cases}$$
By Definition 1, the fiducial model is
$$\Theta = \hat\theta_x(U), \quad U \sim P^n,$$
and the fiducial distribution function of $\theta(F,p)$ is
$$\tilde P_x(\Theta \le y) = P^n(p \le H_n(y; x, U)) = \begin{cases} P^n(U_{(1)} e^{(y - x_{(1)})/k_1} \ge p), & y \le x_{(1)}, \\[4pt] P^n\left(U_{(i-1)} + (U_{(i)} - U_{(i-1)})\dfrac{y - x_{(i-1)}}{x_{(i)} - x_{(i-1)}} \ge p\right), & x_{(i-1)} < y \le x_{(i)},\ i = 2, \ldots, n, \\[4pt] P^n(U_{(n)} + (1 - U_{(n)})(1 - e^{-(y - x_{(n)})/k_2}) \ge p), & y > x_{(n)}, \end{cases}$$
$$= \begin{cases} 0, & y \le x_{(1)} + k_1 \ln p, \\[4pt] [1 - p\, e^{-(y - x_{(1)})/k_1}]^n, & x_{(1)} + k_1 \ln p < y \le x_{(1)}, \\[4pt] P^n\left(\dfrac{y - x_{(i-1)}}{x_{(i)} - x_{(i-1)}}\, U_{(i)} + \dfrac{x_{(i)} - y}{x_{(i)} - x_{(i-1)}}\, U_{(i-1)} \ge p\right), & x_{(i-1)} < y \le x_{(i)},\ i = 2, \ldots, n, \\[4pt] 1 - [1 - (1-p)\, e^{(y - x_{(n)})/k_2}]^n, & x_{(n)} < y \le x_{(n)} - k_2 \ln(1-p), \\[4pt] 1, & y > x_{(n)} - k_2 \ln(1-p). \end{cases} \tag{5}$$
From (5), $\tilde G_x(y)$ is continuous and strictly increasing.

3. Formulation of the fiducial inference on parameters under nonparametric situations

First we introduce the notion of a differential of $\theta(F)$. Given two points $F$ and $G$ in the space of all distribution functions, we say that $\theta(F)$ has the Gâteaux differential with 1-linear structure at $F$ in the direction of $G$ if the limit
$$d_1\theta(F; G - F) = \lim_{\lambda \to 0+} \frac{\theta(F + \lambda(G - F)) - \theta(F)}{\lambda} = \int h(x, F) \, dG(x)$$
exists, where $\int h(x, F) \, dF(x) = 0$. Note that $d_1\theta(F; G - F)$ is simply the ordinary right-hand derivative at $\lambda = 0$ of the function $T(\lambda) = \theta(F + \lambda(G - F))$ of the real variable $\lambda$. Then a Taylor expansion of $\theta(G) - \theta(F)$ is
$$\theta(G) - \theta(F) = d_1\theta(F; G - F) + R(G, F)$$
with remainder term $R(G, F) = \theta(G) - \theta(F) - \int h(x, F) \, dG(x)$. Now suppose that $\theta$ has Gâteaux differential $d_1\theta(F_n; H_n - F_n)$. Thus
$$\theta(H_n) - \theta(F_n) = \sum_{i=1}^{n+1} V_i \varphi_i(F_n) + R(H_n, F_n),$$


where
$$\varphi_1(F_n) = \int_{-\infty}^{x_{(1)}} h(x, F_n)\, \frac{1}{k_1}\, e^{(x - x_{(1)})/k_1} \, dx,$$
$$\varphi_i(F_n) = \frac{\int_{x_{(i-1)}}^{x_{(i)}} h(x, F_n) \, dx}{x_{(i)} - x_{(i-1)}} \quad \text{for } i = 2, \ldots, n,$$
$$\varphi_{n+1}(F_n) = \int_{x_{(n)}}^{+\infty} h(x, F_n)\, \frac{1}{k_2}\, e^{-(x - x_{(n)})/k_2} \, dx.$$
Given $x_1, \ldots, x_n$, a limit law for $\sqrt{n}(\theta(H_n) - \theta(F_n))$ is found by establishing $R(H_n, F_n) = o_{P^n}(n^{-1/2})$ with probability 1 and dealing with $\sqrt{n} \sum_{i=1}^{n+1} V_i \varphi_i(F_n)$. For this, we shall assume the following conditions:

(C1) $0 < \sigma_h^2 < \infty$, with $\sigma_h^2$ the variance of $h(X, F)$ under $F$.
(C2) $P^n\{|\sqrt{n}\, R(H_n, F_n)| > \epsilon \mid X_1, \ldots, X_n\} \xrightarrow{a.s.} 0$ for any $\epsilon > 0$.
(C3) $(1/\sqrt{n+1}) \sum_{i=1}^{n+1} \varphi_i(F_n) \xrightarrow{a.s.} 0$.
(C4) $(1/(n+1)) \sum_{i=1}^{n+1} [\varphi_i(F_n)]^2 \xrightarrow{a.s.} E[h(X, F)]^2$.
(C5) $\max_i |\varphi_i(F_n)| = o(n^{1/2})$ a.s.
(C6) $\sqrt{n}\, R(F_n, F) \xrightarrow{P} 0$.

Theorem 3.1. Under Conditions C1-C5, we have
$$\sqrt{n}(\theta(H_n) - \theta(F_n)) \xrightarrow{L} N(0, \sigma_h^2) \quad a.s.,$$
where $N(0, \sigma_h^2)$ denotes the normal distribution with zero mean and variance $\sigma_h^2$.

The proof, like those of the later results, is postponed to the Appendix. Now write $X = (X_1, \ldots, X_n)$ and consider properties of $\tilde G_X$. Let $y = \theta(F) + \delta$ for $\delta \in \mathbb{R}$, and let $\Phi$ denote the standard normal distribution function. Note that
$$\tilde G_X(y) = P^n(\theta(H_n) \le y) = P^n(\sqrt{n}(\theta(H_n) - \theta(F_n)) \le \sqrt{n}(y - \theta(F_n))).$$
By Theorem 3.1, we have
$$\left|\tilde G_X(y) - \Phi\left(\frac{\sqrt{n}(y - \theta(F_n))}{\sigma_h}\right)\right| \xrightarrow{a.s.} 0. \tag{6}$$
On the other hand, since $\sqrt{n}(\theta(F_n) - \theta(F)) = n^{-1/2} \sum_{i=1}^{n} h(X_i, F) + \sqrt{n}\, R(F_n, F) \xrightarrow{L} N(0, \sigma_h^2)$ under Conditions C1 and C6,
$$\Phi\left(\frac{\sqrt{n}(y - \theta(F_n))}{\sigma_h}\right) = \Phi\left(\frac{\sqrt{n}(\theta(F) - \theta(F_n)) + \sqrt{n}\delta}{\sigma_h}\right) \to \begin{cases} 0 & \text{in probability for } \delta < 0, \\ 1 & \text{in probability for } \delta > 0, \\ U & \text{in distribution for } \delta = 0, \end{cases}$$
which, together with (6), implies the following corollary.

Corollary 3.1. Under Conditions C1-C6, we have
(i) $\tilde G_X(y) \xrightarrow{P} 0$ for $y < \theta(F)$;
(ii) $\tilde G_X(y) \xrightarrow{P} 1$ for $y > \theta(F)$;
(iii) $\sup_{p \in [0,1]} |P(\tilde G_X(y) \le p) - p| \to 0$ for $y = \theta(F)$.

Result (iii) of Corollary 3.1 reveals that $\tilde G_X(\theta(F))$ converges to $U(0,1)$ in distribution. This is the counterpart of $\tilde G_X(\theta(F)) \stackrel{d}{=} U(0,1)$ in parametric models (Pedersen, 1978). Next we construct confidence intervals of $\theta(F)$ by making fiducial probability statements directly about subsets of the parameter space.

Definition 2. Suppose that $\tilde G_x(\cdot)$ is the fiducial distribution function determined by the fiducial model
$$\Theta = \hat\theta_x(U), \quad U \sim P^n.$$
Let $\gamma \in (0,1)$, and let
$$\hat\theta_\gamma(x) = \inf\{y : \tilde G_x(y) \ge \gamma\}$$


be the $\gamma$-th quantile of $\tilde G_x(\cdot)$. Then, given a confidence coefficient $1 - \alpha$, the fiducial interval of $\theta(F)$ is defined as
$$C_\alpha(x) = [\hat\theta_{\alpha/2}(x),\ \hat\theta_{1-\alpha/2}(x)].$$
The construction of the fiducial confidence intervals above, and of the hypothesis tests below, is somewhat similar to that of the corresponding Bayesian inferences: the fiducial distributions of parameters here play the role of posterior distributions, so various statistical inferences can be carried out from the fiducial distribution along Bayesian lines. Clearly, the confidence sets obtained here do not have a direct probability interpretation. However, the following theorem not only shows that $C_\alpha(x)$ indeed has asymptotic frequentist coverage $1 - \alpha$, but also asserts that $C_\alpha$ includes any false value with asymptotic probability 0.

Theorem 3.2. Under Conditions C1-C6, we have
$$\lim_{n \to \infty} P(\theta(F) \in C_\alpha(X)) = 1 - \alpha,$$
$$\lim_{n \to \infty} P(y \in C_\alpha(X)) = 0 \quad \text{for } y \ne \theta(F).$$
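Definition 2 is straightforward to implement by Monte Carlo: draw $(V_1, \ldots, V_{n+1}) \sim \mathrm{Dirichlet}(n+1; 1, \ldots, 1)$, form $\hat\theta_x(U)$ as in (4), and take empirical quantiles of the draws. A minimal sketch for the mean (function and variable names are ours; the tail scales are fixed at $k_1 = k_2 = 1$ purely for illustration, not the optimal choice derived in Section 4):

```python
import numpy as np

def fiducial_interval(draws, alpha=0.05):
    """C_alpha(x) = [theta_{alpha/2}(x), theta_{1-alpha/2}(x)] from fiducial draws."""
    lo, hi = np.quantile(np.asarray(draws, dtype=float),
                         [alpha / 2.0, 1.0 - alpha / 2.0])
    return lo, hi

# toy illustration for the mean (Example 1)
rng = np.random.default_rng(0)
x = np.sort(rng.normal(size=20))
grid = np.concatenate(([x[0] - 2.0], x, [x[-1] + 2.0]))  # x_(0), ..., x_(n+1)
ybar = (grid[:-1] + grid[1:]) / 2.0                      # interval midpoints
draws = rng.dirichlet(np.ones(len(ybar)), size=5000) @ ybar   # Eq. (4)
lo, hi = fiducial_interval(draws, alpha=0.10)
```

Each row of the Dirichlet draw is one realization of $(V_1, \ldots, V_{n+1})$, so `draws` holds 5000 Monte Carlo realizations of $\hat\theta_x(U)$.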

Next consider problems of hypothesis testing. First consider left-sided hypotheses of the form
$$H_0: \theta(F) \le \theta_0, \qquad H_1: \theta(F) > \theta_0, \tag{7}$$
where $\theta_0$ is a specified value of the parameter. The p-value of (7) can be conveniently computed as the fiducial probability of the null hypothesis, i.e.
$$p(x) = \tilde G_x(\theta_0). \tag{8}$$
We call
$$\pi(y) = P(p(X) \le \alpha \mid y) \tag{9}$$
the power function of (7). Then we have the following theorem.

Theorem 3.3. Under Conditions C1-C6, we have
$$\lim_{n \to \infty} P(p(X) \le \alpha) = \alpha \quad \text{for } \theta(F) = \theta_0,$$
$$\lim_{n \to \infty} P(p(X) \le \alpha) = 0 \quad \text{for } \theta(F) < \theta_0,$$
and
$$\lim_{n \to \infty} P(p(X) \le \alpha) = 1 \quad \text{for } \theta(F) > \theta_0.$$

The theorem is a direct consequence of Corollary 3.1 and its proof is omitted. Similarly, it is easily seen that the p-value for testing
$$H_0: \theta \ge \theta_0, \qquad H_1: \theta < \theta_0$$
is $p = 1 - \tilde G_x(\theta_0)$, and the resulting test is also consistent. Finally, for the two-sided hypothesis
$$H_0: \theta(F) = \theta_0, \qquad H_1: \theta(F) \ne \theta_0, \tag{10}$$
the p-value is defined as $p(x) = 2\min\{1 - \tilde G_x(\theta_0),\ \tilde G_x(\theta_0)\}$, and the corresponding test is consistent.

Before ending the section, we examine Conditions C1-C6. These conditions underlie all the results above, so a deeper analysis of them is necessary.

Theorem 3.4. Let $h(x, F) = \sum_{r=0}^{k} a_r(F)[x^r - E_F(X^r)]$ with $k$ a finite positive integer and such that $Eh(X, F) = 0$. Suppose that $|a_r(F_n)| - |a_r(F)|$ converges to zero with probability 1 for $r \le k$, that $k_1, k_2 = o(n^{1/(2k)})$ with probability 1, and that $0 < \sigma_h^2 < \infty$. Then Conditions C3-C5 hold.

4. Examples and further results

Example 3 (Mean, continued). For the mean of $X$, $\tilde G_x(y)$ is continuous and strictly increasing in $y$ by results in David and Nagaraja (2003). This guarantees the existence and uniqueness of the $\gamma$-quantile $\hat\theta_\gamma(x)$ given by Definition 2. On the other hand, it is easy to check that here $h(x, F) = x - \theta(F)$ and $R(H_n, F_n) = 0$. By Theorem 3.4, Conditions C1-C6 simplify to $0 < \sigma^2 < \infty$ and $k_1, k_2 = o(n^{1/2})$ a.s. Then fiducial intervals and p-values for the three hypotheses can be obtained.
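The three p-values can be approximated from the same Monte Carlo draws of $\hat\theta_x(U)$; a sketch (our own function name, with $\tilde G_x(\theta_0)$ estimated by the empirical fraction of draws below $\theta_0$):

```python
import numpy as np

def fiducial_p_values(draws, theta0):
    """p-values from Monte Carlo fiducial draws: Eq. (8) for the left-sided
    hypothesis (7), its right-sided analogue, and the two-sided hypothesis (10)."""
    draws = np.asarray(draws, dtype=float)
    g = float(np.mean(draws <= theta0))    # Monte Carlo estimate of G_x(theta_0)
    return {"left": g, "right": 1.0 - g, "two_sided": 2.0 * min(g, 1.0 - g)}
```

For example, draws symmetric about $\theta_0$ give a two-sided p-value near 1, while draws far above $\theta_0$ drive the left-sided p-value toward 0.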


Next consider the choices of $k_1$ and $k_2$. Let $E^n$ and $\mathrm{Var}^n$ denote expectation and variance under $P^n$. Then the conditional expectation and variance of $\theta(H_n) = \hat\theta_x(U)$ for the mean are
$$E^n \theta(H_n) = E^n \sum_{i=1}^{n+1} V_i \bar y_i = \bar y, \qquad \mathrm{Var}^n \theta(H_n) = \frac{1}{n+2}\, S_{\bar y}^2,$$
with $S_{\bar y}^2 = (1/(n+1)) \sum_{i=1}^{n+1} (\bar y_i - \bar y)^2$ and $\bar y = (1/(n+1)) \sum_{i=1}^{n+1} \bar y_i$.

Definition 3. A quantity $\theta(H_n)$, a functional of $H_n$, is $F_n$-unbiased if $E^n \theta(H_n) = \theta(F_n)$.

For the mean of $X$, rewrite
$$E^n \theta(H_n) = \frac{n}{n+1}\,\bar x + \frac{x_{(1)} + x_{(n)}}{2(n+1)} + \frac{k_2 - k_1}{n+1}.$$
$F_n$-unbiasedness of $\theta(H_n)$ requires that
$$k_2 - k_1 = \bar x - \frac{x_{(1)} + x_{(n)}}{2}. \tag{11}$$
Here, the most efficient $k_1$ and $k_2$ should give $\theta(H_n)$ the smallest conditional variance among all $F_n$-unbiased estimates. Note that
$$(n+2)(n+1)\,\mathrm{Var}^n \theta(H_n) = (x_{(1)} - k_1)^2 + (x_{(n)} + k_2)^2 + \sum_{i=2}^{n} \bar y_i^2 - (n+1)\bar y^2.$$
The term $\sum_{i=2}^{n} \bar y_i^2$ is free of $k_1$ and $k_2$, as is $\bar y^2$ under condition (11). To minimize $\mathrm{Var}^n \theta(H_n)$ subject to (11), it suffices to consider $(x_{(1)} - k_1)^2 + (x_{(n)} + k_2)^2$, which attains its minimum at
$$k_1 = 0 \vee \left(\frac{x_{(1)} + x_{(n)}}{2} - \bar x\right), \qquad k_2 = 0 \vee \left(\bar x - \frac{x_{(1)} + x_{(n)}}{2}\right). \tag{12}$$
Note that at least one of $k_1$ and $k_2$ in (12) is 0. If, for example, $k_1 = 0$, then the mass $u_{(1)}$ is attached to the point $x_{(1)}$. Such attachment is reasonable: $k_1 = 0$ implies $(x_{(1)} + x_{(n)})/2 - \bar x \le 0$, hence $\bar x$ is closer to $x_{(n)}$ than to $x_{(1)}$. In other words, the interval $((x_{(1)} + x_{(n)})/2,\ x_{(n)})$ contains more information than $(x_{(1)},\ (x_{(1)} + x_{(n)})/2)$ does, so it is natural to extend the support of $H_n$ beyond $x_{(n)}$ and to stop at the point $x_{(1)}$ on the left. In what follows we write $k_1^*$ and $k_2^*$ for the $k_1$ and $k_2$ in (12).

Lemma 1. Suppose that $0 < \sigma^2 = \mathrm{Var}(X) < \infty$. Then $k_1^*, k_2^* = o(n^{1/2})$ a.s.

Proof. By Lemma 4 in Owen (1990), $k_1^*, k_2^* \le \max_i |X_i| = o(n^{1/2})$ a.s. □

It can be seen from Lemma 1 that $k_1^*$ and $k_2^*$ satisfy the condition of Theorem 3.4 with $k = 1$. Throughout the following, let $C$ denote some positive real number. For $k_1^*$ and $k_2^*$ we can obtain a further result on the uniform difference in (iii) of Corollary 3.1 when the mean is the parameter of interest.

Theorem 4.1. Define $\beta_6 = E|X_1 - EX_1|^6$ and $\theta(F) = \int x \, dF(x)$. Suppose that $\beta_6 < \infty$ and take $k_1 = k_1^*$ and $k_2 = k_2^*$. Then
$$\sup_{p \in [0,1]} |P(\tilde G_X(\theta(F)) \le p) - p| \le \frac{C\beta_6}{\sigma^6 \sqrt{n}}. \tag{13}$$
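Formula (12) is easy to code and check; a sketch (function name ours) that also exhibits the $F_n$-unbiasedness constraint (11) and the fact that at least one scale is zero:

```python
import numpy as np

def optimal_tail_scales(x):
    """k1*, k2* of Eq. (12): the Fn-unbiased choice with minimal conditional variance."""
    xs = np.asarray(x, dtype=float)
    c = xs.mean() - (xs.min() + xs.max()) / 2.0   # right-hand side of Eq. (11)
    return max(0.0, -c), max(0.0, c)              # k1* = 0 v (-c), k2* = 0 v c
```

By construction `k2 - k1 == c`, so (11) holds, and `min(k1, k2) == 0` always.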

By the continuity and monotonicity of $\tilde G_X$ and Theorem 4.1, we obtain
$$|P(\theta(F) \in C_\alpha(X)) - (1 - \alpha)| = \left|P\left(\frac{\alpha}{2} \le \tilde G_X(\theta(F)) \le 1 - \frac{\alpha}{2}\right) - (1 - \alpha)\right| \le \frac{C\beta_6}{\sigma^6 \sqrt{n}}.$$

Example 4 (Median and quantiles, continued). For quantiles, the fiducial interval has two advantages. First, compared with the confidence interval based on the binomial distribution of $I_{X_i < \theta(F,p)}$, obtainable in Efron (1981), the fiducial interval is more general, in that it exists even in situations where the former is unavailable. Second, compared with the interval based on the asymptotic normality of the sample quantile $\theta(F_n, p)$, the fiducial interval avoids estimating the density at the true value $\theta(F,p)$.


Note that the continuity and strict monotonicity of the fiducial distribution $\tilde G_x(\cdot)$ guarantee the existence and uniqueness of $\hat\theta_\gamma(x)$. Theorem 3.1 and Corollary 3.1 do not apply to quantiles, because $h(x, F_n)$ does not exist. However, the results of Corollary 3.1 still hold and are further refined by the following theorem.

Theorem 4.2. Let $0 < p, \gamma < 1$. If $F$ is differentiable at $\theta(F,p)$ with derivative $F'(\theta(F,p)) > 0$, then
(i) $\tilde G_X(y) \xrightarrow{P} 0$ for $y < \theta(F,p)$;
(ii) $\tilde G_X(y) \xrightarrow{P} 1$ for $y > \theta(F,p)$;
(iii) $\sup_{\gamma \in [0,1]} |P(\tilde G_X(y) \le \gamma) - \gamma| \le \dfrac{C_p\, E|I_{X_1 \le \theta(F,p)} - p|^3}{\sqrt{n}\,\big(E(I_{X_1 \le \theta(F,p)} - p)^2\big)^{3/2}}$ for $y = \theta(F,p)$,
where $C_p$ is a universal constant which is free of $F$ but may depend on $p$.

The conditions of Theorem 4.2 are quite common in other statistical inferences on $\theta(F,p)$. The results of Theorem 4.2 imply that the difference between the coverage of the fiducial interval and the nominal level has the same uniform upper bound as in (iii). For a special distribution family, the following theorem gives a uniform upper bound $p^n + (1-p)^n$, rather than the $O(n^{-1/2})$ of (iii) of Theorem 4.2.

Theorem 4.3. Suppose that $X$ is distributed as the uniform distribution on $[a,b]$ with $a, b < \infty$. Then for $p \in (0,1)$,
$$\sup_{\gamma \in [0,1]} |P(\theta(F,p) \le \hat\theta_\gamma(X)) - \gamma| \le p^n + (1-p)^n. \tag{14}$$

For quantiles, it is difficult to choose $k_1$ and $k_2$ by the principle of minimum conditional variance among all $F_n$-unbiased estimates, because $E^n \theta(H_n, p)$ and $\mathrm{Var}^n \theta(H_n, p)$ are not explicit functions of $k_1$ and $k_2$. In fact, the choice of $k_1$ and $k_2$ can be ignored here: $k_1$ or $k_2$ appears in $\theta(H_n, p)$ only when $p < U_{(1)}$ or $p > U_{(n)}$, and for $p \ne 0, 1$ the probabilities of the two events $\{p < U_{(1)}\}$ and $\{p > U_{(n)}\}$ converge exponentially to zero as $n \to \infty$, so these events hardly ever occur. In practice we can therefore still adopt the $k_1^*$ and $k_2^*$ given in Example 3; this choice does not affect the results of Theorem 4.2, whose proof is valid for all $k_1$ and $k_2$.

Example 5 (Variance). For the variance functional
$$\theta(F) = E(X_1 - X_2)^2/2, \tag{15}$$
we have
$$h(x, F) = x^2 - \int x^2 \, dF(x) - 2x \int x \, dF(x) + 2\left(\int x \, dF(x)\right)^2$$
and
$$R(G, F) = -\left(\int x \, d(G(x) - F(x))\right)^2.$$
Note that $h(x, F)$ is a quadratic function of $x$. Therefore, to obtain Theorem 3.1 and Corollary 3.1, it only remains to check Conditions C2 and C6, by Theorem 3.4. Also note that $|R(H_n, F_n)| = \big(\sum_{i=1}^{n+1} V_i [\bar y_i - \bar x]\big)^2$. Then by Lemma 4 in Owen (1990), we have
$$\sqrt{n}\, E^n |R(H_n, F_n)| = \sqrt{n}\left[\frac{\sum_{i=1}^{n+1} (\bar y_i - \bar x)^2}{(n+1)(n+2)} + \frac{\big(\sum_{i=1}^{n+1} (\bar y_i - \bar x)\big)^2}{(n+1)(n+2)}\right] \le \frac{C_1 \max_i |X_i|^2 + C_2 (k_1^2 + k_2^2)}{\sqrt{n}} \xrightarrow{a.s.} 0$$
($C_1$ and $C_2$ some positive real numbers), which implies that
$$\sqrt{n}\, R(H_n, F_n) = o_{P^n}(1) \quad a.s. \tag{16}$$
Furthermore, by the Hartman-Wintner LIL,
$$R(F_n, F) = O(n^{-1} \log\log n) \quad a.s.$$


and hence in particular
$$\sqrt{n}\, R(F_n, F) \xrightarrow{P} 0. \tag{17}$$
Clearly, (16) and (17) are in conformity with Conditions C2 and C6.

Lemma 2. Assume that the random vector $V = (V_1, \ldots, V_n)^T$ is jointly distributed as $\mathrm{Dirichlet}(n-1; 1, \ldots, 1)$. Let $D = (d_{ij})_{n \times n}$ be a nonzero square matrix and $G(t) = P(V^T D V \le t)$. Then $G(t)$ is continuous and strictly increasing in $t$ on its bounded support.

Due to the form (15) of the variance, the variance of the fiducial empirical distribution $H_n$ is a quadratic form in $(V_1, \ldots, V_{n+1})$. Therefore Lemma 2 guarantees the existence and uniqueness of $\hat\theta_\gamma(x)$ for the variance. As with quantiles, it is feasible to take $k_1^*$ and $k_2^*$ here, because $k_1^*, k_2^* = o(n^{1/4})$ with probability 1, as Theorem 3.4 requires.

5. Simulation studies

In this section, numerical studies for the mean were conducted on two tasks: interval estimation and hypothesis testing.

Confidence intervals. We compare the fiducial empirical interval with other confidence intervals based on samples of size n = 20, 30, 60. Six population distributions were considered: the chi-square distribution χ²(1), U(2,3), the exponential distribution with rate parameter 1 and location parameter 0 (Exp(1)), Beta(2,3), the t-distribution with 5 degrees of freedom (t(5)) and Beta(.3,.7). In this study, for confidence levels 90% and 95%, besides the fiducial empirical interval (Fid-Emp) with $k_1 = k_1^*$ and $k_2 = k_2^*$, a Bayesian bootstrap interval was considered, as was a Student's t interval with t(n-1) as its approximating distribution. Moreover, four bootstrap confidence intervals were also treated: percentile, bias-corrected percentile (Boot BC), bias-corrected accelerated percentile (Boot ABC) and bootstrap t (Boot-t). The bootstrap intervals are discussed by Efron (1982), with the exception of the bias-corrected accelerated interval (Efron, 1987). All confidence intervals referred to above are computed by Monte Carlo with size 5000.
Besides, 5000 samples of size n from U(0,1) are generated to estimate the fiducial distribution and hence the endpoints of the fiducial intervals. In Tables 1-6, CP denotes the simulated coverage probability of a confidence interval, LP the percentage of times the lower confidence limit exceeds the true mean, UP the percentage of times the upper confidence limit is less than the true mean, and EL the simulation estimate of the expected length of a two-sided confidence interval. First, Tables 1-6 show that for the small sample sizes 20 and 30 the fiducial interval is the second closest to nominal coverage in most cases, while for n = 60 it is the closest in most cases. Second, the bias-corrected accelerated percentile interval for t(5), and the Bayesian bootstrap interval for χ²(1), Exp(1) and Beta(2,3), have substantially less than nominal coverage and perform worst in those cases; the bootstrap-t interval for U(2,3) and Beta(.3,.7), which have bounded support, is quite anticonservative and also performs worst there. Third, the bootstrap-t interval appears to best balance coverage in both tails, whereas the fiducial interval is typically shifted to the left of the bootstrap-t interval in most cases. Overall, in terms

Table 1. Numerical results from 5000 simulations for n = 20 and nominal 90%.

             χ²(1)                           U(2,3)
             CP     LP     UP     EL         CP     LP     UP     EL
Fid-Emp      .8566  .0202  .1232  1.0191     .8980  .0504  .0516  .2114
Bayes boot   .8204  .1356  .0440   .9095     .8808  .0598  .0594  .2010
Ordinary t   .8528  .0110  .1362  1.0245     .9048  .0470  .0482  .2221
Percentile   .8384  .0246  .1370   .9443     .8848  .0574  .0578  .2060
Boot BC      .8416  .0358  .1226   .9663     .8936  .0528  .0536  .2062
Boot ABC     .8394  .0548  .1058  1.0244     .9012  .0488  .0500  .2064
Boot-t       .8902  .0358  .0740  1.3226     .9230  .0384  .0386  .2231

             Exp(1)                          Beta(2,3)
             CP     LP     UP     EL         CP     LP     UP     EL
Fid-Emp      .8790  .0254  .0956   .7326     .8876  .0490  .0634  .1444
Bayes boot   .8360  .1158  .0482   .6665     .8692  .0716  .0592  .1381
Ordinary t   .8722  .0186  .1092   .7452     .9022  .0394  .0584  .1528
Percentile   .8488  .0366  .1146   .6886     .8760  .0532  .0708  .1417
Boot BC      .8536  .0438  .1026   .6983     .8812  .0530  .0658  .1419
Boot ABC     .8638  .0542  .0820   .7233     .8874  .0520  .0606  .1422
Boot-t       .9016  .0406  .0578   .8521     .9156  .0382  .0462  .1542

             t(5)                            Beta(.3,.7)
             CP     LP     UP     EL         CP     LP     UP     EL
Fid-Emp      .8890  .0538  .0572   .9463     .8970  .0360  .0670  .2367
Bayes boot   .8576  .0746  .0678   .8758     .8828  .0726  .0446  .2242
Ordinary t   .9098  .0430  .0472   .9731     .9000  .0288  .0712  .2486
Percentile   .8746  .0600  .0654   .9013     .8860  .0386  .0754  .2304
Boot BC      .8674  .0634  .0692   .9054     .8948  .0406  .0646  .2314
Boot ABC     .8564  .0674  .0762   .9170     .9042  .0444  .0514  .2341
Boot-t       .8876  .0542  .0582  1.0099     .9424  .0318  .0258  .2597


Table 2. Numerical results from 5000 simulations for n = 20 and nominal 95%.

             χ²(1)                           U(2,3)
             CP     LP     UP     EL         CP     LP     UP     EL
Fid-Emp      .9030  .0078  .0892  1.2287     .9474  .0276  .0250  .2537
Bayes boot   .8816  .0990  .0194  1.0940     .9372  .0308  .0320  .2385
Ordinary t   .8932  .0032  .1036  1.2401     .9544  .0228  .0228  .2685
Percentile   .8840  .0104  .1056  1.1171     .9358  .0336  .0306  .2444
Boot BC      .8906  .0144  .0950  1.1455     .9408  .0306  .0286  .2446
Boot ABC     .9018  .0250  .0732  1.2374     .9508  .0244  .0248  .2451
Boot-t       .9324  .0176  .0500  1.6109     .9726  .0146  .0128  .2723

             Exp(1)                          Beta(2,3)
             CP     LP     UP     EL         CP     LP     UP     EL
Fid-Emp      .9272  .0108  .0620   .8805     .9420  .0234  .0346  .1713
Bayes boot   .9058  .0688  .0254   .8000     .9290  .0424  .0286  .1647
Ordinary t   .9238  .0048  .0714   .9020     .9446  .0196  .0358  .1849
Percentile   .9102  .0156  .0742   .8173     .9302  .0280  .0418  .1684
Boot BC      .9144  .0192  .0664   .8287     .9328  .0274  .0398  .1686
Boot ABC     .9168  .0302  .0530   .8682     .9374  .0266  .0360  .1692
Boot-t       .9374  .0216  .0410  1.0642     .9570  .0190  .0240  .1882

             t(5)                            Beta(.3,.7)
             CP     LP     UP     EL         CP     LP     UP     EL
Fid-Emp      .9428  .0286  .0286  1.1398     .9404  .0212  .0384  .2759
Bayes boot   .9202  .0414  .0384  1.0568     .9328  .0448  .0224  .2659
Ordinary t   .9586  .0208  .0206  1.1779     .9424  .0118  .0458  .3009
Percentile   .9308  .0332  .0360  1.0733     .9336  .0186  .0478  .2734
Boot BC      .9258  .0356  .0386  1.0782     .9394  .0192  .0414  .2747
Boot ABC     .9162  .0398  .0440  1.0963     .9546  .0192  .0262  .2789
Boot-t       .9426  .0280  .0294  1.222      .9780  .0116  .0104  .3241

Table 3. Numerical results from 5000 simulations for n = 30 and nominal 90%.

Distribution  Metric  Fid-Emp  Bayes boot  Ordinary t  Percentile  Boot BC  Boot ABC  Boot-t
χ²(1)         CP      .8806    .8516       .8700       .8584       .8646    .8678     .8976
              LP      .0208    .0380       .0142       .0250       .0350    .0456     .0362
              UP      .0986    .1104       .1158       .1166       .1004    .0866     .0662
              EL      .8637    .7737       .8403       .7964       .8114    .8509     .9996
Exp(1)        CP      .8854    .8620       .8794       .8700       .8702    .8718     .8950
              LP      .0264    .0402       .0216       .0300       .0382    .0488     .0420
              UP      .0882    .0978       .0990       .1000       .0916    .0794     .0630
              EL      1.2187   1.1132      1.2094      1.1396      1.1520   1.1835    1.3122
t(5)          CP      .9028    .8830       .9006       .8884       .8852    .8788     .8886
              LP      .0500    .0600       .0506       .0568       .0574    .0604     .0580
              UP      .0472    .0570       .0488       .0548       .0574    .0608     .0534
              EL      .5669    .5318       .5517       .5380       .5391    .5414     .5556
U(2,3)        CP      .8934    .8792       .8924       .8832       .8870    .9014     .9238
              LP      .0430    .0716       .0350       .0432       .0470    .0488     .0412
              UP      .0636    .0492       .0726       .0736       .0660    .0498     .0350
              EL      .1937    .1867       .1998       .1901       .1907    .1921     .2043
Beta(2,3)     CP      .8944    .8800       .8970       .8840       .8864    .8916     .9070
              LP      .0506    .0642       .0458       .0526       .0542    .0542     .0464
              UP      .0550    .0558       .0572       .0634       .0594    .0542     .0466
              EL      .1195    .1155       .1235       .1176       .1176    .1178     .1240
Beta(.3,.7)   CP      .8964    .8902       .9040       .8926       .8954    .9004     .9160
              LP      .0528    .0556       .0490       .0540       .0522    .0494     .0414
              UP      .0508    .0542       .0470       .0534       .0524    .0502     .0426
              EL      .1700    .1673       .1787       .1701       .1702    .1703     .1790

of the proximity to the nominal coverage, the average rankings of the Fid-Emp, Bayes boot, ordinary t, percentile, boot BC, boot ABC and boot-t intervals over Tables 1–6, respectively, are (2, 6.5, 2, 5.3, 4.5, 3.8, 3.8), (2.2, 6.3, 2.7, 5.5, 4.7, 3.3, 3.3), (2, 6.5, 2.3, 5.3, 4.7, 3.5, 3.7), (2, 6, 2.3, 5.5, 4.8, 3.5, 3.8), (1.2, 6.7, 2.5, 4.8, 4.8, 4.3, 3.7) and (1.5, 6.2, 3.5, 4.3, 4.2, 4.5, 3.8). Therefore, comprehensively, the Fid-Emp intervals perform best, followed by the ordinary t interval, then the bootstrap-t and bootstrap ABC intervals, with the Bayesian bootstrap worst.

Testing hypotheses: Consider the left-sided hypothesis H0: θ ≤ 3. Samples of size n = 30, 60 were generated from six models: N(θ,1), U(θ−1, θ+1), Beta(2,2) − .5 + θ, the lognormal distribution LN(log(θ) − .05, .1), the gamma distribution with shape


Table 4. Numerical results from 5000 simulations for n = 30 and nominal 95%.

Distribution  Metric  Fid-Emp  Bayes boot  Ordinary t  Percentile  Boot BC  Boot ABC  Boot-t
χ²(1)         CP      .9248    .9032       .9114       .9050       .9092    .9184     .9458
              LP      .0084    .0198       .0040       .0112       .0156    .0242     .0152
              UP      .0668    .0770       .0846       .0838       .0752    .0574     .0390
              EL      1.0425   .9319       1.0114      .9450       .9693    1.0265    1.2550
Exp(1)        CP      .9336    .9164       .9262       .9182       .9212    .9246     .9464
              LP      .0104    .0198       .0060       .0132       .0170    .0246     .0188
              UP      .0560    .0638       .0678       .0686       .0618    .0508     .0348
              EL      1.4655   1.3364      1.4452      1.3550      1.3702   1.4200    1.6114
t(5)          CP      .9482    .9330       .9474       .9374       .9324    .9270     .9358
              LP      .0280    .0344       .0284       .0322       .0334    .0359     .0344
              UP      .0238    .0326       .0242       .0304       .0342    .0374     .0298
              EL      .6820    .6400       .6606       .6418       .6431    .6469     .6636
U(2,3)        CP      .9442    .9392       .9436       .9370       .9414    .9498     .9706
              LP      .0184    .0374       .0142       .0208       .0224    .0246     .0162
              UP      .0374    .0234       .0422       .0422       .0362    .0256     .0132
              EL      .2325    .2218       .2405       .2259       .2265    .2288     .2495
Beta(2,3)     CP      .9468    .9338       .9486       .9362       .9364    .9392     .9540
              LP      .0228    .0358       .0204       .0284       .0294    .0304     .0222
              UP      .0304    .0304       .0310       .0354       .0342    .0304     .0238
              EL      .1436    .1378       .1486       .1399       .1400    .1403     .1499
Beta(.3,.7)   CP      .9482    .9396       .9502       .9392       .9436    .9504     .9640
              LP      .0258    .0312       .0244       .0292       .0278    .0244     .0192
              UP      .0260    .0292       .0254       .0316       .0286    .0252     .0168
              EL      .2075    .1989       .2151       .2023       .2024    .2026     .2165

Table 5. Numerical results from 5000 simulations for n = 60 and nominal 90%.

Distribution  Metric  Fid-Emp  Bayes boot  Ordinary t  Percentile  Boot BC  Boot ABC  Boot-t
χ²(1)         CP      .9004    .8730       .8836       .8821       .8828    .8830     .9022
              LP      .0282    .0424       .0236       .0245       .0392    .0534     .0458
              UP      .0714    .0846       .0928       .0903       .0780    .0636     .0520
              EL      .6264    .5711       .5971       .5867       .5878    .6050     .6507
Exp(1)        CP      .8986    .8790       .8900       .8836       .8828    .8808     .8950
              LP      .0358    .0482       .0326       .0404       .0466    .0578     .0512
              UP      .0656    .0728       .0774       .0760       .0706    .0614     .0538
              EL      .8837    .8195       .8528       .8315       .8362    .8494     .8910
t(5)          CP      .8992    .8796       .8960       .8860       .8806    .8768     .8876
              LP      .0474    .0576       .0488       .0532       .0550    .0570     .0546
              UP      .0534    .0628       .0552       .0608       .0644    .0662     .0578
              EL      .5641    .5298       .5494       .5358       .5369    .5390     .5530
U(2,3)        CP      .8992    .8934       .8990       .8956       .8948    .8988     .9092
              LP      .0492    .0528       .0496       .0522       .0524    .0502     .0444
              UP      .0516    .0538       .0514       .0522       .0528    .0510     .0464
              EL      .1224    .1203       .1243       .1214       .1214    .1214     .1243
Beta(2,3)     CP      .8968    .8880       .8968       .8902       .8912    .8932     .9036
              LP      .0490    .0588       .0470       .0512       .0514    .0518     .0478
              UP      .0542    .0532       .0562       .0586       .0574    .0550     .0486
              EL      .0852    .0833       .0860       .0840       .0840    .0841     .0865
Beta(.3,.7)   CP      .8978    .8886       .8968       .8904       .8934    .8976     .9102
              LP      .0456    .0614       .0390       .0452       .0480    .0516     .0456
              UP      .0566    .0500       .0642       .0644       .0586    .0508     .0442
              EL      .1378    .1352       .1398       .1364       .1366    .1371     .1411

parameter 8 and scale parameter θ/8 (Gamma(8, θ/8)), and the exponential distribution with location parameter θ − 1 and scale parameter 1 (E(1, θ−1)). The six distributions have a common mean θ. Note that the first three distributions are symmetric about the mean, and the others are not. Then 10 000 samples of size n from U(0,1) were generated to estimate the fiducial distribution of the mean, and all power values were computed by Monte Carlo with 10 000 replications. In Table 7, 'F' denotes the test based on the fiducial method, 'T' the test based on Student's t statistic and 'W' the one-sample Wilcoxon signed rank test.
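As a concrete illustration, the six simulation models (each with mean θ) can be generated as follows. This is a sketch; the function names are ours, as is the reading of the LN(·,·) notation, whose second argument is taken here as the variance of log X (so that the mean is exactly θ):

```python
import math
import random

def sample(model, theta, n, rng):
    """Draw n observations from one of the six simulation models with mean theta."""
    if model == "normal":        # N(theta, 1)
        return [rng.gauss(theta, 1.0) for _ in range(n)]
    if model == "uniform":       # U(theta-1, theta+1)
        return [rng.uniform(theta - 1, theta + 1) for _ in range(n)]
    if model == "beta":          # Beta(2,2) - .5 + theta
        return [rng.betavariate(2, 2) - 0.5 + theta for _ in range(n)]
    if model == "lognormal":     # LN(log(theta)-.05, .1): var(log X)=.1 gives mean theta
        return [rng.lognormvariate(math.log(theta) - 0.05, math.sqrt(0.1))
                for _ in range(n)]
    if model == "gamma":         # Gamma(8, theta/8): shape 8, scale theta/8
        return [rng.gammavariate(8, theta / 8) for _ in range(n)]
    if model == "shifted_exp":   # E(1, theta-1): location theta-1, scale 1
        return [theta - 1 + rng.expovariate(1.0) for _ in range(n)]
    raise ValueError(model)
```

Each model's sample mean should fluctuate around θ, which is what makes the power comparison in Table 7 fair across distributions.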


Table 6. Numerical results from 5000 simulations for n = 60 and nominal 95%.

Distribution  Metric  Fid-Emp  Bayes boot  Ordinary t  Percentile  Boot BC  Boot ABC  Boot-t
χ²(1)         CP      .9414    .9208       .9262       .9230       .9262    .9280     .9428
              LP      .0138    .0268       .0090       .0174       .0218    .0318     .0266
              UP      .0448    .0524       .0648       .0596       .0520    .0402     .0306
              EL      .7513    .6833       .7118       .6884       .6962    .7237     .7919
Exp(1)        CP      .9464    .9324       .9392       .9348       .9336    .9352     .9466
              LP      .0138    .0234       .0112       .0164       .0212    .0276     .0244
              UP      .0398    .0442       .0496       .0488       .0452    .0372     .0290
              EL      1.0595   .9820       1.0201      1.0014      .9940    1.0147    1.0777
t(5)          CP      .9484    .9326       .9480       .9372       .9332    .9284     .9386
              LP      .0246    .0314       .0248       .0284       .0300    .0320     .0304
              UP      .0270    .0360       .0272       .0344       .0368    .0396     .0310
              EL      .6783    .6371       .6578       .6392       .6408    .6441     .6605
U(2,3)        CP      .9518    .9486       .9540       .9509       .9496    .9530     .9618
              LP      .0234    .0272       .0218       .0235       .0244    .0228     .0190
              UP      .0248    .0242       .0242       .0256       .0260    .0242     .0192
              EL      .1445    .1432       .1488       .1445       .1445    .1445     .1492
Beta(2,3)     CP      .9484    .9438       .9484       .9440       .9436    .9464     .9520
              LP      .0260    .0282       .0234       .0272       .0276    .0278     .0246
              UP      .0256    .0280       .0282       .0288       .0288    .0258     .0234
              EL      .1017    .0993       .1030       .1000       .1000    .1001     .1033
Beta(.3,.7)   CP      .9502    .9430       .9458       .9432       .9458    .9512     .9596
              LP      .0208    .0330       .0176       .0218       .0226    .0240     .0206
              UP      .0290    .0240       .0366       .0350       .0316    .0248     .0198
              EL      .1649    .1609       .1674       .1623       .1625    .1633     .1698

Table 7. Power of the test H0: θ ≤ 3 from 10 000 simulations for nominal 95%.

θ     Distribution        F (n=30)             T (n=30)  W (n=30)  F (n=60)               T (n=60)  W (n=60)
3     N(θ,1)              .0523 (9%, 2.8%)     .048      .0509     .0513 ( )              .0512     .0552
      U(θ−1, θ+1)         .0545 (8.1%, 2.4%)   .0504     .0532     .0509 (3%, .2%)        .0494     .0508
      Beta(2,2)−.5+θ      .0556 (8.2%, 2.6%)   .0514     .0542     .0530 (2.3%, 3.7%)     .0518     .0511
      LN(log(θ)−.05, .1)  .0391 (17.8%, 116%)  .0331     .0181     .0424 (6%, 9.6%)       .0400     .0387
      Gamma(8, θ/8)       .0447 (14.4%, 36.7%) .0417     .0286     .0407 (5.2%, 126.1%)   .0387     .0180
      E(1, θ−1)           .0340 (36%, 432.1%)  .0250     .0064     .0310 (13.6%, 1966.7%) .0273     .0015
3.15  N(θ,1)              .2074 (5.9%, 4.9%)   .1958     .1977     .3198 (.2%, 2.2%)      .3195     .3130
      U(θ−1, θ+1)         .4334 (8.1%, 12.5%)  .4009     .3853     .6459 (2.5%, 9.3%)     .6303     .5908
      Beta(2,2)−.5+θ      .9801 (.5%, 2.4%)    .9757     .9569     .9999 (*, *)           .9999     .9991
      LN(log(θ)−.05, .1)  .1864 (9.7%, 75.2%)  .1699     .1064     .2823 (3.8%, 117.3%)   .2719     .1299
      Gamma(8, θ/8)       .1741 (9.4%, 59%)    .1592     .1095     .2565 (3%, 79.9%)      .2490     .1426
      E(1, θ−1)           .1631 (19.9%, 297.8%) .1360    .0410     .2822 (7.8%, 578.4%)   .2619     .0416
3.25  N(θ,1)              .3912 (3%, 3.6%)     .3799     .3776     .6052 (*, 1.8%)        .6065     .5945
      U(θ−1, θ+1)         .7764 (3.9%, 11.2%)  .7472     .6980     .9609 (.6%, 4.2%)      .9555     .9218
      Beta(2,2)−.5+θ      1.0000 (*, *)        1.0000    1.0000    1.0000 (*, *)          1.0000    1.0000
      LN(log(θ)−.05, .1)  .3595 (7.8%, 54.8%)  .3336     .2323     .5808 (1.7%, 66.1%)    .5709     .3494
      Gamma(8, θ/8)       .3084 (7.6%, 45.1%)  .2866     .2126     .5131 (2.1%, 52%)      .5026     .3375
      E(1, θ−1)           .3743 (12.9%, 191.1%) .3316    .1286     .6472 (3.1%, 240%)     .6280     .1906
3.35  N(θ,1)              .6098 (2.5%, 4.1%)   .5948     .5858     .8506 (.1%, 1.6%)      .8491     .8369
      U(θ−1, θ+1)         .9598 (1%, 5.7%)     .9504     .9081     .9990 (*, .4%)         .9988     .9948
      Beta(2,2)−.5+θ      1.0000 (*, *)        1.0000    1.0000    1.0000 (*, *)          1.0000    1.0000
      LN(log(θ)−.05, .1)  .5644 (5.9%, 39.3%)  .5331     .4053     .8307 (.8%, 29.9%)     .8237     .6395
      Gamma(8, θ/8)       .4868 (6%, 35%)      .4594     .3606     .7538 (.9%, 29.5%)     .7468     .5822
      E(1, θ−1)           .6581 (6.8%, 101.2%) .6160     .3271     .9125 (.8%, 75.6%)     .9051     .5195

The sign * denotes that the F test is as good as or inferior to the T or W test. The two terms in brackets are the proportions by which the F test improves on the T and W tests, respectively, in terms of power.

From Table 7, we can see that the fiducial test is substantially superior to the Student's t and Wilcoxon signed rank tests. Moreover, the Wilcoxon signed rank test performs worst in most cases, especially for the last three nonsymmetric distributions. A possible reason is that the Wilcoxon signed rank test is better suited to symmetric populations.
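The fiducial test compared here can be sketched as follows: the fiducial distribution of the mean is that of a Dirichlet(1,...,1)-weighted combination of the interval midpoints Y_i = (X_(i−1)+X_(i))/2, and the p-value for H0: θ ≤ θ0 is the fiducial probability of {mean ≤ θ0}. This is a minimal sketch of that construction, ignoring the exponential tail pieces by taking X_(0) = X_(1) and X_(n+1) = X_(n) (our simplification):

```python
import random

def fiducial_mean_draws(xs, reps, rng):
    """Monte Carlo draws from (a simplified version of) the fiducial
    distribution of the mean: sum_i V_i * Y_i with V ~ Dirichlet(1,...,1),
    realized as normalized standard exponentials."""
    s = sorted(xs)
    ext = [s[0]] + s + [s[-1]]          # tail pieces ignored (our simplification)
    mids = [(a + b) / 2 for a, b in zip(ext, ext[1:])]   # Y_1, ..., Y_{n+1}
    draws = []
    for _ in range(reps):
        z = [rng.expovariate(1.0) for _ in mids]
        t = sum(z)
        draws.append(sum(zi / t * y for zi, y in zip(z, mids)))
    return draws

def fiducial_pvalue(xs, theta0, reps=2000, rng=None):
    """p-value for H0: theta <= theta0, as the fiducial probability of
    {mean <= theta0}."""
    rng = rng or random.Random(0)
    d = fiducial_mean_draws(xs, reps, rng)
    return sum(1 for v in d if v <= theta0) / len(d)
```

By construction the p-value is monotone in θ0 and equals 1 (resp. 0) when θ0 lies above (resp. below) the range of the data.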


6. Conclusions and discussions

The fiducial method of this paper provides a recipe for deriving the fiducial distribution of a parameter under nonparametric situations. Technically, the asymptotic properties of the fiducial distributions are easy to study because a continuous weight vector (V_1, ..., V_{n+1}) is adopted. We also provide some applications of the method, such as to the mean, the variance and quantiles. Besides, the method can also be applied to goodness-of-fit testing, Behrens–Fisher problems and censored data.

In the paper, we use exponential distributions to approximate the tails of F. The approximation may not be as good as that over the intervals (x_(i−1), x_(i)], i = 2, ..., n, because the maximum entropy distribution on a semi-infinite interval does not exist without further information. Moreover, for population distributions with tails heavier than exponential, it is intuitively better to adopt heavier-tailed approximating distributions; but with no prior knowledge about the tails, such a choice is usually hard to make. In this regard, an ideal solution would be an approximating distribution that is data-driven and reflects the heaviness of the tails. For example, adopting the extreme value theory of Balkema and de Haan (1974) and Pickands (1975) to estimate the upper tail, one can first estimate the conditional probability F_{t_0}(y) = P(X − t_0 ≤ y | X > t_0) by a generalized Pareto distribution G_{ξ,β}(y), where t_0 < x_(n) is a high threshold and (ξ, β) can be estimated by maximum likelihood. Then one gets the tail estimator

  F̂(x) = 1 − (n_0/n)(1 − G_{ξ̂,β̂}(x − t_0))   for x > t_0,

with n_0 the number of x_i larger than or equal to t_0. For the lower tail, replace X by −X and the rest is essentially the same. This approach has advantages over exponential distributions, but it is not fully satisfactory either: the complicated form of F̂(x) makes the asymptotic properties of the resulting fiducial distribution hard to study. Therefore deeper investigations of the tails are still needed.
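The peaks-over-threshold tail estimator above can be sketched as follows. For brevity this sketch estimates (ξ, β) by the method of moments rather than maximum likelihood (the paper's suggestion), so the numerical values are only indicative:

```python
import math
import random

def gpd_moment_fit(excesses):
    """Method-of-moments estimates of the GPD parameters (xi, beta).

    For a GPD with xi < 1/2, the mean m = beta/(1-xi) and the variance
    v = m^2/(1-2*xi), giving xi = (1 - m^2/v)/2 and beta = m*(1-xi).
    (Maximum likelihood, as in the text, would be used in practice.)"""
    n = len(excesses)
    m = sum(excesses) / n
    v = sum((e - m) ** 2 for e in excesses) / n
    xi = 0.5 * (1.0 - m * m / v)
    beta = m * (1.0 - xi)
    return xi, beta

def tail_cdf(x, data, t0):
    """Tail estimator F^(x) = 1 - (n0/n)(1 - G_{xi,beta}(x - t0)) for x > t0."""
    exc = [d - t0 for d in data if d >= t0]
    n0, n = len(exc), len(data)
    xi, beta = gpd_moment_fit(exc)
    y = x - t0
    if abs(xi) < 1e-8:
        g = 1.0 - math.exp(-y / beta)                      # xi -> 0: exponential tail
    else:
        g = 1.0 - max(0.0, 1.0 + xi * y / beta) ** (-1.0 / xi)
    return 1.0 - (n0 / n) * (1.0 - g)
```

For exponentially distributed data the fitted shape ξ̂ should be close to 0, recovering the exponential tail approximation used in the body of the paper.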

Appendix A. Proofs of main results

Proof of Theorem 3.1. Note that

  √n(θ(H_n) − θ(F_n)) = √n Σ_{i=1}^{n+1} V_i φ_i(F_n) + √n R(H_n, F_n).

Due to Condition C2, it only needs to be shown that

  lim_{n→∞} sup_x | P*{ √n Σ_{i=1}^{n+1} V_i φ_i(F_n) / σ_h ≤ x } − Φ(x) | = 0  a.s.   (18)

By results in Breiman (1968),

  V_i =_d Z_i / Σ_{j=1}^{n+1} Z_j,  i = 1, ..., n+1,

where the Z_i's are independent standard exponentials. Therefore,

  P*( √n Σ_{i=1}^{n+1} V_i φ_i(F_n) ≤ σ_h x ) = P*( Σ_{i=1}^{n+1} Z_i φ_i(F_n) / Σ_{j=1}^{n+1} Z_j ≤ n^{−1/2} σ_h x ) = P*( Σ_{i=1}^{n+1} r_in ≤ − Σ_{i=1}^{n+1} b_in ),

where

  r_in = (φ_i(F_n) − x σ_h n^{−1/2}) (Z_i − 1) / ( Σ_{i=1}^{n+1} (φ_i(F_n) − x σ_h n^{−1/2})² )^{1/2},
  b_in = (φ_i(F_n) − x σ_h n^{−1/2}) / ( Σ_{i=1}^{n+1} (φ_i(F_n) − x σ_h n^{−1/2})² )^{1/2}.

Note that, conditionally on X_1, ..., X_n, the r_in are mutually independent random variables satisfying

  E* r_in = 0,  Var*( Σ_{i=1}^{n+1} r_in ) = Σ_{i=1}^{n+1} b_in² = 1,  Σ_{i=1}^{n+1} E*|r_in|³ ≤ C Σ_{i=1}^{n+1} |b_in|³.

Then, by the classical Berry–Esseen theorem, to prove (18) it only needs to be shown that

  lim_{n→∞} | Φ( − Σ_{i=1}^{n+1} b_in ) − Φ(x) | = 0  a.s.,   (19)

  Σ_{i=1}^{n+1} |b_in|³ → 0  a.s.   (20)

First, for (19), note that

  − Σ_{i=1}^{n+1} b_in = ( (n+1) x σ_h n^{−1/2} − Σ_{i=1}^{n+1} φ_i(F_n) ) / ( (n+1)( x σ_h n^{−1/2} − (n+1)^{−1} Σ_{i=1}^{n+1} φ_i(F_n) )² + Σ_{i=1}^{n+1} φ_i(F_n)² − (n+1)^{−1} ( Σ_{i=1}^{n+1} φ_i(F_n) )² )^{1/2}.   (21)

Since, by Conditions C3 and C4,

  ( n^{−1} Σ φ_i(F_n)² − n^{−2} ( Σ φ_i(F_n) )² ) / σ_h² → 1  a.s.,

the rest of the proof of (19) is similar to that of Lemma 2 in Tu (1987). Further, for (20), note that

  Σ_{i=1}^{n+1} |b_in|³ ≤ max_i |b_in| ≤ ( n^{−1/2} max_i |φ_i(F_n)| + n^{−1} σ_h |x| ) / ( n^{−1} Σ_{i=1}^{n+1} (φ_i(F_n) − x σ_h n^{−1/2})² )^{1/2} = J_1 + J_2,

where J_1 and J_2 correspond to the two terms of the numerator. Clearly,

  sup_x J_1 ≤ n^{−1/2} max_i |φ_i(F_n)| / ( n^{−1} Σ ( φ_i(F_n) − (n+1)^{−1} Σ φ_j(F_n) )² )^{1/2} → 0  a.s.,

and, substituting y = x σ_h n^{−1/2},

  sup_x J_2 = sup_y { n^{−1/2} |y| / ( n^{−1} Σ (φ_i(F_n) − y)² )^{1/2} } → 0  a.s.,

since the ratio is of order n^{−1/2} uniformly in y once n^{−1} Σ φ_i(F_n)² stays bounded away from zero. The rest is straightforward. □
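The representation V_i =_d Z_i/ΣZ_j used above says that the Dirichlet(1,...,1) weight vector can be generated from normalized standard exponentials. A quick sanity check of this representation (the function name is ours):

```python
import random

def dirichlet_weights(m, rng):
    """Generate (V_1, ..., V_m) ~ Dirichlet(1, ..., 1) as normalized
    standard exponentials, following Breiman's representation."""
    z = [rng.expovariate(1.0) for _ in range(m)]
    t = sum(z)
    return [zi / t for zi in z]
```

The weights are positive, sum to one, and each V_i has expectation 1/m, which a seeded Monte Carlo average confirms.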

Proof of Theorem 3.2. For the first conclusion, it needs to be proved that, for any γ ∈ (0,1), P{θ(F) ≤ θ̂_γ(X)} → γ as n → ∞. To this end, note that

  P{θ(F) ≤ θ̂_γ(X)} = P{ √n(θ(F) − θ(F_n)) ≤ √n(θ̂_γ(X) − θ(F_n)) }.

On the one hand, by Theorem A in Chapter 6 of Serfling (1980), √n(θ(F) − θ(F_n)) converges to N(0, σ_h²) in distribution under Conditions C1 and C5. On the other hand, Theorem 3.1 and the lemma in Section 1.5.6 of Serfling (1980) imply that √n(θ̂_γ(X) − θ(F_n)) converges to σ_h z_γ in probability, with z_γ the γth quantile of the standard normal distribution. The desired result then follows from Slutsky's theorem.

For the second conclusion, first consider the case θ < θ(F). Suppose that θ = θ(F) − δ with δ > 0. Note that for any γ ∈ (0,1),

  P{θ ≤ θ̂_γ(X)} = P{ √n(θ(F) − θ(F_n)) − √n δ ≤ √n(θ̂_γ(X) − θ(F_n)) } → 1.

Then

  P{θ ∈ C_α(X)} = P{θ ≤ θ̂_{1−α/2}(X)} − P{θ ≤ θ̂_{α/2}(X)} → 0.

The proof for the case θ > θ(F) is similar.

□


Proof of Theorem 3.4. Note that the condition σ_h² < ∞ implies that E|h(x,F)|^{2k} < ∞ and hence that max_i |X_i| = o(n^{1/2k}) with probability 1, by Lemma 4 in Owen (1990). Then

  φ_1(F_n) = Σ_{r=0}^{k} a_r(F_n) r! Σ_{s=0}^{r} (−1)^{r−s} X_(1)^s k_1^{r−s} / s! = Σ_{r=0}^{k} Σ_{s=0}^{r} O(1) · o(n^{s/2k}) · o(n^{(r−s)/2k}) = o(n^{1/2})  a.s.

Similarly, φ_{n+1}(F_n) = Σ_{r=0}^{k} a_r(F_n) r! Σ_{s=0}^{r} X_(n)^s k_2^{r−s} / s! = o(n^{1/2}) a.s. Therefore, to establish Conditions C3–C5, by a_0(F_n) = −(1/n) Σ_{r=1}^{k} a_r(F_n) Σ_{i=1}^{n} x_i^r, it only needs to be proved that

  (1/√n) Σ_{i=1}^{n} φ_i(F_n) = √n Σ_{r=1}^{k} a_r(F_n) [ (1/n) Σ_{i=2}^{n} ∫_{X_(i−1)}^{X_(i)} x^r dx / (X_(i) − X_(i−1)) − (1/n) Σ_{i=1}^{n} x_i^r ] → 0  a.s.,

  (1/n) Σ_{i=2}^{n} [φ_i(F_n)]² = Σ_{r_1=0}^{k} Σ_{r_2=0}^{k} a_{r_1}(F_n) a_{r_2}(F_n) (1/n) Σ_{i=2}^{n} ( ∫_{X_(i−1)}^{X_(i)} x^{r_1} dx / (X_(i) − X_(i−1)) ) ( ∫_{X_(i−1)}^{X_(i)} x^{r_2} dx / (X_(i) − X_(i−1)) ) → E[h(X,F)²]  a.s.,

and

  max_{2≤i≤n} |φ_i(F_n)| / √n ≤ (1/√n) Σ_{r=0}^{k} |a_r(F_n)| max_{2≤i≤n} | ∫_{X_(i−1)}^{X_(i)} x^r dx / (X_(i) − X_(i−1)) | → 0  a.s.,

which can easily be established in three cases: (i) all X_i ≥ 0; (ii) all X_i ≤ 0; (iii) there exists some i_0 such that X_(i_0−1) < 0 and X_(i_0) ≥ 0. □

In order to prove the theorems in Section 4, we need some lemmas.

Lemma 3. Let the X_i's be independent mean-zero random variables. Write S_n = Σ_{i=1}^{n} X_i, M_{p,n} = Σ_{i=1}^{n} E|X_i|^p and B_n = Σ_{i=1}^{n} E X_i². Then

  (i) E|S_n|^p ≤ 2 M_{p,n} for 1 < p ≤ 2;
  (ii) E|S_n|^p ≤ c(p) n^{p/2−1} M_{p,n} for p ≥ 2,

where c(p) is a universal constant depending only on p.

Proof. See Bahr and Esseen (1965) and Dharmadhikari and Jogdeo (1969). □

Lemma 4. Let e denote the base of the natural logarithm. Then

  sup_x |Φ(px) − Φ(x)| ≤ (p − 1)/√(2πe)  for p ≥ 1,  and  sup_x |Φ(px) − Φ(x)| ≤ (1/p − 1)/√(2πe)  for 0 < p < 1.

Proof. It is a direct consequence of Taylor expansions and hence omitted here. □

For simplicity, let μ = ∫ x dF(x), and

  W_2a = (1/n) Σ_{i=1}^{n+1} (Y_i − μ)²,  W_3a = (1/n) Σ_{i=1}^{n+1} |Y_i − μ|³,
  W_2b = (1/n) Σ_{i=1}^{n} (X_i − μ)²,  W_3b = (1/n) Σ_{i=1}^{n} |X_i − μ|³,

where Y_i = (X_(i−1) + X_(i))/2. Lemma 5 provides the connection between W_2a and W_2b.

Lemma 5. For any k_1, k_2 > 0, W_2a ≥ W_2b.

Proof. For μ ≤ X_(0), we have (X_(i−1) + X_(i))/2 − μ ≥ X_(i−1) − μ ≥ 0 and hence

  W_2a ≥ (1/n) Σ_{i=1}^{n+1} (X_(i−1) − μ)² ≥ W_2b.

For μ ≥ X_(n+1), the proof is similarly straightforward.



So we only need to prove the lemma in the case that there exists a k_0 ∈ {1, ..., n+1} such that X_(k_0−1) ≤ μ ≤ X_(k_0). Let

  S = Σ_{i=1}^{n+1} (X_(i−1) − μ)(X_(i) − μ) ≥ Σ_{i=1}^{k_0−1} (X_(i) − μ)² + Σ_{i=k_0+1}^{n+1} (X_(i−1) − μ)² + (X_(k_0−1) − μ)(X_(k_0) − μ) = n W_2b + (X_(k_0−1) − μ)(X_(k_0) − μ).

Then

  W_2a = W_2b/2 + [ (X_(0) − μ)² + (X_(n+1) − μ)² ]/(4n) + S/(2n)
       ≥ W_2b + (1/(4n)) [ (X_(0) − μ)² + (X_(n+1) − μ)² + 2(X_(k_0−1) − μ)(X_(k_0) − μ) ] ≥ W_2b.

This completes the proof of Lemma 5. □
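Lemma 5 can be checked numerically. In this sketch the boundary points are taken as X_(0) = X_(1) − 2k_1 and X_(n+1) = X_(n) + 2k_2 — our reading of the construction; the inequality in the lemma holds for any choice of boundary extension, as the proof above uses only X_(0) ≤ X_(1) and X_(n+1) ≥ X_(n):

```python
import random

def w2a_w2b(xs, k1, k2, mu):
    """Compute (W_2a, W_2b) for a sample, with midpoints Y_i of adjacent
    extended order statistics; boundary points X_(0) = X_(1) - 2*k1 and
    X_(n+1) = X_(n) + 2*k2 are our reading of the paper's construction."""
    s = sorted(xs)
    ext = [s[0] - 2 * k1] + s + [s[-1] + 2 * k2]
    n = len(xs)
    mids = [(a + b) / 2 for a, b in zip(ext, ext[1:])]   # Y_1, ..., Y_{n+1}
    w2a = sum((y - mu) ** 2 for y in mids) / n
    w2b = sum((x - mu) ** 2 for x in xs) / n
    return w2a, w2b
```

Random samples and arbitrary centering points μ never violate W_2a ≥ W_2b, in line with the lemma.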

Now let b_k = E|X − EX|^k and define the events

  A_1: W_2b ≥ σ²/2,  A_2: W_3a ≤ 2b_3,  A_3: max_i |X_i − μ|² ≤ C_0 σ² n^{1/2}  (C_0 > 0).

Write A = ∩_{i=1}^{3} A_i, so that Ā = ∪_{i=1}^{3} Ā_i. The following lemma provides the strong order bounds used in the proof of Theorem 4.1.

Lemma 6. Suppose that b_6 < ∞. Then

  (i) P(Ā_1) ≤ C b_3 / (σ³ √n),  (ii) P(Ā_2) ≤ C b_{9/2} / (σ^{9/2} √n),  (iii) P(Ā_3) ≤ C b_6 / (σ^6 √n).

Proof. (i) By the Markov inequality and (i) of Lemma 3,

  P(Ā_1) = P(W_2b − σ² ≤ −σ²/2) ≤ C E|W_2b − σ²|^{3/2} / σ³ ≤ C b_3 / (σ³ √n).

(ii) Note that, by the C_r inequality, W_3a ≤ W_3b + [ |X_(0) − μ|³ + |X_(n+1) − μ|³ ] / (2n), and

  |X_(0) − μ|³ + |X_(n+1) − μ|³ ≤ 19 max_i |X_i − μ|³ + 9 |X̄ − μ|³,

with X̄ the sample mean. Then

  P(W_3a > 2b_3) ≤ P( W_3b + (19/(2n)) max_i |X_i − μ|³ + (9/(2n)) |X̄ − μ|³ > 2b_3 )
    ≤ P(W_3b > 5b_3/4) + P( (19/(2n)) max_i |X_i − μ|³ > b_3/2 ) + P( (9/(2n)) |X̄ − μ|³ > b_3/4 ) = J_1 + J_2 + J_3.

Now by Lemma 3,

  J_1 = P( (1/n) Σ_{i=1}^{n} [ |X_i − μ|³ − b_3 ] > b_3/4 ) ≤ C E|X_1 − μ|^{9/2} / (b_3^{3/2} √n) ≤ C b_{9/2} / (σ^{9/2} √n)

and

  J_2 ≤ Σ_{i=1}^{n} P( |X_i − μ|³ > C n b_3 ) ≤ C b_{9/2} / (b_3^{3/2} √n) ≤ C b_{9/2} / (σ^{9/2} √n).

For J_3, such a bound is easily obtained by Lemma 3. This completes the proof of (ii).

(iii) Similarly to the proof of Lemma 4 in Owen (1990),

  P(Ā_3) = P( max_i |X_i − μ|² > C_0 σ² n^{1/2} ) ≤ Σ_{i=1}^{n} P( |X_i − μ|² > C_0 σ² n^{1/2} ) ≤ C b_6 / (σ^6 √n). □

Remark 1. From Lemma 6, we can see that P(Ā) ≤ C b_6 / (σ^6 √n).

Remark 2. By Lemma 5 and (i) of Lemma 6, we have P(W_2a < σ²/2) ≤ C b_3 / (σ³ √n).

Proof of Theorem 4.1. For p = 0, 1 the conclusion holds trivially, so we only need to consider p ∈ (0,1). Note that

  G̃_X(μ) = P*( Σ_{i=1}^{n+1} V_i Y_i ≤ μ ) = P*( Σ_{i=1}^{n+1} Z_i Y_i / Σ_{j=1}^{n+1} Z_j ≤ μ ) = P*( Σ_{i=1}^{n+1} r_in ≤ − Σ_{i=1}^{n+1} b_in ),


where

  r_in = (Y_i − μ)(Z_i − 1) / ( Σ_{i=1}^{n+1} (Y_i − μ)² )^{1/2},  b_in = (Y_i − μ) / ( Σ_{i=1}^{n+1} (Y_i − μ)² )^{1/2}.

Note that, conditionally on X_1, ..., X_n, the r_in are mutually independent random variables satisfying

  E* r_in = 0,  Var*( Σ_{i=1}^{n+1} r_in ) = Σ_{i=1}^{n+1} b_in² = 1,  Σ_{i=1}^{n+1} E*|r_in|³ ≤ C Σ_{i=1}^{n+1} |b_in|³,

which, together with the classical Berry–Esseen theorem, implies that

  p_0 := P( G̃_X(μ) ≤ p ) ≤ P( Φ( − Σ_{i=1}^{n+1} b_in ) ≤ p + C Σ_{i=1}^{n+1} |b_in|³ )   (22)

and

  p_0 ≥ P( Φ( − Σ_{i=1}^{n+1} b_in ) ≤ p − C Σ_{i=1}^{n+1} |b_in|³ ).   (23)

First we consider the right-hand side of the inequality in (22). Under Condition A and Lemma 5, we have

  Σ_{i=1}^{n+1} |b_in|³ = n W_3a / (n W_2a)^{3/2} ≤ 2 n b_3 / (n σ²/2)^{3/2} ≤ C b_3 / (σ³ √n).   (24)

Then by (22), (24) and Lemma 6, we get

  p_0 ≤ P( Φ( − Σ b_in ) ≤ p + C b_3/(σ³√n), A ) + P(Ā) = P( − Σ b_in ≤ Φ^{−1}( p + C b_3/(σ³√n) ), A ) + C b_6/(σ^6 √n).   (25)

Denote the first term on the right-hand side of the last equality in (25) by p_1. Since

  − Σ_{i=1}^{n+1} b_in = − Σ_{i=1}^{n} (X_i − μ) / ( √n W_2a^{1/2} )

and

  W_2a^{1/2} / W_2b^{1/2} ≤ ( 1 + [ (X_(0) − μ)² + (X_(n+1) − μ)² ] / ( 2 Σ_{i=1}^{n} (X_i − μ)² ) )^{1/2} ≤ 1 + C max_i (X_i − μ)² / Σ_{i=1}^{n} (X_i − μ)² ≤ 1 + C/√n,

where the second-to-last inequality holds because of Conditions A_1 and A_3, it follows from the Berry–Esseen theorem for self-normalized statistics and Lemma 4 that

  p_1 ≤ P( − Σ_{i=1}^{n} (X_i − μ) / ( √n W_2a^{1/2} ) ≤ Φ^{−1}( p + C b_3/(σ³√n) ) )
     ≤ P( − Σ_{i=1}^{n} (X_i − μ) / ( √n W_2b^{1/2} ) ≤ (1 + C/√n) Φ^{−1}( p + C b_3/(σ³√n) ) )
     ≤ p + C b_3/(σ³√n) + C/√n ≤ p + C b_6/(σ^6 √n),

which, together with (25), implies that p_0 also has this upper bound. On the other hand, starting from the inequality (23), we can get

  p_0 ≥ p − C b_6 / (σ^6 √n),

whose proof closely parallels the preceding work. □
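Theorem 4.1 says the fiducial probability G̃_X(μ) is first-order calibrated: P(G̃_X(μ) ≤ p) = p + O(n^{−1/2}). A rough Monte Carlo check of this calibration, using the midpoint construction only (tail pieces ignored — our simplification):

```python
import random

def fid_cdf_at(mu, xs, reps, rng):
    """Fiducial CDF of the mean at mu: P*(sum V_i Y_i <= mu), with Y_i the
    midpoints of adjacent order statistics (tail pieces ignored -- our
    simplification) and V ~ Dirichlet(1,...,1) via normalized exponentials."""
    s = sorted(xs)
    ext = [s[0]] + s + [s[-1]]
    mids = [(a + b) / 2 for a, b in zip(ext, ext[1:])]
    hits = 0
    for _ in range(reps):
        z = [rng.expovariate(1.0) for _ in mids]
        t = sum(z)
        if sum(zi * y for zi, y in zip(z, mids)) / t <= mu:
            hits += 1
    return hits / reps
```

Repeating over many Exp(1) samples (true mean 1), the fraction of samples with G̃_X(1) ≤ .5 should be roughly .5, up to the O(n^{−1/2}) error of the theorem and Monte Carlo noise.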


Proof of Theorem 4.2. We first prove the third part. Note that

  P( G̃_X(θ(F,p)) ≤ γ ) = P( P*( θ(H_n,p) ≤ θ(F,p) ) ≤ γ ) = P( P*( E_{H_n} I_{X ≤ θ(H_n,p)} ≤ E_{H_n} I_{X ≤ θ(F,p)} ) ≤ γ ) = P( P*( p − E_{H_n} I_{X ≤ θ(F,p)} ≤ 0 ) ≤ γ ).   (26)

Furthermore,

  E_{H_n} I_{X ≤ θ(F,p)} = (V_1/k_1) ∫_{−∞}^{X_(1)} I_{x ≤ θ(F,p)} e^{(x − X_(1))/k_1} dx + Σ_{i=2}^{n} V_i ∫_{X_(i−1)}^{X_(i)} I_{x ≤ θ(F,p)} dx / (X_(i) − X_(i−1)) + (V_{n+1}/k_2) ∫_{X_(n)}^{+∞} I_{x ≤ θ(F,p)} e^{−(x − X_(n))/k_2} dx ≤ V_1 + Σ_{i=2}^{n+1} V_i I_{X_(i−1) ≤ θ(F,p)}.

Thus the right-hand side of (26) is not less than

  P( P*( p − V_1 − Σ_{i=2}^{n+1} V_i I_{X_(i−1) ≤ θ(F,p)} ≤ 0 ) ≤ γ ) ≥ P( Φ( − Σ_{i=1}^{n+1} b̃_in ) + C Σ_{i=1}^{n+1} |b̃_in|³ ≤ γ ),

where

  b̃_1n = (p − 1)/D_n,  b̃_in = ( p − I_{X_(i−1) ≤ θ(F,p)} )/D_n,  i = 2, ..., n+1,

and

  D_n = ( Σ_{i=1}^{n} ( p − I_{X_i ≤ θ(F,p)} )² + (p − 1)² )^{1/2}.

The rest is similar to the proof of Theorem 4.1, and we obtain

  P( G̃_X(θ(F,p)) ≤ γ ) ≥ γ − C_p E| I_{X_1 ≤ θ(F,p)} − p |³ / ( √n ( E( I_{X_1 ≤ θ(F,p)} − p )² )^{3/2} ).   (27)

It should be mentioned that the power of I_{X_1 ≤ θ(F,p)} − p in the denominator of (27) is 3 rather than 6 as in Theorem 4.1, because |I_{X_1 ≤ θ(F,p)} − p| < 1, which makes the powers in the two denominators of (ii) and (iii) in Lemma 6 equal to 3. On the other hand, starting from E_{H_n} I_{X ≤ θ(F,p)} ≥ Σ_{i=1}^{n} V_i I_{X_(i) ≤ θ(F,p)}, and arguing along the way to (27), we can establish

  P( G̃_X(θ(F,p)) ≤ γ ) ≤ γ + C_p E| I_{X_1 ≤ θ(F,p)} − p |³ / ( √n ( E( I_{X_1 ≤ θ(F,p)} − p )² )^{3/2} ),

which completes the proof of the third part.

The proofs of (i) and (ii) are quite similar, so we only prove (i). It needs to be shown that, for any ε, δ > 0,

  P( P*( θ(H_n,p) < θ(F,p) − δ ) > ε ) → 0.   (28)

Similarly to the proof of (iii), the left-hand side of (28) is no more than

  P( Φ( − Σ_{i=1}^{n+1} b̃′_in ) + C Σ_{i=1}^{n+1} |b̃′_in|³ > ε )

with

  b̃′_1n = (p − 1)/D′_n,  b̃′_in = ( p − I_{X_(i−1) ≤ θ(F,p)−δ} )/D′_n,  i = 2, ..., n+1,

and

  D′_n = ( Σ_{i=1}^{n} ( p − I_{X_i ≤ θ(F,p)−δ} )² + (p − 1)² )^{1/2}.

It is easy to see that Σ_{i=1}^{n+1} |b̃′_in|³ converges to zero in probability and that

  − Σ_{i=1}^{n+1} b̃′_in = [ n^{−1/2} Σ_{i=1}^{n} ( I_{X_i ≤ θ(F,p)−δ} − E I_{X_i ≤ θ(F,p)−δ} ) + √n ( E I_{X_i ≤ θ(F,p)−δ} − p ) ] / ( n^{−1} Σ_{i=1}^{n} ( p − I_{X_i ≤ θ(F,p)−δ} )² )^{1/2} + o_p(1) → −∞,

where the divergence holds because E I_{X_i ≤ θ(F,p)−δ} − p < 0, which is guaranteed by F′(θ(F,p)) > 0. The desired result is then straightforward. □


Proof of Theorem 4.3. Define

  h(t,U) = (t − U_(i−1)) / (U_(i) − U_(i−1)),  U_(i−1) ≤ t ≤ U_(i),  i = 1, 2, ..., n+1.

Let z ∈ (0,1). Then

  P*( h(t,U) ≤ z ) = Σ_{i=1}^{n+1} P*( (t − U_(i−1))/(U_(i) − U_(i−1)) ≤ z, U_(i−1) ≤ t ≤ U_(i) ) = Σ_{i=1}^{n+1} P*( U_(i−1) + z(U_(i) − U_(i−1)) ≥ t, U_(i−1) ≤ t ≤ U_(i) ).

Since, for fixed t, P*( U_(i−1) + z(U_(i) − U_(i−1)) ≥ t, U_(i−1) ≤ t ≤ U_(i) ) is a continuous function of z (i = 1, 2, ...), so is P*{h(t,U) ≤ z}. Now note that

  P( θ(F,p) ≤ θ̂_γ(X) ) = P( P*( θ(H_n,p) ≤ θ(F,p) ) ≤ γ ) = P( P*( p ≤ H_n(θ(F,p)) ) ≤ γ )
    = P{ P*( p ≤ H_n(θ(F,p)), U_(1) ≤ p ≤ U_(n) ) + P*( p ≤ H_n(θ(F,p)), p < U_(1) or p > U_(n) ) ≤ γ }.   (29)

Clearly, we have

  P*( p ≤ H_n(θ(F,p)), p < U_(1) or p > U_(n) ) ≤ P*( p < U_(1) or p > U_(n) ) = (1 − p)^n + p^n.

Furthermore, we have

  P*( p ≤ H_n(θ(F,p)), U_(1) ≤ p ≤ U_(n) ) ≤ P*( ∪_{i=2}^{n} { (p − U_(i−1))/(U_(i) − U_(i−1)) ≤ (p − Ũ_(i−1))/(Ũ_(i) − Ũ_(i−1)), U_(i−1) ≤ p ≤ U_(i) } ) ≤ P*( h(p,U) ≤ h(p,Ũ) )

with Ũ = ( (X_1 − a)/(b − a), (X_2 − a)/(b − a), ..., (X_n − a)/(b − a) ). Note that U and Ũ have a common distribution. Therefore, it follows from the continuity of the distribution of h(t,U) that the left-hand side of the last equality in (29) is not less than

  P( P*( h(p,U) ≤ h(p,Ũ) ) ≤ γ − (1 − p)^n − p^n ) = γ − (1 − p)^n − p^n.

On the other hand, we have

  P( P*( p ≤ H_n(θ(F,p)) ) ≤ γ ) ≤ P( P*( p ≤ H_n(θ(F,p)), U_(1) ≤ p ≤ U_(n) ) ≤ γ )
    = P( P*( ∪_{i=2}^{n} { (p − U_(i−1))/(U_(i) − U_(i−1)) ≤ (p − Ũ_(i−1))/(Ũ_(i) − Ũ_(i−1)), U_(1) ≤ p ≤ U_(n) } ) ≤ γ )
    ≤ P( P*( (p − U_(1))/(U_(2) − U_(1)) ≤ (p − Ũ_(1))/(Ũ_(2) − Ũ_(1)) ) − P*( p < U_(1) or p > U_(n) ) ≤ γ ) ≤ γ + (1 − p)^n + p^n. □

Proof of Lemma 2. Note that the support of G(t) is the interval I_G = [min_{i,j} d_ij, max_{i,j} d_ij]. Therefore we only need to prove that, for any t_0 ∈ I_G such that G(t_0) ∈ (0,1), we have G(t_0) < G(t_0 + ε) for any ε > 0. Let

  S = { V ∈ R_+^n : Σ_{i=1}^{n} V_i = 1 },  S_{t_0} = { V ∈ R_+^n : V^T D V ≤ t_0 },  S̄_{t_0} = { V ∈ R_+^n : V^T D V = t_0 }.

Note that S̄_{t_0} is the boundary of S_{t_0}. Clearly, S is a closed convex subset of R^n and S_{t_0} is a closed subset. It is not hard to show that their intersection is nonempty, and hence so is the intersection of S and S̄_{t_0}. Take V_{t_0} ∈ S ∩ S̄_{t_0} and V_{t_0+ε} ∈ S ∩ S̄_{t_0+ε}. For λ ∈ [0,1], let

  f(λ) = [ λ V_{t_0} + (1 − λ) V_{t_0+ε} ]^T D [ λ V_{t_0} + (1 − λ) V_{t_0+ε} ].

Since f(λ) is continuous in λ, with lim_{λ→0} f(λ) = t_0 + ε and lim_{λ→1} f(λ) = t_0, for any t_1 ∈ (t_0, t_0 + ε) there exists a λ_0 ∈ (0,1) such that f(λ_0) = t_1. Denote λ_0 V_{t_0} + (1 − λ_0) V_{t_0+ε} by V_{t_1} and the Euclidean norm by ||·||_2. For any δ > 0, write

  W_δ = { V ∈ R_+^n : ||V − V_{t_1}||_2 ≤ δ and Σ_{i=1}^{n} V_i = 1 }.

Then, for δ sufficiently small, V^T D V lies in a neighborhood of t_1 for all V ∈ W_δ, and hence V^T D V ∈ (t_0, t_0 + ε). Thus G(t_0 + ε) − G(t_0) ≥ P(W_δ) > 0, as was to be proved. □

References

Balkema, A., de Haan, L., 1974. Residual life time at great age. Annals of Probability 2, 792–804.
Bahr, B. von, Esseen, C.G., 1965. Inequalities for the rth absolute moment of a sum of random variables, 1 ≤ r ≤ 2. Annals of Mathematical Statistics 36, 299–303.


Barnard, G.A., 1963a. Some logical aspects of the fiducial argument. Journal of the Royal Statistical Society Series B 25, 111–114.
Barnard, G.A., 1963b. Logical aspects of the fiducial argument. Bulletin of the International Statistical Institute 40, 870–883.
Barnard, G.A., 1977. Pivotal inference and the Bayesian controversy. Bulletin of the International Statistical Institute 47, 543–551.
Barnard, G.A., 1995. Pivotal models and the fiducial argument. International Statistical Review 63, 309–323.
Breiman, L., 1968. Probability. Addison-Wesley, Reading, MA.
David, H.A., Nagaraja, H.N., 2003. Order Statistics. John Wiley and Sons, New York.
Dawid, A.P., Stone, M., 1982. The functional-model basis of fiducial inference. Annals of Statistics 10, 1054–1067.
Dharmadhikari, S.W., Jogdeo, K., 1969. Bounds on moments of certain random variables. Annals of Mathematical Statistics 40, 1506–1509.
Efron, B., 1979. Bootstrap methods: another look at the jackknife. Annals of Statistics 7, 1–26.
Efron, B., 1981. Nonparametric standard errors and confidence intervals. Canadian Journal of Statistics 9, 139–158.
Efron, B., 1982. The Jackknife, the Bootstrap, and Other Resampling Plans. CBMS-NSF Conference Series in Applied Mathematics, No. 38. SIAM, Philadelphia.
Efron, B., 1987. Better bootstrap confidence intervals (with discussion). Journal of the American Statistical Association 82, 171–200.
Fisher, R.A., 1930. Inverse probability. Proceedings of the Cambridge Philosophical Society 26, 528–535.
Fraser, D.A.S., 1962. On the consistency of the fiducial method. Journal of the Royal Statistical Society Series B 24, 425–434.
Hannig, J., Iyer, H., Patterson, P., 2006. Fiducial generalized confidence intervals. Journal of the American Statistical Association 101, 254–269.
Li, X., Xu, X., Li, G., 2007. A fiducial argument for generalized p-value. Science in China Series A—Mathematics 37, 733–741.
Lindley, D.V., 1958. Fiducial distributions and Bayes's theorem. Journal of the Royal Statistical Society Series B 20, 102–107.
Owen, A.B., 1990. Empirical likelihood ratio confidence regions. Annals of Statistics 18, 90–120.
Pedersen, J.G., 1978. Fiducial inference. International Statistical Review 46, 147–170.
Pickands, J., 1975. Statistical inference using extreme order statistics. Annals of Statistics 3, 119–131.
Rubin, D.B., 1981. The Bayesian bootstrap. Annals of Statistics 9, 130–134.
Serfling, R., 1980. Approximation Theorems of Mathematical Statistics. John Wiley & Sons, New York.
Shi, J., Zheng, Z., 1985. Symmetric random weighting method. Chinese Science Bulletin 40, 582–585.
Student, 1908. The probable error of a mean. Biometrika 6, 1–25.
Tsui, K., Weerahandi, S., 1989. Generalized p values in significance testing of hypotheses in the presence of nuisance parameters. Journal of the American Statistical Association 84, 602–607.
Tu, D., 1987. The Edgeworth expansion for the random weighting method. Chinese Journal of Applied Probability and Statistics 3, 340–347.
Weerahandi, S., 1993. Generalized confidence intervals. Journal of the American Statistical Association 88, 899–905.
Xu, X., Ding, X., Zhao, S., 2009. New goodness-of-fit tests based on fiducial empirical distribution function. Computational Statistics and Data Analysis 53, 1132–1141.
Xu, X., Li, G., 2006. Fiducial inference in pivotal family of distributions. Science in China Series A—Mathematics 36, 340–360.
Zabell, S.L., 1992. R. A. Fisher and the fiducial argument. Statistical Science 7, 369–387.
Zhang, Q., Xu, X., 2010. Confidence intervals of performance measures for an M/G/1 queueing system. Communications in Statistics—Simulation and Computation 39, 501–516.
Zheng, Z., 1987. Random weighting method. Acta Mathematica Sinica 10, 247–253.
Journal of the Royal Statistical Society Series B 20, 102–107. Owen, A.B., 1990. Empirical likelihood ratio confidence regions. Annals of Statistics 18, 90–120. Pedersen, J.G., 1978. Fiducial inference. International Statistical Review 46, 147–170. Pickands, J., 1975. Statistical inference using extreme order statistics. Annals of Statistics 3, 119–131. Rubin, D.B., 1981. The Bayesian bootstrap. Annals of Statistics 9, 130–134. Serfling, R., 1980. In: Approximation Theorems of Mathematical StatisticsJohn Wiley & Sons. Shi, J., Zheng, Z., 1985. Symmetric random weighting method. Chinese Science Bulletin 40, 582–585. Student, 1908. The probable error of a mean. Biometrika 6, 1–25. Tsui, K., Weerahandi, S., 1989. Generalized p values in significance testing of hypotheses in presence of nuisance parameters. Journal of the American Statistical Association 84, 602–607. Tu, D., 1987. The edgeworth expansion for the random weighting method. Chinese Journal of Applied Probability and Statistics 3, 340–347. Weerahandi, S., 1993. Generalized confidence intervals. Journal of the American Statistical Association 88, 899–905. Xu, X., Ding, X., Zhao, S., 2009. New goodness-of-fit tests based on fiducial empirical distribution function. Computational Statistics and Data Analysis 53, 1132–1141. Xu, X., Li, G., 2006. Fiducial inference in pivotal family of distributions. Science in China Series A—Mathematics 36, 340–360. Zabel, S.L., 1992. R A Fisher and fiducial argument. Statistical Science 7, 369–387. Zhang, Q., Xu, X., 2010. Confidence intervals of performance measures for an M/G/1 queueing system. Communications in Statistics Simulation and Computation 39, 501–516. Zheng, Z., 1987. Random weighting method. Acta Mathematica Sinica 10, 247–253.