Estimators for the inverse powers of a normal mean

Christopher S. Withers (a), Saralees Nadarajah (b,*)

(a) Applied Mathematics Group, Industrial Research Limited, Lower Hutt, New Zealand
(b) School of Mathematics, University of Manchester, Manchester M13 9PL, UK
(*) Corresponding author. E-mail address: [email protected] (S. Nadarajah).

Journal of Statistical Planning and Inference 143 (2013) 441–455. doi: 10.1016/j.jspi.2012.06.018

Article history: Received 7 October 2011; Received in revised form 17 June 2012; Accepted 19 June 2012; Available online 28 June 2012.

Abstract

Given a sample from a normal population, unbiased estimators are obtained for positive powers of the mean, and estimators of almost exponentially small bias are obtained for negative powers of the mean. Simulation studies show superior performance of these estimators versus known ones.

Keywords: Bias reduction; Normal; Powers of the mean

1. Introduction

For a random sample of size n from a normal distribution with mean $\mu$ and variance v, let $\bar{x}$ and s denote the sample mean and the sample standard deviation, respectively. Consider the problem of estimating $1/\mu$. This problem arises in many areas, including:

1. In experimental nuclear physics, a charged particle momentum $p = 1/\mu$ when $\mu$ is the track curvature of a particle (Lamanna et al., 1981; Treadwell, 1982);
2. In the one-dimensional special case of the single period control problem, as discussed by Zellner (1971) and Zaman (1981b);
3. In the estimation of structural parameters of a simultaneous equation, as recognized in Zellner (1978) and described in Zaman (1981a);
4. In estimating various parameters of economic importance: for example, the investment multiplier, given an estimate of the marginal propensity to consume in a simple Keynesian model; or the long-run supply elasticity from Nerlove's supply response model (see, for example, Braulke, 1982, Eq. (5)).

A related problem is the estimation of the inverse of the coefficient of variation, that is, $\sqrt{v}/\mu$. There are many situations requiring this estimate:

1. When dealing with the coefficient of variation of the normal distribution directly, the expected value of $s/\bar{x}$ is infinite (see Johnson and Kotz, 1970, p. 75). This difficulty can be removed by estimating the inverse of the coefficient of variation (Chaturvedi and Rani, 1996);




2. The exact sampling distribution of the coefficient of variation is quite difficult to obtain for non-normal distributions. In some cases, it is easier to work with the inverse of the coefficient of variation (Sharma and Krishna, 1994; Ng, 2005);
3. In medical imaging, the inverse of the coefficient of variation is used to discriminate rolling leukocytes from cluttered environments (Dong et al., 2005; Sahoo et al., 2006);
4. In finance, the inverse of the coefficient of variation measures the performance index of banks or firms (Baker and Edelman, 1991; Turen, 1996), the risk/return ratio (Powers and Powers, 2009) and the risk-adjusted return (McAuliffe, 2010). The inverse of the coefficient of variation is also known as the Sharpe ratio in finance (Kaluszka, 2003);
5. In electrical and electronic engineering, the inverse of the coefficient of variation is used to estimate signal reliability or signal-to-noise ratios (Brown et al., 2001; Bergemann and Zhao, in press);
6. In physics, the inverse of the coefficient of variation is used to characterize the mixing efficiency of a flat cylinder driven by two surface acoustic waves (Frommelt et al., 2008);
7. Chauvet and Potter (2001) use the inverse of the coefficient of variation to examine whether the US business cycle expansion that started in March 1991 was a one-time unique event or whether its length resulted from a change in the stability of the US economy;
8. In structural design and construction, the inverse of the coefficient of variation is often used as a reliability index (Zubeck and Kvinson, 1996; Duerr, 2008);
9. Gauthier and Horne (2004) find that the inverse of the coefficient of variation emphasizes the potential for species discrimination;
10. In remote sensing, the inverse of the coefficient of variation provides an indication of the separation between two populations (Ahern, 1988).

The inverse of the coefficient of variation is also used as a descriptive parameter in faculty evaluation studies (Rousseau, 1998), to enhance permanent scatterer identification (Refice et al., 2003), to detect robust patterns in the spread of epidemics (Crepey and Barthelemy, 2007), and to model genetic variation in humans (Pavan et al., 2009).

Another related problem is the estimation of the ratio of two independent normal means. This problem arises frequently in sample surveys and in the biological sciences, for example, in the safety assessment of a substance such as a new pharmaceutical compound or a pesticide relative to a vehicle or negative control. Kotz and Johnson (1986, pp. 639–646) provide an excellent discussion of the application areas.

There have been only a small number of approaches to estimating $1/\mu$ when v is assumed known. The maximum likelihood estimator of $1/\mu$ is

$$\frac{1}{\bar{x}}, \qquad (1.1)$$

an estimator with infinite variance. Zellner (1978) provided the following improved estimator of $1/\mu$:

$$\frac{\bar{x}}{\bar{x}^2 + v/n}, \qquad (1.2)$$

again a biased estimator, obtained in the context of minimizing posterior expected loss. Srivastava and Bhatnagar (1981) considered the class of estimators of the form of (1.2),

$$\frac{\bar{x}}{\bar{x}^2 + gv/n}, \qquad (1.3)$$

where g is a constant, and showed that these estimators have bias of the order $O(n^{-2})$. Voinov (1985) provided unbiased estimators for powers of $1/\mu$ under the assumption that the sign of $\mu$ is known. In particular, Voinov (1985) provided the following unbiased estimator of $1/\mu$:

$$\sqrt{\frac{2n\pi}{v}}\,\exp\left\{\frac{n\bar{x}^2}{2v}\right\}\Phi\left(-\frac{\sqrt{n}\,\bar{x}}{\sqrt{v}}\right), \qquad (1.4)$$

where $\Phi(\cdot)$ denotes the standard normal distribution function.

Assuming both $\mu$ and v are unknown, the estimators for $\sqrt{v}/\mu$ corresponding to (1.1), (1.3) and (1.4) are

$$\frac{s}{\bar{x}}, \qquad (1.5)$$

$$\frac{s\bar{x}}{\bar{x}^2 + gs^2/n} \qquad (1.6)$$

and

$$\sqrt{2n\pi}\,\exp\left\{\frac{n\bar{x}^2}{2s^2}\right\}\Phi\left(-\frac{\sqrt{n}\,\bar{x}}{s}\right), \qquad (1.7)$$

respectively.
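Each of (1.1)–(1.7) is an explicit function of $\bar{x}$, s and known constants, so all seven are cheap to compute. The following is a minimal sketch in R (the language used for the simulations in Section 4); the function names and the default g = 1 are our illustrative choices, not the paper's code:

    # Sketch: classical estimators (1.1)-(1.4) of 1/mu (v known) and
    # (1.5)-(1.7) of sqrt(v)/mu (v unknown).
    est_known_v <- function(x, v, g = 1) {
      n <- length(x)
      xbar <- mean(x)
      c(mle     = 1 / xbar,                                 # (1.1), infinite variance
        zellner = xbar / (xbar^2 + v / n),                  # (1.2)
        sb      = xbar / (xbar^2 + g * v / n),              # (1.3), bias O(n^-2)
        voinov  = sqrt(2 * n * pi / v) * exp(n * xbar^2 / (2 * v)) *
                  pnorm(-sqrt(n) * xbar / sqrt(v)))         # (1.4), unbiased for mu > 0
    }

    est_inv_cv <- function(x, g = 1) {
      n <- length(x)
      xbar <- mean(x)
      s <- sd(x)
      c(naive  = s / xbar,                                  # (1.5)
        sb     = s * xbar / (xbar^2 + g * s^2 / n),         # (1.6)
        voinov = sqrt(2 * n * pi) * exp(n * xbar^2 / (2 * s^2)) *
                 pnorm(-sqrt(n) * xbar / s))                # (1.7)
    }

Note that the factor $\exp\{n\bar{x}^2/(2v)\}$ in (1.4) and (1.7) can overflow in floating point when $n\bar{x}^2/v$ is large, so a log-scale implementation may be needed for large n.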


Confidence intervals for ratios of normal means have been developed by many authors over the past 50 years. We mention Fieller (1954), Bennett (1963), Chakravarti (1971), Steffens (1971), James et al. (1974), Sevin (1978), Mendoza and Gutierrez-Pena (1999), Tsao and Hwang (1999), Li and Zhang (2003), Diaz-Frances and Sprott (2004), Lee and Lin (2004), Kim (2005), Mendoza (2005), Dilba et al. (2006) and Shi et al. (2008). Confidence intervals for ratios of means from correlated normals have also been studied; see Bose (1942), Roy and Potthoff (1958), Malley (1982), Hannig et al. (2003) and Hannig (2006). The focus of this paper is on point estimators, although the theory developed here can be used to obtain confidence intervals.

In this paper, we consider the general problem of estimating $v^I\mu^J$ given a sample from $N(\mu, v)$, for I real and J an integer. We show how to construct estimators with bias of the order $O(n^{-k})$ for any $k > 1$. In particular, we provide unbiased estimators for $\mu^J$ when $J \ge 0$ and estimators of exponentially small bias for $\mu^J$ when $J < 0$. We believe that this is the first time such general results have been obtained for functions of the normal mean and normal variance.

The results are organized as follows. In Section 2, we suppose that for given (n, f) we observe independent variables

$$fs^2 \sim v\chi^2_f, \qquad \bar{x} \sim N(\mu, v/n), \qquad (1.8)$$

where $I > -f/2$ and $\sim$ means "distributed as". For $J \ge 0$, unbiased estimators are given. For $J < 0$, we give estimators with almost exponentially small bias as $n \to \infty$. In particular, we estimate $1/\mu$ and $v/\mu^2$. Section 3 gives estimators of $\prod_{i=1}^{p} v_i^{I_i}\mu_i^{J_i}$ based on independent samples of size $n_i$ from $N(\mu_i, v_i)$, $i = 1, 2, \ldots, p$. For $\{J_i \ge 0\}$ they are unbiased; otherwise their bias is almost exponentially small as $\min n_i \to \infty$. Finally, Section 4 performs simulations to show that our estimators for $1/\mu$ and $\sqrt{v}/\mu$ are better than (1.1), (1.3), (1.4), (1.5), (1.6) and (1.7).

Throughout this paper, we use the notation $a_\ell \sim b_\ell$ to mean that $a_\ell/b_\ell \to 1$ as $\ell$ approaches its limiting value. As noted, we also use $\sim$ to mean "distributed as"; the distinction between these two uses should be clear from the context.

2. One sample case

Suppose (1.8) holds. Then $E(1/\bar{x})$ does not exist, though its principal value does. The problem of estimating the latter seems a difficult one; see Withers and Nadarajah (2010). We shall give estimators for

$$\tau = v^I\mu^J \qquad (2.1)$$

when $I > -f/2$ and J is an integer. These are unbiased if $J \ge 0$ and have almost exponentially small bias if $J < 0$. We assume that if $J < 0$ then

$$0 < \mu_0 < |\mu|, \qquad (2.2)$$

where $\mu_0$ is known. If $J < 0$ we base our estimator on

$$\mu_* = \bar{x}(1 - I_n) + cI_n, \qquad (2.3)$$

where $I_n = I(|\bar{x}| \le \mu_0)$ and $0 < |c| \le 1$ is arbitrary, for example $c = \mu_0\,\mathrm{sign}\,\bar{x}$. Set $\theta = (\mu, v)$,

$$\hat{\mu} = \begin{cases} \bar{x} & \text{if } J \ge 0, \\ \mu_* & \text{if } J < 0, \end{cases} \qquad (2.4)$$

$$\hat{\theta} = (\hat{\mu}, s^2), \qquad (2.5)$$

$$\lambda_I = E(\chi^2_f/f)^I = (f/2)^{-I}\,\Gamma(f/2 + I)/\Gamma(f/2), \qquad (2.6)$$

$$a_i^{IJ} = \binom{J}{2i} EN^{2i}/\lambda_{I+i}. \qquad (2.7)$$

Let $N \sim N(0, 1)$, so $EN^{2i} = 1 \cdot 3 \cdot 5 \cdots (2i - 1) = (2i)!\,2^{-i}/i!$.

Theorem 2.1. Suppose (2.1)–(2.7) hold. For $k \ge 1$, set

$$\tau^{IJ}_{nk}(\theta) = v^I\mu^J \sum_{0 \le i < k} a_i^{IJ}(n^{-1}v\mu^{-2})^i. \qquad (2.8)$$

We have the following:

(a) $\tau^{IJ}_{nk}(\hat{\theta})$ estimates $\tau$ with bias $\sim n^{-k}$ as $n \to \infty$;
(b) if $J \ge 0$ and $k = [J/2] + 1$ then $\tau^{IJ}_{nk}(\hat{\theta})$ is unbiased;
(c) suppose $J < 0$; fix $0 < A < 1$ and set $k_n = 2[n^A/2]$; then $\tau^{IJ}_{nk_n}(\hat{\theta})$ estimates $v^I\mu^J$ with bias

$$\sim \exp\{-(1 - A)n^A\ln n\} \qquad (2.9)$$

as $n \to \infty$.
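Since (2.8) is a finite series in the plug-in values $\hat{\mu}$ and $s^2$, the estimator can be computed directly. Below is a minimal sketch in R, assuming $f = n - 1$ and $c = \mu_0\,\mathrm{sign}\,\bar{x}$ (the choices used in Section 4); the function name and the default $\mu_0$ are ours:

    # Sketch of tau_nk^{IJ}(theta-hat) of (2.8).
    # mu0 is the known lower bound (2.2) on |mu|; it is used only when J < 0.
    tau_hat <- function(x, I, J, k, mu0 = 0.5) {
      n <- length(x); f <- n - 1
      xbar <- mean(x); s2 <- var(x)
      # (2.3)-(2.4): truncate xbar away from 0 when J < 0, with c = mu0 * sign(xbar)
      mu <- if (J < 0 && abs(xbar) <= mu0) mu0 * sign(xbar) else xbar
      lambda <- function(I) (f / 2)^(-I) * exp(lgamma(f / 2 + I) - lgamma(f / 2))  # (2.6)
      i <- 0:(k - 1)
      EN2i <- factorial(2 * i) / (2^i * factorial(i))        # E N^(2i) for N ~ N(0,1)
      a <- choose(J, 2 * i) * EN2i / sapply(I + i, lambda)   # coefficients (2.7)
      s2^I * mu^J * sum(a * (s2 / (n * mu^2))^i)             # series (2.8) at theta-hat
    }

R's choose(J, 2i) evaluates the generalized binomial coefficient for negative J, which is what (2.7) requires when J < 0.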


If $J < 0$ then $\tau^{IJ}_{n\infty}(\theta)$ converges or diverges according as $v\mu^{-2} <$ or $> n/f$. The proof of this theorem is given in Appendix A. We now give some examples. For $r \ge 0$, set $(J)_r = J(J - 1)\cdots(J - r + 1)$, so

$$(J)_r = \begin{cases} J!/(J - r)! & \text{if } J \ge 0, \\ (-1)^r(-J + r - 1)_r & \text{if } J < 0. \end{cases}$$

Example 2.1. If $J > 0$, an unbiased estimator of $\mu^J$ is $A_J(\hat{\theta})$, where

$$A_J = A_J(\theta) = \mu^J \sum_{0 \le 2i \le J} (J)_{2i}\,(f/2 + i - 1)_i^{-1}(\Delta/4)^i/i! \qquad (2.10)$$

and $\Delta = fn^{-1}v\mu^{-2}$. The first few are

$$A_1 = \mu, \qquad A_2 = \mu^2(1 + \Delta f^{-1}), \qquad A_3 = \mu^3(1 + 3\Delta f^{-1}),$$

$$A_4 = \mu^4\{1 + 6\Delta f^{-1} + 3\Delta^2 f^{-1}(f + 2)^{-1}\},$$

$$A_5 = \mu^5\{1 + 10\Delta f^{-1} + 15\Delta^2 f^{-1}(f + 2)^{-1}\},$$

$$A_6 = \mu^6\{1 + 15\Delta f^{-1} + 45\Delta^2 f^{-1}(f + 2)^{-1} + 15\Delta^3 f^{-1}(f + 2)^{-1}(f + 4)^{-1}\},$$

$$A_7 = \mu^7\{1 + 21\Delta f^{-1} + 105\Delta^2 f^{-1}(f + 2)^{-1} + 105\Delta^3 f^{-1}(f + 2)^{-1}(f + 4)^{-1}\}.$$
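A sketch of (2.10) in R, evaluated at the plug-in $\hat{\theta} = (\bar{x}, s^2)$ with $f = n - 1$; the helper ff for the falling factorial $(a)_r$ is ours:

    # Sketch of the unbiased estimator A_J of mu^J for J > 0, from (2.10).
    ff <- function(a, r) if (r == 0) 1 else prod(a - 0:(r - 1))  # falling factorial (a)_r
    A_J <- function(x, J) {
      n <- length(x); f <- n - 1
      mu <- mean(x)
      Delta <- f * var(x) / (n * mu^2)          # Delta = f v / (n mu^2), at theta-hat
      terms <- sapply(0:(J %/% 2), function(i)
        ff(J, 2 * i) / ff(f / 2 + i - 1, i) * (Delta / 4)^i / factorial(i))
      mu^J * sum(terms)
    }

For J = 2 this reduces to $\bar{x}^2(1 + \Delta f^{-1})$, matching $A_2$ above.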

Example 2.2. If $0 < J < f$, an unbiased estimator of $(v^{-1/2}\mu)^J$ is $B_J(\hat{\theta})$, where

$$B_J = B_J(\theta) = \Delta_1^J \sum_{0 \le 2i \le J} (J)_{2i}\,g_0\,g_{J-2i}^{-1}(\Delta/4)^i/i!, \qquad (2.11)$$

where $g_j = \Gamma((f - j)/2)$, $\Delta_1 = (vf/2)^{-1/2}\mu$ and $\Delta = fn^{-1}v\mu^{-2} = 2/(n\Delta_1^2)$. Since

$$g_0 g_{2j}^{-1} = \begin{cases} (f/2 - 1)_j & \text{if } j \ge 0, \\ (f/2 - 1 - j)_{-j}^{-1} & \text{if } j < 0, \end{cases}$$

we have

$$B_1 = \Delta_1 g_0 g_1^{-1}, \qquad B_2 = \Delta_1^2(f/2 - 1 + \Delta/2), \qquad B_3 = \Delta_1^3 g_0(g_3^{-1} + 3g_1^{-1}\Delta/2),$$

$$B_4 = \Delta_1^4\{(f/2 - 1)_2 + 3(f/2 - 1)\Delta + 3\Delta^2/4\},$$

$$B_5 = \Delta_1^5 g_0(g_5^{-1} + 5g_3^{-1}\Delta + 15g_1^{-1}\Delta^2/4),$$

$$B_6 = \Delta_1^6\{(f/2 - 1)_3 + 15(f/2 - 1)_2\Delta/2 + 45(f/2 - 1)\Delta^2/4 + 15\Delta^3/8\},$$

and so on.

Example 2.3. If $J < 0$, an estimator of $\mu^J$ with bias as in (2.9) is $A_J(\hat{\theta})$ of (2.10) with the summation range $0 \le 2i \le J$ replaced by $0 \le i < k_n = [n^A]$, where $0 < A < 1$. In particular,

$$A_{-1} = \mu^{-1}\sum_{i=0}^{k_n - 1}(2i)!\,(f/2 + i - 1)_i^{-1}(\Delta/4)^i/i! \qquad (2.12)$$

and

$$A_{-2} = \mu^{-2}\sum_{i=0}^{k_n - 1}(2i + 1)!\,(f/2 + i - 1)_i^{-1}(\Delta/4)^i/i!,$$

where $\Delta = fn^{-1}v\mu^{-2}$.
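A sketch of (2.12) in R, with $f = n - 1$, $c = \mu_0\,\mathrm{sign}\,\bar{x}$ and the default A = 1/4 used in Section 4; the known lower bound $\mu_0$ of (2.2) must be supplied:

    # Sketch of A_{-1} of (2.12): the almost-unbiased estimator of 1/mu.
    A_minus1 <- function(x, mu0, A = 1/4) {
      n <- length(x); f <- n - 1
      xbar <- mean(x)
      mu <- if (abs(xbar) <= mu0) mu0 * sign(xbar) else xbar  # (2.3)
      Delta <- f * var(x) / (n * mu^2)
      kn <- max(1, floor(n^A))                                # k_n = [n^A]
      total <- 0
      for (i in 0:(kn - 1)) {
        poch <- if (i == 0) 1 else prod(f / 2 + 0:(i - 1))    # (f/2 + i - 1)_i
        total <- total + factorial(2 * i) / poch * (Delta / 4)^i / factorial(i)
      }
      total / mu
    }

With A = 1/4, even n = 1000 gives $k_n = 5$, so the truncated series has at most a handful of terms.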

Example 2.4. If $J < 0$, an estimator of $(v^{-1/2}\mu)^J$ with bias as in (2.9) is $B_J(\hat{\theta})$ of (2.11) with the summation range $0 \le 2i \le J$ replaced by $0 \le i < k_n = [n^A]$, where $0 < A < 1$. In particular, for $J = -2m$ even, $v^m/\mu^{2m}$ is estimated using

$$B_{-2m} = \Delta_1^{-2m}\sum_{i=0}^{k_n - 1}(2m + 2i - 1)_{2i}\,(f/2 + i + m - 1)_{i+m}^{-1}(\Delta/4)^i/i!,$$

where $\Delta_1 = (vf/2)^{-1/2}\mu$ and $\Delta = fn^{-1}v\mu^{-2} = 2/(n\Delta_1^2)$.
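The simulations in Section 4 use the odd case J = −1 of this construction, i.e. the series (2.11) truncated at $k_n$ terms, to estimate $\sqrt{v}/\mu$. A sketch in R, using $g_0 g_{-1-2i}^{-1} = \Gamma(f/2)/\Gamma((f+1)/2 + i)$ computed on the log scale; defaults as in A_minus1 above:

    # Sketch of B_{-1}: the truncated-series estimator of sqrt(v)/mu (Example 2.4).
    B_minus1 <- function(x, mu0, A = 1/4) {
      n <- length(x); f <- n - 1
      xbar <- mean(x); s2 <- var(x)
      mu <- if (abs(xbar) <= mu0) mu0 * sign(xbar) else xbar  # (2.3) again
      Delta1 <- mu / sqrt(s2 * f / 2)                         # Delta_1 at theta-hat
      Delta <- 2 / (n * Delta1^2)
      i <- 0:(max(1, floor(n^A)) - 1)                         # i = 0, ..., k_n - 1
      terms <- factorial(2 * i) *                             # (-1)_{2i} = (2i)!
        exp(lgamma(f / 2) - lgamma((f + 1) / 2 + i)) *        # g_0 / g_{-1-2i}
        (Delta / 4)^i / factorial(i)
      sum(terms) / Delta1
    }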


From Theorem 2.1 it follows that the relative bias of $\tau^{IJ}_{nk}(\hat{\theta})$ is determined by I, J, f and $\lambda = (v/n)^{1/2}/|\mu|$, the coefficient of variation.

3. Multi-sample case

Suppose $\bar{x}_i \sim N(\mu_i, v_i/n_i)$ and $f_i s_i^2/v_i \sim \chi^2_{f_i}$ for $i = 1, 2, \ldots, p$, with all variables independent. We wish to estimate $\tau(\theta) = \prod_{i=1}^{p} v_i^{I_i}\mu_i^{J_i}$, where $I_i > -f_i/2$ and $J_i$ is an integer. If $J_i < 0$, assume $0 < \mu_{0i} < |\mu_i|$, where $\mu_{0i}$ is known. Set $\theta = (\mu, v)$,

$$\hat{\mu}_i = \begin{cases} \bar{x}_i & \text{if } J_i \ge 0, \\ \mu_{*i} & \text{if } J_i < 0, \end{cases}$$

where $\mu_{*i} = \bar{x}_i(1 - I_{ni}) + c_i I_{ni}$, $I_{ni} = I(|\bar{x}_i| \le \mu_{0i})$, $\hat{v}_i = s_i^2$, and $0 \le |c_i| \le 1$ is arbitrary. For $k \in \mathbb{N}^p$, where $\mathbb{N} = \{0, 1, 2, \ldots\}$, set

$$\tau^{IJ}_{nk}(\theta) = \prod_{i=1}^{p}\tau^{I_iJ_i}_{n_ik_i}(\theta_i)$$

of (2.8) with $\theta_i = (\mu_i, v_i)$. Then Theorem 2.1 implies:

Theorem 3.1. With the notation set as above, we have the following:

(a) $\tau^{IJ}_{nk}(\hat{\theta})$ estimates $\tau(\theta)$ with bias $\sim \sum_{i=1}^{p} n_i^{-k_i}$;
(b) if each $J_i \ge 0$ and $k_i = [J_i/2] + 1$ then $\tau^{IJ}_{nk}(\hat{\theta})$ is unbiased;
(c) set

$$k_i = \begin{cases} 2[n_i^{A_i}/2], \text{ where } 0 < A_i < 1, & \text{if } J_i < 0, \\ [J_i/2] + 1 & \text{if } J_i \ge 0. \end{cases}$$

Then $\tau^{IJ}_{nk}(\hat{\theta})$ has bias

$$\sim \exp\left\{-\sum_{J_i < 0}(1 - A_i)n_i^{A_i}\ln n_i\right\}$$

as $\min n_i \to \infty$.

Example 3.1. An estimator of $\mu_1/\mu_2$ with bias $\sim \exp\{-(1 - A)n_2^A\ln n_2\}$ as $n_2 \to \infty$ is $\bar{x}_1 A_{f_2n_2}(\mu_{*2}, s_2^2)$, where $A_{fn}(\theta)$ is the $A_{-1}$ of (2.12).

4. A simulation study

Here, we perform simulations to compare the estimators proposed in Section 2 with known ones. We conduct two simulation studies. The R (R Development Core Team, 2011) codes used to perform these simulation studies can be obtained from the corresponding author, email: [email protected]

Firstly, we compare the estimator for $1/\mu$ given by (2.12), assuming that v is unknown, versus those given by (1.1), (1.3) and (1.4), assuming that v is known. We use two criteria for comparing the four estimators: mean squared error and bias. We computed (2.12), (1.1), (1.3) and (1.4) by simulating ten thousand replications of samples of size n from N(1, v) for v = 0.2, 0.5, 2, 10 and n = 10, 20, ..., 1000. We take $f = n - 1$, $\mu_0 = 0.5$ and $c = \mu_0\,\mathrm{sign}\,\bar{x}$ in (2.12). Note that $\mu_0$ is half the true value of $\mu$; this is not an unreasonable lower bound on $\mu$. Even smaller values can be taken for $\mu_0$, but they would decrease the efficiency of the proposed estimators. In practice, $\mu_0$ should be chosen as close as possible to $\mu$, but not so large that it fails to be a lower bound.

The constants g in (1.3) and A in (2.12) for a given v are chosen to minimize the respective variances. For g, we follow this procedure (a code sketch follows the list):

- select a range of values for g > 0;
- for each g, compute

$$E_n = \frac{1}{10\,000}\sum_{k=1}^{10\,000}\left\{\frac{\bar{x}_k}{\bar{x}_k^2 + gv/n} - \frac{1}{10\,000}\sum_{j=1}^{10\,000}\frac{\bar{x}_j}{\bar{x}_j^2 + gv/n}\right\}^2$$

for all n = 10, 20, ..., 1000, where $\bar{x}_k$ denotes the sample mean for the kth replication;
- choose the optimal g as the one that minimizes $E_{10} + E_{20} + \cdots + E_{1000}$.
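A sketch of this grid search in R; only the sample means enter (1.3), so they are drawn directly from N(1, v/n), and, as in the procedure above, the same replications are reused for every g (the grids are ours):

    # Sketch of the g-selection: minimize the summed Monte Carlo variances E_n.
    pick_g <- function(v, g.grid, n.grid = seq(10, 1000, by = 10), R = 10000) {
      xbar <- sapply(n.grid, function(n) rnorm(R, mean = 1, sd = sqrt(v / n)))
      score <- sapply(g.grid, function(g) {
        En <- sapply(seq_along(n.grid), function(j) {
          est <- xbar[, j] / (xbar[, j]^2 + g * v / n.grid[j])  # estimator (1.3)
          mean((est - mean(est))^2)                             # E_n
        })
        sum(En)                                 # E_10 + E_20 + ... + E_1000
      })
      g.grid[which.min(score)]
    }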


For A, we follow the analogous procedure (a code sketch again follows the list):

- select a range of values for $A \in (0, 1)$;
- for each A, compute

$$E_n = \frac{1}{10\,000}\sum_{k=1}^{10\,000}\left\{A_{-1}^{(k)} - \frac{1}{10\,000}\sum_{j=1}^{10\,000}A_{-1}^{(j)}\right\}^2$$

for all n = 10, 20, ..., 1000, where $A_{-1}^{(k)}$ is the $A_{-1}$ given by (2.12) for the kth replication;
- choose the optimal A as the one that minimizes $E_{10} + E_{20} + \cdots + E_{1000}$.
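A sketch in R, reusing A_minus1 from Section 2; full samples are now needed because (2.12) involves $s^2$. The replication count is reduced here to keep the sketch cheap:

    # Sketch of the A-selection for A_{-1}: one fixed set of samples per n,
    # reused for every candidate A.
    pick_A <- function(v, A.grid, n.grid = seq(10, 1000, by = 10), R = 1000) {
      samples <- lapply(n.grid, function(n)
        replicate(R, rnorm(n, mean = 1, sd = sqrt(v)), simplify = FALSE))
      score <- sapply(A.grid, function(A)
        sum(sapply(samples, function(xs) {
          est <- sapply(xs, A_minus1, mu0 = 0.5, A = A)
          mean((est - mean(est))^2)               # E_n for this A and n
        })))
      A.grid[which.min(score)]
    }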

The same ten thousand replications are used for every g and for every A. The optimal choices given by these procedures allow the estimators (1.3) and (2.12) to realize their greatest potential. The optimal choices for different v are different. However, the optimal choices of g and A appeared to take values around 1 and 1/4, respectively. Hence, for a fair comparison of the simulation results, we shall choose g = 1 and A = 1/4 for all v.

The plots of the mean squared error and the bias versus $n \ge 10$ for the selected v and for the four estimators are shown in Figs. 1 and 2. The x-axes are plotted on log scale. The following observations can be drawn from the figures:

1. the mean squared errors generally decrease to zero with increasing n;
2. the smallest mean squared errors for either v = 0.2, 0.5 and all n, or v = 2, 10 and large n, are by the new estimator given by (2.12);
3. the second smallest mean squared errors for either v = 0.2, 0.5 and all n, or v = 2, 10 and large n, are by (1.3);
4. the third smallest mean squared errors for either v = 0.2, 0.5 and all n, or v = 2, 10 and large n, are by (1.4);
5. the largest mean squared errors for either v = 0.2, 0.5 and all n, or v = 2, 10 and large n, are by (1.1);
6. the smallest mean squared errors for v = 2, 10 and small n are by (1.3);
7. the second smallest mean squared errors for v = 2, 10 and small n are by (1.4) and the new estimator given by (2.12);
8. the largest mean squared errors for v = 2, 10 and small n are by (1.1);
9. the mean squared errors generally increase with increasing v;
10. the biases generally approach zero with increasing n;
11. the biases are generally positive for (1.1);
12. the biases are generally negative for (1.3) and the new estimator given by (2.12);
13. the smallest biases for all v and large n are by (1.3), (1.4) and the new estimator given by (2.12);
14. the smallest biases for all v and small n are by (1.4);
15. the second smallest biases for all v and small n are by (1.3);
16. the third smallest biases for all v and small n are by the new estimator given by (2.12);
17. the largest biases for all v and for all n are by (1.1);
18. the biases generally increase with increasing v.

The new estimator gives the best performance with respect to mean squared error for all large n. It also gives the best performance with respect to mean squared error for small v and for all n. The best performance with respect to bias is by (1.4), an unbiased estimator.

We noted earlier in Section 1 that the estimator (1.1) has infinite variance, so the mean squared error of (1.1) is also infinite. The estimates reported in Fig. 1 are finite because they are based on a finite number of replications, and they appear fairly stable, at least for large n. To explain this, we have plotted the mean squared error of (1.1) versus the number of replications, see Fig. 3, taking n = 20, $\mu = 1$ and v = 10. We can see that the divergence of the mean squared error to infinity is very slow, even for one hundred thousand replications. Hence, the mean squared errors for (1.1) reported in Fig. 1 should be treated as underestimates.

A refined estimator for $1/\mu$ based on (2.12) is

$$\begin{cases} -2 & \text{if } A_{-1} < -2, \\ A_{-1} & \text{if } -2 \le A_{-1} \le 2, \\ 2 & \text{if } A_{-1} > 2. \end{cases}$$

This estimator, suggested by a referee, for whom we are most grateful, will have smaller mean squared errors than (2.12). We hope to follow up this estimator in a future study.
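A sketch of this refinement in R; the clipping bound 2 equals $1/\mu_0$ under the simulation setting $\mu_0 = 0.5$ (our reading; the paper states the bound simply as 2):

    # Sketch of the referee's refinement: clip A_{-1} to [-2, 2].
    A_minus1_clipped <- function(x, mu0 = 0.5, A = 1/4) {
      pmin(pmax(A_minus1(x, mu0, A), -2), 2)
    }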


Fig. 1. Mean squared error of the estimators of $1/\mu$ given by (1.1) (in black), (1.3) (in brown), (1.4) (in blue) and (2.12) (in red) for $\mu = 1$ and v = 0.2, 0.5, 2, 10. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

The second simulation study compares the estimators for $\sqrt{v}/\mu$ given by Example 2.4, (1.5), (1.6) and (1.7), assuming that both $\mu$ and v are unknown. We conducted the simulation as in the first study with $f = n - 1$, $\mu_0 = 0.5$, $c = \mu_0\,\mathrm{sign}\,\bar{x}$, v = 0.2, 0.5, 2, 10 and n = 10, 20, ..., 1000. The constants g in (1.6) and A in Example 2.4 for a given v are chosen similarly to the first simulation study. For g, we follow this procedure (a code sketch follows the list):

- select a range of values for g > 0;
- for each g, compute

$$E_n = \frac{1}{10\,000}\sum_{k=1}^{10\,000}\left\{\frac{s_k\bar{x}_k}{\bar{x}_k^2 + gs_k^2/n} - \frac{1}{10\,000}\sum_{j=1}^{10\,000}\frac{s_j\bar{x}_j}{\bar{x}_j^2 + gs_j^2/n}\right\}^2$$

for all n = 10, 20, ..., 1000, where $\bar{x}_k$ and $s_k$ denote the sample mean and the sample standard deviation for the kth replication;
- choose the optimal g as the one that minimizes $E_{10} + E_{20} + \cdots + E_{1000}$.
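A sketch in R; unlike pick_g above, full samples are needed because $s_k$ enters (1.6). For brevity this sketch redraws the samples for each g, whereas the paper reuses one set of replications:

    # Sketch of the g-selection for estimator (1.6) of sqrt(v)/mu.
    pick_g2 <- function(v, g.grid, n.grid = seq(10, 1000, by = 10), R = 1000) {
      score <- sapply(g.grid, function(g) {
        sum(sapply(n.grid, function(n) {
          est <- replicate(R, {
            x <- rnorm(n, mean = 1, sd = sqrt(v))
            sd(x) * mean(x) / (mean(x)^2 + g * var(x) / n)  # estimator (1.6)
          })
          mean((est - mean(est))^2)                         # E_n
        }))
      })
      g.grid[which.min(score)]
    }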



Fig. 2. Bias of the estimators of $1/\mu$ given by (1.1) (in black), (1.3) (in brown), (1.4) (in blue) and (2.12) (in red) for $\mu = 1$ and v = 0.2, 0.5, 2, 10. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

For A, we follow the analogous procedure (a code sketch follows the list):

- select a range of values for $A \in (0, 1)$;
- for each A, compute

$$E_n = \frac{1}{10\,000}\sum_{k=1}^{10\,000}\left\{B_{-1}^{(k)} - \frac{1}{10\,000}\sum_{j=1}^{10\,000}B_{-1}^{(j)}\right\}^2$$

for all n = 10, 20, ..., 1000, where $B_{-1}^{(k)}$ is the $B_{-1}$ given by Example 2.4 for the kth replication;
- choose the optimal A as the one that minimizes $E_{10} + E_{20} + \cdots + E_{1000}$.

The optimal choices given by these procedures are also different for different v. Again the optimal choices of g and A appeared to take values around 1 and 1/4, respectively, so we shall take g = 1 and A = 1/4 for all v.
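A sketch in R, mirroring pick_A above but with B_minus1 from Example 2.4:

    # Sketch of the A-selection for B_{-1}.
    pick_A_B <- function(v, A.grid, n.grid = seq(10, 1000, by = 10), R = 1000) {
      samples <- lapply(n.grid, function(n)
        replicate(R, rnorm(n, mean = 1, sd = sqrt(v)), simplify = FALSE))
      score <- sapply(A.grid, function(A)
        sum(sapply(samples, function(xs) {
          est <- sapply(xs, B_minus1, mu0 = 0.5, A = A)
          mean((est - mean(est))^2)
        })))
      A.grid[which.min(score)]
    }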



Fig. 3. Mean squared error of (1.1) versus the number of replications for n = 20, $\mu = 1$ and v = 10.


Fig. 4. Mean squared error of the estimators of $\sqrt{v}/\mu$ given by (1.5) (in black), (1.6) (in brown), (1.7) (in blue) and Example 2.4 (in red) for $\mu = 1$ and v = 0.2, 0.5, 2, 10. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)


The plots of the mean squared error and the bias versus $n \ge 10$ for the selected v and for the four estimators are shown in Figs. 4 and 5. Again the x-axes are plotted on log scale. The observations from these figures are the same as those from Figs. 1 and 2 except that:

1. the smallest mean squared errors for v = 0.2 and small n are by (1.6) and (1.7);
2. the second smallest mean squared errors for v = 0.2 and small n are by the new estimator given by Example 2.4;
3. the smallest mean squared errors for v = 0.5 and small n are by (1.6) and the new estimator given by Example 2.4;
4. the second smallest mean squared errors for v = 0.5 and small n are by (1.7);
5. the largest mean squared errors for v = 0.2, 0.5 and small n are by (1.5);
6. the smallest mean squared errors for v = 2, 10 and small n are by (1.6);
7. the second smallest mean squared errors for v = 2, 10 and small n are by (1.7) and the new estimator given by Example 2.4;
8. the largest mean squared errors for v = 2, 10 and small n are by (1.5);
9. the smallest mean squared errors for all large n are by the new estimator given by Example 2.4;
10. the second smallest mean squared errors for all large n are by (1.6);
11. the third smallest mean squared errors for all large n are by (1.7);
12. the largest mean squared errors for all large n are by (1.5);
13. the biases are generally negative for (1.6) and (1.7);
14. the smallest biases for v = 0.2, 0.5 and for all n are by the new estimator given by Example 2.4;
15. the second smallest biases for v = 0.2, 0.5 and for all n are by (1.5);
16. the third smallest biases for v = 0.2, 0.5 and for all n are by (1.7);
17. the largest biases for v = 0.2, 0.5 and for all n are by (1.6);
18. the smallest biases for v = 2, 10 and small n are by (1.7);
19. the second smallest biases for v = 2, 10 and small n are by (1.5);
20. the third smallest biases for v = 2, 10 and small n are by (1.6);
21. the largest biases for v = 2, 10 and small n are by the new estimator given by Example 2.4;
22. the smallest biases for v = 2, 10 and large n are by (1.6) and (1.7), and the new estimator given by Example 2.4;
23. the largest biases for v = 2, 10 and large n are by (1.5).


Fig. 5. Bias of the estimators of $\sqrt{v}/\mu$ given by (1.5) (in black), (1.6) (in brown), (1.7) (in blue) and Example 2.4 (in red) for $\mu = 1$ and v = 0.2, 0.5, 2, 10. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)

The new estimator again gives the best performance with respect to mean squared error for large n. The best performance with respect to bias for small v and all n is by the new estimator. The best performance with respect to bias for large v and large n is by (1.6) and (1.7), and the new estimator.

In summary, the estimators given by Theorem 2.1 show the best performance with respect to mean squared error for all large n. With respect to bias, the estimators in Theorem 2.1 give the best performance at least for all large n. They also give the smallest biases for some small n.

Finally, we would like to make a point about the procedures for choosing the optimal g and A. Normally, the practitioner will not have the luxury of doing the required simulation for these procedures, even if there is some model that describes the observed data. It would be ideal to have some simple expressions for g or A based only on n, $\bar{x}$ and s. Such expressions could be obtained, for example, by minimizing the first few terms of an asymptotic expansion for the mean squared error. In this paper, we have not studied asymptotic properties of the mean squared error of the estimators. Such a study would require substantially more work, so we leave it as a topic for future work.

The new estimator again gives the best performance with respect to mean squared error for large n. The best performance with respect to bias for small v and all n is by the new estimator. The best performance with respect to bias for large v and large n is by (1.6) and (1.7), and the new estimator. In summary, the estimators given by Theorem 2.1 show the best performance with respect to mean squared error for all large n. With respect to bias, the estimators in Theorem 2.1 give the best performance at least for all large n. They also give the smallest biases for some small n. Finally, we like to make a point about the procedures for choosing optimal g and A. Normally, the practitioner will not have the luxury of doing the required simulation for these procedures, even if there is some model that describes the observed data. It would be ideal to have some simple expressions for g or A based only on n, x, and s. Such expressions could be obtained, for example, by minimizing the first few terms of an asymptotic expansion for the mean squared error. In this paper, we have not studied asymptotic properties of the mean squared error of the estimators. Such a study will require substantially more work, so a topic for future work.

Acknowledgments

The authors would like to thank the Editor and the two referees for careful reading and for their comments, which greatly improved the paper.

Appendix A. Proof of Theorem 2.1

We precede the proof with some lemmas. Set $\delta = \mu_0/|\mu| \in (0, 1)$, $d = \mu^{-1}(v/n)^{1/2}$ and $\|X\|_p = (E|X|^p)^{1/p}$.

Lemma A.1. We have $P(I_n = 1) \sim n^{-1/2}\exp(-\lambda n)$ as $n \to \infty$, where $\lambda = (1 - \delta)^2\mu^2/(2v)$.

Proof. Set $N = (\bar{x}/\mu - 1)/d \sim N(0, 1)$ and $a_\pm = (1 \pm \delta)/|d|$. Then

$$P(I_n = 1) = P(|1 + dN| \le \delta) = \Phi(-a_-) - \Phi(-a_+) \sim \phi(a_-)/a_- \sim n^{-1/2}\exp(-\lambda n),$$

where $\phi(\cdot)$ is the derivative of $\Phi(\cdot)$. □

Lemma A.2. For J and k non-negative integers, $(1 - x)^{-J} = A_{Jk}(x) + B_{Jk}(x)$, where

$$A_{Jk}(x) = \sum_{j=0}^{k-1}\binom{-J}{j}(-x)^j$$

and

$$B_{Jk}(x) = \sum_{j=0}^{J-1}\binom{J-1+k}{j+k}x^{j+k}(1-x)^{-1-j}.$$

Proof. Differentiate

$$(1 - x)^{-1} = \sum_{j=0}^{p-1}x^j + x^p(1 - x)^{-1}$$

J times. □


Lemma A.3. If $x = -dN$, $Q = J - 1 + k$ and $z = 2^{-1/2}|d|/\delta \le 1/2$, then

$$\|(1 - I_n)B_{Jk}(x)\|_1 \le 2\delta^{-1}Q!\,|d2^{-1/2}|^k/(k/2)!. \qquad (A.1)$$

Proof. The left hand side of (A.1) is less than or equal to

$$\sum_{j=0}^{J-1}\binom{Q}{j+k}E|x|^{j+k}\,\delta^{-1-j}.$$

Note that

$$\binom{Q}{i}E|x|^i = Q!\,|d2^{-1/2}|^i/\{(Q - i)!\,(i/2)!\} \le Q!\,|d2^{-1/2}|^i/(k/2)!$$

for $i \ge k$. So the left hand side of (A.1) is less than or equal to

$$Q!\,\delta^{-1}\{(k/2)!\}^{-1}|d2^{-1/2}|^k\sum_{j=0}^{J-1}z^j,$$

which is less than or equal to the right hand side of (A.1). □

Lemma A.4. Assume (2.5). For $j \ge 0$ set

$$K_j^J = v^j\mu^{J-2j}\nu_j^J, \qquad \nu_j^J = \binom{J}{2j}EN^{2j}.$$

Then, for $J \ge 0$,

$$E\bar{x}^J = \sum_{j=0}^{k-1}n^{-j}K_j^J + \begin{cases} O(n^{-k}) & \text{if } k - 1 < J/2, \\ 0 & \text{if } k - 1 \ge J/2, \end{cases} \qquad (A.2)$$

and

$$E\hat{\mu}^J = \sum_{j=0}^{k-1}n^{-j}K_j^J + E_{nk}, \qquad (A.3)$$

where

$$E_{nk} = \mu^J E\{(1 - I_n)B_{J,2k}(x) + I_nC_{J,2k}(x)\} = O(n^{-k})$$

and $C_{J,2k}(x) = (c/\mu)^J - A_{J,2k}(x)$.

Proof. Note that $\bar{x} = \mu(1 + dN)$, so (A.2) follows by the binomial expansion. Note that

$$\hat{\mu}^J = \bar{x}^J(1 - I_n) + c^JI_n = \mu^J\{A_{J,2k}(x) + (1 - I_n)B_{J,2k}(x) + I_nC_{J,2k}(x)\} = L_A + L_B + L_C,$$

say, where $x = -dN$. Note that $\mu^J EA_{J,2k}(x)$ is the first term on the right hand side of (A.3). Also $EL_B \sim n^{-k}$ by Lemma A.3. Fix $1 < p < \infty$, $1/p + 1/q = 1$. Then

$$|EL_C| \le \|I_n\|_p\,\|C_{J,2k}(x)\|_q \sim n^{-1/2}\exp(-\lambda n/p)$$

by Lemma A.1. □

Proof of Theorem 2.1. For $J \ge 0$,

$$Es^{2I}\bar{x}^J = \sum_{j=0}^{k-1}n^{-j}\lambda_Iv^IK_j^J + \begin{cases} O(n^{-k}) & \text{if } k - 1 < J/2, \\ 0 & \text{if } k - 1 \ge J/2, \end{cases}$$

and

$$Es^{2I}\hat{\mu}^J = \sum_{j=0}^{k-1}n^{-j}\lambda_Iv^IK_j^J + O(n^{-k}),$$

so (a) and (b) follow upon substitution. Put $J' = -J$. Then

$$E\tau^{IJ}_{nk}(\hat{\theta}) = \sum_{i=0}^{k-1}\binom{J}{2i}EN^{2i}n^{-i}v^{I+i}E(\mu_*)^{J-2i} = \tau\sum_{i=1}^{3}A_{kli},$$

corresponding to

$$E(\mu_*)^{-J'} = \mu^{-J'}\sum_{i=1}^{3}A_{J'li},$$

equalling the right hand side of (A.3) with $(J, k) = (-J', 2l)$.


Choose $l = 2[n^A/2]$. Set $g_m = (J' + 2m - 1)!\,\{(J' - 1)!\}^{-1}(d^2/2)^m/m!$. Then

$$A_{kl1} = \sum_{m=0}^{k+l-2}g_m{\sum}'(-1)^i\binom{m}{i} = 1 + \sum_{m=k}^{k+l-2}g_mC_{mk} + \sum_{m=l}^{k+l-2}g_mb_{mlk},$$

where the inner sum ${\sum}'$ is over $0 \le i \le m$, $i < k$ and $m - i < l$,

$$C_{mk} = \sum_{i=0}^{k-1}(-1)^i\binom{m}{i}$$

is bounded by $\binom{m}{k-1}$, and

$$b_{mlk} = \sum_{i=m-l+1}^{k-1}(-1)^i\binom{m}{i} = b_{m,m-l+1} + b'_{m,m-k+1},$$

where $b_{mN}$ sums $(-1)^i\binom{m}{i}$ from $i = N$ to $[m/2]$, halving the last term if m is even, and $b'_{mN}$ sums $(-1)^i\binom{m}{i}$ from $m/2$ if m is even, halving that term, and from $(m+1)/2$ if m is odd. By the symmetry of the binomial coefficients, $b'_{mN} = (-1)^mb_{mN}$. For $m = 2M$, $0 \le b_{mN} \le \binom{m}{M}$; for $m = 2M + 1$, $0 \le (-1)^Mb_{mN} \le \binom{m}{M}$. It follows that $|A_{kl1} - 1| \le S_1 + S_2 + S_3$, say, where $S_1$, $S_2$ and $S_3$ collect the contributions of $C_{mk}$, $b_{m,m-l+1}$ and $b'_{m,m-k+1}$, respectively, and $K = k/2$, $L = l/2$.

Set $r_n = d^2/2$. For $m \le k + l - 2$, Stirling's formula bounds $g_m\binom{m}{k-1}$ by a term of the form $\{r_n(k + l)\}^m$ times powers of m, so that $\ln S_1 \le \text{constant} - k(\ln n)(1 - A)(1 + o(1))$; thus $S_1$ behaves as in (2.9). Similarly $S_2 \lesssim S_1$ and $S_3 \lesssim lr_nS_2 \lesssim S_2$, so $A_{kl1} - 1$ behaves as in (2.9).

By Lemma A.3,

$$|A_{kl2}| = \left|\sum_{i=0}^{k-1}\binom{J'}{2i}EN^{2i}(-d^2)^i\,E(1 - I_n)B_{J',2l+2i}(x)\right| \sim \exp\{-l(\ln n)(1 - A)(1 + o(1))\} \qquad (A.4)$$

by Stirling's formula. Note that the right hand side of (A.4) behaves as (2.9). Also, for $1 < p < \infty$ and $1/p + 1/q = 1$,

$$|EI_nC_{J',2k}(x)| \le \|I_n\|_p\,\|C_{J',2k}(x)\|_q \le \|I_n\|_p\left\{|c/\mu|^{-J'} + 2\binom{J' + 2k - 1}{J' - 1}\|N^{2k-2}\|_q\right\}$$

for $\delta \le 1/2$, since $\binom{J'-1+j}{j}$ and $\|N^j\|_q$ increase with j. So

$$A_{kl3} = \sum_{i=0}^{k-1}\binom{J'}{2i}EN^{2i}(-d^2)^i\,EI_nC_{J',2l+2i}(x)$$


is bounded by

$$\|I_n\|_p\binom{J' + 2k - 2}{J' - 1}EN^{2k-2}\left\{2|c/\mu|^{-J'} + 4\|N^{2l-2}\|_q\binom{J' + 2k + 2l - 3}{J' - 1}\right\}.$$

For $\delta \le 1/2$ and $d^2 \le c^2/2$, this bound behaves as $n^{-1/2}\exp\{-n\lambda\}$ by Lemma A.1, using $\|N^j\|_q \sim \{qj\exp(-1)\}^{j/2}$ and $\binom{J'-1+j}{J'-1} \sim j^{J'-1}$ as $j \to \infty$, and $l\ln l \ll n\lambda$. The proof is complete. □

References

Ahern, F.J., 1988. The effects of bark beetle stress on the foliar spectral reflectance of lodgepole pine. International Journal of Remote Sensing 9, 1451–1468.
Baker, H.K., Edelman, R.B., 1991. Valuation implications of AMEX listings: a joint test of the liquidity-signaling hypothesis. Quarterly Journal of Business and Economics 30, 87–109.
Bennett, B.M., 1963. On combining estimates of a ratio of means. Journal of the Royal Statistical Society B 25, 201–205.
Bergemann, T.L., Zhao, L.P. Signal quality measurements for cDNA microarray data. IEEE/ACM Transactions on Computational Biology and Bioinformatics, in press.
Bose, P., 1942. On the exact distribution of the ratio of two means belonging to samples drawn from a given correlated bivariate normal population. Bulletin of Calcutta Mathematical Society 34, 139–141.
Braulke, M., 1982. A note on the Nerlove model of agricultural supply response. International Economic Review 23, 241–246.
Brown, C.S., Goodwin, P.C., Sorger, P.K., 2001. Image metrics in the statistical analysis of DNA microarray data. Proceedings of the National Academy of Sciences of the United States of America 98, 8944–8949.
Chakravarti, I.M., 1971. Confidence set for the ratio of means of two normal distributions when the ratio of variances is unknown. Biometrische Zeitschrift 13, 89–94.
Chaturvedi, A., Rani, U., 1996. Fixed-width confidence interval estimation of the inverse coefficient of variation in a normal population. Microelectronics and Reliability 36, 1305–1308.
Chauvet, M., Potter, S., 2001. Recent changes in the U.S. business cycle. Manchester School 69, 481–508.
Crepey, P., Barthelemy, M., 2007. Detecting robust patterns in the spread of epidemics: a case study of influenza in the United States and France. American Journal of Epidemiology 166, 1244–1251.
Diaz-Frances, E., Sprott, D.A., 2004. Inference for the ratio of two normal means with unspecified variances. Biometrical Journal 46, 83–89.
Dilba, G., Bretz, F., Guiard, V., 2006. Simultaneous confidence sets and confidence intervals for multiple ratios. Journal of Statistical Planning and Inference 136, 2640–2658.
Dong, G., Ray, N., Acton, S.T., 2005. Intravital leukocyte detection using the gradient inverse coefficient of variation. IEEE Transactions on Medical Imaging 24, 910–924.
Duerr, D., 2008. Design factors for fabricated steel below-the-hook lifting devices. Practice Periodical on Structural Design and Construction 13, 48–52.
Fieller, E.C., 1954. Some problems in interval estimation. Journal of the Royal Statistical Society B 16, 175–185.
Frommelt, T., Kostur, M., Wenzel-Schafer, M., Talkner, P., Hanggi, P., Wixforth, A., 2008. Microfluidic mixing via acoustically driven chaotic advection. Physical Review Letters 100, 034502.
Gauthier, S., Horne, J.K., 2004. Potential acoustic discrimination within boreal fish assemblages. ICES Journal of Marine Science 61, 836–845.
Hannig, J., 2006. Asymptotic bounds for coverage probabilities for a class of confidence intervals for ratio of means in a bivariate normal distribution. Journal of Probability and Statistical Science 4, 41–49.
Hannig, J., Wang, C.M., Iyer, H.K., 2003. Uncertainty calculation for the ratio of dependent measurements. Metrologia 4, 177–186.
James, A.T., Wilkinson, G.N., Venables, W.N., 1974. Interval estimates for a ratio of means. Sankhyā A 36, 177–183.
Johnson, N.L., Kotz, S., 1970. Distributions in Statistics: Continuous Univariate Distributions, vol. 1, first edition. Houghton Mifflin Company, Boston.
Kaluszka, M., 2003. Mean-variance optimal local reinsurance contracts. Control and Cybernetics 32, 883–896.
Kim, H.J., 2005. Bayesian analysis for a power of the ratio of two normal means. Far East Journal of Theoretical Statistics 15, 143–156.
Kotz, S., Johnson, N.L., 1986. Encyclopedia of Statistical Sciences, vol. 7. John Wiley and Sons, New York.
Lamanna, E., Romano, G., Sgarbi, C., 1981. Curvature measurements in nuclear emulsions. Nuclear Instruments and Methods 187, 387–391.
Lee, J.C., Lin, S.H., 2004. Generalized confidence intervals for the ratio of means of two normal populations. Journal of Statistical Planning and Inference 123, 49–60.
Li, S.Y., Zhang, B.X., 2003. Maximum likelihood estimation of ratios of means and standard deviations from normal populations under semi-order restriction. Journal of Biomathematics 18, 257–261.
Malley, J.D., 1982. Simultaneous confidence intervals for ratios of normal means. Journal of the American Statistical Association 77, 170–176.
McAuliffe, R.E., 2010. Coefficient of variation. In: The Blackwell Encyclopedia of Management. http://www.blackwellreference.com/
Mendoza, M., 2005. Inferences on the ratio of normal means and other related problems. Estadistica 57, 168–169.
Mendoza, M., Gutierrez-Pena, E., 1999. Bayesian inference for the ratio of the means of two normal populations with unequal variances. Biometrical Journal 41, 133–147.
Ng, C.K., 2005. Performance of three methods of interval estimation of the coefficient of variation. InterStat 9, 1–8.
Pavan, M., Ruiz, V.F., Silva, F.A., Sobreira, T.J., Cravo, R.M., Vasconcelos, M., Marques, L.P., Mesquita, S.M.F., Krieger, J.E., Lopes, A.A.B., Oliveira, P.S., Pereira, A.C., Xavier-Neto, J., 2009. ALDH1A2 (RALDH2) genetic variation in human congenital heart disease. BMC Medical Genetics 10. http://dx.doi.org/10.1186/1471-2350-10-113
Powers, M.R., Powers, T.Y., 2009. Risk and return measures for a non-Gaussian world. Journal of Financial Transformation 25, 51–54.
R Development Core Team, 2011. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
Refice, A., Mattia, F., de Carolis, G., 2003. Polarimetric optimisation applied to permanent scatterers identification. In: Proceedings of the 2003 IEEE International Geoscience and Remote Sensing Symposium, vol. 2, pp. 687–689.
Rousseau, R., 1998. Evenness as a descriptive parameter for department or faculty evaluation studies. In: de Smet, E. (Ed.), Informatiewetenschap. Werkgemeenschap Informatiewetenschap, Antwerp, Belgium, pp. 135–145.
Roy, S.N., Potthoff, R.F., 1958. Confidence bounds on vector analogues of the "ratio of means" and the "ratio of variances" for two correlated normal variates and some associated tests. Annals of Mathematical Statistics 29, 829–841.
Sahoo, S., Ray, N., Acton, S.T., 2006. Rolling leukocyte detection based on teardrop shape and the gradient inverse coefficient of variation. In: Proceedings of the International Conference on Medical Information Visualization—BioMedical Visualization, pp. 29–33.
Sevin, A.D., 1978. Small sample estimation and testing procedures for ratios of means of independent, normally distributed random variables. Biometrics 34, 166.
Sharma, K.K., Krishna, H., 1994. Asymptotic sampling distribution of inverse coefficient-of-variation and its applications. IEEE Transactions on Reliability 43, 630–633.


Shi, H.F., Li, S.Y., Ji, Y.G., 2008. Maximum likelihood estimation of ratios of means and standard deviations from normal populations with different sample numbers under semi-order restriction. Journal of Mathematical Research and Exposition 28, 1031–1036.
Srivastava, V.K., Bhatnagar, S., 1981. Estimation of the inverse of mean. Journal of Statistical Planning and Inference 5, 329–334.
Steffens, F.E., 1971. On confidence sets for the ratio of two normal means. South African Statistical Journal 5, 105–113.
Treadwell, E., 1982. A momentum calculation for charged tracks with minute curvature. Nuclear Instruments and Methods 198, 337–342.
Tsao, C.A., Hwang, J.T., 1999. Generalized Bayes confidence estimators for Fieller's confidence sets. Statistica Sinica 9, 795–810.
Turen, S., 1996. Performance and risk analysis of the Islamic banks: the case of Bahrain Islamic Bank. Journal of King Abdulaziz University: Islamic Economics 8, 3–14.
Voinov, V.G., 1985. Unbiased estimation of powers of the inverse of mean and related problems. Sankhyā B 47, 354–364.
Withers, C.S., Nadarajah, S., 2010. Stable Laws for Sums of Reciprocals. Technical Report, Applied Mathematics Group, Industrial Research Ltd., Lower Hutt, New Zealand.
Zaman, A., 1981a. Estimates without moments: the case of the reciprocal of a normal mean. Journal of Econometrics 15, 289–298.
Zaman, A., 1981b. A complete class theorem for the control problem and further results on admissibility and inadmissibility. Annals of Statistics 9, 812–821.
Zellner, A., 1971. An Introduction to Bayesian Inference in Econometrics. John Wiley and Sons, New York.
Zellner, A., 1978. Estimation of functions of population means and regression coefficients including structural coefficients. Journal of Econometrics 8, 127–158.
Zubeck, H., Kvinson, T.S., 1996. Prediction of low-temperature cracking of asphalt concrete mixtures with thermal stress restrained specimen test results. Journal of the Transportation Research Board 1545, 50–58.