Admissibility of the constant-coverage probability estimator for estimating the coverage function of certain confidence interval

Statistics & Probability Letters 36 (1998) 365-372

Hsiuying Wang

Department of Business Administration, Chaoyang Institute of Technology, Wufeng, Taichung County, Taiwan, ROC

Received 1 May 1996; received in revised form 1 November 1996; accepted 5 May 1997

Abstract

Consider a confidence interval for a randomly chosen linear combination of the elements of the mean vector of a p-dimensional normal distribution. The constant coverage probability is the usual estimator for the coverage function of this interval. Wang (1995) has shown that this estimator is inadmissible under the squared error loss if p ≥ 5. In this paper, we consider the case p ≤ 4 and prove that the estimator is admissible under the same loss. © 1998 Elsevier Science B.V.

AMS classifications: primary 62C15; secondary 62C10

Keywords: Confidence interval; Admissibility; Coverage function; Constant coverage probability estimator

1. Introduction

Let

X ~ N(θ, I_{p×p}) and W = (W_1, ..., W_p)'   (1)

be two p-dimensional, statistically independent random vectors, where θ is a vector of unknown parameters, I_{p×p} is the p × p identity matrix, and the W_i are i.i.d. random variables with joint density function k(w). Let us also define μ = W'θ. The usual 1 − γ confidence interval for μ is given by

C_{X,W} = {μ : |W'X − μ|/|W| ≤ c},

where c is the 1 − γ/2 cutoff point of N(0, 1). Consequently, the coverage function of C_{X,W} is defined by

I(μ ∈ C_{X,W}) = 1 if μ ∈ C_{X,W}, and 0 otherwise.   (2)

0167-7152/98/$19.00 © 1998 Elsevier Science B.V. All rights reserved. PII S0167-7152(97)00083-7


For an estimator r(X, W) of (2), consider the squared error loss function

L[r(X, W), I(μ ∈ C_{X,W})] = E[r(X, W) − I(μ ∈ C_{X,W})]²,   (3)

where E denotes the expectation with respect to X and W. Wang (1995) showed that for p ≥ 5 the constant coverage probability 1 − γ is an inadmissible estimator of (2) under the loss function (3). In this paper, we prove that the same estimator is admissible under the squared error loss if p ≤ 4.

The reason we focus on this model is due to Brown (1990), who showed that in model (1), W'X is inadmissible for estimating μ under the squared error loss if p ≥ 3. If W is fixed, instead of random, then W'X is admissible (e.g. Brown, 1990, p. 473). In this problem W is an ancillary statistic; hence, the admissibility of W'X depends on the distribution of the ancillary statistic. According to the "principle of conditionality", statistical inference should not depend on the distribution of an ancillary statistic. This result contradicts the widely held notion about ancillary statistics, and Brown therefore called the phenomenon a paradox. The above result is a paradox in point estimation. In interval estimation, Wang (1995) also found an ancillary paradox for estimating (2): in that paper, the constant coverage probability is shown to be admissible when W is fixed and inadmissible when W is random, under the squared error loss, if p ≥ 5.
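Since, conditionally on W, the statistic (W'X − μ)/|W| is exactly N(0, 1), the interval C_{X,W} covers μ with probability 1 − γ for every θ and every distribution of W; this is why the constant 1 − γ is the natural estimator of (2). A minimal Monte Carlo sketch of this fact (the choices p = 4, θ, and a standard normal W-distribution are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def empirical_coverage(theta, n_rep=200_000, seed=0):
    """Empirical coverage of C_{X,W} = {mu : |W'X - mu|/|W| <= c}, gamma = 0.05."""
    rng = np.random.default_rng(seed)
    p = len(theta)
    c = 1.959963984540054                        # 1 - gamma/2 cutoff of N(0,1)
    X = rng.normal(loc=theta, size=(n_rep, p))   # X ~ N(theta, I_p)
    W = rng.normal(size=(n_rep, p))              # W_i i.i.d. (illustrative choice)
    mu = W @ np.asarray(theta)                   # mu = W'theta
    stat = np.abs(np.einsum("ij,ij->i", W, X) - mu) / np.linalg.norm(W, axis=1)
    return float(np.mean(stat <= c))
```

Because the pivot is standard normal conditionally on W, the returned value is close to 0.95 regardless of θ or the law of W.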

2. Main result

The main result of this section is the proof of admissibility of the usual constant coverage probability estimator for estimating (2) if p ≤ 4. It is worth noting that the current problem is related to Brown and Hwang (1990). They assumed that X ~ N(θ, I_{p×p}) and proved that 1 − γ is an admissible estimator of I(θ ∈ C_x) under the squared error loss for p ≤ 4, where

C_x = {θ : |x − θ| ≤ c*}   (4)

and c* is the cutoff point such that P(|X − θ| ≤ c*) = 1 − γ.

Let us define

π_n(θ) = v_n(θ)π(θ),   (5)

where π(θ) ≡ 1, v_n(θ) = q_n(|θ|²),

q_n(t) = 1                       if 0 ≤ t ≤ 1,
         (1 − ln t/ln n)²        if 1 ≤ t ≤ n/2,
         a_n/(4t²/n² − b)        if n/2 ≤ t < ∞,

a_n = ln³2/ln²n and b = 1 − ln 2.
Then π_n(θ) → π(θ) as n goes to infinity.

Lemma 1. Let X and W be defined as in (1) and let π_n be defined in (5). Also, let

D = {θ : |w'x − w'θ|/|w| ≤ c} and r(D) = P(θ ∈ D),   (6)

where the probability is taken under the N(x, I_{p×p}) distribution for θ. Then

∫_D π_n(θ)φ(x − θ)dθ = π_n(x)r(D) + K_{D,n}(x),   (7)

where

K_{D,n}(x) = O(1/ln n)                          if 0 ≤ |x|² ≤ 1,
             O[q_n'(|x|²)]                      if 1 ≤ |x|² ≤ n/2,
             O[q_n'(|x|²)] + O(e^{−|x|²})       if n/2 ≤ |x|² < ∞,

for n sufficiently large, and φ(·) is the p.d.f. of the p-dimensional standard normal distribution. Moreover, the big O above is uniform in x.

Proof. By the definition of q_n(t), we have

q_n'(t) = −(2/(t ln n))(1 − ln t/ln n)          if 1 ≤ t ≤ n/2,
          −(8a_n t/n²)/(4t²/n² − b)²           if n/2 ≤ t < ∞

(note that q_n'(t) is continuous on t > 1, even at t = n/2) and

q_n''(t) = (2/(t² ln n))(1 − ln t/ln n) + 2/(t² ln²n)                        if 1 ≤ t < n/2,
           −(8a_n/n²)/(4t²/n² − b)² + (128a_n t²/n⁴)/(4t²/n² − b)³           if n/2 ≤ t < ∞.
Now rewrite the left-hand side of (7) as

∫_{D∩A_1} φ(θ − x)dθ + ∫_{D∩A_2} q_n(|θ|²)φ(θ − x)dθ,   (8)

where A_1 = {θ : 0 ≤ |θ|² < 1} and A_2 = {θ : 1 ≤ |θ|² < ∞}. Let z = θ − x and

Δ = |z + x|² − |x|² = 2z·x + |z|².


For 0 ≤ |x|² ≤ 1, by a Taylor expansion, there exists a ξ, which is between |z + x|² and |x|², such that

Eq. (8) = ∫_{D∩A_1} φ(z)dz + ∫_{D∩A_2} [q_n(|x|²) + Δ q_n'(ξ)]φ(z)dz
        = ∫_{D∩(A_1∪A_2)} φ(z)dz + ∫_{D∩A_2} Δ q_n'(ξ)φ(z)dz
        ≤ r(D)π_n(x) + m_1 max(1/ln n, 4a_n/((1 − b)²n)),

where m_1 ≥ 1 is a positive constant. The last inequality is due to (6), the fact that π_n(x) = 1 if 0 ≤ |x|² ≤ 1, and the bound

∫_{D∩A_2} |Δ| |q_n'(ξ)|φ(z)dz ≤ m_1 max(1/ln n, 4a_n/((1 − b)²n)),

which holds since |q_n'(t)| is decreasing in 1 ≤ t < ∞ and 0 ≤ |x|² ≤ 1. A matching lower bound follows in the same way, so that Eq. (8) = r(D)π_n(x) + K_{D,n}(x) with K_{D,n}(x) = O(1/ln n) in this range.

For the part {x : 1 ≤ |x|² < ∞},

Eq. (8) = ∫_{D∩A_1} φ(z)dz + ∫_{D∩A_2} [q_n(|x|²) + Δ q_n'(|x|²) + (Δ²/2) q_n''(ξ)]φ(z)dz,

since q_n'(t) is absolutely continuous on t ≥ 1. The last expression is equal to

∫_{D∩A_1} φ(z)dz − ∫_{D∩A_1} π_n(x)φ(z)dz + π_n(x)r(D) + ∫_{D∩A_2} [Δ q_n'(|x|²) + (Δ²/2) q_n''(ξ)]φ(z)dz.

To complete this proof, it suffices to show that

∫_{D∩A_2} Δ q_n'(|x|²)φ(z)dz + ∫_{D∩A_2} (Δ²/2) q_n''(ξ)φ(z)dz = O[q_n'(|x|²)]   (9)

and

∫_{D∩A_1} φ(z)dz − ∫_{D∩A_1} π_n(x)φ(z)dz = O[q_n'(|x|²)]   if 1 ≤ |x|² ≤ n/2,
                                             O(e^{−|x|²})    if n/2 < |x|² < ∞.   (10)
Now, consider the first term on the left-hand side of (9):

∫_{D∩A_2} Δ q_n'(|x|²)φ(z)dz = ∫_{D∩A_2} (2z·x + |z|²) q_n'(|x|²)φ(z)dz.

Since A_1 ∪ A_2 = R^p, D is symmetric about the origin with respect to z, and zφ(z) is an odd function, we have ∫_{D∩(A_1∪A_2)} zφ(z)dz = 0. By this and ∫ |z|²φ(z)dz < ∞, the last expression is equal to

−2 ∫_{D∩A_1} z·x q_n'(|x|²)φ(z)dz + O[q_n'(|x|²)]
≤ 2|x| |q_n'(|x|²)| ∫_{D∩A_1} |z|φ(z)dz + O[q_n'(|x|²)]
= 2|x| q_n'(|x|²) O(e^{−|x|²}) + O[q_n'(|x|²)]
= O[q_n'(|x|²)].


The second-to-last equality holds by the following argument. Assume that S = {x : |x| ≤ |x_0|} and its complement S^c are, respectively, the sets on which D ∩ A_1 ≠ ∅ and D ∩ A_1 = ∅. Then, for x ∈ S^c, ∫_{D∩A_1} |z|φ(z)dz = 0, and for x ∈ S there exists a constant M_1 such that

∫_{D∩A_1} |z|φ(z)dz ≤ M_1 e^{−|x_0|²}.

Now, we want to prove that the second term on the left-hand side of (9) is equal to

O[q_n'(|x|²)].   (11)

Let R_1 = {z : |Δ| > |x|²/2} and let R_1^c be its complement. Rewrite the second term on the left-hand side of (9) as

(1/2)(∫ 1(z ∈ R_1) Δ² q_n''(ξ)φ(z)dz + ∫ 1(z ∈ R_1^c) Δ² q_n''(ξ)φ(z)dz) = (1/2)(G + H),

where G = ∫ 1(z ∈ R_1) Δ² q_n''(ξ)φ(z)dz and H = ∫ 1(z ∈ R_1^c) Δ² q_n''(ξ)φ(z)dz. We will prove that G = O[q_n'(|x|²)] by resorting to the exponential tail of the normal density. The definition of R_1 implies

|z|² + 2|x||z| > |x|²/2,

which implies |z| > (√(3/2) − 1)|x|. Let R_2 denote the region {z : |z| > (√(3/2) − 1)|x|}. Then R_1 ⊂ R_2. If 1 ≤ |x|² ≤ n/2, there exist two positive constants M_2 and M_3, independent of x, such that, for x satisfying 1 ≤ |x|² ≤ n/2,

G ≤ ∫ 1(z ∈ R_2) Δ² O(1/ln n) φ(z)dz
  = ∫ 1(z ∈ R_2) [4(z·x)² + 4(z·x)|z|² + |z|⁴] O(1/ln n) φ(z)dz
  ≤ ∫_{|z| > (√(3/2)−1)|x|} (4|z|²|x|² + |z|⁴) O(1/ln n) φ(z)dz

(making the change of variables t = |z| in the following)

  ≤ M_2 ∫_{t > (√(3/2)−1)|x|} (4t²|x|² + t⁴) O(1/ln n) e^{−t²/2} t^{p−1} dt
  = O(|x|⁵ e^{−(1/2)(√(3/2)−1)²|x|²})
  = O[q_n'(|x|²)].




(Note that the second-to-last equality above holds by a straightforward calculation for p ≤ 4.)

For H, since the definition of R_1^c implies ξ ≥ |x|²/2, we have |q_n''(ξ)| ≤ M_3/(|x|⁴ ln n), and hence

H ≤ (M_3/(|x|⁴ ln n)) ∫ 1(z ∈ R_1^c) (2z·x + |z|²)² φ(z)dz = O[q_n'(|x|²)]

for sufficiently large n. Also, if n/2 < |x|² < ∞, by similar arguments as above, we have

G ≤ (8a_n/n²)(1 − b)^{−2} ∫ 1(z ∈ R_2) Δ² φ(z)dz
  = (8a_n/n²)(1 − b)^{−2} O(|x|⁸ e^{−(1/2)(√(3/2)−1)²|x|²})
  = O[q_n'(|x|²)]   (12)

and

H = O(|x|²) O[(8a_n/n²)/(4|x|²/n² − b)²] = O[q_n'(|x|²)].

Thus, combining the above results, (9) is established. Now, we still need to prove (10). The left-hand side of (10) is

[1 − π_n(x)] ∫_{D∩A_1} φ(z)dz = [1 − π_n(x)] O(e^{−|x|²}),

which is equal to the right-hand side of (10) by a straightforward calculation. Hence, the proof is completed. □

In fact, D = {θ : |w'x − w'θ|/|w| ≤ c} is not the only set for which Eq. (7) holds. In the following lemma, D is replaced by another set and the result of Lemma 1 remains valid.

Lemma 2. In Lemma 1, if D = {θ : |w'x − w'θ|/|w| ≤ c} is replaced by {θ : |x − θ| ≤ c*}, then Eq. (7) still holds.

The proof of Lemma 2 is similar to that of Lemma 1. Note that if c in Lemma 1 is ∞, then r(D) = 1. Using this fact, the Bayes estimator of (2) with respect to (5), given X = x and W = w, is

r^{π_n}(x, w) = ∫_{|w'x − w'θ|/|w| ≤ c} π_n(θ)φ(x − θ)dθ / ∫ π_n(θ)φ(x − θ)dθ = 1 − γ + K_{C_{x,w},n}(x),   (13)

where K_{C_{x,w},n}(x) is as given in (7).

Theorem 3. Let X and W be defined as in (1). The constant coverage probability 1 − γ is an admissible estimator of (2) under the squared error loss if p ≤ 4.
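Equation (13) can be illustrated numerically. Under the prior π_n, the posterior probability that μ ∈ C_{x,w} equals E[q_n(|θ|²) 1(θ ∈ D)] / E[q_n(|θ|²)] with θ ~ N(x, I_p), which the sketch below estimates by Monte Carlo; for moderate |x| the estimate is close to 1 − γ, the small excess being the K correction term. (The choices p = 4, x = 0, w, γ = 0.05 and n = 10⁶ are illustrative assumptions.)

```python
import numpy as np

def bayes_coverage_estimate(x, w, n=10**6, m=200_000, seed=1):
    """Monte Carlo estimate of the Bayes estimator (13) at X = x, W = w."""
    rng = np.random.default_rng(seed)
    c = 1.959963984540054                      # 1 - gamma/2 cutoff of N(0,1)
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    theta = x + rng.normal(size=(m, len(x)))   # theta ~ N(x, I_p)
    t = (theta ** 2).sum(axis=1)               # |theta|^2
    a_n, b = np.log(2) ** 3 / np.log(n) ** 2, 1 - np.log(2)
    # prior weight q_n(|theta|^2) from (5), vectorized over the sample
    wgt = np.where(t <= 1, 1.0,
          np.where(t <= n / 2, (1 - np.log(t) / np.log(n)) ** 2,
                   a_n / (4 * t ** 2 / n ** 2 - b)))
    in_D = np.abs((x - theta) @ w) / np.linalg.norm(w) <= c
    return float(np.sum(wgt * in_D) / np.sum(wgt))
```

Because q_n downweights large |θ|², the weighted coverage probability sits slightly above 1 − γ at x = 0, in line with (13).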


Proof. Choose for θ the prior distribution π_n(θ) in (5), which is the same as that in Brown and Hwang (1990). Define

Δ_n = ∫ {E[1 − γ − I(μ ∈ C_{X,W})]² − E[r^{π_n}(X, W) − I(μ ∈ C_{X,W})]²} π_n(θ)dθ,

where r^{π_n}(X, W) is the Bayes estimator of (2) with respect to the prior π_n. By the Blyth method, if we can show that

lim_{n→∞} Δ_n = 0,   (14)

then the proof is completed. Let c* be chosen such that

P(|X − θ| ≤ c*) = P(|W'X − W'θ|/|W| ≤ c) = 1 − γ

and let C_x = {θ : |x − θ| ≤ c*}. Then Δ_n can be written as

∫∫∫ {[1 − γ − I(θ ∈ C_x) − (I(μ ∈ C_{x,w}) − I(θ ∈ C_x))]² − [r^{π_n}(x, w) − I(θ ∈ C_x) − (I(μ ∈ C_{x,w}) − I(θ ∈ C_x))]²} φ(x − θ)k(w)π_n(θ)dx dw dθ,   (15)

where φ(x − θ) and k(w) are the p.d.f.s of x and w, respectively. By a straightforward computation, (15) is equal to A + B, where

A = ∫∫∫ {[1 − γ − I(θ ∈ C_x)]² − [r^{π_n}(x, w) − I(θ ∈ C_x)]²} φ(x − θ)k(w)π_n(θ)dx dw dθ

and

B = −2 ∫∫∫ [1 − γ − r^{π_n}(x, w)][I(μ ∈ C_{x,w}) − I(θ ∈ C_x)] φ(x − θ)k(w)π_n(θ)dx dw dθ.

For B, integrating the variable θ first and using Lemmas 1 and 2 (together with r(C_{x,w}) = r(C_x) = 1 − γ) yields

B = −2 ∫∫ [(1 − γ)π_n(x) + K_{C_{x,w},n}(x) − (1 − γ)π_n(x) − K_{C_x,n}(x)] × [1 − γ − (1 − γ) − K_{C_{x,w},n}(x)] k(w)dx dw
  = ∫ O[K²_{C_{x,w},n}(x)]dx = ∫_0^∞ O[K²_{C_{x,w},n}(t)] t^{(p/2)−1}dt,

where the last step writes the integral in terms of t = |x|².
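The cutoff c* used above is determined by a chi-square quantile: |X − θ|² ~ χ²_p, so c* = √(χ²_{p,1−γ}). A sketch for p = 4 and γ = 0.05 (illustrative values; for p = 4 the χ² c.d.f. has the closed form 1 − e^{−x/2}(1 + x/2)):

```python
import math
import numpy as np

def chi2_4_cdf(x):
    """Closed-form chi-square(4) c.d.f.: P(chi2_4 <= x) = 1 - exp(-x/2)(1 + x/2)."""
    return 1.0 - math.exp(-x / 2) * (1 + x / 2)

def c_star(gamma=0.05):
    """Bisection for c* with P(|X - theta| <= c*) = 1 - gamma when p = 4."""
    lo, hi = 0.0, 100.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if chi2_4_cdf(mid ** 2) < 1 - gamma:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def check(gamma=0.05, m=200_000, seed=2):
    """Monte Carlo check that the ball C_x and the slab C_{x,w} both have probability 1 - gamma."""
    rng = np.random.default_rng(seed)
    Z = rng.normal(size=(m, 4))            # Z = X - theta ~ N(0, I_4)
    w = np.array([1.0, -2.0, 0.5, 3.0])    # any fixed w (illustrative)
    p_ball = np.mean(np.linalg.norm(Z, axis=1) <= c_star(gamma))
    p_slab = np.mean(np.abs(Z @ w) / np.linalg.norm(w) <= 1.959963984540054)
    return float(p_ball), float(p_slab)
```

This matching of the two coverage probabilities is exactly what makes the decomposition of Δ_n into A + B work.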


Hence, for p = 4, B has the same order as

∫_0^1 (1/ln²n) t dt + ∫_1^{n/2} (4/(t² ln²n))(1 − ln t/ln n)² t dt + ∫_{n/2}^∞ (64a_n² t²/n⁴)/(4t²/n² − b)⁴ t dt + ∫_{n/2}^∞ O(e^{−t}) t dt

= O(1/ln²n) + (4/(3 ln n))[1 − (ln 2/ln n)³] + O(a_n²) + O(e^{−n/4})

= o(1)

for sufficiently large n, where the third term is evaluated by the substitution u = 4t²/n², which gives 2a_n² ∫_1^∞ u(u − b)^{−4}du = O(a_n²) = O(1/ln⁴n). Note that for the case p ≤ 3, by a straightforward calculation, we can also obtain that the order of B is o(1). Now, we turn to the A term, which is bounded above by

∫∫∫ {[1 − γ − I(θ ∈ C_x)]² − [r*^{π_n}(x) − I(θ ∈ C_x)]²} φ(x − θ)k(w)π_n(θ)dx dw dθ   (16)

by a Bayes-type argument, where r*^{π_n}(x) is the Bayes estimator of I(θ ∈ C_x) with respect to the prior π_n(θ). Since B = o(1), (14) holds if (16) approaches zero as n goes to infinity, which has been proved by Brown and Hwang (1990). Thus, the proof is completed. □

References

Brown, L.D., 1990. An ancillarity paradox which appears in multiple linear regression (with discussion). Ann. Statist. 18, 471–538.
Brown, L.D., Hwang, J.T.G., 1990. Admissibility of confidence estimators. In: Chao, M.T., Cheng, P.E. (Eds.), Proc. 1990 Taipei Symposium in Statistics.
Wang, H., 1995. Brown's paradox in the estimated confidence approach. Technical Report, Institute of Statistics, National Tsing Hua University, Hsinchu, Taiwan.