Quadratic unbiased estimation without invariance and its application in the unbalanced one-way random model

Journal of Statistical Planning and Inference 27 (1991) 25-35. North-Holland.

Czesław Stępniak

Institute of Applied Mathematics, Agricultural University of Lublin, Akademicka 13, PL-20-934 Lublin, Poland

Received 7 September 1988; revised manuscript received 23 October 1989
Recommended by V.P. Godambe

Abstract: Quadratic unbiased estimation in a general mixed model is considered. It was recently proved in Stepniak (1987, Ann. Inst. Statist. Math. 39, 563-573) that any admissible unbiased estimator in the problem is a limit of unique locally best unbiased quadratics in a wide sense. We focus on the usual locally best quadratic unbiased estimators (LBQUE's). It is shown that any locally best invariant quadratic unbiased estimator (or MINQUE with invariance) is a limit of LBQUE's. Explicit formulae for LBQUE's and their variances are given. The unbalanced one-way random model is considered in detail.

AMS Subject Classification: 62J10.

Key words and phrases: Variance components; unbiased estimation; quadratic estimation without invariance; Lehmann-Scheffé theorem; unbalanced one-way model.

Introduction and summary

The aim of this work is quadratic unbiased estimation of the variance components in a general mixed normal model. It is well known that a best solution of the problem, that is, a uniformly minimum variance quadratic unbiased estimator, exists rather rarely. In general, one is only able to reduce the class of all estimators to a smaller one, called a minimal complete class. Such a class always exists and coincides with the set of all admissible quadratic estimators. It was recently shown by Stepniak (1987) that the minimal complete class consists of all unique locally best unbiased quadratics in a wide sense and some of their limits. In particular, any usual locally best quadratic unbiased estimator (LBQUE) is admissible. We will focus on such estimators. At first we shall show some connections between these and other quadratic unbiased estimators considered in the literature.*

* This work was completed when visiting the Mathematical Sciences Institute of Cornell University.

0378-3758/91/$03.50 © 1991 - Elsevier Science Publishers B.V. (North-Holland)


A great part of known results refers to estimation with invariance (cf. Rao, 1979; Kleffe, 1980; Kleffe and Seifert, 1986; and Rao and Kleffe, 1988). Recall that any locally best invariant quadratic unbiased estimator (LBIQUE) depends on the 'true' parameter through the variance components only, while the LBQUE depends also on the expectation. This greater sensitivity is partly compensated by the lower attainable variance of the estimators in a neighbourhood of the 'true' parameter; it is just smaller in the second case. We shall also show that any LBIQUE may be presented as a limit of LBQUE's. This means that the invariant locally best quadratics lie on the boundary of the set of all LBQUE's. Of course the LBQUE's, as well as all quadratic unbiased estimators, have some disadvantages. Namely, they are inadmissible among all quadratics (see Stepniak, 1986) and they can take negative values. The second disadvantage does not refer to estimation of the error component. The mathematical framework of the present paper is based on the Lehmann-Scheffé theorem for unbiased estimation (1950). Its versions for linear unbiased estimation were given by Zyskind (1967) and Seely and Zyskind (1971). We state a version of the theorem for quadratic estimation. Seely (1970) pointed out that any problem of quadratic estimation in a linear normal model can be considered as estimation in a linear space of quadratic forms. Following him, we derive useful formulae for quadratic unbiased estimation without invariance by a transference of some well known results in linear estimation. This is a further step towards a unified theory of (linear and quadratic) estimation in a linear model.

The last section of this paper is more technical. It deals with locally best quadratic unbiased estimation without invariance in the unbalanced one-way random normal model. A similar problem, but for estimation with invariance, was considered by Swallow and Searle (1978). It appears that their results can be simplified by considering limits of our estimators.

1. Quadratic unbiased estimation without invariance in a general linear normal model

Let X be a normal random vector in ℝⁿ with expectation

E_X = Aα  (1.1)

and covariance matrix

Σ_X = Vσ,  (1.2)

where α = (α₁, ..., α_p)' and σ = (σ₁, ..., σ_q)' are unknown parameters, A is a known matrix of size n×p and V is a linear operator from ℝ^q into the space S_n of symmetric n×n matrices.


It is assumed that the span of the possible αα' (' denotes transposition) is the entire space S_p and that Vσ is positive definite for all σ. Usually the operator V is given by

Vσ = Σ_{i=1}^q σ_i V_i,  (1.3)

where V₁, ..., V_q are known matrices. It is well known that

(*) c'α is linearly estimable in the model (1.1)-(1.2) if and only if c ∈ 𝒞(A').

Moreover, if c'α is estimable, then its locally best linear unbiased estimator may be expressed in the form (a, X), where

a = (Vσ)⁻¹A[A'(Vσ)⁻¹A]⁺c,  (1.4)

⁺ denotes the Moore-Penrose generalized inverse and (·, ·) is the usual inner product. The variance of this estimator may be written in the form (c, [A'(Vσ)⁻¹A]⁺c).

We are interested in estimation of a parametric function ψ = c'σ + α'Cα, where c ∈ ℝ^q and C ∈ S_p, by quadratics X'QX, where Q ∈ S_n. Such a quadratic may be presented in the form [Q, Y], where Y = XX' and [·, ·] is the inner product in S_n defined by [Q₁, Q₂] = tr(Q₁Q₂).
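As a quick numerical illustration, formula (1.4) can be evaluated directly; the following sketch uses a made-up design matrix, covariance and coefficient vector (none of these numbers come from the paper):

```python
import numpy as np

# Sketch of formula (1.4) on made-up inputs: the locally best linear unbiased
# estimator of c'alpha is (a, X) with a = (V_sigma)^{-1} A [A'(V_sigma)^{-1} A]^+ c.
rng = np.random.default_rng(0)
n, p = 6, 2
A = rng.standard_normal((n, p))                # known n x p design matrix
V_sigma = np.eye(n) + 0.5 * np.ones((n, n))    # a positive definite covariance
c = np.array([1.0, -1.0])

Vinv = np.linalg.inv(V_sigma)
M = A.T @ Vinv @ A
a = Vinv @ A @ np.linalg.pinv(M) @ c           # coefficient vector of (a, X)
var = c @ np.linalg.pinv(M) @ c                # variance (c, [A'(V_sigma)^{-1}A]^+ c)

# Unbiasedness: E(a, X) = a'A alpha = c'alpha for all alpha, i.e. A'a = c.
assert np.allclose(A.T @ a, c)
```

The final assertion is precisely the linear unbiasedness condition; the Moore-Penrose inverse is only needed when A'(Vσ)⁻¹A is singular.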

Let us consider the random element Y = XX'. We notice that

E[Q, Y] = [Q, B(σ, αα')]  (1.5)

and

Cov([Q₁, Y], [Q₂, Y]) = [Q₁, W_{σ,α}Q₂],  (1.6)

where B is a linear operator from the linear space ℝ^q × S_p (endowed with the inner product ⟨(m₁; M₁), (m₂; M₂)⟩ = m₁'m₂ + tr(M₁M₂)) into S_n, defined by

B(m; M) = Vm + AMA',  (1.7)

and W_{σ,α} is a linear operator in S_n defined by

W_{σ,α}Q = 2B(σ, αα')QB(σ, αα') − 2Aαα'A'QAαα'A'.  (1.8)

Thus our problem reduces to linear estimation of ψ in the model (1.5)-(1.6). Seely (1970) proved that the condition (*) has its analogue in this problem, namely,

(**) ψ = c'σ + α'Cα is estimable in the model (1.5)-(1.6) if and only if (c; C) ∈ 𝒞(B*),

where B is the operator defined by (1.7), B* denotes its adjoint and 𝒞(B*) means the range of B*. One can verify that the adjoint operator B* is defined by

B*Q = ((tr{QV₁}, ..., tr{QV_q})'; A'QA),  (1.9)

where V₁, ..., V_q are the matrices appearing in (1.3). This specification leads to the following analogue of formula (1.4).

Theorem 1. Let Y be a random element in the space S_n of symmetric n×n matrices (endowed with the trace inner product [·, ·]), subject to the model (1.5)-(1.6), where B and W_{σ,α} are the linear operators defined in (1.7) and (1.8). Consider unbiased estimation of an estimable parametric function ψ = c'σ + α'Cα. Then the lower bound of the variance of the estimators [Q, Y] at a given but arbitrary point (α, σ) is attained for

Q = W_{σ,α}⁻¹B(B*W_{σ,α}⁻¹B)⁻¹(c; C)  (1.10)

and is equal to

Var[Q, Y] = ⟨(c; C), (B*W_{σ,α}⁻¹B)⁻¹(c; C)⟩,  (1.11)

where ⟨·, ·⟩ is the inner product in the product space ℝ^q × S_p.

Proof. At first we shall show that [Q, Y] is unbiased. Indeed,

E[Q, Y] = [W⁻¹B(B*W⁻¹B)⁻¹(c; C), B(σ, αα')]
        = ⟨B*W⁻¹B(B*W⁻¹B)⁻¹(c; C), (σ; αα')⟩
        = ⟨(c; C), (σ; αα')⟩ = c'σ + α'Cα.

Now by the Lehmann-Scheffé theorem, we only need to verify that [Q, Y] is uncorrelated with any unbiased estimator of zero. Suppose that [Q₀, B(σ; αα')] = 0 for all σ and α. Then

Cov([Q₀, Y], [Q, Y]) = [Q₀, WQ] = [Q₀, B(B*W⁻¹B)⁻¹(c; C)] = 0.

The formula (1.11) can be verified directly. This completes the proof. □
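The adjoint formula (1.9) underlying the theorem is easy to check numerically; the sketch below (toy dimensions and random symmetric inputs, all invented) verifies that [Q, B(m; M)] = ⟨B*Q, (m; M)⟩:

```python
import numpy as np

# Numerical check of the adjoint formula (1.9) on made-up inputs:
# [Q, B(m; M)] = <B*Q, (m; M)> for the inner products defined above.
rng = np.random.default_rng(2)
n, p, q = 5, 2, 3
A = rng.standard_normal((n, p))
V = [np.eye(n)] + [x @ x.T for x in (rng.standard_normal((n, 2)) for _ in range(q - 1))]

def sym(M):
    return (M + M.T) / 2

Q = sym(rng.standard_normal((n, n)))
M = sym(rng.standard_normal((p, p)))
m = rng.standard_normal(q)

BmM = sum(mi * Vi for mi, Vi in zip(m, V)) + A @ M @ A.T       # (1.7)
lhs = np.trace(Q @ BmM)                                        # [Q, B(m; M)]
BstarQ_vec = np.array([np.trace(Q @ Vi) for Vi in V])          # (1.9), vector part
BstarQ_mat = A.T @ Q @ A                                       # (1.9), matrix part
rhs = BstarQ_vec @ m + np.trace(BstarQ_mat @ M)
assert np.isclose(lhs, rhs)
```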

2. Quadratic unbiased estimation of variance components

Let us return to the initial normal random vector X with expectation Aα and covariance matrix Σᵢ σᵢVᵢ. In this section we focus on locally best quadratic unbiased estimation (LBQUE) of a parametric function ψ = c'σ. At first we shall simplify, for this case, the results given in Theorem 1. It is well known that a quadratic X'QX is an unbiased estimator of c'σ, where c = (c₁, ..., c_q)', if and only if

A'QA = 0  (2.1)

and

[Q, Vᵢ] = cᵢ,  i = 1, ..., q.  (2.2)

Thus the last term in (1.8) is zero and therefore we may assume, without loss of generality, that WQ = 2BQB, and hence

W⁻¹B(m; M) = ½B⁻¹(Vm + AMA')B⁻¹.

Next, by (1.9), we get

B*W⁻¹B(m; M) = ½((tr{B⁻¹(Vm + AMA')B⁻¹V₁}, ..., tr{B⁻¹(Vm + AMA')B⁻¹V_q})'; A'B⁻¹(Vm + AMA')B⁻¹A).

Now let us restrict our attention to so called random linear models with the scalar expectation μ1_n, where μ is an unknown scalar and 1_n denotes the column of n ones. Then the matrix M reduces to a scalar, say m_{q+1}, and the operator B*W⁻¹B(m; m_{q+1}) may be written in the form

B*W⁻¹B(m; m_{q+1}) = ½F(m₁, ..., m_{q+1})',

where F = (f_ij) is a symmetric matrix of order q+1 defined by

f_ij = tr(B⁻¹V_iB⁻¹V_j),  i, j = 1, ..., q+1,  (2.3)

while V_{q+1} = 1_n1_n'. Moreover, the operator W⁻¹B may be expressed as

W⁻¹B(m; m_{q+1}) = ½ Σ_{i=1}^{q+1} m_i B⁻¹V_iB⁻¹.

Thus, in consequence, we get:

Theorem 2. Let X be a normal random vector with expectation μ1_n and covariance matrix Σ_{i=1}^q σ_iV_i and let ψ = c'σ be quadratically estimable by X. Then the locally best quadratic unbiased estimator of ψ at σ = σ⁰ and μ = μ⁰ is given by X'QX, where Q = Σ_{i=1}^{q+1} m_iQ_i, while m₁, ..., m_{q+1} are defined by

(m₁, ..., m_{q+1})' = F⁻¹(c₁, ..., c_q, 0)',

F is defined by (2.3) with B = B(σ⁰, (μ⁰)²) = Σ_{i=1}^q σ_i⁰V_i + (μ⁰)²1_n1_n',

Q_i = B⁻¹(σ⁰, (μ⁰)²) V_i B⁻¹(σ⁰, (μ⁰)²),  i = 1, ..., q,  (2.4)

and

Q_{q+1} = B⁻¹(σ⁰, (μ⁰)²) 1_n1_n' B⁻¹(σ⁰, (μ⁰)²).  (2.5)  □

Remark 1. Other formulae for the LBQUE of c'σ were given in LaMotte (1973), Pringle (1974) and Rao and Kleffe (1988, Theorem 5.3.1, p. 100).
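To make the recipe of Theorem 2 concrete, the following sketch assembles F and Q for a toy one-way layout (the subclass sizes, the point (σ⁰, μ⁰) and the target function are all invented) and checks the unbiasedness conditions (2.1)-(2.2):

```python
import numpy as np

# Toy illustration of Theorem 2 (all numbers invented): build B, the matrix F
# of (2.3) and Q = sum m_i Q_i, then check the unbiasedness conditions
# (2.1)-(2.2) for estimating psi = sigma_2.
n_sizes = [2, 3, 4]
n = sum(n_sizes)
ones = np.ones(n)
I = np.eye(n)
D = np.zeros((n, n))
start = 0
for ni in n_sizes:                      # D = diag(1_{n_i} 1_{n_i}')
    D[start:start + ni, start:start + ni] = 1.0
    start += ni
V = [I, D, np.outer(ones, ones)]        # V_{q+1} = 1_n 1_n'

sigma0, mu0 = (1.0, 0.5), 2.0           # the chosen point (sigma^0, mu^0)
B = sigma0[0] * I + sigma0[1] * D + mu0**2 * V[2]
Binv = np.linalg.inv(B)

F = np.array([[np.trace(Binv @ Vi @ Binv @ Vj) for Vj in V] for Vi in V])  # (2.3)
c = np.array([0.0, 1.0])                # psi = c'sigma = sigma_2
m = np.linalg.solve(F, np.append(c, 0.0))
Q = sum(mi * Binv @ Vi @ Binv for mi, Vi in zip(m, V))   # (2.4)-(2.5)

# (2.2): tr(Q V_i) = c_i; and (2.1) with A = 1_n: 1_n' Q 1_n = 0.
assert np.allclose([np.trace(Q @ I), np.trace(Q @ D)], c)
assert abs(ones @ Q @ ones) < 1e-8
```

The two assertions hold by construction: tr(QV_j) = (Fm)_j, and solving Fm = (c₁, c₂, 0)' forces the first q entries to c and the last one to zero.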

The LBQUE is often identified with the locally best invariant quadratic unbiased estimator (LBIQUE). The latter is defined as a quadratic minimizing [Q, VQV] in the class of all quadratics satisfying the conditions (2.2) and

QA = 0.  (2.6)


Focke and Dewess (1972) (cf. also Kleffe, 1977) noted that any LBIQUE may be presented as a limit of LBQUE's. A version of their result is:

Theorem 3. Let G be a non-negative definite matrix of order p such that row(G) ⊇ row(A). Consider a sequence {d_k} of positive numbers such that lim_{k→∞} d_k = ∞ and define G_k = d_kG for k = 1, 2, .... Suppose that Q₀ minimizes [Q, VQV] under the conditions (2.2) and (2.6), and that Q_k minimizes [Q, (V + AG_kA')Q(V + AG_kA')] under the conditions (2.1) and (2.2). Then there exists a convergent subsequence {Q_{k_l}} of {Q_k} such that lim_{l→∞} Q_{k_l} = Q₀.

Remark 2. The limit lim_{k→∞} Q_k can be considered in the norm ‖Q‖ = √(tr Q²) or in the norm ‖Q‖_sup = sup_{‖x‖=1} √(x'Q²x). Since ‖Q‖_sup ≤ ‖Q‖ ≤ √n ‖Q‖_sup, it does not matter which of these norms is used.

Proof of Theorem 3. It follows from the conditions tr(Q_kVQ_k[V + AG_kA']) ≤ tr(Q₀VQ₀[V + AG_kA']) and Q₀A = 0 that

tr(Q_kVQ_k[V + AG_kA']) ≤ tr(Q₀VQ₀V).  (2.7)

Now writing the matrix G_k in the form R_kR_k' one observes that tr(Q_kVQ_kAG_kA') = tr(R_k'A'Q_kVQ_kAR_k) ≥ 0. Thus, by (2.7), we get

tr(Q_kVQ_kV) ≤ tr(Q₀VQ₀V)  (2.8)

and

lim_{k→∞} tr(Q_kVQ_kAGA') = lim_{k→∞} (1/d_k) tr(Q_kVQ_kAG_kA') = 0.  (2.9)

Therefore,

‖Q_k‖ = ‖V^{-1/2}V^{1/2}Q_kV^{1/2}V^{-1/2}‖ ≤ ‖V^{-1/2}‖²_sup ‖V^{1/2}Q_kV^{1/2}‖ = ‖V^{-1/2}‖²_sup √(tr(Q_kVQ_kV)) ≤ ‖V^{-1/2}‖²_sup √(tr(Q₀VQ₀V)) < ∞,

and hence the sequence {Q_k} is bounded in the norm ‖·‖. Let {Q_{k_l}} be a convergent subsequence of {Q_k} and let lim_{l→∞} Q_{k_l} = Q. We note that Q satisfies the condition (2.2). Since any LBIQUE is uniquely determined by the conditions (2.2) and (2.6), we only need to show that QA = 0. Suppose not. Then there exists a vector u ∈ ℝ^p such that u'A'QVQAu > 0. Next, by the assumption row(G) ⊇ row(A), one can write Au = AG^{1/2}w for some w ∈ ℝ^p. Therefore we get w'G^{1/2}A'QVQAG^{1/2}w > 0 and, in consequence, tr(QVQAGA') > 0. On the other hand, it follows from (2.9) that tr(QVQAGA') = 0. This contradiction completes the proof. □
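Theorem 3 can be illustrated numerically. In the sketch below (a toy one-way layout, all numbers invented), the LBQUE matrix from Theorem 2 is computed with the mean-squared term (μ⁰)² replaced by an inflating constant d; as d grows the matrices stabilize, and by the theorem the limit is the LBIQUE:

```python
import numpy as np

def lbque_matrix(d, n_sizes=(2, 3, 4), sigma=(1.0, 0.5), c=(0.0, 1.0)):
    """LBQUE matrix of c'sigma computed as in Theorem 2, with the mean-squared
    term (mu^0)^2 replaced by the inflating constant d (toy numbers throughout)."""
    n = sum(n_sizes)
    I, ones = np.eye(n), np.ones(n)
    D = np.zeros((n, n))
    s = 0
    for ni in n_sizes:
        D[s:s + ni, s:s + ni] = 1.0
        s += ni
    V = [I, D, np.outer(ones, ones)]
    B = sigma[0] * I + sigma[1] * D + d * V[2]
    Binv = np.linalg.inv(B)
    F = np.array([[np.trace(Binv @ Vi @ Binv @ Vj) for Vj in V] for Vi in V])
    m = np.linalg.solve(F, np.array([c[0], c[1], 0.0]))
    return sum(mi * Binv @ Vi @ Binv for mi, Vi in zip(m, V))

Q_far = lbque_matrix(1e4)
Q_farther = lbque_matrix(1e7)
assert np.linalg.norm(Q_far - Q_farther) < 1e-3   # the sequence stabilizes
```

Here the role of AG_kA' of the theorem is played by d·1_n1_n', i.e. G = d applied to the one-dimensional mean parameter.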

3. Application in the unbalanced one-way random normal model

In this section we shall apply Theorem 2 to quadratic unbiased estimation without invariance in the unbalanced one-way random normal model. To this aim we only need to specify the matrices F and Q_i, i = 1, ..., q+1, defined by the formulae (2.3)-(2.5).

The unbalanced one-way random model is given by the conditions E_X = μ1_n and Σ_X = σ₁I_n + σ₂D, where D = diag(1_{n₁}1_{n₁}', ..., 1_{n_k}1_{n_k}') and n, k and n₁, ..., n_k are positive integers such that k < n and n₁ + ... + n_k = n. Thus q = 2, V₁ = I_n, V₂ = D and V₃ = 1_n1_n', so that the matrix F of Theorem 2 takes the form

F = [ tr(B⁻²)         tr(B⁻¹DB⁻¹)       1_n'B⁻²1_n
      tr(B⁻¹DB⁻¹)     tr(B⁻¹DB⁻¹D)      1_n'B⁻¹DB⁻¹1_n
      1_n'B⁻²1_n      1_n'B⁻¹DB⁻¹1_n    (1_n'B⁻¹1_n)² ]

and next Q₁ = B⁻², Q₂ = B⁻¹DB⁻¹ and Q₃ = B⁻¹1_n1_n'B⁻¹.

On the other hand, by Stepniak (1988), the matrix B⁻¹ = B⁻¹(σ₁, σ₂, μ²) may be written as σ₁⁻¹(I − H), where H is a block matrix of the form H = (h_ij 1_{n_i}1_{n_j}')_{i,j=1,...,k}. To write our results in a concise form, let us introduce the notations ρ = σ₂/σ₁, λ = μ²/σ₁ and

K = Σ_{r=1}^k n_r/(1+ρn_r),  L = Σ_{r=1}^k n_r/(1+ρn_r)²,  N = Σ_{r=1}^k n_r/(1+ρn_r)³.

Then

h_ij = λ / [(1+λK)(1+ρn_i)(1+ρn_j)]  for i ≠ j,

h_ii = ρ/(1+ρn_i) + λ / [(1+λK)(1+ρn_i)²]  for i = j.

Next, by a longer but simple algebra, we get

σ₁²B⁻² = I_n + (s_ij 1_{n_i}1_{n_j}'),

where

s_ij = [ λ²L/(1+λK)² − λ(1/(1+ρn_i) + 1/(1+ρn_j))/(1+λK) ] × 1/((1+ρn_i)(1+ρn_j)),  i ≠ j,

s_ii = [ λ²L/(1+λK)² − 2λ/((1+ρn_i)(1+λK)) ] × 1/(1+ρn_i)² − (ρ/(1+ρn_i))(1 + 1/(1+ρn_i)),

σ₁²B⁻¹DB⁻¹ = (t_ij 1_{n_i}1_{n_j}'),

where

t_ij = [ λ²(K−L)/(ρ(1+λK)²) − λ(n_i/(1+ρn_i) + n_j/(1+ρn_j))/(1+λK) ] × 1/((1+ρn_i)(1+ρn_j)),  i ≠ j,

t_ii = [ λ²(K−L)/(ρ(1+λK)²) − 2λn_i/((1+ρn_i)(1+λK)) ] × 1/(1+ρn_i)² + 1/(1+ρn_i)²,

and

σ₁²B⁻¹1_n1_n'B⁻¹ = (u_ij 1_{n_i}1_{n_j}'),

where

u_ij = 1 / [(1+λK)²(1+ρn_i)(1+ρn_j)],  i, j = 1, ..., k.

Let us also introduce the statistics

R = Σ_{r=1}^k X_r·/(1+ρn_r),  S = Σ_{r=1}^k X_r·/(1+ρn_r)²,  T = Σ_{r=1}^k X_r·²/(1+ρn_r),  U = Σ_{r=1}^k X_r·²/(1+ρn_r)²,

where X_r· = Σ_{j=1}^{n_r} X_rj, r = 1, ..., k, and X_ij denotes the j-th observation in the i-th subclass. Now, by Theorem 2, the LBQUE without invariance for a parametric function ψ = c₁σ₁ + c₂σ₂ at a point (σ₁⁰, σ₂⁰, (μ⁰)²)' = σ₁⁰(1, ρ, λ)' may be presented in the form

ψ̂ = Σ_{i=1}^3 m_iZ_i,  (3.1)

where

(m₁, m₂, m₃)' = F₁⁻¹(c₁, c₂, 0)',  (3.2)

F₁ = (σ₁⁰)²F and, in the notations K, L and N,

F₁ = [ f₁₁  f₁₂  f₁₃
       f₁₂  f₂₂  f₂₃
       f₁₃  f₂₃  f₃₃ ]  (3.3)

with

f₁₁ = λ²L²/(1+λK)² − 2λN/(1+λK) + n − ρ(K+L),
f₁₂ = λ²L(K−L)/(ρ(1+λK)²) − 2λ(L−N)/(ρ(1+λK)) + L,
f₁₃ = L/(1+λK)²,
f₂₂ = λ²(K−L)²/(ρ²(1+λK)²) − 2λ(K−2L+N)/(ρ²(1+λK)) + (K−L)/ρ,
f₂₃ = (K−L)/(ρ(1+λK)²),
f₃₃ = K²/(1+λK)²,

whereas

Z₁ = λ²LR²/(1+λK)² − 2λRS/(1+λK) − ρ(T+U) + Σ_{i,j} X_ij²,

Z₂ = λ²(K−L)R²/(ρ(1+λK)²) − 2λR(R−S)/(ρ(1+λK)) + U,

Z₃ = R²/(1+λK)².

Moreover, the variance of the estimator ψ̂ at σ is given by

Var(ψ̂) = 2(σ₁⁰)² c̃'F₁⁻¹c̃,  (3.4)

where c̃ = (c₁, c₂, 0)'.

After simplification the formulae (3.1)-(3.4) may be expressed in the form

ψ̂ = m₁Z₁ + m₂Z₂ − (c₁g₁ + c₂g₂)Z₃,  (3.5)

where now (m₁, m₂)' = G⁻¹(c₁, c₂)',

G = (g_ij) = [ g₁₁  g₁₂
               g₁₂  g₂₂ ]

with

g₁₁ = (L²/K²)(λK−1)/(λK+1) − 2λN/(λK+1) + n − ρ(K+L),
g₁₂ = (L(K−L)/(ρK²))(λK−1)/(λK+1) − 2λ(L−N)/(ρ(λK+1)) + L,
g₂₂ = ((K−L)²/(ρ²K²))(λK−1)/(λK+1) − 2λ(K−2L+N)/(ρ²(λK+1)) + (K−L)/ρ,

and

g₁ = [Lg₂₂ − ρ⁻¹(K−L)g₁₂] / (K²|G|),  g₂ = [ρ⁻¹(K−L)g₁₁ − Lg₁₂] / (K²|G|).

Moreover, the variance of the estimator ψ̂ at σ may be presented in the form

Var(ψ̂) = 2(σ₁⁰)²(c₁, c₂)G⁻¹(c₁, c₂)'.  (3.6)

Corresponding results for quadratic unbiased estimation with invariance were obtained by Swallow and Searle (1978). An alternative formula in this case can be derived from (3.1), (3.5) and (3.6) as a limit when λ tends to infinity. In this way the LBIQUE of ψ = c₁σ₁ + c₂σ₂ at σ₁⁰(1, ρ)' may be presented in the form

ψ̃ = (c₁, c₂)G₁⁻¹(Z̃₁, Z̃₂)',  (3.7)

where

G₁ = [ L²/K² − 2N/K + n − ρ(K+L) ,           L(K−L)/(ρK²) − 2(L−N)/(ρK) + L
       L(K−L)/(ρK²) − 2(L−N)/(ρK) + L ,      (K−L)²/(ρ²K²) − 2(K−2L+N)/(ρ²K) + (K−L)/ρ ]

with

Z̃₁ = LR²/K² − 2RS/K − ρ(T+U) + Σ_{i,j} X_ij²

and

Z̃₂ = (K−L)R²/(ρK²) − 2R(R−S)/(ρK) + U.

In particular, if n₁ = ··· = n_k = n/k, then g₁ = 0, g₂ = (k+ρn)²/(n²k(k−1)),

G₁⁻¹ = G⁻¹ = [ 1/(n−k) ,          −k/(n(n−k))
               −k/(n(n−k)) ,      (k+ρn)²/(n²(k−1)) + k²/(n²(n−k)) ]

and the estimators ψ̂ and ψ̃ reduce to the usual ANOVA estimator of ψ. We note that the variance of the LBIQUE presented by Swallow and Searle depends formally on six quantities, but only three of them are independent (as our K, L and N).
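The balanced-case remark can be checked numerically. The sketch below (invented sizes, data and point values) computes the Theorem 2 estimator of σ₂ at an arbitrary point (σ⁰, μ⁰) and compares it with the ANOVA estimator (MSB − MSE)/(n/k):

```python
import numpy as np

# Check of the balanced-case remark with toy numbers: for n_1 = ... = n_k the
# Theorem 2 estimator of sigma_2, computed at an arbitrary point
# (sigma^0, mu^0), coincides with the ANOVA estimator (MSB - MSE)/(n/k).
rng = np.random.default_rng(1)
k, size = 4, 3                       # k classes of common size n/k
n = k * size
X = rng.standard_normal(n)

I, ones = np.eye(n), np.ones(n)
D = np.kron(np.eye(k), np.ones((size, size)))
V = [I, D, np.outer(ones, ones)]
B = 1.3 * I + 0.7 * D + 2.1 * V[2]   # arbitrary (sigma_1^0, sigma_2^0, (mu^0)^2)
Binv = np.linalg.inv(B)
F = np.array([[np.trace(Binv @ Vi @ Binv @ Vj) for Vj in V] for Vi in V])
mcoef = np.linalg.solve(F, np.array([0.0, 1.0, 0.0]))
Q = sum(mi * Binv @ Vi @ Binv for mi, Vi in zip(mcoef, V))
psi_hat = X @ Q @ X                  # LBQUE of sigma_2

group = X.reshape(k, size)           # consecutive blocks are the classes
mse = ((group - group.mean(axis=1, keepdims=True))**2).sum() / (n - k)
msb = size * ((group.mean(axis=1) - X.mean())**2).sum() / (k - 1)
anova = (msb - mse) / size           # ANOVA estimator of sigma_2
assert np.isclose(psi_hat, anova)
```

The agreement does not depend on the point at which the LBQUE is computed, in line with the fact that in the balanced normal case the ANOVA estimator is uniformly best unbiased.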

References

Focke, J. and G. Dewess (1972). Über die Schätzmethode MINQUE von C.R. Rao und ihre Verallgemeinerung. Math. Operationsforsch. Statist. 3, 129-143.
Kleffe, J. (1977). A note on co-MINQUE in variance covariance components models. Math. Operationsforsch. Statist. 8, 331-343.
Kleffe, J. (1980). On recent progress of MINQUE theory - Nonnegative estimation, consistency, asymptotic normality and explicit formulae. Math. Operationsforsch. Statist. Ser. Statist. 11, 563-588.
Kleffe, J. and B. Seifert (1986). Computation of variance components by MINQUE method. J. Multivariate Anal. 18, 107-116.
LaMotte, L.R. (1973). Quadratic estimation of variance components. Biometrics 29, 311-330.
Lehmann, E.L. and H. Scheffé (1950). Completeness, similar regions and unbiased estimation - Part 1. Sankhyā 10, 305-340.
Pringle, R.M. (1974). Some results on estimation of variance components by MINQUE. J. Amer. Statist. Assoc. 69, 987-989.
Rao, C.R. (1979). Estimation of variance components - MINQUE theory and its relation to ML and RML estimation. Sankhyā Ser. B 41, 138-153.
Rao, C.R. and J. Kleffe (1988). Estimation of Variance Components and Applications. North-Holland, Amsterdam.
Seely, J. (1970). Linear spaces and unbiased estimation. Ann. Math. Statist. 41, 1725-1734.
Seely, J. and G. Zyskind (1971). Linear spaces and minimum variance unbiased estimation. Ann. Math. Statist. 42, 691-703.
Stepniak, C. (1986). On admissibility in various classes of quadratic estimators. Statist. Probab. Lett. 4, 17-19.
Stepniak, C. (1987). A complete class for linear estimation in a general linear model. Ann. Inst. Statist. Math. 39, 563-573.
Stepniak, C. (1988). Inversion of covariance matrices: explicit formulae. SIAM J. Matrix Anal. Appl., to appear.
Swallow, W.H. and S.R. Searle (1978). Minimum variance quadratic unbiased estimation (MIVQUE) of variance components. Technometrics 20, 265-272.
Zyskind, G. (1967). On canonical forms, non-negative covariance matrices and best and simple least squares estimators in linear models. Ann. Math. Statist. 38, 1092-1109.