On properties of BLUEs under general linear regression models

Journal of Statistical Planning and Inference 143 (2013) 771–782

Yongge Tian

CEMA, Central University of Finance and Economics, Beijing 100081, China

Article history: Received 26 July 2011; received in revised form 4 October 2012; accepted 10 October 2012; available online 23 October 2012.

Abstract

The best linear unbiased estimator (BLUE) of a vector of parametric functions of the regression coefficients under a general linear model $\mathscr{M} = \{y, X\beta, \sigma^{2}\Sigma\}$ can be written as $Gy$, where $G$ is a solution of a consistent linear matrix equation composed of the given matrices in the model and their generalized inverses. In the past several years, a useful tool, the matrix rank method, has been utilized to simplify various complicated operations on matrices and their generalized inverses. In this paper, we use this algebraic method to give a comprehensive investigation of various algebraic and statistical properties of the projection matrix $G$ in the BLUE of parametric functions under $\mathscr{M}$. These properties include the uniqueness of $G$, the maximal and minimal possible ranks of $G$ and of $\mathrm{Cov}(Gy)$, and identifying conditions for various equalities involving $G$. In addition, necessary and sufficient conditions are established for equalities of projection matrices in the BLUEs of parametric functions under the original model and its transformed models.

Keywords: General linear model; Transformed model; BLUE; Projection matrix; Generalized inverses of matrices; Rank formulas

1. Introduction and preliminary results

Throughout this paper, $\mathbb{R}^{m\times n}$ stands for the collection of all $m \times n$ real matrices. The symbols $A'$, $r(A)$ and $\mathscr{R}(A)$ stand for the transpose, the rank and the range (column space) of a matrix $A \in \mathbb{R}^{m\times n}$, respectively; $I_m$ denotes the identity matrix of order $m$. The Moore–Penrose inverse of $A$, denoted by $A^{+}$, is defined to be the unique solution $G$ of the four matrix equations

(i) $AGA = A$,  (ii) $GAG = G$,  (iii) $(AG)' = AG$,  (iv) $(GA)' = GA$.

A matrix $G \in \mathbb{R}^{n\times m}$ is called a generalized inverse (g-inverse) of $A$, denoted by $A^{-}$, if it satisfies (i); the set of all $A^{-}$ is denoted by $\{A^{-}\}$. Further, let $P_A$, $E_A$ and $F_A$ stand for the three orthogonal projectors (symmetric idempotent matrices) $P_A = AA^{+}$, $E_A = I_m - AA^{+}$ and $F_A = I_n - A^{+}A$. One of the most important applications of generalized inverses is to derive closed-form formulas for ranks of partitioned matrices, as well as general solutions of matrix equations; see Lemmas 1.1 and 1.2 below.

Consider a general linear model

$$\mathscr{M} = \{\, y,\ X\beta,\ \sigma^{2}\Sigma \,\}, \qquad (1.1)$$

where $X \in \mathbb{R}^{n\times p}$ is a known matrix of arbitrary rank, $y \in \mathbb{R}^{n\times 1}$ is an observable random vector with $E(y) = X\beta$ and $\mathrm{Cov}(y) = \sigma^{2}\Sigma$, $\beta \in \mathbb{R}^{p\times 1}$ is a vector of unknown parameters to be estimated, $\Sigma \in \mathbb{R}^{n\times n}$ is a known or unknown nonnegative definite (nnd) matrix of arbitrary rank, and $\sigma^{2}$ is a positive unknown parameter.



Assume that $K \in \mathbb{R}^{k\times p}$ is a given matrix. Then the vector of parametric functions $K\beta$ is said to be estimable under (1.1) if there exists a matrix $G \in \mathbb{R}^{k\times n}$ such that the expectation $E(Gy) = K\beta$ holds under (1.1). When $k = 1$, $K\beta$ is a linear combination of the parameter vector $\beta$. It is well known that $K\beta$ is estimable under (1.1) if and only if $\mathscr{R}(K') \subseteq \mathscr{R}(X')$, or equivalently $K = ZX$ for some $Z$; see, e.g., Alalouf and Styan (1979) and Tian et al. (2008). Assume that $K\beta$ is estimable under $\mathscr{M}$. Then the well-known best linear unbiased estimator (BLUE) of $K\beta$ under (1.1), denoted by $\mathrm{BLUE}_{\mathscr{M}}(K\beta)$, is defined to be a linear estimator $Gy$ such that $E(Gy) = K\beta$ and $\mathrm{Cov}(G_1 y) - \mathrm{Cov}(Gy)$ is nnd for any other unbiased linear estimator $G_1 y$ of $K\beta$.

BLUEs of parametric functions under linear regression models have been a main object of study in regression analysis. Many papers on BLUEs have appeared in the statistical literature since the 1970s because of the optimal statistical properties of BLUEs for estimating parametric functions under regression models. Even so, many problems remain that merit further consideration. It can be seen from the definition of BLUEs that the algebraic and statistical properties of a BLUE are mainly determined by its coefficient matrix $G$. Hence, a complete treatment of the coefficient matrix $G$ in a BLUE will establish a solid foundation for the theory of BLUEs. This paper investigates the projection matrix $G$ through generalized inverses of matrices and rank formulas for matrices. Precisely, we consider the following problems on $G$ and the corresponding $\mathrm{BLUE}_{\mathscr{M}}(K\beta) = Gy$:

(I) The maximal and minimal possible ranks of the projection matrix $G$ and its rank invariance.
(II) The rank of the covariance matrix of $\mathrm{BLUE}_{\mathscr{M}}(K\beta)$.
(III) The uniqueness of $G$ and $\mathrm{BLUE}_{\mathscr{M}}(K\beta)$.
(IV) Various equalities satisfied by the projection matrix $G$.
(V) Relations between the projection matrices under the original model and its transformed models.

It is well known that, through generalized inverses of matrices, the general expression of the BLUE of the linear transformation $K\beta$ under $\mathscr{M}$ can be written in a certain closed form. The following result was given by Drygas (1970) and Rao (1973).

Lemma 1.1. Let $K \in \mathbb{R}^{k\times p}$ and assume that $K\beta$ is estimable under (1.1). Then
$$Gy = \mathrm{BLUE}_{\mathscr{M}}(K\beta) \iff G[X,\ \Sigma E_X] = [K,\ 0]. \qquad (1.2)$$
The matrix equation in (1.2) is always consistent, that is, $\mathscr{R}([K, 0]') \subseteq \mathscr{R}([X, \Sigma E_X]')$, or equivalently $[K,0][X, \Sigma E_X]^{+}[X, \Sigma E_X] = [K,0]$. In this case, the general solution of (1.2), denoted by $P_{K;X;\Sigma}$ and called a projection matrix, can be written in the parametric form
$$G = P_{K;X;\Sigma} = [K,0][X, \Sigma E_X]^{+} + U(I_n - [X,\Sigma][X,\Sigma]^{+}), \qquad (1.3)$$
where $U \in \mathbb{R}^{k\times n}$ is arbitrary. In this case,
$$E[\mathrm{BLUE}_{\mathscr{M}}(K\beta)] = K\beta \quad\text{and}\quad \mathrm{Cov}[\mathrm{BLUE}_{\mathscr{M}}(K\beta)] = \sigma^{2}P_{K;X;\Sigma}\,\Sigma\,P_{K;X;\Sigma}'. \qquad (1.4)$$
In particular,
$$Gy = \mathrm{BLUE}_{\mathscr{M}}(X\beta) \iff G[X,\ \Sigma E_X] = [X,\ 0], \qquad (1.5)$$
where
$$G = P_{X;\Sigma} = [X,0][X, \Sigma E_X]^{+} + U(I_n - [X,\Sigma][X,\Sigma]^{+}), \qquad (1.6)$$
and $U \in \mathbb{R}^{n\times n}$ is arbitrary.
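As a concrete illustration of Lemma 1.1, the following NumPy sketch (simulated data, illustrative dimensions; everything in it is an assumption added for exposition, not part of the development above) computes the particular solution $[X,0][X,\Sigma E_X]^{+}$ of (1.6) with $U = 0$ and verifies that it solves $G[X, \Sigma E_X] = [X, 0]$, so that $Gy$ is a BLUE of $X\beta$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 3
X = rng.standard_normal((n, p))          # model matrix (full column rank a.s.)
L = rng.standard_normal((n, n - 2))
Sigma = L @ L.T                          # a singular nnd covariance matrix

E_X = np.eye(n) - X @ np.linalg.pinv(X)  # E_X = I_n - XX^+
N = np.hstack([X, Sigma @ E_X])          # [X, Sigma E_X]
B = np.hstack([X, np.zeros((n, n))])     # [X, 0]

G = B @ np.linalg.pinv(N)                # particular solution in (1.6) with U = 0

# (1.2)/(1.5): the equation G[X, Sigma E_X] = [X, 0] is consistent and G solves it
assert np.allclose(G @ N, B)
assert np.allclose(G @ X, X)             # unbiasedness: E(Gy) = X beta
```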

It can be seen from (1.2) that the algebraic and statistical properties of $\mathrm{BLUE}_{\mathscr{M}}(K\beta)$ are mainly determined by the projection matrix $P_{K;X;\Sigma}$ and the matrix products $P_{K;X;\Sigma}X$ and $P_{K;X;\Sigma}\Sigma$, so it is an essential task to give a complete investigation of the projection matrix $P_{K;X;\Sigma}$ from a mathematical point of view. Note that $P_{K;X;\Sigma}$ in (1.3) is in fact a linear matrix expression consisting of Moore–Penrose inverses of matrices and an arbitrary matrix. In fact, all estimators under linear regression models can be written as matrix expressions involving generalized inverses of the given matrices in the models. Traditional methods for simplifying expressions composed of matrices, inverses of matrices and generalized inverses of matrices rely on conventional matrix operations together with some known formulas for inverses and generalized inverses. The effectiveness of these methods is quite limited, and the simplification process is quite tedious. As is well known, one of the most fundamental quantities in linear algebra is the rank of a matrix, which is a well-understood and easily computed number. It has been realized in the past decades that the rank of a matrix is a useful tool for simplifying complicated matrix expressions and equalities. Note that $A = 0$ if and only if $r(A) = 0$; hence two matrices $A$ and $B$ of the same size are equal if and only if $r(A - B) = 0$. If closed-form formulas for the rank of $A - B$ can be derived, they can be used to characterize relations between the two matrices $A$ and $B$. This algebraic method, called the matrix rank method, is available for studying various expressions composed of matrices, inverses of matrices, generalized inverses of matrices and arbitrary matrices. In regression analysis, this method can be used to characterize consistency of linear models, implicit restrictions on parameters in linear models, superfluous observations in linear models, as well as


estimability, testability, unbiasedness, uniqueness, invariance, equalities, additive decomposability, block decomposability, uncorrelatedness, independence, orthogonality, proportionality and parallelness of estimators, etc. In a series of papers by Tian (2007, 2009a,b, 2010a,b, 2012), Puntanen et al. (2005), Qian and Tian (2006), Tian et al. (2008), Tian and Liu (2010), Tian and Puntanen (2009), Tian and Styan (2005, 2006), Tian and Takane (2007, 2008a,b, 2009a,b), Tian and Tian (2010), Tian and Wiens (2006), and Tian and Zhang (2011), this algebraic method was widely used to characterize a variety of algebraic and statistical properties, equalities, and additive and block decompositions of estimators under general linear models. As these papers show, the matrix rank method is quite effective for simplifying various complicated matrix expressions and equalities, and the results obtained can easily be represented as simple rank equalities, range equalities or matrix equations. In this paper, we also use the matrix rank method to derive various properties of BLUEs of parametric functions under (1.1).

The following are some known results due to Marsaglia and Styan (1974) on ranks of matrices, which are used in the latter part of this paper.

Lemma 1.2. Let $A \in \mathbb{R}^{m\times n}$, $B \in \mathbb{R}^{m\times k}$, $C \in \mathbb{R}^{l\times n}$ and $D \in \mathbb{R}^{l\times k}$. Then
$$r[A,\ B] = r(A) + r(E_A B) = r(B) + r(E_B A), \qquad (1.7)$$
$$r\begin{bmatrix} A \\ C \end{bmatrix} = r(A) + r(CF_A) = r(C) + r(AF_C), \qquad (1.8)$$
$$r\begin{bmatrix} A & B \\ C & 0 \end{bmatrix} = r(B) + r(C) + r(E_B A F_C), \qquad (1.9)$$
$$r\begin{bmatrix} AA' & B \\ B' & 0 \end{bmatrix} = r[A,\ B] + r(B), \qquad (1.10)$$
$$r\begin{bmatrix} A & B \\ C & D \end{bmatrix} = r(A) + r\begin{bmatrix} 0 & E_A B \\ CF_A & D - CA^{+}B \end{bmatrix}. \qquad (1.11)$$
In particular,
$$\mathscr{R}(B) \subseteq \mathscr{R}(A) \ \text{and}\ \mathscr{R}(C') \subseteq \mathscr{R}(A') \ \Rightarrow\ r\begin{bmatrix} A & B \\ C & D \end{bmatrix} = r(A) + r(D - CA^{+}B), \qquad (1.12)$$
$$r\begin{bmatrix} A & B \\ C & D \end{bmatrix} = r(A) \iff E_A B = 0,\ CF_A = 0\ \text{and}\ CA^{+}B = D. \qquad (1.13)$$

The following result was shown by De Moor and Golub (1991) through the restricted singular value decomposition (RSVD), and by Tian (2002) and Tian and Cheng (2003) through generalized inverses of matrices.

Lemma 1.3. Let $A \in \mathbb{R}^{m\times n}$, $B \in \mathbb{R}^{m\times k}$ and $C \in \mathbb{R}^{l\times n}$. Then the maximal and minimal ranks of $A - BZ$ and $A - BZC$ with respect to $Z$ are given by the following closed-form formulas:
$$\max_{Z \in \mathbb{R}^{k\times n}} r(A - BZ) = \min\{\, r[A,\ B],\ n \,\}, \qquad (1.14)$$
$$\min_{Z \in \mathbb{R}^{k\times n}} r(A - BZ) = r[A,\ B] - r(B) = r(E_B A), \qquad (1.15)$$
$$\max_{Z \in \mathbb{R}^{k\times l}} r(A - BZC) = \min\left\{ r[A,\ B],\ r\begin{bmatrix} A \\ C \end{bmatrix} \right\}. \qquad (1.16)$$
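The rank formulas above are easy to sanity-check numerically. The following sketch (random matrices with illustrative sizes; the equalities involving generic random matrices hold with probability 1) verifies (1.7), (1.9) and (1.15), using $Z = B^{+}A$ as a minimizer in (1.15):

```python
import numpy as np

rng = np.random.default_rng(1)
rank = np.linalg.matrix_rank
m, n, k, l = 6, 5, 4, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, k))
C = rng.standard_normal((l, n))

E_A = np.eye(m) - A @ np.linalg.pinv(A)   # E_A = I_m - AA^+
E_B = np.eye(m) - B @ np.linalg.pinv(B)
F_C = np.eye(n) - np.linalg.pinv(C) @ C   # F_C = I_n - C^+C

# (1.7): r[A, B] = r(A) + r(E_A B) = r(B) + r(E_B A)
assert rank(np.hstack([A, B])) == rank(A) + rank(E_A @ B) == rank(B) + rank(E_B @ A)

# (1.9): r[[A, B], [C, 0]] = r(B) + r(C) + r(E_B A F_C)
M = np.block([[A, B], [C, np.zeros((l, k))]])
assert rank(M) == rank(B) + rank(C) + rank(E_B @ A @ F_C)

# (1.15): min_Z r(A - BZ) = r[A, B] - r(B); Z = B^+ A attains the minimum
Z = np.linalg.pinv(B) @ A
assert rank(A - B @ Z) == rank(np.hstack([A, B])) - rank(B)
```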

Lemma 1.4. Let $A \in \mathbb{R}^{m\times n}$ and $B \in \mathbb{R}^{p\times n}$ be given. Then, the following hold.

(a) (Penrose, 1955) The linear matrix equation $XA = B$ is consistent if and only if $\mathscr{R}(B') \subseteq \mathscr{R}(A')$, or equivalently, $BA^{+}A = B$. In this case, the general solution can be written in the parametric form
$$X = BA^{+} + UE_A, \qquad (1.17)$$
where $U \in \mathbb{R}^{p\times m}$ is arbitrary. In particular, the solution is unique if and only if $r(A) = m$.

(b) (Khatri and Mitra, 1976) For $A, B \in \mathbb{R}^{m\times n}$, the equation $XA = B$ has a symmetric solution $X = X'$ if and only if
$$BA^{+}A = B \quad\text{and}\quad A'B = B'A. \qquad (1.18)$$


In this case, the general symmetric solution of $XA = B$ can be written as
$$X = BA^{+} + (BA^{+})' - AA^{+}BA^{+} + E_A U E_A, \qquad (1.19)$$
where $U = U' \in \mathbb{R}^{m\times m}$ is arbitrary. In particular, under
$$AA^{+}B = B, \quad BA^{+}A = B, \quad A'B = B'A, \qquad (1.20)$$
the general symmetric solution of $XA = B$ can be written as
$$X = BA^{+} + E_A U E_A, \qquad (1.21)$$
where $U = U' \in \mathbb{R}^{m\times m}$ is arbitrary.

(c) For $A, B \in \mathbb{R}^{m\times n}$ with $\mathscr{R}(B') \subseteq \mathscr{R}(A')$, the solution $BA^{+}$ of $XA = B$ satisfies the rank equality
$$r[BA^{+} - (BA^{+})'] = r\begin{bmatrix} A'B - B'A & B'E_A \\ E_A B & 0 \end{bmatrix}. \qquad (1.22)$$

Hence, the solution $BA^{+}$ is symmetric if and only if $\mathscr{R}(B) \subseteq \mathscr{R}(A)$ and $A'B = B'A$.

Proof. Since $A^{+} = (A'A)^{+}A'$ and $(A^{+})' = A(A'A)^{+}$ by (1.24), we can write $BA^{+} - (BA^{+})' = B(A'A)^{+}A' - A(A'A)^{+}B'$. Applying (1.12) twice and simplifying by elementary block matrix operations (EBMOs), we obtain
$$r[BA^{+} - (BA^{+})'] = r\begin{bmatrix} A'A & 0 & A' \\ 0 & -A'A & B' \\ B & A & 0 \end{bmatrix} - 2r(A) = r\begin{bmatrix} A'B - B'A & B'E_A \\ E_A B & 0 \end{bmatrix} \quad (\text{by } (1.9)),$$
as required for (1.22). $\square$

We also use the following simple results from Ben-Israel and Greville (2003), Campbell and Meyer (1991), and Rao and Mitra (1971) on Moore–Penrose inverses, ranges and ranks of matrices:
$$A = AA'(A^{+})' = (A^{+})'A'A, \quad A^{+} = (A'A)^{+}A' = A'(AA')^{+}, \quad (A^{+})^{+} = A, \quad (A^{+})' = (A')^{+}, \qquad (1.23)$$
$$\mathscr{R}(B) \subseteq \mathscr{R}(A) \iff r[A,\ B] = r(A) \iff AA^{+}B = B, \qquad (1.24)$$
$$\mathscr{R}(A) \subseteq \mathscr{R}(B) \ \text{and}\ r(A) = r(B) \iff \mathscr{R}(A) = \mathscr{R}(B) \iff AA^{+} = BB^{+}, \qquad (1.25)$$
$$\mathscr{R}(A_1) = \mathscr{R}(A_2) \ \text{and}\ \mathscr{R}(B_1) = \mathscr{R}(B_2) \ \Rightarrow\ r[A_1,\ B_1] = r[A_2,\ B_2], \qquad (1.26)$$
$$A \ \text{and}\ B \ \text{are nnd} \ \Rightarrow\ \mathscr{R}[(AB)^{2}] = \mathscr{R}(ABA) = \mathscr{R}(AB), \qquad (1.27)$$
$$A \ \text{is nnd} \ \Rightarrow\ r(BAB') = r(BA) \ \text{and}\ \mathscr{R}(BAB') = \mathscr{R}(BA). \qquad (1.28)$$
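These identities can also be probed numerically; the brief sketch below (a rank-deficient random matrix, chosen purely for illustration) verifies the Moore–Penrose identities in (1.23) and the projector characterization of equal ranges in (1.25):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))  # 6 x 5, rank 2
Ap = np.linalg.pinv(A)

# (1.23): A^+ = (A'A)^+ A' = A'(AA')^+, (A^+)^+ = A, (A^+)' = (A')^+
assert np.allclose(Ap, np.linalg.pinv(A.T @ A) @ A.T)
assert np.allclose(Ap, A.T @ np.linalg.pinv(A @ A.T))
assert np.allclose(np.linalg.pinv(Ap), A)
assert np.allclose(Ap.T, np.linalg.pinv(A.T))

# (1.25): R(A) = R(B) <=> AA^+ = BB^+; here B = AS with S invertible (a.s.)
S = rng.standard_normal((5, 5))
B = A @ S
assert np.allclose(A @ Ap, B @ np.linalg.pinv(B))
```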

2. Properties of the projection matrix $P_{K;X;\Sigma}$

In what follows, we assume that the model (1.1) is consistent, i.e.,
$$y \in \mathscr{R}[X,\ \Sigma] \ \text{holds with probability } 1; \qquad (2.1)$$
see Rao (1971, 1973). In this section, we establish some algebraic and statistical properties of the projection matrix $P_{K;X;\Sigma}$ in (1.3) and of the corresponding estimator $\mathrm{BLUE}_{\mathscr{M}}(K\beta)$. Some well-known properties of the matrix $P_{K;X;\Sigma}$ are given below; see, e.g., Puntanen et al. (2000) and Rao (1971, 1973).

Lemma 2.1. Let $P_{K;X;\Sigma}$ and $\mathrm{BLUE}_{\mathscr{M}}(K\beta)$ be as given in (1.3) and (1.2). Then, the following hold.

(a) The rank and range of the matrix $[X, \Sigma]$ satisfy
$$r[X,\ \Sigma] = r[X,\ \Sigma E_X] = r[X,\ E_X\Sigma] = r(X) + r(\Sigma E_X) = r(X) + r(E_X\Sigma), \qquad (2.2)$$
$$\mathscr{R}[X,\ \Sigma] = \mathscr{R}[X,\ \Sigma E_X] = \mathscr{R}[X,\ E_X\Sigma], \qquad (2.3)$$
$$\mathscr{R}(X) \cap \mathscr{R}(\Sigma E_X) = \mathscr{R}(X) \cap \mathscr{R}(E_X\Sigma) = \{0\}. \qquad (2.4)$$


(b) The product $P_{K;X;\Sigma}[X, \Sigma]$ is unique and can be written as
$$P_{K;X;\Sigma}[X,\ \Sigma] = [K,0][X,\ \Sigma E_X]^{+}[X,\ \Sigma], \qquad (2.5)$$
or separately,
$$P_{K;X;\Sigma}X = K, \qquad P_{K;X;\Sigma}\Sigma = [K,0][X,\ \Sigma E_X]^{+}\Sigma. \qquad (2.6)$$
(c) The covariance matrix of $\mathrm{BLUE}_{\mathscr{M}}(K\beta)$ in (1.2) is given by
$$\mathrm{Cov}[\mathrm{BLUE}_{\mathscr{M}}(K\beta)] = \sigma^{2}P_{K;X;\Sigma}\Sigma P_{K;X;\Sigma}' = \sigma^{2}[K,0][X,\ \Sigma E_X]^{+}\Sigma([K,0][X,\ \Sigma E_X]^{+})'. \qquad (2.7)$$
(d) $P_{K;X;\Sigma}$ is unique if and only if $r[X,\ \Sigma] = n$.
(e) $\mathrm{BLUE}_{\mathscr{M}}(K\beta)$ is unique with probability 1 if and only if $y \in \mathscr{R}[X,\ \Sigma]$, i.e., $\mathscr{M}$ is consistent.
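Part (b) says that, although $P_{K;X;\Sigma}$ itself is generally non-unique, the products $P_{K;X;\Sigma}X$ and $P_{K;X;\Sigma}\Sigma$ do not depend on the arbitrary matrix $U$ in (1.3). The following sketch (simulated data with a deliberately low-rank $\Sigma$ so that $r[X,\Sigma] < n$; all names and sizes are assumptions for illustration) makes this visible:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, k = 7, 3, 2
X = rng.standard_normal((n, p))
L = rng.standard_normal((n, 2))
Sigma = L @ L.T                               # rank 2, so r[X, Sigma] < n
K = rng.standard_normal((k, n)) @ X           # K = ZX, so K beta is estimable

E_X = np.eye(n) - X @ np.linalg.pinv(X)
N = np.hstack([X, Sigma @ E_X])               # [X, Sigma E_X]
B = np.hstack([K, np.zeros((k, n))])          # [K, 0]
XS = np.hstack([X, Sigma])
E_XS = np.eye(n) - XS @ np.linalg.pinv(XS)    # E_[X, Sigma]

def projector(U):
    # general solution (1.3): P = [K,0][X, Sigma E_X]^+ + U E_[X,Sigma]
    return B @ np.linalg.pinv(N) + U @ E_XS

P1 = projector(np.zeros((k, n)))
P2 = projector(rng.standard_normal((k, n)))
assert not np.allclose(P1, P2)                # the projector itself is non-unique
assert np.allclose(P1 @ X, K) and np.allclose(P2 @ X, K)   # (2.6): P X = K
assert np.allclose(P1 @ Sigma, P2 @ Sigma)    # (2.6): P Sigma is unique
```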

More results on $P_{K;X;\Sigma}$ in (1.3) are given below.

Theorem 2.2. Let $P_{K;X;\Sigma}$ and $P_{X;\Sigma}$ be as given in (1.3) and (1.6). Then, the following hold.

(a) $\max r(P_{K;X;\Sigma}) = n + r(K) - r[X,\ \Sigma]$.
(b) $\min r(P_{K;X;\Sigma}) = r(K)$.
(c) $r(\mathrm{Cov}[\mathrm{BLUE}_{\mathscr{M}}(K\beta)]) = r([K,0][X,\ \Sigma E_X]^{+}\Sigma) = r\begin{bmatrix} X & \Sigma \\ K & 0 \end{bmatrix} - r[X,\ \Sigma]$.
(d) The rank of $P_{K;X;\Sigma}$ is invariant $\iff$ $P_{K;X;\Sigma}$ is unique $\iff$ $r[X,\ \Sigma] = n$.
(e) $[I_p, 0][X,\ \Sigma E_X]^{+} \in \{X^{-}\}$, so that both $[X,0][X,\ \Sigma E_X]^{+}$ and $[I_p,0][X,\ \Sigma E_X]^{+}X$ are idempotent.
(f) $\mathscr{R}([X,0][X,\ \Sigma E_X]^{+}) = \mathscr{R}(X)$.
(g) $(P_{X;\Sigma}\Sigma)' = P_{X;\Sigma}\Sigma$.
(h) $P_{X;\Sigma}\Sigma P_{X;\Sigma}' = P_{X;\Sigma}\Sigma$.
(i) $P_{X;\Sigma}^{2}\Sigma = P_{X;\Sigma}\Sigma$.
(j) For any $Z_1, Z_2 \in \{P_{X;\Sigma}\}$, $Z_1 Z_2 \in \{P_{X;\Sigma}\}$ and $\lambda Z_1 + (1-\lambda)Z_2 \in \{P_{X;\Sigma}\}$, where $\lambda$ is any real number.
(k) $\{P_{K;X;\Sigma}\} = \{P_{K;X;(\Sigma + XTX')}\}$, where $T$ is any nonnegative definite matrix.

Proof. Applying (1.14) and (1.15) to $P_{K;X;\Sigma}$ in (1.3) gives
$$\max_{U} r(P_{K;X;\Sigma}) = \max_{U} r([K,0][X,\ \Sigma E_X]^{+} + UE_{[X,\Sigma]}) = r\begin{bmatrix} [K,0][X,\ \Sigma E_X]^{+} \\ E_{[X,\Sigma]} \end{bmatrix}, \qquad (2.8)$$
$$\min_{U} r(P_{K;X;\Sigma}) = \min_{U} r([K,0][X,\ \Sigma E_X]^{+} + UE_{[X,\Sigma]}) = r\begin{bmatrix} [K,0][X,\ \Sigma E_X]^{+} \\ E_{[X,\Sigma]} \end{bmatrix} - r(E_{[X,\Sigma]}). \qquad (2.9)$$
Further, we derive from (1.7) and (1.8) that
$$r(E_{[X,\Sigma]}) = n - r[X,\ \Sigma], \qquad r\begin{bmatrix} [K,0][X,\ \Sigma E_X]^{+} \\ E_{[X,\Sigma]} \end{bmatrix} = r([K,0][X,\ \Sigma E_X]^{+}) + r(E_{[X,\Sigma]}) = r(K) + n - r[X,\ \Sigma].$$
Substituting these two equalities into (2.8) and (2.9) gives (a) and (b).

Applying (1.28) and simplifying by (1.12) and (2.6), Lemma 2.1(a) and EBMOs, we obtain
$$r(\mathrm{Cov}[\mathrm{BLUE}_{\mathscr{M}}(K\beta)]) = r(P_{K;X;\Sigma}\Sigma P_{K;X;\Sigma}') = r(P_{K;X;\Sigma}\Sigma) = r([K,0][X,\ \Sigma E_X]^{+}\Sigma) = r\begin{bmatrix} [X,\ \Sigma E_X] & \Sigma \\ [K,\ 0] & 0 \end{bmatrix} - r[X,\ \Sigma E_X] = r\begin{bmatrix} X & \Sigma \\ K & 0 \end{bmatrix} - r[X,\ \Sigma],$$
as required for (c). The equivalence in (d) follows from (a), (b) and Lemma 2.1(d). Results (e) and (f) are derived from $[X,0][X,\ \Sigma E_X]^{+}X = X$.

Since $P_{X;\Sigma}\Sigma = [X,0][X,\ \Sigma E_X]^{+}\Sigma$ by (2.6), applying (1.12) twice and simplifying by Lemma 2.1(a) and EBMOs gives
$$r[P_{X;\Sigma}\Sigma - (P_{X;\Sigma}\Sigma)'] = r\begin{bmatrix} [X,\ \Sigma E_X] & 0 & \Sigma \\ 0 & -[X,\ \Sigma E_X]' & [X,\ 0]' \\ [X,\ 0] & \Sigma & 0 \end{bmatrix} - 2r[X,\ \Sigma] = 0,$$
establishing (g). Results (h) and (i) follow from (e) and (g).

For any $Z_1, Z_2 \in \{P_{X;\Sigma}\}$ and any real number $\lambda$, it can be derived from (1.6) and (e) that
$$Z_1 Z_2 = ([X,0][X,\ \Sigma E_X]^{+} + U_1 E_{[X,\Sigma]})([X,0][X,\ \Sigma E_X]^{+} + U_2 E_{[X,\Sigma]}) = [X,0][X,\ \Sigma E_X]^{+} + Z_1 U_2 E_{[X,\Sigma]} \in \{P_{X;\Sigma}\}$$
and
$$\lambda Z_1 + (1-\lambda)Z_2 = [X,0][X,\ \Sigma E_X]^{+} + [\lambda U_1 + (1-\lambda)U_2]E_{[X,\Sigma]} \in \{P_{X;\Sigma}\},$$
establishing (j). Note that $(\Sigma + XTX')E_X = \Sigma E_X$ for any $T$, so (k) follows from (1.2). $\square$
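Several parts of Theorem 2.2 can be checked directly from the matrices in (1.6). The following sketch (simulated data, assumed dimensions, singular $\Sigma$) verifies (e)–(i) for the particular solution $[X,0][X,\Sigma E_X]^{+}$:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 7, 3
X = rng.standard_normal((n, p))
L = rng.standard_normal((n, 2))
Sigma = L @ L.T

E_X = np.eye(n) - X @ np.linalg.pinv(X)
N = np.hstack([X, Sigma @ E_X])
G0 = np.hstack([X, np.zeros((n, n))]) @ np.linalg.pinv(N)   # [X,0][X, Sigma E_X]^+
Xg = np.hstack([np.eye(p), np.zeros((p, n))]) @ np.linalg.pinv(N)

assert np.allclose(X @ Xg @ X, X)            # (e): [I_p,0][X, Sigma E_X]^+ is a g-inverse of X
assert np.allclose(G0 @ G0, G0)              # (e): [X,0][X, Sigma E_X]^+ is idempotent
# (f): R(G0) = R(X) (here both ranks equal r(X), and R(G0) is contained in R(X))
assert np.linalg.matrix_rank(np.hstack([G0, X])) == np.linalg.matrix_rank(X)
assert np.allclose((G0 @ Sigma).T, G0 @ Sigma)        # (g): P Sigma is symmetric
assert np.allclose(G0 @ Sigma @ G0.T, G0 @ Sigma)     # (h)
assert np.allclose(G0 @ G0 @ Sigma, G0 @ Sigma)       # (i)
```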

Concerning the symmetry of the projector $P_{X;\Sigma}$ in (1.6), we have the following result.

Theorem 2.3. Let $P_{X;\Sigma}$ be as given in (1.6). Then, the following rank equalities hold:
$$\min_{P_{X;\Sigma}} r(P_{X;\Sigma}' - P_{X;\Sigma}) = 2r[X,\ \Sigma X] - 2r(X), \qquad (2.10)$$
$$\min_{P_{X;\Sigma}} r(P_X - P_{X;\Sigma}) = r[X,\ \Sigma X] - r(X), \qquad (2.11)$$
$$r[[X,0][X,\ \Sigma E_X]^{+} - ([X,0][X,\ \Sigma E_X]^{+})'] = 2r[X,\ \Sigma X] - 2r(X), \qquad (2.12)$$
$$r(P_X - [X,0][X,\ \Sigma E_X]^{+}) = r[X,\ \Sigma X] - r(X). \qquad (2.13)$$

In consequence, the following hold.

(a) The following statements are equivalent:
(i) There exists a symmetric $P_{X;\Sigma}$, namely, the equation $G[X,\ \Sigma E_X] = [X,\ 0]$ has a symmetric solution.
(ii) There exists a $P_{X;\Sigma}$ satisfying $P_{X;\Sigma} = P_X$.
(iii) $[X,0][X,\ \Sigma E_X]^{+} = ([X,0][X,\ \Sigma E_X]^{+})'$.
(iv) $[X,0][X,\ \Sigma E_X]^{+} = P_X$.
(v) $\mathscr{R}(\Sigma X) \subseteq \mathscr{R}(X)$.

(b) Under $\mathscr{R}(\Sigma X) \subseteq \mathscr{R}(X)$, the general expression of the symmetric $P_{X;\Sigma}$ can be written as
$$P_{X;\Sigma} = P_X + E_{[X,\Sigma]} U E_{[X,\Sigma]}, \qquad (2.14)$$
where $U = U' \in \mathbb{R}^{n\times n}$ is arbitrary.

Proof. Eq. (2.10) was given by Liu and Tian (2008). Applying (1.15) to $P_X - P_{X;\Sigma}$ and simplifying by (1.12) and (2.6), Lemma 2.1(a) and EBMOs, we obtain
$$\min_{P_{X;\Sigma}} r(P_X - P_{X;\Sigma}) = \min_{U} r(P_X - [X,0][X,\ \Sigma E_X]^{+} - UE_{[X,\Sigma]}) = r[(P_X - [X,0][X,\ \Sigma E_X]^{+})[X,\ \Sigma]] = r(P_X\Sigma - [X,0][X,\ \Sigma E_X]^{+}\Sigma)$$
$$= r\begin{bmatrix} [X,\ \Sigma E_X]'[X,\ \Sigma E_X] & [X,\ \Sigma E_X]'\Sigma \\ [X,\ 0] & P_X\Sigma \end{bmatrix} - r[X,\ \Sigma E_X] = r(X'\Sigma E_X) + r(\Sigma E_X) + r(X) - r[X,\ \Sigma] = r[X,\ \Sigma X] - r(X) \quad (\text{by } (1.26),\ (1.27)\ \text{and}\ (1.7)),$$

establishing (2.11). Applying (1.22) and simplifying by $E_{[X,\Sigma E_X]}[X,\ 0] = 0$ gives
$$r\{[X,0][X,\ \Sigma E_X]^{+} - ([X,0][X,\ \Sigma E_X]^{+})'\} = r\begin{bmatrix} [X,\ \Sigma E_X]'[X,\ 0] - [X,\ 0]'[X,\ \Sigma E_X] & [X,\ 0]'E_{[X,\Sigma E_X]} \\ E_{[X,\Sigma E_X]}[X,\ 0] & 0 \end{bmatrix} = 2r(E_X\Sigma X) = 2r[X,\ \Sigma X] - 2r(X) \quad (\text{by } (1.7)),$$

establishing (2.12). Applying (1.12) and simplifying by Lemma 2.1(a) and EBMOs gives
$$r(P_X - [X,0][X,\ \Sigma E_X]^{+}) = r\begin{bmatrix} [X,\ \Sigma E_X]'[X,\ \Sigma E_X] & [X,\ \Sigma E_X]' \\ [X,\ 0] & P_X \end{bmatrix} - r[X,\ \Sigma E_X] = r(X'\Sigma E_X) + r(\Sigma E_X) + r(X) - r[X,\ \Sigma] = r[X,\ \Sigma X] - r(X) \quad (\text{by } (1.26),\ (1.27)\ \text{and}\ (1.7)),$$

establishing (2.13). Setting the right-hand sides of (2.10)–(2.13) equal to zero leads to the equivalence of the five statements in (a). Result (b) follows from Lemma 1.4(b). $\square$

In the theory of regression analysis, it is quite common to assume that the model matrix $X$ has full column rank and that the covariance matrix $\Sigma$ is positive definite in (1.1). In such cases, it is well known that the BLUEs of the parameter vector $\beta$ and the mean vector $X\beta$ in (1.1) can be written uniquely in the following standard forms:
$$\mathrm{BLUE}(\beta) = (X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}y, \qquad \mathrm{BLUE}(X\beta) = X(X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}y. \qquad (2.15)$$
These estimators are usually called generalized least-squares estimators (GLSEs) or weighted least-squares estimators (WLSEs) in the literature. In this case, the projector $P_{X;\Sigma}$ can be written uniquely as $P_{X;\Sigma} = X(X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}$. For more properties of this projector, see Tian and Takane (2008b, 2009b).
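In this full-rank, positive-definite case, the general projector of (1.6) collapses to the familiar GLSE projector; the following sketch (simulated data and sizes assumed for illustration) reproduces (2.15):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 30, 4
X = rng.standard_normal((n, p))                     # full column rank
L = rng.standard_normal((n, n))
Sigma = L @ L.T + np.eye(n)                         # positive definite
beta = rng.standard_normal(p)
y = X @ beta + np.linalg.cholesky(Sigma) @ rng.standard_normal(n)

Si = np.linalg.inv(Sigma)
blue_beta = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)   # BLUE(beta) in (2.15)

# The same estimator via the projector of Lemma 1.1 / (1.6); it is unique here
# because r[X, Sigma] = n, so E_[X,Sigma] = 0.
E_X = np.eye(n) - X @ np.linalg.pinv(X)
N = np.hstack([X, Sigma @ E_X])
P = np.hstack([X, np.zeros((n, n))]) @ np.linalg.pinv(N)
assert np.allclose(P, X @ np.linalg.solve(X.T @ Si @ X, X.T @ Si))
assert np.allclose(P @ y, X @ blue_beta)
```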

3. Relations between the projection matrices under the original model and its transformed models

A linear transformation of the general linear model in (1.1) is
$$\mathscr{M}_t = \{\, Ay,\ AX\beta,\ \sigma^{2}A\Sigma A' \,\}, \qquad (3.1)$$
where $A \in \mathbb{R}^{m\times n}$ is a given matrix of arbitrary rank. Some special cases of (3.1) are given below:

(I) If $A = [I_{n_1},\ 0]$, then (3.1) is the following sub-sample model of (1.1):
$$\mathscr{M}_1 = \{\, y_1,\ X_1\beta,\ \sigma^{2}\Sigma_{11} \,\}, \qquad (3.2)$$
where $y_1 \in \mathbb{R}^{n_1\times 1}$, $X_1 \in \mathbb{R}^{n_1\times p}$ and $\Sigma_{11} \in \mathbb{R}^{n_1\times n_1}$ are the leading blocks of the conformable partitions $y = [y_1',\ y_2']'$, $X = [X_1',\ X_2']'$ and $\Sigma = (\Sigma_{ij})_{i,j=1,2}$.

(II) If $A = X'V$, where $V$ is an nnd matrix, then (3.1) becomes
$$\mathscr{M}_t = \{\, X'Vy,\ X'VX\beta,\ \sigma^{2}X'V\Sigma VX \,\}, \qquad (3.3)$$
which is used to derive weighted least-squares estimators under (1.1).


(III) Partition $X\beta$ in (1.1) as $X\beta = X_1\beta_1 + X_2\beta_2$, where $X_1 \in \mathbb{R}^{n\times p_1}$ and $X_2 \in \mathbb{R}^{n\times p_2}$ with $p_1 + p_2 = p$, and let $A = E_{X_1}$. Then (3.1) becomes
$$\mathscr{M}_t = \{\, E_{X_1}y,\ E_{X_1}X_2\beta_2,\ \sigma^{2}E_{X_1}\Sigma E_{X_1} \,\}, \qquad (3.4)$$
which is called a correctly reduced version of (1.1); see Nurhonen and Puntanen (1992) and Puntanen (1996, 1997).

Under the assumption in (2.1), we also see that
$$Ay \in \mathscr{R}[AX,\ A\Sigma A'] \ \text{holds with probability } 1, \qquad (3.5)$$

so that BLUEs of parametric functions under (3.1) can be established accordingly. However, estimation under a transformed model may bias inference, so it is necessary to compare the original model with its transformed models. Relations between the BLUEs of $X\beta$ under (1.1) and (3.1) were studied by Baksalary and Kala (1981), Farebrother (1979), Lucke (1991), Tian and Puntanen (2009), and Zhang (2007). In this section, we derive necessary and sufficient conditions for equalities of the two projection matrices associated with the BLUEs of parametric functions under (1.1) and its transformed model (3.1).

Under the condition in (2.1), assume that the vector of parametric functions $K\beta$ with $K \in \mathbb{R}^{k\times p}$ is estimable under (3.1), i.e., $\mathscr{R}(K') \subseteq \mathscr{R}(X'A')$. Then by Lemma 1.1,
$$GAy = \mathrm{BLUE}_{\mathscr{M}_t}(K\beta) \iff G[AX,\ A\Sigma A'E_{AX}] = [K,\ 0], \qquad (3.6)$$
where
$$G = P_{K;AX;A\Sigma A'} = [K,0][AX,\ A\Sigma A'E_{AX}]^{+} + U_0 E_{[AX,\ A\Sigma A'E_{AX}]}, \qquad (3.7)$$
and $U_0 \in \mathbb{R}^{k\times m}$ is arbitrary. In this case,
$$E[\mathrm{BLUE}_{\mathscr{M}_t}(K\beta)] = K\beta \quad\text{and}\quad \mathrm{Cov}[\mathrm{BLUE}_{\mathscr{M}_t}(K\beta)] = \sigma^{2}P_{K;AX;A\Sigma A'}A\Sigma A'P_{K;AX;A\Sigma A'}'.$$
In particular, the BLUE of $AX\beta$ in $\mathscr{M}_t$ can be expressed as
$$\mathrm{BLUE}_{\mathscr{M}_t}(AX\beta) = P_{AX;A\Sigma A'}Ay, \qquad (3.8)$$
where
$$P_{AX;A\Sigma A'} = [AX,\ 0][AX,\ A\Sigma A'E_{AX}]^{+} + U_0 E_{[AX,\ A\Sigma A'E_{AX}]}, \qquad (3.9)$$
and $U_0 \in \mathbb{R}^{m\times m}$ is arbitrary. In this case,
$$E[\mathrm{BLUE}_{\mathscr{M}_t}(AX\beta)] = AX\beta \quad\text{and}\quad \mathrm{Cov}[\mathrm{BLUE}_{\mathscr{M}_t}(AX\beta)] = \sigma^{2}P_{AX;A\Sigma A'}A\Sigma A'P_{AX;A\Sigma A'}'.$$

Algebraic properties of the matrix $P_{K;AX;A\Sigma A'}$ in (3.7) can be established accordingly from Lemma 2.1 and Theorems 2.2 and 2.3. Note that the coefficient matrices $P_{K;X;\Sigma}$ and $P_{K;AX;A\Sigma A'}A$ in (1.3) and (3.6) are not necessarily equal, so the following four cases may occur:

(I) $\{P_{K;X;\Sigma}\} \cap \{P_{K;AX;A\Sigma A'}A\} \neq \emptyset$,
(II) $\{P_{K;X;\Sigma}\} \subseteq \{P_{K;AX;A\Sigma A'}A\}$,
(III) $\{P_{K;X;\Sigma}\} \supseteq \{P_{K;AX;A\Sigma A'}A\}$,
(IV) $\{P_{K;X;\Sigma}\} = \{P_{K;AX;A\Sigma A'}A\}$.

Identifying conditions that characterize these four cases are presented below.

Theorem 3.1. Let $P_{K;X;\Sigma}$ and $P_{K;AX;A\Sigma A'}$ be as given in (1.3) and (3.7), and let $\mathrm{BLUE}_{\mathscr{M}}(K\beta)$ and $\mathrm{BLUE}_{\mathscr{M}_t}(K\beta)$ be as given in (1.2) and (3.6). Also denote
$$M = \begin{bmatrix} A\Sigma & AX \\ X' & 0 \end{bmatrix} \quad\text{and}\quad N = [0,\ K].$$
Then, the following hold.

(a) The following statements are equivalent:
(i) There exist $P_{K;X;\Sigma}$ and $P_{K;AX;A\Sigma A'}$ such that $P_{K;X;\Sigma} = P_{K;AX;A\Sigma A'}A$.
(ii) $\{P_{K;AX;A\Sigma A'}A\} \subseteq \{P_{K;X;\Sigma}\}$.
(iii) $\mathscr{R}(N') \subseteq \mathscr{R}(M')$.

(b) The following statements are equivalent:
(i) $\{P_{K;X;\Sigma}\} \subseteq \{P_{K;AX;A\Sigma A'}A\}$.
(ii) $\{P_{K;X;\Sigma}\} = \{P_{K;AX;A\Sigma A'}A\}$.
(iii) $r(A) = n$.

In both cases, $\mathrm{BLUE}_{\mathscr{M}_t}(K\beta) = \mathrm{BLUE}_{\mathscr{M}}(K\beta)$ holds with probability 1.


Proof. From (1.3) and (3.7), the general expression of the difference $P_{K;X;\Sigma} - P_{K;AX;A\Sigma A'}A$ can be written as
$$P_{K;X;\Sigma} - P_{K;AX;A\Sigma A'}A = G + UE_{[X,\Sigma E_X]} - U_0 E_{[AX,\,A\Sigma A'E_{AX}]}A, \qquad (3.10)$$
where $G = [K,0][X,\ \Sigma E_X]^{+} - [K,0][AX,\ A\Sigma A'E_{AX}]^{+}A$, and $U \in \mathbb{R}^{k\times n}$ and $U_0 \in \mathbb{R}^{k\times m}$ are arbitrary. Applying (1.15) to (3.10) gives
$$\min_{P_{K;X;\Sigma},\,P_{K;AX;A\Sigma A'}} r(P_{K;X;\Sigma} - P_{K;AX;A\Sigma A'}A) = \min_{U,\,U_0} r(G + UE_{[X,\Sigma E_X]} - U_0 E_{[AX,\,A\Sigma A'E_{AX}]}A) = r\begin{bmatrix} G \\ E_{[X,\Sigma E_X]} \\ E_{[AX,\,A\Sigma A'E_{AX}]}A \end{bmatrix} - r\begin{bmatrix} E_{[X,\Sigma E_X]} \\ E_{[AX,\,A\Sigma A'E_{AX}]}A \end{bmatrix}. \qquad (3.11)$$
Applying (1.7) and simplifying by EBMOs, we obtain
$$r\begin{bmatrix} G \\ E_{[X,\Sigma E_X]} \\ E_{[AX,\,A\Sigma A'E_{AX}]}A \end{bmatrix} = r\begin{bmatrix} M \\ N \end{bmatrix} + n - r(X) - r[X,\ \Sigma] - r[AX,\ A\Sigma], \qquad (3.12)$$
$$r\begin{bmatrix} E_{[X,\Sigma E_X]} \\ E_{[AX,\,A\Sigma A'E_{AX}]}A \end{bmatrix} = r(M) + n - r(X) - r[X,\ \Sigma] - r[AX,\ A\Sigma]. \qquad (3.13)$$
The details are omitted. Substituting (3.12) and (3.13) into (3.11) gives
$$\min_{P_{K;X;\Sigma},\,P_{K;AX;A\Sigma A'}} r(P_{K;X;\Sigma} - P_{K;AX;A\Sigma A'}A) = r\begin{bmatrix} M \\ N \end{bmatrix} - r(M).$$
Setting the right-hand side equal to zero results in the equivalence of (i) and (iii) in (a).

Applying (1.15) to (3.10) gives
$$\min_{P_{K;X;\Sigma}} r(P_{K;X;\Sigma} - P_{K;AX;A\Sigma A'}A) = \min_{U} r(G + UE_{[X,\Sigma E_X]} - U_0 E_{[AX,\,A\Sigma A'E_{AX}]}A) = r\begin{bmatrix} G - U_0 E_{[AX,\,A\Sigma A'E_{AX}]}A \\ E_{[X,\Sigma E_X]} \end{bmatrix} + r[X,\ \Sigma] - n. \qquad (3.14)$$
Further, by (1.16),
$$\max_{U_0} r\begin{bmatrix} G - U_0 E_{[AX,\,A\Sigma A'E_{AX}]}A \\ E_{[X,\Sigma E_X]} \end{bmatrix} = \max_{U_0} r\!\left( \begin{bmatrix} G \\ E_{[X,\Sigma E_X]} \end{bmatrix} - \begin{bmatrix} I_k \\ 0 \end{bmatrix} U_0 E_{[AX,\,A\Sigma A'E_{AX}]}A \right) = \min\left\{ r\begin{bmatrix} M \\ N \end{bmatrix} + n - r(X) - r[X,\ \Sigma] - r[AX,\ A\Sigma],\ k + n - r[X,\ \Sigma] \right\}. \qquad (3.15)$$
Combining (3.14) and (3.15) yields
$$\max_{P_{K;AX;A\Sigma A'}}\ \min_{P_{K;X;\Sigma}} r(P_{K;X;\Sigma} - P_{K;AX;A\Sigma A'}A) = r\begin{bmatrix} M \\ N \end{bmatrix} - r(M). \qquad (3.16)$$
Setting the right-hand side equal to zero results in the equivalence of (ii) and (iii) in (a).

Applying (1.15) to (3.10) again gives
$$\min_{P_{K;AX;A\Sigma A'}} r(P_{K;X;\Sigma} - P_{K;AX;A\Sigma A'}A) = \min_{U_0} r(G + UE_{[X,\Sigma E_X]} - U_0 E_{[AX,\,A\Sigma A'E_{AX}]}A) = r\begin{bmatrix} G + UE_{[X,\Sigma E_X]} \\ E_{[AX,\,A\Sigma A'E_{AX}]}A \end{bmatrix} + r[AX,\ A\Sigma] - r(A). \qquad (3.17)$$
Further, by (1.16),
$$\max_{U} r\begin{bmatrix} G + UE_{[X,\Sigma E_X]} \\ E_{[AX,\,A\Sigma A'E_{AX}]}A \end{bmatrix} = \min\left\{ r\begin{bmatrix} M \\ N \end{bmatrix} + n - r(X) - r[X,\ \Sigma] - r[AX,\ A\Sigma],\ k + r(A) - r[AX,\ A\Sigma] \right\}. \qquad (3.18)$$
Combining (3.17) and (3.18) yields
$$\max_{P_{K;X;\Sigma}}\ \min_{P_{K;AX;A\Sigma A'}} r(P_{K;X;\Sigma} - P_{K;AX;A\Sigma A'}A) = r\begin{bmatrix} M \\ N \end{bmatrix} - r(M) + n - r(A).$$
By setting the right-hand side equal to zero, we see that (i) in (b) is equivalent to
$$r\begin{bmatrix} M \\ N \end{bmatrix} = r(M) \quad\text{and}\quad r(A) = n. \qquad (3.19)$$
Moreover, under $r(A) = n$, we find by (1.10) that
$$r\begin{bmatrix} M \\ N \end{bmatrix} = r\begin{bmatrix} \Sigma & X \\ X' & 0 \\ 0 & K \end{bmatrix} \quad\text{and}\quad r(M) = r\begin{bmatrix} \Sigma & X \\ X' & 0 \end{bmatrix} = r\begin{bmatrix} \Sigma \\ X' \end{bmatrix} + r(X),$$
so that (3.19) is equivalent to $\mathscr{R}(K') \subseteq \mathscr{R}(X')$ and $r(A) = n$, establishing the equivalence in (b). $\square$
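The case $r(A) = n$ in part (b) is easy to probe numerically: for an invertible transformation $A$, the projector of the transformed model, post-multiplied by $A$, still solves the original equation (1.2). A sketch under simulated data (all sizes assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(8)
n, p, k = 7, 3, 2
X = rng.standard_normal((n, p))
L = rng.standard_normal((n, 2))
Sigma = L @ L.T
K = rng.standard_normal((k, n)) @ X        # estimable K beta
A = rng.standard_normal((n, n))            # r(A) = n with probability 1

AX, AS = A @ X, A @ Sigma @ A.T            # transformed model matrices
E_AX = np.eye(n) - AX @ np.linalg.pinv(AX)
Gt = np.hstack([K, np.zeros((k, n))]) @ np.linalg.pinv(np.hstack([AX, AS @ E_AX]))

# Theorem 3.1(b): since r(A) = n, Gt A also solves G[X, Sigma E_X] = [K, 0] of (1.2)
E_X = np.eye(n) - X @ np.linalg.pinv(X)
assert np.allclose(Gt @ A @ np.hstack([X, Sigma @ E_X]),
                   np.hstack([K, np.zeros((k, n))]))
```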

Corollary 3.2. Let $P_{K;X;\Sigma}$ and $P_{K;AX;A\Sigma A'}$ be as given in (1.3) and (3.7), and let $\mathrm{BLUE}_{\mathscr{M}}(K\beta)$ and $\mathrm{BLUE}_{\mathscr{M}_t}(K\beta)$ be as given in (1.2) and (3.6). Then, the following statements are equivalent:

(a) $\{P_{K;AX;A\Sigma A'}A\} \subseteq \{P_{K;X;\Sigma}\}$ for any $K$ with $\mathscr{R}(K') \subseteq \mathscr{R}(X'A')$.
(b) $\mathscr{R}\begin{bmatrix} A\Sigma \\ X' \end{bmatrix} \cap \mathscr{R}\begin{bmatrix} AX \\ 0 \end{bmatrix} = \{0\}$.

In this case, $\mathrm{BLUE}_{\mathscr{M}_t}(K\beta) = \mathrm{BLUE}_{\mathscr{M}}(K\beta)$ holds with probability 1 for any $K$ with $\mathscr{R}(K') \subseteq \mathscr{R}(X'A')$.

Assume now that the parametric functions $K\beta$ are estimable under the sub-sample model (3.2), i.e., $\mathscr{R}(K') \subseteq \mathscr{R}(X_1')$. Then the BLUE of $K\beta$ under $\mathscr{M}_1$ can be expressed as
$$\mathrm{BLUE}_{\mathscr{M}_1}(K\beta) = P_{K;X_1;\Sigma_{11}}y_1 = P_{K;X_1;\Sigma_{11}}[I_{n_1},\ 0]y, \qquad (3.20)$$
where
$$P_{K;X_1;\Sigma_{11}} = [K,0][X_1,\ \Sigma_{11}E_{X_1}]^{+} + U_0 E_{[X_1,\ \Sigma_{11}E_{X_1}]}, \qquad (3.21)$$
and $U_0 \in \mathbb{R}^{k\times n_1}$ is arbitrary.

Theorem 3.3. Let $P_{K;X;\Sigma}$ and $P_{K;X_1;\Sigma_{11}}$ be as given in (1.3) and (3.21), and let $\mathrm{BLUE}_{\mathscr{M}}(K\beta)$ and $\mathrm{BLUE}_{\mathscr{M}_1}(K\beta)$ be as given in (1.2) and (3.20). Then, the following hold.

(a) The following statements are equivalent:
(i) There exist $P_{K;X;\Sigma}$ and $P_{K;X_1;\Sigma_{11}}$ such that $P_{K;X;\Sigma} = P_{K;X_1;\Sigma_{11}}[I_{n_1},\ 0]$.
(ii) $\{P_{K;X_1;\Sigma_{11}}[I_{n_1},\ 0]\} \subseteq \{P_{K;X;\Sigma}\}$.
(iii) $\mathscr{R}\begin{bmatrix} 0 \\ 0 \\ K' \end{bmatrix} \subseteq \mathscr{R}\begin{bmatrix} \Sigma_{11} & X_1 \\ \Sigma_{21} & X_2 \\ X_1' & 0 \end{bmatrix}$.
In this case, $\mathrm{BLUE}_{\mathscr{M}_1}(K\beta) = \mathrm{BLUE}_{\mathscr{M}}(K\beta)$ holds with probability 1.

(b) $\{P_{K;X_1;\Sigma_{11}}[I_{n_1},\ 0]\} \subseteq \{P_{K;X;\Sigma}\}$ for any $K$ with $\mathscr{R}(K') \subseteq \mathscr{R}(X_1')$ if and only if $\mathscr{R}\begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ X_1' & X_2' \end{bmatrix} \cap \mathscr{R}\begin{bmatrix} X_1 \\ 0 \end{bmatrix} = \{0\}$. In this case, $\mathrm{BLUE}_{\mathscr{M}_1}(K\beta) = \mathrm{BLUE}_{\mathscr{M}}(K\beta)$ holds with probability 1 for any $K$ with $\mathscr{R}(K') \subseteq \mathscr{R}(X_1')$.

Assume that the vector of parametric functions $K\beta$ is estimable under (3.3) with $V = I_n$, i.e., $\mathscr{R}(K') \subseteq \mathscr{R}(X'X) = \mathscr{R}(X')$. Then the BLUE of $K\beta$ under (3.3) reduces to
$$\mathrm{BLUE}_{\mathscr{M}_t}(K\beta) = K(X'X)^{+}X'y, \qquad (3.22)$$
which is the ordinary least-squares estimator (OLSE) of $K\beta$ under (1.1), denoted by $\mathrm{OLSE}_{\mathscr{M}}(K\beta)$. In this case, applying Theorem 3.1 leads to the following result.


Theorem 3.4. Let $P_{K;X;\Sigma}$ be as given in (1.3) and let $\mathrm{BLUE}_{\mathscr{M}}(K\beta)$ and $\mathrm{OLSE}_{\mathscr{M}}(K\beta)$ be as given in (1.2) and (3.22). Then, the following statements are equivalent:

(a) $K(X'X)^{+}X' \in \{P_{K;X;\Sigma}\}$.
(b) $\mathscr{R}\begin{bmatrix} 0 \\ K' \end{bmatrix} \subseteq \mathscr{R}\begin{bmatrix} \Sigma X & X \\ X'X & 0 \end{bmatrix}$.
(c) $KX^{+}\Sigma E_X = 0$.
(d) $\mathscr{R}[(KX^{+}\Sigma)'] \subseteq \mathscr{R}(X)$.

In this case, $\mathrm{BLUE}_{\mathscr{M}}(K\beta) = \mathrm{OLSE}_{\mathscr{M}}(K\beta)$.

Proof. The equivalence of (a) and (b) follows from Theorem 3.1(a) with $A = X'$. Applying (1.8), (1.11) and (1.23) gives
$$r\begin{bmatrix} \Sigma X & X \\ X'X & 0 \\ 0 & K \end{bmatrix} = r(KX^{+}\Sigma E_X) + 2r(X) \quad\text{and}\quad r\begin{bmatrix} \Sigma X & X \\ X'X & 0 \end{bmatrix} = 2r(X).$$
Substituting these equalities into (b) leads to the equivalence of (b), (c) and (d). $\square$

Assume that the parametric functions $K\beta$ are estimable under (3.4), i.e., $\mathscr{R}(K') \subseteq \mathscr{R}(X_2'E_{X_1})$. Then the BLUE of $K\beta$ under $\mathscr{M}_t$ can be expressed as
$$\mathrm{BLUE}_{\mathscr{M}_t}(K\beta) = P_{K;E_{X_1}X_2;E_{X_1}\Sigma E_{X_1}}E_{X_1}y, \qquad (3.23)$$
where
$$P_{K;E_{X_1}X_2;E_{X_1}\Sigma E_{X_1}} = [K,0][E_{X_1}X_2,\ E_{X_1}\Sigma E_{X_1}E_{E_{X_1}X_2}]^{+} + U_0 E_{[E_{X_1}X_2,\ E_{X_1}\Sigma E_{X_1}]}, \qquad (3.24)$$
and $U_0 \in \mathbb{R}^{k\times n}$ is arbitrary.

Theorem 3.5. Let $P_{K;X;\Sigma}$ and $P_{K;E_{X_1}X_2;E_{X_1}\Sigma E_{X_1}}$ be as given in (1.3) and (3.24), and let $\mathrm{BLUE}_{\mathscr{M}}(K\beta)$ and $\mathrm{BLUE}_{\mathscr{M}_t}(K\beta)$ be as given in (1.2) and (3.23). Then, the following hold.

(a) $\{P_{K;E_{X_1}X_2;E_{X_1}\Sigma E_{X_1}}E_{X_1}\} \subseteq \{P_{K;X;\Sigma}\}$ always holds, so that $\mathrm{BLUE}_{\mathscr{M}_t}(K\beta) = \mathrm{BLUE}_{\mathscr{M}}(K\beta)$ holds with probability 1.
(b) In particular, $\mathrm{BLUE}_{\mathscr{M}_t}(E_{X_1}X_2\beta_2) = \mathrm{BLUE}_{\mathscr{M}}(E_{X_1}X_2\beta_2)$ always holds.

Acknowledgments

The author thanks the anonymous referees for helpful comments and constructive suggestions that improved the presentation of this article. This work was supported in part by the National Natural Science Foundation of China (Grant no. 11271384).

References

Alalouf, I.S., Styan, G.P.H., 1979. Characterizations of estimability in the general linear model. Annals of Statistics 7, 194–200.
Baksalary, J.K., Kala, R., 1981. Linear transformations preserving best linear unbiased estimators in a general Gauss–Markoff model. Annals of Statistics 9, 913–916.
Ben-Israel, A., Greville, T.N.E., 2003. Generalized Inverses: Theory and Applications, 2nd ed. Springer-Verlag, New York.
Campbell, S.L., Meyer, C.D., 1991. Generalized Inverses of Linear Transformations. Dover Publications, New York. Corrected reprint of the 1979 original.
De Moor, B., Golub, G.H., 1991. The restricted singular value decomposition: properties and applications. SIAM Journal on Matrix Analysis and Applications 12, 401–425.
Drygas, H., 1970. The Coordinate-Free Approach to Gauss–Markov Estimation. Springer, Heidelberg.
Farebrother, R.W., 1979. Estimation with aggregated data. Journal of Econometrics 10, 43–55.
Khatri, C.G., Mitra, S.K., 1976. Hermitian and nonnegative definite solutions of linear matrix equations. SIAM Journal on Applied Mathematics 31, 579–585.
Liu, Y., Tian, Y., 2008. More on extremal ranks of the matrix expressions A − BX ± X*B* with statistical applications. Numerical Linear Algebra with Applications 15, 307–325.
Lucke, B., 1991. On BLU-estimation with data of different periodicity. Economics Letters 35, 173–177.
Marsaglia, G., Styan, G.P.H., 1974. Equalities and inequalities for ranks of matrices. Linear and Multilinear Algebra 2, 269–292.
Nurhonen, M., Puntanen, S., 1992. A property of partitioned generalized regression. Communications in Statistics—Theory and Methods 21, 1579–1583.
Penrose, R., 1955. A generalized inverse for matrices. Proceedings of the Cambridge Philosophical Society 51, 406–413.
Puntanen, S., 1996. Some matrix results related to a partitioned singular linear model. Communications in Statistics—Theory and Methods 25, 269–279.
Puntanen, S., 1997. Some further results related to reduced singular linear models. Communications in Statistics—Theory and Methods 26, 375–385.
Puntanen, S., Styan, G.P.H., Tian, Y., 2005. Three rank formulas associated with the covariance matrices of the BLUE and the OLSE in the general linear model. Econometric Theory 21, 659–664.
Puntanen, S., Styan, G.P.H., Werner, H.J., 2000. Two matrix-based proofs that the linear estimator Gy is the best linear unbiased estimator. Journal of Statistical Planning and Inference 88, 173–179.
Qian, H., Tian, Y., 2006. Partially superfluous observations. Econometric Theory 22, 529–536.
Rao, C.R., 1971. Unified theory of linear estimation. Sankhyā, Series A 33, 371–394.
Rao, C.R., 1973. Representations of best linear unbiased estimators in the Gauss–Markoff model with a singular dispersion matrix. Journal of Multivariate Analysis 3, 276–292.
Rao, C.R., Mitra, S.K., 1971. Generalized Inverse of Matrices and its Applications. Wiley, New York.
Tian, Y., 2002. The maximal and minimal ranks of some expressions of generalized inverses of matrices. Southeast Asian Bulletin of Mathematics 25, 745–755.
Tian, Y., 2007. Some decompositions of OLSEs and BLUEs under a partitioned linear model. International Statistical Review 75, 224–248.
Tian, Y., 2009a. On an additive decomposition of the BLUE in a multiple partitioned linear model. Journal of Multivariate Analysis 100, 767–776.
Tian, Y., 2009b. On equalities for BLUEs under misspecified Gauss–Markov models. Acta Mathematica Sinica, English Series 25, 1907–1920.
Tian, Y., 2010a. Weighted least-squares estimators of parametric functions of the regression coefficients under a general linear model. Annals of the Institute of Statistical Mathematics 62, 929–941.
Tian, Y., 2010b. On equalities of estimations of parametric functions under a general linear model and its restricted models. Metrika 72, 313–330.
Tian, Y., 2012. Characterizing relationships between estimations under a general linear model with explicit and implicit restrictions by rank of matrix. Communications in Statistics—Theory and Methods 41, 2588–2601.
Tian, Y., Beisiegel, M., Dagenais, E., Haines, C., 2008. On the natural restrictions in the singular Gauss–Markov model. Statistical Papers 49, 553–564.
Tian, Y., Cheng, S., 2003. The maximal and minimal ranks of A − BXC with applications. New York Journal of Mathematics 9, 345–362.
Tian, Y., Liu, C., 2010. Some equalities for estimations of variance components in a general linear model and its restricted and transformed models. Journal of Multivariate Analysis 101, 1959–1969.
Tian, Y., Puntanen, S., 2009. On the equivalence of estimations under a general linear model and its transformed models. Linear Algebra and its Applications 430, 2622–2641.
Tian, Y., Styan, G.P.H., 2005. Cochran's statistical theorem for outer inverses of matrices and matrix quadratic forms. Linear and Multilinear Algebra 53, 387–392.
Tian, Y., Styan, G.P.H., 2006. Cochran's statistical theorem revisited. Journal of Statistical Planning and Inference 136, 2659–2667.
Tian, Y., Takane, Y., 2007. Some algebraic and statistical properties of estimators under a general growth curve model. Electronic Journal of Linear Algebra 16, 187–203.
Tian, Y., Takane, Y., 2008a. On sum decompositions of weighted least-squares estimators for the partitioned linear model. Communications in Statistics—Theory and Methods 37, 55–69.
Tian, Y., Takane, Y., 2008b. Some properties of projectors associated with the WLSE under a general linear model. Journal of Multivariate Analysis 99, 1070–1082.
Tian, Y., Takane, Y., 2009a. On consistency, natural restrictions and estimability under classical and extended growth curve models. Journal of Statistical Planning and Inference 139, 2445–2458.
Tian, Y., Takane, Y., 2009b. On V-orthogonal projectors associated with a semi-norm. Annals of the Institute of Statistical Mathematics 61, 517–530.
Tian, Y., Tian, Z., 2010. On additive and block decompositions of WLSEs under a multiple partitioned regression model. Statistics 44, 361–379.
Tian, Y., Wiens, D.P., 2006. On equality and proportionality of ordinary least-squares, weighted least-squares and best linear unbiased estimators in the general linear model. Statistics and Probability Letters 76, 1265–1272.
Tian, Y., Zhang, J., 2011. Some equalities for estimations of partial coefficients under a general linear regression model. Statistical Papers 52, 911–920.
Zhang, B., 2007. The BLUE and MINQUE in Gauss–Markoff model with linear transformation of the observable variables. Acta Mathematica Scientia 27, 203–210.
Statistical Papers 52, 911–920. Zhang, B., 2007. The BLUE and MINQUE in Gauss–Markoff model with linear transformation of the observable variables. Acta Mathematica Scientia 27, 203–210.