Admissible linear estimators in linear models with respect to inequality constraints

Linear Algebra and its Applications 354 (2002) 187–194

Chang-Yu Lu a,∗,†, Ning-Zhong Shi b

a Department of Statistics, East China Normal University, Shanghai 200062, People's Republic of China
b Department of Mathematics, Northeast Normal University, Changchun, Jilin 130024, People's Republic of China

Received 29 April 1999; accepted 25 January 2001
Submitted by H.J. Werner

† Supported by National Natural Sciences Foundation of P.R. China #10071011.
∗ Corresponding author. E-mail addresses: [email protected] (C.-Y. Lu), [email protected] (N.-Z. Shi).

Abstract

Admissibility of linear estimators is characterized in linear models $E(Y) = X\beta$, $D(Y) = V$, with an unknown multidimensional parameter $(\beta, V)$ varying in the Cartesian product $C \times \mathcal{V}$, where $C$ is a halfspace and $\mathcal{V}$ is a given set of nonnegative definite symmetric matrices. The relation between admissibility of inhomogeneous and homogeneous linear estimators is discussed, and some necessary and sufficient conditions for admissibility of an inhomogeneous linear estimator are given. Some of the results are extended to the case where $C$ is a given polyhedral convex cone. © 2002 Elsevier Science Inc. All rights reserved.

AMS classification: 62C05; 62F10

Keywords: Admissibility; Inequality constraint; Homogeneous/inhomogeneous linear estimation

1. Introduction

Consider the following linear model:
$$Y = X\beta + \varepsilon, \qquad E(\varepsilon) = 0, \qquad D(\varepsilon) = V,$$


where $Y$ is the $n \times 1$ response variable, $X$ is a known $n \times p$ matrix, $\varepsilon$ is the $n \times 1$ error variable, and the unknown multidimensional parameter $(\beta, V)$ varies in $T$; in the sequel, $T$ is a subset of the Cartesian product $R^p \times \mathcal{V}$, and $\mathcal{V}$ is a given set of nonnegative definite symmetric matrices of order $n \times n$. For brevity, we write $(Y, X\beta, V \mid (\beta, V) \in T)$.

We focus our attention on admissibility of linear estimators. For the Gauss–Markov model, this problem originated with Cohen [2] and was developed, among others, by Shinozaki [12], Rao [11], LaMotte [5], Zhu and Lu [16,17], Stepniak [13,14], Mathew et al. [10], Wu [15], Klonecki and Zontek [4], Zontek [19] and Baksalary and Markiewicz [1] in the case $T = R^p \times \{\sigma^2 V : \sigma^2 > 0\}$, i.e., with $(\beta, \sigma^2)$ unconstrained. Ellipsoidal constraints $\beta' N \beta \leq \sigma^2$, where $N$ is a known positive definite matrix, were considered by Hoffmann [3], Mathew [9], and Zhu and Zhang [18]. The above two cases were unified by Lu and Zhu [8], Lu and Li [6] and Lu [7] in the linear model with an ellipsoidal constraint $\beta' N \beta \leq \sigma^2$, where $N$ is a known nonnegative definite matrix.

In this paper, we discuss the admissibility of linear estimators in the model $(Y, X\beta, V \mid (\beta, V) \in T)$ when $\beta$ is subject to inequality constraints, that is, $\beta \in C = \{\beta : R\beta \geq 0\}$ for some known $k \times p$ matrix $R$. Such a model will be written $(Y, X\beta, V \mid R\beta \geq 0, V \in \mathcal{V})$. In Section 2, we deal with the problem for the special case $k = 1$. In Section 3, we extend the results to the case $k > 1$.

We use the following notation in this paper. Suppose $S$ is a known $s \times p$ matrix, and we want to estimate the vector parameter function $S\beta$. Let
$$\mathcal{L}_I = \{AY + a : A \text{ is an } s \times n \text{ matrix}, \ a \text{ is an } s \times 1 \text{ vector}\}$$
denote the class of all linear estimators, and
$$\mathcal{L}_H = \{AY : A \text{ is an } s \times n \text{ matrix}\}$$
the class of all homogeneous linear estimators. For a matrix $A$ ($n \times m$), $M(A)$, $\mathrm{rk}(A)$, $A^+$, $A'$, and $M^\perp(A)$ denote the range, rank, Moore–Penrose inverse, transpose of $A$, and the orthogonal complement of $M(A)$ in $R^n$, respectively. $A \geq B$ and $A > B$ mean that $A − B$ is a nonnegative definite matrix and a positive definite matrix, respectively. For a vector $v$, $v \geq 0$ means that the coordinates of $v$ are nonnegative.

We will be concerned with the quadratic loss function associated with a vector estimator $AY + a$ of the vector parameter function $S\beta$:
$$L(AY + a, S\beta) = (AY + a − S\beta)'(AY + a − S\beta).$$
The risk function is $R(AY + a, S\beta) = E(L(AY + a, S\beta))$. The estimator $AY + a$ is called as good as $BY + b$ on $T$ iff $R(AY + a, S\beta) \leq R(BY + b, S\beta)$ for all $(\beta, V) \in T$, and $AY + a$ is called better than $BY + b$ on $T$ iff $AY + a$ is as good as $BY + b$ on $T$ and $AY + a$ has smaller risk than $BY + b$ at some point of $T$. Let $\mathcal{L}$ be a class of estimators. Then $d(Y)$ is said to be admissible in $\mathcal{L}$ on $T$ iff $d(Y) \in \mathcal{L}$ and there exists no estimator in $\mathcal{L}$ which is better than $d(Y)$ on $T$; we denote this by $d(Y) \stackrel{\mathcal{L}}{\sim} S\beta(T)$.


2. Admissible linear estimators with an inequality constraint

In this section, we discuss admissibility of linear estimators in the model $(Y, X\beta, V \mid r'\beta \geq 0, V \in \mathcal{V})$, where $r$ is a known $p \times 1$ vector.

Lemma 2.1. Let $C$ be a cone in $R^p$. Then, for any vector $b$ and real number $d$, the condition
$$\beta' b + d \geq 0 \qquad (2.1)$$
holds for all $\beta \in C$ if and only if $b \in C^*$ and $d \geq 0$. Here $C^* = \{\alpha : \alpha'\beta \geq 0 \text{ for all } \beta \in C\}$ is the dual cone of $C$.

Proof. Note that if $\beta \in C$, then $\lambda\beta \in C$ for any $\lambda > 0$. Thus (2.1) holds for all $\beta \in C$ if and only if $d \geq 0$ (let $\lambda \to 0$) and $\beta' b \geq 0$ for all $\beta \in C$ (let $\lambda \to \infty$), i.e., $b \in C^*$ and $d \geq 0$. □

Lemma 2.2. For any two nonnegative definite matrices $A$ and $B$ and real numbers $d_1$, $d_2$,
$$\beta' A \beta + d_1 \leq \beta' B \beta + d_2 \quad \text{for all } \beta \text{ with } r'\beta \geq 0$$
implies that $B − A$ is a nonnegative definite symmetric matrix and $d_1 \leq d_2$.
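Lemma 2.1 can be illustrated numerically for the halfspace used throughout this section. For $C = \{\beta : r'\beta \geq 0\}$ the dual cone is $C^* = \{\lambda r : \lambda \geq 0\}$, so (2.1) holds on all of $C$ exactly when $b$ is a nonnegative multiple of $r$ and $d \geq 0$. The following sketch is our illustration; $r$, $b$, $d$ are hypothetical.

```python
import numpy as np

r = np.array([1.0, -2.0, 0.5])          # hypothetical constraint vector

def holds_on_C(b, d, trials=50_000, scale=1e3, seed=1):
    """Empirically test whether beta'b + d >= 0 on C = {beta : r'beta >= 0}.
    Large-scale sampling exploits the cone property of C; beta = 0 is added
    since it lies in C and pins down the sign of d."""
    rng = np.random.default_rng(seed)
    betas = rng.normal(size=(trials, r.size)) * scale
    betas = np.vstack([betas[betas @ r >= 0], np.zeros(r.size)])
    return bool(np.all(betas @ b + d >= 0))

print(holds_on_C(2.0 * r, 0.3))    # b = 2r lies in C*, d >= 0  -> True
print(holds_on_C(-r, 0.3))         # b = -r is not in C*        -> False
print(holds_on_C(2.0 * r, -0.3))   # d < 0 fails at beta = 0    -> False
```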

Theorem 2.1. $AY \stackrel{\mathcal{L}_H}{\sim} S\beta(T)$ under the linear model $(Y, X\beta, V \mid r'\beta \geq 0, V \in \mathcal{V})$ if and only if $AY \stackrel{\mathcal{L}_H}{\sim} S\beta$ under the linear model $(Y, X\beta, V \mid \beta \in R^p, V \in \mathcal{V})$.

This is the first result concerning admissibility of homogeneous linear estimators in a linear model with inequality constraints. Its proof is simple, and we omit it.

Theorem 2.2. Consider the linear model $(Y, X\beta, V \mid r'\beta \geq 0, V \in \mathcal{V})$. If $AY + a \stackrel{\mathcal{L}_I}{\sim} S\beta(T)$, then:
(a) $a \in M(AX − S)$;
(b) $r'(AX − S)^+ a \leq 0$, or $r \notin M((AX − S)')$;
(c) $AY \stackrel{\mathcal{L}_H}{\sim} S\beta(T)$.

Proof. (a) Suppose that $a \notin M(AX − S)$, and write $a = a_1 + a_2$, where $a_1 \in M(AX − S)$ and $a_2 \in M^\perp(AX − S)$; then $a_2 \neq 0$, hence $a'a > a_1' a_1$. Then for any $(\beta, V) \in T$ we have
$$R(AY + a, S\beta) − R(AY + a_1, S\beta) = a'a − a_1' a_1 > 0,$$
hence $AY + a_1$ is better than $AY + a$, which contradicts the assumption.

(b) Suppose, by contradiction, that $r \in M((AX − S)')$ and $r'(AX − S)^+ a > 0$. Write $r = (AX − S)'(AX − S)r_0$ for some $r_0$, and let $b = (AX − S)^+ a − \lambda r_0$, where $\lambda > 0$. Then for all $(\beta, V) \in T$ we have


$$R(AY + (AX − S)b, S\beta) − R(AY + a, S\beta) = −2\lambda r'\beta − 2\lambda r'(AX − S)^+ a + \lambda^2 r_0'(AX − S)'(AX − S)r_0. \qquad (2.2)$$

Therefore, by Lemma 2.1, for $\lambda$ sufficiently small we have, for all $(\beta, V) \in T$,
$$R(AY + (AX − S)b, S\beta) − R(AY + a, S\beta) \leq 0,$$
with strict inequality at $\beta = 0$. So $AY + (AX − S)b$ is better than $AY + a$, which contradicts $AY + a \stackrel{\mathcal{L}_I}{\sim} S\beta(T)$.

(c) According to part (a), we may suppose $a = (AX − S)a_0$ for some $a_0$. Suppose $BY$ is as good as $AY$, that is, for all $(\beta, V) \in T$ we have
$$EL(BY, S\beta) \leq EL(AY, S\beta). \qquad (2.3)$$

By Lemma 2.2, we have
$$\mathrm{tr}\, BVB' \leq \mathrm{tr}\, AVA', \qquad (2.4)$$
$$(BX − S)'(BX − S) \leq (AX − S)'(AX − S). \qquad (2.5)$$

Therefore, from (2.4) and (2.5), for all $(\beta, V) \in T$,
$$\mathrm{tr}\, BVB' + (\beta + a_0)'(BX − S)'(BX − S)(\beta + a_0) \leq \mathrm{tr}\, AVA' + (\beta + a_0)'(AX − S)'(AX − S)(\beta + a_0). \qquad (2.6)$$

This means that, for all $(\beta, V) \in T$,
$$E(L(BY + (BX − S)a_0, S\beta)) \leq E(L(AY + a, S\beta)). \qquad (2.7)$$

Since $AY + a \stackrel{\mathcal{L}_I}{\sim} S\beta(T)$, we must have equality in (2.7), and hence equality in (2.6), for all $(\beta, V) \in T$. Note that $(\beta, V) \in T$ implies $(\lambda\beta, V) \in T$ for all positive numbers $\lambda$; hence for all $(\beta, V) \in T$ we get
$$E(L(BY, S\beta)) = \mathrm{tr}\, BVB' + \beta'(BX − S)'(BX − S)\beta = \mathrm{tr}\, AVA' + \beta'(AX − S)'(AX − S)\beta = E(L(AY, S\beta)).$$
This means that there exists no homogeneous linear estimator which is better than $AY$ on $T$; therefore $AY \stackrel{\mathcal{L}_H}{\sim} S\beta(T)$. □

Theorem 2.3. Consider the linear model $(Y, X\beta, V \mid r'\beta \geq 0, V \in \mathcal{V})$, where $\mathcal{V}$ is a cone of nonnegative definite $n \times n$ matrices. Then $AY + a \stackrel{\mathcal{L}_I}{\sim} S\beta(T)$ if and only if:
(a) $a \in M(AX − S)$;
(b) $r'(AX − S)^+ a \leq 0$, or $r \notin M((AX − S)')$;
(c) $AY \stackrel{\mathcal{L}_H}{\sim} S\beta(T)$.


Proof. (⇒): The necessity of the conditions holds by Theorem 2.2.

(⇐): By the proof of condition (a) in Theorem 2.2, it suffices to show that there are no $B \in R^{s \times n}$ and $b \in R^p$ such that $BY + (BX − S)b$ is better than $AY + (AX − S)a_0 = AY + a$, say. Suppose that $BY + (BX − S)b$ is as good as $AY + a$; thus, for all $(\beta, V) \in T$,
$$\mathrm{tr}\, BVB' + \|(BX − S)(\beta + b)\|^2 \leq \mathrm{tr}\, AVA' + \|(AX − S)(\beta + a_0)\|^2. \qquad (2.8)$$

Note that $kV \in \mathcal{V}$ for all $V \in \mathcal{V}$ and all $k > 0$; thus, by inspecting the limits of both sides of (2.8) as $k$ tends to infinity and as $k$ tends to zero, respectively, we get $\mathrm{tr}\, BVB' \leq \mathrm{tr}\, AVA'$ and
$$\|(BX − S)(\beta + b)\|^2 \leq \|(AX − S)(\beta + a_0)\|^2. \qquad (2.9)$$

Similarly, replacing $\beta$ by $\lambda\beta$ and letting $\lambda$ tend to infinity, we conclude that
$$\|(BX − S)\beta\|^2 \leq \|(AX − S)\beta\|^2, \qquad (2.10)$$
and consequently the homogeneous linear estimator $BY$ is as good as $AY$. Admissibility of $AY$ now entails
$$\mathrm{tr}\, BVB' + \|(BX − S)\beta\|^2 = \mathrm{tr}\, AVA' + \|(AX − S)\beta\|^2,$$
and therefore
$$\mathrm{tr}\, BVB' = \mathrm{tr}\, AVA' \qquad (2.11)$$
and
$$(BX − S)'(BX − S) = (AX − S)'(AX − S). \qquad (2.12)$$

We obtain from (2.8) and (2.10)–(2.12) that, for all $(\beta, V) \in T$,
$$2\beta'(BX − S)'(BX − S)b − 2\beta'(AX − S)'a + b'(BX − S)'(BX − S)b − a'a = 2\beta'(AX − S)'(AX − S)(b − (AX − S)^+ a) + b'(AX − S)'(AX − S)b − a'a \leq 0. \qquad (2.13)$$

By Lemma 2.1 (applied to the negative of the left-hand side of (2.13)), we get
$$−(AX − S)'(AX − S)(b − (AX − S)^+ a) \in C^* \qquad (2.14)$$
and
$$b'(AX − S)'(AX − S)b − a'a \leq 0. \qquad (2.15)$$

Note that $C^* = \{\lambda r : \lambda \geq 0\}$; thus (2.14) implies that there exists $\lambda \geq 0$ such that
$$(AX − S)'(AX − S)(b − (AX − S)^+ a) = −\lambda r. \qquad (2.16)$$


If $\lambda = 0$, then (2.16) becomes
$$(AX − S)'(AX − S)(b − (AX − S)^+ a) = 0, \qquad (2.17)$$
and hence
$$(AX − S)b = a. \qquad (2.18)$$

If $\lambda > 0$, then from (2.16) we know $r \in M((AX − S)')$, and by the sufficient condition (b),
$$r'(AX − S)^+ a \leq 0. \qquad (2.19)$$

From (2.16),
$$0 \leq (b − (AX − S)^+ a)'(AX − S)'(AX − S)(b − (AX − S)^+ a) = −\lambda r'(b − (AX − S)^+ a), \qquad (2.20)$$
hence
$$r'b \leq r'(AX − S)^+ a. \qquad (2.21)$$

On the other hand, from (2.16) and (2.15),
$$−\lambda(r'(AX − S)^+ a + r'b) = a'(AX − S)b − a'a + b'(AX − S)'(AX − S)b − b'(AX − S)'a = b'(AX − S)'(AX − S)b − a'a \leq 0. \qquad (2.22)$$

From this and (2.21), we have
$$r'(AX − S)^+ a \geq 0 \quad \text{for } \lambda > 0.$$
This together with (2.19), (2.21), and (2.22) implies
$$r'a_0 = 0, \quad r'b = 0 \quad \text{for } \lambda > 0,$$
which together with (2.16) implies that (2.18) also holds in the case $\lambda > 0$.

From (2.11), (2.12) and (2.18) we get equality in (2.8) for all $(\beta, V) \in T$. This means that there exists no linear estimator which is better than $AY + a$; hence $AY + a \stackrel{\mathcal{L}_I}{\sim} S\beta(T)$. Thus we have proved the sufficiency. □

We now turn our attention to characterizations of $A$ and $a$ for admissible linear estimators $AY + a$ in some special linear models.

Theorem 2.4. Consider the linear model $(Y, \beta, \sigma^2 I \mid r'\beta \geq 0)$. Then $AY + a \stackrel{\mathcal{L}_I}{\sim} \beta(T)$ if and only if:
(a) $a \in M(A − I)$;
(b) $r'(A − I)^+ a \leq 0$, or $r \notin M(A − I)$;
(c) $0 \leq A \leq I$.

Proof. The assertion follows directly from Theorems 2.1 and 2.3 above and Theorem 3.1 in [11]. □
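The three conditions of Theorem 2.4 are directly checkable numerically. A small sketch (our illustration; $A$, $a$, $r$ are hypothetical) that tests (a)–(c):

```python
import numpy as np

def admissible_thm24(A, a, r, tol=1e-10):
    """Check conditions (a)-(c) of Theorem 2.4 for AY + a estimating beta
    in the model (Y, beta, sigma^2 I | r'beta >= 0)."""
    M = A - np.eye(A.shape[0])          # here X = I, so AX - I = A - I
    Mpinv = np.linalg.pinv(M)
    # (a) a in M(A - I): projecting a onto the range of A - I must recover a.
    cond_a = np.allclose(M @ Mpinv @ a, a, atol=tol)
    # (b) r'(A - I)^+ a <= 0, or r not in M(A - I).
    r_in_range = np.allclose(M.T @ np.linalg.pinv(M.T) @ r, r, atol=tol)
    cond_b = (not r_in_range) or (r @ Mpinv @ a <= tol)
    # (c) 0 <= A <= I: A symmetric with all eigenvalues in [0, 1].
    if not np.allclose(A, A.T, atol=tol):
        return False
    eig = np.linalg.eigvalsh(A)
    return cond_a and cond_b and bool(np.all((eig >= -tol) & (eig <= 1 + tol)))

r = np.array([1.0, 0.0])
A = np.diag([0.5, 1.0])
print(admissible_thm24(A, np.array([0.5, 0.0]), r))   # True: (a)-(c) all hold
print(admissible_thm24(A, np.array([-0.5, 0.0]), r))  # False: condition (b) fails
```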


From Theorems 2.1–2.3 in [16], we get the following.

Theorem 2.5. Consider the linear model $(Y, \beta, \sigma^2 V \mid r'\beta \geq 0)$, where $V$ may be singular. Then $AY + a \stackrel{\mathcal{L}_I}{\sim} \beta(T)$ if and only if:
(a) $a \in M(A − I)$;
(b) $r'(A − I)^+ a \leq 0$, or $r \notin M(A − I)$;
(c) $AVA' \leq AV$;
(d) $\mathrm{rk}((I − A)V) = \mathrm{rk}(I − A)$.
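Conditions (c) and (d) of Theorem 2.5 are likewise easy to test; the sketch below (our illustration, with a hypothetical $A$ and a singular $V$) checks the Löwner inequality via eigenvalues and compares the two ranks. Note that the Löwner comparison in (c) presupposes that $AV$ is symmetric, which the code verifies first.

```python
import numpy as np

def conds_thm25(A, V, tol=1e-10):
    """Check conditions (c) and (d) of Theorem 2.5 (V may be singular)."""
    AV = A @ V
    # (c) AVA' <= AV in the Loewner order: AV symmetric, AV - AVA' psd.
    diff = AV - A @ V @ A.T
    cond_c = np.allclose(AV, AV.T, atol=tol) and \
        bool(np.all(np.linalg.eigvalsh((diff + diff.T) / 2) >= -tol))
    # (d) rk((I - A)V) = rk(I - A).
    I = np.eye(A.shape[0])
    cond_d = np.linalg.matrix_rank((I - A) @ V) == np.linalg.matrix_rank(I - A)
    return cond_c, cond_d

V = np.diag([1.0, 0.0])                      # singular covariance
print(conds_thm25(np.diag([0.5, 1.0]), V))   # (True, True)
print(conds_thm25(np.diag([0.5, 0.5]), V))   # (True, False): (d) fails
```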

For the general Gauss–Markov model, Theorem 3.1 in [8] implies the following.

Theorem 2.6. Consider the linear model $(Y, X\beta, \sigma^2 V \mid r'\beta \geq 0)$, where $V$ may be singular. Then $AY + a \stackrel{\mathcal{L}_I}{\sim} S\beta(T)$ if and only if:
(a) $a \in M(AX − S)$;
(b) $r'(AX − S)^+ a \leq 0$, or $r \notin M((AX − S)')$;
(c) $M(VA') \subset M(X)$;
(d) $AVD'$ is symmetric, and $AVA' \leq AVD'$;
(e) $\mathrm{rk}(AX − S) = \mathrm{rk}((A − D)B)$, or $\mathrm{rk}(AX − S) = \mathrm{rk}((AX − S)X^+ B)$, where $B = V − V^{1/2} P V^{1/2}$, $P = V^{1/2} H (V^{1/2} H)^+$, $H = I − XX^+$, $D = SX^+$.

3. Admissible linear estimators with a number of linear inequality constraints

In this section, we focus our attention on admissibility of linear estimators in a linear model $(Y, X\beta, V)$ with convex cone constraints $\beta \in C = \{\beta : R\beta \geq 0\}$, where $R$ is a known $k \times p$ matrix. Throughout this section, we denote $T = \{(\beta, V) : R\beta \geq 0, V \in \mathcal{V}\}$ and $T_1 = \{(\beta, V) : R\beta \geq 0, V \in \mathcal{V}_1\}$, where $\mathcal{V}_1$ is a cone of nonnegative definite $n \times n$ matrices.

Theorem 3.1. Consider the linear model $(Y, X\beta, V \mid R\beta \geq 0, V \in \mathcal{V}_1)$. Suppose that the following conditions hold:
(a) $a \in M(AX − S)$;
(b) $c'(AX − S)^+ a \leq 0$ for all $c \in M((AX − S)') \cap C^*$;
(c) $AY \stackrel{\mathcal{L}_H}{\sim} S\beta(T_1)$.
Then $AY + a \stackrel{\mathcal{L}_I}{\sim} S\beta(T_1)$.

Theorem 3.2. Consider the linear model $(Y, X\beta, V \mid R\beta \geq 0, V \in \mathcal{V})$. If $AY + a \stackrel{\mathcal{L}_I}{\sim} S\beta(T)$, then:
(a) $a \in M(AX − S)$;
(b) $c'(AX − S)^+ a \leq 0$ for all $c \in M((AX − S)') \cap C^*$.

The proofs of Theorems 3.1 and 3.2 are similar to the proofs of Theorems 2.2 and 2.3, and therefore we omit them.
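Condition (b) now quantifies over the intersection $M((AX − S)') \cap C^*$, where, by Farkas' lemma, $C^* = \{R'\lambda : \lambda \geq 0\}$ for $C = \{\beta : R\beta \geq 0\}$. The sketch below (our illustration; $R$, $A$, $X$, $S$, $a$ are hypothetical) screens the condition by sampling the dual cone; such a check can refute the condition but not prove it.

```python
import numpy as np

def check_cond_b(R, A, X, S, a, trials=10_000, tol=1e-8, seed=0):
    """Empirically screen condition (b) of Theorems 3.1/3.2:
    c'(AX - S)^+ a <= 0 for all c in M((AX - S)') intersected with C*,
    sampling c = R' lam with lam >= 0."""
    rng = np.random.default_rng(seed)
    M = A @ X - S
    P = M.T @ np.linalg.pinv(M.T)          # projector onto M((AX - S)')
    v = np.linalg.pinv(M) @ a              # (AX - S)^+ a
    for _ in range(trials):
        lam = rng.uniform(size=R.shape[0])
        c = R.T @ lam                      # a point of the dual cone C*
        if np.allclose(P @ c, c, atol=tol) and c @ v > tol:
            return False                   # found a violating direction
    return True                            # no violation found (not a proof)

# Hypothetical data: p = 2 with two constraints, X = S = I, A = 0.5 I.
R = np.array([[1.0, 0.0], [0.0, 1.0]])     # C = {beta : beta >= 0}
X = S = np.eye(2)
A = 0.5 * np.eye(2)
print(check_cond_b(R, A, X, S, np.array([0.5, 0.5])))   # True
print(check_cond_b(R, A, X, S, np.array([-0.5, 0.0])))  # False
```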


Remark. We conjecture that the sufficient conditions for $AY + a \stackrel{\mathcal{L}_I}{\sim} S\beta(T_1)$ in Theorem 3.1 are also necessary.

Acknowledgements

The authors thank the referees very much for their comments, which led to the reorganization of Theorems 2.2 and 2.3 and to corrections of spelling and grammar, making the paper clearer and more informative.

References

[1] J.K. Baksalary, A. Markiewicz, Admissible linear estimators in the general Gauss–Markov model, J. Statist. Plann. Inference 19 (1988) 349–359.
[2] A. Cohen, All admissible estimates of the mean vector, Ann. Math. Statist. 37 (1966) 458–463.
[3] K. Hoffmann, Admissibility of linear estimators with respect to restricted parameter sets, Math. Oper. Statist. Ser. Statist. 8 (1977) 425–438.
[4] W. Klonecki, S. Zontek, On the structure of admissible linear estimators, J. Multivariate Anal. 24 (1988) 11–30.
[5] L.R. LaMotte, Admissibility in linear estimation, Ann. Statist. 10 (1982) 245–255.
[6] C.Y. Lu, W.X. Li, Admissibility of linear estimators in a linear model with respect to an incomplete ellipsoidal restriction, Acta Math. Sinica 37 (3) (1994) 289–295.
[7] C.Y. Lu, Admissibility of inhomogeneous linear estimators in a linear model with respect to an incomplete ellipsoidal restriction, Commun. Statist. Theory Methods 24 (7) (1995) 1737–1742.
[8] C.Y. Lu, X.H. Zhu, Admissible linear estimators in linear models, J. Northeastern Math. 10 (1) (1994) 71–80.
[9] T. Mathew, Admissible linear estimation in singular models with respect to a restricted parameter set, Commun. Statist. Theory Methods 14 (2) (1985) 491–498.
[10] T. Mathew, C.R. Rao, B.K. Sinha, Admissibility of linear estimation in singular linear models, Commun. Statist. Theory Methods 13 (24) (1984) 3033–3045.
[11] C.R. Rao, Estimation of parameters in a linear model, Ann. Statist. 4 (1976) 1023–1037.
[12] N. Shinozaki, A study of generalized inverse of matrix and estimation with quadratic loss, Ph.D. Thesis, Keio University, Japan, 1975.
[13] C. Stepniak, On admissible estimators in a linear model, Biometrical J. 26 (1984) 815–816.
[14] C. Stepniak, Admissibility in mixed models, J. Multivariate Anal. 31 (1989) 90–106.
[15] Q.G. Wu, Admissibility of linear estimates of regression coefficients in the general Gauss–Markoff model, Acta Math. Appl. Sinica 9 (1986) 251–256.
[16] X.H. Zhu, C.Y. Lu, Admissibility of inhomogeneous linear estimators of regression coefficients, Chinese J. Appl. Probab. Statist. 2 (2) (1986) 97–99.
[17] X.H. Zhu, C.Y. Lu, Admissibility of linear estimators in linear models, Chinese Ann. Math. 8A (2) (1987) 220–226.
[18] X.H. Zhu, S.L. Zhang, Admissible linear estimators in a linear model with respect to a restricted parameter set, Kexue Tongbao 34 (2) (1989) 805–808 (in Chinese).
[19] S. Zontek, On characterization of linear admissible estimators: an extension of a result due to C.R. Rao, J. Multivariate Anal. 23 (1987) 1–12.