Estimation of linear models with crossed-error structure*

Journal of Econometrics 2 (1974) 67-78. © North-Holland Publishing Company

Wayne A. FULLER and George E. BATTESE**
Iowa State University, Ames, Iowa 50010, U.S.A.

Received March 1973, revised version received October 1973

Sufficient conditions are presented under which the generalized least-squares estimator, with estimated covariance matrix, is unbiased for the parameters in the crossed-error model and has the same asymptotic distribution as the generalized least-squares estimator. The model permits the presence of independent variables that are constant over cross sections or time periods. The model does not require that the variance components associated with cross sections or time periods be positive.

1. Introduction

In this paper we consider the estimation of a class of linear models in which the residual error is the sum of three components of variation. We assume that N 'cross sections' are observed in each of T 'time periods' and that it is of interest to estimate the parameters of the linear statistical model

$$Y_{ij} = \sum_{k=1}^{p} x_{ijk}\beta_k + u_{ij}, \qquad i = 1,\ldots,N;\ j = 1,\ldots,T, \tag{1}$$

in which the random errors, $u_{ij}$, have the decomposition

$$u_{ij} = v_i + e_j + \varepsilon_{ij}, \tag{2}$$

and the errors $v_i$, $e_j$ and $\varepsilon_{ij}$ are independently distributed with zero means and variances $\sigma_v^2 \ge 0$, $\sigma_e^2 \ge 0$ and $\sigma_\varepsilon^2 > 0$, respectively. The model is more fully explained in sect. 2.

Wallace and Hussain (1969), Amemiya (1971), Nerlove (1971), Swamy and Arora (1972) and others discuss this linear model and suggest its use in the combining of cross-section and time-series data. In these papers the linear model is defined with a constant as the first parameter and is investigated under the assumptions that the variance components $\sigma_v^2$ and $\sigma_e^2$ are positive and that the matrix of mean squares and products of the deviations, $x_{ijk} - \bar{x}_{i.k} - \bar{x}_{.jk} + \bar{x}_{..k}$, $k = 2, 3, \ldots, p$, not associated with the constant term, is nonsingular. Wallace and Hussain (1969) show that, under these assumptions, the limiting covariance matrix of the covariance estimator (which treats the errors $v_i$ and $e_j$ as fixed effects) is the same as the limiting covariance matrix of the generalized least-squares estimator. Swamy and Arora (1972) note that, for small samples, the generalized least-squares estimator with estimated covariance matrix could have larger variances than either the ordinary least-squares estimator, if the variances $\sigma_v^2$ and $\sigma_e^2$ are small, or the covariance estimator, if $\sigma_v^2$ and $\sigma_e^2$ are very large. Neither Wallace and Hussain (1969) nor Swamy and Arora (1972) made explicit the assumption that $\sigma_v^2$ and $\sigma_e^2$ are strictly greater than zero. Many of their results, however, are valid only under this assumption [e.g., see equation (ii) of Wallace and Hussain (1969, p. 63) and equation (4.5) of Swamy and Arora (1972, p. 268)]. Amemiya (1971) considers the estimation of the variance components in the crossed-error model and notes that estimators constructed from least-squares residuals are inefficient relative to those obtained by the maximum likelihood method. Nerlove (1971) investigates the properties of the covariance matrix for the errors in the crossed-error model and suggests a transformation of the observations by which the generalized least-squares estimates are efficiently obtained.

In this paper, we consider some of the properties of the generalized least-squares estimator with estimated covariance matrix. Our results are more general than those appearing in the literature in that we (i) explicitly consider the cases where $\sigma_v^2$ and/or $\sigma_e^2$ are equal to zero, and (ii) do not require that the matrix of mean squares and products of the deviations $x_{ijk} - \bar{x}_{i.k} - \bar{x}_{.jk} + \bar{x}_{..k}$, $k = 1, 2, \ldots, p$, have a positive-definite limit. The latter situation permits us to obtain the limiting behavior of the estimated constant term and of estimated coefficients associated with x-variables that are constant over time or over cross sections. Thus, for example, our theory is applicable if the linear model (1) contains a time trend as one of the x-variables.

The order of our presentation is as follows: The model is presented in sect. 2. In sect. 3 different estimators for the parameters in the model are defined. The fitting-of-constants estimators for the variance components are presented in sect. 4. Given these estimators for the variance components, properties of the estimated, generalized least-squares estimator are given in sect. 5. In sect. 6 we present a transformation of the observations in the model that permits the estimated, generalized least-squares estimates to be computed with use of an ordinary least-squares regression program. A method for handling unbalanced data with the crossed-error model is also presented in sect. 6.

*Journal Paper no. J-7530 of the Iowa Agriculture and Home Economics Experiment Station, Ames, Iowa; Project no. 1806.
**The authors are professor of statistics and economics, and associate in the Statistical Laboratory, respectively, at Iowa State University. The research for this paper was partly supported by a Joint Statistical Agreement with the U.S. Bureau of the Census, J.S.A. 72-4.

2. The model

To facilitate our derivations, we express the linear model (1)-(2) in matrix notation as

$$Y = X\beta + u, \tag{3}$$

where $Y = (Y_{11}, Y_{12}, \ldots, Y_{1T}, \ldots, Y_{N1}, Y_{N2}, \ldots, Y_{NT})'$, and the observations in the $(NT \times p)$ matrix X are similarly ordered by cross sections. We assume that X is a matrix of fixed constants and has rank p. The covariance matrix for the vector of random errors u can be expressed as

$$E(uu') = V = \sigma_\varepsilon^2 I_{NT} + \sigma_v^2 A + \sigma_e^2 B, \tag{4}$$

where A is the Kronecker product matrix of $I_N$ and $J_T$; B is the Kronecker product matrix of $J_N$ and $I_T$; $I_{NT}$, $I_N$ and $I_T$ are identity matrices of order NT, N and T, respectively; and $J_N$ and $J_T$ are $(N \times N)$ and $(T \times T)$ matrices, respectively, having all elements equal to one.

In the presentation of our results, it is convenient to define the decomposition of the X-matrix

$$X = M_{..}X + M_{1.}X + M_{.2}X + M_{12}X, \tag{5}$$

where the square matrices $M_{..}$, $M_{1.}$, $M_{.2}$ and $M_{12}$ are defined by $M_{..} = J_{NT}/NT$, $M_{1.} = A/T - J_{NT}/NT$, $M_{.2} = B/N - J_{NT}/NT$, and $M_{12} = I_{NT} - A/T - B/N + J_{NT}/NT$. These four M-matrices are clearly symmetric idempotent matrices that are mutually orthogonal. The ij-th elements of the k-th columns of the matrices $M_{..}X$, $M_{1.}X$, $M_{.2}X$ and $M_{12}X$ are $\bar{x}_{..k}$, $(\bar{x}_{i.k} - \bar{x}_{..k})$, $(\bar{x}_{.jk} - \bar{x}_{..k})$ and $(x_{ijk} - \bar{x}_{i.k} - \bar{x}_{.jk} + \bar{x}_{..k})$, respectively, where $\bar{x}_{..k}$ is the mean of the NT observations on the k-th x-variable, $\bar{x}_{i.k}$ is the mean of the T observations for the i-th cross section on the k-th x-variable, and $\bar{x}_{.jk}$ is the mean of the N observations for the j-th time period on the k-th x-variable. If the first column of X is a column of ones, then the first column of the matrices $M_{1.}X$, $M_{.2}X$ and $M_{12}X$ is identically zero. Further, if the k-th column of X is a cross-section characteristic that is constant for all time-period observations within a cross section (i.e., $x_{ijk} = x_{i.k}$, $i = 1, 2, \ldots, N$), then the elements of the k-th column of $M_{.2}X$ and $M_{12}X$ are zero.

With use of these mutually orthogonal, symmetric idempotent matrices, the covariance matrix V can be expressed as

$$V = \sigma_\varepsilon^2 M_{12} + (\sigma_\varepsilon^2 + T\sigma_v^2)M_{1.} + (\sigma_\varepsilon^2 + N\sigma_e^2)M_{.2} + (\sigma_\varepsilon^2 + T\sigma_v^2 + N\sigma_e^2)M_{..}, \tag{6}$$

where the coefficients of the M-matrices are the four distinct characteristic roots of V [e.g., see Nerlove (1971)]. From this expression it follows [e.g., see Lemma 1 of Fuller and Battese (1973)] that the inverse of V can be expressed as

$$V^{-1} = (\sigma_\varepsilon^2)^{-1}M_{12} + (\sigma_\varepsilon^2 + T\sigma_v^2)^{-1}M_{1.} + (\sigma_\varepsilon^2 + N\sigma_e^2)^{-1}M_{.2} + (\sigma_\varepsilon^2 + T\sigma_v^2 + N\sigma_e^2)^{-1}M_{..}. \tag{7}$$
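The structure of (5)-(7) can be checked numerically. The following sketch (ours, not part of the paper; the panel dimensions and component values are arbitrary assumptions) builds the four M-matrices for a small panel and verifies that they are symmetric, idempotent and mutually orthogonal, and that (6) and (7) reproduce V and its inverse:

```python
import numpy as np

# Small panel; N, T and the variance components below are arbitrary choices.
N, T = 4, 3
NT = N * T
A = np.kron(np.eye(N), np.ones((T, T)))   # A = I_N (x) J_T
B = np.kron(np.ones((N, N)), np.eye(T))   # B = J_N (x) I_T
J_NT = np.ones((NT, NT))

M_dd = J_NT / NT                                 # M..
M_1d = A / T - J_NT / NT                         # M1.
M_d2 = B / N - J_NT / NT                         # M.2
M_12 = np.eye(NT) - A / T - B / N + J_NT / NT    # M12

# symmetric, idempotent, mutually orthogonal
for M in (M_dd, M_1d, M_d2, M_12):
    assert np.allclose(M, M.T) and np.allclose(M @ M, M)
assert np.allclose(M_1d @ M_d2, 0) and np.allclose(M_12 @ M_dd, 0)

# V of (4) equals the spectral form (6); (7) is its inverse
s_eps2, s_v2, s_e2 = 1.0, 0.5, 0.25   # sigma_eps^2, sigma_v^2, sigma_e^2
V = s_eps2 * np.eye(NT) + s_v2 * A + s_e2 * B
V6 = (s_eps2 * M_12 + (s_eps2 + T * s_v2) * M_1d
      + (s_eps2 + N * s_e2) * M_d2
      + (s_eps2 + T * s_v2 + N * s_e2) * M_dd)
assert np.allclose(V, V6)
Vinv = (M_12 / s_eps2 + M_1d / (s_eps2 + T * s_v2)
        + M_d2 / (s_eps2 + N * s_e2)
        + M_dd / (s_eps2 + T * s_v2 + N * s_e2))
assert np.allclose(V @ Vinv, np.eye(NT))
```

Because the M-matrices are orthogonal projections, any function of V is obtained by applying the function to the four scalar roots, which is what (8)-(9) below exploit.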


It is also convenient to define the matrix

$$V^{1/2} = \sigma_\varepsilon M_{12} + (\sigma_\varepsilon^2 + T\sigma_v^2)^{1/2}M_{1.} + (\sigma_\varepsilon^2 + N\sigma_e^2)^{1/2}M_{.2} + (\sigma_\varepsilon^2 + T\sigma_v^2 + N\sigma_e^2)^{1/2}M_{..}, \tag{8}$$

and its inverse

$$V^{-1/2} = \sigma_\varepsilon^{-1}M_{12} + (\sigma_\varepsilon^2 + T\sigma_v^2)^{-1/2}M_{1.} + (\sigma_\varepsilon^2 + N\sigma_e^2)^{-1/2}M_{.2} + (\sigma_\varepsilon^2 + T\sigma_v^2 + N\sigma_e^2)^{-1/2}M_{..}, \tag{9}$$

where the matrix $V^{-1/2}$ satisfies the condition

$$V^{-1/2}VV^{-1/2} = I_{NT}. \tag{10}$$

In our investigation of the limiting behavior of estimators for $\beta$, as $n \equiv NT$ increases we assume that both N and T are strictly increasing functions of n.

3. Estimators for β

Under our model assumptions, the ordinary least-squares estimator

$$\hat{\beta} = (X'X)^{-1}X'Y \tag{11}$$

is unbiased for $\beta$ and has covariance matrix

$$\operatorname{var}(\hat{\beta}) = (X'X)^{-1}X'VX(X'X)^{-1} = (X'X)^{-1}\{\sigma_\varepsilon^2 X'M_{12}X + (\sigma_\varepsilon^2 + T\sigma_v^2)X'M_{1.}X + (\sigma_\varepsilon^2 + N\sigma_e^2)X'M_{.2}X + (\sigma_\varepsilon^2 + T\sigma_v^2 + N\sigma_e^2)X'M_{..}X\}(X'X)^{-1}. \tag{12}$$

The best, linear unbiased estimator for $\beta$ is the generalized least-squares estimator

$$\tilde{\beta} = (X'V^{-1}X)^{-1}X'V^{-1}Y \tag{13}$$

that has covariance matrix

$$\operatorname{var}(\tilde{\beta}) = (X'V^{-1}X)^{-1} = \{(\sigma_\varepsilon^2)^{-1}X'M_{12}X + (\sigma_\varepsilon^2 + T\sigma_v^2)^{-1}X'M_{1.}X + (\sigma_\varepsilon^2 + N\sigma_e^2)^{-1}X'M_{.2}X + (\sigma_\varepsilon^2 + T\sigma_v^2 + N\sigma_e^2)^{-1}X'M_{..}X\}^{-1}. \tag{14}$$
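As a small numerical check (ours; the design and component values are assumptions made for illustration), the difference between the OLS covariance (12) and the GLS covariance (14) is positive semidefinite, as the best-linear-unbiased property of (13) requires:

```python
import numpy as np

# Assumed small panel and variance components, for illustration only.
rng = np.random.default_rng(0)
N, T, p = 5, 4, 3
NT = N * T
A = np.kron(np.eye(N), np.ones((T, T)))
B = np.kron(np.ones((N, N)), np.eye(T))
V = 1.0 * np.eye(NT) + 0.5 * A + 0.25 * B   # sigma_eps^2, sigma_v^2, sigma_e^2
X = np.column_stack([np.ones(NT), rng.standard_normal((NT, p - 1))])

Vinv = np.linalg.inv(V)
XtX_inv = np.linalg.inv(X.T @ X)
var_ols = XtX_inv @ X.T @ V @ X @ XtX_inv   # eq. (12)
var_gls = np.linalg.inv(X.T @ Vinv @ X)     # eq. (14)

# var_ols - var_gls should have no (materially) negative eigenvalues
assert np.linalg.eigvalsh(var_ols - var_gls).min() > -1e-10
```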

Although the ordinary least-squares estimator $\hat{\beta}$ is unbiased for $\beta$, the estimator for the covariance matrix of $\hat{\beta}$, computed by the ordinary least-squares formula, is not unbiased for $\operatorname{var}(\hat{\beta})$. For example, with the simple crossed-error model

$$Y_{ij} = \beta_1 + u_{ij}, \qquad i = 1,\ldots,N;\ j = 1,\ldots,T, \tag{15}$$

the estimators $\hat{\beta}_1$ and $\tilde{\beta}_1$ are equivalent and their variance, obtained from (12) or (14), is

$$\operatorname{var}(\hat{\beta}_1) = (\sigma_\varepsilon^2 + T\sigma_v^2 + N\sigma_e^2)/NT. \tag{16}$$

The ordinary least-squares formula $(X'X)^{-1}s^2$, where

$$s^2 = Y'[I_{NT} - X(X'X)^{-1}X']Y/(NT - p),$$

is seriously biased for $\operatorname{var}(\hat{\beta}_1)$ since it has expectation

$$[\sigma_\varepsilon^2(NT - 1) + \sigma_v^2 T(N - 1) + \sigma_e^2 N(T - 1)]/NT(NT - 1).$$
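The size of this bias can be illustrated numerically. The sketch below (ours, with arbitrary component values) computes both expectations exactly, using $E[s^2] = \operatorname{tr}\{[I_{NT} - X(X'X)^{-1}X']V\}/(NT - p)$ rather than simulation, for the intercept-only model (15):

```python
import numpy as np

# Assumed panel dimensions and variance components, for illustration.
N, T = 10, 5
NT = N * T
s_eps2, s_v2, s_e2 = 1.0, 0.8, 0.6
A = np.kron(np.eye(N), np.ones((T, T)))
B = np.kron(np.ones((N, N)), np.eye(T))
V = s_eps2 * np.eye(NT) + s_v2 * A + s_e2 * B

X = np.ones((NT, 1))                      # intercept-only design of (15)
P = X @ np.linalg.inv(X.T @ X) @ X.T
true_var = (s_eps2 + T * s_v2 + N * s_e2) / NT        # eq. (16)
E_s2 = np.trace((np.eye(NT) - P) @ V) / (NT - 1)      # exact E[s^2], p = 1
E_ols_var = E_s2 / NT                                 # expectation of s^2/NT
closed_form = (s_eps2 * (NT - 1) + s_v2 * T * (N - 1)
               + s_e2 * N * (T - 1)) / (NT * (NT - 1))

assert np.isclose(E_ols_var, closed_form)   # matches the stated expectation
assert E_ols_var < true_var                 # OLS formula understates var(beta1-hat)
```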

When the variance components $\sigma_v^2$, $\sigma_e^2$ and $\sigma_\varepsilon^2$ are unknown, the generalized least-squares estimator (13) cannot be computed. We consider the estimated, generalized least-squares estimator

$$\hat{\tilde{\beta}} = (X'\hat{V}^{-1}X)^{-1}X'\hat{V}^{-1}Y, \tag{17}$$

where $\hat{V}$ denotes an estimator for the covariance matrix (4) for the crossed-error model. The method of estimating the variance components is of some importance in that the limiting behavior of the estimated, generalized least-squares estimator depends upon the order of the variances of the variance component estimators.

4. Estimation of variance components

We estimate the variance components in V by the 'fitting-of-constants' method [e.g., see Searle (1971)]. Instead of creating the appropriate dummy variables for use in regressions, we present the fitting-of-constants estimators for the variance components in terms of deviations from different means. For convenience of presentation we use the notation of the generalized inverse of matrices, but brief remarks are made on the computational procedures in sect. 6.

We define three vectors of least-squares residuals:

$$\hat{e} = M_{12}Y - M_{12}X(X'M_{12}X)^{+}X'M_{12}Y, \tag{18}$$

$$\hat{u} = (M_{12} + M_{1.})\{Y - X[X'(M_{12} + M_{1.})X]^{+}X'(M_{12} + M_{1.})Y\}, \tag{19}$$

$$\hat{c} = (M_{12} + M_{.2})\{Y - X[X'(M_{12} + M_{.2})X]^{+}X'(M_{12} + M_{.2})Y\}, \tag{20}$$

where $A^{+}$ denotes the generalized inverse of A. By evaluation of the expectations of the three residual sums of squares, it can be shown that unbiased estimators for the variance components in the crossed-error model are

$$\hat{\sigma}_\varepsilon^2 = \frac{\hat{e}'\hat{e}}{(N-1)(T-1) - p + \lambda_1 + \lambda_2 - 1}, \tag{21}$$


$$\hat{\sigma}_v^2 = \frac{\hat{u}'\hat{u} - [T(N-1) - p + \lambda_1]\hat{\sigma}_\varepsilon^2}{T(N-1) - T\operatorname{tr}\{[X'(M_{12} + M_{1.})X]^{+}X'M_{1.}X\}}, \tag{22}$$

$$\hat{\sigma}_e^2 = \frac{\hat{c}'\hat{c} - [N(T-1) - p + \lambda_2]\hat{\sigma}_\varepsilon^2}{N(T-1) - N\operatorname{tr}\{[X'(M_{12} + M_{.2})X]^{+}X'M_{.2}X\}}, \tag{23}$$

where $p - \lambda_1$ is the rank of $X'M_{1.}X$; $p - \lambda_2$ is the rank of $X'M_{.2}X$; and $p - \lambda_1 - \lambda_2 + 1$ is the rank of $X'M_{12}X$ if X contains a column of ones. If X does not contain a column of ones, then the denominator of the estimator $\hat{\sigma}_\varepsilon^2$ is increased by one. We note that, in general, $\lambda_1$ is the number of x-variables that have the same time-period values for given cross sections and $\lambda_2$ is the number of x-variables that have the same cross-section values for given time periods. Although it is possible for a linear combination of the x-variables to be constant over cross sections without the individual x-variables being constant over cross sections, this should be a rare occurrence in practice.

The variance component estimates obtained from (22) and (23) are not guaranteed to be non-negative. In practice, negative values would be replaced by zero for the estimation of the parameters in the model.

To examine the asymptotic properties of the estimated, generalized least-squares estimator (17) it is necessary to determine the probability order of the variance component estimators (21)-(23). The probability orders of these estimators are given in Theorem 1.

Theorem 1: If the random errors $v_i$, $e_j$ and $\varepsilon_{ij}$ in the crossed-error model (1)-(2) are normally distributed, the fitting-of-constants estimators for the variances $\sigma_\varepsilon^2$, $\sigma_v^2$, $\sigma_e^2$ satisfy

$$\hat{\sigma}_\varepsilon^2 = \sigma_\varepsilon^2 + O_p(1/\sqrt{NT}), \tag{24a}$$

$$\hat{\sigma}_v^2 = \sigma_v^2 + O_p(1/\sqrt{N}) \quad \text{if } \sigma_v^2 > 0; \qquad \hat{\sigma}_v^2 = 0 + O_p(1/T\sqrt{N}) \quad \text{if } \sigma_v^2 = 0, \tag{24b}$$

$$\hat{\sigma}_e^2 = \sigma_e^2 + O_p(1/\sqrt{T}) \quad \text{if } \sigma_e^2 > 0; \qquad \hat{\sigma}_e^2 = 0 + O_p(1/N\sqrt{T}) \quad \text{if } \sigma_e^2 = 0. \tag{24c}$$
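The unbiasedness claimed for (21)-(23) can be verified exactly, without simulation, through the trace identity $E[(RY)'(RY)] = \sigma_\varepsilon^2\operatorname{tr}(R) + \sigma_v^2\operatorname{tr}(RA) + \sigma_e^2\operatorname{tr}(RB)$ for a symmetric idempotent residual-maker R with $RX = 0$. The sketch below is ours; the design matrix (a constant, a cross-section characteristic, and a fully varying regressor) and the component values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 6, 4
NT = N * T
A = np.kron(np.eye(N), np.ones((T, T)))
B = np.kron(np.ones((N, N)), np.eye(T))
Jbar = np.ones((NT, NT)) / NT
M1d = A / T - Jbar                       # M1.
Md2 = B / N - Jbar                       # M.2
M12 = np.eye(NT) - A / T - B / N + Jbar  # M12

# X: constant, cross-section characteristic, fully varying regressor
X = np.column_stack([np.ones(NT),
                     np.repeat(rng.standard_normal(N), T),
                     rng.standard_normal(NT)])
p = X.shape[1]
lam1 = p - np.linalg.matrix_rank(X.T @ M1d @ X)   # here lam1 = 1
lam2 = p - np.linalg.matrix_rank(X.T @ Md2 @ X)   # here lam2 = 2

def resid_maker(Q):
    """R such that RY is the residual of the regression of QY on QX."""
    return Q - Q @ X @ np.linalg.pinv(X.T @ Q @ X) @ X.T @ Q

s_eps2, s_v2, s_e2 = 1.0, 0.5, 0.25   # arbitrary true components

def E_ss(R):
    """Expected residual sum of squares E[(RY)'(RY)] under model (1)-(2)."""
    return (s_eps2 * np.trace(R) + s_v2 * np.trace(R @ A)
            + s_e2 * np.trace(R @ B))

# (21)
est_eps2 = E_ss(resid_maker(M12)) / ((N - 1) * (T - 1) - p + lam1 + lam2 - 1)
assert np.isclose(est_eps2, s_eps2)

# (22), using M12 + M1. = I - B/N
Q = np.eye(NT) - B / N
den = T * (N - 1) - T * np.trace(np.linalg.pinv(X.T @ Q @ X) @ X.T @ M1d @ X)
est_v2 = (E_ss(resid_maker(Q)) - (T * (N - 1) - p + lam1) * s_eps2) / den
assert np.isclose(est_v2, s_v2)

# (23), using M12 + M.2 = I - A/T
Q = np.eye(NT) - A / T
den = N * (T - 1) - N * np.trace(np.linalg.pinv(X.T @ Q @ X) @ X.T @ Md2 @ X)
est_e2 = (E_ss(resid_maker(Q)) - (N * (T - 1) - p + lam2) * s_eps2) / den
assert np.isclose(est_e2, s_e2)
```

In an application the expected sums of squares would of course be replaced by the observed $\hat{e}'\hat{e}$, $\hat{u}'\hat{u}$ and $\hat{c}'\hat{c}$.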

Proof: The result of (24a) follows directly from the fact that $\hat{\sigma}_\varepsilon^2$ of (21) is a multiple of a chi-square random variable and has variance $2\sigma_\varepsilon^4/[(N-1)(T-1) - p + \lambda_1 + \lambda_2 - 1]$.

We outline the proof for the results of (24c); the results of (24b) follow by the same arguments. It is convenient to write the crossed-error model as

$$Y = (X : W_1 : W_2)(\beta', \delta', \gamma')' + \varepsilon, \tag{25}$$

where $W_1$ is a matrix of dummy variables for cross-section effects; $W_2$ is a $(NT \times p_2)$ matrix such that $W_2'W_2 = NI_{p_2}$, $M_{.2}W_2 = W_2$, $M_{1.}W_2 = M_{..}W_2 = M_{12}W_2 = 0$; and $\gamma$ has a multivariate normal distribution with zero mean vector and covariance matrix $\sigma_e^2 I_{p_2}$, where $I_{p_2}$ is the identity matrix of order $p_2$ and $p_2 = T - \lambda_1$. It is readily seen that the numerator of the estimator $\hat{\sigma}_e^2$ can be expressed as $\hat{\gamma}'C_{\gamma\gamma}^{-1}\hat{\gamma} - p_2\hat{\sigma}_\varepsilon^2$, where $\hat{\gamma}$ is the vector of estimates for $\gamma$ obtained by applying ordinary least-squares to model (25), and $C_{\gamma\gamma}$ is the submatrix of $[(X : W_1 : W_2)'(X : W_1 : W_2)]^{-1}$ associated with $\hat{\gamma}$. Since the covariance matrix of $\hat{\gamma}$ is $\sigma_e^2 I_{p_2} + \sigma_\varepsilon^2 C_{\gamma\gamma}$, it follows that $\hat{\gamma}'C_{\gamma\gamma}^{-1}\hat{\gamma}$ is distributed as $\sum_{j=1}^{p_2}\delta_j\chi_j^2(1)$, where the $\chi_j^2(1)$ are independent chi-square random variables with one degree of freedom, and the $\delta_j$ are the characteristic roots of $[\sigma_e^2 I_{p_2} + \sigma_\varepsilon^2 C_{\gamma\gamma}]C_{\gamma\gamma}^{-1} = \sigma_e^2 C_{\gamma\gamma}^{-1} + \sigma_\varepsilon^2 I_{p_2}$. Thus

$$\operatorname{var}(\hat{\gamma}'C_{\gamma\gamma}^{-1}\hat{\gamma}) = \sum_{j=1}^{p_2} 2\delta_j^2,$$

where $\delta_j = \sigma_\varepsilon^2 + \sigma_e^2\omega_j$ and $\omega_j$ denotes the characteristic root of $C_{\gamma\gamma}^{-1}$ associated with $\delta_j$. From the definition of $W_2$ and $C_{\gamma\gamma}^{-1}$ it follows that the characteristic roots $\omega_j$ are no greater than N. Thus, the variance of $\hat{\gamma}'C_{\gamma\gamma}^{-1}\hat{\gamma}$ is, at most, of order $TN^2$ if $\sigma_e^2 > 0$ and of order T if $\sigma_e^2 = 0$. Further, the term in the denominator of (23), $\operatorname{tr}\{[X'(M_{12} + M_{.2})X]^{+}X'M_{.2}X\}$, is less than or equal to $p - \lambda_2$. From these results it follows that the variance of $\hat{\sigma}_e^2$ is of order $1/T$ if $\sigma_e^2 > 0$ and of order $1/N^2T$ if $\sigma_e^2 = 0$. □

where Wj denotes the characteristic root of CY;’ associated with aj. From the definition of W2 and C,’ it follows that the characteristic roots Wjare no greater than N. Thus, the variance of 9’C,‘9 is, at most, of order TN2 if ef > 0 and of order T if t~f = 0. Further, the term in the denominator of (23), tr {[X’(M, 2 + M.2)Xj+X’M.2X}, is less than or equal top- Iz,. From these results it follows that the variance of 8: is of order l/T if cf > 0 and of order 1/N2T if ~3 0 . 0 With use of the variance component estimators (21)-(23), the covariance matrix estimator, p, in the estimated, generalized least-squares estimator (17) is

= a,ZM,,+(af+T~:)M,.+(62+Na:)M,, + (8; + T6$ + N&,2)M.

(26)
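Assembling (26) and plugging it into (17) is direct. The sketch below (ours; the simulated data and the component estimates are assumed values, and negative component estimates would be set to zero in practice, as noted above):

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 8, 6
NT = N * T
A = np.kron(np.eye(N), np.ones((T, T)))
B = np.kron(np.ones((N, N)), np.eye(T))
Jbar = np.ones((NT, NT)) / NT
M1d, Md2 = A / T - Jbar, B / N - Jbar
M12 = np.eye(NT) - A / T - B / N + Jbar

# assumed data-generating process for illustration
X = np.column_stack([np.ones(NT), rng.standard_normal(NT)])
beta = np.array([1.0, 2.0])
u = (np.repeat(rng.normal(0, 0.7, N), T)    # v_i
     + np.tile(rng.normal(0, 0.5, T), N)    # e_j
     + rng.normal(0, 1.0, NT))              # eps_ij
Y = X @ beta + u

# suppose the variance components have been estimated (values assumed here)
s_eps2, s_v2, s_e2 = 1.1, 0.4, 0.3
Vhat = (s_eps2 * M12 + (s_eps2 + T * s_v2) * M1d
        + (s_eps2 + N * s_e2) * Md2
        + (s_eps2 + T * s_v2 + N * s_e2) * Jbar)        # eq. (26)
Vinv = np.linalg.inv(Vhat)
beta_egls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ Y)   # eq. (17)
assert np.all(np.isfinite(beta_egls))
```

Solving the normal equations with `np.linalg.solve` avoids forming $(X'\hat{V}^{-1}X)^{-1}$ explicitly; sect. 6 gives the computationally cheaper route that avoids $NT \times NT$ matrices altogether.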

Some of the properties of the estimated, generalized least-squares estimator (17) that has $\hat{V}$ defined by (26) are presented in sect. 5.

The alternative estimators for the variance components $\sigma_v^2$ and $\sigma_e^2$,

$$\tilde{\sigma}_v^2 = \frac{\tilde{u}'\tilde{u}}{T(N - p + \lambda_1 - 1)} - \frac{\hat{\sigma}_\varepsilon^2}{T}, \tag{27}$$

and

$$\tilde{\sigma}_e^2 = \frac{\tilde{c}'\tilde{c}}{N(T - p + \lambda_2 - 1)} - \frac{\hat{\sigma}_\varepsilon^2}{N}, \tag{28}$$

where

$$\tilde{u} = M_{1.}Y - M_{1.}X(X'M_{1.}X)^{+}X'M_{1.}Y, \tag{27a}$$

$$\tilde{c} = M_{.2}Y - M_{.2}X(X'M_{.2}X)^{+}X'M_{.2}Y, \tag{28a}$$

also satisfy the probability order relationships of Theorem 1. For example, since

$$\operatorname{var}(\tilde{\sigma}_v^2) = \frac{2(\sigma_\varepsilon^2 + T\sigma_v^2)^2}{T^2(N - p + \lambda_1 - 1)} + \frac{2\sigma_\varepsilon^4}{T^2[(N-1)(T-1) - p + \lambda_1 + \lambda_2 - 1]},$$

it is clear that the variance of $\tilde{\sigma}_v^2$ is of order $N^{-1}$ if $\sigma_v^2 > 0$ and of order $(T^2N)^{-1}$ if $\sigma_v^2 = 0$. Thus the estimator $\tilde{\sigma}_v^2$ has the properties given in (24b) for $\hat{\sigma}_v^2$. The variance of $\tilde{\sigma}_v^2$, however, is never less than the variance of the fitting-of-constants estimator, $\hat{\sigma}_v^2$. The variance estimators $\tilde{\sigma}_v^2$ and $\tilde{\sigma}_e^2$ are equivalent to those considered by Swamy and Arora (1972, p. 265).

5. Properties of the estimated GLS estimator

In Theorem 2 we present sufficient conditions for the estimated, generalized least-squares estimator (17) to be unbiased for $\beta$.

Theorem 2: For the crossed-error model (1)-(2), the estimated, generalized least-squares estimator (17), with estimated covariance matrix (26), is unbiased for $\beta$ if

(i) the errors are symmetric about zero and have finite fourth moments,
(ii) the expectation of $(\hat{\sigma}_\varepsilon^2)^{-1}$ exists.

Proof: Since u is distributed symmetrically about zero and the estimators $\hat{\sigma}_\varepsilon^2$, $\hat{\sigma}_v^2$ and $\hat{\sigma}_e^2$ are even functions of u, it follows from the result of Kakwani (1967) that $\hat{\tilde{\beta}}$ is unbiased if its expectation exists. To demonstrate that our conditions are sufficient for $\hat{\tilde{\beta}}$ to have finite expectation, we consider an arbitrary linear combination of $(\hat{\tilde{\beta}} - \beta)$. Let $\eta$ denote any vector of real numbers from the NT-dimensional space. We consider the expectation of $|\eta'X(\hat{\tilde{\beta}} - \beta)|$. Now

$$|\eta'X(\hat{\tilde{\beta}} - \beta)| = |\eta'\hat{V}^{1/2}\hat{V}^{-1/2}X(X'\hat{V}^{-1}X)^{-1}X'\hat{V}^{-1}u| \le (\eta'\hat{V}\eta)^{1/2}\{u'\hat{V}^{-1}X(X'\hat{V}^{-1}X)^{-1}X'\hat{V}^{-1}u\}^{1/2} \le (\eta'\hat{V}\eta)^{1/2}(u'\hat{V}^{-1}u)^{1/2}.$$

The minimum characteristic root of $\hat{V}$ is $\hat{\sigma}_\varepsilon^2$ and the maximum root is $\hat{\sigma}_\varepsilon^2 + T\hat{\sigma}_v^2 + N\hat{\sigma}_e^2$. Therefore

$$(\eta'\hat{V}\eta)^{1/2}(u'\hat{V}^{-1}u)^{1/2} \le [(\hat{\sigma}_\varepsilon^2 + T\hat{\sigma}_v^2 + N\hat{\sigma}_e^2)/\hat{\sigma}_\varepsilon^2]^{1/2}(\eta'\eta)^{1/2}(u'u)^{1/2}.$$

Since $\hat{\sigma}_\varepsilon^2 + T\hat{\sigma}_v^2 + N\hat{\sigma}_e^2$ is a quadratic form in u, it is bounded by a multiple of $u'u$, the multiple depending on the matrix X. Thus, since the expectations of $(\hat{\sigma}_\varepsilon^2)^{-1}$ and $(u'u)^2$ exist, it follows that the expectation of $\eta'X(\hat{\tilde{\beta}} - \beta)$ exists. □

Sufficient conditions for the estimated, generalized least-squares estimator for the crossed-error model to have the same asymptotic distribution as the generalized least-squares estimator are presented in Theorem 3.

Theorem 3: If the components of variance $v_i$, $e_j$ and $\varepsilon_{ij}$ in the crossed-error model (1)-(2) are normally distributed and N and T are strictly increasing functions of $n (= NT)$, then the estimated, generalized least-squares estimator (17), with estimated covariance matrix (26), satisfies the condition

$$D^{1/2}(\hat{\tilde{\beta}} - \beta) = D^{1/2}(\tilde{\beta} - \beta) + O_p(L^{-1/2}),$$

where D is the $(p \times p)$ diagonal matrix composed of the diagonal elements of $X'V^{-1}X$; $\tilde{\beta}$ is the generalized least-squares estimator, $\tilde{\beta} = (X'V^{-1}X)^{-1}X'V^{-1}Y$; and L is the minimum of N and T.

Proof:

Now $D^{1/2}(\hat{\tilde{\beta}} - \beta)$ can be expressed as

$$D^{1/2}(\hat{\tilde{\beta}} - \beta) = [D^{1/2}(X'\bar{V}^{-1}X)^{-1}D^{1/2}][D^{-1/2}X'\bar{V}^{-1}u], \tag{29}$$

where $\bar{V}^{-1} \equiv \hat{\sigma}_\varepsilon^2\hat{V}^{-1}$. From the expression for $V^{-1}$ in (7) it follows that

$$\bar{V}^{-1} = \sigma_\varepsilon^2 V^{-1} + d_1M_{1.} + d_2M_{.2} + d_3M_{..}, \tag{30}$$

where

$$d_1 = \hat{\sigma}_\varepsilon^2(\hat{\sigma}_\varepsilon^2 + T\hat{\sigma}_v^2)^{-1} - \sigma_\varepsilon^2(\sigma_\varepsilon^2 + T\sigma_v^2)^{-1}, \tag{30a}$$

$$d_2 = \hat{\sigma}_\varepsilon^2(\hat{\sigma}_\varepsilon^2 + N\hat{\sigma}_e^2)^{-1} - \sigma_\varepsilon^2(\sigma_\varepsilon^2 + N\sigma_e^2)^{-1}, \tag{30b}$$

$$d_3 = \hat{\sigma}_\varepsilon^2(\hat{\sigma}_\varepsilon^2 + T\hat{\sigma}_v^2 + N\hat{\sigma}_e^2)^{-1} - \sigma_\varepsilon^2(\sigma_\varepsilon^2 + T\sigma_v^2 + N\sigma_e^2)^{-1}. \tag{30c}$$

From the results of Theorem 1 it can be shown that $d_1$, $d_2$ and $d_3$ have probability orders

$$d_1 = O_p(1/T\sqrt{N}) \ \text{if } \sigma_v^2 > 0; \qquad d_1 = O_p(1/\sqrt{N}) \ \text{if } \sigma_v^2 = 0;$$

$$d_2 = O_p(1/N\sqrt{T}) \ \text{if } \sigma_e^2 > 0; \qquad d_2 = O_p(1/\sqrt{T}) \ \text{if } \sigma_e^2 = 0;$$

$$d_3 = O_p(1/(N+T)\sqrt{NT}) \ \text{if } \sigma_v^2 > 0,\ \sigma_e^2 > 0; \qquad d_3 = O_p(1/T\sqrt{N}) \ \text{if } \sigma_v^2 > 0,\ \sigma_e^2 = 0;$$

$$d_3 = O_p(1/N\sqrt{T}) \ \text{if } \sigma_v^2 = 0,\ \sigma_e^2 > 0; \qquad d_3 = O_p(1/\sqrt{L}) \ \text{if } \sigma_v^2 = 0,\ \sigma_e^2 = 0.$$

With use of (30) we obtain

$$D^{-1/2}(X'\bar{V}^{-1}X)D^{-1/2} = \sigma_\varepsilon^2 D^{-1/2}(X'V^{-1}X)D^{-1/2} + D^{-1/2}X'[d_1M_{1.} + d_2M_{.2} + d_3M_{..}]XD^{-1/2},$$

and since, for example, $D^{-1/2}(X'M_{1.}X)D^{-1/2}$ is of order T if $\sigma_v^2$ is positive and of order 1 if $\sigma_v^2$ is zero, it follows that

$$D^{-1/2}(X'\bar{V}^{-1}X)D^{-1/2} = \sigma_\varepsilon^2 D^{-1/2}(X'V^{-1}X)D^{-1/2} + \Delta_1, \tag{31}$$

where $\Delta_1$ denotes a $(p \times p)$ matrix of random variables which are of probability order $L^{-1/2}$. Further,

$$D^{-1/2}X'\bar{V}^{-1}u = \sigma_\varepsilon^2 D^{-1/2}X'V^{-1}u + D^{-1/2}X'[d_1M_{1.} + d_2M_{.2} + d_3M_{..}]u,$$

and since, for example, the covariance matrix of $D^{-1/2}X'M_{1.}u$, given by $(\sigma_\varepsilon^2 + T\sigma_v^2)D^{-1/2}(X'M_{1.}X)D^{-1/2}$, is of order $T^2$ if $\sigma_v^2$ is positive and of order 1 if $\sigma_v^2$ is zero, it follows that

$$D^{-1/2}X'\bar{V}^{-1}u = \sigma_\varepsilon^2 D^{-1/2}X'V^{-1}u + \Delta_2, \tag{32}$$

where $\Delta_2$ denotes a $(p \times 1)$ vector of random variables which are of probability order $L^{-1/2}$. By substitution of (31) and (32) into (29),

$$D^{1/2}(\hat{\tilde{\beta}} - \beta) = D^{1/2}(\tilde{\beta} - \beta) + \Delta_3, \tag{33}$$

where $\Delta_3 = O_p(L^{-1/2})$. □

Given the assumptions of Theorem 3 and the condition that $\lim_{n\to\infty} D^{-1/2}(X'V^{-1}X)D^{-1/2} = G$, where G is a $(p \times p)$ positive definite matrix, the limiting distribution of $D^{1/2}(\tilde{\beta} - \beta)$ is multivariate normal with zero mean vector and covariance matrix $G^{-1}$.

It should be noted that the normality assumption in Theorem 3 is used to establish the variances of the variance component estimators. The result of Theorem 3 would, however, hold for the variance component estimators (21), (27) and (28), given that the errors in the model have finite fourth moments. To prove asymptotic normality for such estimators would require additional regularity conditions on the X matrix.

6. A note on computational procedure

The estimated, generalized least-squares estimates of the parameters in the crossed-error model can be computed with four ordinary least-squares regressions and a few matrix manipulations. Given the original data on the dependent and independent variables, the cross-section, time-period and overall means should be computed for each variable.

The sum of squares $\hat{e}'\hat{e}$ of (21) is computed as the residual sum of squares from the regression of the y-deviations, $y_{ij} - \bar{y}_{i.} - \bar{y}_{.j} + \bar{y}_{..}$, on the x-deviations, $x_{ijk} - \bar{x}_{i.k} - \bar{x}_{.jk} + \bar{x}_{..k}$, $k = 1, 2, \ldots, p$, that are not identically zero. Throughout this section we ignore the possibility that a linear combination of the nonzero deviations is zero.

The sum of squares $\hat{u}'\hat{u}$ of (22) is computed as the residual sum of squares from the regression of the y-deviations, $y_{ij} - \bar{y}_{.j}$, on the x-deviations, $x_{ijk} - \bar{x}_{.jk}$, $k = 1, 2, \ldots, p$, that are not identically zero. The term $\operatorname{tr}\{[X'(M_{12} + M_{1.})X]^{+}X'M_{1.}X\}$ in the denominator of (22) is computed as the trace of the product of the inverse of the X'X-matrix from the regression used to obtain the $\hat{u}$'s and the matrix of the sums of squares and products of the x-deviations, $\bar{x}_{i.k} - \bar{x}_{..k}$, $k = 1, 2, \ldots, p$, that are not identically zero.

The sum of squares $\hat{c}'\hat{c}$ of (23) is computed as the residual sum of squares from the regression of the y-deviations, $y_{ij} - \bar{y}_{i.}$, on the x-deviations, $x_{ijk} - \bar{x}_{i.k}$, $k = 1, 2, \ldots, p$, that are not identically zero. The term $\operatorname{tr}\{[X'(M_{12} + M_{.2})X]^{+}X'M_{.2}X\}$ in the denominator of (23) is computed as the trace of the product of the inverse of the X'X-matrix from the regression used to obtain the $\hat{c}$'s and the matrix of the sums of squares and products of the x-deviations, $\bar{x}_{.jk} - \bar{x}_{..k}$, $k = 1, 2, \ldots, p$, that are not identically zero.

Given the estimates for $\sigma_\varepsilon^2$, $\sigma_v^2$ and $\sigma_e^2$, the estimated, generalized least-squares estimates for the parameters in the crossed-error model are computed by the ordinary least-squares regression of the dependent variables, $y_{ij} - \hat{\alpha}_1\bar{y}_{i.} - \hat{\alpha}_2\bar{y}_{.j} + \hat{\alpha}_3\bar{y}_{..}$, on the independent variables, $x_{ijk} - \hat{\alpha}_1\bar{x}_{i.k} - \hat{\alpha}_2\bar{x}_{.jk} + \hat{\alpha}_3\bar{x}_{..k}$, $k = 1, 2, \ldots, p$, where $\hat{\alpha}_1$, $\hat{\alpha}_2$ and $\hat{\alpha}_3$ are defined by

$$\hat{\alpha}_1 = 1 - [\hat{\sigma}_\varepsilon^2/(\hat{\sigma}_\varepsilon^2 + T\hat{\sigma}_v^2)]^{1/2},$$

$$\hat{\alpha}_2 = 1 - [\hat{\sigma}_\varepsilon^2/(\hat{\sigma}_\varepsilon^2 + N\hat{\sigma}_e^2)]^{1/2},$$

$$\hat{\alpha}_3 = \hat{\alpha}_1 + \hat{\alpha}_2 - 1 + [\hat{\sigma}_\varepsilon^2/(\hat{\sigma}_\varepsilon^2 + T\hat{\sigma}_v^2 + N\hat{\sigma}_e^2)]^{1/2}.$$

The estimated standard errors computed by the ordinary least-squares regression program serve as approximate standard errors for the estimated coefficients.¹

We may find in practice that, for some cross sections, data are missing in some time periods. In such situations the estimation methods outlined in this paper can be applied after creating an appropriate set of balanced data.
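The partial-demeaning step just described can be sketched as follows (ours, with assumed variable names and component values). When the true variances are substituted, the transform equals $\sigma_\varepsilon V^{-1/2}$ of (9), so the transformed errors are uncorrelated with variance $\sigma_\varepsilon^2$, which the code verifies:

```python
import numpy as np

def fb_demean(Z, N, T, s_eps2, s_v2, s_e2):
    """Apply z_ij - a1*z_i. - a2*z_.j + a3*z_.. columnwise to Z
    (N*T rows, ordered by cross section), with the alphas of the text."""
    a1 = 1.0 - np.sqrt(s_eps2 / (s_eps2 + T * s_v2))
    a2 = 1.0 - np.sqrt(s_eps2 / (s_eps2 + N * s_e2))
    a3 = a1 + a2 - 1.0 + np.sqrt(s_eps2 / (s_eps2 + T * s_v2 + N * s_e2))
    Z = np.asarray(Z, dtype=float).reshape(N, T, -1)
    zi = Z.mean(axis=1, keepdims=True)        # cross-section means z_i.
    zj = Z.mean(axis=0, keepdims=True)        # time-period means   z_.j
    zo = Z.mean(axis=(0, 1), keepdims=True)   # overall mean        z_..
    return (Z - a1 * zi - a2 * zj + a3 * zo).reshape(N * T, -1)

# with the true variances, the transform matrix is sigma_eps * V^{-1/2}
N, T = 4, 3
s_eps2, s_v2, s_e2 = 1.0, 0.5, 0.25          # assumed component values
A = np.kron(np.eye(N), np.ones((T, T)))
B = np.kron(np.ones((N, N)), np.eye(T))
V = s_eps2 * np.eye(N * T) + s_v2 * A + s_e2 * B
P = fb_demean(np.eye(N * T), N, T, s_eps2, s_v2, s_e2)  # transform as a matrix
assert np.allclose(P @ V @ P.T, s_eps2 * np.eye(N * T))
```

OLS of `fb_demean(Y, ...)` on `fb_demean(X, ...)` then yields the estimated, generalized least-squares estimates without forming any $NT \times NT$ matrix.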
If no observation is available at the j-th time period for the i-th cross section, we suggest that a dummy (independent) variable be defined that has zero values corresponding to all observations except the (i,j)-th (missing) observation, for which the value assigned is one. Any values can be assigned to the dependent and independent variables where missing observations exist. If there are r missing observations, then r dummy variables are defined to obtain a set of balanced data. It can be shown that the parameter estimates obtained by use of these balanced data are equivalent to those obtained by computing the estimated, generalized least-squares estimates by matrix manipulations with the original unbalanced data.

¹If the true variances are substituted into the expressions for $\hat{\alpha}_1$, $\hat{\alpha}_2$ and $\hat{\alpha}_3$, the transformed variables used in the regression can be obtained by pre-multiplying the model (3) by the matrix $\sigma_\varepsilon V^{-1/2}$. The errors in the transformed model are uncorrelated and have variance $\sigma_\varepsilon^2$. Nerlove (1971, p. 385) suggests using the transformation matrix $V^{-1/2}$. The transformation that uses the matrix $\sigma_\varepsilon V^{-1/2}$ is presented by Shih (1966, p. 94).

References

Amemiya, T., 1971, The estimation of the variances in a variance-components model, International Economic Review 12, 1-13.
Fuller, W.A. and G.E. Battese, 1973, Transformations for estimation of linear models with nested error structure, Journal of the American Statistical Association 68, 626-632.
Kakwani, N.C., 1967, The unbiasedness of Zellner's seemingly unrelated regression equations estimators, Journal of the American Statistical Association 62, 141-142.
Nerlove, M., 1971, A note on error components models, Econometrica 39, 383-396.
Searle, S.R., 1971, Topics in variance component estimation, Biometrics 27, 1-76.
Shih, Chang Sheng, 1966, Interval estimation for the exponential model and the analysis of rotation experiments, Ph.D. Thesis (Iowa State University, Ames) 126 pp., unpublished.
Swamy, P.A.V.B. and S.S. Arora, 1972, The exact finite sample properties of the estimates of coefficients in the error components regression models, Econometrica 40, 261-275.
Wallace, T.D. and A. Hussain, 1969, The use of error components models in combining cross section with time series data, Econometrica 37, 55-72.