Estimation of error variance in linear regression models with errors having multivariate Student-t distribution with unknown degrees of freedom


Economics Letters 27 (1988) 47-53
North-Holland

ESTIMATION OF ERROR VARIANCE IN LINEAR REGRESSION MODELS WITH ERRORS HAVING MULTIVARIATE STUDENT-t DISTRIBUTION WITH UNKNOWN DEGREES OF FREEDOM *

Radhey S. SINGH

University of Guelph, Guelph, Ont., Canada N1G 2W1

Received 30 October 1987
Accepted 22 January 1988

The linear regression model y = Xβ + u, where the components of the disturbance vector u have jointly multivariate Student-t distribution with unknown degrees of freedom, is considered. An estimator of the degrees of freedom parameter is provided. This estimator is used to provide estimates, computable from the data, of the usual unbiased and minimum mean square error estimators of the error variance and of their variances, which are, otherwise, not computable from the data.

1. Introduction

In much theoretical research on linear regression analysis, as well as in many applications of linear regression models to practical situations, the error terms are assumed to be normally and independently distributed, each with zero mean and common variance. However, as is well known, error terms can have non-normal distributions [see, e.g., Zellner (1976) for references to works employing non-normal errors]. As such, the normality assumption on the distribution of the error terms could be misleading, and its violation could have adverse effects on the inferences drawn in a number of situations; see, for example, Gnanadesikan (1977). Recently, some authors [e.g., Zellner (1976), King (1979, 1980), Ullah and Zinde-Walsh (1984) and Singh (1987)] have employed a broader assumption, namely, that the error terms have a joint multivariate Student-t distribution. With this assumption, the marginal distribution of each term is univariate Student-t, a distribution that includes the Cauchy and normal distributions as special cases. However, to the best of this author's knowledge, almost all the work on linear regression models with error terms having multivariate Student-t distribution has assumed that the degrees of freedom of the distribution is known.

For the linear regression model with the disturbance (error) terms having jointly multivariate Student-t distribution, it is known [see, for example, Singh (1987)] that the usual least squares estimator (LSE) of the vector of regression coefficients is not only the maximum-likelihood estimator (MLE) but also the unique minimum variance unbiased estimator (MVUE). However, the variance of this MVUE, as well as the unbiased and the minimum mean square error estimators of the dispersion parameter of the model and their variances, all depend on the degrees of freedom of the

* The research was carried out during the author's visit to the Department of Econometrics and Operations Research, Monash University, Clayton, Victoria, Australia. The author expresses his gratitude to the Department, particularly to Max King, for inviting him to visit the Department and for providing research support. The author also wishes to thank Mrs. Carol Clark for graciously and efficiently typing the manuscript.


Student-t distribution of the disturbance vector, and hence these are not computable for applications in practice unless we assume knowledge of this degrees of freedom or somehow estimate it from the data. In this paper we consider the linear regression model where the elements of the error vector have jointly multivariate Student-t distribution with unknown degrees of freedom, and suggest a way out of the above problem. In particular, we first give an estimator of the degrees of freedom parameter and then use this estimator to provide estimators for the above-mentioned quantities, which can be easily computed from the data for applications.

2. The linear regression model and estimators

The model we consider here is given by

y = X\beta + u,   (2.1)

where y' = (y_1, \dots, y_T) are T observations on the dependent variable (the variable to be explained), X is a T × p, p ≤ T, full column rank matrix of T non-stochastic observations on p explanatory variables, β' = (β_1, ..., β_p) is the vector of unknown regression coefficients, and u' = (u_1, ..., u_T) is the vector of disturbances. Assume that the T elements of u have jointly multivariate Student-t distribution with probability density function (p.d.f.) given by

f(u \mid \gamma, \sigma^2) = k(\gamma)\, \sigma^{-T} \left( \gamma + \frac{u'u}{\sigma^2} \right)^{-(\gamma+T)/2}, \qquad \gamma > 0, \; \sigma > 0,   (2.2)

and -\infty < u_i < \infty, i = 1, \dots, T, where

k(\gamma) = \frac{\gamma^{\gamma/2}\, \Gamma\{(\gamma+T)/2\}}{\pi^{T/2}\, \Gamma(\gamma/2)},   (2.3)
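As a computational aside (not part of the original paper), the density (2.2) with the constant (2.3) can be transcribed directly into code. The sketch below, in Python, evaluates the logarithm of (2.2); it assumes numpy and scipy are available, and the function name log_f_u is ours.

import numpy as np
from scipy.special import gammaln

def log_f_u(u, gamma, sigma2):
    """Log of the multivariate Student-t p.d.f. (2.2) at the T-vector u."""
    T = u.size
    log_k = (0.5 * gamma * np.log(gamma) + gammaln((gamma + T) / 2)
             - 0.5 * T * np.log(np.pi) - gammaln(gamma / 2))  # log k(gamma), (2.3)
    return (log_k - 0.5 * T * np.log(sigma2)
            - 0.5 * (gamma + T) * np.log(gamma + u @ u / sigma2))

# Example: log-density at a zero disturbance vector of length 5.
print(log_f_u(np.zeros(5), gamma=8.0, sigma2=1.0))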

and σ² and γ are, respectively, the dispersion and the degrees of freedom parameters, neither of which is assumed to be known. We, however, assume at the outset that γ > 4. Let x_t', t = 1, ..., T, be the row vectors of the design matrix X. As an approach to the estimation of γ, we prove the relation

\gamma = \frac{2(2a - 3)}{a - 3},   (2.4)

where

a = \frac{E\left[ T^{-1} \sum_{t=1}^{T} u_t^4 \right]}{\left( E\left[ T^{-1} \sum_{t=1}^{T} u_t^2 \right] \right)^2}.   (2.5)

Notice that the identity (2.4) satisfies our assumption that γ > 4: since (2.4) may be rewritten as γ = 4 + 6/(a − 3), its right-hand side exceeds 4 whenever a > 3.
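As a quick numerical illustration of (2.4)-(2.5) (ours, not the paper's): each u_t is marginally a scaled univariate Student-t variate with γ degrees of freedom, so the moment ratio a can be approximated from simulated draws, and (2.4) should then return γ. The sketch is in Python with numpy; sample sizes and parameter values are arbitrary.

import numpy as np

def gamma_from_a(a):
    """Invert a = 3(gamma - 2)/(gamma - 4), i.e. apply (2.4)."""
    return 2.0 * (2.0 * a - 3.0) / (a - 3.0)

rng = np.random.default_rng(0)
gamma_true, sigma = 12.0, 2.0

# Each u_t is, marginally, sigma times a standard Student-t with gamma d.f.
u = sigma * rng.standard_t(df=gamma_true, size=2_000_000)

a_hat = np.mean(u**4) / np.mean(u**2) ** 2   # sample analogue of a in (2.5)
print(gamma_from_a(3 * (gamma_true - 2) / (gamma_true - 4)))  # exactly 12
print(gamma_from_a(a_hat))                   # near 12, up to simulation error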


To prove (2.4) we proceed as follows. As noted in Zellner (1976), the error vector u in (2.1) can be regarded as being randomly drawn from a multivariate normal distribution with a random standard deviation τ generated from the inverted gamma distribution with p.d.f. given by

g(\tau \mid \gamma, \sigma) = \frac{2}{\Gamma(\gamma/2)} \left( \frac{\gamma \sigma^2}{2} \right)^{\gamma/2} \tau^{-(\gamma+1)} \exp\left( -\frac{\gamma \sigma^2}{2\tau^2} \right), \qquad \gamma > 0, \; \sigma > 0 \text{ and } \tau > 0.   (2.6)

This concept follows from a well-known observation that the multivariate Student-t p.d.f. given in (2.2) is a member of the family of distributions with p.d.f. given by

f(u) = \int_0^{\infty} f_{MN}(u \mid \tau)\, g(\tau \mid \cdot)\, d\tau,   (2.7)

where

f_{MN}(u \mid \tau) = (2\pi\tau^2)^{-T/2} \exp\left( -\frac{u'u}{2\tau^2} \right), \qquad -\infty < u_i < \infty, \; i = 1, \dots, T,

is the conditional (on τ) p.d.f. of a multivariate normal distribution with mean vector 0 and variance-covariance matrix τ²I, and g(τ | ·), with 0 < τ < ∞, is a proper p.d.f. for τ. When g(τ | ·) is taken to be the p.d.f. given in (2.6), the integration in (2.7) produces (a marginal) multivariate Student-t p.d.f. for u, f(u | γ, σ²), given in (2.2). Thus we can regard that, given τ with p.d.f. (2.6), the error vector u has the multivariate normal distribution with mean vector 0 and variance-covariance matrix τ²I.

With the above observation in view, given τ with the p.d.f. (2.6), u_1, ..., u_T are independent normal random variables, each with mean zero and variance τ². Therefore E_τ(u_i²) = τ² and E_τ(u_i⁴) = 3τ⁴, i = 1, ..., T. Since τ has the p.d.f. given by (2.6), E(τ²) = γσ²/(γ − 2) and E(τ⁴) = γ²σ⁴/((γ − 2)(γ − 4)); see, for example, Zellner (1971, pp. 371-372). Therefore

E(u_i^2) = \frac{\gamma \sigma^2}{\gamma - 2} \quad \text{and} \quad E(u_i^4) = \frac{3\gamma^2 \sigma^4}{(\gamma - 2)(\gamma - 4)}, \qquad i = 1, \dots, T.   (2.8)
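The scale-mixture representation (2.6)-(2.7) also gives a convenient way to simulate from (2.2) and to check the moments (2.8) numerically. The following sketch (ours; Python with numpy, arbitrary parameter values) uses the equivalent fact that τ² = γσ²/χ²_γ has the inverted gamma law (2.6).

import numpy as np

rng = np.random.default_rng(1)
gamma, sigma, T, n_rep = 10.0, 1.5, 5, 1_000_000

chi2 = rng.chisquare(df=gamma, size=n_rep)
tau = sigma * np.sqrt(gamma / chi2)                 # tau has the p.d.f. (2.6)
u = tau[:, None] * rng.standard_normal((n_rep, T))  # u | tau ~ N(0, tau^2 I)

# Moment checks against (2.8).
print(np.mean(u[:, 0] ** 2), gamma * sigma**2 / (gamma - 2))
print(np.mean(u[:, 0] ** 4),
      3 * gamma**2 * sigma**4 / ((gamma - 2) * (gamma - 4)))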


Thus, from (2.8) we get the following. If x_t', t = 1, ..., T, are the row-vectors of X, then u_t = y_t − x_t'β, and if we define

b = E\left[ T^{-1} \sum_{t=1}^{T} (y_t - x_t'\beta)^2 \right] \quad \text{and} \quad c = E\left[ T^{-1} \sum_{t=1}^{T} (y_t - x_t'\beta)^4 \right],

then b = γ(γ − 2)^{-1}σ² and c = 3γ²(γ − 2)^{-1}(γ − 4)^{-1}σ⁴. Hence, with a = c/b², we get a = 3(γ − 2)/(γ − 4). Solving this last equation for γ we get (2.4).

From (2.4) it appears natural to estimate γ by replacing a in (2.4) by some reasonable estimator of a. Since the usual least squares estimator of β, given by

\hat{\beta} = (X'X)^{-1} X'y,   (2.9)

is the best linear unbiased estimator (BLUE) of β, it is reasonable to estimate a by

\hat{a} = \frac{T^{-1} \sum_{t=1}^{T} (y_t - x_t'\hat{\beta})^4}{\left[ T^{-1} \sum_{t=1}^{T} (y_t - x_t'\hat{\beta})^2 \right]^2},   (2.10)

or, more precisely, if and whenever possible, by

\hat{a} = \frac{k^{-1} \sum_{j=1}^{k} T_j^{-1} \sum_{i=1}^{T_j} (y_{ij} - x_{ij}'\hat{\beta}_j)^4}{\left[ k^{-1} \sum_{j=1}^{k} T_j^{-1} \sum_{i=1}^{T_j} (y_{ij} - x_{ij}'\hat{\beta}_j)^2 \right]^2},   (2.11)

where the T_j × p data sets (y_j, X_j), j = 1, ..., k, satisfy the regression model (2.1) and together constitute the data from k repeated observations of the experiments, with x_{ij}' being the ith row-vector of X_j, y_{ij} being the ith component of y_j, and

\hat{\beta}_j = (X_j'X_j)^{-1} X_j' y_j

being the BLUE of β in the jth observation. The estimator (2.11) is suggested in view of the expectation operators involved on the right-hand side of (2.5). Having given the estimator â of a, the degrees of freedom parameter γ, in view of (2.4), can now be estimated by

\hat{\gamma} = \frac{2(2\hat{a} - 3)}{\hat{a} - 3}.   (2.12)
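To make the route (2.9)-(2.12) concrete, here is a simulation sketch of the repeated-experiments version (2.11) (ours, not from the paper; Python with numpy, and the design of k = 400 samples of size T = 50 is arbitrary). Each sample draws its own τ from (2.6), which is what makes the pooled moment ratio informative about γ; within a single sample, the conditional normality noted above pushes the ratio (2.10) toward 3 instead, which is one reading of why (2.11) is labelled the more precise choice.

import numpy as np

rng = np.random.default_rng(2)
k, T, p = 400, 50, 3
gamma, sigma = 8.0, 1.0
beta = np.array([1.0, -0.5, 2.0])

num = den = 0.0
for _ in range(k):
    X = np.column_stack([np.ones(T), rng.standard_normal((T, p - 1))])
    tau = sigma * np.sqrt(gamma / rng.chisquare(gamma))   # tau_j from (2.6)
    y = X @ beta + tau * rng.standard_normal(T)           # sample j obeys (2.1)
    beta_hat_j = np.linalg.solve(X.T @ X, X.T @ y)        # BLUE in sample j
    r = y - X @ beta_hat_j
    num += np.mean(r**4) / k        # pooled fourth moment of residuals
    den += np.mean(r**2) / k        # pooled second moment of residuals

a_hat = num / den**2                                      # (2.11)
gamma_hat = 2 * (2 * a_hat - 3) / (a_hat - 3)             # (2.12)
print(a_hat, gamma_hat)             # population values: a = 4.5, gamma = 8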

It is known [see, for example, Singh (1987)] that the variance-covariance matrix of the BLUE β̂ is

V(\hat{\beta}) = E(\hat{\beta} - \beta)(\hat{\beta} - \beta)' = (X'X)^{-1} \gamma \sigma^2 / (\gamma - 2),   (2.13)

and the unbiased estimator of σ², in terms of γ, is

\hat{\sigma}^2(\gamma) = \frac{(\gamma - 2)\, \hat{u}'\hat{u}}{\gamma (T - p)},   (2.14)

where û = y − Xβ̂ is the vector of residuals. If M = I − X(X'X)^{-1}X', then M is an idempotent matrix of rank (T − p), û = My = Mu and û'û = u'Mu. Since, given τ, u = (y − Xβ) has a multivariate normal distribution with mean vector 0 and variance-covariance matrix τ²I, given τ, (û'û/τ²) has a χ²_{(T−p)} distribution. Therefore, since E(τ²) = γ(γ − 2)^{-1}σ² and E(τ⁴) = γ²(γ − 2)^{-1}(γ − 4)^{-1}σ⁴, it follows that

E(\hat{u}'\hat{u}) = \frac{(T - p) \gamma \sigma^2}{\gamma - 2} \quad \text{and} \quad E(\hat{u}'\hat{u})^2 = \frac{(T - p)(T - p + 2) \gamma^2 \sigma^4}{(\gamma - 2)(\gamma - 4)}.   (2.15)

Now from (2.15) it follows that the minimum mean square error (MMSE) estimator of σ² in the class of all estimators of the type α û'û, as also noted in Zellner (1976), is given by

\sigma^{*2}(\gamma) = \frac{(\gamma - 4)\, \hat{u}'\hat{u}}{\gamma (T - p + 2)}.   (2.16)

The variances of the unbiased estimator σ̂²(γ) and the MMSE estimator σ*²(γ) of σ² are, from (2.15), now given by

V(\hat{\sigma}^2(\gamma)) = \frac{2(T - p + \gamma - 2)}{(T - p)(\gamma - 4)}\, \sigma^4   (2.17)

and

V(\sigma^{*2}(\gamma)) = \frac{2(T - p)(\gamma - 4)(T - p + \gamma - 2)}{(\gamma - 2)^2 (T - p + 2)^2}\, \sigma^4,   (2.18)

respectively.
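For reference, the known-γ formulas (2.14) and (2.16)-(2.18) are straightforward to compute. The sketch below (ours; Python, with illustrative inputs rss, T, p, gamma) packages them as functions, plugging an estimate of σ² into the σ⁴ terms of the variance formulas, in the spirit of (2.22) below.

import numpy as np

def sigma2_unbiased(rss, T, p, gamma):          # (2.14)
    return (gamma - 2) * rss / (gamma * (T - p))

def sigma2_mmse(rss, T, p, gamma):              # (2.16)
    return (gamma - 4) * rss / (gamma * (T - p + 2))

def var_unbiased(sigma2, T, p, gamma):          # (2.17), sigma^4 estimated
    return 2 * sigma2**2 * (T - p + gamma - 2) / ((T - p) * (gamma - 4))

def var_mmse(sigma2, T, p, gamma):              # (2.18), sigma^4 estimated
    return (2 * (T - p) * (gamma - 4) * (T - p + gamma - 2) * sigma2**2
            / ((gamma - 2) ** 2 * (T - p + 2) ** 2))

T, p, gamma, rss = 100, 4, 8.0, 250.0           # rss stands in for u-hat'u-hat
s2 = sigma2_unbiased(rss, T, p, gamma)
print(s2, sigma2_mmse(rss, T, p, gamma))
print(var_unbiased(s2, T, p, gamma), var_mmse(s2, T, p, gamma))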

Since σ̂²(γ) is an unbiased estimator for σ², from (2.13) and (2.14), V(β̂) can be unbiasedly estimated by

\hat{V}(\hat{\beta}) = (X'X)^{-1}\, \hat{u}'\hat{u} / (T - p).   (2.19)

From (2.4) we have γ(γ − 2)^{-1} = (2a − 3)/a and (γ − 4)γ^{-1} = 3/(2a − 3), which can now be estimated, respectively, by (2â − 3)/â and 3/(2â − 3). Therefore, from (2.14) and (2.16), σ̂²(γ) and σ*²(γ), when γ is unknown, can be estimated, respectively, by

\hat{\sigma}^2 = \frac{\hat{a}\, \hat{u}'\hat{u}}{(2\hat{a} - 3)(T - p)}   (2.20)

and

\sigma^{*2} = \frac{3\, \hat{u}'\hat{u}}{(2\hat{a} - 3)(T - p + 2)},   (2.21)

whereas V(σ̂²(γ)) can be approximated by

\hat{V}(\hat{\sigma}^2) = \frac{2 (\sigma^{**})^2 (T - p + \hat{\gamma} - 2)}{(T - p)(\hat{\gamma} - 4)},   (2.22)

where σ** may be taken as σ̂² given in (2.20) or as σ*² given in (2.21). Similarly, when γ is unknown, an approximation V̂(σ*²) to V(σ*²(γ)) can be obtained by replacing γ and σ⁴ in (2.18) by γ̂ and (σ**)², respectively. Values of the estimators V̂(β̂), σ̂², σ*², V̂(σ̂²) and V̂(σ*²) of V(β̂), σ̂²(γ), σ*²(γ), V(σ̂²(γ)) and V(σ*²(γ)), respectively, when the degrees of freedom parameter γ is unknown, can now be easily computed from the data for applications.
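As a summary sketch (ours, not the paper's; Python with numpy), the feasible quantities of this section, namely V̂(β̂) from (2.19), γ̂ from (2.12), σ̂² and σ*² from (2.20)-(2.21), and the approximation (2.22) with σ** taken as (2.20), can be computed from a data set (y, X) and an estimate â obtained via (2.10) or (2.11):

import numpy as np

def feasible_estimates(X, y, a_hat):
    """Feasible estimators of this section from (y, X) and a-hat."""
    T, p = X.shape
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)               # (2.9)
    u_hat = y - X @ beta_hat
    rss = u_hat @ u_hat                                        # u-hat'u-hat
    gamma_hat = 2 * (2 * a_hat - 3) / (a_hat - 3)              # (2.12)
    V_beta_hat = np.linalg.inv(X.T @ X) * rss / (T - p)        # (2.19)
    s2_unbiased = a_hat * rss / ((2 * a_hat - 3) * (T - p))    # (2.20)
    s2_mmse = 3 * rss / ((2 * a_hat - 3) * (T - p + 2))        # (2.21)
    V_s2 = (2 * s2_unbiased**2 * (T - p + gamma_hat - 2)
            / ((T - p) * (gamma_hat - 4)))                     # (2.22)
    return beta_hat, V_beta_hat, s2_unbiased, s2_mmse, gamma_hat, V_s2

# Illustration on simulated data; a_hat = 4.5 corresponds to gamma_hat = 8.
rng = np.random.default_rng(3)
X = np.column_stack([np.ones(200), rng.standard_normal((200, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.standard_t(8.0, size=200)
print(feasible_estimates(X, y, a_hat=4.5))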

Remark. Instead of using the estimator γ̂ in (2.12) via (2.10), or if (2.11) is not possible, one may get a better estimator of γ by employing an iterative method as follows. First, get the first approximation, say γ̂₁, to γ by (2.12) using â in (2.10), and employ it to get the first approximation, say σ̂₁², to the unbiased estimator σ̂²(γ) given in (2.14). Then use γ̂₁ and σ̂₁² as the degrees of freedom parameter and the dispersion parameter, respectively, to generate a vector of observations û₁ = (û₁₁, ..., û₁T)' on u from the p.d.f. of the error term and to obtain a new data set ŷ₁ (= Xβ̂₁ + û₁). Now use this new data set to get a second approximation β̂₂ = (X'X)^{-1}X'ŷ₁ to β (where β̂₁ = β̂). Use ŷ₁ and β̂₂ (instead of y and β̂) in (2.10) to get, via (2.12), a second approximation, say γ̂₂, to γ. Continue this iteration process until γ̂₁, γ̂₂, ... converges (approximately) to a value. Take this value as the final estimate of γ.
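A rough coding sketch of this iteration follows (ours; Python with numpy). One detail the remark leaves open is exactly how the simulated disturbance vector is to be drawn "from the p.d.f. of the error term"; below, the components are drawn as independent scaled univariate Student-t variates (the marginal law under (2.2)), which keeps the residual moment ratio (2.10) informative about γ. A small guard keeps â above 3 so that γ̂ stays finite and above 4, as assumed; treat this as one possible reading, not the paper's definitive algorithm.

import numpy as np

def gamma_from_sample(resid):
    """a-hat from (2.10) mapped to gamma-hat by (2.12), with a guard."""
    a = np.mean(resid**4) / np.mean(resid**2) ** 2      # (2.10)
    a = max(a, 3.2)            # guard: keeps gamma-hat finite and > 4
    return 2 * (2 * a - 3) / (a - 3)                    # (2.12)

def iterate_gamma(X, y, n_iter=20, tol=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    T, p = X.shape
    proj = np.linalg.solve(X.T @ X, X.T)                # (X'X)^{-1}X'
    beta_hat = proj @ y                                 # beta-hat_1 = OLS
    gamma = gamma_from_sample(y - X @ beta_hat)         # gamma-hat_1
    for _ in range(n_iter):
        resid = y - X @ beta_hat
        s2 = (gamma - 2) * (resid @ resid) / (gamma * (T - p))   # (2.14)
        # simulated disturbances: componentwise scaled Student-t (our reading)
        u_sim = np.sqrt(s2) * rng.standard_t(gamma, size=T)
        y_sim = X @ beta_hat + u_sim                    # new data set y-hat_1
        beta_hat = proj @ y_sim                         # next beta approximation
        gamma_new = gamma_from_sample(y_sim - X @ beta_hat)
        if abs(gamma_new - gamma) < tol:
            return gamma_new
        gamma = gamma_new
    return gamma

rng0 = np.random.default_rng(7)
X = np.column_stack([np.ones(300), rng0.standard_normal((300, 2))])
y = X @ np.array([0.5, 1.0, -2.0]) + rng0.standard_t(8.0, size=300)
print(iterate_gamma(X, y))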

References

Gnanadesikan, R., 1977, Methods for statistical data analysis of multivariate observations (Wiley, New York).
King, M.L., 1979, Some aspects of statistical inference in the linear regression model, Ph.D. thesis (Department of Economics, University of Canterbury, Christchurch).
King, M.L., 1980, Robust tests for spherical symmetry and their application to least squares regression, Annals of Statistics 8, 1265-1271.


Singh, Radhey S., 1987, A family of improved estimators in linear regression models with errors having multivariate Student-t distribution, Working paper (Department of Econometrics and Operations Research, Monash University, Clayton).
Ullah, Aman and Victoria Zinde-Walsh, 1984, On the robustness of LM, LR and W tests in regression models, Econometrica 52, 1055-1066.
Zellner, A., 1971, An introduction to Bayesian inference in econometrics (Wiley, New York).
Zellner, A., 1976, Bayesian and non-Bayesian analysis of the regression model with multivariate Student-t error terms, Journal of the American Statistical Association 71, 400-405.