Economics Letters 4 (1979) 251-255. © North-Holland Publishing Company

A SIMPLE LAGRANGE MULTIPLIER TEST FOR LOGNORMAL REGRESSION

Dale J. POIRIER
University of Toronto, Toronto, Ontario M5S 1A1, Canada

Paul A. RUUD
Massachusetts Institute of Technology, Cambridge, MA 02139, USA

Received 5 December 1979

In the spirit of diagnostic testing, this study proposes a simple Lagrange multiplier test for the appropriateness of the lognormal regression model. The alternative hypothesis covers the class of Box-Cox truncated regression models considered by Poirier (1978). The simplicity of the test arises from the absence of any truncation under the null hypothesis.

Lagrange multiplier (LM) tests have recently been recognized as valuable diagnostic tools because they only require estimation of the model under the null hypothesis. ¹ Since the null hypothesis of a diagnostic test often represents a simple prototypical case which is particularly easy to estimate, LM tests offer a computational advantage over tests that require estimation under the alternative hypothesis. Regression applications such as those developed by Engle (1979) and Godfrey (1978) reflect this advantage. In this note, a simple diagnostic test will be developed for the appropriateness of the lognormal regression model. Under the null hypothesis, the positive dependent variable $y_i$ is assumed to follow a lognormal regression

$$\ln y_i = x_i \beta + \varepsilon_i, \qquad i = 1, 2, \ldots, n,$$

where $x_i$ is a $k$-dimensional row vector of fixed regressors including an intercept, $\beta$ is a $k$-dimensional column vector of unknown coefficients, and $\varepsilon_i$ is a stochastic disturbance term. The usual conditions of the simple linear model are assumed, so that the $\varepsilon_i$ are i.i.d. $N(0, \sigma^2)$ $(i = 1, 2, \ldots, n)$ and

$$\lim_{n \to \infty} n^{-1}(X'X) = Q,$$

where $X$ denotes the $n \times k$ matrix of regressors with rows $x_i$ and $Q$ is a positive definite matrix.

¹ For example, see Breusch and Pagan (1978) and Engle (1979).
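To fix ideas, the null model is easy to set up numerically. The sketch below is purely illustrative (the simulated design, sample size, and variable names are my own assumptions, not part of the paper); it draws data from a lognormal regression and recovers the restricted estimates by OLS on the logged dependent variable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design: n observations, k = 2 regressors (intercept plus one covariate).
n, beta, sigma = 200, np.array([1.0, 0.5]), 0.3
X = np.column_stack([np.ones(n), rng.normal(size=n)])

# Null model: ln y_i = x_i beta + eps_i with eps_i ~ N(0, sigma^2), so y_i is lognormal.
y = np.exp(X @ beta + sigma * rng.normal(size=n))

# Estimation under the null is just OLS on z = ln y.
z = np.log(y)
beta_hat, *_ = np.linalg.lstsq(X, z, rcond=None)
sigma2_hat = np.mean((z - X @ beta_hat) ** 2)  # ML estimator of sigma^2 (divisor n)
print(beta_hat, sigma2_hat)
```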

The alternative hypothesis covers the class of Box-Cox truncated distributions defined over the positive reals [see Poirier (1978)]. ² Briefly, this class can be defined as follows. Let $\phi(\cdot)$ and $\Phi(\cdot)$ denote the standard normal density and distribution functions, respectively, and let

$$z_i = T(y_i, \lambda) = \begin{cases} \lambda^{-1}(y_i^{\lambda} - 1), & \lambda \neq 0, \\ \ln y_i, & \lambda = 0, \end{cases} \qquad i = 1, 2, \ldots, n,$$

denote the Box-Cox transformation of the sample of positive random variables $y_i$ $(i = 1, 2, \ldots, n)$ by an unknown parameter $\lambda$. Because the range of the Box-Cox transformation is the interval $[T(0, \lambda), T(\infty, \lambda)]$, the density of $z_i$ is the truncated normal density

$$f(z_i) = [\Phi(b_i) - \Phi(a_i)]^{-1} \sigma^{-1} \phi[\sigma^{-1}(z_i - x_i \beta)], \qquad T(0, \lambda) < z_i < T(\infty, \lambda),$$

where $a_i = \sigma^{-1}[T(0, \lambda) - x_i \beta]$ and $b_i = \sigma^{-1}[T(\infty, \lambda) - x_i \beta]$. Therefore the density of $y_i$ is

$$g(y_i) = y_i^{\lambda - 1} f(z_i), \qquad 0 < y_i < \infty. \tag{1}$$
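As a concrete reading of definition (1), the following sketch codes the Box-Cox transform and the implied density of $y$; the function names and the use of scipy.stats.norm are my own choices rather than anything prescribed by the paper.

```python
import numpy as np
from scipy.stats import norm

def box_cox(y, lam):
    """Box-Cox transform T(y, lambda); the lambda = 0 branch is the natural logarithm."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0.0 else (y ** lam - 1.0) / lam

def density_y(y, mu, sigma, lam):
    """Density g(y) of eq. (1): truncated normal density of z = T(y, lam) times the
    Jacobian y**(lam - 1); mu is the regression mean x_i beta for the observation."""
    z = box_cox(y, lam)
    # Truncation points T(0, lam) and T(inf, lam); both are infinite when lam = 0,
    # so the correction term equals one and the lognormal density is recovered.
    lo = -1.0 / lam if lam > 0 else -np.inf
    hi = -1.0 / lam if lam < 0 else np.inf
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    return y ** (lam - 1.0) * norm.pdf((z - mu) / sigma) / (sigma * (norm.cdf(b) - norm.cdf(a)))

# With lam = 0 this is the ordinary lognormal density.
print(density_y(2.0, mu=0.5, sigma=0.3, lam=0.0))
```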
Finally, assuming independent sampling, the loglikelihood function corresponding to (1) is given by

$$L(\beta, \sigma^2, \lambda) = (\lambda - 1) \sum_{i=1}^{n} \ln y_i - \sum_{i=1}^{n} \ln[\Phi(b_i) - \Phi(a_i)] - \frac{n}{2} \ln(2\pi\sigma^2) - \frac{1}{2} \sigma^{-2} (z - X\beta)'(z - X\beta), \tag{2}$$

where $z \equiv [z_1, z_2, \ldots, z_n]'$.

² For a discussion of the case in which the alternative hypothesis consists only of the usual regression model in levels as opposed to logarithms, see Evans and Deaton (1977).
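A direct, unoptimized transcription of (2) is sketched below (numpy/scipy based, with hypothetical argument names); it is intended only to make the notation concrete, not to reproduce the authors' code.

```python
import numpy as np
from scipy.stats import norm

def loglik(beta, sigma2, lam, y, X):
    """Loglikelihood (2) of the Box-Cox truncated regression model."""
    sigma = np.sqrt(sigma2)
    z = np.log(y) if lam == 0.0 else (y ** lam - 1.0) / lam   # Box-Cox transform
    resid = z - X @ beta
    # Truncation bounds T(0, lam) and T(inf, lam); both are infinite when lam = 0,
    # so the truncation term vanishes in the lognormal case.
    lo = -1.0 / lam if lam > 0 else -np.inf
    hi = -1.0 / lam if lam < 0 else np.inf
    a = (lo - X @ beta) / sigma
    b = (hi - X @ beta) / sigma
    n = y.size
    return ((lam - 1.0) * np.log(y).sum()
            - np.log(norm.cdf(b) - norm.cdf(a)).sum()
            - 0.5 * n * np.log(2.0 * np.pi * sigma2)
            - 0.5 * (resid @ resid) / sigma2)
```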


Computation of the first- and second-order derivatives of the loglikelihood with respect to all unknown parameters is burdensome. However, in the lognormal regression case where $\lambda = 0$, substantial simplification occurs since it represents the only non-truncated version of (1). The first derivatives are

$$\frac{\partial L}{\partial \beta} = \sigma^{-2} X'(z - X\beta), \tag{3}$$

$$\frac{\partial L}{\partial \sigma^2} = \frac{1}{2} \left[ -n\sigma^{-2} + \sigma^{-4}(z - X\beta)'(z - X\beta) \right], \tag{4}$$

$$\frac{\partial L}{\partial \lambda} = -\frac{1}{2} \sigma^{-2} (z^{(2)})'(z - X\beta) + z'1, \tag{5}$$

and the information matrix is

$$J = \sigma^{-2} \begin{bmatrix} X'X & 0 & -\tfrac{1}{2} X'(\mu^{(2)} + \sigma^2 1) \\ 0 & \tfrac{n}{2\sigma^2} & -1'\mu \\ -\tfrac{1}{2}(\mu^{(2)} + \sigma^2 1)'X & -1'\mu & \tfrac{1}{4}(\mu^{(2)})'\mu^{(2)} + \tfrac{5}{2}\sigma^2 \mu'\mu + \tfrac{7}{4} n\sigma^4 \end{bmatrix}, \tag{6}$$

where $z^{(2)} = [z_1^2, z_2^2, \ldots, z_n^2]'$, $1 = [1, 1, \ldots, 1]'$, $\mu = [\mu_1, \mu_2, \ldots, \mu_n]' = X\beta$, and $\mu^{(2)} = [\mu_1^2, \mu_2^2, \ldots, \mu_n^2]'$.
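For reference, the three score components (3)-(5) under $\lambda = 0$ can be evaluated in a few lines; the sketch below is my own illustration (numpy-based, with hypothetical names), not code from the paper.

```python
import numpy as np

def score_at_lambda0(beta, sigma2, y, X):
    """First derivatives (3)-(5) of the loglikelihood, evaluated at lambda = 0,
    where z = ln y is the logged dependent variable."""
    z = np.log(y)
    n = z.size
    resid = z - X @ beta
    d_beta = X.T @ resid / sigma2                                   # eq. (3)
    d_sigma2 = 0.5 * (-n / sigma2 + (resid @ resid) / sigma2 ** 2)  # eq. (4)
    d_lambda = -0.5 * ((z ** 2) @ resid) / sigma2 + z.sum()         # eq. (5)
    return d_beta, d_sigma2, d_lambda
```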

When the model is estimated under the null hypothesis that $\lambda = 0$, the first-order derivatives with respect to $\beta$ and $\sigma^2$ are set equal to zero, yielding the usual OLS estimators

$$\hat\beta = (X'X)^{-1} X'z, \qquad \hat\sigma^2 = n^{-1} (z - X\hat\beta)'(z - X\hat\beta),$$

and the LM test statistic for testing $\lambda = 0$ has a convenient form. In general, the LM (or efficient score) test statistic is given by

$$W = \left( \frac{\partial L}{\partial \theta} \right)' J^{-1} \left( \frac{\partial L}{\partial \theta} \right), \tag{7}$$

where $\theta = [\beta', \sigma^2, \lambda]'$ and both $\partial L/\partial \theta$ and the information matrix $J$ are evaluated at the maximum of the likelihood conditional on the null hypothesis.
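In code, (7) is simply a quadratic form in the score; a minimal helper (assuming numpy, with my own naming) might read as follows.

```python
import numpy as np

def lm_statistic(score, info):
    """Generic LM (efficient score) statistic of eq. (7): score' J^{-1} score, with the
    score and the information matrix J evaluated at the restricted (null) estimates."""
    score = np.asarray(score, dtype=float)
    return float(score @ np.linalg.solve(np.asarray(info, dtype=float), score))
```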

In the present case, it is straightforward to show that (7) reduces to

$$W = \frac{\hat\sigma^2 \left[ \hat\sigma^{-2} (z^{(2)})'(z - \hat z) - 2n\bar z \right]^2}{(\hat z^{(2)})' M (\hat z^{(2)}) + 8\hat\sigma^2 (\hat z - \bar z 1)'(\hat z - \bar z 1) + 6n\hat\sigma^4}, \tag{8}$$

where $\bar z = z'1/n$, $\hat z = [\hat z_1, \hat z_2, \ldots, \hat z_n]' = X\hat\beta$, $\hat z^{(2)} = [\hat z_1^2, \hat z_2^2, \ldots, \hat z_n^2]'$, and $M = I - X(X'X)^{-1}X'$. Since the first term in the denominator of (8) is simply the error sum of squares from a regression of $\hat z^{(2)}$ on $X$, and the second term is just the regression sum of squares from the regression under the null hypothesis, test statistic (8) is straightforward to compute. The first-order derivative with respect to $\lambda$ requires only the mean of the log dependent variable and the inner product of the squares of the log dependent variable with the residuals from the null regression. Asymptotically, (8) has a chi-square distribution with one degree of freedom. ³

³ Andrews (1971) proposed a test involving the Box-Cox parameter $\lambda$ that has a known finite sample distribution. Due to truncation, Andrews' test is exact only in the case $\lambda = 0$. No asymptotic optimality can, however, be claimed for Andrews' test even in this case.

Another interesting special case of (1) corresponds to $\lambda = 1$, the ordinary linear model. Unfortunately, the truncations do not disappear as they do in the lognormal case, so there are few simplifications. Thus an LM test of $\lambda = 1$ offers few computational advantages over its asymptotically equivalent competitors: the likelihood ratio test and the Wald test.

In summary, this note has proposed a simple diagnostic test for the lognormal regression model. Test statistic (8) is readily computed from the standard output of two ordinary regressions and one vector inner product. In the spirit of diagnostic checking, this test should be regularly employed whenever a lognormal regression model is postulated. The simplicity of the Lagrange multiplier test statistic arises from the absence of any truncation under the null hypothesis.
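To make the "two regressions and one inner product" recipe explicit, here is a sketch of (8) as it might be coded; the function name, the use of numpy, and the least-squares routine are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def lm_lognormal_test(y, X):
    """LM statistic (8) for H0: lambda = 0 (lognormal regression).

    y : positive dependent variable, shape (n,)
    X : regressor matrix including an intercept, shape (n, k)
    Returns W, asymptotically chi-square with one degree of freedom under H0.
    """
    n = y.size
    z = np.log(y)

    # Regression 1: the null (lognormal) regression of z on X.
    beta_hat, *_ = np.linalg.lstsq(X, z, rcond=None)
    z_hat = X @ beta_hat
    resid = z - z_hat
    sigma2_hat = (resid @ resid) / n      # ML estimator of sigma^2
    z_bar = z.mean()

    # Numerator: built from the mean of ln y and the inner product of the
    # squared log dependent variable with the null residuals.
    num = sigma2_hat * ((z ** 2) @ resid / sigma2_hat - 2.0 * n * z_bar) ** 2

    # Regression 2: z_hat^2 on X.  Its error sum of squares is the first term of
    # the denominator; the second term is the regression sum of squares from the
    # null regression; the third is 6 n sigma_hat^4.
    g, *_ = np.linalg.lstsq(X, z_hat ** 2, rcond=None)
    ess = np.sum((z_hat ** 2 - X @ g) ** 2)
    rss = np.sum((z_hat - z_bar) ** 2)
    denom = ess + 8.0 * sigma2_hat * rss + 6.0 * n * sigma2_hat ** 2

    return num / denom

# A value of W above 3.84 rejects lognormality at the 5 per cent level (chi-square, 1 df).
```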

References

Andrews, D.F., 1971, A note on the selection of data transformations, Biometrika 58, 249-254.
Breusch, T.S. and A.R. Pagan, 1978, The Lagrange multiplier test and its application to model specification in econometrics, CORE Discussion Paper no. 7820 (Université Catholique de Louvain, Louvain).
Engle, R.F., 1979, A general approach to the construction of model diagnostics based upon the Lagrange multiplier principle, Warwick Economic Research Paper no. 156 (University of Warwick, Coventry).
Evans, G.A. and A.S. Deaton, 1977, Testing linear versus logarithmic regression models, Department of Economics Discussion Paper no. 61 (University of Bristol, Bristol).
Godfrey, L.G., 1978, Testing for multiplicative heteroskedasticity, Journal of Econometrics 8, 227-236.
Milliken, G.A. and F.A. Graybill, 1970, Extensions of the general linear hypothesis, Journal of the American Statistical Association 65, 797-807.
Poirier, D.J., 1978, The use of the Box-Cox transformation in limited dependent variable models, Journal of the American Statistical Association 73, 284-287.