ARTICLE IN PRESS
Statistics & Probability Letters 78 (2008) 402–404 www.elsevier.com/locate/stapro
Generalized least squares transformation and estimation with autoregressive error

Dimitrios V. Vougas
Department of Economics, School of Business and Economics, Richard Price Building, Swansea University, Singleton Park, Swansea SA2 8PP, UK

Received 6 December 2005; received in revised form 6 March 2007; accepted 25 July 2007; available online 14 September 2007
Abstract

Approximations of the usual GLS transformation matrices are proposed for estimation with AR error that remove boundary discontinuities. The proposed method avoids constrained optimization or rules of thumb that unnecessarily force estimated parameters into the interior of the parameter space. © 2007 Elsevier B.V. All rights reserved.

JEL classification: C12; C13; C22

Keywords: Gaussian AR model; GLS transformation; NLS and QML estimation; LR test
1. Introduction

Nonlinear least squares (NLS) and quasi maximum likelihood (QML) estimation of models with autoregressive (AR) error requires implementing restrictions via (numerically expensive) constrained optimization. For stationary AR(1) error with coefficient ρ, |ρ| < 1, a starting error with variance proportional to (1 − ρ²)^{-1} is assumed. This introduces the Prais and Winsten (1954) transformation; see also Gurland (1954) and Kadiyala (1968). The restriction |ρ| < 1 must be implemented to avoid numerical problems, see Hamilton (1994, pp. 118–119), and |ρ| ≥ 1 is prohibited. Hence, unit root inference is prevented in this context. This paper suitably modifies the usual GLS transformation matrix and removes the discontinuity at |ρ| ≥ 1, avoiding restrictions and allowing for unit root inference. Note that the approach of Andrews (1999) may not be as successful as the proposed approach. The paper also applies a similar approach to estimation with AR(2) error. The paper is organized as follows: Section 2 presents the new GLS transformations and the resulting NLS and QML estimation methods for a regression with AR(1) or AR(2) error, also exemplifying LR (unit root) testing. Finally, Section 3 concludes.
Tel.: +44 1792 602102; fax: +44 1792 295872.
E-mail address: [email protected]
URL: http://www.swan.ac.uk/economics/staff/dv.htm

0167-7152/$ - see front matter © 2007 Elsevier B.V. All rights reserved.
doi:10.1016/j.spl.2007.07.025
2. Transformations and NLS and QML estimation

Consider the linear regression (generating model)

y = Xβ + u,   (1)

where y and u are T × 1, X is T × k (k < T) and β is a k × 1 parameter vector. The error u is AR(1), generated via

Pu = ε,   (2)

where ε ~ N(0, σ² I_T) (I_T is the T × T identity matrix). The GLS transformation matrix P is defined by P[1, 1] = κ, P[j, j] = 1 and P[j, j−1] = −ρ for j = 2, 3, …, T, and 0 elsewhere. A specific value for κ implies a corresponding assumption for u₁; see Dufour (1990, p. 475) and Dufour and King (1991, p. 116). Setting κ = 0 ignores the first observation, while κ = 1 is the assumption of Berenblut and Webb (1973). In general, u₁ ~ N(0, σ²κ^{-2}) and is independent of ε₂, …, ε_T. Under stationarity, |ρ| < 1, and for identification, it is assumed that κ = (1 − ρ²)^{1/2}. In this case, u ~ N(0, σ²Σ(ρ)) with Σ(ρ) = (P′P)^{-1}. Still under stationarity, NLS minimizes

S(β, ρ) = u′P′Pu,   (3)
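As a numerical illustration of the transformation just defined, the following sketch (Python/NumPy; all function and variable names are ours, not the paper's) builds P under the stationary identification choice κ = (1 − ρ²)^{1/2} and checks that, with σ² = 1, Σ(ρ) = (P′P)^{-1} reproduces the stationary AR(1) covariance:

```python
import numpy as np

def gls_transform_ar1(T, rho, kappa):
    """GLS transformation P: P[1,1] = kappa, P[j,j] = 1, P[j,j-1] = -rho (0-based indexing here)."""
    P = np.eye(T)
    P[0, 0] = kappa
    for j in range(1, T):
        P[j, j - 1] = -rho
    return P

T, rho = 5, 0.6
kappa = np.sqrt(1 - rho ** 2)        # stationary identification choice kappa = (1 - rho^2)^(1/2)
P = gls_transform_ar1(T, rho, kappa)

# With sigma^2 = 1, Sigma(rho) = (P'P)^{-1} is the stationary AR(1) covariance matrix
Sigma = np.linalg.inv(P.T @ P)
ar1_cov = np.array([[rho ** abs(i - j) for j in range(T)] for i in range(T)]) / (1 - rho ** 2)
print(np.allclose(Sigma, ar1_cov))   # True
```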
while QML maximizes the (concentrated, with a constant ignored) log-likelihood

ℓ(β, ρ) = −(T/2) ln S(β, ρ) + (1/2) ln(1 − ρ²),   (4)

see Judge et al. (1985, Chapter 8). Beach and MacKinnon (1978a) introduce an iterative algorithm to derive the QML estimators. From Eq. (4), ℓ(β, ρ) is not defined for |ρ| ≥ 1. Constrained maximization is required, that is, imposing |ρ| < 1. Also, NLS and QML estimators differ in general. To avoid constrained optimization and incorporate stationarity asymptotically, one defines and employs κ as a function of T and ρ,

κ ≡ κ(T, ρ) := { Σ_{s=0}^{T−1} ρ^{2s} }^{−1/2} = { (1 − ρ^{2T})/(1 − ρ²) }^{−1/2} > 0 for all ρ,   (5)

in line with Gurland (1954, p. 221). For |ρ| < 1, κ(T, ρ) → (1 − ρ²)^{1/2} as T → ∞. Define P* equal to P for all elements except P*[1, 1] = κ(T, ρ). It is assumed that P*u = ε and u ~ N(0, σ²Σ*(ρ)) with Σ*(ρ) = (P*′P*)^{-1}. The proposed NLS now minimizes

S*(β, ρ) = u′P*′P*u,   (6)
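The key property of κ(T, ρ) is that it is positive and finite for every ρ, so the transformed objective has no boundary discontinuity at |ρ| = 1. A minimal numerical check (Python sketch, illustrative only; the function name is ours):

```python
def kappa(T, rho):
    """kappa(T, rho) = (sum_{s=0}^{T-1} rho^{2s})^{-1/2}: positive and finite for every rho."""
    return sum(rho ** (2 * s) for s in range(T)) ** -0.5

T = 400
print(abs(kappa(T, 0.6) - (1 - 0.6 ** 2) ** 0.5) < 1e-8)  # True: near the stationary limit (1 - rho^2)^(1/2)
print(abs(kappa(T, 1.0) - T ** -0.5) < 1e-12)             # True: at |rho| = 1 the sum is T, so kappa = T^{-1/2}
print(kappa(T, 1.05) > 0)                                 # True: explosive rho is also allowed
```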
while the new QML maximizes

ℓ*(β, ρ) = −(T/2) ln S*(β, ρ) + (1/2) ln κ(T, ρ)².   (7)

Note that any value for ρ is allowed, including |ρ| ≥ 1. For |ρ| < 1 as T → ∞, u₁ →_d N(0, σ²/(1 − ρ²)) (the original assumption under stationarity). For |ρ| = 1, it holds that κ(T, ρ) = T^{−1/2}. The resulting NLS and QML estimators also differ. Equivalence of S* and ℓ* does not apply, and LR tests based on S* differ from LR tests based on ℓ*. Let β̄, ρ̄ and β̂, ρ̂ denote restricted (under a null) and unrestricted estimates, respectively, from NLS estimation, while β̌, ρ̌ and β̃, ρ̃ denote restricted (under a null) and unrestricted estimates, respectively, from QML estimation. The corresponding LR tests are

LR_NLS = T ln( S*(β̄, ρ̄)/S*(β̂, ρ̂) ) ≈ T ( S*(β̄, ρ̄) − S*(β̂, ρ̂) ) / S*(β̂, ρ̂)   (8)

and

LR_QML = 2( ℓ*(β̃, ρ̃) − ℓ*(β̌, ρ̌) ).   (9)
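To make the unrestricted NLS procedure concrete, here is a hedged Python/NumPy sketch: β is concentrated out by OLS on P*(ρ)-transformed data, the profile objective S*(ρ) is minimized over a grid that freely crosses |ρ| = 1, and an LR statistic compares a unit-root-restricted fit with the unrestricted one. The simulated data-generating process and the grid search are our illustrative choices, not the paper's:

```python
import numpy as np

def p_star(T, rho):
    """Approximate GLS transformation P*: as P, but with the (1,1) element set to kappa(T, rho)."""
    P = np.eye(T)
    P[0, 0] = sum(rho ** (2 * s) for s in range(T)) ** -0.5
    for j in range(1, T):
        P[j, j - 1] = -rho
    return P

def s_star(y, X, rho):
    """Profile objective S*(rho): beta concentrated out by OLS on P*-transformed data."""
    P = p_star(len(y), rho)
    yt, Xt = P @ y, P @ X
    beta = np.linalg.lstsq(Xt, yt, rcond=None)[0]
    resid = yt - Xt @ beta
    return resid @ resid

# Illustrative simulated data (our choice of DGP, not from the paper)
rng = np.random.default_rng(0)
T = 200
X = np.column_stack([np.ones(T), rng.standard_normal(T)])
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.8 * u[t - 1] + rng.standard_normal()  # AR(1) error with rho = 0.8
y = X @ np.array([1.0, 2.0]) + u

# Unrestricted NLS: minimize S*(rho) over a grid that freely includes |rho| >= 1
grid = np.append(np.arange(-1.2, 1.21, 0.01), 1.0)
S_unres = min(s_star(y, X, r) for r in grid)
S_res = s_star(y, X, 1.0)            # restricted under the unit root null rho = 1
LR_nls = T * np.log(S_res / S_unres)
print(LR_nls >= 0)                   # True: the restricted fit cannot beat the unrestricted one
```

In practice a scalar optimizer would replace the grid search; the point of the sketch is that no value of ρ needs to be excluded.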
Estimation of Eq. (1) with AR(2) error follows similar lines. In this case, u_t is generated by u_t = ρ₁u_{t−1} + ρ₂u_{t−2} + ε_t, or

Qu = ε.   (10)
The first two autocorrelations, θ₁ and θ₂, are θ₁ = ρ₁/(1 − ρ₂) and θ₂ = ρ₂ + ρ₁θ₁, respectively. Under stationarity, |ρ₂| < 1 and |θ₁| < 1; the last part follows from ρ₁ < 1 − ρ₂ and −ρ₁ < 1 − ρ₂. The transformation matrix Q is defined by Q[1, 1] = (1 − ρ₂²)^{1/2}(1 − θ₁²)^{1/2} (derived from Judge et al. (1985, p. 294)), Q[2, 1] = −θ₁(1 − ρ₂²)^{1/2}, Q[2, 2] = (1 − ρ₂²)^{1/2}, Q[j, j] = 1, Q[j, j−1] = −ρ₁, Q[j, j−2] = −ρ₂ for j = 3, …, T, and 0 elsewhere. Beach and MacKinnon (1978b) provide a specific algorithm for QML estimation with AR(2) error. Similarly to the analysis above, one defines

κ₁ ≡ κ₁(T, θ₁) := { Σ_{s=0}^{T−1} θ₁^{2s} }^{−1/2} = { (1 − θ₁^{2T})/(1 − θ₁²) }^{−1/2} > 0 for all ρ₁, ρ₂,   (11)

and

κ₂ ≡ κ₂(T, ρ₂) := { Σ_{s=0}^{T−1} ρ₂^{2s} }^{−1/2} = { (1 − ρ₂^{2T})/(1 − ρ₂²) }^{−1/2} > 0 for all ρ₁, ρ₂.   (12)

Under stationarity as T → ∞, κ₁(T, θ₁) → (1 − θ₁²)^{1/2} while κ₂(T, ρ₂) → (1 − ρ₂²)^{1/2}. The employed approximate transformation matrix Q* is equal to Q for all elements except Q*[1, 1] = κ₁κ₂, Q*[2, 1] = −θ₁κ₂ and Q*[2, 2] = κ₂. The proposed NLS estimation minimizes

S₂*(β, ρ₁, ρ₂) = u′Q*′Q*u,   (13)
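The AR(2) construction can be sketched in the same way (Python/NumPy; illustrative, with our naming). The code builds Q* from κ₁ and κ₂ and checks that, for stationary values and moderate T, the implied covariance (Q*′Q*)^{-1} reproduces the first two autocorrelations θ₁ and θ₂:

```python
import numpy as np

def q_star(T, rho1, rho2):
    """Approximate AR(2) GLS transformation Q* built from kappa_1 and kappa_2."""
    theta1 = rho1 / (1 - rho2)                         # first autocorrelation
    k1 = sum(theta1 ** (2 * s) for s in range(T)) ** -0.5
    k2 = sum(rho2 ** (2 * s) for s in range(T)) ** -0.5
    Q = np.eye(T)
    Q[0, 0] = k1 * k2                                  # Q*[1,1]
    Q[1, 0], Q[1, 1] = -theta1 * k2, k2                # Q*[2,1], Q*[2,2]
    for j in range(2, T):
        Q[j, j - 1], Q[j, j - 2] = -rho1, -rho2
    return Q

T, rho1, rho2 = 300, 0.5, 0.3
Q = q_star(T, rho1, rho2)
Sigma = np.linalg.inv(Q.T @ Q)                         # implied error covariance (sigma^2 = 1)
theta1 = rho1 / (1 - rho2)
theta2 = rho2 + rho1 * theta1
print(abs(Sigma[0, 1] / Sigma[0, 0] - theta1) < 1e-6)  # True: first autocorrelation recovered
print(abs(Sigma[0, 2] / Sigma[0, 0] - theta2) < 1e-6)  # True: second autocorrelation recovered
```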
while the new QML estimation maximizes

ℓ₂*(β, ρ₁, ρ₂) = −(T/2) ln S₂*(β, ρ₁, ρ₂) + (1/2) ln κ₁(T, θ₁)² + (1/2) ln κ₂(T, ρ₂)².   (14)

It is assumed that Q*u = ε and u ~ N(0, σ²Σ*(ρ₁, ρ₂)) with Σ*(ρ₁, ρ₂) = (Q*′Q*)^{-1}. Constrained optimization is not required. Note that no equivalence between S₂* and ℓ₂* holds, and LR tests based on S₂* differ from LR tests based on ℓ₂*.

3. Conclusions

This paper proposes NLS and QML estimation of a linear regression with AR(1) or AR(2) error, based on modification of the usual GLS transformation matrices. Only unconstrained optimization is required. Generalization to arbitrary AR order is intractable and unnecessary, since the methods for models with AR(1) or AR(2) error can handle a number of interesting empirical applications. Extension to nonlinear models is straightforward. Future research of interest includes simulation of the proposed estimators and LR (especially unit root) testing.

References

Andrews, D.W.K., 1999. Estimation when a parameter is on the boundary. Econometrica 67, 1341–1383.
Beach, C.M., MacKinnon, J.G., 1978a. A maximum likelihood procedure for regression with autocorrelated errors. Econometrica 46, 51–58.
Beach, C.M., MacKinnon, J.G., 1978b. Full maximum likelihood estimation of second-order autoregressive error models. J. Econometrics 7, 187–198.
Berenblut, I.I., Webb, G.I., 1973. A new test for autocorrelated errors in the linear regression model. J. Roy. Statist. Soc. Ser. B 35, 33–50.
Dufour, J.-M., 1990. Exact tests and confidence sets in linear regressions with autocorrelated errors. Econometrica 58, 475–494.
Dufour, J.-M., King, M.L., 1991. Optimal invariant tests for the autocorrelation coefficient in linear regressions with stationary or nonstationary AR(1) errors. J. Econometrics 47, 115–143.
Gurland, J., 1954. An example of autocorrelated disturbances in linear regression. Econometrica 22, 218–227.
Hamilton, J.D., 1994. Time Series Analysis. Princeton University Press, Princeton, NJ.
Judge, G.G., Griffiths, W.E., Hill, R.C., Lütkepohl, H., Lee, T.-C., 1985. The Theory and Practice of Econometrics, second ed. Wiley, New York.
Kadiyala, K.R., 1968. A transformation used to circumvent the problem of autocorrelation. Econometrica 36, 93–96.
Prais, S.J., Winsten, C.B., 1954. Trend estimators and serial correlation. Cowles Commission Discussion Paper No. 383, Chicago.