Conditional Recursive Estimation of Dynamic Models with Autocorrelated Perturbations


Copyright © IFAC Computation in Economics, Finance and Engineering: Economic Systems, Cambridge, UK, 1998

J. del Hoyo and J. Guillermo Llorente
Dpto. de Economia Cuantitativa, Universidad Autonoma de Madrid, 28049 Madrid, Spain
Phone: +34-1-3975033 or 3974812; Fax: +34-1-3974091
e-mail: juan.hoyo@uam.es, guiller@uam.es

ABSTRACT: Most economic applications do not need on-line estimation, although recursive estimates are useful for diagnostic purposes. Recursive algorithms to estimate dynamic models with autocorrelated perturbations can be computationally complicated. To avoid this problem, this paper proposes a Conditional Recursive Least Squares (CRLS) algorithm that uses consistent estimators obtained from a correctly specified model. Once consistent estimators are known, the model is linearized to obtain consistent recursive estimators along the full sample, and these recursive estimators are then used to test for structural breaks. Copyright © 1998 IFAC

Key Words: Nonlinear Models; Autocorrelated Perturbations; Recursive Estimators; Recursive Sequential Tests.

1.- INTRODUCTION

The hypothesis of constant coefficients is usually assumed in many quantitative economic applications. Failing to realize that the model's coefficients are not constant implies inconsistent estimates, incorrect structural interpretations, inadequate distributions of the usual econometric tests, and inefficient predictions.

Aggregation problems, Lucas effects, the neglected time-varying nature of the coefficients, nonlinearities, omitted relevant explanatory variables, and inadequate dynamic specifications are among the most common sources of misspecification leading to time-varying parameters (TVP). Therefore, detecting TVP might indicate that the model is not well specified and may be misleading for applied purposes. Statistical tests based on recursive estimators are efficient techniques for detecting departures from the constant-coefficient hypothesis. Recursive Least Squares (RLS) estimates in linear models are easily computed, Plackett (1950). Once the recursive estimators are obtained, tests based on them can be used for diagnostic purposes, Stock (1995). Recursive algorithms have been proposed to estimate general dynamic models. In particular, the Recursive Prediction Error Method (RPEM) is a general method that includes as special cases most known algorithms for recursive estimation, Ljung and Söderström (1983).

A related procedure, computationally simpler than the RPEM, is the Recursive Instrumental Variable Method (RIVM), Young (1984). Recursive estimation of the parameters as well as the states of linear and nonlinear dynamic models can also be obtained using a Bayesian approach, Sorenson (1988).

The above cited recursive methods are very elegant and provide consistent estimators. Nevertheless, these procedures usually require large amounts of data to achieve convergence, and the final estimates might be very sensitive to the initial conditions when applied to noisy models. These requirements play an important role in most economic applications: economic studies are usually characterized by models with low signal-to-noise ratios, short time series, and complicated dynamic structures, often including multiplicative seasonality.
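As a point of reference for what follows, the RLS recursion for a linear regression (the simplest member of this family of algorithms) can be sketched as below. This is a minimal illustration with our own variable names and initialization, not code from the paper; the diffuse initial covariance `delta` is an assumption of the sketch.

```python
import numpy as np

def rls(X, y, delta=1000.0):
    """Recursive least squares (Plackett 1950 form).

    Returns the (T, k) path of estimates, one row per observation,
    so the whole trajectory can be inspected for parameter instability.
    """
    T, k = X.shape
    theta = np.zeros(k)
    P = delta * np.eye(k)               # large P0: diffuse initial condition
    path = np.empty((T, k))
    for t in range(T):
        x = X[t]
        err = y[t] - x @ theta          # one-step-ahead prediction error
        gain = P @ x / (1.0 + x @ P @ x)
        theta = theta + gain * err
        P = P - np.outer(gain, x @ P)   # rank-one covariance downdate
        path[t] = theta
    return path

# Illustration: the end of the recursive path matches full-sample OLS.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = X @ np.array([1.5, -0.7]) + 0.1 * rng.normal(size=500)
path = rls(X, y)
theta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

Because each step costs only O(k^2), the full trajectory of estimates comes essentially for free, which is what makes recursive diagnostics attractive.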

On-line estimates of the model coefficients are hardly ever needed in economic applications. Excluding high-frequency data, the time elapsed between two consecutive samples is quite large, since economic data are mainly published on a daily, monthly, quarterly, or yearly basis. Nevertheless, recursive estimates can be of great value as a first step to detect errors in the model specification. Given the previous considerations, it seems natural to verify the constancy of the estimates making use of the en bloc estimates, which are consistent under the null of a correctly specified model. This point of view allows a drastic simplification of the recursive algorithms by means of conditional linearizations around the consistent estimates.
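This two-step scheme (full-sample estimation, then recursive estimation of the conditionally linearized model) can be illustrated on a deliberately simple special case: one regressor with AR(1) perturbations. This is our own sketch, not the paper's code; the grid-search first stage merely stands in for the ML estimation done with conventional software.

```python
import numpy as np

# Simulate a special case of the general model:
#   y_t = beta*x_t + u_t,   u_t = rho*u_{t-1} + e_t.
rng = np.random.default_rng(1)
T = 400
x = rng.normal(size=T)
beta_true, rho_true = 2.0, 0.6
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho_true * u[t - 1] + 0.3 * rng.normal()
y = beta_true * x + u

# Step 1: consistent full-sample estimates (grid over rho, beta
# concentrated out by OLS on the quasi-differenced data).
b_hat, r_hat, best = 0.0, 0.0, np.inf
for r in np.linspace(-0.95, 0.95, 191):
    yt, xt = y[1:] - r * y[:-1], x[1:] - r * x[:-1]
    b = (xt @ yt) / (xt @ xt)
    s = (yt - b * xt) @ (yt - b * xt)
    if s < best:
        b_hat, r_hat, best = b, r, s

# Step 2: in regression form the model is
#   y_t = rho*y_{t-1} + beta*x_t - rho*beta*x_{t-1} + e_t,
# nonlinear in (beta, rho). Linearize around (b_hat, r_hat) and run
# plain RLS on the gradient regressors; the recursive estimates are
# the deviations Delta-theta of the linearized model.
e0 = y[1:] - (r_hat * y[:-1] + b_hat * x[1:] - r_hat * b_hat * x[:-1])
G = np.column_stack([x[1:] - r_hat * x[:-1],      # d f / d beta
                     y[:-1] - b_hat * x[:-1]])    # d f / d rho
dtheta = np.zeros(2)
P = 1000.0 * np.eye(2)
path = np.empty_like(G)
for t in range(len(e0)):
    g = G[t]
    gain = P @ g / (1.0 + g @ P @ g)
    dtheta = dtheta + gain * (e0[t] - g @ dtheta)
    P = P - np.outer(gain, g @ P)
    path[t] = dtheta
# Under correct specification the Delta-theta path hovers near zero.
```

Only the linearized (hence linear-in-parameters) model is estimated recursively, so no nonlinear iteration is needed inside the recursion.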

This paper presents a simplified, easy to implement, and efficient method for conditional recursive estimation of the coefficients of general dynamic models under the null hypothesis of correct specification. Conditional recursive estimators are used to test for misspecified models by means of recursive sequential tests. The dynamic model under consideration includes many others as particular cases, in particular the CLM, ARMAX, ARMA and Transfer Function Models (TFM), including multiplicative seasonal factors.

The proposed method produces the recursive estimators in two steps. First, under the null hypothesis of correct specification and constant coefficients of the initial dynamic model, consistent and efficient en bloc estimates of the coefficients are obtained using the full sample. In the second step, the model is linearized conditionally on the consistent estimates previously obtained, and recursive estimates are computed. These recursive estimates are used to test the null hypothesis of constant coefficients. This method, called the Conditional Recursive Least Squares (CRLS) algorithm, is related to the Pseudolinear Regression Method (PLR), Ljung and Söderström (1983).

The CRLS algorithm is easy to implement. Consistent estimates may be obtained using any conventional software in the first stage; later, in the second stage, the recursive estimates may be computed by means of any RLS algorithm for linear models. The procedure could be made fully on-line once a convenient point in the sample is reached.

The outline of the paper is as follows. Section 2 reviews previous algorithms and their adequacy for the problem under consideration. Section 3 presents the proposed method. Section 4 develops recursive tests for non-constant coefficients. Section 5 applies the method to one real example. Finally, Section 6 concludes.

2.- THE MODEL

Most economic applications use particularizations of Ljung and Söderström's (1983) model, which incorporates multiplicative seasonal factors into the noise. The complete model for the case of a single multiplicative seasonal factor, in Ljung's notation, is:

    A(L) y_t = \sum_{i=1}^{n_x} \frac{B_i(L)}{F_i(L)} x_{i,t} + \frac{C(L) G(L^s)}{D(L) H(L^s)} e_t    (1)

where y_t is the output, x_{1,t} to x_{n_x,t} are the inputs, s is the seasonal period, e_t is white noise, and the lag polynomials are of orders (n_a, n_b, ..., n_h). The lag operator will be dropped to simplify the notation (i.e. A(L) will be written as A). The coefficients of the polynomials are assumed to be constant, and the usual stationarity and invertibility conditions are assumed to hold.

Model (1) can easily be generalized to multiple-output, multiple-input (MIMO) representations. For ease of exposition, and without loss of generality, the model will be simplified to a single equation with a unique input (n_x = 1, a SISO model) and non-seasonal perturbations (G = H = 1), unless otherwise stated. The dynamic structure of model (1), non-linear in the parameters, makes it impossible to apply any RLS directly. Recursive estimates of the coefficients of model (1) can be obtained in the non-seasonal case using either the RPEM algorithm, Ljung and Söderström (1983), or the RIVM, Young (1984). Nevertheless, for applications in economics, where dynamic models often show low signal-to-noise ratios, possibly with nonlinear constraints among some of the coefficients, and where data samples are not large enough, these algorithms may need large amounts of data to achieve convergence.

2.1 The Recursive Prediction Error Method (RPEM)

The Prediction Error Method (PEM) minimizes a function V_t(\theta) of the sum of squares of the prediction errors up to time t, to estimate the (k x 1) vector \theta of parameters of the prediction model. Given a model to predict the output, a simplified version of the RPEM, resembling the RLS, Söderström and Stoica (1989), is:

    e_t = y_t - \hat{y}_{t-1}(1)    (2a)

    \hat{\theta}_t = \hat{\theta}_{t-1} + K_t e_t
    K_t = P_{t-1} \psi_t [1 + \psi_t' P_{t-1} \psi_t]^{-1}    (2b)
    P_t = P_{t-1} - P_{t-1} \psi_t [1 + \psi_t' P_{t-1} \psi_t]^{-1} \psi_t' P_{t-1}

where \hat{y}_{t-1}(1) and e_t are the one-step-ahead predictor and the prediction error, respectively. The (k x 1) vector \psi_t = -\partial e_t / \partial \theta is the negative gradient of the prediction errors, the (k x k) matrix P_t is the inverse of the Hessian of V_t(\theta) evaluated at \hat{\theta}_{t-1}, and K_t is a (k x 1) gain vector. The gradient \psi_t equals the vector of explanatory variables when the prediction model is linear, in which case the RPEM reduces to the RLS algorithm. In general the prediction model will not be linear; \psi_t will then also be nonlinear and will depend not only on the variables of the model but also on the parameters. This complicates the structure of the RPEM algorithm and may heavily affect the numerical properties of the recursive estimates, particularly at the beginning of the sample. It can be proved that the RPEM is consistent and asymptotically has the same distribution as the ML estimator, attaining the Cramér-Rao lower bound, Ljung and Söderström (1983). The RPEM may also be interpreted as a general method that includes many others derived from particular approximations to \psi_t.

The Pseudo Linear Regression (PLR), a modification of the RPEM, is applicable if the resulting model corresponds to an ARMAX or one of its associates, Ljung and Söderström (1983). The basic idea behind the PLR consists in rewriting the dynamic model as a linear regression model. The problem with this procedure is that the rewritten model includes unobserved perturbations as explanatory variables.

The behaviour of the RPEM is quite good when dealing with long series of data, but in economic applications, characterized by series of moderate length with low signal-to-noise ratios, the RPEM might have convergence problems.

3.- RECURSIVE CONDITIONAL LEAST SQUARES ESTIMATION

Write model (1) as y_t = f(\theta) + e_t, with:

    f(\theta) = (1 - A) y_t + \frac{B}{F} x_t + \left( \frac{C G}{D H} - 1 \right) e_t    (3)

where \theta is the (k x 1) vector of coefficients in (A, B, ..., H). Given consistent estimates \hat{\theta}_T of \theta and of e_t, expanding f around the consistent estimates \hat{\theta}_T gives:

    f(\theta) = f(\hat{\theta}_T) + \frac{\partial f(\hat{\theta}_T)}{\partial \theta} (\theta - \hat{\theta}_T) + o(\Delta\theta)    (4)

Now, denoting by \hat{x}_t = \partial f(\hat{\theta}_T) / \partial \theta the (1 x k) gradient vector, and defining \hat{f}_{0,t} = f(\hat{\theta}_T), \hat{e}_{0,t} = y_t - \hat{f}_{0,t} and \Delta\theta = (\theta - \hat{\theta}_T), results in:

    \hat{e}_{0,t} = \hat{x}_t \Delta\theta + e_t + o(\Delta\theta)    (5)

Since \hat{\theta}_T and the estimated e_t are consistent under the null of correct specification, plim \Delta\theta = 0, and hence plim o(\Delta\theta) = 0. The gradient \hat{x}_t(\hat{\theta}_T) is a continuous function because of the structure of model (3) and the stationarity and invertibility conditions on the lag polynomial operators. Thus plim \hat{x}_t(\hat{\theta}_T) = g_t(\theta), Amemiya (1985). Note that this result holds for every t <= T, where T is the full sample length. Therefore, model (5) for T -> infinity is equivalent to \hat{e}_{0,t} = g_t \Delta\theta + e_t; this implies \Delta\theta = 0 for every t <= T under the null. This last result can be recursively estimated and tested.

For the linearized model under the null, and for every t <= T, LS estimators of \Delta\theta are consistent and their asymptotic distribution coincides with the one obtained from (1) by NLLS. Therefore, any RLS algorithm can be applied to (5) to obtain recursive estimates. Notice that if the initial model is well specified, the recursive estimates of \Delta\theta will tend to zero. Thus, if the recursive estimates of \Delta\theta differ substantially from zero along the sample, the model might have some specification problems.

4.- TESTING FOR STRUCTURAL BREAKS WITH RECURSIVE ESTIMATORS

Once recursive estimators are obtained, it is possible to verify the presence of structural breaks using any recursively computed variant of the Chow or Wald tests. These statistics have the usual distributions under the null of no change if the break dates are known a priori. Nonstandard distributions, based on Brownian processes, must be considered when these dates are not known beforehand, Ploberger et al. (1989), Banerjee et al. (1992), Stock (1995).

Ploberger et al. (1989) develop the so-called fluctuation test, and derive its distribution under the null of no change without imposing the break date a priori. This test compares each component of the recursively estimated coefficient vector with the estimates computed from the full sample. In particular, given a linear model with coefficient vector \theta, the null is H_0: \hat{\theta}_t = \hat{\theta}_T, where \hat{\theta}_t and \hat{\theta}_T are the vectors of estimates obtained with t and T observations, respectively. The fluctuation test rejects the null if the discrepancy between these estimates is large inside a given sample interval. Since the null hypothesis for the model considered here is H_0: \Delta\theta_t = 0, the test will be slightly different from the fluctuation test.

To make full use of standardized Brownian processes on [0, 1], change the notation and define \lambda = t/T, (t = 1, 2, ..., T). This notation implies \hat{\theta}_t = \hat{\theta}(\lambda), \Delta\hat{\theta}_t = \Delta\hat{\theta}(\lambda), and \hat{\theta}_T = \hat{\theta}(1). Now, with \hat{x}_t defined as in (5), it can be proved under very weak assumptions, Stock (1995), that the standardized statistic:

    S(\lambda) = \frac{\sqrt{T}}{\hat{\sigma}_e} \hat{Q}^{1/2} \Delta\hat{\theta}(\lambda), \qquad \hat{Q} = T^{-1} \sum_{t=1}^{T} \hat{x}_t' \hat{x}_t    (6)

is asymptotically distributed as \lambda^{-1} W_k(\lambda), where W_k(\lambda) is a k-dimensional standardized Brownian process on [0, 1], and \hat{\sigma}_e^2 is the LS estimator of the residual variance.

The modified fluctuation test MF(\lambda) is obtained by applying the continuous mapping theorem (CMT):

    \max_{\lambda_0 <= \lambda <= \lambda_1} \max_{1 <= i <= k} |S_i(\lambda)| => \sup_{\lambda_0 <= \lambda <= \lambda_1} \max_{1 <= i <= k} |\lambda^{-1} W_{k,i}(\lambda)|    (7)

Denote:

    MF(\lambda) = \max_{\lambda_0 <= \lambda <= \lambda_1} \max_{1 <= i <= k} |S_i(\lambda)|    (8)

where k is the number of regressors and W_{k,i} is the i-th component of W_k.

An alternative way of testing for the existence of at least one structural break along the sample makes use of Wald-type statistics, Banerjee et al. (1992) or Stock (1995). As with the fluctuation test, it is possible to derive the asymptotic distribution of a Wald-type test for recursively testing R \Delta\theta = r in model (5). The distributions of this Wald-type test, under several particularizations of the null hypothesis of interest, may be found in del Hoyo and Llorente (1997). In particular, for the case considered in this work, H_0: R = I, r = 0, the resulting statistic is:

    FR(\lambda) = \frac{ (R \Delta\hat{\theta}(\lambda) - r)' \left[ R \left( \sum_{t=1}^{[T\lambda]} \hat{x}_t' \hat{x}_t \right)^{-1} R' \right]^{-1} (R \Delta\hat{\theta}(\lambda) - r) }{ k \, \hat{\sigma}^2(\lambda) }    (9)

where [.] denotes the integer part. It can be proved that, for the hypothesis under test:

    FR(\lambda) => \frac{1}{\lambda k} W_k'(\lambda) W_k(\lambda)    (10)

and, applying the CMT under the null:

    \max_{\lambda_0 <= \lambda <= 1} FR(\lambda) => \sup_{\lambda_0 <= \lambda <= 1} (\lambda k)^{-1} W_k'(\lambda) W_k(\lambda)    (11)

Denote:

    FR_1(\lambda) = \max_{\lambda_0 <= \lambda <= 1} FR(\lambda)    (12)

The distribution corresponding to this test has to be tabulated by Monte Carlo methods.

5.- APPLICATION

As an illustration of the conditional recursive method, the gas-furnace model is considered, Box and Jenkins (1970):

    y_t = \frac{b_0 + b_1 L + b_2 L^2}{1 + f_1 L} x_{t-3} + \frac{1}{1 + d_1 L + d_2 L^2} e_t    (13)

where the output y_t is CO2 (carbon dioxide), the input x_t is CH4 (methane), and e_t is white noise. The first-stage ML estimates, using the SCA package, are (T = 296):

    b_0 = -.527,  b_1 = -.380,  b_2 = -.522,  f_1 = -.55,  d_1 = -1.53,  d_2 = .631

To test for constant coefficients along the sample, MF(\lambda) and FR_1(\lambda) are used. Table 1, Panel A, presents the critical values of the recursive test statistics for k = 6 (the number of parameters in this example); the simulations are based on 10000 Monte Carlo replications and T = 3600 observations. The values of the statistics for the example data are MF(\lambda) = 11.56 and FR_1(\lambda) = 14.98. Therefore the null hypothesis of constant coefficients is rejected.

TABLE 1

PANEL A: CRITICAL VALUES

    Percentile   MF(lambda)   FR_1(lambda)
    0.10         6.78         2.76
    0.05         7.41         3.13
    0.01         8.67         3.89

PANEL B: SIZE AND POWER

                   SIZE                        POWER
    Observations   MF(lambda)  FR_1(lambda)    MF(lambda)  FR_1(lambda)
    T = 150        26.2        16.4            95.5        99.9
    T = 300        14.9        14.1            95.7        99.9
    T = 500        11.7        12.4            98.8        100
    T = 1000       11.1        11.8            99.5        100

Panel B presents results on size and nominal (not size-adjusted) power of both statistics for several sample sizes. Size is simulated under H_0: \Delta\theta = 0, and power under H_1: \Delta\theta_t = \Delta\theta_{t-1} + \eta_t, where \eta_t is white noise. Entries are based on 1000 replications and give the percent rejections by both statistics at the 10% critical values from Panel A. The results indicate the excellent statistical properties of both statistics.

Figures 1 to 6 in Appendix 1 show recursive estimates of the coefficients using both the CRLS and the RPEM algorithms. Some degree of non-stationarity can be seen; in particular, in the neighborhood of t = 260 a likely change in the structure is evident. As the figures show, both algorithms offer similar evolution patterns for b_0, and less clearly so for the other coefficients. The CRLS reaches the final estimates more quickly than the RPEM. This conclusion applies only to this particular example, put forward merely to illustrate how the conditional method works; it does not imply the superiority of one procedure over the other, which would require further research with more examples and is not the purpose of this paper.

6.- CONCLUSIONS

The hypothesis of constant coefficients is crucial not only in the verification process of any model, but also for its potential use in structural analysis and forecasting. This paper presents an efficient and operational procedure for conditionally and recursively estimating non-linear dynamic models, and for testing whether the model has constant coefficients throughout the whole sample. The procedure can be implemented using conventional software under the null hypothesis of correct model specification and constant coefficients. The procedure has been applied to the gas-furnace data to show its potential usefulness.

Acknowledgements A previous draft has benefited from the comments of Prof. A. Zellner. Project financed by the DGICYT under Grant PB94-018.

7.- REFERENCES

Amemiya, T. (1985). Advanced Econometrics. Basil Blackwell.
Banerjee, A., R.L. Lumsdaine and J.H. Stock (1992). Recursive and Sequential Tests of the Unit-Root and Trend-Break Hypotheses: Theory and International Evidence. J. of Business and Economic Statistics, 10, 3, 271-287.
Box, G.E.P. and G.M. Jenkins (1970). Time Series Analysis: Forecasting and Control. Holden Day, San Francisco.
Hoyo, J. del and J.G. Llorente (1997). Contrastes de Cambio Estructural. Mimeo, UAM.
Ljung, L. and T. Söderström (1983). Theory and Practice of Recursive Identification. MIT Press.
Plackett, R.L. (1950). Some Theorems in Least Squares. Biometrika, 37, 149-157.
Ploberger, W., W. Krämer and K. Kontrus (1989). A New Test for Structural Stability in the Linear Regression Model. J. of Econometrics, 40, 307-318.
Sorenson, H.W. (1988). Recursive Estimation of Nonlinear Dynamic Systems. In: Bayesian Analysis of Time Series and Dynamic Models (J.C. Spall, Ed.). Marcel Dekker.
Söderström, T. and P. Stoica (1989). System Identification. Prentice Hall.
Stock, J.H. (1995). Unit Roots, Structural Breaks and Trends. In: Handbook of Econometrics (R.F. Engle and D.L. McFadden, Eds.), Vol. 4, 2739-2839. Elsevier.
Young, P. (1984). Recursive Estimation and Time Series Analysis. Springer, Berlin.


APPENDIX 1

Note: in the figures each ordinate is \Delta\hat{\theta} + \hat{\theta}_T, which provides a quick comparison with the full-sample estimates.

[Figures 1-6: recursive estimation of b0, b1, b2, d1, d2 and f1, respectively, comparing the CRLS and RPEM trajectories over the sample (t = 50 to 250); axis data not reproduced.]