Convergence of least squares learning to a non-stationary equilibrium

George W. Evans (a), Seppo Honkapohja (b,c,*)

(a) Department of Economics, University of Edinburgh, William Robertson Building, 50 George Square, Edinburgh EH8 9JY, UK
(b) Academy of Finland, Finland
(c) Department of Economics, University of Helsinki, Box 54, Helsinki, SF-00014, Finland

Economics Letters 46 (1994) 131-136
Received 14 December 1993; accepted 23 February 1994

* Correspondence to: Seppo Honkapohja, Department of Economics, University of Helsinki, Box 54, Helsinki, SF-00014, Finland.
Abstract

Linear rational expectations models can have explosive equilibria. For a linear model with multiple AR(1) solutions we show that least squares learning can converge to an explosive AR(1) equilibrium if a stability condition is met.

JEL classification: D83

1. Introduction

Least squares learning in linear rational expectations models has been studied, for example, in Bray and Savin (1986) and Marcet and Sargent (1989a). Recently there has been further interest in studying the dynamics of learning in linear models when there are multiple rational expectations equilibria (REE), e.g. Evans and Honkapohja (1991). An issue which has received little attention is convergence when the equilibrium is explosive. [1]

Consider, for example, the model

y_t = \alpha + \delta y_{t-1} + \beta_0 {}_{t-1}y_t^e + \beta_1 {}_{t-1}y_{t+1}^e + u_t ,    (1)

where u_t is an exogenous iid process with 0 mean and the superscript e denotes the expectations held by agents at time t - 1. Provided that the roots of the characteristic polynomial,

\beta_1 z^2 + (\beta_0 - 1) z + \delta = 0 ,    (2)

are real, there are two REE solutions of the form

y_t = a(\rho) + \rho y_{t-1} + u_t ,    (3)

where \rho is a root of (2) and a(\rho) = \alpha (1 - \beta_0 - \beta_1 (1 + \rho))^{-1}. Depending on the values of the parameters in (1), the number of roots of (2) with absolute value less than 1 can be 0, 1 or 2. In the standard 'saddlepoint-stable' case there is one such root, so that there is a unique stationary solution of the form (3). However, this need not hold in general.

[1] Marcet and Sargent (1989b) treat an explosive case in the context of a deterministic model.
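To make the solution structure concrete, here is a minimal numerical sketch; the parameter values \alpha, \delta, \beta_0, \beta_1 are hypothetical, chosen only for illustration and not taken from the paper. It computes the two roots of (2) and the corresponding intercepts a(\rho) in (3).

```python
import numpy as np

# Hypothetical structural parameters for model (1); chosen for illustration only.
alpha, delta, beta0, beta1 = 1.0, -0.225, 1.6, -0.3

# Characteristic polynomial (2): beta1*z**2 + (beta0 - 1)*z + delta = 0.
roots = np.roots([beta1, beta0 - 1.0, delta])
assert np.all(np.isreal(roots)), "need real roots for the two AR(1) solutions"

for rho in sorted(roots.real):
    a_rho = alpha / (1.0 - beta0 - beta1 * (1.0 + rho))  # intercept a(rho) in (3)
    kind = "stationary" if abs(rho) < 1.0 else "explosive"
    print(f"rho = {rho:+.3f} ({kind}), a(rho) = {a_rho:+.3f}")
```

With these illustrative values the roots are 0.5 and 1.5, i.e. one inside and one outside the unit circle, the 'saddlepoint-stable' configuration just described.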

2. Learning

We now suppose that agents do not (initially) hold rational expectations, but instead base their expectations on a least squares regression of y_t on lagged y, as in Evans and Honkapohja (1991). We examine the local stability of an REE of form (3) under least squares learning when the REE is stationary. Suppose agents at time t - 1 believe the economy follows the (perceived) law of motion

y_t = a_{t-1} + b_{t-1} y_{t-1} + u_t ,

where (a_{t-1}, b_{t-1}) have been estimated by a least squares regression of y_i on y_{i-1} and an intercept using data i = 1, ..., t - 1. The corresponding forecasts are

{}_{t-1}y_t^e = a_{t-1} + b_{t-1} y_{t-1} ,
{}_{t-1}y_{t+1}^e = a_{t-1} (1 + b_{t-1}) + b_{t-1}^2 y_{t-1} .

Inserting these forecasts into (1) we obtain the actual law of motion generated by these perceptions:

y_t = T_a(a_{t-1}, b_{t-1}) + T_b(b_{t-1}) y_{t-1} + u_t ,

where

T_a(a, b) = \alpha + \beta_0 a + \beta_1 a (1 + b)    and    T_b(b) = \delta + \beta_0 b + \beta_1 b^2 .    (4)

The question is whether this process can converge to a specified AR(1) REE. If the REE is stationary, it follows from Evans and Honkapohja (1991, 1992) that its local stability under learning is governed by whether it is expectationally stable (E-stable), i.e. whether the differential equation

d(a, b)/d\tau = (T_a(a, b), T_b(b)) - (a, b) ,

is locally asymptotically stable at the REE. These conditions are \beta_0 + \beta_1 - 1 + \beta_1 \rho < 0 and \beta_0 - 1 + 2\beta_1 \rho < 0; see Evans and Honkapohja (1992, Proposition 2).


As Evans and Honkapohja (1992) show, however, it is possible for a non-stationary REE to satisfy these E-stability conditions. This raises the question of whether it is possible for the least squares learning algorithm (or an appropriate modification) to converge to a nonstationary E-stable REE.
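Continuing the illustrative parameterization from the sketch in Section 1 (hypothetical values, not from the paper), the following sketch evaluates the two E-stability conditions at each root of (2). With these particular values it is the explosive root (\rho = 1.5) that satisfies both conditions while the stationary root (\rho = 0.5) fails them, which is exactly the configuration addressed in the next section.

```python
import numpy as np

# Same hypothetical parameters as the earlier sketch (illustration only).
delta, beta0, beta1 = -0.225, 1.6, -0.3

def e_stability_conditions(rho):
    """The two E-stability conditions quoted from Evans and Honkapohja (1992, Prop. 2)."""
    cond_a = beta0 + beta1 - 1.0 + beta1 * rho   # must be < 0 (intercept component)
    cond_b = beta0 - 1.0 + 2.0 * beta1 * rho     # must be < 0 (slope component)
    return cond_a, cond_b

for rho in sorted(np.roots([beta1, beta0 - 1.0, delta]).real):
    cond_a, cond_b = e_stability_conditions(rho)
    print(f"rho = {rho:+.3f}: cond_a = {cond_a:+.3f}, cond_b = {cond_b:+.3f}, "
          f"E-stable: {cond_a < 0 and cond_b < 0}")
```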

3. The explosive case

We thus turn to the case of an explosive AR(1) equilibrium, i.e. a solution of the form (3) where |\rho| > 1. Certain aspects of the situation should be noted. First, as t becomes large the intercept term becomes negligible relative to \rho y_{t-1}. We deal with this difficulty by simply assuming that agents ignore any intercept and forecast using the perceived law of motion y_t = b_{t-1} y_{t-1} + u_t. Second, the procedure of Marcet and Sargent (1989a) assumes a stationary REE. It is possible to transform to stationary variables, as we do below. However, if var(u_t) = \sigma_u^2 < +\infty, then u_t becomes negligible relative to \rho y_{t-1}, so that the solution tends to the deterministic process y_t = \rho y_{t-1}. It would still be impossible to apply Marcet and Sargent (1989a), because the moment matrix for the transformed variables would be singular, violating their Assumption A.3.

The second difficulty is avoided by considering the case in which var(u_t) explodes over time sufficiently quickly to permit a non-degenerate distribution for a suitable transformation of the variables. For example, suppose that var(u_t) = \lambda^t \sigma_u^2 for some \lambda > \rho^2 > 1. Let y_t^* = \lambda^{-t/2} y_t, u_t^* = \lambda^{-t/2} u_t and \rho^* = \rho \lambda^{-1/2}. Then it is easily checked that along an explosive AR(1) solution the transformed variable y_t^* tends to the stationary AR(1) process y_t^* = \rho^* y_{t-1}^* + u_t^*.

Learning dynamics can be analysed in this context by considering least squares based on the transformed variables y_t^*. When put in recursive form and restated in terms of the original variables, we obtain the following algorithm:

b_t = b_{t-1} + (\lambda^{-(t-1)}/t) R_t^{-1} y_{t-1} \epsilon_t ,    (5a)
R_t = R_{t-1} + (\lambda^{-(t-1)}/t) (y_{t-1}^2 - R_{t-1} \lambda^{t-1}) ,    (5b)

where

\epsilon_t = y_t - b_{t-1} y_{t-1} ,    (5c)
y_t = \alpha + T_b(b_{t-1}) y_{t-1} + u_t .    (5d)

When \lambda = 1 the algorithm (5a)-(5c) is the standard recursive formulation of least squares: (5a) updates the estimate for b by the last forecast error (5c), and (5b) is the update of the moment matrix. More generally, with var(u_t) = \lambda^t \sigma_u^2 the algorithm (5a)-(5c), with appropriate initial conditions, is recursive weighted least squares, with weights inversely proportional to var(u_t).
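As a concrete illustration of how (5a)-(5d) behave, here is a minimal simulation sketch in the spirit of the algorithm above. The parameter values, the normal shock distribution, the initial conditions and the crude clamp on (b_t, R_t), which stands in for the projection facility introduced below, are all hypothetical choices, not taken from the paper. With T_b'(\rho) = 0.7 < 1 and \lambda = 4 > \rho^2, the estimate b_t should tend toward the explosive root \rho = 1.5 (the convergence in the proposition below is asymptotic, so a finite run only gets close).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (same illustrative values as the earlier sketches).
alpha, delta, beta0, beta1 = 1.0, -0.225, 1.6, -0.3
rho = 1.5                       # explosive root of (2); T_b'(rho) = beta0 + 2*beta1*rho = 0.7
lam, sigma_u = 4.0, 1.0         # var(u_t) = lam**t * sigma_u**2, with lam > rho**2 > 1

def T_b(b):
    return delta + beta0 * b + beta1 * b ** 2    # slope part of the T-mapping (4)

T = 200
b, R, y_prev = 1.2, 1.0, 1.0    # initial belief b_0, moment estimate R_0, and y_0
for t in range(1, T + 1):
    u_t = lam ** (t / 2) * sigma_u * rng.standard_normal()
    y_t = alpha + T_b(b) * y_prev + u_t              # actual law of motion (5d)
    eps = y_t - b * y_prev                           # forecast error (5c)
    w = lam ** (-(t - 1)) / t                        # weight, inversely proportional to var(u_{t-1})
    R = R + w * (y_prev ** 2 - R * lam ** (t - 1))   # moment update (5b)
    b = b + w * y_prev * eps / R                     # coefficient update (5a)
    b = min(max(b, 0.9), 2.1)                        # crude clamp: stand-in for the projection facility
    R = max(R, 1e-3)
    y_prev = y_t

print(f"b_T = {b:.3f}  (explosive REE has rho = {rho})")
```

The clamp is only a crude device for this illustration; dropping it can produce the kind of early 'bad draw' divergence that motivates the projection facility discussed next, which plays the analogous role in the formal result.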


Before stating the main result we must for technical reasons make a modification to the algorithm. Following Ljung (1977) and Marcet and Sargent (1989a) we introduce a 'projection facility'. This is defined by a pair of sets D_1 and D_2, with D_2 \subset D_1, which modifies the algorithm as follows. As long as (b_t, R_t) lies in D_1 the algorithm (5) is used. However, if (b_t, R_t) would leave D_1 under (5), it is projected to some value in D_2 (a natural choice would be the last value taken in D_2). We assume that D_1 is a bounded open set and D_2 is a closed set with the equilibrium inside an open ball contained in D_2. With the projection facility we can obtain convergence with probability 1 when the REE is E-stable. Without the projection facility there would be the possibility of a 'bad draw' of exogenous shocks early in the process, so that the shocks would take the system outside the domain of attraction. [2]

For this algorithm we have the following result: [3]

Proposition. Consider an AR(1) REE with parameter \rho. Assume that for some \lambda satisfying \lambda > \rho^2 > 1 the sequence u_t \lambda^{-t/2} is identically and independently distributed, with 0 mean and bounded moments. If T_b'(\rho) = \beta_0 + 2\beta_1 \rho < 1, then there exists a non-trivial projection facility such that under the algorithm (5), b_t \to \rho with probability 1.

Proof. Write y_t^* = y_t \lambda^{-t/2}, u_t^* = u_t \lambda^{-t/2}, \rho^* = \rho \lambda^{-1/2} and b_t^* = b_t \lambda^{-1/2}. The algorithm can be rewritten

b_t^* = b_{t-1}^* + (1/t) R_t^{-1} y_{t-1}^* (y_t^* - b_{t-1}^* y_{t-1}^*) ,
R_t = R_{t-1} + (1/t) (y_{t-1}^{*2} - R_{t-1}) .

The equation for y_t can be rewritten

y_t^* = \alpha \lambda^{-t/2} + T_b^*(b_{t-1}^*) y_{t-1}^* + u_t^* ,

where

T_b^*(b^*) = \delta \lambda^{-1/2} + \beta_0 b^* + \beta_1 \lambda^{1/2} b^{*2} .

Substituting for y_t^*, the expression for b_t^* becomes

b_t^* = b_{t-1}^* + (1/t) R_t^{-1} y_{t-1}^* (\alpha \lambda^{-t/2} + y_{t-1}^* (T_b^*(b_{t-1}^*) - b_{t-1}^*) + u_t^*) .

With this set-up, and given our assumptions, it is now possible to apply Ljung's (1977) theorems on the convergence of recursive stochastic algorithms. Marcet and Sargent (1989a, Propositions 1 and 3) and Evans and Honkapohja (1991, Proposition 4.1) should be consulted for the technical details. The convergence of b_t^* is determined by the stability of the fixed points of an 'associated differential equation', which in this case takes the form

db^*/d\tau = R^{-1} M_y(b^*) (T_b^*(b^*) - b^*) ,
dR/d\tau = M_y(b^*) - R ,

where M_y(b^*) = \sigma_u^2 / (1 - b^{*2}). T_b^*(b^*) - b^* has a root at \rho^* = \rho \lambda^{-1/2}. This system is locally stable at (\rho^*, R^*) if the 'small' differential equation,

db^*/d\tau = T_b^*(b^*) - b^* ,    (6)

is locally stable at \rho^*. But the stability condition of (6) follows from the condition dT_b(\rho)/d\rho < 1. We thus conclude that b_t^* \to \rho^* = \rho \lambda^{-1/2} with probability 1, and hence b_t = b_t^* \lambda^{1/2} \to \rho with probability 1.  □

In fact it is clear that the proposition also holds when \rho^2 < 1 and \lambda > 1. Finally, we remark that instability results can also be developed.

[2] Convergence results without the projection facility device are discussed in Evans and Honkapohja (1993a,b).
[3] The E-stability condition in the proposition is weaker than that in Section 2, since the agents do not allow for an intercept.
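The last step of the proof rests on the identity T_b^{*\prime}(\rho^*) = \beta_0 + 2\beta_1 \rho = T_b'(\rho), so that stability of the transformed fixed point is governed by exactly the condition in the proposition. A short symbolic sketch of that identity (using sympy; the symbol names simply transcribe the paper's notation):

```python
import sympy as sp

delta, beta0, beta1, rho, lam, b = sp.symbols('delta beta0 beta1 rho lam b')

# T-mapping (4) and its transformed counterpart T*_b from the proof.
T_b = delta + beta0 * b + beta1 * b ** 2
T_b_star = delta / sp.sqrt(lam) + beta0 * b + beta1 * sp.sqrt(lam) * b ** 2
rho_star = rho / sp.sqrt(lam)

# rho is a root of (2), i.e. a fixed point of T_b: delta = rho - beta0*rho - beta1*rho**2.
delta_root = rho - beta0 * rho - beta1 * rho ** 2

# (i) rho* is a fixed point of T*_b; (ii) T*_b'(rho*) equals T_b'(rho) = beta0 + 2*beta1*rho.
fixed_point_gap = sp.simplify(T_b_star.subs({b: rho_star, delta: delta_root}) - rho_star)
slope_gap = sp.simplify(sp.diff(T_b_star, b).subs(b, rho_star) - sp.diff(T_b, b).subs(b, rho))

print(fixed_point_gap, slope_gap)   # both reduce to 0
```

Both gaps reduce to zero, confirming that the small ODE (6) inherits its stability condition from T_b'(\rho) < 1.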

4. Discussion

The preceding result extends the connection between E-stability and stability under adaptive learning algorithms to the explosive case. We note that Evans and Honkapohja (1991, 1992) show that there is no simple connection between E-stability and stationarity. Even in the 'saddlepoint-stable' case it is possible for the explosive REE to be stable under learning.

The method of transformed variables will also work for more general models, e.g. where y_t depends on expectations of its values in the more distant future as in Evans and Honkapohja (1991), or if the dating of expectations is changed. The principal alteration required is the form of the T-mapping from perceived to actual law of motion and the corresponding E-stability condition.

Acknowledgements

We are indebted to Thomas J. Sargent for discussions on how to approach this case. Financial support from the SPES Program of the EC is gratefully acknowledged.

References

Bray, M. and N.E. Savin, 1986, Rational expectations equilibria, learning and model specification, Econometrica 54, 1129-1160.
Evans, G.W. and S. Honkapohja, 1991, Learning, convergence, and stability with multiple rational expectations equilibria, STICERD Discussion Paper TE/90/212, LSE, revised March 1993; forthcoming in European Economic Review.
Evans, G.W. and S. Honkapohja, 1992, On the robustness of bubbles in linear RE models, International Economic Review 33, 1-14.
Evans, G.W. and S. Honkapohja, 1993a, Local convergence of recursive learning to steady states and cycles in stochastic nonlinear models, forthcoming in Econometrica.
Evans, G.W. and S. Honkapohja, 1993b, Economic dynamics with learning: New stability results, mimeo.
Ljung, L., 1977, Analysis of recursive stochastic algorithms, IEEE Transactions on Automatic Control AC-22, 551-557.
Marcet, A. and T.J. Sargent, 1989a, Convergence of least squares learning mechanisms in self-referential linear stochastic models, Journal of Economic Theory 48, 337-368.
Marcet, A. and T.J. Sargent, 1989b, Least-squares learning and the dynamics of hyperinflation, in: W.A. Barnett et al., eds., Economic complexity (Cambridge University Press, Cambridge) 119-137.