
The role of theory in econometrics

M. Hashem Pesaran* (Trinity College, Cambridge CB2 1TQ, UK)
Ron Smith (Birkbeck College, London W1P 1PA, UK)

Journal of Econometrics 67 (1995) 61-79

Abstract

This paper discusses the way that theory is used in applied econometrics. The traditional strategy of marrying theory and evidence relied on the fact that older theory implied explicit restrictions on the conditional distributions of observable variables, so that a model embodying the theoretical restrictions could be evaluated in terms of its conditional predictions. This is not true of newer theories based on dynamic stochastic optimisation models which do not take the so-called 'LQ form' of quadratic objective functions and linear constraints. Because these models do not usually have closed-form solutions, they tend to be calibrated rather than estimated and cannot readily be evaluated in terms of their conditional predictions. The application of the stochastic version of the Maximum Principle to such models results in Lagrange multipliers, often shadow prices corresponding to missing markets, which are not observed by the econometrician. Just as agents condition their decisions on unobserved expected prices when forward markets do not exist, they also condition on unobserved shadow prices when particular current or contingent markets do not exist. The approach suggested in this paper is to substitute out the Lagrange multipliers in terms of their determinants, just as is often done with expectations. The approach is illustrated in some detail for two examples: consumer behaviour under liquidity constraints, and oil production.

Key words: Economic theory; Maximum principle; Shadow prices; Applied econometrics
JEL classification: C10; C51; C52

* Corresponding author.

A previous version of this paper was presented at the Conference on 'The Significance of Testing in Econometrics' held at Tilburg, The Netherlands, December 1991. The first author gratefully acknowledges financial support from the ESRC and the Isaac Newton Trust of Trinity College, Cambridge. In preparing this version we have benefited from comments and suggestions by Tony Lawson, Adrian Pagan, Simon Potter, Hossein Samiei, and two anonymous referees of the Journal.


1. Introduction

In his editorial statement launching the first issue of Econometrica, Frisch (1933, p. 2) made a vigorous appeal for the unification of theory and measurement in economics. In this paper we first provide a schematic discussion of the use of theory in applied econometric research and then propose a new approach to unifying theory and econometrics in optimising models which do not take the standard linear-quadratic form.

Following the Cowles Commission, there evolved what Hylleberg and Paldam (1991) call the 'traditional strategy' of doing empirical research, of marrying theory and evidence. The econometrician took from the theorist some static or long-run relationship and added some auxiliary assumptions, to take account of dynamics, functional forms, and other ceteris paribus conditions, to provide the basis for a statistical model which described the conditional distributions of observable variables. Stigum (1990) provides an axiomatic basis for this traditional strategy, which we discuss in Section 2. The underlying theory (IS-LM, static demand theory, explanations of cycles in terms of stochastic linear difference equations) could easily be cast in the form of linear or simple nonlinear relationships among observable variables, and its primary role was to identify relevant variables and plausible signs for their coefficients.

Over time, the theoretical input to econometric models tended to grow: following Stone (1954), demand theory was used to impose cross-equation restrictions on a system of equations, and under rational expectations optimising theory was used to impose cross-equation parametric restrictions on vector autoregressions (VARs). However, this latter development ran into the difficulty that it is only in optimising models which take what Whittle (1982) calls the 'LQ form' (quadratic objective functions with linear constraints) that decision rules take the form of a linear VAR. In the case of more complicated stochastic dynamic optimisation, analytical solutions for decision rules are rare. Outside the LQ form, the theory cannot easily be cast as restrictions on the conditional distributions of the observable variables, and this has led some economists, such as the real business cycle theorists, to focus on matching the unconditional moments predicted by the theory with their sample counterparts, often using simulation techniques applied to calibrated models. Others have tried to solve and estimate nonlinear dynamic stochastic optimization models by brute-force computer simulations; so far this has proved to be computationally unattractive, except for relatively simple specifications (see, for example, Rust, 1987; Deaton, 1991). Moreover, this latter approach, although mathematically appealing, is often too inflexible in practice to be employed readily for prediction and policy analysis. We discuss these responses in Section 3.

Whereas the traditional strategy had worked for issues that could be treated as if they were in an Arrow-Debreu world, more complex models had to face the missing markets problem. The absence of forward markets means that agents


condition not on an observed forward price but on their expectations of a future price, which is unobservable to the econometrician. An answer to this problem was found in the rational expectations (RE) hypothesis, where the unobserved expectations were replaced by their mathematical expectations derived conditional on the underlying behavioural model. Under rational expectations, the theory then imposed strong cross-equation parametric restrictions on the parameters of the linear vector autoregressions (VARs) that were (and still are) the predominant statistical model for the analysis of economic time series. The equally widespread absence of current markets (e.g., contracts for second-hand capital goods) and contingent markets (contracts conditional on the agent being liquidity-constrained) means that agents condition on shadow prices that are unobservable to the econometrician.

In Section 4, we suggest an alternative approach to the problem of estimating intertemporal optimisation problems with missing markets. The approach employs the stochastic version of the Maximum Principle to solve the optimisation problem and determines the decision and state variables jointly. The intertemporal links between the decision variables are established via the values of the Lagrange multipliers (shadow prices) and the state variables. The unobserved shadow prices can then be treated in the same way as the unobserved expectations associated with missing forward markets, i.e., replaced by their observed determinants. The procedure, which allows us to maintain intrinsic nonlinearities in the problem and to introduce institutional and other constraints through the shadow prices, is illustrated by two examples.

This paper extends some of the discussion in an earlier paper by Pesaran (1988) and in the recent paper by Pesaran and Smith (1992), which discusses the closely related issue of the impact of observations on theory.

2. The traditional marriage of theory and evidence

The traditional strategy emerged from the work of Tinbergen, Haavelmo, and the Cowles Commission. Central to it was a dichotomy between theoretical and empirical activities: the theorist provided the model and the econometrician estimated and tested it. This proved a highly productive strategy which dominated empirical econometrics until the 1970s and still remains healthy for certain types of problems. It was effective because the theory involved (IS-LM, static demand theory, explanations of cycles in terms of stochastic linear difference equations) could easily be cast in the form of linear or simple nonlinear relationships among observables. The primary role of theory in this context was 'identifying the list of relevant variables to be included in the analysis, with possibly the plausible signs of their coefficients' (Tinbergen, 1939).

Within the traditional strategy the role of econometrics was straightforward. Firstly, regression was the appropriate tool. The old theory primarily focused on conditional statements, such as what would happen to demand if price were to


fall; decision makers focused on conditional predictions, such as what would happen to unemployment if government spending were to increase. Regression methods, by estimating conditional means, provided a flexible way of quantifying and testing qualitative statements about conditional moments. The testing was usually of a limited though useful sort: was the effect significant and of the correct sign?

In the pragmatic application of the traditional strategy, though not in the strict Cowles Commission view, regression also allowed the empirical analyst great scope to make auxiliary assumptions to take account of acknowledged departures from the basic theory: add variables to allow for ceteris paribus conditions, choose functional forms, add lags for adjustment processes, and experiment with proxies for unobservables. This flexibility meant that the theory became almost unfalsifiable: it was never clear whether rejection of a particular empirical model cast doubt on the theoretical core or merely on one of the host of auxiliary assumptions which were required for estimation. However, this approach allowed the empirical analyst to take account of a wide range of historical, institutional, and physical constraints in a reasonably flexible manner. This increased the applicability of the theory and allowed the model to better represent the data while remaining consistent with the basic theory as cast in terms of observable variables.

In view of the dichotomy between theory and econometrics, the major problem the econometricians faced was providing appropriate estimators for these given economic models. Thus the Cowles Commission tradition of econometric theory was characterized by a proliferation of estimators. Special estimators were needed to take account of simultaneity (IV, FIML, LIML, 2SLS, 3SLS, k-class estimators, etc.), nonspherical disturbances (various feasible GLS estimators, like Cochrane-Orcutt and SURE), distributed lags (Koyck, Almon, rational), etc.; see the review of the development of these estimators in Pesaran (1987a). Certainly this period generated far more estimation procedures than test procedures; see Qin (1991). Even without formal procedures for diagnostic and misspecification testing, the predictions (conditional expectations) generated by the model could be compared directly with the actuals, allowing an informal judgement of statistical adequacy.

The Cowles Commission view, though dominant, was never unchallenged. A prominent example is Vining's (1949) response to the attack that Koopmans (1947), representing the emerging econometric approach of the Cowles Commission, launched on the predominantly data-instigated method of Burns and Mitchell at the National Bureau of Economic Research. But throughout the period when the traditional strategy dominated empirical work there were others who questioned it.

Since the traditional strategy has been the subject of much subsequent criticism, it is important to emphasize its success at the time. Macro-econometric models, for which Bodkin, Klein, and Marwah (1991) provide a history, are a case in point. The theory, Keynesian and Neo-classical, was cast


into a form that could be estimated successfully on an equation-by-equation basis; the system as a whole replicated the main features of the economy, as the independent tests of its cyclical properties by Adelman and Adelman (1959) showed; and the systems were found useful and were widely adopted by business and government. These models are still widely used despite their failures and despite the criticisms that academic theorists and econometricians have directed at them. Mankiw (1988) uses an analogy with Ptolemaic and Copernican astronomy to explain their persistence. Ptolemaic methods continued to be used in navigation for many years after the development of the Copernican system because they provided more accurate predictions for practical purposes than the embryonic Copernican procedures. Implicit in the analogy is the belief that the new theory will, like the Copernican, eventually develop predictions that are practically useful.

It is certainly true that the theory used in large macro-models, and which continues to be used in the Average Economic Regressions that fill journals to this day, is largely old theory in the sense that it does not embody full stochastic dynamic optimization with incomplete markets. The difficulties of doing so are evident in Jorgenson's (1963) classic study of investment. The theory of demand for capital was set up in an intertemporal optimisation framework, but it required the assumption that there were complete markets for second-hand capital goods to identify the shadow price of capital with the user cost, and adjustment dynamics had to be added in an ad hoc way that was inconsistent with intertemporal optimization. The example of investment is discussed in more detail in Pesaran and Smith (1992).

3. The divorce of theory and econometrics

During the 1970s the traditional strategy received three major shocks. Firstly, there was increasing evidence that the models did not represent the data, being outperformed on occasion by univariate models. This led to an increased emphasis on developing measures of model adequacy, a proliferation of diagnostic and misspecification tests, and a shift away from emphasis on the estimation of a given model towards issues of dynamic specification and model selection. Secondly, there was an increasing insistence, by theorists, that the models did not represent the theory. Dynamic optimization with rational expectations and incomplete markets implied that the estimated relationships between observable variables would be neither stable nor structural. This theory was used as the basis of the Lucas Critique, though in his paper Lucas (1976) used only simple illustrative models rather than fully specified dynamic optimization problems to make the point. Strangely, given the devastating nature of the critique, there were few attempts to test its empirical relevance. Theorists regarded it as demolishing econometrics; empirical workers treated it as a


logical curiosum of doubtful empirical significance; see the discussion in Alogoskoufis and Smith (1991b). Criticism of econometric models by theorists became the norm and spread beyond the New Classical school; e.g., Mankiw (1988) and Summers (1991). Thirdly, decision makers often complained that the models were ineffective for the practical purposes of forecasting and policy analysis. In terms of the three general criteria of model evaluation discussed in Pesaran and Smith (1985), the models were seen as statistically inadequate, theoretically inconsistent, and practically irrelevant.

The critiques of the 1970s prompted a number of responses, which tended to undermine the traditional dichotomy between the functions of the theorists and the econometricians. The response of many econometricians was to put much greater priority on representing the data, largely ignoring the theory, which they saw as having relatively little to contribute, particularly for the purposes of forecasting and business-cycle research. This return to purely data-instigated approaches produced the explicitly atheoretical approach of Sims (1980) using VARs (see Cooley and LeRoy, 1985, for an early critique of this approach). VARs need not be inherently atheoretical, since rational expectations and optimisation can impose cross-equation restrictions on the VAR which can be used to make inferences about the 'deep' parameters of tastes and technology; e.g., Hansen and Sargent (1991). Only slightly less atheoretical was the approach of Hendry (1987), where the only role envisaged for the theory was to provide some simple long-run equilibrium restrictions in the context of error correction models; Alogoskoufis and Smith (1991a) provide a critique of the atheoretical nature of the error correction models (ECMs) used by Hendry and others. The two approaches have now been combined in the cointegration approach pioneered by Granger; see, for example, the collection of papers in Engle and Granger (1991), Phillips (1991), and Phillips and Hansen (1990). While it is true that cointegration theory goes beyond the traditional scope of the pure time-series approach and considers relations between economic variables, most applications use economic theory (if at all) only to specify long-run relationships between observable variables.

The response of many theorists was to place much greater priority on representing the theory, often at the expense of statistical analysis. There were two strands to this astatistical approach. One strand, advocated recently by Summers (1991), emphasises reliance on 'stylised facts' rather than elaborate formal econometric models; this approach is discussed in Pesaran and Smith (1992). The other astatistical strand, followed by many real business cycle theorists, focuses on matching the unconditional sample moments with the corresponding moments predicted by the nonlinear dynamic stochastic optimisation model. Since these models rarely have a closed-form solution, they are often calibrated rather than statistically estimated and are solved using simulation techniques. A prominent example of this approach can be found in


Kydland and Prescott (1982, 1991). Although Kydland and Prescott do not use methods of statistical inference, they regard their procedures as econometric, in the spirit of some of Frisch's exercises. They say: 'The key econometric problem is to select the parameters for an experimental economy' (1991, p. 169). This is not the traditional definition of the econometric problem, which focuses on making statistical inferences (be they Bayesian or Classical) about particular aspects of the real economy. While the use of optimisation in applied work is to be welcomed and in general can provide a useful framework for empirical analysis, the calibration techniques employed by real business cycle researchers are subject to a number of important criticisms and represent a backward step in the history of econometrics, in the sense that they return to one of the least desirable features of the traditional strategy: the emphasis on estimation rather than on testing and model evaluation.¹

In principle such nonlinear dynamic optimisation models can be solved and estimated by maximum-likelihood methods using the brute force of computer simulation; a good application of this approach is Rust (1987). In general, however, this approach has proved to be computationally unattractive except for relatively simple specifications. To make it operational, the models used tend to employ simple functional forms for tastes and technology, assume representative agents with homogeneous information sets,² and give little or no consideration to institutional constraints. These are the type of criticisms that Summers (1991) also makes of the work of Hansen and Singleton (1982, 1983). The information homogeneity assumption is discussed in Pesaran (1987b, 1990b), while Kirman (1992) discusses the significance of the representative agent assumption. The lack of consideration of institutional constraints means that important features of reality, e.g., the structure of the tax system, are difficult to take into account, reducing the usefulness of these models for policy analysis. As a result, particular solutions for the state and control variables, even if they are obtained via computer simulations, are often not in accordance with important features of reality. More importantly, respecification of the underlying optimization problem, with the aim of making the predictions of the model more conformable with reality, invariably involves considerable theoretical and computational effort, to the extent that it makes the whole approach very inflexible. The optimization approach can thus become a straightjacket rather than a flexible tool of enquiry.

¹ Kim and Pagan (1995) emphasize some of the common features of the calibration and econometric approaches. Here we focus on the differences.

² The information homogeneity assumption, as pointed out by Arrow (1986), is particularly inappropriate in an explanatory model of decentralised markets where individual differences are the prime motive for trade.


4. Towards a bridge between theory and evidence

One important role of economic theory is to produce general, unifying insights which promote our understanding of the working of the economic system by abstracting from the complex mass of detail which constitutes 'reality', thus allowing the theorist to provide tractable analysis. The usefulness of any abstraction depends on whether it opens rather than closes doors, that is, whether it enables the theorist to gain a deeper understanding of a wider range of interconnected phenomena. The theory also acts as a unifying framework within which new results can be related to what is already known. The question is how to devise a procedure that provides the precision of modern dynamic stochastic optimisation theory but at the same time does not suffer from its formal strictures when applied literally to economic problems.

There are two general solution methods for dynamic optimisation problems. The standard method used in the literature is Bellman's optimality procedure, which uses the recursive formulation of the intertemporal objective function and solves the resultant functional equations recursively; for a general treatment of this approach see, for example, Stokey, Lucas, and Prescott (1989). The alternative solution strategy is to use a stochastic version of the Maximum Principle discussed in Whittle (1982). As Whittle notes, in the conventional approach the optimization proceeds by choosing the values of the decision variables and then deriving the values of the state variables, from the 'plant' or 'transition' equations, recursively. Under the Maximum Principle the decision and the state variables are determined jointly, and the constraints are taken into account by means of Lagrange multipliers which, when appropriately formulated, can be interpreted as shadow prices (or 'expected shadow prices' in the stochastic case). The Maximum Principle allows the decoupling of the problem of choosing the values of the decision variables at time t from that of deciding the values of the decision variables at other times. The intertemporal links between the values of the decision variables in different time periods are thus established via the values of the shadow prices and the state variables.

This solution strategy is particularly useful for empirical analysis. By approximating the shadow prices in terms of linear (or possibly simple nonlinear) functions of the observables, it becomes possible to combine the flexibility of regression analysis with the interpretive power of theoretical analysis.

The concept of shadow prices has a long history in economics and arises naturally in optimization problems subject to constraints. In many applications these shadow prices directly correspond to prices of goods or services in particular future or current markets that, for one reason or another, do not exist. A great deal of the complexity of the empirical implementation of the dynamic optimization approach is due to missing markets that render the shadow prices unobservable. When prices are observable, perhaps with error (as with Jorgenson's cost of capital and Tobin's Q in theories of investment), they can be used


directly in empirical analysis. But in most interesting cases they are not observable. For instance, under uncertainty, when investment is irreversible, as in Pindyck (1988), the return that justifies investment is greater than the normal return because of the 'option value' of waiting.

In an Arrow-Debreu world there are complete markets for all current, contingent, and forward contracts. The widespread absence of forward contracts means that agents have to condition their decisions not on the known forward price but on their expectations of the price in the future. Economists have dealt with this problem by replacing the unobservable (to the econometrician) expectations of the future price by its observed determinants. The equally widespread absence of current and contingent markets means that agents have to condition on unobservable (to the econometrician) shadow prices.³ Our empirical approach involves replacing the unobservable shadow prices by linear or simple nonlinear functions of the observable state variables which determine them. What this procedure does is to maintain the structure of dynamic optimization, but allow other relevant institutional constraints (taxation and ownership rights) and physical constraints (e.g., the exhaustibility of oil reserves) to enter the problem through their influence on the shadow prices. Thus it provides a theoretically consistent way to introduce relevant factors not explicitly treated by the basic theory, but which constrain the optimizing behaviour of economic agents. This is in accord with a rich tradition of using shadow prices in economics to encapsulate the information needed by decision makers when markets do not exist; for an early example see Sen's (1960) work on the choice of techniques. This framework also opens an avenue of discourse with the theorist, since the empirical significance of problems such as liquidity constraints can be presented more readily in theoretical terms.

Below we illustrate this approach with two examples, consumption and oil production. Both examples are partial equilibrium in the sense that agents optimise with respect to given exogenous variables and constraints.

4.1. Consumption behaviour under liquidity constraints

The saving behaviour of consumers facing liquidity constraints has recently been the subject of a number of theoretical and empirical investigations; see, for example, Helpman (1981), Zeldes (1989), and Deaton (1991). Here we provide a solution to the consumer's optimization problem using a stochastic version of the Maximum Principle that allows us to write the consumer's decision rules (the first-order conditions of the optimization problem) in terms of (expected) shadow prices.

³ The approach is of much wider applicability; another obvious example is investment theory, which is discussed in Pesaran and Smith (1992).
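Before turning to the examples, the generic recipe can be stated schematically. The notation below (decision variable d_t, state variable s_t, transition function h, constraint function g) is ours and is intended only as an illustrative sketch, not as the paper's formal apparatus. The agent solves

    max  E( Σ_{τ=0}^{∞} β^τ u(d_{t+τ}, s_{t+τ}) | Ω_t )

subject to s_{t+τ} = h(s_{t+τ-1}, d_{t+τ}) + v_{t+τ} and g(s_{t+τ}) ≥ 0. Under the Maximum Principle the decision and state variables are determined jointly from the unconstrained maximization of

    E( Σ_{τ=0}^{∞} β^τ [ u(d_{t+τ}, s_{t+τ}) + λ_{t+τ}{h(s_{t+τ-1}, d_{t+τ}) + v_{t+τ} - s_{t+τ}} + μ_{t+τ} g(s_{t+τ}) ] | Ω_t ),

so that the first-order conditions link today's decision to the expected shadow prices E_t(λ_{t+i}) and E_t(μ_{t+i}). The final, econometric step replaces these unobserved expected shadow prices by an approximating function of observed state variables z_t, say F(z_t, θ), typically linear in z_t.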


Consider the consumption decision problem of an individual with initial real net worth, w_{t-1}, and a labour income stream y_t, y_{t+1}, ..., and suppose that the consumer sets his/her consumption stream c_t, c_{t+1}, ... by maximizing the expected life-time utility functional

    E( Σ_{τ=0}^{∞} β^τ u(c_{t+τ}) | Ω_t ),    (1)

subject to the period-by-period budget constraints:⁴

    w_{t+τ} = (1 + ρ) w_{t+τ-1} + y_{t+τ} - c_{t+τ},    τ = 0, 1, 2, ...,    (2)

where Ω_t is the consumer's information set at time t, β = 1/(1 + r), 0 ≤ β < 1, r is the subjective rate of time preference, and ρ is the real rate of interest (assumed fixed). It is further assumed that the single-period utility function, u(c_t), satisfies the usual conditions, namely that u(c_t) is increasing in c_t and that the marginal utility of consumption is nonincreasing in c_t [u'(c_t) > 0, u''(c_t) < 0].

⁴ Note that in this formulation w_t represents the level of real (net) assets at the end of period t.

Suppose now that in solving the above optimization problem the consumer also faces the borrowing constraints

    w_{t+τ} ≥ -b_{t+τ},    τ = 0, 1, 2, ...,    (3)

where b_t > 0 is the given level of real borrowing allowed at time t. In this case the familiar quasi-martingale condition,

    u'(c_t) = β(1 + ρ) E[u'(c_{t+1}) | Ω_t],    (4)

will no longer be valid and needs to be modified to account for the borrowing constraints (3). As discussed above, the standard method of solution uses Bellman's optimality procedure; for an application of this method to the above liquidity-constrained consumption problem see, for example, Zeldes (1989) and Deaton (1991). The alternative is to use a stochastic version of the Maximum Principle. To apply the Maximum Principle to the above problem, we first note that by using (2) the borrowing constraints (3) may be written as

    H_{t+τ} = (1 + ρ)^{τ+1} w_{t-1} + Σ_{i=0}^{τ} (1 + ρ)^{τ-i} s_{t+i} + b_{t+τ} ≥ 0,    τ = 0, 1, 2, ...,    (3')

where s_t = y_t - c_t. The first-order conditions for the consumer's optimization problem may now be obtained from the necessary conditions for the unconstrained maximization of the Lagrangian function

    E( Σ_{τ=0}^{∞} β^τ G_{t+τ} | Ω_t ),    (5)

with respect to c_{t+τ} and w_{t+τ}, τ = 0, 1, 2, ..., where

    G_{t+τ} = u(c_{t+τ}) + λ_{t+τ}[(1 + ρ) w_{t+τ-1} + y_{t+τ} - c_{t+τ} - w_{t+τ}] + μ_{t+τ} H_{t+τ}.    (6)

The auxiliary random variables λ_{t+τ} and μ_{t+τ}, τ = 0, 1, 2, ..., are the Lagrange multipliers, which can be interpreted as the shadow prices of money and of the liquidity constraints, respectively. Taking first-order derivatives of (5) with respect to c_t and w_t we have

    u'(c_t) = E_t(λ_t) + Σ_{i=0}^{∞} γ^i E_t(μ_{t+i}),    (7)

and

    E_t(λ_t) = γ E_t(λ_{t+1}),    (8)

where γ = β(1 + ρ) = (1 + ρ)/(1 + r). Similarly, taking derivatives of (5) with respect to c_{t+1} we have

    E_t[u'(c_{t+1})] = E_t(λ_{t+1}) + E_t( Σ_{i=1}^{∞} γ^{i-1} μ_{t+i} ).    (9)

Using (8) to eliminate E_t(λ_t) and E_t(λ_{t+1}) from (7) and (9) now yields the following generalization of (4):⁵

    u'(c_t) = γ E_t[u'(c_{t+1})] + μ_t,    (10a)

where

    μ_t = max{ 0, u'(x_t) - γ E_t[u'(c_{t+1})] },    (10b)

and x_t = y_t + (1 + ρ) w_{t-1} + b_t is the value of 'cash in hand' at the start of the period. It is clear that with the relaxation of the borrowing constraint (namely, as b_t is allowed to increase) u'(x_t) becomes smaller and smaller and μ_t tends to zero. Eqs. (10a) and (10b) also imply

    u'(c_t) = max{ u'(x_t), γ E_t[u'(c_{t+1})] }.    (11)

⁵ Note that since it is assumed that y_t, c_t, w_{t-1}, and b_t are in the agent's information set, then so will be μ_t.

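For the isoelastic marginal utility u'(c) = c^{-α} used later in this section, (11) can be inverted (a direct substitution of ours, not a step taken in the original derivation) to give consumption in 'min' form:

    c_t = min{ x_t, (γ E_t[c_{t+1}^{-α}])^{-1/α} },

so that the consumer spends all cash in hand whenever the desired level of consumption is unaffordable.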

Eq. (11) corresponds to Eq. (6) of Deaton (1991). A derivation using the Bellman optimality principle for the special case where γ < 1 (the case where the discount rate exceeds the real interest rate) and b_{t+τ} = 0, for τ = 1, 2, 3, ..., is given in Zeldes (1989). For γ ≥ 1, (11) has multiple solutions, given by

    u'(c_t) = γ^{-1} u'(c_{t-1}) - γ^{-1} μ_{t-1} - ε_t,

where ε_t is an arbitrary martingale difference process. Assuming that ε_t is uncorrelated with μ_{t-1}, since u''(c_t) < 0 it follows that ∂c_t/∂μ_{t-1} > 0; namely, the greater the degree of liquidity constraint experienced by the consumer in period t - 1, the larger will be the consumer's consumption (relative to its past) in period t. For γ < 1, familiar results from the rational expectations literature can be used to show that the unique solution to (11) is

    u'(c_t) = μ_t + Σ_{i=1}^{∞} γ^i E(μ_{t+i} | Ω_t),    (12)

which is well defined if the transversality condition,

    lim_{N→∞} γ^N E[u'(c_{t+N}) | Ω_t] = 0,    (13)

is satisfied.

Computing exact solutions to (10a) and (10b) poses intractable problems except in the simplest cases. Deaton (1991) does it numerically for u'(c) = c^{-α}, with labour income y_t IID N(μ, σ²) and b_t = 0, t = 0, 1, 2, ...; this is a one-dimensional computational problem. When he generalises the problem slightly to allow y_t to follow an AR(1) process, the curse of dimensionality strikes. 'Rather than transfer to a supercomputer' (p. 1232), Deaton adopts an approximation. If an R-point grid is used for each variable, then an R² grid is required for the two state variables x_t and y_t. If borrowing is introduced, and b_t is assumed to follow an nth-order autoregression while y_t follows an mth-order autoregression, the dimensionality goes up from 2 to m + n + 1, and the computational problems grow exponentially.

There is another, more pragmatic approach. In the case that the autoregressions generating y_t and b_t are of orders m and n, with normally distributed disturbances, taking the second term in (12) and writing it as a function of the state variables, we have

    Σ_{i=1}^{∞} γ^i E(μ_{t+i} | Ω_t) = F(z_t, θ),    (14)

where θ is a vector of the parameters of the utility function and of the processes generating the exogenously given labour income and the borrowing limits,


and z_t is the vector of predetermined variables, z_t = (x_t, y_t, y_{t-1}, ..., y_{t-m}; b_t, b_{t-1}, ..., b_{t-n}). Even if the borrowing limits are unobserved, they will reflect observable factors, like credit and hire purchase controls, which have been found to be important in practice.

To define the switching point of the consumption decision, consider the level of consumption that would be chosen if the constraint is not binding in period t.⁶ Denote this level of consumption by c*_t, defined by u'(c*_t) = F(z_t, θ). It then follows from (12) and (14) that

    u'(c_t) = F(z_t, θ)   if c*_t ≤ x_t,
    u'(c_t) = u'(x_t)     if c*_t > x_t,    (15)

where x_t is cash in hand. Notice that this is a standard limited dependent variable formulation; a similar approach to solving limited dependent variable rational expectations models with future expectations has been followed in Pesaran and Samiei (1995). Actual consumption equals desired consumption if there is enough cash in hand to finance it, in which case the shadow price μ_t is zero. Otherwise, actual consumption equals cash in hand, and the shadow price adjusts to make this equality hold.

The next step is to approximate F(z_t, θ) by linear or simple nonlinear functions of z_t. Adopting, for example, a linear specification, we have F(z_t, θ) = α'z_t + v_t, where v_t is the error of approximation. In the present application this error may well be quite small, since we have removed the major source of nonlinearity: the switch in function depending on whether or not the constraint bites. Certainly the diagram in Deaton (1991, p. 1228), which is close to being piecewise linear, suggests that the approximation would be close; in fact, if the axes had been logarithmic, the diagram would have been even closer to piecewise linear. This approach removes the need to confine ourselves to quadratic utility functions to obtain econometrically tractable specifications. However, although our choice is widened, we do have to choose some specific form for the utility function, and this will inevitably involve some error of approximation. As is conventional, we ignore this error below. But there is no reason to suppose that the error in approximating u'(c_t) by a particular parametric form is necessarily smaller than the error in approximating F(z_t, θ), which itself depends on the functional form chosen for u'(c_t). Suppose u'(c_t) = c_t^{-α}; then a log-linear specification for estimation takes the form⁷

    log c_t = a' log z_t + u_t   if c*_t ≤ x_t,
    log c_t = log x_t            if c*_t > x_t,

⁶ We do not, however, rule out that the constraint could be binding in some future period, namely that μ_t = 0 and E(μ_{t+i} | Ω_t) ≠ 0 for some i > 0.


which is perfectly tractable econometrically on individual data.⁷ This specification is theoretically consistent, the size of the error of approximation can, in principle, be measured, and the shadow prices μ_t can be recovered for each individual, using the parameter estimates and the observations on y_t, x_t, and b_t.

⁷ Note that in this case we have approximated log c*_t = -(1/α) log{F(z_t, θ)} by a' log z_t in the text.
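Because the threshold log x_t is observed, the switching specification above has the structure of a censored regression, and its likelihood is standard. The sketch below is ours: the normality of u_t, the variable names, and the simulated data are assumptions made for illustration, not part of the paper.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def neg_loglik(params, logc, logx, logZ):
        # Switching model: log c_t = a'log z_t + u_t if c*_t <= x_t,
        # and log c_t = log x_t otherwise, with u_t ~ N(0, sigma^2).
        a, sigma = params[:-1], np.exp(params[-1])   # keep sigma positive
        mu = logZ @ a                                # approximates log c*_t
        binding = np.isclose(logc, logx)             # all cash in hand spent
        ll = np.where(binding,
                      norm.logsf(logx, loc=mu, scale=sigma),   # P(c*_t > x_t)
                      norm.logpdf(logc, loc=mu, scale=sigma))  # unconstrained
        return -ll.sum()

    # Illustration on simulated data.
    rng = np.random.default_rng(0)
    T = 1000
    logZ = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])
    a0 = np.array([0.5, 0.3, -0.2])
    logc_star = logZ @ a0 + 0.1 * rng.normal(size=T)   # desired consumption
    logx = logZ @ a0 + 0.2 * rng.normal(size=T)        # cash-in-hand threshold
    logc = np.minimum(logc_star, logx)                 # observed consumption
    fit = minimize(neg_loglik, x0=np.zeros(4), args=(logc, logx, logZ),
                   method="BFGS")
    a_hat, sigma_hat = fit.x[:-1], np.exp(fit.x[-1])

np.isclose is used rather than exact equality so that constrained observations are still detected in the presence of floating-point noise in recorded data.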

4.2. Oil production

As a second illustration of the empirical usefulness, in estimation, of replacing the unobserved stochastic shadow prices by their determinants, consider the case of oil production, discussed in Pesaran (1990a). This is a simplified version of the model in Favero and Pesaran (1994), which also allows for differential taxation and the joint determination of exploration and development. Consider a price-taking, profit-maximizing oil producer with the profit function

    π_t = p_t q_t - c(q_t, R_{t-1}) - w_{1t} x_{1t} - w_{2t} x_{2t},    (16)

where p_t is the real oil price, q_t the quantity pumped in period t, c(q_t, R_{t-1}) the operating cost of pumping q_t given total reserves in the field at the end of the previous period, R_{t-1}; w_{1t} and w_{2t} are the unit costs of exploration and field development, and x_{1t} and x_{2t} are the levels of exploratory and development effort expended. The firm operates under the following resource and technological constraints:

    ΔR_{ut} = F(x_{1t}, X_{1,t-1}) - φ(x_{2,t-1}, R_{u,t-1}) + u_{1t},    (17)

    ΔR_{dt} = φ(x_{2,t-1}, R_{u,t-1}) - q_t + u_{2t}.    (18)

The function F(·) describes how current exploration effort and cumulative past effort X_{1,t-1} produce discoveries of new undeveloped reserves R_{ut}. The function φ(·) describes how past undeveloped reserves are converted into developed reserves R_{dt} by past development effort, with R_t = R_{ut} + R_{dt}. The disturbance terms u_{1t} and u_{2t} are assumed to be orthogonal to the firm's information set Ω_{t-1} and represent the effect of revisions/additions to reserves and the uncertainty that surrounds the discovery and the development of oil fields.

The intertemporal problem facing the firm is to maximize the expected discounted future stream of profits subject to the constraints (17) and (18) and the stock-flow identity X_{1t} = X_{1,t-1} + x_{1t}. Application of the Maximum Principle to this problem involves the unconstrained maximization of

    E( Σ_{s=0}^{∞} β^s { π_{t+s} + λ_{1,t+s}[φ(x_{2,t+s-1}, R_{u,t+s-1}) - q_{t+s} + u_{2,t+s} - ΔR_{d,t+s}] + λ_{2,t+s}[F(x_{1,t+s}, X_{1,t+s-1}) - φ(x_{2,t+s-1}, R_{u,t+s-1}) + u_{1,t+s} - ΔR_{u,t+s}] + λ_{3,t+s}[X_{1,t+s-1} + x_{1,t+s} - X_{1,t+s}] } | Ω_{t-1} ),


with respect to q_{t+s}, x_{1,t+s}, x_{2,t+s}, R_{d,t+s}, and R_{u,t+s}, for s = 0, 1, 2, ..., where β is the discount factor. The Lagrange multipliers λ_{1t} and λ_{2t} are the co-state random variables, which represent the shadow prices of developed and undeveloped reserves, respectively, and λ_{3t} is the shadow price of exploration.

Industry evidence suggests specific forms for F(·) and φ(·), and a cost function of the form

    c(q_t, R_{t-1}) = (δ_2 + δ_3) q_t² / (2 R_{t-1}),    (19)

with δ_2 > 0 and δ_3 > 0. The first-order condition (Euler equation) with respect to q_t can be written as

    E_{t-1}(λ_{1t}) = E_{t-1}[ p_t - ∂c(q_t, R_{t-1})/∂q_t ].    (20)

The expected shadow price of developed reserves is equal to the expected difference between the real well-head price and the marginal extraction cost. Using (19) in (20) now yields

    q_t = [ R_{t-1} / {δ_3(1 + γ)} ] [ E_{t-1}(p_t) - E_{t-1}(λ_{1t}) ],    (21)

where γ = δ_2/δ_3 > 0. Production depends on the excess of the expected oil price over the expected shadow price of the reserves. For estimation, the expectations of the price and of the shadow price can be treated symmetrically and replaced by linear approximations in terms of their determinants. However, we have retained the intrinsic nonlinearity which comes from the physical constraints relating production to reserves and pressure in the field. The extraction equation, Eq. (21), has the nice property that the price responsiveness of oil supply declines with the amount of available reserves. As determinants of E_{t-1}(λ_{1t}), Favero and Pesaran (1994) use lagged values of discoveries, development effort, reserve-output ratios, and the costs of exploration and development. By substituting for both expected and actual shadow prices, (21) provides the basis of a reduced form which can be estimated. This reduced form is explicitly derived from a stochastic dynamic optimization problem, takes account of the physical constraints on the oil supply process, and, as shown in Favero and Pesaran


(1994), can also readily take account of a priori information about the complicated institutional structure of the tax system and the long lags between exploration, discovery, development, and production. All these features (stochastic optimization, geology, tax, and time delays) are formalized in a mutually consistent and coherent way. It is this consistency of the empirical evidence with the theory which could help build bridges between the theorists and econometricians.
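A minimal sketch of the final estimation step may help fix ideas. Substituting linear approximations in lagged determinants for both E_{t-1}(p_t) and E_{t-1}(λ_{1t}) in (21) makes the extraction rate q_t/R_{t-1} linear in those determinants, so a first pass at the reduced form can be estimated by least squares; since the regressors are dated t - 1, the orthogonality of the disturbances to Ω_{t-1} makes this a natural starting point. The code below is ours, and the variable names are illustrative; it is not the estimator used by Favero and Pesaran (1994).

    import numpy as np

    def estimate_extraction_rate(q, R_lag, Z_lag):
        # OLS of the extraction rate q_t / R_{t-1} on lagged determinants.
        # q:     (T,) production series
        # R_lag: (T,) developed reserves at the end of period t-1
        # Z_lag: (T, k) lagged determinants (constant, lagged price,
        #        discoveries, development effort, reserve-output ratio,
        #        unit exploration and development costs, ...)
        y = q / R_lag                                   # extraction rate in (21)
        theta, *_ = np.linalg.lstsq(Z_lag, y, rcond=None)
        return theta, y - Z_lag @ theta                 # coefficients, residuals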

5. Concluding remarks

A necessary condition for being able to evaluate theories is that we can compare the predictions they make with the evidence. In economics the relevant predictions relate to the conditional distributions of the observables. This is not a sufficient condition for at least two reasons. Firstly, there is the problem of inference: there is no agreed method of judging whether the conditional predictions match the data and thus whether the evidence rejects the theory. Secondly, the conditional predictions result from the conjunction of the theory and the auxiliary assumptions required to produce an empirical model, and it is not clear which is being rejected. These two problems are sufficiently serious that it seems unlikely that economic theories can be tested. However, within an agreed procedure for inference it may be possible to judge whether the conditional predictions of a particular empirical model, which embodies the theory, do in fact match the data better than those of a rival model.

With the 'old' theory this was relatively straightforward, and theory and evidence were related in the traditional strategy discussed in Section 2. With the 'new' theory, involving stochastic dynamic optimization with incomplete markets, comparing predictions with evidence has become much more difficult. The theory can become a straightjacket where the models cannot easily be solved to provide predictions for observed data. This is further accentuated when the models cannot be extended to incorporate prior information about physical or institutional features of the problem. As we described in Section 3, the professional response to this tension between theory and evidence was for increasing amounts of work to become either 'atheoretical', using VARs, or 'astatistical', resorting to calibration techniques or simple stylized facts. This is unsatisfactory. Formal theory is essential in enabling us to organize our a priori knowledge about a problem in a consistent and coherent way. But the formal theory must also be confronted with the data, at least indirectly via a particular empirical model, if it is to have any relevance and to enhance our understanding of real-life problems.

In Section 4 we suggest an approach which might bridge the current gulf between theory and evidence. This approach does conduct the theoretical analysis within the 'new' framework, i.e., as the solution to an explicit dynamic


optimization problem. But it escapes from the straightjacket, and avoids the computational and estimation problems, by replacing the unobserved Lagrange multipliers or shadow prices by functions of the variables that determine them. As our two examples show, in most cases the resulting models, although nonlinear, can be solved to give conditional predictions of the observables. They can be solved because, although they maintain the intrinsic nonlinearity (e.g., associated with whether or not a constraint binds), they approximate the incidental nonlinearity in the determination of the shadow prices with linear or other tractable functions. As a result they can be estimated without too much difficulty.

We have demonstrated the approach with two very different examples, consumption and oil production, and the approach clearly has the potential to be applied to a wide range of other cases. However, we would hope that it has the potential not only to be an approach to estimation but also to improve the dialogue between theory and evidence. As we discussed above, a major source of tension between theory and econometrics arises from differences of purpose. The prime objective of the empirical worker is to explain the data, albeit within a theoretical framework which provides consistency and coherence. This is not the prime objective of the theorist. Being able to present the evidence in theoretically coherent terms (shadow prices) rather than as the parameters of conditional distributions may aid the dialogue by delivering 'facts in a form where they can be apprehended by theory', to adopt the phrase used by Summers (1991). Improving this dialogue is important if the econometrician is to provide 'the healthy element of disturbance that constantly threatens and disquiets the theorist and prevents him from coming to rest on some inherited obsolete set of assumptions', to adopt the phrase used by Frisch (1933, p. 2).

References

Adelman, I. and F.L. Adelman, 1959, The dynamic properties of the Klein-Goldberger model, Econometrica 27, 596-625.
Alogoskoufis, G. and R. Smith, 1991a, On error correction models, Journal of Economic Surveys 5, 95-128.
Alogoskoufis, G. and R. Smith, 1991b, The Phillips curve, the persistence of inflation and the Lucas critique: Evidence from exchange rate regimes, American Economic Review 81, 1254-1275.
Arrow, K.J., 1986, Economic theory and the hypothesis of rationality, Journal of Business 59; reprinted in: The new Palgrave: A dictionary of economics, Vol. II, 69-75.
Bodkin, R.G., L.R. Klein, and K. Marwah, 1991, A history of macroeconometric model-building (Edward Elgar, Aldershot).
Cooley, T.F. and S.F. LeRoy, 1985, Atheoretical macroeconometrics: A critique, Journal of Monetary Economics 16, 283-308.
Deaton, A., 1991, Savings and liquidity constraints, Econometrica 59, 1221-1248.
Engle, R.F. and C.W.J. Granger, eds., 1991, Long-run economic relationships (Oxford University Press, Oxford).
Favero, C.A. and M.H. Pesaran, 1994, Oil investment in the North Sea, Economic Modelling 11, 308-329.


Frisch, R., 1933, Editorial, Econometrica 1, 1-4.
Hansen, L.P. and T.J. Sargent, 1991, Rational expectations econometrics (Westview Press, Boulder, CO).
Hansen, L.P. and K.J. Singleton, 1982, Generalised instrumental variable estimation of nonlinear rational expectations models, Econometrica 50, 1269-1286.
Hansen, L.P. and K.J. Singleton, 1983, Stochastic consumption, risk aversion and the temporal behaviour of asset returns, Journal of Political Economy 91, 249-265.
Helpman, E., 1981, Optimal spending and money holdings in the presence of liquidity constraints, Econometrica 49, 1559-1570.
Hendry, D.F., 1987, Econometric methodology: A personal perspective, in: T.F. Bewley, ed., Advances in econometrics, Fifth world congress, Vol. II (Cambridge University Press, Cambridge).
Hylleberg, S. and M. Paldam, 1991, New approaches to empirical macroeconomics, Scandinavian Journal of Economics 93, 121-128.
Jorgenson, D.W., 1963, Capital theory and investment behaviour, American Economic Review, Papers and Proceedings 53, 247-259.
Kim, K. and A.R. Pagan, 1995, The econometric analysis of calibrated macroeconomic models, in: M.H. Pesaran and M. Wickens, eds., Handbook of applied econometrics, Vol. I (Basil Blackwell, Oxford).
Kirman, A.P., 1992, Whom or what does the representative individual represent?, Journal of Economic Perspectives 6, 117-136.
Koopmans, T.C., 1947, Measurement without theory, Review of Economics and Statistics 29, 161-172.
Kydland, F.E. and E.C. Prescott, 1982, Time to build and aggregate fluctuations, Econometrica 50, 1345-1370.
Kydland, F.E. and E.C. Prescott, 1991, The econometrics of the general equilibrium approach to business cycles, Scandinavian Journal of Economics 93, 161-178.
Lucas, R.E. Jr., 1976, Econometric policy evaluation: A critique, Carnegie-Rochester Conference Series on Public Policy 1, 19-46.
Mankiw, N.G., 1988, Recent developments in macroeconomics: A very quick refresher course, Journal of Money, Credit and Banking 20, 436-439.
Pesaran, M.H., 1987a, Econometrics, in: The new Palgrave: A dictionary of economics, Vol. II (Macmillan, London) 8-22.
Pesaran, M.H., 1987b, The limits to rational expectations (Basil Blackwell, Oxford).
Pesaran, M.H., 1988, The role of theory in applied econometrics, Economic Record, Dec., 336-339.
Pesaran, M.H., 1990a, An econometric analysis of exploration and extraction of oil in the UK continental shelf, Economic Journal 100, 367-390.
Pesaran, M.H., 1990b, Linear rational expectations models under asymmetric and heterogeneous information, Paper presented to the workshop on learning and expectations, Siena, Italy, June.
Pesaran, M.H. and H. Samiei, 1995, Limited-dependent rational expectations models with future expectations, Journal of Economic Dynamics and Control, forthcoming.
Pesaran, M.H. and R. Smith, 1985, Evaluation of macroeconometric models, Economic Modelling 2, 125-134.
Pesaran, M.H. and R. Smith, 1992, The interaction between theory and observation in economics, Economic and Social Review 24, 1-23.
Phillips, P.C.B., 1991, Optimal inference in cointegrated systems, Econometrica 59, 283-306.
Phillips, P.C.B. and B. Hansen, 1990, Statistical inference in instrumental variables regression with I(1) processes, Review of Economic Studies 57, 99-125.
Pindyck, R.S., 1988, Irreversible investment, capacity choice and the value of the firm, American Economic Review 78, 969-985.


Qin, D., 1991, Testing in econometrics during 1930-1960, Paper presented at the conference on the significance of testing in econometrics, Tilburg, The Netherlands.
Rust, J., 1987, Optimal replacement of GMC bus engines: An empirical model of Harold Zurcher, Econometrica 55, 999-1033.
Sen, A.K., 1960, Choice of techniques: An aspect of the theory of planned economic development, 3rd ed., 1968 (Basil Blackwell, Oxford).
Sims, C.A., 1980, Macroeconomics and reality, Econometrica 48, 1-48.
Stigum, B.P., 1990, Towards a formal science of economics (MIT Press, Cambridge, MA).
Stokey, N., R.E. Lucas, Jr. and E.C. Prescott, 1989, Recursive methods in economic dynamics (MIT Press, Cambridge, MA).
Stone, J.R.N., 1954, Linear expenditure systems and demand analysis, Economic Journal 64, 511-527.
Summers, L.H., 1991, The scientific illusion in empirical macroeconomics, Scandinavian Journal of Economics 93, 129-148.
Tinbergen, J., 1939, Statistical testing of business cycle theories, Vols. I and II (League of Nations, Geneva).
Vining, R., 1949, Methodological issues in quantitative economics, Review of Economics and Statistics 31, 77-94.
Whittle, P., 1982, Optimization over time: Dynamic programming and stochastic control, Vol. I (Wiley, Chichester).
Zeldes, S., 1989, Consumption and liquidity constraints: An empirical investigation, Journal of Political Economy 97, 305-346.