Economic Modelling
Economic Modelling 14 (1997) 155-173
Recursive methods for computing equilibria of general equilibrium dynamic Stackelberg games

Steve Ambler*, Alain Paquet

Centre de Recherche sur l'Emploi et les Fluctuations Économiques, Université du Québec à Montréal, P.O. Box 8888, Station Centre-ville, Montréal, Québec H3C 3P8, Canada
Abstract

We extend the Hansen and Prescott approximation method for the numerical solution of dynamic business cycle models to models of two agents who play a dynamic Stackelberg game. Such models have application to the analysis of issues of optimal policy with the government as the leader and a representative private agent as the follower. We show in detail how to derive time-consistent policy rules. We discuss the details of the numerical algorithm and its convergence, and we consider an application of the algorithm to a model of optimal fiscal policy.

JEL classification: E32; E62

Keywords: Business cycles; Dynamic games; Optimal policy
1. Introduction

The modern equilibrium approach to macroeconomic analysis involves solving models that are often extensions of the basic neoclassical growth model. These models are typically too complicated to be solved analytically, and many different numerical simulation techniques have been developed to solve them.¹
*Corresponding author. Tel.: +514 987-8372; fax: +514 987-4707; e-mail: [email protected].
One popular technique is described in detail in Hansen and Prescott (1995). It involves using dynamic programming with quadratic approximations of the value function and the one-period return function in the neighborhood of the deterministic steady state, and solving iteratively for the coefficients of the quadratic form of the value function. Hansen and Prescott consider the application of their algorithm to social planning problems (when the competitive equilibrium is Pareto optimal), to homogeneous-agent economies with distortions, and to heterogeneous-agent economies. In this paper we consider extending this technique to the case of dynamic Stackelberg games. The obvious application is to the modeling of optimal government intervention when the government internalizes the reaction functions of atomistic private agents. Dynamic programming techniques are used to calculate the equilibrium strategy of the Stackelberg leader. 2 Our setup is a generalization of the problem considered by Kydland (1975) and Kydland and Prescott (1977). We extend this previous literature by providing a more detailed discussion of the numerical algorithm and a discussion of convergence (conditional on the existence and uniqueness of the solution to the steady state). Our solution concept supposes that private agents know that the government is using dynamic programming techniques to derive its optimal policy, and that they expect the government to continue to do so. This corresponds to the no-precommitment or consistent solution in Kydland (1975) and Blanchard and Fischer (1989, p. 594). Other equilibria are possible. For example, in cases where the government is assumed to be able to precommit to its optimal policy, the so-called Ramsey (1927) plan can be calculated using Lagrangian or discrete-time Hamiltonian techniques (for examples applied to optimal taxation and monetary policy, see Chari et al., 1991, 1995). 
Despite the pioneering work of Kydland and Prescott (1977), most work on optimal policy in models with optimizing agents has focused on the precommitment case. In cases where there are no compelling arguments for supposing precommitment or for ruling it out, it may be useful to compare the attainable levels of welfare with and without precommitment. One of the contributions of our paper is to show that it is relatively straightforward to calculate the solution to an approximation of the problem without precommitment. The technique is designed for economies where the equilibrium is not Pareto optimal, so that the equilibrium cannot in general be computed as the solution to an appropriate social planning problem. We consider the case where there is one representative private agent who takes the values of economy-wide aggregates as given. We show how to compute the competitive equilibrium explicitly by enforcing consistency between the feedback rules of the private agent and their aggregate counterparts.
¹See Taylor and Uhlig (1990) for a survey and a comparison of the accuracy of different techniques in solving a version of the basic neoclassical growth model.
²Pallage (1995) considers dynamic programming techniques for solving dynamic games between players who use Nash strategies.
The paper is organized as follows. In Section 2 we describe the dynamic maximization problems of the representative private agent and of the government, and briefly summarize the numerical algorithm used to compute the solution to the game. In Section 3 we deal with how to solve numerically for the economy's steady state, which is the point around which the quadratic approximation of the one-period return function is calculated. In Section 4 we consider the details of the numerical algorithm. In Section 5 we briefly consider an application of the technique to deriving the optimal level of government spending in a competitive neoclassical growth model with a government sector. Conclusions are presented in Section 6. An appendix discusses the question of the convergence of the numerical algorithm.
2. Maximization problems and equilibrium conditions

We consider a problem similar to the one discussed in the appendix of Kydland and Prescott (1977). Kydland and Prescott specify a linear-quadratic problem, while we consider a quadratic approximation of a more general return function in the neighborhood of the deterministic steady state. There is a representative private agent who is assumed to take the actions of the government as given. The private agent solves a dynamic programming problem of the following form:
v(z, g, S, s) = max_d {r(z, g, S, s, D, d) + βE[v(z', g', S', s') | z, g]},   (1)

with a subjective discount factor β ∈ (0, 1), subject to

z' = A(z) + ε',   (1a)
s' = B(z, g, S, s, D, d),   (1b)
S' = B(z, g, S, S, D, D),   (1c)
D = D(z, g, S),   (1d)
g = g(z, S),   (1e)
g' = F(z, g, S).   (1f)
Here, z is a vector of exogenous state variables of dimension η_z, s is a vector of endogenous state variables under the control of the representative private agent, of dimension η_s, S denotes the average or aggregate values of the elements of s, d is the vector of control variables or instruments of the private agent, of dimension η_d, D denotes the aggregate values of these controls, g is the vector of control variables of the government, of dimension η_g, and ε is a vector of white-noise stochastic shocks, also of dimension η_z. We use primes to denote next-period values of the variables. In general, when both lower-case and upper-case versions of variables are present in the model, we use lower-case letters to denote variables that are subject to the control of individual private agents and upper-case letters to denote aggregate per capita quantities which private agents take as given.
The solution to this problem gives a reaction function for the control variables d of the form

d = d(z, g, s, S).   (1g)

We impose two additional equilibrium conditions. First, the individual's reaction function must be compatible with the reaction function for their aggregate per capita equivalents given in (1d):

d(z, g, S, S) = D(z, g, S).   (1h)
Secondly, the government's reaction function given in (1e) must be the solution of the government's optimization problem as described below. The form of the equation of motion for the government's control variables follows from its reaction function given in (1e), which depends on aggregate states and exogenous variables, and from the equations of motion for the endogenous state variables S in (1c) and the exogenous state variables z in (1a). We have:

g' = g(z', S') = g(A(z) + ε', B(z, g, S, S, D(z, g, S), D(z, g, S))) ≡ F(z, g, S).   (2)

We drop the ε' term as an argument of the F(·) function, since we will be solving a linear-quadratic approximation of the original problem, to which the certainty equivalence principle applies. The government's problem can be expressed as follows:

V(z, S) = max_g {r(z, g, S, S, D(z, g, S), D(z, g, S)) + βE[V(z', S') | z]},   (3)

subject to:

z' = A(z) + ε',   (3a)
S' = B(z, g, S, S, D(z, g, S), D(z, g, S)).   (3b)
We have used an upper-case letter for the government's value function in order to distinguish it from the private agent's value function. Since we assume that the government is benevolent and tries to maximize the utility of the representative private agent, the functional form of the one-period return function is identical to that of the private agent's problem, but it depends only on aggregate quantities since the government internalizes the aggregate consistency constraint. Although the government maximizes the utility of the private agent, the problem will typically not be a team problem unless the government has access to lump-sum taxation which enables it to attain a first-best optimum.³

³See Ambler and Desruelle (1991) for more details on this point.
These problems are of the same form as the one analyzed in the fourth section of Hansen and Prescott (1995), and can be solved numerically using the techniques described in their section five. Since the return function is approximated by a quadratic function, the reaction functions of both the private sector and the government are linear. The form of the g(·) function in Eq. (1e) depends on the solution to the government's dynamic programming problem, which in turn depends on the solution to the private agent's problem. We use the following iterative pseudo-algorithm to find a consistent solution to the problems of both the private agent and the government:

Posit the initial linear reaction function⁴ g = F1^0 z + F2^0 S.
Initialize the government's value function V^0.
Initialize the loop counter i = 1.
Do until convergence of the government's value function V:
    Initialize the private agent's value function v^0.
    Initialize the loop counter j = 1.
    Do until convergence of the private agent's value function v:
        Using v^{j-1}, derive v^j using the method described in Section 4.
        Check the divergence between v^{j-1} and v^j.
        Increment loop counter j.
    End do.
    (At this point, we have a private sector reaction function which is optimal given the government's reaction function.)
    Using V^{i-1}, derive V^i using the method described in Section 4. A byproduct of deriving V^i is a new reaction function g = F1^i z + F2^i S.
    Check the divergence between V^{i-1} and V^i.
    Increment loop counter i.
End do.

Note that each time the government's value function is updated, which entails deriving a new set of coefficients in the government's feedback rule, a new private sector feedback rule is derived that is optimal given the government's feedback rule. For this reason, the algorithm is fairly computer-intensive. It is consistent with the proof of convergence outlined in the appendix, which supposes that each time the government's value function is updated the private sector's feedback rule is optimal.
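The control flow of this nested iteration can be sketched in Python. The block below is purely schematic: the two update functions are scalar stand-ins for the value-function updates of Section 4, and all numerical constants are illustrative assumptions rather than part of the model; only the loop structure mirrors the pseudo-algorithm above.

```python
BETA = 0.96   # discount factor (illustrative, not the paper's calibration)
TOL = 1e-10   # convergence tolerance for the value-function iterations

def update_private(v, F):
    # Stand-in for one update of the private agent's value function,
    # holding the government's reaction-function coefficient F fixed.
    return -1.0 - 0.5 * F ** 2 + BETA * v

def update_government(V, F):
    # Stand-in for one update of the government's value function; deriving
    # the new value also yields a new reaction-function coefficient F.
    V_new = -1.0 + BETA * V
    F_new = 0.5 * F + 0.1
    return V_new, F_new

def solve_consistent(F0=0.0):
    """Outer loop on the government's value function; inner loop solves the
    private agent's problem to convergence for the current rule F."""
    F, V = F0, 0.0
    while True:
        # Inner loop: private agent's value function, given F.
        v = 0.0
        while True:
            v_new = update_private(v, F)
            if abs(v_new - v) < TOL:
                v = v_new
                break
            v = v_new
        # Outer step: update the government's value function and its rule.
        V_new, F = update_government(V, F)
        if abs(V_new - V) < TOL:
            return F, V_new, v
        V = V_new
```

With these toy recursions, the government rule converges to the fixed point of F = 0.5F + 0.1 and each value function converges geometrically at rate β, illustrating why the nested scheme is computer-intensive: the inner loop is re-run after every outer update.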
In practice, the following pseudo-algorithm, which typically involves fewer total iterations, can be used to compute consistent solutions to the private agent's and government's problems:

Posit the initial linear reaction function g = F1^0 z + F2^0 S.
Initialize the loop counter i = 1.
Do until convergence of the government reaction function:
    Initialize the private agent's value function v^0.
    Initialize the loop counter j = 1.
    Do until convergence of the private agent's value function v:
        Using v^{j-1}, derive v^j using the method described in Section 4.
        Check the divergence between v^{j-1} and v^j.
        Increment loop counter j.
    End do.
    (This gives a private sector reaction function which is optimal given the government reaction function g = F1^{i-1} z + F2^{i-1} S.)
    Initialize the government's value function V^0.
    Initialize the loop counter j = 1.
    Do until convergence of the government's value function V:
        Using V^{j-1}, derive V^j using the method described in Section 4. A byproduct of deriving V^j is a new reaction function g = F1^j z + F2^j S.
        Check the divergence between V^{j-1} and V^j.
        Increment loop counter j.
    End do.
    (This gives a new government reaction function g = F1^i z + F2^i S, given the private sector reaction function which is optimal for g = F1^{i-1} z + F2^{i-1} S.)
    Check the divergence between F1^{i-1} and F1^i, as well as between F2^{i-1} and F2^i.
    Increment loop counter i.
End do.

This version of the algorithm involves updating the private agent's value function less often than the previous version, and can take less time. However, we cannot demonstrate that it will converge. If it does not, then we can always use the previous algorithm.

⁴For instance, the initial conjecture for the linear government reaction function can be set in a way that is consistent with the model's steady-state values of the elements of g.
3. Steady state

We start by deriving the first-order conditions of the private agent's problem, conditional on the assumed laws of motion. We have:

∂r/∂d + β (∂v/∂s')(∂s'/∂d) = 0.   (4)
Differentiating the value function with respect to the current states s and making use of this first-order condition gives:

∂v/∂s = ∂r/∂s + β (∂v/∂s')(∂s'/∂s).   (5)

In the steady state, this gives:
∂v/∂s = (∂r/∂s)[I − β(∂s'/∂s)]^{-1},   (6)

so that in the steady state, the first-order condition for the private agent becomes:

∂r/∂d + β (∂r/∂s)[I − β(∂s'/∂s)]^{-1}(∂s'/∂d) = 0.   (7)
Imposing the aggregate consistency condition s = S, this, together with the laws of motion (1c), gives η_d + η_s equations to solve for the steady-state levels of the endogenous state variables and the private agent's control variables, conditional on the steady-state levels of the exogenous states and the government's control variables. The steady states of the exogenous state variables can be found by imposing the steady state and solving Eqs. (1a). To find the steady-state levels of the government control variables, first calculate the first-order conditions of the government's maximization problem:
∂V/∂g = ∂r/∂g + (∂r/∂D)(∂D/∂g) + β (∂V/∂S')[∂S'/∂g + (∂S'/∂D)(∂D/∂g)] = 0.   (8)
Differentiating the government's value function with respect to the current states S and using the first-order condition gives:

∂V/∂S = ∂r/∂S + (∂r/∂D)(∂D/∂S) + β (∂V/∂S')[∂S'/∂S + (∂S'/∂D)(∂D/∂S)].   (9)

At the steady state, this gives:

∂V/∂S = [∂r/∂S + (∂r/∂D)(∂D/∂S)] [I − β(∂S'/∂S + (∂S'/∂D)(∂D/∂S))]^{-1}.   (10)
Evaluating the first-order condition at the steady state and substituting this expression for the partial derivative of the value function with respect to the states gives:

[∂r/∂g + (∂r/∂D)(∂D/∂g)] + β [∂r/∂S + (∂r/∂D)(∂D/∂S)] [I − β(∂S'/∂S + (∂S'/∂D)(∂D/∂S))]^{-1} [∂S'/∂g + (∂S'/∂D)(∂D/∂g)] = 0.   (11)
This gives us η_g equations to solve for the steady states of the government control variables. We must also be able to evaluate the partial derivatives of the private controls D with respect to the states S and the government's controls g. To do this, we can totally differentiate the first-order conditions of the private agent and then impose the aggregate consistency constraints and the steady state.
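When total differentiation by hand is inconvenient, the required partial derivatives can be approximated numerically. The following sketch is a generic forward-difference Jacobian (an assumption of ours, not code from the paper) that can be applied to the stacked first-order conditions:

```python
def jacobian_fd(f, x, eps=1e-6):
    """Forward-difference Jacobian of f: R^n -> R^m, a numerical stand-in
    for totally differentiating a system of first-order conditions."""
    n = len(x)
    fx = f(x)           # baseline function value
    m = len(fx)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp = list(x)
        xp[j] += eps    # perturb one argument at a time
        fp = f(xp)
        for i in range(m):
            J[i][j] = (fp[i] - fx[i]) / eps
    return J
```

Applied to the agent's first-order conditions stacked as a vector function of (S, g), the columns of the resulting Jacobian give the derivatives needed to recover ∂D/∂S and ∂D/∂g via the implicit function theorem.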
We do not have a general proof of the existence and uniqueness of the deterministic steady state. For a particular application, this involves demonstrating the existence and uniqueness of the solution to a system of η_d + η_s + η_g static, non-linear equations. For simple models, including the application discussed in Section 5 below, this is not too difficult.
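As a concrete illustration, the steady state of a simple one-sector growth model can be found by solving its Euler equation numerically. The block below is a one-equation stand-in for the η_d + η_s + η_g system described above; the parameter values are illustrative, not the paper's calibration.

```python
def steady_state_capital(beta=0.96, delta=0.1, theta=0.36):
    """Solve the steady-state Euler equation
        1 = beta * (1 - delta + theta * k**(theta - 1))
    for the capital stock k by bisection."""
    f = lambda k: beta * (1.0 - delta + theta * k ** (theta - 1.0)) - 1.0
    lo, hi = 1e-6, 100.0          # bracket: f(lo) > 0 > f(hi) since f is decreasing
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For this one-equation case the answer can be checked against the closed form k* = (θ/(1/β − 1 + δ))^{1/(1−θ)}; in the multi-equation systems of the text, a Newton-type solver replaces the bisection.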
4. Numerical algorithm

In this section we describe in some detail the iterative method for updating the coefficients of the quadratic form of the value function. The quadratic approximation of the one-period return function is formed in exactly the same manner as in section 3.1 of Hansen and Prescott (1995). An initial quadratic approximation for the value function, v^0, is selected, and a sequence of approximations is obtained by first solving the approximation to the private agent's problem and then solving the approximation to the government's problem. For the private agent, the value function recursion is given by

v^{n+1}(z, g, S, s) = max_d {y^T Q y + βv^n(z', g', S', s')},   (12)

where y is the stacked vector (z^T, g^T, S^T, s^T, D^T, d^T)^T, the superscript T denotes the transpose operator, and the matrix Q is the quadratic form of the approximation of the one-period return function. The maximization is undertaken subject to the following constraints, which are either linear to begin with or linearized versions of the original constraints given in (1) above:

z' = Az,   (12a)
s' = B1 z + B2 g + B3 S + B4 s + B5 D + B6 d,   (12b)
S' = B1 z + B2 g + B3 S + B4 S + B5 D + B6 D,   (12c)
D = D1 z + D2 g + D3 S,   (12d)
g' = F̃1^n z + F̃2^n g + F̃3^n S,   (12e)

where (12e) follows from the posited reaction function of the government:

g' = F1^n z' + F2^n S'
   = F1^n Az + F2^n (B1 z + B2 g + B3 S + B4 S + (B5 + B6)(D1 z + D2 g + D3 S))
   = (F1^n A + F2^n (B1 + (B5 + B6)D1))z + F2^n (B2 + (B5 + B6)D2)g + F2^n (B3 + B4 + (B5 + B6)D3)S
   ≡ F̃1^n z + F̃2^n g + F̃3^n S,   (13)
and where certainty equivalence allows us to drop the stochastic part of the evolution of the exogenous states in (12a). The problem can be written out as follows:

v^{n+1} = max_d {[z^T g^T S^T s^T D^T d^T] Q [z^T g^T S^T s^T D^T d^T]^T + β [z'^T g'^T S'^T s'^T] v^n [z'^T g'^T S'^T s'^T]^T},   (14)

where Q is partitioned into blocks Q11, ..., Q66 conformably with (z, g, S, s, D, d), and v^n is partitioned into blocks v11^n, ..., v44^n conformably with (z', g', S', s').
Collecting terms in d, we have:

v^{n+1} = max_d { d^T[Q61 z + Q62 g + Q63 S + Q64 s + Q65 D + Q66 d]
 + [z^T Q16 + g^T Q26 + S^T Q36 + s^T Q46 + D^T Q56]d
 + β[z'^T v14^n + g'^T v24^n + S'^T v34^n]B6 d
 + β[z^T B1^T + g^T B2^T + S^T B3^T + s^T B4^T + D^T B5^T]v44^n B6 d
 + βd^T B6^T[v41^n z' + v42^n g' + v43^n S']
 + βd^T B6^T v44^n[B1 z + B2 g + B3 S + B4 s + B5 D + B6 d] + O.T. },   (15)

where O.T. stands for other terms which do not depend on d. The first-order condition with respect to d gives:
Q61 z + Q62 g + Q63 S + Q64 s + Q65 D + Q66 d + βB6^T[v41^n z' + v42^n g' + v43^n S' + v44^n(B1 z + B2 g + B3 S + B4 s + B5 D + B6 d)] = 0.   (16)
Replacing z', g' and S' using (12a), (12c) and (12e) gives:

−(Q66 + βB6^T v44^n B6)d
 = (Q61 + βB6^T[v41^n A + v42^n(F1^n A + F2^n B1 + F2^n(B5 + B6)D1) + (v43^n + v44^n)B1])z
 + (Q62 + βB6^T[v42^n(F2^n B2 + F2^n(B5 + B6)D2) + (v43^n + v44^n)B2])g
 + (Q63 + βB6^T[v42^n(F2^n(B3 + B4) + F2^n(B5 + B6)D3) + v43^n(B3 + B4) + v44^n B3])S
 + (Q64 + βB6^T v44^n B4)s
 + (Q65 + βB6^T[v43^n(B5 + B6) + v44^n B5])D.   (17)
Imposing the aggregate consistency conditions D = d and S = s then gives:

−(Q65 + Q66 + βB6^T(v43^n + v44^n)(B5 + B6))D
 = (Q61 + βB6^T[v41^n A + v42^n(F1^n A + F2^n B1 + F2^n(B5 + B6)D1) + (v43^n + v44^n)B1])z
 + (Q62 + βB6^T[v42^n(F2^n B2 + F2^n(B5 + B6)D2) + (v43^n + v44^n)B2])g
 + (Q63 + Q64 + βB6^T[v42^n(F2^n(B3 + B4) + F2^n(B5 + B6)D3) + (v43^n + v44^n)(B3 + B4)])S,   (18)
which can be solved for D as a linear function of z, g and S, so that we have:

D = D1 z + D2 g + D3 S.   (19)

Thus, the posited form of the feedback rule for the aggregate control variables in (12d) is consistent with the first-order conditions of the private agent and the aggregate consistency conditions. This expression for the feedback rule for the aggregate control variables can then be used to substitute out D in Eq. (17), and we can derive a solution of the following form for the representative agent's feedback rule:⁵

d = D4 z + D5 g + D6 S + D7 s.   (20)
Eqs. (19) and (20) can now be used to substitute into Eq. (14) in order to eliminate d and D from the updated value function. This process is repeated until the change in the updated value function is sufficiently small. The feedback rule (19), which is optimal given the posited government reaction function in (13), can then be used to set up the value function recursion for the government's maximization problem, which can be written as:

V^{n+1}(z, S) = max_g {y^T Q y + βV^n(z', S')},   (21)

where the stacked vector y is redefined so that it only depends on aggregate quantities, y = (z^T, g^T, S^T, S^T, D^T, D^T)^T. The Q matrix is the same as that of the private agent's problem, reflecting the assumption that the government is benevolent and maximizes the utility of the representative private agent while internalizing the aggregate consistency constraints. The maximization is subject to the following constraint in addition to (12a):

S' = B1 z + B2 g + (B3 + B4)S + (B5 + B6)(D1 z + D2 g + D3 S) ≡ C1 z + C2 g + C3 S.   (21a)
⁵Since we assume that private agents are too numerous for them to behave strategically, we can omit D from the private agent's reaction function by substituting (19) into (17). There are very few exceptions to this rule in the business cycle literature. The exceptions include Rotemberg and Saloner (1986) and Rotemberg and Woodford (1992).
We can write out the problem as follows:

V^{n+1} = max_g {[z^T g^T S^T S^T D^T D^T] Q [z^T g^T S^T S^T D^T D^T]^T + β [z'^T S'^T] V^n [z'^T S'^T]^T},   (22)

where Q is partitioned into blocks Q11, ..., Q66 as before, V^n is partitioned into blocks V11^n, V12^n, V21^n, V22^n, and where it is understood that D = D1 z + D2 g + D3 S and that z' and S' are functions of z, S and g because of the constraints (12a) and (21a). Collecting terms in g, we have:

V^{n+1} = max_g { g^T[Q21 z + (Q23 + Q24)S + (Q25 + Q26)(D1 z + D3 S)]
 + [z^T Q12 + S^T(Q32 + Q42) + (z^T D1^T + S^T D3^T)(Q52 + Q62)]g
 + g^T D2^T[Q61 z + (Q63 + Q64)S + (Q65 + Q66)(D1 z + D3 S)]
 + [z^T Q16 + S^T(Q36 + Q46) + (z^T D1^T + S^T D3^T)(Q56 + Q66)]D2 g
 + g^T D2^T[Q51 z + (Q53 + Q54)S + (Q55 + Q56)(D1 z + D3 S)]
 + [z^T Q15 + S^T(Q35 + Q45) + (z^T D1^T + S^T D3^T)(Q55 + Q65)]D2 g
 + g^T[Q22 + (Q25 + Q26)D2]g + g^T D2^T(Q52 + Q62)g
 + g^T D2^T[Q55 + Q56 + Q65 + Q66]D2 g
 + β[z'^T V12^n + (z^T C1^T + S^T C3^T)V22^n]C2 g
 + βg^T C2^T[V21^n z' + V22^n(C1 z + C2 g + C3 S)]
 + O.T. },   (23)

where O.T. again stands for terms which do not depend on g. The first-order condition with respect to g gives:
−(Q22 + (Q25 + Q26)D2 + D2^T(Q52 + Q62) + D2^T(Q55 + Q56 + Q65 + Q66)D2 + βC2^T V22^n C2)g
 = (Q21 + (Q25 + Q26)D1 + D2^T(Q51 + Q61) + D2^T(Q55 + Q56 + Q65 + Q66)D1 + βC2^T(V21^n A + V22^n C1))z
 + (Q23 + Q24 + (Q25 + Q26)D3 + D2^T(Q53 + Q54 + Q63 + Q64) + D2^T(Q55 + Q56 + Q65 + Q66)D3 + βC2^T V22^n C3)S.   (24)
This can be solved for g, which gives the feedback rule:

g = F1^{n+1} z + F2^{n+1} S.   (25)

Eq. (25) can be substituted into Eq. (22) in order to eliminate g from the updated value function. This process is repeated until the change in the updated value function is sufficiently small. At this point, F1^{n+1} and F2^{n+1} can be compared with F1^n and F2^n to determine whether the algorithm has converged at some predetermined tolerance level.
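The fixed-point logic of the recursions (12)-(25) can be illustrated on a stripped-down problem with one state, one control and no leader-follower layer. The sketch below iterates a scalar analogue of the value-function update to convergence; the quadratic-form coefficients and the discount factor are illustrative assumptions, not the paper's.

```python
def riccati_step(p, Qxx, Qxu, Quu, a, b, beta):
    """One update of v(x) = p*x**2 for the scalar problem
       max_u {Qxx*x**2 + 2*Qxu*x*u + Quu*u**2 + beta*v(a*x + b*u)}.
    Returns the updated coefficient and the implied linear rule u = f*x."""
    f = -(Qxu + beta * p * a * b) / (Quu + beta * p * b * b)
    p_new = Qxx + 2.0 * Qxu * f + Quu * f * f + beta * p * (a + b * f) ** 2
    return p_new, f

def solve_lq(Qxx=-1.0, Qxu=0.0, Quu=-0.5, a=1.0, b=1.0, beta=0.96, tol=1e-12):
    """Iterate the update until the value-function coefficient converges,
    mirroring the repeated substitution of the linear rule into (14)/(22)."""
    p = 0.0
    while True:
        p_new, f = riccati_step(p, Qxx, Qxu, Quu, a, b, beta)
        if abs(p_new - p) < tol:
            return p_new, f
        p = p_new
```

With a concave return (negative coefficients on x² and u²) the iteration converges to a negative value-function coefficient and a stabilizing rule, i.e. |a + b·f| < 1; the matrix versions in the text replace the scalar divisions by the inversions in (18) and (24).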
5. Application: Optimal government spending

The following example illustrates how the suitably extended Hansen and Prescott (1995) methodology can be employed in the context of a real business cycle model with an optimizing government. As in Ambler and Paquet (1996), we consider the effects of the introduction of a government sector with three types of spending. A portion of its current spending, G1t, and public investment, Igt, respond optimally to the exogenous shocks that set the business cycle in motion. Another portion of its current spending, G2t, is modeled as exogenous, because some components of public expenditures may remain beyond the immediate control of the government (such as military spending, for instance). The government finances its spending through a combination of lump-sum taxes, Tt, and a constant proportional tax rate on total income, τ. The representative forward-looking private agent chooses time paths for private consumption c^p_t, hours worked h_t, the employment rate per quarter e_t, and private investment i_t to maximize a time-separable utility function (26) over an infinite horizon, subject to a sequence of budget constraints (26a) for each period, the initial individual and aggregate stocks of capital, and the processes describing the accumulation of capital. That is,

max_{c^p_{t+i}, h_{t+i}, e_{t+i}, i_{t+i}} U(t) = E_t Σ_{i=0}^∞ β^i [ln(c^p_{t+i} + αG1_{t+i}) − (γ1/(1 + ψ1)) e_{t+i} h_{t+i}^{1+ψ1} − (γ2/(1 + ψ2)) e_{t+i}^{1+ψ2} + φ ln(G1_{t+i})],   (26)

subject to

c^p_{t+i} + i_{t+i} = (1 − τ)(w_{t+i} n_{t+i} + q_{t+i} k_{t+i}) − T_{t+i},   (26a)
k_{t+i+1} = (1 − δ)k_{t+i} + i_{t+i},   (26b)
K_{t+i+1} = (1 − δ)K_{t+i} + I_{t+i}.   (26c)
The private agent takes as given the wage rate w_t, the gross marginal return on capital q_t, the tax rate on total income τ, lump-sum taxes T_t, and the paths of the components of government spending. Total hours worked per period by the private agent are given by

n_t = h_t e_t.   (27)

Competitive firms rent capital and hire labor in order to maximize profits. The aggregate production function is given by

Y_t = (ζ_t N_t)^{1−θ} K_t^θ K_{gt}^{θ_g},   (28)

where the state of technology ζ_t evolves according to the process

ln(ζ_{t+1}) = ln(ζ_t) + λ_{t+1},   λ_t ~ i.i.d.(λ̄, σ_λ²).   (29)
Because the logarithm of technology follows a random walk with drift, the private agent's problem is not stationary as stated in its original form. However, it can easily be converted into a stationary one by casting it in terms of normalized stationary variables. For any variable X,, define the following transformation:
X̃_t = X_t / ζ_t^{(1−θ)/(1−θ−θ_g)}.   (30)
Then, the private agent's one-period return function can be written as:

r^p(z_t, g_t, S_t, s_t, D_t, d_t) = ln[c̃^p_t + αG̃1_t] − (γ1/(1 + ψ1)) e_t h_t^{1+ψ1} − (γ2/(1 + ψ2)) e_t^{1+ψ2} + φ ln[G̃1_t],   (31)

with

c̃^p_t = (1 − τ)(w̃_t h_t e_t + q_t k̃_t) − ĩ_t − T̃_t,

where w̃_t = (1 − θ)Ỹ_t/N_t is the normalized wage rate, and with

z_t = [λ_t, ln G̃2_t]^T,   g_t = [ln G̃1_t],   S_t = K̃_t,   s_t = k̃_t,   D_t = [H_t, E_t, Ĩ_t]^T,   d_t = [h_t, e_t, ĩ_t]^T.
The agent's dynamic programming problem can be written as:

v(z, g, S, s) = max_d {r(z, g, S, s, D, d) + βE[v(z', g', S', s') | z, g]},   (32)

subject to

[λ_{t+1}; ln G̃2_{t+1}] = [λ̄; (1 − ρ)ln Ḡ2] + [0 0; 0 ρ][λ_t; ln G̃2_t] + ε_{t+1},   (32a)

k̃_{t+1} = (1 − δ)exp(−λ_t)k̃_t + ĩ_t,   (32b)

K̃_{t+1} = (1 − δ)exp(−λ_t)K̃_t + Ĩ_t,   (32c)

[H_t; E_t; Ĩ_t] = [H(z_t, g_t, S_t); E(z_t, g_t, S_t); I(z_t, g_t, S_t)],   (32d)

ln G̃1_{t+1} = F1 z_{t+1} + F2 S_{t+1}.   (32e)
The government's one-period return function can be written in stationary form as:

r^g(z_t, g_t, S_t, D_t) = ln[C̃^p_t + αG̃1_t] − (γ1/(1 + ψ1)) E_t H_t^{1+ψ1} − (γ2/(1 + ψ2)) E_t^{1+ψ2} + φ ln[G̃1_t],   (33a)

where

C̃^p_t = N_t^{1−θ} K̃_t^θ K̃_{gt}^{θ_g} exp(−λ_t(1 − θ)(θ + θ_g)/(1 − θ − θ_g)) − Ĩ_t − Ĩ_{gt} − G̃1_t − G̃2_t.   (33b)
The model was calibrated and subjected to a series of stochastic simulations to compute its predicted comovements. Table 1 reports the results, averaged across 500 independent replications of 148 observations each, with the standard deviations across replications reported in parentheses. The table also shows the corresponding moments in the data. The model's predictions concerning comovements other than with the components of government spending are close to those of standard RBC models without a government sector. The model also captures the qualitative features of the relative volatilities. Total government spending is about as volatile as aggregate output in the data, with military expenditures and public investment both significantly more volatile than output, and non-military current spending less volatile than output. This is what the model predicts. The model predicts that public investment should be more correlated with output than total public spending, and that exogenous spending should be less correlated with output than total government spending. This is what we see in the data. Unfortunately, the model significantly overpredicts the correlation of total government spending and of each of its components with output.

Table 1
Stochastic properties of the modelᵃ

Statistic      U.S. data   Both shocks       Technology        Spending
                                             shocks only       shocks only
σ_y            0.0159      0.0193 (0.002)    0.0198 (0.002)    0.0043 (0.0005)
σ_i/σ_y        3.1826      3.1838 (0.278)    3.1572 (0.231)    2.3788 (0.023)
σ_c/σ_y        0.5189      0.5067 (0.058)    0.4731 (0.042)    0.1433 (0.041)
σ_ag/σ_y       1.1189      0.9088 (0.065)    0.8120 (0.014)    2.6603 (0.130)
σ_g1/σ_y       0.7228      0.4080 (0.021)    0.3925 (0.014)    0.1092 (0.005)
σ_g2/σ_y       2.6093      1.5630 (0.219)    0.7180 (0.006)    6.9013 (0.050)
σ_ig/σ_y       1.9826      2.8488 (0.156)    2.8133 (0.138)    2.2509 (0.030)
corr(c,y)      0.8979      0.7373 (0.090)    0.7830 (0.070)    −0.5257 (0.329)
corr(i,y)      0.9266      0.9504 (0.017)    0.9569 (0.014)    0.9849 (0.003)
corr(ag,y)     0.2317      0.8537 (0.040)    0.9707 (0.011)    0.9962 (0.001)
corr(g1,y)     0.1737      0.9036 (0.019)    0.9301 (0.011)    −0.6697 (0.064)
corr(g2,y)     0.0951      0.4334 (0.127)    0.9958 (0.001)    0.9975 (0.001)
corr(ig,y)     0.2733      0.8999 (0.046)    0.9056 (0.036)    0.9787 (0.004)

ᵃThe notation σ_x refers to the standard deviation of a variable x, and corr(x,y) to the contemporaneous correlation between variables x and y. For each predicted comovement statistic, we report its mean across 500 replications with the associated standard error in parentheses. The variable ag is total government spending, and other definitions of variables are given in the text. Data sources are described in footnote 8 of the text.
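The replication statistics reported in Table 1 can be computed along the following lines. The sketch below uses toy comoving series rather than simulations of the model; it only illustrates how each statistic is averaged across replications, with the dispersion across replications reported alongside, as in the parenthesized entries of the table.

```python
import math
import random

def moments(series_y, series_x):
    """Relative volatility sigma_x/sigma_y and corr(x, y) for one replication."""
    n = len(series_y)
    my = sum(series_y) / n
    mx = sum(series_x) / n
    sy = math.sqrt(sum((v - my) ** 2 for v in series_y) / n)
    sx = math.sqrt(sum((v - mx) ** 2 for v in series_x) / n)
    cov = sum((a - mx) * (b - my) for a, b in zip(series_x, series_y)) / n
    return sx / sy, cov / (sx * sy)

def table_statistics(n_rep=500, t_obs=148, seed=0):
    """Average each statistic across replications and report the standard
    deviation across replications (one pair of Table-1-style entries)."""
    rng = random.Random(seed)
    ratios, corrs = [], []
    for _ in range(n_rep):
        y = [rng.gauss(0.0, 1.0) for _ in range(t_obs)]
        x = [0.5 * v + rng.gauss(0.0, 1.0) for v in y]  # toy comoving series
        r, c = moments(y, x)
        ratios.append(r)
        corrs.append(c)
    mean = lambda v: sum(v) / len(v)
    sd = lambda v: math.sqrt(sum((u - mean(v)) ** 2 for u in v) / len(v))
    return (mean(ratios), sd(ratios)), (mean(corrs), sd(corrs))
```

In the paper's exercise, each replication of 148 observations would come from simulating the model under its estimated shock processes rather than from the toy generator used here.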
6. Conclusions

Hansen and Prescott (1995) motivated their computational algorithm as economizing on the time spent learning new techniques and adapting them to specific applications, even though alternative methods might work as well and might reduce computer costs by a few minutes. They showed how their computational methods apply to Pareto-optimal social planning problems, to problems with distortions, and to models with heterogeneous agents. This paper has extended the technique to dynamic Stackelberg games with endogenously determined government choice variables. We have shown that their computational method, with some suitable modifications, is flexible enough to be applicable to such instances. Even though there are other considerations that could be taken into account in modeling fiscal policy, the exercise that we have reported provides an interesting example of the usefulness and the adaptability of the Hansen-Prescott methodology.
Acknowledgements

Ambler acknowledges financial support from the SSHRC and the FCAR. Paquet acknowledges funding from the FCAR.
Appendix

We show that the operators implied by the maximization problems of the representative private agent and of the government are contraction mappings. This can be done by verifying that Blackwell's sufficient conditions of monotonicity and discounting are satisfied for the relevant operators (see Sargent, 1987, p. 344; or Stokey et al., 1989, p. 54). Some fairly weak restrictions on the stochastic disturbances and the set-up of the problem make dynamic programming methods appropriate. For instance, many commonly encountered problems assume a time-separable objective function, along with independently and identically distributed stochastic disturbances, ε', that are realized after the vector of current period choice variables has been chosen. In these cases, date t controls affect only current and future returns.⁶

The first stage of the game is defined as the quadratic approximation to the private agent's dynamic programming problem, represented by Eqs. (12), (12a)-(12e). Let r̃(z, g, S, s, D, d) ≡ y^T Q y, where y = (z^T, g^T, S^T, s^T, D^T, d^T)^T, be the quadratic approximation to the one-period return function of the private agent's problem. We can define the T operator as

Tv = max_{d ∈ R^{η_d}} {r̃(z, g, S, s, D, d) + βE[v(z', g', S', s') | z, S]}
6 For more details regarding these aspects of stochastic dynamic programming, see Stokey et al. (1989, ch. 9).
subject to the constraints (12a)-(12e).

(i) To show that T is monotone, suppose that v(z, g, S, s) ≥ w(z, g, S, s), ∀(z, g, S, s) ∈ X. Then

Tv = max_{d ∈ R^{n_d}} { r(z, g, S, s, D, d) + βE[v(z', g', S', s') | z, S] }
   ≥ max_{d ∈ R^{n_d}} { r(z, g, S, s, D, d) + βE[w(z', g', S', s') | z, S] } = Tw.  Q.E.D.

(ii) To show that T discounts, note that for any positive constant K,

T(v + K) = max_{d ∈ R^{n_d}} { r(z, g, S, s, D, d) + βE[v(z', g', S', s') + K | z, S] }
         = max_{d ∈ R^{n_d}} { r(z, g, S, s, D, d) + βE[v(z', g', S', s') | z, S] + βK } = Tv + βK.  Q.E.D.
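Blackwell's two conditions are straightforward to check numerically for a discretized Bellman operator, and together they deliver the contraction property. The sketch below is a toy grid example with random rewards (not the paper's approximated game): it verifies monotonicity and discounting directly, then iterates the operator to its unique fixed point and confirms that successive iterates contract at modulus β.

```python
import numpy as np

# Toy discretized Bellman operator (random rewards, NOT the paper's model):
# T v(s) = max_a { r(s, a) + beta * v(a) }, where the action a is also the
# successor state. We check Blackwell's two conditions and then iterate T.
beta = 0.95
rng = np.random.default_rng(0)
n = 50
reward = rng.normal(size=(n, n))            # r(s, a)

def T(v):
    return (reward + beta * v[None, :]).max(axis=1)

v = rng.normal(size=n)
w = v - rng.uniform(0.0, 1.0, size=n)       # so v >= w everywhere

# (i) monotonicity: v >= w implies T v >= T w
print(np.all(T(v) >= T(w)))
# (ii) discounting: T(v + K) = T v + beta * K for any constant K > 0
K = 2.5
print(np.allclose(T(v + K), T(v) + beta * K))

# (i) and (ii) imply T is a contraction of modulus beta, so iterating T
# converges to the unique fixed point v* = T v*.
gaps = []
for _ in range(500):
    v_new = T(v)
    gaps.append(np.abs(v_new - v).max())
    v = v_new
print(gaps[-1] < 1e-8)                      # iterates have converged
print(all(g2 <= beta * g1 + 1e-9 for g1, g2 in zip(gaps, gaps[1:])))
```

The last check confirms the sup-norm distance between successive iterates shrinks by at least the factor β at every step, which is exactly the contraction bound that Blackwell's conditions guarantee.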
(i) and (ii) imply that T is a contraction mapping. If we can define a complete metric space involving the state variables and an appropriate metric, then the functional equation v = Tv has a unique fixed point. Stokey et al. (1989, section 9.4) discuss the conditions required in the linear-quadratic case, where the return function is unbounded. In order for these conditions to hold for our linear-quadratic approximation, we require that the original problem involve convex production technologies and constraints and concave utility functions, so that the quadratic form we use to approximate the return function is negative semi-definite.

The second stage of the game is the quadratic approximation to the government's dynamic programming problem, represented by the value function given by (21) and the constraints (12a) and (21a). As shown in Section 4 of the paper, the quadratic approximation of the government's one-period return function (21) is obtained by substituting out D and d in the approximation of the one-period return function of the representative private agent. This is done using the linearized feedback rules for D and d given by Eqs. (19) and (20), respectively. Then the arguments are redefined so that the return function depends on aggregate per capita quantities. Hence, the quadratic approximation to the government's one-period return function is given by r(z, g, S, s, D(z, g, S), d(z, g, S, s)) ≡ y^T Q y, where y = (z^T, g^T, S^T, s^T, D^T, d^T)^T. We note that if r(z, g, S, s, D, d) is concave, then so is r(z, g, S, s, D(z, g, S), d(z, g, S, s)) in z, g, and S, which we show as follows.

Theorem. Let z = f(x_1, x_2) be concave, continuous and twice differentiable, and let x_2 = g(x_1) be a linear function. Then f(x_1, g(x_1)) is concave in x_1.
Proof. If z = f(x_1, x_2) is concave, then the Hessian matrix of this function,

H = [ f_11  f_12 ]
    [ f_12  f_22 ]

where f_ij denotes ∂²f/∂x_i∂x_j, is negative semi-definite, so that f_11 ≤ 0 and |H| = f_11 f_22 - f_12² ≥ 0. Since x_2 = g(x_1) is linear, g_11 = 0. Substituting x_2 out of f(x_1, x_2), we have z = f(x_1, g(x_1)), whose Hessian matrix is given by:

H* = [ f_11       f_12 g_1                ]
     [ f_12 g_1   f_22 [g_1]² + f_2 g_11  ]

To establish the concavity of f(x_1, g(x_1)), it remains to show that H* is negative semi-definite. This is the case, since both f_11 ≤ 0 and

|H*| = f_11 · { f_22 [g_1]² + f_2 g_11 } - [f_12 g_1]² = [g_1]² · [f_11 f_22 - f_12²] ≥ 0

from the linearity of g(x_1), i.e. g_11 = 0. Q.E.D.

Therefore, the government's maximization problem is a contraction mapping for a given value of the coefficients in the reaction function in Eq. (19). Since we rederive the optimal private-sector reaction function every time we update the government's policy rule, we cannot guarantee that the government's value function converges in the usual way. However, if the change in the private-sector reaction function between iterations is sufficiently small, and if it shrinks as the change in the government's policy rule shrinks, then convergence will be achieved in practice.
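The theorem can be illustrated numerically. The sketch below uses a toy concave quadratic f with a random negative semi-definite Hessian H and an arbitrary linear rule g (neither is the paper's actual return function or feedback rule), and confirms that the composition f(x_1, g(x_1)) is concave in x_1, exactly as the determinant argument above implies.

```python
import numpy as np

# Numerical illustration of the Appendix theorem: if f(x1, x2) is concave
# with negative semi-definite Hessian H and x2 = g(x1) = a*x1 + b is linear,
# then h(x1) = f(x1, g(x1)) is concave in x1. (Toy quadratic f, not the
# paper's return function.)
rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2))
H = -A @ A.T                        # random negative semi-definite Hessian
a, b = rng.normal(), rng.normal()   # linear rule g(x1) = a*x1 + b

def f(x1, x2):
    x = np.array([x1, x2])
    return 0.5 * x @ H @ x          # concave quadratic with Hessian H

def h(x1):
    return f(x1, a * x1 + b)

# With g_11 = 0, the second derivative of h collapses to a quadratic form
# in H: h'' = f_11 + 2*f_12*a + f_22*a**2 = [1, a] H [1, a]^T <= 0.
d2 = np.array([1.0, a]) @ H @ np.array([1.0, a])

# Finite-difference curvature of h agrees with d2 and is non-positive.
eps = 1e-4
curv = (h(1.0 + eps) - 2 * h(1.0) + h(1.0 - eps)) / eps**2
print(d2 <= 0.0, abs(curv - d2) < 1e-4)
```

Since h is an exact quadratic here, its curvature is constant and equals the quadratic form [1, a] H [1, a]^T, which is non-positive whenever H is negative semi-definite; this is the same fact the proof establishes via |H*| ≥ 0.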
References

Ambler, S. and D. Desruelle, 1991, Time inconsistency in time-dependent team games, Economics Letters 37, 1-6.
Ambler, S. and A. Paquet, 1996, Fiscal spending shocks, endogenous government spending and the business cycle, Journal of Economic Dynamics and Control 20, 237-256.
Blanchard, O.J. and S. Fischer, 1989, Lectures on macroeconomics (MIT Press, Cambridge, MA).
Chari, V.V., L.J. Christiano and P.J. Kehoe, 1991, Optimal fiscal and monetary policy: Some recent results, Journal of Money, Credit and Banking 23, 519-539.
Chari, V.V., L.J. Christiano and P.J. Kehoe, 1995, Policy analysis in business cycle models, in: T. Cooley, ed., Frontiers of business cycle research (Princeton University Press, Princeton, NJ) 243-293.
Hansen, G. and E.C. Prescott, 1995, Recursive methods for computing equilibria of business cycle models, in: T. Cooley, ed., Frontiers of business cycle research (Princeton University Press, Princeton, NJ) 39-64.
Kydland, F.E., 1975, Noncooperative and dominant player solutions in discrete dynamic games, International Economic Review 16, 321-335.
Kydland, F.E. and E.C. Prescott, 1977, Rules rather than discretion: The time inconsistency of optimal plans, Journal of Political Economy 85, 473-491.
Pallage, S., 1985, Dynamic games and growth theory, PhD thesis, Carnegie-Mellon University.
Ramsey, F.P., 1927, A contribution to the theory of taxation, Economic Journal 37, 47-61.
Rotemberg, J. and G. Saloner, 1986, A supergame-theoretic model of price wars during booms, American Economic Review 76, 390-407.
Rotemberg, J. and M. Woodford, 1992, Oligopolistic pricing and the effects of aggregate demand on economic activity, Journal of Political Economy 100, 1153-1207.
Sargent, T.J., 1987, Dynamic macroeconomic theory (Harvard University Press, Cambridge, MA).
Stokey, N.L. and R.E. Lucas with E.C. Prescott, 1989, Recursive methods in economic dynamics (Harvard University Press, Cambridge, MA).
Taylor, J.B. and H. Uhlig, 1990, Solving nonlinear stochastic growth models: A comparison of alternative solution methods, Journal of Business and Economic Statistics 8, 1-17.