Dynamic optimization and forward looking processes

Journal of Economic Dynamics and Control 22 (1997) 49-66

Piermarco Cannarsa a,*, Massimo Giannini b, Maria Elisabetta Tessitore a

a Dipartimento di Matematica, Università di Roma "Tor Vergata", Via della Ricerca Scientifica, 00133, Roma, Italy
b Dipartimento di Procedura Civile, Università di Roma "Tor Vergata", Via della Ricerca Scientifica, 00133, Roma, Italy

Abstract

We study an optimization problem for a linear system under the action of a noncausal control, i.e. one in which the current process variables are affected by future controls. The economic model that motivates this research arises in rational expectation theory and cannot be treated by standard dynamic programming techniques. We characterize the optimal policy as the unique solution of a fourth-order ordinary differential system with suitable initial and asymptotic conditions. We also discuss the qualitative behaviour of optimally controlled systems. This analysis exhibits instability for a certain parameter range.

Keywords: Rational expectations; Optimization; Forward looking processes; Non-cooperative differential games

JEL classification: C61; C72; E5

1. Introduction

The application of optimal control techniques to problems of political economy has been seriously criticized by rational expectations (r.e.) theorists, such as Kydland and Prescott (1977). Indeed, according to this theory, rational agents modify their behaviour in reaction to expected future policy actions; in other words, the observed system changes its structure according to the implemented control. This behaviour obviously contradicts the optimal control 'philosophy', which requires that the state equations remain unchanged over the observation range.

* Corresponding author. Partially supported by the MURST 60% Project 'Modelli aleatori e controllo in economia e finanza' of the University of Rome Tor Vergata.

0165-1889/97/$17.00 © 1997 Elsevier Science B.V. All rights reserved
PII S0165-1889(97)00041-9


Therefore, if r.e. are assumed, the standard dynamic programming approach is no longer appropriate.

The aim of this paper is to study a simple class of optimization problems in continuous time, for systems subject to rational expectations under an initial condition. For simplicity, we assume perfect foresight, neglecting stochastic disturbances. As a performance index we focus our attention on a quadratic loss function; it is well known that typical examples in political economy exhibit such a structure. Moreover, the quadratic structure is particularly suitable for applying the calculus of variations in Hilbert spaces, deriving the Euler equation satisfied by minimizers. Although the Hilbert space approach is classical in optimization problems, see e.g. Banks (1969), Banks and Kent (1972), Lukes (1971) and Papavassilopoulos and Cruz (1980), to our knowledge the model under investigation in this paper has not been treated by variational techniques or by other methods so far.

Analysis of the Euler equation shows that it can be reduced to an initial-boundary-value problem for a fourth-order linear differential system. Moreover, we prove that this problem has a unique solution, which coincides with the minimizing policy for the original quadratic functional. For particular choices of parameters, the optimal control can be computed explicitly. This analytical representation allows us to study the qualitative behaviour of systems under optimal actions. For significant parameter values, we find that optimal policies exhibit an oscillatory behaviour at infinity, with a divergent absolute value.

We conclude this introduction with an outline of the paper. In Section 2 we recall the economic model that motivates our research. In Section 3 we study the related mathematical problem and prove our characterization of optimal controls; in this section we provide the full treatment of the scalar case $n = 1$, while the general case $n \ge 1$ is studied in the Appendix. Finally, in Section 4 we apply the previous results to analyse the qualitative behaviour of economic models under r.e.

2. An optimization problem in political economy

We begin by analysing a simple model discussed in the famous article by Sargent and Wallace (1973), which is a modified version of the Cagan model (1956). In such a model the price level $P$ and the money supply $M$ are assumed to satisfy the following demand function for real balances:

$$ m(t) - p(t) = q\,\pi(t). $$

Here $\pi(t)$ is the expected rate of inflation which, under the assumption of perfect foresight, is equal to $(d/dt)\log P(t)$, and $q$ is a real parameter. With this remark we can rewrite the above expression as the following linear differential equation:

$$ p'(t) = \frac{1}{q}\,[m(t) - p(t)], \qquad (2.1) $$

where $p(t) = \log P(t)$, $m(t) = \log M(t)$, and $p'$ stands for the derivative of $p$ w.r.t. time. If an initial condition

$$ p(t_0) = p_0 \qquad (2.2) $$

is assigned, then the solution of (2.1) is given by

$$ p(t) = p(t_0)\,e^{(t_0-t)/q} + \frac{1}{q}\int_{t_0}^{t} e^{(s-t)/q}\,m(s)\,ds. $$

In the cited paper by Sargent and Wallace (1973), on the contrary, the solution is modified so as to equate the current price level to the discounted (expected) path of the money supply. Consequently, in the above mathematical model, the parameter $q$ is assumed to be negative and the initial condition (2.2) is replaced by the asymptotic condition

$$ \lim_{t\to\infty} e^{t/q}\,p(t) = 0. \qquad (2.3) $$

It is then easy to see that the solution of (2.1)-(2.3) is given by

$$ p(t) = -\frac{1}{q}\int_{t}^{\infty} e^{(s-t)/q}\,m(s)\,ds. \qquad (2.4) $$

Clearly, a necessary and sufficient condition for the existence of such a solution is the integrability of $e^{t/q}m(t)$.

Let us now assume the following aggregate offer curve:

$$ y'(t) = \beta y(t) + \phi\,(p(t) - \bar p), $$

where $\phi \in (0,1)$ and $\beta > 0$. The last equation means that the change in the product $y(t)$ depends on an exogenous component (the $\beta y(t)$ term) and on the difference between the current price level and its target value $\bar p$. This value is exogenously fixed by the policy maker ($\phi$ is the speed of adjustment). If we assume that the price level evolves according to Eq. (2.4), we can describe the state of the economy by

$$ y'(t) = \beta y(t) - \phi\left(\frac{1}{q}\int_t^{\infty} e^{(s-t)/q}\,m(s)\,ds + \bar p\right). \qquad (2.5) $$

The above equation asserts that the state of the system at time $t$ (the product $y(t)$) depends on $y(t)$ itself and on the discounted future (expected) monetary policy $m(t)$ (the control variable), which should be chosen in some optimal way. We assume that the policy maker's objective is to minimize the following quadratic functional $J$:

$$ J(m) = \int_0^{\infty} \left\{ e^{-\lambda t}\,(y(t) - \bar y)^2 + e^{-\mu t}\,(m(t) - \bar m)^2 \right\} dt, $$

where $\lambda$ and $\mu$ are positive constants and $\bar y$ and $\bar m$ are fixed target values. Summing up, we have obtained an optimization problem with an initial condition and a forward looking term.

We conclude this section by noting that one might also derive the above optimization problem from some generalized form of Stackelberg games. In fact, in such a class of non-cooperative games there are two players, usually termed leader and follower, the latter reacting to the optimal policy implemented by the former. If one assumes that the follower's reaction $u_2(\cdot)$ takes the form

$$ u_2(t) = \int_t^{\infty} e^{\alpha(t-s)}\,u_1(s)\,ds, $$

where $u_1(\cdot)$ is the leader's policy, then the dynamics of the game is given by

$$ y'(t) = \beta y(t) + \gamma + u_2(t), $$

which is of type (2.5). In this sense the problem we are studying may be viewed as the 'brute force' reduction of a two-person game to an optimal control problem. However, we point out that the presence of an infinite horizon in (2.5) and the non-causal dependence of $u_2$ on $u_1$ make it difficult to apply standard methods leading to a feedback solution, as described, for instance, in Basar and Olsder (1982). In fact, this is one of the motivations of our work.

3. Analysis of the mathematical problem

Let us consider a system governed by the state equation

$$ y'(t) = B y(t) + \gamma + \int_t^{\infty} e^{\alpha(t-s)}\,u(s)\,ds, \quad t > 0, \qquad y(0) = x \in \mathbb{R}^n, \qquad (3.1) $$

where $\alpha$ is a positive constant, $\gamma \in \mathbb{R}^n$ and $B$ is an $n \times n$ constant matrix. A measurable function $u : [0,\infty) \to \mathbb{R}^n$ will be called a control. We denote by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$ the Euclidean scalar product and norm in $\mathbb{R}^n$.

Remark 3.1. It is easy to see that Eq. (2.5) can be put in the above form with $n = 1$, $\gamma = -\phi\,\bar p$, $u(t) = -(\phi/q)\,m(t)$ and $\alpha = -1/q$.


Definition 3.2. For any $\mu > 0$ we define the space

$$ L^2_\mu(0,\infty;\mathbb{R}^n) = \left\{ u \in L^2_{loc}(0,\infty;\mathbb{R}^n) : \int_0^\infty e^{-\mu t}\|u(t)\|^2\,dt < \infty \right\}. $$

Moreover, we denote by $L^2_c(0,\infty;\mathbb{R}^n)$ the space of square-integrable functions with compact support. Finally, we define the space

$$ L^1_\alpha(0,\infty;\mathbb{R}^n) = \left\{ u \in L^1_{loc}(0,\infty;\mathbb{R}^n) : \int_0^\infty e^{-\alpha t}\|u(t)\|\,dt < \infty \right\}. $$

We write $L^2_\mu(0,\infty) = L^2_\mu(0,\infty;\mathbb{R})$, $L^1_\alpha(0,\infty) = L^1_\alpha(0,\infty;\mathbb{R})$, $L^2_c(0,\infty) = L^2_c(0,\infty;\mathbb{R})$.

Remark 3.3. The space $L^2_\mu(0,\infty;\mathbb{R}^n)$, equipped with the inner product

$$ (u,v)_\mu = \int_0^\infty e^{-\mu t}\,\langle u(t), v(t)\rangle\,dt \qquad \forall u,v \in L^2_\mu(0,\infty;\mathbb{R}^n), $$

is a real Hilbert space.

If $u \in L^1_\alpha(0,\infty;\mathbb{R}^n)$, then Eq. (3.1) has a unique solution which can be represented as

$$ y(t) = e^{Bt}\left\{ x + \int_0^t e^{-Bs}\gamma\,ds + (I_\alpha^B u)(t) \right\}, \qquad (3.2) $$

where the linear operator $I_\alpha^B$ is defined as

$$ (I_\alpha^B u)(t) = \int_0^t e^{(\alpha I - B)s} \int_s^\infty e^{-\alpha r}\,u(r)\,dr\,ds. \qquad (3.3) $$

Let us now consider the problem of minimizing the functional

$$ J(u) = \int_0^\infty \left\{ e^{-\lambda t}\|y(t) - \bar y\|^2 + e^{-\mu t}\|u(t) - \bar u\|^2 \right\} dt \qquad (3.4) $$

over all controls $u \in L^2_\mu(0,\infty;\mathbb{R}^n)$, where $y$ is the solution of (3.1). Let us define

$$ \beta = \sup\{\operatorname{Re} z : z \in \sigma(B)\}, \qquad (3.5) $$

where $\sigma(B)$ denotes the spectrum of $B$, i.e. the set of all eigenvalues of $B$.
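Before stating the main existence result, note that the functional (3.4) is straightforward to evaluate numerically. The sketch below (scalar case; all parameter values are illustrative assumptions, not from the paper) approximates $J$ by the trapezoidal rule on a truncated horizon:

```python
import math

# Illustrative parameters (not from the paper).
LAM, MU, YBAR, UBAR = 1.0, 1.0, 1.0, 0.5

def J(y, u, horizon=60.0, n=6000):
    # J(u) = int_0^inf { e^{-lam t}|y - ybar|^2 + e^{-mu t}|u - ubar|^2 } dt,
    # truncated at `horizon` and computed by the trapezoidal rule.
    h = horizon / n
    def f(t):
        return (math.exp(-LAM * t) * (y(t) - YBAR) ** 2
                + math.exp(-MU * t) * (u(t) - UBAR) ** 2)
    return h * (0.5 * (f(0.0) + f(horizon)) + sum(f(k * h) for k in range(1, n)))

zero_loss = J(lambda t: 1.0, lambda t: 0.5)   # both targets met exactly
unit_dev = J(lambda t: 1.0, lambda t: 1.5)    # constant unit deviation in u
print(zero_loss, unit_dev)                    # 0 and approximately 1
```

The second value is approximately $\int_0^\infty e^{-t}\,dt = 1$, as expected for a constant unit deviation of the control from its target.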

Theorem 3.4. Assume that $\mu < 2\alpha$. Then there exists a unique control $u^* \in L^2_\mu(0,\infty;\mathbb{R}^n)$ such that

$$ J(u^*) = \min_{u \in L^2_\mu(0,\infty;\mathbb{R}^n)} J(u) < \infty. \qquad (3.6) $$


Furthermore, if $\lambda > \max\{0, 2\beta\}$, then for every $h \in \mathbb{R}$ and $v \in L^2_c(0,\infty;\mathbb{R}^n)$, $J(u^* + hv) < \infty$ and $J$ is differentiable at $u^*$ with respect to any direction $v$.

Proof. First of all we remark that if $\mu < 2\alpha$, then $L^2_\mu(0,\infty;\mathbb{R}^n) \subset L^1_\alpha(0,\infty;\mathbb{R}^n)$. In fact, for any $u \in L^2_\mu(0,\infty;\mathbb{R}^n)$, Hölder's inequality yields

$$ \int_0^\infty e^{-\alpha t}\|u(t)\|\,dt \le \left\{\int_0^\infty e^{-\mu t}\|u(t)\|^2\,dt\right\}^{1/2} \left\{\int_0^\infty e^{(\mu-2\alpha)t}\,dt\right\}^{1/2} = \frac{\|u\|_{L^2_\mu(0,\infty;\mathbb{R}^n)}}{\sqrt{2\alpha-\mu}}. $$

Next, we show that $J$ is finite for some control in $L^2_\mu(0,\infty;\mathbb{R}^n)$, and so is the minimum above. To see this, it suffices to take the constant control $u(t) = -\alpha(\gamma + Bx)$. In fact, the corresponding solution is also constant, $y(t) = x$, and therefore $y \in L^2_\lambda(0,\infty;\mathbb{R}^n)$. Moreover, $J$ is strictly convex and coercive:

$$ J(u) = \int_0^\infty \left\{ e^{-\lambda t}\|y(t) - \bar y\|^2 + e^{-\mu t}\|u(t) - \bar u\|^2 \right\} dt \ge C_1 \int_0^\infty e^{-\mu t}\|u(t)\|^2\,dt + C_2, $$

where $C_1 > 0$, $C_2 \in \mathbb{R}$. Therefore, $J$ has a unique minimum.

To complete the proof of the theorem, for every $h \in \mathbb{R}$ and $v \in L^2_c(0,\infty;\mathbb{R}^n)$, let $y_{u^*+hv}$ and $y_{u^*}$ denote the solutions of (3.1) with controls $u^* + hv$ and $u^*$, respectively. Then, by Eq. (3.2) and Hölder's inequality, we derive

$$ \|y_{u^*+hv}(t)\| \le \|y_{u^*}(t)\| + |h|\,\big\|e^{Bt}(I_\alpha^B v)(t)\big\| \le \|y_{u^*}(t)\| + \frac{1}{\sqrt{2\alpha-\mu}}\,|h|\,\|v\|_2 \left\| \int_0^t e^{\beta(t-s)}\,ds \right\|, \qquad (3.7) $$

where $\|\cdot\|_2 = \|\cdot\|_{L^2_\mu(0,\infty;\mathbb{R}^n)}$. We note that $\int_0^t e^{\beta(t-s)}\,ds \le K_1 e^{\beta t}$ if $\beta > 0$, while $\int_0^t e^{\beta(t-s)}\,ds \le t$ if $\beta \le 0$, for some positive constant $K_1$. Therefore, exploiting estimate (3.7), we obtain

$$ J(u^* + hv) = \int_0^\infty \left\{ e^{-\lambda t}\|y_{u^*+hv}(t) - \bar y\|^2 + e^{-\mu t}\|u^*(t) + hv(t) - \bar u\|^2 \right\} dt \le 2J(u^*) + K\|v\|_2^2\,|h|^2 \int_0^\infty e^{(2\beta-\lambda)t}\,dt + 2\|v\|_2^2\,|h|^2 < \infty, $$

where $K = K(\alpha,\beta)$ is a positive constant. From the above estimate, (3.6) follows.

Moreover, for every $v \in L^2_c(0,\infty;\mathbb{R}^n)$, applying Hölder's inequality we have

$$ \int_0^\infty \left\{ e^{-\lambda t}\,\langle y^*(t) - \bar y,\ (I_\alpha^B v)(t)\rangle + e^{-\mu t}\,\langle u^*(t) - \bar u,\ v(t)\rangle \right\} dt $$
$$ \le \left\{\int_0^\infty e^{-\lambda t}\|y^*(t) - \bar y\|^2\,dt\right\}^{1/2} K\,\|v\|_2 \left\{\int_0^\infty e^{(2\beta-\lambda)t}\,dt\right\}^{1/2} + \|v\|_2 \left\{\int_0^\infty e^{-\mu t}\|u^*(t) - \bar u\|^2\,dt\right\}^{1/2} < \infty, $$

where $K = K(\alpha,\beta)$ is a positive constant. Hence, $J$ is differentiable with respect to any direction $v \in L^2_c(0,\infty;\mathbb{R}^n)$. □

Remark 3.5. Problem (3.1)-(3.4) can be restated in a more classical way by introducing a new state variable

$$ w(t) = \int_t^\infty e^{\alpha(t-s)}\,u(s)\,ds. $$

Differentiating $w$ with respect to time, we obtain the following problem:

$$ y'(t) = B y(t) + \gamma + w(t), \quad t > 0, \qquad y(0) = x \in \mathbb{R}^n, $$
$$ w'(t) = \alpha w(t) - u(t), \qquad \lim_{t\to\infty} w(t) = 0. $$

Notice that in the above system we have an initial condition for the first $n$ variables and an asymptotic condition for the last ones. This fact would be reflected in the adjoint variables, producing an asymptotic condition for the $y$-related costates and an initial condition for the costates corresponding to $w$. For these reasons we prefer the variational approach to (3.1)-(3.4), which, in our opinion, is a more direct way to solve the problem.

In order to derive the Euler equation for problem (3.1)-(3.4), we need the following lemma. Let $B^*$ be the adjoint matrix of $B$.

Lemma 3.6. For every $v \in L^2_c(0,\infty;\mathbb{R}^n)$ and for every $w \in L^1_\lambda(0,\infty;\mathbb{R}^n)$ we have

$$ \int_0^\infty e^{-\lambda t}\,\langle w(t), (I_\alpha^B v)(t)\rangle\,dt = \int_0^\infty e^{-\alpha t}\,\langle v(t), (I_\lambda^{B^*} w)(t)\rangle\,dt, \qquad (3.8) $$

where the operator $I_\lambda^{B^*}$ is defined as

$$ (I_\lambda^{B^*} w)(t) = \int_0^t e^{(\alpha I - B^*)s} \int_s^\infty e^{-\lambda r}\,w(r)\,dr\,ds. $$

Moreover, $I_\lambda^{B^*} : L^1_\lambda(0,\infty;\mathbb{R}^n) \to C^1([0,\infty);\mathbb{R}^n)$.

Proof. By Fubini's theorem,

$$ (I_\alpha^B v)(t) = \int_0^t e^{(\alpha I - B)s} \int_s^\infty e^{-\alpha r}\,v(r)\,dr\,ds = \int_0^t e^{-\alpha r}\left(\int_0^r e^{(\alpha I - B)s}\,ds\right) v(r)\,dr + \int_t^\infty e^{-\alpha r}\left(\int_0^t e^{(\alpha I - B)s}\,ds\right) v(r)\,dr. $$

Recalling that $(e^{M})^* = e^{M^*}$, we obtain

$$ \int_0^\infty e^{-\lambda t}\,\langle w(t), (I_\alpha^B v)(t)\rangle\,dt = \int_0^\infty e^{-\lambda t}\,dt \int_0^\infty e^{-\alpha r}\,dr \int_0^{t\wedge r} \langle e^{(\alpha I - B^*)s} w(t), v(r)\rangle\,ds $$
$$ = \int_0^\infty e^{-\alpha r}\,dr \int_0^\infty e^{-\lambda t}\,dt \int_0^{t\wedge r} \langle v(r), e^{(\alpha I - B^*)s} w(t)\rangle\,ds = \int_0^\infty e^{-\alpha t}\,\langle v(t), (I_\lambda^{B^*} w)(t)\rangle\,dt. $$

The last statement of the lemma easily follows from the absolute continuity of primitives w.r.t. the Lebesgue integral. □
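The duality relation (3.8) can be tested numerically in the scalar case $B = \beta$. In the sketch below the test functions, parameters and quadrature grid are illustrative assumptions; both sides of (3.8) are computed with elementary trapezoidal rules:

```python
import math

# Illustrative data (not from the paper): scalar case B = beta.
ALPHA, BETA, LAM = 1.0, 0.3, 0.8
T, N = 30.0, 1500           # truncated horizon and grid size
H = T / N
TS = [k * H for k in range(N + 1)]

def v(t):                   # compactly supported test function
    return t * (2.0 - t) if 0.0 <= t <= 2.0 else 0.0

def w(t):                   # decaying test function
    return math.exp(-0.5 * t)

def I_op(f, weight):
    # Returns the list (I f)(t_k), with
    # (I f)(t) = int_0^t e^{(alpha-beta)s} int_s^inf e^{-weight*r} f(r) dr ds.
    tail = [0.0] * (N + 1)  # tail[k] ~ int_{t_k}^T e^{-weight*r} f(r) dr
    for k in range(N - 1, -1, -1):
        a = math.exp(-weight * TS[k]) * f(TS[k])
        b = math.exp(-weight * TS[k + 1]) * f(TS[k + 1])
        tail[k] = tail[k + 1] + 0.5 * (a + b) * H
    out = [0.0] * (N + 1)
    for k in range(N):
        g0 = math.exp((ALPHA - BETA) * TS[k]) * tail[k]
        g1 = math.exp((ALPHA - BETA) * TS[k + 1]) * tail[k + 1]
        out[k + 1] = out[k] + 0.5 * (g0 + g1) * H
    return out

Iv, Iw = I_op(v, ALPHA), I_op(w, LAM)
lhs = sum(math.exp(-LAM * t) * w(t) * Iv[k] * H for k, t in enumerate(TS))
rhs = sum(math.exp(-ALPHA * t) * v(t) * Iw[k] * H for k, t in enumerate(TS))
print(lhs, rhs)             # the two sides agree to quadrature accuracy
```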

For the sake of simplicity, we now continue our analysis for the scalar case corresponding to (3.1), i.e.

$$ y'(t) = \beta y(t) + \gamma + \int_t^\infty e^{\alpha(t-s)}\,u(s)\,ds, \quad t > 0, \qquad y(0) = x \in \mathbb{R}, \qquad (3.9) $$

where $\alpha$ is a positive constant and $\gamma, \beta \in \mathbb{R}$. The general case is treated in the Appendix. Let us consider the polynomial

$$ P(z) = (\alpha - \beta - \lambda + \mu - z)(\mu - \lambda - z)(\mu - \alpha - z)(\mu - \beta - z) \qquad (3.10) $$

and denote by $P(D)$ the corresponding differential operator with constant coefficients, i.e.

$$ P(D)u = (\alpha - \beta - \lambda + \mu - D)(\mu - \lambda - D)(\mu - \alpha - D)(\mu - \beta - D)u, $$


where Du = u’.

Theorem 3.7. Assume $\mu < 2\alpha$ and $\lambda > 2\beta$. Then the problem

$$ e^{(\beta+\lambda-\mu)t}\,P(D)u(t) + u(t) = e^{(\beta+\lambda-\mu)t}\,P(0)\,\bar u - \alpha(\beta\bar y + \gamma), \quad t > 0, $$
$$ u(0) = \bar u, \qquad (3.11) $$
$$ D^2u(0) - (2\mu - \beta - \alpha)\,Du(0) + \bar y - x = 0 $$

has a unique solution $u^* \in L^2_\mu(0,\infty) \cap C^\infty(0,\infty)$. Moreover, $u^*$ is the optimal control for problem (3.4)-(3.9).

The definition of the space $L^2_\mu(0,\infty)$ provides additional asymptotic conditions. This fact explains the stated uniqueness of solutions to (3.11).

Proof. Let $u^*$ be the optimal control for problem (3.4)-(3.9).

Step I: We derive the Euler equation of $J$. In view of Theorem 3.4, differentiating $J$ with respect to any direction $v \in L^2_c(0,\infty)$ and applying the previous lemma (in the scalar case $B^* = \beta$; note that $e^{-\lambda r} = e^{-\alpha r}e^{(\alpha-\lambda)r}$, so that the adjoint operator acts as $I_\alpha^\beta$ on $w(t) = e^{(\alpha-\lambda)t}[y^*(t) - \bar y]$), we obtain

$$ \tfrac12 J'(u^*)v = \tfrac12\,\frac{d}{dh} J(u^* + hv)\Big|_{h=0} = \int_0^\infty \left\{ e^{-\lambda t}[y^*(t) - \bar y]\,(I_\alpha^\beta v)(t) + e^{-\mu t}[u^*(t) - \bar u]\,v(t) \right\} dt $$
$$ = \int_0^\infty e^{-\alpha t}\,v(t) \left\{ I_\alpha^\beta\big(e^{(\alpha-\lambda)\cdot}[y^*(\cdot) - \bar y]\big)(t) + e^{(\alpha-\mu)t}[u^*(t) - \bar u] \right\} dt. $$

Since $J$ achieves its minimum at $u^*$, we have $J'(u^*)v = 0$ for any $v \in L^2_c(0,\infty)$. Noting that $e^{-\alpha t} \ne 0$, we conclude that $J'(u^*)v = 0$ for any $v \in L^2_c(0,\infty)$ if and only if

$$ I_\alpha^\beta\big(e^{(\alpha-\lambda)\cdot}[y^*(\cdot) - \bar y]\big)(t) + e^{(\alpha-\mu)t}[u^*(t) - \bar u] = 0 \quad \text{for a.e. } t \in [0,\infty). \qquad (3.12) $$

Since the range of $I_\alpha^\beta$ is contained in $C^1([0,\infty))$, Eq. (3.12) implies that $u^*$ is continuous, and so (3.12) holds for every $t \in [0,\infty)$. Hence, taking $t = 0$, we immediately derive that $u^*(0) = \bar u$.

Step II: We invert the operator $I_\alpha^\beta$. Let $\phi(t) = (I_\alpha^\beta w)(t)$, where $w \in L^1_\alpha(0,\infty)$. Then

$$ \int_t^\infty e^{-\alpha s}\,w(s)\,ds = e^{-(\alpha-\beta)t}\,\phi'(t). $$

Differentiating the above equality once again, we obtain

$$ -e^{-\alpha t}\,w(t) = \big[e^{-(\alpha-\beta)t}\,\phi'(t)\big]' = e^{-(\alpha-\beta)t}\,(\beta - \alpha + D)D\,\phi(t), $$

so that

$$ w(t) = e^{\beta t}\,(\alpha - \beta - D)D\,\phi(t). \qquad (3.13) $$

Step III: $u^*$ is a solution of (3.11). Applying formula (3.13) to Eq. (3.12), with $w(t) = e^{(\alpha-\lambda)t}[y^*(t) - \bar y]$ and $\phi(t) = e^{(\alpha-\mu)t}[\bar u - u^*(t)]$, we obtain

$$ e^{(\alpha-\lambda)t}[y^*(t) - \bar y] = e^{\beta t}(\alpha - \beta - D)D\,\big\{ e^{(\alpha-\mu)t}[\bar u - u^*(t)] \big\} = e^{(\beta+\alpha-\mu)t}\big\{ (\alpha-\mu)(\mu-\beta)\,\bar u + (\mu - \beta - D)(\mu - \alpha - D)u^*(t) \big\}. \qquad (3.14) $$

Since

$$ e^{(\alpha-\lambda)t}[y^*(t) - \bar y] = e^{(\alpha-\lambda+\beta)t}\left( x + \gamma\,\frac{1 - e^{-\beta t}}{\beta} + (I_\alpha^\beta u^*)(t) \right) - e^{(\alpha-\lambda)t}\,\bar y, $$

plugging (3.14) in the above equation we obtain

$$ (I_\alpha^\beta u^*)(t) = -\frac{1 - e^{-\beta t}}{\beta}\,\gamma - x + e^{-\beta t}\,\bar y + e^{(\lambda-\mu)t}\big\{ (\alpha-\mu)(\mu-\beta)\,\bar u + (\mu - \beta - D)(\mu - \alpha - D)u^*(t) \big\}. \qquad (3.15) $$

From Eq. (3.15), setting $t = 0$ we recover the third condition in (3.11):

$$ 0 = -x + \bar y + (\alpha-\mu)(\mu-\beta)\,\bar u + (\mu - \beta - D)(\mu - \alpha - D)u^*(t)\big|_{t=0} = D^2u^*(0) - (2\mu - \beta - \alpha)\,Du^*(0) + \bar y - x. \qquad (3.16) $$

In order to verify that $u^*$ satisfies Eq. (3.11), we apply formula (3.13) to Eq. (3.15):

$$ u^*(t) = e^{\beta t}(\alpha - \beta - D)D\,\psi(t) + e^{\beta t}(\alpha - \beta - D)D\,\big\{ e^{(\lambda-\mu)t}(\mu - \beta - D)(\mu - \alpha - D)u^*(t) \big\}, \qquad (3.17) $$

where $\psi$ is defined as

$$ \psi(t) = \frac{e^{-\beta t} - 1}{\beta}\,\gamma - x + e^{-\beta t}\,\bar y + e^{(\lambda-\mu)t}(\alpha-\mu)(\mu-\beta)\,\bar u. $$

Now, we compute

$$ e^{\beta t}(\alpha - \beta - D)D\,\psi(t) = -\alpha(\gamma + \beta\bar y) + (\mu - \lambda + \alpha - \beta)(\lambda - \mu)(\alpha-\mu)(\mu-\beta)\,\bar u\,e^{(\beta+\lambda-\mu)t}. \qquad (3.18) $$

On the other hand,

$$ e^{\beta t}(\alpha - \beta - D)D\,\big\{ e^{(\lambda-\mu)t}(\mu - \beta - D)(\mu - \alpha - D)u^*(t) \big\} = e^{(\beta+\lambda-\mu)t}(\alpha - \beta + \mu - \lambda - D)(\lambda - \mu + D)(\mu - \alpha - D)(\mu - \beta - D)u^*(t). \qquad (3.19) $$

Substituting Eqs. (3.18) and (3.19) in Eq. (3.17), and noting that $(\mu - \lambda + \alpha - \beta)(\lambda - \mu)(\alpha-\mu)(\mu-\beta) = P(0)$ while $(\alpha - \beta + \mu - \lambda - D)(\lambda - \mu + D)(\mu - \alpha - D)(\mu - \beta - D) = -P(D)$, we conclude that $u^*$ is a solution of (3.11).

Step IV: The solution of (3.11) is unique. Since the functional $J$ is convex, there is a one-to-one correspondence between its minimum points and its stationary points, i.e. the solutions of the Euler equation. From the previous steps it follows that any stationary point of $J$ is a solution of (3.11). The conclusion follows from the fact that $J$ is strictly convex, and so it has a unique minimum point. □

Remark 3.8. Although the differential equation in (3.11) cannot, in general, be solved explicitly, its solutions can be approximated by well-known numerical algorithms. Therefore, using Theorem 3.7, one can also compute the optimal control $u^*$ of problem (3.4)-(3.9). Another way to calculate such a control is via

Laguerre polynomials $L_n$. In fact, one could expand the control and the state in Laguerre series, with coefficients $\hat u_n = (u(t), L_n(\mu t))_\mu$ and $\hat y_n = (y(t), L_n(\lambda t))_\lambda$. These coefficients can be computed from problem (3.9)-(3.4), again by well-known numerical algorithms. This method, however, does not seem to be more general than the one presented here, which, on the contrary, seems more useful for investigating the qualitative behaviour of solutions.

We now show that, for particular values of the parameters $\lambda$ and $\mu$, the differential problem (3.11) can be solved explicitly.

Corollary 3.9. If

$$ \lambda = \alpha, \qquad \mu = \alpha + \beta, \qquad (3.20) $$

then the unique optimal control $u^* \in L^2_\mu(0,\infty)$ of problem (3.4)-(3.9) is the unique solution of the following equation with constant coefficients:

$$ (\alpha - D)^2(\beta - D)^2 u(t) + u(t) = \alpha^2\beta^2\,\bar u - \alpha(\beta\bar y + \gamma), \quad t > 0, $$
$$ u(0) = \bar u, \qquad (3.21) $$
$$ D^2u(0) - (\alpha + \beta)\,Du(0) + \bar y - x = 0. $$

Moreover,

$$ u^*(t) = e^{at}[c_1\cos bt + c_2\sin bt] + \tilde u, \qquad (3.22) $$

where the constants $\tilde u$, $a$, $b$, $c_1$, $c_2$ are given by

$$ \tilde u = \frac{\alpha^2\beta^2\bar u - \alpha(\beta\bar y + \gamma)}{\alpha^2\beta^2 + 1}, $$
$$ a = \frac{\alpha+\beta}{2} + \frac12\operatorname{Re}\big[\sqrt{(\alpha-\beta)^2 + 4i}\,\big], \qquad b = \frac12\operatorname{Im}\big[\sqrt{(\alpha-\beta)^2 + 4i}\,\big], $$
$$ c_1 = \bar u - \tilde u, \qquad c_2 = \frac{(\bar u - \tilde u)\big[(\alpha+\beta)a - a^2 + b^2\big] + x - \bar y}{b\,(2a - \alpha - \beta)}, $$

the square root being chosen with negative real part.

Proof. The first part of the conclusion follows from Theorem 3.7; therefore we just prove formula (3.22) for $u^*$.

First of all, a constant solution $\tilde u(t) = \tilde u$ of (3.21) is obviously given by

$$ \tilde u = \frac{\alpha^2\beta^2\bar u - \alpha(\beta\bar y + \gamma)}{\alpha^2\beta^2 + 1}. \qquad (3.23) $$

Next, we consider the homogeneous equation

$$ (\alpha - D)^2(\beta - D)^2 u(t) + u(t) = 0. \qquad (3.24) $$

Notice that all solutions of the associated characteristic equation

$$ (\alpha - z)^2(\beta - z)^2 = -1 \qquad (3.25) $$

are complex. Moreover, it is easy to check that the solutions of (3.25) are given by the two solutions $z_1, z_2$ of the second-order equation

$$ z^2 - (\alpha + \beta)z + \alpha\beta = i $$

and by their conjugates $\bar z_1, \bar z_2$. Therefore, the general integral of Eq. (3.21) is given by

$$ u(t) = e^{a_1 t}[c_1\cos b_1 t + c_2\sin b_1 t] + e^{a_2 t}[c_3\cos b_2 t + c_4\sin b_2 t] + \tilde u, $$

where $z_j = a_j + i b_j$, $j = 1, 2$. From the above equation we obtain

$$ z = \frac{\alpha + \beta \pm \sqrt{(\alpha+\beta)^2 - 4(\alpha\beta - i)}}{2} = \frac{\alpha + \beta \pm \sqrt{(\alpha-\beta)^2 + 4i}}{2}. $$

In order to obtain a solution of (3.21) in $L^2_\mu(0,\infty)$ we need that $2\operatorname{Re}[z] < \mu = \alpha + \beta$, which yields in turn

$$ \operatorname{Re}[z] = \frac{\alpha+\beta}{2} \pm \frac12\operatorname{Re}\big[\sqrt{(\alpha-\beta)^2 + 4i}\,\big] < \frac{\alpha+\beta}{2}. $$

The condition is $\operatorname{Re}\big[\sqrt{(\alpha-\beta)^2 + 4i}\,\big] < 0$. Let $\theta = \operatorname{Arg}[(\alpha-\beta)^2 + 4i]$; then $0 < \theta \le \pi/2$. Moreover, let $r$ denote the square root of $(\alpha-\beta)^2 + 4i$ such that

$$ \operatorname{Re}[r] = -\big[(\alpha-\beta)^4 + 16\big]^{1/4}\cos(\theta/2) < 0. $$

The general solution $u$ of (3.21) in $L^2_\mu(0,\infty)$ is then

$$ u(t) = e^{at}[c_1\cos bt + c_2\sin bt] + \tilde u, \qquad a = \frac{\alpha+\beta}{2} + \frac{\operatorname{Re}[r]}{2}, \quad b = \frac{\operatorname{Im}[r]}{2}. $$

Now we use the two initial conditions of (3.21) to determine the constants $c_1$ and $c_2$. From $u(0) = \bar u$ we obtain $c_1 = \bar u - \tilde u$, while from $D^2u(0) - (\alpha+\beta)Du(0) + \bar y - x = 0$ we obtain

$$ c_2 = \frac{(\bar u - \tilde u)\big[(\alpha+\beta)a - a^2 + b^2\big] + x - \bar y}{b\,(2a - \alpha - \beta)}. \qquad \Box $$

Remark 3.10. From formula (3.22) it follows that $u^*$ converges rapidly to $\tilde u$ if we choose the parameters in an appropriate way. In fact,

$$ \operatorname{Re}[z] = \frac{\alpha+\beta}{2} + \frac12\operatorname{Re}\big[\sqrt{(\alpha-\beta)^2 + 4i}\,\big] \qquad (3.26) $$

is negative if $\alpha + \beta$ is small enough.

4. Concluding remarks

In this paper we have developed a direct method to solve an optimization problem with r.e. under a perfect foresight assumption. Theorem 3.7 establishes that this problem can be reduced to a fourth-order o.d.e. with variable coefficients and suitable initial and asymptotic conditions. Numerical approximation schemes are easily applicable to such a problem. Moreover, choosing the parameters as in (3.20), an explicit solution of the above o.d.e. can be obtained. In this case the qualitative behaviour of the system under investigation can be described as follows.

(a) The optimal control $u^*$ has an oscillatory dynamic. Therefore, we cannot have monotone convergence to the particular solution

$$ \tilde u = \frac{\alpha^2\beta^2\bar u - \alpha(\beta\bar y + \gamma)}{\alpha^2\beta^2 + 1}, $$

which represents the equilibrium value. With respect to our model, the optimal monetary policy is oscillatory, and so the state equation (3.9) is governed by an oscillatory dynamic.

(b) The stability of optimal controls depends on the parameters. Divergent oscillatory behaviours are possible for optimal controls. Economies characterized by different parametric sets exhibit optimal controls with very different asymptotic behaviours.

(c) Vice versa, it is possible to identify a parametric region which yields a very fast convergence to the equilibrium; dynamic systems which lie in this region are more stable, and the optimal control achieves a constant value (the equilibrium) in a short time, see Remark 3.10.
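Points (b) and (c) can be illustrated numerically: under (3.20), the sign of the exponent $a$ in (3.22) separates divergent-oscillatory economies from rapidly converging ones. The parameter pairs below are purely illustrative:

```python
import cmath

def exponent(alpha, beta):
    # a = (alpha+beta)/2 + Re[r]/2, with r the square root of
    # (alpha-beta)^2 + 4i having negative real part (cf. (3.22)).
    r = cmath.sqrt((alpha - beta) ** 2 + 4j)
    if r.real > 0:
        r = -r
    return (alpha + beta) / 2 + r.real / 2

print(exponent(0.1, 0.1))   # negative: fast convergence to equilibrium
print(exponent(2.0, 1.0))   # positive: divergent oscillatory control
```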


(d) The initial value of any optimal control is equal to the target value $\bar u$; i.e. it is optimal to start in the 'best way'. Nevertheless, the optimal control could get very far from the equilibrium and, in turn, from the target value for sufficiently large times.

In conclusion, we would like to underline one of the main differences due to the presence of the forward looking term in the economic model. In classical optimal control problems, one expects optimizing policies to converge asymptotically to a steady state. On the contrary, the introduction of r.e. may produce asymptotic instability.

Appendix

We now state the $n$-dimensional version of Theorem 3.7.

Theorem A.1. Assume $\mu < 2\alpha$ and $\lambda > 2\beta$, where $\beta$ is defined in (3.5). Then there exist matrices $A_0(t)$, $A_1(t)$, $A_2(t)$, $A_3(t)$ and $A_4(t)$ such that the problem

$$ (A_0(t) + I)u(t) + A_1(t)Du(t) + A_2(t)D^2u(t) + A_3(t)D^3u(t) + A_4(t)D^4u(t) = A_0(t)\,\bar u - \alpha(B\bar y + \gamma), $$
$$ u(0) = \bar u, \qquad (A.1) $$
$$ D^2u(0) - [(2\mu - \alpha)I - B^*]\,Du(0) + \bar y - x = 0 $$

has a unique solution $u^* \in L^2_\mu(0,\infty;\mathbb{R}^n) \cap C^\infty((0,\infty);\mathbb{R}^n)$, which is the optimal control for problem (3.1)-(3.4). Moreover, the matrices $A_0(t)$, $A_1(t)$, $A_2(t)$, $A_3(t)$ and $A_4(t)$ can be computed explicitly:

$$ A_0(t) = [(\alpha I - B)e^{Lt} - e^{Lt}H]\,HF, $$
$$ A_1(t) = -e^{Lt}(H^2G + 2HF) + (\alpha I - B)e^{Lt}(HG + F), $$
$$ A_2(t) = -(\alpha I - B)e^{Lt}(H - G) + e^{Lt}(H^2 - 2HG - F), $$
$$ A_3(t) = -(\alpha I - B)e^{Lt} + e^{Lt}(2H - G), $$
$$ A_4(t) = e^{Lt}, $$

where the matrices $L$, $F$, $G$ and $H$ are defined as

$$ L = (\lambda - \mu)I + B^*, \qquad F = (\alpha I - B^*)(\alpha - \mu) - I(\alpha - \mu)^2, $$
$$ G = (2\mu - \alpha)I - B^*, \qquad H = -B + L. $$
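Before turning to the proof, the stated matrices can be checked against the scalar case $n = 1$, $B = \beta$, where (A.1) must reduce to (3.11): in particular, at $t = 0$ the coefficient $A_0$ must equal $P(0)$ from (3.10). The parameter values below are illustrative:

```python
# Illustrative parameters with mu < 2*alpha and lambda > 2*beta.
alpha, beta, lam, mu = 1.0, 0.2, 0.9, 1.1

# Scalar reductions of the matrices of Theorem A.1 at t = 0:
H = lam - mu                                  # H = -B + L, L = (lam-mu) + beta
F = (alpha - beta) * (alpha - mu) - (alpha - mu) ** 2
A0 = ((alpha - beta) - H) * H * F             # A0(0) = [(alpha-beta) - H] H F

# P(0) from the scalar polynomial (3.10):
P0 = (alpha - beta - lam + mu) * (mu - lam) * (mu - alpha) * (mu - beta)
print(A0, P0)                                 # the two values agree
```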


Proof. Let $u^*$ be the optimal control for problem (3.1)-(3.4).

Step I: Proceeding as in Theorem 3.7, we derive the Euler equation of $J$:

$$ I_\alpha^{B^*}\big(e^{(\alpha-\lambda)\cdot}[y^*(\cdot) - \bar y]\big)(t) + e^{(\alpha-\mu)t}[u^*(t) - \bar u] = 0 \quad \text{for a.e. } t \in [0,\infty). \qquad (A.2) $$

Since the range of $I_\alpha^{B^*}$ is contained in $C^1([0,\infty);\mathbb{R}^n)$, Eq. (A.2) implies that $u^*$ is continuous, and so (A.2) holds for every $t \in [0,\infty)$. Hence, taking $t = 0$, we immediately derive that $u^*(0) = \bar u$.

Step II: We invert the operator $I_\alpha^M$, where $M$ is an $n \times n$ matrix. Let $\phi(t) = (I_\alpha^M w)(t)$, where $w \in L^1_\alpha(0,\infty;\mathbb{R}^n)$. Then

$$ \int_t^\infty e^{-\alpha s}\,w(s)\,ds = e^{-(\alpha I - M)t}\,\phi'(t). $$

Differentiating the above equality once again, we obtain

$$ -e^{-\alpha t}\,w(t) = -(\alpha I - M)e^{-(\alpha I - M)t}\,\phi'(t) + e^{-(\alpha I - M)t}\,\phi''(t), $$

so that

$$ w(t) = (\alpha I - M)e^{Mt}\,\phi'(t) - e^{Mt}\,\phi''(t). \qquad (A.3) $$

Step III: $u^*$ is a solution of (A.1). Applying formula (A.3), with $M = B^*$, to Eq. (A.2), we obtain

$$ e^{(\alpha-\lambda)t}[y^*(t) - \bar y] = (\alpha I - B^*)e^{B^*t}\big(e^{(\alpha-\mu)t}[\bar u - u^*(t)]\big)' - e^{B^*t}\big(e^{(\alpha-\mu)t}[\bar u - u^*(t)]\big)'' $$
$$ = (\alpha I - B^*)e^{B^*t}e^{(\alpha-\mu)t}\big[(\alpha-\mu)(\bar u - u^*(t)) - Du^*(t)\big] - e^{B^*t}e^{(\alpha-\mu)t}\big[(\alpha-\mu)^2(\bar u - u^*(t)) - 2(\alpha-\mu)Du^*(t) - D^2u^*(t)\big]. $$

Therefore,

$$ y^*(t) = e^{Lt}E(t) + \bar y, \qquad (A.4) $$

where

$$ E(t) = F(\bar u - u^*(t)) - G\,Du^*(t) + D^2u^*(t) $$

and the matrices $L$, $F$ and $G$ are defined as

$$ L = (\lambda - \mu)I + B^*, \qquad F = (\alpha I - B^*)(\alpha - \mu) - I(\alpha - \mu)^2, \qquad G = (2\mu - \alpha)I - B^*. $$

Recalling that

$$ y^*(t) = e^{Bt}\left\{ x + \int_0^t e^{-Bs}\gamma\,ds + (I_\alpha^B u^*)(t) \right\} $$

and setting $H = -B + L$, plugging (A.4) in the above equation we obtain

$$ (I_\alpha^B u^*)(t) = e^{Ht}E(t) + e^{-Bt}\bar y - x - \int_0^t e^{-Bs}\gamma\,ds = \phi(t). \qquad (A.5) $$

From Eq. (A.5), setting $t = 0$ we recover the third condition in (A.1):

$$ D^2u^*(0) - [(2\mu - \alpha)I - B^*]\,Du^*(0) + \bar y - x = 0. $$

In order to verify that $u^*$ satisfies Eq. (A.1), we apply formula (A.3) to Eq. (A.5):

$$ u^*(t) = (\alpha I - B)e^{Bt}\,\phi'(t) - e^{Bt}\,\phi''(t), \qquad (A.6) $$

where $\phi$ is defined in Eq. (A.5). We compute

$$ (\alpha I - B)e^{Bt}\,\phi'(t) = (\alpha I - B)e^{Bt}He^{Ht}F(\bar u - u^*(t)) - (\alpha I - B)e^{Bt}(He^{Ht}G + e^{Ht}F)Du^*(t) $$
$$ + (\alpha I - B)e^{Bt}(He^{Ht} - e^{Ht}G)D^2u^*(t) + (\alpha I - B)e^{Lt}D^3u^*(t) - (\alpha I - B)B\bar y - (\alpha I - B)\gamma. \qquad (A.7) $$

On the other hand,

$$ -e^{Bt}\,\phi''(t) = -e^{Bt}H^2e^{Ht}F(\bar u - u^*(t)) + e^{Bt}(H^2e^{Ht}G + 2He^{Ht}F)Du^*(t) $$
$$ - e^{Bt}(H^2e^{Ht} - 2He^{Ht}G - e^{Ht}F)D^2u^*(t) - e^{Bt}(2He^{Ht} - e^{Ht}G)D^3u^*(t) - e^{Lt}D^4u^*(t) - B^2\bar y - B\gamma. \qquad (A.8) $$

Substituting Eqs. (A.7) and (A.8) in Eq. (A.6), we conclude that $u^*$ is a solution of (A.1). Moreover, as in Theorem 3.7, the solution $u^*$ of (A.1) is unique. □


References

Banks, H.T., 1969. Variational problems involving functional differential equations. SIAM Journal on Control 7 (1), 1-17.
Banks, H.T., Kent, G.A., 1972. Control of functional differential equations of retarded and neutral type to target sets in function space. SIAM Journal on Control 10, 567-593.
Basar, T., Olsder, G.J., 1982. Dynamic Noncooperative Game Theory. Academic Press, London (second edition, 1995).
Cagan, P., 1956. The monetary dynamics of hyperinflation. In: Friedman, M. (Ed.), Studies in the Quantity Theory of Money. University of Chicago Press, Chicago, pp. 25-117.
Kydland, F.E., Prescott, E.C., 1977. Rules rather than discretion: the inconsistency of optimal plans. Journal of Political Economy 85 (3), 473-492.
Lukes, D.L., 1971. Equilibrium feedback control in linear games with quadratic costs. SIAM Journal on Control 9 (2), 234-252.
Papavassilopoulos, G.P., Cruz, J.B., Jr., 1980. Sufficient conditions for Stackelberg and Nash strategies with memory. Journal of Optimization Theory and Applications 31 (2), 233-260.
Sargent, T., Wallace, N., 1973. The stability of models of money and growth with perfect foresight. Econometrica 41 (6), 1043-1048.