A note on partially observed Markov systems


Economics Letters 10 (1982) 207-209 North-Holland Publishing Company

A NOTE ON PARTIALLY OBSERVED MARKOV SYSTEMS *

Michael ROTHSCHILD

Mathematica, Inc., Princeton, NJ 08540, USA

Received 19 May 1982

The purpose of this note is to argue that systems in which actors do not have complete information are unlikely to be Markovian - even if, in some sense, the real process driving the system does have a simple Markovian structure.

One type of model with an underlying Markovian structure for which the endogenous process generated will not have this property is the demand for assets by traders possessing differential information. Their demands for assets, and thus equilibrium prices, are a function of their information and of their wealth. Individual traders' wealth is not observable, and thus it is hard to see how the equilibrium price can reveal the private information of traders - assuming there is more information than there are prices, so that the results of Radner (1979) do not apply.

Let E be a finite set of states, and suppose that there is a stationary Markov process on E such that

M_ij = Pr{ e(t+1) = i | e(t) = j }   for all t.   (1)

We assume

M_ij > 0   for all i, j ∈ E.   (2)
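As a concrete illustration of (1) and (2), the following sketch builds a strictly positive, column-stochastic transition matrix M on a product state space. The sizes |A| = 2 and |B| = 3, the random construction, and the use of NumPy are illustrative assumptions, not part of the note.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes for the observable and unobservable components.
nA, nB = 2, 3        # |A| observable states, |B| unobservable states
nE = nA * nB         # |E| = |A| * |B|, with states e = (a, b)

# A stationary transition matrix with M[i, j] = Pr{e(t+1) = i | e(t) = j}.
# Drawing entries bounded away from zero guarantees assumption (2).
M = rng.uniform(0.1, 1.0, size=(nE, nE))
M /= M.sum(axis=0, keepdims=True)   # each column is a probability distribution

assert np.all(M > 0)                      # assumption (2)
assert np.allclose(M.sum(axis=0), 1.0)    # column-stochastic, as in (1)
```

Here M is column-stochastic because, following (1), the second index j is the conditioning (current) state.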

* This paper first appeared as a handout for an IMSSS workshop at Stanford University, which Bob Wilson and I gave in July 1978. I am grateful to the National Science Foundation for research support. During the preparation of this draft of the paper, I held the Oskar Morgenstern Distinguished Fellowship at Mathematica.


To capture the notion that each state contains an observable and an unobservable component we suppose there are two finite sets A and B such that for any e ∈ E there is a ∈ A and b ∈ B such that e = (a, b), and that for any a ∈ A and b ∈ B, (a, b) ∈ E. This assumption, of course, implies that |E| = |A| · |B|, where |E| denotes the number of elements in the set E. We assume A represents observable events and B unobservable events. Of course, different actors viewing the same events may make different partitions into observable and unobservable events. While for me, my endowment is observable and yours is not, for you the opposite is true. In general, actors will be interested in the probability distribution of next period's state - or at least of its observable component. Their actions will depend on these probability estimates. If an actor observed this period's state completely, then he could estimate this distribution simply by consulting M. That is,

Pr{ a(t+1) = l | a(t) = j, b(t) = k } = Σ_{i ∈ B} Pr{ a(t+1) = l, b(t+1) = i | a(t) = j, b(t) = k }.   (3)

The right-hand side of (3) is simply a sum of elements of M. Call the quantity defined in (3) Q_lj(k). In general Q_lj(k) will be a function of k, the value of b(t). But b(t) was not observed and must be estimated. Let R(k) = Pr{b(t) = k}; the probability that a(t+1) = l is then just Σ_k Q_lj(k) R(k). Thus the problem of evaluating the probability distribution of future values of the observed component of the state entails estimating the probability distribution of the current value of the unobserved component of the state. We will see that to do this an actor will in general use the entire past history of observed variables. Thus the actor's probability assessments will in general not be Markovian, and if his actions are based on these assessments, they will also not be Markovian. Partial observability turns a Markov system into a non-Markovian system.

We now consider the problem of estimating R(k). Let (h_s) be a fixed sequence of elements of A representing past observed values of a(t); that is, a(t-s) = h_s, s = 0, 1, 2, .... Now define the |B| by |B| matrices P(h_0, ..., h_s) by

P_kn(h_0, ..., h_s) = Pr{ b(t) = k | a(t-u) = h_u, u = 0, 1, ..., s; b(t-s) = n }.
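The quantities Q_lj(k) of (3) and the predictive probability Σ_k Q_lj(k) R(k) can be sketched numerically as follows. The flat state encoding e = (a, b) → a·|B| + b, the sizes, and the uniform prior for R are hypothetical conventions chosen for the example, not from the note.

```python
import numpy as np

# Sketch of eq. (3): Q[l, j, k] = Pr{a(t+1) = l | a(t) = j, b(t) = k},
# obtained by summing M over the unobserved next-period component b(t+1).
nA, nB = 2, 3                  # illustrative sizes |A|, |B|
nE = nA * nB

rng = np.random.default_rng(1)
M = rng.uniform(0.1, 1.0, size=(nE, nE))
M /= M.sum(axis=0, keepdims=True)      # M[i, j] = Pr{e(t+1)=i | e(t)=j}

def idx(a, b):
    # hypothetical flat encoding of e = (a, b) as a single state index
    return a * nB + b

Q = np.zeros((nA, nA, nB))
for l in range(nA):
    for j in range(nA):
        for k in range(nB):
            # sum over the unobserved component i = b(t+1), as in (3)
            Q[l, j, k] = sum(M[idx(l, i), idx(j, k)] for i in range(nB))

# Given an estimate R[k] = Pr{b(t) = k}, the predictive probability
# Pr{a(t+1) = l | a(t) = j} is sum_k Q[l, j, k] * R[k].
R = np.full(nB, 1.0 / nB)              # e.g., a uniform prior over b(t)
pred = Q @ R                           # pred[l, j] = sum_k Q[l, j, k] R[k]

assert np.allclose(pred.sum(axis=0), 1.0)   # a distribution over l, for each j
```

As the text emphasizes, everything here except R comes straight from M; the entire difficulty lies in estimating R.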


Note that P_kn(h_0, h_1) is simply computed from M:

P_kn(h_0, h_1) = Pr{ a(t) = h_0, b(t) = k | a(t-1) = h_1, b(t-1) = n } / Σ_m Pr{ a(t) = h_0, b(t) = m | a(t-1) = h_1, b(t-1) = n }.   (4)

It is also easy to see that

P_kn(h_0, h_1, ..., h_s, h_{s+1}) = Σ_q P_kq(h_0, ..., h_s) P_qn(h_s, h_{s+1}),

or that

P(h_0, ..., h_{s+1}) = Π_{r=0}^{s} P(h_r, h_{r+1}),

where each matrix P(h_r, h_{r+1}) is as defined in (4). Now, since there are only a finite number of such matrices, and by (2) all their entries are positive, the weak ergodic theorem of stable population theory ¹ implies that the matrices P(h_0, ..., h_s) converge to a matrix with a single column in the sense that

lim_{s→∞} P_kn(h_0, ..., h_s) / P_kq(h_0, ..., h_s) = 1.   (5)

Obviously this common limit is R(k). R(k) is, as claimed, a function of the entire past history of observed values.
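A small numerical experiment illustrates (4) and (5): build the matrices P(h_r, h_{r+1}) from M, multiply them along a long observed history, and check that the columns of the product merge into a single common column, the estimate R(k). The state encoding, sizes, and history length are again illustrative assumptions.

```python
import numpy as np

nA, nB = 2, 3                  # illustrative sizes |A|, |B|
nE = nA * nB
rng = np.random.default_rng(2)
M = rng.uniform(0.1, 1.0, size=(nE, nE))
M /= M.sum(axis=0, keepdims=True)      # M[i, j] = Pr{e(t+1)=i | e(t)=j}

def idx(a, b):
    # hypothetical flat encoding of e = (a, b)
    return a * nB + b

def P(h0, h1):
    """P[k, n] = Pr{b(t)=k | a(t)=h0, a(t-1)=h1, b(t-1)=n}, as in eq. (4)."""
    num = np.array([[M[idx(h0, k), idx(h1, n)] for n in range(nB)]
                    for k in range(nB)])
    return num / num.sum(axis=0, keepdims=True)   # normalize each column over k

# A fixed observed history h_0, h_1, ... and the matrix product in (5).
history = [int(rng.integers(nA)) for _ in range(200)]
prod = np.eye(nB)
for r in range(len(history) - 1):
    prod = prod @ P(history[r], history[r + 1])

# Weak ergodicity: the columns of the product become identical, so the
# estimate of b(t) no longer depends on the unobserved initial b(t-s).
spread = prod.max(axis=1) - prod.min(axis=1)      # disagreement across columns
assert np.all(spread < 1e-6)
```

Because every entry of each P(h_r, h_{r+1}) is bounded away from zero (assumption (2)), each multiplication contracts the columns toward one another, which is exactly the convergence asserted in (5).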

References

Golubitsky, M., E. Keeler and M. Rothschild, 1975, Convergence of the age structure: Applications of the projective metric, Theoretical Population Biology 7, Feb., 84-93.

Radner, R., 1979, Rational expectations equilibrium: Generic existence and the information revealed by prices, Econometrica 47, May, 655-678.

¹ For a proof see, for example, Golubitsky, Keeler and Rothschild (1975), which also gives an estimate of the rate of convergence in (5).