D. N. Shanbhag and C. R. Rao, eds., Handbook of Statistics, Vol. 19 © 2001 Elsevier Science B.V. All rights reserved.
")'~ z ~ .¢.,
Martingales and Some Applications
M. M. Rao
1. I n t r o d u c t i o n
As in the case of many other parts of Probability Theory, martingales also have their origins in certain games of chance. A type of game which has a byline stating "double the bets after a loss and drop out after a win" is termed a martingale. Such games seem to have been quite familiar to the general public in France which is described by the following item. Depicting a somewhat non cordial relationship in the early 1960's, between Prince Ranier of Monaco and General De Gaul, the French President at the time, there was a cartoon in a leading French news paper (reprinted by the New York Times) with a subtitle, "the prince plays a martingale with the general". To motivate the subject and to explain the game of doubling strategy, (and to appreciate the above anecdote and the cartoon) one may connect it to the following streamlined mathematical version: Suppose Xn denotes the outcome of the nth game where Xn = 1 if it is a win, and = - 1 if it is a loss, with probabilities 0 < P[Xn = 1] = p = 1 - P [ X n = - 1 ] = 1 - q < 1. The game structure indicates that the X~ are also independent so that the nth toss (or play) does not depend on the previous ones. Suppose the player bets an dollars on the nth game, so that the total win Sn at that play is given by Sn = a~X1 + " . + anXn = Sn-1 + anXn
(1)
where X0 = 0. Instead of the an being constants, suppose they are also random variables which are determined basing only on the preceding trials. For instance, in the "doubling the bets" game, let al = 1 and for n > 1, if the first n - 1 are losses with the nth a win, then
an =
2~-1, 0,
ifX1 . . . . . Xn 1 = - 1 otherwise ,
SO that the player's total loss at the nth game is ~ = 1 2k 1 = 2n _ 1. Now if the (n+l)th play is a win, then the total gain is Sn+l = S ~ + a n + l - 1 = - ( 2 n - 1) + 2n = 1 since X,+I = 1. 765
M. M. Rao
766
Suppose now that r denotes the first time the player wins, i.e., ~ = inf{n _> 1 :S~ = 1}. Thus
Pit = n] = P[X1 = - 1 , . . . ,Xn_ 1 = -1,X~ = l] = p ( 1 -p)~-~ ,
(2)
and OO
O(3
n=l
n=l
so that the win results in a finite (but not necessarily bounded) time with probability 1. Note that the expected value, E(Xn) = p - q, n _> 1, and the game is called fair i f p = q(= ±) and in this case E(Sn) = E(X,) = 0, n > 0; it is favorable 2 (to the player) i f p > ½and unfavorable (to the player favorable to the gambling house!) i f p < ½. Also in a fair game not only P[S~ = 1] = 1 and hence E(S~) = 1, but the player starting with the initial capital X0 = 0 can increase the expected wealth to 1 although E(Sn) = 0 = E(So) = E(Xo). This system of doubling bets (fairly) is the popular martingale. Since the time to win, namely r, is finite but not bounded, one needs a large (possibly unbounded) capital as well as time. Thus it is physically unrealizable. Let us note parenthetically that the S, of (1) is not a sum of independent Bernoulli random variables and hence the classical Wald identity E(S~)= E(r)E(X1), established after Theorem 3.6 below, does not hold. Also although {S~,n >_ 1} is a martingale (to be defined precisely below) the "optionally stopped" new 'sequence' {X~,X0} need not be a martingale, a situation that is covered in the ensuing discussion. This illustration raises a host of problems of both practical as well as theoretical interest and importance. To begin with, one should define not only a martingale or a fair game, but also present its close relatives the favorable and unfavorable ones. Further the processes should not be restricted to sequences or discrete times. In fact, the present day studies in Probability can be broadly (but not exclusively) divided into four categories: (i) processes (or sequences) consisting of independent random variables, (ii) stationary (harmonizable and related second order) processes, (iii)Markovian types, and (iv)martingales. Evidently, a given process can belong to more than one category, and the methods of one part are often used in others for solving such problems. The purpose of the following account is to present a general view of martingales and some of its extensions as well as applications. Thus the next section is devoted to a precise description of martingale concepts, the fundamental inequalities and some of their extensions to be used in applications later. Then Section 3 contains a discussion of the basic convergence theorems. In both these accounts, the discrete as well as continuous parameter versions are considered, including the optional stopping (or skipping) for the process as indicated in the above illustration. Most of Section 4 is devoted to continuous parameter (semi)martingales which will be used in various applica-
Martingales and some applications
767
tions in the rest of the article. These consist of an extended (semi)martingale calculus, including (semi)martingale integrals. Section 5 discusses some applications to likelihood ratios, used in stochastic inference, and Section 6 contains an account of exponential (semi)martingales which arise typically as solutions of linear stochastic equations. The next set of applications consists of an extended discussion of financial mathematical models where (exponential) martingale methods play an effective role. These applications also show the importance of Girsanov's theorem in this analysis. Finally in the last section some remarks on multivariate and multiparameter versions of some of these results are indicated. To appreciate the subject for several of the assertions, (an outline o1" some times full) proofs are given with explanatory remarks.
2. Martingale concepts and inequalities Let {X,, n >_ 1} be a sequence of independent mean zero random variables (relative to the underlying probability space (O, Z,P)), and set S~ = ~i=1 X~i. Then one has, using the properties of (conditional expectations and the standard notation, E(S~+I IX1,... ,X~) = E(S~ + X,+I IX1, .. ,X~) = E(SN + Xn+l IS1,..., Sn) = s. + E(Xn+1 IXl, .. =S,+0 ,
= s, + E(X,+I)
(3)
with probability one, since X,,+I is unaffected by the conditions on the previous variables. This serves as a good motivation, and is generalized for a not necessarily independent system taking (3) as a definition for the sequence {S,, n > 1}. Before stating the general concept, let us also motivate with a gambling situation. Thus suppose that Z1, Z2,... are the fortunes of a gambler at plays 1,2,... where only the knowledge of the present and the past games is available (and no clairvoyance). Assuming that the Z, have finite means, the game is said to be fair if E(Zn+
Z,) = Z ,
(4)
with probability one, and it is said to be favorable (to the player) if '=' is replaced by '>_' in (4) [it is unfavorable (to the player but favorable to the house) if '=' is replaced by '_<']. A complete knowledge of X1,...,Xn is the statement that one has control (or knowledge) on all the values which these random variables can take. This means mathematically that the a-algebra g n = cr(X1,... ,Xn), generated by these variables is known. With such a notion, the concept can be stated quite generally as follows. Let {Xt, t E I} be an indexed set of random variables on the basic probability space (O, N,P), modeling the experiment, where the index set I is (perhaps partially) ordered, i.e., for some pairs a, b c I, a < b is defined and '_<' is a partial order on I, and it is directed if also for each finite subset of I there is an
M. M. Rao
768
element (in I) dominating this subset in the above order. If I c R, the reals, then I has the usual order which is linear. Let ~ t = a(Xs,s <_ t). The family {Xt, ~ t , t E I} is called adapted, meaning each Xt is ~t-measurable and t _< t r implies Y t c @e. With these notions the desired concept about martingale processes can be presented as follows: 1. DEFINITION. Let { X t , ~ t , t E I} be an adapted integrable family of (real) random variables on the basic triple (Q, S,P). Then it is called a (sub or super) martingale if for each tl, t2 ~ 1, tl _< t2 one has: E(Yg 2 I~tl ) = ( ~ o r ~ ) ~ 1
,
(5)
with probability 1 (or w.p. 1). It should be noted that for (sub or super) martingales, it is necessary to have an ordering notion to describe the past, present and future. Moreover, for sub and super martingales the range should have an order relation as well. Thus for multivariate Xt one needs to have an order in the range (or value or state) space since otherwise only the martingales (but not the sub and super concepts) can be defined. Consequently, for most of the following, only the real valued processes will be considered so that sub and super martingales can be included. Also it should be noted that in general E(X,[Y,) = E(Xn]X1,... ,Xn_I) is a function of the conditioned variables so that it equals g(Xi,...,Xn-1) for a Borel function g, and the (sub)martingale hypothesis restricts this dependence to the given immediate past. This is the crucial part of the hypothesis of the abstract notion. Hereafter ESn(.) and E(.IY~) are synonymous. After a brief discussion of the discrete parameter processes, most of what follows is directed towards the continuous case, since one can often "embed" the discrete in the continuous parameter theory by the following device. If {X,,@n,n > 0} is an adapted sequence, let ~-t = ~ and Xt =X~ for N n < t < n + 1, n _> 0. Then Y t - = a(Xs, s < t) = @n-1 for all n, {Xt, ~-~t, t > 0} is an adapted process, with moreover, ~ t + = 0s>t ~&s = ~&t and ~ t - = a(Us
0} of o--algebras contained in S, is usually called a standard filtration so that ~ t C Yt' for t < t r, J t = ~t+. For technical convenience, one augments the family to have all P-null sets for each t. This general concept will be of importance in applications later. Let us start with an elementary but basic characterization of a sub martingale in the discrete case, due to Doob (1953). It has far reaching implications in the subject and also shows why the continuous parameter case presents a really new challenge. (All the following statements should be taken to be true a.e. (or with probability 1) unless the contrary is stated.) 2. PROPOSITION. Let { X n , ~ n , n >_ 1} be an adapted integrable sequence on a probability space (f2, £, P). Then it can be uniquely decomposed as."
X~=X~+A~,
n_>l ,
(6)
Martingales and some applications
769
where { X ~ , ~ , , n >_ 1} is a martingale and { A n , ~ n - l , n >_ l} is an adapted integrable sequence with AI = O. Moreover, the given process is a sub martingale iff An <_An+l,n >_ 1, Proof. The following argument is simple and constructive. Thus for the representation (4), define A1 = 0 and recursively A . = E ~" ' ( & ) - X . - 1 + A , _ , ,
n > 1 .
(7)
Then writing, Yn = X~ - A,, one gets since clearly An is ~-n_l-adapted, E g" (Yn+l) =
E g"
(Xn+,)
-
A,+I
= E~° (Xn+l - A.+I)
=32,-A,
= Y,(=X~),
by (7).
Hence { Yn,,Y , , n >_ 1} is a martingale so that (6) holds. If Xn = X,'~+ An = X~' + A1n are two such representations, then X~ - X~' = A', - A n . The right side is ~ , _ l - a d a p t e d , and the left side is a martingale. Hence {A', - An, g , , n > 1} must be a martingale so that by the defining relation (4) one has on iteration that E ~.-~ (A,', - A,) = A] - A~ = 0 a.e., and since each An,A1, is J , _ l - a d a p t e d it follows that A, = A;,, n > 1. But then X" - X " a.e., so that the representation is unique. In case {Xn, ~ , , n >_ 1} is a sub martingale, then E go (Xn+l) >_ 32, a.e. and hence by definition A1 >_ 0, A: > A1,..., implying 0 < An f t . Conversely, if 0 _< An J , then by the representation (6) one has _
E~'(X,+I) ---X ~,+An+l >_X ~.+An = X , . Consequently the sequence is a sub martingale.
[]
It is to be noted that in this construction A, is ~-,_l-adapted so that at the nth stage it is (based on the past) essentially known, or predictable. Also substituting from (7) for n = 1, 2 , . . . , one finds n
A n = ~E~n-l(Xk
--Xk_l) ,
n > 1 ,
(8)
k=l
which is a sum of certain random variables. If the given process is of continuous parameter, these two facts do not easily generalize, and the corresponding problem remained open for nearly a decade when it was finally resolved by Meyer (1962-63). Subsequently alternative methods were found to simplify the work, and the result, termed the Doob-Meyer decomposition, will be given later. First some inequalities of importance for applications are established. Recall that the conditional expectation E 2 (X) of an integrable random variable X for each a-algebra N C Z of (f2, X,P), satisfies the identity E ( X ) = E(E~ (X) ) which follows from the defining equation fA Y d P = fA E~ (X) dP,A E by taking A = f2. Also if q0 : ~ -+ R is a convex function and E(~p(X)) exists, then for any a-algebra N c X, one has the conditional Jensen inequality:
770
M. M. Rao
(9)
E~(~o(X)) > ~o(E~(X)) ,
with a strict inequality (on a set of positive probability) if q~ is strictly convex and X is not N-measurable. This result can be verified from a classical characterization of a (continuous) convex function as the upper envelop of a countable collection of affine lines f~ : x ~-+ a11x + b,, x E ~. Thus (p(x) > f~(x), n _> l. Consequently (I0)
(p(X) > fn(X) = an)( + b~ .
Applying E ~, which preserves inequalities, to both sides and taking supremum on the right of (10), one gets E~(qo(X)) > sup{anEW(X) + b11} 11
= s u p f n ( E ~ ( X ) ) = (p(E~(X)), a.e. tl
which is (9). The strict inequality statement follows from the fact that in (10) the right side line touches ~0 at most at a single point if it is strictly convex (but i f X is N-measurable there is always equality in (9)). An interesting consequence of conditional Jensen's inequality is given by the following result which links martingales and sub martingales by convex transforms. 3. PROPOSITION. Let {X~,Y11,n >_ 1} be a (sub)martingale and ~o : ~ --+ ~+ be a convex (increasing) function. Then {q0(X,), ~11, n > 1} is always a sub martingale whenever E((o(X11) ) < oc, n >_ 1. In particular, { X +, Y n , n > 1 } is a sub martingale. Proof. Noting that {(p(X11), @11,n > 1} is an adapted integrable sequence, one has from the fact that Egn(X11+l) = (_>)Am a.e., E~"(qo(X,+l)) > (o(E~"(X,+I)), a.e. ,
by the conditional Jensen inequality (9), (and q0(-) is increasing). Since ~0(x) = max(x, 0) = x + is increasing and convex, the last statement follows at once from the preceding one. [] An interesting application of (9), and this proposition, is the following curious result. As in (3) one writes E(X] Y) for E ~(y)(X) for convenience. 4. PROPOSITION. I f X , Y are integrable random variables such that E ( X IY) = Y and E ( Y I X ) = X a.e., then X = Y a.e. Proof. If X, Y have two moments then the result is very easy. Indeed using the basic identity (recalled prior to (9)):
E ( x - y)2 = EIE(Ix- Y]2Px)] -- E [ ~ ( x 2 1 x ) ] - 2 ~ [ E ( x z ~ l x ) ] + ~ I E ( y 2 J x ) ] = e[x 2 _ 2x2 + e(r21x)]
= e ( r ~) - e ( x 2) .
771
Martingales and some applications
Interchanging X and Y and remembering the symmetry in the hypothesis one gets E(X - y)2 = 0 so that X = Y a.e. However, if X, Y have only one moment, then the above argument fails. An alternative proof, involving the conditional Jensen inequality, is as follows. First note that if (p(x) = ]xI, then q~l(x) -- f~xl(fl _ e_t)dt, fi _> 2 is strictly convex and satisfies q~(x) < qh(x)._< riO(x) for all real x so that E(q~l(Z)) < fiE(IZ]) < co, where Z = X, or = Y. Hence the inequality (9) implies
E(qh(X)[Y) > ~ol(E(XIY)) = ~ol(Y), a.e. Interchanging X , Y here one finds E(Ol(Y)IX)>(pl(X) a.e. But taking expectations on both sides, one gets E(cpl(X)) > E(~ol(Y)) > E(CPl(X)). Since (Pl is strictly convex, these two inequalities cannot hold, unless X = Y a.e. as asserted. [] REMARK. A direct (but somewhat slick) proof of this proposition is also in Doob (1953), p. 314. A useful consequence of the result is that a sequence and its reverse are martingales iff the sequence is just a single random variable repeating itself explaining an inherent restriction involved in the martingale concept. Next some maximal inequalities of considerable importance will be presented. These are mostly from Doob (1953).
(a) Let {Xk, Wk, 1 < k < n} be a sub martingale. Then for each
5. THEOREM.
2 E •, one has 2P[ maxXk > 2] 1
<-fireaxl<_k2] XndP <_E(IXnI) ,
(11)
and (12) 1
inl2]
(b) I f in the above each Xk E LP(P), 1 <_p < oc, then E ( m a x Ixk[P < \l
~@1{1+E(IXn[log +IX~L)},
p > 1, q - p-l, p=l .
(13)
Proof. (a) The result (11) when the Xk are squares of a partial sum sequence of independent centered random variables with finite variances is the celebrated Kolmogorov inequality, and the classical proof extends to the present case. The basic idea is to express the 'max'('min') as a disjoint union of sets on each of which the integral can be estimated. The details are as follows. LetAt = [X~ > 21, and for k > 1,Ak = IX,- < 2, 1 < i < k - 1,Xk > 2] so thatAk is the set on w h i c h & exceeds 2 for the first time. ThusA = [-J~=l Ak = [maxk
M. M. Rao
772
P=
X~dP
_>
Xk dP, by the sub martingale condition,
k=l
k
This is (11). The proof of (12) is similar. (b) If X * = maxk<~ Ix l, then since {IXklP,Yk,k 2 l} is a positive sub martingale one has by (11) for p > 1,2 > 0 >
<
IX~Ip dP
(14)
m
Now consider the distribution F of X*, i.e., F(B) subset of R, and use the image law to get:
= P o (X*) -I (B) for B a Borel
~(X*)PdP = ~+xP dF(x) P
£+ (1 - F(x))x p-1 dx, integrating by parts, +xp-2dx
*>x]
=p£X.dPfoX*Xp-2dx -- P f X,,(X*)P-~ dP, ifp > 1 p - 1 ./~ <- qll&llpH(x*)p-lllq, by HSlder's inequality, ,
P-
= qPlX~rlp(llx lip)q, q > 1 . Excluding the (true and) trivial case that I f p = 1, (15) may be expressed as
I/x'lip
(16)
= 0 (16) implies (13) i f p > 1.
fX,>ljX* dP <- ~+ ~dx ~,>x_>l] X~ dP = L X ~ log + X* dP . Since a l o g b _ < a l o g + a + l o g ~ < a l o g + a + ~ attains the maximum for a = ~[, (17) becomes X*dP- 1 <
(17) for a_>0, b > 0
[since log
*>lj
<- ~ Xnl°g+X~dP+leJef X*dP "
(18)
Martingales and some applications
Hence (18) gives the second part of (13), as desired.
773
[]
The above inequalities are used in mathematical analysis and in many applications. This will now be indicated by an interesting result. Let X = {X,, ~ , n > 1 } be a square integrable martingale, so that E(X 2) < oc for all n. Then Y~ = X~ -Xn-1, (X0 = 0), n _> 1, is termed a martingale difference (and increment in the continuous parameter case) sequence. Observe that { Yn, n _> 1} is an orthogonal set since E(Yn~,+I) = E(E'~"(YnYn+I))
= ~(Y.E °(x.+l - x . ) ) = E(Y,(X, - X , ) ) = 0, by the martingale property . n Y;2+)-, and let s(X) = lim, s,(X) which exists since s,(X) Now let s,(X) = (~k=l is monotone. The s,(X) corresponds to the classical "Luzin square function" of trigonometric series. We shall see in the next section that for a martingale satisfying sup, E(]X~l)< oc,X,---+ Xo~ a.e. holds. Assuming this result for the moment, and also the fact that s(X) < ~ a.e., one may define the following spaces of martingales for each given filtration {~-~,n > 1} from (f2, S,P). Namely, ~ 1 = {X: E(s(X)) < oo} and BMO = {X: sup, IIE~"(IX~ -x,_112)ll~ < ec). The set ~ 1 is the analog of the classical Hardy space with norm IlXlll = E(s(X)), and BMO is that of bounded mean oscillation space of considerable importance in mathematical analysis. The norm in the latter space is given by
112118 = sup IIEEgn(X~ - x . _ l )
2 1
]211~ .
It is seen that (j,f l , I1"II1) is a normed linear space, and one can verify that it is complete. On the other hand (BMO, I1' lie) is also a normed linear space (and complete which is much harder to establish), but there is a deep relation (duality) between these two spaces, which was an open problem for some years. Then Fefferman (1971) showed that BMO is the dual of -24~1. His work was for functions but admits an adaptation for martingales. This was discussed in detail in Garsia (1973), and a streamlined treatment in the context of the general theory of processes is given in Rao (1995, Section 4.5). A number of other interesting inequalities with extensions was obtained in Burkholder (1973). Since the convergence aspect of martingale theory is needed in this discussion, let us turn to it now.
3. Convergence theorems First we consider the special case of positive martingales, and later extend it to the general case by a (Jordan type) decomposition for the whole class of (sub, super, quasi or semi-)martingales to be discussed shortly. Thus the starting key result is: 1. THEOREM. Let {Xn, Yn, n >_ 1} be a nonnegative martingale. Then it converges a.e. In symbols, Xn --+ X~ a.e., and also E(X~) <_lira infn E(X,).
774
M. M. Rao
Proof Since t ~+ (p(t) = e -t is a convex function ((p'(t) > 0), by Proposition 2.3, Y~ = p(X~) = e-X",n > 1, is a bounded positive sub martingale for the same filtration { ~ , n > 1}. Since ~0(.) is one-to-one, X, ~Xo~ iff Y~ ---+ Y~ a.e., and the result follows if it is shown that each positive bounded sub martingale converges a.e. We show more generally that a positive Lz(P)-bounded sub martingale { Z n , Y , , n > 1}, (i.e., E(Z 2) < K < oc,n >_ 1) converges a.e. and in Lz(P)-mean. The mean convergence is easy and it is first proved and is then used in the pointwise conclusion. Indeed, it follows from definition of a sub martingale that the expectations are monotone nondecreasing. The same is true of the sub martingale {Z2, ~ , n > 1}. Thus E(Z 2) <_E(Z2+I) _< Ko < oo so that the sequence {E(Z~),n >_ 1} converges. Then 0 <_E(Z~) - E(Z~) = E(Z,, - Zm) 2 + 2E(Zm(Z, - Zm)),
m
(19)
and E(Zm(Zn - Zm)) = E(ZmEg~(Zn - Zm)) >>0 by the sub martingale property. Since the left side of (19) goes to zero (by the convergence of the above bounded monotone sequence) and each of the right side terms is nonnegative, so each must tend to zero as n --+ oc. In particular E(Zn - Z m ) 2 --* O, so that {Z,,n > 1} is Cauchy and hence converges to a limit Zo~ E L2(p), by completeness of the latter space. It will now be shown that Zn -+ Zo~ pointwise a.e., and this is the hard part for which one needs Theorem 2.5(a). Here are the details. For an arbitrarily fixed integer m _> 1, the sequence {Zk - Zm, Y ~ , m < k < n} is evidently a sub martingale. So for each e > 0, one has by Theorem 2.5(a): P I max (Zk - Zm) >_ ~] <_ 1E(IZn - Zml)
(20)
1_m < k < n
and P I min ( Z k - Zm) _> L m
-el < 1 [E(IZn - Z m ] ) -
(21)
E(Zm+a- Zm)] •
Adding (20) and (21), letting n -~ oc, and using the L2(P)-convergence established above, it follows that
<_ 1 [2E(IZm _ Zo~ I) + E(tZm+~ - Zm J)l ---' 0
as m ---+ ~
,
(22)
since L2(P)-convergence implies L ~(P)-convergence. This means lim P [ s u p I Z k - Z n l > e] = 0 . n--+oc
l_k>n
(23)
775
Martingales and some applications
If Z* = lira supn Z,, Z, = lim inf, Z,, then (23) implies P[Z* - Z, >_ 2el <_ 21im P[sup ]Zk -- Zm[ >_ ~] = 0 . n
Thus Z* = Z, so that Z, ~ Zoo(--- Z* = Z,) a.e. and hence X~ -+ Xoo a.e. This is the desired assertion. The last part is now a consequence of Fatou's lemma. [] The above argument also contains the following additional information, regarding sub (super) martingales. 2. COROLLARY. If {Xn, if'n, n ~_ 1} is a (not necessarily positive) sub martingale (or { - X ~ , Y , , n >_ 1} a super martingale) such that E(X~) <_Ko < oo, then X~ ---+Xoo a.e. and also in L2(P)-mean as n -+ oc. The assertion of Theorem 1 is immediately extended for a martingale {X~, g ~ , n > 1} satisfying the condition supn EIX, I) < oo. We deduce this from a well-known consequence (due to Nikod)m) of the classical Vitali (-Hahn-Saks) theorem of Real Analysis as follows. For the martingale {Xn, ~ n , n >_ 1} as above, let #~ :A H fAX~dP, A E ~ , so that I#~[(A)=-fA [X~IdP, the variation, is a a-additive function on ~-,. But {]Am[,~ , , n _> 1} is a sub martingale (cf., Proposition 2.3), so that I#~I(A) = f
[X~IdP -< fA [Xn+l]dP = [#~+I[(A),
A E @~
and [#,l(t2) = E(IX~I) <_ supnEC[Xn]) < oc by hypothesis and lim,_~oo I#,l CA) = vk(A) exists for all A E J~k. What is more v~(.) is a-additive by the Vitali-Niko@m consequence noted above [cf., e.g., Rao (1987), p. 181]. Further it is evident that vk = Vk+llY~ and vx << P. If Yk = dvk/dP, then the last equation implies that {Yk, ~ k , k >_ 1} is a positive martingale and hence converges a.e. by Theorem 1. Moreover, fA YkdP = vk(A) >
I#kl(/) >-~(A)= f~ XNdP
,
for all A E ~ k , and the extreme integrands are ~,~k-measurable. This implies that Zk = Yk --Xk >_ 0 and that {Zk, ~-k, k _> 1} is a positive martingale being the difference of two martingales on the same filtration. Hence X, = Yn - Zn, n > 1 is a difference of two positive martingales and by Theorem 1 both Y~ -~ Yoo,Z~ --+ Zoo a.e. so that X, ---+ Y~ - Zoo = Xoo (say) a.e. as n -~ oc. Thus we have the general assertion (the last being a consequence of Fatou's lemma): 3. COROLLARY (Doob's martingale convergence theorem). I f { X n , ~ n , n >_ 1} is a martingale such that supnE(IXn])
M. M. Rao
776
4. COROLLARY. An Ll(P)-bounded sub (or super) martingale {X~,Wn,n > 1) (i.e., sup~E([X,I) < c~) converges a.e. to X~ and E(IXoo[) _< liminf~E(]X,[).
Proof. We consider the sub martingale case. By Proposition 2.2 the X,-process can be expressed as X,=X'~+A,,
n> l ,
where {X~,Y,,n > 1} is a martingale and A1 = O,{An,~n-l,n >_2} is a nonnegative increasing process. Also 0 <_E(An) = E(X~) + E(X~) <_ 1}, if the event IT = n]E ~ n , n _> 1. More generally if {~-t, t _> 0} is an increasing family of o--sub algebras of X such that @t = ~s>t ~ s , t > 0, so that it is a standard filtration, as defined in Section 2, each completed for the P-null sets for convenience, then T : (2 --+ ;~+ is a stopping time of such a filtration if {co : T(co) _< t} E Yt, t > 0. Thus T is a stopping time iff its values are determined by the past and present (i.e., < t) but not on the (unforeseen) future. Thinking a (sub) martingale {Xt, Yt, t > 0} as a fair (favorable) game, suppose the player skipped the game at certain random times T1 _< T2 < .... Then it is legitimate to ask whether the processes {Xr,, n _> 1} is again a (sub) martingale, i.e., whether the values Xr,, observed at times T,, constitute a new (sub) martingale, and if so do the convergence results hold. Conditions for affirmative solutions to these questions will be outlined now as they have considerable theoretical interest in the subject and are also of importance in applications. A simple example of stopping times is given by: TA inf{t _> 0 :Xt E A} with inf{0} = ~ . Thus TA is thefirst time that the process enters the set A. This was used in the proof of the above recalled theorem with A = (2, ec) c N. As is clear from this discussion, there will be new technical problems in employing this powerful stopping time tool. In fact, for each stopping time T~ of the family {@t,t > 0} the symbol Xr,, is the composition of the functions X~-{Xt, Yt, t> 0} and T, (=Xr,,(co)=XT,,(o)(CO) or simply X(T,(co),co)) and hence is a random variable for each n. However it is adapted to a new family (not necessarily Yt, t _> 0). The new class is defined as: f f ( T , ) = {An [T~ _< t] c Yt, V t _> 0}, and is called the family of events "prior" to T~. One verifies that it is =
Martingales' and some applications
777
a o--algebra and for each n, T~ <_ T,+I ~ ~-(Tn)C ~(T~+1) and T~ is ~ ( T , ) adapted. The following basic assertions on a calculus of such real stopping times will be needed in applications below. There are fewer mathematical technicalities if each stopping time T,, takes values in { 1 , 2 , . . . , oc} rather than in E+, but both cases occur in the problems usually studied. 5. PROPOSITION. Let { ~ t , t >_ 0} be a standard filtration from a probability space ((2, Z,P) and {Tk,k >_ 1} be a collection of stopping times of the filtration. Then inf(Tkl, Tk2), sup(Tkl, Tk2),Tkl + Tk2, liminfk Tk, lim supk Tk, (but not necessarily Tk~--Tk2 or c~Tk for O< c~ < 1) are stopping times. I f Tk >_Tk+l then and T = limk Tk is a stopping time satisfying J ( T k ) > g(Tk+l) g ( T ) = N7_1 In many practical examples, it is usually easy to verify that the composition of random functions with relevant stopping times are again random variables, especially with discrete values, although in some cases (e.g., for continuous parameter processes) this can be an annoying technical problem. [For a discussion of these questions, see Dellacherie (1972), and Dellacherie-Meyer (1980), or for a quick account of the results that are used below, one may also refer to Rao (1995), Chapter IV.] Recall that a process (or random function) t ~ Xt(co) is left [right] continuous at to, if limsTt0Xs(CO)[limpet0X~(co)] exists for almost all 09 E f2. It is verified quickly, using Corollary 3, that a (sub) martingale {Xt, ~-t, t >_ 0} relative to a standard filtration has left and right limits at each t > 0, for almost all co, and these limits can be different for at most a countable set of time points. The discontinuity set of points {tn, n _> 1} can be either jumps or they may be "fixed" or "moving". [A point to is a fixed discontinuity of the process if PIe) : lima+t0 X,(w) = Xt0(co)] < 1 and a discontinuity which is not fixed is called a moving one, the latter are thus not (jump or) of first kind, but are called 'second kind' in real analysis.] We now have the following form of the (sub) martingale under optional sampling (or skipping) times. 6. THEOREM (Doob). Let { X t , ~ , t >_ 0} be a (sub) martingale relative to a standard filtration. Let Tn < Tn+l,n = 1 , 2 , . . . , be a sequence of stopping times of the filtration. Then the process {XTo, @(T~), n > 1} is again a (sub) martingale whenever there is an integrable random variable Y >_ 0 such that IXt] < Y a.e., t > O. More generally, the conclusion holds if {Xt, t >_ 0} is uniformly integrable, i.e., l i m k ~ E ( Z i i x , l>k]lXtl)=O uniformly in t. This is always satisfied if suptE(IXtl p) < K < oc for some 1 < p < oc (by the Hflder inequality). We shall use this result in our applications below. It can be proved using the preceding properties and Corollaries 3 and 4, but involving many more details. The complete proofs and improvements can be found in the references given preceding the statement of the theorem. Before proceeding further, it will be instructive to present an application of this result to establish the fundamental identity of Sequential Analysis due to Wald. There are a number of proofs, but we include one based on martingale theory.
778
M. M. Rao
The problem is as follows: Let X 1 , X 2 , . . . , be a sequence of independent random n variables with a common distribution having mean c~. If S. = }-~k=l Xk, one observes the sequence as long as a _< S, _< b and stops when either Sn < a or S , > b for a given pair of reals a < b , so that T = i n f { n _ > l : S n < a , or S~ > b} = inf{n _> 1 :S~ E N - [a,b]}. Then T is a stopping time of the filtration {Yn,n > 1},~-n = o'(X~,, 1 < k < n). Suppose that E(T) < oc. Then the Wald identity states that E(ST) = teE(T). Indeed, consider Y k = X k - - e and S', = ~ = 1 Yk. Then E(IS~I) < E(IS~[) + nl~ I < 00, and since the Yk are independent with means zero, {S~, ~',, n _> 1} is a martingale. If 7"1 = 1, T2 = T _> 1, then T1,T2 are stopping times of the filtration {~-,,n_> 1} and ff(T~) = ~-1 c if2 = i f ( T ) . Since a finite set of integrable random variables is trivially uniformly integrable and E ( T ) < ec by hypothesis, it follows that E(lSril) < oc and (by Theorem 6) that {S'r~, ~(T/)}~ is a two element martingale. But a martingale has constant expectations. Hence E(S~2 ) = E(S~I ) = E ( S 1 - c~) = E(X1) - c¢ = 0. However S~,2 = (St - Tc¢) so that 0 = E(S~r) = E(Sr) - 7E(T), as desired. There is also a beautiful proof of this identity due to Blackwell (1946) based on Kolmogorov's strong law of large numbers. His proof admits an extension to certain uncorrelated (but not necessarily independent) random variables as follows. 7. COROLLARY. Let X 1 , X 2 , . . . , be a sequence oJ" uncorrelated random variables with a common mean and uniformly bounded variances. Suppose that the sequence of observations is stopped at a (random) time T based on the past and present such that either ST < a or ST > b where S, = ~ = 1 Xn as before. I f E(T 2) < cx), then E(Sr) = E(X1)E(T) holds. The proof is an extension of Blackwell's method, but now it uses Rajchman's (instead of Kolmogorov's) form of the strong law of large numbers. (For details see Rao (2000), III.6.6, p. 129.) It should be observed that {Sin,~,~n, n > 1} need no longer be a martingale in this case; and this is partly compensated by the stronger assumption of the existence of second moments of the Xn as well as the stopping time and of their uniform boundedness.
4. Elements of (semi)martingale calculus Although most of the limit theorems appearing in applications use only the discrete parameter martingale convergence theory and the fundamental inequalities of the preceding sections, for some important parts, such as the finance mathematics to be detailed in a later section, the work depends crucially on the corresponding continuous parameter processes and their extensions. To facilitate this account, some (differential and integral) calculus results in a sufficiently general form, to include both the sub and super martingale cases, will be discussed in this section and they play a key role in the rest of the following analysis.
Martingales and some applications
779
The starting point is a search for a continuous parameter analog of the last half of Proposition 2.2 on the Doob decomposition of a sub martingale. The difficulty starts already in finding a suitable substitute for the integrable increasing process {Am,~ - l , n _> 1} when the discrete index n is replaced by a continuous one, denoted by t. The appropriate concept and the resulting decomposition were discovered by Meyer (1962) almost a decade after the problem was raised by Doob. It took another decade to find simpler proofs of the result, one of which by extending Doob's original method for the discrete case using a weak compactness argument of abstract analysis, thereby (justifiably) calling the end result the Doob-Meyer decomposition theorem. The statement will now be presented after introducing the relevant new concepts. It is then possible to give a unified and general versions of these propositions to use in integration and subsequent applications. The fact that Am is ~n_l-adapted may be understood as being "predictable" from the knowledge of the past, i.e., that denoted by Yn-1. In the continuous parameter case this has to be made precise and it is as follows. 1. DEFINITION. Let {@t, t _> 0} be a standard filtration from the basic probability space (O, Z,P), i.e., ,~t = J r + = A~>t g~, t _> 0 and (for convenience) all P-null sets be included in each ~-t. Then the a-algebra ~ generated by the sets {(s, t] x F : s <_ t,F E J , } tA {{0} × F , F E ~0} is called a predictable a-algebra. An adapted process X = {Xt, @t, t >_ 0} is termed predictable if, regarded as a function X : ~+ x ~2 ---+ R, it is measurable relative to ~ c N(R +) ® Z (and if it is just measurable relative to ~ ( R +) ® £, it is simply termed a measurable process). Although this definition looks somewhat involved on the surface, it can be verified that ~ is the same as the a-algebra generated by all (or even only left) continuous adapted processes {Art,Yt, t _> 0}. This a-algebra ~ plays a key role in both the theory and applications. For instance, it can be shown that if Y denotes the set of all stopping times of the filtration { ~ t , t > 0}, then ~ is also generated by all the "stochastic intervals" ~0, T1 which are sets of the form t0, T] = {(t, co) : 0 < t < T(co)} for all T C J-. This connection between stopping times and predictable a-algebras is useful. Many results on stopping times and the corresponding a-algebras, as well as the related classifications are discussed in the literature in detail. See, e.g., Dellacherie (1972), Dellacherie and Meyer (1980), M6tivier (1982), Rao (1995), and others where proofs of the above statements together with a rather detailed treatment regarding their calculus as well as the classification may be found. The next concept is an essential ingredient of the desired decomposition. 2. DEVlN~TION. Let {,~t,t >_ 0} be a standard filtration from (f2, Z,P) and {At, ~ t , t > 0} be an integrable right continuous increasing process with A0 = 0, a.e., and suPtE(At ) < oo. Then it is called a predictable increasing process, if for each right continuous positive bounded martingale { X t , ~ t , t > 0} (so limt_,~ Xt = Xo~ exists a.e., and limnXt__l = Xt± exists a.e.)
M. M. Rao
780
E ( f ~ + X , - dAc) = E(XooA~) ,
(24)
where the integral relative to At is a pointwise Stieltjes integral. Since some of the limit operations here and later involve continuous parameters (and hence are more than countable) there will be technical problems of a measure theoretical nature. However, the hypothesis of right (or left) continuity of the process allows one to invoke "separability" of the families so that effectively countable operations accomplish the desired task. This point will not be pointed out at every turn when such appears. The legality will be implicitly used. Here Eq. (24) incorporates the condition that At is o~t measurable and this is a technical requirement which is the continuous parameter version of that given in Eq. (8) in Section 2. This is shown in a routine fashion that {At, ~-t, t _> 0} is measurable relative to the predictable o--algebra ~ determined by the standard filtration { ~ t , t _> 0}, given above. Roughly stated, this is equivalent to the requirement that for any bounded random variable X, if ~ = a(Ut> 0 3t),Xo~ = E ~ (X) and ~ t = a(U0<,
=E
(/0
= e(e
)
X dE s,- (At) , (this is legitimate),
= e(X
A
) .
Also a process {Xt, @t, t E •+} is said to be of class (D) if for each collection of stopping times {Tj,j ~ J} of the filtration {~-t, t E N+}, the set {X o Tj.,j E J} is uniformly integrable. If this condition holds for each compact interval of N+ it is locally of class (D), or termed of class (DL). This technical condition was introduced by Doob motivated by the solutions of Dirichlet's problem in potential theory. With these concepts at hand, it is possible to present the continuous parameter version of Proposition 2.2, for super martingales, due to Meyer (1962-63): 3. TnEO~,EM. Let {Xt, ~ t , t >_ 0} be a right continuous super martingale of class (DL) , relative to a filtration satisfying the standard conditions. Then there exists an increasing integrable predictable process {At, ~ t , t > 0},A0 = 0 and a (right continuous) martingale { Yt, Y t , t >_ 0} such that Xt=Yt-At,
t_>0 ,
(25)
and the decomposition is unique. Several proofs (and extensions) of this result exist, but none is simple enough to present here. So it may be referred to any one of the references given above.
781
Martingales and some applications
In general for a (sub or super) martingale X = {Xt, Yt, t >_0} which satisfies suptE(IXtl) < oc (i.e., L 1(P)-bounded), one has the following assertion: For each rc : 0 _< to < tl < .-. < t, < oc, since Eg~o-1(Xt,,) = (>_, _<)J(t, 1, it is true that n
)
~= E(E~'-I (X,~) - ~ 1
< supEIX, I = gx < oo , t
for any partition ~, The left side may also be written (since for martingales the quantity inside is zero, and for sub or super martingales it is either non negative or non positive) as:
supZE([Eg'~-, (Xt~-Xt~_,)[) <_ Kx < oc . rc
(26)
i=1
Any adapted process X for which (26) holds is called a quasirnartingale, a concept originally introduced and analyzed by Fisk (1965) for continuous processes, and generalized further by Orey (1967) for right continuous ones. Thus this class includes all the Ll(p)-bounded (sub and super) martingales, but is larger. In fact it is evident that the class of quasimartingales on the same filtration {Yt, t _> 0} is a vector space so that linear combinations of (L 1(P)-bounded) sub or super martingales are quasimartingales but not necessarily sub or super martingales. Now if {Xt = }-~4~=1aiX/, Yt, t > 0} is such a combination of class (DL) right continuous integrable processes with the same standard filtration, then using Theorem 3, one finds immediately that there is a decomposition Xt = Y t - At where Yt = ~ i n _ l aiYti, At = 2in_l aiAi with Xj = Y / - A I , so that {Yt,~t,t >~0} is a right continuous martingale and At (as a linear combination of increasing integrable predictable processes), is an integrable, predictable process of bounded variation. Hence it may be expressed as a difference of suitable increasing processes. The interesting fact is that the converse of this statement is true, and thus an intrinsic characterization of quasimartingales of considerable interest, especially for stochastic integration, can be given. The appearance of a process of bounded variation along with a martingale component indicates that this study must be related to some standard results in real analysis. This nontrivial fact is the content of the following theorem due to Dol6ans-Dade and Meyer (1970). The idea here is to associate a (real) set function on the predictable a-algebra determined by the given standard filtration, find a suitable (additional) condition for its a-additivity, and then analyze the structure of the process. [There are alternative procedures, but this is somewhat shorter and reveals the nature of the problem better.] Thus let ~ be the predictable a-algebra of (R + x ~, ~ ( R +) ® Z) introduced earlier for the filtration { ~ t , t > 0}, and let ~a be the corresponding class when R + is replaced by [0, a]. Then one can identify for a < a ~, ~a C ~a' C ~. For an integrable process X = {Xt, ~ t , t > 0} define
#~((O, a l x A ) = [ ( X s - X t ) d P , JA
AE~s,
O
,
(27)
782
M. M. Rao
(with the above = 0 if s = a). It is not hard to verify t h a t / ~ : ~a -+ N is finitely additive. The desired signed measure representation for our process is given by: 4. THEOREM. Let X = {Xt, ~ t , t > 0} be a right continuous quasimartingale with {Yt, t _> 0} as a standard filtration of(O, ~, P). Let #~ be the set function associated with the process X on [0, a] by (27). Then there exists a unique signed measure #x : .~ ___+~ such that l~]~a = #x for a >_ O, iff the process Y is of class (DL). For a proof of this useful result, which is not simple, one may refer to Dellacherie-Meyer [(1980), Chapter VII] or Rao [(1995), pp. 365-369]. An interesting consequence of this result will be detailed in the following (a special case of which was already discussed prior to Corollary 2.3): 5. THEOREM (Generalized Jordan Decomposition). Let X ~ {Xt, ~ t , t >_ 0} be as above, i.e., a right continuous quasimartingale of class (DL). Then it is the difference of two positive super maringales { X t i , ~ t , t > 0 } , i = 1,2; Xt = x t l - ~t2, t ~ O.
Every right continuous martingale X is of class (DL) so that Xt = Xt1 - X t 2 holds by theorem, with {X/, ~ t , t _> 0}, i = 1,2 being positive super martingales. But -Xt = (-Xt 1) - (-Xt 2) gives (-X/)-processes to be also super martingales, since - X is again a right continuous, class (DL), martingale. Hence the Xtiprocesses must be positive martingales. Consequently one has the following Jordan type decomposition for continuous parameter martingales. 6. COROLLARY. Every right continuous Ll(P)-bounded martingale admits a decomposition Xt = Xt I - X 2 where the {Xti, ~ t , t > 0}, i = 1,2 are positive right continuous martingales. (Hence Corollary 2.3 holds for continuous parameters as well.) We now turn to a sketch of the PROOF OF THEOREM 5. Let #x : ~ __+ N be the signed measure associated with the given quasimartingale, as assured by the preceding result. Then by the classical Jordan decomposition, #x = (#x)+ _ (#~)-. This is in general not unique, but can be made unique by demanding that the positive measures (#~)-~ be mutually singular (i.e., they have essentially disjoint supports). Since #~ is bounded (being a signed measure), the (#~)± are also bounded. Moreover #~(A)= #x((t, oc) x A), A c ~,~t, is P-continuous and so the same is true of the positive components, Let Xt± = d(#~)±/dP, by the R a d o n Nikod)m theorem. It then follows that {Xt±, Yt, t _> 0} are positive super martingales and one has Xt = X + -Xt- , t _> 0, giving the desired decomposition. It can also be shown that, in this decomposition, one can take X[%processes to be right continuous and of class (DL), but this detail will be omitted here. [] The interest of this result is seen from a comparison of it with the Doob-Meyer decomposition given in Theorem 3. Indeed one has the following important consequence.
Martingales and some applications
783
7. COROLLARY. Let X = {Xt, Y t , t >_ 0} be a right continuous quasimartingale of class (DL). Then it admits a unique decomposition as: Xt=Yt+Vt,
t_>0 ,
(28)
where {Yt, Yt, >_ 0} is a right continuous martingale and {Vt, ~ t , t >_ 0} is a right continuous predictable process of bounded variation on each compact t-interval, in the sense that Vt is the difference of two increasing predictable right continuous processes for the same filtration { ~ t , t > 0}. This result implies that it is only necessary to define a stochastic integral relative to a martingale, in order to consider integrators with quasimartingales in applications, since the latter are typically of unbounded variation and the Stieltjes definition of integral does not apply. Thus the difficulty is relegated to the martingale component in such a study. For a unified treatment of stochastic calculus, we also introduce the following general concept. 8. DEFINITION. A right continuous process {Xt, ~ t , t >_ 0} on (9, X,P) relative to a standard filtration {~-t, t >_ 0} from S, is called a semimartingale ifXt = Yt + Zt where {Yt, ~ t , t > 0} is a martingale and {Zt, ~ t , t > 0} is a process which is the difference of two predictable increasing processes on the same filtration. If {Yt, ~ t , t > 0} is a local martingale, then the given X~-process is called a local semimartingale. [We recall that a right continuous process {Yt, ~ t , t > 0} is called a local martingale if there is an increasing sequence of stopping times {Tn, n > 1} of the filtration such that (i) P[Tn _< n] = 1, (ii) P[limn Tn = oo] = 1, and (iii) ifz~' = Tn A t, and Y~ = Y~7then {I?~, ~-t, t >_ 0} is a uniformly integrable martingale for each n.] The local martingale concept is technical and unmotivated. It may be verified that each positive local martingale is a (positive) super martingale. Thus the local concept plays an important technical role in the Doob-Meyer decomposition as well as in the general integration theory, and it roughly occupies the position of local compactness in the work on topological measure theory. The relation between quasimartingales and semimartingales is very close, as one may expect. The precise statement is given by: 9. THEOREM. Let X = {Xt, ~ t , t > 0) be a right continuous process on a probability space (Q, Z,P) such that suptE(lXtl) < oc, i.e., is L 1(P)-bounded. Then X is a semimartingale iff it is a quasimartingale of class (DL). In general, a quasimartingale is a local semimartingale. The proof depends on the general theory of (continuous parameter) martingales, and may be found in any of the standard works on the subject cited above [e.g., cf., Rao (1995), p. 271]. The reason for discussing these concepts here is to introduce stochastic integrals, and present some of their key properties to use in the following applications, starting in the next section. As indicated already, the
M. M. Rao
784
semimartingale concept is useful for concisely stating the general results, and quasimartingate is the 'work horse' of the subject to lean on. For the purpose of quick advance into the integration theory, it is expedient to introduce a general boundedness principle, originally due to Bochner (1954), as it unifies the It6 integral and the Wiener-Kolmogorov-Cram6r-Sratonovich as well as virtually all the other stochastic integrals studied so far in the literature. It can be specialized to various other definitions and obtain their properties for a detailed study. Here is the basic principle: 10. DEFINITION. A processX = {Xt, t _> 0}, Xt C LP(P),p > 1,p > 0, is said to be is an absolute constant C = (Cp,p > 0) such that for each Borel simple function f : ~+ -+ R, one has
LP'P-bounded, if there
E( j[+f(t)dXtP) < C j[+,f(s),°ds ,
(29)
where for f
= ~i~_1 aiz[t,,t,+~)one has the (clearly unambiguously defined) symbol fu+f(t)dXt = ~;Ll ai(Xt,+,-Xt,). If p = p = 2 in (29), then X is termed L 2,2bounded, the simplest case but is often used as a first key step. This definition immediately applies to the Wiener or Brownian Motion (BM) process. Indeed, recall that a BM is a process {Xt, t _> 0} of independent increments such that for each s < t, Xt - X s is normal with mean zero and variance cr2(t - s), written as N(0, o-2lt - sl). The existence of such a process was first established by Wiener in 1923, and several simpler proofs of the result are now available. Two such proofs may be found in McKean (1969). The process has continuous sample paths, i.e., t ~ Xt(co) is continuous for a.a. (co). Then the left side of (29) becomes with p = p = 2: t
=E
ai
,+1 - ~ t ,
2
+
= ~" a2iE(Xt,+~ X,,)2 _
i=l
+2
~
aiajU(Xt,+l-Xt,)(Xy+,-Xg)
1
=
_ t,) + o, i--1
because of independent increments,
~- a a .~+ If(s)I 2 ds
,
(30)
and (29) is valid with C = ¢2 > 0 and even an equality. Thus the BM is L2,2bounded. It may be shown with a more detailed analysis that all stable processes are LO,P-bounded for some p > 0 and p but both not 2. [The necessary argument is in Bochner (1954).] Now if the Xt-process is of orthogonal (but not independent) increments, as it appears in the representation of (second order) weakly stationary
Martingales and some applications
785
processes, then the above definition does not directly apply. However, stochastic integrals already exist for such integrators in the literature, due to Kolmogorov, Cram~r and others. The It6 integral is a generalization of the one with the BM in that the integrand f in (29) is also a stochastic function. To include all these cases and most martingale integrals generally, Definition 10 has to be extended and the following concept fulfills the desired search as shown below: 11. DEFINITION. Let q)i : ~ ---> N+ be an increasing, symmetric function such that q~i(x) = 0 iffx = 0, i = 1,2, and X = {Xt, t c I c R} be a right continuous (or just X(.,-) : I x O -+ R is measurable relative to ~ ( I ) ® Z) process on (f2, S,P). Let (9 C N([) ® S be a ~-algebra, and e a a-finite measure on (P. Then X is called Lel,~2-bounded relative to (9 and c~ if there exists an absolute constant K ( = K~I,~2 > 0) such that
E(~o2(zf) ) <_K ~
JI xf2
~Ol0C)d~ ,
(31)
where ~ : f H f~fdXt is a mapping defined exactly as in (29) for each simple function f(t, og) = ~i~_1 fti(og)Z[t~,t,+,)(t), with tl < t2 < ... < t~+l,ti E I, and fie" Z[ti,t~+~)an (9-measurable bounded function. Note that if q h ( x ) = IxlP,~o2(x)= IxlP,~ = ~ ® p where # is the Lebesgue measure, ft,(co) E R, and (9 = N(I) ® {(b, f2}, then L~,~2-boundedness becomes LO,P-boundedness. In this particular case e and (9 are usually not mentioned, as they are regarded familiar objects. The definition above may appear too general, but the following examples show that it admits specializations and includes all the cases that are currently considered in the literature. The essential point of either of the above definitions is that ~ : LP(~) ~ Lp(p) is a linear mapping defined on all simple functions, and by (29) or (31), it is bounded. Consequently ~ has a unique bound preserving (linear) extension onto J~, the closure of the simple functions of the (linear) metric space LP (~) (or in the general case L ej (c~)), which is L~(c~) itself (but in the general case it could be a proper subspace ofL ~ (c~)however). The thus obtained extended mapping, denoted by the same symbol, z(f) = ft f(t)dXt, is the desired stochastic integral. After the following generic examples, the general statement of the theorem will be given. EXAMPLZ a. (The K o l m o g o r o ~ C r a m 6 r integral) This is defined for integral representations of stationary processes. Thus let {Zt, t E R} be an orthogonally valued process, so that Zt EL2(P),E(Zt)=0 (for simplicity), and for tl < t2 <~ t3
la~IaE(IZ~,+, - L,I e)
~=1
+ 2 ~ l
aiajE[(Z,,+~ - Zt,)(Zt,+~ -
Z,~)]
M. M. Rao
786
n
=~
lail2E[tZ,i÷lI2 - IZtil 2] + O,
i=1
since i + 1 _
=~
lail2C~(ti+') - ]A([i))
,
i=1
where #(t) = E(I/,I a) 4t'), t <_ t' in I and we take I to have a least element a0 and Zao = 0 for computational facility, since otherwise Zt - Za0 will do. Thus # defines a (bounded) Borel measure, denoted by the same symbol, and one gets E(lzfl2) = .fI If(t) Is d#(t) ,
(32)
and here d/z(t) is not necessarily the Lebesgue measure. But (31) implies, by (32), that for all f E L 2 ( # ) , ~ f = f l f d Z t is a well defined stochastic integral and {Zt, t E I} is L2,2-bounded. Taking f = ZA,,,An ~ O, this shows that Z(.) defines a vector measure, i.e., a o--additive function on X into the vector space of random variables L2(p). A n interesting and useful consequence of this fact applied to the integral given by the above example is the following: Let T : L2(p) ~ L2(p) be any bounded linear operator. Then one has T ( r f ) = T f1 f(t)dZt = fl f ( t ) d ( T o Zt). The last equation is a consequence of a classical result due to E. Hille, which says that a bounded linear operator and a vector integral, as here, commute. Now if Z(.) = T o Z(.) then T ( r f ) = fz f(t)d2t, and hence
= [JT(~f)II~ _< K211zfl122
= K 2 f If(t)12d#(t),
by (32) .
(33)
Thus one can conclude from (33) that 2t is also L2'2-bounded so that the stochastic integral -~f = f~ fdZt, f E L;(#) is defined. Note that {Zt, t ~ I} is not orthogonally valued and if T ¢ id., then (31) holds with inequality only. This interesting consequence is recorded for reference as follows: EXAMPLE b. Let {Zt, t C I} be a process with orthogonal increments, and T : L2(p) ~ LZ(P) be a bounded linear operator. Then Zt = T o Zt(E L2(P)) defines again an L2,2-bounded process and hence the stochastic integral ~f = f1 f d 2 t is well-defined relative to the process {Zt, t E I}. Taking T = H, an orthogonal projection, one gets the corresponding measure as a (weakly) harmonizable spectral measure, and thus such harmonizable processes satisfy our extended boundedness principle. [It is known that every harmonizable spectral measure has an orthogonal dilation and thus this is covered by the general principle.] For an
Martingales and some applications
787
account of harmonizable processes, see e.g., Rao (1982) and a more extensive treatment is in the recent monograph by Kakihara (1997) including the multidimensional case. In the next example the integrand is also a stochastic process and is more general than either of the cases considered in (29), (32), or (33). EXAMPLE C. Let X = {Xt, Nt, t >_ 0} be a real right continuous square integrable martingale where {Nt, t _> 0} is a standard filtration (i.e., Nt = Ns>t N~, ~t ? c £ and completed in (~2, Z, P)). Let {~t, t > 0} be another standard filtration with { ~ t , t >_ 0} and { g t c Nt, t_> 0}. If ~ is the predictable a-algebra of N(R+) ® (¢o~ for the Nt-filtration, where N~ = a(mt>0 Nt), consider a simple function as: /1
f
= EaiZA~Z(t~,t,+l], i--O
Ai C Jt~,
0 < tl < " " < tn+l <_ t < O0 .
Define as usual zf = fa+ f d X . Using the commutative property of conditional expectations (E~C'E ~¢' = E~'E ~s = E~C.~,s < t) and the identity E(E~(h)) = E(h), h C L 1(P), for any a-algebra N c Z, one can conclude that the martingale X is also L2,Z-bounded relative to ~ ( = (9, of Definition 11) and a a-finite measure/~ to be established below. Indeed consider: n
E(l
fl 2) =
.
EkA5 %,
- X,,) 2]
i=0
+2 E
aiajE[ZA,nAY(Xt¢+1--Xt')(Xty +z --)tj)]
O<
=
a ieE A (xL, - x # ) j + o, i=0
using the martingale property and ~ t c f#t, n
= E a i ~2 x [(li, ti__l) X A;]
,
i=0
where #x [(ti ' t~+l) × A i] is a-additive (actually a measure) on ~ , associated with the positive right continuous sub martingale {X2, f#t, t > 0}, by Theorem 4. (This is called the Dol6ans-Dade measure.) Hence (33) implies
E(lzfL 2) = L +×~ If(t, co)12 dgX(t, co)
(34)
Note that the dominating measure #x on ~ is not necessarily of product type. Thus r is a bounded linear mapping on the simple functions of (hence having a unique extension to all of) L2(#x) taking values in La(P), and so by definition {Xt, fqt, t >_ 0} is L2,Lbounded relative to ~ and #x. Then (zf) t = f ~ f ( s , .)dX, gives the desired integral, and it is easily verified that {(zf)t, fft, t >_ 0} is again a martingale for f c L2(~, y ) . If the Xt-process is a BM, then this is the usual
788
M.M.Rao
It6-integral. (In this case, it is both a martingale and a Markov process.) The general form, for martingale integrators, of z f is due to Kunita and Watanabe (1967), and to Meyer in a series of articles with a final account in his major (1976). It is now appropriate to formulate stochastic integrals for semimartingales, extending and unifying the preceding examples, which again follows the boundedness principle (given in Definition 11).
Cours
EXAMPLE d. Let X = {Xt, fYt, t > 0} be a right continuous L2(p)-bounded (i.e., the process lies in some ball) semimartingale, {fft, t _> 0} being a standard filtration from (O, S, P), as above. If,~ is the predictable a-algebra from N(~+) ® N for the filtration, then X is L2,2-bounded relative to ~ and a a-finite measure on .~. This is verified by using the result in the preceding example, as follows. Since the Xt-process admits the decomposition Xt = Yt + Zt (cf., Definition 8), and X is L2(p)-bounded, it can be assumed for the present purposes that the martingale Y = {Yt, fft, t _> 0} is L2(p)-bounded and Z = {Zt, .%_, t _> 0} is a predictable (L2(p)-bounded) process of bounded variation where fgt- = a(Us
E('~fI?)= E[ f~+f(t)dYt+ j[R+f(t)dZt21 _<2El £ + f ( t ) d Y t 2 +
(£+[f(t),d[Zt,)21,
since (a + b) 2 _< 2(a a ÷ b2),
< 2 I£+×olfl2dl~X+ E(lZool~+ lfled'Zt')] by the last example, and Jensen's inequality, = 2[ 1/12[d/~"+ J~ +×(2
IzooldlZ~l]
= 2[ ]fl2dc~ Je +×~Q where c~(.) is the measure defined by the term in [ ] above. This shows that X is L2,2-bounded, and hence (zf) t = is uniquely defined on L2(@ where c~(.) is a a-finite measure constructed above on ~. It also follows that {(zf)t, fqt, t > 0} is a semimartingale. The general theory implies that if the initial work is accomplished for the L 2,2bounded processes, then the integral can be extended with standared tricks, using stopping times and truncation, to local semimartingales and to locally integrable f relative to c~. The details are found in the paper [Rao (1993)] and the book [Rao (1995), Chapter VI]. Here we state as the main theorem that unifies the above examples and more, thereby indicating that the generalized Bochner boundedness principle is the best one in some sense.
f~f(s)dX~
789
Martingales and some applications
13. THEOREM. Suppose that X = {Xt, f#t,t >_ 0} is L~°l'q°2-bounded (cf., Definition 11) relative to the predictable a-algebra ~ c .~(~+) ® S with the standard filtration {Nt, t >_ 0} from the basic probability space (f2, X,P), and a a-finite measure c~ on ~. Then the stochastic integral
(zf)(t) =
f(s,.)dX=, f E L~(c~),
4~(2x) < K4l(X), x > 0
,
is defined and the dominated convergence theorem holds for this integral. On the other hand, if Xt ~ Le2(p),t > O,Le2(p) is a separable Orlicz space [a Banach function space that reduces to the Lebesgue space LP(P) i f (pg(X) = [xlP,p > 0], and if the stochastic integral z f exists, L ~°2(P)-bounded for all simple functions f , and the dominated convergence assertion holds, then there exists a convex function ~o1 such that (ol(x) = 0 iff x = O, as well as ~l(X)~. oc as x T cc (not necessarily X ~o1(x) = [x[P), and a a-finite measure c~ on the associated (to the given standard filtration) predictable a-algebra, for which X is L e~'~°2-bounded. This result implies that the concept of generalized boundedness of Definition 11 is essentially an optimal condition if the stochastic integral is to obey the dominated convergence criterion which clearly is the most desirable property of the (stochastic or any other for that matter) integral. The direct part which is most relevant is simple and uses the same argument as in the above examples, but the converse direction uses results from functional analysis and is somewhat involved. However, it indicates the generality of the principle. Using this extended version, the corresponding It6 differential formula, a corner stone of the stochastic integration theory, will be given to use it often in the applications to financial mathematical models below. The presentation of the above formula is facilitated if we recall the concept of quadratic (co)variation of an LZ,2-bounded process (i.e., ( P l ( X ) = (P2(X)= X2). Thus for a process X = {Xt, Nt, t > 0}, consider the dyadic partition of [0, t] : 0 = to < tl < --. < t~ < t where t; = it~2~, i = 0, 1,... ,n (in applications X0 is usually a constant). Then [X]t = p l i m ~ + ~ x-'"-I z_~i=0 IX. v t,+~ - Xti) 2, if this (in probability) limit exists, is called the quadratic variation of the process X. It can be verified that this limit exists if X is L2,2-bounded relative to the predictable a-algebra ~ associated with the standard filtration {Nt, t >_ 0}. It may then be verified that t ~-+ [X]t is an increasing predictable locally bounded process. To see this, consider for a dyadic partition It of [0, t] given above with I~* = (t~, tz+ll and the following identity: 2" - 1
V-x0e
=
2=- 1
Z (XJ'2+~,lt-- X~t) 2
~7f
j=0
j=0
2.-1
Z
2=-1
(Aj:;X)2 + 2 ~-~X~A,:,X,
j=0
j=0
with A~2X = Xt,+, - Xti,
;~oIAs,,IXI2+ 2
2=-1
J~/t
f=(s)~= ,
(35)
M. M. Rao
790
X-~2"-1 wheref~(t) z~j=0 ~ t_~ZI"j is a simple function. Since X is L2,2-bounded and fn is ~-measurable, it has a limit as n ~ oc. The right side integral converges to fz, Xs_dXs by the 'dominated convergence' for such integrals, so that the sum converges in measure to IX]t, the quadratic variation. Thus for the L2,Z-bounded (right continuous) processes the quadratic variation on [0, t] for each t < oc exists. Next if X i = {Xti,~t,t >_ 0}, i = 1,2 are two L2,Z-bounded processes, so that {Xt1 +Xt 2, ~t, t _> 0} is also such, with [X1]t, [X2]t, IX 1 +X2]t as their respective quadratic variations, then their quadratic covariation is given, with the polarization identity, by: =
[X1,X2]t = l ( [ x i +X2]t - [X1]t - [X2It),
t> 0 .
(36)
This exists and defines a predictable process of bounded variation on each compact interval [0, t], and is a bilinear form. With these concepts at hand, the following generalized It6 formula can be presented: 14. THEOREM. Let X = {Xt, Nt, t >_ 0} be an LZ,2-bounded right-continuous process with left limits relative to ~ and a ~-finite c~on ~ , as before. I f f : ~ -+ ~ is a twice continuously differentiable function, then for each 0 < t < oc,
1/oi
f ( X t ) - f(Xo) =
f'(X~_)dX~ + g + ~
f"(X~_)d[X],
(f(Xs) - f ( X , _ ) -f'(Xs_)AX~)
O
2 Z
(37)
Ut'(Xs)(AXs)2 '
O
where the last two series are convergent a.e., AX~ = X~ - X ~ _ being the jump of the process at s. In particular, if t ~ Xt # a.e. continuous (especially for the B M ) , then the last two terms of (37) vanish, and then one has (the original Itd-formula) : f(Xt) - f(Xo) =
//
'/o
f'(X~)dX~ + ~
f'(Xs)d{X]~ ,
(38)
where [X]t = t for the BM. I f X 1 , X 2 are two LZ,X-bounded processes on the same standard filtration, with [X1,X 2] as its covariation process, g : N: ---* ~ has two continuous (partial) derivatives, then (37) takes the form:
2i~=1~t~ig(xl X2 )dXi 1 2
~t
~,j.~.l [
+ 2..
~2
c-g (X) ,X~ )d[X1,X2]~
Jo ax~axj
-
-
O
,
(39)
Martingales and some applications
791
the series converging a.e. (and the last two terms drop out if the X i are continuous).
The original formula (38) for the BM process was first obtained by It6 in the early 1950s, the extension to L2(P)-martingales by Kunita-Watanabe (1967), and the result for semimartingales with formula (37) is due to Dol6ans-Dade and Meyer (1970). The proof may be found in either of the above references, and some multidimensional extensions are also available [cf., M6tivier (1982)]. The stochastic integral and the (general) It6 formula are employed in the studies of stochastic differential equations (SDEs), and this is how important applications to, for instance, financial mathematics as well as the stochastic fundamental theorem of calculus, to be discussed below, are formulated. Thus one considers equations of the form: dXt = b(Xt, t) + a(Xt, t) dZt dt dt '
(40)
where b, a : R x ~+ ~ ~ are (locally) bounded measurable functions and where {Zt,-~t, t _> 0} is a semimartingale or an L2'Z-bounded process. However, the It6 equation (38) (or more generally (37), (39)) implies that the L2,2-bounded process does not have finite variation on non degenerate intervals so that dZt/dt is not defined in the ordinary sense of Lebesgue's theory. So (40) is formally written as dXt = b(Xt, t)dt + a(Xt, t)dZt ,
(41)
and is understood in the weak sense, i.e., for any real continuous function ~p on with compact support one has: qo(s)dXs =
(p(s)b(Xs, s)ds +
(p(s)cr(X~, s)dZs ,
(42)
where the left side is by definition, the right side in which the first is in the standard (pointwise) Lebesgue [or more generally the Bochner] integral and the second one is the stochastic integral relative to the L2,2-bounded process Z just defined above. The problem then is to find conditions on the coefficients b, a in order that (41) or (_42) has a unique solution {Xt, ~t, t >_ 0}. Taking cp(s) = s on [0, t] and writing b(x,s) = q)(s)b(x,s) and 6(x,s) = ~o(s)a(x,s) one can simply express (42) as: X~ - Xo =
/0
[,(Xs, s)ds +
/0
#(Xs, s)dZ~ ,
(43)
and this is regarded as a first order (non linear) SDE, and is rigorously interpretable because of the stochastic calculus presented above. It is in this sense (and form) one understands an SDE given by (41). Moreover, if a solution of (43) exists, then it will be (at least locally) an L2,2-bounded process when b, a satisfy certain integrability conditions. Familiar examples of (40) are the following: (i) The Langevin equation for the motion of a free (Brownian) particle:
792
M. M. Rao
du
dt
-
flu+A(t)
(44)
,
where u is the velocity of the particle, -flu is the dynamical friction and A(t) is the random fluctuation. This is the 'white noise' or dB(t) = A(t)dt which gives the BM differential, [cf., Chandrasekhar (1943), p. 20]. It is an example of (40). (ii) Pricing of contingent claims in a stock market: dVt : [~Vt - D1 (Vt, t)]dt + a(Vt, t)dZt ,
(45)
where Vt denotes the price of a dividend-liability traded security at time t, a2(Vt, t) is the instantaneous variance rate, D1 is the dividend flow rate, and Zt is the BM fluctuation, [cf., Merton (1997)]. This is an example of (41). [Chandrasekhar and Merton are 1983 and 1997 Nobel lauriates in Physics and Economics respectively.] The first is a linear and the second a slightly more general (first order) SDEs to be understood in the form (42) and (43). Higher order SDEs are also of interest in some important applications as well as theory, but this is comparatively less developed. [Regarding the state of the art in the linear and nonlinear (higher order) cases, it is discussed in, for instance, by Rao (1997).] The above applications, especially in finance, are mostly considered for the linear equations. For this case a quite general existence and uniqueness of solutions can be presented and it is included here to give a bird's-eye view of the subject. 15. TH~ORZM. Consider the (linear) SDE given by." k
dXt = (c~(t)Xt + ~o(t))dt + ~(fli(t)Xt + yi(t))dBi ,
(46)
i--1
where {BI, fgt, t > 0}f_l are LZ'2-bounded processes with the same standard filtration, the coefficients C~o,7,f l i ~ i ~ i = 1,... ,k being nonstochastic and continuous. Then (46) has a unique solution for any constant given initial value )20, and in fact the solution Xt, t >_ O, is explicitly given by." Xt = M~l {XO + ~otMsc~(s)ds
'±/o'
Msfii(S)Ti(s)dB I, +
2z d 1 0
±Io'
Msyi(s)dB~
i 1
}
,
(47)
where the strictly positive (LZ,2-bounded) process {Mr, ~ft, t >_ 0} is defined by Mt = exp
I /o -
c~(s)ds-
+ g ,_
fii(s)dB~
fii(s)flj(s)d[B',BJ],
.
(48)
Martingales and some applications
793
Moreover the solution is a Markov process if the Bi are also independent with independent increments (in particular if they are B M processes). This result easily extends if X is an n-vector process, ~, fli~ ~i, i = 1,... ,k are n x n matrices and c~0,B i are n-vectors, in which case Mt will be an LZ,Z-bounded process (or all of them can be regarded as semimartingales). [Indeed it also holds for higher order (linear) SDEs.] Taking k : 1 and n = 1 this applies to the problems noted in the above examples. A proof of this result is given (when B i are BMs) in Wu [(1985), p. 80] and a simpler version (in the L2,2-bounded case) in last reference above. (See Section 3, and the [non typist's] typographical errors should be corrected. The last part is a consequence of Theorem 4.2 there.) The simplifications obtain by converting the general It6 integrals into the Stratonovich form. This is of interest in some of these computations, and a definition will be included for understanding the distinction. If Yi, i = 1,2 are two L2,Z-bounded right continuous processes on the same standard filtration, then let Yt1 o dYt2 = YtldYt2 +½d[Y 1, ](2It so that in the integrated form one has
f0
y1 o dY2 =
ft
#dY 2 + ~
d [ Y ' , Y2]s .
(49)
The right side symbols are well-defined, by the preceding work, and the thus defined left side is called the Stratonovich integral which obeys the boundedness principle. This also follows the rules of the ordinary calculus, especially the integration by parts formula holds for it in contrast to (37). It is noted that if one of the processes yi is locally of bounded variation, then both integrals coincide, and this is useful in the analysis. As an illustration, let us note the solution of Langevin's equation (44): 16. Example. Let k = 1, c~(t) = -/~, a constant, c~0 = 0 =/~l and "~1 = 1 in (46) which reduces to (44). Then (48) becomes M(t) = e at and (47) becomes Xt = e -~t
(
+
0l) e ~s dBs
,/0
This solution was derived (around (1905)) independently by Einstein and Smoluchowski with Bt as BM, long before the stochastic integration was rigorously established by Wiener in 1923. This description is enough for what follows. 5. An application to likelihood ratios An immediate application of martingale theory is to find likelihood ratios of a process X = {Xt, t ~ T = [a, b] C R} governed by a pair of probability measures (corresponding to a simple hypothesis versus a simple alternative). Thus let canonical (or function space) representation of the process be (O, S, ~) where f2 = R r and S is the cylinder a-algebra, i.e., t ~-+Xt(co)- co(t), co E O, is the
M. M. Rao
794
coordinate function, then X is the smallest a-algebra relative to which all the Xt are measurable. Suppose that t H Xt is right continuous. The problem is to find the likelihood ratio dQ/dP, based on a realization of the process if Q << P. First consider the special case of a sequence of random variables X~,X2,... and let ~ n -- a(X1,... ,X,n) -- (X1,... , X n ) - l ( ~ ~) C X, where N" is the Borel a-algebra of the Euclidean space R ~. If Q,, P~ are the restrictions of Q, P to ~-, and if Q << P then Qn << P~, with f , = dQ,,/dP,. We observe that {f,, Y , , n _> 1} is a positive martingale. To see the martingale property, let A C ~ , and consider Q~+I (A) = ~ f~+l dP~+l = .f4 E g ' (f~+l)dP~,
=/if~dP,, = On(A) •
by definition of conditioning,
by definition off~, (50)
This implies the martingale property. If Q << P is dropped in the above, then considering the absolutely continuous parts Q~ relative to P~ and noting that QC (A) ->- Q~(A) A E @~ one sees from the same computation that the fnn+l sequence will be a positive supermartingale. Consequently by Theorem 3.1 (or Corollary 3.4) f , ~ f ~ a.e. In the continuous parameter case one should replace the process by suitable countable sequences so that the above argument can be used. It is for this reason that we need some 'separability' conditions on the process, and the right continuity assumed above is convenient and then one of the two following methods can be employed. In many important cases one may replace the given process by a countable set of 'observable coordinates', proposed and with great effectiveness used by Grenander (1950), for which the martingale convergence theorem directly applies. The second method is to consider partitions ~ : a -- t~ < t]' < ..- < t],, _< b and let ~ = a(Xt,, ti E 7Cn). If a(Xt, t E T) = a(U . ~-~n) as the partitions are refined, i.e. Iz~[ --+ 0, let Q~,P~ be restrictions of Q,P to ~ , , and f~, = dQ,/dPn, then {f~,,, ~ , , n _> 1} forms a (super) martingale. Here the refinement order is in general only partial not linear, and an extension of Theorem 3.1 is available but the convergence is unfortunately only in probability and further restrictions are needed for pointwise convergence. For a comparison, the exact result will be stated as follows. Recall that a set I is directed if it is partially ordered by ' < ' and if ~, fi E I then there is a 7 E I such that cq/3 < 7. Also a directed collection {f~,~ ~ I } is terminally uniformly integrable if Ilf~lll < K < oc, Vc~ E I and for each e > 0 there are ~0 c I, 60 > 0 such that fA If~dP <--~, VA E X, P(A) < 8o, and all ~ > c~0. The precise result is as follows: 1. THEOREM. Let ( f ~ , ~ , ~ E I} be a directed indexed martingale, so that E 5~ (fp) = f~ for c~< fi, which is terminally uniformly integrable. Then there exists
Martingales and some applications
an f E LI(P) such that IIf~ - f i l l hence f~ --+ f in probability.
795
---* 0 as c~ --+ oo, and then L = E~=(f) a.e., and
[The proof can be found in, e.g., Rao (1981), Thm. IV.4.6 on p. 209. It may be noted that when the terminal uniform integrability holds, one has Q << P as a consequence.] Both these methods noted above will now be briefly illustrated. The first one is as follows. Let X = {X, a < t < b} be a real process on (~2, S, ~) with a continuous common covariance function r, means 0, m(-) for P, Q respectively and J2 Im(t)ldt < oo. Suppose {2~, (pn,n _> 1} are the eigen values and the corresponding eigen functions of the integral equation:
~o(t) = )~
/a
r(s, t)~o(s)ds ,
(51)
The classical theory of integral equations and Mercer's theorem imply that there exists a sequence of numbers 2n > 0, and functions (pn satisfying (51), forming a complete orthonormal set in L2([a, b], dr) such that
r(s, t) = ~ (&(s)ep(t) n=l )on
(52) '
the series converging uniformly and absolutely. Let Z~ = f£o Xtrp~(t)dt and m~ = fba m(t)~o.(t)dt. Then Ep(Z.) = O, EQ(Z.) = m. and Cov(Zj,Zk) = (1/2k)aj~ under both P, Q. Moreover, (Ep denotes expectation on (~2, S,P)) one has
Xt = m(t) + ~-~ Zk ~o~(t___~) k=l
(53)
V/~
the series converging in mean under both measures P, Q (known as the Karhunen-Lo~ve representation). Now let Yn = ~(Z1,... ,Z~) and ,~oo = o-(Un @~). Then each Xt is ~-oo-adapted and it follows that ~oo = a(Xt, t ¢ [a, hi). Letting P~ = P I ~ , Qn = QIY,, to be the measures governing ( Z l , . . . , Zn) which are thus derived from those of the given measures, so that by (50) {f, = dQ~/dP,, , ~ , n >_ 1} is a positive super martingale and a martingale if Qn << P,,, n > 1, and f , -+ foo a.e. (= dQ~/dPoo by a theorem of Andersen-Jessen and independently Grenander, cf., e.g., Rao (2000), Theorem V.I.1). If Q << P is assumed then Q~ = Q~ << P,, 1 < n < oo and foo is the desired likelihood ratio of the process on ,,~oo. Let us illustrate this by the following example which specializes a result due to Grenander (1950).
2. Example. Let {Xt, t E [0, 1]} be a real Gaussian process with mean 0 (m(-)) under P(Q) and the same covariance under both, given by (V = max, A = min) r(s,t)=c°sh(sAt)c°sh(1-sVt) sinh 1
'
0
t< 1 "
(54)
M. M. Rao
796
A computation (by converting (51) into a differential equation) shows that "~n = 1 q-n2Tc 2, opt(t) = v ~ c o s n = 1 , 2 , . . . , (20 = 1, (P0(t) = 1) and set an = v/2 f0~re(t) cos dt, Z~ = v/2 J0' Xtcos n = 1,2,... Z0 f2 Xt dt where m E L 1([0, 1], dt)). It is seen that = 0, Varp(Z~) = l/ft, = VarQ(Z,) and the Zn are independent. So a standard computation shows that
nzt,
mzt
dQn (oo) = e x p Moreover ]a, I < j~
Im(t)[dt = K
n~tdt,
Ep(Zn)
(do = .[~ m(t)dt,
{ 2l ~_12 2iai + ~.~=l]LiZi(co)} < ec, all n. If we define Yi =
(55)
2i(Ziai-~),
~'~i=12ia~ OO
then OO
< oc, so that by the Kolmogorov two series theorem Y = ~i=1 Y// converges with probability one under both norms and one gets
f-
dQ~ _ ey
dPoo
as the desired density. A difficulty here is the ability to calculate 2~, (p,, for a given covariance kernel r. A simple way of obtaining the observable sequences is crucial to implement this method. Some useful techniques for the purpose are given in Grenander (1950). The second method, as already observed, is to find a suitable partion of the index set [a, b] such that the refinements become dense in it. Some times, in particular cases, partions such as those based on dyadic rationals will give linear ordering and a.e. convergence can then be obtained. The following illustration from Pitcher (1959) explains this point as well as the method. 3. EXAMPLE. Let {Xt, t E [a, b]} be a Gaussian process on (~2, X, ~), in the canonical form, as above, with a common covariance function r, and means 0 and f respectively for P and Q. It is again desired to find the likelihood ratio for a general continuous r (not necessarily of the form (54)). Consider the random variables (a V ( - n ) ) _< -~ < b A n, k E N, and let ~-~ be the ~-algebra generated by these functions and Y ~ -- o-(U~ ~,,). It is clear that ~ T and Xt is ~ - a d a p t e d . Suppose that is a linearly independent (finite) set of X = (X~_~,kE N). Thus they have a (nondegenerate) kN-variate Gaussian distributic~'n with means 0 and a common covariance matrix R N with inverse A N = (AN), and determinant IRNI. Then for any ~ , - m e a s u r a b l e bounded function 9 : N" --+ R, one has n
--
(X<,...,XkN)
orf(ki),
I/2 / l ll x exp
{
1 ZANxixj}dx "
21RNI ij
Ai~XkJ(ky),
Ai~f(ki)f(kj),
(56)
Let (p,(X) = [~NI£ v and C, = ]~NI£ g so that by the positive definiteness of R N, C~ > 0. By using a linear change of variables one gets:
Martingales and some applications
797
Ep(g(X) exp{q~n(X) - ½Cn})
_-£...f ×exp{
'
1 ,g(x) 2,1N,~A~(xi- f(ki))(xj- f(kj))}dx
£/
(27c)@]RNI½ . . .
g(xl + f ( k l ) , . . . ,xn +f(kn))
xexp{2IRIijlNZANijXiXy} = Ee(g(X + f ( k ) ) ) =
dx Eo(g(X)) .
(57)
Since g(X) is any @n-measurable bounded function, it follows that dQ, gn(X) - dPn - exp{pn(X) - Cn} , and hence {gn(X), J n , n _> 1} is a positive martingale so that on(X) --' 0(X) a.e. [P]. Now let co be a point in a set of positive P-measure. Since the mean is zero for P, its finite dimensional distributions are invariant under the measure preserving mapping x ---+ -x, and hence exp[qG(-X(o0)) - ½Cn] = exp[-~o(X(o))) - ½Cn] ~ a finite limit = a(X) (say). Multiplying this and the result without the transformation gives a simplification e -c,, --+ ~(X)g(X), from which one finds that Cn ~ C > 0, (a constant). But then q)n(X) -+ (p(X) a.e., and hence dPo~ - e x p
(p(X)-~C
>0 ,
(58)
so that Qoo << Poe and the result (58) is the likelihood ratio. A further analysis shows that the same conclusion holds i f f is replaced by af, a E R and if Qa is the corresponding measure, then Qaeo << Poo, and moreover,
dQaoe-exp{a~p(X) - ~ C } d-Poe
so that q0 is a linear function. Its explicit form can also be given. Thus the martingale convergence theorems come into play at a deeper level in these applications in computing the likelihood ratios of processes. (See the monographs of Grenander (1981) and Rao (2000) for other aspects.) We shall now proceed to a different set of important and novel applications.
6. Exponential (semi)martingales As seen in Theorem 15 of the preceding section, the solution process is generally an absolutely continuous process. As such one may ask whether the Fundamental
M. M. Rao
798
T h e o r e m of (Stochastic) Calculus is valid for the integrals defined here. This is nontrivial since the integrator is typically of u n b o u n d e d variation. It will be seen that such a statement has an i m p o r t a n t role to play in the study of linear SDEs, especially as it concerns the financial applications to be discussed in the next section. Consider the particular case of the S D E given in T h e o r e m 4.15 in which k = 1, s0 = 0, 71 = 0,/~1 -- o- > 0 and ~ = #, a constant so that one has: dXt =/rift dt + aXt dBt .
(59)
An explicit solution can be written down immediately f r o m that theorem. However, let us express (59) in the integrated f o r m with the initial value X0 = 1 and # = 0, o- = 1. Thus it becomes:
/0'
Xt = 1 +
X~_ dB, .
(60)
If Yt = Xt - 1 then (60) can be expressed as: Yt =
/0
X~_ d B , .
(61)
I f {B, t _> 0} is a BM, then (Bt+h - Bt)Z/h is a chi-squared r a n d o m variable with one degree of freedom, so it follows that:
P[(Bt+h -- Bt) 2 <_ ah] =
f0°he -x2 &v = o(h),
h --+ 0 .
(62)
M o r e generally, suppose the distribution of the increments o f Bt satisfies the order of growth condition (62), so it holds if the process is stochastically continuous. Letting AhYt = Yt+h -- Yt and similarly for AhBt, one has:
AhYt AhB;-Xt-
1 ft+h - A~BtJt (Xs- - X t _ ) d B , = AhBtl~ (say) .
(63)
Assuming both the {Xt, Nt , t > 0} and {Bt, Nt, t _> 0} are square integrable processes with a standard filtration, and the Bt-process is also a martingale, one finds that {I~, -~t+h, t > 0} is a martingale. In fact, for 0 < s < t t+h
E
+ J +h E ~+h [E % (X._ - X~_)dBu]
(IL) -- E%+,,
t+h
=
+
)E%(dB.)] ds+h
= I~ + 0, a . e . , by the martingale p r o p e r t y of the Bt-process (so the increments are centered). Consequently, for c > 0, 3 > 0, one has
799
Martingales and some applications
sup O
>
_<
1
t 2
1 , ~ 1 [ t+h
(Xu_ - X ,
)2 d[B]u dP --~ O, as6-+0
.
(64)
However for any k > 0 and e > 0
Now choose k > 0 large enough that the last term is o@) by assumption (62), and then choose 5 small enough that the first term on the right is small by (64). Then ( A h Y t / A h B t ) ~ X t in probability. Therefore we have the following result essentially due to Isaacson (1969): 1. PROPOSITION. (Fundamental theorem of (stochastic) calculus) Let {Bt,(fft, t > O} be a square integrable stochastically continuous martingale verifying (62), and {Xt, f#t-, t > O} be a square integrable right continuous process. I f the Yt-process is given by (60), then (AhYt/AhBt) --+ Xt- in probability as h ---+O. Following the analogy of the solution of the ODE dy = y dx or y = y0ex, the solution process Yt of Yt= 1 +
Y~. dX,,
or
dYt=Yt_dXs ,
(65)
is termed an exponential of the Xt-process, denoted g(X)t. If the latter is a semi(or local)martingale, then g(X) inherits these properties. We can present its structure in the following explicit form, due essentially to Dol6ans-Dade (1970). One uses the known fact that every (semi)martingale can be decomposed as a sum of its continuous and discrete processes of the same type [cf., e.g., Rao (1995), p. 448]. 2. THEOREM. If { X t , ~ t , t > 0} is a semimartingale, and { Y t , ~ t , t >_ 0} is the unique solution of (65), with initial value Yo ¢ O, then one has: ~(X)t = Yt = Yo e x p { X t - X o - ~ [ X ° , X C l t } x H (1 + AXx)e -Ax" ,
(66)
0 0 : AXt = - 1 } then •(X)t # O(g(X)t_ # O) on the stochastic interval [[O,z) = {(t, oJ): 0 < t < z(o))}, ([[0, z]),"
M. M. Rao
800
(c) if X is a continuous local martingale, then so is g(X), and in fact, E(X)t =
with Xt(°) -=- 1, and for n > 1, z._~
n!
n=O
yff/= n!
,
dye,
= n
dY,2..,
dXs.
nq) dX~, (say) ,
so that {Xt(n), J~t, t > O} is a continuous local martingale," (d) (Yor's formula) if {Xi, Y t , t > 0}, i = 1,2 are a pair of semimartingales with L IX I , X21Jt as their covariation process (= 0 i f X 1,X 2 are independent), then ozo(Xl)t~(Y'2)t = oxe(X 1 @ X 2 -}- [X1,X2])t,
t _> 0 .
(67)
Finally, if X = X 1 + iX 2 is a complex semimartingale, (so xJ, j = 1,2 are real semimartingales) and Yt = Yt1 + iYt2 satisfies (65) which means one has the following pair of equations, Yt1 = 1 + yt2 =
/o'
ysl d..)(1_
/;
y? ~ 2
/o' Y?_ dXs 1 @ /oa~sI_ d.~2
,
then also all the above statements and properties ( a ) - ( d ) hold. For instance (66) becornes
g(X)t = Yo{exp(Xt - Xo - ½"" IX lc , X• tc~J t - i - ~1-' [ X 2~, X• 2c.] t - l [ X. • lc , X-2c]t)} x E ((1 + AX,)e -Axs) , O
(68)
the (infinite) products in both (66) and (68) converge absolutely a.e. The details, for instance, may be found in a compressed form in Rao [(1995), VI.6.2, pp. 530-531]. See also Mel'nikov (1996), and Jacod-Shiryaev (1987, p. 59). However formula (67) is not given in these places, and since it plays an important part in the applications of the next section, we shall discuss it here in a somewhat more general form. It depends on the extended It6 formula (cf., T h e o r e m 4.14, especially the two variable version (39) in the differential form). Let us restate it in a way that is used below. If f : ~22 --+ R is twice continuously differentiable, and X ~,X 2 are semimartingales on the same standard filtration, then dr(z121 ,Xa)t = ~
2 Of
i= 1
1 2
~
t
02f
(X]_)dX/q- ~ i ~
1
2
.lc
-2c
(X , X )t_d[X , X ],
• .= 2
+ ~ [Af(X1X2) t- _ ~ ~Of ( x 1 x Z ) , _ ( A X / ) ] O
(69)
Martingales and some applications
801
Taking f ( u , v) = uv in (69), one has the integration by parts formula: d(X1X2)t = ~t 1_ d X 2 q-X?_ dXt 1 q- d[X l,X2]t ,
(70)
or in integrated form x t l x ? - x ~ x 1~
2 =
+
dO
./o
x ~ - d X ; + LIX 1 , X 2 ]Js ,
(71)
giving an extra term at the end in contrast to the classical Lebesgue case. With (69) or (70), we assert that the following more general form of (67) holds. Denote by o~H(X)t, the unique solution of the integral equation: Yt = Ht +
Y~_ dXs ,
(72)
where {Ht, ~ t , t > 0} and {Xt, J t , t > 0} are semimartingales with the same filtration. If Ht = 1 a.e., then the earlier case results, and by Theorem 4.15, suitably interpreted, (72) has a unique solution Yt = gh'(X)t on the stochastic interval [0,~) where z = i n f { t : A X t = - l } , and in fact, the explicit solution is gl (X)t = oQX)t of the previous case, and if Ht = 0 a.e., then Yt = 0 is the only solution. Indeed if AXt ¢ - 1 , t >_ 0, one has the solution as:
{ f0
Yt = ~H(X)t = g(X)t Ho +
g(X)~_ 1_dZ °
(73)
where z ° = ~ - [Hc,2~], -
~
(1 + A X , ) - ~ A X ,
O
So if Ht = I1o, t>> O, then Zt° = Y0, and then Yt = ¥oG(X)t, as desired. Here g(X)/1 = g ( - X * ) t where X[ = X t - [xc,2cl t - ~ (1 + AX,)-~(AX~.)2 , O
(74)
which is obtained by an application of ItS's formula for f ( x ) = X-1, X "> O. It may be noted that the covariation process of the semimartingales {Xt ~, ~ t , t >_ 0}, i = 1,2 may be computed using their continuous parts as: iX 1 X2 ] = - .lc
L
,
Jt
[x
, x .2clt+
E
Ag~X~,
O
and the jumps need not be predictable. As a consequence, the covariation process {IX a,X2]t, g t , t > 0} need not be predictable, and this is why one has to pay more attention in special computations for a sharper analysis, although in the general aspects of integration, the LZ,2-boundedness implies that the integrals are welldefined.
M. M. Rao
802
With this discussion, formula (67) is easily established as follows. Let Ut = g(X)t, Vt = oQY)t be the unique solutions of(as usual we s e t & _ = 0 = Y0-)
Ut = Ht +
/0'
Us- dXs,
Vt = Kt +
J0'
V~_ d ~ ,
(75)
where Ht- and Kt-processes are semimartingales with the same filtration as are the Xt- and Yt-processes. Let {Lt, Wt, t _> 0} be the semimartingale defined by
Lt =
/0'
Us- dKs +
/0'
V~_ dHs + [14,Kit ,
t> 0 .
(76)
Then one asserts that
gn(X)tEK(Y)t = oQ(X + Y + IX, Y])t,
t >_ 0 .
(77)
I f / / t = 1 = Kt, t _> 0 so that Lt = 1 also, then (67) follows from (77). To verify (77), let U, V be as given, and one has with (70)
d(UV)t = Ut_ dVt + Vt_ dUt + d[U, V]t .
(78)
But Ut, Vt satisfy (75), so that with their differentiated forms, and the bilinearity of [-, .], we get
[U, VJt = [11,Kit +
U~_ dXs,
V~_ dY~ t
+dHt
/o
Us_ dY~ + dN
/o
V~_dX~.
Hence d[U, V]t --- d[H,K]t + Ut_ Vt_d[X, Y]t + 0 + 0 ,
(79)
and substituting (75), (79) in (78) d(UV) t = Ut_ dKt + Vt dHt + d[H,K]~ + Ut_ Vt_ d(X + Y + IX, Y])t = tilt + Ut_ Vt_ d(X + Y + IX, Y])t •
(80)
Since UV = o~;4(X)gx(Y), by definition, integrating (80) gives (77) as desired. In fact (67) also implies the expression for the reciprocal of ~ (X) indicated in (74), since g(0) = 1. For instance, taking X to be continuous for simplicity, so that X[ = Xt - [X,X]t, one finds e(x),e(-x*),
= ~ ( x - x + [ x , x ] + Ix, [x,x]] - [x,x])~
= C(0)t = 1 ,
(81)
since IX, A] = 0 for A of (locally) bounded variation. This formula holds even if X, Y are only right continuous; and then d~(X)[ 1 = d~(-X*)t where
Martingales and some applications
Xt* = X t -
[Y°,X°] t - ~
803
(1 + AX~)-z(AX~) 2 .
O
These formulas find interesting application in studies of models of stocks and bonds, as we now turn to them in the next section. Some other properties of stochastic exponentials are discussed in Mel'nikov 0996).
7. Applications to financial market models Suppose an investor purchases 'a' shares at time t for a price S, and sells at time t + h for St+h and realizes a capital gain of a(St+h - St). If in a period of [0, T 1 this is repeated at times 0 = to < tl < • • • < tn = T with at, shares in (ti, tz+l], then the realized capital gain is v'n-1 z_~i=0 a ti~ISt~+l - St,), which in a continuous market operation can be approximated by f~ at dSt. Since the evolution of stock prices {St, t > 0} (a risky asset which depends on chance) is a random process, the gain above is a stochastic integral. Also an investor typically owns some bank account or bonds (riskless asset) which initially is of value B0 and increases at an interest rate r > 0 so that at time t the value becomes Bt =- Boe rt or dBt -= rBt dt, I f the interest is variable and a set bt of bonds is owned, then the realized capital (continuous compounding) thus becomes f f bt dBt at time t as a Stieltjes integral. The applications here are mostly based on Shiryaev et al. (1994), and Mal'nikov (1996). See also the b o o k by Musiela and Rutkowski (1997). If the investor has b0 bonds at value B0 and a0 stocks at price So, then the initial capital is X0(= boBo + aoSo), and if at time t the investor holds bt bonds and at stocks, the pair rc = (at, bt) is called the trading portfolio (or strategy), and Xt = X [ = atSt + btBt is the current wealth. The security ~zis called self-financing if
X7=aoSo+boBo+
/0
a, d S , +
/0
budB,,
O
.
(82)
Generally the wealth X [ > 0, although at, bt can individually take negative values (at < 0 corresponds to selling of stock at time t but not delivering it until time T, and bt < 0 denotes borrowing at riskless interest rate r). Note that a self-financing strategy does not allow borrowing from the bonds and stocks at the same time. However {au, u _> 0}, {bu, u > 0} are assumed below to be (locally) of bounded variation, for practical reasons. The strategy 7c is said to admit an arbitrage opportunity at time T if X~ = 0, X~ >_ 0 and P[X[ > 01 > 0, so that ~z gives a possibility of riskless (arbitrarily large) profit. Some more terminology: Let ~z = {~zz,0 < t < T} be a self-financing strategy. Then for an initial investment capital x > 0, i.e., X0~ = x > 0, and a function f r ( = f(St, 0 < t < T)) >_ 0, the strategy is an (x,fr) - hedge ifX~- > f r , and ~ is a minimal hedge if there is equality here. Let B = {Bn, 1 < n < N}, S = {Sn, 1 < n < N}, and consider the Bond-Stock (or B,S)-market. Suppose that the participant can issue a security to the buyer the option to take back the stocks at time N at a fixed price K. Such a security is
804
M. M. Rao
termed the European call option. This means i f S N > K, the options owner can buy back the stocks at price K and sell at SN getting a profit of SN -- K and the hedge function fN = (Sx - K) +. If on the other hand, the expiration time is random in { 1 , 2 , . . . , N}, the corresponding security is termed an A m e r i c a n call option. These two options roughly correspond to fixed and sequential decision making (and the place names have little to do with geographic locations). There are also other options, but we do not consider any but the European case for our treatment since the general subject and applications are already made clear. Thus the problem is to find a (smooth) function f : [0, T] × R + -+ ~ such that the corresponding capital X [ satisfies the boundary condition X [ = f ( T t, St); only the continuous market will be considered here. On the other hand, )~f (= X~) being the self-financing security, must also satisfy the SDE Xt - X0 =
f;
a, dS~ +
i0'
b~ dBu ,
(83)
as well as Xt = at& + btBt, or bt = (Xt - a t & ) / B t . I f f is continuously differentiable, once in the first and twice in the second variable, with partial derivatives denoted f ~ , f x , f c x for f ( s , x ) , one can use It6's formula and get another SDE for
x , ( : x[) as: Xt - X o =
]o'
fx(T - u, S~)dS~ -
+ ~
fo'
f , ( T - u,S~)du
~2SZfxx(T - u , & ) d u
(84)
.
From (83) and (84), which agree for all t, so that the stochastic and the nonstochastic integrals must be the same, one has
/0'
a. dS. =
/0
f x ( T - u, Su)dS.
,
and since B~ = BoeTM or dBu = Bore TM du, the second parts become
io'
io'
b.Bore TM du = ~ do S2fxx(T - u, S~)du -
f~,(T - u, S~)du ,
or in differentiated form this gives
a, = ~~X( r -
t,s,),
(85)
and
0-2
2
b , n o r e r' = T s ~ L x ( r
- t,s,) -f,(r
- t,s,)
.
Substituting b,Bo = e - n ( X t - at&) (see the bt value given after (83)) and using (85) one gets the PDE:
Martingales and some applications
a f a 2 a 2 f ( ~ f ) a s - -2 xa-6~x2 + r X ~ x - f
'
805
(86)
whenever (t,x) E [0, T] x (0, ~ ) with the boundary condition f(0,x) = ( x - K) + following from Xr = ( S T - K) +. This is a (not easy) parabolic PDE with the boundary condition given above. The solution of this PDE gives f and hence X[ = f ( T - t, St) as the desired capital, illustrating also a deep connection between the It6 formula and the PDEs. There is a probabilistic method of solving this equation which is based on the change of variables technique in an SDE, bringing in the exponential martingale analysis of the preceding section, which will now be sketched. That method also shows the essential role of the linear SDEs in these interesting financial applications. Let us specialize the expressions in Theorem 4.15 in which we take k = 1,~ = ~0 = fll = 0 and 7l~ = ~s,X0 = 1 so that the equation becomes Xt = 1 +
/0
X~7,dfi~ ,
(87)
where {fl~.,~ s , s > 0} is the BM. Taking 7s = 1, the unique solution of (87) is an exponential martingale given by
Xt = ~(fi)t = e~'-l[~'~]t = e~'-~ , and replacing fit by aflt ~a E [~, one gets for the corresponding (unique) solution 2
X, = g ( a fi ) t = ea/~'-~[/~'~]~ = e fadfi'-lf'oazds
(88)
If a = a~ defines a continuous function on N+, then (88) gives a well-defined process g(afi)t =);-t = ef~ asd~s-lfto a2ds ,
(89)
and if this is differentiated using It6's formula (cf., Theorem 4.14) one gets
Xt = 1 +
/0
2sas dfi, ,
(90)
and this equation has a unique solution which therefore is given by the exponential (89). Here the fact that the process {fi~,J~,s >_ 0} is BM is crucial. However the new {-gt, ~-t, t > 0} and the original {Xt, ~ t , t >_ 0} processes have a close relationship which is clarified by the following result on change of equivalent measures P and/5 on (~2, Z), due to Girsanov (1960), and it is also useful in other applications.
M. M. Rao
806
1. THEOREM. Suppose that E(Xz) = 1 for the process given by (90) on (f2, Z,P) where T is the maturity time of option. Then the new process fit = f i t foasdfi~,t >_ O, defines {fit,@t,t > 0} as a B M on (f2, X,J~), where dPt = X t d P is an equivalent probability measure on ( O , ~ t ) and there is a unique fi on ~oo = a(Ut>o~t) such that Pt = f i l ~ t (which exists by Theorem 4.4 since the Xtprocess is evidently a continuous [right continuity is enough] class (DL)member). This result and some further extensions of Girsanov's work are detailed in Liptser and Shiryeav (1977, Vol. I, p. 323), and will not be discussed further. A specialization of the assertion to financial mathematics gives interesting applications, and also an alternative argument to the solution of (86) noted above. Recall that our basic model governing the stock market is
dSt=#Stdt+astdfit
,
t>O ,
(91)
where/~ c ~, and a > 0 the so-called volatility parameter, the chance fluctuations being the BM {fit, t > 0}. But now the latter can be changed with a Girsanov transformation into a new noise process /Tt by the above theorem wherein one takes a = s(it - r)/cr so that/Tt = fit + ((It - r)/a)t. Then (91) becomes
dSt = ItSt dt + ~St [dfit - I t - r d t ] = rStdt + aStdfit ,
t >_ O ,
(92)
and the new probability b on ~ t = a(fit, t _> 0) is determined (because of (88)) as: d/5 = exp ( ~ @ f fit - 1 ( ~ @ f ) 2 t ) d P
.
(93)
This change allows the elimination of the flee parameter It and brings in the interest rate r _> 0. On this new space (f2, Z,/5), the desired function f satisfying (86) can be obtained as follows. Note that on this new space {Dr, t z 0} is a BM, and this property is crucial in the calculations. Consider a new function f defined by f ( t , x ) = e-~tf(t,x). This rather unmotivated function may be thought of as a "discounted security" o f f for St, but will work for the problem at hand, It is 'suggested' by the classical Feynman-Kac method which expresses the solution of a PDE such as (86) as the expected value of a suitable functional of the BM and f above turns out to be one such. We apply It6's formula of two variables (Theorem 4.14 is the one dimensional version) as: t8 t~ Z(Xt, Y t ) - f(Xo, Yo)= L" ~x (Xs, Y~)dX~+.~ ~fy (X~,Y~)dG
1 i¢' e2d J0 ~
d[X, Y], .
(94)
Substituting f ( T - t, St) = e-~tf(T - t, St) in the above and simplifying it after using (86), one finds on taking Xt as the BM in (94):
Martingales and some applications
e r'f(T -
t, St) - f ( T ,
So) = a
e -~
(T
- u, S u ) S , d/~u .
807
(95)
[The integrand in (95) is just e-rUauSu.] Since ~ f / S x is assumed continuous and hence bounded on I0, t], the right side of (95) defines a martingale on (~2,Z,/5). Setting t = T and taking expectations one finds (since So = x is a constant and the right side is zero): f ( T , x ) = f ( T , So) = E~[e-~rf(0, Sr)] = E~[e-~r(Sr - K) +] ,
(96)
by the boundary condition f(O,x) = (x - K ) + which thus gives f by simplifying the right side of (96) using the fundamental law of probability together with the fact that/5 is determined by the BM {/~t, t _> 0}. After a nontrivial but standard manipulation of the Gaussian integral one finds [see Shiryaev et al. (1994), II, Section 4]: f ( T, x) = x~(g( T, x) ) - Ke-~r eb(h( T, x) ) ,
(97)
where ~b is the standard normal distribution whose density thus is given by • '(x) = (27~)__1 ; and 9, h are found to be 2e-T,
~(r,x)= log2+
r+ T r/~v~;
h(r,x)=g(r,~)-~v~.
One can verify that the f of (97) indeed satisfies (86). Thus 7(V, S0) = E~[e-~r (ST -- K) +] is the rational amount to be invested with a self-financing strategy (a, b) where at = ( ~ f / S x ) ( T - t, St) and bt = f O r - t, St)/B.O < t < r , the {Bt, t > 0} being the bond asset. The above detailed account is included to motivate a general study of the market in which both stocks and bonds are allowed random fluctuations. Thus one begins with the (generalized) bonds and stocks as: Xt = )2o +
/0
Xs_ dMs,
Yt = Y0 +
/0
Y~_d N s ,
(98)
where X0, I10 > 0, and M , N are semimartingales so that they can be uniquely expressed as: Mt
=
Mo + At + 2hit, Nt = No + Bt
4-
ATt, t > 0 .
(99)
Here M0, No are finite random variables, {At, Bt, t >_ 0} are processes of locally finite variation, and {Mt, Nt, ~ t , t >_ 0} are locally square integrable martingales. Let ~ = (c~t,Wt, t >_ 0),~ = ( y t , ~ t , t _> 0) be adapted predictable processes of locally bounded variation representing the (generalized) bond and stock securities so that ~ = (~, 7) is the investor portfolio. The wealth of the investor is thus given by
808
M. M. Rao
zT=~,~+hh,
t_>0,
(100)
and it is self-financing if
/0'
Z~ = z ~ +
c~, d,X, +
J0'
y~.dY, ,
(10l)
where the stochastic integrals are assumed to exist. The strategy admits an arbitrage opportunity if Z~=0,
Z~ > O, and P[Z[ > O] > O,
t >>_O .
The rational (or fair) market problem is to find conditions so that there is no arbitrage. Here T is the time of maturity of the European security under consideration. A solution is obtained if one can find a probability measure /5 on (~2, X, Yt, t k O) such that ~st ~ Pt, (Pt = P I J ' t ) and {(Z~/Xt), J t , t _> O} is a martingale on (f2, 27, Yt, t _> 0,/5) since then one gets
\ x , / _- ~0
(102)
'
(expectation of a martingale being a constant) so that /ST[Z~ > 0] > 0 cannot occur if Z~ = 0. Note that /5 on ~(Ut>0J~t) exists iff the martingale { d ~ / d P , Yt, t > 0} is uniformly integrable on (~, 27,P) (analog of Corollary 3.2, and the details may be found in many books, e.g., Rao (1995), p. 279). Here it is sufficient to consider a compact interval [0, T]. Solutions of the problem for a large class of portfolios will now be discussed. The self-financing condition takes the form dZ~ = 0 so that (90) gives Xt d~t + Yt dyt = aZT/= 0 . By ItS's formula d(Z~
\x, j
=d
0q+Tt ~
= d~t +~TdTt + 7t_d 1
= Z (Xt dcq + Yed~k) + 7t
d(~
\XJ
, d/" Yt'~ = 3t- ~XT)' by (102) . This in the integrated form becomes e2
_
z~
N
_
za+
&
I'
7,-
d(5"~
\xj
•
(103)
809
Martingales and some applications
Thus the martingale property of {R~, ~ t , t > 0} relative to some/5 reduces to studying the same property of {Rt =N,~t,Y~ ~ t _> 0}, independent of the security re. The problem then is to show that the latter process is a (local) martingale for a large class of portfolios ~z relative to some such equivalent measure/5, which is often called a martingale measure. In the case of the BM such a measure was found in (93), and we generalize that procedure here. Let {Wtt,~ t , 0 < t < T} be a P-local martingale such that it satisfies (i) P[infte[0,r] Wt > 0] = 1, and Ep(WT) 1 where the filtration { ~ t , t > 13} is as usual standard. Define a measure/5 by the equation d/st = ~ dP. Then Pr N Pr, and (~,S,/5) is an equivalent probability space to the original one, and let dVt = ~2-_dWt or dW~ = Wt_dVt, or equivalently Wt -: W0 + f~ W~_ dV~. So the Vtprocess is a P-(local) semimartingale iff the Wt-process is, by the Girsanov theorem. This result may be used to state that our Rt-process is a/5-local martingle iff the (WR)t-process is a P-local martingale. But by the exponential martingale theory Wt Woe(V)t,(cf. (73) of Section 6). Substituting similar values ofXt, Yt in the definition of Rt one gets when AXe.¢ 0,s C [0, T]: =
--
Rt
--
=
f
=Roe
M, _
= R o ~ t ( M , N ) , (say) ,
(105)
as a consequence of Theorem 6.2 (and some algebraic simplification). But the Ot-process is a P and/5 semimartingale, as/5 ~ P. One finds Rt to be a solution of t
Rt = Ro +
L
Rs dO~(M,N) •
(106)
But it was already noted that the Rt-process is a (local) martingale iff VtRt is a P-(local) martingale, and Rt is given by (105). Thus the Rt-process has the desired property iff Rt W0e (V)t-process is a P-(local) martingale. But using (105) for Rt one gets R t e ( v ) t : RoOt(M, N ) g ( V ) t = Roe(l~(M, N, V))t, (say) .
Here the new exponential ~ is obtained with Theorem 6.2 exactly as ¢ above. With this construction of /5 based on a positive Wt-process we have {Rt, ~ t , t >_ 0} to be a/5-(local) martingle, so that E~(R}) = R E = Z~/Xo, a constant. By the market model equations (98), the desired solution is Xr = X0g(A~r)r. But Z} >_f r and the fair (or rational) price for the investor is the minimal value, i.e., Z} = f t . Hence the equation becomes
810
M . M . Rao
\Xr) =
-1
= E p ( g ( - M * ) r f r ) , by (81) of Section 6 .
(107)
This may be summerized in the following: 2. TheOREM. For the general market model (98), and the measure/5 defined by the auxiliary process Wt, -15~ P, and if AX~ ¢ - 1, then the @-process is locally a Pmartingale implies that the Rt = Yt/Xt-process is a [~-martingale locally and a rational price solution of the (European type) model exists. The preceding argument contains the following simple, but interesting in itself, representation which will be stated for reference. 3. PROPOSITION. Let { X , Y t , t >_ 0} be a positive right continuous semimartingale on (0, Z,P) such that P[inftXt > O] = 1. Then it admits an (stochastic) integral representation: X~ =X0 +
Xs_ dN~
relative to a semimartingale {Art, ~ t , t >_ 0} with a right continuous version, or equivalently, an exponential representation, Xt = Xo~(N) c In fact the N-process t 1 can be taken as d N = (Xt-) -1 dXt or N = fo~_ dX~. This statement is in a sense converse to Proposition 6.1, and may be thought of as a simple analog of a vector Radon-Nikodl)m theorem. To amplify the above lengthy discussion, we now present a few examples, essentially adapted from Mel'nikov (1996).
4. Example. (Extended Black-Scholes model) The financial markets of equations (98) are now given by dBt=r(t)Btdt,
Bo>O, r>_O ,
and
dSt=-St(#(t) dt+a(t)dflt),
So>0, ~>0
,
where r , # , a are deterministic Borel functions satisfying J o ¢ 2 ( s ) d s < o o , for(s)ds < ~ , and Jo#(s)ds < oe. Take M , = Jor(s)ds, N s -- f~#(s)ds + Joa(s)ds, Xt = Bt, Yt = St, and we shall find a process Vt = fo~(s)dfis where fo~2(s)ds < c~ with which a /5 can be obtained. With such an ~(.), set Ot(M,N, V) = Jo(g(s) - r(s) + c~(s))ds + fo(cr(s) + c@))dfls , subject to ~(s) ~(s) ~(s)r(s)• Choosing/5 from the relation =
Martingales and some applications
811
{ ~0ts~(~)-~(~) l~ot(/A(S)_ =_F(S)~2 } ~-~5 d/~s _ ~ \ o(~) ) ds dP, ,
d/St : exp -
the integrand is seen to be uniformly integrable relative to P so that/st on J t defined extends to be a measure on Z and {¢t, ~ t, t > 0} is a local/5-martingale, and hence {Rt = ~ , ~,~t, t _> 0} is a local/5-martingale by Theorem 2, and a rational pricing strategy exists. This extends the original model since now e, #, r, a are time dependent satisfying the (local) integrability conditions.
5. Example. (Cox-Ross-Rubinstein model) This time the process is discrete. Again the market consists of a bond and a stock (B, S) satisfying the equations: AB,, = rB,, 1;
ASh =
p,S,-1;
Bo > O, So > 0 .
Here r = a fixed interest rate for B, and p,, n = l, 2 , . . . , is a sequence of i.i.d. Bernoulli random variables taking values a,b, where - 1 < a < r < b , with probabilities p and q. Thus p~ = ~ + b-a2~ where the i.i.d, random variables en satisfy P [ e n = l l = p , P [ e = - l ] = q , p + q = l . In the model to conform to Theorem 2, let Xt=B~, Y t = S ~ , W t = ~ n = a ( e l , . . . , e , , ) , n < t < n + l , n > 1, thereby embedding the discrete into a continuous process. Thus in our earlier notation
AMn = r,
ANn--
a+b b-a 2 f- - ~ - Sn,
A Vn = c~g,, ,
where V0 = 0, ~n = en - Do - q) and c~ is a constant to be chosen later. We then have
A~s~(M,N, V) = (1 ÷ r)-l[ANn - AMn -4-AVe(1 + AN,)] . Substituting various values, one gets
ANn
a+b --
2
b-a +--~-~-q+~n]'
ANnAV~= [-a+b 2
b --~a
Do - q)
l ~ f+ ~<
[1 - Do - q)2],
b-a A,/,,, = (1 + r ) -1 { [La ~+-b- + - b- 5- a- Do- q) - r + ~<~--(1 - Do _q)2) 1 [b-a +-~--+c~
(
a+b 1-4
2
b-a 2
)] Do-q)
} e~ "
Now the free parameter ~ is chosen so that ~9n is a P-martingale. The desired value is found to be (a + b) + (b - a)Do - q) - 2,"
(b - a)(DO - q)~ - 1)
812
M. M. Rao
Then {Rt = g , ~~t , , s ', >_ 0} will be a /3-martingale if/3 is defined as: /3[gk = 1] =
2p(r - a)
(b - a)(1 + Co - q)) 2q(b - r) /3[ek = - 1 ] = ( b - a)(1 - ( p - q)) ' Thus the n-dimensional likelihood ratio is given by dP~ - I-I(1 + ~(ek - (p - q))) • k=l
I f p = q then/5[ek = 1] = (r - a ) / ( b - a),/3[ek = - 1 ] = (b - r ) / ( b - a). It is worthy of note that the fair price model (absence of arbitrage) depends only on the behavior of the martingale c o m p o n e n t of the semimartingale in the problem. Modifications to allow arbitrage (and so submartingale concepts enter) have been discussed in Musiela and Rutkowski (1997) together with several other models. Both the above examples can be combined to formulate as a multidimensional (here two) equations, and such generalizations have been discussed in the literature.
8. Remarks on multiparameter and other extensions
A generalization of the preceding martingale analysis for a multidimensional or a partially ordered index set, is not automatic, and in fact most of the results fail without additional restrictions. However, one can treat specialized index sets such as I = ~2+ with coordinate (or lexicographic) ordering. Here even the martingale concept has several avatars, and the following point of view gives sharp results and it is due to Cairoli and Walsh (1975). Thus for s = (SI, $2) , t = (tl, t2) E R2+, define the ordering s -~ t iff si < t i , i = 1,2 and a complementary order s Z t iff sl < t l but s2 _>t2. Let { ~ t , t ~ ~2+} be a family of o--subalgebras from (f2, Z , P ) satisfying the filtration conditions; (F1) : @s c ~ t for s -4 t, (F2) : ~ t is complete for P, t E D22, and (F3) : ~ s = Ut~-, ~ t , as in the single parameter case. Then an integrable adapted process {Xt, ~ t , t c ~2+} is a p l a n a r martingale ifs -~ t ~ E ~;s (Xt) --- X~ a.e. [P]. Let ~ 1 s = 0.(Us~> 0 ~ s l , s 2 ) and similarly 5", oZ'2 be defined. If Is, t] denotes the rectangle with diagonal from s = (sj,s2) to t - - (tl, t2) in N 2, then the increment of the process X ( s , t] = Xtm - X,1~.2 - X~lt2 + X ~ 2 can be used to define the following additional martingale concepts: (a) Xt is a w e a k martingale i f E s,' (X(s, t]) = 0 a.e., . . . . afXt as ~a-~'i (b) an >martmgale c a d a p t e d a n d E Y ~(X(s, t]) = 0, i = 1,2 a.e., and (c) a strong martingale if E ~ ( g ; u J ; ) ( X ( s , t ] ) = 0, a.e. There is no good relationship between these concepts if the filtration is not further restricted, and not much detailed analysis is possible. In order to go forward, one imposes a fourth tech~ 1 and ~*t oz-2 are conditionally innical (not very intuitive) condition: (F4): ~*t dependent given ~ t for each t E N2+. Under these four conditions on the filtration,
Martingales and some applications
813
it can be verified that {Xt, ~ t , t E R 2} is a planar martingale iff it is an i-martingale for i = 1,2 simultaneously. Using these conditions for this filtration and assuming that {Xt, t E R 2} is a Wiener-BM, i.e., a Gaussian random field with means zero and covariance E(XsXt) = ~2sl/~ tl. s2 A t2 where e2 > 0 is a constant (to be taken as ~-- 1 for convenience), a great deal of the theory of single parameter then extends, nontrivially. [It should be noted that there is another extension of the standard BM to multidimensions due to P. L6vy which is called the L6vy-BM. It is also a Gaussian field starting at the origin, mean zero, but the covariance E(XsXt) = ½[llsll + Iltll - IIs - till where Ils]t denotes the Euclidean norm of An. This also has interesting properties distinct from the Wiener-BM, with different applications.] For the (Wiener-)BM, the corresponding double and line stochastic integrals and related results have been extensively developed by Cairoli and Walsh (1975). These results naturally lead to certain analogs in harmonic function theory including the Stokes and Green theorems as well as stochastic partial differential equations. The area is in an intensive developing process of advanced analysis, and much can and should be done. The corresponding L%2-boundedness and a generalization of the BM-integration is a followup problem for study. Both L6vy- and Wiener-BMs satisfy an L 2,2boundedness condition locally. [A brief account of this analysis is in Rao (1995), Sections VI. 3 and VII. 3.] The Cairoli-Walsh theory has been extended to semimartingales in the plane by finding a suitable definition, since there are several such concepts, by Brennan (1979). He presented the initial spade work and a further detailed analysis of these fields has been recently obtained by Green (1997). [See also Dozzi (1989) for related subjects.] All these results and applications are of considerable interest and thus are areas for new research. Another line of enquiry is to stay with one dimensional time parameter, but let the process be vector valued. This area of research is being pursued by M6tivier (1982) and his associates when the range (or state) space of the process is infinite dimensional. Indeed even finite (> 1) dimensional work is appropriate in relation to higher order stochastic differential equations, both linear and nonlinear. To indicate the flavor, consider an nth order linear equation of the form:
Ln(D)f =
(a n d n-1 ) a o - ~ ÷ a l dt-~_l + . . . 4 - a n f ~-g .
(108)
This when f = Xt and 9 is the white noise, symbolically written as 9 = dfiffdt, has to be interpreted as:
qois) ~
ds =
/0
q0(s)d/~, ,
(109)
for all square integrable functions 9) on [0, t1, and (108) is taken in the integrated form
/0
qo(s)Ln(D)Xs ds =
/0'
q~(s)d]~s ,
(110)
814
M. M. Rao
a well-defined quantity where {fit, t _> 0} is the standard BM so that the right side defines a martingale. This may be written symbolically as a first order SDE as follows: Let Yk = d k X t / d t k, and Y = (Y1,..., Y~)' be the column vector of (symbolic) derivatives, Bt = (0, 0 , . . . , fit) ~, and A be the n × n-matrix given as:
$$A = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1
\end{pmatrix}$$

(taking $a_0 = 1$ for simplicity), so that

$$dY_t = A Y_t\, dt + dB_t , \qquad Y_0 = C , \tag{111}$$
which can be solved with a vector analog of Theorem 4.15. The solution $Y_t$, as a vector, takes values in $\mathbb{R}^n$, and it will be a (vector) semimartingale. One obtains the solution as

$$Y_t = M(t)\, C + M(t) \int_0^t M(s)^{-1}\, dB_s , \tag{112}$$

where $M(t)$ is the fundamental $n \times n$ matrix solution of the associated homogeneous equation $dY_t = A Y_t\, dt$.
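A short numerical sketch of this reduction may help. Assuming constant coefficients (so that $M(t) = e^{tA}$), the system (111) can be simulated directly by an Euler-Maruyama scheme. The Python code below, with illustrative names such as companion_matrix and simulate_system that are not from the text, builds $A$ for given coefficients $a_1, \ldots, a_n$ (with $a_0 = 1$) and produces one sample path of $Y_t$, whose first component is the scalar solution $X_t$ of (108).

```python
import numpy as np

def companion_matrix(a):
    """Companion matrix A for x^(n) + a_1 x^(n-1) + ... + a_n x = noise (a_0 = 1)."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)          # superdiagonal of ones
    A[-1, :] = -np.array(a[::-1])       # last row: -a_n, -a_{n-1}, ..., -a_1
    return A

def simulate_system(a, T=10.0, steps=10_000, y0=None, rng=None):
    """Euler-Maruyama for dY = A Y dt + dB, with noise only in the last component."""
    rng = np.random.default_rng(rng)
    n = len(a)
    A = companion_matrix(a)
    dt = T / steps
    Y = np.zeros((steps + 1, n))
    Y[0] = np.zeros(n) if y0 is None else y0
    for k in range(steps):
        dB = np.zeros(n)
        dB[-1] = rng.normal(0.0, np.sqrt(dt))   # dB_t = (0, ..., 0, d(beta_t))'
        Y[k + 1] = Y[k] + A @ Y[k] * dt + dB
    return Y                                     # Y[:, 0] is the scalar path X_t

# Example: a damped harmonic oscillator driven by white noise (n = 2),
# X'' + 0.5 X' + 4 X = d(beta)/dt, in the spirit of the Chandrasekhar-type models cited below.
path = simulate_system(a=[0.5, 4.0], rng=0)
print("terminal X_T ~", path[-1, 0])
```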
This problem has been analyzed by Dym (1966), and its sample function analysis has also been carried out. In this case, it turns out that the solution of (111) is a vector Markov process (and a martingale), but the scalar solution of (108) is neither. The analysis presents several novel features that are not present in the first order case. These problems are also of interest in physics [as originally discussed, for the motion of a simple harmonic oscillator, in Chandrasekhar (1943)], as well as in financial market models with the multidimensional Black-Scholes problem [cf., e.g., Musiela and Rutkowski (1997), p. 250], among others. However, there are several questions that remain to be answered, and the noncommutativity of matrices complicates the analysis. A detailed account of the subject as it stands at present is given in a recent paper by the author [cf. Rao (1997)].

Finally, it should be noted that, since the partial sum sequence of independent integrable random variables with zero means always forms a martingale, one can consider a generalization of the classical situation with martingale differences. These are not necessarily independent, but they inherit several properties of the independent case. Consequently, a great deal of the results on the central limit problem, the law of the iterated logarithm, and their use in obtaining asymptotic properties of estimators of parameters of the underlying models can be found in the literature; a small simulation illustrating the martingale central limit behavior is sketched below. An extensive treatment of related problems is given in the volume by Jacod and Shiryaev (1987).
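As a minimal sketch of the point just made, assuming nothing beyond the definition of a martingale difference sequence, the Python snippet below builds differences $\xi_n = c_{n-1} Z_n$ whose conditional scale $c_{n-1}$ depends on the sign of the previous difference, so the $\xi_n$ are dependent, yet $E(\xi_n \mid \mathscr{F}_{n-1}) = 0$. The normalized sums $S_N/\sqrt{N}$ are then approximately $N(0, \sigma^2)$ with $\sigma^2$ the long-run average conditional variance, as the martingale central limit theorem predicts. The function name mds_partial_sum is illustrative only.

```python
import numpy as np

def mds_partial_sum(N, rng):
    """One path of S_N = xi_1 + ... + xi_N for a martingale difference sequence:
    xi_n = c_{n-1} * Z_n, where c_{n-1} in {0.5, 1.5} depends on sign(xi_{n-1}).
    E(xi_n | past) = 0, but the xi_n are not independent (their scales are linked)."""
    Z = rng.standard_normal(N)
    S, xi_prev = 0.0, 1.0
    for n in range(N):
        c = 0.5 if xi_prev > 0 else 1.5     # conditional scale, a function of the past
        xi = c * Z[n]
        S += xi
        xi_prev = xi
    return S

rng = np.random.default_rng(0)
N, reps = 1000, 1000
vals = np.array([mds_partial_sum(N, rng) for _ in range(reps)]) / np.sqrt(N)
sigma = np.sqrt((0.5**2 + 1.5**2) / 2)      # long-run average conditional std, ~1.118
print("empirical std:", vals.std(), "predicted:", sigma)
print("P(|S_N/sqrt(N)| <= 1.96*sigma):", np.mean(np.abs(vals) <= 1.96 * sigma))  # ~0.95
```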
It is thus evident that martingale methods and results pervade large parts of analysis as well as concrete applications. Many books treating semimartingales are listed in the references, but there are many more works devoted to specializations, and the reader can easily find them from the books and papers already listed here. These can be regarded as a representative set of a vast collection of works on this ever-expanding subject.
References

Black, F. and M. Scholes (1973). The pricing of options and corporate liabilities. J. Political Economy 81, 637-657.
Blackwell, D. (1946). On an equation of Wald. Ann. Math. Stat. 17, 84-87.
Bochner, S. (1955). Harmonic Analysis and the Theory of Probability. Univ. of Calif. Press, Berkeley, CA.
Bochner, S. (1956). Stationarity, boundedness, almost periodicity of random valued functions. Proc. 3rd Berkeley Symp. Math. Stat. and Prob. 2, 7-27.
Brennan, M. D. (1979). Planar semimartingales. J. Multivar. Anal. 9, 465-486.
Burkholder, D. L. (1973). Distribution function inequalities for martingales. Ann. Prob. 1, 19-42.
Cairoli, R. and J. B. Walsh (1975). Stochastic integrals in the plane. Acta Math. 134, 111-183.
Chandrasekhar, S. (1943). Stochastic problems in physics and astronomy. Rev. Mod. Physics 15, 1-89.
Doléans-Dade, C. (1970). Quelques applications de la formule de changement de variables pour les semimartingales. Z. Wahrs. 16, 180-194.
Doléans-Dade, C. and P. A. Meyer (1970). Intégrales stochastiques par rapport aux martingales locales. Sém. de Prob. IV, Springer Lect. Notes Math. 124, 77-107.
Dellacherie, C. (1972). Capacités et Processus Stochastiques. Springer-Verlag, New York.
Dellacherie, C. and P. A. Meyer (1980). Probabilités et Potentiel, Partie B: Théorie des Martingales. Hermann, Paris.
Doob, J. L. (1953). Stochastic Processes. Wiley, New York.
Dozzi, M. (1989). Stochastic Processes with a Multidimensional Parameter. Longman Scientific and Wiley, New York.
Dym, H. (1966). Stationary measures for the flow of a linear differential equation driven by white noise. Trans. Am. Math. Soc. 123, 130-164.
Fefferman, C. L. (1971). Characterization of bounded mean oscillation. Bull. Am. Math. Soc. 77, 587-588.
Fisk, D. L. (1965). Quasi-martingales. Trans. Am. Math. Soc. 120, 369-387.
Garsia, A. M. (1973). Martingale Inequalities. Benjamin Inc., Reading, MA.
Green, M. L. (1997). Planar stochastic integration relative to quasimartingales. In Real and Stochastic Analysis. CRC Press, New York, pp. 65-157.
Grenander, U. (1950). Stochastic processes and statistical inference. Ark. Mat. 1, 195-277.
Grenander, U. (1981). Abstract Inference. Wiley, New York.
Isaacson, D. (1969). Stochastic integrals and derivatives. Ann. Math. Stat. 40, 1610-1616.
Itô, K. (1951). On a formula concerning stochastic differentials. Nagoya Math. J. 3, 55-65.
Jacod, J. and A. N. Shiryaev (1987). Limit Theorems for Stochastic Processes. Springer-Verlag, New York.
Kakihara, Y. (1997). Multidimensional Second Order Stochastic Processes. World Scientific Inc., Singapore.
Kunita, H. and S. Watanabe (1967). On square integrable martingales. Nagoya Math. J. 30, 209-245.
Liptser, R. S. and A. N. Shiryaev (1977). Statistics of Random Processes, Vols. I, II. Springer-Verlag, New York.
McKean, H. P. (1969). Stochastic Integrals. Academic Press Inc., New York.
Mel'nikov, A. V. (1996). Stochastic differential equations: singularity of coefficients, regression models, and stochastic approximation. Russian Math. Surveys 51, 819-909.
Métivier, M. (1982). Semimartingales. W. de Gruyter Inc., New York.
Merton, R. C. (1997). On the role of the Wiener process in finance theory and practice: the case of replicating portfolios. In The Legacy of Norbert Wiener: A Centennial Symposium. Am. Math. Soc. Pure Math. Series 60, 209-321.
Meyer, P. A. (1962/3). A decomposition theorem for supermartingales: existence; uniqueness. Illinois J. Math. 6, 193-205; 7, 1-17.
Musiela, M. and M. Rutkowski (1997). Martingale Methods in Financial Modelling. Springer-Verlag, New York.
Orey, S. (1967). F-processes. Proc. 5th Berkeley Symp. Math. Stat. and Prob. 2, 301-313.
Pitcher, T. S. (1959). Likelihood ratios for Gaussian processes. Ark. Mat. 4, 35-44.
Rao, M. M. (1981). Foundations of Stochastic Analysis. Academic Press Inc., New York.
Rao, M. M. (1982). Harmonizable processes: structure theory. L'Enseign. Math. 28, 295-356.
Rao, M. M. (1987). Measure Theory and Integration. Wiley-Interscience, New York.
Rao, M. M. (1993). An approach to stochastic integration. In Multivariate Analysis: Future Directions. North-Holland, Amsterdam, The Netherlands, pp. 347-374.
Rao, M. M. (1995). Stochastic Processes: General Theory. Kluwer Academic Publishers, Dordrecht, The Netherlands.
Rao, M. M. (1997). Higher order stochastic differential equations. In Real and Stochastic Analysis. CRC Press, New York, pp. 225-302.
Rao, M. M. (2000). Stochastic Processes: Inference Theory. Kluwer Academic Publishers, Dordrecht, The Netherlands.
Shiryaev, A. N., Yu. M. Kabanov, D. O. Kramkov and A. V. Mel'nikov (1994). Toward the theory of pricing of options of both European and American types, I. Discrete time; II. Continuous time. Theor. Prob. Appl. 39, 14-60; 61-102.
Wu, R. (1985). Stochastic Differential Equations. Pitman Advanced Publishing Program, Boston, MA.