JOURNAL OF FUNCTIONAL ANALYSIS 33, 1-35 (1979)

Random Functions of Poisson Type

DAVID SHALE*

University of Pennsylvania, Philadelphia, Pennsylvania 19174

Communicated by the Editors

Received September 7, 1976; revised January 26, 1978
Let (X, 𝔛, μ) be a σ-finite nonatomic measure space. We think of the customary analysis based upon (X, 𝔛, μ) as continuum analysis. By contrast, discrete analysis is based upon an arbitrary countable subset of X, rather than upon X itself, and all countable subsets are treated alike, with a Poisson process used to distinguish among them probabilistically. The sort of functions appropriate for discrete analysis are the Campbell functions, or, as they are called in the present paper, the random functions of Poisson type. The paper presents an account of the ideas underlying discrete analysis and treats briefly the specifics of representation, stochastic integrals, and duality theory for random functions of Poisson type. It is chiefly concerned, however, with those random functions which occur in connection with the discrete analysis of Brownian motion (for example, with Gaussian noise). In particular it shows that there is a completely positive map which carries such discrete processes onto an algebraic version of Wiener's Brownian motion process, and that under this map, random functions of Poisson type go over to the appropriate random functions of Wiener type. It also shows that the map carries random variables into noncommuting operators characteristic of quantum theory.

1. INTRODUCTION

1.1. Wiener's Brownian motion process serves as a guiding model throughout much of the theory of stochastic processes. From this point of view, a random function is a function ψ(ω, x) where ω ranges over a probability measure space and x is a real variable. Such random functions will be said to be of Wiener type. A rather different notion appears if Poisson processes are regarded as basic. To give these processes a suitably abstract setting, let (X, 𝔛, μ) be a nonatomic σ-finite measure space; then by the standard Poisson process over X with mean μ, we shall mean the countably additive random measure P which assigns, to each A in 𝔛 with μ(A) < ∞, a random variable P(A) with a Poisson distribution and mean μ(A), and which has the further property that it takes independent

* Research supported in part by N.S.F.

0022-1236/79/070095-24$02.00/0. Copyright © 1979 by Academic Press, Inc. All rights of reproduction in any form reserved.


values on disjoint sets. Such a process may be realized on the space Ω of all countable subsets of X. In this setting the natural notion for a random function of order n turns out to be a measurable function ψ(ω; x_1,..., x_n), where ω varies over Ω and x_1,..., x_n vary over those ordered n-tuples whose members lie in ω and have the additional property that x_i ≠ x_j if i ≠ j. These functions are the Campbell functions discussed in [5] and [13]. In the present paper they will also be called random functions of Poisson type, partly to emphasize the difference between them and those of Wiener type. This difference, however, is not so great as might at first be supposed, since it turns out that there is a decoupling transform which maps random functions of the one type onto those of the other, as will be shown in Section 4.3 below.

1.2. What gives the random functions of Poisson type their importance is the central role played by the Poisson process in describing purely random phenomena. Such phenomena may be represented mathematically by random measures taking independent values on disjoint sets. W. F. Stinespring and the author took up the study of such measures in [11] and [12], where we called them Wiener processes. This was because of the importance, as it seemed to us, of examples treated by Wiener himself. In particular, in [12] it is shown that if W is a nonatomic σ-finite Wiener process over (X, 𝔛), then W is isomorphic to a direct sum:

W ≅ ρ ⊕ N ⊕ J.

Here ρ is a signed ordinary measure and N is a normal Wiener process. That is, there is a σ-finite nonatomic measure ν on X so that N(A) is normally distributed with mean 0 and variance ν(A). J is called a jump process. Such processes arise as follows. Let (x, λ) denote an arbitrary point in X × R; let μ be a nonatomic σ-finite measure on X × R; and let P be the standard Poisson process on X × R with mean μ. Then J(A) is obtained by integrating λ dP − sin λ dμ over A × R. For this prescription to work a certain condition must be satisfied by μ. If we impose even stronger conditions, for example, that there is a sequence of sets A_n with union X such that μ(A_n × R) < ∞, then the renormalization term sin λ dμ may be omitted. Thus, in this case, J may be given by

J(A) = ∫_{A×R} λ dP.    (1)

A process of this general type will be called a pure jump process. There is no difficulty in extending the concept of random function of Poisson type to such processes.
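The construction (1) can be illustrated with a short simulation. The sketch below is ours, not the paper's: it takes X = [0, 1] with the mean measure of P equal to rate × Lebesgue (in x) times a standard normal distribution (in λ), realizes the Poisson points (x_i, λ_i) directly, and checks by Monte Carlo that J(A) = ∫_{A×R} λ dP has mean 0 and variance rate × |A| × ∫ λ² dm. All helper names are hypothetical.

```python
import math
import random

def poisson_variate(mean, rng):
    # inverse-transform sampling of a Poisson variate (fine for moderate means)
    u, k = rng.random(), 0
    p = math.exp(-mean)
    c = p
    while u > c:
        k += 1
        p *= mean / k
        c += p
    return k

def sample_jump_points(rate, rng):
    """Points (x_i, lambda_i) of a Poisson process on [0, 1] x R whose mean
    measure is rate * Lebesgue (in x) times the standard normal law (in lambda)."""
    n = poisson_variate(rate, rng)
    return [(rng.random(), rng.gauss(0.0, 1.0)) for _ in range(n)]

def J(points, a, b):
    """The pure jump process of (1): J([a, b]) = integral of lambda dP over [a, b] x R."""
    return sum(lam for x, lam in points if a <= x <= b)

# Monte Carlo check: E J([0, 1]) = 0 and Var J([0, 1]) = rate * E[lambda^2] = rate.
rng = random.Random(0)
rate = 5.0
samples = [J(sample_jump_points(rate, rng), 0.0, 1.0) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean * mean
```

The variance identity used in the check is the standard compound-Poisson formula, which here reduces to the statement in the text that the structure measure determines the mean and variance of J.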


Now an ordinary measure μ on X may be approximated by the pure jump process σP that arises by taking σ > 0 and P as the standard Poisson process with mean μ/σ. Then σP has mean μ and, as σ → 0, σP → μ weakly. Again let N be a nonatomic normal Wiener process. Then we may approximate N by a pure jump process J by making a suitable choice for the structure measure μ which determines J. To do this in a very general way, suppose that ν is the σ-finite measure which gives the variance of N. Let m be a probability measure on the line with mean 0 and variance σ and support R − {0}. It will be assumed that m is invariant under reflection in the origin and, for technical reasons, that ∫ λ^4 dm < ∞. Now set

μ = σ^{−1}(ν × m).    (2)

The pure jump process J which arises from (1) by using this choice of μ has the same mean and variance as N. We shall call such a J a pure jump process approximating N. If we choose a sequence of measures m_1, m_2,..., whose corresponding variances σ_1, σ_2,..., tend to 0, we obtain a sequence of pure jump processes J_1, J_2,..., which have N as their weak limit. Since ordinary measures are weak limits of Poisson processes and since pure jump processes are constructed using Poisson processes, the central role of such processes is thereby established. A consequence is that both the case of measurable functions on a measure space, and the case of random functions of Wiener type relative to Brownian motion, are special limiting cases of classes of random functions of Poisson type. Part of the significance of this remark lies in the special simplicity of the analysis based on these functions. The technical difficulties of stochastic integration relative to Wiener's Brownian motion process, for example, appear greatly simplified. However, it is not merely the case that we can get as close to a given normal process N as we please via an approximating pure jump process J. Theorem 1 of this paper says that associated with each J approximating N, there is a completely positive map which gives rise to an operator-theoretic version of N. Under this mapping random functions of Poisson type relative to J go to random functions of Wiener type relative to N. A part of these results, with J a Gaussian jump process, appeared earlier in [9] where it was more-or-less disguised as quantum field theory. To give it a rather more customary setting, consider what happens on the line when we regard the variable x as the time. The process N then becomes Wiener's Brownian motion process whose derivative is white noise. Among the approximating pure jump processes is Gaussian noise, which results from taking the measure m in (2) to be Gaussian.
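The Gaussian-noise approximation just described can be sketched numerically. This is our illustration, not the paper's: take ν = Lebesgue on the time axis and m Gaussian with mean 0 and variance σ, so by (2) jump times arrive at rate 1/σ and each jump is drawn from m; the resulting path then has the mean 0 and variance t of Brownian motion at each time t, and for small σ its law at t = 1 is close to N(0, 1).

```python
import random

def gaussian_noise_path(sigma, t_max, rng):
    """Pure jump process with structure measure mu = (Lebesgue x m)/sigma,
    where m is Gaussian with mean 0 and variance sigma: jump times form a
    Poisson process of rate 1/sigma and each jump is drawn from m."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(1.0 / sigma)   # inter-arrival times ~ Exp(rate 1/sigma)
        if t > t_max:
            break
        times.append(t)
    jumps = [rng.gauss(0.0, sigma ** 0.5) for _ in times]
    return times, jumps

def J_at(times, jumps, t):
    # J(t) = sum of the jumps that have occurred by time t
    return sum(j for s, j in zip(times, jumps) if s <= t)

# For small sigma, J(1) should have mean close to 0 and variance close to 1.
rng = random.Random(1)
vals = []
for _ in range(5000):
    times, jumps = gaussian_noise_path(0.01, 1.0, rng)
    vals.append(J_at(times, jumps, 1.0))
mean = sum(vals) / len(vals)
var = sum(v * v for v in vals) / len(vals) - mean * mean
```

Replacing `rng.gauss(0.0, sigma ** 0.5)` with a fair choice of ±σ^{1/2} gives the random-walk approximation discussed next in the text.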
Another is the random walk which results from choosing that m which assigns weight 1/2 to +σ^{1/2} and −σ^{1/2}. For both these cases, Theorem 1 says that there is a canonical construction involving them which produces a version of white noise. Theorem 1, therefore, is an analogue of a remarkable result of Anderson [1], which says that if an appropriate random walk is constructed within nonstandard analysis, then the operation of standardization produces Wiener's Brownian motion process.

1.3. Whether in the description of Nature we use the Brownian motion process with its continuous paths, or for that matter, whether we use nonstandard analysis with its infinitesimals, or whether instead we use jump processes, depends partly on mathematical convenience and partly on whether we believe the underlying reality to be continuous or discrete. This second matter is philosophical in its nature and I shall discuss it here only briefly. A more detailed account is given in [10]. The question really concerns the ideas which lie at the foundations of geometry. In times past, the space in which we appear to live and the relations of bodies were the primary examples from which our geometrical ideas were formed. More recently, space has been replaced by space-time and bodies by events. As to events, primacy must be given to those fundamental events wherein particles are created or destroyed, as happens, for example, when an electron emits or absorbs a photon. Interactions are increasingly interpreted as the result of events of this sort. For example, one way in which two electrons can interact is for electron 1 to emit a photon at event A and for the photon to be subsequently absorbed by electron 2 at event B. It does not seem very daring therefore to suppose that all interactions whatsoever arise in a similar way through particle exchange. The view of what is ultimately real to which this leads might be called the universal Feynman diagram. It consists of vertices, representing events, and line segments connecting pairs of events, each segment indicating that some particle α was created at an event A and annihilated at an event B. Each vertex has several line segments beginning or ending at it.
Taking a view similar to that maintained by Leibniz against Newton (see [6]), we suppose that instead of the diagram being imbedded in a space-time continuum, space and time are derivative notions from the diagram itself. The apparent conservation of energy-momentum seems to require us to postulate that all particles exchange gravitons; that most of these exchanges are with the distant matter in the universe; and that graviton exchanges occur with enormous frequency. We may establish something like a "clock" by counting vertices along the line representing a particle in the universal diagram, and we may then use graviton exchanges to correlate clocks and measure distances. In this way the notions of "frame" and "space-time coordinates" appear at least in an approximate sense. It is entirely consistent with these ideas that on a sufficiently large scale, for example that of the hydrogen atom, it is legitimate to assign precise space-time coordinates to the electron. Nevertheless at a fundamental level such an assignment is to be viewed as no more than approximately correct. The above ideas lead to two principles:


Principle I. Between two related events there can be at most finitely many intermediate events.

Principle II. It is impossible to assign precise space-time coordinates to events.

These principles appear to be incompatible with much of the traditional mathematical formulations of Geometry and Physics. We conjecture that in quantum theory, the violation of Principle II leads to a summation of amplitudes over more apparent possibilities than actually exist, and that this is the reason for the divergence of the self-energy calculation for the electron.

1.4. The difficulty with analyses like the above is what they destroy. Thus, at the outset, it is not clear whether any mathematical theory at all can be constructed satisfying Principles I and II. Nor may any particular attempt be said to be successful until it has solved one of the test problems of the subject, for example, computed the self-energy of the electron. A start on what may, or may not, provide the basis for a successful mathematical theory was made in [9]. The idea of that paper was to replace a continuum X by the ensemble Ω of all countable subsets of X, and to distinguish among the members of Ω probabilistically via a Poisson process; it traced what happened to a free Boson field when the normal process was replaced by a Gaussian jump process. The present paper proceeds in the same way and traces what happens to the concept of "function". The work was done in 1975 and 1976 and a first version written in 1976 before the author knew of the work in [5] and [13].

1.5. Summary of Contents. Sections 2 and 3 recount certain elementary matters for random functions of Poisson type. Section 4 discusses the representation of random functions as stochastic integrals of ordinary ones. Section 5 extends the basic definitions to pure jump processes. In Section 6 a necessary minimum about normal Wiener processes is given. Section 7 discusses approximately normal pure jump processes.
Section 8 discusses the connection between these processes and normal Wiener processes.

2. RANDOM FUNCTIONS

2.1. This section first gives a precise definition of Wiener processes. It then describes the two basic realizations for Poisson processes and their related spaces of random functions. Let X be a non-empty set and let 𝔛 be a σ-algebra of subsets. The precise definition of a Wiener process over X is that it is a mapping W from a subring ℜ of 𝔛 to random variables on a probability measure space (Ω, ν). As part of the definition we suppose: (1) X is the union of countably many elements in ℜ; (2) W takes independent values on disjoint sets; and (3) W is countably additive in the following sense. Let A_1, A_2,..., be disjoint sets in ℜ. Let their union be A. Then A is in ℜ if and only if Σ W(A_n) converges in measure, in which case the


sum is W(A). The measure ring of W is the smallest σ-algebra 𝔖 of subsets of Ω with regard to which all the W(A), with A in ℜ, are measurable. The triple (Ω, 𝔖, ν) is called the probability measure space of W. If we identify sets which differ by a null set, it is unique up to isomorphism.

2.2. Let μ be a nonatomic σ-finite measure on (X, 𝔛) and let P be the standard Poisson process with mean μ. Then P is a Wiener process. The scatter representation of P is defined as follows. The set Ω is the ensemble of all countable subsets of X; P(A) is the function on Ω given by

P(A)(ω) = #(ω ∩ A),

where #(θ) means the number of points in θ, this being a nonnegative integer or ∞. The σ-algebra 𝔖 is the smallest one containing all sets {ω ∈ Ω : #(ω ∩ A) = m}, and ν is the unique measure on 𝔖 which assigns to each such set the value exp[−μ(A)] μ(A)^m/m! and is such that P takes independent values on disjoint sets. For each ω in Ω let ω^(n) be the set of all n-tuples (x_1,..., x_n) drawn from ω with x_i ≠ x_j if i ≠ j. We shall call this the reduced n-fold product of ω with itself. The fiber space of order n over Ω is the set Ω_n of all (ω; x_1,..., x_n) with ω in Ω and (x_1,..., x_n) in ω^(n). Let 𝔖_n be the smallest σ-algebra of subsets of Ω_n with regard to which all functions of the form

α(ω) f(x_1,..., x_n)    (3)

are measurable, where α(ω) is 𝔖-measurable and f(x_1,..., x_n) is 𝔛^n-measurable. The random functions of Poisson type and order n are just those functions on Ω_n which are 𝔖_n-measurable. There is a canonical measure ν_n on 𝔖_n which in [5] and [13] is called the Campbell measure. (See Section 3.2 below for its precise construction.) The triple (Ω_n, 𝔖_n, ν_n) will be called the scatter representation space for the random functions of order n. We shall identify random variables on Ω with the random functions of order 0 and sometimes write (Ω_0, 𝔖_0, ν_0) in place of (Ω, 𝔖, ν).

2.3. When μ(X) < ∞, it is often convenient to realize P in the exponential representation. This arises when Ω is taken to be X^0 ∪ X^1 ∪ X^2 ∪ ⋯, with X^0 = {∅}; P(A)(ω) is defined as the number of coordinates of ω which lie in A; and ν is given on X^m by the formula

e^{−μ(X)} μ^m/m!.    (4)

Then A ⊂ Ω is in 𝔖 if and only if: (i) A ∩ X^m ∈ 𝔛^m for m = 0, 1, 2,..., and (ii) A is invariant under all permutations of the coordinates. For n ≥ 1, we may realize the random functions of order n as follows. Let


m ≥ n. For each selection of a sequence of n distinct integers i(1),..., i(n) from 1,..., m, let p: X^m → X^n be the map

p: (x_1,..., x_m) → (x_{i(1)},..., x_{i(n)}).

There are m!/(m − n)! such maps. For each one take a copy of X^m and label it (X^m, p). Set Y_m = ∪_p (X^m, p) and let Ω_n = Y_n ∪ Y_{n+1} ∪ ⋯. An arbitrary point in Ω_n may be specified by (ω, p), since ω tells us which X^m it comes from and p tells us which selection of n coordinates from ω has been made. The expression (3) above determines a function on Ω_n, namely the one whose value at (ω, p) is α(ω) × f∘p(ω). Then 𝔖_n is the smallest σ-algebra with regard to which all the functions (3) are measurable, while the Campbell measure ν_n is that measure which, on each (X^m, p), is given by (4) above. For future reference we mention at this point the outcome of a straightforward computation using the exponential representation. Let λ be a real or complex number. Let n ≥ 0 and let exp[λP(X)] be regarded as a random function of order n. Then

∫_{Ω_n} e^{λP(X)} dν_n = e^{nλ} μ(X)^n exp[(e^λ − 1) μ(X)].    (5)
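The exponential representation and formula (5) can both be exercised in code. The sketch below is ours (illustrative choices: μ(X) = 3 for the sampler; λ = 0.3, n = 2, μ(X) = 2 for the check of (5)). It samples a realization layer by layer with the weights (4), checks by Monte Carlo that counts on disjoint sets are independent Poisson variables, and sums the series defining the left side of (5): on the layer X^m the fiber space of order n contributes m(m−1)⋯(m−n+1) ordered n-tuples of distinct coordinates.

```python
import math
import random

def sample_exponential_rep(total_mass, base_sampler, rng):
    """Realize omega in the exponential representation when mu(X) < infinity:
    choose the layer X^m with weight e^{-mu(X)} mu(X)^m / m!  (formula (4)),
    then draw the m coordinates independently from mu / mu(X)."""
    u, m, p = rng.random(), 0, math.exp(-total_mass)
    c = p
    while u > c:
        m += 1
        p *= total_mass / m
        c += p
    return [base_sampler(rng) for _ in range(m)]

def count_in(omega, a, b):
    # P(A)(omega) = number of coordinates of omega lying in A = [a, b)
    return sum(1 for x in omega if a <= x < b)

def campbell_exp_moment(lam, n, t, layers=200):
    """Left side of (5): on the layer X^m, exp(lam * m) is weighted by the
    falling factorial m(m-1)...(m-n+1) and by the layer weight (4)."""
    total, weight = 0.0, math.exp(-t)          # weight = e^{-t} t^m / m!
    for m in range(layers):
        ff = 1.0
        for i in range(n):
            ff *= m - i                         # falling factorial
        total += weight * math.exp(lam * m) * ff
        weight *= t / (m + 1)
    return total

# Monte Carlo: with mu = 3 * Lebesgue on [0, 1], counts in the two halves
# should be independent Poisson variables of mean 1.5 each.
rng = random.Random(2)
left, right = [], []
for _ in range(20000):
    omega = sample_exponential_rep(3.0, lambda r: r.uniform(0.0, 1.0), rng)
    left.append(count_in(omega, 0.0, 0.5))
    right.append(count_in(omega, 0.5, 1.0))
mean_left = sum(left) / len(left)
mean_right = sum(right) / len(right)
cov = sum(l * r for l, r in zip(left, right)) / len(left) - mean_left * mean_right

# Deterministic check of (5) against its closed form.
lhs = campbell_exp_moment(0.3, 2, 2.0)
rhs = math.exp(2 * 0.3) * 2.0 ** 2 * math.exp((math.exp(0.3) - 1.0) * 2.0)
```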

2.4. Suppose ψ is a function on Ω_n whose restriction to each (X^m, p) is measurable. Before sketching the proof that the exponential and scatter representations are equivalent, it is convenient to give a necessary and sufficient condition that ψ be a random function. To this end let π be any permutation of the integers 1,..., m. Then π determines a natural mapping of X^m onto itself, which we shall again denote by π. This second π in turn determines a mapping of Y_m onto itself, that is, a mapping among the spaces (X^m, p). This last, which we still denote by π, is given by

π: (ω, p) → (πω, p ∘ π^{−1}).    (6)

The necessary and sufficient condition is this: that on each Y_m, and for each π, we shall have ψ ∘ π = ψ. Proof is omitted. To see that, when μ(X) < ∞, the scatter and exponential representations yield isomorphic classes of random functions, we first take the exponential representation and excise from each (X^m, p) the set of all (ω, p) for which ω has repeated coordinates. Taken together, all these sets form a subset of Ω_n which is null relative to the measure ν_n. On the complement of this subset, we


impose an equivalence relation by setting (ω, p) ~ (ω′, p′) whenever there is a π, as is given in (6) above, which carries the one onto the other. Then the equivalence classes stand in a natural one-to-one correspondence with the points of Ω_n taken in the scatter representation. Further, because in the two representations the σ-algebras 𝔖_n and the random functions are defined in the same way (they both use (3) above) the random functions coincide.

2.5. We return to the general case and take P in the scatter representation. Let A ∈ 𝔛 and let Ω_n′ and Ω_n″ be the representation spaces relative to A and X − A respectively. Then

Ω_n ≅ ⋃_{r+s=n} (Ω_r′ × Ω_s″).    (7)

Let ψ be a random function of order n. If ψ vanishes off Ω_n′ × Ω_0″, it will be said to be weakly based on A; while if in addition ψ = ψ′ × 1 on Ω_n′ × Ω_0″, then ψ will be said to be strongly based on A. These notions are often useful in establishing measurability. As an example, consider the case X = reals, 𝔛 = Borel sets, and dμ = dx/σ with σ > 0. Then Ω is a discrete analogue of the line. We will refer to Ω supplied with the stochastic measure σP as the discrete line. Consider the following function on Ω_1: ψ(ω, x) = 1/σ if x is the nearest member of ω to 0, and ψ(ω, x) = 0 otherwise. Then ψ is an analogue of the δ-function. To show that ψ is measurable, and hence a random function, let ψ_N(ω, x) = ψ(ω, x) if −N ≤ x ≤ N and let ψ_N(ω, x) = 0 otherwise. Then ψ_N is strongly based on [−N, N] and ψ_N → ψ pointwise as N → ∞. Thus we need only establish the measurability of ψ_N(ω, x) relative to the restriction of P to [−N, N]. But this last is obvious once we take the exponential representation.
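The δ-function analogue can be tried numerically. The sketch below is ours (helper names hypothetical): it samples the points of the discrete line in [−N, N] and pairs ψ with a test function φ against the stochastic measure σP; since ψ is supported at the single nearest point, the pairing collapses to φ(nearest member of ω to 0), which tends to φ(0) as σ → 0.

```python
import math
import random

def discrete_line_segment(sigma, N, rng):
    """Sample the points of the discrete line lying in [-N, N]: a Poisson
    process with mean measure dx / sigma, built from exponential spacings."""
    pts, x = [], -N
    while True:
        x += rng.expovariate(1.0 / sigma)
        if x > N:
            return pts
        pts.append(x)

def delta_pair(omega, phi, sigma):
    """Pair psi against a test function phi using the stochastic measure
    sigma * P: sigma * sum_x psi(omega, x) phi(x) = phi(nearest point to 0)."""
    nearest = min(omega, key=abs)
    return sigma * (1.0 / sigma) * phi(nearest)

# With sigma small, the nearest point to 0 lies within a few multiples of
# sigma, so the pairing is close to phi(0).
rng = random.Random(3)
sigma = 0.001
omega = discrete_line_segment(sigma, 1.0, rng)
approx = delta_pair(omega, math.cos, sigma)   # phi = cos, phi(0) = 1
```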

3. STOCHASTIC INTEGRALS

3.1. This section establishes the basic facts concerning the stochastic integrals of random functions. It describes how such integrals may be used to set up the Campbell measure for the random functions of order n, and it associates with a random function ψ an ordinary function which we shall call its standard expectation. The notation of the previous section is continued. Suppose that ψ(ω; x_1,..., x_n) is a random function of Poisson type. For 0 ≤ r ≤ n, the stochastic integral of ψ relative to the variables x_{r+1},..., x_n is that random function of order r whose value at (ω; x_1,..., x_r) is given by the equation

∫ ψ dP(x_{r+1}) ⋯ dP(x_n) = Σ′_{x_{r+1}∈ω} ⋯ Σ′_{x_n∈ω} ψ(ω; x_1,..., x_n),    (8)
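Definition (8) can be checked by hand in a small exact case (our example, not the paper's): for n = 2, r = 0, and ψ(ω; x_1, x_2) = f(x_1) f(x_2), the primed sum over ordered pairs of distinct members of ω equals (Σ f)² − Σ f², since excluding the diagonal removes exactly the terms f(x)².

```python
import random

def stochastic_integral_order2(omega, f):
    """Right side of (8) for psi(omega; x1, x2) = f(x1) f(x2) and r = 0:
    the primed double sum runs over ordered pairs of distinct members of omega."""
    return sum(f(x1) * f(x2) for x1 in omega for x2 in omega if x1 != x2)

rng = random.Random(4)
omega = [rng.uniform(0.0, 1.0) for _ in range(50)]   # a finite "omega"
f = lambda x: 2.0 * x
direct = stochastic_integral_order2(omega, f)
s1 = sum(f(x) for x in omega)
s2 = sum(f(x) ** 2 for x in omega)
identity = s1 * s1 - s2   # (sum f)^2 minus the excluded diagonal
```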


where the prime indicates that only those x_{r+1},..., x_n are to be taken for which x_1,..., x_n are n different elements in ω. Thus a stochastic integral, as we define it, depends upon all the variables present. To establish the measurability of the right side of (8), we suppose first that μ(X) < ∞. Let A_1,..., A_n be disjoint members of 𝔛 with characteristic functions κ_1,..., κ_n. Let f(x_1,..., x_n) = κ_1(x_1) ⋯ κ_n(x_n). Setting ψ = α(ω) f(x_1,..., x_n) we find the right side of (8) to be α(ω) P(A_{r+1}) ⋯ P(A_n) κ_1(x_1) ⋯ κ_r(x_r). This expression is measurable. But the measurable functions on Ω_n form the smallest class containing finite linear combinations of such ψ and closed under pointwise limits. Hence in this case the stochastic integral is measurable for all random functions ψ. Now suppose μ(X) = ∞. In this case, if ψ ≥ 0, then the stochastic integral is measurable although possibly infinite valued. To see this, first choose A ∈ 𝔛 with μ(A) < ∞. Let φ ≥ 0 be strongly based on A and let α(ω) ≥ 0 be a random variable. Then ψ = αφ is weakly based on A and the result just proved assures measurability in this case. Now the set of all non-negative random functions which are weakly based on A forms the smallest class which: (1) contains all such ψ, (2) is closed under the operation of taking linear combinations with positive coefficients, and (3) is closed under the operation of taking pointwise limits. Hence we have measurability when ψ is weakly based on A. Now let ψ be an arbitrary non-negative random function. Let κ_A(x) be the characteristic function of A. Then ψ_A = ψ κ_A(x_1) ⋯ κ_A(x_n) is weakly based on A. (See equation (7).) Letting A increase to X makes ψ_A → ψ pointwise. The stated measurability follows.

3.2. There is no difficulty in carrying out a version of analysis with stochastic integrals replacing ordinary ones. For example, the stochastic versions of the Fubini and of the Lebesgue convergence theorems hold. One use of stochastic integrals is to establish the Campbell measure ν_n on (Ω_n, 𝔖_n). To do it, let A belong to 𝔖_n. Denote the characteristic function by κ_A. Let ν_n be given by

ν_n(A) = ∫_Ω ( ∫ κ_A dP(x_1) ⋯ dP(x_n) ) dν.    (9)

Then ν_n is countably additive on 𝔖_n. This follows from the Lebesgue monotone convergence theorem in both its ordinary and stochastic versions. The measure ν_n given by (9) coincides with the ν_n already obtained when μ(X) < ∞. If f(x_1,..., x_n) is μ^n-integrable on X^n, then f may be regarded as a random function. It was shown in [9, p. 262] that the expected value of ∫ f dP^n is just ∫ f dμ^n. It follows that ∫ f dν_n = ∫ f dμ^n.

3.3. Next we define the standard expectation of a random function. Let ψ be a random function of order n. Let A ∈ 𝔛^n with characteristic function κ_A.

Let ρ be the "measure" on X^n whose value at A is given by

ρ(A) = ∫_Ω ( ∫_{X^n} κ_A ψ dP^n ) dν = ∫_{Ω_n} κ_A ψ dν_n, by (9).    (10)

It may happen that the measure ρ is actually countably additive. This will certainly be the case, for example, when ψ is ν_n-integrable. Now observe that μ^n(A) = 0 implies that ρ(A) = 0. Thus, when ρ is a countably additive measure, it is absolutely continuous with regard to μ^n. In such a case the Radon-Nikodym derivative will be called the standard expectation of ψ. In other words, denoting the standard expectation of ψ by E_n(ψ), we have E_n(ψ) = dρ/dμ^n. E_n(ψ) is to be interpreted as the conditional expectation of ψ relative to the space of measurable functions on X^n. It is convenient to interpret E_0 as the expectation E associated with the probability space (Ω, 𝔖, ν). For A ∈ 𝔛^n, we will regard ∫ κ_A ψ dP^n as the stochastic integral of ψ over A. Then (10) may be written as

∫_A E_n(ψ) dμ^n = ∫_Ω ( ∫_A ψ dP^n ) dν.    (11)

This formula is useful in computing standard expectations.
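The fact recalled in Section 3.2, that the expected value of ∫ f dP^n is ∫ f dμ^n, is easy to check by simulation in the case n = 1. The concrete choices below (μ = 4 × Lebesgue on [0, 1], f(x) = x²) are ours, for illustration only.

```python
import random

def sample_omega(rate, rng):
    """One realization of the standard Poisson process on [0, 1] with mean
    measure rate * Lebesgue, generated from exponential spacings."""
    pts, x = [], 0.0
    while True:
        x += rng.expovariate(rate)
        if x > 1.0:
            return pts
        pts.append(x)

f = lambda x: x * x
rng = random.Random(5)
n_samples = 20000
acc = 0.0
for _ in range(n_samples):
    omega = sample_omega(4.0, rng)
    acc += sum(f(x) for x in omega)      # stochastic integral of f, int f dP
estimate = acc / n_samples
exact = 4.0 * (1.0 / 3.0)                # int f dmu = 4 * int_0^1 x^2 dx
```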

EXAMPLE 1. Let B ∈ 𝔛 with μ(B) < ∞. We may regard P(B) − μ(B) as that random function of order n which at (ω; x_1,..., x_n) takes the value P(B)(ω) − μ(B). A straightforward computation using (11) shows that E_n(P(B) − μ(B)) is that function which at (x_1,..., x_n) takes the value κ_B(x_1) + ⋯ + κ_B(x_n).

EXAMPLE 2. Take the discrete line as defined at the end of Section 2.5 and consider ψ(ω; x, y) defined, if x < y, to be σ times the number of intermediate points between x and y, and if x > y, by ψ(ω; x, y) = −ψ(ω; y, x). Then ψ is a random function with standard expectation y − x. The discrete line, with the difference in coordinates between points given by ψ, satisfies Principles I and II of Section 1.3.
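Example 2 can be illustrated by simulation. For a Poisson process, conditioning on x and y being points of ω simply adjoins x and y to an unconditioned realization (a standard property of Poisson processes), so the standard expectation at (x, y) can be estimated by counting the points of a fresh realization strictly between x and y. The choices σ = 0.05, x = 0.2, y = 1.7 are ours.

```python
import random

def points_between(x, y, sigma, rng):
    """Number of points of the discrete line strictly between x and y,
    sampled from exponential spacings of rate 1/sigma."""
    count, t = 0, x
    while True:
        t += rng.expovariate(1.0 / sigma)
        if t >= y:
            return count
        count += 1

sigma, x, y = 0.05, 0.2, 1.7
rng = random.Random(6)
n_samples = 20000
acc = 0.0
for _ in range(n_samples):
    acc += sigma * points_between(x, y, sigma, rng)   # psi(omega; x, y)
estimate = acc / n_samples   # should be close to y - x = 1.5
```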

3.4. The result cited in Section 3.2, that for f(x_1,..., x_n) the expected value of ∫ f dP^n is just ∫ f dμ^n, extends to random functions in the following way. Let ψ be a random function of order n and suppose that it is ν_n-integrable. Let 0 ≤ r ≤ n. Then

E_r ( ∫ ψ dP(x_{r+1}) ⋯ dP(x_n) ) = ∫ E_n(ψ) dμ(x_{r+1}) ⋯ dμ(x_n).    (12)

The case r = 0 follows from equation (11). To establish (12) when r > 0, let A ∈ 𝔛^r. Then the left side of (12) is the Radon-Nikodym derivative of the measure ρ given by

ρ(A) = ∫_Ω dν ( ∫_{X^r} κ_A ( ∫_{X^{n−r}} ψ dP(x_{r+1}) ⋯ dP(x_n) ) dP(x_1) ⋯ dP(x_r) )

= ∫_Ω dν ( ∫_{X^n} κ_A ψ dP^n )

= ∫_{X^n} κ_A ψ dν_n

= ∫_{X^n} κ_A E_n(ψ) dμ^n.

Hence dp!dp’ = j En($) dp(.r,+l ) ... dp(x,), and the formula

4. REPRESENTING RANDOM FUNCTIOKS

is proved.

BY STOCHASTIC INTEGRALS

4.1. It is often useful to represent a random function of order n as a sum of stochastic integrals of ordinary functions of higher order. As an example, suppose that f(y_1,..., y_m) is non-negative and 𝔛^m-measurable. Let α(ω) = ∫ f dP^m. When n ≥ 1, we may regard α(ω) as a random function of order n, namely that function on Ω_n which takes the value α(ω) at (ω; x_1,..., x_n). Regarded in this way, α(ω) is not itself the stochastic integral of a function. However, we may express α(ω) as a sum of stochastic integrals of functions of higher order. To do this let γ be any collection of ordered pairs

γ = {(x_{i(1)}, y_{j(1)}),..., (x_{i(r)}, y_{j(r)})},

where the x_{i(k)} are drawn without repetition from x_1,..., x_n and the y_{j(k)} are drawn without repetition from y_1,..., y_m. We shall call r the order of γ and write it r(γ). Now let


Γ denote the collection of all γ. For γ ∈ Γ, r(γ) ≤ min(n, m). Note that the empty set ∅ is in Γ and has order 0. Now write f_γ for the function which results from f(y_1,..., y_m) by replacing each y_{j(k)} by its corresponding x_{i(k)} from γ. Then regarding α(ω) as a random function of order n, we get

α(ω) = Σ_{γ∈Γ} ∫ f_γ dP^{m−r(γ)},    (13)

where each stochastic integral takes place with regard to all the remaining y variables. A second example concerns random functions which are products of stochastic integrals. For this purpose f(x_1,..., x_n, y_1,..., y_m) will be abbreviated as f(x, y) and the stochastic integral with regard to dP(y_1),..., dP(y_m) will be abbreviated as ∫ f(x, y) dP(y)^m. Similarly g(x_1,..., x_n, z_1,..., z_t) will be abbreviated as g(x, z) and the stochastic integral as ∫ g(x, z) dP(z)^t. Analogously to what we did in the previous example we set γ = {(y_{i(1)}, z_{j(1)}),..., (y_{i(r)}, z_{j(r)})}, where the y_{i(k)}, for k = 1,..., r, are all distinct and so are the z_{j(k)}. Let Γ and r(γ) be defined as above, and let

(f#g)_γ denote the function which results from f(x, y) g(x, z) by replacing each pair y_{i(k)} and z_{j(k)} in γ by a new variable w_k, for 1 ≤ k ≤ r(γ). In particular (f#g)_∅ is just f(x, y) g(x, z) itself.

PROPOSITION 1. With the notation above, if f(x, y) ≥ 0 and g(x, z) ≥ 0, then

∫ f(x, y) dP(y)^m ∫ g(x, z) dP(z)^t = Σ_{γ∈Γ} ∫ (f#g)_γ dP^{m+t−r(γ)},    (14)

where on the right side integration takes place with regard to all variables except x_1,..., x_n.

Proof. We realize the random functions in the scatter representation and let (ω; x_1,..., x_n) denote an arbitrary point in Ω_n. According to the definition (8), the left side of (14) is the sum of all

f(x_1,..., x_n, y_1,..., y_m) g(x_1,..., x_n, z_1,..., z_t)


for which (x_1,..., x_n, y_1,..., y_m) is a sequence of distinct elements in ω and the same is true of (x_1,..., x_n, z_1,..., z_t). But members of (y_1,..., y_m) can agree with members of (z_1,..., z_t). Each γ in Γ gives a list of pairs of variables which may agree. When we collect terms according to the γ to which they belong, we get the right side of (14). ∎

4.2. We now consider duality for the space of random functions of order n associated with a Poisson process. An early treatment of duality is given by Segal in [7] for integration on Hilbert space with regard to the normal distribution. This is essentially the same theory as that for the normal Wiener process N. Another treatment of duality is due to Itô: in [3] for the normal case and in [4] for the Poisson case. These cases are treated again from the present point of view in [9]. What follows is an adaptation, to random functions, of the treatment of the Poisson case from [9]. Let (X, 𝔛, μ) be a nonatomic σ-finite measure space with corresponding standard Poisson process P. Duality may be set up in a manner appropriate to the theory of Boson fields by setting H = L_2(X) and considering S(H), the algebra of symmetric tensors over H. Now S(H) is sometimes denoted by e^H. For convenience we carry over the same idea here by defining the exponential of the measure space (X, 𝔛, μ) to be the set

e^X = X^0 ∪ X^1 ∪ X^2 ∪ ⋯,

where X^0 = {∅}. Here X^0 is supplied with the measure 1 and, for n ≥ 1, X^n is supplied with the measure μ^n. A set A ⊂ e^X will be called measurable if and only if, for each n ≥ 1, A ∩ X^n is in 𝔛^n and in addition A ∩ X^n is invariant under all permutations of coordinates. For random variables (this is the case when n = 0) the space appropriate for duality is L_2(e^X). For random functions of order n ≥ 1, it is L_2(X^n e^X). To set up duality we proceed exactly as in [9] by introducing the stochastic-and-ordinary measure

[dP(y_1) − dμ(y_1)] ⋯ [dP(y_m) − dμ(y_m)].

This expression is to be interpreted in the following way: (i) Multiply out and express the result as a sum of monomial terms. (ii) For each monomial term, compute the ordinary integrals first, then compute the stochastic integrals with regard to all the remaining variables. The measure will be abbreviated as [dP(y) − dμ(y)]^m or as [dP − dμ]^m. When m = 0, "integration" is to be interpreted as the instruction: "don't integrate."
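The centered measure [dP − dμ] underlies an isometry, and this is the substance of the duality transform in the simplest case m = 1, n = 0: the integral ∫ f [dP − dμ] has mean 0 and second moment ∫ f² dμ. A Monte Carlo sketch with our illustrative choices (μ = 3 × Lebesgue on [0, 1], f(y) = y):

```python
import random

def sample_omega(rate, rng):
    """A realization of the standard Poisson process on [0, 1] with mean
    measure rate * Lebesgue."""
    pts, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t > 1.0:
            return pts
        pts.append(t)

f = lambda y: y
rate = 3.0
compensator = rate * 0.5          # int f dmu = 3 * int_0^1 y dy
rng = random.Random(7)
vals = []
for _ in range(20000):
    omega = sample_omega(rate, rng)
    vals.append(sum(f(y) for y in omega) - compensator)   # int f [dP - dmu]
mean = sum(vals) / len(vals)
second = sum(v * v for v in vals) / len(vals)
exact_var = rate * (1.0 / 3.0)    # int f^2 dmu = 3 * int_0^1 y^2 dy = 1
```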

For f(x, y) in (L_1 ∩ L_2)(X^n e^X) with support in X^n × X^m, let D be the transformation defined by

D: f → ∫ f(x, y) [dP(y) − dμ(y)]^m.    (15)

This mapping extends at once to the tame functions in (L_1 ∩ L_2)(X^n e^X), by which is meant all f in (L_1 ∩ L_2)(X^n e^X) which vanish outside X^n × X^m for all sufficiently large m.

PROPOSITION 2. The duality transform D which has just been defined on the tame functions extends uniquely to a unitary map from L_2(X^n e^X) onto L_2(Ω_n).

Proof. As in [4], establishing Proposition 2 is a straightforward verification. For this reason we give the proof only in outline. Let A_1,..., A_{m+n} be disjoint sets in 𝔛 with finite μ-measure; let κ_1,..., κ_{m+n} be the corresponding characteristic functions; let

f = κ_1(x_1) ⋯ κ_n(x_n) κ_{n+1}(y_1) ⋯ κ_{n+m}(y_m),

and let f_S be the result of symmetrizing f with regard to the variables y_1,..., y_m. Allowing m and the A_i to vary, we see that finite linear combinations of such f_S are dense in L_2(X^n e^X). To see that D is an isometry on such linear combinations, one need only compute. It follows that D maps L_2(X^n e^X) isometrically into L_2(Ω_n). To show that the image is dense, we first observe, taking linear combinations of rather more general f_S than those above, that random functions of the form α(ω) = P(A_1) ⋯ P(A_{m+n}) are in the range of D. Taking power series expansions, the estimate which is given by equation (5) at the end of Section 2.3 lets us conclude that

e^{−λ_1 P(A_1) − ⋯ − λ_{m+n} P(A_{m+n})} κ_1(x_1) ⋯ κ_n(x_n)    (16)

lies in the range of D for all λ_1 ≥ 0,..., λ_{m+n} ≥ 0. Now let ℰ be the "event" consisting of all (ω, x) such that P(A_i) = k_i for k_i ≥ 0 and 1 ≤ i ≤ m + n.
RANDOM FUNCTIONS OF POISSON TYPE

4.3. Applying the Proposition twice, we obtain the following sequence of Hilbert space isomorphisms:

L₂(Ωₙ) ≅ L₂(Xⁿe^X) ≅ L₂(e^X) ⊗ L₂(Xⁿ) ≅ L₂(Ω₀) ⊗ L₂(Xⁿ).   (17)

We shall call the ensuing unitary operator

T: L₂(Ωₙ) → L₂(Ω₀) ⊗ L₂(Xⁿ)

the decoupling transformation for L₂(Ωₙ).

Proposition 2 has the following corollary.

PROPOSITION 3. For a standard Poisson process P, the decoupling transform T is the unique unitary mapping from L₂(Ωₙ) to L₂(Ω₀) ⊗ L₂(Xⁿ) which, for f(x₁,…,xₙ) in (L₁ ∩ L₂)(Xⁿ) and g(y₁,…,yₘ) in (L₁ ∩ L₂)(Xᵐ), with m = 0, 1, 2,…, has the action

T: ∫ f(x) g(y) dP(y)ᵐ → ∫ g(y) dP(y)ᵐ ⊗ f(x).   (18)

Proof. Suppose that A₁,…,Aₙ, B₁,…,Bₘ are n + m disjoint sets in 𝔛, each with finite μ-measure. Let f(x) be a constant times the characteristic function of A₁ × ⋯ × Aₙ, and let g(y) be the characteristic function of B₁ × ⋯ × Bₘ. Let g(y)_S be the function which results from symmetrizing g(y). Then L₂(Xⁿ × Xᵐ) ∩ L₂(Xⁿe^X) is the closure of the linear span of all such f(x)g(y)_S. If, in addition, we vary m, such finite linear combinations are dense in L₂(Xⁿe^X). Thus according to Proposition 2 and the sequence of isomorphisms (17) above, T is the unique unitary operator which on f(x)g(y)_S of the special form above has the action:

∫ f(x) g(y)_S [dP(y) − dμ(y)]ᵐ → ∫ g(y)_S [dP(y) − dμ(y)]ᵐ ⊗ f(x).

To carry out this mapping, we first multiply out [dP(y) − dμ(y)]ᵐ, then we integrate with regard to the dμ(yᵢ), and finally we apply the operator given by (18) above. The proposition follows. ∎

We should perhaps remark that the decoupling transform T is not trivial, because the stochastic integrals on the left and right sides of (18) involve different numbers of variables. For example, suppose that n = m = 1 and α(ω) = ∫ g(y) dP(y). Then T[α(ω)f(x)] ≠ α(ω) ⊗ f(x). Instead

T[α(ω)f(x)] = α(ω) ⊗ f(x) + 1 ⊗ f(x)g(x).

4.4. One use of Proposition 2 is to explicate the action on L₂(Ωₙ) of the group of measure preserving transformations of X onto itself. For suppose that ζ maps X onto itself and preserves the measure μ. The corresponding action on Ωₙ, which we again denote by ζ, is

ζ: (ω; x₁,…,xₙ) → (ζω; ζx₁,…,ζxₙ),

and the induced unitary operator on L₂(Ωₙ) is

U(ζ): ψ → ψ ∘ ζ⁻¹.

In spite of its simple appearance this map is difficult to treat directly. However, it is easy to explicate the action by transferring it to L₂(Xⁿe^X) via D. As an immediate consequence of the proposition, we have

D⁻¹U(ζ)D: f → f ∘ ζ̄⁻¹,

where ζ̄ is the canonical action on Xⁿe^X determined by ζ. Now suppose that G is a group of mappings which operate transitively on X. When μ(X) = ∞, it is well known that the action of G on Ω₀ is ergodic. Now in discrete analysis the random functions of order 1 behave much like the ordinary functions on X in the usual theory. It is interesting to note, therefore, that the action of G on Ω₁ need not be ergodic. To see this, take the discrete line as we described it in Section 2.5, and consider the space Ω₁. We have the continuous shift: (ω, x) → (ω + t, x + t), and the discrete shift: (ω, x) → (ω, x⁺), where x⁺ is the next member on the right in ω. It is easy to see that the joint action is ergodic. However, the continuous shifts by themselves do not act ergodically. To establish this, we take (ω, x) and, setting x = x₀, write ω in increasing order as …, x₋₁, x₀, x₁, … . Let dᵢ = xᵢ₊₁ − xᵢ. We let d = {dᵢ}, and let 𝒟 be the collection of all d. We make 𝒟 into a probability measure space by assigning an exponential distribution to each dᵢ. Then we can identify (ω; x) with (d, x) and Ω₁ with the measure space 𝒟 × ℝ. Clearly any φ(d) which does not depend on x is invariant under the continuous shift.
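The identification of Ω₁ with 𝒟 × ℝ can be made concrete in a small simulation (a sketch under illustrative assumptions; none of the names below come from the paper): the gaps of a rate-1 Poisson scatter on the line are independent Exp(1) variables, and the continuous shift translates every point while leaving the gap sequence d, and hence any φ(d), unchanged.

```python
import random

rng = random.Random(0)

# Gaps d_i of the standard (rate-1) Poisson process on the line are
# independent Exp(1) variables; cumulative sums give the points of omega.
gaps = [rng.expovariate(1.0) for _ in range(1000)]
points, pos = [], 0.0
for d in gaps:
    pos += d
    points.append(pos)

def gaps_of(pts):
    return [b - a for a, b in zip(pts, pts[1:])]

# The continuous shift omega -> omega + t moves every point...
t = 2.5
shifted = [x + t for x in points]

# ...but leaves the gap sequence d unchanged, so any phi(d) is invariant.
assert all(abs(a - b) < 1e-9
           for a, b in zip(gaps_of(points), gaps_of(shifted)))

mean_gap = sum(gaps) / len(gaps)
assert abs(mean_gap - 1.0) < 0.2  # exponential gaps have mean 1
```

This is the non-ergodicity phenomenon in miniature: a shift-invariant function of the configuration need not be constant, since it may depend on the whole gap sequence.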

5. PURE JUMP PROCESSES

5.1. The purpose of this section is to adapt parts of the preceding theory to pure jump processes. First we give the definition. Let X be a non-empty set; let 𝔛 be a σ-algebra of subsets; let ℝ denote the reals and 𝔅 the Borel sets of ℝ. A point in X × ℝ will be denoted (x, λ), and λ will also be used to denote the function (x, λ) → λ. Let μ be a σ-finite nonatomic measure on 𝔛 × 𝔅 and let P be the standard Poisson process over (X × ℝ, 𝔛 × 𝔅, μ). We shall suppose that P is taken in the scatter representation, with representation space (Ω, 𝔖, ν). Under suitable conditions on μ, we may define a pure jump process J, as has already been described in the Introduction, by setting

J(A) = ∫_{A×ℝ} λ dP.


To see what condition on μ is appropriate, consider the case X = [0, ∞) and let J_t be J([0, t]). If we take Principle I of Section 1.3 as our guide, we want J_t to have finitely many jumps (with probability 1) on any finite interval A. This, in turn, implies that μ(A × ℝ) < ∞. In the general case, therefore, we introduce the measure μ̄ by setting

μ̄(A) = μ(A × ℝ).   (19)

Then, as part of the definition of J, we shall suppose that μ̄ is a σ-finite nonatomic measure on 𝔛. Following [12], the Poisson process P will be called the counting process for J.

To establish the appropriate classes of random functions for J, we begin by considering the corresponding classes of functions for the counting process P. First let (Ωₙ, 𝔖ₙ, νₙ) be the scatter representation space for the random functions of order n, as defined in Section 2.1. We use (Ω₀, 𝔖₀, ν₀) to mean (Ω, 𝔖, ν). For n ≥ 1, a point in Ωₙ is an (ω; x₁, λ₁,…,xₙ, λₙ) where ω ∈ Ω₀ and (x₁, λ₁),…,(xₙ, λₙ) are distinct members of ω. Now 𝔖ₙ is the smallest σ-algebra with regard to which all α(ω)f(x₁, λ₁,…,xₙ, λₙ) are measurable, where α(ω) is a random variable and f is (𝔛 × 𝔅)ⁿ-measurable. For J we shall define a σ-algebra 𝔖ₙ′ of subsets of Ωₙ which at first sight appears to be smaller than 𝔖ₙ. We do this by letting 𝔖ₙ′ be the smallest σ-algebra of subsets of Ωₙ with regard to which all

J(A) g(x₁,…,xₙ)

are measurable. Here A ranges over all sets in 𝔛 with finite μ̄-measure, and g ranges over the 𝔛ⁿ-measurable functions on Xⁿ.

PROPOSITION 4. With 𝔖ₙ and 𝔖ₙ′ as just defined, 𝔖ₙ′ = 𝔖ₙ.

Proof. According to the Ito measurability theorem ([12], Theorem 2), 𝔖₀′ = 𝔖₀. For n ≥ 1, this implies that random variables α(ω), when regarded as random functions of order n, are 𝔖ₙ′ measurable. Now let f(x₁, λ₁,…,xₙ, λₙ) be (𝔛 × 𝔅)ⁿ measurable. We may also regard f as a random function. It is sufficient to show that the random function f is 𝔖ₙ′ measurable. To establish this, in its turn, we need only show that the map (ω; x₁, λ₁,…,xₙ, λₙ) → λᵢ is 𝔖ₙ′ measurable. This last is proved in the lemma of Section 5.2 below. ∎

We may now establish the appropriate classes of random functions for J. We let Ωₙ′ be the set of all

(ω; x₁,…,xₙ),

where ω lies in Ω₀, where x₁,…,xₙ are distinct members of X, and where also there exist numbers λ₁,…,λₙ so that (x₁, λ₁),…,(xₙ, λₙ) all lie in ω.


Now consider the map from Ωₙ to Ωₙ′:

(ω; x₁, λ₁,…,xₙ, λₙ) → (ω; x₁,…,xₙ).

It sends the measure νₙ on Ωₙ to a measure νₙ′ on Ωₙ′. Proposition 4 says that the map is a measure space isomorphism. Accordingly we shall identify Ωₙ and Ωₙ′. We define a random function of Poisson type and order n relative to J to be any 𝔖ₙ-measurable function on Ωₙ. However we shall usually write such a function as ψ(ω; x₁,…,xₙ).

5.2. LEMMA 1. With the notation of Section 5.1, for 1 ≤ i ≤ n, the map (ω; x₁, λ₁,…,xₙ, λₙ) → λᵢ is 𝔖ₙ′ measurable.

Proof. Let φ(ω; x, λ) be a random function of order 1. Define a random function ψ of order n by ψ(ω; x₁, λ₁,…,xₙ, λₙ) = φ(ω; xᵢ, λᵢ). In this way we imbed the random functions of order 1 into those of order n, and in doing so, imbed 𝔖₁ in 𝔖ₙ and 𝔖₁′ in 𝔖ₙ′. It is enough to show that (ω; x, λ) → λ is 𝔖₁′ measurable.

Case I. μ(X × ℝ) < ∞. We realize Ω₁ in the exponential representation as given in Section 2.3. Let m ≥ 1. Let p: (x₁, λ₁,…,xₘ, λₘ) → (xᵢ, λᵢ). Then Ω₁ is the union of the spaces ((X × ℝ)ᵐ, p). We can identify a typical point of ((X × ℝ)ᵐ, p) by writing it, not as we did formerly as (ω, p), but as

(ω; x, λ).   (20)

Now we select a sequence 𝔓₁, 𝔓₂,… of successively finer partitions of X in the following way. Using the measure μ̄ defined in (19), we first divide X into two disjoint sets A₁,₁, A₁,₂ of equal μ̄-measure and set 𝔓₁ = {A₁,₁, A₁,₂}; then we partition A₁,₁ and A₁,₂ each into two parts of equal μ̄-measure, obtaining 𝔓₂ = {A₂,₁, A₂,₂, A₂,₃, A₂,₄}; and so on. We shall denote 𝔓ₙ by (Aₙ,₁,…,Aₙ,N), where N = 2ⁿ. Now let

ψₙ(ω; x, λ) = Σᵣ J(Aₙ,ᵣ)(ω) χ_{Aₙ,ᵣ}(x).   (21)

Clearly ψₙ is 𝔖₁′ measurable. We will complete the proof for Case I by showing that ψₙ(ω; x, λ) → λ pointwise a.e. as n tends to infinity. We first choose n sufficiently large so that 2ⁿ > m. Then we decompose the space ((X × ℝ)ᵐ, p) as the union of rectangles of the form

Y₁ × ⋯ × Yₘ,   (22)

where each Yⱼ = Aₙ,ᵣ × ℝ for some r. For (ω; x, λ) in Ω₁, when ω lies in a rectangle of the form (22) with all the Yⱼ distinct, (21) tells us that ψₙ(ω; x, λ) = λ. Thus all we need to do is estimate the sum of the measures of those rectangles (22) with Yⱼ = Yₖ for some j ≠ k. Now each such rectangle is contained in a rectangle K = Z₁ × ⋯ × Zₘ, where two of the Zⱼ are the same Aₙ,ᵣ × ℝ and the remainder are all equal to X × ℝ. There are m(m − 1)2ⁿ⁻¹ such rectangles K, and according to (4) above, each has νₘ-measure

(m!)⁻¹ 2⁻²ⁿ exp[−μ(X × ℝ)] μ(X × ℝ)ᵐ.

Let Λₙ be the union of all rectangles K. Then Λₙ has measure at most α/2ⁿ, where α depends on m but not on n. Because ψₙ(ω; x, λ) = λ on the complement of Λₙ in ((X × ℝ)ᵐ, p), we see that ψₙ(ω; x, λ) → λ pointwise a.e. on each ((X × ℝ)ᵐ, p). Hence the lemma holds in Case I.

Case II. μ(X × ℝ) = ∞. Choose A ∈ 𝔛 with μ(A × ℝ) < ∞. For i = 0 or 1, denote the representation spaces for the random functions of order i which result from restricting P to A × ℝ by Ωᵢᴬ, and those which result from restricting P to (X − A) × ℝ by Ωᵢ′. Then according to (7) in Section 2.5 above,

Ω₁ = (Ω₁ᴬ × Ω₀′) ∪ (Ω₀ᴬ × Ω₁′).

Let 𝔖₁′(A) denote the smallest σ-algebra of subsets of Ω₁ with regard to which all J(B)f(x)χ_A(x) are measurable, with B ⊂ A. All such functions vanish on Ω₀ᴬ × Ω₁′ and have the form φ × 1 on Ω₁ᴬ × Ω₀′. Now let ψ: (ω; x, λ) → λ and let ψ_A = χ_A(x)ψ. By Case I, ψ_A is 𝔖₁′(A) measurable and hence 𝔖₁′ measurable. Finally let A increase to X. Then ψ_A tends pointwise to ψ, and the 𝔖₁′ measurability of ψ is established. ∎
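The approximation ψₙ → λ in Case I can be watched in a small simulation (an illustrative sketch; the finite configuration on X = [0, 1) and Gaussian marks are assumptions of the example, not from the paper): once the dyadic partition separates the points of ω, the cell sum J(Aₙ,ᵣ) at x recovers the mark λ exactly.

```python
import random

rng = random.Random(1)

# A finite marked configuration omega = {(x_i, lam_i)} with x_i in [0, 1).
omega = [(rng.random(), rng.gauss(0.0, 1.0)) for _ in range(8)]

def psi(n, x):
    """psi_n(omega; x, lam): the value J(A) of the cell A of the level-n
    dyadic partition containing x, i.e. the sum of the marks lam_j of the
    points of omega lying in that cell."""
    cell = int(x * 2 ** n)
    return sum(lam for (y, lam) in omega if int(y * 2 ** n) == cell)

# Refine until the partition separates the points of omega; from then on
# psi_n(omega; x_i, lam_i) = lam_i exactly, as in the proof of the lemma.
n = 1
while len({int(y * 2 ** n) for (y, _) in omega}) < len(omega):
    n += 1
for (x, lam) in omega:
    assert psi(n, x) == lam
```

The event that two points share a cell is the analogue of the exceptional set Λₙ above; its probability decays geometrically in n.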

6. THE NORMAL WIENER PROCESS

6.1. As a preliminary to discussing how the normal Wiener process and the jump processes are related, we recount some essentials about normal processes and their spaces of random functions. Let X be a non-empty set; let 𝔛 be a σ-algebra of subsets of X; and let v be a σ-finite nonatomic measure on 𝔛. Let N be the normal Wiener process with mean 0 and variance v, as described in the Introduction (see Section 1.2). When it comes to actually realizing N, authors sometimes make heavy going of it. What is simplest is to prove an existence theorem for N. This may be done, for example, in the manner of Segal [7] and [8], by taking the normal distribution with unit variance on the Hilbert space L₂(X, v). Now let (Δ, 𝔇, η) be the probability measure space for N. Thus 𝔇 is the smallest σ-algebra of subsets of Δ for which all N(A), with v(A) < ∞, are

measurable. Now let 𝔑 denote the ideal of η-null sets in 𝔇. The pair (𝔇/𝔑, η) is uniquely determined by v and does not depend upon which representation is chosen. This last follows readily when we consider the characteristic function ∫ exp[itN(A)] dη. Now, instead of the random variable N(A) on the probability measure space Δ, it is only slightly more abstract to take the operator on L₂(Δ) which results from multiplication by N(A). Accordingly, we shall say that we have an algebraic version of N if we have the following structural elements: a Hilbert space H with distinguished unit vector e, a map Ψ from {A ∈ 𝔛: v(A) < ∞} to selfadjoint operators on H, and, in addition, a unitary operator which carries H onto L₂(Δ), e onto 1, and Ψ(A) onto the operation of multiplication by N(A). Algebraic versions of N have been implicit since square integrable random functions for the normal Wiener process on the line were first expanded in terms of Hermite functionals, and they have been used explicitly, except for trivial differences in language, since the publication of [7] and [8].

6.2. Let n ≥ 1 and let f(y₁,…,yₙ) ∈ L₂(Xⁿ). The theory of the multiple stochastic integral

∫ f(y) dN(y)ⁿ   (23)

is due to Ito [3]. The essence of Ito's treatment is as follows. When A₁,…,Aₙ are disjoint sets in 𝔛 with finite v-measure, and f(y) is the characteristic function of A₁ × ⋯ × Aₙ, then (23) is defined to be N(A₁) × ⋯ × N(Aₙ). Then the stochastic integral is the unique linear extension which is continuous from L₂(Xⁿ) to L₂(Δ). Below we shall need the following simple special case, which we give without proof. Suppose that v(A) < ∞. Then

∫ χ_{A×A}(y₁, y₂) dN(y)² = N(A)² − v(A).   (24)
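Formula (24) can be sanity-checked by simulation (an illustrative sketch, not part of Ito's argument): sampling N(A) from a centered Gaussian with variance v(A), the quantity N(A)² − v(A) has mean zero and is uncorrelated with N(A) itself, as befits an element of the second chaos.

```python
import random

# Monte Carlo check of (24): N(A) ~ Gaussian(0, v(A)), and the double
# integral equals N(A)^2 - v(A): mean zero, orthogonal to the first chaos.
rng = random.Random(3)
vA, trials = 2.0, 200000

mean_acc, cross_acc = 0.0, 0.0
for _ in range(trials):
    nA = rng.gauss(0.0, vA ** 0.5)
    h2 = nA * nA - vA        # the integral (24)
    mean_acc += h2
    cross_acc += nA * h2     # pairing against N(A)

mean = mean_acc / trials
cross = cross_acc / trials
assert abs(mean) < 0.05   # E[N(A)^2 - v(A)] = 0
assert abs(cross) < 0.15  # E[N(A) (N(A)^2 - v(A))] = 0
```

Subtracting v(A) is what makes the double integral orthogonal to constants and to the first chaos; it is the same compensation that appears in [dP − dμ]ᵐ for the Poisson case.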

Duality, in the form in which we shall need it, is also due to Ito [3]. Take (X, 𝔛, v) and form L₂(e^X) just as we did in Section 4.2. Then the duality transform D_N is the unitary map from L₂(e^X) to L₂(Δ) whose restriction to f(y₁,…,yₘ) with support in Xᵐ ⊂ e^X is given by

D_N: f → ∫ f dNᵐ.   (25)

Now let f be in L₂(X). When the operation of multiplication by ∫f dN is carried back via D_N⁻¹ to a selfadjoint operator on L₂(e^X), we obtain the operator Q(f) in the Fock representation of quantum field theory. In other words, specifying the Q(f) of the Fock representation amounts to producing an algebraic version of the normal Wiener process.

6.3. We now establish a product formula for stochastic integrals relative to N.


For this purpose let f(y₁,…,yₘ) ∈ L₂(Xᵐ) and g(z₁,…,zₜ) ∈ L₂(Xᵗ). Now carry over the special notation γ, r(γ), Γ, and (f # g)_γ, which was introduced in Section 4.1 for the Poisson case. In addition, let the reduced tensor (f ⊗ g)_γ be defined by

(f ⊗ g)_γ = ∫ (f # g)_γ dv(w₁) ⋯ dv(w_{r(γ)}).   (26)

We give without proof the following elementary lemma.

LEMMA 2. The map f × g → (f ⊗ g)_γ is continuous from L₂(Xᵐ) × L₂(Xᵗ) to L₂(X^{m+t−2r(γ)}).

We have the following analogue of formula (14) (Proposition 1) for the Poisson case:

∫ f(y) dN(y)ᵐ ∫ g(z) dN(z)ᵗ = Σ_{γ∈Γ} ∫ (f ⊗ g)_γ dN^{m+t−2r(γ)}.   (27)

Proof. To establish this formula, first suppose that f(y) and g(z) are the characteristic functions of rectangles A₁ × ⋯ × Aₘ and B₁ × ⋯ × Bₜ, where A₁,…,Aₘ are disjoint and so are B₁,…,Bₜ. If, in addition, for each pair i, j, Aᵢ and Bⱼ either coincide or are disjoint, the formula follows directly from (24). Further, we can reduce the more general case to this special one by subdividing the Aᵢ and Bⱼ and taking linear combinations of characteristic functions. We can now extend the formula to the case when both f(y) and g(z) are linear combinations of characteristic functions of rectangles, and thence to the general case by taking L₂ limits using Lemma 2. ∎

When f is symmetric in the variables y₁,…,yₘ and g is symmetric in the variables z₁,…,zₜ, then (f ⊗ g)_γ depends only on the order of γ. In this case, if r = r(γ), we shall write (f ⊗ g)ᵣ instead of (f ⊗ g)_γ. Now when 0 ≤ r ≤ min(m, t), the number of distinct γ with r(γ) = r is

(m, t; r) = m! t! / [(m − r)! (t − r)! r!],

and (27) becomes

∫ f dNᵐ ∫ g dNᵗ = Σᵣ (m, t; r) ∫ (f ⊗ g)ᵣ dN^{m+t−2r}.   (28)

6.4. The thing to be said about the theory of random functions of order n (Wiener type) relative to N is that it is trivial. Thus the space of square-integrable random functions f(ω; x₁,…,xₙ) is just L₂(Δ) ⊗ L₂(Xⁿ) (or H ⊗ L₂(Xⁿ) if an algebraic version of N is being taken). The formulae (27) and (28) extend at once to random functions f(x, y) and g(x, z) which are square-integrable in y and z for almost all x. Further, identifying L₂(Xⁿe^X) with L₂(e^X) ⊗ L₂(Xⁿ), we obtain the duality transform

D_N: L₂(Xⁿe^X) → L₂(Δ) ⊗ L₂(Xⁿ).   (29)

This is just D_N ⊗ I, where D_N now denotes the duality transform from L₂(e^X) to L₂(Δ) which is given by (25) above.
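The combinatorial coefficient in (28) can be verified by brute-force enumeration (an illustrative sketch; the encoding of γ as a set of index pairs is an assumption of the example): counting the sets of r disjoint pairs between m symbols and t symbols reproduces m!t!/[(m − r)!(t − r)!r!].

```python
from math import factorial

def pairings(m, t, r):
    """All gamma: sets of r pairs (i, j), the i drawn without repetition
    from range(m) and the j drawn without repetition from range(t)."""
    if r == 0:
        return [frozenset()]
    out = set()
    for smaller in pairings(m, t, r - 1):
        used_i = {i for (i, _) in smaller}
        used_j = {j for (_, j) in smaller}
        for i in range(m):
            for j in range(t):
                if i not in used_i and j not in used_j:
                    out.add(smaller | {(i, j)})
    return list(out)

# The number of distinct gamma with r(gamma) = r is the coefficient
# (m, t; r) = m! t! / [(m - r)! (t - r)! r!] appearing in (28).
for m in range(5):
    for t in range(5):
        for r in range(min(m, t) + 1):
            expected = (factorial(m) * factorial(t)
                        // (factorial(m - r) * factorial(t - r) * factorial(r)))
            assert len(pairings(m, t, r)) == expected
```

For instance there are 36 ways to pair two of three y-variables with two of four z-variables, matching 3!·4!/(1!·2!·2!) = 36.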

7. APPROXIMATELY NORMAL JUMP PROCESSES

7.1. Let (X, 𝔛, v) be a nonatomic σ-finite measure space and let N be the normal Wiener process over X with mean 0 and variance v. We proceed to discuss the class of pure jump processes J which approximate N. To recapitulate the definitions given in the Introduction, Section 1.2, let ℝ denote the real numbers; let 𝔅 denote the Borel sets in ℝ; and let m be a probability measure on 𝔅 with support in ℝ − {0}, mean 0, and variance σ. Suppose that m has the following additional properties:

(i) m is invariant under reflection in the origin;
(ii) ∫ λ⁴ dm(λ) < ∞.

This second condition is just a technical convenience. It is sufficiently broad for the ensuing theory to include the cases which matter, particularly Gaussian jump processes and generalized random walks. Again we let μ be the measure on 𝔛 × 𝔅 which is given by

dμ = σ⁻¹ dv × dm.   (2)

We let P be the standard Poisson process on X × ℝ with mean μ. We use (x, λ) to denote an arbitrary point in X × ℝ and, for each A with v(A) finite, we define J(A) by

J(A) = ∫_{A×ℝ} λ dP.

Then J(A) has mean 0 and variance v(A). The process J will be called a pure jump process approximating N. In order to have a name for the class of processes arising in this way, we shall also say that J is approximately normal. We do this even though from a conventional viewpoint the processes J and N seem quite different. We will suppose that the random functions of order n relative to J are realized in the scatter representation on (Ωₙ, 𝔖ₙ, νₙ).

7.2. We now discuss duality for J. Consider the measure space (X × ℝ, 𝔛 × 𝔅, μ) and form the related exponential measure spaces

e^{X×ℝ}  and  (X × ℝ)ⁿ e^{X×ℝ}.
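The definition of J can be exercised numerically (an illustrative sketch; the two-point jump law m = ½(δ_{√σ} + δ_{−√σ}) and the normalization dμ = σ⁻¹ dv × dm as reconstructed above are assumptions of the example): J(A) is a compound Poisson sum of roughly v(A)/σ jumps of size ±√σ, with mean 0 and variance v(A) for every σ, and it approaches N(A) in distribution as σ → 0.

```python
import random

def sample_J(vA, sigma, rng):
    """One sample of J(A): a Poisson(v(A)/sigma) number of jumps, each of
    size +/- sqrt(sigma) with probability 1/2 (a symmetric jump law with
    mean 0 and variance sigma, supported off the origin)."""
    # Poisson point count in A x R via exponential inter-arrival times.
    count, total = 0, rng.expovariate(1.0)
    while total < vA / sigma:
        count += 1
        total += rng.expovariate(1.0)
    step = sigma ** 0.5
    return sum(step if rng.random() < 0.5 else -step for _ in range(count))

rng = random.Random(2)
vA, sigma, trials = 1.0, 0.01, 20000
samples = [sample_J(vA, sigma, rng) for _ in range(trials)]
mean = sum(samples) / trials
var = sum(s * s for s in samples) / trials
assert abs(mean) < 0.05       # E J(A) = 0
assert abs(var - vA) < 0.05   # Var J(A) = v(A), independently of sigma
```

Small σ means many small jumps, which is the "approximately normal" regime; this is the generalized-random-walk picture mentioned in Section 8.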


A natural duality arises in connection with the counting process P, namely the isomorphism

D: L₂((X × ℝ)ⁿ e^{X×ℝ}) → L₂(Ωₙ),   (30)

whose existence is asserted in Proposition 2. The special case when J was a Gaussian jump process and n = 0 was discussed by the author in [9] in connection with quantum field theory. The essential point in that treatment was that it is not the elements of L₂(e^{X×ℝ}) which correspond to states of a Boson field but rather those of L₂(e^X). This leads to the observation that there is a natural isometric imbedding of L₂(e^X) into L₂(e^{X×ℝ}). To proceed in the same way here, we let f(x₁,…,xₙ, y₁,…,yₘ) be any square integrable function on X^{n+m} which is symmetric in the variables y₁,…,yₘ. Further we let (y₁, λ₁),…,(yₘ, λₘ) be related pairs of variables over X × ℝ. Then the map

θ: f(x, y) → σ^{n/2} λ₁ ⋯ λₘ f(x, y)   (31)

determines an isometric imbedding of L₂(Xⁿe^X) into L₂((X × ℝ)ⁿ e^{X×ℝ}). To see this, observe that because the auxiliary measure m used in the definition of μ (see equation (2)) is a probability measure with variance σ, we have the two special properties:

∫_X |k(x)|² dv(x) = σ ∫_{X×ℝ} |k(x)|² dμ(x, λ),

∫_X |k(y)|² dv(y) = ∫_{X×ℝ} λ² |k(y)|² dμ(y, λ).

These properties insure that θ maps L₂(Xⁿ × Xᵐ) isometrically into L₂((X × ℝ)ⁿ × (X × ℝ)ᵐ) for m = 0, 1, 2,…. Notice, in particular, that for m = 0, θ is just the imbedding which sends f(x) on Xⁿ to σ^{n/2} f(x) on (X × ℝ)ⁿ. Now set

D_J = D ∘ θ.   (32)

Then D_J: L₂(Xⁿe^X) → L₂(Ωₙ) and D_J is an isometric imbedding. Following [9] we shall call the image set the space of labelled random functions of order n relative to J, and we shall denote it by 𝔏ₙ.

The duality transform D_J, which is given in (32), is apparently quite complicated because, according to Proposition 2, it involves integration with regard to the mixed stochastic-and-ordinary measure [dP − dμ]ᵐ. However, the restriction that the measure m used in constructing μ be symmetric about the origin insures that ∫λᵢ dm(λᵢ) = 0. This means that in computing D_J all terms involving integration with regard to some dm(λᵢ) drop out. Thus, for f(x, y) in (L₁ ∩ L₂)(Xⁿ × Xᵐ), we have

D_J f(x, y) = σ^{n/2} ∫ λ₁ ⋯ λₘ f(x, y) dP(y, λ)ᵐ.   (33)

We can write this analogously to formula (25) for the normal case by setting

∫ f(x, y) dJ(y)ᵐ = ∫ λ₁ ⋯ λₘ f(x, y) dP(y, λ)ᵐ.

Then (32) may be written

D_J: f → σ^{n/2} ∫ f(x, y) dJ(y)ᵐ.   (34)

7.3. For n = 0, 1, 2,…, let Eₙ be the projection of L₂(Ωₙ) onto 𝔏ₙ. For ψ ∈ L₂(Ωₙ), Eₙψ will be called the normal expectation of ψ. Here follows an example of the computation of a normal expectation. Suppose that g(y₁,…,yₘ) is in L₁ ∩ L₂ of Xᵐ and let α(ω) = ∫ g dJ(y)ᵐ in Ω₀. Let f(x₁,…,xₙ) be a bounded function which vanishes off some rectangle of finite measure on Xⁿ. Then α(ω)f(x) is a function on Ωₙ. It does not lie in 𝔏ₙ, but we do have

Eₙ[α(ω) f(x)] = ∫ f(x) g(y) dJ(y)ᵐ.   (35)

In establishing (35) there is no loss in supposing that g(y) is symmetric in y₁,…,yₘ. It is convenient also to rename the auxiliary variables λ. Thus we shall associate xᵢ with λᵢ for i = 1,…,n and yⱼ with λⱼ′ for j = 1,…,m. Let h = λ₁′ ⋯ λₘ′ g(y₁,…,yₘ). Then α(ω) = ∫ h dPᵐ. Now regard α(ω) as a random function of order n. Proceeding as we did in Section 4.1, we may express α(ω) as a sum of stochastic integrals of ordinary functions. For this purpose let

γ = {(x_{i(1)}, y_{j(1)}),…,(x_{i(r)}, y_{j(r)})}

be as in formula (13), and pair off the λᵢ and λⱼ′ correspondingly. Let r(γ) be the order of γ and let Γ be the set of all γ. Then as in (13) we can write

α(ω) = Σ_{γ∈Γ} ∫ h_γ dP^{m−r(γ)}

and

α(ω)f(x) = Σ_{γ∈Γ} ∫ f(x) h_γ dP^{m−r(γ)},   (36)

the integrals being with regard to the dP(yⱼ, λⱼ′). The particular term in (36) which corresponds to γ = ∅ is just the right side of (35). Thus to verify (35) we must show that every other term in (36) is orthogonal to 𝔏ₙ. Fix γ ≠ ∅ and set r(γ) = r. Then with some renumbering of variables we can write f(x)h_γ as

λ₁ ⋯ λᵣ λ′ᵣ₊₁ ⋯ λ′ₘ f(x₁,…,xₙ) g(x₁,…,xᵣ, yᵣ₊₁,…,yₘ).   (37)

This expression is to be integrated with regard to dP(yᵣ₊₁, λ′ᵣ₊₁) ⋯ dP(yₘ, λ′ₘ). The restrictions on f and g imply that f(x₁,…,xₙ)g(x₁,…,xᵣ, yᵣ₊₁,…,yₘ) is in (L₁ ∩ L₂)(Xⁿ × X^{m−r}). It follows that the term corresponding to γ in (36) may be written

∫ λ₁ ⋯ λᵣ f(x) g(x, y) dJ(y)^{m−r}.   (38)

(38)

Now transfer both (38) and I?, itself to La((X x R)ll exxR) via D-l the duality transform associated with P. Then (38) g oes to a multiple of the expression (37) now while !&, goes to the image of L,(Xnex) under 8. The stated orthogonality follows from the definition of the measure p. 7.4. There will now be described for ] a product formula which is analogous to formula (27) for normal Wiener processes. The principle difference is that, for /, the product is followed by the normal expectation. Suppose thatf(y, ,..., ym) is measurable on X l)l. To avoid analytical difficulties we shall suppose thatf(y) vanishes off some rectangle of finite measure. Similarly suppose that g(z, ,..., zt) is measurable and vanishes off a rectangle of finite measure on Xt. We consider

(39) and use Proposition 1 to expand it as a sum of stochastic integrals relative to powers of dP. For i =T l,..., m let the auxiliary variable of yi be Xi , and forj = I,..., t let the auxiliary variable of .zi be hj’ . As in Proposition I let

be a set of r pairs with yifk) drawn without repetition from yi ,..., ynr and the zitk) are drawn without repetition from zi ,..., z1 . We shall suppose that the Xj and the XI are paired correspondingly. Let (f # g), be as defined in connection

26

DAVID

SHALE

with Proposition 1, and let r(y) and r have their usual meanings. Then it follows from the proposition that (39) may be written as ,$;. J (ty#

Bg), dP’)l+t-r’y).

(40)

We proceed to compute &,[~fd~ sg dJ”] and begin by examining a typical term in (39). To avoid notational difficulties we shall suppose that y = (Ym 9 zt)>. We have additional variables wr ,..., zu, . Assobn-r+1 > %r+&, ciate with these the corresponding auxiliary variables A;,..., I\:. Then (ef# eg),, may be written:

h; .‘. /\:,-,A; ... x;:_,(x;)’ ... (h:‘y x f(y1 ,..., y,,-t. >WI,..., q.)g(q

>..*, Zt-r

> Wl 9.0.) w).

(41)

To ensure that J(ef# Bg), dPnz+l-r shall lie in the domain of Es, that is in L,(QJ, certain conditions must be imposed. One has already been imposed in the definition of TV,namely that J”” dm < co. As a second condition, we shall suppose that f is square integrable and that g is bounded. Now set dQ = dP - dp. Then dP = dQ + dp and dpn’+t-’

=

[dQ

+

&]m+t-r,

Suppose that the right side has been multiplied out and expressed as monomial terms. Consider what happens when a typical term is used to (40). If a term involves d&vi, Ai) for i = I,..., m - Y or d,u(z,h;) I,..., t - Y, the contribution is zero because J/\ dm = 0. Thus we consider a typical monomial of the form dQ(y,

, A;) -..,dQ(zt-, x dQ(wl+l -.Y-

a sum of integrate for j = need to

, XF-2 dp(w, , X’) ... dtL(wi >V) -_-

, A;;,) ... dQ(w, , K’),

(42)

where 0 < 1 < Y. In particular when I = Y the last group of factors is missing. In this case the result of integrating (41) may be written as

s

(f 0 g), dJ”‘+*-r>

(43)

where (f @g), is the reduced tensor defined in for the normal case. The expression (43) already We shall now show that when 0 < I < Y the regard to (41) is othogonal to !i?a_ For simplicity since this case is sufficiently representative. In

connection with formula (27) lies in .& . result of integrating (41) with we shall suppose that 1 = 0 this case, the middle group of

RANDOM FUNCTIONS OF POISSON TYPE

27

factors is missing from (42). Now suppose that (41) has been integrated with regard to (42). Apart from a constant factor, the resulting random variable is the image under the duality transform D a certain function in L2(erXR), namely that function on (X X R)“L+t-r which results from symmetrizing (41). The presence of the factors (hr)” ... (A:)’ in (41) insures that this function will be orthogonal to B(L,(e*)) in L,(eXXR ). The stated orthogonality is established. We have shown that

(2”( j f(y) dJ(y)“’ j ‘d.4 dly)

= ,; j (f ‘3 ‘4%dJm+t-rcv)-

(43)

If we further suppose thatf(y, ,...,ym) and g(zl ,..., zt) are symmetric functions, define (f @g)7 and introduce (m,‘) as we did for formula (28) for the normal case, we obtain the following analogue of that formula:

The following

slightly

more complicated

case will be needed in Section 8.

PROPOSITION 5. Let f(yl ,..., y,,J be a symmetric and square integrable function on XI” which vanishes outside some rectangle in Xril with finite measure. Let Jf(y) d-f(y)” denote the corresponding random variable. Letg(x, ,..., x, , z1 ,..., zt) be a bounded measurable function on Xntt which vanishes off some rectangle of jinite measure and is symmetric in the variables z1 ,..., zt . Let sg(x, z) dJ(z)t denote the corresponding random function of Poisson type. Then j f(v) is in L,(Q,)

dl(y)“’

j- &,

4 dJ(4’

and

e’, (1 f (-v) d](y)”

I g(x, z) Jj(+)

= T (“, “) j (f @gP)r dl”+“,

(45)

where, on the right side all variables are integrated except xl ,..., xn . The proof is a straightforward combination establish (35) and (44). Details are omitted.

of the arguments

needed to

8. ON THE CONNECTION BETWEEN NORMAL AND APPROXIMATELY NORMAL WIENER PROCESSES

8.1. Let (X, 𝔛, v) be a σ-finite nonatomic measure space and let N be the corresponding normal Wiener process as defined in Section 6. Let J be a pure jump process approximating N as defined in Section 7. To compare the theories relative to N and J, we set up a duality transform between the spaces of labelled random functions of Poisson type relative to J and the corresponding spaces of random functions (Wiener type) relative to N. Specifically, for n = 0, 1, 2,…, let D̃ₙ be the map from 𝔏ₙ onto L₂(Δ) ⊗ L₂(Xⁿ) which is given by

D̃ₙ = D_N D_J⁻¹.   (46)

Here D_J is the duality transform from L₂(Xⁿe^X) to 𝔏ₙ which is given by (32) and (33), and D_N is the duality transform from L₂(Xⁿe^X) to L₂(Δ) ⊗ L₂(Xⁿ) which is given by (25) for n = 0 and by (29) for n ≥ 1.

Fix n. For m = 0, 1, 2,…, let f(x₁,…,xₙ; y₁,…,yₘ) be any square integrable function on X^{n+m}. The random function

∫ f(x, y) dJ(y)ᵐ

lies in 𝔏ₙ. It will be called an algebraic element of order m. The linear span of all such will be called the algebraic elements in 𝔏ₙ. Similarly, the linear span of all ∫f(x, y) dN(y)ᵐ will be called the algebraic elements in L₂(Δ) ⊗ L₂(Xⁿ). As an easy consequence of formulae (25) and (33), we give without proof the following.

PROPOSITION 6. For n = 1, 2,…, the duality transform D̃ₙ is the unique unitary map from 𝔏ₙ to L₂(Δ) ⊗ L₂(Xⁿ) whose action on the algebraic elements of each order is given by

D̃ₙ: σ^{n/2} ∫ f(x, y) dJ(y)ᵐ → ∫ f(x, y) dN(y)ᵐ.   (47)

8.2. We now adopt for both N and / the algebraic point of view described in Section 6.1. Thus if I+G is a random function of order n for J, we shall consider the multiplication operator M& on L2(Qn). Now the projection E, of L2(Qnn) onto !& gives rise to the completely positive map ,4 + EnA@, from operators on L2(SZn) to operators on p!, . We shall be particularly concerned with what happens to M* under this map. That is we study the map: M* --t [EnM&]

-,

(48)

where 5 denotes operator closure. It is necessary to consider operator closure in (48) because the M, that we treat are not in general bounded operators. The first result concerns random variables. That is the case n = 0. Let -4 be any set in X with v(A) < co. Let

Then we have the following

extension

of Theorem

I of [9].

R4NDOM FUNCTIONS THEOREM

approximating

OF POISSON TYPE

29

1. Let N be a normal Wiener process and let / be a pure jump process N. Then the map Y(A) is an algebraic version of N.

Proof. As we defined it in Section 6.1, an algebraic version for $N$ has the following features: a Hilbert space $H$, selfadjoint operators $N(A)$ on $H$, a distinguished unit vector $e$ in $H$, and a unitary map from $H$ to $L_2(\Delta)$. This map carries $N(A)$ to $M_{N(A)}$ and $e$ to $1$. In our case $H$ is $\mathfrak{L}_0$, $e$ is the function $1$ on $\Omega_0$, and the unitary map is the operator $D$ described by (46) above. Clearly $D$ carries $1$ on $\Omega_0$ to $1$ on $\Delta$. Now let $t \geq 0$ and let $g(z_1,\dots,z_t)$ be a measurable function on $X^t$. Suppose that $g(z_1,\dots,z_t)$ is bounded, symmetric, and that it vanishes off a rectangle of finite measure. We may compute

$$J(A)\int g(z)\,dJ(z)^t \qquad\text{and}\qquad M_{N(A)}\int g(z)\,dN(z)^t$$

by using formulae (43) and (28) respectively. The answers are the same except that for the first the individual terms involve integrals with regard to powers of $dJ$ while for the second the corresponding terms involve the same powers of $dN$. Then according to Proposition 6, $D\,J(A)\,D^{-1}$ and $M_{N(A)}$ agree on all algebraic elements of the form $\int g(z)\,dN(z)^t$. To conclude the proof it is sufficient to show that $M_{N(A)}$ is essentially selfadjoint on the linear span of all such elements. Call this domain $\mathfrak{D}$. We establish that $(M_{N(A)} \mid \mathfrak{D})$ is essentially selfadjoint by turning it into a question about free Boson fields. We do this by transferring everything from $L_2(\Delta)$ to $L_2(e^X)$ via $D^{-1}$, the inverse of the duality transform given by (25). It is well known that under this map $M_{N(A)}$ goes to the field operator $Q(f)$ with $f(y) = K_A(y)$, the characteristic function of $A$. (An account is given in [9].) The image under $D^{-1}$ of the domain $\mathfrak{D}$ is the linear span of all $g(z_1,\dots,z_t)$ in $L_2(e^X)$, where $t = 0, 1, 2,\dots$. Call this $\mathfrak{D}'$. Let $L_2(X^t)_s$ denote all square integrable symmetric functions on $X^t$. Thus $L_2(X^t)_s \subset L_2(e^X)$. Observe that $\mathfrak{D}' \cap L_2(X^t)_s$ is dense in $L_2(X^t)_s$. Observe also that $(Q(f) \mid L_2(X^t)_s)$ is bounded. It follows that $(Q(f) \mid \mathfrak{D}')$ has each $L_2(X^t)_s$ in its domain. But a result¹ of Cook [2] says that $Q(f)$ is essentially selfadjoint on the linear span of all $L_2(X^t)_s$. We conclude that $(M_{N(A)} \mid \mathfrak{D})$ is essentially selfadjoint. ∎

¹ Because [2] is somewhat difficult to read, it may be helpful to sketch here another well-known proof. Write $H = L_2(X)$ so that $L_2(e^X)$ becomes the symmetric tensor algebra $e^H$. Observe that if $H \cong H_1 \oplus H_2$ then $e^H \cong e^{H_1} \otimes e^{H_2}$. This reduces the question of the selfadjointness of $Q(f)$ to the case when $Q(f)$ acts on $e^{H_1}$ with $H_1$ the one-dimensional subspace spanned by $f$. Now apply the duality transform given by Segal in [7]. This reduces the question to that of showing that multiplication by $x$ is essentially selfadjoint on the linear span of the Hermite functions in $L_2$ of the real line. This argument was drawn to the author's attention by E. Nelson.
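The footnote's final reduction, to multiplication by $x$ on the linear span of the Hermite functions, can be checked concretely: in the Hermite basis, multiplication by $x$ acts tridiagonally, which is exactly the matrix form of a field operator in a number basis. The following Python sketch (an illustration only, not part of the original argument) verifies the three-term recurrence numerically:

```python
import numpy as np

# Orthonormal Hermite functions h_n on L^2(R), built by the standard recurrence
# h_n(x) = sqrt(2/n) x h_{n-1}(x) - sqrt((n-1)/n) h_{n-2}(x),
# starting from h_0(x) = pi^(-1/4) exp(-x^2/2).
def hermite_functions(nmax, x):
    h = [np.pi ** -0.25 * np.exp(-x ** 2 / 2)]
    h.append(np.sqrt(2.0) * x * h[0])
    for n in range(2, nmax + 1):
        h.append(np.sqrt(2.0 / n) * x * h[n - 1] - np.sqrt((n - 1) / n) * h[n - 2])
    return h

x = np.linspace(-6, 6, 2001)
h = hermite_functions(12, x)

# Multiplication by x is tridiagonal in this basis:
# x h_n = sqrt(n/2) h_{n-1} + sqrt((n+1)/2) h_{n+1},
# the same matrix shape as a creation-plus-annihilation field operator.
for n in range(1, 11):
    lhs = x * h[n]
    rhs = np.sqrt(n / 2.0) * h[n - 1] + np.sqrt((n + 1) / 2.0) * h[n + 1]
    assert np.max(np.abs(lhs - rhs)) < 1e-10
```

The assertions pass to machine precision, since the recurrence used to build the $h_n$ is algebraically equivalent to the tridiagonal relation being tested.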

30

DAVID SHALE

The realization of $N$ described by Theorem 1 has the attraction that everything is explicitly given. For example, if $\nu(X) < \infty$, we may use the exponential representation to realize the counting process associated with $J$, and hence $J$ itself and thus $N$. In particular we may realize Wiener's Brownian motion process on $[0, 1]$ in this way if we choose. Of course the difficulties with stochastic integrals for $N$ are not thereby avoided, but they take on a different aspect. Thus, for $\nu(X) < \infty$ and $f(x)$ measurable, $\int f(x)\,dJ(x)$ will always exist. It will not lie in $\mathfrak{L}_0$, however, unless $f(x)$ is square integrable. Therefore, unless $f(x)$ is square integrable, we do not obtain $\int f(x)\,dN(x)$. An interesting question is what happens to the pointwise product of random variables when we transfer the theory from $L_2(\Delta)$ to $\mathfrak{L}_0$. Thus if $\psi_1$ and $\psi_2$ are square integrable random variables on $\Delta$, we may transfer them to corresponding random variables $\psi_1$ and $\psi_2$ on $\Omega_0$. On $\Delta$ we have the pointwise product $\psi_1\psi_2$. This will correspond to a new product $\psi_1 \circ \psi_2$ on $\Omega_0$. The connection between this product and the usual pointwise product on $\Omega_0$ seems to be

and this may be verified in special cases. There are difficulties, however, with the general case. Thus if $J$ is a Gaussian jump process, and $\nu(X) < \infty$, there are no bounded random variables in $\mathfrak{L}_0$. It may be that if $N$ is to be studied in this way, $J$ is most conveniently taken to be a generalized random walk (cf. Anderson [1]).

8.3. We proceed to extend Theorem 1 by analysing the connection between the random functions of order $n \geq 1$ for $J$ and the corresponding random functions for $N$. In this connection we define $J(A)_n$, analogously to (48), by setting

Now the map $D$ sends $\mathfrak{L}_n$ onto $L_2(\Delta) \otimes L_2(X^n)$. Thus it makes the labelled random functions of Poisson type relative to $J$ correspond to random functions of Wiener type for $N$. Also, if $f(x_1,\dots,x_n)$ is bounded and measurable, it makes $M_f$ on $\mathfrak{L}_n$ correspond to $1 \otimes M_f$ on $L_2(\Delta) \otimes L_2(X^n)$. Finally we have

THEOREM 2. Let $N$ be a normal Wiener process and let $J$ be a pure jump process approximating $N$. Then under the action of the duality transform $D$ we have

$$D\colon J(A)_n \longmapsto M_{N(A)} \otimes I.$$

Proof. Let $g_1(x_1,\dots,x_n)$ be bounded, measurable, and vanish off a rectangle of finite measure in $X^n$. Let $g_2(z_1,\dots,z_t)$ be bounded, symmetric, measurable, and vanish off a rectangle of finite measure in $X^t$. Let $g(x, z) = g_1(x)\,g_2(z)$.


According to (45) and (28), the action of $J(A)_n$ on $u^{n/2}\int g(x, z)\,dJ(z)^t$ is precisely analogous to the action of $M_{N(A)} \otimes I$ on $\int g_2(z)\,dN(z)^t \otimes g_1(x)$. The rest follows much as it did in the proof of Theorem 1. Details are omitted. ∎

(For brevity, let $L_2(\Delta) \otimes L_2(X^n)$ denote $L_2(\Delta)$ when $n = 0$.)
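The way a pure jump process approximates the normal Wiener process can be illustrated numerically. In the sketch below (Python; the jump law is a hypothetical simple choice, jumps of height $\pm\sqrt{u}$ occurring at Poisson rate $1/u$, and not the paper's construction), $J([0, t])$ has mean $0$ and variance $t$, and is approximately Gaussian when $u$ is small:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_J(t, u, size):
    """Sample J([0, t]) for a pure jump process with jumps at Poisson rate 1/u
    and jump heights +sqrt(u) or -sqrt(u) (mean 0, variance u per jump)."""
    counts = rng.poisson(t / u, size=size)      # number of jumps in [0, t]
    heads = rng.binomial(counts, 0.5)           # how many jumps go up
    return np.sqrt(u) * (2 * heads - counts)    # signed random-walk total

t, u = 1.0, 1e-3
samples = sample_J(t, u, 20000)
print(samples.mean(), samples.var())            # near 0 and near t = 1
```

Each jump contributes variance $u$ and there are on average $t/u$ of them, so the total variance is $t$ independently of $u$; letting $u \to 0$ gives the Gaussian limit.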

CONJECTURE 3. Let $(X, \mathfrak{X}, \nu)$ be a $\sigma$-finite nonatomic measure space, and let $N$ be the normal Wiener process with mean 0 and variance $\nu$. Let $J$ be a pure jump process approximating $N$. Let $f(y_1,\dots,y_m)$ be a square integrable function on $X^m$ which vanishes off a finite rectangle. Let $\int f(y)\,dJ(y)^m$ denote the corresponding random variable. Then for $n = 0, 1, 2,\dots$, the duality transform $D$ carries

The formulae (45) and (27) let us conclude that the image of the first operator agrees with the second operator on the algebraic elements. What needs to be shown is that either operator is essentially selfadjoint when restricted to the algebraic elements. This is a question in the theory of free Boson fields. While the author has established it in special cases, the correctness of the general result has not been shown.

8.4. The author's present concern with normal Wiener processes stems from their connection with quantum theory, and in particular, from their connection with quantum mechanics through the relation between the Feynman and Wiener integrals. A peculiarly elusive question is why the observables do not commute, or, to put it differently, why we must add amplitudes rather than probabilities. No answer to this question has ever proved completely satisfactory. The "Copenhagen interpretation" rapidly leads to metaphysical nonsense, and any attempt to derive quantum behavior from an underlying "classical" motion appears wrong-headed. Hence the author's conjecture that what is involved is a form of prediction theory which is forced upon us when we attempt to use continuum methods to describe what is inherently a discrete world. Now the normal expectation $E_0$, which maps $L_2(\Omega_0)$ onto $\mathfrak{L}_0$, gives a form of prediction theory relating the "discrete" $J$ with the "continuous" $N$. It is unlikely that this particular prediction theory is the one required. Nevertheless, for interest's sake, we give the result below. Let $J$ be an approximately normal pure jump process and let $L_2(\Omega_0)$ be the span of square integrable random variables for $J$. Let $\mathfrak{A}$ be the multiplication algebra on $L_2(\Omega_0)$ determined by the bounded measurable functions on $\Omega_0$. Let $\mathfrak{A}$ be mapped to operators on $\mathfrak{L}_0$ by the completely positive map $M_\varphi \mapsto E_0 M_\varphi E_0$, and let $\mathfrak{B}$ be the weak *-algebra of operators on $\mathfrak{L}_0$ which is generated by the image of $\mathfrak{A}$.

THEOREM 4. $\mathfrak{B}$ is all bounded operators on $\mathfrak{L}_0$.
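Before the proof, it may help to see in miniature how a completely positive compression of a commutative multiplication algebra produces noncommuting operators, the quantum-theoretic point stressed above. The finite-dimensional Python sketch below (a hypothetical toy, not the paper's construction) compresses two commuting diagonal matrices by an orthogonal projection and obtains a nonzero commutator:

```python
import numpy as np

# Two diagonal matrices: multiplication operators on a 3-point space; they commute.
D1 = np.diag([1.0, 2.0, 3.0])
D2 = np.diag([1.0, 4.0, 9.0])
assert np.allclose(D1 @ D2, D2 @ D1)

# Orthogonal projection P onto the 2-dim subspace spanned by (1,1,0) and (0,1,1).
v = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
Q, _ = np.linalg.qr(v.T)        # orthonormal basis of the subspace
P = Q @ Q.T

# The compressions A_i = P D_i P (an analogue of phi -> E0 M_phi E0) need not commute.
A1, A2 = P @ D1 @ P, P @ D2 @ P
comm = A1 @ A2 - A2 @ A1
print(np.linalg.norm(comm))     # nonzero: the compressed operators fail to commute
```

The compression is completely positive, yet it destroys commutativity; this is the mechanism by which the map $M_\varphi \mapsto E_0 M_\varphi E_0$ can generate all bounded operators rather than a commutative algebra.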


Proof. Let $B(H)$ denote the ring of all bounded operators on $H$. We shall use the duality transform $D^{-1}$ (see (32) and (34)) to transfer operators from $\mathfrak{L}_0$ to $L_2(e^X)$. Now setting $H = L_2(X)$, we may write $L_2(e^X)$ as $e^H$, the space of symmetric tensors over $H$. In other words, $L_2(e^X)$ is the Fock space over $L_2(X)$. Thus showing that $\mathfrak{B} = B(L_2(e^X))$ amounts to an exercise in the theory of free Boson fields. The proof will be given in a sequence of steps.

Step 1. Suppose that the theorem has been established in the case when $\nu(X) < \infty$. Let $\nu(X) = \infty$, and let $Y \in \mathfrak{X}$ with $\nu(Y) < \infty$. Now the decomposition $X = Y \cup (X - Y)$ produces decompositions: $\Omega_0 \cong \Omega_0' \times \Omega_0''$, where $\Omega_0'$ corresponds to $Y$ and $\Omega_0''$ corresponds to $X - Y$; $L_2(\Omega_0) \cong L_2(\Omega_0') \otimes L_2(\Omega_0'')$; and $\mathfrak{L}_0 \cong \mathfrak{L}_0' \otimes \mathfrak{L}_0''$, where $\mathfrak{L}_0'$ and $\mathfrak{L}_0''$ refer to the theories over $Y$ and $X - Y$ respectively. Now let $\mathfrak{B}(Y)$ denote the *-algebra of operators on $\mathfrak{L}_0$ which is generated under $M_\varphi \mapsto E_0 M_\varphi E_0$ by all those bounded measurable $\varphi$ on $\Omega_0$ which can be written as $\varphi' \times 1$ on $\Omega_0' \times \Omega_0''$. Assuming the theorem true with $Y$ replacing $X$, we see that $\mathfrak{B}(Y)$ decomposes on $\mathfrak{L}_0' \otimes \mathfrak{L}_0''$ as $B \otimes C$, where $B = B(\mathfrak{L}_0')$ and $C$ is all scalar multiples of the identity. Now use the duality transform $D^{-1}$ to transfer everything to $L_2(e^X)$. We have $L_2(e^X) \cong L_2(e^Y) \otimes L_2(e^{X-Y})$, and $B \otimes C$ goes to $B_Y \otimes C_Y$, where $B_Y = B(L_2(e^Y))$ and $C_Y$ is all scalars on $L_2(e^{X-Y})$. Now let $Y$ range over an increasing sequence of subsets of $X$, each with finite measure, and union $X$. It is an exercise to show that the weak closure of the union of all the $B_Y \otimes C_Y$ is $B(L_2(e^X))$. Details are omitted.

Step 2. Suppose that $\nu(X) < \infty$. We can use the exponential representation to realize the counting process $P$ and hence also $J$. Thus the space $\Omega_0$ is given by

$$\Omega_0 = \bigcup_{n=0}^{\infty} (X \times R)^n, \tag{50}$$

and the probability measure on $\Omega_0$ is that measure whose restriction to $(X \times R)^n$ is

$$e^{-\nu(X)/u}\,\frac{d\mu^n}{n!}, \tag{51}$$

where, as usual, $\mu = u^{-1}\,dm \times d\nu$. Let $K_n$ be the characteristic function of $(X \times R)^n$, and with $\varphi = K_n$, consider the action of $E_0 M_\varphi E_0$ on $\mathfrak{L}_0$. To compute the matrix elements of this operator, we let $f(y) \in L_2(X^m)$ and $g(z) \in L_2(X^t)$ and consider

$$\left( E_0 M_\varphi E_0 \int f(y)\,dJ(y)^m,\ \int g(z)\,dJ(z)^t \right). \tag{52}$$

To evaluate this inner product, we merely integrate

$$\int f(y)\,dJ(y)^m \int g(z)\,dJ(z)^t \tag{53}$$


over $(X \times R)^n$ using the measure (51). To write out explicitly the value of (53) on $(X \times R)^n$, we write $\int f(y)\,dJ(y)^m = \int \lambda_1 \cdots \lambda_m\,f(y)\,dP^m$ and $\int g(z)\,dJ(z)^t = \int \lambda_1 \cdots \lambda_t\,g(z)\,dP^t$, and then apply the definitions. If $m > n$ or $t > n$, (53) is identically zero on $(X \times R)^n$. If $m \leq n$ and $t \leq n$ and $m \neq t$, then on $(X \times R)^n$ (53) will be a sum of monomial terms, each one of which will involve some $\lambda_i$ to the first power. The property $\int \lambda\,dm = 0$ then implies that (52) equals zero. We conclude, therefore, that (52) is zero unless $m = t \leq n$. In particular, when $m = t = n$, (52) will be a nonzero multiple of $\int fg\,d\nu^n$. Now let $K$ be the operator on $L_2(e^X)$ which results from transferring $E_0 M_\varphi E_0$ via $D^{-1}$. Also, for $i = 0, 1, 2,\dots$, let $Q_i$ be the projection of $L_2(e^X)$ onto the subspace of symmetric functions of order $i$. In field theoretic terms, $Q_i$ is the operator corresponding to there being exactly $i$ particles present. The argument above shows that

$$K = a_n Q_n + a_{n-1} Q_{n-1} + \cdots + a_0 Q_0, \qquad a_n \neq 0.$$

Letting $n$ take successive values $0, 1, 2,\dots$, we conclude that for each $i$, $Q_i$ lies in the image of $\mathfrak{B}$ under $D^{-1}$.
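The triangularity argument just used, namely that operators $K = a_n Q_n + \cdots + a_0 Q_0$ with $a_n \neq 0$ for successive $n$ determine every projection $Q_i$, can be sketched in finite dimensions (Python; the coefficients below are random stand-ins for the $a_i$ produced by the Poisson computation):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
# Q[i]: projection onto the "i-particle" coordinate in a toy (N+1)-dim model
Q = [np.diag((np.arange(N + 1) == i).astype(float)) for i in range(N + 1)]

# K[n] = a[n,n] Q_n + ... + a[n,0] Q_0 with a[n,n] != 0
a = rng.uniform(0.5, 1.5, size=(N + 1, N + 1))
K = [sum(a[n, i] * Q[i] for i in range(n + 1)) for n in range(N + 1)]

# Peel off the projections inductively: Q_0 = K_0 / a[0,0], then
# Q_n = (K_n - sum_{i<n} a[n,i] Q_i) / a[n,n], so every Q_i lies in the
# algebra generated by the K_n.
R = []
for n in range(N + 1):
    R.append((K[n] - sum(a[n, i] * R[i] for i in range(n))) / a[n, n])

print(all(np.allclose(R[i], Q[i]) for i in range(N + 1)))   # True
```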

Step 3. We proceed to construct a special orthonormal set for $L_2(e^X)$. We let $f_0$ be that function on $e^X$ which is 1 on $X^0$ and 0 elsewhere. Thus $f_0$ is the Fock vacuum. We let $f_1(x), f_2(x),\dots$ be any orthonormal basis for $L_2(X)$ which has the special property that all $f_i(x) \in L_\infty(X)$. If $f_{i(1)}(x), f_{i(2)}(x),\dots, f_{i(n)}(x)$ are any $n$ elements from this basis (with repetitions allowed), let

$$f_{i(1)} \vee f_{i(2)} \vee \cdots \vee f_{i(n)} \tag{54}$$

denote the symmetrized product. Thus (54) denotes a member of $L_2(e^X)$ with support $X^n$. When $n = 0$, (54) will be interpreted as $f_0$. The collection of all elements of the form (54) constitutes the desired orthonormal set. Apart from normalizing factors it is an orthonormal basis for $L_2(e^X)$. A typical member of our orthonormal set will be denoted by $\alpha$.
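The orthogonality properties claimed for the set (54) are easy to verify in a finite-dimensional model. The Python sketch below (illustrative; the symmetrizer is normalized by $1/n!$, one of several common conventions) builds symmetrized products of orthonormal vectors in $\mathbf{C}^3$ and checks that distinct products are orthogonal while norms depend on repetitions, so that normalizing factors are indeed needed:

```python
import numpy as np
from math import factorial
from itertools import permutations

d = 3
e = np.eye(d)

def sym(vectors):
    """Symmetrized tensor product: (1/n!) * sum over permutations s of
    v_{s(1)} (x) ... (x) v_{s(n)}, flattened to a vector in C^(d^n)."""
    n = len(vectors)
    out = np.zeros(d ** n)
    for s in permutations(range(n)):
        t = np.array([1.0])
        for i in s:
            t = np.kron(t, vectors[i])
        out += t
    return out / factorial(n)

a = sym([e[0], e[1]])     # f_1 v f_2
b = sym([e[0], e[2]])     # f_1 v f_3
c = sym([e[0], e[0]])     # f_1 v f_1  (repetition allowed)

print(a @ b)              # 0.0: distinct symmetrized products are orthogonal
print(a @ a, c @ c)       # 0.5 1.0: norms depend on repetitions
```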

Step 4. For any element $\alpha$ in the set constructed above, let $T_\alpha$ be the operator on $L_2(e^X)$ which sends $\alpha$ to $f_0$ and is 0 on every set element except $\alpha$. If $\beta$ is a second set element, $T_\beta^* T_\alpha$ maps $\alpha$ to $\beta$ and is again 0 on every set element except $\alpha$. We shall show that $T_\alpha$ lies in the image of $\mathfrak{B}$ under $D^{-1}$. From this it follows that $T_\beta^* T_\alpha$ also lies in the image of $\mathfrak{B}$, and hence that the image is weakly dense in the algebra of all bounded operators on $L_2(e^X)$.

Step 5. To produce $T_\alpha$, we first fix $\alpha$ as in (54). We choose $a > 0$ so that $[-a, a]$ has positive measure with regard to the auxiliary measure $m$, and we let $\tilde\lambda = \lambda$ if $-a \leq \lambda \leq a$ and $\tilde\lambda = 0$ otherwise. Next we let $\varphi$ be that function on $\Omega_0$ with support $(X \times R)^n$ which, on $(X \times R)^n$, is given by


We consider $E_0 M_\varphi E_0$ and transfer it to $L_2(e^X)$ via $D^{-1}$, obtaining $D^{-1}(E_0 M_\varphi E_0)\,D$.

Now let $g(x_1,\dots,x_m)$ be any symmetric square integrable function on $X^m$. Then $g(x) \in L_2(e^X)$. We shall compute the matrix element

$$\left( D^{-1}(E_0 M_\varphi E_0)\,D\,g,\; f_0 \right). \tag{56}$$

This may be written as $(E_0 M_\varphi E_0\,Dg,\; Df_0)$ and then as $(\varphi\,Dg,\; 1)$ in $L_2(\Omega_0)$. We may write this last as the integral of a certain expression over $(X \times R)^n$ with regard to the measure $\exp[-\nu(X)/u]\,d\mu^n/n!$. The expression in question is a multiple of

where $F(x_1, \lambda_1,\dots, x_n, \lambda_n)$ is the restriction of $\int \lambda_1 \cdots \lambda_m\,g(x_1,\dots,x_m)\,dP^m$ to $(X \times R)^n$. When $m > n$, $F = 0$. When $m < n$, the fact that the auxiliary measure used to define $\tilde\lambda$ is symmetric insures that the integral of (57) is zero. This only leaves the case $m = n$. In this case, when we integrate (57), we find that the matrix element (56) is a nonzero constant times

$$\int f_{i(1)}(x_1) \cdots f_{i(n)}(x_n)\, g(x_1,\dots,x_n)\,d\nu(x)^n.$$

Now let $\beta$ be any member of the special orthonormal set for $L_2(e^X)$ which was constructed in Step 3. We conclude from the above computation that

$$\left( D^{-1}(E_0 M_\varphi E_0)\,D\,\beta,\; f_0 \right) \;\neq 0 \quad\text{if } \beta = \alpha; \qquad = 0 \quad\text{if } \beta \neq \alpha. \tag{58}$$

Step 6. Finally, consider the following operator on $L_2(e^X)$, namely

$$Q_0\, D^{-1}(E_0 M_\varphi E_0)\, D\, Q_n, \tag{59}$$

where $Q_0$ and $Q_n$ are the number-of-particles operators defined at the end of Step 2. According to (58), the operator (59) is a nonzero scalar times $T_\alpha$. The proof is thereby complete. ∎
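The matrix-unit mechanism of Steps 4 and 6, a rank-one operator $T_\alpha$ sending $\alpha$ to $f_0$ whose products $T_\beta^* T_\alpha$ sweep out all matrix units, can be sketched in a finite-dimensional Python model (hypothetical toy dimensions, not the Fock-space setting of the proof):

```python
import numpy as np

dim = 4
basis = np.eye(dim)          # basis[0] plays the role of the vacuum f0

def T(alpha):
    """Rank-one operator |f0><alpha|: sends alpha to f0, kills the rest."""
    return np.outer(basis[0], alpha)

alpha, beta = basis[2], basis[3]
unit = T(beta).conj().T @ T(alpha)    # = |beta><alpha|, a matrix unit

assert np.allclose(T(alpha) @ alpha, basis[0])   # T_alpha maps alpha to f0
assert np.allclose(unit @ alpha, beta)           # T_beta^* T_alpha maps alpha to beta
assert np.allclose(unit @ basis[1], 0)           # and kills every other basis vector
```

Since the matrix units $|\beta\rangle\langle\alpha|$ span all matrices in finite dimensions, exhibiting every $T_\alpha$ inside an algebra forces that algebra to be everything, which is the shape of the density argument in Step 4.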

REFERENCES

1. R. M. Anderson, A nonstandard representation for Brownian motion and Itô integration, Bull. Amer. Math. Soc. 82 (1976), 99-101.
2. J. M. Cook, The mathematics of second quantization, Trans. Amer. Math. Soc. 74 (1953), 222-245.


3. K. Itô, Multiple Wiener integral, J. Math. Soc. Japan 3 (1951), 157-169.
4. K. Itô, Spectral type of the shift transformation of differential processes with stationary increments, Trans. Amer. Math. Soc. 81 (1956), 253-263.
5. J. Kerstan, K. Matthes, and J. Mecke, "Unbegrenzt teilbare Punktprozesse," Akademie-Verlag, Berlin, 1974.
6. A. Koyré, "From the Closed World to the Infinite Universe," Johns Hopkins Univ. Press, Baltimore, 1957.
7. I. E. Segal, Tensor algebras over Hilbert spaces, I, Trans. Amer. Math. Soc. 81 (1956), 106-134.
8. I. E. Segal, Distributions in Hilbert space and canonical systems of operators, Trans. Amer. Math. Soc. 88 (1958), 12-41.
9. D. Shale, Analysis over discrete spaces, J. Functional Analysis 16 (1974), 258-288.
10. D. Shale, On geometric ideas at the foundation of quantum theory, Advances in Math. 32 (1979), 175-203.
11. D. Shale and W. F. Stinespring, Wiener processes, J. Functional Analysis 2 (1968), 378-395.
12. D. Shale and W. F. Stinespring, Wiener processes, II, J. Functional Analysis 5 (1970), 334-353.
13. A. M. Vershik, I. M. Gel'fand, and M. I. Graev, Representations of the group of diffeomorphisms, Russian Math. Surveys 30, No. 6 (1975), 1-50.