Statistics & Probability Letters 6 (1987) 17-19, North-Holland, September 1987

CHARACTERIZATION OF VECTOR VALUED, GAUSSIAN, STATIONARY, MARKOV PROCESSES

Henryk GZYL
A.P. 52120, Caracas 1050-A, Venezuela

Received July 1986
Revised April 1987

Abstract: We extend an old result by Doob characterizing real-valued, Gaussian, stationary, Markov processes to the vector case. In this case a deterministic component appears that consists of a system of harmonic oscillators while the random part is a collection of independent oscillator processes, modulo linear changes of coordinates.

Keywords: vector Markov processes, characterization.

1. Introduction and preliminaries

There is a result by Doob asserting that a real-valued, Gaussian, stationary, Markov process is essentially Ornstein-Uhlenbeck (or its gauge equivalent, an oscillator process), see Doob (1953) or Simon (1979). Here we extend the result to an $\mathbb{R}^n$-valued process and prove that, under some assumptions on the covariance, an $\mathbb{R}^n$-valued, stationary, Gaussian, Markovian process $X(t)$ can be realized as a superposition of oscillator processes and simple harmonic oscillators. That is, under a suitable transformation of coordinates, the deterministic¹ component of $X(t)$ is a group of rotations acting transitively on $\mathbb{R}^{2k}$ and its random component consists of $n - 2k$ independent oscillator processes. Our decomposition could be thought of as a specialized version of that of Faurre (1973).

Let $(X(t), \mathcal{F}_t, P)$ denote the process under study, the filtration $\mathcal{F}_t$ being such that, for bounded Borel $f$,

$$E[f(X_t) \mid \mathcal{F}_s] = E[f(X_t) \mid X_s], \qquad t > s. \tag{1.1}$$

As usual, we assume throughout that $EX_t = 0$ and put, vectors being "column vectors",

$$C(t) = E\{X(t)X(0)^{\mathrm{T}}\},$$

the superscript T denoting transposition. $C(t)$ is the covariance matrix of the process. We assume throughout that $C(0)$ is non-singular and $C(t)$ continuous. From the positive definiteness of the covariance matrix it follows that, for arbitrary $x, y \in \mathbb{R}^n$,

$$(x, C(0)x) + (y, C(0)y) + (x, C(t)y) + (y, C(t)x) \geq 0, \tag{1.2}$$

where $(\cdot\,,\cdot)$ denotes the standard scalar product in $\mathbb{R}^n$ (to see this, expand $E\{[(x, X(t)) + (y, X(0))]^2\} \geq 0$ and add the same inequality with the roles of $X(t)$ and $X(0)$ exchanged). From the inequality above, with $y = -x$, we obtain

$$(x, C(t)x) \leq (x, C(0)x)$$

and, replacing $x$ by $C(0)^{-1/2}x$ and putting $A(t) = C(0)^{-1/2}C(t)C(0)^{-1/2}$, the last inequality becomes

$$(x, A(t)x) \leq (x, x). \tag{1.3}$$

¹ By deterministic we mean the following: there is a flow on $\mathbb{R}^n$ (i.e., a family $F_t$ of bijections such that $F_0(x) = x$ and $F_s \circ F_t = F_{s+t}$ for all real $s$ and $t$) such that, for every $x$ in $\mathbb{R}^n$, $P[\,\cdot \mid X(0)]$ is carried by the curve $F_t(X(0))$ as $t$ varies.
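To fix ideas, here is a minimal numerical sketch of this normalization; it is not from the paper, and the two-component covariance and all names below are our own choices, with NumPy/SciPy assumed.

```python
import numpy as np
from scipy.linalg import sqrtm

# Toy illustration of A(t) = C(0)^{-1/2} C(t) C(0)^{-1/2} (our own example):
# X(t) = S Y(t), where Y has two independent Ornstein-Uhlenbeck components
# with rates 0.5 and 2.0, so that C(t) = S diag(e^{-0.5|t|}, e^{-2|t|}) S^T.
S = np.array([[2.0, 1.0],
              [0.0, 1.0]])

def C(t):
    return S @ np.diag([np.exp(-0.5 * abs(t)), np.exp(-2.0 * abs(t))]) @ S.T

C0_inv_sqrt = np.linalg.inv(np.real(sqrtm(C(0))))

def A(t):
    return C0_inv_sqrt @ C(t) @ C0_inv_sqrt

# (1.3): the symmetric part of A(t) is dominated by the identity.
M = A(0.7)
print(np.linalg.eigvalsh((M + M.T) / 2).max() <= 1 + 1e-12)  # True
```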

In order to justify our terminology, let us rapidly connect rotations with mechanical oscillators. A system of $k$ mechanical oscillators (see Goldstein (1962)) is described by the system

$$\dot x_i = v_i, \qquad \dot v_i = -\sum_j \Omega_{ij} x_j, \qquad i, j = 1, \ldots, k, \tag{1.4}$$

where $\{\Omega_{ij}\}$ is assumed to be a symmetric non-degenerate matrix. By passing to normal modes, i.e., diagonalizing $\Omega$ by means of an orthogonal $U$ and putting $Q_i = \sum_j U_{ij} x_j$, $P_i = \sum_j U_{ij} v_j$, (1.4) becomes

$$\dot Q_i = P_i, \qquad \dot P_i = -w_i^2 Q_i, \qquad i = 1, \ldots, k,$$

where the $w_i^2$ are the eigenvalues of $\Omega$. By making the change of scale $q = w^{1/2}Q$, $p = w^{-1/2}P$, the system above becomes

$$\dot q_i = w_i p_i, \qquad \dot p_i = -w_i q_i, \qquad i = 1, \ldots, k, \tag{1.5}$$

the solution to which is

$$\begin{pmatrix} q_i(t) \\ p_i(t) \end{pmatrix} = \begin{pmatrix} \cos w_i t & \sin w_i t \\ -\sin w_i t & \cos w_i t \end{pmatrix} \begin{pmatrix} q_i(0) \\ p_i(0) \end{pmatrix}. \tag{1.6}$$

Therefore, a system of mechanical oscillators and a subgroup of the rotation group are identical (modulo coordinate transformations). Observe also, putting $q^2 + p^2 = \sum_i (p_i^2 + q_i^2)$ and $d\bar p\, d\bar q = dp_1 \cdots dp_k\, dq_1 \cdots dq_k$, that the measure

$$\{\exp[-(p^2 + q^2)/2]\}\, d\bar p\, d\bar q \tag{1.7}$$

is invariant under the group given by (1.6) and, if $(q(0), p(0))$ is distributed according to (1.7), then $(q(t), p(t))$ given by (1.6) is a Gaussian, stationary, Markov process on $\mathbb{R}^{2k}$. See Ford et al. (1965) for some physical applications.
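As a quick sanity check of the last claim, here is a hedged simulation sketch (our own construction, not from the paper; $k = 1$, with $w$ and $t$ picked arbitrarily and NumPy assumed): it verifies that the flow (1.6) preserves the standard Gaussian law and produces the covariance $\cos w t$ for the position component.

```python
import numpy as np

# Single mode, k = 1. If (q(0), p(0)) is standard Gaussian, the rotated pair
# (q(t), p(t)) given by (1.6) is again standard Gaussian, and
# E[q(t) q(0)] = cos(w t).
rng = np.random.default_rng(0)
w, t = 1.3, 0.7
q0, p0 = rng.standard_normal((2, 10**6))

# The flow (1.6): a rotation by the angle w t.
qt = np.cos(w * t) * q0 + np.sin(w * t) * p0
pt = -np.sin(w * t) * q0 + np.cos(w * t) * p0

print(np.var(qt), np.var(pt))            # both ~ 1: the measure (1.7) is invariant
print(np.mean(qt * q0), np.cos(w * t))   # ~ cos(w t)
```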

2. Main result

Let us now state the main result.

Theorem 2.1. Let $X(t)$ be an $\mathbb{R}^n$-valued, zero mean, Gaussian, stationary, Markov process with continuous covariance matrix $C(t)$, such that $A(t) = C(0)^{-1/2}C(t)C(0)^{-1/2}$ is normal:

$$A(t)A(t)^{\mathrm{T}} = A(t)^{\mathrm{T}}A(t). \tag{2.1}$$

Then there is a matrix $B$, satisfying $BB^{\mathrm{T}} = B^{\mathrm{T}}B$, such that, for $t \geq 0$,

$$C(t) = C(0)^{1/2}\, e^{-tB}\, C(0)^{1/2}. \tag{2.2}$$

Also, $X(t)$ can be written as a superposition of oscillator processes and simple harmonic oscillators.
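Before the proof, a hedged numerical reading of the theorem (our own toy choices of $B$ and $C(0)$, with NumPy/SciPy assumed): define $C(t)$ by (2.2) and check the identity $C(t+s) = C(s)C(0)^{-1}C(t)$ that drives the proof below.

```python
import numpy as np
from scipy.linalg import expm, sqrtm

# Toy reading of Theorem 2.1 (our own choices, not from the paper).
B = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])          # one rotation block, one decay rate
assert np.allclose(B @ B.T, B.T @ B)     # B B^T = B^T B: B is normal

C0 = np.array([[2.0, 0.5, 0.0],
               [0.5, 1.0, 0.0],
               [0.0, 0.0, 3.0]])         # a symmetric positive definite C(0)
C0_sqrt = np.real(sqrtm(C0))

def C(t):
    """Covariance defined by (2.2), for t >= 0."""
    return C0_sqrt @ expm(-t * B) @ C0_sqrt

t, s = 0.4, 1.1
print(np.allclose(C(t + s), C(s) @ np.linalg.inv(C0) @ C(t)))  # True
```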

Proof. The first assertion can be obtained by mimicking Doob's proof. To begin, note that the random variable $X(t+s) - C(s)C(0)^{-1}X(t)$ is Gaussian, and since

$$E\{[X(t+s) - C(s)C(0)^{-1}X(t)]\,X(t)^{\mathrm{T}}\} = C(s) - C(s)C(0)^{-1}C(0) = 0,$$

it follows that $X(t+s) - C(s)C(0)^{-1}X(t)$ and $X(t)$ are independent for any $t$, $s$. Therefore

$$E\{X(t+s) - C(s)C(0)^{-1}X(t) \mid X(t)\} = 0,$$

for we are assuming the process to be centered. We thus obtain

$$E\{X(t+s) \mid X(t)\} = C(s)C(0)^{-1}X(t)$$

for any real $s$ and $t$. Now let $s$ and $t$ be positive. The last identity and the Markov property yield (multiply by $X(0)^{\mathrm{T}}$ and take expectations)

$$C(t+s) = C(s)C(0)^{-1}C(t),$$

from which it follows (exchanging the roles of $s$ and $t$) that

$$A(t+s) = A(s)A(t) = A(t)A(s)$$

for positive $s$ and $t$. It is not hard to see² that there exists a matrix $B$ such that, for $t \geq 0$,

$$A(t) = \exp(-tB),$$

and since $C(-t) = C(t)^{\mathrm{T}}$, we obtain the corresponding representation for negative values of $t$. Since, by hypothesis, $A(t)A(t)^{\mathrm{T}} = A(t)^{\mathrm{T}}A(t)$, it follows that

$$BB^{\mathrm{T}} = B^{\mathrm{T}}B.$$

² An outline of a proof that $A(t+s) = A(t)A(s)$ and $A(t)$ continuous in $t$ imply the existence of a matrix $B$ such that $A(t) = \exp(-tB)$ is the following. Let $r > 0$ be such that $\tilde A(t) = r^t A(t)$ satisfies $|\tilde A(1) - I| < 1$. Now, as in Gantmacher (1977), define $\tilde B = -\log \tilde A(1)$, and use the continuity argument to conclude that $\tilde A(t) = \exp(-t\tilde B)$. Put $B = \tilde B + (\log r)I$ to get the desired result.
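The construction in footnote 2 can be illustrated numerically; the following is a minimal sketch with our own example matrix (NumPy/SciPy assumed). For an $A(1)$ close enough to the identity, the principal matrix logarithm recovers the generator directly, so the rescaling by $r$ is not needed in this example.

```python
import numpy as np
from scipy.linalg import expm, logm

B = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0],
              [0.0, 0.0, 0.5]])    # normal: a rotation block and a decay rate

A1 = expm(-B)                      # A(1) = exp(-B)
B_rec = -logm(A1)                  # principal matrix logarithm
print(np.allclose(B_rec, B))       # True

# The semigroup identity A(t+s) = A(t) A(s) the footnote starts from:
t, s = 0.3, 1.1
print(np.allclose(expm(-(t + s) * B), expm(-t * B) @ expm(-s * B)))  # True
```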


It is known (see Schmidt (1986) for a nice proof) that there is an orthogonal matrix $D$ such that $D^{\mathrm{T}}BD$ is a tridiagonal³ matrix, with matrices

$$\begin{pmatrix} s_j & w_j \\ -w_j & s_j \end{pmatrix}$$

repeated $n_j$ times, and numbers $l_j$ along the diagonal repeated $n_j$ times. The $l_j$ are the real eigenvalues of $B$, each appearing a number $n_j$ of times, for $2m+1 \leq j \leq n$.

³ Tridiagonal: all matrix elements except those along and just above and below the main diagonal are zero.

When we exponentiate $D^{\mathrm{T}}BD$, that is, when we compute $D^{\mathrm{T}}A(t)D$, each of the matrices along the diagonal results in a matrix $e^{-s_jt}R_j(t)$, with $R_j(t)$ orthogonal and satisfying $R_j(t+s) = R_j(t)R_j(s)$, and certainly each $l_j$ results in $e^{-l_jt}$. From (1.3) and $(A(t)^{\mathrm{T}}x, x) \leq (x, x)$ it follows that $s_j = 0$ and that $l_j \geq 0$; thus the complex eigenvalues of $B$ are purely imaginary, and the real eigenvalues of the generator $-B$ are negative. If we put, for positive $t$, $D^{\mathrm{T}}A(t)D = \bar A(t) = \exp(-tB_s)\exp(-tB_a)$, then $\bar A^{\mathrm{T}}(t) = \bar A(-t)$ implies that

$$\bar A(t) = \exp(-|t|B_s)\exp(-tB_a)$$

for any value of $t$. Here $B_s$ is the diagonal matrix with elements $l_j$ for $2m+1 \leq j \leq n$, and zero elsewhere; and $B_a$ is the tridiagonal matrix with submatrices

$$\begin{pmatrix} 0 & w_j \\ -w_j & 0 \end{pmatrix}$$

along the diagonal for $j = 1, 2, \ldots, m$, and zero elsewhere. It is clear now that $\bar A(t)$ is the covariance matrix of a process

$$\begin{pmatrix} Z_1(t) \\ Z_2(t) \end{pmatrix},$$

where $Z_1(t)$ consists of $m$ independent, deterministic Gaussian processes as described in the Introduction, and $Z_2(t)$ is a collection of $n - 2m$ independent oscillator processes with covariances $\exp(-|t|\,l_j)$. If we denote the process introduced above by $Z(t)$, it is clear that $X(t) = C(0)^{1/2}DZ(t)$, and we are finally through with the proof of the theorem.
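To make the decomposition concrete, here is a minimal simulation sketch of the two building blocks of $Z(t)$; the setup is entirely our own (NumPy assumed), with one rotation pair $Z_1$ of frequency $w$, one oscillator process $Z_2$ of rate $l$, and $D = I$, $C(0) = I$, so that $X = Z$. It checks the two covariance behaviours: $\cos wt$ for the deterministic pair and $e^{-lt}$ for the oscillator process.

```python
import numpy as np

# Z1: deterministic rotation of a Gaussian initial condition (harmonic
# oscillator). Z2: an oscillator (Ornstein-Uhlenbeck) process simulated by
# its exact one-step transition.
rng = np.random.default_rng(1)
w, l, dt, n_steps, n_paths = 1.0, 0.8, 0.01, 500, 200_000

z1 = rng.standard_normal((2, n_paths))        # Z1(0) ~ N(0, I)
z2 = rng.standard_normal(n_paths)             # Z2(0) ~ N(0, 1)
z1_0, z2_0 = z1.copy(), z2.copy()

c, s_ = np.cos(w * dt), np.sin(w * dt)
a, sig = np.exp(-l * dt), np.sqrt(1.0 - np.exp(-2.0 * l * dt))
for _ in range(n_steps):
    z1 = np.array([c * z1[0] + s_ * z1[1], -s_ * z1[0] + c * z1[1]])
    z2 = a * z2 + sig * rng.standard_normal(n_paths)

t = n_steps * dt
print(np.mean(z1[0] * z1_0[0]), np.cos(w * t))  # deterministic part: ~ cos(w t)
print(np.mean(z2 * z2_0), np.exp(-l * t))       # oscillator part:    ~ exp(-l t)
```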

3. Comments

(i) The time reversibility of $X(t)$ in the sense of Willems (1978) is contained in $A^{\mathrm{T}}(t) = A(-t)$; and from the normality of $A(t)$ it follows that both the deterministic and the random components of $X$ are reversible in the same sense.

(ii) What about the infinite dimensional analogue of Theorem 2.1? That is, given a stationary, Gaussian, Markov field, does there exist an infinite dimensional Ornstein-Uhlenbeck process and a wave field into which the original field could be decomposed?

Acknowledgements


I would like to thank J. van Schuppen for profitable conversations on the subject of this paper, and the referees, whose comments helped me to eliminate some mistakes and a lot of misprints.

References

Doob, J. (1953), Stochastic Processes (Wiley, New York).
Faurre, P. (1973), Réalisations markoviennes de processus stationnaires, IRIA, Rep. No. 13.
Ford, G.W., M. Kac and P. Mazur (1965), Statistical mechanics of assemblies of oscillators, J. Math. Phys. 6, 604-615.
Gantmacher, F.R. (1977), The Theory of Matrices (Chelsea, New York).
Goldstein, H. (1962), Classical Mechanics (Addison-Wesley, Reading, MA).
Schmidt, E.J.P.G. (1986), An alternative approach to canonical forms of matrices, Amer. Math. Monthly 93, 176-184.
Simon, B. (1979), Functional Integration and Quantum Physics (Academic Press, New York).
Willems, J.C. (1978), Time reversibility in deterministic and stochastic systems, Lecture Notes in Economics and Mathematical Systems 162 (Springer-Verlag, New York).
