A class of models for non-stationary Gaussian processes


Probabilistic Engineering Mechanics 18 (2003) 203–213 www.elsevier.com/locate/probengmech

Mircea Grigoriu*
School of Civil and Environmental Engineering, Cornell University, 220 Hollister, Ithaca, NY 14853-3501, USA
Received 18 November 2002; revised 18 March 2003; accepted 24 March 2003

Abstract

Non-stationary processes reducible to stationary processes, oscillatory processes, and truncated Karhunen–Loève or Fourier series are currently used to represent non-stationary Gaussian processes and to generate samples of these processes. This paper presents alternative models for non-stationary, continuous-time Gaussian processes and develops Monte Carlo simulation algorithms based on these models. Two classes of models and Monte Carlo algorithms are considered, depending on whether the target Gaussian process is or is not Markov. Generally, the Monte Carlo simulation algorithms for non-stationary Gaussian–Markov processes are more efficient and simpler to implement than the algorithms for non-stationary Gaussian processes. Examples are presented to illustrate the models for non-stationary Gaussian processes discussed in the paper and the Monte Carlo simulation algorithms based on these models. The examples include non-stationary Gaussian processes that satisfy and violate the Markov property. © 2003 Elsevier Ltd. All rights reserved.

Keywords: Conditional Gaussian variables; Linear filters; Markov processes; Monte Carlo simulation; Non-stationary; Continuous-time Gaussian processes

* Tel.: +1-607-255-3334; fax: +1-607-255-4828. E-mail address: [email protected] (M. Grigoriu).
0266-8920/03/$ - see front matter © 2003 Elsevier Ltd. All rights reserved. doi:10.1016/S0266-8920(03)00014-6

1. Introduction

Most of the continuous-time models for stationary and non-stationary Gaussian processes used to generate samples of these processes are parametric, that is, they are finite sums of deterministic functions with random coefficients. For example, the spectral representation theorem has been used to develop parametric models for stationary Gaussian processes. These models consist of a finite sum of harmonics with deterministic or random frequencies and random amplitudes ([6], Section 4.3.2.1; [7], Section 5.3.1.1; [14]). Alternative parametric models have been proposed for stationary processes with a bounded frequency range, based on an extension of a sampling theorem for deterministic functions to random functions ([6], Section 4.3.2.2; [7], Section 5.3.1.3).

The above models for stationary Gaussian processes cannot be extended to represent non-stationary Gaussian processes, since the classical concept of spectral density is not defined for non-stationary processes. The continuous-time models used to generate samples of non-stationary Gaussian processes include (1) stationary processes

modulated by slowly varying deterministic functions, referred to as uniformly modulated stationary processes ([10], Chapter 6), (2) stationary processes viewed in a new, distorted clock obtained from the original clock by a deterministic mapping ([8,11], Section 11.80), (3) oscillatory processes characterized by sums of harmonics with fixed frequencies and random amplitudes that vary slowly in time ([10], Chapter 6; [13]), and (4) truncated Karhunen–Loève and Fourier series representations for non-stationary Gaussian processes ([2], Section 6.4; [5]). The first two classes of models are referred to as reducible to stationary processes, since they become stationary by appropriate scaling and time change, respectively. The classes of reducible-to-stationary and oscillatory processes have some common members. The models based on truncated Karhunen–Loève and Fourier series have been used to represent arbitrary non-stationary Gaussian processes and fields defined on bounded intervals [4,5].

This paper (1) presents alternative models for non-stationary, continuous-time Gaussian processes and (2) uses these models to develop Monte Carlo algorithms for generating samples of arbitrary non-stationary, continuous-time Gaussian processes. Two classes of models and Monte Carlo simulation algorithms are considered, depending on whether the target non-stationary Gaussian


process is or is not Markov. The developments in the paper are based on properties of Gaussian and Gaussian–Markov processes and on classical concepts of linear random vibration. Numerical examples are used to illustrate some of the proposed models for non-stationary Gaussian processes and the Monte Carlo simulation algorithms based on these models.
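As a practical aside, the parametric sum-of-harmonics construction mentioned above can be sketched in a few lines of code. This is an illustration only, not code from the paper; the function names, the midpoint frequency discretization, and the choice of the one-sided spectral density $s(\nu) = (2/\pi)/(1+\nu^2)$ (which corresponds to the covariance $e^{-|\tau|}$) are all assumptions of this sketch.

```python
import numpy as np

def spectral_sample(s, nu_max, n_freq, t, rng):
    """One sample path of a zero-mean stationary Gaussian process with
    one-sided spectral density s(nu), via the sum-of-harmonics model:
    X(t) = sum_k sigma_k (A_k cos(nu_k t) + B_k sin(nu_k t))."""
    nu = (np.arange(n_freq) + 0.5) * nu_max / n_freq  # midpoint frequencies
    sigma = np.sqrt(s(nu) * nu_max / n_freq)          # harmonic standard deviations
    A = rng.standard_normal(n_freq)                   # independent N(0, 1) amplitudes
    B = rng.standard_normal(n_freq)
    phase = np.outer(t, nu)
    return (sigma * (A * np.cos(phase) + B * np.sin(phase))).sum(axis=1)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1001)
s = lambda nu: (2.0 / np.pi) / (1.0 + nu**2)  # one-sided density of c_y(tau) = exp(-|tau|)
x = spectral_sample(s, nu_max=100.0, n_freq=2000, t=t, rng=rng)
```

Each harmonic contributes variance $s(\nu_k)\Delta\nu$, so the sample variance approaches $\int_0^\infty s(\nu)\, d\nu = 1$ as the frequency range and the number of harmonics grow.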

2. Available non-stationary Gaussian models

Available models for non-stationary, continuous-time Gaussian processes are briefly reviewed. The review presents essential properties of, and relations between, these models.

Let $Y(t)$, $t \in \mathbb{R}$, be a stationary Gaussian process with mean 0, variance 1, covariance function $c_y(\tau) = E[Y(t)\,Y(t+\tau)]$, and spectral representation $Y(t) = \int_{\mathbb{R}} e^{i\nu t}\, dZ(\nu)$, where $Z$ is a Gaussian process with $E[dZ(\nu)] = 0$ and $E[dZ(\nu)\, \overline{dZ(\nu')}] = \delta(\nu - \nu')\, s_y(\nu)\, d\nu$, and $s_y$ denotes the spectral density of $Y$. Let $a : \mathbb{R} \to \mathbb{R}$ be a non-negative function. Then

$$X(t) = a(t)\, Y(t), \quad t \in \mathbb{R}, \tag{1}$$

is called a uniformly modulated process. By definition $X$ is a non-stationary Gaussian process with mean 0 and covariance function $c_x(t,s) = E[X(t)X(s)] = a(t)\, a(s)\, c_y(t-s)$. This process has been used extensively in earthquake engineering applications [15]. Since only the scale of $X$ changes in time, the model in Eq. (1) is adequate for a restricted class of non-stationary Gaussian processes.

Consider now a positive function $w : [0,\infty) \to [0,\infty)$ such that $w(0) = 0$ and $w'(t) > 0$ at all times. The process

$$X(t) = Y(w(t)) = \int_{\mathbb{R}} e^{i\nu w(t)}\, dZ(\nu), \tag{2}$$

obtained by the time change $t \mapsto w(t)$, is Gaussian with mean zero and covariance function

$$c_x(t,s) = E[X(t)X(s)] = E[Y(w(t))\,Y(w(s))] = \int_{\mathbb{R}} e^{i\nu(w(t)-w(s))}\, s_y(\nu)\, d\nu = c_y(w(t) - w(s)). \tag{3}$$

Consider the collection of times $(t,s)$ such that $w(t) - w(s) = \tau$ is a constant, that is, the collection of times $(t,s)$ in the original clock corresponding to a fixed time lag $\tau$ in the new clock. For these times $c_x$ is a constant, so that

$$dc_x(t,s) = \frac{\partial c_x(t,s)}{\partial t}\, dt + \frac{\partial c_x(t,s)}{\partial s}\, ds = 0$$

and

$$d(w(t) - w(s)) = w'(t)\, dt - w'(s)\, ds = 0,$$

implying

$$\frac{\partial c_x(t,s)/\partial t}{\partial c_x(t,s)/\partial s} = -\frac{w'(t)}{w'(s)}. \tag{4}$$

The condition in Eq. (4) and the properties of the time change function $w$ limit the class of non-stationary Gaussian processes that admit the representation in Eq. (2).

A non-stationary Gaussian process $X$ is said to be oscillatory if it has the representation

$$X(t) = \int_{\mathbb{R}} a(t,\nu)\, e^{i\nu t}\, dZ(\nu), \tag{5}$$

where $Z$ is the process specified in the definition of $Y$ above (Eqs. (1) and (2)) and the function $a(t,\nu)$ is such that its Fourier transform with respect to the argument $t$ is concentrated at the origin for each frequency $\nu$, so that $a(t,\nu)$ is a slowly varying function of time for all $\nu$ ([3,10], Chapter 6; [15]). Uniformly modulated processes are oscillatory with $a(t,\nu) = a(t)$, provided that the modulation function $a(t)$ varies slowly in time. The process in Eq. (2) may or may not be oscillatory, depending on the rate of change of $w$, since

$$X(t) = \int_{\mathbb{R}} e^{i\nu w(t)}\, dZ(\nu) = \int_{\mathbb{R}} e^{i\nu(w(t)-t)}\, e^{i\nu t}\, dZ(\nu),$$

so that $a(t,\nu) = e^{i\nu(w(t)-t)}$ in Eq. (5). For example, the process $X$ is oscillatory if $w(t) = (1+\varepsilon)t$ and $\varepsilon > 0$ is a small parameter, since $a(t,\nu) = e^{i\nu\varepsilon t}$ is a harmonic oscillating slowly in time. Uniformly modulated processes cannot be obtained by changing the clock of a stationary process, since the solution $w(t) = t + \ln(a(t))/(i\nu)$ of the equation $a(t) = e^{i\nu(w(t)-t)}$ does not have the required properties of a time change function.

The properties of the models in Eqs. (1), (2), and (5) show that these models can represent only a subset of the class of non-stationary Gaussian processes. General non-stationary Gaussian processes can be represented by Karhunen–Loève and Fourier series ([7], Sections 3.9.4.4 and 5.3.2.2). The implementation of the Karhunen–Loève and Fourier series representations is less simple than the implementation of the models in Eqs. (1), (2), and (5).

3. Problem statement

Let $X(t)$, $t \in [t_0, t_1]$, $0 \le t_0 < t_1 < \infty$, be an $\mathbb{R}^d$-valued non-stationary Gaussian process with mean $E[X(t)] = 0$ and covariance function $c(t,s) = E[X(t)\,X(s)^T]$. The assumption $E[X(t)] = 0$ is not restrictive: if $E[X(t)] \ne 0$, we consider the process $X(t) - E[X(t)]$. The process $X$ and all the other vectors in the paper are viewed as column vectors. Suppose that the second-moment properties of $X$ are specified, that is, $E[X(t)] = 0$ and the covariance function $c(t,s) = E[X(t)\,X(s)^T]$ is known. Our objective is to define models for $X$ that can be used to develop efficient Monte Carlo algorithms for generating samples of this process. The proposed Monte Carlo algorithms can be applied to generate samples of the non-stationary Gaussian processes in Section 2; however, they are intended for general non-stationary Gaussian processes. Two classes of models and Monte Carlo algorithms are developed, depending on whether $X$ is or is not a Markov process.

Fig. 1. Models for non-stationary Gaussian processes.

Section 4 presents properties of Gaussian processes that are relevant to our discussion. Criteria are given for assessing whether a Gaussian process is sample continuous and/or Markov. These properties are used in the subsequent sections to derive models for non-stationary Gaussian processes and to develop Monte Carlo simulation algorithms based on these models. The flowchart in Fig. 1 outlines the models developed in the paper for a general non-stationary Gaussian process $X$. If $X$ is a Markov process, it can be characterized by its transition probability density function or defined as the output of a linear filter driven by Gaussian white noise, provided that $X$ has continuous samples. If $X$ is not a Markov process, it may be possible to augment $X$ to a process $Z$ that is Markov, in which case the developments for Markov processes can be applied to $Z$. Otherwise, properties of conditional Gaussian variables can be used to characterize $X$ and to develop Monte Carlo algorithms for generating samples of this process.

4. Properties of Gaussian variables and processes

Let $X(t)$, $t \in [t_0, t_1]$, be the $\mathbb{R}^d$-valued non-stationary Gaussian process in Section 3 with mean $E[X(t)] = 0$ and covariance function $c(t,s) = E[X(t)\,X(s)^T]$. The first of the following five properties relates to conditional Gaussian variables. The last four properties provide criteria for determining whether $X$ has continuous samples and/or is Markov.

Property 1. Conditional Gaussian variables are Gaussian. Let $G$ be an $\mathbb{R}^q$-valued Gaussian variable with mean $m = E[G]$ and covariance matrix $c = E[(G - m)(G - m)^T]$, a property denoted by $G \sim N(m, c)$. Let $G_1$ and $G_2$ be the first $1 \le q_1 < q$ and the last $q_2 = q - q_1$ coordinates of $G$. Denote by $m_i = E[G_i]$ the mean of $G_i$, $i = 1, 2$, and by $c_{ij} = E[(G_i - m_i)(G_j - m_j)^T]$, $i, j = 1, 2$, the covariance matrices of $(G_1, G_2)$. The conditional variable $G_1 \,|\, (G_2 = \xi)$ is Gaussian with the properties ([7], Section 2.11.5)

$$G_1 \,|\, (G_2 = \xi) \sim N\big(m_1 + c_{12}\, c_{22}^{-1} (\xi - m_2),\; c_{11} - c_{12}\, c_{22}^{-1}\, c_{21}\big). \tag{6}$$
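Property 1 is the computational core of the sampling algorithms discussed later, so a small sketch of Eq. (6) may be helpful (an illustration only; the helper name and the bivariate test values are assumptions of this sketch):

```python
import numpy as np

def condition_gaussian(m, c, q1, xi):
    """Mean and covariance of G1 | (G2 = xi) for G ~ N(m, c), where G1 is
    the first q1 coordinates of G (Eq. (6))."""
    m1, m2 = m[:q1], m[q1:]
    c11, c12 = c[:q1, :q1], c[:q1, q1:]
    c21, c22 = c[q1:, :q1], c[q1:, q1:]
    k = c12 @ np.linalg.inv(c22)
    return m1 + k @ (xi - m2), c11 - k @ c21

# bivariate check: for unit variances and correlation rho, the conditional
# mean is rho * xi and the conditional variance is 1 - rho**2
rho = 0.8
mean, cov = condition_gaussian(np.zeros(2),
                               np.array([[1.0, rho], [rho, 1.0]]),
                               1, np.array([2.0]))
# mean -> [1.6], cov -> [[0.36]]
```

A sample of the conditional variable can then be drawn as $N(\text{mean}, \text{cov})$ with any standard multivariate normal generator.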

Property 2. If the coordinates of $X$ are such that

$$E\big[|X_i(t+h) - X_i(t)|^p\big] \le k\, |h|^{1+r} \tag{7}$$

for all $i = 1, \ldots, d$ and $t, t+h \in [t_0, t_1]$, where $r \le p$ and $k$ are strictly positive constants, then there is a version of $X$ whose coordinates satisfy a Lipschitz condition of order $\alpha \in (0, r/p)$ a.s. in $[t_0, t_1]$.

A real-valued function $z(t)$, $t \in [t_0, t_1]$, satisfies the Lipschitz condition of order $\alpha$ in $[t_0, t_1]$ if there is a constant $\zeta > 0$ such that $|z(t+h) - z(t)| \le \zeta |h|^\alpha$ for all $t+h, t \in [t_0, t_1]$. If $z$ satisfies this condition, it is continuous, since $|h| \to 0$ implies $|z(t+h) - z(t)| \to 0$ for all $t, t+h \in [t_0, t_1]$. Hence, an arbitrary process satisfying Eq. (7) has a version almost all of whose samples are Lipschitz of order $\alpha \in (0, r/p)$ in $[t_0, t_1]$ and, therefore, continuous. A useful discussion of sample continuity for stochastic processes is in Ref. [1] (Sections 4.2 and 4.4). Two processes are said to be versions of each other if they have the same finite-dimensional distributions.

The condition in Eq. (7) can be used as a criterion for assessing whether $X$ has continuous samples. We note that $X$ may have continuous samples even if Eq. (7) is not satisfied. The criterion for sample continuity suggested by


Eq. (7) is useful in applications since it involves only moments of $X$, which can be calculated without difficulty in many cases. For example, let $d = 1$ and $X = B$ denote a Brownian motion process. Since $B(t+h) - B(t)$ is a Gaussian variable with mean 0 and variance $|h|$, we have $E[(B(t+h) - B(t))^4] = 3|h|^2$, so that Eq. (7) is satisfied for $p = 4$, $k \ge 3$, and $r = 1$. Hence, $B$ has a version with a.s. continuous samples.

Suppose now that $X(t)$, $t \in [t_0, t_1]$, is a real-valued Gaussian process with mean $E[X(t)] = 0$ and covariance function $c(t,s) = E[X(t)X(s)]$. Since $X(t+h) - X(t)$ is a Gaussian variable with mean 0 and variance $c(t+h, t+h) + c(t,t) - 2c(t+h, t)$, we have

$$E[(X(t+h) - X(t))^4] = 3\big(c(t+h,t+h) + c(t,t) - 2c(t+h,t)\big)^2 \le 6\big(c(t+h,t+h) - c(t+h,t)\big)^2 + 6\big(c(t+h,t) - c(t,t)\big)^2$$

by properties of Gaussian variables and the inequality $(a \pm b)^2 \le 2(a^2 + b^2)$. If the covariance function of $X$ is such that

$$|c(t+h,t+h) - c(t+h,t)| \le \zeta |h|^{(1+r)/2}, \qquad |c(t+h,t) - c(t,t)| \le \zeta |h|^{(1+r)/2} \tag{8}$$

for some constants $r, \zeta > 0$, then $E[(X(t+h) - X(t))^4] \le 12 \zeta^2 |h|^{1+r}$, so that $X$ has a version with a.s. continuous samples.

Property 3. Suppose that the Gaussian process $X$ is also Markov and that $c(s,s)^{-1}$ exists for all $s \in [t_0, t_1]$. Then the covariance function of $X$ satisfies the condition

$$c(t,u) = c(t,s)\, c(s,s)^{-1}\, c(s,u) \tag{9}$$

for all $u < s < t$ in $[t_0, t_1]$.

Since $X$ is a Gaussian process, $X(t) \,|\, (X(s) = \xi)$, $t > s$, is an $\mathbb{R}^d$-valued Gaussian variable with mean (Eq. (6)) $E[X(t) \,|\, X(s) = \xi] = c(t,s)\, c(s,s)^{-1} \xi$. Since $X$ is a Markov process, the Chapman–Kolmogorov relationship

$$f_{X(t)|X(u)}(x \,|\, y) = \int_{\mathbb{R}^d} f_{X(t)|X(s)}(x \,|\, \xi)\, f_{X(s)|X(u)}(\xi \,|\, y)\, d\xi$$

holds, where $f_{X(t)|X(u)}$, $f_{X(t)|X(s)}$, and $f_{X(s)|X(u)}$ denote the densities of $X(t)|X(u)$, $X(t)|X(s)$, and $X(s)|X(u)$, respectively ([7], Section 3.6.3). The Chapman–Kolmogorov relationship gives

$$E[X(t) \,|\, X(u) = y] = \int_{\mathbb{R}^d} E[X(t) \,|\, X(s) = \xi]\, f_{X(s)|X(u)}(\xi \,|\, y)\, d\xi \tag{10}$$

by multiplication with $x$ and integration over $\mathbb{R}^d$ with respect to $x$. Since $X$ is a Gaussian process, we have $E[X(t) \,|\, X(s) = \xi] = c(t,s)\, c(s,s)^{-1} \xi$, so that (Eq. (10))

$$E[X(t) \,|\, X(u) = y] = c(t,s)\, c(s,s)^{-1} \int_{\mathbb{R}^d} \xi\, f_{X(s)|X(u)}(\xi \,|\, y)\, d\xi = c(t,s)\, c(s,s)^{-1} E[X(s) \,|\, X(u) = y],$$

or $c(t,u)\, c(u,u)^{-1} y = c(t,s)\, c(s,s)^{-1}\, c(s,u)\, c(u,u)^{-1} y$ for all $y \in \mathbb{R}^d$, which yields Eq. (9). We have just shown that Eq. (9) holds for any Gaussian–Markov process $X$. The following two properties show that a Gaussian process $X$ with covariance function satisfying Eq. (9) is also Markov.

Property 4. Let $X(t)$, $t \in [t_0, t_1]$, be a real-valued ($d = 1$) Gaussian process with mean $E[X(t)] = 0$ and covariance function $c(t,s) = E[X(t)X(s)]$ such that $c(t, t_0) \ne 0$ for all $t \in [t_0, t_1]$. Then $X$ is a Markov process if and only if

$$c(t,s) = f(t \vee s)\, g(t \wedge s), \tag{11}$$

where the functions $f(t) > 0$ and $g(t) \ge 0$ are such that $h(t) = g(t)/f(t)$ is an increasing function in $[t_0, t_1]$. The notations $t \vee s$ and $t \wedge s$ in Eq. (11) mean $\max(t,s)$ and $\min(t,s)$, respectively.

Since $X$ is a Gaussian–Markov process, its covariance function satisfies Eq. (9). This equation with $t_0 = u < s < t \le t_1$ is $c(t, t_0) = c(t,s)\, c(s, t_0)/c(s,s)$ and gives

$$c(t,s) = c(t, t_0)\, c(s,s)/c(s, t_0),$$

so that $c(t,s)$ has the form in Eq. (11) with $f(t) = c(t, t_0)$ and $g(s) = c(s,s)/c(s, t_0)$ for $t > s$. The inequality $c(t,s) \le [c(t,t)\, c(s,s)]^{1/2}$, $t \ge s$, and the functional form of $c(t,s)$ in Eq. (11) imply $f(t)\, g(s) \le \sqrt{f(t)\, g(t)\, f(s)\, g(s)}$, so that $f(t)\, f(s)\, h(s) \le \sqrt{f(t)^2 h(t)\, f(s)^2 h(s)}$ provided that $f(t) > 0$, and $h(s) \le \sqrt{h(t)\, h(s)}$, or $h(s) \le h(t)$, if $g(t) \ge 0$.

Let $B$ be a Brownian motion. Then $Y(t) = f(t)\, B(h(t))$ is a Gaussian–Markov process by the properties of the functions $f, g, h$. Also, the functional form of $c(t,s)$ in Eq. (11) shows that $Y$ has the same second-moment properties as $X$, that is, $E[Y(t)] = 0$ and $E[Y(t)\, Y(s)] = f(t)\, f(s)\, E[B(h(t))\, B(h(s))] = f(t)\, f(s)\, (h(t) \wedge h(s)) = f(t \vee s)\, g(t \wedge s)$. Since $X$ and $Y$ are Gaussian processes, they are versions of each other. Hence, $X$ has the Markov property because $Y$ is a Markov process ([16], Section 2.5). We also note that under the above conditions $X$ can be replaced by $Y$, so that it has the representation $f(t)\, B(h(t))$.

Property 5. Let $X$ be an $\mathbb{R}^d$-valued Gaussian process with mean $E[X(t)] = 0$ and covariance function $c(t,s) = E[X(t)\, X(s)^T]$ such that $c(s,s)^{-1}$ and $c(s, t_0)^{-1}$ exist at all times $s \in [t_0, t_1]$. Then $X$ is a Markov process if and only if

$$c(t,s) = \phi(t \vee s)\, \psi(t \wedge s), \tag{12}$$

where the $(d,d)$ matrices $\phi, \psi$ can be inverted everywhere in $[t_0, t_1]$. We first note that Property 4 is a special case of the property examined here. Properties 4 and 5 are considered separately because of the interesting representation provided by Property 4 for real-valued Gaussian–Markov processes.

Since $X$ is a Gaussian–Markov process, its covariance function satisfies the condition in Eq. (9). This condition with $t_0 = u < s < t \le t_1$ yields

$$c(t,s) = c(t, t_0)\, c(s, t_0)^{-1}\, c(s,s), \tag{13}$$

so that $c(t,s)$ has the functional form in Eq. (12) with the notation $\phi(t) = c(t, t_0)$ and $\psi(s) = c(s, t_0)^{-1}\, c(s,s)$ for $t > s$. The matrices $\phi$ and $\psi$ can be inverted in $[t_0, t_1]$ by the postulated properties of $c(s,s)$ and $c(s, t_0)$.

Suppose now that $X$ is a Gaussian process with mean $E[X(t)] = 0$ and covariance function $c(t,s)$ given by Eq. (12), and that $\phi(t)^{-1}, \psi(t)^{-1}$ exist at all times $t \in [t_0, t_1]$. Since $X$ is a Gaussian process, the $\mathbb{R}^d$-valued random variables $X(t)$,

$$E[X(t) \,|\, X(s)] = c(t,s)\, c(s,s)^{-1} X(s),$$

and

$$\hat{X}(t) = X(t) - E[X(t) \,|\, X(s)] = X(t) - c(t,s)\, c(s,s)^{-1} X(s)$$

are Gaussian at all times $u < s < t$ in $[t_0, t_1]$. Since

$$E[\hat{X}(t)\, X(u)^T] = c(t,u) - c(t,s)\, c(s,s)^{-1}\, c(s,u) = 0$$

by Eq. (12), and the random variables $\hat{X}(t) = X(t) - E[X(t)|X(s)]$ and $X(u)$ are Gaussian, they are independent for all $u < s < t$ in $[t_0, t_1]$. Hence, the Gaussian process $X$ is also Markov.

Some of the above formulas require that the matrices $c(s,s)^{-1}$ and $c(s, t_0)^{-1}$ exist at all times, but these requirements may not always be satisfied. For example, if $X(t_0) = 0$, then $c(s, t_0)^{-1}$ does not exist at any time $s \in [t_0, t_1]$. Under this assumption we have $c(s, t_0 + \varepsilon) \simeq c(s, t_0) + \alpha(s)\, \varepsilon = \alpha(s)\, \varepsilon$, where $\varepsilon > 0$ is small and $\alpha(s) = \partial c(s,u)/\partial u$ for $u = t_0$. If $X(t_0 + \varepsilon) \ne 0$ for any small $\varepsilon > 0$, the matrix $\alpha(s)^{-1}$ exists for $s - t_0 > 0$ small, so that $c(t,s) \simeq \alpha(t)\, \alpha(s)^{-1}\, c(s,s)$ for $s - t_0, t - t_0 > 0$ small (Eq. (13)). Suppose now that the matrix $c(s,s)^{-1}$ does not exist at some time $s \in [t_0, t_1]$; for example, one or more coordinates of $X(s)$ are zero and/or two or more coordinates of $X(s)$ are perfectly correlated. If some of the coordinates of $X(s)$ vanish, the above considerations for $c(s, t_0)$ can be extended to calculate $c(s,s)$. If two or more coordinates of $X(s)$ are perfectly correlated, $X(s)$ lives in a subspace of $\mathbb{R}^d$ and its covariance matrix has to be calculated in this subspace rather than $\mathbb{R}^d$.

5. Gaussian–Markov processes

Let $X(t)$, $t \in [t_0, t_1]$, be an $\mathbb{R}^d$-valued non-stationary Gaussian–Markov process with mean $E[X(t)] = 0$ and covariance function $c(t,s) = E[X(t)\, X(s)^T]$. Hence, $c(t,s)$ must have the functional form in Eq. (12). Two Monte Carlo algorithms for generating samples of $X$ are discussed. The algorithms are based on properties of the transition density of $X$ and on the representation of this process as the output of a linear filter driven by Gaussian white noise.

5.1. Transition density

Since $X$ is a Gaussian process, $X(t) \,|\, X(s)$, $t > s$, is an $\mathbb{R}^d$-valued Gaussian variable with the second-moment properties (Eq. (6))

$$X(t) \,|\, (X(s) = \xi) \sim N\big(c(t,s)\, c(s,s)^{-1} \xi,\; c(t,t) - c(t,s)\, c(s,s)^{-1}\, c(s,t)\big). \tag{14}$$

Since $X$ is also a Markov process, the conditional random variables $X(t) \,|\, (X(u), t_0 \le u \le s)$ and $X(t) \,|\, X(s)$ have the same distribution for all $t > s$ in $[t_0, t_1]$. The above properties of $X$ suggest a two-step Monte Carlo algorithm for generating samples of this process in $[t_0, t_1]$:

Step 1: Select a partition $t_0 = s_0 < s_1 < \cdots < s_m = t_1$ of the time interval $[t_0, t_1]$.

Step 2: Apply the recurrence formula

$$X(s_k; \omega) = X(s_{k-1}; \omega) + G_{k-1}(\omega), \quad k = 1, \ldots, m, \tag{15}$$

to generate a sample $X(s_k; \omega)$ of $X$ at time $s_k$, where $X(s_{k-1}; \omega)$ is the value of sample $\omega$ of $X$ at time $s_{k-1}$, $G_{k-1} = X(s_k) \,|\, X(s_{k-1})$ is an $\mathbb{R}^d$-valued Gaussian variable whose second-moment properties can be calculated from Eq. (14) with $t = s_k$, $s = s_{k-1}$, and $X(s_{k-1}; \omega)$ in place of $\xi$, and $G_{k-1}(\omega)$ denotes a sample of $G_{k-1}$. Standard algorithms can be used to generate samples of $G_{k-1}$ ([7], Section 5.2.1). We note that the Monte Carlo algorithm for generating samples of $X$ based on Eq. (15) involves a matrix inversion at each time step. The use of this algorithm can become inefficient if the dimension $d$ of $X$ and/or the number $m$ of time steps is large.

5.2. Linear filter

Suppose that $X$ is a Gaussian–Markov process defined by

$$\dot{X}(t) = a(t)\, X(t) + b(t)\, W(t), \quad t \in [t_0, t_1], \tag{16}$$


where $a$ and $b$ are $(d,d)$ and $(d,d')$ matrices, respectively, $W$ is an $\mathbb{R}^{d'}$-valued Gaussian white noise formally defined by $W(t) = dB(t)/dt$, and $B$ denotes an $\mathbb{R}^{d'}$-valued Brownian motion process. It is assumed that the functions $a, b$ are such that the solution $X$ of Eq. (16) exists and is unique ([7], Section 4.7.1.1). Under these assumptions the following statements are true. (1) If the expectation of the initial state is $E[X(t_0)] = 0$, so is the expectation of $X$ at all times. (2) The covariance function of $X$ satisfies the differential equations ([15], Section 5.2.1)

$$\frac{\partial c(t,s)}{\partial t} = a(t)\, c(t,s), \quad t_0 \le s \le t \le t_1, \tag{17}$$

$$\dot{\gamma}(t) = a(t)\, \gamma(t) + \gamma(t)\, a(t)^T + b(t)\, b(t)^T, \quad t \in [t_0, t_1], \tag{18}$$

where $\gamma(t) = c(t,t)$. (3) $X$ has a.s. continuous samples ([7], Section 4.7.1.1). (4) The defining equation for $X$ can be used to develop a Monte Carlo algorithm for generating samples of $X$. For example, samples of $X$ at times $t_0, t_0 + \Delta t, t_0 + 2\Delta t, \ldots$ can be calculated by the Euler scheme

$$X(t + \Delta t) = \big(i + a(t)\, \Delta t\big)\, X(t) + b(t)\, \Delta B(t), \tag{19}$$

where $\Delta t > 0$ is a time step, $i$ denotes the $(d,d)$ identity matrix, and $\Delta B(t) = B(t + \Delta t) - B(t)$ is an $\mathbb{R}^{d'}$-valued Gaussian variable whose coordinates are independent copies of $N(0, \Delta t)$. Finite difference approximations of Eq. (16) superior to the Euler scheme in Eq. (19) can be found in Ref. [9].

Our objective is to establish conditions under which a Gaussian–Markov process $X$ with a.s. continuous samples in $[t_0, t_1]$, expectation $E[X(t)] = 0$, and a specified covariance function $c(t,s) = E[X(t)\, X(s)^T]$ is the solution of a differential equation of the type in Eq. (16). We give two results.

Result 1: Suppose that the covariance function of $X$ satisfies the conditions:

(1) $[\partial c(t,s)/\partial t]\, c(t,s)^{-1}$ exists and is a function only of $t$ for all $s \le t$ in $[t_0, t_1]$, and (2) $\dot{\gamma}(t) - a(t)\, \gamma(t) - \gamma(t)\, a(t)^T$, with $a(t)$ in Eq. (20) below, is positive definite at all times $t \in [t_0, t_1]$. Then $X$ is the solution of Eq. (16) with

$$a(t) = \frac{\partial c(t,s)}{\partial t}\, c(t,s)^{-1}, \quad t_0 \le s \le t \le t_1; \qquad b(t)\, b(t)^T = \dot{\gamma}(t) - a(t)\, \gamma(t) - \gamma(t)\, a(t)^T, \quad t \in [t_0, t_1]. \tag{20}$$

The matrix $a$ in Eq. (20) is well defined by the first condition. The equality in Eq. (17) becomes an identity with $a$ given by Eq. (20). If $b$ is given by Eq. (20), then Eq. (18) is satisfied identically. The above condition on $\dot{\gamma}(t) - a(t)\, \gamma(t) - \gamma(t)\, a(t)^T$ is needed since $b(t)\, b(t)^T$ is a positive definite matrix at all times.

Suppose that $h(t) = b(t)\, b(t)^T$ in Eq. (20) can be inverted at each time $t \in [t_0, t_1]$. Then the entries of $b(t)$ can be obtained by using the Cholesky decomposition, which gives

$$b_{ij}(t) = \frac{h_{ij}(t) - \sum_{r=1}^{j-1} b_{ir}(t)\, b_{jr}(t)}{\left( h_{jj}(t) - \sum_{r=1}^{j-1} b_{jr}(t)^2 \right)^{1/2}} \tag{21}$$

with the convention $\sum_{r=1}^{0} b_{ir}(t)\, b_{jr}(t) = 0$ ([6], Appendix C). Other methods can be used to find the matrix $b(t)$. For example, if $h(t)$ has simple eigenvalues, then $\Phi(t)^T h(t)\, \Phi(t) = \lambda(t)$ is a diagonal matrix with non-zero entries $\lambda_i(t)$, $i = 1, \ldots, d$, where $(\lambda_1(t), \ldots, \lambda_d(t))$ are the eigenvalues of $h(t)$ and the columns of the $(d,d)$ matrix $\Phi(t)$ are the eigenvectors of $h(t)$. Hence, we have $b(t) = \Phi(t)\, \sqrt{\lambda(t)}$, where $\sqrt{\lambda(t)}$ is a diagonal matrix with non-zero entries $\sqrt{\lambda_i(t)}$, $i = 1, \ldots, d$.

The following result provides alternative expressions for the coefficients $a$ and $b$ in Eq. (20), based on the functional form in Eq. (12) of the covariance function for Gaussian–Markov processes.

Result 2: If $\phi(t)^{-1}$ and $\psi(t)^{-1}$ exist and the matrix $\phi(t)\, \dot{\psi}(t) - (\dot{\phi}(t)\, \psi(t))^T$ is positive definite at all times $t \in [t_0, t_1]$, the Gaussian–Markov process $X$ is the solution of Eq. (16) with

$$a(t) = \dot{\phi}(t)\, \phi(t)^{-1}, \qquad b(t)\, b(t)^T = \phi(t)\, \dot{\psi}(t) - \big(\dot{\phi}(t)\, \psi(t)\big)^T, \tag{22}$$

where $\dot{\phi}(t) = d\phi(t)/dt$ and $\dot{\psi}(t) = d\psi(t)/dt$. The above expressions of $a$ and $b$ follow from Eqs. (12) and (20), the symmetry of the covariance matrix $\gamma(t) = \phi(t)\, \psi(t)$ implying $\phi(t)\, \psi(t) = (\phi(t)\, \psi(t))^T$. Differentiation of this equality with respect to time yields $d(\phi(t)\, \psi(t))/dt = d\big((\phi(t)\, \psi(t))^T\big)/dt$.
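Result 2 reduces the construction of the filter in Eq. (16) to matrix algebra, which can be sketched numerically as follows (an illustration, not code from the paper; the function names are assumptions, and `numpy.linalg.cholesky` supplies the lower-triangular factor that Eq. (21) describes):

```python
import numpy as np

def filter_coefficients(phi, psi, dphi, dpsi, t):
    """Coefficients a(t), b(t) of Eq. (16) from Result 2, Eq. (22):
    a = dphi phi^{-1},  b b^T = phi dpsi - (dphi psi)^T,
    with b obtained by Cholesky factorization (Eq. (21))."""
    p, q = phi(t), psi(t)
    dp, dq = dphi(t), dpsi(t)
    a = dp @ np.linalg.inv(p)
    h = p @ dq - (dp @ q).T      # b(t) b(t)^T
    b = np.linalg.cholesky(h)    # lower-triangular factor
    return a, b

# scalar check: phi(t) = e^{-lam t}, psi(s) = e^{lam s} - e^{-lam s}
lam = 2.0
phi  = lambda t: np.array([[np.exp(-lam * t)]])
psi  = lambda t: np.array([[np.exp(lam * t) - np.exp(-lam * t)]])
dphi = lambda t: np.array([[-lam * np.exp(-lam * t)]])
dpsi = lambda t: np.array([[lam * (np.exp(lam * t) + np.exp(-lam * t))]])
a, b = filter_coefficients(phi, psi, dphi, dpsi, t=0.7)
# a -> [[-lam]] and b -> [[sqrt(2 lam)]]
```

The scalar check reproduces the coefficients $a = -\lambda$ and $b = \sqrt{2\lambda}$ obtained for the covariance function of Example 1 below.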

Example 1. Let $X(t)$, $t \in [0, \infty)$, be a real-valued Gaussian process with mean $E[X(t)] = 0$ and covariance function

$$c(t,s) = e^{-\lambda(t-s)} - e^{-\lambda(t+s)} = e^{-\lambda t}\big(e^{\lambda s} - e^{-\lambda s}\big), \quad t \ge s \ge 0, \tag{23}$$

where $\lambda > 0$ is a constant. The covariance function of $X$ has the form in Eq. (11) with $f(t) = e^{-\lambda t}$ and $g(s) = e^{\lambda s} - e^{-\lambda s}$, so that $X$ has the Markov property. The conditional random variable $X(t) \,|\, (X(s) = \xi)$, $t > s$, is Gaussian with mean $c(t,s)\, \xi/c(s,s)$ and variance $c(t,t) - c(t,s)^2/c(s,s)$. This observation and the Monte Carlo algorithm in Eq. (15) can be used to generate samples of $X$. A more efficient algorithm for generating samples of $X$ can be developed based on the representation

$$\dot{X}(t) = -\lambda\, X(t) + \sqrt{2\lambda}\, W(t), \quad t \ge 0, \tag{24}$$

for this process and a finite difference approximation of this representation of the type in Eq. (19), where $X(0) = 0$ and $W$ is interpreted as the formal derivative of a Brownian motion process. The definition of $X$ in Eq. (24) is valid since

$$|c(t+h, t) - c(t,t)| = \big(1 - e^{-\lambda h}\big)\big(1 - e^{-2\lambda t}\big) \le \lambda h,$$

so that $X$ has a.s. continuous samples (Eq. (8)), $a(t) = [\partial c(t,s)/\partial t]/c(t,s) = -\lambda$, and $b(t)^2 = \dot{\gamma}(t) + 2\lambda\, \gamma(t) = 2\lambda$ (Eq. (20)). The initial condition $X(0) = 0$ implies $E[X(t)] = 0$ and $\gamma(0) = c(0,0) = 0$, consistently with the stated properties of $X$.

Example 2. Let $Y$ be a stationary Gaussian process with mean $m_y(t) = E[Y(t)] = 0$ and covariance function $c_y(t,s) = \exp(-\lambda |t-s|)$, and let $X$ be defined by Eq. (1). The process $X$ is Gaussian and its first two moments are

$$m_x(t) = E[X(t)] = 0, \quad t \ge 0, \tag{25}$$

$$c_x(t,s) = E[X(t)\, X(s)] = a(t)\, a(s)\, e^{-\lambda(t-s)}, \quad t \ge s \ge 0. \tag{26}$$

The process $X$ is also Markov, since its correlation function has the functional form in Eq. (11). If $a$ is bounded, that is, $\bar{a} = \sup_{t \ge 0} |a(t)| < \infty$, and is Lipschitz of order $\alpha > 1/2$, that is, $|a(t+h) - a(t)| \le c |h|^\alpha$ for some $c > 0$, then

$$|c(t+h, t) - c(t,t)| = |a(t+h)\, a(t)\, e^{-\lambda h} - a(t)^2| = |a(t+h)\, a(t)\, e^{-\lambda h} - a(t)^2 e^{-\lambda h} + a(t)^2 e^{-\lambda h} - a(t)^2| \le |a(t)| \big( |a(t+h) - a(t)|\, e^{-\lambda h} + |a(t)|\, |e^{-\lambda h} - 1| \big) \le \bar{a}\big(c |h|^\alpha + \bar{a}\, \lambda\, |h|\big),$$

so that $X$ has continuous samples a.s. (Eq. (8)). Since

$$\frac{\partial c_x(t,s)}{\partial t} = \big(\dot{a}(t) - \lambda\, a(t)\big)\, a(s)\, e^{-\lambda(t-s)}, \quad t \ge s \ge 0, \qquad \frac{\partial c_x(t,s)/\partial t}{c_x(t,s)} = \frac{\dot{a}(t) - \lambda\, a(t)}{a(t)}, \quad t \ge s \ge 0,$$

we have $a(t) = \dot{a}(t)/a(t) - \lambda$, $\dot{\gamma}(t) - 2 a(t)\, \gamma(t) = 2\lambda\, a(t)^2 \ge 0$, and $b(t) = \sqrt{2\lambda}\, a(t)$ (Eq. (20)). Hence, $X$ is the solution of the differential equation

$$\dot{X}(t) = \left( \frac{\dot{a}(t)}{a(t)} - \lambda \right) X(t) + \sqrt{2\lambda}\, a(t)\, W(t), \quad t \ge 0, \tag{27}$$

where the driving noise is defined formally by $W(t) = dB(t)/dt$ and $B$ denotes a Brownian motion process. For example, the defining equation for $X$ becomes

$$\dot{X}(t) = \left( \frac{(2\pi/t_1)\sin(2\pi t/t_1)}{1 - \cos(2\pi t/t_1)} - \lambda \right) X(t) + \sqrt{2\lambda}\, \big(1 - \cos(2\pi t/t_1)\big)\, W(t)$$

for $a(t) = 1 - \cos(2\pi t/t_1)$ and $t \in [0, t_1]$.

6. Gaussian processes

Let $X(t)$, $t \in [t_0, t_1]$, be an $\mathbb{R}^d$-valued non-stationary Gaussian process with mean $E[X(t)] = 0$ and covariance function $c(t,s) = E[X(t)\, X(s)^T]$. It is assumed that $X$ does not have the Markov property, so that its covariance function does not satisfy Eqs. (9) and (12). Two Monte Carlo algorithms are presented for generating samples of $X$. They are based on state augmentation and on properties of Gaussian variables. An alternative Monte Carlo simulation algorithm for generating samples of $X$ can be found in [5]. That algorithm is based on representations of $X$ by a finite sum of harmonics with random correlated coefficients whose second-moment properties are given by the coefficients of the Fourier series for the covariance function of $X$ in $[t_0, t_1] \times [t_0, t_1]$.

6.1. State augmentation

Suppose that there is an $\mathbb{R}^{d+q}$-valued Gaussian–Markov process $Z$ in $[t_0, t_1]$ whose first $d$ coordinates coincide with $X$. Then the results in the previous section apply to $Z$; for example, samples of $X$ can be extracted from samples of $Z$ generated by one of the above Monte Carlo simulation algorithms. It was not possible to establish simple criteria for the existence of $Z$. The existence of this process can be proved by construction. Unfortunately, the construction of $Z$ with the above properties, provided it exists, may involve lengthy iterative calculations. The construction of a linear filter driven by Gaussian white noise with output $Z$ is illustrated by an example.

Example 3. Let $X(t)$, $t \ge 0$, be an $\mathbb{R}^2$-valued Gaussian–Markov process with a.s. continuous samples, mean $E[X(t)] = 0$, and covariance function

$$c(t,s) = \begin{bmatrix} \cos(\nu(t-s)) & \sin(\nu(t-s))/\nu \\ -\nu \sin(\nu(t-s)) & \cos(\nu(t-s)) \end{bmatrix} \gamma(s), \quad t \ge s \ge 0, \tag{28}$$

where

$$\gamma(t) = \begin{bmatrix} \dfrac{1}{2\nu^2}\left( t - \dfrac{\sin(2\nu t)}{2\nu} \right) & \dfrac{1}{4\nu^2}\big(1 - \cos(2\nu t)\big) \\ \dfrac{1}{4\nu^2}\big(1 - \cos(2\nu t)\big) & \dfrac{1}{2}\left( t + \dfrac{\sin(2\nu t)}{2\nu} \right) \end{bmatrix}, \quad t \ge 0. \tag{29}$$

Since the ratio $(\partial c_{11}(t,s)/\partial t)/c_{11}(t,s)$ is a function of $(t,s)$, there is no scalar differential equation of the type in Eq. (16) with solution $X_1$.

Example 4. Let $X(t)$, $t \ge 0$, be a real-valued non-stationary Gaussian process with mean $E[X(t)] = 0$ and covariance function

$$c(t,s) = \zeta(s)\cos(\nu\tau) + \xi(s)\sin(\nu\tau), \quad t \ge s \ge 0, \tag{30}$$

where $\zeta(s) = \big(s - \sin(2\nu s)/(2\nu)\big)/(2\nu^2)$, $\xi(s) = \big(1 - \cos(2\nu s)\big)/(4\nu^3)$, and $\tau = t - s$. Since the ratio

$$\frac{\partial c(t,s)/\partial t}{c(t,s)} = \frac{-\nu\, \zeta(s)\sin(\nu\tau) + \nu\, \xi(s)\cos(\nu\tau)}{\zeta(s)\cos(\nu\tau) + \xi(s)\sin(\nu\tau)}, \quad t \ge s,$$

depends on $t$ and $s$, there is no scalar differential equation of the type in Eq. (16) defining $X$.

Consider an $\mathbb{R}^2$-valued process $Z$ defined by the differential equation

$$\dot{Z}(t) = \begin{bmatrix} a_{11}(t) & a_{12}(t) \\ a_{21}(t) & a_{22}(t) \end{bmatrix} Z(t) + \begin{bmatrix} b_{11}(t) & b_{12}(t) \\ b_{21}(t) & b_{22}(t) \end{bmatrix} \begin{bmatrix} W_1(t) \\ W_2(t) \end{bmatrix}, \quad t \ge 0, \tag{31}$$

where $W_1$ and $W_2$ are independent Gaussian white noise processes, the coefficients $a_{ij}, b_{ij}$ are not yet specified, and $Z(0) = 0$. The above differential equation implies that $Z$ is a Gaussian–Markov process with a.s. continuous samples and mean $E[Z(t)] = 0$, $t \ge 0$, for any coefficients $a_{ij}, b_{ij}$ such that Eq. (31) has a unique solution. Our objective is to determine whether it is possible to find expressions for $a_{ij}, b_{ij}$ in Eq. (31) such that $X$ is a version of the first coordinate of $Z$, that is, $Z = (Z_1 = X, Z_2)$. The covariance function of $Z$ satisfies the differential equations (Eq. (17))

$$\frac{\partial c_{11}(t,s)}{\partial t} = a_{11}(t)\, c_{11}(t,s) + a_{12}(t)\, c_{21}(t,s), \qquad \frac{\partial c_{12}(t,s)}{\partial t} = a_{11}(t)\, c_{12}(t,s) + a_{12}(t)\, c_{22}(t,s),$$

$$\frac{\partial c_{21}(t,s)}{\partial t} = a_{21}(t)\, c_{11}(t,s) + a_{22}(t)\, c_{21}(t,s), \qquad \frac{\partial c_{22}(t,s)}{\partial t} = a_{21}(t)\, c_{12}(t,s) + a_{22}(t)\, c_{22}(t,s)$$

for $t \ge s$. If $X$ is the first coordinate of $Z$, the covariance function $c_{11}$ must coincide with $c$ in Eq. (30). The first of the above equations with $c_{11}(t,s) = c(t,s)$ is satisfied for $a_{11}(t) = 0$, $a_{12}(t) = 1$, and $c_{21}(t,s) = -\nu\, \zeta(s)\sin(\nu\tau) + \nu\, \xi(s)\cos(\nu\tau)$. The third equation with the above expressions for $c_{11}$ and $c_{21}$ holds for $a_{21}(t) = -\nu^2$ and $a_{22}(t) = 0$. The second and the fourth equations with the above coefficients $a_{ij}$ become

$$\frac{\partial c_{12}(t,s)}{\partial t} = c_{22}(t,s), \qquad \frac{\partial c_{22}(t,s)}{\partial t} = -\nu^2\, c_{12}(t,s),$$

so that $\partial^2 c_{22}(t,s)/\partial t^2 = -\nu^2\, c_{22}(t,s)$ and

$$c_{22}(t,s) = \delta(s)\cos(\nu\tau) + \epsilon(s)\sin(\nu\tau), \qquad c_{12}(t,s) = c_{12}(s,s) + \big(\delta(s)\sin(\nu\tau) + \epsilon(s)(1 - \cos(\nu\tau))\big)/\nu,$$

where $\delta(s)$ and $\epsilon(s)$, $s \le t$, are unknown functions. The above expressions of the covariance functions of $Z$ give $\gamma_{11}(t) = \zeta(t)$, $\gamma_{21}(t) = \gamma_{12}(t) = \nu\, \xi(t)$, and $\gamma_{22}(t) = \delta(t)$. Consider now the differential equation for the covariance matrix $\gamma(t) = E[Z(t)\, Z(t)^T]$ of $Z(t)$ (Eq. (18)) with the above expressions of $\gamma_{ij}(t)$, $i, j = 1, 2$. The entry $(1,1)$ of this equation, $\dot{\gamma}_{11}(t) = 2\gamma_{12}(t) + b_{11}(t)^2 + b_{12}(t)^2$, implies $b_{11}(t)^2 + b_{12}(t)^2 = 0$ at all times, so that we must have $b_{11}(t) = b_{12}(t) = 0$ for $t \ge 0$. The entries $(1,2)$ and $(2,1)$ coincide and give $\delta(t) = t/2 + \sin(2\nu t)/(4\nu)$. The entry $(2,2)$ requires $b_{21}(t)^2 + b_{22}(t)^2 = 1$. Hence, the coordinate $X = Z_1$ of $Z$ defined by the differential equation

$$\dot{Z}(t) = \begin{bmatrix} 0 & 1 \\ -\nu^2 & 0 \end{bmatrix} Z(t) + \begin{bmatrix} 0 & 0 \\ b_{21}(t) & b_{22}(t) \end{bmatrix} \begin{bmatrix} W_1(t) \\ W_2(t) \end{bmatrix}, \quad t \ge 0, \tag{32}$$

has the desired properties provided that $b_{21}(t)^2 + b_{22}(t)^2 = 1$. Since the differential equation for $Z_1$ is not driven by noise and the noise $b_{21}(t)\, W_1(t) + b_{22}(t)\, W_2(t)$ driving the equation for $Z_2$ has the same probability law as $W(t) = dB(t)/dt$, the differential equation for $Z$ can be simplified to

$$\dot{Z}(t) = \begin{bmatrix} 0 & 1 \\ -\nu^2 & 0 \end{bmatrix} Z(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} W(t), \quad t \ge 0. \tag{33}$$

Note also that the matrix

$$b(t)\, b(t)^T = \dot{\gamma}(t) - a(t)\, \gamma(t) - \gamma(t)\, a(t)^T = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$$

corresponding to Eq. (33) is positive definite because $x^T \big(b(t)\, b(t)^T\big) x = x_2^2$ for all $x = (x_1, x_2) \in \mathbb{R}^2$. In this example, an augmentation of the scalar process $X$ to


Fig. 2. Samples of X and exact/estimated standard deviation of XðtÞ for t [ ½0; 10 and n ¼ 5:

the $\mathbb{R}^2$-valued process $Z$ has been sufficient. In more complex situations, $X$ may have to be augmented to larger vector processes $Z$, and the construction of the defining differential equations for these processes is likely to involve iterations.

Fig. 2 shows 10 samples of $X$ for $\nu = 5$ generated from the finite difference approximation (Eq. (19))

$$Z(t + \Delta t) = \begin{bmatrix} 1 & \Delta t \\ -\nu^2 \Delta t & 1 \end{bmatrix} Z(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} \Delta B(t), \quad t \ge 0,$$

with a time step $\Delta t = 5 \times 10^{-4}$, where $\Delta B(t) = B(t + \Delta t) - B(t)$ denotes the increment of the Brownian motion $B$ in the time interval $[t, t + \Delta t]$. The figure also shows the evolution of the exact and the estimated standard deviation of $X$ in the time interval $[0, 10]$. The estimated standard deviation is based on 100 samples of $X$. The exact variance of $X$ is given by Eq. (30) with $t = s$.
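As a check on this discretization, the recursion and the exact standard deviation $\sqrt{\zeta(t)} = \sqrt{c(t,t)}$ implied by Eq. (30) can be coded in a few lines. This is a minimal NumPy sketch; the function names and sample counts are illustrative choices, not part of the paper.

```python
import numpy as np

def simulate_x(nu, t_end, dt, n_samples, rng):
    """Samples of X = Z_1 from the finite-difference scheme
    Z(t+dt) = [[1, dt], [-nu^2*dt, 1]] Z(t) + [0, 1]^T dB(t), Z(0) = 0."""
    n_steps = int(round(t_end / dt))
    a = np.array([[1.0, dt], [-nu**2 * dt, 1.0]])
    z = np.zeros((n_samples, 2))
    x = np.zeros((n_samples, n_steps + 1))
    for k in range(n_steps):
        db = rng.normal(0.0, np.sqrt(dt), size=n_samples)  # Brownian increments
        z = z @ a.T                # deterministic part of the update
        z[:, 1] += db              # noise enters the second coordinate only
        x[:, k + 1] = z[:, 0]
    return x

def exact_std(nu, t):
    """Exact standard deviation of X(t): sqrt(c(t,t)) = sqrt(zeta(t)), Eq. (30)."""
    return np.sqrt((t - np.sin(2 * nu * t) / (2 * nu)) / (2 * nu**2))

rng = np.random.default_rng(0)
x = simulate_x(nu=5.0, t_end=10.0, dt=5e-4, n_samples=100, rng=rng)
```

With 100 samples, the estimated standard deviation at $t = 10$ typically falls within roughly ten percent of `exact_std(5.0, 10.0)`, consistent with the behavior reported in Fig. 2.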

6.2. Gaussian variables

Suppose that it is not possible, or it is computationally too demanding, to augment $X$ to a Gaussian-Markov process. Then properties of Gaussian variables can be used to develop Monte Carlo algorithms for generating samples of $X$.

Consider a partition $t_0 = s_0 < s_1 < \cdots < s_m = t_1$ of $[t_0, t_1]$. Let $X^{(k)}$ be a column vector in $\mathbb{R}^{(k+1)d}$ whose first, second, and last $d$ coordinates are the $\mathbb{R}^d$-valued random variables $X(s_0)$, $X(s_1)$, and $X(s_k)$, respectively, and let $\xi^{(k)}$ be a sample of $X^{(k)}$. The covariance matrix of the zero-mean Gaussian vector $(X(s_0), X(s_1), \ldots, X(s_k), X(s_{k+1}))$ is

$$c_{k+1,k+1} = \begin{bmatrix} c(s_{k+1}, s_{k+1}) & c_{k+1,k} \\ c_{k,k+1} & c_{k,k} \end{bmatrix}, \qquad (34)$$

where

$$c_{k+1,k} = \bigl[E[X(s_{k+1})X(s_k)^{\mathrm T}], \ldots, E[X(s_{k+1})X(s_0)^{\mathrm T}]\bigr] = c_{k,k+1}^{\mathrm T} \qquad (35)$$

$$c_{k,k} = \begin{bmatrix} E[X(s_k)X(s_k)^{\mathrm T}] & \cdots & E[X(s_k)X(s_0)^{\mathrm T}] \\ \vdots & \ddots & \vdots \\ E[X(s_0)X(s_k)^{\mathrm T}] & \cdots & E[X(s_0)X(s_0)^{\mathrm T}] \end{bmatrix}.$$

Since $X$ is a Gaussian process, $X(s_{k+1}) \mid (X^{(k)} = \xi^{(k)})$ is an $\mathbb{R}^d$-valued Gaussian variable with the properties

$$X(s_{k+1}) \mid (X^{(k)} = \xi^{(k)}) \sim N\bigl(c_{k+1,k}\, c_{k,k}^{-1}\, \xi^{(k)},\; c(s_{k+1}, s_{k+1}) - c_{k+1,k}\, c_{k,k}^{-1}\, c_{k,k+1}\bigr). \qquad (36)$$

Two Monte Carlo algorithms are presented for generating samples of $X$ in $[t_0, t_1]$. The first algorithm is based on the representation $X^{(m)} = bG$, where $b$ is a square matrix delivered, for example, by the Cholesky decomposition, and $G$ denotes an $\mathbb{R}^{(m+1)d}$-valued random variable with independent $N(0,1)$ coordinates (Eq. (21)). Samples of $X^{(m)}$ can be calculated from samples of $G$ and the above representation. The second algorithm is based on properties of conditional Gaussian variables. Suppose that a sample $\xi^{(k)}$ of $X^{(k)}$, $0 \le k < m$, has been generated and the objective is to generate a sample of $X(s_{k+1}) \mid (X^{(k)} = \xi^{(k)})$, that is, of the conditional random variable in Eq. (36). The calculation of the second-moment properties of the random variable in Eq. (36) requires the inversion of a new and larger matrix at each time step, that is, the matrix $c_{k,k}$. Direct calculation of $c_{k,k}^{-1}$ at each $k = 1, \ldots, m$ is inefficient. A recurrence formula relating $c_{k+1,k+1}^{-1}$ to $c_{k,k}^{-1}$ can be used to calculate the sequence of inverse matrices $c_{k,k}^{-1}$. It can be shown that ([12], Appendix A)

$$c_{k+1,k+1}^{-1} = \begin{bmatrix} c(s_{k+1},s_{k+1})^{-1} + c(s_{k+1},s_{k+1})^{-1}\, c_{k+1,k}\, d^{-1}\, c_{k,k+1}\, c(s_{k+1},s_{k+1})^{-1} & -c(s_{k+1},s_{k+1})^{-1}\, c_{k+1,k}\, d^{-1} \\ -d^{-1}\, c_{k,k+1}\, c(s_{k+1},s_{k+1})^{-1} & d^{-1} \end{bmatrix}, \qquad (37)$$


Fig. 3. Samples of X and exact/estimated standard deviation of XðtÞ for t [ ½0; 1 and n ¼ 5:

where $d = c_{k,k} - c_{k,k+1}\, c(s_{k+1}, s_{k+1})^{-1}\, c_{k+1,k}$. This property of the covariance matrices of $X^{(k)}$ suggests the following Monte Carlo simulation algorithm.

Step 1: Select a partition $t_0 = s_0 < s_1 < \cdots < s_m = t_1$ of the time interval $[t_0, t_1]$.
Step 2: Calculate recursively the inverses of the covariance matrices $c_{k,k}$ and the second-moment properties of the vectors $X^{(k)}$, $k = 0, 1, \ldots, m-1$.
Step 3: Generate samples of the conditional vectors $X(s_{k+1}) \mid X^{(k)}$ using the probability law of these vectors in Eq. (36).
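One update step of Eq. (37) can be sketched for $d = 1$: given $c_{k,k}^{-1}$, the new diagonal entry $c(s_{k+1}, s_{k+1})$, and the cross-covariance row $c_{k+1,k}$, it returns $c_{k+1,k+1}^{-1}$ with the new point ordered first, as in Eq. (34). The inner inverse $d^{-1}$ is obtained from $c_{k,k}^{-1}$ by the Sherman-Morrison identity, so no matrix is inverted from scratch. This is an illustrative sketch, not the paper's code.

```python
import numpy as np

def inverse_update(c_kk_inv, c_new, c_cross):
    """One step of the recurrence in Eq. (37) for a scalar process (d = 1)."""
    b = np.asarray(c_cross, dtype=float).reshape(-1, 1)   # c_{k,k+1} as a column
    a_inv = 1.0 / c_new                                   # c(s_{k+1}, s_{k+1})^{-1}
    # d = c_{k,k} - c_{k,k+1} c(s_{k+1},s_{k+1})^{-1} c_{k+1,k}; invert it using
    # only c_{k,k}^{-1} via the Sherman-Morrison identity:
    u = c_kk_inv @ b
    s = c_new - float(b.T @ u)        # Schur complement of c_{k,k} in c_{k+1,k+1}
    d_inv = c_kk_inv + (u @ u.T) / s
    top_right = (-a_inv * (d_inv @ b)).ravel()
    n = c_kk_inv.shape[0]
    out = np.empty((n + 1, n + 1))
    out[0, 0] = a_inv + a_inv * float(b.T @ d_inv @ b) * a_inv
    out[0, 1:] = top_right
    out[1:, 0] = top_right
    out[1:, 1:] = d_inv
    return out
```

Growing the inverse one grid point at a time this way costs $O(k^2)$ per step instead of the $O(k^3)$ of a fresh inversion.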

Example 5. Let $X(t)$, $t \ge 0$, be the real-valued Gaussian process in Example 4. Fig. 3 shows, as in Fig. 2, 10 samples of $X$ and the evolution in the time interval $[0, 10]$ of the exact and the estimated standard deviation of $X$ for $\nu = 5$. The estimates are based on 100 samples of the vector $X^{(m)} = (X(s_1), \ldots, X(s_m))$ calculated from the representation $X^{(m)} = bG$ delivered by the Cholesky decomposition and samples of a Gaussian vector $G$ with independent $N(0,1)$ coordinates, where $s_k = k\Delta t$, $k = 1, \ldots, m$, and $m = 1000$. Since the size of the vector $X^{(m)}$ increases as the time step $\Delta t$ decreases, the time step cannot be made very small if the matrix $b$ in the representation of $X^{(m)}$ is to remain computable. Denser partitions of the time interval can be used if the generation of samples of $X$ is based on the Monte Carlo algorithm using the recurrence formula in Eq. (37). As expected, the results in Figs. 2 and 3 are similar. The Monte Carlo algorithm for generating samples of $X$ used to produce Fig. 2 is more efficient than the algorithm used to construct Fig. 3.
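The Cholesky-based algorithm of Example 5 amounts to factoring the covariance matrix of $X^{(m)}$ on the chosen grid. Below is a minimal sketch for the process of Example 4; the coarser grid (compared with $m = 1000$ in the figure) and the small jitter term added for numerical stability of the factorization are assumptions of the sketch.

```python
import numpy as np

def cov(t, s, nu):
    """Covariance c(t, s) of Example 4 (Eq. (30)); symmetric in its arguments."""
    t, s = max(t, s), min(t, s)
    zeta = (s - np.sin(2 * nu * s) / (2 * nu)) / (2 * nu**2)
    xi = (1.0 - np.cos(2 * nu * s)) / (4 * nu**3)
    return zeta * np.cos(nu * (t - s)) + xi * np.sin(nu * (t - s))

def sample_cholesky(times, nu, n_samples, rng):
    """Samples of X^(m) = b G, with b b^T = c_{m,m} and G ~ N(0, I)."""
    c = np.array([[cov(a, b, nu) for b in times] for a in times])
    c += 1e-12 * np.eye(len(times))       # jitter: keeps the factorization stable
    b = np.linalg.cholesky(c)
    g = rng.standard_normal((len(times), n_samples))
    return (b @ g).T                      # one sample path per row
```

The conditional-sampling algorithm (Steps 1-3 above) yields statistically equivalent paths but, through the recurrence of Eq. (37), tolerates much denser grids than a single large factorization.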

7. Conclusions

Models have been developed for representing general non-stationary, continuous-time Gaussian processes and generating samples of these processes. The class of non-stationary Gaussian processes has been divided into two subclasses consisting of Gaussian-Markov processes and Gaussian processes that do not have the Markov property. A simple criterion has been established for determining whether a Gaussian process is also Markov. The transition probability density function can be used to characterize a Gaussian-Markov process. If such a process has continuous samples and its covariance function satisfies some conditions, it can be defined as the output of a linear filter driven by Gaussian white noise. Two representations have been used for non-stationary Gaussian processes that lack the Markov property. If these processes can be transformed into Gaussian-Markov processes by state augmentation, the above representations for Gaussian-Markov processes apply directly. Otherwise, properties of conditional Gaussian variables need to be used to represent these processes.

It was shown that efficient Monte Carlo algorithms can be developed for generating samples of Gaussian-Markov processes, particularly if these processes can be represented by the output of linear filters driven by Gaussian white noise. However, this is not the case with the class of non-stationary Gaussian processes that are not Markov. If these processes cannot be transformed into Gaussian-Markov processes by state augmentation, the Monte Carlo algorithms for generating samples of these processes are less efficient and are based on properties of conditional Gaussian variables.

Examples have been presented to illustrate the models for non-stationary Gaussian processes discussed in the paper. The construction of a Gaussian-Markov process by augmenting the state of a Gaussian process that is not Markov has been illustrated by a simple example. Some of the Monte Carlo simulation algorithms in the paper have been applied to generate samples of non-stationary Gaussian processes.

References

[1] Cramér, H., & Leadbetter, M. R. (1967). Stationary and related stochastic processes. New York: Wiley.
[2] Davenport, W. B., & Root, W. L. (1958). An introduction to the theory of random signals and noise. New York: McGraw-Hill.
[3] Deodatis, G. (1996). Non-stationary stochastic vector processes: seismic ground motion applications. Probab Engng Mech, 11, 149-168.
[4] Ghanem, R. G., & Spanos, P. D. (1991). Stochastic finite elements: a spectral approach. New York: Springer.
[5] Grigoriu, M. (1993). Simulation of nonstationary Gaussian processes by random trigonometric polynomials. J Engng Mech, ASCE, 119(2), 328-343.
[6] Grigoriu, M. (1995). Applied non-Gaussian processes: examples, theory, simulation, linear random vibration, and MATLAB solutions. Englewood Cliffs, NJ: Prentice Hall.
[7] Grigoriu, M. (2002). Stochastic calculus: applications in science and engineering. Boston: Birkhäuser.
[8] Grigoriu, M., Ruiz, S. E., & Rosenblueth, E. (1988). The Mexico earthquake of September 19, 1985: non-stationary models of seismic ground acceleration. Earthquake Spectra, 4(3), 551-568.
[9] Kloeden, P. E., & Platen, E. (1992). Numerical solutions of stochastic differential equations. New York: Springer.
[10] Priestley, M. B. (1988). Non-linear and non-stationary time series analysis. New York: Academic Press.
[11] Pugachev, V. S. (1965). Theory of random functions and its applications to control problems. New York: Pergamon Press.
[12] Schweppe, F. C. (1973). Uncertain dynamic systems. Englewood Cliffs, NJ: Prentice Hall.
[13] Shinozuka, M. (1970). Random processes with evolutionary power. J Engng Mech Div, ASCE, 96(EM4), 543-545.
[14] Shinozuka, M., & Jan, C. M. (1972). Digital simulation of random processes and its applications. J Sound Vibration, 25(1), 111-128.
[15] Soong, T. T., & Grigoriu, M. (1983). Random vibration of mechanical and structural systems. Englewood Cliffs, NJ: Prentice Hall.
[16] Wong, E., & Hajek, B. (1985). Stochastic processes in engineering systems. New York: Springer.