Predicting integrals of diffusion processes


Journal of Statistical Planning and Inference 90 (2000) 183–193

www.elsevier.com/locate/jspi

Montserrat Fuentes*
Statistics Department, North Carolina State University, Patterson Hall 210C, Box 8203, Raleigh, NC 27695, USA

Received 9 February 1999; received in revised form 15 December 1999; accepted 6 March 2000

Abstract

Consider predicting the integral of a diffusion process $Z$ in a bounded interval $A$, based on the observations $Z(t_{1n}),\dots,Z(t_{nn})$, where $t_{1n},\dots,t_{nn}$ is a dense triangular array of points (the step of discretization tends to zero as $n$ increases) in the bounded interval. The best linear predictor is generally not asymptotically optimal. Instead, we predict $\int_A Z(t)\,dt$ using the conditional expectation of the integral of the diffusion process, the optimal predictor in terms of minimizing the mean squared error, given the observed values of the process. We obtain that, conditioning on the observed values, the order of convergence in probability to zero of the mean squared prediction error is $O_p(n^{-2})$. We prove that the standardized conditional prediction error is approximately Gaussian with mean zero and unit variance, even though the underlying diffusion is generally non-Gaussian. Because the optimal predictor is hard to calculate exactly for most diffusions, we present an easily computed approximation that is asymptotically optimal. This approximation is a function of the diffusion coefficient. © 2000 Elsevier Science B.V. All rights reserved.

MSC: primary 62M20; secondary 62M40, 41A25

Keywords: Diffusion process; Fixed-domain asymptotics; Infill asymptotics; Numerical integration

1. Introduction

Consider predicting $\int_A Z(t)\,dt$ for a homogeneous diffusion process $Z$ based on observations $Z^{(n)}=(Z(t_{0n}),\dots,Z(t_{nn}))$, where $A=[0,1]$ and $0=t_{0n}<\cdots<t_{nn}=1$, $n=1,2,\dots$. We assume the step of discretization tends to zero as $n\to\infty$: $\lim_{n\to\infty}\max_{2\le i\le n}(t_{in}-t_{i-1,n})=0$. Let $\hat Z_n(t)=E[Z(t)\,|\,Z^{(n)}]$ and $c_n^Z(t_1,t_2)=E[Z(t_1)Z(t_2)\,|\,Z^{(n)}]-\hat Z_n(t_1)\hat Z_n(t_2)$ be the conditional mean and covariance functions, respectively, for $Z$. For $Z$ a Brownian motion, $\hat Z_n(t)$ is a linear function of $Z^{(n)}$ and $c_n^Z(t_1,t_2)$ is nonrandom. However, for diffusions, $\hat Z_n(t)$ is generally not linear in $Z^{(n)}$ and $c_n^Z(t_1,t_2)$ is random.

Tel.: +1-919-515-1921; fax: +1-919-515-1169. E-mail address: [email protected] (M. Fuentes).

0378-3758/00/$ - see front matter © 2000 Elsevier Science B.V. All rights reserved.
PII: S0378-3758(00)00121-X


Numerical integration over deterministic functions by taking a linear function of the observed values is studied, for example, by Davis and Rabinowitz (1984), Ghizzetti and Ossicini (1970) and Chakravarti (1970). Numerical integration over random functions by taking a linear function of $Z^{(n)}$ is carried out by Marnevskaya and Jarovich (1984), Stein (1993, 1995), Pitt et al. (1995), and Ritter (1995). Stein (1987) presents a nonlinear approximation to the integral of a transformation of a Brownian motion process. Considering the close relationship between predicting integrals and estimating regression coefficients from stochastic processes described by Sacks and Ylvisaker (1966), the work by Sacks and Ylvisaker (1966, 1968, 1970), Eubank et al. (1981) and Wahba (1971, 1974) on designs for estimating regression coefficients is also relevant for the prediction problem. Matheron (1985) has developed a technique for approximating the unconditional distribution of $\int_A Z(t)\,dt$ when the process is non-Gaussian, but we are interested in making inferences about $\int_A Z(t)\,dt$ given $Z^{(n)}$, so the conditional distributions are relevant. The conditional expectation of the integral of the diffusion process,

$$E\left[\int_A Z(t)\,dt \,\Big|\, Z^{(n)}\right] = \int_A \hat Z_n(t)\,dt,$$

is the optimal predictor (in the sense of minimizing the mean squared error) of $\int_A Z(t)\,dt$ given $Z^{(n)}$. Since this predictor is hard to calculate exactly for most diffusions, we present an approximation that still yields better results than the best linear predictor (BLP) and is asymptotically optimal as $n\to\infty$. For a Brownian motion the best predictor is actually the BLP, and the order of convergence to zero of the mean squared prediction error (given the observed values) is $O_p(n^{-2})$. For a diffusion in general, the best predictor will be nonlinear. We prove in this paper that the order of convergence to zero of the mean squared prediction error is also $O_p(n^{-2})$.

The primary purpose of this paper is to show that, under certain conditions, the standardized conditional prediction error

$$\frac{\int_A Z(t)\,dt - \int_A \hat Z_n(t)\,dt}{[\mathrm{var}\{\int_A Z(t)\,dt - \int_A \hat Z_n(t)\,dt \,|\, Z^{(n)}\}]^{1/2}}$$

is approximately $N(0,1)$. The asymptotic normality clearly holds for a Brownian motion because it is a Gaussian process with $c_n^Z(t_1,t_2)$ nonrandom, but this asymptotic normality will hold even if the underlying process is non-Gaussian. The proof of the asymptotic normality is based on the Markov property satisfied by the diffusion process. This property leads to a representation of the prediction error as a sum of conditionally independent terms which is asymptotically normal. We also obtain an easily computed approximation for the general nonlinear predictor, $\int_A \hat Z_n(t)\,dt$, that is a function of the diffusion coefficient.

First, in Section 2, we show that for large $n$, the conditional distribution of the prediction error is approximately Gaussian. Next, in Section 3, we show that the BLP is generally not asymptotically optimal and give an easily computed asymptotically optimal predictor. Then, in Section 4, we give an approximation for the conditional standard error of the prediction. The order of the mean squared prediction error is asymptotic to $n^{-2}\int_A \sigma^2(Z(t))\,dt$. In Section 5 we present some conclusions and final remarks.

1.1. Definitions and notation

Consider the prediction of $\int_0^1 Z(t)\,dt$ based on observing $Z(t)$ at equally spaced observations in time $0=t_{0n}<t_{1n}<\cdots<t_{nn}=1$. We will often write $t_i$ to denote $t_{in}$ for $i=0,\dots,n$ to simplify the notation. We will make the following assumptions about the diffusion process $Z$ throughout this paper:

(A.1) $E\{Z(t+s)-Z(t)\,|\,Z(t)=x\} = s\mu(x) + o(s)$;
(A.2) $E\{(Z(t+s)-Z(t))^2\,|\,Z(t)=x\} = s\sigma^2(x) + o(s)$;
(A.3) $E\{|Z(t+s)-Z(t)|^3\,|\,Z(t)=x\} = s^{3/2}k(x) + o(s^{3/2})$ for a continuous function $k$.

The diffusion parameter $\mu(x)$ is frequently called the drift coefficient, and $\sigma^2(x)$ the diffusion coefficient. The following assumption gives explicit hypotheses on the diffusion parameters, and implies (A.3):

(A.4) The diffusion parameters $\mu$ and $\sigma^2$ are bounded (see Karatzas and Shreve, 1991, p. 367), i.e., for a constant $\Lambda$,

$$|\mu(x)| + \sigma^2(x) \le \Lambda, \qquad 0 \le t < \infty,$$

and the diffusion coefficient $\sigma^2$ is a strictly positive and uniformly bounded function that has a continuous and uniformly bounded derivative. The drift coefficient $\mu(x)$ is a uniformly continuous function.

We define $\pi_i(x,t)$ to be the conditional density that from the state value $x$ at time $t$ the sample path of $Z$ satisfies $Z(t_{i+1})=z_{i+1}$ at time $t_{i+1}$, and make the following assumption:

(B.1) $\pi_i(x,t)$ has two continuous derivatives with respect to $x$ and it is a differentiable function with respect to $t$.

The following assumption, (B.2), has explicit hypotheses on $\mu$ and $\sigma$ and implies (B.1):

(B.2) $\mu(x)$ and $\sigma(x)$ are bounded and uniformly Hölder-continuous functions (see Karatzas and Shreve, 1991, p. 368).

We now give the following definitions that will be needed in the proof of the main theorem in Section 2:

(i) Conditioned diffusion process. Let $Z_i^*$ be a process on $[t_i,t_{i+1}]$ such that

$$L(Z_i^*) = L(Z \,|\, Z(t_i)=z_i,\; Z(t_{i+1})=z_{i+1}), \qquad (1.1)$$

where $L(X)$ denotes the distribution law of a random variable $X$. The conditioned diffusion process is itself a diffusion with a time-varying drift. The transition density


of $Z_i^*$ for $t_i \le t < s \le t_{i+1}$ is given by

$$p_i^*(t,x;s,y)\,dy = P(y < Z(s) \le y+dy \,|\, Z(t_i)=z_i,\; Z(t)=x,\; Z(t_{i+1})=z_{i+1}) = \frac{p(t,x;s,y)\,\pi_i(y,s)}{\pi_i(x,t)}\,dy, \qquad (1.2)$$

where $p$ is the transition density of $Z$,

$$p(t,x;s,y)\,dy = P\{y < Z(s) \le y+dy \,|\, Z(t)=x\}.$$

By the Markov property of the process $Z$ we can write $p(t,x;s,y)$ as a function of $s-t$, $x$ and $y$, i.e. $p(s-t;x,y)$. The drift coefficient for $Z_i^*$, for $t_i \le t < t_{i+1}$, is

$$\mu_i^*(x,t) = \lim_{h\downarrow 0}\frac1h \int_{-\infty}^{+\infty} (y-x)\,p_i^*(t,x;t+h,y)\,dy. \qquad (1.3)$$

(ii) Prediction error. We define $\mathrm{PE}_n$, the error in predicting $\int_0^1 Z(t)\,dt$ given $Z^{(n)}$ with the optimal predictor, that is,

$$\mathrm{PE}_n = \int_0^1 \{Z(t) - \hat Z_n(t)\}\,dt.$$

Furthermore, $\mathrm{SPE}_n$ is the standardized prediction error given $Z^{(n)}$,

$$\mathrm{SPE}_n = \frac{\mathrm{PE}_n}{[\mathrm{var}\{\mathrm{PE}_n \,|\, Z^{(n)}\}]^{1/2}}.$$
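To make the observation scheme concrete, the following sketch (an illustration added here, not part of the original text) simulates one path of a diffusion on a fine grid by the Euler-Maruyama scheme and keeps the $n+1$ equally spaced observations $Z(t_0),\dots,Z(t_n)$; the drift and diffusion coefficient below are arbitrary hypothetical choices, used purely for illustration.

```python
import numpy as np

# Illustrative sketch (assumed model, not from the paper): simulate
# dZ = mu(Z) dt + sigma(Z) dW on [0, 1] by Euler-Maruyama, then keep
# the n + 1 equally spaced observations Z(t_0), ..., Z(t_n).
rng = np.random.default_rng(0)

def mu(x):
    # hypothetical drift coefficient mu(x)
    return -0.5 * x

def sigma(x):
    # hypothetical diffusion coefficient: sigma^2(x) = 1 + 0.25 x^2
    return np.sqrt(1.0 + 0.25 * x ** 2)

def simulate_path(n_fine=2 ** 14, z0=1.0):
    """Euler-Maruyama path of Z on a fine grid over [0, 1]."""
    dt = 1.0 / n_fine
    z = np.empty(n_fine + 1)
    z[0] = z0
    dw = rng.normal(scale=np.sqrt(dt), size=n_fine)
    for k in range(n_fine):
        z[k + 1] = z[k] + mu(z[k]) * dt + sigma(z[k]) * dw[k]
    return z

n = 128                           # number of observation intervals
z_fine = simulate_path()
step = (len(z_fine) - 1) // n     # fine steps per observation interval
z_obs = z_fine[::step]            # the observed vector Z^(n)
integral = np.trapz(z_fine, dx=1.0 / (len(z_fine) - 1))  # target integral
```

Here the fine-grid trapezoidal value stands in for the unobservable $\int_0^1 Z(t)\,dt$ in the later snippets.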

2. Asymptotic normality of the prediction error

In this section we prove that $\mathrm{SPE}_n$ is asymptotically $N(0,1)$.

2.1. The main theorem

Theorem 2.1. Consider predicting the integral of a diffusion process $Z$ over $[0,1]$, based on the observations $Z^{(n)}=(Z(t_{1n}),\dots,Z(t_{nn}))$, by $\int_0^1 \hat Z_n(t)\,dt$. Suppose
(i) the parameters of the diffusion process $Z$ satisfy relations (A.1)-(A.3);
(ii) the conditional probabilities $\pi_i$ satisfy condition (B.1).
Then, conditional on $Z^{(n)}$,

$$\mathrm{SPE}_n \xrightarrow{L} N(0,1) \qquad (2.1)$$

in probability.

In the following corollary, we reformulate the assumptions of Theorem 2.1 in terms of the diffusion parameters $\mu(x)$ and $\sigma^2(x)$.

Corollary 2.1. Theorem 2.1 still holds if (A.3) is replaced by (A.4) in (i) and (B.1) is replaced by (B.2) in (ii).


We present the following lemma, which will be used in the proof of Theorem 2.1. We assume that the diffusion process $Z$ satisfies assumptions (A.1)-(A.3) and (B.1).

Lemma 2.1. The conditioned diffusion process $Z_i^*$ has a nonhomogeneous infinitesimal mean:

$$\mu_i^*(z_i,t_i) = \mu(z_i) + \frac{\partial p(t_{i+1}-t_i;z_i,z_{i+1})/\partial z_i}{p(t_{i+1}-t_i;z_i,z_{i+1})}\,\sigma^2(z_i). \qquad (2.2)$$

The diffusion coefficient of the process $Z_i^*$ is the same as the diffusion coefficient of the process $Z$.

Proof of Lemma 2.1. Condition (B.1) permits the use of the following Taylor expansion for $\pi_i(z_i,t_i)$:

$$\pi_i(y,t_i+h) = \pi_i(z_i,t_i) + (y-z_i)\frac{\partial \pi_i}{\partial z_i}(z_i,t_i) + h\frac{\partial \pi_i}{\partial t_i}(z_i,t_i) + o(y-z_i) + o(h). \qquad (2.3)$$

Thus, substituting (1.2) into (1.3), we obtain

$$\mu_i^*(z_i,t_i) = \lim_{h\downarrow 0}\frac1h \int (y-z_i)\,p(h;z_i,y)\,dy + \frac{\partial \pi_i(z_i,t_i)/\partial z_i}{\pi_i(z_i,t_i)} \times \lim_{h\downarrow 0}\frac1h \int (y-z_i)^2\,p(h;z_i,y)\,dy; \qquad (2.4)$$

by (2.4) and assumptions (A.1), (A.2), and (B.1) we get (2.2). By using a similar argument, we get the following expression for $\sigma_i^*(z_i,t_i)$, the diffusion coefficient of $Z_i^*$:

$$\sigma_i^*(z_i,t_i) = \sigma(z_i),$$

and Lemma 2.1 follows.
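As an illustration (added here; not in the original text), in the Brownian case $\mu\equiv 0$, $\sigma\equiv 1$, the transition density $p(t;x,y)$ is the $N(x,t)$ density in $y$, and (2.2) recovers the familiar drift of the Brownian bridge:

```latex
% Brownian case: p(t; x, y) = (2 pi t)^{-1/2} exp(-(y - x)^2 / (2t)), so
\[
\frac{\partial p(t_{i+1}-t_i;z_i,z_{i+1})/\partial z_i}
     {p(t_{i+1}-t_i;z_i,z_{i+1})}
= \frac{z_{i+1}-z_i}{t_{i+1}-t_i},
\qquad
\mu_i^*(z_i,t_i) = 0 + \frac{z_{i+1}-z_i}{t_{i+1}-t_i}\cdot 1.
\]
% This is the drift of a Brownian bridge pinned at (t_i, z_i) and
% (t_{i+1}, z_{i+1}): the conditioned process pulls linearly toward
% the observed endpoint value z_{i+1}.
```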


Proof of Theorem 2.1. Given $Z^{(n)}$, $\mathrm{PE}_n$ is a sum of independent, mean zero random variables, so this is a triangular array situation. We study the conditional absolute moments of order 3 of the prediction error because, applying Lyapounov's condition (Billingsley, 1995) for $\delta=1$, we can prove the weak convergence of the distributions.

Suppose $Y_{1n},\dots,Y_{nn}$ are independent random variables such that $L(Y_{in}) = L(X_{in} \,|\, Z(t_{i-1})=z_{i-1},\; Z(t_i)=z_i)$, where $X_{in} = \int_{t_{i-1}}^{t_i} Z(t)\,dt$ for $i=1,\dots,n$. Thus,

$$L(Y_{1n}+\cdots+Y_{nn}) = L(X_{1n}+\cdots+X_{nn} \,|\, Z^{(n)}=z) = L\left(\int_0^1 Z(t)\,dt \,\Big|\, Z^{(n)}=z\right). \qquad (2.5)$$

Therefore, the problem is reduced to studying moments of $\int_{t_i}^{t_{i+1}} Z_i^*(t)\,dt$ given $Z_i^*$ at time $t_i$, where the process $Z_i^*(\cdot)$ is defined in (1.1). In Lemma 2.1 we showed that the variance of $Z_i^*$ is

$$\lim_{h\downarrow 0}\frac1h \mathrm{var}\{Z_i^*(t+h)-Z_i^*(t) \,|\, Z_i^*(t_i)=z_i\} = \sigma^2(z_i). \qquad (2.6)$$

The following higher-order infinitesimal moment relation is satisfied when (A.3) holds:

$$\lim_{h\downarrow 0}\frac1h E\{|Z_i^*(t+h)-Z_i^*(t)|^3 \,|\, Z_i^*(t_i)=z_i\} = 0. \qquad (2.7)$$

This means that

$$E\{|\mathrm{PE}_n|^2 \,|\, Z^{(n)}=z^{(n)}\} = \sum_{i=0}^{n-1} \mathrm{var}\left\{\int_{t_i}^{t_{i+1}} Z_i^*(t)\,dt \,\Big|\, Z_i^*(t_i)=z_i\right\} = \sum_{i=0}^{n-1} (t_{i+1}-t_i)^3\{\sigma^2(z_i)/12\} + o(n^{-2}). \qquad (2.8)$$

We get the first equality by applying (2.5), and the last expression by integrating the infinitesimal variance of $Z_i^*(t)$ when $t$ belongs to $[t_i,t_{i+1}]$, where by assumptions (A.1)-(A.3) the remainder term in (2.8) is uniform in $z^{(n)}$. Therefore, the order of the standard error for the prediction error, given $Z^{(n)}$, is $O_p(n^{-1})$ as $n\to\infty$. It follows from Eq. (2.8) that $n[\mathrm{var}\{\mathrm{PE}_n \,|\, Z^{(n)}\}]^{1/2}$ converges in probability to $L^{1/2}$, where $L = \frac{1}{12}\int_0^1 \sigma^2(Z(t))\,dt$, as $n\to\infty$.

Using the same argument as the one leading to Eq. (2.8), together with (2.7), we get that $E\{|\mathrm{PE}_n|^3 \,|\, Z^{(n)}\} = o_p(n^{-3})$. Let $s_n^2(Z^{(n)})$ be the conditional variance of $S_n(Z^{(n)}) = (X_{1n}+\cdots+X_{nn}) - E(X_{1n}+\cdots+X_{nn} \,|\, Z^{(n)})$ given $Z^{(n)}$. By Eqs. (2.7) and (2.8), we get

$$\lim_{n\to\infty} \sum_{k=1}^n \frac{1}{s_n^3(Z^{(n)})}\, E[|X_{kn}-E(X_{kn}\,|\,Z^{(n)})|^3 \,|\, Z^{(n)}] = 0 \quad\text{in probability},$$

which is Lyapounov's condition for $\delta=1$. Since Lyapounov's condition holds, $S_n(Z^{(n)})/s_n(Z^{(n)}) \xrightarrow{L} N(0,1)$. If we write the result in terms of the diffusion process $Z$, then relation (2.1) holds and the conditional prediction error is asymptotically normal.
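As a sanity check on the constant $1/12$ in (2.8) (added here for illustration; it covers only the constant-coefficient case, in which $Z_i^*$ is a Brownian bridge on an interval of length $h=t_{i+1}-t_i$):

```latex
\[
\operatorname{cov}\{Z_i^*(t_i+s),\,Z_i^*(t_i+t)\}
  = \sigma^2\Bigl(s\wedge t-\frac{st}{h}\Bigr),\qquad 0\le s,t\le h,
\]
\[
\operatorname{var}\Bigl\{\int_0^h Z_i^*(t_i+t)\,dt\Bigr\}
  = \sigma^2\!\int_0^h\!\!\int_0^h\Bigl(s\wedge t-\frac{st}{h}\Bigr)\,ds\,dt
  = \sigma^2\Bigl(\frac{h^3}{3}-\frac{h^3}{4}\Bigr)
  = \frac{\sigma^2 h^3}{12}.
\]
```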

3. Approximating the optimal predictor

We present the following approximation to the optimal predictor that yields an asymptotically optimal predictor:

$$I_n(Z^{(n)}) = \sum_{i=1}^n (t_{in}-t_{i-1,n})Z_i + \frac{1}{4n}\sum_{i=1}^n (t_{in}-t_{i-1,n})(\sigma^2)'(Z_i). \qquad (3.1)$$

Here, we write $Z_i$ to denote $Z(t_{in})$ to simplify the notation. Approximation (3.1) is a linear function of $Z^{(n)}$, plus a nonlinear term in $Z^{(n)}$ that could be thought of as the adjustment for the conditional bias of the BLP. We define $\widehat{\mathrm{PE}}_n = \int_0^1 Z(t)\,dt - I_n(Z^{(n)})$.
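As a concrete sketch (an illustration added here, not code from the paper), the following Python function implements (3.1) for equally spaced observations; the derivative `d_sigma2` continues the hypothetical diffusion coefficient of the simulation sketch in Section 1.1 and is an assumption of that example.

```python
import numpy as np

def d_sigma2(x):
    # Derivative (sigma^2)'(x) of the illustrative diffusion coefficient
    # sigma^2(x) = 1 + 0.25 x^2 used earlier, so (sigma^2)'(x) = 0.5 x.
    return 0.5 * x

def predictor_In(z_obs):
    """Approximation (3.1) to the optimal predictor of the integral.

    z_obs holds Z(t_0), ..., Z(t_n) at equally spaced points, so every
    spacing t_in - t_{i-1,n} equals 1/n.
    """
    n = len(z_obs) - 1
    dt = 1.0 / n
    zi = np.asarray(z_obs[1:])          # Z_1, ..., Z_n
    linear_part = dt * zi.sum()         # the BLP-like Riemann-sum term
    bias_adjust = dt * d_sigma2(zi).sum() / (4.0 * n)
    return linear_part + bias_adjust

# Example, with z_obs from the simulation sketch in Section 1.1:
# I_n = predictor_In(z_obs)
```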


In this section we show, in Propositions 3.1 and 3.2, that the error $\widehat{\mathrm{PE}}_n - \mathrm{PE}_n$ in approximating $\int_0^1 \hat Z_n(t)\,dt$ with $I_n(Z^{(n)})$ is negligible compared to the conditional standard deviation of the prediction error $\mathrm{PE}_n$. Thus, in Theorem 3.1 we prove that $I_n(Z^{(n)})$, the proposed nonlinear estimate of $\int_0^1 Z(t)\,dt$, has the same asymptotic behavior as the optimal predictor.

Theorem 3.1. Under assumptions (A.1)-(A.3) and (B.1), conditional on $Z^{(n)}$, as $n\to\infty$,

$$\frac{\widehat{\mathrm{PE}}_n}{[\mathrm{var}\{\mathrm{PE}_n \,|\, Z^{(n)}\}]^{1/2}} \xrightarrow{L} N(0,1)$$

in probability.

We now present two propositions that will be used in the proof of Theorem 3.1.

Proposition 3.1. Under conditions (A.1)-(A.3) and (B.1), as $n\to\infty$,

$$\widehat{\mathrm{PE}}_n - \mathrm{PE}_n = \int_0^1 \hat Z_n(t)\,dt - I_n(Z^{(n)}) = o_p(n^{-3/4}).$$

Proposition 3.2. Under conditions (A.1)-(A.3) and (B.1), as $n\to\infty$,

$$\mathrm{var}\{\mathrm{PE}_n \,|\, Z^{(n)}\} = O_p(n^{-2}). \qquad (3.2)$$

Proof of Proposition 3.1. By the Markov property of the diffusion process $Z$,

$$\int_0^1 \hat Z_n(t)\,dt = \sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}} E(Z(t)\,|\,Z(t_i),Z(t_{i+1}))\,dt.$$

By letting $z=(z_0,\dots,z_n)$ be the observed values of $Z$ at times $t_0,\dots,t_n$, we get

$$\int_0^1 E(Z(t)\,|\,Z^{(n)}=z)\,dt = \sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}} E(Z(t)\,|\,Z(t_i)=z_i,\; Z(t_{i+1})=z_{i+1})\,dt. \qquad (3.3)$$

We approximate $E(Z(t)\,|\,Z(t_i)=z_i,\; Z(t_{i+1})=z_{i+1})$ by obtaining first an expression for the transition density of $Z_i^*$, which is a function of $p$, the transition density of $Z$, and $\pi_i$. Let $P$ be the probability measure of the diffusion process $Z$, and $\bar P$ a probability measure such that $P$ and $\bar P$ are equivalent, and under $\bar P$, $Z$ has drift coefficient

$$\bar\mu(x) = -\sigma^2(x)g''(x)\{2g'(x)\}^{-1}. \qquad (3.4)$$

It is routine to verify that the process $B=g(Z)$, for $g(z)=\int_{z_0}^z \sigma(y)^{-1}\,dy$, is a Brownian motion under $\bar P$. Therefore, by a change of variables we get the following backward equation for $B$ in terms of $p$:

$$\frac{\partial p(t;b,b_{i+1})}{\partial t} = \frac12\,\frac{\partial^2 p(t;b,b_{i+1})}{\partial b^2}. \qquad (3.5)$$


The solution is a Gaussian density. By a change of variables again, we can express the solution in terms of the process $Z$ (under $\bar P$) and obtain

$$\frac{\partial p(t_{i+1}-t_i;z_i,z_{i+1})/\partial z_i}{p(t_{i+1}-t_i;z_i,z_{i+1})} = -\frac32\,(t_{i+1}-t_i)^{-1}\sigma(z_i)\int_{z_i}^{z_{i+1}} \frac{1}{\sigma(y)}\,dy \qquad (3.6)$$

and $\bar\mu(z_i) = \sigma'(z_i)\sigma(z_i)/2$.

With respect to $\pi_i(x,t)$, assumption (B.1) postulates sufficient regularity for $\pi_i(x,t)$ to permit the use of the following Taylor expansion:

$$\pi_i(y,t+h) = \pi_i(x,t) + (y-x)\frac{\partial \pi_i}{\partial x}(x,t) + \int_t^{t+h} (t+h-\tau)\frac{\partial \pi_i}{\partial \tau}(x,\tau)\,d\tau + \int_x^y \frac{(y-\xi)^2}{2}\,\frac{\partial^2 \pi_i}{\partial x^2}(\xi,t)\,d\xi, \qquad (3.7)$$

where $\xi \in (x,y)$. We define $\hat\pi_i(y,t+h)$ as

$$\hat\pi_i(y,t+h) = \pi_i(x,t) + (y-x)\frac{\partial \pi_i}{\partial x}(x,t) + \int_t^{t+h} (t+h-\tau)\frac{\partial \pi_i}{\partial \tau}(x,\tau)\,d\tau. \qquad (3.8)$$

By (3.6) and the definition of $p_i^*$ in (1.2), when the probability $\pi_i$ is substituted by $\hat\pi_i$ in the formula for $p_i^*$, we get (under $\bar P$)

$$\int_{t_i}^{t_{i+1}} E(Z(t)\,|\,Z(t_i)=z_i,\; Z(t_{i+1})=z_{i+1})\,dt = (t_{i+1}-t_i)Z_{i+1} - \frac{\sigma'(Z_i)\sigma(Z_i)}{2}(t_{i+1}-t_i)^2 + R_i,$$

where $R_i$ is the resulting residual term (due to the substitution of $\pi_i$ by $\hat\pi_i$). Then we get that (under $\bar P$)

$$\int_0^1 \hat Z_n(t)\,dt = \sum_{i=1}^{n-1}(t_{i+1}-t_i)Z_{i+1} - \sum_{i=1}^{n-1}\frac{\sigma'(Z_i)\sigma(Z_i)}{2}(t_{i+1}-t_i)^2 + \sum_{i=1}^{n-1} R_i = I_n(Z^{(n)}) + \sum_{i=1}^{n-1} R_i. \qquad (3.9)$$

By the definition of $p_i^*(t,x;s,y)$ in (1.2), (3.6) and the Taylor expansion (3.7), we obtain

$$R_i = \int_{t_i}^{t_{i+1}} \frac{\partial^2 \pi_i(x,t)/\partial x^2}{\pi_i(x,t)} \int_{-\infty}^{+\infty} (y-x)^3\,p(t;x,y)\,dy\,dt + o(n^{-2}).$$

By a change of variables again, we can get an explicit expression for the second derivative of $\pi_i(x,t)$ with respect to $x$, using the same argument as in (3.5). By assumption (A.3), $R_i$ is of order $O_p(n^{-5/2})$. Thus, we obtain

$$\sum_{i=1}^{n-1} R_i = O_p(n^{-3/2}).$$

Then we get that (under $\bar P$)

$$\int_0^1 \hat Z_n(t)\,dt = I_n(Z^{(n)}) + O_p(n^{-3/2}). \qquad (3.10)$$


Thus, under the probability measure $\bar P$,

$$n^{3/4}\left(\int_0^1 \hat Z_n(t)\,dt - \sum_{i=1}^{n-1}(t_{i+1}-t_i)Z_{i+1} + \sum_{i=1}^{n-1}\frac{\sigma'(Z_i)\sigma(Z_i)}{2}(t_{i+1}-t_i)^2\right) = n^{3/4}(\widehat{\mathrm{PE}}_n - \mathrm{PE}_n) \xrightarrow{P} 0. \qquad (3.11)$$

It is a straightforward consequence of the equivalence of $P$ and $\bar P$ that (3.11) holds under $P$ as well. In other words,

$$n^{3/4}(\widehat{\mathrm{PE}}_n - \mathrm{PE}_n) \xrightarrow{P} 0.$$

Thus, if we approximate the optimal predictor of the integral, $\int_0^1 \hat Z_n(t)\,dt$, with $I_n(Z^{(n)})$, the error in this approximation, $\widehat{\mathrm{PE}}_n - \mathrm{PE}_n$, is $o_p(n^{-3/4})$. The next proposition shows that this error in approximating $\int_0^1 \hat Z_n(t)\,dt$ by using $I_n(Z^{(n)})$ is negligible compared to the standard deviation of $\mathrm{PE}_n$.

Proof of Proposition 3.2. By the Markov property of the diffusion process and using the same argument as in (3.3), the variance of the prediction error, when conditions (A.1)-(A.3) and (B.1) are satisfied, can be written as

$$\sum_{i=0}^{n-1} \mathrm{var}\left\{\int_{t_i}^{t_{i+1}} Z(t)\,dt \,\Big|\, Z(t_i),Z(t_{i+1})\right\} = \sum_{i=0}^{n-1} \int_{t_i}^{t_{i+1}}\!\!\int_{t_i}^{t_{i+1}} \mathrm{cov}\{Z_i^*(x),Z_i^*(y) \,|\, Z_i^*(t_i)\}\,dx\,dy = \sum_{i=0}^{n-1} (t_{i+1}-t_i)^3\{\sigma^2(Z(t_i))/12\} + o_p(n^{-2}), \qquad (3.12)$$

where the process $Z_i^*$ is defined in (1.1). Then the order of the standard error for the conditional prediction error is $O_p(n^{-1})$.

Proof of Theorem 3.1. We proved in Theorem 2.1 that the standardized prediction error $\mathrm{SPE}_n$ is, conditional on $Z^{(n)}$, asymptotically $N(0,1)$. By Propositions 3.1 and 3.2, $\widehat{\mathrm{PE}}_n - \mathrm{PE}_n$ is negligible compared to the conditional standard deviation of $\mathrm{PE}_n$, so Theorem 3.1 follows.

4. The variance of the prediction error

In Proposition 3.2 we proved that the order of convergence to zero of the conditional variance of the prediction error is $O_p(n^{-2})$. Hence the error in approximating $\int_0^1 \hat Z_n(t)\,dt$ by $I_n(Z^{(n)})$ converges to zero faster than the conditional standard deviation of $\mathrm{PE}_n$. By Eq. (3.12), the variance of the prediction error, when conditions (A.1)-(A.3) and (B.1) are satisfied, can be written as

$$\sum_{i=0}^{n-1} (t_{i+1}-t_i)^3\{\sigma^2(Z(t_i))/12\} + o_p(n^{-2}). \qquad (4.1)$$

Thus, when the variance of the prediction error is estimated using $\sum_{i=0}^{n-1}(t_{i+1}-t_i)^3\{\sigma^2(Z(t_i))/12\}$, the resulting standardized prediction error has the same asymptotic behavior as when $\mathrm{var}\{\mathrm{PE}_n \,|\, Z^{(n)}\}$ is known.
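In practice this yields an approximate prediction interval. The sketch below (an added illustration under the hypothetical model of the earlier snippets; `sigma`, `simulate_path`, and `predictor_In` are assumed to come from those snippets) computes the plug-in variance (4.1) and, as an empirical companion to Theorem 3.1, Monte Carlo standardized errors that should look approximately $N(0,1)$:

```python
import numpy as np

def plugin_variance(z_obs):
    """Plug-in estimate (4.1) of var{PE_n | Z^(n)} for equal spacing."""
    n = len(z_obs) - 1
    dt = 1.0 / n
    zi = np.asarray(z_obs[:-1])        # Z(t_0), ..., Z(t_{n-1})
    return (dt ** 3 * sigma(zi) ** 2 / 12.0).sum()

def standardized_errors(n_paths=500, n=128):
    """Monte Carlo standardized prediction errors, as in Theorem 3.1."""
    out = np.empty(n_paths)
    for j in range(n_paths):
        z_fine = simulate_path()
        step = (len(z_fine) - 1) // n
        z_obs = z_fine[::step]
        integral = np.trapz(z_fine, dx=1.0 / (len(z_fine) - 1))
        err = integral - predictor_In(z_obs)
        out[j] = err / np.sqrt(plugin_variance(z_obs))
    return out

# Approximate 95% prediction interval for the integral, given z_obs:
#   I_n = predictor_In(z_obs); se = np.sqrt(plugin_variance(z_obs))
#   interval: (I_n - 1.96 * se, I_n + 1.96 * se)
errs = standardized_errors()
print(errs.mean(), errs.std())         # should be near 0 and 1
```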

5. Conclusions and final remarks

In this paper we have presented the asymptotic normality of the prediction error for predicting the integral of a diffusion process with the integral of the conditional expectation of the process, the optimal predictor. We have also obtained a simple asymptotically optimal approximation for the predictor and a simple asymptotically valid approximation for the variance of the prediction error, assuming that the diffusion coefficient was a known function. To assess the accuracy of the proposed approximations to the integral of a diffusion process $Z$ and to the variance of the prediction error in finite samples, Fuentes (1999) performed several simulation experiments, assuming that the diffusion coefficient was a known function. The results from the simulations suggest that the prediction intervals for $\int_0^1 Z(t)\,dt$, obtained using the nonlinear predictor and the approximate conditional standard prediction error proposed in this paper, have better coverage than the ones obtained using the best linear predictor.

Acknowledgements

The author is grateful to Michael L. Stein for his many valuable comments, insights and guidance in obtaining the results in this paper. This research was supported in part by National Science Foundation grants DMS 95-04470 and DMS 97-09696.

References

Billingsley, P., 1995. Probability and Measure, 3rd Edition. Wiley, New York.
Chakravarti, P.C., 1970. Integrals and Sums: Some New Formulae for their Numerical Evaluation. Oxford University Press, New York.
Davis, P.J., Rabinowitz, P., 1984. Numerical Integration, 2nd Edition. Academic Press, Orlando.
Eubank, R.L., Smith, P.L., Smith, P.W., 1981. Uniqueness and eventual uniqueness of optimal designs in some time series models. Ann. Statist. 9, 486-493.
Fuentes, M., 1999. Asymptotic normality of conditional integrals of diffusion processes. Institute of Statistics Mimeo Series #2520, North Carolina State University, Raleigh.
Ghizzetti, A., Ossicini, A., 1970. Quadrature Formulae. Academic Press, New York.
Karatzas, I., Shreve, S.E., 1991. Brownian Motion and Stochastic Calculus, 2nd Edition. Springer, New York.
Marnevskaya, L.A., Jarovich, L.A., 1984. Laplace method for Riemann integrals of random fields. Vesci Akademii Nauk Byelorussia SSR, Seryja Fizika-Matematycnyh Nauk 4, 9-12.
Matheron, G., 1985. Change of support for diffusion-type random functions. J. Internat. Assoc. Math. Geol. 17, 137-165.
Pitt, L.D., Robeva, R., Wang, D.Y., 1995. An error analysis for the numerical calculation of certain random integrals: Part 1. Ann. Appl. Probab. 5, 171-197.
Ritter, K., 1995. Average Case Analysis of Numerical Problems. University of Erlangen.
Sacks, J., Ylvisaker, D., 1966. Designs for regression problems with correlated error. Ann. Math. Statist. 37, 66-89.
Sacks, J., Ylvisaker, D., 1968. Designs for regression problems with correlated error: many parameters. Ann. Math. Statist. 39, 46-69.
Sacks, J., Ylvisaker, D., 1970. Designs for regression problems with correlated error III. Ann. Math. Statist. 41, 2057-2074.
Stein, M.L., 1987. Gaussian approximations to conditional distributions for multi-Gaussian processes. Math. Geol. 19, 387-405.
Stein, M.L., 1993. Asymptotic properties of centered systematic sampling for predicting integrals of spatial processes. Ann. Appl. Probab. 3, 874-880.
Stein, M.L., 1995. Predicting integrals of stochastic processes. Ann. Appl. Probab. 5, 158-170.
Wahba, G., 1971. On the regression problem of Sacks and Ylvisaker. Ann. Math. Statist. 42, 1035-1053.
Wahba, G., 1974. Regression design for some equivalence classes of kernels. Ann. Statist. 2, 925-934.