Statistics & Probability Letters 45 (1999) 159–165
On approximation of Gaussian integrals

A.M. Nikouline

St. Petersburg State Technical University; Université Victor Segalen Bordeaux 2, U.F.R. M.I.2S, 146 rue Léo-Saignat, 33076 Bordeaux Cedex, France

Received November 1998; received in revised form January 1999
Abstract

In this article we compare the difference between an absolute moment of any Gaussian measure on the Hilbert space and the corresponding moment of its projection onto some finite-dimensional subspace. © 1999 Elsevier Science B.V. All rights reserved.

MSC: 60F10; 60F15; 60E15

Keywords: Gaussian measure; Hilbert space; Banach space; Infinite-dimensional Gaussian integral
Let $X = L_2$ be a separable Hilbert space and $\mu$ be a Gaussian measure on $X$. Suppose that the mean value of the measure $\mu$ is equal to zero. The covariance operator $K$ is a symmetric positive kernel operator (see Gikhman and Skorohod, 1971): its eigenvectors form an orthogonal basis, its eigenvalues $\lambda_k$ are positive and
$$\sum_{k=1}^{\infty}\lambda_k < \infty, \qquad \lambda_k > 0,\ k\in\mathbb N.$$
It is natural to choose the orthonormal basis $\{e_k\}_{k=1}^{\infty}$ of the space $X$ so that $e_k$, $k=1,2,\ldots$, are the eigenvectors of $K$, enumerated in decreasing order of the corresponding eigenvalues: $\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n\ge\cdots$. Note that
$$\Lambda(n) = \sum_{k=n+1}^{\infty}\lambda_k \to 0, \qquad n\to\infty.$$
In the case $\operatorname{supp}\mu = X$, our measure $\mu$ is a product measure (see in detail in Vakhania et al., 1985). Consider the system of projectors $\{\pi_n\}_{n=1}^{\infty}$, $\pi_n : X \to X_n = \operatorname{Span}\{e_1,e_2,\ldots,e_n\}$,
where
$$\pi_n(x) = \pi_n\!\left(\sum_{k=1}^{\infty}x_k e_k\right) = \sum_{k=1}^{n}x_k e_k, \qquad x\in X,\quad x_k=(x,e_k),$$
and $(\cdot,\cdot)$ is the scalar product in $X$. Let $\|\cdot\|$ be the Hilbert norm in $X$, $\|\cdot\|_n$ the semi-norm in $X_n$ generated by $\pi_n$, $n\in\mathbb N$, and
$$h(x) = \|x\|^{p}, \qquad h_n(x) = \|\pi_n(x)\|^{p} = \|x\|_n^{p}, \qquad p\ge 1,\ x\in X.$$
Theorem 1. In our situation, for any $p\ge 1$ there exist finite constants $C_{1,p,\mu}<\infty$, $C_{2,p,\mu}<\infty$ such that
$$C_{1,p,\mu}\,\Lambda(n) \;\le\; \Delta_{n,p} = \int_X h(x)\,d\mu(x) - \int_X h_n(x)\,d\mu(x) \;\le\; C_{2,p,\mu}\,\Lambda(n).$$
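Although the proof below is entirely analytic, the rate stated in Theorem 1 is easy to probe numerically. The following Python sketch is an editorial illustration only: the eigenvalue sequence $\lambda_k=k^{-2}$, the truncation level, the value $p=3$ and the sample size are assumptions made for this example, not choices from the paper.

```python
# Monte Carlo sketch (illustrative only): compare Delta_{n,p} with the
# eigenvalue tail Lambda(n) for a diagonal Gaussian measure on l_2.
# Assumptions of this sketch: lambda_k = k**(-2), truncation at K coordinates, p = 3.
import numpy as np

rng = np.random.default_rng(0)
K, p, n_samples = 500, 3.0, 20000
lam = 1.0 / np.arange(1, K + 1) ** 2                      # eigenvalues, decreasing
x = rng.standard_normal((n_samples, K)) * np.sqrt(lam)    # coordinates x_k ~ N(0, lambda_k)

norm_p = np.sum(x ** 2, axis=1) ** (p / 2)                # ||x||^p
for n in (5, 10, 20, 40):
    norm_np = np.sum(x[:, :n] ** 2, axis=1) ** (p / 2)    # ||x||_n^p
    delta_np = np.mean(norm_p - norm_np)                  # Monte Carlo estimate of Delta_{n,p}
    tail = lam[n:].sum()                                  # Lambda(n), up to truncation
    print(f"n={n:3d}  Delta≈{delta_np:.4f}  Lambda(n)≈{tail:.4f}  ratio={delta_np/tail:.2f}")
```

The ratio in the last column staying bounded away from $0$ and $\infty$ is exactly the two-sided bound of Theorem 1 in this toy model.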
The proof is based on the following evident proposition.

Proposition 1. For all $x, y > 0$ and all $p\ge 1$,
$$\tfrac12\,y\,(x+y)^{p-1} \;\le\; (x+y)^{p} - x^{p} \;\le\; p\,y\,(x+y)^{p-1}.$$
It can be observed that
$$\|x\|^{p} - \|x\|_n^{p} = \frac{\|x\|^{2p}-\|x\|_n^{2p}}{\|x\|^{p}+\|x\|_n^{p}}.$$
So we have (see Proposition 1)
$$\tfrac14\,l_n(x)\,\|x\|^{p-2} \;\le\; \|x\|^{p} - \|x\|_n^{p} \;\le\; p\,l_n(x)\,\|x\|^{p-2}, \qquad p\ge 1,$$
where
$$l_n(x) = \sum_{k=n+1}^{\infty}x_k^{2}.$$
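The step from Proposition 1 to the last display is short but worth spelling out; the following LaTeX fragment is an editorial sketch of the substitution, not text from the original.

```latex
% Editorial sketch (not from the original): how Proposition 1 gives the last
% display. Apply it with x = \|x\|_n^2, y = l_n(x), so that x + y = \|x\|^2:
\begin{gather*}
\tfrac12\, l_n(x)\,\|x\|^{2(p-1)}
   \;\le\; \|x\|^{2p} - \|x\|_n^{2p}
   \;\le\; p\, l_n(x)\,\|x\|^{2(p-1)},\\
\|x\|^{p} \;\le\; \|x\|^{p} + \|x\|_n^{p} \;\le\; 2\,\|x\|^{p}.
\end{gather*}
% Dividing the first chain by the second (via the displayed identity for
% \|x\|^p - \|x\|_n^p) yields the bounds with constants 1/4 and p.
```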
Therefore Theorem 1 can be deduced from the following simple propositions.

Proposition 2. For any Gaussian measure $\mu$ such that the covariance operator $K$ is not finite-dimensional, for any $m>0$
$$\mu(\|x\|\le\varepsilon) = O(\varepsilon^{m}), \qquad \varepsilon\to 0.$$

Proposition 3. For any $p>0$ there exist $K_p<\infty$, $C_p<\infty$ such that
$$K_p\,\Lambda(n) \;\le\; \int_X l_n(x)\,\|x\|^{p}\,d\mu(x) \;\le\; C_p\,\Lambda(n);$$
for any $p\in[0,1]$ there exist $\tilde K_p<\infty$, $\tilde C_p<\infty$ such that
$$\tilde K_p\,\Lambda(n) \;\le\; \int_X \frac{l_n(x)}{\|x\|^{p}}\,d\mu(x) \;\le\; \tilde C_p\,\Lambda(n).$$
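For orientation (an editorial aside, not in the original), the case $p=0$ of the first inequality in Proposition 3 is in fact an identity, since each coordinate $x_k=(x,e_k)$ is centred Gaussian with variance $\lambda_k$ under $\mu$:

```latex
% Editorial sketch (not from the original): the p = 0 case of Proposition 3,
% using that x_k = (x, e_k) has variance (K e_k, e_k) = lambda_k under mu.
\[
  \int_X l_n(x)\,d\mu(x)
  \;=\; \sum_{k=n+1}^{\infty} \int_X x_k^{2}\,d\mu(x)
  \;=\; \sum_{k=n+1}^{\infty} \lambda_k
  \;=\; \Lambda(n).
\]
% For p \neq 0 the same tail appears, up to constants depending on p and mu.
```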
We show here that the change of the basis of the Hilbert space $X$ does not improve the order of $\Delta_{n,p}$. Let now $\varphi=\{\varphi_n\}_{n=1}^{\infty}$ be a new orthonormal basis of our space $X$. Define $\Delta^{(\varphi)}_{n,p}$ by the following formula:
$$\Delta^{(\varphi)}_{n,p} = \left|\int_X \|x\|^{p}\,d\mu(x) - \int_X \|x\|^{p}_{n,\varphi}\,d\mu(x)\right|,$$
where
$$\|x\|^{p}_{n,\varphi} = \left\{\sum_{k=1}^{n}(x,\varphi_k)^{2}\right\}^{p/2},$$
and name $X_n^{(\varphi)} = \operatorname{Span}\{\varphi_1,\ldots,\varphi_n\}$. Note that
$$\Delta^{(\varphi)}_{n,p} = \int_X \|x\|^{p}\,d\mu(x) - \int_X \|x\|^{p}_{n,\varphi}\,d\mu(x),$$
and that $\Delta^{(e)}_{n,p}$, $\|x\|_{n,e}$ and $X_n^{(e)}$ are equal to $\Delta_{n,p}$, $\|x\|_n$ and $X_n$ in Theorem 1, respectively (see Nikulin, 1999).
Lemma. For any orthonormal basis $\varphi=\{\varphi_n\}_{n=1}^{\infty}$ and any $n\in\mathbb N$,
$$\Delta^{(e)}_{n,2} \;\le\; \Delta^{(\varphi)}_{n,2}.$$
Proof of Lemma. Since we have two orthonormal bases, there exists an orthogonal matrix $C=\{c_{ij}\}_{i,j=1}^{\infty}$ such that
$$\varphi_i = \sum_{j=1}^{\infty}c_{ij}e_j \quad \forall i\in\mathbb N, \qquad \sum_{j=1}^{\infty}c_{ij}^{2} = 1 \quad \forall i\in\mathbb N, \qquad \sum_{i=1}^{\infty}c_{ij}^{2} = 1 \quad \forall j\in\mathbb N,$$
and it is obvious that $\|x\|_e = \|x\|_\varphi = \|x\|$. Our statement means that
$$\int_X \|x\|^{2}_{n,e}\,d\mu(x) \;\ge\; \int_X \|x\|^{2}_{n,\varphi}\,d\mu(x) \qquad \forall n\in\mathbb N.$$
Let $x_k=(x,e_k)$, so that
$$(x,\varphi_k) = \left(x,\;\sum_{i=1}^{\infty}c_{ki}e_i\right) = \sum_{i=1}^{\infty}c_{ki}x_i,$$
and furthermore,
$$\int_X \|x\|^{2}_{n,e}\,d\mu(x) = \sum_{k=1}^{n}\lambda_k,$$
$$\int_X \|x\|^{2}_{n,\varphi}\,d\mu(x) = \sum_{k=1}^{n}\int_X\left\{\sum_{i=1}^{\infty}c_{ki}x_i\right\}^{2}d\mu(x) = \sum_{k=1}^{n}\sum_{i=1}^{\infty}c_{ki}^{2}\lambda_i = \sum_{i=1}^{\infty}\lambda_i\left\{\sum_{k=1}^{n}c_{ki}^{2}\right\}.$$
Therefore, it is sufficient to show that
$$\sum_{i=1}^{\infty}\lambda_i\left(\sum_{k=1}^{n}c_{ki}^{2}\right) \;\le\; \sum_{i=1}^{n}\lambda_i.$$
This is equivalent to the inequality
$$\sum_{i=n+1}^{\infty}\lambda_i\left(\sum_{k=1}^{n}c_{ki}^{2}\right) \;\le\; \sum_{i=1}^{n}\lambda_i\left(\sum_{k=n+1}^{\infty}c_{ki}^{2}\right).$$
Denote
$$a_i = \sum_{k=1}^{n}c_{ki}^{2}, \qquad i=n+1,n+2,\ldots,$$
and
$$b_i = \sum_{k=n+1}^{\infty}c_{ki}^{2}, \qquad i=1,2,\ldots,n.$$
We observe that for all $i,k,n\in\mathbb N$
$$\sum_{k=1}^{n}c_{ki}^{2}\le 1, \qquad \sum_{i=1}^{\infty}\sum_{k=1}^{n}c_{ki}^{2} = \sum_{k=1}^{n}\sum_{i=1}^{\infty}c_{ki}^{2} = n,$$
and
$$\sum_{i=n+1}^{\infty}a_i = \sum_{i=1}^{n}b_i.$$
We can decompose $a_i$ for every $i$ in the following way:
$$a_i = a_i^{(1)} + a_i^{(2)} + \cdots + a_i^{(n)}, \qquad a_i^{(k)}\ge 0,\quad k=1,\ldots,n,\quad i=n+1,n+2,\ldots,$$
$$\sum_{i=n+1}^{\infty}a_i^{(k)} = b_k, \qquad k=1,2,\ldots,n.$$
Indeed, if $a_{n+1}<b_1$, then we set
$$a_{n+1}^{(1)} = a_{n+1}, \qquad a_{n+1}^{(k)} = 0,\quad k=2,\ldots,n,$$
else $a_{n+1}^{(1)} = b_1$. If $a_{n+1}-a_{n+1}^{(1)}<b_2$, then we put
$$a_{n+1}^{(2)} = a_{n+1}-a_{n+1}^{(1)} \quad\text{and}\quad a_{n+1}^{(k)} = 0,\quad k=3,\ldots,n,$$
else $a_{n+1}^{(2)} = b_2$, etc. As
$$a_{n+1} \le \sum_{k=1}^{n}b_k,$$
this procedure is justified and can be continued for $a_{n+2},a_{n+3},a_{n+4},\ldots$. Since
$$\sum_{i=n+1}^{\infty}a_i = \sum_{i=1}^{n}b_i,$$
this procedure delivers the desired decompositions. One has
$$\sum_{i=n+1}^{\infty}\lambda_i a_i = \sum_{i=n+1}^{\infty}\lambda_i\sum_{k=1}^{n}a_i^{(k)} = \sum_{k=1}^{n}\sum_{i=n+1}^{\infty}\lambda_i a_i^{(k)} \;\le\; \sum_{k=1}^{n}\sum_{i=n+1}^{\infty}\lambda_k a_i^{(k)} = \sum_{k=1}^{n}\lambda_k\left\{\sum_{i=n+1}^{\infty}a_i^{(k)}\right\} = \sum_{k=1}^{n}\lambda_k b_k$$
(since $\lambda_i\le\lambda_k$ whenever $i\ge n+1$ and $k\le n$), and we obtain our result.
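Before moving on, here is a small numerical sanity check of the inequality just proved (an editorial addition; the dimension, spectrum and seed are arbitrary assumptions): for a random orthogonal matrix $C$ and decreasing eigenvalues, the weighted sum $\sum_i\lambda_i\sum_{k\le n}c_{ki}^{2}$ indeed does not exceed $\sum_{i\le n}\lambda_i$.

```python
# Finite-dimensional sanity check of the Lemma (illustrative only):
# for a random orthogonal matrix C and decreasing eigenvalues lambda_i, verify
#   sum_i lambda_i * sum_{k<=n} c_{ki}^2  <=  sum_{i<=n} lambda_i.
import numpy as np

rng = np.random.default_rng(1)
N = 200                                            # truncation dimension (assumption)
lam = 1.0 / np.arange(1, N + 1) ** 2               # decreasing eigenvalues (assumption)
C, _ = np.linalg.qr(rng.standard_normal((N, N)))   # random orthogonal matrix, rows c_k

for n in (1, 5, 20, 100):
    lhs = np.sum(lam * np.sum(C[:n, :] ** 2, axis=0))   # sum_i lambda_i * sum_{k<=n} c_{ki}^2
    rhs = lam[:n].sum()                                  # sum_{i<=n} lambda_i
    print(f"n={n:3d}  lhs={lhs:.6f}  rhs={rhs:.6f}  holds: {lhs <= rhs + 1e-12}")
```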
Let $\{\pi_{n,\varphi}\}_{n\in\mathbb N}$ be the system of projections such that $\pi_{n,\varphi}: X\to X_n^{(\varphi)}$,
$$\pi_{n,\varphi}(x) = \sum_{k=1}^{n}(x,\varphi_k)\varphi_k, \qquad n\in\mathbb N,$$
$$l_{n,\varphi}(x) = \sum_{k=n+1}^{\infty}(x,\varphi_k)^{2} = \|x\|^{2} - \|x\|^{2}_{n,\varphi} = \|x-\pi_{n,\varphi}x\|^{2}.$$
Note that $\{\pi_{n,e}\}_{n\in\mathbb N}$ and $l_{n,e}(x)$ are equal to $\{\pi_n\}_{n\in\mathbb N}$ and $l_n(x)$ from Theorem 1. Now, we prove the main result.

Theorem 2. Under our conditions, for any $p\ge 1$, any orthonormal basis $\varphi$ and any $n\in\mathbb N$ there exists $C = C_{p,\mu}<\infty$ such that
$$\Delta^{(e)}_{n,p} \;\le\; C_{p,\mu}\,\Delta^{(\varphi)}_{n,p}.$$
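For intuition (again an editorial aside, not part of the paper), one can repeat the earlier Monte Carlo experiment with a randomly rotated orthonormal basis $\varphi$ and compare $\Delta^{(e)}_{n,p}$ with $\Delta^{(\varphi)}_{n,p}$; the truncation dimension, eigenvalues, rotation and sample size below are arbitrary assumptions of the sketch. In line with Theorem 2, the deviation in the eigenbasis should stay within a constant multiple of the deviation in the rotated basis.

```python
# Monte Carlo sketch for Theorem 2 (illustrative only): in a truncated model,
# compare Delta^(e)_{n,p} (eigenbasis) with Delta^(phi)_{n,p} (rotated basis).
import numpy as np

rng = np.random.default_rng(2)
K, p, n_samples, n = 400, 3.0, 20000, 10
lam = 1.0 / np.arange(1, K + 1) ** 2
x = rng.standard_normal((n_samples, K)) * np.sqrt(lam)   # coordinates in the eigenbasis e
Q, _ = np.linalg.qr(rng.standard_normal((K, K)))          # columns of Q = coefficients of phi_k in e
y = x @ Q                                                 # y[:, k] = (x, phi_k)

norm_p = np.sum(x ** 2, axis=1) ** (p / 2)                # ||x||^p, basis independent
delta_e = np.mean(norm_p - np.sum(x[:, :n] ** 2, axis=1) ** (p / 2))
delta_phi = np.mean(norm_p - np.sum(y[:, :n] ** 2, axis=1) ** (p / 2))
print(f"Delta^(e)_{{n,p}} ≈ {delta_e:.4f}   Delta^(phi)_{{n,p}} ≈ {delta_phi:.4f}")
```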
Proof of Theorem 2. As shown for Theorem 1 (see Nikulin, 1999), for any $p\ge 1$
$$\tfrac14\,l_{n,e}(x)\,\|x\|^{p-2} \;\le\; \|x\|^{p} - \|x\|^{p}_{n,e} \;\le\; p\,l_{n,e}(x)\,\|x\|^{p-2}.$$
Therefore,
$$\exists\, C = C_{p,\mu}<\infty:\qquad \Delta^{(e)}_{n,p} \;\le\; C_{p,\mu}\int_X l_{n,e}(x)\,d\mu(x).$$
From the proof of Theorem 1 (see Nikulin, 1999) it follows that for any $p\ge 1$
$$\tfrac14\,l_{n,\varphi}(x)\,\|x\|^{p-2} \;\le\; \|x\|^{p} - \|x\|^{p}_{n,\varphi} \;\le\; p\,l_{n,\varphi}(x)\,\|x\|^{p-2}$$
for an arbitrary system $\varphi=\{\varphi_n\}_{n\in\mathbb N}$. Thus
$$\Delta^{(\varphi)}_{n,p} \;\ge\; \frac14\int_X \|x-\pi_{n,\varphi}x\|^{2}\,\|x\|^{p-2}\,d\mu(x) = \frac14\int_X l_{n,\varphi}(x)\,\|x\|^{p-2}\,d\mu(x).$$
We note that, by the Cauchy–Schwarz inequality,
$$\int_X (l_{n,\varphi}(x))^{1/2}\,d\mu(x) = \int_X \|x-\pi_{n,\varphi}x\|\,d\mu(x) = \int_X \|x-\pi_{n,\varphi}x\|\,\|x\|^{(p-2)/2}\,\|x\|^{-(p-2)/2}\,d\mu(x)$$
$$\le \left\{\int_X l_{n,\varphi}(x)\,\|x\|^{p-2}\,d\mu(x)\right\}^{1/2}\left\{\int_X \|x\|^{2-p}\,d\mu(x)\right\}^{1/2}.$$
By using Lemma 2 (see Nikulin, 1999),
$$\int_X \|x\|^{p}\,d\mu(x) < \infty \qquad \forall p\in\mathbb R,$$
we have
$$\exists\, K_{p,\mu}<\infty:\qquad \int_X l_{n,\varphi}(x)\,\|x\|^{p-2}\,d\mu(x) \;\ge\; K_{p,\mu}\left\{\int_X (l_{n,\varphi}(x))^{1/2}\,d\mu(x)\right\}^{2}.$$
Now, we use the following result (see Vakhania et al., 1985): let $Y$ be a centred Gaussian vector with values in a Banach space. Then for any positive constants $\alpha>0$, $\beta>0$ there exists a constant $C=C(\alpha,\beta)<\infty$, independent of the space and of the distribution of $Y$, such that
$$\{\mathbf E\|Y\|^{\alpha}\}^{1/\alpha} \;\le\; C\,\{\mathbf E\|Y\|^{\beta}\}^{1/\beta}.$$
Therefore, applying this to the centred Gaussian vector $Y = x-\pi_{n,\varphi}x$, we obtain
$$\exists\, C = C_p:\qquad \left\{\int_X (l_{n,\varphi}(x))^{1/2}\,d\mu(x)\right\}^{2} \;\ge\; C\int_X l_{n,\varphi}(x)\,d\mu(x).$$
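As a quick numerical illustration of this moment comparison (an editorial aside; the covariance spectra and sample size are arbitrary assumptions), the ratio $\{\mathbf E\|Y\|^{2}\}^{1/2}/\mathbf E\|Y\|$ can be observed to stay bounded for centred Gaussian vectors of various dimensions:

```python
# Illustrative check (not from the paper) of the Gaussian moment comparison:
# for centred Gaussian vectors Y the ratio (E||Y||^2)^(1/2) / E||Y|| stays
# bounded by a constant not depending on the covariance or the dimension.
import numpy as np

rng = np.random.default_rng(3)
for d in (1, 5, 50, 500):
    lam = 1.0 / np.arange(1, d + 1) ** 2          # an arbitrary covariance spectrum
    Y = rng.standard_normal((20000, d)) * np.sqrt(lam)
    norms = np.linalg.norm(Y, axis=1)
    ratio = np.sqrt(np.mean(norms ** 2)) / np.mean(norms)
    print(f"d={d:4d}  (E||Y||^2)^0.5 / E||Y|| ≈ {ratio:.3f}")
```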
By using Lemma 2 (see Nikulin, 1999) we see that $C(1,\tfrac12)=\tfrac13$. Thus
$$\exists\, \tilde C_{p,\mu}<\infty:\qquad \int_X l_{n,\varphi}(x)\,\|x\|^{p-2}\,d\mu(x) \;\ge\; \tilde C_{p,\mu}\int_X l_{n,\varphi}(x)\,d\mu(x) = \tilde C_{p,\mu}\,\Delta^{(\varphi)}_{n,2}.$$
By using the Lemma we obtain
$$\Delta^{(\varphi)}_{n,2} \;\ge\; \Delta^{(e)}_{n,2}.$$
Therefore,
$$\exists\, C_1, C_2<\infty:\qquad \Delta^{(\varphi)}_{n,p} \;\ge\; C_1\,\Delta^{(\varphi)}_{n,2} \;\ge\; C_1\,\Delta^{(e)}_{n,2} \;\ge\; C_2\,\Delta^{(e)}_{n,p},$$
and we obtain our result.

References

Gikhman, I.I., Skorohod, A.V., 1971. Theory of Stochastic Processes. Nauka, Moscow (in Russian).
Nikulin, A.M., 1999. Approximation of infinite-dimensional Gaussian integrals. In: Ibragimov, I., Balakrishnan, N., Nevzorov, V. (Eds.), Proceedings of the Conference Dedicated to 50 Years of the Chair of Probability Theory and Mathematical Statistics, St. Petersburg, June 1998. Birkhäuser, Boston.
Vakhania, N.N., Tarieladze, V.I., Chobanyan, S.A., 1985. Probability Measures in Banach Spaces. Nauka, Moscow (in Russian).