The self-intersections of a Gaussian random field


Stochastic Processes and their Applications 116 (2006) 1294–1318 www.elsevier.com/locate/spa

The self-intersections of a Gaussian random field✩

Rongmao Zhang, Zhengyan Lin∗

Department of Mathematics, Zhejiang University, Xixi Campus, Hangzhou, China

Received 1 November 2004; received in revised form 28 January 2006; accepted 14 February 2006. Available online 9 March 2006.

Abstract. Let $X(t)$ be an $(N, d, \alpha)$ non-deterministic Gaussian field. In this paper we establish sufficient conditions for the existence of $k$-multiple points, and we evaluate the Hausdorff measure and Hausdorff dimension of the $k$-multiple times set $\{(t_1, t_2, \ldots, t_k) : X(t_1) = X(t_2) = \cdots = X(t_k) \text{ for distinct } t_1, t_2, \ldots, t_k\}$, as well as the local times of the process $Y(T) = (X(t_2) - X(t_1), \ldots, X(t_k) - X(t_{k-1}))$.
© 2006 Elsevier B.V. All rights reserved.

MR Subject Classification: 60G17; 60J65; 60F15

Keywords: Hausdorff measure; Multiple point; k-Multiple times set; Local time; Gaussian random field

1. Introduction and main results

For our purposes we first introduce some notation to be used in this paper. We write

$$\langle x, y\rangle = \sum_{i=1}^{d} x_i y_i \quad \text{for any } x, y \in \mathbb{R}^d;$$

$$T = (t_1, \ldots, t_k), \quad t_i \in \mathbb{R}_+^N,\ i = 1, \ldots, k; \qquad S = (s_1, \ldots, s_k), \quad s_i \in \mathbb{R}_+^N,\ i = 1, \ldots, k;$$

$$B(S, r) = \{T : |t_i - s_i| < r,\ i = 1, \ldots, k\}; \qquad R_\eta^{Nk} = \Big\{T : \min_{i \neq j} |t_i - t_j| > \eta\Big\}; \qquad I_\eta^{Nk} = \{T : T \in B(S, \eta),\ S \in R_{3\eta}^{Nk}\}.$$

$\mathcal{R}$ denotes the class of all $I_\eta^{Nk}$, $\eta > 0$, and $|\cdot|$ denotes the Euclidean norm.

✩ Project supported by NSFC (10131040), SRFDP (2002335090).
∗ Corresponding author.

E-mail addresses: [email protected] (R. Zhang), [email protected] (Z. Lin).
0304-4149/$ - see front matter © 2006 Elsevier B.V. All rights reserved. doi:10.1016/j.spa.2006.02.005


Let $Y(T) = (X(t_2) - X(t_1), \ldots, X(t_k) - X(t_{k-1}))$. Then $Y(T)$ is called the self-intersection process of $X(t)$, and the local time $L(x, T)$ of $Y(T)$ is called the self-intersection local time of $X$. Furthermore, a point $x$ is called a $k$-multiple point of $X$ if there exist $k$ distinct time points $t_1, \ldots, t_k$ such that $X(t_1) = X(t_2) = \cdots = X(t_k) = x$. Let $I = \{(t_1, t_2, \ldots, t_k) : X(t_1) = X(t_2) = \cdots = X(t_k), \text{ for distinct } t_1, t_2, \ldots, t_k\}$. Then we say $I$ is the $k$-multiple times set of $X$.

The existence of multiple points and the Hausdorff dimension of the $k$-multiple times set have been investigated in depth for several processes. For example, Taylor [19] studied the multiple points of the sample paths of the symmetric stable process, Hendricks [9] studied the multiple points of a process in $\mathbb{R}^2$ with stable components, Kôno [10] investigated the double points of Gaussian sample paths, Cuzick [4,6] studied the local properties and the existence of multiple points for Gaussian vector fields, Geman et al. [8] studied the intersections of Brownian paths in the plane, Rosen [14] studied the self-intersections of planar Brownian motion and fractional Brownian motion, and Talagrand [18] evaluated the Hausdorff measure of trajectories of multi-parameter fractional Brownian motion. There are different tools for studying these problems, such as potential theory, capacity and local times. In this paper, we will establish the Hausdorff measure and Hausdorff dimension of the multiple times sets for a locally non-deterministic Gaussian field $X(t)$ by studying the local times of $Y(T)$.

For our purposes, we shall introduce the following definition.

Definition. Let $X(t) = (X_1(t), \ldots, X_d(t))$, $t \in \mathbb{R}_+^N$, be a stationary Gaussian field with mean zero whose components $X_i$, $i = 1, \ldots, d$, are independent. Suppose that for each $i = 1, \ldots, d$ there exist a non-decreasing continuous function $\sigma_i(x)$ and constants $C, \delta > 0$ such that

$$E(X_i(t) - X_i(s))^2 \le \sigma_i^2(|t - s|) \tag{1.1}$$

and

$$\mathrm{Var}\big(X_i(t_n) - X_i(t_{n-1}) \,\big|\, X_i(t_j) - X_i(t_{j-1}),\ 1 \le j \le n-1\big) \ge C \sigma_i^2(|t_n - t_{n-1}|), \tag{1.2}$$

where $t_0, t_1, \ldots, t_n$ are distinct points lying in a cube of edge length at most $\delta$ and satisfying $|t_j - t_{j-1}| \le |t_j - t_i|$ for all $i < j \le n$; $\sigma_i(x)$ is a function with index $\alpha_i$, that is,

$$\alpha_i = \sup\Big\{\alpha > 0 : \limsup_{|t| \to 0} |t|^{-\alpha}\sigma_i(|t|) = 0\Big\} = \inf\Big\{\alpha > 0 : \liminf_{|t| \to 0} |t|^{-\alpha}\sigma_i(|t|) = \infty\Big\},$$

satisfying in addition that there exist constants $M_1, M_2$ such that for any $x \in (0, 1]$

$$M_1 \le \liminf_{r \to 0} \frac{\sigma_i(rx)}{\sigma_i(r)x^{\alpha_i}} \le \limsup_{r \to 0} \frac{\sigma_i(rx)}{\sigma_i(r)x^{\alpha_i}} \le M_2. \tag{1.3}$$

Then we shall say that $X(t)$ is a locally non-deterministic (LND) Gaussian field with index $\alpha = (\alpha_1, \ldots, \alpha_d)$.

A typical example of an LND Gaussian field is fractional Brownian motion taking values in $\mathbb{R}^d$ with index $\alpha$, that is, $X(t) = (X_1(t), X_2(t), \ldots, X_d(t))$, where $X_1, \ldots, X_d$ are independent copies of the centered real-valued Gaussian random field $Y(t)$ with covariance function

$$EY(t)Y(s) = \frac{1}{2}\big(|t|^{2\alpha} + |s|^{2\alpha} - |t - s|^{2\alpha}\big).$$
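As a quick numerical illustration of this covariance structure (our own sketch, not part of the paper; `fbm_cov` and `sample_fbm` are hypothetical helper names, and the Cholesky construction is only a finite-grid device for the one-parameter case $N = 1$), one can sample such a field and compare the empirical covariance matrix with the theoretical one:

```python
import numpy as np

def fbm_cov(t, s, alpha):
    """Covariance E[Y(t)Y(s)] = (|t|^{2a} + |s|^{2a} - |t - s|^{2a}) / 2."""
    return 0.5 * (abs(t) ** (2 * alpha) + abs(s) ** (2 * alpha)
                  - abs(t - s) ** (2 * alpha))

def sample_fbm(ts, alpha, n_paths, rng):
    """Draw n_paths of one-parameter fBm on the grid ts via a Cholesky factor."""
    C = np.array([[fbm_cov(t, s, alpha) for s in ts] for t in ts])
    L = np.linalg.cholesky(C + 1e-12 * np.eye(len(ts)))  # tiny jitter for stability
    return rng.standard_normal((n_paths, len(ts))) @ L.T

rng = np.random.default_rng(0)
ts = np.linspace(0.1, 1.0, 10)
alpha = 0.7
paths = sample_fbm(ts, alpha, 20000, rng)
emp = paths.T @ paths / len(paths)  # empirical covariance matrix
theory = np.array([[fbm_cov(t, s, alpha) for s in ts] for t in ts])
err = np.max(np.abs(emp - theory))  # small for this many sample paths
```

For $\alpha = 1/2$ the covariance reduces to $\min(t, s)$ for $t, s > 0$, i.e. ordinary Brownian motion, which gives a convenient sanity check on `fbm_cov`.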


There is a long history of the study of LND Gaussian processes; for more information, we refer the reader to [2,12,1]. There are several different definitions of an LND Gaussian field; the one we introduce here is slightly different from those of [6] and [21]. Cuzick [6] introduced a different LND Gaussian field by replacing (1.3) with a conditional covariance and studied sufficient conditions for the existence of $k$-multiple points. Rosen [14] showed that if $X$ is a planar Brownian motion, then the $k$-multiple self-intersection local time $L(x, B)$ of $X$ satisfies

$$L(x, B) \le C_2 |B|^{1/k} (\log\log|B|)^{k-1} \quad \text{a.s.} \tag{1.4}$$

Moreover, if $X$ is a fractional Brownian motion taking values in $\mathbb{R}^d$ with index $\alpha$, then

$$P\big(\dim(I) = Nk - d(k-1)\alpha\big) > 0, \tag{1.5}$$

where $\dim(I)$ denotes the Hausdorff dimension of the set $I$. Recently, Xiao [21] introduced a strongly locally non-deterministic Gaussian field $X$ with index $\alpha$, that is, $X(t) = (X_1(t), X_2(t), \ldots, X_d(t))$ is an LND Gaussian field with $\sigma_1(x) = \sigma_2(x) = \cdots = \sigma_d(x) = x^\alpha L(x)$, where $L(x)$ is a slowly varying function satisfying

$$L(x) = \exp\left(\int_x^a \frac{\varepsilon(t)}{t}\, dt\right),$$

with $\varepsilon(t) : [0, a] \to \mathbb{R}$ a bounded measurable function and $\lim_{x\to 0}\varepsilon(x) = 0$. He obtained the Hölder condition for the local time and the Hausdorff measure for the level set of $X$ in the case $N > \alpha d$. It is clear that Xiao's definition of a strongly LND Gaussian field is contained in ours. Talagrand [18] established the Hausdorff measure of the multiple points of fractional Brownian motion taking values in $\mathbb{R}^d$ with index $\alpha$.

There are two objects of this paper. First, we generalize the result (1.4) for planar Brownian motion to the LND Gaussian field $X$; that is, we consider the self-intersection local time of $X$ and use a result on this local time to obtain a sufficient condition for the existence of $k$-multiple points. Second, we use a method similar to that of Talagrand [18] to establish the Hausdorff measure of the $k$-multiple times sets of this LND Gaussian field. The results that we obtain are sharper and more general than (1.5). In the sequel, $K_0, K_1, \ldots$ denote positive absolute constants.

The following is our main result.

Theorem 1.1. Let $X(t)$, $t \in \mathbb{R}_+^N$, be an LND Gaussian field with index $\alpha$ and $Y(T) = (X(t_2) - X(t_1), \ldots, X(t_k) - X(t_{k-1}))$ for any fixed $k$. Assume that $Nk > (k-1)\sum_{i=1}^d \alpha_i$. Then for any Borel set $B \in \mathcal{R}$, $Y(T)$ has square integrable local times $L(x, \cdot)$ on $B$.

Theorem 1.2. Let $Y(T)$ be defined as above with $Nk > (k-1)\sum_{i=1}^d \alpha_i$, let $L(x, \cdot)$ be its local times, and set $L^*(B(S, r)) = \sup_{x \in \mathbb{R}^{(k-1)d}} L(x, B(S, r))$. Then for any $x \in \mathbb{R}^{(k-1)d}$ and $S \in \mathbb{R}_+^{Nk}$ with $s_i \neq s_j$ for $i \neq j$,

$$\limsup_{r \to 0} \frac{L(x, B(S, r))}{\phi_1(r)} \le K_0 \quad \text{a.s.} \tag{1.6}$$

and for any Borel set $E \in \mathcal{R}$,

$$\limsup_{r \to 0} \sup_{S \in E} \frac{L(x, B(S, r))}{\phi_2(r)} \le K_1 \quad \text{a.s.} \tag{1.7}$$

In particular, if $\sigma_1(x) = \sigma_2(x) = \cdots = \sigma_d(x)$, we have

$$\limsup_{r \to 0} \frac{L^*(B(S, r))}{\phi_1(r)} \le K_0' \quad \text{a.s.} \tag{1.6'}$$

and

$$\limsup_{r \to 0} \sup_{S \in E} \frac{L^*(B(S, r))}{\phi_2(r)} \le K_1' \quad \text{a.s.} \tag{1.7'}$$

where

$$\phi_1(r) = \frac{r^{Nk} (\log\log 1/r)^\beta}{\big(\prod_{i=1}^d \sigma_i(r)\big)^{k-1}}, \qquad \phi_2(r) = \frac{r^{Nk} (\log 1/r)^\beta}{\big(\prod_{i=1}^d \sigma_i(r)\big)^{k-1}}$$

and $\beta = (k-1)\sum_{i=1}^d \alpha_i / N$.
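For instance, specializing to fractional Brownian motion, where $\sigma_1(r) = \cdots = \sigma_d(r) = r^\alpha$, the gauge function $\phi_1$ reduces by direct substitution (a specialization we spell out here; it is not displayed in this form in the paper) to

```latex
\phi_1(r) = \frac{r^{Nk}\,(\log\log 1/r)^{\beta}}{\bigl(r^{\alpha d}\bigr)^{k-1}}
          = r^{Nk-(k-1)d\alpha}\,(\log\log 1/r)^{(k-1)d\alpha/N},
```

whose power of $r$ is exactly the dimension $Nk - d(k-1)\alpha$ appearing in (1.5).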

Theorem 1.3. Let $X(t)$, $t \in \mathbb{R}_+^N$, be an LND Gaussian field with index $\alpha$. If $Nk > (k-1)\sum_{i=1}^d \alpha_i$, then for any open set $B \in \mathbb{R}_+^N$, $X(t)$ has a point of multiplicity $k$ on $B$ with positive probability. Conversely, if $Nk < (k-1)\sum_{i=1}^d \alpha_i$, there are no points of multiplicity $k$ with probability one.

Theorem 1.4. Let $I = \{T = (t_1, \ldots, t_k) \in \mathbb{R}_+^{Nk} : X(t_1) = \cdots = X(t_k),\ t_i \neq t_j,\ i \neq j\}$. Suppose that $Nk > (k-1)\sum_{i=1}^d \alpha_i$ and that on $\mathbb{R} - \{0\}$ there exists a continuous function $\varphi(x) = (\varphi_1(x), \ldots, \varphi_d(x))$ such that $\frac{d(\sigma_i(x))}{dx} = \varphi_i(x)$, $1 \le i \le d$. Then for any set $Q \in \mathcal{R}$,

$$K_2 L(0, Q) < \phi_1\text{-}m(I \cap Q) < \infty \quad \text{a.s.} \tag{1.8}$$

Corollary 1.1.

$$P\left\{\dim(I \cap Q) = Nk - (k-1)\sum_{i=1}^{d} \alpha_i\right\} > 0. \tag{1.9}$$
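The dimension formula of Corollary 1.1 and the existence threshold of Theorem 1.3 are easy to tabulate. The following sketch (our own illustration, with the hypothetical helper name `multiple_times_dim`) evaluates $Nk - (k-1)\sum_{i=1}^d \alpha_i$ when the existence condition $Nk > (k-1)\sum_{i=1}^d \alpha_i$ holds:

```python
def multiple_times_dim(N, k, alphas):
    """Return the Hausdorff dimension N*k - (k-1)*sum(alphas) of the k-multiple
    times set when the existence condition N*k > (k-1)*sum(alphas) holds.
    Return None otherwise (for strict '<' Theorem 1.3 gives no k-multiple
    points; the boundary case of equality is not covered by the theorems)."""
    s = sum(alphas)
    if N * k <= (k - 1) * s:
        return None
    return N * k - (k - 1) * s

# Planar Brownian motion: N = 1, d = 2, alpha_i = 1/2; every k qualifies and
# the k-multiple times set has dimension k - (k - 1) = 1 for all k.
print(multiple_times_dim(1, 3, [0.5, 0.5]))  # -> 1.0
```

Note that for Brownian motion in $\mathbb{R}^3$ ($N = 1$, $d = 3$, $k = 2$) the same formula gives $2 - 1.5 = 0.5$, while in $\mathbb{R}^d$ with $d \ge 4$ the condition fails and there are no double points, consistent with the classical picture.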

Remark. It is easy to see that Theorem 1.2 is more general than conclusion (1.4). The result of Theorem 1.3 is similar to that of [6], and the result of Theorem 1.4 is sharper than that of [14]. Cuzick mentioned that, by [5], with positive probability the Hausdorff dimension of the $k$-multiple times set for the Gaussian field that he discussed is $Nk - (k-1)\sum_{i=1}^d \alpha_i$. It is well known that the Hausdorff measure is always more difficult to deal with, and sharper, than the Hausdorff dimension. So the conclusion of Theorem 1.4 is sharper than Cuzick's claim on the Hausdorff dimension. Furthermore, the corollary of Theorem 1.4 holds for all $I \cap B$, $B \in \mathcal{R}$, instead of just $I$, so it is more general than Cuzick's result and (1.5). The idea of the proof is completely different from that of [5]. The method we use can also be applied to determine the Hausdorff properties of the multiple times sets and level sets of other processes.

2. The existence and uniform Hölder condition of the local times

In this section, we will use arguments similar to those of [14] and [8] to prove Theorems 1.1 and 1.2. Before proving the theorems, we give the lemmas that we need.


Lemma 2.1. Let $u_0 = 0$ and let $C_k$ be a constant. Then for any fixed integer $1 \le p \le k$,

$$\int_{\mathbb{R}^{(k-1)d}} \exp\left\{-\frac{kC_k}{2(k-1)} \sum_{i=1, i\neq p}^{k} \big|\langle u_{i-1} - u_i, \sigma(|t_i - s_i|)\rangle\big|^2\right\} du_1 \cdots du_{k-1} \le K_3 \prod_{i=1, i\neq p}^{k} \prod_{j=1}^{d} \big(\sigma_j(|t_i - s_i|)\big)^{-1}, \tag{2.1}$$

where $\sigma(x) = (\sigma_1(x), \ldots, \sigma_d(x))$.

Proof. Let $v_i = u_{i-1} - u_i$. Then the left side of (2.1) is equal to

$$\int_{\mathbb{R}^{(k-1)d}} \exp\left\{-\frac{kC_k}{2(k-1)} \sum_{i=1, i\neq p}^{k} \big|\langle v_i, \sigma(|t_i - s_i|)\rangle\big|^2\right\} dv_1 \cdots dv_{k-1} = \prod_{i=1, i\neq p}^{k} \int_{\mathbb{R}^d} \exp\left\{-\frac{kC_k}{2(k-1)} \sum_{j=1}^{d} \big[v_i^j \sigma_j(|t_i - s_i|)\big]^2\right\} dv_i^1 \cdots dv_i^d \le K_3 \prod_{i=1, i\neq p}^{k} \prod_{j=1}^{d} \big(\sigma_j(|t_i - s_i|)\big)^{-1}.$$

This completes our proof. □
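The scaling that drives Lemma 2.1 is the one-dimensional Gaussian integral $\int_{\mathbb{R}} e^{-c(v\sigma)^2}\,dv = \sqrt{\pi/c}\,\sigma^{-1}$, applied once per coordinate. A quick numerical check of this $\sigma^{-1}$ scaling (our own illustration, not from the paper; `gaussian_integral` is a hypothetical helper):

```python
import math

def gaussian_integral(c, sigma, n=200001, half_width=50.0):
    """Integrate exp(-c * (v * sigma)**2) over v by the trapezoid rule.
    The integrand is negligible once |v * sigma * sqrt(c)| exceeds half_width."""
    a = half_width / (sigma * math.sqrt(c))
    h = 2 * a / (n - 1)
    total = 0.0
    for i in range(n):
        v = -a + i * h
        w = 0.5 if i in (0, n - 1) else 1.0  # trapezoid end-point weights
        total += w * math.exp(-c * (v * sigma) ** 2)
    return total * h

c, sigma = 1.3, 0.25
exact = math.sqrt(math.pi / c) / sigma
approx = gaussian_integral(c, sigma)
```

The trapezoid rule converges extremely fast here because the integrand and all its derivatives vanish at the truncation points, so even a crude grid reproduces the closed form to high accuracy.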

The following lemma is an extension of the Abel transform.

Lemma 2.2. Let $t_i^j \in \mathbb{R}^N$, $u_0^l = u_k^l = t_i^0 = 0$ for all $i = 1, \ldots, k$, $l = 1, \ldots, n$. Then we have

$$\sum_{j=1}^{n} \sum_{i=1}^{k-1} u_i^j \big(X(t_{i+1}^j) - X(t_i^j)\big) = \sum_{j=1}^{n} \sum_{i=1}^{k} \omega_i^j \big(X(t_i^j) - X(t_i^{j-1})\big),$$

where $\omega_i^j = \sum_{l=j}^{n} (u_{i-1}^l - u_i^l)$, $1 \le i \le k$, $1 \le j \le n$.

Lemma 2.3. Let $\pi$ be a permutation of $\{1, 2, \ldots, k\}$. If $\sum_{i=1}^{k-1} u_i (a_{i+1} - a_i) = \sum_{i=1}^{k-1} v_i (a_{\pi(i+1)} - a_{\pi(i)})$, then

$$u_l = \sum_{i : [\pi(i), \pi(i+1)] \supseteq [l, l+1]} \mathrm{sgn}\big(\pi(i+1) - \pi(i)\big) v_i$$

and $u_{\pi(l)} - u_{\pi(l)-1} = v_l - v_{l-1}$.

The proof can be found in [14]. The following lemma can be deduced by an argument similar to that of Seneta [15].

Lemma 2.4. Assume that $\sigma_i(x)$ satisfies condition (1.3) and $N > \alpha_i \theta$. Then for $r > 0$ small enough

$$\int_0^1 \frac{x^{N-1}}{(\sigma_i(rx))^\theta}\, dx \le K_4 (\sigma_i(r))^{-\theta} \int_0^1 \frac{x^{N-1}}{x^{\alpha_i \theta}}\, dx$$

and

$$\int_{B(0,1)} \frac{1}{(\sigma_i(r|t|))^\theta}\, dt \le K_5 (\sigma_i(r))^{-\theta} \int_{B(0,1)} \frac{1}{|t|^{\alpha_i \theta}}\, dt,$$

where $B(0, 1)$ is the ball in $\mathbb{R}^N$ with radius 1 and center 0.

Lemma 2.5. Let $\theta > 0$ with $\theta \sum_{i=1}^d \alpha_i < N$, $0 < r < \delta$ and $s \in \mathbb{R}_+^N$. Then

$$\int_{(B(s,r))^n} \prod_{i=1}^{n} \prod_{j=1}^{d} \big(\sigma_j(|t_{\pi(i+1)} - t_{\pi(i)}|)\big)^{-\theta}\, dt_1 \cdots dt_n \le K_6^n (n!)^{\theta \sum_{i=1}^d \alpha_i / N} \cdot r^{Nn} \cdot \left(\prod_{i=1}^{d} \sigma_i(r)\right)^{-n\theta},$$

where $\pi$ is a permutation of $\{1, \ldots, n\}$ such that $t_{\pi(0)} = 0$, $t_{\pi(1)} = t_1$ and $|t_{\pi(i)} - t_{\pi(i-1)}| = \min\{|t_j - t_{\pi(i-1)}| : j \in (1, \ldots, n) \cap (\pi(1), \ldots, \pi(i-1))^c\}$.

Proof. By Lemma 2.4, for $r$ small enough we have

$$\begin{aligned}
\int_{(B(s,r))^n} \prod_{i=1}^{n} \prod_{j=1}^{d} \big(\sigma_j(|t_{\pi(i)} - t_{\pi(i-1)}|)\big)^{-\theta}\, dt_1 \cdots dt_n
&= r^{Nn} \int_{(B(s,1))^n} \prod_{i=1}^{n} \prod_{j=1}^{d} \big(\sigma_j(r|t_{\pi(i)} - t_{\pi(i-1)}|)\big)^{-\theta}\, dt_1 \cdots dt_n \\
&\le K_5^n r^{Nn} \left(\prod_{j=1}^{d} \sigma_j(r)\right)^{-n\theta} \int_{(B(s,1))^n} \prod_{i=1}^{n} \left(\min_{l \le i-1} |t_i - t_l|\right)^{-\theta \sum_{j=1}^d \alpha_j} dt_1 \cdots dt_n \\
&\le K_6^n r^{Nn} \left(\prod_{j=1}^{d} \sigma_j(r)\right)^{-n\theta} (n!)^{\theta \sum_{j=1}^d \alpha_j / N}. 
\end{aligned} \tag{2.2}$$

Here we have used the fact from [21, Lemma 2.3] that for any integer $n \ge 1$ and any distinct $t_1, \ldots, t_n \in B(s, r)$,

$$\int_{B(s,r)} \left(\min_{1 \le j \le n} |t - t_j|^{\alpha}\right)^{-\theta} dt \le A_0 r^N \big(r n^{-1/N}\big)^{-\alpha\theta}. \qquad \square$$

Proof of Theorem 1.1. By using Fourier analysis (see [2]), in order to prove the existence of the local time of $Y(T)$ it is enough to show that

$$\int_B \int_B \int_{\mathbb{R}^{(k-1)d}} E \exp\{i\langle u, Y(T) - Y(S)\rangle\}\, du\, dS\, dT < \infty. \tag{2.3}$$

By the definition of $\mathcal{R}$, for any $B \in \mathcal{R}$ there exist $\eta > 0$ and $S^0 \in R_{3\eta}^{Nk}$ such that $B = B(S^0, \eta) = \prod_{i=1}^{k} B(s_i^0, \eta)$. It follows that for any $S, T \in B$ and all $1 \le i \le k$, $s_i$ is the point closest to $t_i$ among $\{s_1, \ldots, s_k, t_1, \ldots, t_k\}$, i.e.,

$$|s_i - t_i| = \min\big(|s_i - t_i|,\ |s_i - s_j|,\ |s_i - t_j|,\ 1 \le j \neq i \le k\big).$$


By (1.2) and a well known result of [12], we see that there exists a constant $C_k$ such that for any $t_i$, $0 \le i \le k$, defined as in (1.2),

$$\mathrm{Var}\left\{\sum_{j=1}^{k} u_i^j \big(X_i(t_j) - X_i(t_{j-1})\big)\right\} \ge C_k \sum_{j=1}^{k} \big|u_i^j \sigma_i(|t_j - t_{j-1}|)\big|^2.$$

Put $u_k = u_0 = 0$ and $v_i = u_{i-1} - u_i$, $i = 1, \ldots, k$. Then by Lemma 2.1 and the generalized Hölder inequality, the left side of (2.3) is equal to

$$\begin{aligned}
&\int_B \int_B \int_{\mathbb{R}^{kd}} \exp\left\{-\frac{1}{2}\mathrm{Var}\left(\sum_{i=1}^{k} \langle v_i, X(t_i) - X(s_i)\rangle\right)\right\} dv\, dS\, dT \\
&\quad \le \int_B \int_B \int_{\mathbb{R}^{kd}} \exp\left\{-\frac{1}{2} C_k \sum_{i=1}^{k} \sum_{j=1}^{d} \big[v_i^j \sigma_j(|t_i - s_i|)\big]^2\right\} dv\, dS\, dT \\
&\quad = \int_B \int_B \int_{\mathbb{R}^{kd}} \prod_{p=1}^{k} \exp\left\{-\frac{1}{2(k-1)} C_k \sum_{i=1, i\neq p}^{k} \sum_{j=1}^{d} \big[v_i^j \sigma_j(|t_i - s_i|)\big]^2\right\} dv\, dS\, dT \\
&\quad \le \int_B \int_B \prod_{p=1}^{k} \left(\int_{\mathbb{R}^{kd}} \exp\left\{-\frac{k C_k}{2(k-1)} \sum_{i=1, i\neq p}^{k} \sum_{j=1}^{d} \big[v_i^j \sigma_j(|t_i - s_i|)\big]^2\right\} dv\right)^{1/k} dS\, dT \\
&\quad \le K_3 \int_B \int_B \prod_{p=1}^{k} \prod_{i=1, i\neq p}^{k} \left(\prod_{j=1}^{d} \sigma_j(|t_i - s_i|)\right)^{-1/k} dS\, dT \\
&\quad = K_3 \int_B \int_B \prod_{i=1}^{k} \left(\prod_{j=1}^{d} \sigma_j(|t_i - s_i|)\right)^{-\frac{k-1}{k}} dS\, dT \\
&\quad \le K_3 \prod_{i=1}^{k} \int_{B(s_i^0, \eta)} \int_{B(s_i^0, \eta)} \left(\prod_{j=1}^{d} \sigma_j(|t_i - s_i|)\right)^{-\frac{k-1}{k}} ds_i\, dt_i.
\end{aligned} \tag{2.4}$$

Therefore, on taking $n = 2$ in Lemma 2.5, the right side of (2.4) is bounded. It follows that $Y(T)$ has local times on $B$, and the proof of Theorem 1.1 is completed. □

Lemma 2.6. Assume that $X(t)$, $t \in \mathbb{R}_+^N$, satisfies the conditions of Theorem 1.1 and $L(x, \cdot)$ is the local time of $Y(t)$. Then there exists $\delta > 0$ such that for any $0 < r < \delta$, $B = B(S, r) \in \mathcal{R}$, $x, y \in \mathbb{R}^{(k-1)d}$, even integer $n \ge 2$ and $0 < \gamma < \min\left\{1, \frac{Nk - (k-1)\sum_{i=1}^d \alpha_i}{2\alpha_0}\right\}$,

$$E[L(x, B)]^n \le K_7^n (n!)^\beta r^{Nnk} \left(\prod_{i=1}^{d} \sigma_i(r)\right)^{-n(k-1)}, \tag{2.5}$$

$$E[L(x + y, B) - L(x, B)]^n \le \frac{K_8^n |y|^{n\gamma} (n!)^\zeta r^{Nnk}}{\big(\prod_{i=1}^{d} \sigma_i(r)\big)^{n(k-1)} (\sigma_0(r))^{n\gamma}}, \tag{2.6}$$

where $\zeta = \frac{(k-1)\sum_{i=1}^d \alpha_i + \gamma(\alpha_0 + N)}{N}$ and $\alpha_0$ denotes the $\alpha_i$ corresponding to $\sigma_0(x) := \min\{\sigma_1(x), \ldots, \sigma_d(x)\}$.


Proof. It follows from (25.5) and (25.7) in [7] that for any $x, y \in \mathbb{R}^{d(k-1)}$ and any integer $n \ge 1$,

$$E[L(x, B)]^n = (2\pi)^{-nd(k-1)} \int_{B^n} \int_{\mathbb{R}^{nd(k-1)}} \exp\left\{-i\sum_{j=1}^{n} \langle u^j, x\rangle\right\} E\exp\left\{i\sum_{j=1}^{n} \langle u^j, Y(T^j)\rangle\right\} du^*\, dT^* \tag{2.7}$$

and for any even integer $n \ge 2$,

$$E[L(x + y, B) - L(x, B)]^n = (2\pi)^{-nd(k-1)} \int_{B^n} \int_{\mathbb{R}^{nd(k-1)}} \prod_{j=1}^{n} \big(\exp\{-i\langle u^j, x + y\rangle\} - \exp\{-i\langle u^j, x\rangle\}\big)\, E\exp\left\{i\sum_{j=1}^{n} \langle u^j, Y(T^j)\rangle\right\} du^*\, dT^*, \tag{2.8}$$

where $u^* = (u^1, \ldots, u^n)$, $T^* = (T^1, \ldots, T^n)$, $u^i \in \mathbb{R}^{(k-1)d}$ and $T^i \in \mathbb{R}_+^{Nk}$.

By (2.7), it follows that

$$E[L(x, B)]^n \le (2\pi)^{-nd(k-1)} \int_{B^n} \int_{\mathbb{R}^{nd(k-1)}} \exp\left\{-\frac{1}{2}\mathrm{Var}\left(\sum_{j=1}^{n} \langle u^j, Y(T^j)\rangle\right)\right\} du^*\, dT^* =: H.$$

Since $T^j = (t_1^j, \ldots, t_k^j) \in B$, $j = 1, \ldots, n$, it follows that for any fixed $i, j, l$,

$$|t_i^j - t_i^l| < \min\big\{|t_i^p - t_{i'}^q| : i' \neq i,\ \text{for all } 1 \le p, q \le n\big\}.$$

Therefore, for any fixed $i \in \{1, 2, \ldots, k\}$, we can find a permutation $\pi^i$ of $\{t_i^1, \ldots, t_i^n\}$ such that $t_i^{\pi^i(0)} = 0$, $t_i^{\pi^i(1)} = t_i^1$ and

$$|t_i^{\pi^i(j)} - t_i^{\pi^i(j-1)}| = \min\big\{|t_i^l - t_i^{\pi^i(j-1)}| : l \in (1, \ldots, n) \cap (\pi(1), \ldots, \pi(j-1))^c\big\}.$$

Then by Lemmas 2.2 and 2.3, we have

$$\begin{aligned}
H &= (2\pi)^{-nd(k-1)} \int_{B^n}\int_{\mathbb{R}^{nd(k-1)}} \exp\left\{-\frac{1}{2}\mathrm{Var}\left(\sum_{j=1}^{n}\sum_{i=1}^{k-1} \langle u_i^j, X(t_{i+1}^j) - X(t_i^j)\rangle\right)\right\} du^*\, dT^* \\
&= (2\pi)^{-nd(k-1)} \int_{B^n}\int_{\mathbb{R}^{nd(k-1)}} \exp\left\{-\frac{1}{2}\mathrm{Var}\left(\sum_{j=1}^{n}\sum_{i=1}^{k} \langle v_i^j, X(t_i^j) - X(t_i^{j-1})\rangle\right)\right\} du^*\, dT^* \\
&= (2\pi)^{-nd(k-1)} \int_{B^n}\int_{\mathbb{R}^{nd(k-1)}} \exp\left\{-\frac{1}{2}\mathrm{Var}\left(\sum_{j=1}^{n}\sum_{i=1}^{k} \langle a_i^j, X(t_i^{\pi^i(j)}) - X(t_i^{\pi^i(j-1)})\rangle\right)\right\} du^*\, dT^* \\
&= (2\pi)^{-nd(k-1)} \int_{B^n}\int_{\mathbb{R}^{nd(k-1)}} \exp\left\{-\frac{1}{2}\mathrm{Var}\left(\sum_{j=1}^{n}\sum_{i=1}^{k}\sum_{l=1}^{d} \langle a_{il}^j, X_l(t_i^{\pi^i(j)}) - X_l(t_i^{\pi^i(j-1)})\rangle\right)\right\} du^*\, dT^* \\
&\le (2\pi)^{-nd(k-1)} \int_{B^n}\int_{\mathbb{R}^{nd(k-1)}} \prod_{p=1}^{k} \exp\left\{-\frac{k}{2(k-1)}\mathrm{Var}\left(\sum_{j=1}^{n}\sum_{i=1,i\neq p}^{k}\sum_{l=1}^{d} \langle a_{il}^j, X_l(t_i^{\pi^i(j)}) - X_l(t_i^{\pi^i(j-1)})\rangle\right)\right\}^{\frac{1}{k}} du^*\, dT^*, 
\end{aligned} \tag{2.9}$$

where

$$v_i^j = \sum_{m=j}^{n} (u_{i-1}^m - u_i^m), \qquad a_i^j - a_i^{j-1} = v_i^{\pi^i(j)} - v_i^{\pi^i(j)-1}, \qquad i = 1, \ldots, k. \tag{2.10}$$

By (1.2), we have

$$\begin{aligned}
&\det \mathrm{Cov}\big(X_l(t_i^{\pi^i(j)}) - X_l(t_i^{\pi^i(j-1)}),\ 1 \le i \le k,\ i \neq p,\ 1 \le j \le n\big) \\
&\quad = \prod_{j=1}^{n} \prod_{i=1,i\neq p}^{k} \det \mathrm{Cov}\big(X_l(t_i^{\pi^i(j)}) - X_l(t_i^{\pi^i(j-1)}) \,\big|\, X_l(t_m^{\pi^i(q)}) - X_l(t_m^{\pi^i(q-1)}),\ 1 \le m < i,\ m \neq p,\ 1 \le q \le n;\ X_l(t_i^{\pi^i(q')}) - X_l(t_i^{\pi^i(q'-1)}),\ q' < j\big) \\
&\quad \ge C^{n(k-1)} \prod_{j=1}^{n} \prod_{i=1,i\neq p}^{k} \big(\sigma_l(|t_i^{\pi^i(j)} - t_i^{\pi^i(j-1)}|)\big)^2.
\end{aligned}$$

Therefore, by the generalized Hölder inequality, the right side of (2.9) is less than

$$\begin{aligned}
&(2\pi)^{-nd(k-1)} \int_{B^n} \prod_{p=1}^{k} \left\{\int_{\mathbb{R}^{nd(k-1)}} \exp\left(-\frac{k}{2(k-1)} \mathrm{Var}\left(\sum_{j=1}^{n}\sum_{i=1,i\neq p}^{k}\sum_{l=1}^{d} \langle a_{il}^j, X_l(t_i^{\pi^i(j)}) - X_l(t_i^{\pi^i(j-1)})\rangle\right)\right) du\right\}^{1/k} dT^* \\
&\quad \le (2\pi K_6')^{-nd(k-1)/2} \int_{B^n} \prod_{p=1}^{k} \prod_{l=1}^{d} \Big\{\det \mathrm{Cov}\big(X_l(t_i^{\pi^i(j)}) - X_l(t_i^{\pi^i(j-1)}),\ 1 \le i \le k,\ i \neq p,\ 1 \le j \le n\big)\Big\}^{-\frac{1}{2k}} dT^* \\
&\quad \le (2\pi K_6' C^2)^{-nd(k-1)/2} \int_{B^n} \prod_{p=1}^{k} \prod_{i=1,i\neq p}^{k} \prod_{j=1}^{n} \prod_{l=1}^{d} \big\{\sigma_l(|t_i^{\pi^i(j)} - t_i^{\pi^i(j-1)}|)\big\}^{-\frac{1}{k}} dT^* \\
&\quad = (2\pi K_6' C^2)^{-nd(k-1)/2} \int_{B^n} \prod_{i=1}^{k} \prod_{j=1}^{n} \prod_{l=1}^{d} \big\{\sigma_l(|t_i^{\pi^i(j)} - t_i^{\pi^i(j-1)}|)\big\}^{-\frac{k-1}{k}} dT^* \\
&\quad \le K_7^n (n!)^\beta r^{Nnk} \left(\prod_{l=1}^{d} \sigma_l(r)\right)^{-n(k-1)},
\end{aligned}$$

and the proof of (2.5) is complete.

Now we turn to showing (2.6). By (2.8), the elementary inequality $|e^{ix} - 1| \le 2^{1-\gamma}|x|^\gamma$ for any $x \in \mathbb{R}$ and $0 < \gamma < 1$, and the argument above, we have

$$\begin{aligned}
E[L(x+y, B) - L(x, B)]^n &\le (2\pi)^{-nd(k-1)} 2^{n(1-\gamma)} |y|^{n\gamma} \int_{B^n}\int_{\mathbb{R}^{nd(k-1)}} \left(\prod_{j=1}^{n} |u^j|^\gamma\right) \exp\left\{-\frac{1}{2}\mathrm{Var}\left(\sum_{j=1}^{n} \langle u^j, Y(T^j)\rangle\right)\right\} du^*\, dT^* \\
&= (2\pi)^{-nd(k-1)} 2^{n(1-\gamma)} |y|^{n\gamma} \int_{B^n}\int_{\mathbb{R}^{nd(k-1)}} \left(\prod_{j=1}^{n} |u^j|^\gamma\right) \exp\left\{-\frac{1}{2}\mathrm{Var}\left(\sum_{j=1}^{n}\sum_{i=1}^{k} \langle a_i^j, X(t_i^{\pi^i(j)}) - X(t_i^{\pi^i(j-1)})\rangle\right)\right\} du^*\, dT^* \\
&=: (2\pi)^{-nd(k-1)} 2^{n(1-\gamma)} |y|^{n\gamma} J.
\end{aligned}$$

Let $u_k^j = u_0^j = 0$ for all $j \le n$; then $\sum_{i=1,i\neq p}^{k} (u_i^j - u_{i-1}^j) + (u_p^j - u_{p-1}^j) = 0$. Therefore, for any $p \le k$,

$$|u^j| = K_9\left(\sum_{i=1,i\neq p}^{k-1} |u_i^j - u_{i-1}^j| + |u_p^j - u_{p-1}^j|\right) \le 2K_9 \sum_{i=1,i\neq p}^{k-1} |u_i^j - u_{i-1}^j|.$$

On the other hand, it follows from (2.10) that for all $1 \le i \le k$ and $1 \le j \le n$,

$$u_i^{\pi^i(n)} - u_{i-1}^{\pi^i(n)} = a_i^n \tag{2.11}$$

and

$$a_i^j - a_i^{j-1} = v_i^{\pi^i(j)} - v_i^{\pi^i(j)-1} = u_i^{\pi^i(j)-1} - u_{i-1}^{\pi^i(j)-1}.$$

Hence, we have

$$\prod_{j=1}^{n} |u^j| \le (2K_9)^n \prod_{j=1}^{n} \sum_{i=1,i\neq p}^{k} \big(|a_i^j| + |a_i^{j-1}|\big), \tag{2.12}$$

which, on combining with the generalized Hölder inequality, yields

$$\begin{aligned}
J &\le (2K_9)^{n\gamma} \int_{B^n}\int_{\mathbb{R}^{nd(k-1)}} \prod_{p=1}^{k} \left(\prod_{j=1}^{n} \sum_{i=1,i\neq p}^{k} \big(|a_i^j| + |a_i^{j-1}|\big)\right)^{\gamma} \exp\left\{-\frac{k}{2(k-1)}\mathrm{Var}\left(\sum_{j=1}^{n}\sum_{i=1,i\neq p}^{k}\sum_{l=1}^{d} \langle a_{il}^j, X_l(t_i^{\pi^i(j)}) - X_l(t_i^{\pi^i(j-1)})\rangle\right)\right\}^{\frac{1}{k}} da^*\, dT^* \\
&\le (2K_9)^{n\gamma} \int_{B^n} \prod_{p=1}^{k} \left[\int_{\mathbb{R}^{nd(k-1)}} \left(\prod_{j=1}^{n} \sum_{i=1,i\neq p}^{k} \big(|a_i^j| + |a_i^{j-1}|\big)\right)^{\gamma} \exp\left\{-\frac{k}{2(k-1)}\mathrm{Var}\left(\sum_{j=1}^{n}\sum_{i=1,i\neq p}^{k}\sum_{l=1}^{d} \langle a_{il}^j, X_l(t_i^{\pi^i(j)}) - X_l(t_i^{\pi^i(j-1)})\rangle\right)\right\} da\right]^{1/k} dT^*. 
\end{aligned} \tag{2.13}$$

Without loss of generality, we may assume that $\sigma_0(x) = \sigma_d(x)$. Then by elementary calculations and an argument similar to that of [21], the right side of (2.13) is no more than

$$\begin{aligned}
&(K_9')^{n\gamma} (n!)^{\gamma} \int_{B^n} \prod_{j=1}^{n} \prod_{i=1}^{k} \big\{\sigma_d(|t_i^{\pi^i(j)} - t_i^{\pi^i(j-1)}|)\big\}^{-\gamma/k} \left(\prod_{l=1}^{d} \sigma_l(|t_i^{\pi^i(j)} - t_i^{\pi^i(j-1)}|)\right)^{-\frac{k-1}{k}} dT^* \\
&\quad \le (K_6 K_9'^{\gamma})^n (n!)^{\gamma} (n!)^{\frac{\alpha_d \gamma + (k-1)\sum_{i=1}^d \alpha_i}{N}} \cdot \frac{r^{Nnk}}{\big(\prod_{i=1}^{d} \sigma_i(r)\big)^{n(k-1)} (\sigma_d(r))^{n\gamma}} \\
&\quad = (K_6 K_9'^{\gamma})^n (n!)^{\frac{(N+\alpha_d)\gamma + (k-1)\sum_{i=1}^d \alpha_i}{N}} \cdot \frac{r^{Nnk}}{\big(\prod_{i=1}^{d} \sigma_i(r)\big)^{n(k-1)} (\sigma_d(r))^{n\gamma}}.
\end{aligned}$$

So,

$$E[L(x+y, B) - L(x, B)]^n \le (2\pi)^{-nd(k-1)} 2^{n(1-\gamma)} |y|^{n\gamma} (K_6 K_9'^{\gamma})^n (n!)^{\frac{(N+\alpha_d)\gamma + (k-1)\sum_{i=1}^d \alpha_i}{N}} \cdot \frac{r^{Nnk}}{\big(\prod_{i=1}^{d} \sigma_i(r)\big)^{n(k-1)} (\sigma_d(r))^{n\gamma}} =: K_8^n |y|^{n\gamma} (n!)^{\frac{(N+\alpha_d)\gamma + (k-1)\sum_{i=1}^d \alpha_i}{N}} \cdot \frac{r^{Nnk}}{\big(\prod_{i=1}^{d} \sigma_i(r)\big)^{n(k-1)} (\sigma_d(r))^{n\gamma}},$$

and the proof of (2.6) is complete. □

Lemma 2.7. For any $B = B(S, r) \in \mathcal{R}$, there exist finite positive constants $b_1, b_2$, depending only on $N, k, \alpha_i$, $i = 1, \ldots, d$, such that for any $a > 0$,

$$P\left\{L(x + X(s), B) \ge \frac{K_7 a\, r^{Nk}}{\big(\prod_{i=1}^d \sigma_i(r)\big)^{k-1}}\right\} \le b_1 \exp\left\{-\frac{a^{1/\beta}}{2}\right\}, \tag{2.14}$$

$$P\left\{L(x + y + X(s), B) - L(x + X(s), B) \ge \frac{K_8 a\, r^{Nk} |y|^\gamma}{\big(\prod_{i=1}^d \sigma_i(r)\big)^{k-1} (\sigma_0(r))^\gamma}\right\} \le b_2 \exp\left\{-\frac{a^{1/\zeta}}{2}\right\}. \tag{2.15}$$

Proof. Let

$$\Lambda = \frac{L(x + X(s), B)}{K_7 r^{Nk} / \big(\prod_{i=1}^d \sigma_i(r)\big)^{k-1}};$$

then by Chebyshev's inequality and Lemma 2.6, we get that for any integer $n$, $E\Lambda^n \le (n!)^\beta$. Then, along the lines of the proof of Lemma 3.8 in [3] (see also Lemma 2.7.9 in [11]), we can draw the conclusion (2.14). The proof of (2.15) is similar. □

Lemma 2.8. Let $Y(T)$ be defined as in Theorem 1.1. Then for any $r > 0$ small enough and $S \in \mathbb{R}_+^{Nk}$,

$$P\left\{\sup_{T \in B(S,r)} |Y(T) - Y(S)| \ge u\sigma_0'(r)\right\} \le \exp\{-u^2/K_{10}\},$$

where $\sigma_0'(x) := \max\{\sigma_1(x), \ldots, \sigma_d(x)\}$.

Proof. By a consequence of [17], we have

$$\begin{aligned}
P\left\{\sup_{T \in B(S,r)} |Y(T) - Y(S)| \ge u\sigma_0'(r)\right\} &\le P\left\{\sup_{T \in B(S,r)} \sum_{i=1}^{k-1} \big(|X(t_{i+1}) - X(s_{i+1})| + |X(t_i) - X(s_i)|\big) \ge u\sigma_0'(r)\right\} \\
&\le P\left\{\sup_{T \in B(S,r)} \sum_{i=1}^{k}\sum_{j=1}^{d} |X_j(t_i) - X_j(s_i)| \ge u\sigma_0'(r)/2\right\} \\
&\le P\left\{\sup_{T \in B(S,r)} \sum_{i=1}^{k}\sum_{j=1}^{d} \frac{|X_j(t_i) - X_j(s_i)|}{\sigma_j(|t_i - s_i|)} \ge \frac{u}{2}\right\} \le \exp\{-u^2/K_{10}\}. \qquad \square
\end{aligned}$$

We now turn to proving Theorem 1.2.

Proof of Theorem 1.2. By Lemma 2.7 and a standard argument, it is easy to derive (1.6) and (1.7). Here we shall only give the proof of (1.6$'$) and (1.7$'$). For any fixed $S = (s_1, \ldots, s_k) \in \mathbb{R}_+^{Nk}$ with $s_i \neq s_j$ for $i \neq j$, let $B_m = B(S, 2^{-m})$, $m \in \mathbb{N}$. It follows from the condition $\sigma_1(x) = \sigma_2(x) = \cdots = \sigma_d(x)$ that $\sigma_0(x) = \sigma_0'(x) = \cdots = \sigma_1(x)$. So, by Lemma 2.8, we have

$$P\left\{\sup_{T \in B_m} |Y(T) - Y(S)| \ge \sigma_0(2^{-m})\sqrt{2K_{10}\log m}\right\} \le m^{-2}.$$

By the Borel–Cantelli lemma,

$$\limsup_{m \to \infty} \sup_{T \in B_m} \frac{|Y(T) - Y(S)|}{\sigma_0(2^{-m})\sqrt{2K_{10}\log m}} \le 1 \quad \text{a.s.} \tag{2.16}$$

Let $\theta_m = \sigma_0(2^{-m})(\log m)^{-2}$ and

$$G_m = \big\{x \in \mathbb{R}^{(k-1)d} : |x| \le \sigma_0(2^{-m})\sqrt{2K_{10}\log m},\ x = p\theta_m \text{ for some } p \in \mathbb{Z}^{(k-1)d}\big\}.$$

Then for $m$ large enough, the cardinality of $G_m$ is no more than $\sqrt{2K_{10}}(\log m)^{3kd}$. It follows from Lemma 2.7 that

$$\begin{aligned}
P\big\{L(x + Y(S), B_m) \ge K_7 a_1^\beta \phi_1(2^{-m}) \text{ for some } x \in G_m\big\} &\le \sqrt{2K_{10}}(\log m)^{3kd}\, b_1 \exp\left\{-\frac{1}{2} a_1 \log\log 2^m\right\} \\
&= b_1 \sqrt{2K_{10}} (\log m)^{3kd} (m\log 2)^{-a_1/2} =: A_m.
\end{aligned}$$

Take $a_1 \ge 3$; then $\sum_{m=1}^{\infty} A_m < \infty$. Therefore, by the Borel–Cantelli lemma,

$$\limsup_{m \to \infty} \sup_{x \in G_m} \frac{L(x + Y(S), B_m)}{K_7 a_1^\beta \phi_1(2^{-m})} \le 1 \quad \text{a.s.} \tag{2.17}$$

For any fixed integers $m, h \ge 1$ and any $x \in G_m$, define

$$F(m, h, x) = \left\{y \in \mathbb{R}^{(k-1)d} : y = x + \theta_m \sum_{j=1}^{h} \epsilon_j 2^{-j},\ \epsilon_j \in \{0, 1\}^{(k-1)d},\ 1 \le j \le h\right\}.$$

Then by Lemma 2.7, we have

$$\begin{aligned}
&P\left\{\bigcup_{h=1}^{\infty}\left[\,|L(y_1 + Y(S), B_m) - L(y_2 + Y(S), B_m)| \ge \frac{K_8 2^{-mNk} |y_1 - y_2|^\gamma (a_2 h \log\log 2^m)^\zeta}{\big(\prod_{i=1}^d \sigma_i(2^{-m})\big)^{k-1} (\sigma_0(2^{-m}))^\gamma} \right.\right. \\
&\qquad\qquad \left.\left. \text{for some } x \in G_m \text{ and } y_1, y_2 \in F(m, h, x) \text{ with } |y_1 - y_2| = \theta_m \epsilon 2^{-h},\ \epsilon \in \{0, 1\}^{(k-1)d}\right]\right\} \\
&\quad \le \sharp G_m \sum_{h=1}^{\infty} 2^{kdh}\, b_2 \exp\left\{-\frac{1}{2} a_2 h \log\log 2^m\right\} \le b_2 \sqrt{2K_{10}} (\log m)^{3kd} \sum_{h=1}^{\infty} 2^{kdh} (m\log 2)^{-a_2 h/2}.
\end{aligned}$$

Select $a_2$ so large that

$$\sum_{m=1}^{\infty} b_2 \sqrt{2K_{10}} (\log m)^{3kd} \sum_{h=1}^{\infty} 2^{kdh} (m\log 2)^{-a_2 h/2} < \infty;$$

then by the Borel–Cantelli lemma again we have that, except for finitely many $m$,

$$|L(y_1 + Y(S), B_m) - L(y_2 + Y(S), B_m)| \le \frac{K_8 2^{-mNk} |y_1 - y_2|^\gamma (a_2 h \log\log 2^m)^\zeta}{\big(\prod_{i=1}^d \sigma_i(2^{-m})\big)^{k-1} (\sigma_0(2^{-m}))^\gamma} \tag{2.18}$$

holds almost surely for all $x \in G_m$, $h \ge 1$ and any $y_1, y_2 \in F(m, h, x)$ with $|y_1 - y_2| = \theta_m \epsilon 2^{-h}$, $\epsilon \in \{0, 1\}^{(k-1)d}$.

For fixed $m$ and $y \in \mathbb{R}^{(k-1)d}$ with $|y| \le \sigma_0(2^{-m})\sqrt{2K_{10}\log m}$, we can represent $y$ in the form $y = \lim_{h\to\infty} y_h$ with $y_h = x + \theta_m \sum_{j=1}^{h} \epsilon_j 2^{-j}$, $y_0 = x \in G_m$ and $\epsilon_j \in \{0, 1\}^{(k-1)d}$. Then by (2.18) and the continuity of the local times $L(\cdot, B_m)$, it follows that

$$\begin{aligned}
|L(y + Y(S), B_m) - L(x + Y(S), B_m)| &\le \sum_{h=1}^{\infty} \frac{K_8 2^{-mNk} |y_h - y_{h-1}|^\gamma (a_2 h \log\log 2^m)^\zeta}{\big(\prod_{i=1}^d \sigma_i(2^{-m})\big)^{k-1} (\sigma_0(2^{-m}))^\gamma} \\
&= \sum_{h=1}^{\infty} \frac{K_8 |\sigma_0(2^{-m})(\log m)^{-2} \cdot 2^{-h}|^\gamma (a_2 h)^\zeta (\log\log 2^m)^{\zeta-\beta}}{(\sigma_0(2^{-m}))^\gamma}\, \phi_1(2^{-m}) \\
&\le K_{11} \phi_1(2^{-m}) \quad \text{a.s.}
\end{aligned} \tag{2.19}$$

Combining (2.17) with (2.19) yields that $L(y + Y(S), B_m) \le K_{12}\phi_1(2^{-m})$ a.s. for any $y \in \mathbb{R}^{(k-1)d}$ with $|y| < \sigma_0(2^{-m})\sqrt{2K_{10}\log m}$. Therefore

$$\sup_{x \in \mathbb{R}^{(k-1)d}} L(x, B_m) = \sup_{x \in Y(B_m)^*} L(x, B_m) \le K_{12}\phi_1(2^{-m}) \quad \text{a.s.} \tag{2.20}$$

where $Y(B_m)^*$ denotes the closure of $Y(B_m)$. For given small $r > 0$, we can find $m$ such that $2^{-m} < r \le 2^{-m+1}$. Then (2.20) implies that

$$\sup_{x \in \mathbb{R}^{(k-1)d}} L(x, B(S, r)) \le K_{12}\phi_1(r).$$

This completes the proof of (1.6$'$).


Now we turn to proving (1.7$'$). The proof is very similar to that of (1.6$'$). For any Borel set $E \in \mathcal{R}$, without loss of generality it is enough to consider the case $E = \bigotimes_{i=1}^{Nk} [1, 2] \cap R_\eta^{Nk}$ for some $\eta > 0$. Let $D$ be the family of $2^{mNk}$ dyadic cubes

$$Q_{lm} = \bigotimes_{i=1}^{Nk} \big[1 + (l_i - 1)/2^m,\ 1 + l_i/2^m\big] \bigcap R_\eta^{Nk}, \qquad l = (l_1, \ldots, l_{Nk}) \in \{1, \ldots, 2^m\}^{Nk} =: J_m.$$

Let $\theta_m' = \sigma_0(2^{-m}) m^{-2}$ and

$$G_m' = \big\{x \in \mathbb{R}^{(k-1)d} : |x| \le m\sigma_0(2^{-m}) 2^{mNk},\ x = p\theta_m',\ p \in \mathbb{Z}^{(k-1)d}\big\}.$$

Then by Lemma 2.7, we have

$$\begin{aligned}
P\big\{L(x, Q_{lm}) \ge K_7 a_1^\beta \phi_2(2^{-m}) \text{ for some } x \in G_m' \text{ and some } l \in J_m\big\} &\le 2^{mNk} \cdot \sharp G_m'\, b_1 \exp\left\{-\frac{1}{2} a_1 \log 2^m\right\} \\
&\le K_{13} 2^{m(Nk + (k-1)dNk - a_1/2)} \cdot m^{3(k-1)d} =: E_m'.
\end{aligned}$$

Select $a_1$ so large that $\sum_{m=1}^{\infty} E_m' < \infty$; then the Borel–Cantelli lemma implies that

$$\limsup_{m\to\infty} \sup_{l \in J_m} \sup_{x \in G_m'} \frac{L(x, Q_{lm})}{K_7 a_1^\beta \phi_2(2^{-m})} \le 1 \quad \text{a.s.} \tag{2.21}$$

For any fixed integers $m, h \ge 1$ and $x \in G_m'$, we set

$$F'(m, h, x) = \left\{y \in \mathbb{R}^{(k-1)d} : y = x + \theta_m' \sum_{j=1}^{h} \epsilon_j 2^{-j},\ \epsilon_j \in \{0, 1\}^{(k-1)d},\ 1 \le j \le h\right\}.$$

Then

$$\begin{aligned}
&P\left\{\bigcup_{l \in J_m}\bigcup_{h=1}^{\infty}\left[\,|L(y_1, Q_{lm}) - L(y_2, Q_{lm})| \ge \frac{K_8 2^{-mNk}|y_1 - y_2|^\gamma (a_2 h \log 2^m)^\zeta}{\big(\prod_{i=1}^d \sigma_i(2^{-m})\big)^{k-1}(\sigma_0(2^{-m}))^\gamma} \right.\right. \\
&\qquad\qquad \left.\left. \text{for some } x \in G_m' \text{ and } y_1, y_2 \in F'(m, h, x) \text{ with } |y_1 - y_2| = \theta_m'\epsilon 2^{-h},\ \epsilon \in \{0, 1\}^{(k-1)d}\right]\right\} \\
&\quad \le 2^{mNk} \cdot \sharp G_m' \sum_{h=1}^{\infty} 2^{kdh}\, b_2 \exp\{-a_2 h \log 2^m / 2\} \le K_{14} 2^{mNk}(m^3 2^{mNk})^{(k-1)d} \sum_{h=1}^{\infty} 2^{kdh - a_2 mh/2} =: F_m.
\end{aligned}$$

We can choose $a_2$ large enough that $\sum_{m=1}^{\infty} F_m < \infty$, which on combining with the Borel–Cantelli lemma yields that, except for finitely many $m$,

$$|L(y_1, Q_{lm}) - L(y_2, Q_{lm})| \le \frac{K_8 2^{-mNk}|y_1 - y_2|^\gamma (a_2 h \log 2^m)^\zeta}{\big(\prod_{i=1}^d \sigma_i(2^{-m})\big)^{k-1}(\sigma_0(2^{-m}))^\gamma} \tag{2.22}$$

holds for all $h \ge 1$, $l \in J_m$ and any $y_1, y_2 \in F'(m, h, x)$ with $|y_1 - y_2| = \theta_m'\epsilon 2^{-h}$, $\epsilon \in \{0, 1\}^{(k-1)d}$.

Note that $Y(T)$ is almost surely continuous on $E$. By Lemma 2.8, it is easy to show that, except for finitely many $m$,

$$\sup_{T \in E} |Y(T)| \le m\sigma_0(2^{-m}) 2^{mNk} \quad \text{a.s.} \tag{2.23}$$

Therefore, if $m$ is large enough and $y \in Y(E) := \{y \in \mathbb{R}^{(k-1)d} : y = Y(T),\ T \in E\}$, we can represent $y$ as $y = \lim_{h\to\infty} y_h$, where $y_h = x + \theta_m' \sum_{j=1}^{h} \epsilon_j 2^{-j}$, $y_0 = x \in G_m'$. By (2.22), we have

$$\begin{aligned}
|L(y, Q_{lm}) - L(x, Q_{lm})| &\le \sum_{h=1}^{\infty} \frac{K_8 2^{-mNk}|y_h - y_{h-1}|^\gamma (a_2 h \log 2^m)^\zeta}{\big(\prod_{i=1}^d \sigma_i(2^{-m})\big)^{k-1}(\sigma_0(2^{-m}))^\gamma} \\
&\le \sum_{h=1}^{\infty} \frac{K_8 2^{-mNk}\big[\sigma_0(2^{-m}) m^{-2} 2^{-h}\big]^\gamma (a_2 h \log 2^m)^\zeta}{\big(\prod_{i=1}^d \sigma_i(2^{-m})\big)^{k-1}(\sigma_0(2^{-m}))^\gamma} \\
&= \sum_{h=1}^{\infty} \frac{K_8 (\log 2^m)^{\zeta-\beta} (a_2 h)^\zeta}{m^{2\gamma} 2^{h\gamma}}\, \phi_2(2^{-m}) < K_{15}\phi_2(2^{-m}) \quad \text{a.s.}
\end{aligned} \tag{2.24}$$

Combining (2.21)–(2.24) with a monotonicity argument, we can draw the conclusion as desired. This finishes the proof of Theorem 1.2. □

3. The existence of the multiple points

In this section, we will show the existence of the $k$-multiple points.

Proof of Theorem 1.3. Given any Borel set $B \in \mathbb{R}_+^{Nk}$, we can choose a set $A \in \mathcal{R}$ such that $A \subseteq B$. From the proof of Theorem 1.1, it is easy to see that if $Nk > (k-1)\sum_{i=1}^d \alpha_i$, then

$$\int_A\int_A \big[\det \mathrm{cov}(Y(T) - Y(S))\big]^{-1/2}\, dS\, dT = (2\pi)^{-(k-1)d/2} \int_A\int_A\int_{\mathbb{R}^{(k-1)d}} E\exp\{i\langle u, Y(T) - Y(S)\rangle\}\, du\, dS\, dT < \infty.$$

Hence, it follows from Theorem 1.3 of [4] (see also [6]) that $Y(T)$ hits zero with positive probability for some $T \in A$. Therefore, $X(t)$ has multiple points with positive probability.

In the case $Nk < (k-1)\sum_{i=1}^d \alpha_i$, it follows directly from Theorem 3 of [6] that $X(t)$ has no multiple point with probability 1. □


4. The lower bound of the Hausdorff measure

With the self-intersection local time of $X$ and the upper density theorem, we will obtain the lower bound of the Hausdorff measure of the $k$-multiple times sets in this section.

Lemma 4.1. For any $x \in \mathbb{R}^{(k-1)d}$, let $L(x, \cdot)$ be the local time of $Y$ at $x$. Then for almost all $S \in Q$, $Q \in \mathcal{R}$,

$$\limsup_{r\to 0} \frac{L(x, B(S, r))}{\phi_1(r)} \le K_{16} \quad \text{a.s.}, \tag{4.1}$$

that is,

$$L\left(x, \left\{S \in Q : \limsup_{r\to 0} \frac{L(x, B(S, r))}{\phi_1(r)} > K_{16}\right\}\right) = 0 \quad \text{a.s.}$$

Proof. Let $f_m(S) = L(x, B(S, 2^{-m}))$ and $A_m = \{S \in Q : f_m(S)/\phi_1(2^{-m}) \ge K_{17}\}$, where $K_{17}$ will be determined later. Then

$$E L(x, A_m) \le \frac{E\int_Q |f_m(S)|^n L(x, dt)}{(K_{17}\phi_1(2^{-m}))^n}. \tag{4.2}$$

Let $T^i \in \mathbb{R}_+^{Nk}$, $T^1 = S$, $u^i \in \mathbb{R}^{d(k-1)}$, $i = 1, \ldots, n+1$, and $u^* = (u^1, \ldots, u^{n+1})$, $T^* = (T^1, \ldots, T^{n+1})$. By an argument similar to that of Proposition 4.1 in [21], we have that for any positive integer $n \ge 1$,

$$\begin{aligned}
E\int_Q |f_m(S)|^n L(x, dt) &= (2\pi)^{-(n+1)d(k-1)} \int_Q\int_{B^n(S, 2^{-m})}\int_{\mathbb{R}^{(n+1)d(k-1)}} \exp\left\{-i\sum_{j=1}^{n+1} \langle x, u^j\rangle\right\} E\exp\left\{i\sum_{j=1}^{n+1} \langle u^j, Y(T^j)\rangle\right\} du^*\, dT^* \\
&\le (2\pi)^{-(n+1)d(k-1)} \int_Q\int_{B^n(S, 2^{-m})}\int_{\mathbb{R}^{(n+1)d(k-1)}} \exp\left\{-\frac{1}{2}\mathrm{Var}\left(\sum_{j=1}^{n+1} \langle u^j, Y(T^j)\rangle\right)\right\} du^*\, dT^* \\
&\le K_{18}^n (n!)^\beta (2^{-m})^{Nnk} \left(\prod_{l=1}^{d} \sigma_l(2^{-m})\right)^{-n(k-1)},
\end{aligned}$$

which on combining with (4.2) implies that

$$E L(x, A_m) \le \frac{K_{18}^n (n!)^\beta (2^{-m})^{Nnk} \big(\prod_{l=1}^d \sigma_l(2^{-m})\big)^{-n(k-1)}}{\left[K_{17}\, 2^{-mNk}(\log\log 2^m)^\beta \big(\prod_{l=1}^d \sigma_l(2^{-m})\big)^{-(k-1)}\right]^n} \le \left(\frac{K_{18}}{K_{17}}\right)^n \left(\frac{n}{\log\log 2^m}\right)^{n\beta}.$$


Take $n = [\log m]$ and $K_{17}$ large enough such that $E L(x,A_m) \le m^{-2}$. This implies that
\[
\sum_{m=1}^{\infty} E L(x,A_m) < \infty.
\]
Therefore, for $L(x,\cdot)$, we have that for almost all $S \in Q$,
\[
\limsup_{m\to\infty} \frac{f_m(S)}{\phi_1(2^{-m})} \le K_{17} \quad a.s. \tag{4.3}
\]
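To see that this choice of $n$ and $K_{17}$ suffices (a sketch with the constants from the bound above), note that with $n = [\log m]$ the double logarithm matches $n$ in size, so the second factor stays bounded and the geometric factor supplies the summability:

```latex
% With n = [log m] we have log log 2^m = log(m log 2) ~ log m, so
% n / (log log 2^m) <= 1 for all large m, and hence
%   E L(x, A_m) <= (K_18 / K_17)^n (n / log log 2^m)^{n beta}
%              <= (K_18 / K_17)^{[log m]}.
% Choosing K_17 >= e^2 K_18 then gives
\left(\frac{K_{18}}{K_{17}}\right)^{[\log m]}
  \le \exp\{-2[\log m]\}
  \le K\, m^{-2},
% which is summable in m, as required.
```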

By (4.3) and a monotonicity argument, it follows that (4.1) is true. □

Proposition 4.1. Assume that $Nk > (k-1)\sum_{i=1}^{d}\alpha_i$. Then for any $Q \in R$,
\[
\phi_1\text{-}m(I \cap Q) \ge K_{16}^{-1} L(0,Q).
\]

Proof. Obviously, we can consider $Y^{-1}(0) \cap Q$ instead of $I \cap Q$. Let $A = \{S \in Q : \limsup_{r\to 0} L(0,B(S,r))/\phi_1(r) \ge K_{16}\}$. Then by Lemma 4.1,
\[
L(0,A) = 0 \quad a.s.
\]
Using the upper density theorem of Rogers and Taylor [13], we have
\[
\phi_1\text{-}m(I \cap Q) = \phi_1\text{-}m(Y^{-1}(0) \cap Q) \ge \phi_1\text{-}m\big(Y^{-1}(0) \cap (Q \cap A^c)\big) \ge K_{16}^{-1} L(0, Q \cap A^c) = K_{16}^{-1} L(0,Q) \quad a.s.
\]
This completes the proof of Proposition 4.1.
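The density theorem is used here in the following form (a paraphrase of [13], stated for the measure $L(0,\cdot)$ and the gauge $\phi_1$; the exact constant multiple is immaterial for the proof):

```latex
% Upper density theorem (Rogers--Taylor, as applied above): if mu is a finite
% Borel measure on R^m, phi a gauge function, and E a Borel set such that
%   limsup_{r -> 0} mu(B(S,r)) / phi(r) <= K   for every S in E,
% then
\phi\text{-}m(E) \ \ge\ c\, K^{-1}\, \mu(E)
% for an absolute constant c > 0. Taking mu = L(0, .) and
% E = Y^{-1}(0) \cap Q \cap A^c yields the displayed chain of inequalities.
```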



5. The upper bound of the Hausdorff measure

In this section, we are going to establish the upper bound for the Hausdorff measure of the $k$-multiple times sets. For this purpose, first we give some lemmas that we need. Let $\eta(s)$, $s \in E$, be a Gaussian process and, for any $s,t \in E$, define the distance between $s$ and $t$ by $d(s,t) = (E(\eta(s)-\eta(t))^2)^{1/2}$. We denote by $N(E,\varepsilon)$ the smallest number of open $d$-balls of radius $\varepsilon$ needed to cover $E$. The following two lemmas are basic facts for $\eta(s)$.

Lemma 5.1. For any $u > 0$, we have
\[
P\Big\{\sup_{s,t\in E} |\eta(s)-\eta(t)| \ge K_{19}\Big(u + \int_0^a \sqrt{\log N(E,\varepsilon)}\,\mathrm{d}\varepsilon\Big)\Big\} \le \exp(-u^2/a^2).
\]
This is just Lemma 2.1 of Talagrand [17].

Lemma 5.2. Let $\psi$ be a function with $N(E,\varepsilon) \le \psi(\varepsilon)$ for all $\varepsilon > 0$, and suppose there exists a constant $C$ such that $\psi(\varepsilon)/C \le \psi(\varepsilon/2) \le C\psi(\varepsilon)$ for all $\varepsilon > 0$. Then
\[
P\Big\{\sup_{s,t\in E} |\eta(s)-\eta(t)| \le u\Big\} \ge \exp(-\psi(u)/K_{20}).
\]
This has been proved in Talagrand [16].


Let $X_i(t)$, $i = 1,\dots,d$, be the components of $X(t)$ and $R_i(s,t) = E X_i(s)X_i(t)$. Then there exist non-negative symmetric measures $\Delta_i(\mathrm{d}\lambda)$, $1 \le i \le d$, on $R^N - \{0\}$ satisfying $\int_{R^N} \frac{|\lambda|^2}{1+|\lambda|^2}\,\Delta_i(\mathrm{d}\lambda) < \infty$ such that
\[
R_i(s,t) = \int_{R^N} (\mathrm{e}^{\mathrm{i}\langle s,\lambda\rangle}-1)(\mathrm{e}^{-\mathrm{i}\langle t,\lambda\rangle}-1)\,\Delta_i(\mathrm{d}\lambda).
\]
Furthermore, there is a centered complex-valued Gaussian random measure $W(\mathrm{d}\lambda) = (W_1(\mathrm{d}\lambda),\dots,W_d(\mathrm{d}\lambda))$ such that
\[
X_i(t) = \int_{R^N} (\mathrm{e}^{\mathrm{i}\langle t,\lambda\rangle}-1)\,W_i(\mathrm{d}\lambda)
\]
and, for any $A, B \subseteq R^N$,
\[
E W_i(A)\overline{W_i(B)} = \Delta_i(A \cap B), \qquad W_i(-A) = \overline{W_i(A)}.
\]
In order to obtain the small ball probability for $X$, we will use the spectral representation of $X$ to create independence. Given $0 < a < b < \infty$, we consider the process
\[
X_i(a,b,t) = \int_{a\le|\lambda|\le b} (\mathrm{e}^{\mathrm{i}\langle t,\lambda\rangle}-1)\,W_i(\mathrm{d}\lambda).
\]
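Since $W_i$ is an orthogonal random measure ($E W_i(A)\overline{W_i(B)} = \Delta_i(A \cap B)$), bands over disjoint annuli are uncorrelated and hence, being jointly Gaussian, independent; a one-line check:

```latex
% For a < b < a' < b', the annuli A = {a <= |lambda| <= b} and
% A' = {a' <= |lambda| <= b'} are disjoint, so for all s, t:
E\, X_i(a,b,t)\,\overline{X_i(a',b',s)}
  = \int_{A \cap A'} (\mathrm{e}^{\mathrm{i}\langle t,\lambda\rangle}-1)
      (\mathrm{e}^{-\mathrm{i}\langle s,\lambda\rangle}-1)\,\Delta_i(\mathrm{d}\lambda)
  = 0,
% and zero covariance between jointly Gaussian families yields independence.
```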

Clearly, for $a < b < a' < b'$, $X_i(a,b,t)$ is independent of $X_i(a',b',t)$. Next we show how well $X_i(a,b,t)$ approximates $X_i(t)$.

Lemma 5.3. Let $X_i$, $i = 1,\dots,d$, be the components of $X$. For some constant $C_0 > 0$ and any $C_0 < a < b$,
\[
E(X_i(a,b,t) - X_i(t))^2 \le K_{21}\big[|t|^2 a^2 \sigma_i^2(a^{-1}) + \sigma_i^2(b^{-1})\big].
\]
The proof is similar to that of Lemma 3.4 in [20], and here we omit the details.

Lemma 5.4. Let $\psi_i(s) = \inf\{t \ge 0 : \sigma_i(t) \ge s\}$ and $A = r^2 a^2 \sigma_i^2(a^{-1}) + \sigma_i^2(b^{-1})$. Suppose that there exists $C_0 > 0$ such that for any $C_0 < a < b$ and $0 < r < 1/C_0$ we have $\psi_i(\sqrt{A}) \le \frac{1}{2}r$. Then for any $u \ge K_{22}\big(A\log(K_7\, r/\psi_i(\sqrt{A}))\big)^{1/2}$, we have
\[
P\Big\{\sup_{|t|\le r} |X_i(t) - X_i(a,b,t)| \ge u\Big\} \le \exp(-K_{23}u^2/A). \tag{5.1}
\]

Proof. Let $\xi_i(t) = X_i(t) - X_i(a,b,t)$. Then $\xi_i(t)$ is a Gaussian process with $E\xi_i(t) = 0$ and $E\xi_i^2(t) \le K_{21}A$ for $|t| \le r$. By Lemma 5.1, it follows that (5.1) is true. □

Lemma 5.5. Given $r$ small enough and $0 < \varepsilon < 1$, we have that for any $0 < a < b$,
\[
P\Big\{\sup_{|t|\le r} |X_i(a,b,t)| \le \varepsilon\sigma_i(r)\Big\} \ge \exp(-K_{24}\varepsilon^{-N/\alpha_i}). \tag{5.2}
\]

Proof. Let $E = \{t : |t| \le r\}$ and define the distance on $E$ by
\[
d_i(s,t) = \big(E(X_i(a,b,s) - X_i(a,b,t))^2\big)^{1/2}.
\]


Then $d_i(s,t) \le \sigma_i(|s-t|)$ and $N(E,\varepsilon) \le K_{25}(r/\psi_i(\varepsilon))^N$, where $\psi_i(\varepsilon)$ is defined in Lemma 5.4. By condition (1.3) and Lemma 5.2,
\[
P\Big\{\sup_{|t|\le r} |X_i(a,b,t)| \le \varepsilon\sigma_i(r)\Big\}
\ge P\Big\{\sup_{|t|\le r} |X_i(a,b,t)| \le 2M_2^{-1}\sigma_i(\varepsilon^{1/\alpha_i} r)\Big\}
\ge \exp\Big\{-\frac{K_{25}}{K_{20}}\Big(\frac{r}{\psi_i(2M_2^{-1}\sigma_i(\varepsilon^{1/\alpha_i} r))}\Big)^N\Big\}
=: \exp\{-K_{24}\varepsilon^{-N/\alpha_i}\}.
\]
This completes the proof of Lemma 5.5. □
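The identification of the exponent in the last step follows from the $\alpha_i$-scaling of $\sigma_i$; a sketch, assuming condition (1.3) is the two-sided bound $M_1(t/s)^{\alpha_i} \le \sigma_i(t)/\sigma_i(s) \le M_2(t/s)^{\alpha_i}$ for $s \le t$ (this reading of (1.3) is an assumption, as the condition is stated earlier in the paper):

```latex
% Under this scaling, sigma_i(eps^{1/alpha_i} r) is of order eps * sigma_i(r),
% and the generalized inverse psi_i satisfies
\psi_i\big(2M_2^{-1}\sigma_i(\varepsilon^{1/\alpha_i} r)\big)
  \ \asymp\ \varepsilon^{1/\alpha_i}\, r,
% so that
\Big(\frac{r}{\psi_i(2M_2^{-1}\sigma_i(\varepsilon^{1/\alpha_i} r))}\Big)^{N}
  \ \asymp\ \varepsilon^{-N/\alpha_i},
% which is exactly the exponent absorbed into the constant K_24.
```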





Lemma 5.6. There exists a $\delta > 0$ such that for any $0 < r_0 < \delta$ and $U = (u_1,\dots,u_k)$,
\[
P\Big\{\exists\, r,\ r_0^2 \le r \le r_0,\ \sup_{l\le k}\ \sup_{|t-u_l|\le 2\sqrt{N}r} |X_i(t) - X_i(u_l)| \le K_{26}\sigma_i(r)(\log\log 1/r)^{-\alpha_i/N},\ i \le d\Big\}
\ge 1 - d\exp\{-(\log 1/r_0)^{1/2}\}. \tag{5.3}
\]

Proof. The proof is similar to that of [18]. First we show that for any given $a$, $b$, $r$,
\[
P\Big\{\sup_{1\le l\le k}\ \sup_{|t-u_l|\le 2\sqrt{N}r} |X_i(a,b,t) - X_i(a,b,u_l)| \le \varepsilon\sigma_i(r)\Big\} \ge \exp(-K_{24}\varepsilon^{-N/\alpha_i}). \tag{5.4}
\]
To see this, we simply apply Lemma 5.2 to the $R^k$-valued process $Z_i(t_1,\dots,t_k) = (X_i(a,b,t_1),\dots,X_i(a,b,t_k))$. Note that
\[
E|Z_i(t_1,\dots,t_k) - Z_i(t_1',\dots,t_k')|^2 \le \sum_{i\le k}\sigma_i^2(|t_i - t_i'|).
\]
Then, by an argument similar to that of Lemma 5.5, we obtain (5.4). The rest of the proof of this lemma is very similar to that of Proposition 3.1 in [20]; we only need to replace Xiao's Corollary 3.1 by our Lemma 5.4, so we do not give the details. □

Similar to the argument of (2.16), we have

Lemma 5.7. Let $X_i$, $i \le d$, be the components of $X(t)$. Then
\[
\limsup_{n\to\infty}\ \sup_{s,t:\,|s-t|\le 2^{-n}} \frac{|X_i(t) - X_i(s)|}{\sigma_i(2^{-n})\sqrt{n}} \le K_{27} \quad a.s.
\]

The following proposition concerns the upper bound of the Hausdorff measure.

Proposition 5.1. Suppose $X$ is as in Theorem 1.4. Then
\[
\phi_1\text{-}m(I \cap Q) < \infty \quad a.s.
\]


Without loss of generality, we assume
\[
Q = \bigotimes_{l=1}^{k} Q_l = \Big\{T : T \in \bigotimes_{l=1}^{k} B(s_l,\eta),\ S = (s_1,\dots,s_k) \in R_{3\eta}^{Nk}\Big\}.
\]
In order to obtain Proposition 5.1, the key is to construct a random covering of $Q$. To this end, we will split the process $X$ into two independent processes $X^{(1)}$ and $X^{(2)}$ such that $X^{(1)}$ is a very small perturbation of $X$, and then construct the random covering depending only on $X^{(1)}$. Let $S' = (s_1',\dots,s_k')$ be a point such that $|s_l - s_l'| = 2\eta$, $l \le k$, and define
\[
\Sigma_2 = \sigma\{X(s_l'),\ l \le k\}, \qquad X^{(2)}(t) = E(X(t)\,|\,\Sigma_2), \qquad X^{(1)}(t) = X(t) - X^{(2)}(t).
\]
It is easy to see that $X^{(1)}$ and $X^{(2)}$ are independent. Furthermore, we have

Lemma 5.8. If the variance function $\sigma^2(x)$ of $X$ satisfies the condition of Theorem 1.4, then for any $l \le k$, $i \le d$ and $u_1, u_2 \in Q_l$, we have
\[
|E(X_i(u_1) - X_i(u_2))X_i(s_l')| \le K_{28}|u_1 - u_2|. \tag{5.5}
\]

Proof. By the fact that $E X_i(t)X_i(s) = \frac{1}{2}(\sigma_i^2(|s|) + \sigma_i^2(|t|) - \sigma_i^2(|s-t|))$, the left side of (5.5) is equal to
\[
\frac{1}{2}\big|\sigma_i^2(|u_1|) - \sigma_i^2(|u_2|) + \sigma_i^2(|u_2 - s_l'|) - \sigma_i^2(|u_1 - s_l'|)\big|
\le \frac{1}{2}\Big(\big|\sigma_i^2(|u_1|) - \sigma_i^2(|u_2|)\big| + \big|\sigma_i^2(|u_2 - s_l'|) - \sigma_i^2(|u_1 - s_l'|)\big|\Big)
= \frac{1}{2}\Big(\Big|\int_{|u_1|}^{|u_2|}\varphi_i(x)\,\mathrm{d}x\Big| + \Big|\int_{|u_1 - s_l'|}^{|u_2 - s_l'|}\varphi_i(x)\,\mathrm{d}x\Big|\Big)
\le K_{28}|u_1 - u_2|,
\]
where the last inequality follows from the fact that for any $u \in Q_l$, $|s_l' - u| \ge |s_l' - s_l| - |s_l - u| \ge \eta > 0$. □

Lemma 5.9. For any $u_1, u_2 \in Q_l$ and $i \le d$,
\[
|X_i^{(2)}(u_1) - X_i^{(2)}(u_2)| \le K_{29}|u_1 - u_2|\max_{l\le k}|X_i(s_l')|.
\]

Proof. Since $X^{(2)}(t) = E(X(t)\,|\,\Sigma_2)$ and $X_1,\dots,X_d$ are independent, it follows that
\[
X_i^{(2)}(t) = \sum_{l,j\le k} a_{lij}\, E\big(X_i(t)X_i(s_l')\big)\,X_i(s_j'),
\]
where $a_{lij}$ depends only on $X_i(s_1'),\dots,X_i(s_k')$. Hence, by Lemma 5.8, we have the conclusion of Lemma 5.9. □

Proof of Proposition 5.1. We say a cube $C$ is a dyadic cube of order $n$ if there exists a non-negative integer vector $m_l = (m_{1l},\dots,m_{Nl})$ such that $C$ can be represented as
\[
C := C^n = \bigotimes_{l=1}^{k} C_l^n = \bigotimes_{l=1}^{k}\Big[\frac{m_l}{2^n},\frac{m_l+1}{2^n}\Big]
= \bigotimes_{l=1}^{k}\Big(\Big[\frac{m_{1l}}{2^n},\frac{m_{1l}+1}{2^n}\Big]\times\cdots\times\Big[\frac{m_{Nl}}{2^n},\frac{m_{Nl}+1}{2^n}\Big]\Big).
\]


Let
\[
H_n = \Big\{T \in Q : \exists\, r \in [2^{-2n},2^{-n}] \text{ such that } \sup_{l\le k}\ \sup_{|t_l-s_l|\le 2\sqrt{N}r} |X_i(t_l) - X_i(s_l)| \le K_{26}\sigma_i(r)(\log\log 1/r)^{-\alpha_i/N},\ i \le d\Big\};
\]
\[
H_n' = \Big\{T \in Q : \exists\, r \in [2^{-2n},2^{-n}] \text{ such that } \sup_{l\le k}\ \sup_{|t_l-s_l|\le 2\sqrt{N}r} |X_i^{(1)}(t_l) - X_i^{(1)}(s_l)| \le K_{26}'\sigma_i(r)(\log\log 1/r)^{-\alpha_i/N},\ i \le d\Big\};
\]
\[
\Omega_{n1} = \big\{\omega : \lambda(H_n) \ge \lambda(Q)(1 - \exp(-\sqrt{n}/4))\big\};
\]
\[
\Omega_{n2} = \big\{\omega : \lambda(H_n') \ge \lambda(Q)(1 - \exp(-\sqrt{n}/4))\big\};
\]
\[
\Omega_{n3} = \Big\{\omega : \max_{l\le k}|X_i(s_l')| \le 2^{\beta_i n},\ i \le d\Big\}, \quad \text{where } 0 < \beta_i < 1 - \alpha_i;
\]
\[
\Omega_{n4} = \Big\{\omega : \text{for each dyadic cube } C = \bigotimes_{l=1}^{k} C_l \text{ of order } n \text{ with } C \cap Q \ne \emptyset,\ \sup_{l\le k}\ \sup_{s_l,t_l\in C_l} |X_i(s_l) - X_i(t_l)| \le K_{27}\sigma_i(2^{-n})\sqrt{n},\ i \le d\Big\}.
\]

Then by Lemma 5.6 and Fubini's theorem, we have
\[
\sum_{n=1}^{\infty} P(\Omega_{n1}^c) < \infty. \tag{5.6}
\]
By some elementary properties of $X$, it is easy to see that
\[
\sum_{n=1}^{\infty} P(\Omega_{n3}^c) < \infty. \tag{5.7}
\]
By Lemma 5.9, it is easy to see that $\Omega_{n2} \supseteq \Omega_{n1} \cap \Omega_{n3}$, which implies that
\[
\sum_{n=1}^{\infty} P(\Omega_{n2}^c) < \infty. \tag{5.8}
\]
Furthermore, by Lemma 5.7, it follows that
\[
\sum_{n=1}^{\infty} P(\Omega_{n4}^c) < \infty. \tag{5.9}
\]
Let $\Omega_n = \Omega_{n2} \cap \Omega_{n3} \cap \Omega_{n4}$. Then by (5.7)–(5.9) and the Borel–Cantelli lemma, almost surely $\Omega_n$ occurs for all $n$ large enough. Next, we turn to constructing the random covering of $I \cap Q$. For any $U = (u_1,\dots,u_k)$, let $C^n(U) = \bigotimes_{l=1}^{k} C^n(u_l)$ be the unique dyadic cube of order $n$ containing $U = (u_1,\dots,u_k)$. If


$C^n(U)$ has the property that for all $i \le d$,
\[
\sup_{l\le k}\ \sup_{s,t\in C^n(u_l)\cap Q_l} |X_i^{(1)}(t) - X_i^{(1)}(s)| \le 8K_{26}'\sigma_i(2^{-n})(\log\log 2^n)^{-\alpha_i/N},
\]
then we say it is a good dyadic cube; otherwise, we say it is a bad dyadic cube. It follows from (5.8) that each point $U = (u_1,\dots,u_k) \in H_n'$ is contained in a good dyadic cube of order $p$ with $n \le p \le 2n$. Therefore we can find a disjoint family $H_1(n)$ of good dyadic cubes that covers the set $H_n'$. That is,
\[
H_n' \subseteq \bigcup_{p=n}^{2n} V_p =: V,
\]
where $V$ is a disjoint union of good dyadic cubes $C^p$ of order $p$. Moreover, the family $H_1(n)$ depends only on $X^{(1)}(t)$. Let $H_2(n)$ be a family of bad dyadic cubes of order $2n$ on $R_+^{Nk}$, none of which meet $H_1(n)$. Then $Q - V$ can be covered by the dyadic cubes in $H_2(n)$. If $\Omega_{n2}$ occurs, the number of such bad dyadic cubes is at most
\[
K_{29}\lambda(Q)2^{2Nkn}\exp(-\sqrt{n}/4).
\]
Let $H(n) = H_1(n) \cup H_2(n)$. For any $A \in H(n)$, we pick a distinguished point $U_A = (u_{A1},\dots,u_{Ak})$ of $A$ and define
\[
\Omega_A = \big\{\omega : |X_i(u_{Al}) - X_i(u_{A1})| \le r_i(A),\ 2 \le l \le k,\ i \le d\big\},
\]
where
\[
r_i(A) = \begin{cases} 32K_{26}'\sigma_i(|A|)(\log\log|A|^{-1})^{-\alpha_i/N} & \text{if } A \in H_1(n), \\ 8K_{27}\sigma_i(|A|)(\log|A|^{-1})^{1/2} & \text{if } A \in H_2(n), \end{cases}
\]
and $|A|$ denotes the side length of the set $A$. Furthermore, we set $F(n) = \{A \in H(n) : \Omega_A \text{ occurs}\}$.

Next we show that, for $n$ large enough, on $\Omega_n$, $F(n)$ covers $I \cap Q$. For any $T \in I \cap Q$, we have that (1) $X(t_1) = X(t_2) = \cdots = X(t_k)$, and (2) there exists a dyadic cube $A$ such that $T \in A$. Assume that $A \in H_1(n)$ with order $p$; then $n \le p \le 2n$ and, by Lemma 5.9, we have that for $n$ large enough, on $\Omega_n$, for $i \le d$, $l \le k$,
\[
|X_i(u_{Al}) - X_i(u_{A1})| \le |X_i^{(1)}(u_{Al}) - X_i^{(1)}(t_l)| + |X_i^{(1)}(u_{A1}) - X_i^{(1)}(t_1)| + |X_i^{(2)}(u_{Al}) - X_i^{(2)}(t_l)| + |X_i^{(2)}(u_{A1}) - X_i^{(2)}(t_1)|
\]
\[
\le 16K_{26}'\sigma_i(2^{-p})(\log\log 2^p)^{-\alpha_i/N} + 2K_{29}2^{-p+\beta_i n}
\le 32K_{26}'\sigma_i(2^{-p})(\log\log 2^p)^{-\alpha_i/N} = r_i(A).
\]
Similarly, if $A \in H_2(n)$ with order $p$, we also have $|X_i(u_{Al}) - X_i(u_{A1})| \le r_i(A)$. So, for $n$ large enough, on $\Omega_n$, the event $\Omega_A$ happens, which implies that $F(n)$ covers $I \cap Q$.

Let $\Sigma_1 = \sigma(X^{(1)}(t) : t \in R_+^{Nk})$. Noting that $X^{(1)}$ and $X^{(2)}$ are independent and
\[
P\big\{|X_i^{(2)}(u_{Al}) - X_i^{(2)}(u_{A1}) - x| < r,\ 2 \le l \le k\big\} \le K_{30}r^{k-1} \tag{5.10}
\]


(this inequality can be obtained by an argument similar to one in [18]), we have
\[
P(\Omega_A\,|\,\Sigma_1) \le \prod_{i=1}^{d} K_{30}\,r_i^{k-1}(A). \tag{5.11}
\]
By (5.11) and Fatou's lemma, it follows that
\[
E(\phi_1\text{-}m(I \cap Q)) \le \liminf_{n\to\infty} E\Big(I_{\Omega_n}\sum_{A\in F(n)}\phi_1(|A|)\Big)
\le \liminf_{n\to\infty} E\Big(I_{\Omega_n}\sum_{A\in H(n)} E(I_{\Omega_A}\,|\,\Sigma_1)\,\phi_1(|A|)\Big)
\le \liminf_{n\to\infty} K_{31}\,E\Big(I_{\Omega_n}\sum_{A\in H(n)} |A|^{Nk}\Big)
\le K_{31}\lambda(Q).
\]
This yields the conclusion of Proposition 5.1. □

6. The proofs of Theorem 1.4 and Corollary 1.1

Proof of Theorem 1.4. Combining Propositions 4.1 and 5.1, we have the conclusion of Theorem 1.4. □

Proof of Corollary 1.1. Let $\mathrm{cov}(T)$ be the covariance matrix of $Y(T)$. Then by (2.7) we have that for any $B \in R$,
\[
E L(0,B) = (2\pi)^{-d(k-1)} \int_B \int_{R^{d(k-1)}} E\exp\{\mathrm{i}\langle u, Y(T)\rangle\}\,\mathrm{d}u\,\mathrm{d}T
= (2\pi)^{-d(k-1)} \int_B \int_{R^{d(k-1)}} \exp\Big\{-\frac{1}{2}\mathrm{Var}(\langle u, Y(T)\rangle)\Big\}\,\mathrm{d}u\,\mathrm{d}T
= (2\pi)^{-d(k-1)/2} \int_B \big[\det(\mathrm{cov}(T))\big]^{-1/2}\,\mathrm{d}T > 0,
\]
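The last equality in the chain above is the standard Gaussian integral; explicitly, with $\Sigma = \mathrm{cov}(T)$ and $m = d(k-1)$:

```latex
% For a positive definite m x m matrix Sigma,
\int_{R^{m}} \exp\Big\{-\tfrac{1}{2}\, u^{\top}\Sigma\, u\Big\}\,\mathrm{d}u
  = (2\pi)^{m/2}\,(\det\Sigma)^{-1/2},
% so the prefactor (2 pi)^{-m} becomes (2 pi)^{-m/2}, and
% [det(cov(T))]^{-1/2} appears under the remaining dT-integral.
```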

which implies that $P\{L(0,B) > 0\} > 0$; combining this with Theorem 1.4 yields the conclusion of Corollary 1.1. □

References

[1] R.J. Adler, The Geometry of Random Fields, Wiley, New York, 1981.
[2] S.M. Berman, Local time and sample function properties of stationary Gaussian processes, Trans. Amer. Math. Soc. 137 (1969) 277–299.
[3] M. Csörgő, Z.Y. Lin, Q.M. Shao, On moduli of continuity for local times of Gaussian processes, Stochastic Process. Appl. 58 (1995) 1–21.
[4] J. Cuzick, Some local properties of Gaussian vector fields, Ann. Probab. 6 (1978) 984–994.
[5] J. Cuzick, The Hausdorff dimension of the level sets of a Gaussian vector field, Z. Wahrsch. verw. Gebiete 51 (1980) 287–290.
[6] J. Cuzick, Multiple points of Gaussian vector fields, Z. Wahrsch. verw. Gebiete 61 (1982) 431–436.
[7] D. Geman, J. Horowitz, Occupation densities, Ann. Probab. 8 (1980) 1–67.


[8] D. Geman, J. Horowitz, J. Rosen, A local time analysis of intersections of Brownian paths in the plane, Ann. Probab. 12 (1984) 86–107.
[9] W.J. Hendricks, Multiple points for a process in $R^2$ with stable components, Z. Wahrsch. verw. Gebiete 28 (1974) 113–128.
[10] N. Kôno, Double points of a Gaussian sample path, Z. Wahrsch. verw. Gebiete 45 (1978) 175–180.
[11] Z.Y. Lin, C.R. Lu, L.X. Zhang, Path Properties of Gaussian Processes, Science Press, 2001.
[12] L.D. Pitt, Local times for Gaussian vector fields, Indiana Univ. Math. J. 27 (1978) 309–330.
[13] C.A. Rogers, S.J. Taylor, Functions continuous and singular with respect to a Hausdorff measure, Mathematika 8 (1961) 1–31.
[14] J. Rosen, Self-intersections of random fields, Ann. Probab. 12 (1984) 108–119.
[15] E. Seneta, Regularly Varying Functions, in: Lecture Notes in Math., vol. 508, Springer-Verlag, New York, 1976.
[16] M. Talagrand, New Gaussian estimates for enlarged balls, Geom. Funct. Anal. 3 (1993) 502–520.
[17] M. Talagrand, Hausdorff measure of trajectories of multi-parameter fractional Brownian motion, Ann. Probab. 23 (1995) 767–775.
[18] M. Talagrand, Multiple points of trajectories of multi-parameter fractional Brownian motion, Probab. Theory Related Fields 112 (1998) 545–563.
[19] S.J. Taylor, Multiple points for the sample paths of the symmetric stable process, Z. Wahrsch. verw. Gebiete 5 (1966) 247–264.
[20] Y.M. Xiao, Hausdorff measure of the sample paths of Gaussian random fields, Osaka J. Math. 33 (1996) 895–913.
[21] Y.M. Xiao, Hölder conditions for local times and the Hausdorff measure of the level sets of Gaussian random fields, Probab. Theory Related Fields 109 (1997) 129–157.