Statistics & Probability Letters 41 (1999) 169-178

On the large increments of fractional Brownian motion

Charles El-Nouty

Université Paris VI / L.S.T.A.¹, 185 Boulevard Vincent Auriol, 75013 Paris, France

¹ Tour 45-55, 4 place Jussieu, 75252 Paris Cedex 05, France.

Received May 1997
Abstract

Let $\{B_H(t),\ t \ge 0\}$ be a fractional Brownian motion with index $0 < H < 1$ and define the statistic

$$V_T = \sup_{0 \le s \le T - a_T} \beta_T\,\big|B_H(s + a_T) - B_H(s)\big|,$$

where $\beta_T$ and $a_T$ are suitably chosen functions of $T \ge 0$. We establish some laws of the iterated logarithm for $V_T$.

AMS classification: 60F15; 60G15

Keywords: Fractional Brownian motion; Law of the iterated logarithm; Slepian's lemma
1. Introduction and main result

Let $\{B_H(t),\ t \ge 0\}$ be a fractional Brownian motion with index $0 < H < 1$, i.e. a centered Gaussian process with stationary increments satisfying $B_H(0) = 0$ with probability 1 and $\mathbb{E}\big(B_H(t)\big)^2 = t^{2H}$, $t \ge 0$. We will also make use of the fact that $B_H$ is self-similar with index $H$, i.e. $B_H(at)$ and $a^H B_H(t)$ have the same distribution for all $a > 0$. We refer to Beran (1994) and Samorodnitsky and Taqqu (1994) for further details on $B_H$. When $H = \frac12$, $B_{1/2}$ is a standard Wiener process.

Let $a_T$ be a nondecreasing function of $T \ge 0$ such that

$$0 < a_T \le T, \tag{1.1}$$

$$\frac{a_T}{T}\ \text{is a nonincreasing function of}\ T \ge 0, \tag{1.2}$$

$$\lim_{T\to\infty} \frac{\ln(T/a_T)}{\ln_2 T} = r \in [0, +\infty]. \tag{1.3}$$
Here, we set $\ln u = \log(u \vee e)$ and $\ln_2 u = \ln(\ln u)$ for $u \ge 0$. Define the statistic $V_T$ by

$$V_T = \sup_{0 \le s \le T - a_T} \beta_T\,\big|B_H(s + a_T) - B_H(s)\big|,$$

where

$$\beta_T^{-1} = a_T^H\,(2 L_T)^{1/2} = a_T^H \left( 2\left( \ln \frac{T}{a_T} + \ln_2 T \right) \right)^{1/2}.$$
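For intuition, $V_T$ can be approximated by simulation. The following minimal Python sketch is purely illustrative: the grid size, the horizon $T$, the value of $H$ and the choice $a_T = T/(\ln T)^2$ (which gives $r = 2$ in (1.3)) are assumptions made only for the example, and `fbm_on_grid` and `V_statistic` are hypothetical helper names. It builds $B_H$ on a grid through a Cholesky factorization of the covariance $\frac12(s^{2H} + t^{2H} - |t-s|^{2H})$ and evaluates $V_T$ on that grid.

    import numpy as np

    def fbm_on_grid(n, T, H, rng):
        """Sample B_H at t_i = i*T/n, i = 1..n, via a Cholesky factor of the fBm covariance."""
        t = np.arange(1, n + 1) * (T / n)
        s, u = np.meshgrid(t, t)
        cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
        L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))   # tiny jitter for numerical stability
        path = L @ rng.standard_normal(n)
        return np.concatenate(([0.0], path)), np.concatenate(([0.0], t))

    def V_statistic(path, t, T, a_T, H):
        """Grid approximation of V_T = sup_{0<=s<=T-a_T} beta_T |B_H(s+a_T) - B_H(s)|."""
        L_T = np.log(T / a_T) + np.log(np.log(T))          # L_T = ln(T/a_T) + ln_2 T
        beta_T = 1.0 / (a_T ** H * np.sqrt(2.0 * L_T))
        lag = max(1, int(round(a_T / (t[1] - t[0]))))
        return beta_T * np.abs(path[lag:] - path[:-lag]).max()

    rng = np.random.default_rng(0)
    H, T = 0.3, 1000.0
    a_T = T / np.log(T) ** 2                               # illustrative choice giving r = 2 in (1.3)
    path, t = fbm_on_grid(1000, T, H, rng)
    print(V_statistic(path, t, T, a_T, H))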
When $H = \frac12$, the behavior of $V_T$ was studied by Csörgő and Révész (1979, 1981) and Book and Shore (1978). Their results are stated in the following theorem.

Theorem A. Let $a_T$ be a nondecreasing function of $T \ge 0$ satisfying (1.1)-(1.3). When $H = \frac12$, we have with probability 1,

$$\limsup_{T\to\infty} V_T = 1, \tag{1.4}$$

$$\liminf_{T\to\infty} V_T = \sqrt{\frac{r}{r+1}}, \tag{1.5}$$

where we set $\sqrt{r/(r+1)} = 1$ if $r = +\infty$.
When $H \ne \frac12$, general results for $V_T$ were obtained by Ortega (1984). He established the following theorem.

Theorem B. Let $a_T$ be a nondecreasing function of $T \ge 0$ satisfying (1.1)-(1.3). We have with probability 1,

$$\limsup_{T\to\infty} V_T = 1, \tag{1.6}$$

and, if $r = +\infty$, then

$$\lim_{T\to\infty} V_T = 1. \tag{1.7}$$
The purpose of the present article is to investigate the case when $r < \infty$, i.e. the behavior of the large increments of $B_H$. Our main result is stated in the following theorem.

Theorem 1. Let $a_T$ be a nondecreasing function of $T \ge 0$ satisfying (1.1)-(1.3), and set $\lim_{T\to\infty} a_T/T = \tau$, where $\tau \in [0,1]$.

I. If $0 < H < 1$ and $\tau > 0$, then we have with probability 1,

$$\liminf_{T\to\infty} V_T = 0. \tag{1.8}$$

II. If $0 < H \le \frac12$ and $\tau = 0$, then we have with probability 1,

$$\liminf_{T\to\infty} V_T = \sqrt{\frac{r}{r+1}}. \tag{1.9}$$

III. If $\frac12 < H < 1$, $\tau = 0$, and $r > 4^H/(4 - 4^H)$, then we have with probability 1,

$$\liminf_{T\to\infty} V_T = \sqrt{\frac{r}{r+1}}. \tag{1.10}$$
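To make the constraint in Part III concrete, note that $r > 4^H/(4-4^H)$ is equivalent to $r/(r+1) > 4^{H-1}$. As a worked illustration (the value $H = \frac34$ and the choice of $a_T$ below are examples only),

$$\frac{4^{3/4}}{4 - 4^{3/4}} = \frac{2\sqrt{2}}{4 - 2\sqrt{2}} = \sqrt{2} + 1 \approx 2.414,$$

so, for $H = \frac34$, Part III applies for instance to $a_T = T/(\ln T)^c$ with any $c > \sqrt{2} + 1$, a choice for which $r = c$ in (1.3).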
Observe that Theorem 1 is an extension of (1.5) and (1.7). We will roughly follow the arguments used by Csörgő and Révész (1979, 1981), Book and Shore (1978), and Ortega (1984). However, the deep dissimilarities between the classical case $H = \frac12$ and the case $H \ne \frac12$ imply that general principles have to be used rather
than specific features (see Goodman and Kuelbs, 1991; Kuelbs et al., 1995; Kuelbs et al., 1994; Monrad and Rootzén, 1995; Ortega, 1989; Talagrand, 1996). Moreover, there is also a huge difference between the case $0 < H < \frac12$ and the case $\frac12 < H < 1$, i.e. between short-range dependence and long-range dependence (see Beran, 1994, p. 52; Samorodnitsky and Taqqu, 1994, p. 123). In Section 2, we give the proof of (1.8); (1.9) is proved in Section 3. The proof of the theorem is completed in Section 4. Denote by $[x]$ the integer part of the real number $x$.

2. Proof of Part I
Since $a_T \ge \tau T$, we obviously have $r = 0$. In this case, $V_T$ has the same distribution as $\sup_{0\le s\le T-a_T}\beta_T\,|Y(s+a_T) - Y(s)|$, where $Y(t) = a_T^H B_H(t/a_T)$, $t \ge 0$. But we have

$$Y_T := \sup_{0\le s\le T-a_T}\beta_T\,\big|Y(s+a_T) - Y(s)\big| \;=\; \sup_{0\le u\le (T/a_T)-1}\frac{\big|B_H(u+1) - B_H(u)\big|}{(2L_T)^{1/2}} \;\le\; \sup_{0\le u\le \tau^{-1}-1}\frac{\big|B_H(u+1) - B_H(u)\big|}{(2L_T)^{1/2}}.$$

When $T\to\infty$, $Y_T\to 0$ with probability 1, and consequently $V_T\to 0$ in probability as $T\to\infty$. Therefore, there exists a nondecreasing subsequence $\{T_k,\ k\ge 1\}$ such that $V_{T_k}\to 0$ with probability 1, as $k\to\infty$. This concludes the proof of (1.8).
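Numerically, the effect behind Part I is easy to observe: after the self-similarity rescaling used above, the supremum has a law that does not depend on $T$, while the normalization $(2L_T)^{1/2}$ tends to infinity, so $V_T$ drifts to 0. A brief illustrative sketch, reusing the hypothetical helpers `fbm_on_grid` and `V_statistic` from the sketch in Section 1 (the values of $H$, $\tau$, the grid size and the seeds are arbitrary):

    import numpy as np
    # assumes fbm_on_grid and V_statistic from the sketch in Section 1 are in scope

    H, tau = 0.3, 0.25
    for T in (1e2, 1e3, 1e4):
        vals = [V_statistic(*fbm_on_grid(500, T, H, np.random.default_rng(seed)), T, tau * T, H)
                for seed in range(10)]
        print(T, round(float(np.mean(vals)), 3))   # average V_T decreases as T grows,
                                                   # reflecting the (2 L_T)^{-1/2} normalization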
3. Proof of Part II

Consider the sequence $\{T_n,\ n\ge 1\}$ defined by $T_n = (1+\delta)^n$, where $\delta > 0$. We set

$$V(k,n) = \frac{B_H\big((k+1)a_{T_n}\big) - B_H\big(k a_{T_n}\big)}{a_{T_n}^H},$$

where $0 \le k \le \xi(T_n)$, with $\xi(T) = [T/a_T] - 1$, and, for $0 \le j < k$,

$$\rho(j,k,n) = \mathbb{E}\big(V(k,n)V(j,n)\big) = \tfrac12\big((k+1-j)^{2H} + (k-1-j)^{2H} - 2(k-j)^{2H}\big).$$

For $0 < \varepsilon < 1$, let $A_k = \{|V(k,n)| \le \lambda_n\}$, where $\lambda_n = (1-\varepsilon)\sqrt{\frac{r}{r+1}}\,(2L_{T_n})^{1/2}$. To prove Part II, we will establish the following three steps.

Step 1: We first show that, with probability 1,

$$\liminf_{n\to\infty}\ \max_{0\le k\le \xi(T_n)} \frac{|V(k,n)|}{(2L_{T_n})^{1/2}} \ \ge\ \sqrt{\frac{r}{r+1}}. \tag{3.1}$$

If $r = 0$, then there is nothing to prove. So, we may assume $r > 0$.
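As a quick sanity check of the covariance sign that drives the use of Slepian's lemma below (and of the convexity case treated in Section 4), note that $\rho(j,k,n)$ depends only on the lag $k-j$ and on $H$. The following illustrative Python sketch (the two sample values of $H$ and the range of lags are arbitrary) confirms that it is nonpositive for $H \le \frac12$ and positive for $H > \frac12$:

    import numpy as np

    def rho(m, H):
        """Correlation of unit-lag fBm increments m apart: 0.5*((m+1)^{2H} + (m-1)^{2H} - 2 m^{2H})."""
        return 0.5 * ((m + 1.0) ** (2 * H) + (m - 1.0) ** (2 * H) - 2.0 * m ** (2 * H))

    lags = np.arange(1, 11)
    for H in (0.3, 0.75):          # illustrative values on either side of 1/2
        print(H, np.round(rho(lags, H), 4))
    # For H = 0.3 every value is <= 0 (x -> x^{2H} concave); for H = 0.75 every value is > 0.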
Now, for $\delta > 0$ sufficiently small (depending on $\varepsilon$), and $n$ large enough, we have

$$(r-\delta)\ln_2 T_n \ \le\ \ln\frac{T_n}{a_{T_n}} \ \le\ (r+\delta)\ln_2 T_n. \tag{3.2}$$

Since the function $g : x \to x^{2H}$, $x \ge 1$, is concave for $H \le \frac12$, we have $\rho(j,k,n) \le 0$, which will enable us to apply Slepian's lemma (1962) in a convenient way. We refer to Tong (1980), and Ledoux and Talagrand (1990), for further details on this field. To prove (3.1), it suffices (by the Borel-Cantelli lemma, letting $\varepsilon \to 0$) to show that

$$\sum_{n=1}^{\infty} P\left(\bigcap_{k=0}^{\xi(T_n)} A_k\right) \ <\ +\infty. \tag{3.3}$$

To obtain an appropriate upper bound for $P\big(\bigcap_{k=0}^{\xi(T_n)} A_k\big)$, we apply Slepian's lemma (1962). We get

$$P\left(\bigcap_{k=0}^{\xi(T_n)} A_k\right) \ \le\ \prod_{k=0}^{\xi(T_n)} P(A_k).$$

But stationarity implies that $P(A_k) = P(A_0)$ for all $k \in \{0,1,\ldots,\xi(T_n)\}$, and therefore

$$P\left(\bigcap_{k=0}^{\xi(T_n)} A_k\right) \ \le\ \big(P(A_0)\big)^{\xi(T_n)+1} \ \le\ \big(P(A_0)\big)^{\xi(T_n)}.$$

Moreover, we have $P(A_0) = P(|V(0,n)| \le \lambda_n) \le P(V(0,n) \le \lambda_n)$. Then, we have for $n$ large enough

$$P\left(\bigcap_{k=0}^{\xi(T_n)} A_k\right) \ \le\ \left(1 - \frac{1}{(2\pi)^{1/2}}\big(\lambda_n^{-1} - \lambda_n^{-3}\big)\exp\left(-\frac{\lambda_n^2}{2}\right)\right)^{\xi(T_n)} \ \le\ \exp\left(-\frac{\xi(T_n)}{2(2\pi)^{1/2}\lambda_n}\left(\frac{a_{T_n}}{T_n}\right)^{(1-\varepsilon)^2\gamma^2}(\ln T_n)^{-(1-\varepsilon)^2\gamma^2}\right),$$

where $\gamma^2 = r/(r+1)$. For $n$ large enough, we can find $\theta < 1$, arbitrarily close to 1, such that $\xi(T_n) \ge \theta\, T_n/a_{T_n}$, and therefore, by (3.2), $\xi(T_n) \ge \theta(\ln T_n)^{r-\delta} = \theta\big(n\ln(1+\delta)\big)^{r-\delta}$. Thus, we have

$$P\left(\bigcap_{k=0}^{\xi(T_n)} A_k\right) \ \to\ 0,$$

as long as we choose

$$0 \ <\ \delta \ <\ r - \frac{(1-\varepsilon)^2\gamma^2}{1-(1-\varepsilon)^2\gamma^2},$$

which is possible since

$$\frac{(1-\varepsilon)^2\gamma^2}{1-(1-\varepsilon)^2\gamma^2} \ =\ \frac{(1-\varepsilon)^2 r}{(1+r)-(1-\varepsilon)^2 r} \ <\ \frac{r}{1} \ =\ r.$$
We also have

$$L_{T_n} \ \le\ (1+r+\delta)\ln_2 T_n \ =\ (1+r+\delta)\big(\ln n + \ln_2(1+\delta)\big),$$

so that, for all sufficiently large $n$,

$$\frac{\xi(T_n)}{2(2\pi)^{1/2}\lambda_n}\left(\frac{a_{T_n}}{T_n}\right)^{(1-\varepsilon)^2\gamma^2}(\ln T_n)^{-(1-\varepsilon)^2\gamma^2} \ \ge\ K n^{\eta},$$

where $2\eta = (r-\delta)\big(1-(1-\varepsilon)^2\gamma^2\big) - (1-\varepsilon)^2\gamma^2 > 0$. Thus, we have for all sufficiently large $n$,

$$P\left(\bigcap_{k=0}^{\xi(T_n)} A_k\right) \ \le\ \prod_{k=0}^{\xi(T_n)} P(A_k) \ \le\ \exp(-K n^{\eta}), \tag{3.4}$$

where $K$ is a constant. Hence, (3.3) is proved, and therefore (3.1).

Step 2: Its purpose is to establish that, with probability 1,

$$\liminf_{n\to\infty}\ \max_{0\le k\le \xi(T_n)} \frac{|V(k,n)|}{(2L_{T_n})^{1/2}} \ \le\ \sqrt{\frac{r}{r+1}}. \tag{3.5}$$
Since $B_H(u)$ and $T^H B_H(u/T)$ have the same distribution,

$$\max_{0\le k\le\xi(T_n)}\frac{|V(k,n)|}{(2L_{T_n})^{1/2}} \ =\ \max_{0\le k\le\xi(T_n)}\beta_{T_n}\,\big|B_H(k a_{T_n}+a_{T_n}) - B_H(k a_{T_n})\big|$$

has the same distribution as

$$M_n \ =\ \max_{t = k a_{T_n}/T_n,\ 0\le k\le\xi(T_n)}\ \frac{T_n^H\,\big|B_H(t + a_{T_n}/T_n) - B_H(t)\big|}{(2L_{T_n})^{1/2}\,a_{T_n}^H}.$$

Moreover, since $\xi(T) = [T/a_T] - 1$, we have $\xi(T)a_T/T \le 1 - a_T/T$. Thus, we obtain

$$M_n \ \le\ \sup_{0\le t\le 1 - a_{T_n}/T_n}\ \frac{\big|B_H(t + a_{T_n}/T_n) - B_H(t)\big|}{(2L_{T_n})^{1/2}\,a_{T_n}^H/T_n^H}.$$

By applying the upper bound part of the modulus of continuity of fractional Brownian motion (see El-Nouty, 1994, pp. 36-42, for a specific proof), we have with probability 1,

$$\limsup_{n\to\infty} M_n \ \le\ \limsup_{n\to\infty}\ \sup_{0\le t\le 1 - a_{T_n}/T_n}\ \frac{\big|B_H(t + a_{T_n}/T_n) - B_H(t)\big|}{(a_{T_n}/T_n)^H\big(2\ln(T_n/a_{T_n})\big)^{1/2}}\ \left(\frac{\ln(T_n/a_{T_n})}{L_{T_n}}\right)^{1/2} \ \le\ \sqrt{\frac{r}{r+1}}.$$

So, we have, as $n\to\infty$, for any $\varepsilon' > 0$,

$$P\left(M_n \le \sqrt{\frac{r}{r+1}} + \varepsilon'\right) \ \to\ 1,$$

and consequently, as $n\to\infty$,

$$P\left(\max_{0\le k\le\xi(T_n)}\frac{|V(k,n)|}{(2L_{T_n})^{1/2}} \le \sqrt{\frac{r}{r+1}} + \varepsilon'\right) \ \to\ 1.$$

Since $\varepsilon' > 0$ is arbitrary, this last result completes the proof of Step 2.
Combining (3.5) with (3.1), we have with probability 1,

$$\liminf_{n\to\infty}\ \max_{0\le k\le\xi(T_n)}\frac{|V(k,n)|}{(2L_{T_n})^{1/2}} \ =\ \sqrt{\frac{r}{r+1}}. \tag{3.6}$$
Step 3: To complete the proof of Part II, we have to prove that, with probability 1,

$$\liminf_{T\to\infty} V_T \ \ge\ \sqrt{\frac{r}{r+1}}. \tag{3.7}$$

Let $T \ge 1$ be such that $T_n = (1+\delta)^n \le T < T_{n+1}$. Since $a_T$ is nondecreasing and $a_T/T$ is nonincreasing, we have

$$0 \ \le\ a_T - a_{T_n} \ \le\ a_{T_n}\left(\frac{T}{T_n} - 1\right) \ \le\ a_{T_n}\left(\frac{T_{n+1}}{T_n} - 1\right) \ =\ \delta\, a_{T_n} \ \le\ \delta\, a_T.$$
Thus, we have

$$V_T \ =\ \sup_{0\le s\le T-a_T}\beta_T\,\big|B_H(s+a_{T_n}) - B_H(s) + B_H(s+a_T) - B_H(s+a_{T_n})\big|$$
$$\ge\ \sup_{0\le s\le T-a_T}\beta_T\,\big|B_H(s+a_{T_n}) - B_H(s)\big| \ -\ \sup_{0\le s\le T-a_T}\beta_T\,\big|B_H(s+a_T) - B_H(s+a_{T_n})\big|$$
$$\ge\ \max_{0\le k\le\xi(T_n)}|V(k,n)|\,a_{T_n}^H\,\beta_T \ -\ \sup_{0\le t\le T-\delta a_T}\ \sup_{0\le s\le \delta a_T}\big|B_H(t+s) - B_H(t)\big|\,\beta_T,$$

and consequently,

$$\liminf_{T\to\infty} V_T \ \ge\ \liminf_{n\to\infty}\ \max_{0\le k\le\xi(T_n)}|V(k,n)|\,a_{T_n}^H\,\beta_{T_{n+1}} \ -\ \limsup_{T\to\infty}\ \sup_{0\le t\le T-\delta a_T}\ \sup_{0\le s\le \delta a_T}\big|B_H(t+s) - B_H(t)\big|\,\beta_T. \tag{3.8}$$
Using Ortega's Theorem 3 (1984), we have with probability 1,

$$\limsup_{T\to\infty}\ \sup_{0\le t\le T-\delta a_T}\ \sup_{0\le s\le \delta a_T}\big|B_H(t+s) - B_H(t)\big|\,\beta_T \ \le\ \limsup_{T\to\infty}\ \beta_T\,(\delta a_T)^H\left(2\left(\ln\frac{T}{\delta a_T} + \ln_2 T\right)\right)^{1/2} \ \le\ \delta^H. \tag{3.9}$$

Next, we deduce from (3.6) that, with probability 1,

$$\liminf_{n\to\infty}\ \max_{0\le k\le\xi(T_n)}|V(k,n)|\,a_{T_n}^H\,\beta_{T_n} \ \ge\ \sqrt{\frac{r}{r+1}}. \tag{3.10}$$

Note that, if $\delta$ is small enough and $n$ large enough, then $\beta_{T_n}/\beta_{T_{n+1}}$ is arbitrarily close to 1. Thus, combining the above remark with (3.8), (3.9) and (3.10), we obtain (3.7), and therefore complete the proof of Part II.

4. Proof of Part III
Recall that, for $\frac12 < H < 1$, the function $g : x \to x^{2H}$, $x \ge 1$, is convex, and consequently $\rho(j,k,n) > 0$ for $k > j$. Set $f : x \to (x+1)^{2H} + (x-1)^{2H} - 2x^{2H}$, $x \ge 1$. Its derivative is $f'(x) = 2H\big((x+1)^{2H-1} + (x-1)^{2H-1} - 2x^{2H-1}\big)$. Since the function $x \to x^{2H-1}$, $x \ge 1$, is concave, $f'(x) < 0$, and therefore $f$ is nonincreasing. Then, we have for all $n$

$$\rho(j,k,n) \ =\ \tfrac12 f(k-j) \ \le\ \rho \ :=\ \tfrac12 f(1) \ =\ \tfrac12\big(2^{2H} - 2\big) \ =\ 2^{2H-1} - 1 \ <\ 1. \tag{4.1}$$
Moreover, for $(k-j)$ large enough, we have

$$\rho(j,k,n) \ =\ \tfrac12(k-j)^{2H}\big(2H(2H-1)(k-j)^{-2} + o\big((k-j)^{-2}\big)\big) \ \le\ K(k-j)^{2H-2}. \tag{4.2}$$
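As an illustrative numerical check of (4.1) and (4.2) (the value $H = 0.75$ and the range of lags are arbitrary choices), the correlations $\rho(j,k,n) = \frac12 f(k-j)$ are indeed positive, bounded by $2^{2H-1} - 1$, and decay at the rate $(k-j)^{2H-2}$:

    import numpy as np

    H = 0.75                                   # any 1/2 < H < 1 would do
    m = np.arange(1, 10001, dtype=float)       # lags k - j
    rho = 0.5 * ((m + 1) ** (2 * H) + (m - 1) ** (2 * H) - 2 * m ** (2 * H))

    print(rho.max(), 2 ** (2 * H - 1) - 1)     # maximum attained at lag 1, equals 2^{2H-1} - 1
    print(rho[-1] / m[-1] ** (2 * H - 2))      # ratio approaches H*(2H-1), consistent with (4.2)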
Here and in the sequel, we denote by $K$ a positive constant which may vary at each occurrence. Now, an application of Slepian's lemma (1962) yields

$$P\left(\bigcap_{k=0}^{\xi(T_n)} A_k\right) \ \le\ \prod_{k=0}^{\xi(T_n)} P(A_k) \ +\ S_n,$$

where

$$S_n \ =\ \sum_{j=0}^{\xi(T_n)}\ \sum_{k=j+1}^{\xi(T_n)} \frac{\rho(j,k,n)}{2\pi\big(1-\tilde\rho(j,k,n)^2\big)^{1/2}}\,\exp\left(-\frac{2\lambda_n^2 - 2\lambda_n^2\,\tilde\rho(j,k,n)}{2\big(1-\tilde\rho(j,k,n)^2\big)}\right),$$

and $0 \le \tilde\rho(j,k,n) \le \rho(j,k,n)$. Our aim is to show that

$$\sum_{n=1}^{\infty} P\left(\bigcap_{k=0}^{\xi(T_n)} A_k\right) \ <\ +\infty. \tag{4.3}$$
Since (3.4) holds, i.e. $\sum_{n=1}^{\infty}\prod_{k=0}^{\xi(T_n)}P(A_k) < +\infty$, it is sufficient to show that $\sum_{n=1}^{\infty}S_n < +\infty$. To this purpose, set $\mu_n = \lambda_n^{1/(1-H)}$. Then, we have

$$S_n \ =\ \sum_{j=0}^{\xi(T_n)}\left(\ \sum_{k=j+1}^{j+[\mu_n]} \ +\ \sum_{k=j+[\mu_n]+1}^{\xi(T_n)}\ \right)\frac{\rho(j,k,n)}{2\pi\big(1-\tilde\rho(j,k,n)^2\big)^{1/2}}\,\exp\left(-\frac{\lambda_n^2}{1+\tilde\rho(j,k,n)}\right) \ :=\ S_{1n} + S_{2n}.$$
We will treat the above two quantities separately. Note first that (4.1) implies

$$\frac{\rho(j,k,n)}{\big(1-\tilde\rho(j,k,n)^2\big)^{1/2}} \ \le\ \frac{\rho(j,k,n)}{\big(1-\rho(j,k,n)^2\big)^{1/2}} \ \le\ \frac{\rho}{(1-\rho^2)^{1/2}},$$

and

$$\frac{1}{1+\tilde\rho(j,k,n)} \ \ge\ \frac{1}{1+\rho(j,k,n)} \ \ge\ \frac{1}{1+\rho} \ =\ \frac{1}{2^{2H-1}}.$$
Then, we have

$$S_{1n} \ \le\ K\sum_{j=0}^{\xi(T_n)}\ \sum_{k=j+1}^{j+[\mu_n]}\exp\left(-\frac{(1-\varepsilon)^2\gamma^2 L_{T_n}}{2^{2H-2}}\right) \ \le\ K\big(\xi(T_n)+1\big)\,\mu_n\,\exp\left(-\frac{(1-\varepsilon)^2\gamma^2 L_{T_n}}{2^{2H-2}}\right)$$
$$\le\ K\,(L_{T_n})^{1/(2(1-H))}\ \frac{T_n}{a_{T_n}}\ \frac{1}{(\ln T_n)^{\frac{(1-\varepsilon)^2\gamma^2}{2^{2H-2}}(1+r-\delta)}}$$
$$\le\ \frac{K\,(\ln_2 T_n)^{1/(2(1-H))}}{(\ln T_n)^{\,r\left(\frac{(1-\varepsilon)^2}{2^{2H-2}}-1\right)\ -\ \delta\left(\frac{(1-\varepsilon)^2\gamma^2}{2^{2H-2}}+1\right)}}, \tag{4.4}$$

where we have used (3.2), together with the identity $\gamma^2(1+r) = r$, which gives

$$\frac{(1-\varepsilon)^2\gamma^2}{2^{2H-2}}\,(1+r-\delta) - (r+\delta) \ =\ r\left(\frac{(1-\varepsilon)^2}{2^{2H-2}} - 1\right) - \delta\left(\frac{(1-\varepsilon)^2\gamma^2}{2^{2H-2}} + 1\right).$$
Recall that we assume $r > 4^H/(4 - 4^H)$. We have

$$\frac{\gamma^2}{2^{2H-2}} \ >\ 1. \tag{4.5}$$

We suppose that $\varepsilon > 0$ is small enough so that

$$\varepsilon \ <\ 1 - \frac{2^{H-1}}{\gamma} \ \Longleftrightarrow\ (1-\varepsilon)^2 \ >\ \frac{2^{2H-2}}{\gamma^2}.$$

Then, we have

$$\frac{(1-\varepsilon)^2}{2^{2H-2}} - 1 \ >\ \frac{1}{\gamma^2} - 1 \ =\ \frac{1}{r} \ >\ 0,$$
and

$$r\left(\frac{(1-\varepsilon)^2}{2^{2H-2}} - 1\right) - 1 \ >\ r\left(\frac{1}{\gamma^2} - 1\right) - 1 \ =\ 0.$$

So, we can find $\delta > 0$ small enough so that

$$\delta \ <\ \left(r\left(\frac{(1-\varepsilon)^2}{2^{2H-2}} - 1\right) - 1\right) \Big/ \left(\frac{(1-\varepsilon)^2\gamma^2}{2^{2H-2}} + 1\right).$$
Let $v > 0$ be defined by

$$1 + v \ =\ r\left(\frac{(1-\varepsilon)^2}{2^{2H-2}} - 1\right) - \delta\left(\frac{(1-\varepsilon)^2\gamma^2}{2^{2H-2}} + 1\right).$$

(4.4) can be rewritten as follows:

$$S_{1n} \ \le\ K\,\frac{\big(\ln(\ln T_n)\big)^{1/(2(1-H))}}{(\ln T_n)^{v/2}}\ \frac{1}{(\ln T_n)^{1+v/2}} \ \le\ \frac{K}{n^{1+v/2}},$$

and therefore

$$\sum_{n=1}^{\infty} S_{1n} \ <\ +\infty. \tag{4.6}$$
Recall that

$$S_{2n} \ =\ \sum_{j=0}^{\xi(T_n)}\ \sum_{k=j+[\mu_n]+1}^{\xi(T_n)} \frac{\rho(j,k,n)}{2\pi\big(1-\tilde\rho(j,k,n)^2\big)^{1/2}}\,\exp\left(-\frac{\lambda_n^2}{1+\tilde\rho(j,k,n)}\right).$$

Note that (4.1) and (4.2) imply

$$\frac{1}{\big(1-\tilde\rho(j,k,n)^2\big)^{1/2}} \ \le\ \frac{1}{\big(1-\rho(j,k,n)^2\big)^{1/2}} \ \le\ \frac{1}{(1-\rho^2)^{1/2}},$$
and, for $k - j \ge [\mu_n] + 1$,

$$\tilde\rho(j,k,n) \ \le\ \rho(j,k,n) \ \le\ K(k-j)^{2H-2} \ \le\ K\,\mu_n^{2H-2} \ =\ \frac{K}{\lambda_n^2}.$$
Then, we have

$$S_{2n} \ \le\ K\sum_{j=0}^{\xi(T_n)}\ \sum_{k=j+[\mu_n]+1}^{\xi(T_n)} (k-j)^{2H-2}\exp\left(-\frac{\lambda_n^2}{1+K/\lambda_n^2}\right) \ \le\ K\sum_{j=0}^{\xi(T_n)}\ \sum_{k=j+[\mu_n]+1}^{\xi(T_n)} (k-j)^{2H-2}\exp\left(-\lambda_n^2\left(1-\frac{K}{\lambda_n^2}\right)\right)$$
$$\le\ K\exp(-\lambda_n^2)\ \sum_{j=0}^{\xi(T_n)}\ \sum_{k=j+[\mu_n]+1}^{\xi(T_n)} (k-j)^{2H-2}. \tag{4.7}$$
We also have

$$\sum_{k=j+[\mu_n]+1}^{\xi(T_n)} (k-j)^{2H-2} \ \le\ \int_{j+[\mu_n]}^{\xi(T_n)} (x-j)^{2H-2}\,dx \ \le\ K\big(\xi(T_n)-j\big)^{2H-1},$$

and therefore

$$\sum_{j=0}^{\xi(T_n)}\ \sum_{k=j+[\mu_n]+1}^{\xi(T_n)} (k-j)^{2H-2} \ \le\ K\int_0^{\xi(T_n)} \big(\xi(T_n)-x\big)^{2H-1}\,dx \ \le\ K\big(\xi(T_n)\big)^{2H}.$$
Combining the above inequalities with (4.7), we get

$$S_{2n} \ \le\ K\big(\xi(T_n)\big)^{2H}\exp(-\lambda_n^2) \ \le\ K\,\frac{(\ln T_n)^{2H(r+\delta)}}{(\ln T_n)^{2(1-\varepsilon)^2\gamma^2(1+r-\delta)}} \ \le\ \frac{K}{(\ln T_n)^{\,2r\left((1-\varepsilon)^2-H\right)\ -\ 2\delta\left((1-\varepsilon)^2\gamma^2+H\right)}}, \tag{4.8}$$

where we have again used (3.2) and the identity $\gamma^2(1+r) = r$.
Consider the function $f : H \to 3 - 2H - 4/4^H$, $\frac12 < H < 1$. We have $f(1/2) = f(1) = 0$ and $f'(H) = -2 + (4\log 4)\,4^{-H}$. Moreover,

$$f'(H) = 0 \ \Longleftrightarrow\ H = 1 + \frac{\log(\log 2)}{2\log 2},$$

and, consequently, $f$ is increasing for $\frac12 < H < 1 + (\log(\log 2))/(2\log 2)$ and decreasing for $1 + (\log(\log 2))/(2\log 2) < H < 1$. Hence, $f(H) > 0$, and therefore $3 - 2H > 4/4^H$. Combining the last result with (4.5), we get

$$\frac{1}{\gamma^2} \ <\ 3 - 2H \ \Longleftrightarrow\ H + \frac{1}{2r} \ <\ 1.$$
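A quick numerical confirmation of the inequality $3 - 2H > 4/4^H$ on the open interval $(\frac12, 1)$ (purely illustrative; the grid of sample points is an arbitrary choice):

    import numpy as np

    H = np.linspace(0.5, 1.0, 11)
    f = 3 - 2 * H - 4 / 4.0 ** H
    print(np.round(f, 4))   # f vanishes at the endpoints H = 1/2 and H = 1
                            # and is positive at the interior grid points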
We suppose that $\varepsilon > 0$ is small enough so that

$$\varepsilon \ <\ 1 - \left(H + \frac{1}{2r}\right)^{1/2} \ \Longleftrightarrow\ (1-\varepsilon)^2 \ >\ H + \frac{1}{2r}.$$

Then, we have $(1-\varepsilon)^2 - H > 0$, and

$$2r\big((1-\varepsilon)^2 - H\big) - 1 \ =\ 2r\left((1-\varepsilon)^2 - \left(H + \frac{1}{2r}\right)\right) \ >\ 0.$$
So, we can find $\delta > 0$ small enough so that

$$\delta \ <\ \frac{2r\big((1-\varepsilon)^2 - H\big) - 1}{2\big((1-\varepsilon)^2\gamma^2 + H\big)}.$$

Let $v' > 0$ be defined by $1 + v' = 2r\big((1-\varepsilon)^2 - H\big) - 2\delta\big((1-\varepsilon)^2\gamma^2 + H\big)$. So, we deduce from (4.8) that $S_{2n} \le K/n^{1+v'}$, and consequently

$$\sum_{n=1}^{\infty} S_{2n} \ <\ +\infty. \tag{4.9}$$

Combining (4.6) with (4.9), we get (4.3) and complete the proof of the theorem, by noting that the two last steps of the proof of Part II remain valid for $\frac12 < H < 1$. □
References

Beran, J., 1994. Statistics for Long-Memory Processes. Chapman & Hall, London.
Book, S.A., Shore, T.R., 1978. On large intervals in the Csörgő-Révész theorem on increments of a Wiener process. Z. Wahrsch. Verw. Gebiete 46, 1-11.
Csörgő, M., Révész, P., 1979. How big are the increments of a Wiener process? Ann. Probab. 7, 731-737.
Csörgő, M., Révész, P., 1981. Strong Approximations in Probability and Statistics. Academic Press, New York.
El-Nouty, C., 1994. Lois limites sur les processus fractals. Mémoire de doctorat de l'Université Paris VI.
Goodman, V., Kuelbs, J., 1991. Rates of clustering for some Gaussian self-similar processes. Probab. Theory Related Fields 88, 47-75.
Kuelbs, J., Li, W.V., Shao, Q.-M., 1995. Small ball probabilities for Gaussian processes with stationary increments under Hölder norms. J. Theoret. Probab. 8, 361-386.
Kuelbs, J., Li, W.V., Talagrand, M., 1994. Lim inf results for Gaussian samples and Chung's functional LIL. Ann. Probab. 22, 1879-1903.
Ledoux, M., Talagrand, M., 1990. Probability in Banach Spaces. Springer, Berlin.
Monrad, D., Rootzén, H., 1995. Small values of Gaussian processes and functional laws of the iterated logarithm. Probab. Theory Related Fields 101, 173-192.
Ortega, J., 1984. On the size of the increments of nonstationary Gaussian processes. Stoch. Proc. Appl. 18, 47-56.
Ortega, J., 1989. Upper classes for the increments of fractional Brownian motion. Probab. Theory Related Fields 80, 365-379.
Samorodnitsky, G., Taqqu, M.S., 1994. Stable Non-Gaussian Random Processes. Chapman & Hall, London.
Slepian, D., 1962. The one-sided barrier problem for Gaussian noise. Bell System Tech. J. 41, 463-501.
Talagrand, M., 1996. Lower classes of fractional Brownian motion. J. Theoret. Probab. 9, 191-213.
Tong, Y.L., 1980. Probability Inequalities in Multivariate Distributions. Academic Press, New York.