Upper bound of a volume exponent for directed polymers in a random environment

Ann. I. H. Poincaré – PR 40 (2004) 299–308 www.elsevier.com/locate/anihpb

Olivier Mejane
Laboratoire de statistiques et probabilités, Université Paul Sabatier, 118, route de Narbonne, 31062 Toulouse, France

Received 18 March 2003; received in revised form 16 October 2003; accepted 21 October 2003. Available online 13 February 2004.

Abstract

We consider the model of directed polymers in a random environment introduced by Petermann: the random walk is $\mathbb{R}^d$-valued and has independent $N(0,I_d)$ increments, and the random medium is a stationary centered Gaussian process $(g(k,x),\ k\ge 1,\ x\in\mathbb{R}^d)$ with covariance $\operatorname{cov}(g(i,x),g(j,y)) = \delta_{ij}\,\Gamma(x-y)$, where $\Gamma$ is a bounded integrable function on $\mathbb{R}^d$. For this model, we establish an upper bound on the volume exponent in all dimensions $d$. © 2004 Elsevier SAS. All rights reserved.

MSC: 60K37; 60G42; 82D30

Keywords: Directed polymers in random environment; Gaussian environment

1. Introduction

The model of directed polymers in a random environment was introduced by Imbrie and Spencer [7]. We focus here on a particular model studied by Petermann [8] in his thesis: let $(S_n)_{n\ge 0}$ be a random walk in $\mathbb{R}^d$ starting from the origin, with independent $N(0,I_d)$ increments, defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$, and let $g = (g(k,x),\ k\ge 1,\ x\in\mathbb{R}^d)$ be a stationary centered Gaussian process with covariance

$$\operatorname{cov}\bigl(g(i,x),g(j,y)\bigr) = \delta_{ij}\,\Gamma(x-y),$$

(E-mail address: [email protected] (O. Mejane). doi:10.1016/j.anihpb.2003.10.007)


where $\Gamma$ is a bounded integrable function on $\mathbb{R}^d$. We suppose that this random medium $g$ is defined on a probability space $(\Omega^g,\mathcal{G},P)$, where $(\mathcal{G}_n)_{n\ge 0}$ is the natural filtration:

$$\mathcal{G}_n = \sigma\bigl(g(k,x),\ 1\le k\le n,\ x\in\mathbb{R}^d\bigr)\quad\text{for } n\ge 1$$

($\mathcal{G}_0$ being the trivial $\sigma$-algebra). We denote by $E$ (respectively $\mathbb{E}$) the expectation with respect to $P$ (respectively $\mathbb{P}$). We define the Gibbs measure $\langle\cdot\rangle^{(n)}$ by

$$\langle f\rangle^{(n)} = \frac{1}{Z_n}\,\mathbb{E}\Bigl[f(S_1,\dots,S_n)\,e^{\beta\sum_{k=1}^n g(k,S_k)}\Bigr]$$

for any bounded function $f$ on $(\mathbb{R}^d)^n$, where $\beta>0$ is a fixed parameter and $Z_n$ is the partition function:

$$Z_n = \mathbb{E}\Bigl[e^{\beta\sum_{k=1}^n g(k,S_k)}\Bigr].$$

Following Piza [9] we define the volume exponent

$$\xi = \inf\Bigl\{\alpha>0:\ \bigl\langle 1_{\{\max_{k\le n}|S_k|\le n^\alpha\}}\bigr\rangle^{(n)} \xrightarrow[n\to\infty]{} 1 \text{ in } P\text{-probability}\Bigr\}.$$

Here and in the sequel, $|x| = \max_{1\le i\le d}|x_i|$ for any $x=(x_1,\dots,x_d)\in\mathbb{R}^d$. Petermann obtained a result of superdiffusivity in dimension one, in the particular case where $\Gamma(x) = \frac{1}{2\lambda}e^{-\lambda|x|}$ for some $\lambda>0$: he proved that $\xi\ge 3/5$ for all $\beta>0$ (for another result of superdiffusivity, see [6]). Our main result gives, on the contrary, an upper bound for the volume exponent in all dimensions:

$$\forall d\ge 1,\ \forall\beta>0,\qquad \xi\le\frac{3}{4}. \tag{1}$$

This paper is organized as follows:

• In Section 2, we first extend exponential inequalities concerning independent Gaussian variables, proved by Carmona and Hu [1], to the case of a stationary Gaussian process. Then, following Comets, Shiga and Yoshida [2], we combine these inequalities with martingale methods and obtain a concentration inequality.
• In Section 3, we obtain an upper bound for $\xi$ when we consider only the value of the walk $S$ at time $n$, and not the maximal one before $n$. In fact we prove a stronger result, namely a large deviation principle for $(\langle 1_{S_n/n^\alpha\in\cdot}\rangle^{(n)},\ n\ge 1)$ when $\alpha>3/4$. This result and its proof are an adaptation of the work of Comets and Yoshida on a continuous model of directed polymers [3].
• In Section 4, we establish (1).
• Appendix A is devoted to the proof of Lemma 2.4, used in Section 2, which gives a large deviation estimate for a sum of martingale differences. It is a slight extension of a result of Lesigne and Volný [5, Theorem 3.2].
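As a concrete illustration of the objects just defined, the following Python sketch (not part of the paper; all parameter values are hypothetical) estimates the Gibbs measure $\langle\cdot\rangle^{(n)}$ in dimension $d=1$ by importance sampling: paths are drawn from $\mathbb{P}$ and reweighted by $e^{\beta\sum_k g(k,S_k)}$, with the environment realized on a finite grid using Petermann's covariance $\Gamma(x)=\frac{1}{2\lambda}e^{-\lambda|x|}$ and a coarse grid lookup in place of the continuum field.

```python
# Hypothetical illustration (not from the paper): importance-sampling estimate
# of the polymer measure <.>^(n) in d = 1.  The environment g(k, x) is sampled
# on a finite grid with covariance Gamma(x) = exp(-lam*|x|)/(2*lam),
# independently for each time k; paths are drawn from P and reweighted by the
# Gibbs weights exp(beta * sum_k g(k, S_k)).  All parameter values are made up.
import numpy as np

rng = np.random.default_rng(0)
n, beta, lam, n_paths = 20, 1.0, 1.0, 5000
grid = np.linspace(-15.0, 15.0, 301)          # spatial grid carrying the field
cov = np.exp(-lam * np.abs(grid[:, None] - grid[None, :])) / (2.0 * lam)
L = np.linalg.cholesky(cov + 1e-10 * np.eye(grid.size))   # jitter for stability
g = (L @ rng.standard_normal((grid.size, n))).T           # g[k-1, i]: time k, site grid[i]

paths = np.cumsum(rng.standard_normal((n_paths, n)), axis=1)   # (S_1, ..., S_n) under P
idx = np.clip(np.searchsorted(grid, paths), 0, grid.size - 1)  # coarse grid lookup
energy = g[np.arange(n)[None, :], idx].sum(axis=1)             # sum_k g(k, S_k)
w = np.exp(beta * (energy - energy.max()))                     # stabilized Gibbs weights

alpha = 0.75
frac = np.sum(w * (np.abs(paths).max(axis=1) >= n ** alpha)) / w.sum()
print(f"estimate of <1_(max_k |S_k| >= n^{alpha})>^({n}): {frac:.3f}")
```

The weighted fraction printed at the end is exactly the quantity whose behaviour the volume exponent describes; for a fixed environment draw it is a random number in $[0,1]$.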

2. Preliminary: a concentration inequality

2.1. Exponential inequalities

Lemma 2.1. Let $(g(x),\ x\in\mathbb{R}^d)$ be a family of centered Gaussian random variables with common variance $\sigma^2>0$. We fix $q,\beta>0$, $(x_1,\dots,x_n)\in(\mathbb{R}^d)^n$ and $(\lambda_1,\dots,\lambda_n)$ in $\mathbb{R}^n$. Then for any probability measure $\mu$ on $\mathbb{R}^d$:

$$e^{-\frac{\beta^2\sigma^2}{2}q} \;\le\; E\left[\frac{e^{\beta\sum_{i=1}^n\lambda_i g(x_i)-\frac{\beta^2\sigma^2}{2}q}}{\bigl(\int_{\mathbb{R}^d}e^{\beta g(x)}\,\mu(dx)\bigr)^q}\right] \;\le\; e^{\frac{\beta^2\sigma^2}{2}\bigl(q+\sum_{i=1}^n|\lambda_i|\bigr)^2}.$$


The proof is identical to the one given by Carmona and Hu in a discrete framework (where $\mu$ is a sum of Dirac masses), and is therefore omitted.

Lemma 2.2. Let $(g(x),\ x\in\mathbb{R}^d)$ be a centered Gaussian process with covariance $\operatorname{cov}(g(x),g(y)) = \Gamma(x-y)$. Let $\sigma^2 = \Gamma(0)$, and let $\mu$ be a probability measure on $\mathbb{R}^d$. Then for all $\beta>0$, there are constants $c_1 = c_1(\beta,\sigma^2)>0$ and $c_2 = c_2(\beta,\sigma^2)>0$ such that:

$$-c_1\iint_{\mathbb{R}^d\times\mathbb{R}^d}\Gamma(x-y)\,\mu(dx)\,\mu(dy) \;\le\; E\log\int_{\mathbb{R}^d}e^{\beta g(x)-\frac{\beta^2\sigma^2}{2}}\,\mu(dx) \;\le\; -c_2\iint_{\mathbb{R}^d\times\mathbb{R}^d}\Gamma(x-y)\,\mu(dx)\,\mu(dy).$$

In particular,

$$-c_1\sigma^2 \;\le\; E\log\int_{\mathbb{R}^d}e^{\beta g(x)-\frac{\beta^2\sigma^2}{2}}\,\mu(dx) \;\le\; 0.$$
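The "in particular" statement follows from two elementary properties of the covariance $\Gamma$, recorded here as a small added step (not spelled out in the original):

```latex
% (i) positive definiteness: the double integral is a variance, hence nonnegative;
% (ii) Cauchy--Schwarz: the covariance is maximal in absolute value at the origin.
\[
\iint \Gamma(x-y)\,\mu(dx)\,\mu(dy)
  \;=\; \operatorname{Var}\!\Bigl(\int g(x)\,\mu(dx)\Bigr) \;\ge\; 0,
\qquad
|\Gamma(x-y)| \;=\; |\operatorname{cov}(g(x),g(y))| \;\le\; \Gamma(0) \;=\; \sigma^2,
\]
\[
\text{so that }\quad
0 \;\le\; \iint \Gamma(x-y)\,\mu(dx)\,\mu(dy) \;\le\; \sigma^2 ;
\]
% substituting these two bounds into the two sides of Lemma 2.2 gives the
% displayed consequence  -c_1 \sigma^2 <= E log \int e^{\beta g - \beta^2\sigma^2/2} d\mu <= 0.
```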

Proof. Let $\{B_x(t),\ t\ge 0\}_{x\in\mathbb{R}^d}$ be the family of centered Gaussian processes such that

$$E\bigl[B_x(t)B_y(s)\bigr] = \inf(s,t)\,\Gamma(x-y),$$

with $B_x(0)=0$ for all $x\in\mathbb{R}^d$. Define

$$X(t) = \int_{\mathbb{R}^d} M_x(t)\,\mu(dx),\qquad t\ge 0,$$

where $M_x(t) = e^{\beta B_x(t)-\beta^2\sigma^2 t/2}$. Since $dM_x(t) = \beta M_x(t)\,dB_x(t)$, one has

$$d\langle M_x,M_y\rangle_t = \beta^2 M_x(t)M_y(t)\,d\langle B_x,B_y\rangle_t = \beta^2 e^{\beta(B_x(t)+B_y(t))-\beta^2\sigma^2 t}\,\Gamma(x-y)\,dt,$$

and $d\langle X,X\rangle_t = \iint_{\mathbb{R}^d\times\mathbb{R}^d}\beta^2 e^{\beta(B_x(t)+B_y(t))-\beta^2\sigma^2 t}\,\Gamma(x-y)\,\mu(dx)\,\mu(dy)\,dt$. Thus, by Itô's formula,

$$E(\log X_1) = -\frac{\beta^2}{2}\iint_{\mathbb{R}^d\times\mathbb{R}^d}\mu(dx)\,\mu(dy)\,\Gamma(x-y)\,E\int_0^1\frac{e^{\beta(B_x(t)+B_y(t))-\beta^2\sigma^2 t}}{X_t^2}\,dt.$$

By Lemma 2.1, we have for all $t$:

$$e^{-\beta^2\sigma^2 t} \;\le\; E\left[\frac{e^{\beta(B_x(t)+B_y(t))-\beta^2\sigma^2 t}}{X_t^2}\right] = E\left[\frac{e^{\beta(B_x(t)+B_y(t))}}{\bigl(\int_{\mathbb{R}^d}e^{\beta B_x(t)}\,\mu(dx)\bigr)^2}\right] \;\le\; e^{8\beta^2\sigma^2 t}.$$

Hence:

$$-\frac{e^{8\beta^2\sigma^2}-1}{16\sigma^2}\iint_{\mathbb{R}^d\times\mathbb{R}^d}\Gamma(x-y)\,\mu(dx)\,\mu(dy) \;\le\; E(\log X_1) \;\le\; -\frac{1-e^{-\beta^2\sigma^2}}{2\sigma^2}\iint_{\mathbb{R}^d\times\mathbb{R}^d}\Gamma(x-y)\,\mu(dx)\,\mu(dy),$$

which concludes the proof, since $X_1 = \int_{\mathbb{R}^d}e^{\beta g(x)-\frac{\beta^2\sigma^2}{2}}\,\mu(dx)$ in distribution (the family $(B_x(1))_{x\in\mathbb{R}^d}$ having the same law as $(g(x))_{x\in\mathbb{R}^d}$). □

2.2. A concentration result

Proposition 2.3. Let $\nu>1/2$. For $n\in\mathbb{N}$, $j\le n$ and $f_n$ a nonnegative bounded function such that $\mathbb{E}(f_n(S_j))>0$, we write $W_{n,j} = \mathbb{E}\bigl(f_n(S_j)\,e^{\beta\sum_{k=1}^n g(k,S_k)}\bigr)$. Then for $n\ge n_0(\beta,\nu)$,

$$P\Bigl(\bigl|\log W_{n,j}-E(\log W_{n,j})\bigr|\ge n^\nu\Bigr) \;\le\; \exp\Bigl(-\frac{1}{4}n^{(2\nu-1)/3}\Bigr).$$
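The concentration phenomenon in Proposition 2.3 can be observed numerically. The sketch below is an illustration under simplifying assumptions, not the paper's model: the walk is a nearest-neighbour walk on $\mathbb{Z}$ and the environment is i.i.d. standard Gaussian, so that $\log W_n$ is computable exactly by dynamic programming; we then look at the spread of $\log W_n$ over independent environment draws, the quantity the proposition controls.

```python
# Simplified analogue (NOT the paper's model): nearest-neighbour walk on Z with
# an i.i.d. N(0,1) environment, for which log W_n = log E[exp(beta sum_k g(k, S_k))]
# is exact via a transfer-matrix (dynamic programming) recursion.  Proposition 2.3
# asserts concentration of log W_n around its mean at scale below n^nu, nu > 1/2.
import numpy as np

def log_Wn(g, beta):
    """Transfer-matrix computation of log E[exp(beta sum_k g(k, S_k))]."""
    n = g.shape[0]
    offset = n                          # site x lives at array index x + offset
    logphi = np.full(2 * n + 1, -np.inf)
    logphi[offset] = 0.0                # the walk starts at the origin
    for k in range(n):
        left, right = np.roll(logphi, 1), np.roll(logphi, -1)
        logphi = np.logaddexp(left, right) - np.log(2.0) + beta * g[k]
    return np.logaddexp.reduce(logphi)

rng = np.random.default_rng(1)
n, beta, trials = 40, 0.8, 200
samples = np.array([log_Wn(rng.standard_normal((n, 2 * n + 1)), beta)
                    for _ in range(trials)])
print(f"std of log W_n over environments: {samples.std():.2f}  (n^0.75 = {n ** 0.75:.1f})")
```

The observed standard deviation sits well below $n^{3/4}$, consistent with (though of course not a proof of) the concentration estimate.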


Proof. We use the following lemma, whose proof is postponed to Appendix A.

Lemma 2.4. Let $(X_n^i,\ 1\le i\le n)$ be a martingale difference sequence and let $M_n = \sum_{i=1}^n X_n^i$. Suppose that there exists $K>0$ such that $E(e^{|X_n^i|})\le K$ for all $i$ and $n$. Then for any $\nu>1/2$, and for $n\ge n_0(K,\nu)$,

$$P\bigl(|M_n|>n^\nu\bigr) \;\le\; \exp\Bigl(-\frac{1}{4}n^{(2\nu-1)/3}\Bigr).$$

• We first assume that $f_n>0$. To apply Lemma 2.4, we define $X_{n,j}^i = E(\log W_{n,j}\mid\mathcal{G}_i) - E(\log W_{n,j}\mid\mathcal{G}_{i-1})$, so that

$$\log W_{n,j} - E(\log W_{n,j}) = \sum_{i=1}^n X_{n,j}^i.$$

It is sufficient to prove that there exists $K>0$ such that $E(e^{|X_{n,j}^i|})\le K$ for all $i$ and $(n,j)$. For this, we introduce:

$$e_{n,j}^i = f_n(S_j)\exp\Bigl(\beta\sum_{1\le k\le n,\ k\ne i} g(k,S_k)\Bigr),\qquad W_{n,j}^i = \mathbb{E}\bigl(e_{n,j}^i\bigr).$$

Note that $W_{n,j}^i>0$ since we assumed that $f_n>0$. If $E_i$ is the conditional expectation with respect to $\mathcal{G}_i$, then $E_i(\log W_{n,j}^i) = E_{i-1}(\log W_{n,j}^i)$, so that:

$$X_{n,j}^i = E_i\bigl(\log Y_{n,j}^i\bigr) - E_{i-1}\bigl(\log Y_{n,j}^i\bigr), \tag{2}$$

with

$$Y_{n,j}^i = e^{-\beta^2\sigma^2/2}\,\frac{W_{n,j}}{W_{n,j}^i} = \int_{\mathbb{R}^d} e^{\beta g(i,x)-\beta^2\sigma^2/2}\,\mu_{n,j}^i(dx), \tag{3}$$

$\mu_{n,j}^i$ being the random probability measure:

$$\mu_{n,j}^i(dx) = \frac{1}{W_{n,j}^i}\,\mathbb{E}\bigl(e_{n,j}^i\mid S_i=x\bigr)\,\mathbb{P}(S_i\in dx).$$

Since $\mu_{n,j}^i$ is measurable with respect to $\mathcal{G}_{n,i} = \sigma(g(k,x),\ 1\le k\le n,\ k\ne i,\ x\in\mathbb{R}^d)$, we deduce from Lemma 2.2 that there exists a constant $c = c(\beta)>0$, which does not depend on $(n,j,i)$, such that:

$$-c \;\le\; E\Bigl(\log\int_{\mathbb{R}^d}e^{\beta g(i,x)-\beta^2\sigma^2/2}\,\mu_{n,j}^i(dx)\,\Big|\,\mathcal{G}_{n,i}\Bigr) \;\le\; 0,$$

and since $\mathcal{G}_{i-1}\subset\mathcal{G}_{n,i}$, we obtain:

$$0 \;\le\; -E_{i-1}\bigl(\log Y_{n,j}^i\bigr) \;\le\; c. \tag{4}$$

Thus we deduce from (2) and (4) that for all $\theta\in\mathbb{R}$,

$$E\,e^{\theta X_{n,j}^i} \;\le\; e^{c\theta^+}\,E\,e^{\theta E_i(\log Y_{n,j}^i)},$$

with $\theta^+ := \max(\theta,0)$. By Jensen's inequality,

$$e^{\theta E_i(\log Y_{n,j}^i)} \;\le\; E_i\bigl[(Y_{n,j}^i)^\theta\bigr],$$


so that

$$E\,e^{\theta X_{n,j}^i} \;\le\; e^{c\theta^+}\,E\bigl[(Y_{n,j}^i)^\theta\bigr] = e^{c\theta^+}\,E\Bigl[E\bigl((Y_{n,j}^i)^\theta\mid\mathcal{G}_{n,i}\bigr)\Bigr].$$

Assume now that $\theta\in\{-1,1\}$; in both cases the function $x\mapsto x^\theta$ is convex, so using (3) we obtain $(Y_{n,j}^i)^\theta \le \int_{\mathbb{R}^d}e^{\theta(\beta g(i,x)-\beta^2\sigma^2/2)}\,\mu_{n,j}^i(dx)$, and therefore:

$$E\bigl((Y_{n,j}^i)^\theta\mid\mathcal{G}_{n,i}\bigr) \;\le\; \int_{\mathbb{R}^d}E\bigl(e^{\theta(\beta g(i,x)-\beta^2\sigma^2/2)}\mid\mathcal{G}_{n,i}\bigr)\,\mu_{n,j}^i(dx) = E\,e^{\theta(\beta g(1,0)-\beta^2\sigma^2/2)},$$

using that $g(i,x)$ is independent of $\mathcal{G}_{n,i}$ and is distributed as $g(1,0)$ for all $i$ and $x$. We conclude that for all $n$ and $1\le i,j\le n$,

$$E\,e^{|X_{n,j}^i|} \;\le\; E\,e^{X_{n,j}^i} + E\,e^{-X_{n,j}^i} \;\le\; K := e^c + e^{\beta^2\sigma^2}.$$

• In the general case where $f_n\ge 0$, we introduce $h_n = f_n+\delta$ for some $0<\delta<1$. The first part of the proof applies to $h_n$: writing $W_{n,j}^\delta = \mathbb{E}\bigl(h_n(S_j)e^{\beta\sum_{k=1}^n g(k,S_k)}\bigr)$, it remains to show that $\log W_{n,j}^\delta - E(\log W_{n,j}^\delta) \to \log W_{n,j} - E(\log W_{n,j})$, $P$-a.s., as $\delta\to 0$. Since $f_n$ is bounded by some constant $C_n>0$, the following inequality holds for all $0<\delta<1$: $\log W_{n,j} \le \log W_{n,j}^\delta \le \log\bigl((C_n+1)Z_n\bigr)$. Since $0\le E\log Z_n\le\log E Z_n = n\beta^2\Gamma(0)/2 < \infty$, the conclusion follows from dominated convergence. □

Corollary 2.5. Let $\nu>1/2$, and fix a sequence of Borel sets $(B(j,n),\ n\ge 1,\ j\le n)$. Then $P$-almost surely, there exists $n_0$ such that for every $n\ge n_0$ and every $j\le n$,

$$\Bigl|\log\bigl\langle 1_{S_j\in B(j,n)}\bigr\rangle^{(n)} - E\log\bigl\langle 1_{S_j\in B(j,n)}\bigr\rangle^{(n)}\Bigr| \;\le\; 2n^\nu.$$

Proof. Write $A_{n,j} = \bigl\{\bigl|\log\mathbb{E}\bigl(f_n(S_j)e^{\beta\sum_{k=1}^n g(k,S_k)}\bigr) - E\log\mathbb{E}\bigl(f_n(S_j)e^{\beta\sum_{k=1}^n g(k,S_k)}\bigr)\bigr|\ge n^\nu\bigr\}$. Proposition 2.3 implies that

$$P\Bigl(\bigcup_{j\le n}A_{n,j}\Bigr) \;\le\; n\exp\Bigl(-\frac{1}{4}n^{(2\nu-1)/3}\Bigr).$$

Hence, by Borel–Cantelli, $P$-almost surely there exists $n_0$ such that for every $n\ge n_0$ and every $j\le n$:

$$\Bigl|\log\mathbb{E}\bigl(f_n(S_j)e^{\beta\sum_{k=1}^n g(k,S_k)}\bigr) - E\log\mathbb{E}\bigl(f_n(S_j)e^{\beta\sum_{k=1}^n g(k,S_k)}\bigr)\Bigr| \;\le\; n^\nu.$$

One then applies this result to $f_n(x) = 1_{x\in B(j,n)}$ and to $f_n(x) = 1$; since $\langle 1_{S_j\in B(j,n)}\rangle^{(n)}$ is the ratio of the two corresponding quantities, the two estimates combine to give the bound $2n^\nu$. □

3. A first result

In this section, we prove that a large deviation principle holds $P$-almost surely for the sequence of measures $(\langle 1_{S_n/n^\alpha\in\cdot}\rangle^{(n)},\ n\ge 1)$ if $\alpha>3/4$. This was first proved by Comets and Yoshida [3, Theorem 2.4.4] for a model of directed polymers in which the random walk is replaced by a Brownian motion and the environment is given by a Poisson random measure on $\mathbb{R}_+\times\mathbb{R}^d$.

Theorem 3.1. Let $\alpha>3/4$. Then a large deviation principle for $(\langle 1_{S_n/n^\alpha\in\cdot}\rangle^{(n)},\ n\ge 1)$ holds $P$-a.s., with rate function $I(\lambda) = \|\lambda\|^2/2$ and speed $n^{2\alpha-1}$, $\|\cdot\|$ denoting the Euclidean norm on $\mathbb{R}^d$. In particular, for all $\varepsilon>0$,

$$\lim_{n\to\infty}-\frac{1}{n^{2\alpha-1}}\log\bigl\langle 1_{\|S_n\|\ge\varepsilon n^\alpha}\bigr\rangle^{(n)} = \frac{\varepsilon^2}{2}\qquad P\text{-a.s.}$$
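For completeness, the rate function in Theorem 3.1 is the Legendre transform of the limiting normalized log-moment generating function computed in the proof; the correspondence is the standard Gärtner–Ellis input/output:

```latex
% The proof establishes, P-a.s., the limit
\[
\Lambda(\lambda) \;:=\; \lim_{n\to\infty}\frac{1}{n^{2\alpha-1}}
   \log\bigl\langle e^{\,n^{\alpha-1}\lambda\cdot S_n}\bigr\rangle^{(n)}
   \;=\; \frac{\|\lambda\|^2}{2},
\]
% whose Legendre transform is the announced rate function:
\[
I(x) \;=\; \sup_{\lambda\in\mathbb{R}^d}\Bigl(\lambda\cdot x-\frac{\|\lambda\|^2}{2}\Bigr)
      \;=\; \frac{\|x\|^2}{2}
      \qquad(\text{the supremum is attained at }\lambda = x),
\]
% and optimizing over the closed set \{\|x\|\ge\varepsilon\} gives the displayed limit:
\[
\inf_{\|x\|\ge\varepsilon} I(x) \;=\; \frac{\varepsilon^2}{2}.
\]
```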


Remark 3.2. In particular this result implies that for all $\alpha>3/4$, $P$-a.s.,

$$\bigl\langle 1_{|S_n|\ge n^\alpha}\bigr\rangle^{(n)} \xrightarrow[n\to\infty]{} 0. \tag{5}$$

Proof. Let us fix $\lambda\in\mathbb{R}^d$, $n\ge 1$, and introduce the following martingale:

$$M_p^{\lambda,n} = \begin{cases} e^{\lambda\cdot S_p-p\|\lambda\|^2/2} & \text{if } p\le n,\\ e^{\lambda\cdot S_n-n\|\lambda\|^2/2} & \text{if } p>n,\end{cases}$$

with $x\cdot y$ denoting the scalar product of two vectors $x$ and $y$ in $\mathbb{R}^d$. If $Q^{\lambda,n}$ is the probability defined by the Girsanov change of measure associated to this positive martingale, Girsanov's formula ensures that, under $Q^{\lambda,n}$, the process $(\tilde S_p := S_p-\lambda(p\wedge n),\ p\ge 1)$ has the same distribution as $S$ under $\mathbb{P}$. Therefore:

$$Z_n\bigl\langle e^{\lambda\cdot S_n}\bigr\rangle^{(n)} = e^{n\|\lambda\|^2/2}\,\mathbb{E}\bigl(M_n^{\lambda,n}\,e^{\beta\sum_{k=1}^n g(k,S_k)}\bigr) = e^{n\|\lambda\|^2/2}\,\mathbb{E}\bigl(e^{\beta\sum_{k=1}^n g(k,\,S_k+k\lambda)}\bigr) = e^{n\|\lambda\|^2/2}\,\mathbb{E}\bigl(e^{\beta\sum_{k=1}^n g^{\lambda,n}(k,S_k)}\bigr),$$

where we denote by $g^{\lambda,n}$ the translated environment

$$g^{\lambda,n}(k,x) := g\bigl(k,\ x+\lambda(k\wedge n)\bigr).$$

By stationarity, this environment has the same distribution as $(g(k,x),\ k\ge 1,\ x\in\mathbb{R}^d)$, hence

$$\mathbb{E}\bigl(e^{\beta\sum_{k=1}^n g^{\lambda,n}(k,S_k)}\bigr) \overset{d}{=} \mathbb{E}\bigl(e^{\beta\sum_{k=1}^n g(k,S_k)}\bigr) = Z_n,$$

thus

$$E\log\bigl\langle e^{\lambda\cdot S_n}\bigr\rangle^{(n)} = n\|\lambda\|^2/2. \tag{6}$$

Now let us fix $\alpha>3/4$. With $n^{\alpha-1}\lambda$ instead of $\lambda$, (6) gives

$$E\log\bigl\langle e^{n^{\alpha-1}\lambda\cdot S_n}\bigr\rangle^{(n)} = n^{2\alpha-1}\|\lambda\|^2/2. \tag{7}$$

Let us define $f_n(x) = e^{n^{\alpha-1}\lambda\cdot x}$. This function is positive and $\mathbb{E}\bigl(f_n(S_n)e^{\beta\sum_{k=1}^n g(k,S_k)}\bigr)<\infty$, so the result of Corollary 2.5 still holds with $f_n(x)$ in place of $1_{x\in B(j,n)}$. Since $2\alpha-1>1/2$, this implies

$$\lim_{n\to\infty}\frac{1}{n^{2\alpha-1}}\Bigl(\log\bigl\langle e^{n^{\alpha-1}\lambda\cdot S_n}\bigr\rangle^{(n)} - E\log\bigl\langle e^{n^{\alpha-1}\lambda\cdot S_n}\bigr\rangle^{(n)}\Bigr) = 0\qquad P\text{-a.s.} \tag{8}$$

From (7) and (8), we get:

$$\lim_{n\to\infty}\frac{1}{n^{2\alpha-1}}\log\bigl\langle e^{n^{\alpha-1}\lambda\cdot S_n}\bigr\rangle^{(n)} = \|\lambda\|^2/2\qquad P\text{-a.s.}$$

Let us define $h_n(\lambda) = \frac{1}{n^{2\alpha-1}}\log\langle e^{n^{\alpha-1}\lambda\cdot S_n}\rangle^{(n)}$ and $h(\lambda) = \|\lambda\|^2/2$. From what we proved, we deduce the existence of $A\subset\Omega^g$, with $P(A)=1$, on which $h_n(\lambda)\to h(\lambda)$ for all $\lambda\in\mathbb{Q}^d$. We now show that on $A$ the convergence actually holds for all $\lambda\in\mathbb{R}^d$, using the convexity of the functions $h_n$ and the continuity of $h$. To this end, we can reduce the proof to the case $d=1$: indeed, if $d\ge 2$, we fix $(d-1)$ coordinates in $\mathbb{Q}^{d-1}$, use that $h_n$ is still convex as a function of the last coordinate, and then repeat the argument. So let us assume $d=1$ and fix $\lambda\in\mathbb{R}^*$. There exist two rational sequences $(a_i,\ i\ge 0)$ and $(b_i,\ i\ge 0)$ converging to $\lambda$, $(a_i)$ being increasing and $(b_i)$ decreasing. Fix $i\ge 1$ and $n\ge 1$. Since $h_n$ is convex and satisfies $h_n(0)=0$, the function $x\mapsto h_n(x)/x$ is increasing. Hence the following inequalities hold:

$$\frac{h_n(a_i)}{a_i} \;\le\; \frac{h_n(\lambda)}{\lambda} \;\le\; \frac{h_n(b_i)}{b_i}.$$


Since $h_n$ converges to $h$ on $\mathbb{Q}$, it follows that

$$\frac{h(a_i)}{a_i} \;\le\; \liminf_{n\to\infty}\frac{h_n(\lambda)}{\lambda} \;\le\; \limsup_{n\to\infty}\frac{h_n(\lambda)}{\lambda} \;\le\; \frac{h(b_i)}{b_i}.$$

The limit function $h$ being continuous, we obtain by letting $i\to+\infty$ that the limit of $h_n(\lambda)/\lambda$ exists and equals $h(\lambda)/\lambda$, which proves that $h_n(\lambda)\to h(\lambda)$ for all $\lambda\in\mathbb{R}$. One then concludes by the Gärtner–Ellis–Baldi theorem (see [4]). □

4. Upper bound of the volume exponent

We now extend the result (5) to the maximal deviation from the origin:

Theorem 4.1. For all $d\ge 1$ and $\alpha>3/4$, $P$-a.s.,

$$\bigl\langle 1_{\{\max_{k\le n}|S_k|\ge n^\alpha\}}\bigr\rangle^{(n)} \xrightarrow[n\to\infty]{} 0.$$

Proof. We will use the following notations: for $x\in\mathbb{R}^d$ and $r\ge 0$, $B(x,r) = \{y\in\mathbb{R}^d:\ |y-x|\le r\}$; for $\alpha\ge 0$ and $j=(j_1,\dots,j_d)\in\mathbb{Z}^d$, $B_j^\alpha = B(jn^\alpha,n^\alpha)$. We will use the fact that the balls $(B_j^\alpha,\ j\in(2\mathbb{Z})^d\setminus\{0\})$ form a partition of $\mathbb{R}^d\setminus B(0,n^\alpha)$. We first prove the following upper bound:

Proposition 4.2. Let $n\ge 0$ and $k\le n$. Then for any $j\in\mathbb{Z}^d$ and $\alpha>0$,

$$E\log\bigl\langle 1_{S_k\in B_j^\alpha}\bigr\rangle^{(n)} \;\le\; -\frac{n^{2\alpha-1}}{2}\sum_{i=1}^d(j_i-\varepsilon_i)^2,$$

where $\varepsilon_i = \operatorname{sgn}(j_i)$ (with $\varepsilon_i = 0$ if $j_i = 0$).

Proof. Write $a_{k,j}^\alpha = \mathbb{E}\bigl(1_{S_k\in B_j^\alpha}\,e^{\beta\sum_{i=1}^n g(i,S_i)}\bigr)$, so that $\langle 1_{S_k\in B_j^\alpha}\rangle^{(n)} = a_{k,j}^\alpha/Z_n$. Let $\lambda = \tilde\lambda/k$ with $\tilde\lambda_i = (j_i-\varepsilon_i)n^\alpha$, $1\le i\le d$; then let us define the martingale

$$M_p^{\lambda,k} = \begin{cases} e^{\lambda\cdot S_p-p\|\lambda\|^2/2} & \text{if } p\le k,\\ e^{\lambda\cdot S_k-k\|\lambda\|^2/2} & \text{if } p>k,\end{cases}$$

where $x\cdot y$ denotes the usual scalar product in $\mathbb{R}^d$ and $\|x\|$ the associated Euclidean norm. Under the probability $Q^{\lambda,k}$ associated to this martingale, $(S_p)_{p\ge 0}$ has the law of the following shifted random walk under $\mathbb{P}$:

$$\tilde S_p = S_p + \tilde\lambda\Bigl(\frac{p}{k}\wedge 1\Bigr).$$

It follows that:

$$a_{k,j}^\alpha = \mathbb{E}\Bigl(e^{-\frac{1}{k}\bigl(\tilde\lambda\cdot S_k+\|\tilde\lambda\|^2/2\bigr)}\,1_{S_k\in B_j^\alpha-\tilde\lambda}\,e^{\beta\sum_{i=1}^n\tilde g(i,S_i)}\Bigr), \tag{9}$$

where $\tilde g(i,x) = g\bigl(i,\ x+\tilde\lambda(i/k\wedge 1)\bigr)$.

Now we notice that on the event $\{S_k\in B_j^\alpha-\tilde\lambda\}$, one has $\tilde\lambda\cdot S_k\ge 0$: indeed, writing $S_k = (S_k^1,\dots,S_k^d)$, for any $1\le i\le d$ we have $|S_k^i-j_in^\alpha+\tilde\lambda_i|\le n^\alpha$, hence:

• for $j_i\ge 1$, $\tilde\lambda_i = (j_i-1)n^\alpha\ge 0$ and $0\le S_k^i\le 2n^\alpha$;
• for $j_i\le -1$, $\tilde\lambda_i = (j_i+1)n^\alpha\le 0$ and $-2n^\alpha\le S_k^i\le 0$;
• for $j_i = 0$, $\tilde\lambda_i = 0$;

so that in all cases $\tilde\lambda_i S_k^i\ge 0$, and thus $\tilde\lambda\cdot S_k\ge 0$. Therefore, on the event $\{S_k\in B_j^\alpha-\tilde\lambda\}$,

$$e^{-\frac{1}{k}\bigl(\tilde\lambda\cdot S_k+\|\tilde\lambda\|^2/2\bigr)} \;\le\; e^{-\frac{\|\tilde\lambda\|^2}{2n}} = e^{-\frac{n^{2\alpha-1}}{2}\sum_{i=1}^d(j_i-\varepsilon_i)^2},$$

and (9) leads to:

$$a_{k,j}^\alpha \;\le\; e^{-\frac{n^{2\alpha-1}}{2}\sum_{i=1}^d(j_i-\varepsilon_i)^2}\,\mathbb{E}\Bigl(1_{S_k\in B_j^\alpha-\tilde\lambda}\,e^{\beta\sum_{i=1}^n\tilde g(i,S_i)}\Bigr).$$

On the other hand, $Z_n\ge\mathbb{E}\bigl(1_{S_k\in B_j^\alpha-\tilde\lambda}\,e^{\beta\sum_{i=1}^n g(i,S_i)}\bigr)$, and since by stationarity the environment $\tilde g$ has the same distribution as $g$, it follows that for all $j\in\mathbb{Z}^d$,

$$E\log\bigl\langle 1_{S_k\in B_j^\alpha}\bigr\rangle^{(n)} \;\le\; -\frac{n^{2\alpha-1}}{2}\sum_{i=1}^d(j_i-\varepsilon_i)^2.\qquad\square$$

Let $\nu>1/2$. We deduce from Proposition 4.2 and from Corollary 2.5 (with $B(k,n) = B_j^\alpha$) that, $P$-a.s., for $n\ge n_0$, $k\le n$, and all $j\in\mathbb{Z}^d$:

$$\log\bigl\langle 1_{S_k\in B_j^\alpha}\bigr\rangle^{(n)} \;\le\; 2n^\nu - \frac{n^{2\alpha-1}}{2}\sum_{i=1}^d(j_i-\varepsilon_i)^2.$$

So, $P$-a.s., for $n\ge n_0$,

$$\bigl\langle 1_{|S_k|\ge n^\alpha}\bigr\rangle^{(n)} \;\le\; \sum_{j\in(2\mathbb{Z})^d\setminus\{0\}} e^{2n^\nu-\frac{n^{2\alpha-1}}{2}\sum_{i=1}^d(j_i-\varepsilon_i)^2},$$

and

$$\bigl\langle 1_{\{\max_{k\le n}|S_k|\ge n^\alpha\}}\bigr\rangle^{(n)} \;\le\; \sum_{k=1}^n\bigl\langle 1_{|S_k|\ge n^\alpha}\bigr\rangle^{(n)} \;\le\; n\,e^{2n^\nu}\sum_{j\in(2\mathbb{Z})^d\setminus\{0\}} e^{-\frac{n^{2\alpha-1}}{2}\sum_{i=1}^d(j_i-\varepsilon_i)^2}.$$

But by symmetry, for any $C>0$,

$$\sum_{j\in(2\mathbb{Z})^d\setminus\{0\}} e^{-C\sum_{i=1}^d(j_i-\varepsilon_i)^2} \;\le\; 2d\sum_{j_1\in 2\mathbb{Z},\ j_1\ge 2} e^{-C(j_1-1)^2}\ \prod_{i=2}^d\ \sum_{j_i\in 2\mathbb{Z}} e^{-C(j_i-\varepsilon_i)^2},$$

and using that $\sum_{j\ge 1}e^{-Cj} = \frac{e^{-C}}{1-e^{-C}}$, we conclude that, $P$-a.s., for some constant $C(d)>0$ and for $n\ge n_0$:

$$\bigl\langle 1_{\{\max_{k\le n}|S_k|\ge n^\alpha\}}\bigr\rangle^{(n)} \;\le\; C(d)\,n\,e^{2n^\nu}\,\frac{e^{-n^{2\alpha-1}/2}}{1-e^{-n^{2\alpha-1}/2}}.$$

Thus for all $\alpha>\frac{\nu+1}{2}$, $P$-a.s.,

$$\bigl\langle 1_{\{\max_{k\le n}|S_k|\ge n^\alpha\}}\bigr\rangle^{(n)} \xrightarrow[n\to\infty]{} 0.$$

This is true for all $\nu>1/2$, which ends the proof. □

Appendix A. Proof of Lemma 2.4

The beginning of the proof is identical to the one given by Lesigne and Volný in [5, pp. 148–149]; only the last ten lines differ.


Let $(\mathcal{F}_n^i)_{1\le i\le n}$ denote the filtration of $(X_n^i)_{1\le i\le n}$. The hypothesis that it is a martingale difference sequence means that for each $i$, $X_n^i$ is $\mathcal{F}_n^i$-measurable and, if $i\ge 2$, $E[X_n^i\mid\mathcal{F}_n^{i-1}] = 0$. Let us fix $a>0$ and for $1\le i\le n$ define

$$Y_n^i = X_n^i 1_{|X_n^i|\le an^{1/3}} - E\bigl[X_n^i 1_{|X_n^i|\le an^{1/3}}\mid\mathcal{F}_n^{i-1}\bigr]$$

and

$$Z_n^i = X_n^i 1_{|X_n^i|>an^{1/3}} - E\bigl[X_n^i 1_{|X_n^i|>an^{1/3}}\mid\mathcal{F}_n^{i-1}\bigr],$$

and then define $M_n' = \sum_{i=1}^n Y_n^i$ and $M_n'' = \sum_{i=1}^n Z_n^i$. Since $(X_n^i)_{1\le i\le n}$ is a martingale difference sequence, so are $(Y_n^i)_{1\le i\le n}$ and $(Z_n^i)_{1\le i\le n}$, and $X_n^i = Y_n^i + Z_n^i$ ($1\le i\le n$). Let us fix $t\in(0,1)$. For every $x>0$,

$$P\bigl(|M_n|>nx\bigr) \;\le\; P\bigl(|M_n'|>nxt\bigr) + P\bigl(|M_n''|>nx(1-t)\bigr). \tag{A.1}$$

Since $|Y_n^i|\le 2an^{1/3}$ for $1\le i\le n$, Azuma's inequality implies:

$$P\bigl(|M_n'|>nxt\bigr) \;\le\; 2\exp\Bigl(-\frac{t^2x^2}{8a^2}\,n^{1/3}\Bigr). \tag{A.2}$$

To control the second term in (A.1), we notice that $E\bigl((M_n'')^2\bigr) = \sum_{i=1}^n E\bigl((Z_n^i)^2\bigr)$. For each $1\le i\le n$, writing $F_n^i(x) = P(|X_n^i|>x)$:

$$E\bigl((Z_n^i)^2\bigr) = E\Bigl(\bigl(X_n^i 1_{|X_n^i|>an^{1/3}}\bigr)^2\Bigr) - E\Bigl(\bigl(E\bigl[X_n^i 1_{|X_n^i|>an^{1/3}}\mid\mathcal{F}_n^{i-1}\bigr]\bigr)^2\Bigr) \;\le\; E\Bigl(\bigl(X_n^i\bigr)^2 1_{|X_n^i|>an^{1/3}}\Bigr) = -\int_{an^{1/3}}^{+\infty}x^2\,dF_n^i(x).$$

Since $E\,e^{|X_n^i|}\le K$, we have $F_n^i(x)\le Ke^{-x}$ for all $x\ge 0$, hence:

$$-\int_{an^{1/3}}^{+\infty}x^2\,dF_n^i(x) \;\le\; Ka^2n^{2/3}e^{-an^{1/3}} + 2K\int_{an^{1/3}}^{+\infty}xe^{-x}\,dx = K\bigl(a^2n^{2/3}+2an^{1/3}+2\bigr)e^{-an^{1/3}}.$$

It follows that $E\bigl((M_n'')^2\bigr)\le nK\bigl(a^2n^{2/3}+2an^{1/3}+2\bigr)e^{-an^{1/3}}$, and, by Chebyshev's inequality:

$$P\bigl(|M_n''|>nx(1-t)\bigr) \;\le\; \frac{K}{x^2(1-t)^2}\bigl(a^2n^{-1/3}+2an^{-2/3}+2n^{-1}\bigr)e^{-an^{1/3}}. \tag{A.3}$$

We choose $a = \frac{1}{2}(tx)^{2/3}$, so that $\frac{t^2x^2}{8a^2} = a$. From (A.1), (A.2) and (A.3), we deduce:

$$P\bigl(|M_n|>nx\bigr) \;\le\; \Bigl(2+\frac{K}{(1-t)^2}f(t,x,n)\Bigr)\exp\Bigl(-\frac{1}{2}(tx)^{2/3}n^{1/3}\Bigr), \tag{A.4}$$

with $f(t,x,n) = \frac{1}{4}t^{4/3}x^{-2/3}n^{-1/3} + t^{2/3}x^{-4/3}n^{-2/3} + 2x^{-2}n^{-1}$. Now, taking $x = n^{\nu-1}$, we have:

$$P\bigl(|M_n|>n^\nu\bigr) = P\bigl(|M_n|>nx\bigr) \;\le\; \Bigl(2+\frac{K}{(1-t)^2}g(t,n)\Bigr)\exp\Bigl(-\frac{1}{2}t^{2/3}n^{(2\nu-1)/3}\Bigr), \tag{A.5}$$

with $g(t,n) = f(t,n^{\nu-1},n) = \frac{1}{4}t^{4/3}n^{-(2\nu-1)/3} + t^{2/3}n^{-2(2\nu-1)/3} + 2n^{-(2\nu-1)}$. Now we fix $\varepsilon>0$ and choose $t_0\in(0,1)$ such that $0<1-t_0^{2/3}<\varepsilon/2$. Then (A.5) implies that:

$$P\bigl(|M_n|>n^\nu\bigr)\exp\Bigl(\frac{1}{2}(1-\varepsilon)n^{(2\nu-1)/3}\Bigr) \;\le\; \Bigl(2+\frac{K}{(1-t_0)^2}g(t_0,n)\Bigr)\exp\Bigl(-\frac{\varepsilon}{4}n^{(2\nu-1)/3}\Bigr).$$


But, since $\nu>1/2$, $\bigl(2+\frac{K}{(1-t_0)^2}g(t_0,n)\bigr)\exp\bigl(-\frac{\varepsilon}{4}n^{(2\nu-1)/3}\bigr)\to 0$ as $n\to\infty$. Therefore there exists $n_0(\varepsilon)$ such that, for all $n\ge n_0(\varepsilon)$,

$$P\bigl(|M_n|>n^\nu\bigr) \;\le\; \exp\Bigl(-\frac{1}{2}(1-\varepsilon)n^{(2\nu-1)/3}\Bigr). \tag{A.6}$$

When $\varepsilon = 1/2$ this is exactly the statement of Lemma 2.4. □
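A quick numerical illustration of Lemma 2.4 (a toy example, not from the paper): for i.i.d. $\pm 1$ signs the differences are bounded, so the truncation step of the proof is vacuous and $E\,e^{|X_n^i|} = e$; the empirical tail can be compared both with the lemma's bound $\exp(-\frac{1}{4}n^{(2\nu-1)/3})$ and with the much sharper Azuma bound available in the bounded case.

```python
# Toy illustration of Lemma 2.4 (not from the paper): i.i.d. +/-1 signs form a
# bounded martingale difference sequence, so E[exp|X_n^i|] = e and the
# truncation step of the proof is vacuous.  We compare the empirical tail
# P(|M_n| > n^nu) with the lemma's bound and with the sharper Azuma bound
# 2*exp(-s^2/(2n)) valid when the differences are bounded by 1.
import math
import random

random.seed(42)
n, nu, trials = 1000, 0.75, 2000
threshold = n ** nu                                  # deviation level n^nu
exceed = sum(abs(sum(random.choice((-1, 1)) for _ in range(n))) > threshold
             for _ in range(trials))
empirical = exceed / trials
lemma_bound = math.exp(-0.25 * n ** ((2 * nu - 1) / 3))   # exp(-n^{(2nu-1)/3}/4)
azuma_bound = 2.0 * math.exp(-threshold ** 2 / (2 * n))   # Azuma for |X_i| <= 1
print(f"empirical: {empirical:.4f}  lemma bound: {lemma_bound:.4f}  "
      f"azuma bound: {azuma_bound:.2e}")
```

The gap between the two bounds reflects the generality of Lemma 2.4, which only assumes an exponential moment, not boundedness.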

References

[1] P. Carmona, Y. Hu, On the partition function of a directed polymer in a Gaussian random environment, Probab. Theory Related Fields 124 (3) (2002) 431–457.
[2] F. Comets, T. Shiga, N. Yoshida, Directed polymers in random environment: path localization and strong disorder, Bernoulli 9 (4) (2003) 705–723.
[3] F. Comets, N. Yoshida, Brownian directed polymers in random environment, Comm. Math. Phys. (2003), submitted for publication.
[4] A. Dembo, O. Zeitouni, Large Deviations Techniques and Applications, second ed., Applications of Mathematics, vol. 38, Springer-Verlag, New York, 1998.
[5] E. Lesigne, D. Volný, Large deviations for martingales, Stochastic Process. Appl. 96 (1) (2001) 143–159.
[6] C. Licea, C.M. Newman, M.S.T. Piza, Superdiffusivity in first-passage percolation, Probab. Theory Related Fields 106 (4) (1996) 559–591.
[7] J.Z. Imbrie, T. Spencer, Diffusion of directed polymers in a random environment, J. Statist. Phys. 52 (3–4) (1988) 609–626.
[8] M. Petermann, Superdiffusivity of directed polymers in random environment, Part of thesis, 2000.
[9] M.S.T. Piza, Directed polymers in a random environment: some results on fluctuations, J. Statist. Phys. 89 (3–4) (1997) 581–603.