Stochastic Processes and their Applications 112 (2004) 285 – 308 www.elsevier.com/locate/spa
Fluctuation exponents and large deviations for directed polymers in a random environment

Philippe Carmona^a, Yueyun Hu^{b,*}

^a Laboratoire de Mathématiques Jean Leray, Université de Nantes, 2 Rue de la Houssinière, BP 92208, F-44322 Nantes Cedex 03, France
^b Laboratoire de Probabilités et Modèles Aléatoires, CNRS UMR 7599, Université Paris VI, case 188, 4 Place Jussieu, 75005 Paris Cedex 05, France

Received 17 February 2003; received in revised form 24 February 2004; accepted 15 March 2004
Abstract

For the model of directed polymers in a Gaussian random environment introduced by Imbrie and Spencer, we establish:
• a large deviations principle for the end position of the polymer under the Gibbs measure;
• a scaling inequality between the volume exponent and the fluctuation exponent of the free energy;
• a relationship between the volume exponent and the rate function of the large deviations principle.
© 2004 Elsevier B.V. All rights reserved.

MSC: 60K37

Keywords: Random environment; Directed polymers; Large deviations; Concentration of measure
1. Introduction

Let us first describe the model of directed polymers in a random environment introduced by Imbrie and Spencer (1988). The random medium is a family $(g(k,x);\ k\ge1,\ x\in\mathbb{Z}^d)$ of independent identically distributed random variables defined on a probability space $(\Omega^{(g)},\mathcal{F}^{(g)},P)$. Let $(S_n)$ be the simple symmetric random walk on $\mathbb{Z}^d$. Let $P_x$ be the probability measure of $(S_n)_{n\in\mathbb{N}}$ starting at $x\in\mathbb{Z}^d$ and let $E_x$ be the corresponding expectation.*
* Corresponding author. Tel.: +33-1-44-27-70-44; fax: +33-1-44-27-72-73.
E-mail addresses: [email protected] (P. Carmona), [email protected], [email protected] (Y. Hu).

0304-4149/$ - see front matter © 2004 Elsevier B.V. All rights reserved.
doi:10.1016/j.spa.2004.03.006
The object of our study is the Gibbs measure $\langle\cdot\rangle^{(n)}$, a random probability measure defined as follows. Let $\Omega_n$ be the set of nearest-neighbour paths of length $n$:

$\Omega_n \stackrel{\mathrm{def}}{=} \{\gamma : \{1,\ldots,n\}\to\mathbb{Z}^d;\ |\gamma(k)-\gamma(k-1)|=1,\ k=2,\ldots,n\}.$

For any measurable function $F : \Omega_n\to\mathbb{R}_+$,

$\langle F(S)\rangle^{(n)} \stackrel{\mathrm{def}}{=} \frac{1}{Z_n(\beta)}\, E_0\big(F(S)\,e^{\beta H_n(g,S)}\big), \qquad H_n(g,\gamma) \stackrel{\mathrm{def}}{=} \sum_{i=1}^n g(i,\gamma(i)),$

where here and in the sequel $\beta>0$ denotes some fixed positive constant and $Z_n(\beta)$ is the partition function

$Z_n(\beta) = Z_n(\beta;g) = E_0\big(e^{\beta H_n(g,S)}\big). \qquad (1.1)$
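For concreteness, the partition function (1.1) can be computed exactly for small $n$ by enumerating all nearest-neighbour paths. A minimal Python sketch for $d=1$ (the function name and the toy parameters $n=8$, $\beta=0.5$ are our own illustration, not from the paper):

```python
import itertools, math, random

def partition_function(beta, g, n):
    """Exact Z_n(beta) = E_0[exp(beta * H_n(g, S))] for the simple random walk
    on Z (d = 1), obtained by enumerating all 2^n nearest-neighbour paths."""
    total = 0.0
    for steps in itertools.product((-1, 1), repeat=n):
        s, h = 0, 0.0
        for i, step in enumerate(steps, start=1):
            s += step
            h += g[(i, s)]          # environment variable g(i, x) at the visited site
        total += math.exp(beta * h)
    return total / 2 ** n           # E_0 = uniform average over the 2^n paths

rng = random.Random(0)
n, beta = 8, 0.5
g = {(i, x): rng.gauss(0.0, 1.0) for i in range(1, n + 1) for x in range(-n, n + 1)}
Z_n = partition_function(beta, g, n)
zero_env = {key: 0.0 for key in g}
```

In the zero environment every path has weight one, so the partition function equals 1; in a random environment it is strictly positive, as required for the Gibbs measure to be well defined.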
In other words, for a given realization $g(\omega)$ of the environment, the Gibbs measure gives to a polymer chain having energy $H_n(g,\gamma)$ at temperature $T=1/\beta$ a weight proportional to $e^{\beta H_n(g,\gamma)}$. Dating back to the pioneering work of Imbrie and Spencer (1988), the situation in dimension $d\ge3$ is well understood for small $\beta>0$: there exists some constant $\beta_0>0$ such that for all $0<\beta<\beta_0$, $S_n$ is diffusive under $\langle\cdot\rangle^{(n)}$.

Theorem A (Imbrie and Spencer, 1988; Bolthausen, 1989; Sinai, 1995; Albeverio and Zhou, 1996). If $d\ge3$, then there exists $\beta_0=\beta_0(d)>0$ such that for $0<\beta<\beta_0$,

$\frac{\langle|S_n|^2\rangle^{(n)}}{n} \to 1, \qquad P\text{-a.s.} \qquad (1.2)$
It is believed in many physics papers (see e.g. Huse and Henley, 1985; Kardar, 1985) that in low dimension $d=1,2$ and for small enough $\beta$, $\langle|S_n|^2\rangle^{(n)}$ behaves as $n^{2\xi}$ for some $\xi>\frac12$. However, no rigorous proof has been given yet. The aim of this paper is to establish some results when the random medium is given by i.i.d. $\mathcal{N}(0,1)$ Gaussian random variables. Our results hold for all dimensions $d\ge1$ and all $\beta>0$; however, the behaviour of $\langle\cdot\rangle^{(n)}$ differs markedly between small and large $\beta$, even on the same underlying $d$-dimensional lattice.

Our first result is a large deviations principle for the end position of the polymer under the Gibbs measure. Denote by $p(\beta)$ the free energy, which plays an important role: for $d\ge1$ and $\beta>0$,

$p(\beta) \stackrel{\mathrm{def}}{=} \sup_{n\ge1} \frac1n E\log Z_n(\beta) = \lim_{n\to\infty}\frac1n\log Z_n(\beta) \in \Big[0,\frac{\beta^2}{2}\Big], \qquad P\text{-a.s.} \qquad (1.3)$

(the above limit exists and also holds in $L^1(P)$, cf. Section 2). Let $\mathcal{D}_d \stackrel{\mathrm{def}}{=} \{x=(x_1,\ldots,x_d)\in\mathbb{R}^d : |x_1|+\cdots+|x_d|\le1\}$. Let $\mathcal{D}_d^{\mathrm{o}}$ be the interior of $\mathcal{D}_d$ and $\partial\mathcal{D}_d$ its boundary.
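The bracketing $0 \le p(\beta) \le \beta^2/2$ in (1.3) (the upper bound is Jensen's inequality against the annealed average $E\,Z_n = e^{n\beta^2/2}$) can be probed by simulation. A minimal sketch for $d=1$; the sample sizes and parameters are our own choices, and the bounds in the check are loosened by a Monte Carlo margin:

```python
import itertools, math, random

def log_Zn(beta, g, n):
    # Exact log Z_n(beta) for d = 1 by enumerating the 2^n nearest-neighbour paths.
    total = 0.0
    for steps in itertools.product((-1, 1), repeat=n):
        s, h = 0, 0.0
        for i, step in enumerate(steps, start=1):
            s += step
            h += g[(i, s)]
        total += math.exp(beta * h)
    return math.log(total / 2 ** n)

rng = random.Random(0)
n, beta, K = 8, 0.5, 200
samples = []
for _ in range(K):                   # K independent environments
    g = {(i, x): rng.gauss(0.0, 1.0) for i in range(1, n + 1) for x in range(-n, n + 1)}
    samples.append(log_Zn(beta, g, n) / n)
p_n_est = sum(samples) / K           # Monte Carlo estimate of p_n(beta) = (1/n) E log Z_n
```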
We have

Theorem 1.1. For $d\ge1$ and $\beta>0$, there exists a deterministic convex rate function $I : \mathcal{D}_d \to [0,\log(2d)+p(\beta)]$ such that $P$ almost surely,

$\limsup_{n\to\infty} \frac1n \log\big\langle 1_{(S_n/n\in F)}\big\rangle^{(n)} \le -\inf_{\kappa\in F} I(\kappa) \quad \text{for closed } F\subset\mathcal{D}_d,$

$\liminf_{n\to\infty} \frac1n \log\big\langle 1_{(S_n/n\in G)}\big\rangle^{(n)} \ge -\inf_{\kappa\in G} I(\kappa) \quad \text{for open } G\subset\mathcal{D}_d.$

The function $I$ is continuous in the interior of $\mathcal{D}_d$ and

$\lim_{y\to x:\ y\in\mathcal{D}_d^{\mathrm{o}}} I(y) = I(x), \qquad \forall x\in\partial\mathcal{D}_d. \qquad (1.4)$
We shall prove that the rate function $I$ is exactly the pointwise rate function, at least in the interior of $\mathcal{D}_d$.

Theorem 1.2. Let $d\ge1$ and $\beta>0$. For $\alpha\in\mathcal{D}_d^{\mathrm{o}}\cap\mathbb{Q}^d$, we have

$\frac1n \log\big\langle 1_{(S_n=n\alpha)}\big\rangle^{(n)} \to -I(\alpha), \qquad P\text{-a.s.}, \qquad (1.5)$

where we take the limit along $n$ such that $P_0(S_n=n\alpha)>0$. Moreover, $I(0)=0$, $I(\lambda_1,\ldots,\lambda_d) = I(\pm\lambda_{\sigma(1)},\ldots,\pm\lambda_{\sigma(d)})$ for any permutation $\sigma$ of $[1,\ldots,d]$, and for $e_1=(1,0,\ldots,0)\in\mathbb{Z}^d$ we have

$I(e_1) = \log(2d) + p(\beta). \qquad (1.6)$

Finally, there exists some positive constant $c_d>0$ depending only on $d$ such that

$I(\lambda) \ge c_d\Big(|\lambda|^2 - \Big(\frac{\beta^2}{2} - p(\beta)\Big)\Big)_+, \qquad \lambda\in\mathcal{D}_d. \qquad (1.7)$

For the sake of notational convenience, we have omitted (and shall omit) the dependence on $d$ in $I$ (and in other quantities).

Remark 1.3. It remains open to characterize the zero set of $I$. In the convergence (1.5), we do not know what happens at a general boundary point $\alpha\in\partial\mathcal{D}_d$. However, (1.6) shows that (1.5) still holds for those $2d$ boundary points.

Our second result is a scaling inequality between the volume exponent and the fluctuation exponent of the free energy. Following Piza (1997), we define the volume exponent

$\xi \stackrel{\mathrm{def}}{=} \inf\{\alpha>0 : \langle 1_{(\max_{k\le n}|S_k|\le n^\alpha)}\rangle^{(n)} \to 1 \text{ in } P\text{-probability}\}$
and the fluctuation exponent of the free energy

$\chi = \sup\{\alpha>0 : \mathrm{Var}(\log Z_n) \ge n^{2\alpha} \text{ for all large } n\},$

with the convention here that $\sup\emptyset = 0$. We shall establish a scaling inequality similar to the one obtained by Piza (1997) in a different framework: he works with a polymer model more closely related to oriented percolation, where furthermore the potentials $g$ are assumed non-positive.

Theorem 1.4. For all $d\ge1$ and $\beta>0$,

$\chi \ge \frac{1 - d\xi}{2}.$
Remark 1.5. If we believe in the superdiffusivity of $(S_n)$ under $\langle\cdot\rangle^{(n)}$, i.e. $\xi\ge\frac12$ (which, to the best of our knowledge, remains unproven), the above inequality is trivial when $d\ge2$.

The large deviations result (Theorem 1.1) may at first sight seem unrelated to the two exponents $\xi$ and $\chi$. However, there is a close relationship between the volume exponent and the rate function $I$ of the large deviations principle.

Theorem 1.6. Assume that for some constants $\gamma>1$ and $c>0$, the rate function satisfies

$I(\alpha) \ge c|\alpha|^\gamma, \qquad \forall\alpha\in\mathcal{D}_d. \qquad (1.8)$

Then the volume exponent satisfies

$\xi \le 1 - \frac{1}{2\gamma}. \qquad (1.9)$

Using the lower bound on $I$ in (1.7), we deduce from Theorem 1.6 the following corollary.

Corollary 1.7. We have

$p(\beta) = \frac{\beta^2}{2} \;\Rightarrow\; \xi \le \frac34 \qquad (1.10)$

$\;\Rightarrow\; \chi \ge \frac18 \quad \text{if } d=1. \qquad (1.11)$
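The numerical chain in Corollary 1.7 is elementary arithmetic: the quadratic lower bound of (1.7) gives exponent $\gamma = 2$ in Theorem 1.6, and the resulting bound on $\xi$ feeds into Theorem 1.4 with $d=1$. A quick exact check:

```python
from fractions import Fraction

gamma = Fraction(2)                 # I(kappa) >= c|kappa|^2 when p(beta) = beta^2/2
xi_bound = 1 - 1 / (2 * gamma)      # Theorem 1.6: xi <= 1 - 1/(2*gamma) = 3/4
d = 1
chi_bound = (1 - d * xi_bound) / 2  # Theorem 1.4: chi >= (1 - d*xi)/2 = 1/8
```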
When $d\ge3$ and $\beta>0$ is small, we have $\xi=\frac12$, $p(\beta)=\beta^2/2$ (cf. Theorem A) and $\chi=0$ (by using, e.g., Theorem 1.5 of Carmona and Hu, 2002); therefore (1.10) does not give any effective bound in this situation. Nevertheless, it seems interesting that some (rough) bounds on the volume and fluctuation exponents can be obtained in terms of the free energy alone. It is worth noticing that in two related models, the exponent $\frac34$ is universal for all $d\ge1$ and $\beta>0$. More precisely, when $(S_n)$ is replaced by a discrete-time $d$-dimensional
Brownian motion and $g(\cdot,\cdot)$ by a Gaussian field, Petermann (2000) showed $\xi\ge\frac35$ in the one-dimensional case, and Méjane (2004) showed $\xi\le\frac34$ for all $d\ge1$. Comets and Yoshida (2003) recently presented another model, with Brownian motion in a Poisson environment; they also showed that $\xi\le\frac34$. In both of these models, Comets and Yoshida (2003, Theorem 2.4.4) and Méjane (2003, Theorem 4.3.1) obtained a large deviations principle and computed the rate function $I(\alpha)=|\alpha|^2/2$ by using the Girsanov transform for Brownian motion and the rotation invariance of the models. It would be very interesting to obtain an invariance principle connecting the Brownian motion model and the random walk model.

This paper is organized as follows. In Section 2, we estimate the rate of convergence of the $n$th free energy (Propositions 2.3 and 2.4) by using the concentration inequality. The proof of Theorem 1.1 is given in Section 3 by using subadditivity, whereas Theorem 1.2 is proven in Section 4 with Lemmas 4.1 and 4.3. In Section 5, we give a weak law of large numbers for the polymer measure with biased random walk; in Sections 6 and 7, we prove, respectively, Theorems 1.4 and 1.6. Throughout this paper, $c$, $c'$, $c''$ denote unimportant positive constants whose values may change from one paragraph to another.
2. Notations and basic tools

For the sake of notational convenience, we sometimes omit the dependence on $\beta$ in the partition functions. In addition to the partition function $Z_n$, we define the partition function starting from $x$:

$Z_n(x) \stackrel{\mathrm{def}}{=} Z_n(x;g) = E_x\big(e^{\beta H_n(g,S)}\big), \qquad H_n(g,\gamma) = \sum_{i=1}^n g(i,\gamma(i)),$

and the point-to-point partition function

$Z_n(x,y) = Z_n(x,y;g) = E_x\big(e^{\beta H_n(g,S)} 1_{(S_n=y)}\big).$

This function is strictly positive if and only if there exists a nearest-neighbour path of length $n$ connecting $x$ to $y$, i.e. $P_x(S_n=y) = P_0(S_n=y-x) > 0$. We denote this fact by $y-x \hookleftarrow n$. Observe that $x \hookleftarrow n$ if and only if

$n - \sum_{j=1}^d x_j \equiv 0 \ (\mathrm{mod}\ 2) \quad \text{and} \quad \sum_{j=1}^d |x_j| \le n,$

with $x=(x_1,\ldots,x_d)\in\mathbb{Z}^d$. We shall also write $\sum_{x\hookleftarrow n}$ to mean that the sum is taken over those $x$ such that $x\hookleftarrow n$. Let $\theta_n$ be the time shift of order $n$ on the environment:

$(\theta_n g)(k,x) = g(k+n,x) \qquad (x\in\mathbb{Z}^d,\ k\ge1).$
Our first tool is

Lemma 2.1 (Markov property). For all integers $n,m$ and every $x,y\in\mathbb{Z}^d$, we have

$Z_{n+m}(x) = \sum_y Z_n(x,y;g)\,Z_m(y;\theta_n g), \qquad (2.1)$

$Z_{n+m}(x,z) = \sum_y Z_n(x,y;g)\,Z_m(y,z;\theta_n g). \qquad (2.2)$

Proof. The two identities are proved similarly, with the help of the Markov property for the random walk $S_n$. Indeed,

$H_{n+m}(g,S) = \sum_{i=1}^{n+m} g(i,S_i) = H_n(g,S) + H_m(\theta_n g, S_{n+\cdot}).$

Therefore,

$Z_{n+m}(x;g) = E_x\big(e^{\beta H_n(g,S)} e^{\beta H_m(\theta_n g,S_{n+\cdot})}\big) = E_x\big(e^{\beta H_n(g,S)} Z_m(S_n;\theta_n g)\big) = \sum_y Z_n(x,y;g)\,Z_m(y;\theta_n g).$
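Identity (2.1) can be verified by brute force on small systems. A sketch in Python for $d=1$ ($\beta=0.5$, the helper `Z` and its arguments are our own illustration, not the paper's notation):

```python
import itertools, math, random

BETA = 0.5

def Z(x, g, n, shift=0, end=None):
    """E_x[exp(BETA * H_n) 1_{S_n = end}] for the simple walk on Z, with the
    environment time-shifted by `shift` (i.e. using theta_shift g).
    end=None gives the free-endpoint partition function Z_n(x)."""
    total = 0.0
    for steps in itertools.product((-1, 1), repeat=n):
        s, h = x, 0.0
        for i, step in enumerate(steps, start=1):
            s += step
            h += g[(shift + i, s)]
        if end is None or s == end:
            total += math.exp(BETA * h)
    return total / 2 ** n

rng = random.Random(1)
n, m = 4, 3
R = n + m
g = {(i, x): rng.gauss(0.0, 1.0) for i in range(1, R + 1) for x in range(-R - 1, R + 2)}
lhs = Z(0, g, n + m)                                             # Z_{n+m}(0)
rhs = sum(Z(0, g, n, end=y) * Z(y, g, m, shift=n) for y in range(-n, n + 1))
```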
Let us recall the concentration of measure property of Gaussian processes (see Ibragimov et al., 1975).

Proposition 2.2. Consider a function $F : \mathbb{R}^M\to\mathbb{R}$ and assume that its Lipschitz constant is at most $A$, i.e.

$|F(x) - F(y)| \le A\|x-y\| \qquad (x,y\in\mathbb{R}^M),$

where $\|x\|$ denotes the Euclidean norm of $x$. Then if $g=(g_1,\ldots,g_M)$ are i.i.d. $\mathcal{N}(0,1)$, we have

$P\big(|F(g) - E(F(g))| \ge u\big) \le \exp\Big(-\frac{u^2}{2A^2}\Big) \qquad (u>0).$

Following Talagrand (1998), we apply this proposition to partition functions. Define, for $n$ an integer, $p_n(\beta) = E[(1/n)\log Z_n(\beta)]$. It is easy to establish the convergence of the free energy (1.3) (cf. Carmona and Hu, 2002). Indeed, we deduce from Lemma 2.1 that the sequence $n \mapsto n\,p_n(\beta) = E[\log Z_n(\beta)]$ is superadditive, so $p(\beta) = \lim p_n(\beta)$ is well defined. The concentration of measure (Proposition 2.3) implies that $(1/n)\log Z_n(\beta) - p_n(\beta) \to 0$ a.s. and in $L^1$. See also Comets et al. (2003), where general environments $g$ are studied by using martingale deviations.
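In the proof of Proposition 2.3 below, Proposition 2.2 is applied to $F(z) = \log E_0 \exp(\beta\sum_i z_{i,S_i})$, whose Lipschitz constant is at most $\beta\sqrt n$ because its gradient is $\beta$ times the vector of Gibbs-averaged occupation indicators. This gradient bound can be checked numerically for $d=1$; the parameters below are illustrative:

```python
import itertools, math, random

beta, n = 0.7, 5
rng = random.Random(2)
z = {(i, x): rng.gauss(0.0, 1.0) for i in range(1, n + 1) for x in range(-n, n + 1)}

# Enumerate the 2^n simple-walk paths; a path's weight is prop. to exp(beta * sum z_{i,S_i}).
weights, occupations = [], []
for steps in itertools.product((-1, 1), repeat=n):
    s, tot, occ = 0, 0.0, []
    for i, step in enumerate(steps, start=1):
        s += step
        tot += z[(i, s)]
        occ.append((i, s))
    weights.append(math.exp(beta * tot))
    occupations.append(occ)

W = sum(weights)
grad = {}
for w, occ in zip(weights, occupations):
    for key in occ:
        grad[key] = grad.get(key, 0.0) + beta * w / W   # dF/dz_{i,x} = beta * <1_{(S_i=x)}>
grad_norm = math.sqrt(sum(v * v for v in grad.values()))
```

Since the averaged occupation probabilities at each time $i$ sum to one, the squared gradient norm is at most $\beta^2 n$, which is exactly the Lipschitz bound used in the proof.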
Proposition 2.3. (i) For any $u>0$ and $x\hookleftarrow n$,

$P\Big(\Big|\frac1n\log Z_n(\beta) - p_n(\beta)\Big| \ge u\Big) \le \exp\Big(-\frac{nu^2}{2\beta^2}\Big), \qquad (2.3)$

$P\big(|\log Z_n(0,x) - E[\log Z_n(0,x)]| \ge u\big) \le \exp\Big(-\frac{u^2}{2n\beta^2}\Big). \qquad (2.4)$

(ii) Let $\theta>\frac12$. Then almost surely there exists $n_0(\omega)$ such that for every $n\ge n_0$:

$|\log Z_n(\beta) - n\,p_n(\beta)| \le n^\theta, \qquad (2.5)$

$|\log Z_n(0,x) - E[\log Z_n(0,x)]| \le n^\theta \qquad (x\in\mathbb{Z}^d,\ x\hookleftarrow n). \qquad (2.6)$
(iii) There exists a constant $c = c(d,\beta) > 0$ such that

$E\exp\Big(\frac{1}{\beta\sqrt n}\,|\log Z_n(0,x) - E(\log Z_n(0,x))|\Big) \le c \qquad (n\ge1,\ x\hookleftarrow n).$

Proof. (i) Fix $n$. Consider the set $\Lambda = \{(i,x) : 1\le i\le n,\ x\hookleftarrow i\}$ and let $M = \#\Lambda$. We define a function $F : \mathbb{R}^M\to\mathbb{R}$ by

$F(z) = \log E_0\exp\Big(\beta\sum_{i=1}^n z_{i,S_i}\Big) = \log E_0\exp\Big(\beta\sum_{(i,x)\in\Lambda} z_{i,x} 1_{(S_i=x)}\Big) = \log E_0\big[e^{\beta a^S\cdot z}\big],$

where $z=(z_{i,x};\ (i,x)\in\Lambda)\in\mathbb{R}^M$ and $a^\gamma\in\mathbb{R}^M$ is the vector with coordinates

$a^\gamma_{i,x} = 1_{(\gamma(i)=x)} \qquad ((i,x)\in\Lambda).$

The Cauchy–Schwarz inequality yields

$|a^\gamma\cdot z - a^\gamma\cdot z'| \le \|a^\gamma\|\,\|z-z'\| = \sqrt n\,\|z-z'\|.$

Therefore $F$ has Lipschitz constant at most $A = \beta\sqrt n$, and we obtain the concentration of measure inequality (2.3). Inequality (2.4) is obtained in the same way.

(ii) The second part of the proposition is proved by introducing the events

$A_n = \bigcup_{x\hookleftarrow n}\big\{|\log Z_n(0,x) - E[\log Z_n(0,x)]| \ge n^\theta\big\} \cup \big\{|\log Z_n(\beta) - n\,p_n(\beta)| \ge n^\theta\big\}.$

Then $P(A_n) \le c_d n^d \exp(-n^{2\theta-1}/2\beta^2)$, so $\sum_n P(A_n) < +\infty$ and we conclude by the Borel–Cantelli lemma.
(iii) The third part is a straightforward consequence of the concentration inequality (2.4):

$E\exp\Big(\frac{1}{\beta\sqrt n}\,|\log Z_n(0,x) - E(\log Z_n(0,x))|\Big) = \int_0^\infty P\big(|\log Z_n(0,x) - E[\log Z_n(0,x)]| \ge \beta\sqrt n\,\log v\big)\,dv \le 1 + \int_1^\infty \exp\Big(-\frac{(\log v)^2}{2}\Big)\,dv = c(d,\beta) < +\infty.$

Proposition 2.4. Fix $\theta>\frac12$.

(i) For all large $n$, we have

$E[\log Z_n(\beta)] \le n^\theta + \frac12 E[\log Z_{2n}(0,0)].$

(ii) For $m$ large enough,

$\frac1m E[\log Z_m(\beta)] \le p(\beta) \le \frac1m E[\log Z_m(\beta)] + m^{\theta-1},$

and almost surely, for $m$ large enough,

$\frac1m \log Z_m(\beta) - m^{\theta-1} \le p(\beta) \le \frac1m \log Z_m(\beta) + m^{\theta-1}.$

(iii) Almost surely, for all large even $n$,

$0 \ge \log\big\langle 1_{(S_n=0)}\big\rangle^{(n)} \ge -n^\theta.$

Proof. (i) The Markov property (Lemma 2.1) implies that for $x\in\mathbb{Q}^d$ such that $nx\hookleftarrow n$,

$Z_{2n}(0,0;g) \ge Z_n(0,nx;g)\,Z_n(nx,0;\theta_n g).$

Therefore,

$E[\log Z_{2n}(0,0)] \ge E[\log Z_n(0,nx)] + E[\log Z_n(nx,0)] = 2E[\log Z_n(0,nx)]. \qquad (2.7)$

Let $\frac12 < \theta' < \theta$ and let $\varepsilon = n^{-\theta'}$. Then

$E[\log Z_n] \le \frac1\varepsilon \log E[Z_n^\varepsilon] \quad \text{(Jensen's inequality)}$
$\le \frac1\varepsilon \log E\Big[\sum_{x\hookleftarrow n} Z_n^\varepsilon(0,x)\Big] \quad \text{(since } 0<\varepsilon<1)$
$= \frac1\varepsilon \log \sum_{x\hookleftarrow n} E\big[e^{\varepsilon(\log Z_n(0,x)-E[\log Z_n(0,x)])}\big]\,e^{\varepsilon E[\log Z_n(0,x)]}$
$\le \frac1\varepsilon \log\Big(c\sum_{x\hookleftarrow n} e^{\varepsilon E[\log Z_n(0,x)]}\Big) \quad \text{(by Proposition 2.3)}$
$\le \frac1\varepsilon \log\big(c(2n+1)^d e^{(\varepsilon/2)E[\log Z_{2n}(0,0)]}\big) \quad \text{(by (2.7))}$
$\le n^\theta + \frac12 E[\log Z_{2n}(0,0)]$

for all large $n$.

(ii) The second claim in (ii) follows from the first one and from the concentration of measure property. We omit the parameter $\beta$. By construction, $(1/m)E[\log Z_m] \le p(\beta)$; therefore we only need to establish that for $m$ large enough,

$p(\beta) \le \frac1m E[\log Z_m] + m^{\theta-1}.$

Fix a large $m$ and a small $0 < \varepsilon = \varepsilon(m) < 1$ whose value will be determined later. Define

$h_\varepsilon(n) \stackrel{\mathrm{def}}{=} \log E[Z_n^\varepsilon] \ge \varepsilon E[\log Z_n].$

Observe that by the Markov property and the concavity of $x\mapsto x^\varepsilon$,

$Z_{n+k}^\varepsilon = \Big(\sum_{x\hookleftarrow n} Z_n(0,x;g)\,Z_k(x;\theta_n g)\Big)^\varepsilon \le \sum_{x\hookleftarrow n} Z_n^\varepsilon(0,x;g)\,Z_k^\varepsilon(x;\theta_n g).$

Hence, by independence,

$h_\varepsilon(n+k) \le \log\sum_{x\hookleftarrow n} E[Z_n^\varepsilon(0,x;g)]\,E[Z_k^\varepsilon(x;\theta_n g)] \le \log\big((2n)^d E[Z_n^\varepsilon]E[Z_k^\varepsilon]\big) = h_\varepsilon(n) + h_\varepsilon(k) + d\log(2n).$

Thanks to Hammersley's (1962) general subadditivity theorem, the limit $h(\varepsilon) = \lim_{n\to+\infty} h_\varepsilon(n)/n$ exists and satisfies, for all large $m$,

$h(\varepsilon) \le \frac{h_\varepsilon(m)}{m} + c\,\frac{\log m}{m}.$

Recall that $h_\varepsilon(n) \ge \varepsilon E[\log Z_n]$; hence $h(\varepsilon) \ge \varepsilon p(\beta)$. We now choose $\varepsilon = m^{-\theta'}$ with $\frac12 < \theta' < \theta$. We obtain

$h_\varepsilon(m) = \log E\big[e^{\varepsilon(\log Z_m - E[\log Z_m])}\big] + \varepsilon E[\log Z_m] \le c + \varepsilon E[\log Z_m] \quad \text{(by Proposition 2.3)}.$

Therefore,

$p(\beta) \le \frac{h(\varepsilon)}{\varepsilon} \le \frac{h_\varepsilon(m)}{m\varepsilon} + c\,\frac{\log m}{m\varepsilon} \le \frac1m E[\log Z_m] + c'\,m^{\theta'-1}\log m,$

proving the desired result since $\theta' < \theta$.

(iii) Combining (i) and (ii), we have for all $n$ even and large enough,

$\frac1n E[\log Z_n(0,0)] \le p(\beta) \le \frac1n E[\log Z_n(0,0)] + n^{\theta-1};$

hence $E\big(\log\langle 1_{(S_n=0)}\rangle^{(n)}\big) \ge -n^\theta$. This, in view of the concentration property (Proposition 2.3), implies (iii).

We end this section by recalling a well-known result (see Talagrand, 2003, Proposition 1.1.3).

Lemma 2.5. Let $(g(i))_{1\le i\le N}$ be a family of $\mathcal{N}(0,1)$ random variables, not necessarily independent. Then

$E\max_{1\le i\le N} g(i) \le \sqrt{2\log N}.$
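Lemma 2.5 is easy to test by simulation (for independent Gaussians, where the bound is not tight but must hold); $N$ and the sample size below are arbitrary choices:

```python
import math, random

rng = random.Random(3)
N, trials = 500, 1000
# Monte Carlo estimate of E[max of N i.i.d. standard Gaussians]
est = sum(max(rng.gauss(0.0, 1.0) for _ in range(N)) for _ in range(trials)) / trials
bound = math.sqrt(2 * math.log(N))  # Lemma 2.5 upper bound
```

For $N=500$ the empirical mean of the maximum is close to 2.9, comfortably below $\sqrt{2\log 500}\approx 3.53$.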
3. Large deviations principle: proof of Theorem 1.1

The proof of Theorem 1.1, inspired by Varadhan (2002), relies essentially on subadditivity. Fix $d\ge1$ and $\beta>0$. Firstly, we establish an auxiliary lemma.

Lemma 3.1. For any $\lambda>0$, there exists a continuous convex function $I^{(\lambda)} : \mathbb{R}^d\to\mathbb{R}_+$ such that almost surely, for any $\kappa\in\mathcal{D}_d$ and any sequence $x_n\in\mathbb{R}^d$ satisfying $x_n/n\to\kappa$, we have

$-\frac1n \log\big\langle e^{-\lambda|S_n-x_n|}\big\rangle^{(n)} \to I^{(\lambda)}(\kappa).$

The above limit also holds in $L^1$. Furthermore,

$I^{(\lambda)}(\kappa) \le p(\beta) + \log(2d), \qquad \kappa\in\mathcal{D}_d,$

where $p(\beta)$ is the free energy defined in (1.3).
Proof. Define

$V_n^{(\lambda)}(x,y;g) = \log E_x\Big[e^{\beta\sum_{i=1}^n g(i,S_i)}\,e^{-\lambda|S_n-y|}\Big], \qquad x\in\mathbb{Z}^d,\ y\in\mathbb{R}^d.$

For $i\ge0$ and $x\in\mathbb{Z}^d$, denote by $\theta_{i,x}$ the shift operator $\theta_{i,x}\circ g(\cdot,\cdot) = g(i+\cdot,\,x+\cdot)$. Then $V_n^{(\lambda)}(x,y;g) = V_n^{(\lambda)}(0,y-x;\theta_{0,x}\circ g)$. Let $n,m\ge1$, $x\in\mathbb{Z}^d$ and $y,z\in\mathbb{R}^d$. Since $|S_{n+m}-y| \le |S_n-z| + |(S_{n+m}-S_n)-(y-z)|$, we have

$V_{n+m}^{(\lambda)}(x,y;g) \ge \log E_x\Big[e^{\beta\sum_{i=1}^{n+m} g(i,S_i)}\,e^{-\lambda|S_n-z|}\,e^{-\lambda|(S_{n+m}-S_n)-(y-z)|}\Big]$
$= \log E_x\Big[e^{\beta\sum_{i=1}^n g(i,S_i)-\lambda|S_n-z|}\,e^{V_m^{(\lambda)}(0,y-z;\,\theta_{n,S_n}\circ g)}\Big]$
$= V_n^{(\lambda)}(x,z) + \log\sum_{u\in\mathbb{Z}^d} \rho_n(u)\,e^{V_m^{(\lambda)}(0,y-z;\,\theta_{n,u}\circ g)}$
$\ge V_n^{(\lambda)}(x,z) + \sum_{u\in\mathbb{Z}^d} \rho_n(u)\,V_m^{(\lambda)}(0,y-z;\,\theta_{n,u}\circ g), \qquad (3.1)$

by the concavity of the logarithm, where

$\rho_n(u) \stackrel{\mathrm{def}}{=} e^{-V_n^{(\lambda)}(x,z;g)}\,E_x\Big[e^{\beta\sum_{i=1}^n g(i,S_i)-\lambda|S_n-z|}\,1_{(S_n=u)}\Big]$

satisfies $\sum_u \rho_n(u) = 1$. Define

$v_n^{(\lambda)}(y) = E\big(V_n^{(\lambda)}(0,y;g)\big), \qquad y\in\mathbb{R}^d.$

Then $v_n^{(\lambda)}(y) = v_n^{(\lambda)}(-y)$ by symmetry. Since $\rho_n(\cdot)$ is independent of $V_m^{(\lambda)}(0,y-z;\theta_{n,u}\circ g)$, we deduce from (3.1) that

$v_{n+m}^{(\lambda)}(y) = E\big(V_{n+m}^{(\lambda)}(0,y;g)\big) \ge v_n^{(\lambda)}(z) + v_m^{(\lambda)}(y-z), \qquad \forall y,z\in\mathbb{R}^d.$

Observe the elementary relations

$|v_n^{(\lambda)}(y) - v_n^{(\lambda)}(z)| \le \lambda|y-z|, \qquad v_n^{(\lambda)}(y) \le n\,p_n(\beta) \le n\,p(\beta).$

Using the subadditivity theorem, we obtain a function $\psi_\lambda : \mathbb{R}^d\to(-\infty,p(\beta)]$ such that $\psi_\lambda$ is concave, $\lambda$-Lipschitz continuous, and for any $\kappa\in\mathbb{R}^d$,

$\lim_{n\to\infty,\ x_n/n\to\kappa} \frac{v_n^{(\lambda)}(x_n)}{n} = \psi_\lambda(\kappa).$

Define

$I^{(\lambda)}(\kappa) \stackrel{\mathrm{def}}{=} p(\beta) - \psi_\lambda(\kappa) \ge 0, \qquad \kappa\in\mathbb{R}^d.$

This gives

$-\frac1n E\log\big\langle e^{-\lambda|S_n-x_n|}\big\rangle^{(n)} \to I^{(\lambda)}(\kappa), \qquad n\to\infty. \qquad (3.2)$
Based on (3.2), we shall prove the a.s. convergence stated in the lemma via the concentration property of Gaussian measure. The a.s. convergence implies convergence in $L^1$, in view of the fact that $|-(1/n)\log\langle e^{-\lambda|S_n-x_n|}\rangle^{(n)}| \le \lambda(1+|x_n|/n)$ is bounded in $n$. It remains to establish the concentration property. Firstly, since

$\frac1n\big|\log\langle e^{-\lambda|S_n-x_n|}\rangle^{(n)} - \log\langle e^{-\lambda|S_n-\tilde x_n|}\rangle^{(n)}\big| \le \frac{\lambda|x_n-\tilde x_n|}{n}, \qquad \forall x_n,\tilde x_n\in\mathbb{R}^d,$

it suffices to prove that for any fixed $\kappa\in\mathbb{R}^d$,

$-\frac1n\log\big\langle e^{-\lambda|S_n-n\kappa|}\big\rangle^{(n)} \to I^{(\lambda)}(\kappa), \qquad \text{a.s.} \qquad (3.3)$

Following the same lines as in the proof of Proposition 2.3, we deduce from the concentration of Gaussian measure that almost surely, for all large $n$,

$\big|\log\langle e^{-\lambda|S_n-n\kappa|}\rangle^{(n)} - E\log\langle e^{-\lambda|S_n-n\kappa|}\rangle^{(n)}\big| = O(n^\theta)$

for some $\theta>\frac12$ (the above estimate in fact holds uniformly in $\kappa\in\mathcal{D}_d$). This and (3.2) yield (3.3).

Finally, for any $\kappa\in\mathcal{D}_d$, take $x_n\in\mathbb{Z}^d$ such that $x_n/n\to\kappa$ and $P_0(S_n=x_n)>0$. It follows from Jensen's inequality that

$\big\langle e^{-\lambda|S_n-x_n|}\big\rangle^{(n)} \ge \big\langle 1_{(S_n=x_n)}\big\rangle^{(n)} = \frac{1}{Z_n(\beta)}\,E_0\big(e^{\beta\sum_1^n g(i,S_i)}\,\big|\,S_n=x_n\big)\,P_0(S_n=x_n) \ge \frac{P_0(S_n=x_n)}{Z_n(\beta)}\,\exp\Big(\beta\,E_0\Big(\sum_1^n g(i,S_i)\,\Big|\,S_n=x_n\Big)\Big).$

Since the variables $g(i,x)$ are centered, taking the logarithm and then the expectation $E$ on both sides of the above inequality implies that for any $\lambda>0$,

$I^{(\lambda)}(\kappa) \le p(\beta) - \limsup_{n\to\infty}\frac1n\log P_0(S_n=x_n) \le p(\beta) + \log(2d), \qquad \kappa\in\mathcal{D}_d,$

proving the lemma.

The function $I^{(\lambda)}$ is nondecreasing in $\lambda$; letting $\lambda\to\infty$, the limit function $I : \mathcal{D}_d\to[0,p(\beta)+\log(2d)]$ is convex and lower semi-continuous on $\mathcal{D}_d$.

Proof of Theorem 1.1. Upper bound: It suffices to show that for any $\kappa\in\mathcal{D}_d$ and small $\varepsilon>0$,

$\limsup_{n\to\infty}\frac1n\log\big\langle 1_{(|S_n/n-\kappa|\le\varepsilon)}\big\rangle^{(n)} \le -I(\kappa) + o(1),$

where $o(1)$ (possibly random) tends to 0 as $\varepsilon\to0$. Let $\zeta>0$. Take $\lambda>0$ sufficiently large that $|I(\kappa) - I^{(\lambda)}(\kappa)| \le \zeta$. Since the number of lattice points $x_n$
such that $|x_n/n-\kappa|\le\varepsilon$ is of order $n^d$, we obtain

$\big\langle 1_{(|S_n/n-\kappa|\le\varepsilon)}\big\rangle^{(n)} = \sum_{x_n\hookleftarrow n:\ |x_n/n-\kappa|\le\varepsilon}\big\langle 1_{(S_n=x_n)}\big\rangle^{(n)} \le \sum_{x_n\hookleftarrow n:\ |x_n/n-\kappa|\le\varepsilon}\big\langle e^{-\lambda|S_n-x_n|}\big\rangle^{(n)} \le (2n+1)^d\,e^{\lambda n\varepsilon}\,\big\langle e^{-\lambda|S_n-n\kappa|}\big\rangle^{(n)},$

which in view of Lemma 3.1 implies

$\limsup_{n\to\infty}\frac1n\log\big\langle 1_{(|S_n/n-\kappa|\le\varepsilon)}\big\rangle^{(n)} \le \lambda\varepsilon - I^{(\lambda)}(\kappa) \le -I(\kappa) + 2\zeta$

for $\varepsilon \le \zeta/\lambda$.

Lower bound: Let $\kappa\in G$ be such that $I(\kappa)<\infty$. We shall bound $\langle 1_{(|S_n/n-\kappa|<\varepsilon)}\rangle^{(n)}$ from below. Let $\lambda$ be sufficiently large that $\lambda\varepsilon > I(\kappa) \ge I^{(\lambda)}(\kappa)$ and $I^{(\lambda)}(\kappa) = I(\kappa) + o(1)$. Therefore, using Lemma 3.1 again,

$\big\langle 1_{(|S_n/n-\kappa|<\varepsilon)}\big\rangle^{(n)} \ge \big\langle e^{-\lambda|S_n-n\kappa|}\big\rangle^{(n)} - e^{-\lambda\varepsilon n} \ge e^{-I^{(\lambda)}(\kappa)(1+o(1))n} \ge e^{-I(\kappa)(1+o(1))n},$

showing the lower bound. The continuity of $I$ in the interior of $\mathcal{D}_d$ follows from convexity. It remains to show (1.4). Since $I$ is uniformly bounded below and above, we may repeat the argument in the proof of Theorem 10.4 of Rockafellar (1970, p. 86) and prove that

$\limsup_{y\to x:\ y\in\mathcal{D}_d^{\mathrm{o}}} I(y) \le I(x),$

which together with the lower semi-continuity of $I$ yields (1.4).
4. Pointwise rate function: proof of Theorem 1.2

Before entering into the proof of Theorem 1.2, we need some preliminary lemmas (Lemma 4.1 is devoted to the proof of (1.6), and Lemma 4.3 to (1.5)).

Lemma 4.1. There exists a constant $c = c(d) > 0$ such that for all $0 < \varepsilon < 1/9$,

$\#\Omega_n^{(\varepsilon)} \le e^{c\,n\,\varepsilon\log(1/\varepsilon)}, \qquad (4.1)$

where

$\Omega_n^{(\varepsilon)} \stackrel{\mathrm{def}}{=} \{\gamma\in\Omega_n : \gamma(0)=0,\ |\gamma_i(n)| \ge (1-\varepsilon)n\ \text{for some } i=1,\ldots,d\}$
with $\gamma_i(n)$ denoting the $i$th coordinate of $\gamma(n)\in\mathbb{Z}^d$. Consequently, almost surely, for all small $\varepsilon>0$,

$\limsup_{n\to\infty}\frac1n\log\big\langle 1_{(|S_n/n-e_1|\le\varepsilon)}\big\rangle^{(n)} \le c'\sqrt{\varepsilon\log(1/\varepsilon)} - \log(2d) - p(\beta) \qquad (4.2)$

for some constant $c' = c'(d) > 0$.

Proof. Let $S_n^1 = S_n\cdot e_1$ be the first coordinate of $S_n$. Then $(S_n^1)_{n\ge0}$ is a symmetric random walk on $\mathbb{Z}$ with step distribution $P_0(S_n^1-S_{n-1}^1=+1) = P_0(S_n^1-S_{n-1}^1=-1) = 1/(2d)$ and $P_0(S_n^1-S_{n-1}^1=0) = 1-1/d$. The large deviations principle implies that

$\log P_0\big(|S_n^1| \ge (1-\varepsilon)n\big) \sim -n\sup_{\lambda\in\mathbb{R}}\big(\lambda(1-\varepsilon) - \Lambda(\lambda)\big) \stackrel{\mathrm{def}}{=} -n\,\Lambda^*(1-\varepsilon),$

where $\Lambda(\lambda) = \log E_0 e^{\lambda S_1^1} = \log\big(1 + (\cosh(\lambda)-1)/d\big)$. Elementary computations show that

$\Lambda^*(1-\varepsilon) = \log(2d) - (c+o(1))\,\varepsilon\log(1/\varepsilon), \qquad \varepsilon\to0,$

for some constant $c>0$. This implies (4.1) by symmetry. Finally,

$\frac1n E\big(\log\langle 1_{(|S_n/n-e_1|\le\varepsilon)}\rangle^{(n)}\big) = \frac1n E\big(\log E_0\big[1_{(|S_n/n-e_1|\le\varepsilon)}\,e^{\beta H_n(g,S)}\big]\big) - p_n(\beta)$
$\le \frac{\beta}{n}\,E\max_{\gamma\in\Omega_n^{(\varepsilon)}} H_n(g,\gamma) + \frac{\log\#\Omega_n^{(\varepsilon)}}{n} - \log(2d) - p_n(\beta)$
$\le \beta\sqrt{2c\,\varepsilon\log(1/\varepsilon)} + c\,\varepsilon\log(1/\varepsilon) - \log(2d) - p_n(\beta),$

by applying Lemma 2.5 to the Gaussian family $\{H_n(g,\gamma)/\sqrt n;\ \gamma\in\Omega_n^{(\varepsilon)}\}$. Thanks to the concentration inequality (2.6), we deduce from the Borel–Cantelli lemma that almost surely, for all large $n$,

$\log\langle 1_{(|S_n/n-e_1|\le\varepsilon)}\rangle^{(n)} - E\log\langle 1_{(|S_n/n-e_1|\le\varepsilon)}\rangle^{(n)} = O(n^\theta)$

for any $\theta>\frac12$, which completes the proof.

The following lemma is an easy consequence of subadditivity.
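As an aside, the Cramér rate $\Lambda^*(1-\varepsilon)$ appearing in the proof of Lemma 4.1 can be probed numerically: it stays strictly below $\log(2d)$ for $\varepsilon>0$ and increases towards $\log(2d)$ as $\varepsilon\to0$. A sketch, taking $d=2$ and a simple grid search over the Legendre variable (both choices are our own):

```python
import math

d = 2

def Lambda(lam):
    # Lambda(lam) = log E_0 exp(lam * S_1^1) for one coordinate of the walk in Z^d
    return math.log(1 + (math.cosh(lam) - 1) / d)

def rate(x, lam_max=30.0, steps=30000):
    # Legendre transform Lambda*(x) = sup_lam (lam*x - Lambda(lam)), via a fine grid;
    # for x < 1 the supremum is attained at a finite lam, so the grid suffices.
    return max(lam * x - Lambda(lam)
               for lam in (lam_max * k / steps for k in range(steps + 1)))

r1, r2 = rate(0.99), rate(0.999)
log2d = math.log(2 * d)
```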
Lemma 4.2. For any $\alpha\in\mathcal{D}_d^{\mathrm{o}}\cap\mathbb{Q}^d$,

$a(\alpha) \stackrel{\mathrm{def}}{=} \sup_{n\alpha\hookleftarrow n,\ n\ge1}\frac1n E\big(\log Z_n(0,n\alpha)\big) - p(\beta) = \lim_{n\alpha\hookleftarrow n,\ n\to\infty}\frac1n\log\big\langle 1_{(S_n=n\alpha)}\big\rangle^{(n)};$

the above limit exists almost surely and in $L^1(P)$.
The function $a(\cdot)$ is auxiliary: in fact, according to Theorem 1.2, $a(\alpha) = -I(\alpha)$.

Proof. From the Markov property (Lemma 2.1) we get that for $x\hookleftarrow n$ and $y\hookleftarrow m$,

$Z_{n+m}(0,x+y;g) \ge Z_n(0,x;g)\,Z_m(x,x+y;\theta_n g) \qquad (x,y\in\mathbb{Z}^d).$

Taking the expectation of the logarithm, we get, since $\theta_n g$ is distributed as $g$,

$E[\log Z_{n+m}(0,x+y)] \ge E[\log Z_n(0,x)] + E[\log Z_m(x,x+y)] = E[\log Z_n(0,x)] + E[\log Z_m(0,y)]. \qquad (4.3)$

This shows that the sequence $n\mapsto E\log Z_n(0,n\alpha)$ is superadditive, and the standard subadditivity theorem and the concentration inequality (2.6) yield that $(1/n)\log Z_n(0,n\alpha)$ converges almost surely and in $L^1(P)$. The integrability is guaranteed by Lemma 2.5. This together with (1.3) completes the proof.
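The superadditivity argument above is an instance of Fekete's lemma: if $a_{n+m} \ge a_n + a_m$, then $a_n/n \to \sup_n a_n/n$. A toy numerical illustration with the superadditive sequence $a_n = 2n - \sqrt n$ (our own example, unrelated to the polymer quantities):

```python
import math

def a(n):
    # a_{n+m} - a_n - a_m = sqrt(n) + sqrt(m) - sqrt(n+m) >= 0 by concavity of sqrt,
    # so (a_n) is superadditive, and a_n / n = 2 - 1/sqrt(n) increases to 2.
    return 2 * n - math.sqrt(n)

superadditive = all(a(n + m) >= a(n) + a(m)
                    for n in range(1, 50) for m in range(1, 50))
ratios = [a(n) / n for n in (1, 10, 100, 10000)]   # should increase towards 2
```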
Lemma 4.3. For any $\alpha\in\mathcal{D}_d^{\mathrm{o}}\cap\mathbb{Q}^d$, $P$ almost surely,

$\lim_{\varepsilon\to0}\ \liminf_{n\alpha\hookleftarrow n,\ n\to\infty}\frac1n\log\big\langle 1_{(|S_n-n\alpha|<\varepsilon n)}\big\rangle^{(n)} = a(\alpha),$

where $a(\alpha)$ has been defined in Lemma 4.2.

Proof. Pick a small $\varepsilon>0$ such that $3\varepsilon d < 1-|\alpha|_{L^1}$, where $|\alpha|_{L^1} \stackrel{\mathrm{def}}{=} \sum_{j=1}^d|\alpha_j| < 1$ for $\alpha = (\alpha_1,\ldots,\alpha_d)\in\mathcal{D}_d^{\mathrm{o}}\cap\mathbb{Q}^d$. Write $\alpha = (p_1/p,\ldots,p_d/p)$, where $p_1,\ldots,p_d\in\mathbb{Z}$ and $p$ is an integer such that $p \ge \sum_{j=1}^d|p_j|$. We shall consider those $n\to\infty$ such that $n/p$ is even; this choice ensures that $n\alpha\hookleftarrow n$. Let

$k = k(n) = 2p\Big\lfloor\frac{\varepsilon d\,n}{p(1-|\alpha|_{L^1})}\Big\rfloor \sim \frac{2\varepsilon d}{1-|\alpha|_{L^1}}\,n,$

where $\lfloor x\rfloor$ denotes the integer part of $x$. For any $x_n\in\mathbb{Z}^d$ satisfying $x_n\hookleftarrow n$ and $|x_n-n\alpha|\le\varepsilon n$, we define $\tilde x_n \stackrel{\mathrm{def}}{=} (k+n)\alpha - x_n \in \mathbb{Z}^d$. Observe that

$|\tilde x_n|_{L^1} \le k|\alpha|_{L^1} + d\,|x_n-n\alpha| < k,$

by our choice of $k = k(n)$; hence $\tilde x_n\hookleftarrow k$. By the Markov property (Lemma 2.1), we get

$Z_{n+k}(0,(n+k)\alpha) \ge \sum_{\tilde x_n\hookleftarrow k} Z_k(0,\tilde x_n)\,Z_n(\tilde x_n,(n+k)\alpha;\theta_k g) \ge \sum_{x_n\hookleftarrow n:\ |x_n-n\alpha|\le\varepsilon n} Z_k(0,\tilde x_n)\,Z_n(\tilde x_n,(n+k)\alpha;\theta_k g). \qquad (4.4)$
Observe that by stationarity, $Z_n(\tilde x_n,(n+k)\alpha;\theta_k g)$ has the same law as $Z_n(0,x_n)$. It follows from (2.4) that for $\theta>\frac12$,

$P\big(\exists x_n:\ x_n\hookleftarrow n,\ |\log Z_n(\tilde x_n,(n+k)\alpha;\theta_k g) - \log Z_n(0,x_n)| \ge 2n^\theta\big)$
$\le \sum_{x_n\hookleftarrow n} P\big(|\log Z_n(\tilde x_n,(n+k)\alpha;\theta_k g) - E\log Z_n(\tilde x_n,(n+k)\alpha;\theta_k g)| \ge n^\theta\big) + \sum_{x_n\hookleftarrow n} P\big(|\log Z_n(0,x_n) - E\log Z_n(0,x_n)| \ge n^\theta\big)$
$\le 2(2n+1)^d\,e^{-n^{2\theta-1}/(2\beta^2)},$

whose sum over $n$ converges. The Borel–Cantelli lemma implies that almost surely, for any $\frac12<\theta<1$ and all large $n$,

$\max_{x_n\hookleftarrow n}\big|\log Z_n(\tilde x_n,(n+k)\alpha;\theta_k g) - \log Z_n(0,x_n)\big| \le 2n^\theta. \qquad (4.5)$

On the other hand, for any $y\hookleftarrow k$, Jensen's inequality implies that

$E\log Z_k(0,y) = E\Big[\log E_0\big(e^{\beta\sum_1^k g(i,S_i)}\,\big|\,S_k=y\big)\Big] + \log P_0(S_k=y) \ge \log P_0(S_k=y) \ge -k\log(2d),$

which combined with (2.6) implies that almost surely, for all large $k$,

$\inf_{y\hookleftarrow k}\log Z_k(0,y) \ge -k(1+\log(2d)). \qquad (4.6)$

Now we can complete the proof of Lemma 4.3 as follows: $P$-almost surely, let $n$ be large with $n/p$ even. Injecting (4.5) and (4.6) into (4.4), we get

$Z_{n+k}(0,(n+k)\alpha) \ge e^{-k(1+\log(2d))-2n^\theta}\sum_{x_n\hookleftarrow n:\ |x_n-n\alpha|\le\varepsilon n} Z_n(0,x_n) = e^{-k(1+\log(2d))-2n^\theta}\,Z_n(\beta)\,\big\langle 1_{(|S_n-n\alpha|\le\varepsilon n)}\big\rangle^{(n)}.$

Since $k = k(n) \sim (2\varepsilon d/(1-|\alpha|_{L^1}))n$, we deduce from the above estimate, Lemma 4.2 and (1.3) that

$a(\alpha) = \lim_{n/p\ \text{even},\ n\to\infty}\frac{1}{n+k}\log\frac{Z_{n+k}(0,(n+k)\alpha)}{Z_{n+k}(\beta)} \ge -c\varepsilon + \liminf_{n\alpha\hookleftarrow n,\ n\to\infty}\frac{1}{n+k}\log\big\langle 1_{(|S_n-n\alpha|\le\varepsilon n)}\big\rangle^{(n)}$
$\ge -c'\varepsilon + (1+c''\varepsilon)\liminf_{n\alpha\hookleftarrow n,\ n\to\infty}\frac1n\log\big\langle 1_{(|S_n-n\alpha|\le\varepsilon n)}\big\rangle^{(n)},$
where $c'>0$ and $c''>0$ denote constants depending on $d$ and $\alpha$. Letting $\varepsilon\to0$, we obtain

$\limsup_{\varepsilon\to0}\ \liminf_{n\alpha\hookleftarrow n,\ n\to\infty}\frac1n\log\big\langle 1_{(|S_n-n\alpha|\le\varepsilon n)}\big\rangle^{(n)} \le a(\alpha),$

yielding Lemma 4.3, since $\langle 1_{(|S_n-n\alpha|\le\varepsilon n)}\rangle^{(n)} \ge \langle 1_{(S_n=n\alpha)}\rangle^{(n)}$.

Combining Lemmas 4.3 and 4.1 with Theorem 1.1, we immediately obtain Theorem 1.2.

Proof of Theorem 1.2. Let $\alpha\in\mathcal{D}_d\cap\mathbb{Q}^d$ and let $n\to\infty$ with $n\alpha\hookleftarrow n$. Since the single-point set $\{\alpha\}$ is closed, we deduce from the upper bound of Theorem 1.1 that $P$ almost surely,

$a(\alpha) = \lim_{n\to\infty,\ n\alpha\hookleftarrow n}\frac1n\log\big\langle 1_{(S_n=n\alpha)}\big\rangle^{(n)} \le -I(\alpha),$

using the notation $a(\alpha)$ introduced in Lemma 4.2. To show the lower bound, we may assume that $I(\alpha)<\infty$, because otherwise there is nothing to prove. Pick a small $\varepsilon>0$. Using the lower bound of Theorem 1.1, we have that $P$ almost surely, for all large $n$,

$\big\langle 1_{(|S_n/n-\alpha|<\varepsilon)}\big\rangle^{(n)} \ge e^{-(I(\alpha)+o(1))n},$

where $o(1)\to0$ as $\varepsilon\to0$. This, in view of Lemma 4.3, implies that $a(\alpha)\ge -I(\alpha)$, completing the proof of (1.5).

To show (1.7), thanks to the concavity of the logarithm, we have for $\lambda\in\mathbb{Q}^d$ and $n\lambda\hookleftarrow n$,

$\frac1n E[\log Z_n(0,n\lambda)] \le \frac1n\log E[Z_n(0,n\lambda)] = \frac1n\log\big(e^{n\beta^2/2}\,P_0[S_n=n\lambda]\big).$

We conclude from the local central limit theorem. Finally, (1.6) follows from (4.2) by letting $\varepsilon\to0$; and according to Proposition 2.4(iii), we let $n\to\infty$ and obtain $I(0)=0$.

5. A weak law of large numbers

In this section, we present a law of large numbers for the biased random walk. Firstly, to apply Varadhan's lemma (Dembo and Zeitouni, 1998, Theorem 4.3.1), we remark that the moment condition (4.3.3) there is satisfied since $|S_n|\le n$. Then Theorem 1.1 implies

Proposition 5.1. $P$-almost surely,

$\lim_{n\to+\infty}\frac1n\log\big\langle e^{\lambda\cdot S_n}\big\rangle^{(n)} = \Psi_\beta(\lambda), \qquad \forall\lambda\in\mathbb{R}^d,$
with

$\Psi_\beta(\lambda) = \sup\{\lambda\cdot\kappa - I(\kappa) : \kappa\in\mathcal{D}_d\}.$

The above convergence also holds in $L^1(P)$. The function $\Psi_\beta : \mathbb{R}^d\to\mathbb{R}$ is convex and nonnegative, $\Psi_\beta(0)=0$, and for every permutation $\sigma$: $\Psi_\beta(\lambda_1,\ldots,\lambda_d) = \Psi_\beta(\pm\lambda_{\sigma(1)},\ldots,\pm\lambda_{\sigma(d)})$. The $L^1(P)$ convergence follows from the a.s. convergence and the fact that $|(1/n)\log\langle e^{\lambda\cdot S_n}\rangle^{(n)}| \le |\lambda|$. We could also prove the above convergences directly by using subadditivity.

We now state a law of large numbers in dimension $d=1$. Say $(X_n)_{n\in\mathbb{N}}$ is a nearest-neighbour random walk on $\mathbb{Z}$ with mean $a\in[-1,+1]$ if $X_n$ is the partial sum of i.i.d. variables with common distribution $P(X_1=+1)=(1+a)/2$ and $P(X_1=-1)=(1-a)/2$. We define a polymer measure $\langle\cdot\rangle^{(n,a)}$ associated with $(X_n)$ and $(g(i,x))$ in the same way as $\langle\cdot\rangle^{(n)}$ is associated with $(S_n)$ and $(g(i,x))$.

Proposition 5.2. Assume that the function $\Psi_\beta$ is differentiable at $\lambda\in\mathbb{R}$. Then the walk $(X_n)$ satisfies a law of large numbers under the polymer measure $\langle\cdot\rangle^{(n,\tanh\lambda)}$:

$\Big\langle\frac{X_n}{n}\Big\rangle^{(n,\tanh\lambda)} \to \Psi_\beta'(\lambda) \quad \text{almost surely.}$

The exact value of $\Psi_\beta'(\lambda)$ is unknown. This very weak law of large numbers is not a surprise for the symmetric random walk, since we then have $\langle S_n\rangle^{(n)} \stackrel{\mathrm{law}}{=} -\langle S_n\rangle^{(n)}$ and thus $E[\langle S_n\rangle^{(n)}] = 0$.

Proof. The function $\Psi_\beta$ is the limit of the convex $C^1$ functions $f_n(\lambda) = (1/n)\log\langle e^{\lambda S_n}\rangle^{(n)}$. Therefore, since $\Psi_\beta$ is differentiable at $\lambda$ (see Lemma 5.3), almost surely

$\Psi_\beta'(\lambda) = \lim f_n'(\lambda) = \lim\frac1n\,\frac{\langle S_n e^{\lambda S_n}\rangle^{(n)}}{\langle e^{\lambda S_n}\rangle^{(n)}}.$

Define $\Lambda(\lambda) = \log\cosh(\lambda)$. Then $M_n^\lambda = e^{\lambda S_n - n\Lambda(\lambda)}$ is a martingale, and under the new probability

$E_x^{(\lambda)}[f(S_k,\,k\le n)] \stackrel{\mathrm{def}}{=} E_0\big[f(S_k,\,k\le n)\,e^{\lambda S_n - n\Lambda(\lambda)}\big],$

the nearest-neighbour random walk $S$ has mean $\tanh(\lambda)$. Let us denote by $S^{(\lambda)}$ this walk; then almost surely

$\Psi_\beta'(\lambda) = \lim_{n\to\infty}\frac1n\big\langle S_n^{(\lambda)}\big\rangle^{(n)},$

which is the desired result.

Lemma 5.3. Assume that a sequence $f_n$ of real-valued, differentiable, convex functions converges on an open interval $I$ to a function $f$. Then $f$ is convex, and for every $x$ such that $f$ is differentiable at $x$, $f_n'(x)$ converges to $f'(x)$.
Proof. See Theorem 25.7 of Rockafellar (1970, p. 248). Although we only assume differentiability of $f$ at one point, the argument on pp. 249–250 still works.

6. A scaling inequality involving the volume and the fluctuation exponents: proof of Theorem 1.4

The following lemma is elementary, but nevertheless gives useful lower bounds on the variance.

Lemma 6.1 (Newman and Piza, 1995, Lemma 2). Let $X\in L^2(\Omega,\mathcal{F},P)$ and assume that $G_1,\ldots,G_k,\ldots$ is a sequence of independent sub-$\sigma$-fields of $\mathcal{F}$. We have

$\mathrm{Var}(X) \ge \sum_{k=1}^\infty \mathrm{Var}\big(E[X\,|\,G_k]\big).$

Lemma 6.2. Denote by $g \stackrel{\mathrm{law}}{=} \mathcal{N}(0,1)$ a standard real-valued Gaussian variable. For any $\beta>0$, there exists some constant $c>0$ such that for all $u,v\ge0$,

$\mathrm{Cov}\big(\log(u+e^{\beta g}),\,\log(v+e^{\beta g})\big) \ge c\,\max\Big(0,\ 1_{(u\le1,\,v\le1)},\ \frac{1}{uv}\,1_{(u\ge1,\,v\ge1)}\Big).$

Proof. Denote by $f(u,v)$ the above covariance. Since $f(u,v) = \mathrm{Cov}(\log(1+(1/u)e^{\beta g}),\,\log(1+(1/v)e^{\beta g}))$, we get $\lim_{u\to\infty} f(u,v) = 0$. Let $g'$ be an independent copy of $g$; we have

$\frac{\partial^2 f}{\partial u\,\partial v}(u,v) = \mathrm{Cov}\Big(\frac{1}{u+e^{\beta g}},\frac{1}{v+e^{\beta g}}\Big) = E\Big[\frac{1}{(u+e^{\beta g})(v+e^{\beta g})} - \frac{1}{(u+e^{\beta g})(v+e^{\beta g'})}\Big] = \frac12\,E\Big[\frac{(e^{\beta g}-e^{\beta g'})^2}{(u+e^{\beta g})(u+e^{\beta g'})(v+e^{\beta g})(v+e^{\beta g'})}\Big], \qquad (6.1)$

where the last equality is obtained by symmetrizing in $g$ and $g'$. Remark that for $u,v>0$, $\lim_{v\to\infty}(\partial f/\partial u)(u,v) = 0$; hence

$f(x,y) = \int_x^\infty\!\!\int_y^\infty \frac{\partial^2 f}{\partial u\,\partial v}(u,v)\,du\,dv \ge 0.$
Going back to (6.1), we remark that

$\inf_{0\le u\le2,\ 0\le v\le2}\frac{\partial^2 f}{\partial u\,\partial v}(u,v) = c \in (0,\infty),$

and for all $u\ge\frac12$, $v\ge\frac12$,

$\frac{\partial^2 f}{\partial u\,\partial v}(u,v) = \frac{1}{2u^2v^2}\,E\Big[\frac{(e^{\beta g}-e^{\beta g'})^2}{(1+(1/u)e^{\beta g})(1+(1/u)e^{\beta g'})(1+(1/v)e^{\beta g})(1+(1/v)e^{\beta g'})}\Big] \ge \frac{c}{u^2v^2}.$
n
Var(E[log Zn ( ) | g(j; x)]):
j=1 x←-j def
Fix j ≤ n and x ∈ Z^d such that x ↩ j. Denote by D_n(g, S) := e^{β Σ_{i=1}^n g(i, S_i)}. We have

Z_n(β) = E₀[D_n(g, S) 1_{(S_j ≠ x)}] + E₀[D_{j−1}(g, S) 1_{(S_j = x)}] e^{βg(j,x)} Z_{n−j}(x, g ∘ θ_j).

It follows that

log Z_n(β) = log(Y + e^{βg(j,x)}) + log[ Z_{n−j}(x, g ∘ θ_j) E₀[D_{j−1}(g, S) 1_{(S_j = x)}] ],

where P₀[S_j = x] > 0 and

Y := E₀[D_n(g, S) 1_{(S_j ≠ x)}] / ( Z_{n−j}(x, g ∘ θ_j) E₀[D_{j−1}(g, S) 1_{(S_j = x)}] )
is independent of g(j, x). Since Z_{n−j}(x, g ∘ θ_j) E₀[D_{j−1}(g, S) 1_{(S_j = x)}] is also independent of g(j, x), we get

Var(E[log Z_n(β) | g(j, x)]) = Var( E[ log(Y + e^{βg(j,x)}) | g(j, x) ] )
= E[ ( ∫_{y>0} P(Y ∈ dy) [ log(y + e^{βg(j,x)}) − E[log(y + e^{βg(j,x)})] ] )² ]
= ∫_{y₁>0} ∫_{y₂>0} P(Y ∈ dy₁) P(Y ∈ dy₂) Cov( log(y₁ + e^{βg(j,x)}), log(y₂ + e^{βg(j,x)}) )
≥ c max( (P(Y ≤ 1))², (E[Y^{−1} 1_{(Y ≥ 1)}])² ) ≥ (c/4) ( E[1 ∧ (1/Y)] )²,

where the first inequality is due to Lemma 6.2. Recalling the definition of Y, we remark that

1/Y = e^{−βg(j,x)} ⟨1_{(S_j = x)}⟩^{(n)} / (1 − ⟨1_{(S_j = x)}⟩^{(n)}) ≥ e^{−βg(j,x)} ⟨1_{(S_j = x)}⟩^{(n)},

which in view of Lemma 6.1 implies that

Var(log Z_n) ≥ (c/4) Σ_{j≤n, x∈Z^d} ( E[ 1 ∧ (e^{−βg(j,x)} ⟨1_{(S_j = x)}⟩^{(n)}) ] )².
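The covariance lower bound of Lemma 6.2, which drives this step, can be checked numerically. The sketch below (our illustration, with the arbitrary choice β = 1) estimates f(u, v) = Cov(log(u + e^{βg}), log(v + e^{βg})) on the diagonal and checks that it is positive and decreasing in u, as the representation (6.1) predicts.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0
g = rng.standard_normal(1_000_000)
expg = np.exp(beta * g)

def f(u, v):
    # Monte Carlo estimate of Cov(log(u + e^{beta g}), log(v + e^{beta g})),
    # with the same Gaussian sample g in both factors
    a = np.log(u + expg)
    b = np.log(v + expg)
    return np.mean((a - a.mean()) * (b - b.mean()))

vals = [f(u, u) for u in (0.5, 5.0, 50.0)]
print(vals)  # positive, and decreasing in u
```

For large u = v the estimate decays like 1/(uv) = 1/u², consistent with the second term in the max of Lemma 6.2.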
Pick α > ξ. Define M_n(g) := β max_{j≤n, |x|≤n^α} g(j, x). The Cauchy–Schwarz inequality implies that

Σ_{j≤n, |x|≤n^α} ( E[1 ∧ (e^{−βg(j,x)} ⟨1_{(S_j = x)}⟩^{(n)})] )²
≥ ( Σ_{j≤n, |x|≤n^α} E[1 ∧ (e^{−βg(j,x)} ⟨1_{(S_j = x)}⟩^{(n)})] )² / Σ_{j≤n, |x|≤n^α} 1
≥ n^{−1−αd} ( Σ_{j≤n, |x|≤n^α} E[1 ∧ (e^{−M_n(g)} ⟨1_{(S_j = x)}⟩^{(n)})] )²
≥ n^{−1−αd} ( Σ_{j≤n, |x|≤n^α} E[ e^{−M_n(g)} 1_{(M_n(g)≥0)} ⟨1_{(S_j = x)}⟩^{(n)} ] )²
= n^{−1−αd} ( Σ_{j≤n} E[ e^{−M_n(g)} 1_{(M_n(g)≥0)} ⟨1_{(|S_j| ≤ n^α)}⟩^{(n)} ] )².
It follows from standard extreme value theory (Resnick, 1987) that if

N_n := |{(j, x) : j ≤ n, |x| ≤ n^α}| = (n + 1)(2⌊n^α⌋ + 1)^d ≍ n^{αd+1},

then

M_n(g) / (β √(2 log N_n)) → 1,  almost surely.

We introduce the event

A_n := { (β/2) √(2 log N_n) ≤ M_n ≤ 2β √(2 log N_n) },
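The normalization by √(2 log N_n) is the classical Gaussian extreme-value scaling: the maximum of N independent N(0,1) variables concentrates near √(2 log N). A quick simulation (our illustration, with β = 1 so that M is simply the maximum of the g's):

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials = 10**6, 20

# empirical maxima of N i.i.d. standard Gaussians, over independent trials
maxima = np.array([rng.standard_normal(N).max() for _ in range(trials)])
ratio = maxima.mean() / np.sqrt(2.0 * np.log(N))
print(ratio)  # close to 1; convergence is slow, with a log log N / log N correction
```

The ratio sits a few percent below 1 at N = 10⁶ because of the well-known second-order correction to the Gaussian maximum, but the first-order scaling used in the proof is already visible.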
hence P(A_n) → 1. Let X_n := ⟨1_{(sup_{k≤n} |S_k| ≤ n^α)}⟩^{(n)} ∈ (0, 1); then X_n → 1 in probability thanks to the definition of ξ, since α > ξ. Therefore X_n → 1 in L¹(P), and E[X_n 1_{A_n}] → 1. So there exists n₀ such that for n ≥ n₀, E[X_n 1_{A_n}] ≥ 1/2. Consequently, for n ≥ n₀ and j ≤ n,

E[ e^{−M_n(g)} 1_{(M_n(g)≥0)} ⟨1_{(|S_j| ≤ n^α)}⟩^{(n)} ] ≥ E[ e^{−M_n(g)} X_n 1_{A_n} ] ≥ exp(−2β √(2 log N_n)) E[X_n 1_{A_n}] ≥ (1/2) exp(−2β √(2 log N_n)).

Hence

Var(log Z_n) ≥ (c/16) n^{1−αd} e^{−4β √(2 log N_n)},
which implies that 2χ ≥ 1 − αd for all α > ξ, ending the proof.

7. A relationship between the volume exponent and the shape of the rate function: proof of Theorem 1.6

Proof of Theorem 1.6. Fix γ₀ > 1/2 and α such that 1 + ν(α − 1) > γ₀. Using the concentration of measure inequality as in the proof of Proposition 2.3, it is easy to show that almost surely for n large enough, for all k ≤ n and z ↩ k,

| log⟨1_{(S_k = z)}⟩^{(n)} − E[log⟨1_{(S_k = z)}⟩^{(n)}] | ≤ 2 n^{γ₀}.

Assume that k is even, so that 0 ↩ k. Then the Markov property implies

⟨1_{(S_k = z)}⟩^{(n)} = Z_k(0, z) Z_{n−k}(z, θ_k g) / Σ_x Z_k(0, x) Z_{n−k}(x, θ_k g)
≤ ( Z_k(0, z) / Z_k(0, 0) ) · ( Z_{n−k}(z, θ_k g) / Z_{n−k}(0, θ_k g) ).

Since Z_{n−k}(z, θ_k g) has the same law as Z_{n−k}(0, θ_k g), we obtain

E[log⟨1_{(S_k = z)}⟩^{(n)}] ≤ E[log Z_k(0, z)] − E[log Z_k(0, 0)].

By Lemma 4.2 (recalling that a(λ) = p(β) − I(λ)), we have E[log Z_k(0, z)] ≤ −k I(z/k) + k p(β). On the other hand, by means of Proposition 2.4(i) and (ii), E[log Z_k(0, 0)] ≥ k p(β) − k^{γ₀} for all large even k. It follows that

E[log⟨1_{(S_k = z)}⟩^{(n)}] ≤ −k I(z/k) + k^{γ₀}.
Hence, if |z| ≥ n^α, we get almost surely for all large enough n,

log⟨1_{(S_k = z)}⟩^{(n)} ≤ −k I(z/k) + k^{γ₀} + 2 n^{γ₀}
≤ −c k |z/k|^ν + 3 n^{γ₀}   (by the assumption on I)
≤ −c k^{1−ν} n^{αν} + 3 n^{γ₀}   (since |z| ≥ n^α)
≤ −c n^{1−ν+αν} + 3 n^{γ₀}   (since k ≤ n and ν ≥ 1)
≤ −c′ n^{1−ν+αν},

the last inequality holding because 1 − ν + αν = 1 + ν(α − 1) > γ₀. Thus we have proven that, for k even,

⟨1_{(|S_k| ≥ n^α)}⟩^{(n)} ≤ c n^d e^{−c′ n^{1−ν+αν}}.

If k is odd, then ⟨1_{(|S_k| ≥ n^α)}⟩^{(n)} ≤ ⟨1_{(|S_{k−1}| ≥ n^α − 1)}⟩^{(n)} and we obtain the same type of upper bound. It turns out that

⟨1_{(∃ k ≤ n: |S_k| ≥ n^α)}⟩^{(n)} ≤ Σ_{k≤n} ⟨1_{(|S_k| ≥ n^α)}⟩^{(n)} ≤ c n^{d+1} e^{−c′ n^{1−ν+αν}} → 0.

Therefore, P almost surely, ⟨1_{(max_{k≤n} |S_k| ≤ n^α)}⟩^{(n)} → 1, hence ξ ≤ α. We conclude by letting γ₀ ↓ 1/2 and α ↓ 1 − 1/(2ν).
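As a sanity check of the exponent bookkeeping: the proof requires 1 + ν(α − 1) > γ₀, i.e. α > 1 − (1 − γ₀)/ν, so letting γ₀ ↓ 1/2 pushes the admissible α down to 1 − 1/(2ν). A minimal symbolic check (our illustration; for instance a rate function of quadratic growth, ν = 2, yields the threshold 3/4, cf. Méjane, 2004):

```python
from fractions import Fraction

def alpha_threshold(nu, gamma0):
    # solve 1 + nu*(alpha - 1) = gamma0 for alpha; the proof needs alpha above this value
    return 1 - (1 - gamma0) / nu

# as gamma0 decreases to 1/2, the threshold decreases to 1 - 1/(2*nu)
limit = lambda nu: alpha_threshold(nu, Fraction(1, 2))

print(limit(Fraction(2)))  # 3/4 for nu = 2
```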
Acknowledgements

We are very grateful to an anonymous referee and an associate editor for their careful reading of the preliminary version and for their suggestions and comments.

References

Albeverio, S., Zhou, X.Y., 1996. A martingale approach to directed polymers in a random environment. J. Theoret. Probab. 9 (1), 171–189.
Bolthausen, E., 1989. A note on the diffusion of directed polymers in a random environment. Comm. Math. Phys. 123, 529–534.
Carmona, Ph., Hu, Y., 2002. On the partition function of a directed polymer in a Gaussian random environment. Probab. Theory Related Fields 124, 431–457.
Comets, F., Shiga, T., Yoshida, N., 2003. Directed polymers in random environment: path localization and strong disorder. Bernoulli 9 (4), 705–723.
Comets, F., Yoshida, N., 2003. Brownian directed polymers in random environment. Preprint, http://www.proba.jussieu.fr/mathdoc/preprints/index.html.
Dembo, A., Zeitouni, O., 1998. Large Deviations Techniques and Applications, Applications of Mathematics, Vol. 38. Springer, Berlin.
Hammersley, J.M., 1962. Generalization of the fundamental theorem on subadditive functions. Proc. Cambridge Philos. Soc. 58, 235–238.
Huse, D.A., Henley, C.L., 1985. Pinning and roughening of domain walls in Ising systems due to random impurities. Phys. Rev. Lett. 54, 2708–2711.
Ibragimov, I.A., Sudakov, V., Tsirelson, B.S., 1975. Norms of Gaussian sample functions. In: Proceedings of the Third Japan–USSR Symposium on Probability, Tashkent. Lecture Notes in Mathematics, Vol. 550. Springer, Berlin, pp. 20–41.
Imbrie, J.Z., Spencer, T., 1988. Diffusion of directed polymers in a random environment. J. Statist. Phys. 52, 608–626.
Kardar, M., 1985. Roughening by impurities at finite temperatures. Phys. Rev. Lett. 55, 2924.
Méjane, O., 2004. Upper bound of a volume exponent for directed polymers in a random environment. Ann. Inst. H. Poincaré, to appear.
Méjane, O., 2003. Équations aux différences aléatoires: des fonctionnelles exponentielles de Lévy aux polymères dirigés en environnement aléatoire. Ph.D. Thesis, Université Toulouse III.
Newman, C.M., Piza, M.S.T., 1995. Divergence of shape fluctuations in two dimensions. Ann. Probab. 23 (3), 977–1005.
Petermann, M., 2000. Superdiffusivity of directed polymers in random environment. Ph.D. Thesis, University of Zürich.
Piza, M.S.T., 1997. Directed polymers in a random environment: some results on fluctuations. J. Statist. Phys. 89 (3/4), 581–603.
Resnick, S.I., 1987. Extreme Values, Regular Variation and Point Processes. Applied Probability, Vol. 4. Springer, Berlin.
Rockafellar, R.T., 1970. Convex Analysis. Princeton Mathematical Series, Vol. 28. Princeton University Press, Princeton, NJ.
Sinai, Y.G., 1995. A remark concerning random walks with random potentials. Fund. Math. 147 (2), 173–180.
Talagrand, M., 1998. The Sherrington–Kirkpatrick model: a challenge to mathematicians. Probab. Theory Related Fields 110, 109–176.
Talagrand, M., 2003. Spin Glasses: A Challenge for Mathematicians. Cavity and Mean Field Models. Springer, Berlin.
Varadhan, S.R.S., 2002. Large deviations for random walks in a random environment. Preprint.