
Acta Mathematica Scientia 2013,33B(6):1736–1748 http://actams.wipm.ac.cn

RENEWAL THEOREM FOR (L, 1)-RANDOM WALK IN RANDOM ENVIRONMENT∗

Wenming HONG     Hongyan SUN†

Laboratory of Mathematics and Complex Systems, School of Mathematical Sciences,
Beijing Normal University, Beijing 100875, China
E-mail: [email protected]; [email protected]

Abstract  We consider a random walk on Z in a random environment with possible jumps {−L, · · · , −1, 1}, in the case that the environment {ω_i : i ∈ Z} is i.i.d.. We establish the renewal theorem for the Markov chain of “the environment viewed from the particle” under both the annealed probability and the quenched probability, which partially generalizes the results of Kesten (1977) and Lalley (1986) for the nearest-neighbor random walk in random environment on Z, respectively. Our method is based on the intrinsic branching structure within the (L, 1)-RWRE formulated in Hong and Wang (2013).

Key words  random walk in random environment; renewal theorem; multitype branching process in random environment; coupling

2010 MR Subject Classification  60K37; 60F05

1  Introduction

We begin with the model of random walk in random environment with bounded jumps; see for example [1–3] for a general review. Let L ≥ 1 be a fixed integer. Denote by Σ the collection of probability measures on Λ := {−L, · · · , −1, 1}. Let Ω := Σ^Z, and let F denote the σ-field on Ω generated by cylinder sets. A random environment is an element ω := {ω_i}_{i∈Z} ∈ Ω. Let P be a probability measure on (Ω, F) under which the ω_i's are i.i.d. and satisfy, for all i ∈ Z,

$$\omega_i(-1) > 0, \qquad \omega_i(1) > 0, \qquad P\text{-a.s.} \tag{1.1}$$

The shift operator θ on Ω is defined naturally by (θω)_n := ω_{n+1}, ∀n ∈ Z. For a fixed medium ω ∈ Ω and x ∈ Z, let {X_n(ω)}_{n≥0} denote the Markov chain on Z with X_0(ω) = x and transition probabilities

$$P_\omega\big(X_{n+1}(\omega) = y + z \mid X_n(\omega) = y\big) = \omega_y(z), \qquad \forall y \in \mathbb{Z},\ \forall z \in \Lambda. \tag{1.2}$$

∗ Received March 19, 2012; revised November 6, 2012. The project is supported by NSFC (11131003) and the 985 Project.
† Corresponding author: Hongyan SUN.


We denote by P_ω^x the probability measure on (Z^N, G) induced by {X_n}_{n≥0}, where G is the σ-field on Z^N generated by cylinder sets. We refer to P_ω^x as the “quenched” law. We define another probability measure on the space (Z^N, G) by

$$P^x(\cdot) = \int_{\Omega} P_\omega^x(\cdot)\, P(d\omega),$$

which is referred to as the “annealed” law. {X_n}_{n≥0} is called the (L, 1)-random walk in random environment, or (L, 1)-RWRE for short. Notice that, as a process on the probability space (Z^N, G, P^x), {X_n : n ≥ 0} is not a Markov chain. Throughout the paper we use E, E_ω^x, and E^x to denote the expectations corresponding to the laws P, P_ω^x, and P^x, respectively. For simplicity, we also write P_ω, P, E_ω, E for P_ω^0, P^0, E_ω^0, E^0, respectively.

From the point of view of the “environment viewed from the particle”, it is natural to define

$$A(t) := \{A_i(t) : i \in \mathbb{Z}\} = \{\omega_{X_t+i} : i \in \mathbb{Z}\} = \theta^{X_t}\omega, \qquad \forall\, t \ge 0.$$

By the same method as in [3, Lemma 2.2.1.18], which deals with the (1,1)-RWRE, we get that the process {A(t)}_{t≥0} is a Markov chain under either P_ω or P, with transition kernel

$$M(\omega, d\omega') = \omega_0(1)\,\delta_{\theta\omega = \omega'} + \omega_0(-1)\,\delta_{\theta^{-1}\omega = \omega'} + \cdots + \omega_0(-L)\,\delta_{\theta^{-L}\omega = \omega'}.$$

In this paper we will prove that the process {A(t) : t ≥ 0} converges in distribution in both the annealed and the quenched sense.

To state the main result of this paper, we recall some notation and definitions. In what follows, all vectors are row vectors. Let e_i := (0, · · · , 1, · · · , 0), where the i-th entry is 1 and the others are 0, for i = 1, · · · , L. For every x ∈ R^L, set |x| := \sum_{i=1}^{L} |x_i|, and for every A ∈ R^{L×L}, ‖A‖ := sup{|xA| : x ∈ R^L, |x| = 1}.
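For readers who want to experiment with the model, the following Python sketch (not part of the paper; the Dirichlet law of the environment and all names are illustrative choices of ours) samples an i.i.d. environment satisfying (1.1), runs the quenched walk (1.2) for a few hundred steps, and reads off the chain A(t) = θ^{X_t}ω of the environment viewed from the particle.

```python
import numpy as np

rng = np.random.default_rng(0)
L, T = 2, 300                                    # jump range and number of steps (illustrative)
JUMPS = [-l for l in range(L, 0, -1)] + [1]      # Lambda = {-L, ..., -1, 1}
SITES = range(-L * T - 1, T + 2)                 # a window of Z large enough for T steps from 0

# i.i.d. environment; a Dirichlet law biased towards the +1 jump, so (1.1) holds a.s.
alpha = np.concatenate([np.ones(L), [8.0]])
omega = {x: rng.dirichlet(alpha) for x in SITES} # omega[x][j] = omega_x(JUMPS[j])

def step(x):
    """One quenched step from site x: jump z with probability omega_x(z), cf. (1.2)."""
    return x + int(rng.choice(JUMPS, p=omega[x]))

X = [0]
for _ in range(T):
    X.append(step(X[-1]))

# "environment viewed from the particle": A(t) = theta^{X_t} omega, i.e. the
# environment re-centred at the walker's current position.
A = lambda t, i: omega[X[t] + i]                 # coordinate i of A(t), for moderate |i|
print("X_T =", X[-1], "  A(T) at coordinate 0:", np.round(A(T, 0), 3))
```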

For 1 ≤ i ≤ L and k ∈ Z, we also set

$$a_i(k) := \frac{\omega_k(-i) + \cdots + \omega_k(-L)}{\omega_k(1)}$$

and

$$M(k) := \begin{pmatrix} a_1(k) & a_2(k) & \cdots & a_{L-1}(k) & a_L(k) \\ 1 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}.$$

For the nearest-neighbor random walk in random environment on Z, namely the (1, 1)-RWRE, Kesten [4] proved that the process {A(t) : t ≥ 0} converges in distribution in the annealed sense. The first result of this paper extends Kesten’s renewal theorem for the (1, 1)-RWRE to the (L, 1)-RWRE, as follows.

Theorem 1.1  Assume that

$$E\left(\frac{1}{\omega_0(1)}\left(1 + \sum_{i=1}^{\infty} e_1 M(i)\cdots M(1)e_1^T\right)\right) < \infty \tag{1.3}$$

and (1.1) hold. Then there exists a probability measure P_∞ on Ω such that for every continuous function f : Ω → R,

$$E\big(f(A(t))\big) \to \int f\, dP_\infty \quad \text{as } t \to \infty. \tag{1.4}$$
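Condition (1.3), and the drift formula of Remark 1.1 below, can be probed numerically. The sketch below (our own illustration, not from the paper) builds the matrices M(k) from i.i.d. environment letters and Monte Carlo-estimates the expectation in (1.3), with the infinite series truncated; the Dirichlet environment law, the truncation level and the sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 2
JUMPS = [-l for l in range(L, 0, -1)] + [1]          # omega(z) stored in this order
alpha = np.concatenate([np.ones(L), [8.0]])          # environment law biased to the right

def sample_omega():
    """One environment letter as a dict {z: omega(z)}, z in {-L,...,-1,1}."""
    return dict(zip(JUMPS, rng.dirichlet(alpha)))

def a(i, w):
    """a_i(k) = (omega_k(-i) + ... + omega_k(-L)) / omega_k(1)."""
    return sum(w[-j] for j in range(i, L + 1)) / w[1]

def M(w):
    """The L x L matrix M(k) built from one environment letter w."""
    mat = np.zeros((L, L))
    mat[0, :] = [a(i, w) for i in range(1, L + 1)]   # first row: a_1(k), ..., a_L(k)
    for r in range(1, L):
        mat[r, r - 1] = 1.0                          # subdiagonal of ones
    return mat

def sample_series(n_terms=60):
    """(1 / omega_0(1)) * (1 + sum_{i=1}^{n_terms} e_1 M(i)...M(1) e_1^T), truncated."""
    w0 = sample_omega()
    prod, total = np.eye(L), 1.0
    for _ in range(n_terms):
        prod = M(sample_omega()) @ prod              # prepend the next factor M(i)
        total += prod[0, 0]                          # e_1 M(i)...M(1) e_1^T
    return total / w0[1]

est = np.mean([sample_series() for _ in range(2000)])
print("Monte Carlo estimate of the expectation in (1.3):", round(est, 3))
print("implied drift c = 1/E[...] (cf. Remark 1.1):", round(1 / est, 3))
```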


Remark 1.1  By Brémont [1], under condition (1.3), {X_n}_{n≥0} satisfies the law of large numbers with a drift c > 0; in fact,

$$c = 1\Big/ E\left(\frac{1}{\omega_0(1)}\Big(1 + \sum_{i=1}^{\infty} e_1 M(i)\cdots M(1)e_1^T\Big)\right),$$

which further implies X_n → ∞ as n → ∞, P-a.s.

Remark 1.2  It is easy to check that P_∞ is the invariant measure of the process {A(t)}_{t≥0}, which plays an important role in the proof of the law of large numbers for RWRE; see for example [5] for the (1,1)-RWRE and [6] for the (L, 1)-RWRE.

Furthermore, based on Kesten’s work, Lalley [7] proved that the process {A(t) : t ≥ 0} has a limit distribution in the quenched sense for the (1,1)-RWRE. We partially extend his result to the (L, 1)-RWRE as follows.

Theorem 1.2  Assume that (1.1) and (1.3) hold. Then for every continuous function f : Ω → R and every ε > 0,

$$\lim_{t\to\infty} P\left(\omega : \left|E_\omega f(A(t)) - \int f\, dP_\infty\right| > \varepsilon\right) = 0. \tag{1.5}$$

The key point in our proofs is that the regeneration times (see (2.3)) can be expressed explicitly in terms of the intrinsic multitype branching process within the (L, 1)-RWRE, which was formulated in [6]. The next two sections are devoted to the proofs of Theorem 1.1 and Theorem 1.2, respectively. In what follows we always assume that (1.1) and (1.3) hold, and that X_0 = 0, P-a.s.

2  Branching Structure and Proof of Theorem 1.1

Notice that, under assumption (1.3), X_n → ∞, P-a.s., by Remark 1.1. To recall the branching structure within such an (L, 1)-RWRE {X_n} (see [6]), we need some definitions. Define the hitting times T_n := inf{k ≥ 0 : X_k = n}, ∀ n ≥ 0. For n > 0, −∞ < i < n, and 1 ≤ l ≤ L, set

$$U_{i,l}^n := \#\{0 < k < T_n : X_{k-1} > i,\ X_k = i - l + 1\},$$

which records the number of steps made by the walk from positions above i to i − l + 1 before T_n. We also set

$$U_i^n := (U_{i,1}^n, U_{i,2}^n, \cdots, U_{i,L}^n).$$

Observe that |U_i^n| is the total number of steps made by the walk reaching or crossing i downward from positions above i before T_n. For k = 0, 1, · · · , n − 1, let I_k := {X_m : T_k ≤ m < T_{k+1}}. Then {I_k}_{0≤k≤n−1} decomposes the path of the random walk before time T_n into n independent and non-intersecting pieces. For 1 ≤ l ≤ L and i < k, define

$$U_l^n(k, i) := \#\{T_k \le m < T_{k+1} : X_{m-1} > i,\ X_m = i - l + 1\},$$


which counts the number of steps in I_k from positions above i to i − l + 1. Let

$$U^n(k, i) := (U_1^n(k, i), U_2^n(k, i), \cdots, U_L^n(k, i)),$$

which records the number of steps in I_k reaching or crossing i downward from positions above i. Set U^n(k, i) := 0 for i > k, and U^n(k, k) := e_1. From the definitions of U_i^n and U^n(k, i), it follows that

$$U_i^n = \sum_{k=(i+1)\vee 0}^{n-1} U^n(k, i).$$

It is proved in [6] that the process U_{n−1}^n, · · · , U_0^n is closely related to a specific multitype branching process with immigration in random environment {Z_{−n} : n ≥ 0}, with the following probability structure. For each integer k, let Z(k, t) denote the L-type branching process in random environment which starts at time k with a type-1 immigrant at the beginning. Set Z(k, t) := 0 for k < t, and Z(k, k) := e_1. For k > t, when the environment {ω_i : i ∈ Z} and Z(k, k), Z(k, k − 1), · · · , Z(k, t + 1) are given, Z(k, t) is the number of offspring of all the Z(k, t + 1) particles, and each of the Z(k, t + 1) particles gives birth to new particles independently with the following distributions:

$$P_\omega\big(Z(k, t) = (u_1, \cdots, u_L) \mid Z(k, t+1) = e_1\big) = \frac{(u_1 + \cdots + u_L)!}{u_1! \cdots u_L!}\, \omega_{t+1}(-1)^{u_1} \cdots \omega_{t+1}(-L)^{u_L}\, \omega_{t+1}(1), \tag{2.1}$$

and, for all l = 2, · · · , L,

$$P_\omega\big(Z(k, t) = (u_1, \cdots, u_{l-2}, u_{l-1}+1, u_l, \cdots, u_L) \mid Z(k, t+1) = e_l\big) = \frac{(u_1 + \cdots + u_L)!}{u_1! \cdots u_L!}\, \omega_{t+1}(-1)^{u_1} \cdots \omega_{t+1}(-L)^{u_L}\, \omega_{t+1}(1). \tag{2.2}$$

We assume that, conditioned on ω, for fixed k and all t < k, each particle at generation t produces new particles independently of everything up to the t-th generation. We also assume that, conditioned on ω, the processes {Z(k, ∗)}_{k∈Z} are independent. For n ≥ 0, let Z_{−n} denote the total number of offspring born at time −n of the immigrants who arrived between time 0 and −n + 1; namely, Z_0 = 0, and for n > 0,

$$Z_{-n} = \sum_{k=0}^{n-1} Z(-k, -n).$$

Define

$$\nu_0 := 0, \qquad \nu_{k+1} := \inf\{n > \nu_k : Z_{-n} = 0\}, \quad k \ge 0,$$

$$W_k := \sum_{\nu_k \le n < \nu_{k+1}} Z_{-n}, \quad k \ge 0, \qquad Y_k := \sum_{m=k+1}^{\infty} Z(-k, -m), \quad k \ge 0.$$
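To make the offspring laws (2.1)–(2.2) concrete, here is a small simulation sketch (ours, not from [6]); it samples offspring by the equivalent sequential rule “add a type-i child with probability ω_{t+1}(−i), stop with probability ω_{t+1}(1)”, runs the immigration process Z_0 = 0, Z_{−1}, Z_{−2}, · · ·, and records ν_1 and W_0. The Dirichlet environment law is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 2
alpha = np.concatenate([np.ones(L), [8.0]])            # environment letters biased to +1 jumps

def sample_omega():
    """One letter omega as a dict {z: omega(z)} for z in {-L,...,-1,1}."""
    w = rng.dirichlet(alpha)
    return {**{-(L - j): w[j] for j in range(L)}, 1: w[L]}

def offspring(parent_type, w):
    """One particle's offspring under (2.1)/(2.2): repeatedly add a type-i child with
    probability w[-i] and stop with probability w[1]; a type-l parent (l >= 2) gets one
    extra type-(l-1) child deterministically."""
    counts = np.zeros(L, dtype=int)
    if parent_type >= 2:
        counts[parent_type - 2] += 1
    types, probs = list(range(1, L + 1)) + [0], [w[-i] for i in range(1, L + 1)] + [w[1]]
    while True:
        o = int(rng.choice(types, p=probs))            # 0 encodes "stop"
        if o == 0:
            return counts
        counts[o - 1] += 1

def first_block(max_gen=100_000):
    """Run Z_0 = 0, Z_{-1}, Z_{-2}, ... until nu_1 = min{n >= 1 : Z_{-n} = 0};
    return (nu_1, W_0) with W_0 = Z_0 + Z_{-1} + ... + Z_{-(nu_1 - 1)}."""
    pop, W = np.zeros(L, dtype=int), np.zeros(L, dtype=int)
    for n in range(1, max_gen):
        w = sample_omega()                             # plays the role of omega_{-n+1}
        parents = pop.copy()
        parents[0] += 1                                # the type-1 immigrant of this generation
        pop = np.zeros(L, dtype=int)
        for typ in range(1, L + 1):
            for _ in range(int(parents[typ - 1])):
                pop += offspring(typ, w)
        if pop.sum() == 0:
            return n, W
        W += pop
    raise RuntimeError("no empty generation found; check that (1.3) plausibly holds")

nu1, W0 = first_block()
print("nu_1 =", nu1, "  W_0 =", W0)
```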

The following result is established in [6].


Theorem 2.1  Suppose that X_n → ∞, P-a.s. Then
(a)

$$T_n = n + \sum_{i=-\infty}^{n-1} |U_i^n| + \sum_{i=-\infty}^{n-1} U_{i,1}^n = n + \sum_{i=-\infty}^{n-1} U_i^n e_0,$$

where e_0 = (2, 1, · · · , 1)^T.
(b) For almost every ω ∈ Ω, each of the processes U^n(k, ∗), 0 ≤ k ≤ n − 1, is an inhomogeneous multitype branching process beginning at time k with branching mechanism

$$P_\omega\big(U^n(k, i-1) = (u_1, \cdots, u_L) \mid U^n(k, i) = e_1\big) = \frac{(u_1 + \cdots + u_L)!}{u_1! \cdots u_L!}\, \omega_i(-1)^{u_1} \cdots \omega_i(-L)^{u_L}\, \omega_i(1),$$

and, for 2 ≤ l ≤ L,

$$P_\omega\big(U^n(k, i-1) = (u_1, \cdots, 1 + u_{l-1}, \cdots, u_L) \mid U^n(k, i) = e_l\big) = \frac{(u_1 + \cdots + u_L)!}{u_1! \cdots u_L!}\, \omega_i(-1)^{u_1} \cdots \omega_i(-L)^{u_L}\, \omega_i(1).$$

Moreover, conditioned on ω, the processes U^n(k, ∗), k = 0, 1, · · · , n − 1, are mutually independent, and each of the branching processes U^n(k, ∗) has independent lines of descent. Consequently, U_{n−1}^n = 0, U_{n−2}^n, · · · , U_1^n, U_0^n are the first n generations of an inhomogeneous multitype branching process with a type-1 immigration in each generation in random environment.
(c) U_{n−1}^n = 0, U_{n−2}^n, · · · , U_1^n, U_0^n has the same distribution as the first n generations of the inhomogeneous multitype branching process with a type-1 immigration in each generation in random environment {Z_{−n} : n ≥ 0}, with the branching mechanism defined in (2.1) and (2.2).

Now we decompose the path of the walk by the regeneration times, defined by

$$\tau_0 := 0, \qquad \tau_{k+1} := \inf\{t > \tau_k : X_m \ge X_t > X_n \ \text{for all}\ m \ge t > n\}, \qquad \forall\, k \ge 0, \tag{2.3}$$

see for example [4]. Let

$$\xi_k := \Big(\tau_{k+1} - \tau_k,\ X(\tau_{k+1}) - X(\tau_k),\ \big(\omega_{X(\tau_k)+i},\ 0 \le i < X(\tau_{k+1}) - X(\tau_k)\big),\ \big(X_{t+1} - X_t,\ \tau_k \le t < \tau_{k+1}\big)\Big), \tag{2.4}$$

which describes the “piece of the path between τ_k and τ_{k+1}”. We emphasize that the technique of regeneration times is a powerful tool for dealing with random walks in random environment. Paths between consecutive regeneration times do not intersect, so one can essentially exploit the independence of the environment. In higher dimensions this tool plays an even more significant role: in the literature, the proofs of the 0–1 law, the LLN, the CLT and also the LDP depend, more or less, on regeneration times. For details, see Sznitman [8], Sznitman–Zerner [9] and the references therein. Here and in what follows, for typographical reasons we shall occasionally write X(t) for X_t, and later T(x) for T_x and τ(k) for τ_k.
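The quantities introduced so far are easy to extract from a simulated trajectory. The following sketch (again ours, purely illustrative) computes the crossing counts U^n_{i,l} for one path, checks the identity T_n = n + Σ_i U_i^n e_0 of Theorem 2.1(a), and locates the regeneration times (2.3) within the simulated horizon (near the end of the sample the regeneration property can only be checked approximately).

```python
import numpy as np

rng = np.random.default_rng(3)
L, T, n = 2, 4000, 50
JUMPS = [-l for l in range(L, 0, -1)] + [1]
alpha = np.concatenate([np.ones(L), [8.0]])            # right drift, so T_n < infinity a.s.
omega = {x: rng.dirichlet(alpha) for x in range(-L * T - 1, T + 2)}

X = [0]
for _ in range(T):
    X.append(X[-1] + int(rng.choice(JUMPS, p=omega[X[-1]])))
X = np.array(X)

Tn = int(np.argmax(X >= n))                            # first hitting time of n (up-jumps are +1)
U = {}                                                 # U[(i, l)] = U^n_{i,l}
for k in range(1, Tn):
    prev, now = int(X[k - 1]), int(X[k])
    if now < prev:                                     # a downward step reaches/crosses levels now..prev-1
        for i in range(now, prev):
            U[(i, i - now + 1)] = U.get((i, i - now + 1), 0) + 1

e0 = np.array([2] + [1] * (L - 1))                     # e_0 = (2, 1, ..., 1)^T
rhs = n + sum(cnt * e0[l - 1] for (i, l), cnt in U.items())
print("T_n =", Tn, "  n + sum_i U_i^n e_0 =", rhs)     # Theorem 2.1(a): the two should agree

# approximate regeneration times (2.3): strict new maxima that the path never goes below afterwards
prefix_max = np.maximum.accumulate(X)
suffix_min = np.minimum.accumulate(X[::-1])[::-1]
regen = [t for t in range(1, T // 2)
         if X[t] > prefix_max[t - 1] and suffix_min[t] == X[t]]
print("first regeneration times:", regen[:6])
```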


We also set l_k := τ_{k+1} − τ_k and d_k := X(τ_{k+1}) − X(τ_k), which denote the “length” and the “displacement” of ξ_k, respectively. From (2.3) it follows that l_k ≥ 1 and d_k ≥ 1, and hence the state space of ξ_k is the disjoint sum

$$\Xi := \bigsqcup_{l, d \ge 1} \{l\} \times \{d\} \times \Sigma^d \times \Lambda^l.$$

For the (1,1)-RWRE, Kesten [4] proved that these ξ_k's are i.i.d. conditioned on the event {X_t ≥ 0 for all t ≥ 0}. Here, for the (L, 1)-RWRE, we have a similar result.

Lemma 2.1  For each k > 0, τ_k is finite a.s. For each k ≥ 0, ξ_{k+1} is independent of {ω_i : i < 0} and ξ_0, ξ_1, · · · , ξ_k; the variables ξ_1, · · · , ξ_k, · · · are identically distributed, and each has the conditional distribution of ξ_0 given B_0 := {X_t ≥ 0 for all t ≥ 0}. Also, {ω_i : i < 0} is independent of B_0. Finally, given B_0, {ω_i : i < 0} and ξ_0, ξ_1, · · · are conditionally independent, and the conditional distribution of each ξ_k, k ≥ 0, given B_0 is the same.

Proof  See Lemma 1 in Kesten [4]; the argument does not depend on the jump range of the random walk. □

Now we describe the conditional distribution of ξ_0 given B_0 by means of a branching process with immigration in a random environment, as follows.

Lemma 2.2  For any N ≥ 0, the conditional joint distribution of

$$\Big\{ l_{N-i},\ d_{N-i},\ 0 \le i \le N,\ \ \omega_{X(\tau_{N+1})-j},\ 0 < j \le X(\tau_{N+1}),\ \ U_{X(\tau_{N+1})-r-1}^{X(\tau_{N+1})},\ 0 \le r \le X(\tau_{N+1}) \Big\},$$

given B_0, is the same as the distribution of

$$\Big\{ W_i e_0 + (\nu_{i+1} - \nu_i),\ (\nu_{i+1} - \nu_i),\ 0 \le i \le N,\ \ \omega_{-j+1},\ 0 < j \le \nu_{N+1},\ \ Z_{-r},\ 0 \le r \le \nu_{N+1} \Big\}.$$

Proof  Let 0 = x_0 < x_1 < · · · < x_{N+1}. Then the event B_0 ∩ {X(τ_i) = x_i, 0 < i ≤ N + 1} can be described as follows:

$$\big\{ U_{x_i-1} = 0,\ i = 0, \cdots, N+1,\ \ |U_{j-1}| \ge 1,\ 0 \le j \le x_{N+1},\ j \notin \{x_0, \cdots, x_{N+1}\} \big\}. \tag{2.5}$$

If U_{x_{N+1}-1} = 0, then U_{j-1} = U_{j-1}^{x_{N+1}} for j < x_{N+1}, since X_t ≥ x_{N+1} for all t ≥ T_{x_{N+1}}. So (2.5) equals

$$\big\{ U_{x_{N+1}-1}^{x_{N+1}} = 0,\ \ U_{x_i-1}^{x_{N+1}} = 0,\ 0 \le i \le N,\ \ |U_{j-1}^{x_{N+1}}| \ge 1,\ j \in [0, x_{N+1}]\setminus\{x_0, \cdots, x_{N+1}\} \big\}.$$

By Theorem 2.1(a), we have that, on B_0 ∩ {X(τ_1) = x_1},

$$\tau_1 = \sum_{i=0}^{x_1} U_{i-1}^{x_1} e_0 + x_1,$$

and that, if X(τ_k) = x_k and X(τ_{k+1}) = x_{k+1}, then

$$l_k = \tau_{k+1} - \tau_k = \sum_{i=x_k}^{x_{k+1}} U_{i-1}^{x_{k+1}} e_0 + x_{k+1} - x_k.$$


Now fix integers l̄_i, d̄_i ≥ 1 for i = 0, · · · , N, and put x_0 = 0, x_i = \sum_{j=0}^{i-1} d̄_j for i = 1, · · · , N + 1. If the vectors u_{j-1} ∈ N^L satisfy

$$u_{x_i-1} = 0,\ i = 0, 1, \cdots, N+1, \qquad |u_{j-1}| \ge 1,\ j \in [0, x_{N+1}]\setminus\{x_0, \cdots, x_{N+1}\},$$

and

$$\bar{l}_i = \sum_{j=x_i}^{x_{i+1}} u_{j-1}\, e_0 + \bar{d}_i, \tag{2.6}$$

then, for given sets C_j ∈ B(Σ), j = 1, · · · , x_{N+1}, we have

$$\begin{aligned} &P\Big( l_{N-i} = \bar{l}_{N-i},\ d_{N-i} = \bar{d}_{N-i},\ i = 0, \cdots, N,\ \ \omega_{X(\tau_{N+1})-j} \in C_j,\ 0 < j \le X(\tau_{N+1}),\\ &\qquad\quad U_{X(\tau_{N+1})-j-1}^{X(\tau_{N+1})} = u_{x_{N+1}-j-1},\ 0 \le j \le X(\tau_{N+1}) \;\Big|\; B_0 \Big)\\ &= P\Big( \omega_{x_{N+1}-j} \in C_j,\ 0 < j \le x_{N+1},\ \ U_{x_{N+1}-j-1}^{x_{N+1}} = u_{x_{N+1}-j-1},\ 0 < j \le x_{N+1},\ \ U_{x_{N+1}-1} = 0 \Big) \Big/ P(B_0). \tag{2.7} \end{aligned}$$

Since B_{x_{N+1}} is defined in terms of the steps X_{t+1} − X_t with t ≥ T_{x_{N+1}}, and X_t ≥ x_{N+1} for t ≥ T_{x_{N+1}}, we have that {U_{x_{N+1}-1} = 0} = B_{x_{N+1}} is independent of {ω_{x_{N+1}-j}, 0 < j ≤ x_{N+1}} and {U_{r-1}^{x_{N+1}}, 0 ≤ r < x_{N+1}}. In addition, observe that

$$P(U_{x_{N+1}-1} = 0) = P(B_{x_{N+1}}) = P(B_0)$$

and U_{x_{N+1}-1}^{x_{N+1}} = 0. Therefore, (2.7) equals

$$P\Big( \omega_{x_{N+1}-j} \in C_j,\ 0 < j \le x_{N+1},\ \ U_{x_{N+1}-j-1}^{x_{N+1}} = u_{x_{N+1}-j-1},\ 0 \le j \le x_{N+1} \Big).$$

As shown in Theorem 2.1(c), the distribution of U_{x-1}^{x}, \cdots, U_{-1}^{x} is the same as that of Z_0, \cdots, Z_{-x}. Thus (2.7) equals

$$\begin{aligned} &P\Big( \omega_{-j+1} \in C_j,\ 0 < j \le x_{N+1},\ \ Z_{-r} = u_{x_{N+1}-r-1},\ 0 \le r \le x_{N+1} \Big)\\ &= P\Big( W_i e_0 + (\nu_{i+1} - \nu_i) = \bar{l}_{N-i},\ \ \nu_{i+1} - \nu_i = \bar{d}_{N-i},\ 0 \le i \le N,\ \ \omega_{-j+1} \in C_j,\ 0 < j \le \nu_{N+1},\\ &\qquad\quad Z_{-r} = u_{x_{N+1}-r-1},\ 0 \le r \le x_{N+1} \Big). \tag{2.8} \end{aligned}$$

Noticing that both the left side of (2.7) and the right side of (2.8) vanish when (2.6) fails, we obtain Lemma 2.2. □

Using Lemma 2.2 we can compute E(l_1), as follows.

Corollary 2.1  P(l_1 = 1) > 0. Moreover, E(l_1) < ∞ if and only if

$$E\left(\frac{1}{\omega_0(1)}\left(1 + \sum_{i=1}^{\infty} e_1 M(i)\cdots M(1)e_1^T\right)\right) < \infty.$$

Finally,

$$\lim_{t\to\infty} \sum_{k=1}^{\infty} P(\tau_k = t) = \lim_{t\to\infty} P(\tau_k = t \text{ for some } k \ge 0) = \frac{1}{E(l_1)}. \tag{2.9}$$


Proof  By Lemma 2.1, ξ_1 has the same distribution as ξ_0 given B_0. Thus, by Lemma 2.2 with N = 0, l_1 has the same distribution as W_0 e_0 + ν_1. From this it follows that

$$P(l_1 = 1) = P(W_0 e_0 + \nu_1 = 1) = P(W_0 = 0,\ \nu_1 = 1) = P(\nu_1 = 1) = P(Z_{-1} = 0) = E(\omega_0(-1)) > 0,$$

where we use the fact that ω_0(−1) > 0, P-a.s., in the last inequality.

Now, notice that the maximum of the entries of the matrix M(1) is max{a_1(1), 1}, which is at most 1/ω_1(1). Therefore,

$$E(\|M(1)\|) < E\left(\frac{1}{\omega_0(1)}\left(1 + \sum_{i=1}^{\infty} e_1 M(i)\cdots M(1)e_1^T\right)\right) < \infty.$$

Observe that the sequence {ω_i}_{i∈Z} is i.i.d. Then, by Key [10, Theorem 4.2], there exist positive constants k_1 and k_2 such that P(ν_1 > t) < k_1 exp(−k_2 t), and hence

$$E(\nu_1) < \infty. \tag{2.10}$$

On the other hand,

$$E(W_0 e_0) = E\left(\sum_{t=0}^{\nu_1-1} Z_{-t}\, e_0\right) = \sum_{t=0}^{\infty} E\big(Y_t\, e_0,\ \nu_1 > t\big) = E(Y_0 e_0)\, E(\nu_1) = \left(E\left(\frac{1}{\omega_0(1)}\Big(1 + \sum_{i=1}^{\infty} e_1 M(i)\cdots M(1)e_1^T\Big)\right) - 1\right) E(\nu_1). \tag{2.11}$$

Here the second equality uses the identity W_0 = \sum_{t=0}^{\nu_1-1} Y_t, which holds because every line of descent started before ν_1 is extinct by time ν_1; in the last equality we use the fact E(Y_0 e_0) = E(T_1 − 1) and the relation

$$E\, T_1 = E\left(\frac{1}{\omega_0(1)}\left(1 + \sum_{i=1}^{\infty} e_1 M(i)\cdots M(0)e_1^T\right)\right).$$

From (2.10), (2.11) and E(l_1) = E(W_0 e_0 + ν_1), we deduce that

$$E(l_1) < \infty \iff E\left(\frac{1}{\omega_0(1)}\left(1 + \sum_{i=1}^{\infty} e_1 M(i)\cdots M(1)e_1^T\right)\right) < \infty.$$

Finally, since the increments {τ_{k+1} − τ_k : k ≥ 0} are independent and {τ_{k+1} − τ_k : k ≥ 1} are identically distributed (Lemma 2.1), the renewal theorem (Feller [11, Chapter XI]) yields

$$\lim_{t\to\infty} \sum_{k=1}^{\infty} P(\tau_k = t) = \lim_{t\to\infty} P(\tau_k = t \text{ for some } k \ge 0) = \frac{1}{E(l_1)},$$


which completes the proof of Corollary 2.1. □

Now we turn to the proof of Theorem 1.1.

Proof of Theorem 1.1  With Lemma 2.1 and the renewal theorem (2.9) for the regeneration times in Corollary 2.1, we can carry out the proof of Theorem 1.1 following the procedure of the proof of Theorem 1.1 in [4] line by line. We omit the details. This finishes the proof of Theorem 1.1. □

3  Proof of Theorem 1.2

To prove Theorem 1.2, it suffices to establish

$$\lim_{t\to\infty} P\left(\omega : \left|E_\omega f(A(t)) - \int f\, dP_\infty\right| < \varepsilon\right) = 1 \tag{3.1}$$

for every continuous function f : Ω → R depending only on finitely many coordinates and for every ε > 0. With the help of the inclusion

$$\left\{\omega : \left|E_\omega f(A(t)) - \int f\, dP_\infty\right| > \varepsilon + 4\varepsilon\|f\|_\infty\right\} \subset \left\{\omega : \left|E_\omega f(A(t)) - E_\omega\Big(\frac1k\sum_{s=1}^{k} f(A(t+2s))\Big)\right| > 2\varepsilon\|f\|_\infty\right\} \cup \left\{\omega : P_\omega\Big(\Big|\frac1k\sum_{s=1}^{k} f(A(t+2s)) - \int f\, dP_\infty\Big| > \varepsilon\Big) > \varepsilon\right\},$$

Theorem 1.2 follows from the next two propositions.

Proposition 3.1  For every continuous function f : Ω → R depending only on finitely many coordinates and for every ε > 0, there exist integers k and t_0 so large that, whenever t ≥ t_0,

$$P\left(\left|\frac1k\sum_{s=1}^{k} f(A(t+2s)) - \int f\, dP_\infty\right| > \varepsilon\right) < \varepsilon^2. \tag{3.2}$$

To state the second proposition, we need some definitions. For a fixed environment ω = (ω_z)_{z∈Z}, let δ be a constant satisfying 0 < δ < 1/2, and define

$$S = S(\omega, \delta) := \big\{ z \in 2\mathbb{N} : \omega_z(1) \ge \delta^{1/2},\ \omega_{z+1}(-1) \ge \delta^{1/2} \big\},$$

where the elements of S will be referred to as slack points. Also let

$$N_t = N_t(\delta) := \#\big\{ s \le t : X_s \in S,\ \text{and}\ (X_{s-2}, X_{s-1}) \ne (X_s, X_s + 1) \big\}.$$

Since ω_z(1) > 0 and ω_{z+1}(−1) > 0, P-a.s., we can choose δ small enough that S contains infinitely many points. For this δ and fixed ω, by the Markov property of {X_s : s ≥ 0} under P_ω, we have

$$\lim_{t\to\infty} N_t = \infty, \qquad \text{a.s.-}P_\omega. \tag{3.3}$$
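As a quick illustration of these definitions (ours, with an arbitrary Dirichlet environment and an arbitrary δ), the sketch below marks the slack points of a sampled environment and counts N_t along a simulated trajectory; in line with (3.3), N_t keeps growing with t.

```python
import numpy as np

rng = np.random.default_rng(4)
L, T, delta = 2, 2000, 0.01
JUMPS = [-l for l in range(L, 0, -1)] + [1]
alpha = np.concatenate([np.ones(L), [8.0]])
omega = {x: dict(zip(JUMPS, rng.dirichlet(alpha))) for x in range(-L * T - 1, T + 2)}

# slack points: even z with omega_z(1) >= sqrt(delta) and omega_{z+1}(-1) >= sqrt(delta)
S = {z for z in range(-L * T, T) if z % 2 == 0
     and omega[z][1] >= delta ** 0.5 and omega[z + 1][-1] >= delta ** 0.5}

X = [0]
for _ in range(T):
    x = X[-1]
    X.append(x + int(rng.choice(JUMPS, p=[omega[x][z] for z in JUMPS])))

def N(t):
    """N_t: visits to slack points by time t that do not complete a loop z -> z+1 -> z."""
    return sum(1 for s in range(2, t + 1)
               if X[s] in S and (X[s - 2], X[s - 1]) != (X[s], X[s] + 1))

print("slack points in the window:", len(S))
print("N_t for t = 500, 1000, 2000:", [N(t) for t in (500, 1000, 2000)])
```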

Proposition 3.2  There exist i.i.d. random variables ξ_1, ξ_2, · · ·, defined on (Z^N, B(Z^N), P_ω), with distributions

$$P_\omega(\xi_n = 1) = P_\omega(\xi_n = -1) = \delta/2, \qquad P_\omega(\xi_n = 0) = 1 - \delta, \tag{3.4}$$


such that, for k = 1, 2, · · · and t = 0, 1, · · ·,

$$\big\| F_t^\omega - F_{t+2k}^\omega \big\| \le 2 P_\omega(\upsilon_k \ge N_t).$$

Therefore,

$$\Big\| F_t^\omega - \frac1k\sum_{s=1}^{k} F_{t+2s}^\omega \Big\| \le 2 P_\omega(\upsilon_k \ge N_t). \tag{3.5}$$

Here υ_k := min{n : \sum_{j=1}^{n} ξ_j = k}, F_t^ω denotes the distribution of X_t (under P_ω), and ‖F‖ is the total variation norm of F, namely,

$$\|F\| = \sup\left\{ \int f\, dF : \|f\|_\infty = 1,\ f \text{ is a measurable function} \right\}.$$
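The bound (3.5) is useful because P_ω(υ_k ≥ N_t) → 0 once N_t is large: υ_k is simply the first time the lazy ±1 walk Σ_j ξ_j hits level k. A small sketch (ours; the values of δ and k and the cap are arbitrary) makes this quantitative by simulation.

```python
import numpy as np

rng = np.random.default_rng(5)
delta, k = 0.2, 5

def upsilon(k, max_n=50_000):
    """upsilon_k = min{n : xi_1 + ... + xi_n = k}, xi_j i.i.d. with law (3.4).
    (Capped at max_n; upsilon_k is a.s. finite, so the cap rarely binds.)"""
    xi = rng.choice([-1, 0, 1], p=[delta / 2, 1 - delta, delta / 2], size=max_n)
    hits = np.nonzero(np.cumsum(xi) == k)[0]
    return int(hits[0]) + 1 if hits.size else max_n

samples = np.array([upsilon(k) for _ in range(1000)])
for m in (50, 200, 1000, 5000):
    print(f"P(upsilon_{k} >= {m}) ~ {np.mean(samples >= m):.3f}")   # decreases to 0 as m grows
```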

Proof of Theorem 1.2  By Proposition 3.1, for every ε > 0 there exist integers k and t_0 large enough such that, whenever t ≥ t_0,

$$P\left(\omega : P_\omega\Big(\Big|\frac1k\sum_{s=1}^{k} f(A(t+2s)) - \int f\, dP_\infty\Big| > \varepsilon\Big) > \varepsilon\right) \le \frac{1}{\varepsilon}\, P\left(\Big|\frac1k\sum_{s=1}^{k} f(A(t+2s)) - \int f\, dP_\infty\Big| > \varepsilon\right) < \varepsilon. \tag{3.6}$$

Notice that υ_k < ∞, P_ω-a.s. So, by (3.3) and (3.5), we have

$$\lim_{t\to\infty} \Big\| F_t^\omega - \frac1k\sum_{s=1}^{k} F_{t+2s}^\omega \Big\| = 0,$$

and hence, for the k in (3.6), there exists t_1 large enough such that, for all t ≥ t_1,

$$P\left(\omega : \Big|E_\omega f(A(t)) - E_\omega\Big(\frac1k\sum_{s=1}^{k} f(A(t+2s))\Big)\Big| > 2\varepsilon\|f\|_\infty\right) \le P\left(\omega : \Big\| F_t^\omega - \frac1k\sum_{s=1}^{k} F_{t+2s}^\omega \Big\| > \varepsilon\right) < \varepsilon. \tag{3.7}$$

Combining (3.6) and (3.7), we obtain that, for t ≥ max{t_0, t_1},

$$P\left(\omega : \Big|E_\omega f(A(t)) - \int f\, dP_\infty\Big| > \varepsilon + 4\varepsilon\|f\|_\infty\right) < 2\varepsilon,$$

which yields (3.1). □

The following are the proofs of the two propositions.

Proof of Proposition 3.1  To obtain (3.2), we first prove

$$P_\infty \ll P. \tag{3.8}$$

To this end, we use the same method that leads to equation (1.14) in [7]. In fact, we have

$$\frac{1}{E(T_1)}\, E\left(\sum_{t=0}^{T_1-1} f(A(t))\right) = \int f\, dP_\infty.$$


On the other hand, it is proved in [6, the proof of Theorem 1.1.3] that

$$E\left(\sum_{t=0}^{T_1-1} f(A(t))\right) = \int f(\omega)\left(\frac{1}{\omega_0(1)} + \sum_{i=1}^{\infty} \frac{1 + e_1 M(0)\cdots M(-i+1)e_1^T}{\omega_i(1)}\right) dP.$$

Hence, (3.8) is available. Let L_k := sup{n : τ_n ≤ 2k}. To get (3.2), we rewrite \frac1k\sum_{t=1}^{k} f(A(2t)) as

$$\frac1k\sum_{t=1}^{k} f(A(2t)) = \frac{1}{L_k}\sum_{j=1}^{L_k}\ \sum_{\substack{\tau_j \le t < \tau_{j+1}\\ t\ \mathrm{even}}} f(A(t)) \cdot \frac{L_k}{k} \;+\; \frac1k\sum_{\substack{0 < t < \tau_1\\ t\ \mathrm{even}}} f(A(t)) \;-\; \frac1k\sum_{\substack{2k < t < \tau_{L_k+1}\\ t\ \mathrm{even}}} f(A(t)).$$

It is easy to see that the second summand tends to 0 in probability as k → ∞. Since τ_{L_k+1} − 2k ≤ τ_{L_k+1} − τ_{L_k} \overset{d}{=} τ_2 − τ_1 < ∞, P-a.s., and f is bounded, the third summand also tends to 0 in probability as k → ∞. Moreover, by the Ergodic Theorem (see Durrett [12, Theorem 2.1, Chapter 6]) and the renewal theorem (see Durrett [12, Theorem 4.1, Chapter 3]), the first summand tends to

$$\frac{2}{E(\tau_2 - \tau_1)}\cdot E\left(\sum_{\substack{\tau_1 \le t < \tau_2\\ t\ \mathrm{even}}} f(A(t))\right), \qquad P\text{-a.s., as } k \to \infty,$$

which confirms that \frac1k\sum_{t=1}^{k} f(A(2t)) converges in probability as k → ∞. On the other hand, \frac1k\sum_{t=1}^{k} f(A(2t)) converges to \int f\, dP_\infty in distribution by Theorem 1.1. So

$$\frac1k\sum_{t=1}^{k} f(A(2t)) \;\longrightarrow\; \int f\, dP_\infty \quad \text{in probability, as } k \to \infty. \tag{3.9}$$

Now, by (3.8), we define another probability measure P^∗ on the measurable space (Z^N, B(Z^N)) by

$$\frac{dP^*}{dP} := \frac{dP_\infty}{dP}.$$

Then, by the same procedure used to obtain (3.9), we have

$$\lim_{k\to\infty} P^*\left(\left|\frac1k\sum_{s=1}^{k} f(A(2s)) - \int f\, dP_\infty\right| > \varepsilon\right) = 0, \qquad \forall\, \varepsilon > 0. \tag{3.10}$$

Notice that the process {A(t) : t ≥ 0} is a Markov chain. By Theorem 1.1, for every fixed k > 0 we obtain

$$\mathcal{L}_{P}\left(\frac1k\sum_{s=1}^{k} f(A(t+2s))\right) \to \mathcal{L}_{P^*}\left(\frac1k\sum_{s=1}^{k} f(A(2s))\right) \tag{3.11}$$

as t → ∞, where \mathcal{L}_{P}(\frac1k\sum_{s=1}^{k} f(A(t+2s))) denotes the distribution of \frac1k\sum_{s=1}^{k} f(A(t+2s)) with respect to the probability measure P, and \mathcal{L}_{P^*}(\frac1k\sum_{s=1}^{k} f(A(2s))) that of \frac1k\sum_{s=1}^{k} f(A(2s)) with respect to P^∗. Finally, combining (3.10) and (3.11), we obtain (3.2). □


Proof of Proposition 3.2  The proof of (3.5) relies on a coupling argument. To this end, we construct two processes {X_t}_{t≥0} and {X_t^∗}_{t≥0} satisfying

$$X_t = X_{t+2k}^* \qquad \text{whenever } t \ge \min\{n : N_n \ge \upsilon_k\}.$$

Here {X_t}_{t≥0} has the same finite-dimensional distributions as the process defined in (1.2). Then, by the coupling inequality, we have

$$\big\| F_t^\omega - F_{t+2k}^\omega \big\| \le 2 P_\omega\big(t < \min\{n : N_n \ge \upsilon_k\}\big) \le 2 P_\omega(\upsilon_k \ge N_t).$$

Noticing that υ_k is increasing in k, we get (3.5) immediately.

In the (L, 1)-RWRE, since we assume that ω_i(−1) > 0 and ω_i(1) > 0 for all i ∈ Z, a.s.-P, there are segments of the sample path of the form z → z + 1 → z along almost every trajectory of the process {X_t : t ≥ 0}. We call a segment of the sample path of the form z → z + 1 → z a loop. The key to the construction is that {X_t}_{t≥0} is a Markov chain under the probability measure P_ω, so the number of loops {X_t}_{t≥0} makes at every slack point z has the geometric distribution

$$p_n = \gamma_z^n (1 - \gamma_z), \qquad n = 0, 1, \cdots, \tag{3.12}$$

where γ_z = ω_z(1)ω_{z+1}(−1) ≥ δ. After finishing these loops, {X_t}_{t≥0} proceeds to its next two steps according to the distribution H̄_z, where

$$\begin{aligned} \bar{H}_z(z+1, z+2) &= \omega_z(1)\,\omega_{z+1}(1)/(1-\gamma_z),\\ \bar{H}_z(z+1, z+1-i) &= \omega_z(1)\,\omega_{z+1}(-i)/(1-\gamma_z), && i = 2, \cdots, L,\\ \bar{H}_z(z-i, z-i+1) &= \omega_z(-i)\,\omega_{z-i}(1)/(1-\gamma_z), && i = 1, \cdots, L,\\ \bar{H}_z(z-i, z-i-j) &= \omega_z(-i)\,\omega_{z-i}(-j)/(1-\gamma_z), && i, j = 1, \cdots, L. \end{aligned}$$

If at time t ≥ 0, X_t = z is not a slack point, {X_t}_{t≥0} proceeds to its next two steps according to the distribution H_z, where

$$\begin{aligned} H_z(z+1, z+2) &= \omega_z(1)\,\omega_{z+1}(1),\\ H_z(z+1, z+1-i) &= \omega_z(1)\,\omega_{z+1}(-i), && i = 1, \cdots, L,\\ H_z(z-i, z-i+1) &= \omega_z(-i)\,\omega_{z-i}(1), && i = 1, \cdots, L,\\ H_z(z-i, z-i-j) &= \omega_z(-i)\,\omega_{z-i}(-j), && i, j = 1, \cdots, L. \end{aligned}$$

Let z ∈ S(ω, δ) be a slack point, and let G_z be the probability distribution on Z_+^2 given by

$$\begin{aligned} G_z(0, 0) &= (1 - \gamma_z)(1 - \delta/2);\\ G_z(m, m+1) = G_z(m+1, m) &= \gamma_z^{m}(1 - \gamma_z)\,\delta/2, && m \ge 0;\\ G_z(m, m) &= \gamma_z^{m-1}(1 - \gamma_z)\big(\gamma_z - \delta(1+\gamma_z)/2\big), && m \ge 1;\\ G_z(m, n) &= 0, && \text{otherwise}. \end{aligned}$$
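A direct numerical check of the properties of G_z claimed below (it is a probability distribution, its marginals are the geometric law (3.12), and λ − λ^∗ has the law (3.4)) takes only a few lines; the values of γ_z and δ and the truncation level in this sketch of ours are arbitrary.

```python
import numpy as np

gamma, delta = 0.3, 0.1            # at a slack point gamma_z >= delta; these values are arbitrary
TRUNC = 60                         # truncation of the (infinite) support; error ~ gamma**TRUNC

G = np.zeros((TRUNC + 2, TRUNC + 2))
G[0, 0] = (1 - gamma) * (1 - delta / 2)
for m in range(TRUNC):
    G[m, m + 1] = G[m + 1, m] = gamma ** m * (1 - gamma) * delta / 2
for m in range(1, TRUNC + 1):
    G[m, m] = gamma ** (m - 1) * (1 - gamma) * (gamma - delta * (1 + gamma) / 2)

print("total mass (should be ~1):", round(G.sum(), 6))

marginal = G.sum(axis=1)           # law of lambda; should be geometric as in (3.12)
geom = (1 - gamma) * gamma ** np.arange(TRUNC)
print("max |marginal - geometric|:", np.max(np.abs(marginal[:TRUNC] - geom)))

law_of_diff = {d: 0.0 for d in (-1, 0, 1)}      # law of xi = lambda - lambda*
for m in range(TRUNC + 1):
    for n in range(TRUNC + 1):
        if abs(m - n) <= 1:
            law_of_diff[m - n] += G[m, n]
print("law of lambda - lambda*:", {d: round(p, 4) for d, p in law_of_diff.items()})  # ~ (delta/2, 1-delta, delta/2)
```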


It is easy to verify that G_z is a probability distribution and that both marginal distributions of G_z are the geometric distribution described in (3.12). Let the random vector (λ, λ^∗) have the distribution G_z. Then ξ = λ − λ^∗ has the distribution described in (3.4).

Now we construct the two processes according to the behavior of {X_t}_{t≥0}, by the same method as in Proposition 1 of [7]. Their trajectories are the same except when {X_t}_{t≥0} visits a slack point. Explicitly, when {X_t}_{t≥0} is at a slack point z, both {X_t}_{t≥0} and {X_t^∗}_{t≥0} make loops; the numbers of their loops are given by the random variables λ and λ^∗, respectively. After finishing the loops, they make their next two steps identically, according to the distribution H̄_z. When, at time t ≥ 0, X_t = z is not a slack point, the two processes behave alike, taking their next two steps according to the distribution H_z. At any slack point, one of the processes {X_t, t ≥ 0} and {X_t^∗, t ≥ 0} may be delayed (with respect to the other) by making a different number of loops. Eventually the accumulated delay of the path {X_t^∗, t ≥ 0} reaches 2k; after that time, {X_t^∗}_{t≥0} is forced to follow {X_t}_{t≥0}. For an intuitive picture of this construction, we refer the reader to the figure on page 91 of Lalley [7]. □

Acknowledgements  The authors would like to thank Huaming Wang for stimulating discussions and his kind help with the revised version, and Lin Zhang for helpful discussions.

References

[1] Brémont J. On some random walks on Z in random medium. Ann Prob, 2002, 30(3): 1266–1312
[2] Sznitman A S. Lectures on random motions in random media//DMV Seminar 32. Basel: Birkhäuser, 2002
[3] Zeitouni O. Random walks in random environment//Picard J, ed. LNM 1837. Berlin, Heidelberg: Springer-Verlag, 2004: 189–312
[4] Kesten H. A renewal theorem for random walk in a random environment//Probability (Proc Sympos Pure Math, Vol XXXI, Univ Illinois, Urbana, Ill, 1976). Providence, RI: Amer Math Soc, 1977: 67–77
[5] Alili S. Asymptotic behavior for random walks in random environments. J Appl Prob, 1999, 36(2): 334–349
[6] Hong W M, Wang H M. Branching structure for an (L − 1) random walk in random environment and its applications. To appear in Infinite Dimensional Analysis, Quantum Probability and Related Topics, 2013
[7] Lalley S. An extension of Kesten's renewal theorem for random walk in a random environment. Adv Appl Math, 1986, 7(1): 80–100
[8] Sznitman A S. Topics in random walks in random environment//School and Conference on Probability Theory, ICTP Lect Notes 17. Trieste: Abdus Salam Int Cent Theoret Phys, 2004: 203–266
[9] Sznitman A S, Zerner M. A law of large numbers for random walks in random environment. Ann Prob, 1999, 27(1): 1851–1867
[10] Key E S. Limiting distributions and regeneration times for multitype branching processes with immigration in a random environment. Ann Prob, 1987, 15(1): 344–353
[11] Feller W. An Introduction to Probability Theory and Its Applications, Vol II. 2nd ed. New York, London, Sydney: John Wiley & Sons Inc, 1971
[12] Durrett R. Probability: Theory and Examples. 2nd ed. Belmont, CA: Duxbury Press, 1996