On random walks with jumps scaled by cumulative sums of random variables


Statistics & Probability Letters 35 (1997) 409-416

Konstantin Borovkov
Department of Statistics, University of Melbourne, Parkville 3052, Australia

Received 1 November 1996; received in revised form 1 January 1997

Abstract

For random walks in which jumps are scaled by cumulative sums of i.i.d. random variables, we establish the strong law of large numbers, CLT-type theorems, and two results related to the distributions of the first hitting times. © 1997 Elsevier Science B.V.

AMS classification: Primary 60J05; Secondary 60F05, 60G40, 60G50

Keywords: Gambling system; Strong law of large numbers; Functionals of Wiener processes; Boundary crossing

1. Introduction

Let $\{(X_k, I_k)\}_{k\ge 1}$ be a sequence of i.i.d. random vectors in $\mathbb{R}^2$ with finite expectations $\mu = EX_k$ and $a = EI_k$. Given initial conditions $Z_0$ and $A_1$, we define a sequence $\{Z_n\}_{n\ge 0}$ by putting

$$Z_n = Z_{n-1} + A_n X_n, \qquad A_{n+1} = A_n + I_n. \tag{1}$$

The process $\{Z_n\}$ is a time non-homogeneous random walk, in which successive jumps are scaled by cumulative sums of random variables. The main motivation for considering this process is that it is a natural and interesting generalization of a random walk closely related to the so-called Oscar's system in gambling. The latter can be described briefly as follows. The game consists of a sequence of plays and terminates when the gambler gains a net profit of one unit. In each play, the gambler wins (independently of all other plays) with a certain fixed probability $p$. The first bet is one unit. If a bet is lost, the next one is of the same size. If a bet is won, the next one will be one unit larger, unless winning that larger bet would give a net profit exceeding one unit; in the latter case, the bet size becomes just sufficient to produce a profit of one unit. The system decreases the average time required to achieve the aim of getting a unit net profit when compared to simple gambling. On the other hand, the bet size in the system increases in a moderate way (unlike the martingale system, which doubles the bet size after each play), which enables the gambler to effectively avoid house-limit-type restrictions. Now note that if we put $X_j = 1$ if the gambler wins in the $j$th play and $X_j = -1$ otherwise, and $I_j = \mathbf{1}(X_j = 1)$, then $\{(X_j, I_j)\}$ is a sequence of i.i.d. random vectors with

$$P(X_j = I_j = 1) = 1 - P(X_j = -1,\ I_j = 0) = p. \tag{2}$$
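As a quick illustration (ours, not part of the original paper), the recursion (1) in the special case (2) can be simulated in a few lines of pure Python; the function name `simulate_Z` and the parameter values are our choices. For $p = 0.6$ one has $a = p = 0.6$ and $\mu = 2p - 1 = 0.2$, so the printed ratio $n^{-2}Z_n$ should settle near $a\mu/2 = 0.06$, in line with Theorem 1 below.

```python
import random

def simulate_Z(n, p, Z0=0.0, A1=1.0, seed=0):
    """Simulate the walk (1) in the gambling special case (2):
    (X_k, I_k) = (1, 1) with probability p, and (-1, 0) otherwise."""
    rng = random.Random(seed)
    Z, A = Z0, A1
    for _ in range(n):
        if rng.random() < p:
            X, I = 1.0, 1.0
        else:
            X, I = -1.0, 0.0
        Z += A * X      # Z_n = Z_{n-1} + A_n X_n
        A += I          # A_{n+1} = A_n + I_n
    return Z

# For p = 0.6: a = E I_k = 0.6, mu = E X_k = 0.2, so n^{-2} Z_n
# should settle near a*mu/2 = 0.06.
n = 200_000
print(simulate_Z(n, 0.6) / n**2)
```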


The gambler's gain $Y_n$ after the $n$th game is given by

$$Y_n = Y_{n-1} + B_n X_n, \qquad B_{n+1} = \min\{B_n + I_n,\ 1 - Y_n\},$$

with the initial conditions $B_1 = 1$ and $Y_0 = 0$. This system was recently extensively studied by Ethier (1996a, b), the most difficult and interesting case being the "critical" one, when the success probability $p = 1/2$. It was shown in Borovkov (1996) that the distribution tail of the duration of the game under Oscar's system is dominated by that of the first hitting time of the level $x = 1$ by the process $Z_n$ defined by (1) in the special case (2). This observation, together with embedding techniques and boundary-crossing results for the Wiener process, was used there to derive a sharp upper bound for this tail. In the present note, we deal with general processes of the form (1). Processes of this type prove to be rather interesting objects from the theoretical point of view; they can also be of applied interest in financial mathematics and some other areas. We prove for such processes a strong law of large numbers (SLLN) type result and CLT-type results in the case of finite variances, and show that if $a\mu < 0$ then, under appropriate restrictions on the initial conditions, the whole trajectory of $\{Z_n\}_{n\ge 0}$ can stay below zero forever with positive probability. We also establish an upper bound of the form obtained in Borovkov (1996) for the distribution tails of the first hitting times of fixed levels.
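For readers who want to experiment, here is a minimal sketch (ours, not the authors') of Oscar's system itself, i.e. the $(Y_n, B_n)$ recursion above; the name `oscar_session` and the safety cap `max_plays` are our choices.

```python
import random

def oscar_session(p, rng, max_plays=100_000):
    """Play Oscar's system until a net profit of one unit is reached.
    Returns the number of plays (the duration of the game)."""
    Y, B = 0.0, 1.0                      # Y_0 = 0, B_1 = 1
    for n in range(1, max_plays + 1):
        X = 1.0 if rng.random() < p else -1.0
        I = 1.0 if X == 1.0 else 0.0
        Y += B * X                       # Y_n = Y_{n-1} + B_n X_n
        if Y >= 1.0:
            return n                     # net profit of one unit reached
        B = min(B + I, 1.0 - Y)          # B_{n+1} = min{B_n + I_n, 1 - Y_n}
    return max_plays                     # safety cap, effectively never hit for p >= 1/2

rng = random.Random(1)
durations = [oscar_session(0.6, rng) for _ in range(1000)]
print(sum(durations) / len(durations))   # average duration for p = 0.6
```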

2. Statement of results

We begin by observing that the process $\{Z_{\lfloor t\rfloor} - Z_0\}_{t\ge 0}$ can also be viewed as a stochastic integral (with varying upper limit $t$; $\lfloor t\rfloor$ denotes the integer part of $t$) of the random step function $A_{\lfloor t\rfloor}$ with respect to the step process $\{S_{\lfloor t\rfloor}\}_{t\ge 0}$, where $S_n = X_1 + \cdots + X_n$, $S_0 = 0$. Further, by the SLLN, the vector-valued process

$$\{n^{-1}(A_{\lfloor nt\rfloor}, S_{\lfloor nt\rfloor})\}_{0\le t\le 1} \to \{(a, \mu)t\}_{0\le t\le 1} \quad \text{a.s. as } n \to \infty, \tag{3}$$

and, when $\sigma^2 = \mathrm{Var}(X_k) < \infty$ and $s^2 = \mathrm{Var}(I_k) < \infty$, by the multivariate invariance principle

$$\{n^{-1/2}(A_{\lfloor nt\rfloor} - ant,\ S_{\lfloor nt\rfloor} - \mu nt)\}_{0\le t\le 1} \xrightarrow{d} \{(w_1(t), w_2(t))\}_{0\le t\le 1} \quad \text{as } n \to \infty, \tag{4}$$

where $(w_1, w_2)$ is a two-dimensional Wiener process with zero mean, $Ew_1^2(t) = s^2 t$, $Ew_2^2(t) = \sigma^2 t$, and correlation $\mathrm{Corr}(w_1(t), w_2(t)) = \mathrm{Corr}(I_k, X_k) = \gamma$.

Here and in what follows, $\xrightarrow{d}$ stands for convergence in distribution, and (3) and (4) relate random elements of the Skorokhod space $D[0,1]$ (note that convergence in (3) is actually in the uniform topology). This observation makes the assertions of the two following theorems rather natural.

Theorem 1. $n^{-2}Z_n \to \frac{1}{2}a\mu$ a.s. as $n \to \infty$.

Theorem 2. Let $\sigma^2 = \mathrm{Var}(X_k) < \infty$ and $s^2 = \mathrm{Var}(I_k) < \infty$. Then, for any initial conditions $Z_0$ and $A_1$, as $n \to \infty$:

(i) if $a = \mu = 0$, then

$$n^{-1}Z_n \xrightarrow{d} \sigma s\left((1 - \gamma^2)^{1/2}\int_0^1 W_1(t)\,dW_2(t) + \frac{\gamma}{2}\left(W_2^2(1) - 1\right)\right), \tag{5}$$

where $W_1$ and $W_2$ are independent standard Wiener processes;


(ii) otherwise

$$n^{-3/2}\left(Z_n - \tfrac{1}{2}a\mu n^2\right) \xrightarrow{d} N(0, v^2), \tag{6}$$

the normal distribution with zero mean and variance

$$v^2 = \tfrac{1}{3}\left(a^2\sigma^2 + \mu^2 s^2 + a\mu\sigma s\gamma\right). \tag{7}$$
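The CLT of Theorem 2(ii) is easy to probe numerically. The following sketch (ours; all parameter values are illustrative) uses independent $X_k = \pm 1$ with $P(X_k = 1) = 0.7$ and $I_k \sim \mathrm{Bernoulli}(1/2)$, so that $\mu = 0.4$, $a = 0.5$, $\sigma^2 = 0.84$, $s^2 = 0.25$, $\gamma = 0$, and (7) gives $v^2 = 0.25/3 \approx 0.083$.

```python
import random, statistics

def Z_final(n, rng):
    """One path of (1) with X_k = +-1, P(X_k = 1) = 0.7, and an independent
    I_k ~ Bernoulli(0.5) (so gamma = 0); here Z_0 = 0, A_1 = 1."""
    Z, A = 0.0, 1.0
    for _ in range(n):
        X = 1.0 if rng.random() < 0.7 else -1.0
        I = 1.0 if rng.random() < 0.5 else 0.0
        Z += A * X
        A += I
    return Z

# mu = 0.4, a = 0.5, sigma^2 = 0.84, s^2 = 0.25, gamma = 0, so (7) gives
# v^2 = (0.25*0.84 + 0.16*0.25)/3 = 0.25/3 ~ 0.083.
rng = random.Random(2)
n, a, mu = 1000, 0.5, 0.4
sample = [(Z_final(n, rng) - 0.5 * a * mu * n**2) / n**1.5 for _ in range(2000)]
print(statistics.mean(sample), statistics.variance(sample))
```

The empirical variance of the centred, $n^{-3/2}$-scaled values should be close to $0.083$; the mean is close to zero up to the $o(n^{3/2})$ centring terms.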

A simple special case of Theorem 2(ii), with $I_k = 1$ and $X_k$ being independent Bernoulli random variables with success probability $1/2$, is the well-known result on the asymptotic normality of the one-sample Wilcoxon (signed-rank) statistic (see, e.g., Lehmann (1975, p. 351)). Using Theorem 1, we can also prove the following result, extending the corresponding assertion for Oscar's system from Ethier (1996b). Denote by $\bar{Z} = \sup_{k\ge 0} Z_k$ the global maximum of the process $\{Z_k\}$.

Theorem 3. If $a\mu < 0$, $Z_0 = 0$, $P(\mu A_1 < 0) > 0$, and $A_1$ is independent of the sequence $\{(X_k, I_k)\}_{k\ge 1}$, then $P(\bar{Z} = 0) > 0$.

A similar assertion holds for $\inf_{k\ge 0} Z_k$ under symmetric conditions. Finally, we state the following result for the distribution tail of the stopping time $\theta_x = \min\{k \ge 1: Z_k > x\}$, where $x > 0$ is fixed.

Theorem 4. If $\mu = 0$, $a \ne 0$, and $E\exp(\lambda(|I_1| + |X_1|)) < \infty$ for some $\lambda > 0$, then, for any initial conditions $Z_0$ and $A_1$ and fixed $x > 0$, one has

$$P(\theta_x > n) = O\left(n^{-3/2}\log^{3/2} n\right) \quad \text{as } n \to \infty.$$

Note that by symmetry the same estimate holds for the first crossing time of a fixed negative level.
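A small simulation (ours) makes Theorem 4 concrete in the simplest case $I_k \equiv 1$ and $X_k = \pm 1$ a fair coin, so that $\mu = 0$, $a = 1$ and (with $Z_0 = 0$, $A_1 = 1$) $Z_k = \sum_{j\le k} jX_j$. The empirical survival probabilities $P(\theta_1 > n)$ printed below decay in $n$; verifying the exact $n^{-3/2}\log^{3/2} n$ rate would of course require far larger samples.

```python
import random

def first_passage(x, cap, rng):
    """theta_x = min{k >= 1: Z_k > x} for the walk Z_k = sum_{j<=k} j*X_j
    with fair +-1 steps X_j (I_k = 1, mu = 0, a = 1, Z_0 = 0, A_1 = 1).
    Returns cap + 1 if the level is not crossed by time cap."""
    Z = 0.0
    for k in range(1, cap + 1):
        Z += k * (1.0 if rng.random() < 0.5 else -1.0)
        if Z > x:
            return k
    return cap + 1

rng = random.Random(3)
cap = 2000
thetas = [first_passage(1.0, cap, rng) for _ in range(2000)]
for n in (10, 100, 1000):
    print(n, sum(t > n for t in thetas) / len(thetas))  # empirical P(theta_1 > n)
```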

3. Proofs

Proof of Theorem 1. By the SLLN, we have

$$A_k = k(a + \varepsilon_k), \quad \text{where } \varepsilon_k \to 0 \ \text{a.s. as } k \to \infty. \tag{8}$$

One has

$$Z_n = Z_0 + \sum_{k=1}^n A_k X_k = Z_0 + a\sum_{k=1}^n k X_k + \sum_{k=1}^n \varepsilon_k k X_k. \tag{9}$$

For the last term here, by virtue of (8),

$$\left|\sum_{k=1}^n \varepsilon_k k X_k\right| \le n \sum_{k=1}^n |\varepsilon_k|\,|X_k| = o\left(n\sum_{k=1}^n |X_k|\right) = o(n^2) \quad \text{a.s.}$$

since, by the SLLN, $\sum_{k=1}^n |X_k| \sim nE|X_1|$. Therefore the only asymptotically significant contribution to $Z_n$ can come from the second sum on the right-hand side of (9). Moreover, this also proves the assertion of Theorem 1 when $a = 0$. Since by the SLLN we have $S_k = k(\mu + \varepsilon_k')$ with $\varepsilon_k' \to 0$ a.s. as $k \to \infty$, and clearly

$$Z_n = Z_0 + A_1 S_n + I_1(S_n - S_1) + I_2(S_n - S_2) + \cdots + I_{n-1}(S_n - S_{n-1}),$$

it is not hard to see that a similar argument provides a proof for the case $\mu = 0$ as well.


It remains to notice that

$$\frac{1}{n^2}\sum_{k=1}^n k X_k = \frac{1}{n^2}\sum_{k=1}^n (S_n - S_{k-1}) = \frac{S_n}{n} - \frac{1}{n^2}\sum_{k=1}^n S_{k-1},$$

where we have $S_n/n \to \mu$ and

$$\frac{1}{n^2}\sum_{k=1}^n S_{k-1} - \frac{\mu}{2}\left(1 - \frac{1}{n}\right) = \frac{1}{n}\sum_{k=1}^n \left(\frac{S_{k-1}}{k} - \mu\,\frac{k-1}{k}\right)\frac{k}{n} \to 0$$

as $n \to \infty$ by the SLLN, which yields that $n^{-2}\sum_{k=1}^n k X_k \to \mu/2$ and hence completes, in view of (9), the proof of Theorem 1. □

Proof of Theorem 2. We have already noticed that

$$Z_n - Z_0 = \int_0^1 A_{\lfloor nt\rfloor}\,d_t S_{\lfloor nt\rfloor}. \tag{10}$$

(i) It follows from our observation (4) and Theorem 1.7 of Strasser (1986) that in this case

$$n^{-1}Z_n \xrightarrow{d} \int_0^1 w_1(t)\,dw_2(t).$$

To prove (5), it remains to notice that one has the following representation for the participating Wiener process:

$$(w_1(t), w_2(t)) \stackrel{d}{=} \left(s(1 - \gamma^2)^{1/2}\,W_1(t) + s\gamma\,W_2(t),\ \sigma\,W_2(t)\right). \tag{11}$$
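The representation (11) at $t = 1$ amounts to the standard construction of a correlated Gaussian pair from two independent ones, and can be sanity-checked by simulation (our sketch; the values of $s$, $\sigma$, $\gamma$ are arbitrary illustrations):

```python
import random, statistics, math

# Check of the representation (11) at t = 1, with illustrative parameter
# values s = 2.0, sigma = 1.5, gamma = 0.6.
s, sigma, gamma = 2.0, 1.5, 0.6
rng = random.Random(4)
w1s, w2s = [], []
for _ in range(200_000):
    W1, W2 = rng.gauss(0, 1), rng.gauss(0, 1)   # independent standard normals
    w1s.append(s * math.sqrt(1 - gamma**2) * W1 + s * gamma * W2)
    w2s.append(sigma * W2)

var1 = statistics.variance(w1s)                  # should be close to s^2 = 4
var2 = statistics.variance(w2s)                  # should be close to sigma^2 = 2.25
cov = sum(u * v for u, v in zip(w1s, w2s)) / len(w1s)
print(var1, var2, cov / math.sqrt(var1 * var2))  # correlation should be close to gamma
```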

(ii) In this case, we have to rewrite (10) as

$$Z_n - Z_0 = \int_0^1 (A_{\lfloor nt\rfloor} - a\lfloor nt\rfloor)\,d_t(S_{\lfloor nt\rfloor} - \mu\lfloor nt\rfloor) + \int_0^1 a\lfloor nt\rfloor\,d_t(S_{\lfloor nt\rfloor} - \mu\lfloor nt\rfloor) + \int_0^1 (A_{\lfloor nt\rfloor} - a\lfloor nt\rfloor)\,d_t(\mu\lfloor nt\rfloor) + \int_0^1 a\lfloor nt\rfloor\,d_t(\mu\lfloor nt\rfloor).$$

The last term on the right-hand side equals

$$\sum_{k=1}^n a\mu k = \tfrac{1}{2}a\mu n(n+1)$$

and is basically removed by the centering in (6), while (i) implies that the first one is $o(n^{3/2})$ in probability. It follows from (3), (4), (11), and Strasser's result that the left-hand side of (6) converges in distribution to

$$a\sigma\int_0^1 t\,dW_2(t) + \mu s\left((1 - \gamma^2)^{1/2}\int_0^1 W_1(t)\,dt + \gamma\int_0^1 W_2(t)\,dt\right).$$

It is not hard to see that this is a normal random variable with zero mean and variance

$$\mu^2 s^2(1 - \gamma^2)\,\mathrm{Var}\left(\int_0^1 W_1(t)\,dt\right) + a^2\sigma^2\,\mathrm{Var}\left(\int_0^1 t\,dW_2(t)\right) + \mu^2 s^2\gamma^2\,\mathrm{Var}\left(\int_0^1 W_2(t)\,dt\right) + 2a\mu s\sigma\gamma\,E\left(\int_0^1 t\,dW_2(t) \times \int_0^1 W_2(t)\,dt\right).$$


Now note that, for $j = 1, 2$,

$$\mathrm{Var}\left(\int_0^1 W_j(t)\,dt\right) = \int_0^1\int_0^1 \mathrm{Cov}(W_j(s), W_j(t))\,ds\,dt = \int_0^1\int_0^1 \min(s, t)\,ds\,dt = \frac{1}{3},$$

$$\mathrm{Var}\left(\int_0^1 t\,dW_2(t)\right) = \int_0^1 t^2\,dt = \frac{1}{3}.$$

Further, since by Itô's formula we have $d(tW_2(t)) = t\,dW_2(t) + W_2(t)\,dt$, the last two formulas for the variances yield

$$1 = E(W_2^2(1)) = E\left(\int_0^1 t\,dW_2(t) + \int_0^1 W_2(t)\,dt\right)^2 = \frac{1}{3} + 2E\left(\int_0^1 t\,dW_2(t) \times \int_0^1 W_2(t)\,dt\right) + \frac{1}{3},$$

so that the mixed moment of interest is just $\frac{1}{6}$, which completes the proof of (7). □
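The three moments used in this computation can be checked by a crude Euler-scheme Monte Carlo (our sketch; the step count and sample size are arbitrary):

```python
import random, statistics

# Euler-scheme Monte Carlo check of the moments used above:
# Var(int_0^1 W dt) = 1/3, Var(int_0^1 t dW) = 1/3, E(int t dW * int W dt) = 1/6.
rng = random.Random(5)
N, reps = 200, 10_000
dt = 1.0 / N
A, B = [], []  # samples of int_0^1 W(t) dt and int_0^1 t dW(t)
for _ in range(reps):
    W, ia, ib = 0.0, 0.0, 0.0
    for k in range(N):
        dW = rng.gauss(0.0, dt ** 0.5)
        ia += W * dt          # left-point rule for int W dt
        ib += (k * dt) * dW   # Ito (left-point) rule for int t dW
        W += dW
    A.append(ia)
    B.append(ib)

mixed = sum(u * v for u, v in zip(A, B)) / reps
print(statistics.variance(A), statistics.variance(B), mixed)
```

All three printed values should be within a few percent of $1/3$, $1/3$, and $1/6$ respectively.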

Proof of Theorem 3. Without loss of generality, we may assume that $\mu < 0$, $a > 0$ (otherwise we could just put $(X_k, I_k) := -(X_k, I_k)$), and $A_1 = 1$. It follows from Theorem 1 and the SLLN for $\{S_n\}$ that both $\bar{Z}$ and $\bar{S} = \sup_{k\ge 0} S_k$ are proper random variables, which entails that there exists an $x_0 > 0$ such that

$$P(\bar{Z} > x_0) \le 1/3, \qquad P(\bar{S} > x_0) \le 1/3. \tag{12}$$

Since $\mu < 0$, $P(X_k < 0) > 0$. Consider now the three following (not mutually exclusive) special cases. Case 1: $P(X_k < 0,\ I_k > 0) > 0$. Case 2: $P(X_k < 0,\ I_k = 0) > 0$. Case 3: $P(X_k < 0,\ I_k < 0) > 0$. It clearly suffices to prove Theorem 3 in each of these cases.

Case 1: There exist $q, s, \alpha > 0$ such that one has $r = P(B_k) > 0$ for the events

$$B_k = \{X_k \in [-q - \alpha, -q],\ I_k \in [s, s + \alpha]\}.$$

These events are independent for different $k$, and hence the probability of the event $B_n^* = \bigcap_{k\le n} B_k$ is $P(B_n^*) = r^n > 0$. On this event, one clearly has $\max_{k\le n} Z_k = 0$ and

$$1 + ks \le A_{k+1} = 1 + I_1 + \cdots + I_k \le 1 + k(s + \alpha), \quad k = 1, \ldots, n,$$
$$Z_n \le -q - (1 + s)q - \cdots - (1 + (n-1)s)q = -qn\left(1 + \tfrac{1}{2}s(n-1)\right). \tag{13}$$

The probability of interest satisfies

$$P(\bar{Z} = 0) \ge P(\bar{Z} = 0 \mid B_n^*)\,P(B_n^*) = \left(1 - P(\bar{Z} > 0 \mid B_n^*)\right) r^n, \tag{14}$$

and it remains to show that, for some $n$, one has $P(\bar{Z} > 0 \mid B_n^*) < 1$. Now take $n$ so large that $-\tfrac{1}{2}qsn(n-1) + x_0 + n(s + \alpha)x_0 < 0$, which ensures by (13) that

$$Z_n + A_{n+1}x_0 < 0 \quad \text{on } B_n^*. \tag{15}$$

Further, for $j > 0$,

$$Z_{n+j} - Z_n = A_{n+1}X_{n+1} + \cdots + A_{n+j}X_{n+j} = (A_{n+1} - 1)S_j^* + Z_j^*, \tag{16}$$

where $S_j^* = S_{n+j} - S_n$, $j \ge 0$, is a copy of $\{S_j\}$, and

$$Z_j^* = X_{n+1} + (1 + I_{n+1})X_{n+2} + \cdots + (1 + I_{n+1} + \cdots + I_{n+j-1})X_{n+j}, \quad j \ge 0,$$

is a copy of $\{Z_j,\ j \ge 0\}$, both of them being independent of $\{Z_j,\ 1 \le j \le n\}$. Since $\max_{k\le n} Z_k = 0$ on $B_n^*$, we have by (15)

$$B_n^* \cap \{\bar{Z} > 0\} \subseteq B_n^* \cap \left\{\sup_j\,(Z_{n+j} - Z_n) > (A_{n+1} - 1)x_0 + x_0\right\} \subseteq B_n^* \cap \left(\{(A_{n+1} - 1)\bar{S}^* > (A_{n+1} - 1)x_0\} \cup \{\bar{Z}^* > x_0\}\right) \subseteq B_n^* \cap \left(\{\bar{S}^* > x_0\} \cup \{\bar{Z}^* > x_0\}\right), \tag{17}$$
where $\bar{Z}^* = \sup_{j\ge 0} Z_j^*$ and $\bar{S}^* = \sup_{j\ge 0} S_j^*$ (to get the last relation in (17), we made use of the fact that $A_{n+1} - 1 \ge ns > 0$ on $B_n^*$). Now note that in the last line of (17) we have the intersection of two independent events, which implies that $P(\bar{Z} > 0 \mid B_n^*) \le P(\bar{S}^* > x_0) + P(\bar{Z}^* > x_0) \le 2/3$ by virtue of (12). In view of (14), this proves Theorem 3 in Case 1.

Case 2: For some $q > 0$ one has, for the event $D_k = \{X_k < -q,\ I_k = 0\}$, that $r = P(D_k) > 0$. It is evident that, on the event $D_n^* = \bigcap_{k=1}^n D_k$ with $P(D_n^*) = r^n$,

$$\max_{k\le n} Z_k = 0, \qquad Z_n \le -qn, \qquad A_{n+1} = 1.$$

From (16) we have $Z_{n+j} - Z_n = Z_j^*$, and taking $n > x_0/q$ yields

$$P(\bar{Z} = 0) \ge P(\bar{Z} = 0 \mid D_n^*)\,P(D_n^*) \ge \left(1 - P(\bar{Z}^* > qn)\right) r^n \ge \left(1 - P(\bar{Z}^* > x_0)\right) r^n \ge \tfrac{2}{3}\,r^n > 0.$$

Case 3: Since we have already proved the theorem in Case 1, we may assume now that $P(X_k < 0,\ I_k > 0) = 0$. Then $a > 0$ implies that $P(X_k \ge 0,\ I_k > 0) > 0$. We will consider two special subcases.

Case 3(A): $P(X_k = 0,\ I_k > 0) > 0$. Here, for some $q, s, s' > 0$ and any $\alpha > 0$, the events

$$V_k^- = \{X_k \in [-q - \alpha, -q],\ I_k \in [-s - \alpha, -s]\}, \qquad V_k^+ = \{X_k = 0,\ I_k \in [s', s' + \alpha]\}$$

both have positive probabilities. It is not hard to see that $\alpha > 0$ can be chosen so small that there exist $1 = M_0 < M_1 < M_2 < \cdots$ such that, if we put

$$V_k = \begin{cases} V_k^- & \text{if } k = M_j \text{ for some } j = 0, 1, 2, \ldots, \\ V_k^+ & \text{otherwise,} \end{cases}$$


then, on the event $V_n^* = \bigcap_{k=1}^n V_k$ with $n = M_{j_0}$ and $j_0 = \lfloor 2(1 + s')x_0/q\rfloor + 1$, the walk $\{Z_k\}$ on the time interval $[0, n]$ will have negative jumps at the times $k = M_j$, $j = 0, 1, 2, \ldots$, and no jumps at the other times, while $\{A_k\}$ will be increasing between successive times $M_j$; moreover, $P(V_n^*) \ge r^n$ with $r = \min\{P(V_k^-), P(V_k^+)\} > 0$, so that $\max_{k\le n} Z_k = 0$, and the argument of Case 1 applies.

Case 3(B): $P(X_k > 0,\ I_k > 0) > 0$. This means that, for some $q, q', s, s', \alpha > 0$, one has, for the events

$$C_k^- = \{X_k \in [-q - \alpha, -q],\ I_k \in [-s - \alpha, -s]\}, \qquad C_k^+ = \{X_k \in [q', q' + \alpha],\ I_k \in [s', s' + \alpha]\},$$

that $r = \min\{P(C_k^-), P(C_k^+)\} > 0$. Denote by $F_n$ the event that, for all $k = 1, \ldots, n$, if $A_k \ge 0$ then $C_k^-$ holds true, and otherwise $C_k^+$ does:

$$F_n := \bigcap_{k=1}^n \left(\left(\{A_k \ge 0\} \cap C_k^-\right) \cup \left(\{A_k < 0\} \cap C_k^+\right)\right).$$

It is easy to see that

$$P(F_n) \ge \left(P(F_{n-1} \cap \{A_n \ge 0\}) + P(F_{n-1} \cap \{A_n < 0\})\right) r = P(F_{n-1})\,r \ge \cdots \ge P(F_1)\,r^{n-1} \ge r^n > 0,$$

and on $F_n$

$$-s - \alpha \le A_k \le s' + \alpha + 1, \quad k \le n, \qquad 0 = Z_0 \ge Z_1 \ge \cdots \ge Z_n. \tag{18}$$

Moreover, since on $F_n$ for at least half of the steps $k \le n$ we have $|A_k| \ge \frac{1}{2}\min\{s, s'\}$, it follows that for sufficiently large $n$ one has on this event the inequality $Z_n \le -\beta n$, where $\beta = \frac{1}{2}\min\{s, s'\}\min\{q, q'\} > 0$. Now choose $m = \lfloor (s + \alpha + 1)/s'\rfloor + 1$ and put $n_0 = n - m$. On the event

$$F_n^* = F_{n_0} \cap \left(\bigcap_{k = n_0 + 1}^{n} C_k^+\right) \qquad \text{with } P(F_n^*) \ge r^n > 0, \tag{19}$$

one has from (18), by the choice of $m$, that

$$1 \le A_{n+1} \le 1 + (m + 1)(s' + \alpha). \tag{20}$$

On the other hand, it follows from (18) that on $F_n^*$

$$Z_n \le -\beta n_0 + m^2(q' + \alpha)(s' + \alpha + 1).$$

Taking now $n$ so large that

$$\beta n_0 - m^2(q' + \alpha)(s' + \alpha + 1) > \left(2 + (m + 1)(s' + \alpha)\right)x_0, \tag{21}$$


we have from (20) and (21), similarly to (17), that

$$F_n^* \cap \{\bar{Z} > 0\} \subseteq F_n^* \cap \left\{\sup_j\,(Z_{n+j} - Z_n) > \beta n_0 - m^2(q' + \alpha)(s' + \alpha + 1)\right\} \subseteq F_n^* \cap \left\{(A_{n+1} - 1)\bar{S}^* + \bar{Z}^* > \left(1 + (m + 1)(s' + \alpha)\right)x_0 + x_0\right\}$$
$$\subseteq F_n^* \cap \left\{(A_{n+1} - 1)\bar{S}^* + \bar{Z}^* > (A_{n+1} - 1)x_0 + x_0\right\} \subseteq F_n^* \cap \left(\{\bar{S}^* > x_0\} \cup \{\bar{Z}^* > x_0\}\right).$$

Since we have here the intersection of two independent events, the proof is completed in the same way as in Case 1 (see (17) and the argument thereafter). Theorem 3 is proved. □

Proof of Theorem 4. The argument here follows exactly the same scheme as that of the proof of the main theorem in Borovkov (1996), with certain obvious changes in what concerns the embedding (we will have to embed the martingale difference sequence $\{X_n A_n\}$ into a Wiener process; for more details on such embeddings of random variables see, e.g., Skorokhod (1965)) and the use of Lemma 3 of that paper.

Acknowledgements

The author is grateful to the referee, whose useful comments contributed substantially to the improvement of the paper.

References

Borovkov, K.A., 1996. A bound for the distribution of a stopping time for a stochastic system. Siber. Math. J. 37, 783-789 (in Russian).
Ethier, S.N., 1996a. Analysis of a gambling system. In: Eadington, W.R., Cornelius, J.A. (Eds.), Finding the Edge: Mathematical and Quantitative Aspects of Gambling. Proc. 9th Internat. Conf. on Gambling and Risk Taking, vol. 4. Univ. of Nevada Press, Reno (to appear).
Ethier, S.N., 1996b. A gambling system and a Markov chain. Ann. Appl. Probab. 6, 1248-1259.
Lehmann, E., 1975. Nonparametrics: Statistical Methods Based on Ranks. Holden-Day, San Francisco.
Skorokhod, A.V., 1965. Studies in the Theory of Random Processes. Addison-Wesley, Reading, MA.
Strasser, H., 1986. Martingale difference arrays and stochastic integrals. Probab. Theory Related Fields 72, 83-98.