Geometric random variables: Descents following maxima


Accepted Manuscript
PII: S0167-7152(15)30034-1
DOI: http://dx.doi.org/10.1016/j.spl.2017.01.017
Reference: STAPRO 7830
To appear in: Statistics and Probability Letters
Received date: 13 August 2015. Revised date: 5 January 2017. Accepted date: 6 January 2017.
Please cite this article as: Archibald, M., Blecher, A., Brennan, C., Knopfmacher, A., Prodinger, H., Geometric random variables: Descents following maxima. Statistics and Probability Letters (2017), http://dx.doi.org/10.1016/j.spl.2017.01.017

Margaret Archibald a,1,∗, Aubrey Blecher a, Charlotte Brennan a,1, Arnold Knopfmacher a,1, Helmut Prodinger b,1

a The John Knopfmacher Centre for Applicable Analysis and Number Theory, School of Mathematics, University of the Witwatersrand, Private Bag 3, Wits 2050, Johannesburg, South Africa
b Department of Mathematical Sciences, Mathematics Division, Stellenbosch University, Private Bag X1, 7602 Matieland, South Africa

Abstract. We study descents from maximal elements in samples of geometric random variables and consider two different averages for this statistic. We then compare the asymptotics of these averages as the number of parts in the samples tends to infinity, and also find an asymptotic expression for the mean of the greatest descent after a maximum value in such a sample.

Keywords: Geometric random variable; generating function; Rice’s method; Mellin transform; asymptotic approximation.

2010 MSC: Primary: 05A15, 05A16; Secondary: 60C05

1. Introduction

A sample of geometric random variables of length $n$ is a sequence of independent and identically distributed geometric random variables $(\Gamma_{1},\Gamma_{2},\ldots,\Gamma_{n})$ where $P(\Gamma_{j}=i)=pq^{i-1}$ for $1\le j\le n$, with $p+q=1$, $p,q\ge 0$, and $i\ge 1$.

A maximum value in a sample of geometric random variables is a value which is greater than or equal to any other in the sample.

∗ Corresponding author. Email address: [email protected] (Margaret Archibald)
1 This material is based upon work supported by the National Research Foundation under grant numbers 89147, 86329, 81021 and 2053748 respectively.

Preprint submitted to Statistics and Probability Letters, January 4, 2017

Some relevant references on maxima in geometric random variables are [2, 4]. By “descent” we mean the difference in size between a value and its right-hand neighbour. If a maximum $h$ occurs at the rightmost end, then we say it is followed by a descent of size $h$.

Our interest in the descents after the maxima was aroused by an application of geometric random variables to skip lists (see [7]). The search cost in skip lists depends on the vertical and horizontal costs (or comparisons). The vertical search cost in a skip list is just the height (the number of linked lists used to make searching for data more efficient), which corresponds to the maximum value in a sample of geometric random variables. The descents after the maxima in geometric samples are relevant to the horizontal search cost in skip lists. This cost is affected by the relative size of the geometric random variable immediately after the maximum and the value of the element being searched for.

A recent work of Yakubovich (see [14]) shows that the descent after the last maximum is asymptotically negligible and can therefore be ignored. However, for the sake of consistency with the authors’ previous paper (see [1]), we retain the convention that a maximum situated at the right-most end is followed by a descent of size $h$.

We are interested in the average size of the descent after any maximum in a sample of geometric random variables. We also consider the largest descent after any maximum in a sample of geometric random variables, which is a more difficult problem analytically.

Previously, in [1], the authors studied the easier problems of the descent after the first and last maxima only. Yakubovich (in [14]) recently continued this study for more general distributions. Here, we focus on the geometric distribution but consider the size of the descent after any maximum. The questions in this paper are much more general and require mathematical techniques additional to those in [1]. The related question of separation of the maxima was recently studied in [3].

Example. If $\Gamma=\Gamma_{1},\ldots,\Gamma_{n}=241142143124$ is a sample of geometric random variables of length $n=12$, then there are 4 maximal elements, in positions 2, 5, 8 and 12. The descents after these maxima are of size 3, 2, 1 and 4 respectively.
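The example can be replayed with a short script (an illustrative sketch, not part of the paper; the function name is our own):

```python
def descents_after_maxima(sample):
    """Return the descents after each maximal element of `sample`.

    Per the convention above, a maximum at the right-most end is
    followed by a descent of size h (the maximum value itself).
    """
    h = max(sample)
    descents = []
    for i, value in enumerate(sample):
        if value == h:
            following = sample[i + 1] if i + 1 < len(sample) else 0
            descents.append(value - following)
    return descents

# The sample 241142143124 of length n = 12 from the example:
print(descents_after_maxima([2, 4, 1, 1, 4, 2, 1, 4, 3, 1, 2, 4]))  # prints [3, 2, 1, 4]
```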

In this paper, we concentrate on asymptotic mean values for our statistics. Results on higher moments and asymptotic distributions related to maxima in samples of geometric random variables can be found in [11] and [12].

2. Notation

In the following, $\emptyset$ stands for “possibly empty”, $B$ stands for “block”, and $E$ stands for “element”. Thus $E_{h}$ denotes an element of size $h$, and $B^{\emptyset}_{<h}$ denotes a possibly empty block of elements, each of size less than $h$. We use $(R)^{*}$ to represent a possibly empty sequence of repeats of $R$, and $(R)^{+}$ to represent a non-empty such sequence.

3. Average descent after any maximum

Let $d(\Gamma)$ be the sum of the descents after all maxima in a sample of geometric random variables $\Gamma$, let $m(\Gamma)$ be the number of maxima in that sample, and let $|\Gamma|$ be the number of parts in the sample $\Gamma$.

The mean value is given by $\sum_{|\Gamma|=n}\frac{d(\Gamma)}{m(\Gamma)}P(\Gamma)$, where $P(\Gamma)$ is the probability of that sample occurring. The standard method of finding the mean value of a quantity is to differentiate the relevant generating function. In this case, we need to use both differentiation and integration in order to find $\sum_{|\Gamma|=n}\frac{d(\Gamma)}{m(\Gamma)}P(\Gamma)$.


Theorem 1. The average descent after the maxima in a sample of $n$ geometric random variables is, for $n\ge 2$,
\[
\sum_{k=1}^{n-1}\binom{n-1}{k}(-1)^{k+1}\,\frac{pq^{k}}{(1-q^{k})(1-q^{k+1})}
+\frac{1}{n}\sum_{k=1}^{n}\binom{n}{k}(-1)^{k+1}\,\frac{1}{1-q^{k}}.
\]

Proof. We split the samples of geometric random variables into two disjoint cases. Case A: The sample ends in a maximum, which can be expressed symbolically as $(B^{\emptyset}_{<h}E_{h})^{+}$. Case B: The sample does not end in a maximum, $(B^{\emptyset}_{<h}E_{h})^{+}B_{<h}$.

Let $z$ mark the number of geometric random variables, $u$ the sum of descents after all maxima and $v$ the number of maxima. Now we construct the following generating function: $F(z,u,v):=\sum_{\Gamma}z^{|\Gamma|}u^{d(\Gamma)}v^{m(\Gamma)}P(\Gamma)$. Note that (the following only holds for $n\ge 2$)
\[
\sum_{|\Gamma|=n}\frac{d(\Gamma)}{m(\Gamma)}\,P(\Gamma)=[z^{n}]\int_{0}^{1}\frac{\partial F(z,u,v)}{\partial u}\bigg|_{u=1}\frac{dv}{v}.
\]
We define $a_{h}(z):=1-z(1-q^{h})$ and $b_{h}(u):=\sum_{j=1}^{h-1}u^{h-j}pq^{j-1}$, and have for Case A:
\[
\frac{zvu^{h}pq^{h-1}}{a_{h-1}(z)\bigl(1-zvpq^{h-1}\bigr)}\cdot\frac{1}{1-\dfrac{b_{h}(u)}{a_{h-1}(z)}\cdot\dfrac{z^{2}vpq^{h-1}}{1-zvpq^{h-1}}}
=\frac{zvu^{h}pq^{h-1}(u-q)}{a_{h-1}(z)(1-zvpq^{h-1})(u-q)-z^{2}vup^{2}q^{h-1}(u^{h-1}-q^{h-1})},
\tag{3.1}
\]
and Case B:
\[
\frac{1}{a_{h-1}(z)}\cdot\frac{\dfrac{b_{h}(u)}{a_{h-1}(z)}\cdot\dfrac{z^{2}vpq^{h-1}}{1-zvpq^{h-1}}}{1-\dfrac{b_{h}(u)}{a_{h-1}(z)}\cdot\dfrac{z^{2}vpq^{h-1}}{1-zvpq^{h-1}}}
=\frac{1}{a_{h-1}(z)}\cdot\frac{z^{2}vup^{2}q^{h-1}(u^{h-1}-q^{h-1})}{a_{h-1}(z)(1-zvpq^{h-1})(u-q)-z^{2}vup^{2}q^{h-1}(u^{h-1}-q^{h-1})}.
\]
Summing these two cases over $h$ gives the overall generating function:
\[
F(z,u,v):=\sum_{h\ge 1}\frac{zvpq^{h-1}(u-q)}{a_{h-1}(z)(1-zvpq^{h-1})(u-q)-z^{2}vup^{2}q^{h-1}(u^{h-1}-q^{h-1})}\left(u^{h}+\frac{zup\,(u^{h-1}-q^{h-1})}{a_{h-1}(z)(u-q)}\right).
\tag{3.2}
\]


Now, we divide the sum of the descents after the maxima by the number of maxima for each sample $\Gamma$. We integrate from 0 to 1 with respect to $v$ and then differentiate with respect to $u$ and put $u=1$, leaving a function of $z$, see (3.1). This is best done on a computer algebra system, and leads to the following generating function
\[
\sum_{h\ge 1}\biggl(h\bigl(\underbrace{\log(a_{h-1}(z))}_{\alpha}-\underbrace{\log(a_{h}(z))}_{\beta}\bigr)+\underbrace{\frac{z^{2}q^{h-1}(hp-1+q^{h})}{a_{h-1}(z)\,a_{h}(z)}}_{\gamma}\biggr).
\tag{3.3}
\]

To extract the coefficients of $z^{n}$ we consider these three terms separately:
\[
[z^{n}]\alpha=-\frac{(1-q^{h-1})^{n}}{n};\qquad[z^{n}]\beta=-\frac{(1-q^{h})^{n}}{n};
\]
and using partial fractions
\[
[z^{n}]\gamma=\frac{hp-1+q^{h}}{p(1-q^{h})}\,(1-q^{h})^{n}-\frac{hp-1+q^{h}}{p(1-q^{h-1})}\,(1-q^{h-1})^{n}.
\]

To complete the proof of Theorem 1, the coefficient of $z^{n}$ is
\[
\sum_{h\ge 1}\biggl(\frac{h}{n}\Bigl((1-q^{h})^{n}-(1-q^{h-1})^{n}\Bigr)+\frac{hp-1+q^{h}}{p}\Bigl((1-q^{h})^{n-1}-(1-q^{h-1})^{n-1}\Bigr)\biggr)
\]
\[
=\frac{1}{n}\sum_{k=1}^{n}\binom{n}{k}(-1)^{k}\,\frac{1}{q^{k}-1}
+\sum_{k=1}^{n-1}\binom{n-1}{k}(-1)^{k}\,\frac{q^{k}(q-1)}{(1-q^{k})(1-q^{k+1})},
\]
which equals the expression in the statement of the theorem. $\square$
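The closed form of Theorem 1 can be sanity-checked against a brute-force computation of $\sum_{|\Gamma|=n}\frac{d(\Gamma)}{m(\Gamma)}P(\Gamma)$ over samples with entries truncated at a large value (our own verification sketch, not part of the paper; the truncation bound `top` is an assumption chosen so the neglected probability mass is negligible):

```python
from itertools import product
from math import comb

def theorem1_average(n, q):
    """Evaluate the closed form of Theorem 1 (for n >= 2)."""
    p = 1 - q
    s1 = sum(comb(n - 1, k) * (-1) ** (k + 1)
             * p * q**k / ((1 - q**k) * (1 - q**(k + 1)))
             for k in range(1, n))
    s2 = sum(comb(n, k) * (-1) ** (k + 1) / (1 - q**k)
             for k in range(1, n + 1)) / n
    return s1 + s2

def brute_force_average(n, q, top=40):
    """Sum (d(G)/m(G)) * P(G) over all samples with entries <= top."""
    p = 1 - q
    total = 0.0
    for sample in product(range(1, top + 1), repeat=n):
        h = max(sample)
        descents = [v - (sample[i + 1] if i + 1 < n else 0)
                    for i, v in enumerate(sample) if v == h]
        prob = 1.0
        for v in sample:
            prob *= p * q ** (v - 1)
        total += sum(descents) / len(descents) * prob
    return total

print(theorem1_average(3, 0.5), brute_force_average(3, 0.5))  # nearly equal
```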

4. Asymptotic average descent after any maximum

We find an asymptotic expression for the average descent after any maximum. We use the notation $Q:=1/q$; $L:=\log Q$; $\chi_{k}:=\frac{2k\pi i}{L}$, where $k\in\mathbb{Z}$ and $k\ne 0$. Also, $\gamma:=0.5772\ldots$ denotes Euler’s constant and $\Gamma$ is the Gamma function.

Theorem 2. The average descent after any maximum in a sample of $n$ geometric random variables is
\[
\log_{Q}n+\frac{\gamma}{L}+\frac{1}{2}-\frac{1}{p}+\delta(\log_{Q}n)+O\Bigl(\frac{\log n}{n}\Bigr)\quad\text{as }n\to\infty,
\]
where $\delta(x)=\frac{1}{L}\sum_{k\ne 0}\Gamma(-\chi_{k})e^{2k\pi ix}$ and $\chi_{k}=\frac{2k\pi i}{L}$ for $k\in\mathbb{Z}\setminus\{0\}$.

Proof. We consider the first sum $\sum_{k=1}^{n-1}\binom{n-1}{k}(-1)^{k+1}\frac{pq^{k}}{(1-q^{k})(1-q^{k+1})}$ in Theorem 1. We use Rice’s method (see [5]) to approximate the alternating sum by computing the residues of the product of
\[
\frac{pq^{z}}{(1-q^{z})(1-q^{z+1})}\quad\text{and}\quad\frac{\Gamma(n)\Gamma(-z)}{\Gamma(n-z)}.
\]

The resulting main term from the residue at $z=0$ is $\log_{Q}n+\frac{\gamma}{L}+\frac{1}{2}-\frac{1}{p}$ as $n\to\infty$, with a small fluctuating term arising from the residues at $\chi_{k}$ for $k\in\mathbb{Z}\setminus\{0\}$, i.e., $\delta(\log_{Q}n)=\frac{1}{L}\sum_{k\ne 0}\Gamma(-\chi_{k})e^{2k\pi i\log_{Q}n}$.
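The main term can be checked numerically: the exact mean is evaluated stably from the $h$-sum in the proof of Theorem 1 and compared with $\log_{Q}n+\gamma/L+\frac{1}{2}-\frac{1}{p}$ (our own sketch; the cut-off `hmax` and the hard-coded decimal value of Euler's constant are assumptions):

```python
from math import log

EULER_GAMMA = 0.5772156649015329  # Euler's constant

def exact_average(n, q, hmax=200):
    """Exact mean via the h-sum in the proof of Theorem 1."""
    p = 1 - q
    total = 0.0
    for h in range(1, hmax + 1):
        a, b = 1 - q**h, 1 - q**(h - 1)
        total += h / n * (a**n - b**n)
        total += (h * p - 1 + q**h) / p * (a**(n - 1) - b**(n - 1))
    return total

def asymptotic_average(n, q):
    """log_Q n + gamma/L + 1/2 - 1/p, fluctuations ignored."""
    p, L = 1 - q, log(1 / q)
    return log(n) / L + EULER_GAMMA / L + 0.5 - 1 / p

n, q = 10**5, 0.5
print(exact_average(n, q) - asymptotic_average(n, q))  # small: O(log n / n) plus tiny fluctuations
```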

Now, it is sufficient to note that the second sum in the statement of Theorem 1 is of order $O(\frac{\log n}{n})$ by Rice’s method. $\square$

5. Pseudo-average descent

We now consider a more intuitive way of calculating an average descent after any maximum. In Section 3, we calculated the average, in the standard way, as $\sum_{|\Gamma|=n}\frac{d(\Gamma)}{m(\Gamma)}P(\Gamma)$. Here we find the quantity
\[
\frac{\sum_{|\Gamma|=n}d(\Gamma)P(\Gamma)}{\sum_{|\Gamma|=n}m(\Gamma)P(\Gamma)}\,,
\]
which we call the pseudo-average. Such pseudo-mean values have previously been considered in the context of divisor functions $d$ and unitary-divisor functions $d^{*}$ in arithmetical semigroups. An example where the mean value of the ratio $d/d^{*}$ is not asymptotic to the intuitive value $\operatorname{mean}(d)/\operatorname{mean}(d^{*})$ is given in [9]. By contrast, in [6] it is shown that the mean length of a run of equal letters in a sample of $n$ geometric variables is indeed asymptotically equal to $n$ divided by the mean number of runs, up to $O(1/n)$.

Theorem 3. The pseudo-average descent after any maximum in a sample of $n$ geometric random variables is, for $n\ge 2$,

\[
\frac{\sum_{|\Gamma|=n}d(\Gamma)P(\Gamma)}{\sum_{|\Gamma|=n}m(\Gamma)P(\Gamma)}
=\frac{\displaystyle\sum_{k=0}^{n-2}\binom{n-2}{k}(-1)^{k}\,\frac{p^{2}q^{k}\bigl(nq^{k+1}-nq^{2k+3}-q^{k+1}+1\bigr)}{(1-q^{k+1})^{2}(1-q^{k+2})^{2}}}{\displaystyle\sum_{k=0}^{n-1}\binom{n-1}{k}(-1)^{k}\,\frac{npq^{k}}{1-q^{k+1}}}.
\]
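Both sums in Theorem 3 can be compared with brute-force enumeration over truncated samples (an independent check of ours, not part of the paper; the truncation bound `top` is an assumption):

```python
from itertools import product
from math import comb

def pseudo_average(n, q):
    """Return the numerator (5.1) and denominator (5.2) of Theorem 3."""
    p = 1 - q
    num = sum(comb(n - 2, k) * (-1) ** k * p**2 * q**k
              * (n * q**(k + 1) - n * q**(2 * k + 3) - q**(k + 1) + 1)
              / ((1 - q**(k + 1)) ** 2 * (1 - q**(k + 2)) ** 2)
              for k in range(n - 1))
    den = sum(comb(n - 1, k) * (-1) ** k * n * p * q**k / (1 - q**(k + 1))
              for k in range(n))
    return num, den

def brute_force(n, q, top=40):
    """E[d] and E[m] over all samples with entries <= top."""
    p = 1 - q
    e_d = e_m = 0.0
    for sample in product(range(1, top + 1), repeat=n):
        h = max(sample)
        descents = [v - (sample[i + 1] if i + 1 < n else 0)
                    for i, v in enumerate(sample) if v == h]
        prob = 1.0
        for v in sample:
            prob *= p * q ** (v - 1)
        e_d += sum(descents) * prob
        e_m += len(descents) * prob
    return e_d, e_m

num, den = pseudo_average(3, 0.5)
bd, bm = brute_force(3, 0.5)
print(num - bd, den - bm)  # both close to 0
```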

Proof. Consider the generating function in (3.2). To find the numerator of $\frac{\sum_{|\Gamma|=n}d(\Gamma)P(\Gamma)}{\sum_{|\Gamma|=n}m(\Gamma)P(\Gamma)}$, we need to differentiate (3.2) with respect to $u$ and then set both $u$ and $v$ to be 1. This gives
\[
\sum_{h\ge 1}\frac{zq^{h-1}\bigl(zq^{h}+hp(zq^{h}+1)-z\bigr)}{(a_{h}(z))^{2}}.
\]
Expanding this as a series around $z=0$ shows that the coefficients of $z^{n}$ (for $n\ge 2$) are
\[
\sum_{h\ge 1}q^{h-1}\bigl(1-q^{h}\bigr)^{n-2}\Bigl(n\bigl(q^{h}-hq+h-1\bigr)+hq^{h+1}-(h+1)q^{h}+1\Bigr).
\]
The term $(1-q^{h})^{n-2}$ can be expanded using the binomial theorem, so that after summing on $h$, we find
\[
\sum_{|\Gamma|=n}d(\Gamma)P(\Gamma)=\sum_{k=0}^{n-2}\binom{n-2}{k}(-1)^{k}\,\frac{p^{2}q^{k}\bigl(nq^{k+1}-nq^{2k+3}-q^{k+1}+1\bigr)}{(1-q^{k+1})^{2}(1-q^{k+2})^{2}}.
\tag{5.1}
\]
As in [8] we obtain
\[
\sum_{|\Gamma|=n}m(\Gamma)P(\Gamma)=\sum_{k=0}^{n-1}\binom{n-1}{k}(-1)^{k}\,\frac{npq^{k}}{1-q^{k+1}}.
\tag{5.2}
\]
Dividing (5.1) by (5.2) completes the proof of Theorem 3. $\square$

6. Asymptotic pseudo-average descent

In this section we find an asymptotic estimate for the pseudo-average descent as $n\to\infty$.

Theorem 4. The pseudo-average descent after any maximum in a sample of $n$ geometric random variables as $n\to\infty$ is asymptotic to
\[
\frac{\log_{Q}n+\dfrac{\gamma}{L}-\dfrac{1}{p}+L\,\delta_{1}(\log_{Q}n)}{1+\delta_{1}(\log_{Q}n)},
\qquad\text{where}\qquad
\delta_{1}(x)=\frac{1}{L}\sum_{k\ne 0}\Gamma(1-\chi_{k})e^{2k\pi ix}.
\]

Proof. We approximate the quotient in Theorem 3 using Rice’s method. For its numerator (5.1), we calculate the residue at $z=-1$ of
\[
\frac{\Gamma(n-1)\Gamma(-z)}{\Gamma(n-1-z)}\cdot\frac{p^{2}q^{z}\bigl(nq^{z+1}-nq^{2z+3}-q^{z+1}+1\bigr)}{(1-q^{z+1})^{2}(1-q^{z+2})^{2}}.
\]
This gives the main term in the numerator as
\[
\frac{1}{q\log q}-\frac{(q-1)(nH_{n}-1)}{(n-1)\,q\log^{2}q},
\]
which is asymptotic to
\[
\frac{p(\log n+\gamma)-L}{qL^{2}}
\]
as $n\to\infty$. For the residues at $z=-1+\chi_{k}$, $k\ne 0$, after letting $n\to\infty$ we have
\[
n^{\frac{2ki\pi}{\log q}}\,\frac{p\,\Gamma(1-\chi_{k})}{qL}
=\frac{p}{qL}\,\Gamma(1-\chi_{k})\,e^{-2ki\pi\log_{Q}n}.
\tag{6.1}
\]
For the denominator, the residue of $\frac{np\,\Gamma(n)\,q^{z}\,\Gamma(-z)}{(1-q^{z+1})\,\Gamma(n-z)}$ at $z=-1$ gives a main term of $\frac{p}{qL}$ (see also [8]). The fluctuations from the residues at $z=-1+\chi_{k}$ are
\[
\frac{p}{qL}\,\Gamma(1-\chi_{k})\,e^{-2ki\pi\log_{Q}n}\quad\text{as }n\to\infty.
\tag{6.2}
\]
The ratio of the asymptotic expressions for the numerator and denominator gives the required result. $\square$
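Theorem 4 can likewise be checked numerically: the exact pseudo-average is evaluated from the (numerically stable) $h$-sum for the numerator in the proof of Theorem 3, together with the standard form $np\sum_{h\ge 1}q^{h-1}(1-q^{h})^{n-1}$ for the expected number of maxima in the denominator, and compared with the main term $\log_{Q}n+\gamma/L-1/p$ (a sketch of ours; the denominator form, the cut-off `hmax`, and the decimal value of Euler's constant are assumptions not spelled out in the paper):

```python
from math import log

EULER_GAMMA = 0.5772156649015329  # Euler's constant

def pseudo_average_exact(n, q, hmax=200):
    """Exact pseudo-average: h-sum numerator over n*p*sum_h q^(h-1)(1-q^h)^(n-1)."""
    p = 1 - q
    num = sum(q ** (h - 1) * (1 - q**h) ** (n - 2)
              * (n * (q**h - h * q + h - 1) + h * q ** (h + 1) - (h + 1) * q**h + 1)
              for h in range(1, hmax + 1))
    den = n * p * sum(q ** (h - 1) * (1 - q**h) ** (n - 1) for h in range(1, hmax + 1))
    return num / den

n, q = 10**5, 0.5
L = log(1 / q)
main_term = log(n) / L + EULER_GAMMA / L - 1 / (1 - q)
print(pseudo_average_exact(n, q) - main_term)  # small
```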

Heuristically one expects the average and pseudo-average to have the same main term, i.e., $\log_{Q}n$, which they do. We note that the constant term of Theorem 4 is smaller than that of Theorem 2 by $1/2$.

7. Greatest descent after any maximum

Let $G_{n}$ be the greatest descent in a sample of $n$ geometric random variables.

Theorem 5. The generating function for the mean of the greatest descent after any maximum in a sample of $n$ geometric random variables is
\[
\sum_{n}E(G_{n})z^{n}=\sum_{j\ge 0}\frac{zq^{j}}{a_{j}(z)(1-z)}
\tag{7.1}
\]
\[
\qquad-\sum_{j\ge 0}\sum_{h\ge j+1}\frac{z^{2}pq^{2h-j-2}(1-q^{j})}{a_{h-1}(z)\bigl(a_{h}(z)-z^{2}pq^{h-1}(q^{h-1-j}-1)\bigr)}.
\tag{7.2}
\]

Proof. We find the mean of the greatest descent after any maximum in a geometrically distributed sample. Let $h$ be the largest value that occurs in a sample and let $H_{j}(z)$ count samples where the greatest descent after any maximum is at most $j$. The mean can be computed as $\sum_{j\ge 0}P(G_{n}>j)$, so the generating function for the sequence of greatest descent means is
\[
\sum_{j\ge 0}\Bigl(\frac{z}{1-z}-H_{j}(z)\Bigr).
\tag{7.3}
\]

For $H_{j}(z)$, either the sample does not end in an $h$, or it does. We decompose these two cases using the symbols defined in Section 2: in the first case, every element directly following an occurrence of the maximum $E_{h}$ must have size at least $h-j$, so that the descent is at most $j$; in the second case the sample ends in $E_{h}$, and (since a maximum at the right-most end is followed by a descent of size $h$) this requires $h\le j$. Using
\[
\sum_{k=1}^{\min\{j,h-1\}}zpq^{h-k-1}=zq^{h-1}\bigl(q^{-\min\{j,h-1\}}-1\bigr)
\]
for the possible sizes $h-k$ of the element directly following a maximum, we obtain the generating function
\[
H_{j}(z)=\sum_{h\ge 1}\sum_{i\ge 1}\biggl(\frac{zpq^{h-1}\cdot zq^{h-1}\bigl(q^{-\min\{j,h-1\}}-1\bigr)}{a_{h-1}(z)\bigl(1-zpq^{h-1}\bigr)}\biggr)^{i}\frac{1}{a_{h-1}(z)}
+\sum_{h=1}^{j}\frac{zpq^{h-1}}{a_{h}(z)}
\]
\[
=\sum_{h\ge 1}\frac{z^{2}pq^{2h-2}\bigl(q^{-\min\{j,h-1\}}-1\bigr)}{a_{h-1}(z)\Bigl(a_{h-1}(z)\bigl(1-zpq^{h-1}\bigr)-z^{2}pq^{2h-2}\bigl(q^{-\min\{j,h-1\}}-1\bigr)\Bigr)}
+\sum_{h=1}^{j}\frac{zpq^{h-1}}{a_{h}(z)}.
\]

Substituting this into (7.3) (after some simplification) gives (for a fixed $j$)
\[
\frac{z}{1-z}-\sum_{h=1}^{j}\frac{zpq^{h-1}}{a_{h-1}(z)\,a_{h}(z)}-\sum_{h\ge j+1}\frac{z^{2}pq^{2h-j-2}(1-q^{j})}{a_{h-1}(z)\bigl(a_{h}(z)-z^{2}pq^{h-1}(q^{h-1-j}-1)\bigr)}.
\]
The second term above is the telescoping sum $\sum_{h=1}^{j}\Bigl(\frac{1}{a_{h}}-\frac{1}{a_{h-1}}\Bigr)$, which together with the first term above and (7.2) yields the required result. $\square$

Theorem 6. The mean of the greatest descent after any maximum value in a sample of $n$ geometric random variables (ignoring fluctuations) is
\[
\log_{Q}n+\frac{\gamma}{L}+\frac{1}{2}-\frac{1}{L}\sum_{i\ge 1}\frac{p^{i}}{i(1-q^{i})}\quad\text{as }n\to\infty.
\]
Proof. Using partial fractions on (7.1) we have
\[
\frac{zq^{j}}{a_{j}(z)(1-z)}=\frac{1}{1-z}-\frac{1}{a_{j}(z)}.
\]
Here the coefficients of $z^{n}$ are given by
\[
[z^{n}]\sum_{j\ge 0}\Bigl(\sum_{k\ge 0}z^{k}-\sum_{k\ge 0}\bigl(z(1-q^{j})\bigr)^{k}\Bigr)=\sum_{j\ge 0}\bigl(1-(1-q^{j})^{n}\bigr).
\]
Since $\lim_{n\to\infty}\sum_{j\ge 0}\bigl(e^{-nq^{j}}-(1-q^{j})^{n}\bigr)=0$, we have
\[
\sum_{j\ge 0}\bigl(1-(1-q^{j})^{n}\bigr)\sim\sum_{j\ge 0}\bigl(1-e^{-nq^{j}}\bigr).
\tag{7.4}
\]

The Mellin transform (see [13], Appendix B.7) of $\sum_{j\ge 0}(1-e^{-nq^{j}})$ is
\[
-\Gamma(s)\sum_{j\ge 0}q^{-js}=\frac{-\Gamma(s)}{1-q^{-s}}\quad\text{in the fundamental strip }\langle-1,0\rangle.
\tag{7.5}
\]
To invert the Mellin transform in (7.5), we ignore fluctuations and calculate the negative residue of $\frac{-\Gamma(s)}{1-q^{-s}}\,n^{-s}$ at $s=0$, as
\[
-\frac{\log n}{\log q}-\frac{\gamma}{\log q}+\frac{1}{2}.
\tag{7.6}
\]

For the sum in (7.2), we have the following:
\[
\sum_{j\ge 0}\sum_{h\ge j+1}\frac{-z^{2}pq^{2h-j-2}(1-q^{j})}{a_{h-1}(z)\bigl(a_{h}(z)-z^{2}pq^{h-1}(q^{h-1-j}-1)\bigr)}
=\sum_{j\ge 0}\sum_{k\ge 0}\frac{-z^{2}pq^{j+2k}(1-q^{j})}{a_{j+k}(z)\bigl(a_{j+k+1}(z)+z^{2}pq^{j+k}(1-q^{k})\bigr)}.
\tag{7.7}
\]
We use the method of bootstrapping and the symbol ‘$\sim$’ (see [10]) to simplify one of the denominator terms in (7.7), i.e., $1-z(1-q^{j+k+1})+z^{2}pq^{j+k}(1-q^{k})=(1-\sigma_{j,k}z)(1-\tau_{j,k}z)$. The dominant root of the quadratic satisfies
\[
z\sim 1+q^{j+k+1}+pq^{j+k}(1-q^{k})=1+q^{j+k}-pq^{j+2k}.
\]
One starts with a crude bound for the dominant root: $z=O(1)$, plugs this in and gets in the next iteration the improved result $z=1+O(q^{j+k})$. Another iteration leads to $z=1+q^{j+k}-pq^{j+2k}+O(q^{2j+2k})$. Furthermore, $\frac{1}{z}\sim 1-q^{j+k}+pq^{j+2k}$ and $1-z(1-q^{j+k}+pq^{j+2k})\sim 0$. So asymptotically it is equivalent to consider (instead of (7.7)),
\[
\sum_{j\ge 0}\sum_{k\ge 0}\frac{-z^{2}pq^{j+2k}(1-q^{j})}{a_{j+k}(z)\bigl(a_{j+k}(z)-zpq^{j+2k}\bigr)}.
\tag{7.8}
\]

By partial fraction expansion,
\[
\frac{1}{a_{j+k}(z)(1-\sigma_{j,k}z)(1-\tau_{j,k}z)}=\frac{A_{j,k}}{a_{j+k}(z)}+\frac{B_{j,k}}{1-\sigma_{j,k}z}+\frac{C_{j,k}}{1-\tau_{j,k}z}.
\]
Since $\tau_{j,k}=O(q^{j+k})$, we get $[z^{n}]\frac{C_{j,k}}{1-\tau_{j,k}z}=O(q^{(j+k)n})$ as $n\to\infty$. The approximation of the dominant root $\sigma_{j,k}$ leads to
\[
[z^{n}]\Bigl(\frac{B_{j,k}}{1-\sigma_{j,k}z}-\frac{B_{j,k}}{1-(1-q^{j+k}+pq^{j+2k})z}\Bigr)=O(q^{(j+k)n}),\quad\text{as }n\to\infty.
\]
This justifies the use of the simplified form (7.8), which can be separated into partial fractions as follows:
\[
\sum_{j\ge 0}\sum_{k\ge 0}\Bigl(\frac{z(1-q^{j})}{a_{j+k}(z)}-\frac{z(1-q^{j})}{a_{j+k}(z)-zpq^{j+2k}}\Bigr).
\tag{7.9}
\]
The coefficient of $[z^{n}]$ in (7.9) is
\[
\sum_{j\ge 0}\sum_{k\ge 0}(1-q^{j})\Bigl((1-q^{j+k})^{n-1}-(1-q^{j+k}+pq^{j+2k})^{n-1}\Bigr).
\]

Using similar reasoning to that in (7.4), we can approximate this sum as
\[
\sum_{j\ge 0}\sum_{k\ge 0}(1-q^{j})\Bigl(e^{-nq^{j+k}}-e^{-nq^{j+k}(1-pq^{k})}\Bigr).
\]
The Mellin transform of the last expression is
\[
\Gamma(s)\sum_{j\ge 0}\sum_{k\ge 0}(1-q^{j})\,q^{-sj-sk}\bigl(1-(1-pq^{k})^{-s}\bigr),
\]
with fundamental strip $\langle-1,0\rangle$. Now,
\[
\Gamma(s)\sum_{j\ge 0}\sum_{k\ge 0}(1-q^{j})\,q^{-sj-sk}\sum_{i\ge 1}\binom{-s}{i}(-pq^{k})^{i}
=\sum_{i\ge 1}\frac{\Gamma(s+i)}{i!}\,p^{i}\,\frac{1}{1-q^{i-s}}\cdot\frac{pq^{-s}}{(1-q^{-s})(1-q^{1-s})}.
\]
Since $i\ge 1$, we have a fundamental strip $\langle-1,\infty\rangle$ for the gamma function terms. For a fixed $i$, by the Mellin inversion formula, the negative of the residue at $s=0$ of
\[
\frac{\Gamma(s+i)\,p^{i+1}\,q^{-s}\,n^{-s}}{i!\,(1-q^{-s})(1-q^{1-s})(1-q^{i-s})}
\]
is $\dfrac{-p^{i}}{iL(1-q^{i})}$. The sum of the negative of the residues is thus $-\frac{1}{L}\sum_{i\ge 1}\frac{p^{i}}{i(1-q^{i})}$. Combining this and (7.6) gives the asymptotic result of Theorem 6. There are further small fluctuating terms that come from the residues at $\chi_{k}=\frac{2k\pi i}{\log Q}$ for $k\in\mathbb{Z}\setminus\{0\}$. We note that all series converge, thanks to the presence of an exponentially small factor. $\square$

7.1. Comparison of asymptotic results

The result of Theorem 2 is asymptotically equivalent to the average descent after the first and last maximum (see [1]). The main term (i.e., ignoring fluctuations) in each of these cases is: $\log_{Q}n+\frac{\gamma}{L}+\frac{1}{2}-\frac{1}{p}$.

The only difference between that case and the result in Theorem 6 is that $-\frac{1}{p}$ is replaced by $-\frac{1}{L}\sum_{i\ge 1}\frac{p^{i}}{i(1-q^{i})}$. This constant is approximately $-1.12898$ for $q=0.2$, and $-1.79192$ for $q=0.5$, which is slightly larger than the equivalent constant ($-\frac{1}{p}$) in the cases of the descent after the first/last/any maximum ($-1.25$ for $q=0.2$ and $-2$ for $q=0.5$).
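The constant is easy to evaluate numerically (our own sketch, not part of the paper; as a cross-check we also use the identity $\sum_{i\ge 1}p^{i}/(i(1-q^{i}))=-\sum_{j\ge 0}\log(1-pq^{j})$, which appears in the $q\to 0$ computation below):

```python
from math import log

def constant(q, terms=2000):
    """-(1/L) * sum_{i>=1} p^i / (i * (1 - q^i)) with L = log(1/q)."""
    p, L = 1 - q, log(1 / q)
    return -sum(p**i / (i * (1 - q**i)) for i in range(1, terms + 1)) / L

def constant_via_product(q, terms=2000):
    """Same constant through the logarithm of the Euler-type product."""
    p, L = 1 - q, log(1 / q)
    return sum(log(1 - p * q**j) for j in range(terms)) / L

print(constant(0.5), constant_via_product(0.5))  # both close to -1.79192
```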

We demonstrate that the difference $-\frac{1}{L}\sum_{i\ge 1}\frac{p^{i}}{i(1-q^{i})}+\frac{1}{p}$ tends to 0 as $q\to 0$ and to $1/4$ as $q\to 1$:
\[
\lim_{q\to 0}\Bigl(\frac{1}{1-q}+\frac{1}{\log q}\sum_{i\ge 1}\frac{(1-q)^{i}}{i(1-q^{i})}\Bigr)
=1+\lim_{q\to 0}\frac{1}{\log q}\sum_{i\ge 1}\frac{(1-q)^{i}}{i(1-q^{i})}.
\]
However,
\[
\sum_{i\ge 1}\frac{p^{i}}{i(1-q^{i})}=\sum_{i\ge 1}\frac{p^{i}}{i}\sum_{j\ge 0}q^{ij}
=\sum_{j\ge 0}\log\frac{1}{1-pq^{j}}
=\log\prod_{j\ge 0}\frac{1}{1-pq^{j}}\sim-\log q,
\]
since $\prod_{j\ge 0}\frac{1}{1-pq^{j}}\sim\frac{1}{q}$. This finally leads to the limit stated above, as $q\to 0$.

For the limit as $q\to 1$, we have the following (recall $L=-\log q$):
\[
\lim_{q\to 1}\Bigl(\frac{1}{1-q}-\frac{1}{L}\sum_{i\ge 1}\frac{(1-q)^{i}}{i(1-q^{i})}\Bigr)
=\lim_{q\to 1}\frac{1}{1-q}\Bigl(1-\frac{1}{L}\sum_{i\ge 1}\frac{(1-q)^{i}}{i^{2}}\Bigr).
\tag{7.10}
\]
Now $\sum_{i\ge 1}\frac{(1-q)^{i}}{i^{2}}=(1-q)+\frac{1}{4}(1-q)^{2}+O((1-q)^{3})$ and $L=(1-q)+\frac{1}{2}(1-q)^{2}+O((1-q)^{3})$, so $\frac{1}{L}\sum_{i\ge 1}\frac{(1-q)^{i}}{i^{2}}=1-\frac{1-q}{4}+O((1-q)^{2})$. Substituting this into (7.10), we find that the limit as $q\to 1$ is $1/4$ as required.

Acknowledgement. We thank the referee for useful comments and critique.

References

[1] Archibald M, Blecher A, Brennan C, Knopfmacher A. Descents following maximal values in samples of geometric random variables. Statistics and Probability Letters 2015;97:229–40.
[2] Baryshnikov Y, Eisenberg B, Stengle G. A necessary and sufficient condition for the existence of the limiting probability of a tie for first place. Statistics and Probability Letters 1995;23:203–9.
[3] Brennan C, Knopfmacher A, Mansour T, Wagner S. Separation of the maxima in samples of geometric random variables. Applicable Analysis and Discrete Mathematics 2011;5(2):271–82.
[4] Eisenberg B, Stengle G, Strang G. The asymptotic probability of a tie for first place. Annals of Applied Probability 1993;3:731–45.
[5] Flajolet P, Sedgewick R. Mellin transforms and asymptotics: Finite differences and Rice’s integrals. Theoretical Computer Science 1995;144:101–24.
[6] Grabner P, Knopfmacher A, Prodinger H. Combinatorics of geometrically distributed random variables: Run statistics. Theoretical Computer Science 2003;297:261–70.
[7] Kirschenhofer P, Prodinger H. The path length of random skip lists. Acta Informatica 1994;31:775–92.
[8] Kirschenhofer P, Prodinger H. The number of winners in a discrete geometrically distributed sample. Annals of Applied Probability 1996;6:687–94.
[9] Knopfmacher J. Abstract Analytic Number Theory. North Holland, 1975.
[10] Knuth DE. The average time for carry propagation. Indagationes Mathematicae (Proceedings) 1978;81(2):238–42.
[11] Louchard G, Prodinger H. Asymptotics of the moments of extreme-value related distribution functions. Algorithmica 2006;46:437–67.
[12] Louchard G, Prodinger H. The asymmetric leader election algorithm: Another approach. Annals of Combinatorics 2009;12:449–78.
[13] Flajolet P, Sedgewick R. Analytic Combinatorics. Cambridge University Press, 2009.
[14] Yakubovich Y. On descents after maximal values in samples of discrete random variables. Statistics and Probability Letters 2015;105:203–8.