Statistics and Probability Letters 82 (2012) 1497–1503
Embedded Markov chain analysis of the superposition of renewal processes

Wolfgang Stadje, Institute of Mathematics, University of Osnabrück, 49069 Osnabrück, Germany

Article history: Received 31 January 2012; received in revised form 10 April 2012; accepted 10 April 2012; available online 21 April 2012.

Abstract: For a superposition of i.i.d. renewal processes we derive in closed form the limiting distribution of an embedded counting process that describes the simultaneous presence of points from the individual renewal streams in consecutive inspection intervals of fixed given length.
© 2012 Elsevier B.V. All rights reserved.

Keywords: Superposition; Renewal process; Inspection interval; Embedded Markov chain; Asymptotic distribution
1. The inspection chain

Superpositions of independent renewal streams form a class of point processes that is appealing in stochastic modeling. Their general theory can be found in Cox and Smith (1954), Lam (1991, 1993), Lam and Lehoczky (1991), Alsmeyer (1996), Torab and Kamen (2001), Mamonova (2005) and Mitov and Yanev (2006) and the references therein. Special results for models based on superpositions can be found, for example, in Albin (1986), Bratiychuk and Kempa (2003), Kella and Stadje (2006) and Elsayed and Perros (2000) on queueing, in Kella and Stadje (2008) for coupon collecting in continuous time, in Coffman et al. (1996) for makespan scheduling problems for multiple processors, and in Kallen et al. (2010) for reliability theory.

In this paper, we consider the superposition of d independent and identically distributed renewal processes. Our aim is to find the limiting distribution of an embedded counting process that describes the simultaneous presence of points from the d individual renewal streams in consecutive inspection intervals of fixed length α > 0. For this purpose, we divide the time interval [0, ∞) into the subintervals [0, α), [α, 2α), . . . , which are inspected as follows.

(a) If there are no points of the superposition in [kα, (k+1)α), the next step is to move to the next interval.
(b) If there are points of the superposition in [kα, (k+1)α), we take them away in their temporal order step by step until no such point is left. Then we proceed to [(k+1)α, (k+2)α).

Formally, let (S_n^{(i)})_{n≥1}, i = 1, . . . , d, be the renewal times of the ith (possibly delayed) process. We assume that the d sequences are independent and have the same continuous inter-arrival distribution function F. The following recursively defined d-dimensional Markov chain M_n = (M_{n,1}, . . . , M_{n,d}) describes the above inspection process. We start at time 0 with M_0, the vector of the first renewal points of the d streams. If M_{0,i_0} = min_i M_{0,i} ≥ α, we go α time units ahead and set M_1 = M_0 − αe, where e = (1, . . . , 1). If M_{0,i_0} < α, we take out the first renewal point from the i_0-th stream and set M_1 = M_0 + (S_2^{(i_0)} − S_1^{(i_0)}) e_{i_0}, where e_i is the d-vector with a 1 at the ith position and a 0 elsewhere. In general, if the current
M_n has minimal component M_{n,i_n} = min_i M_{n,i} ≥ α, we move by α time units and set M_{n+1} = M_n − αe, while if M_{n,i_n} < α we take out the next point from the i_n-th stream, say S_{k_n}^{(i_n)}, and set M_{n+1} = M_n + e_{i_n}(S_{k_n+1}^{(i_n)} − S_{k_n}^{(i_n)}). To avoid ambiguities in the selection of the minimizing components (i.e., in the definition of the i_n), we assume that every M_n has pairwise distinct components; since F is continuous, this occurs with probability 1. It is easy to see that M_n is a Markov chain with transition kernel

$$
K(x, B) =
\begin{cases}
1_B(x - \alpha e), & \text{if } \min_i x_i \ge \alpha,\\[4pt]
\displaystyle\int 1_B(x + u e_j)\, dF(u), & \text{if } x_j = \min_i x_i < \alpha \text{ and } j \in \{1, \ldots, d\},
\end{cases}
\tag{1.1}
$$
for x ≥ 0 and B ⊂ R_+^d a Borel set. Thus if M_n = x and all components of x are larger than or equal to α, all components are decreased by α; if there are components smaller than α, an independent random quantity with distribution F is added to the smallest component. We denote by P_µ and E_µ the underlying probability measure and the associated expectation operator when M_0 has distribution µ, and by P_x and E_x the corresponding quantities if M_0 ≡ x.

Let T ∈ (0, ∞] be the right endpoint of the support of the inter-arrival time distribution. For simplicity we assume throughout that F is continuous, has a finite mean m_F and an absolutely continuous component with a Lebesgue density f that is positive on (0, T) and bounded away from 0 on an interval (2δ, 3δ) for some δ ∈ (0, α). Then M_n has state space I = {x ∈ [0, α + T)^d | x has distinct components} and is irreducible with respect to Lebesgue measure, i.e., every subset of I having positive Lebesgue measure is visited with positive probability. Now define the sets C = (0, δ)^d ∩ I and D = (2δ, 3δ)^d ∩ I.
Our assumptions on F clearly imply that

(i) for the first entrance time τ_C = inf{n ≥ 1 | M_n ∈ C} we have E_x[τ_C] < ∞ for all x ∈ I;
(ii) P_µ(M_n ∈ C) > 0 for n large enough under any initial distribution µ on C for M_0;
(iii) taking µ_D to be the uniform distribution on D, we have the following lower bound for the distribution of the chain after d steps, for every x ∈ C and every Borel subset B of D:

$$
P_x(M_d \in B) \ge \int_{\{y \in \mathbb{R}_+^d \mid x + y \in B\}} \prod_{i=1}^{d} f(y_i)\, dy
\ge \beta^d \ell_d(B - x) = \beta^d \ell_d(D)\, \mu_D(B) = \varepsilon\, \mu_D(B),
$$

where β = inf_{(2δ,3δ)} f > 0, ℓ_d denotes d-dimensional Lebesgue measure and ε = β^d ℓ_d(D) > 0. It follows from these properties that M_n is an aperiodic positive Harris chain. Let π be the associated stationary distribution. Then P_x(M_n ∈ ·) → π in total variation for all x ∈ I. It seems to be quite difficult to determine π in closed form.

Our main interest is in the asymptotic distribution of a counting function of M_n. Let g(x), x ∈ I, be the number of components of x that are smaller than α:

g(x) = card{i ∈ {1, . . . , d} | x_i < α},   x ∈ I.
Then X_n = g(M_n) is the number of renewal streams which, just after the time at which M_n is observed, have points in the current inspection interval. Our aim is to determine

$$
p_i = \lim_{n\to\infty} P(X_n = i), \qquad i = 0, \ldots, d.
$$
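Before turning to the closed-form answer, the quantities p_i are easy to approximate by simulation. The following minimal sketch (assuming, purely for illustration, exponential inter-arrival times and non-delayed streams, so that M_0 consists of d independent draws from F) iterates the transition rule (1.1) and records the long-run frequencies of X_n = g(M_n); the rate, horizon and burn-in are arbitrary choices, and the i = 0 entry can be compared with the closed form (1 + αd/m_F)^{-1} obtained below.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_p(d=3, alpha=1.0, rate=1.0, n_steps=200_000, burn_in=10_000):
    """Estimate p_i = lim P(X_n = i) by iterating the transition rule (1.1).

    Illustrative assumptions: exponential inter-arrival times with the given
    rate (so m_F = 1/rate) and non-delayed streams, i.e. M_0 consists of d
    independent draws from F.
    """
    M = rng.exponential(1.0 / rate, size=d)      # first renewal points
    counts = np.zeros(d + 1)
    for n in range(n_steps):
        X = int(np.sum(M < alpha))               # X_n = g(M_n)
        if n >= burn_in:
            counts[X] += 1
        if X == 0:
            M -= alpha                           # all components >= alpha: shift the inspection window
        else:
            j = int(np.argmin(M))                # remove the earliest remaining point of the superposition
            M[j] += rng.exponential(1.0 / rate)
    return counts / counts.sum()

if __name__ == "__main__":
    d, alpha, rate = 3, 1.0, 1.0
    print("empirical p_i:", np.round(estimate_p(d, alpha, rate), 4))
    print("closed-form p_0 =", 1.0 / (1.0 + alpha * d * rate))   # (1 + alpha*d/m_F)^{-1}
```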
Let F^{*n} be the n-fold convolution of F with itself and let U = Σ_{n=0}^{∞} F^{*n} be the renewal function associated with F. We will prove the following result.
Theorem 1. The asymptotic distribution of X_n is given by

$$
p_i =
\begin{cases}
\bigl(1 + (\alpha d/m_F)\bigr)^{-1} \displaystyle\sum_{k=i}^{d} \binom{d}{k} q^{\,d-k}\, k\,\bigl(a^{(k)}_{k-i+1} - a^{(k)}_{k-i}\bigr), & i = 1, \ldots, d,\\[10pt]
\bigl(1 + (\alpha d/m_F)\bigr)^{-1}, & i = 0,
\end{cases}
\tag{1.2}
$$

where q = m_F^{-1} ∫_α^∞ (1 − F(u)) du and

$$
a^{(k)}_j = \binom{k-1}{j-1} \int_0^{\alpha} G_1(u)^{j-1} G_2(u)^{k-j}\, \nu(du)
+ (k-1)\binom{k-2}{j-1} \int_0^{\alpha} G_1(u)^{j-1} G_2(u)^{k-j-1} H_2(u)\, \nu(du)
+ (k-1)\binom{k-2}{j-2} \int_0^{\alpha} G_1(u)^{j-2} G_2(u)^{k-j} H_1(u)\, \nu(du).
\tag{1.3}
$$
In (1.3), ν is the measure with Lebesgue density u ↦ m_F^{-1}(1 − F(α − u)) U(u) and the functions G_1, G_2, H_1, H_2 are defined as follows:

$$
G_1(u) = m_F^{-1} \int_{\alpha-u}^{\infty} \int_0^{u} \{U(u - v) - U(\alpha - v - y)\}\,(1 - F(v))\, dv\, dF(y),
$$
$$
G_2(u) = 1 - q - G_1(u),
$$
$$
H_1(u) = m_F^{-1} \int_0^{u} U(v)\,(1 - F(\alpha - v))\, dv,
$$
$$
H_2(u) = \frac{u}{m_F} - H_1(u).
$$
In Section 2 we derive the formula for p_0. Section 3 is devoted to the calculation of some renewal-theoretic quantities that show up in the derivations (and appear in (1.2)). Section 4 introduces a splitting of the Markov chain M_n on which the computation of p_i for i > 0 is based. In Section 5 this computation is reduced to that of a certain integral, which is then carried out in Section 6.

2. The formula for p_0

Let C_i = {x ∈ I | x_i = min_j x_j < α} and C_0 = {x ∈ I | min_j x_j ≥ α}. We start with the relation

$$
p_0 + d\,\pi(C_1) = \pi(C_0) + \sum_{i=1}^{d} \pi(C_i) = 1,
\tag{2.1}
$$
which is due to symmetry among the d i.i.d. renewal processes. Next, for any ζ ∈ C with nonnegative real part,

$$
\begin{aligned}
0 &= \int_I e^{-\zeta x_1}\, d\pi(x) - \int_I \int_I e^{-\zeta x_1'}\, K(x, dx')\, d\pi(x)\\
  &= \sum_{i=0}^{d} \int_{C_i} \Bigl[\, e^{-\zeta x_1} - \int_I e^{-\zeta x_1'}\, K(x, dx') \Bigr]\, d\pi(x)\\
  &= \int_{C_0} \bigl[\, e^{-\zeta x_1} - e^{-\zeta (x_1 - \alpha)} \bigr]\, d\pi(x)
     + \sum_{i=1}^{d} \int_{C_i} \Bigl[\, e^{-\zeta x_1} - \int_I e^{-\zeta x_1'}\, K(x, dx') \Bigr]\, d\pi(x)\\
  &= (1 - e^{\zeta\alpha}) \int_{C_0} e^{-\zeta x_1}\, d\pi(x)
     + \int_{C_1} \Bigl[\, e^{-\zeta x_1} - \int_{[0,T)} e^{-\zeta (x_1 + u)}\, dF(u) \Bigr]\, d\pi(x)\\
  &= (1 - e^{\zeta\alpha}) \int_{C_0} e^{-\zeta x_1}\, d\pi(x)
     + \Bigl(1 - \int_{[0,T)} e^{-\zeta u}\, dF(u)\Bigr) \int_{C_1} e^{-\zeta x_1}\, d\pi(x).
\end{aligned}
\tag{2.2}
$$
The first equality is a consequence of the invariance of π, the second one is obvious, and the third, fourth and fifth one follow from the transition mechanism of the underlying Markov chain (cf. (1.1)). (Note that the terms of the sum in the third line of (2.2) for i > 1 are all zero.) Now divide the right-hand side of (2.2) by −ζ and let ζ → 0. We obtain απ(C_0) − m_F π(C_1) = 0, which can be inserted in (2.1). This yields the neat formula

$$
p_0 = \bigl[1 + (\alpha d / m_F)\bigr]^{-1}.
\tag{2.3}
$$
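The limit step behind (2.3) is easy to reproduce symbolically. The sketch below takes, purely as a concrete instance, an exponential inter-arrival distribution with mean m (so the Laplace-Stieltjes transform is ψ(ζ) = 1/(1 + mζ)); it confirms that (1 − e^{ζα})/(−ζ) → α and (1 − ψ(ζ))/(−ζ) → −m as ζ → 0, which is exactly what turns (2.2) into απ(C_0) − m_F π(C_1) = 0, and then combines this relation with (2.1).

```python
import sympy as sp

zeta, alpha, m, d = sp.symbols('zeta alpha m d', positive=True)

# Concrete instance: exponential inter-arrivals with mean m, whose
# Laplace-Stieltjes transform is psi(zeta) = 1/(1 + m*zeta).
psi = 1 / (1 + m * zeta)

# The two coefficients obtained from (2.2) after dividing by -zeta and letting zeta -> 0:
print(sp.limit((1 - sp.exp(zeta * alpha)) / (-zeta), zeta, 0))   # -> alpha
print(sp.limit((1 - psi) / (-zeta), zeta, 0))                    # -> -m

# Combine alpha*pi(C_0) - m*pi(C_1) = 0 with (2.1), i.e. p_0 + d*pi(C_1) = 1, p_0 = pi(C_0):
p0, p1 = sp.symbols('p0 p1', positive=True)
sol = sp.solve([alpha * p0 - m * p1, p0 + d * p1 - 1], [p0, p1], dict=True)[0]
print(sp.simplify(sol[p0]))                                      # -> m/(alpha*d + m), i.e. (2.3)
```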
3. Some renewal integrals

To determine p_i for i > 0 we have to compute some renewal-theoretic quantities. Let v ∈ [0, α) and let S_0 = v, v + S_1, v + S_2, . . . be a renewal process with inter-arrival distribution function F and inter-arrival times Y_n = S_n − S_{n−1}. Let T_v be the largest point of this renewal process in [0, α) and N_v(t) be its number of points in [0, t]. We denote by U(t) = Σ_{n=0}^{∞} F^{*n}(t) the renewal function (and the corresponding measure) associated with F and by f_eq(u) = m_F^{-1}(1 − F(u)) and F_eq(v) = ∫_0^v f_eq(u) du the equilibrium density and distribution function, respectively. We define the measure ν on [0, α) by

$$
\nu(B) = \int_0^{\alpha} E\bigl[N_v(T_v)\, 1_{\{T_v \in B\}}\bigr]\, f_{eq}(v)\, dv
$$
for Borel sets B ⊂ [0, α), and the functions G_1, G_2, H_1, H_2 by

$$
G_1(u) = \int_0^{\alpha} P(T_v < u)\, f_{eq}(v)\, dv, \qquad
G_2(u) = \int_0^{\alpha} P(T_v \ge u)\, f_{eq}(v)\, dv,
$$
$$
H_1(u) = \int_0^{\alpha} E\bigl[N_v(u)\, 1_{\{T_v < u\}}\bigr]\, f_{eq}(v)\, dv, \qquad
H_2(u) = \int_0^{\alpha} E\bigl[N_v(u)\, 1_{\{T_v \ge u\}}\bigr]\, f_{eq}(v)\, dv.
$$
Note that G_1 is a distribution function. First let us express ν in terms of F:

$$
\begin{aligned}
\nu(B) &= \int_0^{\alpha} \sum_{n=1}^{\infty} n\, E\bigl[1_{\{v+S_{n-1} < \alpha \le v+S_n\}}\, 1_{\{v+S_{n-1} \in B\}}\bigr]\, f_{eq}(v)\, dv\\
&= \frac{1}{m_F} \sum_{n=1}^{\infty} n \int_0^{\alpha} \int_0^{\alpha - v} (1 - F(\alpha - v - u))\, 1_B(v + u)\, (1 - F(v))\, F^{*(n-1)}(du)\, dv\\
&= \int_0^{\alpha} (1 - F(\alpha - y))\, 1_B(y) \sum_{n=1}^{\infty} \frac{n}{m_F}\, \bigl(F^{*(n-1)}(y) - F^{*n}(y)\bigr)\, dy\\
&= \int_B f_{eq}(\alpha - y)\, U(y)\, dy.
\end{aligned}
\tag{3.1}
$$
Hence, ν has the Lebesgue density y ↦ f_eq(α − y) U(y) on (0, α). Taking B = [0, u) in (3.1) and using N_v(T_v) 1_{{T_v < u}} = N_v(u) 1_{{T_v < u}}, we obtain

$$
H_1(u) = \int_0^{\alpha} E\bigl[N_v(u)\, 1_{\{T_v < u\}}\bigr]\, f_{eq}(v)\, dv
= m_F^{-1} \int_0^{u} U(v)\,(1 - F(\alpha - v))\, dv
\tag{3.2}
$$

and

$$
H_2(u) = \int_0^{\alpha} \bigl\{E[N_v(u)] - E\bigl[N_v(u)\, 1_{\{T_v < u\}}\bigr]\bigr\}\, f_{eq}(v)\, dv
= \frac{u}{m_F} - H_1(u).
\tag{3.3}
$$
The function G_1 is given by

$$
\begin{aligned}
G_1(u) &= \int_0^{u} \sum_{n=1}^{\infty} P(v + S_{n-1} < u,\ \alpha \le v + S_n)\, f_{eq}(v)\, dv\\
&= \int_0^{u} \sum_{n=1}^{\infty} \int_0^{\infty} P(S_{n-1} < u - v,\ \alpha - v \le S_{n-1} + Y_n \mid Y_n = y)\, dF(y)\, f_{eq}(v)\, dv\\
&= m_F^{-1} \int_{\alpha-u}^{\infty} \int_0^{u} \{U(u - v) - U(\alpha - v - y)\}\,(1 - F(v))\, dv\, dF(y)
\end{aligned}
\tag{3.4}
$$

and

$$
G_2(u) = F_{eq}(\alpha) - G_1(u).
\tag{3.5}
$$

Thus we obtain the measure ν and the functions G_i and H_i defined in Section 1.
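As a numerical illustration of (3.1)-(3.4), the sketch below takes exponential inter-arrival times with rate λ (so m_F = 1/λ, f_eq equals F's own density and, as is standard for the Poisson case, U(t) = 1 + λt for t ≥ 0), evaluates H_1 and G_1 by quadrature from (3.2) and (3.4), and cross-checks them against direct Monte Carlo estimates of the defining expectations ∫_0^α E[N_v(u) 1_{T_v < u}] f_eq(v) dv and ∫_0^α P(T_v < u) f_eq(v) dv with v drawn from f_eq. The grid sizes, sample size and helper names are ad hoc choices.

```python
import numpy as np

rng = np.random.default_rng(1)
trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz

# Exponential inter-arrival times with rate lam: F(t) = 1 - exp(-lam*t), m_F = 1/lam,
# f_eq = f, and (a standard fact for the Poisson case) U(t) = 1 + lam*t for t >= 0.
lam, alpha, mF = 1.0, 1.0, 1.0
F = lambda t: 1.0 - np.exp(-lam * np.maximum(t, 0.0))
U = lambda t: np.where(t >= 0.0, 1.0 + lam * t, 0.0)

def H1_quad(u, n=2000):            # (3.2): m_F^{-1} int_0^u U(v) (1 - F(alpha - v)) dv
    v = np.linspace(0.0, u, n)
    return trapz(U(v) * (1.0 - F(alpha - v)), v) / mF

def G1_quad(u, n=600):             # (3.4): double integral, y over (alpha-u, inf), v over (0, u)
    v = np.linspace(0.0, u, n)
    y = np.linspace(alpha - u, alpha + 30.0 / lam, n)   # dF(y)-mass beyond the cutoff is negligible
    Y, V = np.meshgrid(y, v, indexing="ij")
    inner = (U(u - V) - U(alpha - V - Y)) * (1.0 - F(V))
    fy = lam * np.exp(-lam * y)                          # density of F
    return trapz(trapz(inner, v, axis=1) * fy, y) / mF

def mc(u, n=200_000):              # Monte Carlo of the defining expectations, v drawn from f_eq
    h1 = g1 = 0.0
    for _ in range(n):
        v = rng.exponential(mF)
        if v >= alpha:                                   # only v in [0, alpha) contributes
            continue
        pts = [v]
        while pts[-1] < alpha:                           # simulate the stream past alpha
            pts.append(pts[-1] + rng.exponential(mF))
        T_v, N = pts[-2], len(pts) - 1                   # last point in [0, alpha) and number of such points
        if T_v < u:
            g1 += 1.0                                    # contributes to G_1(u)
            h1 += N                                      # contributes to H_1(u); N_v(u) = N_v(T_v) on {T_v < u}
    return h1 / n, g1 / n

u = 0.6
h1_mc, g1_mc = mc(u)
print("H1(0.6): quadrature", round(H1_quad(u), 4), " Monte Carlo", round(h1_mc, 4))
print("G1(0.6): quadrature", round(G1_quad(u), 4), " Monte Carlo", round(g1_mc, 4))
```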
4. Determination of the p_i: preliminaries

The key idea in the determination of p_i for i > 0 is to split the Markov chain M_n into consecutive random blocks that are governed by the same mechanism and differ only in their initial distributions. Consider a component sequence (M_{n,i})_{n≥1} for a fixed i ∈ {1, . . . , d}. A little reflection shows that after the deletion of all repetitions in this sequence (which occur at steps when only the other components change) this component sequence behaves exactly like the Markov chain M_n in one dimension, i.e., for a single renewal process. Moreover, after the deletion of repetitions the d component sequences are independent.

In the one-dimensional case d = 1 the stationary distribution can be explicitly determined from (2.2), which takes the form

$$
0 = (1 - e^{\zeta\alpha}) \int_{[\alpha,\infty)} e^{-\zeta x}\, d\pi(x) + (1 - \psi(\zeta)) \int_{[0,\alpha)} e^{-\zeta x}\, d\pi(x),
\tag{4.1}
$$

where we have set ψ(ζ) = ∫_{[0,T)} e^{−ζu} dF(u) (the Laplace-Stieltjes transform of F). For ζ = 2πin/α, n ∈ Z, the first term on the right-hand side of (4.1) vanishes and we find that ∫_{[0,α)} e^{−ζx} dπ(x) = 0 for n ∈ Z \ {0}. It follows that π on [0, α) is a
multiple of the Lebesgue measure. Now Eq. (2.3) for d = 1 yields p_1 = α/(m_F + α) and thus π([0, x]) = x/(m_F + α) for x ∈ [0, α). Next we obtain from (4.1)

$$
\int_{[\alpha,\infty)} e^{-\zeta x}\, d\pi(x)
= \frac{\psi(\zeta) - 1}{1 - e^{\zeta\alpha}} \int_0^{\alpha} e^{-\zeta x}\, \frac{1}{m_F + \alpha}\, dx
= \frac{e^{-\zeta\alpha}\,(1 - \psi(\zeta))}{(m_F + \alpha)\,\zeta}.
\tag{4.2}
$$

Laplace inversion yields that π on [α, ∞) has the density x ↦ (1 − F(x − α))/(m_F + α), x ≥ α. Thus, in the one-dimensional case π has a density which is a mixture of the uniform density on [0, α) and the equilibrium density f_eq(x) = (1 − F(x))/m_F of F shifted to [α, ∞) (with weights α/(m_F + α) and m_F/(m_F + α), respectively).

Now consider the subchain M'_n of M_n obtained by deleting all its values outside the set C_0. Formally, let σ_0 = 0, σ_n = inf{m > σ_{n−1} | M_m ∈ C_0} for n ≥ 1 and M'_n = M_{σ_n}. Then the asymptotic and stationary distribution of M'_n is given by the probability measure π' = π/π(C_0) = (1 + (αd/m_F)) π on C_0. On the other hand, the components of M'_n are independent, each having the stationary and asymptotic distribution computed above for M_n on C_0 in the one-dimensional case, with density x ↦ (1 − F(x − α))/(m_F + α), x ≥ α. Let τ_n = σ_n + 1. Then the Markov chain M''_n = M_{τ_n} = M'_n − αe is a deterministic translate of M'_n. The stationary and asymptotic distribution of M''_n, which we call ρ, thus has the product density

$$
r(x) = m_F^{-d} \prod_{i=1}^{d} (1 - F(x_i)), \qquad x \in I.
$$
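The one-dimensional mixture law derived above can be checked directly by simulation. The following minimal sketch (again assuming exponential inter-arrival times, an arbitrary illustrative choice for which the equilibrium distribution coincides with F itself) runs the d = 1 chain and compares its empirical distribution function at a few points with the mixture of the uniform law on [0, α) and the shifted equilibrium law.

```python
import numpy as np

rng = np.random.default_rng(2)

# d = 1 inspection chain with exponential inter-arrival times (illustrative choice).
# Claim from Section 4: the stationary law is uniform on [0, alpha) with weight alpha/(m_F+alpha)
# plus the equilibrium density shifted to [alpha, inf) with weight m_F/(m_F+alpha).
lam, alpha = 1.0, 1.0
mF = 1.0 / lam

def mixture_cdf(x):
    w_unif = alpha / (mF + alpha)
    if x < alpha:
        return w_unif * x / alpha
    # shifted equilibrium part; for exponential F the equilibrium cdf is again 1 - exp(-lam*t)
    return w_unif + (mF / (mF + alpha)) * (1.0 - np.exp(-lam * (x - alpha)))

def simulate(n_steps=500_000, burn_in=10_000):
    M, samples = rng.exponential(mF), []
    for n in range(n_steps):
        if n >= burn_in:
            samples.append(M)
        if M >= alpha:
            M -= alpha                    # shift the inspection window
        else:
            M += rng.exponential(mF)      # remove the point, advance to the next one
    return np.array(samples)

s = simulate()
for x in (0.5, 1.0, 2.0):
    print(f"x = {x}:  empirical {np.mean(s <= x):.4f}   mixture {mixture_cdf(x):.4f}")
```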
5. The key formulas

We fix i ∈ {1, . . . , d}. Let us count the number of indices n such that X_n = i in between two consecutive visits of I \ C_0 = {x ∈ I | min_i x_i < α} (here and in the following, C_i = {x ∈ I | g(x) = i} denotes the set of states with exactly i components smaller than α):

$$
N_i^1 = \sum_{0 \le m < \tau_1} 1_{C_i}(M_m) \qquad \text{and} \qquad
N_i^l = \sum_{\tau_{l-1} \le m < \tau_l} 1_{C_i}(M_m), \quad l \ge 2.
$$
We show next that

$$
p_i = p_0\, E_{\rho}[N_i^1].
\tag{5.1}
$$
First note that E_ρ[exp{s N_i^1}] < ∞ for all s ∈ (0, |log F(α)|). Indeed, N_i^1 > m implies that (i) at least i renewal streams have their initial points in [0, α) (because otherwise C_i could not be visited before τ_1) and that (ii) these renewal streams together have at least m renewal intervals inside [0, α) before M''_1 is reached (i.e., before τ_1). Hence,

$$
P_{\rho}(N_i^1 > m) \le P_{\rho}(X_0 \ge i)\, F(\alpha)^m,
$$

so that the distribution of N_i^1 has an exponential tail. Consequently,

$$
E_{\rho}\bigl[\exp\{s N_i^1\}\bigr] = \sum_{m=0}^{\infty} P_{\rho}(N_i^1 = m)\, e^{sm}
\le \sum_{m=0}^{\infty} P_{\rho}(X_0 \ge i)\, F(\alpha)^{m-1} e^{sm} < \infty, \qquad s \in (0, |\log F(\alpha)|).
$$
In particular, all moments of N_i^1 are finite. Now let J_n = sup{m ≥ 0 | τ_m ≤ n}. Since |J_n − Σ_{m=1}^{n} 1_{{X_m = 0}}| ≤ 1, we have

$$
\lim_{n\to\infty} J_n/n = p_0 \qquad P_{\rho}\text{-a.s.}
$$

This yields

$$
\lim_{n\to\infty} \tau_n/n = \lim_{n\to\infty} \tau_n/J_{\tau_n} = 1/p_0 \qquad P_{\rho}\text{-a.s.}
$$

Therefore,

$$
n^{-1} \sum_{l=1}^{n} N_i^l = (\tau_n/n)\, \tau_n^{-1} \sum_{m=1}^{\tau_n} 1_{\{X_m = i\}} \to p_0^{-1} p_i \qquad P_{\rho}\text{-a.s.}
\tag{5.2}
$$
Under the initial distribution ρ the sequence (N_i^l)_{l≥1} of counting variables is stationary and thus, by Birkhoff's ergodic theorem, its averages converge P_ρ-a.s. and in L^1(P_ρ). Therefore, the limit of these averages that has been established in (5.2) is equal to the expected value E_ρ[N_i^1]. Eq. (5.1) is proved.

It remains to determine E_ρ[N_i^1]. First write

$$
E_{\rho}[N_i^1] = \sum_{k=i}^{d} \int_{C_k} E_x[N_i^1]\, r(x)\, dx.
\tag{5.3}
$$
Now we can split C_k into its subsets C_k(S) = {x ∈ I | x_j < α for j ∈ S and x_j ≥ α for j ∈ {1, . . . , d} \ S}, where S ⊂ {1, . . . , d} has k elements. By invariance with respect to permutations, the integrals in (5.3) over each of these subsets have the same value, i.e., are equal to the integral over C_k({1, . . . , k}), and there are $\binom{d}{k}$ of them. Hence,

$$
E_{\rho}[N_i^1] = \sum_{k=i}^{d} \binom{d}{k} \int_{I \cap ([0,\alpha)^k \times [\alpha,\infty)^{d-k})} E_x[N_i^1]\, m_F^{-d} \prod_{i=1}^{d} (1 - F(x_i))\, dx.
\tag{5.4}
$$
Carrying out the integration over the last d − k variables x_{k+1}, . . . , x_d in (5.4) we arrive at

$$
E_{\rho}[N_i^1] = \sum_{k=i}^{d} \binom{d}{k} q^{\,d-k}\, m_F^{-k} \int_0^{\alpha}\!\!\cdots\!\int_0^{\alpha} E_{x_1,\ldots,x_k}[N_i^1]\, \prod_{i=1}^{k} (1 - F(x_i))\, dx_1 \cdots dx_k,
\tag{5.5}
$$

where q = m_F^{-1} ∫_α^∞ (1 − F(u)) du = 1 − F_eq(α) and E_{x_1,...,x_k}[N_i^1] is the expected value of N_i^1 for k renewal streams with initial renewals at x_1, . . . , x_k. So, in view of the two key formulas (5.1) and (5.5), our final task is to get hold of the k-fold integral in (5.5), i.e., to express it in terms of F.
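The key formula (5.1) also lends itself to a block-by-block Monte Carlo estimate of p_i that is independent of the long-run simulation sketched in Section 1: start a block with M_0 drawn from ρ (i.e., with d independent f_eq-distributed components), count the visits to each level i before the first entrance into C_0, average, and multiply by p_0. A minimal sketch, again assuming exponential inter-arrival times (so that f_eq is the exponential density itself), follows; the parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def p_via_blocks(d=3, alpha=1.0, rate=1.0, n_blocks=100_000):
    """Estimate p_i = p_0 * E_rho[N_i^1] (formula (5.1)) from i.i.d. simulated blocks.

    Illustrative assumption: exponential inter-arrival times, so f_eq equals the
    exponential density and a draw from rho is just d independent exponentials.
    """
    p0 = 1.0 / (1.0 + alpha * d * rate)              # (2.3), with m_F = 1/rate
    counts = np.zeros(d + 1)
    for _ in range(n_blocks):
        M = rng.exponential(1.0 / rate, size=d)      # M_0 distributed according to rho
        m = 0
        while True:
            X = int(np.sum(M < alpha))
            if m >= 1 and X == 0:                    # sigma_1: first visit to C_0 after time 0
                break
            if X >= 1:
                counts[X] += 1                       # one index m with X_m = X before tau_1
                j = int(np.argmin(M))
                M[j] += rng.exponential(1.0 / rate)  # remove the earliest remaining point
            else:
                M -= alpha                           # X_0 = 0: shift the inspection window
            m += 1
    p = p0 * counts / n_blocks                       # p_i = p_0 * E_rho[N_i^1] for i >= 1
    p[0] = p0
    return p

print(np.round(p_via_blocks(), 4))   # comparable with the long-run frequencies from the Section 1 sketch
```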
6. The final result

Fix k ∈ {1, . . . , d} and x_1, . . . , x_k ∈ [0, α). Consider the evolution of the counting sequence X_n when n runs from 0 to τ_1, for k independent renewal streams with initial renewals at x_1, . . . , x_k. After starting at X_0 = k, when moving from X_n to X_{n+1} the sequence remains constant or decreases by 1, the jumps occurring whenever one of the renewal processes reaches its last point in [0, α). Let T_1, T_2, . . . , T_k be the last points of the k renewal streams S_n^{(1)}, . . . , S_n^{(k)} in [0, α) and denote by T_{(1)} < T_{(2)} < · · · < T_{(k)} the T_m in their temporal order. Let N_j(B) = card{n ≥ 0 | S_n^{(j)} ∈ B}. Then N_i^1, the number of indices n ∈ {0, . . . , τ_1 − 1} (before reaching 0) for which X_n = i, can be expressed as

$$
N_i^1 = \sum_{j=1}^{k} N_j\bigl([T_{(k-i)},\, T_{(k-i+1)})\bigr).
\tag{6.1}
$$
It follows that the integral on the right-hand side of (5.5) can be written as

$$
\begin{aligned}
&\int_0^{\alpha}\!\!\cdots\!\int_0^{\alpha} E_{x_1,\ldots,x_k}[N_i^1]\, \prod_{i=1}^{k} (1 - F(x_i))\, dx_1 \cdots dx_k\\
&\quad= k \int_0^{\alpha}\!\!\cdots\!\int_0^{\alpha}
\Bigl( E_{x_1,\ldots,x_k}\bigl[N_1((0, T_{(k-i+1)}))\bigr] - E_{x_1,\ldots,x_k}\bigl[N_1((0, T_{(k-i)}))\bigr] \Bigr)
\prod_{i=1}^{k} (1 - F(x_i))\, dx_1 \cdots dx_k.
\end{aligned}
\tag{6.2}
$$
To compute E_{x_1,...,x_k}[N_1((0, T_{(j)}))] we decompose this expected value as follows:

$$
E_{x_1,\ldots,x_k}\bigl[N_1((0, T_{(j)}))\bigr]
= \sum_{m=1}^{k} E_{x_1,\ldots,x_k}\bigl[N_1((0, T_{(j)}))\, 1_{\{T_{(j)}\ \text{belongs to the } m\text{th renewal stream}\}}\bigr].
\tag{6.3}
$$
Let us first consider the term for m = 1 in the sum in (6.3). We condition on T_1 ∈ du and obtain, by the independence of the renewal processes,

$$
E_{x_1,\ldots,x_k}\bigl[N_1((0, T_{(j)}))\, 1_{\{T_{(j)}\ \text{belongs to the first renewal stream}\}}\bigr]
= \sum_{J \subset \{2,\ldots,k\},\ \mathrm{card}\, J = j-1} \int_0^{\alpha}
Q^{J}_{x_1,\ldots,x_k}(u)\, R^{\{2,\ldots,k\}\setminus J}_{x_1,\ldots,x_k}(u)\,
E_{x_1,\ldots,x_k}\bigl[N_1((0, T_1])\, 1_{\{T_1 \in du\}}\bigr],
\tag{6.4}
$$

where

$$
Q^{J}_{x_1,\ldots,x_k}(u) = \prod_{j \in J} P_{x_1,\ldots,x_k}(T_j < u), \qquad
R^{J}_{x_1,\ldots,x_k}(u) = \prod_{j \in J} P_{x_1,\ldots,x_k}(T_j > u).
$$

Indeed, conditional on T_1 ∈ du there must be a set J of indices as in (6.4) such that for the j ∈ J the last point of the jth stream in [0, α) is smaller than u and for j ∉ J ∪ {1} the last point of the jth stream is larger than u, and by the independence assumption the corresponding probabilities can be factored out. Note that P_{x_1,...,x_k}(T_j ≤ u) is independent of {x_1, . . . , x_k} \ {x_j}.
Now we turn to the general term for m > 1 in the sum in (6.3). First let j ≥ 2. We argue as above and obtain, conditioning on T_m ∈ du,

$$
\begin{aligned}
&E_{x_1,\ldots,x_k}\bigl[N_1((0, T_{(j)}))\, 1_{\{T_{(j)}\ \text{belongs to the } m\text{th renewal stream}\}}\bigr]\\
&\quad= \sum_{J \subset \{2,\ldots,k\}\setminus\{m\},\ \mathrm{card}\, J = j-1} \int_0^{\alpha}
Q^{J}_{x_1,\ldots,x_k}(u)\, R^{\{2,\ldots,k\}\setminus(J\cup\{m\})}_{x_1,\ldots,x_k}(u)\,
E_{x_1,\ldots,x_k}\bigl[N_1((0, u))\, 1_{\{T_1 \ge u\}}\bigr]\, P_{x_1,\ldots,x_k}(T_m \in du)\\
&\qquad+ \sum_{J \subset \{2,\ldots,k\}\setminus\{m\},\ \mathrm{card}\, J = j-2} \int_0^{\alpha}
Q^{J}_{x_1,\ldots,x_k}(u)\, R^{\{2,\ldots,k\}\setminus(J\cup\{m\})}_{x_1,\ldots,x_k}(u)\,
E_{x_1,\ldots,x_k}\bigl[N_1((0, u))\, 1_{\{T_1 < u\}}\bigr]\, P_{x_1,\ldots,x_k}(T_m \in du).
\end{aligned}
\tag{6.5}
$$
For m > 1 and j = 1 we find that

$$
E_{x_1,\ldots,x_k}\bigl[N_1((0, T_{(1)}))\, 1_{\{T_{(1)}\ \text{belongs to the } m\text{th renewal stream}\}}\bigr]
= \int_0^{\alpha} R^{\{2,\ldots,k\}\setminus\{m\}}_{x_1,\ldots,x_k}(u)\,
E_{x_1,\ldots,x_k}\bigl[N_1((0, u))\, 1_{\{T_1 > u\}}\bigr]\, P_{x_1,\ldots,x_k}(T_m \in du).
\tag{6.6}
$$
It is clear that after inserting (6.3)-(6.6) in (6.2) the underlying symmetry, i.e., the identical distributions of the renewal streams, leads to groups of one-dimensional integrals that are equal and given by the functions introduced in Section 3. We find that

$$
\begin{aligned}
&\int_0^{\alpha}\!\!\cdots\!\int_0^{\alpha} E_{x_1,\ldots,x_k}\bigl[N_1((0, T_{(k-i+1)}))\bigr]\, dF_{eq}(x_1) \cdots dF_{eq}(x_k)\\
&\quad= \int_0^{\alpha}\!\!\cdots\!\int_0^{\alpha} E_{x_1,\ldots,x_k}\bigl[N_1((0, T_{(k-i+1)}))\, 1_{\{T_{(k-i+1)}\ \text{belongs to the first renewal stream}\}}\bigr]\, dF_{eq}(x_1) \cdots dF_{eq}(x_k)\\
&\qquad+ (k-1) \int_0^{\alpha}\!\!\cdots\!\int_0^{\alpha} E_{x_1,\ldots,x_k}\bigl[N_1((0, T_{(k-i+1)}))\, 1_{\{T_{(k-i+1)}\ \text{belongs to the second renewal stream}\}}\bigr]\, dF_{eq}(x_1) \cdots dF_{eq}(x_k)\\
&\quad= \binom{k-1}{k-i} \int_0^{\alpha} G_1(u)^{k-i} G_2(u)^{i-1}\, \nu(du)
+ (k-1)\binom{k-2}{k-i} \int_0^{\alpha} G_1(u)^{k-i} G_2(u)^{i-2} H_2(u)\, \nu(du)\\
&\qquad+ (k-1)\binom{k-2}{k-i-1} \int_0^{\alpha} G_1(u)^{k-i-1} G_2(u)^{i-1} H_1(u)\, \nu(du).
\end{aligned}
\tag{6.7}
$$
Inserting (6.7) in (5.5) and the result in (5.1) now yields (1.2).

References

Albin, S.L., 1986. Delays for customers from different arrival streams to a queue. Manag. Sci. 32, 329–340.
Alsmeyer, G., 1996. Superposed continuous renewal processes: a Markov renewal approach. Stoch. Proc. Appl. 61, 311–322.
Bratiychuk, M.S., Kempa, W., 2003. Application of the superposition of renewal processes to the study of batch arrival queues. Queueing Syst. 44, 51–67.
Coffman Jr., E.G., Flatto, L., Whitt, W., 1996. Stochastic limit laws for schedule makespans. Stoch. Models 12, 215–243.
Cox, D.R., Smith, W.L., 1954. On the superposition of renewal processes. Biometrika 41, 91–99.
Elsayed, K., Perros, H.G., 2000. The superposition of discrete-time Markov renewal processes with applications to statistical multiplexing of bursty traffic sources. J. Appl. Math. Comput. 57, 43–62.
Kallen, M.J., Nicolai, R.P., Farahani, S.S., 2010. Superposition renewal processes for modelling imperfect maintenance. In: Bris, R., Guedes Soares, C., Martorell, S. (Eds.), Risk, Safety and Reliability. pp. 629–634.
Kella, O., Stadje, W., 2006. Superposition of renewal processes and an application to multi-server queues. Statist. Probab. Lett. 76, 1914–1924.
Kella, O., Stadje, W., 2008. A coupon collector's problem with renewal arrivals. J. Appl. Probab. 45, 610–620.
Lam, C.T., 1991. Joint simulation of backward and forward recurrence times in a superposition of independent renewal processes. J. Appl. Probab. 28, 930–933.
Lam, C.T., 1993. Superposition of Markov renewal processes and applications. Adv. in Appl. Probab. 25, 585–606.
Lam, C.T., Lehoczky, J.P., 1991. Superposition of renewal processes. Adv. in Appl. Probab. 23, 64–85.
Mamonova, A.B., 2005. Superposition of Markov renewal processes in stationary phase integration. Cybernet. Systems Anal. 41, 736–750.
Mitov, K.V., Yanev, N.M., 2006. Superposition of renewal processes with heavy-tailed interarrival times. Statist. Probab. Lett. 76, 555–561.
Torab, P., Kamen, E.W., 2001. On approximate renewal models for the superposition of renewal processes. In: IEEE International Conference on Communications, ICC 2001, Helsinki, Finland, vol. 9, June 11–14, pp. 2901–2906.