Mathematical and Computer Modelling 31 (2000) 31-46
Pergamon, www.elsevier.nl/locate/mcm

When Is a MAP Poisson?

N. G. BEAN AND D. A. GREEN
Department of Applied Mathematics, University of Adelaide
Adelaide, Australia 5005
[email protected]  [email protected]
Abstract. The departure process of a queue is important in the analysis of networks of queues, as it may be the arrival process to another queue in the network. A simple description of the departure process could enable a tractable analysis of a network, saving costly simulation or avoiding the errors of approximation techniques. In a recent paper, Olivier and Walrand [1] conjectured that the departure process of a MAP/PH/1 queue is not a MAP unless the queue is a stationary M/M/1 queue. This conjecture was prompted by their claim that the departure process of an MMPP/M/1 queue is not a MAP unless the queue is a stationary M/M/1 queue. We note that their proof has an algebraic error, see [2], which leaves the above question of whether the departure process of an MMPP/PH/1 queue is a MAP still open. There is also a more fundamental problem with Olivier and Walrand's proof. In order to identify stationary M/M/1 queues, it is essential to be able to determine from its generator when a stationary MAP is a Poisson process. This is not discussed in [1], nor does it appear to have been discussed elsewhere in the literature. This deficiency is remedied using ideas from nonlinear filtering theory, to give a characterisation as to when a stationary MAP is a Poisson process. © 2000 Elsevier Science Ltd. All rights reserved.
1. INTRODUCTION

A Markovian Arrival Process (MAP) is a process which counts transitions of a finite state Markov chain. Unlike its name suggests, a MAP can be used to describe a variety of point processes, including departure processes. For example, consider an M/M/1 queue, with arrival rate λ > 0 and service rate μ > 0, modelled by a Markov chain x = {x_t, t ≥ 0} on the state space Z⁺, where x_t represents the number in the queue at time t. This Markov chain has the following infinite transition rate matrix

$$Q = \begin{pmatrix} -\lambda & \lambda & 0 & 0 & \cdots \\ \mu & -(\lambda+\mu) & \lambda & 0 & \cdots \\ 0 & \mu & -(\lambda+\mu) & \lambda & \cdots \\ \vdots & & \ddots & \ddots & \ddots \end{pmatrix}, \qquad (1)$$

where the number in the queue x increases with the row number of the matrix. In the situation where λ < μ, the queue is positive recurrent and, under stationary conditions, the point process of occurrence of transitions (n+1, n), n ≥ 0, is known (see [3]) to be a Poisson process of rate λ, which is of course a trivial MAP.

The authors would like to thank S. Asmussen for the example in equation (47). We would also like to thank P. Taylor for his initial input which led to this work and his helpful suggestions in refining notation and presentation.

In addition to exhibiting a MAP departure process, the above
example also shows that it is possible for a process which counts transitions of an infinite state Markov chain to be statistically equivalent to a MAP.
A natural question to ask is whether the property that the departure process is a MAP holds for other queues. That is, for more general queues, does there exist a finite state Markov chain and a set of transitions for which the counting process of the transitions is identical to the departure process of the original queue? Olivier and Walrand [1] presented an argument to show that there exists no such finite state chain for a ·/M/1 queue when the arrival process is a non-Poisson MMPP, and conjectured that this is also true for a ·/PH/1 queue fed by a non-Poisson MAP. Unfortunately, there is an algebraic error in the argument of Olivier and Walrand, as pointed out in [2], and so the question of whether the output of a ·/M/1 queue fed by a non-Poisson MMPP can be a MAP still remains open. Consideration of this problem leads to the fundamental question of when an MMPP, or more generally a MAP, is a Poisson process. This question was not discussed in [1]. To emphasise: since it is possible for the arrival process of a MAP/M/1 queue to be Poisson, but with a possibly complicated description, and since we know that the output of such a queue is a MAP (as mentioned above, it is Poisson), it is essential to be able to tell from its generator when a MAP is, in fact, Poisson.
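The classical M/M/1 fact quoted above (Burke's theorem [3]) is easy to probe by simulation. The sketch below is not from the paper; it simulates a long M/M/1 sample path with hypothetical rates λ = 1 and μ = 2 and checks that the inter-departure times average about 1/λ.

```python
import random

def simulate_mm1_departures(lam, mu, horizon, seed=0):
    """Simulate an M/M/1 queue (arrival rate lam, service rate mu)
    and return the epochs of departures up to the given horizon."""
    rng = random.Random(seed)
    t, n = 0.0, 0          # current time, number in system
    departures = []
    while t < horizon:
        dep_rate = mu if n > 0 else 0.0
        total = lam + dep_rate
        t += rng.expovariate(total)     # time to next transition
        if rng.random() < lam / total:  # arrival
            n += 1
        else:                           # departure (only possible if n > 0)
            n -= 1
            departures.append(t)
    return departures

# Burke's theorem [3]: for lam < mu the stationary departure process is
# Poisson(lam), so inter-departure times should average about 1/lam.
deps = simulate_mm1_departures(lam=1.0, mu=2.0, horizon=20000.0)
gaps = [b - a for a, b in zip(deps, deps[1:])]
mean_gap = sum(gaps) / len(gaps)
```

The initial transient (the queue starts empty rather than stationary) is swamped by the long horizon; a rigorous check would also examine higher-order statistics, since the theorem concerns the whole point process, not just its mean rate.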
2. WHEN IS A STATIONARY MAP POISSON?

The question of when a stationary MAP is Poisson appears not to have been discussed in the literature, even though it is a very natural question. This problem is considered using the techniques of nonlinear filtering, as given in [4, Section 10.2]. Consider a Markov chain x = {x_t, t ≥ 0} having a finite state space X, with transition rate matrix Q. For i ≠ j ∈ X, let

$$0 \le [Q_1]_{i,j} \le [Q]_{i,j}, \qquad (2)$$

for i ∈ X let

$$0 \le [Q_1]_{i,i} < \infty, \qquad (3)$$

and let

$$Q_0 = Q - Q_1. \qquad (4)$$

The transitions with rates Q₁ are said to be observed, while those in Q₀ are hidden. Let J(t) count the number of observed transitions up to time t, and let

$$\Psi(t,k) = P\{x_t = k \mid \{J(t)\}\}, \qquad (5)$$

so that Ψ(t,k) is the probability of being in state k at time t, conditioned by the jump process up to time t. Also, let Ψ(t) be the row vector Ψ(t) = (Ψ(t,k), k ∈ X).
From [4, Theorem 10.2.13], it can be seen that the point process defined by Q = Q₀ + Q₁ has the same finite-dimensional distributions as a Poisson process of rate λ if and only if, for some given initial distribution of states π,

$$\frac{\pi e^{Q_0 t_1} Q_1\, e^{Q_0 t_2} Q_1 \cdots Q_1\, e^{Q_0 t_k} Q_1 \mathbf 1}{\pi e^{Q_0 t_1} Q_1\, e^{Q_0 t_2} Q_1 \cdots Q_1\, e^{Q_0 t_k} \mathbf 1} = \lambda, \quad \text{for all } k \ge 1 \text{ and } t_i \in [0,\infty),\ i \in \{1,2,\dots,k\}, \qquad (6)$$

where **1** is a column of ones of the appropriate dimension. Any point process generated by the observed transitions of Q = Q₀ + Q₁ which is equivalent to a Poisson process of rate λ > 0 must satisfy equation (6). One case where this clearly occurs
is when Q₁1 = λ1. This result is not affected by the initial distribution π and corresponds to the case where the MAP has the same arrival rate λ in every phase.

We will consider the matrix Q₀ in its spectral form, first looking at the more general case when Q₀ is not assumed to be diagonalisable, and then considering the diagonalisable special case. To avoid overcomplicating matters, the general result is motivated in stages by first considering the equivalence of a PH random variable to an exponential random variable. We then consider the equivalence of a PH-renewal process to a Poisson process, and finally generalise to the equivalence of a MAP to a Poisson process.

2.1. Jordan Canonical Form of Q₀

For simplicity, the matrix Q₀ is assumed to be irreducible so that, by [5, Theorem 2.6], the eigenvalue of maximal real part λ₁ is unique and has associated positive right and left eigenvectors. For an insight into the considerations necessary for reducible Q₀, see [5, Section 1.2]. Although the reducible case can be handled, in general it would be very messy and uninformative, and examples would be better dealt with on an individual basis.

Consider the matrix Q₀ in upper Jordan canonical form, written (see [6, p. 152, Volume 1; 7, Section 11.6] for similar notation)

$$Q_0 = T\left[\sum_{j=1}^{g} \left(\lambda_j E_j + N_j\right)\right] T^{-1}, \qquad (7)$$
where g is the number of Jordan blocks, λ_j is the eigenvalue corresponding to the jth Jordan block, T is the transformation matrix for the Jordan canonical form, and E_j and N_j are idempotent and nilpotent matrices, respectively, which give the form for each Jordan block. For example, a typical order-three Jordan block would have a matrix E_j and an associated N_j as follows:

$$E_j = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad N_j = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix},$$

both embedded in the positions of the jth block and zero elsewhere. It follows from [6, p. 100, Volume 1] that the exponential e^{Q₀t} can be written

$$e^{Q_0 t} = T\left[\sum_{j=1}^{g}\sum_{v=1}^{p_j} \frac{t^{v-1}}{(v-1)!}\, e^{\lambda_j t}\, E_j N_j^{v-1} E_j\right] T^{-1}, \qquad (8)$$

where N_j⁰ ≡ the identity matrix and p_j is the order of the nilpotent matrix N_j (that is, p_j is the smallest positive integer for which N_j^{p_j} = 0 and N_j^{p_j−1} ≠ 0). Again from [6], the inverse Q₀⁻¹ can be written

$$Q_0^{-1} = T\left[\sum_{j=1}^{g}\sum_{v=1}^{p_j} (-1)^{v-1}\left(\frac{1}{\lambda_j}\right)^{v} E_j N_j^{v-1} E_j\right] T^{-1}.$$

We present some notation for the transformation matrices T and T⁻¹. T can be written as

$$T = \left([\tau_{1,1}],\ [\tau_{2,1},\dots,\tau_{2,p_2}],\ [\tau_{3,1},\dots,\tau_{3,p_3}],\ \dots,\ [\tau_{g,1},\dots,\tau_{g,p_g}]\right),$$
where the columns τ_{j,1} are right eigenvectors corresponding to the jth eigenvalue λ_j of Q₀ and τ_{j,v} for v ∈ {2,…,p_j} are the generalised right eigenvectors corresponding to the jth Jordan block. Similarly, T⁻¹ can be written as

$$T^{-1} = \begin{pmatrix} [\bar\tau_{1,1}] \\ [\bar\tau_{2,1};\ \dots;\ \bar\tau_{2,p_2}] \\ \vdots \\ [\bar\tau_{g,1};\ \dots;\ \bar\tau_{g,p_g}] \end{pmatrix},$$

where the rows τ̄_{j,p_j} are left eigenvectors corresponding to the jth eigenvalue λ_j of Q₀. For v ∈ {1,…,p_j − 1}, τ̄_{j,v} are the generalised left eigenvectors corresponding to the jth Jordan block. Note that the eigenvalue λ₁ is simple, by [5, Theorem 2.6], since we have assumed that Q₀ is irreducible, and so the associated Jordan block is of order one. If there are s ≤ g distinct eigenvalues, then we let A_j, for j = 1,2,…,s, contain the indices of the Jordan blocks with λ_j on their diagonal. Then, Q₀ may also be written as

$$Q_0 = T\left[\sum_{j=1}^{s}\sum_{i\in A_j}\left(\lambda_j E_{(j,i)} + N_{(j,i)}\right)\right]T^{-1}, \qquad (9)$$

with

$$e^{Q_0 t} = T\left[\sum_{j=1}^{s}\sum_{i\in A_j}\sum_{v=1}^{p_{(j,i)}} \frac{t^{v-1}}{(v-1)!}\, e^{\lambda_j t}\, E_{(j,i)} N_{(j,i)}^{v-1} E_{(j,i)}\right]T^{-1}, \qquad (10)$$

where E_{(j,i)} and N_{(j,i)} are the idempotent and nilpotent matrices, respectively, of the Jordan canonical form description which correspond to the ith Jordan block in set A_j, corresponding to the distinct eigenvalue λ_j. In (9) and (10), for each distinct eigenvalue λ_j, within the square brackets there is a unique collection of indices i ∈ A_j for that λ_j. Therefore, without ambiguity, we can reduce the number of subscripts by an abuse of notation and write

$$Q_0 = T\left[\sum_{j=1}^{s}\sum_{i\in A_j}\left(\lambda_j E_i + N_i\right)\right]T^{-1}, \qquad (11)$$

with

$$e^{Q_0 t} = T\left[\sum_{j=1}^{s}\sum_{i\in A_j}\sum_{v=1}^{p_i} \frac{t^{v-1}}{(v-1)!}\, e^{\lambda_j t}\, E_i N_i^{v-1} E_i\right]T^{-1}, \qquad (12)$$

where, for each j ∈ {1,2,…,s}, we write P_j = max_{i∈A_j} p_i for the largest order of the Jordan blocks in A_j.

2.1.1. A PH-random variable

Any distribution on [0,∞) which can be obtained as the distribution of time until absorption in a continuous-time finite-space Markov chain which has a single absorbing state, into which absorption is certain, is said to be of PH-type (see [8]).
Consider a Markov chain with m + 1 states, initial probability vector (α, α_{m+1}) and transition rate matrix

$$Q = \begin{pmatrix} U & U^0 \\ \mathbf 0 & 0 \end{pmatrix},$$

where U is a nonsingular m × m matrix with U_{ii} < 0 and U_{ij} ≥ 0 for all i ≠ j, and U⁰ ≥ 0 is an m × 1 vector such that U1 + U⁰ = 0. The probability distribution of time until absorption into state m + 1 is a PH-type distribution with representation (α, U). If we consider the (m+1)th state of the PH-random variable as an instantaneous state, in that we instantaneously restart the process using the probability vector α, then the process consisting of absorption epochs is a PH-renewal process with representation (α, U).

To investigate the equivalence of a PH random variable with a negative exponential random variable, we assume that α1 = 1 (that is, α_{m+1} = 0). Otherwise, there would be a positive atom of probability at t = 0, which would be a contradiction to an exponential random variable. Henceforth, we make this assumption.
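As an aside, the absorption-time distribution just described can be evaluated numerically from a representation (α, U) via F(t) = 1 − α e^{Ut} 1. A small sketch, using a hypothetical Erlang-2 representation whose closed-form CDF is known:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical order-2 PH representation (alpha, U): an Erlang(2) variable,
# i.e., the sum of two independent exponential phases of rate 3.
alpha = np.array([1.0, 0.0])
U = np.array([[-3.0, 3.0],
              [0.0, -3.0]])
one = np.ones(2)

def ph_cdf(t):
    """P{absorption time <= t} = 1 - alpha exp(Ut) 1."""
    return 1.0 - alpha @ expm(U * t) @ one

# Closed form for Erlang(2, rate 3): F(t) = 1 - e^{-3t}(1 + 3t).
t = 0.7
closed_form = 1.0 - np.exp(-3 * t) * (1 + 3 * t)
```

Since α1 = 1 here, F(0) = 0, matching the no-atom assumption made above.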
THEOREM 2.1. A PH(α, Q₀) random variable is negative exponential with parameter λ > 0, if and only if λ = −λ₁ and, for all (j,v) ∈ {2,…,s} × {1,…,P_j},

$$\alpha T \left(\sum_{i\in A_j} E_i N_i^{v-1} E_i\right) T^{-1} \mathbf 1 = 0. \qquad (13)$$

PROOF. If 1 − αe^{Q₀t}1 = 1 − e^{−λt}, then from (12) we have

$$\alpha T \left[e^{\lambda_1 t} E_1 + \sum_{j=2}^{s}\sum_{i\in A_j}\sum_{v=1}^{p_i} \frac{t^{v-1}}{(v-1)!}\, e^{\lambda_j t}\, E_i N_i^{v-1} E_i\right] T^{-1}\mathbf 1 = e^{-\lambda t}.$$

This is of the form

$$c_1 e^{\lambda_1 t} + \sum_{j=2}^{s}\sum_{v=1}^{P_j} c_{(j,v)}\, t^{v-1} e^{\lambda_j t} = e^{-\lambda t}, \qquad (14)$$

where c₁ = αTE₁T⁻¹1, and for (j,v) ∈ {2,…,s} × {1,…,P_j},

$$c_{(j,v)} = \frac{1}{(v-1)!}\,\alpha T \left(\sum_{i\in A_j} E_i N_i^{v-1} E_i\right) T^{-1}\mathbf 1.$$

Because the λ_j are distinct, the functions e^{λ₁t} and t^{v−1}e^{λ_jt} for (j,v) ∈ {2,…,s} × {1,…,P_j} on the left-hand side of equation (14) are linearly independent. This follows, for example, from [9, Theorem 12, Chapter 2]. Now, c₁ can be rewritten as c₁ = ατ₁τ̄₁1 by using the definitions of T and T⁻¹, and the fact that E₁ consists entirely of zeros except for a one in the top left-hand corner. From [5], τ₁ and τ̄₁ are positive because Q₀ is an irreducible generator matrix, so c₁ > 0. Equation (14) is true for all t ∈ [0,∞), and therefore, as e^{λ_jt} ≠ 0 for j ∈ {1,2,…,s} and for all t ∈ [0,∞), we must have λ = −λ₁ and, for each j ∈ {2,…,s},

$$\alpha T \left(\sum_{i\in A_j} E_i N_i^{v-1} E_i\right) T^{-1}\mathbf 1 = 0, \quad \text{for each } v \in \{1,\dots,P_j\}. \qquad (15)\quad\blacksquare$$
We now consider what information can be drawn from this theorem in the following corollary, for the special case when all the eigenvalues are distinct.

COROLLARY 2.2. A PH(α, Q₀) random variable, where the Jordan canonical form has distinct eigenvalues in each Jordan block, is negative exponential with parameter λ > 0, if and only if λ = −λ₁ and, for all j ∈ {2,…,g},

$$\alpha\tau_{j,v} = 0 \quad\text{or}\quad \bar\tau_{j,v}\mathbf 1 = 0, \quad \text{for each } v \in \{1,\dots,p_j\}.$$

PROOF. We use the result from the previous theorem, noting that in this case s = g and hence each set A_j has only one element. From equation (15), we see that for each j ∈ {2,3,…,g},

$$(v=1)\qquad \alpha T\left(E_j\right)T^{-1}\mathbf 1 = 0,$$
$$(v=2,\dots,p_j)\qquad \alpha T\left(E_j N_j^{v-1} E_j\right)T^{-1}\mathbf 1 = 0. \qquad (16)$$

The nilpotent matrix N_j is such that E_jN_j^{p_j−1}E_j has only one nonzero entry, so that

$$\alpha T\left(E_j N_j^{p_j-1} E_j\right) = \left(0,\dots,0,(\alpha\tau_{j,1}),0,\dots,0\right),$$

with the nonzero entry in position (j, p_j), and the left-hand side of the last equality in (16) becomes (ατ_{j,1})(τ̄_{j,p_j}1). Hence, we can rewrite this as

$$(\alpha\tau_{j,1})(\bar\tau_{j,p_j}\mathbf 1) = 0, \qquad (17)$$

so that either ατ_{j,1} = 0 or τ̄_{j,p_j}1 = 0. There are three cases which must be considered: τ̄_{j,p_j}1 ≠ 0, ατ_{j,1} ≠ 0, and the case where both ατ_{j,1} = 0 and τ̄_{j,p_j}1 = 0.

CASE 1. When τ̄_{j,p_j}1 ≠ 0, we have ατ_{j,1} = 0, and by considering v = p_j − 1 in (16), we see that

$$(\alpha\tau_{j,1})(\bar\tau_{j,p_j-1}\mathbf 1) + (\alpha\tau_{j,2})(\bar\tau_{j,p_j}\mathbf 1) = 0,$$

and using the fact that ατ_{j,1} = 0 and τ̄_{j,p_j}1 ≠ 0, we get ατ_{j,2} = 0. This procedure is then repeated for v = p_j − 2 back to v = 1 in (16) to get, for each j ∈ {2,3,…,g},

$$\alpha\tau_{j,v} = 0, \quad\text{for each } v \in \{1,2,\dots,p_j\}.$$

CASE 2. When ατ_{j,1} ≠ 0, we have τ̄_{j,p_j}1 = 0, and by systematically stepping backwards through the equalities in (16) as before, we find that for each j ∈ {2,3,…,g},

$$\bar\tau_{j,v}\mathbf 1 = 0, \quad\text{for each } v \in \{1,\dots,p_j\}.$$
CASE 3. In the case when both ατ_{j,1} = 0 and τ̄_{j,p_j}1 = 0, the choice of v = p_j − 1 in (16) gives

$$(\alpha\tau_{j,1})(\bar\tau_{j,p_j-1}\mathbf 1) + (\alpha\tau_{j,2})(\bar\tau_{j,p_j}\mathbf 1) = 0,$$

which yields no further information. Further, considering v = p_j − 2 in (16), we see that

$$(\alpha\tau_{j,1})(\bar\tau_{j,p_j-2}\mathbf 1) + (\alpha\tau_{j,2})(\bar\tau_{j,p_j-1}\mathbf 1) + (\alpha\tau_{j,3})(\bar\tau_{j,p_j}\mathbf 1) = 0,$$

which reduces to

$$(\alpha\tau_{j,2})(\bar\tau_{j,p_j-1}\mathbf 1) = 0,$$

so that we get ατ_{j,2} = 0 or τ̄_{j,p_j−1}1 = 0. This is a similar equation to that in equation (17), so, repeating the argument, we deduce that for each j ∈ {2,3,…,g},

$$\alpha\tau_{j,v} = 0 \quad\text{or}\quad \bar\tau_{j,v}\mathbf 1 = 0, \quad\text{for each } v \in \{1,\dots,p_j\}. \qquad\blacksquare$$
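Corollary 2.2 suggests a direct numerical test when Q₀ has distinct eigenvalues. The sketch below, which is illustrative rather than from the paper, uses a hypothetical symmetric 2 × 2 generator; it checks, for each subdominant eigenvalue, whether αr_j = 0 or l_j1 = 0, and cross-checks the survival function against e^{−λt} with λ = −λ₁. The matrix, tolerances, and ordering conventions are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import expm, eig

# Hypothetical irreducible PH generator with distinct eigenvalues -1 and -3.
Q0 = np.array([[-2.0, 1.0],
               [1.0, -2.0]])
alpha = np.array([0.3, 0.7])
one = np.ones(2)

evals, right = eig(Q0)            # columns of `right` are right eigenvectors
left = np.linalg.inv(right)       # rows of `left` are the matching left ones
order = np.argsort(-evals.real)   # put lambda_1 (maximal real part) first
evals = evals[order]
right = right[:, order]
left = left[order, :]

lam = -evals[0].real              # candidate exponential rate, lam = -lambda_1
# Corollary 2.2 test: for each j >= 2, alpha r_j = 0 or l_j 1 = 0.
is_exponential = all(
    abs(alpha @ right[:, j]) < 1e-12 or abs(left[j, :] @ one) < 1e-12
    for j in range(1, len(evals))
)

# Cross-check: the survival function alpha exp(Q0 t) 1 should equal e^{-lam t}.
t = 1.3
survival = (alpha @ expm(Q0 * t) @ one).real
```

Here every row of Q₀ sums to −1, so l₂1 = 0 and the corollary's condition holds for any α; the distribution is Exp(1) even though the representation is two-phase.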
2.1.2. A PH-renewal process

For the special case of a PH-renewal process, we have Q₁ = −Q₀1α, so that for all t_k ∈ [0,∞), equation (6) becomes

$$\frac{-\pi e^{Q_0 t_1} Q_0\mathbf 1}{\pi e^{Q_0 t_1}\mathbf 1} = \lambda, \quad\text{for } k = 1, \qquad (18)$$

and

$$\frac{\left(-\pi e^{Q_0 t_1} Q_0\mathbf 1\right)\left(-\alpha e^{Q_0 t_2} Q_0\mathbf 1\right)\cdots\left(-\alpha e^{Q_0 t_k} Q_0\mathbf 1\right)}{\left(-\pi e^{Q_0 t_1} Q_0\mathbf 1\right)\left(-\alpha e^{Q_0 t_2} Q_0\mathbf 1\right)\cdots\left(\alpha e^{Q_0 t_k}\mathbf 1\right)} = \frac{-\alpha e^{Q_0 t_k} Q_0\mathbf 1}{\alpha e^{Q_0 t_k}\mathbf 1} = \lambda, \quad\text{for } k > 1. \qquad (19)$$

THEOREM 2.3. A stationary PH-renewal process (α, Q₀) is a Poisson process of rate λ > 0, if and only if λ = −λ₁ and, for all (j,v) ∈ {2,…,s} × {1,…,P_j},

$$\alpha T \left(\sum_{i\in A_j} E_i N_i^{v-1} E_i\right) T^{-1}\mathbf 1 = 0.$$

PROOF. Using equations (11) and (12), equation (19) can be rearranged to get

$$\alpha T \left[e^{\lambda_1 t}(\lambda_1+\lambda)E_1 + \sum_{j=2}^{s}\sum_{i\in A_j}\sum_{v=1}^{p_i} \frac{t^{v-1}}{(v-1)!}\, e^{\lambda_j t}\, E_i N_i^{v-1}E_i \left[(\lambda_j+\lambda)E_i + N_i\right]\right] T^{-1}\mathbf 1 = 0.$$
Now, τ₁ and τ̄₁ are positive since Q₀ is irreducible, so ατ₁τ̄₁1 > 0. Therefore, as e^{λ_jt} ≠ 0 for j ∈ {1,2,3,…,s} for all t ∈ [0,∞), it must be that λ = −λ₁. Then, by the same argument as for Theorem 2.1, it can be seen that for each j ∈ {2,3,…,s},

$$\alpha T \sum_{i\in A_j}\left(E_i N_i^{v-1} E_i\right)\left[(\lambda_j - \lambda_1)E_i + N_i\right] T^{-1}\mathbf 1 = 0, \quad\text{for each } v\in\{1,2,3,\dots,P_j\}. \qquad (20)$$

Now consider equation (18). Note that, as we are considering the stationary process, the initial distribution π = ν (the stationary distribution admitted by the PH-renewal process). As Q₀ and e^{Q₀t} commute, the necessary condition λ = −λ₁ yields

$$-\nu Q_0 e^{Q_0 t}\mathbf 1 = -\lambda_1\,\nu e^{Q_0 t}\mathbf 1, \quad\text{for all } t\in[0,\infty). \qquad (21)$$
The next step is to establish a relationship between the stationary distribution ν and the renewal probability vector α. Consider

$$\nu Q = \nu\left(Q_0 - Q_0\mathbf 1\alpha\right) = 0,$$

from which it can be seen that

$$\nu Q_0 = \left(\nu Q_0\mathbf 1\right)\alpha.$$

From the assumption of irreducibility, Q₀ is nonsingular, and so ν = (νQ₀1)αQ₀⁻¹. Note that (νQ₀1) is a scalar quantity, so we can rearrange equation (21) to get

$$\frac{-\alpha e^{Q_0 t}\mathbf 1}{\alpha Q_0^{-1} e^{Q_0 t}\mathbf 1} = -\lambda_1, \quad\text{for all } t\in[0,\infty). \qquad (22)$$

Now, rewrite Q₀⁻¹e^{Q₀t} using equations (11) and (12) to get

$$Q_0^{-1}e^{Q_0 t} = T\left[\frac{e^{\lambda_1 t}}{\lambda_1}E_1 + \sum_{j=2}^{s}\sum_{i\in A_j}\sum_{v=1}^{p_i} \frac{e^{\lambda_j t}}{\lambda_j}\, P(\lambda_j,t,v)\, E_i N_i^{v-1}E_i\right]T^{-1},$$

where

$$P(\lambda_j,t,v) = \sum_{w=1}^{v}\left(\frac{-1}{\lambda_j}\right)^{v-w}\frac{t^{w-1}}{(w-1)!}$$

is a polynomial of degree v − 1 in t. Equation (22) then can be written

$$\alpha T\left[e^{\lambda_1 t}E_1 + \sum_{j=2}^{s}\sum_{i\in A_j}\sum_{v=1}^{p_i}\frac{t^{v-1}}{(v-1)!}e^{\lambda_j t}E_iN_i^{v-1}E_i\right]T^{-1}\mathbf 1 = \lambda_1\,\alpha T\left[\frac{e^{\lambda_1 t}}{\lambda_1}E_1 + \sum_{j=2}^{s}\sum_{i\in A_j}\sum_{v=1}^{p_i}\frac{e^{\lambda_j t}}{\lambda_j}P(\lambda_j,t,v)\,E_iN_i^{v-1}E_i\right]T^{-1}\mathbf 1,$$

to give

$$\alpha T \sum_{j=2}^{s}\sum_{i\in A_j}\sum_{v=1}^{p_i}\left[\frac{t^{v-1}}{(v-1)!} - \frac{\lambda_1}{\lambda_j}P(\lambda_j,t,v)\right]e^{\lambda_j t}\,E_iN_i^{v-1}E_i\, T^{-1}\mathbf 1 = 0. \qquad (23)$$

Consider the polynomial

$$\frac{t^{v-1}}{(v-1)!} - \frac{\lambda_1}{\lambda_j}P(\lambda_j,t,v) = \frac{t^{v-1}}{(v-1)!}\left(1-\frac{\lambda_1}{\lambda_j}\right) - \frac{\lambda_1}{\lambda_j}\sum_{w=1}^{v-1}\left(\frac{-1}{\lambda_j}\right)^{v-w}\frac{t^{w-1}}{(w-1)!}. \qquad (24)$$
Note that λ₁/λ_j ≠ 1 for j ∈ {2,3,…,s}, because Q₀ is an irreducible generator matrix. Hence, in equation (24), for v = 1, 1 − (λ₁/λ_j) ≠ 0, and for v ≥ 2, it can be seen that the term containing t^{v−1} has a nonzero coefficient. Therefore, the polynomial of degree v − 1 is not identically zero for all t ≥ 0. Then, as (23) holds for all t ∈ [0,∞), using the same argument as for Theorem 2.1, we have, for each j ∈ {2,3,…,s},

$$\alpha T \sum_{i\in A_j} E_i N_i^{v-1} E_i\, T^{-1}\mathbf 1 = 0, \quad\text{for each } v\in\{1,\dots,P_j\}. \qquad (25)$$

By substituting (25) into equation (20), we see that it is also satisfied, and hence the proof is complete. ∎
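Condition (19) can also be checked numerically: for a PH-renewal process that is Poisson, the conditional rate −αe^{Q₀t}Q₀1 / αe^{Q₀t}1 must be constant in t. A sketch with a hypothetical generator whose rows sum to a constant (so Q₀1 = −1 and the rate is identically 1); the matrix and evaluation grid are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical PH-renewal parameters (alpha, Q0); the process restarts from
# alpha at every absorption epoch.  Rows of Q0 sum to -1, so Q0 @ 1 = -1
# and the conditional rate below should be identically 1.
Q0 = np.array([[-2.0, 1.0],
               [1.0, -2.0]])
alpha = np.array([0.3, 0.7])
one = np.ones(2)

def conditional_rate(t):
    """-alpha exp(Q0 t) Q0 1 / (alpha exp(Q0 t) 1), cf. (18) and (19)."""
    e = expm(Q0 * t)
    return -(alpha @ e @ Q0 @ one) / (alpha @ e @ one)

rates = [conditional_rate(t) for t in (0.0, 0.5, 1.0, 2.5)]
```

A constant rate over a grid of t values is necessary but of course not a proof; the theorem reduces the infinitely many conditions to the algebraic ones above.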
COROLLARY 2.4. A stationary PH-renewal process (α, Q₀), where the Jordan canonical form has distinct eigenvalues in each Jordan block, is a Poisson process of rate λ > 0, if and only if λ = −λ₁ and, for all j ∈ {2,…,g},

$$\alpha\tau_{j,v} = 0 \quad\text{or}\quad \bar\tau_{j,v}\mathbf 1 = 0, \quad\text{for each } v\in\{1,\dots,p_j\}.$$

PROOF. Proceeding in the same fashion as before, taking up the story from equation (25), written for the case where each Jordan block has a distinct eigenvalue on its diagonal, so that for each j ∈ {2,3,…,g},

$$(v=1)\qquad \alpha T\left(E_j\right)T^{-1}\mathbf 1 = 0,$$
$$(v=2,\dots,p_j)\qquad \alpha T\left(E_jN_j^{v-1}E_j\right)T^{-1}\mathbf 1 = 0, \qquad (26)$$

which is the same set of equalities as (16), and the result follows as in the previous proof of Corollary 2.2. ∎

2.1.3. General MAPs

The proofs for the PH-renewal process equivalence were made much simpler because of the rank-one nature of the matrix parameter Q₁ = −Q₀1α. This caused the infinite number of conditions in (6) to collapse to the two simple conditions (18) and (19). The same situation does not generally apply here, because of the greater generality allowed for the matrix parameter Q₁. This means we have a more complex product form for k ≥ 2.
THEOREM 2.5. A stationary general MAP is Poisson of rate λ > 0, if and only if λ = −λ₁ and

$$\nu \prod_{n=1}^{k-1}\left[T \sum_{i\in A_{j(n)}} E_i N_i^{v(n)-1} E_i\, T^{-1} Q_1\right] T \sum_{i\in A_{j(k)}} E_i N_i^{v(k)-1} E_i\, T^{-1}\mathbf 1 = 0, \qquad (27)$$

for all k ∈ Z⁺ with (j(k), v(k)) ∈ {2,…,s} × {1,…,P_{j(k)}} and n ∈ {1,2,…,k−1} with (j(n), v(n)) ∈ {1,…,s} × {1,…,P_{j(n)}}.

PROOF. We will consider (6), first checking equivalence for the initial time interval t = t₁, and then for the subsequent time intervals t_k for k ∈ {2,3,…}. Rearranging equation (6) for the case k = 1, and using the fact that Q₀1 = −Q₁1, we get

$$\nu e^{Q_0 t_1}\left(Q_0\mathbf 1 + \lambda\mathbf 1\right) = 0.$$

Using equations (11) and (12), this can be rewritten as

$$\nu T\left[e^{\lambda_1 t_1}(\lambda_1+\lambda)E_1 + \sum_{j=2}^{s}\sum_{i\in A_j}\sum_{v=1}^{p_i}\frac{t_1^{v-1}}{(v-1)!}\,e^{\lambda_j t_1}\,E_iN_i^{v-1}E_i\left[(\lambda_j+\lambda)E_i + N_i\right]\right]T^{-1}\mathbf 1 = 0, \qquad (28)$$
which, by the same reasoning as for the proof of Theorem 2.1, yields the necessary condition that λ = −λ₁ and, for all j ∈ {2,3,…,s},

$$\nu T \sum_{i\in A_j} E_i N_i^{v-1} E_i\left[(\lambda_j - \lambda_1)E_i + N_i\right] T^{-1}\mathbf 1 = 0, \quad\text{for all } v\in\{1,2,3,\dots,P_j\}.$$

The nilpotent matrices N_i are such that, for each j ∈ {2,3,…,s}, this equation can be rewritten as

$$(v = P_j)\qquad \nu T \sum_{i\in A_j} E_i N_i^{P_j-1} E_i\,(\lambda_j-\lambda_1)\,T^{-1}\mathbf 1 = 0,$$

$$(v = P_j-1)\qquad \nu T \sum_{i\in A_j} \left[E_i N_i^{P_j-2} E_i\,(\lambda_j-\lambda_1) + E_i N_i^{P_j-1} E_i\right] T^{-1}\mathbf 1 = 0, \qquad (29)$$

$$\vdots$$

$$(v = 1)\qquad \nu T \sum_{i\in A_j} \left[E_i(\lambda_j-\lambda_1) + E_i N_i E_i\right] T^{-1}\mathbf 1 = 0.$$

Therefore, noting that λ_j − λ₁ ≠ 0 for all j ∈ {2,3,…,s}, we have, for all (j,v) ∈ {2,3,…,s} × {1,2,3,…,P_j}, that

$$\nu T \sum_{i\in A_j} E_i N_i^{v-1} E_i\, T^{-1}\mathbf 1 = 0. \qquad (30)$$
Equation (6), for the case k ≥ 2, can be rearranged, again using the fact that Q₀1 = −Q₁1, to get

$$\left(\nu \prod_{n=1}^{k-1} e^{Q_0 t_n} Q_1\right) e^{Q_0 t_k}\left(Q_0\mathbf 1 + \lambda\mathbf 1\right) = 0.$$

Using equations (11) and (12), with the above necessary condition that λ = −λ₁, it follows that

$$\Pi\, S = 0,$$

where

$$\Pi = \nu \prod_{n=1}^{k-1}\left[T \sum_{j(n)=1}^{s}\sum_{i\in A_{j(n)}}\sum_{v(n)=1}^{p_i} \frac{t_n^{v(n)-1}}{(v(n)-1)!}\, e^{\lambda_{j(n)} t_n}\, E_i N_i^{v(n)-1} E_i\, T^{-1} Q_1\right]$$

and

$$S = T\sum_{j(k)=1}^{s}\sum_{i\in A_{j(k)}}\sum_{v(k)=1}^{p_i} \frac{t_k^{v(k)-1}}{(v(k)-1)!}\, e^{\lambda_{j(k)} t_k}\, E_i N_i^{v(k)-1} E_i \left[(\lambda_{j(k)} - \lambda_1)E_i + N_i\right] T^{-1}\mathbf 1,$$

for all k ∈ Z⁺ and all t₁,…,t_k ∈ [0,∞). The functions

$$t_\ell^{v(\ell)-1}\, e^{\lambda_{j(\ell)} t_\ell},$$

for any choice of v(ℓ) ∈ {1,…,P_{j(ℓ)}} and j(ℓ) ∈ {1,2,…,s}, are clearly linearly independent for any choice of ℓ ∈ {1,2,…,k}. This is because the variables t_ℓ for ℓ ∈ {1,2,…,k} are independent and, for each ℓ, the λ_{j(ℓ)} are distinct for each j(ℓ) ∈ {1,2,…,s}; this follows by the independence argument presented in the proof of Theorem 2.1. We also note that

$$\lambda_{j(k)} - \lambda_1 \ne 0, \quad\text{for all } j(k)\in\{2,3,\dots,s\},$$

and

$$e^{\lambda_1 t_\ell} > 0, \quad\text{for all } t_\ell\in[0,\infty) \text{ and } \ell\in\{1,2,\dots,k\}.$$

Using these facts, the coefficient terms of the functions
$$\prod_{\ell=1}^{k} t_\ell^{v(\ell)-1}\, e^{\lambda_{j(\ell)} t_\ell}$$

in ΠS must be zero. That is,

$$\Gamma\,\Omega = 0,$$

where

$$\Gamma = \nu \prod_{n=1}^{k-1}\left[T \sum_{i\in A_{j(n)}} E_i N_i^{v(n)-1} E_i\, T^{-1} Q_1\right]$$

and

$$\Omega = T \sum_{i\in A_{j(k)}} E_i N_i^{v(k)-1} E_i \left[(\lambda_{j(k)} - \lambda_1)E_i + N_i\right] T^{-1}\mathbf 1, \qquad (31)$$

for all k ∈ Z⁺, with (j(k), v(k)) ∈ {2,…,s} × {1,…,P_{j(k)}} and n ∈ {1,2,…,k−1} with (j(n), v(n)) ∈ {1,…,s} × {1,…,P_{j(n)}}. The nilpotent matrices N_i are such that, for each j(k) ∈ {2,3,…,s}, equation (31) can be rewritten for each v(k) as

$$(v(k)=P_{j(k)})\qquad \Gamma\, T \sum_{i\in A_{j(k)}} E_i N_i^{P_{j(k)}-1} E_i\,(\lambda_{j(k)}-\lambda_1)\,T^{-1}\mathbf 1 = 0,$$

$$(v(k)=P_{j(k)}-1)\qquad \Gamma\, T \sum_{i\in A_{j(k)}} \left[E_i N_i^{P_{j(k)}-2} E_i\,(\lambda_{j(k)}-\lambda_1) + E_i N_i^{P_{j(k)}-1} E_i\right]T^{-1}\mathbf 1 = 0,$$

$$\vdots$$

$$(v(k)=1)\qquad \Gamma\, T \sum_{i\in A_{j(k)}} \left[E_i(\lambda_{j(k)}-\lambda_1) + E_i N_i E_i\right]T^{-1}\mathbf 1 = 0.$$
Therefore, we have that

$$\nu \prod_{n=1}^{k-1}\left[T\sum_{i\in A_{j(n)}} E_iN_i^{v(n)-1}E_i\,T^{-1}Q_1\right] T \sum_{i\in A_{j(k)}} E_iN_i^{v(k)-1}E_i\, T^{-1}\mathbf 1 = 0, \qquad (32)$$

for all k ∈ Z⁺ with (j(k), v(k)) ∈ {2,…,s} × {1,…,P_{j(k)}} and n ∈ {1,2,…,k−1} with (j(n), v(n)) ∈ {1,…,s} × {1,…,P_{j(n)}}. ∎

COROLLARY 2.6. A stationary general MAP, where the Jordan canonical form has distinct eigenvalues in each Jordan block, is Poisson of rate λ > 0, if and only if λ = −λ₁,

$$\nu\tau_{j,v} = 0, \quad\text{for all } v\in\{1,2,3,\dots,p_j\}, \qquad (33)$$

or

$$\bar\tau_{j,v}\mathbf 1 = 0, \quad\text{for all } v\in\{1,2,3,\dots,p_j\}, \qquad (34)$$

for all j ∈ {2,…,g}, and for k ≥ 2, we have

$$\nu\prod_{n=1}^{k-1}\left[T E_{j(n)} N_{j(n)}^{v(n)-1} E_{j(n)} T^{-1} Q_1\right] T E_{j(k)} N_{j(k)}^{v(k)-1} E_{j(k)} T^{-1}\mathbf 1 = 0, \qquad (35)$$

for all (j(k), v(k)) ∈ {2,…,s} × {1,…,p_{j(k)}} and n ∈ {1,2,…,k−1} with (j(n), v(n)) ∈ {1,…,s} × {1,…,p_{j(n)}}.
PROOF. We will consider the appropriate equations from the proof of Theorem 2.5, following the same method of approach, while noting that each A_j has only one element in this case. For k = 1, we get the necessary condition from equation (28) that

$$\lambda = -\lambda_1, \qquad (36)$$

and from equation (30), for each j ∈ {2,3,…,g}, we get

$$(v=1)\qquad \nu T\left(E_j\right)T^{-1}\mathbf 1 = 0,$$
$$(v=2,\dots,p_j)\qquad \nu T\left(E_jN_j^{v-1}E_j\right)T^{-1}\mathbf 1 = 0.$$

These are similar equalities to (16), so that for all j ∈ {2,3,…,g}, we have

$$\nu\tau_{j,v} = 0, \quad\text{for all } v\in\{1,2,3,\dots,p_j\},$$

or

$$\bar\tau_{j,v}\mathbf 1 = 0, \quad\text{for all } v\in\{1,2,3,\dots,p_j\}.$$

When k ≥ 2, we have from equation (32) that

$$\nu\prod_{n=1}^{k-1}\left[T E_{j(n)}N_{j(n)}^{v(n)-1}E_{j(n)}T^{-1}Q_1\right] T E_{j(k)}N_{j(k)}^{v(k)-1}E_{j(k)}T^{-1}\mathbf 1 = 0, \qquad (37)$$

for all (j(k), v(k)) ∈ {2,…,s} × {1,…,p_{j(k)}} and n ∈ {1,2,…,k−1} with (j(n), v(n)) ∈ {1,…,s} × {1,…,p_{j(n)}}. ∎
2.2. Special Case of Diagonalisable Q₀

If the m × m matrix Q₀ is diagonalisable, then it has m independent eigenvectors and may be written in spectral form as

$$Q_0 = \sum_{j=1}^{m} \lambda_j\, r_j l_j, \qquad (38)$$

where λ_j are the eigenvalues of Q₀ with corresponding left eigenvectors l_j and corresponding right eigenvectors r_j. If there are s ≤ m distinct eigenvalues, let A_j, for j = 1,2,…,s, contain the list of eigenvalues which are identically λ_j. Then, Q₀ may also be written

$$Q_0 = \sum_{j=1}^{s}\sum_{i\in A_j} \lambda_j\, r_i l_i, \qquad (39)$$

where r_i is the right eigenvector and l_i is the left eigenvector corresponding to the ith eigenvalue in set A_j. Recall that, because Q₀ is assumed to be an irreducible matrix, we have that A₁ = {1}, since λ₁ is the eigenvalue of Q₀ of maximal real part, which has multiplicity one (see [5, Theorem 2.6]). The proofs of all of the subsequent theorems and corollaries follow directly from their counterparts in the general case, by rewriting the results such that each Jordan block is of order 1.
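The spectral form (38) is straightforward to verify numerically. The sketch below uses a hypothetical diagonalisable matrix standing in for Q₀, takes the left eigenvectors as the rows of R⁻¹ (so that l_j r_j = 1 automatically), and reconstructs the matrix from its eigenvalues and eigenvectors:

```python
import numpy as np

# Hypothetical diagonalisable (here symmetric, hence certainly diagonalisable)
# matrix standing in for Q0; checking the spectral form (38):
#   Q0 = sum_j lambda_j r_j l_j,  with l_j r_j = 1.
Q0 = np.array([[-3.0, 1.0, 1.0],
               [1.0, -4.0, 1.0],
               [1.0, 1.0, -3.0]])
evals, R = np.linalg.eig(Q0)   # columns of R are right eigenvectors r_j
L = np.linalg.inv(R)           # rows of L are left eigenvectors l_j

reconstructed = sum(evals[j] * np.outer(R[:, j], L[j, :]) for j in range(3))
err = np.max(np.abs(reconstructed - Q0))
```

The normalisation l_j r_j = 1 built into L = R⁻¹ is exactly the one the spectral form (38) requires.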
2.2.1. A PH-random variable

THEOREM 2.7. A PH(α, Q₀) random variable, where Q₀ is diagonalisable, is negative exponential with parameter λ > 0, if and only if λ = −λ₁ and

$$\alpha \sum_{i\in A_j} r_i l_i\,\mathbf 1 = 0, \quad\text{for all } j\in\{2,3,\dots,s\}.$$

COROLLARY 2.8. A PH(α, Q₀) random variable, where Q₀ has distinct eigenvalues, is negative exponential with parameter λ > 0, if and only if λ = −λ₁ and

$$\alpha r_j = 0 \quad\text{or}\quad l_j\mathbf 1 = 0, \quad\text{for all } j\in\{2,3,\dots,m\}.$$

2.2.2. A PH-renewal process

THEOREM 2.9. A stationary PH-renewal process (α, Q₀), where Q₀ is diagonalisable, is Poisson of rate λ > 0, if and only if λ = −λ₁ and

$$\alpha \sum_{i\in A_j} r_i l_i\,\mathbf 1 = 0, \quad\text{for all } j\in\{2,3,\dots,s\}.$$

COROLLARY 2.10. A stationary PH-renewal process (α, Q₀), where Q₀ has distinct eigenvalues, is Poisson of rate λ > 0, if and only if λ = −λ₁ and

$$\alpha r_j = 0 \quad\text{or}\quad l_j\mathbf 1 = 0, \quad\text{for all } j\in\{2,3,\dots,m\}.$$

2.2.3. General MAPs

THEOREM 2.11. A stationary general MAP, where Q₀ is diagonalisable, is Poisson of rate λ > 0, if and only if λ = −λ₁ and

$$\nu\prod_{n=1}^{k-1}\left[\left(\sum_{i\in A_{j(n)}} r_i l_i\right)Q_1\right]\left(\sum_{i\in A_{j(k)}} r_i l_i\right)\mathbf 1 = 0,$$

for all k ∈ Z⁺ and (j(1), j(2), …, j(k−1), j(k)) ∈ {1,…,s}^{k−1} × {2,…,s}.

For the following corollary, the next definitions are useful:

$$I_R(0) \stackrel{\text{def}}{=} \{j\ge 1 : \nu r_j \ne 0\}, \qquad I_L(0) \stackrel{\text{def}}{=} \{j\ge 2 : l_j\mathbf 1 \ne 0\}, \qquad (40)$$

and for all i ≥ 1,

$$I_R(i) \stackrel{\text{def}}{=} \{x : l_j Q_1 r_x \ne 0, \text{ for some } j\in I_R(i-1)\}, \qquad (41)$$

$$I_L(i) \stackrel{\text{def}}{=} \{y : l_y Q_1 r_j \ne 0, \text{ for some } j\in I_L(i-1)\}. \qquad (42)$$
COROLLARY 2.12. A stationary general MAP, where Q₀ has distinct eigenvalues, is Poisson of rate λ > 0, if and only if λ = −λ₁,

$$\nu r_j = 0 \quad\text{or}\quad l_j\mathbf 1 = 0, \quad\text{for all } j\in\{2,3,\dots,m\}, \qquad (43)$$
and for all i ≥ 0 and x ∈ I_R(i), y ∈ I_L(i), the following hold:

$$l_y Q_1 r_x = 0, \qquad (44)$$

and for all z ∉ {I_L(i) ∪ I_R(i)}, either

$$l_z Q_1 r_x = 0 \quad\text{or}\quad l_y Q_1 r_z = 0. \qquad (45)$$

PROOF. The requirement that λ = −λ₁ and (43) follow directly from Corollary 2.6. We may rewrite equation (37) for the case where we have distinct eigenvalues as

$$\nu\left[\prod_{n=1}^{k-1} r_{j(n)}\, l_{j(n)} Q_1\right] r_{j(k)}\, l_{j(k)}\mathbf 1 = 0, \qquad (46)$$

for all j(k) ∈ {2,…,s} and n ∈ {1,2,…,k−1} with j(n) ∈ {1,…,s}.

Consider k = 2 in (46), which yields

$$l_{j(1)}Q_1 r_{j(2)} = 0, \quad\text{for all } j(1)\in I_R(0),\ j(2)\in I_L(0).$$

Now, considering k = 3 in (46), we can see that we require

$$l_{j(1)}Q_1r_{j(2)}\; l_{j(2)}Q_1r_{j(3)} = 0,$$

for all j(1) ∈ I_R(0), j(3) ∈ I_L(0). This, along with the deduction from k = 2, implies that either

$$l_{j(1)}Q_1r_{j(2)} = 0 \quad\text{or}\quad l_{j(2)}Q_1r_{j(3)} = 0,$$

for all j(1) ∈ I_R(0), j(3) ∈ I_L(0), and j(2) ∉ {I_R(0) ∪ I_L(0)}.

If we then consider k = 4 in (46), we can see that we require

$$l_{j(1)}Q_1r_{j(2)}\; l_{j(2)}Q_1r_{j(3)}\; l_{j(3)}Q_1r_{j(4)} = 0,$$

for all j(1) ∈ I_R(0) and j(4) ∈ I_L(0). Using the above results and the definitions in (41) and (42), we see that

$$l_{j(1)}Q_1r_{j(2)} \ne 0, \quad\text{for all } j(2)\in I_R(1), \qquad\text{and}\qquad l_{j(3)}Q_1r_{j(4)} \ne 0, \quad\text{for all } j(3)\in I_L(1),$$

which implies that we must have

$$l_{j(2)}Q_1r_{j(3)} = 0, \quad\text{for all } j(2)\in I_R(1),\ j(3)\in I_L(1).$$

For k = 5, we have

$$l_{j(1)}Q_1r_{j(2)}\; l_{j(2)}Q_1r_{j(3)}\; l_{j(3)}Q_1r_{j(4)}\; l_{j(4)}Q_1r_{j(5)} = 0,$$

for all j(1) ∈ I_R(0) and j(5) ∈ I_L(0). Again, using the previous result, it can be seen that this reduces to

$$l_{j(2)}Q_1r_{j(3)}\; l_{j(3)}Q_1r_{j(4)} = 0,$$

for all j(2) ∈ I_R(1) and j(4) ∈ I_L(1). These conditions are similar to those for the case k = 3 and in fact similarly imply that either

$$l_{j(2)}Q_1r_{j(3)} = 0 \quad\text{or}\quad l_{j(3)}Q_1r_{j(4)} = 0,$$

for all j(2) ∈ I_R(1), j(4) ∈ I_L(1), and j(3) ∉ {I_R(1) ∪ I_L(1)}.

By further considering k = 6, 7, …, the result can easily be established, as the conditions are repeated. ∎
It is not clear from the statement of Corollary 2.12 whether there are finitely or infinitely many conditions in (44) and (45). However, there can only be a finite number of unique conditions, since there is a finite number of eigenvectors for any given finite matrix Q₀. Furthermore, if there exists a K such that I_L(K) = I_R(K) = ∅, then there will be no conditions in equations (44) and (45) for i > K. In fact, if there exists a K′ such that only one of I_L(K′) = ∅ or I_R(K′) = ∅, there will be no conditions in (44) for i > K′, and by considering (46), it is easy to see that there are no new conditions in (45) for i > K′.
At this point, it is worth noting that (44) and (45) are not a consequence of λ = −λ₁ and (43). For example, let us consider the following non-Poisson example which satisfies λ = −λ₁ and (43):

[the matrices Q₀ and Q₁ of equation (47) did not survive reproduction here]

It can easily be verified that I_R(0) = {1,2} and I_L(0) = {3}, while l₁Q₁r₃ ≠ 0 and l₂Q₁r₃ ≠ 0.

The following three MAPs are examples which show three possibilities for Corollary 2.12. Under stationary conditions, they are complicated descriptions of a Poisson process of rate 1.

[matrices of the first example not reproduced here]

Here, ν = (2/5, 1/5, 2/5), which is the left eigenvector of the matrix Q₀ corresponding to λ₁ = −1.

[matrices of the second example not reproduced here]

Here, ν = (9/20, 1/5, 7/20), and 1 is the right eigenvector of the matrix Q₀ corresponding to λ₁ = −1.

[matrices of the third example not reproduced here]

Here, ν = (1/6, 1/6, 2/3), and neither ν nor 1 is an eigenvector corresponding to λ₁ = −1, but νr₂ = 0 and l₃1 = 0. Hence, I_R(0) = {1,3} and I_L(0) = {2}, and so, for the point process to be Poisson, we require that both l₁Q₁r₂ = 0 and l₃Q₁r₂ = 0. These conditions can be shown to hold, and as I_L(1) = ∅, we are finished.
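Finally, condition (6) itself can be evaluated numerically for small k, which gives a crude screening test for examples like those above. The sketch below, with a hypothetical two-phase MAP in which every phase has observed-transition rate 2 (so Q₁1 = 2·1 and the process is Poisson of rate 2 for any initial distribution), evaluates the left-hand side of (6) for k = 2 over a grid of times:

```python
import numpy as np
from scipy.linalg import expm
from itertools import product

def map_rate(pi, Q0, Q1, times):
    """Left-hand side of (6): the conditional event rate of the MAP after
    observed jumps separated by times t_1, ..., t_k."""
    v = pi.copy()
    for t in times[:-1]:
        v = v @ expm(Q0 * t) @ Q1
    v = v @ expm(Q0 * times[-1])
    one = np.ones(len(pi))
    return (v @ Q1 @ one) / (v @ one)

# Hypothetical two-phase MAP: every row of Q1 sums to 2 (Q1 @ ones = 2 * ones),
# so the observed process is Poisson of rate 2 whatever pi is.
Q1 = np.array([[1.0, 1.0],
               [0.0, 2.0]])
Q0 = np.array([[-3.0, 1.0],
               [2.0, -4.0]])   # Q = Q0 + Q1 is a conservative generator
pi = np.array([0.4, 0.6])

rates = [map_rate(pi, Q0, Q1, ts)
         for ts in product((0.2, 0.7, 1.5), repeat=2)]
```

Evaluating (6) over a finite grid is, of course, only a necessary check; the point of Section 2 is precisely to replace the infinitely many conditions in (6) with finitely many algebraic ones.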
REFERENCES
1. C. Olivier and J. Walrand, On the existence of finite-dimensional filters for Markov-modulated traffic, Journal of Applied Probability 31, 515-525 (1994).
2. N.G. Bean, D.A. Green and P.G. Taylor, The output process of an MMPP/M/1 queue, Journal of Applied Probability 35 (4) (1998).
3. P.J. Burke, The output of a queueing system, Operations Research 4, 135-165 (1956).
4. J. Walrand, An Introduction to Queueing Networks, Prentice-Hall, Englewood Cliffs, NJ, (1988).
5. E. Seneta, Non-negative Matrices and Markov Chains, Springer-Verlag, New York, (1981).
6. F.R. Gantmacher, The Theory of Matrices, Volume 1, Chelsea Publishing Company, New York, (1959).
7. B. Noble, Applied Linear Algebra, Prentice-Hall, Englewood Cliffs, NJ, (1969).
8. M.F. Neuts, Matrix-Geometric Solutions in Stochastic Models, The Johns Hopkins University Press, Baltimore, (1981).
9. E.A. Coddington, An Introduction to Ordinary Differential Equations, Prentice-Hall, Englewood Cliffs, NJ, (1961).
10. P.K. Pollett and P.G. Taylor, On the problem of establishing the existence of stationary distributions of continuous-time Markov chains, Probability in the Engineering and Informational Sciences 7, 529-543 (1993).