Statistics and Probability Letters 82 (2012) 40–48
The first-passage times of phase semi-Markov processes

Xuan Zhang*, Zhenting Hou

School of Mathematics, Central South University, Changsha, Hunan, PR China

Article history: Received 29 November 2010; Received in revised form 29 August 2011; Accepted 29 August 2011; Available online 7 September 2011.
Keywords: Generalized inverse; Phase semi-Markov process; Fundamental matrix.
Abstract

In this paper, we consider a class of semi-Markov processes, known as phase semi-Markov processes, which can be regarded as an extension of Markov processes in which the times between transitions are phase-type random variables. Based on the theory of generalized inverses, we derive expressions for the moments of the first-passage time distributions, generalizing the results obtained by Kemeny and Snell (1960) for Markov chains.

© 2011 Elsevier B.V. All rights reserved.
1. Introduction

Markov processes (MPs) have been investigated extensively because of their wide applications in physics, economics, biology and so forth (Ross, 1996; Kemeny and Snell, 1960). However, Markov models have many limitations in applications owing to the restriction of exponentially distributed sojourn times. To overcome these drawbacks, Lévy (1954) and Smith (1955) proposed a class of stepped processes, called semi-Markov processes (SMPs). For a SMP, the Markovian property is required only at the jump points, so the distribution of the sojourn time at each state can be arbitrary (Howard, 1971). Because transform-based treatments quickly become unwieldy, semi-Markovian models have proved difficult to study directly. There is, however, great merit in considering a special case of SMPs in which we impose the mild restriction that the sojourn time at each state is of phase-type. Such processes are referred to as phase semi-Markov processes; they can be considered as an extension of MPs in which the times between transitions are phase-type random variables. It is worth noting that the phase-type distribution is a generalization of the exponential distribution that preserves much of its analytic tractability, and it has been used in a wide range of stochastic modelling applications in areas as diverse as reliability theory, queueing theory and biostatistics (Neuts, 1981; Fackrell, 2009). The phase semi-Markov process was first introduced to study the problem of stochastic stability for a class of nonlinear stochastic systems with semi-Markovian jump parameters; for more detail on this topic readers are referred to Hou et al. (2006, 2009).

The first-passage times of SMPs have drawn significant attention in the past few decades (see, e.g., Çinlar, 1975; Feller, 1964; Hunter, 1992 and the references therein). Using the approach of generalized inverses, matrix expressions for the first two moments of the first-passage times of a SMP were presented by Hunter (1982). The concept of the fundamental matrix was introduced by Kemeny and Snell (1960) and has found several applications in the theory of MPs (see, e.g., Kemeny and Snell, 1960; Yao, 1985; Keilson, 1979). Fygenson (1989) extended the notion of the fundamental matrix to SMPs and derived various moment formulae. In addition, Yao (1985) generalized Hunter's approach to continuous-time MPs and derived a simple recursive formula for moments of all orders in terms of the fundamental matrix for Markov chains.

Motivated by the above papers, in this paper we aim to derive expressions for the moments of the first-passage time distributions of phase semi-Markov processes. We first extend the concept of the fundamental matrix to phase semi-Markov
* Corresponding author. E-mail address: [email protected] (X. Zhang).
0167-7152/$ – see front matter © 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.spl.2011.08.021
processes and show that the moment formulae of Fygenson (1989) can be generalized to phase semi-Markov processes. Then, we derive expressions for the moments of the first-passage times of phase semi-Markov processes by making use of generalized inverses, generalizing the results of Kemeny and Snell (1960) obtained for Markov chains.

The paper is organized as follows. Section 2 gives some preliminaries that will be used in the following sections: definitions and basic concepts such as phase-type distributions, generalized inverses, Markov renewal processes and phase semi-Markov processes. Section 3 introduces a fundamental matrix for the phase semi-Markov process and derives moment formulae in terms of this fundamental matrix. In Section 4, the moments of the first-passage time distributions of a phase semi-Markov process are obtained using generalized inverses. In Section 5, we show that, by the supplementary variable technique and simple transformations, a finite phase semi-Markov process can be transformed into a finite Markov chain.

2. Preliminaries

2.1. Phase-type distributions

Definition 2.1. A probability distribution F(·) on [0, ∞) is said to be a phase-type distribution (PH-distribution) with representation (α, T) if it is the distribution of the time until absorption in a finite-state MP on the states {1, 2, . . . , m + 1} with generator

\begin{pmatrix} T & η \\ 0 & 0 \end{pmatrix}
and initial distribution (α, α_{m+1}), where α is a row vector of dimension m, η is a column vector and 0 is a 1 × m vector of zeros. The states {1, 2, . . . , m} are transient, while the state m + 1 is absorbing. The m × m matrix T is non-singular with negative diagonal entries and non-negative off-diagonal entries, and satisfies η + Te = 0, where e denotes a column vector of appropriate dimension with all components equal to one. The dimension m of T is called the order of the PH-distribution. The distribution F(·) is given by F(t) = 1 − α exp(Tt)e for all t ≥ 0, its density function is f(t) = α exp(Tt)η, t > 0, and the kth moment is

m_k = (−1)^k k! α T^{−k} e,   k = 1, 2, . . . .
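For illustration, the following Python sketch builds a small PH representation, invented for the example, and evaluates F(t), the density and the first two moments from the formulas above.

```python
import math
import numpy as np
from scipy.linalg import expm

# An illustrative two-phase PH representation (alpha, T); eta = -T e.
alpha = np.array([0.6, 0.4])
T = np.array([[-3.0, 1.0],
              [ 0.0, -2.0]])
e = np.ones(2)
eta = -T @ e                                  # exit-rate vector

def F(t):                                     # F(t) = 1 - alpha exp(Tt) e
    return 1.0 - alpha @ expm(T * t) @ e

def density(t):                               # f(t) = alpha exp(Tt) eta
    return alpha @ expm(T * t) @ eta

def moment(k):                                # m_k = (-1)^k k! alpha T^{-k} e
    T_inv_k = np.linalg.matrix_power(np.linalg.inv(T), k)
    return (-1) ** k * math.factorial(k) * alpha @ T_inv_k @ e

print(F(1.0), density(1.0), moment(1), moment(2))
```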
To avoid uninteresting complications, we assume that the PH-distributions considered in this paper have no mass at t = 0. Without loss of generality, the matrix T + ηα is assumed to be irreducible; that is, each phase in the MP defined by α and T has a positive probability of being visited before absorption. The representation (α, T) is then said to be irreducible. For more details about PH-distributions see Neuts (1981).

2.2. Generalized inverses

Definition 2.2. A generalized inverse (g-inverse) of a matrix A is any matrix A⁻ such that AA⁻A = A. Note that A⁻ is, in general, not unique, unless A is nonsingular, in which case A⁻ = A^{−1}.

Lemma 2.3 (Lemma 2, Yao, 1985). A necessary and sufficient condition for the equation AX = C to be consistent is that AA⁻C = C, where A⁻ is any g-inverse of A. If the equation is consistent, then its general solution is given by

X = A⁻C + (I − A⁻A)U,   (2.1)

where U is an arbitrary matrix and I is the identity matrix.
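A minimal numerical sketch of Lemma 2.3: the singular matrix A and the right-hand side C below are arbitrary illustrative choices, and the Moore-Penrose pseudoinverse is used as one particular g-inverse.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])           # singular, so A^- is not unique
C = np.array([[1.0],
              [2.0]])                # chosen so that AX = C is consistent

A_minus = np.linalg.pinv(A)          # one particular g-inverse (AA^-A = A)
assert np.allclose(A @ A_minus @ A, A)

# consistency test of Lemma 2.3: A A^- C = C
assert np.allclose(A @ A_minus @ C, C)

# general solution (2.1): X = A^- C + (I - A^- A) U for arbitrary U
U = np.array([[5.0],
              [-7.0]])
X = A_minus @ C + (np.eye(2) - A_minus @ A) @ U
assert np.allclose(A @ X, C)
print(X)
```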
2.3. Markov renewal processes and phase semi-Markov processes

Let E = {1, 2, . . . , m} be a state space and (Ω, F, P) be a probability space. For every n ≥ 0, define the random variables X_n : Ω → E and T_n : Ω → [0, +∞], where X_n and T_n are the state and the time of the nth transition, respectively.

Definition 2.4. The process {(X_n, T_n), n ≥ 0} is called a time-homogeneous Markov renewal process (MRP) if

P(X_{n+1} = j, T_{n+1} − T_n ≤ t | X_0, . . . , X_n, T_0, . . . , T_n) = P(X_{n+1} = j, T_{n+1} − T_n ≤ t | X_n) = Q_{ij}(t) on {X_n = i},

for all n ≥ 0, i, j ∈ E and t ≥ 0.
Definition 2.5. A stochastic process {Y(t), t ≥ 0}, defined by the relation

Y(t) = X_n   for T_n ≤ t < T_{n+1}, t ≥ 0,   (2.2)

is called a SMP associated to the MRP {(X_n, T_n), n ≥ 0}.

It follows that, if {(X_n, T_n)} is a MRP with semi-Markov kernel Q(t) = [Q_{ij}(t)], then {X_n} is a discrete-time Markov chain with one-step transition probabilities

p_{ij} = P(X_{n+1} = j | X_n = i) = Q_{ij}(∞),   i, j ∈ E,   (2.3)
and P = [p_{ij}] is the transition matrix of {X_n}. We shall assume that {X_n} is irreducible and thus has a stationary probability vector. Given that the (n + 1)th transition takes the process from state i to state j, the time between transitions is governed by the distribution function

F_{ij}(t) = P(T_{n+1} − T_n ≤ t | X_n = i, X_{n+1} = j) = Q_{ij}(t) / Q_{ij}(∞),

and, by convention, we let F_{ij}(t) = 1 for t > 0 whenever p_{ij} = 0. It thus follows that Q_{ij}(t) = p_{ij} F_{ij}(t). Let H_i(t) = P(T_{n+1} − T_n ≤ t | X_n = i) be the distribution function of the time until the next transition, given that the process has just entered state i. Then

H_i(t) = \sum_{j=1}^{m} p_{ij} F_{ij}(t) = \sum_{j=1}^{m} Q_{ij}(t).
Let T_{ij} be the duration of the first passage from state i to state j, and let G_{ij}(t) be the distribution function of T_{ij}. Pyke (1961) has shown that G_{ij}(t) satisfies

G_{ij}(t) = Q_{ij}(t) + \sum_{k=1, k≠j}^{m} \int_0^t Q_{ik}(t − u) dG_{kj}(u).   (2.4)
Furthermore, introduce the following notation for moments:

m_{ij}^{(r)} = \int_0^∞ t^r dG_{ij}(t),   µ_{ij}^{(r)} = \int_0^∞ t^r dF_{ij}(t),   µ_i^{(r)} = \int_0^∞ t^r dH_i(t).
For convenience, we write m_{ij} = m_{ij}^{(1)}, µ_{ij}^{(0)} = 1, and µ_{ij} = µ_{ij}^{(1)}.

Definition 2.6. The SMP {Y(t), t ≥ 0} defined in (2.2) is called a phase semi-Markov process (PSMP) if
(a) F_{ij}(t) is independent of j,
(b) F_i(t) ≜ F_{ij}(t) is a PH-distribution.

It follows from the definition of a PSMP that F_i(t) = H_i(t), i.e., F_i(t) is the distribution of the sojourn time of the process at state i, i ∈ E. Denote by (α^{(i)}, T^{(i)}) the m^{(i)}-order representation of F_i(t), i ∈ E. This implies

F_i(t) = 1 − α^{(i)} exp(T^{(i)} t) e,   t ≥ 0,

and the semi-Markov kernel of the PSMP is given by

Q_{ij}(t) = p_{ij} F_i(t) = p_{ij} [1 − α^{(i)} exp(T^{(i)} t) e],   t ≥ 0, i, j ∈ E.

Note that µ_{ij}^{(r)} = µ_i^{(r)}.

Remark 2.7. If F_i(t) is an exponential distribution with parameter λ_i, i.e., α^{(i)} = (1), T^{(i)} = (−λ_i) and Q_{ij}(t) = p_{ij}(1 − e^{−λ_i t}) (i, j ∈ E), then the PSMP becomes a continuous-time MP with state space E and parameters λ_i for the exponential holding times.
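For illustration, the semi-Markov kernel of a PSMP can be assembled directly from P and the per-state PH representations; the two-state data below are invented for the example.

```python
import numpy as np
from scipy.linalg import expm

# Embedded transition matrix and per-state PH sojourn representations (illustrative).
P = np.array([[0.0, 1.0],
              [0.7, 0.3]])
ph = {
    0: (np.array([1.0]), np.array([[-2.0]])),             # exponential(2) sojourn
    1: (np.array([0.5, 0.5]), np.array([[-4.0, 2.0],
                                         [ 0.0, -1.0]])), # two-phase PH sojourn
}

def F(i, t):                       # sojourn distribution F_i(t) = 1 - alpha^(i) exp(T^(i) t) e
    a, T = ph[i]
    return 1.0 - a @ expm(T * t) @ np.ones(len(a))

def Q(i, j, t):                    # semi-Markov kernel Q_ij(t) = p_ij F_i(t)
    return P[i, j] * F(i, t)

mu = np.array([-a @ np.linalg.inv(T) @ np.ones(len(a)) for a, T in ph.values()])
print(Q(0, 1, 0.5), Q(1, 0, 0.5), mu)   # mu_i = -alpha^(i) T^(i)^{-1} e
```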
3. The fundamental matrix for PSMPs

Consider a finite-state PSMP with state space E = {1, 2, . . . , m}, which is regular. Let F_i(t) (i ∈ E) be the distribution function of the sojourn time of the process at state i, and let

Ψ_i(s) = \int_0^∞ e^{−st} dF_i(t)

denote its Laplace-Stieltjes transform.
Let P be the transition matrix of the embedded Markov chain of this PSMP. If L is the limiting matrix of P, then [I − P + L]^{−1} exists, and it is a g-inverse of I − P (see Corollary 3.3.3 of Hunter, 1982). Define

Ψ(s) = diag(Ψ_1(s), Ψ_2(s), . . . , Ψ_m(s))   and   B(s) = (I − P + L)^{−1} Ψ(s).

Kemeny and Snell (1960) referred to the matrix Z = [I − P + L]^{−1} as the fundamental matrix for a regular Markov chain with transition matrix P. Here we extend the notion of the fundamental matrix to PSMPs and show that important quantities for a PSMP can be obtained by using the matrix B(s); we therefore refer to B(s) as the fundamental matrix for the PSMP. Let J denote the matrix with all components equal to 1, and also define

M = [m_{ij}],   M_d = [δ_{ij} m_{ij}],   N = [µ_{ij}],   and   Λ = diag(µ_1, µ_2, . . . , µ_m).

Note that µ_{ij} = µ_i = −α^{(i)} T^{(i)^{−1}} e and N = ΛJ.
Theorem 3.1. The mean first-passage time matrix M satisfies the equation

M = N + P[M − M_d].   (3.1)
Proof. This follows by taking conditional means (refer to Theorem 4.4.4 of Kemeny and Snell, 1960).

Theorem 3.2. Let π = (π_1, π_2, . . . , π_m) be the stationary probability (row) vector of P and let λ = \sum_{i=1}^{m} µ_i π_i. Then

m_{ii} = λ / π_i.   (3.2)
Proof. It is easy to show that LP = L and LN = LΛJ = λJ. Therefore, pre-multiplying both sides of (3.1) by L gives

L M_d = λJ,   (3.3)

and (3.2) follows by taking the diagonal elements of (3.3).
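Numerically, (3.1) can be solved one target column at a time, which also provides a check of (3.2); the embedded chain and mean sojourn times in the sketch below are illustrative.

```python
import numpy as np

P = np.array([[0.0, 0.6, 0.4],
              [0.5, 0.0, 0.5],
              [0.3, 0.7, 0.0]])        # illustrative irreducible embedded chain
mu = np.array([1.5, 0.8, 2.0])         # mean PH sojourn times mu_i

m = P.shape[0]
M = np.zeros((m, m))
for j in range(m):                      # solve (3.1) one target column at a time:
    Pj = P.copy(); Pj[:, j] = 0.0       # m_ij = mu_i + sum_{k != j} p_ik m_kj
    M[:, j] = np.linalg.solve(np.eye(m) - Pj, mu)

# stationary vector of P and lambda = sum_i mu_i pi_i
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))]); pi /= pi.sum()
lam = pi @ mu
print(np.diag(M), lam / pi)             # Theorem 3.2: m_ii = lambda / pi_i
```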
Theorem 3.3. Eq. (3.1) of Theorem 3.1 has a unique solution, which is

M = [λ(I − Z) + B′L − J(B′L − λZ)_d] D,   (3.4)

where B′ = −(d/ds) B(s)|_{s=0} and D is the diagonal matrix with diagonal elements d_{ii} = 1/π_i.
Proof. Suppose that M and M′ are two solutions of (3.1). From (3.2) we have M_d = M′_d. This, together with (3.1), gives M − M′ = P(M − M′). Then, as in the proof of Theorem 4.4.6 of Kemeny and Snell (1960), we have M = M′, so the solution of (3.1) is unique. Finally, it can be verified that M as defined by (3.4) satisfies (3.1).

Let

M^{(2)} = [m_{ij}^{(2)}],   N^{(2)} = [µ_{ij}^{(2)}],   and   H = [p_{ij} µ_{ij}].

Note that µ_{ij}^{(2)} = µ_i^{(2)}.

Theorem 3.4. The matrix M^{(2)} satisfies

M^{(2)} = N^{(2)} + 2H[B′L − J(B′L − λZ)_d − λZ] D + P[M^{(2)} − M_d^{(2)}].   (3.5)
Proof. Taking conditional means gives

m_{ij}^{(2)} = p_{ij} µ_{ij}^{(2)} + \sum_{k≠j} p_{ik} µ_{ik}^{(2)} + 2\sum_{k≠j} p_{ik} µ_{ik} m_{kj} + \sum_{k≠j} p_{ik} m_{kj}^{(2)}
            = µ_i^{(2)} + 2\sum_{k≠j} p_{ik} µ_{ik} m_{kj} + \sum_{k≠j} p_{ik} m_{kj}^{(2)},

or, equivalently,

M^{(2)} = N^{(2)} + 2H[M − M_d] + P[M^{(2)} − M_d^{(2)}].   (3.6)

This, together with (3.4), yields (3.5).
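Once M is known, (3.6) can be solved for M^(2) in the same column-wise fashion as (3.1). A sketch with invented µ_i and µ_i^(2) follows; recall that H has entries p_ij µ_i for a PSMP.

```python
import numpy as np

P = np.array([[0.0, 0.6, 0.4],
              [0.5, 0.0, 0.5],
              [0.3, 0.7, 0.0]])
mu  = np.array([1.5, 0.8, 2.0])          # first sojourn moments mu_i
mu2 = np.array([5.0, 1.4, 9.0])          # second sojourn moments mu_i^(2) (illustrative)
m = P.shape[0]

def column_solve(rhs_cols):
    """Solve X_{.j} = rhs_{.j} + sum_{k != j} p_ik X_{kj} for every column j."""
    X = np.zeros((m, m))
    for j in range(m):
        Pj = P.copy(); Pj[:, j] = 0.0
        X[:, j] = np.linalg.solve(np.eye(m) - Pj, rhs_cols[:, j])
    return X

M  = column_solve(np.tile(mu, (m, 1)).T)         # first moments, from (3.1)
H  = P * mu[:, None]                              # H = [p_ij mu_i]
rhs2 = mu2[:, None] + 2.0 * H @ (M - np.diag(np.diag(M)))
M2 = column_solve(rhs2)                           # second moments, from (3.6)
print(np.diag(M2))
```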
Theorem 3.5.

m_{kk}^{(2)} = \Big[ \sum_i π_i µ_i^{(2)} + 2 \sum_{i=1}^{m} \sum_{j≠k} π_i h_{ij} m_{jk} \Big] / π_k,   (3.7)

where h_{ij} denotes the (i, j)th entry of H.

Proof. This follows by pre-multiplying (3.6) by L and taking the (k, k)th element.
Theorem 3.6. Eq. (3.5) has a unique solution, which is

M^{(2)} = B″J + 2ZH[B′L − J(B′L − λZ)_d − λZ] D − ZP M_d^{(2)} + JR,   (3.8)

where R is a diagonal matrix satisfying

R_d = [I + (ZP)_d] M_d^{(2)} − 2[ZH(B′L − J(B′L)_d + λ(JZ)_d − λZ)]_d D − (B″J)_d.
Proof. The uniqueness proof is the same as that given in Theorem 3.3, and it can then be verified that (3.8) satisfies Eq. (3.5).

4. Solving for the moments of PSMPs using g-inverses

In this section, we derive explicit expressions for M and M^{(2)} using the theory of g-inverses rather than the fundamental matrix. First we show that the results presented in Eqs. (3.1) and (3.6) can be obtained by an alternative proof, as follows.

Lemma 4.1.

m_{ij}^{(r)} = µ_i^{(r)} + \sum_{s=1}^{r} \binom{r}{s} \sum_{k≠j} p_{ik} µ_{ik}^{(r−s)} m_{kj}^{(s)}.   (4.1)
Proof. It follows from (2.4) that

m_{ij}^{(r)} = p_{ij} µ_{ij}^{(r)} + \sum_{k≠j} p_{ik} \int_0^∞ \int_u^∞ x^r d_x F_{ik}(x − u) d_u G_{kj}(u)
            = p_{ij} µ_{ij}^{(r)} + \sum_{k≠j} p_{ik} \int_0^∞ \int_0^∞ (u + t)^r dF_{ik}(t) dG_{kj}(u)
            = p_{ij} µ_{ij}^{(r)} + \sum_{k≠j} p_{ik} \sum_{s=0}^{r} \binom{r}{s} \int_0^∞ \int_0^∞ u^s t^{r−s} dF_{ik}(t) dG_{kj}(u)
            = p_{ij} µ_{ij}^{(r)} + \sum_{s=0}^{r} \binom{r}{s} \sum_{k≠j} p_{ik} µ_{ik}^{(r−s)} m_{kj}^{(s)}
            = \sum_k p_{ik} µ_{ik}^{(r)} + \sum_{s=1}^{r} \binom{r}{s} \sum_{k≠j} p_{ik} µ_{ik}^{(r−s)} m_{kj}^{(s)},

where the last step uses m_{kj}^{(0)} = 1, which implies (4.1) since µ_{ik}^{(r)} = µ_i^{(r)}.

Remark 4.2. Using the fact that µ_i^{(r)} = (−1)^r r! α^{(i)} T^{(i)^{−r}} e, (4.1) can be expressed as

m_{ij}^{(r)} = (−1)^r r! α^{(i)} T^{(i)^{−r}} e + \sum_{s=1}^{r} \binom{r}{s} (−1)^{r−s} (r − s)! α^{(i)} T^{(i)^{−(r−s)}} e \sum_{k≠j} p_{ik} m_{kj}^{(s)}.
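Remark 4.2 makes (4.1) directly computable: the sojourn moments come in closed form from the PH representations, and each order r then requires only column-wise linear solves involving the lower-order moment matrices. A sketch under invented two-state data:

```python
import numpy as np
from math import comb, factorial

P = np.array([[0.0, 1.0],
              [0.8, 0.2]])
ph = {0: (np.array([1.0]), np.array([[-2.0]])),
      1: (np.array([0.5, 0.5]), np.array([[-4.0, 2.0],
                                           [ 0.0, -1.0]]))}
m = P.shape[0]

def mu(i, s):
    """s-th sojourn moment mu_i^(s) = (-1)^s s! alpha^(i) T^(i)^{-s} e (Remark 4.2)."""
    if s == 0:
        return 1.0
    a, T = ph[i]
    return (-1) ** s * factorial(s) * a @ np.linalg.matrix_power(np.linalg.inv(T), s) @ np.ones(len(a))

def passage_moments(r_max):
    """Return [M^(1), ..., M^(r_max)] via the recursion (4.1), solved column-wise."""
    Ms = []
    for r in range(1, r_max + 1):
        Mr = np.zeros((m, m))
        for j in range(m):
            Pj = P.copy(); Pj[:, j] = 0.0
            rhs = np.array([mu(i, r) for i in range(m)])
            for s in range(1, r):                      # lower-order contributions
                rhs += comb(r, s) * np.array([mu(i, r - s) for i in range(m)]) * (Pj @ Ms[s - 1][:, j])
            Mr[:, j] = np.linalg.solve(np.eye(m) - Pj, rhs)   # the s = r term gives I - P^(j)
        Ms.append(Mr)
    return Ms

M1, M2 = passage_moments(2)
print(M1, M2, sep="\n")
```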
Furthermore, let

M^{(r)} = [m_{ij}^{(r)}],   N^{(r)} = [µ_{ij}^{(r)}],   H^{(r)} = [p_{ij} µ_{ij}^{(r)}].   (4.2)

Then, from Lemma 4.1, we immediately obtain the following theorem.
Theorem 4.3.

M^{(r)} = N^{(r)} + \sum_{s=1}^{r} \binom{r}{s} H^{(r−s)} [M^{(s)} − M_d^{(s)}].   (4.3)
Remark 4.4. If r = 1, then (4.3) reduces to M = N + P[M − M_d], as given by (3.1). Similarly, from (4.3) with r = 2, we see that M^{(2)} satisfies (3.6).

From (3.1), we have

(I − P)M = N − P M_d.   (4.4)
This is an equation of the type AX = C, which can be solved by g-inverses. The right-hand side of (4.4) involves M_d, which is determined by Theorem 3.2. To prove our main results using g-inverses, we need the following facts from Lemma 3 of Yao (1985).

Lemma 4.5. Let P be the transition matrix of a finite irreducible Markov chain with stationary probability (row) vector π. Let t and u be any column vectors such that πt ≠ 0 and u^T e ≠ 0 (the superscript denoting transpose). If G is any g-inverse of I − P, then

(i) G = [I − P + tu^T]^{−1} + ef^T + gπ   (4.5)

for arbitrary column vectors f and g;

(ii) [I − G(I − P)]U = JW,   (4.6)

where U is the arbitrary matrix in (2.1) and W = diag(w_1, w_2, . . . , w_m).

Corollary 4.6. Let G be any g-inverse of I − P. Then J − GP + J(GP)_d = I − G + JG_d.

Proof. This follows by taking U = I in (4.6).
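To make Lemma 4.5 and Corollary 4.6 concrete, the sketch below builds one member of the family (4.5) for an illustrative P (with arbitrarily chosen vectors f and g) and verifies numerically that it is a g-inverse of I − P and that the identity of Corollary 4.6 holds.

```python
import numpy as np

P = np.array([[0.0, 0.6, 0.4],
              [0.5, 0.0, 0.5],
              [0.3, 0.7, 0.0]])
m = P.shape[0]
e = np.ones((m, 1))
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))]); pi /= pi.sum()
pi = pi.reshape(1, m)

# One member of the family (4.5): G = (I - P + t u^T)^{-1} + e f^T + g pi
t = e; u = pi.T
f = np.array([[0.3, -1.0, 2.0]]).T
g = np.array([[1.0, 0.0, -0.5]]).T
G = np.linalg.inv(np.eye(m) - P + t @ u.T) + e @ f.T + g @ pi

A = np.eye(m) - P
assert np.allclose(A @ G @ A, A)                 # G is a g-inverse of I - P

J = np.ones((m, m))
dg = lambda X: np.diag(np.diag(X))
lhs = J - G @ P + J @ dg(G @ P)                  # Corollary 4.6
rhs = np.eye(m) - G + J @ dg(G)
assert np.allclose(lhs, rhs)
print("checks passed")
```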
Theorem 4.7. Let G be any g-inverse of I − P. Then

M = [GΛL − J(GΛL)_d + λ(I − G + JG_d)] D,   (4.7)

where D is the diagonal matrix with diagonal elements d_{ii} = 1/π_i.

Proof. First of all, to prove the consistency of (4.4), it suffices to show that

[I − (I − P)G](N − P M_d) = 0.   (4.8)
Let G be any g-inverse of I − P with the general form (4.5). It is easily verified that

(I − P)(I − P + tu^T)^{−1} = I − tπ/(πt).

This, along with (4.5), further yields

[I − (I − P)G](N − P M_d) = [tπ/(πt) − (I − P)gπ](N − P M_d).   (4.9)

Moreover, pre-multiplying both sides of (4.4) by π and noting that πP = π gives

πN = π M_d.

Thus (4.8) follows from (4.9).

Next, we derive the desired relation (4.7). By virtue of Lemma 2.3 and (4.6), the general solution of (4.4) is given by

M = G(N − P M_d) + [I − G(I − P)]U = G(N − P M_d) + JW.   (4.10)

Taking diagonal elements of (4.10), it follows that

M_d = (GN)_d − (GP)_d M_d + W.   (4.11)
Solving for W from (4.11) and substituting into (4.10), we have

M = GN − J(GN)_d + [J − GP + J(GP)_d] M_d = GΛJ − J(GΛJ)_d + [J − GP + J(GP)_d] M_d.   (4.12)

Then, by Corollary 4.6, we get

M = GΛJ − J(GΛJ)_d + [I − G + JG_d] M_d.   (4.13)

On the other hand, (3.3) implies that

(GΛJ)_d = (1/λ)(GΛL)_d M_d.   (4.14)

Hence, substituting (3.3) and (4.14) into (4.13) and combining with (3.2), we obtain (4.7).
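Formula (4.7) is straightforward to evaluate once a g-inverse of I − P is available; the sketch below uses Z = (I − P + L)^{-1} and compares the result with the column-wise solution of (3.1), for illustrative data.

```python
import numpy as np

P = np.array([[0.0, 0.6, 0.4],
              [0.5, 0.0, 0.5],
              [0.3, 0.7, 0.0]])
mu = np.array([1.5, 0.8, 2.0])                   # mean PH sojourn times
m = P.shape[0]

w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))]); pi /= pi.sum()
L = np.tile(pi, (m, 1))                          # limiting matrix: all rows equal pi
J = np.ones((m, m))
dg = lambda X: np.diag(np.diag(X))

G = np.linalg.inv(np.eye(m) - P + L)             # one particular g-inverse of I - P
Lam = np.diag(mu)
lam = pi @ mu
D = np.diag(1.0 / pi)

# Theorem 4.7: M = [G Lam L - J (G Lam L)_d + lam (I - G + J G_d)] D
M_47 = (G @ Lam @ L - J @ dg(G @ Lam @ L) + lam * (np.eye(m) - G + J @ dg(G))) @ D

# direct column-wise solution of (3.1) for comparison
M_dir = np.zeros((m, m))
for j in range(m):
    Pj = P.copy(); Pj[:, j] = 0.0
    M_dir[:, j] = np.linalg.solve(np.eye(m) - Pj, mu)
print(np.allclose(M_47, M_dir))
```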
Remark 4.8. If the PSMP degenerates to a discrete-time Markov chain, i.e., the sojourn time in each state is 1, then λ = 1 and Λ = I. Hence, for this case, (4.7) becomes M = [GL − J(GL)_d + I − G + JG_d] D.

Next, we derive the expression for M^{(2)}.

Theorem 4.9. Let G be any g-inverse of I − P. Then

M_d^{(2)} = {βI + 2[(LHGΛL)_d − λ(GΛL)_d − λ(LHG)_d + λ²G_d] D} D,   (4.15)

where β = \sum_i π_i µ_i^{(2)} and D is the diagonal matrix with diagonal elements d_{ii} = 1/π_i.

Proof. For convenience, denote Λ^{(2)} = diag(µ_1^{(2)}, µ_2^{(2)}, . . . , µ_m^{(2)}). Pre-multiplying (3.6) by L and noting that LN^{(2)} = LΛ^{(2)}J = βJ gives

L M_d^{(2)} = βJ + 2LH[M − M_d].

This, combined with (3.3) and (4.7), yields

J M_d^{(2)} = {βJ + 2LH[GΛL − J(GΛL)_d + λ(JG_d − G)] D} D.   (4.16)

Therefore, taking diagonal elements of (4.16) and using LHJ = λJ, we get (4.15).

Then, from (3.6) and Theorem 4.9, we can obtain the following theorem by arguments similar to those in the proof of Theorem 4.7.

Theorem 4.10.

M^{(2)} = [I − G + JG_d] M_d^{(2)} + 2[J(GH)_d − GH] M_d + 2[GHM − J(GHM)_d] + GN^{(2)} − J(GN^{(2)})_d.   (4.17)
5. Transformation of the PSMP into a Markov chain

In what follows, we show that a finite-state PSMP can be transformed into a finite Markov chain by the supplementary variable technique. As before, consider a PSMP {Y(t)} which is finite and regular, let E = {1, 2, . . . , m} be its state space, and suppose that the transition matrix P = [p_{ij}] defined by (2.3) is irreducible. Let (α^{(i)}, T^{(i)}) denote the m^{(i)}-order representation of F_i(t), and let E^{(i)} be the corresponding set of transient states (so the number of elements of E^{(i)} is m^{(i)}), where α^{(i)} = (a_1^{(i)}, a_2^{(i)}, . . . , a_{m^{(i)}}^{(i)}) and T^{(i)} = (T_{jk}^{(i)}), j, k ∈ E^{(i)}.

From the description above, it is easy to see that the process Y(t) is not a MP unless the sojourn time in each state is exponentially distributed. However, observing the process Y(t) only at the jump points T_n yields a (discrete-time) MP, and the behavior of Y(t) is piecewise deterministic in the intervals between jump points. For the Markovization of Y(t) we therefore have to supplement the state with information on the sojourn in progress, namely its current phase. Hence, let J(t) denote the phase of Y(t) at time t; we can then easily obtain the following result. For any i ∈ E, we define
T_j^{(i,0)} = − \sum_{k=1}^{m^{(i)}} T_{jk}^{(i)},   j = 1, 2, . . . , m^{(i)},

Ω = {(i, k^{(i)}) | i ∈ E, k^{(i)} = 1, 2, . . . , m^{(i)}}.
Theorem 5.1. The process Z(t) = (Y(t), J(t)), with state space Ω, is a MP with infinitesimal generator Q* = [q*_{µν}]_{µ,ν∈Ω} given by

q*_{(i,k^{(i)})(i,k^{(i)})} = T_{k^{(i)} k^{(i)}}^{(i)},   (i, k^{(i)}) ∈ Ω,
q*_{(i,k^{(i)})(i,\bar k^{(i)})} = T_{k^{(i)} \bar k^{(i)}}^{(i)},   (i, k^{(i)}), (i, \bar k^{(i)}) ∈ Ω and k^{(i)} ≠ \bar k^{(i)},   (5.1)
q*_{(i,k^{(i)})(j,k^{(j)})} = p_{ij} T_{k^{(i)}}^{(i,0)} a_{k^{(j)}}^{(j)},   (i, k^{(i)}) ∈ Ω, (j, k^{(j)}) ∈ Ω and i ≠ j.

Proof. For an infinitesimal time interval [t, t + ∆), we have

(a) P(Z(t + ∆) = (i, k^{(i)}) | Z(t) = (i, k^{(i)})) = 1 + T_{k^{(i)} k^{(i)}}^{(i)} ∆ + o(∆),   (i, k^{(i)}) ∈ Ω,

(b) P(Z(t + ∆) = (i, \bar k^{(i)}) | Z(t) = (i, k^{(i)})) = T_{k^{(i)} \bar k^{(i)}}^{(i)} ∆ + o(∆),   (i, k^{(i)}), (i, \bar k^{(i)}) ∈ Ω with k^{(i)} ≠ \bar k^{(i)},

(c) P(Z(t + ∆) = (j, k^{(j)}) | Z(t) = (i, k^{(i)})) = p_{ij} T_{k^{(i)}}^{(i,0)} a_{k^{(j)}}^{(j)} ∆ + o(∆),   (i, k^{(i)}), (j, k^{(j)}) ∈ Ω with i ≠ j.

Hence the result is proven.
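Theorem 5.1 is constructive: given P and the PH representations, the generator Q* can be assembled block by block. The sketch below uses invented two-state data with no self-transitions in the embedded chain, and orders the phases as in the lexicographic ordering of Ω introduced below.

```python
import numpy as np

# Illustrative embedded chain (no self-transitions) and PH sojourn representations.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
ph = {0: (np.array([1.0]), np.array([[-2.0]])),
      1: (np.array([0.5, 0.5]), np.array([[-4.0, 2.0],
                                           [ 0.0, -1.0]]))}
sizes = [ph[i][1].shape[0] for i in range(len(ph))]
offset = np.cumsum([0] + sizes)               # phase (i, k) -> row index offset[i] + k - 1
c = int(offset[-1])

Qstar = np.zeros((c, c))
for i, (alpha_i, T_i) in ph.items():
    Ti0 = -T_i @ np.ones(sizes[i])            # exit rates T_k^{(i,0)}
    bi = slice(offset[i], offset[i + 1])
    Qstar[bi, bi] = T_i                       # phase changes within state i
    for j, (alpha_j, T_j) in ph.items():
        if j != i:
            bj = slice(offset[j], offset[j + 1])
            Qstar[bi, bj] = P[i, j] * np.outer(Ti0, alpha_j)   # jump i -> j, new phase ~ alpha^(j)

print(Qstar)
print(np.allclose(Qstar.sum(axis=1), 0.0))    # generator rows sum to zero
```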
Note that the state space Ω can be ordered lexicographically as follows:

Ω = {(1, 1), (1, 2), . . . , (1, m^{(1)}), (2, 1), (2, 2), . . . , (2, m^{(2)}), . . . , (m, 1), (m, 2), . . . , (m, m^{(m)})}.

For simplicity, we further denote each element (i, k) ∈ Ω by its ordinal number, i.e.,

ϕ((i, k)) ≜ \sum_{r=1}^{i−1} m^{(r)} + k,   i ∈ E, 1 ≤ k ≤ m^{(i)}.   (5.2)

Moreover, let c = \sum_{i∈E} m^{(i)}, and define

Ŷ(t) = ϕ((Y(t), J(t))) = ϕ(Z(t)),   q̂_{ϕ((i,k)) ϕ((i′,k′))} = q*_{(i,k)(i′,k′)}.

Consequently, {Ŷ(t)} is an associated MP of {Y(t)} with state space S = {1, 2, . . . , c} and infinitesimal generator matrix Q̂ = [q̂_{ij}]_{i,j∈S}, where q̂_{ij} is the transition rate from state i to state j.

Let q̂_i = −q̂_{ii}. Denote the nth jump point of the process {Ŷ(t)} by τ_n (n ≥ 0), where 0 = τ_0 < τ_1 < · · · < τ_n < · · ·. The discrete-time process {X_n^{(e)}, n ≥ 0} is the so-called jump chain of Ŷ(t); that is, X_n^{(e)} = Ŷ(τ_n), n ≥ 0. The transition probability matrix P̂ = [p̂_{ij}] of the jump chain is given by

p̂_{ij} = q̂_{ij} / \sum_{k≠i} q̂_{ik} for i ≠ j,   and   p̂_{ij} = 0 for i = j.

Thus Q̂ = Q̂_d(I − P̂). Let L̂ be the limiting matrix of P̂; the fundamental matrix of {Ŷ(t)} can then be written as Ẑ = (I − P̂ + L̂)^{−1}. Let ∆ be any real number satisfying ∆ ≥ max_{i∈S} q̂_i. Then the uniformization technique transforms {Ŷ(t)} into a discrete-time (uniformized) chain whose transition matrix is P̂ = I + Q̂/∆ (see Ross, 1996). Let π̂ = (π̂_1, π̂_2, . . . , π̂_c) be the stationary probability vector of P̂.

Denote by M̂^{(r)} the matrix whose (i, j)th component is the rth moment of the first-passage time of {Ŷ(t)} from state i to j. From Yao (1985), we have the following result.

Theorem 5.2.

M̂^{(r)} = r M̂^{(1)} diag(π̂_j m̂_{jj}^{(r−1)}) + r[Ẑ M̂^{(r−1)} − J(Ẑ M̂^{(r−1)})_d],

and

M̂^{(1)} = [V_1 − Ẑ + J Ẑ_d] V_2,

where V_1 = diag(q̂_1^{−1}, q̂_2^{−1}, . . . , q̂_c^{−1}) and V_2 = diag(π̂_1^{−1}, π̂_2^{−1}, . . . , π̂_c^{−1}).
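Finally, a sketch of how the expanded chain can be used in practice: starting from Q̂ (here the generator built in the previous sketch), its stationary distribution and mean first-passage times are computed directly from the standard hitting-time equations, which gives a numerical cross-check of the quantities that Theorem 5.2 expresses through the fundamental matrix.

```python
import numpy as np

# Expanded generator Qhat of {Yhat(t)} (the Qstar of the previous sketch).
Qhat = np.array([[-2.0,  1.0,  1.0],
                 [ 2.0, -4.0,  2.0],
                 [ 1.0,  0.0, -1.0]])
c = Qhat.shape[0]
qhat = -np.diag(Qhat)                          # holding rates qhat_i

# stationary distribution: pihat Qhat = 0, pihat e = 1
A = np.vstack([Qhat.T, np.ones(c)])
pihat = np.linalg.lstsq(A, np.r_[np.zeros(c), 1.0], rcond=None)[0]

# mean first-passage times of the expanded chain, from the standard hitting-time
# equations; the diagonal entries are the mean recurrence times 1/(pihat_j qhat_j).
M1 = np.zeros((c, c))
for j in range(c):
    idx = [i for i in range(c) if i != j]
    M1[idx, j] = np.linalg.solve(-Qhat[np.ix_(idx, idx)], np.ones(c - 1))
    M1[j, j] = 1.0 / (pihat[j] * qhat[j])
print(pihat, M1, sep="\n")
```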
Acknowledgments

The authors would like to thank the anonymous referees and editors for their helpful suggestions that led to a significant improvement of this paper.

References

Çinlar, E., 1975. Introduction to Stochastic Processes. Prentice-Hall, Englewood Cliffs, NJ.
Fackrell, M., 2009. Modelling healthcare systems with phase-type distributions. J. Health Care Manag. Sci. 12, 11-26.
Feller, W., 1964. On semi-Markov processes. Proc. Natl. Acad. Sci. 51, 653-659.
Fygenson, M., 1989. A fundamental matrix for regular semi-Markov processes. Stochastic Process. Appl. 32, 151-160.
Hou, Z., Luo, J., Shi, P., Nguang, S.K., 2006. Stochastic stability of Ito differential equations with semi-Markovian jump parameters. IEEE Trans. Autom. Control 51, 1383-1387.
Hou, Z., Tong, J., Zhang, Z., 2009. Convergence of jump-diffusion non-linear differential equation with phase semi-Markovian switching. Appl. Math. Model. 33, 3650-3660.
Howard, R.A., 1971. Dynamic Probabilistic Systems. John Wiley & Sons, New York.
Hunter, J.J., 1982. Generalized inverses and their application to applied probability problems. Linear Algebra Appl. 45, 157-198.
Hunter, J.J., 1992. Stationary distributions and mean first passage times in Markov chains using generalised inverses. Asia-Pac. J. Oper. Res. 9, 145-153.
Keilson, J., 1979. Markov Chain Models: Rarity and Exponentiality. Springer-Verlag, New York.
Kemeny, J.G., Snell, J.L., 1960. Finite Markov Chains. Van Nostrand, Princeton.
Lévy, P., 1954. Systèmes semi-Markoviens à au plus une infinité dénombrable d'états possibles. In: Proc. Int. Congr. Math., vol. 2, p. 294.
Neuts, M.F., 1981. Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach. The Johns Hopkins University Press, Baltimore.
Pyke, R., 1961. Markov renewal processes with finitely many states. Ann. Math. Stat. 32, 1243-1259.
Ross, S.M., 1996. Stochastic Processes. Wiley, New York.
Smith, W.L., 1955. Regenerative stochastic processes. Proc. R. Soc. Lond. Ser. A 232, 6-31.
Yao, D.D., 1985. First-passage-time moments of Markov processes. J. Appl. Probab. 22, 939-945.