Available online at www.sciencedirect.com

Computers and Mathematics with Applications 50 (2005) 1251-1270
www.elsevier.com/locate/camwa
Algorithmic Analysis of a Multiserver Markovian Queue with Primary and Secondary Services

V. I. KLIMENOK
Department of Applied Mathematics and Computer Science
Belarusian State University, Minsk, 220050, Belarus
klimenok@bsu.by

S. R. CHAKRAVARTHY
Department of Industrial and Manufacturing Engineering and Business
Kettering University, Flint, MI 48439, U.S.A.
schakrav@kettering.edu

A. N. DUDIN
Department of Applied Mathematics and Computer Science
Belarusian State University, Minsk, 220050, Belarus
dudin@bsu.by
(Received and accepted June 2005)

Abstract. We consider a multiserver retrial queueing model with applications in service industries, in which customers arrive according to a Markovian arrival process (MAP). Any admitted customer either (a) gets service and leaves the system as a satisfied primary customer, or (b) gets service and becomes a secondary customer. The secondary customers leave the system either satisfied or unsatisfied with their services. In this paper, we perform the steady state analysis of the model and also study the algorithmic aspects of this queueing model, including an optimization problem. © 2005 Elsevier Ltd. All rights reserved.

Keywords: Markovian arrival process, Retrials, Loss probability, Multiserver queue, Tandem, Asymptotically quasi-Toeplitz Markov chain.
1. INTRODUCTION AND MODEL DESCRIPTION
The motivation for studying this queueing model came from a real-life application experienced by one of the authors. The laptop computer owned by this author had a problem for which the computer manufacturer was contacted. After waiting for a random amount of time, he went over the problem with an expert, who then advised him to first run a scanning test on the hard drive and get back with the test results. After a few tries, the author was successful in reaching an expert to report the test results. The service person, after analyzing the test results, told the

A part of this paper was presented at the Eighteenth Belarusian Winter Workshop in Queueing Theory held in Minsk, Belarus, February 22-24, 2005.

0898-1221/05/$ - see front matter © 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.camwa.2005.06.002
author that a replacement hard drive would probably solve the problem. Thus, the author had to go through a primary service (reaching a service person to discuss the problem), a secondary service (performing a scanning test on the hard drive), and another primary service (reporting the test results based on the secondary service) before the problem was resolved. In this paper, we consider a queueing system that closely models the above-mentioned real-life application. While the model is based on this specific application, there are other service industries where such a model will be useful.

Before we list the basic assumptions of the model, we set up some notation. By e we will denote a column vector (of appropriate dimension) of 1's, and by I an identity matrix (of appropriate dimension). When needed, we will identify the dimension of this matrix with a suffix; for example, I_r will denote an identity matrix of dimension r. By Δ(C_1, ..., C_n) we denote the diagonal (block) matrix whose ith diagonal (block) element is given by C_i. O_{a×b} will denote a rectangular matrix of dimension a × b with all entries equal to zero; when a = b we write O or O_a, whichever is appropriate. The symbol ⊗ denotes the Kronecker product of matrices; for details and properties of Kronecker products, we refer the reader to [1]. The basic assumptions of the model are as follows.
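The Kronecker-product notation is used heavily in what follows; a minimal self-contained sketch of the operation (our illustration, with arbitrary example matrices, not anything from the paper):

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    ra, ca = len(A), len(A[0])
    rb, cb = len(B), len(B[0])
    return [[A[i // rb][j // cb] * B[i % rb][j % cb]
             for j in range(ca * cb)] for i in range(ra * rb)]

I2 = [[1, 0], [0, 1]]
D = [[5, 6], [7, 8]]
K = kron(I2, D)   # I ⊗ D is the block-diagonal matrix diag(D, D)
print(K)  # [[5, 6, 0, 0], [7, 8, 0, 0], [0, 0, 5, 6], [0, 0, 7, 8]]
```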
• Arrivals occur according to a Markovian arrival process (MAP) with parameter matrices (D0, D1) of order m. Let Q* = D0 + D1 denote the generator of the Markov process governing the MAP, with θQ* = 0 and θe = 1. Note that D0 governs transitions in the continuous-time Markov chain that correspond to no arrivals, and D1 governs transitions that correspond to arrivals. These arrivals are referred to as primary customers. Any arrival finding an idle server gets into service immediately; otherwise, the arrival is considered lost. The constant λ = θD1e, referred to as the fundamental rate, gives the expected number of arrivals per unit of time in the stationary version of the MAP. For details on MAPs and their usefulness in stochastic modelling, we refer to [2-4]; for a review and recent work on MAPs, we refer the reader to [5].
• The service system is divided into two groups: main and self-service. In the main system, there are c homogeneous exponential servers. There is no bound on the number of customers in the self-service system.
• The primary customers are served in the main system at a rate μ1, 0 < μ1 < ∞.
• Upon completion of a service in the main system, a primary customer may move to the self-service system with probability η, 0 ≤ η < 1; with probability η̄ = 1 − η, the primary customer leaves the system. That is, with probability η a primary customer needs to do a self-test.
• The service time of a customer in the self-service system is assumed to be exponentially distributed with parameter κ, 0 < κ < ∞. Upon completion of a service in this system, the customer needs to get served again in the main system. These customers will be referred to as secondary customers.
• A secondary customer finding an idle server in the main system will get into service immediately. However, if all the main servers are busy, then this customer will try to get into an orbit of finite buffer size M, from where the customers will try to access the main system. If at the time of getting into the orbit the buffer is full, the secondary customer will be lost with probability q, 0 ≤ q ≤ 1, and with probability 1 − q the customer will wait for an exponential amount of time with parameter κ before trying to get into the orbit again.
• While in orbit, each (secondary) customer will try to access a free server in the main system at random times that are exponentially distributed with parameter α, 0 < α < ∞. That is, if there are k customers in the orbit, then the duration of a retrial time is exponential
with parameter kα.
• The service time of a secondary customer in the main system is assumed to be exponentially distributed with parameter μ2, 0 < μ2 < ∞. Upon completing a service in the main system, the secondary customer leaves the system.
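The fundamental rate λ = θD1e in the first assumption can be computed directly from (D0, D1); a minimal sketch for a two-phase MAP, with made-up parameter matrices rather than any of the paper's test processes:

```python
# Sketch: fundamental rate of a 2-phase MAP (D0, D1).
# theta solves theta Q* = 0, theta e = 1, with Q* = D0 + D1;
# lambda = theta D1 e.  The matrices below are illustrative only.
D0 = [[-5.0, 1.0], [2.0, -4.0]]
D1 = [[4.0, 0.0], [1.0, 1.0]]

Q = [[D0[i][j] + D1[i][j] for j in range(2)] for i in range(2)]

# Closed-form stationary vector of a 2-state generator:
# theta = (q21, q12) / (q12 + q21).
q12, q21 = Q[0][1], Q[1][0]
theta = [q21 / (q12 + q21), q12 / (q12 + q21)]

lam = sum(theta[i] * sum(D1[i]) for i in range(2))
print(lam)  # 3.5 for these matrices
```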
2. MARKOV PROCESS DESCRIPTION

The queueing model outlined in Section 1 can be studied as a continuous-time Markov chain, {ξ(t) : t ≥ 0}, with a large state space. Defining

• ξ1(t) = the number of customers in the self-service system at time t,
• ξ2(t) = the number of secondary customers in the orbit at time t,
• ξ3(t) = the number of busy (main) servers at time t,
• ξ4(t) = the number of secondary customers in the main service group at time t,
• ξ5(t) = the phase of the arrival process at time t,

the 5-tuple process {ξ(t) = (ξ1(t), ξ2(t), ξ3(t), ξ4(t), ξ5(t)) : t ≥ 0} is a continuous-time Markov chain with state space

    {(i, j, n, k, ν) : i ≥ 0, 0 ≤ j ≤ M, 0 ≤ n ≤ c, 0 ≤ k ≤ n, 1 ≤ ν ≤ m}.

Calling level i, i ≥ 0, the set of states {(i, j, n, k, ν) : 0 ≤ j ≤ M, 0 ≤ n ≤ c, 0 ≤ k ≤ n, 1 ≤ ν ≤ m}, the generator of the Markov chain {ξ(t) : t ≥ 0} is given by

        | A_{0,1}   A_0                          |
        | A_{1,2}   A_{1,1}   A_0                |
    Q = |           A_{2,2}   A_{2,1}   A_0      |.    (1)
        |                     A_{3,2}   A_{3,1}  A_0 |
        |                               ...      |
In the sequel, we set K = (c + 1)(c + 2)/2. The square matrices A_{i,j}, of dimension K(M + 1)m and partitioned into smaller blocks of dimension Km, are defined to be

    A_{i,2}(r, r) = iκ (Ê^{(1)} ⊗ I_m),   0 ≤ r ≤ M − 1,   i ≥ 1,    (2)

    A_{i,2}(M, M) = iκ (Ê^{(1)} + qÎ) ⊗ I_m,   i ≥ 1,    (3)

    A_{i,2}(r, r + 1) = iκ (Î ⊗ I_m),   0 ≤ r ≤ M − 1,   i ≥ 1,    (4)

    A_0(r, r) = ημ1 (Ĥ^{(1)} ⊗ I_m),   0 ≤ r ≤ M,    (5)

    A_{i,1}(r, r − 1) = rα (Ê^{(1)} ⊗ I_m),   1 ≤ r ≤ M,   i ≥ 0,    (6)

    A_{i,1}(r, r) = C + η̄μ1 (Ĥ^{(1)} ⊗ I_m) + μ2 (Ĥ^{(2)} ⊗ I_m) + Ê^{(2)} ⊗ D1
                    − rα (I − Î) ⊗ I_m − iκ I,   0 ≤ r ≤ M − 1,   i ≥ 0,    (7)

    A_{i,1}(M, M) = C + η̄μ1 (Ĥ^{(1)} ⊗ I_m) + μ2 (Ĥ^{(2)} ⊗ I_m) + Ê^{(2)} ⊗ D1
                    − Mα (I − Î) ⊗ I_m − iκ (I − Î) ⊗ I_m − qiκ (Î ⊗ I_m),   i ≥ 0,    (8)

and the entries of the above matrices are defined as follows.

• E^{(1)}_{a×(a+1)} and E^{(2)}_{a×(a+1)} are rectangular matrices of dimension a × (a + 1) defined by

    E^{(1)}_{a×(a+1)} := (O_{a×1} : I_a),    E^{(2)}_{a×(a+1)} := (I_a : O_{a×1}).    (9)

• Ê^{(r)} is a square matrix of order K defined as

    Ê^{(r)} := | O_1   E^{(r)}_{1×2}                               |
               |       O_2    E^{(r)}_{2×3}                        |
               |              ...     ...                          |
               |                      O_c    E^{(r)}_{c×(c+1)}     |
               |                             O_{c+1}               |,   r = 1, 2.    (10)

• Î is a square matrix of order K whose only nonzero (block) entry is the last diagonal block. That is,

    Î := | O_{K−c−1}            O_{(K−c−1)×(c+1)} |
         | O_{(c+1)×(K−c−1)}    I_{c+1}           |.    (11)

• Ĥ^{(r)} is a square matrix of order K which in partitioned form is defined as

    Ĥ^{(r)} := | O_1                                          |
               | H^{(r)}   O_2                                |
               |           H^{(r)}   O_3                      |
               |                     ...      ...             |
               |                              H^{(r)}   O_{c+1} |,   r = 1, 2,    (12)

where, in block row n, the matrices H^{(r)}, r = 1, 2, of dimension (n + 1) × n are defined entrywise by

    H^{(1)}(k, k) = n − k,   0 ≤ k ≤ n − 1,    H^{(2)}(k, k − 1) = k,   1 ≤ k ≤ n,    (13)

with all other entries equal to zero. (H^{(1)} collects the primary-service completion rates and H^{(2)} the secondary-service completion rates.)

• C^{(n)} = Δ(C^{(n)}_0, ..., C^{(n)}_n), 0 ≤ n ≤ c, and C = Δ(C^{(0)}, ..., C^{(c)}), with

    C^{(n)}_k := D0 − [kμ2 + (n − k)μ1] I + δ_{n,c} D1,   0 ≤ k ≤ n,   0 ≤ n ≤ c,    (14)

where δ_{n,c} is the Kronecker delta.
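As a consistency check on the service-completion matrices just defined, note that a state with n busy servers, k of them serving secondary customers, completes primary services at rate (n − k)μ1 and secondary services at rate kμ2; hence each row of H^{(1)} + H^{(2)} in block n must sum to n. A sketch (the assembly code is ours, not from the paper):

```python
# Build the subdiagonal blocks H^(r) of H-hat for the (n, k) state space,
# 0 <= k <= n <= c.  H1[k][k] = n - k (primary completions) and
# H2[k][k-1] = k (secondary completions); both have dimension (n+1) x n.
def H_blocks(n):
    H1 = [[0] * n for _ in range(n + 1)]
    H2 = [[0] * n for _ in range(n + 1)]
    for k in range(n):
        H1[k][k] = n - k
    for k in range(1, n + 1):
        H2[k][k - 1] = k
    return H1, H2

c = 4
for n in range(1, c + 1):
    H1, H2 = H_blocks(n)
    # Total service-completion count out of state (n, k) is n
    # (in units where mu1 = mu2 = 1), for every k.
    for k in range(n + 1):
        assert sum(H1[k]) + sum(H2[k]) == n
print("row sums check out")
```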
3. STEADY STATE ANALYSIS
First, observe that the Markov chain {ξ(t) : t ≥ 0} is a level-dependent quasi-birth-and-death process, and such processes have been studied in the literature (see, e.g., [6-10]) using a number of techniques. Here, we will use the results for multidimensional asymptotically quasi-Toeplitz Markov chains (AQTMC) as presented in [11,12] to show that the Markov chain {ξ(t) : t ≥ 0} under study is always stable.

Let R_i be the diagonal matrix formed by the moduli of the diagonal entries of the matrix A_{i,i}, i ≥ 0. It is well known that the block matrix P, having the structure of (1) with blocks P_{i,l} = R_i^{-1} A_{i,l}, l = i − 1, i + 1, and P_{i,i} = R_i^{-1} A_{i,i} + I, i ≥ 0, is the one-step transition probability matrix of the embedded (or jump) discrete-time Markov chain corresponding to the original continuous-time Markov chain {ξ(t) : t ≥ 0}. It is easy to verify that (i) this discrete-time Markov chain is an AQTMC (see [11,12]), and (ii) if this chain is positive recurrent, then the Markov chain {ξ(t) : t ≥ 0} is also ergodic. It is easy to see that the limits Y_k = lim_{i→∞} P_{i,i+k−1}, k = 0, 1, 2, exist for all values of the system parameters of the model under study. Specifically, the expressions for these limits, derived by considering the two cases (a) q > 0 and (b) q = 0, are given in the following two lemmas. The matrices Y_k will be displayed in block-partitioned form, and we denote the (r, j)th block of Y_k by Y_k(r, j).

LEMMA 1. For q > 0, we have Y_1 = Y_2 = O, and the nonzero blocks of the matrix Y_0 are given by
    Y_0(r, j) = { Ê^{(1)} ⊗ I_m,         j = r,      0 ≤ r ≤ M − 1,
                { (Ê^{(1)} + Î) ⊗ I_m,   j = r = M,
                { Î ⊗ I_m,               j = r + 1,  0 ≤ r ≤ M − 1.    (15)

PROOF. Follows immediately from the definition of the limits.
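The embedded-jump-chain construction used above, with P = R^{-1}A on the off-diagonal blocks and R^{-1}A + I on the diagonal ones, can be illustrated on a small unstructured generator with arbitrary rates:

```python
# Jump chain of a CTMC: with R the diagonal matrix of the moduli of the
# diagonal entries of the generator Q, P = I + R^{-1} Q is stochastic
# with zero diagonal.
Q = [[-2.0, 1.0, 1.0],
     [3.0, -4.0, 1.0],
     [0.0, 5.0, -5.0]]
n = len(Q)
P = [[(1.0 if i == j else 0.0) + Q[i][j] / (-Q[i][i]) for j in range(n)]
     for i in range(n)]
for row in P:
    assert abs(sum(row) - 1.0) < 1e-12   # each row is a probability vector
print(P[0])  # [0.0, 0.5, 0.5]
```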
LEMMA 2. For q = O, the nonzero btocks of the matrices Yk, k = 0, 1, 2, are given by
{$
(1)®zm,
Yo(r,j)=
E®Im,
j=T,
0
j=r+l,
(16)
0
Y1 ( M , M ) = (E ® Ira)il~ 1 [C ~c~ 1 (j~(1)] ® Ira) + - Me (z - ~) ® Imj +
#2 (/~(2)®
Zm)
(17)
IKm,
Y2(M,M) =7#1 (E® I,~) ~-(1 (~r(1) ® Im),
(18)
where R̃_1 denotes the (M, M) diagonal block of R_i.

PROOF. Follows immediately from the definition of the limits.

REMARK. It is easy to verify that (since q = 0) the matrix (Î ⊗ I_m) R_i^{-1} does not depend on i, and so the matrices Y_k(M, M), k = 1, 2, also do not depend on i.

In order to establish the stability condition along the lines of [11,12], we first define the matrix generating function Y(z) = Y_0 + Y_1 z + Y_2 z², |z| ≤ 1. When Y(1) is a reducible matrix with stochastic irreducible blocks Y^{(l)}(1), 1 ≤ l ≤ N, in its normal form, and Y'(1) < ∞, the Markov chain {ξ(t) : t ≥ 0} is stable if the following conditions are fulfilled (see [11,12]):

    u_l (dY^{(l)}(z)/dz)|_{z=1} e < 1,   1 ≤ l ≤ N,    (19)

where u_l is the unique solution of the equations

    u_l Y^{(l)}(1) = u_l,   u_l e = 1,   1 ≤ l ≤ N.    (20)
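Conditions (19) and (20) amount to a mean-drift test on each irreducible block: the expected level increment u_l Y^{(l)'}(1)e must stay below 1. A toy check with assumed 2 × 2 blocks (not the Y_k of this model):

```python
# Mean-drift check u Y'(1) e < 1 for Y(z) = Y0 + Y1 z + Y2 z^2.
# The 2x2 blocks are arbitrary, chosen only so that Y(1) is stochastic.
Y0 = [[0.3, 0.2], [0.1, 0.3]]
Y1 = [[0.1, 0.1], [0.2, 0.1]]
Y2 = [[0.2, 0.1], [0.2, 0.1]]

Y1e = [[Y0[i][j] + Y1[i][j] + Y2[i][j] for j in range(2)] for i in range(2)]
# Stationary vector u of the 2-state stochastic matrix Y(1):
p12, p21 = Y1e[0][1], Y1e[1][0]
u = [p21 / (p12 + p21), p12 / (p12 + p21)]

# Y'(1) = Y1 + 2 Y2; drift = u Y'(1) e.
drift = sum(u[i] * (Y1[i][j] + 2 * Y2[i][j])
            for i in range(2) for j in range(2))
print(drift < 1.0)  # True: this toy chain satisfies the stability condition
```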
Now, we are ready to establish the following theorem.

THEOREM 1. The Markov chain {ξ(t) : t ≥ 0} is stable.

PROOF. In the case when q > 0, we see from Lemma 1 that Y'(z) is equal to zero for any z, so condition (19) always holds. In the case when q = 0, it follows from Lemma 2 that the nonzero blocks of the matrix generating function Y(z) are given by

    Y(r, j)(z) = { Ê^{(1)} ⊗ I_m,                  j = r,      0 ≤ r ≤ M − 1,
                 { Î ⊗ I_m,                        j = r + 1,  0 ≤ r ≤ M − 1,
                 { Ê^{(1)} ⊗ I_m + Y^{(M,M)}(z),   j = r = M,    (21)

where the square matrix Y^{(M,M)}(z) of order Km is such that its only nonzero block entry occurs in the last (c + 1)m rows, which are given by

    [ O_{(c+1)m×(K−2c−1)m}    R^{-1}(Ã + B̃z)z    R^{-1}C^{(c)}z + zI_{(c+1)m} ],

the diagonal (block) entries of the diagonal matrix R^{-1} are the last (c + 1)m diagonal entries of the matrix R̃_1^{-1}, and the matrices Ã and B̃ are defined by

    Ã = (η̄μ1 H^{(1)} + μ2 H^{(2)}) ⊗ I_m,    B̃ = ημ1 (H^{(1)} ⊗ I_m).

Transforming the matrix Y(1) into normal form (see, e.g., [13]), it is easy to verify that the unique irreducible (stochastic) block of this matrix is

    Y^{(1)}(1) = | R̂^{-1} C^{(c)}_c + I_m    R̂^{-1} cμ2 |
                 | I_m                        O_m        |,

where the diagonal entries of the diagonal matrix R̂^{-1} are given by the last m diagonal entries of the matrix R^{-1}. The corresponding block in the matrix Y(z) has the form

    Y^{(1)}(z) = | zR̂^{-1} C^{(c)}_c + zI_m    zR̂^{-1} cμ2 |  =  | zR̂^{-1}(Q* − cμ2 I) + zI_m    zR̂^{-1} cμ2 |
                 | I_m                          O_m         |     | I_m                            O_m          |.

It can be verified that the vector u_1, which is the solution of system (20), is expressed through the vector θ (recall that θQ* = 0 and θe = 1), and consequently inequality (19) takes the form θ(−Q* + cμ2 I_m)e > 0, which reduces to cμ2 > 0, a fact that is always true. Hence, the Markov chain {ξ(t) : t ≥ 0} is always stable.

REMARK. We formally proved the existence of the stationary distribution of the Markov chain
{ξ(t) : t ≥ 0}. However, we can explain this existence intuitively. With respect to the primary customers, the system is like a multiserver loss system, in that an arriving (primary) customer finding all servers busy is considered lost. With a certain probability, a primary customer becomes a secondary customer. Even when all secondary customers compete with primary customers for an additional service in the main system (the worst-case scenario, obtained by taking q = 0), the system will be stable, as these secondary customers will have a very high probability of occupying a free (main) server once the number in the self-service system becomes relatively large.

Let x, partitioned as x = (x(0), x(1), x(2), ...), denote the steady-state probability vector of Q. That is, x satisfies

    xQ = 0,   xe = 1.    (22)

We further partition x(i) as x(i) = (y_0(i), ..., y_M(i)), where

    y_j(i) = (y_{j,0,0}(i), y_{j,1,0}(i), y_{j,1,1}(i), ..., y_{j,c,0}(i), ..., y_{j,c,c}(i)),

and each of the y_{j,k,r}(i), 0 ≤ j ≤ M, 0 ≤ k ≤ c, 0 ≤ r ≤ k, i ≥ 0, is of dimension m. Note that y_{j,k,r}(i) gives the steady-state probability (by phase of the arrival process) that there are i customers in the self-service system, j customers in the retrial orbit, and that out of the k busy servers in the main system, r are serving secondary customers. The system of equations in (22) can be solved by

(a) a direct truncation method;
(b) a generalized truncation method;
(c) a truncation method using level-dependent quasi-birth-and-death processes (LDQBD);
(d) a matrix-geometric approximation; or
(e) the AQTMC approach.
It is not our purpose to discuss and compare these methods here. We will only describe the method, namely the direct truncation method, that is used in this paper.

3.1. Direct Truncation Method

In this method, one truncates the system of equations in (22) at a sufficiently large value L of the number of customers in the self-service unit. That is, the number in the self-service unit is restricted to L such that the probability of a secondary customer finding L self-service customers at the time of completing a service in the primary unit is less than a prespecified infinitesimal number, say ε. Due to the intrinsic nature of the system in (22), the only way to choose L is through algorithmic methods. The cut-off point L is chosen such that x(L)e < ε.
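The choice of the cut-off L can be previewed on the self-service unit in isolation: as c → ∞ it behaves like an M/M/∞ queue with input rate λη and service rate κ, so its stationary distribution is Poisson with mean λη/κ. A sketch of the "increase L until the tail is negligible" loop, with λη/κ = 0.5 an assumed value:

```python
import math

# Smallest L with P(N = L) < eps for N ~ Poisson(a), found by the same
# "increase L until the truncated tail is negligible" loop used for x(L)e.
a, eps = 0.5, 1e-4
L, p = 0, math.exp(-a)          # p = P(N = 0)
while p >= eps:
    L += 1
    p *= a / L                  # Poisson recursion: p_L = p_{L-1} * a / L
print(L, p)  # L = 6 here; p is the first pmf value below eps
```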
In the direct truncation method, the generator Q in (1) is modified as follows:

            | A_{0,1}     A_0                                      |
            | A_{1,2}     A_{1,1}     A_0                          |
    Q^(L) = |             A_{2,2}     A_{2,1}     A_0              |    (23)
            |                         ...         ...              |
            |             A_{L−1,2}   A_{L−1,1}   A_0              |
            |                         A_{L,2}     A_{L,1} + A_0    |.

The steady-state probability vector x^(L) is now partitioned as x^(L) = (x(0), x(1), ..., x(L)) and is solved by exploiting the special structure of the coefficient matrices. There are a variety of methods available for this, such as (block) Gauss-Seidel and aggregate/disaggregate methods, and one can refer to [14] for full details. Since there is no clear-cut choice for L, we may start the iterative process by taking, say, L = 1 and increase it until x(L)e < ε.

Before we proceed with listing some key system performance measures, we display the steady-state equations, which exploit the structure of the coefficient matrices. Defining

    F = C + η̄μ1 (Ĥ^{(1)} ⊗ I) + μ2 (Ĥ^{(2)} ⊗ I) + Ê^{(2)} ⊗ D1,    (24)

    F̃_j(i) = F − jα (I − Î) ⊗ I − iκ I,   0 ≤ j ≤ M − 1,    (25)

    F̃_M(i) = F − Mα (I − Î) ⊗ I − iκ (I − Î) ⊗ I − qiκ (Î ⊗ I),    (26)

the steady-state equations

    x^(L) Q^(L) = 0,   x^(L) e = 1,    (27)
can now be rewritten as

    y_0(0) F̃_0(0) + α y_1(0) [Ê^{(1)} ⊗ I] + κ y_0(1) [Ê^{(1)} ⊗ I] = 0,    (28)

    y_j(0) F̃_j(0) + (j + 1)α y_{j+1}(0) [Ê^{(1)} ⊗ I] + κ y_{j−1}(1) [Î ⊗ I]
        + κ y_j(1) [Ê^{(1)} ⊗ I] = 0,   1 ≤ j ≤ M − 1,    (29)

    y_M(0) F̃_M(0) + κ y_{M−1}(1) [Î ⊗ I] + κ y_M(1) [(Ê^{(1)} + qÎ) ⊗ I] = 0,    (30)

and, for 1 ≤ i ≤ L − 1,

    ημ1 y_0(i − 1) [Ĥ^{(1)} ⊗ I] + y_0(i) F̃_0(i) + α y_1(i) [Ê^{(1)} ⊗ I]
        + (i + 1)κ y_0(i + 1) [Ê^{(1)} ⊗ I] = 0,    (31)

    ημ1 y_j(i − 1) [Ĥ^{(1)} ⊗ I] + y_j(i) F̃_j(i) + (j + 1)α y_{j+1}(i) [Ê^{(1)} ⊗ I]
        + (i + 1)κ y_{j−1}(i + 1) [Î ⊗ I] + (i + 1)κ y_j(i + 1) [Ê^{(1)} ⊗ I] = 0,
        1 ≤ j ≤ M − 1,    (32)

    ημ1 y_M(i − 1) [Ĥ^{(1)} ⊗ I] + y_M(i) F̃_M(i) + (i + 1)κ y_{M−1}(i + 1) [Î ⊗ I]
        + (i + 1)κ y_M(i + 1) [(Ê^{(1)} + qÎ) ⊗ I] = 0,    (33)

and

    ημ1 y_0(L − 1) [Ĥ^{(1)} ⊗ I] + y_0(L) F̃_0(L) + α y_1(L) [Ê^{(1)} ⊗ I]
        + ημ1 y_0(L) [Ĥ^{(1)} ⊗ I] = 0,    (34)

    ημ1 y_j(L − 1) [Ĥ^{(1)} ⊗ I] + y_j(L) F̃_j(L) + (j + 1)α y_{j+1}(L) [Ê^{(1)} ⊗ I]
        + ημ1 y_j(L) [Ĥ^{(1)} ⊗ I] = 0,   1 ≤ j ≤ M − 1,    (35)

    ημ1 y_M(L − 1) [Ĥ^{(1)} ⊗ I] + y_M(L) F̃_M(L) + ημ1 y_M(L) [Ĥ^{(1)} ⊗ I] = 0,    (36)

with the normalizing condition

    Σ_{i=0}^{L} Σ_{j=0}^{M} y_j(i) e = 1.    (37)
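The truncated system (27) is a finite linear system, and the mechanics of solving it can be sketched on a scalar birth-death analogue of Q^(L) (birth rate 1, death rate 2i out of level i; toy numbers, not the model's blocks), replacing one balance equation by the normalization:

```python
# Solve x Q = 0, x e = 1 for a small truncated generator by Gaussian
# elimination on Q^T with the last equation replaced by normalization.
Q = [[-1.0, 1.0, 0.0, 0.0],
     [2.0, -3.0, 1.0, 0.0],
     [0.0, 4.0, -5.0, 1.0],
     [0.0, 0.0, 6.0, -6.0]]
n = len(Q)
A = [[Q[j][i] for j in range(n)] for i in range(n)]   # A = Q^T
A[n - 1] = [1.0] * n                                  # normalization row
b = [0.0] * (n - 1) + [1.0]

for col in range(n):                       # forward elimination w/ pivoting
    piv = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, n):
        f = A[r][col] / A[col][col]
        for cc in range(col, n):
            A[r][cc] -= f * A[col][cc]
        b[r] -= f * b[col]
x = [0.0] * n
for r in range(n - 1, -1, -1):             # back substitution
    s = sum(A[r][cc] * x[cc] for cc in range(r + 1, n))
    x[r] = (b[r] - s) / A[r][r]
print(x)  # detailed balance gives x = (48, 24, 6, 1) / 79
```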
Further exploitation of the structure of the coefficient matrices appearing in equations (28)-(37) results in equations whose coefficient matrices are of dimension m only. For example, equation (28) decomposes into several equations, of which the first is

    y_{0,0,0}(0) D0 + η̄μ1 y_{0,1,0}(0) + μ2 y_{0,1,1}(0) = 0.

All other equations are written similarly, and the details are omitted.

3.2. Selected System Performance Measures
With the knowledge of the steady-state probability vector x, we can compute a variety of system performance measures to study the qualitative behavior of the model. Here, we list a few measures along with their formulas. These are given in general form and should be modified appropriately when using the direct truncation method or any other truncation method.

• The probability, P^{(1)}_{loss}, that a primary customer will be lost is calculated as

    P^{(1)}_{loss} = (1/λ) Σ_{i=0}^{∞} Σ_{j=0}^{M} Σ_{r=0}^{c} y_{j,c,r}(i) D1 e.    (38)

• The probability, P^{(2)}_{loss}, that a self-service (or secondary) customer will be lost is calculated as

    P^{(2)}_{loss} = q Σ_{i=1}^{∞} Σ_{r=0}^{c} y_{M,c,r}(i) e.    (39)

• The probability mass function, f^{NO}_j, and the mean, μ_NO, of the number of customers in the orbit are obtained as

    f^{NO}_j = Σ_{i=0}^{∞} Σ_{k=0}^{c} Σ_{r=0}^{k} y_{j,k,r}(i) e,   0 ≤ j ≤ M,
    μ_NO = Σ_{j=0}^{M} j f^{NO}_j.    (40)

• The probability mass function, f^{SS}_i, and the mean, μ_SS, of the number of self-service customers in the system are obtained as

    f^{SS}_i = Σ_{j=0}^{M} Σ_{k=0}^{c} Σ_{r=0}^{k} y_{j,k,r}(i) e,   i ≥ 0,
    μ_SS = Σ_{i=0}^{∞} i f^{SS}_i.    (41)

• The probability mass function, f^{BS}_k, and the mean, μ_BS, of the number of busy servers in the system are obtained as

    f^{BS}_k = Σ_{i=0}^{∞} Σ_{j=0}^{M} Σ_{r=0}^{k} y_{j,k,r}(i) e,   0 ≤ k ≤ c,
    μ_BS = Σ_{k=0}^{c} k f^{BS}_k.    (42)

• The fraction, f_SFO, of customers successfully capturing a free server from the orbit is calculated as

    f_SFO = α Σ_{j=1}^{M} j Σ_{i=0}^{∞} Σ_{k=0}^{c−1} Σ_{r=0}^{k} y_{j,k,r}(i) e / (α μ_NO + κ μ_SS).    (43)

• The fraction, f_SSS, of customers successfully capturing a free server from the self-service system is calculated as

    f_SSS = κ Σ_{i=1}^{∞} i Σ_{j=0}^{M} Σ_{k=0}^{c−1} Σ_{r=0}^{k} y_{j,k,r}(i) e / (α μ_NO + κ μ_SS).    (44)

• The fraction, f_SPC, of time the main system is busy with primary customers is calculated as

    f_SPC = Σ_{i=0}^{∞} Σ_{j=0}^{M} Σ_{k=1}^{c} Σ_{r=0}^{k−1} ((k − r)/c) y_{j,k,r}(i) e.    (45)

• The fraction, f_SSC, of time the main system is busy with secondary customers is calculated as

    f_SSC = Σ_{i=0}^{∞} Σ_{j=0}^{M} Σ_{k=1}^{c} Σ_{r=1}^{k} (r/c) y_{j,k,r}(i) e.    (46)

• The rate (or throughput), γ, of satisfied customers leaving the system is given by

    γ = λ (1 − P^{(1)}_{loss}) (1 − η P^{(2)}_{loss}).    (47)
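The marginal distributions (40)-(42) are plain summations over the steady-state vector; a sketch with toy probabilities (a uniform distribution over a tiny c = 1, M = 1, L = 1 state space, not output of the actual model):

```python
# Toy steady-state "vector": y[(i, j, k, r)] with i = self-service count,
# j = orbit count, k = busy servers, r = secondaries in service.
states = [(i, j, k, r) for i in range(2) for j in range(2)
          for k in range(2) for r in range(k + 1)]
y = {s: 1.0 / len(states) for s in states}   # uniform, just for illustration

# (40): orbit distribution and its mean.
f_NO = [sum(p for (i, j, k, r), p in y.items() if j == jj) for jj in range(2)]
mu_NO = sum(jj * f_NO[jj] for jj in range(2))

# (42): busy-server distribution and its mean.
f_BS = [sum(p for (i, j, k, r), p in y.items() if k == kk) for kk in range(2)]
mu_BS = sum(kk * f_BS[kk] for kk in range(2))

print(sum(f_NO), mu_NO, mu_BS)  # totals and means for the toy distribution
```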
4. ILLUSTRATIVE NUMERICAL EXAMPLES

In this section, we discuss some interesting numerical examples that qualitatively describe the model under study. The correctness and the accuracy of the code were verified by a number of accuracy checks. For example, we obtained the numerical solution for Poisson arrivals in its simple form. Next, we implemented the general algorithm, but using the following MAP representation. Let D0 be an irreducible, stable matrix whose eigenvalue of maximum real part is −τ < 0. Let φ denote the corresponding left eigenvector, normalized by φe = 1. Taking D1 =
−D0 e φ, the MAP representation reduces to the Poisson arrival process with intensity rate τ. The general algorithm does not utilize this fact in any manner, but the numerical results agreed closely. Also, letting c → ∞, the results should approach limits that involve only the parameters of the model. For example, we can use the following results (where the limits are as c → ∞):

    μ_NO → 0,    μ_SS → λη/κ,    μ_BS → λ/μ1 + λη/μ2,    f_SFO → 0,    f_SSS → 1.
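The Poisson-reduction check can be reproduced for any small stable D0; the matrix below is an assumed example, not one of the paper's test cases:

```python
# With phi the left eigenvector of D0 for its eigenvalue -tau of maximal
# real part (phi e = 1), the MAP (D0, -D0 e phi) is Poisson with rate tau.
D0 = [[-3.0, 1.0], [2.0, -4.0]]
tau = 2.0                      # eigenvalues of D0 are -2 and -5
phi = [2.0 / 3.0, 1.0 / 3.0]   # left eigenvector for -2, normalized

# Check phi D0 = -tau phi.
for j in range(2):
    assert abs(sum(phi[i] * D0[i][j] for i in range(2)) + tau * phi[j]) < 1e-12

# D1 = -D0 e phi; the stationary vector of Q* = D0 + D1 is phi itself
# (phi Q* = -tau phi + tau phi = 0), so the fundamental rate is phi D1 e.
negD0e = [-sum(D0[i]) for i in range(2)]
D1 = [[negD0e[i] * phi[j] for j in range(2)] for i in range(2)]
lam = sum(phi[i] * sum(D1[i]) for i in range(2))
print(lam)  # 2.0 = tau
```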
For the arrival process, we consider the following five sets of values for D0 and D1.

1. Erlang (ERL):
    D0 = | -5   5   0   0   0 |        D1 = | 0  0  0  0  0 |
         |  0  -5   5   0   0 |             | 0  0  0  0  0 |
         |  0   0  -5   5   0 |             | 0  0  0  0  0 |
         |  0   0   0  -5   5 |             | 0  0  0  0  0 |
         |  0   0   0   0  -5 |,            | 5  0  0  0  0 |.

2. Exponential (EXP):

    D0 = (−1),    D1 = (1).

3. Hyperexponential (HEX):

    D0 = | -1.90    0     |        D1 = | 1.710   0.190 |
         |  0      -0.19  |,            | 0.171   0.019 |.

4. MAP with Negative Correlation (MNC):

    D0 = | -1.00222   1.00222     0       |        D1 = |   0         0     0       |
         |  0        -1.00222     0       |             |   0.01002   0     0.9922  |
         |  0         0        -225.75    |,            | 223.4925    0     2.2575  |.

5. MAP with Positive Correlation (MPC):

    D0 = | -1.00222   1.00222     0       |        D1 = |   0        0       0        |
         |  0        -1.00222     0       |             |   0.9922   0       0.01002  |
         |  0         0        -225.75    |,            |   2.2575   0     223.4925   |.
All five MAP processes are normalized so as to have an arrival rate of one. However, they are qualitatively different in that they have different variance and correlation structures. The first three arrival processes, namely ERL, EXP, and HEX, correspond to renewal processes, and so their correlation is 0. The arrival process labelled MNC has correlated arrivals with a correlation value of −0.4889, and the arrivals corresponding to the process labelled MPC have a positive correlation with a value of 0.4889. The ratios of the standard deviations of these five arrival processes with respect to ERL are, respectively, 1.0, 2.236068, 5.019353, 3.151781, and 3.151781.
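These representations can be sanity-checked numerically; for instance, the fundamental rate of the MNC pair works out to one. A sketch using uniformization and power iteration for the stationary phase vector:

```python
# The MNC matrices; rows of Q* = D0 + D1 must sum to zero.
D0 = [[-1.00222, 1.00222, 0.0],
      [0.0, -1.00222, 0.0],
      [0.0, 0.0, -225.75]]
D1 = [[0.0, 0.0, 0.0],
      [0.01002, 0.0, 0.9922],
      [223.4925, 0.0, 2.2575]]
m = 3
Q = [[D0[i][j] + D1[i][j] for j in range(m)] for i in range(m)]
assert all(abs(sum(row)) < 1e-9 for row in Q)

# Stationary vector by uniformization + power iteration:
# P = I + Q/Lam is stochastic for Lam >= max |Q_ii|, and theta P = theta.
Lam = 300.0
P = [[(1.0 if i == j else 0.0) + Q[i][j] / Lam for j in range(m)]
     for i in range(m)]
theta = [1.0 / m] * m
for _ in range(20000):
    theta = [sum(theta[i] * P[i][j] for i in range(m)) for j in range(m)]

lam = sum(theta[i] * sum(D1[i]) for i in range(m))
print(round(lam, 4))  # close to 1.0: the representation has arrival rate one
```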
Figure 1. Mean number of customers in the orbit. [Graphs: panels (a) Erlang, (b) hyperexponential, (c) negatively correlated, and (d) positively correlated arrivals, each for M = 1 and M = 10, plotting the mean orbit size against η for several values of c.]
4.1. Example 1

The purpose of this example is to study the effect of the parameters η, M, c, and the type of arrival process on some selected performance measures. Here, we fix

    λ = 1,   μ1 = 2,   μ2 = 5,   α = 1,   κ = 1,   q = 0.1,

and vary M, η, and c. Figures 1-8 contain graphs of various performance measures. In the following, we list some key observations for each of these performance measures.
Figure 2. Mean number of customers in the self-service station. [Graphs: panels (a) Erlang, (b) hyperexponential, (c) negatively correlated, and (d) positively correlated arrivals, each for M = 0 and M = 10.]

4.1.1. The mean number of customers in the orbit
• As is to be expected, the mean number decreases as c increases and increases as M increases, in all cases.
• While the rate of decrease in going from c = 1 to c = 2 is significantly large for all arrival processes except MNC, this rate seems to be small in the MNC case.

4.1.2. The mean number of customers in the self-service unit

• When the orbit size is 0, it is interesting to note that the ERL arrivals show no significant difference in the mean number in self-service as c is increased. With respect to HEX
arrivals, we notice a difference in this performance measure as c is increased from 1 to 2, and the differences become smaller as c is increased further. More noticeable differences are seen for MNC arrivals when c is changed from 1 to 2. It should be pointed out that the offered load to the system is small here; when it is increased, we did see more pronounced differences in this measure for all arrivals (graphs are not included here). In all cases, we noticed the MNC arrivals showing the most increase when going from c = 1 to c = 2.

Figure 3. Mean number of busy servers. [Graphs: panels (a) Erlang, (b) hyperexponential, (c) negatively correlated, and (d) positively correlated arrivals, each for M = 0 and M = 10.]

• However, when M is increased to 10, we see noticeable differences in this measure as c increases for large values of η. Again, this is not surprising, as more primary customers
Figure 4. P(an arriving customer is lost). [Graphs: panels (a) single-server case (c = 1) and (b) two-server case (c = 2), each for M = 0 and M = 10, with curves for the five arrival processes.]

Figure 5. P(a self-service customer is lost). [Graphs: panels (a) single-server case (c = 1) and (b) two-server case (c = 2), each for M = 0 and M = 1, with curves for the five arrival processes.]
will be admitted when c is increased and, due to the high value of η, we see an increase in μ_SS.

4.1.3. The mean number of busy servers

• In the case of ERL and HEX arrivals, when c = 1, the mean number of busy servers does not seem to vary significantly as η is increased. However, for c > 1, we see a significant increase as η increases. As expected, the measure approaches the limit value of λ/μ1 + λη/μ2.
Figure 6. Fraction of customers successful from orbit to reach a free server. [Graphs: panels (a) Erlang, (b) hyperexponential, (c) negatively correlated, and (d) positively correlated arrivals, each for M = 1 and M = 10.]
• In the case of correlated arrivals, there appears to be a significant change in μBS as c is increased, and this measure approaches λ/μ1 + λη/μ2 as c → ∞. However, the rate of approach is very slow (i.e., a large value of c is needed) for MPC arrivals as compared to all others, including MNC.

4.1.4. P(an arrival is lost) = P(a primary customer is lost)
• As is to be expected, for all arrivals we see that this measure appears to decrease when either c or M is increased.
[Plots omitted: fraction of customers successful from self-service versus η, for M = 0 and M = 10.]
(a). Erlang arrivals. (b). Hyperexponential arrivals. (c). Negatively correlated arrivals. (d). Positively correlated arrivals.
Figure 7. Fraction of customers successful from self-service unit to reach a free server.
• With respect to renewal arrivals, this measure appears to increase with increasing variance of the interarrival times.
• However, when compared with correlated arrivals, this measure appears to be the largest in the case of positively correlated arrivals. Note that when c = 1 both MPC and MNC produce the same results. More on this will be discussed later.

4.1.5. P(a self-service customer is lost)
• Here, the orbit size appears to have a significant effect on this measure.
• Furthermore, MPC arrivals yield the smallest value.
[Plots omitted: mean number in self-service, mean number in orbit, P(all servers are idle), mean number of busy servers, P(an arrival is lost), P(a secondary customer is lost), fraction successful from self-service, fraction successful from orbit, and the fractions of the service system busy with primary and with secondary customers, each plotted against c for MNC and MPC arrivals with M = 0, 1, 5, 10.]
Figure 8. Comparison of various performance measures for MNC and MPC arrivals.

4.1.6. The fraction successful from orbit
• It is interesting to note that for all arrival processes, this measure appears to approach 0 as c increases. Intuitively, this can be explained as follows. As the number of available servers increases, there is less need for the orbit and hence the fraction goes to zero.
• As expected, this fraction increases when M is increased.
• Among the renewal arrivals, we notice that this fraction appears to increase with increasing variance of the interarrival times.
• We notice a very significant drop in this fraction in the case of MPC arrivals when going from c = 1 to c = 2.

4.1.7. The fraction successful from self-service unit
For all arrival processes, we see that this fraction appears to decrease as M increases. This is to be expected, since a decrease in the buffer size makes the secondary customers try more often (from the self-service unit) to capture a free server. This fraction appears to increase as c increases, which is again intuitive, as the chances of capturing a free server are higher for both types of customers. For MPC arrivals, this measure appears to approach one faster than in any other case as c → ∞.
4.1.8. Comparison of MNC and MPC arrivals
The purpose of this comparison is to see how positively and negatively correlated arrivals behave when all other parameters are fixed. Note that these two arrival processes have the same mean and variance.
• When c = 1, these two arrival processes have identical values for all system performance measures. This is due to the fact that these two arrival processes, having the same D0 matrix, produce an identical steady-state probability vector x in all the examples we ran so far. This is an interesting fact worth proving mathematically, and we are currently investigating it.
• For c > 1, we see differences between these two arrival processes. For example, with respect to the performance measures μNO, μSS, μBS, and fSFO, MNC arrivals tend to produce a larger value as compared to MPC arrivals. For the other performance measures, P^(1)_loss, P^(2)_loss, fSSS, and P(all servers are idle), MPC arrivals produce a larger value.
• The above observations indicate the vital role played by correlation among the interarrival times, which is largely ignored in the literature.

4.1.9. An optimization problem
One of the challenges facing any service industry is to determine how many service people (or technical experts) need to be hired so that the customers can be serviced (and be made happy) as efficiently as possible. Apart from judging the quality of the products, customers are interested in the quality of service they get from the manufacturer. The quality of service may be measured in a number of ways, and it is not our purpose to review or summarize all aspects of quality. Ideally, if any customer can access a service person when needed, the customer will be happy. However, when a customer cannot access a service person because all service people are busy serving other customers, the customer will have a negative feeling and may consider the quality of service to be poor. This will drive customers away from the current manufacturer to its competitor(s), resulting in a reduced market share for the manufacturer. At the same time, the manufacturer wants to minimize the number of service people hired (due to the costs of hiring them). Thus, here we propose an optimization problem that takes into account various costs and arrives at an optimum value for c. Specifically, we define
• d1 = cost per service person per unit of time,
• d2 = cost per lost primary customer per unit of time,
• d3 = cost per lost secondary customer per unit of time.
The optimization problem of interest is to find the optimum number of service persons to hire so that the total expected cost per unit of time is minimized. That is,

min_c [ c d1 + d2 λ P^(1)_loss + d3 λ (1 − P^(1)_loss) P^(2)_loss ],   (39)
where P^(1)_loss and P^(2)_loss are as defined in (31) and (32).

Without loss of generality, we will take d1 = 1 and rewrite equation (39) as

min_c [ c + d2 λ P^(1)_loss + d3 λ (1 − P^(1)_loss) P^(2)_loss ].   (40)
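Once the loss probabilities are computable from the steady-state analysis (equations (31) and (32)), finding c* in (40) is a one-dimensional search over the integer c. A minimal sketch, where `optimal_servers` and the placeholder decay curves `p1`/`p2` are hypothetical stand-ins for the true matrix-analytic quantities:

```python
def optimal_servers(lam, d2, d3, p1_loss, p2_loss, c_max=50):
    """Minimize (40): c + d2*lam*P1(c) + d3*lam*(1 - P1(c))*P2(c) over c = 1..c_max.

    p1_loss, p2_loss: callables returning the primary/secondary loss
    probabilities for c servers (stand-ins for the quantities defined in
    (31) and (32); the paper computes them from the steady-state vector).
    """
    def cost(c):
        p1 = p1_loss(c)
        return c + d2 * lam * p1 + d3 * lam * (1.0 - p1) * p2_loss(c)

    # Exhaustive search over the (small) integer range is cheapest here.
    return min(range(1, c_max + 1), key=cost)


# Illustrative (hypothetical) loss curves that decay as c grows; the true
# curves must come from the model's steady-state solution.
p1 = lambda c: 0.5 ** c
p2 = lambda c: 0.3 * 0.5 ** c
c_star = optimal_servers(lam=5.0, d2=2.0, d3=2.0, p1_loss=p1, p2_loss=p2)  # -> 3
```

With these placeholder curves, raising d2 = d3 can only push c* upward, mirroring the monotone pattern seen in Table 1.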
4.2. Example 2
In this example we look at the effect of λ, d2, d3, and the type of arrival process on the optimum value, c*, of the optimization problem (40). We fix all other parameters as follows: M = 10, μ1 = 1, μ2 = 2, α = 1, n = 1, η = 0.9, and q = 0.1. Furthermore, we take d2 = d3.

Table 1 contains the values of c* for various combinations. An examination of the table reveals the following observations.
• As expected, c* is a nondecreasing function of λ for all arrival processes.
• With respect to renewal arrivals, it is interesting to note a pattern in c* as a function of d2. For d2 up to a certain value, c* appears to increase with decreasing variance of the interarrival times. After this cut-off value, c* appears to increase with increasing variance of the interarrival times.
• In the case of correlated arrivals, c* for MPC arrivals is always smaller as compared to MNC arrivals.

Table 1. Values of c* when M = 10, μ1 = 1, μ2 = 2, α = 1, n = 1, η = 0.9, q = 0.1.

d2 = d3   Arrival   λ = 1   λ = 2   λ = 5   λ = 10
0.5       ERL         1       1       1       1
          EXP         1       1       1       1
          HEX         1       1       1       1
          MNC         1       1       1       1
          MPC         1       1       1       1
1         ERL         1       1       1       1
          EXP         1       1       1       1
          HEX         1       1       1       1
          MNC         1       1       1       1
          MPC         1       1       1       1
1.5       ERL         1       1       1       1
          EXP         1       1       1       1
          HEX         1       1       1       1
          MNC         1       1       1       1
          MPC         1       1       1       1
2         ERL         1       1       5      12
          EXP         1       1       5      11
          HEX         1       1       2       8
          MNC         1       1       4      11
          MPC         1       1       2       5
2.5       ERL         1       2       7      14
          EXP         1       2       7      14
          HEX         1       1       5      13
          MNC         1       2       7      14
          MPC         1       1       3       7
3         ERL         1       3       8      16
          EXP         1       3       8      16
          HEX         1       2       7      16
          MNC         1       3       8      16
          MPC         1       1       4       8
3.5       ERL         2       3       9      17
          EXP         2       3       9      17
          HEX         1       3       9      17
          MNC         2       4       9      17
          MPC         1       2       4       9
4         ERL         2       4       9      17
          EXP         2       4       9      18
          HEX         2       4      10      19
          MNC         2       4       9      18
          MPC         1       2       5       9
4.5       ERL         2       4       9      18
          EXP         2       4      10      18
          HEX         2       4      10      20
          MNC         2       4      10      18
          MPC         1       2       5      10
5         ERL         2       4      10      18
          EXP         2       4      10      19
          HEX         2       5      11      20
          MNC         2       5      10      19
          MPC         1       2       5      10