Applied Mathematics and Computation 183 (2006) 1399–1409
Computational analysis of the maximal queue length in the MAP/M/c retrial queue

Jesus R. Artalejo, Srinivas R. Chakravarthy *

Department of Statistics and Operations Research, Faculty of Mathematics, Complutense University of Madrid, Madrid 28040, Spain
Department of Industrial and Manufacturing Engineering, Kettering University, Flint, MI 48504, USA
Abstract

We consider a multi-server retrial queueing model in which arrivals occur according to a Markovian arrival process. Using a continuous-time Markov chain with absorbing states, we determine the distribution of the maximum number of customers in a retrial orbit. Illustrative numerical examples that reveal some interesting results are presented.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Markovian arrival process; Retrial; Busy period; Queueing; Algorithmic probability
1. Introduction

Our main objective in this paper is to determine the distribution of the maximum number of customers in a retrial orbit during a busy period of the MAP/M/c retrial queue. Retrial queues are motivated by many queueing situations in which blocked customers do not leave the system permanently. Instead, they compete for service after waiting for a random amount of time. Between trials a blocked customer is said to be in orbit. For a detailed review of the main results and the literature the reader is referred to [13], and for a comparative analysis of retrial queues and their standard counterparts one may refer to [2].

The maximum queue length of a queueing process is often a performance descriptor of interest. Moderate values of the queue length indicate that the system is operating successfully, whereas large values indicate that some kind of action, such as adding an extra server, increasing the current service rate, or rescheduling jobs, may be warranted on the part of the system manager. The existing literature shows that extreme values of the queue length can be investigated in different ways. For example, in [19] the emphasis is on the study of the distribution of the maximum queue length during a busy period, whereas in [23] an asymptotic analysis approach is taken. In this paper, we adopt the former approach to study the maximal number in the orbit queue during a busy period.

The analysis of the maximum number of customers in orbit for the M/G/1 and the M/M/c retrial queues is the subject matter of some recent papers [3,14,16].
* Corresponding author.
E-mail addresses: [email protected] (J.R. Artalejo), [email protected] (S.R. Chakravarthy).
doi:10.1016/j.amc.2006.05.140
While a number of papers on retrial queues employing matrix-analytic methods have appeared in the literature (see e.g. [1,4,6,7,10–12,15,24]), to our knowledge only one paper [16] deals with the study of the maximum queue length during a busy period using matrix-analytic methods. The authors in [16] consider the M/G/1 retrial queue where some particular choices of the phase type distribution are taken as the service time distribution. Our purpose here is to fill this gap by specifically studying the maximum queue length in a busy period for the MAP/M/c retrial queue. Since we deal with a multi-server queue with a Markovian arrival process, our model and the subsequent methodology differ substantially from the approach in [16]. More concretely, our main goals in this paper can be summarized as follows:

(a) We show how to evaluate the maximal queue length using matrix-analytic methods. Although the methodology is presented for the MAP/M/c retrial queue, it also holds for any two-dimensional Markov queueing model.
(b) Most retrial queues assume that the orbit is an invisible queue, i.e., the system manager cannot observe the queue. This occurs, for instance, in applications to mobile telephone systems. Thus, it seems interesting to study performance characteristics of retrial queueing systems related directly to the orbit behavior. One of the key characteristics that describes the orbit congestion is the maximal orbit size.
(c) We put emphasis on the computational analysis of Markovian arrival processes with different correlation structures. The importance of studying correlated arrivals is well known in many telecommunication systems. However, the numerical aspect of this fact is largely ignored in the literature.
(d) We illustrate how the maximal queue length distribution can be used in optimal design problems. To this end, in Section 5 we propose a simple optimization problem involving the number of available servers c. Many other similar optimization problems can be formulated. For example, we could find optimal retrial and service rates such that the event of the retrial orbit size exceeding a pre-specified value has an arbitrarily small probability.

The rest of the paper is organized as follows. In Section 2 we briefly describe the mathematical model. The steady-state analysis of the maximum number of customers in orbit is presented in Section 3, and the algorithmic analysis is discussed in Section 4. Some illustrative numerical results as well as an optimization problem are given in Section 5. A very brief review of the Markovian arrival process (MAP) is given in the appendix.

2. Description of the model

In this paper, we consider a retrial queueing model in which arrivals occur according to a Markovian arrival process with representation $(D_0, D_1)$ of order $m$. A brief description of this arrival process is given in the appendix. Services are offered on a first-come-first-served basis by one of $c$ homogeneous servers. We assume that the service times are exponentially distributed with rate $\mu$. Any arriving customer finding all servers busy enters a retrial orbit. The orbit is of infinite size and, from there, the blocked customers compete for service. The retrial times are assumed to be exponentially distributed with parameter $\theta$. We assume that the arrival process, the retrial times and the service times are mutually independent.
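To make the model dynamics concrete, the following Monte Carlo sketch (our own illustration, not part of the paper's method; numpy assumed, and the function name, its arguments and the busy-period bookkeeping are hypothetical) simulates the chain exactly as described above and records the maximum orbit size over successive busy periods:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_busy_periods(D0, D1, c, mu, theta, num_busy_periods=10000):
    """Monte Carlo estimate of the maximum orbit size per busy period.

    State is (orbit, busy, phase): MAP phase changes/arrivals from (D0, D1),
    c exponential(mu) servers, each orbiting customer retries at rate theta,
    and a retrial is only effective when a server is free."""
    m = D0.shape[0]
    phase, orbit, busy = 0, 0, 0
    maxima = []
    current_max = None                       # None while the system is empty
    while len(maxima) < num_busy_periods:
        rates, moves = [], []
        for j in range(m):
            if j != phase:
                rates.append(D0[phase, j]); moves.append(('phase', j))
            rates.append(D1[phase, j]); moves.append(('arrival', j))
        if busy > 0:
            rates.append(busy * mu); moves.append(('service', None))
        if orbit > 0 and busy < c:
            rates.append(orbit * theta); moves.append(('retrial', None))
        rates = np.array(rates)
        # The sojourn time is not needed to track Nmax, so we only sample the jump.
        kind, j = moves[rng.choice(len(moves), p=rates / rates.sum())]
        if kind == 'phase':
            phase = j
        elif kind == 'arrival':
            phase = j
            if busy == 0 and orbit == 0:
                current_max = 0              # an arrival to an empty system starts a busy period
            if busy < c:
                busy += 1
            else:
                orbit += 1
            if current_max is not None:
                current_max = max(current_max, orbit)
        elif kind == 'service':
            busy -= 1
            if busy == 0 and orbit == 0:     # system empty again: busy period ends
                maxima.append(current_max)
                current_max = None
        else:                                # effective retrial: orbit -> server
            orbit -= 1
            busy += 1
    return np.array(maxima)
```

Such a simulation is only a sanity check; the exact distribution of the maximum orbit size is obtained analytically in Sections 3 and 4.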
For use in the sequel, let $\mathbf{e}(r)$, $\mathbf{e}_j(r)$ and $I_r$ denote, respectively, the (column) vector of dimension $r$ consisting of 1's, the column vector of dimension $r$ with 1 in the $j$th position and 0 elsewhere, and the identity matrix of dimension $r$. When there is no need to emphasize the dimension of these vectors we will suppress the suffix. Thus, $\mathbf{e}$ will denote a column vector of 1's of appropriate dimension. A prime ($'$) on a vector or matrix will stand for the transpose. The notation $\otimes$ will stand for the Kronecker product of two matrices. Thus, if $A$ is a matrix of order $m \times n$ and if $B$ is a matrix of order $p \times q$, then $A \otimes B$ will denote a matrix of order $mp \times nq$ whose $(i,j)$th block matrix is given by $a_{ij}B$. For more details on Kronecker products, we refer the reader to [18].
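As a minimal illustration of this notation (numpy assumed; the matrices below are arbitrary examples, not model data):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [2.0, 3.0]])          # 2 x 2
B = np.eye(3)                        # 3 x 3 identity, I_3

# Kronecker product: the (i, j)-th block of A (x) B is a_ij * B, so the result is 6 x 6.
K = np.kron(A, B)
print(K.shape)                       # (6, 6)

# Column vector of ones e(6) and the unit vector e_2(6).
e = np.ones((6, 1))
e2 = np.zeros((6, 1)); e2[1, 0] = 1.0
```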
3. The steady-state analysis of the maximum number of customers in orbit

In this section, we will analyze the model in steady state. Let $N(t)$, $C(t)$, and $M(t)$ denote, respectively, the number of customers in the retrial orbit, the number of servers busy, and the phase of the arrival process at time $t$. The process $\{(N(t), C(t), M(t)) : t \ge 0\}$ is a continuous-time Markov chain with state space given by

$$\Omega = \{(i, j, k) : i \ge 0,\ 0 \le j \le c,\ 1 \le k \le m\}. \tag{1}$$

By level $0^*$ we denote the set of $m$ states given by

$$0^* = \{(0, 0, k) : 1 \le k \le m\}. \tag{2a}$$

By level $0$ we denote the set of $cm$ states given by

$$0 = \{(0, j, k) : 1 \le j \le c,\ 1 \le k \le m\}. \tag{2b}$$

By level $i$, for $i \ge 1$, we denote the set of $(c+1)m$ states given by

$$i = \{(i, j, k) : 0 \le j \le c,\ 1 \le k \le m\}. \tag{2c}$$

The infinitesimal generator of the Markov chain governing the system, in partitioned form, is given by

$$Q = \begin{pmatrix}
D_0 & \mathbf{e}'_1(c) \otimes D_1 & & & & \\
\mu\,\mathbf{e}_1(c) \otimes I & A_{10} & A_{00} & & & \\
 & A_{21} & A_{11} & A_0 & & \\
 & & A_{22} & A_{12} & A_0 & \\
 & & & A_{23} & A_{13} & A_0 \\
 & & & & \ddots & \ddots & \ddots
\end{pmatrix}, \tag{3}$$

where the coefficient matrices appearing in (3) are given by

$$A_{10} = \begin{pmatrix}
D_0 - \mu I & D_1 & & \\
2\mu I & D_0 - 2\mu I & D_1 & \\
 & \ddots & \ddots & \ddots \\
 & & c\mu I & D_0 - c\mu I
\end{pmatrix}, \tag{4}$$

$A_{00}$ is a rectangular matrix of dimension $cm \times (c+1)m$ whose elements are all zero except the $(c, c+1)$th block entry, which is given by $D_1$,

$$A_{1i} = \begin{pmatrix}
D_0 - i\theta I & D_1 & & & \\
\mu I & D_0 - (\mu + i\theta)I & D_1 & & \\
 & \ddots & \ddots & \ddots & \\
 & & (c-1)\mu I & D_0 - ((c-1)\mu + i\theta)I & D_1 \\
 & & & c\mu I & D_0 - c\mu I
\end{pmatrix}, \quad i \ge 1, \tag{5}$$

$$A_{21} = \theta \begin{pmatrix}
I & & \\
 & \ddots & \\
 & & I \\
0 & \cdots & 0
\end{pmatrix}, \qquad
A_{2i} = i\theta \begin{pmatrix}
0 & I & & \\
 & \ddots & \ddots & \\
 & & 0 & I \\
 & & & 0
\end{pmatrix}, \quad i \ge 2, \tag{6}$$

and $A_0$ is a square matrix of dimension $(c+1)m$ whose elements are all zero except the $(c+1, c+1)$th block entry, which is given by $D_1$.

Let $x$, partitioned as $x = (x^*, x(0), x(1), \ldots)$, denote the steady-state probability vector of $Q$. That is, $x$ satisfies

$$x Q = \mathbf{0}, \qquad x\mathbf{e} = 1. \tag{7}$$
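The block structure in (3)-(6) translates directly into code. The sketch below (numpy assumed; the helper name blocks and its interface are ours, not from the paper) assembles the coefficient matrices as ordinary dense arrays:

```python
import numpy as np

def blocks(D0, D1, c, mu, theta, i):
    """Coefficient blocks of the generator (3), following (4)-(6), for orbit level i >= 1."""
    m = D0.shape[0]
    I = np.eye(m)

    # A10: block tridiagonal over server levels j = 1..c  (dimension cm x cm), see (4)
    A10 = np.zeros((c * m, c * m))
    for j in range(1, c + 1):
        r = slice((j - 1) * m, j * m)
        A10[r, r] = D0 - j * mu * I
        if j < c:
            A10[r, j * m:(j + 1) * m] = D1
        if j > 1:
            A10[r, (j - 2) * m:(j - 1) * m] = j * mu * I

    # A00: only nonzero block is D1 in position (c, c+1)  (cm x (c+1)m)
    A00 = np.zeros((c * m, (c + 1) * m))
    A00[(c - 1) * m:, c * m:] = D1

    # A1i: block tridiagonal over j = 0..c, with retrial rate i*theta on the diagonal for j < c, see (5)
    A1i = np.zeros(((c + 1) * m, (c + 1) * m))
    for j in range(c + 1):
        r = slice(j * m, (j + 1) * m)
        A1i[r, r] = D0 - (j * mu + (i * theta if j < c else 0.0)) * I
        if j < c:
            A1i[r, (j + 1) * m:(j + 2) * m] = D1
        if j > 0:
            A1i[r, (j - 1) * m:j * m] = j * mu * I

    # A2i: successful retrial, (i, j) -> (i-1, j+1), see (6).  For i = 1 the target
    # level 0 has no sub-level j = 0, so A21 = blocks(...)[3][:, m:] with i = 1.
    A2i = np.zeros(((c + 1) * m, (c + 1) * m))
    for j in range(c):
        A2i[j * m:(j + 1) * m, (j + 1) * m:(j + 2) * m] = i * theta * I

    # A0: arrival when all servers are busy; only block (c+1, c+1) equals D1
    A0 = np.zeros(((c + 1) * m, (c + 1) * m))
    A0[c * m:, c * m:] = D1

    return A10, A00, A1i, A2i, A0
```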
The details of the computation of the vector $x$ can be obtained from the model considered in [9] by taking $p = 0$ in their model.

We next define what we mean by a busy period. The busy period of the MAP/M/c retrial queue is defined as the duration that starts at an epoch when an arriving customer finds the system empty and ends when the system becomes empty again at a departure epoch. Define $N_{\max}$ as the maximum number of customers seen in the retrial orbit during a busy period. First note that the event $\{N_{\max} < n\}$ corresponds to the event that, starting from the sub-level $(0, 1)$ of level $0$, the Markov process $\{(N(t), C(t), M(t)) : t \ge 0\}$ hits level $0^*$ before hitting level $n$, for $n \ge 1$. Thus, the computation of $P(N_{\max} < n)$, for $n \ge 1$, reduces to finding the probability of absorption into a particular state in a finite-state continuous-time Markov chain with two absorbing states. Specifically, let $Q(n)$ denote the generator of the Markov chain with two absorbing states, say $0^*$ and $n^*$, whose form is given by

$$Q(n) = \begin{pmatrix}
0 & \mathbf{0} & & & & & \\
\mu\,\mathbf{e}_1(c) \otimes \mathbf{e} & A_{10} & A_{00} & & & & \\
 & A_{21} & A_{11} & A_0 & & & \\
 & & A_{22} & A_{12} & A_0 & & \\
 & & & \ddots & \ddots & \ddots & \\
 & & & & A_{2,n-1} & A_{1,n-1} & \mathbf{e}_{c+1}(c+1) \otimes D_1\mathbf{e} \\
 & & & & & \mathbf{0} & 0
\end{pmatrix}, \tag{8}$$

where the coefficient matrices appearing in (8) are as given in (4)–(6). Let $T(n)$ denote the part of the generator $Q(n)$ that corresponds to the transient states of the Markov chain, and let $T^0(n)$ be the column vector of dimension $(n-1)(c+1)m + cm$ whose only nonzero (block) entry occurs in the first block and is given by $\mu\mathbf{e}$. Define $g(n) = \kappa(x^* D_1, \mathbf{0}, \ldots, \mathbf{0})$, where $\kappa$ is the normalizing constant such that $g(n)\mathbf{e} = 1$. In the sequel we will write $g_0(n) = \kappa\, x^* D_1$. Then, it is easy to verify that

$$P(N_{\max} < n) = g(n)\,(-T(n))^{-1}\,T^0(n), \qquad n \ge 1. \tag{9}$$
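Formula (9) can be checked by brute force: assemble $T(n)$ from the blocks above and solve a single linear system. A sketch follows (numpy assumed; it reuses the hypothetical blocks helper from the previous sketch, and x_star stands for the sub-vector $x^*$ of (7), which must be computed separately, e.g. following [9]):

```python
import numpy as np

def prob_nmax_less(n, D0, D1, c, mu, theta, x_star):
    """P(Nmax < n) via (9): g(n) (-T(n))^{-1} T0(n), by direct construction of T(n)."""
    m = D0.shape[0]
    dim = c * m + (n - 1) * (c + 1) * m              # number of transient states of Q(n)
    T = np.zeros((dim, dim))

    A10, A00, *_ = blocks(D0, D1, c, mu, theta, 1)
    T[:c * m, :c * m] = A10
    if n > 1:
        T[:c * m, c * m:c * m + (c + 1) * m] = A00
    off = lambda lev: c * m + (lev - 1) * (c + 1) * m   # start index of level lev >= 1
    for lev in range(1, n):
        _, _, A1i, A2i, A0 = blocks(D0, D1, c, mu, theta, lev)
        r = slice(off(lev), off(lev) + (c + 1) * m)
        T[r, r] = A1i
        if lev == 1:
            T[r, :c * m] = A2i[:, m:]                   # A21: level 1 -> level 0
        else:
            T[r, off(lev - 1):off(lev - 1) + (c + 1) * m] = A2i
        if lev < n - 1:
            T[r, off(lev + 1):off(lev + 1) + (c + 1) * m] = A0

    # T0(n): absorption into 0*, nonzero only for the states (0, 1, k), rate mu each.
    T0 = np.zeros(dim)
    T0[:m] = mu

    # g(n): the busy period starts in (0, 1, k) with probability proportional to (x* D1)_k.
    g = np.zeros(dim)
    g[:m] = x_star @ D1 / (x_star @ D1 @ np.ones(m))

    return g @ np.linalg.solve(-T, T0)
```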
The comparison of different methods for solving such block-tridiagonal systems of linear equations is not our aim in this paper. However, we note that as $n$ becomes large it is imperative to have an efficient way to compute the probability $P(N_{\max} < n)$, so in the next section we describe an efficient algorithm for the computation of this probability.

4. Algorithm for computing $P(N_{\max} < n)$

In this section we give an efficient algorithm for computing the probability of interest. Before we proceed with the description of the algorithm, we record a number of observations to ease the understanding of the particular form of the equations proposed.

• The algorithmic solution proposed here requires manipulation of matrices whose dimensions are of order only $m$.
• To minimize the storage requirements for the auxiliary matrices needed, the equations are modified accordingly. For example, where we need matrices of the form $(i\mu I - D_0)^{-1}$, $1 \le i \le c$, we simply store only $(c\mu I - D_0)^{-1}$ and modify the equations appropriately. Note that this process may slow the convergence.
• The equations are displayed in such a way that they are ready for numerical implementation. Here one can use a number of numerical methods, such as Gauss–Seidel iteration and the aggregate/disaggregate method.

Define

$$b(n) = g(n)\,(-T(n))^{-1}, \qquad n \ge 1. \tag{10}$$
Partition $b(n)$ as

$$b(n) = \bigl(b^{(n)}(0), b^{(n)}(1), \ldots, b^{(n)}(n-1)\bigr)$$

and further partition $b^{(n)}(k)$, $0 \le k \le n-1$, as follows:

$$b^{(n)}(0) = \bigl(b^{(n)}_1(0), \ldots, b^{(n)}_c(0)\bigr) \quad\text{and}\quad b^{(n)}(k) = \bigl(b^{(n)}_0(k), \ldots, b^{(n)}_c(k)\bigr), \quad 1 \le k \le n-1.$$

First note that $P(N_{\max} < n) = \mu\,b^{(n)}_1(0)\,\mathbf{e}$. The equation in (10) can be written in terms of matrices of dimension $m$ that are well suited for numerical implementation. The necessary equations are as follows. We need to distinguish the cases $c = 1$ and $c \ge 2$. Also, the case $n = 1$ needs to be written separately.

Case 1. $c = 1$ and $n = 1$. In this case, the equations are given by

$$b^{(n)}_1(0) = g_0(n)\,(\mu I - D_0)^{-1}.$$

Case 2. $c = 1$ and $n \ge 2$. In this case, the equations are given by

$$b^{(n)}_1(0) = \bigl[g_0(n) + \theta\,b^{(n)}_0(1)\bigr](\mu I - D_0)^{-1}$$

and, for $1 \le k \le n-2$,

$$b^{(n)}_0(k) = \bigl[(n-1-k)\theta\,b^{(n)}_0(k) + \mu\,b^{(n)}_1(k)\bigr]\bigl((n-1)\theta I - D_0\bigr)^{-1},$$
$$b^{(n)}_1(k) = \bigl[(b^{(n)}_0(k) + b^{(n)}_1(k-1))D_1 + (k+1)\theta\,b^{(n)}_0(k+1)\bigr](\mu I - D_0)^{-1}$$

and

$$b^{(n)}_0(n-1) = \mu\,b^{(n)}_1(n-1)\bigl((n-1)\theta I - D_0\bigr)^{-1},$$
$$b^{(n)}_1(n-1) = \bigl[b^{(n)}_0(n-1) + b^{(n)}_1(n-2)\bigr]D_1(\mu I - D_0)^{-1}.$$

Case 3. $c \ge 2$ and $n = 1$. In this case, the equations are given by

$$b^{(n)}_1(0) = \bigl[g_0(n) + 2\mu\,b^{(n)}_2(0) + (c-1)\mu\,b^{(n)}_1(0)\bigr](c\mu I - D_0)^{-1},$$
$$b^{(n)}_i(0) = \bigl[b^{(n)}_{i-1}(0)D_1 + (c-i)\mu\,b^{(n)}_i(0) + (i+1)\mu\,b^{(n)}_{i+1}(0)\bigr](c\mu I - D_0)^{-1}, \quad 2 \le i \le c-1,$$
$$b^{(n)}_c(0) = b^{(n)}_{c-1}(0)\,D_1\,(c\mu I - D_0)^{-1}.$$

Case 4. $c \ge 2$ and $n \ge 2$. In this case, the equations are given by

$$b^{(n)}_1(0) = \bigl[g_0(n) + 2\mu\,b^{(n)}_2(0) + (c-1)\mu\,b^{(n)}_1(0) + \theta\,b^{(n)}_0(1)\bigr](c\mu I - D_0)^{-1},$$
$$b^{(n)}_i(0) = \bigl[b^{(n)}_{i-1}(0)D_1 + (c-i)\mu\,b^{(n)}_i(0) + (i+1)\mu\,b^{(n)}_{i+1}(0) + \theta\,b^{(n)}_{i-1}(1)\bigr](c\mu I - D_0)^{-1}, \quad 2 \le i \le c-1,$$
$$b^{(n)}_c(0) = \bigl[b^{(n)}_{c-1}(0)D_1 + \theta\,b^{(n)}_{c-1}(1)\bigr](c\mu I - D_0)^{-1}$$

and, for $1 \le k \le n-2$,

$$b^{(n)}_0(k) = \bigl[((c-1)\mu + (n-1-k)\theta)\,b^{(n)}_0(k) + \mu\,b^{(n)}_1(k)\bigr]\bigl(((c-1)\mu + (n-1)\theta)I - D_0\bigr)^{-1},$$
$$b^{(n)}_i(k) = \bigl[b^{(n)}_{i-1}(k)D_1 + ((c-1-i)\mu + (n-1-k)\theta)\,b^{(n)}_i(k) + (i+1)\mu\,b^{(n)}_{i+1}(k) + (k+1)\theta\,b^{(n)}_{i-1}(k+1)\bigr]\bigl(((c-1)\mu + (n-1)\theta)I - D_0\bigr)^{-1}, \quad 1 \le i \le c-1,$$
$$b^{(n)}_c(k) = \bigl[(b^{(n)}_{c-1}(k) + b^{(n)}_c(k-1))D_1 + (k+1)\theta\,b^{(n)}_{c-1}(k+1)\bigr](c\mu I - D_0)^{-1},$$

together with

$$b^{(n)}_0(n-1) = \bigl[(c-1)\mu\,b^{(n)}_0(n-1) + \mu\,b^{(n)}_1(n-1)\bigr]\bigl(((c-1)\mu + (n-1)\theta)I - D_0\bigr)^{-1},$$
$$b^{(n)}_i(n-1) = \bigl[b^{(n)}_{i-1}(n-1)D_1 + (c-1-i)\mu\,b^{(n)}_i(n-1) + (i+1)\mu\,b^{(n)}_{i+1}(n-1)\bigr]\bigl(((c-1)\mu + (n-1)\theta)I - D_0\bigr)^{-1}, \quad 1 \le i \le c-1,$$
$$b^{(n)}_c(n-1) = \bigl[b^{(n)}_{c-1}(n-1) + b^{(n)}_c(n-2)\bigr]D_1(c\mu I - D_0)^{-1}.$$
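As an illustration of how the displayed equations are meant to be used, the sketch below implements Case 2 ($c = 1$, $n \ge 2$) as Gauss–Seidel-type sweeps (numpy assumed; the function and variable names are ours, and g0 is the row vector $g_0(n)$ defined in Section 3):

```python
import numpy as np

def prob_nmax_less_c1(n, D0, D1, mu, theta, g0, tol=1e-12, max_sweeps=100000):
    """Case 2 of Section 4 (c = 1, n >= 2): iterate the displayed fixed-point equations."""
    m = D0.shape[0]
    I = np.eye(m)
    inv_mu = np.linalg.inv(mu * I - D0)
    inv_th = np.linalg.inv((n - 1) * theta * I - D0)

    b0 = np.zeros((n, m))           # b0[k] ~ b_0^{(n)}(k); b0[0] is unused (level 0 has only j = 1)
    b1 = np.zeros((n, m))           # b1[k] ~ b_1^{(n)}(k)
    for _ in range(max_sweeps):
        prev = b1[0].copy()
        b1[0] = (g0 + theta * b0[1]) @ inv_mu
        for k in range(1, n - 1):
            b0[k] = ((n - 1 - k) * theta * b0[k] + mu * b1[k]) @ inv_th
            b1[k] = ((b0[k] + b1[k - 1]) @ D1 + (k + 1) * theta * b0[k + 1]) @ inv_mu
        b0[n - 1] = mu * b1[n - 1] @ inv_th
        b1[n - 1] = (b0[n - 1] + b1[n - 2]) @ D1 @ inv_mu
        if np.max(np.abs(b1[0] - prev)) < tol:
            break
    return mu * b1[0].sum()         # P(Nmax < n) = mu * b_1^{(n)}(0) e
```

Cases 3 and 4 follow the same pattern, sweeping over the server index $i$ as well as the orbit level $k$.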
5. Illustrative numerical examples and an optimization problem

In this section we discuss some interesting numerical examples that qualitatively describe the model under study. Also, we propose a simple optimization problem drawn from a class of optimization problems of interest. For the arrival process, we consider the following five sets of values for $D_0$ and $D_1$.

1. Erlang (ERL):

$$D_0 = \begin{pmatrix}
-5 & 5 & 0 & 0 & 0 \\
0 & -5 & 5 & 0 & 0 \\
0 & 0 & -5 & 5 & 0 \\
0 & 0 & 0 & -5 & 5 \\
0 & 0 & 0 & 0 & -5
\end{pmatrix}, \qquad
D_1 = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
5 & 0 & 0 & 0 & 0
\end{pmatrix}.$$
2. Exponential (EXP):

$$D_0 = (-1), \qquad D_1 = (1).$$

3. Hyperexponential (HEX):

$$D_0 = \begin{pmatrix} -1.90 & 0 \\ 0 & -0.19 \end{pmatrix}, \qquad
D_1 = \begin{pmatrix} 1.710 & 0.190 \\ 0.171 & 0.019 \end{pmatrix}.$$

4. MAP with negative correlation (MNC):

$$D_0 = \begin{pmatrix} -1.00222 & 1.00222 & 0 \\ 0 & -1.00222 & 0 \\ 0 & 0 & -225.75 \end{pmatrix}, \qquad
D_1 = \begin{pmatrix} 0 & 0 & 0 \\ 0.01002 & 0 & 0.9922 \\ 223.4925 & 0 & 2.2575 \end{pmatrix}.$$

5. MAP with positive correlation (MPC):

$$D_0 = \begin{pmatrix} -1.00222 & 1.00222 & 0 \\ 0 & -1.00222 & 0 \\ 0 & 0 & -225.75 \end{pmatrix}, \qquad
D_1 = \begin{pmatrix} 0 & 0 & 0 \\ 0.9922 & 0 & 0.01002 \\ 2.2575 & 0 & 223.4925 \end{pmatrix}.$$
All these five MAP processes have an arrival rate of 1. However, they will be normalized whenever we change the arrival rate in our numerical examples. Note that these are qualitatively different processes in that they have different variance and correlation structure. The coefficient of correlation between two successive arrivals of the MAP is given by (see [5]):
$$\frac{\dfrac{1}{\lambda}\,\vartheta(-D_0)^{-1}D_1(-D_0)^{-1}\mathbf{e} \;-\; \dfrac{1}{\lambda^{2}}}{\dfrac{2}{\lambda}\,\vartheta(-D_0)^{-1}\mathbf{e} \;-\; \dfrac{1}{\lambda^{2}}}\,,$$

where $\vartheta$ and $\lambda$ denote, respectively, the stationary probability vector of the generator $D_0 + D_1$ and the fundamental arrival rate of the MAP (see the appendix).
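The formula above is straightforward to evaluate numerically. The sketch below (numpy assumed; the function name is ours) computes the fundamental rate and the lag-1 correlation of a MAP, applied here to the MNC matrices as reconstructed above:

```python
import numpy as np

def map_stats(D0, D1):
    """Fundamental rate and lag-1 correlation of successive interarrival times of a MAP."""
    m = D0.shape[0]
    e = np.ones(m)
    Q = D0 + D1
    # stationary vector of Q*: vartheta Q* = 0, vartheta e = 1 (solved as a least-squares system)
    A = np.vstack([Q.T, e])
    vartheta = np.linalg.lstsq(A, np.append(np.zeros(m), 1.0), rcond=None)[0]
    lam = vartheta @ D1 @ e                      # fundamental (arrival) rate
    M = np.linalg.inv(-D0)
    num = (vartheta @ M @ D1 @ M @ e) / lam - 1.0 / lam ** 2
    den = 2.0 * (vartheta @ M @ e) / lam - 1.0 / lam ** 2
    return lam, num / den

# MNC matrices as listed above
D0 = np.array([[-1.00222, 1.00222, 0.0],
               [0.0, -1.00222, 0.0],
               [0.0, 0.0, -225.75]])
D1 = np.array([[0.0, 0.0, 0.0],
               [0.01002, 0.0, 0.9922],
               [223.4925, 0.0, 2.2575]])
print(map_stats(D0, D1))
```

Provided the matrices have been read correctly, the routine should return values close to the arrival rate 1 and the correlation quoted in the text below.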
Then the first three arrival processes correspond to renewal processes and so the correlation is 0. The arrival process labelled MNC has negatively correlated arrivals with a correlation value of $-0.4889$, and the arrivals corresponding to the process labelled MPC have a positive correlation with a value of $0.4889$. The ratios of the standard deviations of these five arrival processes with respect to ERL are, respectively, 1.0, 2.236068, 5.019353, 3.151781, and 3.151781.

Example 1. In this example, we fix $\lambda = 0.5$, $c = 1$, $\mu = 1.0$, $\theta = 0.5$ and calculate the probability mass function of $N_{\max}$. Table 1 contains these probabilities (displayed through the 99th percentile). An examination of these probabilities reveals the following interesting observations:

• In the case of renewal arrivals (namely, the first three processes labelled ERL, EXP and HEX), the probability at the origin appears to decrease with increasing variability of the arrival process. Also, the tail of the probability function appears to be longer with increasing variability of the arrivals.
• In the case of correlated arrivals (namely, the ones labelled MNC and MPC) we see different characteristics. First, we notice that the positively correlated arrivals have a larger mass at the origin, whereas the corresponding mass for the negatively correlated arrivals is very small. In fact, even when compared to the renewal arrivals this mass is much smaller. Secondly, we notice a longer tail for the positively correlated arrivals, longer than for any other process.

Example 2. This example is similar to Example 1 except that the number of servers is increased to 3. That is, we have $\lambda = 0.5$, $c = 3$, $\mu = 1.0$, $\theta = 0.5$ and calculate the probability mass function of $N_{\max}$. Table 2 contains these probabilities (displayed through the 99th percentile) and immediately we note the following conclusions:
Table 1
Probability mass function for the case $\lambda = 0.5$, $c = 1$, $\mu = 1.0$, $\theta = 0.5$

n     ERL        EXP        HEX        MNC        MPC
0     0.814066   0.666667   0.552863   0.025011   0.879686
1     0.099903   0.133333   0.113271   0.466495   0.088129
2     0.044251   0.069565   0.065780   0.163092   0.016580
3     0.022840   0.044720   0.048678   0.110286   0.002946
4     0.011092   0.030364   0.039136   0.076267   0.000565
5                0.020548   0.032444   0.054586   0.000184
6                0.013535   0.027164   0.037993   0.000126
7                0.008613   0.022753   0.025489   0.000115
8                0.005297   0.018977   0.016399   0.000113
9                           0.015723   0.010157   0.000111
10                          0.012926   0.006090   0.000109
11                          0.010538              0.000108
12                          0.008520              0.000106
13                          0.006833              0.000105
14                          0.005438              0.000104
15                          0.004297              0.000102
16                          0.003373              0.000101
17                          0.002632              0.000100
18                                                0.000099
19                                                0.000097
20                                                0.000096
21                                                0.000095
22                                                0.000094
23                                                0.000093
24                                                0.000092
Table 2
Probability mass function for the case $\lambda = 1.5$, $c = 3$, $\mu = 1.0$, $\theta = 0.5$

n     ERL        EXP        HEX        MNC        MPC
0     0.877002   0.769231   0.603737   0.650254   0.966808
1     0.067277   0.075597   0.069705   0.111770   0.012954
2     0.032229   0.050181   0.052185   0.067930   0.002175
3     0.014391   0.035663   0.043192   0.051695   0.000469
4                0.025075   0.037059   0.038389   0.000211
5                0.016970   0.032162   0.027982   0.000171
6                0.010982   0.027906   0.019394   0.000164
7                0.006815   0.024061   0.012821   0.000161
8                           0.020547   0.008103   0.000158
9                           0.017349   0.004934   0.000156
10                          0.014476              0.000154
11                          0.011936              0.000152
12                          0.009730              0.000150
13                          0.007848              0.000148
14                          0.006269              0.000147
15                          0.004963              0.000145
16                          0.003899              0.000143
17                          0.003041              0.000141
...                                               ...
68                                                0.000082
• Once again, in the case of renewal arrivals the probability at the origin appears to decrease, and the tail appears to grow longer, with increasing variability of the arrival process.
• In the case of correlated arrivals (namely, the ones labelled MNC and MPC) we see characteristics similar to Example 1. However, the mass at the origin for the negatively correlated arrivals is much larger (compared to Example 1), yet still smaller than for the positively correlated ones. It is worth pointing out that both the MNC and MPC processes have the same variability, but they have different correlation structure. The numerical examples point out the significant role played by the correlation, which is largely ignored in the literature.

An optimization problem: While one can construct a variety of optimization problems depending on the environment in which such retrial models are used, here we propose a simple optimization problem to illustrate the qualitative behavior of the five arrival processes considered in our examples. Let $c_1$ denote the cost incurred during a busy period whenever the retrial orbit hits or exceeds a pre-determined value, say $n^*$, and let $c_2$ denote the cost of having a server in the system during a busy period. The optimization problem of interest is to find the value of $c$ for which the total expected cost is minimized. That is,

$$\min_{c \ge 1}\ \bigl[c_1\,P(N_{\max} \ge n^*) + c_2\,c\bigr].$$

The above problem can be written using only one cost (the relative cost $d$, which plays the role of $c_2/c_1$) as follows:

$$\min_{c \ge 1}\ \bigl[P(N_{\max} \ge n^*) + d\,c\bigr].$$

Since $0 \le P(N_{\max} \ge n^*) \le 1$ for any $n^*$, the only interesting case for the optimization problem arises when $0 < d \le 1$. Since $P(N_{\max} \ge n^*)$ is a nonincreasing function of $c$ when all other parameters are fixed, to find the optimal $c^*$ one has to calculate the objective function (as a function of $c$) until the function starts to increase.

Example 3. In this example, we vary three parameters (one at a time, fixing all other parameters) and determine the optimum $c^*$ given a specific value for $n^*$. First we fix $\lambda = 1$, $\theta = 0.5$ and vary $d$, $\mu$, and $n^*$. The optimum $c^*$, along with the corresponding objective function values, is displayed in Table 3. By looking at this table, we notice the following observations.
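The search described above is straightforward to code. A sketch follows (the names are ours; prob_nmax_ge(n, c) stands for any user-supplied routine returning $P(N_{\max} \ge n)$ for a system with $c$ servers, for example one minus the quantity computed in the sketch after (9)):

```python
import numpy as np

def optimal_servers(n_star, d, prob_nmax_ge, c_max=50):
    """Find c* minimizing P(Nmax >= n*) + d*c, stopping once the objective starts to increase."""
    best_c, best_val = None, np.inf
    prev = np.inf
    for c in range(1, c_max + 1):
        val = prob_nmax_ge(n_star, c) + d * c
        if val < best_val:
            best_c, best_val = c, val
        if val > prev:                 # the objective has started to increase
            break
        prev = val
    return best_c, best_val
```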
Table 3
Optimum number of servers required (each entry gives $(c^*,$ objective value$)$)

d      n*   μ      ERL            EXP            HEX            MNC            MPC
0.01   1    4      (2, 0.02049)   (3, 0.03238)   (3, 0.04052)   (4, 0.04065)   (2, 0.03070)
            2      (3, 0.03030)   (4, 0.04226)   (5, 0.05266)   (4, 0.04637)   (2, 0.03625)
            4/3    (3, 0.03419)   (4, 0.05042)   (6, 0.06350)   (5, 0.05442)   (3, 0.04313)
            10/9   (4, 0.04061)   (5, 0.05396)   (6, 0.06884)   (5, 0.05924)   (3, 0.04502)
       2    4      (2, 0.02001)   (2, 0.02793)   (3, 0.03227)   (3, 0.03154)   (2, 0.03012)
            2      (2, 0.02226)   (3, 0.03442)   (4, 0.04417)   (4, 0.04110)   (2, 0.03167)
            4/3    (3, 0.03038)   (4, 0.04252)   (5, 0.05483)   (4, 0.04654)   (2, 0.03568)
            10/9   (3, 0.03195)   (4, 0.04624)   (5, 0.06249)   (5, 0.05193)   (2, 0.04048)
       5    4      (1, 0.01178)   (2, 0.02009)   (2, 0.02345)   (2, 0.02115)   (1, 0.02262)
            2      (2, 0.02000)   (2, 0.02850)   (3, 0.03282)   (3, 0.03031)   (2, 0.03105)
            4/3    (2, 0.02162)   (3, 0.03122)   (4, 0.04204)   (3, 0.03538)   (2, 0.03299)
            10/9   (3, 0.03000)   (3, 0.03488)   (4, 0.04789)   (4, 0.04042)   (2, 0.03466)
0.02   1    4      (2, 0.04049)   (3, 0.06238)   (3, 0.07052)   (3, 0.07339)   (2, 0.05070)
            2      (2, 0.05341)   (3, 0.07695)   (4, 0.09461)   (4, 0.08637)   (2, 0.05625)
            4/3    (3, 0.06419)   (4, 0.09042)   (5, 0.11538)   (4, 0.10343)   (2, 0.06942)
            10/9   (3, 0.07187)   (4, 0.10025)   (6, 0.12884)   (5, 0.10924)   (3, 0.07502)
       2    4      (2, 0.04001)   (2, 0.04793)   (3, 0.06227)   (3, 0.06154)   (1, 0.04064)
            2      (2, 0.04226)   (3, 0.06442)   (4, 0.08417)   (3, 0.07623)   (2, 0.05167)
            4/3    (3, 0.06038)   (3, 0.08160)   (5, 0.10483)   (4, 0.08654)   (2, 0.05568)
            10/9   (3, 0.06195)   (4, 0.08624)   (5, 0.11249)   (4, 0.09445)   (2, 0.06048)
       5    4      (1, 0.02178)   (2, 0.04009)   (2, 0.04345)   (2, 0.04115)   (1, 0.03262)
            2      (2, 0.04000)   (2, 0.04850)   (3, 0.06282)   (3, 0.06031)   (1, 0.04790)
            4/3    (2, 0.04162)   (3, 0.06122)   (4, 0.08204)   (3, 0.06538)   (2, 0.05299)
            10/9   (2, 0.05519)   (3, 0.06488)   (4, 0.08789)   (3, 0.07756)   (2, 0.05466)
0.05   1    4      (2, 0.10049)   (2, 0.12703)   (3, 0.16052)   (3, 0.16339)   (1, 0.09961)
            2      (2, 0.11341)   (3, 0.16695)   (3, 0.21151)   (3, 0.20260)   (2, 0.11625)
            4/3    (3, 0.15419)   (3, 0.19972)   (4, 0.25391)   (4, 0.22343)   (2, 0.12942)
            10/9   (3, 0.16187)   (4, 0.22025)   (5, 0.28169)   (4, 0.24108)   (2, 0.14144)
       2    4      (1, 0.07644)   (2, 0.10793)   (2, 0.13455)   (2, 0.13223)   (1, 0.07064)
            2      (2, 0.10226)   (2, 0.15216)   (3, 0.18028)   (3, 0.16623)   (2, 0.11167)
            4/3    (2, 0.12726)   (3, 0.17160)   (4, 0.22516)   (3, 0.20595)   (2, 0.11568)
            10/9   (3, 0.15195)   (3, 0.19140)   (4, 0.25083)   (4, 0.21445)   (2, 0.12048)
       5    4      (1, 0.05178)   (2, 0.10009)   (2, 0.10345)   (2, 0.10115)   (1, 0.06262)
            2      (2, 0.10000)   (2, 0.10850)   (3, 0.15282)   (2, 0.14516)   (1, 0.07790)
            4/3    (2, 0.10162)   (3, 0.15122)   (3, 0.18232)   (3, 0.15538)   (2, 0.11299)
            10/9   (2, 0.11519)   (3, 0.15488)   (4, 0.20789)   (3, 0.16756)   (2, 0.11466)
• As $n^*$ increases, $c^*$ appears not to increase. This is to be expected, since increasing $n^*$ corresponds to having more tolerance (i.e., less penalty) for having customers in the orbit.
• As $\mu$ decreases, one would expect $c^*$ either to remain the same or to increase, as the load of the system is now increased. This can be seen in the numbers shown in the table.
• Among the renewal arrivals, it appears that larger variability in the arrivals requires more servers in order to minimize the total cost. This is again due to the fact that larger variability in the arrivals leads to a larger probability of seeing $n^*$ or more customers in the orbit.
• It is interesting to note that in the case of correlated arrivals, MNC appears to require more servers than MPC to minimize the total cost. This phenomenon is quite contrary to what has been seen in other queueing models [8] with no retrials.

Acknowledgement

The first author thanks the support received from the research project MTM 2005-01248. This research was conducted while the second author was visiting the Complutense University of Madrid, Madrid, Spain, and he would like to thank the Department of Statistics and Operations Research for its hospitality.
Appendix

The Markovian arrival process (MAP) is a tractable class of Markov renewal processes in which customers arrive one at a time. It should be noted that by appropriately choosing the parameters of the MAP the underlying arrival process can be made a renewal process. The MAP is a rich class of point processes that includes many well-known processes such as the Poisson process, PH-renewal processes, and the Markov-modulated Poisson process. One of the most significant features of the MAP is its underlying Markovian structure, which fits ideally in the context of matrix-analytic solutions to stochastic models. Matrix-analytic methods were first introduced and studied in [20].

As is well known, Poisson processes are the simplest and most tractable processes used extensively in stochastic modelling. The idea of the MAP is to significantly generalize the Poisson process while keeping its tractability for modelling purposes. Furthermore, in many practical applications, notably in communications engineering and in production and manufacturing engineering, the arrivals do not usually form a renewal process. So, the MAP is a convenient tool to model both renewal and non-renewal arrivals. While the MAP is defined for both discrete and continuous times, here we need only the continuous-time case.

The MAP in continuous time is described as follows. Let the underlying Markov chain be irreducible and let $Q^*$ be the generator of this Markov chain. At the end of a sojourn time in state $i$, which is exponentially distributed with parameter $\lambda_i$, one of two events can occur: with probability $p_{ij}(1)$ the transition corresponds to an arrival and the underlying Markov chain moves to state $j$, $1 \le i, j \le m$; with probability $p_{ij}(0)$ the transition corresponds to no arrival and the state of the Markov chain is $j$, $j \ne i$. Note that the Markov chain can go from state $i$ to state $i$ only through an arrival. Also, we have

$$\sum_{j=1}^{m} p_{ij}(1) + \sum_{j=1,\, j \ne i}^{m} p_{ij}(0) = 1, \qquad 1 \le i \le m.$$

Define matrices $D_0 = (d^0_{ij})$ and $D_1 = (d^1_{ij})$ such that $d^0_{ii} = -\lambda_i$, $1 \le i \le m$, $d^0_{ij} = \lambda_i p_{ij}(0)$, for $j \ne i$, and $d^1_{ij} = \lambda_i p_{ij}(1)$, $1 \le i, j \le m$. By assuming $D_0$ to be a nonsingular matrix, the interarrival times will be finite with probability one and the arrival process does not terminate. Hence, we see that $D_0$ is a stable matrix. The generator $Q^*$ is then given by $Q^* = D_0 + D_1$. Thus, $D_0$ governs the transitions corresponding to no arrival and $D_1$ governs those corresponding to an arrival. It can be shown that the MAP is equivalent to Neuts' versatile Markovian point process. The point process described by the MAP is a special class of semi-Markov processes with transition probability matrix given by

$$\int_0^x e^{D_0 t}\,dt\; D_1 = \bigl[I - e^{D_0 x}\bigr](-D_0)^{-1} D_1, \qquad x \ge 0.$$
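As a small illustration of these definitions (numpy assumed; the function name is ours), the matrices $D_0$ and $D_1$ can be assembled directly from the sojourn rates and transition probabilities:

```python
import numpy as np

def map_from_transition_probs(lam, P1, P0):
    """Build (D0, D1) from sojourn rates lam_i and transition probabilities
    p_ij(1) (arrival) and p_ij(0) (no arrival, zero diagonal), as defined above."""
    lam = np.asarray(lam, dtype=float)
    D1 = lam[:, None] * P1                     # d1_ij = lam_i p_ij(1)
    D0 = lam[:, None] * P0                     # d0_ij = lam_i p_ij(0), j != i
    np.fill_diagonal(D0, -lam)                 # d0_ii = -lam_i
    # sanity check: Q* = D0 + D1 must have zero row sums
    assert np.allclose((D0 + D1).sum(axis=1), 0.0)
    return D0, D1
```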
Let $\vartheta$ be the stationary probability vector of the Markov process with generator $Q^*$. That is, $\vartheta$ is the unique (positive) probability vector satisfying

$$\vartheta Q^* = \mathbf{0}, \qquad \vartheta \mathbf{e} = 1.$$

Let $\alpha$ be the initial probability vector of the underlying Markov chain governing the MAP. Then, by choosing $\alpha$ appropriately, we can model the time origin to be (a) an arbitrary arrival point; (b) the end of an interval during which there are at least $k$ arrivals; (c) the point at which the system is in a specific state, such as the instant at which a busy period ends or begins. The most interesting case is the one where we get the stationary version of the MAP by setting $\alpha = \vartheta$.

The constant $\lambda = \vartheta D_1 \mathbf{e}$, referred to as the fundamental rate, gives the expected number of arrivals per unit of time in the stationary version of the MAP. Often, in model comparisons, it is convenient to select the time scale of the MAP so that $\lambda$ has a certain value. That is accomplished, in the continuous MAP case, by multiplying the coefficient matrices $D_0$ and $D_1$ by the appropriate common constant. For further details on the MAP and its usefulness in stochastic modelling, we refer to [17,21,22], and for a review and recent work on the MAP we refer the reader to [5].

References

[1] A.S. Alfa, K.P. Sapna Isotupa, An M/PH/k retrial queue with finite number of sources, Computers and Operations Research 31 (2004) 1455–1464.
[2] J.R. Artalejo, G.I. Falin, Standard and retrial queueing systems: A comparative analysis, Revista Matematica Complutense 15 (2002) 101–129.
[3] J.R. Artalejo, A. Economou, M.J. Lopez-Herrero, Algorithmic analysis of the maximum queue length in a busy period for the M/M/c retrial queue, INFORMS Journal on Computing, in press.
[4] L. Breuer, A.N. Dudin, V.I. Klimenok, A retrial BMAP/PH/N system, Queueing Systems 40 (2002) 433–457.
[5] S.R. Chakravarthy, The batch Markovian arrival process: A review and future work, in: A. Krishnamoorthy et al. (Eds.), Advances in Probability Theory and Stochastic Processes, Notable Publications Inc., NJ, 2000, pp. 21–39.
[6] S.R. Chakravarthy, A.N. Dudin, A multi-server retrial queue with BMAP arrivals and group services, Queueing Systems 42 (2002) 5–31.
[7] S.R. Chakravarthy, A.N. Dudin, Analysis of a retrial queuing model with MAP arrivals and two types of customers, Mathematical and Computer Modelling 37 (2003) 343–363.
[8] S.R. Chakravarthy, A multi-server queueing model with Markovian arrivals and multiple thresholds, Asia-Pacific Journal of Operational Research, in press.
[9] S.R. Chakravarthy, A. Krishnamoorthy, V.C. Joshua, Analysis of a multi-server queue with search of customers from the orbit, Performance Evaluation 63 (2006) 776–798.
[10] B.D. Choi, Y. Chang, MAP1, MAP2/M/c retrial queue with the retrial group of finite capacity and geometric loss, Mathematical and Computer Modelling 30 (1999) 99–113.
[11] B.D. Choi, Y. Chang, B. Kim, MAP1, MAP2/M/c retrial queue with guard channels and its application to cellular networks, Top 7 (1999) 231–248.
[12] J.E. Diamond, A.S. Alfa, Matrix analytic methods for a multi-server retrial queue with buffer, Top 7 (1999) 249–266.
[13] G.I. Falin, J.G.C. Templeton, Retrial Queues, Chapman and Hall, London, 1997.
[14] A. Gomez-Corral, On extreme values of orbit lengths in M/G/1 queues with constant retrial rate, OR Spectrum 23 (2001) 395–409.
[15] A. Gomez-Corral, A bibliographical guide to the analysis of retrial queues through matrix analytic techniques, Annals of Operations Research 141 (2006) 163–191.
[16] M.J. Lopez-Herrero, M.F. Neuts, The distribution of the maximum orbit size of an M/G/1 retrial queue during a busy period, in: J.R. Artalejo, A. Krishnamoorthy (Eds.), Notable Publications Inc., NJ, 2002, pp. 219–231.
[17] D.M. Lucantoni, New results on the single server queue with a batch Markovian arrival process, Stochastic Models 7 (1991) 1–46.
[18] M. Marcus, H. Minc, A Survey of Matrix Theory and Matrix Inequalities, Allyn & Bacon, Boston, MA, 1964.
[19] M.F. Neuts, The distribution of the maximum length of a Poisson queue during a busy period, Operations Research 12 (1964) 281–285.
[20] M.F. Neuts, Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach, The Johns Hopkins University Press, Baltimore, MD, 1981.
[21] M.F. Neuts, Structured Stochastic Matrices of M/G/1 Type and their Applications, Marcel Dekker, NY, 1989.
[22] M.F. Neuts, Models based on the Markovian arrival process, IEICE Transactions on Communications E75B (1992) 1255–1265.
[23] R.F. Serfozo, Extreme values of birth and death processes and queues, Stochastic Processes and their Applications 27 (1988) 291–306.
[24] Y.W. Shin, Multi-server retrial queue with negative customers and disasters, in: B.D. Choi (Ed.), Proceedings of the Fifth International Workshop on Retrial Queues, TMRC, Korea University, Seoul, 2004, pp. 53–60.