Queues in DOCSIS cable modem networks

J. Lambert, B. Van Houdt, C. Blondia

Department of Mathematics and Computer Science, Performance Analysis of Telecommunication Systems Research Group, University of Antwerp, Middelheimlaan 1, B-2020 Antwerp, Belgium

Available online 29 January 2007
Abstract

In this paper we determine the optimal fraction c∗ of the uplink channel capacity that should be dedicated to the contention channel in a DOCSIS cable network in order to minimize its mean response time. For this purpose, we have developed an open queueing network with a non-standard form of blocking, consisting of tens to hundreds of nodes. The network contains several types of customers that enter the network at various points according to a Markovian arrival process with marked customers. One of the main building blocks of the model consists of capturing the behavior of the conflict resolution algorithm by means of a single processor sharing queue. To assess the performance characteristics of this open queueing network we rely on an advanced decomposition technique that is specifically designed to deal with the Markovian nature of the arrival pattern. Several simulations are run to confirm the accuracy of the decomposition technique. We also explore the impact of a variety of system parameters, e.g., the number of cable modems, the initial backoff window size, the correlation structure of the arrival process, the mean packet sizes, etc., on the optimal fraction c∗.

Keywords: DOCSIS cable networks; Contention channel; Performance evaluation; Queueing networks; Decomposition algorithm
1. Introduction

Recently, the rapid growth of the number of residential Internet users and the increased bandwidth requirements of multimedia applications have necessitated the introduction of an access network that can support the demand for such services. The data over cable service interface specifications (DOCSIS) [1] are the dominant specifications for carrying data over cable TV distribution (CATV) networks and have been developed by CableLabs and MCNS (multimedia cable network systems), a group of major cable companies, to support IP flows over hybrid fiber coaxial (HFC) networks. DOCSIS defines the modulation schemes and protocols for high-speed bi-directional data transmissions over cable systems. It has been accepted by most major vendors and is now a widely used specification to provide high-speed residential access.

DOCSIS specifies a set of interface protocols between the cable modem (CM) at the customer side and the termination system at the network side. The media access control (MAC) protocol defined in DOCSIS RFIv1.1 is based on time division multiple access (TDMA). It uses MAC management messages, referred to as MAP messages, to describe the usage of the uplink channel (that is, from the end-user to the network). A given MAP message, broadcasted on the downlink channel (that is, from the network to the end-users), indicates the upstream bandwidth allocation over the next MAP time, termed the MAP length.
The MAP assigns part of the uplink minislots to particular CMs to transmit data, while other slots are made available as contention slots to request bandwidth. This is one of the critical components of the DOCSIS MAC layer, and the DOCSIS specification purposely does not specify these bandwidth allocation algorithms so that vendors are able to develop their own competitive solutions.

In this paper, we develop an open queueing network with a non-standard form of blocking, consisting of tens to hundreds of nodes, whose performance determines the optimal ratio between the number of contention slots and reservation slots during a single MAP length. The model consists of a single queue for each CM and two additional queues: one to reflect the behavior of the reservation channel and one to capture the activity on the contention channel. The CM queues are FIFO queues with independent Markovian input streams that hold different types of customers, while the reservation and contention queues are processor sharing (PS) queues. Blocking occurs as there can be at most one customer in either the reservation or contention queue that originated from the same CM. To solve this queueing network, a decomposition technique splits the network into two subsystems: the first being a BCMP network with a finite number of customers and multiple customer types [2], the second an MMAP[C]/M[C]/1 queue [3]. By repeatedly solving these two subsystems, while exchanging a number of system parameters until convergence occurs, we capture the interaction between the two subsystems.

The work presented in this paper is in the same spirit as [4], which was limited to the special case of Poisson arrival streams at the CMs; this greatly simplified the decomposition technique, as both subsystems were much easier to solve and the exchange of only a single parameter was needed. An overview of earlier work on DOCSIS and the performance evaluation of DOCSIS networks can be found in [5] and the references therein. This prior work typically relied on simulations, whereas we propose an analytical method to determine the performance of a DOCSIS CM network. Due to the strong similarity between 802.16 and DOCSIS, it seems quite probable that the techniques described in this paper and used to assess the performance of a DOCSIS CM network can be adapted to 802.16 networks.

The paper is structured as follows. Section 2 presents a general description of the cable network considered. The contention resolution algorithm (CRA) specified by the DOCSIS standard is discussed in Section 3. Section 4 provides some general information about the Markovian arrival process with marked customers, which is the arrival process used in this paper. The queueing network model, together with the decomposition technique used to assess its performance, is introduced in Section 5, whereas Section 6 validates this model. Finally, we numerically explore the influence of several system parameters in Section 7. We end this paper with some conclusions in Section 8.

2. The DOCSIS cable network

The DOCSIS CM network considered in this paper is shown in Fig. 1 and consists of a single cable modem termination system (CMTS), located in the head-end of a cable operator or service provider, and a number of CMs that are installed at the end-users.
At initialization time, each CM registers itself with the CMTS and at least two service flows are created for each CM: one in the downstream and one in the upstream direction. Since the head-end is the only transmitter on the downstream channel, no downstream MAC mechanism is needed. The upstream channel, on the other hand, is shared by all CMs and transports signals from the CMs to the head-end. The available bandwidth is divided into fixed-length allocation units, called minislots, and DOCSIS specifies a reservation-based, centralized approach for distributing these minislots among the CMs.
Fig. 1. Logical topology of a cable modem network.
Fig. 2. Upstream bandwidth allocation.
Periodically, the CMTS sends a bandwidth allocation map (MAP) message over the downstream channel. A MAP message contains a number of data grants; such a grant indicates when a particular CM is allowed to transmit data on the uplink channel. Moreover, the MAP also identifies the minislots that are part of the contention channel. The total time scheduled in a single MAP message is referred to as the MAP length, and the time interval assigned to the contention channel is called the contention region, whereas the remaining part of the MAP length is termed the reservation region. Fig. 2 illustrates the upstream mapping.

Any CM that wants to transmit data must request bandwidth from the CMTS by means of a request message that contains a count of the number of minislots needed. The request can be transmitted in two ways: (1) on the one hand, the CM can send the request in the contention region. In this interval collisions may occur, meaning the request might get lost. The CRA specified by the DOCSIS standard, operating on the contention channel, will be discussed in Section 3. (2) On the other hand, once a CM has received a grant, it has the opportunity to piggyback the new request in its reserved upstream allocation window, avoiding any collisions. Thus, piggybacking allows stations to request new bandwidth when transmitting data packets, without reentering the contention-based request process. After processing a request, the CMTS generates a data grant and transmits it in the next MAP if capacity is available.

It is the objective of this study to determine how to distribute the minislots of the MAP length between the contention and the reservation region in an optimal way, and to examine the influence of various system parameters, e.g., the number of CMs, on the location of this optimum. The obtained results apply to any DOCSIS cable network whose bandwidth allocation algorithm fairly distributes the upstream bandwidth between the requesting CMs and, given the mean packet length (for each customer type), are also independent of the data packet length distributions at hand.

3. The contention resolution algorithm

This section discusses the collision resolution algorithm, being a truncated binary exponential backoff (BEB) algorithm, specified by the DOCSIS standard. The aim of the BEB is to minimize the collision probability between the request packets transmitted in the contention region. Therefore, each CM has to postpone every transmission attempt by a random time interval. The length of this time interval, called the backoff interval, indicates the number of contention slots¹ that the CM must let pass before it transmits its own request and grows as the number of consecutive unsuccessful transmissions increases. That is, at each attempt to transmit a request packet, the length of the backoff interval is uniformly chosen in the range [0, w − 1], where w is called the current contention window (CW). The value of the parameter w depends on the number of transmission failures that occurred so far for the particular request packet. At the first transmission attempt, the CM initializes its backoff counter to 0 and w is set equal to CW_min, called the minimal CW.

¹ The size of a contention slot, expressed in minislots, depends on the modulation scheme and equals the time needed to transmit a single request packet.
After transmitting the request packet with a backoff counter equal to i − 1, for i > 0, the CM waits for an ACK. The contention resolution phase ends if the request is successfully received by the CMTS; otherwise the CM increases its backoff counter (to i) and the CW is set equal to W_i = 2^min(i,m′) CW_min, where CW_max = 2^m′ CW_min is the maximum length of the CW. Thus, as soon as the maximum value CW_max is attained, the CW remains equal to its maximum value until there is either a successful transmission or until the maximum backoff counter value m + 1 is reached. When the backoff counter is increased to m + 1, the CM drops the request packet. The DOCSIS standard specifies that the maximum number of retries m should be set to 15. The values of CW_min and CW_max are not specified by the standard and are controlled by the CMTS.

In order to construct a queueing model to dimension the contention channel in a DOCSIS cable network, it is vital to capture the behavior of the BEB algorithm accurately. Although the BEB is considered hard to model mathematically, Bianchi [6] managed to develop a fairly simple mathematical framework that provides accurate results for the BEB in an 802.11 setting. This model was further extended by a number of authors, including [7,8]. Although we are not considering the BEB in an 802.11 setting, we have demonstrated in [4] that a similar approach can be followed to accurately model the BEB in a DOCSIS world. A fundamental role in developing a model for the BEB is played by the saturation throughput S(n). S(n) is defined as the limit reached by the throughput of a system with n stations as the offered load increases and represents the maximum load that a system with n stations can carry under stable conditions. The saturation throughput values S(n) are used in Section 5.1 to develop a processor sharing (PS) type of queue to model the contention channel. The computation of S(n) is analogous to the approach taken by Bianchi [6] and Wu [7] and is discussed in [4].

4. Markovian arrival process

This section specifies the arrival process used to model the way in which data packets are generated in a CM. The arrivals generated in each CM are assumed to be independent of one another and are each driven by their own (identical) process. To enable us to study the impact of various statistical properties of this process, we consider a fairly general subclass of the Markovian arrival processes with marked transitions [9], i.e., the MMAP[C] process without batch arrivals, which is a generalization of the well-known (B)MAP process [10]. The MAP is a stochastic process that is able to represent correlation structures and a large group of inter-arrival time distributions, while remaining analytically tractable. It is, in general, non-renewal and it includes the Markov-modulated Poisson process, the PH-renewal process and superpositions of such processes as particular cases. Because of its versatility, it lends itself very well to modeling the bursty arrival processes commonly arising in computer and communications applications. There are also several effective (B)MAP fitting approaches available in the literature: an efficient and numerically stable method for estimating the parameters of a batch Markovian arrival process (BMAP) with the EM algorithm is introduced in [11], a maximum likelihood-based method is proposed in [12], whereas a two-step MAP fitting approach is described in [13], etc. An MMAP[C] is a point process with arrivals of different types.
Consider an M-state Markov renewal process with an irreducible embedded Markov chain with probability matrix P and exponential sojourn times given by F_i(x) = 1 − exp(−λ_i x) for i = 1, . . . , M. This process is also a continuous-time Markov chain with a generator matrix D, where D_{i,i} = −(1 − P_{i,i})λ_i and D_{i,j} = P_{i,j}λ_i for i ≠ j ∈ {1, . . . , M}. An MMAP[C] is a Markov renewal process {(J_n, L_n, τ_n), n ≥ 0} on the state space {1, . . . , M} × {1, . . . , C} × [0, ∞) with transition probability matrix

    P{J_n = j, L_n = c, τ_n ≤ x | J_{n−1} = i} = [∫_0^x exp(D_0 u) du D_c]_{i,j},

where L_n is the marking variable. We call an arrival a type c arrival if it is marked by c. The matrices D_c, for c = 1, . . . , C, are non-negative, and D_0 has negative diagonal elements and is assumed to be nonsingular. In addition, we have D = D_0 + D_1 + · · · + D_C. We consider a special subclass of the MMAP[C] processes, where C = MK for some K ≥ 1. Denote the type as (m, k) for m = 1, . . . , M and k = 1, . . . , K, as opposed to 1, . . . , C. In addition, we restrict ourselves to those MMAP[MK] processes for which

    (D_0)_{i,j} = −λ_i if i = j, and 0 otherwise,
    (D_(m,k))_{i,j} = λ_i P_{i,j} p_L(k) if i = m, and 0 otherwise,

for some probabilities p_L(1), . . . , p_L(K) with Σ_k p_L(k) = 1.
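To make the construction concrete, the following short sketch (our own illustration, not code from the paper) assembles D_0 and the matrices D_(m,k) of this restricted MMAP[MK] from the transition matrix P, the rates λ_i and the type probabilities p_L(k); the function name and the 1-based dictionary keys are merely conventions chosen here.

```python
import numpy as np

def build_mmap(P, lam, pL):
    """Assemble D0 and the marked matrices D_(m,k) of the restricted
    MMAP[MK] process defined above (a sketch of the definition).

    P   : (M, M) transition matrix of the embedded chain
    lam : length-M vector of rates lambda_i (1 / mean sojourn time)
    pL  : length-K vector of packet-type probabilities p_L(k)
    """
    M, K = len(lam), len(pL)
    D0 = -np.diag(lam)                      # diagonal matrix with entries -lambda_i
    Dmk = {}
    for m in range(1, M + 1):
        for k in range(1, K + 1):
            D = np.zeros((M, M))
            # only row m is non-zero: lambda_m * P_{m,j} * p_L(k)
            D[m - 1, :] = lam[m - 1] * P[m - 1, :] * pL[k - 1]
            Dmk[(m, k)] = D
    return D0, Dmk
```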
Thus, D_0 is a diagonal matrix and D_(m,k) has only one row different from zero, being the m-th. In other words, the packet inter-arrival times in a CM are exponential with a mean that depends on the current state of the MMAP[MK] process. When an arrival occurs, we make a transition from state i to j with probability P_{i,j} and we mark the arrival by (i, k) with probability p_L(k), where i is the state in which the arrival was generated. The packet length distribution is affected by k and has a mean L(k). Define π as the stationary probability vector of D, i.e., πD = 0 and πe = 1, where e is a column vector with all entries equal to one. The arrival rate is then given by λ = −πD_0 e. We can also express the arrival rate of a type k customer that arrives in state m, which we denote by λ_(m,k) = πD_(m,k) e. For further use, define the vector λ_M = (λ_1, . . . , λ_M).

5. A queueing model for a DOCSIS cable network

To enable a mathematical analysis of the performance of DOCSIS cable networks, we will develop an open queueing network with blocking composed of N + 2 queues, where N represents the number of CMs connected to the access network. The transmission buffer of each CM is represented by a single queue, whereas the remaining two queues represent the contention and the reservation channel, respectively. Details on the service disciplines of the contention and reservation queues are given after the general model description.

A data packet that needs to be transmitted from CM i toward the network is placed in the transmission buffer of CM i. If the packet finds the buffer empty upon arrival, CM i needs to generate a request for this data packet and has to send it to the CMTS via the contention channel before the actual data transmission can take place. On the other hand, if some of the earlier data is still stored within the transmission buffer, the CM can use a piggybacking strategy to transmit the request collision free. In our queueing model, we represent each data packet by a customer. Depending on the progress of the packet transmission, its corresponding customer C is in one of the following three queues:

1. Customer C is waiting in queue CM i, whenever there are still other (older) packets that need to be (partially) transmitted first by CM i.
2. Customer C is part of the contention queue, if the data packet found the transmission buffer empty upon arrival and the CM is trying to transmit a request via the contention channel.
3. Customer C is part of the reservation queue, if all other (older) packets have been transmitted and a request for this packet was either piggybacked or successfully transmitted using the contention channel.

A new packet arrival at CM i corresponds to a new customer arrival in queue CM i, whereas customers who complete service in the reservation queue leave the queueing network. Finally, a customer leaving the contention queue enters the reservation queue. Note that the contention and reservation queues hold at most one customer per CM at a time. Thus, a customer at the head of line in a CM queue is blocked until the previous customer generated by the same CM has left the queueing system. The reservation channel is modeled as a processor sharing (PS) queue, reflecting the DOCSIS MAC design principle of distributing the transmission capacity fairly among the active stations.
The rate μ(k) of the server depends upon the type k of the customer and is determined by: (i) the rate of the uplink channel r (expressed in bits/ms), (ii) the mean length of a data packet L(k) (including overhead and given the customer is of type k) and (iii) the fraction 1 − c of the MAP length dedicated to the reservation region. That is, μ(k) = (1 − c)r/L(k). The contention channel is also modeled as a PS queue, but its service rate ν(n) depends upon the number of customers n present in the contention queue. The rate ν(n) is chosen as cS(n)r/Q, where c denotes the fraction of the MAP length associated with the contention region, Q the request size expressed in bits and S(n) the saturation throughput. This is a fairly logical choice, as 1/S(n) is also the mean time, expressed in contention slots, between two successful transmissions in a saturated system with n stations. The fact that the behavior of the contention channel can be accurately captured by this single node queueing model was demonstrated extensively in [4]. Moreover, the usage of the saturation throughput as the service rate of a PS queue when modeling the contention channel was proven to be effective by several other authors as well (within the 802.11 setting), e.g., [8,14]. Finally, we assume that the packets arrive at the CMs according to an MMAP[MK] as discussed in Section 4.
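For illustration, a minimal sketch of these two rate functions follows; the helper names are our own, and the saturation throughput values S(n) are assumed to be supplied externally (e.g., computed along the lines of [4]).

```python
def reservation_rate(k, c, r, L):
    """mu(k) = (1 - c) * r / L(k): service rate, in packets/ms, of the
    reservation PS queue for a type-k customer."""
    return (1.0 - c) * r / L[k]

def contention_rate(n, c, r, Q, S):
    """nu(n) = c * S(n) * r / Q: service rate of the contention PS queue
    when n requests compete; S maps n to the saturation throughput."""
    return c * S[n] * r / Q

# Example with the default values of Table 1 (r in bits/ms, sizes in bits):
r, Q = 1e4, 16 * 8
L = {1: 438 * 8, 2: 538 * 8}
print(reservation_rate(1, 0.15, r, L))   # type-1 service rate when c = 0.15
```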
Fig. 3. The first subsystem.
In general, queueing networks with blocking typically do not have a closed-form solution and therefore it is very difficult to analyze the system as a whole. In order to solve this queueing model, we make use of a decomposition scheme similar to [15]. The general approach in decomposition is to decompose the entire system into smaller subsystems, analyze these subsystems individually and then take into account the interactions between the various subsystems when putting them back together. We will make use of two subsystems, described in the next subsections, where the analysis of each subsystem requires information that can be obtained by solving the other subsystem.

5.1. The first subsystem: a closed BCMP queueing network

The first subsystem is obtained by removing the waiting rooms in front of each of the N CM queues and by closing the network (see Fig. 3). The resulting network is of the BCMP type (see [2]) with three service centers and N customers. Service center 2 (top right) represents the contention channel, whereas service center 3 (bottom right) represents the reservation channel. Service center 1 (left) has an infinite server (IS) queueing discipline, i.e., the number of servers in the service center is greater than or equal to the number of customers N in the network. A customer arriving at service center 1 always finds a free server and never queues for service. The service rate of service center 1 is determined by λ_M. More specifically, each customer has a mark of the form (m, k) and the service time in center 1 is exponential with an average of 1/λ_m ms. Service centers 2 and 3 are identical to the processor sharing (PS) disciplines of the original queueing network. Recall that the mean service time of a customer marked as (m, k) in the reservation queue is influenced by k.

The idea behind this construction is as follows. For each CM we have a single customer in the closed network. The marking (m, k) of customer i indicates that the oldest customer originating from CM i is of type k (which affects its packet size) and that this customer arrived while the MMAP[MK] process of CM i was in state m. If customer i is in service center 1, this marking is related to the next customer to arrive in CM i. Whenever the transmission buffer in CM i becomes empty, customer i moves to service center 1, where it has to wait until the next customer arrival. Given that the MMAP[MK] process of CM i is in state m, this takes an exponential amount of time with mean 1/λ_m (see Section 4). Also note that the packet length distributions of the different customer types are irrelevant (apart from their mean), as the performance measures of such a BCMP network are insensitive to the service time distribution of its PS service centers. The 3MK × 3MK routing matrix P_R of this closed queueing network is identical to
          ( 0  I  0 )
    P_R = ( 0  0  I ),                                                  (1)
          ( A  0  B )

where I, 0, A and B are MK × MK matrices, I is the unity matrix and A holds the probabilities that an (m, k) marked customer moves from service center 3 to center 1 while changing its marking into (m′, k′). It is defined as

    A_{i,j} = (P_e)_(m′,k′) P_{m,m′} p_L(k′),

for i = (m − 1)K + k and j = (m′ − 1)K + k′, where k, k′ ∈ {1, . . . , K} and m, m′ ∈ {1, . . . , M}.
The vector P_e is an input parameter of this subsystem and is determined by solving the second subsystem. (P_e)_(m,k) represents the probability that a packet of type k that arrived while the MMAP[MK] was in state m finds the transmission buffer empty upon arrival. Due to the nature of the arrival process, (P_e)_(m,k) depends solely on the state m and not on the customer type k, i.e., (P_e)_(m,k) = (P_e)_m for k = 1, . . . , K. The matrix B covers the transitions from center 3 to itself and is identical to A, except that (P_e)_(m,k) is replaced by 1 − (P_e)_(m,k) for all m and k.

To calculate the mean response times E[R_cont] and E[R_res](k) for all k, in service centers 2 and 3 (for a type k customer), respectively, one may use any of the following algorithms [16]: the convolution algorithm, the mean value analysis (MVA) or the local balance algorithm. These mean response times are used as input values in the second subsystem.

5.2. The second subsystem: an MMAP[C]/M[C]/1 queue

The second subsystem consists of N independent infinite single server queues, one for each CM. Queue i is solved as an MMAP[MK]/M[MK]/1 queue, as packets are generated in the CM according to an MMAP[MK] arrival process and the service discipline is FCFS. The service time at each server accounts for the time spent on the reservation channel (to transmit the data packet) and the possible time needed to visit the contention queue (to send the request); therefore, for an arrival in state m and of type k, the service rate μ_CM(m, k) is chosen as μ_CM(m, k) = 1/(E[R_cont](P_e)_(m,k) + E[R_res](k)), where E[R_cont] and E[R_res](k) denote the mean response time of the contention channel and the mean response time of a type k customer on the reservation channel, respectively, both obtained by solving the first subsystem.

For this subsystem we determine the probabilities (P_e)_(m,k) that the server is found empty by a new arrival in state m of type k. These probabilities are used as the new input values of (P_e)_(m,k) in the first subsystem. Clearly, the probabilities (P_e)_(m,k) can be found as the probabilities that the waiting time of an arrival of type k in state m equals 0. An expression to compute such probabilities for an SM[K]/PH[K]/1 FCFS queue was presented in [3]. Our MMAP[MK]/M[MK]/1 queue belongs to this set of queues; therefore, we can rely on this expression to compute the probability that the waiting time W(m, k) of an arbitrary type k customer that arrived in state m is equal to zero. For our system, the expression in [3] reduces to

    P(W(m, k) = 0) = 1 − (ρ/λ_(m,k)) θ_tot R_(m,k) (e ⊗ T⁰_tot),        (2)

with ⊗ the matrix Kronecker product and

    ρ = Σ_{m,k} λ_(m,k)/μ_CM(m, k).
Let us describe the other components of Eq. (2). First, let T_tot be an MK × MK diagonal matrix with

    (T_tot)_{i,j} = −μ_CM(m, k) if i = j, and 0 otherwise,

where i = (m − 1)K + k for k ∈ {1, . . . , K} and m ∈ {1, . . . , M}. Set the MK × 1 vector T⁰_tot = −T_tot e and define the 1 × M²K vector θ_tot as

    θ_tot = (1/ρ) Σ_{m,k} (πD_(m,k) ⊗ γ_(m,k))/μ_CM(m, k),

where the vector γ_(m,k) is a size MK vector with all its entries equal to zero, except entry (m − 1)K + k, which equals one. The square matrices R_(m,k) of dimension M²K can be written as R_(m,k) = L(D_(m,k) ⊗ I).
Thus, provided that we know the M²K × M²K matrix L, we can compute the required probabilities (P_e)_(m,k). The matrix L obeys the following set of equations:

    T = I ⊗ T_tot + L (Σ_{m,k} D_(m,k) ⊗ T⁰_tot γ_(m,k)),               (3)

    −I = T L + L(D_0 ⊗ I),                                              (4)

where the second equation for L is known as a Sylvester matrix equation, which can be rewritten as

    Φ(L) = −Φ(I)(T′ ⊗ I + I ⊗ D_0 ⊗ I)^{−1},                            (5)

where T′ is the transpose of the matrix T and Φ(L) is the direct sum of the matrix L, i.e., its rows concatenated into a single row vector. In order to retrieve L from these equations, two iterative approaches can be followed, the convergence of which is guaranteed by [17, Theorem 2.1]. We start by setting T[0] = I ⊗ T_tot and compute L[N], for N ≥ 0, by inserting T[N] in either Eq. (4) or Eq. (5). Subsequently, we compute T[N + 1] from L[N] via Eq. (3). Using the direct sum approach results in a time complexity of O((M²K)⁶), whereas solving the Sylvester equation requires only O((M²K)³) time. Nevertheless, for M²K small, which is the case in our numerical examples, the direct sum approach may still prevail.

Similarly, the probability that an arbitrary customer finds the transmission buffer empty, which we denote by p_e, can be obtained as

    p_e = P(W = 0) = 1 − (ρ/λ) θ_tot R (e ⊗ T⁰_tot),                    (6)

with

    R = Σ_{m,k} R_(m,k).
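The following sketch implements this fixed-point iteration along the Sylvester route of Eq. (4), using scipy.linalg.solve_sylvester; it assumes D_0, the matrices D_(m,k) and T_tot are available in the forms defined above (e.g., as produced by the earlier MMAP sketch) and is our own illustration rather than the authors' code.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def solve_for_L(D0, Dmk, T_tot, tol=1e-12, max_iter=200):
    """Fixed-point iteration for the matrix L of Eqs. (3)-(4).

    D0    : (M, M) matrix D_0 of the MMAP[MK] process
    Dmk   : dict mapping (m, k) (1-based) to the (M, M) matrix D_(m,k)
    T_tot : (MK, MK) diagonal matrix with entries -mu_CM(m, k)
    """
    M = D0.shape[0]
    MK = T_tot.shape[0]
    K = MK // M
    T0_tot = -T_tot @ np.ones((MK, 1))          # column vector T0_tot = -T_tot e
    I_M, I_MK = np.eye(M), np.eye(MK)

    T = np.kron(I_M, T_tot)                     # initial guess T[0] = I (x) T_tot
    for _ in range(max_iter):
        # Eq. (4): T L + L (D0 (x) I) = -I is a Sylvester equation in L
        L = solve_sylvester(T, np.kron(D0, I_MK), -np.eye(M * MK))
        # Eq. (3): T[N+1] = I (x) T_tot + L sum_{m,k} D_(m,k) (x) (T0_tot gamma_(m,k))
        S = np.zeros_like(T)
        for (m, k), D in Dmk.items():
            gamma = np.zeros((1, MK))
            gamma[0, (m - 1) * K + (k - 1)] = 1.0   # unit row vector gamma_(m,k)
            S += np.kron(D, T0_tot @ gamma)
        T_new = np.kron(I_M, T_tot) + L @ S
        if np.max(np.abs(T_new - T)) < tol:
            break
        T = T_new
    return L
```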
The decomposition algorithm executes successive iterations during which both subsystems are solved one after the other, until the probabilities (P_e)_(m,k) have converged, that is, as soon as the absolute differences of (P_e)_(m,k) between two successive iterations are less than ε = 10^{-14}. After the algorithm has converged, the probabilities (P_e)_(m,k) are used to calculate the total mean response times, which equal E[R_cont](P_e)_(m,k) + E[R_res](k) with k ∈ {1, . . . , K} and m ∈ {1, . . . , M}.

5.3. An alternative decomposition approach

We have also investigated whether it is possible to use a simplified decomposition approach that exchanges fewer parameters between the two subsystems. Therefore, we have considered the following two subsystems. The first subsystem is the BCMP network with no distinction between the different customer types and states (see [4]); the second subsystem was reduced to a MAP/M/1 queue. Packets still arrived at the CMs according to the same Markovian arrival process, but were no longer marked. The service rate μ_CM was set to μ_CM = 1/(E[R_cont]p_e + E[R_res]) and is identical for all customers. The mean response times of the reservation channel and contention channel are obtained by solving the first subsystem, and from the second subsystem we determine the probability that an arbitrary customer finds the transmission buffer empty, i.e., p_e.

We have performed some trial experiments using this simplified decomposition approach. These experiments pointed out that the simplified decomposition technique can, for larger c values (and depending on the initial choice of p_e), lead to instability in the MAP/M/1 queue. Intuitively, this can be understood as follows. If we make a distinction between the different states m (and types k), the probability (P_e)_m that the server is found empty by a new arrival in state m will be lower for states with a high arrival rate λ_m (see Fig. 6(b)), resulting in a mean service time E[R_cont](P_e)_m + E[R_res](k) that is somewhat lower as well. This is no longer the case if all customers are treated equally, which causes the instability. Also, even if stability is retained, the initial choice of p_e may affect the final result, which was never the case for the more complex decomposition scheme presented in Sections 5.1 and 5.2.
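For completeness, the outer loop of the full decomposition of Sections 5.1 and 5.2 (not the simplified variant just described) can be organized as in the sketch below; the two subsystem solvers are passed in as callables and are placeholders for the MVA computation of Section 5.1 and the waiting-time expressions (2) and (6) of Section 5.2.

```python
import numpy as np

def decomposition(solve_subsystem1, solve_subsystem2, Pe0, eps=1e-14, max_iter=1000):
    """Alternate between the two subsystems until the probabilities
    (Pe)_(m,k) converge (a sketch, not the paper's code).

    solve_subsystem1(Pe)              -> (E_Rcont, E_Rres)   # closed BCMP network
    solve_subsystem2(E_Rcont, E_Rres) -> Pe                  # MMAP[MK]/M[MK]/1 queues
    """
    Pe = np.asarray(Pe0, dtype=float)
    for _ in range(max_iter):
        E_Rcont, E_Rres = solve_subsystem1(Pe)
        Pe_new = solve_subsystem2(E_Rcont, E_Rres)
        if np.max(np.abs(Pe_new - Pe)) < eps:
            return Pe_new, E_Rcont, E_Rres
        Pe = Pe_new
    return Pe, E_Rcont, E_Rres
```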
Table 1
Default system parameters

Parameter                           Symbol     Value
Upstream channel capacity           r          10^4 bits/ms
Request packet size                 Q          16 bytes
Number of data packet types         K          2
Data packet of type 1               L(1)       438 bytes
Data packet of type 2               L(2)       538 bytes
Probability of type 1 data packet   p_L(1)     0.7
Probability of type 2 data packet   p_L(2)     0.3
Number of states                    M          3
Mean time in state 1                1/λ_1      100/8 ms
Mean time in state 2                1/λ_2      10000/15 ms
Mean time in state 3                1/λ_3      1000/15 ms
Number of stations                  N          100
Min. contention window              CW_min     4
Max. value                          m′         6
Max. backoff stage                  m          15
6. Model validation

Using a two-step procedure that included various simulation runs, we demonstrated in [4] that the behavior of the contention channel can be accurately mimicked by the PS queue described in Section 5. In this section we validate the precision of the decomposition scheme by comparing our analytical results with those obtained from a detailed event-driven simulation of the original open queueing network with blocking. The default parameter settings used in this section are presented in Table 1. Most of these parameters correspond to the default values proposed in the Cisco specification "Cisco - Understanding Data Throughput in a DOCSIS World". In addition, the default setting for the matrix P, which holds the probabilities of changing from state i to state j for i, j ∈ {1, . . . , M}, is given by
        ( 0.990  0.001  0.009 )
    P = ( 0.021  0.975  0.004 ).                                        (7)
        ( 0.012  0.001  0.987 )

We assume an average type 1 and type 2 packet length of L(1) and L(2) bytes, respectively; this includes an 8% forward error correction (FEC) overhead, the 6 byte overhead of the DOCSIS header, etc. The 10 Mbit channel supports both the contention channel and the reservation channel. The fraction of contention channel, resp. reservation channel, is represented by c, resp. 1 − c; thus, the type k service rate of the reservation queue, μ(k), and the service rate of the contention queue when n customers are receiving service, ν(n), equal:

    μ(k) = (1 − c) 10^4/(L(k) ∗ 8),                                     (8)
    ν(n) = cS(n) 10^4/(16 ∗ 8).                                         (9)

Remark that the unit of these rates is 1/ms. The data load of the system can be computed as

    ρ_data = N λ 8 E[L]/10^4,                                           (10)

where λ denotes the mean arrival rate as computed in Section 4 and E[L] denotes the mean packet size. The effective load on the reservation channel is obviously higher and equals

    ρ_res = ρ_data/(1 − c).                                             (11)
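As a worked check of Eq. (10), the following sketch (our own, using the defaults of Table 1 and Eq. (7)) builds the generator D of Section 4, computes π and λ, and evaluates ρ_data; the result is approximately 0.63, the data load quoted for the default setting in Section 7.

```python
import numpy as np

# Default parameters of Table 1 and Eq. (7).
P = np.array([[0.990, 0.001, 0.009],
              [0.021, 0.975, 0.004],
              [0.012, 0.001, 0.987]])
lam = np.array([8 / 100, 15 / 10000, 15 / 1000])   # lambda_i = 1 / mean time in state i, per ms
N, pL, L = 100, np.array([0.7, 0.3]), np.array([438, 538])   # packet sizes in bytes

# Generator D (Section 4): D_{i,j} = lambda_i P_{i,j} for i != j,
# D_{i,i} = -(1 - P_{i,i}) lambda_i.
D = np.diag(lam) @ P - np.diag(lam)

# Stationary vector pi with pi D = 0 and pi e = 1.
A = np.vstack([D.T, np.ones(3)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]

rate = pi @ lam                      # lambda = -pi D0 e, packets per ms per CM
EL = pL @ L                          # mean packet size in bytes
rho_data = N * rate * 8 * EL / 1e4   # Eq. (10)
print(round(rho_data, 2))            # ~0.63
```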
Fig. 4. Validating the decomposition technique: the influence of λ_M on (a) the mean response times, and (b) the probability of arriving in an empty transmission buffer p_e.
Numerical results for three different arrival processes are depicted in Fig. 4(a) and (b). These processes differ only in their choice of the vector λ_M, which determines the mean sojourn times (i.e., inter-arrival times) in each of the M states. Let A_λ(1) be the default process, whose parameters were given in Table 1. The other processes A_λ(r), for r ≠ 1, are systematically obtained from A_λ(1) by replacing the vector λ_M by λ_{M,r} = λ_M/r, i.e., λ_{M,r} = (λ_1/r, . . . , λ_M/r). Note that A_λ(r) is identical to A_λ(1), except that the inter-arrival times between consecutive packets are r times as long.

Fig. 4(a) and (b) shows that there is no perfect agreement between the analytical and simulation results, especially for small r (that is, higher load scenarios). Still, we observe that the location of the optimal c value, that is, the optimal fraction that should be dedicated to the contention channel, does agree to a high level of accuracy. Results not included here have shown that altering the number of CMs N (e.g., N = 50, 200) while keeping the arrival process identical to A_λ(1) results in a similar type of accuracy; that is, there is no perfect match between the simulation and analytical model, but the location of the optimal c is quite accurate.

A comparison of the mean response time and the probability that a new packet finds the transmission buffer empty upon arrival, p_e, for different values of CW_min, is made in Fig. 5(a) and (b), respectively. We can see that different values of the minimum CW lead to similar results. The accuracy of the results in this section is of a similar nature to the results presented in [4], where we also observed an overestimation of the mean response time by the analytical model. In [4], the main cause for this overestimation was identified as the inaccuracy of the saturation throughput values S(n) determined by the Bianchi-like approach. Indeed, these throughputs S(n), used to determine the service rates of the contention queue, were typically underestimated for small n, causing an overestimation of the mean delay.

7. Numerical results

In this section we study the optimal fraction c∗ of the MAP length that should be dedicated to the contention channel such that the mean response time is minimal. Unless otherwise stated, the protocol parameters are set as indicated in Table 1 and Eq. (7). Fig. 6(a) and (b) shows the mean response time and the probability of arriving in an empty transmission buffer for the default scenario. Fig. 6(a) shows the influence of the different customer types on the optimum c∗. When plotting the influence of the contention fraction c, we add a "∗" to identify the optimum c∗. The mean response times for the different customer types are computed as E[R](k) = E[R_cont]p_e + E[R_res](k) for k = 1, . . . , K. From this figure we may conclude that smaller packet sizes lead to a higher optimum c∗ and to lower minimum mean response times. This is in line with our expectations, as larger packets benefit more from an increased service rate at the reservation queue. Experiments not shown here indicate that the influence of the state m on the mean response time or the optimum c∗ is less significant compared with the impact of k.
Fig. 5. Validating the decomposition technique: the influence of the minimum contention window CW_min on (a) the mean response time, and (b) the probability p_e.
Fig. 6. (a) Mean response time for the default scenario, and (b) the probability of arriving in an empty transmission buffer p_e for the default scenario.
The probability that a new packet, the arrival of which occurred in state m, finds the transmission buffer empty, i.e., (P_e)_m, is obviously affected by the state m (see Fig. 6(b)).

Let us investigate the impact of the minimum CW, CW_min, on the optimum c∗. Fig. 7(a) shows that the optimum c∗, as well as the minimum mean response time, increases as a function of CW_min. Such an increase reduces the collision probability because, on average, a station defers its transmission by a greater number of contention slots. Thus, a CM needs fewer attempts to transmit a request, but the time between two attempts is substantially larger, causing higher delays on the contention channel and augmenting the need for more contention slots. Basically, what this result states is that there is not enough activity on the contention channel to justify a larger minimal CW. Fig. 7(b) is an extension of Fig. 7(a) that shows the mean response times for the different customer types k = 1, 2. As in Fig. 6(a), we observe that the effect of the packet size on the optimum c∗ and the minimum mean response time is similar for each of the CW_min values.
Fig. 7. (a) Mean response time when varying CW_min, and (b) mean response time when varying CW_min: influence of the different packet sizes.
Fig. 8. Mean response time when (a) λ_{M,r} varies, and (b) p_L varies.
Additional experiments have demonstrated that this tendency also arises when we change other system parameters; therefore, in the remainder of the paper, we refrain from showing numerical results per customer type k.

Let us now focus on the influence of the selection of the arrival process. As before, we start by varying the vector λ_M. Define A_λ(r) as in Section 6. Recall that increasing r, i.e., decreasing λ, corresponds to a lower data load ρ_data. Consequently, the probability that a new packet finds an empty transmission buffer grows. Thus, fewer request packets can make use of the piggybacking scheme and the contention channel becomes more important. Fig. 8(a) confirms that the optimal fraction c∗ of the contention channel is indeed larger for lower data loads. Clearly, the minimum mean response time increases as a function of the load ρ_data. It is worth noticing that the optimum becomes broader as the load diminishes, meaning that even if the fraction c is not exactly equal to c∗ we still have a nearly minimum mean response time. Hence, the choice of c becomes more critical as more bandwidth is being consumed.
Fig. 9. Mean response time when varying the correlation.
As such, we recommend dimensioning the contention channel for high-load traffic scenarios, as this will also guarantee close-to-optimal performance for the low-load cases. Similar results are observed when the number of CMs N is varied, while λ_M remains fixed.

Fig. 8(b) shows the influence of the weight dedicated to the two different packet sizes, i.e., the influence of varying q if p_L = (p_L(1), p_L(2)) = (q, 1 − q). We can conclude that the optimal fraction c∗ increases, whereas the minimum mean response time decreases, as a function of q. This can be explained as follows. If q increases, the mean packet size E[L] decreases, and therefore the data load decreases as well. Consequently, more packets will find the transmission buffer empty, that is, p_e increases; thus, a larger contention region is needed. Clearly, reducing the data load gives rise to a lower mean response time.

We now study the influence of the correlation structure of the arrival process on the optimal fraction c∗. For this purpose we create a series of arrival processes A_P(r) that systematically changes the correlation between consecutive inter-arrival times, without altering the arrival rate or data load. Let A_P(1) = A_λ(1) be the default arrival process and define A_P(r) as follows for r ≥ 1. All the processes A_P(r), r > 1, are identical to A_P(1), except that the probability matrix P that determines the MMAP[MK] state when an arrival occurs is set equal to (P_r)_{i,j} = P_{i,j}/r for i ≠ j, and (P_r)_{i,i} = 1 − Σ_{j≠i} (P_r)_{i,j} (a small construction sketch is given at the end of this section). The correlation between consecutive inter-arrival periods of A_P(r) increases with increasing r [18]. The results for r = 1, 2, 20, 50 and 100 are shown in Fig. 9, which shows that the results for different r values nearly coincide. This is a useful result, as it seems to indicate that the correlation structure of the arrival process has a very limited impact on the amount of contention channel needed and can therefore be neglected when dimensioning the contention channel.

Finally, we examine the influence of having a fixed load ρ_data but a varying number of CMs N (i.e., Nλ is kept constant). Define A_{N,λ}(1) as the default process and define A_{N,λ}(r) by multiplying N, the number of CMs, by r and dividing λ_M by r. The other system parameters remain unaltered. Fig. 10(a) and (b) shows some results for different r values, all of which correspond to a data load of 0.63. The probability p_e that a new packet arrives in an empty transmission buffer is substantially higher when the number of CMs N increases and ρ_data is fixed (see Fig. 10(a)), leading to a greater demand for contention capacity (see Fig. 10(b)).
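A minimal sketch of this correlation-scaling construction (our own code, not the authors'), which also verifies numerically that the per-CM arrival rate λ, and hence the data load, is left unchanged:

```python
import numpy as np

def scale_correlation(P, r):
    """P_r: off-diagonal entries of P divided by r, diagonal entries
    adjusted so that every row still sums to one."""
    Pr = P / r
    np.fill_diagonal(Pr, 0.0)
    np.fill_diagonal(Pr, 1.0 - Pr.sum(axis=1))
    return Pr

def arrival_rate(P, lam):
    """lambda = sum_i pi_i lambda_i, with pi the stationary vector of the
    generator D built from P and the rates lambda_i (Section 4)."""
    D = np.diag(lam) @ P - np.diag(lam)
    A = np.vstack([D.T, np.ones(len(lam))])
    pi = np.linalg.lstsq(A, np.r_[np.zeros(len(lam)), 1.0], rcond=None)[0]
    return pi @ lam

P = np.array([[0.990, 0.001, 0.009],
              [0.021, 0.975, 0.004],
              [0.012, 0.001, 0.987]])
lam = np.array([8 / 100, 15 / 10000, 15 / 1000])
print(arrival_rate(P, lam), arrival_rate(scale_correlation(P, 50), lam))  # equal up to rounding
```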
Fig. 10. (a) Influence of a fixed load ρ_data: the probability of arriving in an empty transmission buffer p_e, and (b) influence of a fixed load ρ_data: mean response time.
8. Conclusion

In this paper we introduced an open queueing network with a non-standard form of blocking to assess the performance of a DOCSIS CM network. We analyzed the performance of this queueing network via a decomposition technique. After validating the accuracy of this approach, we relied on this model to dimension the contention channel of a DOCSIS access network. Based on the numerical results presented in Section 7, we recommend dedicating 10-15% of the minislots of the MAP length to the contention channel, as this will result in a near optimal mean response time for all arrival rates λ. Indeed, for high data loads the optimum is small and close to 10%, whereas for lower data loads the optimal fraction is larger, but broader as well, meaning that a fraction between 10% and 15% still provides near optimal results. The exact optimal fraction c∗ depends on several system parameters, e.g., the minimum CW CW_min, the number of CMs, etc.

Acknowledgment

B. Van Houdt is a post-doctoral Fellow of the FWO-Flanders.

References

[1] CableLabs. Data-over-cable service interface specifications, radio frequency interface specification. Available at http://www.cablemodem.com/specifications.
[2] Baskett F, Chandy KM, Muntz RR, Palacios FG. Open, closed, and mixed networks of queues with different classes of customers. Journal of the Association for Computing Machinery 1975;22(2):248-60.
[3] He Q. Analysis of a continuous time SM[K]/PH[K]/1/FCFS queue: age process, sojourn times, and queue lengths. Working paper 04-01, Department of Industrial Engineering, Dalhousie University, 2004.
[4] Lambert J, Van Houdt B, Blondia C. Dimensioning the contention channel of DOCSIS cable modem networks. In: Proceedings of Networking 2005, Lecture notes in computer science, vol. 3462, Waterloo, Canada, April 2005. p. 342-57.
[5] Shah N, Kouvatsos D, Martin J, Moser S. A tutorial on DOCSIS: protocol and performance models. In: Proceedings of the international working conference on performance modelling and evaluation of heterogeneous networks, Ilkley, UK, July 2005.
[6] Bianchi G. Performance analysis of the IEEE 802.11 distributed coordination function. IEEE Journal on Selected Areas in Communications 2000;18(3).
[7] Wu H, Peng Y, Long K, Cheng S, Ma J. Performance of reliable transport protocol over IEEE 802.11 wireless LAN: analysis and enhancement. In: IEEE Infocom 2002, New York, June 2002.
[8] Litjens R, Roijers F, van den Berg JL, Boucherie RJ, Fleuren MJ. Performance analysis of wireless LANs: an integrated packet/flow level approach. In: Proceedings of ITC-18, Berlin, September 2003.
[9] He Q, Neuts MF. Markov chains with marked transitions. Stochastic Processes and their Applications 1998;74:37-52.
[10] Lucantoni DM. New results on the single server queue with a batch Markovian arrival process. Stochastic Models 1991;7(1):1-46.
[11] Klemm A, Lindemann C, Lohmann M. Modeling IP traffic using the batch Markovian arrival process. Performance Evaluation 2003;54(2):149-73.
[12] Breuer L. Parameter estimation for a class of BMAPs. In: Latouche G, Taylor P, editors. Advances in algorithmic methods for stochastic models. Notable Publications Inc.; 2000. p. 87-97.
[13] Horváth G, Telek M, Buchholz P. A MAP fitting approach with independent approximation of the inter-arrival time distribution and the lag correlation. In: Second international conference on the quantitative evaluation of systems, QEST, Torino, Italy, April 2005. Silver Spring, MD: IEEE CS; 2005. p. 124-33.
[14] Foh CH, Zukerman M. Performance analysis of the IEEE 802.11 MAC protocol. In: European Wireless 2002, Florence, February 2002.
[15] Ramesh S, Perros HG. A multilayer client-server queueing network model with synchronous and asynchronous messages. IEEE Transactions on Software Engineering 2000;26(11).
[16] Lavenberg SS. Computer performance modeling handbook. New York: Academic Press; 1983.
[17] Sengupta B. Markov processes whose steady state distribution is matrix-exponential with an application to the GI/PH/1 queue. Advances in Applied Probability 1989;21:159-80.
[18] Van Houdt B, Blondia C, Casals O, García J. Performance evaluation of a MAC protocol for broadband wireless ATM networks with QoS provisioning. Journal of Interconnection Networks (JOIN) 2001;2(1):103-30.