Computer Networks 51 (2007) 3902–3918 www.elsevier.com/locate/comnet
Probe-based admission control for a differentiated-services internet

Ignacio Más *, Gunnar Karlsson

School of Electrical Engineering, KTH, Royal Institute of Technology, SE-100 44 Stockholm, Sweden

Received 6 October 2006; received in revised form 21 February 2007; accepted 16 April 2007
Available online 29 April 2007

Responsible Editor: Nelson Fonseca

* Corresponding author. Tel.: +46 84045580. E-mail addresses: [email protected] (I. Más), [email protected] (G. Karlsson).
Abstract

End-point admission control solutions have been proposed to meet quality requirements of audio-visual applications with little support from routers. These proposals decentralize the admission decision by requiring each host or access gateway to probe the network before sending data. In this paper we describe a probe-based admission control scheme that offers a reliable upper bound on packet loss, as well as small end-to-end delay and delay jitter. The admission control supports host mobility and multicast communications without adding any complexity to the network nodes. We present a mathematical analysis which relates system performance to design parameters and which can be used as a dimensioning aid for the system. Finally, we describe performance results from an experimental prototype as well as simulations that prove that the scheme provides a reliable and efficient solution for QoS provisioning for delay- and loss-sensitive applications.

© 2007 Elsevier B.V. All rights reserved.

Keywords: QoS; Admission control; DiffServ
1. Introduction

Today's multimedia transmissions on the Internet require a better and more predictable service quality than that obtained with the available best-effort service. Most multimedia applications are designed to manage losses and to smooth out the jitter incurred under this network condition. Interactive communication, in addition, has stringent delay requirements. For
example, IP telephony requires a one-way delay of roughly 150 ms that needs to be maintained during the whole call [1]. The IETF has proposed two different architectures to provide quality of service (QoS) guarantees: Integrated Services (IntServ) [2] and Differentiated Services (DiffServ) [3]. IntServ provides three classes of service to the users: the guaranteed service (GS) [4] offers transmission with no packet loss and with deterministically bounded end-to-end delay by assuring a fixed amount of bandwidth for the traffic flows; the controlled-load service (CLS) [5] provides a service similar to a best-effort service in a lightly
loaded network by preventing network congestion; and, finally, the best-effort service, which lacks any kind of QoS assurance. IntServ routers need to keep per-flow states and must process per-flow reservation requests, which can create an unmanageable processing load in the case of many simultaneous flows. Consequently, the IntServ architecture provides an excellent quality of service in the GS class, and tight performance bounds in the CLS class; however, it has known scalability limitations.

The second approach to providing QoS in the Internet, the DiffServ architecture, puts much less burden on the routers. DiffServ uses an approach sometimes referred to as class of service (CoS), mapping flows into a few service levels. Applications or ingress nodes mark packets with a DiffServ code point (DSCP) according to their QoS requirements. The routers have to provide a set of per-hop behaviors (PHB), with associated queues and scheduling mechanisms, like expedited forwarding [6] or assured forwarding [7], and schedule packets based on the DSCP field. A drawback of the DiffServ scheme is that it does not contain admission control. The scheme relies instead on service level agreements (SLA), which means that a given service class might be overloaded and all the flows belonging to that class will then suffer increased packet delays and loss. Thus, the aggregation of flows in DiffServ gives improved scalability, but at the cost of less stringent QoS assurances to the user flows. The same scalability properties also create a less dynamic environment for setting up sessions, since the admission control is based on written SLAs.

In this paper, we propose a probe-based admission control (PBAC) scheme which provides a reliable upper bound on the packet loss probability that a flow is exposed to in the network. The proposed PBAC scheme allows a DiffServ implementation of the controlled-load service [8–12]. The CLS can operate alongside the best-effort service by allocating a fixed part of the link capacity to it. The best-effort traffic can use both capacity that has not been reserved by the CLS class as well as reserved capacity that the class does not use. The delay in the controlled-load service is bounded by using small, packet-scale buffers in the routers [13]. Our proposal has low implementation complexity and builds on router functionality which is available in commonly deployed routers in today's networks. The paper summarizes all the findings previously reported in [10–12]; it adds new results from simulations and
testbed experiments, and presents new extensions of the scheme to support mobility; it also describes the implementation details of our testbed prototype.

The remainder of the paper is organized as follows: in Section 2 we give an overview of traditional per-hop measurement-based admission control schemes, usually applied in IntServ-like architectures, as well as of the new family of end-point-based ones; Section 3 describes the probing procedure and the different parameters involved, as well as the prototype implementation; Section 4 describes the application of PBAC to multicast, and Section 5 describes the mobility extensions. Section 6 offers an analytical model. We validate the model with simulations and experimental results, and offer a performance analysis in Section 7. Finally, in Section 8 we offer our conclusions.

2. Related work

The field of admission control has been extensively investigated in recent years. In the following two sections we offer an overview of different measurement-based admission control (MBAC) schemes. The schemes are classified as per-hop or end-point schemes as a way to explain the evolution of the ideas in the research community.

2.1. Per-hop measurement-based admission control schemes

A set of measurement-based admission control schemes has appeared in the literature during the last 10 years. These schemes follow the idea of IntServ to limit network load by connection admission control that does not require per-flow states or exact traffic descriptors. The schemes use some worst-case traffic descriptor, like the peak rate, to describe flows trying to enter the network. Then they base the acceptance decision in each hop on real-time measurements of the aggregate flows. All these algorithms focus on providing resources at a single network node and follow some admission policy, like complete partitioning or complete sharing.

The capacity-partitioning scheme assumes a fixed partition of the link capacity for each of the different classes of connections. Each partition corresponds to a range of declared peak rates, and together the partitions cover the full range of allowed peak rates without overlap. A new flow is admitted only if there is enough capacity in its class partition. This provides a fair distribution of the
blocking probability amongst the different traffic classes, but it risks lowering the total throughput if some classes are lightly loaded while others are overloaded. The capacity-sharing scheme, on the contrary, makes no distinction among flows. A new flow is admitted if there is capacity for it, which may lead to a dominance of flows with smaller peak rates. To perform the actual admission procedure, measurement-based schemes use RSVP signaling, which can incur additional blocking because of provisional reservations that need to be released when a downstream router blocks the call or the call setup phase is long.

The idea of measurement-based admission control is further simplified in [14]. In this proposal the egress routers decide about the admission of a new flow. Edge routers passively monitor the aggregate traffic on transmission paths, and accept new flows based on these measurements. An overview of several MBAC schemes is presented in [15]. This overview reveals that all the considered algorithms have similar performance, independently of their algorithmic complexity.

While measurement-based admission control schemes require limited capabilities from the routers and source nodes, compared to traditional admission control or reservation schemes, they show a set of drawbacks: flows over longer transmission paths experience higher blocking probabilities than flows over shorter paths, and flows with low capacity requirements are favored over those with high capacity needs. The latter drawback is somewhat mitigated in the capacity-partitioning schemes, though it still suffers from the level of granularity that the partitioning achieves.

2.2. End-point admission control schemes

In recent years a new family of admission control solutions has been proposed to provide controlled-load-like services with little or no support from the routers. These proposals share the common idea of end-point admission control based on measurements: a host sends probe packets before starting a new session and decides about the flow admission based on statistics of probe packet loss [16,8,17], explicit congestion notification marks [18–20], or delay and delay variation [21–23]. The admission decision is thus moved to the edge nodes, and it is made for the entire path from the source to the destination, rather than per hop. Consequently, the service class does not require any other support
from the routers than one of the various scheduling mechanisms required by DiffServ, and possibly the capability of marking packets. In most of the schemes, the accuracy of the admission decision requires the transmission of a large number of probe packets to provide an estimate of the network state with good confidence. Furthermore, the schemes require a high multiplexing level on the links to make sure that the load variations are small compared to the average load. A detailed comparison of different end-point admission control proposals is given in [24]. It shows that their respective performances are quite similar, and thus the complexity of the schemes may be the most important design consideration.

3. Probe-based admission control

Our admission control belongs to the family of end-point admission control schemes. In this way it avoids the added complexity that the per-hop schemes presented in Section 2.1 require from the network nodes. The main difference of our scheme in comparison with the end-point admission control schemes presented in the previous section is the level of complexity that the schemes require, as well as the support for advanced functionality like multicast or mobility. The ECN-based schemes presented in [18–20] treat probe and data packets identically, marking them when congestion occurs. Flows send the amount of traffic they wish, but they are charged for the marked packets. Our scheme, on the contrary, differentiates probe and data packets, so that probes can never disturb ongoing sessions. It also imposes a limit on the amount of traffic a flow can send, so that the traffic never exceeds the quantity requested from the network by probing. The other schemes presented in the previous section are a sample of a group of schemes that try to estimate the available bandwidth on the network path by using delay timing techniques over packet pairs or trains of packets. The main difference with our scheme is that we do not look for a figure of available bandwidth; instead we just answer the question of whether the new flow fits in the network path or not. The delay timing techniques rely on a set of mathematical formulas applied to the difference in inter-packet time at source and destination, which achieve different levels of accuracy in the bandwidth estimation. PBAC, however, only measures packet loss and makes an
estimation of the loss the flow would experience if admitted. The purpose of our admission control scheme is to prevent new sessions from degrading the QoS of ongoing sessions below some pre-established level. We are thus preventing congestion from occurring, rather than resolving it after it happens, as reactive schemes such as TCP-friendly rate control do. Our contribution builds on the arguments offered in [24] by providing a low-complexity end-point admission control solution which only requires support from routers that is commonly available in deployed equipment.

3.1. General description of the admission control

The admission control is done by measuring the loss ratio of probe packets that are sent at the peak rate of the flow and transmitted with low priority to avoid disturbing already established sessions. The scheduling system of the routers consequently has to differentiate data packets from probe packets. To achieve this, two different approaches are possible (see Fig. 1). In the first, there are two queues, one with high priority for data and one with low priority for probe packets. In the second approach, there is just one queue with a discard threshold for the probes.

[Fig. 1. The queuing system.]

Considering the double-queue solution, the size of the high-priority buffer for the data packets is selected to ensure a low maximum queuing delay and an acceptable packet loss probability, i.e., to provide packet-scale buffering [25]. The buffer for the probe packets can accommodate a small number of probe packets, usually one packet per router input interface, to ensure an over-estimation of the data-packet loss while avoiding probe thrashing effects (discussed in Section 3.2.4). The threshold queue can be designed to provide similar performance to the double-queue solution, as shown in Section 7.1. The choice between the two approaches can be left as a decision for the router designer.

Fig. 2 shows the phases of the PBAC session establishment scheme. When a host wishes to set up a new flow, it starts by sending a constant bit rate probe at the maximum rate that the data flow will require. The probe duration is chosen by the sender within a range of values defined in the service contract. This range forces new flows to probe for long enough to obtain a sufficiently accurate measurement, while it prohibits them from performing unnecessarily long probes. The probe packet size should be small to maximize the number of packets in the probing period.

[Fig. 2. The probing procedure.]

When the host sends the probe packets, it specifies the peak bit rate and the length of the probe measured in packets. With this information the receiving host can perform an early rejection based on the maximum number of packets that can be lost without surpassing the target loss probability. If this number is surpassed, the flow is immediately rejected. The probe packets also contain a flow identifier to allow the host to distinguish probes for different sessions; since one sender could open more than one session simultaneously, the IP address in the probes is not enough to differentiate them. Upon receiving the first probe packet for a flow, the host starts counting the number of received packets and the number of lost packets (by checking the sequence numbers of the packets it receives). When the probing period finishes and the host receives the last probe packet, it compares the upper bound of a confidence interval for the estimated probe loss (see Section 3.2.1) with the target loss. It then sends back an acknowledgment packet accepting or rejecting the incoming flow, depending on whether the bound was met or exceeded.
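To make the receiver-side bookkeeping concrete, the following sketch counts probe packets by sequence number and applies the early-rejection bound described above. It is our own illustration, not code from the prototype, and the final check is the simplified loss-ratio comparison (the confidence-interval rule appears in Section 3.2.1):

import math

class ProbeMeasurement:
    """Receiver-side accounting for one probe train (illustrative sketch)."""

    def __init__(self, probe_len_packets, target_loss):
        self.n_p = probe_len_packets                    # announced probe length in packets
        self.target = target_loss                       # class-wide target loss probability
        self.max_lost = math.floor(target_loss * probe_len_packets)
        self.highest_seq = -1
        self.received = 0

    def on_probe_packet(self, seq):
        """Return 'accept', 'reject' or None (keep measuring)."""
        self.received += 1
        self.highest_seq = max(self.highest_seq, seq)
        lost_so_far = (self.highest_seq + 1) - self.received
        if lost_so_far > self.max_lost:
            return "reject"                             # early rejection
        if seq == self.n_p - 1:                         # last packet of the probe train
            return "accept" if lost_so_far / self.n_p <= self.target else "reject"
        return None

For instance, a 2 s probe at 1 Mb/s with 64-byte packets (about 3900 packets) and a 1% target loss would tolerate at most 39 lost packets before the early rejection triggers.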
The acknowledgment packet is sent back at high priority to minimize the risk of loss. If the decision is positive, the receiver starts a timer to control the arrival of data packets. The value of this timer ought to be slightly more than two round-trip times (RTT), since there could be variations in the RTT value and the ACK packet has to reach the sender and trigger the transmission of data packets. If this timer goes off, the receiving host assumes that the acceptance packet has been lost and resends it. The value of the timer can be set, for example, to 600 ms as a design choice, which should suffice as twice the RTT for most paths on the Internet. This value only affects the setup delay for new flows in the case of a loss of the acknowledgment packet. The receiving host cleans up the information for the incoming flow when the timer expires without having received any data packet.

When the probe process finishes, the sender starts a timer with a value greater than two times the expected round-trip time. The value follows the same argument as the receiver timeout, explained in the previous paragraph. This timer goes off in case the sender does not receive an acknowledgment to the probe. The timer allows the sender to infer that none of the probe packets went through or that the acknowledgment packet with the acceptance decision from the receiver got lost. The sender assumes the first scenario, since the admission decision is sent at high priority, and aborts the process. The value of the timer can again be set to 600 ms. Nevertheless, the effect of this timer on the whole setup delay is negligible, since it only affects flows that would have to back off anyway.

Finally, when the sending host receives the acceptance decision, it starts sending data with high priority or, in the case of a rejection, it backs off for a certain amount of time before starting to probe again. In the case in which the sender has no data to be sent, a keep-alive mechanism is started as high-priority data to prevent the receiver from closing the connection. In subsequent tries, the host can increase the probe duration, up to the maximum level allowed, so that a higher accuracy is achieved for the measurement. There is a maximum number of retries that a host is allowed to perform before having to give up. The back-off strategy and the number of retries affect the connection setup time for new sessions and should be carefully tuned to balance the acceptance probability with the expected setup delay. The experimental prototype and the simulations use an exponential back-off with an average of twice the probe duration, which has proved effective in reducing the effect of probe thrashing (see Section 3.2.4).

The acceptance threshold is fixed for the service class and is hence the same for all sessions. The reason for this is that the QoS experienced by a flow is a function of the load from the flows accepted in the class. Considering that this load depends on the highest acceptance threshold among all sessions, by having different thresholds all flows would degrade to the QoS required by the one with the least stringent requirements. The class definition also has to state the maximum data rate allowed, to limit the size of the sessions that can be set up. Each data flow should not represent more than a small fraction of the service class capacity (no more than 10%), to ensure that statistical multiplexing works well [26]. The enforcement of the maximum peak rate can be done by the service provider by using simple monitoring tools in the aggregation node [27].

3.2. Architectural considerations

Admission control based on end-point measurements has a number of difficulties that stem from the open and best-effort nature of the Internet. To overcome those difficulties some architectural considerations need to be taken into account. The following sections offer some insights into common problems of end-point admission control schemes, with our proposed solutions and comments.

3.2.1. The acceptance decision

To perform the acceptance decision, we define a target loss probability p_pr and measure the empirical loss probability p_me from the probe. The accuracy of the measurement depends on the number of probe packets, so we should use the smallest packet size possible in order to maximize the number for a chosen probe duration. For our first test of the admission decision rule, we assumed a normal distribution of the probe loss [8]. This assumption allows us to define a confidence level on the measured loss probability, so that a session is acceptable if

$$p_{me} + z_R \sqrt{\frac{p_{me}(1 - p_{me})}{s}} \le p_{pr}, \quad \text{given that} \quad p_{pr} \cdot s > 10.$$

In this, s is the number of probe packets sent, R is the confidence level we want to have, and z_R is the (1 - (1 - R)/2)-quantile of the normal distribution. The second condition ensures that we have a sufficient number of samples for the estimation.
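Translated into code, the rule reads as follows; the function name and the hard-coded quantile for a 95% confidence level are our own choices:

from math import sqrt

def accept_session(lost, sent, p_target, z=1.96):
    """Accept if the upper bound of the confidence interval for the measured
    probe loss stays below the target loss (z = 1.96 for a 95% level)."""
    if p_target * sent <= 10:
        raise ValueError("too few probe packets for a reliable estimate")
    p_me = lost / sent
    upper = p_me + z * sqrt(p_me * (1.0 - p_me) / sent)
    return upper <= p_target

# 25 losses out of 3900 probe packets: upper bound ~0.0089, below a 1% target.
print(accept_session(lost=25, sent=3900, p_target=0.01))   # True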
We have tested the normality assumption experimentally and by simulations, performing over one thousand probe sessions and then using a Kolmogorov–Smirnov test for each probe loss distribution. The results can be seen in Figs. 3 and 4. These two figures were generated by experimental runs in our laboratory prototype setup (see Section 3.3.3) using probe sessions of 2 s. Both figures were generated with background on–off traffic with exponentially distributed holding times, with a 20 ms average on holding time and 1 Mb/s rate when on, and a 35.5 ms average off holding time. The background traffic was filling 85% of the link capacity. Fig. 3 shows the CDF of the probe loss ratio for 1247 probe sessions of 1 Mb/s, as well as the CDF of the normal distribution with the same average and variance. The experimental data distribution has a 97.7% chance of being obtained from the normal distribution according to the goodness-of-fit test. Fig. 4 shows the CDF for a probe of 5 Mb/s peak rate over the same link and with the same background traffic. In this second case the goodness-of-fit test gives an 82.3% confidence value for our assumption. Some extra simulations have proven that the normality assumption holds with other source types, like Pareto-distributed on–off sources, and for any level of link load.

[Fig. 3. CDF of the probe loss ratio for a 1 Mb/s peak rate call over a 100 Mb/s link.]

[Fig. 4. CDF of the probe loss ratio for a 5 Mb/s peak rate call over a 100 Mb/s link.]
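The goodness-of-fit step can be reproduced along the following lines. This is our own sketch, with synthetic loss ratios standing in for the measured ones, and, as in the figures, the reference normal distribution uses the sample mean and variance:

import numpy as np
from scipy import stats

# Synthetic stand-in for the probe loss ratios of ~1250 two-second probe sessions.
rng = np.random.default_rng(1)
loss_ratios = rng.normal(loc=0.05, scale=0.012, size=1247)

# One-sample Kolmogorov-Smirnov test against a normal distribution with the
# same mean and standard deviation as the measured data.
mean, std = loss_ratios.mean(), loss_ratios.std(ddof=1)
statistic, p_value = stats.kstest(loss_ratios, "norm", args=(mean, std))
print(f"KS statistic = {statistic:.4f}, p-value = {p_value:.3f}")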
3.2.2. Multiple service levels

As we have already mentioned, there is only one acceptance threshold for the service class. This is required to prevent flows with less stringent packet loss requirements from being admitted when their acceptance would deteriorate the service being offered to already accepted flows. However, it seems natural to offer several different qualities of service over the CLS partition. One possible way is to use source shaping and forward error correction to differentiate amongst ongoing sessions, thus achieving application-specific quality differentiation in terms of end-to-end delay and packet loss probability [28].

3.2.3. Routing stability

An important concern of all end-to-end admission control schemes is the stability of the route for an accepted session. If the network routing changes, then we cannot guarantee that the new path will still conform to the maximum packet loss rate. However, we assume that route changes occur either due to a load balancing scheme or to topological changes. In the first case, as we are reducing the load of the links, there is no need to perform a new acceptance decision. In the second case the applications notice a disruption in the data flow. We require that a flow is terminated when the target loss probability is violated (a force majeure clause in the service specification). Hence a new call setup is required in order to reestablish the flow. Note that the session may continue at best effort in the meantime. The determination of a service violation is done by the admission control software in the host and can be enforced by simple monitoring schemes in the service provider access nodes [27]. The violation of the service needs to be specified in the service level agreement.
3.2.4. Thrashing and the impact of uncompliant sources

Another important issue to deal with is that of many flows probing the network at the same time. Breslau et al. [24] argue that high cumulative probe traffic can prevent further admissions and thus reduce the number of flows to well below the
possible link utilization. We have tried to quantify this possible thrashing behavior by simulations, and we have found that it takes synchronized probing amounting to a significant percentage of the link capacity (over 10%) to reduce the link utilization. Moreover, by using an exponential back-off strategy that randomizes the subsequent arrivals of probing flows, the thrashing behavior turns into a temporary situation that resolves itself after a few probing rounds.

There is one situation in which the admission control scheme would have problems accurately predicting the data loss: the case in which a certain number of admitted sessions transmit nothing at all for periods of time longer than the probe length. However, the multiplexing of independent sessions makes it highly unlikely for a large number of them to keep silent at exactly the same time. The silent sessions only have a noticeable effect on the prediction if they are synchronized and represent a large share of the link capacity (>10%). Further details about the thrashing effect and the impact of uncompliant sources can be found in [10].

3.3. Experimental prototype

One of the most important reasons for the lack of deployment of QoS architectures in today's Internet is the complexity required. Both IntServ and DiffServ schemes demand significant changes to the network nodes. One of the main advantages of PBAC is the ease of its implementation in current software or hardware routers. The main goal of PBAC is to provide reliable admission control with minimal support from the routers. With this goal in mind we have moved the required complexity to the end-nodes, following the end-to-end argument in systems design [29]. A test prototype has been implemented in Linux, and it provides the basic functions of the PBAC admission control protocol. The basic modules of the prototype are presented in the following subsections.

3.3.1. The queuing system

The implementation uses the simple 3-band priority scheduler of the Linux kernel to offer the capabilities of the double-queue scheme, reserving the third band in the scheduler for the low priority best-effort traffic. Minor modifications have been performed to allow controlling the size of the priority queues in the scheduler by adding two variables
to the pfifo_fast queue [30]. The queue honors the type-of-service flag of IP packets and inserts high-priority packets in band 0. FIFO applies to each of the three bands. The length of the total queue is obtained from the interface configuration, while the two extra variables control the length in packets of band 0 and band 1. More complete implementations of the queuing scheme would recognize the DSCP code point of the DiffServ architecture instead of the ToS field, which has been rendered void.

3.3.2. The admission control libraries

The PBAC protocol has been implemented as a set of libraries that both clients and servers call when opening a CLS UDP socket in order to perform the probing phase. These libraries can thus be linked on demand in the applications willing to support the controlled-load service class. The functions implemented in the libraries include creating the probe packets with the proper DSCP field (or type of service in our prototype), controlling the CLS flow table (see Section 3.3.3) and performing the admission decision. The signaling functions needed for the multicast and mobility support have been left out of the prototype implementation.

All the CLS packets carry an extra header that contains the following information: flow identity, peak rate of the flow, probe length in packets, packet type and sequence number. This information is required to update the CLS flow table that all end-nodes contain, as well as the root node of the multicast tree when using multicast, or the home agent for the mobility support. The packet type differentiates the following packets: probe and data packets, admission decision, change-of-rate packets for the multicast case, and extra padding packets to achieve the peak rate in case of mobile nodes changing attachment point. It is worth noting that the real-time transport protocol (RTP) [31] can be used to provide the required header information: we would just need to encode the probe length and probe peak rate in the CSRC field of the RTP header. Since RTP is implemented in most applications, using it as our real-time transport protocol over UDP only requires modifying our PBAC libraries, which encapsulate all the controlled-load service packets.
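As an illustration of the extra header, the sketch below packs and unpacks the five fields listed above. The field widths, byte order and type numbering are our assumptions rather than the prototype's actual wire format:

import struct

# Hypothetical layout: flow id (32 bits), peak rate in bit/s (32 bits),
# probe length in packets (16 bits), packet type (8 bits), sequence number (32 bits).
CLS_HEADER = struct.Struct("!IIHBI")

# Packet types distinguished by the libraries (numbering is ours).
PROBE, DATA, DECISION, INCREASE_OF_RATE, PADDING = range(5)

def pack_cls_header(flow_id, peak_rate, probe_len, pkt_type, seq):
    return CLS_HEADER.pack(flow_id, peak_rate, probe_len, pkt_type, seq)

def unpack_cls_header(buf):
    flow_id, peak_rate, probe_len, pkt_type, seq = CLS_HEADER.unpack(buf[:CLS_HEADER.size])
    return {"flow_id": flow_id, "peak_rate": peak_rate,
            "probe_len": probe_len, "type": pkt_type, "seq": seq}

header = pack_cls_header(flow_id=7, peak_rate=1_000_000, probe_len=3906, pkt_type=PROBE, seq=0)
print(unpack_cls_header(header))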
3.3.3. The CLS flow table

In order to provide support for mobility and multicast, we need the end nodes to maintain soft information about the CLS flows that they are transmitting or receiving. This information is stored in a table which contains the following information for each flow: source and destination IP address, source and destination port, flow identity, peak rate of the flow and length of the probing period, as well as whether the flow is in the probing phase or transmitting data. All the elements in the network which are PBAC-aware, i.e., end-nodes, multicast root nodes and home agents, contain this table and are able to check the status of each particular flow to perform the necessary functions to allow mobility or multicast transmission.

4. Application to multicast

In order to adapt the admission control for multicast we create two multicast groups: one for the probe process and another for the data session itself. The only requirement of the admission control is that the root node of the multicast tree must perform an admission decision for new senders, but the rest of the routers only need to have the priority-based queuing system to differentiate probes from data. All that the probing procedure assumes is a sender-based multicast routing protocol with a root node (rendezvous point). By having a separate multicast tree for probe packets, we only have probe traffic in the tree when there are receivers willing to join the multicast group. If there are no receivers in the probing phase, the probe traffic is only forwarded to the source of the tree. The described procedure also works for the emerging PIM-SSM [32] routing protocol. The creation of source-specific multicast trees both for probe and data packets offers complete control over the admission decision for each one of the different senders. Multicast receivers perform an admission decision for each of the flows from different senders independently, and there is no need to perform an admission decision for senders, as the root node is the sender itself. The following two subsections describe how PBAC allows multicast senders and receivers to perform admission control when joining a multicast group.

4.1. Multicast sender procedure

The simplest multicast case we consider is a single sender with a fixed peak rate. The sender only probes the path to the root node of the multicast tree. This probe ensures that the sender is able to
deliver the data to this node with the desired quality. When the root node receives the probe, it forwards the probe packets onto the multicast tree for probing, so that receivers which have joined can start measuring the loss. Meanwhile, the root node measures the probe-packet loss and makes an admission decision for the sender. If the decision is negative, the sender is rejected. In this case, the receivers will also see too high losses to join and will back off. The probing from the sender to the root continues for as long as the sender is active.

In the case of multiple multicast senders, there are some concerns to take into account. The first pertains to the sender policy for the group: there may be multiple senders for the session, which may be active simultaneously or sequentially. However, as far as the admission control is concerned, it is only the peak rate of the transmission that should be considered. Only when it increases (either because the sender changes its rate or because we have a new sender with a different rate) do we need to perform the probing of the receivers again. The second concern is how to define the admission policy for a new peak rate. There are cases in which an increased peak rate should only be allowed if all receivers are able to meet the desired quality of service. If this is the case, the failure of a single receiver to remain below the loss target for the new probe would be enough to reject the increase. There could also be a threshold on the number of receivers that are not able to properly receive the multicast traffic for the sender to join the group. How to define the threshold is an issue related to the particular policy of the multicast group, although enforcing complex admission decisions requires some signaling protocol support that is not included in the admission control.

If we have multiple senders that share the same reservation and do not transmit simultaneously, then a new sender only needs to perform a unicast probe to the root node. In case an increase of the peak rate is necessary, the sender notifies all the receivers by a special data packet (an 'increase of rate' packet; see Section 3.3.2 for a description of the signaling packets). This scheme also works when we have more than one simultaneous sender. If a new sender wishes to join the multicast group, it starts sending probe packets to the root node, until it receives the admission decision from it. Then, it sends the 'increase of rate' packet to the data group in order to notify all the receivers of the increased probe rate. The
interesting idea in this scheme is that it completely decouples the coordination of the multicast senders from the admission decision made for the receivers. The receivers do not care whether there is just one sender or many senders sending data to the multicast group, as long as they stay within the peak rate of the session. The coordination of the senders can be performed by some other means, for example a different multicast group or a central coordination node. However, this is a session layer issue that is not part of the PBAC protocol.

4.2. Multicast receiver procedure

The procedure for a multicast receiver to join the group is quite simple. The receiver first joins the probe group and measures the probe packet loss for a certain period of time. It chooses how long it will measure the probe packet loss, as a longer admission period gives a higher accuracy of the probe packet loss estimate and consequently also a lower blocking probability. Once the receiver has performed the estimation of the packet loss probability and compared it with the target loss, it leaves the probe group. If the admission decision is positive, the receiver immediately joins the data group, while in the case of a negative decision, it needs to back off for a period of time before trying to join again. The time the receiver has to back off can be stated in the service contract or in the announcement of the multicast group. It is important to note that probe packets are forwarded on the multicast probe tree only when there are receivers in the probing phase. In the case of a negative decision, the host should give up after three consecutive attempts, where a failure could also be the lack of probe packets (if the multicast session has ended).

All receivers must be able to recognize the special 'increase of rate' packets in the data group. These packets are sent by the multicast senders when the peak rate of the group needs to be increased. When a host receives one of these packets, it should immediately join the probe group, because the peak rate of the data group will be increased and a new admission decision is therefore necessary.
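The receiver procedure can be summarized in the following sketch; join_group, leave_group and measure_loss stand for IGMP/MLD group management and the loss measurement of Section 3.1, and are placeholders of ours rather than real APIs:

import random
import time

MAX_ATTEMPTS = 3   # give up after three consecutive failed attempts

def join_multicast_session(probe_group, data_group, probe_time, p_target,
                           mean_backoff, join_group, leave_group, measure_loss):
    """Multicast receiver procedure of Section 4.2 (illustrative sketch)."""
    for attempt in range(MAX_ATTEMPTS):
        join_group(probe_group)
        loss = measure_loss(probe_group, probe_time)   # None if no probes were seen
        leave_group(probe_group)
        if loss is not None and loss <= p_target:
            join_group(data_group)                     # admission granted
            return True
        # Failed attempt (loss too high, or the session may have ended):
        # back off for a randomized period before trying again.
        time.sleep(mean_backoff * random.expovariate(1.0))
    return False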
5. Host mobility support

The new advances in radio hardware have enabled a proliferation of IP-based wireless communication networks which allow mobility of the end nodes. End-node mobility requires a set of mechanisms to be implemented in both the end hosts and the network, which are generally called mobility management mechanisms. These mechanisms can be handled at different layers, but IP solutions are the most general and have been standardized by the IETF under the generic name of Mobile IP [33].

5.1. Mobile admission control through a home agent

Probe-based admission control works transparently over current Mobile IP standards, both for IPv4 and IPv6. When Mobile IP is employed via a home agent, the admission control protocol simply uses the bidirectional IP tunnel created between the home agent and the foreign agent to perform a new probe when the mobile node changes its network attachment point. In order to avoid data loss during the handoff, the data packets originally addressed to the home address are used as probe packets in the IP tunnel, sent at low priority and padded up to the required peak rate of the ongoing flow. The application of PBAC to mobile nodes assumes a stable radio channel and thus does not consider effects like fading or multi-path propagation, which could increase the losses.

When a mobile node visits a new network, it updates its home agent with the new IP address. Upon receiving the update, the home agent checks the information of the ongoing flows addressed to this particular mobile node in the CLS flow table. The table contains the peak rate of the ongoing flows, which was read from the original probe packets to the mobile node. The home agent then starts encapsulating the data received for the mobile node into new IP packets addressed to the foreign agent or the care-of address of the mobile node. The home agent sets the DSCP of the new IP packets to low priority and uses the original IP packets as the data of the new probe, adding extra IP packets as needed to achieve the desired peak rate for the probing. The mechanism to create the new probe packets resembles a token bucket where the original IP packets fill up the data part of the constant bit rate IP packets that the bucket generates; when the bucket is empty it generates extra padding IP packets which are marked accordingly (see Section 3.3.3). Upon receiving the first low priority data packet, the mobile node starts again to measure the loss ratio and performs the usual acceptance decision based on the estimated loss probability. If the decision is positive, the mobile node notifies the home
agent that it should set the DSCP code point to high priority and stop adding padding packets to match the peak rate. The admitted flow then proceeds at high priority through the IP tunnel from the home agent to the mobile node. In the case of a rejection, two different policies can be implemented: the flow can proceed as a best-effort low priority flow without any kind of QoS guarantees, or it can be terminated.

5.2. Mobile admission control through route optimization

Route optimization extensions to Mobile IP have been standardized for Mobile IPv6. These extensions allow the mobile node to notify its current care-of address to the nodes participating in the data transmission. The peer nodes have a binding cache which stores the care-of address of the mobile node, and they use this address as the destination for their IP packets. The actual home address of the mobile node is included in a special IPv6 routing header. Route optimization includes a mechanism to establish security parameters that authenticate the signaling messages being exchanged. The addition of the admission control on top of route optimization is straightforward, since we can use the binding update message to indicate the need for a new probing phase to the peer node. When the binding cache is updated, the peer node sends the data packets to the new care-of address as low priority IP packets and adds padding IP packets to match the peak rate, which is stored in the CLS flow table. The mobile node receives the low priority data packets and performs the admission decision as appropriate, notifying the peer node of the acceptance or rejection of the transferred flow. Again, how to react to a rejection, by either terminating the flow or continuing to transmit it as a best-effort flow, is a question of the policy of the communication taking place.
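A minimal sketch of the probe-padding step used during a handoff: queued data packets are reused as low-priority probe packets, and padding packets fill the remaining slots so that the tunnel carries a constant bit rate stream at the stored peak rate. The fixed packet size and scheduling granularity are simplifying assumptions of ours:

def probe_schedule(data_packets, peak_rate_bps, packet_size_bytes, probe_time_s):
    """Return (send_time, kind, payload) tuples for the low-priority probe stream."""
    interval = packet_size_bytes * 8 / peak_rate_bps        # probe packet spacing
    slots = int(probe_time_s / interval)
    queue = list(data_packets)
    stream = []
    for slot in range(slots):
        if queue:
            payload, kind = queue.pop(0), "data-as-probe"   # original packet reused
        else:
            payload, kind = b"\x00" * packet_size_bytes, "padding"
        # Every packet in this stream is sent with a low-priority DSCP so it
        # cannot disturb already admitted flows.
        stream.append((slot * interval, kind, payload))
    return stream

stream = probe_schedule([b"pkt1", b"pkt2"], peak_rate_bps=1_000_000,
                        packet_size_bytes=128, probe_time_s=0.01)
print(len(stream), stream[0][1], stream[-1][1])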
5.3. Mobile admission control when the mobile node is the sender

PBAC also works when the mobile node is the origin of the multimedia flow. The procedure for both the home agent and the route optimization cases is quite similar. When the mobile node enters the new cell, it starts transmitting the data packets at low priority and pads the data flow up to the required peak rate. The receiver identifies the data packets as low priority and starts measuring the packet loss ratio, performing an admission decision after the defined probing time. The low priority data packets will traverse the bidirectional IP tunnel or be directly routed to the peer node, depending on the Mobile IP scheme used.

6. Analytical model

One of the main concerns in all admission control schemes is how to properly determine the initial parameters for the system. In this section we provide an approximate analysis of PBAC that gives useful insights into the behavior of the admission control in a real situation. This analysis relates the following variables: queue size (K), acceptance threshold (Th), acceptance probability (P_a), packet loss probability (P_loss) and link utilization. By using this type of analysis, we are able to give a bound on the latter variables as a function of the acceptance threshold and the queue size in our system.

The analysis focuses on a single link, where new calls arrive according to a Poisson process of rate λ calls/s. We assume the average connection holding time to be 1/μ. Each new call probes the link with a CBR train of probe packets during a time t_p at the peak rate (R) of the call, with probe packets of s bits, which gives n_p probe packets per probing period (n_p = R · t_p / s). If we assume that the packet loss probability for subsequent probe packets is independent, due to the low peak-to-link rate ratio, then we can express the probability for a new connection to be accepted as

$$P_a = \sum_{i=n_{min}}^{n_p} \binom{n_p}{i} (P_{sc})^i (1 - P_{sc})^{n_p - i}, \qquad (1)$$

where i is the number of probe packets successfully transmitted and n_min is the minimum number of successful probe packets for a call to be accepted, which can be expressed as n_min = n_p - ⌊Th · n_p⌋. In (1), P_sc represents the probability of success of one probe packet. This probability needs to be computed for the two different queue types that PBAC considers. We develop here the analysis of the double-queue scheme, following the work in [22]. We consider a double queue with a low-priority packet buffer of one packet.
In this case, when a probe packet arrives at the system it will be successfully transmitted if the high-priority queue is empty, or if the residual duration of the busy period (f_rb) in the high-priority queue is less than T, T being the probe packet inter-arrival time (T = s/R):

$$P_{sc} = (1 - \rho) + \rho \int_0^T f_{rb}(t)\,dt = (1 - \rho) + \rho F_{rb}(T), \qquad (2)$$

with ρ = λ/μ. For the analysis to be tractable, we need to assume that the high-priority packet buffer is infinite. This assumption allows us to consider the high-priority queue as an M/D/1 system, in which the cumulative distribution function of the busy period (F_bp) can be obtained as:

$$F_{bp}(b) = \sum_{j=1}^{n} \frac{(j\rho)^{j-1} e^{-j\rho}}{j!}, \quad \text{where } n = \left\lfloor \frac{b}{E(s)} \right\rfloor, \qquad (3)$$

but we are interested in the remaining busy period in the queue, which is related to F_bp(t) as follows:

$$f_{rb}(t) = \frac{1 - F_{bp}(t)}{E[bp]}, \quad \text{with } E[bp] = \frac{1}{1 - \rho}. \qquad (4)$$

The residual busy period is a step function, as it depends only on the number of packets in the queue to be served. If we assume that the packet being served has 0.4 of its remaining service time, then the function will have values 0.4, 1.4, 2.4 service times and so forth. If we normalize over the service time, we obtain:

$$\int_0^T f_{rb}(t)\,dt = \sum_{i=0}^{\lfloor T \rfloor - 1} f_{rb}(i) + f_{rb}(\lfloor T \rfloor)\,(T - \lfloor T \rfloor). \qquad (5)$$

Finally, applying (3)–(5) to (2), we obtain the probability of success of one probe packet as a function of T and ρ. Once we have the probability of success of one probe packet, we just need to apply it to the binomial distribution in (1) to obtain the acceptance probability for a new call.

In order to compute the actual packet loss probability that the ongoing sessions will experience, we will use the formula developed in [34], which approximates the packet loss rate by:

$$R_{loss}(K, T, n) = R_{ps}(K, T, n) + R_{bs}(K, T, n), \qquad (6)$$

with

$$R_{ps}(K, T, n) = \sum_{m=1}^{n_T} \binom{n}{m} a^m (1 - a)^{n - m} Q_{mT}(K) \qquad (7)$$

and

$$R_{bs}(K, T, n) = \sum_{m=n_T + 1}^{n} \binom{n}{m} a^m (1 - a)^{n - m} \frac{m - T}{m}. \qquad (8)$$

In these formulas, R_ps and R_bs represent the packet-scale and burst-scale loss rates respectively, K represents the buffer size of the high-priority queue, and n is the number of ongoing calls producing on/off traffic with an activity factor of a. The ongoing calls produce a periodic stream of packets with period T in the on state. Finally, Q_mT(K) is the exact solution for the tail probability of an m·D/D/1 queue [34].
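The packet-scale part of the model, Eqs. (1)–(5), is easy to evaluate numerically. The sketch below is our own rendering under the stated assumptions (M/D/1 high-priority queue, independent probe losses), with time measured in units of the probe packet service time on the link:

from math import comb, exp, factorial, floor

def F_bp(b, rho):
    """CDF of the M/D/1 busy period, Eq. (3), with E(s) = 1."""
    n = floor(b)
    return sum((j * rho) ** (j - 1) * exp(-j * rho) / factorial(j)
               for j in range(1, n + 1))

def f_rb(i, rho):
    """Residual busy period density on [i, i+1), Eq. (4); E[bp] = 1/(1 - rho)."""
    return (1.0 - F_bp(i, rho)) * (1.0 - rho)

def P_sc(T, rho):
    """Probability that one probe packet gets through, Eqs. (2) and (5)."""
    integral = sum(f_rb(i, rho) for i in range(int(floor(T)))) \
               + f_rb(floor(T), rho) * (T - floor(T))
    return (1.0 - rho) + rho * integral

def P_a(T, rho, n_p, Th):
    """Acceptance probability of a new call, Eq. (1)."""
    psc = P_sc(T, rho)
    n_min = n_p - floor(Th * n_p)
    return sum(comb(n_p, i) * psc ** i * (1.0 - psc) ** (n_p - i)
               for i in range(n_min, n_p + 1))

# Example: 1 Mb/s calls on a 100 Mb/s link (T = 100 service times), a 2 s probe
# of 64-byte packets (about 3900 packets) and an acceptance threshold of 1e-2.
for load in (0.75, 0.80, 0.85):
    print(load, P_a(T=100, rho=load, n_p=3900, Th=1e-2))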
7. Performance evaluation

We have performed simulations with the network simulator NS-2 to validate our analytical model. The simulations contained sources with exponentially distributed on–off times, and peak rates (r_pr) of 100 kb/s over a 10 Mb/s link, or 1 Mb/s over a 100 Mb/s link. The on–off holding times had an average of 20 and 35.5 ms, respectively. Packet sizes were 64 bytes for the probe packets and 128 bytes for the data packets, while the probe length was always 2 s. We used a confidence interval for the admission decision of 95%. The simulation time was 2000 s and the average holding time for accepted flows (1/μ) was 50 s. Confidence intervals for each simulation are too small to be plotted in the figures. The flow arrival rate (λ) varied in each simulation to increase the offered load to the system. The queue used, unless noted otherwise, was a double queue with a low priority buffer of one packet per input port and a high priority buffer of one thousand packets, to simulate the infinite queue, or 10 packets for a finite queue.

To prove the behavior of our probing scheme we have used a simple one-bottleneck-link topology. The results for a multilink scenario obtained in [8] show that the highest loaded link dominates the behavior and that the scheme discriminates against flows with longer paths. The setup used to generate the figures contains a variable number of sources sending data to one router which implements the double-queue scheme. The output link connects to another router with the same queuing scheme. A traffic sink is
connected to the second router to collect measurements and act as the PBAC receiver (see Fig. 5).

[Fig. 5. A simple one-link topology network.]

7.1. The choice of queuing scheme

In order to understand the effect that the two proposed queuing schemes would have on our admission control, we have performed a set of simulations with both queues and compared the results. The tests were run by fixing a background load of ongoing sessions on the network and running over 1500 probing sessions of 2 s, with 100 kb/s and 1 Mb/s rates, for each load and queue type used. The background load was provided by exponential on–off sources with the characteristics described previously. The buffer sizes of the double queues were two packets for the probe queue (we had one single input port) and 10 packets for the data queue. The threshold queue had 10 packets of buffer space in total, with a threshold value of two packets. Fig. 6 shows the results. Other simulations with Pareto-distributed on–off sources gave the same type of behavior.

[Fig. 6. Probe packet loss with the two different queue schemes for exponential on–off sources of 100 kb/s and 1 Mb/s peak rate.]

From this figure, we notice that both probe loss curves behave similarly, the main difference being the magnitude of the loss that we get for the probing. As expected, the probe loss in the case of the threshold queue is slightly higher than the one we would achieve for the double-queue system, since the two buffer positions below the threshold are shared between high and low priority packets. The loss levels are, however, always in the same order of magnitude for all load levels we have tested. We conclude that the design decision of choosing one queuing system or the other does not affect the feasibility of our admission control, and both queuing systems could be used together in a network.

We have also tested how much we should increase the discard threshold on the single queue to achieve a probe loss similar to that of the double queue, in order to provide a suggestion on the values that should be considered for the router buffers. The results obtained for the exponential on–off background traffic can be seen in Fig. 7. We can see that with a threshold value of three or four packets, depending on the load of the link, we achieve similar probe losses as with the double queues. The other scenarios show similar behavior, requiring between three and five positions below the threshold to achieve losses comparable to the other queuing scheme. It is important to understand that the choice of short buffers is desirable to provide a service with loss as the main degradation, where delay jitter is so small that its removal is simple.

[Fig. 7. Probe losses for both queuing schemes with 3 and 4 packets as discard threshold and 100 kb/s peak rate.]
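The two queue variants compared above differ only in how an arriving probe packet is treated. The sketch below captures that difference with the queues reduced to packet counters; the buffer values mirror those used in the simulations (a 10-packet data buffer, a two-packet probe buffer and a discard threshold of three), while the class and method names are ours:

class DoubleQueue:
    """Separate buffers: data packets have strict priority over probe packets."""
    def __init__(self, data_buf=10, probe_buf=2):
        self.data_buf, self.probe_buf = data_buf, probe_buf
        self.data = self.probes = 0

    def arrive(self, is_probe):
        if is_probe:
            if self.probes >= self.probe_buf:
                return "dropped"
            self.probes += 1
        else:
            if self.data >= self.data_buf:
                return "dropped"
            self.data += 1
        return "queued"

    def serve(self):
        # Probe packets are served only when no data packet is waiting.
        if self.data:
            self.data -= 1
        elif self.probes:
            self.probes -= 1


class ThresholdQueue:
    """Single buffer: probe packets are accepted only below the discard threshold."""
    def __init__(self, buf=10, probe_threshold=3):
        self.buf, self.threshold = buf, probe_threshold
        self.backlog = 0

    def arrive(self, is_probe):
        limit = self.threshold if is_probe else self.buf
        if self.backlog >= limit:
            return "dropped"
        self.backlog += 1
        return "queued"

    def serve(self):
        if self.backlog:
            self.backlog -= 1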
7.2. Evaluation of the admission control

We have used our experimental prototype to validate the simulations in our lab environment. All the experimental results were generated by using a Spirent SmartBits 6000B traffic generator to generate the background traffic with the same
characteristics as the traffic in the simulations. The probing sources have been generated by using a simple software UDP traffic generator (http://rude.sourceforge.net), to which we have added the needed probing functions from the PBAC library. The topology used in the laboratory implements that of the simulations, with one bottleneck link. The routers are PCs running Debian Linux with a 2.6.9 kernel providing the special double-queue system, with 100 Mb/s Ethernet interfaces.

Our first evaluation shows the performance of our prototype admission control working in the testbed. Fig. 8 shows the probe packet loss and data-packet loss of a test run of 2000 s in our lab. For the sake of clarity we show the period from 1000 to 2000 s, when the system has reached a steady state. The offered load was 105 Mb/s, with sessions of 1 Mb/s peak rate. We loaded the network with 50 Mb/s of admitted background traffic and then used 150 sessions arriving with λ = 3 to test the admission control algorithm. As the figure shows, packet losses during the 2 s probing phase were distributed between 0.2% and 1.6%, while the loss rate for admitted flows was almost always less than 0.2%. Our target loss rate was 1% and the blocking rate for the 1000 s in the figure was 20.73%. Fig. 9 shows a detail of the period between 1030 and 1070 s. In the figure it can be seen that some flows are admitted, since their probe packet loss rate is under the admission threshold, while their corresponding session suffers a packet loss higher than 1%. The error rate of the admission control was in total 1.16%, which is the percentage of flows that were admitted and failed to bound the packet loss under our admission threshold.
[Fig. 8. Probe and data-packet loss rates for 1000 s of testbed experiment.]

[Fig. 9. Detailed view of probe and data packet loss rates for 40 s of an experimental run.]
To evaluate the accuracy of our analytical study we first consider the probability of probe packet loss, the probability of flow acceptance and the network utilization at a given level of offered load. Then we evaluate the maximum link utilization, the data loss probability at a given link utilization, and the relation between probe and data-packet loss probabilities.

Fig. 10 shows the comparison of the probe packet loss probability obtained by simulations and the one given by our mathematical analysis. Note that we have not limited the probe packet loss to an admission threshold value, since we want to obtain the relation between probe packet loss and utilization up to the full link capacity. When the admission threshold is active the utilization would not reach 100%, as Fig. 13 illustrates. The figure compares the analytical results, with the assumption of an infinite high-priority buffer, to simulation results with 1000 and 10 buffer positions and for the two different peak rate values.
[Fig. 10. Probe packet loss probability for a double-queue system.]

[Fig. 11. Acceptance probability for a new flow as a function of the load on the system for 100 kb/s call peak rate, with an acceptance threshold of 10^-2.]
The figure also contains the curve obtained from the experimental results for the 1 Mb/s peak rate case. Note that the analytical results do not depend on the actual peak rates but only on the ratio of the peak rate to the link rate. The figure shows a close match between the analysis, the simulation and the experimental results. The analysis with the assumption of an infinite high-priority buffer gives an upper bound for the losses in the finite buffer case. This is due to the fact that more data packets are lost in the high-priority queue when we have a buffer size of 10 packets, thus reducing slightly the average remaining busy period, which in turn increases the probability of success of a probe packet (see Eq. (2)). The different curves suggest that the probe loss increases exponentially as the link utilization increases, which can then offer a sharp transition in the acceptance probability.

Figs. 11 and 12 show the acceptance probabilities of a new call with an acceptance threshold of 10^-2, with call peak rates of 100 kb/s and 1 Mb/s. In both cases the analytical curve intersects the curves obtained by simulation and in the experimental evaluation, due to the assumption of independent probe loss in the analysis. The values of the curves with a finite buffer size of K = 10 are always slightly above the infinite case, for the reason explained in the previous paragraph. The transient period is shorter in the case of high peak rates, as the number of probe packets transmitted is higher and the loss estimation more accurate. Table 1 summarizes the relationship for the curves for each of the two peak rates.
Fig. 11. Acceptance probability for a new flow as a function of the load on the system for 100 kb/s call peak rate, with an acceptance threshold of 10⁻².

Fig. 12. Acceptance probability for a new flow as a function of the load on the system for 1 Mb/s call peak rate, with an acceptance threshold of 10⁻².
From the values in the table it can be seen that the model gives an upper bound on the utilization at an acceptance probability of 95%, approximately 3% higher than the simulation results, while it gives a lower bound for the 5% case, with a difference of less than 2% of the link utilization.

Fig. 13 shows the link utilization achieved as a function of the offered load, for an admission threshold of 10⁻². The figure illustrates that the admission control scheme leads to a stable system: the utilization follows the offered load up to a load level of 0.75. Beyond this point the mathematical analysis slightly overestimates the utilization of the simulated and experimental systems, due to the sharper change of the flow admission probability; towards high loads the analysis might instead underestimate the achievable utilization, by up to 2% of the link capacity, for the same reason.

Data-packet loss probabilities are shown in Fig. 14 as a function of the link utilization for different high-priority buffer sizes, for streams with a peak rate of 1 Mb/s. The results show that the data-packet loss probability grows exponentially with increasing link utilization. The mathematical model gives accurate results for the small buffer sizes of interest and, as expected, does not give correct results for large buffers, since it does not consider burst-scale buffering. As for Fig. 10, we have not limited the packet loss to an admission threshold value, since we want to obtain the relation between packet loss and utilization up to the full link capacity; as stated before, when the admission threshold is active the utilization does not reach 100%, as Fig. 13 illustrates.
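As a qualitative cross-check of the buffer-size and priority effects discussed around Figs. 10 and 14, one can simulate a toy double-queue link in a few lines of Python: data packets enter a short high-priority queue of K packets, and probe packets a one-packet low-priority queue that is served only when the data queue is empty. Poisson packet arrivals, the slotted service and the single-packet probe buffer are simplifying assumptions of this sketch, not the source model or queue dimensioning used in the paper.

# Toy slotted simulation of a double-queue link (sketch with assumed
# parameters, not the paper's model): one packet is served per slot, data
# packets use a K-packet high-priority queue, probe packets a one-packet
# low-priority queue served only when the data queue is empty.
import math
import random

def poisson(rng, lam):
    """Number of arrivals in one slot (Knuth's method)."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def simulate(data_load, probe_load, K=10, slots=500_000, seed=1):
    rng = random.Random(seed)
    hp = lp = 0                                    # queue occupancies
    data_arr = data_lost = probe_arr = probe_lost = 0
    for _ in range(slots):
        for _ in range(poisson(rng, data_load)):   # data arrivals
            data_arr += 1
            if hp < K:
                hp += 1
            else:
                data_lost += 1
        for _ in range(poisson(rng, probe_load)):  # probe arrivals
            probe_arr += 1
            if lp < 1:
                lp += 1
            else:
                probe_lost += 1
        if hp:                                     # serve data first
            hp -= 1
        elif lp:                                   # probes only on idle slots
            lp -= 1
    return data_lost / max(data_arr, 1), probe_lost / max(probe_arr, 1)

for rho in (0.7, 0.8, 0.9):
    d, p = simulate(rho, 0.02, K=10)
    print(f"data load {rho}: data loss {d:.5f}, probe loss {p:.5f}")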
Table 1
Link utilization for acceptance probabilities of 5% and 95%

              rpr = 100 kb/s                                rpr = 1 Mb/s
Pa      Model   Simulation K=1000   Simulation K=10   Model   Simulation K=1000   Simulation K=10
0.95    0.769   0.732               0.735             0.789   0.753               0.759
0.05    0.825   0.832               0.837             0.808   0.821               0.827
Fig. 13. Accepted versus offered load for 1 Mb/s call peak rate, with an acceptance threshold of 10⁻².

Fig. 14. Packet loss probability for accepted sessions, for different high-priority queue buffer sizes.
Finally, based on the previous results, we evaluate in Fig. 15 the connection between probe and data loss for admitted flows. The figure shows the data-packet loss as a function of the probe loss experienced during the probing phase of the ongoing flows. The probe loss is always higher than the data loss, independently of the utilization level; in this case the probe loss exceeds the data loss by over half an order of magnitude.

Fig. 15. Session packet loss versus probe packet loss for 100 kb/s and 1 Mb/s call peak rates.
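A one-line calculation makes the practical value of this margin explicit: if the probe loss stays at least half an order of magnitude above the data loss, a probe acceptance threshold of 10⁻² corresponds to a data-loss level of roughly 3 × 10⁻³ for admitted flows. The factor used below is this empirically observed margin, not an analytical bound from the model.

# Rough data-loss level implied by the observed probe/data loss margin
# (illustration only; the half-order-of-magnitude factor is the empirical
# margin reported above, not an analytical result).
probe_threshold = 1e-2           # acceptance threshold on probe loss
margin = 10 ** 0.5               # probe loss exceeds data loss by at least this factor
print(probe_threshold / margin)  # ~3.2e-3, rough data-loss level for admitted flows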
As stated before, the whole admission procedure of PBAC relies on a high degree of multiplexing [8,10], in order to obtain a smooth statistical behavior of the ongoing traffic. For lower levels of multiplexing, the effect of thrashing on the results has to be taken into account: it reduces the acceptance probability and the link utilization (see [10]), as well as the reliability of the measurement of the accepted traffic. Other traffic sources show the same type of behavior with respect to the relation between probe and data-packet loss, as shown in [10]. For multiplexing levels below 10%, the various sources used to model real-time communications (i.e., Poisson sources, exponentially distributed on–off sources and traffic traces) show the same behavior, so we expect our model to work well for all these sources.

8. Conclusions

In this paper we propose an admission control method based on probing that provides a well-defined upper limit on the packet loss probability of a flow. The scheme offers service quality to real-time applications by means of admission control. It keeps the load-control functionality outside the network, following the end-to-end principle of systems design, and requires only some form of priority queuing in the core routers. As only the end nodes, sender and receiver, take an active part in the admission control process, PBAC is able to provide per-flow QoS guarantees in the current stateless Internet architecture.

We offer a description and evaluation of an experimental prototype realized in commodity PC-based routers. The prototype implements the core functionality of PBAC and serves as a proof of concept that the admission control scheme can easily be deployed in today's Internet.

Our admission control scheme supports multicast. Multicast sources can share a reserved bit rate by time division, transmitting one at a time, or by rate splitting (a share each). The scheme offers different blocking probabilities for different senders and receivers, depending on their position in the multicast tree. PBAC also supports host mobility by using the current standards for Mobile IP (both for IPv4 and IPv6), with little added complexity in the mobility-enabling agents and mobile nodes.

We present an approximate analytical model which captures useful relationships among probe packet loss, packet loss of ongoing sessions, acceptance probability, acceptance threshold and buffer sizes. These relationships provide strict upper bounds which can be used to dimension the network parameters. Simulation and experimental results validate the model. Our results show a clear relationship between the probe packet loss and the expected session loss, thus allowing admission control based solely on end-to-end loss measurements. The analytical results, verified by simulations and the experimental prototype, show that PBAC achieves high link utilization without overload and with a clear upper bound on the packet loss probability. Consequently, probe-based admission control provides a reliable and efficient solution for QoS provisioning for delay- and loss-sensitive applications, with little support needed in the routers.
Acknowledgements

The authors thank Dr. Viktória Fodor for her collaboration on the work described herein, as well as the anonymous reviewers who provided valuable comments that helped to improve the original manuscript.
References

[1] International Telecommunication Union (ITU), Transmission systems and media, general recommendation on the transmission quality for an entire international telephone connection; one way transmission time (recommendation G.114), Technical report, Telecommunication Standardization Sector of ITU, Geneva, Switzerland, March 1993.
[2] R. Braden, S. Clark, S. Shenker, Integrated services in the internet architecture, RFC 1633, IETF, June 1994.
[3] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, W. Weiss, An architecture for differentiated services, RFC 2475, IETF, December 1998.
[4] S. Shenker, C. Partridge, R. Guerin, Specification of guaranteed quality of service, RFC 2212, IETF, September 1997.
[5] J. Wroclawski, Specification of the controlled-load network element service, RFC 2211, IETF, September 1997.
[6] V. Jacobson, K. Nichols, K. Poduri, An expedited forwarding PHB, RFC 2598, IETF, June 1999.
[7] J. Heinanen, F. Baker, W. Weiss, J. Wroclawski, Assured forwarding PHB group, RFC 2597, IETF, June 1999.
[8] V. Fodor (née Elek), G. Karlsson, R. Rönngren, Admission control based on end-to-end measurements, in: Proceedings of the 19th Infocom, Tel Aviv, Israel, March 2000, IEEE, 2000, pp. 623–630.
[9] G. Karlsson, F. Orava, The DIY approach to QoS, in: Proceedings of the IWQoS 99, LNCS, London, UK, May 1999, Springer, 1999, pp. 6–8.
[10] I. Más, G. Karlsson, PBAC: probe-based admission control, in: Proceedings of the QoFIS 2001, September 2001, LNCS (Coimbra, Portugal), vol. 2156, Springer, 2001, pp. 97–109.
[11] I. Más, V. Fodor, G. Karlsson, Probe-based admission control for multicast, in: Proceedings of the 10th IWQoS (Miami Beach, FL, USA), May 2002, IEEE, 2002, pp. 99–105.
[12] I. Más, V. Fodor, G. Karlsson, The performance of end-point admission control based on packet loss, in: Proceedings of the QoFIS 2003, October 2003, LNCS (Stockholm, Sweden), vol. 2856, Springer, 2003.
[13] N. McKeown, D. Wischik, Making router buffers much smaller, Computer Communication Review 35 (3) (2005) 75–78.
[14] C. Cetinkaya, E. Knightly, Egress admission control, in: Proceedings of the 19th Infocom, Tel Aviv, Israel, March 2000, IEEE, 2000.
[15] L. Breslau, S. Jamin, S. Shenker, Comments on the performance of measurement-based admission control algorithms, in: Proceedings of the 19th Infocom, Tel Aviv, Israel, March 2000, IEEE, 2000, pp. 1233–1242.
[16] G. Karlsson, Providing quality for internet video services, in: Proceedings of the CNIT/IEEE ITWoDC 98 (Ischia, Italy), pp. 133–146, September 1998.
[17] G. Bianchi, N. Blefari-Melazzi, M. Femminella, Per-flow QoS support over a stateless differentiated services IP domain, Computer Networks 40 (September) (2002) 73–87.
[18] K. Ramakrishnan, S. Floyd, A proposal to add explicit congestion notification (ECN) to IP, RFC 2481, IETF, January 1999.
[19] F.P. Kelly, P.B. Key, S. Zachary, Distributed admission control, IEEE Journal on Selected Areas in Communications 18 (12) (2000) 2617–2628.
[20] T. Kelly, An ECN probe-based connection acceptance control, ACM Computer Communication Review 31 (July) (2001) 14–25.
[21] G. Bianchi, A. Capone, C. Petrioli, Throughput analysis of end-to-end measurement-based admission control in IP, in: Proceedings of the 19th Infocom, Tel Aviv, Israel, March 2000, IEEE, 2000, pp. 1461–1470.
[22] G. Bianchi, A. Capone, C. Petrioli, Packet management techniques for measurement based end-to-end admission control in IP networks, Journal of Communications and Networks 2 (June) (2000) 147–156.
[23] G. Bianchi, F. Borgonovo, A. Capone, C. Petrioli, Endpoint admission control with delay variation measurements for QoS in IP networks, ACM Computer Communication Review 32 (April) (2002) 61–69.
[24] L. Breslau, E.W. Knightly, S. Shenker, I. Stoica, H. Zhang, Endpoint admission control: architectural issues and performance, in: Computer Communication Review – Proceedings of the Sigcomm 2000, vol. 30 (Stockholm, Sweden), ACM, August/September 2000, pp. 57–69.
[25] J.W. Roberts, U. Mocci, J. Virtamo (Eds.), Broadband Network Teletraffic – Final Report of Action COST 242, LNCS, vol. 1155, Springer, 1996.
[26] W.-C. Lau, S.-Q. Li, Traffic analysis in large-scale high-speed integrated networks: validation of nodal decomposition approach, in: Proceedings of the 12th Infocom, San Francisco, California, March/April 1993, IEEE, 1993, pp. 1320–1329.
[27] I. Más, J. Brage, G. Karlsson, Lightweight monitoring of edge-based admission control, in: Proceedings of the IEEE 2006 International Zurich Seminar on Communications, February 2006.
[28] G. Dán, V. Fodor, Quality differentiation with source shaping and forward error correction, in: Proceedings of the MIPS'03 (Naples, Italy), pp. 222–233, November 2003.
[29] J.H. Saltzer, D.P. Reed, D.D. Clark, End-to-end arguments in system design, ACM Transactions on Computer Systems 2 (November) (1984) 277–288.
[30] Linux Advanced Routing and Traffic Control.
[31] H. Schulzrinne, S. Casner, R. Frederick, V. Jacobson, RTP: a transport protocol for real-time applications, RFC 1889, IETF, January 1996.
[32] H. Holbrook, B. Cain, Source-specific multicast for IP, RFC 4607, IETF, August 2006.
[33] C. Perkins, IP mobility support for IPv4, RFC 3344, IETF, August 2002.
[34] J.W. Roberts (Ed.), COST 224: performance evaluation and design of multiservice networks, vol. EUR 14152 EN of Information technologies and sciences, Commission of the European Communities, 1992.
Ignacio Más received his M.Sc. degrees in electrical engineering from the Royal Institute of Technology (Stockholm, Sweden) and from Universidad Politecnica de Madrid (Spain). He also obtained his Technology Licentiate degree from the Royal Institute of Technology. He has authored and co-authored several papers on admission control and quality of service on the Internet. He now works in the Service Layer Technology department of Ericsson Research in Stockholm. His research interests include admission control, quality of service, multimedia transport, signaling and network security.
Gunnar Karlsson has been a professor in the School of Electrical Engineering of KTH, the Royal Institute of Technology, since 1998; he is the director of the Laboratory for Communication Networks. He has previously worked for the IBM Zurich Research Laboratory and the Swedish Institute of Computer Science (SICS). His Ph.D. is from Columbia University (1989), New York, and his M.Sc. from Chalmers University of Technology in Gothenburg, Sweden (1983). He has been a visiting professor at EPFL in Lausanne, Switzerland, at the Helsinki University of Technology in Finland, and at ETH Zurich in Switzerland. His current research relates to quality of service, wireless LAN developments and wireless content distribution.