Computer Communications 29 (2006) 812–819 www.elsevier.com/locate/comcom

Service differentiation with MEDF scheduling in TCP/IP networks

Michael Menth, Ruediger Martin*
Department of Distributed Systems, Institute of Computer Science, University of Würzburg, Am Hubland, D-97074 Würzburg, Germany

Available online 9 September 2005

Abstract

The Differentiated Services architecture implements appropriate Per Hop Behaviors (PHBs) for service differentiation to achieve Quality of Service (QoS) in Next Generation Networks (NGNs). In this paper, we review several buffer management and scheduling algorithms that may be applied in this context. The major part of this work focuses on the interaction between TCP and the Modified Earliest Deadline First (MEDF) scheduling algorithm. It is an appealing option for service differentiation because it relies on a single parameter only and because this parameter can be configured independently of the current share of high priority traffic in the network. This is contrary to many other scheduling disciplines or buffer management strategies. The unavailability of this information in today's Internet makes MEDF an interesting candidate for commercial application. © 2005 Elsevier B.V. All rights reserved.

Keywords: Scheduling disciplines; Buffer management; Differentiated Services

1. Introduction

Current research for multi-service Next Generation Networks (NGNs) focuses, amongst other aspects, on the provisioning of Quality of Service (QoS) for different service classes. The Differentiated Services architecture [1,2] implements appropriate Per Hop Behaviors (PHBs) for different Transport Service Classes (TSCs) to achieve QoS. Packets from different TSCs compete for buffer space and forwarding capacity in routers. These resources are managed by mechanisms that decide whether a new packet can still be accepted for forwarding in an overload situation and control the order in which accepted packets are dequeued and sent. The first issue is commonly referred to as buffer management and the second one as scheduling. Common examples and recommendations [3,4] to enforce appropriate PHBs are scheduling algorithms like Weighted Round Robin (WRR), Class Based Queueing (CBQ) [5], and Deficit Round-Robin (DRR) [6]. They assign a 'fair share' qi of the forwarding capacity to different TSCs. Hence, the expected traffic rates must be known in advance to configure the algorithms appropriately. This information may come from admission

* Corresponding author. Tel.: +49 931 8886651; fax: +49 931 8886632. E-mail address: [email protected] (R. Martin).

0140-3664/$ - see front matter © 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.comcom.2005.08.003

control (AC) entities that track high priority flows together with their demands. If AC is not deployed in the network, the fractions of the individual TSCs may be estimated such that the QoS can be enforced for their flows. However, if the real traffic rate of a specific TSC significantly exceeds its estimate, the sending of the packets is impeded instead of expedited since the assigned forwarding capacity is too low. Thus, the above schedulers may lead to QoS degradation if the required a priori information about expected traffic rates is missing. The authors of [7] introduced the Modified Earliest Deadline First (MEDF) algorithm for scheduling purposes. Its objective is to keep the waiting time in the buffer shorter for high priority packets than for low priority traffic, whereby the maximum difference of comparable waiting times is controlled by a time offset parameter. Thus, MEDF can be configured independently of the expected mix of high and low priority traffic and is, therefore, an interesting candidate for use in commercial routers. In this paper, we focus on the impact of MEDF scheduling on the throughput of different TSCs in TCP/IP networks in concert with different buffer management mechanisms. Scheduling mechanisms influence the round trip time (RTT) and the throughput, while buffer management mechanisms affect packet losses, which throttle the congestion window sizes of TCP connections and thereby their sending rates. We first show by an experiment on a


single link that the prioritization of high priority traffic over low priority traffic is independent of the traffic mix and study the influence of the offset parameter. Then, we extend our study from a single link to several hops. We investigate the interaction with different buffer management strategies, and finally, we compare the scheduling discipline MEDF and the buffer management strategy RED regarding their service differentiation capability. This work is structured as follows. Section 2 gives an overview of popular buffer management and scheduling algorithms. In Section 3 we present the setup of our simulations, illustrate their results, and discuss them in detail. Section 4 concludes this work with a short summary.

2. Buffer management and scheduling algorithms

Network congestion occurs when the forwarding capacity of a router does not suffice to send the packets arriving for a specific interface. If the interface is busy, arriving packets are queued in the buffer, and if the buffer is full, they are discarded. This leads to packet delay and packet loss, which both degrade the QoS of the traffic transport. The introduction of various transport service classes (TSCs) can mitigate this problem for a subset of the flows. Packets belonging to high priority flows receive preferential treatment in the routers compared to packets belonging to low priority flows. Firstly, buffer space is reserved to store only high priority packets if congestion occurs to prevent excessive packet losses. Secondly, high priority packets are dequeued earlier from the buffer than low priority packets to keep their queuing delay low. Buffer sizes and forwarding capacity are limited resources of a router. For their controlled usage in a network with multiple TSCs, buffer management algorithms decide whether an arriving packet may be stored or discarded depending on the TSC it belongs to. Then, a scheduling algorithm creates the order in which enqueued packets are dequeued and sent. The combination of the buffer management and scheduling strategy determines the degree to which high priority packets are expedited and low priority packets are impeded. In the following, we review several buffer management and scheduling algorithms that are discussed in the literature.

2.1. Buffer management algorithms


We present four different buffer management strategies. They decide which packets are accepted or dropped in overload situations.

2.1.1. Full buffer sharing (FBS)

FBS accepts packets in the buffer as long as free buffer space is available. Thus, there is no prioritization among different TSCs. Unless mentioned otherwise, we use FBS as the default buffer management strategy in our simulations.

2.1.2. Static buffer sharing (SBS)

A simple and intuitive buffer management strategy is partitioning the buffer space among the TSCs such that packets of a specific TSC are queued and lost independently of packets from other TSCs. It can be achieved as follows. The overall buffer capacity is given by C, and for each traffic class TSCi, 0 ≤ i < n, there is a threshold Ti such that a new packet p is discarded if its size GetSize(p) together with the current buffer occupation Bi exceeds the threshold, i.e. if Bi + GetSize(p) > Ti holds. The thresholds Ti are configured such that Σ0≤i<n Ti ≤ C is met. This is neither a practical nor an economic strategy for the following two reasons. Firstly, the expected TSC-specific traffic rates are mostly unknown, but they are required for a reasonable partitioning of the buffer space to control the TSC-specific packet loss probabilities. Secondly, the introduction of buffer space partitions diminishes economy of scale. As a result, SBS requires more buffer space than FBS to achieve comparable packet loss probabilities. Therefore, we do not apply SBS in our study.

2.1.3. Priority buffer sharing (PBS)

PBS defines TSC-specific thresholds Ti such that T0 ≤ … ≤ Ti ≤ … ≤ Tn−1 ≤ C holds, with TSC0 having the lowest priority and TSCn−1 the highest. In contrast to SBS, these thresholds pertain to the buffer occupation by all packets with the same or lower priority. Thus, the amount of usable buffer space decreases with decreasing priority level. Fig. 1 shows the maximum allowed buffer occupation for each of the three TSCs. It also motivates the name 'Russian Dolls' bandwidth constraints model (RDM), which was suggested by the IETF traffic engineering working group (TEWG) in [8] for Diffserv-aware MPLS Traffic Engineering. To clarify the operation of PBS, we describe its decision for accepting a packet in Algorithm 1.

Fig. 1. Priority buffer sharing (PBS): illustration of the allowed buffer occupations for each of the three TSCs.

First, the TSC of packet p is derived by GetTSC(p). Then, the amount of overall buffer space currently occupied by all packets of TSCs with priority i or lower is calculated. If this sum does not exceed Ti, the packet is accepted and stored in the buffer; otherwise it is dropped. On the one hand, packets of high priority TSCs can occupy the entire buffer such that the packets of low priority TSCs find their share occupied. On the other hand, this is not possible vice versa since high priority TSCs have a guaranteed


amount of buffer space that cannot be taken by low priority packets.

Algorithm 1. ENQUEUE operation for priority buffer sharing (PBS).
Require: packet p, buffer occupation Bi and buffer threshold Ti for 0 ≤ i < n
  i = GetTSC(p)
  if GetSize(p) + Σ0≤j≤i Bj ≤ Ti then {enough buffer space available}
    Store(p)
  else {space limit exceeded}
    Drop(p)
  end if
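Algorithm 1 can be rendered as runnable code. The following Python sketch is illustrative only; the class and method names are our own, not from the paper.

```python
# Sketch of the PBS ENQUEUE decision (Algorithm 1); names are illustrative.
class PBSBuffer:
    def __init__(self, thresholds):
        # thresholds[i] = T_i in bytes, with T_0 <= ... <= T_{n-1} <= C
        self.thresholds = thresholds
        self.occupancy = [0] * len(thresholds)  # B_i per TSC, in bytes

    def enqueue(self, tsc, size):
        """Accept a packet of class `tsc` with byte length `size`, or drop it."""
        # Buffer space occupied by all packets of priority tsc or lower.
        occupied = sum(self.occupancy[: tsc + 1])
        if size + occupied <= self.thresholds[tsc]:
            self.occupancy[tsc] += size  # enough buffer space: store
            return True
        return False  # space limit exceeded: drop


buf = PBSBuffer(thresholds=[1000, 2000])  # TSC0: low priority, TSC1: high
buf.enqueue(0, 800)   # accepted: 800 <= 1000
buf.enqueue(0, 300)   # dropped: 800 + 300 > 1000
buf.enqueue(1, 1100)  # accepted: high priority may use space up to T_1
```

Note how the thresholds nest in the 'Russian Dolls' sense: the high priority class may push the total occupation up to T1, while the low priority class is confined to T0.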

2.1.4. Random early detection (RED)

The RED gateway was originally presented in [9], and in [10] it was recommended for deployment in the Internet. It was designed to detect incipient congestion by measuring the average buffer occupation in routers and to take appropriate countermeasures. That is, packets are dropped or marked to indicate congestion to senders. Thus, RED achieves congestion-state-dependent random packet drops and became a synonym for this buffer management strategy. RED calculates the average queue length using an exponentially weighted moving average (EWMA) to filter sudden increases of buffer occupation due to traffic bursts. Hence, the average queue length avg is updated every time a packet arrives by avg = (1 − wq) · avg + wq · q, with q being the current buffer occupation and 0 < wq < 1 being the weight parameter of the EWMA. Several improvements have been suggested, e.g. in [11] and [12], to achieve fairness in the presence of non-adaptive connections and to introduce TSC priorities. We apply RED in the following manner for two traffic classes. The average queue length avg determines a RED packet loss probability

  pl = 0                                            if avg ≤ minth
  pl = min(1, (avg − minth) / (maxth − minth))      if avg > minth      (1)

based on the two threshold parameters minth and maxth. The packet loss probability increases linearly from zero to one for an increasing average queue length avg between these values [9]. This is illustrated in Fig. 8 in Section 3. We apply RED dropping only to low priority packets. Hence, high priority packets are accepted as long as free buffer space is available. The original mechanism does not distinguish traffic classes. The average queue length avg, from which the loss probability (for low priority packets in case of prioritization) is derived, is calculated on the basis of the queue length that comprises all packets, i.e. both low and high priority packets here. Then, the RED algorithm leads to

high drop probabilities and starvation for low priority traffic if the buffers are mostly occupied by high priority packets. Therefore, we calculate the average queue length avg on the basis of low priority packets only.

2.2. Scheduling algorithms

The buffer management decides whether a packet is accepted for forwarding or dropped. Once a packet is stored in the buffer, it will be sent. The scheduling algorithm creates the order in which the packets in the buffer are dequeued and sent. This decision influences the delay and thereby the round trip time (RTT), such that the throughput of TCP flows is affected. We present FIFO and static priority (SP) for comparison with MEDF because they are simple and can be operated without knowledge of the expected TSC-specific traffic shares.

2.2.1. First-in-first-out (FIFO)

FIFO dequeues the packets in the same order in which they are enqueued. Hence, packets of all TSCs are handled in the same way. Its implementation is very simple and, therefore, this trivial scheduling algorithm is widespread. Thus, we use it as the performance baseline in our comparisons.

2.2.2. Static priority (SP)

The SP concept works with a fixed number of TSCs. Their packets are stored in TSC-specific queues, and the scheduler chooses the front packet of the non-empty queue with the highest priority. Thus, low priority packets are only served if no high priority packets are waiting. Within a TSC, all packets are served in FIFO manner.

2.2.3. Modified earliest deadline first (MEDF)

In the context of the UMTS Terrestrial Radio Access Network (UTRAN), the authors of [7] introduce a modified version of the Earliest Deadline First (EDF) algorithm and call it Modified Earliest Deadline First (MEDF). With EDF, deadlines are assigned to packets, and the EDF scheduler serves the packets in the order of ascending deadlines. As the deadlines may be set arbitrarily, EDF scheduling needs intelligent data structures and sorting. This makes its implementation complex and its runtime unpredictable for hardware applications. In contrast, MEDF is much easier to implement because sorting is not required. Like SP, it supports only a fixed number n of different TSCs, and packets are stored in TSC-specific queues in FIFO manner. They are stamped with a modified deadline, i.e. their arrival time plus a TSC-specific offset Mi, 0 ≤ i < n. The MEDF scheduler selects among the front packets of all queues the one with the lowest deadline for transmission. For only two TSCs, this is a choice between two packets, which is much simpler than sorting packets according to ascending deadlines for EDF. The difference |Mi − Mj| between two TSCs i and j is a relative delay advantage that influences the behavior of the scheduler. We have two TSCs only in our


simulations. We set the MEDF offset parameter for high priority traffic to Mhigh = 0 and the offset parameter for low priority traffic to Mlow ∈ {0 s, 0.1 s, 0.5 s, 1.0 s, 1.5 s}. Thus, high priority data packets receive a relative delay advantage of Mlow. Note that the queues of all scheduling algorithms can be implemented on the basis of a shared buffer such that they can be combined with every presented buffer management strategy. We use three buffer management algorithms in our performance evaluation: FBS, PBS, and RED.
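The MEDF selection rule described above can be sketched in a few lines of Python. The class and variable names are illustrative, not taken from [7]; with only two queues, the minimum over the front deadlines reduces to a single comparison, which is what makes MEDF hardware-friendly.

```python
import collections

# Sketch of MEDF over n TSC-specific FIFO queues; names are illustrative.
class MEDFScheduler:
    def __init__(self, offsets):
        # offsets[i] = M_i, the per-TSC deadline offset in seconds
        self.offsets = offsets
        self.queues = [collections.deque() for _ in offsets]

    def enqueue(self, tsc, packet, arrival_time):
        # Stamp the packet with its modified deadline: arrival time + M_i.
        self.queues[tsc].append((arrival_time + self.offsets[tsc], packet))

    def dequeue(self):
        # Among the front packets of all non-empty queues, pick the one
        # with the lowest modified deadline. No sorting is required.
        candidates = [(q[0][0], i) for i, q in enumerate(self.queues) if q]
        if not candidates:
            return None
        _, i = min(candidates)
        return self.queues[i].popleft()[1]


# Two TSCs: index 0 is low priority (M_low = 0.5 s), index 1 is high (M_high = 0).
sched = MEDFScheduler(offsets=[0.5, 0.0])
sched.enqueue(0, "low-1", arrival_time=0.0)   # deadline 0.5
sched.enqueue(1, "high-1", arrival_time=0.3)  # deadline 0.3
sched.enqueue(0, "low-2", arrival_time=0.1)   # deadline 0.6
print(sched.dequeue())  # high-1: its later arrival still beats low-1's deadline
print(sched.dequeue())  # low-1
```

With Mlow = 0 the scheduler degenerates to FIFO over arrival times; as Mlow grows, low priority front packets lose every comparison for longer, approaching SP behavior.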

3. MEDF performance evaluation

The buffer management and scheduling strategy influences the packet loss and delay. Packet drops decrease TCP's congestion window size and thereby reduce the TCP throughput. Packet delay increases the RTT and leads to degraded TCP throughput. In addition, heavily delayed packets are perceived as lost packets. The objective of this section is the investigation of the impact of various buffer management and scheduling strategies on the TCP throughput. We first present the setup of our simulation experiments. Then we investigate the throughput of TCP flows depending on the MEDF parameter on a single link and on multiple links. Finally, we study the impact of various buffer management strategies on the throughput in combination with and without MEDF.
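The default settings used throughout the simulations (500-byte packets, a 1.28 Mbit/s link, a 0.5 s maximum queuing delay, and a 100 ms round trip time, as detailed in Section 3.1) fit together arithmetically; a small Python sketch checks the derived quantities:

```python
# Numerical check of the default simulation settings from Section 3.1.
sp = 500 * 8        # packet size in bits (500 bytes including headers)
cl = 1.28e6         # link bandwidth in bit/s
d_tx = sp / cl      # transmission delay of one packet: 3.125 ms

buffer_bytes = cl * 0.5 / 8        # buffer for 0.5 s maximum queuing delay
print(round(buffer_bytes))         # 80000 bytes = 80 KB
print(round(buffer_bytes / 500))   # 160 packets

d_prop = 46.875e-3                 # link propagation delay
rtt = 2 * (d_tx + d_prop)          # minimum RTT on a single link
print(round(rtt * 1000))           # 100 ms
print(round(rtt / d_tx))           # 32 unacknowledged packets when saturated
```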


3.1. Simulation objective and setup

The objective of this work is to characterize the prioritization level of saturated TCP flows for different buffer management strategies and scheduling algorithms. We work with two different TSCs for high and low priority traffic (TSClow and TSChigh). Let nhigh be the number of high priority flows and nlow the number of low priority flows. The average throughput of all high and low priority flows is the simulation target and denoted by Qhigh and Qlow, respectively. If the number of flows in both TSCs differs, it is hard to compare their throughput. Therefore, we normalize the overall throughput QX per traffic class by the number of corresponding flows nX to get the TSC-specific per flow throughput. We measure the prioritization level of the high priority flows by the ratio of the TSC-specific per flow throughputs:

  p = (Qhigh / nhigh) / (Qlow / nlow) = (Qhigh · nlow) / (Qlow · nhigh)    (2)

We used the network simulator (NS) version 2 [13] to run the experiments and chose the New Reno implementation of TCP [14] with delayed acknowledgement. As the TCP sources may synchronize due to the saturation of the senders and the artificially constant simulation conditions, we randomize the start time of the TCP sources within the first second. Hence, we applied the replicate-delete method to obtain statistically reliable results of this non-ergodic stochastic process. Our results show only average values, but the simulated time was chosen to yield very narrow confidence intervals.

We use the following default settings in our experiments. We choose a packet size of sp = 500 bytes including headers and a link bandwidth of cl = 1.28 Mbit/s. Thus, the transmission delay of a packet is dTX = sp / cl = 3.125 ms. We allow a maximum queuing delay of 0.5 s and therefore set the buffer size to sB = cl · 0.5 s = 80 KB, which corresponds to 160 packets. The path of the flows comprises nlinks links. Then, the minimum round trip time amounts to RTT = 2 · nlinks · (dTX + dprop) with dprop being the link propagation delay. We set dprop = 46.875 ms to get a round trip time on a single link of RTT = 2 · (3.125 ms + 46.875 ms) = 100 ms. As a result, a single saturated TCP connection may have RTT / dTX = 100 ms / 3.125 ms = 32 unacknowledged packets in an unloaded network. Thus, we set the maximum TCP window size to 16 such that two flows can send with a maximum rate of 640 kbit/s each. This is the minimum number of overall flows (nmin) in our simulations since more flows are needed to cause packet loss and significant delay. This is necessary to observe effects on the TCP throughput due to its congestion control mechanism. We partition the number of flows in our simulation equally into low and high priority flows unless mentioned otherwise.

3.2. TCP throughput with SP and FIFO scheduling

FIFO does not prioritize traffic at all and is, therefore, one extreme within the spectrum of scheduling disciplines. Another extreme is Static Priority (SP). It leads to starvation of low priority flows during network congestion regardless of the applied buffer management. Due to insufficient link capacity, there are almost always high priority packets waiting for transmission in the router buffer. SP dequeues those packets, and even though low priority packets occupy most of the buffer space, their chance to leave the buffer is very low. SP does not consider a maximum delay for low priority traffic. Thus, the TCP timers of those connections expire, again and again. As a consequence, SP leads to starvation of low priority TCP flows and is, therefore, completely inadequate for severely congested TCP/IP networks.

3.3. Impact of MEDF scheduling on a single link

To understand the effect of the MEDF parameter more easily, our investigation focuses first on a single link experiment with FBS as the buffer management strategy. Fig. 2 shows the classical dumbbell topology for our single link experiment. A number nhigh of high priority TCP senders are connected via a single link with the same number of high priority TCP receivers and, analogously, we have nlow low priority flows. Router B has sufficient capacity to serve the


Fig. 2. Simulation setup of the single link experiment.

link and to distribute the arriving packets to their TCP receivers. Fig. 3 shows the prioritization level p of the high priority flows. The x-axis shows the number of simultaneous TCP connections, chosen as n = nmin · 2^i, 0 ≤ i ≤ 7. We conducted this experiment for the traffic mixes nhigh:nlow of 1:3, 1:1, and 3:1 with the MEDF parameter Mlow = 0.5 s, which corresponds to the maximum waiting time of a packet in the presence of FIFO scheduling. Due to the design of the experiments, there is no congestion for n = 2 users. Both can send at a maximum speed of 640 kbit/s without significant interference, so that we do not observe any prioritization (p = 1). The main result of Fig. 3 is that MEDF achieves the same prioritization of high priority packets regardless of the traffic mix. We recognize that by the fact that the curves for the three traffic mixes take approximately the same course. The influence of the congestion intensity, i.e. the number of connections, is significantly larger than the influence of the traffic mix. In contrast, conventional algorithms like Weighted Round Robin (WRR) require a traffic-mix-specific configuration and, therefore, their prioritization level decreases severely with an increasing fraction of high priority traffic when the WRR parameters remain unchanged. This has been pointed out in [7]. In the experiment above, we have illustrated the impact of an exponentially increasing congestion intensity. Now, we study the prioritization in scenarios with less competition for resources in the following experiment. We consider two high priority and two low priority saturated TCP connections and vary the link bandwidth cl from 128 kbit/s up to 2.56 Mbit/s, a value sufficient to support four connections at

their maximum speed of 640 kbit/s. A very low bandwidth of cl = 128 kbit/s throttles all sources so much that the throughput is mainly determined by packet losses. However, packet losses are determined by the buffer management and not by the packet scheduling strategy. Therefore, Fig. 4 shows a relatively moderate prioritization level p for MEDF vs. FIFO in that scenario. The prioritization of the high priority flows increases linearly with the bandwidth as long as the capacity is less than about 50% of the capacity needed to serve all flows at full speed. The maximum bandwidth up to which we observe this linearity decreases with increasing offset parameter Mlow and seems to converge to 1.28 Mbit/s. The explanation is simple. At 50% × 2.56 Mbit/s, the resources are sufficient to support two flows. An increasing delay advantage for high priority traffic yields schedulers with an increasing prioritization, such that the MEDF scheduler converges towards an SP scheduler. With SP, the bandwidth is used to first support high priority flows exclusively before increasing capacity suffices to serve also the low priority flows. The different maximum prioritization levels show that MEDF's offset parameter effects an interpolation between SP and FIFO scheduling from a prioritization point of view. Hence, Mlow controls the prioritization level. We interpret Fig. 4 as follows. The throughput Qhigh for the high priority flows is almost saturated (i.e. constant) at a bandwidth of 1.28 Mbit/s. This explains the hyperbolic decrease of the prioritization level p = Qhigh / Qlow as the additional bandwidth between 1.28 and 2.56 Mbit/s can be used by the low priority flows. Note that the prioritization level for Mlow ≥ 1 s is larger than 200% (p ≥ 2) if the available bandwidth is about 10% or more of

Fig. 3. Impact of congestion intensity and traffic mix on the prioritization level on a single link with 1.28 Mbit/s.

Fig. 4. Impact of the congestion intensity and the MEDF delay advantage on the prioritization level of two low and two high priority TCP connections.


Fig. 5. Simulation setup of the multiple link experiment.

the capacity needed to serve all flows at maximum rate (2.56 Mbit/s). If more than 50% is available, the high priority flows can send at almost maximum speed. For available bandwidth below 10%, the congestion is so strong that the throughput of the high priority connections would also be very low under strict prioritization with SP scheduling.

3.4. Impact of MEDF scheduling on multiple links

We extend our single link experiment to multiple links to assess the influence of MEDF on the prioritization level p if MEDF is applied multiple times. Simply concatenating several links and routers does not provide a reasonable environment to investigate the impact of scheduling on the throughput of TCP in networks. In such an experiment, packets compete for resources at the first router, which brings them into an order. Then, the packets are passed from one router to the next in exactly this order without suffering loss or delay. Therefore, cross-traffic is required. Fig. 5 shows the simulation topology for the multi-link experiment in the case of two links. Additional TCP sources connect to the routers and share exactly one link with the connections under study. The TCP connections need the same bandwidth per flow on all links. If the bandwidth differs from link to link, the link with the lowest capacity becomes the bottleneck and dominates the observable effects. However, doubling the bandwidth of the links with cross-traffic solves this problem. We send the cross-traffic over the same number of links to account for comparable round trip times for the observed traffic and the cross-traffic. Therefore, we introduce router D and the link between C and D. To make the results for different numbers of links more comparable, we want to keep the maximum rate for a TCP source with a sender window of 16 packets at 640 kbit/s, so we strive again for an RTT of 100 ms. Thus, we set the link propagation delay to dprop = RTT / (2 · nlinks) − dTX. We set the offset parameter Mlow = 0.5 s, i.e. equivalent to the maximum queuing delay of the buffer, and used FBS as the default buffer management strategy. There are as many high priority connections as low priority connections for all
equivalent to the buffer size, and used FBS as default buffer management strategy. There are as many high priority connections as low priority connections for all

pairs of communication endpoints. Fig. 6 shows the effect of the path length and congestion intensity on the prioritization level over multiple links. In general, the prioritization level increases with the number of links in the path because the number of MEDF scheduling applications increases.

3.5. Impact of MEDF scheduling and buffer management strategies

We consider the impact of MEDF scheduling on the prioritization level in combination with different buffer management strategies. Fig. 7 shows the prioritization results for the single link experiment and an equal traffic mix. With FIFO and FBS, all packets are handled in the same manner such that the prioritization level is p = 1 regardless of the congestion intensity. FIFO with PBS increases the prioritization level slightly, up to p = 1.25. The impact of MEDF is obviously stronger as it achieves a prioritization level of about p = 2 with FBS. The curve for MEDF in combination with PBS provides a prioritization level of about p = 2.8 and looks like a superposition of the curves for FIFO with PBS and for MEDF with FBS. Hence, the priority

Fig. 6. Impact of congestion intensity and the path length on the prioritization level.


Fig. 7. Impact of congestion intensity, buffer management strategy, and scheduling discipline on the prioritization level.

Fig. 9. Impact of congestion intensity and RED parameter minth on the prioritization level on a single link with 1.28 Mbit/s for maxth = 80 packets and C = 160 packets.

effects of scheduling disciplines and buffer management strategies complement each other.

3.6. Comparison of MEDF with RED buffer management

RED gateways help to differentiate the treatment of high and low priority packets, in particular in the presence of TCP flow control. Fig. 8 recalls the meaning of the RED-specific parameters minth and maxth regarding the RED packet loss probability. Note that the average queue length avg is calculated based only on low priority packets to prevent starvation of low priority flows. Fig. 9 shows the prioritization level for increasing congestion intensity and various minimum thresholds minth, which control the discard probabilities for low priority traffic. In contrast to MEDF, the prioritization level starts at a value of p = 1, i.e. high priority packets receive no preferential service. If the network is only slightly congested, the buffer usually does not fill enough, so the packet drop probabilities for low priority packets remain too low. Therefore, RED cannot take significant effect, in particular for high values of minth. For medium network congestion, the prioritization level increases and, finally, it reaches a nearly constant level and decreases only slightly for very high network congestion. The value of the minimum threshold influences the prioritization level directly because low values make RED drop low priority packets earlier. In contrast to RED, MEDF strongly prioritizes high priority packets already in lightly and moderately congested networks. A comparison of Figs. 3 and 9 makes this obvious.

Fig. 8. Packet loss probability induced by RED buffer management depending on the average queue length avg of low priority packets.
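The RED variant used in this paper (EWMA over low priority packets only, linear drop probability between minth and maxth as in Fig. 8) can be sketched as follows; the class and parameter names are illustrative, not from the original RED implementation:

```python
import random

# Sketch of RED for two classes as used in the paper; names are illustrative.
class REDQueue:
    def __init__(self, min_th, max_th, wq=0.002):
        self.min_th, self.max_th, self.wq = min_th, max_th, wq
        self.avg = 0.0  # EWMA of the LOW priority queue length only

    def update_avg(self, q_low):
        # avg = (1 - wq) * avg + wq * q on every packet arrival, Eq. (1) input
        self.avg = (1 - self.wq) * self.avg + self.wq * q_low

    def drop_probability(self):
        # Linear ramp of Fig. 8: 0 below min_th, capped at 1 above max_th.
        if self.avg <= self.min_th:
            return 0.0
        return min(1.0, (self.avg - self.min_th) / (self.max_th - self.min_th))

    def accept_low_priority(self):
        # Only low priority packets face the random drop; high priority
        # packets are accepted while free buffer space is available.
        return random.random() >= self.drop_probability()


red = REDQueue(min_th=20, max_th=80)
red.avg = 50  # assume the EWMA has settled at 50 packets
print(red.drop_probability())  # (50 - 20) / (80 - 20) = 0.5
```

Computing the EWMA over low priority packets only, as the paper does, prevents a buffer full of high priority traffic from starving the low priority class.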

4. Conclusion

In this work, we examined the impact of the Modified Earliest Deadline First (MEDF) scheduling discipline on the TCP throughput in congested TCP/IP networks. We considered a Differentiated Services environment with several transport service classes (TSCs) of different priorities, which are enforced by scheduling disciplines and buffer management strategies in routers. Conventional scheduling disciplines like Weighted Round Robin (WRR), Deficit Round-Robin (DRR), or Class Based Queueing (CBQ) assign fixed bandwidth shares to TSCs. This is problematic if the traffic fractions of the different TSCs fluctuate, as this information is usually not instantly available to configure the algorithms. As a result, wrong settings can severely deteriorate the QoS in terms of packet loss and delay. In contrast, MEDF can be configured without knowledge of the traffic mix. It is simple to implement and simple to configure because the delay advantage of high priority flows is the only parameter in the case of two TSCs. Therefore, MEDF is an interesting candidate for use in commercial routers. We investigated the prioritization level, which we introduced as the ratio of the per flow throughput of high and low priority flows. Our simulation results showed that the prioritization level is independent of the traffic mix and is controlled by the delay advantage. MEDF allows us to


interpolate the prioritization level between First-in-First-out (FIFO) and Static Priority (SP) scheduling. SP leads to starvation of low priority traffic while FIFO provides no prioritization at all. In contrast, MEDF achieves a controllable prioritization and avoids starvation of low priority flows. Our experiments with multiple link scenarios showed that the prioritization of high priority traffic regarding comparable flows increases with their path length. Full Buffer Sharing (FBS) was the default buffer management strategy in our experiments. However, we also studied combinations of the scheduling disciplines FIFO and MEDF with the buffer management strategies FBS and Priority Buffer Sharing (PBS). Our simulation results showed that they complement each other. Random Early Detection (RED) gateways with packet drops depending on the average queue length are often discussed as attractive buffer management strategies for traffic prioritization. However, RED has many parameters, which make its configuration complex. In addition, our simulations showed that its mechanisms only take effect in severe overload situations, while MEDF is also visibly effective under low and moderate overload conditions.

References

[1] S. Blake, et al., RFC2475: An Architecture for Differentiated Services, ftp://ftp.rfc-editor.org/in-notes/rfc2475.txt (Dec. 1998).


[2] D. Grossman, RFC3260: New Terminology and Clarifications for Diffserv, ftp://ftp.rfc-editor.org/in-notes/rfc3260.txt (Apr. 2002).
[3] B. Davie, et al., An Expedited Forwarding PHB (Per-Hop Behavior), ftp://ftp.rfc-editor.org/in-notes/rfc3246.txt (Mar. 2002).
[4] A. Charny, et al., Supplemental Information for the New Definition of the EF PHB (Expedited Forwarding Per-Hop Behavior), ftp://ftp.rfc-editor.org/in-notes/rfc3247.txt (Mar. 2002).
[5] S. Floyd, V. Jacobson, Link-sharing and resource management models for packet networks, IEEE/ACM Transactions on Networking 3 (4) (1995).
[6] M. Shreedhar, G. Varghese, Efficient fair queueing using deficit round-robin, IEEE/ACM Transactions on Networking 4 (3) (1996).
[7] M. Menth, M. Schmid, H. Heiß, T. Reim, MEDF: a simple scheduling algorithm for two real-time transport service classes with application in the UTRAN, IEEE INFOCOM (2003).
[8] F.L. Faucheur, RFC4127: Russian Dolls Bandwidth Constraints Model for Diffserv-aware MPLS Traffic Engineering, ftp://ftp.rfc-editor.org/in-notes/rfc4127.txt (Jun. 2005).
[9] S. Floyd, V. Jacobson, Random early detection gateways for congestion avoidance, IEEE/ACM Transactions on Networking 1 (4) (1993) 397–413.
[10] B. Braden, D. Clark, J. Crowcroft, et al., RFC2309: Recommendations on Queue Management and Congestion Avoidance in the Internet, ftp://ftp.rfc-editor.org/in-notes/rfc2309.txt (Apr. 1998).
[11] F. Anjum, L. Tassiulas, Balanced-RED: an algorithm to achieve fairness in the Internet, IEEE INFOCOM (1999).
[12] U. Bodin, O. Schelén, S. Pink, Load-tolerant differentiation with active queue management, SIGCOMM Computer Communication Review (2000).
[13] K. Fall, K. Varadhan, The ns Manual, http://www.isi.edu/nsnam/ns/doc/ns_doc.pdf (Dec. 2003).
[14] W.R. Stevens, TCP/IP Illustrated, vol. 1, Addison-Wesley, Reading, MA, 1994.