Fair intelligent bandwidth allocation for rate-adaptive video traffic




Computer Communications 23 (2000) 1425–1436 www.elsevier.com/locate/comcom

D.B. Hoang*, X. Yu, D. Feng
Basser Department of Computer Science, University of Sydney, Sydney, Australia

Abstract

This paper presents an algorithm for network bandwidth sharing using explicit rate feedback, the Fair Intelligent Bandwidth Allocation (FIBA), for transporting rate-adaptive video traffic. We show that the FIBA algorithm is capable of allocating bandwidth fairly, for a given fairness criterion, among competing, rate-adaptive video sources. The algorithm is able to reallocate smoothly when a new connection is admitted, when some connections renegotiate their minimum guaranteed cell rate, or when a connection is throttled earlier along its path. Furthermore, it can prevent congestion, especially during the initial periods when buffer queues can build up significantly. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Rate-adaptive video traffic; Fair bandwidth allocation; Network bandwidth sharing algorithm; Rate-based congestion control

1. Introduction

For high-speed networks, an explicit rate feedback control mechanism has become essential for several reasons. An explicit rate feedback mechanism allows congestion conditions to be monitored, detected or predicted early enough that the network can exercise appropriate measures to maintain throughput, prevent excessive delays and avoid network collapse. With explicit feedback, feedback information is available readily and frequently enough for effective monitoring and congestion control. Explicit rate information allows a control algorithm to act effectively and definitively by hard-limiting the amount of traffic the network supports at the control instant. Without an explicit rate measure, it may take a number of network round-trip times before a congestion control decision becomes effective, and in the meantime congestion may continue to build up uncontrollably. The explicit rate feedback mechanism has proved effective and was adopted by the ATM Forum for ABR congestion control. Besides congestion control, explicit rate feedback helps to allocate resources fairly, especially when compressed video traffic competes for the limited network bandwidth available.

Most approaches to supporting the transmission of video traffic have relied on preventive control mechanisms such as call admission control and usage parameter control. One approach is for the video encoder to produce a constant bit rate (CBR)

* Corresponding author. E-mail address: [email protected] (D.B. Hoang).

data stream by dynamically adjusting the quantisation of the video sequence, and for the network to offer a constant bandwidth to each video connection. However, the CBR approach often causes the video quality to fluctuate, sometimes severely [1]. Other approaches allow the encoder to generate a variable bit rate (VBR) data stream, thereby obtaining a greater statistical multiplexing gain [2] and relatively constant image quality. Simple preventive mechanisms for VBR perform well as long as there is no data loss. However, network congestion can cause data loss due to buffer overflows, or excessive delays, with a consequent degradation in image quality. The use of a best-effort service for the transport of compressed video has also been explored [3]. This requires potentially substantial source adaptation, as in the Internet video tools (such as VIC and NV) where the sources adapt to the rate offered by the network. In this case the quality can become unacceptably poor, since no minimum rate is guaranteed to a flow.

Because of the problems preventive congestion control techniques have in supporting video traffic, combinations of preventive and reactive (i.e. feedback-based) congestion control schemes have been explored [4,5]. It has been demonstrated that the degradation in image quality during congestion periods can be controlled gracefully by adapting the bit rate of a video encoder to the state of the network [4]. However, a drawback of these schemes is that during congestion periods sources with more complex image sequences see a greater reduction in image quality. The main problem is unfairness in the allocation of bandwidth among competing sources. We believe explicit-rate ABR service can be enhanced, in terms of fair and efficient




Nomenclature

ABR    Available bit rate
ACR    Allowed cell rate
ATM    Asynchronous transfer mode
CBR    Constant bit rate
CCR    Current cell rate
ER     Explicit rate
ERICA  Explicit rate indication congestion avoidance
FIBA   Fair intelligent bandwidth allocation
FICC   Fair intelligent congestion control
ICR    Initial cell rate
IM     Intelligent marking
MACR   Mean allowed cell rate
MCR    Minimum cell rate
mcsa   Mean current share allocation
PCR    Peak cell rate
RM     Resource management
VBR    Variable bit rate

bandwidth sharing, for transporting rate-adaptive video, for a number of reasons:

• Rate-adaptive video sources can adapt their rates dynamically by adjusting their compression parameters.
• Rate-adaptive video is inherently bursty and will become more so with advances in compression technology and VLSI chip design. The bursty nature of the traffic can be exploited for more efficient bandwidth sharing [2].
• Admission control for ABR is based on the minimum cell rate (MCR) negotiated at connection setup. Consequently, it provides the same simplicity as CBR for admission control.
• Renegotiation of the guaranteed minimum cell rate can be done more frequently and effectively by way of RM cells. Consequently, we can exploit the high short-term correlation of video to predict the needed rates over very short intervals.
• The information returned in the RM cells may be used to adapt the bit rate of the video encoder appropriately.
• Unlike traditional best-effort services in data networks, such as the current TCP/IP Internet, the ABR service guarantees a minimum bandwidth for its application. This can ensure acceptable quality even in periods of congestion.
• The use of the explicit-rate schemes of ABR for managing a video source is also attractive from an implementation point of view, since it allows a single feedback control mechanism to serve a wide range of source-adaptable traffic.
• Recent efforts have demonstrated that such ABR-like traffic is feasible and desirable for fair bandwidth sharing.

Several efforts have employed ABR-like feedback control mechanisms for sharing network bandwidth among

competing video connections [6–8]. It should be noted that an ABR-like control mechanism satisfies the minimum requirements of video communication by guaranteeing a minimum acceptable bandwidth demanded by the connection. This minimum bandwidth is the MCR negotiated between the source and the network at the beginning of the connection. In Refs. [6,7], an MCR-proportional max–min policy was proposed to support rate-adaptive video. However, that work focused on source rate matching, and it was unclear what distributed feedback control algorithm should be used to achieve such a network bandwidth sharing policy.

For such an application of an ABR-like feedback control mechanism to be fruitful, two main difficulties have to be overcome. The first is caused by the source behaviour. A rate-adaptive video source does not behave exactly like an ABR source, in that its rate does not change continuously, either linearly or exponentially; the source rate jumps from one explicit rate to another as indicated by the feedback mechanism. This makes it difficult for a switch algorithm to track. Secondly, to guarantee the quality of service demanded by a rate-adaptive video application, a rate allocation algorithm must be definitive rather than suggestive in allocating bandwidth. This implies that a switch needs to keep more information about the availability of network bandwidth and about the demands of its connections. Under these circumstances an ABR-like adaptive mechanism will not adapt well to video traffic with large jumps in rate unless convergence to the target rate is considered explicitly.

Recently, Hou et al. [9] proposed an attractive algorithm for bandwidth sharing and showed that it allocates to each connection its guaranteed minimum rate plus an additional "weighted" max–min share of the available bandwidth. The algorithm decouples the source's actual transmission rate from the allowed cell rate variable used for protocol convergence. We believe that the algorithm can be improved computationally; however, in this paper we propose an algorithm radically different from that in Ref. [9]. Our algorithm also includes a mechanism to prevent congestion, especially in the transient periods.

In this paper, we propose a distributed feedback control algorithm for network bandwidth sharing, the Fair Intelligent Bandwidth Allocation (FIBA), for transporting rate-adaptive video traffic. Part of the algorithm is derived from the Fair Intelligent Congestion Control (FICC), which was proposed earlier for ABR congestion control [10,11] and from which FIBA inherits its fairness and congestion-control properties. Simulation results on various network configurations show that the FIBA algorithm is capable of allocating bandwidth fairly among competing, adaptive video sources. The algorithm is able to reallocate smoothly when a new connection is admitted. It supports renegotiation of the minimum cell rate (MCR) for connections. It can prevent congestion, especially during the initial


transient periods. Furthermore, the algorithm can be extended to work with different criteria of fairness.

The paper is organised as follows. Section 2 briefly reviews the ATM Forum's ABR explicit feedback mechanism and, in particular, the Fair Intelligent Congestion Control (FICC) scheme. Section 3 details the design and development of the Fair Intelligent Bandwidth Allocation algorithm. Section 4 describes the simulation environment and its parameters. Section 5 presents the simulation results and analysis. Section 6 concludes the paper by summarising the contributions and directions for further research.

2. Review of the explicit rate ABR feedback mechanism

The explicit rate (ER) feedback mechanism has proved so effective for congestion control in high-speed networks that it has been adopted for traffic management by the ATM Forum [12]. To obtain end-to-end feedback information, the source end system sends special resource management (RM) cells to the destination end system on a regular basis. On receiving an RM cell, the destination indicates the network congestion status by setting the congestion indication field in the RM cell header before returning the cell to the source. The source then adjusts its cell rate accordingly to avoid congestion while maintaining an acceptable quality of service for the connection.

In explicit rate feedback, each intermediate switch can mark explicitly the rate it can support for the connection; an ER field in the RM cell is provided for this purpose. The source end system writes the initial cell rate (ICR) in the ER field when it generates an RM cell. This ER value can be reduced by a congested switch along the ABR connection path, or by the destination end system, to a value it can currently support. Since the maximum flow of the connection is determined by the switch along the path with the smallest supported rate, intermediate switches are not allowed to increase the ER value: any rate above the minimum ER would result in loss of information due to cells being dropped.

In an ER feedback scheme, the source sending rate is modified according to the congestion status of the network. The initial cell rate (ICR), minimum cell rate (MCR) and peak cell rate (PCR) are specified by the network when the connection is set up, and from then on the source is allowed to send data at a rate between 0 and the PCR. Explicit rate congestion control schemes differ mainly in the way they detect congestion and in the way they compute the ER for their connections. The intelligent marking (IM) scheme and its variants [13] provide good performance without resorting to per-virtual-circuit accounting. The Explicit Rate Indication Congestion Avoidance (ERICA) scheme and its variants [14] allow better bandwidth allocation at the cost of switch complexity. They require a


switch to keep a fair amount of information about its connections. Recently we proposed the Fair Intelligent Congestion Control (FICC) scheme [10,11]. FICC is simple, robust and effective; importantly, it is able to allocate bandwidth fairly among its connections, and it is scalable. For the rest of this section, we highlight the main goals and features of the design and restate the FICC switch algorithm for congestion control.

Traditionally, most congestion control schemes employ different rate allocation policies for congestion and noncongestion periods, and greatly different allocation rates result. When a connection experiences congestion its rate is restricted by the bottleneck switch, while other connections can increase their rates, usually up to a peak rate. Since sources that traverse more links are more likely to encounter congested switches, they are allocated much lower rates than sources traversing fewer links, and unfairness is introduced. We believe it is a weakness to use a fixed queue threshold to divide a network arbitrarily into congestion and noncongestion states.

The Fair Intelligent Congestion Control treats rate allocation for noncongestion and congestion periods in a similar and consistent manner. In fact it has no hard line separating noncongestion from congestion. Instead, it aims for a target operating point where the switch queue length is at an optimum level for good throughput and low delay, and where the rate allocation is optimum for each connection. The rate allocation appropriately reflects the distinction among connections that traverse switches at different congestion levels.

In order to estimate the current traffic generation rate of the network and allocate it fairly among connections, a mean allowed cell rate (MACR) is kept per output queue at the switch. An explicit rate is then calculated as a function of the MACR. The function employed is a queue control function that encourages traffic sources if the target operating point has not been reached and discourages them if the switch operates beyond its target operating point.

FICC is described in the algorithm below. Essentially, when the switch receives a resource management (RM) cell in the forward direction, it updates its estimate of the MACR for the connections that pass through the corresponding output queue. When the switch receives a backward RM cell, if the target point has not been reached, it can oversell the available bandwidth by applying a queue control factor f(Q) (called DPF in the algorithm); if the target point has already been reached, connections are discouraged by f(Q), which takes into account the fraction of buffer available at the time. It should be noted that the switch does not differentiate between RM cells of different connections sharing the same queue when calculating the MACR; however, the ER written into the backward RM cell takes into account the minimum cell rate (MCR) of the particular connection. It should also be noted that a



particular source will receive only backward RM cells corresponding to the RM cells that it sends.

Fair Intelligent Congestion Control Algorithm

Parameters
    B:   the averaging ratio
    BUR: buffer utilisation ratio
    a:   queue control function parameter
Per-queue variables
    MACR: mean allowed cell rate
    DPF:  down pressure factor
    Q0:   target queue length
Initialisation
    Q0 = BUR*BufferSize
At the ATM switch
    if (receive RM (ACR, ER, DIR = forward))
        if (QueueLength > Q0)
            if (ACR < MACR)
                MACR = MACR + B*(ACR - MACR)
        else
            MACR = MACR + B*(ACR - MACR)
    if (receive RM (ACR, ER, DIR = backward))
        if (QueueLength > Q0)
            DPF = (BufferSize - QueueLength)/(BufferSize - Q0)
        else
            DPF = (a - 1)*(Q0 - QueueLength)/Q0 + 1
        ER = max(MCR, min(ER, DPF*MACR))

The linear function f(Q) employed in our scheme is shown in Fig. 1. f(Q) takes values between 1 and 0 for queue lengths in the range [Q0, BufferSize] and between a and 1 for queue lengths in the range [0, Q0]; the two line segments meet at Q0, where f(Q) = 1. Various exponential forms for f(Q) have been explored, but it is simplest to use this linear one, with negligible performance penalty. The function can be defined as

    f(Q) = (BufferSize - Q)/(BufferSize - Q0),      for Q > Q0

and

    f(Q) = (a - 1)*(Q0 - Q)/Q0 + 1,                 for Q <= Q0.

Fig. 1. The choice of queue control function f(Q).
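To make the queue control law concrete, the following is a minimal Python sketch of the per-queue FICC update described above. The function names, the dictionary-based switch state and the parameter handling are illustrative assumptions only; they follow the pseudocode rather than any published implementation.

    # Hypothetical sketch of the FICC per-queue control (names and state layout assumed).
    def f_Q(queue_length, buffer_size, q0, a):
        # Linear queue control function: between a and 1 for Q <= Q0,
        # between 1 and 0 for Q0 < Q <= buffer_size.
        if queue_length > q0:
            return (buffer_size - queue_length) / (buffer_size - q0)
        return (a - 1.0) * (q0 - queue_length) / q0 + 1.0

    def on_forward_rm(state, acr, b):
        # Update the running average MACR; above the target queue length only
        # lower ACRs are averaged in, so congestion pulls MACR down.
        if state["queue_length"] > state["q0"]:
            if acr < state["macr"]:
                state["macr"] += b * (acr - state["macr"])
        else:
            state["macr"] += b * (acr - state["macr"])

    def on_backward_rm(state, er, mcr, a):
        # Compute the down pressure factor and mark the explicit rate,
        # never going below the connection's MCR.
        dpf = f_Q(state["queue_length"], state["buffer_size"], state["q0"], a)
        return max(mcr, min(er, dpf * state["macr"]))

With a slightly greater than 1 and the queue just below Q0, the returned DPF stays marginally above 1, which produces the mild overselling described above; once the queue exceeds Q0, DPF falls linearly towards 0 as the buffer fills.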

3. Fair intelligent bandwidth allocation

In this section we state the goal of the algorithm, present design considerations with respect to that goal, and then present the algorithm itself.

3.1. Goal

We aim to develop a bandwidth allocation algorithm that can allocate bandwidth fairly according to a given criterion of fairness. The algorithm must be able to reallocate smoothly when a new connection is admitted or when some connections renegotiate their minimum guaranteed cell rate. Furthermore, where the target bandwidth allocation of some connection cannot be used, because the connection is limited by its peak cell rate or has been throttled by earlier switches along the path to its destination, the unused bandwidth must be reallocated fairly to the other connections. The algorithm should also be able to handle congestion if it occurs.

3.2. Design considerations

3.2.1. Target fair share allocation

In order to make use of the total link bandwidth we have to know, at all times, the exact amount of bandwidth available to be allocated. Each connection is guaranteed an MCR, so the amount of bandwidth left to be allocated is (C - sum_mcr), where C is the link capacity and sum_mcr is the sum of the MCRs guaranteed to all connections passing through the switch. Given the fairness criterion for allocation, the target_fair_share of a connection can be calculated. For example, under an MCR-proportional rate allocation policy the target_fair_share is given by

    target_fair_share = mcr + mcr*(C - sum_mcr)/sum_mcr

For instance, on a 10 Mbps link shared by three connections with MCRs of 1.5, 1 and 0.5 Mbps, this gives target shares of 5, 3.33 and 1.67 Mbps (the peer-to-peer case of Table 2). In other words, the switch only has to keep the link capacity C and the target_fair_share of each connection.

3.2.2. Mean current share allocation

In FICC, we keep a running average (the mean allowed cell rate, MACR) of the current cell rates of all connections sharing an output queue. This MACR increases or decreases smoothly according to the switch buffer queue length and the connections' current cell rates. In the case of compressed video sources, the current cell

D.B. Hoang et al. / Computer Communications 23 (2000) 1425–1436

rate as indicated in an RM cell does not vary smoothly as it does for an ABR source (an ABR source decreases its current cell rate exponentially as long as the rate does not drop below the MCR, and increases it linearly as long as it does not exceed the PCR). Instead, the current cell rate of a compressed video source jumps from one level to another as dictated by the ER of the backward RM cells, so the variable MACR is no longer a true running average of the current cell rate. For this reason, the variable mcsa (mean current share allocation, as opposed to MACR) is employed here to indicate how far a connection's allocation is from its target_fair_share allocation. If it is already above the target_fair_share, mcsa is reduced proportionally; similarly, if it is below the target_fair_share, mcsa is increased proportionally. The target_fair_share allocation is thus used as a fixed point towards which the mean current share allocation is pulled.

3.2.3. Target buffer queue length

As in FICC, each switch employs a queue length indicator Q0 (derived from the buffer size and the desired level of buffer utilisation, Q0 = BufferSize*BUR) which indicates the level at which we want the switch to operate; it is a design parameter that ensures a certain level of throughput and delay. If the switch is operating above that level, the ER will be modified appropriately with the help of the down pressure factor, DPF.
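As a quick illustration of Sections 3.2.1 and 3.2.2, the Python fragment below computes the MCR-proportional target_fair_share and nudges mcsa towards it. The update rule and the averaging factor AV follow the algorithm listings later in this section; the function names and default values are assumptions made only for this sketch.

    # Illustrative sketch only: MCR-proportional target share and the mcsa update.
    def target_fair_share(mcr, link_capacity, sum_mcr):
        # Guaranteed MCR plus an MCR-proportional share of the leftover capacity.
        return mcr + mcr * (link_capacity - sum_mcr) / sum_mcr

    def update_mcsa(mcsa, acr, target, av=0.25, eps=1e-3):
        # Pull the mean current share allocation towards the target fair share.
        if abs(mcsa - target) < eps:
            return target
        if mcsa > target:
            return mcsa - ((mcsa + acr) / 2.0 - target) * av
        return mcsa + (target - (mcsa + acr) / 2.0) * av

    # Peer-to-peer example of Table 2: C = 10 Mbps, MCRs of 1.5, 1 and 0.5 Mbps.
    shares = [target_fair_share(m, 10.0, 3.0) for m in (1.5, 1.0, 0.5)]
    # shares == [5.0, 3.33..., 1.66...]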


3.2.4. Queue control function

As with FICC, we aim to maintain the operating point of the switch around its target buffer queue length. On the one hand, when the target operating point has not been reached we want to encourage a connection to send at more than its current rate; this can be interpreted as an effort to oversell the capacity to ensure high bandwidth utilisation. On the other hand, when the target operating point has been reached we want to discourage the connection appropriately. The linear function f(Q) (Fig. 1) makes the encouragement or discouragement proportional to the distance between the current operating point and the target operating point. See Ref. [10] for further discussion of f(Q).

3.2.5. Explicit rate estimation

The ER to be sent back to the source is simply the product of the DPF and the mcsa of the connection. If this product is larger than the ER already carried in the RM cell, the connection is already hard-limited by other switches downstream, and the switch leaves the ER in the RM cell intact.

3.2.6. The basic algorithm

In the simplest case, where no new connection is admitted, no MCR renegotiation is requested over the lifetime of the connection, and the PCR of the connection is always greater than its target_fair_share, the algorithm is extremely simple. All the switch needs to do is to modify the mcsa iteratively and pull it towards the target_fair_share of the connection. The switch estimates the ER of the connection using a queue control factor; the ER is then sent back to the source to indicate the maximum rate that can be supported for that connection, and the source rate has to adapt to this ER. The basic algorithm is in fact very similar to FICC for ABR traffic. One difference is that FICC operates per output queue, which may carry many connections, whereas in FIBA the algorithm is performed per connection. Another difference is the way FIBA modifies mcsa, using the target_fair_share allocation as the fixed point. We first specify each connection's source and destination behaviour, and then present the basic switch algorithm.

Algorithm 1: End System Behaviour

• Source behaviour: the source starts with ACR = ICR, ICR = MCR. For every Nrm transmitted data cells, the source sends an RM cell (CCR = ACR, MCR = MCR, ER = PCR, TACR = PCR). Upon receipt of a backward RM cell, it sets ACR = ER.
• Destination behaviour: the destination end system of a connection returns every RM cell back towards the source upon receiving it.

Algorithm 2: Basic Switch Algorithm

If cell is RM (DIR = forward)
    if (abs(mcsa - target_fair_share) < e)     /* e is a small +ve real value, of the order of 10^-3 */
        mcsa = target_fair_share
    else if (mcsa > target_fair_share)
        mcsa = mcsa - ((mcsa + acr)/2 - target_fair_share)*AV
    else
        mcsa = mcsa + (target_fair_share - (mcsa + acr)/2)*AV;
If cell is RM (DIR = backward)
    if QueueLength > BUR*BufferSize
        DPF = (BufferSize - QueueLength)/(BufferSize - BUR*BufferSize);
    else
        DPF = (a - 1)*(BUR*BufferSize - QueueLength)/(BUR*BufferSize) + 1;
    ER = Min(ER, DPF*mcsa)

If the algorithm is to support more complicated situations, other considerations have to be taken into account.

3.2.7. Target fair share allocation limitation by the connection peak cell rate

When a connection's PCR is less than its target_fair_share allocation, the connection cannot use all of the bandwidth share allocated to it by the algorithm. In this case



the algorithm has to allocate this unused portion of the bandwidth to other connections. The FIBA algorithm keeps track of these connections and their unused allocations and modifies the target_fair_share of the other connections appropriately.

Fig. 2. A peer-to-peer network.

3.2.8. Target fair share allocation limitation by bottlenecks at other switches

If there is a bottleneck somewhere along the path of a connection, the whole target_fair_share allocation cannot be taken up by the connection on that segment of the path. This condition has to be signalled to the other switches along the path so that they can take appropriate measures. In FIBA, the parameter tacr (true allocation cell rate) is used for this purpose; it is carried in a field of the RM cell. If a switch sees that its calculated target_fair_share allocation is greater than the tacr, it makes an adjustment and reallocates the leftover portion to other connections.

3.2.9. Renegotiation of the minimum guaranteed cell rate

We have assumed that each connection has prior knowledge of the minimum cell rate it requires to support a minimum video quality. However, it is sometimes difficult for a connection to estimate its minimum required rate accurately, so it is useful for a user to be able to renegotiate its MCR. FIBA supports this type of renegotiation and reallocates the target_fair_share allocations of its connections.

3.2.10. Admission of a new connection

If a new connection has been accepted by the connection admission control, the switch again has to readjust its share calculation to accommodate the new connection.

The complete switch algorithm is as follows.

Algorithm 3: Switch Algorithm

If cell is RM (DIR = forward)
    if RM (acr, mcr, er, tacr, ...) cell signals connection initiation {
        /* acr, mcr, er, pcr values here apply to the individual VC */
        mcsa = mcr;
        sum_mcr = partial_sum_mcr = mcr + sum_mcr;
        spare_per_mcr_sum = 0;   /* bandwidth not taken up by constrained connections, per unit mcr */
        x = 1;                   /* x indicates that a new connection has been established */
    }
    else {
        if (mcr has been changed) {
            recalculate sum_mcr, and set partial_sum_mcr = sum_mcr;
            spare_per_mcr_sum = 0;
            u0 = (target_fair_share - mcr)/mcr;   /* recalculate fair share per unit mcr for this VC */
        }
        if (last_tacr != 0) and (last_tacr != tacr) {   /* tacr has changed from its previous value */
            target_fair_share = u0*mcr + mcr;   /* recalculate fair share for the connection */
            partial_sum_mcr = sum_mcr;
            spare_per_mcr_sum = 0;
        }
        calculate(sw);
        if (target_fair_share > tacr) {   /* the connection cannot take up its fair share because of a bottleneck elsewhere */
            partial_sum_mcr = partial_sum_mcr - mcr;
            spare_per_mcr_sum = spare_per_mcr_sum + (target_fair_share - tacr)/partial_sum_mcr;
            target_fair_share = last_tacr = tacr;
        }
        tacr = target_fair_share;
        /* pull the mean current share allocation towards the target_fair_share allocation */
        if (abs(mcsa - target_fair_share) < e)





            mcsa = target_fair_share
        else if (mcsa > target_fair_share)
            mcsa = mcsa - ((mcsa + acr)/2 - target_fair_share)*AV
        else
            mcsa = mcsa + (target_fair_share - (mcsa + acr)/2)*AV;
    }
If cell is RM (DIR = backward)
    if QueueLength > BUR*BufferSize
        DPF = (BufferSize - QueueLength)/(BufferSize - BUR*BufferSize);
    else
        DPF = (a - 1)*(BUR*BufferSize - QueueLength)/(BUR*BufferSize) + 1;
    ER = Min(ER, DPF*mcsa)

Calculate(sw) {
    spare = temp = 0;   /* spare and temp are temporary variables */
    for VCs passing sw_i and traversing link i {
        if (this VC can take up its share of the spare bandwidth) or (x == 1) {
            u0 = (C - sum_mcr)/sum_mcr + spare_per_mcr_sum;
            target_fair_share = u0*mcr + mcr;
        }
        if (target_fair_share > pcr) {   /* target_fair_share is restricted by the peak cell rate */
            spare = spare + target_fair_share - pcr;
            target_fair_share = pcr;
            temp = temp + mcr;
        }
    }
    if (spare > 0) {   /* spare bandwidth due to the pcr constraint */
        partial_sum_mcr = partial_sum_mcr - temp;
        spare_per_mcr_sum = spare_per_mcr_sum + spare/partial_sum_mcr;
    }
    x = 0;
}

Fig. 3. A three-node network.

Fig. 4. A parking-lot network.

Parameters used in the algorithm:

• x: set to 1 when a new connection arrives; reset to 0 once the new-connection calculation is completed.
• last_tacr: used to detect a change of tacr.
• u0: the fair share per unit of MCR, u0 = (Capacity - (Sum of MCR))/(Sum of MCR) + (spare bandwidth)/(Sum of MCR of the connections that share the spare bandwidth).
• sum_mcr: sum of the MCRs of all connections.
• partial_sum_mcr: sum of the MCRs of connections that are not limited by their PCR or by a bottleneck elsewhere.
• spare: accumulated unused bandwidth from connections that cannot take up their target_fair_share allocation because of their PCR.
• spare_per_mcr_sum: unused bandwidth from connections constrained by their PCR, or by a bottleneck at other switches along their paths, per unit MCR of the remaining, unconstrained connections.
• target_fair_share: the target rate, target_fair_share = u0*mcr + mcr.
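For readers who prefer running code to pseudocode, the sketch below captures the spirit of calculate(sw): it recomputes MCR-proportional target shares on a single link and redistributes the bandwidth that PCR-limited connections cannot use. It is a simplified, single-shot illustration under assumed data structures; it ignores tacr, the x flag and the incremental bookkeeping of Algorithm 3, and it assumes at least one connection remains unconstrained.

    # Simplified illustration (assumed data structures) of the per-link share calculation.
    def fair_shares(link_capacity, vcs):
        # vcs: list of (mcr, pcr) pairs; returns the target share of each VC.
        n = len(vcs)
        capped = set()
        shares = [0.0] * n
        while True:
            excess = link_capacity - sum(mcr for mcr, _ in vcs)
            excess -= sum(vcs[i][1] - vcs[i][0] for i in capped)  # used by capped VCs
            uncapped_mcr = sum(vcs[i][0] for i in range(n) if i not in capped)
            u0 = excess / uncapped_mcr        # fair share per unit of MCR
            newly_capped = False
            for i, (mcr, pcr) in enumerate(vcs):
                if i in capped:
                    shares[i] = pcr
                    continue
                shares[i] = mcr * (1.0 + u0)
                if shares[i] > pcr:           # PCR-limited: cap and redistribute later
                    capped.add(i)
                    newly_capped = True
            if not newly_capped:
                return shares

    # Parking-lot link of Table 2: C = 10 Mbps, (MCR, PCR) per VC.
    print(fair_shares(10.0, [(1.5, 3.5), (1.0, 2.0), (1.0, 5.0), (0.5, 5.0)]))
    # -> [3.5, 2.0, 3.0, 1.5], matching the parking-lot targets in Table 2.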

Table 1. Simulation parameters

Component                    Parameter                         Value
Link (interswitch)           Speed                             10 Mbps
                             Distance                          1000 km
                             Delay                             5 µs/km
Link (host-switch)           Speed                             100 Mbps
                             Distance                          1 km
                             Delay                             5 µs/km
Host                         ICR (initial cell rate)           MCR
                             Nrm                               32
                             Input queue size                  3000 cells
Switch                       Processing delay                  4 µs
                             Output buffer size                500 cells
                             AV (averaging factor)             0.25
                             BUR (buffer utilisation ratio)    1/32
                             a (queue control parameter)       1.005
Peer-to-peer configuration   VC1, VC2, VC3 PCR                 5 Mbps
                             VC1 MCR                           1.5 Mbps
                             VC2 MCR                           1 Mbps
                             VC3 MCR                           0.5 Mbps
Three-node configuration     VC1 PCR / MCR                     7.5 / 0.5 Mbps
                             VC2 PCR / MCR                     9 / 1.5 Mbps
                             VC3 PCR / MCR                     4 / 2 Mbps
                             VC4 PCR / MCR                     10 / 1 Mbps
Parking-lot configuration    VC1 PCR / MCR                     3.5 / 1.5 Mbps
                             VC2 PCR / MCR                     2 / 1 Mbps
                             VC3 PCR / MCR                     5 / 1 Mbps
                             VC4 PCR / MCR                     5 / 0.5 Mbps



Table 2. Minimum rate requirement, peak rate constraint and target fair share allocation (Mbps) for each connection in each of the three simulated configurations: (a) peer-to-peer; (b) three-node; (c) parking-lot

                MCR    PCR    Target rate
Peer-to-peer
  VC1           1.5    5      5
  VC2           1      5      3.333
  VC3           0.5    5      1.666
Three-node
  VC1           0.5    7.5    1.5
  VC2           1.5    9      4.5
  VC3           2      4      4
  VC4           1      10     8.5
Parking-lot
  VC1           1.5    3.5    3.5
  VC2           1      2      2
  VC3           1      5      3
  VC4           0.5    5      1.5


Fig. 6. The maximum switch buffer queue length of Switch 1.

Table 3. The MCR, PCR and ACR (Mbps) of all connections for the peer-to-peer configuration

       MCR    PCR    ACR at source
VC1    1.5    5      5
VC2    1      5      3.35
VC3    0.5    5      1.67

4. Simulation environment

Our simulation tool is based on the ATM/HFC Network Simulator [15] developed by the National Institute of Standards and Technology. This tool is based on a network simulator developed at MIT; it provides discrete event simulation and a graphical user interface. Three network configurations are employed in the simulation: a peer-to-peer configuration (Fig. 2), a three-node configuration (Fig. 3) and a parking-lot configuration (Fig. 4). For all networks in the simulation, the ATM switches are assumed to have output port buffering, and each output port employs a simple first-in first-out queueing discipline for all cells destined to that port. At the source side, we set the initial cell rate (ICR) equal to the minimum required cell rate (MCR) of the connection. The parameter Nrm is set to 32 for all connections. The other host, link and switch parameters are shown in Table 1.

5. Simulation results

In this section we present the simulation results in terms of the allowed cell rate (ACR) of each connection and the maximum switch buffer queue length. Table 2 shows the minimum rate requirement, the peak cell rate and the target fair share allocation of each connection for all network configurations. The target fair share is used to judge how well the algorithm allocates cell rates to the connections.

Fig. 5. The ACR of all connections for the peer-to-peer configuration.



Fig. 7. The ACR of all connections for the three-node configuration.

5.1. Peer-to-peer configuration

Fig. 5 shows the ACR at the source for VC1, VC2 and VC3, respectively. Each connection starts with its minimum cell rate. Fig. 6 shows the maximum queue length at Switch 1. Table 3 shows the MCR, PCR and the allowed cell rate at the source.

5.2. Three-node configuration

Fig. 8. The maximum switch buffer queue length of Switches 1 and 2.

Table 4. The MCR, PCR and ACR (Mbps) of all connections for the three-node configuration

       MCR    PCR    ACR at source
VC1    0.5    7.5    1.51
VC2    1.5    9      4.51
VC3    2      4      4
VC4    1      10     8.52


For this network, there are four connections, and the output port links of Switch 1 (Link 1) and Switch 2 (Link 2) are potential bottleneck links. Fig. 7 shows the ACR for each connection. Fig. 8 shows the maximum queue length at Switches 1 and 2. Table 4 shows the MCR, PCR and the allowed cell rate at the source.

5.3. Parking-lot configuration

Fig. 4 shows a parking-lot configuration, where connections VC1 and VC2 start from the first switch (Switch 1) and go to the last switch (Switch 4), and connections VC3 and VC4 start from Switches 2 and 3, respectively, and terminate at the last switch. Fig. 9 shows the ACR for each connection. Fig. 10 shows the maximum queue length at Switches 1–3. Table 5 shows the MCR, PCR and the allowed cell rate at the source.

Fig. 9. The ACR of all connections for the parking-lot configuration.



Fig. 10. The maximum switch buffer queue length of Switches 1–3.

Table 5. The MCR, PCR and ACR (Mbps) of all connections for the parking-lot configuration

       MCR    PCR    ACR at source
VC1    1.5    3.5    3.5
VC2    1      2      2
VC3    1      5      3.01
VC4    0.5    5      1.51

5.4. MCR renegotiation

Each time a connection changes its minimum rate, the target_fair_share allocation of every connection in the network changes. However, it has to be checked that the sum of the new set of MCRs does not exceed the capacity of any link the connection traverses; if this condition is not satisfied, the newly negotiated minimum rate is rejected. Our algorithm is able to reiterate and converge to a new set of fair shares for all connections. For the three-node configuration, Fig. 11 shows the ACR of each connection before and after the MCR renegotiation; the change of MCR for VC1 from 1.5 to 0.5 Mbps occurs at time t = 300 ms. Table 6 shows the MCR, the new MCR, the old and new target_fair_share, and the allocated rate for all connections.

5.5. Introduction of new connection

When a new connection is accepted into the network, it brings with it an additional MCR demand. The algorithm has to take this requirement into account and readjust the fair shares of all connections, new and old. Fig. 12 shows that the algorithm copes with this situation smoothly. Table 7 shows the MCR, target_fair_share and allocated rate for all connections after the introduction of VC1 at time t = 300 ms; the simulation is for the parking-lot configuration.

5.6. Discussion on the results

The simulation results presented above show that our distributed feedback allocation algorithm is capable of allocating to each connection its fair share of the available bandwidth. It should be noted that the algorithm has to iterate, taking the various constraints into account, to reach a final fair share allocation for each connection. Each source ACR reaches a steady-state value, which is in fact the final fair share.

Fig. 11. The ACR of all connections for the three-node configuration. VC1 changed its MCR from 1.5 to 0.5 Mbps at t = 300 ms.

Table 6. The MCR, renegotiated MCR, and the old and new target/allocated shares (Mbps)

       MCR    New MCR    PCR    Target/allocated rate    New target/allocated rate
VC1    1.5    0.5        7.5    3/3.01                   1.5/1.5
VC2    1.5    1.5        9      3/3.01                   4.5/4.51
VC3    2      2          4      4/4                      4/4
VC4    1      1          10     7/7.01                   8.5/8.51



Fig. 12. The MCR, PCR and ACR of all connections for the parking-lot configuration. VC1 starts at t = 300 ms.

Table 7. Introduction of a new connection: the old and new target/allocated shares (Mbps)

       MCR    PCR    Target/allocated rate    Target/allocated rate (VC1 starts at t = 300 ms)
VC1    1.5    3.5    -                        3.5/3.5
VC2    1      2      2/2                      2/2
VC3    1      5      5/5                      3/3.01
VC4    0.5    5      3/3                      1.5/1.51

It should be noted that the final fair share of a connection may differ from its target_fair_share allocation if the connection is not able to take up its target_fair_share; in this case the unused bandwidth is distributed among the other connections.

The convergence rate of the algorithm can be adjusted depending on how fast and how smoothly one would like the ACR to converge to the final fair share of the connection; the averaging factor AV serves this purpose. Under normal conditions there is no network congestion and the network operates just below the target operating level Q0. For the ACR to converge exactly to the final fair share, the down pressure factor should be 1.0, which means the queue control parameter a should be 1.0. However, when congestion is present the network is pushed to operate above Q0, and the queue control function then comes into play to pull the network back towards this stable operating point. The same situation arises during the initial transient period, when switch buffers can build up significantly; our algorithm controls it by maintaining the switch buffer queue around the Q0 level. It should also be noted that in situations where rate-adaptive video and ABR traffic are mixed, we expect the congestion control feature to play a significant role in both rate allocation and congestion control.
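The effect of AV on convergence can be seen in a small, purely illustrative calculation (the starting rate, target and AV values below are assumed, and DPF is taken to be 1 so that the source ACR simply follows mcsa):

    # Toy illustration: RM-cell rounds needed for mcsa (and hence the ACR) to reach
    # the target fair share, for different averaging factors AV.
    def rounds_to_converge(target, start, av, eps=1e-3):
        mcsa = acr = start
        rounds = 0
        while abs(mcsa - target) >= eps:
            mid = (mcsa + acr) / 2.0
            if mcsa > target:
                mcsa -= (mid - target) * av
            else:
                mcsa += (target - mid) * av
            acr = mcsa          # with DPF = 1 the source jumps to ER = mcsa
            rounds += 1
        return rounds

    for av in (0.1, 0.25, 0.5):
        print(av, rounds_to_converge(target=3.33, start=1.0, av=av))

Larger AV values converge in fewer RM rounds but produce larger rate jumps at the video source; smaller values converge more slowly and more smoothly.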

6. Conclusions

An ABR-like control mechanism is desirable for the transport of rate-adaptive video traffic for a number of reasons. It allows the bursty nature of video traffic to be exploited for more

efficient bandwidth sharing. It preserves the simplicity of CBR connection admission control. It allows a single feedback control mechanism to serve a wide range of source-adaptive traffic.

This paper presents a robust explicit feedback algorithm for bandwidth sharing. It is demonstrated that the algorithm is capable of allocating bandwidth fairly among competing, adaptive video sources. The algorithm is able to reallocate smoothly when a new connection is admitted. It supports renegotiation of the MCR for connections. It can prevent congestion, especially during the initial transient periods.

Several issues, however, may warrant further exploration. Firstly, it would be desirable to examine source rate adaptation algorithms that allow the coder output to be buffered and adapted smoothly to the source ACR dictated by the connection's explicit rate. Secondly, the algorithm can work with other criteria of fairness simply by changing the formula used to calculate the target_fair_share allocation; in this paper we have concentrated mainly on the MCR-proportional fairness criterion, and other criteria such as PCR-proportional or generalised weight-based fairness may deserve further exploration. Thirdly, we have not examined cases where rate-adaptive video traffic and ABR traffic are mixed; in these cases the congestion control feature of our algorithm would play an important role. Finally, the algorithm proposed here relies on an explicit-feedback mechanism; it would be applicable to the Internet if end-to-end congestion control and explicit congestion notification, as promoted in Refs. [16,17], were adopted. In that case, FIBA may be employed as a scheduling algorithm per aggregate service of the differentiated services model.

References

[1] A.R. Reibman, A.W. Berger, Traffic description for VBR video teleconferencing over ATM networks, IEEE/ACM Transactions on Networking 3 (1995) 329–339.
[2] D. Heyman, A. Tabatabai, T.V. Lakshman, Statistical analysis and simulation study of VBR video teleconference traffic in ATM networks, IEEE Transactions on Circuits and Systems for Video Technology, March (1992) 49–59.



[3] V. Jacobson, Multimedia conferencing on the Internet, Conference Tutorial 4, ACM SIGCOMM, August 1994.
[4] H. Kanakia, P.P. Mishra, A. Reibman, An adaptive congestion control scheme for real-time packet video transport, IEEE/ACM Transactions on Networking 3 (1995) 671–682.
[5] Y. Omori, T. Suda, G. Lin, Feedback-based congestion control for VBR video in ATM networks, Proceedings of the Sixth IEEE Workshop on Packet Video, 1994.
[6] T.V. Lakshman, P. Mishra, K.K. Ramakrishnan, Transporting compressed video over ATM networks with explicit rate feedback control, Proceedings of IEEE INFOCOM'97, Japan, 1997, pp. 38–47.
[7] P.P. Mishra, Fair bandwidth sharing for video traffic sources using distributed feedback control, Proceedings of IEEE GLOBECOM'95, Singapore, 1995, pp. 1102–1108.
[8] B.J. Vickers, M. Lee, T. Suda, Feedback control mechanisms for real-time multipoint video services, IEEE Journal on Selected Areas in Communications 13 (3) (1997) 512–530.
[9] T.Y. Hou, S.S. Panwar, Zhi-Li Zhang, H. Tzeng, Ya-Qin Zhang, Network bandwidth sharing for transporting rate-adaptive packet video using feedback, Proceedings of IEEE GLOBECOM'98, 1998, pp. 1547–1555.

[10] D.B. Hoang, Q. Yu, Performance of the fair intelligent congestion control for TCP applications over ATM networks, Proceedings of the Second International Conference on ATM (ICATM'99), Colmar, France, June 1999, pp. 390–395.
[11] D.B. Hoang, Z. Wang, A fair intelligent congestion control for ATM, Basser Department of Computer Science, The University of Sydney, Technical Report No. TR-224, 1999.
[12] The ATM Forum Technical Committee, Traffic Management Specification Version 4.0, ATM Forum Contribution AF-TM 96-0056.00, April 1996.
[13] K.Y. Siu, H. Tzeng, Intelligent congestion control for ABR service in ATM networks, Computer Communication Review 3 (1995) 81–106.
[14] R. Jain, S. Kalyanaraman, Y. Goyal, S. Fahmy, R. Viswanathan, ERICA switch algorithm: a complete description, ATM Forum Contribution 96-1172, 1996.
[15] The NIST ATM/HFC Network Simulator. http://isdn.ncsl.nis.gov/ mis/hsnt/prd atm-sim.html.
[16] S. Floyd, TCP and explicit congestion notification, ACM Computer Communication Review 24 (5) (1994) 10–23.
[17] S. Floyd, K. Fall, Promoting the use of end-to-end congestion control in the Internet, IEEE/ACM Transactions on Networking 7 (4) (1999) 458–472.