Computer Networks and ISDN Systems 30 (1998) 1763–1774

Performance of TCP traffic and ABR congestion control mechanisms

Ilias Iliadis a,*, Daniel Orsatti b

a IBM Research Division, Zurich Research Laboratory, 8803 Rüschlikon, Switzerland
b IBM Networking Hardware Division, 06610 CER La Gaude, France

* Corresponding author. E-mail: [email protected].

Abstract

This paper presents a performance investigation of TCP traffic carried over ATM networks when the available bit rate (ABR) congestion control mechanism is exercised at the ATM layer. Two ABR schemes are investigated: the EFCI marking scheme and the relative rate marking scheme. The joint effect of the ABR mechanism with the early packet discard (EPD) mechanism is also considered. The focus of this study is to evaluate these schemes based on the effective throughput experienced by end users, taking the effect of retransmissions into account. Simulation results are obtained for a multihop switch configuration. Various aspects regarding the synergy of the TCP and ABR flow control mechanisms are presented. © 1998 Elsevier Science B.V. All rights reserved.

Keywords: TCP/IP congestion control; ATM networks; ABR service

1. Introduction

The transmission control protocol (TCP) is widely used as the transport layer protocol to support the majority of current data applications. Owing to the rapid deployment of ATM, the issue of transporting TCP/IP traffic over ATM networks is of great interest [13,15]. High-speed ATM networks are designed to support multimedia traffic consisting of a variety of traffic classes with different quality-of-service requirements. Two basic classes of traffic are considered in ATM networks: reserved traffic with a guaranteed quality of service in terms of cell transfer delay, and best-effort traffic with no explicit service guarantee. The CBR (constant bit rate) and VBR (variable bit rate) service classes fall into the former category, whereas the ABR (available bit rate) and UBR (unspecified bit rate) classes fall into the latter. These four service classes have been defined by the ATM Forum [1].

TCP traffic is connection-oriented, but no quality-of-service parameters are specified. It is delay insensitive and does not require that user traffic characteristics be specified; the available bandwidth is shared by all active users. This type of traffic therefore falls into the best-effort category. The bursty and unpredictable nature of best-effort traffic may result in congestion and subsequent data losses. The TCP protocol has the capability of detecting data losses and initiating appropriate retransmissions in order to ensure a reliable data transfer service.


The performance of TCP over UBR (plain ATM) has been studied in [13]. It was shown that the effective throughput may be significantly reduced owing to the fragmentation problem. This problem stems from the fact that isolated cell losses result in the retransmission of entire packets, which increases congestion and, consequently, causes throughput degradation. Furthermore, network bandwidth is wasted owing to the transmission of cells belonging to corrupted packets. These results demonstrate the necessity of providing an efficient control mechanism at the ATM layer in order to achieve acceptable performance. Two such mechanisms, called partial packet discard (PPD) and early packet discard (EPD), were proposed in [13]. They reduce the bandwidth that is potentially wasted when cells belonging to corrupted packets are transmitted. A comparative evaluation of TCP over EPD and TCP over feedback congestion control mechanisms employed at the ATM link layer has been presented in [4].

In the meantime, the ATM Forum standards organization has adopted the rate-based congestion control mechanism as the technique to be employed for congestion control of ABR traffic [1]. This scheme does not exercise flow control inside the network; network congestion information is propagated to the end user, where the appropriate control action is taken. Based on the various techniques proposed concerning the switch response to congestion and the source rate adjustment, three ABR schemes have been defined. The EFCI (explicit forward congestion indication) scheme and the RR (relative rate) scheme belong to the class of binary feedback schemes, in which resource management (RM) cells contain one bit of information used to indicate congestion. This information is propagated from the switches towards the source, which subsequently increases or decreases its rate accordingly. In contrast to the binary feedback schemes, the ER (explicit rate) scheme uses information conveyed from the switches to the source in the form of explicit rate values [8–10]. The binary feedback schemes are simpler to implement than the explicit rate scheme.

A performance investigation of TCP over ABR with an EPD enhancement was presented in [3,12]. It has been shown that good performance can be achieved when the ABR parameters and the EPD thresholds are set appropriately. The results in [3], obtained using the EFCI scheme, concentrate on the fairness issue, whereas the results in [12] were obtained using the ER scheme.

In this paper we consider the performance of TCP in conjunction with the two binary feedback implementations of the ABR congestion control mechanism: the EFCI marking scheme and the RR marking scheme. The joint effect of the ABR and EPD mechanisms is considered as well. The focus of this study is to determine the effective throughput experienced by end users, taking the effect of retransmissions into account. Simulation results are obtained for the multihop switch configuration described in Section 2. Data are exchanged between the transmitting and receiving end users by the TCP/IP protocol, the main characteristics of which are also presented in Section 2. The congestion control schemes applied at the ATM layer and considered in this paper are described in detail in Section 3. The performance of TCP over ATM is investigated by means of simulation, and the various schemes are compared in Section 4 based on the simulation results. These results demonstrate the synergy between the congestion control mechanisms employed at the ATM link layer and the TCP control mechanism employed at the transport layer. In addition, the effectiveness of the ABR congestion control mechanism in the absence of a higher-layer flow control mechanism is demonstrated.

2. TCP/IP simulation model

The TCP/IP simulation environment developed for the purpose of this study is depicted in Fig. 1. A simple network topology was chosen in order to study the dynamic behavior of the schemes under consideration. As shown in Fig. 1, ten TCP applications send data to the same receiver. The ten TCP connections first send data to switch 1 through ten separate and identical links. The transmission time of a cell at the link is taken as the time unit (tu), so that each link has a normalized capacity of 1 cell/tu (for example, for a link with a transmission capacity of 141 Mb/s, the corresponding time unit is equal to 3 µs). Fiber optic links are assumed to be deployed. Distances are converted to time units assuming the propagation speed of a fiber optic link to be 200,000 km/s. The distance between the ten senders and switch 1 is set equal to 1 tu (for a 3-µs time unit, the corresponding distance is 600 m).


Fig. 1. Simulation model.
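For illustration, the unit conventions above can be checked with a short Python sketch (the helper names are illustrative and not part of the simulator; a 53-byte cell and the quoted propagation speed are assumed):

CELL_BITS = 53 * 8            # one ATM cell: 53 bytes
PROPAGATION_M_PER_S = 2.0e8   # 200,000 km/s, as assumed for fiber optic links

def time_unit_seconds(link_rate_bps):
    # Cell transmission time, i.e. the time unit (tu) of the model.
    return CELL_BITS / link_rate_bps

def distance_to_tu(distance_m, link_rate_bps):
    # Propagation delay of a fiber span, expressed in time units.
    return (distance_m / PROPAGATION_M_PER_S) / time_unit_seconds(link_rate_bps)

print(time_unit_seconds(141e6))      # ~3.0e-6 s, i.e. about 3 microseconds
print(distance_to_tu(600.0, 141e6))  # 600 m corresponds to about 1 tu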

From this point on, data share the same resources. The data are forwarded to switch 2, located at a distance of d1 tu, through a single link whose bandwidth is r1 times that of an incoming link. They are then forwarded to switch 3 and subsequently to switch 4, and finally to the receiver and the corresponding applications. Assuming that the link bandwidths satisfy the inequalities 1 ≤ r4 ≤ r3 ≤ r2 ≤ r1 ≤ 10, the network configuration considered here contains four congestion points. The first one is in switch 1 (ten incoming links of capacity 1 cell/tu feed an outgoing link of capacity r1 cells/tu). The second one is in switch 2 (one incoming link of capacity r1 cells/tu feeds an outgoing link of capacity r2 cells/tu). The other two switches are congestion points in the same manner. All switches are assumed to be identical and to have an output buffer size of 2000 cells. The buffer space is separated into two parts: one part is used for reserved traffic (256 cells) and the remaining part for best-effort traffic. Consequently, the buffer space available to the TCP/IP traffic is 1744 cells. In the case of TCP operation over plain ATM, there is no congestion control at the ATM level; when the switch buffer is full, arriving ATM cells are discarded. Cells are transmitted on the outgoing links according to their priority, with reserved traffic having a higher priority than nonreserved traffic. Moreover, it is assumed that the active connections contributing to the best-effort traffic are served according to a round-robin scheme. ATM cells corresponding to the various connections are distinguished based on their VCI value. The round-robin discipline is known to provide an equal share and fairness among the active connections [2,11].
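The per-VC round-robin service can be sketched as follows; this is a minimal illustration with names of our own choosing, not the switch scheduler used in the study:

from collections import deque

class RoundRobinScheduler:
    # Serve one cell per active VC in cyclic order (illustrative sketch).
    def __init__(self):
        self.queues = {}      # VCI -> queued cells
        self.order = deque()  # cyclic service order of active VCIs

    def enqueue(self, vci, cell):
        if vci not in self.queues:
            self.queues[vci] = deque()
            self.order.append(vci)
        self.queues[vci].append(cell)

    def dequeue(self):
        # Return the next cell, visiting the active VCs in round-robin order.
        for _ in range(len(self.order)):
            vci = self.order.popleft()
            if self.queues[vci]:
                cell = self.queues[vci].popleft()
                self.order.append(vci)  # the VC stays in the rotation
                return cell
            del self.queues[vci]        # idle VCs leave the rotation
        return None                     # no cell is waiting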

Each sender forwards TCP segments (hereafter referred to as packets) using the TCP/IP protocol. The simulator incorporates Jacobson's slow-start congestion control and avoidance scheme [5] as well as enhancements such as the improved round-trip time estimate and Karn's algorithm for coping with the retransmission ambiguity problem [14]. The sender is allowed to transmit up to the minimum of its congestion window and the window advertised by the receiver. TCP's window-adjustment algorithm contains two phases that are separated by a threshold. The first phase, called the slow-start phase, begins when congestion is detected. When the retransmission timer expires, the corresponding packet is retransmitted, the threshold is set to half the congestion window value, and the congestion window is reduced to one packet. Each time an ACK is received, the congestion window is increased by one packet until it reaches the threshold. This constitutes an exponential increase, because the congestion window essentially doubles during each round-trip time period. When the congestion window reaches the threshold, the congestion-avoidance phase begins. Upon receipt of an ACK, the congestion window is incremented by the inverse of its current value, which corresponds to an increase of roughly one packet per round-trip time period. The congestion window is never allowed to exceed the receiver's advertised window. The receiver's advertised window is set to 64 kbytes and the threshold is initially set to 32 kbytes.
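The window-adjustment rules just described can be summarized in a short sketch (in units of packets rather than bytes, with illustrative names; the actual simulator includes further details such as the round-trip time estimation and Karn's algorithm):

class TcpWindow:
    # Slow start and congestion avoidance, in units of packets (sketch).
    def __init__(self, advertised_window=64.0):
        self.cwnd = 1.0                # congestion window
        self.ssthresh = 32.0           # threshold, initially half the window
        self.awnd = advertised_window  # receiver's advertised window

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0              # slow start: doubles per round trip
        else:
            self.cwnd += 1.0 / self.cwnd  # avoidance: ~1 packet per round trip
        self.cwnd = min(self.cwnd, self.awnd)  # never exceed the advertised window

    def on_timeout(self):
        self.ssthresh = self.cwnd / 2.0  # threshold set to half the window
        self.cwnd = 1.0                  # window reduced to one packet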


In addition to the time-out mechanism, the fast-retransmission procedure is used to detect packet losses: a packet is considered lost if four successive ACKs acknowledge the same data packet [14]. At the transport layer, a TCP/IP packet transmission occurs when the packet is passed from the network layer to the ATM layer at the sender. A unique VCI is assigned to each of the applications; ATM cells corresponding to the various applications are therefore distinguished based on their distinct VCI values. The packet (IP datagram) is fragmented into ATM cells, which are subsequently transmitted from the sender to switch 1. The sender's buffer occupancy at the ATM layer may increase if there is a difference between the rate of transmission and the rate of fragmentation. It is assumed that this buffer is sufficiently large to store all the ATM cells waiting to be transmitted; the number of such cells is bounded above by the TCP maximum receiver window size. At the receiver side, ATM cells are reassembled to form the original packet using the ATM Adaptation Layer 5 (AAL5). The packet is then passed to the network/transport layer as a TCP/IP packet. It is assumed that ACKs are sent from the receiver to the senders along the reverse path. Finally, each TCP connection is assumed to have an infinite amount of data to transmit.
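The fast-retransmission trigger amounts to a duplicate-ACK counter; a sketch of the rule quoted above (illustrative names, not the simulator's code):

class FastRetransmitDetector:
    # Declare a loss after four successive ACKs for the same data packet.
    DUP_ACK_LIMIT = 4

    def __init__(self):
        self.last_ack = None
        self.count = 0

    def on_ack(self, ack_seq):
        # Return True when the fast-retransmission procedure should fire.
        if ack_seq == self.last_ack:
            self.count += 1
        else:
            self.last_ack, self.count = ack_seq, 1
        return self.count == self.DUP_ACK_LIMIT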

3. ATM layer flow control

3.1. Early packet discard (EPD) mechanism

According to the EPD scheme, the switch discards all the cells of an incoming packet when the buffer content exceeds a global threshold (denoted by Eg). The cells of a packet are identified by their VCI value. The switch continues to drop cells from the same VC until it detects the ATM-layer user-to-user (AUU) parameter set in the ATM cell header, indicating the end of the packet. As long as the buffer content exceeds the threshold, the switch continues to discard cells of incoming packets from multiple connections. In addition to the global threshold, we consider a selective threshold (denoted by Es) for each connection. The selective thresholds prevent certain connections from monopolizing the buffer space at the expense of other connections.
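The discard decision taken upon arrival of the first cell of a packet can be sketched as follows (illustrative names; Eg and Es are the thresholds defined above):

def epd_accepts_packet(buffer_fill, vc_fill, Eg, Es):
    # Decide, on the first cell of a packet, whether to accept the packet.
    # buffer_fill: total cells queued in the switch buffer
    # vc_fill:     cells queued for this connection (VC)
    if buffer_fill > Eg:   # global threshold exceeded: discard the whole packet
        return False
    if vc_fill > Es:       # selective threshold exceeded for this VC
        return False
    return True            # accept; the decision holds for all cells of the
                           # packet, up to the cell carrying the AUU indication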

They therefore enhance fairness among the connections [3]. The selective thresholds should be adjusted dynamically based on the number of connections. A sufficiently large number of connections (say, more than eight) allows the thresholds to be set in a way that takes advantage of statistical multiplexing gains. The selective EPD thresholds, for example, could be set by considering the buffer size increased by a gain factor and divided by the number of connections. One could also consider a limited number of possible threshold values corresponding to a representative range of numbers of connections. This issue is elaborated in Section 4.1.

3.2. Available-bit-rate (ABR) mechanisms

ABR is a closed-loop, rate-based traffic control mechanism. It is basically an end-to-end flow control mechanism, but it also provides the option for intermediate switches or networks to segment the control loop. Each source end system (SES) periodically sends special control cells, called resource management (RM) cells, to the destination end system (DES), which, in turn, sends them back to the source as backward RM cells. The purpose of these cells is to convey information to the source about the state of congestion of the network. More specifically, RM cells contain a bit, called the congestion indication (CI) bit, that allows the source to increase or decrease its rate. Another bit, called the no increase (NI) bit, allows the source to keep its rate unchanged. Each source sends a forward RM cell after transmitting Nrm − 1 data cells. As mentioned above, the destinations turn the forward RM cells around and send them back to the sources as backward RM cells. When a source receives a backward RM cell with the CI bit set, it reduces its maximum allowed cell rate (ACR) multiplicatively as follows: ACR ← max(ACR × (1 − RDF), MCR), where RDF is the rate decrease factor negotiated at call setup. When the source receives a backward RM cell with both the CI and NI bits not set, it increases its cell transmission rate linearly as follows: ACR ← min(ACR + RIF × PCR, PCR), where RIF is the rate increase factor negotiated at setup. If, however, the NI bit is set, the source does not increase its rate. This behavior is referred to as a linear increase and exponential decrease of the source rate, and it facilitates convergence and stability in the network.


At connection establishment, a peak cell rate (PCR) and a minimum cell rate (MCR) for the connection are negotiated. The source is allowed to send data at the ACR, which never exceeds the PCR and never falls below the MCR. Upon establishing a connection, the ACR is set to an initial cell rate (ICR), in our case equal to the PCR. To make the ABR mechanism robust against network failures, the source also decreases its ACR if it has transmitted a certain number (denoted by CRM) of forward RM cells without having received a backward RM cell. The values of the source parameters are as follows:

PCR = 1 cell/tu,   ICR = PCR,
MCR = 0,           RIF = 1/16,
RDF = 1/16,        Nrm = 32.
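The source reaction to a backward RM cell, with the parameter values above, can be written down directly (an illustrative sketch, not the simulator's code):

PCR, MCR = 1.0, 0.0    # peak and minimum cell rates (cells/tu)
RIF = RDF = 1.0 / 16   # rate increase and decrease factors
ACR = PCR              # ICR = PCR in this study

def on_backward_rm(acr, ci, ni):
    # Apply the ABR source rules to the CI/NI bits of a backward RM cell.
    if ci:                                # congestion: multiplicative decrease
        return max(acr * (1.0 - RDF), MCR)
    if not ni:                            # no congestion and NI not set:
        return min(acr + RIF * PCR, PCR)  # additive (linear) increase
    return acr                            # NI set: hold the current rate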

We consider two methods for implementing congestion notification at the switches.

3.2.1. EFCI marking

The switch marks only the EFCI bit in the data cells to indicate congestion; it is the responsibility of the destination to convey this control information to the source. The switch declares a congestion state if the instantaneous queue length exceeds a global threshold, denoted by CHG. The state returns to normal when the queue length drops below a global threshold, denoted by CLG. When a data cell arrives at the destination, the DES stores the EFCI bit of the data cell. When a forward RM cell arrives at the destination, the DES writes the latest stored EFCI value into the CI bit of the RM cell and sends it back to the source as a backward RM cell to notify congestion. When the source receives this returned RM cell, it reduces its ACR as described above. In addition to the global thresholds, we consider a set of selective thresholds (denoted by CHS and CLS, respectively) for each connection.

3.2.2. Relative rate (RR) marking

The switch uses the backward RM cells to provide feedback on its state of congestion to the sources. This operation is based on the explicit backward congestion notification (EBCN) scheme.


The CI bit is marked according to the procedure described for EFCI marking. In addition, the switch declares a no-increase state if the instantaneous queue length exceeds a threshold, NHG. The state returns to normal when the queue length drops below a low threshold, NLG. In addition to the global thresholds, we consider a set of selective thresholds (denoted by NHS and NLS, respectively) for each connection.
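Both marking schemes rest on the same two-threshold rule applied to the instantaneous queue length; a sketch with illustrative names (the same pattern serves the global and selective thresholds, and both the congestion and no-increase states; the example values are those used later in Section 4.1):

class ThresholdState:
    # Set the state above the high threshold, clear it below the low one.
    def __init__(self, high, low):
        self.high, self.low = high, low
        self.active = False

    def update(self, queue_length):
        if queue_length > self.high:
            self.active = True       # e.g. declare congestion or no-increase
        elif queue_length < self.low:
            self.active = False      # the state returns to normal
        return self.active

congestion = ThresholdState(high=1232, low=1231)  # CHG/CLG of Section 4.1
no_increase = ThresholdState(high=976, low=975)   # NHG/NLG of Section 4.1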

4. Numerical results

4.1. TCP/IP congestion control

A simulation study of the performance of TCP/IP over ATM has been conducted, where the congestion control mechanism at the ATM layer is one of the mechanisms presented in Section 3, i.e., EPD, one of the two ABR mechanisms, or a combination of one of the two ABR mechanisms and EPD. The numerical results presented in this section correspond to the case r1 = 10, r2 = 4, r3 = 2, and r4 = 2. All the links considered here are assumed to be shared with higher-priority traffic. Each link alternates between availability periods and unavailability periods caused by the transmission of higher-priority traffic; cells corresponding to TCP packets can be transmitted only during the availability periods. It is assumed that the availability and unavailability periods are independent and identically distributed with a mean of 10 tu, which implies that the long-term bandwidth available to the TCP traffic at the bottleneck is 1 cell/tu. For the purposes of the present study, we have assumed that the periods are exponentially distributed. The quantity of interest is the effective throughput (as seen at the TCP/IP layer) achieved by each connection. It turns out that, because of the use of appropriately set selective thresholds, all mechanisms considered provide fairness among the connections. In our simulations, as well as in those presented in [3,12], fairness is assessed based on visual inspection of the throughput-versus-time diagrams. A comprehensive fairness study, however, should be based on generic fairness configuration (GFC) models [1], and the degree of fairness should be estimated based on metrics such as the fairness index [6,7].
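For reference, the fairness index of [6,7] for per-connection throughputs x1, ..., xn is (Σ xi)² / (n Σ xi²); a one-line sketch:

def fairness_index(throughputs):
    # Jain's fairness index: 1.0 for equal shares, 1/n in the worst case.
    n = len(throughputs)
    s = sum(throughputs)
    s2 = sum(x * x for x in throughputs)
    return s * s / (n * s2)

print(fairness_index([0.1] * 10))         # 1.0: perfectly fair
print(fairness_index([1.0] + [0.0] * 9))  # 0.1: one connection takes everything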


Owing to the symmetry of the network configuration, this implies that all the throughputs tend to the same value. The sum of the individual throughputs of all connections is bounded above by the maximum achievable throughput, rmax, given by the long-term bandwidth available at the bottleneck link between switch 4 and the receiver, i.e., rmax = 1. All throughput results are normalized to this maximum throughput. This study considers fixed-size TCP/IP packets. Three different packet sizes are considered: 512 bytes (a packet size often used in IP networks), 4352 bytes (the packet size used in FDDI networks), and 9180 bytes (the default for IP over ATM). Given that the ATM cell payload is 48 bytes (AAL5), the corresponding numbers of cells per packet are 11, 91, and 192, respectively.
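The cell counts quoted above follow from dividing the packet length by the 48-byte cell payload and rounding up; a quick check (a sketch that ignores any AAL5 trailer accounting, which the text does not spell out):

import math

def cells_per_packet(packet_bytes, cell_payload=48):
    # Number of 48-byte cell payloads needed to carry one packet.
    return math.ceil(packet_bytes / cell_payload)

for size in (512, 4352, 9180):
    print(size, cells_per_packet(size))  # -> 11, 91 and 192 cells, as quoted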

The various thresholds were set as follows:

Eg = 1488,     Es = 128,
CHG = 1232,    CLG = 1231,
CHS = 64,      CLS = 63,
NHG = 976,     NLG = 975,
NHS = 32,      NLS = 31.

In the case of RR marking, we have also considered a hysteresis case, where the EFCI low thresholds were set equal to the corresponding high no-increase thresholds, i.e., CLG = 976 and CLS = 32.

Table 1
IP packet length = 11 cells

Effective throughput     Short distances   Medium distances
Plain ATM                0.89              0.82
EPD                      0.92              0.93
EFCI                     0.88              0.76
EFCI + EPD               0.93              0.92
RR                       0.88              0.83
RR + EPD                 0.94              0.93
RR (hysteresis)          0.87              0.80
RR + EPD (hysteresis)    0.95              0.93

Table 2
IP packet length = 91 cells

Effective throughput     Short distances   Medium distances
Plain ATM                0.72              0.70
EPD                      0.87              0.85
EFCI                     0.77              0.73
EFCI + EPD               0.87              0.88
RR                       0.82              0.77
RR + EPD                 0.90              0.89
RR (hysteresis)          0.83              0.77
RR + EPD (hysteresis)    0.87              0.85

Table 3
IP packet length = 192 cells

Effective throughput     Short distances   Medium distances
Plain ATM                0.66              0.66
EPD                      0.77              0.81
EFCI                     0.70              0.64
EFCI + EPD               0.78              0.83
RR                       0.80              0.73
RR + EPD                 0.80              0.84
RR (hysteresis)          0.83              0.74
RR + EPD (hysteresis)    0.80              0.85

Two cases were studied: a short-distance scenario (d1 = d2 = d3 = 1 tu and d4 = 1 tu) and a medium-distance scenario (d1 = d2 = d3 = 60 tu and d4 = 1 tu). We have also studied long-distance scenarios in which the distances are of the order of 1000 tu; there, the results reveal that the introduction of the ABR schemes does not improve performance, owing to the lack of responsiveness caused by the long distances.

The results are listed in Tables 1–3. The table entries referred to as plain ATM and EPD correspond to the UBR service class; the remaining entries correspond to the ABR service class. The simulations were carried out for an extremely large number of events, such that the 95% confidence intervals were very small. As expected, performance deteriorates as the packet length increases. The results demonstrate that the use of congestion control at the ATM layer generally improves performance over that of plain ATM. It is also evident that the performance of EPD is practically insensitive to the distance in the range considered. In contrast, the performance of the EFCI and RR schemes deteriorates as the distance increases. Note also that, when the ABR mechanism is used in place of EPD, performance in general deteriorates. This indicates that the synergy between the TCP and ABR flow control mechanisms is not as pronounced as that between TCP and EPD. The simultaneous employment of both flow control mechanisms becomes counterproductive as far as throughput is concerned; it results, however, in a small increase in the degree of fairness among the connections. As mentioned in [12], the fairness improvement is more pronounced in network configurations with small switch buffers, for which the EPD mechanism alone is not sufficient to provide a satisfactory degree of fairness.

Comparing the performance of EFCI and RR, both mechanisms yield roughly the same performance for short packets. For medium and large packet sizes, however, it is clear that the RR mechanism provides a substantial improvement over the EFCI mechanism; this also holds in the range of short distances. When these mechanisms are considered in conjunction with EPD, their relative difference becomes minimal. On the other hand, if the threshold is not selected wisely, the introduction of EPD may result in a performance degradation. This is shown in the case of RR and RR + EPD (hysteresis) for the 192-cell packet size and short distances: the employment of EPD causes a throughput reduction from 0.83 to 0.80, which is explained as follows. In case of congestion, the EPD discard should take place only after the ABR mechanism has had a chance to decrease the source rates adequately. This requires a sufficient difference between the ABR and EPD threshold settings; apparently, in this example the difference was not sufficient. Additional simulations, however, have shown that if the selective EPD threshold is set to 160 cells, there is no throughput reduction. Furthermore, the throughput increases to 0.89 if the selective EPD threshold is set to 192 cells. Increasing the selective EPD threshold, however, decreases the degree of fairness. Similar behavior was observed in the case of EFCI marking [3]. The above results also demonstrate that the hysteresis and no-hysteresis cases yield roughly the same performance.

The EPD and ABR thresholds can be selected appropriately based on the packet length and the number of connections. In the case of 11-cell packets, the maximum buffer occupancy for ten connections is given by 10(128 + 11) = 1390, which means that the global EPD threshold of 1488 never comes into play. This suggests a selective threshold increase, considered in terms of another threshold variant given by

Eg = 1488,     Es = 256,
CHG = 1232,    CLG = 1231,
CHS = 192,     CLS = 191,
NHG = 976,     NLG = 975,
NHS = 96,      NLS = 95.

As in the previous study, in the case of RR marking we have also considered a hysteresis case, in which the EFCI low thresholds were set equal to the corresponding high no-increase thresholds, i.e., CLG = 976 and CLS = 96. Note that, in general, the selective thresholds should be adjusted according to the number of connections. A sufficiently large number of connections (say, more than eight) allows the thresholds to be set in a way that takes advantage of statistical multiplexing gains.


The selective EPD thresholds, for example, could be set by considering the buffer size increased by a gain factor of 1.5 and divided by the number of connections; in our example, this corresponds to a selective EPD threshold of about 256 cells. One could also consider a limited number of possible threshold values corresponding to a representative range of numbers of connections. In this case, for example, one could set a selective threshold of 256 cells for between 8 and 16 connections, a threshold of 128 for between 16 and 32 connections, a threshold of 64 for between 32 and 64 connections, and a threshold of 32 for more than 64 connections.
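The two setting strategies just described can be expressed directly (illustrative function names; the constants are those quoted in the text, and the range boundaries reflect our reading of it):

def selective_epd_threshold(buffer_cells, n_connections, gain=1.5):
    # Buffer size scaled by a statistical multiplexing gain, split per connection.
    return gain * buffer_cells / n_connections

def banded_threshold(n_connections):
    # Fixed values for representative ranges of the number of connections.
    if n_connections <= 16:
        return 256
    if n_connections <= 32:
        return 128
    if n_connections <= 64:
        return 64
    return 32

print(selective_epd_threshold(1744, 10))  # ~262, i.e. about 256 cells
print(banded_threshold(10))               # 256, as in the text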

The results for the new thresholds are given in Tables 4–6. Comparing them with those corresponding to the former threshold values, we reach the following conclusions. For short packet sizes the performance remains practically the same, whereas for large packet sizes it varies. Increasing the selective thresholds causes a performance deterioration of the EFCI and RR schemes.

Table 4
IP packet length = 11 cells (new thresholds)

Effective throughput     Short distances   Medium distances
Plain ATM                0.89              0.82
EPD                      0.96              0.91
EFCI                     0.85              0.84
EFCI + EPD               0.96              0.91
RR                       0.88              0.82
RR + EPD                 0.95              0.91
RR (hysteresis)          0.89              0.83
RR + EPD (hysteresis)    0.95              0.92

Table 5
IP packet length = 91 cells (new thresholds)

Effective throughput     Short distances   Medium distances
Plain ATM                0.72              0.70
EPD                      0.91              0.90
EFCI                     0.72              0.75
EFCI + EPD               0.91              0.91
RR                       0.83              0.75
RR + EPD                 0.91              0.89
RR (hysteresis)          0.84              0.78
RR + EPD (hysteresis)    0.91              0.91

Table 6
IP packet length = 192 cells (new thresholds)

Effective throughput     Short distances   Medium distances
Plain ATM                0.66              0.66
EPD                      0.88              0.87
EFCI                     0.67              0.68
EFCI + EPD               0.86              0.87
RR                       0.71              0.69
RR + EPD                 0.90              0.86
RR (hysteresis)          0.73              0.70
RR + EPD (hysteresis)    0.87              0.86

Clearly, when the system becomes congested and the thresholds are raised, it takes longer for the ABR mechanism to be activated, which, in turn, results in inefficient operation. In contrast, the performance of the ABR schemes combined with EPD improves under the new threshold setting. This also holds for the case of EPD alone. Once again, this indicates the significance of the EPD mechanism and the importance of its threshold settings.

The impact of reduced buffer space has also been investigated by examining the case of a buffer size of 1044 cells. We have assumed the following threshold values:

Eg = 788,      Es = 256,
CHG = 532,     CLG = 531,
CHS = 192,     CLS = 191,
NHG = 276,     NLG = 275,
NHS = 96,      NLS = 95.

Table 7
IP packet length = 11 cells (reduced buffer)

Effective throughput     Short distances   Medium distances
Plain ATM                0.84              0.78
EPD                      0.94              0.89
EFCI                     0.82              0.74
EFCI + EPD               0.89              0.88
RR                       0.89              0.78
RR + EPD                 0.94              0.90
RR (hysteresis)          0.78              0.72
RR + EPD (hysteresis)    0.92              0.88

Table 8
IP packet length = 91 cells (reduced buffer)

Effective throughput     Short distances   Medium distances
Plain ATM                0.66              0.67
EPD                      0.89              0.87
EFCI                     0.71              0.62
EFCI + EPD               0.89              0.89
RR                       0.77              0.65
RR + EPD                 0.91              0.88
RR (hysteresis)          0.72              0.63
RR + EPD (hysteresis)    0.91              0.87

Table 9
IP packet length = 192 cells (reduced buffer)

Effective throughput     Short distances   Medium distances
Plain ATM                0.58              0.56
EPD                      0.86              0.85
EFCI                     0.56              0.56
EFCI + EPD               0.84              0.84
RR                       0.71              0.62
RR + EPD                 0.87              0.84
RR (hysteresis)          0.70              0.61
RR + EPD (hysteresis)    0.85              0.84

As in the previous cases, we have also considered hysteresis for the case of RR marking, in which the EFCI low thresholds were set equal to the corresponding high no-increase thresholds, i.e., CLG = 276 and CLS = 96. The results are listed in Tables 7–9. Note that reducing the buffer size causes a significant reduction of the throughput associated with the plain ATM, EFCI, and RR schemes. In contrast, the throughput reduction corresponding to the schemes involving the EPD mechanism is negligible. This observation reinforces the significance of the EPD mechanism.

4.2. No TCP/IP congestion control

In this section we consider the impact of the ABR mechanism in the absence of the TCP/IP congestion control mechanism. A packet (IP datagram) is fragmented into ATM cells, which are subsequently transmitted through the switches.

Table 10
IP packet length = 11 cells

Effective throughput     Short distances   Medium distances
Plain ATM                0.02              0.02
EPD                      1.00              1.00
EFCI                     0.68              0.62
EFCI + EPD               0.72              0.71
RR                       0.95              0.85
RR + EPD                 0.97              0.96
RR (hysteresis)          0.92              0.84
RR + EPD (hysteresis)    0.95              0.95

Table 11
IP packet length = 91 cells

Effective throughput     Short distances   Medium distances
Plain ATM                0.00              0.00
EPD                      0.82              0.77
EFCI                     0.47              0.43
EFCI + EPD               0.71              0.71
RR                       0.90              0.72
RR + EPD                 0.96              0.95
RR (hysteresis)          0.84              0.70
RR + EPD (hysteresis)    0.96              0.95

Table 12
IP packet length = 192 cells

Effective throughput     Short distances   Medium distances
Plain ATM                0.00              0.00
EPD                      0.24              0.24
EFCI                     0.36              0.36
EFCI + EPD               0.46              0.46
RR                       0.89              0.61
RR + EPD                 0.94              0.92
RR (hysteresis)          0.85              0.57
RR + EPD (hysteresis)    0.94              0.93

When a switch buffer is full, arriving ATM cells are discarded. Cells may also be discarded owing to the employment of the EPD mechanism. At the receiver side, ATM cells are reassembled to form the original packet using the ATM Adaptation Layer 5 (AAL5). The packet is then passed to the network/transport layer as an IP packet. Note that, owing to the absence of the TCP mechanism, there are no packet retransmissions. The measure of interest is the effective throughput measured in terms of correctly reassembled packets. Note that whole packets that were lost in the network are not counted.
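This goodput measure can be sketched as a reassembly counter that credits only packets all of whose cells arrived (illustrative names; not the simulator's code):

def count_reassembled(packets, cells_per_packet):
    # packets: one list of booleans per packet; entry j is True if cell j
    # of that packet reached the receiver. A partial packet is useless.
    return sum(1 for cells in packets
               if len(cells) == cells_per_packet and all(cells))

# Example: two 3-cell packets, the second of which lost a cell in the network.
print(count_reassembled([[True, True, True], [True, False, True]], 3))  # -> 1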

The results are listed in Tables 10–12. They demonstrate that, in the case of plain ATM, the system is not capable of delivering correctly reassembled packets. This is because the switch buffers are essentially full, so that it is likely that some of the cells of a newly arriving packet will be lost. The necessity of employing some sort of control mechanism at the ATM layer is obvious.

Let us consider next the case of employing the EPD mechanism. The results for short packets are impressive, indicating that the system achieves its maximum possible throughput regardless of distance. The reason is that, by employing selective thresholds, incoming cells of packets already accepted into the buffer are never lost. In the case of an 11-cell packet size, the maximum buffer occupancy for ten connections is given by 10(128 + 11) = 1390, which is less than the total buffer size of 1744 cells. For longer packets, however, this is no longer the case, and the effective throughput deteriorates as the packet size increases; in the case of a 192-cell packet size, the effective throughput drops to 0.24. The employment of the ABR mechanism, however, results in a performance improvement. Comparing EFCI and RR, we notice that RR provides better performance than EFCI owing to its superior responsiveness; the results demonstrate that this holds in general, regardless of the packet size and the distance. Finally, we note that the combination of RR and EPD provides very high throughputs, close to the absolute maximum.

In conclusion, the introduction of the ABR service in addition to EPD results in a significant performance improvement. However, in the presence of the higher-level TCP/IP congestion control mechanism, performance may be adversely affected or improve only slightly, as discussed in Section 4.1. The combination of the TCP/IP and EPD mechanisms is powerful enough that the additional employment of the ABR mechanism does not produce a significant performance improvement. It helps, however, in ensuring a good degree of fairness among the various connections, in particular when the switch buffers are small [12].

5. Conclusions

This paper has investigated the performance of TCP traffic carried over ATM networks when either the ABR congestion control mechanism or the EPD control mechanism, or a combination thereof, is employed at the ATM layer. Our simulation results have demonstrated that the performance depends on the threshold and parameter values selected. The performance of the two binary ABR feedback schemes has been investigated; the RR scheme was found to outperform the EFCI scheme because of its faster responsiveness. It has been shown that the combination of TCP and EPD is quite effective and that the introduction of ABR achieves only a slight additional improvement. The benefits of employing the ABR mechanism have been clearly demonstrated in the case where no higher-layer congestion control mechanism, such as TCP, is employed.

This first attempt to study the coexistence of the binary ABR congestion control schemes with the TCP control mechanism was based on a simple network configuration. It would be useful to explore further the implications of the presence of constant bit rate and variable bit rate isochronous traffic. It would also be interesting to study more complex network configurations.

References

[1] The ATM Forum, Traffic Management Specification, Draft 95-0013R10, Worldwide Headquarters, 2570 West El Camino Real, Suite 304, Mountain View, CA 94040-1313, March 1996.
[2] E.L. Hahne, Round-robin scheduling for max-min fairness in data networks, IEEE J. Select. Areas Commun. 9 (7) (1991) 1024–1039.


[3] G. Hasegawa, H. Ohsaki, M. Murata, H. Miyahara, Performance of TCP over ABR service class, Proc. GLOBECOM '96, vol. 3, IEEE, London, 1996, pp. 1935–1941.
[4] I. Iliadis, Performance of TCP traffic and ATM feedback congestion control mechanisms, Proc. GLOBECOM '96, vol. 3, IEEE, London, 1996, pp. 1930–1934.
[5] V. Jacobson, Congestion avoidance and control, Proc. SIGCOMM '88, ACM, 1988, pp. 314–329.
[6] R. Jain, D. Chiu, W. Hawe, A quantitative measure of fairness and discrimination for resource allocation in shared computer systems, DEC Research Report TR-301, September 1984.
[7] R. Jain, Fairness: how to measure quantitatively?, ATM Forum Contribution 94-0881, September 1994.
[8] R. Jain, S. Kalyanaraman, R. Goyal, S. Fahmy, R. Viswanathan, ERICA switch algorithm: a complete description, ATM Forum Contribution 96-1172, August 1996.
[9] S. Kalyanaraman, R. Jain, S. Fahmy, R. Goyal, F. Lu, S. Srinidhi, Performance of TCP/IP over ABR service on ATM networks, Proc. GLOBECOM '96, vol. 1, IEEE, London, 1996, pp. 468–475.
[10] S. Kalyanaraman, Traffic management for the available bit rate (ABR) service in asynchronous transfer mode (ATM) networks, Ph.D. Dissertation, The Ohio State University, 1997.
[11] M.G.H. Katevenis, Fast switching and fair control of congested flow in broadband networks, IEEE J. Select. Areas Commun. 5 (8) (1987) 1315–1326.
[12] H. Li, K.Y. Siu, H.Y. Tzeng, C. Ikeda, A simulation study of TCP performance in ATM networks with ABR and UBR services, Proc. INFOCOM '96, vol. 3, IEEE, San Francisco, 1996, pp. 1269–1276.
[13] A. Romanow, S. Floyd, Dynamics of TCP traffic over ATM networks, IEEE J. Select. Areas Commun. 13 (4) (1995) 633–641.
[14] W.R. Stevens, TCP/IP Illustrated: The Protocols, vol. 1, Addison-Wesley, Reading, MA, 1994.
[15] C.B. Tipper, J.N. Daigle, ATM cell delay and loss for best-effort TCP in the presence of isochronous traffic, IEEE J. Select. Areas Commun. 13 (8) (1995) 1457–1464.

Ilias Iliadis received a B.S. degree in Electrical Engineering in 1983 from the National Technical University of Athens, Greece, an M.S. degree in 1984 from Columbia University, New York, as a Fulbright Scholar, and a Ph.D. degree in Electrical Engineering in 1988, also from Columbia University. From 1986 to 1988, he was affiliated with the IBM Thomas J. Watson Research Center in Yorktown Heights, NY, as a work-study student. In 1988, he joined the IBM Zurich Research Laboratory as a member of the switching systems group working on broadband switching. He was responsible for the performance evaluation of IBM's PRIZMA switch chip. Currently, he works in the field of ATM-based customer premises networks. His research interests include the analysis of distributed systems, performance evaluation of computer communication networks, switching architectures, development of protocols and congestion control schemes, and optimization and network design algorithms. Ilias Iliadis is a member of Sigma Xi, the IEEE, and the Technical Chamber of Greece.

Daniel Orsatti graduated from the École Supérieure d'Électricité, France, in 1983. He joined the IBM CER, Telecommunication Development Laboratory in La Gaude, France, in 1985. From 1985 to 1991, he was in the Communication Controllers Development Department, where he worked on the development of switching fabric and communications ASICs. Since 1992 he has been in the ATM Campus Design and Development Department. His interests and activities include ATM switch design, traffic management schemes, and performance analysis. He has participated in ETSI, ITU and ATM Forum standards bodies on traffic management issues.