Computer Communications 25 (2002) 1765–1773 www.elsevier.com/locate/comcom
Performance comparison between TCP Reno and TCP Vegas

Yuan-Cheng Lai*, Chang-Li Yao

Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan, ROC

Received 21 December 2000; revised 7 January 2002; accepted 28 January 2002

* Corresponding author. E-mail addresses: [email protected] (Y.C. Lai), [email protected] (C.L. Yao).
Abstract

This paper compares the performance, in terms of throughput and stability, of TCP Reno and TCP Vegas in network environments with homogeneous and heterogeneous TCP versions. Simulation results indicate that while TCP Vegas obtains better throughput and stability than TCP Reno in homogeneous cases, it performs poorly in heterogeneous cases, which accounts for the reluctance among users to adopt it. We therefore propose a simple approach, parameter adjustment for Vegas, to solve this problem. This approach enables Vegas to achieve its fair share of bandwidth, even gaining an advantage over Reno, and thus encourages users to switch their TCP from Reno to Vegas. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: Reno; Vegas; Congestion control
1. Introduction

With the development of the Internet, TCP has become the most widespread transport protocol, and new versions of TCP have been continuously proposed to improve transmission performance. The first version of TCP, standardized in RFC 793, defined the basic structure of TCP, i.e. the window-based flow control scheme and a coarse-grain timeout timer. The second version, TCP Tahoe, added the congestion avoidance scheme and fast retransmission proposed by Van Jacobson [1]. The third version, TCP Reno, extended the congestion control scheme by including the fast recovery scheme [2]. Today, TCP Reno is the most popular version, and most TCP congestion controls are based on this approach. Such a window-based adjustment approach is used because it is simple to implement. Despite being the most widely used version, TCP Reno causes network congestion and unstable bandwidth usage because it estimates the available bandwidth by detecting packet loss. Brakmo and Peterson [3] proposed a congestion control algorithm, TCP Vegas, by modifying the congestion avoidance scheme of TCP Reno. According to their results, TCP Vegas achieved a 37–71% higher throughput than TCP Reno in a homogeneous environment. In addition to a higher throughput,
TCP Vegas provides a fairer and more stable bandwidth share than TCP Reno. Many studies have shown that a TCP Reno connection with a longer propagation delay obtains less bandwidth than a connection with a shorter delay [4,5]. This unfairness is markedly reduced when TCP Vegas is used. The merits of TCP Vegas would appear to suggest that it should soon replace TCP Reno. However, an unfairness problem occurs when TCP Reno and Vegas connections coexist [4,5]. TCP Reno is an aggressive control scheme in which each connection captures more bandwidth until transmitted packets are lost. Meanwhile, TCP Vegas is a conservative scheme in which each connection obtains only a proper share of bandwidth. Thus, TCP Reno connections take bandwidth from TCP Vegas connections when they coexist. Although these studies [4,5] show that TCP Vegas suffers from this unfairness when competing with TCP Reno, a few issues, such as the effects of the buffer size, the number of traveled hops, and the number of connections on this unfairness, have not been discussed. Also, the other important performance metric, stability, has not been studied thoroughly. To further understand the interaction between the two versions, this paper compares their performance in various environments through simulation. The simulation results provide further insight into the merits and limitations of both TCP versions. We also propose an adjustment of Vegas's parameters that lets it be treated more fairly in a heterogeneous environment, so that users will be willing to switch their TCP from Reno to Vegas.
Fig. 1. Congestion control diagram of TCP Reno.
The rest of this paper is organized as follows. Section 2 describes the Reno and Vegas schemes. Section 3 presents the simulation environment, including the network topology and the TCP parameters. Section 4 summarizes the simulation results and discusses their implications; a proposal to improve Vegas is also presented there. Finally, Section 5 draws conclusions.
2. Description of Reno and Vegas

TCP typically uses acknowledgements to estimate the available bandwidth. Reno and Vegas differ mainly in how they perform this estimation. Reno treats packet loss as an indicator of network congestion: the source knows that it is overusing the bandwidth when transmitted packets are lost. Vegas calculates the expected and actual rates based on the round-trip time of each packet and controls the amount of transmitted packets using the difference between the expected and actual rates. This section describes the control scheme of Reno [6] and then presents TCP Vegas [3].

2.1. TCP Reno

Reno uses a congestion window (cwnd) to control the amount of data transmitted in one round-trip time (RTT) and a maximum window (mwnd) to limit the maximum value of cwnd. The control scheme of Reno can be divided into five parts, which are described below. Fig. 1 schematically
depicts the TCP Reno scheme with these parts.

1. Slow-start. When a connection starts or a timeout occurs, the slow-start state begins. The initial value of cwnd is set to one packet at the beginning of this state. The sender increases cwnd exponentially by adding one packet each time it receives an ACK. Slow-start controls the window size until cwnd reaches a preset threshold, the slow-start threshold (ssthresh). When cwnd reaches ssthresh, the 'congestion avoidance' state begins.

2. Congestion avoidance. Since the window size in the slow-start state expands exponentially, packets sent at this increasing rate would quickly lead to network congestion. To avoid this, the 'congestion avoidance' state begins when cwnd exceeds ssthresh. In this state, cwnd is increased by 1/cwnd packets for each ACK received, so that the window size grows linearly, by roughly one packet per RTT.

3. Fast retransmission. A duplicate ACK is generated when the receiver gets an out-of-order packet. The sender treats it as a signal of a packet loss or a packet delay. If three or more duplicate ACKs are received in a row, a packet loss is likely, and the sender retransmits what appears to be the missing packet without waiting for the coarse-grain timer to expire.

4. Fast recovery. When fast retransmission is performed, ssthresh is set to half of cwnd, and cwnd is then set to ssthresh plus three packet sizes. Cwnd is increased by one packet for each further duplicate ACK received. When the ACK of the retransmitted packet arrives, cwnd is set to ssthresh and the sender re-enters congestion avoidance. Restated, cwnd is reset to half of its old value after fast recovery.
Table 1
Parameters and their default values set in our simulations

Parameter            Value
Timer granularity    500 ms
ssthresh             64 KB
cwnd                 1 KB
α                    1
β                    3
baseRTT              0
5. Timeout retransmission. For each packet sent, the sender maintains a corresponding timer, which is used to detect a timeout when no ACK for the packet is received. If a timeout occurs, the sender resets cwnd to one packet and restarts slow-start. The default granularity of the clock used for the round-trip ticks is 500 ms, i.e. the sender checks for a timeout every 500 ms.

2.2. TCP Vegas

Vegas uses the measured RTT to calculate more precisely the amount of data the sender can transmit without causing packet losses. In Vegas, the sender must record the RTT and the sending time of each packet. The minimum round-trip time, baseRTT, must also be kept. Three modifications based on the measured RTT are proposed [3].

(1) New congestion avoidance. When receiving an ACK, the sender calculates the difference between the expected and the actual throughputs as follows:

diff = (expected − actual) × baseRTT,
expected = cwnd / baseRTT,
actual = cwnd / (average measured RTT).

The expected throughput represents the bandwidth available to this connection in the absence of network congestion, and the actual throughput represents the bandwidth currently used by the connection. Vegas defines two thresholds (α, β) as a tolerance that allows the source to control the difference between the expected and actual throughputs in one RTT. Cwnd is increased by one packet if diff < α and decreased by one packet if diff > β. That is,

cwnd = cwnd + 1,   if diff < α,
cwnd = cwnd − 1,   if diff > β,          (1)
cwnd = cwnd,       otherwise.

(2) Earlier packet loss detection. When receiving a duplicate ACK, the sender checks whether the difference between the current time and the sending time of the relevant packet, plus baseRTT, is greater than the timeout value. If it is, Vegas retransmits the packet without waiting for three duplicate ACKs. This modification averts a situation in which the sender never receives three duplicate ACKs and therefore must rely on the coarse-grain timeout. To address the multiple-packet-loss problem, Vegas pays special attention to the first two partial ACKs received after a retransmission.
The sender determines whether multiple packets have been lost by checking the timeout of unacknowledged packets. If any timeout occurs, the sender immediately retransmits the packet without waiting for any duplicate ACK. Also, to avoid over-reducing the window size due to a loss that occurred at the previous window size, Vegas compares the timestamp of the retransmitted packet with the timestamp of the last window decrease. If the retransmitted packet was sent before the last decrease, Vegas does not decrease cwnd on receiving duplicate ACKs for this packet, because the loss occurred under the previous window size. Such a mechanism for avoiding unnecessary reductions of cwnd is important for Vegas because of its quick response to packet loss.

(3) Modified slow-start mechanism. To detect and avoid congestion during slow-start, Vegas doubles its window size only every other RTT, rather than every RTT as Reno does.
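To make the two control laws concrete, the following Python sketch (our own illustration; the function names and the sample RTT values are hypothetical and taken from neither implementation) contrasts Reno's loss-driven window adjustment with the Vegas update of Eq. (1).

# Illustrative sketch of the per-ACK window updates described above.
# Real TCP stacks differ in many details; this only mirrors Section 2.

def reno_on_ack(cwnd, ssthresh):
    """Reno: exponential growth in slow-start, +1/cwnd per ACK afterwards."""
    if cwnd < ssthresh:
        return cwnd + 1.0          # slow-start: +1 packet per ACK
    return cwnd + 1.0 / cwnd       # congestion avoidance: ~+1 packet per RTT

def reno_on_fast_retransmit(cwnd):
    """Fast recovery outcome: ssthresh = cwnd/2 and cwnd resumes from there
    (the temporary window inflation during recovery is omitted here)."""
    ssthresh = cwnd / 2.0
    return ssthresh, ssthresh      # (new cwnd, new ssthresh)

def vegas_on_ack(cwnd, base_rtt, avg_rtt, alpha=1.0, beta=3.0):
    """Vegas new congestion avoidance, Eq. (1)."""
    expected = cwnd / base_rtt                 # rate with no queueing
    actual = cwnd / avg_rtt                    # rate actually achieved
    diff = (expected - actual) * base_rtt      # ~ extra packets queued
    if diff < alpha:
        return cwnd + 1.0                      # under-using its share: grow
    if diff > beta:
        return cwnd - 1.0                      # queueing too much: shrink
    return cwnd                                # within tolerance: hold

# Example (hypothetical numbers): base_rtt = 100 ms, measured avg_rtt = 120 ms,
# cwnd = 20 packets gives diff = (200 - 166.7) * 0.1 = 3.3 > beta, so Vegas
# backs off by one packet, whereas Reno would keep growing until a loss occurs.
print(vegas_on_ack(20.0, 0.100, 0.120))        # 19.0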
3. Simulation environments

The network simulator (ns) [7], developed by the Lawrence Berkeley Laboratory, is used to run our simulations. This simulator is often used in TCP-related studies [4,5]. Table 1 lists the TCP control parameters used in our simulations. As shown in Fig. 2, a four-router configuration is used as the network topology, where S, D, and R denote 'Source', 'Destination', and 'Router', respectively. TCP connections are classified into five groups (a, b, c, d, e) based on the routers they travel through. Restated, these five groups of TCP connections are transmitted from Sa to Da, Sb to Db, Sc to Dc, Sd to Dd, and Se to De. Thus, groups a and b travel through two routers, groups c and d travel through three routers, and group e travels through four routers. The link between two neighboring routers has a delay of 1 ms and a capacity of 1.5 Mbps. Each packet has a fixed length of 1000 bytes. The buffer size is set to 50 packets, because the simulation outcome is the same when a larger buffer size is used. Some groups have an additional link delay between the routers and their sources or destinations to balance the propagation delay across all groups, thus eliminating the unfairness caused by different propagation delays.

This study measures the throughput of each connection as the number of successful ACKs, excluding duplicate ACKs, received by the sender for its FTP transfer. Two performance metrics are investigated: the mean throughput, MT, obtained by averaging the throughput of all connections in the same group, and the instability degree, IS, defined as

IS = (Max − Min) / Mean × 100 (%),

where Max, Min, and Mean are the maximum, minimum, and mean throughputs, respectively, of all connections in the same group. The smaller IS is, the more stable the group is.
Fig. 2. Network topology.
Restated, the throughput that a connection obtains is more predictable. The length of each simulation is 50 s, which is long enough for two reasons:

1. The network status is stable by this time. Simulation results do not differ even if the length of the simulation is extended.

2. This period is long enough to observe the performance under a heavily loaded network.
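As a concrete reading of these two metrics, the short Python sketch below (our own; the throughput lists are made-up examples, not simulation data) computes MT and IS for one group.

# MT and IS as defined above, for the per-connection throughputs of one group.

def mean_throughput(throughputs):
    """MT: average throughput over all connections in the group."""
    return sum(throughputs) / len(throughputs)

def instability(throughputs):
    """IS = (Max - Min) / Mean * 100 (%); smaller means more stable."""
    return (max(throughputs) - min(throughputs)) / mean_throughput(throughputs) * 100.0

print(instability([1873, 1873, 1873, 1873, 1873]))   # 0.0: perfectly even share
print(instability([1700, 1750, 1810, 1900, 1925]))   # ~12.4: visibly uneven share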
4. Simulation results and discussions

The simulations are conducted in three different environments: homogeneous, heterogeneous, and real-life. In a homogeneous environment, a single version of TCP connections exists, while different versions of TCP connections coexist in a heterogeneous environment. In a real-life environment, the sources, destinations, and durations of the TCP connections are random. We chose these environments according to real observations. It is reasonable to suppose that a single enterprise deploys a single TCP version (Reno being the most popular) across its network, so the connections within an intranet use the same version. However, some enterprises may deploy an enhanced TCP version (Vegas) for better performance, so connections across the Internet may use different TCP versions. We can therefore regard an intranet as a homogeneous environment and the Internet as a heterogeneous environment, and estimate how often each environment occurs in reality. A real-life environment is constructed according to real conditions by randomizing some variables.

4.1. Homogeneous environment

Five TCP connections of group e are first run with various buffer sizes supplied in the routers. In the second case, the number of TCP connections of group e is varied. These two cases reveal the stability behavior of TCP Vegas. Finally, all five groups are used to investigate the influence of different hop counts on performance.

Table 2
Mean throughput and instability for five TCP connections with various buffer sizes

Buffer size    Reno MT    Reno IS (%)    Vegas MT    Vegas IS (%)
5              1783       185.78         1862        488.56
7              1809       186.23         1868        304.16
9              1814       136.75         1871        198.61
11             1816       129.76         1872        41.16
13             1817       101.60         1873        0.42
14             1817       87.10          1873        0.42
15             1815       26.03          1873        0.21
16             1812       17.48          1873        0.00
17             1813       24.92          1873        0.00
20             1814       23.86          1873        0.00
50             1810       12.25          1873        0.00
315            1873       8.18           1873        0.00
316            1873       3.41           1873        0.00
500            1873       3.41           1873        0.00
4.1.1. Buffer sizes

How the buffer size in the routers affects the performance of TCP Reno and Vegas is the first concern. This simulation uses group e only. Table 2 summarizes the simulation results. Several observations reveal how the performance of TCP Reno differs from that of Vegas.

1. TCP Vegas achieves a better throughput than TCP Reno. For the Reno connections, cwnd oscillates due to packet losses, which trigger fast recovery or timeouts. Therefore, for Reno, a larger buffer size in the router implies a lower probability of entering slow-start and thus a better throughput. Conversely, TCP Vegas is very stable owing to its new congestion avoidance method.
Table 3
Mean throughput and instability for various numbers of TCP connections (buffer size = 50)

Number of connections    Reno MT    Reno IS (%)    Vegas MT    Vegas IS (%)
2                        4512       3.52           4679        0.10
4                        2258       2.78           2341        0.00
6                        1506       3.51           1560        0.12
8                        1131       7.55           1170        0.25
10                       901        11.42          936         0.32
12                       755        44.32          779         0.38
14                       651        44.82          668         1.19
16                       568        53.64          584         1.36
17                       534        60.95          550         40.36
18                       507        171.03         518         218.27
20                       456        190.18         465         249.62
2. TCP Vegas achieves good stability when the buffer size is sufficient, whereas instability occurs when it is insufficient. When the buffer size is insufficient, some Vegas connections obtain bandwidth first and increase their window sizes, and the other connections passively take the residual bandwidth; thus, instability occurs. Although achieving a larger throughput than Reno, Vegas suffers from high instability when the buffer size is insufficient. TCP Vegas becomes more stable as the buffer size increases and reaches a stable state when the buffer size is sufficient. Hence, a sufficient buffer size in the routers is essential when using TCP Vegas.

3. To maintain stability, TCP Reno requires a markedly larger buffer size than TCP Vegas. For the instability value to stay below 5%, the buffer size must reach 316 packets for TCP Reno but only 16 packets for TCP Vegas. To keep instability small, TCP Reno requires a much larger buffer to avoid frequent packet losses.

4.1.2. Number of connections

In this case, how the number of connections affects the performance of TCP Reno and Vegas is examined. This simulation uses group e only and the buffer size is fixed at 50 packets. Table 3 summarizes the simulation results.

(1) Estimate the prerequisite for TCP Vegas connections to achieve stability. In the previous case, five Vegas connections achieved stability when the buffer size exceeded 12 packets. In this case, a buffer size of 50 packets is enough to achieve stability as long as the number of connections does not exceed 16. These results can be synthesized into a rough stability condition: when

buffer size (in packets) / number of Vegas connections ≥ β

holds, TCP Vegas can achieve a fair and stable share of bandwidth. If it does not hold, instability may occur (a small sketch of this check follows).
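A minimal sketch of this check (our own; β = 3 as in Table 1, and the calls below use the buffer size and connection counts from Table 3):

# Rough stability condition derived above: each Vegas connection tries to keep
# between alpha and beta extra packets queued, so the shared buffer should
# offer at least beta packets per Vegas connection.

def vegas_likely_stable(buffer_pkts, n_vegas, beta=3):
    return buffer_pkts / n_vegas >= beta

print(vegas_likely_stable(50, 16))   # True: 16 connections stay stable in Table 3
print(vegas_likely_stable(50, 17))   # False: instability appears at 17 connections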
Vegas compares the difference between the actual and expected throughput rates with the α and β thresholds, which implies that the number of extra buffers a connection occupies in the network is bounded between α and β. Our condition is therefore reasonable, since it ensures that the buffer space is adequate for each Vegas connection. We have verified the accuracy of this condition for different buffer sizes, numbers of connections, and network topologies. The condition can be used to check whether a network environment is appropriate for TCP Vegas.

4.1.3. Number of hops

This investigation uses the five groups a, b, c, d, and e to examine how the number of hops affects the performance of TCP Reno and Vegas connections. The five groups of connections cover the same distance: groups a and b travel through two routers, groups c and d travel through three routers, and group e travels through four routers. Table 4 summarizes the simulation results.

1. The more hops a connection travels, the larger the improvement TCP Vegas achieves. For groups a and b, the mean throughput of the Reno connections is slightly better than that of the Vegas connections, while the opposite holds for groups c, d, and e. Namely, TCP Vegas improves over Reno more when a connection travels more hops: TCP Vegas achieves a 37–71% higher throughput than Reno when 17 hops are traveled [3], a 4–20% higher throughput when nine hops are traveled [8], and a 0–10% higher throughput when four hops are traveled in our simulation.

2. The hop count of a TCP Vegas connection influences its throughput to a lesser extent. Vegas shows a smaller throughput gap between groups traveling different hop counts. In contrast, the hop count significantly impacts the throughput of a TCP Reno connection: the Reno connections traveling four hops have a much lower throughput than those traveling two hops.

3. TCP Vegas is fairer than TCP Reno. The differences in throughput between groups a and b, and between groups c and d, are smaller for TCP Vegas connections than for TCP Reno connections. Connections passing through the same hop count should have similar performance; obviously, Reno fails to achieve this property.

4. TCP Vegas is more stable than TCP Reno. The throughput of each group of Vegas connections is quite stable, except for six connections in group e; this exception is due to insufficient buffer space. Conversely, TCP Reno always shows a large variance of throughput within each group.
Table 4
Mean throughput and instability for each group (buffer size = 50)

Reno
No. of connections    Group a          Group b          Group c          Group d          Group e
in each group         MT     IS (%)    MT     IS (%)    MT     IS (%)    MT     IS (%)    MT     IS (%)
1                     5288   –         6084   –         3202   –         2577   –         374    –
2                     2969   8.21      2635   16.16     1288   23.67     1624   1.47      304    28.94
3                     1705   48.50     2205   34.73     1224   48.69     779    50.70     95     36.84
4                     1246   19.66     1613   42.59     939    65.49     583    44.59     116    46.55
5                     1217   76.41     1156   14.27     581    56.97     614    76.05     57     56.14
6                     1056   31.62     960    75.20     401    43.64     506    32.60     62     93.54

Vegas
No. of connections    Group a          Group b          Group c          Group d          Group e
in each group         MT     IS (%)    MT     IS (%)    MT     IS (%)    MT     IS (%)    MT     IS (%)
1                     4086   –         4570   –         3360   –         2879   –         1918   –
2                     1970   0.10      1971   0.01      1729   0.01      1729   0.01      982    0.00
3                     1278   0.15      1285   0.01      1185   0.01      1180   0.00      655    0.30
4                     964    0.20      966    0.10      894    0.22      894    0.33      479    0.62
5                     764    0.39      765    0.39      723    0.27      723    0.13      382    0.26
6                     604    0.49      606    0.33      578    0.86      578    0.51      373    129.75
4.2. Heterogeneous environments

Heterogeneous and homogeneous environments differ only in that connections of different TCP versions exist simultaneously. In the heterogeneous environment, we again investigate the effects of the buffer size, the number of connections, and the number of hops. The interaction between the TCP Vegas and Reno connections is our main concern.

4.2.1. Buffer sizes

First, five TCP Reno and five TCP Vegas connections of group e are run with various buffer sizes supplied in the routers. How the buffer size affects the performance in a heterogeneous environment is the first concern. Table 5 summarizes the simulation results.

1. In the heterogeneous case, TCP Reno obtains a better throughput when a larger buffer size is used. As the buffer size increases, the throughput of TCP Reno increases while that of TCP Vegas decreases. The progressive growth of TCP Reno's throughput is due to the additional buffers available and the conservative behavior of TCP Vegas: TCP Reno fights for buffers and takes bandwidth from TCP Vegas. TCP Vegas performs well when the buffer sizes are small, because TCP Reno then has a higher likelihood of incurring timeouts or packet losses; at each timeout or packet loss, the TCP Reno connections release bandwidth, and the TCP Vegas connections take some of it back. However, TCP Vegas cannot outperform TCP Reno, which performs well, when the buffer size is large.

2. The stability in the heterogeneous case depends mainly on the behavior of the TCP Reno connections. As mentioned earlier, in the homogeneous environments TCP Reno is stable when the buffer size exceeds 316 packets, whereas TCP Vegas is stable when the buffer size exceeds 16 packets. Obviously, when TCP Reno and Vegas coexist, the stability depends heavily on the behavior of TCP Reno. Hence, in the heterogeneous environment, Vegas cannot maintain virtues such as the good stability it shows in the homogeneous environment.

3. The stability condition for TCP Vegas is inappropriate for heterogeneous environments. Because TCP Reno heavily influences the performance of TCP Vegas in heterogeneous environments, the condition derived in Section 4.1.2 applies only to pure TCP Vegas connections.

Table 5
Mean throughput and instability for five Reno and five Vegas connections with various buffer sizes

Buffer size    Reno MT    Reno IS (%)    Vegas MT    Vegas IS (%)
10             123        261.32         1744        286.54
20             435        238.29         1434        60.42
30             800        138.12         1066        60.20
40             1042       1.34           823         33.62
50             1177       37.44          687         34.33
60             1247       4.00           616         33.75
70             1314       3.80           548         33.75
80             1364       4.17           495         37.31
90             1404       0.35           453         34.40
100            1430       8.17           425         36.90
200            1559       30.33          288         29.84
300            1756       26.81          108         2.76
331            1772       36.73          99          3.03
332            1778       4.20           93          2.82
400            1778       4.20           93          2.82
500            1778       4.20           93          2.82
Table 6
Mean throughput and instability for various numbers of Vegas connections within 10 TCP connections (buffer size = 50)

Number of Vegas connections    Reno MT    Reno IS (%)    Vegas MT    Vegas IS (%)
0                              901        15.42          –           –
1                              933        10.28          908         –
2                              940        11.17          878         26.87
3                              995        22.91          781         35.45
4                              1067       17.42          729         33.58
5                              1177       37.44          687         34.33
6                              1364       1.09           645         1.70
7                              1628       3.62           634         1.26
8                              2169       2.25           625         1.28
9                              3716       –              625         0.47
10                             –          –              936         0.32
Table 7
Mean throughput and instability for each group (buffer size = 50)

No. of connections    TCP        Group a          Group b          Group c          Group d          Group e
in each group         version    MT     IS (%)    MT     IS (%)    MT     IS (%)    MT     IS (%)    MT     IS (%)
1                     Reno       4274   –         4517   –         2609   –         2593   –         297    –
                      Vegas      830    –         776    –         925    –         742    –         371    –
2                     Reno       2237   2.23      1764   1.75      784    44.77     1464   4.98      206    16.01
                      Vegas      669    0.14      558    21.50     540    30.92     465    22.58     189    33.86
3                     Reno       1091   48.30     895    37.93     431    32.94     785    27.13     73     65.75
                      Vegas      858    25.52     564    34.21     505    45.74     652    43.86     124    45.96
4.2.2. Number of connections

Herein, the number of Vegas connections within 10 TCP connections of group e is varied; the remaining connections use TCP Reno. This case shows how different numbers of TCP Vegas connections affect the mean throughput and stability. Table 6 shows the results.

1. TCP Reno throughput improves substantially with a higher proportion of TCP Vegas connections. Intuitively, one might expect a larger number of TCP Reno connections to imply a worse throughput for the TCP Vegas connections. However, this is not true. A larger number of TCP Reno connections implies more intensive bandwidth contention among the Reno connections themselves. This contention may cause timeouts, which reduce the throughput of TCP Reno and give bandwidth to TCP Vegas. As the number of Reno connections decreases, TCP Reno throughput grows and TCP Vegas throughput shrinks.

2. Users will be reluctant to adopt TCP Vegas. When all connections are TCP Vegas, each achieves a better throughput and stability. Hence, if all users switched from Reno to Vegas simultaneously, performance would actually improve. However, the transition is progressive. The first individual switching to TCP Vegas gets an improved throughput, from 901 to 908. The second to switch does not: that user's throughput drops from 933 to 878, and the throughput of the first user is also reduced from 908 to 878. The throughput of the final person to make the change drops from 3716 to 936, although this change significantly enhances the throughput of the others. Hence, acting individually, users will delay adopting Vegas to obtain more bandwidth. Furthermore, they are not motivated to make the change because it brings them no personal benefit.

4.2.3. Number of hops

The five groups are used to investigate how the number of hops affects the performance of TCP Reno and Vegas connections, with the number of TCP Reno and Vegas connections in each group varied. Table 7 shows the simulation results. TCP Reno takes bandwidth from TCP Vegas: comparing Table 7 with Table 4 reveals that TCP Reno takes a substantial amount of bandwidth from TCP Vegas. For example, for group a, the mean throughput of two Reno connections is 2969 and that of two Vegas connections is 1970 (Table 4), whereas when one Reno and one Vegas connection coexist, their throughputs are 4274 and 830, respectively (Table 7).
4.3. Real-life environment

In the real-life environment, four TCP sources are connected to each router of the four-router topology shown in Fig. 2. Each source establishes FTP connections periodically according to the following steps (a sketch of this generation loop is given below):

1. Wait a period of time uniformly distributed between 0 and 60 s.
2. Randomly choose one destination.
3. Randomly choose a TCP version, Vegas or Reno.
4. Create an FTP connection with a duration uniformly distributed between 0 and 60 s.

This construction mimics real-life conditions. In this case, Reno and Vegas connections are randomly generated to investigate the performance of both versions in the real-life environment.
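The following Python sketch (our own paraphrase of these four steps, not the actual ns script; the source and destination names, and the assumption that a source waits for one transfer to finish before starting the next cycle, are ours) shows the per-source generation loop.

import random

DESTINATIONS = [f"D{i}" for i in range(16)]   # assumed: one destination per source node
SIM_LENGTH = 3000.0                           # seconds, as in Table 8

def generate_connections(source):
    """Repeat steps 1-4 above for one source until simulated time runs out."""
    t, conns = 0.0, []
    while t < SIM_LENGTH:
        t += random.uniform(0, 60)                    # step 1: wait 0-60 s
        dst = random.choice(DESTINATIONS)             # step 2: random destination
        version = random.choice(["Reno", "Vegas"])    # step 3: random TCP version
        duration = random.uniform(0, 60)              # step 4: FTP transfer, 0-60 s
        conns.append({"src": source, "dst": dst, "tcp": version,
                      "start": t, "stop": t + duration})
        t += duration                                 # assumed: next cycle starts afterwards
    return conns

# e.g. the 16 sources (four per router) each generate their own connection list:
all_conns = [c for s in range(16) for c in generate_connections(f"S{s}")]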
Fig. 3. Network topology.
Table 8
Results of the real-life environment within 3000 s

TCP version    Connections    Transmitted packets    Total time (s)    Packets/second
Reno           511            11,766,892             16,141            729.006
Vegas          520            8,377,307              16,372            511.685
Table 8 summarizes the simulation results within 3000 s. The throughput of Vegas is clearly poorer than that of Reno. This finding confirms that Reno obtains a better throughput than Vegas in a real-life environment.

4.4. Enlarging α and β

Vegas relates the difference between the actual and expected throughputs to the α and β thresholds, whose values can thus be interpreted as bounds on the number of extra buffers the connection occupies in the network. Hence, Vegas connections with larger α and β are allowed to keep more packets in the buffers, which results in a higher throughput. We adjust α and β to observe the effects of these parameters. A simulation is conducted by varying α with β = α + 2. Fig. 3 shows the simulation topology, and Fig. 4 shows that Vegas with larger α and β behaves more aggressively. However, this increase in aggressiveness does not continue indefinitely as α and β increase. Increasing α beyond 13 has no further effect on the throughputs of Vegas and Reno, because when α and β are increased further, Vegas encounters more packet losses, reduces its window more often, and finally settles into a stable state. Fig. 5 plots the throughput ratio R, calculated as (throughput of Reno)/(throughput of Vegas), versus α and β. When the value is below 1, the throughput of Vegas is
superior to that of Reno, which is the condition we want to see. As seen in this figure, increasing β has no effect when β > α + 4. Therefore, setting a proper α is more important than setting β: α should not be set too low, and β should be slightly larger than α to achieve our goal. Moreover, setting a large α does not cause a fall in utilization, as seen in Fig. 4. Thus, we finally suggest choosing large values of α and β.
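The proposal therefore amounts to running the unchanged Vegas update rule with larger thresholds. The small sketch below (our own; the RTT sample is hypothetical and corresponds to roughly seven extra queued packets) shows how the same measurement makes default Vegas back off while the enlarged thresholds keep its window, and hence its share of the router buffers, growing.

# Same update rule as Eq. (1); only alpha and beta differ from the Table 1 defaults.

def vegas_update(cwnd, base_rtt, avg_rtt, alpha, beta):
    diff = (cwnd / base_rtt - cwnd / avg_rtt) * base_rtt   # ~ extra queued packets
    if diff < alpha:
        return cwnd + 1
    if diff > beta:
        return cwnd - 1
    return cwnd

cwnd, base_rtt, avg_rtt = 30, 0.10, 0.13     # hypothetical sample: diff ~ 6.9
print(vegas_update(cwnd, base_rtt, avg_rtt, alpha=1,  beta=3))    # 29: back off
print(vegas_update(cwnd, base_rtt, avg_rtt, alpha=13, beta=15))   # 31: keep growing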
5. Conclusions

In a homogeneous environment, TCP Vegas outperforms TCP Reno owing to its higher throughput and better stability. However, TCP Vegas is in a difficult position when TCP Reno and Vegas coexist. Because the two TCP versions share the same buffers in the routers, the aggressive behavior of TCP Reno and the conservative behavior of TCP Vegas cause unfairness when they are used simultaneously: TCP Reno obtains a significant amount of bandwidth that would otherwise belong to TCP Vegas. Moreover, as more connections change their version, TCP Reno takes even more bandwidth. Thus, users delay, or even refuse, to use TCP Vegas despite its better performance in a homogeneous environment, which accounts for why TCP Vegas is still unpopular. We therefore proposed a simple approach to solve this problem. Our approach requires no modification of the router algorithms or of TCP Vegas itself; it only adjusts the α and β parameters of Vegas. The
Fig. 4. Throughput of Vegas and Reno with different α (β = α + 2).
Fig. 5. Throughput ratio versus various α and β.
approach achieves good fairness, even giving Vegas an advantage. Thus, adjusting α and β is an attractive proposal because these values can be easily controlled and have no side effects.
Acknowledgments The authors would like to thank the National Science Council of the Republic of China for financially supporting this research under Contract No. NSC 88-2218-E-006-038.
References [1] V. Jacobson, Congestion avoidance and control, ACM SIGCOMM ’88 (1988) 273–288.
[2] V. Jacobson, Modified TCP congestion avoidance algorithm, end2end-interest mailing list, April 30, 1990.
[3] L.S. Brakmo, L.L. Peterson, TCP Vegas: end to end congestion avoidance on a global Internet, IEEE JSAC 13 (1995) 1465–1480.
[4] O. Ait-Hellal, E. Altman, Analysis of TCP Vegas and TCP Reno, IEEE ICC '97 (1997) 495–499.
[5] J. Mo, R.J. La, V. Anantharam, J. Walrand, Analysis and comparison of TCP Reno and Vegas, IEEE INFOCOM '99 (1999) 1556–1563.
[6] W. Stevens, TCP slow start, congestion avoidance, fast retransmit, and fast recovery algorithms, RFC 2001, January 1997.
[7] K. Fall, S. Floyd, ns-Network Simulator, http://www-mesh.cs.berkeley.edu/ns.
[8] J.S. Ahn, P.B. Danzig, Z. Liu, L. Yan, Evaluation of TCP Vegas: emulation and experiment, http://excalibur.usc.edu/research/vegas/doc/vegas.html.