Performance study of buffering within switches in LANs

Amr Elsaadany, Mukesh Singhal, Ming T. Liu
Department of Computer and Information Science, The Ohio State University, Columbus, OH 43210, USA

Computer Communications 19 (1996) 659-667

Abstract

Today's demanding applications, such as multimedia, require higher network transfer rates. Traditional LANs will not be able to provide the throughput required by these applications. The use of switches in LANs is an effective technique to increase the throughput of the network. These switches have finite buffers at their input and output ports. The size of these buffers affects the packet loss rate. It also affects the delay at the switch, as packets may have to wait for the output buffer to become available. In this paper, we study the effect of buffer sizes within these switches. We show how the buffer size is related to the performance of the switch, as well as to the overall performance of the LAN.

Keywords: Local area network; Switching hub; LAN protocol; LAN performance; Ethernet; Multimedia

1. Introduction

A switching hub is a communication device that has multiple input and output ports. Each port is capable of sending as well as receiving data. A buffer is associated with each port, so that incoming and outgoing packets can be queued for processing. Based on the destination address of an incoming packet, the hub switches the packet from the incoming port to the proper outgoing port. Switching hubs are used in LANs to improve performance and increase the effective bandwidth. For example, a single bus with a large number of stations can be segmented into several buses with a smaller number of stations, which in turn can be connected via the switching hub. A switching hub ignores packets from a station on a LAN segment destined for another station on the same LAN segment. When the switching hub receives a packet destined for a different LAN segment, it forwards it to the port that is connected to the destination LAN segment.

Several factors can affect the performance of switching in LANs. The use of a switch in a LAN introduces a delay for inter-segment traffic. However, the switch improves the overall performance of the network, because it allows multiple communications to happen in parallel on different LAN segments. In addition, the switch reduces the contention on the shared medium. Thus, the effective bandwidth and throughput of the system increase.

There is a finite amount of buffer space associated with each port of the switch, and if a buffer overflows, incoming packets are discarded. A very important performance criterion is therefore the packet loss rate at the switch. The loss rate depends on the buffer size at the ports of the switch. An important question is what buffer size is necessary to minimize the loss of packets for the offered traffic load. The purpose of this paper is to study how buffering at the switch affects the performance of a network; in particular, how the buffer size at the ports of the switch affects the packet loss rate and the percentage of delayed packets. We model switches with different buffer sizes and compare their performance merits.

The paper is organized as follows. Section 2 gives a background on LANs and switching in general. Section 3 describes the model of the system. Section 4 discusses performance evaluation and simulation results. Section 5 concludes the paper.

2. Background

A LAN is basically a communications network providing interconnection of a variety of PCs, workstations and other communication devices over a small geographical area. It is also used to share common resources such as printers and file servers. LAN-to-LAN connectivity can be achieved via many devices, chosen according to the nature of the required connectivity.


Fig. 1. Bridge operation.

A repeater connects LANs at the physical layer to increase the distance/length of these LANs; this is necessary because there is a limitation on the length of a single LAN segment. Bridges, on the other hand, operate at the data link layer, and thus can connect similar as well as different LANs (as long as they share the same LLC protocol), for example Ethernet-to-Ethernet or Ethernet-to-FDDI [1]. Fig. 1 describes the operation of a bridge connecting two different LANs that have different physical layer implementations (PHY1 and PHY2). Routers operate at layer 3 of the protocol hierarchy, and can connect dissimilar networks that have the same network layer protocols. They are usually used to connect LANs to Wide Area Networks (WANs). More information about routers versus bridges can be found in [2].

A classification of the general architecture of switches is discussed in [3,4]. Switches can be either time division or space division. In time division switches, time is shared by several input ports; this could be memory access time in shared memory switches, or bus time in shared medium switches. In space division switches, multiple paths between input ports and output ports are used. Examples of such architectures are crossbar, matrix or multistage switches. The above classification applies to packet switching in general. Packet switching is usually used in WANs, as in many public networks. In packet switching, packets are transmitted from source to destination (over a network of switches) in a store-and-forward manner, and each packet contains its destination (and source) address.

A switching hub resembles a bridge in that it operates at the data link layer of the protocol stack; it basically operates at the IEEE 802.2 (LLC) level of the LAN protocol. A switching hub filters incoming packets arriving at its ports and then forwards (switches) them to their appropriate destinations. However, some switching hubs provide combined routing and bridging capabilities. In a TCP/IP network, such switches operate at the network layer, and thus recognize IP packets.
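To make the filtering/forwarding behaviour described above concrete, the following sketch (ours, not taken from the paper) shows the per-frame decision a data-link-layer switching hub makes using a backward-learning address table; the class and method names are illustrative assumptions.

```python
class SwitchingHub:
    """Minimal sketch of data-link-layer filtering and forwarding."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.addr_table = {}                 # station address -> port it was last seen on

    def handle_frame(self, in_port, src, dst):
        self.addr_table[src] = in_port       # backward learning of the source address
        out_port = self.addr_table.get(dst)
        if out_port is None:
            # Unknown destination: flood to every other port.
            return [p for p in range(self.num_ports) if p != in_port]
        if out_port == in_port:
            return []                        # same segment: filter (ignore) the frame
        return [out_port]                    # different segment: forward to that port
```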

The architecture of a traditional switching hub (or a bridge) is based on a single processor switching/filtering traffic from one port to another. Other architectures use a switching network within the hub to connect input ports to output ports; these newer architectures use application-specific integrated circuits (ASICs) at each port. Most switching hubs are store-and-forward switches, but some are cut-through switches. The former forwards a packet to its destination only after it has received the whole packet, while the latter forwards a packet as soon as it decodes its destination address (even before the entire packet has been delivered into the incoming buffer). Cut-through switches reduce delay and can improve the throughput of the network. However, since they do not read the whole packet before transmitting, they can forward incomplete packets; moreover, they cannot perform error detection on the whole packet, and thus can forward bad packets.

The way in which switching hubs are used in a LAN depends greatly on the existing configuration of the network, on the type of devices connected by the network, and on the protocols used within the network. For instance, if the network is an FDDI ring, then the switching hub must support an FDDI interface. Switching hubs can be used to design high-performance LANs: existing LANs can be reconfigured, or microsegmented, to improve their performance. Microsegmentation in a LAN means assigning fewer stations per LAN segment and using switching hubs to interconnect these segments. A segment can have as few as one station, giving it a dedicated LAN connection to the switch. Single-station segments may be needed in a client-server application that requires a large bandwidth, such as a multimedia application downloading video streams. For instance, a 640 x 480, 24-bit, 30 frame/s video stream compressed 200 times would need about 1.2 Mbps, which is well within the bandwidth of a typical LAN [5,6]. The same application would not perform adequately if the LAN segment were populated with a large number of stations.
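As a rough check of the bandwidth figure quoted above (our arithmetic, assuming 24 bits per pixel before compression and a 200:1 compression ratio):

\[
640 \times 480 \times 24 \times 30 \ \text{bit/s} \approx 2.2 \times 10^{8}\ \text{bit/s} \approx 221\ \text{Mbps},
\qquad
\frac{221\ \text{Mbps}}{200} \approx 1.1\ \text{Mbps},
\]

which is consistent with the 1.2 Mbps cited and is small compared with the 10 Mbps capacity of an Ethernet segment.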


3. System and model description

This section describes the architecture of switching hubs used in LANs, and the models we use to study their performance. The architecture of a general switching hub is shown in Fig. 2. It consists of incoming and outgoing ports connected to LAN segments. The switching function box connecting the ports provides the switching functionality of the hub, namely forwarding incoming packets (from the input port buffers) to their destination output port buffers. The architecture of a (parallel) switching hub is based on a switching network connecting the input and output ports of the hub. The switching network within the hub allows multiple connections between input ports and output ports to happen in parallel; thus, multiple inter-segment communications can be in progress simultaneously. Models of different switching hub architectures and their performance in LANs are studied in [7].

Fig. 3 shows a model of a parallel hub (with four ports). The set of queues on the left represents the delay at the (N) input ports of the switch. The set of queues in the middle represents the delay incurred by the switching function of the hub; the switching function in our model supports up to N/2 simultaneous paths between the input and output ports. The set of queues on the right represents the delay at the (N) buffers at the output ports (before the packet can be transmitted on the destination LAN segment). Each port of the switch has a finite buffer. If a packet arrives at an input port when its buffer is full, it is dropped. Packets are switched from input ports to output ports (via the switching network of the hub). Once packets are at the output port buffer, they are queued up for transmission. If the output buffer is full, packets at the input buffers are blocked, and hence delayed.

When studying the performance of switches in a LAN, one should evaluate the packet delay and the packet loss rate, and their effect on the overall performance, under varying traffic loads for different buffer sizes.

Fig. 3. Parallel switching hub queueing model.
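The queueing structure just described can be summarized by a small structural sketch (ours, not the authors' simulator); packet objects carrying a dest_port field, and the class and method names, are illustrative assumptions.

```python
from collections import deque

class ParallelHubModel:
    """N ports, finite per-port buffers, up to N/2 simultaneous switching paths."""

    def __init__(self, n_ports, buffer_size):
        self.n_ports = n_ports
        self.buffer_size = buffer_size                  # per port, in packets
        self.max_paths = n_ports // 2                   # simultaneous transfers
        self.in_buf = [deque() for _ in range(n_ports)]
        self.out_buf = [deque() for _ in range(n_ports)]
        self.dropped = 0                                # input buffer overflow
        self.blocked = 0                                # held back by a full output buffer

    def arrive(self, port, packet):
        """A packet arriving at a full input buffer is dropped."""
        if len(self.in_buf[port]) >= self.buffer_size:
            self.dropped += 1
        else:
            self.in_buf[port].append(packet)

    def switch_step(self):
        """One pass of the switching function: move head-of-line packets to their
        output buffers, serving at most N/2 input ports; a packet whose output
        buffer is full stays in the input buffer (it is blocked, i.e. delayed)."""
        served = 0
        for port in range(self.n_ports):
            if served >= self.max_paths or not self.in_buf[port]:
                continue
            dest = self.in_buf[port][0].dest_port
            if len(self.out_buf[dest]) >= self.buffer_size:
                self.blocked += 1
                continue
            self.out_buf[dest].append(self.in_buf[port].popleft())
            served += 1
```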

The use of a switch in a LAN introduces packet delay for inter-segment traffic, and this delay should be as small as possible so that its effect on the response time of inter-segment traffic is minimal. Minimizing the packet loss rate is a very important performance requirement, since it affects the overall performance of the network: once a buffer is full, packets arriving at an input port are dropped, and these packets have to be retransmitted. The next section presents the results of the effect of varying buffer sizes on the overall performance of the network.

The job interarrival time is assumed to be exponentially distributed, with a mean of 64 milliseconds per station; the number of stations on each bus segment is variable. When the number of stations (NWS) in the system is 64, the arrival rate (ARRIVAL) at the switch is 1 job per millisecond. The probability of traffic locality, DIST, is the fraction of traffic that is destined for stations on the same LAN segment (i.e. the source and destination stations are on the same LAN segment). If DIST is 0.75, then 75% of the traffic is local and the other 25% is non-local traffic that has to go through the switch. The number of packets per job is assumed to be uniformly distributed between 1 and NPACK. The bus bandwidth is 10 Mbps and the packet length is 1 Kbit (thus, the packet transmission time is 0.1 millisecond). The service time at the switch is the amount of time it takes the switching function to move a packet from an input buffer to an output buffer. The average switching delay (or service time) at the switch is assumed to be 0.04 milliseconds (we refer to this as the DELAY). This value is typical of currently available switches, such as the Kalpana EtherSwitch 1500.
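A minimal workload-generator sketch using the parameter values quoted above (the function and variable names are ours, not the paper's; DIST is shown as 0.5, matching the value used in Table 1):

```python
import random

MEAN_INTERARRIVAL_MS = 64.0      # per station, exponentially distributed
DIST = 0.5                       # probability a job stays on its own segment
NPACK = 32                       # packets per job ~ Uniform{1, ..., NPACK}
TX_TIME_MS = 0.1                 # 1 Kbit packet on a 10 Mbps segment
SWITCH_DELAY_MS = 0.04           # mean service time of the switching function

def next_job(now_ms, segments, own_segment):
    """Return (arrival time, packet count, destination segment) for one station's next job."""
    arrival = now_ms + random.expovariate(1.0 / MEAN_INTERARRIVAL_MS)
    n_packets = random.randint(1, NPACK)
    if random.random() < DIST:
        dest = own_segment                    # local (intra-segment) traffic
    else:
        dest = random.choice([s for s in segments if s != own_segment])  # goes through the switch
    return arrival, n_packets, dest
```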

Fig. 2. General switching hub architecture.

4. Simulation results

An event-driven, discrete-time simulator was developed to study the effect of buffer sizes on the performance of the network under various system parameters. The simulator simulates a local computer network with one switching hub. Stations are connected together via LAN segments, each of which is attached to a switching hub port. At each station, jobs arrive and generate packets destined either for another station on the same LAN segment or for a station on a different LAN segment (with equal probability). Requests for transmission to a different LAN segment are queued for service at the input port to which the originating LAN segment is connected. The switching function of the hub forwards the packets to the port to which the destination station is connected. Packets are then queued up for transmission on the destination LAN segment, where they compete with other traffic for successful transmission to the desired station.

The inputs to the simulator are the number of ports at the switching hub, the total number of stations, the input/output buffer size (in packets), and the duration of the simulation (number of jobs). The simulation was run for 32000 successful job transmissions. The simulator collected statistics on response time, queue lengths and utilization. Table 1 lists the values of the parameters used in our simulations.

Table 1
Simulation parameters

Parameter    Value(s)
NPACK        32
DELAY        40
DIST         50
NWS          variable
NPORT        8 or 16
BUFFER       variable

NPORT is the number of ports per hub (i.e. the maximum number of LAN segments that can be connected to the hub), and BUFFER is the buffer size at the input or output ports. We studied the performance of the switch under the following conditions:

• Increasing the traffic load on the network.
• Varying the buffer size.
• Varying the number of ports.

The performance measures used were mean packet delay, average end-to-end delay, probability of packet loss, and probability of delayed packets. The mean packet delay is the average delay at the switch: the time interval starting when the incoming packet arrives at the input port of the switch and ending when the outgoing packet is at the output port. The mean packet delay depends on the switching delay, which is the time it takes the switching function of the hub to move a packet from the input buffers to the output buffers (given that there is no waiting or blockage within the switch), and on the queueing time within buffers. The average end-to-end delay (or mean response time), on the other hand, is the time interval between the instant a transmission is requested (by the source station) and the instant the transmission is completed (at the destination station). This delay has three components: the source LAN segment response time, the delay at the switch, and the destination LAN segment response time. The probability of packet loss is the percentage of packets that are dropped due to input buffer overflow. The probability of delayed packets is the percentage of packets that are blocked at the input buffer due to output buffer unavailability.
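These four measures can be accumulated from simple counters during the simulation. The sketch below is our own illustration of that bookkeeping, not code from the authors' simulator; all names are assumptions.

```python
class Stats:
    """Accumulators for the four performance measures reported in Section 4."""

    def __init__(self):
        self.offered = 0              # packets offered to the switch
        self.dropped = 0              # lost to input buffer overflow
        self.blocked = 0              # delayed at an input buffer by a full output buffer
        self.delivered = 0            # packets that reached their destination station
        self.switch_delay_sum = 0.0   # input-port arrival -> placed in output buffer (ms)
        self.e2e_delay_sum = 0.0      # transmission requested -> completed (ms)

    def report(self):
        d = max(self.delivered, 1)
        o = max(self.offered, 1)
        return {
            "mean packet delay (ms)": self.switch_delay_sum / d,
            "mean end-to-end delay (ms)": self.e2e_delay_sum / d,
            "packet loss (%)": 100.0 * self.dropped / o,
            "delayed packets (%)": 100.0 * self.blocked / o,
        }
```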

Fig. 4. Packet delay at the switch as buffer size increases. NPACK: 32; DELAY: 40; ARRIVAL: 2.

4.1. Effect of varying the buffer size

Fig. 4 shows the packet delay as the buffer size increases. In this experiment, NPORT is 16 ports, NWS is 128 stations and ARRIVAL is 2 jobs per ms. We observed that the packet delay decreases as the buffer size increases. Packet delay is very high with small buffer sizes. It drops off quickly as the buffer size increases beyond 30 packets, and then levels out as the buffer size is increased beyond 80 packets. This is because, as the buffer size increases, fewer packets have to wait for the buffer to become available. Remember that when the output buffer is full, packets at the input ports are blocked.

Fig. 5. Comparison of packet delay for different traffic loads. NPACK: 32; DELAY: 40.

Fig. 5 compares the packet delay for two different arrival rates (ARRIVAL = 2 and ARRIVAL = 4) as the buffer size increases. Again, the packet delay decreases as the buffer size increases. It drops off quickly as the buffer size increases beyond 30 packets, and then levels out as the buffer size is increased beyond 80 packets. The improvement for the load with an arrival rate of 4 is slightly better than with an arrival rate of 2. For instance, there is about an 11% improvement when the buffer is increased from 40 to 50 with ARRIVAL = 2, while there is only a 3% improvement with ARRIVAL = 4. Thus, higher traffic loads show higher sensitivity in packet delay as the buffer size changes.

Fig. 6. Percentage of packets lost at the switch as buffer size increases. NPACK: 32; DELAY: 40.


Fig. 7. Percentage of delayed packets at the switch as buffer size increases. NPACK: 32; DELAY: 40.

Fig. 6 shows the percentage of packet loss for different arrival rates as the buffer size increases. Packets are dropped if they arrive at an input port that has its buffer full. We observed that the packet loss rate decreases as the buffer size increases. It drops off quickly as the buffer size increases beyond 30 packets, and goes down to zero as the buffer size is increased beyond a certain point (depending on the arrival rate of the traffic). The packet loss rate drops by 50% as the buffer size increases from 40 to 50. The packet loss rate is higher for traffic loads with higher arrival rates, because in that case the load on the switch becomes higher and the queues at the buffers build up more quickly.

Fig. 8. Response time for small buffer as traffic load increases. NPACK: 32; DELAY: 40; BUFFER: 50.


Fig. 9. Response time for large buffer as traffic load increases. NPACK: 32; DELAY: 40; BUFFER: 100.

Fig. 7 shows the percentage of delayed packets for different arrival rates as the buffer size increases. Packets are delayed if they are bound for an output port that has its buffer full. The percentage of delayed packets decreases as the buffer size increases. It drops off quickly as the buffer size increases beyond 30 packets, and goes down to zero as the buffer size is increased beyond a certain point (depending on the arrival rate of the traffic). The percentage of delayed packets drops by 50% as the buffer size increases from 40 to 50. As the arrival rate increases, the percentage of delayed packets increases (for the same buffer size), because the load on the switch becomes higher and the buffers fill up more quickly.

An important observation here is that the percentage of delayed packets is higher than the percentage of dropped packets for the same offered traffic load. This is because packets at the output buffer are waiting for transmission on the destination LAN segment. Transmission on a LAN segment is based on contention between stations on the segment, including the switch itself, while packets in the input buffers are waiting to be served by the (fast) switching function of the hub, which delivers them to the output buffers. The average service time of the switching function is much less than the transmission time on a LAN segment.

4.2. Effect of varying the number of ports

Fig. 8 shows the response time as the arrival rate increases for a small buffer size (BUFFER = 50). The figure shows two curves, corresponding to one switch with 16 ports and another with 8 ports.


We observe that the response time increases as the load increases. The response time of the switch with 8 ports increases faster than that of the switch with 16 ports as the arrival rate increases. This is because, in the former case, 8 ports are handling the traffic load that 16 ports handle in the latter case; thus, the expected number of packets arriving at any port is doubled in the 8-port case. The divergence between the two curves grows as the arrival rate increases, meaning that beyond a certain point the response time may not be acceptable with 8 ports. Each LAN segment in the network that uses the 8-port switch has twice the number of stations as the LAN segments in the network that uses the 16-port switch. This means that the load on each segment increases, and thus the contention between stations on each LAN segment increases. As a result, the response time of each LAN segment increases. As the arrival rate increases, the LAN segment response time increases, and hence the response time for inter-segment traffic increases.

Fig. 9 shows the response time as the arrival rate increases for a large buffer size of 100 packets. The figure shows the results for a 16-port switch and an 8-port switch. Again, the difference between the two cases increases as the arrival rate increases. The switch with the smaller number of ports works well under light traffic load; at very high traffic loads, it performs much worse than the switch with the larger number of ports. There is a very slight improvement in the results of Fig. 9 over those in Fig. 8, which is due to the increased buffer size. The increase in buffer size improves the performance of the switch, as well as reducing the percentage of lost and delayed packets. However, after a certain point (as the load increases), increasing the buffer size does not improve the response time of the network. That is, from the response time point of view, the buffer is not the bottleneck.

Fig. 10. Percentage of packets lost at the switch as traffic load increases. BUFFER: 50; NPACK: 32; DELAY: 40.

Fig. 10 shows the percentage of lost packets for 8 ports and 16 ports as the arrival rate increases, for a buffer size of 50 packets. Fig. 11 shows the percentage of delayed packets for 8 and 16 ports as the arrival rate increases, for a buffer size of 50 packets. We observe that the percentage of delayed packets and the percentage of lost packets increase as the traffic load increases. Also, the percentage of delayed packets is larger than the percentage of lost packets.


Comparing the values obtained with a buffer size of 100 with those obtained with a buffer size of 50, there is a substantial improvement in the packet loss rate and the percentage of delayed packets. For instance, in the 16-port case with an arrival rate of 4 jobs/ms, the percentage of delayed packets is 7.78% with a buffer size of 50, and it drops to 0.05% with a buffer size of 100. In the 8-port case with an arrival rate of 4 jobs/ms, the percentage of delayed packets is 17.06% with a buffer size of 50, and it drops to 1.06% with a buffer size of 100.

Fig. 11. Percentage of delayed packets at the switch as traffic load increases. BUFFER: 50; NPACK: 32; DELAY: 40.


Thus, although increasing the buffer size yields only a slight improvement in response time, it has a substantial effect on the packet loss rate and the percentage of delayed packets.

5. Conclusions

The use of switches in LANs is an effective technique to increase the throughput of the network. These switches have finite buffers at their input and output ports. The size of these buffers affects the packet loss rate, and also the delay at the switch, as packets may have to wait for the output buffer to become available. In this paper we studied how buffering at the switch affects the performance of the network; in particular, how the buffer size at the ports of the switch affects the packet loss rate and the percentage of delayed packets. We studied these factors for parallel switches, which allow multiple inter-segment, as well as intra-segment, communications to happen simultaneously.

The performance improvement obtained by using switching hubs in a network also depends on other factors, such as the traffic distribution of the network and the switching delay, which affect the traffic that has to go through the switching hub. The switching hub has to be placed appropriately in the network so as to reduce the inter-segment traffic; this traffic suffers a delay at the switching hub in addition to being transmitted on two separate LAN segments. The overall performance of the network is affected by the percentage of inter-segment versus intra-segment traffic.

Packet delay decreases as the buffer size increases. The decrease is significant when the buffer size is very small, and then it slows down as the buffer size grows larger. Moreover, higher loads show a higher sensitivity in packet delay as the buffer size changes. Although an increase in buffer size improves the performance of the switch, after a certain point (as the load increases), increasing the buffer size does not improve the response time of the network; that is, from a response time point of view, the buffer is not the bottleneck. While increasing the buffer size yields only a slight improvement in response time, it has a substantial effect on the packet loss rate and the percentage of delayed packets.

The percentage of delayed packets is higher than the percentage of dropped packets for the same offered traffic load. This is because packets at the output buffer are waiting for transmission on the destination LAN segment.


Transmission on a LAN segment is based on contention between stations on the segment, including the switch itself, while packets in the input buffers are waiting to be served by the (fast) switching function of the hub, which delivers them to the output buffers. The average service time of the switching function is much less than the transmission time on a LAN segment. Because the percentage of delayed packets depends on the size of the output buffers, the output buffer of a switch should be larger than the input buffer; how much larger depends on the switching delay and the transmission time on the LAN segments. Large buffers at each port add to the cost of the switch. When designing a network, one should therefore consider the appropriate buffer size and switching delay for the offered traffic load of that network. In other words, the trade-off between buffer size, switching delay and number of ports has to be considered when designing or reconfiguring a network, in order to arrive at the most cost-effective switch that fits the needs of the network.

Acknowledgements

Research reported herein was partially supported by the U.S. Army Research Office under contract No. DAAL03-92-G-0184. The views, opinions, and/or findings contained in this paper are those of the authors and should not be construed as an official Department of the Army position, policy or decision.

References

[1] G. Bucci, A. Del Bimbo and S. Santini, Performance analysis of two different algorithms for Ethernet-FDDI interconnection, IEEE Trans. Parallel and Distributed Systems, 5 (6) (June 1994).
[2] W.M. Seifert, Bridges or routers, IEEE Computer (March 1987).
[3] R. Rooholamini, V. Cherkassky and M. Garver, Finding the right ATM switch for the market, IEEE Computer (April 1994) 17-28.
[4] F. Tobagi, Fast packet switch architecture for broadband integrated services digital networks, Proc. IEEE (January 1990) 133-167.
[5] N. Lippis, Multimedia networking, Data Comm. (February 1993) 60.
[6] D. Minoli and R. Keinath, Distributed Multimedia, Artech House, 1994.
[7] A. Elsaadany, M. Singhal and M.T. Liu, Performance evaluation of switching in local area networks, Technical Report, CIS Department, The Ohio State University, March 1995.