Share-based congestion control scheme for systems of interconnected networks

Luis Orozco-Barbosa and Nael Hirzalla

Department of Electrical Engineering, University of Ottawa, 161 Louis Pasteur, Ottawa, Ontario K1N 6N5, Canada

(Paper received: 3 August 1993; revised paper received: 6 March 1994)

In the past few years, there has been an increasing need for protocol mechanisms capable of ensuring the proper operation of internetwork systems consisting of a high-speed MAN linking together lower-speed LANs. The coupling of subnetworks having a big difference in transmission speed has been identified as one of the main challenges to overcome. Due to the LAN to MAN speed disparity, the buffer space of the internetworking units is more likely to be overflowed during periods of high inter-LAN traffic activity. In this paper, we propose a novel congestion control scheme specially designed for LAN/MAN internetwork systems. The scheme aims to control the use of the buffer space of the internetworking units. The mechanism is based on the definition of a virtual partitioning of the buffer space among all the communicating inter-LAN traffic streams. This scheme therefore allows for a flexible and efficient sharing of the critical system resources, i.e. the buffer space of the internetworking units. We have conducted a performance study to evaluate the quality of service provided to the various streams, as well as the overhead needed for the scheme to operate.

Keywords: congestion control, interconnected networks, LAN-MAN interconnectivity

Network interconnection has become increasingly important within the field of data communications during the past decade¹. The term 'network interconnection' has been used broadly to mean any technique which enables systems on one network to communicate with, or make use of, the services of systems on another network. Local area networks (LANs) such as Ethernet or token ring allow users to share storage, computing and peripheral resources. They are considered to be the backbone for a corporate or university network.

A LAN is capable of interconnecting a large number of stations, until the limitations of a single LAN, in terms of distance or the number of attachments, are reached. As a result, a means of interconnecting subnetworks becomes necessary. In some situations it may also be preferable to distribute the workstations and servers over several local subnetworks, in order to distribute the load. These subnetworks can then be interconnected by a backbone network running at a higher speed. Such a backbone network, typically a metropolitan area network (MAN), may even be used to interconnect remote LANs distributed over a large geographical area within a city². A MAN running at speeds of 100 Mbit/s, 155 Mbit/s or more can interconnect lower-speed LANs over small as well as large areas in the metropolitan area, up to 50 km in diameter. This study focuses on a high-speed backbone network interconnecting LANs: one of the most important MAN applications nowadays.

In systems of LAN-MAN interconnection, the speed disparity between the high-speed backbone network (the MAN) and the lower-speed LANs connected to it may create a bottleneck at the interconnection devices. Indeed, even with a large number of stations, the backbone network has no problem carrying all the load to the destination LANs. However, simultaneous transmissions to the same destination may yield a temporary overload, causing packet loss at a bridge, i.e. an interconnection device. This may result in packet retransmissions and eventually congestion. It could be argued that this problem may be solved by large buffers. However, for many reasons, network engineers generally do not like long queues within the network. In addition, commercially available bridges generally have a very limited buffering capacity. Furthermore, even large memories cannot definitely solve the problem of congestion. A control policy is necessary to regulate the flow of traffic travelling



across the bridges from the backbone network to the LANs. In this work, we propose a novel congestion control mechanism and analyse its performance.

INTERNETWORK SYSTEM CONFIGURATION

Figure 1 shows the basic configuration of a multiple subnetwork communications system. The system consists of n LANs interconnected with a high-speed backbone network via interconnection devices. The interconnection devices must provide the protocol mechanisms for ensuring the proper operation of the overall communications system. In this study, we assume an internetwork system consisting of LANs using the same medium access control (MAC) protocol³,⁴. The interconnection is therefore carried out at the MAC level using bridges.

Figure 1 Network topology

Each bridge comprises two sets of buffers, namely the input and output buffers. The input buffer temporarily holds the packets to be transmitted via the backbone network to the destination LAN, while the output buffer holds the packets already transmitted through the backbone having as their destination the LAN directly attached to the bridge. Figure 2 shows the general configuration of a bridge. The service rates of the output and input buffers are determined by the rate of successful access to the LAN and to the backbone, respectively.

Figure 2 A bridge

Data messages are transmitted from one LAN to another via the backbone network as a sequence of data packets. Packets may be lost when a buffer overflow happens at the bridges. The lost or discarded packets are assumed to be recovered by the end systems through the use of a higher layer protocol. The backbone is able to deliver traffic to the output buffers at a rate much higher than the service rate of a LAN. On some occasions, the backlog of packets at an output buffer may build up quickly, leading to a buffer overflow condition, which then causes the loss of packets. This may arise when many sources happen to transmit their data all at once to the same destination bridge. It may also happen when the traffic generation rate of a source is greater than the service rate of the output buffer. Another situation is when the rate of accessing a LAN through the output buffer decreases due to an increase in the local load of that LAN. All the cases above contribute to a traffic build-up at the output buffer. Buffer overflow in the bridges will thus be a common phenomenon if no proper flow control is applied.
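The bridge described above can be made concrete with a short sketch. The following Python fragment is our own illustration (the class and method names are ours, not the paper's); it captures only the two buffer sets and the loss-on-overflow behaviour:

from collections import deque

class Bridge:
    # Sketch of the bridge of Figure 2: the input buffer holds packets
    # bound for the backbone; the output buffer holds packets already
    # carried by the backbone and waiting for access to the local LAN.
    def __init__(self, input_capacity=10, output_capacity=10):
        self.input_buffer = deque()    # served at the backbone access rate
        self.output_buffer = deque()   # served at the LAN access rate
        self.input_capacity = input_capacity
        self.output_capacity = output_capacity

    def from_lan(self, packet):
        # Packet generated on the attached LAN, destined for a remote LAN.
        if len(self.input_buffer) < self.input_capacity:
            self.input_buffer.append(packet)
            return True
        return False   # input buffer overflow: the packet is lost

    def from_backbone(self, packet):
        # Packet arriving over the MAN for the locally attached LAN.
        if len(self.output_buffer) < self.output_capacity:
            self.output_buffer.append(packet)
            return True
        return False   # output buffer overflow: the packet is lost

Lost packets are assumed to be recovered end-to-end by a higher layer protocol, so the sketch simply reports the loss.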

CONGESTION CONTROL MECHANISMS FOR LAN/MAN SYSTEMS

In the last few years, there has been a great interest in the study of congestion control mechanisms for systems of interconnected computer networks⁵. The main aim in this area is the definition of simple protocol mechanisms

specifically designed by considering the characteristics of novel transmission technologies. The development of LANs using coaxial cables in the mid-1970s, and thereafter of fibre optics, has spurred the development of high-speed networks. In turn, new communications protocols are necessary to better exploit the characteristics of these novel networks: high speed and low error rates.

Wong and Schwartz⁶ presented a simple, straightforward scheme to control the traffic entering the backbone network through the bridge. This scheme, known as the simple scheme, was based on a flow control mechanism proposed for the DQDB⁷. It simply allows the traffic to fill up the output buffer until its capacity is reached. A buffer-full (BF) message is then sent. The corresponding packets are thereafter held at their input buffers, until a space becomes available at the output buffer, which is indicated by the reception of a receive-ready (RR) message. The packets are then allowed into the output buffer on a FIFO basis, i.e. the first packet which was denied entrance due to the lack of space in the output buffer is the one that should enter the backbone network first, and so on. In the simple scheme, the bridge which has transmitted the BF message will transmit an RR message as soon as a buffer space becomes available. The bridge will soon become congested again once a packet is received, and very soon the bridge will transmit a BF message once more, and so on. This leads to an increase in the control overhead.
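The BF/RR ping-pong just described is easy to reproduce. The fragment below is our own minimal rendering of the simple scheme's signalling (the capacity and the message hook are illustrative assumptions, not taken from the original proposal):

class SimpleSchemeBuffer:
    # Output buffer under the simple scheme: broadcast BF when it fills,
    # RR as soon as a single space is freed.
    def __init__(self, capacity, broadcast):
        self.capacity = capacity
        self.queue = []
        self.broadcast = broadcast      # callable delivering a control message

    def on_arrival(self, packet):
        self.queue.append(packet)
        if len(self.queue) == self.capacity:
            self.broadcast('BF')        # all bridges must hold their packets

    def on_departure(self):
        was_full = len(self.queue) == self.capacity
        self.queue.pop(0)
        if was_full:
            self.broadcast('RR')        # one space free: transmissions resume

messages = []
buf = SimpleSchemeBuffer(capacity=3, broadcast=messages.append)
for p in range(3):
    buf.on_arrival(p)                   # the third arrival triggers 'BF'
buf.on_departure()                      # immediately triggers 'RR'
print(messages)                         # ['BF', 'RR']: one message pair per cycle

Under a sustained load, every arrival/departure cycle at a full buffer produces such a pair, which is precisely the control overhead the share-based scheme sets out to reduce.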


Wong and Schwartz⁶ also proposed another congestion control scheme. The scheme, referred to as the Longest Input-Queue First, states that new arrivals should be kept at the input buffers. This scheme further specifies that under normal conditions, at most one packet should be allowed to remain in the output buffer. Under normal conditions, it is understood that none of the input buffers is rapidly filling up, so that its buffer occupancy is close to its maximum capacity. By preserving spaces at the output buffer for those input buffers that may overflow, this scheme is expected to improve the overall system performance: a lower overall packet loss probability and a higher system throughput. The results of a study⁶ have shown a significant improvement in the throughput of the output buffer compared with that provided by the simple scheme. Indeed, this scheme is based on a basic principle: the ability to absorb temporary input data bursts by allowing the traffic streams to 'borrow' the output buffer space. This scheme therefore increases the overall system throughput at the expense of those input buffers that have a low activity.

Another scheme has recently been proposed by Inai and Ohtsuki⁸. It can be summarized as follows. At the input buffer, the highest priority in transmission is given to the packets which have as their destination the bridge with the largest available output buffer space. For this reason we call this scheme the Shortest Output-Queue First. Packets destined for a filled output buffer are kept at their input buffers. This scheme is based on a proposal made by Ephremides et al.⁹, which states that the optimal forwarding decision is to send a packet to the shortest queue. The idea is to guarantee fast packet delivery between uncongested bridges, even if congestion arises at some other bridges. This scheme requires each input buffer to know the exact amount of available space at each output buffer. In other words, each output bridge has to continuously inform all the input bridges of every new arrival or departure. Obviously, the overhead introduced by this requirement will affect the performance of the overall system.

In what follows, we propose a novel congestion control mechanism and study its performance by simulation. In our mechanism we apply a form of virtual partitioning of the output buffer space to the individual traffic streams. At the same time, however, available spaces may still be used by excessively demanding traffic. Hence, the service provided to a traffic stream exhibiting low activity is almost insensitive to the excessive demand from others. This allows us to provide a better service to all the traffic from different LANs, in terms of delay and packet loss probability. Furthermore, this new mechanism does not require each bridge to know the exact number of packets found at every output buffer. Instead, each bridge only needs to know when the number of packets in the output buffer has reached, or has sunk to, a given level. This mechanism, and the information required for its operation, are explained in the following sections.

SHARE-BASED CONGESTION CONTROL MECHANISM

Motivation

The previous section briefly described some relevant congestion control schemes for systems interconnecting LANs to a MAN via bridges. In this description, the main features as well as the drawbacks of each of the schemes were highlighted. In the three schemes, a bridge is required to continuously send information as soon as its output buffer becomes full or a buffer space is freed. This requirement may lead to an extensive exchange of information during periods of high activity. Another major drawback of these schemes is their inability to guarantee access to the system resources to all the traffic streams. For instance, the first two schemes give a higher priority to a highly active traffic stream over streams exhibiting low or moderate activity. These latter traffic streams will experience a relatively higher packet delay and blocking probability. The inability to control this kind of situation makes these schemes unsuitable for systems where resource sharing is an important feature. Thus, a simple control policy guaranteeing the sharing of the critical resources becomes essential.

The motivation for designing a new control scheme is twofold. First, the control scheme must guarantee access to the system resources to all the traffic streams; in particular, the sharing of the critical resource: the (output) buffer space. This is done by virtually partitioning the buffer space. In this way, traffic streams of low and moderate activity are not severely affected by highly active streams. Moreover, the scheme should be flexible enough to allow the highly active streams to use any unused buffer space. This allows enough flexibility and guarantees a minimum quality of service. Second, a control scheme for systems of interconnected networks must be simple. All the control schemes previously described introduce a lot of overhead by requiring the bridges to continuously exchange information. The simple scheme requires a bridge to continuously inform all the other bridges of its space availability during high load periods; in the Longest Input-Queue First scheme, the input buffer queue size of each bridge must be known by all the other bridges in the system; and in the Shortest Output-Queue First scheme, the number of spaces available in the output buffers must be communicated among all the bridges. This requirement, the continuous exchange of information among bridges, may severely affect system performance. The new mechanism proposed here does not require each bridge to know the exact output buffer occupancy of the other bridges. Instead, each bridge only needs to know that the number of packets in an output buffer has reached a certain level. This mechanism, and the information required for its operation, are explained in the following sections. A performance study of this new


scheme via simulation is also given. This study focuses on the ability of the scheme to provide a minimum quality of service and a flexible service. The success in meeting these goals is measured in terms of the delay and packet loss probability experienced by the different streams¹⁰.

Output buffer occupancy

The control scheme is exercised at the entry points of the backbone, i.e. at the input buffers. Every bridge uses two bits to keep track of the status of the (output) buffer space of each other bridge attached to the backbone. Each pair of bits reflects the status of the corresponding output buffer. For example, in a system comprising n bridges, each bridge requires 2(n − 1) bits to represent the status of the output buffers of all the other bridges. These two bits are named the FULL and EMPTY bits. Each of them can be in either of two states: the set state (represented by the Boolean value 1), or the reset state (represented by the Boolean value 0). An output buffer is said to be in the Almost Empty state if the EMPTY bit is set and the FULL bit is reset. The Normal state of an output buffer corresponds to both the FULL and EMPTY bits being reset. An output buffer is in the Almost Full state when the FULL bit is set and the EMPTY bit is reset. The state 11 (both bits set) is not used.

To keep these two status bits up to date, each bridge is responsible for keeping track of, and informing all the other bridges of, the state of its output buffer. For a bridge to know when to inform the other bridges of a state change of its output buffer, two thresholds on the output buffer occupancy have been defined: L_u, called the upper threshold, and L_l, known as the lower threshold. At the time of initializing the internetwork system, the state of all the output buffers is set to Almost Empty. Assume that, once in operation, the number of packets in the output buffer of a given bridge exceeds the lower threshold L_l. At this time, the bridge is required to inform all the other bridges of this situation by broadcasting an RE message. Upon receiving the message, the bridges reset the corresponding EMPTY bit, i.e. the output buffer state is set to Normal. If the number of packets in the output buffer keeps increasing, then as soon as the upper threshold is reached, the bridge broadcasts an SF message to inform all the other bridges of this situation. Upon receiving the SF message, the bridges set the corresponding FULL bit, and the state of the output buffer becomes Almost Full. In the third case, if the queue size of the output buffer sinks to the lower threshold L_l, the bridge must inform all the other bridges by sending an SE-RF message. Upon receiving the message, the bridges set and reset the corresponding EMPTY and FULL bits, respectively: the status of the output buffer is set to Almost Empty. Therefore, four transitions between the states are allowed.

Figure 3 State transitions at a source bridge

Table 1 Control messages

Message   Action on status bits
SF        Set FULL bit
SE-RF     Set EMPTY bit and reset FULL bit
RE        Reset EMPTY bit

The transitions discussed above are drawn in Figure 3. Table 1 shows the three different control packets used in the scheme. The actions taken by a bridge when the state of its output buffer changes, as discussed above, are summarized in the following algorithm:

IF a packet arrives at the output buffer THEN
BEGIN
    IF new queue size = L_u THEN send(SF);
    IF new queue size = L_l + 1 THEN send(RE);
END
ELSE IF a packet departs THEN
BEGIN
    IF new queue size = L_l THEN send(SE-RF);
END;

It is important to notice that the Normal and Almost Full states do not depend only upon the occupancy of the output buffer, i.e. the number of packets, but also on the way in which this occupancy is reached. An output buffer is set to the Normal state whenever the number of packets increases beyond the lower threshold L_l, and it remains there as long as the number of packets stays below the upper threshold L_u. As soon as this upper threshold is reached, the output buffer is set to the Almost Full state. The output buffer will remain in this state as long as the number of packets does not sink to the lower threshold.
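A minimal executable rendering of this algorithm, as run by the destination bridge for its own output buffer, could look as follows (Python is our choice of language; the broadcast hook is an assumption):

class OutputBufferMonitor:
    # Destination-side threshold notification: L_l and L_u are the lower
    # and upper thresholds; broadcast is assumed to reach every bridge.
    def __init__(self, L_l, L_u, broadcast):
        assert 0 <= L_l < L_u
        self.L_l, self.L_u = L_l, L_u
        self.size = 0
        self.broadcast = broadcast

    def on_arrival(self):
        self.size += 1
        if self.size == self.L_l + 1:
            self.broadcast('RE')        # leave Almost Empty, enter Normal
        if self.size == self.L_u:
            self.broadcast('SF')        # enter Almost Full

    def on_departure(self):
        self.size -= 1
        if self.size == self.L_l:
            self.broadcast('SE-RF')     # sink back to Almost Empty

log = []
mon = OutputBufferMonitor(L_l=1, L_u=9, broadcast=log.append)
for _ in range(9):
    mon.on_arrival()
print(log)                              # ['RE', 'SF']: two messages for nine arrivals

Note that, unlike the simple scheme, a long run of arrivals between the two thresholds generates no control traffic at all.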

Congestion control procedure

In the previous section, we described the way in which the bridges keep track of and exchange information regarding the state of their output buffers. In this section, we focus on the actions to be taken by the


bridges in order to exercise the control scheme. In the following, buffer capacity is measured in terms of the number of packets, and a buffer space is assumed to be equivalent to one packet. For the sake of clarity, we refer to bridge(s) i to denote the bridge(s) sending data from their input buffers to the output buffer of bridge j.

For the scheme to work, the output buffer space of each bridge is virtually allocated among all the other bridges. This is done by defining the share parameter S_ji, which defines the amount of output buffer space at bridge j (in packets) that bridge i is guaranteed to use. This parameter is set at the same time as the FULL/EMPTY control bits are initialized, and is defined for each pair (source bridge, destination bridge). In other words, the output buffer of bridge j is virtually divided into n − 1 partitions, the size of each partition being given by the parameter S_ji. Furthermore, a counter F-COUNT is required to keep track of the actual number of spaces of output buffer j being used by bridge i. Finally, two levels of priority are used for the transmission of data packets; we refer to these as high and low priority. The control packets RE, SF and SE-RF are all transferred using high priority.

The status bits of all the bridges are initially set to 01 (Almost Empty state), and the value of the counter F-COUNT is set to S_ji. Once in operation, a bridge i having data packets in its input buffer ready to be forwarded through the backbone will verify the status of the FULL/EMPTY bits of the destination bridge j. If the values of these two bits correspond to the Almost Empty state, the bridge sends the data packets to the destination bridge j using the high priority, leaving the value of F-COUNT unchanged. In case the number of packets in output buffer j increases beyond the lower limit L_l, bridge j informs all the bridges of this situation by issuing an RE packet. Upon receiving the RE packet, the state of the output buffer is set to Normal, and bridge i can proceed to transmit up to S_ji packets to bridge j using the high priority. For each packet transmitted, bridge i decrements F-COUNT. Assume that, at the end of transmitting its S_ji packets, the number of packets in the output buffer has not reached the upper threshold. In this case, bridge i may only proceed to send more packets to bridge j as long as the state of the output buffer of bridge j is not Almost Full. However, these extra packets are transmitted over the network using low priority. This mechanism allows any buffer space left over to be used by those bridges that have extra packets to transmit. Consider the case where bridge i has more than S_ji packets in its input buffer when the SE-RF message is received from bridge j, and at least one of the other bridges has fewer packets than its share value for this same bridge j. In this case, bridge i may make use of this space. However, bridge i transmits these additional packets with a lower priority than that used for transmitting the S_ji packets. The use of the lower priority guarantees that none of the additional packets will be transmitted before the transmission of all the S_jk packets, k = 1, 2, ..., n, k ≠ j, from all the bridges has been completed.
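The paper states these source-side rules only in prose; the following Python fragment is our own sketch of the per-destination state kept by a source bridge i for a destination bridge j (names such as share and f_count are ours):

ALMOST_EMPTY, NORMAL, ALMOST_FULL = 'AE', 'N', 'AF'

class SourceView:
    # State kept by source bridge i about the output buffer of bridge j:
    # the FULL/EMPTY status and the F-COUNT share counter (share = S_ji).
    def __init__(self, share):
        self.share = share
        self.f_count = share            # remaining guaranteed share
        self.state = ALMOST_EMPTY       # initial status bits are 01

    def on_control(self, msg):
        if msg == 'RE':                 # occupancy rose past L_l
            self.state = NORMAL
        elif msg == 'SF':               # occupancy reached L_u
            self.state = ALMOST_FULL
        elif msg == 'SE-RF':            # occupancy sank back to L_l
            self.state = ALMOST_EMPTY
            self.f_count = self.share   # the share is replenished

    def next_priority(self):
        # Priority for the next packet destined for j, or None to hold it.
        if self.state == ALMOST_FULL:
            return None                 # held at the input buffer
        if self.state == ALMOST_EMPTY:
            return 'high'               # F-COUNT left unchanged
        if self.f_count > 0:            # Normal state: spend the share first
            self.f_count -= 1
            return 'high'
        return 'low'                    # extra packets borrow unused space

Low-priority packets are served by the backbone only after all high-priority (share) packets, which is what guarantees each stream its S_ji spaces.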

computer

networks:

L Orozco-Barbosa

and N Hirzalla

In case the number of packets in the output buffer of bridge j reaches the upper threshold L_u, bridge j will broadcast an SF packet. Upon receiving the SF packet, bridge i will set the status of the output buffer of bridge j to Almost Full. Under this state, packets directed to bridge j are held at the input buffers of all bridges. In other words, no bridge is allowed to transmit any packet directed to the output buffer of bridge j. This allows the number of packets in the output buffer of bridge j to sink to the lower threshold. As soon as the number of packets in the output buffer of bridge j sinks to L_l, bridge j will inform all the bridges of this situation by sending an SE-RF packet. Upon receiving the SE-RF packet from bridge j, bridge i changes the state of bridge j to Almost Empty and sets F-COUNT equal to S_ji. Under this state, bridge i may proceed to forward packets to bridge j, as already explained. This scheme prevents a given bridge from making use of the whole space, thus allowing the other bridges to make use of their full share. This is of particular interest during periods of high activity. The three different output buffer states, and the corresponding actions to be taken, are depicted in Table 2.

Table 2 Output buffer states and actions at a source bridge

FULL, EMPTY bits   State          Action
0, 1               ALMOST EMPTY   Packets should be sent to the output buffer
0, 0               NORMAL         IF F-COUNT = S_ji THEN send share using high priority (see text);
                                  IF F-COUNT = 0 THEN packets may be sent to the output buffer using low priority (see text)
1, 0               ALMOST FULL    Packets are held at the input buffer

It is important to highlight the use of the share parameter S_ji, as well as of the upper and lower thresholds L_u and L_l. For a given bridge j, the summation of all the partitions, namely the buffer sharing, must be set so as to satisfy the following relation:

BufferSharing = Σ_{i=1; i≠j}^{n} S_ji ≤ (L_u − L_l),  for the output buffer of bridge j, j = 1, 2, ..., n    (1)

This relation indicates that the summation of all the partitions for the output buffer of bridge j must be at most equal to the buffer space between the lower threshold L_l and the upper threshold L_u. This eliminates the possibility of having to reject any of the S_ji packets that belong to the stream from LAN i, for i = 1, 2, ..., n, at the output buffer of bridge j. This can be explained as follows: when the queue size of the output buffer of bridge j sinks to the lower threshold L_l, an SE-RF message is sent. Every bridge i will be


allowed to send up to S_ji packets to bridge j. If all the bridges happen to send their full share S_ji, relation (1) guarantees that the number of packets to be sent, plus the number of packets already found at the output buffer of bridge j, will not exceed the upper threshold L_u. In other words, it is guaranteed that none of these packets will be lost due to output buffer overflow.

The parameter L_u is mainly used to avoid any overflow of the output buffers. It is well known that the ratio between the latency and the bandwidth plays a major role in the performance of high-speed networks. In this kind of network, the bridges may continue to transmit their packets while the output buffer is full and a control packet for stopping the transmission is on its way. Obviously, these packets will be lost at the output buffer due to lack of space. For this reason the parameter L_u is introduced; it can be set by noting that the maximum number of packets in transit to a given bridge j at any time must satisfy the following relation:

B_j − L_u ≥ ⌈p_j × t_prop / t_transp⌉    (2)

where B_j and L_u are the size and the upper threshold of output buffer j, respectively, expressed in packets, t_transp is the packet transmission time, and t_prop is the propagation delay between the two furthest nodes on the backbone. ⌈T⌉ is defined as the smallest integer not less than the real number T. p_j is the routing probability to bridge j, and is given by:

p_j = (1/(n − 1)) Σ_{i=1; i≠j}^{n} P_ij    (3)

where P_ij is the probability that a packet arriving at the input buffer of bridge i will be destined to LAN j. In turn, P_ij is given by:

P_ij = λ_ij / Σ_{j=1; j≠i}^{n} λ_ij    (4)

where λ_ij represents the mean arrival rate of packets from LAN i to LAN j, assuming that all the arrival processes are Poisson.

From equation (2) it can be seen that the routing probabilities play a major role in the selection of the value for L_u, and therefore in the operation of this mechanism. This can be shown by taking some typical values. Assume an interconnection system using a fibre optics backbone with a nominal bit rate of 100 Mbit/s and a network length of 50 km. Assume further the use of a packet length of 1 kbyte; this value is supported by most MAN and LAN standards¹¹,¹²,¹⁴,¹⁵. From these values, the maximum number of packets in transit is bounded by 3.12. The worst case would be to have this maximum number of packets directed to a particular output buffer at the same time. For a system consisting of n LANs, and assuming evenly distributed routing probabilities (i.e. P_ij = 1/(n − 1), for i, j = 1, 2, ..., n and i ≠ j), the mean number of packets in transit will be bounded by 1. This mechanism is therefore suitable for the current values of most MAN and LAN standards.
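The arithmetic behind these figures can be checked directly. In the sketch below, the propagation speed of 5.12 µs/km is our assumption, chosen so that the bound reproduces the 3.12 packets quoted above:

from math import ceil

bit_rate = 100e6                   # backbone rate, bit/s
packet_bits = 1024 * 8             # 1 kbyte packets
t_transp = packet_bits / bit_rate  # 81.92 us to transmit one packet
t_prop = 50 * 5.12e-6              # 256 us end to end over 50 km (assumed speed)

in_transit = t_prop / t_transp     # maximum number of packets in transit
print(round(in_transit, 2))        # ~3.12

n = 5                              # evenly distributed routing: P_ij = 1/(n-1)
p_j = 1.0 / (n - 1)
print(ceil(p_j * in_transit))      # 1: headroom B_j - L_u needed per bridge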


On the other hand, the latency of the backbone (access time plus propagation delay) can be assumed to be negligibly small compared with the packet service time on a LAN. In this case, the output buffer is always busy whenever there is at least one packet in the system. This is because whenever the output queue size sinks to L_l, an SE-RF message is sent immediately to the other bridges. Upon receiving this message, the bridges resume the transmission of packets towards that bridge. Hence, the time needed for the SE-RF message to reach the bridges, and for a packet to be transmitted and to reach its destination output buffer, is very small compared with the time needed for L_l packets to be transmitted onto the LAN. Therefore, new packets will reach the bridge even before its output buffer is completely emptied.
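As a rough numerical illustration of this claim, assume a 10 Mbit/s LAN (our assumption; the paper only states that the MAN is typically 10-20 times faster than a LAN):

packet_bits = 1024 * 8
t_prop = 256e-6                    # end-to-end backbone delay, s (as above)
t_backbone = packet_bits / 100e6   # 81.92 us to send one packet on the MAN
t_lan = packet_bits / 10e6         # 819.2 us to drain one packet onto the LAN

# SE-RF travels to the sources, then a fresh packet crosses the backbone;
# both happen well before the LAN drains even one of the L_l packets:
refill = t_prop + t_backbone + t_prop
print(refill < t_lan)              # True (593.92 us < 819.2 us)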

PERFORMANCE STUDY

The main objective of this study is to evaluate the mean delay and packet loss probability of the various streams, as well as the amount of overhead introduced by the scheme. The packet loss probability is defined as the ratio of the number of packets lost to the total number of packets generated, per traffic stream. The mean delay is defined as the average time elapsed from the time a packet is received at the input buffer of the source bridge to the time at which it leaves the output buffer of the destination bridge.
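In code form, the two metrics amount to the following trivial sketch (the sample figures are invented for illustration):

def loss_probability(lost, generated):
    # Per-stream ratio of packets lost to packets generated.
    return lost / generated

def mean_delay(samples):
    # samples: (arrival at source input buffer, departure from
    # destination output buffer) timestamp pairs, one per packet.
    return sum(out - arr for arr, out in samples) / len(samples)

print(loss_probability(12, 10_000))           # 0.0012
print(mean_delay([(0.0, 3.5), (1.0, 5.5)]))   # 4.0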


Simulation model

Figure 4 shows the queueing network model representing the internetwork system. Each input buffer is connected to a LAN via a single port. Multiple traffic streams having different destinations arrive at each of the various input buffers. It is assumed that packets arriving at an input buffer will be routed to any of the output bridges with a probability defined by equation (4). Arriving packets at the input buffer will either be stored or discarded, depending on space availability. For the purposes of comparison between different schemes, a Poisson packet arrival process is assumed. In practice, the rate and order of packet transmission depend not only upon the congestion control scheme being used, but also on the rate and the access method employed by the MAN. For simplicity, the MAN has been modelled as a single server with a deterministic service time of mean 1/μ_B¹³,¹⁶. Under normal conditions, a packet having been transmitted through the backbone network will arrive at the destination bridge, where it will be queued until it gains access to the LAN connected to the bridge. The service rate of the output buffer is determined by the rate of successful access to the LAN. Each output buffer is modelled as a single server with an exponential service time of mean 1/μ.
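The following is a much reduced sketch of this queueing model, collapsed to a single aggregated stream and one output buffer so that the modelling assumptions stay visible; the event engine, seed and horizon are ours, and the printed figure is not one of the paper's results:

import heapq, random

random.seed(1)
lam, mu_B, mu, K_o = 0.22, 4.0, 0.3, 10    # rates and output buffer size
T = 200_000.0                              # simulated time horizon

events = [(random.expovariate(lam), 'arrival')]
backbone_free = 0.0        # time at which the deterministic MAN server frees up
queue = 0                  # packets in the output buffer
generated = lost = 0

while events:
    t, kind = heapq.heappop(events)
    if t > T:
        break
    if kind == 'arrival':                  # Poisson arrivals at the input buffer
        generated += 1
        backbone_free = max(backbone_free, t) + 1.0 / mu_B   # deterministic service
        heapq.heappush(events, (backbone_free, 'to_output'))
        heapq.heappush(events, (t + random.expovariate(lam), 'arrival'))
    elif kind == 'to_output':              # packet reaches the output buffer
        if queue < K_o:
            queue += 1
            if queue == 1:                 # LAN server was idle: start a service
                heapq.heappush(events, (t + random.expovariate(mu), 'depart'))
        else:
            lost += 1                      # output buffer overflow
    else:                                  # 'depart': exponential LAN service ends
        queue -= 1
        if queue > 0:
            heapq.heappush(events, (t + random.expovariate(mu), 'depart'))

print('loss probability ~', lost / generated)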


Figure 4 Simulation model

Assumptions

Throughout this work we have assumed the use of a constant packet length, as well as error-free transmissions. To study the effect of a traffic stream from a LAN on the traffic streams incoming from other LANs sharing the same destination bridge, a system comprising five LANs (n = 5) has been judged appropriate. By choosing n = 5, it has been possible to evaluate the effect of traffic from all the other LANs. This should provide more accurate results by taking the average over a larger number of samples. The mean service rate of each output buffer has been set to μ = 0.3. In this kind of system, the capacity of the MAN is likely to be much higher than that of a LAN, typically 10-20 times higher². Therefore, μ_B has been set to 4.0. The study has focused mainly on the performance of the scheme when the output buffers (the potential system bottlenecks) are submitted to traffic loads of over 70%. Therefore, λ_ij = 0.055, for i ≠ j, which gives:

λ_j = Σ_{i=1; i≠j}^{n} λ_ij = 0.22    (5)

therefore:

ρ = 0.73    (6)

where ρ, according to queueing theory, is defined as the traffic intensity (λ/μ). To study the effect of a traffic stream requesting increasingly more output buffer space (a misbehaving traffic stream) on other (compliant) traffic streams, the mean rate of the traffic stream going from LAN 1 to LAN 2 (λ_12) has been made to vary from 0.05 to 0.15. For the sake of clarity, this traffic stream is referred to as T_12. As the traffic load λ_12 is increased, the overall generation rate of the packets going to LAN 2 will exceed the access rate of traffic to LAN 2 through the output buffer. A backlog of packets will be created in the output buffer of the bridge interconnecting the MAN to the LAN. The size of the output and input buffers at each bridge has been set to K_o = K_i = 10 packets. By assuming a backbone network length of 50 km and a typical packet length of 1 kbyte, the upper and lower buffer thresholds have been set to L_l = 1 and L_u = 9. Under these assumptions, the mean and maximum number of packets in transit directed to a particular bridge from the furthest bridge are one and three, respectively. By assuming that all the traffic streams have the same traffic characteristics, and that λ_ij is the same for all streams, the output buffers of all the bridges have been virtually divided into (n − 1) equal partitions, by setting:

S_ji = (L_u − L_l)/(n − 1) = 2,  for i, j = 1, 2, ..., n and i ≠ j    (7)
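These settings can be checked against equations (5)-(7) directly (a small consistency sketch using the values stated above):

n = 5
lam_ij = 0.055                  # identical rate for every stream i -> j
mu = 0.3                        # output buffer service rate
L_l, L_u = 1, 9                 # lower and upper thresholds

lam_j = (n - 1) * lam_ij        # equation (5): total load on one output buffer
rho = lam_j / mu                # equation (6): traffic intensity
S_ji = (L_u - L_l) / (n - 1)    # equation (7): equal virtual partitions

print(round(lam_j, 3))          # 0.22
print(round(rho, 2))            # 0.73
print(S_ji)                     # 2.0
assert (n - 1) * S_ji <= L_u - L_l   # relation (1) is satisfied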

Results and discussion

The performance results of the proposed congestion control mechanism are compared with those of a system without any congestion control in place, and with those of the simple scheme. It is worth noticing that the simple scheme is equivalent to the Shortest Output-Queue First scheme when used in a high-speed backbone network. Moreover, it is unsuitable to compare our scheme with the Longest Input-Queue First scheme, because the main goals of the two schemes are incompatible. The goal of the Longest Input-Queue First scheme is to maximize the system throughput. This is done by favouring some traffic streams over others, which clearly opposes the main goal of the proposed scheme. For the sake of clarity in presenting the results, the system condition without any congestion control is referred to as Scheme A, the one implementing the simple scheme as Scheme B, and the Share-Based Control Mechanism as Scheme C. The results shown are all taken as the average of many points, either from the same simulation run or from different runs. Each simulation was executed to produce around 5 × 10⁵ packets per traffic stream, i.e. around 10⁷ packets in total. The simulation model has been implemented using the simulation language QNAP2¹⁷.

Figures 5 and 6 show the packet loss probability for traffic T_12 and T_i2, where i = 3, 4 and 5, as a function of λ_12.

Figure 5 T_12 packet loss probability versus λ_12

Figure 6 T_i2 packet loss probability versus λ_12

In Scheme A, packets at the input buffer are transmitted to the destination bridge as soon as they gain access to the backbone. By increasing λ_12, packets that belong to traffic T_12 and T_i2 are rejected by the output buffer of bridge 2 due to lack of space. This is reflected by an increase in the packet loss probabilities for traffic T_12 and T_i2. For Schemes B and C, packets destined for congested output buffers are held at their input buffers. This has a significant impact on the packet loss of traffic T_12, and also on that of traffic T_i2. By comparing Schemes B and C, it can be seen that Scheme C better restrains the misbehaving traffic T_12, decreasing its effect on all the other traffic streams. As can be seen in Figure 5, Scheme B fails to control the loss experienced by the traffic T_i2 for high loads of T_12,



whereas Scheme C limits the loss of T_i2 by penalizing the misbehaving traffic. This effect arises because, in Scheme C, under higher demands on an output buffer the transmission towards that buffer is stopped, and it is resumed later by allowing each bridge to send its share of packets before letting the bridges proceed with the transmission of extra packets. In Scheme B, packets are released in global FIFO order among all the inputs. This allows the more demanding traffic to get the lion's share of the output buffer. This does not come without a price paid by the other traffic sharing the same output buffer. Figure 7 shows the price paid by traffic T_i2 in terms of mean delay. Moreover, the delay experienced by traffic T_i2 in Scheme B is higher than that of traffic T_12. In Scheme C, on the other hand, traffic T_i2 experiences a lower mean delay compared with traffic T_12 and with Scheme B, which seems more appropriate as the traffic load of T_12 increases.

Figure 7 Mean delay versus λ_12


From the above results, the performance figures of Scheme C show a better packet loss and mean delay for traffic sharing the same output buffer with highly demanding traffic. The scheme punishes the sources which generate increasingly more traffic.

In Schemes B and C, packets destined for a congested output buffer may wait in the input buffer for some time. Those packets may build up in the input buffer and eventually block the entrance for any new arrival, whether its destination output buffer is full or empty. This results in almost equal packet loss probabilities for all the traffic streams arriving at the same input buffer, i.e. from the same LAN. To reduce this effect, the number of packets destined for a given bridge could be restricted to a maximum number. This saves some space for the new arrivals at the input buffer. Figure 8 shows the packet loss probability for traffic T_1j (j = 3, 4, 5) and T_ij (i = 2, 3, 4, 5; j = 1, 3, 4, 5 and i ≠ j), with and without input restrictions; the maximum number of packets has been set equal to 7. To explain this, consider traffic T_1j for the Share-Based Scheme in Figure 8. The packet loss probability without input restrictions is much higher than that with input restrictions. This is due to the fact that packets belonging to T_12 fill the input buffer of bridge 1, blocking the entrance against new arrivals of either T_1j or T_12. It is also worth noting that T_12 in both schemes will experience a slightly higher loss due to this restriction. The lower the maximum number is set, the higher the loss for T_12 and the lower the loss for T_1j will be.

Figure 8 Restricted input buffer occupancy

The amount of overhead required for a control scheme to operate is another important feature when studying the performance of such schemes. In this study, the control messages are assumed to be sent using separate control packets. It has further been assumed that they are of fixed size, like the data packets, and that there is always buffer space available for them. Let R_i be the ratio of the number of control packets sent to the number of data packets received by bridge i from the other bridges. Figure 9 shows R_2 versus λ_12 for both Schemes B and C.



Under low loads, R_2 for Scheme C is higher than that for Scheme B. This is because Scheme C tries not to leave the output buffer idle: under this scheme, the bridge is required to send an SE-RF message, indicating that there is enough space in its output buffer, before the buffer becomes empty. However, this higher overhead may not be a problem compared with the high value of R_2 for high loads in Scheme B. This higher value of R_2 arises because, when the output buffer of bridge 2 becomes full, the bridge sends a message to the other bridges to stop the transmission, and another message to resume the transmission when a space becomes available. During high loads, the output buffer will soon be filled again, and the bridge is required to send these two messages again, and so on. This feature makes Scheme C particularly useful: the overhead introduced by the scheme during periods of high demand for resources is remarkably low.

It is interesting to analyse the distribution of the different control messages. In Scheme B, two messages are needed: Buffer Full (BF) and Receive Ready (RR). Under this scheme, a bridge issuing a BF message will transmit an RR as soon as a space becomes available. Therefore, 50% of the messages used by Scheme B are BF and 50% are RR. Under a heavy load condition, R can reach a ratio as high as 2:1. This kind of situation may arise when each packet arriving at the output buffer causes the bridge to send two messages: one when the packet fills the buffer, and another when it moves forward and leaves a space behind. In Scheme C, three different control packets are used: RE, SE-RF and SF. An RE packet is issued to indicate that the queue size of the output buffer has increased beyond the lower threshold L_l; the output buffer is then said to enter the Normal state. The SF packet is sent by bridge j if the number of packets in its output buffer reaches the upper threshold; the output buffer is set to the Almost Full state, and the bridges hold in their input buffers any data packet directed to bridge j. Finally, an SE-RF packet is sent by bridge j to indicate that the number of packets in the output buffer has sunk to the lower threshold. Upon receiving this control packet, the


bridges must set F-COUNT to S_ji; the bridges are then allowed to resume the transmission of data packets destined for bridge j. Figure 10 shows the percentage of each of the three messages out of R_2 for Scheme C versus λ_12. As λ_12 increases, more SF messages and fewer SE-RF and RE messages are sent.

Figure 10 Percentage of SF, SE-RF and RE messages

It is worth looking at the amount of overhead introduced by the Shortest Output-Queue First scheme and the Longest Input-Queue First scheme, assuming the use of a procedure similar to that considered above, i.e. the bridges communicate the control information using explicit control packets. In the Shortest Output-Queue First scheme, R is always equal to 2, since a bridge is required to inform all the other bridges of any change in the output queue size. Thus, a bridge has to send a control message upon receiving a packet, and a second one just after transmitting a packet onto the destination LAN. In addition, the size of each output buffer queue has to be known by all the bridges in the system.



In the Longest Input-Queue First scheme, R may be equal to, or even greater than, 2. Each bridge needs to continuously send information regarding the occupancy of its input buffer. To keep all the information regarding input buffer occupancy up to date, the bridges have to send a control packet for every data packet arrival, and another control packet for every data packet departure. Furthermore, whenever a data packet departs leaving the output buffer of a bridge empty, the corresponding bridge has to request the transmission of a data packet from the input buffer with the longest queue.

CONCLUSION

In this paper a new congestion control scheme has been presented. This scheme has been designed to be used in systems interconnecting LANs to a MAN via bridges. The design aims to define a simple mechanism that guarantees to all the inter-LAN traffic streams a share of the system resources. This is done by virtually partitioning the buffer space of the bridges interconnecting the MAN to the LANs. A performance study of the proposed scheme via simulation was carried out. This study focused on the mean delay and packet loss probability. It also comprised an evaluation of the amount of overhead needed for this scheme to operate. The results show that this scheme meets its main objective: to offer a flexible service and to guarantee the use of the critical system resources (buffer space) to all the traffic streams. We are currently carrying out a performance evaluation of this scheme under bursty traffic conditions. Future multi-rate applications will be characterized by a wide diversity of communications service requirements. For instance, the traffic generated by digital voice sources is characterized by alternating idle and active periods, i.e. a bursty traffic pattern. This kind of application will require the use of communication services able to guarantee short delays and low loss probabilities. It is therefore important to explore the quality of service that this kind of control scheme may provide to such applications.

ACKNOWLEDGEMENTS

This work has been supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) under grant number OGPIN 334. The second author has also been supported by the Canadian International Development Agency (CIDA) and the Jordan University of Science and Technology (JUST).

REFERENCES

1 Biersack, E W 'Annotated bibliography on network interconnection', IEEE J. Selected Areas in Commun., Vol 8 No 1 (January 1990) pp 22-41
2 Vallée, R, Orozco-Barbosa, L and Georganas, N D 'Interconnection of token rings by an IEEE 802.6 metropolitan area network', Proc. IEEE INFOCOM '89, Ottawa, Canada (1989) pp 934-942
3 IEEE Standard 802.1 Overview and Architecture (October 1988)
4 IEEE Standard 802.1 MAC Bridges (September 1988)
5 Gerla, M and Kleinrock, L 'Congestion control in interconnected LANs', IEEE Network, Vol 2 No 1 (January 1988) pp 72-76
6 Wong, L and Schwartz, M 'Flow-control in metropolitan area networks', Proc. IEEE INFOCOM '89, Ottawa, Canada (1989) pp 826-833
7 Attachment #25, Minutes of IEEE 802.6 MAN, Vancouver, Canada (July 1987)
8 Inai, H and Ohtsuki, K 'Performance study of congestion control for high-speed backbone networks', Comput. Commun., Vol 15 No 7 (September 1992) pp 429-437
9 Ephremides, A, Varaiya, P and Walrand, J 'A simple dynamic routing problem', IEEE Trans. Auto. Control, Vol 25 No 4 (August 1980) pp 690-693
10 Fendick, K, Mitrani, D, Rodrigues, M, Seery, J and Weiss, A 'An approach to high-performance, high-speed data networks', IEEE Commun. Mag. (October 1991) pp 74-82
11 ISO 9314-1 Information Processing Systems - Fibre Distributed Data Interface (FDDI) - Part 1: Token Ring Physical Layer Protocol (PHY) (April 1989)
12 ISO 9314-2 Information Processing Systems - Fibre Distributed Data Interface (FDDI) - Part 2: Token Ring Media Access Control (MAC) (May 1989)
13 IEEE 802.6 Distributed Queue Dual Bus (DQDB) Subnetwork of a Metropolitan Area Network (MAN) (July 1991)
14 ISO/IEC 8802-3: 1990 (ANSI/IEEE Std 802.3-1990) Carrier Sense Multiple Access Method and Physical Layer Specifications (1990)
15 IEEE Std 802.5-1989 (ANSI) Token Ring Access Method and Physical Layer Specifications
16 Hahne, E L, Choudhury, A K and Maxemchuk, N F 'DQDB networks with and without bandwidth balancing', IEEE Trans. Commun., Vol 40 No 7 (July 1992) pp 1192-1204
17 Potier, D New Users' Introduction to QNAP2, Technical Report No. 40, INRIA, France (October 1984)
