Call admission control and load balancing for voice over IP


Performance Evaluation 47 (2002) 243–253

David Houck∗, Gopal Meempat¹
Lucent Technologies, 101 Crawfords Corner Road, Room 4L431, Holmdel, NJ 07733, USA

Abstract

IP networks are traditionally designed to support a best-effort service, with no guarantees on the reliable and timely delivery of packets. With the migration of real-time applications such as voice onto IP-based platforms, the existing IP network capabilities become inadequate to provide the quality-of-service (QoS) levels that end-users are accustomed to. While new protocols such as DiffServ and MPLS allow some amount of traffic prioritization, guaranteed QoS requires call admission control. This paper reviews several possible implementations and shows simulation results for one promising method that makes efficient use of the network and is scalable to large networks. © 2002 Published by Elsevier Science B.V.

Keywords: QoS; Congestion management; VoIP

1. Introduction

IP networks are traditionally designed to support a best-effort service, with no guarantees on the reliable and timely delivery of packets. With the migration of real-time applications such as voice onto IP-based platforms, the existing IP network capabilities become inadequate to provide the quality-of-service (QoS) levels that end-users are accustomed to. While some limited QoS capabilities based on isolating voice traffic from data are currently evolving (e.g., DiffServ [1], MPLS [2]), no standard capabilities exist to perform admission control as in TDM networks. Even with traffic prioritization, there is the risk of admitting more voice calls than the available bandwidth can support, which can lead to severe QoS degradation for the admitted calls. An important point to note here is that all calls in progress, not just the recently admitted calls, will experience packet loss. Beyond the poor QoS this creates, it could lead to substantial call abandonments and reattempts, potentially overloading the signaling network and/or the call processing capacity of the network.

To address this problem, we present a novel methodology for call admission control (CAC) and load balancing (LB) for voice-over-IP services. Our approach employs a network function referred to as the Call Admission Manager (CAM) that keeps track of the voice occupancy levels across the IP network. Based on this model, we will first present an exact algorithm to perform the control functions, and then present an inexact variant that allows scalability to large-scale networks. The inexact algorithm we describe employs a measurement-based strategy, where the IP routers directly monitor the voice traffic volumes on the links in order to assist in inferring network congestion. The framework presented is suitable for packetized voice services over any IP network that supports DiffServ, so that voice traffic can be isolated from data traffic. In addition, the network may optionally support MPLS explicit routing; this would allow a load-balancing capability in addition to CAC, thereby improving network efficiency.

This paper assumes that the network is a single DiffServ domain owned and operated by a single service provider, and that QoS within this network is the key item to manage. When multiple service provider networks are connected together, there are clearly many additional issues to overcome in providing end-to-end QoS. This problem is currently being studied within the ETSI/TIPHON standards organization (see [5]), which addresses issues associated with end-to-end signaling, control, and management of QoS for VoIP and multimedia services. A key element for managing QoS across networks is the interconnect function (ICF), the functional entity that interconnects transport domains with entities outside of the transport domain. It polices authorized media flows between two transport domains to ensure they are consistent with the QoS policy specified by the relevant manager. TIPHON also defines a Transport Resource Manager (TRM) that manages the QoS within each network; the CAM that we propose is a TRM in the TIPHON terminology.

Section 2 presents the problem that is being addressed and several potential QoS solutions that have been proposed. Section 3 describes our method of managing QoS using measurements and admission control in a network that supports both DiffServ and MPLS.

∗ Corresponding author. Tel.: +1-732-949-1290. E-mail address: [email protected] (D. Houck).
¹ Gopal Meempat is now at Velio Communications, Inc., 210 Carnegie Center, 5th floor, Princeton, NJ 08540, USA.

0166-5316/02/$ – see front matter © 2002 Published by Elsevier Science B.V. PII: S0166-5316(01)00066-9
Section 4 presents some performance results related to tuning the key parameters associated with the proposed approach, and Section 5 summarizes the paper.

2. Potential approaches to enable VoIP QoS

First, it is useful to describe the problem that we are addressing. Fig. 1 illustrates one way a service provider can use an IP network for telephony services. Voice calls come through the PSTN circuit-switched network to a gateway (GW), where the voice signal is packetized and sent onto the IP network. The Softswitch box shown here combines the H.323 functionality of a signaling gateway, a media gateway controller, and a gatekeeper. The Softswitch controls the gateways via a "control-to-media" protocol, the standard protocol being H.248. (Predecessors to this protocol are IPDC and MGCP.) The actual bearer traffic is encoded in the gateways and carried over the IP network. The combination of Softswitches, gateways, and the IP network replaces the Toll/Tandem switches in traditional circuit-based networks.

Fig. 1. Generic VoIP architecture.

We now describe a call setup message flow, loosely based on H.323, though the specific protocol is not important here. At call setup time, the PSTN switch seizes a trunk to the GW and signals the near-end Softswitch with the call information. The Softswitch determines which zone the called number is in and sends a setup message to the terminating Softswitch that includes the IP address of the originating GW. The terminating Softswitch identifies the terminating GW and signals the far-end PSTN switch with a trunk ID from the GW. The PSTN switch completes the call to the number dialed. In parallel, the terminating GW sends a message over the IP network to the originating GW containing its own IP address, to set up an RTP path between them.

The key point to observe here is that no check is made of the congestion status of the backbone IP network before the call is set up. Even worse, under congestion, all calls in progress, not just the newly admitted calls, will experience packet loss. It has been shown that voice quality deteriorates quite rapidly with packet loss. Although the specifics depend on the coding algorithm used, the packet size, and the burstiness of the packet loss, there is a point, at say a 10–20% packet loss, where the speech becomes completely unintelligible. Thus, unlike data traffic, if one is interested in maximizing the total utility of the network, then admission control for VoIP is necessary [6]. Since voice quality can deteriorate at as little as 1% packet loss, competition forces most service providers to aim for relatively low packet loss objectives. In this paper, we focus on controlling packet loss as a measure of QoS. Delay and jitter are also important QoS criteria.
Jitter is managed by adding jitter buffers in each of the gateways to account for the majority of delay variation in the network. Thus, delay consists of packetization delay, processing delays, propagation delays, the jitter buffer delay, and queueing delays, with a standard requirement of 150 ms end-to-end. The first four of these are independent of network load and can be managed in advance. The methods described below mostly manage the utilization of the links as a surrogate for reducing both packet loss and queueing delays. With high-speed links and proper buffering and scheduling algorithms, both delays and packet loss can be kept small as long as utilization for the voice service class is kept below 100% of its allocated bandwidth.

We shall now briefly review some potential methods that might be used to achieve this goal and enable voice quality in IP networks. These may be broadly categorized as (1) capacity over-provisioning, (2) admission control via packet delay measurement, (3) per-call bandwidth reservation, and (4) pre-provisioned trunking, leading to (5) the completely shared voice sub-network approach. The latter encompasses the methodology that we seek to develop in this paper.

2.1. Capacity over-provisioning

Over-provisioning is predicated on the notion that bandwidth is available in abundance, and that voice would constitute a small fraction of the total network traffic. By allocating adequate slack capacity on each link, the voice traffic can, in principle, be made immune to the effects of traffic congestion, and no admission control would be necessary. However, no systematic procedures exist to determine the over-provisioning factors needed to meet the desired QoS. One strategy would be to assign voice capacities on the links by extrapolating from the finite number of ports at the gateways and the likely routing patterns.
Some examples that we have analyzed using this strategy indicate over-provisioning factors in the range of 3–5 even in relatively small networks. Note further that our analysis did not account for uncertainties in routing and/or failures. With adequate slack capacities assigned to cover such factors, and when scaled to larger networks, the over-provisioning factor may be expected to be much larger than these figures. In addition, other network access technologies such as cable, DSL, and line-side access gateways do not provide any natural port limits on the offered traffic, so strict over-provisioning is no longer meaningful. In summary, the over-provisioning approach, while providing a simple initial solution, does not scale well with greater penetration of the VoIP service and does not handle failure scenarios well; hence the need for some form of call management.

2.2. Admission control via packet delay measurement

With this approach, a user wishing to establish a voice call first sends a preamble packet stream, such as a ping, to the intended destination, and measures the delay encountered. If the measured delay exceeds a threshold, it is interpreted as excessive congestion in the network, and the call is blocked (or sent to the PSTN); otherwise the preamble is replaced with the voice payload and the call proceeds. (This is related to the work by Gibbens and Kelly [7], in which packets encountering congestion are marked and calls may be blocked if congestion is high enough.) A limitation of this approach is that the behavior encountered by a limited number of preamble packets cannot provide estimates of actual performance with statistical confidence. More importantly, in a converged network with multiple services, what is observed at call initiation is no indication of what can happen over the duration of the call. For these reasons, the delay measurement approach appears to be rather weak in providing voice QoS, though its simplicity is appealing.

2.3. Per-call bandwidth reservation

Yet another strategy would be to establish a complete path end-to-end at the initiation of each call, with adequate bandwidth reserved along each link, and tear it down upon termination. A protocol such as RSVP [3] may be used to support the associated per-call signaling. An incoming call is blocked if there is inadequate bandwidth on any link along the chosen path. This approach can in principle provide absolute guarantees on voice QoS and, furthermore, allow efficient utilization of the bandwidth resources. However, the signaling and processing at the intermediate routers associated with per-call path setup/teardown imposes a burden on their processing capacities. Also, while some ATM switches have been designed for high SVC rates, the state of the art in IP technology is far from being able to support per-call bandwidth reservation, so this is not a practical solution.

2.4. Pre-provisioned trunks

One effective strategy to implement VoIP call management is to set up label-switched paths (LSPs) between voice gateways using MPLS, and then reserve bandwidth for each path along each link that it traverses. Since each path now constitutes a trunk group, it is easy to count the number of calls/bandwidth on each path and block incoming calls when the path is fully utilized. As with the SVC approach, this scheme offers the advantage of providing an absolute level of QoS. Though not as bandwidth-efficient as the SVC approach, it is feasible from the processing perspective.

It does, however, have some limitations in regard to scalability. Note that the number of paths to be set up grows as the square of the number of voice gateways, and can become quite large as the network and service penetration grow. In particular, the static fragmentation of link bandwidth among a large number of paths would lead to poor statistical multiplexing and a quick exhaustion of the available capacity. Moreover, with the explosion in the number of trunks, the trunk servicing and provisioning tasks would also become formidable. In addition, maintaining an accurate count of the number of calls in progress is prone to several types of errors. This may be compounded by calls with different compression rates, and by fax or modem calls that may change their codec from a compressed to an uncompressed mode. It is also difficult to predict the bandwidth requirements for calls with silence suppression, since there may be variability in codec parameters that affect the way background noise and echo are handled, not to mention different speech activity factors.

This method is somewhat similar to the traditional circuit-switched network, in which dedicated capacity (trunk groups) is provisioned between pairs of switches. In circuit networks the scalability problem with respect to the number of trunk groups is managed by the use of tandem and hierarchical switching, in which intermediate switches get involved in call processing. One could add a hierarchy of Softswitches in a similar manner and add all the routing algorithms of a circuit switch, but that would add a tremendous cost and complexity that we are trying to avoid. To circumvent these shortcomings, a scheme with similar functionality but not requiring per-path bandwidth allocation would be ideal. Such a scheme, identified above as approach (5), does exist and is described in the remainder of this paper.
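To make the scalability issue concrete, the pre-provisioned trunking approach of Section 2.4 amounts to per-LSP call counting. The sketch below is our own minimal illustration (the class and capacity figures are hypothetical, not from the paper); note how the trunk table grows as the square of the number of gateways:

```python
# Minimal sketch of pre-provisioned trunking: each GW-to-GW LSP is a
# trunk group with a fixed call capacity, and calls are counted and
# blocked per path. Names and numbers are illustrative only.
class TrunkGroup:
    def __init__(self, capacity):
        self.capacity = capacity  # max simultaneous calls on this LSP
        self.in_use = 0           # calls currently in progress

    def admit(self):
        """Admit a call if the path has a free trunk; else block it."""
        if self.in_use < self.capacity:
            self.in_use += 1
            return True
        return False

    def release(self):
        """Call teardown frees one trunk."""
        self.in_use = max(0, self.in_use - 1)

# One trunk group per ordered gateway pair: the table grows as the
# square of the number of gateways, which is the scalability problem
# discussed above.
gateways = ["gw1", "gw2", "gw3"]
trunks = {(a, b): TrunkGroup(capacity=2)
          for a in gateways for b in gateways if a != b}
```

Because each trunk group's capacity is fixed in advance, idle bandwidth on one path cannot absorb overload on another; the shared sub-network approach of Section 3 removes exactly this fragmentation.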

3. VoIP call management based on a completely shared voice sub-network

The approach proposed in this paper, while achieving a level of QoS on par with the pre-provisioned trunking approach, is more scalable, since we employ a link congestion metric rather than a path congestion metric, and more bandwidth-efficient, since link bandwidth is fully shared among all paths. Our approach requires that links are either dedicated to voice or that a certain amount of bandwidth is reserved for voice, e.g., via DiffServ and class-based queueing (CBQ). In effect, this isolates voice from other contending traffic, leading to the notion of a voice sub-network. We shall henceforth focus our attention on the voice sub-network carved out by virtue of DiffServ.

In the general case, the proposed scheme provides an LB feature in addition to admission control, provided that MPLS explicit routing is available. For the general case, we assume multiple spatially diverse MPLS explicit paths (unidirectional) dedicated to voice, set up from each edge router to every other edge router that connects to one or more voice gateways. Path redundancy serves to improve reliability and enables the load-balancing feature of the proposed control scheme. We propose, however, functional independence between MPLS and traffic prioritization via DiffServ. To be specific, there is no bandwidth reservation per LSP as in the trunk reservation approach. Instead, a certain aggregate bandwidth is reserved for voice on each link, and the LSPs that traverse a particular link are allowed to fully share the capacity reserved for voice on that link. In this way, different traffic classes could share an LSP if desired from an operational perspective. One strategy for integrating DiffServ and MPLS as described above is to perform packet forwarding based only on the MPLS label, with traffic prioritization being based on the DiffServ class.
At the packet level, today's routers can be configured to give each service class, including voice, a bandwidth allocation using weighted fair queueing (WFQ), or they can be configured to give voice strict priority and the other classes a bandwidth allocation with WFQ. All priorities are non-preemptive. With the above arrangement, it becomes possible to perform the CAC and LB functions by tracking the voice occupancy of each link, rather than of each LSP. Thus the complexity of our algorithm is a function of the number of physical links, rather than the number of paths, which grows as the square of the number of edge nodes/gateways.

Note that multiple gatekeepers across the network domain admit calls independently and thereby affect the voice occupancies of the links. To implement an orderly control, it is therefore necessary to maintain and provide a consistent view of the usage status of the links to all gatekeepers. This is achieved by introducing a new functional module, which we refer to as the CAM. The CAM maintains a complete topology image of the network domain, including the allocated voice capacity and occupancy on each link, and the configuration of each LSP in terms of the links that it traverses. The CAM in effect provides all the intelligence required for the gatekeepers to decide whether to admit an incoming call and which of the alternate paths to use. We will first examine an exact form of CAC and LB, followed by an inexact variant. These alternatives provide different tradeoffs between bandwidth efficiency and scalability/ease of implementation. In the remainder of this section we examine them a little more closely.

3.1. An exact algorithm for CAC and LB

With the exact approach, a tight coupling exists between the gatekeepers and the CAM. Whenever a new call request is presented to a gatekeeper, it consults the CAM, providing the source and destination nodes. The CAM identifies the best path from among the alternatives available for the particular source–destination pair by comparing the occupancy levels of their bottleneck links; this constitutes the LB decision. It then compares the bottleneck link occupancy of the chosen path against a threshold. If the threshold is exceeded, the gatekeeper is instructed to block the call. Otherwise the call is admitted and the CAM sends the identity of the chosen path to the gatekeeper, which proceeds to establish the call on the selected path.
This constitutes the admission control decision. Upon admission, the CAM also updates the occupancy metrics of the links in the selected path to reflect the addition of the new call. Finally, whenever a call terminates, the relevant gatekeeper alerts the CAM, which updates the affected link occupancy metrics to reflect the change.

The above set of procedures allows precise control of admission and LB. It does, however, require per-call communication between the gatekeeper and the CAM, and the associated processing limits scalability to some extent as the network grows. Thus there is interest in inexact alternatives that can overcome this drawback. The basic idea behind the inexact approach is that database updates as well as control decision computations are performed on a periodic, rather than a per-call, basis. In particular, the CAM link metrics are updated once every T seconds, based on the current view of the network. Next, the LB and CAC decisions for all source–destination pairs are computed and disseminated to the gatekeepers. These decisions are used for admitting all the calls arriving until the next update is received. The update interval T may be chosen so as to make the approach scalable to networks of any size; the processor load is no longer dependent on the call volume. However, the decisions are now based on information that could be outdated, depending on the duration of T. This leads to two forms of risk: (a) a call may be blocked when there is adequate bandwidth on one of the available paths, and (b) a call may be admitted on a path with inadequate bandwidth on its bottleneck link. The latter drawback can be mitigated to any desired degree by maintaining adequate slack capacity (guard band) on the links.
In other words, for a given network size and level of QoS (e.g., link overload probability), there exists a tradeoff between the duration of the update interval T and the slack capacity. The minimum value of the update interval depends on the capabilities of the various network elements involved in the control updates and decision making. A rigorous quantitative analysis is necessary to quantify these tradeoffs in realistic network scenarios; some results pertaining to this are reported in Section 4.

3.2. A measurement-based algorithm for CAC and LB

With the measurement-based approach, per-call messages are no longer required to update the link occupancies. Instead, the IP routers track the occupancy levels of the voice bandwidth on the incident links during each update period. In particular, each router stores a byte count by DiffServ class in an MIB that is polled by the CAM once every T seconds. The CAM can then convert this into an equivalent occupancy measure and directly update the link metrics. Once the link updates are performed, the CAM proceeds to compute the decision variables for each GW-to-GW pair (equivalently, between pairs of edge routers) for the current update cycle and disseminates the information to the gatekeepers.

3.3. Partial distribution of the decision update computation

The inexact algorithms allow a partial distribution of the decision update processing, to further improve scalability and reliability. Specifically, the CAM may simply serve as an aggregation point for the link status updates without performing the path decision updates. Instead, following the link status update during each update epoch, the CAM relays the updated link utilizations to the gatekeepers. Each gatekeeper uses this information to compute the limited set of decision variables applicable only to the source–destination node pairs between which it admits calls. The advantage of the distributed approach is that the computational burden on each gatekeeper is less than on the CAM in the centralized mode, by a factor of K, where K is the number of gatekeepers in the administrative domain.
A second advantage is superior robustness: the failure of the control functions at one gatekeeper affects only the gateways that it manages, whereas the failure of the CAM in the centralized mode can affect the whole network. In addition, the CAM does not need to know the MPLS routing used by each node pair. It should also be noted that, although we describe this algorithm as one in which all measurements and policy decisions are set at the beginning of a decision epoch, we actually expect that polling of the routers and updating of link and path status will occur on a continuous rolling basis. This may allow for smoother network performance and also makes the method more scalable; multiple polling engines could independently update a subset of the links on independent schedules.

3.4. Extension of the CAC capability to networks employing OSPF routing

The admission control feature of the proposed scheme can be extended to OSPF routing scenarios. Since OSPF does not allow the flexibility to set up multiple independent paths between a given source–destination node pair, the LB feature is not available in conjunction with OSPF. Note also that OSPF does not allow explicit control over setting up the paths; thus, to implement the CAC capability, the gatekeepers need to infer the paths that are autonomously assigned by OSPF. MPLS explicit paths also allow the service provider to plan restoration paths with sufficient capacity, so that any specific failure scenario can be accounted for as well. It is possible to design protocols so that calls in progress could be maintained during many failure scenarios.
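The measurement-based CAM cycle of Sections 3.1–3.3 can be sketched in a few lines. The functions, data structures, and the 90% threshold below are our own illustrative choices under the paper's description (byte counts polled per link, converted to occupancy, per-pair path choice by least-loaded bottleneck), not a published implementation:

```python
# Sketch of one measurement-based CAM update cycle: convert polled
# per-class byte counts into link occupancies, then compute a path and
# admit/block decision for each source-destination pair.

def occupancy(byte_count, interval_s, voice_bw_bps):
    """Occupancy of a link's voice allocation over one polling interval:
    bits carried divided by bits the voice reservation could carry."""
    return (byte_count * 8) / (interval_s * voice_bw_bps)

def cam_update(paths, link_bytes, interval_s, link_voice_bw, threshold=0.9):
    """One update cycle.

    paths: {(src, dst): [path, ...]} where each path is a list of link IDs.
    Returns {(src, dst): chosen_path, or None meaning block}.
    """
    occ = {link: occupancy(b, interval_s, link_voice_bw[link])
           for link, b in link_bytes.items()}
    decisions = {}
    for pair, alternatives in paths.items():
        # LB decision: pick the path whose bottleneck link is least loaded.
        best = min(alternatives, key=lambda p: max(occ[l] for l in p))
        # CAC decision: block if even that bottleneck exceeds the threshold.
        bottleneck = max(occ[l] for l in best)
        decisions[pair] = best if bottleneck < threshold else None
    return decisions
```

For example, with two alternate paths whose bottleneck occupancies work out to 0.95 and 0.7, the cycle selects the second path and admits calls on it until the next poll. The decisions table is what the CAM (or, in the distributed variant of Section 3.3, each gatekeeper) would hold fixed for the following T seconds.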


4. Performance studies

As mentioned earlier, a key objective is to understand the tradeoffs among the update interval T, the amount of guard band on the links, and the actual QoS delivered to voice. This insight is crucial to evaluating the feasibility of the inexact approach. With this goal, we have conducted some early simulation studies based on a single-link model. In this model, calls arrive according to a specified probability distribution, and a decision is made to admit or reject each call based on a specific control policy. An example would be to admit all calls if the measured occupancy is less than 90%, block 50% if the measured occupancy is between 90 and 98%, and block all calls if the measured occupancy exceeds 98% (i.e., the guard band is equivalent to 2% of the capacity). The simulation model emulates the control policy as well as the uncertainties and delays introduced by the measurement procedures inherent to the inexact algorithm under study. Simulations were run over different loading ranges up to a 120% overload, subject to both Poisson and peaked arrivals.

Fig. 2 shows the results of one simulation in which link information is updated periodically over different intervals. In this case, a simple control mechanism was put in place: admit all calls until the measured occupancy exceeds 99%, then block all calls. The link had a voice capacity of 1728 simultaneous calls (corresponding to an OC3 link with 16 ms IP voice packets, accounting for 12 bytes of RTP, 8 bytes of UDP, 20 bytes of IP, and 4 bytes of MPLS overhead), but was given an offered load of 120%. We used update intervals of 2, 5, 10, and 20 s and recorded the probability that a call ever loses a packet (due to congestion), the fraction of time the link is overloaded, and the call blocking probability. An exact call-by-call algorithm would have blocked 16.9% of the calls (Erlang B). We see that as the update interval increases, performance worsens, but not dramatically.
Even with a 20 s update, very few packets are lost, even though there is a 20% overload on the link and only 18.6% of the calls are blocked. Where the acceptable operating point lies depends on subjective voice quality measurements and objectives. Recent studies [4] have shown that with G.711 and an appropriate packet loss concealment technique, 1–3% packet loss might be acceptable. Further investigation is required to understand all the trade-offs between subjective voice quality measurements and algorithm choices. It is clear that the relevant factors include the codec being used, the packet size, the packet loss concealment algorithm, and the burstiness of the packet loss. In addition, fax and voice-band data may impose additional requirements if they are not using a relay mode.

Fig. 2. Performance of a single-level control policy—CAC A.

Fig. 3. Performance with a 2-level control policy—CAC B.

Fig. 3 describes a similar scenario in which an OC3 link is given a 20% overload, but this time a slightly different control policy is applied. It has one additional level of control: when the occupancy is between 95 and 99%, block 25% of the calls; as before, block all calls when the occupancy is over 99% and no calls when it is below 95%. The results here show minimal packet loss even with update intervals as long as 100 s. It should be clear that as the guard band increases, packet loss can be virtually eliminated. It is then up to the service provider to determine the tradeoff between increasing the guard band and improving voice quality under overloads. It is important to note that update intervals such as these, in the range of 1 min, are within the capabilities of the nodal processors and polling devices that generate the measurement reports or perform decision computations.

So far we have only shown behavior under severe overloads. Fig. 4 shows the behavior of CAC A (block all calls when utilization is above 99%) under a range of loads where updates are performed only every 50 s. In the left-hand graph, we compare the packet loss of CAC A versus what would occur without any CAC. Obviously, as the load increases, a CAC is essential to the operation of the network. The right-hand graph compares CAC A with 50 s updates against the ideal per-call CAC with exact information. This illustrates that some additional blocking is required to compensate for the 50 s update interval. Obviously, different control policies would have different characteristics.
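The threshold policies compared above (CAC A and the two-level CAC B) reduce to a small occupancy-to-blocking-fraction lookup. The sketch below is our own restatement of those rules, not the simulator used in the paper:

```python
import random

def blocking_fraction_cac_a(occupancy):
    """CAC A: admit everything at or below 99% measured occupancy,
    block everything above."""
    return 0.0 if occupancy <= 0.99 else 1.0

def blocking_fraction_cac_b(occupancy):
    """CAC B: one extra control level between 95% and 99%."""
    if occupancy <= 0.95:
        return 0.0
    if occupancy <= 0.99:
        return 0.25
    return 1.0

def admit(occupancy, policy, rng=random.random):
    """Admit an arriving call with probability 1 - blocking_fraction,
    where the occupancy is the (possibly stale) measured value held
    fixed since the last update."""
    return rng() >= policy(occupancy)
```

The key point of the simulation study is that `occupancy` here is up to T seconds old, so the guard band (the gap between the top threshold and 100%) is what absorbs calls admitted against stale information.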
The most important factor influencing the impact of the update interval is the average holding time of a call, since the current utilization is used to determine a policy that will last for the duration of the next interval. These simulations used call holding times drawn from an exponential distribution with a mean of 5 min. Other factors, such as the size of the link and the peakedness of the call arrival process, could also influence the choice of update interval.
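The 1728-call OC3 capacity used above follows from simple per-call bandwidth arithmetic. The sketch below reproduces it under two assumptions of ours that the paper does not state explicitly: G.711 coding at 64 kbit/s, and roughly 148.6 Mbit/s of usable OC3 payload:

```python
# Per-call VoIP bandwidth for the packetization used in the Fig. 2
# scenario: 16 ms voice packets with 12 B RTP + 8 B UDP + 20 B IP
# + 4 B MPLS overhead. Codec rate and usable OC3 payload are assumed.
CODEC_BPS = 64_000            # G.711 (our assumption)
PACKET_S = 0.016              # 16 ms of voice per packet
OVERHEAD_B = 12 + 8 + 20 + 4  # RTP + UDP + IP + MPLS, in bytes

payload_b = CODEC_BPS * PACKET_S / 8        # 128 bytes of voice per packet
packet_b = payload_b + OVERHEAD_B           # 172 bytes on the wire
per_call_bps = packet_b * 8 / PACKET_S      # 86,000 bit/s per call

OC3_USABLE_BPS = 148_608_000                # assumed usable OC3 payload
calls = int(OC3_USABLE_BPS // per_call_bps) # 1728 simultaneous calls
```

Note how the 44 bytes of headers inflate the 64 kbit/s codec stream to 86 kbit/s on the wire, a 34% overhead that shrinks with larger (but higher-latency) packetization intervals.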

Fig. 4. Performance of CAC A as a function of load.

This analysis demonstrates the feasibility of the approach. We are not suggesting that CAC policy B is optimal for this link size and load, but rather that policies of this type work reasonably well under these circumstances. Further analysis will be required to assess the various options. Possible policies include static policies, as presented above, with fixed thresholds and corresponding blocking percentages; adaptive policies that react to (at least) the current policy and the change in load measurements over the past two periods; and rate control policies that cap the maximum number of calls admitted in the next period. Each of these policy types has some advantage in different load regions, and a combined percentage and rate control might be the best combination. When all distributions are exponential, an analytic model of the policy simulated above can be solved. Work is currently underway to extend this model to find optimal policies under different load conditions.
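The 16.9% call-by-call blocking figure quoted in the Fig. 2 discussion can be checked with the standard Erlang B recursion; the sketch below (our own, not part of the paper's tooling) evaluates it for 1728 circuits offered a 120% load:

```python
def erlang_b(servers, offered_erlangs):
    """Erlang B blocking probability, computed with the numerically
    stable inverse recursion 1/B(n) = 1 + (n/a) * 1/B(n-1), B(0) = 1."""
    inv_b = 1.0
    for n in range(1, servers + 1):
        inv_b = 1.0 + (n / offered_erlangs) * inv_b
    return 1.0 / inv_b

# 1728 voice "circuits" offered a 120% load, as in the Fig. 2 scenario.
blocking = erlang_b(1728, 1.2 * 1728)
```

At this heavy overload the result sits just above the fluid-limit lower bound of 1 - 1/1.2 ≈ 0.167, consistent with the 16.9% reported.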

5. Summary

In this paper, we presented a novel approach, referred to as the shared voice sub-network approach, for CAC and LB to meet the quality-of-service requirements of voice carried over IP. To this end, we employ a network module referred to as the CAM that keeps track of the voice occupancy levels across the IP network. Our approach requires an integration of traffic prioritization based on DiffServ and explicit routing based on MPLS. It allows us to perform the control actions by tracking the voice occupancies on the network links rather than on each path. This is in contrast with one of the available alternatives for enabling voice-over-IP QoS, the trunking approach, and leads to advantages in terms of scalability and network efficiency.

Based on our model, we first described an exact algorithm that provides a precise form of control. To extend scalability to large-scale networks, we also presented an inexact alternative that performs control updates on a periodic, rather than per-call, basis. We finally discussed the extension of the admission control capability to networks that use OSPF routing instead of the evolving MPLS.

A key parameter associated with the inexact algorithms is the duration of the update interval. To examine the impact of the update interval on the attainable QoS and slack capacity requirements, we provided some results from the simulation studies that were undertaken. Our results indicate that an acceptable level of QoS can be maintained using practical values for the update interval (i.e., of the order of 1 min) and fairly small slack capacity requirements.

References

[1] S. Blake, et al., An Architecture for Differentiated Services, RFC 2475, December 1998.
[2] E. Rosen, et al., Multiprotocol Label Switching Architecture, Internet Draft, April 1999.
[3] J. Wroclawski, The Use of RSVP with IETF Integrated Services, RFC 2210, September 1997.
[4] Subjective Results on Impairment Effects of Packet Loss, ITU SG12 Contribution D.110, September 1999.
[5] Telecommunications and Internet Protocol Harmonization Over Networks (TIPHON): End-to-end Quality of Service in TIPHON Systems, Part 3 (TS 101 329-3 V1.1.1), January 2001.
[6] S. Shenker, Fundamental design issues for the future Internet, IEEE J. Select. Areas Commun. 13 (7) (1995) 1176–1188.
[7] R.J. Gibbens, F.P. Kelly, Distributed connection acceptance control for a connectionless network, in: Proc. ITC16, Edinburgh, UK, Elsevier, Amsterdam, 1999, pp. 941–952.

David Houck received his B.A. in Mathematics in 1970 and Ph.D. in 1974, both from Johns Hopkins University. He was an Assistant Professor at the University of Maryland, Baltimore County from 1974 to 1979 and has been at Bell Labs/AT&T Labs since 1979. He is currently leading the Performance Modeling and QoS Management Group within the Advanced Technologies division of Bell Labs, in a project focused on traffic management of converged packet networks with QoS requirements. Dave’s research interests include performance modeling and control, network design methodology, and optimization.

Gopal Meempat is a System Architect with Velio Communications, Inc., Princeton, NJ, where he is currently focusing on the algorithmic development of very high-speed on-chip switching systems. He has extensive experience in the design and modeling of broadband networking technologies including ATM, IP, and wireless, both at the micro-architecture and at the systems level. Prior to joining Velio in 2000, Gopal was a Member of Technical Staff at Bell Laboratories, Lucent Technologies from 1998, a Research Scientist with Bell Communications Research from 1992 onwards, and was an Assistant Professor of Computer Science and Engineering at the University of Nebraska-Lincoln from 1989. Dr. Meempat received his B.E. degree in Electronics and Communication and M.Sc.(Engg.) degree in Computer Science from the Indian Institute of Science, Bangalore, India, and his Ph.D. degree, with specialization in network modeling, from the University of Arizona, Tucson.