Hybrid optimization for QoS control in IP Virtual Private Networks

Raffaele Bolla, Roberto Bruschi, Franco Davoli, Matteo Repetto
Department of Communications, Computer and Systems Science (DIST), University of Genoa, Via Opera Pia 13, 16145 Genoa, Italy

Computer Networks 52 (2008) 563–580

Received 14 March 2007; received in revised form 22 August 2007; accepted 17 October 2007; available online 26 October 2007.
Responsible Editor: J.C. de Oliveira
Abstract

A multiservice IP Virtual Private Network is considered, where Quality of Service is maintained by a DiffServ paradigm, in a domain that is supervised by a Bandwidth Broker (BB). The traffic in the network belongs to three basic categories: Expedited Forwarding (EF), Assured Forwarding (AF) and Best Effort (BE). Consistently with the DiffServ environment, the Service Provider's Core Routers (CRs) only treat aggregate flows; on the other hand, the user's Edge Routers (ERs) keep per-flow information and convey it to the BB. The latter knows at each time instant the number (and the bandwidth requirements) of flows in progress within the domain for both EF and AF traffic categories. A global strategy for admission control, bandwidth allocation and routing within the domain is introduced and discussed in the paper. The aim is to minimize blocking of the "guaranteed" flows (EF and AF), while at the same time providing some resources also to BE traffic. In order to apply such control actions on-line, a computational structure is sought, which allows a relatively fast implementation of the overall strategy. In particular, a mix of analytical and simulation tools is applied jointly, by alternating "local" decisions, based on analytical models, with flow-level simulations that determine the effect of the decisions on the whole system and provide feedback information. The convergence of the scheme (under a fixed traffic pattern) is investigated and the results of its application under different traffic loads are studied by simulation on three test networks.

Keywords: VPN; IP DiffServ; QoS-IP; Resource allocation
1. Introduction

The current number of hosts in the Internet is approaching half a billion [1]. Such a huge number is spread across many administrative domains, organized in a hierarchical structure to cope with complexity and scale. Within this scenario, an additional interesting possibility is that of creating Virtual Private Networks (VPNs) for larger users, thereby further delegating control actions to more limited domains (see, among others, [2–6]). A wide range of network architectures and technologies has been proposed (and used) to support this kind of service, based on both "layer 2" (Frame Relay or, in a broad interpretation of layer 2, ATM and MPLS)
and "layer 3" technologies (IP over IP, GRE, IPSEC) [7]. More recently, the use of "layer 1" technologies, such as SDH or WDM, has come into effect [8], offering the possibility to guarantee bandwidth allocations on the network links with a minimum acceptable level of granularity and efficiency. As an example, the SDH Generic Framing Procedure (GFP) [9] can provide leased links with a bandwidth allocation up to 10 Gbps and a minimum granularity of about 1 Mbps (with this extension, one may indeed consider SDH as another layer 2 technology).

All these technological solutions can be used to realize VPNs with guaranteed bandwidth allocations and support for QoS algorithms [10,11]. We have already addressed the issue of planning such networks in [12], in the presence of user traffic requiring differentiated QoS support. This paper takes a different perspective, by addressing the online control of such networks, in order to ensure that QoS differentiation is maintained. This is achieved by devising adaptive strategies for call admission control (CAC), bandwidth allocation and routing. The paper is a re-organized and extended version of the work originally presented in [13,14]. In particular, the following new contributions and improvements can be pointed out: (i) the definition of a new cost function in the optimization procedure that leads to the CAC and bandwidth allocation; (ii) the adoption of a greatly simplified, though still sufficiently accurate, model for BE traffic, which significantly reduces the computational complexity of the overall optimization procedure; (iii) the performance evaluation over different and increasingly complex network topologies; (iv) the discussion of the sources of computational complexity and of the scalability of the method.

A central controller, denoted as Bandwidth Broker (BB), may be dedicated to perform resource reservation in the VPN, as well as to collect aggregate information for the upper levels. The BB can run on a dedicated machine or on a generic router and exchanges messages with the network nodes by some signaling protocols. We will analyze in detail in the following what kind of information the BB needs to exchange with the nodes in our proposal. It is worth noting that similar control architectures might be adopted on a larger scale, as well. Actually, control actions might be applied independently (perhaps on a longer time scale) within the domain of the ISP, to share the available resources among the VPNs of different customers, or even across multiple ISP domains.
Within a DiffServ [15] VPN domain, we consider three QoS classes: Expedited Forwarding (EF), Assured Forwarding (AF) and Best Effort (BE). EF is the maximum priority traffic, and it should experience very low loss, latency and jitter while traversing the network [16]; such a service appears to the endpoints like a point-to-point connection, or a leased line. AF is an intermediate class, with less pressing requirements on bandwidth, delay and jitter [17]. BE is the current Internet traffic with no guarantees.

A global strategy for admission control, bandwidth allocation and routing within such a VPN domain is introduced and discussed in the paper. The aim is to minimize blocking of the "guaranteed" flows (EF and AF), while at the same time providing some resources also to BE traffic. As the main objective is to apply the abovementioned control actions on-line, a computational structure is sought, which allows a relatively fast implementation of the overall strategy. In particular, a mix of analytical and simulation tools is applied jointly: "locally optimal" bandwidth partitions for the traffic classes are determined over each link (which set the parameters of the link schedulers in the Core Routers, the routing metrics and the admission control thresholds); a "high level" simulation (involving no packet-level operation) is performed, to determine the per-link traffic flows that would be induced by the given allocation; then, a new analytical computation is carried out, followed by a simulation run, until the allocations do not change significantly (under a stationary traffic pattern). Obviously, in the case of slow variations in the traffic load and proportion, the procedure can be carried on continuously in an adaptive fashion.

The paper is organized as follows. The following section outlines the main resource sharing tasks, and how the BB addresses them. Section 3 details the BB architecture, the models used and the control strategies. In Section 4, we report the results of a large number of numerical simulations, in order to analyze the performance of the control architecture. Section 5 contains the conclusions. The results of tests carried out to validate the analytical and simulative tools used in the proposed control scheme, and the configuration parameters adopted in all our performance evaluations, respectively, are reported in the Appendices.

2. Resource allocation issues

A suitable queuing discipline is needed to achieve the required performance for every class. For this
purpose, we assume that each link has a separate queue for each kind of traffic. The EF queue is given priority over all the others; if we limit the amount of EF traffic in the network, this results in very low delay and jitter experienced by these packets. To meet this requirement, EF traffic is assigned a priority queue, which is served by the scheduler as long as it is non-empty. Users requiring this kind of service are asked to declare their peak rate, which (given that this service is likely to be more expensive) we assume to coincide with their mean rate. Thus, EF users are classified into a limited number of subclasses, according to their peak rates. Classifiers and traffic conditioners (meters, markers, shapers and droppers) are required at the network edge to perform this task, as well as to verify the compliance of the users' traffic with the agreement, but this issue is beyond the scope of our treatment.

Once the EF queue is empty, AF high priority traffic is served, while low priority packets are served only if there are still unused AF reserved resources. Finally, when all these queues are empty, BE traffic is served. An AF user declares its committed and peak transmission rates. The committed transmission rate is always guaranteed to the user; better performance can be achieved if the network load is low. Thus, at the edge nodes, packets belonging to the same micro-flow will be marked as high priority AF, as long as they do not exceed the user's committed rate; surplus traffic will be marked as low priority AF or discarded [18].

In this setting, the performance experienced by the packets depends on the amount of traffic flows of each category admitted into the network: CAC is then necessary to achieve the QoS promised by the DiffServ architecture. With such a discipline, each queue is served at the maximum speed allowed by the link; our aim is to control the bandwidth reserved for each class on each link. Given the peak rate for EF traffic, we know in advance how many connections can be accepted without exceeding the reserved bandwidth. A similar consideration holds for AF, by considering the committed rates or, in a more detailed fashion, the effective bandwidths, computed according to some criterion (see, e.g., [19–21]), instead of the peak rates. In practice, starting from the traffic descriptors (e.g., committed rate and burst size) declared in the Service Level Agreement (SLA) of each AF flow, we can obtain a capacity threshold, which ensures satisfaction of all the packet-level QoS requirements of that flow.
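As a purely illustrative view of the strict-priority service order described at the beginning of this section (queue names and the af_credit_left flag are our own placeholders, not the actual router implementation), a minimal Python sketch could look as follows.

from collections import deque

class LinkScheduler:
    """Schematic strict-priority service order: EF, then in-profile AF, then
    out-of-profile AF (only while AF reserved resources remain), then BE."""

    def __init__(self):
        self.queues = {name: deque() for name in ("EF", "AF_HIGH", "AF_LOW", "BE")}

    def enqueue(self, class_name, packet):
        self.queues[class_name].append(packet)

    def next_packet(self, af_credit_left=True):
        # The EF queue is served as long as it is non-empty.
        if self.queues["EF"]:
            return self.queues["EF"].popleft()
        # In-profile (high priority) AF comes next.
        if self.queues["AF_HIGH"]:
            return self.queues["AF_HIGH"].popleft()
        # Out-of-profile AF is served only if unused AF reserved resources remain.
        if af_credit_left and self.queues["AF_LOW"]:
            return self.queues["AF_LOW"].popleft()
        # BE is served when all the other queues are empty.
        if self.queues["BE"]:
            return self.queues["BE"].popleft()
        return None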
In other words, we implicitly suppose that all QoS problems at the packet level are already accounted for by the link schedulers at the network routers; thus, we can concentrate on flow-level operations, in the same fashion as for EF flows. Indeed, as already noted in [12], with respect to the same model that we adopted for network planning, at the flow level this is equivalent to assuming the presence of Continuous Bit Rate (CBR) traffic, peak (or committed/equivalent) rate admission control, and throughput requirements only.

Briefly, we fix two bandwidth thresholds on each link: one for EF and the other for AF; the total rate of the flows of each category traversing the link can never exceed the corresponding threshold. BE traffic can exploit all the unused link bandwidth at any time instant. We constrain the sum of the thresholds to be less than or equal to the capacity of the link. Therefore, we adopt a Partitioning CAC (PCAC) policy [22], where we define two partitions, one for EF and one for AF, but none for BE, whose upper bound is not fixed, but depends dynamically on the bandwidth left free by the other traffic flows. BE is guaranteed the link bandwidth exceeding the two thresholds, but the queuing disciplines do not assure any performance on delay and jitter. According to the partitioning policy, the different traffic classes (distinguished by their requirements in terms of bandwidth and offered load) within EF and AF compete with one another in Complete Sharing (CS) fashion.

Other parametric CAC policies might be adopted. As noted in [23], the Upper Limit (UL), Complete Partitioning (CP) and Trunk Reservation (TR) policies (among others) have been shown to outperform CS when significant differences exist between the traffic classes' requirements; a form of dynamic TR has been analyzed in [24]. The partitioning policy is a hybrid between CP and CS, which lends itself almost naturally to a situation like the one we are considering, where traffic classes can be grouped into wider categories (EF and AF, in our case); moreover, as regards on-line analytical computations, it has the advantage of belonging to the class of Coordinate Convex policies, which admit a Product Form Solution [22]. It is worth noting that the degree of inflexibility caused by the use of thresholds is mitigated by the fact that we make them adaptive, and can change them on-line, according to an optimization procedure; a sketch of the admission rule is given below.
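A minimal sketch of the admission rule implied by the partitioning policy (variable names are ours; the real controller also keeps per-flow records at the ERs) is the following.

def pcac_admit(category, rate, reserved, thresholds):
    """Partitioning CAC on a single link (category is "EF" or "AF").

    rate       -- peak (EF) or committed/effective (AF) rate of the new flow
    reserved   -- bandwidth currently reserved per category on the link
    thresholds -- the T_EF and T_AF thresholds set by the Bandwidth Broker
    """
    # Inside each partition, all rate classes compete in Complete Sharing.
    if reserved[category] + rate <= thresholds[category]:
        reserved[category] += rate        # admit and reserve the flow's rate
        return True
    return False                          # blocked: the partition is full

# Example: T_EF = 40 Mbps and T_AF = 35 Mbps on a 100 Mbps link (BE gets the rest).
reserved = {"EF": 38.0, "AF": 20.0}
thresholds = {"EF": 40.0, "AF": 35.0}
print(pcac_admit("EF", 4.0, reserved, thresholds))   # False: would exceed T_EF
print(pcac_admit("AF", 4.0, reserved, thresholds))   # True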
Regarding routing, as EF and AF flows require QoS, we think that this kind of traffic needs a fixed route from source to destination, which appears to them as a virtual leased line [16]. The routing of both AF and EF connections is fixed a priori by the BB, so that it can perform the CAC on the bandwidth available on each link traversed and, possibly, deny access to the network. Thus, we have already identified a first task for our BB. Since we wish to maximize the network throughput, we use simple min-hop metrics for both AF and EF, to reduce the number of links traversed. The whole architecture is, however, suitable for more advanced routing metrics [22], especially those based on aggregation, and multi-parameter ones [25]. For BE traffic routing, instead, we choose a cost equal to the reciprocal of the residual bandwidth left available by the connections with a guaranteed level of QoS. In this case, paths vary dynamically as the network load changes over time (a small sketch of these metrics is given at the end of this section).

EF, AF and BE traffic flows are generated at the nodes of the network as aggregates: this means that the same node can generate several EF or AF connections or BE flows. In such a way, we can easily view the node either as a host or as a border router connected to a hierarchically lower level network (as mentioned in Section 1).

Now, the issue is how to share the link bandwidth in an adaptive fashion among the three competing classes, by also choosing the AF and EF thresholds according to some optimality criterion. Obviously, QoS traffic is critical, because it is thought of as high revenue traffic. But we would like to give some form of "protection" to BE as well, whose overall volume can indeed produce relevant revenue. To do this, we assign a cost to each class (as explained later), and try to minimize a given cost function. Nodes collect data on the incoming traffic load; the BB gathers all these data and computes a new allocation for all links. Finally, it sends the EF and AF thresholds to each node for its output links. This is the second task to be performed by the BB.

To complete the description of our architecture, let us note that a third task of the BB is to collect information from the nodes about traffic entering or exiting the domain, and to act as a node for the BB of an upper level within the same ISP (e.g., one controlling bandwidth assignments among the VPNs of different organizations).
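The following sketch, under our own illustrative naming, computes BE paths with the reciprocal-of-residual-bandwidth metric described above, while GP flows simply use unit (min-hop) weights; it is not the BB's actual implementation, only a schematic shortest-path computation.

import heapq

def shortest_path(adj, weights, src, dst):
    """Plain Dijkstra over an adjacency dict {node: [neighbor, ...]}."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            nd = d + weights[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))

def link_weights(links, gp_reserved, mode):
    """Min-hop metric for GP flows, 1/residual-bandwidth metric for BE."""
    if mode == "GP":
        return {l: 1.0 for l in links}
    eps = 1e-6   # avoid division by zero on fully reserved links
    return {l: 1.0 / max(links[l] - gp_reserved[l], eps) for l in links}

# links: {(u, v): capacity in Mbps}; gp_reserved: bandwidth taken by EF + AF.
links = {(0, 1): 100.0, (1, 2): 100.0, (0, 2): 50.0}
gp_reserved = {(0, 1): 20.0, (1, 2): 20.0, (0, 2): 45.0}
adj = {0: [1, 2], 1: [2], 2: []}
w_be = link_weights(links, gp_reserved, "BE")
print(shortest_path(adj, w_be, 0, 2))   # prefers 0-1-2: more residual bandwidth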
3. Bandwidth broker architecture and allocation algorithm

Allocating resources over the whole network is a very complex task, especially if many links are present. Furthermore, the resources to be shared reside on each link. As previously mentioned, the BB knows the input traffic matrix for each flow from each single node; knowing also the different allocations of resources on every link, the BB would be able to simulate the behavior of the whole VPN in terms of routing, mean link utilizations and blocking probabilities. So, the BB is certainly the best network control point to perform a centralized optimization of the whole system. However, when the input traffic matrix changes enough to make new allocations necessary, in order to cope with the variations and maintain a high level of utilization, the dynamic resource allocation task has to be launched on the BB and should complete its calculations as soon as possible.

In order to reduce the computational complexity of this allocation task, we have decided to calculate the new CAC thresholds independently for each link (in a "locally optimal" way), and to use a very fast network simulator to obtain all the data necessary to perform this operation. More specifically, the independent link allocations calculated in this way have to be tested with a new simulation process, to understand whether the new network settings have produced routing changes, undesired link utilizations or high blocking probabilities. In particular, it has to be noted that our BE routing metric depends on the links' EF and AF allocations and on the ensuing traffic distribution; but the allocation itself, in its turn, depends on the paths selected by the routing algorithm. A link that is congested at a certain time instant can become an empty one after allocation, owing, for example, to the discovery of a new optimal route for BE traffic. This mechanism should then be applied iteratively, until a fixed point is hopefully reached, if the traffic patterns remain stationary for a sufficiently long time interval.

Therefore, the mechanism is composed of the iteration of two different modules: a fast network simulator (a numeric fluid model that works at the aggregate flow level) and an allocation algorithm, based on an analytical model (a stochastic knapsack [22]) and on the output of the simulator, which calculates the resource reservations independently for each link.
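This alternation of the two modules can be summarized by the loop below; simulate and allocate_link are placeholders standing for the fluid simulator of Section 3.1 and the per-link allocation of Section 3.2, and the stopping rule is a simple illustrative fixed-point test on the thresholds.

def bb_allocation_loop(traffic_matrix, links, simulate, allocate_link,
                       max_iter=10, tol_mbps=1.0):
    """Alternate flow-level simulation and per-link 'locally optimal' allocation.

    simulate(traffic_matrix, thresholds) -> per-link offered loads and BE usage
    allocate_link(link, load)            -> (T_EF, T_AF) minimizing the link cost
    """
    # Start from an arbitrary feasible configuration (e.g., half/half split).
    thresholds = {l: (links[l] / 2.0, links[l] / 2.0) for l in links}
    for _ in range(max_iter):
        per_link_load = simulate(traffic_matrix, thresholds)
        new_thresholds = {l: allocate_link(l, per_link_load[l]) for l in links}
        # Fixed point reached: thresholds (and hence routing metrics) are stable.
        if all(abs(new_thresholds[l][i] - thresholds[l][i]) <= tol_mbps
               for l in links for i in (0, 1)):
            return new_thresholds
        thresholds = new_thresholds
    return thresholds   # traffic changed or no convergence within max_iter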
Summarizing the resource allocation architecture, the BB alternates simulation processes and allocation procedures. Through the model we generate the network traffic and evaluate the offered load on every link; successive allocations try to share the available link bandwidth among the different traffic classes. If the allocation converges, network resources are optimized, and the BB can set the new thresholds on all nodes. When variations in the incoming traffic are measured at the nodes, the BB is notified and starts a new allocation procedure.

3.1. Traffic simulation models

In a DiffServ network we can reasonably assume that the traffic behavior depends on the QoS level: in fact, it is natural to think that AF and EF will be used to carry real-time flows, like multimedia streams, whereas BE will be used primarily for data transport (i.e., web, mail, file sharing and the like). Moreover, in a DiffServ environment, both admission control and traffic shaping have to be applied to QoS flows. Therefore, in our model we had to take into account the significant differences between services with an assured QoS level and the common BE ones.

As noted before, in this kind of environment we need a traffic model that can describe the average behavior of aggregate flows with good accuracy and a very low computational effort. The aim of the traffic model is not to describe the network dynamics at the IP packet level, but to extract some performance parameters, like link bandwidth occupations or blocking probabilities, with a good approximation. It is obvious that, for this particular purpose, a packet-level simulator like ns-2 [26] is not the most suitable choice. Therefore, as regards BE traffic (which we suppose to be composed primarily of TCP connections), characterized by an elastic behavior, we have chosen to implement a traffic simulation model based on a fluid approximation, which can efficiently describe the average behavior of aggregate flows, while ignoring the details of the packet level or of the state of each TCP connection. With a similar viewpoint, EF and AF traffic is described exclusively at the call level (where the peak and the committed/equivalent bandwidths are assigned, respectively), on the basis of the aforementioned analytical models, whose underlying hypotheses on the call-level behavior of the sources are also used for traffic generation. Recalling the bandwidth assignment that is operated on EF and AF (their connections use a shortest path route from source to destination, which appears to them as a virtual leased line), in order to ensure performance guarantees, we will refer to them collectively as Guaranteed Performance (GP) flows. This hypothesis would be
very realistic if we consider that, in practice, this kind of IP traffic is frequently transported by means of switching technologies that simply assure the desired QoS level (e.g., MPLS [27]). This simple GP traffic model gives us the possibility to reduce the overall complexity of the proposed allocation scheme, and it can be regarded as a "worst case" model, since all the EF and AF flows occupy their maximum reserved bandwidth, leaving in this way fewer resources to BE traffic.

For what concerns the BE traffic, the rationale of the very simple model adopted relies on the fact that our goal is not to represent the dynamics of the individual TCP connections, but rather to capture the aggregate and average behavior of the TCP flows between an ingress–egress router or source–destination pair for control purposes. In greater detail, our BE traffic model can be regarded as a "fast" procedure – namely, the "Stationary Model" previously introduced in [12] – which operates on traffic aggregates, by using a fluid approximation and the max–min fair rule [28], to estimate their average behavior in terms of link utilizations and mean throughput. This model is indeed an evolution of the "fluid simulator" used in [14,29]. It needs only the offered loads of all BE flows as input parameters, and it provides an accuracy level very similar to that of fluid simulators (as shown in Appendix 1). Its computational times are low enough to allow its effective usage inside complex optimization procedures (see Section 4.4): it is worth recalling that this model is used to estimate the BE throughput every time the available bandwidth on a network link changes (i.e., at each AF or EF flow's birth or death).
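As an illustration of the max–min fair rule [28] on which the Stationary Model relies, the following water-filling sketch computes the fair shares of the BE aggregates over the bandwidth left free by the GP reservations; it is a simplification of the actual model, and the demands, routes and capacities shown are hypothetical inputs.

def max_min_fair(demands, routes, capacity):
    """Progressive-filling max-min fair shares.

    demands  -- {flow: offered load in Mbps}
    routes   -- {flow: list of links traversed}
    capacity -- {link: bandwidth left to BE by the EF/AF reservations}
    """
    rate = {f: 0.0 for f in demands}
    residual = dict(capacity)
    active = {f for f in demands if demands[f] > 0}
    while active:
        # Smallest fair increment over the links used by still-active flows.
        share = min(residual[l] / sum(1 for f in active if l in routes[f])
                    for l in residual
                    if any(l in routes[f] for f in active))
        increment = min(share, min(demands[f] - rate[f] for f in active))
        for f in active:
            rate[f] += increment
            for l in routes[f]:
                residual[l] -= increment
        # Flows stop growing when their demand is met or a link saturates.
        active = {f for f in active
                  if demands[f] - rate[f] > 1e-9
                  and all(residual[l] > 1e-9 for l in routes[f])}
    return rate

# Two BE aggregates share the 50 Mbps left to BE on a single bottleneck link:
print(max_min_fair({"BE0": 40.0, "BE1": 10.0},
                   {"BE0": ["L2"], "BE1": ["L2"]}, {"L2": 50.0}))
# -> BE1 gets its full 10 Mbps, BE0 gets the remaining 40 Mbps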
Another element in our simulation environment is the GP traffic source. In fact, we also approximate the arrival process of the GP flows with a sort of aggregate source. This source is thought to generate an aggregate of GP flows, which is composed of all the AF (or EF) connections that traverse the same path between two network nodes (like FECs in MPLS). These connections appear directly available at the ingress Edge Router, which can indifferently represent the access node for the hosts or a gateway to another VPN. GP flows are generated at each source with exponentially distributed and independent inter-arrival and service times. To reduce the computational effort of the traffic model, we decided to describe both AF and EF flows as CBR traffic (as discussed in Section 2, AF flows can be treated this way, if their effective bandwidth is used). For EF flows, the constant rate is the maximum assured bandwidth, whereas, for AF ones, it is the minimum nominal bandwidth that must be guaranteed in every condition (i.e., the committed or effective bandwidth). If there are sufficient resources, the AF flows can increase their rates up to a fixed percentage.

To obtain steady and meaningful average values, like BE link utilizations or GP blocking probabilities, from the flow-level simulation model, the simulation lengths are determined by a stabilization criterion, which ensures that the measured mean value of each parameter under study differs from the statistical mean within a 5% confidence interval, at a 95% confidence level.

3.2. Model-based resource allocation

As the BB knows the input traffic matrix for each flow from each single node, by executing a simplified and fast simulation at the flow level it can collect data about the offered load, in terms of the number of flows trying to cross a given link (GP) and the average usage of BE bandwidth on the link. From this point on, it begins an independent allocation for each link. We choose a bandwidth step bw_step, equal to the minimum admissible rate for GP connections. Then, the algorithm starts by calculating the costs of all possible bandwidth assignments, which are generated as follows. The GP-assigned bandwidth is initially set to zero and is then increased by bw_step, until the link bandwidth is reached. For each of these values, the GP bandwidth is shared between EF and AF in a similar way: the EF threshold is initially set to zero and the AF threshold to the total GP-assigned bandwidth. All possible assignments are generated by incrementing the EF and decreasing the AF threshold by bw_step.

3.2.1. Cost calculation

In this particular environment, GP traffic looks like traditional circuit-switched traffic: there are incoming connections of a given size (peak rate for EF and committed or effective rate for AF), competing for a limited resource viewed at each link. This is the so-called Stochastic Knapsack problem; the blocking probabilities can be easily calculated with an extended Erlang Loss formula [22]. Let K denote the number of possible rates (of EF, for example) and C the resource units.
Connections of the kth class are distinguished by their bit rate b_k, their arrival rate λ_k, and their mean duration 1/μ_k. Let ρ_k = λ_k/μ_k, let n_k denote the number of active connections of class k, and let S be the space of admissible states:

S = \{ \mathbf{n} \in I^K : \mathbf{b} \cdot \mathbf{n} \le C \},    (1)

where I^K denotes the K-dimensional set of non-negative integer vectors, \mathbf{b} = (b_1, b_2, \ldots, b_K), \mathbf{n} = (n_1, n_2, \ldots, n_K) and "\cdot" is the scalar product. By defining S_k as the subset of states in which the knapsack admits an arriving class-k object, that is,

S_k = \{ \mathbf{n} \in S : \mathbf{b} \cdot \mathbf{n} \le C - b_k \},    (2)

the blocking probability for a class-k connection is

B_k = 1 - \frac{\sum_{\mathbf{n} \in S_k} \prod_{j=1}^{K} \rho_j^{n_j} / n_j!}{\sum_{\mathbf{n} \in S} \prod_{j=1}^{K} \rho_j^{n_j} / n_j!}.    (3)

The blocking probability is computed separately for EF and AF, since each category has its own reserved bandwidth and incoming traffic, independently of the other. Efficient algorithms for this computation can be found in [22]. The link cost is derived from the blocking probabilities of the different rate classes:

J_{GP} = \max_k \{ B_k \},    (4)

and it holds for both EF and AF.

For BE traffic, we have a single aggregate: it is difficult to say whether it has enough space or not, because TCP aims at occupying all the available bandwidth. A possibility consists of deciding that the bandwidth available to BE on a link can be considered sufficient if its average utilization by the BE traffic is within a given threshold. Fixing this kind of threshold analytically is a very complex task; instead, we have chosen to run several ns-2 simulations to extract information about the average behavior of an individual connection, with respect to different values of the aggregate utilization ratio. In Figs. 1 and 2, we report two among several results obtained on the individual TCP connection's performance; in both cases, we observe a quick deterioration of the average rate per connection as the aggregate rate approaches 80% of the bandwidth. Thus, we can choose to impose 80% as the maximum threshold for the utilization ratio. This threshold is indeed fairly conservative; as noted in [30] (see also [31,32]), the performance of a fairly shared bottleneck is excellent, as long as demand is somewhat less than capacity.
Starting from this, we expect that connections on the link are given enough bandwidth if this ratio is less than a given value d = 0.8.
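Eqs. (1)–(3) need not be evaluated by enumerating the state space S; the sketch below uses the Kaufman–Roberts recursion, one of the efficient algorithms of the kind referred to in [22], under the assumption that all rates b_k are integer multiples of the bandwidth granularity. It returns both the per-class blocking probabilities B_k and the occupancy distribution q(c) used in Eq. (5) below.

def stochastic_knapsack(C, rates, rhos):
    """Kaufman-Roberts recursion for a C-unit link in complete sharing.

    C     -- capacity in bandwidth units (e.g., multiples of bw_step)
    rates -- list of class sizes b_k (integers, same units as C)
    rhos  -- list of offered loads rho_k = lambda_k / mu_k
    Returns (q, B): occupancy distribution q[0..C] and blocking B_k per class.
    """
    g = [0.0] * (C + 1)          # unnormalized occupancy probabilities
    g[0] = 1.0
    for c in range(1, C + 1):
        g[c] = sum(b * rho * g[c - b]
                   for b, rho in zip(rates, rhos) if b <= c) / c
    norm = sum(g)
    q = [x / norm for x in g]
    # A class-k arrival is blocked when fewer than b_k units are free.
    B = [sum(q[C - b + 1:]) for b in rates]
    return q, B

# Example: 100-unit EF partition with two subclasses of 2 and 5 units.
q, B = stochastic_knapsack(100, [2, 5], [20.0, 8.0])
print(B)                                      # blocking probabilities B_1, B_2
print(sum(c * p for c, p in enumerate(q)))    # average occupancy, as in Eq. (5)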
Fig. 1. Average rate per connection, with respect to different values of aggregate utilization of a single link, whose bandwidth is 10 Mbps.
Fig. 2. Average rate per connection, with respect to different values of aggregate utilization of a single link, whose bandwidth is 100 Mbps.

To compute the utilization, let q(c) be the probability of having c resource units occupied in the system by EF and AF connections (which can be calculated from the knapsack model, given the EF and AF thresholds); the average quantity UTIL of occupied resources is then

UTIL = \sum_{c=0}^{C} c \, q(c).    (5)

Thus, for a total link bandwidth LINKBW, the mean available bandwidth for BE is

AVLB_{BE} = LINKBW - UTIL_{EF} - UTIL_{AF}.    (6)

The BB keeps track of the BE mean occupied bandwidth on the link, OCCBW, during the simulation, so we are now able to define the utilization ratio UT as

UT = \frac{OCCBW}{AVLB_{BE}}.    (7)

3.2.2. Optimization function

Several cost functions might be selected to optimize the bandwidth sharing, depending on which goals one is pursuing. Our aim is to equalize the costs of the traffic classes, so that we can vary the system's fairness simply by changing some weight parameters. However, we want to give some kind of priority to the AF and EF traffic classes with respect to the BE one. Thus, we have chosen a particular equalization cost function J, which gives us the possibility to smoothly impose two different constraints: the first one on the maximum blocking probability J_{GP}, and the second one on the BE aggregate utilization ratio. To take these constraints into account, we have defined an ad hoc BE cost function, namely

J_{BE} = \frac{\arctan\{ a [UT - d] \} + \pi/2}{\pi - \operatorname{arctanh}\{ UT \cdot b \}},    (8)

where d > 0 is the desired utilization ratio value, and a > 0 and b > 0 are two shape parameters; moreover, we have introduced a penalty function for the GP traffic, that is
\widetilde{J}_{GP} = \frac{\arctan\{ e [J_{GP} - u] \} - \arctan[-e u]}{\arctan\{ e [1 - u] \} - \arctan[-e u]},    (9)

where e > 0 is a shape parameter and u is the maximum acceptable blocking probability. With these new cost components we are now ready to write the final equalization function as

J_{TOT} = J_{BE} + \widetilde{J}_{GP} + \widetilde{J}_{GP} \cdot J_{GP} \cdot J_{BE},    (10)

so that the equalization of costs results in the minimization

\min_{T_{EF}, T_{AF}} \{ J_{TOT} \}    (11)
with respect to T_EF and T_AF, which are the thresholds for EF and AF traffic, respectively. A graph of the chosen cost function is reported in Fig. 3, where d = 0.77, a = 50, b = 0.7, e = 500 and u = 0.05. As can be seen, this function can be roughly divided into three different regions, which are smoothly interconnected:

• the first region corresponds to values of UT and J_GP that are both below the upper bounds; namely, both constraints are satisfied, and the associated global cost is very low, as the link is not congested;
• the second region corresponds to an excessive value of UT by BE traffic, while the blocking probabilities remain below their upper bound: in this case, the cost is quite high, as there are insufficient resources on the link to carry both BE and GP traffic efficiently, and the allocator decides to penalize BE, to respect the constraint on the GP blocking probabilities;
• finally, the third region corresponds to the case where the GP constraint is not respected: the cost equalization function takes on higher values because, in heavy congestion conditions, the network optimization attempts to respect the GP constraint first, and then, if possible, to protect the BE traffic.

Fig. 3. Graph of the cost equalization function used in link optimization.

Obviously, the optimization problem is not defined for all values of UT and J_GP, but only in a limited domain, which depends on both the BE and GP traffic load on the considered link, with fixed bandwidth. Since the reserved bandwidths are calculated with a minimum granularity equal to the lowest admissible rate for GP flows, the domain of this problem is also discrete. As an example, Fig. 4 reports one of the computed domains, which was obtained by varying all the possible reserved bandwidth thresholds for AF and EF flows, under fixed link capacity and traffic load, and by calculating the resulting UT and J_GP values.

Fig. 4. Example of a domain for the link resource allocation problem.
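To make the per-link optimization concrete, the sketch below enumerates the (T_EF, T_AF) grid of Section 3.2 and evaluates Eqs. (8)–(10) as written above; the callables gp_cost and be_ratio stand for the knapsack model and the simulator estimates and are assumptions of this sketch, not the BB's actual interface.

import math

def j_be(ut, d=0.77, a=50.0, b=0.7):
    """BE cost of Eq. (8); assumes UT * b < 1 so that arctanh is defined."""
    return (math.atan(a * (ut - d)) + math.pi / 2.0) / (math.pi - math.atanh(ut * b))

def j_gp_tilde(j_gp, u=0.05, e=500.0):
    """GP penalty of Eq. (9), normalized between 0 (J_GP = 0) and 1 (J_GP = 1)."""
    return ((math.atan(e * (j_gp - u)) - math.atan(-e * u)) /
            (math.atan(e * (1.0 - u)) - math.atan(-e * u)))

def optimize_link(link_bw, bw_step, gp_cost, be_ratio):
    """Exhaustive search over the (T_EF, T_AF) grid of Section 3.2.

    link_bw, bw_step     -- integers in the same bandwidth unit (e.g., Mbps)
    gp_cost(t_ef, t_af)  -> J_GP, worst blocking probability (knapsack model)
    be_ratio(t_ef, t_af) -> UT, the BE utilization ratio (from the simulator)
    """
    best, best_cost = None, float("inf")
    for gp_total in range(0, link_bw + 1, bw_step):
        for t_ef in range(0, gp_total + 1, bw_step):
            t_af = gp_total - t_ef
            jgp = gp_cost(t_ef, t_af)
            jb, jg = j_be(be_ratio(t_ef, t_af)), j_gp_tilde(jgp)
            cost = jb + jg + jg * jgp * jb    # Eq. (10), as written above
            if cost < best_cost:
                best, best_cost = (t_ef, t_af), cost
    return best, best_cost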
4. Numerical results

With the aim of analyzing the performance of the proposed bandwidth allocation scheme, in the following we report and discuss the results of three test sessions, carried out with the network topologies in Figs. 5, 10 and 18, respectively, each characterized by an increasing complexity level. For each network considered, we studied how the allocator's efficiency changes according to the traffic load. Moreover, we have decided to compare the performance results of the proposed framework with those of a CS-based network: with this kind of policy we expect to obtain (with respect to a PCAC policy with the same allocation mechanism) no protection for BE traffic, but better (overall) GP performance, especially at high network loads, because in this case the only thresholds for the CAC are the link bandwidths.

This Section is organized as follows. Sections 4.1, 4.2 and 4.3 report the performance test results obtained with the three network topologies considered, namely "simple", "intermediate" and "complex", respectively. A large set of input parameters to the simulator, like traffic matrixes and link thresholds obtained from the optimization, can be found in Appendix 2. Section 4.4 includes a detailed analysis of the computational complexity of the allocator. Some validation results regarding the accuracy of both the analytical and the simulative modules can be found in Appendix 1.

Some basic parameter values are kept constant throughout all the tests in the paper: the mean lifetime of EF connections is fixed to 200 s, that of AF connections to 800 s, and the mean burst size of BE traffic is 0.122 MBytes. The propagation delay of all links is 5 ms. The values of the cost function parameters are fixed to d = 0.77, a = 50, b = 0.7, e = 500 and u = 0.05, respectively, which means that we try to keep the GP blocking probability under 5% and the BE utilization ratio under 77%.

Fig. 5. The simple network topology: the bandwidth is fixed to 100 Mbps for all links, except L3 and L6 that are fixed to 50 Mbps.
4.1. Simple network environment
The first test network used for the allocator's performance evaluation is represented in Fig. 5. This network topology is not too complex, and allows easy understanding of how the mechanism works: in fact, it includes only three bottleneck links (with reference to Fig. 5, L2, L3 and L6, respectively), where the resource sharing among the traffic classes might be critical. The performance test has been performed by increasing the GP traffic offered load (i.e., by raising the number of incoming AF and EF flows), while keeping the BE traffic constant. Details about the traffic matrix parameters and the optimal link thresholds, calculated by the allocator, are reported in Tables 2 and 4 in Appendix 2. Fig. 6 depicts the total average throughput of all BE aggregate flows, while Figs. 7 and 8 show the maximum blocking probabilities for EF and AF flows, respectively.
Fig. 6. Simple Network environment: overall throughput of BE flows.
Fig. 7. Simple Network environment: maximum EF blocking probabilities.

Fig. 8. Simple Network environment: maximum AF blocking probabilities.
From all these results we can remark how the BE aggregates are "protected", as far as possible, by the GP allocated thresholds, as the network congestion level rises. Conversely, a CS policy (without BE protection) would obviously improve the GP performance, in terms of higher average throughputs and lower blocking probabilities. To better understand how the allocation decisions are adaptively extracted from the congestion level of the links, Fig. 9 shows, for each bottleneck link, how the cost of the "locally optimal" link resource configuration changes as the network offered load increases. In this respect, we have reported all the optimal allocations calculated during the test on the cost function of the link that best reflects the specific behavior we want to highlight.
Fig. 9. Mapping on the cost function profile of the change in resource allocation on link L2, L3 and L6, while the traffic load increases.
In Fig. 9a, we can note how the allocator tries to satisfy both the BE and GP constraints on the bottleneck link L2 as long as the network congestion is low; when the traffic load increases and the link resources are no longer sufficient, the allocator decides to maintain the GP blocking probability constraint and to sacrifice BE performance, accepting a utilization ratio higher than the desired one. On the other hand, Fig. 9b highlights the shift in the optimal configuration as the traffic load on link L3 increases: in this case, the resources remain sufficient to satisfy both the GP constraint and the BE one, and so all the allocations appear mapped onto the lower part of the cost function. Finally, as we can observe for link L6 in Fig. 9c, at first, when the traffic load rises slightly, the allocator does not have enough resources to assure the BE ratio constraint; afterwards, as the traffic load increases further, there are no longer enough resources to satisfy the GP constraint either. Note that in this last case the BE ratio values of the calculated allocations do not diverge, because BE traffic continues to use all the reserved bandwidth that the AF and EF flows do not temporarily utilize.

4.2. Intermediate network environment

The second performance test session is based on the network topology in Fig. 10.
The bandwidth of all links is fixed to 80 Mbps. In this new test session we used six EF aggregate flows, five AF and six BE ones; the tests shown were performed by increasing only the GP traffic loads, up to exceeding the overall network capacity (see Table 3 in Appendix 2 for more details on the traffic matrix used). Figs. 11 and 12 report the bandwidth utilizations and blocking probabilities, respectively, of the flows crossing link 8: these results show that, in the presence of network congestion, the proposed mechanism tends to equalize the resources reserved to the QoS traffic classes and the BE one, with respect to the CS policy. In greater detail, the proposed mechanism achieves a bandwidth utilization about 8 Mbps larger than with the CS policy (on a total BE offered load equal to 22 Mbps). Regarding the blocking probabilities, Fig. 12 shows that the allocator respects the imposed constraints (EF and AF maximum blocking probabilities equal to 4% and 3%, respectively), until the network becomes heavily congested (offered load values equal to 294, 305 and 316 Mbps are larger than the whole network capacity).
Fig. 10. The Intermediate network topology used for evaluating the allocator's performance.

Fig. 11. Throughput of all the flows crossing link 8 with allocation and with CS policy.

Fig. 12. Maximum blocking probability value for all the EF flows crossing link 8 with allocation and with CS policy.
Moreover, as can be noted from Fig. 12, while, with the CS policy, we obtain blocking probabilities near to zero when there is no network congestion, the proposed mechanism results in blocking probabilities close to the constraint values. This effect is substantially due to a design choice: we have chosen to limit, even when there is no congestion, the bandwidth utilization of QoS traffic, to avoid possible transients with no resources for BE traffic.

Figs. 13 and 14 report the maximum blocking probability values of AF flow 2 (see again Table 3) and the throughput of BE flow 4, respectively. Also in this case, when the network becomes congested, the allocation policy tends, somehow, to trade off QoS blocking probabilities against BE throughput. Figs. 15–17 show the overall performance results obtained in this test session. In particular, Fig. 17 shows the total BE offered load and the sum of the throughputs of all BE aggregate flows, for both the allocation with our mechanism and the CS policy. Figs. 15 and 16 show the maximum blocking probability values for EF and AF flows, respectively, again for both our mechanism and the CS policy. As we can observe from these results, under considerable congestion levels the allocation mechanism, by limiting the resources reserved to GP, assures better performance to the BE traffic class than the CS policy.

4.3. Complex network environment

We finally performed a test session to evaluate the performance of the proposed allocation framework in a complex network environment. In greater detail, we chose the network topology in Fig. 18, which includes 13 nodes and 40 links with bandwidth equal to 1 Gbps, except those between node 0 and node 2, and node 1 and node 0, whose bandwidth is fixed to 350 Mbps; these two links can be considered as the only two bottlenecks in the network. For what concerns the traffic matrix, we used the aggregate flows shown in Table 5 (Appendix 2).
Fig. 13. Blocking probability of AF flow with id = 2 in both allocator and CS environments.

Fig. 14. Throughput of BE flow 4 in both allocator and CS environments.

Fig. 15. Maximum blocking probability for all the EF flows with allocation and with CS policy.
Fig. 16. Maximum blocking probability for all the AF flows with allocation and with CS policy.
Fig. 17. Total offered load and sum of the throughputs of all the BE aggregate flows with allocation and with CS policy.

Fig. 19. Maximum blocking probability for both the EF and the AF flows with the allocation policy.

Fig. 20. Overall BE throughput with allocation and with CS policy.
The traffic offered loads are kept constant for all the aggregate flows, while the performance tests were carried out with a variable number of AF aggregates: we started with only AF0 and AF1, and then added a new aggregate for each following test, up to and including the AF10 flow. Fig. 19 shows the blocking probabilities obtained with the allocator; the CS ones are not reported, since they are very close to zero (about 0.001%).
Fig. 18. The complex network topology.
As shown in Fig. 20, this performance gap in terms of GP blocking probability with respect to the CS policy is clearly balanced by a higher BE throughput. Thus, also in this case, the proposed mechanism tends to balance the resources reserved to the QoS traffic classes and the BE one.

4.4. Complexity analysis

Regarding the analysis of computational complexity, we must consider the specific features of both the analytical and simulative models and their iterations. In particular, the number of iterations between the simulative and the analytical modules needed to achieve convergence strictly depends on the dynamics of the link metrics of the routing algorithm in use: more specifically, as the dynamic level of the link metrics rises, a larger number of iterations is needed. In all the tests performed, we used link weights depending on the available bandwidth for BE traffic, and we never exceeded four iterations. With respect to the simulative module, we may note that the latter is substantially an event-driven simulator, which computes the average bandwidth utilization of the network links at each GP flow's birth and death. The simulation is
stopped only when the output parameters (i.e., the link utilizations and the GP blocking probabilities) have stabilized. The computational complexity needed to perform a single simulation event is very low, and depends mostly, in a nearly linear way, on the number of GP and BE aggregated traffic flows and on the number of network links. The largest contribution to the computational complexity is basically due to the number of events that need to be simulated to achieve a stable estimate of all the measured parameters. In greater detail, in order to evaluate the stability degree of the output parameters, we used a method based on the Student's t probability distribution, which substantially fixes the number of simulated events and, hence, the time horizon of the simulations. In this sense, we can reasonably conclude that the time horizon and the number of simulated events (and thus the computational complexity of the simulative module) generally increase according to the second-order statistics of the measured network parameters.

For what concerns the analytical module, we have to underline that it is applied independently on each link; hence, its complexity rises linearly with the number of network links. Moreover, to further reduce the computation times, more instances of this module can be run in parallel (i.e., computing the bandwidth allocations of different links simultaneously), as sketched below. With this assumption in mind, the computational complexity of the analytical module can be considered equal to that needed to compute the allocation of a generic network link. Moreover, the latter depends linearly on the ratio between the link bandwidth and the minimum GP class rate.
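Since the analytical module works on each link independently, its instances can indeed be distributed over worker processes; a minimal sketch (optimize_link stands for the per-link allocation routine of Section 3.2 and is an assumption of this fragment) is the following.

from concurrent.futures import ProcessPoolExecutor

def allocate_all_links(links, per_link_load, optimize_link, workers=4):
    """Run the independent per-link allocations in parallel worker processes.

    links         -- iterable of link identifiers
    per_link_load -- {link: traffic estimates produced by the simulative module}
    optimize_link -- module-level function returning the locally optimal
                     (T_EF, T_AF) pair for one link (must be picklable)
    """
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = {l: pool.submit(optimize_link, l, per_link_load[l]) for l in links}
        return {l: f.result() for l, f in futures.items()}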
Fig. 21 reports the computation times of the simulative module and of the analytical one (for a single link allocation only), obtained with the complex topology used in Section 4.3. All these results were obtained on an AMD Athlon MP processor with an internal clock of 1.5 GHz and a cache size of 256 kByte.

5. Conclusions

A multiservice IP VPN based on the DiffServ paradigm has been considered, composed of Edge Routers (ERs) and Core Routers (CRs), forming a domain that is supervised by a Bandwidth Broker (BB). The traffic in the network belongs to three basic categories: Expedited Forwarding (EF), Assured Forwarding (AF) and Best Effort (BE). A global strategy for admission control, bandwidth allocation and routing within the domain has been introduced and discussed. The aim is to minimize blocking of the "guaranteed" flows (EF and AF) under a fixed constraint on the maximum blocking value, while at the same time providing some resources also to BE traffic in terms of a "utilization ratio". As the main objective is to apply the above-mentioned control actions on-line, a computational structure has been devised, which allows a relatively fast implementation of the overall strategy. In particular, a mix of analytical and simulation tools is applied jointly: "locally optimal" bandwidth partitions for the traffic classes are determined over each link (which set the parameters of the link schedulers in the CRs, the routing metrics and the admission control thresholds); a "high level" simulation (involving no packet-level operation) is performed, to determine the per-link traffic flows that would be induced by the given allocation; then, a new analytical computation is carried out, followed by a simulation run, until the allocations do not change significantly. The possible convergence of the scheme (under a fixed traffic pattern) has been investigated by simulation, and the results of its application under different traffic loads have been studied on three test networks. The results show the capability of the proposed method of controlling the performance of the network and of maintaining a good level of global efficiency.
Fig. 21. Average computation times of the simulative and the analytical modules, according to different numbers of AF flows.
Acknowledgement

This work was supported in part by the Italian Ministry of Education, University and Research (MIUR) under the TANGO and FAMOUS projects.
Appendix 1. Validation tests
In this Appendix, we report some numerical results obtained to validate both the simulative and the analytical components of the proposed framework. All the tests shown were carried out by using the simple network topology in Fig. 5 and the traffic matrix in Table 2. The link thresholds are those calculated during Simulation # 8 of the allocation test in Section 4.1 (shown in Table 4 in Appendix 2).

In greater detail, our first goal is to understand and estimate the accuracy level of the traffic model used within the allocation mechanism. To this purpose, we decided to compare our flow-level simulation model with the packet-level Network Simulator 2 (ns-2). Moreover, to obtain steady and meaningful average values from both the simulative model and ns-2, the simulation lengths are determined by a stabilization criterion, which ensures that the measured mean value of each parameter under study differs from the statistical mean within a 5% confidence interval, at a 95% confidence level.

Fig. 22 reports the error of the adopted traffic model with respect to ns-2 in estimating the average throughput of the EF, AF and BE aggregated flows, respectively.
Fig. 22. Traffic model error with respect to ns-2 to estimate the throughput of EF, AF, and BE aggregated traffic flows.
Observing this figure, we can note how low the model error is for all traffic classes: it is generally much lower than 0.05%, except for BE traffic, whose error increases up to 0.65% when the network reaches total congestion. For what concerns the blocking probabilities, we observed identical values between ns-2 and the traffic model in all the tests performed.

In the second validation session, we verified the invariance of the allocation results with respect to different initial network configurations.
Table 1
Invariance test: initial network configurations and optimal thresholds (all threshold values in Mbps)

                 Link 1      Link 2      Link 6      Link 7
                 EF    AF    EF    AF    EF    AF    EF    AF
Sim 1             0     0     0     0     0     0     0     0
Sim 2             0    20     0    20     0    10     0    20
Sim 3             0    40     0    40     0    20     0    40
Sim 4             0    60     0    60     0    30     0    60
Sim 5             0    80     0    80     0    20     0    20
Sim 6             0   100     0   100     0    50     0   100
Sim 7            20     0    20     0    10     0    20     0
Sim 8            20    20    20    20    10    10    20    20
Sim 9            20    40    20    40    10    20    20    40
Sim 10           20    60    20    60    10    30    20    60
Sim 11           20    80    20    80    10    40    20    80
Sim 12           40     0    40     0    20     0    40     0
Sim 13           40    20    40    20    20    10    40    20
Sim 14           40    40    40    40    20    20    40    40
Sim 15           40    60    40    60    20    30    40    60
Sim 16           60     0    60     0    30     0    60     0
Sim 17           60    20    60    20    30    10    60    20
Sim 18           60    40    60    40    30    20    60    40
Sim 19           80     0    80     0    40     0    80     0
Sim 20           80    20    80    20    40    10    80    20
Sim 21          100     0   100     0    50     0   100     0
Min allocation   48    43    46    41    26    42    28    26
Max allocation   50    44    46    42    26    44    28    26
With this aim, using the network topology in Fig. 5 and the traffic matrix of Simulation # 7 in Table 2 (Appendix 2), we executed a set of allocations, starting from 21 different configurations of reserved bandwidth on all the network links. Table 1 shows both the 21 initial conditions and the corresponding optimal link bandwidth thresholds, calculated with the proposed scheme. Observing these results, we can note how similar the resulting bandwidth allocations are (they differ within one unit of bandwidth granularity); we can therefore clearly conclude that the new optimal network allocations do not depend on the starting network configurations, but correctly depend only on the traffic matrix.
Appendix 2. Test configuration parameters

This Appendix includes the configuration parameters used in all the test sessions of Section 4. In particular, Tables 2 and 4 regard the simple network environment (Section 4.1), and report the traffic matrix parameters and the resulting optimal thresholds for GP traffic, respectively. Tables 3 and 5 contain all the traffic parameters used in both the Intermediate and the Complex network environments (Sections 4.2 and 4.3).
Table 2
Simple Network environment: average offered loads (in Mbps) for the aggregate flows

          EF 0   EF 1   AF 0   AF 1   BE 0   BE 1   BE 2
Tx Node     0      1      0      1      2      4      1
Rx Node     5      6      6      6      5      6      6
Sim 1      28     14     28     15     24     11.5   11.5
Sim 2      29     15     29     15     24     11.5   11.5
Sim 3      30     16     30     15     24     11.5   11.5
Sim 4      31     17     31     15     24     11.5   11.5
Sim 5      32     18     32     15     24     11.5   11.5
Sim 6      33     19     33     15     24     11.5   11.5
Sim 7      34     20     34     15     24     11.5   11.5
Sim 8      35     21     35     15     24     11.5   11.5
Sim 9      36     22     36     15     24     11.5   11.5
Sim 10     37     23     37     15     24     11.5   11.5
Sim 11     38     24     38     15     24     11.5   11.5
Sim 12     39     25     39     15     24     11.5   11.5
Sim 13     40     26     40     15     24     11.5   11.5
Table 3
Intermediate network environment: offered load (in Mbps) of the aggregate flows

          EF0 EF1 EF2 EF3 EF4 EF5 AF0 AF1 AF2 AF3 AF4 BE0 BE1 BE2 BE3 BE4 BE5
Tx Node     0   6   7   1   5   0   7   0   6   0   5   6   6   6   7   7   7
Rx Node     3   4   4   3   3   4   3   3   3   4   4   2   4   0   2   4   0
Sim 0       7  10   8   7   5   8   4   4   5   8  10   8  14  20  10  14  20
Sim 1       8  11   9   8   6   9   5   5   6   9  11   8  14  20  10  14  20
Sim 2       9  12  10   9   7  10   6   6   7  10  12   8  14  20  10  14  20
Sim 3      10  13  11  10   8  11   7   7   8  11  13   8  14  20  10  14  20
Sim 4      11  14  12  11   9  12   8   8   9  12  14   8  14  20  10  14  20
Sim 5      12  15  13  12  10  13   9   9  10  13  15   8  14  20  10  14  20
Sim 6      13  16  14  13  11  14  10  10  11  14  16   8  14  20  10  14  20
Sim 7      14  17  15  14  12  15  11  11  12  15  17   8  14  20  10  14  20
Sim 8      15  18  16  15  13  16  12  12  13  16  18   8  14  20  10  14  20
Sim 9      16  19  17  16  14  17  13  13  14  17  19   8  14  20  10  14  20
Sim 10     17  20  18  17  15  18  14  14  15  18  20   8  14  20  10  14  20
Sim 11     18  21  19  18  16  19  15  15  16  19  21   8  14  20  10  14  20
Sim 12     19  22  20  19  17  20  16  16  17  20  22   8  14  20  10  14  20
Sim 13     20  23  21  20  18  21  17  17  18  21  23   8  14  20  10  14  20
Sim 14     21  24  22  21  19  22  18  18  19  22  24   8  14  20  10  14  20
Table 4
Simple Network environment: resource thresholds (in Mbps) allocated for EF and AF traffic

          Link 1      Link 2      Link 6      Link 7
          EF    AF    EF    AF    EF    AF    EF    AF
Sim 1     42    36    42    36    21    36    21    26
Sim 2     44    38    44    38    21    38    22    28
Sim 3     44    39    44    39    22    39    23    26
Sim 4     46    39    46    39    23    39    24    28
Sim 5     46    42    46    42    24    41    26    26
Sim 6     48    42    46    42    25    42    27    26
Sim 7     48    44    46    41    26    43    28    26
Sim 8     50    45    46    41    26    45    29    26
Sim 9     50    46    50    44    27    45    31    26
Sim 10    52    47    52    47    28    46    32    26
Sim 11    54    46    54    46    28    47    32    28
Sim 12    52    47    52    47    28    48    34    26
Sim 13    52    47    52    47    28    49    35    26

Table 5
Complex network environment: traffic matrix and offered load values (in Mbps)

Flow ID   Tx Node   Rx Node   Offered load (Mbps)
EF 0         5         4            65
EF 1         0         3            65
EF 2         6        11            65
EF 3         9        10            65
EF 4         7         3            65
BE 0         0         4            65
BE 1         9         3            65
BE 2         0         4            65
BE 3         9         3            65
BE 4         0         4            65
BE 5         9         3            65
BE 6         0         4            65
BE 7         9         3            65
AF 0         5         3            65
AF 1         0         3            65
AF 2         8        11            65
AF 3         7         3            65
AF 4         6        12            65
AF 5         5         3            65
AF 6         0         3            65
AF 7         8        11            65
AF 8         7         3            65
AF 9         7         3            65
AF 10        5        11            65
References

[1] Network Wizards, Internet Domain Survey.
[2] Y. Maeda, Standards for Virtual Private Networks (Guest Editorial), IEEE Commun. Mag. 42 (6) (2004) 114–115.
[3] M. Carugi, J. De Clercq, Virtual Private Network Services: scenario, requirements and architectural construct from a standardization perspective, IEEE Commun. Mag. 42 (6) (2004) 116–122.
[4] C. Metz, The latest in Virtual Private Networks: part I, IEEE Internet Comp. 7 (1) (2003) 87–91.
[5] C. Metz, The latest in VPNs: part II, IEEE Internet Comp. 8 (3) (2004) 60–65.
[6] W. Cui, M.A. Bassiouni, Virtual Private Network bandwidth management with traffic prediction, Comp. Networks 42 (6) (2003) 765–778.
[7] P. Knight, C. Lewis, Layer 2 and 3 Virtual Private Networks: taxonomy, technology, and standardization efforts, IEEE Commun. Mag. 42 (6) (2004) 124–131.
[8] T. Takeda, I. Inoue, R. Aubin, M. Carugi, Layer 1 Virtual Private Networks: service concepts, architecture requirements, and related advances in standardization, IEEE Commun. Mag. 42 (6) (2004) 132–138.
[9] S.S. Gorshe, T. Wilson, Transparent Generic Framing Procedure (GFP): a protocol for efficient transport of block-coded data through SONET/SDH networks, IEEE Commun. Mag. 40 (5) (2002) 88–95.
[10] J. Zeng, N. Ansari, Toward IP Virtual Private Network quality of service: a service provider perspective, IEEE Commun. Mag. 41 (40) (2003) 113–119.
[11] I. Khalil, T. Braun, Edge provisioning and fairness in VPN-DiffServ networks, J. Network Syst. Manage. 10 (1) (2002) 11–37.
[12] R. Bolla, R. Bruschi, F. Davoli, Capacity planning in IP Virtual Private Networks under mixed traffic, Comp. Networks 50 (8) (2006) 1069–1085.
[13] R. Bolla, R. Bruschi, F. Davoli, M. Repetto, Analytical/simulation optimization system for access control and bandwidth allocation in IP networks with QoS, in: Proceedings of the 2005 International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS'05), Cherry Hill, NJ, July 2005, pp. 339–348.
[14] R. Bolla, F. Davoli, M. Repetto, A control architecture for quality of service and resource allocation in multiservice IP networks, in: Proceedings of the International Workshop on Architectures for Quality of Service in the Internet (Art-QoS 2003), Warsaw, Poland, March 2003, Lecture Notes in Computer Science, vol. 2698, Springer-Verlag, Berlin, 2003, pp. 49–63.
[15] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, W. Weiss, An Architecture for Differentiated Services, RFC 2475, December 1998.
[16] V. Jacobson, K. Nichols, K. Poduri, An Expedited Forwarding PHB, RFC 2598, June 1999.
[17] J. Heinanen, F. Baker, W. Weiss, J. Wroclawski, Assured Forwarding PHB Group, RFC 2597, June 1999.
[18] J. Heinanen, R. Guerin, A Two Rate Three Color Marker, RFC 2698, September 1999.
[19] F. Kelly, Notes on effective bandwidths, in: Stochastic Networks: Theory and Applications, Oxford University Press, 1996.
[20] H. Li, C. Huang, M. Devetsikiotis, G. Damm, Extending the concept of effective bandwidths to DiffServ networks, in: Proceedings of the Electrical and Computer Engineering Conference, Ontario, Canada, May 2004, vol. 3, pp. 1669–1672.
[21] L. Aspirot, P. Belzarena, P. Bermolen, A. Ferragut, G. Perera, M. Simón, Quality of service parameters and link operating point estimation based on effective bandwidths, Perform. Evaluation 59 (2005) 103–120.
[22] K.W. Ross, Multiservice Loss Models for Broadband Telecommunication Networks, Springer, London, 1995.
[23] C.C. Beard, V.S. Frost, Prioritized resource allocation for stressed networks, IEEE/ACM Trans. Network. 9 (5) (2001) 618–633.
[24] S.C. Borst, D. Mitra, Virtual partitioning for robust resource sharing: computational techniques for heterogeneous traffic, IEEE J. Select. Areas Commun. 16 (5) (1998) 668–678.
[25] S. Chen, K. Nahrstedt, An overview of quality-of-service routing for the next generation high-speed networks: problems and solutions, IEEE Network 12 (6) (1998) 64–79.
[26] The Network Simulator – ns-2. Documentation and source code available from the project home page.
[27] D. Awduche, J. Malcom, J. Agogbua, M. O'Dell, J. McManus, Requirements for Traffic Engineering Over MPLS, RFC 2702, 1999.
[28] D. Bertsekas, R. Gallager, Data Networks, second ed., Prentice-Hall, 1992.
[29] R. Bolla, R. Bruschi, M. Repetto, A fluid simulator for aggregate TCP connections, in: Proceedings of the 2004 International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS'2004), San Diego, CA, July 2004, pp. 5–10.
[30] S. Ben Fredj, T. Bonald, A. Proutière, G. Régnié, J.W. Roberts, Statistical bandwidth sharing: a study of congestion control at flow level, in: Proceedings of ACM SIGCOMM, San Diego, CA, August 2001, pp. 111–122.
[31] J.W. Roberts, A survey on statistical bandwidth sharing, Comp. Networks 45 (3) (2004) 319–332.
[32] J.W. Roberts, Internet traffic, QoS and pricing, Proc. IEEE 92 (9) (2004) 1389–1399.
Raffaele Bolla received the ‘‘laurea’’ degree in Electronic Engineering in 1989 and the Ph.D. degree in Telecommunications in 1994 from the University of Genoa. From 1994 to 1996 he held a post-doctoral grant at the Department of Communications, Computer and Systems Science (DIST), University of Genoa. Since November 1996 he has been ‘‘Ricercatore’’ with DIST at the University of Genoa, and since September 2004 he has been ‘‘Professore associato’’ at the same Department. Since 1999 he has taught Telematics 1 and 2, Telecommunication Networks and Telemedicine, and Protocols and Architectures for Wireless, for a total of six courses in the Master Degrees in Telecommunications Engineering, Computer Engineering, Biomedical Engineering and Environmental Engineering. He has been, and currently is, responsible at DIST for many important research projects and contracts with both public institutions (Italian Ministry, European Union, regional entities) and private companies (Selex Communication, Alcatel-Lucent, Ericsson, Intel, ...) in the telecommunication field. He acts as a reviewer for many international journals (IEEE/ACM Transactions on Networking, IEEE Transactions on Vehicular Technology, Computer Networks, ...) and conferences, and he serves on the technical committees of international conferences (SPECTS, QoS-IP, Globecom, ...). He has co-authored over 130 scientific publications in international journals and international conference proceedings. His current research interests are in: (i) resource allocation, routing and control of IP QoS (Quality of Service) networks, (ii) modeling and design of TCP/IP networks, (iii) P2P overlay network traffic measurement and modeling, (iv) advanced platforms for software routers, (v) performance and traffic monitoring in IP networks and (vi) advanced mobility management in heterogeneous wireless networks.
Roberto Bruschi received the ‘‘laurea’’ degree in Telecommunication Engineering in 2002 and the Ph.D. degree in 2006, both from the University of Genoa. He currently works with the Department of Communications, Computer and Systems Science of the University of Genoa, and he is a member of CNIT, the Italian inter-university consortium for telecommunications. He has been an active member of several Italian research projects, such as TANGO, FAMOUS, BORA-BORA and EURO. He has co-authored over 15 scientific publications in international journals and international conference proceedings. His main interests include Open Routers, network processors, TCP modeling, VPN design, P2P and QoS network control.
Franco Davoli received the ‘‘laurea’’ degree in Electronic Engineering in 1975 from the University of Genoa, Italy. Since 1990 he has been Full Professor of Telecommunication Networks at the University of Genoa, where he is with the Department of Communications, Computer and Systems Science (DIST). From 1989 to 1991 and from 1993 to 1996 he was also with the University of Parma, Italy. His current research interests are in bandwidth allocation, admission control and routing in multiservice networks, wireless mobile and satellite networks, and multimedia communications and services. He has co-authored over 250 scientific publications in international journals, book chapters and conference proceedings. He is Associate Editor of the International Journal of Communication Systems (Wiley), a member of the Editorial Board of the international journal Studies in Informatics and Control, and Area Editor of Simulation – Transactions of the SCS. In 2004 he was the recipient of an Erskine Fellowship from the University of Canterbury, Christchurch, New Zealand, as Visiting Professor. He has been Principal Investigator in a large number of projects and has served in several positions in the Italian National Consortium for Telecommunications (CNIT). He was the Head of the CNIT National Laboratory for Multimedia Communications in Naples for the term 2003–2004, and he is currently Vice-President of the CNIT Management Board.
Matteo Repetto was born in Genoa, Italy, in 1976. He received the ‘‘laurea’’ degree in Telecommunication Engineering in 2000 and the Ph.D. degree in Electronics and Informatics in 2004 from the University of Genoa. He is a member of CNIT, the Italian inter-university consortium for telecommunications. He has co-authored over 10 scientific publications in international journals and conference proceedings, and he cooperates in several national and European research projects in the networking area. His current research interests are in the modeling of wireless networks, multiple access techniques for ad hoc networks, estimation of freeway vehicular traffic, and resource allocation for wireless and wired IP networks.