Providing service differentiation in pure IP-based networks

Antonio Varela, Teresa Vazão, Guilherme Arroz

INESC-ID/Instituto Superior Técnico, Av. Prof. Dr. Anibal Cavaco Silva, 2744-016 Porto Salvo, Portugal

Computer Communications 35 (2012) 33–46

Article history: Received 22 January 2011; received in revised form 14 June 2011; accepted 27 July 2011; available online 5 August 2011.

Keywords: Service differentiation; QoS; Management plane; SLAs; Flow-based routing

Abstract

The lack of effective cooperation between the data, control and management planes of the QoS routing solutions presented so far prevents the implementation of service differentiation in pure IP-based networks. Most path calculation proposals performed by the control plane are unaware of the service characteristics of each flow; scalable data plane QoS proposals ignore the issue of selecting the best paths to route the traffic; and the proposed management plane schemes do not perform network state maintenance and service level monitoring. Multi-service routing is a flow-based forwarding protocol that implements service differentiation in pure IP-based networks through tight cooperation between the data, control and management planes. This cooperation is accomplished by a data plane that supports the DiffServ model and performs route selection based on each flow's service class; the management plane exploits this to carry out network state maintenance and performance monitoring, using the RTCP protocol, and provides service metrics to the control plane for route calculation. Simulation experiments show that Multi-service routing achieves better performance than a traditional link state protocol with the DiffServ model and QoS routing, in heavily loaded network scenarios with mixed traffic having different service requirements.

1. Introduction

Quality of service (QoS) is an important matter as far as IP networks are concerned, considering the progressive convergence of data and telecommunication networks, where countless services and applications with different performance requirements must coexist. The introduction of efficient QoS support in the internet is a fundamental issue for the success of new emerging business models. Nowadays, there is a significant effort in this domain from both academia and industrial players, and different research studies and standardization efforts have been actively contributing to solve this lack of QoS support. So far, in the context of pure IP-based networks, solutions are mostly focused on a particular part of the problem. New routing mechanisms aim at defining more efficient paths to forward the traffic, disregarding each particular type of service. New service models focus on treating each packet according to its class of service, without taking into account the best route to forward it. Finally, management solutions are essentially targeted at assessing the level of service being delivered, without any interference in the way packets are routed or treated in the network. Therefore, the control, data and management planes are not effectively cooperating to provide the best service to the users.

In our previous work [1,2], Multi-service routing was proposed as an extension of traditional intra-domain routing protocols that copes with service differentiation. In that proposal, traffic is classified according to the Differentiated Services (DiffServ) model into three classes - Expedited Forwarding (EF), Assured Forwarding (AF) and Best-Effort (BE) - and treated according to the Per-Hop Behaviors (PHB) defined in the DiffServ framework. From a routing perspective, each traffic flow is classified as a priority or a non-priority flow and is forwarded according to that classification: priority traffic is always forwarded through the shortest path, while non-priority traffic might be forwarded through longer paths whenever network performance is degrading. The results achieved in a small network context have shown that priority traffic experiences better performance with this approach than with DiffServ alone, or with DiffServ plus QoS routing without cooperation. The Multi-service architecture requires specialized network elements, enhanced with certain capabilities, not yet fully described in our previous work. This paper presents those specialized network elements and defines all three components and their interactions:

- In the data plane, the DiffServ architecture.
- In the control plane, the Multi-service path calculation.
- In the management plane, the performance monitor.


In this paper, we complement the sensitivity analysis with an assessment of performance at the service level, using the 16-node Geant network with different types of traffic and different service level agreements (SLAs). After describing the related work in Section 2, the Multi-service architecture is presented in Section 3, considering the three planes and their mutual interactions; a simple example is used to show its main principles. The impact of the Multi-service architecture on the level of service delivered by a network is a key issue: in Section 4, the proposed model is assessed using the network simulator (ns-2) [3] and compared with a DiffServ network and a DiffServ network plus a QoS routing protocol. We finalize our work in Section 5 by drawing some conclusions and discussing some open issues identified so far.

2. Related work

Traditional routing protocols such as the Open Shortest Path First (OSPF) and the Routing Information Protocol (RIP) are designed to use data and control plane information: the data plane is used to select the packet path across a network, while the control plane is used to calculate the routes to each destination. Due to the lack of an effective management plane, these protocols cannot provide QoS, because routing information is processed only on an administrative basis. The IETF framework for QoS routing [4] recommends a set of features that should be added to traditional routing protocols to provide QoS support. These features implicitly imply the existence of a management plane fully cooperating with the remaining data and control planes, as they call for:

- The availability of multiple paths between each source-destination node pair, to convey traffic with different QoS requirements.
- The possibility of switching traffic to a better path only when the service level can no longer be met, in order to avoid the undesirable effects of routing oscillations.
- The use of non-optimal paths to increase the amount of traffic carried by the network.

2.1. Control plane QoS support

The first set of solutions proposed to address the lack of QoS support in the internet consists of routing schemes [5] that tackled the QoS problem by retrieving the least loaded path between two end-points, using network state information. In these schemes, the retrieved least loaded paths convey all traffic types, disregarding their specific performance requirements or class of service, thus falling short of effective service differentiation support. Masip-Bruin et al. [6] survey the most important unsolved issues related to QoS routing and make important proposals to address them. So far, QoS-aware routing protocols, such as the extensions proposed by Apostolopoulos et al. to enhance OSPF [7], implicitly introduce a basic management plane, which collects network state information gathered from the data plane and provides it to the control plane to determine a network path capable of meeting the traffic's QoS requirements. Even though it assures QoS, this basic management plane does not offer a scalable solution, as it aims to accommodate network flows on an individual basis. Alternatively, Nelakuditi et al. [8] propose a basic management plane at each node that processes locally collected data, as the control plane selects a path among a set of candidate paths. Nevertheless, this approach is not a pure IP-based oriented proposal.

Tiwari and Sahoo [9] propose a scheme to provide QoS in an OSPF-based best-effort network, using a method that determines a number of alternative paths eligible to reroute priority traffic whenever a node experiences congestion on an outgoing link. The number of alternative paths is determined by a parameter calculated after OSPF has converged. Nevertheless, it is not clear whether the protocol suffers from oscillation effects, as the nodes involved in the alternate path routing revert back to the original OSPF routing once the congestion is over. Some other approaches base their mechanisms on the recommendation that traffic with different QoS requirements should be routed differently [10], and thus every router performs the selection of the next hop depending on the type of service actually to be delivered. As long as the same kind of tasks are carried out in every router, the proposed architectures do not comply with one of the most fundamental design principles of the Next Generation Networks (NGN) [11], which states that complex tasks should be confined to the network edge, while the network core should care about fast routing matters. Multi-Topology Routing (MTR) [12] creates multiple logical topologies on a single physical network, to which packets can be logically mapped at each hop according to predefined criteria, such as specific packet header content. With MTR, multiple paths are available between each ingress-egress pair; in this way, multiple logical topologies on a single physical topology provide network traffic load balancing capabilities. Bae and Henderson [13] describe an experimental evaluation of an extension of OSPF with MTR to deploy traffic engineering (and, consequently, service differentiation) in the context of a simple and pure IP-based network. Kwong et al. [14] carried out an interesting investigation in a scenario composed of Dual Topology Routing (DTR), with one routing instance targeting delay and the other throughput, but their goal went beyond optimizing routing decisions, as they sought network robustness as well. The control plane proposed by Masip-Bruin et al. [15] aims to reduce the effects of routing inaccuracy by using prediction strategies at source nodes to determine path availability, rather than updates through conventional link state advertisement packets. However, this approach is applied in a context of switched circuits, where QoS is not actually an issue, and is focused on bandwidth-constrained applications.

2.2. Data plane QoS support

At the data plane level, the Integrated Services (IntServ) [16] and Differentiated Services (DiffServ) are two architectures proposed by the IETF that act as a support layer for QoS functionality. The IntServ model provides strict QoS guarantees for traffic flows but does not scale well to large networks, while DiffServ is a scalable model that assures QoS to aggregated traffic flows classified into a restricted set of service classes. So far, the use of DiffServ over IP/MPLS (Multi-Protocol Label Switching [17]) networks seems to be the most promising solution for delivering QoS in large scale networks. Nonetheless, permanently fulfilling the service level is a complex task, as the DiffServ static configuration does not always cope with network traffic dynamics. To enhance DiffServ support of QoS, new network elements have been proposed which dynamically configure a DiffServ domain [10], but no standardization effort has been made so far, and dynamic configuration remains an open issue. An interesting data plane proposal is described by Stylianos and Tsaoussidis [18], in which service differentiation is promoted with size-oriented queue management, using both dropping and scheduling mechanisms where small packets have increased priority; this ignores the case of flows composed of big packets, such as video transmissions.


2.3. Management plane QoS support

Policy-based management provides a fairly interesting approach to service level control, as it comprises the possibility of defining a broad range of policies, starting at a high level and ending at a level where the network components are properly configured [19]. The IETF is one of the major players in the definition of a policy-based management framework that includes a consistent information model [20]. However, while permitting end-to-end QoS control, the policy-based approach does not yet have a comprehensive, standardized semantic framework spanning all layers of policy abstraction. Moreover, such approaches generally present a single point of critical failure. Kvalbein and Lysne [21] propose a mechanism using MTR for intra-domain traffic engineering, which includes a management plane that monitors the occupancy of network links and adapts to traffic matrix variations by simply changing the mapping from ingress-egress flows to topologies. Nevertheless, important issues still need to be addressed in this mechanism, such as how to avoid system instabilities when flows are constantly moved from one topology to another, and the definition of monitoring methods to assess path quality in the different topologies. An optimization framework proposed by Rocha et al. [22] describes a management plane capable of configuring the routing weights of link state protocols in order to improve QoS levels in the internet, but the complexity of evolutionary computation is still an issue. Papanagiotou and Howarth [23] also propose an evolutionary algorithm for the management plane that changes link weights in order to accomplish delay differentiation between two classes of flows, but the authors are not sufficiently clear about that algorithm's complexity.

3. Multi-service architecture

3.1. General overview

The routing protocol proposed in this paper relies on a simple and scalable architecture composed of a data plane, a control plane and a management plane, effectively cooperating with each other. In the data plane, it selects a path for each incoming traffic flow according to its specific QoS requirements, like a virtual circuit that routes aggregated flows. In the management plane, it gathers and computes both local QoS information disseminated by each node and global service level information.


Service level information is offered by the network, and its content is determined by a set of service metrics from which the control plane calculates the paths made available to the incoming traffic. Scalability in this solution is achieved by processing the traffic as aggregated flows, and simplicity is achieved by performing the most complex tasks at the edge of the network, delegating to the network core the task of expediting packets according to the policy rules defined at the edge.

3.2. Concept of a Multi-service network

A Multi-service network achieves the goal of providing effective service differentiation by defining three types of routes to be traversed by data flows (Fig. 1): standard paths, used by priority data flows; alternative paths, traversed by non-priority flows; and null paths, which carry null-priority data flows, corresponding to demoted non-priority data flows, whenever a relevant network service level is at risk of not being fulfilled. A standard path is the shortest route available between a pair of source and destination nodes, while an alternative path generally corresponds to a longer one. A null path denotes a route traversed by data flows whose packets are dropped right away when entering the Multi-service network. Depending on the network congestion level, standard and alternative paths might coincide, especially under light load conditions. Standard and alternative paths are calculated using two independent instances of the traditional intra-domain routing protocol: the instance that calculates standard paths is based on administrative metrics, while the one that calculates alternative paths is based on local service level metrics. Null paths are directly determined by the global service level being delivered by the network. The identification and mapping of incoming data flows are both performed at the ingress nodes of a Multi-service domain, while core nodes only perform an aggregated flow identification mechanism, as aggregated flows are the only entities allowed to be routed within this domain.

3.3. Multi-service architecture

The Multi-service architecture is depicted in Fig. 2, where the three planes - data, management and control - are highlighted with their main functionalities. The data plane performs the path selection task by identifying and classifying the aggregated service class with which each incoming flow is associated. Each flow is forwarded along an available path determined by its aggregated service class.

Fig. 1. A Multi-service network.


Fig. 2. Multi-service architecture.

The management plane exploits the data flow streams to carry out its network state maintenance task, by monitoring both local and global performance and by assessing the current service level delivered by the network according to a predefined set of QoS metrics. The control plane executes the route calculation task by means of the service metrics provided by the management plane. Standard, alternative and null paths are processed by the control plane routing instances and afterwards made available to the data plane to forward the aggregated service class flows.

3.4. The data plane and path selection

The task of selecting among a standard, an alternative and a null path to convey an incoming data flow is supported not only by the identification/mapping mechanisms mentioned earlier, but also by an appropriate set of routing and forwarding rules. Routing rules are applied whenever the first packet belonging to a new incoming data flow arrives at a node and needs to find the next hop toward its destination. This information is obtained by selecting the appropriate routing table to be consulted. The routing table selection process obeys the following rules:

Routing Rule 1: Priority data flows always select the standard (shortest) path, as it has the highest probability of assuring the required service level. This is accomplished using the standard routing table available at each node.

Routing Rule 2: Non-priority data flows always select an alternative path - which may coincide with the standard path whenever the network is unloaded - by using the alternative routing table available at each node.

Consider pkt, the first packet of an incoming data flow, arriving at a network node with d as its final IP destination address, and associated with an aggregated flow a_p^{ij}, where i and j are the Multi-service ingress and egress nodes and p is the packet priority (p = 0 for priority packets, p > 0 for non-priority packets and p < 0 for null packets). Each domain node owns both a set of aggregated flows A = {a_p^{ij}}, corresponding to the paths that include that node, and a cache flow table T_cf = {(a_p^{ij}, x_p^{ij})}, made of pairs of aggregated flows and corresponding output node interface identifiers (p = 0 stands for an outgoing interface belonging to a standard path).

A domain standard routing table T_s = {(d_h, x_h^s)} and an alternative routing table T_a = {(d_h, x_h^a)} are both composed of pairs of destination addresses and output node interface identifiers (x_h^s belongs to a standard path, while x_h^a belongs to an alternative path); T denotes an unspecified routing table, either the standard or the alternative one. Moreover, InterfaceID(table, item) is a function that returns an output node interface identifier, given a table (a routing table or the cache flow table) and an item (a destination IP address or an aggregated flow identifier), and Forward(interface, pkt) simply forwards pkt through the path related to the given node output interface. Assuming that an output node interface is denoted x^s when related to a standard path, x^a when related to an alternative path and x when related to an unspecified path type, routing a packet pkt of a generic aggregated flow a_p^{ij} can be defined as:

If a_p^{ij} ∉ A Then
    A ← A ∪ {a_p^{ij}};
    x ← InterfaceID(T, d);
    T_cf ← T_cf ∪ {(a_p^{ij}, x)};

The routing rules - which are exclusively applied to the first packet of an incoming data flow - are then:

Routing Rule 1: Priority data flows - p = 0; T = T_s; x = x^s.
Routing Rule 2: Non-priority data flows - p ≠ 0 (p < 0 for null-priority data flows); T = T_a; x = x^a.

Subsequent packets of the same flow do not use the routing table, as the next hop and the corresponding aggregated flow identification were made available by the first packet, through the flow cache table repository. The forwarding process obeys the following rules:

Forwarding Rule 1: Priority data flows - packets belonging to a priority aggregated flow are always forwarded through the same path that was associated with it when the first packet of the corresponding flow entered the network.

Forwarding Rule 2: Non-priority data flows - non-priority packets are forwarded through the path previously assigned to their aggregated flow. However, that path might change while the flow is still active, whenever a noticeable variation occurs in the quality of service level delivered by the network. Such a variation may be caused either by local congestion or by a failure to fulfill a network SLA.

Forwarding a packet pkt belonging to the aggregated flow a_p^{ij} can thus be defined as:

If a_p^{ij} ∈ A Then
    x ← InterfaceID(T_cf, a_p^{ij});
    Forward(x, pkt);

The forwarding rules applied to packet pkt can be stated as follows:

Forwarding Rule 1: Priority data flows - p = 0; x = x^s.
Forwarding Rule 2: Non-priority data flows - p ≠ 0 (p < 0 for null-priority data flows); x = x^a.
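To make these rules concrete, here is a minimal Python sketch of the path selection logic at a single node, under simplified table structures; all names (MultiServiceNode, route_first_packet, forward) are illustrative assumptions, not the authors' implementation.

```python
class MultiServiceNode:
    """Sketch of Routing Rules 1-2 (first packet of a flow) and
    Forwarding Rules 1-2 (subsequent packets) at a Multi-service node."""

    def __init__(self, standard_table, alternative_table):
        self.T_s = standard_table     # T_s: destination -> standard-path interface
        self.T_a = alternative_table  # T_a: destination -> alternative-path interface
        self.A = set()                # aggregated flows whose paths include this node
        self.T_cf = {}                # cache flow table: aggregated flow -> interface

    def route_first_packet(self, flow, dest):
        """flow = (ingress, egress, p): p == 0 priority, p > 0 non-priority,
        p < 0 null priority (dropped on entering the network)."""
        _, _, p = flow
        if p < 0:
            return None                               # null path: drop right away
        table = self.T_s if p == 0 else self.T_a      # Routing Rules 1 / 2
        x = table[dest]                               # x <- InterfaceID(T, d)
        self.A.add(flow)                              # A <- A U {a}
        self.T_cf[flow] = x                           # T_cf <- T_cf U {(a, x)}
        return x

    def forward(self, flow):
        """Forwarding Rules 1-2: reuse the interface cached by the first packet."""
        return self.T_cf[flow] if flow in self.A else None

node = MultiServiceNode({"j0": "if_s"}, {"j0": "if_a"})
node.route_first_packet(("i0", "j0", 0), "j0")   # priority -> "if_s" (standard path)
node.route_first_packet(("i0", "j0", 1), "j0")   # non-priority -> "if_a" (alternative)
```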

Fig. 3 describes how the routing and forwarding rules are applied to a packet at the ingress and core nodes of a Multi-service network to perform the path selection task.

3.5. Management plane and network state maintenance

In a Multi-service network, service levels are met by ensuring that the different traffic types are routed using the appropriate available paths. The availability of those paths is guaranteed by a monitoring mechanism that provides a precise view of both local and global network state information. The combination of local and global network state information provides an important tool to accurately fulfill any kind of SLA to which the network is committed.


The existence of a wide variety of services with different QoS requirements suggests devising specific measurement methods for each traffic type. For instance, real-time traffic performance is evaluated by delay and packet losses, which can be assessed through network probes installed at the domain ingress nodes that gather information retrieved from Real Time Control Protocol (RTCP) [24] packet fields, namely Delay since Last SR (DLSR) and Cumulative Number of Packets Lost (CNPL), respectively. Furthermore, interactive traffic such as HTTP imposes the deployment of specialized agents at the network edges to evaluate the page reply time. Both the local and the global network state information are obtained by processing measured performance indicators, which are first smoothed with an Exponentially Weighted Moving Average (EWMA) technique to filter out instantaneous surges in the indicators' values. Actions related to the quantified service metrics are triggered whenever predefined threshold values are reached; to avoid undesirable oscillation effects, hysteresis mechanisms are deployed as well. Consider: A(t), the action associated with the performance indicator value V(t) at sampling time t; Δt, the sampling interval; K_max, the number of defined thresholds; P_k, the kth threshold value for the performance indicator V(t) associated with a specific service level metric; and H_k, the hysteresis value of the kth threshold. Then, at the next sampling time t + Δt, an action is triggered whenever one of the following two conditions holds:

V(t) < P_k ∧ V(t + Δt) ≥ P_k ⇒ A(t + Δt) = A_k,    (1)

where 1 ≤ k ≤ K_max, and

V(t) ≥ P_k ∧ V(t + Δt) < P_k − H_k ⇒ A(t + Δt) = A_{k−1},    (2)

where 1 ≤ k ≤ K_max. All actions A_{k−1} might alternatively be replaced by a single action A_0. Such a replacement implies that P_k − H_k = P_0, where 1 ≤ k ≤ K_max. Therefore,

V(t) ≥ P_k ∧ V(t + Δt) < P_0 ⇒ A(t + Δt) = A_0,    (3)

where 1 ≤ k ≤ K_max.

It is worth emphasizing the different sorts of actions triggered by the local and the global monitor. Once a threshold is reached, the action triggered by a local monitor consists of running a procedure that updates the alternative routing table with a different routing metric value, corresponding to the new local load state of the node's neighborhood; updating the alternative routing table produces new alternative paths to be used by non-priority data flows. Actions triggered by the global monitor consist of dropping non-priority packets; this dropping process yields a new set of null paths, which will route the demoted non-priority data flows.
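A minimal Python sketch of this trigger logic - EWMA smoothing plus the hysteresis conditions of Eqs. (1)-(3) - might look as follows; the class name, the smoothing factor and the threshold values are illustrative assumptions, not taken from the paper.

```python
def ewma(prev, sample, alpha=0.2):
    """Exponentially Weighted Moving Average: smooths instantaneous surges."""
    return alpha * sample + (1.0 - alpha) * prev

class ThresholdMonitor:
    """Triggers A_k on an ascending crossing of P_k (Eq. 1) and A_{k-1}
    once V falls below P_k - H_k on the way down (Eq. 2)."""

    def __init__(self, P, H, actions, v0=0.0):
        self.P, self.H, self.actions = P, H, actions  # actions[0] is A_0
        self.level = 0    # current threshold level k (0 = below all thresholds)
        self.V = v0       # smoothed indicator V(t)

    def sample(self, value):
        prev, self.V = self.V, ewma(self.V, value)
        # Eq. (1): V(t) < P_k and V(t + dt) >= P_k  =>  trigger A_k
        while self.level < len(self.P) and prev < self.P[self.level] <= self.V:
            self.level += 1
            self.actions[self.level]()
        # Eq. (2): hysteresis - only fall back once V < P_k - H_k
        while self.level > 0 and self.V < self.P[self.level - 1] - self.H[self.level - 1]:
            self.level -= 1
            self.actions[self.level]()

# e.g. link occupancy (%) with two thresholds and 10-point hysteresis:
mon = ThresholdMonitor(P=[50, 80], H=[10, 10],
                       actions=[lambda: print("readmit non-priority flows"),
                                lambda: print("deflect new non-priority flows"),
                                lambda: print("re-route non-priority flows")])
for v in [30, 60, 70, 90, 95, 40, 10, 5, 5]:
    mon.sample(v)
```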

Fig. 3. Multi-service forwarding rules.


3.6. The control plane and path calculation

The Multi-service protocol is devised to work as an extension of existing intra-domain routing protocols, such as OSPF and RIP, requiring only a restricted number of modifications and additional components. As far as routing is concerned, two independent protocol instances (Fig. 4) are used to provide two independent sets of paths: standard and alternative, respectively. Null paths are alternative paths downgraded by specific values of the global service metrics assessed by the global performance monitor. The traditional routing instance calculates the traditional routing table based on the regular administrative metrics periodically disseminated by the routing protocol; the traditional routing table content makes standard paths available to priority aggregated flows. The alternative routing instance calculates the alternative routing table - which provides the alternative paths to be used by non-priority aggregated flows - based on local service metrics reflecting each neighborhood's state information.

To prevent excessive QoS routing information from traversing the network core due to frequently changing service metrics, which could also cause routing instabilities, this type of information is transferred on a periodic basis, or whenever a change occurs after a stability period has elapsed since the last change. As shown in Fig. 5, Event1 - which is related to a service metric variation - occurring within the stability period is simply ignored, while Event2, occurring at an instant t > τ_0 + T, actually triggers the alternative routing table update procedure. Since there are no further events related to service metric variations, regular alternative routing table update cycles (Event0) are performed at time instants τ_2 = τ_1 + ΔT_0 and τ_3 = τ_2 + ΔT_0. Focusing on the events caused by service metric variations - the regular alternative routing table update cycle is similar to the one used by the traditional routing tables - and considering M(t) and M(t + Δt) as the service metric values at subsequent sampling instants t and t + Δt, and T as the duration of the stability period, a service metric variation generates an alternative routing table update event Event(t + Δt) if the following condition holds:

M(t + Δt) ≠ M(t) ∧ Δt ≥ T ⇒ Disseminate{Event(t + Δt)}    (4)
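As an illustration of condition (4), the Python sketch below gates metric-driven dissemination on the stability period T while preserving the regular update cycle ΔT_0; the class name, the callback and the timing mechanism are hypothetical.

```python
import time

class AlternativeRoutingUpdater:
    """Disseminates a service metric change only when at least T seconds
    have elapsed since the last metric-driven update (Eq. 4); regular
    updates (Event0) still run every DT0 seconds."""

    def __init__(self, stability_period_T, regular_cycle_DT0, disseminate):
        self.T = stability_period_T
        self.DT0 = regular_cycle_DT0
        self.disseminate = disseminate    # callback that floods the update
        self.last_metric = None
        self.last_event = -float("inf")   # time of the last metric-driven update
        self.last_cycle = time.monotonic()

    def on_metric_sample(self, metric, now=None):
        now = time.monotonic() if now is None else now
        # Eq. (4): M(t+dt) != M(t) and dt >= T  =>  Disseminate{Event(t+dt)}
        if metric != self.last_metric and now - self.last_event >= self.T:
            self.disseminate(metric)
            self.last_event = now
            self.last_metric = metric
        # Regular periodic update cycle (Event0), every DT0 seconds
        if now - self.last_cycle >= self.DT0:
            self.disseminate(self.last_metric)
            self.last_cycle = now

updater = AlternativeRoutingUpdater(stability_period_T=5.0,
                                    regular_cycle_DT0=30.0,
                                    disseminate=print)
```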

Fig. 4. The path calculation.

To better clarify these concepts, let us use an example. Fig. 6 represents a network composed of three border nodes, i0, j0 and j1, and three core nodes, c0, c1 and c2. There are six links (l0, l1, l2, l3, l4 and l5) connecting the network nodes, all having unitary cost (c_k = 1, for k = 0, ..., 5). As an example, at a given time instant t the following paths exist, defined as ordered sets of nodes corresponding to routes from ingress node i0 to egress node j0:

- A unique standard path R_0^{i0 j0} = {i0, c1, c2, j0}, having cost C_0^{i0 j0} = 3, which corresponds to the shortest path between ingress node i0 and egress node j0. This path is mapped to the priority aggregated flow a_0^{i0 j0}.
- Two alternative paths: path R_1^{i0 j0} = {i0, c1, c2, j0}, having cost C_1^{i0 j0} = 3, mapped to the non-priority aggregated flow a_1^{i0 j0}; and path R_2^{i0 j0} = {i0, c0, j1, c1, c2, j0}, with cost C_2^{i0 j0} = 5, mapped to the non-priority aggregated flow a_2^{i0 j0}.

Fig. 5. Update events of the alternative routing table.


Fig. 6. A Multi-service network composed of 3 border nodes and 4 core nodes.

The alternative path R_1^{i0 j0} is equal to the standard path R_0^{i0 j0}, while R_2^{i0 j0} is longer, because of a previous alternative routing table update caused by the occurrence of a local service metric variation between node i0 and node c1.
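The example can be reproduced with a small shortest path computation. The sketch below assumes the six-link topology described above and models the local service metric variation on link i0-c1 as an increased weight in the alternative routing instance; the penalty value is an arbitrary illustration.

```python
import heapq

def dijkstra(adj, src, dst):
    """Standard Dijkstra shortest path; returns (node list, total cost)."""
    dist, prev, seen = {src: 0}, {}, set()
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

def graph(weight_i0_c1=1):
    """Fig. 6 topology; weight_i0_c1 > 1 models the degraded link i0-c1."""
    links = [("i0", "c0", 1), ("i0", "c1", weight_i0_c1), ("c0", "j1", 1),
             ("j1", "c1", 1), ("c1", "c2", 1), ("c2", "j0", 1)]
    adj = {}
    for a, b, w in links:
        adj.setdefault(a, []).append((b, w))
        adj.setdefault(b, []).append((a, w))
    return adj

print(dijkstra(graph(), "i0", "j0"))                  # (['i0','c1','c2','j0'], 3)
print(dijkstra(graph(weight_i0_c1=9), "i0", "j0"))    # (['i0','c0','j1','c1','c2','j0'], 5)
```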

3.7. Scalability, stability and convergence

The scalability of the Multi-service architecture is guaranteed by the aggregated flow based routing mechanism, which is similar to a virtual circuit approach. Each incoming flow, when it enters the network, is mapped to one of the existing aggregated flows, which determines a specific network path. This operation consists of assigning an identifier tag to a specific field of the packet header (the flow label field of the IPv6 packet header). Different flows may share the same aggregated flow identifier, provided they have the same network ingress and egress nodes and the same traffic priority service. Ingress nodes are responsible for the mapping operation, while core nodes just route packets based on their flow label field content. For instance, considering a Multi-service network composed of N border nodes, and letting s_max be the maximum age assigned to the aggregated flow identifiers, with each border node able to create s_null null aggregated flows, the number of aggregated flows the network is able to identify is:

N(N − 1)[(s_max − s_null) + 1].    (5)
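As a worked instance of Eq. (5), the sketch below evaluates the expression for assumed parameter values; the figures chosen here are illustrative, not taken from the paper.

```python
def aggregated_flow_capacity(n_border, s_max, s_null):
    """Eq. (5): number of distinguishable aggregated flows,
    N(N - 1)[(s_max - s_null) + 1]."""
    return n_border * (n_border - 1) * ((s_max - s_null) + 1)

# e.g. a 16-border-node network, assuming s_max = 255 and s_null = 16:
print(aggregated_flow_capacity(16, 255, 16))  # 16 * 15 * 240 = 57600
```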

The convergence of the Multi-service routing protocol follows from the fact that the alternative routing table update cycle has two components: a periodic mechanism, like that of traditional routing protocols, and an event-based one, whose actions are triggered only after a stability period has expired since the last routing table update. Therefore, the convergence time of Multi-service routing is of the same order of magnitude as that of traditional intra-domain routing.

Fig. 7. Multi-service simulation thresholds.

4. Simulation studies

The proposed Multi-service routing architecture has been tested by means of simulations, using the network simulator ns-2, version 2.27, enhanced with a set of additional capabilities and functionalities implemented by the Multi-service classifier. This classifier is basically composed of a flow/aggregated classifier that selects a path for a packet based on its header fields, a traffic classifier that determines the packet's traffic class priority, and two address classifiers used to support unicast packet forwarding of the priority and non-priority aggregated traffic flows, respectively. Additionally, the Multi-service classifier includes a flow label generator/identifier module to map each incoming packet to a specific aggregated traffic flow.

This modified version of the ns-2 network simulator is also configured with the DiffServ module and runs two independent instances of the link state routing protocol. Each node's resources are monitored by assessing the DiffServ queue lengths; global intra-domain performance monitoring for service evaluation is performed by probes, installed at each network border node, that retrieve information from the RTP/RTCP protocol. Two main sets of simulation experiments were carried out to validate this routing protocol proposal. Most of these tests compared the performance of the Multi-service routing protocol (MS-R) with that obtained with both a traditional link state routing protocol (LS-R) and a quality of service routing protocol (QS-R). The ns-2 implementation of the QS-R is quite similar to that of the MS-R, but it does not include the traffic classifier and the second address classifier; as it is unaware of traffic differentiation, a single routing instance is enough.

The first set of simulations was carried out using a basic network scenario, to determine the Multi-service routing protocol threshold values and to evaluate the Multi-service architecture performance when only the local resource monitor module was deployed. Setting the Multi-service threshold values is an important task, as it permits tuning the routing protocol performance. Three thresholds (Fig. 7) and their corresponding actions were defined:

The deviation threshold: acts like a neighborhood pre-congestion alarm. Once it is reached on the ascending edge of the triggered-action axis, all previous traffic flows maintain their paths, while new incoming non-priority traffic flows are assigned to new, generally longer, alternative paths that divert from the congested neighborhood.

The critical threshold: once it is reached on the ascending edge of the triggered-action axis, it diverts a portion of the non-priority traffic crossing a congested neighborhood.

The standard threshold: indicates that a steady light traffic load situation has been reached, and paths including that neighborhood become available again to all new incoming non-priority traffic flows. This readmission action is triggered on the descending edge of the triggered-action axis.

Fig. 8. A simple Multi-service network scenario.

Fig. 9. Delay for the critical threshold values of 80 percent and 90 percent.


The second set of simulations targeted the performance evaluation of the Multi-service protocol in a larger network scenario, having both greater topology complexity and greater diversity of traffic source types, using the previously selected threshold values and with the global intra-domain performance management module enabled.


4.1. Parameterization of threshold values

Parameterization of the critical and deviation threshold values was accomplished by running a set of simulations under different traffic load conditions, applied to the simple network scenario described in Fig. 8, whose characterization is illustrated in Table 1. The standard threshold value was assumed equal to the deviation threshold, as in these specific tests the traffic load is always applied in the ascending direction.

4.1.1. Critical threshold value parametrization

During the first part of these simulation experiments, the Multi-service protocol was configured to encompass only the critical threshold, which means that at a certain point in time all non-priority flows are deviated from the shortest path. These situations happen only when the network is heavily loaded, meaning that the tested threshold values must be high - for instance, 80 percent and 90 percent of the link occupancy. According to Figs. 9 and 10, the 80 percent critical threshold value offers the best performance, as it reduces delay and packet losses. Both values lead to similar performance as far as priority flows are concerned; however, regarding non-priority flows, the 80 percent critical threshold produces no transitory spike before the path deflection occurs, which implies fewer packet losses.

4.1.2. Deviation threshold value parametrization

The second part of these simulation experiments consisted of fixing the critical threshold value at 80 percent while tuning the deviation threshold value. Three different values were tested: 20 percent, 50 percent and 70 percent.

Fig. 10. Packet losses for the critical threshold values of 80 percent and 90 percent.

The results are shown in Figs. 11 and 12. On the one hand, if the deviation threshold is set to 20 percent of the link capacity, longer delays are experienced by both priority and non-priority flows, as incoming non-priority flows start to be diverted too soon. On the other hand, when the 70 percent value was selected, non-priority packet losses were higher, because path deflection happened too late, when the smaller-capacity link between node 15 and node 12 was already saturated. Fixing the deviation threshold at 50 percent of the link capacity, good performance is achieved by both priority and non-priority flows, with smaller delay values and no losses of non-priority packets.

4.1.3. Parameterized threshold values evaluation

Table 1
Traffic characterization I.

Traffic class | Type      | Number of sources | CoS | Traffic | Source node | Destination node | Rate [Kb/s] | Packet size [B] | Total rate [Kb/s]
Priority      | Single    | 1                 | EF  | CBR     | 1           | 20               | 24          | 40              | 24
Non-priority  | Aggregate | 42                | EF  | CBR     | 2           | 20               | 24          | 40              | 1.008
Non-priority  | Aggregate | 0-18              | BE  | CBR     | 3           | 20               | 500         | 1.500           | 0-9.000

Fig. 11. Delay for the deviation threshold values of 20 percent, 50 percent and 70 percent.

Fig. 12. Packet losses for the deviation threshold values of 20 percent, 50 percent and 70 percent.

The previous tests have shown that the values of 50 percent and 80 percent, assigned to the deviation and critical thresholds respectively, make the Multi-service routing protocol achieve its best performance. These threshold values are therefore kept fixed throughout the remaining tests, as far as the Multi-service architecture configuration is concerned.

4.1.4. A basic performance evaluation of the Multi-service routing protocol

The second set of simulation experiments was devoted to the performance evaluation of the Multi-service routing protocol (MS-R), compared with both the traditional link state (LS-R) and the quality of service (QS-R) routing protocols.

Table 2 summarizes the main characteristics of the three routing protocols used in the remaining tests. The traffic characterization is described in Table 3; it differs from the one presented previously in the number of non-priority BE sources used in each simulation set. Nineteen successive simulation tests were performed: for the nth test, n BE traffic sources are gradually activated, at a rate of 500 Kb/s per source, starting from zero up to n. The goal is to compare the performance of the selected routing protocols, starting from a light network load (1 Mb/s of priority traffic alone) up to an increasingly congested situation, until a total traffic load of 10 Mb/s is reached (1 Mb/s of priority traffic and 9 Mb/s of non-priority traffic). The traffic sources were started roughly in the following order: first, 5 percent of the priority sources, followed by all non-priority sources, and finally the remaining 95 percent of the priority sources.

Tables 4 and 5 show the results obtained for three meaningful network load situations - light, intermediate and heavy traffic load, respectively - where, for both the priority and the non-priority traffic, the MS-R exhibits smaller delays, while its throughput and losses are close to those exhibited by the QS-R, which in turn are far better than those achieved by the LS-R.

Fig. 13 shows the occupancy rate of the link between node 15 and node 12 during the first 30 s of a simulation started with a non-priority traffic load greater than 2 Mb/s. This figure illustrates the basic behavior dynamics of both the Multi-service and the quality of service routing protocols. At time instant 3 s, the Multi-service deflection threshold is crossed, determining an alternative, longer path for all new incoming non-priority flows, which will not include the current link. At the same instant, the QS-R determines an alternative path for all new incoming flows, belonging either to the priority or the non-priority traffic, maintaining the same link occupancy rate value afterwards. At time instant 20 s, the Multi-service critical threshold is reached, as new priority flows keep making their way across the link from instant 18 s onwards; all non-priority flows crossing the link are then diverted, which provokes a sharp decrease in the link occupancy rate. As new priority flows still cross that link until time instant 24 s, the corresponding occupancy rate increases, reaching a steady value afterwards.

Fig. 14 represents the delay evaluation, where better performance for the priority traffic is obtained when the Multi-service protocol is used, as this traffic type is always routed across the shortest path, which is not the case when the QS-R is used. Indeed, from roughly time instant 20 s, the priority traffic delay increases by approximately 80 percent when the QS-R is used, with respect to the values obtained with its MS-R counterpart. In contrast, from that same instant, if the MS-R is used, the non-priority traffic delay is slightly greater than the values obtained with the QS-R, as longer paths are always assigned to this traffic type whenever a link's critical threshold is crossed. Fig. 15 illustrates a snapshot of the events that occurred during the first 30 s of simulation time, related to the priority traffic delay offered by the MS-R and the QS-R.

Table 2
Routing protocols main characteristics.

Routing protocol              | Deflection threshold                                  | Critical threshold                       | Routing table instance                                   | Routing strategy
                              | Value | Action                                        | Value | Action                           | Traditional              | Alternative                   |
Multi-service (MS-R)          | 50%   | Deflection of new incoming non-priority flows | 80%   | Re-routing of non-priority flows | Priority traffic routing | Non-priority traffic routing  | Aggregate
Quality of service (QS-R)     | 80%   | Deflection of new incoming flows              | –     | –                                | All traffic routing      | –                             | Aggregate
Traditional link state (LS-R) | –     | –                                             | –     | –                                | All traffic routing      | –                             | Datagram


Table 3
Traffic characterization II.

Traffic class | Type      | Number of sources | CoS | Traffic | Source node | Destination node | Rate [Kb/s] | Packet size [B] | Total rate [Kb/s]
Priority      | Single    | 1                 | EF  | CBR     | 1           | 20               | 24          | 40              | 24
Non-priority  | Aggregate | 42                | EF  | CBR     | 2           | 20               | 24          | 40              | 1.008
Non-priority  | Aggregate | 0-18              | BE  | CBR     | 3           | 20               | 500         | 1.500           | 0-9.000

Table 4
Priority traffic results.

Load rate         | 1 Mb/s              | 5 Mb/s               | 10 Mb/s
Routing protocol  | MS-R  QS-R  LS-R    | MS-R  QS-R  LS-R     | MS-R  QS-R  LS-R
Delay [ms]        | 11.91 11.91 11.91   | 11.95 15.41 1.0807   | 11.98 19.9  25813.4
Packet losses [%] | 0     0     0       | 0     0     80.18    | 0     0     95.6
Throughput [Mb/s] | 0.99  0.99  0.99    | 0.96  0.96  0.18     | 0.91  0.91  0.04

Table 5
Non-priority traffic results.

Load rate         | 1 Mb/s           | 5 Mb/s               | 10 Mb/s
Routing protocol  | MS-R QS-R LS-R   | MS-R  QS-R  LS-R     | MS-R  QS-R  LS-R
Delay [ms]        | –    –    –      | 28.71 29.06 10.607   | 31.96 30.25 25895.3
Packet losses [%] | –    –    –      | 0     0     80.16    | 0     0     76.34
Throughput [Mb/s] | –    –    –      | 3.92  3.42  1.78     | 8.6   8.6   1.93

Fig. 13. Link occupancy rate between node 15 and node 12.

Fig. 14. Delay evaluation of the priority and non-priority flows.

4.2. Large scale network

Simulation studies were carried out in a larger simulation scenario, with multiple traffic source types having different characteristics, and running in two extreme network traffic load situations. The first situation is characterized by all the network links being congested (80 percent link occupancy rate); the second is characterized by the existence of a large network region that is heavily congested (100 percent link occupancy rate), with the remaining regions congested at 80 percent link occupancy. Each traffic source type is related to a specific service to be delivered by the network, according to a predefined set of SLA specifications.

Fig. 15. Events occurring during the first 30 s of simulation time: (1) deflection of new non-priority flows; (2) deflection of all new flows [Link 15-12]; (3) deflection of all new flows [Link 15-16]; (4) arrival of new non-priority flows; (5) deflection of all non-priority flows; (6) regular network traffic load.


The purpose of running simulations in two extreme network traffic load scenarios is justified by the fact that the tests carried out with the Multi-service routing protocol in the first congestion situation did not exercise the specific functionalities of its global intra-domain performance management module, because the network dynamics did not require them. The benefits of the Multi-service architecture can only be thoroughly perceived in simulation studies where a perceptible portion of the network is heavily loaded, which implies that actions from the global intra-domain performance management module must be triggered. Therefore, the goal of the corresponding simulation studies was to evaluate and compare the performance of the Multi-service routing protocol version that monitors only the local resources (MS-R) with that of the version that also includes the global intra-domain performance management module (MSD-R).

Fig. 16. 16 nodes GEANT's network topology.

4.2.1. Simulation scenario

Fig. 16 describes the network topology where the simulations are carried out; it is based on a GEANT network composed of 16 nodes. The network is made up of three regions: Western, Central and Eastern. The network links are bidirectional and all have a capacity of 2 Mb/s, a latency of 5 ms and a cost of 1, except the link between node 2 and node 12, which has a latency of 20 ms and a cost of 5. This Multi-service network is configured as a DiffServ domain. The deflection and critical threshold values are set at 50 percent and 80 percent, respectively, while the standard threshold is assigned the value of 40 percent. The traffic characterization is described in Table 6: each traffic type is mapped to a Multi-service traffic class and to a DiffServ class of service, together with the number of sources generating each traffic type, the nodes where they are located, the related destination nodes, the source bit rates and the corresponding packet sizes. Each western node randomly generates four VoIP sessions to each eastern node; a VoIP session is classified as Multi-service priority traffic and as the DiffServ EF service class, with a rate of 24 Kb/s and a packet size of 24B. An HTTP session is classified as Multi-service non-priority traffic and as the AF4 DiffServ class; each network node randomly generates 20 HTTP sessions with each of the 3 HTTP servers located at nodes 3, 6 and 9. Pages have a size of 2500B, are composed of 4 objects, and are requested at a rate of 1 page/s.

The CBR traffic is classified as Multi-service non-priority traffic and as the BE DiffServ class. The random CBR traffic sources generate load in the network according to the specific congestion level actually deployed: either 80 percent of all link capacities, or 80 percent of the western and eastern link capacities and 100 percent of the central link capacities. The SLA specifications for the VoIP and HTTP traffic, and the corresponding Multi-service network performance objectives, are defined as shown in Table 7. Two service thresholds are defined for the global intra-domain performance management module, along with the respective triggering conditions and triggered actions, as shown in Table 8.

4.2.2. Simulation results from a scenario with all network links at 80 percent of their capacity

Figs. 17 and 18 show the results related to the delay and losses experienced by the VoIP traffic. It is straightforward that the MS-R is the only routing protocol that complies with the defined SLAs, as its delay and loss values are smaller than those previously agreed.

Table 6
Traffic characterization III.

Multi-service traffic class | Number of sources | DiffServ CoS | Traffic type | Source node  | Destination node | Note
Priority                    | 4                 | EF           | VoIP         | Western node | Eastern node     | Packet size: 24B; rate: 24 Kb/s
Non-priority                | 20                | AF4          | HTTP         | All nodes    | 3, 6, 9          | Pages composed of 4 objects (main page: 1000B; other objects: 5000B each); 1 page request/s
Non-priority                | 2 × 20            | BE           | CBR          | –            | –                | Occupancy equals 80% of all links' capacity, or 80% of both western and eastern links' capacity and 100% of central links' capacity

Table 7 SLA’s specification and corresponding Multi-service performance objectives. Specification number

Traffic type

1

VOIP

2

HTTP

Description of SLA specification Delay < 100 ms Packet losses/session < 0.5% Response time < 5 s regarding 90% of page requests

DiffServ CoS

Multi-service traffic class

Multi-service network performance goal

EF

Priority

Delay < 90 ms Packet losses/Session < 0.4%

AF4

Non-priority




Table 8
Service thresholds, triggering conditions and triggered actions of the global intra-domain performance management module.

Service threshold | Triggering condition                           | Triggering action
Critical          | Delay > 90 ms or packet losses/session > 0.5%  | Non-priority traffic packets are dropped
Standard          | Delay < 90 ms and packet losses/session < 0.5% | None
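A minimal sketch of how the global monitor could apply the Table 8 thresholds follows; the function name and return values are illustrative assumptions, not the authors' implementation.

```python
def global_monitor_action(delay_ms, loss_pct):
    """Applies the Table 8 service thresholds to the measured global metrics."""
    if delay_ms > 90 or loss_pct > 0.5:
        return "drop-non-priority"   # critical threshold: demote to null paths
    if delay_ms < 90 and loss_pct < 0.5:
        return None                  # standard threshold: no action
    return None                      # boundary case: leave the state unchanged

assert global_monitor_action(95.0, 0.1) == "drop-non-priority"
assert global_monitor_action(40.0, 0.1) is None
```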

These goals are achieved by the MS-R thanks to its ability to find alternative paths for non-priority traffic when the shortest ones become loaded, or to drop non-priority packets entering the network on the brink of an overload condition. These MS-R mechanisms ensure that, for priority traffic such as VoIP, the most suitable paths are always available in all network load situations. In contrast, the QS-R does not devote special attention to priority traffic, treating it the same way as non-priority traffic. With the QS-R, as the network load increases, all new incoming priority and non-priority flows are equally conveyed through new alternative and generally longer paths, which means that priority traffic flows, such as VoIP, will eventually suffer additional delay when crossing more nodes. In fact, when the QS-R is used, 25 percent of the priority traffic experiences a delay value that is not SLA compliant. As expected, VoIP traffic SLA violations occur when the LS-R is used - more than 90 percent of the VoIP traffic has a delay greater than 100 ms and packet losses around 80 percent - as priority and non-priority traffic always share the same paths. Indeed, as soon as the network gets more congested, all traffic is affected by increasing delay and packet losses.

Fig. 17. Cumulative distribution function of the VOIP traffic delay.

Fig. 18. Distribution of packet losses/VOIP session.

The histogram depicted in Fig. 19 illustrates the VoIP sessions' mean delay, a useful tool to validate the conclusions reached above. For instance, if the MS-R is used, 50 ms is the maximum VoIP session delay value, and it is experienced by only 14 percent of all VoIP sessions. If the QS-R is used, 35 percent of all VoIP sessions are not SLA compliant. Finally, if the LS-R is used, 100 percent of the VoIP sessions are not SLA compliant. Fig. 20 depicts a set of points representing the delay and packet losses experienced by each VoIP session during alternate simulation tests using the Multi-service, quality of service and traditional link state routing protocols. The shaded area in the figure stands for the subset of points that are not compliant with the defined SLAs; it is worth emphasizing that all points related to the Multi-service protocol are located outside that area. The HTTP traffic results are plotted in Fig. 21 and are expressed as the cumulative distribution of reply time and as the mean reply time per HTTP session, respectively. As HTTP traffic is classified as non-priority traffic, it is expectable that its performance level will not be as good as that of the VoIP priority traffic. Nevertheless, the results show that the SLA specifications related to HTTP traffic are fulfilled when the Multi-service protocol is used, as 90 percent of replies take less than 5 s and more than 75 percent of the HTTP sessions experience a delay shorter than 10 s. However, this traffic achieves better performance whenever the QS-R is used, as more than 95 percent of replies are subject to a delay shorter than 5 s. This disparity is explained by the fact that, while the MS-R protocol diverts all HTTP sessions from congested regions to longer paths, the QS-R always keeps established sessions on their initial paths.

Fig. 19. Distribution of the VOIP sessions mean delay.

Fig. 20. Set of points representing mean delay and losses for each VOIP session.

45

1

1

0.9

0.9

0.8

0.8

0.7

0.7

0.6

0.6 CDF

CDF

A. Varela et al. / Computer Communications 35 (2012) 33–46

0.5

0.5

0.4

0.4

0.3

0.3

0.2

0.2

0.1

0.1

0 0

1

2

3

4

5

6 7 8 9 Reply Time (s) QS−R

MS−R

0

10 11 12 13 14 15

0

LS−R

1

2

3

4 5 6 7 8 9 10 Delay (ms) / Number of Hops

12

13

14

MSD−R

MS−R

Fig. 21. Cumulative distribution function of the VOIP traffic delay.


Fig. 22. Distribution of the network link's occupancy.


Fig. 23. Cumulative distribution function of the VOIP traffic delay.

Fig. 24. Distribution of the VOIP sessions mean delay.


4.2.3. Simulation results from a scenario with the network central links fully loaded and the remaining links at 80 percent of their capacity

The results shown in Fig. 23 prove that, even in the presence of a heavily congested region, the Multi-service architecture is still able to assure SLA fulfilment, as the delay values observe the predefined specifications (packet losses are not depicted here, as null values were obtained in both the MS-R and MSD-R cases). Even if small in amount - about 1 ms - the MSD-R delays were lower than those experienced by its MS-R counterpart, because of the dropping mechanism put in place at the ingress nodes at the expense of the packets belonging to non-priority traffic (a sketch of this ingress rule is given below). Furthermore, the histogram displayed in Fig. 24 shows that, with the MSD-R protocol, 92 percent of the VoIP sessions have a mean delay value of about 50 ms, while with its MS-R counterpart the same value is obtained for 85 percent of the corresponding VoIP sessions. Even though in both cases the maximum delay is less than 60 ms, the MSD-R protocol again shows a slightly better performance than the MS-R protocol when the priority traffic is routed.

The histogram in Fig. 25 shows the distribution of the response time per HTTP session and, as expected, considering that the whole area where the HTTP servers are hosted is heavily congested, none of the routing protocols is able to fulfil this SLA specification. Even so, MSD-R shows a lower mean reply time than the MS-R protocol, as a result of the non-priority packets being dropped at the network ingress nodes, which alleviates the harmful effect of the core network node queues filling up with packets that would eventually worsen the network congestion. Dropping non-priority packets at the ingress nodes also increases the number of links with median occupancy values, which yields a smoother balancing of the traffic load among all network links, as shown in Fig. 26.
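The following minimal sketch illustrates the ingress admission rule described above: under an overload condition, packets of non-priority classes are discarded at the edge so that only priority traffic enters the core. The threshold value and the class encoding are assumptions for illustration; they are not taken from the paper.

```python
# Illustrative sketch of the MSD-R ingress admission rule: under an overload
# condition, non-priority packets are dropped at the edge so that only
# priority traffic enters the core. Threshold and class names are assumptions.
OVERLOAD_THRESHOLD = 0.95   # hypothetical utilization level signalling overload
PRIORITY_CLASSES = {"EF"}   # e.g. VoIP marked with the DiffServ EF class

def admit_at_ingress(service_class: str, utilization: float) -> bool:
    """Return True if a packet may enter the network at this ingress node."""
    if utilization < OVERLOAD_THRESHOLD:
        return True                            # normal load: admit everything
    return service_class in PRIORITY_CLASSES   # overload: only priority passes

print(admit_at_ingress("EF", 0.97))  # True: VoIP is still admitted
print(admit_at_ingress("BE", 0.97))  # False: best-effort dropped at the edge
```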




Fig. 25. Distribution of response time of HTTP sessions.






Fig. 26. Distribution of the network link’s occupancy.


5. Conclusion

Multi-service routing is a scalable, flow-based forwarding protocol that supports service differentiation in the context of a pure IP-based network by means of an effective cooperation between its data, management and control planes. The data plane, which includes the DiffServ service model, selects the best route for each flow crossing the network according to its service class; using the standard RTCP monitoring protocol, the management plane monitors the data flows conveyed by the data plane to assess the network service level being delivered; and the control plane uses the service metrics provided by the management plane to calculate a set of routes for each service class. Results from simulation experiments show the capability of the Multi-service routing protocol to assure SLAs in heavily loaded network scenarios characterized by mixed traffic types with dissimilar performance requirements, outperforming both the traditional routing protocol with the DiffServ model and the QoS routing protocol.

Future work consists of studying new metrics and the way they can be dynamically tuned for each instance of the management plane at each network node, and of investigating the feasibility of adopting the Multi-service architecture in the context of Multi-topology routing, by modifying/extending some functionalities of the data and control planes to allow paths (i.e., services) to be mapped onto topologies. Another interesting extension to this work is to explore how far some existing IP network fast recovery techniques could reasonably be applied as Multi-service management plane mechanisms to pro-actively ensure SLAs.

Acknowledgments

This work was supported by FCT (INESC-ID multiannual funding) through the PIDDAC Program funds.

References

[1] A. Varela, T. Vazão, G. Arroz, Multi-service: a service aware routing protocol for the next generation internet, JCM 1 (2006) 57–66.
[2] A. Varela, T. Vazão, G. Arroz, An integrated approach to service management in the next generation internet, in: Proceedings of the 2nd EuroFGI Workshop on IP QoS and Traffic Control, IST Press, Portugal, 2007, pp. 99–107.
[3] ns-2, The Network Simulator NS-2, , 2007.
[4] E. Crawley, R. Nair, B. Rajagopalan, H. Sandick, A framework for QoS-based routing in the internet, RFC 2386 (Informational), 1998.
[5] S. Chen, K. Nahrstedt, An overview of quality-of-service routing for the next generation high-speed networks: problems and solutions, IEEE Network 12 (1998) 64–79.
[6] X. Masip-Bruin, M. Yannuzzi, J. Domingo-Pascual, A. Fonte, M. Curado, E. Monteiro, F. Kuipers, P.V. Mieghem, S. Avallone, G. Ventre, P. Aranda-Gutierrez, M. Hollick, R. Steinmetz, L. Iannone, K. Salamatian, Research challenges in QoS routing, Computer Communications 29 (2006) 563–581.
[7] G. Apostolopoulos, S. Kama, D. Williams, R. Guerin, A. Orda, T. Przygienda, QoS routing mechanisms and OSPF extensions, RFC 2676 (Experimental), 1999.
[8] S. Nelakuditi, Z.-L. Zhang, D.H.C. Du, On selection of candidate paths for proportional routing, Computer Networks 44 (2004) 79–102.
[9] A. Tiwari, A. Sahoo, Providing QoS in OSPF based best effort network using load sensitive routing, Simulation Modelling Practice and Theory 15 (2007) 426–448.
[10] M.C. Oliveira, B. Melo, G. Quadros, E. Monteiro, Quality of service routing in the differentiated services framework, in: A.L. Chiu, F. Huebner-Szabo de Bucs, R.D. van der Mei (Eds.), Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, vol. 4211, pp. 256–263.
[11] ITU-T, Recommendation Y.2001 - General overview of NGN, 2004.
[12] P. Psenak, S. Mirtorabi, A. Roy, L. Nguyen, P. Pillay-Esnault, Multi-Topology (MT) routing in OSPF, RFC 4915 (Proposed Standard), 2007.
[13] S. Bae, T.R. Henderson, Traffic engineering with OSPF multi-topology routing, in: Military Communications Conference (MILCOM 2007), IEEE, 2007, pp. 1–7.
[14] K.W. Kwong, R. Guérin, A. Shaikh, S. Tao, Balancing performance, robustness and flexibility in routing systems, IEEE Transactions on Network and Service Management 7 (2010) 186–199.
[15] X. Masip-Bruin, E. Marin-Tordera, M. Yannuzzi, R. Serral-Gracia, S. Sanchez-Lopez, Reducing the effects of routing inaccuracy by means of prediction and an innovative link-state cost, IEEE Communications Letters 14 (2010) 492–494.
[16] R. Braden, D. Clark, S. Shenker, Integrated services in the internet architecture: an overview, RFC 1633 (Informational), 1994.
[17] E. Rosen, A. Viswanathan, R. Callon, Multiprotocol label switching architecture, RFC 3031 (Proposed Standard), 2001.
[18] S. Dimitriou, V. Tsaoussidis, Promoting effective service differentiation with size-oriented queue management, Computer Networks 54 (2010) 3360–3372.
[19] P. Flegkas, P. Trimintzios, G. Pavlou, A policy-based quality of service management system for IP DiffServ networks, IEEE Network 16 (2002) 50–56.
[20] S. Waldbusser, J. Saperia, T. Hongal, Policy based management MIB, RFC 4011 (Proposed Standard), 2005.
[21] A. Kvalbein, O. Lysne, How can multi-topology routing be used for intradomain traffic engineering?, in: Proceedings of the 2007 SIGCOMM Workshop on Internet Network Management (INM '07), ACM, New York, NY, USA, 2007, pp. 280–284.
[22] M. Rocha, P. Sousa, P. Cortez, M. Rio, Quality of service constrained routing optimization using evolutionary computation, Applied Soft Computing 11 (2011) 356–364.
[23] I. Papanagiotou, M. Howarth, Delay-based quality of service through intradomain differentiated routing with optimised link weight setting, in: IEEE Symposium on Computers and Communications, 2010, pp. 622–627.
[24] H. Schulzrinne, S. Casner, R. Frederick, V. Jacobson, RTP: a transport protocol for real-time applications, RFC 3550 (Standard), 2003 (updated by RFC 5506).
Lysne, How can multi-topology routing be used for intradomain traffic engineering?, in: Proceedings of the 2007 SIGCOMM Workshop on Internet Network Management, INM ’07, ACM, New York, NY, USA, 2007, pp. 280–284. [22] M. Rocha, P. Sousa, P. Cortez, M. Rio, Quality of service constrained routing optimization using evolutionary computation, Application of Soft Computing 11 (2011) 356–364. [23] I. Papanagiotou, M. Howarth, Delay-based quality of service through intradomain differentiated routing with optimised link weight setting, Computers and Communications, IEEE Symposium on 0 (2010) 622–627. [24] H. Schulzrinne, S. Casner, R. Frederick, V. Jacobson, RTP: A Transport Protocol for Real-Time Applications, RFC 3550 (Standard), 2003. (Updated by RFC 5506).