Label switched protocol routing with guaranteed bandwidth and end to end path delay in MPLS networks

Journal of Network and Computer Applications 42 (2014) 21–38


Mehdi Naderi Soorki, Habib Rostami*
Computer Engineering Department, School of Engineering, Persian Gulf University of Bushehr, Bushehr 75168, Iran

Article history: Received 27 November 2012; Received in revised form 2 October 2013; Accepted 10 March 2014; Available online 25 March 2014

Abstract

Rapid growth of multimedia applications has had a tremendous impact on how people communicate. Next generation networks (NGN) have been proposed to support newly emerged multimedia IP based applications, such as voice over IP (VoIP), video on demand (VoD), and IPTV, using a core IP backbone. The development of technologies like Multi-Protocol Label Switching (MPLS) has laid the foundation for NGN to support multimedia applications like voice over IP. One of the important concepts in MPLS Traffic Engineering (TE) is Label Switched Path (LSP) routing. The objective of the routing algorithm is to increase the number of accepted requests while satisfying Quality of Service (QoS) constraints. Although much work has been done on laying MPLS paths to optimize performance, most of it has focused on satisfying bandwidth requirements; relatively little research has addressed laying paths with both bandwidth and delay constraints. In this paper, we present a new bandwidth and end to end delay (bandwidth-delay) constrained routing algorithm which uses data about the ingress–egress node pairs in the network. In this algorithm we use LR-server theory to compute path delay. We name the proposed algorithm Minimum Delay and Maximum Flow (MDMF). We perform extensive simulations to evaluate the performance of the MDMF algorithm, and we compare MDMF against previous related work such as MHA, WSP, MIRA, BCRA, MIRAD, BGDG, BGLC, and SAMCRA. The simulation results show that MDMF rivals these algorithms in terms of flow management and outperforms them in terms of end to end delay management, maximum flow, and number of accepted requests (call blocking ratio). © 2014 Elsevier Ltd. All rights reserved.

Keywords: Quality of service; Traffic engineering; MPLS; QoS-based routing; LR servers; Maximum flow

1. Introduction

The development of multimedia and real time applications in communication networks and the rapid growth of IP based networks have brought about the concept of Next Generation Networks (NGN) (Di Sorte et al., 2009; Mehmood et al., 2011). NGN aims to facilitate the convergence of data, voice, and video networks into a single unified packet-based multi-service network capable of providing futuristic services. This has caused a tremendous growth of traffic flows in the core of the network. Traffic engineering (TE) has therefore become essential for service providers to optimize the utilization of existing network resources, provide QoS, and subsequently gain more revenue (Agrawal and Jennings, 2011; Salles and Carvalho, 2011; Leroux et al., 2011; Vilalta et al., 2012). TE is the process of controlling how traffic flows through a network in order to optimize resource utilization and network performance

* Corresponding author. Tel.: +98 9124184483; fax: +98 7714540376. E-mail addresses: [email protected] (M.N. Soorki), [email protected], [email protected] (H. Rostami).
http://dx.doi.org/10.1016/j.jnca.2014.03.008
1084-8045/© 2014 Elsevier Ltd. All rights reserved.

(Elwalid et al., 2001). New real time Internet applications like Voice over IP (VoIP) require setting up constrained paths through the network. IP routing takes into account only the destination of packets, and the only way to change routing is to modify the metrics used by the routing protocols. Hence, the existing IP routing protocols cannot provide QoS routing (Elwalid et al., 2001; Rosen et al., 2001; Ghein, 2007; Akar and Toksöz, 2011). The Internet Engineering Task Force (IETF) has proposed three service models and mechanisms to support the requested QoS: Integrated Services (IntServ) (Awduche et al., 2002) with the Resource Reservation Protocol (RSVP) (Awduche et al., 2001), Differentiated Services (DiffServ) (Capone et al., 2006), and Multi-Protocol Label Switching (MPLS) (Rosen et al., 2001; Ghein, 2007). MPLS can provide fast packet forwarding and traffic engineering, and among the different traffic engineering methods it is known to be one of the most effective. MPLS can be used from edge to edge in converged data and voice networks. Furthermore, MPLS works across a variety of physical layers, enabling efficient data forwarding together with reservation of bandwidth for traffic flows with different QoS requirements. In addition, MPLS can operate on top of various routing protocols, including OSPF, RIP and BGP (Capone et al., 2006; Wang and


Nahrstedt, 2002; Kodialam and Lakshman, 2000). MPLS integrates a label swapping framework with network layer routing. A Label Edge Router (LER) assigns short fixed-length labels to packets at the ingress to an MPLS cloud. The labels attached to packets are used to make forwarding decisions. The path the data follows is defined by the transitions in label values, as the label is swapped at each Label Switch Router (LSR). This path is called a Label Switched Path (LSP). Before packets are mapped onto an LSP, the LSP is set up using a signaling protocol such as RSVP or CR-LDP. Since the route of packets in MPLS networks is fixed, these routes can be the subject of traffic engineering. Therefore, the quality of service requirements of new applications, such as delay, jitter, hop count, and bandwidth, can be guaranteed. It has been suggested that one of the most significant initial applications of MPLS will be in traffic engineering. One of the key issues in providing QoS guarantees is determining paths that satisfy QoS constraints; this problem is known as QoS routing or constraint based routing. Routing schemes are therefore needed that make more efficient use of network resources while guaranteeing QoS requirements.

There are two types of routing algorithms for label paths: offline and online (Kodialam and Lakshman, 2000). In offline algorithms, the information about all label routes is available before route computation, and the purpose is to optimize the use of resources. In online algorithms, each request is routed independently, without information about previous or future label routes; the purpose is to maximize the number of accepted requests (Kodialam and Lakshman, 2000). There are also compound algorithms, which have offline and online phases.
In the online phase of such compound algorithms, the exact information about all label routes is not needed and general information, such as the total traffic from one router to another, is used (Kodialam and Lakshman, 2000). For traffic engineering purposes, the routing algorithm has to be an online algorithm. In Kodialam and Lakshman (2000) and Awduche et al. (2002), it is shown that these types of problems are NP-complete, and various heuristic algorithms have been suggested for solving them.

In this paper, we consider the problem of setting up bandwidth and end to end delay guaranteed tunnels (paths) in an MPLS network where tunnel setup requests arrive one by one and future demands are unknown. The only dynamic information available to the routing algorithm is the link residual capacities provided by link-state protocols such as OSPF (Moy, 1998). The new routing algorithm presented in this paper, called Minimum Delay and Maximum Flow (MDMF), guarantees bandwidth as well as end to end delay, and aims to distribute the load uniformly over the network for all demands. The algorithm optimizes resource utilization and intends to increase the number of accepted requests while guaranteeing the end to end delay and bandwidth constraints. In MDMF, we measure path delay using Latency-Rate (LR) servers, which makes our algorithm dynamic. Simulation results show that our algorithm outperforms the previous ones.

The rest of the paper is organized as follows. Section 2 surveys related work. In Section 3, we develop the foundations of the proposed algorithm. Our proposed routing algorithm is described in Section 4. In Section 5, we present simulation results and a comparison of MDMF with the other algorithms, and in Section 6 we conclude the study.

2. Related work

QoS based routing is a mechanism of routing in which the route of a flow is selected based on the available resources and the flow's quality of service requirements (Crawley et al., 1998). Several solutions and algorithms have been proposed for QoS based routing. They can be classified into three groups. The first group consists of algorithms that try to optimize the use of network resources by exploiting previously measured information and the type of requests. The second group uses mathematical analysis and methods to solve the problem. The third group contains heuristic algorithms, which perform QoS based routing with partial information about the topology and state of the network. In this paper the emphasis is on the third group. The bandwidth-constrained routing problem, which belongs to link-constrained routing, is the most addressed problem (Kodialam and Lakshman, 2000; Apostolopoulos et al., 1999; Chen and Nahrstedt, 1998; Guerin et al., 1996; Mishra and Saran, 2000). One of the most commonly used and simplest algorithms for routing LSPs is the minimum hop routing algorithm (MHA) (Awduche et al., 2001), which chooses the path with the least number of links between the ingress and egress routers. The widest shortest path (WSP) algorithm uses a feasible shortest path that has the largest residual capacity. The shortest widest path (SWP) algorithm, on the other hand, selects the path with the maximum available bandwidth and, if there is more than one such path, the one with the least number of hops (Kodialam and Lakshman, 2000). Even though they are simple and efficient, MHA, WSP, and SWP can create bottlenecks for future LSPs and lead to network underutilization. The Minimum Interference Routing Algorithm (MIRA) (Kodialam and Lakshman, 2000; Kar et al., 2000) exploits the knowledge of ingress–egress pairs in finding a feasible path. The idea is that a newly routed connection should follow a path that does not interfere too much with a path that may be critical to satisfy a future demand. MIRA concentrates only on setting up bandwidth guaranteed paths. The first problem with this algorithm is that it cannot guarantee constraints such as hop count and delay.
The second problem is that MIRA chooses longer paths to avoid critical links. This consumes more network resources and increases the path delay, so the path may become unusable for delay-sensitive traffic. Another significant problem is that MIRA may reject requests even when the network has enough resources to carry them. Moreover, MIRA takes into account neither the current load on the paths nor load balancing throughout the network. Our proposed algorithm avoids network bottlenecks by distributing load throughout the network. MDMF uses the residual bandwidth of a link to influence its weight, and the shortest path is chosen with respect to these dynamically changing weights. Moreover, our TE (Traffic Engineering) algorithm is well suited to delay-constrained applications. In Song et al. (2009), an anycast QoS routing algorithm named ART is proposed for supporting traffic engineering in MPLS networks. ART uses MIRA to avoid interference. Unlike ART (Song et al., 2009), which targets anycast routing, we develop a unicast routing algorithm for MPLS networks. Heydarian proposed a dynamic routing algorithm called Optimal Dynamic Unicast Multichannel QoS Routing (ODUMR). The aim of ODUMR is to handle component (node and link) failures in the network. To do so, ODUMR uses multichannel path routing and transmits messages over all available links, which increases network traffic. Unlike ODUMR, we use a single-channel route between each ingress–egress pair and aim to optimize network utilization while guaranteeing QoS constraints. Alidadi et al. (2009) present a bandwidth guarantee algorithm for MPLS traffic engineering (BGLC). In BGLC, LSP setup requests are represented as a pair of ingress and egress routers together with a bandwidth requirement, and the weight of each link is proportional to the number of paths that cross the link divided by the number of paths between the source–destination pair. To simplify the algorithm and increase its performance, the link weights are considered static, which is far from real-world conditions. In addition, this weight-assignment strategy does not identify the links that limit maximum flow. For example, many paths may cross a link that connects a source node to the network; but if this link has enough bandwidth, it will not limit the maximum flow. An advantage of our proposed algorithm over Alidadi et al. (2009) is that it considers and handles the maximum flow problem. In Kar et al. (2000), the possibility of incorporating various policies, hop count, and delay constraints within the bandwidth routing framework is discussed. This is done by translating the other constraints into effective bandwidth requirements. Specifically, for delay constraints they note that the problem becomes a constrained shortest path problem, which is NP-hard, and they suggest using a pseudo-polynomial time algorithm or a heuristic approach to solve it. Kotti et al. (2007) proposed an algorithm called the bandwidth constrained routing algorithm (BCRA). BCRA compromises between network load balancing, reducing path length, and minimizing path cost. In this algorithm, a critical link is defined as a link whose load is running above some threshold; in BCRA, the threshold is the mean link load throughout the network. A critical path is one which has critical links, so a path is more critical if it contains more critical component links. The problem with this algorithm is that it does not take into account the information about ingress–egress pairs and the network topology. In MDMF, by contrast, we exploit the information about ingress–egress pairs in the network topology; therefore, when performing the routing, we can restrict our attention to paths in the feasible network. Note that if the ingress and egress routers are disconnected in the feasible network, then there is no path that has the desired bandwidth and the LSP request is rejected. All the algorithms mentioned so far aim to guarantee only bandwidth.
There are, however, algorithms that take both bandwidth and delay constraints into consideration, the two quality of service constraints most relevant to real time and multimedia applications. In Gopalan et al. (2004) an algorithm called link criticality based routing (LCBR) is introduced, the purpose of which is to distribute the load over the network evenly, thereby saving resources for unspecified future demands. The problem with LCBR is that it is a profile based routing algorithm: it needs prior information about the traffic between ingress and egress pairs. This paper focuses on algorithms that do not require such information. In Yang et al. (2003), each link is assumed to have two attributes, residual capacity and link delay, where link delay means the propagation delay of the link plus the queuing delay at the first node of the link. The purpose of the algorithm is to maximize the capacity between ingress and egress. In Kulkarni et al. (2012), another algorithm called BGDG is proposed. The algorithm considers the latency of links to be fixed. It uses the BGLC algorithm to find a path that satisfies the required minimum bandwidth and then checks whether the path satisfies the required delay. The main disadvantages of this algorithm are that it considers link delays to be fixed and does not consider maximum flow. Another algorithm related to ours is SAMCRA (Self Adaptive Multi-Constraints Routing Algorithm) (Van Mieghem and Kuipers, 2004). The aim of SAMCRA is to present an exact algorithm for the multi-constraint routing problem with tractable time complexity. The main disadvantage of SAMCRA is that it only works with fixed constraints, which is far from real-world conditions; for example, link delay (including queueing delay) varies dynamically as the load of requests changes. In addition, SAMCRA does not consider future demands, only current ones.
The main problem of these algorithms (Yang et al., 2003; Kulkarni et al., 2012; Van Mieghem and Kuipers, 2004) is that they consider link delays to be fixed. However, the delay of each link is the sum of its propagation delay and the queuing delay at its ingress node. The queuing delay is affected by the bandwidth of the LSPs routed over that link, so the delay of a link changes every time a request is set up. In our proposed algorithm we therefore account for the variation of link delay and compute the delay of each link and path with LR-server theory, as explained in the following sections. Another problem of these algorithms is that they do not consider the maximum flow available for future demands, because they weight links proportionally to link delay and bandwidth only; such weighting ignores the minimum cut sets that determine maximum flow in the network topology graph. Another idea in our proposed algorithm is that a newly routed connection should follow a path that does not interfere with a path that may be critical to satisfying a future demand. Therefore, in MDMF, we try to keep the maximum flow high for future demands. Both path-constrained path-optimization routing and multi-path-constrained routing are computationally NP-complete problems, so we cannot expect an exact solution in reasonable time. Among these classes, delay-cost-constrained routing and delay-constrained least-cost routing have received the most attention (Chen and Nahrstedt, 1998; Neve and Mieghem, 2000; Salama et al., 1997). The presented algorithms are approximation or heuristic algorithms.

3. General idea of the algorithm

In this paper, it is supposed that routing is performed by the source nodes. Therefore, the state information and topology are saved in a database at each ingress node, and any change in this information is updated in the database. Furthermore, it is assumed that, in case of failure, the disruption of routing can be observed. Collecting and updating state information are important in this method; link state protocols, such as IS-IS (Gredler and Goralski, 2004) or OSPF (Moy, 1998), collect and update the state information. The notation used in this paper is as follows: the network is modeled as an undirected graph G(N, LS, C, PD, P), where N is the set of nodes (routers), LS is the set of links between the nodes in N, and C is the set of link bandwidths; in other words, c_ij ∈ C where (i, j) ∈ LS. PD is the set of propagation delays of the links (i, j) ∈ LS, and P is the set of potential ingress–egress pairs; let (s, d) be a generic element of P. The setup request for a path i is defined as a quadruple (s_i, d_i, B_i, D_i), where s_i specifies the ingress router, d_i the egress router, B_i the minimum required bandwidth, and D_i the delay upper bound requirement. We assume that path setup requests arrive one at a time and there is no prior knowledge of future requests. Our optimization goal is to determine a feasible path for each request so that we maximize the number of requests accepted into the network, preserve network load balance, increase maximum flow, reduce end to end delay, and save capacity for future requests. The only dynamic information that MDMF (our proposed algorithm) needs is the link residual capacities provided by link-state protocols such as OSPF. However, we need the end to end delay (including propagation delay and queueing delay) between any ingress–egress pair to satisfy delay constraints. Propagation delay is a physical property of each link and is static.
To obtain the maximum total delay (propagation and queueing delay) of each link, we use the latency-rate server model (Stiliadis and Varma, 1996). The main ingredients of our proposed algorithm (MDMF) are interference, minimum end to end delay, and maximum flow. Therefore, in the rest of this section we review the interference concept, latency-rate servers (LR-servers), minimum end to end delay, and maximum flow to provide the foundations of our algorithm.
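The network model and request quadruple above map naturally onto a small data structure. A minimal sketch in Python; the class and field names are ours for illustration, not from the paper:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class LSPRequest:
    """Setup request (s_i, d_i, B_i, D_i) as defined in Section 3."""
    src: str          # ingress router s_i
    dst: str          # egress router d_i
    bandwidth: float  # minimum required bandwidth B_i
    max_delay: float  # end to end delay upper bound D_i

@dataclass
class Network:
    """Undirected graph G(N, LS, C, PD, P)."""
    nodes: set        # N
    links: set        # LS: one frozenset({i, j}) per undirected link
    capacity: dict    # C: link -> total bandwidth
    prop_delay: dict  # PD: link -> propagation delay (static)
    pairs: set        # P: potential ingress-egress pairs (s, d)
    residual: dict = field(default_factory=dict)  # refreshed by link-state info

    def __post_init__(self):
        if not self.residual:
            # before any LSP is set up, all bandwidth is free
            self.residual = dict(self.capacity)

net = Network(
    nodes={"s", "a", "d"},
    links={frozenset({"s", "a"}), frozenset({"a", "d"})},
    capacity={frozenset({"s", "a"}): 100.0, frozenset({"a", "d"}): 60.0},
    prop_delay={frozenset({"s", "a"}): 0.002, frozenset({"a", "d"}): 0.003},
    pairs={("s", "d")},
)
```

Using `frozenset({i, j})` as the link key makes the graph undirected by construction, matching the model's undirected links.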


3.1. Interference

The main idea of the proposed algorithm, when selecting a route for an (s, d) ingress–egress pair, is to avoid paths that interfere strongly with other ingress–egress pairs. In Kar et al. (2000), the concept of interference is defined with respect to the maximum flow between ingress–egress pairs. In this paper, we generalize the concept to include both bandwidth and delay simultaneously. An LSP between routers s and d has minimum interference when the path maximizes the minimum potential (defined below in terms of bandwidth and delay) over all other source–destination pairs. To define the potential of a link and a path, we first review the concepts of maximum flow θ(s, d) and minimum delay d(s, d) between a pair of ingress–egress nodes (s, d).

3.2. Latency-rate server

The concept of latency-rate servers (LR-servers) was introduced in Stiliadis and Varma (1996), where it is also shown that a group of scheduling algorithms belongs to the LR-server class. For a scheduling algorithm to be categorized as an LR server, the average rate of service given to a traffic flow in each busy period must be at least equal to the rate reserved for that flow. Weighted Fair Queueing (PGPS), VirtualClock, SCFQ, Weighted Round Robin, and Deficit Round Robin all belong to the LR-server group (Stiliadis and Varma, 1996). Stiliadis and Varma (1996) also present a general model for calculating the maximum end to end delay of a path in a network routed with heterogeneous scheduling algorithms belonging to the LR-server class. They showed that in a network consisting of LR servers, the maximum end to end delay of a path P equals D_m, calculated as follows (Stiliadis and Varma, 1996):

    D_m = (b/R) · (t − R)/(t − r) + ∑_{(i,j) ∈ P} ( M/R + M^m_ij/C_ij + prop_ij )    (1)

In this formula, r is the requested rate, t is the peak rate, b is the burstiness, M is the maximum packet length of the flow, and M^m_ij is the maximum packet length among the label routes passing through the link between node i and node j, denoted (i, j). C_ij is the capacity of link (i, j), prop_ij is its propagation delay, and R is the minimum bandwidth allocated to the LSP along the path. The first term is the shaping delay at the ingress node and the second term collects the queuing and propagation delays of the nodes along the path. As Eq. (1) shows, the maximum end to end delay depends on the chosen path and on the bandwidth R reserved along it: if the bandwidth allocated to the request on each link of the path increases, the delay decreases. In our proposed algorithm (MDMF), demands pass through an MPLS cloud, the routers use LR-server scheduling algorithms, and the ingress traffic is shaped by a leaky bucket (Tanenbaum, 2002).

3.3. Maximum flow

Maximum flow is the largest bandwidth demand between ingress and egress nodes that can be accepted by splitting the requested flow over the graph (Kodialam and Lakshman, 2000). To reduce interference, in terms of maximum flow, with other source–destination pairs, we should route an incoming request in such a way that the minimum maxflow (maximum flow) over the other ingress–egress pairs becomes maximum. Doing so requires maximizing the minimum maxflow, a MAX–MIN–MAX problem which is computationally NP-complete (Aissi et al., 2005). On the other hand, considering only the minimum maxflow has a major
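Assuming the standard LR-server delay bound with leaky-bucket shaping (an ingress shaping term plus per-hop packetization, transmission, and propagation terms, as described for Eq. (1)), the bound is straightforward to evaluate. A hedged sketch; the function and parameter names are ours:

```python
def lr_delay_bound(b, t, r, R, M, path_links):
    """Maximum end to end delay D_m of an LSP over a path of LR servers.

    b: burstiness, t: peak rate, r: requested (sustained) rate,
    R: bandwidth reserved for the LSP (r <= R < t assumed),
    M: maximum packet length of this flow.
    path_links: iterable of (max_pkt_len_on_link, link_capacity, prop_delay)
                tuples, one per link (i, j) on the path.
    """
    # shaping delay at the ingress node (first term of Eq. (1))
    shaping = (b / R) * (t - R) / (t - r)
    # queuing and propagation delays along the path (second term)
    per_hop = sum(M / R + m_ij / c_ij + prop
                  for m_ij, c_ij, prop in path_links)
    return shaping + per_hop
```

Note that both terms shrink as R grows, which is exactly what MDMF exploits later when it raises the allocated bandwidth one unit at a time to meet a delay bound.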

drawback: the other maxflows are not considered and they may decrease. Therefore, to increase the average-case performance of the network, we route new demands in such a way that the weighted sum of the maxflows between all ingress–egress pairs becomes maximum. We can formulate the goal function as follows:

    max over all paths:  ∑_{(s,d) ∈ P} γ_sd θ(s, d)    (2)
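The maxflow terms θ(s, d) in the objective can be computed with any standard maximum-flow routine. A compact Edmonds-Karp sketch, shown for illustration and not as the paper's implementation:

```python
from collections import deque

def max_flow(capacities, s, d):
    """Edmonds-Karp maximum flow from s to d.
    capacities: dict node -> dict neighbor -> capacity (works on a copy,
    so the caller's capacities are left untouched)."""
    cap = {u: dict(nbrs) for u, nbrs in capacities.items()}
    flow = 0.0
    while True:
        # BFS for an augmenting path from s to d in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and d not in parent:
            u = queue.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if d not in parent:
            return flow  # no augmenting path left: flow is maximum
        # collect the path edges and push the bottleneck amount
        path, v = [], d
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= push
            rev = cap.setdefault(v, {})
            rev[u] = rev.get(u, 0) + push  # residual (reverse) capacity
        flow += push
```

For the paper's undirected links, each link would be entered in both directions with its residual bandwidth.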

where γ_sd is the weight of ingress–egress pair (s, d), set by the network administrator. Maximizing Eq. (2) covers only the maxflow requirements of the demands; to cover minimum end to end delay, we develop an analogous model in the next subsection.

3.4. Minimum end to end delay

The minimum end to end delay of a pair of ingress–egress nodes is the minimum delay that a data flow between the nodes experiences; in other words, it is the lower bound on the sum of the link delays over paths between the nodes. To reduce interference, in terms of minimum end to end delay, with other ingress–egress pairs, we should route new requests in such a way that the newly established paths have the least effect on the other paths. As with the maxflow optimization, we consider minimizing the weighted sum of minimum end to end delays:

    min over all paths:  ∑_{(s,d) ∈ P} λ_sd d(s, d)    (3)

where λ_sd, like γ_sd, is an administrator-set weight for pair (s, d).

3.5. Maximum flow and minimum end to end delay

To optimize route allocation in terms of maxflow and minimum end to end delay, we should optimize Eqs. (2) and (3) simultaneously as a multi-objective optimization problem:

    max [ μ_flow ∑_{(s,d) ∈ P} γ_sd θ(s, d) + ν_delay ( − ∑_{(s,d) ∈ P} λ_sd d(s, d) ) ]    (4)

where the negative sign turns the minimization term into a maximization one. The coefficients μ_flow and ν_delay are set by the network administrator to prioritize flow over delay or vice versa (we set μ_flow + ν_delay = 1). Therefore, before any route allocation, we should solve the weighted-sum maximization problem of Eq. (4). Although the exact solution can be found using linear programming in restricted cases, solving the problem in its general form is NP-complete (Kar et al., 2000), which justifies the development of a heuristic algorithm. To this end, we assign an innovative weight to each link and then find the shortest weighted path between a given ingress–egress pair using Dijkstra's algorithm (Cormen, 2001) in polynomial time. The link weights must be designed in such a way that selecting the minimum-weight shortest path in the graph with these weights maximizes Eq. (4). The following link weights do so:

    w(l_ij) = μ_flow ∑_{(s,d) ∈ P} γ_sd ∂θ(s, d)/∂r_ij + ν_delay ( − ∑_{(s,d) ∈ P} λ_sd ∂d(s, d)/∂r_ij )    (5)

where r_ij is the residual bandwidth of link l_ij and ∂θ(s, d)/∂r_ij is the change in maximum flow between ingress–egress pair (s, d) per unit change in the residual bandwidth of link l_ij. Note that ∂θ(s, d)/∂r_ij is always greater than or equal to zero, because an increment (decrement) in the residual bandwidth of a link increases


(decreases) the maximum flow if the link is a critical link in the path, and does not affect the maximum flow between the ingress and egress pair otherwise. On the other side, ∂d(s, d)/∂r_ij is always less than or equal to zero: if the link is a critical link in a path connecting the nodes (s, d), any increment (decrement) in its residual bandwidth will decrease (increase) the end to end delay between the nodes (see Eq. (1)); if the link is not critical, it does not affect the path delay and the derivative is zero. So we can write −∂d(s, d)/∂r_ij = |∂d(s, d)/∂r_ij|, and hence

    w(l_ij) = μ_flow ∑_{(s,d) ∈ P} γ_sd ∂θ(s, d)/∂r_ij + ν_delay ∑_{(s,d) ∈ P} λ_sd |∂d(s, d)/∂r_ij|    (6)

Eq. (6) shows the effect of a partial change in the residual bandwidth of a link on the goal function (Eq. (4)) for a selected path between an ingress and egress pair. Because ∂θ(s, d)/∂r_ij ≥ 0, θ(s, d) is a non-decreasing function of r_ij. We now investigate the effect of an arbitrary link (i, j) on the maxflow of (s, d). From graph theory (West, 2001), we know that only the edges of a minimum cut set of a pair can affect the maximum flow between them. We call a link critical for pair (s, d) if it belongs to a minimum cut set of the pair. If we define CM_sd as the set of minimum cut links between s and d, we have

    ∂θ(s, d)/∂r_ij = 1 if l_ij ∈ CM_sd, and 0 otherwise    (7)

Therefore we can write

    ∑_{(s,d) ∈ P} γ_sd ∂θ(s, d)/∂r_ij = ∑_{l_ij ∈ CM_sd : (s,d) ∈ P} γ_sd    (8)

To calculate |∂d(s, d)/∂r_ij|, we define delay critical links and delay critical link sets. As Eq. (1) shows, the end to end delay of a path depends on both static parameters of the links, i.e. propagation delay and total link bandwidth, and dynamic parameters such as the bandwidth allocated to the route. Therefore, finding delay critical links is not as straightforward as finding maxflow critical links. To find the delay critical links of each ingress–egress pair, we perform the following steps:

- We define the weight of the link between nodes (i, j) as

    dw_ij = M/R + M^m_ij/C_ij + prop_ij    (9)

- Then, using a shortest path algorithm such as Dijkstra's (Cormen, 2001), the weighted shortest path between each ingress–egress pair is calculated (as the shortest-time path). In this path, the link with the minimum unreserved bandwidth is critical. After removing the critical link, we repeat the shortest path algorithm to find the next minimum end to end delay path and the next critical link between that ingress–egress pair. This process continues until no path between the ingress–egress pair remains. We define LP_sd = {LP^1_sd, …, LP^i_sd, …, LP^k_sd}, where LP^i_sd is the weighted shortest path between pair (s, d) in iteration i of the algorithm. Also, if C^i_sd is the critical link of path LP^i_sd, we define CD_sd = {C^1_sd, …, C^i_sd, …, C^k_sd} as the delay critical link set of the (s, d) ingress–egress pair. Therefore, we can write

    |∂d(s, d)/∂r_ij| = 1 if (i, j) ∈ CD_sd, and 0 otherwise    (10)

Therefore, we have

    ∑_{(s,d) ∈ P} λ_sd |∂d(s, d)/∂r_ij| = ∑_{l_ij ∈ CD_sd : (s,d) ∈ P} λ_sd    (11)

Based on Eqs. (6), (8), and (11), we can assign the following weights to the links of the network:

    w(l_ij) = 1 + μ_flow ∑_{l_ij ∈ CM_sd : (s,d) ∈ P} γ_sd + ν_delay ∑_{l_ij ∈ CD_sd : (s,d) ∈ P} λ_sd    (12)

The one added in Eq. (12) avoids assigning zero weight to links that are not in a critical set of any ingress–egress pair. The weights of Eq. (12) are based on static parameters of the network and do not include dynamic parameters such as the residual bandwidth of the links. Although these weights maximize our goal function (Eq. (6)), as a secondary goal we aim to distribute the network load evenly. To do so, we set the weight of each link in inverse ratio to its residual bandwidth, so that links with less residual bandwidth have less chance to participate in new routes than links with more residual bandwidth. Therefore, we redefine the link weights as follows:

    w(l_ij) = ( 1 + μ_flow ∑_{l_ij ∈ CM_sd : (s,d) ∈ P} γ_sd + ν_delay ∑_{l_ij ∈ CD_sd : (s,d) ∈ P} λ_sd ) / r_ij    (13)

When the residual bandwidth of a link is zero, its weight is infinite (w(l_ij) = ∞) and it has no chance to participate in a weighted shortest path between an ingress and egress pair; paths that include such links are never returned by the shortest path algorithm (Cormen, 2001).

4. The proposed routing algorithm

Having developed the foundations of the algorithm (MDMF) in the previous section, we now present the algorithm itself. In MDMF, a request is stated as (s, d, Tspec, Rspec), in which s is the source (ingress) node and d is the destination (egress) node. Tspec is the traffic specification (M, r, t, b), whose parameters were introduced with Eq. (1), and Rspec specifies the QoS requirements, namely the maximum end to end delay and the minimum required bandwidth. In the first step of the MDMF algorithm, the weights of all links are calculated with Eq. (13). In the second step, the links whose residual bandwidth is less than the requested bandwidth are removed from the network graph. In the remaining subgraph, we calculate the weighted shortest path between the ingress and egress nodes using Dijkstra's algorithm (Cormen, 2001) with the weights of Eq. (13). If the delay of the calculated weighted shortest path is more than the requested delay, then the allocated bandwidth on the link of the path with the most residual bandwidth is increased by one unit (one unit equals one outgoing unit of the leaky bucket shaping the request at the ingress of the network). We repeat this process until we reach the requested delay or we find that satisfying the delay request is not feasible on this path (that is, the link capacities are full). In the latter case, we remove the link with the lowest bandwidth (the bottleneck link) and seek a new shortest path in the absence of the removed bottleneck link. The process repeats until either a feasible path is found or the request is rejected. Once a path is found, the requested resources are reserved and the state of each link is updated. The MDMF pseudocode is presented in Algorithm 1.

Algorithm 1. MDMF.

1: Input: A network graph G(N, LS, C, PD, P), a LSP request (s, d, TSpec, RSpec).

26

M.N. Soorki, H. Rostami / Journal of Network and Computer Applications 42 (2014) 21–38

2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: 16:

17: 18: 19: 20: 21: 22: 23: 24: 25: 26: 27: 28: 29: 30: 31: 32:

Output: Route with requested bandwidth and end to end delay or reject the request if no such a route exists. Eliminate all links that have residual bandwidth less than requested bandwidth. for all ði; jÞ A LS do compute Wði; jÞ according to Eq. (13). end for Use Dijkstra algorithm to compute the shortest path in the resulted graph of the previous step. if no path found then return reject the request. else name the found path X end if while true do D(X) ¼end to end delay of path X according to Eq. (1). if DðXÞ r RSpec then route the bandwidth requirement of B units from s to d along this shortest path and update the residual link bandwidth. return the found route. else find a link ði; jÞ A X with maximum residual bandwidth along path X. if residual bandwidth of ði; jÞ ¼ ¼ 0 then Delete link with minimum bandwidth along path X. Use Dijkstra algorithm to compute the shortest path in the resulted graph of the previous step. if no path found then return reject the request. else name the found path X end if else Increment allocated bandwidth of (i,j) by 1 unit. end if end if end while
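A compact Python sketch of this admission loop follows. It is a simplified reading of Algorithm 1, not the paper's implementation: the graph encoding, the `path_delay` callback standing in for the LR-server bound of Eq. (1), and the rollback of tentative allocations are our assumptions. Link weights (Eq. (13)) are taken as precomputed in `weight`.

```python
import heapq
import math

def dijkstra(adj, weight, src, dst):
    """Weighted shortest path from src to dst.
    adj: {node: list of neighbors}; weight: {(u, v): cost}.
    Returns the path as a list of links (u, v), or None if unreachable."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v in adj.get(u, []):
            nd = d + weight[(u, v)]
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path, node = [], dst
    while node != src:
        path.append((prev[node], node))
        node = prev[node]
    return path[::-1]

def mdmf_route(adj, residual, weight, path_delay, s, d, req_bw, req_delay):
    """Sketch of the MDMF admission loop. path_delay(path, residual) is any
    monotone stand-in for Eq. (1). Returns the path, or None to reject."""
    # Drop links whose residual bandwidth is below the requested bandwidth.
    adj = {u: [v for v in vs if residual[(u, v)] >= req_bw]
           for u, vs in adj.items()}
    while True:
        path = dijkstra(adj, weight, s, d)
        if path is None:
            return None                      # reject the request
        trial = {l: 0 for l in path}         # units tentatively allocated
        while path_delay(path, residual) > req_delay:
            widest = max(path, key=lambda l: residual[l])
            if residual[widest] <= req_bw:   # delay infeasible on this path
                for l, k in trial.items():   # roll back tentative units
                    residual[l] += k
                u, v = min(path, key=lambda l: residual[l])
                adj[u] = [w for w in adj[u] if w != v]  # drop bottleneck link
                break
            residual[widest] -= 1            # allocate one more unit
            trial[widest] += 1
        else:                                # delay bound met
            for l in path:                   # reserve the requested bandwidth
                residual[l] -= req_bw
            return path
```

The inner loop mirrors lines 14–29 of Algorithm 1: rate is pushed onto the widest link until the delay bound holds, and the bottleneck link is removed when the path cannot satisfy the bound.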

4.1. Computational complexity of the algorithm and comparison with the other works

In this section we analyze the computational complexity of Algorithm 1 as run by the source nodes. If L is the set of edges of the network, n is the number of nodes and P is the set of ingress–egress pairs, the complexity of the algorithm is as follows. In lines 3–5, we compute W(i,j), |L| times. To compute W(i,j), in a preprocessing step we compute all minimum cut sets between all ingress–egress pairs in O(|P| · n · |L|²) using the Edmonds–Karp algorithm. In addition, we keep a counter for each edge recording the number of minimum cut sets it belongs to, so in the for loop of lines 3–5 we only need to look up this counter for each link. The time complexity of the main while loop of the algorithm is O(|L|) for line 14, plus O(|L|) for line 19, plus O(|L| + n log₂ n) for an implementation of the Dijkstra algorithm with Fibonacci heaps (Fredman and Tarjan, 1987). The main while loop runs at most |L| · W times, where W is the maximum bandwidth of a link (in the unit defined in Section 5). Therefore, after removing lower order terms, the time complexity of the algorithm is O(|P| · n · |L|² + |L| · W · n log₂ n). Because in connected graphs |L| ≥ n − 1 and W is a constant factor, we can write the complexity of the algorithm as O(|P| · n · |L|²).

Table 1 shows the time complexity of the proposed algorithm (MDMF) and of the other related algorithms. As can be seen, none of them outperforms the others in all circumstances: the performance of the algorithms depends on the network topology and the number of ingress–egress pairs. In sparse networks with a reasonable number of ingress–egress pairs, MDMF has the best time complexity, while in dense networks, under some restricted conditions, SAMCRA shows the best time complexity (for k_max = 1 and m = 1 the complexity of SAMCRA becomes O(n log n + L)).
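The preprocessing step computes per-pair maximum flows (and hence minimum cuts) with the Edmonds–Karp algorithm. A minimal, self-contained sketch of that max-flow computation follows; the data layout (a capacity dictionary keyed by directed link) is our illustrative choice, not the paper's.

```python
from collections import deque

def edmonds_karp(cap, s, t):
    """Maximum flow from s to t via Edmonds-Karp (BFS augmenting paths).
    cap: {(u, v): capacity in units}. Returns the max-flow value, i.e. the
    maxflow(s, d) that the MDMF preprocessing computes for each pair."""
    res = dict(cap)                      # residual capacities (copy of cap)
    for (u, v) in list(cap):             # ensure every edge has a reverse entry
        res.setdefault((v, u), 0)
    adj = {}
    for (u, v) in res:
        adj.setdefault(u, set()).add(v)
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj.get(u, ()):
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total                 # no augmenting path left
        path, v = [], t
        while parent[v] is not None:     # reconstruct the augmenting path
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[e] for e in path)  # bottleneck along the path
        for (u, v) in path:
            res[(u, v)] -= aug
            res[(v, u)] += aug
        total += aug
```

Running this once per ingress–egress pair gives the maxflow values whose critical (min-cut) links feed the counters used in Eq. (13).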

5. Experimental results and comparison with the other works

5.1. Simulation setup

We perform extensive simulations to study the performance of our proposed routing algorithm (MDMF). Two different network topologies were used to compare the routing algorithms. The first topology, adopted from Kodialam and Lakshman (2000), Kotti et al. (2007) and Yang et al. (2003), is called the MIRA topology and consists of 15 nodes, as shown in Fig. 1. All links are bidirectional. There are two kinds of links in the network: thin links with a capacity of 12 units and thick links with a capacity of 48 units (modeling the capacity ratio of OC-12 and OC-48 links). In addition, a subset of the nodes in the network acts as ingress–egress pairs. In the MIRA topology, four ingress–egress pairs are considered: (1,13), (5,9), (4,2) and (5,15). The second topology, adopted from Kodialam and Lakshman (2000), Kotti et al. (2007), Gopalan et al. (2004) and Yang et al. (2003), is an uneven topology obtained by inserting additional links to increase connectivity and exploit the effect of shortcut links. This topology consists of

Table 1
Comparison of time complexity of the routing algorithms.

Routing algorithm   Time complexity
WSP                 O(n²·L + L²)
MHA                 O(n² + L)
MIRA                O(p·n³·log n + n² + L)
BCRA                O(n² + L)
MIRAD               O(p·n³·log n + c·(n² + L))
BGDG                O(c·(n² + L))
MDMF                O(|P| · n · |L|²)
SAMCRA              O(k_max · n · log(k·n) + k_max² · m · L), where
                    k_max = min( Π_{∀ edges L_i} L_i / max_{∀ edges L_i} L_i , ⌊e·(n−2)!⌋ )
                    and m and k are constants

Fig. 1. MIRA topology (Kodialam and Lakshman, 2000).


18 nodes and 30 links and is presented in Fig. 2. Each link has a capacity of 20 units. Three node pairs in this topology are considered as ingress–egress pairs: (1,17), (2,16) and (4,15) (Yang et al., 2003). For the simulation, we scale all capacities by 100. This larger capacity network is used for the performance studies because it permits us to experiment with thousands of LSP setups. Ingress–egress router pairs for LSP setup requests are chosen randomly from the above potential nodes. We assume that all LSPs are long lived, i.e., once an LSP is routed, it is not terminated. The total number of demands is 3000 for the MIRA topology and 3500 for the ANSNET topology, and granted paths are not removed from the network until the end of the simulation (Kodialam and Lakshman, 2000; Kar et al., 2000). Also, in all experiments, we repeat the simulation to obtain a confidence level of approximately 95% with a confidence interval of one unit.

5.2. Performance evaluation

In this subsection, we evaluate the effect of network parameters on the performance of the algorithm in terms of acceptance or rejection of requests. To do so, we measure the call blocking ratio of the algorithm, formally defined by Eq. (14):

Call blocking ratio = (number of rejected requests) / (total number of requests)    (14)

The call blocking ratio is the fraction of requests rejected, out of the total number of requests, because of unavailability of the desired capacity or congestion. The higher the call blocking ratio, the more requests are rejected.
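Eq. (14) is straightforward to compute in a simulation driver. The sketch below is ours: `admit` stands for any admission decision (for instance, a run of the routing algorithm) returning True when the demand can be routed.

```python
def call_blocking_ratio(demands, admit):
    """Eq. (14): rejected requests / total requests.
    `admit(demand)` is a stand-in for running the routing algorithm and
    returns True when the demand is accepted."""
    rejected = sum(1 for d in demands if not admit(d))
    return rejected / len(demands)
```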

Fig. 2. Expanded ANSNET topology (Yang et al., 2003).


While the network may not be saturated yet and capacity probably remains, inappropriate resource utilization and request congestion lead to request rejection. So, the call blocking ratio is a good performance criterion. In this subsection, we evaluate the effect of different values of μ_flow and ν_delay on the performance of the system. As mentioned before, μ_flow and ν_delay determine the relative importance of maximum flow and of minimum end to end delay of the requests; their relative values matter more than their absolute values. Therefore, we set μ_flow + ν_delay = 1, so increasing one of the parameters decreases the other. In the experiments, we choose the requested bandwidth of the demands randomly from {1, 2, 3, 4} using a uniform distribution with average 2.5. Figures 3 (MIRA topology) and 4 (ANSNET topology) show the effect of μ_flow (and hence of ν_delay = 1 − μ_flow) and of the average requested end to end delay of demands on the performance of the algorithm in terms of call blocking. As can be seen, in both networks (MIRA, Kodialam and Lakshman, 2000, and ANSNET, Yang et al., 2003), with a fixed average upper bound on the requested end to end delay, increasing μ_flow (decreasing ν_delay) increases the call blocking ratio, because increasing μ_flow increases the relative importance of the bandwidth constraint over the end to end delay constraint (note that in all demands the average and distribution of requested bandwidths are fixed). On the other hand, if we fix μ_flow (and hence ν_delay) and decrease (increase) the average requested end to end delay limit (with the same distribution), the call blocking ratio increases (decreases), because the ability of the system to accept delay limited requests decreases (increases). Figures 5 and 6 show the effect of μ_flow and of the average requested bandwidth of the demands on the performance of the algorithm. To evaluate the effect of these parameters, we fix the average requested end to end delay of the demands to 97.5 ms, choosing the values according to a uniform random distribution from {95, 96, 97, 98, 99, 100}. The average requested bandwidth is varied from 2.5 to 10.5. In addition, we change μ_flow from 0 to 1 (and hence ν_delay from 1 to 0). As can be seen, increasing the average requested bandwidth increases the call blocking ratio. This result is expected, because with large requested bandwidths the network reaches its maximum capacity faster. Also, with a fixed average of

Fig. 3. Effect of the μ_flow parameter and end to end delay on the call blocking ratio in the MIRA topology (average requested bandwidth fixed at 2.5).


Fig. 4. Effect of the μ_flow parameter and end to end delay on the call blocking ratio in the ANSNET topology (average requested bandwidth fixed at 2.5).

Fig. 5. Effect of the μ_flow parameter and average requested bandwidth on the call blocking ratio in the MIRA topology (average requested end to end delay of demands fixed at 97.5 ms).

Fig. 6. Effect of the μ_flow parameter and average requested bandwidth on the call blocking ratio in the ANSNET topology (average requested end to end delay of demands fixed at 97.5 ms).


requested bandwidths, increasing μ_flow decreases the call blocking ratio (in all demands the average and distribution of the requested end to end delay are fixed). Therefore, the results show that MDMF (Algorithm 1) is tunable and can be configured based on the requirements of the network and the services it provides. Now, as a case study, we find the optimal values of μ_flow (and ν_delay) for the MIRA (Kodialam and Lakshman, 2000) and ANSNET (Yang et al., 2003) networks when they run applications whose demands can be characterized by requested bandwidths with average 2.5 chosen from {1, 2, 3, 4} and end to end delay requirements with average 97.5 ms chosen uniformly from {95, 96, 97, 98, 99, 100}. Figure 7 shows the call blocking ratio for different values of μ_flow in the MIRA (Kodialam and Lakshman, 2000) network. Under the mentioned conditions, the optimum value of μ_flow, where the call blocking ratio is minimum, is around 0.45, with ν_delay around 0.55. Figure 8 shows the call blocking ratio for different values of μ_flow in the ANSNET (Yang et al., 2003) network. In the ANSNET network, under the defined conditions, the call blocking ratio is minimum when we set μ_flow around 0.4 and ν_delay around 0.6. Obviously, different distributions of requested bandwidths and end to end delays yield different optimum values of the parameters. So, to tune a network, we should


estimate the characteristics of the demands and then find the optimum values of the network parameters. We should note that in the design phase of real world networks, the load characteristics of the system are predictable and the network is designed to support a set of desired applications.

5.3. Comparison with other works

In this subsection, the simulation results of our proposed MDMF algorithm are presented and compared with other related works (MHA, Awduche et al., 2001; WSP, Guerin et al., 1996; MIRA, Kodialam and Lakshman, 2000; MIRAD, Pastaki et al., 2011; BCRA, Kotti et al., 2007; BGLC, Alidadi et al., 2009; BGDG, Kulkarni et al., 2012; and SAMCRA, Van Mieghem and Kuipers, 2004). In the first step, we investigate the effect of different load conditions on the performance of our algorithm (MDMF) and of the related works, and then we compare the performance of the algorithms in terms of call blocking ratio, mean path length, mean maximum flow, variance of link utilization, average end to end path delay and variance of end to end path delay.

5.3.1. Effect of load condition

To investigate the effect of different load conditions on the performance of the algorithms, we consider several scenarios. Also, to

Fig. 7. Effect of μflow on call blocking ratio in MIRA topology.

Fig. 8. Effect of μflow on call blocking ratio in ANSNET topology.


have realistic conditions, we consider traffic units as follows (Salah et al., 2008):

- For a one directional single voice call, the required bandwidth is 90.4 kbps (1 unit).
- For a bidirectional single voice call with symmetric flow, the required bandwidth is 180.8 kbps (2 units).
- The required bandwidth for a single video call is 676.8 kbps (7.5 units).
- Hence, for a bidirectional video conferencing session the required bandwidth is 857.6 kbps (9.5 units).

The scenarios used for the performance analysis of the algorithms are as follows:

- Scenario 1: requested bandwidth is chosen from {1, 2, 3, 4} and end to end delay from {95, 96, 97, 98, 99, 100}, both using a uniform distribution.
- Scenario 2: requested bandwidth is chosen from {1, 2, 3, 4} and end to end delay from {60, 61, 62, 63, 64, 65}, both using a uniform distribution.
- Scenario 3: requested bandwidth is chosen from {1, 2, 7.5, 9.5} and end to end delay from {95, 96, 97, 98, 99, 100}, both using a uniform distribution.
- Scenario 4: requested bandwidth is chosen from {1, 2, 7.5, 9.5} and end to end delay from {60, 61, 62, 63, 64, 65}, both using a uniform distribution.

Figure 9 shows the number of requests accepted by each algorithm out of 3000 requests in the MIRA (Kodialam and Lakshman, 2000) topology. In the first scenario, the algorithms show almost equal performance. In the second scenario, where we decrease the end to end delay of requests, the algorithms which consider end to end delay (i.e. MDMF, MIRAD, SAMCRA, BGDG) react to the decrease and their performance drops, because they cannot provide the requested end to end delay using some of the remaining paths in the network. The algorithms which do not guarantee end to end delay (i.e. WSP, MIRA, BCRA, MHA, BGLC) simply ignore the end to end delay condition specified by the requests. In the third scenario, we increase the requested bandwidths. As can be seen, in this scenario both the delay guaranteeing and the bandwidth guaranteeing algorithms accept fewer requests than in the first scenario. We should note that the algorithms which guarantee end to end delay do not guarantee bandwidth and may allocate less bandwidth than requested to a demand (when enough residual bandwidth between the source and destination nodes does not exist); but with larger requested bandwidths, under any algorithm, the network is saturated by fewer requests. As Fig. 9 shows, in the third scenario our proposed algorithm MDMF performs better than the other algorithms and is comparable with BCRA and BGLC, which only guarantee bandwidth and do not consider the end to end delay constraints of the requests. Also, in Section 5.3.3, we show that BCRA returns longer paths than MDMF, and hence the paths returned by BCRA have more end to end delay than those of MDMF. In the fourth scenario, we increase the mean requested bandwidth and decrease the end to end delay. Again, our MDMF algorithm performs better than the other algorithms except BCRA and BGLC, which ignore the end to end delay constraints of the requests and have the other mentioned drawbacks.

5.3.2. Call blocking ratio

In the rest of this section, we assume that the LSP bandwidth demands are uniformly distributed between 1 and 4 units (as done in Kodialam and Lakshman, 2000 and Kar et al., 2000). Also, the end to end delay request of each demand is chosen between 95 and 100 milliseconds with uniform distribution. The call blocking ratio (Eq. (14)) is affected by how well network resources are utilized: the higher the call blocking ratio, the more requests are rejected. While the network is not saturated yet and capacity probably remains, inappropriate resource utilization and request congestion lead to request rejection. Simulation results are shown in terms of call blocking ratio in Figs. 10 (MIRA network) and 11 (ANSNET network). As can be seen, in the MIRA topology (Fig. 10), the algorithms
While the network is not saturated yet and there are probably capacities in the network, unappropriate resource utilization and requests congestion lead to request rejection. Simulation results are shown in Figs. 10 (MIRA network) and 11 (ANSNET network) in terms of call blocking ratio. As it can be seen, in the MIRA topology (Fig. 10), the algorithms

Fig. 9. Number of accepted requests by each algorithm from 3000 requests in MIRA topology.


Fig. 10. Call blocking ratio in MIRA.

Fig. 11. Call blocking ratio in ANSNET.

almost have the same performance, except MIRAD and SAMCRA, which have a higher call blocking ratio. Also, by increasing the load of the network, the call blocking ratio increases for all of the algorithms. This is expected, because as the number of accommodated requests grows, the remaining capacity of the network decreases and the chance of rejecting a new request increases. In the ANSNET topology (Fig. 11), we see different behavior. BCRA, MHA and WSP have the best call blocking ratio, MIRAD and SAMCRA have the worst, and MDMF has a lower call blocking ratio than MIRAD but a higher one than BCRA, MHA and WSP. We should note that, unlike BCRA, MHA and WSP, which only consider a single criterion, MDMF guarantees both end to end delay and bandwidth. In addition, we could tune MDMF to show a better call blocking ratio in this experiment by increasing μ_flow (at the cost of decreasing the significance of end to end delay), but to keep the comparison fair, we do not reconfigure the algorithm in each experiment.

5.3.3. Mean length

The mean length of the paths assigned to requests is a proper criterion for evaluating the performance of a routing algorithm, because longer paths consume more links as network resources. The mean label switched path length is computed as follows:

Mean length = ( Σ_{i=1}^{NLSPs} length(LSP_i) ) / NLSPs    (15)

In Eq. (15), NLSPs is the number of set up paths and length(LSP_i) is the number of links of path LSP_i. As can be seen in Figs. 12 (MIRA topology) and 13 (ANSNET topology), the WSP, SAMCRA and MHA algorithms have a better mean length than the other algorithms. The reason is that they give maximum priority to finding the shortest path under the bandwidth constraint, but this can lead to congestion in the network. Our algorithm (MDMF) finds longer paths than the other algorithms except BCRA. In the design of MDMF, our main goal is to satisfy the end to end delay and bandwidth conditions and to distribute the load of the network evenly over the links through the design of the link weights (Eq. (13)). Therefore, we can say that the MDMF algorithm has an appropriate mean length with regard to the larger number of granted requests and the guaranteed routing constraints. Comparing MDMF with the other algorithms, MIRAD has a smaller mean length than MDMF but also grants fewer requests. Although MIRA has a smaller mean length than MDMF, it only guarantees bandwidth for the requests and does not consider their end to end delay.
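Eq. (15) amounts to a simple average over the set-up paths. A one-function sketch (the list-of-links representation of an LSP is our assumption):

```python
def mean_length(lsps):
    """Eq. (15): mean number of links over the set up LSPs.
    Each LSP is represented as a list of links (u, v)."""
    return sum(len(path) for path in lsps) / len(lsps)
```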

Fig. 12. Mean length in MIRA.

Fig. 13. Mean length in ANSNET.


5.3.4. Mean maximum flow

As mentioned before, the maximum flow between a pair of ingress–egress nodes (s,d), in a given state of the network, is the maximum bandwidth that can be allocated to a future request from s to d in that state. If we denote the maximum flow between s and d by maxflow(s,d), the mean maximum flow is

maxflow_avg = ( Σ_{(s,d) ∈ P} maxflow(s,d) ) / |P|    (16)

where P is the set of ingress–egress pairs of the network. The mean maximum flow shows the average network capacity available for future demands.


Figures 14 (MIRA topology) and 15 (ANSNET topology) show the effect of network load on the mean maximum flow achieved by the algorithms. The general trend of all of the algorithms is expected: as the number of requests grows, the number of established routes increases and the residual capacity of the network decreases. Although algorithms like MIRA and MIRAD, which are designed to keep the maximum flow at the highest level, show a better maximum flow than MDMF, for a large number of requests (around 3000) MDMF performs the same as them in the MIRA topology, and likewise under light load (0–2500) in the ANSNET topology. Also, as shown in Section 5.3.1, to obtain a higher maximum flow we can tune the MDMF algorithm by increasing μ_flow.

Fig. 14. Maxflow in MIRA.

Fig. 15. Maxflow in ANSNET.


5.3.5. Load balancing

By load balancing in the network, we mean distributing the workload evenly across the network links to achieve optimal resource utilization, better throughput and better response time, and to avoid overload. To measure load balancing, we use the variance of link utilization achieved by the algorithms. Figures 16 (MIRA topology) and 17 (ANSNET topology) show the variance of link utilization of the MDMF algorithm in comparison with the other algorithms. A smaller variance in link utilization means that traffic is distributed more uniformly, i.e. we have better load balancing.

As can be seen, the MHA, WSP, MIRAD and SAMCRA algorithms have the highest variance of link utilization in comparison with the other algorithms. The BGLC and BCRA algorithms have the lowest variance of link utilization, but they use more links (as shown in Section 5.3.3, BCRA has the largest mean length). Although MDMF guarantees bandwidth and end to end delay together, because its link weights are inversely proportional to the residual bandwidth, it has the third lowest variance of link utilization (after BCRA and BGLC) in comparison with the algorithms that only guarantee the bandwidth constraint.
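The load-balancing metric used above is the population variance of per-link utilization. A minimal sketch (the dictionary layout is our assumption):

```python
from statistics import pvariance

def utilization_variance(capacity, allocated):
    """Load-balancing metric of Section 5.3.5: population variance of
    per-link utilization (allocated / capacity). Lower is better."""
    return pvariance([allocated[l] / capacity[l] for l in capacity])
```

Two links carrying identical load give variance 0, while a fully loaded link next to an idle one gives the maximum spread.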

Fig. 16. Variance of link utilization in MIRA.

Fig. 17. Variance of link utilization in ANSNET.


5.3.6. Average end to end path delay

Figures 18 (MIRA topology) and 19 (ANSNET topology) show the average end to end path delay of the requests accommodated in the network by the routing algorithms. As the figures show, under light load (fewer than 2000 requests), WSP and MHA have minimum end to end delay. The reason is that these algorithms use the shortest path for routing, but their end to end delays increase with the number of requests because of congestion. SAMCRA has the minimum end to end delay, but its number of accepted requests is too low, because SAMCRA does not take care of critical links for future demands. BCRA has the maximum end to end path delay, because the aim of BCRA is to distribute load uniformly across the network and it tends to use longer links in routing. Also, MIRA and MIRAD only care about maximum flow to cover future demands and do not consider end to end delay, so their performance in terms of end to end delay degrades as the load of the network increases (i.e. 3000+ requests). As can be seen, the MDMF algorithm has the second lowest end to end delay under high load, because the weight of each link in MDMF is directly proportional to the effect of that link on the minimum end to end delay, so MDMF keeps end to end path delays low for future demands. As the figures show, the performance of MDMF in terms of end to end delay is nearly constant even when the network is

Fig. 18. Average of end to end path delay in MIRA.

Fig. 19. Average of end to end path delay in ANSNET.


saturated. The reason is that MDMF guarantees the end to end delay of accepted requests.

5.3.7. Variance of end to end path delay

The variance of end to end path delay is an indicator of how evenly an algorithm distributes end to end delay among the requests. Figures 20 (MIRA topology) and 21 (ANSNET topology) show the variance of the end to end path delay of the requests accepted by the algorithms. As can be seen, among the algorithms with a high acceptance rate, the proposed algorithm MDMF has the best end to end delay variance. The reason is that MDMF guarantees end to end delay and gives no preference to early requests. The other algorithms do not consider end to end delay constraints, and as a result, when the load of the network increases, the average end to end delay of the requests increases as well. Although SAMCRA has the minimum variance of end to end path delay, its number of accepted requests is low, because SAMCRA does not take care of critical links for future demands.

6. Conclusion In this paper, quality of service based routing for traffic engineering in the core of multi service networks equipped with

Fig. 20. Variance of end to end path delay in MIRA.

Fig. 21. Variance of end to end path delay in ANSNET.


MPLS architecture was studied. The appearance of real time and multimedia applications with different quality of service requirements highlights the importance of the issue. In this paper, we presented the Minimum Delay and Maximum Flow (MDMF) algorithm, which guarantees the end to end delay and bandwidth of label switched path requests simultaneously. MDMF considers not only the importance of critical links, but also their relative importance for routing possible future LSP set-up requests. MDMF tries to avoid passing traffic streams over links that decrease the maximum flow or increase the minimum delay between source and destination nodes, or both; therefore, it manages and conserves network resources for future demands. In this algorithm, we use only the LR server mechanism to calculate the end to end path delay: MDMF just uses the residual bandwidths of the links to calculate a path delay and, unlike other algorithms, does not need any additional information. The presented algorithm is parametrically tunable in terms of the balance between the significance of flow and of delay in a network, so routers which employ the MDMF algorithm can be deployed in MPLS networks with different configuration requirements and traffic characteristics. According to the simulations, if the tunable parameters are adjusted to their optimum values, the call blocking ratio is reduced. As mentioned, the optimum values of the tunable parameters depend on the bandwidth and end to end delay of the requests, and they can be set by the network administrator to prioritize flow over delay or vice versa. For example, if there is a reduction in the average requested end to end delay limit, we must increase the weight of the tunable parameter related to minimum end to end delay in the objective function to decrease the call blocking ratio, because the ability of the system to accept delay limited requests then increases.
In practice, the administrator knows the traffic and QoS characteristics of the input demands before deploying the network, because the load characteristics of the system are predictable in the design phase of real world networks and the network is designed to support a set of desired applications. Estimating the input demands with high accuracy is an important point for implementing MDMF; with such an estimate, the administrator can determine the optimum values of the tunable parameters. After the parameters are tuned, a protocol that simply informs the routing algorithm of the residual bandwidth of the network links is sufficient to run MDMF as the routing algorithm in MPLS routers. After an LSP is calculated by MDMF, it is set up using a signaling protocol such as RSVP or CR-LDP. The only implementation concern is the computational complexity of MDMF: in sparse networks with a reasonable number of ingress–egress pairs, MDMF has the best performance, while in dense networks it is better to use fast hardware in the routers to decrease the response time of the MDMF algorithm. To evaluate the performance of MDMF, we performed extensive simulations and compared MDMF with other related algorithms (i.e. MIRA, BCRA, MHA, MIRAD, WSP, BGDG, BGLC and SAMCRA). The results showed that MDMF rivals the other algorithms in terms of flow management and has the best end to end delay management. This is a significant result, because the other algorithms support only one constraint, bandwidth or end to end delay, whereas MDMF supports both constraints simultaneously. In addition, MDMF has very good traffic load balancing among the studied algorithms and the best delay load balancing.

References

Agrawal H, Jennings A. Evaluation of routing with robustness to the variation in traffic demand. J Netw Syst Manag 2011;19:513–28, http://dx.doi.org/10.1007/s10922-010-9193-6.
Aissi H, Bazgan C, Vanderpooten D. Complexity of the min-max and min-max regret assignment problems. Oper Res Lett 2005;33(6):634–40, http://dx.doi.org/10.1016/j.orl.2004.12.002.
Akar N, Toksöz MA. MPLS automatic bandwidth allocation via adaptive hysteresis. Comput Netw 2011;55(5):1181–96, http://dx.doi.org/10.1016/j.comnet.2010.11.009.


Alidadi A, Mahdavi M, Hashmi MR. A new low-complexity QoS routing algorithm for MPLS traffic engineering. In: IEEE 9th Malaysia international conference on communications; 2009.
Apostolopoulos G, Guerin R, Kamat S, Tripathi SK. Server based QoS routing. In: Proceedings of GLOBECOM'99, Rio de; 1999. p. 762–8.
Awduche D, Berger L, Gan D, Li T, Srinivasan V, Swallow G. RSVP-TE: extensions to RSVP for LSP tunnels. Report, IETF RFC 3209; 2001.
Awduche DO, Chiu A, Elwalid A, Widjaja I, Xiao X. Overview and principles of internet traffic engineering. Report, IETF RFC 3272; 2002.
Capone A, Fratta L, Martignon F. Dynamic online QoS routing schemes: performance and bounds. Comput Netw 2006;50:966–81.
Chen S, Nahrstedt K. Distributed quality-of-service routing in high-speed networks based on selective probing. IEEE J Sel Areas Commun 1998;17:1488–505.
Cormen T. Introduction to algorithms. In: MIT electrical engineering and computer science series. MIT Press; 2001. 〈http://books.google.com/books?id=NLngYyWFl_YC〉.
Crawley E, Nair R, Rajagopalan B, Sandick H. A framework for QoS-based routing in the internet. Report, IETF RFC 2386; 1998.
Di Sorte D, Femminella M, Reali G. QoS-enabled multicast for delivering live events in a digital cinema scenario. J Netw Comput Appl 2009;32(1):314–44, http://dx.doi.org/10.1016/j.jnca.2008.02.004.
Edmonds J, Karp RM. Theoretical improvements in algorithmic efficiency for network flow problems. J ACM 1972;19(2):248–64, http://dx.doi.org/10.1145/321694.321699.
Elwalid A, Jin C, Low S, Widjaja I. MATE: MPLS adaptive traffic engineering. In: INFOCOM; 2001.
Fredman ML, Tarjan RE. Fibonacci heaps and their uses in improved network optimization algorithms. J ACM 1987;34:596–615.
Ghein LD. MPLS fundamentals. 1st ed. Indianapolis; 2007.
Gopalan K, Chiueh T-c, Lin Y-J. Load balancing routing with bandwidth-delay guarantees. IEEE Commun Mag 2004;42:108–13.
Gredler H, Goralski W. The complete IS-IS routing protocol. New York: Springer Verlag; 2004.
Guerin R, Orda A, Williams D. QoS routing mechanisms and OSPF extensions; 1996.
Heydarian M. A high performance optimal dynamic routing algorithm with unicast multichannel QoS guarantee in communication systems. J Supercomput, http://dx.doi.org/10.1007/s11227-011-0723-0.
Kar K, Kodialam M, Lakshman TV. Minimum interference routing of bandwidth guaranteed tunnels with MPLS traffic engineering application. IEEE J Sel Areas Commun 2000;18(12):2566–79.
Kodialam M, Lakshman T. Minimum interference routing with applications to MPLS traffic engineering. In: INFOCOM; 2000.
Kotti A, Hamza R, Bouleimen K. Bandwidth constrained routing algorithm for MPLS traffic engineering. In: Proceedings of the third international conference on networking and services. Washington, DC, USA: IEEE Computer Society; 2007. p. 20, http://dx.doi.org/10.1109/ICNS.2007.40.
Kulkarni S, Sharma R, Mishra I. New QoS routing algorithm for MPLS networks using delay and bandwidth constraints. Int J Inf Commun Technol Res 2012;2:285–93.
Leroux P, Latré S, Staelens N, Demeester P, De Turck F. Mobile TV services through IP datacast over DVB-H: dependability of the quality of experience on the IP-based distribution network quality of service. J Netw Comput Appl 2011;34(5):1474–88, http://dx.doi.org/10.1016/j.jnca.2010.10.004.
Mehmood R, Alturki R, Zeadally S. Multimedia applications over metropolitan area networks (MANs). J Netw Comput Appl 2011;34(5):1518–29, http://dx.doi.org/10.1016/j.jnca.2010.08.002.
Mishra P, Saran H. Capacity management and routing policies for voice over IP traffic. IEEE Netw 2000;14:20–7.
Moy J. OSPF version 2. Report, IETF RFC 2328; 1998.
Neve HD, Mieghem PV. TAMCRA: a tunable accuracy multiple constraints routing algorithm. Comput Commun 2000;23:667–79.
Pastaki AG, Sahab AR, Sadeghi SM. A new routing algorithm: MIRAD. World Acad Sci Eng Technol 2011;79:59–62.
Rosen E, Viswanathan A, Callon R. Multiprotocol label switching architecture. Report, IETF RFC 3031; 2001.
Salah K, Calyam P, Buhari MI. Assessing readiness of IP networks to support desktop videoconferencing using OPNET. J Netw Comput Appl 2008;31(4):921–43, http://dx.doi.org/10.1016/j.jnca.2007.01.001.
Salama HF, Reeves DS, Viniotis Y. A distributed algorithm for delay-constrained unicast routing. In: IEEE INFOCOM'97; 1997. p. 239–50.
Salles RM, Carvalho JM. An architecture for network congestion control and charging of non-cooperative traffic. J Netw Syst Manag 2011;19:367–93, http://dx.doi.org/10.1007/s10922-010-9185-6.
Song L, Chen F, Yang X. A minimum interference anycast QoS routing for MPLS traffic engineering. In: IC-NIDC 2009; 2009.
Stiliadis D, Varma A. Latency-rate servers: a general model for analysis of traffic scheduling algorithms. In: Proceedings of IEEE INFOCOM '96; 1996. p. 111–9.
Tanenbaum A. Computer networks. 4th edition. Prentice Hall Professional Technical Reference; 2002.
Van Mieghem P, Kuipers FA. Concepts of exact QoS routing algorithms. IEEE/ACM Trans Netw 2004;12(5):851–64, http://dx.doi.org/10.1109/TNET.2004.836112.
Vilalta R, Muñoz R, Casellas R, Martinez R, Vilchez J. GMPLS-enabled MPLS-TP/PWE3 node with integrated 10 Gbps tunable DWDM transponders: design and experimental evaluation. Comput Netw 2012;56(13):3123–35.
Wang J, Nahrstedt K. Hop-by-hop routing algorithm for premium-class traffic in DiffServ networks. In: INFOCOM; 2002.


West DB. Introduction to graph theory. 2nd edition. Prentice Hall; 2001. 〈http://www.amazon.com/Introduction-Graph-Theory-Douglas-West/dp/0130144002〉.

Yang Y, Zhang L, Muppala JK, Chanson ST. Bandwidth-delay constrained routing algorithms. Comput Netw 2003;42:503–20, http://dx.doi.org/10.1016/S1389-1286(03)00199-3.