Computer Networks 53 (2009) 1530–1545
A novel path protection scheme for MPLS networks using multi-path routing

Sahel Alouneh a,*, Anjali Agarwal b, Abdeslam En-Nouaary c

a German-Jordanian University, Jordan
b Department of Electrical and Computer Engineering, Concordia University, 1515 St. Catherine West, Montreal, Canada H4G 2W1
c Institut National des Postes et Telecommunications (INPT), Madinat Al Irfane, Rabat, Morocco
Article info

Article history: Received 17 April 2008; Received in revised form 17 November 2008; Accepted 3 February 2009; Available online 11 February 2009. Responsible Editor: G. Ventre.

Keywords: MPLS; Fault tolerance; Failure recovery; Path protection; Packet loss
Abstract

Multi-protocol label switching (MPLS) is an evolving network technology that is used to provide traffic engineering (TE) and high speed networking. Internet service providers that support MPLS technology are increasingly required to provide high quality of service (QoS) guarantees. One of the aspects of QoS is fault tolerance, defined as the property of a system to continue operating in the event of failure of some of its parts. Fault tolerance techniques are very useful for maintaining the survivability of the network by recovering from failure within acceptable delay and with minimum packet loss, while efficiently utilizing network resources. In this paper, we propose a novel approach for fault tolerance in MPLS networks. Our approach uses a modified (k, n) threshold sharing scheme with multi-path routing. An IP packet entering the MPLS network is partitioned into n MPLS packets, which are assigned to node/link disjoint LSPs across the MPLS network. Receiving MPLS packets from k out of n LSPs is sufficient to reconstruct the original IP packet. The approach introduces no packet loss and no recovery delay while requiring a reasonable amount of redundant bandwidth. In addition, it can easily handle single and multiple path failures.

© 2009 Elsevier B.V. All rights reserved.
1. Introduction

Multi-protocol label switching (MPLS) [1] is an evolving technology that improves routing performance and speed, and enables traffic engineering by providing significant flexibility in routing. It is also capable of providing controllable quality of service (QoS), primarily by prioritizing Internet traffic. MPLS provides mechanisms in IP backbones for explicit routing using label switched paths (LSPs), encapsulating
the IP packet in an MPLS packet. When IP packets enter an MPLS-based network, label edge routers (LERs) assign them a label identifier based on the classification of incoming packets, relating them to their forward equivalence class (FEC). Once this classification is complete and mapped, the packets are assigned to corresponding label switched paths (LSPs), where label switch routers (LSRs) place outgoing labels on the packets. In this basic procedure, all packets that belong to a particular FEC follow the same path to the destination, without regard to the original IP packet header information. The constraint-based label distribution protocol (CR-LDP) [2] or RSVP-TE [3], an extension of the resource reservation protocol, is used to distribute labels and bind them to LSPs. Fig. 1 shows a simple process for requesting and assigning labels in an MPLS network.
Fig. 1. MPLS single path label distribution.
Here, an IP packet with IP prefix value 47.1 entering the MPLS network is assigned labels L7 and L4 to set up an LSP used to forward the packet from the ingress router towards the egress router. Each LSR that receives an MPLS packet with a Label In value, after checking its FEC value, forwards the packet to the next router with the corresponding Label Out value. MPLS has many advantages for traffic engineering. It increases network scalability, simplifies network service integration, offers integrated recovery, and simplifies network management. However, MPLS is very vulnerable to failures because of its connection-oriented architecture. Path restoration is provided mainly by rerouting the traffic around a node/link failure in an LSP, which introduces considerable recovery delays and may incur packet loss. Such vulnerabilities are very costly for time-critical communication [5] such as real-time applications, which tolerate a recovery time ranging from seconds down to tens of milliseconds. Service disruption due to a network failure or high traffic in the network may therefore cause customers a significant loss of revenue during the network down time, which may lead to bad publicity for the service provider [4]. To prevent such consequences, routers and other system elements in MPLS networks should be resilient to node or link failures. In other words, MPLS networks should implement efficient mechanisms for ensuring the continuity of operations in the event of failure anywhere in the network. This aspect is usually referred to as fault tolerance. It is defined as the property of a system to continue operating properly in the event of failure of some of its parts. Over the past years, several research works have dealt with fault tolerance in MPLS networks (for instance [5–14]). They can be classified into two categories, namely pre-established protection techniques and dynamic protection techniques. In the former, a backup LSP is pre-established and configured at the beginning of the communication in order to reserve extra bandwidth for each working path. In the latter, no backup LSP is established in advance but only after a failure occurs; hence, extra bandwidth is reserved only when failures happen. Pre-established path protection is the most suitable for real-time restoration of MPLS networks due to its fast restoration speed. The dynamic protection model, however, does not waste bandwidth but may not be suitable
for time-sensitive applications because of its large recovery time [5–7]. In addition, recovery schemes can be classified as either link restoration or path restoration according to the initialization location of the rerouting process. In link restoration, the nodes adjacent to a failed link are responsible for rerouting all affected traffic demands. In contrast, in path restoration, the ingress node initiates the rerouting process irrespective of the location of the failure. When the reserved spare capacity can be shared among different backup paths, it is called shared path/link restoration. In general, path restoration requires less total spare capacity reservation than the link restoration scheme [8]. Besides the above classification, protection techniques can be compared to one another based on parameters such as bandwidth redundancy, type of failures handled, recovery time, and packet loss. The first parameter measures how much extra bandwidth is required by the protection scheme. The second parameter determines whether the protection technique recovers from single or multiple failures. The third parameter indicates how much time is required by the technique to reroute traffic after a failure. Finally, the last parameter gives the percentage of packets lost due to failures. As pointed out later on, none of the existing fault tolerance schemes provides path protection with no packet loss, no recovery delay, and minimal redundant bandwidth at the same time. Therefore, there is still a need for new protection techniques that can optimize all of these factors. In this paper, we present a novel approach for fault tolerance in MPLS networks using a modified (k, n) threshold sharing scheme with multi-path routing. An IP packet entering the MPLS network is partitioned into n MPLS packets, which are assigned to n disjoint LSPs across the MPLS network. Receiving MPLS packets from k out of n LSPs is sufficient to reconstruct the original IP packet. The proposed approach has the following advantages. Firstly, it handles single as well as multiple failures. Secondly, it allows the reconstruction of the original packet without any packet loss and with zero recovery delay. Thirdly, it requires a reasonable amount of redundant bandwidth for full protection from failures. It is worth noting that in case the network topology does not offer n disjoint paths, the proposed approach should use n maximally disjoint paths. In this case, the reconstruction of the original packet might
be at risk if failures occur in the shared links.

The rest of this paper is organized as follows. Section 2 is devoted to related work. Section 3 introduces our approach for fault tolerance in MPLS and discusses the main issues related to it. Section 4 presents the performance evaluation and the simulation results. Section 5 concludes the paper.

2. Related work

As indicated previously, the recovery approaches in MPLS fall into one of two categories: pre-established protection and dynamic protection. Since our approach is based on n pre-established paths between the ingress and egress routers, the following discussion is limited to existing pre-established protection schemes only. The 1 + 1 ("one plus one") protection discussed in [5,6,9] can provide path recovery without packet loss or recovery time. The resources (bandwidth, buffers, and processing capacity) on the recovery path are fully reserved and carry the same traffic as the working path, requiring a substantial amount of dedicated backup resources. Selection between the traffic on the working and recovery paths is made at the path merge LSR (PML) or the egress LER. In this scheme, the resources dedicated for the recovery of the working traffic may not be used for anything else. Generally speaking, protection schemes use 1:1 path protection (extendible to 1:N and M:N shared protection) for efficient bandwidth utilization, where every link can carry either regular traffic or backup traffic and thus does not require dedicated backup links. The resources on the recovery path may be shared with other working paths. To support differentiated services, when the network is failure-free the capacity of the backup paths is normally utilized to carry packets belonging to lower priority class types (e.g., best effort). In case of failure, the lower class traffic is blocked to support backup for high priority traffic [10].

The paper by Makam et al. [5] proposes a PSL (path switch LSR) oriented path protection mechanism that consists of three components: a fast and efficient reverse notification tree structure that minimizes the delay experienced by the notification message traveling from the fault detection node to the protection switching node (i.e., the PSL), a hello protocol to detect faults, and a lightweight notification transport protocol to achieve scalability.
The paper by Haskin et al. [11] is also based on a pre-established alternative path. The backup path is comprised of two segments. The first segment is established between the last-hop working switch and the ingress LSR, in the reverse direction of the working path. The second segment is built between the ingress LSR and the egress LSR along an LSP that does not utilize any working path. In Fig. 2, if the link between LSR4 and LSR9 fails, all the traffic on the working path is rerouted along the backup path LSR 3–2–1–5–6–7–8–9. Optionally, as soon as LSR1 detects the reverse traffic flow, it may stop sending traffic downstream on the primary path and start sending data traffic directly along the second segment.

The paper by Buddhikot et al. [12] addresses guaranteed uninterrupted connectivity in case of a link/node failure on the primary path by finding a set of backup LSPs that protect the links along the primary LSP. The authors introduce the concept of "backtracking", where the backup path may originate at the failed link (i.e., local restoration) or, in the worst case, at the ingress node (i.e., path protection). In other words, it provides algorithms that offer a way to trade off bandwidth to meet a range of restoration latency requirements.

The paper by Virk et al. [13] presents an economical global protection framework that is designed to provide minimal involvement of intermediate LSRs and a reduction in the number of path switch LSRs responsible for switching the traffic from the failed working path to the backup path. The proposed scheme uses a directory service, a logically centralized and physically distributed database, to provide a fast lookup of information. In that paper, the performance evaluation only considers packet loss.

The authors of [14] present a solution to further reduce the spare capacity allocation (SCA) to near optimality by proposing a successive survivable routing (SSR) algorithm for mesh-based communication networks. First, per-flow spare capacity sharing is captured by a spare provision matrix (SPM). The SPM matrix has a dimension of the number of failure scenarios by the number of links. It is used by each demand to route the backup path and share spare capacity with other backup paths. Next, based on a special link matrix calculated from the SPM, SSR iteratively routes/updates backup paths in order to minimize the cost of the total spare capacity. The network redundancy (the ratio of the total spare capacity over the total working capacity) for the shared path restoration cases measured in this paper
Fig. 2. Path restoration example of Haskin et al. [11].
ranges from 35% to 70%. The paper, however, does not consider recovery delay, packet loss, or re-ordering.

The work in reference [31] provides a method for capacity optimization for path and span restorable networks. The approach uses an integer programming formulation based on flow constraints that solves the spare and working capacity placement problem for both span and path restorable networks. The paper, however, does not consider restoration delay or packet loss overhead.

The paper in reference [32] proposes a mixed shared path protection scheme. The scheme defines three types of resources: (a) primary resources that can be used by primary paths, (b) spare resources that can be used by backup paths, and (c) mixed resources that can be shared by both primary and backup paths. The approach in this paper is different from other shared protection schemes because it allows some primary and backup paths to share the common mixed resources if the corresponding constraints can be satisfied. However, the paper does not consider path restoration delay or packet loss overhead.

Seok et al. proposed in [15] a fault-tolerant multi-path traffic engineering scheme for MPLS networks with the objective of effectively controlling network resource utilization. The proposed scheme consists of a maximally disjoint multi-path configuration and a traffic rerouting mechanism for fault recovery. When link failures are detected, the proposed mechanism routes the traffic flowing on the failed LSPs onto the available LSPs. Only bandwidth utilization is considered; recovery time and packet loss are not discussed in this work. Moreover, this approach applies the same mechanism used in any other path protection approach, where recovery time and packet loss occur.

A distributed LSP scheme to reduce spare bandwidth demand in MPLS networks proposed in [7] is also based on a pre-established protection scheme. Here the ingress LER groups the incoming LSP, which is carrying the incoming IP packet flows, into several sub-groups, each of which is assigned to a distinct sub-LSP out of K sub-LSPs. Only a single failure is assumed; hence only one backup sub-LSP of the amount 1/K is required. The scheme proposed in [7] does not, however, consider reducing recovery delay and/or packet loss. A path failure results in switching over to the backup path and therefore incurs considerable recovery delay.

Dispersity routing on ATM networks [16] also requires networks with multiple disjoint paths to spread the data from a source over several paths. To provide redundancy, the message is divided into fewer sub-messages than there are paths. Additional sub-messages are constructed as a linear combination of the bits in the original sub-messages (e.g., modulo-2, parity check codes), such that the original message may be reconstructed without receiving all of the sub-messages. For an (N, K) system, N is the number of paths used and the original message is subdivided into K parts, where N > K. For a modulo-2 system N = K + 1, and the system can only tolerate a single path failure. The missing sub-message creates holes in the codeword whose positions should be known and whose values can be reconstructed from the received sub-messages. In order to handle multiple failures, N > K + 1; for example, in a (3, 1)
system, a "flooding" strategy is used where the entire message is transmitted on all three paths until the message is received over at least one path, thus requiring more redundant bandwidth.

It is worth noting that we have also proposed in [30] the use of a threshold sharing scheme to improve security in MPLS networks. The difference between the two papers is in the application of the threshold sharing scheme. The security requirements and analysis are different from those required for fault tolerance; in other words, data confidentiality and integrity are the main issues considered in [30].

It is seen from the previous related work on MPLS fault tolerance that recovery time, packet loss and bandwidth utilization are the main service parameters for real-time traffic. However, most of the approaches in the literature focus on reducing working and recovery bandwidth utilization while considering the recovery delay. There is no scheme that can provide path protection with no packet loss and no recovery delay except the 1 + 1 protection, at the cost of 100% redundant bandwidth reservation, or dispersity routing, which can handle single failures with lower redundant bandwidth but requires knowledge of the failure location.

3. Our approach to fault tolerance in MPLS

This section presents our approach for fault tolerance in MPLS networks. The approach uses a modified version of the (k, n) threshold sharing scheme (TSS) [17] with multi-path routing, wherein k out of n LSPs are required to reconstruct the original message. The threshold sharing scheme is a well-known concept used to provide security. However, to the best of our knowledge, it has never been employed for providing fault tolerance in networks, particularly MPLS networks. The idea behind the threshold sharing scheme (TSS) is to divide a message into n pieces, called shadows or shares, such that any k of them can be used to reconstruct the original message. Using any number of shares less than k will not help to reconstruct the original message. There exist different ways of implementing TSS [17–19]. For instance, the Adi Shamir polynomial approach [17] is based on Lagrange interpolation for polynomials. The polynomial function is as shown in Eq. (1), where p is a prime number, the coefficients a_0, ..., a_{k−1} are unknown elements over a finite field Z_p, and a_0 = M is the original message.
f(x) = (a_{k−1} x^{k−1} + · · · + a_2 x^2 + a_1 x + a_0) mod p.    (1)
Using Lagrange linear interpolation the polynomial function can be represented as follows:
f(x) = Σ_{j=1}^{k} y_{i_j} · Π_{1 ≤ s ≤ k, s ≠ j} (x − x_{i_s}) / (x_{i_j} − x_{i_s}).    (2)
Since y_{i_j} = f(x_{i_j}) for 1 ≤ j ≤ k (with each x_{i_j} taken from the n agreed values, 1 ≤ i_j ≤ n), a subset of k participants can obtain k linear equations in the k unknowns a_0, ..., a_{k−1}, where all arithmetic is done in Z_p. If the equations are linearly independent, there will be a unique solution, and a_0 will be revealed.
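As an illustration of the original scheme just described, the following minimal Python sketch (ours, not from the paper; the helper name and the prime p = 257 for byte-sized secrets are assumptions) generates n shares from a secret, any k of which determine a_0 via Eq. (2):

```python
# Illustrative sketch of the original Shamir (k, n) TSS of Eq. (1): the secret M
# is placed in a_0 and the remaining coefficients are chosen at random over Z_p.
# The prime p = 257 (suitable for byte-sized secrets) and the function name are
# our own choices, not part of the paper.
import random

def shamir_shares(secret, k, n, p=257):
    coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]  # a_0 = M
    # share i is the point (x_i, f(x_i)); any k shares recover a_0 via Eq. (2)
    return [(x, sum(c * pow(x, e, p) for e, c in enumerate(coeffs)) % p)
            for x in range(1, n + 1)]

print(shamir_shares(secret=42, k=3, n=4))
```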
Fig. 3. Distribution and reconstruction processes.
The original TSS is not suitable for network data communication because of its excessive overhead [22], as pointed out in Section 4.1. That is why, in our approach, we modify it such that the coefficients of Eq. (1) are not chosen randomly but are parts of the original IP packet. Fig. 3 gives an architectural view of our approach and shows how the original IP packet is distributed and reconstructed by our technique. When an IP packet enters an MPLS ingress router, a distribution process at the ingress router is used to divide, encode and generate the n share messages that become the payloads of n MPLS packets. The generated MPLS packet shares are allocated over n disjoint LSPs obtained using multi-path routing [4,20–22] or equal-cost multi-path (ECMP) routing [23]. When an ECMP routing scheme is used, where all the LSPs found have the same cost, at least k MPLS packets are expected to be received at roughly the same time by the egress router. In the case of other multi-path routing protocols, at least k MPLS packets should
be received within the time frame of receiving an MPLS packet from the slowest LSP. Thereafter, the reconstruction process at the egress router generates the original IP packet from the first k MPLS packets received. Notice that the intermediate LSRs are not involved at all in the division and reconstruction of the original packets; their role is limited to routing the MPLS packets they receive towards the egress router. To help better understand our proposed algorithm, the following illustrates the distribution and reconstruction processes through an example.

3.1. An example of the distribution process

Fig. 4 shows an example of how the distribution process applies a (3, 4) modified threshold sharing scheme to an IP packet. The IP packet is first divided into m blocks S_1, S_2, ..., S_m, where each block size L is a multiple of k bytes.
Fig. 4. Distribution process in the ingress router applying a (3, 4) modified TSS.
The effect of block size L on packet processing is shown later in Fig. 14. From Eq. (1), it can easily be seen that there are three coefficients, a_0, a_1 and a_2, for a (3, 4) scheme where k = 3. Each block is therefore divided into k equal parts and these coefficients are assigned values from the block (unlike the original TSS scheme, where a_1 and a_2 are assigned random values). For example, a_0 = 06, a_1 = 28 (1C in hex), and a_2 = 08 for block S_1. Next, m quadratic equations f(S_j, x), where 1 ≤ j ≤ m, are generated using the three coefficients from each of the m blocks; that is, every block generates a quadratic equation. Each quadratic equation is evaluated n times using the n different x_i values, 1 ≤ i ≤ n, as agreed between the sender (ingress) and the receiver (egress). Each MPLS packet payload therefore consists of the m encoded values obtained from the m quadratic equations using the same x_i value, as shown in Fig. 4. Each LSP corresponds to an x_i value. It can easily be seen that the size of each MPLS packet payload is m · L/k; in other words, it is equal to the size of the IP packet divided by k. For example, for block S_1, the equation generated is
f(S_1, x) = (8x^2 + 28x + 6) mod 257.

This equation (and similarly the one generated for each other block) is evaluated at the n different x values as follows:
f(S_1, x = 1) = 42,   f(S_2, x = 1) = 116,
f(S_1, x = 2) = 94,   f(S_2, x = 2) = 112,
f(S_1, x = 3) = 162,  f(S_2, x = 3) = 256,
f(S_1, x = 4) = 246,  f(S_2, x = 4) = 34.

The complexity of the distribution process is deduced from the explanation above; it is expressed in terms of the original packet size, the size of the blocks used, and the number of LSPs over which the resulting MPLS packets are sent. More precisely, if a is the size of the original IP packet coming into the ingress router, b is the size of the blocks resulting from the division of the IP packet, and c is the number of LSPs used between the ingress and egress routers, then the complexity of the distribution process is O((a/b) · c).
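The distribution step of this example can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' implementation; the function name, the single-byte coefficients (i.e., a block size of L = k), and the use of integer lists as share payloads are our simplifying assumptions:

```python
# Illustrative sketch of the distribution process for the modified (k, n) TSS
# (here (3, 4)), following Fig. 4. For simplicity each coefficient is one byte,
# i.e. the block size L equals k. Names and data layout are our own assumptions.
def distribute(packet: bytes, k=3, n=4, p=257):
    if len(packet) % k:                      # pad to a whole number of blocks
        packet += bytes(k - len(packet) % k)
    shares = [[] for _ in range(n)]          # one encoded payload per LSP
    for b in range(0, len(packet), k):
        block = packet[b:b + k]              # bytes become a_{k-1}, ..., a_0
        for i, x in enumerate(range(1, n + 1)):
            y = sum(c * pow(x, k - 1 - j, p) for j, c in enumerate(block)) % p
            shares[i].append(y)              # value carried on LSP i+1 (x = i+1)
    return shares

# Blocks S1 = 08 1C 06 and S2 = 4A 1F 0B from Fig. 4:
packet = bytes([0x08, 0x1C, 0x06, 0x4A, 0x1F, 0x0B])
for i, share in enumerate(distribute(packet), start=1):
    print(f"LSP{i}:", share)  # LSP1: [42, 116], LSP2: [94, 112], LSP3: [162, 256], LSP4: [246, 34]
```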
3.2. An example of the reconstruction process

Now, we consider again the example of Fig. 4 to illustrate how the reconstruction of the original IP packet is done at the egress router. Fig. 5 shows the process after receiving any k of the n MPLS packets of Fig. 4. In the figure, the MPLS packets received from LSP2, LSP3 and LSP4 are considered. Since both the ingress and egress routers use the same polynomial function, the order of the coefficients a_0, a_1, ..., a_{k−1} is already preserved and does not depend on the location of the failure or on which path has failed.

Fig. 5. Reconstruction process in the egress router applying a (3, 4) modified TSS.

The following three equations are used to obtain the function f(S_1, x) using Lagrange interpolation:

(a_0 + a_1(2) + a_2(2)^2) ≡ 94 (mod 257), from LSP2,
(a_0 + a_1(3) + a_2(3)^2) ≡ 162 (mod 257), from LSP3,
(a_0 + a_1(4) + a_2(4)^2) ≡ 246 (mod 257), from LSP4.

Using Eq. (2), the following is obtained:

f(S_1, x) = 8x^2 + 28x + 6 mod 257,

where the original values of the coefficients for block S_1 are a_2 = 8 (08 in hex), a_1 = 28 (1C in hex), and a_0 = 6 (06 in hex). Similarly, the following three equations are used to obtain the function f(S_2, x) using Lagrange interpolation:

(a_0 + a_1(2) + a_2(2)^2) ≡ 112 (mod 257), from LSP2,
(a_0 + a_1(3) + a_2(3)^2) ≡ 256 (mod 257), from LSP3,
(a_0 + a_1(4) + a_2(4)^2) ≡ 34 (mod 257), from LSP4.

Using Eq. (2), the following is obtained:

f(S_2, x) = 74x^2 + 31x + 11 mod 257,

where the original decimal values of the coefficients for block S_2 are a_2 = 74 (4A in hex), a_1 = 31 (1F in hex), and a_0 = 11 (0B in hex).

Similar to the distribution process, the complexity of the reconstruction process can be easily derived from the previous explanation. It is expressed in terms of the number of MPLS packets required to reconstruct the original IP packet, the number of blocks used, and the complexity of the Lagrange linear interpolation. More precisely, if a is the number of MPLS packets required and b is the number of blocks used, then the complexity of the reconstruction process is O(b · a^3), where the complexity of the Lagrange linear interpolation is O(a^3) according to [29].
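For illustration, the Lagrange interpolation of Eq. (2) over Z_257 can be sketched as follows; this is our own sketch, not the authors' code, and the helper names and data layout are assumptions:

```python
# Illustrative sketch of the reconstruction step: given any k received (x, y)
# values of one block, Lagrange interpolation over Z_257 (Eq. (2)) recovers all
# polynomial coefficients, i.e. the original block bytes.
def poly_mul_linear(c, x0, p):
    # multiply the polynomial c (ascending coefficients) by (x - x0) mod p
    out = [0] * (len(c) + 1)
    for i, ci in enumerate(c):
        out[i] = (out[i] - x0 * ci) % p
        out[i + 1] = (out[i + 1] + ci) % p
    return out

def reconstruct_block(points, p=257):
    k = len(points)
    coeffs = [0] * k                              # a_0, ..., a_{k-1}
    for j, (xj, yj) in enumerate(points):
        basis, denom = [1], 1                     # Lagrange basis L_j(x)
        for s, (xs, _) in enumerate(points):
            if s != j:
                basis = poly_mul_linear(basis, xs, p)
                denom = (denom * (xj - xs)) % p
        scale = (yj * pow(denom, -1, p)) % p      # y_j / prod(x_j - x_s) mod p
        for i, bi in enumerate(basis):
            coeffs[i] = (coeffs[i] + scale * bi) % p
    return coeffs

# Shares of block S1 received from LSP2, LSP3 and LSP4 (x = 2, 3, 4):
print(reconstruct_block([(2, 94), (3, 162), (4, 246)]))   # -> [6, 28, 8]
```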
Now that our approach for fault tolerance has been introduced and the distribution and reconstruction processes illustrated with examples, the following subsections are devoted to some issues related to the proposed approach, namely: the variability of the transfer delay over the communication paths used between the ingress and egress routers, the multiple QoS classes of traffic over MPLS networks, packet ordering in MPLS networks, the finding of disjoint paths between the ingress and egress routers, and the applicability of the proposed approach to MPLS multicast.

3.3. Considering variable path length

One of the issues related to multi-path routing is the case of uneven path lengths. The transfer delay may differ among the LSPs used to route the MPLS packets because one LSP may be longer than another. So, for the egress router to be able to reconstruct the original IP packet successfully, it should on the one hand possess enough buffer space for storing the arriving MPLS packets and on the other hand use a timer to wait for the latest MPLS packet to arrive. The value of the timer is dictated by the slowest LSP used. To formalize this, let us consider an original traffic flow f with n subflows f_1, f_2, ..., f_n. The end-to-end delay for a subflow f_t, t = 1, ..., n, towards the egress node is denoted by d^f_{t,egress} and is calculated as follows:
d^f_{t,egress} = Σ_{(i,j) ∈ LSP_t} d_{ij},    (3)
where d_{ij} is the delay of each link (i, j) in LSP_t. Hence, the delay of the slowest subflow f_t belonging to f is:
d^f_{slowest} = max_{t=1,...,n} ( d^f_{t,egress} ).    (4)
Therefore, the timer used by the egress router to reconstruct the original IP packet should be at least equal to d^f_{slowest}. Moreover, the buffer size required at the egress should be large enough to store all the k − 1 subflows received while waiting for the slowest one, as well as any other traffic originating from other IP packets and received before the slowest subflow of f. In other words, the buffer size required for each subflow f_t is:
Fig. 6. A (2, 3) modified TSS example.
B_{f_t,egress} = ( d^f_{slowest} − d^f_{t,egress} ) · b_{f_t},    (5)
and the buffer size needed by the reconstruction process is:
B_{f,egress} = Σ_{∀ LSP_t} B_{f_t,egress},    (6)
where b_{f_t} is the arrival bit rate at the egress node for subflow f_t. Let us consider an example to further illustrate the calculation of the required buffer size and the value of the timer at the egress router. Fig. 6 shows a network topology with a modified (2, 3) TSS model. The egress node receives shares from three disjoint LSPs (LSP1, LSP2 and LSP3). In this case:
d^f_{1,egress} = Σ_{(i,j) ∈ LSP1} d_{ij} = d_{ingress,3} + d_{3,5} + d_{5,7} + d_{7,9} + d_{9,egress} = 4 + 5 + 3 + 5 + 3 = 20,
d^f_{2,egress} = Σ_{(i,j) ∈ LSP2} d_{ij} = d_{ingress,11} + d_{11,13} + d_{13,14} + d_{14,15} + d_{15,egress} = 4 + 5 + 4 + 5 + 3 = 21,
d^f_{3,egress} = Σ_{(i,j) ∈ LSP3} d_{ij} = d_{ingress,2} + d_{2,4} + d_{4,6} + d_{6,8} + d_{8,10} + d_{10,egress} = 4 + 5 + 5 + 4 + 3 + 4 = 25.
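The following small sketch reproduces these numbers and the buffer values of Table 1 from Eqs. (3)–(6); it is illustrative only, and the per-subflow arrival rate of 128 bits/ms is the value implied by Table 1 rather than a parameter stated explicitly in the text:

```python
# Illustrative check of Eqs. (3)-(6) for the (2, 3) example of Fig. 6. The link
# delays are the d_ij values read from the figure; the arrival rate of
# 128 bits/ms is the value implied by Table 1 (an assumption of this sketch).
lsp_link_delays_ms = {
    "f1": [4, 5, 3, 5, 3],          # LSP1: ingress-3-5-7-9-egress
    "f2": [4, 5, 4, 5, 3],          # LSP2: ingress-11-13-14-15-egress
    "f3": [4, 5, 5, 4, 3, 4],       # LSP3: ingress-2-4-6-8-10-egress
}
rate_bits_per_ms = 128

d = {ft: sum(delays) for ft, delays in lsp_link_delays_ms.items()}           # Eq. (3)
d_slowest = max(d.values())                                                  # Eq. (4)
buffers = {ft: (d_slowest - dt) * rate_bits_per_ms for ft, dt in d.items()}  # Eq. (5)
total_buffer = sum(buffers.values())                                         # Eq. (6)

print(d)             # {'f1': 20, 'f2': 21, 'f3': 25}
print(d_slowest)     # 25 -> egress timer value (ms)
print(buffers)       # {'f1': 640, 'f2': 512, 'f3': 0}
print(total_buffer)  # 1152 bits, as in Table 1
```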
Table 1
Buffer allocation required at the egress node.

Subflow       d^f_{t,egress} (ms)    max (ms)    Partial buffer (bits)
f1,egress     20                     25          (25 − 20) · 128 = 640
f2,egress     21                     25          (25 − 21) · 128 = 512
f3,egress     25                     25          0

Total buffer size (bits): 1152
For this example, the value of the timer to be used by the egress router should be at least equal to 25 ms, and the total buffer size therefore equals 1152 bits, as shown in Table 1. The calculation in Table 1 is only used to illustrate how to compute the buffering at the egress router for the (2, 3) modified TSS multi-path connection. In real networks, however, the bit rate can be very large (i.e., in the order of megabits or gigabits per second). In this case our approach may face a real challenge, as the required buffer size can be very large, especially when there are large length variations among the LSPs.

3.4. Considering packet ordering

As explained earlier, each IP packet creates n MPLS packets, and each MPLS packet is sent over an LSP. The generated MPLS packets are sent in the same order in which the IP packets were received by the ingress router, and therefore the MPLS packets on each LSP will also be received in order if no MPLS packet is lost due to transmission errors. To identify packets lost due to transmission errors, we propose to use sequence numbering. However, the MPLS shim header is only four bytes long, and the header format does not provide space for a packet sequence number. In our approach, we adopt the solution proposed in [28] to provide sequencing for MPLS packets: a control word (CW) is added to each MPLS packet share payload. In other words, this requires the CW to be carried as the first four bytes of the MPLS share payload, as shown in Fig. 7, where:

Flags (bits 4–7): these bits may be used for per-payload signaling.
FRG (bits 8 and 9): these bits are used when fragmenting the MPLS packet share payload.
Length (bits 10–15): when the path between the ingress and egress nodes includes an Ethernet segment, the MPLS packet shares may include padding appended by the Ethernet data link layer. The length field serves to determine the size of the padding added by the MPLS network, so that only the payload required for the reconstruction process is extracted by the egress router.
Sequence number (bits 16–31): these bits are used for ordering the MPLS packet shares.
Fig. 7. CW as part of MPLS packet payload.
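As an illustration of the CW layout described above, the following sketch packs the four fields into the 4-byte control word prepended to a share payload; it is our own sketch (the first nibble is simply set to 0 so that the payload does not start with 4 or 6), not code from the paper or from [28]:

```python
# Illustrative packing of the control word described above: first nibble 0,
# flags in bits 4-7, FRG in bits 8-9, length in bits 10-15 and a 16-bit
# sequence number in bits 16-31, prepended to the encoded share payload.
def build_cw(flags: int, frg: int, length: int, seq: int) -> bytes:
    assert flags < 16 and frg < 4 and length < 64 and seq < 65536
    word = (0 << 28) | (flags << 24) | (frg << 22) | (length << 16) | seq
    return word.to_bytes(4, "big")

encoded_share = bytes([42, 116])                  # e.g. the LSP1 payload of Fig. 4
mpls_share_payload = build_cw(0, 0, 0, seq=1) + encoded_share
print(mpls_share_payload.hex())                   # 000000012a74
```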
The use of the version field in the CW is summarized as follows. All IP packets start with a version number that is checked by LSRs performing MPLS payload inspection [28]. Therefore, to prevent the incorrect processing of packets, the MPLS packet payload must not start with the value 4 (IPv4) or the value 6 (IPv6) in the first nibble, as such payloads are assumed to carry normal IP packets. In our proposed scheme, the payload of an MPLS packet share is not an IP packet; it is an encoded payload produced by the distribution process at the ingress node. So, in order to avoid having those values at the beginning of the MPLS packet share payload, the version field has to be given a different value. The egress node, on the other hand, is responsible for checking the content of the CW in the MPLS payload, and accordingly uses the sequence number value to synchronize the order of the MPLS packet shares.

3.5. Considering multiple classes of quality of service

In networking, different traffic flows may require different kinds of treatment. Therefore, our approach should be able to consider multiple classes of QoS. To clarify this point, consider the example shown in Fig. 8. There are two types of traffic traversing the network. The first type is high priority traffic, represented by traffic flows C1 and C2, which is allocated onto three disjoint LSPs, where shares from two LSPs are required to reconstruct the original data at the egress router. In our approach, high priority traffic does not tolerate recovery delay or packet loss, and therefore cannot be pre-empted. The second type is low priority traffic, represented by traffic flow C3. This type has no stringent traffic requirements such as recovery delay or packet loss, and accordingly can be pre-empted. In other words, the amount of traffic dedicated to C3 on LSP3 can be pre-empted in favor of high priority traffic C4, which can be part of another network traffic connection. In this case, C3 can still be reconstructed at the egress node from only two LSPs, but it is not provided with protection if a link/node failure occurs on either of these two LSPs.
3.6. Finding n disjoint paths

As indicated previously, the approach we are proposing builds on multi-path routing. It assumes that n disjoint paths are available through the MPLS network being used to carry the traffic. To find these disjoint LSPs, a number of proposals have been made by researchers. For instance, Bhandari [4] has proposed an algorithm to find n > 2 disjoint paths by extending the modified Dijkstra approach that finds n = 2 disjoint paths; the n > 2 disjoint paths are obtained from n iterations of the modified Dijkstra algorithm on a graph that is modified at the end of each iteration (a simplified greedy sketch is given at the end of this subsection). Other approaches for deriving multiple disjoint paths in a network can be found in [23–26]. In case n disjoint LSPs are not available, maximally disjoint paths should be found between the ingress and egress pair, where one or more links may be shared between two or more LSPs. It is clear that when n disjoint paths are used the method guarantees full protection from failures. However, if the method builds on n maximally disjoint paths, then failures in the shared links will not be protected against unless packets are still received from at least k paths.

To exemplify the protection power of our approach, let us consider the network in Fig. 9. Suppose that a (2, 3) TSS scheme is used. Here, we need to set up 3 LSPs, and the egress router has to receive packets from at least 2 LSPs to be able to reconstruct the original IP packet. However, the topology in the figure does not offer 3 disjoint LSPs but 3 maximally disjoint LSPs; the link between LSR1 and LSR2 is shared between the second and the third LSPs. Hence, if a failure occurs on this link then the egress router will fail to reconstruct the original IP packet and the path protection is at risk. However, if a failure occurs elsewhere in the network then the egress router will successfully reconstruct the original IP packet and the path protection is guaranteed.

It is worth noting that our approach can protect against multiple path failures by increasing the number n of disjoint paths. The following gives the level of protection based on the values of n and k:

Level of protection:
  n − k = 0: path protection cannot be provided;
  n − k = 1: single path failure protection;
  n − k > 1: multiple path failures protection.    (7)

It is clear that multiple path failure protection is resource consuming. In other words, more path protection requires more LSPs to be set up between the ingress and egress routers, which means more extra bandwidth should be dedicated to the traffic being carried.
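For illustration only, the following greedy sketch computes link-disjoint LSPs by running Dijkstra repeatedly and removing the links of each path found. It is a simplification and not Bhandari's algorithm [4]; unlike Bhandari's method, it may fail to find disjoint path sets that actually exist:

```python
# Rough illustration only: a greedy way to obtain link-disjoint LSPs by running
# Dijkstra repeatedly and removing the links of each path found. This is NOT
# Bhandari's algorithm [4], which can find disjoint path sets this misses.
import heapq

def dijkstra(adj, src, dst):
    dist, prev, seen = {src: 0}, {}, set()
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in adj.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def greedy_disjoint_paths(adj, src, dst, n):
    adj = {u: dict(nbrs) for u, nbrs in adj.items()}   # work on a copy
    paths = []
    for _ in range(n):
        p = dijkstra(adj, src, dst)
        if p is None:
            break                                      # fewer than n disjoint LSPs found
        paths.append(p)
        for u, v in zip(p, p[1:]):                     # remove the links just used
            adj[u].pop(v, None)
            adj.get(v, {}).pop(u, None)
    return paths
```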
Fig. 8. Example on traffic protection with/without pre-emption.
Fig. 9. An example of a network with maximally disjoint paths.
Fortunately, the scenario of multiple simultaneous path failures is very unlikely to happen in practice.

3.7. Application of modified TSS to MPLS multicasting

In this paper we have focused primarily on applying the modified TSS in unicast environments. However, the proposed approach can be extended to MPLS multicasting. The idea behind this extension is to divide and encode the traffic over multiple disjoint trees using a modified (k, n) threshold sharing scheme. In the event of node/link failure(s) of any (n − k) trees, our approach provides fault
tolerance without introducing any recovery delay or packet loss, and with efficient utilization of network resources. To better understand the extended version of the proposed work, we show an example of the modified TSS on the well-known National Science Foundation (NSF) network topology of Fig. 10. In the first scenario, we consider a subset of egress routers (receivers) E1 = {N4, N9} which have the same source N0. For this subset, we can build three disjoint trees T1, T2, and T3 as shown in Fig. 10. At the source router (ingress N0), the original traffic f is split into three different shares based on a modified (2, 3) TSS model. Each share f1, f2, and f3 will carry an encoded traffic of an amount
Fig. 10. (a) Complete disjoint trees coverage and (b) Maximally disjoint trees coverage.
equal to half of the original traffic f and is allocated to one of the three trees T1, T2, and T3. Accordingly, each receiver in E1 should receive at least two shares from any two trees to be able to reconstruct the original traffic f. A failure in one tree (node(s) or link(s)) will not affect the reconstruction of the original traffic as long as the other two trees have no failure. The second scenario is similar to the previous one except that the subset E1 now contains more participating receivers, E1 = {N4, N9, N12}. In this case fully disjoint trees covering all receivers cannot be established, and we therefore use what we call maximally disjoint trees. Fig. 10 shows a possible establishment of maximally disjoint trees that cover E1. Notice that T1 and T2 share the link between N8 and N9. However, if a failure occurred on this link, neither N9 nor N12 would be affected, because N9 and N12 are still able to reconstruct the original traffic from {T2, T3} and {T1, T3}, respectively.

Another possible application of the modified TSS to MPLS multicast uses the group shared tree algorithm. The basic idea of the group shared tree approach can be summarized as follows: when more than one sender belongs to the same multicast group, i.e. (*, G), they can share one central point called the rendezvous point (RP). An RP is a router that acts as a meeting point between all the senders and receivers of the same group. Thus, one shared tree is built from the RP to all the receivers, allocating group state information only in the routers along the shortest path from the receivers to the RP. The same tree is valid for all the senders, because all the sources of the same group send their information to the same RP. In multicasting, PIM-SM is the most widely implemented protocol. The PIM-SM approach uses a single RP, which inherits the drawbacks of centralized networking. The drawbacks of using a single RP can be summarized as follows: relying on one RP for a multicast operation can result in a single point of failure, and traffic concentration on the RP results in longer delays and congestion.
From the previous discussion, the modified (k, n) TSS can be applied to build n multiple trees for each multicast session. In this scenario, each group shared tree has its own RP router. As a result, for a (2, 3) TSS level, a failure in one RP does not affect the reconstruction process at the receivers.

4. Performance evaluation

This section presents the performance analysis carried out to evaluate our approach. In particular, we measure and analyze the bandwidth requirement, the processing time, the recovery delay, and the packet loss of our approach.

4.1. Bandwidth analysis

In the proposed modified TSS algorithm all coefficient values are taken from the original IP packet (Section 3); therefore, the total size of the n MPLS packets for a Q-byte IP packet is:
Total n packets size T_modifiedTSS(n) = (n · Q) / k bytes    (8)
and
Redundant bandwidth required = ((n − k) · Q) / k bytes.    (9)
For single path failures, the redundant bandwidth required is only 1/k of the original IP packet size. In the original TSS scheme, the secret message is the only value supplied to the polynomial coefficients (it is assigned to coefficient a_0), while the other coefficient values are selected randomly. The total size of the n MPLS packets for a Q-byte IP packet under the original TSS is:
Total n packets size T_originalTSS(n) = n · Q bytes    (10)
The original TSS generates n shares of size Q bytes each, because the coefficients other than a_0 (a_1 and a_2 in our example) are selected to have random values and therefore carry no packet data.
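The bandwidth expressions of Eqs. (8)–(10) are straightforward to evaluate; the following sketch (illustrative only, with an arbitrary packet size of 1500 bytes) compares the modified and original TSS for a (3, 4) configuration:

```python
# Illustrative evaluation of Eqs. (8)-(10); the 1500-byte packet size is an
# arbitrary example value, not taken from the paper.
def modified_tss_total(Q, k, n):     # Eq. (8): n shares of Q/k bytes each
    return n * Q / k

def redundant_bandwidth(Q, k, n):    # Eq. (9): extra bytes beyond the original Q
    return (n - k) * Q / k

def original_tss_total(Q, n):        # Eq. (10): n shares of Q bytes each
    return n * Q

Q = 1500
print(modified_tss_total(Q, 3, 4))   # 2000.0 bytes in total for a (3, 4) scheme
print(redundant_bandwidth(Q, 3, 4))  # 500.0 bytes, i.e. 1/3 of the packet
print(original_tss_total(Q, 4))      # 6000 bytes under the original TSS
```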
Fig. 11. (1:3) Shared path protection scheme: three working traffic flows sharing one backup path.
Fig. 12. Proposed (k, n) TSS: three working traffic flows encoded into four equal shares.
By comparing Eqs. (8) and (10), it is clear that our approach is efficient in terms of bandwidth requirements.

4.1.1. Compared to the (1:N) shared protection scheme

Figs. 11 and 12 show an example of bandwidth utilization in a (1:3) shared protection scheme and in our method, respectively. Consider Fig. 11, where the ingress router in the MPLS network receives three different IP traffic flows U1, U2, and U3. For each working path W1, W2, and W3, the bandwidth required for U1, U2, and U3 is reserved, respectively. The backup path, which protects the three disjoint working paths and handles a single failure of any of the three working paths at any time, is also disjoint and reserves the bandwidth required for the largest traffic flow:

Extra Bandwidth for (1:N) scheme = Max(U_1, U_2, U_3).    (11)

On the other hand, our method presented in Fig. 12 handles the same amount of traffic but differently. The ingress router encodes each of the three traffic flows into four shares allocated to four disjoint LSPs W1, W2, W3, and We. Each LSP therefore needs a bandwidth of {U1/3, U2/3, U3/3}. The extra bandwidth needed in our method is represented by the We working LSP, which amounts to the following:

Extra Bandwidth for (k, n) TSS scheme = Average(U_1, U_2, U_3),    (12)

which is less than the extra bandwidth required by a (1:N) scheme (Eq. (11)).
4.1.2. Compared to the (1 + 1) protection scheme

In the 1 + 1 protection scheme, the backup path carries the same traffic as the working path. This requires 100% dedicated redundant bandwidth to be reserved for each working traffic flow:
Extra Bandwidth for (1 + 1) scheme = Sum(U_1, U_2, U_3).    (13)
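A small sketch (ours, not the authors') that evaluates Eqs. (11)–(13) for the traffic triples used in Table 2 makes the comparison concrete:

```python
# Illustrative comparison of the extra bandwidth of Eqs. (11)-(13) for the
# traffic triples of Table 2 (three flows and k = 3, so Average = total / 3).
def extra_bandwidth(U, k=3):
    total = sum(U)
    return {
        "1:N (Eq. 11)": max(U),       # largest working traffic flow
        "1+1 (Eq. 13)": total,        # full duplication of every flow
        "TSS (Eq. 12)": total / k,    # one extra share per k data shares
    }

for U in [(6, 6, 6), (6, 6, 9), (6, 3, 9), (3, 3, 12)]:
    extras = extra_bandwidth(U)
    pct = {name: round(100 * v / sum(U), 1) for name, v in extras.items()}
    print(U, extras, pct)
```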
Table 2 compares the redundant bandwidth required for different traffic sizes under the (1:N), (1 + 1), and our modified (k, n) TSS approaches. It is clear that our approach always requires 33.3% (= 1/3) of redundant bandwidth for a (3, 4) scheme; it is not affected by the variation of the traffic values. It also performs better than the other approaches, whose redundant bandwidth depends on the traffic values. Fig. 13 shows the effect of (k, n) on the redundant bandwidth needed to handle single failures, where n = k + 1. It is seen that the redundant bandwidth needed goes down as a higher (k, n) TSS level is used. However, the number of disjoint LSPs is usually limited by the network topology. The choice of the values of k and n therefore depends on the number of disjoint paths available.
Table 2
Redundant bandwidth required for (1:3), (1 + 1) and our (3, 4) TSS approach.

(U1, U2, U3) traffic sizes    Redundant bandwidth for (1:3)    (1 + 1)        (3, 4) TSS
6, 6, 6                       6 or 33.3%                       18 or 100%     6 or 33.3%
6, 6, 9                       9 or 43%                         21 or 100%     7 or 33.3%
6, 3, 9                       9 or 50%                         18 or 100%     6 or 33.3%
3, 3, 12                      12 or 66.6%                      18 or 100%     6 or 33.3%

4.2. Distribution and reconstruction processing time
Simulations were performed on a 3 GHz Intel Pentium 4 CPU with 1 GB of memory running the Linux operating system to measure the time taken by our modified TSS scheme to distribute the original IP packet into n MPLS packets at the ingress router and to reconstruct it at the egress router. In our simulation we used the mpz functions of the GNU multiple precision (GMP) arithmetic library. The average packet processing time (in μs) obtained for different IP packet sizes for the distribution process of a (3, 3) TSS at the ingress router is shown in Fig. 14.
Fig. 13. The effect of k multiple paths on redundant bandwidth in modified TSS where n = k + 1.
Fig. 14. Average packet processing time for distribution process using a modified (3, 3) TSS.
Fig. 15. The effect of variable (k, k) modified TSS level on packet processing time.
Fig. 16. Multi-path routing topology for a (2, 3) TSS.
Different block sizes L for the same IP packet size affect the average packet processing time in the distribution process. For larger block sizes, packet processing takes more time but is still of the order of microseconds. It was observed during the simulation that the packet processing time for reconstruction is similar to that required by the distribution process. It is also clear from the figure that the packet processing time is
much lower than the total packet transmission time and can thus be neglected. The effect of a variable (k, k) level on IP packet processing time is shown in Fig. 15 for an IP packet size of 2 Kbytes and a block size of (24 k) bytes. We conclude from the results that a variable (k, k) level has no significant effect on the IP packet processing time. Our method therefore does not impose a significant processing delay.
Fig. 17. Recovery time for different link failure locations in LSP1.
Fig. 18. Packet loss for different link failure locations in LSP1.
Fig. 19. Out-of-order packets received for different link failure locations in LSP1.
4.3. Recovery time delay, packet loss, and out-of-order packets

All existing recovery approaches aim to minimize the recovery time when switching and rerouting the traffic to a backup LSP. Unlike other approaches, recovery delay and packet loss do not arise in our approach because of the way the threshold sharing scheme works. The egress router's reconstruction process just needs to receive k MPLS packets from any k of the n LSPs. Therefore, if a failure occurs in any of the (n − k) remaining LSPs, it does not affect the reconstruction process at the egress router, and consequently no delay or packet loss is incurred in reconstructing the original IP packet. The concept of sending a fault notification message to the router responsible for rerouting the traffic to another backup path is not required in our approach. The restoration time components, which include the failure detection time, notification time, recovery operation time, and traffic restoration time [9,27], are therefore all absent in our approach. Figs. 17–19 show our simulation results compared to the other path protection mechanisms (Makam [5] and Haskin [11]), obtained on the network shown in Fig. 16 with the following parameters: link delay of 10 ms, link bandwidth of 1 Mbps, IP packet size of 200 bytes, constant bit rate (CBR) traffic at 400 Kbps, and a drop-tail queue at each node. A single failure was injected at different link locations of LSP1. Results from Fig. 17 confirm that the recovery time is zero in our approach if a link/node failure occurs at any location in LSP1. In the simulation, the egress router was able to reconstruct the original IP packets from the packets received from the other two LSPs (LSP2 and LSP3) using a (2, 3) TSS. For the other two approaches (Makam and Haskin), the recovery time was dependent on the location of the link failure. Obviously, it increases as the distance of the failure location from the ingress router increases. Figs. 18 and 19 confirm that there is no packet loss in our approach and that all packets are received in order. For the other two approaches (Makam and Haskin), the number of packets lost was dependent on the location
of the link failure. It increases as the distance of the failure location from the ingress router increases. Unlike Haskin's approach, Makam's scheme produces no packet re-ordering, since it does not reroute packets from the failed link/node before the switch-over to the backup path takes place. Hence, more packets are lost in Makam's approach than in Haskin's approach.
5. Conclusion

In this paper, we presented a mechanism to provide fault tolerance for MPLS networks that is based on multi-path routing and a modified threshold sharing scheme (TSS). An IP packet entering an MPLS ingress router is partitioned into n MPLS packets, which are then assigned to node/link disjoint LSPs across the MPLS network, where k out of n LSPs are required to reconstruct the original IP packet. Our approach is able to handle single or multiple failures, allowing up to n − k paths to fail at the same time. The simulation results show that the processing time of our mechanism at the ingress and egress routers does not impose a significant delay and is negligible compared to the packet transmission time. Our mechanism is able to provide path protection with no packet loss and without any recovery delay. Additionally, it utilizes the least amount of extra bandwidth compared to the 1:N and 1 + 1 path protection mechanisms. Our approach can therefore also be useful for time-critical applications running on networks susceptible to failures.
References

[1] E. Rosen et al., Multiprotocol Label Switching Architecture, IETF RFC 3031, 2001.
[2] L. Andersson et al., LDP Specification, IETF RFC 3036, 2001.
[3] D. Awduche, L. Berger, D. Gan, T. Li, V. Srinivasan, G. Swallow, RSVP-TE: Extensions to RSVP for LSP Tunnels, IETF RFC 3209, 2001.
[4] R. Bhandari, Survivable Networks: Algorithms for Diverse Routing, Kluwer Academic Publishers, 1999.
[5] K. Makam, C. Huang, V. Sharma, Building reliable MPLS networks using a path protection mechanism, IEEE Communications Magazine (2002) 156–162.
[6] D. Wang, G. Li, Efficient distributed solution for MPLS fast reroute, in: Proceedings of the Fourth International IFIP-TC6 Networking Conference, Waterloo, Canada, May 2005.
[7] K.-S. Sohn, Y.N. Seung, D.K. Sung, A distributed LSP scheme to reduce spare bandwidth demand in MPLS networks, IEEE Transactions on Communications 54 (7) (2006) 1277–1288.
[8] R. Doverspike, B. Wilson, Comparison of capacity efficiency of DCS network restoration routing techniques, Journal of Network and Systems Management 2 (2) (1994) 95–123.
[9] A. Autenrieth, Recovery time analysis of differentiated resilience in MPLS, in: Proceedings of DRCN/IEEE, Alberta, Canada, 2003, pp. 333–340.
[10] V. Rajeev, C.R. Muthukrishnan, Reliable backup routing in fault tolerant real-time networks, in: Proceedings of the Ninth IEEE International Conference on Networks, 2001, pp. 184–189.
[11] D. Haskin, A method for setting an alternative label switched paths to handle fast reroute, Internet Draft, 2001.
[12] L. Li, M. Buddhikot, C. Chekuri, K. Guo, Routing bandwidth guaranteed paths with local restoration in label switched networks, IEEE Journal on Selected Areas in Communications (2005) 437–449.
[13] A. Virk, R. Boutaba, Economical protection in MPLS networks, Computer Communications 29 (3) (2006) 402–408.
[14] Y. Liu, D. Tipper, P. Siripongwutikorn, Approximating optimal spare capacity allocation by successive survivable routing, IEEE/ACM Transactions on Networking 13 (1) (2005).
[15] Y. Seok, Y. Lee, N. Choi, Fault-tolerant multi-path traffic engineering for MPLS networks, in: IASTED International Conference on Communications, Internet, and Information Technology, USA, November 2003, pp. 91–101.
[16] N. Maxemchuk, N. Murray, Dispersity routing on ATM networks, in: INFOCOM'93, San Francisco, USA, 1993, pp. 347–357.
[17] A. Shamir, How to share a secret, Communications of the ACM 24 (1979).
[18] B. Schneier, Applied Cryptography, second ed., John Wiley and Sons, 1996 (Chapters 3 and 23).
[19] G.R. Blakley, Safeguarding cryptographic keys, in: Proceedings of the National Computer Conference, American Federation of Information Processing Societies 48, 1979.
[20] D. Sidhu, R. Nair, S. Abdallah, Finding disjoint paths in networks, in: Proceedings of the ACM SIGCOMM'91 Symposium, 1991, pp. 43–51.
[21] M. Blesa, C. Blum, Ant colony optimization for the maximum edge-disjoint paths problem, in: Raidl et al. (Eds.), 1st EvoCOMNET'04, Lecture Notes in Computer Science, vol. 3005, Coimbra, April 2004, pp. 160–169.
[22] W. Lou, Y. Fang, A multi-path routing approach for secure data delivery, in: MILCOM, IEEE, vol. 2, 2001, pp. 1467–1473.
[23] Y. Shiloach, A polynomial solution to the undirected two paths problem, 27 (3) (1980) 445–456.
[24] J. Park, H. Kim, H. Lim, Many-to-many disjoint path covers in a graph with faulty elements, Lecture Notes in Computer Science, vol. 3341, Springer, Berlin/Heidelberg, 2004, pp. 742–753.
[25] R. Shenai, K. Sivalingam, Hybrid survivability approaches for optical WDM mesh networks, Journal of Lightwave Technology 15 (10) (2005).
[26] M. Henning, Graphs with large paired-domination number, Journal of Combinatorial Optimization, vol. 13, Springer, Netherlands, 2007.
[27] S. Alouneh, A. Agarwal, A. En-Nouaary, A novel approach for fault tolerance in MPLS networks, in: Innovations in Information Technology, IEEE, 2006, pp. 1–5.
[28] S. Bryant, G. Swallow, L. Martini, D. McPherson, Pseudowire Emulation Edge-to-Edge (PWE3) Control Word for Use over an MPLS PSN, IETF RFC 4385, February 2006.
[29] M. Iwaki, K. Toraichi, R. Ishii, Fast polynomial interpolation for Remez exchange method, in: IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, 1993, pp. 411–414.
[30] S. Alouneh, A. En-Nouaary, A. Agarwal, A multiple LSPs approach to secure data in MPLS networks, Journal of Networks 2 (4) (2007) 51–58.
[31] R. Iraschko, M. MacGregor, W. Grover, Optimal capacity placement for path restoration in STM or ATM mesh survivable networks, IEEE/ACM Transactions on Networking 6 (3) (1998).
[32] G. Lei, C. Jin, Y. Hongfang, L. Lemin, Path-based routing provisioning with mixed shared protection in WDM mesh networks, Journal of Lightwave Technology 24 (3) (2006) 1129.
Sahel Alouneh received his B.E. in 2000 from JUST University, Jordan, and the M.Sc. degree in 2004 and the Ph.D. degree in 2008, both from Concordia University, Canada. Currently, he is an assistant professor at the German-Jordanian University. His research interests include network and software security, fault tolerance, and multicasting.
Anjali Agarwal received her B.E. degree (with Distinction) in Electronics and Communication Engineering from Delhi College of Engineering, India, in 1983, the M.Sc. degree in Electrical Engineering from the University of Calgary, Canada, in 1986, and the Ph.D. degree in Electrical and Computer Engineering from Concordia University, Canada, in 1996. Presently, she is an Associate Professor in the Department of Electrical and Computer Engineering at Concordia University, Montreal, Canada. She has both industrial and academic experience. Her current research interests are voice over data networks, real-time multimedia communication over the Internet, IP telephony, quality of service, and communication protocols.
Abdeslam En-Nouaary received an engineering degree in computer engineering, with a specialization in data communication and computer networks, from ENSIAS (École Nationale Supérieure d'Informatique et d'Analyse des Systèmes), Rabat, Morocco, in 1996, and the M.Sc. and Ph.D. degrees in Computer Science from the University of Montreal in 1998 and 2001, respectively. He is currently an assistant professor at the Electrical and Computer Engineering Department of Concordia University, Montreal, Canada. His research interests include protocol engineering, software engineering, computer security, and real-time systems.